Exploring the Ethical Implications of AI with Kavita Ganesan
Many New Zealand business leaders I've connected with recently believe that AI solutions can be seamlessly integrated without any downstream risks. They assume that "prompt engineering" and chat functionalities can solve numerous automation problems straight out of the box.
This mindset sounds like: “I don't need to evaluate or integrate anything. I simply trust the output of the AI without scrutinizing it.”
Adopting such an approach is not only dangerous but also short-sighted.
As business leaders, it's vital that we grasp the risks and potential issues associated with AI systems, allowing us to make informed decisions about employing and disseminating AI solutions.
This is true whether you're using versatile generative AI tools or specialized single-task AI models.
Yesterday I read an insightful article that thoroughly examines the ethical implications of AI in real-world scenarios: “Exploring the Ethical Implications of AI: A Closer Look at the Challenges Ahead” by Kavita Ganesan.
Kavita Ganesan is an author I hold in very high regard, ever since reading her book “The Business Case for AI: A Leader's Guide to AI Strategies, Best Practices & Real-World Applications”. I can’t recommend this book highly enough to any and all of you wanting to infuse AI solutions into your businesses.
Below, I offer a summary of Ganesan’s perspectives and guidance on the ethics of AI.
The hope is that it can help you navigate the ever-evolving AI landscape while ensuring your business reaps the benefits and mitigates risks associated with AI adoption…
Why AI ethics is relevant to business leaders
Contrary to some thinking out there, AI is not an exclusive technology for large corporations or governments.
It should now be considered a technology for businesses of every size to evolve their products, services, processes, and customer experiences.
According to a recent report by McKinsey, AI could generate up to $13 trillion of additional global economic activity by 2030.
However, AI also comes with risks and responsibilities. As Ganesan writes, "AI is not magic. It is not always right. It can be biased. It can be hacked. It can be misused."
Therefore, as business leaders, we need to be aware of the ethical implications of using AI in our businesses, and take proactive steps to ensure that our AI systems are trustworthy, fair, transparent, accountable, and respectful of human rights and values.
My 5 key takeaways
After reading Ganesan's article yesterday, I’ve taken some time to really chew on it.
The ethics of AI isn’t a sexy topic. It’s not going to light up dinner parties. It’s not even one that’s necessarily going to be welcomed easily into boardrooms. It’s niggly to think about, and doesn’t present ROI.
But as I often find in reading Kavita Ganesan’s ideas, it’s a vital and necessary topic for any modern business leader.
Here are the 5 key takeaways/themes of the ethics of AI that I was left with after processing Ganesan’s latest article:
- Privacy and Data Protection
- Bias and Discrimination
- Transparency and Explainability
- Safety and Reliability
- Human Dignity and Autonomy
Privacy and data protection
AI relies on large amounts of data to learn and make decisions. In short: any output we get from an AI succeeds or fails based on the size and quality of the data we feed it.
However, this data may contain sensitive or personal information about individuals or groups, such as their identities, preferences, behaviors, health conditions, financial status, or political opinions. Collecting, storing, processing, sharing, or selling this data without proper consent or protection may violate people's privacy rights and expose them to potential harm or discrimination.
As responsible stewards of our businesses, we should ensure that we have a clear and transparent data governance policy that complies with relevant laws and regulations (such as the General Data Protection Regulation, or GDPR, in the European Union), and that we respect the data rights of our customers, employees, partners, and suppliers. We should also implement appropriate security measures to prevent data breaches or unauthorized access to data.
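To make the data-protection point a little more concrete, here's a minimal sketch of my own (not from Ganesan's article): before free-text data ever reaches a third-party AI service, even a simple redaction pass over obvious personal identifiers reduces exposure. The patterns below are purely illustrative and nowhere near exhaustive; real PII detection needs dedicated tooling.

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Rough NZ-style phone numbers, e.g. "021 555 1234" (assumption for the example)
    "PHONE": re.compile(r"\b(?:\+?64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact Jane on jane.doe@example.com or 021 555 1234."))
```

The design choice here is to redact before transmission rather than trust the vendor's handling: it keeps the obligation inside your own governance boundary, where your policy applies.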
Bias and discrimination
AI systems may inherit or amplify human biases that exist in the data they use or the algorithms they employ.
These biases may result in unfair or inaccurate outcomes that affect individuals or groups based on their characteristics such as gender, race, ethnicity, age, religion, disability, or sexual orientation. For example, an AI system may deny someone a loan, a job opportunity, or a medical treatment based on their demographic profile or social background.
We should be asking ourselves how we ensure that we use high-quality and representative data that reflects the diversity of our target populations and markets. We should also be testing and monitoring our AI systems for potential biases or errors that may affect their performance or reliability. Inherent within this is a consideration of how to implement mechanisms for feedback and redress for those who may be adversely affected by the AI systems we’re using in our businesses.
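To show what "testing and monitoring for bias" can look like in practice, here's a minimal sketch (my illustration, not Ganesan's) of one common fairness check: comparing approval rates across groups, often called a demographic-parity or disparate-impact check. The 0.8 threshold reflects the informal "four-fifths rule" used in some employment contexts; the data and threshold are assumptions for the example.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact if any group's approval rate falls below
    `threshold` times the best-treated group's rate."""
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Assumed example data: group A approved 80% of the time, group B only 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = approval_rates(decisions)
print(rates, passes_four_fifths(rates))  # B's 0.5 < 0.8 * 0.8 -> flagged
```

A check like this is a starting point, not a verdict: a failed check prompts investigation of the data and model, which is exactly the feedback-and-redress loop described above.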
Transparency and explainability
AI systems may operate in complex or opaque ways that are not easily understandable or interpretable by humans. This may create challenges for trust, accountability, and responsibility, especially when AI systems make decisions that have significant impacts on people's lives or rights.
For example, an AI system may recommend a diagnosis, a treatment, or a legal verdict without providing sufficient justification or evidence for its reasoning.
Ideally, our businesses should provide clear and accessible information about our AI systems, such as their goals, capabilities, limitations, assumptions, and sources of data.
We should also provide explanations for how our AI systems make decisions, especially when they affect people's rights or interests.
I see this as the most difficult of all of Ganesan’s recommendations. It reflects a wider, ongoing discussion within the AI community. When AI makes its calculations and reasonings within a “black box” that isn’t understandable by humans, how do we provide explanations about the outputs of an AI tool? It’s a conundrum occupying the greatest AI minds out there, and one which I’m sure Ganesan herself would admit is a huge challenge for everyone, including us as business leaders.
And lastly, in relation to the transparency and explainability of AI systems, Ganesan recommends that we enable human oversight and intervention when necessary, to ensure that our AI systems are aligned with human values and norms.
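The human-oversight recommendation can be made concrete with a simple routing pattern: automated outputs below a confidence threshold are escalated to a person rather than actioned automatically. Everything here (the threshold, the review queue) is my own illustrative sketch, not a prescription from Ganesan's article.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tune per use case and risk appetite

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def route(self, item, prediction, confidence):
        """Auto-action confident predictions; escalate the rest to a human."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return ("auto", prediction)
        self.pending.append((item, prediction, confidence))
        return ("human_review", None)

queue = ReviewQueue()
print(queue.route("invoice-17", "approve", 0.97))  # confident -> actioned
print(queue.route("invoice-18", "approve", 0.55))  # uncertain -> escalated
```

The point of the pattern is that the human stays in the loop precisely where the system is least sure of itself, which is where the highest-impact mistakes tend to live.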
Safety and reliability
AI systems may malfunction or fail due to technical errors, human mistakes, malicious attacks, or unforeseen circumstances. These failures may cause harm or damage to people, property, or the environment.
For example, an AI system may cause a car accident, a power outage, a cyberattack, or an environmental disaster.
AIs that we run within our companies should be designed and developed with safety and reliability in mind. They should follow best practices and standards for software engineering, quality assurance, risk management, and ethical design. They should also be updated and maintained regularly to ensure their functionality and security.
I talk about this regularly in my training programs.
In a nutshell, this is about having a robust process for vetting either off-the-shelf AI solutions or bespoke AI product developers, to ensure that they take AI safety and reliability seriously, and aren’t just AI cowboys.
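One small engineering habit behind safety and reliability, which I cover in my training programs: never let an AI call be a single point of failure. Below is a sketch of my own (the model function and its behavior are hypothetical) of wrapping a model call with output validation and a safe fallback.

```python
def call_with_fallback(model_fn, payload, validate, fallback):
    """Run an AI call defensively: catch failures, reject outputs that
    fail validation, and return a safe fallback instead."""
    try:
        result = model_fn(payload)
    except Exception:
        return fallback          # model crashed, timed out, etc.
    if not validate(result):
        return fallback          # output malformed or out of bounds
    return result

# Hypothetical model that sometimes misbehaves:
def flaky_sentiment_model(text):
    if not text:
        raise ValueError("empty input")
    return 1.5 if "!!!" in text else 0.6  # scores should stay within [0, 1]

valid = lambda score: isinstance(score, float) and 0.0 <= score <= 1.0
print(call_with_fallback(flaky_sentiment_model, "great service", valid, None))  # 0.6
print(call_with_fallback(flaky_sentiment_model, "wow!!!", valid, None))         # None (invalid output)
print(call_with_fallback(flaky_sentiment_model, "", valid, None))               # None (crash caught)
```

Asking a vendor how their product behaves in the two failure branches above is a quick litmus test for whether they take reliability seriously.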
Human dignity and autonomy
The final theme Ganesan discusses in her article is that AI systems may affect the dignity and autonomy of human beings by influencing their choices, behaviors, emotions, or values.
For example, an AI system may manipulate someone's preferences, opinions, or actions through persuasive techniques or nudges. An AI system may also replace or diminish someone's skills, roles, or responsibilities.
What does this mean for us as business leaders? To my mind, it means that any AI systems we deploy should respect the dignity and autonomy of our customers, employees, partners, and suppliers. We shouldn’t use AI systems to deceive, coerce, exploit, or harm them.
But, as well as simply not causing harm, we should be aiming higher: to empower our human stakeholders with easy ways to make informed and meaningful decisions about their interactions with our AI systems.
Conclusion
If you want to learn more about AI ethical issues and how to address them in the context of New Zealand businesses, you can read the full blog post by Ganesan here.
You can also check out some of the other resources and initiatives that I’ve found, that are relevant to us from the angle of business leaders who want to learn more about ethical AI:
- The European Commission's Ethics Guidelines for Trustworthy AI.
- The OECD Principles on Artificial Intelligence.
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
- The Partnership on AI.
- The Responsible AI Toolkit.
The rapid advancement and integration of AI into various aspects of our businesses present both significant opportunities and challenges. While AI has the potential to revolutionize industries and drive growth, it also comes with inherent risks and responsibilities that cannot be overlooked. As Ganesan aptly points out, AI is not infallible; it can be biased, hacked, and misused, which can lead to unintended consequences.
As business leaders striving to harness the power of AI, it’s on us to adopt a responsible and proactive approach in its implementation. This includes understanding the ethical implications of AI and taking steps to ensure that AI systems are trustworthy, fair, transparent, and accountable. Additionally, these systems must respect human rights and values to prevent any harm to individuals or society as a whole.
If we choose to act on these principles, we can successfully navigate the complex AI landscape, mitigating potential risks and creating a more secure, ethical, and inclusive environment.
Ultimately, this will not only foster innovation and growth but also solidify the reputation of our businesses as responsible and forward-thinking companies.
Got something to add? Chime in below...