AI Vision & Leadership

Why the future for leaders is human vision plus AI precision

I posted on LinkedIn this week about a recent University of Cambridge study that showed AI consistently outperformed human CEOs (in a simulated strategic setting). 

The post drew some pretty emotive reactions, both positive and negative, in the comments and in private emails I received. 

No-one sat on the fence. This one definitely stirs things up for people.

What was it about? 

In a nutshell, the researchers built a “simulated strategic gameplay environment of the automotive industry”, within which an AI managed to outthink and outmanoeuvre experienced human players (both Cambridge business students and real-life CEOs) across various metrics. 

What did I take out of reading the study? 

That AI can excel at optimising business metrics with a precision most human leaders could only dream of. Yet when chaos struck (the study calls these unpredictable “black swan” events), the AI faltered, showing its limitations. 

To me, this juxtaposition begins a conversation along these lines… what will make for an effective CEO in an AI-first world, one that will likely demand both AI's analytical precision and genuine human intuition?

Generative AI as a strategic asset

Although not many companies are talking about it (preferring to keep their advantage quiet), I consider it a real privilege of my work that I get to watch first-mover senior leadership groups beginning to use AI as an emerging strategic asset in their boardrooms. 

Those innovators are using AI to optimise resource allocation, product-market fit, and cost management at levels of efficiency they couldn’t touch previously. 

That’s what is profiled in the University of Cambridge study… AI navigating corporate decisions with a combination of speed, depth, and accuracy that yields better outcomes than its human counterparts (albeit in a simulated gameplay environment).

Despite the ever-growing body of evidence that shows AI’s edge in many facets of strategic thinking, planning and decision making, many human CEOs remain hesitant about integrating AI into their leadership framework. 

That reluctance was explored in a recent Stanford study, which sheds light on a significant barrier to widespread AI adoption… the fear of working alongside a machine that might outperform us. 

In a test scenario where doctors diagnosed patients using AI assistance, these were the results: 

  • Human doctors made correct diagnoses 74% of the time, and it took them 565 seconds to make those diagnoses.
  • Human doctors with ChatGPT (4o) assistance made correct diagnoses 76% of the time, and it took them 519 seconds to make those diagnoses.

For those following along… doctors using ChatGPT were 2 percentage points more accurate and 46 seconds faster than doctors without AI assistance.

But here’s the kicker, and where it gets really interesting… 

ChatGPT deployed by itself with no human doctor had a higher accuracy rate (92%) than the group of doctors using ChatGPT (76%)!

So what does that suggest to us? 

That many of the doctors who had ChatGPT’s assistance were getting valuable advice from the AI, and choosing to ignore it.

Why were they reluctant to follow the AI’s advice? On that we can only speculate. 

But from what I see anecdotally in my work every day, my personal speculation is that the fear of working alongside a machine that might outperform them likely played a part for these doctors. 

It’s important to acknowledge there is a certain level of discomfort that comes when a machine starts to operate at or above a human level. I see it all the time. And I feel it myself.

To link things back to the Cambridge study that I opened this article with… I see it as feasible that this discomfort might (already, and increasingly so in the near future) be starting to hold back an entire generation of New Zealand CEOs and senior leaders from taking advantage of AI’s growing strategic capabilities.

The efficiency vs. adaptability dilemma

I thought the University of Cambridge study also highlighted a glaring issue… AI's obsession with efficiency at the expense of adaptability. 

GPT-4o, the generative model used in the research, was really effective in its optimisation-focused strategies. 

It maximised market share and reduced costs with precision. 

However, when unpredictable events, like Covid-19-style market collapses, were introduced into the gameplay scenario, the AI was too rigid to respond effectively. It couldn’t adapt quickly enough, which led to outcomes that, in the real world, would likely have cost a CEO their job.

Humans, on the other hand, have an intrinsic ability to pivot strategies in the face of uncertainty. 

The best-performing human participants in the study avoided locking themselves into rigid paths. Instead, they kept their options open, understanding the value of adaptability over maximising short-term gains. It’s the kind of intuition that (for now at least, and I hope for a long time yet) AI cannot match.

I think this is a pretty key lesson for business leaders considering their own personal AI adoption methodologies… while AI can provide amazing efficiencies, human foresight and flexibility are irreplaceable in scenarios involving unknown and unexpected challenges.


Accountability in an AI-driven boardroom

It’s a hypothetical conversation I’m hearing more and more these days… should/could/when will an AI become the CEO of a real company?

One of the biggest roadblocks to that is accountability. 

When a human CEO makes a bad decision, they answer to their board, shareholders, and even sometimes the public. 

They can be replaced, sued, or reprimanded. 

But if an AI makes a flawed strategic choice, accountability becomes murky. Who do you hold responsible — the creators of the model, the company deploying it, the individual who pressed the button? (Or no one at all?)

I raise this because it points to some significant ethical questions. 

Without the clarity of personal accountability, the risks of implementing AI at the C-suite level grow exponentially. 

My hope is that for a long time yet, any 'AI executive' remains in a reporting role, supporting human decision-makers rather than acting autonomously. 

The implications of the reverse scenario, where AI leads with humans reporting to it, remain fraught with ethical challenges that I don’t think we’re collectively prepared to handle yet.

Digital twins & an AI sandbox

Enter the concept of digital twins — a controlled, virtual replica of a company's ecosystem that allows AI experimentation. 

In an environment like this, generative AI can test hundreds/thousands/millions of hypotheses, simulate strategies, and make decisions without the real-world consequences of a misstep.

All of which can be monitored, analysed and reviewed by human leaders. 
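
To make the sandbox idea concrete, here’s a minimal sketch in Python. It assumes a deliberately toy “twin” of a single business lever (pricing); the functions, numbers, and scoring rule are all hypothetical illustrations, and a real digital twin would model far more of a company’s ecosystem.

```python
import random

# A toy "digital twin": a deliberately crude model of a single business
# lever (pricing). Every name and number here is a hypothetical
# illustration, not a real digital-twin product or vendor API.

def simulate_quarter(price, demand_base=1000, cost_per_unit=40, shock=0.0):
    """Simple revenue model: demand falls as price rises; 'shock' mimics
    a black-swan collapse in demand."""
    demand = max(0.0, demand_base * (1 - price / 200) * (1 - shock))
    return (price - cost_per_unit) * demand  # quarterly profit

def robust_profit(price, shocks=(0.0, 0.0, 0.0, 0.6)):
    """Score a pricing strategy across calm quarters AND one simulated
    collapse, so the sandbox rewards adaptability, not just efficiency."""
    return sum(simulate_quarter(price, shock=s) for s in shocks) / len(shocks)

def sandbox_search(n_hypotheses=10_000, seed=42):
    """Let the AI make its mistakes in the sandbox: test thousands of
    pricing hypotheses, then surface the best one for human review."""
    rng = random.Random(seed)
    candidates = (rng.uniform(41, 199) for _ in range(n_hypotheses))
    return max(candidates, key=robust_profit)  # a recommendation, not a decision

if __name__ == "__main__":
    price = sandbox_search()
    print(f"Sandbox recommendation: price ${price:.2f}, "
          f"expected profit ${robust_profit(price):,.0f} per quarter")
    # A human leader reviews this before anything touches the real business.
```

Note the design choice: each hypothesis is scored across calm quarters and one simulated collapse, so the sandbox rewards adaptability as well as efficiency, and its output is a recommendation for a human to review, not a decision.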

I see this as an ideal setup, given AI’s capabilities right now at the back end of 2024. 

Giving CEOs digital twins in a “sandbox” (or a “dojo”, in Silicon Valley terminology), where the AI learns and makes mistakes before making recommendations, will ultimately lead to more robust decision-making pathways, without risking company resources or reputation. 

I can really see a near-term future where companies would be considered archaic (and possibly negligent?) if they don’t use this iterative learning process: applying insights gathered in the AI sandbox to make well-calibrated, data-backed decisions in the real world. 

The myth of the “replaceable CEO”

And let’s not forget that the role of a CEO is much more than just making decisions based on data (as was strongly pointed out in some of the excellent comments on the LinkedIn post that led to this essay).

A great CEO has excellent human intuition, the ability to build relationships, and ethical foresight. 

Yes, generative AI can excel at processing data, but it still lacks EQ, a nuanced understanding of context, and the creative leaps that define exceptional human leadership.

A great CEO knows when to lean into their instinct, even if data advises otherwise. 

A great CEO builds trust within teams, navigates complex human relationships, and balances company interests with broader societal good. 

AI, in its current form, simply cannot replace these qualities. (And, personally, I hope it never does!) 

I see generative AI’s role within leadership, therefore, as an enhancement rather than a replacement for humans. 

It can/should augment human decision-making, providing the analytical horsepower for data-heavy scenarios, freeing up CEOs to focus on strategic foresight, company culture, and values-driven leadership.

The future CEO will have human vision and AI precision

I think the future of leadership lies in hybrid models where human vision, creativity, and ethical sensibilities merge (not literally, as Elon would have us do with his Neuralink) with AI’s capacity for precision and efficiency. 

CEOs who thrive in the coming years won’t be those who compete with AI but those who collaborate with it, effectively leveraging AI as a strategic partner. For now, the best use of AI remains as a complementary force… a partner, not a commander.

By automating data-heavy analyses and modelling complex scenarios, AI can allow human leaders to channel their energies into more strategic, empathetic, and creative realms — areas where humans naturally excel.

The real risk for today’s CEOs isn't the rise of AI, but rather their reluctance to embrace it as a partner. 

If we cling too tightly to the notion that leadership or intelligence is purely a human pursuit (as some of the doctors in the Stanford study mentioned above likely did), we might miss the opportunity to build a future of leadership where AI’s precision sharpens our human capabilities.

The challenge for current CEOs is not to think of AI as a threat, but as a partner that empowers their leadership. 

Go back to the Stanford study as a starting point: AI is ready to assist and exceed, but it is up to us to decide where the human touch remains essential. 

Let’s embrace AI for what it is — an ally for better decisions, and a catalyst for a more effective, human-driven future.

Excitingly, this is a challenge that I’m watching real, bona fide Kiwi business leaders start to take on, right now in 2024. 

Yes, they’re the early adopters. But they only have two arms, two legs, etc, just like the rest of us. 

I see no reason why this trend won’t continue to become more and more widespread amongst NZ business leaders… staying human in our leadership but letting AI be a trusted partner in precision. 

Got something to add? Chime in below...