I read a great quote recently from Chris Bach, co-founder of Netlify, which made me chuckle, but only in a “that’s funny because it’s worryingly true” kind of way…
"We're not fully prepared for a world in which 20-year-old summer interns are uploading thousands of pages of proprietary company financial and product information to some startup company that has just raised $2.2 million and hasn't gotten around to creating a terms of service or privacy policy."
While Bach's quote is tongue-in-cheek, I don’t think it’s too far off reality for most Kiwi businesses.
In many respects, when it comes to company data and content, generative AI is proving to be equal parts innovation and risk.
Recent findings from a global LinkedIn and Microsoft AI report show just how rapidly AI is being integrated into our work lives:
I think these stats mark the first steps of a pretty profound transformation in how we work.
When we consider the implications for data control and privacy in an organisation, though, the rapid uptick in AI adoption is a bit of a mixed blessing.
It’s like giving a high-performance sports car to a learner driver. The potential to get from A to B quickly is enormous. But without proper training and safeguards, the risk of a crash is equally significant.
On the one hand, we're seeing unprecedented levels of innovation and productivity from generative AI (the likes of which I’ve covered extensively here, here and here). On the other hand, when it comes to data privacy, AI brings some serious new challenges…
This isn’t speculation on my part.
In the past week alone I’ve spoken with at least three senior business leaders who have recently discovered, to their shock, how many of their employees are either using non-sanctioned AI tools or using their personal AI accounts to get around company guidelines.
Make no mistake, this is happening right now, all over New Zealand.
In client work, and when speaking to public groups, I recommend the following strategies to take advantage of AI's benefits while still protecting your organisation…
What’s my aim here?
It certainly isn't to stifle innovation or create a culture of fear around AI. (I actually think the media is doing a fine job of that already, but perhaps that’s a topic for another article, another time.)
Instead, my goal is to spread the message that we need to approach this new and powerful technology with a balance of enthusiasm and caution. As one participant in a client workshop I ran recently put it: "It's like we've been given a superpower. We just have to make sure we use it responsibly."
We're in a new business era. The AI genie is out of the bottle, and there's no putting it back.
But with the right approach, we can harness the technology to drive our businesses forward while safeguarding our most valuable assets: our data, our IP, the productivity of our employees and the trust of our customers.
Don't wait for perfect solutions or government regulations. Start today by auditing your AI use, educating your teams, and developing a responsible AI strategy.
I challenge you to take this step...
By the end of this week, schedule an 'AI reveal' session with your team. Ask everyone to list every AI tool they've used in the last month, with the promise of no repercussions.
You will be surprised. I guarantee it.
Then the work begins... to put in place safe, powerful ways for your people to benefit from the promise of AI, without putting your company data and IP at risk.