AI + your data: A juxtaposition of innovation and risk
I read a great quote recently from Chris Bach, co-founder of Netlify, which made me chuckle, but only in a “that’s funny because it’s worryingly true” kind of way…
"We're not fully prepared for a world in which 20-year-old summer interns are uploading thousands of pages of proprietary company financial and product information to some startup company that has just raised $2.2 million and hasn't gotten around to creating a terms of service or privacy policy."
While Bach’s quote is tongue-in-cheek, I don’t think it’s far off the reality for most Kiwi businesses.
In many respects, when it comes to company data and content, generative AI is proving to be a real juxtaposition of innovation and risk.
The AI workplace: A new reality
Recent findings from a global LinkedIn and Microsoft AI study show just how rapidly AI is being integrated into our work lives:
- 75% of employees already use AI at work
- 46% started in just the last six months
- 40% use tools that their organisations have banned
- Employees of all ages bring personal AI tools and licences to work
I think these stats mark the first steps of a profound transformation in how we work.
When it comes to an organisation’s data control and privacy, though, this rapid uptick in AI adoption is a bit of a mixed blessing.
It’s like giving a high-performance sports car to a learner driver. The potential to get from A to B at speed is enormous. But without proper training and safeguards, the risk of a crash is equally significant.
On the one hand, we’re seeing unprecedented levels of innovation and productivity from generative AI use (the likes of which I’ve covered extensively here, here and here). On the other hand, when it comes to data privacy, AI presents some serious new challenges…
- Data security: As Bach's quote alludes to, sensitive information is being fed into AI systems by employees (yes, yours!) with little regard for where that data ends up.
- Compliance issues: Many AI tools haven't been vetted by business leaders against industry regulations or local NZ data protection laws, such as the Privacy Act 2020.
- Loss of intellectual property control: Without proper data precautions, company data shared with AI tools could be used to train the underlying AI models (by OpenAI, Google, Microsoft, Anthropic and others).
This isn’t speculation on my part.
In the past week alone I’ve spoken with at least three senior business leaders who have recently discovered, to their shock, just how many of their employees are either using non-sanctioned AI tools or using personal AI accounts to get around company guidelines.
Make no mistake, this is happening right now, all over New Zealand.
Five strategies for responsible AI integration
In client work, and when speaking to public groups, I recommend the following strategies for capturing AI's benefits while taking sufficient care to protect your organisation…
- Develop comprehensive AI policies: Clear guidelines on what AI tools can be used, how, and when, are crucial. I’d go so far as to say they’re now non-negotiable for every business in NZ. These policies should be part of day-one training for all employees, from interns to executives. Done right, they’ll provide guardrails while still allowing responsible (and very powerful) use of AI tools by your employees.
- Invest in organisation-wide AI literacy: In my opinion, everyone from the C-suite to the frontline needs to understand both the benefits and the risks of AI tools. In committing to raising AI literacy, you’re committing to showing your people a responsible, safe path to (radical) efficiency and productivity gains. Conversely, letting your people loose with generative AI tools without fundamental education about their potential, opportunities, risks and threats is not only a recipe for disappointment and ineffective results, it’s also dangerous from a data privacy point of view.
- Enhance data protection measures: If you don't feel the built-in protections of off-the-shelf AI tools are sufficient for your company (and I’d argue that for most companies they are), you can consider additional safeguards. In these situations, sandboxing (though typically a tactic used only by larger enterprises) can provide an extra layer of security.
- Foster a culture of responsible innovation: I believe in encouraging the massive creativity and efficiency gains that AI offers your workforce, balanced with careful consideration of data security and ethics. It’s a real art, and it takes effort from senior leaders. Creating an environment where employees feel comfortable discussing their AI use and raising concerns is ongoing work, and it’s what will distinguish companies that take a responsible approach to AI from those that don’t.
- Implement regular AI audits: Stay ahead of potential issues by regularly assessing what AI tools your people are using, and how they’re using them. I’m careful to add here… I’m not recommending ‘policing’ your employees, but I am a massive advocate for senior leaders understanding the AI landscape within their organisation, so they can make ongoing, informed decisions about the safety of its AI use.
Charting the course forward
What’s my aim here?
It certainly isn't to stifle innovation or create a culture of fear around AI. (I actually think the media are doing a fine job of that themselves, but perhaps that’s a topic for another article, another time.)
Instead, my goal is to spread the message that we need to approach this new and powerful technology with a balance of enthusiasm and caution. As one participant in a client workshop I ran recently put it: "It's like we've been given a superpower. We just have to make sure we use it responsibly."
We're in a new business era. The AI genie is out of the bottle, and there's no putting it back.
But with the right approach, we can harness the technology to drive our businesses forward while safeguarding our most valuable assets: our data, our IP, the productivity of our employees and the trust of our customers.
Don't wait for perfect solutions or government regulations. Start today by auditing your AI use, educating your teams, and developing a responsible AI strategy.
I challenge you to take this step...
By the end of this week, schedule an 'AI reveal' session with your team. Ask everyone to list every AI tool they've used in the last month, with the promise of no repercussions.
You’ll be surprised by what you hear, I guarantee it.
Then the work begins: establishing safe, powerful ways for your people to benefit from the promise of AI, without putting your company data and IP at risk.
Got something to add? Chime in below...