We love AI. Its potential is transformative, and we’re already using it to deliver faster, deeper advice to our clients.
But like any major tech shift, AI comes with risks – especially while regulation is still catching up.
We’re not too concerned. Why? Because we’re managing those risks. The question is: are you managing AI risk within your agency?
Here’s a quick guide to help you stay on the right side of the law when using tools like ChatGPT, Claude, and others.
1. Choose the right version
If your team is using ChatGPT – are they on the Plus, Team, or Enterprise plan? Or could someone be using the Free version?
The data and privacy protections vary significantly between tiers – on the Free version, for example, your inputs may by default be used to train future models. Make sure your team is on a plan that properly safeguards client data and both your and your clients’ intellectual property.
2. Understand the End User Licence Agreement (EULA)
Don’t just tick the box – review the tool’s EULA properly before anyone starts using it. Key questions to ask:
– Who owns the outputs?
– Can your inputs be stored, reused, or shared?
– Was the training data used with the rights holders’ permission?
– Could your agency be liable if the AI generates infringing content?
Getting clear on the EULA helps reduce legal risk – especially around copyright and IP.
3. Protect confidentiality
Some tools, like meeting note-takers, retain audio data to improve their models. This can breach NDAs or client confidentiality obligations. Always check whether a tool stores data, and where that data goes.
4. Respect data protection laws
AI is not above or outside the GDPR (EU or UK). If your use of AI involves processing personal data, you must ensure that processing is GDPR-compliant. Best practice? Don’t input personal data into an AI tool without the individual’s explicit, informed consent.
5. Know where IP stands
Who owns AI-generated content? Many EULAs state that the licence holder owns the outputs. Whilst UK copyright law acknowledges that copyright can exist in computer-generated content with no human author, who is deemed the author and first owner turns on the specific facts. Even if a contract says you own the output, that ownership may not be legally enforceable. Until the law evolves, keep clear records of your prompts, outputs, and the tool’s terms to help support your position.
6. Set clear AI policies
You need two policies:
– An internal AI use policy for your team, clarifying what can and cannot be done.
– A client-facing AI statement to build trust and meet procurement expectations.
Many agencies we work with are now being asked by corporate clients to provide these.
7. Don’t overlook bias and ethics
AI tools can produce biased or discriminatory content. It’s not just a technical issue; it can damage your agency’s reputation. Keep ethics on the radar.
Final Word
By managing these legal considerations up front, your agency can use AI with confidence and stay ahead of client expectations.
If you’d like tailored advice on AI use, data protection, or IP ownership, we’re here to help.