6 October 2025 · 5 min read

While some businesses eagerly (and others hesitantly) explore how to reap the benefits of agentic AI, lawyers are deliberating over whether AI agents can – and should – enter into binding legal contracts, and, if they do, who is bound. We think it's a topic worth thinking about, and so should you.

Earlier this year the Legal Loop looked at "Who owns AI-generated content in the UK?", where you might recall we discussed how Section 9(3) of the UK Copyright, Designs and Patents Act 1988 (CDPA) gives copyright ownership to the deployer of the GenAI that created an output. Applying the same logic to contracts executed by agentic AI, the deployer of the agentic AI (even without prompting it directly) "owns" the legal obligations and rights created under any such contracts. Let's unpick this.

Agentic AI, no matter how convincing in its communications, does not have legal personhood, so it cannot independently enter into contracts: the AI agent you decide to call Phil ("AI Phil") cannot be the contracting party. However, AI Phil could potentially execute a contract on your behalf if you give AI Phil the authority to do so. This goes to intention: for a contract to be legally binding under English law, there must be an intention to create legal relations. The courts are not going to question AI Phil's intention (remember, it's not a legal person); they are going to assess whether the human user intended for AI Phil to act on their behalf.

Agentic AI is also known as autonomous AI, and herein lies the problem: if your AI agent has some degree of autonomy, it can be very hard to prove that the user intended to create legal relations in a contract the AI agent "autonomously" entered into on their behalf. By the same token, it can be hard for the user to prove they did not intend to enter into a binding contract.

This takes us back to traceability and transparency. It is now more important than ever to be able to show that clear initial instructions and boundaries were given to your AI agent, along with human oversight and approval of "decisions" made by your AI agent.

Agentic AI can be a powerful tool but with autonomy comes responsibility. Whether you’re experimenting or deploying at scale, here are seven essential tips to keep your use ethical, effective, and safe:

  1. 🧠 Think before you deploy
    Ask yourself: Do I really need an agent for this task? Consider the purpose, the complexity, and whether autonomy adds value.
  2. 🔒 Use trusted, licensed models
    Stick to commercially licensed, subscription-based AI services. They’re more likely to meet legal, ethical, and security standards.
  3. 🗣️ Set clear instructions and boundaries
    Treat your agent like a human assistant: be specific, define limits, and avoid ambiguity.
  4. 📝 Keep a record of your inputs
    Logging prompts and instructions helps with accountability, troubleshooting, and compliance.
  5. 👀 Maintain human oversight
    Autonomous doesn’t mean unsupervised. Decide which decisions require human review and stay in the loop.
  6. 📊 Monitor and evaluate regularly
    Use observability tools to track agent behaviour, detect anomalies, and ensure continued alignment with your goals.
  7. Stay grounded
    It’s still an algorithm. Don’t expect it to make you a cup of tea (or understand your sarcasm).

Your Legal Toolkit for safe, confident AI adoption.

If you want to protect yourself and your business when it comes to using AI, our AI Compliance Package gives you the legal clarity and confidence you need without drowning in jargon or red tape.

Perfect if you’re:

  • Using tools like ChatGPT, Midjourney or Claude
  • Unsure how AI affects your contracts, IP or data
  • Keen to stay compliant without slowing down your business

What’s included:

  • Discovery Call – a focused chat to uncover risks around data, IP, confidentiality, and governance (including B Corp alignment).
  • AI Tool Review – legal review of up to 3 AI tools’ licence terms.
  • Bespoke Legal Docs – tailored AI clauses, an internal AI policy, and a client-facing external policy.

From £1,500 + VAT
Keep your agency ahead of the curve – Get in touch about your AI Compliance Package today.

(Optional add-ons: impact assessments, AI training, extended reviews, and ongoing updates as laws evolve.)

Share with your network

Alan Reid

Alan brings a wealth of director-level leadership and management experience to Hybrid.
