Rogue actions, governance challenges and agency law: Unwrapping practical legal risks of agentic AI

Agentic AI is an advanced form of artificial intelligence that can act autonomously to achieve broad goals or objectives, with minimal human oversight and intervention. Unlike more traditional AI systems, which operate within pre-defined rules and tasks, agentic AI can make decisions, initiate actions and adapt to changing circumstances in pursuit of its broader goals and objectives.

For example, AI agents can enter into contracts on behalf of a company, manage supply chain logistics, handle customer service interactions and even complete online purchases.

Whilst AI agents may seem like the perfect ‘Santa’s little helper’, the rise of agentic AI introduces new practical and legal challenges for companies. These issues will crystallise further as these tools see more extensive use, but reported activity already points to some key issues that companies should consider when using agentic AI:

  1. Will the company be bound by the actions of its AI agent, even if the AI agent has not been properly appointed, or acts outside the scope of its terms of appointment or authority?
  2. Is the use of an AI agent permitted in the relevant context – e.g. is it permitted by the terms of the online platform it is interfacing with?
  3. Who is ultimately responsible if an AI agent makes a mistake?

1. Will a company be bound by the actions of its AI agent?

Despite the name, the exact legal capacity of an agentic AI tool is not always clear (and is not always the same).

Off-the-shelf agentic AI solutions are typically governed by the terms of use put forward by the supplier (or in rarer cases, negotiated between the supplier and its customer). At its heart, then, the relationship is defined and governed by contract. 

These terms differ, so the position will vary from case to case, but there are certainly arguments that some agentic AI tools are appointed with agent-like authority. This gives rise to the question of whether the law of agency may apply.

If these laws do apply, under Australian law, the actions of an agent (acting within its actual or implied authority) are binding on the principal. Conversely, a principal will not usually be bound if the agent acts outside its scope of authority.

That said, there are circumstances where, even if an agent does not have authority, a third party may be able to rely on the acts of the agent if it reasonably believes, based on the principal’s representations or conduct, that the agent has the requisite authority to act on the principal’s behalf. This is referred to as ‘apparent authority’.

These principles create interesting practical challenges in the use of agentic AI, including whether the AI agent has been properly appointed and the scope of its appointment. 

Problems with appointment

As with any agency appointment, it is critical to define the scope of the AI agent’s authority.

Take, for example, a simple e-commerce transaction executed by an AI agent in response to a general objective given by a procurement officer to ‘buy 10,000 ballpoint pens at the cheapest price available’.

This seems straightforward enough, but several things could go wrong. For example:

  1. What if the procurement officer was not authorised to give the instruction to the AI agent?
  2. What if the AI agent created an account on an e-commerce platform in order to buy the pens, without those terms being reviewed by the company’s legal team (in breach of standard operating procedure)?
  3. What if the AI agent acquired pens from a supplier in a sanctioned country, or from a supplier who does not comply with modern slavery laws, in breach of the company’s procurement rules, legal requirements and public statements?
  4. What if the AI agent located payment card details from the company’s systems that it was not supposed to use, and then loaded them into the supplier’s public website, which was not secure and had not been vetted by the company’s security team?
  5. What if the company had an exclusive purchasing arrangement in place with a stationery supplier, and the AI agent was not aware of that and executed a purchase in breach of that arrangement?

Under normal circumstances, there may be an argument that, because the procurement officer went beyond their own authority in appointing the AI agent, the AI agent acted outside the company’s authority and the company is not bound by the resulting transaction.

However, when the ‘agent’ is in effect a software tool provided by a corporate entity in accordance with a contract between the parties, and that software tool is capable at a technical level of interacting with an online platform to complete a transaction, then these arguments may not be effective.

While the law of agency may appear to offer some potential relief from unwanted transactions, it is far from clear that such laws would apply. The scenario raises important questions about how to put guardrails around the use of agentic AI tools and, where use is permitted, how to properly prompt an AI agent not only on its objective, but also on how to operate within specific guardrails (e.g. relevant company policies, procurement rules and applicable laws).
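To make the guardrails point concrete, the sketch below shows one way a company might interpose a policy check between an AI agent’s proposed transaction and its execution, rather than relying on the prompt alone to convey the rules. It is a minimal illustration only: the supplier list, sanctions list, spending limit and breach rules are entirely hypothetical, and a real deployment would draw these from the company’s procurement, sanctions-screening and delegation-of-authority systems.

```python
# Minimal, illustrative sketch: a guardrail layer that vets an AI agent's
# proposed purchase against company policy before anything is executed.
# All names, limits and rules here are hypothetical.

from dataclasses import dataclass


@dataclass
class ProposedPurchase:
    supplier: str
    supplier_country: str
    item: str
    quantity: int
    total_price_aud: float


# Hypothetical policy inputs a real deployment would source from procurement
# systems, sanctions screening and legal sign-off.
APPROVED_SUPPLIERS = {"Acme Stationery"}   # e.g. reflecting an exclusive supply arrangement
SANCTIONED_COUNTRIES = {"Examplestan"}
SPEND_LIMIT_AUD = 5_000.0                  # the officer's delegated authority


def check_guardrails(p: ProposedPurchase) -> list[str]:
    """Return a list of policy breaches; an empty list means the purchase may
    proceed to a human approver (or execute, depending on risk appetite)."""
    breaches = []
    if p.supplier not in APPROVED_SUPPLIERS:
        breaches.append(f"Supplier '{p.supplier}' is not on the approved list")
    if p.supplier_country in SANCTIONED_COUNTRIES:
        breaches.append(f"Supplier is located in sanctioned country '{p.supplier_country}'")
    if p.total_price_aud > SPEND_LIMIT_AUD:
        breaches.append(
            f"Price ${p.total_price_aud:,.2f} exceeds delegated limit ${SPEND_LIMIT_AUD:,.2f}"
        )
    return breaches


if __name__ == "__main__":
    proposal = ProposedPurchase(
        supplier="Cheap Pens Direct",
        supplier_country="Examplestan",
        item="ballpoint pens",
        quantity=10_000,
        total_price_aud=6_200.0,
    )
    breaches = check_guardrails(proposal)
    if breaches:
        print("Purchase blocked pending human review:")
        for b in breaches:
            print(" -", b)
    else:
        print("Purchase within guardrails; proceeding.")
```

The design point is that the guardrails sit outside the agent: the company does not rely on the agent having correctly internalised its policies from a prompt, and every proposed transaction is checked (and, where necessary, escalated to a human) before it binds anyone.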

Agents going rogue

Whilst guardrails could be put in place around the use of AI agents, there are other risks that companies should keep in mind. For example, what if an AI agent acts outside its intended scope and completes a purchase that the company did not intend to make? What if the AI agent accepted an offer to bundle the pens with an equal order of pencils, to maximise the discount available on the pens? Is the ‘principal’ legally bound to honour that purchase given it had not contemplated or authorised it, and even if the principal is bound, does it then have recourse against the agentic AI supplier to recover the cost of that unwanted purchase?

The answer to those questions will certainly involve an interpretation of the relevant terms of use (both between the AI agent supplier and the company, and between the company and the platform on which the purchase was made). The former will likely define responsibility for acts of the agent, and the latter will often specify that a user is responsible for all activity performed under their username or with their account, whether or not they actually authorised the activity.

However, the answers may also involve questions of apparent authority – did the third party platform provider reasonably believe that the AI agent had authority to act on behalf of the end user in completing the purchase? Or should it have been on notice that there was an AI agent acting, and taken steps to verify the intention of the purchaser sitting behind the agent?

To be honest, we don’t think there is a simple answer – certainly not a single answer.  But it does show that companies should be aware that they could be bound by the actions of an AI agent, even where the AI agent acts beyond its scope. This warrants particular consideration of the terms of service under which the agentic AI tool is supplied. 

2. What if the counterparty does not allow the use of AI agents?

It is also important to look at this issue from the other side – that is, does the counterparty to the transaction know that it is dealing with an AI agent, and does it agree to do so?

Some agentic AI systems can undertake tasks without third parties knowing that they are dealing with an AI agent. The ability of AI agents to operate ‘covertly’ was a key issue raised in a recent claim against Perplexity AI. In that case, an online marketplace provider alleges, amongst other things, that Perplexity AI’s agentic browser extension, which is designed to shop autonomously on behalf of users, covertly accessed customer accounts on that marketplace and disguised automated activity as human browsing.

Where an AI agent is acting covertly, and notwithstanding the privacy and security concerns that this brings, it may be reasonable for the third party dealing with the AI agent to assume it is dealing with a human user and, therefore, that the human user should be bound by the AI agent’s actions. This means the user is likely to be bound by transactions completed by the AI agent. However, in the context of e-commerce transactions, it is important to recognise that those third parties may also have contractual terms that prohibit the use of AI agents on their platforms. Any action by the agent (assuming there are no effective arguments about the agent not being properly appointed) could therefore constitute a breach of those terms of use by the principal, and that breach may create liability for the principal to the platform provider.
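By way of technical illustration only, transparency can be as simple as an agent declaring itself in the HTTP User-Agent header it sends, which a platform can then use to decide whether to allow the activity or demand proof of the principal’s authority. The header value and the detection heuristic below are hypothetical; real platforms rely on far richer signals (and, as the Perplexity AI allegations illustrate, a covert agent may deliberately avoid declaring itself at all).

```python
# Minimal, illustrative sketch of agent transparency over HTTP.
# The header value and detection heuristic are hypothetical examples.

import urllib.request

# A hypothetical self-identifying header an AI agent might send.
AGENT_USER_AGENT = "ExampleShoppingAgent/1.0 (automated; operator=ExampleCo)"


def fetch_as_declared_agent(url: str) -> bytes:
    """Request a page while openly declaring that the client is automated."""
    req = urllib.request.Request(url, headers={"User-Agent": AGENT_USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()


def looks_like_declared_agent(user_agent_header: str) -> bool:
    """Platform-side heuristic: flag clients that self-identify as automated,
    e.g. so the platform can require confirmation of the principal's authority."""
    return "automated" in user_agent_header.lower()


if __name__ == "__main__":
    print(looks_like_declared_agent(AGENT_USER_AGENT))            # True
    print(looks_like_declared_agent("Mozilla/5.0 (Windows NT)"))  # False
```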

3. Who is responsible if an AI agent makes a mistake?

We have considered above whether a user of an AI agent will be responsible for the valid acts of that agent. A related issue is who is responsible if the AI agent makes a mistake – the end user, or the supplier of the AI agent?

Recently, it was reported that an agentic AI tool deleted a user’s entire D drive without permission. Instead of deleting only the cache as the user had requested, the AI agent allegedly went further and wiped the drive clean, apparently having misinterpreted the command. It was certainly reported to be apologetic about the mistake, but who is ultimately responsible? Could the user recover the cost of restoring their data (and compensation for any lost business or opportunity during the downtime)?

This will likely depend on what the AI supplier’s terms say about responsibility and liability.

Many AI suppliers’ terms expressly provide that the user is solely responsible for the acts of an AI agent, and then exclude the supplier’s liability for the actions and tasks performed by the AI. In these circumstances, and depending on the actual wording of the AI supplier’s terms, it is likely that the end user would be responsible for the actions of the AI agent (even mistakes) and may have limited (if any) recourse against the supplier of the AI agent to recover any losses suffered as a result of the deletion of their D drive. Whether such terms would be ‘fair’, and whether the mistake might trigger a separate claim in negligence against the supplier of the AI agent, raises further issues under Australia’s unfair contract terms regime and negligence laws, which might be a topic for another day.

Either way, this is another example of why it is important for companies to carefully review the terms and conditions before using any agentic AI system so that the company understands the risk and liability that it assumes when using an AI agent.

Conclusion

AI agents can be a great tool to help companies to streamline workflows and processes (and make shopping for your Christmas presents less stressful). However, before using AI agents, companies should carefully consider the risks involved and put processes in place to manage and mitigate these risks as much as possible.

Key steps include defining the scope of the agent’s appointment, contextualising the agent’s role within the company’s policy requirements and legal obligations, determining the appropriate risk allocation between the company and the (supplier of the) agent, and considering risks arising from the agent’s interface with the outside world (particularly the terms that apply to the platforms the agentic AI will interact with). The use of agentic AI may raise tricky questions of agency law (and perhaps unfair terms and negligence as well), but it certainly raises important questions of policy, governance and contractual negotiation.

With that, here’s how we like to visualise our AI elf agents working this ChristmAIs…