12 Days of ChristmAIs: A TMT insight series
In years to come, we will no doubt look back at 2025 as a pivotal year in the adoption of AI. Nvidia, maker of the chips that power many of the frontier models, became the first publicly listed company to hit a USD5T valuation; ‘AI slop’ was the Macquarie Dictionary’s word of the year; and topically in Australia, just this week our Federal Government released its National AI Plan, which seeks to strengthen domestic AI capability and attract investment, support workers through skills development and AI adoption, and ensure safe, responsible AI use in line with our national values.
On the flipside, an increasing chorus of investors and pundits is calling the top of the AI bubble and highlighting the challenges of making a return on (sometimes eye-watering) AI investments.
Against that backdrop, we thought it would be interesting to explore some of the practical legal implications of integrating AI into a modern business environment. This is not focused specifically on AI governance and implementation, which would be worthy of another series on its own, but rather, is a collection of observations regarding trends and implications in various areas of law and practice.
To that end, we present you with our “12 Days of ChristmAIs”, kicking off today with some observations from our Corporate team about trends in M&A.
Happy reading, and Merry ChristmAIs!

Technology, Media and Telecommunications
The pitfalls and traps of drafting or accepting AI warranties in M&A and commercial transactions
As businesses increasingly leverage AI across their operations and the use of AI becomes ever more pervasive throughout all industries, the inclusion of AI-specific warranties and indemnities is becoming progressively more common in an M&A context and in commercial contracts generally. In this article we unwrap some of the pitfalls and traps involved when negotiating such provisions.
Defining AI – untangling the tinsel
Just like untangling last year’s Christmas lights, defining AI is trickier than it looks. Without a clear definition, warranties can become either too broad or too narrow, leaving dealmakers in a knot.
In the legal context, there is not yet an agreed or market-standard definition of what constitutes AI. When we talk about AI, we could be referring to generative AI (for example, ChatGPT) or machine learning, natural language processing or computer vision algorithms.
Without a precise definition, a seller may find itself providing a warranty that is either too broad (potentially exposing it to unknown and significant liabilities) or too narrow (providing limited or no comfort to a prospective buyer).
Given how quickly AI technologies evolve, any definition used in transactional documents should be regularly reviewed and updated, ensuring it reflects current usage and does not unintentionally widen (or narrow) the warranty package.
The diligence dilemma – a seasonal reminder
Buyers often struggle to see where and how AI is used within a business, and it can therefore be difficult to verify:
- where AI is used,
- how deeply it is embedded in operational workflows, or
- what data employees or systems feed into AI tools.
This uncertainty is magnified by shadow AI, where staff use AI platforms informally to improve efficiency, usually outside authorised channels. Many businesses have implemented AI-use policies, but these are often aspirational, difficult to monitor, and almost impossible to diligence in a traditional transaction process.
This creates a double-sided dilemma:
For buyers:
Opaque AI use makes it challenging to assess legal, operational and reputational risk. As a result, buyers may seek AI-specific warranties or indemnities to bridge the information gap. However, these provisions can inadvertently shift unknown or unquantifiable liabilities to the seller.
For sellers:
Accepting broad AI warranties may expose them to risks they cannot investigate or verify. Sellers should therefore:
- ensure AI warranties are qualified by awareness,
- disclose internal AI policies carefully, and
- negotiate robust liability caps and limitations given the uncertain nature of AI-related exposure.
Buyers, on the other hand, should raise targeted questions about AI across the business, focusing on areas where AI informs critical decisions, interacts with sensitive data, or influences customer outcomes.
The vendor black box problem – elves behind closed doors
Third-party AI vendors can feel like Santa’s elves working behind closed doors.
Even when a business understands its internal use of AI, many AI systems rely on third-party models that operate as opaque “black boxes”. These systems may:
- be trained on unknown datasets,
- have evolving or proprietary behaviour,
- be governed by restrictive licence terms, and
- provide limited visibility into their decision logic.
This creates a structural problem for warranties: a seller cannot realistically warrant the behaviour, inputs, or training data of a third-party model it does not control.
Warranties that imply visibility or control over vendor-supplied AI can therefore expose sellers to unintended strict liability for risks that sit entirely outside the organisation.
Regulation at reindeer speed – keeping the sleigh on course
AI regulation is moving as fast as Santa’s sleigh on Christmas Eve.
The regulation of AI within Australia and across the world is evolving rapidly and, consequently, the legal baseline is frequently changing.
For buyers:
There is a growing desire for assurance that the target’s AI use complies with current law and is not exposed to foreseeable regulatory change.
For sellers:
Broad AI compliance warranties may inadvertently capture:
- emerging standards,
- draft legislation, or
- future regulatory expectations that were not foreseeable when the warranty was given.
Sellers should carefully qualify these warranties by materiality, awareness and timing, ensuring the allocation of risk reflects the regulatory context as it exists at signing.
The multi-vector nature of AI risk — ornaments on a tree
Just as one branch of a Christmas tree can hold multiple ornaments, one AI system can expose a business to a variety of legal risks at the same time.
Unlike traditional software, AI introduces simultaneous risk across multiple legal domains. A single AI system may raise issues relating to:
- privacy law (by processing personal information),
- intellectual property (through the use or generation of protected content),
- discrimination law (via biased outputs),
- consumer law (through misleading or inaccurate results), and
- cybersecurity (through vulnerable model inputs or integrations).
This multi-vector risk profile means AI warranties can inadvertently function as broad, catch-all promises that cut across a range of legal regimes — often more expansive than either party anticipates. Careful drafting and tight scoping are essential to avoid unintended overlap with other warranty sets.
Conclusion – unwrapping with care
As dealmakers unwrap the First Day of ChristmAIs, take care and recognise that AI warranties present nuances and risks that don’t fit neatly into traditional warranty frameworks. The smart move this festive season is to unwrap slowly, inspect carefully, and avoid being the one left holding an AI-shaped liability you didn’t intend to take home.