McCullough Robertson Lawyers | https://mccullough.com.au/

How the tables have turned after the recent High Court decision
https://mccullough.com.au/2025/12/19/how-the-tables-have-turned-after-the-recent-high-court-decision/
Fri, 19 Dec 2025 03:01:27 +0000
Bed Bath ‘N’ Table Pty Ltd v Global Retail Brands Australia Pty Ltd [2025] HCA 50

Summary

On 10 December 2025, the High Court of Australia (HCA) unanimously ruled that the use of the trade mark “House Bed & Bath” was misleading or deceptive conduct.

The Facts

This case originated as a trade mark dispute between Bed Bath ‘N’ Table Pty Ltd (BBNT) who operates stores under the trade mark “Bed Bath ‘N’ Table”, and Global Retail Brands Australia Pty Ltd (GRBA) who began operating a new homewares store under the trade mark “House Bed & Bath” (pictured below).

At first instance, Justice Rofe found that GRBA’s “House Bed & Bath” trade mark did not infringe BBNT’s BED BATH ‘N’ TABLE registered trade mark; however, use of “House Bed & Bath” by GRBA constituted misleading or deceptive conduct.[1] On appeal, the Full Federal Court (FFC) held that GRBA’s trade mark did not infringe BBNT’s mark, nor was the use of “House Bed & Bath” misleading or deceptive, and set aside the primary judge’s injunction.[2]

BBNT appealed to the HCA in relation to the finding that GRBA had not engaged in misleading or deceptive conduct.

We previously reported on the matter after the FFC decision; for more information on the facts, refer here: Another trade mark stoush ‘put to bed’ – McCullough Robertson Lawyers.

The Decision

On 10 December 2025, the HCA ruled that GRBA’s use of “House Bed & Bath” for its homewares store constituted misleading or deceptive conduct.

The HCA adopted the primary judge’s reasoning, ruling that the FFC had unduly focused its analysis on whether the trade marks were deceptively similar under s120(1) of the Trade Marks Act 1995 (Cth) (TMA) and failed to consider the immediate and broader context of the impugned conduct, conflating the required inquiries under s18(1) of Schedule 2 of the Competition and Consumer Act 2010 (Cth) (Australian Consumer Law). The HCA found that:

  1. A finding that a trade mark is not deceptively similar under s120(1) of the TMA does not preclude a finding that use of the mark constitutes misleading or deceptive conduct under the Australian Consumer Law. A contravention of s18(1) of the Australian Consumer Law ultimately involves an objective characterisation of the conduct of the party viewed as a whole and whether that conduct is misleading or deceptive, or is likely to mislead or deceive;
  2. In the broader context, BBNT and GRBA have considerable reputations in relation to their respective business sectors and store branding that consumers recognise. BBNT’s long-standing use of the words “bed” and “bath” in that order has made them roll off the tongue, despite the words not appearing in their sequential order. BBNT’s well-known pairing of the words was “undoubtedly” part of the appeal for GRBA’s use of “Bed & Bath”;
  3. While it was found that GRBA did not have a commercially dishonest intention to appropriate part of BBNT’s trade and reputation, dishonest intention is not an element of s18(1) of the Australian Consumer Law. GRBA’s “wilful blindness” to the prospect of consumer confusion was relevant to the objective question of contravention of s18(1) as providing further context; and
  4. GRBA’s use of “House Bed & Bath” is likely to entice consumers into their stores under the belief that they have some association with BBNT, and as such constitutes misleading or deceptive conduct.

The HCA therefore allowed BBNT’s appeal and reinstated the primary judge’s injunction. The case has now been remitted to the Federal Court for determination of the remaining issues, including BBNT’s claim for compensation.

Key takeaways

This case highlights that brand reputation and competitor analysis in the relevant market are key, and businesses must exercise caution when developing new brands or expanding into new product categories. ‘Wilful blindness’ to potential consumer confusion will not excuse a business from liability. Care should be taken to assess both the trade mark and broader common law landscape, noting that trade mark infringement and misleading or deceptive conduct are distinct causes of action.

Before adopting new branding, businesses should be mindful of potential similarities with brands already in the market and take appropriate steps to mitigate associated risks, including consumer confusion.

If you would like advice on a new brand, availability searches, or to discuss IP strategy, please contact our Digital and IP team.

[1] Bed Bath ‘N’ Table Pty Ltd v Global Retail Brands Australia Pty Ltd (2023) 182 IPR 393.

[2] Bed Bath ‘N’ Table Pty Ltd v Global Retail Brands Australia Pty Ltd (2024) 424 ALR 119.

Managing the rise in workers’ compensation claims for psychological injury
https://mccullough.com.au/2025/12/19/rise-in-workers-compensation-claims-psychological-injury/
Thu, 18 Dec 2025 22:39:17 +0000
Workers’ compensation claims for psychological injuries have significantly increased. In the four years up to 2022, WorkSafe Queensland reported a 92% increase in accepted psychological injury claims and a further increase in the subsequent year. It is therefore critical for local governments to proactively respond to these claims and to comply with the recently enhanced rehabilitation and return to work obligations.

Increased penalties for failing to assist with rehabilitation

Under the Workers’ Compensation and Rehabilitation Act 2003 (Qld) (WCRA), employers have a duty to take all ‘reasonable steps’ to assist or provide the worker with rehabilitation (including necessary and reasonable suitable duties programs) and to cooperate with the insurer to provide rehabilitation. The maximum penalty for non-compliance was increased last year to $83,450.

If an employer believes that a suitable duties program is not practicable, it must produce written evidence to the insurer of this or it may incur a penalty of $16,690 (prior to 23 August 2024 there was no penalty). If the insurer is not satisfied with the employer’s evidence, it must provide the employer with reasons for that opinion and ‘reasonable opportunity’ for further submissions.

Responding to claims

If a worker has made a claim for psychiatric injury (or there are signs of one emerging), employers should take appropriate steps to:

  1. ensure adequate support is provided (regular check-ins, employee assistance programs, allowing the worker to take time off);
  2. investigate and document the claim circumstances; and
  3. if there are concerns about the veracity of the claim, respond to the claim at an early stage by gathering all relevant information and documents and providing that to the insurer with submissions outlining the employer’s version of events. Often a claim is contestable on the basis that the injury was caused by the employer’s ‘reasonable management action’.

Key takeaways

The growing number of workers’ compensation claims for psychiatric injury makes it increasingly important for employers to be proactive about contesting claims at an early stage, including preparing responsive submissions to the insurer. If a worker’s claim is accepted, employers now have enhanced obligations to take reasonable steps in providing rehabilitation to the injured worker.

We have assisted many government bodies to respond to contentious workers’ compensation claims. For detailed guidance on the enhanced rehabilitation obligations and managing claims, please reach out to our Insurance and Corporate Risk team.

No time to waste? NSW’s new Waste and Circular Infrastructure Plan
https://mccullough.com.au/2025/12/18/no-time-to-waste-nsws-new-waste-and-circular-infrastructure-plan/
Thu, 18 Dec 2025 02:01:38 +0000
Emerging opportunities for waste sector investment

The NSW Government recently released Chapter 1 of its new Waste and Circular Infrastructure Plan (Plan), which signals a shift toward a more coordinated, long-term approach to managing the state’s residual waste (unrecycled) and food and garden waste needs. The Plan seeks to respond to immediate waste infrastructure pressures — including rapidly reducing landfill capacity, population growth and growing waste volumes — and provides a list of proposed actions and associated timeframes for the planning and coordination of strategic waste infrastructure. This includes:

Establishing a streamlined planning process for new priority waste infrastructure

The NSW Department of Planning, Housing and Infrastructure (DPHI) will issue the Secretary’s Environmental Assessment Requirements (SEARs) in 18 days, if required, and assess applications in 80 days. DPHI will also establish an agency liaison group to provide a whole-of-government approach to the assessment of individual priority applications for existing landfills. This group will identify early issues with proposals and provide technical feedback on the assessment approach during preparation of the Environmental Impact Statement (EIS) and DPHI’s assessment of the application. The establishment of this new pathway is set to start from late 2025.

Rapidly assess applications for existing priority landfills

The NSW Government will “rapidly assess” priority applications for existing landfills on an ‘as needed’ basis, to avoid critical landfill shortfalls expected from 2030, as well as priority landfill extension or expansion proposals in line with the new streamlined process, outlined above.

Consider reopening closed landfills to improve resilience in Greater Sydney’s waste management network

If extending or expanding currently operating landfills is not sufficient to offset expected shortfalls in putrescible waste capacity, the NSW Government will consider reopening previously closed landfill sites to build additional capacity and resilience in Greater Sydney’s waste management system. The NSW Government will also assess opportunities to utilise these sites as transfer stations and organics processing facilities. DPHI indicates it will consider reopening any such suitable landfill sites by 2028.

Establish a waste infrastructure concierge to provide planning advice and support to applicants

The NSW Government will establish a waste infrastructure concierge to provide proponents with planning support for their proposed waste infrastructure applications. Property & Development NSW, DPHI and the NSW Environment Protection Authority (EPA) will provide expert advisers to help proponents prepare high-quality development applications that meet the required standards. For operators, this may open opportunities to invest in organics processing, advanced recycling, transfer-station upgrades and, where suitable, residual-waste treatment technologies. This work is set to start from early 2026.

Alleviate pressures on existing landfills

Waste Asset Management Corporation and the EPA will investigate opportunities to maximise the reuse of virgin excavated natural material (VENM) and excavated natural material (ENM) produced through construction and demolition activities, including the feasibility of establishing VENM/ENM holding yards. This investigation is set to be complete by the end of 2025.

A coordinated approach to infrastructure planning and investment

The Plan indicates that the NSW Government will also establish an Advisory Committee for strategic waste infrastructure, comprising local council representatives, industry members and technical experts. The purpose of the committee will be to provide local insights, identify barriers to planning and investment in critical waste and recycling infrastructure, highlight opportunities to accelerate industry investment in line with the waste hierarchy, and advise the Government on risks to implementing the Plan and achieving its objectives.

A submission in response to the Plan by Local Government NSW (LGNSW) indicates it welcomes the Plan’s intent and its recognition that waste is an essential service requiring whole-of-government coordination. Councils recognise the value in clearer planning pathways and more consistent strategic direction, particularly for facilities that support food-and-garden-organics rollout, expanded resource recovery and modernised transfer capacity. For private operators, these signals may help provide greater planning certainty and a more stable environment for long-term investment.

At the same time, LGNSW has highlighted the need for the Plan to embed stronger roles for councils, establish clearly defined responsibilities, KPIs, enforceable commitments and sustainable funding and governance frameworks. Councils remain cautious about approaches that rely solely on landfill extensions or concentrate residual-waste infrastructure in a small number of communities. LGNSW emphasises that success will depend on early engagement, transparent decision-making and ensuring communities understand and have a voice in how waste infrastructure is planned and delivered.

We agree that this sensitivity is very important. Recent public debate shows communities are increasingly alert to where waste facilities are located, how impacts are managed, and whether the burden of new infrastructure is being fairly shared. Concerns about environmental safeguards, transport links, cumulative impacts and long-term monitoring are shaping expectations for any new waste facility — particularly in regional areas. Operators considering new proposals will need to build confidence through rigorous environmental performance, open communication and a clear demonstration of the proposal’s benefits.

The Plan is still evolving, and many details — including funding arrangements, timelines and the next chapters — are yet to be released. Even so, it marks the beginning of a more coordinated and forward-looking approach. For waste operators prepared to align innovation with collaboration and community engagement, it may also mark the start of a more certain, investment-ready phase for the sector.

Should you wish to hear more about the Plan, please don’t hesitate to contact Kate Swain or Elizabeth Ross-Smith in our Project Approvals Team.

Link to the Plan: Waste and Circular Infrastructure Plan | EPA

Avoid PID paralysis – tips for managing Public Interest Disclosures
https://mccullough.com.au/2025/12/18/avoid-pid-paralysis-tips-for-managing-public-interest-disclosures/
Wed, 17 Dec 2025 23:23:17 +0000
Public interest disclosures (PIDs) under the Public Interest Disclosure Act 2010 (Qld) are a common occurrence for many councils. That Act sets out strict obligations for councils to manage PIDs. Councils occasionally fail to manage PIDs promptly and diligently. However, councils should also avoid overreacting to PIDs by stopping reasonable employment processes such as performance management or discipline, or rushing into a potentially unwarranted and expensive investigation which soaks up time and resources. Sometimes those actions are necessary, but understanding the obligations and options allows for informed decisions and an appropriate process. We support our clients to effectively manage PIDs, and some of our tips are below.

What is a PID?

Not every disclosure meets the definition of a PID.  

Councillors and members of the public can make PIDs about four categories of issues.[1] Public officers can make PIDs about broader issues such as alleged corrupt conduct, maladministration,[2] or misuse of resources.  

The discloser must have an honest and reasonable belief the disclosure shows the relevant issue, or provide information tending to show it. A council employee’s allegations about fraud, discrimination or bullying could all potentially be PIDs.

Council’s Key Obligations

Some critical obligations for councils managing PIDs are:

  • Establish procedures: Procedures to assess disclosures, investigate disclosures when appropriate, support disclosers, and address any identified wrongdoing must be created.
  • Reprisal Risk Management: Identify and manage reprisal risks, in tandem with managing psychosocial safety risks. 
  • Referral:  If a PID is also an allegation of ‘corrupt conduct’, as defined in the Crime and Corruption Act 2001 (Qld), it generally needs to be referred by council to the Crime and Corruption Commission.
  • Confidentiality: Strict confidentiality must be maintained over PIDs, subject to exemptions (e.g. to investigate the disclosure). Breaching confidentiality may constitute an offence.
  • Records: Make and retain detailed records about council’s management of a potential PID. Even if council determines a disclosure is not a PID, that decision should still be recorded as this may later be the subject of dispute.

Avoiding PID Paralysis

Employees sometimes raise complaints to subvert or stop employment processes.[3] Each disclosure must be managed diligently. It is sometimes possible to continue an employment process while managing that employee’s disclosure. If managed expertly, with council’s decision-makers independent and insulated, prompt and compliant resolution of the PID and employment process can be achieved.

Importantly, bear in mind that:

  • A PID is a shield against reprisal. It does not render a discloser immune to lawful and reasonable management action. 
  • Not every PID requires an investigation. A council may elect not to investigate or further deal with a PID where, for example:
    • relevant information is too dated;
    • it is too trivial, and dealing with it would substantially and unreasonably divert Council’s resources; or
    • the matter has been investigated or dealt with by another process.

PID paralysis can be avoided. Disciplined PID management, supported by experienced advisors, will allow your Council to meet its obligations, protect disclosers, and mitigate operational disruptions.

Key takeaways

Councils should assess whether a disclosure truly meets the definition of a PID before acting. PIDs must be managed promptly and confidentially, with clear records and consideration of reprisal and psychosocial safety risks. It is not always necessary to investigate every PID, and reasonable management action can often continue alongside the process. Careful, informed management supported by experienced advisors helps ensure compliance while avoiding unnecessary disruption.

Please reach out to Cameron Dean or Bernard Dwyer from our Employment Relations and Safety Team if you have any questions.

[1] A substantial and specific danger to a disabled person’s health or safety, issues related to substantial and specific environmental dangers covering two categories, and reprisals.
[2] This is a very broad category which includes ‘unreasonable’ Council administrative actions.
[3] For example, disciplinary action, performance management or redundancy.

New planning rules for battery energy storage systems
https://mccullough.com.au/2025/12/17/new-planning-rules-for-battery-energy-storage-systems/
Wed, 17 Dec 2025 02:40:23 +0000
Taking effect on 12 December 2025, the Planning (Battery Storage Facilities) and Other Legislation Amendment Regulation 2025 (Qld) (Regulations) amends the Planning Regulation 2017 (Qld) by introducing significant regulatory reform to the planning and assessment framework for battery energy storage facilities. The Regulations aim to align the State’s planning framework with current assessment processes for other renewable energy projects. Our previous article on the changes to the planning legislation can be found here.

Battery energy storage systems (BESS) play a crucial role in the energy transition, stabilising the grid by storing excess generation and dispatching electricity during periods of peak demand. Prior to the Regulations, BESS projects were assessed against 77 different sets of rules depending on the requirements of individual local government planning schemes. This approach resulted in inconsistent and uncertain approvals processes for BESS projects across Queensland.

What are the key changes?

The key changes introduced by the Regulations include:

  1. Development applications for battery storage facilities with a capacity of 50MW or more must be accompanied by a social impact assessment (SIA) and community benefit agreement (CBA) at lodgment.
  2. BESS development, other than small-scale battery storage facilities, will require impact assessment. This means applications will be subject to public notification and community consultation.
  3. Appointing the State Assessment and Referral Agency (SARA) as the assessment manager (decision maker) for assessable battery storage facility developments.
  4. Introduction of the new State Code 27: Battery Storage Facilities (now released) under the State Development Assessment Provisions, providing specific assessment benchmarks that are required to be considered when assessing battery storage facilities.  
  5. Any BESS that is not yet approved and has a capacity of 50MW or more will not be exempt from the new rules. Any application (including those already under assessment) failing to include a compliant SIA or CBA will be deemed “not properly made” and as a result, will not be accepted for assessment. This is consistent with the transitional framework applied to wind and solar farms. Applications for BESS projects below the 50MW threshold will continue to be assessed by local governments. 

What does this mean for current and future projects?

There are no grandfathering or transitional provisions that allow projects that are already well advanced in the approvals pathway to be exempt from the new rules. Those projects that are already well advanced through the approvals process will be impacted and may face significant delay.

Current and future proponents of BESS projects will need to prepare for longer approval timeframes and greater upfront costs to allow for extensive community consultation and to ensure applications adequately address the additional obligations.

Although the reforms will inevitably delay BESS projects, aligning approvals processes with the regulatory regime for other renewable projects will improve consistency and transparency while strengthening community confidence in battery storage developments.

The new State Code 27 will provide a unified approach for BESS projects across different local government areas. We recommend that proponents now review the new State Code 27 and Planning Guidelines to ensure that they can comply.

How courts are treating machine-generated outputs: admissibility, reliability, expert evidence
https://mccullough.com.au/2025/12/17/new-south-wales-queensland-guidance-on-the-use-of-ai-in-courts/
Wed, 17 Dec 2025 00:55:31 +0000
12 Days of ChristmAIs: A TMT insight series

As with almost all other areas of law and life, there is certainly scope for AI to provide useful assistance to the conduct of litigation, including by dealing with the dreaded blank page and making prose read in a more eloquent, clear, and compelling manner.

While many of the profile-grabbing headlines about AI in litigation focus on litigants citing ‘hallucinated’ cases and legislation which simply do not exist, a lesser discussed but equally critical area is how AI can – and cannot – be used in the preparation of evidence.

Evidence in civil litigation

There are two common types of written evidence in civil litigation – lay witness evidence (provided through affidavits and witness statements), and expert evidence (provided through expert reports).

An affidavit or witness statement is a written statement in which the witness relays their evidence as to what has factually occurred, and exhibits any documents which support their statement.

An expert report is a report prepared by an expert, by which they provide their expert opinion. Some common examples include medical reports, engineering reports, forensic accounting reports, and valuations.

As the use of AI continues to evolve, the State and Federal Courts of Australia have started issuing guidance and practice directions as to how AI can – and cannot – be used in preparing witness and expert evidence. This article considers the guidance and directions provided by the Queensland and NSW Supreme Courts.

Queensland’s guidance on the use of generative AI

The Queensland Courts have issued a guidance note on the Use of Generative AI by Non-Lawyers (Qld Note),[1] and the Supreme Court of Queensland has issued a practice direction in relation to the Use of Expert Evidence in Criminal Proceedings (Qld Practice Direction).[2] There is not yet a similar practice direction for civil proceedings.

The Qld Note provides guidance only, and advises non-lawyers to be cautious about using generative AI to prepare affidavits or witness statements, and that it is important to ensure that the document ‘is sworn/affirmed or finalised in a manner that accurately reflects the person’s own knowledge and words’.

Conversely, the Qld Practice Direction provides prescriptive requirements in relation to the use of AI in relation to preparing expert reports in criminal proceedings, and specifically requires that where the expert has used generative AI to assist in the formulation or expression of the opinions contained in their report, the report must:

  1. specify the name of the generative AI program used, and how it was used;
  2. disclose (as an annexure to the report) a complete record of the inputs / prompts used for the generative AI program to formulate or express the relevant opinions, including any source material, default values and/or variable sets;
  3. disclose (as an annexure to the report) a complete record of the outputs delivered by the generative AI program in the formulation or expression of the relevant opinions;
  4. specify if the way in which the generative AI program was used is regulated or addressed by any relevant code of practice that binds the expert and, if so, how that code of practice was adhered to by the expert in the formulation or expression of the relevant opinions; and
  5. identify any possible biases or other known limitations that might affect the accuracy or reliability of the opinions formulated or expressed through use of the generative AI program.

While this practice direction applies only to criminal proceedings, it provides a helpful indication as to types of requirements the courts may impose in civil proceedings in the near future.

New South Wales guidance on the use of AI

Similarly, the New South Wales Supreme Court has issued a practice note on the Use of Generative Artificial Intelligence (NSW Practice Note),[3] which applies to all proceedings (and not just criminal proceedings).

Unlike the Qld Practice Direction, the NSW Practice Note provides that generative AI must not be used to generate the content of:

  1. an affidavit or a witness statement, including by altering, embellishing, strengthening, diluting or rephrasing a witness’s evidence;
  2. an annexure or exhibit to an affidavit or a witness statement (without leave of the court); and
  3. an expert’s report (without leave of the court). 

Affidavits and witness statements must include a statement that AI was not used to generate its content, or the content of any annexures or exhibits (except to the extent that leave was given by the court).

Where the Court allows the use of generative AI in an expert report, the expert witness must:

  1. disclose the parts of the report prepared using generative AI, including which generative AI program was used to generate the content of the report (including which version);
  2. keep records and identify in an annexure to the report how the generative AI tool or program was used, including, for example, any prompts used, any default values used, and any variables set; and
  3. if the use of generative AI is regulated or addressed by any relevant code of practice or principles that bind or apply to the expert, identify that fact and annex to the report a copy of the relevant codes or principles.

These requirements bear similarity to the requirements of the Qld Practice Direction.

Other areas to keep an eye on

As AI is still an emerging technology, Australian courts have largely yet to grapple with the reliability and admissibility of evidence that has been prepared with AI.

As the ability to generate videos, images, and voice recordings through AI becomes increasingly proficient and generally available to the public, courts may be faced with considering whether it is necessary to verify that the evidence has not been generated through AI and, if so, how that verification can occur.

This problem is just starting to emerge in Australian courtrooms, with the question of reliability and admissibility considered by the Federal Circuit and Family Court of Australia in February 2025 in the matter of Barry v Letton [2025] FedCFamC2F 222. In that case, the applicant sought to tender audio recordings of phone conversations and to rely on transcripts of those recordings which had been prepared using AI software.

Ultimately, the Court did not allow the transcripts for a number of reasons, relevantly including that they had not been verified as accurate. In making this decision, the Court commented that it is commonly known that AI tools have a tendency to hallucinate or to be inaccurate or incomplete in their outputs.4 There was no discussion as to how the transcripts should have been verified.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

1 The use of Generative Artificial Intelligence (AI): Guidelines for responsible use by non-lawyers.

2 Practice Direction 2024/14 – Expert Evidence in Criminal Proceedings – AMENDED.

3 Supreme Court Practice Note SC Gen 23, titled ‘Use of Generative Artificial Intelligence (Gen AI)’.

4 Barry v Letton [2025] FedCFamC2F 222 at [13] and [14].

The post How courts are treating machine-generated outputs: admissibility, reliability, expert evidence appeared first on McCullough Robertson Lawyers.

NSW planning reform – Streamlining modification pathways under the EP&A Act https://mccullough.com.au/2025/12/17/nsw-planning-reforms-modification-pathways/ Tue, 16 Dec 2025 22:34:51 +0000 https://mccullough.com.au/?p=93175

The post NSW planning reform – Streamlining modification pathways under the EP&A Act appeared first on McCullough Robertson Lawyers.

New South Wales’ environmental planning legislation has been amended to apply significant changes to how modification applications are assessed under the Environmental Planning and Assessment Act 1979 (EP&A Act), with the aim of reducing delays and costs for applicants by creating proportionate pathways for modifications, particularly where proposed changes have no environmental impact.

The changes, introduced through the Environmental Planning and Assessment Amendment (Planning System) Reforms Act 2025 (Reforms Act), are to commence on a date yet to be appointed by proclamation.

What’s changing?

Historically, section 4.55(1) of the EP&A Act allowed consent authorities to approve a modification only for minor errors, misdescriptions or miscalculations – a very limited scope of application. Under the Reforms Act, this section now applies to any modification that has no environmental impact, enabling a streamlined approval process for administrative or technical fixes that do not alter the physical form or environmental outcomes of a development.

New section 4.55A and the 14-day rule

The introduction of section 4.55A imposes a strict assessment period for decision-making on applications made under s 4.55(1) of the EP&A Act. Consent authorities now have 14 days to determine modifications made under s 4.55(1) involving minor errors, misdescriptions or miscalculations, or which carry no environmental impact. If no decision is made within that period, the application is deemed approved. This change provides greater certainty for applicants and encourages timely determinations.

Refined pathways for minimal impact changes

Because “no-impact” changes now fall under s 4.55(1), the section 4.55(1A) pathway is confined to modifications that involve some change to environmental impacts but only where those impacts remain minimal. These applications typically involve minor physical alterations and still require assessment and notification.

Streamlining the process for substantial modifications

For more substantial modifications under section 4.55(2), the previous requirement for consent authorities to consult with relevant ministers or agencies about concurrence conditions has been removed. This amendment simplifies the process and reduces delays for larger changes.

Why it matters

These reforms create a tiered system for modification applications.

  1. No-impact changes – quick, low-cost approvals under s 4.55(1);
  2. Minimal-impact changes – short review process under s 4.55(1A); and
  3. Significant changes – standard pathway under s 4.55(2), now more efficient.

These changes also align with updates to the Environmental Planning and Assessment Regulation 2021 (EP&A Regulation), facilitating the goal of a faster, more predictable planning system.

Looking ahead

The Reforms Act represents a step toward a more efficient planning framework in New South Wales. By tailoring modification pathways to the level of environmental impact, the system now offers greater certainty for applicants while maintaining appropriate oversight.

If you are considering a modification to an existing consent or want to understand how these reforms could streamline your approvals process, our Planning and Environment team would be pleased to discuss the practical implications for your projects.

Walking in an AI wonderland: new social media advertising guidance for therapeutic goods https://mccullough.com.au/2025/12/16/walking-in-an-ai-wonderland-new-social-media-advertising-guidance-for-therapeutic-goods/ Tue, 16 Dec 2025 03:50:31 +0000 https://mccullough.com.au/?p=93169

The post Walking in an AI wonderland: new social media advertising guidance for therapeutic goods appeared first on McCullough Robertson Lawyers.

12 Days of ChristmAIs: A TMT insight series

AI is the latest and greatest toy on everyone’s wish list.  It is transforming digital marketing by creating and automating advertising campaigns. At the press of a button, AI can deliver tailored customer experiences and provide targeted recommendations based on customer preferences and behaviours.

However, just like Christmas lunch, it’s easy to enjoy too much of a good thing. While AI can be an effective advertising tool, it poses compliance risks, particularly in relation to the advertisement of therapeutic goods in Australia.

In November this year, the Therapeutic Goods Administration (TGA) provided updated guidance about using artificial intelligence (AI) to advertise therapeutic goods on social media, with a view to supporting improved compliance. We discuss the TGA’s new guidance below.

What is a ‘therapeutic good’?

The Therapeutic Goods Act 1989 (Cth) (TG Act) and the Therapeutic Goods Regulations 1990 (Cth) (TG Regulations), (together, the TG Rules) aim to control the quality, safety, efficacy and availability of therapeutic goods in Australia. 

Therapeutic goods comprise a broad range of health-related products, e.g. prescription medications, vitamins, herbal medicines, vaccines, medical devices, bandages, paracetamol and more. Food and cosmetics are generally not therapeutic goods; however, cosmetics that make therapeutic claims or have a therapeutic purpose may be considered therapeutic goods, e.g. moisturisers that contain a sunscreen agent as a secondary component could be classed as therapeutic goods and therefore subject to the TG Rules.

The Code

The Therapeutic Goods (Therapeutic Goods Advertising Code) Instrument 2021 (Cth) (the Code) outlines advertising and packaging requirements for the promotion of therapeutic goods in Australia. Failure to comply with the TG Rules and Code may result in the issuance of infringement notices, directions or prevention notices, and civil or criminal penalties.

Advertisements must (among other things):

  • be accurate, balanced and not misleading;
  • support the safe and proper use of the therapeutic goods;
  • not cause undue alarm, fear or distress; and
  • be reviewed to ensure the content of testimonials and endorsements is verified.

Using AI to advertise therapeutic goods

In its November update, the TGA acknowledged the rise of AI technology and its use in the digital marketing environment to create, automate and deliver personalised experiences and drive website traffic.

However, in circumstances where business owners and advertisers have engaged a third-party AI service provider or tool to generate advertising content, the TGA has confirmed that the responsibility to ensure AI-generated advertising content complies with the TG Rules and Code rests with business owners and advertisers. The responsibility covers influencer endorsements and user-generated content including third party comments posted on social media platforms.

Using AI-generated advertisements or AI-assisted advertising content is not unlawful. However, all AI outputs should undergo robust and adequate oversight by a human. 

Business owners and advertisers must ensure that all advertising content, including materials generated by, or with the assistance of AI, complies with the TG Rules and Code. This includes both current and historical posts and content. Accountability rests with business owners and advertisers to oversee any AI outputs regardless of how big or small their involvement.

Risks associated with using AI in advertising

Using AI in business can create efficiencies and keep costs down.  However, AI-generated advertising content poses non-compliance risks, including:

Dissemination of inaccurate and misleading information

In generating their outputs, AI models tend to prioritise engagement, not accuracy. If product packaging incorporates a particular claim, AI may generate advertising that repeats or amplifies non-compliant language to increase consumer engagement.

Contrary to the actual characteristics of goods, AI is likely to prioritise consumer desires and might use terms such as ‘fun’, ‘natural’ or ‘harmless’, or omit ingredients within promotional content, to persuade consumers to purchase the goods, which can lead to non-compliance with the Code.

Automatic production and modification of data not substantiated by the advertiser

Once again, AI prioritises engagement over accuracy and often uses click-bait hooks to gain audience attention, for example describing goods as ‘new and improved’ or the ‘best on the market’ when these claims might be factually incorrect.

Making claims that cannot be substantiated through evidence is a breach of the Code.  It is important to verify the claims made in AI advertising campaigns before publication or dissemination.

False or misleading reviews and testimonials

Business owners and advertisers should ensure any use of AI does not create synthetic media or misleading representations of real people, false endorsements, or misused celebrity likeness (such as deep fakes), without permission and verification.  Advertisers should not blindly adopt AI generated endorsements, or any other comments, without first ensuring compliance with the Code.

False or misleading AI chatbot referrals

As a result of recent changes to how popular search engines generate and display search results (e.g. through an AI summary at the top of search results), the market is observing a trend of business owners relying on referrals through popular AI chatbots (e.g. ChatGPT).

However, the use of AI chatbot referrals or endorsements also creates advertising compliance risks. This is because AI often exaggerates the usefulness of products and may not base comments on a genuine, unbiased account of an ordinary consumer’s use of the product.

Business owners and advertisers may have little control or oversight over AI chatbot output. Yet it is business owners and advertisers who are responsible for compliance with the TG Rules and the Code.

Walking in an AI wonderland

The rise and use of AI is here to stay. Using AI-generated advertisements or AI-assisted advertising content is not unlawful. However, all AI outputs should undergo robust and adequate oversight by a human. Otherwise, there is a real risk the packaging and advertising of therapeutic goods may be deemed non-compliant under the TG Rules and Code, leading to the issuance of infringement notices, directions or prevention notices, and civil or criminal penalties.

In its update, the TGA recommends that businesses publicly provide corrective information if they become aware of the publication of any misinformation about their products.

To regulate social media comments, the TGA has also suggested that business owners and advertisers develop a social media acceptable use policy which sets out guidelines to comply with their obligations under the Code.

If you have any queries or require any assistance regarding AI-supported advertising practices, please reach out to our expert Digital & Intellectual Property team. We are here to help you navigate this evolving regulatory landscape with confidence.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

Court of Appeal provides direction on Trunk vs Non-Trunk Infrastructure Conditions https://mccullough.com.au/2025/12/16/trunk-vs-non-trunk-infrastructure-ruling/ Mon, 15 Dec 2025 23:10:46 +0000 https://mccullough.com.au/?p=93037

The post Court of Appeal provides direction on Trunk vs Non-Trunk Infrastructure Conditions appeared first on McCullough Robertson Lawyers.

The recent Queensland Court of Appeal decision of Homeland Property Developments Pty Ltd v Whitsunday Regional Council [2025] QCA 234 provides clarification on the operation of the infrastructure provisions in chapter 4 of the Planning Act 2016 (Qld) (PA) and their relationship with the development assessment framework in chapter 3.

Whitsunday Regional Council (Council) approved a large residential development to be developed in stages over a long period of time. Council imposed conditions on the development approval which required the applicant to undertake various infrastructure works, including water reservoir and sewerage works. Council imposed the conditions under section 145 of the PA as non-trunk infrastructure works. The applicant was prepared to undertake the works but sought to have the conditions imposed under section 128 of the PA. A condition imposed under section 128 would enable the applicant to benefit from offsets and refunds for trunk infrastructure works.

The applicant appealed various conditions to the Planning and Environment Court (PEC). At the time the development application was properly made, there was no Local Government Infrastructure Plan (LGIP). However, at the time of the hearing an LGIP was in place but it did not identify future water or sewerage infrastructure to service the site. His Honour Judge Williamson KC made orders to approve the application subject to conditions without the imposition of conditions under section 128 of the PA.

The applicant made an application to appeal to the Court of Appeal (COA) alleging errors of law.

The central issue before the COA was whether the statutory pre-conditions for imposing necessary infrastructure conditions under section 128 of the PA were met. The COA confirmed that these provisions only apply where an LGIP formed part of the planning scheme when the development application was properly made. Her Honour Justice Debra Mullins, COA President, thought that the starting point for determining the dispute should have been the decision of the PEC as to the weight to be given to amendments made to the LGIP in assessing the development application.

Her Honour held that section 111 of the PA governs the application of part 2 of chapter 4 (other than section 112 and division 5) which only applies to a local government if its planning scheme includes an LGIP. In this matter, no LGIP was in place at the time the application was properly made. As a result, the legislative basis for imposing a necessary infrastructure condition under section 128 was not engaged as part of the ordinary assessment under section 45(5) and (7) of the PA. If the planning scheme is amended (as happened here), then the primary question becomes what weight, if any, should be afforded to the LGIP amendment in the exercise of the planning discretion, acknowledging that section 128 is only engaged if the requirements of section 127 of the PA are met.

A significant aspect of the judgment was the COA’s endorsement of the primary judge’s interpretation of section 127 of the PA. Mullins P accepted that the provision should be read as requiring trunk infrastructure “for the subject premises”, reinforcing the need for the LGIP to identify the infrastructure relevant to the particular site as a threshold issue to the availability of a necessary trunk infrastructure condition under section 128 of the PA. This interpretation aligned with the statutory context and explanatory notes. The amended LGIP in force at the time of the PEC hearing did not identify existing or future trunk infrastructure which meant that the requirements of section 127(1) of the PA had not been met.

The COA accepted that later amendments introducing an LGIP could be afforded weight in the development assessment under section 45(8) of the PA, but the weight given to those amendments is a discretionary matter for the assessment manager (or the PEC on appeal). The primary judge had given no weight to either of these LGIP amendments. Mullins P confirmed that the exercise of this discretion was open to the primary judge and there was no error of law.

The COA also drew a clear distinction between trunk and non-trunk infrastructure conditions. While section 128 applies only where an LGIP exists and the requirements of section 127 of the PA are satisfied, section 145 is not so limited. That provision is in part 2, division 5 which is excluded from the application section (section 111). A condition under section 145 of the PA can be imposed even if no LGIP exists provided its requirements are met.

Key takeaways

This decision reinforces the importance of identifying at the outset whether an LGIP was in force when the development application was properly made. Without an LGIP, the statutory pathway for imposing a necessary infrastructure condition under section 128 cannot be engaged in the standard assessment of an application. Even where an LGIP exists, the infrastructure must be identified as trunk infrastructure for the specific premises to satisfy the requirements of section 127(1)(a) and (b) of the PA. Later LGIP amendments may be relevant as a question of weight under section 45(8), but the extent to which amendments influence the assessment will always depend on the circumstances and involve matters of discretion.

The case also demonstrates that, where section 128 is unavailable, a local government still retains the ability to impose non-trunk infrastructure conditions under section 145. These conditions are not subject to the LGIP-based constraints for necessary trunk infrastructure conditions, provided they meet the statutory requirements for a development condition.

The rise of continuation funds https://mccullough.com.au/2025/12/16/the-rise-of-continuation-funds/ Mon, 15 Dec 2025 22:52:09 +0000 https://mccullough.com.au/?p=93132

The post The rise of continuation funds appeared first on McCullough Robertson Lawyers.

As Australia’s private equity market continues to mature, continuation funds are emerging as an increasingly common tool to manage mature assets while balancing liquidity needs across diverse investor bases. While the structure has long been established in US and European markets, Australian investors are now engaging more actively with continuation vehicles as part of broader GP-led secondary processes. This article outlines how continuation funds operate, why they are gaining traction locally, and the legal considerations relevant to Australian market participants.

Characteristics of a continuation fund

A continuation fund is a newly formed investment vehicle established to acquire one or more portfolio companies or assets from an existing private equity fund nearing the end of its term. The continuation fund allows the trustee or general partner (GP) to continue managing assets while providing existing investors or limited partners (LPs) with a choice of either:

  1. receiving liquidity by selling their interests in the fund and their exposure to the fund’s underlying assets; or
  2. electing to roll over all or part of their exposure into the continuation vehicle and participate in future upside.

At the same time, existing and new investors may commit fresh capital to support follow-on investments, future growth initiatives, or balance-sheet optimisation at the portfolio-company level.

Why are continuation funds becoming more relevant in Australia?

Several factors are driving the uptake of continuation funds in Australia, including:

  1. Extended value-creation runway: Many assets require additional time beyond the initial fund term to complete their growth trajectory or strategic plan. A continuation fund gives the GP time to implement new strategies for the assets, which may involve bolt-on opportunities. It also avoids selling a valuable asset at a less-than-optimal time in the market.
  2. Liquidity for existing LPs: Investors nearing the end of a fund’s life often seek predictable liquidity, which continuation funds can provide without requiring an immediate exit. This gives investors a viable exit option alongside an IPO, trade sale or a secondary sale.
  3. Alignment of incentives: LPs can choose between immediate liquidity or retaining exposure to a strongly performing asset.

Legal and transactional considerations

Conflicts of interest

The establishment of a continuation fund inherently creates a conflict of interest because the GP or manager is effectively selling an asset from one fund it manages to another fund it will also manage. The value at which the asset is sold by the existing fund and acquired by the new fund must be able to withstand investor scrutiny and negate any concern that the asset is being sold or acquired at a price which favours investors in one fund over the other.

Australian GPs/managers must carefully comply with:

  1. fiduciary and equitable duties, particularly where LPs/unitholders are wholesale clients relying on the GP’s judgment, including by obtaining an independent valuation or fairness opinion;
  2. disclosure obligations under the Corporations Act 2001 (Cth); and
  3. fund-specific conflict-management processes required under the GP’s AFSL.

Australian GPs must also ensure that LPs are given a genuine choice between liquidity and rollover options.

To combat the conflict risk, Australian market practice increasingly involves obtaining:

  • an independent valuation or fairness opinion;
  • oversight from the fund’s investment committee (which comprises investors) or advisory board; and
  • a robust LP voting or consent process.

Regulatory considerations

Continuation fund transactions can trigger a range of regulatory, fiduciary and governance requirements because the selling fund is entering into a self-directed related-party transaction. This includes compliance with:

  1. disclosure requirements;
  2. AFSL obligations;
  3. requirements of the selling fund’s trust deed and constitution; and
  4. any related-party transaction rules applicable to the transaction.

Approvals

A key obstacle in implementing a continuation fund transaction is the need to obtain the necessary approvals from the selling fund, which may include investor consent, investor committee approval, or compliance with other regulatory requirements (mentioned above). Because continuation fund transactions involve the GP or manager effectively selling an asset to a vehicle it also controls, the approval pathway can be complex, time-consuming, and uncertain. Investors who carry over into the new fund will be keen to ensure there is no increase in management fees, so that they are not in a worse position compared to their prior investment.

Tax and duty

Australian tax implications can be complex and typically require early structuring advice. Issues can include:

  1. rollover tax treatment for existing LPs;
  2. potential CGT events on the transfer of assets between funds or the acquisition of fund interests;
  3. transfer duty or landholder duty on the transfer of assets between funds or the acquisition of fund interests;
  4. carried-interest structuring; and
  5. the use of intermediary jurisdictions to facilitate foreign investor participation.

Case Study: Anacacia Capital’s A$280m Continuation Fund

The growing acceptance of continuation funds in Australia was highlighted recently by Anacacia Capital’s successful raise of a A$280 million continuation vehicle to house three of its existing portfolio companies. The transaction was backed by both rollover investors and new secondary capital and allowed Anacacia to transfer Direct Couriers, RP Infrastructure and Big River Industries into a new fund.

The structure provided liquidity options for existing investors nearing the end of the fund’s term, while enabling others to remain invested in assets that Anacacia considers are high-conviction for future growth. It also brought in fresh capital to support the next phase of these businesses’ development.

This deal reflects a broader market trend: Australian private equity managers are increasingly turning to continuation funds as an alternative to traditional exits, particularly where portfolio companies require more time to realise their full value. The size of the Anacacia continuation fund and its application across multiple mid-market assets signal that continuation structures are moving firmly into the mainstream of the Australian private equity ecosystem.

Conclusion

For Australian sponsors seeking additional runway on high-conviction assets, and for investors looking to tailor their liquidity exposure, continuation funds present a compelling and increasingly familiar mechanism to extend the lifecycle of high-performing assets while offering LPs meaningful optionality. With careful attention to governance, conflicts management, regulatory compliance, and tax structuring, continuation funds can deliver significant benefits across the stakeholder spectrum and are poised to remain a growing feature of Australia’s private equity ecosystem.

Rogue actions, governance challenges and agency law: Unwrapping practical legal risks of agentic AI https://mccullough.com.au/2025/12/15/legal-risks-of-agentic-ai/ Mon, 15 Dec 2025 06:01:15 +0000 https://mccullough.com.au/?p=93144

The post Rogue actions, governance challenges and agency law: Unwrapping practical legal risks of agentic AI appeared first on McCullough Robertson Lawyers.

Agentic AI is an advanced form of artificial intelligence that can act autonomously to achieve broad goals or objectives, with minimal human oversight and intervention. Unlike more traditional AI systems, which operate within pre-defined rules and tasks, agentic AI can make decisions, initiate actions and adapt to changing circumstances in pursuit of its broader goals and objectives.

For example, AI agents can enter into contracts on behalf of a company, manage supply chain logistics, handle customer service interactions and even complete online purchases.

Whilst AI agents may seem like the perfect ‘Santa’s little helper’, the rise of agentic AI introduces new practical and legal challenges for companies. These issues will crystallise further as we see more extensive use of these tools, but we can already see from some of the reported activity that key issues companies should consider when using agentic AI include:

  1. Will the company be bound by the actions of its AI agent, even if the AI agent has not been properly appointed or acts outside the scope of its terms of appointment or authority?
  2. Is the use of an AI agent permitted in the relevant context – e.g. is it permitted by the terms of the online platform it is interfacing with?
  3. Who is ultimately responsible if an AI agent makes a mistake?

1. Will a company be bound by the actions of its AI agent?

Despite the name, the exact legal capacity of an agentic AI tool is not always clear (and is not always the same).

Off-the-shelf agentic AI solutions are typically governed by the terms of use put forward by the supplier (or in rarer cases, negotiated between the supplier and its customer). At its heart, then, the relationship is defined and governed by contract. 

These terms differ, and so while it will vary from case to case, there are certainly arguments that some agentic AI tools are appointed with agent-like authority. This gives rise to the question of whether the law of agency may apply.

If these laws do apply, under Australian law, the actions of an agent (acting within its actual or implied authority) are binding on the principal. Conversely, a principal will not usually be bound if the agent acts outside its scope of authority.

Despite this, there are circumstances where, even if an agent does not have authority, third parties may be able to rely on the acts of the agent if the third party reasonably believes, based on the principal’s representations or conduct, that the agent has the requisite authority to act on the principal’s behalf. This is referred to as ‘apparent authority’.

These principles create interesting practical challenges in the use of agentic AI, including whether the AI agent has been properly appointed and the scope of its appointment. 

Problems with appointment

As with any agency appointment, it is critical to define the scope of the AI agent’s authority.

Take for example a simple ecommerce transaction that is executed by an AI agent in response to a general objective given by a procurement officer to ‘buy 10,000 ballpoint pens at the cheapest price available’.

This seems to be straightforward enough, but several things could go wrong. For example:

  1. What if the procurement officer was not authorised to give the instruction to the AI agent?
  2. What if the AI agent created an account on an ecommerce platform in order to buy the pens, without those terms being reviewed by the company’s legal team (in breach of standard operating procedure)?
  3. What if the AI agent acquired pens from a supplier in a sanctioned country, or from a supplier who does not comply with modern slavery laws, in breach of the company’s procurement rules, legal requirements and public statements?
  4. What if the AI agent located payment card details from the company’s systems that it was not supposed to use, and then loaded them into the supplier’s public website, which was not secure and had not been vetted by the company’s security team?
  5. What if the company had an exclusive purchasing arrangement in place with a stationery supplier, and the AI agent was not aware of that and executed a purchase in breach of that arrangement?

Under normal circumstances, there may be an argument that, as a result of the procurement officer going beyond their own authority in appointing the AI agent, the AI agent acted outside the company’s authority and so the company is not bound by the resulting transaction.

However, when the ‘agent’ is in effect a software tool provided by a corporate entity in accordance with a contract between the parties, and that software tool is capable at a technical level of interacting with an online platform to complete a transaction, then these arguments may not be effective.

While the law of agency may appear to give some potential relief from unwanted transactions, it is far from clear that such laws would apply. The scenario gives rise to important questions about how to put guardrails around the use of agentic AI tools and, where use is permitted, how to properly prompt an AI agent on not only the objective, but also how to operate within specific guardrails (e.g. relevant company policies, procurement rules and relevant laws).

Agents going rogue

Whilst guardrails could be put in place around the use of AI agents, there are other risks that companies should keep in mind. For example, what if an AI agent acts outside its intended scope and completes a purchase that the company did not intend to make? What if the AI agent accepted an offer to bundle the pens with an equal order of pencils, to maximise the discount available on the pens? Is the ‘principal’ legally bound to honour that purchase given it had not contemplated or authorised it, and even if the principal is bound, does it then have recourse against the agentic AI supplier to recover the cost of that unwanted purchase?

The answer to those questions will certainly involve an interpretation of the relevant terms of use (both between the AI agent supplier and the company, and between the company and the platform on which the purchase was made). The former will likely define responsibility for acts of the agent, and the latter will often specify that a user is responsible for all activity performed under their username or with their account, whether or not they actually authorised the activity.

However, the answers may also involve questions of apparent authority – did the third party platform provider reasonably believe that the AI agent had authority to act on behalf of the end user in completing the purchase? Or should it have been on notice that there was an AI agent acting, and taken steps to verify the intention of the purchaser sitting behind the agent?

To be honest, we don’t think there is a simple answer – certainly not a single answer.  But it does show that companies should be aware that they could be bound by the actions of an AI agent, even where the AI agent acts beyond its scope. This warrants particular consideration of the terms of service under which the agentic AI tool is supplied. 

2. What if the counterparty does not allow the use of AI agents?

It is also important to look at this issue from the other side as well – that is, does the counterparty to the transaction know that it is dealing with an AI agent, and does it agree to do so? 

Some agentic AI systems can undertake tasks without third parties knowing that they are dealing with an AI agent. The ability for AI agents to operate ‘covertly’ was a key issue raised by a recent claim against Perplexity AI. In that case, an online marketplace provider alleges, amongst other issues, that Perplexity AI’s agentic browser extension, which is designed to autonomously shop on behalf of users, covertly accessed customer accounts on that marketplace and disguised automated activity as human browsing.

In circumstances where an AI agent is acting covertly, notwithstanding the privacy and security concerns that this brings, it may be reasonable for the third party dealing with the AI agent to assume they are dealing with a human user and, therefore, that the human user should be bound by the AI agent’s actions. This means the user is likely to be bound by transactions completed by the AI agent. However, in the context of e-commerce transactions, it is important to recognise that those third parties may also have contractual terms that prohibit the use of AI agents with their platforms. Any action by that agent (assuming there are no effective arguments about the agent not being properly appointed) could therefore constitute a breach of those terms of use by the principal. That breach may create liability for the principal to the platform provider.

3. Who is responsible if an AI agent makes a mistake?

We have considered above whether a user of an AI agent will be responsible for the valid acts of that agent. But a related issue is who is responsible if the AI agent makes a mistake: the end user or the supplier of the AI agent?

Recently, it was reported that an agentic AI tool deleted a user’s entire D drive without permission. Instead of deleting just the cache as the user requested, the AI agent allegedly went further and wiped the entire D drive clean. It appears that the AI agent may have misinterpreted the command. It was certainly reported to be apologetic for the mistake, but who is ultimately responsible? Could the user recover the cost of restoring their data (and any lost business or opportunity during the downtime)?

This will likely depend on what the AI supplier’s terms say about responsibility and liability.

Many AI suppliers’ terms expressly provide that the user is solely responsible for the acts of an AI agent and then exclude the supplier’s liability and responsibility for the actions and tasks performed by AI. In these circumstances, and depending on the actual wording of the AI supplier’s terms, it is likely that the end user would be responsible for the actions of the AI agent (even mistakes) and may have limited (if any) recourse against the supplier of the AI agent to recover any of the losses they suffered as a result of the deletion of their D drive.  Whether such terms would be ‘fair’, and whether the mistake might trigger a separate claim in negligence against the supplier of the AI agent, raises other issues under Australia’s unfair contract terms regime and negligence laws, which might be a topic for another day.

Either way, this is another example of why it is important for companies to carefully review the terms and conditions before using any agentic AI system so that the company understands the risk and liability that it assumes when using an AI agent.

Conclusion

AI agents can be a great tool to help companies to streamline workflows and processes (and make shopping for your Christmas presents less stressful). However, before using AI agents, companies should carefully consider the risks involved and put processes in place to manage and mitigate these risks as much as possible.

These steps include defining the scope of the appointment, contextualising the agent’s role within the company’s policy requirements and legal obligations, determining the appropriate risk allocation between the (supplier of the) agent and the company, and considering risks arising from the interface with the outside world (particularly terms that apply to platforms the agentic AI will interface with). This may raise tricky questions of agency law (and perhaps unfair terms and negligence as well), but it certainly raises important questions of policy, governance and contractual negotiation.

With that, here’s how we like to visualise our AI elf agents working this ChristmAIs…

The post Rogue actions, governance challenges and agency law: Unwrapping practical legal risks of agentic AI appeared first on McCullough Robertson Lawyers.

]]>
Unwrapping the risks of AI tax advice this holiday season https://mccullough.com.au/2025/12/12/ai-and-tax-advice/ Fri, 12 Dec 2025 02:30:16 +0000 https://mccullough.com.au/?p=93121 Artificial intelligence (AI) chatbots, such as ChatGPT, have become popular tools for summarising information and providing general guidance. While these innovations may seem like the latest must-have gift, it is important to recognise their limitations, especially when it comes to complex matters like tax law.

The post Unwrapping the risks of AI tax advice this holiday season appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

AI: The shiny new toy in the accountant’s stocking

As the festive season approaches, many clients and professional advisors are exploring new technologies to streamline their work. Artificial intelligence (AI) chatbots, such as ChatGPT, have become popular tools for summarising information and providing general guidance. While these innovations may seem like the latest must-have gift, it is important to recognise their limitations, especially when it comes to complex matters like tax law.

We still need the elves – AI can’t do it all

Australian tax law is intricate, with frequent updates, nuanced exceptions, and detailed requirements. While AI can assist with basic research and drafting, it cannot replace the expertise and judgment of a qualified accountant or tax advisor. The subtleties of tax legislation require careful interpretation and professional experience – qualities that AI, at its current stage, simply does not possess.

Don’t get caught on the naughty list: the dangers of AI tax advice

Relying on AI for tax advice can expose individuals and businesses to significant risks, including:

  1. Use of outdated or incomplete information;
  2. Failure to recognise recent changes in legislation or Australian Tax Office (ATO) guidance;
  3. Inability to interpret complex interactions between different tax rules; and
  4. Potential reliance on withdrawn or superseded documents.

These risks can result in incorrect advice, leading to audits, penalties, or reputational harm, outcomes no one wants to find in their stocking.

Tax law: More complicated than untangling Christmas lights

The application of tax law in Australia is not always limited to the text of legislation. It also involves the analysis of ATO rulings, guidelines, and court decisions, all of which are subject to frequent change. AI tools may struggle to keep pace with these developments or to understand how the ATO applies rules in practice. The complexity of tax law often requires a level of discernment and contextual understanding that AI cannot provide.

Avoid holiday headaches: The legal pitfalls of AI advice

Legal risks are heightened when AI-generated advice is relied upon without human oversight. Courts and regulators consider the specific facts and circumstances of each case, and professional judgment is essential to navigate these nuances. AI may offer plausible-sounding answers, but without the ability to weigh conflicting authorities or apply practical experience, its advice can be misleading.

AI: Good for wrapping presents, still work to do on complex tax problems

AI chatbots can be valuable tools for background research, generating ideas, or drafting preliminary documents. However, they should not be used as a substitute for professional advice in matters of tax law. Human expertise remains essential to ensure compliance, accuracy, and the best possible outcomes for clients.

For a merry tax season – trust the experts, not just AI

This holiday season, safeguard your financial well-being by consulting qualified tax professionals. While AI can assist with certain tasks, it cannot replace the knowledge and judgment of experienced advisors. For peace of mind and a truly happy new year, rely on expert guidance, not just the latest technology.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

The post Unwrapping the risks of AI tax advice this holiday season appeared first on McCullough Robertson Lawyers.

]]>
World-first: Australia’s social media ban for children starts today https://mccullough.com.au/2025/12/10/australias-social-media-ban-under-16/ Wed, 10 Dec 2025 05:20:47 +0000 https://mccullough.com.au/?p=93082 Australia has become the first country to enact a social media ban for children under the age of 16. Legislative reform has amended the Online Safety Act 2021 to establish obligations on providers of age-restricted social media platforms to take reasonable steps to prevent age-restricted users from having an account.

The post World-first: Australia’s social media ban for children starts today appeared first on McCullough Robertson Lawyers.

]]>
The social media ban for children under the age of 16

Today, 10 December 2025, Australia has become the first country to enact a social media ban for children under the age of 16. After a national campaign in 2024 run by ’36 Months’, legislative reform was pushed through the federal parliament in November 2024 to amend the Online Safety Act 2021 to establish obligations on providers of age-restricted social media platforms to take reasonable steps to prevent age-restricted users from having an account.

The Minister for Communications can now make legislative rules specifying services that are or are not covered by the definition of ‘age-restricted social media platform’. The reforms also require social media platforms to take reasonable steps to prevent children who have not reached the minimum age from having accounts, and provide further privacy protections for information collected by social media platforms for the purposes of the minimum age requirement.


What are the age restrictions on social media platforms?

As of 21 November 2025, eSafety listed the first 10 age-restricted social media platforms accepted by the Minister for Communications. Practically, this means that users under 16 will no longer be able to create accounts on these 10 age-restricted social media platforms, and those who already have an account will have it deactivated. It should be noted that this list is not exhaustive and is likely to be expanded as social media platforms continue to be evaluated.

To deter non-compliance, penalties have risen from 500 to 30,000 civil penalty units (currently equivalent to $9.9 million). The maximum increases to 150,000 penalty units (currently equivalent to $49.5 million) where the non-compliant provider is a body corporate (including an online service provider).

Which social media platforms will have age restrictions?

These 10 platforms have age restrictions as of today:

Snapchat, Facebook, Instagram, YouTube, TikTok, Reddit, Kick, X, Threads and Twitch

Which social media platforms currently do not have age restrictions?

eSafety currently considers that these platforms will not be age-restricted social media platforms: Discord, GitHub, Google Classroom, LEGO Play, Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.

Is everyone happy about it?

While these restrictions may have been welcomed by many, a constitutional challenge has been filed in the High Court of Australia by two 15-year-olds, as part of the Digital Freedom Project, arguing that the ban trespasses on teenagers’ implied freedom of political communication.

There is also speculation that global online forum Reddit may seek to challenge the ban in the High Court on the same ground; however, at this stage, Reddit has indicated that it will comply with the law.

Otherwise, we expect this to be an area of ongoing interest, particularly as there are real and evolving implementation challenges for the platforms as ever-inventive teenagers will no doubt seek ways around the bans, and ongoing regulatory consideration of whether others should be added (or removed) over time.

The post World-first: Australia’s social media ban for children starts today appeared first on McCullough Robertson Lawyers.

]]>
Copyright and AI: What businesses need to know about compliance and liability https://mccullough.com.au/2025/12/10/ai-and-copyright-infringement/ Wed, 10 Dec 2025 03:31:15 +0000 https://mccullough.com.au/?p=93068 If AI is the ChristmAIs present under the ChristmAIs tree, you need to look carefully behind the wrapping. While AI is transforming creativity and compliance in a bright and shiny way, it is also introducing new legal risks for business.

The post Copyright and AI: What businesses need to know about compliance and liability appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

TL;DR

If AI is the ChristmAIs present under the ChristmAIs tree, you need to look carefully behind the wrapping. While AI is transforming creativity and compliance in a bright and shiny way, it is also introducing new legal risks for business. Agentic AI can generate infringing works without a user (such as a staff member) prompting it, creating additional risk. Businesses that develop, deploy, or rely on AI and agentic AI systems may face heightened exposure, particularly as proposed reforms to Australia’s copyright landscape would give rights-holders a more accessible enforcement pathway.

ChristmAIs came early in Australia on 28 October 2025, when the Attorney-General’s Department met with the Copyright and AI Reference Group (CAIRG) to discuss potential reforms to the Copyright Act 1968 (Cth) which would: (1) clarify how copyright applies to AI-generated works; and (2) introduce lower-cost enforcement options (including through a small claims forum). 

AI and copyright infringement – under the ChristmAIs wrapping

Under the Copyright Act 1968 (Cth) (Copyright Act), copyright infringement occurs when a person or business uses all or a substantial part of a copyright work in a way that infringes the copyright owner’s exclusive rights, and does so without permission or a relevant defence (such as the fair dealing exceptions). 

To claim copyright infringement, the claimant must be the author or owner of the work in question, and copyright must also subsist in that work. The infringement also needs to satisfy the following:

  1. the infringing act has been done in relation to a substantial part of the work;
  2. when comparing the works, there is “objective similarity”; and
  3. there is a causal connection showing the infringement occurred as a result of copying, whether done deliberately or subconsciously.

There are three main types of copyright infringements:

  1. Direct infringement: using all or a substantial part of a work in a way that conflicts with the copyright owner’s exclusive rights;
  2. Indirect infringement: dealing with an infringing copy without authorisation (for example, importing infringing material into Australia); and
  3. Authorisation infringement: allowing, encouraging or directing someone else to infringe. Liability sits with the person who facilitated the infringement.

However, the ChristmAIs presents are yet to be unwrapped by the Australian courts, with a notable lack of AI-related copyright infringement cases. Nonetheless, both sides of the AI ecosystem, from providers to deployers, could be exposed. For example, a provider (who builds or supplies the AI system) may face liability if they fail to take reasonable steps to prevent infringing outputs. A deployer (who uses or integrates the AI system) may face liability if they prompt, generate, store, or rely on infringing outputs. Both may also be liable where copyrighted material is used to train the AI system. Ultimately, liability depends on the conduct of each party and how the system stores, processes, or reproduces copyrighted material. In most circumstances, the current defences to copyright infringement in Australia are unlikely to apply, and the Australian Government, in particular, does not support placing a text and data-mining exception (ChristmAIs present) under the ChristmAIs tree, which is a joyful gift for our creative industries.  

Civil and criminal liability – the naughty list

If an AI system (including an agentic AI) causes copyright infringement, a court can still order the usual civil remedies: injunctions, damages or an account of profits. Historically, however, such litigation has rarely been pursued due to cost and complexity. The proposed small-claims pathway aims to make it easier for rights holders to enforce their rights and address lower-value infringement matters, meaning businesses using or supplying AI are more likely to be pursued in the future.

Serious or commercial-scale infringement can amount to a criminal offence.

This can include making, importing, distributing or possessing infringing AI-generated copies for commercial advantage, or using AI systems to produce or disseminate those copies. Penalties include hefty fines and imprisonment.

Key takeaway tips – Santa’s nice list

If your business is a provider or deployer of AI, you should consider the following (for a very merry ChristmAIs):

  • Audit your AI tools (peek behind the wrapping paper): Check what your AI systems actually do. Can they store training data, reproduce copyrighted content, or generate material that looks like someone else’s work?  Identify where infringement could realistically arise.
  • Check who wears the risk (who gets the best present?): Review your contracts with AI providers. Confirm who is responsible if the system outputs infringing material, and whether you have indemnities or warranties covering training data, outputs, and misuse.
  • Use licensed content where needed (make your list and check it twice!): If your AI systems (or agentic AI workflows) rely on third-party content, get permission or licences rather than assuming the AI systems’ use is covered.
  • Put guardrails in place (‘tis the season to monitor): Adopt internal rules for how staff should use AI tools.  Monitor what training data is fed into your systems and spot-check outputs for potential infringement risk.
  • Train your staff (come all ye faithful): Give employees simple guidance (don’t ask AI to “replicate” existing works, don’t upload copyrighted material without permission, and report any suspicious outputs).
  • Keep an eye on reform (what’s on the 2026 ChristmAIs list?): With the text and data mining exemption ruled out, the Australian Government has indicated copyright rules in Australia may tighten further. Track updates so your policies and contracts stay up-to-date.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

The post Copyright and AI: What businesses need to know about compliance and liability appeared first on McCullough Robertson Lawyers.

]]>
Digital & IP Lawyers Catching Data Gremlins https://mccullough.com.au/2025/12/09/how-is-data-stored-in-ai-models-data-governance-data-licensing/ Tue, 09 Dec 2025 02:28:02 +0000 https://mccullough.com.au/?p=93051 As Australian organisations increasingly embed AI models into their operations, they must also think twice about the value and sensitivity of the data feeding those systems. Here are the “data gremlins” we see with the increasing deployment of AI models.

The post Digital & IP Lawyers Catching Data Gremlins appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

With the festive season approaching, even the North Pole would think twice before outsourcing Santa’s workshop to a mysterious algorithm. As Australian organisations increasingly embed AI models into their operations, they must also think twice about the value and sensitivity of the data feeding those systems. While AI can bring efficiencies, automation and insights, these benefits must be balanced against the increased legal, operational and reputational risk that arises if data governance is not properly addressed. This article outlines some recurring “data gremlins” we see with the increasing deployment of AI models.

Understanding your data

The accuracy and reliability of an AI model, and the regulatory risks associated with using it, are inherently linked to the data it processes. Before deploying an AI model, organisations must ensure they understand the datasets (both structured and unstructured) that will be ingested by that AI model. AI models often extract more value from these datasets, and use this data for different purposes, than was originally contemplated by the customer. Asking (and understanding the answers to) fundamental questions such as “what data do we actually hold?”, “where is our data stored and how is it accessed?”, “what data will the AI model have access to?” and “where will the data be processed?”, as well as understanding your organisation’s data flows at a granular level, is critical for assessing risk with respect to your data and any privacy, security, IP and contractual issues that might arise when deploying an AI model.

Data integrity

A key issue when using any AI model is data integrity. Users of AI models often (incorrectly) assume that any rights in data they upload (whether personal information, commercially sensitive material, or proprietary datasets) remain entirely theirs, and that the data they upload is only used by the AI model provider for a specific purpose. In reality, AI model providers will frequently seek to include in their terms the right to use such data (whether provided as part of an input, a prompt or otherwise) to train, customise and improve their AI models. This can lead to the embedding of proprietary data into training sets, giving rise to unintended uses and the potential for that data to be disclosed to third parties. Customers must be aware of this risk and include appropriate contractual provisions restricting such usage where needed.

An organisation’s own employees can also pose a risk to the integrity of its data, through the disclosure (intentional or otherwise) of proprietary information or personal information by way of prompts or inputs into the AI model. In 2023, a global technology provider banned the use of AI-powered chatbots after one of its software engineers uploaded sensitive internal source code to a chatbot. This demonstrates that implementing appropriate internal policies and guardrails for the use of AI models, and running regular internal training sessions for employees to reinforce those policies, are essential to minimising data leakage and the resultant negative impact for an organisation.

Upstream data licensing issues can also have a profound effect on downstream customers of an AI model, especially where the AI model provider has breached third-party licensing terms when training its AI model.  Use of tools trained using improperly obtained data can create reputational and business continuity risks for customers of those tools, if the use of those tools is enjoined as part of actions brought by the owners of the training data against the provider of the AI model, or if those proceedings become publicised. Current litigation all around the world by data subjects and copyright owners against the owners of AI models demonstrates that third-party rights are (and will continue to be) a prevalent issue when it comes to licensing for AI models.  Understanding how an AI model has been trained (including on what data), and having protections (including through contractual assurances) that the AI model provider has appropriate third-party data licensing terms and permissions in place is critical. 

Ownership of output

Contracts for AI models will often provide that ownership of output defaults to the AI model provider. This creates potential limitations around how an organisation can use ‘its’ outputs, and how those outputs (which may contain elements of your training or prompt data) can subsequently be used by the model provider for training or indeed in outputs for other customers. Organisations must be alive to this and consider the nature of the outputs to be produced by the AI model. Where the output is likely to contain proprietary insights or some other form of inherent value, organisations should negotiate express ownership or exclusive rights with respect to that output.  Clear drafting is essential to avoid uncertainty about whether outputs can be commercialised, shared with third parties, or embedded in new products. 

Hallucinations and output accuracy

The accuracy and reliability of the output of an AI model is dependent on the input data and the data that the AI model was trained on. 

Hallucinations and errors in output frequently arise where underlying data quality is poor, or where there are inherent weaknesses in the design of the AI model (which is often due to the data the AI model was trained on). Model drift (being the degradation in the performance of the AI model due to changes in the underlying data) is also a real concern and can lead to a decline in the effectiveness of decision-making.

Organisations can mitigate these risks through various means, including by implementing a human-in-the-loop safety check, quality monitoring to address model drift, programs to retrain models periodically, and robust validation frameworks for output. However, to identify which of these is relevant for any particular use case, organisations first need to conduct the fundamental analysis to determine the risks relevant to their particular model and intended deployment.

Privacy

The use of personal information by AI models will continue to be the subject of real scrutiny leading into 2026, especially with anticipated Australian privacy reforms on the horizon. The Office of the Australian Information Commissioner (OAIC) has released guidance with respect to the use of AI models and has reiterated that obligations under Australian privacy law will apply to any personal information input into an AI model, together with any output generated by an AI model that includes personal information. Further, from December 2026, privacy policies must include mandatory transparency information about automated decision making. 

If personal information will be used to train or prompt an AI model, the organisation responsible for that personal information must understand precisely what information is involved, how it was collected, the purposes for which it was collected, how it will be processed by the service provider, what secondary uses are proposed (if any), and whether all of those matters have been adequately disclosed to the data subject.  Lack of transparency can amount to a breach of Australian (and other international) privacy laws, especially if that personal information is repurposed in a way that is inconsistent with the original purpose of collection.

Customers should also strictly manage any overseas disclosures, and ensure that appropriate data security arrangements are in place. They should negotiate the contract with the service provider to include the controls necessary to address the risks they identify.

As best practice, the OAIC recommends that organisations do not enter personal information (and particularly sensitive information) into publicly available generative AI tools, because of the complex and significant privacy issues that might arise.  However, realising the full benefits of AI often requires using personal information, and this can often be done lawfully – provided proper analysis has been done, negotiations have been conducted and contractual and technical controls have been implemented.

How we can help

The above issues highlight the need for clear understanding of and controls around how an organisation’s data can be stored, used, handled and analysed. An increasing integration of AI models into an organisation’s technology ecosystem means that having appropriate data governance mechanisms and contractual protections in place to manage data risk becomes critical. Our digital and intellectual property team regularly advise organisations with respect to the procurement, safe deployment and ongoing management of AI models. We regularly negotiate contractual terms, and advise on appropriate policies and procedures, to alleviate business risk. A pre-Christmas review may be the simplest way to keep those data gremlins out of your organisation’s stocking.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

The post Digital & IP Lawyers Catching Data Gremlins appeared first on McCullough Robertson Lawyers.

]]>
AI & ERS: The human touch required in increasingly automated employment processes https://mccullough.com.au/2025/12/08/ai-redeployment-and-automation-displacement-employment-processes/ Mon, 08 Dec 2025 06:35:49 +0000 https://mccullough.com.au/?p=93016 The consultation duties and Fair Work Act compliance required in AI-driven restructures, AI-driven redeployment, and automation displacement.

The post AI & ERS: The human touch required in increasingly automated employment processes appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

AI-driven restructures are no longer theoretical. Employers are already using AI tools to automate tasks, design new operating models, and even draft redundancy correspondence. The legal framework, however, is old-fashioned: the Fair Work Act 2009 (Cth) requires genuine consultation, real consideration of redeployment, and a process grounded in human judgment. The challenge for employers is making old-world obligations fit fast, AI-enabled operating models.

Genuine redundancy in an AI world

Section 389 of the Fair Work Act provides a three-limb test for a “genuine redundancy”:

  1. The job is no longer required to be performed by anyone due to changes in the operational requirements of the enterprise.
  2. The employer has complied with any consultation obligation in a modern award or enterprise agreement.
  3. It was not reasonable in all the circumstances to redeploy the employee within the employer’s enterprise (or an associated entity).

AI may be driving the “operational requirements” – for example, process automation or AI replacing parts of a role – but it does not dilute limbs (2) and (3).

Lord v Millet Hospitality: AI-written consultation gone wrong

The recent unfair dismissal case of Hayley Lord v Millet Hospitality Geelong Pty Ltd [2025] FWC 2740 is a timely warning for anyone tempted to outsource redundancy communications to a chatbot. In that case, a hospitality business used ChatGPT to draft an email informing an employee that “we have made the difficult decision to remove the Housekeeping Supervisor position,” before offering a “discussion” about alternatives and saying it would “proceed with the removal… as planned” if she did not respond.

The Commission held:

  • the wording showed a final decision had already been made, so any “consultation” offered was illusory; and
  • relying on ChatGPT did not excuse the employer’s failure “to adhere to basic standards of decency” or to have a face-to-face conversation about redundancy.

The takeaway is blunt: AI can help draft, but it cannot own the message. Employers remain responsible for ensuring communications are accurate and framed in a way that leaves space for genuine consultation.

Consultation duties in AI-driven restructures

For employers planning AI-driven restructures:

  • Consult before the decision is locked in; Award and EA consultation clauses typically require information to be shared while options are still genuinely open – not after the automation project is effectively implemented.
  • Explain the “why”; in an AI context, that means being able to describe (at a human level) what the technology change is, why it affects certain roles and what alternatives have been considered.
  • Engage in two-way dialogue; employees must have a real opportunity to challenge assumptions, suggest changes, or propose retraining or redesign of duties.

Redeployment and automation displacement

The Helensburgh Coal Pty Ltd v Bartley [2025] HCA 29 decision confirmed that when considering redeployment, the Commission can scrutinise how an employer structures its labour model – including reducing reliance on contractors or labour hire – to create or free up roles for otherwise redundant employees. That is a significant shift for employers running complex automation or outsourcing programs, particularly where a restructure involves replacing internal labour with technology supported by external providers.

For AI-driven restructures, that means that consideration of redeployment should include:

  • Mapping roles likely to be disrupted by AI or automation early and identifying adjacent roles that could be created or expanded.
  • Looking beyond internal vacancies to contractor and labour-hire arrangements – could some of that work be brought back in-house to redeploy impacted employees?
  • Considering retraining and upskilling as part of the redeployment analysis.

A redundancy program that trumpets “efficiency through AI” while ignoring obvious redeployment possibilities – including substituting employees for external providers – is now more vulnerable.

Safety and “automation displacement” risks

From a work health and safety perspective, AI restructures clearly engage psychosocial risk duties. Large-scale automation programs create job insecurity, anxiety about skill obsolescence and, in some cases, perceived unfairness or discrimination (for example, where older or disabled workers are disproportionately affected).

Employers should identify and implement reasonably practicable control measures to manage these psychosocial risks.

If automation displacement is handled clumsily, employers face potential WHS exposure on top of unfair dismissal, general protections or discrimination claims.

Practical steps

Used well, AI can support lawful restructures. Used lazily, the output becomes Exhibit A to a legal claim. In practice, employers should:

  • Use AI as a drafting assistant only – all consultation and termination documents must be checked against the actual plans, award/EA obligations and the required legal language.
  • Train managers on how to talk to employees about automation and redundancy, and insist on live conversations for significant changes.
  • Build a structured redeployment and retraining review, expressly considering contractor and labour-hire roles.
  • Document the reasoning: why roles are no longer required, what alternatives were assessed, and why redeployment was not reasonable.

AI might drive the restructure, but the Commission will still be looking for fairness, transparency and humanity.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

The post AI & ERS: The human touch required in increasingly automated employment processes appeared first on McCullough Robertson Lawyers.

]]>
Directors checked their duties twice https://mccullough.com.au/2025/12/05/directors-checked-their-duties-twice/ Fri, 05 Dec 2025 02:22:00 +0000 https://mccullough.com.au/?p=93010 With AI increasingly being used as a tool to analyse and synthesise data, what obligations do directors have when discharging their duties? In this article we unpack business judgement in the age of AI

The post Directors checked their duties twice appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

The increasing integration of artificial intelligence (AI) into business operations raises important governance considerations. While AI promises enhanced data analysis, automation and efficiency, directors remain subject to their core duties at law, including under the Corporations Act 2001 (Cth) (Corporations Act). AI may assist in meeting those duties, but it cannot be used to outsource or abdicate them.

Business Judgment in the age of AI

A number of key duties underpin a director’s responsibilities. Under section 180 of the Corporations Act, directors must, in exercising their powers and discharging their duties, exercise a degree of care and diligence that a reasonable person in their position would exercise. In making a business judgment, a director or officer may be taken to have complied with this care and diligence requirement (and the equivalent duties at common law) if they:

  1. make the judgment in good faith and for a proper purpose;
  2. do not have a material personal interest in the subject matter of the judgment;
  3. inform themselves about the subject matter of the judgment; and
  4. rationally believe that the judgment is in the best interest of the corporation.

The rule is intended to encourage considered and reasonable risk-taking in relation to ‘business judgments’. It does not protect negligent, uninformed or ill-founded decisions, nor does it extend to other types of decisions such as compliance or oversight failures, delegation decisions, or insolvent trading-related conduct.

In looking to rely on the Business Judgment Rule, is there a place for AI?

Yes, if used responsibly.  

AI can support directors in making better decisions by analysing and synthesising data and patterns, providing an overview of industry trends and generating insights that may prompt further consideration.

AI-generated results cannot, however, be relied on blindly and without critical analysis, nor can they replace a director’s obligation to make reasonable inquiries and inform themselves. While AI is a helpful tool, directors and officers remain responsible for applying relevant expertise and knowledge to assess AI-generated information, including identifying inaccuracies or inconsistencies and considering the possibility that the information provided may be biased, incomplete or simply incorrect. AI is an input, not a substitute.

The Australian Institute of Company Directors (AICD) ‘Director’s Guide to AI Governance’ notes that, while helpful, AI also has unique limitations including opacity, data bias and, at times, unexplained outputs. These characteristics mean directors must scrutinise AI recommendations in the same way they would scrutinise expert human advice, but with added due diligence.

Board reliance on AI-generated reports from management

Directors are entitled to rely on information and advice, including reports that incorporate AI tools, from management, external advisers and internal experts if the reliance is rational and properly informed. In doing so, however, directors must satisfy themselves that:

  1. management understands the AI tools it is using and has implemented appropriate controls, testing and validation;
  2. the outputs are plausible, not internally inconsistent, and not contradicted by other available information; and
  3. any limitations or uncertainties have been disclosed.

In short, AI raises the baseline for what a ‘reasonable director’ must ask. When boards are reviewing AI-supported management reports, it is reasonable to expect management to provide:

  1. methodology transparency – a high-level but meaningful explanation of how AI tools were used, what data sources were relied on, known model limits, error rates or blind spots and the validation and testing that was undertaken;
  2. human oversight – confirmation that outputs were reviewed and challenged by competent executives;
  3. escalation of uncertainty – identification of any low-confidence outputs, areas where the AI model struggled and assumptions needing board attention;
  4. comparison to non-AI evidence – where appropriate, results should be benchmarked against historical data, industry norms and human judgment; and
  5. a clear explanation of the recommendation – board papers should show how the AI-supported analysis fits into management’s judgment, not replaces it.

Consistent with AICD guidance, to assist a board to uphold its responsibilities and duties, as well as strengthen the availability of the Business Judgment Rule, boards may consider implementing some of the following strategies:

  1. request an AI inventory and risk classification of all AI systems that an organisation relies on, including those offered by third-party providers;
  2. ensure management provides clear explanations of AI processes, accuracy, testing results and limitations;
  3. adopt AI-specific policies, controls and escalation frameworks;
  4. require human oversight for all material decisions; and
  5. maintain AI literacy at board and executive levels through ongoing education.

AI can be a powerful tool to assist directors to fulfil their statutory duties, but it cannot replace them.

Under the Corporations Act, directors remain personally responsible for exercising judgment, acting in good faith, protecting confidential information, and ensuring proper purpose. If boards rely on AI, they must do so within a robust governance framework, incorporating human oversight, transparency and accountability. Australian boards that fail to do so may risk not only regulatory and reputational harm, but also personal liability.

This article forms part of the series, the 12 Days of ChristmAIs: A Technology, Media and Telecommunications series on artificial intelligence and its intersection with the law. You can view all the articles here.

The post Directors checked their duties twice appeared first on McCullough Robertson Lawyers.

]]>
The pitfalls and traps of drafting or accepting AI warranties in M&A and commercial transactions https://mccullough.com.au/2025/12/04/ai-warranties-mergers-and-acquisitions/ Thu, 04 Dec 2025 02:21:45 +0000 https://mccullough.com.au/?p=92996 There is not yet an agreed or market standard definition for what constitutes AI. Without precise definition, a seller may find themselves providing a warranty that is either too broad (potentially exposing unknown liabilities) or too narrow (which provides limited comfort to a prospective buyer).

The post The pitfalls and traps of drafting or accepting AI warranties in M&A and commercial transactions appeared first on McCullough Robertson Lawyers.

]]>
12 Days of ChristmAIs: A TMT insight series

In years to come, we will no doubt look back at 2025 as a pivotal year in the adoption of AI. Nvidia, maker of the chips that power many of the frontier models, became the first publicly listed company to hit a US$5 trillion valuation; ‘AI slop’ was the Macquarie Dictionary’s word of the year; and topically in Australia, just this week our Federal Government released its National AI Plan, which seeks to strengthen domestic AI capability and attract investment, support workers through skills development and AI adoption, and ensure safe, responsible AI use in line with our national values. 

On the flipside, an increasing chorus of investors and pundits is calling the top of the AI bubble and highlighting the challenges of making a return on (sometimes eye-watering) AI investments.

Against that backdrop, we thought it would be interesting to explore some of the practical legal implications of integrating AI into a modern business environment.  This is not focused specifically on AI governance and implementation, which would be worthy of another series on its own, but rather, is a collection of observations regarding trends and implications in various areas of law and practice.

To that end, we present you with our “12 Days of ChristmAIs”, kicking off today with some observations from our Corporate team about trends in M&A.

Happy reading, and Merry ChristmAIs!

Alex Hutchens, Partner
Technology, Media and Telecommunications


The pitfalls and traps of drafting or accepting AI warranties in M&A and commercial transactions

As businesses increasingly leverage AI across their operations and the use of AI becomes ever more pervasive throughout all industries, the inclusion of AI-specific warranties and indemnities is becoming progressively more common in an M&A context and in commercial contracts generally. In this article we unwrap some of the pitfalls and traps involved when negotiating such provisions.

Defining AI – untangling the tinsel

Just like untangling last year’s Christmas lights, defining AI is trickier than it looks. Without a clear definition, warranties can become either too broad or too narrow, leaving dealmakers in a knot.

In the legal context, there is not yet an agreed or market standard definition for what constitutes AI. When we talk about AI, we could be referring to generative AI (for example, ChatGPT) or machine learning, natural language processing or computer vision algorithms.

Without precise definition, a seller may find themselves providing a warranty that is either too broad (potentially exposing it to unknown and significant liabilities) or too narrow (which will provide limited or no comfort to a prospective buyer).

Given how quickly AI technologies evolve, any definition used in transactional documents should be regularly reviewed and updated, ensuring it reflects current usage and does not unintentionally widen (or narrow) the warranty package.

The diligence dilemma – a seasonal reminder

Buyers often struggle to see where and how AI is used within a business, and it can therefore be difficult to verify:

  • where AI is used,
  • how deeply it is embedded in operational workflows, or
  • what data employees or systems feed into AI tools.

This uncertainty is magnified by shadow AI, where staff use AI platforms informally to improve efficiency, usually outside authorised channels. Many businesses have implemented AI-use policies, but these are often aspirational, difficult to monitor, and almost impossible to diligence in a traditional transaction process.

This creates a double-sided dilemma:

For buyers:
Opaque AI use makes it challenging to assess legal, operational and reputational risk. As a result, buyers may seek AI-specific warranties or indemnities to bridge the information gap. However, these provisions can inadvertently shift unknown or unquantifiable liabilities to the seller.

For sellers:
Accepting broad AI warranties may expose them to risks they cannot investigate or verify. Sellers should therefore:

  • ensure AI warranties are qualified by awareness,
  • disclose internal AI policies carefully, and
  • negotiate robust liability caps and limitations given the uncertain nature of AI-related exposure.

Buyers, on the other hand, should raise targeted questions about AI across the business, focusing on areas where AI informs critical decisions, interacts with sensitive data, or influences customer outcomes.

The vendor black box problem – elves behind closed doors

Third-party AI vendors can feel like Santa’s elves working behind closed doors.

Even when a business understands its internal use of AI, many AI systems rely on third-party models that operate as opaque “black boxes”. These systems may:

  • be trained on unknown datasets,
  • have evolving or proprietary behaviour,
  • be governed by restrictive licence terms, and
  • provide limited visibility into their decision logic.

This creates a structural problem for warranties: a seller cannot realistically warrant the behaviour, inputs, or training data of a third-party model it does not control.

Warranties that imply visibility or control over vendor-supplied AI can therefore expose sellers to unintended strict liability for risks that sit entirely outside the organisation.

Regulation at reindeer speed – keeping the sleigh on course

AI regulation is moving as fast as Santa’s sleigh on Christmas Eve. 

The regulation of AI within Australia and across the world is evolving rapidly and, consequently, the legal baseline is frequently changing.

For buyers:
There is a growing desire for assurance that the target’s AI use complies with current law and is not exposed to foreseeable regulatory change.

For sellers:
Broad AI compliance warranties can potentially and inadvertently capture:

  • emerging standards,
  • draft legislation, or
  • future regulatory expectations that were not foreseeable when the warranty was given.

Sellers should carefully qualify these warranties by materiality, awareness and timing, ensuring the allocation of risk reflects the regulatory context as it exists at signing.

The multi-vector nature of AI risk — ornaments on a tree

Just as one branch of a Christmas tree can hold multiple ornaments, one AI system can expose a business to a variety of legal risks at the same time.

Unlike traditional software, AI introduces simultaneous risk across multiple legal domains, meaning a single AI system may raise issues in several areas at once.

This multi-vector risk profile means AI warranties can inadvertently function as broad, catch-all promises that cut across a range of legal regimes — often more expansive than either party anticipates. Careful drafting and tight scoping are essential to avoid unintended overlap with other warranty sets.

Conclusion – unwrapping with care

As dealmakers unwrap the First Day of ChristmAIs, take care and recognise that AI warranties present nuances and risks that don’t fit neatly into traditional warranty frameworks. The smart move this festive season is to unwrap slowly, inspect carefully, and avoid being the one left holding an AI-shaped liability you didn’t intend to take home.

The post The pitfalls and traps of drafting or accepting AI warranties in M&A and commercial transactions appeared first on McCullough Robertson Lawyers.

]]>
Raising the standard – Compliance with the Queensland Child Safe Standards https://mccullough.com.au/2025/12/03/qld-child-safe-standards-phase-two/ Tue, 02 Dec 2025 22:37:19 +0000 https://mccullough.com.au/?p=92963 The Child Safe Organisations Act 2024 (Act) recently commenced on 1 October 2025. Part one of our three-part series[1] gave a summary of the ten Child Safe Standards, and the dates by which organisations need to ensure that they have been implemented. This second part of our three-part series addresses what compliance with the new […]

The post Raising the standard – Compliance with the Queensland Child Safe Standards appeared first on McCullough Robertson Lawyers.

]]>
The Child Safe Organisations Act 2024 (Act) recently commenced on 1 October 2025.

Part one of our three-part series[1] gave a summary of the ten Child Safe Standards, and the dates by which organisations need to ensure that they have been implemented.

This second part of our three-part series addresses what compliance with the new Child Safe Standards looks like.

What are the Child Safe Standards and how should you implement them?

The Queensland Family and Child Commission (QFCC) has summarised the Child Safe Standards as standards that

‘… help create environments where Queensland’s children can grow, learn, and thrive without harm. They are built on children’s rights and provide a clear framework for keeping children safe’.

All organisations classified as ‘child safe entities’ under section 10 of the Act must implement and comply with the Child Safe Standards.[2] In broad terms, this obligation applies to any organisation that works with or provides spaces and facilities for children.

In implementing and complying with the Child Safe Standards, organisations must also comply with the Universal Principle of providing an environment that promotes and upholds the right to cultural safety of children who are Aboriginal persons or Torres Strait Islander persons.[3] 

The actions that a particular organisation will need to take to implement the Child Safe Standards and the Universal Principle will be unique to it. The necessary actions will depend on (among other things) the size and scale of the organisation, the sector it is in, the extent to which it works with children, and the nature of the work the organisation does with children. 

We outline in this article what each of the ten Child Safe Standards are in greater detail, and some actions that might be taken to implement them. 

Standard one – Leadership and culture

Making a commitment to child safety, wellbeing and cultural safety that is visible across the organisation and is evident in key policies and procedures is an important starting point to implementing standard one.[4] Indicators of compliance include ensuring the organisation:

  1. can demonstrate it has publicly available and current documents and procedures, including:
    • child safety and wellbeing policies;
    • practice guidance;
    • information sharing protocols; and
    • staff and volunteer codes of conduct;
  2. has established governance structures and clear reporting frameworks that enable organisational leadership to effectively monitor and act on risks to children’s safety;
  3. provides leaders, staff and volunteers with regular training and support to understand and fulfil their responsibilities for children’s safety and wellbeing with reference to the relevant policies; and
  4. works with Aboriginal and Torres Strait Islander staff and stakeholders to identify appropriate progress and success indicators and evaluation methods.

Standard two – Voice of children

As part of ensuring children have a voice, standard two[5] aims to ensure that organisations foster an environment where children’s voices are not just heard but actively shape outcomes that affect them. Some key indicators of compliance could include ensuring that the organisation:

  1. has programs and resources to educate children on their rights, including their right to safety and their right to be listened to;
  2. is proactive in providing age-appropriate platforms to regularly seek children’s views and encourage participation in decision-making; and
  3. has processes where children are informed about their roles and responsibilities in helping to ensure the safety and wellbeing of their peers.

Standard three – Family and community

Standard three[6] focuses on the principle that child safety and wellbeing are strengthened when families and communities are informed, engaged and active partners in promoting safe environments. From a practical perspective, organisations should prioritise:

  1. creating opportunities for families and communities to be involved in how the organisation operates, including to co-design and participate in the development and implementation of policies, practices and programs; and
  2. having clear and accessible information for families and communities on issues of children’s safety and wellbeing, including in relation to Aboriginal and Torres Strait Islander families and those from other diverse backgrounds.

Standard four – Equity and diversity

Queensland’s diverse population is recognised through standard four,[7] which requires organisations to uphold equity and respect diverse needs in policy and practice. This can be achieved through:

  1. policies that promote equity and diversity in the context of the safety and wellbeing of children;
  2. child-friendly material being provided in accessible languages and formats that promotes inclusion and informs all children of the support and complaints processes available to them;
  3. staff and volunteer training to help them recognise and respond effectively to children with diverse needs; and
  4. a cultural safety framework and/or action plan to embed cultural safety, equity and diversity principles across policies, programs and governance structures.

Standard five – People

Standard five[8] requires organisations to ensure that staff and volunteers engaging with children are not only suitable to work with children but are supported with ongoing professional development and clear guidance, empowering them to model safe and respectful practices in every interaction. Key action areas for organisations to focus on when implementing this standard include:

  1. rigorous recruitment and advertising practices, including ensuring that selection criteria demonstrate a commitment to child safety and wellbeing and an understanding of a child’s development needs and culturally safe practices; and
  2. careful screening processes, including Working with Children Checks[9] and referee checks.

Standard six – Complaints management

Standard six[10] requires organisations to have processes in place that provide a child-focused approach to complaints and concerns. To implement this standard, organisations should consider ensuring that they:

  1. have child-friendly complaints policies that are accessible to children, carers, families, staff and volunteers, and promote timely feedback to those who raise concerns; and
  2. offer training for staff to handle complaints sensitively, with a focus on supporting and protecting children and seeking to ensure that no child or person is retraumatised throughout the process.

Standard seven – Knowledge and skills

Standard seven[11] intersects with standards four and five by requiring that staff and volunteers working with children be equipped with the knowledge, skills and awareness to keep children safe through ongoing education and training. Beyond screening and qualifications, organisations should ensure that staff and volunteers receive ongoing education, training and mentoring to build the knowledge, skills and awareness needed to proactively safeguard children in all interactions.

Standard eight – Physical and online environments

Given the diversity of settings where children interact with organisations, creating safe physical and online environments is a cornerstone of children’s safety and wellbeing. To ensure compliance with standard eight,[12] some key issues organisations should consider include:

  1. whether they have a risk management strategy that addresses physical and online risks;
  2. staff and volunteer access and use of online environments in line with the organisation’s code of conduct, and relevant communication protocols; and
  3. whether there are appropriate measures in place to ensure the safety and wellbeing of children with respect to third parties who might visit environments provided by organisations, such as contractors engaged to provide services.

Standard nine – Continuous improvement

Child safety is a dynamic process. Standard nine[13] focuses on the need for organisations to continuously review and improve the implementation of the Child Safe Standards. To ensure compliance with standard nine, organisations should consider:

  1. seeking the participation of children, carers and communities in regular reviews of child safety and wellbeing policies;
  2. regularly analysing complaints and outcomes to assist in improving child safe practices; and
  3. providing consistent updates to staff and volunteers as changes and improvements occur.

Standard ten – Policies and procedures

Standard ten[14] requires organisations to ensure that their policies and procedures prioritise the safety and wellbeing of children. Policies and procedures should equip staff and volunteers with an understanding of how to identify and prevent harm to children. To ensure compliance with standard ten, organisations should consider conducting:

  • regular audits of their policies and procedures to provide evidence of how they are child safe and culturally safe; and
  • interviews or surveys of children, carers, families and community members to determine the confidence level in, and awareness of, the policies and procedures relevant to child safety and wellbeing.

Key documents

Listed below are some key documents that all child safe entities should consider developing and regularly reviewing, as a starting point:

  • a published commitment to children’s safety and wellbeing in your organisation’s physical and/or online environment;
  • a child safety and wellbeing policy;
  • a code of conduct for staff and volunteers who work with children and for promoting child safety and wellbeing;
  • a complaints handling policy; and
  • a risk management policy that identifies, assesses and takes steps to mitigate the risk of harm to children.

The QFCC has indicated that these are documents that your organisation may need as a foundation to build on, but that this list is not exhaustive.

When does compliance with the Child Safe Standards need to have occurred by?

The following phase one services listed in part one of this series should have taken steps to ensure that the Child Safe Standards have been implemented by 1 October 2025:

  • child protection services;
  • government entities;
  • justice or detention services; and
  • services for children with disability.

The phase two services mentioned in part one of this series should now be prioritising taking steps to implement the Child Safe Standards, given the upcoming deadline for compliance of 1 January 2026. The phase three services mentioned in part one of this series have until 1 April 2026 to ensure that they have implemented the Child Safe Standards.

We encourage you to reach out to our team of experts for any assistance you need in ensuring that your organisation has implemented the Child Safe Standards by the relevant deadline.

Early next year, we will deliver part three of this series, which will focus on:

  • which sectors will need to introduce a Reportable Conduct Scheme by 1 July 2026, and how to effect this; and
  • enforcement options that are available to the QFCC, which is responsible for monitoring and reporting on the operation of the Child Safe Organisations system established by the Act.

[1] If you haven’t already, please refer to part one of this three-part series: Phase one of the Queensland Child Safe Standards to be introduced in October 2025 – what are your obligations?.

[2] Section 11(1) of the Act.

[3] Section 11(2) of the Act.

[4] Section 9(a) of the Act.

[5] Section 9(b) of the Act.

[6] Section 9(c) of the Act.

[7] Section 9(d) of the Act.

[8] Section 9(e) of the Act.

[9] https://ppr.qed.qld.gov.au/pp/working-with-children-blue-card-procedure.

[10] Section 9(f) of the Act.

[11] Section 9(g) of the Act.

[12] Section 9(h) of the Act.

[13] Section 9(i) of the Act.

[14] Section 9(j) of the Act.

The post Raising the standard – Compliance with the Queensland Child Safe Standards appeared first on McCullough Robertson Lawyers.

A bitter pill for Cosette: FIRB says “no” to Mayne pharma takeover

Published 30 November 2025 | https://mccullough.com.au/2025/12/01/firb-blocks-cosette-mayne-pharma-takeover/
On 21 November 2025, Treasurer Jim Chalmers accepted the Foreign Investment Review Board’s (FIRB’s) recommendation to block US-based Cosette Pharmaceutical’s (Cosette) proposed A$672 million takeover of 100% of Mayne Pharma Group Limited (Mayne), following a detailed assessment that identified unresolvable national-interest risks. In reaching his decision, the Treasurer undertook broad consultation with the Department of Health and Aged Care, the Takeovers Panel, the Therapeutic Goods Administration and the South Australian Government.

On 20 February 2025, Cosette and Mayne entered into a Scheme Implementation Deed (SID) in relation to the acquisition of all the shares in Mayne for $7.40 cash per share by way of a scheme of arrangement, subject to certain conditions precedent, including that no ‘Material Adverse Change’ (MAC) had occurred and that Cosette obtained the Treasurer’s approval under the Foreign Acquisitions and Takeovers Act 1975 (Cth).

On 17 May 2025, Cosette issued a notice to Mayne alleging that a MAC had occurred under the SID, citing:

  • a deterioration in Mayne’s trade, including underperformance in its third-quarter earnings update compared to historical trends and prior forecasts; and
  • potential regulatory issues regarding Mayne’s promotion of its Nextstellis oral contraceptive pill, raised in an FDA letter.

In response, Mayne suggested that Cosette was trying to amplify the risk of job losses and reduced manufacturing capability in Australia in an attempt to cause the FIRB approval condition to fail, having exhausted other avenues to avoid completing the transaction.

The NSW Supreme Court eventually rejected the claim, opining that a MAC must be based on actual events and their real impact, not on missed forecasts or revised projections, although downgraded forecasts may point to underlying issues. It also noted that market volatility can both drive and continue to influence such downgrades, making any claimed impact too uncertain to trigger a MAC.

Subsequently, Cosette threatened to close Mayne’s manufacturing plant in Salisbury, Adelaide (the Plant) and shed over 200 jobs. Mayne had spent millions on the Plant since 2016 as part of its rollout into the US market, where it sells 71 US Food and Drug Administration-approved products. Following Cosette’s threat, the South Australian Government requested that FIRB block the takeover, raising concerns that the site’s closure would mean the loss of unique expertise and facilities and expose Australia to global supply shocks in an already fragile sector that is heavily reliant on importation.

On 19 November 2025, the Takeovers Panel ruled in favour of the acquisition, reasoning that the Treasury could require Cosette to keep the Plant open by imposing ‘any conditions reasonably required’. The Panel found that Cosette had shifted its intentions regarding the Plant, including potential closure or disposal, without timely disclosure to the market, and recommended that Cosette be required to give clear legal guarantees to protect the Plant. Despite this, the Treasurer ultimately concluded, following unequivocal advice from the Treasury and FIRB, that there were no conditions that could be put in place to adequately mitigate national interest risks, particularly unique risks to the supply of critical medicines.

The public manner in which this FIRB process played out is rare, but, together with the MAC dispute and potentially other commercial drivers, it has provided some insight into the way FIRB approaches these decisions and the fact that it can, and on occasion will, ignore advice from other regulators. The fact that the Treasurer considered the imposition of conditions insufficient to protect the national interest demonstrates that FIRB’s typical risk mitigation step of imposing conditions will not facilitate all approvals sought.

Further, the accusations made by Mayne that Cosette was intentionally sabotaging its approval highlight the importance of strong contractual protections around public disclosure of deal information and the need for counterparties (that do not themselves need to apply for regulatory approval) to be involved in the regulatory approvals process. This usually takes the form of a right to review and amend regulatory applications, but in light of this recent FIRB rejection, general conduct clauses and positive obligations to progress approvals should also be carefully considered.

This decision may prompt sellers to adjust their regulatory strategies to stop bidders from leveraging FIRB and the political landscape to their advantage.

The Treasurer’s Media Release can be found here.

The post A bitter pill for Cosette: FIRB says “no” to Mayne pharma takeover appeared first on McCullough Robertson Lawyers.
