12 Days of ChristmAIs: A TMT insight series
The increasing integration of artificial intelligence (AI) into business operations raises important governance considerations. While AI promises enhanced data analysis, automation and efficiency, directors remain subject to their core duties at law, including under the Corporations Act 2001 (Cth) (Corporations Act). AI may assist in meeting those duties, but it cannot be used to outsource or abdicate them.
Business judgment in the age of AI
A number of key duties underpin a director’s responsibilities. Under section 180 of the Corporations Act, directors must exercise their powers and discharge their duties with the degree of care and diligence that a reasonable person would exercise in the director’s position. Under section 180(2) (the Business Judgment Rule), a director or officer who makes a business judgment is taken to have complied with this care and diligence requirement (and the equivalent duties at common law and in equity) if they:
- make the judgment in good faith and for a proper purpose;
- do not have a material personal interest in the subject matter of the judgment;
- inform themselves about the subject matter of the judgment to the extent they reasonably believe to be appropriate; and
- rationally believe that the judgment is in the best interests of the corporation.
The rule is intended to encourage considered and reasonable risk-taking in relation to ‘business judgments’. It does not protect negligent, uninformed or ill-founded decisions, nor does it extend to conduct that is not a business judgment, such as compliance or oversight failures, decisions about delegation, or insolvent trading.
When seeking to rely on the Business Judgment Rule, is there a place for AI?
Yes, if used responsibly.
AI can support directors in making better decisions by analysing and synthesising data, identifying patterns, providing an overview of industry trends and generating insights that may prompt further inquiry.
AI-generated results cannot, however, be relied on blindly, without critical analysis, or as a replacement for a director’s obligation to make reasonable inquiries and inform themselves. While AI is a helpful tool, directors and officers remain responsible for applying relevant expertise and knowledge to assess AI-generated information, including identifying inaccuracies or inconsistencies and considering the possibility that the information is biased, incomplete or simply incorrect. AI is an input, not a substitute.
The Australian Institute of Company Directors (AICD) ‘Director’s Guide to AI Governance’ notes that, for all its utility, AI has unique limitations, including opacity, data bias and, at times, outputs that cannot be explained. These characteristics mean directors must scrutinise AI recommendations in the same way they would scrutinise expert human advice, but with added due diligence.
Board reliance on AI-generated reports from management
Directors are entitled to rely on information and advice from management, external advisers and internal experts, including reports prepared with the assistance of AI tools, provided that reliance is rational and properly informed. In doing so, however, directors must satisfy themselves that:
- management understands the AI tools it is using and has implemented appropriate controls, testing and validation;
- the outputs are plausible, not internally inconsistent, and not contradicted by other available information; and
- any limitations or uncertainties have been disclosed.
In short, AI raises the baseline for what a ‘reasonable director’ must ask. When boards are reviewing AI-supported management reports, it is reasonable to expect management to provide:
- methodology transparency – a high-level but meaningful explanation of how AI tools were used, the data sources relied on, known model limitations, error rates or blind spots, and the validation and testing undertaken;
- human oversight – confirmation that outputs were reviewed and challenged by competent executives;
- escalation of uncertainty – identification of any low-confidence outputs, areas where the AI model struggled and assumptions needing board attention;
- comparison to non-AI evidence – where appropriate, results should be benchmarked against historical data, industry norms and human judgment; and
- a clear explanation of the recommendation – board papers should show how the AI-supported analysis informs management’s judgment, rather than replacing it.
Consistent with AICD guidance, to assist a board to uphold its responsibilities and duties, and to strengthen the availability of the Business Judgment Rule, boards may consider implementing some of the following strategies:
- request an AI inventory and risk classification of all AI systems that an organisation relies on, including those offered by third-party providers;
- ensure management provides clear explanations of AI processes, accuracy, testing results and limitations;
- adopt AI-specific policies, controls and escalation frameworks to assist management to meet its obligations when utilising AI tools;
- require human oversight for all material decisions; and
- maintain AI literacy at board and executive levels through ongoing education.
AI can be a powerful tool to assist directors to fulfil their statutory duties, but it cannot discharge those duties for them.
Under the Corporations Act, directors remain personally responsible for exercising judgment, acting in good faith, protecting confidential information and ensuring decisions are made for a proper purpose. If boards rely on AI, they must do so within a robust governance framework incorporating human oversight, transparency and accountability. Directors who fail to do so risk not only regulatory and reputational harm for their organisations, but also personal liability.