Statement on the use of artificial intelligence in Australian legal practice

Commissioner's message

6 December 2024

We have seen plenty of media and professional commentary over the past year and a half on AI and its transformative potential for the legal profession. We know some law practices are already using large language models (LLMs) – forms of AI that can process and generate text – and some are even developing in-house LLMs using their own data.

As I first said in our Risk Outlook 2024 (published in June), lawyers can enjoy the benefits of AI, but they should remember that the duty to provide accurate legal information is theirs, not that of the AI program they use. Unlike a professionally trained lawyer, AI can't exercise superior judgement or provide ethical and confidential services. Ultimately, it's your expertise that clients rely on when they engage your services.

Understanding the capabilities and limitations of AI tools is important for lawyers, not only because they may use these tools themselves, but also so they can provide trusted guidance to their clients. As the legal profession and the wider community adapt to the rapidly changing digital landscape, consumers will need a reliable source of advice – both on how they can lawfully use AI, and on how to defend their interests if they have been adversely affected by a third party using these technologies.

To ensure lawyers have clarity on our expectations when they use AI, we've collaborated with partners in the other Uniform Law jurisdictions to develop our Statement on the use of artificial intelligence in Australian legal practice. It expands on our Risk Outlook 2024 and on the commentary on generative AI and lawyers we published in 2023.

We will continue to review developments in AI for lawyers as its use in the legal profession continues to evolve.

Fiona McLeay
Board CEO and Commissioner


Statement on the use of artificial intelligence in Australian legal practice

For over 200 years, the Australian legal profession has had a unique and powerful role in upholding the rule of law, protecting individual rights and freedoms, and promoting access to justice. During that time, lawyers have adapted to the changing needs of their clients and the community, including the shift to digital communication, remote working, and most recently, the widespread availability of artificial intelligence (AI).

It is important for lawyers to understand AI, including the capabilities and limitations of the large language models (LLMs) and foundation models that underpin the latest AI tools — not only because they may use AI themselves, but also because their clients may be: using AI, seeking advice on how to lawfully use AI, or adversely affected by a third party’s use of AI.

The Law Society of New South Wales, the Legal Practice Board of Western Australia and the Victorian Legal Services Board and Commissioner have produced this statement to help lawyers understand our expectations when they use AI tools to assist them in their legal work. We will regularly review and update our guidance on AI for lawyers, as its use in the legal profession continues to evolve.

When using AI and other legal technology, lawyers must continue to maintain high ethical standards and comply with their professional obligations under the Legal Profession Uniform Law (Uniform Law), and either the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (ASCR), or the Legal Profession Uniform Conduct (Barristers) Rules 2015 (BR), including:

  • Maintaining client confidentiality (ASCR r 9.1; BR r 114). Lawyers cannot safely enter confidential, sensitive or privileged client information into public AI chatbots/copilots (like ChatGPT), or any other public tools. If lawyers use commercial AI tools with any client information, they need to carefully review contractual terms to ensure the information will be kept secure.
  • Providing independent advice (ASCR r 4.1.4; BR rr 3(b), 4(e) and 42). AI chatbots/copilots and other LLM-based tools cannot reason, understand, or advise. Lawyers are responsible for exercising their own forensic judgement when advising clients, and cannot rely on the output of an AI tool as a substitute for their own assessment and analysis of a client’s needs and circumstances.
  • Being honest and delivering legal services competently and diligently (ASCR rr 4.1.2 and 4.1.3; BR rr 4(c)–(d), 8(a) and 35). AI chatbots/copilots, research assistants and summarisers cannot be relied on as a substitute for legal knowledge, experience or expertise. No tool based on current LLMs can be free of ‘hallucinations’ (i.e. responses which are fluent and convincing, but inaccurate). Lawyers who use AI to prepare documents must be able and qualified to personally verify the information those documents contain, and must ensure that their contents are accurate and not likely to mislead their client, the court, or another party.
  • Charging costs that are fair, reasonable and proportionate (Uniform Law ss 172–173; ASCR r 12.2). Lawyers using AI to support their work should ensure that the time and work items they bill clients for accurately represent the legal work done by law practice staff for their client. They should also ensure that using AI does not unnecessarily increase costs for their client compared with traditional methods (e.g. because of additional time spent verifying or correcting its output).

Lawyers who are using AI in their practice should also consider:

  • Implementing clear, risk-based policies that minimise data and security breaches, and that set out which AI tools they have decided to use in their practice, who can use those tools, for what purposes, and with what information. These policies should also set out how the use of AI tools by junior and support staff will be continuously and actively supervised, and how documents containing AI-generated content will be reviewed for accuracy and verified before they are settled. We recommend that lawyers make these policies available to clients upon request to increase transparency (see below).
  • Limiting the use of AI tools in their practice to tasks which are lower-risk and easier to verify (e.g. drafting a polite email or suggesting how to structure an argument), and prohibiting their use for tasks which are higher-risk and harder to independently verify (e.g. translating advice into another language, analysing an unfamiliar legal concept, or executive decision-making). Generative AI tools can be biased, and cannot understand human psychology or other external complicating factors that may be relevant. There may also be ethical concerns with the training data used for an AI model, including intellectual property and ownership rights, or the inclusion of highly offensive or unlawful text or imagery (including child exploitation material), that may make a tool inappropriate to use in their work.
  • Being transparent about their use of AI: properly recording when and how they have used AI in a matter, and disclosing this to their clients (and, where necessary or appropriate, to the court and fellow practitioners), including, if requested by the client, how the use of AI is reflected in costs (see above, Charging costs that are fair, reasonable and proportionate). Lawyers should carefully consider any ethical concerns about the use of AI raised by their clients, and address them proactively.

A joint initiative by:

Law Society of New South Wales

Legal Practice Board of Western Australia

Victorian Legal Services Board and Commissioner
