
By Simone Brew, Principal & Gigi Au, Senior Associate of Matthews Folbigg Lawyers

The growth of generative artificial intelligence (“AI”) has been unprecedented over the past few years, with millions of users all over the world seeking assistance with educational, professional, and personal tasks. However, as our reliance on generative AI tools continues to grow, it is important to understand the impact its use could have on our workplaces and our professional careers.

The recently released ‘Tech, AI and the Law’ report by Thomson Reuters explored the use of AI by legal professionals as an innovative way to improve efficiency and save time. Of the 869 legal professionals sampled in a survey conducted for the report:

  • 46% use AI for legal research;
  • 38% use AI to summarise documents; and
  • 35% use AI to assist with the drafting of correspondence.

The use of AI tools for such tasks was found to be advantageous primarily for managing workloads and avoiding repetitive, labour-intensive work. In many cases, these advantages gave legal professionals more time to complete tasks requiring greater expertise.

Whilst AI undoubtedly has the potential to increase productivity, the survey revealed that nearly one in three law firm professionals use unofficial AI tools to assist them at work. The report defines an unofficial AI tool as one that has not been officially implemented by an employee’s organisation. For example, an AI tool that is publicly available on the internet, such as ChatGPT, could be considered an unofficial AI tool by an employer. The use of these tools in the workplace poses significant cyber security and professional risks, especially for professionals, such as lawyers, who have an obligation to keep information confidential.

Risks of AI Tools

By its very nature, generative AI collects vast amounts of data inputted by users and uses it to train the algorithms that produce its answers and solutions. Therefore, if a user inputs sensitive or confidential information, such as client-specific information, into an unofficial AI tool, there is an inherent risk that this information could surface in the tool’s interactions with other users. In addition to potential leaks caused by AI’s learning processes, the trove of collected information makes AI tools an enticing target for cybercriminals seeking unauthorised access.

The potential risks of the storage of information by AI tools were made apparent on a minor scale in 2023, when OpenAI, the developer of ChatGPT, discovered a bug in an open-source library used by the tool. The bug meant that some users were able to see the titles and the first message of other users’ chat histories. Upon further investigation, OpenAI found that the same bug could have exposed the payment information of some ChatGPT Plus subscribers. This incident demonstrates that the unintentional release of information stored by AI tools is no longer a hypothetical scenario but a real threat that users must actively consider before inputting sensitive and confidential information.

A proactive response to the risks of unofficial AI tools has been showcased by larger companies, such as JPMorgan Chase. In 2023, JPMorgan Chase was reported to have restricted its employees’ use of ChatGPT amid concerns that the potential sharing of sensitive financial information by employees could create regulatory and legal compliance issues. However, in recognition of AI’s productivity benefits, JPMorgan Chase released its own internal AI assistant in 2024, which reportedly operates similarly to ChatGPT but grants the firm greater control over the collection, storage, and use of its data.

Privacy and confidentiality are the backbone of many professional industries. Complacency when using AI tools to boost productivity in the course of your work can pose risks to data security and confidentiality.

It is therefore important to recognise the risks of using AI tools in your workplace to ensure effective management of confidential information when using technology.

Matthews Folbigg Lawyers has a specialist team dedicated to Cyber Security.

If you would like more information or advice in relation to cyber security, contact Simone Brew at simoneb@matthewsfolbigg.com.au or Gigi Au at gigia@matthewsfolbigg.com.au of the Matthews Folbigg Cyber Security Group.