Legality and Morality of AI in Creative Arts

Recently Michael Paterson participated in a debate organised by AISIG (the AI Special Interest Group), an initiative of the ACS WA Branch that focuses on topics in Artificial Intelligence. Michael has been an ACS member for just over 40 years!

The two teams debated the following statement:

AI in the creative arts can effectively, legally, and morally democratize creativity and expand modes of human expression.

While the Affirmative Team dazzled the audience with the possibilities of AI turning everyone into creative artists, it seemed to forget to explain how AI had actually expanded human expression!  The Negative Team then argued that:

  • AI is the fast food of creativity – quick, cheap, and mass-produced, but ultimately lacking the depth, authenticity, and nourishment of the real thing.
  • The debate was concerned with whether AI CAN democratise creativity, not whether it WILL someday after a firmware update.
  • AI CANNOT currently make creativity accessible, legal and ethical; in fact, it has failed spectacularly in all three categories.
  • AI has not actually expanded the modes of human expression; those modes have not in fact changed.  AI has only increased the quantity of creative artwork.
  • AI is currently unreliable, disregards copyright, privacy and other rights, churns out misinformation, and costs a fortune in electricity and water to run.

Michael had the job of refuting the statement in relation to legality and morality.  The following is his reasoning as to why AI is guilty of crimes against creativity and why, as useful as AI might be, on the balance of probabilities, AI tools have been created, and are being used, illegally and immorally, as per the following charges:

  1. Copyright Infringement

Copyright arises automatically and resides in the author, or the author’s employer. Reproducing copyrighted works without permission is illegal all over the world – Copyright Law 101.

Setting up a system which allows others to make unauthorised reproductions is also illegal – Copyright Law 102.

Systems like ChatGPT are built on massive text and image datasets copied from the internet, including copyrighted books, articles, images and other creative artwork.  Publishing a creative work on the internet does not give everyone permission to use that work for their own purposes.  That may indeed surprise some of you.

Some recent examples of blatant breaches:

  • AI training datasets: massive text and image corpora scraped from the internet without permission, including copyrighted books, articles, images and other creative artwork.
  • Unauthorized reproductions: In the New York Times v. OpenAI case, evidence was presented showing that ChatGPT could generate nearly verbatim passages from NYT articles. HERE is a copy of the claim.  Note paragraph 91: the red parts are verbatim reproductions.  That’s not innovation; it’s copying with extra steps.  And no, “but we only indexed it” is no defence.
  • Getty Images v. Stability AI: Stability AI’s image generator was caught reproducing pictures with watermarks intact. It’s like photocopying an artist’s work and forgetting to crop out the signature.

  2. Exceptions and Fair Dealing

One major exception applies when a temporary copy of a work is made in RAM during the communication of that work, provided:

  • the communication is not in breach in the first place; and
  • the copy is indeed temporary.

However, the communication by AI is not authorised, and systems like ChatGPT store a record of your entire history, so neither condition is satisfied.

AI companies claim that their actions fall under “fair use” exceptions in the US or “fair dealing” exceptions in Australia, which cover:

  • Research & Study, Criticism & Review, Parody & Satire and News Reporting Fair Dealing exceptions (section 40-42); or
  • Providing Professional Legal Advice (section 43). While AI may be able to pass the American Bar Exam, in the top 10%, it is yet to be considered a ‘judge’ or ‘lawyer’.

The key is that the use needs to be fair.

Let’s face it: AI companies are not academic researchers or reporters or lawyers. They’re corporations profiting from storing and substantially reproducing other people’s hard work.  There is no “Fair” in that sort of dealing with the copyright works of others.

  3. Moral Rights and Credit

Even when AI-generated content isn’t an outright copyright violation, it often breaches moral rights under the Copyright Act 1968, which require attribution of the author and respect for the integrity of the original artwork.

AI tools manipulate artwork in ways that can denigrate the original. They rarely credit authors properly, if at all, and even when citations are provided, they often lead to non-existent articles or hallucinated sources.

Worse, AI-generated content sometimes misattributes information, making it easier to spread misinformation.

I asked ChatGPT about myself, and it regurgitated content from my own website without crediting me as the author, offering nothing more than a link to the site. That’s plagiarism, AI-style.

  4. Contracts and Terms of Service

Big Corporations might ignore the terms of service agreements, but they can’t pretend they don’t exist.

  • The New York Times Terms of Service explicitly forbid web scraping. OpenAI still scraped their content. 
  • Getty Images explicitly prohibits reproductions of its photos. Stability AI’s system still did it.
  • Many websites have clauses against automated data extraction, yet AI firms persist in hoovering up data.

Whether it’s a direct copyright breach or a contract violation, these AI companies are taking what they shouldn’t.

  5. Other Fun Crimes (Because AI doesn’t stop at one!)

Privacy Violations and Espionage – AI models are trained on, and collect, vast amounts of personal data – often illegally. AI developers have ignored privacy laws globally and are being prosecuted for doing so.

  • Is there any hidden malicious software in DeepSeek, or is it just uploading as much private data as it can from your device?  Among other things, DeepSeek is reported to disable App Transport Security, monitor keystrokes and transfer all manner of personal information back to Chinese servers (see the sketch after this list for what disabling App Transport Security involves).  Is ChatGPT just as bad or worse?
  • Google DeepMind unlawfully obtained medical records of 1.6 million UK NHS patients.
  • Tesla’s AI-driven camera system allegedly captures and stores footage without consent.
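
For the technically curious, here is a quick sketch of what “disabling App Transport Security” actually means. ATS is the iOS policy that refuses plain-HTTP connections by default; an app switches it off via keys in its Info.plist. The Swift snippet below is purely illustrative – the endpoint URL is hypothetical and it makes no claim about DeepSeek’s actual code – it simply shows the kind of insecure request that ATS blocks when left switched on:

    import Foundation

    // App Transport Security (ATS) refuses insecure (plain-HTTP) connections
    // by default. An app opts out by adding these keys to its Info.plist:
    //
    //   <key>NSAppTransportSecurity</key>
    //   <dict>
    //       <key>NSAllowsArbitraryLoads</key>
    //       <true/>
    //   </dict>
    //
    // With ATS left in force, a request like this one is refused by the
    // system before it ever leaves the device. (The URL is hypothetical.)
    let insecureURL = URL(string: "http://telemetry.example.com/upload")!

    let task = URLSession.shared.dataTask(with: insecureURL) { _, _, error in
        if let error = error {
            // Under the default ATS policy the error reads along the lines of:
            // "The resource could not be loaded because the App Transport
            //  Security policy requires the use of a secure connection."
            print("Blocked by ATS: \(error.localizedDescription)")
        }
    }
    task.resume()

An app that ships with NSAllowsArbitraryLoads set to true has deliberately turned that protection off for every connection it makes, which is why security researchers flag it.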

Deepfakes and Misinformation – AI-generated content can impersonate politicians and public figures, leading to real-world consequences.

With elections approaching, deepfake videos are a growing threat.

  • Senator David Shoebridge recently commissioned a deepfake video of himself to illustrate the dangers.
  • AI-generated fake news articles flood social media, influencing public opinion.
  • The New York Times lawsuit against OpenAI highlighted AI hallucinations attributing fake quotes to the newspaper – see paragraphs 136 onwards.
  • AI has wrongly labelled individuals as criminals, corrupt officials, or involved in scandals.

When AI is used to deceive and to violate rights, it’s not just unethical and immoral, it’s illegal!

Misleading and Deceptive Conduct and Defamation – AI models don’t just make mistakes; they confidently state falsehoods. Sometimes these errors are defamatory. False information can destroy reputations, and AI companies are not being held accountable.

  6. Ethics and Morality?

Even if AI somehow avoided outright illegality, its use in creative fields is still morally dubious.

Artists spend years perfecting their craft; AI scrapes their work in seconds. Writers, musicians and filmmakers rely on intellectual property protections; AI ignores them. Big Tech profits from content it didn’t create, with little to no compensation for original creators.

Further, at what cost do we get AI?  An Information Age article published on 10 February 2025 claims that “Globally, AI-related infrastructure has been estimated to soon consume six times the amount of water as Denmark”, citing this paper.

If that doesn’t scream unethical and immoral, I don’t know what does.

It follows that AI is an industry running at full speed while conveniently ignoring rules and regulations. It is not just on the balance of probabilities that AI has engaged in unlawful activities, it is beyond reasonable doubt!

Though the final poll showed the Affirmative Team won by 6% (44% vs 38%), looking at how much each side moved voters, the Negative Team had the most success: from before to after the debate, the Negative Team persuaded an extra 11% of the audience over to its side, while the Affirmative Team lost 1%.

Our thoughts:  As per our AI Usage Policy, we already use AI extensively as a tool, but we treat generative AI, in particular, as a very naïve paralegal and check everything it produces.  AI tools will continue to improve rapidly.  Whether your average Jill or Joe will be able to produce creative artwork to rival the masters, or even artwork of any artistic merit, is another matter.  The potential is there, but AI is not there yet.  Even if it were, we are concerned that the path to get there is strewn with very serious breaches of the rights of others.

However, the majority at the debate disagreed.

What are your thoughts?!

Let's Chat:

Chat:
(08) 9443 5383

Correspond:
legaladvice@patersons.com.au

Coffee:
4/88 Walters Drive
Osborne Park
Western Australia 6017

Complete:
the form below…

Please provide your details...
