AI in the Creative Arts: Illegally Democratizing Creativity?
Recently I participated in a debate competition at the Australian Computer Society, and the following is a summary of our position.
Debate Topic: “AI in the creative arts can effectively, legally, and morally democratize creativity and expand modes of human expression”
The Case Against AI
Ladies and gentlemen of the jury!
AI is on trial for crimes against creativity!
- Intro
Judge Ona Wang, in New York Times v. OpenAI said:
“This case is about whether the Defendant trained their LLMs using Plaintiff’s copyrighted material … It is not a referendum on the benefits of Gen AI.”
As useful as AI might be, my job as the legal spokesperson for the negative is to prove to you that, on the balance of probabilities, AI tools have been created, and are being used, illegally and immorally as per the following charges, only one of which needs to stick for the negative to succeed!
- Copyright Infringement: AI, You’re Doing It Wrong
Let’s start with Copyright, right?
Copyright arises automatically and resides in the author, or the author’s employer.
Reproducing copyrighted works without permission is illegal all over the world – Copyright Law 101.
Setting up a system which allows others to make unauthorised reproductions is also illegal – Copyright Law 102.
Some recent examples of blatant breaches:
- Unauthorized reproductions: In the New York Times v. OpenAI case, evidence was presented showing that ChatGPT could generate nearly verbatim passages from NYT articles.
- Getty Images v. Stability AI: Stability AI’s image generator was caught reproducing pictures with watermarks intact. It’s like photocopying an artist’s work and forgetting to crop out the signature.
- AI training datasets: Systems like ChatGPT are built on massive text and image datasets copied from the internet, including copyrighted books, articles, images and other creative works. Publishing a creative work on the internet does not grant everyone permission to use it for their own purposes. That may indeed surprise some of you.
- Exceptions and Fair Dealing? Not So Fast
One major exception is that a temporary copy made in RAM during the communication of a work does not count as an infringing reproduction, but for the exception to apply:
- the communication must not be in breach in the first place; and
- it needs to be temporary.
Neither condition is met with AI: the communication is not authorised in the first place, and the likes of ChatGPT store a record of your entire history, so the copies are hardly temporary.
AI companies claim their actions fall under “fair use” exceptions in the U.S., or “fair dealing” exceptions in Australia, which cover:
- Sections 40–42 provide Research & Study, Criticism & Review, Parody & Satire and News Reporting fair dealing exceptions, but wholesale copying does not qualify.
- Providing professional advice is another exception, but it only applies to judges and lawyers. While AI might be able to pass the bar exam in the top 10%, this exception does not apply either.
The key is that the use needs to be fair.
Let’s face it: AI companies are not academic researchers or reporters or lawyers. They’re corporations profiting from storing and substantially reproducing other people’s hard work. There is no “Fair” in that sort of dealing with the copyright works of others.
- Moral Rights: Where’s My Credit?
Even when AI-generated content isn’t an outright copyright violation, it often breaches moral rights under the Copyright Act 1968, which require attribution of the author and protect the integrity of the original work.
- AI tools manipulate artwork in such a way that the original can be denigrated.
- AI tools do not properly credit authors, if at all.
- Even when citations are provided, they often lead to non-existent articles or hallucinated sources.
- Worse, AI-generated content sometimes misattributes information, making it easier to spread misinformation.
Personal example: I asked ChatGPT about myself, and it regurgitated content from my own website without crediting me as the author; all it offered was a bare link to my site. That’s plagiarism, AI-style.
- Contracts? Who Reads Those Anyway? (AI Should)
Big corporations might ignore terms of service agreements, but they can’t pretend they don’t exist.
- The New York Times terms explicitly forbid web scraping. OpenAI still scraped their content.
- Getty Images explicitly prohibits reproductions of its photos. Stability AI’s system still did it.
- Many websites have clauses against automated data extraction, yet AI firms persist in hoovering up data.
Whether it’s a direct copyright breach or a contract violation, these AI companies are taking what they shouldn’t.
- Other Fun Crimes! (Because AI Doesn’t Stop at One)
Privacy Violations & Espionage
AI models are trained on, and collect, vast amounts of personal data—often illegally.
- Who has already downloaded the DeepSeek iOS app?! Among other things, it is reported to disable App Transport Security, monitor keystrokes and transfer all manner of personal information back to servers in China.
- Is OpenAI just as bad?!
- Google DeepMind unlawfully obtained medical records of 1.6 million UK NHS patients.
- Tesla’s AI-driven camera system allegedly captures and stores footage without consent.
- Amazon’s Ring doorbell cameras shared footage with law enforcement without notifying users.
AI developers have ignored privacy laws globally and are being prosecuted for doing so.
Deepfakes & Misinformation
With elections approaching, deepfake videos are a growing threat.
- AI-generated content can impersonate politicians and public figures, leading to real-world consequences. Senator David Shoebridge recently commissioned a deepfake video of himself to illustrate the dangers.
- AI-generated fake news articles are flooding social media, influencing public opinion.
When AI is used to deceive and violate, it’s not just unethical and immoral; it’s illegal.
Misleading & Deceptive Conduct and Defamation
AI models don’t just make mistakes—they confidently state falsehoods. Sometimes, these errors are defamatory.
- The New York Times lawsuit highlighted AI hallucinations that attributed fabricated quotes to the paper.
- AI has wrongly labelled individuals as criminals, corrupt officials, or involved in scandals.
- False information can destroy reputations, and AI companies are not being held accountable.
- Discrimination: AI Has a Bias Problem
For a prime example in the Creative Arts space, there is a total lack of gender and ethnic diversity: five white males and some robots.
LLMs exacerbate stereotypes and marginalise minorities, leading to discriminatory outcomes with real legal implications.
- Ethics & Morality? AI Skipped That Lecture
Even if AI somehow avoided outright illegality, its use in creative fields is still morally dubious.
- Artists spend years perfecting their craft—AI scrapes their work in seconds.
- Writers, musicians, and filmmakers rely on intellectual property protections—AI ignores them.
- Big Tech profits from content they didn’t create, with little to no compensation for original creators.
If that doesn’t scream unethical and immoral, I don’t know what does.
- The Final Verdict? AI in the Creative Arts = A Lawless Wild West
AI is an industry running at full speed while conveniently ignoring rules and regulations.
It is not just on the balance of probabilities that AI has engaged in unlawful activities; it is beyond reasonable doubt!
Therefore, I implore you, ladies and gentlemen of the jury, to find AI Guilty of crimes against creativity on all counts!