
AI’s Unintended Gift - Exposing Corporate Lawlessness


Illustration by Will Allen/Europinion

The rapid expansion of artificial intelligence (AI) has placed it at the centre of debates on ethics and copyright, and rarely without warranted controversy. One of its unintended consequences, however, has been the exposure of a deeper and more troubling reality: the legal system holds everyday people rigorously accountable for infractions while allowing major corporations to bypass the rules when profit is on the line. Although individuals face severe penalties for piracy, tech giants, fully aware of the legal ramifications, have leveraged stolen content on an unprecedented scale to develop AI models, confident in their ability to sidestep serious consequences.


A recent scandal involving Meta, OpenAI, and LibGen illustrates this disparity. Meta and OpenAI reportedly used millions of books from LibGen (an online repository that provides unauthorised access to books and research papers) to train their AI systems. Though illegal, LibGen is widely used to download literature by academics, students, and researchers who lack access to expensive educational materials, and some argue it serves an ethical purpose by making knowledge available to those who could not otherwise afford it. Still, an ordinary person caught downloading from the platform could face legal action, while major tech companies with vast resources assessed the "medium-high legal risk" and chose to proceed anyway.


Internal communications from Meta, disclosed in legal filings, reveal that executives, including Mark Zuckerberg, were aware of these actions yet pressed ahead despite concerns raised by employees. Reports suggest Meta’s teams deliberately removed copyright identifiers and ISBNs from pirated books to obscure their use. Had a small AI startup engaged in similar practices, it would likely have been shut down immediately; for Meta, it was just another strategic move in the AI arms race.


This isn’t an isolated incident. OpenAI and Google have faced accusations of scraping copyrighted material from websites, including news articles, personal blogs, and creative works, without permission. Lawsuits from artists and writers claim OpenAI has profited from their work without compensation, while Google has been criticised for covertly collecting YouTube transcripts and online texts to refine its AI models.  


The music industry provides another example of AI’s role in circumventing legal boundaries. Platforms such as Suno and Udio have been accused of mimicking musicians’ styles without consent, effectively replacing human artists with algorithmically generated imitations. The pattern is consistent: companies knowingly infringe upon intellectual property rights, reap substantial rewards, and, when caught, treat legal battles as a mere cost of doing business.


The real issue is less AI itself than the unchecked power of corporate giants. When a teenager illegally downloads a film, they risk massive fines and legal action. But when a trillion-dollar company builds an AI model on stolen intellectual property, the worst it might face is a financial penalty that barely dents its bottom line. Legal frameworks claim to uphold fairness, yet they consistently favour those with the power to manipulate them. Corporate legal teams meticulously weigh the risks of violating the law, often concluding that the financial benefits of breaking the rules far outweigh any potential punishment. The imbalance makes one thing clear: laws deter only when genuinely enforced, and once companies grow large and influential enough, accountability evaporates.


If corporations can train AI models on stolen material without repercussions, what’s stopping them from exploiting personal data, manipulating financial markets, or using AI-generated misinformation to sway public opinion? The belief that companies will regulate themselves is naïve at best. Outrage from writers, artists, journalists, and musicians is growing, but whether it leads to substantial change remains uncertain. Lawsuits against Meta, OpenAI, and others are an important step, but they are reactive measures rather than proactive solutions.


Stronger regulatory frameworks - such as the EU’s AI Act, the first legal framework of its kind, though still too new to have made a difference - are necessary to prevent AI firms from freely exploiting copyrighted works. Transparency must be a legal requirement: companies should be obligated to disclose their training data sources, and copyright infringement fines should scale with a company’s revenue, ensuring penalties are significant enough to deter misconduct. In cases like LibGen’s, each book should be treated as a separate infringement, meaning fines would accumulate rapidly. Executives who knowingly greenlight unlawful practices should face direct legal accountability. This scandal makes the creation of independent AI regulatory bodies, responsible for auditing corporate practices and preventing the unchecked extraction of intellectual property, all the more crucial. Left unregulated, AI will entrench a system in which corporations dictate the rules while individuals remain bound by them.


Until real consequences—both financial and criminal—are imposed, tech giants will continue operating with impunity, fully aware that the legal system serves as little more than a minor inconvenience. AI has revealed just how deep corporate lawlessness runs. The question is no longer whether corporations will break the law for profit; it’s whether anyone will stop them.





