
How AI can go wrong

Artificial intelligence is no longer the stuff of sci-fi. It’s here – embedded in everyday apps, services, and systems.

But while it promises efficiency and innovation, AI can also go dangerously off track. And when it does, it’s often consumers who pay the price.

This guide explores the many ways AI can go wrong, equipping you with the knowledge to recognise the dangers and understand what needs to change.


Introduction

The rise of AI has brought a wave of innovation, but it’s not all smooth sailing. While artificial intelligence offers smarter services and slicker tech, it also brings risks that are often hidden beneath the surface. From how your data is collected to how life-changing decisions are made on your behalf, AI is raising urgent questions about fairness, consent, and accountability.

Consumers are already feeling the impact – whether it’s your work being used to train machines, your credit score being affected by opaque algorithms, or your face being scanned without you knowing.

In this guide, we’ll take a closer look at how these risks play out, and why it’s time for stronger protections.

Data misuse: the invisible cost of convenience

We all rely on apps, platforms, and smart tools to get through the day. But while these services make life easier, they often come at a hidden price: your personal data.

AI systems run on data. The more they have, the better they perform. But not all data is collected ethically or used responsibly.

Your data, their training set

Many AI tools are trained on vast amounts of real-world data scraped from websites, social media, and public platforms. That might sound harmless, but this data often includes personal information: names, photos, opinions, purchase histories, and more.

Some companies argue this information is “public” and fair game. But being online doesn’t mean you’ve consented to have your data fed into a machine learning model, especially one that might be commercialised or reused without your knowledge.

What can go wrong?

When AI systems handle vast amounts of data, things can go wrong in ways that stay invisible – until they’re not. The case below shows one of the biggest risks in action.

The case against Microsoft and Google

In 2024, concerns emerged around how Microsoft and Google may have used personal data from everyday tools like Gmail, Word, Chrome, Outlook, and Teams to train their AI systems.

UK law firms are now investigating claims that this data was used without proper consent – a potential breach of data protection laws. If confirmed, those affected could be entitled to compensation for the misuse of their information. The investigation includes products used by millions of people, such as Xbox Live, Google Maps, YouTube and more. If you’ve used these services, your data may be involved.

Copyright abuse: creators left in the cold

One of AI’s flashiest tricks is generating content: artwork, articles, music, even deepfake videos. But AI doesn’t create from scratch – it mimics what it has seen.

To learn how to write like a journalist or paint like an illustrator, AI models are trained on millions of real pieces of content, often without asking permission or compensating the original creators. This raises serious copyright questions. If an AI tool can spit out a new image in your style, is that theft? If it uses your blog posts to learn how to write persuasive copy, should you be paid?

What can go wrong?

The case against Stability AI

Legal challenges are mounting against AI developers over the use of copyrighted work in training models. One of the most high-profile cases involves Getty Images, which is suing Stability AI for allegedly using its copyrighted images without permission. Getty claims that more than 50,000 photographers and content contributors may have had their work copied or mimicked by the AI system, both in its training phase and in the outputs it generates.

While a recent High Court ruling in England and Wales blocked a representative claim from proceeding due to technicalities around class definition, legal experts suggest that upcoming UK copyright reforms – including proposals to force AI developers to disclose their training data – could make mass claims more viable in future.

Algorithmic bias: when AI reinforces discrimination

AI is only as fair as the data it learns from. If the training data contains bias – and it often does – then the AI will learn to replicate and reinforce it.

This is particularly dangerous in high-stakes decisions: hiring, credit scoring, insurance pricing, and criminal justice. If historic data reflects discrimination, an AI tool will carry that forward.
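
To make this concrete, here is a minimal, purely illustrative Python sketch – not taken from any real hiring system – showing how a simple model trained on historically biased decisions learns to repeat them. The two groups, the “skill” scores and every number in it are invented for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic data: two groups with identical skill distributions.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historic hiring decisions: skill mattered, but group 1 was systematically
# marked down by past human decision-makers.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# A model trained on those decisions inherits the bias.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates now get different hire probabilities.
print("Hire probability, group 0:", model.predict_proba([[0.0, 0.0]])[0, 1])
print("Hire probability, group 1:", model.predict_proba([[0.0, 1.0]])[0, 1])

Note that simply deleting the group column would not necessarily fix this, because other features can act as proxies for it – one reason auditing and transparency matter so much.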

What can go wrong?

Complaint against HireVue and Intuit over AI hiring discrimination

In 2025, the American Civil Liberties Union (ACLU) filed a complaint against Intuit and its hiring tech provider, HireVue. The complaint followed a case where a Deaf Indigenous woman was denied a promotion after being evaluated by an AI-powered system that relied on automated speech recognition and non-verbal analysis to assess candidates. The system reportedly failed to accommodate her disability, misjudging her communication style and costing her the opportunity. The ACLU claims this represents a breach of both disability rights and anti-discrimination law. 

57% of Black people and 52% of Asian people expressed concern about facial recognition in policing, compared to 39% in the general population.

Lack of accountability: who takes the blame when AI goes wrong?

When AI gets it wrong, who is responsible? That’s one of the thorniest legal questions of our time.

Part of the problem lies in how AI systems are designed and deployed. Many decisions are made with little human oversight, especially when algorithms are outsourced or embedded in larger systems. That creates confusion over who owns the outcome – the tech company, the business using the AI, or no one at all.

Unlike a human employee, an AI tool can’t apologise or be disciplined. And tech companies often hide behind layers of complexity, claiming that outcomes are generated by “the model” and can’t be easily explained. This makes it incredibly hard to challenge unfair decisions or seek redress.

Without transparency, people can be left with no explanation and no clear route to challenge what happened. As a result, consumers are left chasing shadows, trying to hold someone accountable for decisions no one seems to own.


Deepfakes and misinformation: truth under threat

AI tools can now create incredibly realistic fake videos, images, and audio clips. These “deepfakes” are more than a novelty – they pose real risks.

From political propaganda to revenge porn, deepfakes have been used to deceive, harass, and manipulate. AI-generated misinformation can spread faster than fact-checkers can keep up, especially on social media.

What can go wrong?

Martin Lewis deepfake investment scam

In 2023, a frightening deepfake featuring consumer finance expert Martin Lewis appeared on Facebook and Instagram, promoting a fake investment scheme tied to Elon Musk.

The AI-generated video was highly convincing and went viral, causing public alarm. The clip prompted Martin Lewis to warn: “This is frightening … they are going to get better.”


AI in sensitive settings: where consent and clarity vanish

AI isn’t just shaping the content you see online – it’s now entering more sensitive spaces, like your workplace, your school, and even your GP’s surgery. And while these tools are often introduced to increase efficiency, they can create serious risks when they’re deployed without transparency or regulation.

In these contexts, people may not know that their words, decisions, or personal information are being shared with AI. Even more worrying, they may not have been given the option to consent in the first place.

What can go wrong?

Doctors found using unapproved AI

In June 2025, a Sky News investigation revealed that some NHS clinicians were using unapproved generative AI tools to transcribe and summarise patient appointments.

The software had not yet received NHS-wide approval, and there were concerns that patients were not clearly informed their consultations were being processed by AI. As a result, NHS bosses wrote to GPs and hospitals demanding that they stop using AI tools that potentially breach data protection rules and put patients at risk.

The Sky News report also flagged a critical issue: AI “hallucinations” – where the technology invents false information and presents it as fact. In a healthcare setting, that kind of error isn’t just misleading, it could be life-threatening. 

AI hallucinations: when machines make things up

One of the most unsettling flaws in generative AI is its tendency to “hallucinate” – a term used when AI systems confidently produce false or misleading information. These errors are baked into how the tools work: generative models produce the most statistically plausible output based on patterns in their training data, rather than checking claims against a reliable source of truth.

Hallucinations often sound convincing, which makes them hard to detect. They can range from subtle factual errors to complete fabrications. In high-stakes situations – like healthcare, finance, or law – these made-up statements could have serious real-world consequences.

What can go wrong?

UK lawyers warned about using AI

In 2025, the UK High Court publicly reprimanded legal professionals who submitted documents containing fake case-law citations generated by AI.

In a major lawsuit (estimated to be worth £89 million), the claimant’s legal team presented 45 cited cases – 18 of which were entirely made up by generative AI. Dame Victoria Sharp warned that such mistakes threaten public trust, could lead to contempt proceedings, and might even bring criminal charges for perverting the course of justice.

88% of UK adults believe the government should have the power to stop harmful AI products.

Legal grey areas: the rules haven’t caught up

AI is evolving faster than the laws that govern it. While existing protections like GDPR provide a foundation, they were not designed with modern AI systems in mind. As a result, many of the risks AI poses – from misuse of personal data to discriminatory algorithms – fall into legal grey zones.

Playing catch-up

Without clear, AI-specific regulations, tech companies often make up their own rules. They decide how data is collected, how algorithms are used, and what consumers are told, or not told, about how these systems operate. This lack of legal clarity leaves consumers in the dark about their rights and options for redress when something goes wrong.

Even when regulators do intervene, they’re often hampered by slow processes, limited resources, and outdated legal frameworks that struggle to keep pace with fast-moving technology. Meanwhile, companies continue to develop and deploy new AI systems with little oversight.

AI regulation is coming – but not quickly enough

In the UK, the Information Commissioner’s Office (ICO) has raised concerns about AI use, but enforcement actions remain rare due to gaps in existing frameworks.

Proposals to introduce dedicated AI regulation have been delayed. In June 2025, the government announced plans for a more ‘comprehensive’ AI bill, which will include safety and copyright protections. But the legislation won’t be introduced until the next parliamentary session at the earliest – a delay that’s drawn criticism from campaigners and creative industries worried about unchecked AI development.

Originally, ministers had proposed a faster, narrowly focused bill to address the immediate risks posed by large language models like ChatGPT. That draft would have required AI companies to submit models for safety testing. But in a bid to align with the US and avoid deterring investment, the government opted to delay regulation.

A brewing copyright row

A separate data bill could allow AI developers to train on copyrighted content unless the rights holder explicitly opts out. Artists and rights groups – including figures like Elton John and Kate Bush – have strongly opposed the change, calling it a betrayal of the UK’s creative sector.

Despite amendments passed in the House of Lords, ministers have so far resisted calls to strengthen copyright protections. Campaigners fear that delays could leave UK consumers and creators exposed for years to come.
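
For context, an informal version of the “opt-out” model already exists at the website level: publishers can ask known AI training crawlers – such as OpenAI’s GPTBot, Google’s Google-Extended token and Common Crawl’s CCBot, all of which are real, published user-agent names – to stay away via their robots.txt file. The short Python sketch below simply checks whether a given site has done so. The function name is hypothetical, the URL is a placeholder, and whether signals like these would satisfy any future UK law remains untested.

from urllib.robotparser import RobotFileParser

# Published user-agent tokens used by OpenAI, Google and Common Crawl.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]

def ai_crawlers_blocked(site: str) -> dict:
    """Return, for each AI crawler, whether the site's robots.txt disallows it."""
    parser = RobotFileParser()
    parser.set_url(site.rstrip("/") + "/robots.txt")
    parser.read()
    # can_fetch() returns False when the named crawler is disallowed.
    return {bot: not parser.can_fetch(bot, site) for bot in AI_CRAWLERS}

print(ai_crawlers_blocked("https://example.com"))  # placeholder URL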

What needs to happen next

AI doesn’t have to be dangerous. But right now, the balance of power is tilted too far in favour of big tech. Here’s what needs to change:

Stronger regulation

Updated laws to cover AI-specific harms and ensure clear consumer protections.

Greater transparency

Making AI systems explainable and accountable.

Informed consent

Requiring opt-in for data used to train AI.

Fair compensation

Paying creators whose work trains or is mimicked by AI.

Independent oversight

Establishing watchdogs with teeth to hold AI systems and companies to account.

FAQs about AI and your rights

As AI becomes more common, many people are unsure what their rights are when things go wrong. Here are answers to some of the most common questions.

Can I claim compensation if my data was used to train AI?

Possibly. If your personal data was used to train an AI system without your knowledge or permission, and it breaches UK GDPR or data protection laws, you may be entitled to compensation.

Can I challenge a decision made about me by AI?

You have the right to challenge automated decisions that significantly affect you – for example, being denied credit, a job, or access to services. Under UK GDPR (Article 22), you can request a human review of the decision. However, this right only applies to fully automated decisions that produce legal or similarly significant effects – not all algorithmic decisions qualify.

Can AI companies legally use content I’ve posted online?

This is a grey area. Some companies claim content made public online is fair game, but if it contains copyrighted material or personal data, UK law may offer protection. Legal challenges are already under way.

What are AI hallucinations, and why do they matter?

AI hallucinations happen when systems generate false or misleading content that sounds convincing. These can be dangerous in sensitive settings like law or healthcare, especially if they’re trusted without verification.

Is AI properly regulated in the UK?

Partially. While UK GDPR and consumer protection laws apply, there are still major gaps when it comes to regulating AI specifically. A dedicated AI bill is in development but has yet to be introduced.

How do I know if I’ve been affected by AI misuse?

It’s not always obvious. You might spot unfair treatment, see your content copied by an AI tool, or discover that your personal data was scraped for training. If you’re unsure, check our latest claims and investigations.

How Join the Claim can help

AI can be a powerful force for good. But it needs guardrails. As consumers, we have the right to demand better, safer, and more responsible systems – before the harms become irreversible.

At Join the Claim, we work with trusted law firms to help consumers understand their rights and take action when needed.

If you believe your data or rights have been misused by AI, we can help you check if there is a live case, confirm your eligibility and connect you with legal experts.

Written by:


Bee Stuttard

Bee is the content lead at Join the Claim, where she helps people understand their rights and take action when they’ve been wronged. With a background in PR, copywriting, and content strategy, she’s spent over a decade writing about legal matters – turning complex topics into clear, accessible resources that inform and empower.

From writing about data breaches to explaining how group claims work, Bee’s goal is always the same: to give people the confidence they need to take the next step. She’s committed to making legal information feel human, relevant, and easy to trust.

Disclaimer 

This guide is for general information only and should not be taken as legal advice. While we aim to provide accurate and up-to-date content, the legal and regulatory landscape surrounding AI is complex and rapidly evolving. If you believe your data or rights have been misused by AI, we recommend seeking guidance from a qualified legal professional.

Last Updated: 2 July 2025
