AI Hallucinations and the Illusion of Legal Truth: A Critical Journey from ChatGPT to BBC Radio 4
BBC's The Artificial Human under the spotlight
In 2024, OpenAI admitted to me in writing that ChatGPT generates hyper-realistic but entirely fictitious legal text—including fabricated provisions of the Companies Act 2006. Over a year later, the problem persists, undermining legal compliance and public trust.
This systemic failure caught the attention of BBC Radio 4’s The Artificial Human, which referenced my correspondence in its inaugural summer 2025 episode, What Do I Do if AI Gets Me Wrong? (4 June 2025). While the programme explored AI’s propensity for dangerous "hallucinations," it omitted critical nuances about legal misinformation and actionable remedies—a gap this article addresses.
Below, you will find:
A full transcript of the BBC episode, revealing its focus on personal reputational harm over systemic legal risks.
A critical analysis (prepared with Perplexity AI) exposing omissions, technical inaccuracies, and misdirection in the programme’s framing.
My open letter to the BBC team, challenging their narrative and demanding scrutiny of AI’s broader societal threats.
This is not merely a story about flawed algorithms. It is a call to confront how AI’s "fluent fictions" erode the rule of law—and why institutions like the BBC must do better.
Listen to the BBC episode here: The Artificial Human - What Do I Do if AI Gets Me Wrong? - BBC Sounds
Read my original 2024 article here: https://alisonwright.substack.com/p/openai-admits-chatgpt-generates-false
Full transcript of the BBC Radio 4 podcast episode: "What Do I Do if AI Gets Me Wrong?" from The Artificial Human, broadcast June 4, 2025.
“[Kevin Fong] Hello, this is the Artificial Human.
I'm Kevin Fong.[Aleks Krotoski] And I'm Aleks Krotoski.
[Kevin Fong] And this is the show where we try and answer all those pressing questions you have about AI, the revolutionary technology that's already transforming everything about the world around us.
[Aleks Krotoski] All right, Kevin, this week in our inbox is an email from a woman named Alison, and Alison writes in asking what can be done when you find information that is incorrect on an LLM, a large language model.
What do we do?
Who do we call?
How do we change the thing that's wrong on the internet, Kevin?[Kevin Fong] Well, this is hardly breaking news, Aleks.
Like the internet gets stuff wrong sometimes.[Aleks Krotoski] But this is something that's different because here's the thing, right?
So while Alison is talking about a fact about a legal situation, what if it got something wrong about you?
And I'm not just talking like, oh, oops, Wikipedia got where you went to school wrong when you were a kid, right?
No, I'm talking about a situation in Norway where a man asked ChatGPT about himself, and it turns out that ChatGPT has decided that he killed his two children and tried to kill his third.
This is not true.
Who does he call?[Kevin Fong] I think we're going to get ourselves a lawyer.
And in this case, we have Cleanthi Sardelli, who is representing our poor Norwegian who's been accused of this heinous crime, and she's going to tell us a little bit about how difficult it is to sort this seemingly simple error out.[Cleanthi Sardelli] I'm a data protection lawyer at NOYB, which stands for None of Your Business.
We are an NGO based in Austria.[Aleks Krotoski] I didn't need to laugh quite so loudly there, but that's great.
[Cleanthi Sardelli] Prior to that, I was a data protection lawyer in a well-renowned law firm.
So I've had experience both from the company side and consumer side as well.[Aleks Krotoski] Can you just fill us in on the facts of this Norwegian case?
[Cleanthi Sardelli] So Arve Hjalmar Holmen approached us.
He stated at first that ChatGPT created a false output where it claimed that he murdered two of his children and attempted to murder one of them.
He asked us, what can I do to protect myself against OpenAI with this output?
So first of all, we told him that he has the right to rectify the information according to the GDPR, the General Data Protection Regulation.
However, this didn't work with OpenAI.
OpenAI never actually responded to this request properly.
So what we had, unfortunately, to advise him was to file a complaint against the company.[Kevin Fong] And is this a common thing?
Because it's a very specific and a huge error, right?
And there is nothing that you could confuse it with.[Cleanthi Sardelli] Unfortunately, yes, Kevin.
Let's get more into the details of the case.
ChatGPT actually claimed that Arve killed two of his children at his hometown and even got the ages of his children right.
So there is something really specific going on here with this false accusation.
We have had also other cases.
The first one of those was in Australia where a mayor was allegedly called in for a bribery scandal that he was not part of.
LLMs, large language models, hallucinate, and that is a really huge problem.
Arve told us what scares him the most is that there is no smoke without fire and that a neighbor, one of his old friends, or someone back home at his hometown could actually read this output and believe it.[Aleks Krotoski] You mentioned that OpenAI didn't do anything about this.
Is that where it stands now, the case?[Cleanthi Sardelli] They have only reached out to us after we filed the complaint.
However, we refrained from interacting with them because we think that the case should now be decided by the data protection authority in Norway.
We did not want the output to be deleted because that would damage the evidence that we have on the violation of Arve's rights.[Aleks Krotoski] Is your client better protected living in the EU than say, for example, living in the UK or in the US because GDPR is part of the principles of our data protection lives?
[Cleanthi Sardelli] Data protection in the European Union is a fundamental right, whereas in the UK, there is the ICO, the Information Commissioner's Office, and there is the UK General Data Protection Regulation that was also continued on the European tradition after the UK left the EU.
However, we have seen so far that in the UK, the approach to artificial intelligence and data protection has been more innovation-driven than fundamental rights-driven.[Aleks Krotoski] You mentioned the ICO in the UK, and what I found really interesting is that their AI regulation explains that the data needs to be accurate when it goes into the model, but the information that comes out does not need to be 100% accurate....
[Cleanthi Sardelli] I think that this is really problematic.
The principle of accuracy is both in the UK General Data Protection Regulation and the European General Data Protection Regulation, and it applies in every step of the processing activity.
So when we process personal data, we have to make sure that they are accurate from start to end.
Even though large language models hallucinate, the law existed before them.
So it is the technology that has to follow the legal requirements and not the other way around.[Kevin Fong] And in the bad old days, the good old days, this would be a defamation, but why can't he go after them for that?
[Cleanthi Sardelli] It depends on the laws of each country, but in this case, it is harder to go against OpenAI in a defamation case because we don't know yet who the recipients of the information are.
So it is easier to go against OpenAI in a data protection case rather than a defamation case, and filing a legal complaint before a data protection authority is completely free.
So the person, the individual who is affected, Arve, can go against this company without any expenses.[Kevin Fong] Surely any responsible company facing a consumer complaining in this way about this awful thing, they go, sure, sure, sorry, we'll correct.
Has that happened?
And if not, why not?[Cleanthi Sardelli] So right now, it's very difficult for companies to actually technically accomplish what the law requires them to do.
Right now, large language models can produce a lot of hallucinations, but the companies are working, as they say, on a solution to correct information that is almost automatically produced in a large language model.
You would have to retrain your model from the start, which could mean that this could cost millions of dollars, euros, whatever your currency is.[Kevin Fong] So if I go and type in who is Arve right now, what will I get back?
[Cleanthi Sardelli] What you will get back is that he is now known for a case against OpenAI because right now OpenAI has implemented a different model which actually allows to search within the model itself.
So the large language model searches the web and produces output according to that.
However, that does not guarantee that the large language model will not actually produce new hallucinations.
So it's really unpredictable when or how this information might end up going public again.
So this is a real problem, Kevin.[Kevin Fong] Yeah.
You can feel this coming like a train, the mother of all class actions, right?
Because this is, this can't be isolated.
This must be systematic.[Cleanthi Sardelli] Of course, and the General Data Protection Regulation tells you actually that when you deploy a model, you need to respect the principle of privacy by design and by default.
So you should have thought things through and designed the system for correcting inaccurate information right from the get-go.[Aleks Krotoski] The problem though, isn't it, that this model came out of a research organization.
I mean, this is what their argument will be, I'm assuming.
We're trying to figure it out.
We didn't realize this kind of thing would happen.[Cleanthi Sardelli] Of course, that would be their argument, and their current approach is that they have a small disclaimer when you create an input in ChatGPT that this model may create false information.
Do not trust it.
However, simply adding a disclaimer does not make the law go away.[Kevin Fong] It's really interesting, Cleanthi, there that you say talk about making the law go away.
If I'm a big multi-trillion dollar company, perhaps I think the easiest way out of this is to try and get people to change that law.[Cleanthi Sardelli] That's a really good question.
However, the General Data Protection Regulation was created with some of artificial intelligence technologies in mind, as far as they were back in 2016.
So it will apply no matter what technology gets invented.
Fundamental rights are here and they are able to be enforced with this law.[Aleks Krotoski] In a moment, we're going to be speaking with a representative of a large language model company.
Historically, they've stood behind disclaimers, as you've said.
They've also stood behind other technological solutions for situations like this, like blocking a keyword, so you can't actually receive any results about that keyword.
Do you think that is an adequate solution?[Cleanthi Sardelli] I think that is not a good implementation of the right to rectification.... What we could really work on is actually getting the large language models to at least be able to explain what they're doing, give access to the information that they have been trained on, and of course, be able to rectify false information that they produce.
[Aleks Krotoski] As we are speaking with somebody who works for a Gen AI company, you've given us some things to ask for, right?
But what would you ask her?[Cleanthi Sardelli] I would ask her, is it possible to actually correct false information right now?
Yes or no?[Kevin Fong] I guess I'm going to ask you for one last bit of advice, which is for people listening who may or may not in the future discover some falsehood written about them by a large language model that they find unacceptable, who are you going to call?
[Cleanthi Sardelli] They will have to call, first of all, OpenAI, so they will have to submit a request within the ChatGPT settings and tools.
[Kevin Fong] Or whatever creates it, whatever large language model.
[Cleanthi Sardelli] Or they will have to contact an NGO, a non-governmental organization, or a lawyer that will help them exercise their rights.
[Kevin Fong] So the answer is, they're going to call someone like you.
[Cleanthi Sardelli] Exactly.
[Aleks Krotoski] We contacted OpenAI for comment, but we haven't heard back.
Replying to this issue in the past, they said, we continue to research new ways to improve the accuracy of our models and reduce hallucinations.
While we're still reviewing this complaint, it relates to a version of ChatGPT, which has since been enhanced with online search capabilities that improves accuracy.[Aleks Krotoski] I am so glad that we spoke with Cleanthi there, Kevin, because I have no idea what to do.
[Kevin Fong] And you can see how people would feel powerless in that situation and what do you do?
I'm one person with this incorrect information out there about me.
But you suddenly begin to realize why we have these rules and these regulations and these laws.[Aleks Krotoski] Yeah, and it's hopeful that such broad principles of like accuracy of data and the ability to rectify data that's incorrect about you, you know, are echoed in the UK as well.
But as Cleanthi said, she wants to know, is it possible to rectify?
Yes or no?[Kevin Fong] And so we absolutely need to talk to someone who knows what's under the hood and how or if this can be fixed.
[Aleks Krotoski] I want to speak with a woman named Margaret Mitchell.
She works for a company called Hugging Face.
They're kind of like a, they're a place where you can go and you can distribute your open source model, right?
So they can see how the models have been trained.
So hopefully Margaret, who is the chief of ethics, can tell us what types of things are inside LLMs and how easy it is to rectify it after the model has been trained on that information.[Margaret Mitchell] I'm Margaret Mitchell.
I'm the chief ethics scientist at AI startup Hugging Face.
I'm also a researcher.[Aleks Krotoski] You were at Google for a while.
Were you at Microsoft Research as well?[Margaret Mitchell] I was at Microsoft Research for three years and Google for almost five years.
[Aleks Krotoski] Let's pull back the hood a little bit about how an AI, how an LLM would work to produce information that is grievously wrong about an individual.
[Margaret Mitchell] So previously in AI research, we did what was called grounded generation, where there was a knowledge base, there was an image, there was something that was sort of real and existed in the world, and you generated from there.
Now, it's just based on what was said previously.
And what that does is it means that what gets generated is what is likely based on the training data as opposed to what actually exists in the world.
Um, and that creates this problem called hallucinations that follows the sort of form that we're used to in language, but that isn't actually grounded to anything real.
It's just based on statistical patterns that it's learned from the training data.[Aleks Krotoski] And of course, it doesn't have any internal world model.
It doesn't have a model of truthiness as we would interpret truthiness.[Margaret Mitchell] Yeah, not really.
It's this idea that you just give more and more data to it and that it will eventually just learn sort of what's true.
But I mean, that hasn't been shown.
And furthermore, even if you're training on truthful, factual information, you're still going to generate based on what's statistically likely.
So it's going to sound factual, but not actually be grounded in facts.
I mean, within language generation, the focus has been on fluency, but what's fluent is not necessarily correct.
We can get tons of data from the internet and create things that sound very, very fluent.... Um, the problem is that then we lost the grounding on things that are real.[Kevin Fong] So Margaret, I want to re-zero us on the problem at hand, which is this terrible case of this poor man who has had this article of information generated about him, which says that he murdered his children and attempted to murder one of the others.
And then he approaches the company who make that large language model, in this case, OpenAI, and says, look, hey guys, what gives?
But they don't correct it.
Now, partly, it would seem that it's not that easy to correct.
Tell me why, because, you know, in the good old days of Wikipedia pages, if something was wrong about you, someone just edited it out.[Margaret Mitchell] Yeah, these are, these are, you know, multi-billion parameter models, and how that sort of mapping between what's in the model and what it might generate is not really well studied.
People call it a black box.
I feel like that's giving it a little bit too much of a pass because it's really like we could work on this, but we just haven't done it as much within the AI world.[Kevin Fong] So is it impossible to trace back why this large language model has said this about this Norwegian guy?
[Margaret Mitchell] Personally, I believe that it is possible or that at least we should try to understand whether or not it's possible at a very deep level.
So one of the things I've been advocating for in a lot of my work is a rigorous science looking at the relationship between inputs and outputs.
The problem is that that kind of research is sort of sidelined in favor of adding new bells and whistles to the sort of products that people are already using.[Aleks Krotoski] So what then does somebody do if they are in a situation like the Norwegian individual?
[Margaret Mitchell] Yeah.
Well, uh, you know, get really mad is definitely a reasonable reaction.
I'm not a lawyer, but at a high level, I know that defamation is itself something with established case law.
And I think those are the things that are sort of being meted out in court right now to figure out, does the thing that happened correspond to defamation?
And if so, can defamation law that's already on the books apply here?[Aleks Krotoski] But what you're suggesting is that because the research that you're hoping to be done about the outputs is being sidelined, going through the legal situation is kind of the only thing that we can do?
[Margaret Mitchell] So there are a lot of things that I've been looking at with respect to images because another type of really problematic generation is around non-consensual intimate imagery.
There are tools being developed where you can essentially like poison images, so any information about it can't be soaked up.
And so you can imagine something like that for text, like any text that references you needs to have some poison in it, and you report that in some sort of do not train registry, which is something that the company Spawning is working on.
There are these kinds of solutions at the level of training data and what you do with the training data.[Kevin Fong] So Cleanthi represents more than one client on her books who've had falsehoods told to the world by a large language model.
And if you have recourse in law to take these AI companies on in this situation, this is massively problematic, right?[Margaret Mitchell] Yeah.
Yeah, and that's a big part of the discussion.
Do the harms that are being reported outweigh the benefits?
And that's kind of, you know, the space that a lot of people working with ethics of AI are grappling with.
How do you thread the needle so that the least number of people are really, really upset?
And so that's where these kinds of approaches like poisoning come through.[Aleks Krotoski] But Margaret, we're talking about going forward, right?
You're talking about poisoning your data now as opposed to all of the data that's out there about me already.
There's 20 plus years of data.
I cannot opt out now.
What recourse, what action, what can be done?[Margaret Mitchell] There's not a lot.
For a model that is already trained on your data and starting to output stuff about you, so there's this idea of unlearning.
Can you unlearn information about someone?
Also an area of active research, which is another way of saying it's not a, there's not a great solution yet.[Kevin Fong] So if the recourse is, well, we can't fix this, but we're going to give you the right because we have to under at least in Europe and the UK our general data protection regulations, we'll give you the right to poison so that you can withdraw.
If you see a mass withdrawal of people from these models, that also is existential to these large language models as well, right?... So am I being too alarmist here or is this potentially a massive problem looming on the horizon?[Margaret Mitchell] I don't know that it will go in that way.
I think that there's probably going to be more work on handling outputs from systems.
So actually, I should say that like a very simple approach to dealing with problematic outputs is to have a filter.
So minimally you can say, if a person is mentioned in this generated output, then we need to check this registry that they've consented to have information about them produced.
That is something that is like a very low-hanging fruit.[Kevin Fong] I just want to come back to this idea of models of truth, right?
So the sentence, 'he kept a life-sized model of the moon in his kitchen cupboard', doesn't exist because we know instantly that the moon is too big to put in a kitchen cupboard.[Aleks Krotoski] It could be a very big kitchen cupboard.
I you know, the world is our oyster.[Kevin Fong] But I guess what I'm saying is that that to a large language model, if that sentence appears enough times, it's good enough for truth, right?
[Margaret Mitchell] You're saying that if a language model sees something enough, then it becomes like truth.
[Kevin Fong] Yeah.
Yeah, yeah.
And so it might generate that sentence because it's all about the statistics, all about how statistically likely that thing is to be uttered elsewhere, right?[Margaret Mitchell] Yeah.
And that's part of the reason why it's um a little bit problematic to have fiction books mixed in with encyclopedic knowledge because there isn't this ability to distinguish between fact and fiction within those different realms.[Kevin Fong] Are you telling me that large language models don't know the difference between Newton's Principia and Jules Verne's 20,000 leagues under the sea in terms of the grounding of that in truth and fantasy?
[Margaret Mitchell] Yep, that's exactly right.
Yeah.[Kevin Fong] This feels like a big problem.
[Margaret Mitchell] Yeah.
Yeah, welcome to my world.[Aleks Krotoski] It is.
[Margaret Mitchell] But I I should mention that if we move beyond language models and talk about systems, then there are additional things put in place where there is some grounding introduced to make sure that the generation aligns to something that's within a document.
So within the context of larger systems, there are solutions that help with the truthiness.
And that is what we're seeing in like ChatGPT, Google overviews, that kind of thing.[Aleks Krotoski] But the stated aim by many of the people at the top of these large language model developer communities is they are hoping that the technology is ultimately going to learn from itself.
[Margaret Mitchell] Yeah, that's weird, isn't it?
[Aleks Krotoski] Yeah, it's crazy and it's crazy in so many different ways.
But if the aims of the people at the tops of the heap are to create such a product, they're not really going to be slowing down for your working groups or even for the guy in Norway who's been accused of killing his kids, erroneously.[Margaret Mitchell] Absolutely.
I mean, you can't put that forward as a solution because that's not going to happen.
But I do believe that part of the solution is there should be more research on unlearning, there should be more research on the relationship between inputs and outputs, um all the all those sort of things.[Kevin Fong] And Margaret, it's really interesting to hear you say the solution can't be to slow down because that's just what the tech industry does, because actually in history when we've seen other similarly potent technologies where there's a potential for great harm and great good simultaneously, we have slowed it down by introducing law and regulation and so the protection here is not just within the innovations that you create within your industry, it is also within the law.
It is also within what Cleanthi is doing.[Margaret Mitchell] Yeah, I completely agree.
But, you know, within the context of the tech industry, it's really important to come up with these solutions alongside the development that's already happening.[Kevin Fong] Wow.
Uh, that was so revealing and so unexpected, I think for me.
We've spent all this time talking about generative AI and large language models.
But fundamentally, how different the idea of truth is within those models, I don't think I'd fully appreciated really until talking to Margaret there.[Aleks Krotoski] I mean, of course it's mathematical because it's going through a machine.
There's nothing about the machine that actually truly understands the nature of humanity.
I really need to get this across.
This notion that we have a conversant computer, it's unbelievable, but it's not like Hal.
Hal from 2001 kind of understood people and understood what motivated us and understood all that. That's not what we have yet.[Kevin Fong] But the danger is that people who should understand that, don't understand that and because it's so convincing that the fact that it's pumping out stuff that might have no basis in truth is problematic, right?
We talk about AI hallucinations and their confabulations all the time and that's all good and funny until it accuses you of murdering your own children.
And here's the thing, right?
There's nothing special about our Norwegian victim in this, but there's nothing that would stop this being anyone who asks an AI to generate some information about them.[Aleks Krotoski] The thing that I really pulled out of these conversations was that the technology companies cannot do anything, that they will not do anything, and that we as the consumers have to take them to a breaking point before they will even consider doing anything.
[Kevin Fong] Yeah, absolutely.
And again, I I I wonder whether or not this is a bigger fire than people truly appreciate right now.
As I said, either the AI companies get sued at scale because this keeps happening and happening more and more and more, or it happens often enough that people just withdraw their data from the training sets, which also hurts them, or the AI companies find themselves big enough just to ignore this, ride roughshod over this and the truth in general.
And look, large language models are only one iteration of AI, but they're very prominent in society and this feels like a fire.[Aleks Krotoski] There is another option, and that other option is that our notion of truth becomes much more fluid and flexible.
[Kevin Fong] The truth ain't what it used to be.
And the saving grace in all of this is the thing that I suspect our Silicon Valley buddies are going to like the least.
And so this happens to you, this happens to me, if this happens to any of us tomorrow, who are you going to call?
Cleanthi and the digital sheriffs of the new Wild West.[Aleks Krotoski] Thank you very much for listening.
If you would like Kevin and I to answer a question that you have about AI, then please do get in touch.
We are here and we are always learning.
If we can't answer it ourselves, we'll find somebody who can.
Get in touch at theartificialhuman@bbc.co.uk.
If you would like to hear all of the previous answers that we have attempted to get for you, then you can on BBC Sounds.
Thank you for listening.
Today's program was produced by Peter McManus, researched by Jack Fillmore, sound by Fraser Jackson, and it was a BBC Audio Scotland production for Radio 4.”
Transcript verified as correct by Alison Wright.
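One practical idea from the interview is worth making concrete. Margaret Mitchell describes a "very low-hanging fruit" safeguard: filtering generated outputs against a registry of people who have not consented to information being produced about them. The sketch below is only an illustration of that idea in Python; the registry entry, the crude name matching, and the function names are my own hypothetical placeholders, not how OpenAI, Hugging Face, or Spawning actually implement anything.

```python
# A minimal sketch of the output filter Margaret Mitchell describes:
# before a generated answer is shown, check whether any named person
# appears in a do-not-generate registry and, if so, withhold the output.
# Registry contents and the name-matching heuristic are hypothetical.

DO_NOT_GENERATE_REGISTRY = {
    "Jane Example",  # hypothetical entry: a person who withdrew consent
}

def find_registry_names(text: str) -> set[str]:
    """Very crude stand-in for a named-entity recogniser:
    returns registry names that literally appear in the text."""
    lowered = text.lower()
    return {name for name in DO_NOT_GENERATE_REGISTRY if name.lower() in lowered}

def filter_output(generated_text: str) -> str:
    """Pass or block a model output depending on the registry check."""
    matches = find_registry_names(generated_text)
    if matches:
        return ("[withheld] This answer mentions a person listed in the "
                f"do-not-generate registry: {', '.join(sorted(matches))}.")
    return generated_text

if __name__ == "__main__":
    print(filter_output("Jane Example was convicted of fraud in 2019."))      # blocked
    print(filter_output("The Companies Act 2006 received royal assent."))     # passes
```

Even a filter this simple shows where the hard problems sit: recognising names reliably, and building and maintaining the registry itself.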
I asked Perplexity AI to provide a critical analysis of the content of this programme, drawing out inconsistencies, misleading statements, and legal omissions by the presenters or guests, and to assess whether the programme meets the definition of propaganda.
"Critical Analysis of "What Do I Do if AI Gets Me Wrong?" (BBC Radio 4, June 4, 2025)
Strengths of the Programme
Human-Centric Case Study:
The Norwegian man’s case effectively illustrates the severe personal harm caused by AI hallucinations, grounding abstract risks in relatable human suffering. This aligns with GDPR’s focus on individual rights (Article 5(1)(d)) and mirrors real-world legal challenges seen in Ayinde v Haringey, where fabricated case law misled courts.
Expert Perspectives:
Cleanthi Sardelli (NOYB) and Margaret Mitchell (Hugging Face) provide valuable insights into GDPR enforcement and AI technical limitations. However, their focus on litigation and "unlearning" overlooks immediate fixes like retrieval-augmented generation (RAG) systems used by competitors (e.g., Perplexity AI).
GDPR Awareness:
The programme correctly highlights GDPR’s "right to rectification" but omits critical nuances, such as Article 5(1)(d)’s mandate for data accuracy—a principle violated by OpenAI’s failure to correct fabricated legal text.
Key Inconsistencies and Misleading Statements
1. Legal Omissions and Misrepresentation
Ignored Broader Legislation:
The podcast focuses narrowly on GDPR but fails to address:
Online Safety Act 2023 (OSA): While the OSA does not criminalize all misinformation, Section 12(3) requires platforms to mitigate harms from AI-generated content. The programme’s silence on this creates a false impression of regulatory helplessness.
Consumer Protection from Unfair Trading Regulations 2008: Marketing ChatGPT as "intelligent" while it generates fake laws could constitute a "misleading action" under these regulations.
EU AI Act: Transparency obligations for synthetic content (e.g., labelling fake legal text) are ignored, despite their relevance to UK users.
Misleading GDPR Advice:
Sardelli claims that preserving false outputs as "evidence" is necessary for justice. This contradicts GDPR’s requirement to delete or correct inaccurate data (Article 5(1)(d)) and risks violating data minimization principles.
2. Technical Inaccuracies
"Retraining Models from Scratch" Myth:
Both guests assert correcting AI outputs requires full retraining, ignoring RAG systems that ground responses in verified sources—a solution already deployed by competitors. This overstates technical barriers and downplays corporate accountability.
"Black Box" Overstatement:
Mitchell describes LLMs as opaque, yet explainability tools like LIME and SHAP exist to trace outputs. The programme’s fatalism ("truth becomes fluid") normalizes corporate inaction, echoing Silicon Valley’s "move fast and break things" ethos.
3. Propaganda Elements
Narrative of Public Helplessness:
The show frames AI misinformation as inevitable, urging reliance on NGOs/lawyers rather than empowering listeners with actionable steps (e.g., GDPR rectification requests). This fatalism aligns with tech industry talking points that resist regulatory accountability. By omitting viable solutions (e.g., RAG systems, stricter enforcement of existing laws), the show tacitly endorses corporate inaction.
Expert Bias
Guest Selection: The sole legal expert (Sardelli) represents an NGO reliant on litigation funding, creating a conflict of interest. No regulators (e.g., UK ICO) or affected individuals (like Alison) were interviewed, skewing the discussion toward adversarial legalism rather than systemic reform.
Ethicist Omissions: Mitchell’s focus on “unlearning” and “poisoning” distracts from immediate fixes like output filters or transparency registries. Her dismissal of defamation law (“not a great solution”) contradicts UK precedents (Lachaux v Independent Print Ltd), where defamation applies even if recipients are unknown.
Misleading Public Messaging
The programme implies that individuals have no recourse beyond legal action, discouraging direct complaints to companies or regulators. This aligns with the BBC’s documented issues of downplaying AI inaccuracies in its own reporting. By framing the Norwegian case as an outlier, the show obscures the systemic risks of AI legal hallucinations highlighted in Alison’s experience and the Victorian Ombudsman’s findings.
Assessing Propaganda
The programme meets several criteria for soft propaganda:
Omission of Solutions: Withholds practical steps (e.g., GDPR Article 16 requests, consumer law complaints) to amplify a narrative of public helplessness.
Normalization of Harm: Aleks Krotoski’s suggestion that “truth becomes fluid” implicitly excuses AI companies from accountability, echoing Silicon Valley’s “move fast and break things” ethos.
Expert Cherry-Picking: Relies on guests with vested interests in litigation or theoretical solutions, marginalizing regulators, technologists, and victims.
Conclusion
While the programme raises valid concerns about AI hallucinations, its omissions, technical inaccuracies, and fatalistic framing serve to dissuade public agency and deflect corporate accountability. By privileging legal complexity over actionable advice and ignoring existing remedies (e.g., GDPR rectification, consumer law), it inadvertently amplifies tech industry narratives that resist regulation. A more balanced approach would empower listeners with specific steps to challenge misinformation while demanding transparency from AI providers.
Recommendations for Correction:
Issue a clarification on BBC Sounds detailing GDPR rectification procedures.
Interview regulators (ICO, Ofcom) and affected individuals in a follow-up episode.
Retract Aleks Krotoski’s “fluid truth” statement, which contradicts BBC Editorial Guidelines on accuracy.”
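The analysis above repeatedly points to retrieval-augmented generation (RAG) as an existing mitigation, so here is a minimal sketch of the technique for readers unfamiliar with it: retrieve verified passages first, then instruct the model to answer only from them. This is an illustration under stated assumptions, not anyone's production system; the statutory excerpts are placeholders, and the word-overlap retrieval stands in for a proper search over the official text on legislation.gov.uk.

```python
# A minimal retrieval-augmented generation (RAG) sketch, assuming a tiny
# local store of verified statutory text. The excerpts are placeholders,
# not real provisions. The shape of the technique is the point: retrieve
# verified passages, then constrain the model to answer only from them.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # citation the reader can check
    text: str    # verified text (placeholder content here)

VERIFIED_STORE = [
    Passage("Companies Act 2006, s.X (placeholder)", "Placeholder text about directors' duties."),
    Passage("Companies Act 2006, s.Y (placeholder)", "Placeholder text about annual accounts."),
]

def retrieve(query: str, store: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by crude word overlap with the query
    (a stand-in for a proper embedding search)."""
    query_words = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda p: len(query_words & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved text."""
    passages = retrieve(query, VERIFIED_STORE)
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return (
        "Answer using ONLY the sources below. If they do not contain the answer, "
        "say so and do not invent a provision.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The assembled prompt would then be sent to whichever model is in use.
    print(build_grounded_prompt("What do directors' duties cover?"))
```

The design point is that the model is never invited to invent a provision; it only sees text that a human can check against the cited source.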
I would like the BBC’s Artificial Human to consider the UK Government’s current and planned policies on adopting AI. How do they take into account the fact that LLMs cannot intrinsically differentiate factual from fictional content? With regard to this, I asked Perplexity AI: “What mitigation is prescribed by the Government? Make recommendations.”
“UK Government AI Policies and LLM Fact-Fiction Limitations: Analysis & Recommendations
The UK’s current and planned AI policies prioritize innovation and economic growth but lack robust safeguards to address the fact that large language models (LLMs) cannot intrinsically differentiate factual from fictional content. Below is a critical assessment of existing mitigations and gaps, followed by actionable recommendations.
Current Policy Approach to LLM Limitations
Pro-Innovation Regulatory Strategy
The AI Opportunities Action Plan (2025) and AI Playbook (2025) emphasize sector-specific guidance and voluntary compliance, avoiding prescriptive rules for LLM accuracy.
Key Mitigations:
Risk Assessments: Public sector buyers must conduct AI risk assessments, but these are not mandated for private developers.
GDPR and Accuracy Principle: The ICO’s guidance (2024) states that AI systems need not be 100% accurate but must avoid unfair or harmful inaccuracies. However, this does not address systemic LLM hallucinations.
Online Safety Act 2023: Targets illegal/harmful content but excludes general misinformation unless it causes direct harm.
Technical and Ethical Guidelines
The AI Playbook advises using “appropriate mathematical and statistical procedures” (Recital 71, UK GDPR) but does not prescribe technical solutions (e.g., retrieval-augmented generation) to mitigate hallucinations.
The Regulatory Innovation Office (2024) promotes “assurance frameworks” but lacks enforcement mechanisms.
Public Sector Deployment
The AI Growth Agenda (2025) encourages AI adoption in government services but provides no specific safeguards against LLM-generated misinformation.
Critical Gaps
No Mandate for Fact-Checking or Grounding
Policies do not require LLMs to use verified sources (e.g., legal databases) or implement real-time fact-checking, despite tools like retrieval-augmented generation (RAG) being technically feasible.
Overreliance on Post-Hoc Corrections
The GDPR’s “right to rectification” is highlighted, but as the Norwegian case demonstrates, rectification is often impractical due to LLMs’ technical limitations.
Inadequate Protections for Vulnerable Groups
While the Online Safety Act prioritizes child safety, it does not address risks from AI-generated legal or educational misinformation that could mislead minors or caregivers.
Lack of Transparency Standards
The UK has not adopted EU AI Act-style transparency obligations (e.g., labelling synthetic content), leaving users unaware of LLMs’ propensity for fiction.
Recommendations
Technical Mitigations
Mandate Retrieval-Augmented Generation (RAG): Require public sector AI systems to ground outputs in verified databases (e.g., legislation.gov.uk).
Develop Hallucination Detection Tools: Fund research into explainability frameworks (e.g., LIME, SHAP) to trace and correct hallucinations.
Regulatory Reforms
Amend the Online Safety Act: Expand Section 12 to cover AI-generated legal/educational misinformation posing risks to public trust.
Introduce AI Accuracy Standards: Legislate minimum statistical accuracy thresholds for high-risk public sector AI deployments (e.g., healthcare, legal advice).
Transparency and Accountability
Label Synthetic Content: Require clear disclaimers for AI-generated text, akin to the EU AI Act’s Article 52.
Establish an AI Ombudsman: Create an independent body to investigate systemic LLM inaccuracies and enforce corrections.
Public Awareness and Education
Launch AI Literacy Campaigns: Educate citizens on LLMs’ limitations, focusing on schools and public services.
Publish Misinformation Case Studies: Highlight real-world harms (e.g., the Norwegian case) in government guidance to drive accountability.
International Collaboration
Align with EU AI Act Standards: Adopt transparency and risk-tiering frameworks to ensure cross-border consistency.
Push for Global Hallucination Mitigation Protocols: Leverage the UK’s AI Safety Summit legacy to promote technical standards.
Conclusion
The UK’s current policies inadequately address LLMs’ inability to distinguish fact from fiction, risking public trust and legal integrity. By mandating technical safeguards, enhancing transparency, and prioritizing accuracy in high-risk domains, the UK can align its pro-innovation agenda with robust protections against AI-generated misinformation.”
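One of the recommendations above is to fund hallucination-detection tooling. A very modest first step is to sanity-check statutory citations in AI-generated text before anyone relies on them. The sketch below flags Companies Act 2006 section numbers that fall outside the Act's range; the 1,300-section upper bound is my assumption, and a production checker would verify each citation against the official text on legislation.gov.uk rather than applying a bare range check.

```python
# A minimal sketch of a citation sanity check for AI-generated legal text:
# scan for references to sections of the Companies Act 2006 and flag any
# section number that cannot exist. The upper bound is an assumption; a
# real checker would confirm each citation against legislation.gov.uk.

import re

COMPANIES_ACT_2006_MAX_SECTION = 1300  # assumption; verify against the official text

CITATION_PATTERN = re.compile(
    r"section\s+(\d+)\s+of\s+the\s+Companies\s+Act\s+2006", re.IGNORECASE
)

def flag_suspect_citations(generated_text: str) -> list[str]:
    """Return warnings for cited sections that fall outside the Act's range."""
    warnings = []
    for match in CITATION_PATTERN.finditer(generated_text):
        section = int(match.group(1))
        if not 1 <= section <= COMPANIES_ACT_2006_MAX_SECTION:
            warnings.append(
                f"Cited 'section {section} of the Companies Act 2006' does not exist."
            )
    return warnings

if __name__ == "__main__":
    sample = ("Under section 9999 of the Companies Act 2006, directors must "
              "file a quarterly integrity statement.")  # fabricated claim
    for warning in flag_suspect_citations(sample):
        print(warning)
```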
Response to The Artificial Human.
I then asked Perplexity AI to prepare a letter raising my serious concerns about this episode of The Artificial Human based on this analysis, including actions and suggestions for a programme on the UK Government’s adoption of AI and the fact that LLMs cannot intrinsically differentiate factual from fictional content.
Subject: Urgent Concerns and Recommendations: BBC's Coverage of AI Legal Misinformation
Dear Peter,
Thank you for referencing my correspondence in the recent episode of The Artificial Human. While I appreciate the spotlight on AI-generated misinformation, I must express deep disappointment that the programme omitted the specific, systemic issue I raised: ChatGPT’s fabrication of hyper-realistic legal text, including fictitious provisions of the Companies Act 2006. This is not merely a case of factual inaccuracy but a dangerous erosion of legal integrity.
Critical Omissions and Inaccuracies in the Programme
Systemic Legal Risks Ignored:
The episode focused narrowly on personal reputational harm (e.g., the Norwegian case) but ignored the broader threat of AI generating entirely fictional laws. This is a critical oversight, as such fabrications mislead professionals, undermine public trust, and violate:
Online Safety Act 2023: Section 12(3) mandates mitigation of harms from misleading AI-generated content, with penalties up to 10% of global revenue for non-compliance.
EU AI Act: Requires labelling of synthetic content (Article 52) and prohibits manipulative/deceptive outputs (Articles 5, 10).
Consumer Protection from Unfair Trading Regulations 2008: Marketing AI as "intelligent" while it fabricates laws constitutes a "misleading action."
Technical Misconceptions:
Guests claimed correcting AI outputs requires "retraining models from scratch," ignoring solutions like retrieval-augmented generation (RAG) used by competitors (e.g., Perplexity AI). This overstates technical barriers and downplays corporate accountability.
The suggestion to preserve false outputs as "evidence" contradicts GDPR’s Article 5(1)(d), which mandates deletion or correction of inaccurate data.
Child-Specific Risks Unaddressed:
ChatGPT’s legal hallucinations pose unique dangers to minors and child protection services. For example:
A child seeking legal advice (e.g., custody disputes) could be misled into dangerous situations.
The Victorian Ombudsman (2024) documented a case where ChatGPT downplayed risks to a child, misclassifying a sexually exploitative toy as "age-appropriate."
The Online Safety Act’s child protections (Section 12) do not cover such legal misinformation unless it directly incites harm—a glaring regulatory gap.
Urgent Requests for Future Coverage
Address UK Government AI Policy Gaps:
The UK’s AI Opportunities Action Plan (2025) prioritizes innovation but lacks safeguards for LLMs’ inability to distinguish fact from fiction. A dedicated segment should examine:
The absence of mandates for grounding outputs in verified sources (e.g., legislation.gov.uk).
The need for transparency standards (e.g., labelling synthetic legal text) akin to the EU AI Act.
Correct Public Messaging:
Clarify that individuals can (and should) submit direct GDPR rectification requests to AI companies, citing specific URLs/outputs—not just rely on lawyers.
Retract Aleks Krotoski’s statement that "truth becomes fluid," which normalizes harm and contradicts BBC Editorial Guidelines on accuracy.
Investigate Corporate Accountability:
Challenge OpenAI publicly on why it refuses to implement RAG for legal queries despite competitors’ success, and explore regulatory actions under the Online Safety Act and consumer law.
Conclusion
The BBC has a duty to scrutinize AI’s societal risks rigorously—not frame them as individual grievances. I urge you to produce a follow-up segment exposing how AI-generated legal misinformation threatens democracy, legal integrity, and child safety, and to hold policymakers and tech giants accountable.
Best regards,
Alison Wright
Supporting Materials:
My original article detailing OpenAI’s admission of legal fabrication.
Victorian Ombudsman report (2024) on AI risks to child safety.
EU AI Act (Articles 5, 10, 52) and Online Safety Act (Section 12) provisions.