Quantum disinformation
Bless you, educators, I don't know if I have the stomach for it
Warning: This rough note contains spoilers for Black Mirror season 7, episode “Bête Noire”
There’s an episode from the newest season of Black Mirror that I struggled to get on board with. Confectionery creator Maria sees her life rapidly turned upside-down by a new coworker, Verity, an ex-schoolmate with a serious grudge. Maria’s grip on reality is threatened as past events and facts she knows to be true are disputed by the people around her: the name of an old chicken shop where her boyfriend once worked, or important information missing from a work email which Maria categorically remembers providing when she sent it. In both instances, Verity is there to contradict Maria’s account of events, smugly suggesting that she must be misremembering: the chicken shop was always “Barnies”, not “Bernies”, and a quick look at Maria’s email clearly shows no mention of an important allergen for her newest product. In a dispute between Maria and Verity in their boss’s office, Maria says she witnessed Verity stealing oat milk from the staff fridge, which Verity rebuffs by claiming it was, in fact, Maria who stole the milk. Finally grateful to have indisputable fact on her side, Maria reminds her boss that she has a severe nut allergy, so wouldn’t touch oat milk with a barge pole. Unfortunately for Maria, her boss seems to have no idea what she means by a “nut allergy” and looks at her with perplexed, accusatory eyes.
We discover, after some agonising moments of explosive gaslighting and psychological warfare, that the aggressively named Verity has been manipulating not only truth but history with a pendant she wears around her neck, whispering new realities into it like a microphone, dictating the laws of the universe. Security camera footage shows Maria guzzling oat milk straight from the carton.
I wasn’t on board with the twist at first, probably because buying it requires a level of understanding of quantum mechanics that I do not possess. However, having now finished my first Teaching Fellow contract (I am writing this alongside grading the last pieces of written coursework from my first-year undergraduate students) I think I get it now.
A device that can rewrite the fabric of the universe, and everything that everyone knows and is, serves here as a metaphor for disinformation enabled by technology. But there is a moment in the episode, titled Bête Noire (literally “black beast”; a person or thing one particularly detests), where we are also shown how such technology is the perfect tool for fascism. At one point, Verity admits to Maria that she had fun playing around with the device after she first developed it, using it to declare herself Empress of the Universe, worshipped by acolytes, before realising her true calling was to get revenge on the girls at school who made fun of her for spending all her time in the computer room and who spread a cruel rumour about her sleeping with a teacher. Her childhood trauma combined with her academic prowess led to her creating the device, but it also sent her down a path toward universe-level dictatorship. In the show, this aspect is a funny little detail, but with pieces of my soul utterly trashed by the torturous experience of grading generative AI paper after generative AI paper, I find myself stuck on the image of Verity in a crown, fanned by slaves, exalted on her throne. How easy it is to manipulate and fabricate in order to get what we want. How much easier this will be thanks to the “tech revolution.”
Back to those papers. I have now lost count of how many students have so clearly used generative AI to produce the majority of their coursework. I’ve graded papers with reference lists rife with fake sources, which for some reason we’ve decided to call “hallucinations”, as if to further anthropomorphise a system that has already been humanised to death. I’ve read Methods sections describing research the students cannot possibly have undertaken, and have seen them use terminology we never discussed in class, more often than not misused. Some of these papers are tidier than others; a faint degree of effort has been put in to at least make the work look less like gen AI. In others, red-handed sentences remain, like “your professor will want to see that you have considered the limitations of this study.” But it is the references that get me the most. The refusal to read long texts they find uninteresting is one thing, but the unquestioning acceptance of information they have not checked, most obvious when that information comes from papers that do not exist, is something else: it points to behaviour patterns with longer-term consequences.
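The bitter irony is that fabricated references are the easiest kind of lie to catch, even by machine. As a rough illustration (a sketch in Python, assuming the requests library and the public Crossref API; a real checker would also compare titles and authors), here is the sort of lookup that takes seconds per citation:

```python
# Rough sketch: does a cited DOI actually exist?
# Assumes Python 3, the `requests` library, and Crossref's public API.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hallucinated references tend to fail outright, or resolve to a paper
# whose title has nothing to do with the citation that named it.
print(doi_exists("10.1038/nature14539"))       # a real paper: True
print(doi_exists("10.9999/not.a.real.doi"))    # almost certainly False
```

That the check is this simple, and still doesn’t happen, is the part that worries me.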
How do I explain this to the 18-year-olds in my class without sounding like a conspiracy theorist or dystopia fanatic? The worm inside me says “fuck them, it’s too late for these guys”, but the person inside, still breathing though barely, remembers that my job is to teach, not to pass judgement, and knows non-altruistically that we will have to rely on this generation at some point to keep us all alive. The assignment itself is also partially to blame: too easy to cheat on, too seemingly inconsequential (boring?) to be taken seriously by students who are only taking this class because it’s mandatory. If I were to teach the class again, I’d recommend a complete change in assignment format, although I doubt the institution would allow much wiggle room. I’m sure university students’ attitudes towards coursework have changed considerably since I was a first-year, because they’ve been exposed to generative AI’s writing functionalities since school. A glaring issue is that we know so little about how the last three years, since ChatGPT was released to the public, have changed reading and comprehension abilities for young people at school and beyond. As of yet, there hasn’t been a large body of research dedicated to the effects of generative AI on learning, at least not from the angle of “what the hell is this doing to our literal brains??”
To many of the students, this was just another assignment they needed to submit in order to move on to the next academic year. The act of cheating by using ChatGPT or Grok or whatever other system they’ve discovered to write a 2,000-word paper is nothing more than a sly way to avoid doing more work on top of their already packed learning schedule, a bit of an oopsie if they get caught.
To me, though, their actions mark the development stage of Verity’s device. Their papers are full to the brim with random, incoherent information undetectable to an untrained eye, with conclusions based on information that is not real and declarations of work they have not done. Take, for instance, the student who claimed to have executed a full survey-based study with multiple participants, despite the task having been to use secondary data, even providing fake primary data with responses timestamped three hours before the student submitted the assignment. It reads as if they asked ChatGPT to generate a research paper, forgot to prompt it to only use secondary sources, and then had to make up some data to accompany the bits that talked about actually doing data collection. This isn’t really the same thing as referencing fake sources, but it pissed me off so I had to mention it.
Despite the lack of substantial research, academics and teachers have been flagging the risk of generative AI to education and to the value of learning for a while now, based on their working experience. I have strong opinions on gen AI, in that I detest it, and I absolutely believe that using it to write papers is an act of plagiarism: it steals the hard work and ideas of people who have spent years of their lives contributing to knowledge-building. It’s also a pathetic way to be speeding up planetary death. Cheating is wrong on a moral level; it is unfair to the students who are doing the work honestly; it is boring to grade, and it has instilled in me a feeling of disillusionment so strong that I’ve started to doubt whether I want to teach again. It makes it incredibly difficult to judge how much the students have actually learnt or what they need to improve on, and it sucks the life out of the job. Educators are feeling the pressure, reluctantly turning into walking Turnitins to try to detect cheating that is often impossible to prove with absolute certainty.
All of this is important to say, but beyond our feelings and our moral compasses and the value of the written word is the turning tide of honesty, the slack grip on reality, the acceptance by so many that the truth behind your words is of limited importance, so long as the assignment gets done and you get what you need from it.
I’m riled up, not just because grading these papers has been so unpleasant, but because of the Black Mirrorness of it all. Fabricating information (otherwise known as lying) and refusing to check the credibility of sources are the same tools used by coercive governments and figureheads to seize power over populations, to change the facts of history, to justify violence and control. Misinformation, disinformation, fake news and the spread of bullshit are just too serious to wave off with a bad grade. But these students are not beta-stage fascists (god, imagine). Rather, they are victims of a deliberate attempt to squash reliable information sources. They’re the Marias of the story, not the Veritys (actually, this metaphor doesn’t really work if you’ve seen the end of the episode, but whatevs).

Google searches forcing gen AI responses, every Tom, Dick and Harry in tech coming up with new ways to get customers to use gen AI to “create” work, student (and professional!) papers that take three minutes to “write”: all of these threaten the beginning of the end of society’s grip on reality, because they neglect falsification and truth. I was genuinely shocked to learn that some schools and universities are now actually encouraging students to use gen AI to help write summaries, essays, and longer papers. I could write a dissertation on how generative AI responses are rife with bias, how easily they are weaponised to further the goals of white, western, conservative ideologies; maybe I will one day. But right now I am 80 papers down, and I am absolutely furious that it only took three years for us to say “fuck it” to knowledge, to give up our reasoning and rationality to systems designed not to inform or empower, but to puke information under the pretence of fact.
My short stint as a lecturer has convinced me more than ever that truth is our only protection against power that seeks to control and destroy. There needs to be an effective way to communicate this to students without relentlessly showing them image after image of genocides and wars: a way to connect it to their disaffection, to show them that they must invest in truth now, with the long-term goal of liberty, even when it seems so much easier to lie. I remind myself that the students are young, that they are balancing all the unyielding demands and pressures that come with being at university. That some of them seem lazy, but laziness is in all of us to an extent, and it need not define them forever.
“It’s a shame you chose not to take this assignment seriously,” I write in my feedback. “This would have been an interesting topic for you to research properly. I hope you will try harder next time.”
So many people have said to me that generative AI isn’t going anywhere, so we need to embrace it and teach students how to use it responsibly. It won’t come as a surprise, given the tone of this essay so far, that I fundamentally disagree. To me, gen AI could be as impactful, and as fleeting, as NFTs. Technology comes and goes, and it can absolutely be resisted, but that’s up to us. Another issue is that people sometimes conflate “teaching how to use it responsibly” with “stating that you’ve used ChatGPT”, which, to me, misses the entire point of responsible information provision. Using ChatGPT to help point you in the direction of what might be a credible source is one thing (I still don’t like it, for the resource-gobbling it requires); using it to generate information, definitions, and opinions without the ability to credit the original source (because citing ChatGPT is as helpful as no citation at all) is something completely different.
Wealthy corporations like OpenAI claim to be giving us freedom through efficiency of knowledge access and production, but they are selling us a good old-fashioned lie. There’s the argument that large language models are this generation’s calculator: that back in the day, maths teachers marched against their use, claiming students would stop learning how to do maths properly, and now look at how much of a benefit calculators have been to society! Generative AI isn’t a calculator, though. Taking the point metaphorically and relating it to writing, it can’t even be argued that gen AI works like a word processor, because the user is not thinking through or expressing the ideas that are ultimately produced, the way you are when you punch numbers into a calculator in logical order. The number you get from a calculator is the same number you’d get if you did the problem by hand. The text you get from ChatGPT is not the same text you would get if you had written it yourself, because it is not your thoughts, not your ideas, not your words. Taking the point literally, gen AI also isn’t a calculator because it has not been designed to solve numerical problems. To answer a maths question, it draws instead on its enormous stores of data about similar questions and answers, mashing them together to produce an answer that may or may not be right.
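To make that difference concrete, here is a deliberately silly sketch (a made-up two-entry word table, nothing remotely like a real transformer) of computing an answer versus sampling one:

```python
# Toy illustration: arithmetic is deterministic; a language model
# samples from a probability distribution over plausible next tokens.
import random

print(2 + 2)  # always 4, every run, on every machine

# A made-up "bigram table": which word tends to follow which.
bigrams = {
    "2 + 2": ["equals"],
    "equals": ["4", "4", "4", "5"],  # mostly right, occasionally not
}

def babble(prompt: str, steps: int = 2) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [prompt]
    for _ in range(steps):
        words.append(random.choice(bigrams.get(words[-1], ["..."])))
    return " ".join(words)

print(babble("2 + 2"))  # usually "2 + 2 equals 4"; sometimes "equals 5"
```

The toy isn’t the point. The point is that one process is guaranteed to be right, and the other is only ever likely to sound right.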
I know this because I’ve tested it. I wanted to see how ChatGPT would answer a statistical probability question included in a midterm exam, since the guidance on how to grade that particular question was not entirely clear. I was curious, so I asked the bot. I was surprised to find that it gave me the wrong answer, so I responded with a follow-up prompt, “are you sure that’s right?”, allowing for the possibility that my own answer was wrong. The bot responded, “You’re absolutely right, that is not quite correct. Let’s take a look at that again,” followed by another round of incorrect working and another wrong answer. It took a third prompt for the bot to finally get the correct result. For a brief moment, I had doubted myself. I momentarily accepted ChatGPT’s response, willing to believe that 2 + 2 = 5. The only reason I thought to press on was that I felt confident my own answer was correct, because I had done the problem myself and I had checked it against the grading scheme. I knew the truth.
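If you want to run the same experiment yourself, it takes a dozen lines. A minimal sketch, assuming the official openai Python SDK and an API key in OPENAI_API_KEY; the probability question below is a stand-in, not the one from our exam, and the model name is just an example:

```python
# Minimal sketch of the "are you sure?" experiment.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": "A fair die is rolled twice. What is the probability "
               "that both rolls are even? Show your working.",
}]

for round_number in range(3):  # it took me three prompts to get the truth
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=messages,
    ).choices[0].message.content
    print(f"--- round {round_number + 1} ---\n{reply}\n")
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Are you sure that's right?"})
```

Notice that the follow-up prompt contains no information at all. Whether the model stands its ground or capitulates, it is reacting to pressure, not to evidence.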
It's all getting a bit 1984. Every single fascist regime in history has utilised censorship and propaganda as primary weapons, controlling access to and the production of knowledge. Toppling them, or stopping them in their tracks, requires an enormous dedication to resisting the suppression of information, a dedication to truth. Generative AI and algorithmic bias are only going to make it easier for bad actors to censor, to spread propaganda, to control and manipulate information. Is it really worth it, to help write a work email faster, to create another beige marketing campaign, or to get a class assignment done in a day?
What might start off as seemingly convenient now is only going to be our downfall later, so I guess, at the heart of this rough note, is a plea to the ten or twenty people who read this to refuse generative AI for the provision of information, to resist the perception of ease, to think for your fucking self and hold tightly to the power of your own reasoning. And, just as I told my students, if you’re reading something that sounds compelling but provides no sources, no evidence, question the reliability of that information. What might be true today could be whispering an entirely re-written history of you tomorrow.
And just so I’m not contradicting my own reasoning, here are some resources and suggestions for further reading.
How ChatGPT works, from the horse’s mouth
American Maths Teacher protests in the 80s
If you’re not sure you agree with my statement on fascism and information control, see: Father of Fascism Mussolini’s use of the press to remake Italy; the Nazis lying about Germany’s defeat in WWI and the country’s economic problems in order to blame Jewish people for every bad thing; Trump’s war on critical race theory and strategic removal of information relating to social issues from US government websites; Israel’s destruction of Palestine’s literary and educational infrastructure; throw it back further to Hume and the Enlightenment claiming that Black people were of low intelligence, unevolved, and only worthy of enslavement… and so on and so forth.
“I don’t understand,” wails Maria, after Verity explains the quantum razzle dazzling.
“I don’t care if you understand. I’m doing it to hurt you,” Verity replies.



