I was fortunate to spend two weeks in Ireland in June, and to give the final Gasta talk of the EDEN 2023 conference; I am finally getting to share it in blogpost form here. Many thanks to the TEL team at Munster Technological University for the work we got to do together before EDEN, and to Tom Farrelly for being an excellent (as usual) Gasta-master.
The theme of the conference was “Digital Education for Better Futures” and, as you can see, I was coming at that theme somewhat contrariwise. Gasta (Irish for “lightning talk”) is an extemporaneous event for me, so this blogpost is my notes and slides rendered as prose, likely not an exact representation of the live talk. If you want to know what that was like, I’ve linked to the recording at the end of this post.
“The Future” is Bullshit
Let’s start with some basic folkloristic definitions. Folklorists (at least, the ones I was trained by) divide narrative folklore into three broad genres:
- Folktales: fiction, told as fiction
- Legends: fiction, told as true
- Myths: told to communicate sacred truths
Each of these genres is defined with the notion of truth embedded in it. Also embedded within each genre is a sense of the roles of tellers and audiences. These genres are not simply one-way narrative experiences, but require a back and forth between teller and told. Suspension of disbelief, on the part of the audience, is key in particular to the definition of legends.
Paul Bunyan and his Big Ox, Blue https://www.flickr.com/photos/65172294@N00/4285626001/in/photostream/
In the US we have a sub-genre of legend called the “tall tale”: outrageous narratives, sometimes called “yarns” or, even more plainly, “lies.” It is a settler colonial North American spin on narrative conventions, surely influenced by all of the Irish people who participated in the European occupation of these lands. There is more than “a bit of the Blarney” involved in tall tales. (Note: I made this point primarily because I was delivering this talk in Dublin, Ireland…)
Tall tales, in the telling and the listening, are fun; suspension of disbelief by the audience as the tale is being told is part of that fun. When you participate in tall tales, either as a teller or as a listener, you are operating within the frame of play. Children point to this frame explicitly on playgrounds: asking each other “are you playing?”, shouting “time out!” to suspend the frame (to pause the game, to argue about rules, or because they are hurt…), or declaring “I’m not playing!” to make sure they are outside the frame while their peers play tag or make believe.
The key to the frame of play is consent. The key to participating in a fun way with tall tales and other legends (remember: fiction, told as true) is consent: “I agree to hear you tell me lies.” What is that participation worth? What can the frame of play bring us, as people? Joy, laughter, connection.
Consent. Let’s sit with the idea that futures should be things that we consent to, that we mutually create, not that are handed to us by people trying to sell us things.
I am using as my working definition of bullshit the one offered by the philosopher Harry Frankfurt: persuasion without regard for the truth (Frankfurt 2005).
Related to that is David Graeber’s definition of bullshit jobs: collections of tasks done without regard for what matters (my paraphrase of Graeber 2018).
These definitions underlie much of my reaction to large language model (LLM) tools such as ChatGPT (Bender, Gebru, McMillan-Major, and Shmitchell 2021; Bergstrom and West 2021). These tools are bullshit generators, producing content without regard for either truth or worth.
LLM tools are not minds: they cannot “know” a lie, and they are not capable of engaging in the human relationships required for constructing and recognizing the frame of play.
If we believe the hype of the people trying to sell us these tools, we are told there are a lot of things we should be worried about.
- Students will cheat
- Robots will take our jobs
- We need to give venture capitalists money to build “EthIcaL AI”
What do we really need to be concerned with?
- We need to fight, in education, to be able to focus on processes, not products.
- We need to recognize that the reason people have or don’t have jobs has nothing to do with robots and everything to do with capitalism.
- We need to realize that the bullshit future that venture capitalists are peddling justifies the harm they are doing in the present (Perrigo 2023).
If people are already using LLM tools for work they have to do, we might take that as evidence that they are surrounded by tasks without merit and that this is how they are coping. We are looking, then, at a tool that seems perfect for meeting the bullshit demands of the bullshit jobs we are told are our future, by the very people peddling the bullshit.
If some people think these tools are good for helping them deal with (for example) professional development and social justice work, that might be evidence that those people think that non-bullshit things (professional development, social justice work) are bullshit.
We should pay attention when people mistake worthwhile and necessary service work for bullshit.
Who decides what matters? The “bullshit generators will help us take care of bullshit tasks” formulation doesn’t entirely work. We can observe people using LLM tools in part to give us evidence of what they think does not matter (and we can learn from that, or at least think about the implications of that).
What does it mean that we can find evidence that some people think that all of the following are bullshit tasks suitable for being completed by a bullshit generator:
- Filling out bureaucratic forms
- Completing DEI statements
- Writing letters of recommendation
Wait, what? Maybe it’s not that these tasks don’t matter at all. Maybe it’s that the value of these tasks is not visible to everyone, and that a case needs to be made for them.
Venture capitalists and tech bro billionaires (especially those who call LLM tools “AI”) spin Heinleinian fever-dream visions of a future in order to sell their products. That vision has no regard for what is happening in the present: no regard for the worth of the present, for the agency of people to create their own futures, or for the people in the present who are not the wealthy white men trying to dictate the future. Their actions now are happening outside of any frame of consent.
The rhetorical churn around LLMs and the future is bad bullshit.
Let’s think about who we want to bring along into the future. Not just who we are being told will be there (AI? Robots? White billionaires?) but who is in the present, and how the communities that surround us in the present need to see themselves as having a future, and be seen as deserving of a future by those with the power to facilitate it.
Even better, we need to make sure that people have the power to make their own futures, not just be handed one by people with more money than sense (Gilliard 2023; Fiesler 2022; Forlano 2021).
I want to take back bullshit: to keep the good, fun stuff of sitting around and “telling lies” to our friends.
And I want to remind us that with consent, and the lodestones of what is true, and what matters, we can do better than the bullshit future that venture capitalists want to sell us.
(if you want to see me give this talk in 5 minutes the link is below)
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. 2021.
Bergstrom, Carl T., and Jevin D. West. Calling bullshit: The art of skepticism in a data-driven world. Random House Trade Paperbacks, 2021.
Gilliard, Chris. “Challenging Tech’s Imagined Future.” Just Tech, Social Science Research Council, March 2, 2023. https://doi.org/10.35650/JT.3050.d.2023.
Graeber, David. Bullshit Jobs: A Theory. New York, NY: Simon and Schuster, 2018.
Fiesler, Casey. “The Black Mirror Writers Room: The Case (and Caution) for Ethical Speculation in CS Education.” CU InfoScience, Medium, March 4, 2022. Retrieved April 6, 2023. https://medium.com/cuinfoscience/the-black-mirror-writers-room-the-case-and-caution-for-ethical-speculation-in-cs-education-5c81d05d2c67
Forlano, Laura. “The Future Is Not a Solution.” Public Books, October 18, 2021. https://www.publicbooks.org/the-future-is-not-a-solution/
Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.
Kohn, Alfie. “I’ve never been able to improve on the management theorist Frederick Herzberg’s timeless 10-word maxim: ‘Idleness, indifference, and irresponsibility are healthy responses to absurd work.’ (Teachers/parents: Feel free to substitute ‘worksheets’ for ‘absurd work.’)” Mastodon post, March 25, 2023. https://sciences.social/@alfiekohn/110083807127046096
Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/
Quintarelli, Stefano. “Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).” November 24, 2019.