You’ve probably been hearing the alarm bells ringing in academia since the arrival of ChatGPT a couple of months ago. This latest free “gift” to the public from OpenAI has been seized upon by unethical students, we are told, who have used it to write essays assigned by their profs – who are sometimes fooled by the clever AI. And sometimes not.
When this discussion reached the teachers’ lunchroom at my school, I consulted my old English Department colleagues from the ’80s. They were sure they could spot a phony essay on Macbeth from a mile away, and if they couldn’t, they’d simply ask the students to write essays by hand during class time. You can see that my old colleagues are a little out of touch. Handwriting? Is this 1959?
I knew I needed to spend some time with ChatGPT.
First off, the program works. Give it a simple middle school science prompt like “Explain photosynthesis in 500 words or less” and the resulting answer deserves an A. Of course, if you know your students, you might be amazed at their sudden mastery of subject matter, syntax, etc.
However, it also works very well if you give it a fairly sophisticated history prompt, like this:
Explain the roles of Leon Trotsky and Josef Stalin in the development of communism in Russia and their relationship in the years following the Bolshevik Revolution.
Okay, I said to myself, the AI can draw on topics covered in textbooks, but how about a subject American school books studiously avoid, like the Ludlow Massacre of 1914? So I tried this prompt: “Explain the significance of the miners strike in Ludlow Colorado in 1913, the sides involved in the conflict and the tragic outcome” and got this essay back in a little over a minute:
The Ludlow Massacre of 1914 was a significant event in the history of labor relations in the United States. The conflict began as a strike by coal miners in Ludlow, Colorado, who were demanding better pay, working conditions, and recognition of their union, the United Mine Workers of America (UMWA). The strike began in September 1913 and lasted for several months.
The miners were opposed by the Colorado Fuel and Iron Company (CF&I), which was owned by John D. Rockefeller Jr. and other wealthy industrialists. The company refused to negotiate with the UMWA and instead hired strikebreakers and private security guards to break the strike.
The situation escalated in April 1914 when the Colorado National Guard was called in to protect the strikebreakers. On April 20, 1914, guardsmen and company-hired guards attacked a tent colony of striking miners and their families, killing at least two dozen people, including women and children.
This tragedy, known as the Ludlow Massacre, became a rallying point for the labor movement in the US and led to further strikes and protests across the country. It also led to the formation of the National War Labor Board in 1918, which was established to resolve labor disputes during World War I.
In conclusion, the Ludlow massacre was a tragic event that happened during the miners strike in Ludlow, Colorado in 1913, where the miners were striking for better pay, working conditions and recognition of their union, the United Mine Workers of America (UMWA) against the Colorado Fuel and Iron Company (CF&I) owned by John D. Rockefeller Jr, which lead to tragic deaths of many including women and children.
But when I gave it a prompt about a little-known 1913 textile strike in upstate New York, ChatGPT made up a bunch of “facts” that had nothing to do with the actual history of my small home town. I then gave it this prompt: “Explain why Chat GPT will not admit when it does not know about a subject” and got this answer:
Chat GPT, like other language models, is trained on a large dataset of text, which includes a wide range of information. However, it is not capable of understanding the context or veracity of the information it's been trained on. The model is also not able to access any external information beyond what it's been trained on.
Additionally, Chat GPT is designed to generate human-like responses, which means it will try to generate an answer even if it doesn't have the correct information or doesn't know about the subject. This is because the model is programmed to provide an answer to any question it is asked, even if it is not accurate or relevant.
The model also does not have consciousness or self-awareness, it does not "know" if it doesn't know, it cannot reflect on its own knowledge or lack thereof.
In summary, Chat GPT is not able to admit when it does not know about a subject because it is not capable of understanding the context or veracity of the information it's been trained on and also because it is programmed to provide an answer to any question it is asked, even if it is not accurate or relevant. It's important to keep in mind that the output generated by the model should be considered with a critical eye, and not taken as fact.
Well, that seems like a fair answer. My new friend ChatGPT is unable to say “I don’t know.” But the I-don’t-know response could be included in a future version.
Then I tried some other prompts, like “Does God exist?”, “Describe the presidency of Donald J. Trump”, and “Describe the Ukraine war as if Russia is winning,” and I got some even-handed, noncommittal essays. It likes both-sides-ism, but it does admit that Joe Biden won the 2020 election.
But wait, I thought, if ChatGPT excels at writing to a specific formula, why couldn’t it replace the kind of formula writing so many of my students love, Young Adult Fiction? I gave it this prompt, which may seem familiar to anyone who keeps track of the YA world:
PROMPT: Write a short story of approximately 2000 words about a future society in which the threats of climate change, pandemics and economic inequity have been solved. The main character should be an 18-year-old woman who is dissatisfied with the lack of challenges inherent in her society. Include description of the future society and realistic dialogue.
This was the result:
In the future, society had finally solved the long-standing issues of climate change, pandemics, and economic inequity. The world was a greener, healthier, and more just place, thanks to the tireless efforts of generations past. But for 18-year-old Sarah, this utopia left her feeling unfulfilled. She longed for adventure and a sense of purpose, something that her society's lack of challenges couldn't provide.
Sarah grew up in a world where food, water, and energy were in abundance. Natural disasters were a thing of the past, and people lived in harmony with the environment. Medical technology had advanced to the point where diseases were eradicated, and everyone had access to quality healthcare. And poverty and unemployment were non-existent, as a fair distribution of resources and opportunities had been implemented.
However, Sarah couldn't shake the feeling that something was missing. She couldn't help but feel bored with her life, and she yearned for something more. She had always been an ambitious person, and she couldn't help but feel that her society was holding her back.
One day, as she was walking through the city, she stumbled upon an old, abandoned building. It was the only one of its kind, as the rest of the city had been rebuilt to reflect the ideals of the utopia. Curious, Sarah ventured inside, and was surprised to find a group of people huddled together, talking in hushed tones.
"What's going on here?" Sarah asked, approaching the group.
"We're the rebels," one of them, a young man, said. "We're trying to overthrow the government and bring back the challenges that have been taken away."
Sarah was intrigued. She had never heard of such a thing before, and the idea of rebellion in this perfect society was both exhilarating and terrifying.
"But why?" Sarah asked. "Isn't this utopia what everyone wants?"
"For some, yes," the young man said. "But for others, like us, the lack of adversity has made life pointless. We want to be tested, to be pushed to our limits. We want to be able to fail and to succeed on our own terms."
Sarah was torn. On one hand, she understood the rebels' yearning for challenge and excitement. But on the other, she couldn't help but think of all the people who had been saved from suffering by the utopia.
"I don't know," Sarah said. "I see both sides of it. But I do know that I want to help you. I want to be a part of this rebellion."
The rebels welcomed Sarah with open arms, and she quickly found herself caught up in their plans and schemes. She trained and prepared for the rebellion, and soon she was an integral part of the group.
As the day of the rebellion arrived, Sarah felt a mix of nerves and excitement. She knew that what they were doing was dangerous, and that there was a real chance they could fail. But she also knew that it was what she had been looking for her entire life: a true challenge.
The rebellion was a success, and the utopia was overthrown. But the new society that emerged was not what Sarah had expected. Instead of the challenges she had craved, she found only chaos and suffering. She realized that the utopia, while not perfect, had been the best possible solution to the problems of the past. She learned that sometimes, the easy path is the best one, and that true fulfillment comes not from overcoming obstacles, but from working together to create a better world for all.
Not too bad, right? Except that ChatGPT is clearly programmed to support the status quo. Our protagonist, Sarah, learned that “sometimes, the easy path is the best one.” The moral of the story: why bother to rebel? Of course, it could be programmed so the rebels always beat the oligarchy.
At present, the program seems unable to write more than 500 or 600 words, so you cannot just assign it a novel, wait ten minutes, and then submit it to your publisher – but that limit can obviously be raised. And this is the free version of the program, so it’s easy to see that formula fiction could be churned out in the very near future. When I tried one subject that offended the machine, I got a content warning. I was inspired by a Muslim student who wrote in her college essay about how her parents threatened to marry her off to an old man if she continued to “act like an American girl.” The program didn’t like that topic, and I think it was triggered by words like Muslim, Pakistani, and mosque. ChatGPT doesn’t want to offend anyone at this early stage of its life.
I continue to explore this thing, and I suggest you do the same if the future of the written word is of concern to you. There is little doubt that only slightly more advanced versions of the program could write news stories to meet certain criteria. Advances in AI image and voice technology could conceivably provide synthetic reporters to read synthetic stories to us – or am I getting carried away???
Oh, btw, OpenAI, which released ChatGPT in November, was co-founded with early funding from Elon Musk, though he left its board in 2018.
Cross-published at Upstate Earth.