AI is a new tool that brings a lot of convenience, optimizing our daily tasks and the way we search for information. In its many incarnations, it suggests relevant products, edits visual noise out of our photos, helps us discover music we'll like, and generally makes our lives more comfortable.
However, like any other tool, it serves some ends better than others. With this post, we open a series on the disadvantages of using AI for educational purposes. I must say that I, for one, am enthusiastic about LLMs and the things they are capable of. There's no denying that they can help editors, translators, bloggers, and copywriters work faster and better, and get more satisfaction from their daily tasks. However, I must stress that all these professionals have strong language competencies and writing skills they can fall back on. They can use their judgment and professional opinion to correct any deficiencies in AI output.
Students, by contrast, don't yet possess the skills and experience needed to tell whether AI did a good job. That is why this series will specifically discuss the disadvantages of AI in education and the impact of AI-powered tools on the skill development and academic results of high school and college students.
General Implications and Disadvantages of AI in Education
We have been covering AI a lot lately, with experts chiming in from all ends of the opinion spectrum – from those convinced that AI will end education as such to those who dismiss it as merely a shiny new toy that will lose its appeal to students before the general public is even done discussing it.
Many educators have highlighted the benefits AI can offer students: removing writer's-block anxiety with quick ready-made drafts while teaching them the value of editing. However, I must admit that these benefits aren't immediately apparent and require thoughtful instruction from open-minded teachers. Unfortunately, that's not where most students are at the moment.
Most are overwhelmed by the ever-growing volume of writing assignments and would welcome any tool that lets them cut corners. I am sure that, if not for the fear of being caught, many more students would jump at the chance to generate their essays. In essence, it's not a fascination with LLMs like ChatGPT as such – more of an "I don't care how it's done; as long as I don't have to write my paper, I'm happy enough" attitude.
Strictly speaking, this approach is not that different from old tricks like buying last year's papers from senior students, paying a classmate to write your essay for you, or just copying big chunks of text from an old book in the hope that no one has added it to the plagiarism checker's index.
However, even though AI-generated texts pass plagiarism detectors, they aren't impervious to automated checks. First of all, since every word in a generated text is algorithmically calculated, the output is statistically predictable – and that predictability can itself be detected. In layperson's terms, AI detectors sniff out AI-generated content fairly easily – birds of a feather, if you will. To test this, I conducted a little experiment: I asked ChatGPT to generate a philosophy essay about good and evil, deliberately choosing the humanities and a broad topic for maximum flexibility. Yet even basic free detectors widely available online had no difficulty cracking it.
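For the technically curious, the core intuition can be sketched in a few lines of code. What follows is a toy illustration of perplexity-based detection – one common approach, and my assumption about how such free tools work rather than their actual code – using the open-source GPT-2 model from Hugging Face; the sample sentence is made up purely for demonstration.

```python
# Toy perplexity check: how predictable does a small language model find this text?
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = the model finds the text more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return its own cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sample = "Good and evil are fundamental concepts in philosophy."  # made-up example
print(f"Perplexity: {perplexity(sample):.1f}")
# A real detector would compare scores like this (plus other signals such as burstiness)
# against calibrated thresholds rather than a single magic number.
```

Text that a language model finds highly predictable is statistically more likely to have been produced by a similar model – exactly the "birds of a feather" effect described above.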
The fact that the text is predictable is not the only issue. Proponents of AI use for writing practice often stress how generated texts can serve as an example, especially for ESL students, bridging the gap and empowering those who struggle with English. However, in my opinion, learners should take their cues from the best available examples, whereas AI, being trained on content from all over the internet, does not generate impeccable texts. The spelling is mostly fine, but its output is still riddled with mistakes in word use, syntax, punctuation, and grammar.
These are the most apparent AI disadvantages, stemming from the dubious quality of generated papers. However, there are more significant implications that will only become relevant when (and if) AI gets considerably better, and hence can only be measured through longitudinal studies in some distant future. Still, even today we can make certain assumptions and informed guesses based on our knowledge of the education system and AI capabilities. Let's focus on these long-term disadvantages of AI for education that will primarily affect students.
Disadvantages of AI for Students
I am not going to get into the intricacies of academic integrity. Instead, I want to focus on the unique downsides of automatically generated papers and the harm they might cause when seen as an example of proper academic writing.
- Example of uninventive, formulaic writing
These papers are generated from the templates plastered all over the internet: 3-5 paragraphs, an introduction stating "this paper is about," and a conclusion that juggles the same words in a slightly different order but does nothing to actually close the argument. This is the template a middle-school teacher gives you when you know nothing about writing, composition, or analysis. It's an exercise.
For an algorithm, arranging words, sentences, and paragraphs into a coherent text is no mean feat, and I can see why people are raving about it. By human standards, though, it's mediocre. When high school and college students see this presented as an "excellent example," it narrows their view and makes them think that this uninventive, painfully straightforward composition is the only way to write, and that any deviation from it must be a mistake.
- Bland, insipid style
The cookie-cutter structure is just one shortcoming of generated texts. The awkward style is another big issue. Yes, with all the dictionaries and thesauri fed into it, a language model can wow you with flowery verbiage and exotic synonyms used correctly and to the point, but that's not what we mean by style – not if we rise above the middle-school level, anyway.
What makes any reading compelling and engaging – humor, vivid analogies, unexpected comparisons, keen observation of reality – is woefully lacking in generated texts. They are the mean value of all the other texts that use approximately the same set of words. If styles are colors, you can get a rainbow by hand-picking them and using them in your text intentionally. But if you average them, you get a depressing muddy mess.
Again, to find your voice and develop a unique style, one must consume a lot of good examples. If, instead of reading human-written originals bearing the imprint of individuality, students read these secondhand imitations and think of them as "exemplary," it will stunt their writing skills, making them complacent and unwilling to strive for better, make an effort, and grow.
- Factual mistakes
Let's suppose that students won't generate papers just to hand them in. What if they aren't after quick fixes but instead want to leverage AI's educational potential and ask probing questions? Isn't that what ChatGPT was designed for? Indeed, AI does provide some very relevant and concise results and can help with research. However – and I cannot stress this enough – it cannot be a complete substitute for direct work with the sources. It's a generative model, so it generates. It's given some flexibility and liberty in order to be "creative"; otherwise it wouldn't be able to produce new material. Yet that also means it can mix things up, confuse or conflate facts, or flat-out invent them.
Everything ChatGPT or a similar AI tells you should be fact-checked and backed up by an authoritative scholarly source. Believing that an LLM is a reliable source of factual information is a big mistake that can cost you a grade.
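To make the "flexibility and liberty" point more concrete, here is a toy sketch – not ChatGPT's actual code, just the standard temperature-sampling idea that generative models rely on – with an invented vocabulary and made-up scores for illustration.

```python
# Toy next-word sampling: the "temperature" knob trades predictability for variety.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores (logits) for the word after "The author of the study is ..."
vocab = ["Smith", "Johnson", "Lee", "a made-up name"]
logits = np.array([2.0, 1.5, 1.0, 0.2])

def sample_next(logits: np.ndarray, temperature: float) -> str:
    """Low temperature favors the most likely word; high temperature adds randomness."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

for t in (0.2, 1.0, 2.0):
    picks = [sample_next(logits, t) for _ in range(8)]
    print(f"temperature={t}: {picks}")
```

At higher temperatures the model increasingly picks less likely continuations, which is where fluent-sounding but false details can sneak in.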
- No reliable quotes
You must have heard that ChatGPT can be employed as a search engine: to suggest games, films, books, or products fitting your description. For writing purposes, students might be tempted to ask it for a relevant quote, and it will oblige. However, unless the quote is a snappy one-liner from Shakespeare on love or hatred, one must spend a copious amount of time trying to verify and track it down.
AI has no familiarity with reality. All it knows is the dataset of text it was fed. It cannot tell the difference between "generating original text" and "inventing things that aren't true." So students trying to cut corners by plugging in one or two "expert quotes" to give their essay more gravitas might end up with entirely fabricated words attributed to non-existent but believable-sounding experts. In my experience, ChatGPT does sometimes warn that the quotes are generated and only look real. Still, it often only confesses if you suss it out and ask about it directly.
The temptation to get a nice selection of quotes to sprinkle through your essay with a click of a button, instead of hours of reading, bookmarking, highlighting, and note-taking, is understandable. Yet the chances are high that you will end up with fake ones.
- Low grades
With uninventive structure, dull writing, factual mistakes, and fake quotes as the input, you will never get an A-worthy paper as the output. Students who boast on social media about high grades earned by "just cheating with ChatGPT" must simply have overwhelmed teachers who have neither the time nor the energy to grade their papers properly and read them with the necessary attention and deliberation.
Additionally, I'm inclined to believe that these students are still in middle school. At higher academic levels, instructors don't assign papers that require nothing more than regurgitating facts from the course book – the only thing ChatGPT can fake convincingly. When it comes to opinions, personal takes, examples, and the like, it fails miserably.
- No creative process = no skills formed
This is one of the less immediately apparent downsides of AI. Imagine for a moment that AI developers address all the flaws I listed above and fix ChatGPT. Now it can produce not just passable but genuinely good papers, with accurate quotes, that fool AI detectors. Is that a win for students?
The answer is obvious: it's a disaster. Of course, AI only highlights underlying problems in modern education, including the failure to engage students and motivate them intrinsically. As a result, students see schooling as a game in which they feed in papers and tests, collect grades, and level up through a series of matriculations and graduations, with the final boss being a master's thesis. It's all transactional. Grades are currency. The only skill worth acquiring is how to play the game, including how to get good grades effortlessly.
ChatGPT makes gaming the system much, much easier, and it is spreading at an alarming rate. Students who were previously misguided about the purposes of education at least had some motivation to try. When they failed, they learned something. Each failure was a brick paving the way to success, even if their motivation was a grade rather than progress as such. Now that students can cheat this easily, they have no motivation at all.
- No impetus for growth
This is a more general disadvantage of AI, not unique to education but intrinsically linked to it. If students have no motivation to build skills and grow, it starts a rather dismal chain of events: no initial investment in problem-solving -> no joy of discovery or ownership of results -> no pride in one's work -> no job satisfaction. Students who learn only the single skill of feeding the right instructions to an AI risk ending up redundant in the brave new world of post-industrial society.
This situation reminds me of the sci-fi parody film Galaxy Quest, in which Sigourney Weaver played Gwen DeMarco – a crew member whose only job on the starship was to communicate with the voice-operated board computer and repeat everything it said back to the others. The job was meant as the epitome of uselessness and a wry commentary on the reductive portrayal of female characters in sci-fi. Yet now it looks to me like students are practicing for exactly that kind of job, one that promises neither challenge nor fulfillment.
This is a bird's-eye view of the disadvantages AI holds for education. In later posts, we intend to look more closely at the particular shortcomings of AI samples in various genres of academic writing across different subjects.