
INEFFECTIVENESS OF AI DETECTORS

  • Editorial

By Faith Munanie


Imagine this: AI checkers have become a commercial product. One detector might claim a student’s work is 70 percent AI generated, another says 30 percent, and yet another gives a 50 percent score. The goal behind these wildly varying results appears simple: push users to pay for a premium version.


But even after paying, many students find themselves stuck in a loop. They attempt to “humanize” their work using one AI tool, only for another checker to flag the content as AI generated anyway. It becomes an endless cycle of editing, checking, and paying without ever reaching a definitive answer. Still, many lecturers blindly trust AI detection tools and make final judgments without even consulting the student. Ironically, some of these very detectors admit they aren’t fully reliable, but if you meet an old-school lecturer like our Mzungu Mumeru, you’re in trouble.


To make matters worse, regional language patterns are often flagged unfairly. Even though many generative AI models were trained on input from Kenyan users, students writing in Kenyan English are frequently penalized. Their authentic writing is labeled AI generated simply because it doesn’t align with Western standard patterns.


It’s a strange time when sounding too smart or writing too well is seen as suspicious. The irony is that we’re told to aim high in school, yet producing polished work now raises AI flags. It’s as if intelligence is being policed. The idea that using strong vocabulary or structured logic means you’re “trying too hard” is absurd; humans can be articulate, deep, and original. It’s frustrating and deeply discouraging, especially for people who’ve always written well.


The same resistance met the introduction of the internet and calculators; people tried to stigmatize them too. But we can’t go back to the Pascaline or logarithmic tables when calculators, and now AI, can do the work much faster and more efficiently. Some people were even against the printing press because they thought it would make people lazy, yet it used to take a monk a lifetime to copy just one Bible. With modern technology, we can produce millions of copies, with better illustrations and superb typesetting, something a human copyist could never achieve.


At this point, AI detection is fueling a form of anti-intellectualism, especially in academic writing, because everything is being questioned. Writers have to be extremely careful to avoid being flagged. It’s strange that different AI detectors give conflicting results, and it makes you wonder whether everything we were taught before university is now considered irrelevant. Humanizing essays with AI has created odd expectations: if you’re too clever, it’s considered fake; if you use rich vocabulary, you’re said to be trying too hard. It’s as if human intelligence is no longer believable.


Many in the older generation tend to focus only on the negative side of new inventions, forgetting there is a positive side as well. Being smart has somehow become a red flag. It feels like we simply have to get used to living among people who doubt that some individuals are genuinely intelligent, or that anyone learns beyond the classroom. And there’s nothing more frustrating than trying to prove a point to someone who refuses to change their view even when presented with clear evidence. Some people are still stuck in chapter one while others are already in the epilogue, and some might not even be aware the rest of the chapters exist.


AI text detectors are increasingly being exposed as ineffective and, ultimately, dangerous tools. Despite being sold to institutions and companies with promises of accuracy, these systems have repeatedly been shown not to work as intended.


There are first-hand accounts of students being wrongly flagged by systems like Turnitin. In one case, students were tasked with writing an ethics paper that was entirely abstract and based solely on personal thoughts and opinions. Even such original work ended up being flagged by AI grading systems, exposing a flaw in how these detectors evaluate text.


Another clear example of the unreliability of AI detectors involved a professor who failed a student for submitting an essay he believed was AI written. Ironically, the student’s apology email also appeared AI generated, raising further doubts about the accuracy of such judgments.


In another widely shared instance, a passage from the book of Genesis was submitted to an AI detection tool, which concluded with 78.99 percent certainty that the biblical text was AI generated. This highlighted just how flawed these detectors can be when applied blindly.


Experts have raised alarms, calling AI classifiers the modern-day equivalent of snake oil: sold for profit, with only the illusion of effectiveness. Notably, OpenAI, one of the leading players in artificial intelligence, discontinued its own classifier as of July 20, 2023, citing its low rate of accuracy. The company has instead committed to researching better techniques to help users understand whether content is AI generated, without relying on faulty systems.


Some students have voiced their frustrations online, noting that even essays written entirely from scratch after research, some up to 1,500 words, were still wrongly flagged as AI generated.


It is both wild and suspicious that we now have to dumb down our writing to avoid sounding like AI. I mean, since when did writing a proper, grammatical paragraph become suspicious?

 
 
 
