The Slow Death of Authenticity in Newsrooms
- Editorial
- Mar 18
By Faith Munanie

Have you ever read a news article and wondered whether it was really written by a journalist or by a machine? Do you sometimes feel that news stories are missing something, that they lack the authenticity they once had? What happens when the people we trust to tell us the truth start using AI to do their work? Can journalism survive in a world where machines are taking over human jobs?
AI-generated journalism is not confined to newsrooms; it has also found its way into lecture halls. Many students pursuing Bachelor’s degrees in Journalism and Mass Communication in Kenya have already started relying on AI tools to create news stories. ChatGPT, Jasper AI, and Copy.ai are now being used to generate articles, press releases, and even interview transcripts. What used to take hours of research and writing can now be done in minutes. Some students argue that AI makes their work easier, freeing them to focus on creativity and storytelling. Others worry that this overreliance is killing the skills real journalists need: critical thinking, investigative research, and the ability to write compelling stories. If students are already depending on AI before they even join newsrooms, what will journalism look like in the next few years?
At first, AI in journalism sounded like a good idea. AI can write stories faster and save companies money, especially now that most media houses are struggling with falling sales. Deadline pressure also eases, since AI can analyze large amounts of information in seconds. But this new way of working is already creating serious problems.
One of the biggest issues is accuracy. AI collects data from the internet, but it cannot confirm whether that information is correct. There have already been cases of AI-generated stories riddled with mistakes. In 2023, Sports Illustrated was exposed for publishing AI-written stories under fake journalist bylines. When the truth came out, the magazine quickly deleted the articles but never fully admitted its mistake (Futurism, 2023).
Similarly, The Star Kenya faced criticism when readers noticed that some of its online articles contained AI-generated content lacking depth and context. Many social media users called the outlet out, pointing to repetitive sentences and factual errors. Tuko.co.ke also came under fire when one of its AI-generated entertainment articles misquoted a Kenyan celebrity, prompting backlash and demands for a correction. Pulse Live Kenya published an AI-written piece about a political scandal, but keen readers realized it contained outdated information from previous years, making it clear the article had been auto-generated without proper verification.
If major media houses in Kenya are already struggling to control fake news created by AI, how much worse could it get as the technology advances? During Kenya’s 2022 General Elections, some media houses tried using AI for real-time vote tallying. At first it seemed like a smart move: news was being reported faster, with fewer mistakes. But soon Kenyans started noticing numbers that didn’t add up. Some media houses were reporting different figures from others, making people suspect they were being manipulated (Media Council of Kenya, 2022).
But the problem with AI journalism is not just mistakes. AI is now being used to create deepfake content that confuses the public. In 2023, AI-generated videos falsely showing police officers beating protesters were shared widely. Thousands of Kenyans believed the videos before fact-checkers proved they were fake (Africa Check, 2023). By then the damage had been done: anger had spread, and many people refused to believe the truth when it finally came out.
A similar case occurred when AI-generated images of a supposed "one million chapatis per day" machine went viral. After President William Ruto announced a plan to acquire a machine that could produce a million chapatis daily, the internet went wild. AI-generated pictures of massive chapati-making machines flooded social media. Many people thought they were real, while others made jokes and memes about them. The images spread so fast that even those who initially doubted them started believing such a machine existed (Instagram, 2024). This was yet another example of how AI can create viral content that people accept as truth, even when it is completely fake.
Sports journalism in Kenya is also changing because of AI. Websites like Pulse Sports Kenya and SportsBrief now use AI to write quick match reports. While AI is good at listing scores and statistics, it fails at telling the deeper stories behind the game. The excitement, the emotions, the real struggles of the players: these are things only a human journalist can capture.
Even with all these problems, AI is not entirely bad for journalism. Used well, it can help journalists with research, data analysis, and even fact-checking. But AI should remain a tool, not a replacement for human journalists. Experts in Kenya have raised similar concerns. Dr. Nancy Booker, Director of Academic Affairs at the Aga Khan University Graduate School of Media and Communications, warns that “AI should complement journalism, not replace it. We need policies to ensure accuracy and accountability.” David Omwoyo, CEO of the Media Council of Kenya, has also emphasized the importance of regulation: “Kenyan media houses must be transparent about AI use. If readers can’t trust the news, journalism as we know it will collapse.”
The Media Council of Kenya must step in and set rules for AI-generated journalism before it is too late. If media houses keep publishing AI-generated articles without telling readers, people will start questioning whether any news is real. Journalism has always depended on trust, and once that trust is lost, it is hard to win back.
The future of news in Kenya is in the hands of media houses and readers. Used well, AI can help journalists do better work. But if it takes over completely, the news will lose its truth and its readers’ trust. And when people stop trusting the news, they will stop reading it.