“I built a tool that helps AI-written content pass as human – and realized it could fuel cheating, spam, and mistrust. But here’s why I believe it’s part of something bigger (and not all bad).”
– Developer of humanizer.fi
The Tool Helping AI Cheat the System – And Why I’m Okay With It
Gulp. I had just coded a tool that could help students cheat and marketers lie. What had I done?! 😳 The room felt warmer. My fingers hovered over the keyboard. My conscience was blinking red. 🚨
It all began with an exciting new software project in natural language processing. The goal? To create a tool that takes a chunk of Finnish text and makes it sound more human and natural. Simple, useful, helpful. Right?
The tool had clear benefits. AI chatbots often produce clunky Finnish, and users need help polishing the output. For tasks like this, the humanizer.fi app[1] works wonders. It felt good to contribute to AI-human collaboration – not just in English, but in smaller languages too.
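For the technically curious: conceptually, a humanizer can be as thin as a rewriting prompt wrapped around a large language model. Below is a minimal sketch of that idea in Python – an illustration only, not the actual humanizer.fi implementation; the model name, prompt wording, and temperature are my own assumptions for the example.

```python
# Minimal sketch of an LLM-based "humanizer" for Finnish text.
# Illustrative only – NOT the humanizer.fi implementation.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# "Rewrite the following Finnish text to sound natural and human.
#  Preserve the meaning, but vary sentence structure and vocabulary."
REWRITE_PROMPT = (
    "Muokkaa seuraava suomenkielinen teksti kuulostamaan luontevalta ja "
    "ihmismäiseltä. Säilytä merkitys, mutta vaihtele lauserakenteita ja sanastoa."
)

def humanize(text: str, model: str = "gpt-4o-mini") -> str:
    """Return a more natural-sounding rewrite of the given Finnish text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REWRITE_PROMPT},
            {"role": "user", "content": text},
        ],
        temperature=0.9,  # more sampling variety -> less "AI-flat" phrasing
    )
    return response.choices[0].message.content

print(humanize("Tekoäly on teknologia, joka mahdollistaa monia asioita."))
```

The interesting knob is the temperature: turning it up trades consistency for the varied phrasing that tends to read as human.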
But the more I thought about it, the more uneasy I became. This tool wasn’t just a writer’s assistant. It was also a disguise machine. A way to cloak AI-written content in a human tone and bypass detection. And that’s where the ethical alarm bells started ringing.
Hide from the Detectors!
Who wants to dodge AI detectors? Two types of people: content marketers and students.
First, the marketers. They fear being penalized for using AI-generated content, not by readers, but by search engines. And search traffic equals money.
Then, the students. Homework, essays, research papers – easy prey for generative AI. Schools use AI detectors to flag this, but tools like mine create an arms race between generation and detection.
Teachers are justifiably worried. Not just because of cheating, but because detectors are unreliable and can produce false positives, wrongly accusing students of using AI when they haven’t. The confusion is real, and growing.
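Why are the detectors so shaky? Many early ones leaned on a perplexity heuristic: if a language model finds your text too predictable, you get flagged as a machine. Here is a toy sketch of that idea – my illustration using GPT-2 via Hugging Face transformers, not any real detector’s internals, and the threshold is an arbitrary assumption. It also shows exactly where the false positives come from.

```python
# Toy sketch of the perplexity heuristic behind many early AI detectors.
# Illustrative only – no real detector is this simple, and the threshold
# is an arbitrary assumption for the example.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: lower = more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the tokens as their own labels yields the average negative
        # log-likelihood; exponentiating turns that into perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def naive_detector(text: str, threshold: float = 30.0) -> str:
    # The flaw: plain, formulaic human prose (common for non-native
    # writers) is also highly predictable – hence the false positives.
    return "flagged as AI" if perplexity(text) < threshold else "looks human"

print(naive_detector("The cat sat on the mat. The dog sat on the mat too."))
```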
And there I was, wondering: Am I enabling academic fraud? Helping flood the internet with spam? Eroding trust in digital content? Holy shit.
Was I part of the problem?
Maybe Not.
Let’s zoom out.
Are students misusing AI? Or just using it as intended? Maybe they gain a short-term advantage, but in the long run, they cheat themselves out of learning. The tool doesn’t change that – the student’s choices do.
And marketers? Google’s official position (as of 2023) was that quality matters more than the tool used. Helpful content won’t be penalized just because it’s AI-generated.[2] Yes, rumors swirl about Google removing AI-written content, but that’s likely due to spam and duplication – not AI per se.
So I stopped sweating. My tool helps people create better-sounding content. That includes marketers optimizing for SEO, students looking for clarity, and anyone else who wants to sound more human. I’m okay with that.
The End of the Arms Race?
AI detectors have struggled from the start. They’ve falsely flagged real human writers, especially non-native speakers.[3] Even OpenAI had to shut down its own classifier due to poor accuracy.[4]
And honestly, with how fast AI is improving, how long until generated text becomes undetectable? Spoiler: not long. Experts agree![5] The detection game might already be lost.
If AI writing becomes indistinguishable from human writing, what happens next? No more flags in classrooms. No more SEO penalties. The whole ethical debate around tools like humanizer.fi might just vanish into irrelevance.
And when that happens, we need a whole new mindset.
A Complete Paradigm Shift
What if we stop worrying about who wrote the content, and start focusing on what the content says?
Ideas. Accuracy. Originality. Quality.
That’s where our focus should shift.
But here’s the twist: AI is coming for those, too. Tools are already improving at generating not just polished text, but truly insightful content.
There were predictions that 90% of all online content would be AI-generated by 2025.[6] Maybe we’re already there. So what happens to quality information in a pool of toxic waste?
With 175 trillion gigabytes of data online, much of it environmentally harmful* trash, i.e. low-quality, spammy, or redundant content,[7] we risk burying human insight under a digital landfill. (*Considering the energy consumption of data centers. 😉)
So What Do We Do?
We evolve.
Enter the era of machine-mediated consumption. People won’t browse articles; they’ll ask their AI assistant for the highlights. Google’s already on this path with AI Overviews – auto-summaries at the top of search results.
But more content isn’t better content. These AI summaries have been caught being biased, misleading, even misquoting sources.[8]
With SEO-driven content cluttering the web, search is becoming less about finding answers and more about sifting through noise.[9]
Let’s Talk About Money
This shift hits the ad business, too. Zero-click searches – where the answer appears right on the results page, so nobody clicks through – mean fewer site visits, which means fewer ad impressions.
Publishers lose traffic.[10] Google loses revenue.[11] Companies will need new playbooks: subscriptions, exclusives, sponsored chatbot answers.
But this also opens doors. What about startups that verify human authorship? Or AI tools that rank quality, not keywords? When the rules change, so do the opportunities.
Time to Get Serious
We started with a little Finnish text converter. Now we’re talking about education, economics, and existential risk.
AI isn’t just changing essays or ad copy. It’s changing everything. Even war.[12]
So the real question isn’t whether we can detect AI.
It’s whether we can still detect what makes us human.
That’s the debate we need to have. And we need to have it now.
This article was written by the developer of humanizer.fi, an AI-text humanizer for the Finnish language, and is also published on LinkedIn.