Writer.com AI Content Detector Review: Does It Actually Work?

A major advancement in AI took place late in 2022 when OpenAI released ChatGPT, a chatbot that can produce humanlike text on almost any topic. It generates content in minutes that would traditionally take hours to write.

There are a ton of use cases for AI writing bots like ChatGPT. For example, you can use them to generate blog posts, emails, song lyrics, presentations, and more.

But the problem is that search engines like Google don’t want the search results to be littered with AI-written low-value content. And why would they? AI-written content doesn’t bring anything new or unique to the table.

But thanks to today's AI algorithms, AI-generated content often reads as if a human wrote it. This makes it difficult for Google to tell whether a piece of text was written by a human or by AI.

Or is it?

There are a bunch of AI content detector tools that can spot AI-written content. One of the most popular AI detector tools is writer.com’s AI content detector bot.

This is a complete review of writer.com's AI content detector. I'm going to run a simple yet effective test: feed the tool 10 human-written pieces of content and 10 AI-written pieces. I'll also show you how to trick the AI content detector, and I'll share my opinion on whether you should use AI detectors at all.

💡 Make sure to read my comprehensive guide to the Best AI Content Detectors.

What Is Writer.com AI Content Detector?

In short, the writer.com AI content detector checks whether your text was written by AI. It gives you a human-written score between 0% and 100%.

If the text is 100% human-generated, then you’re good.

If the text is less than 80% human-generated, the tool suggests you edit the text until you get a score of 80%+.
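This scoring rule is just a simple threshold. Here's a minimal sketch in Python (the 80% cutoff is writer.com's own recommendation; the function name and messages are mine):

```python
def interpret_score(human_score: float) -> str:
    """Interpret a writer.com-style human-written score (0-100)."""
    if human_score >= 80:
        return "looks human-written (green light)"
    return "edit the text until the score reaches 80%+"

print(interpret_score(100))  # looks human-written (green light)
print(interpret_score(45))   # edit the text until the score reaches 80%+
```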

The tool is free to use. All you need to do is paste your text into the writer.com text editor and click Analyze text. In a matter of seconds, the tool shows you the human-written score.

Now, let’s see how the tool actually performs.


Writer.com AI Content Detector Test

Let's put writer.com's AI content detector to the test by feeding it 10 human-written samples and 10 AI-written samples. I'm also going to analyze the results to see whether you should use this tool or not.

Let’s start by feeding the human-written content to the AI detector.

1. Human-Generated Content

The following 10 examples of writing are sections randomly taken from my coding/tech blog. These are 100% human-written pieces of content.

So in an ideal world, the AI content detector should give a 100% score for each input.

This is what we want to see for the following 10 sample texts.

Let’s see how it actually plays out!

Example 1

Example 2

Example 3

Example 4

Example 5

Example 6

Example 7

Example 8

Example 9

Example 10

In total, the tool rated only 1 of the 10 human-written pieces as entirely human-written.

But that's just the count of 100% human-written predictions. We shouldn't be that strict when assessing this tool, because it's very hard to say a text is exactly 100% human-generated. A more relaxed threshold, like 80%, is good enough.

On writer.com, if the score is more than 80%, you see a green light!

But in my tests, only 4 out of 10 samples got the green light. That means the tool thinks 60% of the samples are AI-written, even though I wrote them myself. So it's definitely not accurate at detecting human-written content.

But hey, it's an AI detector, not a human detector. As the next test, let's feed the writer.com AI content detector a bunch of AI-written samples.

2. AI-Generated Content

Now, let’s see how the AI detector performs with AI-generated content. Ideally, we should see a 0% score for all the text samples that follow.

But because absolute predictions are tricky, let's count a sample as correctly detected whenever its score isn't green (that is, below 80% human-written)!

Example 1

Example 2

Example 3

Example 4

Example 5

Example 6

Example 7

Example 8

Example 9

Example 10

I think the AI detector did much better this time. It flagged 8 of the 10 samples as AI-written, and it gave some of them really low human-written scores, which is good!

But it also falsely identified two samples as almost entirely human-written. This makes me think it's quite easy to trick the algorithm behind this tool.

Can You Trick the AI Content Detector?

Writer.com's AI detector does a decent job of detecting AI-written content. In my tests, it flagged 80% of the AI-written samples as AI-written. Not perfect, but it at least gives you some direction.

Now, let’s try to cheat the system with a bunch of easy tricks.

I’m going to take this sample that the AI detector correctly marked as 0% human-generated:

Test 1: Remove a Comma

➡️ TLDR; Removing a single comma changed the AI content detector’s mind from 0% human-generated to 71% human-generated content.

As a first test, let’s introduce a small grammatical incorrectness in the content by removing a comma:


Removing a single character from the text makes the AI detector see the content in a completely different light. Now the tool says almost all the content is written by a human, even though I just made a single change in it.

Test 2: Make a Typo

➡️ TLDR; Removing a single character changed the AI content detector’s mind from 0% human-generated to 98% human-generated content.

Now, let’s make an intentional typo to see how it affects the output of the AI content detector. I’m going to remove the character “i” from “their”:

This time, removing a single character completely changed the AI detector's mind about the content. Now it claims that almost the entire piece of text was written by a human, even though we changed just one character.

But making the text grammatically incorrect is not the best way to fool an AI detector. You get a good score, but you're left with incorrectly written content.

Let’s try something more “clever”.

Test 3: Use an AI Paraphraser

➡️ TLDR; Rephrasing the content changed the AI content detector’s mind from 0% human-generated to 98% human-generated content.

You can use AI to trick an AI content detector.

These days, there are lots of paraphrasing tools, such as QuillBot. The idea is simple: paste some text into the tool, click "Paraphrase", and let the AI rewrite the text for you.

For example, let’s input the 0% human-generated content to QuillBot:

In seconds, the paraphrasing AI produced a rewritten version of the AI-generated content.

Now, let’s enter the paraphrased version in the AI detector.

And here we go!

A sample originally rated 0% human-written that the tool now claims is 98% human-written. All I had to do was generate the content and paraphrase it with AI. In total, it took about 5 seconds to trick the AI detector. And the new content reads okay (except for the first sentence, which sounds odd).

Based on the tests, let’s take a look at the pros and cons of writer.com’s AI content detector.


Pros

  • Free. The writer.com AI content detector is free to use. You don't even need an account to start testing content.
  • Easy to use. Because the AI content detector is just a browser app, all you need to do is visit writer.com's website to start using it. No installation needed!
  • Decent accuracy. Based on my tests, the tool was able to spot 8/10 AI-written pieces.


Cons

  • Easy to trick. It's easy to make content pass the AI detector with flying colors by rephrasing AI-written text with a tool like QuillBot. Also, very small changes to the input can completely change the detector's mind.
  • Unreliable. The tool falsely flagged 6 of my 10 human-written samples as AI-written, and it passed 2 of the 10 AI-written samples as human-written.
  • Word limit. Last but not least, the writer.com AI content detector has a 350-word limit.

Final Verdict

Writer.com's AI content detector can give you a rough idea of whether content is AI-generated. But it's not sophisticated enough to come even close to detecting all AI-written content.

Even though my sample size in testing was quite small, I could already get an idea of how unreliable the tool is.

  • 60% of human-written samples were falsely detected as AI-written content.
  • 20% of AI-written samples were falsely identified as human-written content.
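These two error rates follow directly from the raw counts in my tests. A quick sanity check in Python (counts taken from the tests above, using the 80% green-light threshold):

```python
# Raw counts from my tests (green light = 80%+ human-written score).
human_samples_total = 10
human_samples_green = 4    # human-written samples the tool passed as human
ai_samples_total = 10
ai_samples_flagged = 8     # AI-written samples the tool correctly flagged

# Human-written text wrongly marked as AI-written:
false_positive_rate = (human_samples_total - human_samples_green) / human_samples_total
# AI-written text that slipped through as human-written:
false_negative_rate = (ai_samples_total - ai_samples_flagged) / ai_samples_total

print(f"Falsely detected as AI-written: {false_positive_rate:.0%}")   # 60%
print(f"Falsely passed as human-written: {false_negative_rate:.0%}")  # 20%
```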

Also, it only took 2 seconds to make a change to the input that completely fooled the detector.

I would say this tool gives you some idea of whether content is AI-written or not. But I wouldn't rely on it!

Alert for bloggers! Using an AI content detector to trick Google is a bad idea. If Google uses an algorithm to spot AI-written content, it probably works completely differently than the publicly available tools.

Even if you get a 90-100% human-written score from an AI detector, it doesn't mean Google won't detect the content as AI-written and penalize your site.

In short, an 80-100% score doesn't guarantee anything!
