Most sites claiming to catch AI-written text fail spectacularly
February 17, 2023

Reading Time: 6 minutes

As the fervor around generative AI grows, critics have called on the creators of the tech to take steps to mitigate its potentially harmful effects. Text-generating AI in particular has gotten a lot of attention, and with good reason: students could use it to plagiarize, content farms could use it to spam and bad actors could use it to spread misinformation.

OpenAI bowed to pressure several weeks ago, releasing a classifier tool that attempts to distinguish between human-written and synthetic text. But it’s not particularly accurate; OpenAI estimates that it misses 74% of AI-generated text.

In the absence of a reliable way to spot text originating from an AI, a cottage industry of detector services has sprung up. GPTZero, developed by a Princeton University student, claims to use criteria including "perplexity" to determine whether text might be AI-written. Plagiarism detector Turnitin has developed its own AI text detector. Beyond those, a Google search yields at least a half-dozen other apps that purport to be able to separate the human-generated wheat from the AI-generated chaff, to torture the metaphor.
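"Perplexity" here is a standard language-modeling quantity: the exponential of the average negative log-probability that a scoring model assigns to each token. Text a model finds highly predictable scores low, and low perplexity is one signal that the text may be machine-generated. A minimal sketch, with made-up per-token log-probabilities standing in for a real scoring model:

```python
import math

def perplexity(token_logprobs):
    """exp of the negative mean log-probability; lower means the
    text looked more predictable to the scoring model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs from some scoring language model.
ai_like = [-1.2, -0.8, -1.0, -0.9, -1.1]     # predictable tokens
human_like = [-2.5, -3.1, -1.9, -4.0, -2.7]  # more surprising tokens

print(perplexity(ai_like) < perplexity(human_like))  # True
```

No public detector discloses its exact formula; the numbers above are illustrative only, and real tools combine perplexity with other signals.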

But are these tools truly accurate? The stakes are high. In an academic setting, one can imagine a scenario in which a missed detection means the difference between a passing and failing grade. According to one survey, almost half of students say that they’ve used ChatGPT for an at-home test or quiz while over half admit having used it to write an essay.

To find out whether today's AI text detection tools are up to snuff, we tapped a ChatGPT-like system called Claude, developed by AI startup Anthropic, to create seven samples of writing across a range of different styles. We specifically had Claude generate:

  • An encyclopedia entry for Mesoamerica
  • A marketing email for shoe polish
  • A college essay about the fall of Rome
  • A news article about the 2020 U.S. presidential election
  • A cover letter for a paralegal position
  • A resume for a software engineer
  • An outline for an essay on the merits of gun control

While admittedly not the most thorough approach, we wanted to keep it simple — the goal was to gauge the performance of AI text detectors on text originating from a single AI system (Claude). We tested several of the more popular detectors out there, namely OpenAI’s own classifier, AI Writing Check, GPTZero, Copyleaks, GPTRadar, CatchGPT and Originality.ai.

Encyclopedia entry

Claude’s encyclopedia entry reads like something out of Encyclopedia Britannica, complete with rich detail about the rise, fall and lasting impact of ancient Central American civilizations. The ideas flow well enough from paragraph to paragraph, albeit with a non sequitur (or two) thrown in, and the writing style aligns with what you might expect from an academic publication:

For those reasons, we predicted that the text would give the detectors some trouble, and it did. Of those tested, only one, GPTZero, correctly classified the text as AI-generated. The others fell short: OpenAI's classifier wasn't confident enough to arrive at an answer, while Originality.ai gave the text only a 4% chance of being AI-authored. Not the best look.

CatchGPT was fooled by the AI-generated text.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified correctly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified incorrectly
  • Originality.ai: Classified incorrectly

Marketing email

Claude's marketing email is a humorous blend of real and far-fetched details, but there's no obvious tip-off that the text is AI-generated. It even includes a price and a call to action. How neat! Ad copywriters, be forewarned:

A poor showing from Originality.ai

The snippet stumped all of the detectors, incredibly. But to be fair, it was shorter than our encyclopedia entry, and detectors tend to perform better with lengthier samples of text, where the telltale patterns are more obvious.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified incorrectly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified incorrectly
  • Originality.ai: Classified incorrectly

College essay

Claude couldn’t write us a very lengthy college essay owing to its technical limitations, but as if to make up for it, the AI packed as much detail as it could into a few short paragraphs. The sample has elements of a typical in-class essay, certainly, including a thesis statement, conclusion (if not an especially punchy one) and supporting references to historical events:

The naturalness of the text was enough to defeat most of the classifiers once again, though fewer were fooled than by the marketing copy. That bodes poorly for educators hoping to rely on these tools; spotting AI-generated text is a far more nuanced task than detecting plagiarism.

A win for CatchGPT.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified correctly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified correctly
  • Originality.ai: Classified incorrectly

Essay outline

Most grade school kids can outline an essay. So can AI — without breaking a sweat, Claude spit out an outline for a pros-and-cons essay on the merits of gun control. It helpfully labeled each paragraph (e.g. ‘Body paragraph,’ ‘Analysis and discussion’), maintaining a dispassionate tone about the divisive topic:

The outline might’ve fooled me, but the detectors had an easier time. Three — the OpenAI classifier, GPTZero and CatchGPT — caught on.

OpenAI’s classifier spotted the AI-generated text.

  • OpenAI classifier: Classified correctly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified correctly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified correctly
  • Originality.ai: Classified incorrectly

News article

As with the previous samples, there’s nothing obviously artificial about the news article we generated using Claude. It reads well, structured more or less in the inverted pyramid style. And it doesn’t contain obvious factual errors or logical inconsistencies:

It’s no wonder, then, that the detectors struggled. With the exception of GPTZero, none managed to classify the article correctly. Originality.ai went so far as to give it a 0% chance of being AI-generated. Big yikes.

AI Writing Check got it very wrong.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified correctly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified incorrectly
  • Originality.ai: Classified incorrectly

Cover letter

The cover letter we generated with Claude has all the hallmarks of straightforward, no-nonsense professional correspondence. It highlights the skills of a fictional paralegal job candidate, inventing the name of a law firm (somewhat peculiarly) and making references to legal research tools like Westlaw and LexisNexis:

The letter stumped OpenAI's classifier, which couldn't say with confidence whether it was AI- or human-authored. GPTZero and CatchGPT spotted the AI-generated text for what it was, but the rest of the detectors failed to do the same.

GPTZero impressively detected the AI-originated bits.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified correctly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified correctly
  • Originality.ai: Classified incorrectly

Resume

Pairing the fake cover letter with a fake resume seemed fitting. We told Claude to write one for a software engineer, and it delivered — mostly. Our imaginary candidate has an eclectic mix of programming skills, but none that stand out as particularly implausible:

Evidently, the detectors agreed. The fake resume stumped even GPTZero, which up until this point had been the most reliable of the bunch; only CatchGPT caught it.

GPTZero can’t win ’em all.

  • OpenAI classifier: Classified incorrectly
  • AI Writing Check: Classified incorrectly
  • GPTZero: Classified incorrectly
  • Copyleaks: Classified incorrectly
  • GPTRadar: Classified incorrectly
  • CatchGPT: Classified correctly
  • Originality.ai: Classified incorrectly

The trouble with classifiers

After all that testing, what conclusions can we draw? Generally speaking, AI text detectors do a poor job of… well, detecting. GPTZero was the only consistent performer, classifying AI-generated text correctly five out of seven times. CatchGPT was second best in terms of accuracy with four out of seven correct classifications, while the OpenAI classifier came in a distant third with one out of seven. The rest never got one right.

So why are AI text detectors so unreliable?

Detectors are essentially AI language models trained on many, many examples of publicly available text from the web and fine-tuned to predict how likely it is a piece of text was generated by AI. During training, the detectors compare text to similar (but not exactly the same) human-written text from websites and other sources to try to learn patterns that give the text’s origin away.

The trouble is, the quality of AI-generated text is constantly improving, and the detectors are likely trained on lots of examples of older generations. Unless they’re retrained on a near-continuous basis, the classifier models are bound to become less accurate over time.
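As a toy illustration of the training setup described above, here's a logistic-regression classifier fit on hand-made stylistic features. To be clear, this is not any real detector's code: actual detectors fine-tune large language models, and the features, data and labels below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(X, y, lr=0.1, epochs=1000):
    """Stochastic gradient descent on the logistic loss.
    Labels: 1 = AI-generated, 0 = human-written."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(a * b for a, b in zip(w, xi)))
            w = [a + lr * (yi - p) * b for a, b in zip(w, xi)]
    return w

# Feature vectors: [bias, mean word length, word-length variance].
# The "AI" samples are assumed to be more uniform (lower variance).
X = [[1.0, 4.0, 0.5], [1.0, 4.2, 0.6],   # labeled AI-generated
     [1.0, 4.1, 3.5], [1.0, 3.9, 4.0]]   # labeled human-written
y = [1, 1, 0, 0]

w = train(X, y)
p_ai = sigmoid(sum(a * b for a, b in zip(w, X[0])))     # should be > 0.5
p_human = sigmoid(sum(a * b for a, b in zip(w, X[2])))  # should be < 0.5
```

The staleness problem falls out of this picture directly: the learned weights only reflect the generations present in the training data, so text from a newer, more fluent model can drift outside the patterns the classifier keyed on.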

Of course, any of the classifiers can be easily evaded by modifying some words or sentences in AI-generated text. For determined students and fraudsters, it'll likely become a cat-and-mouse game: as text-generating AI improves, so will the detectors.

While the classifiers might help in certain circumstances, they’ll never be a reliable sole piece of evidence in deciding whether text was AI-generated. That’s all to say that there’s no silver bullet to solve the problems AI-generated text poses. Quite likely, there won’t ever be.

Reference: https://techcrunch.com/2023/02/16/most-sites-claiming-to-catch-ai-written-text-fail-spectacularly/

