Chatbots Suck at Journalism
February 27, 2023

Reading Time: 10 minutes

I’m Better Than Chatbots at the Job They’re Trying to Take. So why is journalism welcoming them?

If there is one thing the boosters and cynics agree on about artificial intelligence, it’s that the tech is coming for white-collar jobs. One obvious target of text-generating tools like ChatGPT? Journalism. This is not speculation about some far-off future—it’s happening now.

It makes sense, from a cold business perspective, that text-based media would want to adopt A.I. in order to cut costs (humans, expensive) and speed up output (humans, slow). Just look at how BuzzFeed’s rock-bottom stock price jumped when it said last month that the site would use services from buzzy startup OpenAI to spiff up its famed quizzes. As Damon Beres wrote in the Atlantic shortly after the announcement: ‘The bleak future of media is human-owned websites profiting from automated banner ads placed on bot-written content, crawled by search-engine bots, and occasionally served to bot visitors.’

This hype-and-fear cycle has persisted for months, generating an infinite scroll of warnings about the A.I. content we’ll all be subjected to in the future. The prophecies are well-taken. But recent examples demonstrate that it might not make business sense—at least, not just yet—for journalists to be displaced by tools like ChatGPT, in part because these artificial intelligence and machine learning services are very bad at journalism.

Since 2014, plenty of newsrooms have tapped automated and even artificial-intelligence tech to aid their work: the Associated Press, Reuters, and the Washington Post, for tallying corporate earnings and sports-game scores; Bloomberg, to personalize news feeds and search results for individual readers; the Los Angeles Times, for brief and speedy reporting on homicides and earthquakes; the Guardian, to track international political donations. MediaDownloader uses automated podcast transcription for accessibility purposes, and—for kicks—once tested whether ChatGPT could offer advice appropriate for Dear Prudence. (Hey, nothing wrong with a gimmick.) British publications like Press Association and the Times utilize the tech to extrapolate trend stories from mass data sets, and to personalize newsletters delivered to readers.

Those efforts all have something in common: They use software to take on drudging tasks like transcribing and initial data-gathering so that journalists have more time to do more intensive reporting. The difference now is that some outlets are trying to use ChatGPT and other A.I. tools as more than just a donkey carrying a heavy load.

In an already infamous example, the tech news and review site CNET tried using generative text programs to fully write articles, with little in the way of editorial oversight, internal or public transparency, or even plain accuracy. According to reporting last month by Futurism and the Verge, CNET began quietly publishing these service articles on its website back in November—for example, a daily article reporting the latest mortgage rates—originally attributing their authorship to ‘CNET Money Staff.’ The articles noted that they were ‘generated using automated technology,’ but the disclosure didn’t include any specifics. (Currently, the articles are attributed to ‘CNET Money’ and feature newly worded disclosures: ‘This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.’)

The site’s editor in chief, in response to the reporting, claimed that every piece was ‘reviewed, fact-checked and edited’ before publication and framed the site’s experiment as an ‘assist’ to her staffers. But Futurism soon found that CNET was using A.I. to rewrite some of its already published articles without due disclosure, while the Verge uncovered that nearly half the site’s A.I.-generated articles required substantive corrections. (They were also plagiarizing, a lot.) Human CNET employees became increasingly upset with both their new chatbot colleagues and the company’s response. As a result, the experiment is now on pause, though Red Ventures is planning to update and relaunch the system soon, according to a Futurism report published Friday.

‘It seems like Cnet’s owner, Red Ventures, basically ordered the site’s leadership to start running A.I. articles, and then when they got caught they tried to invent a retroactive explanation that they’d been ‘testing and assessing new technology,’ ’ wrote Futurism managing editor Jon Christian in an email. ‘If they’d owned their mistakes and written seriously about what they’d learned, it could have been a huge boon for the rest of the publishing industry.’ Indeed, Red Ventures, which acquired CNET in 2020 and forced mass layoffs as a precondition, was definitely in a position to offer advice to the field. Two more of its subsidiary sites, Bankrate and CreditCards.com, were also regularly publishing (erroneous) A.I.-written articles for a while.

One reason for this commodity publishing strategy is, obviously, web traffic. And web traffic depends a lot on Google. So if every day, people are Googling about mortgage rates … why not autopublish an article that will bring their eyeballs to CNET?

CNET, already weakened by workforce reductions, had gradually lost visibility on Google Search results throughout 2022, according to a report from the SEO firm SISTRIX. A serious problem, considering search engine optimization is ever more important for digital publishers in the age of plummeting social media traffic. So, the idea is, quickly produced A.I.-written explainers might boost a site’s search rankings and traffic. Perhaps even if the A.I.-written and search-targeted posts are inaccurate, as was the case for Bankrate.

This is slightly counterintuitive; auto-generated content is associated with low-quality journalism, and that’s precisely what Google has tried to suppress in its search results. ‘Before this GPT conversation, Google was doubling down on content quality and what they call EEAT: experience, expertise, authority, and trust,’ explained Lily Ray, an SEO specialist and author of the SISTRIX report. ‘They want to know who created the content—they want to know that person’s an expert or has a lot of experience. They want to be able to trust that person—trust the brand, trust a website.’ While that guidance may sound beneficent for the web at large, it’s also influenced by Google’s self-interest. ‘They want to make the internet a safe place—that people can trust Google’s results,’ said Ray. ‘They’ve been on that mission for a long time, and that leads to elevating content that probably doesn’t use a lot of A.I.’

So ugly were the CNET and Bankrate belly-flops that Google clarified its policies for a GPT-3-written world. On Feb. 8, the search giant shared a blog post outlining how it would approach A.I.-penned information going forward. ‘Appropriate use of A.I. or automation is not against our guidelines. This means that it is not used to generate content primarily to manipulate search ranking,’ Google explained, clarifying that ‘using AI doesn’t give content any special gains. It’s just content.’ The company recommended that A.I. use in online writing be made clear to readers when appropriate, that A.I. not be made a primary author, and that accuracy and authority still be prioritized when publishing digital information. In an independent study, SEO consultant Gael Breton found that A.I. writing did not automatically reduce Google traffic, but that prolific human authors still commanded the larger share of search referrals in comparison.

For now, in other words, websites need not worry about a blanket ban on ‘CNET Money Staff’–style articles as long as generative-app use is made clear, accuracy is prioritized, and the content doesn’t consist of misleading ‘health, civic, or financial information.’ And so publications keep trying to clear that very low bar. The publishing company the Arena Group—which hosts a portfolio of understaffed, struggling publications like Sports Illustrated—announced early this month it would employ OpenAI tech for assistive purposes, and then published a chatbot-penned advice piece in Men’s Journal that contained significant errors.

Here’s the thing about the newest generation of text-generative A.I. applications, like ChatGPT and Microsoft’s chaotic new Bing chatbot: They’re very good at having a conversation, which is to say these things can write. They’re also talented fabulists that will pass off inaccurate or hallucinated information with eerie confidence. If any human CNET staffers got caught pulling this crap, they’d be fired.

But a couple of possibilities loom. One is that the technology improves, or human guardrails advance, to the point that these things are capable of producing a kind of journalism at scale. Another is that they become so good that consumers skip visiting their formerly favorite tech website and direct their nagging, pressing questions to—and only to—their friendly neighborhood chatbots.

When it comes to chatbots, ‘there are so many queries that can cut into people needing to go to Google for anything,’ said Ray, the SEO specialist, giving examples like simple recipes or code troubleshooting, both of which ChatGPT is well-versed in. As a result, should more consumers turn to GPT models for such answers, that could fuel ‘a huge decline in people feeling like they need to go Google something, and that cuts into [websites’] ad impressions.’ ChatGPT famously does not link to any of the sources it relies on for its answers, nor does it provide a hint as to what may have informed its output. This has caused consternation among major digital publishers like CNN and the Wall Street Journal, whose work was scraped extensively, without their permission, in order to train ChatGPT’s current iteration; OpenAI has defended this practice as constituting fair use. (According to data compiled by former OpenAI executive Jack Clark, MediaDownloader was one of the most-utilized sources for training GPT-2, an earlier iteration of the model.)

Major search engines, too—the ones that used to drive so much traffic to CNET—are starting to incorporate A.I. chatbots, with wild and unpredictable results. What that entails for digital journalism is similarly murky. ‘We have Neeva, we have Bing, and they’re clearly trying to set a good example by saying, We’re still going to send traffic to sites, we’re still going to say where we got this from,’ said Ray. ‘There’s a new tool called Perplexity.ai that’s similar to ChatGPT but it cites all its sources. People really like that.’ But just having these references available doesn’t guarantee that users will click them. As media-biz analyst Brian Morrissey wrote in his newsletter, ‘The change of the search interface to accommodate AI chat will make [referrals to publishers] less effective. Some will click on the citations, but many will find the answer good enough.’

And none of these search engines has the stature of Google, which isn’t following their lead on citations and links. The same week that it described its stance on A.I. and SEO, Google demonstrated its own rival chatbot, Bard. Not only did Bard spit out an inaccurate answer that appeared to throttle Google’s stock price, it showed that Google seems to be following ChatGPT’s lead when it comes to disclosing where its answers, accurate or not, were sourced. ‘We’ve never quite seen a search result like the one they presented that just removes the answer from the place that the answer came from,’ said Ray. ‘Almost every other type of Google search result has some type of attribution. That’s really bad for publishers and very scary.’

For such reasons, David Karpf, an associate professor of media and public affairs at George Washington University who’s written about A.I. for years, doesn’t think journalism should count on Google’s dominance and purported idealism to aid the sector in staving off the business threats posed by ChatGPT-like tools. In an interview, he pointed to a prior example of an online upstart that threatened virtual publishing as we know it: Demand Media, the content-farms juggernaut whose potency was killed off when Google adjusted its algorithm to halt referrals to its properties. ‘I look at what CNET’s doing, and it feels so much like the Demand Media SEO-bait articles that they were trying to do as cheaply as possible,’ said Karpf. ‘If the way that we imperfectly saved news a decade ago was the benevolence of the platform monopolist,’ which began sifting out low-quality articles, ‘I’m not sure we can have faith in that this time, because the platform monopolist might also be the one who’s producing those tools.’ After all, Google famously invented the ‘T’ in ChatGPT: the Transformer, the neural-network architecture that underpins modern language models. That very tech is what enabled OpenAI to present a competitive threat to Google when it comes to quick information reference.

The future of news outlets’ search viability may depend on which engines win the ongoing chatbot arms race: the ones that prominently link, credit, and encourage more in-depth discovery, versus the ones that do none of those things. Predictions for this space should expand beyond chatbots—news curators and aggregators are also training their own algorithms to personalize non–social media news feeds for users through easy-to-use apps. In that vein, it could be that news eventually gets decoupled from search altogether. Some publishers, like Mediaite, are backing out of SEO-tailored content to curate a direct, loyal audience, one that’s more likely to visit its homepage or to click on Mediaite articles surfaced to readers on apps like Flipboard and specialized feeds like Google News.

Google and Microsoft seem to be bracing for a future in which users abandon traditional web search for chatbots, which could reorder how people experience, and how organizations publish to, the web. We’re not there yet. But Karpf suggests it’s never too early for a backlash.

When it comes to A.I.-written articles on traditional news sites, ‘if advertisers don’t like it, if they want to take the extra step of saying, ‘no CNET articles at all,’ that requires organizing,’ said Karpf. ‘Do readers eventually start signaling to the platforms, ‘I’m really annoyed that I keep getting served ChatGPT crap when I search for something or when I ask for something, and I would like to be able to trust this’?’ We already know online misinformation is quite lucrative; it’s hard to see the A.I. factor changing that unless there’s clear reputational damage at stake.

Part of that backlash may come from within news organizations. Labor reporter Hamilton Nolan has gone so far as to write that ‘if something did not come from a human mind, it is not journalism. Not because A.I. cannot spit out a convincing replica of the thing, but because journalism—unlike art or entertainment—requires accountability for it to be legitimate.’ Some newsroom humans are already working to ensure standards for A.I. use are clear-cut, that writers are made aware of intent to use these tools, and that those projects do not muddy the definition of journalism, following in the footsteps of screenwriters and voice actors who are working to clarify terms of proper, ethical A.I. use in their own contracts. For example, Wirecutter’s editorial union stipulates that layoffs due to automated newsroom processes, as well as editorial work outsourced to automated systems, require advance notice and good-faith bargaining with the labor union. Two journalists who are currently organizing their newsroom through the Writers Guild of America, East—speaking on condition of anonymity—told me they’d been considering pushing for A.I. provisions in their contracts even before the CNET situation, thanks to the use of such tech in ever-present workplace surveillance apparatuses like Microsoft’s ‘productivity score.’ Now, in the midst of ChatGPT hype, ‘the language we’re bargaining over is what materially affects journalists,’ one of the journalists said. ‘Something A.I. is not very good at is taking a lot of disparate elements that are unrelated and qualitative and formulating something new and qualitative based off that.’

What’s more, chatbots can’t talk to people on the ground, learn to be extra careful and discerning with certain sources, describe situations and people firsthand with scrutiny or empathy, or come up with entirely new observations or arguments, at least not yet. Using ChatGPT for web articles ‘would make sense only if our goal is to repackage information that’s already available,’ Ted Chiang wrote in the New Yorker. Copywriting, blogging, and proofreading could be automated to an extent, provided that the chatbots’ myriad kinks are fixed, but the journalism process of reporting, discovery, sensible synthesis, fact-checking and double-checking, and accountability to a community (or the entire world) cannot be supplanted just yet. Rapid text output is not the same as reporting out truthful stories with a responsible paper trail. As MediaDownloader contributor John Warner, an author and professor, put it in his newsletter: ‘ChatGPT is not generating meaning. It is arranging word patterns.’

For these reasons, most of the sources I spoke with recommended that publishers take caution before throwing themselves into A.I. chaos. ‘They should probably move very slowly here,’ Karpf said. ‘The instinct to say, ‘Oh good, we can use this for everything, let’s cut costs to the bone,’ carries more risk than they expect.’ The Writers Guild–affiliated journalists concurred: ‘If we start relying on it too quickly, then you’re going to see a lot of errors, racial bias, unobjective journalism, or just copied-and-pasted press releases.’ Michael King, an SEO expert who’s been experimenting with A.I. developments for years, mentioned that ChatGPT produces far more errors when prompted to produce a lengthy response as opposed to a shorter one, making clear its current utility for content generation: short, fact-checked text snippets at most. We’ll see how that changes when GPT-4 comes along.

All these shiny new apps far surpass any prior A.I. developments in terms of capability and training, and they presage some major changes in the field, as well as our lives. Still, it’s one thing to predict yet another technological reckoning for our embattled profession. It’s another to actually put it into practice and behold the results.

Future Tense is a partnership of MediaDownloader, New America, and Arizona State University that examines emerging technologies, public policy, and society.

Reference: https://slate.com/technology/2023/02/chatbots-suck-at-journalism-why-is-journalism-welcoming-them.html

