

Wikipedia Will Survive A.I.
August 25, 2023


Rumors of Wikipedia’s death at the hands of ChatGPT are greatly exaggerated.

Welcome to Source Notes, a Future Tense column about the internet’s information ecosystem.

Wikipedia is, to date, the largest and most-read reference work in human history. But the editors who update and maintain Wikipedia are certainly not complacent about its place as the preeminent information resource, and are worried about how it might be displaced by generative A.I. At last week’s Wikimania, the site’s annual user conference, one of the sessions was ‘ChatGPT vs. WikiGPT,’ and a panelist at the event mentioned that rather than visiting Wikipedia, people seem to be going to ChatGPT for their information needs. Veteran Wikipedians have couched ChatGPT as an existential threat, predicting that A.I. chatbots will supplant Wikipedia in the same way that Wikipedia infamously dethroned Encyclopedia Britannica back in 2005.

But it seems to me that rumors of the imminent ‘death of Wikipedia’ at the hands of generative A.I. are greatly exaggerated. Sure, the implementation of A.I. technology will undoubtedly alter how Wikipedia is used and transform the user experience. At the same time, the features and bugs of large language models, or LLMs, like ChatGPT intersect with human interests in ways that support Wikipedia rather than threaten it.

For context, there have been elements of artificial intelligence and machine learning on Wikipedia since 2002. Automated bots on Wikipedia must be approved, as set forth in the bot policy, and generally must be supervised by a human. Content review is assisted by bots such as ClueBot NG, which identifies profanity and unencyclopedic punctuation like ‘!!!11.’ Another use case is machine translation, which has helped provide content for the 334 different language versions of the encyclopedia, again generally with human supervision. ‘At the end of the day, Wikipedians are really, really practical—that’s the fundamental characteristic,’ said Chris Albon, director of machine learning at the Wikimedia Foundation, the nonprofit organization that supports the project. ‘Wikipedians have been using A.I. and M.L. from 2002 because it just saved time in ways that were useful to them.’
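To make the content-review idea concrete, here is a minimal sketch of the kind of signal such a bot scans for. It is illustrative only: ClueBot NG actually relies on a trained neural network rather than hand-written rules, and the patterns below are invented examples.

```python
import re

# Illustrative rules only; ClueBot NG's real classifier is a trained
# neural network, not a pattern list like this one.
UNENCYCLOPEDIC = [
    re.compile(r"[!?]{3,}\d*"),                     # runs like "!!!11"
    re.compile(r"\b(?:lol|omg)\b", re.IGNORECASE),  # chat slang
]

def looks_like_vandalism(added_text: str) -> bool:
    """Flag edits containing obviously unencyclopedic punctuation or slang."""
    return any(pattern.search(added_text) for pattern in UNENCYCLOPEDIC)

print(looks_like_vandalism("Best band ever!!!11"))       # True
print(looks_like_vandalism("The band formed in 1994."))  # False
```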

In other words, bots are old news for Wikipedia—it’s the offsite LLMs that present new challenges. Earlier this year, I reported on how Wikipedians were grappling with the then-new ChatGPT and deciding whether chatbot-generated content should be used in the process of composing Wikipedia articles. At the time, the editors were understandably concerned with how LLMs hallucinate, responding to prompts with outright fabrications complete with fake citations. There is a real risk that users who copy ChatGPT text into Wikipedia will pollute the project with misinformation. But an outright ban on generative A.I. seemed both too harsh and too Luddite—a failure to recognize new ways of working. Some editors have reported that ChatGPT answers were useful as a starting point or a skeletal outline. While banning generative A.I. could keep low-quality ChatGPT content off Wikipedia, it could also curtail the productivity of human editors.

Wikipedians are now drafting a policy for how LLMs can be used on the project. What’s being discussed is essentially a ‘take care and declare’ framework: The human editor must disclose in an article’s public edit history that an LLM was used and must take personal responsibility for vetting the LLM content and ensuring its accuracy. The proposed LLM policy closely mirrors the existing expectation that most Wikipedia bots operate under human supervision. Leash your bots, your dogs, and now your LLMs.

To be clear, the Wikipedia community has jurisdiction over how its own editors use bots—but not over how external agents use Wikipedia. These days, generative A.I. companies are taking advantage of the internet encyclopedia’s open license. Every LLM so far has been trained on Wikipedia’s content, and the site is almost always the largest source of training data in their data sets.

Despite swallowing Wikipedia’s entire corpus, ChatGPT is not the polite sort of robot that graciously credits Wikipedia when it uses that information for one of its responses. Quite the contrary—the chatbot doesn’t typically disclose its sources at all. Critics are advocating for greater transparency and urging restraint until chatbots become explainable A.I. systems.

Of course, there’s a scary reason that LLMs don’t normally credit their sources: the A.I. does not always know how it has arrived at its answer. Pardon the grotesque simile, but the knowledge base of a typical LLM is like a huge hairball; the LLM may pull strands from Wikipedia, Tumblr, Reddit, and a variety of other sources without distinguishing among them. And the LLM is basically programmed solely to predict the next phrase, not to provide credit when it’s due.

Journalists in particular seem very concerned about how ChatGPT isn’t acknowledging Wikipedia in its responses. The New York Times Magazine published a feature last month on how the reuse of Wikipedia information by A.I. imperiled Wikipedia’s health and made people forget about its important role behind the scenes.

But I get the sense that most Wikipedia contributors are less concerned about credit-claiming than the average reporter. For one thing, Wikipedians are used to this: After all, before LLMs, Siri and Alexa were the ones scraping Wikipedia without credit. (As of publication time, these smart assistants have been updated to say something like ‘from Wikipedia.’) More fundamentally, there has always been an altruistic element in curating information for Wikipedia: People add knowledge to the site expecting that everyone else will use it however they like.

Rather than sapping away the morale of volunteer human Wikipedians, generative A.I. may add a new reason to the list of their motivations: a sincere desire to train the robots. This is also a reason that generative A.I. companies like OpenAI should care about maintaining Wikipedia’s role as ChatGPT’s primary tutor. It’s important for Wikipedia to remain a human-written knowledge source. We now know that LLM-generated content is like poison for training LLMs: If the training data is not human-created, then LLMs become measurably dumber. LLMs that eat too much of their own cooking are prone to model collapse, a symptom of the curse of recursion.
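A toy simulation, my own illustration rather than anything from the article, makes the collapse mechanism visible: fit a simple model to samples drawn only from the previous generation’s model, repeat, and watch the estimated spread decay as sampling error compounds.

```python
import numpy as np

# Toy model collapse: each generation fits a Gaussian to a finite sample
# of the previous generation's output, never to fresh human data. The
# fitted spread drifts toward zero, i.e. the model forgets the tails.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the "human" data distribution
for generation in range(1, 11):
    samples = rng.normal(mu, sigma, size=50)   # train only on model output
    mu, sigma = samples.mean(), samples.std()  # refit on synthetic data
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```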

As Selena Deckelmann, the Wikimedia Foundation’s chief product and technology officer, put it, ‘the world’s generative AI companies need to figure out how to keep sources of original human content, the most critical element of our information system, sustainable and growing over time.’ This mutual interest is perhaps why Google.org, the Musk Foundation, Facebook, and Amazon are among the benefactors who have donated more than a million dollars to the Wikimedia Endowment—A.I. companies seem to have realized that keeping Wikipedia a human-created project is in their interests. (For further context, the foundation is primarily supported by numerous small donations by ordinary Wikipedia readers and supporters, which is comforting for those of us who worry about any big tech company gaining too much influence over the direction of the nonprofit organization.)

The weaknesses of A.I. chatbots could also popularize new use cases for Wikipedia. In July, the Wikimedia Foundation released a new Wikipedia ChatGPT plug-in that allows ChatGPT to search for and summarize the most up-to-date information on Wikipedia to answer general knowledge queries. For instance, if you ask ChatGPT 3.5 in its standard form about Donald Trump’s indictment, the chatbot says it doesn’t know about it because its training data ends in September 2021. But with the new plug-in, the chatbot accurately summarizes current events. Notice how Wikipedia in this example is functioning something like a water filter: sitting on the tap of the raw LLM, rooting out inaccuracies, and bringing the content up to speed.
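The plug-in’s internals aren’t public, but the pattern it embodies, retrieval-augmented generation, is easy to sketch against the public MediaWiki API. The endpoint and parameters below are real; treating the returned extract as chat context is my assumption about how such a plug-in would work.

```python
import requests  # third-party: pip install requests

API = "https://en.wikipedia.org/w/api.php"

def wikipedia_extract(query: str) -> str:
    """Return the lead section of the top Wikipedia search hit for a query."""
    hits = requests.get(API, params={
        "action": "query", "list": "search",
        "srsearch": query, "format": "json",
    }).json()["query"]["search"]
    pages = requests.get(API, params={
        "action": "query", "prop": "extracts", "titles": hits[0]["title"],
        "exintro": 1, "explaintext": 1, "format": "json",
    }).json()["query"]["pages"]
    return next(iter(pages.values()))["extract"]

# A plug-in would prepend this fresh text to the user's prompt, so the
# model answers from current Wikipedia rather than stale training data.
print(wikipedia_extract("Donald Trump indictment")[:300])
```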

Whether Wikipedia is incorporated into A.I. via the training data or as a plug-in, it’s clear that it’s important to keep humans interested in curating information for the site. Albon told me about several proposals to leverage LLMs to help make the editing process more enjoyable. One idea proposed by the community is to allow LLMs to summarize the lengthy discussions on talk pages, the non-article spaces where editors delve into the site’s policies. Since Wikipedia is more than 20 years old, some of these walls of text are now lengthier than War and Peace. Few people have the time to review all of the discussion that has taken place since 2005 about what qualifies as a reliable source for Wikipedia, much less perennial sources. Rather than expecting new contributors to review multiyear discussions about the issue, the LLM could just summarize them at the top. ‘The reason that’s important is to draw in new editors, to make it so it’s not so daunting,’ Albon said.
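The retrieval half of that idea is already possible with the same public API; only the summarizer is missing. In the sketch below, the fetch is real MediaWiki usage, while summarize() is a deliberately hypothetical stub, since the community hasn’t settled on a model or a prompt.

```python
import requests  # third-party: pip install requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_talk_wikitext(title: str) -> str:
    """Pull the current wikitext of a discussion page via the MediaWiki API."""
    resp = requests.get(API, params={
        "action": "query", "prop": "revisions", "rvprop": "content",
        "rvslots": "main", "titles": title, "format": "json",
    }).json()
    page = next(iter(resp["query"]["pages"].values()))
    return page["revisions"][0]["slots"]["main"]["*"]

def summarize(discussion: str) -> str:
    # Hypothetical stub: the community proposal only says that *an* LLM
    # condenses the thread; which model, and with what prompt, is open.
    raise NotImplementedError("plug in the LLM of your choice")

wikitext = fetch_talk_wikitext("Wikipedia talk:Reliable sources")
print(len(wikitext), "characters of discussion to condense")
```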

John Samuel, an assistant professor of computer science at CPE Lyon, told me that prospective Wikipedia editors he’s recruited often find it difficult to get started. Finding reliable sources to use for an article can be very labor-intensive, and Gen Z has grown impatient with the chore of sifting through Google search results. An internet that has become flooded with machine-generated content will make the process of finding quality sources even more painful.

But Samuel foresees a hopeful future in which Wikipedia has integrated A.I. tools that help human editors find quality sources and double-check that the underlying sources in fact say what the human claims. ‘We cannot delay things. We have to think about integrating the newer A.I.-based tools so that we save the time of contributors,’ Samuel said.

If there’s a common theme running through the A.I.-gloom discourse, it’s that A.I. is going to take people’s jobs. And what about the ‘job’ of volunteer Wikipedia editors? The answer is nuanced. On the one hand, a lot of repetitive work (adding article categories, basic formatting, easy summaries) is likely to be automated. Then again, the work of the people editing Wikipedia has never really been about writing text, per se. The more important job has always been the discussions among members of the community: debates about whether one source is more reliable than another, arguments about whether wording is representative or misleading, collaboration toward the shared goal of improving the encyclopedia. So perhaps that’s where the future is heading for Wikipedia: leave the rote busywork to the A.I., but keep the discourse and the disagreement—that messy, meaningful, consensus-building stuff—for humans.

Reference: https://slate.com/technology/2023/08/wikipedia-artificial-intelligence-threat.html

