
Misunderstanding Misinformation
June 11, 2023

Reading Time: 7 minutes

I Helped Lead the Movement Against ‘Fake News.’ I Have a Big Regret.

I fought to make the term misinformation mainstream. Now, I think focusing on labels can distract from a larger problem.

In the fall of 2017, the Collins Dictionary named fake news the word of the year. It was hard to argue with the decision. Journalists were using the phrase to raise awareness of false and misleading information online. Academics had started publishing copiously on the subject. And, of course, U.S. President Donald Trump regularly used the epithet from the podium to discredit nearly anything he disliked.

By spring of that year, I had already become exasperated by how this term was being used to attack the news media. Worse, it had never captured the problem: Most content wasn’t actually fake, but genuine content used out of context—and only rarely did it look like news. I made a rallying cry to stop using fake news and instead use misinformation, disinformation, and malinformation under the umbrella term information disorder. These terms, especially the first two, have caught on, but they represent an overly simple, tidy framework I no longer find useful.

Both disinformation and misinformation describe false or misleading claims, but disinformation is distributed with the intent to cause harm, whereas misinformation is the mistaken sharing of the same content. Analyses of both generally focus on whether a post is accurate and whether it is intended to mislead. The result? We researchers become so obsessed with labeling the dots that we can’t see the larger pattern they show.

By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until we shift our perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms.

To understand what these terms leave out, consider ‘Lynda,’ a fictional person based on many I track online. Lynda fervently believes vaccines are dangerous. She scours databases for newly published scientific research, watches regulatory hearings for vaccine approvals, reads vaccine inserts to analyze ingredients and warnings. Then she shares what she learns with her community online.

Is she a misinformer? No. She’s not mistakenly sharing information that she didn’t bother to verify. She takes the time to seek out information.

Nor is she a disinformation agent as commonly defined. She isn’t trying to cause harm or get rich. My sense is that Lynda is driven to post because she feels an overwhelming need to warn people about a health system she sincerely believes has harmed her or a loved one. She is strategically choosing information to connect with people and promote a worldview. Her criteria for choosing what to post depend less on whether it makes sense rationally and more on her social identities and affinities.

Dismissing Lynda for her selective interpretation and lack of research credentials risks failing to see what she’s accomplishing overall: taking snippets or clips that support her belief systems from information published by authoritative institutions (maybe an admission by a scientist that more research is needed, or a disclaimer about known side effects) and sharing that without any wider context or explanation. This ‘accurate’ information that she has uncovered via her own research is used to support inaccurate narratives—perhaps that governments are rolling out vaccines for population control, or that doctors are dupes or pharmaceutical company shills.

To understand the contemporary information ecosystem, researchers need to move away from our fixation on accuracy and zoom out to understand the characteristics of some of these online spaces that are powered by people’s need for connection, community, and affirmation. As communications scholar Alice Marwick has written, ‘Within social environments, people are not necessarily looking to inform others: they share stories (and pictures, and videos) to express themselves and broadcast their identity, affiliations, values, and norms.’

Lynda’s online world points to the importance of connections, which is not easily captured by the labels misinformation and disinformation. While Lynda might post primarily in anti-vaccine Facebook groups, if I follow her activities, it’s very likely I’ll also find her posting in #stopthesteal or similar groups and sharing climate change denial memes or conspiracy theories about the latest mass shooting on Instagram.

One of the challenges of studying this arena is that the field’s narrow focus means the role of the world’s Lyndas is barely understood. A growing body of research points to the volume of problematic content online that can be traced back to a surprisingly small number of so-called superspreaders, but so far even that work studies those who amplify content within a particular topic rather than create it—leaving the impacts of devoted true believers like Lynda still understudied.

This reflects a larger issue. Those of us who are funded to track harmful information online too often work in silos. I’m based in a school of public health, so people assume I should just study health misinformation. My colleagues in political science departments are funded to investigate speech that might erode democracy. I suspect that people like Lynda drive an outsize amount of wide-ranging problematic content, but they do not operate the way we academics are set up to think about our broken information systems.

Every month there are academic and policy conferences focused on health misinformation, political disinformation, climate communication, or Russian disinformation in Ukraine. Often each has very different experts talking about identical problems with little awareness of other disciplines’ scholarship. Funding agencies and policymakers inadvertently create even more silos by concentrating on nation states or distinct regions such as the European Union.

Events and incidents also become silos. Funders fixate on high-profile, scheduled events like an election, the rollout of a new vaccine, or the next United Nations climate change conference. But those who are trying to manipulate, monetize, recruit, or inspire people excel at exploiting moments of tension or outrage, whether it’s the latest British royals documentary, a celebrity divorce trial, or the World Cup. No one funds investigations into the online activity those moments generate, although doing so could yield crucial insights.

Authorities’ responses are siloed as well. In November 2020, my team published a report on 20 million posts we had gathered from Instagram, Twitter, and Facebook that included conversations about COVID-19 vaccines. (Note that we didn’t set out to collect posts containing misinformation; we simply wanted to know how people were talking about the vaccines.) From this large data set, the team identified several key narratives, including the safety, efficacy, and necessity of getting vaccinated and the political and economic motives for producing the vaccine. But the most frequent conversation about vaccines on all three platforms was a narrative we labeled liberty and freedom. People were less likely to discuss the safety of the vaccines than whether they would be forced to get vaccinated or carry vaccine verification. Yet agencies like the Centers for Disease Control and Prevention are only equipped to engage the single narrative about safety, efficacy, and necessity.

Unfortunately, most scholars who study and respond to polluted information still think in terms of what I call atoms of content, rather than in terms of narratives. Social media platforms have teams making decisions about whether an individual post should be fact-checked, labeled, down-ranked, or removed. The platforms have become increasingly deft at playing whack-a-mole with posts that may not even violate their guidelines. But by focusing on individual posts, researchers are failing to see the larger picture: People aren’t influenced by one post so much as they’re influenced by the narratives that these posts fit into.

In this sense, individual posts are not atoms, but something like drops of water. One drop of water is unlikely to persuade or do harm, but over time, the repetition starts to fit into overarching narratives—often, narratives that are already aligned with people’s thinking.
What happens to public trust when people repeatedly see, over months and months, posts that are ‘just asking questions’ about government institutions or public health organizations? Like drops of water on stone, one drop will do no harm, but over time, grooves are cut deep.

To really move forward, proponents of healthy information ecosystems need a broader, integrated view of how and why information circulates. They must learn to assess multilingual, networked flows of content that span conventional boundaries of disciplines and regions. I chaired a taskforce that proposed a permanent, global institution to monitor and study information that would be centrally funded and thus independent of both nations and tech companies. Right now, efforts to monitor disinformation often do overlapping work but fail to share data and classification mechanisms and have limited ability to respond in a crisis.

To clean up our polluted information ecosystem, we also must learn to participate, because the system itself is participatory—a site of constant experimentation as participants drive engagement and better connect with their audiences’ concerns. Although news outlets and government agencies appear to embrace social media, they rarely engage the two-way, interactive features that characterize Web 2.0. Traditional science communication assumes that experts know what information to supply and that audiences will passively consume information and respond as intended. These systems have much to learn from people like Lynda about how to connect with, rather than present to, audiences. An essential first step is to train government communications staff, community organizations, librarians, and journalists to seek out and listen to the public’s questions and concerns.

Today, global and national funders also have an outsized focus on how to expunge the ‘bad stuff’ rather than how to expand the ‘good stuff.’ Instead of pursuing such whack-a-mole efforts, major funders should find a way to support specific place-based responses for what communities need. For example, health researcher Stephen Thomas created the Health Advocates In-Reach and Research campaign, which trains local barbershop and beauty salon owners to listen to their customers about health concerns and then to provide advice and direct people to appropriate resources for follow-up care. And after assessing the information needs of the local Spanish-speaking community in Oakland, California, and finding them to be woefully underserved, journalist Madeleine Bair founded the participatory online news site El Tímpano in 2018.

Targeted ‘cradle-to-grave’ educational campaigns can also help people learn to navigate polluted information systems. Techniques such as the SIFT method (which outlines steps to assess sources and trace claims to their original context) and lateral reading (which teaches how to verify information while consuming it) have been proven effective, as have programs to equip people with skills to understand how their emotions are targeted.

For each of these tasks, people and entities hoping to foster healthy information ecosystems must commit to the long game. The only way to make inroads is to look beyond the neat diagrams and tidy typologies of misinformation to see what is really going on, and craft a response not for the information system itself but for the humans operating within it.

Reference: https://slate.com/technology/2023/06/the-problem-with-misinformation.html