
X Is a Fog-of-War Machine
October 12, 2023


We’re now seeing the dire consequences of Elon Musk’s changes to Twitter: the platform has become almost useless for news about Israel and Gaza. Here’s how Musk caused it.

Twitter has never been the most popular social media platform. It’s never been the coolest, it’s never had the most features, and it’s never been the biggest moneymaker. It has, at times, been the weirdest, and many groups given to high-velocity posting have maintained valuable communities there over the years. But Twitter’s X factor—sigh, pun intended—has always been its strength as a platform for news.

In moments of crisis, Twitter has been essential—never more so than in times of bloody conflict. Not only do professional journalists relay on-the-ground realities, but so do eyewitnesses and citizen journalists. War being war, the fruits of Twitter’s role as an information funnel have never been perfect. But they’ve been a helpful first draft of the news.

In contrast, under the ownership of Elon Musk, who bought the platform for $44 billion last October, the platform now called X has become a vortex of false claims and doctored footage. It’s a fog-of-war machine.

That’s been the unmistakable reality in the days after Hamas’ deadly terrorist attack on Israeli civilians—a land, air, and sea operation that has killed at least 1,200 people in Israel and led to another 900 deaths in Gaza following Israel’s military retaliation.

Musk’s changes to the foundation of how Twitter works have not only rendered Twitter useless as a means of making sense of the conflict as (or even hours after) it unfolds, but made it actively counterproductive for users trying to figure out what’s going on. As Musk and Twitter CEO Linda Yaccarino have rolled back the platform’s rules of engagement and rid their ranks of the content-moderation teams and tools that actually keep X trustworthy, they’ve also put in place a system that fundamentally incentivizes the spread of misinformation during times of mass panic and confusion, in part because X is now a platform that pays for viral content.

The end result is that Twitter, more so than any other platform right now, is fertile ground for a new kind of war profiteering.

On Oct. 8, the day after the initial Hamas attack, an account called @AGCast4 posted a video supposedly showing a Hamas rocket attack in Israel. The BBC journalist and fact-checker Shayan Sardarizadeh debunked it: The footage wasn’t from the ongoing conflict or any real-life war but from the video game Arma 3. The account was—and still is—verified with a blue check mark.

Two days later, the investigative outfit Bellingcat, known for its visual forensics work, had to debunk some fake news … about itself. A doctored ‘BBC’ video was circulating on social media, claiming that Bellingcat’s journalists had confirmed Ukrainian weapon sales to Hamas. ‘We’ve reached no such conclusions or made any such claims,’ Bellingcat’s official account wrote on Twitter. In a screenshot, Bellingcat showed that a Twitter account called Geopolitics & Empire had shared the video. Like the account that posted video game footage, this account was also verified with a blue check mark. (The account owner deleted the post and called it an ‘honest mistake,’ simultaneously posting a meme captioned ‘We are going to be famous.’)

If a user had taken even a yearlong hiatus from Twitter and redownloaded the app this week to follow the goings-on of the emerging war, they’d be disoriented. Why are these accounts posting nonsense, and why are they allowed to do so without any ramifications? Twitter has always had problems with the spread of misinformation, but the current site experience is noticeably degraded. So, why is that?

First, the blue check mark doesn’t mean what it used to. Verification once signified that Twitter had confirmed the identity of a person or organization of note: a journalist, a public health organization, or even a professional athlete. But in April, Twitter began removing check marks from all but the most famous.

Now anyone who pays for Twitter Blue—recently renamed X Premium—can simply buy a blue check mark for $8 a month, along with the veneer of being a notable person or a legitimate source of information. Just last week, X removed headlines from linked news articles, making the site even more confusing to scroll through.

‘There is a difference between platforms that take steps to mitigate harm, platforms that have not yet started taking these steps, and platforms that take steps to undo processes that mitigated harm,’ Chinmayi Arun, the executive director of Yale Law School’s Information Society Project, told me. ‘Users who are accustomed to a different version of X may not know how to process or understand what they are seeing now.’

It’s been mere days since the war broke out, but European regulators are already peeved with what they’ve seen. In a letter posted publicly to Musk, European commissioner Thierry Breton asked the X owner to comply with the continent’s sweeping Digital Services Act. He urged the billionaire to respond within 24 hours with assurances that he’s taking the spread of ‘illegal content and disinformation’ seriously or face legal penalties.

Musk responded, ‘Our policy is that everything is open source and transparent, an approach that I know the EU supports.’

Musk has delivered on a lot of what he promised. He campaigned to buy Twitter on a platform of restoring ‘free speech,’ which meant loosening the site’s rules, firing most of its content moderation staff, removing blue check marks from the accounts of professional journalists, and prioritizing subscription revenue over advertising.

What we’re seeing right now is the culmination of all of those factors: a degraded site that can’t be trusted for sensitive breaking news.

There are several additional perks for paying $8 for a blue check mark. The first is that paying users now get priority placement in a tweet’s replies. Take a Musk tweet, for example—scroll down and it’ll take a while before you find any reply without a check mark next to it. (Good way for a billionaire to insulate himself from criticism, huh?) But they also get increased reach across the site—especially on users’ algorithmic news feeds.

There’s another perk that’s even more dangerous. In July, Musk began paying out the most engaging users on X—as long as they had bought a check mark. Twitter rewarded a number of prominent accounts—mostly far-right influencers, as the Washington Post reported—with big paychecks. Andrew Tate, a popular right-wing internet personality facing rape and human trafficking charges in Romania, received $20,000 in his first check alone.

Twitter lagged far behind other platforms that have been paying out top influencers for years—YouTube began doing so in 2007. But the rules about who is eligible to receive payouts, and what rules they have to follow, are vague. ‘By promising honestly very opaque parameters,’ said Christine Tran, a doctoral candidate at the University of Toronto, ‘the floodgates open for accounts to generate content about major events that arouses engagement without discrimination—regardless of what good that information serves.’

X did not respond to a request for comment, but according to its website, ‘sensitive events,’ including war, are not eligible for monetization. That fine print, though, doesn’t seem to stop would-be profiteers from trying, asking the platform’s few remaining moderators to differentiate eligible posts from rule-breaking ones—especially since X doesn’t seem to be punishing any misleading posts about war. ‘The unclear rules about what engagement [leads to] monetization leads to ‘See what sticks to the wall’ incentives to aggregate engagement,’ Tran said. ‘It costs nothing to post (yet), but a viral post could lead to untold profit. Low risks, high reward.’

Even if fake-news peddlers are unable to profit directly from viral posts about war, there are perks to merely being allowed to post them at all: Mass engagement like this can help an account build an audience—and from there, they can profit off future viral posts, sell stuff to their followers, and monetize their newfound following off-platform. In the creator economy, all attention can be good attention. But on X, the race for clicks is simultaneously a race to the bottom.

Twitter isn’t the first platform that’s financially rewarded the spread of misinformation, but its policy decisions have made it all the more vulnerable to abuse, an own goal that hurts not only trust in the platform but also users’ understanding of a major geopolitical event.

In place of professional moderation, Musk has promoted the use of Community Notes—a crowdsourced fact-checking system formerly known as Birdwatch—and, in recent days, has claimed to have increased the speed at which these notes appear on misleading content. Further complicating things, a recent report found that he’s also stopped allowing users to self-report political misinformation on specific posts. Community Notes is a helpful system (when it’s not wrong!), but Twitter is ultimately outsourcing the job of content moderation from in-house professionals to unpaid volunteers. And fundamentally, leaving bad information up with a user-generated addendum is not the same as removing it or hiding it with a warning label, as Twitter’s old guard did.

Shannon McGregor, an associate professor at the University of North Carolina at Chapel Hill’s media and journalism school, has been arguing for years that the most powerful people on a platform shouldn’t be treated with kid gloves but taken more seriously. That includes not only political leaders—remember Donald Trump’s ongoing feuds with Twitter?—but also users paying for greater reach and the chance to make money.

‘Those with the greatest reach and power should be subject to at least the same policies as all users, if not perhaps either more stringent or more holistically enforced versions of those policies,’ McGregor said. ‘That’s where we see the danger. It’s not like some random person breaking a content moderation rule, which is a problem. It’s a [bigger] problem when someone who has a ton of power and attention does it.’

Musk may want to prioritize ‘free speech’ and being ‘open source,’ but millions of people rely on his platform for reliable information. And, as it’s played out time after time, there are often very scary real-world consequences when conspiracy theories and fraudulent stories are allowed to run rampant. The only thing that’s transparent is the owner’s inattention.

Reference: https://slate.com/technology/2023/10/x-twitter-gaza-israel-misinformation-elon-musk.html