Sen. Richard Blumenthal Defends His Controversial Bill Regulating Social Media for Kids
September 20, 2023


Can politicians keep kids safe online?

For a while now, Washington has been wrestling with two big forces shaping technology: social media and artificial intelligence. Should they be regulated? Who should do it—and how? Currently, Congress is considering a bill that would regulate how social media companies treat minors: the Kids Online Safety Act.

Although it has bipartisan support, KOSA is not without controversy. Several critics have called it ‘government censorship.’ One group, the Electronic Frontier Foundation, says it is ‘one of the most dangerous bills in years.’ One of KOSA’s sponsors is Connecticut Democratic Sen. Richard Blumenthal. On Friday’s episode of What Next: TBD, I spoke with Blumenthal about tech, kids, and what role the government should play when it comes to regulating Silicon Valley. Our conversation has been edited and condensed for clarity.

Lizzie O’Leary: The Kids Online Safety Act empowers state attorneys general to file lawsuits based on content that they believe will be harmful to young people. Can you help me understand why the legislation is structured in that way? Why does it give so much latitude to state attorneys general?

Sen. Richard Blumenthal: Let me first draw a distinction because you’ve said that the bill empowers state attorneys general to go to court based on content. It does not. This bill does not give state attorneys general—or the Federal Trade Commission, for that matter—the power to censor in any way. What it does is target the product designs, the algorithms that drive that content at kids.

That distinction is very important because what we’re doing here is creating a duty of care that makes the social media platforms accountable for the harms they’ve caused. But it gives attorneys general and the FTC the power to bring lawsuits based on the product designs that, in effect, drive eating disorders, bullying, suicide, and sex and drug abuse that kids haven’t requested and that can be addictive.

If you go back to Frances Haugen’s testimony, she was the whistleblower from The Facebook Files. She presented not only testimony but documents that showed that social media companies are aware of those harms. They design those algorithms in a way that drives content that kids haven’t requested. You ask for content on diet, and what the child receives is eating disorder content.

I think that’s a really important distinction: no censorship—focus on the product designs or algorithm.

I hear you and I hear the noble purpose behind that. You’re right; that is something Frances Haugen brought up. She presented those documents particularly as it related to teen girls and Instagram. But I’m trying to understand here: If you are questioning your gender identity, don’t you want the algorithm to show you, say, some more content, maybe trans influencers?

This is the question that I think a lot of LGBTQ+ groups have raised—the Electronic Frontier Foundation has raised this issue as well—that there might be a distinction without a difference here if an attorney general in a state that is questioning transgender children says, ‘Wait a minute, we don’t want the algorithm showing kids that.’ Do you understand what I’m saying here?

First, I would never put my name on a bill that targets, disparages, or harms the trans or LGBTQ+ community. Second, this bill does not censor content or prevent anybody in that community from accessing content that they request. The bill holds social media platforms accountable for their product designs or algorithms. It’s not a Section 230 bill. Third, very, very importantly, the bill not only permits but encourages safe spaces where the LGBTQ+ community can access content that is specifically requested.

What it tries to prevent is these social media companies designing algorithms that feed kids content about bullying or eating disorders when they haven’t requested it, when they don’t want it, and that in fact create a kind of addictive rabbit hole they go down, because the business model of these companies is to create and drive addictive content.

Your co-sponsor of the bill, Sen. Marsha Blackburn of Tennessee, was recorded in March discussing KOSA. When asked about the top issue that conservatives should be taking action on right now, she responded, ‘Of course, protecting minor children from the transgender and this culture and that influence. And I would add too that watching what’s happening on social media. And I’ve got the Kids Online Safety Act that I think we’re going to end up getting through probably this summer. This would put a duty of care and responsibility on the social media platforms. And this is where children are being indoctrinated. They’re hearing things at school and then they’re getting onto YouTube to watch a video, and all of a sudden this comes to them.’

The word indoctrinate, which Blackburn used, has really resonated with a number of LGBTQ+ advocates—particularly parents of trans kids who hear those sentences, hear your co-sponsor invoke KOSA, and say, Wait a minute, this is the unintended scenario that we are concerned about.

Let me just be really blunt, Lizzie: I fundamentally, emphatically disagree with Sen. Blackburn’s position on the LGBTQ+ community. Whatever Sen. Blackburn may say about her personal beliefs, I know what the bill does. I know what the provisions state. We have tightened those provisions to avoid the kinds of harms that were raised by the LGBTQ+ community. Working directly with them, the organizations and individuals, was incredibly important to me when writing this bill: not just to ensure that it can’t be misused, but also to ensure that it would prevent harms against that community and others, because LGBTQ+ youth often suffer those harms the most dramatically and deeply.

Sen. Blackburn and I don’t agree on everything. In fact, if you look at our voting records, we’re a pretty unlikely pair. But we agree on the Kids Online Safety Act: that it doesn’t target or censor anyone, and that it focuses on those product designs or algorithms that repeat and drive content that neither LGBTQ+ youth nor others really want. It’s in the business interest of social media to drive that content at them because it is inflammatory or addictive; they know it is, they know it’s harmful, and they ought to be held accountable for those harms. On that point, Sen. Blackburn and I agree: holding Big Tech accountable and making sure that those product designs are done more responsibly.

Let’s spin forward a year. If you start to see lawsuits in Missouri, Texas, Florida based on the concerns that I’ve outlined, what will you do?

As an attorney general for 20 years, I know very well that the language of the bill, the provisions, is what courts have to apply. Attorneys general may misuse a bill, but ultimately, they have to face a judge, and the language of this bill has been tightened. It targets specific harms: eating disorders, bullying, suicide. It does not target or censor content that is sought by the individual. So, if an attorney general misuses this language, it would be tossed out by a court, and if not by the trial court, then by an appeals court. If we have in some way overlooked the need to tighten it further, we can take action.

When it comes to A.I., you have another unlikely ally: Sen. Josh Hawley from Missouri. Let’s talk about this framework for regulating A.I. At a hearing you chaired, you took a strong stance that we need to take a risk-based approach. What are the risk cases that you see for A.I.? Do you think in terms of the sort of Skynet wipe-out-humanity stuff, or something narrower that we’re seeing in the present?

I think that the standards for testing before a product is released can be designed in a way that takes account of the amount of risk involved. Let me give you a practical example: A.I. that regulates the temperature in your home. If it’s off by a few degrees, OK, you’re uncomfortable and you have to go back and reset it. That is an error that has fewer human consequences than, for example, A.I. that is designed to detect cancer in someone. A.I. that misses a cancer diagnosis can cause really serious consequences. Likewise, if you have A.I. that’s used in a weapons platform—and we are now beginning to use A.I. in some of the unmanned aircraft or unmanned undersea warfare—that’s pretty serious stuff. There needs to be regulation here. It’s just a question of what it is and when it’s going to come.

Not only should there be standards and oversight by some independent governmental entity, but there also ought to be encouragement for innovation because A.I. has enormous promise as well as peril. We ought to try to prevent the peril by having this regulatory regimen that requires testing—red-teaming, as it’s called by a lot of people—preventing the deepfakes, the hallucinations, the election interference (which is now so imminent), impersonations. There needs to be regulation, and basing it on risk is an important point.

You’ve called for some sort of regulatory body that would give out licenses to companies that are creating A.I. Who is qualified to do that? I had a conversation with a colleague of yours in the House, Rep. Don Beyer of Virginia, who’s actually getting a master’s degree in machine learning, and he laughed at the idea that anyone in Congress was actually qualified to understand and oversee A.I. creation.

He’s right. In fact, we had a forum just very recently involving some of the big tech titans, and one of them said, ‘I’m an engineer. I was the head of one of these companies, and I don’t understand these algorithms.’ There is a lot of reason to be cautious here and to empower experts. That’s why we need that independent oversight entity. It shouldn’t be done by members of Congress.

So who should do it?

There are people who are knowledgeable in this area, people who have degrees in computer science or machine learning, and who can assess the risks and design the tests. Sam Altman, whom I’ve come to know, has some really practical wisdom on this topic.

You trust Sam Altman—who has a financial interest in A.I.—to be the person doing this?

No, I wouldn’t trust him to be in the entity. But what he observed is that tests can be designed in very different ways. He says that his product ChatGPT-5 has passed all these tests showing it would be safe and effective. But you want it to go before an independent entity. Analogize it to pharmaceutical drugs. You don’t take Pfizer’s word for it. You don’t take Merck’s word for it. You have the FDA review it for safety and efficacy. Who does the review? Experts do the review. Members of Congress? No. Airline safety, same principle. Car safety, toy safety. This idea is not novel.

Yeah. I mean, ChatGPT thinks I was born in Ohio. I was born in Washington, D.C.

And that’s the reason why you want to be cautious and have an oversight entity to review ChatGPT, not take Sam Altman’s word for it, not take Google’s or any of the other big companies’ word for it.

A.I. requires two things: a massive amount of computing power and a massive amount of data—particularly if you’re talking about large language models or generative A.I. Doesn’t that just give the biggest five tech companies in the world an automatic home-field advantage? How do you think about this idea of encouraging innovation while also installing safeguards?

That’s a really important point, and it goes to the core of government regulation in drug safety, for example. How do you encourage innovation and new drugs if you require all this testing and government bureaucratic rigmarole that new companies have to go through, investing millions of dollars in the kinds of clinical trials that are necessary? The same principle applies here. Innovation could be stifled if the government regulation is too heavy-handed and unnecessarily burdensome. It really is a balance that we strike when it comes to new products in other fields as well, when health and safety are at risk.

What we need is a risk-based regimen that requires licensing and testing, not by Congress and not by members of the industry; we don’t want to delegate it to them. There are some members of the industry who have said, ‘Let us do it; we’ll regulate ourselves.’ No, that’s not adequate. Sam Altman and the others who are in this field have wisdom to impart and pertinent insights, but they should not be in charge of their own product review.

Here’s the encouraging part. First of all, Josh Hawley and I, again, may vote very differently and disagree on a lot of things, but on this we agree: There should be government regulation by an independent oversight entity. Call it an Office of Artificial Intelligence, call it something different, but it should have the authority to go to court, along with individuals, who should have individual rights of action. I believe that state attorneys general should be able to go to court as well, and that these companies should be held accountable. When they cause harm, they should have to pay.

It’s a drop in the bucket for them, though.

I’ve been as critical as anyone of the lack of effective penalties when it comes to consumer protection. But I think the penalties here ought to be very rigorous and the enforcement resources ought to be adequate so that this oversight entity can not only review effectively but also enforce. I’m an enforcer. That’s what I did for most of my career as a U.S. attorney and state attorney general. If you don’t have effective enforcement, the law is a dead letter.

I want to understand why Congress cannot seem to pass a privacy law. I know that you would like to do that. California has one. Europe has one. It just seems that a data privacy law would actually encapsulate so many of these issues—and yet it’s dead in the water on Capitol Hill.

Excellent point. One of the reasons why a federal privacy bill is difficult is that you have state laws, a number of which are already protecting the consumers of their states. And I have said very emphatically that I don’t want to dilute or preempt the best provisions of those laws. In other words, I don’t think we ought to have a lowest common denominator here. And preemption is one of the issues.

Well, yeah, but I mean that would work the same way as CAFE standards from the Transportation Department—California has stricter ones than the federal ones. Why couldn’t it work like that?

It could work like that. We have to get agreement on it. The issues for some of my colleagues have been whether we’re going to have a national standard that is lower, but binding, on the states that have already adopted higher standards. I don’t want to dilute or take away protection from the states that have already pioneered this effort.

You mentioned Europe. Europe is ahead of us. Europe is also ahead of us on social media. We cannot allow A.I. to advance without imposing government regulation. We need to learn the lesson of social media and we need to do more on privacy. But at the same time, why should Americans have less privacy protection than Europeans? We don’t want a lowest common denominator here. I am pressing for a higher standard because I really believe that Americans should have the gold standard in privacy protection.

Reference: https://slate.com/technology/2023/09/senator-blumenthal-kosa-kids-online-internet-safety-ai-regulation-congress.html


