Women in AI: Sarah Kreps, professor of government at Cornell
March 16, 2024

Sarah Kreps is a political scientist, U.S. Air Force veteran and analyst who focuses on U.S. foreign and defense policy. She’s a professor of government at Cornell University, adjunct professor of law at Cornell Law School and an adjunct scholar at West Point’s Modern War Institute.

Kreps’ recent research explores both the potential and the risks of AI technology such as OpenAI’s GPT-4, specifically in the political sphere. In an opinion column for The Guardian last year, she wrote that, as more money pours into AI, the AI arms race, not just across companies but across countries, will intensify, and the AI policy challenge will become harder.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I had my start in the area of emerging technologies with national security implications. I had been an Air Force officer at the time the Predator drone was deployed, and had been involved in advanced radar and satellite systems. I had spent four years working in this space, so it was natural that, as a PhD, I would be interested in studying the national security implications of emerging technologies. I first wrote about drones, and the debate around drones was moving toward questions of autonomy, which of course implicates artificial intelligence.

In 2018, I was at an artificial intelligence workshop at a D.C. think tank and OpenAI gave a presentation about this new GPT-2 capability they had developed. We had just gone through the 2016 election and foreign election interference, which had been relatively easy to spot because of little things like grammatical errors of non-native English speakers — the kind of errors that were not surprising given that the interference had come from the Russian-backed Internet Research Agency. As OpenAI gave this presentation, I was immediately preoccupied with the possibility of generating credible disinformation at scale and then, through microtargeting, manipulating the psychology of American voters in far more effective ways than had been possible when these individuals were trying to write content by hand, where scale was always going to be a problem.

I reached out to OpenAI and became one of the early academic collaborators in their staged release strategy. My particular research was aimed at investigating the possible misuse case: whether GPT-2 and later GPT-3 were credible as political content generators. In a series of experiments, I evaluated whether the public would see this content as credible, and I also conducted a large field experiment in which I generated ‘constituency letters’ and randomized them with actual constituency letters to see whether legislators would respond at the same rates. The goal was to learn whether legislators could be fooled, that is, whether malicious actors could shape the legislative agenda with a large-scale letter-writing campaign.
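The design lends itself to a simple statistical framing: compare the response rate to AI-generated letters against the rate for human-written ones. The sketch below is mine, not Kreps’; the counts are invented and the two-proportion z-test is just one reasonable way to run that comparison.

```python
# A purely illustrative sketch of the comparison at the heart of such a
# field experiment: do legislators reply to AI-generated letters at the
# same rate as human-written ones? The counts and the choice of test are
# assumptions for illustration, not details from the actual study.
from statsmodels.stats.proportion import proportions_ztest

replies = [112, 120]       # hypothetical replies received: [AI-generated, human-written]
letters_sent = [500, 500]  # hypothetical letters sent per condition

z_stat, p_value = proportions_ztest(count=replies, nobs=letters_sent)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A large p-value means the two response rates are statistically
# indistinguishable, i.e. the AI letters passed as credible constituent mail.
```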

These questions struck at the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies did represent new threats to our democracy.

What work are you most proud of (in the AI field)?

I’m very proud of the field experiment I conducted. No one had done anything remotely similar, and we were the first to show the disruptive potential in a legislative agenda context.

But I’m also proud of tools that unfortunately I never brought to market. I worked with several computer science students at Cornell to develop an application that would process inbound legislative emails and help staff respond to constituents in meaningful ways. We were working on this before ChatGPT, using AI to digest the large volume of emails and provide an AI assist for time-pressed staffers communicating with people in their district or state. I thought these tools were important because of constituents’ disaffection with politics but also the increasing demands on legislators’ time. Developing AI in these publicly interested ways seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists.

We conducted a number of experiments to assess the behavioral questions of how people would feel about an AI assist responding to them and concluded that maybe society was not ready for something like this. But then, a few months after we pulled the plug, ChatGPT came on the scene, and AI is now so ubiquitous that I almost wonder how we ever worried about whether this was ethically dubious or legitimate. But I still feel it’s right that we asked the hard ethical questions about the legitimate use case.
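For a rough sense of what such an assist might look like, here is a minimal sketch that digests inbound email with an off-the-shelf summarization model. The model choice and workflow are my assumptions; the actual Cornell tool predated ChatGPT and its implementation details aren’t described here.

```python
# Minimal sketch of an email-digest assist for a legislative office, assuming
# an off-the-shelf summarization model from Hugging Face (not the actual tool).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

inbound = [
    "Dear Senator, I am writing about the proposed rural broadband bill. Our "
    "county still lacks reliable internet, which hurts students and small "
    "businesses, and I urge you to support the expanded funding provision.",
    "I run a small dairy farm and the new water regulations are imposing "
    "compliance costs we cannot absorb. Please consider an exemption or a "
    "phased timeline for operations under fifty head of cattle.",
]

for email in inbound:
    digest = summarizer(email, max_length=40, min_length=10, do_sample=False)
    print("Digest for staffer:", digest[0]["summary_text"])
    # The design keeps a human in the loop: a staffer reviews each digest and
    # edits any AI-drafted reply before anything goes back to a constituent.
```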

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a researcher, I have not felt those challenges terribly acutely. I was just out in the Bay Area, and it was all dudes literally giving their elevator pitches in the hotel elevator, a cliché that I could see being intimidating. I would recommend that women entering the field find mentors (male and female), develop skills and let those skills speak for themselves, take on challenges and stay resilient.

What advice would you give to women seeking to enter the AI field?

I think there are a lot of opportunities for women; they need to develop skills and have confidence, and they’ll thrive.

What are some of the most pressing issues facing AI as it evolves?

I worry that the AI community has developed so many research initiatives that focus on things like ‘superalignment’ that obscure the deeper, or really the right, questions about whose values or what values we are trying to align AI with. Google Gemini’s problematic rollout showed the caricature that can arise from aligning with a narrow set of developers’ values, in ways that led to (almost) laughable historical inaccuracies in its outputs. I think those developers were acting in good faith, but the episode revealed that these large language models are being programmed with a particular set of values that will shape how people think about politics, social relationships and a variety of sensitive topics. Those issues aren’t of the existential-risk variety, but they do create the fabric of society and confer considerable power on the big firms (e.g. OpenAI, Google, Meta and so on) that are responsible for those models.

What are some issues AI users should be aware of?

As AI becomes ubiquitous, I think we’ve entered a ‘trust but verify’ world. It’s nihilistic not to believe anything, but there’s a lot of AI-generated content, and users really need to be circumspect about what they instinctively trust. It’s good to look for alternative sources to verify authenticity before just assuming that everything is accurate. But I think we already learned that with social media and misinformation.

What is the best way to responsibly build AI?

I recently wrote a piece for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has adapted to address disruptive technologies like AI. I had been thinking about how scientists could be better public stewards and wanted to connect some of the historical cases I had been looking at for a book project. I not only outline a set of steps I would endorse for responsible development but also speak to why some of the questions that AI developers are asking are wrong, incomplete or misguided.

Reference: https://techcrunch.com/2024/03/08/women-in-ai-sarah-kreps-professor-of-government-at-cornell/
