Europe spins up AI research hub to apply accountability rules on Big Tech
As the European Union gears up to enforce a major reboot of its digital rulebook in a matter of months, a new dedicated research unit is being spun up to support oversight of large platforms under the bloc’s flagship Digital Services Act (DSA).
The European Centre for Algorithmic Transparency (ECAT), which was officially inaugurated in Seville, Spain, today, is expected to play a major role in interrogating the algorithms of mainstream digital services — such as Facebook, Instagram and TikTok.
ECAT is embedded within the EU’s existing Joint Research Centre (JRC), a long-established science facility that conducts research in support of a broad range of EU policymaking, from climate change and crisis management to taxation and health sciences. But while the new unit sits under the JRC’s umbrella — and is temporarily housed in the same austere-looking building (Seville’s World Trade Centre), ahead of moving to more open-plan, bespoke digs in the coming years — it has a dedicated focus on the DSA, supporting EU lawmakers in gathering the evidence needed to build cases against platforms that don’t take their obligations seriously.
Commission officials describe ECAT’s function as identifying ‘smoking guns’ to drive enforcement of the DSA — say, for example, an AI-based recommender system that can be shown to be serving discriminatory content despite the platform in question claiming to have taken steps to ‘de-bias’ output — with the unit’s researchers tasked with producing hard evidence to help the Commission build cases for breaches of the new digital rulebook.
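To make the ‘smoking gun’ idea concrete, here is a minimal sketch (purely illustrative: the counts, groups and significance threshold are invented, and this is not ECAT’s actual methodology) of the kind of disparity check that could put a platform’s de-biasing claim to the test, comparing how often a recommender serves a given type of content to two groups of test profiles.

```python
# Minimal, hypothetical sketch: a two-proportion z-test comparing how
# often a recommender served some content type to two groups of test
# profiles. All counts below are invented for illustration.
from math import sqrt

def two_proportion_z(shown_a: int, total_a: int, shown_b: int, total_b: int) -> float:
    """Z-statistic for the difference between two exposure rates."""
    p_a, p_b = shown_a / total_a, shown_b / total_b
    pooled = (shown_a + shown_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical audit counts: how often a job ad was recommended to
# test profiles in group A vs. group B.
z = two_proportion_z(shown_a=480, total_a=1000, shown_b=310, total_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be chance at the 5% level
```

A lopsided z-value alone wouldn’t prove a breach, but it is the kind of reproducible measurement a regulator could put to a platform alongside its risk assessment claims.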
The bloc is at the forefront of addressing the asymmetrical power of platforms globally, having prioritized a major retooling of its approach to regulating digital services and platforms at the start of the current Commission mandate back in 2019 — leading to the DSA and its sister regulation, the Digital Markets Act (DMA), being adopted last year.
Both regulations will come into force in the coming months, although the full sweep of provisions in the DSA won’t start being enforced until early 2024. But a subset of so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) face imminent oversight — expanding the usual EU acronym soup in the process.
Today, the Commission said it will ‘very soon’ designate which platforms will be subject to the special oversight regime — which requires that VLOPs/VLOSEs proactively assess systemic risks their algorithms may pose, apply mitigations and submit the steps they say they’ve taken to address such risks for scrutiny by EU regulators.
It’s not yet confirmed exactly which platforms will get the designation, but the criteria set in the DSA — such as having 45 million+ regional users — encourage educated guesses: The usual (U.S.-based) GAFAM giants are almost certain to meet the threshold, along with (probably) a smattering of larger European platforms. Plus, given its erratic new owner, Twitter may have painted a DSA-shaped target on its feathered back. But we should find out for sure in the coming weeks.
Once designated as VLOPs (or VLOSEs), tech giants will have four months to comply with the obligations, including producing their first risk assessment reports. This suggests formal oversight may kick off around fall. (Of course, building cases will take time, so we may not see any real enforcement fireworks until next year.)
Risks the DSA stipulates platforms must consider include the distribution of disinformation and illegal content, along with negative impacts on freedom of expression and users’ fundamental rights (which means considering issues like privacy and child safety). The regulation also puts some limits on profiling-driven content feeds and the use of personal data for targeted advertising. And EU lawmakers are already claiming credit for certain shifts in the usual platform playbook — such as Twitter’s recent open sourcing of its recommendation algorithm.
The bloc’s overarching goal for the DSA is to set new standards in online safety by using mandatory transparency as a flywheel for driving algorithmic accountability. The idea is that by forcing tech giants to open up about the workings of their AI ‘black boxes,’ they’ll have no choice but to take a more proactive approach to addressing data-driven harms than they typically have.
Much of Big Tech has gained a reputation for profiting off of toxicity and/or irresponsibility — whether it’s fencing fake products, pushing conspiracy theories, amplifying outrage-fueled content or deploying hyper-engagement dark patterns that can drive vulnerable individuals into very dark places (and plenty more besides).
Mainstream marketplaces and social media giants have long been accused of failing to meaningfully address the myriad harms attached to how they operate their powerful sociotechnical platforms. Instead, when another scandal strikes, they often lavish resources on crisis PR or reach for other cynical tactics designed to keep shielding their ops, deflecting blame and delaying or avoiding real change. But that road looks to be running out in Europe.
At the least, the DSA should help end the era of platforms’ PR-embellished self-regulation — aka, all those boilerplate statements in which tech giants claim to really care about privacy/security/safety, and so on, while doing anything but — because they will have to show their workings in arriving at such claims. (A core piece of ECAT’s work will be devising ways to test claims made by tech giants in the risk assessment reports they’re required to submit to the Commission at least annually.)
Zooming out, the unit is being positioned as the jewel in the crown of the Commission’s DSA toolbox — a crack team of dedicated and motivated experts who are steeped in European values and will be bringing scientific rigor, expertise, and human feeling and experience to the complex task of understanding AI effects and auditing immediate impacts.
The EU also hopes ECAT will become a hub for world-leading research into algorithmic auditing — and that, by supporting regulated algorithmic transparency on tech giants, it will enable regional researchers to unpick the longer-term societal impacts of mainstream AIs.
If all goes to plan, the Commission anticipates basking in the geopolitical glory of having written the rulebook that tamed Big Tech. Yet there’s no doubt the gambit is bold and the mission complex — and poor results would make the bloc a lightning rod for a fresh wave of ‘anti-innovation’ criticism.
Brussels is of course anticipating that particular attack — hence its framing of the effort as working to shape ‘a digital decade that is marked by strong human centric regulation, combined with strong innovation,’ as Renate Nikolay, deputy director-general for Communications Networks, Content and Technology, emphatically put it as she cut ECAT’s virtual ribbon today.
At the same time, there’s no doubt algorithmic transparency is a timely mission to take on — with heavy hype swirling around developments in generative AI and spiking wide-ranging concerns over the possible impacts of such fast-scaling tech.
OpenAI’s ChatGPT got a passing mention at the ECAT launch event — dubbed ‘one more reason’ to set up ECAT by Mikel Landabaso, a director at the JRC. ‘The issue here is we need to open the lid of the Black Box of algorithms that are so influential in our lives,’ he said. ‘For the citizen. For the safe online space. For an artificial intelligence which is human centred and ethical. For the European way to [do] artificial intelligence. For something that is autonomous — which is leading the world in terms of non-standard research technology in this field, which is such a good opportunity for all of us and our scene.’
The EU’s Nikolay also hyped the importance of the mission — saying the DSA is about bringing ‘accountability in the platform economy [and] transparency in the business models of platforms,’ which is something she argued will protect ‘users and citizens as they navigate the online environment.’
‘It increases their trust in it and their choice,’ she suggested — before going on to hint at a modicum of stage fright in Brussels, and at the main dish lawmakers will be hoping to dine out on here (i.e., increased global influence).
‘I can tell you the world is watching… International organisations, many partners in the world are looking at reference points when they are designing their approach to the digital economy. And why not take inspiration from the European model?’
Nikolay also took a moment in her speech to address the doubters. ‘I want to give a strong signal of reassurance,’ she said, anticipating the criticism that the EU is simply not ready to be Big Tech’s algorithmic watchdog by stressing there will actually be a pack of hounds on the case: ‘The Commission is getting ready for this role…We have prepared for it. We’re doing it together. And this is also where the [ECAT] comes in. Because we’re not doing it alone — we’re doing it together with important partners.’
Speaking during a background technical briefing ahead of the official inauguration, ECAT staff also pointed back to work the JRC has already done on ‘trustworthy algorithmic systems’ — which they suggested they’d be building on, as well as drawing on the expertise of colleagues in the wider research facility.
They described their role as conducting applied research into AI but with a ‘unique’ focus tied to policy enforcement. (Or: ‘The main difference is…this is a research team on artificial intelligence that has a regulatory force. This is the first time you have specialist researchers with this very specialist focus on a regulated legal service to understanding algorithmic systems. And this is unique. This gives us lots of powers.’)
In terms of size, the plan is for a team of 30 to 40 to staff the unit — perhaps reaching full capacity by the end of the year — with some 14 hires made so far, the majority of them scientific staff. The initial recruitment drive attracted significant interest, with over 500 applications following job ads posted last year, according to ECAT staff.
Funding for the unit is coming from the existing budget of the JRC, per Commission officials, although a supervisory fee levied on VLOPs/VLOSEs (capped under the DSA at 0.05% of annual worldwide net income) will be used to finance ECAT’s staff costs as that mechanism spins up.
At today’s launch event, ECAT staff gave a series of brief presentations of four projects they’re already undertaking — including examining racial bias in search results; investigating how to design voice assistant technology for children that is sensitive to the vulnerability of minors; and researching social media recommender systems by creating a series of test profiles to explore how different ‘likes’ influence the character of the recommended content.
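To illustrate the test-profile approach to recommender research, here is a minimal sketch; the `client` object and its `fetch_feed()`/`like()` methods are hypothetical stand-ins, since the unit’s actual tooling has not been published.

```python
# Hypothetical sketch of a 'sock puppet' recommender audit: each probe
# profile likes only one topic, and we record what the feed serves back.
# The client API (fetch_feed/like) is an invented stand-in.
from collections import Counter

def run_probe(client, liked_topic: str, rounds: int = 50) -> Counter:
    """Like everything matching one topic; tally what the feed serves back."""
    served = Counter()
    for _ in range(rounds):
        for item in client.fetch_feed():   # hypothetical API call
            served[item.topic] += 1
            if item.topic == liked_topic:
                client.like(item)          # signal interest to the recommender
    return served

# Comparing tallies across probes with different 'likes' shows how strongly
# engagement signals skew what each profile is shown, e.g.:
#   baseline = run_probe(make_profile(), liked_topic="sports")
#   probe    = run_probe(make_profile(), liked_topic="extreme_diets")
```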
Other early areas of research include facial expression recognition algorithms and algorithmic ranking and pricing.
During the technical briefing for press, ECAT staff also noted they’ve built a data analysis tool to help the Commission with the looming task of parsing the risk assessment reports that designated platforms will be required to submit for scrutiny — anticipating a now-familiar tactic in which tech giants respond to regulatory requests with reams of (mostly) irrelevant information in a cynical bid to flood the channel with noise.
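ECAT has not described the tool in detail, but a toy version of the triage idea (scoring report sections against DSA risk vocabulary so reviewers can surface substance and skip filler) might look like the following; the term list and sample text are invented for illustration.

```python
# Toy triage of a long risk-assessment report: rank sections by the
# density of DSA risk vocabulary. Not ECAT's actual tool; the term
# list and sample sections are invented.
import re

DSA_RISK_TERMS = {"disinformation", "illegal", "minors", "profiling",
                  "recommender", "mitigation", "fundamental", "rights"}

def relevance(section: str) -> float:
    """Share of words in a section that hit the risk vocabulary."""
    words = re.findall(r"[a-z]+", section.lower())
    return sum(w in DSA_RISK_TERMS for w in words) / max(len(words), 1)

sections = [
    "Our recommender systems underwent mitigation review for minors.",
    "The company was founded in 2004 and values innovation worldwide.",
]
for s in sorted(sections, key=relevance, reverse=True):
    print(f"{relevance(s):.2f}  {s[:60]}")
```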
And, as noted above, as well as its near-term focus on supporting the Commission’s policy enforcement, ECAT will aim to shine a light on societal impact by studying the longer-term effects of interactions with algorithmic technologies — also with a focus on priorities set out in the DSA, which include areas like gender-based violence, child safety and mental health.
Given the complexity of studying algorithms and platforms in the real world, where all sorts of sociotechnical impacts and effects are possible, the Centre is taking a multidisciplinary approach to hiring talent — bringing in not only computer and data scientists but also social and cognitive scientists and other types of researchers. Staff emphasized they want to be able to apply a broad variety of expertise and perspectives to interrogating AI impacts.
They also stressed they won’t be a walled garden within the JRC either — with plans to ensure their research is made accessible to the public and to partner with the wider European research community. (The future home for ECAT, presented at the event by JRC director Stephen Quest, has been designed as a bit of a visual metaphor for the spirit of openness they’re aiming to channel.)
The aim is for ECAT to catalyze the wider academic community in Europe to zero in on AI impacts, with staff saying they will be working to build bridges between research institutions, civil society groups and others to try to establish a wide and deep regional ecosystem dedicated to unpicking algorithmic effects.
One early partnership is with France’s PEReN — a research organization set up to support national policymaking and regulatory enforcement. (In another example discussed at the launch, PEReN said it had devised a tool to study how quickly the TikTok algorithm latches on to a new target when a user’s interests change — achieved by creating a profile that mostly watched cat videos, then abruptly switched to videos of trucks, and mapping how the algorithm responded.)
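PEReN has not published the tool itself, but the shape of the experiment is easy to sketch: switch a profile’s interest at a known point and count how many feed refreshes pass before the new topic dominates. In the sketch below, a toy `simulate_feed()` stands in for real platform access so the measurement logic runs on its own.

```python
# Toy drift experiment: a profile that watched cat videos switches to
# trucks; we measure how many refreshes until trucks dominate the feed.
# simulate_feed() is an invented stand-in for real platform access.
import random

def simulate_feed(new_topic: str, adaptation: float, step: int) -> list[str]:
    """Fake feed of 20 items; the share of the new topic grows by
    `adaptation` per refresh after the interest switch."""
    p_new = min(1.0, adaptation * step)
    return [new_topic if random.random() < p_new else "cats" for _ in range(20)]

def refreshes_until_dominant(new_topic: str, adaptation: float, threshold: float = 0.5) -> int:
    """Count refreshes until `new_topic` makes up >= threshold of the feed."""
    for step in range(1, 200):
        feed = simulate_feed(new_topic, adaptation, step)
        if feed.count(new_topic) / len(feed) >= threshold:
            return step
    return -1  # never adapted within the horizon

random.seed(0)
print(refreshes_until_dominant("trucks", adaptation=0.05))  # latency, in feed refreshes
```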
While enforcement of EU rules can sometimes appear even more painstakingly slow than the bloc’s legislative process itself, the DSA takes a new tack: centralized oversight of larger platforms, combined with a regime of meaty penalties that can scale up to 6% of global annual turnover for tech giants that don’t take transparency and accountability requirements seriously.
The regulation also puts a legal obligation on platforms to cooperate with regulatory agencies — including requirements to provide data to support Commission investigations, or even to deliver up staff for interview by the technical experts staffing ECAT.
It’s true the EU’s data protection regime, the GDPR, also carries large penalties on paper (up to 4% of global turnover) and empowers regulators to ask for information. However, its application against Big Tech has been stymied by forum shopping — which simply won’t be possible for VLOPs/VLOSEs (albeit we should probably expect them to further expand their Brussels lobbying budgets).
But the hope, at least, is that this centralized enforcement structure will add up to more robust and reliable enforcement — and, as a consequence, act as an irresistible force pushing platforms to put genuine focus on the common good.
At the same time, there will inevitably be ongoing debate about how best to measure AI impacts on subjective considerations like well-being or mental health — as well as about what to prioritize (which platforms? which technologies? which harms?) and, really, how to slice and dice limited research time across such a vast, multifaceted potential surface area.
Questions about how prepared the Commission is for dealing with Big Tech’s army of friction-generating policy staffers started early and seem unlikely to just disappear. Much will depend on how it sets the tone on enforcement — whether it comes out swinging early, or allows Big Tech to set the timeline, shape the narrative around any interventions and engage in other bad-faith tactics, like demanding never-ending dialogues about how they see ‘such and such’ an issue.
The Commission faced questions from assembled press at the technical briefing on its preparedness — and on whether such a relatively small number of researchers can really make a dent in cracking open Big Tech’s algorithmic black boxes. It responded by professing confidence in its ability to get on with the business of regulating.
Officials also projected confidence that the DSA provides the enabling framework to pull off this massive, public-service-focused reverse-engineering mission.
‘If you look at the Digital Services Act, it has very clear transparency obligations already for the platforms. So they need to be more concerned about the algorithmic systems, the recommender systems and we will of course hold them accountable to that,’ said one official, batting the concern away.
A more realistic-sounding prediction of the quasi-Sisyphean task ahead of the EU came via Rumman Chowdhury, who was speaking at today’s launch event. ‘There’ll be a lot of controversy and discussion,’ she predicted. ‘And my main feedback to people who have been pushing back has been, yes, it will be a very messy 3-5 years but it will be a very beneficial 3-5 years. At the end of it, we will actually have accomplished something that, to date, we have not been able to quite yet — enabling individuals outside companies who have the interest of humanity in their minds and in their hearts to actually implement these laws in platforms at scale.’
Until recently, Chowdhury headed up Twitter’s AI ethics team — before new owner, Elon Musk, came in and liquidated the entire unit. She has since established a consultancy firm focused on algorithmic auditing and she revealed she’s been co-opted into the DSA effort too, saying she’s been working with the EU on research and implementation for the regulation by sharing her take on how to devise algorithmic assessment methodology.
‘I celebrate and applaud the event of the Digital Services Act and the work I am doing with the DSA in order to, again, move these concepts of benefit to humanity and society, from research and application into tangible requirements. And that I think is the most powerful aspect of what the Digital Services Act is going to accomplish, and also what the ECAT will help accomplish,’ she said.
‘This is what we should be focused on,’ she further emphasized, dubbing the EU’s gambit ‘quite unprecedented.’
‘What the [DSA] introduces — and what folks like myself can hopefully help with — is how does a company work on the inside? How is data checked? Stored? Measured? Assessed? How are models being built? And we’re asking questions that, actually, individuals outside the companies have not been able to ask until now,’ she suggested.
In her public remarks, Chowdhury also hit out at the latest AI hype cycle being driven by generative AI tools like ChatGPT — warning that the same bogus claims are being rolled out for human-programmed technologies with a known set of flaws, such as embedded bias, even as platforms simultaneously dismantle their internal ethics teams. The pairing is no accident, she implied, but rather cynical opportunism at work as tech giants attempt to reboot the same old cycle and keep ducking responsibility.
‘Over the past years I’ve watched the slow demise of internal accountability teams at most technology companies. Most famously my own team at Twitter. But also Margaret Mitchell and Timnit Gebru’s team at Google. The last few weeks at Twitch, as well as Microsoft. At the same time, hand in hand, we are seeing the launch and imposition, frankly, the societal imposition of generative AI algorithms and solutions. So simultaneously firing the team who were the conscience of most of these companies while also building technology that, at scale, has unprecedented impacts.’
While the shuttering of AI ethics teams by major platforms hardly augurs well for them turning over a new leaf on algorithmic accountability, Chowdhury’s presence at the EU event implied one tangible upside: Insider talent is being freed up — and, dare we say it, motivated — to take jobs working in the interest of the public good, rather than being siloed (and contained) within commercial walled gardens.
‘Most of the talented individuals who have qualitative or quantitative skills, technical skills, get snatched up by companies. The brain drain has been very real. My hope is that these kinds of laws and these kinds of methodologies can actually appeal to the conscience of so many people who want to be doing this kind of work, folks like myself, who had no other way back then to go work at companies,’ she suggested. ‘And here’s where I see there’s a gap that can be filled — that needs to be filled quite badly.’
Reference: https://techcrunch.com/2023/04/18/ecat/