Wyden Calls for Accountability, Transparency for AI In Remarks at Georgetown Law

Source: United States Senator Ron Wyden (D-Ore.)

March 03, 2023

As Prepared for Delivery

Thanks to Georgetown Law’s Institute for Technology Law & Policy and Yale’s Information Society Project for organizing today’s event.  You all have the NBA All-Stars of algorithmic accountability and AI on the agenda today. 

Before we get to the experts, I thought I’d kick off today’s event by discussing my view of why these questions about fairness and effectiveness of AI systems are so important, and how Congress is approaching the issue.  

Chatbots like ChatGPT are the current hot topic for reporters, since these tools are public-facing, easy for journalists to evaluate, and fun to play with. It is absolutely appropriate to scrutinize those tools. But as folks here know, there is a whole galaxy of automated decision systems that the public can’t see and that are impacting people’s lives right now. 

As we speak, millions of Americans are applying for jobs, filling prescriptions, shopping for insurance, and looking for housing online. Their access to these critical services is often impacted by unregulated, unaudited algorithmic systems. In my view, the first job for Congress and experts is to find out where these systems are, create some baseline transparency for consumers, and make sure these black box systems really work. I’m especially focused on making sure automated systems don’t automate and amplify discrimination. 

One of the first areas I got involved in was the NFL concussion settlement. The NFL was using a formula to decide which retired players got benefits. OK, fine. Here’s the problem — it assumed Black players have lower cognitive function than white players. That’s a show-stopper. It meant Black players were less likely to get benefits as a result. 

So after the New York Times reported on this race-based formula, I wrote the NFL a letter asking, what’s the deal? Is the NFL effectively denying Black players settlement payments that they would otherwise be entitled to? Because if it is, that’s textbook racism. 

I asked the NFL a series of straightforward questions about how this formula worked, whether it was used to determine payouts, and how many former players were affected. But, when the NFL responded, it refused to supply me with the stats and references I had requested – it mostly ignored those questions. I’m talking about basic stuff: How many players didn’t get benefits because of this formula? Where are the peer-reviewed studies that say this is an acceptable way to measure whether a player suffered brain impairment from football? 

Ultimately the NFL and retired players agreed on a new way to give out benefits that didn’t rely on the formula. 

This isn’t the kind of complicated system that you all are working with, but it illustrates the exact same issues of algorithmic fairness and effectiveness.

Once I started looking into AI accountability, it was shocking how many examples I found, going back years. In 2014, Amazon engineers set out to automate the process of recommending and hiring workers.  Instead of having to hire HR workers to sort through applications, Amazon executives wanted a system that could sort hundreds of resumes and recommend the top five applicants.  

Amazon engineers used a dataset of ten years’ worth of resumes from people Amazon had hired in the past, and then trained a statistical model on the terms that appeared in those resumes.  

Very soon after launching, the system began to detect subtle cues that recurred on successful applications. One big pattern the system picked up on was that Amazon hadn’t hired many women over the previous ten years. 

So instead of making the hiring process more fair, the algorithm began to actively downrank applications that mentioned the word “women” or listed women’s colleges. 

Engineers could not find a way to completely remove the influence of gender proxies on the tool’s outcomes. In 2018, Amazon finally shut down the program and recruiters stopped using the tool. 

One final example: 

In 2021, journalists found that screening tools meant to identify patients at high risk of prescription painkiller abuse were flagging cancer patients with legitimate prescriptions. 

What is more, journalists highlighted that these algorithms were trained on an extremely wide range of sensitive health data. 

For a system that flags patients based on factors like depression, trauma, and criminal records, all of which are more prevalent among women and racial minorities, this is a particularly troubling story. 

All of these examples highlight just how many flawed automated systems are operating in the real world. 

The harms in each of these examples could have been mitigated if the companies had tested their products for faulty data, bias, safety risks, performance gaps and other problems.

Unfortunately, many companies have done far too little to make sure their algorithms work and are fair. And to make matters worse, neither the public nor the government knows when or how these failures are happening.

I’ve taken a first step toward remedying this state of affairs by introducing, with my colleague Rep. Yvette Clarke, the Algorithmic Accountability Act in each Congress since 2019. 

As you might be aware, this Act would force companies to perform ongoing impact assessments: they would need to take a hard look at the algorithms they use, identify negative impacts of those systems, and fix the problems, including biased outcomes, that they find. 

It also requires summary reporting to the Federal Trade Commission. And it creates a new public repository at the FTC so consumers can see where algorithms are being used.

We need action on this bill.  It’s beyond time to pull back the curtain on the secret algorithms that decide whether you get to see a doctor, rent a house or get into a school. 

Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.  

In the world of AI, things are moving incredibly fast. Legislation needs to remain current to remain relevant.  

That’s why in 2022, Representative Clarke, Senator Booker and I updated the bill substantially. We’re again looking at ways to improve the legislation before reintroducing it.   

I’m also keeping an eye on other disturbing AI trends. 

For example, the proliferation of AI-powered emotion detection tools gives me pause.  

Let’s be clear: Any system that purports to be able to determine things like a person’s character, capability, or protected class status based on their facial features, eye movements, or tone of voice is probably deploying dangerous pseudoscience. 

And yet, billions of dollars are flooding into tech that purports to do exactly this. For example, companies like Zoom have developed tools that purport to tell sales professionals how their targets are reacting to their sales pitches. Companies like Empath claim to be able to give employers a better sense of their employees’ emotional state by listening in to their calls. 

Such tech threatens to bring us back to the Victorian Era, when evaluating people based on their facial features or the shape of their heads was all the rage. 

Scientists have had to remind people again and again why this junk science faded from popularity in the first place.  

I’m watching closely how the European Union is seeking to deal with this issue in its omnibus AI Act, and thinking about what we might be able to do about it here in the U.S. 

I’ll note one final area of concern. 

There is a lot of promise and potential in AI. It makes sense that companies are investing billions into AI and machine learning, and looking for ways to integrate AI into popular products and services. 

However, it is absolutely essential that at the same time companies invest in teams focused on making sure AI innovation is fair and ethical.  

Rep. Clarke and I wrote to Google in 2020 about the firing of members of Google’s AI Ethics team. In the case of Dr. Timnit Gebru, the firing was at least partially over a paper she was seeking to publish about bias in AI systems. 

Historically marginalized and vulnerable populations are more likely to be harmed by bias and privacy violations. It is critical that companies emphasize diversity in all teams, including those developing, implementing, and overseeing AI systems. But they also need to listen to the voices of people who raise concerns about those systems, not ignore them, or retaliate against them. 

Companies can’t wait until a product or tool is already on the market, or until problems arise, to listen to those perspectives. 

Friends don’t filibuster friends, so I’ll end there.

###