It’s Time Congress Pulled Back the Curtain on Social Media Algorithms

Source: John Thune, United States Senator for South Dakota, originally published by CNN
Congress has heard enough. So have the American people.
We’ve all heard testimony about social media’s damaging effects on consumers. We’ve seen Facebook’s internal documents, which reveal that the company knew Instagram, which it acquired in 2012, was particularly harmful to teenage girls. Witnesses have also told us that not enough is being done to protect children online. And hearing after hearing has demonstrated that people are not aware of the ways big tech companies collect user data to mold their online experiences.
The one thing we haven’t heard, though, is how any of this will change without action from Congress. That’s why I have introduced a bill that would essentially create a light switch for big tech’s secret algorithms — artificial intelligence (AI) that’s designed to shape and manipulate users’ experiences — and give consumers the choice to flip it on or off.
It’s become increasingly clear that the algorithms that power social media and search engines shape what we see on these platforms. The powerful AI, which varies from company to company, serves as a prediction engine that creates a unique universe of information for each user — a phenomenon that’s often referred to as the “filter bubble.”
By showing similar content based on what a user has already liked, watched, searched for or reacted to, the filter bubble contributes to political polarization and social isolation. Perhaps the most important thing to understand is that users don’t make a conscious decision to enter the filter bubble. This can be particularly troubling for younger users. For example, a recent Wall Street Journal investigation described in detail how TikTok’s algorithm can quickly inundate minors with highly inappropriate videos.
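To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. The topics, the scoring rule, and the user's behavior are all invented for illustration; no platform's actual ranking code is public, and none is reproduced here. The point is only the dynamic: a feed that boosts whatever the user already engages with narrows on its own.

```python
# Hypothetical illustration of the feedback loop behind a "filter bubble."
# This is a toy sketch, not any platform's actual ranking code: items are
# tagged with topics, and each round the feed boosts topics the user has
# already engaged with, so the mix of topics shown steadily narrows.
from collections import Counter
import random

TOPICS = ["politics", "sports", "cooking", "science", "music"]

def make_items(n=200, seed=0):
    rng = random.Random(seed)
    return [{"id": i, "topic": rng.choice(TOPICS)} for i in range(n)]

def rank(items, engagement: Counter, k=10):
    # Score each item by how often the user has engaged with its topic.
    # A small random tiebreaker stands in for everything else a real
    # ranking system would consider.
    rng = random.Random(42)
    scored = sorted(items,
                    key=lambda it: (engagement[it["topic"]], rng.random()),
                    reverse=True)
    return scored[:k]

def simulate(rounds=5):
    items = make_items()
    engagement = Counter()
    user_favorite = "politics"  # the user reliably clicks on one topic
    for r in range(rounds):
        feed = rank(items, engagement)
        shown = Counter(it["topic"] for it in feed)
        for it in feed:
            if it["topic"] == user_favorite:
                engagement[it["topic"]] += 1
        print(f"round {r}: topics shown = {dict(shown)}")

simulate()
```

After a round or two, the simulated feed shows almost nothing but the one topic the user clicks on, even though the user never asked for that.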
Long gone are the days when Facebook displayed posts on the news feed in chronological order. Now Facebook and other social media platforms use secret algorithms to shape what users see, predicting what they might be emotionally drawn to and giving that content more prominence.
Algorithms can be useful, of course, but many people simply aren’t aware of just how much their experience on these platforms is being manipulated and how this manipulation can have negative emotional effects. These algorithms have largely been a black box to consumers and Congress alike, but now there is new momentum for accountability and transparency.
I’ve introduced the Filter Bubble Transparency Act in the Senate along with Sen. Richard Blumenthal and several of my other colleagues, and companion legislation was recently introduced in the House of Representatives by Reps. Ken Buck and David Cicilline.
Supporters of this legislation are an ideologically diverse group, but we all agree that consumers should have more information and control over how algorithms — fed by users’ personal data — are influencing their online experiences.
Our bipartisan legislation is simple and straightforward. It would require large tech companies, including Facebook, Google, Instagram, TikTok, and YouTube, to notify users when the platform is using AI to prioritize the content they see. Users who don’t want an opaque algorithm manipulating their online experience could then easily opt out.
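As a rough schematic of that "light switch," the sketch below shows a feed that announces when an opaque ranking is in use and falls back to plain reverse-chronological order when the user opts out. The names, types, and notice text here are hypothetical; the bill's actual requirements are defined by its statutory text, not by this code.

```python
# Schematic of the "light switch" idea described above: tell the user when
# an opaque algorithm is ordering the feed, and let them flip to a version
# that does not rank based on their personal data. All names and structure
# are hypothetical, not drawn from the bill's text.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int  # larger = newer

@dataclass
class UserSettings:
    algorithmic_feed: bool = True  # the "switch" the user controls

def build_feed(posts: list[Post], settings: UserSettings,
               engagement_score) -> list[Post]:
    if settings.algorithmic_feed:
        # Opaque path: order by a prediction of what the user will react to.
        print("Notice: this feed is personalized by an algorithm.")
        return sorted(posts, key=engagement_score, reverse=True)
    # Transparent path: plain reverse-chronological order, no personal data.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [Post("a", "old post", 1), Post("b", "new post", 3), Post("c", "mid", 2)]
user = UserSettings(algorithmic_feed=False)  # the user flips the switch off
for p in build_feed(posts, user, engagement_score=lambda p: len(p.text)):
    print(p.timestamp, p.text)
```

The design point is that the transparent path uses no user-specific data at all: ordering by timestamp is something any user can verify for themselves.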
In today’s divided and hyper-partisan Congress, big tech regulation stands out as an opportunity for bipartisan action. After years of hearing about the problems with big tech, it’s time for Congress to hold these platforms accountable. 
The best place to start is to pull back the curtain on these secret algorithms and give consumers more transparency and choice. Congress can do that by passing the bipartisan Filter Bubble Transparency Act.