Source: United States Senator for Kansas – Jerry Moran
WASHINGTON – U.S. Senator Jerry Moran (R-Kan.) today questioned Facebook whistleblower Frances Haugen at a U.S. Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security hearing regarding protecting children online.
During the hearing, Sen. Moran asked Ms. Haugen about instances in which Facebook knew its decisions would be harmful to users, but still proceeded with the planned harmful behavior.
Ms. Haugen responded: “Facebook’s internal research is aware that there are a variety of problems facing children on Instagram…they know that severe harm is happening to children. Kids who are bullied on Instagram, the bullying follows them home… the last thing they see before they go to bed at night is someone being cruel to them or the first thing they see in the morning is someone being cruel to them.”
Following the hearing, Sen. Moran stated: “People should be able to connect with one another online without being manipulated by secret algorithms created by Big Tech that can exacerbate mental illness and thoughts of suicide. There must be increased transparency provided by these tech giants so that Kansans have the information necessary to choose what services to use and have better control over their personal and private information.”
Last year, Sen. Moran introduced landmark federal data privacy legislation, the Consumer Data Privacy and Security Act, to strengthen the laws that govern consumers’ personal data. During the hearing, he and Sen. Richard Blumenthal (D-Conn.) recommitted to finding a bipartisan path forward in light of the recent news surrounding the harmful practices of Facebook and Instagram. Sen. Moran is also a cosponsor of the Filter Bubble Transparency Act, which would require large-scale internet platforms to allow users to view content that was not curated as a result of a secret algorithm.
Transcript of exchange:
Sen. Moran: Thank you very much for your testimony. We’ve talked particularly about children, teenage girls specifically, but what other examples do you know about where Facebook or Instagram knew its decisions would be harmful to its users, but still proceeded with the plan and executed that harmful behavior?
Ms. Haugen: Facebook’s internal research is aware that there are a variety of problems facing children on Instagram…they know that severe harm is happening to children. For example, in the case of bullying, Facebook knows that Instagram dramatically changes the experience of high school…When I was in high school…most kids have positive home lives…it doesn’t matter how bad it is at school, kids can go home and rest for 16 hours. Kids who are bullied on Instagram, the bullying follows them home. It follows them into their bedrooms. The last thing they see before they go to bed at night is someone being cruel to them, or the first thing they see in the morning is someone being cruel to them. Kids are learning that their own friends – the people who care about them – are cruel to them. Think about how that is going to impact their domestic relationships when they become twenty-somethings or thirty-somethings – to believe that the people who care about you are mean to you. Facebook knows that parents today, because they didn’t experience these things and never had this addictive experience with a piece of technology, give their children bad advice. They say things like, “why don’t you just stop using it?” Facebook’s own research is aware that children express feelings of loneliness and of struggling with these things because they can’t even get support from their own parents. I don’t understand how Facebook can know all these things and not escalate it to someone like Congress for help and support in navigating these problems.
Sen. Moran: Let me ask the question in a broader way. Besides teenagers or besides girls or besides youth, are there other practices that Facebook and Instagram know to be harmful, yet are pursued?
Ms. Haugen: Facebook is aware that choices it made in establishing meaningful social interactions – engagement-based ranking that didn’t care if you bullied someone or committed hate speech in the comments – that was meaningful. They know that that change directly changed publishers’ behavior. Companies like BuzzFeed wrote in and said, ‘the content that is most successful on our platform is the content we’re most ashamed of; you have a problem with your ranking,’ and [Facebook] did nothing. They know that politicians are being forced to take positions they know their own constituents don’t like or approve of, because those are the positions that get distributed on Facebook – that’s a huge, huge negative impact. Facebook also knows, and has admitted in public, that engagement-based ranking is dangerous without integrity and security systems, but it has not rolled out those integrity and security systems to most of the languages in the world, and that’s what’s causing things like ethnic violence in Ethiopia.
Sen. Moran: Thank you for your answer.
# # #