Illustration: Allie Carl/Axios
Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet’s bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday that was shared exclusively with Axios.
Why it matters: Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 of the Communications Decency Act, the law that largely shields platforms from lawsuits over third-party content.
- It’s a newly urgent issue thanks to the explosive rise of generative AI.
- The new bipartisan bill bolsters the argument that Section 230 doesn’t cover AI-generated work. It also gives lawmakers an opening to go after Section 230 after vowing to amend it, without much success, for years.
The big picture: Section 230 is often credited as the law that allowed the internet to flourish, enabling social media to take off along with websites hosting travel listings and restaurant reviews.
- To its detractors, the law goes too far and is not fit for today’s web, allowing social media companies to leave too much harmful content online.
Details: Hawley and Blumenthal’s “No Section 230 Immunity for AI Act” would amend Section 230 “by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI,” per a description of the bill from Hawley’s office.
- The bill would also allow people to sue companies in federal or state court for alleged harm by generative AI models.
What they’re saying: “We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” Hawley told Axios.
- “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”
- “AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield,” Blumenthal told Axios. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era.”
The other side: Some legal experts have argued that Section 230 does cover generative AI, and that denying AI developers that protection would expose them to business-crushing lawsuits — or push them to build products so constrained by potential harms that they become ineffective or unusable.
- Others have argued that generative AI does not get 230 protection, and that courts are likely to consider large language models like ChatGPT as “information content providers” and therefore not qualified for immunity.
Our thought bubble: Sam Altman made a good showing on Capitol Hill when he testified earlier this year, and he has maintained a positive relationship with lawmakers so far, publicly welcoming regulation.
- But it’s hard to imagine Microsoft and OpenAI, or any company working on generative AI products, accepting such a law and opening themselves up to a torrent of lawsuits if people claim to have been hurt by something a chatbot told them.