By Lauren Feiner
Representing California in Congress comes with a unique challenge: navigating national politics while reflecting the interests of the most populous state in the US, including a large constituency from the tech industry. It’s a challenge both current California Sen. Laphonza Butler and Vice President Kamala Harris — who previously held that title — have taken on. And right now, governing the tech world means addressing AI.
Congress hasn’t made much headway on a national framework for regulating generative AI. But California is the epicenter of the AI industry, home to companies like OpenAI and Google. On the national stage, Harris has acted as an AI czar within the Biden administration, leading discussions with industry players and civil society leaders about how to regulate it. Butler, who has a long history with the VP, is focusing on a specific problem: how AI systems impact labor and social equity.
Butler spoke with The Verge about balancing the interests of AI companies and the people their products impact, including workers who fear being automated out of a job. “It all starts with listening,” says Butler, a former labor leader. “It starts with listening to both the developers, the communities potentially impacted negatively, and the spaces where opportunity exists.”
A balancing act
Like many officials, Butler says she wants to help protect Americans from the potential dangers of AI without stifling the opportunities that could come from it. She praised both Senate Majority Leader Chuck Schumer and the Biden administration for “creating spaces for communities to have [a] voice.” Both have brought in labor and civil society leaders, along with major AI industry executives, to educate and engage on regulation in the space.
Butler insists lawmakers don’t need to make “false choices” between the interests of AI company executives and the people who make up the workforce. “Listening is fundamental, balancing everyone’s interest, but the goal has to be to do the most good for the most people. And to me, that is where a policymaker will always tend to land.”
California state Senator Scott Wiener made similar arguments about his hotly contested state-level bill, SB 1047. The bill, which would have required whistleblower protections and safeguards against potentially disastrous events at large AI companies, made it all the way to Gov. Gavin Newsom’s desk before being vetoed, with companies like OpenAI warning it would slow innovation. In August, Wiener argued that “we can advance both innovation and safety; the two are not mutually exclusive.” So far, however, lawmakers are struggling to strike that balance.