Women in AI: Sarah Myers West says we should ask, ‘Why build AI at all?’
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Sarah Myers West is managing director at the AI Now Institute, an American research institute that studies the social implications of AI and conducts policy research addressing the concentration of power in the tech industry. She previously served as senior adviser on AI at the U.S. Federal Trade Commission and is a visiting research scientist at Northeastern University, as well as a research contributor at Cornell’s Citizens and Technology Lab.
Briefly, how did you get your start in AI? What attracted you to the field?
I’ve spent the last 15 years interrogating the role of tech businesses as powerful political actors as they emerged on the front lines of international governance. Early in my career, I had a front-row seat observing how U.S. tech businesses showed up around the world in ways that changed the political landscape — in Southeast Asia, China, the Middle East and elsewhere — and I wrote a book delving into how industry lobbying and regulation shaped the origins of the surveillance business model for the internet, despite technologies that offered alternatives in theory but failed to materialize in practice.
At many points in my career, I’ve wondered, “Why are we getting locked into this very dystopian vision of the future?” The answer has little to do with the tech itself and a lot to do with public policy and commercialization.
That’s pretty much been my project ever since, both in my research career and now in my policy work as co-director of AI Now. If AI is a part of the infrastructure of our daily lives, we need to critically examine the institutions that are producing it, and make sure that as a society there’s sufficient friction — whether through regulation or through organizing — to ensure that it’s the public’s needs that are served at the end of the day, not those of tech businesses.
What work are you most proud of in the AI field?
I’m really proud of the work we did while at the FTC, which is the U.S. government agency that, among other things, is at the front lines of regulatory enforcement of artificial intelligence. I loved rolling up my sleeves and working on cases. I was able to use my methods training as a researcher to engage in investigative work, since the toolkit is essentially the same. It was gratifying to get to use those tools to hold power directly to account, and to see this work have an immediate impact on the public, whether that’s addressing how AI is used to devalue workers and drive up prices or combating the anti-competitive behavior of big tech businesses.
We were able to bring on board a fantastic team of technologists working under the White House Office of Science and Technology Policy, and it’s been exciting to see the groundwork we laid there have immediate relevance with the emergence of generative AI and the importance of cloud infrastructure.
What are some of the most pressing issues facing AI as it evolves?
First and foremost, AI technologies are widely in use in highly sensitive contexts — in hospitals, in schools, at borders and so on — but remain inadequately tested and validated. This is error-prone technology, and we know from independent research that those errors are not distributed equally; they disproportionately harm communities that have long borne the brunt of discrimination. We should be setting a much, much higher bar. But as concerning to me is how powerful institutions are using AI — whether it works or not — to justify their actions, from the use of weaponry against civilians in Gaza to the disenfranchisement of workers. This is a problem not of the tech but of discourse: how we orient our culture around tech and the idea that if AI’s involved, certain choices or behaviors are rendered more ‘objective’ or somehow get a pass.
What is the best way to responsibly build AI?
We need to always start from the question: Why build AI at all? What necessitates the use of artificial intelligence, and is AI technology fit for that purpose? Sometimes the answer is to build better, and in that case developers should be ensuring compliance with the law, robustly documenting and validating their systems and making open and transparent what they can, so that independent researchers can do the same. But other times the answer is not to build at all: We don’t need more ‘responsibly built’ weapons or surveillance technology. The end use matters to this question, and it’s where we need to start.