Serena Booth Joins Brown CS As Assistant Professor
- Posted by Jesse Polhemus
- on Nov. 8, 2024

Serena Booth earned a doctorate in CS at MIT in 2023 and currently works in the U.S. Senate as an AAAS AI Policy Fellow. In the fall of 2025, she’ll join Brown CS as assistant professor. Serena is one of the four latest hires in the multi-year CS With Impact campaign, our largest expansion to date.
“Coming to work on AI policy was a slow burn for me,” she explains. “Before grad school, I worked at Google, and the small policy questions I was interfacing with began to gnaw at me. Think about the search query ‘dinosaurs’. Parents filed a bunch of bug reports because they’d searched for dinosaurs and pterodactyl wasn’t showing up. But there’s a problem: pterodactyl is a pterosaur, not a dinosaur! Should Google provide fallacious results? A question like this is mostly insignificant, but the broader questions of how information is accessed and consumed quickly become gnarly. These gnarly questions need policy interventions, and ultimately that drove me toward this path.”
In her research, Serena works in the fields of human-AI interaction and AI safety, with robotics as an area of inspiration. She thinks about how people design, or should design, specifications for AI systems and robots; how to help people learn the capabilities and limitations of these systems; and how to support them in revising their specifications. This loop, Serena believes, requires mutual teaching and learning between the human and the AI system.
Policy, she assures us, is a critical component of this research agenda. “We need to think about how to govern AI systems, and how to ensure we balance the benefits and risks to society and to those who are most likely to be harmed by the development of these systems,” she says. “We need policy solutions: this is why I currently work in the Senate, and why AI policy will be an equally integral part of my research program at Brown.”
Serena’s research career started a little over a decade ago, with an unusual job at Disney Research during a gap year between high school and college. “Disney Research taught me about behavioral research,” she says, “which is a major component of my research process even today.”
At Disney, Serena contributed to a team studying hotel towel use patterns at Disney’s Paris and Orlando theme parks: “We designed interventions to try to increase hotel guests’ reuse of their towels, and we were studying whether people were changing their behavior in response to these interventions. In this early research experience, I learned to appreciate how challenging the problem of studying people is. Designing high-quality experiments is intricate work, and the data you get back is always so noisy.”
Arriving at Harvard University for her undergraduate studies, Serena was planning on a career in the sciences: “I was raised around mathematicians, so studying computer science was not a surprise to my family or others around me. I picked CS over other disciplines because I received incredible mentorship as an undergrad. I want to pay that forward, and I think Brown CS has a similar caring approach to education.”
Serena wrote an undergraduate senior thesis at Harvard that’s still one of her best-known works. She set out to determine whether we place too much trust in robotic systems, and to test part of this question, she devised a scheme to get a robot into student dorms without authorization. She disguised the robot as a food delivery agent and had it ask students to let it into the dorms. She found that people on their own were a bit nervous and suspicious, but groups of people readily complied with the robot’s request. The project captured the imaginations of WIRED, Science Friday, Motherboard, and others, as well as the popular webcomics PhD Comics and Soonish.
Serena’s interest in AI and robotics accelerated at MIT, where she completed her doctorate. One of her favorite research contributions from her time in grad school looks at how experts design reward functions. “There’s dogma in reinforcement learning,” she says, “that reward functions – the specifications for these systems – are given, and our goal as RL scientists is just to ‘solve’ that environment. But that’s patently false. So I studied the practice of reward function design, and I uncovered that experts make systematic errors in their design. Moving forward, I want to create mechanisms to support people with these thorny design tasks: that’s a big part of my research agenda.”
At this point in a career that’s ranged from Disney Research to Apple to Google, does Serena see herself deliberately stepping away from industry?
“I picture myself turning more toward academia and governance in my own work,” she says, “because they provide a beautiful vantage point for making the kind of contributions I want to make. But I think academics often view work in industry as less intellectually fulfilling, and I strongly dispute that. I’m eager to mentor students who are choosing between careers in industry, academia, and government. That last option – government – is often overlooked in computing. I want to use my experience to help people consider all the possibilities, and to assess where they might have the most impact.”
“For me,” Serena adds, “advocacy means something quite specific. It means going to our systems of power, finding people who have the levers of change, and bringing them concrete proposals. I think I’m well positioned to help organize us not just as a department or a university but as a computing community, leveraging government at all levels, to improve the diversity of our field. We need to bring people into the field who didn’t grow up around mathematicians, and to support them every step of the way.”
While the work of changemaking goes on, Serena leaves time for fun: “My partner and I ride a tandem bike. In Cambridge, we became part of the fabric of the city since our community so often saw us out on rides, and I hope we bring that energy to Providence. We also have a dog, a Nova Scotia Duck Tolling Retriever. So, if I’m not in lab, I’m probably out throwing sticks for Ducki.”
At the end of our time together, I ask Serena to look back at her 2017 paper that received so much attention from scientists and laypeople alike. Seven years later, what’s different about the world we’re living in?
“Thinking about trust in robotic and AI systems is still just as hard and unsolved,” she says. “There’s a common refrain that we need calibrated trust in these systems, and the question of how to calibrate trust, and how much control humans should maintain in these interactions with AI systems, is still a guiding question in my research. Questions of trust, of when and how we should collaborate with computers and when we should let them solve our problems – all of these remain and need significant research.”
Serena says she’s proud of the two years she spent at MIT as a Socially Responsible Computing Scholar because reflecting on social and ethical responsibilities comes instinctively to her.
“As soon as you start thinking about the interactions between humans and computers,” she says, “you consider everything that can go wrong, and you start picturing dystopian science fiction. How do we make technology beneficial for more people?”
For more information, contact Brown CS Communications Manager Jesse C. Polhemus.