
Harini Suresh Joins Brown CS As Assistant Professor


    Harini Suresh has ambitious plans for her research in machine learning (ML) and human-computer interaction (HCI), decidedly broad in their scope, but she doesn’t intend to tackle them alone. “Can we imagine new futures for technology,” she asks, “that stem from the needs and desires of people who aren’t currently involved in the process? Would things unfold differently if we involve more people in a way that’s thoughtful and intentional, working together to build technology that reflects our shared values?” 

    Next fall, Harini joins Brown CS as assistant professor. She’s the latest hire in the multi-year CS With Impact campaign, our largest expansion to date. 

    After being drawn to what she calls “the very physical side of biology” in high school, growing cells and making solutions, Harini began her undergraduate studies at the Massachusetts Institute of Technology (MIT) with the intention of combining life science and computer science. But computation proved infectious.

    “I was already interested in data analysis and research on a broader scale than just running experiments,” Harini says, “and I just started seeing the applications, the potential that CS can bring to many different areas outside of biology. I continued to work in healthcare as an application, but also started learning about other impactful opportunities.”

    As she continued at MIT for a Master’s degree, her focus was broadening, but it was also about to take a noteworthy turn. Harini remembers a specific moment when she decided to put societal impact and fairness at the center of her research: “I was working in a lab that focused on ML for healthcare, wrapping up a project that used electronic health record data to predict the need for certain interventions. In one particular group meeting, the topic was predicting pain management strategies from doctors’ notes, and someone chimed in: be careful, because women and racial minorities are often perceived to be in less pain than they actually are, and that’s probably reflected in the clinical notes. This immediately struck me as something I knew was true – I just hadn’t yet connected the dots to realize how those kinds of social biases and norms deeply impacted the work I was doing. Pretty quickly after that I started reading and thinking a lot about the sociotechnical aspects of ML, and decided that I wanted that to be the focus of my PhD research.”

    Harini explains that her current work is motivated by the idea of involving diverse and marginalized viewpoints throughout the entire life cycle of ML or artificial intelligence (AI) systems, from problem conceptualization through data curation, algorithm development, deployment, evaluation, and updating.

    “It’s both retrospective and forward-looking,” she says. “Lots of amazing researchers have demonstrated how these systems can cause harm when they’re not built carefully, and I’m interested in continuing that as well, but I want to pair the auditing work with a prospective question: what are the currently overlooked people, contexts, or applications that we could begin supporting if we bring in more voices and re-envision the status quo processes?” 

    Machine learning has made daily news since the arrival of ChatGPT and its peers, and much of that coverage is admonitory, sometimes apocalyptically so. Is it daunting to have such a new field already needing to be keenly aware of its own potential for harm?

    “I don’t know that it is,” Harini says. “There’s so much auditing work that has been done with technology like airplanes or medical devices, and there are important differences with ML, but they have a lot to teach us: one of the tenets of my work is drawing from a broad range of domains, inside and outside of CS. There’s a lot of potential to pick out the parts of prior work from other fields that we should adapt to the new risks that ML carries.”    

    The idea of intersectionality, Harini explains, has been a useful and enlightening tool for her to evaluate those risks and understand the nuances of different problems: “Applying an intersectional lens to our work on helping activists monitor feminicide really helped us understand the similarities and differences of various groups and contexts we were working with. Missing Indigenous women and police violence against Black women stem from shared societal issues, but they also unfold quite differently. Using intersectionality analytically allows you to understand those kinds of differences and lets you create tech that’s aware of and built to support them.”

    Looking ahead to her time at Brown, Harini tells us that she’s excited to be at a liberal arts university, where the ethos and the emphasis on teaching match the work that she wants to do. That includes a focus on AI literacy, which she sees as crucial for a well-informed general population. She’s eager to reach out to colleagues across campus for multidisciplinary collaborations, and she’s interested in the opportunity to shape the teaching of HCI at Brown through new classes and curricula.

    “And I’m pretty excited about Providence as an artsy place!” Harini says. Long before her pursuits in biology and computer science, she was an avid writer, and she still enjoys creative writing, making pottery, and sketching or painting as a way of activating different parts of the brain than the ones that she uses in her research. 

    “I’m also really impressed by what Brown CS is already doing with socially responsible computing,” she notes, “making it a core tenet of computer science education instead of an add-on or an elective. I’d love to see more and more courses with final projects that feature community partnerships where students are working with non-profits or activist organizations to develop tools in collaboration with a broader range of stakeholders who already have goals related to social good. Everyone knows about the big tech companies, but I want students who are on the fence about whether CS is the right field for them to see the kind of societal impact they can have, to envision that future for themselves.”

    The next generation is also one of Harini’s largest sources of inspiration: “When I look around, I see so many young people who are taking independent initiative to learn about important issues, to learn things they’re passionate about and imagine different career paths and new kinds of activism.” 

    Though younger than she is, they sound like peers, maybe even potential collaborators: people like Harini, who are interested in designing a better future and working to bring it about.

    “I have a lot of hope,” she says, “that the next generation of kids and the next generation of computer scientists are open to the idea of not automatically accepting what they think they should do or what society thinks they should do.”
