Randall Balestriero Joins Brown CS As Assistant Professor
- Posted by Jesse Polhemus on Aug. 19, 2024
This summer, Randall Balestriero has joined Brown CS as assistant professor. He’s one of the four latest hires in the multi-year CS With Impact campaign, our largest expansion to date.
Visitors to Randall’s personal web page may find it somewhat atypical for a young academic, who is so often required to self-promote. The first words that greet the viewer’s eye are a research area: Practical Deep Learning Theory. Randall’s name appears lower, in a sidebar.
The reasoning, he says, is less about humility than the importance of good science and his excitement for a rapidly-evolving field: “My website is about conveying my research, not my personal views. In a new field like AI, there’s a danger that if someone influential says that a particular direction is useless, new researchers might abandon it for no reason other than someone’s opinion. I want people to follow my proofs, my research, and make their own informed decisions.”
“But to me,” Randall says, “the newness of AI is also the most exciting thing. In more established areas such as statistics, many problems have been considered solved for decades. That view, which is often short-sighted, discourages young researchers from diving in and trying things out. In AI, everything is still up for grabs, and this is why so many exciting methods keep emerging year over year. There are no barriers, and nobody cares where you came from – it’s about real-world impact.”
Born in France, Randall remembers his delight at receiving a computer as a childhood gift from his parents, then tinkering with C++ in middle school. He studied first at Toulon University, then at Pierre et Marie Curie University and École Normale Supérieure in Paris, where research opportunities for undergraduates were few, but he was fortunate to find an interesting side project with Professor Hervé Glotin, who taught graph theory: using AI to analyze underwater recordings and detect the presence of marine mammals, from whales to dolphins.
At the time, Randall says, there was much less interest in AI: “There was no stable infrastructure, but we were seeing how computers could learn on their own, much faster and more efficiently than humans. You start dreaming: what if it could do other things much better than us, like cure disease through AI-designed drugs? I was a bit naive, of course, but that kind of utopian passion drew me in.”
After completing a PhD at Rice University and a postdoctoral position at Meta/Facebook AI Research, Randall is now working, he explains, on problems that we were barely aware of at the start of the AI revolution.
“Even if you’re not an expert,” he says, “I want my research to enable you to use an autonomous AI pipeline and get an answer from your data. For this to happen, we need to understand more theoretically and with more grounded math what AI systems are really doing, and we need to make them more reliable. For example, if you’re polling people to estimate a population’s opinion, the way that you sample matters a lot: you need to talk to people from different areas with different backgrounds, jobs, education. I want AI to reach a similar level of awareness when using your data to produce a prediction, but we don’t have the mathematical tools – that’s what I’m working toward.”
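Randall’s polling analogy can be made concrete with a small simulation. The sketch below is purely illustrative and not drawn from his research; the regions, agreement rates, and variable names are hypothetical. It compares a convenience poll that only reaches one subgroup against a stratified poll that samples every region, showing how the sampling scheme alone changes the estimate.

```python
import random

random.seed(0)

# Hypothetical population: the rate of "agree" answers differs by region.
regions = {"urban": 0.70, "suburban": 0.50, "rural": 0.30}
population = [
    (region, random.random() < rate)
    for region, rate in regions.items()
    for _ in range(10_000)
]
true_rate = sum(agree for _, agree in population) / len(population)

# A convenience poll that only reaches urban respondents.
urban_only = [agree for region, agree in population if region == "urban"]
convenience_poll = random.sample(urban_only, 300)

# A stratified poll that samples 100 respondents from each region.
stratified_poll = []
for region in regions:
    group = [agree for r, agree in population if r == region]
    stratified_poll.extend(random.sample(group, 100))

print(f"true agreement rate: {true_rate:.2f}")
print(f"urban-only poll:     {sum(convenience_poll) / len(convenience_poll):.2f}")
print(f"stratified poll:     {sum(stratified_poll) / len(stratified_poll):.2f}")
```

Running the sketch, the urban-only poll lands near 0.70 while the stratified poll stays close to the true rate of about 0.50 – the kind of sampling awareness Randall wants AI pipelines to have about the data they are given.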
For the layperson who might find the idea of autonomous AI daunting, what does Randall recommend?
“We’re in a place,” he says, “where I think everyone should at least stay informed about how an AI solution could affect their workplace. Its first impact in the real world will be a much more dynamic workforce. But when it comes to AGI [Artificial General Intelligence], remember that there’s a lot of embellished communication in industry because their market capitalization depends on it. We look at something like an autocorrect feature and have a tendency to think of it as AGI, but currently it’s really more of a very effective information retriever than something that’s reflecting internally on your question.”
Randall is drawn to the Brown CS Socially Responsible Computing program, he says, because he’s realistic about AI and ML’s potential for creating misinformation and inequity: “If you create a method that requires a huge amount of compute, that on its own is a bias and is harmful for society in the long term. I want people to be able to use my work and build upon it even if their only resource is a laptop. Another challenge is that current ML models typically look at aggregated statistics with no fine-grained analysis of how they might perform for, say, non-native speakers. When you maximize this aggregated metric, you introduce a lot of biases.”
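The point about aggregated metrics can be seen in a toy calculation. The sketch below is hypothetical (the groups and counts are invented for illustration): a model reports 95% aggregate accuracy while performing far worse on the smaller non-native-speaker group.

```python
# Hypothetical evaluation results: (group, answered_correctly) pairs.
results = (
    [("native", True)] * 920 + [("native", False)] * 30
    + [("non_native", True)] * 30 + [("non_native", False)] * 20
)

# The aggregated metric looks excellent...
overall = sum(correct for _, correct in results) / len(results)
print(f"aggregate accuracy: {overall:.1%}")  # 95.0%

# ...but a per-group breakdown reveals the gap the aggregate hides.
for group in ("native", "non_native"):
    subset = [correct for g, correct in results if g == group]
    print(f"{group:<11} accuracy: {sum(subset) / len(subset):.1%}")
```

Here the native-speaker group scores about 96.8% while the non-native group scores 60%, even though the single aggregate number suggests the model works well for everyone.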
When we ask Randall about an interesting phrase in his biography (“practical solutions from first principles”), he immediately turns his attention to the divide between theory and practice: “In deep learning specifically, there’s a huge gap. It’s really hard to mathematically prove anything, so people remove what makes their life harder in order to show provable results, but there’s no link between the proof and what’s used in practice. I don’t bother deriving a theorem if it won’t answer at least one question for a practitioner.”
Wanting to provide practical answers for fellow researchers is a major part of the reason why Randall has come to Brown. “If you can’t publish,” he says, “it kills the point. I want to publish, I want to help practitioners. For me, the way to stay up to date is to collaborate, and what job will give you the opportunity to collaborate with anyone? The answer is academia.”
Occasionally, Randall says, he’s toyed with some interesting startup ideas: “I’ve never made a startup happen, but I try and brainstorm, think about what’s useful for society – it’s a good exercise to keep your research informed.” He spends his spare time biking, skiing, playing tennis, traveling, and enjoying good food.
Returning to the subject of helping fellow researchers, Randall also speaks compellingly about what he owes his future students. “I try,” he says, “never to be disconnected from any part of a research project I’m involved with, whether it’s the experiment, the implementation, or the writing. I don’t want to talk about a student’s project and not understand every detail. It’s not micromanaging: they deserve me being hands-on and trying to help as much as possible. That’s something I really want to focus on.”
“And I want to add,” Randall concludes, “that my focus may be on AI in general and ML, but I’m always looking for interesting problems, and sometimes ones that are very down to earth. For me, saying that you only specialize in artificial intelligence is a bit reductive. I’m always happy to do research when the questions are interesting, even if they take me very far from AI.”
“As a scientist,” he says, “I try my best not to triage unanswered research questions. Rather, I strive to expand and leverage my expertise to come up with interesting and trustworthy solutions.”
For more information, contact Brown CS Communications Manager Jesse C. Polhemus.