“The most successful computer vision algorithms of the last few years,” says Chen Sun, “were all built on top of supervised deep learning, where big data and powerful computing devices are the key factors. I’m interested in making machines perceive and learn more naturally, like humans.” Currently a staff research scientist at Google, Chen has just joined Brown University’s Department of Computer Science as a visiting assistant professor, and he’ll become a full assistant professor next July. He’s the newest hire in the multi-year CS With Impact campaign, the largest expansion in Brown CS history.
One of the reasons why Chen is hoping to change the way machines learn is personal. “When I’m joking with friends,” he says, “I tell them that I have one major goal before I retire: automating everything in my life as much as possible and building myself a robot assistant!” Self-driving cars have been a longtime interest, and one of his recent efforts in this area was an attempt to improve their decision-making through behavior prediction.
In his youth, Chen was an avid videogamer but didn’t find himself immediately drawn to computer science. “I wasn’t quite the standard ‘good student’ according to East Asian culture,” he says. “I often got bored during classes and fell asleep. On the other hand, I worked hard on the topics I was fascinated with.” That atypical experience has provided a lot of insight into pedagogy: “It made me believe that every student is unique and has their own way of learning, and a great teacher can provide a diverse set of tools for different students to choose from and learn most effectively.”
Not learning to code in high school made for a stressful start to college, and it was a year and a half before CS felt comfortable. The two turning points were an algorithms class taught by Professor Xiaoming Sun, whom Chen describes as his role model for teaching, and two computer systems classes in which Chen wrote a compiler and then a mini operating system. “The labs were so nicely designed,” he says, “that after finishing the course I had the skills to build larger software systems and apply them to research projects.”
Chen graduated from Tsinghua University with a Bachelor’s degree in Computer Science, then went on to earn his PhD at the University of Southern California, advised by Professor Ram Nevatia. Video understanding was one of the first subjects to catch his eye. It remains his focus, but Chen says that his relationships with other experts keep his interests broad: “Computer science and AI in general are really interdisciplinary, and computer vision, natural language processing, machine learning, and even cognitive science are all connected – I draw inspiration from people in those areas, and I want to help them as well.”
As Chen explains it, humans are easily able to distill rich and highly complex information from numerous kinds of videos (documentaries, instructional videos, movies) without supervision, but the process is much more difficult for machines. Inspired in part by his love of cooking alongside his wife, one of Chen’s recent projects (described in a paper and an accompanying blog post) used an algorithm that automatically discovered human actions and object state transitions by watching people perform tasks in cooking videos and listening to their narration.
“Humans observe the world and then actively explore it,” he says. “I want robots to help us cook, but I want them to learn by watching YouTube, playing around in my kitchen, and asking me for tips, then apply skills that they learned virtually in the real world. I want to close the loop between their perception and their taking action.”
Also critical, Chen says, is for machines to learn to anticipate multiple possible futures of human behavior, whether that means thinking about varying game strategies or different ways for autonomous vehicles to drive. “That ability to anticipate the future is critical for robots to interact with us,” he says. “It determines whether a self-driving car should yield to pedestrians or not.”
“In the long run,” Chen tells us, “I’d like to develop algorithms that could discover structured, abstract knowledge automatically from multimodal data, such as videos, websites, speech or music. Such algorithms could lead to smarter software, such as an automatic editor for the video of Brown CS faculty singing to graduates. Of course, the ultimate goal is to transfer such knowledge to a robot that can cook me breakfast, drive me to work, and vacuum the house.”
While that robotic helper is taking shape, Chen is happy to be landing somewhere that reminds him very much of his own undergraduate experience. “I’m getting started as an educator, and Brown is a great place for me,” he says. “The undergraduate teaching assistant and undergraduate research assistant programs at Brown CS are so strong, and I’m really looking forward to getting inspiration from students. The opportunities for collaboration are everywhere – I just read a paper on compositionality by Ellie Pavlick, and it was full of ideas that I could transfer to my research. I love that she’s just down the hall.”
Advising is one of the things that Chen’s looking forward to most: “I really want students to prosper. Taking my experience, helping them be successful – that’s really fulfilling for me. I can’t wait.”
For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.