by Juan Siliezar (Communications Manager and Writer for the Physical Sciences)
Its roots date back to ancient Greek philosophers. The advent of computers in the mid-20th century made the concept a bit more concrete. And now it’s here to stay.
That was the overarching message about artificial intelligence during a Thursday, Nov. 3, panel discussion in which three AI scholars at Brown University shared their expertise.
“AI seems like it has popped up out of nowhere, but it's actually been under the surface for a long time,” said Assistant Professor of Computer Science Stephen Bach. “And now that it's popped up… keep an eye out.”
The discussion, titled “Let’s Chat About ChatGPT,” welcomed a packed audience in Brown’s Stephen Robert ’62 Campus Center. It helped separate the myths from the truths about AI tools and demystify their inner workings — nuances often lost amid sensational headlines — including the deep history of AI, how such systems are built, what they promise and what they actually deliver.
“Artificial intelligence is very much a very old ambition,” said panelist Ellie Pavlick, an assistant professor of computer science and linguistics at Brown. “You can argue [it goes] as far back as Plato and Socrates, but definitely at least to the ’50s, [which] was the heyday of — ‘We now have computers. Is it going to be possible to replicate human-level intelligence in a non-human thing?’”
The panel was part of an Office of the Provost initiative titled “Conversations on AI and our data-driven society,” hosted in partnership with Brown’s Data Science Institute. It was the first event in the new series, which will offer monthly discussions about the opportunities and impact that AI technology presents to higher education and the world well beyond.
“We are just beginning to understand the ways that AI will impact the way we teach and learn at Brown and, more broadly, society,” said University Provost Francis J. Doyle III, who moderated the event. “Charting a path for the future of AI at Brown requires a strategic approach that cuts across all fields of scholarship. These campus-wide discussions about the impact of AI on our teaching and research practices are vitally important, as we try to build an understanding of the promises and limitations of AI.”
Thursday’s discussion came at a timely moment. It was just under a year ago that ChatGPT made its debut, igniting worldwide conversations about AI-powered chatbots and technologies that are now playing out not only around kitchen tables, but also in the halls of Washington. Earlier this week, U.S. President Joe Biden issued a wide-ranging executive order aimed at safeguarding against threats posed by artificial intelligence, including generative AI — which can often take the form of “deep fakes,” or artificially generated false images and videos.
The executive order and other proposed solutions — such as adding watermarks to images generated by AI — were a major talking point during the panel, which also featured Suresh Venkatasubramanian, a professor of computer science and director of Brown’s Center for Technological Responsibility, Reimagination and Redesign.
Panelists spoke about how important early precautionary steps can be, especially if they get at the root of what really needs to be addressed.
“They’re all steps forward, but we're going to have to wait and see how they play out,” said Venkatasubramanian, who served as an advisor to the White House Office of Science and Technology Policy and helped author a “Blueprint for an AI Bill of Rights,” which Biden’s recent order drew from. “The solution to problems of technology is to recognize that the problems of technology are the problems of technology and society, and that we need a broad socio-technical approach to thinking about this, which means we need to understand how people interact with technology… As long as we're doing that, then these approaches will be part of the solution.”
Another major thread in the discussion was the question of how much trust to invest in AI. All three panelists advised caution as the technology emerges into the mainstream. But it wasn’t all doom and gloom — the panelists spoke about the positive impacts AI may have, including in research and teaching at Brown. Bach, for instance, researches how to more easily teach AI tools to perform new tasks. He spoke about projects that involve scholars from fields including medicine and physics.
“We have people who are not necessarily computer scientists who have concepts and tasks that they want the [AI or machine learning] model to understand,” Bach said. “One of the things we look at is social media messages and conversations among adolescents who are at risk for things like self-harm. If we want to understand that right now, it requires a great deal of expertise to annotate all of that data, but maybe AI or machine learning could help with that. It could draw better insights, if we could scale it up.”
During an audience Q&A session, the panelists fielded questions ranging from how they’ve seen AI surface in classrooms to how to address the biases such programs can exhibit, like racial discrimination in facial recognition technology.
Asked to offer a closing thought on AI technology worth remembering, Venkatasubramanian summed it up best: “AI is not magic,” he reminded the audience.
The next event in the series is slated for Thursday, Nov. 30, and will focus on authorship in the age of AI.