Brown CS And HCRI Faculty Advise On Forthcoming AI Legislation
- Posted by Jesse Polhemus
- on Nov. 2, 2017
The judicial systems of eight states use a computational process sometimes referred to as algorithmic sentencing, which relies on algorithms to assess the risk that a defendant will commit future crimes. This assessment then guides judges as they make sentencing decisions. The practice has recently come under fire from critics who argue that it's flawed and unreliable, reflecting racial biases and exacerbating societal disparities.
Rhode Island isn't one of the eight, and when a constituent approached State Representative Aaron Regunberg with concerns about algorithmic sentencing being used in the Ocean State, he knew he needed to act.
"It shocked me at first that we had another thing to worry about," he says. "I'd been thinking about the positive and negative effects that artificial intelligence and automation can have from mostly an employment perspective, and this was really eye-opening. This impacts everyday people who have no knowledge of what's going on or any voice in it."
Looking for expert advice, Aaron turned to the faculty of Brown University's Department of Computer Science (Brown CS) and Humanity-Centered Robotics Initiative (HCRI), sharing draft legislation ("An Act Relating to AI") that's believed to be the nation's first state-level AI safety legislation. "The goal," he explains, "is to make sure that if AI is being used by state or local government (and we're starting with a focus on the criminal justice system), we let people know about it and make sure there's an appeal process with human decision-making involved."
Peter Haas, Associate Director of HCRI, and Professors Amy Greenwald, George Konidaris, Michael Littman, and Bertram Malle reviewed the draft and offered comments and recommendations on the technical language in the bill. "Proprietary AI systems," Peter says, "are being deployed without considering the potential for bias or even cogency of decision making. Right now it's very hard to understand why a machine learning algorithm made a decision. The machine can't explain itself."
The results, he says, are extremely far-reaching: "This should be concerning to everybody as these systems are being deployed in areas of healthcare, financial services, and even criminal justice. It's one thing if a research algorithm misidentifies a husky as a wolf, another if a self-driving car mistakes a baby carriage for a shopping cart or a computer denies your Medicaid claim. Transparency in AI is important to consider as we deploy these systems in society."
Aaron is currently revising the text based on feedback from Peter and his Brown CS/HCRI colleagues, and he hopes to bring an updated version to the General Assembly later this fall. There's a sense of urgency, he says:
"The goal of my legislative work in general is to build the kind of government and economy that work for everyone, and it was so helpful to have Brown's incredibly insightful, deep well of knowledge to fill in the gaps for what I need to know. They're real experts, and I've learned a lot from them. Technology is affecting more and more of how we live and work, and policy can't just sit still; it has to keep up. It has to move forward."
For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.