Click the links that follow for more news about other accomplishments by Brown CS students.
AAAI-20 is the thirty-fourth AAAI Conference on Artificial Intelligence, one of the world's most prominent international conferences on the subject. Held this year in New York from February 7-12, AAAI promotes theoretical and applied AI research as well as intellectual interchange among researchers and practitioners. This year, the conference also included an Undergraduate Consortium (AAAI-UC), at which Brown CS undergraduates Jessica Dai and Pazia Bermudez-Silverman presented their recent research. Both were advised by Sarah M. Brown, a postdoctoral research associate at Brown's Data Science Initiative.
"My work," Jessica says, "is about how to understand and interpret the performance of 'fair' machine learning algorithms, particularly in situations where the data used for training a model is unrepresentative of the real-world domain in which it will be applied. My research finds that simply changing the demographics of datasets (e.g., to reflect real-world proportions) has unpredictable effects on 'fair' algorithms, and proposes a model for understanding 'ML bias' that encompasses historical or systemic inequity; poor data collection, sampling, or labeling; or any combination of these potential 'sources of unfairness'."
AAAI-UC offered undergraduate students an opportunity to enrich their conference experience by (1) presenting and receiving critical feedback about their work in a professional, academic setting; (2) meeting prospective graduate advisors; (3) receiving mentoring about the advantages (and disadvantages) of pursuing graduate studies in AI as well as practical early career advice; (4) expanding their professional network to include AI experts, current graduate students, and undergraduate peers; and (5) receiving advice, tools, and resources for successfully applying to and attending graduate school in an AI-related field.
Pazia explains that her research focuses on the harm (racial, gender-based, class-based, and so on) that AI models and algorithms are causing in our society. "While these models," she says, "are used in many areas, including employment, housing, and welfare, I am focusing particularly on AI systems used in criminal justice, including predictive policing, recidivism, and facial recognition algorithms, which have all been shown to be racially biased. My work synthesizes previous analyses of this urgent topic and recommendations to make change in this area, including auditing these systems, spreading awareness, and putting pressure on those using them. It also demonstrates how these algorithms feed each other and contribute to both a culture of poverty and structural racism."
Their applications were reviewed according to criteria that included evidence of significant personal contributions to an AI research project, assessment of contribution to and benefit from participating in the Undergraduate Consortium, and input from their advisors. The acceptance rate for applications was only 16 percent.
For more information, click the link that follows to contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.