
Avrim Blum Gives The 23rd Annual Kanellakis Memorial Lecture


    The Paris C. Kanellakis Memorial Lecture, a tradition of more than two decades, honors a distinguished computer scientist who was an esteemed and beloved member of the Brown CS community. Paris came to Brown in 1981 and became a full professor in 1990. His research area was theoretical computer science, with emphasis on the principles of database systems, logic in computer science, the principles of distributed computing, and combinatorial optimization. He died in an airplane crash on December 20, 1995, along with his wife, Maria Teresa Otoya, and their two young children, Alexandra and Stephanos Kanellakis.

    Each year, Brown CS invites one of the field's most prominent scientists to address wide-ranging topics in honor of Paris. Last month, Avrim Blum, Professor and Chief Academic Officer of the Toyota Technological Institute at Chicago (TTIC), delivered the twenty-third annual Paris C. Kanellakis Memorial Lecture: “Robustly-reliable learners for unreliable data”.

    In his opening remarks, Brown CS faculty member Philip N. Klein welcomed Avrim and noted his many honors, which fittingly include the Association for Computing Machinery (ACM) Paris Kanellakis Theory and Practice Award, given for specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing.

    Taking the stage, Avrim began his talk by explaining that global concern about adversarial attacks on machine learning (ML) systems, often ignored in ML’s infancy, has grown tremendously as the technology has become ubiquitous. He identified two main kinds of attacks: data poisoning attacks, in which a small number of corrupted points are added to the training data to cause the learner to make specific mistakes, and test-time attacks, in which innocuous-looking adversarial perturbations to test points cause bizarre errors.

    As a theoretician, Avrim explained, he’s drawn to the question this raises: when can we be confident in our predictions in the presence of such unreliable data? He focused first on clean-label poisoning attacks, in which attackers can often cause desired misclassifications merely by adding correctly-labeled data to a training set.

    “To what extent,” he asked, “is this issue the fault of the complexity of the deep network or could this be an issue even for the kind of simple learning rules that we think we understand, like linear separators?”

    Avrim continued with a theoretical analysis of clean-label poisoning attacks, distinguishing between two adversarial goals: creating a high overall error rate and causing specific targeted errors. There’s a fundamental theoretical difference between the two, he maintained. Although some safe cases exist, one challenge is that even in low-complexity settings where uniform convergence holds and all consistent classifiers have low true error, the adversary may still be able to move the error region to cause specific misclassifications.
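    To make the clean-label phenomenon concrete, here is a small illustrative sketch (a toy example invented for this article, not one taken from the lecture) using numpy and scikit-learn: a near-hard-margin linear SVM is trained on one-dimensional data whose true concept is sign(x - 1), and adding two correctly-labeled negative points near the true boundary shifts the learned max-margin separator just enough to flip a chosen target point. All numbers, and the choice of an SVM as the learner, are assumptions made for illustration.

        # Toy illustration of a clean-label poisoning attack on a max-margin
        # linear separator. Every point, including the poison points, is labeled
        # correctly by the true concept, yet adding them shifts the learned
        # boundary enough to flip a chosen target. (Illustrative numbers only.)
        import numpy as np
        from sklearn.svm import SVC

        def true_label(x):
            # True concept: positive iff the feature exceeds 1.0
            return 1 if x > 1.0 else -1

        # Clean training set: negatives near 0, positives near 4.
        X_clean = np.array([[0.0], [0.1], [3.9], [4.0]])
        y_clean = np.array([true_label(x) for x in X_clean.ravel()])

        # Target test point the adversary wants misclassified (true label +1).
        x_target = np.array([[2.4]])

        # Poison points: correctly labeled negatives placed just inside the
        # true negative region, dragging the max-margin boundary past 2.4.
        X_poison = np.array([[0.99], [0.98]])
        y_poison = np.array([true_label(x) for x in X_poison.ravel()])

        for name, X, y in [
            ("clean", X_clean, y_clean),
            ("poisoned", np.vstack([X_clean, X_poison]),
                         np.concatenate([y_clean, y_poison])),
        ]:
            clf = SVC(kernel="linear", C=1e6).fit(X, y)  # ~hard-margin SVM
            pred = int(clf.predict(x_target)[0])
            print(f"{name:>8}: target predicted {pred:+d} "
                  f"(true label {true_label(x_target[0, 0]):+d})")

    On the clean data the learned boundary sits near 2.0 and the target at 2.4 is classified correctly; after the two clean-label poison points are added, the boundary moves to roughly 2.45 and the same target is misclassified.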

    How should security researchers respond? Ideally, Avrim said, we want a robustly-reliable learner: one that takes a training sample and outputs a classifier which, for each test point, produces a prediction and, whenever possible, a guarantee that the prediction is correct under well-specified assumptions. It’s a notion that provides a principled approach to certified correctness in the face of poisoning and test-time attacks.
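    As a rough sketch of that idea (a drastic simplification, not the construction presented in the talk), consider one-dimensional threshold classifiers under clean-label poisoning: because every added point must still be correctly labeled, any test point on which all thresholds consistent with the training data agree can safely be certified, while points in the disagreement region receive an abstention rather than a confident mistake. The function name and the specific data below are invented for illustration.

        # Minimal sketch of a "certify or abstain" predictor for 1D thresholds,
        # label(x) = +1 iff x > t for an unknown threshold t. Under clean-label
        # poisoning the true threshold remains consistent with all training data,
        # so a prediction agreed on by every consistent threshold is guaranteed
        # correct; otherwise we abstain. (A simplification, not the lecture's
        # construction.)
        import numpy as np

        def robustly_reliable_predict(X_train, y_train, x):
            """Return (+1/-1, 'certified') or (None, 'abstain') for test point x."""
            negatives = X_train[y_train == -1]
            positives = X_train[y_train == +1]
            # Thresholds consistent with the (correctly labeled) data lie between
            # the largest negative and the smallest positive example.
            lo = negatives.max() if len(negatives) else -np.inf
            hi = positives.min() if len(positives) else np.inf
            if x > hi:
                return +1, "certified"  # every consistent threshold says +1
            if x <= lo:
                return -1, "certified"  # every consistent threshold says -1
            return None, "abstain"      # consistent thresholds disagree on x

        X_train = np.array([0.0, 0.1, 0.98, 0.99, 3.9, 4.0])  # includes poison points
        y_train = np.array([-1, -1, -1, -1, +1, +1])

        for x in [0.5, 2.4, 4.5]:
            print(x, robustly_reliable_predict(X_train, y_train, x))

    On the poisoned data from the sketch above, the target at 2.4 falls in the disagreement region and is met with an abstention instead of a confidently wrong label, while points such as 0.5 and 4.5 come with a certificate.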

    “We want algorithms,” Avrim concluded, “that tell you why you should be confident in what they’re saying.”

    A recording of the lecture is available here.

    For more information, click the link that follows to contact Brown CS Communications Manager Jesse C. Polhemus.