
George Konidaris Wins A Salomon Award To Express Prior Knowledge To Reinforcement Learning Agents

Click the links that follow for more news about George Konidaris, the Richard B. Salomon Faculty Awards, and other recent accomplishments by our faculty.

Brown CS Professor George Konidaris has just received a Richard B. Salomon Faculty Research Award. This honor, given annually by Brown's Office of the Vice President for Research, was established to support excellence in scholarly work by funding selected faculty research projects of exceptional merit, with preference given to junior faculty who are building their research portfolios.

Asked to situate his work, George explains that there's a key difference between humans and the type of artificial learners that use a machine learning paradigm known as reinforcement learning, or RL. Typically, RL is accomplished tabula rasa: learning occurs from scratch, without any information or prior biases about desirable behavior, or how the world works.

"By contrast," he says, "humans bring a wealth of prior knowledge about the world to each new task. Injecting such background knowledge into RL is therefore a highly promising approach to matching human learning efficiency. However, the question of how to integrate background knowledge remains unanswered: there exists no simple, complete, and standardized way for a human to help an RL agent learn by giving it useful information about the task it is facing."

As a step toward that answer, George's research proposes to design and implement RLang, a domain-specific declarative programming language designed to express anything that a human may wish to tell an RL agent. RL tasks are typically defined as Markov Decision Processes, or MDPs, which are composed of precisely defined mathematical objects. The semantics of RLang can ground to information about each of those objects: the result is a clean, simple, and natural syntax with formal semantics that cover the domain of interest. RLang can therefore serve as a direct means of giving information to an agent, and also (more ambitiously) as a target for automatically translating natural language instructions to a semantic logical form. Although there have been scattered ad hoc attempts to specify how advice should be provided to an agent about an isolated component of an MDP, no expressive and complete language currently exists.
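To make the idea of "grounding advice to MDP components" concrete, here is a minimal, hypothetical Python sketch. It does not show RLang's actual syntax or API; the grid world, the names (GridState, allowed_actions), and the specific piece of advice are all illustrative assumptions, meant only to show how a human-supplied constraint can attach to one component of an MDP (here, the action set) before any learning takes place.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical illustration only: this is not RLang syntax. It sketches the idea
# of grounding a piece of human advice to one component of a Markov Decision
# Process (states, actions, transitions, rewards).

@dataclass(frozen=True)
class GridState:
    x: int
    y: int

ACTIONS = ["up", "down", "left", "right"]

def transition(state: GridState, action: str) -> GridState:
    """Deterministic 5x5 grid-world dynamics (toy example)."""
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[action]
    return GridState(min(max(state.x + dx, 0), 4), min(max(state.y + dy, 0), 4))

def reward(state: GridState) -> float:
    """Sparse reward: 1 at the goal cell, 0 everywhere else."""
    return 1.0 if (state.x, state.y) == (4, 4) else 0.0

# Advice grounded to the action component of the MDP: the human tells the
# agent that moving left in the first column is never useful.
def allowed_actions(state: GridState) -> List[str]:
    if state.x == 0:
        return [a for a in ACTIONS if a != "left"]
    return ACTIONS

if __name__ == "__main__":
    s = GridState(0, 0)
    print("Advice-restricted actions at", s, "->", allowed_actions(s))
```

An RL agent that consults a constraint like this during exploration searches a smaller space of behaviors than a tabula rasa learner, which is the kind of efficiency gain the project aims for; RLang's goal, as described above, is to express such advice about any MDP component in one standardized language rather than through one-off mechanisms like this.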

"Our initial prototyping of RLang," George says, "has led to preliminary experimental results that suggest that RLang-enabled agents are capable of vastly outperforming traditional reinforcement learning agents in the same tasks."

A full list of awardees is available here. George joins multiple previous Brown CS winners, including (most recently) Anna Lysyanskaya, Ritambhara Singh, Malte Schwarzkopf, and Theophilus A. Benson.

For more information, click the link that follows to contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.