Jacob (Jake) Beck is a Brown CS alum, MS and BS, and former researcher under the supervision of Michael Littman. In between his two degrees from Brown, Jake worked at Microsoft Research on developing memory for intelligent agents. His investigation found that existing approaches were highly sensitive to random and irrelevant information, or noise, often present during the training of a Reinforcement Learning (RL) agent, especially at the beginning of training. He published research at ICLR 2020 entitled “AMRL: Aggregated Memory for Reinforcement Learning”, proposing a framework to address this problem. Loosely, the solution can be conceptualized as taking snapshots of your working memory, which holds information about your short-term past, and storing all of these short-term memories without regard to the order in which they occurred. Technically, the framework takes the states of existing memory models throughout time and aggregates them into a single memory state that is less sensitive to noise. Evaluated in Minecraft, the AMRL framework demonstrated strong empirical results on RL tasks with long-term dependencies on the sequence of events. Beyond this framework, the research provides insights for the analysis and development of new approaches to memory in the future.
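The aggregation idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; it simply shows, with hypothetical hand-picked state vectors, how an order-invariant aggregator such as element-wise max combines per-step memory states into one summary that does not change if events arrive in a different order.

```python
def elementwise_max(states):
    """Aggregate equal-length state vectors by element-wise max.

    Max is commutative, so the result is invariant to the order
    in which the per-step states occurred.
    """
    agg = list(states[0])
    for s in states[1:]:
        agg = [max(a, b) for a, b in zip(agg, s)]
    return agg


# Hypothetical memory states produced by some recurrent model at three steps.
states = [
    [0.1, 0.9, -0.2],
    [0.4, 0.3, 0.5],
    [-0.3, 0.7, 0.0],
]

memory = elementwise_max(states)
reordered = elementwise_max(states[::-1])  # same events, reversed order

assert memory == [0.4, 0.9, 0.5]
assert memory == reordered  # order-invariant aggregation
```

Because the aggregate depends only on the set of states, a noisy or shuffled prefix of the episode cannot scramble the stored memory in the way it can for a purely sequential recurrent state.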
You can find a video presentation and the full paper here: Paper [ICLR].
For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.