Lauren Moos

Principal ML Engineer

Bio

Lauren Moos is a Principal Scientist with experience in machine learning research and enterprise software development. At AWS, she worked on core algorithms for Kinesis and SageMaker Ground Truth, including Amazon's general anomaly detection algorithm. At Special Circumstances, she led DARPA-funded work on the AIMEE, REMATH, and HARDEN programs, which focused on modeling and assuring computer programs with machine learning. She has published work on LLM reasoning and particle accelerator diagnostics and is currently developing FPGA synthesis tools.

Abstract

This lightning talk will discuss the opportunities and limitations of integrating program runtime information with large language models (LLMs), and how this relates more generally to LLM reasoning capabilities. We will also explore a new method that combines LLMs with reinforcement learning to pragmatically and efficiently fine-tune models on the runtime information most salient to the programmer's existing beliefs about the code they are seeking to understand or generate. Attendees will leave with the ability to add runtime information to their hosted models for code generation or analysis, through both fine-tuning and prompt engineering, in a matter of hours.
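
The talk's own method is not spelled out in this abstract, so the following is only a minimal illustrative sketch of the general idea: capture per-line runtime information (executed source lines and local variables) from a function and fold it into a prompt for a code-analysis model. The names trace_runtime and midpoint, and the prompt template, are hypothetical, and the final call to a hosted LLM is left as a comment.

    import linecache
    import sys

    def trace_runtime(func, *args, **kwargs):
        """Run `func`, recording each executed line and its local variables."""
        events = []

        def tracer(frame, event, arg):
            # Only record line events inside the function we care about.
            if event == "line" and frame.f_code is func.__code__:
                src = linecache.getline(frame.f_code.co_filename, frame.f_lineno).strip()
                events.append((frame.f_lineno, src, dict(frame.f_locals)))
            return tracer

        sys.settrace(tracer)
        try:
            result = func(*args, **kwargs)
        finally:
            sys.settrace(None)
        return result, events

    def midpoint(lo, hi):
        # Small example function whose execution we want to expose to the model.
        span = hi - lo
        return lo + span // 2

    if __name__ == "__main__":
        result, events = trace_runtime(midpoint, 10, 21)

        # Fold the captured runtime information into a prompt for a hosted model.
        trace_text = "\n".join(
            f"line {lineno}: {src}    locals={locs}" for lineno, src, locs in events
        )
        prompt = (
            "You are analyzing the function `midpoint`.\n"
            "Runtime trace from one execution:\n"
            f"{trace_text}\n"
            f"Return value: {result}\n"
            "Explain what the function computes and note anything surprising."
        )
        print(prompt)  # send `prompt` to whichever hosted LLM endpoint you use

The same trace text could also be serialized as (code, trace, explanation) examples for supervised fine-tuning; the reinforcement-learning selection of which runtime facts are most salient is the subject of the talk and is not sketched here.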