Elinor Poole-Dayan


I just completed my Master’s at the MIT Center for Constructive Communication at the Media Lab, supervised by Deb Roy. I did my Bachelor’s in Honours Math and Computer Science at McGill University in Montreal, Canada, where I did research with Siva Reddy at Mila (Quebec AI Institute).

I’m passionate about equitable, pluralistic, and safe AI for the benefit of all. My research interests include evaluating LLMs for fairness and safety, pluralistic alignment, and developing equitable, human-centered AI.

In my Master’s thesis, I developed an LLM-powered framework for analyzing how ideas evolve into policy recommendations in deliberative assemblies and how dialogue shapes collective stances and voting behavior. I identified key drivers behind shifts in delegate support and introduced LLM-based methods to reconstruct opinion-change dynamics directly from transcripts. In doing so, I demonstrated how LLMs can surface deliberative mechanisms that are often hidden from view, pointing toward tools that enhance transparency and fairness in real-world decision-making.

My background is in mathematics, computer science, and natural language processing (NLP), with a focus on addressing harmful biases in language models. I also have experience with vision-and-language models such as Stable Diffusion. Beyond my academic pursuits, I am enthusiastic about linguistics, playing ultimate frisbee, and nurturing a growing collection of house plants.

Feel free to reach out if you’d like to chat about any of the above!

news

Jun 2025 I completed the Kaufman Teaching Certificate Program at MIT’s Teaching + Learning Lab! (See more here.)
May 2025 I graduated from MIT with a Master of Science!
May 2025 I finished my Master’s thesis! It is titled “From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies.”
Mar 2025 I will be presenting at the Workshop on Narrative Understanding at NAACL 2025!
Dec 2024 I presented my work at the Safe Generative AI Workshop at NeurIPS 2024!

selected publications

  1. LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users
    Elinor Poole-Dayan, Deb Roy, and Jad Kabbara
    NeurIPS 2024
  2. On the Relationship between Truth and Political Bias in Language Models
    Suyash Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, and Jad Kabbara
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  3. An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
    Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022