About me

I like to say that I am the policy person in the tech discussion and the tech person in the policy one. I build practical bridges between research, engineering, product, policy, and public-interest outcomes.

At Twitter, I witnessed the profound impact algorithms could have on culture, conversations, and public life, and that experience led me to bring social questions front and center in my technical work. While there, I co-founded and served as research lead for Twitter’s Machine Learning Ethics, Transparency, and Accountability team, developing and auditing systems that revealed the hidden assumptions within algorithmic design and the second-order effects of digital platforms on society. These approaches to responsible AI and algorithmic auditing informed company-wide understanding of AI governance and societal impact.

Today, I am the AI Safety Lead at Spring Health, where I focus on making artificial intelligence safer to use in mental health care. I am also a Non-Resident Research Fellow with the AI Security Initiative at the UC Berkeley Center for Long-Term Cybersecurity. My work bridges AI safety and mental health, examining the role and limits of AI systems, such as chatbots, in mental health journeys.

Recently, I was a UC Berkeley Tech Policy Fellow and a Visiting AI Fellow at the U.S. National Institute of Standards and Technology (NIST). As a Berkeley Tech Policy Fellow, I published What’s in an Algorithm?, an essay on “nutrition labels” for recommender systems that became the basis for my forthcoming book on the subject. At NIST, I co-led the Red Teaming effort for the Generative AI Public Working Group and contributed to international standards through work with ISO. In 2023, I was selected by the European Commission as an expert for the delegated act under Article 40 of the Digital Services Act, advising on recommender systems and algorithmic transparency.

My research now explores how algorithmic systems shape what people see and engage with online, how feedback loops form and reinforce behavior, and how these same dynamics affect mental health. I also study how we can audit such systems to understand and reduce unintended harm, focusing on the hidden assumptions that underpin digital design and the broader societal consequences of AI-driven decision-making.

If you are working on policy, standards, or products and want a clear, technically grounded point of view, I would love to talk.