Started at Google Zurich in 2012, designing conversational interfaces and smart displays. Led Duplex on the Web, using ML to automate web browsing for everyday tasks. Before that, shaped Google Maps personal context features and the original Google Trips.
Now at Google Cambridge, focusing on Android System UI and intelligence. The work spans screen context, input methods, and text/image understanding. Less about individual apps, more about making the entire system aware and adaptive.
Assistant-mediated fulfillment through automated web browsing. Using ML to reduce friction for everyday tasks.
Leading design and integration of intelligence across Android OS. Screen context, input methods, text and image understanding.
On-device AI summarization of messaging threads using Gemini Nano. Privacy-first design with no data sent to the cloud.
Led early conceptual design and established the system framework for the first Google Assistant smart displays.
Managed the design process and led UX across iPhone and Android. Personalized travel guide pulling reservations from Gmail.
Framework for interfaces that rise and settle based on user context, cognitive load, and task complexity.
Real-time Bayesian inference engine for signals intelligence logging. Camera-first interface with brutal simplicity.
Claude skill for comprehensive health data import and analysis. Pattern recognition across training and recovery.
iOS pushup counting app with Apple Health integration. Computer vision tracking with real-time form feedback.