Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

1–15 of 10,505 publications
Closing the AI generalisation gap by adjusting for dermatology condition distribution differences across clinical settings
Rajeev Rikhye
Aaron Loh
Grace Hong
Margaret Ann Smith
Vijaytha Muralidharan
Doris Wong
Michelle Phung
Nicolas Betancourt
Bradley Fong
Rachna Sahasrabudhe
Khoban Nasim
Alec Eschholz
Basil Mustafa
Jan Freyberg
Terry Spitz
Kat Chou
Peggy Bui
Justin Ko
Steven Lin
The Lancet eBioMedicine (2025)
Background: Generalisation of artificial intelligence (AI) models to a new setting is challenging. In this study, we seek to understand the robustness of a dermatology AI model and whether it generalises from telemedicine cases to a new setting including both patient-submitted photographs (“PAT”) and clinician-taken photographs in-clinic (“CLIN”).
Methods: We conducted a retrospective cohort study involving 2500 cases previously unseen by the AI model, including both PAT and CLIN cases, from 22 clinics in the San Francisco Bay Area, spanning November 2015 to January 2021. The primary outcome measure for the AI model and dermatologists was the top-3 accuracy, defined as whether their top 3 differential diagnoses contained the top reference diagnosis from a panel of dermatologists per case.
Findings: The AI performed similarly on PAT and CLIN images (74% top-3 accuracy in CLIN vs. 71% in PAT); however, dermatologists were more accurate on PAT images (79% in CLIN vs. 87% in PAT). We demonstrate that demographic factors were not associated with AI or dermatologist errors; instead, several categories of conditions were associated with AI model errors (p < 0.05). Resampling CLIN and PAT to match skin condition distributions to the AI development dataset reduced the observed differences (AI: 84% CLIN vs. 79% PAT; dermatologists: 77% CLIN vs. 89% PAT). We demonstrate a series of steps to close the generalisation gap, requiring progressively more information about the new dataset, ranging from the condition distribution to additional training data for rarer conditions. When using additional training data and testing on the dataset without resampling to match AI development, we observed comparable performance from end-to-end AI model fine-tuning (85% in CLIN vs. 83% in PAT) vs. fine-tuning solely the classification layer on top of a frozen embedding model (86% in CLIN vs. 84% in PAT).
Interpretation: AI algorithms can be efficiently adapted to new settings without additional training data by recalibrating the existing model, or with targeted data acquisition for rarer conditions and retraining just the final layer.
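As a rough illustration of the lightest-touch adaptation described above (retraining only the classification layer on a frozen embedding model), the sketch below fits a linear classifier on precomputed embeddings and reports top-3 accuracy. The file names and arrays are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: retrain only a classification layer on frozen embeddings and
# report top-3 accuracy. The embedding files below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import top_k_accuracy_score

# Hypothetical precomputed embeddings from a frozen dermatology encoder.
train_emb = np.load("train_embeddings.npy")   # shape (n_train, d)
train_lbl = np.load("train_labels.npy")       # integer condition labels
test_emb = np.load("clin_embeddings.npy")     # e.g. a CLIN evaluation split
test_lbl = np.load("clin_labels.npy")

# Only this classifier is trained; the embedding model itself stays frozen.
clf = LogisticRegression(max_iter=1000)
clf.fit(train_emb, train_lbl)

probs = clf.predict_proba(test_emb)
print("top-3 accuracy:", top_k_accuracy_score(test_lbl, probs, k=3, labels=clf.classes_))
```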
Quantum Simulation of Chemistry via Quantum Fast Multipole Transform
Dominic Berry
Kianna Wan
Andrew Baczewski
Elliot Eklund
Arkin Tikku
arXiv:2510.07380 (2025)
Here we describe an approach for simulating quantum chemistry on quantum computers with significantly lower asymptotic complexity than prior work.
The approach uses a real-space first-quantised representation of the molecular Hamiltonian which we propagate using high-order product formulae.
Essential for this low complexity is the use of a technique similar to the fast multipole method for computing the Coulomb operator with $\widetilde{\cal O}(\eta)$ complexity for a simulation with $\eta$ particles. We show how to modify this algorithm so that it can be implemented on a quantum computer. We ultimately demonstrate an approach with $t(\eta^{4/3}N^{1/3} + \eta^{1/3}N^{2/3})(\eta N t/\epsilon)^{o(1)}$ gate complexity, where $N$ is the number of grid points, $\epsilon$ is the target precision, and $t$ is the duration of time evolution.
This is roughly a speedup by ${\cal O}(\eta)$ over most prior algorithms.
We provide lower complexity than all prior work for $N<\eta^6$ (the only regime of practical interest), with only first-quantised interaction-picture simulations providing better performance for $N>\eta^6$. However, we expect the algorithm to have large constant factors that are likely to limit its practical applicability.
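For intuition on the stated gate complexity (an editorial illustration derived from the formula above, not a claim of the paper): setting the two terms equal, $\eta^{4/3}N^{1/3} = \eta^{1/3}N^{2/3}$ gives $\eta = N^{1/3}$, i.e. $N = \eta^{3}$, so the $\eta^{4/3}N^{1/3}$ term dominates for $N < \eta^{3}$ and the $\eta^{1/3}N^{2/3}$ term dominates for $N > \eta^{3}$.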
Scalability of Generative AI Models: Challenges and Opportunities in Large-Scale Data Generation and Training
International Journal of Computer Science and Information Technology Research (IJCSITR) (2025)
Validation of Quantum Elliptic Curve Point Addition Circuits
(2025) (to appear)
Specific quantum algorithms exist that could, in theory, break elliptic curve cryptographic protocols. Implementing these algorithms requires designing quantum circuits that perform elliptic curve arithmetic. To accurately judge a cryptographic protocol’s resistance against future quantum computers, researchers work out minimal resource-count circuits for performing these operations while still being correct. To assure the correctness of a circuit, it is integral to restore all ancilla qubits used to their original states. Failure to do so could result in decoherence of the computation’s final result. Through rigorous classical simulation and unit testing, I surfaced four inconsistencies in the state-of-the-art quantum circuit for elliptic curve point addition where the circuit diagram states the qubits are returned in the original (|0⟩) state, but the intermediate values are not uncomputed. I provide fixes to the circuit without increasing the leading-order gate cost.
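A toy, hedged illustration of the uncomputation requirement discussed above (not the paper's circuit): simulating reversible logic on classical bits, an ancilla that holds an intermediate AND must be uncomputed so it ends in its original |0⟩ state.

```python
# Hedged toy example of compute / copy-out / uncompute on classical bits;
# the gates below only model the classical action of Toffoli and CNOT.
def toffoli(a, b, t):
    """Reversible AND: flip target t when both controls are 1."""
    return t ^ (a & b)

def cnot(c, t):
    """Flip target t when control c is 1."""
    return t ^ c

for a in (0, 1):
    for b in (0, 1):
        ancilla, out = 0, 0
        ancilla = toffoli(a, b, ancilla)   # compute a AND b into the ancilla
        out = cnot(ancilla, out)           # copy the result to an output bit
        ancilla = toffoli(a, b, ancilla)   # uncompute: ancilla must return to 0
        assert ancilla == 0 and out == (a & b)
print("ancilla restored to 0 on all inputs")
```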
Deep Researcher with Test-time Diffusion
Guan Sun
Zoey CuiZhu
Yuanjun (Sophia) Bi
Weiming Wen
Hui Wan
Chunfeng Wen
Solène Maître
George Lee
Vishy Tirumalashetty
Emily Xue
Burak Gokturk
2025
Deep research agents, powered by Large Language Models (LLMs), are rapidly advancing; yet, their performance often plateaus when generating complex, long-form research reports using generic test-time scaling algorithms. Drawing inspiration from the iterative nature of human research, which involves cycles of searching, reasoning, and revision, we propose the Test-Time Diffusion Deep Researcher (TTD-DR). This novel framework conceptualizes research report generation as a diffusion process. TTD-DR initiates this process with a preliminary draft, an updatable skeleton that serves as an evolving foundation to guide the research direction. The draft is then iteratively refined through a "denoising" process, which is dynamically informed by a retrieval mechanism that incorporates external information at each step. The core process is further enhanced by a self-evolutionary algorithm applied to each component of the agentic workflow, ensuring the generation of high-quality context for the diffusion process. This draft-centric design guides the report writing process to be more timely and coherent while reducing information loss during the iterative search process. We demonstrate that our TTD-DR achieves state-of-the-art results on a wide array of benchmarks that require intensive search and multi-hop reasoning, significantly outperforming existing deep research agents.
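A hedged sketch of the draft-then-"denoise" loop described above; llm() and search() are hypothetical stubs standing in for an LLM call and a retrieval backend, not the paper's API.

```python
# Hedged sketch of a draft-centric, retrieval-informed refinement loop.
# llm() and search() are hypothetical stubs, not the paper's implementation.
def llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:60]}...]"        # stub

def search(query: str) -> str:
    return f"[retrieved evidence for: {query[:60]}...]"  # stub

def deep_research(question: str, num_steps: int = 5) -> str:
    # Start from an updatable skeleton draft that guides the research direction.
    draft = llm(f"Write a preliminary report draft for: {question}")
    for _ in range(num_steps):
        # Retrieval is conditioned on the current draft, so each step targets
        # its weakest or least-supported sections.
        query = llm(f"Propose a search query to improve this draft:\n{draft}")
        evidence = search(query)
        # "Denoising": revise the draft with the newly retrieved context.
        draft = llm(f"Revise the draft using this evidence:\n{evidence}\n\n{draft}")
    return draft

print(deep_research("How do contrails affect climate?"))
```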
Matryoshka Model Learning for Improved Elastic Student Models
Cho-Jui Hsieh
Chetan Verma
Inderjit Dhillon
Xin Liu
Wen Chen
Ngot Bui
Yang Zhang
2025
Production machine learning models in the industry are often developed with a primary focus on maximizing model quality. However, these models must ultimately operate within the resource constraints of their serving infrastructure, including limitations on compute, memory and bandwidth. The rapid evolution of serving hardware, particularly with advancements in accelerator technology, necessitates periodic retraining to leverage newer, more efficient infrastructure. This cyclical retraining process is resource-intensive, demanding significant model development time and incurring substantial training costs. This challenge is further amplified by the trend towards increasingly complex models, which inherently require greater computational resources for training and deployment. While prior work has explored techniques like supernet sub-model extraction to address training efficiency, a critical gap remains: the efficient generation of a spectrum of high-quality models from an existing production model, a common requirement in diverse industrial applications. To bridge this gap, we introduce a novel approach leveraging a "Teaching Assistant" (TA) model, derived from a given production model (referred to as the Student model). We demonstrate that through co-training the Student and TA models with Matryoshka structure while using online distillation, we not only enhance the Student model’s performance but also enable the flexible creation of a model family offering a compelling trade-off between model quality and model size.
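A minimal, hedged sketch of the general idea (not the paper's implementation): nested "Matryoshka" prefixes of a shared representation each get a classification head, and an online-distillation loss ties every nested slice to a jointly trained TA model. Widths, shapes, and the toy data are arbitrary.

```python
# Hedged sketch: Matryoshka-style nested widths with online distillation from a
# jointly trained TA model. All sizes and data here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden, num_classes, widths = 256, 10, (64, 128, 256)   # nested prefix widths (assumed)

backbone = nn.Sequential(nn.Linear(32, hidden), nn.ReLU())    # shared student trunk
heads = nn.ModuleDict({str(w): nn.Linear(w, num_classes) for w in widths})
ta_model = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))

x = torch.randn(8, 32)
y = torch.randint(0, num_classes, (8,))

h = backbone(x)
ta_logits = ta_model(x)                      # TA is trained jointly ("online" distillation)
loss = F.cross_entropy(ta_logits, y)
for w in widths:
    logits = heads[str(w)](h[:, :w])         # each width uses a prefix slice of the features
    loss = loss + F.cross_entropy(logits, y)
    loss = loss + F.kl_div(                   # distill the TA into each nested sub-model
        F.log_softmax(logits, dim=-1),
        F.softmax(ta_logits.detach(), dim=-1),
        reduction="batchmean",
    )
loss.backward()
```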
MaRVL-QA: A Benchmark for Mathematical Reasoning over Visual Landscapes
Nilay Pande
Sahiti Yerramilli
Jayant Tamarapalli
Rynaa Grover
(2025)
A key frontier for Multimodal Large Language Models (MLLMs) is the ability to perform deep mathematical and spatial reasoning directly from images, moving beyond their established success in semantic description. Mathematical surface plots provide a rigorous testbed for this capability, as they isolate the task of reasoning from the semantic noise common in natural images. To measure progress on this frontier, we introduce MaRVL (Mathematical Reasoning over Visual Landscapes), a new benchmark designed to quantitatively evaluate these core reasoning skills. The benchmark comprises two novel tasks: Topological Counting, identifying and enumerating features like local maxima; and Transformation Recognition, recognizing applied geometric transformations. Generated from a curated library of functions with rigorous ambiguity filtering, our evaluation on MaRVL reveals that even state-of-the-art MLLMs struggle significantly, often resorting to superficial heuristics instead of robust spatial reasoning. MaRVL provides a challenging new tool for the research community to measure progress, expose model limitations, and guide the development of MLLMs with more profound reasoning abilities.
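A hedged sketch of the kind of ground truth behind the Topological Counting task: counting local maxima of a sampled surface z = f(x, y). The function and grid below are illustrative choices, not the benchmark's curated library.

```python
# Hedged illustration: count local maxima of a sampled surface numerically.
import numpy as np

xs = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(xs, xs)
Z = np.sin(X) * np.sin(Y)          # example surface with a few peaks (assumed function)

# A grid point is a local maximum if it is strictly greater than all
# 8 neighbours (borders excluded for simplicity).
center = Z[1:-1, 1:-1]
is_max = np.ones_like(center, dtype=bool)
for di in (-1, 0, 1):
    for dj in (-1, 0, 1):
        if di == 0 and dj == 0:
            continue
        neighbour = Z[1 + di:Z.shape[0] - 1 + di, 1 + dj:Z.shape[1] - 1 + dj]
        is_max &= center > neighbour
print("local maxima:", int(is_max.sum()))
```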
Julia's strengths in mathematical computation and execution performance make it a popular choice across scientific fields. It is a language of choice for implementing new numerical algorithms, but it really shines in modelling for optimisation thanks to JuMP.jl and MathOptInterface.jl.
These libraries are, first and foremost, made for mathematical optimisation (linear, mixed-integer, conic, etc.), yet they are now generic enough to support more paradigms, such as constraint programming. This talk will introduce the basic principles behind the current implementation of JuMP.jl and explain why and how they are very good matches for modelling using constraint programming… and solving using any kind of mixed-integer-programming solver.
Constraint-programming solvers can also be implemented using linear programming, in a great collaboration between discrete and continuous optimisation. This talk will briefly explain the connection and its implementation in Google’s CP-SAT, a leading, award-winning constraint solver that uses linear programs in its solving process — a solver that will soon be available in Julia too.
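As a hedged, minimal taste of constraint-programming modelling, the sketch below uses the Python API of CP-SAT, the Google solver mentioned above (rather than JuMP.jl itself); the toy model of three all-different digits summing to 15 is an arbitrary illustration, and an analogous JuMP.jl model would look similar.

```python
# Hedged toy constraint-programming model using OR-Tools CP-SAT's Python API.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
digits = [model.NewIntVar(1, 9, f"d{i}") for i in range(3)]
model.AddAllDifferent(digits)        # a classic constraint-programming global constraint
model.Add(sum(digits) == 15)
model.Maximize(digits[0])

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(d) for d in digits])
```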
Mitigating Clinician Information Overload: Generative AI for Integrated EHR and RPM Data Analysis
Shashank Kapoor
Aman Raj
IEEE Compsac 2025 (2025)
Generative AI (GenAI), particularly Large Language Models (LLMs), offers powerful capabilities for interpreting the complex data landscape in healthcare. In this paper, we present a comprehensive overview of the capabilities, requirements and applications of GenAI for deriving clinical insights and improving clinical efficiency. We first provide background on the forms and sources of patient data, namely real-time Remote Patient Monitoring (RPM) streams and traditional Electronic Health Records (EHR). The sheer volume and heterogeneity of this combined data present significant challenges to clinicians and contribute to information overload.
In addition, we explore the potential of LLM-powered applications for improving clinical efficiency. These applications can enhance navigation of longitudinal patient data and provide actionable clinical decision support through natural language dialogue. We discuss the opportunities this presents for streamlining clinician workflows and personalizing care, alongside critical challenges such as data integration complexity, ensuring data quality and RPM data reliability, maintaining patient privacy, validating AI outputs for clinical safety, mitigating bias, and ensuring clinical acceptance. We believe this work represents the first summarization of GenAI techniques for managing clinician data overload due to combined RPM / EHR data complexities.
ESAM++: Efficient Online 3D Perception on the Edge
Qin Liu
Lavisha Aggarwal
Vikas Bahirwani
Lin Li
Aleksander Holynski
Saptarashmi Bandyopadhyay
Zhengyang Shen
Marc Niethammer
Ehsan Adeli
Andrea Colaco
2025
Online 3D scene perception in real time is critical for robotics, AR/VR, and autonomous systems, particularly in edge computing scenarios where computational resources are limited. Recent state-of-the-art methods like EmbodiedSAM (ESAM) demonstrate the promise of online 3D perception by leveraging a 2D visual foundation model (VFM) with efficient 3D query lifting and merging. However, ESAM depends on a computationally expensive sparse 3D U-Net for point cloud feature extraction, which we identify as the primary efficiency bottleneck. In this paper, we propose a lightweight and scalable alternative for online 3D scene perception tailored to edge devices. Our method introduces a 3D Sparse Feature Pyramid Network (SFPN) that efficiently captures multi-scale geometric features from streaming 3D point clouds while significantly reducing computational overhead and model size. We evaluate our approach on four challenging segmentation benchmarks (ScanNet, ScanNet200, SceneNN, and 3RScan), demonstrating that our model achieves competitive accuracy with up to 3× faster inference and a 3× smaller model size compared to ESAM, enabling practical deployment in real-world edge scenarios. Code and models will be released.
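A heavily simplified, dense toy sketch of a 3D feature pyramid; the SFPN described above operates on sparse point-cloud features and would use a sparse-convolution library in practice. All channel counts and shapes are arbitrary assumptions.

```python
# Hedged, dense stand-in for a 3D feature pyramid (not the paper's sparse SFPN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN3D(nn.Module):
    def __init__(self, in_ch=16, mid_ch=32):
        super().__init__()
        # Bottom-up pathway: progressively coarser 3D feature maps.
        self.down1 = nn.Conv3d(in_ch, mid_ch, 3, stride=2, padding=1)
        self.down2 = nn.Conv3d(mid_ch, mid_ch, 3, stride=2, padding=1)
        # Lateral 1x1x1 convolutions feeding the top-down pathway.
        self.lat1 = nn.Conv3d(mid_ch, mid_ch, 1)
        self.lat2 = nn.Conv3d(mid_ch, mid_ch, 1)

    def forward(self, x):
        c1 = F.relu(self.down1(x))          # 1/2 resolution
        c2 = F.relu(self.down2(c1))         # 1/4 resolution
        p2 = self.lat2(c2)
        # Top-down pathway: upsample coarse features and fuse with the finer level.
        p1 = self.lat1(c1) + F.interpolate(p2, size=c1.shape[2:], mode="nearest")
        return p1, p2                        # multi-scale geometric features

feats = TinyFPN3D()(torch.randn(1, 16, 32, 32, 32))
print([f.shape for f in feats])
```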
Participatory AI Considerations for Advancing Racial Health Equity
Andrea G. Parker
Jatin Alla
Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI) (2025) (to appear)
Online-EYE: Multimodal Implicit Eye Tracking Calibration for XR
Baosheng James Hou
Lucy Abramyan
Prasanthi Gurumurthy
Khushman Patel
Haley Adams
Andrea Colaco
Ken Pfeuffer
Hans Gellersen
Karan Ahuja
2025
Unlike other inputs for VR that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal-input approach for implicit calibration of eye trackers in VR, leveraging UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interactions with UI elements and employs ML techniques to continuously refine the calibration matrix without interrupting users' current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye tracking accuracy comparable to native, explicit calibration.
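A hedged sketch of the core idea (not the paper's ML pipeline): a clicked UI element's position serves as a weak label for where the user was looking, so paired raw-gaze/target samples collected in the background can refit an affine calibration by least squares. The data below are synthetic.

```python
# Hedged illustration: background refit of an affine gaze calibration from
# UI-interaction weak labels. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
true_A = np.array([[1.05, 0.02], [-0.03, 0.97]])   # unknown distortion (simulated)
true_b = np.array([0.8, -1.2])

targets = rng.uniform(-10, 10, size=(200, 2))              # clicked UI element centres
raw_gaze = (targets - true_b) @ np.linalg.inv(true_A).T    # what the tracker reports
raw_gaze += rng.normal(scale=0.1, size=raw_gaze.shape)     # sensor noise

# Fit gaze -> target as an affine map with ordinary least squares.
X = np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)
calibrated = X @ W
print("mean calibration error:", np.linalg.norm(calibrated - targets, axis=1).mean())
```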
Safe Coding: Rigorous Modular Reasoning about Software Safety (Extended Version)
Google Security Engineering (2025) (to appear)
Toward Community-Led Evaluations of Text-to-Image AI Representations of Disability, Health, and Accessibility
Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) (2025)
Responsible AI advocates for user evaluations, particularly when concerning people with disabilities, health conditions, and accessibility needs (DHA), wide-ranging but umbrellaed sociodemographics. However, community-centered evaluations of text-to-image AI (T2I) are often researcher-led, situating evaluators as consumers. We instead recruited 21 people with diverse DHA to evaluate T2I by writing and editing their own T2I prompts with their preferred language and topics, in a method mirroring everyday use. We contribute user-generated terminology categories which inform future research and data collections, necessary for developing authentic scaled evaluations. We additionally surface yet-undiscussed DHA AI harms intersecting race and class, and participants shared harm impacts they experienced as image-creator evaluators. To this end, we demonstrate that prompt engineering, proposed as a misrepresentation mitigation, was largely ineffective at improving DHA representations. We discuss the importance of evaluator agency to increase ecological validity in community-centered evaluations, and opportunities to research iterative prompting as an evaluation technique.
Benchmarking and improving algorithms for attributing satellite-observed contrails to flights
Vincent Rudolf Meijer
Rémi Chevallier
Allie Duncan
Kyle McConnaughay
Atmospheric Measurement Techniques, 18 (2025), pp. 3495-3532
Condensation trail (contrail) cirrus clouds cause a substantial fraction of aviation's climate impact. One proposed method for the mitigation of this impact involves modifying flight paths to avoid particular regions of the atmosphere that are conducive to the formation of persistent contrails, which can transform into contrail cirrus. Determining the success of such avoidance maneuvers can be achieved by ascertaining which flight formed each nearby contrail observed in satellite imagery. The same process can be used to assess the skill of contrail forecast models. The problem of contrail-to-flight attribution is complicated by several factors, such as the time required for a contrail to become visible in satellite imagery, high air traffic densities, and errors in wind data. Recent work has introduced automated algorithms for solving the attribution problem, but it lacks an evaluation against ground-truth data. In this work, we present a method for producing synthetic contrail detections with predetermined contrail-to-flight attributions that can be used to evaluate – or “benchmark” – and improve such attribution algorithms. The resulting performance metrics can be employed to understand the implications of using these observational data in downstream tasks, such as forecast model evaluation and the analysis of contrail avoidance trials, although the metrics do not directly quantify real-world performance. We also introduce a novel, highly scalable contrail-to-flight attribution algorithm that leverages the characteristic compounding of error induced by simulating contrail advection using numerical weather models. The benchmark shows an improvement of approximately 25 % in precision versus previous contrail-to-flight attribution algorithms, without compromising recall.
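A hedged toy version of the attribution setup (not the paper's algorithm): advect flight positions with wind for the time since emission, attribute each contrail detection to the nearest advected flight, and score precision against synthetic ground truth. All numbers are illustrative.

```python
# Hedged toy attribution benchmark: synthetic detections with known source flights,
# nearest-neighbour attribution after wind advection, and a precision score.
import numpy as np

rng = np.random.default_rng(1)
n_flights = 20
flight_pos = rng.uniform(0, 500, size=(n_flights, 2))   # km, positions at emission time
wind = np.array([30.0, -10.0])                            # km/h advecting wind
dt = 1.0                                                  # hours until detection

# Synthetic "detections": each contrail is its source flight's position advected
# by the wind, plus error (e.g. wind-field uncertainty compounding over time).
true_flight = rng.integers(0, n_flights, size=15)
detections = flight_pos[true_flight] + wind * dt + rng.normal(scale=5.0, size=(15, 2))

# Attribution: advect every flight and pick the nearest one for each detection.
advected = flight_pos + wind * dt
dists = np.linalg.norm(detections[:, None, :] - advected[None, :, :], axis=-1)
predicted = dists.argmin(axis=1)

print("precision:", (predicted == true_flight).mean())
```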