Tuesday, January 20, 2026

Two University of Iowa engineers have won funding from the National Science Foundation to develop a theory that would improve the accuracy and speed of AI-generated images, videos, text, and speech.

Soura Dasgupta

Soura Dasgupta, F. Wendell Miller Distinguished Professor in the Department of Electrical and Computer Engineering, and Raghu Mudumbai, associate professor in the department, aim to create a direct, mathematically correct reverse process that addresses the inaccuracies of current diffusion-based generative AI and eases the computational burden these models impose.

Generative AI built on diffusion models has been successful at creating a wide range of content. These models learn patterns from the data they were trained on and involve two main steps: a forward process that adds noise to an image until it becomes pure static, and a reverse process that learns to gradually remove the noise until a clear image or other data sample is produced.
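To make those two steps concrete, the sketch below implements a generic discrete-time diffusion of the kind described above. It is a minimal illustration assuming a standard DDPM-style noise schedule and update rule; the noise predictor is a placeholder for the neural network a real system would train, and the code does not represent the Iowa researchers' new approach.

```python
import numpy as np

# Minimal sketch of a standard discrete-time diffusion (DDPM-style).
# The schedule, step count, and the stand-in "denoiser" are illustrative
# assumptions, not the method being developed under this award.

T = 1000                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)        # assumed noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t, rng):
    """Forward process: jump directly to step t by mixing the clean
    sample x0 with Gaussian noise (closed form of repeated noising)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, predicted_eps, rng):
    """One approximate reverse step: remove the predicted noise and,
    except at the final step, re-inject a small amount of fresh noise.
    In practice predicted_eps comes from a trained neural network."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * predicted_eps) / np.sqrt(alphas[t])
    if t > 0:
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # toy "image"
xT, _ = forward_noise(x0, T - 1, rng)     # forward: image -> near-pure static

# Reverse: iterate from static back toward a sample. A real model would
# predict the noise at each step; a zero predictor is used here only so
# the loop runs end to end.
x = xT
for t in reversed(range(T)):
    x = reverse_step(x, t, predicted_eps=np.zeros_like(x), rng=rng)
```

Because each reverse step is only an approximation and must be repeated many times, errors and computation accumulate over the full loop, which is the kind of inaccuracy and cost the proposed theory is meant to address.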

The award stems from a paper presented at a conference last month by Dasgupta and Brian Anderson from the Australian National University that establishes a theoretical framework for directly reversing a discrete-time diffusion. The framework aims to transform generative AI by avoiding the approximations inherent in existing approaches. Mudumbai and Dasgupta will develop this direct approach and adapt it to the demands of generative AI.
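As background on why those approximations arise in the first place, the note below uses generic notation (not taken from the Dasgupta-Anderson paper): the exact reverse of a single noising step depends on the unknown data distribution, so standard implementations substitute a learned approximation.

```latex
% Why existing reverse processes are approximate (generic notation,
% not taken from the Dasgupta--Anderson paper).
\begin{align*}
  x_t &= \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon_t,
        \qquad \epsilon_t \sim \mathcal{N}(0, I)
        && \text{(forward noising step)} \\
  q(x_{t-1} \mid x_t)
      &= \frac{q(x_t \mid x_{t-1})\, q(x_{t-1})}{q(x_t)}
        && \text{(exact reverse, via Bayes' rule)}
\end{align*}
% The exact reverse kernel depends on the unknown data distribution
% through q(x_{t-1}) and q(x_t), so standard methods learn a
% parameterized approximation p_\theta(x_{t-1} \mid x_t) instead.
```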

Raghu Mudumbai

One potential application would be to construct medically accurate segmentations of computed tomography (CT) images. CT is widely used in medicine to image complex bone fractures, severely eroded joints, and bone tumors.

“We are excited for the opportunity to develop a theory of reverse diffusions that could fundamentally alter the manner in which diffusion-based generative AI models are implemented,” Mudumbai says. 

The two-year, $299,000 award was made through the NSF's EArly-concept Grants for Exploratory Research (EAGER) program.