New AI Method Unifies Sampling and Mapping

November 21, 2025 · 2 min read

Generative modeling is a cornerstone of modern AI, enabling everything from image creation to data simulation, but traditional methods often separate the steps of sampling noise and transforming it, which can lead to inefficiencies and complexity.

In a new study, researchers propose an alternative approach that ties sampling and mapping together using conjugate moment measures, aiming to streamline the process and yield more intuitive results on standard examples like Gaussians and low-dimensional distributions.

The methodology builds on moment measures, which state that for any measure μ there is a unique convex potential u such that μ = (∇u)♯ e^(−u). This identity links sampling from the log-concave distribution e^(−u) and pushing particles through the gradient map ∇u, but the factorization was found ill-suited for practical tasks in initial tests.

To address this, the team explored an alternative factorization in which the measure is expressed as (∇w*)♯ e^(−w), with w* being the convex conjugate of a convex potential w. They call this construction conjugate moment measures and show that it performs better in simple cases.
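In symbols, the two factorizations described above can be written side by side (notation reconstructed from the prose; ♯ denotes the pushforward of a measure):

```latex
% Moment measures: a unique convex potential u with
\mu = (\nabla u)_{\sharp}\, e^{-u}

% Conjugate moment measures: a convex potential w with
\mu = (\nabla w^{*})_{\sharp}\, e^{-w},
\qquad w^{*}(x) = \sup_{y}\ \langle x, y\rangle - w(y)
```

In both cases one samples from a log-concave density and transports the samples through a gradient map; the conjugate variant decouples the potential defining the density (w) from the potential defining the map (w*).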

Because ∇w* serves as the Monge map between the log-concave distribution e^(−w) and the target measure, the researchers used optimal transport solvers to develop an algorithm that recovers w from samples of the measure, parameterizing w as an input-convex neural network for scalability.
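The article does not include the authors' implementation, but the key ingredient, an input-convex neural network (ICNN), has a standard structure: hidden-to-hidden weights are kept non-negative and activations are convex and non-decreasing, which makes the scalar output convex in the input. A minimal sketch, assuming the widely used ICNN architecture (layer sizes and initialization here are illustrative, not from the paper):

```python
# Minimal input-convex neural network (ICNN) sketch in NumPy.
# Convexity in x follows because each layer is a non-negative
# combination of convex functions of x, passed through a convex,
# non-decreasing activation, plus an affine term in x.
import numpy as np

rng = np.random.default_rng(0)


def softplus(x):
    # Convex, non-decreasing activation; numerically stable log(1 + e^x).
    return np.logaddexp(0.0, x)


class ICNN:
    def __init__(self, dim, hidden=(16, 16)):
        sizes = (dim,) + hidden + (1,)
        # W: hidden-to-hidden weights, constrained non-negative.
        self.W = [np.abs(rng.normal(size=(m, n)))
                  for n, m in zip(sizes[1:-1], sizes[2:])]
        # A: unconstrained "skip" weights applied directly to the input x.
        self.A = [rng.normal(size=(m, dim)) for m in sizes[1:]]
        self.b = [rng.normal(size=m) for m in sizes[1:]]

    def __call__(self, x):
        z = softplus(self.A[0] @ x + self.b[0])
        for W, A, b in zip(self.W[:-1], self.A[1:-1], self.b[1:-1]):
            z = softplus(W @ z + A @ x + b)  # convex in x by composition
        # Final layer is affine in z (non-negative weights) and in x.
        return float(self.W[-1] @ z + self.A[-1] @ x + self.b[-1])
```

Parameterizing the potential w this way guarantees convexity by construction, so its gradient is always a valid candidate Brenier map; in practice the non-negativity constraint is enforced by clipping or reparameterization during training.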

They also tackled scenarios where the density of the measure is only known up to a normalizing constant, proposing another algorithm to learn w in such settings, which is common in real-world applications where exact probabilities are unavailable.

The study draws inspiration from Brenier's polar factorization theorem, which generalizes matrix decompositions to vector fields and states that any suitable vector field can be written as the composition of the gradient of a convex function with a measure-preserving map, providing a theoretical foundation for the approach.
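Concretely, Brenier's result can be stated as follows (a schematic rendering of the theorem referenced above):

```latex
% Polar factorization: a suitable vector field F on a domain admits
F = (\nabla \varphi) \circ s,
% with \varphi convex and s measure-preserving, in analogy with the
% matrix polar decomposition M = P\,O
% (P symmetric positive semidefinite, O orthogonal).
```

The convex-gradient factor plays the role of the positive-semidefinite part and the measure-preserving map plays the role of the orthogonal part, which is why moment-measure constructions built on convex potentials inherit this decomposition flavor.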

Limitations include the reliance on optimal transport solvers, which can be computationally intensive, and the need for further validation on complex, high-dimensional datasets beyond the simple examples demonstrated in the paper.