Tue, 11. Feb at 11:15
1.023 (BMS Room, ...
Counting in Calabi-Yau categories
Abstract. I will discuss a replacement of the notion of homotopy cardinality in the setting of even-dimensional Calabi-Yau categories and their relative generalizations. This includes cases where the usual definition does not apply, such as Z/2-graded dg categories. As a first application, this allows us to define a version of Hall algebras for odd-dimensional Calabi-Yau categories. I will explain its relation to some previously known constructions of Hall algebras. If time permits, I will also discuss another application in the context of invariants of smooth and graded Legendrian links, where we prove a conjecture of Ng-Rutherford-Shende-Sivek relating ruling polynomials with augmentation categories. The talk is based on joint work with Fabian Haiden, arXiv:2409.10154.
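For orientation, the classical notion being replaced (this definition is not part of the abstract): the homotopy cardinality of a $\pi$-finite space $X$ is $$|X| = \sum_{[x] \in \pi_0(X)} \prod_{i \geq 1} |\pi_i(X, x)|^{(-1)^i},$$ a quantity that requires strong finiteness assumptions and therefore does not directly make sense in settings such as Z/2-graded dg categories.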
Wed, 12. Feb at 10:00
WIAS Erhard-Schmi...
Convergence of diffusion models under the manifold hypothesis in high dimensions
Abstract. Denoising Diffusion Probabilistic Models (DDPMs) are powerful state-of-the-art methods used to generate synthetic data from high-dimensional data distributions; they are widely used for image, audio and video generation as well as many further applications in science and beyond. The manifold hypothesis states that high-dimensional data often lie on lower-dimensional manifolds within an ambient space of large dimension $D$, and it is widely believed to hold in the examples encountered in practice. While recent results have provided invaluable insight into how diffusion models adapt to the manifold hypothesis, they do not capture the great empirical success of these models. In this work, we study DDPMs under the manifold hypothesis and prove that they achieve rates independent of the ambient dimension in terms of learning the score. In terms of sampling, we obtain rates independent of the ambient dimension w.r.t. the Kullback-Leibler divergence, and $O(\sqrt{D})$ w.r.t. the Wasserstein distance. We do this by developing a new framework connecting diffusion models to the well-studied theory of extrema of Gaussian processes. This is joint work with I. Azangulov and G. Deligiannidis (University of Oxford).
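For context, a standard DDPM formulation (not taken from the abstract; the notation $\bar\alpha_t$, $\varepsilon_\theta$ is generic): the forward process noises data via $$x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I_D),$$ and the network is trained by denoising score matching, i.e., by minimizing $\mathbb{E}\,\|\varepsilon_\theta(x_t, t) - \varepsilon\|^2$. The rates quoted above measure how the error of this learned score, and of the sampler built from it, scales when the data $x_0$ lie on a manifold of dimension $d \ll D$.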
Wed, 12. Feb at 11:30
online
Hybrid Models for Large Scale Infection Spread Simulations
Wed, 12. Feb at 13:00
FUB, Room 119 A3
Neural Networks for Unsupervised Discovery of Plane Colorings
Abstract. We present a framework that transforms geometric and combinatorial problems into optimization tasks by designing loss functions that vanish precisely when the desired coloring properties are achieved. We employ neural networks trained through gradient descent to minimize these loss functions, allowing for efficient exploration of the solution space. We demonstrate the effectiveness of the method on variants of the Hadwiger-Nelson problem, which asks for plane colorings that avoid monochromatic unit-distance pairs, and we sketch how the approach can be applied to other problems.
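A minimal sketch of this kind of framework (my own illustration, assuming a PyTorch setup; the network size, sampling region and number of colors are arbitrary choices, not taken from the talk): a network assigns soft colors to points of the plane, and the loss is the average probability that a randomly sampled unit-distance pair receives the same color.

import torch

torch.manual_seed(0)
num_colors = 6
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, num_colors),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(10_000):
    p = 20.0 * torch.rand(4096, 2) - 10.0                           # random points in a square
    angle = 2.0 * torch.pi * torch.rand(4096, 1)
    q = p + torch.cat([torch.cos(angle), torch.sin(angle)], dim=1)  # points at unit distance from p
    cp = torch.softmax(net(p), dim=1)                               # soft color distribution at p
    cq = torch.softmax(net(q), dim=1)                               # soft color distribution at q
    # Probability that both endpoints get the same color; this is zero iff the
    # two soft color distributions have disjoint support for every sampled pair.
    loss = (cp * cq).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()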
Wed, 12. Feb at 13:15
Room: 3.007 John ...
The geometry of the universal Jacobian
Abstract. I will explain some results, new and old, related to the topology and cycle theory of the universal Jacobian over the moduli space of curves (joint work with D. Petersen, S. Wood, and J. Schmitt).
Wed, 12. Feb at 15:15
HVP 5-7, R. 411
Breakdown of the mean-field description of interacting systems: Phase transitions, metastability and coarsening
Wed, 12. Feb at 16:00
Wed, 12. Feb at 16:30
EN 058
Complex analogues of the Tverberg-Vrećica conjecture and central transversal theorems
Abstract. The Tverberg-Vrećica conjecture is a broad generalization of Tverberg's classical theorem. One of its consequences, the central transversal theorem, extends both the centerpoint theorem and the ham sandwich theorem. In this talk, we will consider complex analogues of these results, where the corresponding transversals are complex affine spaces. The proofs of the complex Tverberg-Vrećica conjecture and its optimal colorful version rely on the non-vanishing of an equivariant Euler class. Furthermore, we obtain new Borsuk-Ulam-type theorems on complex Stiefel manifolds, which are interesting in their own right. These theorems yield complex analogues of recent extensions of the ham sandwich theorem for mass assignments and provide a direct proof of the complex central transversal theorem. This talk is based on joint work with Pablo Soberón.
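For reference, the classical result mentioned above (this statement is not part of the abstract): Tverberg's theorem says that any $(r-1)(d+1) + 1$ points in $\mathbb{R}^d$ can be partitioned into $r$ parts whose convex hulls share a common point.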
Thu, 13. Feb at 14:00
Polarization of lattices: Stable cold spots and spherical designs
Abstract. We consider the problem of finding the minimum of inhomogeneous Gaussian lattice sums: given a lattice $L \subseteq \mathbb{R}^n$ and a positive constant $\alpha$, the goal is to find the minimizers of $\sum_{x \in L} e^{-\alpha \|x - z\|^2}$ over all $z \in \mathbb{R}^n$, which we call the cold spots of the lattice $L$; the value of the inhomogeneous Gaussian lattice sum at such a point is called the polarization of the lattice. By a result of Bétermin and Petrache from 2017, it is known that for steep potential energy functions (that is, as $\alpha$ tends to infinity) the minimizers in the limit are found at deep holes of the lattice; one might even say that "polarization converges to sphere covering". In this talk I will discuss some expected and unexpected geometric phenomena related to the cold spots of lattices. First, we will discuss when a lattice can have stable cold spots, that is, points that are minimizers for all $\alpha \geq \alpha_0$ for some finite $\alpha_0$. It turns out that generic lattices do not have stable cold spots. For several important lattices, such as the root lattices, the Coxeter-Todd lattice, and the Barnes-Wall lattice, I will discuss how to apply the linear programming bound for spherical designs to prove that the deep holes are stable cold spots. Finally, I will discuss an example of a very famous lattice, which, somewhat unexpectedly, does not have stable cold spots. The talk is based on joint work with C. Bachoc, O. Marzorati, P. Moustrou, and F. Vallentin.
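As a small numerical illustration of the quantity defined above (my own sketch, not code from the talk; the lattice $\mathbb{Z}^2$, the value of $\alpha$ and the truncation radius are arbitrary choices), one can evaluate the truncated Gaussian lattice sum on a grid inside a fundamental cell and locate its approximate minimizer, which for $\mathbb{Z}^2$ sits at the deep hole $(1/2, 1/2)$.

import numpy as np

def gaussian_lattice_sum(z, alpha, radius=8):
    # Truncated sum over lattice points x in Z^2 with coordinates in [-radius, radius].
    ks = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(ks, ks)
    return np.exp(-alpha * ((X - z[0]) ** 2 + (Y - z[1]) ** 2)).sum()

alpha = 1.0
grid = np.linspace(0.0, 1.0, 101)
values = np.array([[gaussian_lattice_sum((zx, zy), alpha) for zx in grid] for zy in grid])
iy, ix = np.unravel_index(values.argmin(), values.shape)
print("approximate cold spot:", (grid[ix], grid[iy]))   # expected near the deep hole (0.5, 0.5)
print("polarization estimate:", values.min())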
Thu, 13. Feb at 15:15
Rudower Chaussee ...
Hybrid Algorithms for a Class of Joint Optimization and Learning Problems
Wed, 19. Feb at 16:30
EN 058
Triangulation and stratification
Abstract. In this talk, I will report on an ongoing project (joint with Rocco Chirivì, Martina Costa Cesari and Peter Littelmann) on understanding relations between triangulations of normal polytopes and stratifications of the associated toric varieties. The goal of the project is to bridge them using the theory of Seshadri stratifications and the associated semi-toric degenerations. In the talk we will concentrate on a special case, where the triangulation comes from a barycentric-type subdivision (arXiv:2501.16161) and the stratification is given by the torus orbits. If time permits, I will explain the construction of the higher rank secondary fan, an important step towards generalizing this special case to an arbitrary triangulation of the polytope.
Thu, 20. Feb at 10:15
WIAS, Erhard-Schm...
Some results on a modified Cahn-Hilliard model with chemotaxis
Mon, 10. Mar at 13:30
WIAS 405-406
First Optimize, Then Discretize for Scientific Machine Learning
Abstract. This talk provides an infinite-dimensional viewpoint on optimization problems encountered in scientific machine learning and discusses the paradigm "first optimize, then discretize" for their solution. This amounts to first choosing an appropriate infinite-dimensional algorithm, which is subsequently discretized in the tangent space of the neural network ansatz. To illustrate this point, we show that recently proposed state-of-the-art algorithms for scientific machine learning applications can be derived within this framework. Finally, we discuss the crucial aspect of scalability of the resulting algorithms.
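A minimal sketch of what this paradigm can look like in the simplest setting (my own illustration, not from the talk; the functional, network architecture and step size are arbitrary choices): for the $L^2$ fitting energy $E(u) = \tfrac12 \|u - f\|^2$, the infinite-dimensional gradient step is $u \mapsto u - \eta\,(u - f)$, and it is discretized by projecting it onto the tangent space of the network ansatz through a least-squares solve against the parameter Jacobian.

import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
params = list(net.parameters())
f = lambda x: torch.sin(torch.pi * x)               # target function; E(u) = 1/2 ||u - f||^2 in L2
x = torch.linspace(-1.0, 1.0, 64).reshape(-1, 1)    # collocation / quadrature points
eta = 0.5                                           # function-space step size

for step in range(200):
    u = net(x)
    residual = (u - f(x)).detach()                  # L2 gradient of E at u is (u - f)
    # Assemble the parameter Jacobian J[i, j] = d u(x_i) / d theta_j row by row.
    rows = []
    for i in range(x.shape[0]):
        grads = torch.autograd.grad(u[i, 0], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)
    # Discretize the function-space step -eta * (u - f) in the tangent space:
    # solve min over dtheta of || J dtheta + eta * residual ||^2.
    dtheta = torch.linalg.lstsq(J, -eta * residual).solution.reshape(-1)
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p.add_(dtheta[offset:offset + n].reshape(p.shape))
            offset += n

print("final mean squared error:", ((net(x) - f(x)) ** 2).mean().item())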
Wed, 12. Mar at 13:00
ZIB, Room 2006 (S...
Wed, 26. Mar at 13:00
ZIB, Room 2006 (S...
Wed, 16. Apr at 16:30
EN 058
Tue, 29. Apr at 11:15
1.023 (BMS Room, ...
Tue, 13. May at 11:15
1.023 (BMS Room, ...