08.01.2026
Image generation models today can create almost anything, like a futuristic city glowing at sunset, a classical painting of your cat, or a realistic spaceship made of glass. But when you ask them to go bigger and sharper, the magic slows down. The process takes longer, eats up more memory, and …
18.12.2025
Using AI and LLMs at work feels almost unavoidable today: they make things easier, but they can also go wrong in important ways. One of the trickiest problems? Gender bias. For example, ask a model to assess someone’s skills from a photo, and it may confidently label them a “born leader” or “working well under pressure” with no real …
11.12.2025
Ever wondered how a 3D shape can smoothly change — like a robot arm bending or a dog rising from sitting to standing — without complex simulations or hand-crafted data? Researchers from MCML and the University of Bonn tackled this challenge in their ICLR 2025 paper, “Implicit Neural Surface Deformation with Explicit Velocity Fields”.
04.12.2025
Large language models like ChatGPT or Gemini are now everywhere, from summarizing text to writing code or answering simple questions. But there’s one thing they still struggle with: admitting uncertainty. Ask a fine-tuned LLM a tricky question, and it might sound quite confident, even when it’s completely wrong. This “overconfidence” …
01.12.2025
From May to July, I spent three exciting months as a visiting researcher at the Computer Science Department of Princeton University, hosted by Prof. Manoel Horta Ribeiro. The visit grew out of a keynote Manoel gave at LMU. After his talk, we discussed potential joint projects at the intersection of causal inference, machine learning, and social …
27.11.2025
Large vision-language models (VLMs) like CLIP (Contrastive Language-Image Pre-training) have changed how AI handles mixed inputs of images and text by learning to connect pictures and words. Given an image with a caption like “a dog playing with a ball”, CLIP learns to link visual patterns (the dog, the ball, the grass) with the …
2024-11-22 - Last modified: 2026-01-08