Markus Peschl

I am a senior deep learning researcher at Qualcomm AI Research in Amsterdam.

I am dedicated to advancing AI research that is both theoretically rigorous and practically impactful. Previously, I worked on reinforcement learning and on the intersection of deep learning and combinatorial optimization for wireless and chip-design problems. Lately, my research focuses on generative models for sequence modeling, decision making, and embodied AI.

In my free time, you can find me playing the piano 🎹, practicing meditation 🧘, reading philosophy 📙, and thinking about artificial general intelligence (preferably on the beach 🏖️).

Email  /  Scholar  /  LinkedIn  /  X


Research

Differentiable and Learnable Wireless Simulation with Geometric Transformers
Thomas Hehn, Markus Peschl, Tribhuvanesh Orekondy, Arash Behboodi, Johann Brehmer
arXiv, 2024 / ICML 2024 Workshop GRaM
arXiv

A Geometric Algebra Transformer ("WiGATr") for learning an E(3)-equivariant simulator of wireless signal transmission. We evaluate WiGATr both as a predictive model and as a diffusion model of signal and 3D geometry.

NeuroSteiner: A Graph Transformer for Wirelength Estimation
Sahil Manchanda, Dana Kianfar, Markus Peschl, Romain Lepert, Michaël Defferrard
arXiv, 2024
arXiv

We use graph transformers to predict Steiner points in a hybrid fashion, tackling the Steiner tree problem in physical design (chip design).

Robust scheduling with GFlowNets
David W Zhang, Corrado Rainone, Markus Peschl, Roberto Bondesan
ICLR, 2023
OpenReview / arXiv

We use GFlowNets to tackle the NP-hard problem of computation graph scheduling by combining a temperature-conditioned policy with top-k sampling.

Learning Perturbations for Soft-Output Linear MIMO Demappers
Daniel E. Worrall, Markus Peschl, Arash Behboodi, Roberto Bondesan
GLOBECOM, 2022
IEEE / arXiv

We combine lattice reduction, Bayesian optimization, and a stochastic sampler to arrive at a highly efficient yet accurate linear MIMO demapper.

MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
Markus Peschl, Arkady Zgonnikov, Frans A Oliehoek, Luciano C Siebert
AAMAS, 2022
arXiv

AI alignment is challenging because human feedback reflects a multitude of different, possibly conflicting values. We propose to overcome this challenge with a multi-objective agent that actively learns user preferences.