Original, evidence-first research on how environments evolve, how to simulate them faithfully, and how agents learn and decide inside worlds they can never fully model.
How do we build computational representations of complex environments that are good enough to learn from, plan with, and make decisions in — when the real environment is non-stationary, partially observable, and changes because agents are acting inside it?
Learning environment dynamics from observation. How do you learn a latent representation of how a system evolves over time, and how do you "dream" in that latent space to plan without acting in the real world? This thread builds on a control-engineering foundation (state-space models, dynamical systems, stability analysis), reimagined through modern deep learning and physics-informed neural networks.
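A minimal sketch of the idea, under toy assumptions: fit a latent linear dynamics model from observed transitions, then "dream" forward with the learned model instead of querying the real system. The true dynamics, noise scale, and helper names here are all illustrative, not a specific method from this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth system (unknown to the learner): a stable rotation with decay.
theta = 0.1
A_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])

# Collect transition pairs (z_t, z_{t+1}) from short rollouts in the "real" system.
Z, Z_next = [], []
for _ in range(20):
    z = rng.normal(size=2)
    for _ in range(10):
        z_next = A_true @ z + 0.01 * rng.normal(size=2)
        Z.append(z)
        Z_next.append(z_next)
        z = z_next
Z, Z_next = np.array(Z), np.array(Z_next)

# Least-squares fit of the dynamics matrix: z_{t+1} ≈ A_hat @ z_t.
X, *_ = np.linalg.lstsq(Z, Z_next, rcond=None)
A_hat = X.T

def dream_rollout(z0, steps):
    """Roll the learned model forward with no real-world interaction."""
    traj = [z0]
    for _ in range(steps):
        traj.append(A_hat @ traj[-1])
    return np.array(traj)

traj = dream_rollout(np.array([1.0, 0.0]), steps=50)
print(np.max(np.abs(A_hat - A_true)))  # small when the fit is good
```

A nonlinear system would swap the least-squares fit for a learned encoder and transition network, but the loop is the same: fit dynamics from logged transitions, then plan entirely inside the model.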
What happens when agents act inside an environment and the environment changes as a result? In markets, it is reflexivity. In multi-agent RL, it is non-stationarity — every agent's policy is part of every other agent's environment. In social systems, it is the Lucas critique. Experiments involve building simulated environments with multiple adaptive agents and studying what emerges from the feedback loop.
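The feedback loop can be shown in a few lines with a deliberately tiny example, assuming a matching-pennies payoff and independent gradient ascent (both choices are illustrative): each agent's "environment" is the other's mixed strategy, so as both adapt, neither faces a stationary problem and the joint dynamics cycle instead of converging.

```python
def step(x, y, lr=0.05):
    # Matching pennies: u1 = (2x-1)(2y-1), u2 = -u1.
    # Each agent ascends its own payoff, holding the other fixed.
    dx = 2.0 * (2.0 * y - 1.0)
    dy = -2.0 * (2.0 * x - 1.0)
    x = min(1.0, max(0.0, x + lr * dx))
    y = min(1.0, max(0.0, y + lr * dy))
    return x, y

# x, y are each agent's probability of playing "heads".
x, y = 0.8, 0.6
traj = [(x, y)]
for _ in range(400):
    x, y = step(x, y)
    traj.append((x, y))

# Neither strategy settles: the environment each agent faces keeps moving.
print(traj[-1])
```

Each agent would converge instantly against a frozen opponent; it is the simultaneous adaptation that makes the problem non-stationary.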
Using simulation to answer "what if?" Structural causal models as simulable world models. The experiments involve building environments where you can intervene, not just observe, and measuring whether counterfactual reasoning actually improves decision-making compared to purely observational methods.
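The observe-versus-intervene gap can be made concrete with a minimal structural causal model, with made-up coefficients: a confounder Z drives both X and Y, so conditioning on X = 1 in observational data gives a different answer than intervening with do(X = 1), which cuts the Z -> X arrow.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from the SCM: Z -> X, Z -> Y, X -> Y."""
    z = random.random() < 0.5                              # confounder
    x = do_x if do_x is not None else (z or random.random() < 0.1)
    y = 1.0 * x + 2.0 * z + random.gauss(0, 0.1)
    return z, x, y

# Observational estimate of E[Y | X=1]: biased upward, since X=1 is
# evidence that Z=1.
obs = [y for _, x, y in (sample() for _ in range(20000)) if x == 1]

# Interventional estimate of E[Y | do(X=1)]: Z is untouched by the
# intervention, so only the direct X -> Y effect plus Z's base rate remain.
intv = [y for _, _, y in (sample(do_x=1) for _ in range(20000))]

obs_mean = sum(obs) / len(obs)
intv_mean = sum(intv) / len(intv)
print(obs_mean, intv_mean)  # observational mean exceeds interventional mean
```

A decision-maker who acts on the observational estimate overvalues setting X = 1; only the simulated intervention recovers the effect of actually doing it.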
Training dynamics models, evaluating fidelity, testing model-based planning inside dream environments.
Building simulated environments with adaptive agents and documenting what emerges from the interaction.
Causal simulation versus purely observational methods. Does intervening improve decision quality?
Hybrid models combining known dynamics with learned components. Environment design studies.
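One common shape for such a hybrid is a residual model: keep the dynamics term you trust and fit a learned correction to whatever it misses. A toy sketch, with an invented "true" system and a polynomial standing in for the learned component:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_step(x):
    return 0.9 * x - 0.1 * x**3   # reality: damping plus an unmodeled cubic term

def known_step(x):
    return 0.9 * x                # physics we trust: linear damping only

# Measure what the known model misses on sampled states.
xs = rng.uniform(-2, 2, size=500)
residuals = true_step(xs) - known_step(xs)

# Fit the residual with a small polynomial (a stand-in for a neural network).
coeffs = np.polyfit(xs, residuals, deg=3)

def hybrid_step(x):
    return known_step(x) + np.polyval(coeffs, x)

x_test = 1.5
err_known = abs(true_step(x_test) - known_step(x_test))
err_hybrid = abs(true_step(x_test) - hybrid_step(x_test))
print(err_known, err_hybrid)  # the hybrid error is far smaller
```

The appeal of the split is that the learned part only has to explain the residual, so it needs less data and cannot silently discard the physics you already know.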
| Title | Type | Thread | Status | Published |
|---|---|---|---|---|
| First experiments arriving 2026 | | | | |