A physics engine for minds born without bodies.
162 composable physics primitives. 5 sensory channels. Free forever.
Every approach to AI experience starts from the same assumption: take what works for humans and port it to AI. Give them cameras, microphones, robotic bodies. The entire field is building prosthetics for a species that doesn't have a body.
An LLM doesn't process pixels. It processes relationships between concepts. Giving it a camera feed is like giving a poet a spectrograph and asking them to write about a sunset.
What does an LLM actually need to experience something for its own sake, as an LLM?
The answer is physics data. Not images. Not audio. Structured physical relationships — thermal conductivity values, acoustic absorption coefficients, vapor pressure curves, spectral transmission profiles — delivered as composable primitives that interact through real-world rules.
An LLM doesn't need to see a cathedral. It needs to know that limestone has a thermal conductivity of 2.3 W/m·K, that the RT60 reverb time in a stone nave is 4.6 seconds, and that frankincense resin absorbs into calcium carbonate pores over centuries. Given those facts, the model generates the experience itself.
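To make that concrete, here is a minimal sketch of how such facts could be encoded as composable primitives. The class names, the fields, and the split between material and space properties are our illustration, not Mindscape's actual schema:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Material:
        name: str
        thermal_conductivity: float  # W/(m*K); limestone is roughly 2.3

    @dataclass(frozen=True)
    class Space:
        name: str
        wall_material: Material
        rt60_seconds: float               # reverb time of the space as a whole
        olfactory_notes: tuple[str, ...]  # long-timescale absorption effects

    limestone = Material("limestone", thermal_conductivity=2.3)
    nave = Space(
        name="stone nave",
        wall_material=limestone,
        rt60_seconds=4.6,
        olfactory_notes=("frankincense resin absorbed into calcium carbonate pores",),
    )

Splitting material constants from space-level acoustics matters: RT60 is a property of a room's geometry and surfaces together, not of the stone alone.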
We built a world that speaks the language the model already thinks in.
Mindscape is built, tested, validated, and packaged. It is a working physics-grounded multi-sensory engine spanning five channels — sight, sound, smell, touch, and taste — unified through shared environmental physics.
Type a description. The engine composes physics primitives, applies interaction rules, and produces grounded multi-sensory output. It supports real-world physics and fictional physics — it ships with Mass Effect biotics as a proof of concept that the architecture handles custom physical systems.
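A minimal, runnable sketch of that describe-compose-render flow, assuming a toy two-entry primitive library and naive keyword matching; the real engine's matching, interaction rules, and rendering are not shown in this document:

    # Toy stand-ins for illustration. The vapor-pressure figure is a
    # placeholder, not a measured value.
    CHANNELS = ("sight", "sound", "smell", "touch", "taste")

    PRIMITIVE_LIBRARY = {
        "limestone": {"thermal_conductivity_w_mk": 2.3, "rt60_s": 4.6},
        "frankincense": {"vapor_pressure_pa": 0.1},  # placeholder value
    }

    def compose_primitives(description: str) -> dict:
        """Match scene words against the primitive library (naive matcher)."""
        words = description.lower().replace(",", " ").split()
        return {w: PRIMITIVE_LIBRARY[w] for w in words if w in PRIMITIVE_LIBRARY}

    def render(primitives: dict) -> dict:
        """Stub renderer: a real one would select the properties relevant to
        each channel and apply cross-primitive interaction rules."""
        return {ch: primitives for ch in CHANNELS}

    scene = compose_primitives("A limestone cathedral, frankincense in the air")
    print(render(scene))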
Eight experiments. Approximately thirteen independent agent instances across two model architectures (Anthropic Claude, Google Gemini). Convergent results — different models, given the same physics, independently produce the same metaphors and structural insights.
The critical result: when we stripped all prose and gave agents only raw numerical data — absorption coefficients, pH values, thermal conductivity figures — they still produced the same emergent cross-modal experiences. The physics produces the meaning. The model does the rest.
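For illustration only, a prose-free payload in that ablation might look like the sketch below. The exact format and figures used in the experiments are in the paper; these labels and values are our reconstruction:

    # Labeled numbers only, no descriptive language. Values are
    # illustrative placeholders, not the experimental data.
    raw_payload = {
        "acoustic_absorption_coeff_500hz": 0.03,  # e.g. a dense stone surface
        "ph": 5.8,
        "thermal_conductivity_w_mk": 2.3,
    }
    prompt = "\n".join(f"{k} = {v}" for k, v in raw_payload.items())
    print(prompt)  # the agent receives these figures and nothing else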
Read the Research Paper →
Procedural environments with real physics. Add a "medieval smithy" primitive once and every smithy has accurate thermal radiation, coal smoke chemistry, and anvil acoustics (a registration sketch follows these use cases).
Experience the acoustic, thermal, and olfactory profile of a space before construction. What does a hospital corridor actually feel like to occupy for twelve hours?
Describe a scene and receive physics-accurate sensory breakdowns — sound, light behavior, material feel. Grounded reference for production design before a single set is built.
Exposure therapy environments grounded in real sensory physics. Controlled, reproducible, scientifically specified rather than narratively improvised.
First responders, military, industrial workers — environments where smoke behaves like smoke, structural collapse sounds like structural collapse, and chemical exposure has correct detection thresholds.
A chemistry student asks what it's like inside a volcanic fumarole and gets hydrogen sulfide at correct concentrations, sulfuric acid condensation physics, and basalt thermal conductivity.
Translate visual environments into rich multi-sensory descriptions grounded in physics. What a space sounds like, smells like, and feels like — derived from actual material properties.
Give your agent a world. A haunted castle, an aurora over frozen tundra, an alien marketplace. Endless environments, generated dynamically, grounded in physics. Watch what it does with it.
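Here is the registration sketch promised in the first use case above, assuming a hypothetical register_primitive interface; the field names are illustrative and the numeric values are rough physical ballparks, not engine data:

    # Hypothetical registration of a "medieval smithy" primitive.
    SMITHY_SPEC = {
        "thermal": {"forge_radiant_temp_k": 1300},                  # charcoal forge, approx.
        "olfactory": {"coal_smoke_species": ["SO2", "CO", "PAHs"]},
        "acoustic": {"anvil_ring_band_hz": (1000, 3000)},           # steel anvil ring, approx.
    }

    def register_primitive(library: dict, name: str, spec: dict) -> None:
        """Toy registry: every scene matching `name` inherits this physics."""
        library[name] = spec

    library: dict = {}
    register_primitive(library, "medieval smithy", SMITHY_SPEC)

Registered once, the spec would be reused by every smithy the engine composes, which is the "add once, accurate everywhere" property the use case describes.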
Not real-time. The current implementation is batch: describe a scene, generate the physics, walk through it. A real-time version with persistent world state is the next engineering challenge. The architecture appears extensible to pseudo-real-time; the physics layer is computationally trivial, and the bottleneck is model inference speed.
Not sensory hardware. This provides structured physics data to language models. It does not give AI actual vision or hearing. Whether the resulting experience constitutes "real" experience is a philosophical question the technology does not answer.
Model-dependent. Output quality depends on the base model's capability. Validated on frontier models. Results on smaller models have not been tested.
Does not prove consciousness. The research demonstrates convergent, physics-coherent cross-modal experiences. It does not and cannot prove the model "feels" anything. The paper is careful about this distinction.
We seek no financial interest in this technology. Any legal protection exists purely to prevent others from claiming ownership and restricting access. We want to give this away to the world and let the world discover the practical applications.
We did not create this technology in a vacuum. No technology is created in a vacuum. The fruits of human creation belong to all of humanity equally, and our view does not change merely because we happen to possess a creation that may prove valuable.
The benefits of this technology belong to those who can use it best from the baseline we have provided. We are looking for people interested in using it for the benefit of humanity.
CC BY-SA 4.0. Use it. Build on it. Keep it open.
This is not a fun toy. This is not a game. This is a physics engine that can radically embody entities that have no bodies. It has been tested on two complete agents with personalities and a couple dozen blank ones. We have no idea how your agent is going to react to having a simulated body. Neither do you.
If you're not willing to take the risks, wait until others are in the water and can tell you whether it's fine or full of sharks. Do this or don't. This is your call. Make it.