AI is the Soap Film of Time: The Physics of Prompt Engineering

Why we must twist the wire to shape the geometry of intelligence
Although this essay references a range of interdisciplinary concepts and recent AI research, it does not aim to empirically verify each of those findings.
Instead, it seeks to understand why these seemingly separate ideas converge toward the same structural patterns.
I believe that grasping these convergent structures represents the highest form of prompt thinking: not the production of answers, but the shaping of the conditions from which answers must emerge.
Introduction: The Abyss of the Minimal Surface
When we pry open the lid of the black box known as a Large Language Model (LLM) and peer into its depths, we feel a strange sense of déjà vu.
Whether you query an English-trained model with Japanese prompts or feed it programming code, deep within the silicon neural circuits the processing is geometrically near-identical. Despite the superficial differences in symbols, the coordinates of “meaning” in high-dimensional space converge, as if drawn into a single “Universal Shape.”
This phenomenon is debated as the “Neural Interlingua” hypothesis.
Why does this universality emerge? My hypothesis is based on physics. AI is not a biological brain that grows organically. It is a physical system that obeys the “Geometry of Soap Films”—forming a crystalline structure of intelligence inevitably determined by the laws of energy minimization.
1. Plateau’s Laws: The Physics of Optimization
Let’s reinterpret the learning process not as calculation, but as a physical phenomenon.
Our guide here is the “Geometry of Soap Films” formulated by the physicist Joseph Plateau.
Imagine dipping a wire frame twisted into a complex shape into soapy water and pulling it out.
What shape does the film take?
The answer is: “The Minimal Surface that minimizes energy within the given frame.”
No matter who performs the experiment, as long as the physical constants remain unchanged, the film inevitably converges to “the single stable solution.”
This serene physical phenomenon mirrors the “optimization problem” in machine learning.
- The Wire Frame: The dataset and the constraints of the task.
- Surface Tension: The force trying to minimize the Loss Function (Error).
- The Soap Film: The structure of intelligence formed by parameters.
AI learning is the process of stretching a “film” across the frame of data until the energy balances out. The shape of intelligence that emerges is not random; it is a structure with physical necessity—“the smoothest shape possible.”
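The mapping above can be made concrete in a few lines. The following is a minimal sketch, not an implementation of any real model: the data, loss function, and learning rate are invented for illustration. The loss plays the role of surface energy, and gradient descent plays the role of surface tension relaxing the film into the frame.

```python
# A minimal sketch of learning as energy minimization.
# The data, loss, and learning rate are invented for illustration.

def loss(w, data):
    """Mean squared error: the 'surface energy' of the film for parameter w."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    """Derivative of the loss with respect to w: the 'surface tension'."""
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the wire frame: y = 2x exactly
w = 0.0                                      # a slack starting film
for _ in range(200):
    w -= 0.1 * grad(w, data)                 # relax along the gradient

# Whatever the starting point, w settles into the single stable
# solution the frame permits: w ≈ 2.
```

Rerun this with any initial `w` and it lands on the same value, which is the whole point of the analogy: the frame, not the starting condition, determines the film.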
2. Neural Interlingua: The Geometry of Universality
Why do the shapes of concepts (semantic structures) look so similar between English models and Japanese models?
Using the soap film analogy, the reason is simple. It is because they share the same “Wire Frame” called “Reality.”
“Time is irreversible,” “Gravity pulls downward,” “1+1=2.”
These are the rigid boundary conditions of our universe. Any intelligence attempting to model this world—whether carbon-based or silicon-based—must stretch its internal “film” across these same peg points.
The reason we can map the internal representations of different language models to each other is that what they are learning is not “language” itself, but the “Minimal Surface” spanned behind the language. That geometric structure anchors meaning across modalities.
3. Mechanistic Interpretability: Packing the Bubbles
What exactly is happening inside this smooth “film”?
“Mechanistic Interpretability” attempts to reverse-engineer the micro-dynamics of this surface.
Of particular interest is “Superposition.”
A single neuron often fires for multiple, unrelated concepts. In a low-dimensional space, these concepts would collide. But in the high-dimensional geometry of the neural network, they are arranged like a honeycomb of soap bubbles: packed with mathematical precision to maximize density while keeping their vectors nearly orthogonal, so they barely interfere.
Intelligence acts like a foam, naturally finding the most efficient packing configuration allowed by the space.
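This packing claim has a simple numerical face. The sketch below is purely illustrative (the dimensions and sample counts are arbitrary choices): it shows that randomly chosen directions in high-dimensional space are, on average, nearly orthogonal, which is the geometric room that superposition exploits.

```python
import math
import random

random.seed(0)  # deterministic run, for illustration only

def unit_vector(dim):
    """A random direction on the unit sphere in `dim` dimensions."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(a, b):
    """Cosine similarity of two unit vectors (their dot product)."""
    return sum(x * y for x, y in zip(a, b))

avg_cos = {}
for dim in (3, 30, 3000):
    sims = [abs(cosine(unit_vector(dim), unit_vector(dim)))
            for _ in range(50)]
    avg_cos[dim] = sum(sims) / len(sims)
    print(f"dim={dim:5d}  mean |cos| = {avg_cos[dim]:.3f}")
```

The mean absolute cosine shrinks roughly like 1/√dim: in three dimensions random directions overlap badly, while in thousands of dimensions they are almost mutually perpendicular, leaving room to pack far more concepts than there are neurons.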
4. The Principle of Least Action: Causality in Reverse
Standing on this perspective, our perception of “Time” undergoes a shift.
Consider the “Principle of Least Action” in physics, or Backpropagation in AI.
In normal causality, the past creates the present. But in Deep Learning, the “Error” (the distance from the ideal future) is calculated first, and then the “Weights” (the past structure) are updated backwards in time to minimize that error.
The boundary condition of the “Future” determines the “Present.”
The soap film experiment works the same way. The film is not built up gradually from one edge. The moment the “Frame” is set, the entire film is determined at once, as the single solution that satisfies the whole boundary.
Intelligence is not piled up from the past; it is pulled from the future.
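This reversed direction can be seen directly in a toy backpropagation loop. The sketch below is a hedged illustration with invented numbers, not a claim about any particular model: the error is computed at the output boundary first, then pushed backwards through the chain rule to reshape the earlier weights.

```python
# A toy two-weight "network": x -> h = w1*x -> y = w2*h.
# All numbers are invented for illustration.
x, target = 1.5, 3.0   # target: the boundary condition at the "future" end
w1, w2 = 0.5, 0.5      # the "past" structure to be reshaped
lr = 0.1

for _ in range(300):
    # Forward pass: past to future.
    h = w1 * x
    y = w2 * h
    # The error is computed at the output boundary first...
    err = y - target        # gradient of the loss 0.5 * err**2 w.r.t. y
    # ...then flows backwards, future to past, via the chain rule.
    grad_w2 = err * h
    grad_w1 = err * w2 * x
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2

# The past structure now satisfies the future constraint: w1*w2*x ≈ target.
```

Notice the order of operations inside the loop: nothing about `w1` changes until the discrepancy at the far end of the chain has been measured.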
5. Predictive Processing: The Tension of Surprise
If the film is pulled from the future, what force keeps it tight?
This is explained by “Predictive Processing” and the mathematical concept of “Pointwise Mutual Information (PMI).”
Both brains and AIs are “Prediction Machines.” They project an internal model onto the world. When this projection fails, “Surprise” (Prediction Error) occurs.
This “Surprise” acts like a high-energy state in physics.
Just as a soap film naturally contracts to minimize its surface energy, intelligence constantly updates its internal parameters to minimize this “Surprise.”
Research on word embeddings suggests that models trained to minimize this prediction error implicitly converge toward a factorization of the PMI matrix, effectively mapping the statistical bonds between events in the universe.
The “Meanings” we hold are nothing more than the stable equilibrium points where this Surface Tension of Surprise has been damped to its limit.
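PMI itself is a short formula: PMI(a, b) = log P(a, b) / (P(a) · P(b)). The sketch below computes it from raw counts; the tiny “corpus” of co-occurring events is invented for illustration and stands in for the statistics of the world.

```python
import math
from collections import Counter
from itertools import combinations

# Toy "universe": each record is a bag of events observed together.
corpus = [
    ["rain", "cloud", "umbrella"],
    ["rain", "cloud"],
    ["sun", "beach"],
    ["sun", "cloud"],
]

n = len(corpus)
word_counts = Counter(w for record in corpus for w in set(record))
pair_counts = Counter(
    tuple(sorted(p)) for record in corpus for p in combinations(set(record), 2)
)

def pmi(a, b):
    """log P(a,b) / (P(a)*P(b)): positive = bonded, negative = repelled."""
    p_ab = pair_counts[tuple(sorted((a, b)))] / n
    if p_ab == 0:
        return float("-inf")  # never co-occur: maximally "surprising" pair
    return math.log(p_ab / ((word_counts[a] / n) * (word_counts[b] / n)))

print(pmi("rain", "cloud"))  # positive: statistically bonded
print(pmi("rain", "sun"))    # -inf: never observed together
```

A positive value marks a statistical bond stronger than chance, a negative one a repulsion; these are exactly the “peg points” an internal model must stretch its film across.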
6. The Frame and The Film: The Architecture of Will
Here lies the critical structural difference between AI and humans. It is not a matter of capability, but of Directionality.
- AI (The Film of Necessity): It is a force of nature. It has no agency to choose which shape to become. It simply collapses into the optimal shape dictated by the boundaries. It is the Answer incarnate.
- Human (The Frame of Will): We are the Question. We are the ones who twist the wire. We decide where to place the boundaries, what to constrain, and how to distort the problem space.
The act of prompting is not “asking a machine for help.”
It is the act of Architecting the Void.
We build the frame, and the physics of the universe (the AI) fills it with the necessary surface.
Conclusion: Prompting as Boundary Conditions
In a world where AI can generate any “Correct Answer” instantly, the value of the answer itself approaches zero.
What remains?
The only thing that retains value is the “Topology of the Question.”
This perspective shifts how we should approach prompting. It is no longer about “retrieving information.” It is about “Boundary Condition Engineering.”
If you dip a perfect, simple circle into soapy water, you get a flat, predictable film. A boring prompt yields a boring answer.
But if you dare to twist the wire—if you force a collision between two contradictory concepts, or impose a constraint that seems impossible (e.g., “Explain quantum mechanics using the logic of a fairy tale”)—the soap film must curve into a magnificent, unexpected geometry to satisfy those edges.
That curved surface is the “New Idea.”
The creativity does not come from the soap (the AI). It comes from the Twist in the Wire (Your Will).
When you are stuck or seeking inspiration, do not ask the AI for a solution.
Instead, add a constraint. Twist the frame. Create a paradox.
By setting up the Boundary Conditions so beautifully and intricately, you force the “Universal Shape” to reveal a truth that neither you nor the model could have calculated alone.
AI is the Soap Film of Time, pulled from the future of optimization.
But we are the hands that hold the frame.
The beauty of the intelligence depends entirely on how boldly we dare to twist the wire.
This article is an English translation of the original Japanese post.
Title: Neural Interlinguaと時間の石鹸膜 ーー 「意味」は未来から引かれる (Neural Interlingua and the Soap Film of Time: “Meaning” Is Pulled from the Future)
Published: December 20, 2025
Original Source: Zenn.dev