Adequacy vs. intelligibility, or why the mathematical formalism of quantum mechanics is a precedent
Quantum mechanics is the most successful theory in the history of physics. It predicts the electron's magnetic moment to twelve decimal places. It underpins semiconductors, lasers, MRI machines, the device you're reading this on. Nothing else comes close.
It also resists human understanding in a peculiar way. After a century of work by very capable minds, there is no consensus on what the theory means: what it says about reality, what happens during measurement, whether the wavefunction is a thing in the world or a tool for calculation. The mathematics is precise, the predictions are extraordinary, but the interpretation remains contested.
We tend to pass over this too quickly. Our best theory of the physical world works—spectacularly—without being understood. And by "understood" I mean something specific: not the ability to calculate with it, or build technology from it, or teach it to graduate students, but the ability to form a coherent picture of why reality behaves in the way it describes, to hold in mind what it says is actually happening.
That kind of understanding, we don't have. The question of whether fundamental physics must have an intelligible ontology—whether we're owed a picture we can hold in mind—is not a question quantum mechanics answers. It simply proceeds without one.
And this might be a precedent more than a temporary situation awaiting resolution.
Foundations
To see why this matters beyond physics, we need to be precise about what quantum mechanics established.
In 1936, Birkhoff and von Neumann showed that the propositions of quantum mechanics form a non-distributive lattice. The distributive law of classical logic—
A ∧ (B ∨ C) = (A ∧ B) ∨ (A ∧ C)
—fails for quantum systems.
A concrete case. Take an electron and consider three propositions:
A: "The electron has spin-up along the z-axis"
B: "The electron has spin-up along the x-axis"
C: "The electron has spin-down along the x-axis"
In classical logic, if A is true, then either (A ∧ B) or (A ∧ C) must be true: the electron has z-spin-up and some definite x-spin. But in quantum mechanics, an electron with definite z-spin has indefinite x-spin; its state is a superposition of B and C, not a hidden choice between them. In the lattice of propositions, B ∨ C spans the whole state space, so A ∧ (B ∨ C) is just A, which is true. Yet no state satisfies both A and B, or both A and C, so A ∧ B and A ∧ C are each the always-false proposition, and their disjunction is false. The two sides of the distributive law come apart.
The lattice of closed subspaces in Hilbert space is orthomodular, not Boolean. Classical propositional logic assumed Boolean structure. What quantum mechanics requires is something else.
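Here is a minimal sketch of that calculation, assuming numpy: each proposition becomes a projector onto a subspace of C², with the lattice meet taken as subspace intersection and the join as subspace span.

```python
import numpy as np

def projector(vec):
    """Projector onto the line spanned by a state vector."""
    v = np.asarray(vec, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def join(P, Q, tol=1e-10):
    """Lattice 'or': projector onto the span of the two subspaces."""
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    basis = U[:, s > tol]
    return basis @ basis.conj().T

def meet(P, Q):
    """Lattice 'and': projector onto the intersection of the two subspaces."""
    I = np.eye(P.shape[0])
    # Orthocomplemented De Morgan: intersection = complement of the span of complements.
    return I - join(I - P, I - Q)

# The three propositions, as projectors on the spin-1/2 state space C^2.
A = projector([1, 0])                                 # spin-up along z
B = projector([1 / np.sqrt(2),  1 / np.sqrt(2)])      # spin-up along x
C = projector([1 / np.sqrt(2), -1 / np.sqrt(2)])      # spin-down along x

lhs = meet(A, join(B, C))             # A ∧ (B ∨ C): comes out as A itself
rhs = join(meet(A, B), meet(A, C))    # (A ∧ B) ∨ (A ∧ C): comes out as the zero projector
print("distributive law holds:", np.allclose(lhs, rhs))   # False
```

The left-hand side is the projector for A; the right-hand side is the zero projector, the always-false proposition. The failure is exact, not approximate.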
The curious metaphysical case
Some philosophers frame this as forcing metaphysical revision rather than logical revision, a reconception of what properties are rather than what inference rules hold. (One might ask whether classical logic is itself a metaphysical commitment, which would blur the distinction.) But either way, the structure of quantum systems doesn't match the structure that seemed, for millennia, to be necessary for coherent thought. Something had to give.
The formalism itself has a curious status. Quantum mechanics isn't derived from self-evident axioms. Its foundations are arguably either inconsistent or incomplete, depending on how you count, and the theory was built not from first principles but from pieces that work, assembled through decades of trial and error.
Why is the state space a complex Hilbert space rather than real, or quaternionic, or something else entirely? We have consistency arguments and reconstruction theorems, but no deep reason. It's what works.
Why are observables represented by Hermitian operators? Because Hermitian operators have real eigenvalues, and measurement outcomes are real numbers. That's an explanation of compatibility, not a derivation from first principles.
Why does probability come from the Born rule (the squared amplitude of the wavefunction)? Born guessed it in 1926. Every experiment since has confirmed it. But it remains a postulate, not a theorem. Attempts to derive it from deeper principles (decision-theoretic arguments, Gleason's theorem with various caveats) remain contested.
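The postulate itself fits in a few lines. A toy calculation, assuming numpy: the observable is the Pauli x operator, which is Hermitian and therefore has real eigenvalues, and the Born rule assigns each outcome the squared amplitude of the state's overlap with the corresponding eigenvector.

```python
import numpy as np

# The observable: Pauli-x. Hermitian, so its eigenvalues are real numbers.
sigma_x = np.array([[0, 1],
                    [1, 0]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(sigma_x)
print(eigvals)                    # [-1.  1.]: the possible measurement outcomes

# The state: spin-up along z.
psi = np.array([1, 0], dtype=complex)

# Born rule: the probability of each outcome is the squared amplitude of the
# state's overlap with the corresponding eigenvector.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
print(probs)                      # [0.5 0.5]: x-spin is maximally indefinite
```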
The axioms, in other words, were reverse-engineered. Phenomena came first; formalism was built to fit; justification, if it comes, comes later. A century on, it still hasn't fully arrived.
Ask a physicist what quantum mechanics says about reality, and you enter strange territory. The theory has led intelligent people—not cranks, but serious physicists and philosophers—to claim that the universe is perpetually splitting into countless copies of itself, or that conscious observation causes physical systems to "jump" in unpredictable ways, or that classical logic requires fundamental revision, or that the wavefunction isn't a description of reality at all but a calculus for organizing expectations.
These aren't fringe positions. Careful thinkers who accept the same formalism and the same experimental results disagree on what exists, what happens during measurement, whether physics is deterministic, whether there is one world or endlessly many. And their disagreements make no difference to any prediction. Empirically, the positions are indistinguishable.
The formalism works without our knowing what it describes. The mathematics says: if you prepare a system this way and measure that observable, here are the probabilities. It doesn't say what the system is between preparation and measurement. The extraordinary success of the theory doesn't require an answer.
There's a structural feature of quantum mechanics that deserves particular attention: the measurement problem.
Quantum mechanics has two rules for how states evolve. Between measurements, the state evolves according to the Schrödinger equation: deterministic, linear, reversible. Superpositions remain superpositions. Upon measurement, the state "collapses" to an eigenstate of the observable being measured: stochastic, non-linear, irreversible.
These rules are in tension. Unitary evolution is smooth and preserves superposition. Measurement is abrupt and destroys it. The formalism includes both but doesn't say when each applies. "Measurement" isn't defined in physical terms. There's no equation for what counts as one.
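Put side by side, the two rules look like this. A toy sketch for a single spin, assuming numpy and ħ = 1; the point is the contrast between the dynamics, not the physics of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A superposition: equal amplitudes of spin-up and spin-down along z.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Rule 1 -- Schrödinger evolution: deterministic, linear, reversible.
# Evolution under H = sigma_z for a time t (with hbar = 1).
t = 0.7
H = np.diag([1.0, -1.0])
U = np.diag(np.exp(-1j * np.diag(H) * t))
psi = U @ psi
print(np.abs(psi) ** 2)        # still [0.5, 0.5]: the superposition survives

# Rule 2 -- measurement of z-spin: stochastic, non-linear, irreversible.
P_up, P_down = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
p_up = float(np.real(psi.conj() @ P_up @ psi))
P = P_up if rng.random() < p_up else P_down
psi = P @ psi
psi = psi / np.linalg.norm(psi)
print(np.abs(psi) ** 2)        # [1, 0] or [0, 1]: the superposition is gone
# The formalism never says when to stop applying Rule 1 and apply Rule 2.
```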
Decoherence theory has illuminated part of this: it explains why interference terms vanish in practice for macroscopic systems, why we don't see cats in superposition. But it doesn't resolve the deeper puzzle: why we get definite outcomes at all, rather than simply becoming entangled with the system we're measuring. The interpretations disagree precisely on this point, and decoherence is compatible with all of them.
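The mechanism can be exhibited in miniature. A sketch, assuming numpy: one qubit in superposition interacts with a dozen "environment" qubits, each of which partially records its state; tracing out the environment leaves a reduced density matrix whose interference terms have all but vanished, even though the global state is still a pure superposition.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
n_env = 12
thetas = rng.uniform(1.0, 2.5, size=n_env)

# Environment branch if the system is |0>: every environment qubit stays |0>.
e0 = reduce(np.kron, [np.array([1.0, 0.0])] * n_env)
# Environment branch if the system is |1>: each qubit is rotated by theta_k,
# partially "recording" which branch it sits in.
e1 = reduce(np.kron, [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in thetas])

# Joint state after the interaction: (|0>|e0> + |1>|e1>) / sqrt(2) -- still pure.
state = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2)

# Partial trace over the environment gives the system's reduced density matrix.
S = state.reshape(2, 2 ** n_env)
rho_sys = S @ S.conj().T
print(np.abs(rho_sys))
# Diagonals stay at 0.5; the off-diagonal interference terms are suppressed by
# orders of magnitude. Locally it looks like a classical mixture -- but nothing
# here says why one definite outcome occurs.
```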
This is a structural gap in the theory, not a puzzle awaiting a clever solution. The formalism contains two incompatible dynamics and no principle for demarcating them. Each interpretation addresses this differently, but the formalism itself is silent.
Implications
So, here's where we stand.
A theory whose axiomatic foundations remain unsettled, which (on at least some accounts) violates classical logic, and which admits radically different interpretations of what it says about reality… this theory also works better than any theory in history.
What are we to make of this?
The obvious response is that the situation is temporary, that eventually we'll find the right interpretation, or derive the axioms from something deeper, or recover the classical logical structure at some level of description. But a century is a long time, and the divergences show no signs of narrowing. At some point, the persistence of the situation becomes part of the data.
Perhaps prediction can simply decouple from explanation. Perhaps assumptions that seem necessary—like classical logical structure—can turn out to be wrong in ways we couldn't have anticipated. And perhaps interpretive pluralism is stable rather than transitional, a permanent feature of theories in domains where the formalism outruns what we can picture.
If quantum mechanics is a precedent, what's it a precedent for?
There may be a class of phenomena where formalization follows a distinctive pattern. Not the textbook sequence of clear axioms, rigorous derivation, empirical confirmation, and obvious interpretation, but something stranger: phenomena resist existing tools, formalism is built to fit, predictions work, interpretation remains contested, classical assumptions turn out to be wrong.
Quantum mechanics is the paradigm case. But there might be others, domains where the phenomenon is real, formalization is possible, but understanding in the classical sense (clear ontology, intuitive explanation, derivation from self-evident principles) isn't available. Perhaps not available yet. Perhaps not available to our evolutionary brains.
What would it take for reasoning to be such a domain?
Reasoning has a feature that makes it particularly hard to formalize: reflexivity. A reasoning system can evaluate its own processes. It can notice that an inference rule is leading it astray, that an evidence-weighting scheme is biased, that a whole framework of assumptions needs revision. It can take its own operations as objects and transform them.
This creates a structural problem for any formalism. If you build a system that reasons according to rules R, then R is fixed—external to the system, not something it can modify. If you add a meta-level to evaluate R, the meta-level has its own rules R′, which are equally fixed. The regress continues:
Level 0: reasons according to fixed rules R₀
Level 1: reasons about Level 0, using fixed rules R₁
Level 2: reasons about Level 1, using fixed rules R₂
...
At no level does the system gain genuine reflexivity: the ability to represent and revise the rules governing that very level. Each meta-level relocates the fixed point.
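The regress is easy to sketch schematically in code (the names and rules below are placeholders, not a proposed architecture): each level applies rules that are hard-coded into it, and adding a critic level only relocates the fixed rules one level up.

```python
from typing import Callable, List

Rule = Callable[[str], str]

class Level0:
    """Reasons according to fixed rules R0. The rules are code, not data:
    nothing inside the class can inspect, evaluate, or rewrite them."""
    RULES: List[Rule] = [str.strip, str.lower]   # R0, frozen when the system is written

    def reason(self, claim: str) -> str:
        for rule in self.RULES:
            claim = rule(claim)
        return claim

class Level1:
    """Reasons about Level0 -- but only by applying its own fixed rules R1."""
    def critique(self, lower: Level0, claim: str) -> bool:
        # R1 is just as hard-coded as R0; this check cannot revise itself.
        return lower.reason(claim) != claim

# And so on: a Level2 would evaluate Level1 with fixed rules R2. At no level
# does a system get to represent and revise the rules governing that very level.
```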
This has a family resemblance to the measurement problem. In quantum mechanics, the formalism includes two dynamics (unitary evolution and collapse) without a principled boundary between them. In reasoning, any formalism seems to include object-level and meta-level without genuine reflexivity connecting them. Both are structural gaps, places where the formalism contains something it can't fully internalize.
Maybe this is solvable. Maybe there's a way to formalize reasoning that doesn't stack meta-levels but makes reflexivity native, where the rules are inside the space of things the system can manipulate, not outside it.
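Schematically, "native" reflexivity would mean the rules live in the same space as the data the system manipulates, so it can read, apply, and rewrite them. A hypothetical sketch, not a claim that this escapes the regress: the loop that applies the rules, and the revise method itself, are still fixed code.

```python
from typing import Callable, Dict

class ReflexiveReasoner:
    """Rules are ordinary data: entries the system can read, apply, and rewrite."""
    def __init__(self):
        self.rules: Dict[str, Callable[[str], str]] = {
            "normalize": str.lower,
            "trim": str.strip,
        }

    def reason(self, claim: str) -> str:
        for rule in self.rules.values():
            claim = rule(claim)
        return claim

    def revise(self, name: str, new_rule: Callable[[str], str]) -> None:
        # The system takes its own rule set as an object and transforms it.
        self.rules[name] = new_rule

r = ReflexiveReasoner()
r.revise("trim", lambda s: s.strip(" .!"))        # the rules sit inside the manipulable space
print(r.reason("  Socrates is mortal!  "))        # -> "socrates is mortal"
```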
Or maybe reasoning is like quantum mechanics: formalizable in a way that works (predicts, systematizes, enables) without being fully transparent to us.
What would a theory of reasoning with the epistemic profile of quantum mechanics look like?
It would work: systematizing inference, predicting failures and successes, enabling the construction of systems that reason in ways we recognize as powerful. The formalism would be rigorous and applicable. At the same time, it might violate assumptions we currently treat as obvious: perhaps that reasoning decomposes neatly into discrete steps, or that rules and the systems following them are cleanly separable, or that reflexivity can be handled by stacking meta-levels.
Its axioms might be reverse-engineered, as quantum mechanics' were: structures identified because they work, capturing reasoning's compositional features and reflexivity without being derived from self-evident principles. And it might admit multiple interpretations: different accounts of what the formalism means (what reasoning "really is," what it says about minds or machines or abstract structure) that are empirically equivalent. We might be able to use the theory without settling what it describes.
It might even have its own version of the measurement problem: the boundary between reasoning and meta-reasoning, between operating within a framework and transforming the framework, present in the formalism but not derivable from it.
This would be success, but a strange kind. A theory that works without being fully understood, just like quantum mechanics.
If the precedent holds, several things follow.
The most immediate is that intelligibility isn't guaranteed. We might get a theory of reasoning that's formally exact and humanly opaque, not because we're not clever enough, but because the phenomenon doesn't fit the kind of understanding we're wired for. Quantum mechanics shows this is possible.
This should make us suspicious of our current assumptions about reasoning—that it's rule-following, or optimization, or something that decomposes into steps. Before quantum mechanics, classical logical structure, like intuitive causality, seemed to be a requirement for coherent thought. The structure of quantum systems showed otherwise. Our intuitions about what reasoning must be like might prove equally parochial.
Quantum mechanics is less a metaphor for reasoning than a precedent: proof that a domain can be formally rigorous, empirically successful, and interpretively unsettled, all at once, and stably so. Not a way station to full understanding, but the permanent situation of our best theory.
Reasoning might be in a similar situation. The phenomenon is real. Existing formalisms don't quite capture it. A new formalism might work (might systematize, predict, enable) without delivering the kind of explanation we instinctively want.
That would be uncomfortable. It would mean using a theory of reasoning without fully understanding reasoning, operating within a formalism that works without knowing why it works or what it describes. But then, quantum mechanics is uncomfortable too, and has been for a century. The discomfort hasn't kept it from being extraordinarily productive.
If we're forced to choose between intelligibility and adequacy—between a theory we understand and a theory that captures the phenomenon—quantum mechanics shows which choice leads to progress.
Whether reasoning might force the same choice remains open.