Does formalizing reasoning require new mathematics?

Before we can answer that question, we need to understand what it would mean. And that opens onto something deeper than reasoning, deeper than mathematics: something about the strange relationship between structure, representation, and reality itself.

We use mathematics to understand reality. And it works - unreasonably well, as Wigner once put it. But where does the mathematics come from? Not historically (we know the names and dates). I mean: what happens when the mathematics we have can't capture the domain we're trying to understand? When the structures we're investigating don't fit into the formal machinery available?

The usual story treats mathematics as a fixed toolkit. You reach in, grab the appropriate tool, apply it. Algebra for this, calculus for that, statistics for the other thing… as if the tools were always already there, waiting.

But the history of mathematics, and of mathematically-driven science, is punctuated by moments when the tools had to be invented because the existing ones couldn't represent the phenomenon at all. Those moments are where I want to dwell.

Consider motion, and the birth of calculus.

Before calculus, the concept of instantaneous velocity was, strictly speaking, incoherent. Velocity is distance divided by time. At an instant, no time elapses and no distance is covered. You get 0/0, which is undefined. You could calculate average velocity over an interval, but as the interval shrinks toward zero, you approach nonsense.

Zeno's paradoxes exploited exactly this gap. The flying arrow, at any given instant, occupies a space exactly equal to itself. It isn't moving at that instant, because motion requires change over time, and an instant has no duration. So when does it move? The mathematics of antiquity couldn't answer. More precisely: it couldn't even formulate the question in a way that admitted an answer.

What Newton and Leibniz introduced, and what Cauchy and Weierstrass later made rigorous, was the concept of a limit: what a ratio approaches as both numerator and denominator shrink toward zero. The modern definition is precise (for any ε > 0, there exists a δ > 0 such that...). The limit concept made instantaneous rate of change well-defined, and from there came derivatives, integrals, differential equations.
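To make that concrete: the instantaneous velocity that stumped Zeno is the limit of average velocities over ever-shorter intervals,

v(t) = lim (h → 0) [x(t + h) − x(t)] / h

No division by zero ever occurs. The ratio is well-defined for every h ≠ 0; the limit only asks what value those ratios approach.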

The issue with mechanics before calculus wasn't clumsiness or lack of rigor. Newton's laws couldn't have been stated at all, because the formalism that would make them well-formed sentences didn't exist. F = ma says that force equals mass times the second derivative of position with respect to time. That's a differential equation. Absent the formalism, it isn't even a proposition. The formalism (at least in this case) is prior to the thought, not subsequent to it.
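Written out, the law is a second-order differential equation in the position x(t):

F = m · d²x/dt²

Until second derivatives are defined, the right-hand side is not a meaningful expression, and the law is not a statable proposition.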

Or consider space, and what happened when Euclid's geometry stopped being the only game in town.

For two thousand years, Euclidean geometry wasn't a theory of space; it was the theory. Seemingly the only coherent way to think about extension and form.

Euclid's fifth postulate (the parallel postulate) states, roughly, that through a point not on a given line, exactly one parallel line can be drawn. Unlike his other axioms, this one never quite felt self-evident. Mathematicians spent centuries trying to prove it from the other four, convinced it must follow.

The attempts failed. Worse: when mathematicians like Saccheri tried proof by contradiction (assume the parallel postulate is false, derive absurdity), they got strange results but no contradictions. Spaces where parallel lines diverge. Spaces where they converge. Internally consistent, just... different.

In the 19th century, Lobachevsky, Bolyai, and Gauss made the breakthrough explicit: these alternative geometries are not contradictory. Instead, they're coherent formal systems, as rigorous as Euclid's. Riemann generalized further, developing the geometry of curved spaces of arbitrary dimension.

What this revealed—and it deserves more wonder than it usually receives—is that geometry is not a priori knowledge of physical space. It's a formal system that may or may not describe reality. Which geometry applies is an empirical question.

And reality answered. Einstein's general relativity describes gravity as curvature of spacetime, not as a force. The mathematics required is Riemannian geometry, the very framework built by exploring what happens when Euclid's postulate fails. Space itself turned out to be non-Euclidean.

We couldn't discover this until we could conceive it. We couldn't conceive it until the mathematics existed.

The deepest case is quantum mechanics. It eventually required a new kind of mathematical object, not just new equations.

Classical physics describes the state of a system as a point in "phase space": a list of positions and momenta for every particle. Evolution over time traces a trajectory through this space. Observables (energy, momentum, and so on) are functions that assign numbers to points. If there's probability, it's because we don't know the exact state; the underlying reality is determinate.

Quantum systems resist all of this. A quantum state is a vector in a Hilbert space (a complete vector space with an inner product, often infinite-dimensional). Observables are operators that act on these vectors. And crucially, operators can fail to commute: the order in which you apply them matters. Position and momentum, for instance, satisfy ΔxΔp ≥ ℏ/2. The uncertainty principle falls directly out of their non-commutation.
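Spelled out: the canonical commutation relation for position and momentum, together with the general (Robertson) uncertainty relation, gives

x̂p̂ − p̂x̂ = iℏ
Δx · Δp ≥ ½ |⟨x̂p̂ − p̂x̂⟩| = ℏ/2

In classical phase space there is nothing for the first line to mean: observables are ordinary functions, multiplication of functions commutes, and every such commutator is identically zero.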

Measurement projects the state onto an eigenspace of the observable, with probabilities given by the Born rule. Superposition, entanglement, interference: none of this can be represented in classical phase space, because classical structures lack the compositional properties these phenomena require. The Hilbert space formalism appears to be the only known structure adequate to what quantum systems do.
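In symbols: for a state |ψ⟩ and an observable with eigenstate |aᵢ⟩, the Born rule assigns the probability

P(aᵢ) = |⟨aᵢ|ψ⟩|²

and the post-measurement state is the normalized projection of |ψ⟩ onto the corresponding eigenspace. Superposition is just vector addition, α|a₁⟩ + β|a₂⟩; a point in classical phase space has no analogue of that operation.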

Even the concept of number has been repeatedly reconstructed when existing versions couldn't represent what was needed.

Negative numbers: the equation x + 5 = 3 has no solution in positive integers. Merchants dealing with debt and physicists dealing with direction needed numbers that could represent "less than nothing" relative to a reference point. The extension to negative integers required restructuring arithmetic to be symmetric around zero, a reconception of what number means, not an addition to an existing collection.

Complex numbers: the equation x² = -1 has no solution in real numbers. For centuries, √-1 was dismissed as meaningless. But Cardano, Euler, and Gauss showed that defining a number i such that i² = -1 yields a consistent, enormously powerful system. Complex numbers turned out to be essential for describing oscillation, rotation, wave phenomena, and quantum mechanics, where the state space is a complex Hilbert space.
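The connection to rotation and oscillation is Euler's formula,

e^(iθ) = cos θ + i sin θ

Multiplying by e^(iθ) rotates a point in the complex plane through the angle θ, which is why a single complex exponential can carry both the amplitude and the phase of a wave.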

Real numbers: the rationals are dense (between any two, there's another), and yet they have "holes." √2 is not a ratio of integers, so it doesn't exist in ℚ. Dedekind cuts and Cauchy sequences filled these holes, constructing the complete ordered field ℝ necessary for rigorous analysis, for calculus to actually work.
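In one standard formulation, a Dedekind cut makes the hole at √2 explicit: the real number √2 is identified with the set of rationals lying below it,

√2 := { q ∈ ℚ : q < 0 or q² < 2 }

a perfectly definite object built entirely from rationals, sitting exactly where ℚ has no element.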

Each extension involved building new structure because the existing structure couldn't do what was needed. The numbers weren't sitting there waiting to be found, nor were they arbitrary inventions. They were constructed under constraint, to solve problems the prior system couldn't even pose.

Just notation?

There's a tempting view that mathematics is just notation, a convenient language for expressing ideas that exist independently. The formalism doesn't matter; what matters is the underlying reality.

History refutes this.

You cannot think what you cannot represent. Before you have the formalism, you don't have ineffable insights waiting for expression. You have, at best, vague gestures. The representational structure determines the boundary of what's thinkable.

Calculus gave Newton the concept of instantaneous velocity, a concept that requires the mathematical structure in order to be thinkable at all. Something similar holds for non-Euclidean geometry: what Riemann and his predecessors constructed were the conditions under which a new kind of space could be conceived. (It turned out, somewhat remarkably, that we inhabit one.) And Hilbert spaces appear to be the only known structure adequate to quantum phenomena, because superposition, entanglement, and non-commuting observables have no home in classical mathematical frameworks.

The choice of mathematical representation shapes what truths are available to us.

Mathematics is the tool we use to make things precise. It's how we move from vague intuition to rigorous structure. It's the medium of exact description.

But new mathematics has to be built. Someone has to construct it.

With what?

Not with the mathematics that doesn't exist yet (that's what we're building). Not with pure logic (we're not deriving from axioms; we're choosing axioms, forging definitions, deciding what structures to study). Not with empirical observation alone (we're building formal machinery, not just collecting data).

There's something strange here. We extend the tools of precision from within the medium of precision, but before the extension, the medium doesn't include what we're about to add. We're reaching beyond our current representational capacity to construct new representational capacity.

Riemann sat with the problem for years, followed failed paths, and constructed something—manifolds, curvature tensors—that had no prior formal existence. His intuition guided the construction, but the intuition wasn't itself mathematical; the math didn't exist yet. It was something pre-formal that nonetheless tracked structure precisely enough to build what was needed. And then something cohered that couldn't have been mechanically derived from what came before.

What is this capacity? We don't have a good theory of it. It's not "creativity" in the generic sense; not all creativity produces rigorous formal systems. It's not "intelligence" in the IQ sense; the skills don't obviously correlate. It seems to be a sensitivity to structure that operates before the structure has been formalized, yet reliably produces valid formalizations.

This is a description of what needs explaining, not an explanation. The capacity to extend mathematics is not itself explained by mathematics. The tools of precision cannot fully account for their own extension. This seems to be a structural feature of the situation rather than a mystical claim.

This puts pressure on an old question: is mathematics discovered or invented?

The discovery view says mathematical structures exist independently of us. The integers, the continuum, Hilbert spaces: they're out there, and mathematicians are explorers mapping pre-existing terrain. This explains why mathematics is objective (we don't get to vote on whether a proof works), why different cultures converge on the same structures, and why math is so effective in physics (it describes real structure).

The invention view says mathematics is a human construction. We create formal systems, define rules, explore their consequences. This explains the creative element in mathematical work, the existence of multiple incompatible systems (how can they all be "discovered"?), and the way new mathematics gets built for specific purposes.

But consider what actually happens when new mathematics is constructed.

Riemann didn't discover Riemannian geometry in the way you discover a new continent, stumbling upon something that was fully there all along. But he also didn't invent it the way you invent a game, making arbitrary choices constrained only by internal consistency.

He was constrained. The geometry had to cohere internally. It had to capture something about the structure of possible spaces. It had to compose correctly with other mathematics. Not anything would work.

And yet the geometry didn't exist until he built it. There was no fact about Riemannian manifolds before there was a concept of Riemannian manifolds. The structure became determinate through the process of articulation.

Maybe the dichotomy is false. Maybe mathematics is neither discovered nor invented, but something else: the articulation of structure that becomes determinate only through being articulated. The structure is real (we're not making it up; we're constrained by it). But it's not sitting fully formed in some Platonic realm. It comes into focus through the work of formalization, like a figure emerging from fog as you approach, except your approach is also what constitutes the figure.

This is not unique to mathematics; it seems to be a structural feature of any domain where articulation is constitutive. A poem isn't discovered (it didn't exist before being written) but also isn't invented arbitrarily (not any words will do; the poet is constrained by meaning, form, sound). The poem becomes real through being written, yet what's written isn't arbitrary. It has to work.

Mathematical construction might be like that, but for structure as such.

One level deeper: why is mathematics so effective at describing reality?

It's not enough to say "because reality has mathematical structure." The question is why reality has structure that mathematics can capture. Why laws? Why regularities? Why patterns that hold across time and space?

One response: structure is something we impose. The regularities we find are regularities we bring. Kant went this direction, treating space, time, and causality as forms of intuition that organize experience, not features of things-in-themselves.

But this seems too weak. Airplanes fly. Semiconductors compute. Predictions work far beyond what's needed for "organizing experience." If mathematical structure were merely imposed, it would be miraculous that it aligns with reality well enough to enable technology.

Another response: mathematics is the study of structure in general (patterns, relations, forms), and reality is made of those. So of course math applies.

This is closer, but it still doesn't explain why reality is structured. Why should there be patterns at all? Why not chaos, or nothing, or something that resists systematization entirely?

Some physicists treat it as brute fact: the universe has laws; we don't know why; further questioning is metaphysics. But that's not an answer so much as a decision to stop asking.

Others appeal to anthropics: only structured universes can contain observers asking questions, so of course we find ourselves in one. This might explain why we observe structure, but not why structure exists to be observed.

We don't know. The effectiveness of mathematics—the fit between formal structure and physical reality—is not understood. Not "mysterious" in a conversation-stopping way, but open: a question without a settled answer, taken seriously by serious people.

And notice the reflexive situation: we're using structured reasoning to ask why structure exists. Mathematics to ask about mathematics. There's no external vantage point. We're inside the phenomenon we're trying to explain. Is there a view from nowhere?

Now we can return to the opening question with a better sense of what's at stake.

When a domain resists existing mathematical tools, there are three options. You can force the domain into available formalism and accept the distortion. You can stay informal, describing the domain in natural language and giving up on precision. Or you can build new tools: construct representational machinery adequate to the domain's actual structure.

The third option is what happened with motion, space, quantum phenomena, and number itself. Someone recognized that existing tools couldn't represent the phenomenon, that forcing it produced unacceptable distortion, and that precision was necessary. So they built.

The construction involves defining new objects (vectors in Hilbert space, points on manifolds), specifying how they compose (operator products, tensor products, covariant derivatives), establishing what follows from what (spectral theorems, geodesic equations). The result is a formal structure that makes the domain's features explicit, derivable, manipulable.

When such a construction succeeds, something irreversible happens. What was murky becomes precise. Structure becomes visible that was previously hidden. And the expansion is genuine: one can now think what one couldn't before, derive what was previously underivable.

Often the new tools illuminate domains beyond their origin. Group theory, developed for algebraic equations, now describes symmetry across physics. Topology, developed for abstract purposes, now underpins quantum field theory and data analysis. When the structure is right, it tends to apply more broadly than expected, as if, to risk a metaphysically loaded phrase, one had found a genuine joint in reality.

There's beauty in this. The moment when a new formalism clicks into place, when vague intuitions become precise theorems and structure becomes visible that was previously hidden: it's a distinctive intellectual experience.

But there are stakes beyond aesthetics.

If formalism determines what's thinkable, then we have to reckon with a disquieting possibility: that inadequate formalism doesn't merely slow us down but stops us entirely. And stops us in a way we can't perceive, because the thoughts we can't have don't announce themselves as missing. We experience them only as absence, as the place where inquiry trails off without knowing why.

The scientists before Newton weren't stupid. They couldn't think correctly about motion because the mathematics didn't exist. The physicists before Riemann weren't incurious. They couldn't conceive of curved space because the structure hadn't been articulated.

What are we currently unable to think because we lack the formalism?

We can't know directly; that's what it means to lack it. It's a problem of epistemic closure. But we can notice symptoms: domains where existing tools feel forced, where formalizations distort rather than illuminate, where the conceptual structure we intuit doesn't match the formal structure we're imposing.

Those symptoms are invitations. They mark places where new mathematics might be needed, and where building it might crack open what currently seems intractable.

Reasoning might be such a place.

We have logics. We have probability theory. We have computational models. And yet something doesn't quite fit: the way reasoning can evaluate its own standards, transform its own frameworks, operate on its own structure.

Existing formalisms tend to treat that capacity as external, as "meta," as something other than reasoning itself. Add a meta-level to reason about the base level, and you've relocated the problem: the meta-level operates within its own fixed rules, untouchable from within. The regress doesn't terminate in genuine reflexivity. It terminates in another frozen layer.

Maybe that's fine. Or maybe we're forcing a phenomenon into tools that structurally cannot capture it.

Whether the history recounted here—the history of formalisms that had to be invented because existing ones couldn't represent the phenomenon—might have another chapter still to be written is not something I can answer. But the symptoms are there, and we should take them seriously.
