Summary

Logic provides the formal framework within which AI systems can represent knowledge, draw sound inferences, and justify conclusions. The two fields are connected at multiple levels: logic supplies formal languages for unambiguous knowledge representation, inference rules for valid reasoning, proof systems for justifying conclusions, and formal criteria for distinguishing genuine reasoning from statistical approximation. The deeper question — whether current AI systems reason in a philosophically robust sense — depends on what reasoning is taken to require, which is precisely what proof-theoretic and philosophical approaches to logic aim to clarify.

Two traditions

The history of AI contains two broad traditions, often presented as rivals.

Symbolic AI, sometimes called “Good Old-Fashioned AI” or GOFAI, represents knowledge using formal languages: predicate logic, rules, and ontologies. Reasoning proceeds by explicit inference: a theorem prover or rule engine applies inference rules to a knowledge base to derive new facts. Early expert systems, automated theorem provers, and planning systems all belong to this tradition.
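
As a minimal sketch of this style of inference, consider a forward-chaining engine in Python (the rule format and the facts are invented here for illustration): it repeatedly fires rules whose premises are all known, adding each conclusion to the knowledge base until nothing new can be derived.

```python
# Minimal forward-chaining rule engine (illustrative sketch, not a
# production system). A rule is (premises, conclusion); facts are strings.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if every premise is already known
            # and the conclusion is genuinely new.
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"human(socrates)"}, "mortal(socrates)"),
         ({"mortal(socrates)"}, "has_end(socrates)")]
print(sorted(forward_chain({"human(socrates)"}, rules)))
# ['has_end(socrates)', 'human(socrates)', 'mortal(socrates)']
```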

Statistical AI, which now dominates in the form of neural networks and large language models, learns patterns from data. Rather than representing explicit rules, it adjusts millions or billions of numerical parameters so that its outputs approximate the distribution of training examples. Reasoning, in this framework, is not explicitly performed; it emerges, to the extent it does, from pattern matching at scale.
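
For contrast, here is a minimal sketch of the statistical approach (the data and the model are invented for illustration): a two-parameter logistic model is nudged by gradient descent until its outputs track the frequencies in the training examples, and at no point is any rule stated explicitly.

```python
import math, random

# Toy logistic model: p(label=1 | x) = sigmoid(w*x + b).
# No rules are represented; w and b are simply adjusted so that
# predictions approximate the distribution of the training data.
random.seed(0)
data = [(x, 1 if x > 0 else 0) for x in [random.uniform(-1, 1) for _ in range(200)]]
w, b, lr = 0.0, 0.0, 0.1

for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        # Gradient of the log-loss with respect to w and b.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print(round(1 / (1 + math.exp(-(w * 0.5 + b))), 3))  # high probability for x > 0
```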

Contemporary neuro-symbolic AI attempts to combine both: using neural networks for perception, pattern recognition, and language, while incorporating logical modules for structured reasoning and formal verification. The challenge is to make the two components interact in a principled way.

What logic provides

Formal knowledge representation

Natural language is ambiguous; logic is not. Formal languages — first-order logic, description logics, modal logics — allow knowledge to be represented precisely and unambiguously, enabling consistent inference across large knowledge bases and ontologies.
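
For example, the English sentence “every patient who is allergic to a drug must not be prescribed that drug” admits more than one reading; a first-order rendering (the predicate names here are illustrative, not drawn from any particular ontology) fixes one reading exactly:

\[
\forall x\,\forall y\,\bigl(\mathit{Patient}(x) \land \mathit{Drug}(y) \land \mathit{AllergicTo}(x, y) \rightarrow \lnot\,\mathit{Prescribed}(x, y)\bigr)
\]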

Sound inference

A logic comes equipped with inference rules that are sound: if the premises are true, the conclusion is guaranteed to be true. This provides AI systems with a mechanism for deriving new knowledge that is provably reliable, in contrast to probabilistic outputs that may be calibrated but not certified.
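
Modus ponens is the paradigm sound rule: from φ → ψ together with φ, infer ψ. Soundness itself is the guarantee that, for a proof system ⊢ and its semantics ⊨, syntactic derivability never outruns semantic truth:

\[
\frac{\varphi \rightarrow \psi \qquad \varphi}{\psi}
\qquad\text{and}\qquad
\text{if } \Gamma \vdash \varphi \text{ then } \Gamma \models \varphi.
\]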

Proof and justification

A proof is a structured object that makes an inference visible step by step. AI systems equipped with proof-generating capabilities can not only produce an answer but demonstrate why that answer follows from given premises — a prerequisite for genuine explainability.
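
A textbook example shows the shape of such an object. Each line records a formula together with the rule that licenses it and the earlier lines it depends on, so the whole derivation can be audited step by step:

\[
\begin{array}{lll}
1. & \forall x\,(\mathit{Human}(x) \rightarrow \mathit{Mortal}(x)) & \text{premise} \\
2. & \mathit{Human}(\mathit{socrates}) & \text{premise} \\
3. & \mathit{Human}(\mathit{socrates}) \rightarrow \mathit{Mortal}(\mathit{socrates}) & \forall\text{-elimination, 1} \\
4. & \mathit{Mortal}(\mathit{socrates}) & \text{modus ponens, 3, 2}
\end{array}
\]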

Formal verification

Logic underpins the formal verification of software and hardware systems, establishing mathematically that a program satisfies its specification. As AI systems are deployed in safety-critical contexts, formal verification methods become increasingly important for assurance.
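
In Hoare logic, for instance, a specification is a triple {P} C {Q}: if precondition P holds and command C terminates, then postcondition Q holds. The deliberately small example below certifies that an increment preserves non-negativity:

\[
\{\, x \geq 0 \,\} \;\; x := x + 1 \;\; \{\, x \geq 1 \,\}
\]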

Normative structure

Deontic logics and formal models of norms allow policies, rules, and governance frameworks to be represented precisely. As AI systems increasingly operate under regulatory constraints, logical frameworks for representing and enforcing norms become essential.
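
Deontic logics extend the base language with operators such as O (it is obligatory that) and P (it is permitted that). A data-protection norm might then be rendered, with illustrative predicate names, as:

\[
\forall x\,\bigl(\mathit{PersonalData}(x) \land \lnot\,\mathit{Consented}(x) \rightarrow O\,\mathit{Anonymise}(x)\bigr)
\]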

The reasoning gap

One of the central questions in the philosophy and foundations of AI is whether the outputs of large language models and neural networks constitute genuine reasoning or merely very sophisticated pattern completion.

The distinction matters. A system that reasons can, in principle, justify its conclusions: it can identify the premises from which the conclusion follows and the inferential steps connecting them. A system that predicts produces outputs that are statistically likely given its training data, but those outputs may not be traceable to any principled inferential structure.

This gap has practical consequences. A reasoning system that makes an error can, in principle, have that error located and corrected. A prediction system that makes an error may simply be retrained — but with no guarantee that the correction generalises to structurally similar cases.

Proof-theoretic semantics offers a framework for making this distinction precise. A central claim of Gheorghiu’s work is that the gap between statistical prediction and genuine reasoning is not merely philosophical but can be made formally precise: a system that cannot trace its outputs back to principled inferential steps cannot, in any robust sense, be said to reason. Understanding this distinction is essential for building AI systems that are not just powerful, but interpretable and trustworthy.

Logic and explainability

Explainable AI (XAI) is a broad research area concerned with making AI systems’ outputs interpretable to humans. Much XAI work focuses on approximate explanations — saliency maps, attention weights, feature importance scores — that highlight which inputs most influenced an output. These are useful, but they are not explanations in the logically rigorous sense.

A logically rigorous explanation is an argument: a sequence of inferential steps leading from premises to conclusion, where each step is licensed by a valid inference rule. This kind of explanation is verifiable: one can check each step and reject the explanation if a step fails. Approximate explanations cannot be verified in this way.
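
That verifiability can be made concrete with a small sketch (the step format and the single rule here are invented for illustration): the checker walks an explanation line by line and rejects it the moment any step fails to be licensed.

```python
# Minimal checker for explanations presented as proofs (illustrative
# sketch). A step cites a rule and the earlier lines it depends on.
def check(premises, steps):
    derived = list(premises)
    for formula, rule, cited in steps:
        if rule == "modus_ponens":
            antecedent, conditional = (derived[i] for i in cited)
            # The cited conditional must be exactly "antecedent -> formula".
            if conditional != f"{antecedent} -> {formula}":
                return False  # step not licensed: reject the explanation
        else:
            return False  # unknown rule: reject
        derived.append(formula)
    return True

premises = ["rain", "rain -> wet"]
steps = [("wet", "modus_ponens", (0, 1))]
print(check(premises, steps))  # True; any broken step would yield False
```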

For AI systems to give genuine explanations in high-stakes settings such as medicine, law, finance, and policy, logic provides the only formal framework with these properties. The integration of proof-generating capabilities into AI systems is an active area of research.

Open Questions

What remains contested

Do large language models reason?

Large language models can produce outputs that pass many tests we associate with reasoning: solving mathematical problems, drawing analogies, following multi-step arguments. Whether they do so by reasoning or by sophisticated pattern completion — and whether this distinction is ultimately coherent — is actively debated.

Can neural networks be made formally verifiable?

Neural networks are, at present, extremely difficult to verify formally. Establishing that a network satisfies a given property for all possible inputs is computationally intractable for large models. Connecting the expressiveness of neural networks with the precision of formal verification is an open research challenge.
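
One tractable, if conservative, family of techniques is interval bound propagation, sketched below (the network weights are invented for illustration): sound lower and upper bounds are pushed through each layer, so if the certified output interval satisfies the property, every input in the region does, although the bounds may be too loose to decide either way.

```python
import numpy as np

def interval_affine(l, u, W, b):
    # Sound bounds for W @ x + b when x lies anywhere in [l, u]:
    # positive weights pull from the same end, negative from the opposite.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def interval_relu(l, u):
    return np.maximum(l, 0), np.maximum(u, 0)

# Tiny two-layer network with made-up weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.1])

centre = np.array([0.4, 0.6])
l, u = centre - 0.05, centre + 0.05          # input region to certify
l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)
print(l, u)  # if l > 0, the output is certified positive for ALL inputs in the region
```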

Is there a unified framework for neuro-symbolic AI?

Neuro-symbolic AI combines neural and symbolic components, but combining them in a principled way — where the logical structure genuinely constrains the neural outputs rather than post-processing them — remains technically and conceptually difficult.
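
A minimal sketch of the difference (the labels, scores, and constraint are all invented for illustration): rather than letting a logical filter veto the network's answer after decoding, the constraint removes inconsistent options inside the output distribution itself, and probability mass is renormalised over what remains.

```python
import numpy as np

labels = ["approve", "reject", "escalate"]
logits = np.array([2.0, 1.5, 0.5])          # raw neural scores (made up)

# Made-up logical constraint: if the case is flagged, "approve" is not
# a permissible output, so it is excluded before normalisation rather
# than filtered out after the network has committed to an answer.
flagged = True
permitted = np.array([not (flagged and lab == "approve") for lab in labels])

masked = np.where(permitted, logits, -np.inf)   # exclude inconsistent labels
probs = np.exp(masked - masked.max())
probs /= probs.sum()
print(dict(zip(labels, probs.round(3))))        # mass renormalised over {reject, escalate}
```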

What does genuine machine reasoning require?

This is perhaps the deepest question: what would it mean, precisely, for an artificial system to reason? Proof-theoretic semantics proposes that reasoning requires the capacity to generate, trace, and justify inferential steps — but specifying this formally for systems of the complexity of contemporary AI is a fundamental open problem.

From This Site

For the semantic framework underlying Gheorghiu’s approach to reasoning in AI, see What is proof-theoretic semantics?

For an overview of related research programmes, see the Research page.

Gheorghiu has written for general audiences on these themes. See the Writing page for accessible essays and talks, including work that received the Graham Hoare Prize.