Consider a mathematical system \(M\) consisting of two parts:

- A list of axioms \(A=\left\{ a_1, a_2, \ldots, a_n\right\}\). Each axiom is a proposition. A proposition is just a string of symbols from some specified alphabet which is well-formed according to some specified grammar.
- Rules of logic \(L\). \(L\) is just a machine that, when given one or more propositions, spits out one or more new propositions. We say that \(L\) proves the propositions it spits out from the propositions it is fed, but this process involves no higher thought. It is just a mechanical process that could be programmed into any computer.

From these premises we can construct a new set of propositions \(T\), called the theorems. A proposition \(p\) is a theorem if \(L\), when fed some set of axioms or previously established theorems, spits out \(p\). So a theorem is just a proposition that can be proven from the axioms through some number of intermediate steps. The axioms are trivially a subset of the theorems, but the whole set of theorems, unlike the axioms, is usually infinite.
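This mechanical picture of \(L\) and \(T\) can be sketched in a toy system. Here \(L\) implements only modus ponens over propositions encoded as plain strings; the encoding (an implication written as `antecedent>consequent`) and the sample axioms are invented purely for illustration:

```python
def apply_L(known):
    """A toy machine L: given a set of propositions (strings), spit out
    every proposition derivable in one step by modus ponens.
    An implication is encoded as the string 'antecedent>consequent'."""
    new = set()
    for prop in known:
        if ">" in prop:
            antecedent, consequent = prop.split(">", 1)
            if antecedent in known:
                new.add(consequent)
    return new

def theorems(axioms):
    """Close the axioms under L: a proposition is a theorem if it can be
    reached from the axioms in finitely many applications of L."""
    t = set(axioms)
    while True:
        new = apply_L(t) - t
        if not new:
            return t
        t |= new

# Toy axioms: 'a', plus the chain of implications a > b and b > c.
A = {"a", "a>b", "b>c"}
print(sorted(theorems(A)))  # ['a', 'a>b', 'b', 'b>c', 'c']
```

Note that the axioms come out as theorems for free, and that no step involves any "higher thought": the whole process is a loop any computer could run.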

Mathematical systems ordinarily have symbols, grammar, and rules of logic such that if any proposition \(p\) and its negation \(\neg p\) are both theorems, then so is any other proposition \(q\). In other words, in a mathematical system where some proposition and its negation can both be proven, all propositions (and their negations) can be proven. As such systems are rather uninteresting, mathematicians prefer to study consistent mathematical systems—those where there is no proposition \(p\) such that both it and its negation are theorems.
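This "explosion" from a contradiction can be verified formally. A minimal sketch in Lean (the statement is the standard *ex falso quodlibet*; the proof term uses only the core library):

```lean
-- From any proposition p and its negation ¬p, an arbitrary
-- proposition q follows (the principle of explosion).
example (p q : Prop) (hp : p) (hnp : ¬p) : q :=
  absurd hp hnp
```

Since `q` here is completely arbitrary, a single contradiction is enough to make every proposition a theorem, which is exactly why inconsistent systems are uninteresting.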

That raises the question whether in every consistent mathematical system—one in which there is no \(p\) so that both \(p\) and its negation \(\neg p\) are theorems—it *necessarily* is the case that for any proposition \(p\), one of \(p\) and \(\neg p\) is a theorem. In other words, are all propositions decidable (*i.e.*, provable or disprovable)?

Mathematicians long assumed that all propositions in their systems are decidable. But in 1931, Gödel famously proved that in every consistent mathematical system above a certain minimum level of complexity, there must exist at least one proposition \(u\) that is not decidable (*i.e.*, neither \(u\) nor \(\neg u\) is among the theorems).

This caused enormous surprise. Some took offense at the proof. Others jumped to the conclusion that this means that mathematics and logic are fallible and hence inferior to some sort of mysticism. Yet others proclaimed that this establishes the superiority of human brains over any formal, mechanical system of reasoning.

But none of these reactions seems justified, for the existence of undecidable propositions should have been obvious to any mathematician who gave the matter any thought.

For example, Euclid and his successors for two millennia expended enormous effort trying to show that the parallels axiom \(a_5\) was not a necessary axiom at all, but could be removed from the axioms and instead proven as a theorem. But in the nineteenth century, Lobachevsky and Bolyai showed that both the full set of Euclid's axioms \(\left\{ a_1, a_2, a_3, a_4, a_5 \right\}\) *and* \(\left\{ a_1, a_2, a_3, a_4, \neg a_5 \right\}\) are consistent mathematical systems. In other words, \(a_5\) is undecidable from \(\left\{ a_1, a_2, a_3, a_4 \right\}\).

But far more trivial and obvious examples of undecidability exist. Take the very familiar group axioms and this proposition \(p_3\):

$$\exists x : \exists y : \exists z : x \neq y \wedge y \neq z \wedge z \neq x$$

\(p_3\) is the proposition that there exist three elements in the group such that no two of them are equal. In other words, does the group have at least three distinct elements?

For most groups, the answer is yes and \(p_3\) is provable. But for some perfectly good groups, like \(S_2\) (which has only two elements), the answer is no and \(\neg p_3\) is provable. Since all groups by definition satisfy the group axioms, \(p_3\) must be undecidable from those axioms alone.
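Whether \(p_3\) holds in a particular finite group can be checked by brute force. A small sketch, representing each group simply by its set of elements (the group operation is irrelevant to \(p_3\)); the element names for \(S_2\) and the choice of \(\mathbb{Z}_4\) as the contrasting group are mine, for illustration only:

```python
from itertools import combinations

def p3_holds(elements):
    """p3: there exist x, y, z with x != y, y != z, z != x.
    We check the literal formula by searching over triples;
    for a set of distinct elements this amounts to |G| >= 3."""
    return any(x != y and y != z and z != x
               for x, y, z in combinations(elements, 3))

S2 = ["e", "s"]      # symmetric group on two letters: only two elements
Z4 = [0, 1, 2, 3]    # cyclic group of order four

print(p3_holds(S2))  # False: ¬p3 holds in S2
print(p3_holds(Z4))  # True:  p3 holds in Z4
```

Both structures satisfy all the group axioms, yet \(p_3\) comes out true in one and false in the other—a concrete witness that \(p_3\) cannot follow from the group axioms alone.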

So, undecidable propositions are very common and nothing to worry about. If one encounters an undecidable proposition \(u\) in a mathematical system \(M\), one can just split \(M\) into the mathematical systems \(M^\prime\) (the same as \(M\) with \(u\) added to the axioms) and \(M^{\prime\prime}\) (the same as \(M\) with \(\neg u\) added to the axioms). Both \(M^\prime\) and \(M^{\prime\prime}\) will be consistent, because a proof of \(\neg u\) (or, respectively, \(u\)) would imply a proof of \(u\) (or, respectively, \(\neg u\)) in \(M\), contrary to our premise that \(u\) is undecidable in \(M\).

Gödel's first incompleteness theorem just means that every sufficiently rich, consistent mathematical system \(M\) can always be split into two new consistent mathematical systems, \(M^\prime\) and \(M^{\prime\prime}\). Will \(M^\prime\) and \(M^{\prime\prime}\) always be sufficiently rich to apply Gödel's ingenious method and split them again? The intuition is that at least one of the two always will be, but that in some cases only one will be.

For example, take the Riemann Hypothesis \(R\), famously unproven from the axioms of complex analysis. Two logical possibilities are that \(R\) will eventually be proven or disproven (say, by counter-example). A third is that \(R\) is undecidable from the axioms of complex analysis. In that case there would have to exist two separate and distinct sets of mathematical objects, both of which fully obey all theorems of complex analysis, but for one of which the Riemann Hypothesis is true and for the other false. That would be interesting.