As humans, we have always dreamt of building machines capable of reasoning, and perhaps even thinking, like us, or even more deeply than we do.
In the plots of works by the ancient Greek playwrights Aeschylus and Euripides, a device was sometimes used to stage the intervention of a god. It was later given the Latin name deus ex machina, indicating an entity able to resolve conflicts that humans were unable to settle on their own. Throughout history, polymaths and philosophers have repeatedly attempted to provide a theoretical formulation for the mechanics of human thought.
The first polymath to imagine something close to an “artificial form of intelligence” was Ramon Lull, a Spanish theologian who lived in the 13th century on the island of Majorca. Legend has it that Lull had a vision around 1272 during a solitary spiritual retreat on top of a mountain. He envisioned a set of basic principles that, properly combined, could lead to the core principles of every science and every form of knowledge. In his later works, Lull used symbols to represent each basic principle so that each basic truth could be expressed in a formal way. The system was, at its heart, a permutation of nested sets of symbols, each with a specific meaning; every possible combination of symbols was intended to express a truth statement.
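As a rough modern analogy (not Lull’s actual notation), the combinatorial core of the idea can be sketched in a few lines of Python; the symbols and the example meanings below are invented purely for illustration.

```python
from itertools import combinations

# Purely illustrative sketch: a handful of invented "basic principles",
# represented by single letters in the spirit of Lull's symbolic alphabet.
principles = ["B", "C", "D", "E"]  # e.g. goodness, greatness, eternity, power

# The device mechanically enumerated combinations of principles;
# each combination was read as a candidate truth statement.
for pair in combinations(principles, 2):
    print(" + ".join(pair))
```

The point is not the output itself but the mechanical, exhaustive enumeration: once symbols stand in for concepts, producing all their combinations requires no further ingenuity.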
Way ahead of his time, Lull went no further than using the system as an aid to convert people to the Catholic faith through pure reasoning rather than crusades, war, and violence.
In the middle of the 17th century, Gottfried Leibniz built on the foundation of Lull’s work and conjectured that human thought could be systematized into an array of algebraic rules, so that any argumentation could be reduced to a basic mechanical calculation. He even designed and built a machine, the Ratiocinator, for the purpose. (See the featured image, courtesy of the Gottfried Wilhelm Leibniz Bibliothek in Hannover, Germany.)
Leibniz’s work inspired the further development of mathematical logic carried out by George Boole and, in the early 20th century, by David Hilbert and Bertrand Russell. In particular, in 1900 Hilbert posed a challenge to his mathematical peers, asking whether a single set of axioms could exist from which all mathematical statements could be derived. No relevant answer came for a couple of decades, and to many it looked like a purely abstract, meta-mathematical problem. Then, all of a sudden, a definitive answer emerged from a side field of research.
In a paper published in 1931, Kurt Gödel proved two propositions of mathematical logic collectively immortalized as the incompleteness theorems. The whole scientific community interpreted them as a negative answer to Hilbert’s original question: “Can all mathematical statements be expressed using a set of well-defined rules?” If in any such axiomatic system an undecidable statement always exists, then the dream of a unified set of rules capable of explaining the whole of mathematics simply vanishes.
However, in what way are the incompleteness theorems such a landmark in logic?
Well, on the one hand, Gödel proved that there are things that mathematical logic simply cannot prove. On the other hand, though, he also proved that, within the limits of a consistent formal system, any reasoning can always be expressed as a set of formal rules and thus, in some way, mechanized.
That was the beginning of the process that has brought me here today to write these notes.