Stephen Wolfram is doing ground-breaking work on this topic, using computers to model how simple rules can produce the behavior of complex systems. He calls the central idea “Computational Irreducibility.”
From A New Kind of Science:
“In traditional science it has usually been assumed that if one can succeed in finding definite underlying rules for a system then this means that ultimately there will always be a fairly easy way to predict how the system will behave.

“Several decades ago chaos theory pointed out that to have enough information to make complete predictions one must in general know not only the rules for a system but also its complete initial conditions.
“But now computational irreducibility leads to a much more fundamental problem with prediction. For it implies that even if in principle one has all the information one needs to work out how some particular system will behave, it can still take an irreducible amount of computational work actually to do this.” [p. 739]
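Wolfram’s canonical illustration of such a system is the elementary cellular automaton known as Rule 30: its update rule fits in one line, yet no known shortcut predicts a given cell faster than running the automaton itself. Here is a minimal sketch in Python (the function names and the fixed-zero boundary are my own choices for illustration, not Wolfram’s code):

```python
# Rule 30: a one-line update rule whose long-run behavior appears to
# admit no shortcut -- to know row n you effectively compute all n rows.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (fixed zero boundary)."""
    padded = [0] + cells + [0]
    new = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        # Rule 30: new cell = left XOR (center OR right)
        new.append(left ^ (center | right))
    return new

def rule30_run(steps, width=31):
    """Start from a single live center cell and iterate the rule."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    # Print the first 16 rows as an ASCII triangle.
    for row in rule30_run(15):
        print("".join("#" if c else "." for c in row))
```

Despite the rule’s simplicity, the triangle it prints looks disordered; the only way we know to learn what row one million contains is to compute the million rows before it.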
Why is this significant? Because it bears on our entire approach to science: is what matters the symbols and equations, or the scientific thinking that those tools are designed to enrich?
First, a little history. In the second half of the 19th century, James Clerk Maxwell’s use of advanced formal mathematics to show that visible light was just one part of a broader electromagnetic spectrum led to the discovery of radio waves and X-rays. By the turn of the 20th century, the mathematical community was feeling invincible. Bertrand Russell and Alfred North Whitehead were working on the crown jewel--Principia Mathematica, which would derive all mathematical truth from one axiomatized system. Their attempt ultimately failed, as Kurt Gödel formally proved with his Incompleteness Theorem. Gödel showed that even a system built on the basic axioms of arithmetic [ the natural numbers with addition ( governed by rules like a+0=a ) and multiplication ( a*0=0 ) ] has holes--true statements that cannot possibly be proven within the system. The axioms imply the vast majority of true statements about the system, but not all. Hence Incompleteness.
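The two arithmetic rules cited above really are that elementary; as a small illustration, both can be stated and checked mechanically in the Lean theorem prover, where for the natural numbers they hold by definition:

```lean
-- The rules a + 0 = a and a * 0 = 0, proved in Lean 4.
-- Both follow directly from how Nat.add and Nat.mul are defined,
-- so the proof in each case is just `rfl` (reflexivity).
example (a : Nat) : a + 0 = a := rfl
example (a : Nat) : a * 0 = 0 := rfl
```

Gödel’s point was that a system built from rules this simple still contains true statements that no chain of such mechanical steps can ever reach.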
For any formal system of logic, there are two machineries at work: 1. the rules that generate all possible statements; 2. the rules that generate all possible proofs. The traditional assumption was that every true statement would have a corresponding proof. Gödel proved this cannot be the case even for ordinary arithmetic on the natural numbers.
Computational Irreducibility takes this a step further, claiming that sometimes the whole enterprise of formal proof is beside the point. Wolfram argues that often, worrying about proving the truth of statements from the axioms is pointless to begin with: proofs are only meaningful when they provide shortcuts. Otherwise the behavior of the system is computationally irreducible--the proof is just as complicated as recreating the whole system, and therefore useless as a predictor. In this way, Computational Irreducibility completes what Gödel started: thought freed from its over-reliance on symbol.
The task of our age is to free thought from symbol and image--not just in mathematics, but in religion, economics, politics, and beyond--preferably without violence and nuclear bombs, though it could come to that if we fail to find another way.