Mohammad-Ali Rahebi


Sensitivity to initial conditions is the death of reductionism. It says that any small uncertainty that may exist in the initial conditions will grow exponentially with time, and eventually (very soon, in most cases) it will become so large that we will lose all useful knowledge of the state of the system. Even if we know the state of the system very precisely now, we cannot predict the future trajectory forever. We can do it for a little while, but the error grows exponentially and we have to give up at some point. (Baranger, 2000)
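Baranger's point about exponentially growing uncertainty can be seen in a few lines. A toy sketch (my illustration, not Baranger's): the logistic map x → r·x·(1−x) at r = 4, a standard chaotic regime, run from two starting states that differ by one part in a million.

```python
# Exponential divergence in the logistic map x_{n+1} = r*x*(1-x) at r = 4:
# two trajectories started 1e-6 apart lose all mutual resemblance within
# a few dozen iterations.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # initial states differing by one part in a million
steps = 0
while abs(x - y) < 0.5 and steps < 100:
    x, y = logistic(x), logistic(y)
    steps += 1

print(steps)  # the gap reaches order 1 long before step 100
```

The error roughly doubles each iteration, so however precisely the initial state is measured, useful prediction is lost after a number of steps that grows only logarithmically with that precision.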

Enter feedback and cybernetic cross-checking and self-correction.

Earlier in the essay quoted above, Baranger teaches us that classical, non-chaotic science relies on calculus, Leibniz's invention, the tool that maps curves by obtaining a function y = f(x): a compression-function that seeks a simplified formula for any curve drawn on a Cartesian diagram. It is the bet that even for the most complicated curves (taken as aggregates of connected points in, e.g., two-dimensional space), there is a way to designate them that is simpler than (or, in a worst-case scenario, the same as) the data itself. Leibniz even has a compression-efficient god.
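The compression reading of y = f(x) can be made concrete with a toy sketch (the quadratic and the sampling are my illustration, not Baranger's): a curve stored as a thousand points collapses into three coefficients, and the formula reproduces every point.

```python
# A curve given as raw data: 1000 points sampled from y = 3x^2 - 2x + 1.
# The bet of the compression-function: a short formula (here, three
# coefficients) designates the curve more compactly than the point list.

xs = [i / 1000 for i in range(1000)]
ys = [3 * x**2 - 2 * x + 1 for x in xs]   # 1000 stored numbers

def f(x, coeffs=(3, -2, 1)):              # 3 stored numbers
    a, b, c = coeffs
    return a * x**2 + b * x + c

# the compressed form reproduces every data point exactly
assert all(abs(f(x) - y) < 1e-12 for x, y in zip(xs, ys))
```

The worst case of the bet is a curve with no shorter designation than its own point list, which is where the compressive organon reaches its limit.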

Now it is in no way an accident that Leibniz has blind monads and thus needs the perfectly predictable world of pre-established harmony (and argues for a compression god-algorithm of world-optimization). Now that we have seeing monad-machines, we see that curves and calculus are too reductive and representationally poor, because representational in essence. With this comes greater computability of very complex situations: the model, if one is even necessary, changes from instant to instant, observing the system in real time and self-correcting as necessary: backpropagation.
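The instant-to-instant self-correction can be sketched minimally (a one-weight model and a made-up data stream, my illustration): rather than holding a fixed representation, the machine observes its error on each new datum and nudges itself against the error gradient, which is the core move that backpropagation applies layer by layer.

```python
# Self-correction from instant to instant: a one-weight model watches a
# stream of (input, output) observations and nudges its weight against
# the error gradient at each step (online gradient descent).

true_w = 2.5    # the environment's hidden rule: y = 2.5 * x
w = 0.0         # the machine starts ignorant
lr = 0.5        # learning rate

stream = [((i % 10 + 1) * 0.1, true_w * (i % 10 + 1) * 0.1) for i in range(500)]
for x, y in stream:
    pred = w * x
    err = pred - y          # observe the discrepancy
    w -= lr * err * x       # correct against the gradient of the squared error

print(round(w, 3))  # → 2.5
```

No global formula for the environment is ever written down in advance; the weight simply tracks the feedback.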

The god-function of Leibniz is what is supposed to create the most complex and plentiful world-plenum from the simplest possible plan of action, thus creating the best of all possible worlds through its efficiency, which is defined in terms of compressive power. The Cybernetic Organon is the death of Leibnizian efficiency (which operates in the blind world of pre-established harmony, itself, unbeknownst to Leibniz, a theory of pre-feedback machines unable to see).

Are cybernetic machines classifiable as “living machines”? Does self-correction amount to self-organization? An unsupervised deep-learning algorithm operating on Big Data is an example. By processing and training on data, the machine (which may be a piece of software running on some hardware, dedicated or not) passes from a state of indeterminate “noise,” where all node weights are assigned either a uniform value (e.g. 0 or 1) or uniformly distributed random values, and organizes itself (i.e. its weights and biases, or even the function itself) into a singular entity as unique as anything and at least as complex as the data it feeds on for its organization; that is, it is at least as complex as its environment, which is given to it qua data, structured or not.
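A minimal sketch of this passage from noise to organization (a k-means-style online clustering in one dimension, my illustration, not any particular deep-learning architecture): the centroids begin as uniformly random values and, purely by feeding on data, settle onto the structure of that data.

```python
import random

# Self-organization from indeterminate "noise": two centroids start as
# uniformly random values and, by online updates on unlabeled data alone,
# organize themselves onto the data's two clusters.

random.seed(0)
data = [random.gauss(0.0, 0.5) for _ in range(500)] + \
       [random.gauss(10.0, 0.5) for _ in range(500)]
random.shuffle(data)

centroids = [random.uniform(0, 10), random.uniform(0, 10)]  # indeterminate start
for x in data:
    i = min(range(2), key=lambda k: abs(x - centroids[k]))  # nearest centroid
    centroids[i] += 0.05 * (x - centroids[i])               # pull it toward the datum

print(sorted(round(c, 1) for c in centroids))  # near [0.0, 10.0]
```

No labels and no prior model of the environment are supplied; the final configuration of the machine mirrors the structure of the data it consumed.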

While the cybernetic machine is at least as complex as its environment/data, in Leibniz’s compressive organon the god-function of calculus is at most as complex as its environment.

The Cybernetic Organon operates in a world of windowed monads and constant feedback, of real-time updates from the environment, and as such can afford a higher order of efficiency, one that takes complexity into account and welcomes it into itself as a computational feature, deployed through an intelligence without representation, an intelligence without transcendence.



Baranger, M. (2000). Chaos, complexity, and entropy. New England Complex Systems Institute.