Now that we have covered the relationship with postmodernity in the previous post (I promise not to bring it up again, but I have recently been accused quite often of being 'almost postmodern', which, being an engineer myself, I find rather absurd), it is time to turn our attention to an Enlightenment favourite, namely rationality.
Rationality comes in many flavours, but in science it has recently acquired a strong connotation of 'optimising utility'. Much economic, game, social (rational-choice) and evolutionary theory tends to focus on a teleology of forms that maximise utility, even to the point that everyone is forced into a straitjacket of utility-maximisation, and every action must be the result of this dogma.
Research in psychology already shows that human beings do not (always) work like this: we do crazy things and often get away with them. Rationality also only seems to work if all the players conform to this utility-maximising behaviour, as the following great post on Science 2.0 demonstrates. Such constraints show that the premises are fragile in real-world settings. The underlying idealisation is: given premises X (e.g. utility-maximising behaviour), we can demonstrate Y. This is perfectly fine as science goes, but scientists then often take a REALLY bad turn by implicitly assuming that the real world conforms to premises X. So they say that every human individual is a utility-maximising creature, and thus the real world will (have to) conform to Y. Of course, humans will conform to X only every now and then, and they will also conform to A, B, C, ... some of which may even interfere with X (see: the hourglass pattern).
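The 'premises X, therefore Y' trap can be sketched with a toy ultimatum-game responder. The fairness thresholds below are hypothetical illustrations, not empirical values:

```python
def accepts(offer, fairness_threshold):
    """A responder accepts an offer above their fairness threshold.
    A textbook utility-maximiser has threshold 0: any positive offer beats nothing."""
    return offer > fairness_threshold

# Premise X: every responder is a utility-maximiser (threshold 0).
# Conclusion Y: the minimal offer of 1 is always accepted.
homo_economicus = [0, 0, 0, 0, 0]
print(all(accepts(1, t) for t in homo_economicus))  # True

# Real people also conform to A, B, C...: here, a fairness norm (threshold 30)
# that interferes with X, so Y fails even though the model was internally sound.
real_people = [0, 0, 30, 30, 30]
print(all(accepts(1, t) for t in real_people))      # False
```

The model is valid as a conditional (X implies Y); the bad turn is promoting X from assumption to fact about the population.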
For example, many economists will presuppose that I am trying to get as much value for my money all of the time. Then I give money to charity, and suddenly I am doing this to feel good, to suppress my guilt, or for some other 'rational' reason, phrased in terms of some utility-maximising behaviour. In other words, whatever you do is 'reasoned' in terms of rational choices, which immediately makes the whole term a hollow one; you cannot contradict it, because this type of rationality can no longer be falsified.
Utilitarian rationality is currently a dominant belief in science, but it shares a heritage that goes back to the dawn of Greek philosophy and returns in many guises. Basically, rationality is the belief that truth and (heavenly) order can be found through a certain manner of reasoning, and that mastery of this manner of reasoning will, like the meditative chanting of priests, reveal its mysteries. Logical thinking is another exemplar, and one that has proven very powerful; rationality and logic therefore often travel closely together, at least in Western science. For a beautiful account of the mythological sources of rationality, see Henri Atlan's Enlightenment to Enlightenment.
For our purposes here, we need to see how rationality conforms to our particular epistemological game of PAC, where observers operate with limited knowledge. This already compromises utilitarian rationality, as has been amply demonstrated in many experiments. If you have to make a choice from a certain set of alternatives, and you cannot make an unambiguous ranking of these alternatives, then the whole system of utility-maximising rationality is compromised, because you will just choose on a hunch. Most choices are made this way, because you have to decide with limited means, limited time and limited knowledge. This does not mean that the hunch has to be a bad or irrational one, but it is also far from rational in the utility-maximising sense of the word.
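The 'no unambiguous ranking' situation is easy to make concrete. In this sketch (the options and scores are invented for illustration), alternatives are scored on two criteria with no agreed way to trade one off against the other; none dominates the others, so utility-maximisation alone cannot single out a choice:

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every criterion
    and the two differ somewhere. Criteria: (quality, -price),
    so higher is better on both axes."""
    return all(x >= y for x, y in zip(a, b)) and a != b

# Three hypothetical alternatives scored as (quality, -price).
options = {"A": (3, -10), "B": (8, -25), "C": (5, -15)}

# Keep every option that no other option dominates.
undominated = [name for name, a in options.items()
               if not any(dominates(b, a) for b in options.values())]
print(undominated)  # ['A', 'B', 'C'] -- all three survive; the ranking is ambiguous
```

Each option is better than the others on one criterion and worse on another, so the 'rational' chooser is left with a hunch after all.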
In PAC, we have to understand that rationality, in contrast to the classical ideal, is embedded in a certain system of values. Suppose I present you with a set of data {1,2,3,5,7,9,11}. This set is totally meaningless unless I contextualise it. For instance, I say 'prime numbers', and immediately your brain starts a whole shebang to 'make sense' of the numbers according to the criterion of 'prime numbers'. If I say 'uneven numbers', your brain will contextualise the set of numbers differently: same data, different contexts. So the first thing we can safely say is that there is no epistemological system that is really 'context-free'; there is always a preselection of data, and a certain set of rules or processes that operate on this selection. These inject certain values into this set, with respect to everything that is discarded. The premises of any epistemological production system thus predetermine a base set of values, which then constrain that production system. This example also shows that (this) 'truth' is a result of contextualisation and is not independent of the epistemological system. The 'truth' of the set of data being primes, or uneven numbers, can only be assessed once the numbers have 'been put in context' (in both cases, the answer is obviously 'no'). Truth is a 'form' (or: is formed) within a certain value system.
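The number example above can be written out directly: the same data is evaluated against two different contexts (predicates), and a 'truth' only appears once a context is applied.

```python
data = {1, 2, 3, 5, 7, 9, 11}

def is_prime(n):
    """True if n is a prime number (1 is not prime by convention)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_odd(n):
    return n % 2 == 1

# Same data, two contexts; only within a context does a truth value form.
print(all(is_prime(n) for n in data))  # False: 1 and 9 are not prime
print(all(is_odd(n) for n in data))    # False: 2 is not odd
```

Without one of the predicates, the question "is the set true?" is not even well-posed; the value system (the chosen criterion) precedes the truth.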
Second, with Kurt Gödel and George Spencer-Brown, logic (and with it mathematics) has to respect at least one source of ambiguity. This ambiguity is injected into the system once it becomes self-referential, and marks a boundary beyond which logic can no longer make any statements. Logic works as long as you can make a tree-like structure of truths that are coherent within the rules of a particular epistemological production system, and even self-reinforcing cycles are possible, but feedback with a negation causes contradictions, and with them ambiguity. Gödel and Spencer-Brown have demonstrated that this will always happen in a self-referential logical production system, although it is as yet unclear why this is the case. Besides this, a practical production system will also have a form of ambiguity because of differences in categorisation, layering and so on, as was already discussed in the pattern of difference. Rationality therefore is bounded by a certain system of values, and can only manifest itself in the absence of ambiguity. Hence it is completely rational to assume an impending apocalypse by immaculately calculating the End of Time, based on Biblical dates and an understanding of the symbolic meaning of holy numbers. However, it is most likely not the rationality that science would value, with the probable exception of Robert Langdon and Indiana Jones.
If we up-scale these findings to human cognition, then everything we do occurs within one or more value systems, which allow us to make sense of our life-world. Rationality therefore does not occur in the absence of values; rather, values create the stage on which rational decisions can be made. This means that we host processes that 'give value to' our sensory data. I often see this 'in action' in our local supermarket, where the employees (mostly students) tend to have a sharper eye for the other employees (of the other sex) than for the customers. Customers tend to be a disturbance to all the social/hormonal interactions, and the first reaction when you ask someone where a product is tends to be an irritated 'don't bother me with your stupid questions, dude!' sort of look. Only training can replace this value system with one that considers the customer to be more important. Or age, I suppose.
On a side note, I often see this attitude among (us) geeks as well, when they consider the technology more interesting than the wishes of the customer who pays them to design the product.
“Shut up! You may pay me to do my hobby, because I know best what's good for you anyway!”
Seems to be quite a common attitude in the corporate world!
It is likely that this value system is guided by our emotions. This is confirmed by neuroscientific research, for instance by Antonio Damasio, who has found that people who are incapable of (certain) emotions also have an impaired ability to make rational decisions. Likewise, many people, most notably men, with a singular dominant value system (e.g. make money!) have demonstrated that they can be very harmful to society as a whole (e.g. the credit crisis). They may be rational, they may be intelligent, but they lack the wisdom to balance contradicting goals and to judge the impact of their actions in a wider context, for instance on their fellow human beings (see the pattern of contextual diminution). They often also get away with it, because they can act according to these singular models, which means that they can construct a social organisation that creates their truths as ontological fact. Put simply: they create self-fulfilling prophecies, as long as the plasticity of their environment allows it. I still find it really surprising that many feel that economic theory has failed because it did not predict the credit crisis, and yet no-one thinks of replacing economists with other scientists in their role as advisors in politics and other areas (also see: “The Age of Stupid”).
Rationality can help us to behave intelligently, but it is wisdom that helps us to deal with the uncertainties of life.