Three Seminal Books on Human Nature & the Nature of Truth
- ncameron
- Apr 9, 2020
- 10 min read
As I stated several blogs ago, whilst undertaking a law degree at the University of Sussex I had to study a variety of non-legal subjects, including sociology, epistemology and psychology. Some students regarded this as a painful distraction from their major topic; I loved it all. In the course of studying these other subjects I had to read a large number of books, three of which burned their way into my consciousness and have influenced my world view ever since.
These three books were:
- When Prophecy Fails, 1956, by Leon Festinger, Henry Riecken and Stanley Schachter
- Obedience to Authority, 1974, by Stanley Milgram
- The Logic of Scientific Discovery, 1934 (English translation 1959), by Karl Popper



They are all brilliant and I commend them all to you.
What they have in common is that they reveal key flaws in human reasoning and the human behaviour that follows from them. They made me seek a new yardstick for my own search for truth, and that yardstick is, and can only be, rational scientific reasoning.
Let's take each of them in turn.
When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World - Festinger et al.
That is the full title - snappy it isn't. It centres on the infiltration of a UFO apocalyptic cult in suburban Chicago by three academic social psychologists. They had read in a local paper that a cult known as The Seekers, led by one 'Marian Keech' (an alias), was predicting the imminent end of the world and acting accordingly: members were giving up their jobs or studies and giving away their money and possessions. Following Marian Keech's private messages, conveyed via automatic writing from the planet Clarion, the cult was expecting to be rescued by a flying saucer that would come for them just in time to save them from a great flood due before dawn on December 21, 1954.
On the basis that they were fairly confident that this cataclysm would not occur, the three authors inveigled their way into the group with the objective of seeing how the group coped psychologically when the prophecy did not come to pass.
As Wikipedia puts it, the following happened:
Before December 20. The group shuns publicity. Interviews are given only grudgingly. Access to Keech's house is only provided to those who can convince the group that they are true believers. The group evolves a belief system - provided by the automatic writing from the planet Clarion - to explain the details of the cataclysm, the reason for its occurrence, and the manner in which the group would be saved from the disaster.
December 20. The group expects a visitor from outer space to call upon them at midnight and to escort them to a waiting spacecraft. As instructed, the group goes to great lengths to remove all metallic items from their persons. As midnight approaches, zippers, bra straps, and other objects are discarded. The group waits.
12:05 am, December 21. No visitor. Someone in the group notices that another clock in the room shows 11:55. The group agrees that it is not yet midnight.
12:10 am. The second clock strikes midnight. Still no visitor. The group sits in stunned silence. The cataclysm itself is no more than seven hours away.
4:00 am. The group has been sitting in stunned silence. A few attempts at finding explanations have failed. Keech begins to cry.
4:45 am. Another message by automatic writing is sent to Keech. It states, in effect, that the God of Earth has decided to spare the planet from destruction. The cataclysm has been called off: "The little group, sitting all night long, had spread so much light that God had saved the world from destruction."
Afternoon, December 21. Newspapers are called; interviews are sought. In a reversal of its previous distaste for publicity, the group begins an urgent campaign to spread its message to as broad an audience as possible.
Essentially, apart from a few cases, the group's conviction as a whole was not broken by the disconfirmation of the prophecy; rather, it was magnified, an example of what the researchers called cognitive dissonance.
They argue that if Keech could add consonant elements by converting others to the basic premise, then the magnitude of her dissonance following disconfirmation would be reduced. Festinger and his colleagues accurately predicted that the inevitable disconfirmation would be followed by an enthusiastic effort at proselytizing to seek social support and lessen the pain of disconfirmation.
What? Madness; but absolutely true.
What I Learned: cognitive dissonance is a very strange and powerful human trait; people can believe in all sorts of rubbish - where is the yardstick of belief?
Obedience to Authority - Milgram
Another classic work. Stanley Milgram was an academic psychologist at Yale who, in 1961, duped volunteers in a fake 'learning experiment' into administering 'electric shocks' (not real) of apparently increasing severity to other innocent 'volunteers' (actually confederates of the experimenters), under the guidance of a grey-coated experimenter. The idea was to see whether ordinary people could be persuaded to perform actions contrary to their normal moral code simply because they were 'ordered' to by an authority figure. It was conceived, in relation to the Nazi war crime defence of 'superior orders', to determine whether (under the right conditions) 'normal' people could be made to obey immoral orders, or whether you needed to start with those whose moral code was already perverted.
Volunteers were recruited for a 'learning' experiment. Participants were 40 males, aged between 20 and 50, from the New Haven area, whose jobs ranged from unskilled to professional. They were paid $4.50 just for turning up. At the beginning of the experiment, each was introduced to another 'participant', who was in fact a confederate of the experimenter.
The participant and the confederate drew lots to decide who would be the 'learner' and who would be the 'teacher'. The draw was fixed so that the participant was always the teacher and the confederate always the learner. Proceedings were overseen by an 'experimenter' in a grey lab coat, played by an actor.
Two rooms in the Yale Interaction Laboratory were used. In one, the 'learner' (Mr. Wallace) was strapped to a chair with electrodes attached to his arms; in the adjoining room, the 'teacher' and experimenter sat at an electric shock generator with a row of 30 switches, marked from 15 volts ('Slight Shock') through 375 volts ('Danger: Severe Shock') up to 450 volts ('XXX').
The learner was given a list of word pairs to memorise. The teacher then tested him by naming a word and asking him to recall its pair from a list of four possible choices. The teacher was told to administer an electric shock every time the learner made a mistake, ostensibly in order to see how the 'punishment' affects learning, and to increase the shock level with each error.
The learner gave mainly wrong answers (on purpose), and for each of these the teacher gave him an 'electric shock'. If the teacher hesitated or refused to administer a shock, the experimenter gave a series of verbal prods to ensure he continued. There were four prods; if one was not obeyed, the experimenter read out the next, and so on.
Prod 1: Please continue. Prod 2: The experiment requires you to continue. Prod 3: It is absolutely essential that you continue. Prod 4: You have no other choice but to continue.
As the voltage of the fake shocks increased, the learner began making audible protests, such as banging repeatedly on the wall that separated him from the teacher, and eventually demanding to be released from the experiment. When the highest voltages were reached, the learner fell silent. Despite this, 65% of participants (i.e. 'teachers') continued to the highest level of 450 volts, and all participants continued to at least 300 volts.
This high level of obedience surprised Milgram, and he went on to vary the experiment in order to see how far obedient people would go (on the one hand) and what factors reduced obedience (on the other).
In one variation, some subjects would even physically press the protesting learner's hand down onto the electrode in order to shock him.
To test the prediction that obedience would fall if the 'authority' figure were less authoritative, other variations of the experiment removed the grey lab coat, moved the location away from academia to a run-down office, or moved the authority figure to another room. In all these cases obedience did decrease, but only somewhat.
Milgram summarised his findings as follows:
"The legal and philosophic aspects of obedience are of enormous import, but they say very little about how most people behave in concrete situations.
I set up a simple experiment at Yale University to test how much pain an ordinary citizen would inflict on another person simply because he was ordered to by an experimental scientist.
Stark authority was pitted against the subjects’ [participants’] strongest moral imperatives against hurting others, and, with the subjects’ [participants’] ears ringing with the screams of the victims, authority won more often than not.
The extreme willingness of adults to go to almost any lengths on the command of an authority constitutes the chief finding of the study and the fact most urgently demanding explanation".
As a follow-up to this, a decade later the notorious Stanford Prison Experiment, which divided volunteers into 'guards' and 'prisoners' in simulated prison conditions, had to be ended early after the 'guards' started abusing the 'prisoners' to a shocking degree.
Both of these experiments showed what a thin veneer of respectable, decent behaviour overlays us all, and how, in the right conditions, many humans can be persuaded to behave in an inhuman fashion.
What I Learned: that, under pressure, many ordinary people will bypass their own innate sense of humanity and overcome natural cognitive dissonance in order to behave appallingly to each other.
The Logic of Scientific Discovery - Popper
So, the first two books made me question the rationale of any human decision-making, or belief-forming, process. Other experiences made me question various other criteria for establishing belief - for example, you cannot take anything to be true just because large numbers of people say it is so, or because it 'feels right', or because it 'fills a need'.
You need some other yardstick on which to base your beliefs and consequent actions. With this book, I found it.
Anyone who has studied the philosophy of science will probably have been directed to begin with this book. It examines the basic question underlying the philosophy of science (or epistemology) - how do we know what we know? It is a long and complex and tightly argued work, but it can be summarised as follows.
First of all we have to distinguish between deductive reasoning and inductive reasoning.
Deductive reasoning moves from general propositions to specific conclusions:
- All ravens are black birds
- This bird is a raven, therefore it is black
Inductive reasoning moves from specific examples to more general conclusions:
- This raven is a black bird
- Therefore, all ravens are black birds
The key distinction is that deductive reasoning reaches conclusions that are necessarily true (given true premises), whereas inductive reasoning does not. This is easily demonstrated if we substitute swans for ravens. In the civilised world it was known that the swan is a white bird, and therefore that all swans are white birds; until we got to the Swan River at Perth and saw black swans for the first time.
The problem, first spelled out by Hume, is that humans mostly use inductive reasoning - which can be very helpful in day-to-day practice, but which cannot be relied upon as a fail-safe yardstick of truth. Furthermore, you cannot prove an inductive conclusion by scientific experiment: no matter how many affirmative sightings of black ravens you accumulate, you cannot prove that all ravens are black, since a single sighting of a non-black raven will prove you wrong.
So, we must inherently distrust inductive conclusions, no matter how seductive. After all, the sun has risen every day for some 4.543 billion years, which has led humans to conclude that it will rise tomorrow, and tomorrow, and tomorrow - a conclusion that will one day be wrong.
Popper argues that although we cannot definitively prove a working hypothesis - there is no way to prove that the sun will rise - it is reasonable to formulate the theory that every day the sun will rise, and to behave accordingly: whilst recognising that if it does not rise on some particular day, the theory will be falsified and will have to be replaced by a different one. Until that day, there is no need to reject the assumption that the theory is true.
However, if we want to progress scientifically, with increasing certainty of knowledge, we must identify theories which are useful. A useful theory is not one that some number of affirmations can prove right - no number can - but one that a single experimental instance could definitively prove wrong.
Therefore, genuine scientific theories are those which are conceivably falsifiable - i.e. are capable of being proven wrong.
Another very useful principle for the advancement of knowledge - or right/wrong (or truth) - is 'Occam's razor', from the 14th-century logician and Franciscan friar William of Ockham. This was originally stated as 'entities should not be multiplied without necessity'. In more modern parlance we might rather say: 'in explaining any phenomenon, do not consider a more complex explanation than the simplest available one, unless you have to'.
Combining Popper and Occam we can come up with the most efficient and effective scientific way of determining any truth:
- observe any natural phenomenon
- come up with the simplest falsifiable theory that explains it
- try and prove that theory wrong
Until you prove it wrong, it is a valuable workable hypothesis (as Newtonian mechanics was for 200 years - and still is in most circumstances). Once you do prove it wrong, then come up with the next simplest (but still falsifiable) explanation, ad infinitum.
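This conjecture-and-refutation cycle can be sketched in code. Below is a minimal, hypothetical illustration (in Python; the function names are my own invention, not anything from Popper) of why a falsifiable theory behaves asymmetrically: no number of confirmations proves it, yet a single counter-example refutes it.

```python
# A sketch of Popper's conjecture-and-refutation cycle, using the raven example.
# The theory "all ravens are black" is falsifiable: one counter-example refutes it.

def theory_all_ravens_black(raven_colour: str) -> bool:
    """The current working hypothesis: predicts that every raven is black."""
    return raven_colour == "black"

def survives_testing(observations: list[str]) -> bool:
    """Try to prove the theory WRONG: return False at the first counter-example.

    Confirming observations never prove the theory true; they merely
    leave it standing as a workable hypothesis.
    """
    for colour in observations:
        if not theory_all_ravens_black(colour):
            return False  # falsified - time for the next simplest falsifiable theory
    return True  # not yet falsified (note: NOT proven)

# A million black ravens leave the theory standing as a workable hypothesis...
assert survives_testing(["black"] * 1_000_000) is True
# ...but a single white raven falsifies it outright.
assert survives_testing(["black"] * 1_000_000 + ["white"]) is False
```

The asymmetry is the whole point: the second assertion shows how one observation does what a million confirmations never could.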
The most flagrant misstep in this scientific approach, one that humans (including scientists) repeatedly make, is to devise experiments to try to prove a theory right, as opposed to wrong. One of the most common, and worrying, areas where this attitude prevails is law enforcement. Almost all instances of wrongful arrest or conviction are due to investigative authorities deciding, through prejudice, who 'dunnit', and then going to extremes to obtain evidence that their theory is correct, disregarding all evidence to the contrary.
Any theory that is not falsifiable does not provide a workable basis for belief; for example:
- fairies exist
- Jesus is my saviour
- 5G masts cause coronavirus
- the world is run by shape-shifting lizards
- homeopathic remedies work
- I am receiving private messages conveyed via automatic writing from the planet Clarion.
Conclusion
The employment of rational and logical thought, along these lines, is the only yardstick that can give you a workable degree of certitude for what you can reasonably believe to be TRUE.
