Saturday 19 May 2018

Ali Smith - Free Love and Other Stories

The good stories in this are really wonderful and sketch vivid, evocative scenes with just the right amount of detail. Free Love, Text for the day, Jenny Robertson your friend isn’t coming, the second passage in To the cinema and College were all good; especially Text for the day. The less good stories either had too little information in them, making them feel disconnected and obscure (Scary, The unthinkable happens to people every day), or were overly sentimental, too neat and tidy or twee (A story of folding and unfolding, The touching of wood). Conversely, A quick one had a really excellent ending and Cold Iron had an ending I enjoyed too, even if it is a little sentimental, because I liked the idea of gathering things together and trying to think out a story that makes sense. I really loved the best stories but overall it was a mixture.

Free Love is good and I liked the way it was ambiguous as to whether the narrator was a boy or a girl for the first page or so. The recollections have a warm, youthful glow and the details are terse but full of richly descriptive context. Very enjoyable and evocative but not so much in a sexual way: even though the story is explicitly about sex and sexuality, its sex scenes are sparse and minimalist.

A story of folding and unfolding is quite bad and doesn’t really capture either of the situations it describes vividly. The scene of the recently bereaved father is better than the electricians bantering in the girls’ dormitory. A little too romantic and twee for me. Also a bit grand and theatrical in its conception.

Text for the day. Loved the idea of the whole story and its rapid, haphazard pace. The voices are funny and likeable, especially in the telephone sections. The idea of the book-fuelled hiatus from work and life was fantastic. Perhaps leaving a paper trail behind her as she went was a little pretentious - what would you think of someone actually ripping up books in front of you? - but the whole concept was so cool I really enjoyed the overall effect. The prose, characters and narrative all seemed to chime together on this and it was my favourite so far.

A Quick One. I wasn’t so sure about the descriptions of lovemaking at the beginning of this, but the atmosphere of meeting up with an old lover at the cafe is really well done. I also really liked the end, which isn’t prim and doesn’t try to tie up too many loose ends or stuff a huge amount of the protagonist's narrative into two paragraphs.

Jenny Robertson your friend isn’t coming was beautifully mundane and uneventful. The protagonist was whinging and seemed to be unusually preoccupied with illness. It was an enjoyable, realistic description of a trip for dinner and a film told from the perspective of someone who doesn’t seem to be happy unless they’re moaning!

The cinematic theme in To the cinema didn’t really resonate with me but I liked the thinly sketched scenes of domestic deterioration. The protagonist seems unhappy in her relationship and seems to be under pressure to have a baby from Geoff, her partner. She seems to have a day job in marketing but works at the cinema on Sundays, which Geoff attributes to her ‘vulgar streak’. The stifling effect of her relationship, work and life, with the exception of the bliss she feels at the cinema where she dreams she would stay all day without getting paid, are very richly drawn with remarkably few words. The story itself is a bit longer than the ones that precede it. It’s split into four parts and seems to have been written by: 1) an outside observer who knows the cinema intimately 2) the protagonist who works at the cinema on Sundays taking tickets 3) the protagonist’s stalker 4) perhaps the impartial, cinema-loving third person from section 1 again? The second section is by far my favourite because it paints her life so expertly yet so succinctly. The third one is creepy and slightly obsessive; its voice hovers somewhere between harmless immaturity and menacing fixation.

The touching of wood was a bit of a strange mixture of holiday scenes, leprosy and a love story. I didn’t like the end and thought it was a bit clichéd or too neat and tidy. Strangely, one of my favourite parts was the nightmare the narrator has before they wake up and enact the clunky ‘touching wood’ ending. The scenes where the two lovers lark about on the island are well drawn. It annoyed me that the couple say they are on a week long holiday but when they talk about looking at the photos they’re taking they say they will do it in two Saturdays’ time; is this delay for developing the film or are they going on somewhere else and, if they are going somewhere else, why doesn’t this count as holiday too? The fact that the narrator’s lover looks much better and loses the dark circles around their eyes, ‘coloured in by the sun’, which is a good phrase, is also annoying as it didn’t really seem to connect with any other parts of the story. Sometimes, Smith seems to get it spot on and gives you just enough detail, just well enough connected to the narrative to make it conjure up all sorts of ideas and situations. In this instance, the detail seems too sparse and unconnected; it just sort of sits there, disconnected and meaningless except for its hint at an illness or unhappiness that’s never explored.

Cold Iron was a slightly disjointed account of a family bereavement, which I didn’t enjoy much. The last lines, “Myself I’m hanging on, leaning on the rail that overlooks the sea on either side of me. I’m picking up bits and pieces for my house. I’m thinking it out, I’m working out the story.” were crisp and cathartic while not being overly optimistic or sentimental.

I liked College and thought it had a good pace and well expressed scenes and characters. It’s quite exciting, without having an especially far-fetched narrative, and I felt it had a cinematic quality when I read it. The scenes are vivid and the descriptions of the sunny, Southern England university town (Cambridge, I think, because of the bridge) were very realistic. I assumed the family of the deceased student are Scottish and felt the story did a good job of capturing the foreignness of Cambridge.

Scary tells the story of a new couple taking a train trip to visit an old friend of Tom’s and her new boyfriend in London. The protagonist, Tom’s girlfriend, seems fairly passive during the trip, their arrival and supper. However, when Tom is in the bathroom, she takes her things and leaves to go home on the last train back to where they came from. It’s not clear whether she objects to the other couple’s obsession with River Phoenix or whether it’s Tom’s moaning about how rude his friend’s boyfriend was to him or neither of these things. Whatever it is, it’s strange for the protagonist to take such drastic action without any indication as to why she is doing it.

The unthinkable happens to people every day is about a man trying to find someone at an old address where he grew up in Scotland. He calls from London, where he lives and works, and finds out they no longer live there. He crosses the street and smashes a few TVs in a TV shop before getting in his car and driving all the way to Scotland and the house where the mystery person he is trying to find lived, even though he already knows they are no longer there. His car runs out of petrol so he leaves it on a road and walks into a loch beside the road where he meets a young girl. The young girl’s mother gives him something to eat and some petrol and he drives home again. Is this a depiction of someone who has gone temporarily mad? It’s a strange and disjointed story and I never felt connected to the man, his actions or his possible motivations.

The World with Love describes a chance meeting between old school friends who reminisce about how their French teacher went mad one day. This leads Sam, the protagonist, on to other reflections about the French class, the teacher and a girl he had fancied long ago. The scenes had a sad and oppressive quality, as if there were many things Sam wanted to do back then that he didn’t, and he now looks back somewhat sorrowfully.

Friday 18 May 2018

Michael Lewis - The Undoing Project

Reading this after finishing Thinking Fast and Slow really threw the two writing styles into sharp relief. Lewis’ prose is flowing, readable and journalistic. There are plenty of evocative character details and lots of thought-provoking, illustrative analogies. Kahneman’s prose is tortured by comparison. His style is dry and academic and the more anecdotal passages read like they have been written by someone who has been told to be ‘witty’, ‘conversational’ or ‘informal’ but really has no idea how this might be done. There is no doubt that Kahneman’s book contains much more information; each chapter of Thinking Fast and Slow is essentially a detailed summary of an academic paper. But the density is overwhelming and, in the absence of a clear structure, soon creates confusion. In fairness to Kahneman, he does try to group the chapters into five sections: 1) Two Systems 2) Heuristics and Biases 3) Overconfidence 4) Choices 5) Two Selves. Despite this, there is so much information in each section, and it is presented in such a detailed way, that I rarely felt like I was mastering the key points. Lewis, on the other hand, drastically reduces the amount of material he’s trying to present in The Undoing Project but still manages to convey a lot of the same information that Kahneman does because he is so much better at selecting and presenting the material. There is a real sense in which Lewis has simply taken the five or ten most striking ideas in Thinking Fast and Slow and mixed them with some of his excellent character sketches and career histories of the key actors behind the ideas. This is exactly the same formula as The Big Short; he simply swaps complex decision theory for complex financial derivatives and psychologists for hedge fund managers. The result feels journalistic insofar as it is a collection of shorter essays, profiles and sketches stitched together. With Lewis it’s easy to look back over what you’ve read and identify the key themes: representativeness, availability, how people feel commission is worse than omission when dealing with regret and how people are risk seeking when dealing with losses and risk averse when dealing with gains. With Kahneman, the same information is presented in far richer detail but the overall sense of clarity is lost amongst the deluge of papers reviewed, the subtlety of the distinctions he is drawing and his questionable writing style. Kahneman’s book contains all the nuance and, ultimately, is the more informative book, but Lewis presents the more comprehensible narrative; something that Kahneman would admit is very important given the way our minds work! Lewis is superficial and no one could accuse Kahneman of being that, but he is also readable and no one could accuse Kahneman of that either!

The life stories of both Kahneman and Tversky are interesting. With Kahneman a fretful, nervous, lonely immigrant geek and Tversky a swashbuckling, popular, native war hero, they make an odd couple. Perhaps Lewis’ book is too kind to Kahneman, who I presume spoke with Lewis far more than Tversky, who died before Lewis even started this project. At one point I had the impression that Kahneman had all the ideas and Tversky simply helped him to have more confidence in them and made their papers a bit more readable! The academic community seem to have taken the opposite view, ascribing the majority of the credit to Tversky, which I think shows how this book ‘takes Kahneman’s side’ vs. the contemporary history. It was Tversky’s unwillingness to correct this presumption that he was the senior partner in the relationship that seems to have been the root of Kahneman’s displeasure with him. However, according to Lewis, it seems theirs was a true collaboration, with neither individually a match for what they were as a duo. As such, it is especially sad when the two fall out and drift apart. Given that both men seem to have been fantastically difficult individuals it can hardly be seen as surprising. As ever, Michael Lewis draws intriguing biographical outlines of the two characters. Kahneman lived in a chicken coop for a while in occupied France as his family hid from the Nazis and their French sympathisers, while Tversky once disobeyed orders to rescue an Israeli army colleague who had fainted beside a landmine he was trying to clear, saving his life. As usual, Lewis’ sketches are vivid but not comprehensive or particularly reflective. It reads like a newspaper or magazine profile or essay. As with his treatment of their academic work, he does a great job of selecting interesting vignettes and presenting them in a form that’s easy to consume. But both lack true depth, which is what makes the book such an easy read and probably makes Lewis so popular. Like almost everyone else, I’m more disposed to ‘A very short introduction to…’ type titles than completing the hard slog of long reading lists to really get to know a subject. To some extent, the more superficial and simplified it is, the easier it is to read and therefore the more enjoyable, provided there is a decent narrative to keep you interested. Lewis creates most of this narrative out of the relationship between the two, with the progression of their ideas playing an impressive supporting role. He’s good at judging the correct level of detail and spinning these quite scant details into a compelling narrative. But in the same way that I couldn’t hope to get the same level of understanding of the ideas contained in Thinking Fast and Slow from this book, it’s probably foolish to think I really know much about either psychologist’s life or character from the brief caricatures here. Looked at in another way, Lewis is a master of giving you enough information to support the main points he wants to make and doesn’t make the mistake of reducing the clarity of the whole book in an attempt to be exhaustive. Lewis is an author who appeals to your System 1 brain, as Kahneman would say! He tells you a cogent, vivid story that is easy to comprehend and satisfies you with its coherence! Lewis’ method has the drawback of being a bit journalistic.
The information he selects is, necessarily, brief and sometimes I had the feeling that the protagonists’ lives, careers and characters were being simplified to make them more readily comprehensible or appealing; which, of course, they were!

It was a bit repetitive reading the same ideas that I had just read in Thinking Fast and Slow but, frankly, Lewis expounds them so much more briefly, in prose that is so much easier to read, that I skipped through most of the repetition without inconvenience. There was only one section where I thought the writing was confused. In chapter 5, around p157, Lewis tells us that people are quick to jump to conclusions from a small amount of data, making assumptions about a general population on the basis of evidence from a sample size that is too small. As an example, he uses IQ and a sample group of students. People are told the average IQ for all students is 100 and are then told that one student in the sampled group has an IQ of 150. Apparently they are, when asked, still likely to believe that the average IQ of the sample is 100 and that the person with the IQ of 150 is an outlier. This is as may be, but it definitely does not illustrate the point that Lewis is trying to make. In fact, it makes quite the opposite point. If people were too quick to draw general conclusions from small samples then they should believe that the average IQ of the sample is higher than 100 based on the person with an IQ of 150. The example Lewis gives shows that people are RELUCTANT to adjust their general assumptions based on a small sample, not vice versa.
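To make the arithmetic concrete, here is a minimal sanity check; the sample size of 50 is my assumption, as Lewis doesn’t pin one down at this point:

```python
# A quick check of the IQ example. The population mean is 100 and one
# sampled student is known to score 150. Assuming a sample of 50 (my
# assumption, not Lewis's), the best estimate of the sample mean shifts up:
n = 50
expected_mean = (150 + (n - 1) * 100) / n
print(expected_mean)  # 101.0
# Answering "100" means refusing to update at all from the sample, i.e.
# being RELUCTANT to generalise from it, which is the point made above.
```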

Kahneman’s character and work are subject to a good deal of hagiography in this book and you could argue that this is deserved given how influential his work has been. There were several aspects that stuck with me. Kahneman was a prolific idea generator and seemed to have far less trouble than most deciding to change his mind. He said, “I’ve always felt ideas were a dime a dozen. If you had one that didn’t work out, you should not fight too hard to save it, just go find another.” This strikes me as sound advice for investment too! The best example of this trait is probably when Kahneman abandons his theory of regret causing people to be risk averse when they actually showed themselves to be risk seeking when dealing with losses. Although he was losing a lot of work, by not obsessing over this he went on to discover something even more significant. Indeed, he showed admirable psychological fortitude in overcoming the sunk cost fallacy! Kahneman was also non-combative, unlike Tversky, and was, apparently, fond of giving the following example to students: Imagine a plank held in place by a spring on either side of it. How do you move it? Well, you can increase the force on one side of the plank or you can reduce the force on the other side. “In one case the overall tension is reduced and in the other it is increased. It’s a key idea, making it easy to change”. Kahneman appears to have questioned himself, looked for faults in his behaviour and changed his mind a lot more than the average person. He also thought that singing things made them easier to remember, which is probably true!

At one stage I was tempted to say you could read this book and not bother with Thinking Fast and Slow but the two are of a totally different scope. TF&S takes you through all the nuance and nitty gritty. TUP is a brief précis of TF&S’s main ideas, with a history of the relationship between Kahneman and Tversky appended. This appropriately involves a history of each individual. However, Lewis includes a few other biographies that have a far more minor role in events. Probably most notable is the chapter on the general manager of the Houston Rockets, ostensibly included as this was the way Lewis himself found out about Kahneman and his work! Overall, this was a good, interesting book that summarised some of the ideas of TF&S and placed them within a brief context of the authors’ lives and careers. Its weak points were its slightly haphazard side plots and its formulaic, journalistic structure. The prose was flowing without ever being exceptionally beautiful but it was extremely readable. I think I would have preferred it if Lewis had been forced to write a more detailed summary of TF&S and then two focussed biographies of Kahneman and Tversky. Lewis has a gift for explaining complex ideas comprehensibly and concisely. He also has a gift for profiles or personality sketches. However, in this book he wanders too far from the central topic in ways which are fairly enjoyable but also pretty irrelevant and I felt the main subject deserved tighter attention. Lewis focuses a lot on the magical relationship between the two psychologists even though it will be forever impossible to recreate a dynamic that only two people, one dead, ever experienced. I felt like a tighter focus on the pair’s biographies might have yielded more interesting insights. The book could have had a better, narrower structure in my opinion but it’s a minor complaint as, on the whole, I thought it was a good book.

Thursday 17 May 2018

Daniel Kahneman - Thinking Fast and Slow

This book was incredibly dense and took a long time to read and note. I don’t think it is especially well written but it is definitely full of interesting information. Each chapter introduces different behavioural or psychological theories and experiments and discusses their effects on decision making and everyday life. Despite the five part structure, the sheer volume of information makes this a tough book to grapple with. It read like a detailed summary of successive scientific papers.

While it’s welcome to have scientific language simplified (e.g. the two systems, Econs and Humans etc.) it was still prone to arcane flourishes. Kahneman is simultaneously writing technically and also attempting to make it readable for ‘the layperson’ and, while he is more successful at the former, the combination isn’t a tasty literary concoction. I think a lot of this is due to his limited ability to write flowing, enjoyable prose. Similarly, on the whole it is probably a good thing that Kahneman doesn’t exacerbate the problems of density by going into the minutiae of academic debate for each idea he introduces, especially as psychology seems jargon-prone. On the other hand, sometimes this can make the book seem arrogant and overreaching. Clearly, when dealing with such complicated subjects there is a balance to be struck between the, undoubtedly necessary, task of simplification for the lay reader and avoiding overinterpreting the significance of the studies and experiments that are being used as examples. Occasionally, it feels like he draws very wide-reaching and concrete conclusions about huge swathes of human behaviour from fairly limited, specifically designed experimentation. Perhaps it is just because most of these experiments involve asking people strange trick questions in a lab in Oregon or Jerusalem and feel, intuitively, a bit weird and otherworldly! These experiments always yield interesting insights into human behaviour; however, as Kahneman would surely recognise, the context is key. Kahneman seems gifted in identifying other areas of relevance and application for his, seemingly esoteric, findings but sometimes it feels like he slightly oversteps the mark. Like a good scientist should, Kahneman wants to measure and test his hypotheses against empirical data and I don’t feel like he should be criticised for this. His DRM (Day Reconstruction Method) struck me as mad immediately, but maybe inspired. I wondered how reliable U-scores could ever really be. Every now and then I felt like the precisely formulated, laboratory tested studies and empirical measures of happiness he references may not apply to everyday life quite as broadly as he wants them to. Of course, it’s a complex area, and one that needs to be simplified, but sometimes his assertions or conclusions felt one-sided and unchallenged. Kahneman also likes to write quite combatively against academics who disagree with him and this all seemed a bit petty to me even though some of the opinions, as described by Kahneman, are worthy of derision. Even though he may not be the most eloquent writer, in terms of content this book is full of interesting ideas and experiments.

Kahneman is a clear-ish guide through a fascinating, highly complex and jargon ridden subject. The combination of the sheer density and Kahneman’s scientific literary style didn’t make it an especially easy or enjoyable read but the quality of the ideas made up for that. I found myself thinking or talking about the book lots as I plodded my way through it, reading, digesting and noting. For that reason this book gave me a lot more pleasure and enjoyment than the basic act of reading the book, which was always interesting but not usually a pleasure! I hope I’ll look back over the excessive notes I’ve taken as there is altogether too much to take in with this book. I am convinced I would never feel like picking it up again to re-read it but I also feel like there is a lot of information that I would like to go over again. This was a fascinating read but also a bit of a slog.

NOTES

PART 1: TWO SYSTEMS

CHAPTER 1 & 2

Most people have a poor intuitive grasp of statistics, even specialists in the field.

The mind has, broadly, two systems of operation. System 1 is intuitive and fast and helps us to respond to everyday questions like, ‘what is 2x2? What is the capital of France?’ System 2 is slower and involves deliberate analysis and processing of information and is used for questions like, ‘what is 15 x 27?’.

Most people believe they use System 2 most of the time to make most of their decisions but this is an illusion. Kahneman describes System 2 as a supporting actress in a film who thinks she is the lead.

Pupil dilation is highly associated with level of concentration, or System 2 thinking; when System 2 is engaged, other mental processing will be severely impaired (e.g. the gorilla appearing in a video of basketball players passing a ball goes unnoticed when people have been asked to count the number of passes, and people walking will stop when you ask them a question that involves lots of System 2 thinking).

CHAPTER 3

System 2 also operates self-control so when people are occupied with a complex cognitive task they are more likely to make impulsive decisions as System 1 takes over, or operates more freely, than it would if System 2 wasn’t occupied elsewhere.

Exercising self control makes people fatigued both mentally and physically so if subjects are asked to suppress emotions or other functions of System 1 they will then perform less well in subsequent self-control tasks because they are suffering ‘ego depletion’. The effects of this can be reversed by consuming glucose. Being drunk also weakens System 2 and increases the likelihood of reverting to standard / intuitive operating procedure as opposed to thinking things through using System 2.

System 2 is also responsible for checking suggested answers to questions / situations provided by System 1. The best e.g. was “A bat and ball cost $1.10 and the bat is a dollar more than the ball, how much is the ball?” where the intuitive answer is 0.10 but, with checking, it’s not hard to see it is 0.05. 50% of Harvard, MIT and Princeton students got this wrong and >80% at less selective universities. It’s not so much that it is really hard, it is more that most people are lazy and reluctant to engage System 2 as it costs more work / energy and is a bit unpleasant. Most prefer to follow the law of least effort.
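Writing out the algebra makes the checking step trivial; a quick sketch:

```python
from fractions import Fraction

# bat + ball = 1.10 and bat = ball + 1.00
# => (ball + 1.00) + ball = 1.10 => 2 * ball = 0.10 => ball = 0.05
total, diff = Fraction(110, 100), Fraction(1)
ball = (total - diff) / 2
bat = ball + diff
print(float(ball), float(bat))  # 0.05 1.05 (not the intuitive 0.10)
```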

CHAPTER 4

When the brain sees a word or image it makes a large number of associations instantly using the unconscious System 1.

If people are exposed to certain words associated with a certain topic this ‘primes’ their brain to make these connections more strongly. For example, if a person sees the word EAT and is then asked to complete SO_P they will be more likely to say SOUP, and if they have been primed with WASH then it will be SOAP. John Bargh at NYU did a study where subjects were asked to arrange random words into sentences and then walk down the corridor to do another test. Those subjects who had received words associated with old age walked more slowly than others because their behaviour had been primed with ideas about oldness and this had an effect on their physical actions! This is called ‘the ideomotor effect’. It also works in reverse: adopting certain physical actions can affect the way your brain receives or perceives information. For instance, being asked to nod while you listen to opinions makes you more likely to agree with them and being asked to shake your head makes you less likely to. In this last example, the subjects were told they were testing the sound quality of headphones under motion so they were not focussed on the opinions being played through the headphones even though they were later asked about them.

Priming also happens with objects; for example, people who vote near a school or see school related pictures before they vote will be more likely to vote in favour of increased educational spending. People who are primed for money become more diligent and more selfish. Money primes for individualism and a reduced desire to help other people. In a similar vein, pictures of the leader in dictatorships are supposed to increase obedience, as do forms of priming involving the military. Another good example is an experiment where different pictures were placed above an honesty box week after week. People consistently contributed more when a pair of eyes was there vs. pictures of flowers. All these examples show that in lots of situations where we firmly believe that we are making decisions using only, or mainly, System 2, the role of System 1 is actually far larger than we give it credit for.

CHAPTER 5

Cognitive ease is associated with everything being fine, cognitive strain usually means something is going wrong, is threatening or requires attention. Things that make cognition easier are more likely to make you believe things are true. Familiarity is closely associated with perceptions of truth because it eases cognition. However, the whole proposition does not have to be repeated to increase familiarity - see the e.g. of priming above. Familiarity, priming, clear and easy display of info and either being in a good mood or mimicking the body language of being in a good mood will all produce the same feelings of ease associated with truth.

When writing, high quality paper with bold type that maximises the contrast between the two is preferable. Using bright blue or red, simple language, rhyming aphorisms and quoting sources with simple names increases ease and therefore credibility. This makes everything easy to process and so System 2 isn’t engaged meaning you’re more likely to trust your intuition that everything is fine because nothing is arousing your subconscious suspicion.

Equally, when things aren’t clear and cognition is strained then this will make you engage System 2 and, ordinarily, make more critical judgements, although it will decrease creativity. The 2nd and 3rd parts of the bat and ball question are: 1) if it takes 5 machines 5 minutes to make 5 widgets, how long will it take 100 machines to make 100 widgets? 100 mins or 5 mins. 2) If a lily doubles in size every day and covers an entire lake in 48 days, how many days does it take to cover half? 24 days or 47. If printed clearly, 90% of people make at least one mistake in the three questions, but if printed faintly then only 35% make a mistake because System 2 is alerted and engaged by the cognitive strain of the faint print!!
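For the record, each correct answer falls out of one line of reasoning; a quick sketch:

```python
# Machines: 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget every 5 minutes; 100 machines make 100 widgets in the same 5.
def minutes_needed(machines, widgets, minutes_per_widget=5):
    return (widgets / machines) * minutes_per_widget

print(minutes_needed(5, 5), minutes_needed(100, 100))  # 5.0 5.0 (not 100)

# Lily: the patch doubles daily and covers the lake on day 48, so it covered
# half the lake one doubling earlier, on day 47 (not day 24).
print(48 - 1)  # 47
```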

Stocks with pronounceable tickers or names may outperform those without for some time periods!

Most novel stimuli are treated with suspicion by animals and humans. But if continually exposed with no negative consequences then this turns into a positive emotion associated with cognitive ease. Many experiments show that ‘mere exposure’ to an otherwise meaningless word, symbol or shape makes people associate positively with it! Has big implications for brands - I imagine!

Being in a happy mood increases the capability of System 1 whereas being in a bad mood decreases it meaning that being unhappy decreases intuition and creativity. System 1 is associated with good mood, intuition, creativity, gullibility. System 2 is associated with sadness, vigilance, suspicion, an analytic approach and increased effort.

CHAPTER 6

System 1 does a lot of work establishing norms and detecting when these have been violated. Examples given include not being so surprised by something unexpected the second time it happens, even though System 2 realises this is actually less likely, because System 1 has accepted this event as ‘more normal’. Very little repetition is required for something to begin to seem ‘normal’. System 1 has huge associative networks which allow it to understand language very quickly and spot discrepancies in highly nuanced ways. Example given is “the large mouse steps over the small elephant’s trunk”, where the words ‘large’ and ‘small’ function within normal expectations of the associations made with those animals. Another experiment registered participants’ detection of abnormality when an upper class voice says, “I have a large tattoo on my back”. This sort of deduction requires a lot of complex world knowledge but, interestingly, doesn’t involve any conscious System 2 ‘thinking’.

Hume saw causality as learned from observation but in the 1940s Michotte showed that the causality we observe between billiard balls or when knocking things over also extends to black squares drawn on a piece of paper where one moves into another and then the other starts moving. There is no physical ‘cause’ here but humans recognise one square as moving the other from as early as 2 years old and are surprised when it breaks down! We are hard wired to ‘see’ causes regardless of whether they exist or not. This reminds me of Tolstoy in W&P:

The human mind cannot grasp the causes of events in their completeness, but the desire to find those causes is implanted in the human soul. And the human mind, without considering the multiplicity and complexity of the conditions any one of which taken separately may seem to be the cause, seizes the first approximation to a cause that seems to him intelligible, and says: ‘This is the cause!’

And to go even further into metaphysics:

There is, and can be, no cause of a historical event except the one cause of all causes.

Both from Book 13, Chapter 1

Similarly, psychologists Heider and Simmel use a video of a large triangle, a small triangle and a circle moving around a box, that looks like a house in plan view with a door, to demonstrate how humans are hard wired to create narratives and ascribe characteristics to things. These objects could be seen as moving randomly around a box but almost everyone describes the same story of a bullying large triangle being overcome by the small triangle and the circle teaming up. It’s hard not to if you watch the video:

https://www.youtube.com/watch?v=VTNmLt7QX8E

Humans also experience what we think of as freely willed action as different from physical causality. Our own physical actions are seen as controlled by a non-physical mind, whereas a door opening and knocking something over is seen as physical cause. Psychologist Bloom sees this as an explanation for the universality of religion; the world of objects is separate from the world of minds so we can think of soulless bodies and bodiless souls. Because of these two types of cause most religions conceive of an immaterial creator of all physical things and the idea of immortal souls controlling bodies and outlasting them when they die. [Latter also fulfils a lot of egoistic wishes imo!!]

CHAPTER 7

System 1 jumps to conclusions and does not deal in uncertainty and doubt, which are the domain of System 2. System 1 suggests an immediate, unequivocal response without expending mental effort on conscious thought or consideration of alternatives.

Daniel Gilbert, psychologist, posits that System 1 always makes an attempt to believe any statement it is confronted with. System 2 will then analyse this attempt in order to see if it should be unbelieved. However, if System 2 is kept busy with a task then people are more likely to believe nonsensical statements because they are more reliant on System 1. System 1 is gullible and biased to believing, System 2 is in charge of doubting and unbelieving. But if System 2 is occupied / lazy, or if people are tired or otherwise depleted, they are more likely to believe empty, persuasive messages like advertising.

Unlike science, the brain tests hypotheses by trying to prove them not to refute them; our brains search for supporting evidence for a statement rather than the opposite.

People are biased towards agreeing with other individuals that they like or have a pre-existing positive view of. This is called ‘the halo effect’ whereby if we meet someone and we like them then we assume they have other positive attributes without any evidence. First impressions are almost always the strongest and can be almost impossible to reverse, even in the presence of substantial contrary evidence. It is important to be aware of this bias and to try and mitigate it - e.g. randomise papers when marking, get people to write down opinions simultaneously in a meeting rather than stating them verbally in turn.
What you see is all there is (WYSIATI), System 1 excels at creating a believable story from the information available, even if it is scant and imperfect, but it doesn’t use any information that isn’t immediately available. WYSIATI facilitates the achievement of coherence and of the cognitive ease that causes us to accept a statement as true. We will often not seek out, or actively repress, information that doesn’t fit the System 1 generated narrative. In this way we are hard wired to jump to conclusions.

CHAPTER 8

System 1 makes rapid basic assessments about safety and familiarity. It is a survival mechanism that is constantly scanning the environment, in the background, for threats, opportunities or cues to approach or avoid. Facial assessments are a major part of human System 1 and probably make up a much larger part of decision making than most people are willing to admit. This can be demonstrated by an experiment where subjects are shown the faces of competing candidates for an election they are unaware of and do not have other knowledge of. In 70% of cases, the judgements made solely on facial assessment match the eventual results of the voters who are supposedly making more informed, System 2 driven choices! This effect is 3x larger for people who watch a lot of TV and don’t seek out other information (aka read!) - THE IDIOT BOX!

System 1 represents categories by a prototype or set of typical exemplars - i.e. it can quickly tell you how many objects there are in an array (if it is small) and what the average of such an array is - however, it does not deal well with “sum-like variables” which require System 2. For example: what is the total length of 4 lines in a box? Most people will quickly identify there are 4 and what the average length is intuitively. Another example is asking different groups how much they would pay to save 2k, 20k or 200k birds after an oil spill; the results are substantially the same, indicating that there is an almost complete neglect of quantity in emotional contexts. [Also could be lack of comprehension of large numbers, “one man’s death is a tragedy…”]

System 1 also performs ‘intensity matching’ where one fact or thing is equated to another. For example, matching the tone of a colour or the loudness of a sound to the severity of a crime or punishment. Or answering the question, how tall is a person who is as intelligent as Jim?

System 1 is also a ‘mental shotgun’ that scatterguns more computations than it has been asked to do. For example, comparing the spelling of pairs of words when it has only been asked to compare the sounds, or comparing the metaphorical truth of statements when it has only been asked to compare the literal truth. These ‘excess’, hardwired computations are involuntary and happen even though they may be detrimental to completion of the task the brain is being asked to perform.

CHAPTER 9

It is remarkable how seldom we are stumped in normal mental life. We have intuitive feelings and opinions about almost everything without analysing them.

Much of this is the use of heuristics, from the same root as ‘Eureka’, whereby we can’t answer the question but substitute it with one we can answer and then use intensity matching to relate that answer to the original question. A good example of this is a visual trick where subjects are asked if the figure on the right is larger than the one on the left in a perspective drawing (the picture isn’t reproduced in these notes).

Everyone will say yes because our mind substitutes 3D size for 2D size because of the perspective of the picture. Actually the two figures are the same size. We should ignore the visual cues that make us interpret it as a 3D scene, because the question is only about the figures as printed on the page, but the effect is too powerful to resist!

Another powerful example is a questionnaire in which people are asked ‘how happy are you?’ and ‘how many dates have you been on in the past month?’. If they are asked in this order then the correlation between the two answers is zero. But if the dating question is asked first then participants substitute the easily available answer about their love life for the complex assessment of overall happiness and the correlation becomes very high. The same is true if the question is about family or finances rather than dating. It is an example of WYSIATI and demonstrates how much influence current mood and context have on assessments of happiness.

Paul Slovic proposes an ‘affect heuristic’ whereby we let our likes and dislikes determine our beliefs about the world. Here System 2 is more compliant with System 1 and searches for information to support the pre-existing beliefs of System 1 rather than criticising them. I have experienced this a lot in reading research about companies on the stock exchange.

PART TWO: HEURISTICS AND BIASES

CHAPTER 10

Statistically extreme outcomes are much more likely in small sample sizes of a random population. E.g. - in a draw of 4 balls from an urn that is half black, half white, the probability of them all being the same colour is 12.5%, whereas with 7 balls it is 1.56%. However, when we see a statement like ‘cancer rates lowest in rural, republican counties in the midwest’ we try to make causal links to explain this when really it is a statistical phenomenon caused by the smaller sample sizes in rural communities. This can be seen clearly in the example because the same type of communities have the highest incidence of cancer too, demonstrating there is no causal link beyond the statistical one. Amos and Kahneman showed that intuitive feel for statistically appropriate sample sizes was poor even among academic scientists. The difference between 6 and 60m is intuitive but the difference between 150 and 3,000 is less so. As System 1 thinking is not prone to doubt, we see a small-ish sample size but are predisposed to ‘over-believe’ its resemblance to a much larger general population because we prefer certainty over doubt.
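The marble arithmetic is easy to verify; a quick sketch:

```python
# Drawing n balls from an urn that is half black, half white: the chance
# they all come out the same colour is 2 * 0.5**n (all black or all white).
for n in (4, 7):
    print(n, 2 * 0.5**n)  # 4 -> 0.125 (12.5%), 7 -> 0.015625 (~1.56%)
```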

Interesting examples include the desire to identify patterns where none exist and to think that random events are somehow unlikely or represent some cause. E.g. most people think the series of babies born in a hospital BBBBBB or GGGBBB are somehow rarer than others, when every specific sequence of six births has the same probability, (1/2)^6 = 1/64.

“To the untrained eye, randomness appears as regularity or a tendency to cluster” Feller

Other example is the ‘hot hand’ myth in basketball, a player is assumed to be more likely to score because he has been successful on a hot streak of shots when really this is just randomness. “We are far too willing to reject the belief that much of what we see in life is random.”

The Gates Foundation spent $1.7bn on a program to reduce school sizes because they surveyed c.1k schools and found that small schools were statistically over-represented among the best schools. However, they were also over-represented among the worst schools because of the propensity of small sample sizes to produce extreme results!
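A quick simulation (the school sizes and scores below are entirely made up) shows why small schools crowd both ends of any league table:

```python
import random

# Every school draws pupils from the same ability distribution; only the
# school size differs. Small schools dominate both tails anyway.
random.seed(1)
schools = []
for _ in range(1000):
    size = random.choice([30, 100, 1000])       # made-up school sizes
    pupils = [random.gauss(100, 15) for _ in range(size)]
    schools.append((size, sum(pupils) / size))  # school's average score

schools.sort(key=lambda s: s[1])
bottom, top = schools[:50], schools[-50:]
print("small schools in bottom 50:", sum(1 for size, _ in bottom if size == 30))
print("small schools in top 50:   ", sum(1 for size, _ in top if size == 30))
# Small samples are noisier, so the size-30 schools fill both extremes.
```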

CHAPTER 11

The strength of the ‘anchoring effect’ means any number you are asked to consider as a possible solution to an estimation problem will induce an anchoring effect in the answers given. This anchoring takes place in both Systems 1 and 2 by different methods. In System 2 the number mentioned forms the anchor around which the estimator moves their estimate until it reaches a plausible level, but they stop as soon as uncertainty begins to creep in, especially if they are mentally depleted / slightly drunk. In System 1 it is more a priming effect whereby the number sets off the selective activation of compatible memories as it attempts to construct a story where the anchor number is true because it has been primed to do so, not because of any logical connection between the anchor number and the number they’re being asked to estimate. Most people are sceptical about anchoring because it seems so obvious and they often know that the anchor number is not informative, but its effect is very powerful and universal nonetheless.

Setting a ‘limit per customer’ for discounted items can positively affect sales. Also, in negotiations, it is best not to engage with a derisory low offer as it has an anchoring effect too; the author advises making a scene and forcing the other side to make a more reasonable bid so the gap in expectations doesn’t start out too big.

CHAPTER 12

The ‘availability heuristic’ answers a question about how frequent something is (e.g. divorce under 60 or people killed by sharks) by substituting a different question about how easy it is to retrieve examples of it happening, and then using this to comment on frequency. Things will be easily available if they happened to famous people (celeb divorce), if they are in the news a lot (shark attacks) or if you have a personal connection to, or remembrance of, that type of event. None of these causes of easy retrieval has a statistical basis or character. In collaborative efforts or teams, members usually overestimate their contribution to outcomes, especially positive ones*, because the availability heuristic brings forward lots of examples of what they have done individually but doesn’t do the same for the other members.
*success always has many parents but failure is always an orphan

The ease with which experiences are recalled is much more important than the number. People asked to recall 6 or 12 instances of assertiveness and then judge their assertiveness rated themselves more assertive with 6 instances, because this number is more fluently recalled, than with 12 because with this number fluency is reduced even though the absolute quantity of evidence is larger. In this way people actually become LESS confident about a choice the more arguments they are asked to produce to support it or LESS impressed with a car the more advantages are listed about it! One professor asked students to list varying numbers of improvements that could be made to the course - the higher the number the more satisfied the students were with the course.

However, this heuristic can be reversed if an explanation for the increased difficulty, and reduced fluency, of recall is provided, regardless of how spurious it is (colour of text, background music etc.). Under these conditions, people don’t use it as a heuristic and rate themselves in the same way as if they had retrieved fewer examples with more ease and fluency. As such, what triggers the heuristic is an unexpected loss of fluency, one that occurs more rapidly than System 1 anticipates. The more one concentrates and engages System 2 in this kind of environment, the more likely you are to use the content to guide your decisions and the less likely you are to use fluency. The more confident or powerful someone feels, or is reminded of feeling, the more heavily they will use System 1 and trust their intuition.

CHAPTER 13

People, in general, rate unusual or unlikely events with negative consequences as more likely than they actually are. This is in part because they are over-represented in the media, in part because people want to read about them and therefore they are in the media more often, and also because frightening thoughts and images come easily and, if they are fluent and vivid, exacerbate fear. This is a good example of how associative memory works and how difficult questions like, ‘do more people die of diabetes or accidents?’, are usually answered by substitution with easier questions like, ‘what ideas can I easily associate with either of these topics?’.

Paul Slovic develops the idea of an ‘affect heuristic’ whereby a difficult question (e.g. what is the balance of benefits and risks associated with water fluoridation?) is replaced by an easier one: ‘do I like water fluoridation?’ or ‘how do I feel about it?’. The answers to these easier questions are then fleshed out with ‘rational’ examples; in this way, ‘the emotional tail wags the rational dog’ (Haidt), which is accurate as most people would perceive their actions the other way round. People seem to find it hard to hold balanced views and will normally list lots of benefits and few risks or vice versa. The affect heuristic simplifies our lives by creating a world that is much tidier than reality.

Slovic speaks about ‘risk’ as a wholly subjective measure depending on which perspective one takes. Others reject this and believe that important judgements can be made probabilistically using number of human years saved and cost as determinants. Slovic would argue that the inability of most people to correctly assess minor risks (they are usually ignored or hugely overestimated) is legitimate as ‘risk’ only exists as a concept in the minds of those people who feel it. On the other hand, others (e.g. Sunstein) would see it as a serious problem that most people only think about the numerator and ignore the denominator when thinking about low probability risk (he calls this probability neglect) as it causes inefficient regulatory and legal outcomes / actions. He thinks this probability neglect combined with ‘availability cascades’, whereby a certain story or issue becomes especially prominent for a short period of time and thus occupies a disproportionate amount of visibility in the public perception, are risks to effective policy and should be tempered by the opinions of ‘experts’. Both arguments have merit; allocation of public resources should be as efficient as possible but decisions cannot be made without the public's approval in a democracy. [At the end of the day, I suspect that the eventual outcomes from most major decisions are too nonlinear to model or assess accurately!!]

CHAPTER 14

‘Representativeness’ vs. probability. Kahneman recalls an experiment he designed with Amos in which participants are asked which of 9 subjects a hypothetical student might be studying. Initially, people guess based on how large the student numbers in each subject are (called the ‘base rate’) and perhaps adjust this using the info that he is a man. After this, participants are given a character sketch of ‘Tom’ and are told it is of dubious value. Nonetheless, people then ranked the subjects differently because the description was written to be highly representative of a ‘geeky’ or ‘nerdy’ stereotype. This shows how quickly and strongly people are swayed by mental stereotypes even when the information is described as of debatable value. If people were rational then they would rank the subjects on the size of student intake alone and not give much weight to the iffy character sketch. In reality, uncertain questions about probability trigger a mental shotgun that immediately proposes a System 1 style narrative based on the information available. In this kind of thinking, representativeness seems to hugely, and wrongly, outweigh base rate. Seems similar to chapter 6, where people almost cannot resist making up a narrative / character even when it may not be appropriate to do so. System 1 hardwires us to construct believable narratives. However, engaging System 2 has been shown to reduce this tendency in some experiments; for example, getting participants to frown while calculating their guesses.

The Tom W example might be interesting to conduct with EXPLICIT base rates given to the participants to see how they react.

In general, the advice of this chapter is to invoke Bayesian thinking when assessing uncertain probability in light of new information. Stick quite close to the base rate unless the evidence is very strong, because we are naturally hardwired to prefer the specificity and narrative ease of representativeness. E.g. - is a ‘shy poetry loving woman’ more likely to be an MBA or a Chinese literature student? The answer should be MBA because, even if the proportion of shy, poetry loving women among MBAs is tiny, the absolute number will still be higher given there are lots of MBAs vs. hardly any Chinese literature students - i.e. the base rate is far more important despite our lazy preference for specific narratives and the representativeness heuristic.
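A back-of-the-envelope Bayes check makes the base-rate dominance obvious; every figure below is invented for illustration:

```python
# All numbers made up. Suppose 10% of students are MBAs and 0.1% study
# Chinese literature, and 'shy and poetry-loving' describes 5% of MBAs
# but 60% of Chinese literature students.
p_mba, p_lit = 0.10, 0.001
p_desc_given_mba, p_desc_given_lit = 0.05, 0.60

print(round(p_mba * p_desc_given_mba, 4))  # 0.005  <- P(MBA and description)
print(round(p_lit * p_desc_given_lit, 4))  # 0.0006 <- P(lit and description)
# The MBA wins by ~8x: the huge base rate swamps the representative sketch.
```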

CHAPTER 15

Another experiment conducted by K&T, called ‘Linda’, also concerns the persuasive nature of similarity to stereotype. As with Tom W, Linda is described as intelligent, interested in social justice, a philosophy grad etc etc and then participants are asked to rank her likely careers from 7 options. Those 7 options included ‘bank teller’ and ‘bank teller active in the feminist movement’. Against statistical and logical likelihood, almost everyone ranks bank teller + feminism above just bank teller, because it fits with the narrative they have been given about Linda. Most people (89%) don’t realise / care that the chance of her being a feminist bank teller MUST be lower than just a bank teller, statistically speaking, because the first is a subset of the second. Even when the experiment was modified to ask which of these two bank telling options is more likely, excluding all other options, still 85-90% of undergrads preferred the more richly detailed feminist option. Across all the studies conducted, only a sample of Stanford social science grads got it right (65%). This shows the brain prefers possibilities with more detail that are plausible even though they are logically less likely. Coherence, plausibility and probability are obviously easily confused! However, examples like - which is more likely: Aly is a teacher or Aly is a teacher who walks to work - do not cause the fallacy to occur because the STORY isn’t any richer, better or more plausible. In the absence of a competing and persuasive intuition, logic seems to prevail.
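The logical point is just the conjunction rule; a one-line check with made-up numbers:

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A). Whatever
# Linda is like, 'feminist bank teller' can never outrank 'bank teller'.
p_teller = 0.05                 # made-up figure
p_feminist_given_teller = 0.30  # made-up figure
print(round(p_teller * p_feminist_given_teller, 3), "<=", p_teller)  # 0.015 <= 0.05
```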

Other psychologists conducted experiments whereby participants guessed the value of sets of dinner plates or baseball cards. When judged together, dinner sets with all the plates plus some extra items (some broken, some not) and sets of high value baseball cards with some mediocre ones thrown in were correctly judged to be higher value than the sets without extras, as the extra items add at least SOME value to the sum calculation of total value. However, when the sets were judged individually, without comparison, the sets with extra, lower value items were judged to be worth less - called the MORE IS LESS principle - because they were perceived to damage the value of the whole set, indicating intuition and presumption about consumer items too. In these cases (plates, cards), joint evaluation can overcome the ‘conjunction fallacy’ (i.e. the sense that adding items reduces value, or that increasing specificity increases probability) but with Linda this was not the case because money is easier to quantify than probability.

Another example that demonstrates the primacy of plausibility over logic concerned a 6-sided die with 4 green faces and 2 red ones. When asked which was more likely:

A) RGRRR
B) GRGRRR
C) GRRRRR

Most people choose B because it has more Gs (as you would expect given the 2:1 ratio of G:R); however, B is just A with a G added to the front, which MUST necessarily reduce its probability vs. A! Two thirds of respondents preferred option B.
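Computing the three probabilities directly shows why B must lose to A:

```python
# P(G) = 4/6 and P(R) = 2/6; multiply along each sequence. B is A with a
# G prefixed, so P(B) = P(G) * P(A), necessarily smaller than P(A).
p = {"G": 4 / 6, "R": 2 / 6}

def prob(seq):
    out = 1.0
    for face in seq:
        out *= p[face]
    return out

for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(seq, round(prob(seq), 4))  # 0.0082, 0.0055, 0.0027
```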

There is some evidence that performance can be improved by using language like ‘out of 100 people’ vs. ‘what percentage’ because then people may conjure a visual image which makes it easier to identify one group as a subset of another in questions like ‘what percentage of men surveyed have had a stroke?’ vs. ‘what percentage of men surveyed over 55 have had a stroke?’

The dishes / baseball card experiment and Linda have the same structure but in the Linda example intuition still beats logic even in joint evaluation, whereas this is reversed in joint evaluation of plates / cards. This seems to show that probability is harder to assess than money, that plausibility trumps probability in detailed, coherent examples and that System 2 is lazy in criticising plausible narratives that appeal to System 1!

CHAPTER 16

This example concerns the mind’s hunger for causal stories and how it prioritises causal information. The example is as follows: in a city 85% of cabs are Green and 15% Blue, an accident occurs and a witness says it involved a Blue cab. The reliability of their testimony is tested and determined to be 80% correct, 20% incorrect. What is the probability that the cab was Blue rather than Green? The answer is 41% according to Bayes but many people answer 80% because this statistic is relevant and can be linked causally in a narrative to the accident. The example is then modified so that Green and Blue cabs each represent 50% of the total but Green cabs are involved in 85% of accidents. The information about the witness is the same. In the second iteration, answers are a lot closer to the accurate answer because people take the statistic about Green cabs being involved in 85% of accidents into account, whereas they don’t use the ‘base rate’ stats about the colour split of cabs in the city because it doesn’t seem relevant to the SPECIFIC story of the accident. This is the difference between ‘statistical base rates’ and ‘causal base rates’. The solution is the same in both questions but people do far better in the second because they see the info as ‘relevant’ (causal) and give it more weight in their calculations. This is an example of stereotyping, which Kahneman thinks is rightly prohibited in most legal settings as this leads to a better, more equal society, but he does think there is a cost to not using this heuristic.
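The 41% drops straight out of Bayes’ rule:

```python
# P(Blue | witness says Blue) for the cab problem.
p_blue, p_green = 0.15, 0.85
p_correct, p_wrong = 0.80, 0.20  # witness reliability

says_blue_and_blue = p_blue * p_correct   # witness right about a Blue cab
says_blue_and_green = p_green * p_wrong   # witness wrong about a Green cab
posterior = says_blue_and_blue / (says_blue_and_blue + says_blue_and_green)
print(round(posterior, 2))  # 0.41
```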

Another example involves asking participants to decide whether a student passed a test based on some short info. In one version, participants are told that 75% of the class passed and in another only 25%. These causal base rates had a significant sway on participants’ guesses. However, if respondents were told that the person who selected the sample of students did so in a manner that meant 75% of the sample had failed, this had a lesser effect, because people didn’t seem to see a causal link. [this is very odd to me as they are saying the same thing]. This seems to show that System 1 can deal with stories in which elements are causally linked but is weak in statistical reasoning.

However, not all causal base rates are used to improve understanding of a situation if they conflict with existing intuitions or ideas. The ‘helping experiment’ involves taking 6 people who all sit in booths and take turns to speak into a microphone about their personal lives and problems. One of the participants is a stooge. The stooge speaks first and says they are having trouble adjusting to NY and are prone to seizures. Everyone else speaks, then the stooge speaks again, pretends to be having a seizure, says he is dying and asks for help before the mic is turned off. Of 15 participants, only 4 came to his aid immediately, 6 never left their booths and 5 did so only long after the victim had apparently started choking! Here we would expect people to apply this to their own behaviour, or the behaviour of an average person, as they would a causal base rate. To test this, the psychologists who designed the test (Nisbett & Borgida) asked students to watch short videos of two participants discussing their lives, both of whom appear nice and normal, and then to guess how long each took to respond to the choking person’s request for help. Taking a Bayesian approach, only 4/15 helped immediately (27%), so the base rate is that roughly three quarters did not help straight away, and the information in the short videos does nothing to modify this probability. The psychologists showed the films to two sets of students: those who knew the results of the experiment and those who did not. The results were EXACTLY the same!! This seems to make the students who knew the results look stupid. However, the professors conclude that people “quietly exclude” themselves, friends and even people on a video from the conclusions of experiments that surprise them. In a follow-up experiment, they showed the same two videos, explained the procedure of the test but not the results, and then told students that neither of the two people had helped. When asked to guess the global results, their guesses were remarkably accurate. It seems that it is hard for people to apply a general, but unpleasant, truth to individuals, yet they are perfectly capable of generalising from an individual observation. The experimenters said, “subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”. Surprising individual cases have a much stronger effect on our thinking than general information, which we seem to disregard in the face of personal experience or long-held beliefs.

Causal stats have a far larger effect on thinking than non-causal ones, even if both are equally relevant, because of System 1 narrative building and stereotyping. However, even very persuasive causal stats can have a hard time overcoming long-held beliefs. Individual examples are far more effective in changing thinking and behaviour. People may remember a surprising stat about human behaviour, and may even repeat it to friends, but will still fall back on their earlier beliefs if it is presented in a ‘general’ format, whereas the reverse is true of individual cases or personal experience. [this seems to have a parallel in stocks - e.g. ‘all commodity companies are crooks’ but when you meet one you think ‘surely not them as most businessmen are competent / honest or else they wouldn't succeed’ then you lose your shirt in one and always remember it!!]

CHAPTER 17

Reward for improved performance in training is more effective than punishment for poor performance. It may not seem like this to an instructor because of regression to the mean e.g. when an instructor praises good performance it usually gets worse afterwards; when he criticises bad performance it gets better. Because of the human brain’s love of causal stories, regression to the mean is often explained in causal terms (e.g. the Sports Illustrated jinx). Statistically, variance doesn’t need a causal explanation but the mind wants one!

Regression to the mean was discovered by Francis Galton (half cousin of Charles Darwin) in 1886. Correlation and regression to the mean are not two concepts but different perspectives on the same concept: whenever the correlation between two scores is imperfect, there will be regression to the mean. Again, the desire for causal explanations plays a role. Between the two statements:
1) Intelligent women tend to marry less intelligent men
2) The correlation between intelligence scores of spouses is less than perfect
1) seems to cry out for a causal explanation whereas 2) seems a far less interesting statement of fact, although they state exactly the same thing. We are strongly biased towards causal explanations and seek them out, whereas mere statistics do not appeal to our minds. Mean regression has an explanation but doesn’t have a cause and therefore doesn’t appeal much to our minds. System 1 makes insistent demands for causal explanations! Also, because it seems reasonable to link predictions to evaluations of evidence, we won’t learn to understand mean regression from experience; it is counterintuitive.
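A minimal simulation of why regression needs no cause (the score = ability + luck model is my own illustrative assumption):

    import random

    random.seed(0)
    N = 100_000

    # Each performance is a stable ability plus independent, transient luck.
    ability = [random.gauss(0, 1) for _ in range(N)]
    day1 = [a + random.gauss(0, 1) for a in ability]
    day2 = [a + random.gauss(0, 1) for a in ability]

    # The top 10% of day-1 performers fall back toward the mean on day 2,
    # with no praise, criticism or jinx involved - their luck simply doesn't repeat.
    cutoff = sorted(day1)[int(0.9 * N)]
    top = [(d1, d2) for d1, d2 in zip(day1, day2) if d1 >= cutoff]
    print(sum(d1 for d1, _ in top) / len(top))  # ~2.5 on day 1
    print(sum(d2 for _, d2 in top) / len(top))  # ~1.2 on day 2: regressed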

CHAPTER 18

In general, people substitute evaluation of evidence for prediction. When people should be thinking statistically about predictions, they instead let System 1 take over and equate their evaluation with a prediction, because that is an easier question to answer. For example, participants asked to rank a student based on a short report (evaluation) and then to predict their future performance (prediction) usually substitute the first for the second via intensity matching, without taking mean regression into account.

Another example involves a woman, Julie, who participants are told learned to read at age 4. They are then asked to predict her GPA. Most people ‘predict’ a high GPA like 3.7 or 3.8 because they substitute the hard question of predicting GPA with the easy question of evaluating how precocious it is to read at 4. Kahneman advises the following procedure for mitigating this kind of knee-jerk, System 1 heuristic:
1) Estimate the average GPA
2) Estimate the GPA that matches the evidence (e.g. reading at 4)
3) Estimate the correlation between the evidence and GPA (30%?)
4) Move 30% of the distance from the average GPA toward the far higher GPA your ‘intuition’ suggests

When assessing probabilities or numerical outcomes it is best to start with the ‘base rate’ figure and the figure suggested by intuition and then assess how relevant the information on which your intuition was built really is (i.e. what is the correlation between the info and the thing you’re being asked to guess). If the correlation is 1, stick with intuition; if it is 0, stick with the base rate; and slide between the two for anything in between.
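A minimal sketch of the procedure (the 3.8 intuition and 0.3 correlation are the chapter’s illustrative numbers; the 3.0 average GPA is my assumption):

    def regressive_prediction(base_rate: float, intuition: float, correlation: float) -> float:
        # Move from the base rate toward the intuitive estimate in proportion
        # to how diagnostic the evidence actually is (the correlation).
        return base_rate + correlation * (intuition - base_rate)

    # Julie: assumed average GPA 3.0, intuition says 3.8, correlation guessed at 0.3.
    print(regressive_prediction(3.0, 3.8, 0.3))  # 3.24
    # correlation 1 -> trust intuition fully; correlation 0 -> stick with the base rate.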

This kind of thinking can be difficult to implement because following our intuitions seems more natural and somehow more pleasant (cf. gambling) than acting against them. For me it is also interesting to think about how we react to success / failure based on intuitive / researched guesses; is it more enjoyable to ‘win’ when following an intuition? Or is ‘losing’ harder to deal with when you have overridden your intuition using stats?

System 1 is keen on extreme predictions and is willing to predict extreme events from weak evidence. Because System 1 takes whatever evidence there is and uses it to create as convincing a story as possible, it tends to overestimate the relevance of that information, leading to overconfidence in the ‘prediction’ put forward.

PART THREE: OVERCONFIDENCE

CHAPTER 19

The mind prefers events, things that actually happened, to non-events, things that could have happened but didn’t. As such, when we hear stories of business success we are likely to overestimate the role of skill and underplay the role of luck. This is called the ‘narrative fallacy’. Part of this arises from the way stories are told; they usually include causality! This is pernicious as it gives people the idea that they understand the past and therefore can know about the future. Really, the future is unknowable [shout out Keynes!]. This quote struck me as very descriptive of normal thought: “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” p201, and “these illusions are comforting. They reduce the anxiety that we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence” p205. System 1 is a sense-making machine for these reasons. People modify their beliefs with the benefit of new information but find it VERY hard to recall their former beliefs, instead retrieving their current ones in an instance of substitution. This is called ‘hindsight bias’ or the I-knew-it-all-along effect and seems to arise from our mind’s desire to make sense of the world. In experiments where people are asked to ascribe a probability to an event and then recall that probability later, after the event has / hasn’t happened, most overestimate the probability they ascribed to things that did happen and vice versa. Here the bias is driven by the outcome: most people revise their opinions to fit the outcome and are unaware of this. Most struggle with the concept of a ‘good’ prediction or bet that doesn’t actually come off. As with the example of the successful business, people wait until it is successful before they call the founders geniuses!! [Another example I can think of is rogue traders; everyone knows of the examples where they lost money but when was one ever fired for taking too much risk and making money? He probably got a promotion!!]

Research shows that the (generous) correlation between the success of a firm and the quality of its CEO is 0.3, meaning that in a pair of otherwise identical firms the one with the better CEO will outperform 60% of the time, only 10% more than chance (50%). I wonder how this was tested?? It would be much lower for fund managers! The fact that many CEOs are hero-worshipped is, according to Kahneman, partly down to the need for clear, causal stories and partly the halo effect. Most assessments of ‘best practice’ or ‘visionary CEOs’ have the causal relationship upside down: the firm is considered to be succeeding because the CEO is brilliant when really the CEO is considered to be brilliant because the firm is succeeding - and exactly the same in the case of failing businesses / badly perceived CEOs.
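On how a 0.3 correlation becomes ‘60% of pairs’: assuming (my assumption, not the book’s) that CEO quality and firm outcome are jointly normal, the conversion has a closed form, and a simulation agrees:

    import math
    import random

    def concordance(rho: float) -> float:
        # P(the firm with the better CEO also does better), bivariate normal case.
        return 0.5 + math.asin(rho) / math.pi

    print(concordance(0.3))  # ~0.597, i.e. roughly 60%

    # Monte Carlo check of the same quantity.
    random.seed(1)
    rho, hits, trials = 0.3, 0, 100_000
    for _ in range(trials):
        (x1, y1), (x2, y2) = [
            (x, rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))
            for x in (random.gauss(0, 1), random.gauss(0, 1))
        ]
        hits += (x1 > x2) == (y1 > y2)
    print(hits / trials)  # ~0.60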

CHAPTER 20

“Considering how little we know, the confidence we have in our beliefs is preposterous - and is also essential.”

Kahneman used to test soldiers’ leadership and problem-solving abilities with tasks and then make recommendations about which ones were suitable for officer training. When feedback about the soldiers who went on to officer training showed that these predictions were little better than random guesses, Kahneman felt this should have shattered his confidence in the predictive value of his judgements. However, it continued to feel sensible and valid to make assessments based on what he saw, and he called this ‘the illusion of validity’. It was easy to draw conclusions from what he saw and these conclusions seemed legitimate, leading to confidence - the result of a combination of coherent information and cognitive ease of processing.

Gives fund management as an example of this, which is fair enough! Two in three mutual funds underperform the market in any given year and the year-to-year correlation of results is barely higher than zero. Against this, you might use the argument Buffett makes in ‘The Superinvestors of Graham-and-Doddsville’.

Both of these examples show that individual experience is far more persuasive than obscure statistics. In the financial industry, the illusion may also persist because these people exist in a community of like-minded believers. Most finance professionals probably think themselves better than other people and believe they can do what others cannot; the profession is inherently arrogant. I have always believed arrogance to be a fundamental truth about fund managers.

Our sense of the unpredictability of the future is undermined by the ease with which the past can be explained with the benefit of hindsight.

Mentions an interesting-sounding book, Expert Political Judgment by Philip Tetlock, in which the accuracy of expert predictions is tested. In general, people who know a bit more predict a bit better than people who know less, but the most prominent experts suffer from overconfidence and make worse predictions. This was especially true of experts who were in the limelight when asked for their opinion, which gives pause for thought about all the expert opinions we are offered in the media every day. Unsurprisingly, the experts had a lot of excuses when confronted with the fact that they had been wrong!

CHAPTER 21

Psychologist Paul Meehl wrote a book, Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, claiming that statistical predictions almost always outperform clinical ones. One reason for this, according to Meehl, is that ‘experts’ overcomplicate things in their predictions. For this reason, humans will underperform an algorithm even when they are given the algorithm’s output, because they feel they can overrule it to take advantage of more complex relationships between factors that they perceive! Expert judgement is also very inconsistent, giving different outcomes when confronted with the same material on different occasions, probably because of the priming effect of minute factors on our System 1 minds. Overconfidence also plays a role: humans ordinarily place too much weight on their own judgements and not enough on empirical factors, reducing validity. Furthermore, a formula constructed on the back of an envelope can usually compete with an optimally weighted one. Atul Gawande’s The Checklist Manifesto and the investor Mohnish Pabrai are other examples of this principle.

Clinical psychologists reacted to this work with disbelief and hostility. Some of this is because clinical psychologists do have skill in making short-term predictions, an area in which they have a lot of practice; they are often less skilled in long-term predictions because they would have to live multiple lives to make enough observations to acquire the same level of skill. It is also because humans are biased to prefer the human approach - cf. man vs. machine contests like computer vs. Kasparov, ‘organic’ vs. commercially grown fruit, and ‘all natural’ or ‘no preservatives’ labelling vs. artificial products. [Another example could be the current intensive media coverage of people who die because of self-driving cars even though, statistically, humans make errors much more frequently and kill far larger numbers proportionately; it makes for better news and somehow is more shocking.]

Kahneman designed a test for assessing the aptitude of army inductees that employed Meehl’s principles. Instead of interviewers making a judgement on which soldiers were most suited to combat, they would score relevant personality traits based on largely factual questions, and the final fitness score would be computed by a standard formula from these scores. As such, the interviewers stopped making predictive judgements and began collecting data, leaving the computation to the formula. Apparently, the interviewers were close to mutiny and complained that they had been turned into robots! To prove his point, Kahneman gathered feedback from the units to which the soldiers were assigned under both methods of selection. When the old human method was compared to the formula, the result was ‘completely useless’ (old method, humans) vs. ‘moderately useful’ (new method, formula). Interestingly, though, when the interviewers were asked to ‘close their eyes and guess how good a candidate would be on a scale of 1-5’ after conducting the new interview, this ALSO functioned as a good prediction! Intuition did add value, but only after disciplined collection of objective information, not when used on a standalone basis. An appropriate mix seems to work best: while it is good not to totally trust intuition, it is also good not to totally distrust it. Kahneman ended up giving the ‘close your eyes’ score the same weight as the other six scores in his formula.

CHAPTER 22

Not everyone agrees with Kahneman’s focus on biases and heuristics. Gary Klein is an advocate of Naturalistic Decision Making (NDM), and he and Kahneman co-authored a paper to outline the differences in their approaches. Klein’s background was in studying the ‘intuition’ of fireground commanders, who seem to make good crucial decisions without conducting any analysis. However, both Klein and Kahneman agreed that this ‘intuition’ is, in fact, recognition and memory at work. Just because they don’t know why they know doesn’t make it magic; knowing without knowing why is a ubiquitous feature of mental life.

Some ‘intuitions’ are easy to acquire, like being afraid of a place or situation in which something bad has happened in the past, and vice versa for good experiences (Pavlov’s dog). However, learning ‘intuition’ about more complex situations (firefighting, chess etc.) takes longer, because these ‘skills’ are in reality made up of many different mini-skills that must each be learned and embedded in associative memory before they can be recalled.

Klein and Kahneman’s differences may have arisen from the different types of experts they analysed (nurses and firemen vs. stock pickers and political experts). Both agreed that subjective confidence in a judgement is not a good guide to its validity, and proposed that intuitions are likely to be skilled (and therefore worth listening to) when they arise from environments that are 1) sufficiently regular to be predictable and 2) assessed by people who have had sufficient time to learn these regularities through prolonged practice. In the case of firefighters, their System 1 has learned valid cues, whereas in the case of a stock picker, according to Kahneman, there are no valid cues to learn! Meehl’s clinicians from the previous chapter operated in a low-validity environment, indicated by the fact that even though the algorithms outperformed the clinicians, they didn’t do very well in absolute terms either. The conclusion: intuition cannot be trusted in the absence of stable regularities in the environment.

CHAPTER 23

Kahneman was involved in a project to design a curriculum and textbook for teaching decision making in Israeli schools. After a year or so, the team members were asked to guess how long it would take to complete the task and the average guess was around 2 years. However, under closer examination, the curriculum expert in the group, who had also guessed around 2 years, revealed that most groups like theirs take 7-10 years, that 40% fail, and that the current group was worse than average!! By the time the book was finished (8 years later) the ministry of education didn’t even use it! Kahneman says this situation illustrates three lessons: 1) the inside view vs. the outside view 2) the planning fallacy and 3) irrational perseverance

The inside view is a form of WYSIATI: the team had written 2 chapters already and knew roughly how many more had to be written and roughly how long that would take. However, this failed to take account of the fact that the existing chapters may have been the easiest, and that enthusiasm for the project was probably then at its peak. Alongside this, and far more important, were the ‘unknown unknowns’ of the project, such as illness, divorce and bureaucratic delays! In this case, the outside view was the experience of the curriculum expert. This was the equivalent of a base rate and should have formed the basis for their estimates, to be adjusted only if there was strong evidence for doing so. Here, it is clear that people who know information about an individual case rarely feel they need stats about the general class to which that case belongs in order to make accurate predictions. But they should!! Even when they did learn the base rate, they ignored it. Personal impressions are far more persuasive than pallid stats. Often, people see their case as unique and therefore not subject to the base rate.

The planning fallacy describes forecasts that are too close to best-case scenarios and that could be improved by impartial reference to the statistics of similar cases. The history of cost estimates for the Scottish Parliament building is a good example of this! Another good example: of all the rail projects undertaken worldwide between 1969 and 1998, 90% overestimated passenger numbers, by an average of 106%! Often, the desire to get the plan approved or to go ahead with the project causes over-optimism. People may also be aware that a project will continue once started, as it is undesirable to leave it half finished, and therefore deliberately underestimate its cost. The solution is to take the outside view and use all the statistical, distributional information available - i.e. a base rate calculated from a large database of similar projects - but this is never a natural inclination! (cf. Bent Flyvbjerg - planning guru)

CHAPTER 24

The planning fallacy is an example of optimistic bias, which is extremely pervasive. Most people view the world as more benign than it is, their own attributes as better than they are, and their goals as more achievable than they are. People also exaggerate their ability to predict the future, leading to overconfidence. Optimists tend to take more risks and be happier people, so they may play an outsized role in life vs. pessimists.

Because optimists fail to assess the odds that apply to projects like theirs, or do not think those odds apply to them, they are resilient and persevere in the face of obstacles.

An interesting study by Malmendier and Tate showed that more optimistic CEOs typically held more stock in their companies but still took on excessive risks. [This is contrary to my own theory about stock ownership vs. options.] The same study also found that the performance of companies with ‘celebrity CEOs’ (who had received awards and press attention) was poorer once these CEOs had been anointed, which I would tend to agree with.

Part of entrepreneurial overconfidence is WYSIATI: people overwhelmingly believe their fate is in their own hands and neglect to make a proper assessment of the competition - ‘competition neglect’.

A study of CFOs by psychologists at Duke gathered over 11,000 predictions about how the market would perform and found no correlation with what it actually did. They also found that when asked for an 80% confidence interval for the range in which the market would trade (i.e. a high figure they were 90% sure was too high and a low figure they were 90% sure was too low), the CFOs made the interval about 4x too narrow - largely because they did not feel comfortable setting a realistically wide range, since they would be laughed at for making such a vague prediction about something they were supposed to be knowledgeable about. Optimism and overconfidence are, wrongly, valued highly by society! Overconfident advisors, experts and consultants can expect to be more in demand than their more realistic counterparts.

The mixture of emotional, cognitive and social factors that supports exaggerated optimism is powerful. For this reason, risk takers rarely have a higher appetite for risk per se; they simply underestimate the risks and are overly optimistic about the prospects of a project.

According to Martin Seligman, the key to a positive psychology is preservation of self-image, whereby a disproportionate amount of credit is taken for successes and a disproportionate amount of blame for failures is deflected. When something goes well, it was all because of you, but when it goes badly, it was due to factors outside your control. This strikes me as true and highly observable in most optimists’ behaviour.

Gary Klein recommends the practice of a ‘premortem’ to tame the optimistic biases of System 1. Before an important decision is finalised, take 5-10 minutes to write a hypothetical account, from a vantage point one year in the future, of why the decision turned out to be a terrible idea. This practice works against the positive groupthink that develops around ideas that are about to be adopted and forces knowledgeable participants to think negatively about an idea they favour.

PART FOUR: CHOICES

CHAPTER 25

Rejects the expected utility view of humans as rational, selfish and unchanging in their tastes. For example, people prefer $46 for sure to a coin toss to win $100 or nothing, and people prefer $80 for sure to a gamble where 80% of the time you win $100 and 20% of the time you win $10, even though the EV (expected value) of this ‘bet’ is $82 (0.8 x $100 + 0.2 x $10).

Daniel Bernoulli (in a 1738 paper) noted that people dislike risk even when the EV of the riskier option is larger than the sure thing. He attributed this to the utility value (he called it ‘moral expectation’) of the outcomes rather than their pure monetary value: a person with diminishing marginal utility for wealth will be risk averse.

Bernoulli used this concept [utility rising roughly logarithmically with wealth] to explain why poorer people buy insurance and richer people sell it: the loss of the same amount of money causes a far larger decrease in utility for a poorer person than for a richer one. E.g. on the book’s utility table, the loss of 1m for a person worth 10m is 4 pts, whereas for a person worth 3m it is 18 pts.
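A minimal sketch of the same asymmetry using the logarithmic utility Bernoulli proposed (the ‘points’ above come from the book’s own table; natural logs give the same qualitative picture):

    import math

    def utility(wealth_millions: float) -> float:
        # Bernoulli: utility grows with the logarithm of wealth.
        return math.log(wealth_millions)

    # Losing 1m hurts the 3m household roughly 4x more than the 10m one.
    print(utility(10) - utility(9))  # ~0.105
    print(utility(3) - utility(2))   # ~0.405
    # Hence the poorer party pays for insurance and the richer one can profitably sell it.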

However, while Bernoulli is correct to equate money with its utility rather than its absolute value, he misses the context of how a given level of wealth has been arrived at. Kahneman’s example: Jack and Jill each have 5m today, but yesterday Jack had 9m and Jill had 1m; are both equally happy? Of course not! Another problem Kahneman sees is the reference point of a given gamble. In a choice between 2m for sure and equal chances of 1m or 4m, Bernoulli’s theory implies everyone with the same wealth should choose alike. Kahneman objects: a person who currently has 1m would take the sure thing, as they either double their money for certain or gamble between staying the same and quadrupling it. A person with 4m, however, would gamble, as their choice is between a certain loss of half their money and a gamble between losing 75% of their wealth and losing nothing. For the person with 1m both options are good and the sure thing is better; for the person with 4m both options are bad and the gamble is the better one. Faced only with gains (or neutral outcomes) people are risk averse, whereas faced only with losses people are risk seeking. [Although I agree with the conclusions, I think this example doesn’t make much sense, as what kind of gamble requires one person to put in 1m and the other 4m - both for the same outcomes? Nonsensical!]

While I agree that utility theory is flawed, because the history of one’s wealth has serious psychological implications, I think the second example Kahneman gives represents a misunderstanding of the nature of gambles or investments.

An interesting aside mentioned in this chapter is the St Petersburg paradox: with a starting stake of $2, a player tosses a coin repeatedly, the stake doubling every time a head comes up (2, 4, 8, 16 etc.) and the game ending, with the stake paid out, when a tail comes up. How much should someone pay for the chance to play this game? Theoretically the EV is infinite, but most people won’t pay much to play.
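A minimal simulation of the paradox: each doubling level contributes $1 to the EV sum, so the theoretical EV diverges, yet simulated payouts are almost always small:

    import random

    random.seed(2)

    def play() -> int:
        # Stake starts at $2 and doubles on every head; it is paid out on the first tail.
        stake = 2
        while random.random() < 0.5:  # heads: keep doubling
            stake *= 2
        return stake

    payouts = sorted(play() for _ in range(100_000))
    print(payouts[len(payouts) // 2])    # median payout: $2
    print(sum(payouts) / len(payouts))   # sample mean stays small despite the infinite EV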

CHAPTER 26

Amos and Kahneman saw that people’s attitudes to gains and losses are not the same even when the same amount is involved, and that this nuance isn’t covered by Bernoulli’s theory. For example, if offered a sure gain of $900 or a 90% chance to win $1000 (the same in terms of EV) most people will choose the sure thing. However, if offered a sure loss of $900 or a 90% chance of losing $1000, most people will gamble. People like winning and dislike losing - and they dislike losing more than they like winning. This was demonstrated by another pair of problems:
1) Receive $1000 and then either gamble on a 50% chance to win another $1000 or get $500 for sure.
2) Receive $2000 and then either gamble on a 50% chance to LOSE $1000 or LOSE $500 for sure.
Most people take the sure thing in 1) (a gain) and gamble in 2) (a loss). But the two problems are identical in terms of final position: $1500 for sure, or a 50-50 gamble between $1000 and $2000. People don’t adjust their decisions for the larger amounts involved or the difference to their overall wealth; it is all about the reference point, how the conundrum is presented, and attitudes to winning and losing.
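A minimal check that the two problems offer identical final positions:

    # Problem 1: receive $1000, then a sure +$500 or a 50% chance of +$1000.
    p1_sure = 1000 + 500                      # $1500 for certain
    p1_gamble = [1000 + 0, 1000 + 1000]       # $1000 or $2000, 50-50

    # Problem 2: receive $2000, then a sure -$500 or a 50% chance of -$1000.
    p2_sure = 2000 - 500                      # $1500 for certain
    p2_gamble = [2000 - 1000, 2000 - 0]       # $1000 or $2000, 50-50

    print(p1_sure == p2_sure)                      # True
    print(sorted(p1_gamble) == sorted(p2_gamble))  # True
    # Same final wealth either way; only the gain / loss framing differs.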

Amos and Kahneman called this ‘Prospect Theory’ and identified three main cognitive features that play a role in financial decision making, all of which are operating characteristics of System 1:
Evaluation relative to a neutral reference point
Diminishing sensitivity (the difference between $100 and $200 feels much bigger than the difference between $900 and $1000)
Loss aversion - losses loom larger than gains. This may have an evolutionary background as organisms that treat threats as more urgent have a better chance to survive and reproduce.

Loss aversion can be seen in an example where people are asked if they would toss a coin to win or lose $100. Most would not, and would require a winning amount of between $150 and $250 to offset the risk of losing $100. In a 50-50 gamble a win-lose ratio of 1:1 would be the rational break-even, but people dislike the loss more than they like the gain, so they demand an asymmetric payoff to match their asymmetric feelings. Equally, this aversion shows that Bernoulli’s theory of the pure utility of wealth cannot explain the extreme aversion shown to inconsequential amounts. Matthew Rabin showed that if someone rejects the gamble ‘50% chance to lose $100, 50% chance to win $200’, which most people do, then under utility-of-wealth theory they should also reject the gamble ‘50% chance to lose $200, 50% chance to win $20,000’, which is clearly mad. [But does Bernoulli’s theory really advise this? Not 100% clear to me??]
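A minimal sketch of loss aversion as a value function (the loss-aversion coefficient of 2 is roughly what the $150-250 answers imply; the real prospect-theory function also has curvature, which I’ve dropped for simplicity):

    def felt_value(x: float, loss_aversion: float = 2.0) -> float:
        # Losses are weighted loss_aversion times as heavily as gains.
        return x if x >= 0 else loss_aversion * x

    def coin_toss_value(win: float, lose: float) -> float:
        # 50-50 toss: average the FELT values, not the dollar amounts.
        return 0.5 * felt_value(win) + 0.5 * felt_value(-lose)

    print(coin_toss_value(150, 100))  # -25: rejected despite a +$25 EV
    print(coin_toss_value(200, 100))  #   0: the break-even for a coefficient of 2
    print(coin_toss_value(250, 100))  # +25: accepted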

Kahneman recognises the limits of prospect theory too, highlighting its inability to adjust the all-important ‘reference point’ when people’s hopes or expectations have been raised by the high probability of an outcome: it cannot deal with disappointment. Equally, it cannot deal with the regret of choosing one option over another.

CHAPTER 27 THE ENDOWMENT EFFECT

Kahneman goes on to argue that the reference point, or current situation, has profound implications for another central model of economic theory - the standard model of indifference curves. Both this, and Bernoulli’s representation of outcomes as states of wealth, make the erroneous assumption that the utility of a state of affairs depends only on that state and is not affected by your personal circumstances and history. Things that might once have been assessed as equally valuable (extra salary vs. extra vacation) soon come to be seen as losses once a status quo has been established.

Richard Thaler, a doyen of behavioural economics, used to wonder why his standard economic theory professors taught rational economic behaviour but behaved irrationally. For example, one professor collected wine and would never pay more than $35 for a bottle, but was also unwilling to sell one even for $100, making a mockery of the theory that a bottle has a single price below which he would buy and above which he would sell. This is the endowment effect; the professor was unwilling to sell the wine because the sale would have felt like a loss of utility. When objects are held to be used and enjoyed, their owners are much more reluctant to sell them, even at a price far higher than they would pay to buy them. Another example is a football fan with a ticket for a big game: if he finds out that tickets are changing hands for hundreds or thousands of dollars, he probably still won’t sell, even though he would never buy at that price himself.

An experiment conducted by Amos and Kahneman modified one conducted by Vernon Smith. In Smith’s experiment, participants were given tokens and each person was told a different cash amount that these could be traded in for at the end of the experiment. The participants then traded tokens and, as standard theory predicts, the tokens ended up in the hands of the people to whom they were worth the most. Amos and Kahneman adapted this by using an object that participants could be expected to actually use: an attractive coffee mug. Some participants received mugs (sellers), others did not (buyers). Buyers had to use their own money to acquire a mug. The value of the mug was about $6, but when trading commenced the sellers wanted c.$7 and the buyers would only pay just under $3 - almost exactly the 2:1 ratio by which gain must outweigh loss in the coin-toss example used to demonstrate loss aversion. Apparently, the gap was not as large when the experiment was conducted in the UK. Kahneman also mentions another variant in which a third group, choosers, could choose between receiving a mug and an amount of money they considered equivalent. This group chose a value just above $3. Kahneman says they face the same choice as the buyers but, as I understand it, buyers must buy with their own cash whereas choosers are going to end up with something from someone else either way? I don’t think either of these examples is much good - especially the choosers one, which I found confusing. However, they do indicate the endowment principle: it is hard to sell something you own and intend to use. Apparently, selling such things stimulates parts of the brain associated with disgust and pain; the same can be true of buying if you feel you are being ripped off, while buying at an attractive price stimulates parts of the brain related to pleasure. The reluctance babies show in giving up a toy demonstrates how the endowment effect is part of our System 1. Another experiment (Jack Knetsch), which is clearer, involves participants filling out a form with a reward in front of them - an expensive pen in one version and a bar of Swiss chocolate in another. Asked at the end of the form if they would like to swap their gift for the other one, only 10% agreed. Knetsch also tested a variant where people did not actually possess the object before being offered a trade, and this increased trading. The same experiment was conducted at a baseball card trading convention: inexperienced traders (18% willing to trade) were much more reluctant than experienced ones (48%), which may show that the effect diminishes with experience - or that experienced traders will trade come what may!!

A study of the luxury housing market in Boston during the downturn of 2008-9 showed that those who had paid higher prices for their identical units spent longer selling their homes and eventually received more money for them; displaying some degree of anchoring or endowment effect.

People experiencing poverty do not display the endowment effect, because they are always short of the money they need for their expenditures or, in the language of prospect theory, are living below their reference point (the money they need for all their necessities). As such, small amounts of money received are experienced as a reduced loss, not as a gain. In a state of real need for money, people living in poverty don’t show the same attachment to what they already have.

CHAPTER 28

Threats are processed faster than opportunities by our System 1 brains. One experiment flashed pictures of another person’s eyes for 2/100ths of a second, far too quickly for conscious recognition, while participants sat in a brain scanner. When the eyes were smiling or happy nothing happened, but when they looked terrified or threatening there was a huge reaction in the participant’s amygdala - the brain’s threat centre. Equally, some experiments report that angry faces ‘pop out’ of a crowd of happy faces while a single happy face in a crowd of angry ones doesn’t. A single cockroach in a bowl of cherries spoils its appeal, but a single cherry in a bowl of cockroaches does nothing! It seems, for evolutionary reasons, our brains give priority to bad news and can process it extremely rapidly. Of course, opportunities to mate or feed can also be recognised very quickly, but not, apparently, with the same speed as threats. This difference in speed of processing extends to words associated with bad outcomes and to statements with which a person strongly disagrees.

Michel Cabanac sees most pleasure as functioning to indicate the direction of a biologically significant improvement in circumstances.

Reference points, whether they are the status quo or targets set for the future, involve loss aversion too. For example, taxi drivers may have target earnings for the year and may break this down into a daily target. On a rainy day this target is easy to attain, whereas on a day with good weather it is harder. Logically, drivers should work as much as possible on rainy days and take time off when it is nice, as that time off ‘costs’ them less per hour. However, because falling short of the daily target feels like a loss, drivers usually go home early once they have made their target on a rainy day and work longer hours on less profitable days. Equally, in professional golf each hole has a reference point: par. Two economists studied over 2.5m putts and determined that players were 3.6% more successful putting for par (avoiding a loss) than for birdie (seeking a gain)! This difference would have equated to about 1 stroke per round!

The asymmetric intensity of losses vs. gains makes many negotiations very difficult. This can be reduced if the negotiations concern a growing pie rather than a shrinking one, because allocating losses is probably 2x as painful as allocating gains. In the animal kingdom the owner usually wins territorial battles, demonstrating, perhaps, both the endowment effect and loss aversion! Equally, in institutions, the losers from a reform will usually fight far harder than the winners because they feel the losses more keenly. Loss aversion is a powerful conservative force across animals, individuals and institutions, favouring minimal changes to the status quo.

Loss aversion and reference points also seem to have an impact on what is deemed fair. Businesses that increased prices on items when they were in high demand were almost universally branded as unfair. So were profitable companies that reduced existing workers’ salaries when wages fell in the labour market where the company operated. In both cases, the businesses broke informal contracts with customers or employees, judged against reference points (the price prior to the demand spike, the previous wage). Interestingly, if a company sacked an old employee and hired a new one at a reduced wage this wasn’t deemed unfair. Nor was it deemed unfair for a company to share its losses with either its customers (by increasing prices) or its employees (by reducing wages). Furthermore, a company enjoying a windfall profit because of a fall in manufacturing costs wasn’t deemed unfair either, although it was deemed more fair if it shared these profits with other stakeholders. Seemingly, it is considered OK for businesses to share their losses, but unfair for a company to exploit its power to increase profits by burdening clients or employees with losses relative to their reference points. The perception of unfairness has also been shown to be detrimental to businesses in the L/T, as employees become less productive and clients try to buy elsewhere. Even people who have no existing business with the company perceived to be unfair will often join in the punishment by deliberately avoiding patronising it in future. MRI scans reveal that altruistic punishment of this nature is rewarding, but our brains are not designed to reward generosity as consistently as they punish meanness - another asymmetry in the psychological treatment of losses vs. gains. The reward associated with punishing a stranger who was mean to another stranger may be the glue that holds society together! [Righteous indignation!!]

In the law, the familiar rule that possession is nine-tenths of the law indicates the moral primacy of the reference point and also seems to confirm the endowment effect to a certain extent. Equally, when goods are lost in transit the owner is compensated for the loss of the goods but not, usually, for the profit they might have made by selling or using those goods for their final purpose.

CHAPTER 29

When making assessments of complex decisions multiple factors are considered and given different weights. Ordinarily, this is a function of our System 1.

The possibility effect and the certainty effect describe how we misweight probabilities near the ends of the scale. We are willing to pay much more than EV to eliminate a small risk entirely (the certainty effect - insurance), and the jump from a 0% chance to a small chance looms far larger than its EV (the possibility effect - gambling, and the dread of rare disasters). Equally, the difference between certain disaster and a 95% chance of disaster seems very large. In these cases, small probabilities are overweighted, and near-certainties underweighted, relative to their EV. This goes against the expectation principle of rational choice, which states that the utility of a gamble is the average of the utilities of its outcomes, each weighted by its probability.

Another example of this abrogation of rational choice is the Allais Paradox (1952):
1) Would you prefer a 61% chance of $520k or a 63% chance of $500k?
2) Would you prefer a 98% chance of $520k or a 100% chance of $500k?
Most people choose the first option in 1) (the 2% difference in probability seems insignificant next to the difference in prizes) and the second in 2) (the certainty effect), but in EV terms:
1) 317.2k vs. 315k - so people SHOULD choose the first
2) 509.6k vs. 500k - so people SHOULD choose the first here too
Worse, 2) is just 1) with a 37-point chance of winning added to both options (61+37=98, 63+37=100), so no consistent set of preferences can favour the first in 1) and the second in 2). This pattern caught not just ordinary respondents but the leading rational choice (aka The American School) experts at the conference where Allais presented it!!
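A quick check of the EV arithmetic and the equivalence that makes the popular pattern inconsistent:

    # Expected values, in $k, for the two Allais questions.
    q1 = (0.61 * 520, 0.63 * 500)   # (317.2, 315.0) -> EV favours the first
    q2 = (0.98 * 520, 1.00 * 500)   # (509.6, 500.0) -> EV favours the first
    print(q1, q2)
    # Question 2 is question 1 with 37 points of winning probability added to
    # BOTH options (61+37=98, 63+37=100), so a consistent preference cannot flip.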

Amos and Kahneman conducted studies into decision weights vs. actual probabilities and found that the possibility effect (overweighting of small chances) and the certainty effect (underweighting of large probabilities or, conversely, overweighting of the small probability of their non-occurrence) were both strongly in evidence; in their data, for example, a 1% probability attracted a decision weight of 5.5, while a 99% probability attracted a weight of only 91.2.

They also found that the certainty effect was more powerful than the possibility effect when the outcome in question was negative. People also tend to treat very small risks, or very high probabilities, as equivalent even when there is a considerable difference between them. For example, a cancer risk of 0.001% feels much the same as 0.00001% to most people, although across the US population one would cause 3,000 cancers and the other 30. Amos and Kahneman summarised their findings in The Fourfold Pattern:

Top left - GAINS at high probability (certainty effect): risk averse; accept unfavourable settlements
Bottom left - GAINS at low probability (possibility effect): risk seeking; lotteries
Top right - LOSSES at high probability (certainty effect): risk seeking; desperate gambles
Bottom right - LOSSES at low probability (possibility effect): risk averse; insurance

The top left is as Bernoulli described - risk aversion over large, probable gains. The bottom left is best exemplified by the popularity of lotteries, the bottom right by the purchase of insurance. The top right is where A&K found something new: when faced with a choice between a sure loss and a high probability of a larger one, diminishing sensitivity makes the sure loss more aversive, while the certainty effect (underweighting of the near-certain larger loss) makes the gamble more attractive than its actual probability warrants - the exact opposite of the pattern for favourable outcomes. Faced with a large loss, people will take huge, irresponsible gambles because the sure loss is too painful and the slim chance of relief too enticing [cf. gambling losses!! And poss doubling down on stocks on which you’ve already lost??]

In individual cases, it is easy to empathise with the hopes and fears that drive irrational behaviour. However, in the long term these failures to accept the true odds, these deviations from EV, are costly.

[I should try to create an equivalent of the fourfold model for long only stock investment - what are the equivalent situations and behaviours?]

CHAPTER 30

Just as buying a lottery ticket causes, or displays, overestimation of an unlikely pleasant event, so terrorism is effective because it causes the same overestimation of an unlikely horrible event. Kahneman uses the example of suicide bombings on buses in Israel. Even though people know that being the victim of a suicide bomber, or even being affected by one indirectly, is highly unlikely, the vividness, fluency and attention that such gruesome events provoke make them prime fodder for System 1 and create an availability cascade.

People generally overestimate the probabilities of unlikely events and overweight those events in their decisions. Craig Fox showed this - along with a tendency to be overly optimistic about whatever one is currently being asked to think about - in an experiment where participants were asked to assess the % probability of each playoff team winning the NBA championship. The sum of the average estimates was 240%, when coherent probabilities must sum to 100%! When asked to bet on the same tournament, the average amount wagered was $287 even though the maximum payout for any one team was $160, guaranteeing a loss of at least $127.

While prospect theory differs from utility theory insofar as it does not equate decision weights with probabilities, it still maintains that the decision weight will be the same for events of the same probability. This turns out to be untrue. People have an even lower sensitivity to probability for emotional outcomes than for monetary ones: they are much better at assessing appropriate decision weights for cash bets than for, say, the chance of receiving roses. However, according to Kahneman this is not entirely down to the emotive nature of the example; much of it is to do with the vividness of the image and the fluency with which it allows System 1 to construct a plausible, coherent story. As such, vivid outcomes are poorly judged even when they also contain monetary information, because of the power of cognitive ease.

An effect called ‘denominator neglect’ seems to confirm this: in experiments, 30-40% of people choose bowl B when asked from which of the following bowls they would rather attempt to draw a ‘winning’ red marble:
A) 10 marbles, 1 red
B) 100 marbles, 8 red
Perhaps the larger number of red marbles conjures up a more vivid image in System 1, causing people to ignore the fairly simple probability calculation that would yield the correct choice. Equally, when a risk is described as ‘one person per 100,000 will die’ it is far more vivid, and leads to more dramatic overweighting, than ‘the risk of death is 0.001%’. System 1 is far better at dealing with individuals than with abstract categories. Other examples include a disease that kills 1,286 people per 10,000 being judged more dangerous than one that kills 24.14% of the population, or one that kills 24.4 per 100! Frequency formulations are FAR more evocative than ratios or percentages.
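The simple calculation that denominator neglect overrides:

    bowls = {
        "A (10 marbles, 1 red)": 1 / 10,
        "B (100 marbles, 8 red)": 8 / 100,
    }
    for bowl, p in bowls.items():
        print(bowl, f"{p:.0%} chance of a red marble")
    # A: 10%, B: 8% -- yet 30-40% of people pick B, seduced by the image of 8 red marbles.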

If people experience a probability rather than just having it described to them - e.g. being allowed to press the buttons themselves rather than being told ‘one button pays $10 5% of the time, the other pays $1 50% of the time’ - then estimation is much more accurate and overweighting far less common. When System 1 is allowed to develop an idea of what is ‘normal’, to form a view of the ‘character’ of each button, it works better. When dealing with prompts that draw attention to unlikely events - descriptions of probabilities, very vivid images, concrete representations or explicit reminders - the mind will usually overweight them, because System 1 gets the chance to create a narrative in which the event becomes ‘real’ in the mind, or at least more likely than in purely statistical terms. However, if the mind is neither asked about a specific risk nor has experienced it, it will tend to neglect it entirely.

CHAPTER 31

[Is there a mistake in the first set of answers to the first problem? It seems to me option D should have an EV of $750 (0.75 x $1000 + 0.25 x $0 = $750), not $760 as stated.]

When faced with groups of questions that must be answered together, the answers can be framed narrowly (i.e. individually, one after another) or broadly (i.e. holistically, taken together). Humans are, by nature, narrow framers and prefer to decide each question as it arises, even when asked to consider their responses jointly.

This preference for narrow framing may cause a loss-averse person to reject a coin toss on which they stand to lose $100 for heads and win $200 for tails, because they feel the loss twice as keenly as the gain - i.e. the felt EV is $0 with losses doubled, rather than the true $50. However, Kahneman advises that loss-averse people should still take this gamble because, viewed as one of many small gambles with the same odds and payouts, the combined EV remains positive even with losses weighted double. This is an example of broad framing: the individual bet may feel neutral to a loss-averse person and therefore not worth taking, but viewed in the context of the opportunity to make many such bets it is still advantageous, even for someone who feels losses twice as keenly as gains. Broad framing, or viewing bets as part of a portfolio, decreases the emotional reaction to losses and increases the willingness to take risks. [I feel like this is BROADLY true but Kahneman is a little slippery here in his example, because he ONLY doubles the losses when all the bets are lost; combinations that involve one loss and one win retain the original value of the loss, not the doubled one, but surely a loss-averse person would feel any loss as double, not just multiple ones?? I could redo the table on p337 and see what results - see the sketch below. On the other hand, you could argue that if someone wins one toss and loses the other then they haven't actually got any OVERALL loss to feel twice as keenly!]
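A minimal sketch of the two readings the note above worries about - wincing at each toss separately vs. feeling only the aggregate result of two tosses (a loss-aversion coefficient of 2 is assumed):

    from itertools import product

    TOSS = (-100, +200)  # lose $100 on heads, win $200 on tails, 50-50
    LAMBDA = 2.0         # losses felt twice as keenly as gains

    def felt(x: float) -> float:
        return x if x >= 0 else LAMBDA * x

    outcomes = list(product(TOSS, repeat=2))  # the four equally likely two-toss results

    # Narrow framing: each toss is felt separately, then the feelings are added up.
    narrow = sum(felt(a) + felt(b) for a, b in outcomes) / len(outcomes)

    # Broad framing: only the combined result is felt.
    broad = sum(felt(a + b) for a, b in outcomes) / len(outcomes)

    print(narrow)  # 0.0  -> emotionally neutral, same as a single toss
    print(broad)   # 50.0 -> a mixed (-100, +200) pair nets +$100, so no loss is felt on it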

[This has applications for investment performance - it should be checked infrequently and on an aggregate basis.]

CHAPTER 32

With the exception of the very poor, most people’s desire for more money is not strictly economic; it is more a proxy for points on a scale of self-regard and achievement. As such, people often refuse to cut losses because doing so is seen as a form of failure, are biased against actions that may lead to regret, and draw an illusory distinction between omission (not doing) and commission (doing), because the latter seems to carry far greater responsibility.

Lots of people keep mental accounts for the different areas of their lives or the different activities they require money for - housekeeping, retirement saving, children’s education, medical emergencies, war etc. etc. Whether people score these various accounts as a ‘success’ or a ‘failure’ is largely a function of System 1. The same is true of the stocks that make up investment portfolios. If a person needs cash they are far more likely to sell a winner than a loser, because the former makes them feel good about their investment prowess and about themselves, whereas the latter does the opposite. This is called the disposition effect and is a form of narrow framing. If the portfolio is looked at as a whole, the stock to be sold should be chosen on its future prospects alone; the purchase price SHOULD be irrelevant. However, when framed narrowly, every stock has its own mental account, which System 1 insists should be closed at a profit - even when the person’s System 2 knows this is impossible!

The fact that losses are tax deductible means that this narrow mental accounting is reversed in the month before tax returns are filed, but in the other 11 months of the year people routinely sell more winners because it is pleasurable. Experienced investors are less prone to this mistake than inexperienced ones.

Often, investors will put more money into a losing position (doubling down) because it makes them feel better (the stock is much cheaper, and it lowers the average purchase price); this is related to the sunk cost fallacy. The same applies to projects and businesses, at both the individual and the corporate level. As in figure 13 from chapter 29, this is a choice between a sure loss and an unfavourable gamble, and the unfavourable gamble is often chosen. Equally, as most projects / investments have individual originators, a CEO / investor will often be more inclined to put more money into a floundering endeavour because it may save their reputation, whereas admitting defeat and moving to a new project / investment, usually someone else’s idea, would consign them to permanent failure. In this way, mental accounts and sunk costs prevent people from correctly evaluating current opportunities.

Regret and blame are emotions that are experienced far more keenly when they arise from a deviation from what is considered the default option in a given situation. As such, a person who sells a stock and buys another, but would have made more by not selling, feels much more regret than someone who doesn’t sell but would have made more by switching. Commission hurts more than omission. Equally, participants in an experiment who played a simulation of blackjack showed more regret if they answered ‘yes’, regardless of whether they were asked ‘do you want to hit?’ or ‘do you want to stand?’, indicating a default of ‘I don’t have a strong wish to do it’ as a response to both [surely this would be different depending on what cards the player had? 10 vs. 20 for example!!]. In another example, consumers who were reminded that they might feel regret about their purchases showed a preference for brand names. Fund managers indulge in ‘window dressing’ of their portfolios, removing losers and buying more of the stocks they hold that have been performing well, albeit at higher prices. All are motivated by regret or the fear of blame.

Richard Thaler has an example in which participants are asked how much they would pay to be vaccinated against a disease they have a 1/1000 chance of having; the vaccine can only be taken before symptoms appear. He also asked how much people would have to be paid to expose themselves to a 1/1000 chance of contracting the disease, with no possibility of being vaccinated afterwards. Apparently, people want to be paid 50x what they would pay for the vaccine! This is because 1) it is not seen as legitimate to sell one’s health and 2) in the second example, people who expose themselves to the disease have deviated from the default option of doing nothing. Equally, parents considering insecticides for use around their children were prepared to pay a premium for safer alternatives but not prepared to buy a more dangerous alternative at a discount. The idea of trading their child’s safety for money is a ‘taboo trade-off’, even though, logically speaking, the extra risk may be small and the money saved could be redeployed in protecting their children from more probable, and therefore more dangerous, risks. In a legal setting, the precautionary principle places the entire burden of proof on anyone who does anything that might harm another person or the environment. Clearly, this principle is costly to innovation and may even be paralysing: its strict application would have prevented innovations including aeroplanes, air conditioning, antibiotics, cars, vaccines and X-rays, to name a few, all of which have contributed more to society as a whole than the risks they posed in the early stages of their development. Enhanced loss aversion in a moral context is a strong and salient feature of our behaviour that has its origins in System 1. The balance between this intuition and efficient risk management is complex and has no simple solution.

Avoidance of regret and self-administered punishments like blame play a large, and sometimes irrationally costly, part in our lives. Considering the possibility of bad outcomes in advance can help to mitigate the pain of regret when they do occur, as it reduces the power of the hindsight bias, which makes you feel it was easy to make a better choice all along and that you were stupid to choose as you did. The psychologist Daniel Gilbert claims that actual regret is usually less painful than the anticipation of it, so advises people not to overweight it in their decisions; even if the bad outcome does happen, it will hurt less than we think.

CHAPTER 33

Single and joint evaluations can yield very different outcomes. When asked to set $ amounts of compensation for a man who lost his right arm in a robbery while he was buying something in a shop, participants didn’t give different amounts depending on whether he was in a shop he visited regularly or one he hardly ever used. However, when told about the two scenarios separately, excluding the possibility of joint evaluation, participants gave a far higher figure when the man was injured in the shop he hardly ever visited, because of the increased poignancy of the scenario. The poignancy triggers a System 1 response, whereas when asked to compare the two situations people engage System 2 and override this broadly irrelevant emotional consideration. Joint evaluation almost always involves more careful and effortful consideration, meaning System 2 is involved far more; single evaluation is far more heavily influenced by System 1. The switch in judgements between the two modes is called preference reversal.

Another example (Lichtenstein & Slovic) involves two bets [which have roughly equal EV as far as I can see]:
A: 11/36 chance to win $160, 25/36 chance to lose $15 [EV $38.47]
B: 35/36 chance to win $40, 1/36 chance to lose $10 [EV $38.61]
Respondents consistently choose B over A, but when asked the minimum price they would sell each bet for if they owned it, they price A above B!!
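A quick sanity check of those EV figures (the bet parameters are from the text; the script is just my own arithmetic):

```python
# Expected values of the two Lichtenstein & Slovic bets,
# each written as (probability, payoff) pairs.
bets = {
    "A": [(11/36, 160), (25/36, -15)],  # 11/36 win $160, 25/36 lose $15
    "B": [(35/36, 40), (1/36, -10)],    # 35/36 win $40, 1/36 lose $10
}

for name, outcomes in bets.items():
    ev = sum(p * x for p, x in outcomes)
    print(f"Bet {name}: EV = ${ev:.2f}")

# Bet A: EV = $38.47
# Bet B: EV = $38.61
```

So the two bets really are worth almost exactly the same; it is only the framing of ‘choosing’ vs. ‘pricing’ that pulls the answers apart.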

Most of us split the world into different categories and are familiar with the norms that apply to those categories. As such, things that make sense to our mind ‘within category’ may no longer do so when evaluated jointly ‘between categories’. The examples given were two emails asking for donations:
For safe breeding areas for dolphins suffering from pollution
For regular skin cancer check ups for farmworkers spending long hours in the sun
When asked individually, the dolphins attracted higher $ donations on average, but when considered jointly, which introduces a humans vs. animals comparison, contributions to the farmworkers were higher. It all depends on the mental context that we unwittingly use to frame the question, which in turn affects the substitution of easier questions for harder ones (e.g. replacing ‘what is the $ value of a dolphin?’ with ‘is the environment a good cause?’, ‘how much do I like dolphins?’ and ‘how much do I usually give to charities?’) and the intensity matching based on the answers to those easier questions.

Joint evaluations may also differ from single ones because of the ‘evaluability hypothesis’, whereby some attributes (e.g. the number of entries in a dictionary, Hsee) are hard to assess in isolation and only become meaningful in joint evaluation.

There is evidence that reversals also play a part in the administration of justice. Jurors face complex questions they do not know how to answer, which might be better approached in the broader context of joint evaluation. However, the law requires each case to be judged on its own, as a single evaluation, which seems counterintuitive and bad. Then again, context is so decisive in joint evaluation that it would also be important that the other cases chosen for comparison do not exert an overwhelming bias. Also, while scales of punishment severity are often consistent within a category, they can be wildly out of kilter across categories. For example, the highest US fine for violations of worker-safety regulations is $7k, whereas a violation of the Wild Bird Conservation Act can result in a fine of up to $25k! Penalties may be consistent within agencies but incoherent globally.

CHAPTER 34

P363-4 are very good on meaning and emotional framing. Even though two statements may have the same meaning, the words they use and the contexts they evoke in our System 1 mind make them very different propositions emotionally. People will more readily forgo a discount than pay a surcharge.

There is neuroscientific and experimental evidence to suggest the way facts, stats or choices are framed plays a large part in what they ‘mean’ to our brain, how they will be processed and what decisions they are more or less likely to prompt. However, some people are less susceptible to framing than others.

A&K designed an experiment in which participants were asked to choose between two programmes for defending a population against an outbreak of a foreign disease expected to kill 600 people:
A: 200 people will be saved
B: 1/3 probability that 600 will be saved, 2/3 probability that 0 will be saved
Most respondents choose A. But if reframed:
A: 400 people will die
B: 1/3 probability that nobody will die, 2/3 probability that 600 will die
Most respondents choose B. Preferences between the same objective outcomes reverse under different formulations. The effects of framing persist even when public health professionals are asked the same questions, demonstrating its considerable power. Our preferences are about framed problems, and our moral intuitions are about descriptions, not about substance.
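Just to convince myself that the two frames really are the same gamble, here is a small sketch (my own arithmetic, expressed as expected survivors out of 600):

```python
# Both frames, converted to expected number of survivors out of 600.
total = 600

# Gain frame: A = 200 saved for sure; B = 1/3 chance all 600 saved.
gain_frame = {"A": 200, "B": (1 * 600 + 2 * 0) / 3}

# Loss frame: A = 400 die for sure; B = 2/3 chance all 600 die.
loss_frame = {"A": total - 400,
              "B": (1 * (total - 0) + 2 * (total - 600)) / 3}

print(gain_frame)  # {'A': 200, 'B': 200.0}
print(loss_frame)  # {'A': 200, 'B': 200.0}
# Identical expected outcomes, yet people are risk averse over gains
# (choose A) and risk seeking over losses (choose B).
```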

Another example is taken from Thomas Schelling’s Choice and Consequence. A tax code that treats a family with two children as the norm can either grant an exemption for having children or, equivalently, charge a surcharge for having fewer than two. Then:
Should the exemption be larger for rich people?
Should the childless poor pay as large a surcharge as the childless rich?
The answer to both questions is ordinarily ‘no’, but the two answers are contradictory. The exemption and the surcharge are just two descriptions of the same gap in tax owed between a childless family and a family with children: ‘no’ to the first question says the rich family’s gap should be no larger than the poor family’s, while ‘no’ to the second says the poor family’s gap should be smaller than the rich family’s. Both cannot hold; our intuitions respond to the framing, not to the underlying schedule.
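A toy sketch of why the two framings describe one and the same schedule (the $1,000 gap and the incomes are entirely hypothetical):

```python
# Hypothetical: a two-child family owes $1,000 less tax than a
# childless family with the same income, however the code phrases it.

def tax_exemption_frame(base_tax, children):
    # Baseline: childless family. Children earn an exemption.
    return base_tax - (1000 if children >= 2 else 0)

def tax_surcharge_frame(base_tax, children):
    # Baseline: two-child family. Childlessness incurs a surcharge.
    return (base_tax - 1000) + (1000 if children < 2 else 0)

for base in (5_000, 50_000):        # stand-ins for poor and rich
    for kids in (0, 2):
        assert tax_exemption_frame(base, kids) == tax_surcharge_frame(base, kids)

# The schedules are identical, so "the rich should not get a larger
# exemption" and "the poor should pay a smaller surcharge" constrain
# the same $ gap in opposite directions and cannot both hold.
```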

Equally, when asked who will save more petrol if both drive the same number of miles:
Stu, switching from a 12mpg car to a 14mpg car
Gor, switching from a 30mpg car to a 40mpg one
Most people choose Gor even though the answer is actually Stu, because the mpg frame is ‘wrong’ (unhelpful and counter to reality). The question is much easier to work out on a ‘gallons per mile’ or ‘gallons per 100 miles’ basis.
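Working it through in the better frame (10,000 miles is my own arbitrary choice; any common distance gives the same ranking):

```python
# Fuel used over a fixed distance: the "gallons per mile" frame.
miles = 10_000

def gallons(mpg):
    return miles / mpg

stu_saving = gallons(12) - gallons(14)  # 833.3 - 714.3 gallons
gor_saving = gallons(30) - gallons(40)  # 333.3 - 250.0 gallons

print(f"Stu saves {stu_saving:.0f} gal, Gor saves {gor_saving:.0f} gal")
# Stu saves 119 gal, Gor saves 83 gal: the low-mpg upgrade wins.
```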

Another example of starkly different responses to the same question comes from organ donation rates in Europe. In opt-in countries, where licence holders must check a box to become donors, rates are low (Germany 12%, Denmark 4%); in opt-out countries, where they are donors unless they check a box, rates are very high (Sweden 86%, Austria nearly 100%). The fact that such a significant societal benefit is dictated by something as simple as how the question is asked is both worrisome and embarrassing.

PART FIVE: TWO SELVES

CHAPTER 35

Our experience of painful situations and our memories of them are often different. When remembering an experience, people rate it retrospectively as the average of the worst moment and the level of pain they were feeling when the incident stopped: remembered pain ≈ (peak pain + end pain) / 2. This is called the peak-end rule. People also show no difference in ratings of total pain even when durations differ greatly, so a person who experiences 10 mins of pain reports it the same as a person who experienced 5; this is called ‘duration neglect’. Because of these two effects, people will choose to undergo more pain (on a sum basis) because they remember it as less painful. For example:
A: 20 mins, peak pain 8, end pain 1 (peak-end = 4.5)
B: 10 mins, peak pain 8, end pain 7 (peak-end = 7.5)
Objectively, no one should choose A, since it undoubtedly involves more pain as a sum total, yet the peak-end rule dictates that this is not how the incident will be remembered. The System 1 mind has a preference for norms, averages and prototypes, so it records a few representative moments as memory rather than summing every moment as it was experienced, which is probably impossible for any length of time. Studies in rats using pleasurable stimuli have also shown that duration is widely ignored while intensity is well remembered.
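A minimal sketch of the rule, with made-up per-interval pain scores chosen to match the peaks and endings above:

```python
# Peak-end rule: remembered pain ~= (peak pain + end pain) / 2,
# with duration largely ignored.
def remembered_pain(trace):
    return (max(trace) + trace[-1]) / 2

# A: 20 mins, peak 8, tapering to 1 (hypothetical 2-min interval scores).
trial_a = [2, 4, 6, 8, 6, 4, 3, 2, 1, 1]
# B: 10 mins, peak 8, still at 7 when it stops.
trial_b = [3, 5, 8, 7, 7]

print(remembered_pain(trial_a), sum(trial_a))  # 4.5, total pain 37
print(remembered_pain(trial_b), sum(trial_b))  # 7.5, total pain 30
# A involves more total pain but is remembered as milder.
```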

Also mentions the ‘cold hand’ study, in which subjects are asked to put a hand in a bowl of painfully, but not intolerably, cold water for 60 secs and then remove it. In a second trial they keep the hand in for 60 secs, after which slightly warmer water is released into the bowl, raising the temperature by a degree for an additional 30 secs. When asked, having completed both trials, which they would prefer to repeat, 80% said the 90-sec one, even though objectively it can only be worse than the 60-sec trial: it includes the entirety of that trial and only adds to the overall unpleasantness. It does, however, conform to the peak-end rule, since the end pain is lower.

We think we care about the duration of experiences, but our System 1 memory records them by their intensity and how we felt at the end. We seem to have a memory that largely ignores duration, even though, if asked, we would all say duration matters a great deal.

CHAPTER 36

The remembering self composes stories and keeps them for future reference. We are interested in other people’s stories, as shown by the news, history and even imaginary ones in novels. We are sad for a man who died believing his wife loved him when in fact she had a lover for many years and only stayed with him for his money: his life was happy, but we are sad about the story. We may evaluate entire lives this way too, giving far too much weight to the end, as we do in many other areas. In one experiment, participants judged a 30- or 60-year life of happiness to be worse than the exact same life followed by 5 slightly less enjoyable years, which surely no one would choose for themselves!

We also like to tell ourselves stories and create stored memories. Many people take vacations for this reason and would, hypothetically, pay far less for holidays they would not be able to remember than for ones they would. Indeed, some say they would not go at all, indicating a complete dominance of the remembering self over the experiencing self. Would people feel the same about undergoing a painful experience and then forgetting it? Kahneman thinks so, but I would be amazed, even though it is, in some senses, true to say that we ARE our remembering selves and not our experiencing ones - if it makes any sense to divide them!

CHAPTER 37

In an attempt to collect data about the unremembered experiencing self, Kahneman set up phone prompts asking people to score their current experience and the strength of their emotions. Another method, the Day Reconstruction Method (DRM), asks people to recall their day in detail. A U-index scores how unpleasant a given activity is: a score of 100% would mean that for the whole of the time spent on that activity people are more unhappy than happy. In large-scale studies it was found that a small number of people account for the vast majority of time spent in distress [90:10 rule?], for a large variety of reasons. A study of 100 mid-western US women found the following U-scores:
Morning commute 29%
Work 27%
Child care 24%
Housework 18%
Socialising 12%
TV watching 12%
Sex 5%
U-scores were 6 percentage points higher on weekdays. It also found that being with one’s children doesn’t rank very highly in terms of enjoyment. These studies indicated that the easiest way to have an impact on your own happiness was to control your time as much as possible so you spend more time doing the things you enjoy.
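A quick sketch of how a U-index could be computed from DRM-style episodes (the episodes, durations and feelings below are entirely made up):

```python
# U-index: the fraction of time in which the dominant feeling is
# negative. Episodes are (activity, minutes, dominant_feeling) tuples;
# all values below are hypothetical.
episodes = [
    ("commute",    40, "negative"),
    ("work",      240, "positive"),
    ("work",      120, "negative"),
    ("childcare",  90, "positive"),
    ("tv",         60, "positive"),
]

def u_index(eps):
    total = sum(minutes for _, minutes, _ in eps)
    unpleasant = sum(minutes for _, minutes, feeling in eps
                     if feeling == "negative")
    return unpleasant / total

print(f"U-index: {u_index(episodes):.0%}")  # 29% of the day unpleasant
```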

Comparing this sort of data (experienced well-being) with declared evaluations of happiness (remembered), better-educated people tend to give higher evaluations of their own lives but do not report greater experienced well-being. Ill health has a more severe effect on experienced well-being than on life evaluation. Being poor makes people less happy and amplifies the effects of being ill and of other adverse experiences such as divorce and loneliness. The very poor also don’t get the weekend boost in well-being that others enjoy. A household income of $75k proved to be the satiation level beyond which more money did not really increase experienced well-being, which may suggest that higher income comes with a reduced ability to enjoy life’s small pleasures, having spent lots of money on grander ones! Higher income did, however, keep raising reported satisfaction well beyond the point at which it ceased to have any positive effect on experience.

CHAPTER 38

When people are asked to evaluate the total happiness of their lives they are overwhelmed and substitute this large, complex question with simpler ones, such as their current mood or a small sample of highly available ideas, rather than carefully weighing the domains of their life.
The book includes a graph of life satisfaction in the years around marriage: reported satisfaction climbs in the run-up to the wedding and declines again in the years after it.

Contrary to the initial impression, this does not mean marriage makes you much happier and then sadder as you adapt and the novelty wears off. Kahneman thinks heuristics are at work: people cannot easily evaluate how happy they have been or are, so they reach for recent and vivid memories, and for people getting married or recently married these will usually involve their happy recent, or prospective, marriage! DRM studies reveal that married life is comparable with single life in terms of experienced happiness; marriage is neither better nor worse, as it changes some aspects of life for the better and some for the worse.

Another reason for the low correlations between individuals’ circumstances and their satisfaction with life is that both how life is experienced and how it is evaluated are subject to the genetics of temperament. People who state as teenagers that a high income is important to them go on to be more satisfied with their lives than average if they achieve that goal and significantly less satisfied if they do not. The difference in life satisfaction (measured on a 5-point scale) between people earning >$200k and those earning <$50k was 0.57 for people who had rated a high income as important at 18 years old vs. 0.12 for people who had not. The goals we set for ourselves as young people are key in determining our later evaluations of our lives.

The Focusing Illusion is a form of WYSIATI: whatever we focus on disproportionately determines our beliefs about our overall happiness. People who have extremely pleasant or unpleasant experiences are affected by them abnormally in the short term but adapt in the longer term. So, when asked to think about the happiness of Californians, people, including Californians, think about the weather and other prototypical, highly available aspects of Californian life and judge them to be happier than average. However, they are no happier, and the climate makes almost no difference to their reported well-being unless they are prompted to focus on it against a highly available contrasting alternative. In the same way, most people judge victims of crippling accidents to be less happy than most, which is true in the short term but not in the long term, because the new situation becomes that person’s normality. Adaptation to a new situation, good or bad, consists largely of thinking about the incident (the marriage, the accident) less and less often. The exceptions to this rule of adaptation appear to be chronic pain, exposure to loud noise and depression.

The remembering self seems especially susceptible to the focusing illusion. Colostomy patients are shown to be no more or less happy than the healthy population in terms of experienced happiness, yet patients say they would trade away years of their life for a shorter life without a colostomy, and patients whose colostomy had been reversed were prepared to give up even more of their remaining years to stop it returning. Both findings indicate that the remembering self is unwilling to consent to states of being that the experiencing self copes with quite comfortably. This may be down to the focusing illusion, which biases attention towards whatever is initially exciting. Buying a new car is exciting, so we value it by the intensity of that initial feeling and neglect how quickly attention to it will fade; activities that keep commanding attention over a long time, like learning a language or a musical instrument, are undervalued by comparison. Again, time is neglected.

The focusing illusion involves attention to certain moments and neglect of what happens at other times. The mind likes and remembers stories but is not well equipped to process time.