Week 7: Introduction to Unit 6

“Introduction to Unit 6 … Some Recent Nudging Interventions … Decision Aids … Disclosure … Wrapping Up … The Last Mile … Debate … Skills and Knowledge”

Summaries

  • Unit 6 > 6.0 Introduction to Unit 6 > 6.0.2 Unit 5 Debate Debrief
  • Unit 6 > 6.1 Some Recent Nudging Interventions > 6.1.1 Recent Nudge Experiments
  • Unit 6 > 6.2 Decision Aids > 6.2.1 A Hierarchy of Decision Aids
  • Unit 6 > 6.2 Decision Aids > 6.2.3 Advice Taking
  • Unit 6 > 6.2 Decision Aids > 6.2.5 Decision Support Systems
  • Unit 6 > 6.2 Decision Aids > 6.2.7 Intuition + Models
  • Unit 6 > 6.3 Disclosure > 6.3.1 Disclosures
  • Unit 6 > 6.4 Wrapping Up > 6.4.1 The End, The Next Steps, and the Beginning
  • Unit 6 > 6.5 The Last Mile > 6.5.1 The Last Mile
  • Unit 6 > 6.6 Debate > 6.6.1 Debate 6

Unit 6 > 6.0 Introduction to Unit 6 > 6.0.2 Unit 5 Debate Debrief

  • DILIP SOMAN: Which is better for long term welfare, nudging or education? And on the video we had four sets of comments.
  • First we had Adele Atkinson from the OECD, as well as Professor Annamaria Lusardi from George Washington University, who both made the comment that education trumps nudging because real life is complex: different people have different challenges, there are always dynamic issues in terms of- in that case- financial planning, and people need to be equipped to handle all those complex conditions, those complex real-life situations.
  • I presented a series of arguments, based not just on my own research but also on conversations I had with a few other people, where the point was that education is great but on its own often doesn't translate into action at the moment of decision.
  • Nudging is therefore an indispensable part of the toolkit of anybody who wants to improve individual welfare.
  • At some point in time, as if I hadn’t given you enough work to do, I would encourage you to google Brian’s website and you’ll see some interesting research that touches on a lot of the ideas we’ve talked about, nudging and choice architecture.
  • So simply by using a nudging strategy, you’re not leaving the retailer out, you’re not essentially hitting their profits.
  • Two points of view supporting the education side, two points of view supporting the nudging side.
  • Let’s focus a little bit on when do we nudge people, when do we educate them.
  • For obvious reasons I’m in the education business.
  • I would never stand up in front of a camera and say that education is completely useless.
  • We’ve got to nudge them to get a membership to the health club.
  • We’ve got to nudge them to buy fitness equipment.
  • We’ve got to nudge them to become vegetarian or eat spinach or whatever else it is that actually marks the beginning of that program.
  • Once you’ve nudged them into action, that’s when the education part kicks in.
  • Nudging would be a great thing to do to get people started and then let education take over.
  • JOHN LYNCH: So which is better for trying to improve consumers' long-run welfare? Is it better to invest our resources in financial education or in nudging? My point of view is that there is scope for each.
  • I would say one of the things that has been argued by the proponents of nudging is that financial education has had pretty mixed success.
  • My research shows that the effects of financial education decayed dramatically over time.
  • So we have to be sensitive to the idea that if we’re trying to make consumers better off by financial education, the education has to be what I call “Just in time” financial education, close in time to the specific decision we’re trying to influence.
  • Otherwise, what my research shows, is that if you give somebody a course, for example, a financial education course, and try to look for effects on behaviors two years later, there’s really nothing.
  • So financial education has a role, but it has to be timely.
  • SPEAKER 1: So there you have it, two ways in which we can meaningfully integrate the benefits of nudging with the benefits of education.
  • Way number one, get nudging to start people on programs, use education to push them through the program.
  • Think about education that is timely, that is relevant, and that is delivered in a dynamic sense when people actually need it.
  • The second thing, which is what John Lynch was talking about: if there are behaviors that are standard- that you want everybody to do- nudging is a dominant strategy, and it is easier to nudge people.
  • If, in fact, there is what we call heterogeneity in what the right behavior is, then perhaps education dominates nudging.

Unit 6 > 6.1 Some Recent Nudging Interventions > 6.1.1 Recent Nudge Experiments

  • In particular, the three nudges we're going to talk about are enhanced active choice, reminders, and finally, form designs to get people to be more honest.
  • It’s October in Canada and it’s flu season, and in fact, it is flu season in many parts of the northern hemisphere at this point in time.
  • Every time the flu season comes around, you often see messages like this asking you to go to your nearest clinic and get a flu shot or an injection to prevent yourself from getting the flu.
  • Most of us don't get a flu shot, and the question is, how can we design an intervention to get more people to protect themselves from the seasonal flu? This was worked on by Punam Anand Keller and her colleagues at Dartmouth, who worked with a company that had an annual flu program: if you did indeed get a flu shot, you not only got protection from the flu, but the company gave you an incentive of $50. Even then, there were a large number of people who did not get the flu shot.
  • What Punam and her colleagues did was experiment with three different ways of asking people whether in fact they wanted to get a flu shot.
  • What are the bottlenecks in the decision to get a flu shot? Bottleneck number one, it’s not an active decision.
  • Bottleneck number two, when most people think about the decision to get a flu shot, the benefits of not getting a flu shot are salient.
  • The benefits of getting a flu shot seem to recede into the background.
  • Every time participants in that particular organization got a message about a flu shot, they were then presented with a card or a questionnaire that asked them for their intention to get the flu shot in a standard opt-in condition.
  • They saw the message with a checkbox that said, yes, I want to get a flu shot this fall.
  • This is what is called an enhanced active choice and it said, check one of the following; yes, I will get the flu shot to reduce my risk of getting the flu and because I like the $50 incentive.
  • Or no, I won't get a flu shot this fall because I don't care about my risk of getting the flu, and I don't care for $50. So what's happened here is that the way in which you frame the question makes the cost of not getting a flu shot salient.
  • Now, if you actually see the question framed this way, you would have to convince yourself that you're completely irrational not to get the flu shot.
  • Of the people who saw the standard opt-in condition, 42% did express a desire to get a flu shot.
  • Turning to the second nudge, reminders: researchers found, using a randomized control approach in three different experiments, that simply sending reminders increased savings by about 6%. These researchers also tried other things with the reminders.
  • They tried making the reminders specific and concrete, reminding people what their goal was, and framing the messages as a gain versus a loss.
  • What they found was that, to the extent that the reminders are specific and remind people of a very concrete goal, savings go up by 16% and not just 6%. So a simple reminder served as a nudge.
  • The bottleneck here was the fact that the decision was passive, the reminder made it active.

Unit 6 > 6.2 Decision Aids > 6.2.1 A Hierarchy of Decision Aids

  • DILIP SOMAN: Crutches to aid decision making are essentially gadgets or devices or engines that help people make better choices.
  • We are going to talk about five different kinds of crutches that help people make better decisions.
  • The simplest crutch you can provide people is data or feedback.
  • Simple idea- if you had a gadget or a device that simply told people who pay by credit card how much they had spent up to that point in a given month, would spending go down? And the answer is yes.
  • Another simple experiment that we did at a fairly small scale in the city of Toronto was we actually gave people feedback on the amount of garbage they had produced in a given period of time.
  • If you are struggling with a decision, ask someone else to help you make that decision.
  • So the second piece of decision support that one could give people is just the input, the advice, the recommendation of somebody else.
  • So retrieving an instance that is similar to the one at hand and using the information from that instance to make a judgment about the current instance is an example of what we call a case-based or a data-based decision support system.
  • If you actually had a simple online device that knew your weights for each of these attributes and could simply accept values for an option on each attribute, it could compute a utility and tell you what the score for a given individual, a given company, or a given option is going to be (a minimal sketch of this weighted-score idea follows this list).
  • Giving people a consumption vocabulary allows them to better develop a framework to make a decision.
  • Objects of art, bottles of wine, fine quilts and classical music are all hard for people to evaluate because they don’t know the right attributes.
  • So providing them with the vocabulary to evaluate and give weight to each of those attributes will actually improve the quality of their decision making.
  • So five simple ways in which we can provide crutches to help people make better choices.
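
As a rough illustration of the weighted-score idea mentioned above, here is a minimal Python sketch; the attribute names, weights, and option values are hypothetical examples, not figures from the course.

    # Minimal sketch of a weighted-attribute scoring aid.
    # Attribute names, weights, and option values are hypothetical examples.

    def utility_score(option, weights):
        """Weighted sum of attribute values (higher = better)."""
        return sum(weights[attr] * value for attr, value in option.items())

    weights = {"price": -0.4, "quality": 0.35, "service": 0.25}  # the user's own weights

    options = {
        "Option A": {"price": 7, "quality": 8, "service": 6},
        "Option B": {"price": 5, "quality": 6, "service": 9},
    }

    for name, attrs in options.items():
        print(name, round(utility_score(attrs, weights), 2))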

Unit 6 > 6.2 Decision Aids > 6.2.3 Advice Taking

  • Or you could work from the height of the coins, through some process where you have a judgment of how many coins there are at each level and multiply up.
  • There are many ways in which we could go about making this judgment.
  • Let's imagine that you ask two people how many coins there are, and let's say that those two people give you a number each.
  • How would you aggregate their two judgments in order to arrive at a judgment of your own? Now, guessing how many coins are in this jar is simply a metaphor for many kinds of judgments that we make about things that are going to happen in the future.
  • You hire a new employee, you’re making a judgment about how well he or she’s going to do for you.
  • You buy a stock, you’re making a judgment about what its price is going to be five years from now.
  • Let’s say you ask two people for how many coins do they think are in this jar.
  • One of the judges is off by 10 coins, the other one is off by 20.
  • Let’s say you took the average of the two judges.
  • The average will be 55, and the average error is 15.
  • It’s better than one judge, but not as good as the other judge.
  • Now suppose instead that one judge over-predicts and the other under-predicts. What's happened is that your average is 39, and the error of the average is simply one.
  • If you ask people for estimates, one of two things could happen (see the numerical sketch after this list).
  • Both judges might err on the same side- both over- or both under-predict- in which case your average is better than one judge, but not as good as the other one.
  • Or one judge might over-predict while the other under-predicts, bracketing the true value; if that happens, the average is always going to be better than either of the two judges independently.
  • What's that telling us? That's telling us that in many real-world cases, averaging the opinions of two judges is often better than listening to the advice of any one judge on their own.
  • If two experts gave you advice, and you were asked, what do I do with the advice, what would most of us do? Here’s what we do.
  • We try and figure out which of these two experts is the better expert, and we go with that judgment.
  • You might be better off simply averaging.
  • In particular, if you set your two experts up such that the dispersion between the two is going to be high, one of them is likely to over-predict and the other to under-predict.
  • Then averaging is always a better strategy than relying on any one judge.
  • Now, much of the work in this area has been done by two professors at Duke University, Jack Soll and Rick Larrick.
  • What Jack and Rick essentially argue is that, while we can prove mathematically that averaging is a dominant strategy, most of us don’t believe it is.
  • Most of us believe that averaging results in an average judgment.
  • Why? Because now, you and your judge bring different information to the table, and therefore, if you averaged your two judgments, you’re more likely to be accurate.
  • Ms. Y is perhaps a better choice, because now you’re bringing more data, more information to the judgment that you’re going to make.
  • They would go and ask venture capitalists, or entrepreneurs questions such as, what do you think is going to be the likelihood that your company is going to be successful, defined in a given way, five years from now? And let’s say the entrepreneur said 50%. They would then say, well, but gee, the average success rate in this industry is only 10%. That’s the difference between an inside view and an outside view.
  • Simply average them, and that’s likely to be more accurate than any one of their own judgments.
  • Weight your advice at the same level as the judge’s advice.
  • Make sure you weight the judge’s advice as much as you weigh your own judgment.
  • The phenomenon of the average of two judges being better only gets stronger when you add three, four, five, or more judges.
  • What do I mean by that? Think about making the same judgment again under a different set of circumstances.
  • If you made the original judgment at work while you were busy in the middle of a meeting, try and think about the same problem in a different context, when you’re relaxed, when you’re sitting at home.
  • It turns out that, simply by thinking through the same problem differently, you might come to different judgments; take the average, and that is likely to be better than either of those two judgments on its own.
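
To make the coins-in-a-jar arithmetic concrete, here is a minimal Python sketch of the two cases described above; the true count and the individual guesses are hypothetical numbers chosen only to illustrate same-side errors versus bracketing.

    # Minimal sketch: why averaging two judges' guesses can help.
    # The true count and the guesses below are hypothetical illustrations.

    def error(guess, truth):
        return abs(guess - truth)

    truth = 40  # assumed true number of coins

    # Case 1: both judges err on the same side (both over-predict).
    a, b = 50, 60
    avg = (a + b) / 2                                            # 55
    print(error(a, truth), error(b, truth), error(avg, truth))   # 10 20 15.0
    # The average beats the worse judge but not the better one.

    # Case 2: the judges bracket the truth (one over, one under).
    a, b = 50, 28
    avg = (a + b) / 2                                            # 39
    print(error(a, truth), error(b, truth), error(avg, truth))   # 10 12 1.0
    # The average beats both judges.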

Unit 6 > 6.2 Decision Aids > 6.2.5 Decision Support Systems

  • DILIP SOMAN: A decision support system simply refers to the use of any computer-based data delivery system that can help people make a better decision.
  • We talked about two kinds of DSSs. The first one is a model based decision support system.
  • The second one is a case based, or a data based, decision support system.
  • Let’s think first about a model based decision support system.
  • If you had a computer system that used this equation, all it's going to ask you to do is plug in the values of the four attributes, and it would then give you the credit rating score as output.
  • You can then make a decision based on that, whether you would like to give it a loan or not.
  • So that’s a simple model based decision support system.
  • Let’s think about a case based DSS. What happens in a case based DSS is you get an application, you’re going to ask the system to find a previous instance of an applicant that looks similar to this particular application.
  • Here’s what an interface for a case based decision support system might look like.
  • What the computer is going to do is follow a simple algorithm- minimizing the sum of squared differences, or equivalently the Euclidean distance, in the four-dimensional attribute space between the new applicant and its existing database of cases (a minimal sketch follows this list).
  • Jayrod, Lansco, Lobsen, and MS and Z- it has pulled out four companies that, based on the criteria the model uses, are similar to the new applicant.
  • How does that change your prediction of the score? So that's a case-based system.
  • Now in a series of experiments, Hoch and Schkade actually compared these two different kinds of systems.
  • Managers were either given a case based decision support system or a model based system, or in some cases, both.
  • They found that when environments are stable, nothing major is happening in terms of changes, no new products being launched, no income shocks, nothing dramatic is happening, the model based decision support systems actually outperform the case based decision support systems.
  • In other words, managers that use the model based systems end up making more accurate judgments or predictions than managers that use the case based systems.
  • Turns out when the environment is noisy, when in fact there is, let’s say, a recession or a new product has been launched, or some new regulation, where now the old model might not be true anymore, the case based system actually is a little bit better than the model based system.
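
As a rough illustration of the case-based idea, here is a minimal Python sketch that retrieves the past cases closest to a new applicant by Euclidean distance; the attribute names, companies, and credit scores are hypothetical placeholders, not the ones used in the course.

    # Minimal sketch of a case-based decision support system: retrieve the past
    # applicants closest (in Euclidean distance) to a new applicant.
    # Attributes, companies, and credit scores are hypothetical placeholders.
    import math

    past_cases = [
        # (company, [revenue, debt_ratio, years_in_business, profit_margin], credit_score)
        ("Company A", [12.0, 0.40, 8, 0.10], 71),
        ("Company B", [30.0, 0.70, 3, 0.02], 55),
        ("Company C", [15.0, 0.50, 10, 0.08], 68),
        ("Company D", [9.0, 0.30, 6, 0.12], 74),
    ]

    def distance(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def nearest_cases(new_applicant, cases, k=2):
        """Return the k past cases most similar to the new applicant."""
        return sorted(cases, key=lambda c: distance(c[1], new_applicant))[:k]

    # In practice, attributes should be standardized so no single scale dominates.
    new_applicant = [13.0, 0.45, 7, 0.09]
    for company, attrs, score in nearest_cases(new_applicant, past_cases):
        print(company, score)  # use these similar cases to inform the prediction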

Unit 6 > 6.2 Decision Aids > 6.2.7 Intuition + Models

  • Should I use my intuition, my judgment, my expertise in making a choice, or should I rely on a model? Now let’s think about the difference between the expertise of a model versus the expertise of an expert, a manager, a policy maker.
  • Why does a model outperform an expert? There are three simple reasons. First, models are consistent.
  • So anytime you give the model the same set of data, it is always going to return to you the same prediction.
  • Second, models don't care about framing; they interpret information down to its very basics, and they are not influenced by organizational politics.
  • Third, models never get tired; they don't get fatigued.
  • In particular, there are four reasons why experts make better decisions or better judgments than models.
  • First, you’ve got to keep in mind that a model only knows what the expert tells it.
  • It’s the expert that decides how to value those cues.
  • So if you don’t have an expert, you’re not going to have a model in the first place.
  • Finally, experts have access to more cues, more pieces of information, than models ever do.
  • Remember when we talked about interaction effects? Experts can actually think through interaction effects.
  • Both the expert and the model are bringing something to the table.
  • The question is, which of these should we choose? That question was precisely at the heart of an interesting research paper by Bob Blattberg and Steve Hoch, where they compared the quality of the predictions, and in fact the quality of the outcomes, for a model versus an expert.
  • In every experiment, either a manager got a bunch of data or a model was given the same bunch of data.
  • Hoch and Blattberg simply looked at the performance of the manager versus the performance of the model.
  • The pink bars stand for the performance of the model.
  • The violet bars stand for the performance of the expert.
  • What do the green bars show? For the green bars, they simply took an average prediction- a 50-50 average of what the model said and what the manager said.
  • This means that the averaged prediction of the model and the manager is better than the model alone or the manager alone.
  • Remember: if you take two experts and average their predictions, the average is better than the prediction of either expert on its own.
  • Think about the expert in this case and the model as two different experts.
  • Make your own judgment independent of what the model says, let the model make its judgment, look at both outputs, take the average, and you will do better than either yourself or the model independently (a minimal sketch follows this list).
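
Here is a minimal Python sketch of that 50-50 blending rule; the realized outcomes and the model's and manager's forecasts are hypothetical numbers, used only to show how the blended prediction is scored against each source alone.

    # Minimal sketch: blend a model's forecasts with an expert's forecasts 50-50
    # and compare average absolute errors. All numbers are hypothetical.

    actual  = [100, 120,  90, 150, 110]   # realized outcomes
    model   = [ 95, 130, 100, 140, 105]   # model forecasts
    manager = [110, 115,  80, 160, 120]   # expert forecasts
    blended = [(m + e) / 2 for m, e in zip(model, manager)]

    def mean_abs_error(pred, truth):
        return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

    print("model:  ", mean_abs_error(model, actual))    # 8.0
    print("manager:", mean_abs_error(manager, actual))  # 9.0
    print("50-50:  ", mean_abs_error(blended, actual))  # 1.5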

Unit 6 > 6.3 Disclosure > 6.3.1 Disclosures

  • DILIP SOMAN: We're going to spend the next few minutes talking about perhaps one of the most standard levers used by policymakers all over the world- disclosure.
  • Disclosure simply refers to the idea that if you make information accessible to people, it improves their ability to make decisions, because you have told them all there is to know about that particular decision-making task.
  • Or every country has a regulatory authority that provides disclosure guidelines for financial markets.
  • So the goal of disclosure is simply to make sure that people have information.
  • So for example, there’s mandated disclosure.
  • Every government, every regulatory authority, will require firms to disclose certain data when they launch products and services and financial instruments.
  • So a conflict of interest disclosure is one where an agent informs the principal about the fact that they might have the incentive to recommend certain products.
  • President Obama in the United States has spoken about the fact that all data collected by the US government should be made accessible to the citizens of the United States.
  • We’ve heard similar disclosure policies from several other governments.
  • It’s fair to say that over the last few years, we have now moved towards a regime of comprehensive disclosure, where the idea is that the more you disclose, the better informed people are, and therefore the better decisions they will make.
  • Does disclosure work? Well, here are some simple human truths.
  • Truth number one- providing information doesn’t mean that that information is read. Truth number two- reading that information doesn’t mean that the information is understood and used in making a decision.
  • Truth number three- once you get past a certain point, a certain quantity of information, the more information you provide, the more likely it is that the information will not be used at all.
  • So these are three things that we’ve learned from this course that should worry people who are proponents of disclosure.
  • If you disclose too much, chances are good that that disclosure will not work at all.
  • Now earlier this year, Will Tucker and Dick Thaler wrote an interesting and important article in the Harvard Business Review, and they made a distinction between disclosure and smart disclosure.
  • So what is smart disclosure? Smart disclosure is simply the idea that you can disclose as much data as you want, but you should provide consumers with the tools to curate and summarize that data into something that’s meaningful for them.
  • So rather than simply providing copious amounts of data, can you provide people with engines- and they use the term “Choice engines” in their article- that will take the data and present it in a simple, meaningful form that actually helps the consumer make a better decision? So curating the data is key, and customizing the data to what that individual needs is the second key.
  • What BrightScope does is it basically collects information from a very large number of 401(k) plans- these are retirement plans in the United States, about 45,000 of them- and it takes the data and it can summarize them into an overall score which captures the performance of each of those funds.
  • So it is an engine that looks at a large mass of data but allows the user to specify their criteria, and then takes the data and summarizes it for the user along those criteria.
  • I could report one hour's worth of my own consumption data, and it will then project that data into what my annual bills are going to look like (a minimal sketch of this kind of projection follows this list).
  • These are all examples of initiatives where disclosure is not simply providing people with humongous amounts of data, but allowing that person to specify what data they want, and then curating and simplifying that data for them.
  • Let’s turn our attention onto a second kind of disclosure for a moment, and that’s the disclosure of conflicts of interest.
  • Or it somehow lifts the burden of disclosure, if you will, and now as an adviser, I’ve basically told you that I’m getting paid for giving you bad advice.
  • So this brings us back of the key question- is disclosure good? In theory, if people are perfect processors of information, yes.
  • We know that people are not, right? Will smart disclosure help us achieve this? Perhaps yes.
  • If that smart disclosure engine could take all of the data and make it usable, then perhaps it will increase the quality of decisions made.
  • We’ve seen research over the past many years which shows that giving people more data makes the confidence go up but does not change the accuracy.
  • So you might actually have kinds of biases or certain situations where not only does disclosure not help, it might actually hurt.
  • We also saw that in the case of Sunita Sah and her experiments with the disclosure about conflicts of interest.
  • Let me end by talking about what I'm going to call the perverse effects of disclosure.
  • This is where disclosure actually helps, but not because it provides consumers with better knowledge.
  • So it’s not that consumers are using the information, but the burden of disclosure makes it more likely that the service provider will actually increase the quality.
  • This is the likelihood that a vehicle is going to topple and roll over- in particular, an important piece of data for vehicles that have a high wheel base.
  • That’s, again, a similar effect where the manufacturer knows that the data is going to be made public, and therefore they try and make sure that the quality of the product goes up.
  • So disclosure can certainly help in many indirect ways like this.
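
As a toy illustration of the "choice engine" idea- projecting a small disclosed sample of usage data into something decision-relevant- here is a minimal Python sketch; the usage figure and the rate plans are entirely hypothetical.

    # Minimal sketch of a smart-disclosure "choice engine": project one hour of
    # electricity usage into annual bills under different rate plans and show
    # the cheapest first. All figures are hypothetical.

    HOURS_PER_YEAR = 24 * 365

    def annual_bill(hourly_kwh, fixed_monthly_fee, price_per_kwh):
        # Naive projection: assume the disclosed hour is typical of the whole year.
        return 12 * fixed_monthly_fee + hourly_kwh * HOURS_PER_YEAR * price_per_kwh

    hourly_kwh = 1.2  # one hour's worth of disclosed consumption data

    plans = {
        "Plan A": {"fixed_monthly_fee": 10.0, "price_per_kwh": 0.12},
        "Plan B": {"fixed_monthly_fee": 25.0, "price_per_kwh": 0.10},
    }

    bills = {name: annual_bill(hourly_kwh, **p) for name, p in plans.items()}
    for name, bill in sorted(bills.items(), key=lambda kv: kv[1]):
        print(f"{name}: ${bill:,.0f} per year")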

Unit 6 > 6.4 Wrapping Up > 6.4.1 The End, The Next Steps, and the Beginning

  • DILIP SOMAN: Every time I come to the end of any course, I’m always reminded of a fictional character called Father Guido Sarducci.
  • Now, I hope most of you are here not just for the five-minute version but for a much bigger objective, which is to think about behavioral economics differently.
  • Let me quickly remind you what you've done, and then push on to talk about some key messages I have on next steps, both for the course as well as for using behavioral economics effectively in your careers and your lives.
  • We studied the principles of behavioral economics, we studied the methods, and we studied applications.
  • We looked at two kinds of applications, first, choice architecture and nudging, and second, designing decision support systems or tools to improve decision making.
  • Apply the thinking principles to come up with an interesting, creative solution for one of the two nudge challenges that we spoke about and that you’ll see a lot more of on your EdX web pages.
  • Thought number one- in behavioral economics, everything matters.
  • Remember when we went to the lab, we heard this interesting point that Julie made, which was that every part of the context in which a decision is made in the lab is an opportunity for a researcher to create a manipulation and to influence choice.
  • Choices depend not just on option A and option B, which are presented to the decision maker, but also on the color of the table around them, the color of the walls, the height of the ceiling, the ambient temperature.
  • What does that mean? That means that developing a theory of decision making is complex.
  • So when you audit the decision-making process, rid yourself of the illusion that there are certain things that are relevant and other things that are not.
  • Anything could potentially influence the outcome and make sure you think about every single thing.
  • We’ve talked about the power of changing defaults on changing decisions.
  • Defaults are traditionally created because somebody- 50, 20, 100 years ago- thought that that was the right way to make a decision.
  • Every decision has a default, but very little thought goes into designing the default.
  • So rather than accepting a given way of asking a question, or accepting a given process that an individual would need to go through to make a decision, challenge it and see if there's a better way of doing it.
  • We started off in the 1970s, 1980s, and 1990s with basic ideas of rational decision making.
  • Every single day, we learn something new about decision making.
  • Advice number 4- in your organization, in your work, whether you’re a policy maker or a business, it is important to develop a culture of data.
  • Time and again, we've seen that unless policies or decisions are informed by data, there is very little for them to stand on.
  • Let the data tell you and work with experiments to try and make your decisions more behaviorally informed.
  • So in the interest of the field as a whole, see if you can disseminate your learnings as best as you can.
  • Even if they failed and your nudging strategies didn’t work, share them so others won’t waste time doing exactly what it was that did not work.
  • If you find something that didn’t work for somebody else but worked for you, that’s also an interesting report to file.
  • It’s time to focus on so what? In this course, we talked about two “So whats”- choice architecture and tools.
  • While all of you are believers- we've all drunk the Kool-Aid together, and we all understand that behavioral economics is here to stay- there are a lot of people who still have not been exposed to the field.

Unit 6 > 6.5 The Last Mile > 6.5.1 The Last Mile

  • DILIP SOMAN: I want to spend a minute walking you through The Last Mile.
  • The Last Mile is a book that talks about the notion of behavioral insights, and why it is important for organizations to have a new, last mile framework, for thinking about value creation and value delivery.
  • I'm sure all of you have taken a trip down the roadways, where you go along an expressway, you're zipping along, the car's on cruise control, and everything looks fine.
  • Streets become narrower, there is construction, there are drivers trying to make a left turn across the street that might hold you up.
  • Highway journeys are typically more efficient than journeys on city streets.
  • My belief is that value creation systems are pretty much the same as highways and city streets.
  • Think about the first mile of any value creation system.
  • Once all of that is done, we take the product to market, on what I would call the last mile.
  • The last mile is inherently more challenging, because you are now required to create actions that change, in response to how your constituents behave.
  • In The Last Mile book, we talk about three different aspects of developing last mile solutions.
  • We talk about experimental design and field studies, and more importantly, why it is essential that organizations must master these techniques, to be able to effectively master the last mile.
  • How do we help consumers protect their online data? You could think about an "equip" strategy- perhaps a privacy literacy program, where you teach people what sorts of information they should and should not share online.
  • That makes the last mile inherently hard to model.
  • It is my belief that, over the past many years, organizations- be they governments, or not-for-profits, or even for-profits- have spent way too much time on the first mile, and way too little on the last mile.
  • Hopefully, this book will help you and your organization achieve excellence at the last mile.

Unit 6 > 6.6 Debate > 6.6.1 Debate 6

  • Well my career at Procter & Gamble was focused on creating decision support methods and models.
  • So you'd think that, based on working for Procter & Gamble- a highly data-based decision culture- and working with models, I would be very much in favor of outsourcing decisions to the modelers.
  • All models are generalizations and all decisions are situational.
  • So models are built on grand averages of the way things happen but every decision is situated in a very specific context.
  • So somebody made a decision about what factors to include, and what data to include.
  • It’s not human nature to concede decision making to models.
  • What self-respecting manager is going to allow a model to take all the credit for the brilliant decisions at hand?
  • MIN ZHAO: I’ve never been a big fan of decision models especially for policy and managerial situations.
  • Just like calculators reduce people’s ability to do math, I think the use of decision models, or aids will also reduce people’s ability to solve problems or make decisions over time.
  • CLAIRE TSAI: I’m a big believer in decision aids for at least two reasons.
  • So you would first have to convert the attributes into numbers, then aggregate them into useful data that can help you make decisions.
  • So I think decision aids can really help people use information in a more consistent manner, and help people avoid the biases that they often fall prey to when they make decisions.
  • SARA N-MARANDI: We should definitely consider the development and use of decision making models that aid us in our decision making process.
  • There's a lot of research out there that suggests that people, if left to their own devices, will make biased and irrational decisions.
  • People- real people in general- are inconsistent, they're ill informed, they're kind of weak in their decision making, and lazy.
  • As a result, in general, people are inconsistent and they make irrational decisions that are biased.
  • The use of decision-making models is really just meant to help us remove some of those biases, force us to pause and think, and actually consider our options without the biases.
  • KELLY PETERS: I think it’s fantastic to help people with decision aids and even algorithms to help them make a decision.
  • We overestimate our ability to handle a number of decisions.
  • I think we overestimate, first of all, our ability to make a good decision.
  • Decision aids give us a framework to make those decisions to the best of our ability.
  • Once we make a decision, once we’ve gone through the process of either a cost benefit analysis, looking at the pros and cons, soliciting advice from experts, even once we’ve gone through that process we still need further guidance to be able to follow through and act on that sound decision.
  • So decision aids are one part of a process in order to help us to make an informed decision, but sometimes decisions are so complex we would benefit from an algorithm that leverages all of that information and in fact makes the decision for you.
