Welcome back to Shapingtomorrowsworld.org

The Shapingtomorrowsworld.org blog experienced unexpected and unresolvable technical difficulties on its original host several months ago. We have recreated the blog on this new host, using the WordPress software, which will provide a stable platform from here on. In the process, it has also been integrated with www.cogsciWA.com, Stephan Lewandowsky’s academic home page.

All original posts published on Shapingtomorrowsworld.org will gradually be ported to the new host and will appear below over the next few weeks or months. Unfortunately, it would be prohibitively time consuming to transfer the comments from the original site, so they will not be available here.

Blogging will resume once all the development and transfer tasks have been completed. Thank you for your patience until then.

Familiarity-based processing in the continued influence of misinformation

For more than 50 years, advertisements in the U.S. for Listerine mouthwash falsely claimed that the product helped prevent or reduce the severity of colds and sore throats. After a prolonged legal battle, the Federal Trade Commission directed Listerine to mount an advertising campaign to correct these deceptive claims. For 16 months, the company ran a $10,000,000 ad campaign in which the cold-related claims about Listerine were retracted. The campaign was only partially successful, with more than half of consumers continuing to report that the product’s presumed medicinal effects were a key factor in their purchasing decisions. This real-life example from the 1970s underscores the difficulty of debunking misinformation. Even a large budget, prime-time TV exposure, and numerous repetitions may be insufficient to alter the public’s belief in false information.

But this does not mean that debunking misinformation is impossible: On the contrary, there are known techniques that can help make a correction more effective, which John Cook and I summarized in the Debunking Handbook some time ago.

One of the recommendations in the Debunking Handbook was to avoid the so-called Familiarity Backfire Effect. As we noted in the Handbook:

To debunk a myth, you often have to mention it – otherwise, how will people know what you’re talking about? However, this makes people more familiar with the myth and hence more likely to accept it as true. Does this mean debunking a myth might actually reinforce it in people’s minds?

To test for this backfire effect, people were shown a flyer that debunked common myths about flu vaccines. Afterwards, they were asked to separate the myths from the facts. When asked immediately after reading the flyer, people successfully identified the myths. However, when queried 30 minutes after reading the flyer, some people actually scored worse after reading the flyer. The debunking reinforced the myths.

We therefore went on to recommend that communicators should

“avoid mentioning the myth altogether while correcting it. When seeking to counter misinformation, the best approach is to focus on the facts you wish to communicate.”

Instead of saying “it’s a myth that vaccines cause autism” it is better to accurately state that “vaccinations save many lives.”

In two recent studies, Ullrich Ecker, colleagues, and I took another empirical look at this familiarity backfire effect. In a nutshell, we did not find the effect in our experiments, which instead showed that repeating a myth while correcting it need not be harmful.

The two studies have been discussed in previous posts by Tania Lombrozo here and by Briony Swire here.

These results raise two related questions: First, is the cautionary discussion of the familiarity backfire effect in the Debunking Handbook wrong? Second, is the recommendation not to repeat a myth during debunking wrong?

It would be premature to answer those questions in the affirmative.

To understand why, we need to consider the theory that underlies the familiarity backfire effect. We discussed the effect in the Debunking Handbook not just because it had been shown to exist, but also because its existence is compatible with a lot of our knowledge about how memory works. A common assumption in memory research is that there are two separate classes of memory retrieval processes, known as strategic and automatic, respectively. Strategic memory processes allow for the controlled recollection of the information’s contextual details. Similar to the meta-data of a computer file, contextual details include information about the information itself, such as the information’s spatiotemporal context of encoding, its source, and its veracity. In contrast, automatic processes provide a fast indication of memory strength or familiarity of the information but little else.

Automatic retrieval processes can therefore contribute to the continued influence of misinformation in two related ways. First, people’s truth judgments about a statement are known to be influenced by its familiarity. It follows that false information might be accepted as true just because it seems familiar. Second, when corrected misinformation is automatically retrieved from memory without any accompanying contextual details, it might mistakenly be considered true. To illustrate, many researchers have suggested that when information in memory is retracted, a “negation tag” is linked to the original memory representation (e.g., “Listerine alleviates cold symptoms–NOT TRUE”). If retrieval relies only on automatic processes, those processes might retrieve the claim without also retrieving the attached negation tag. If strategic memory processes are not engaged, familiar claims may be mistakenly judged to be true even after they have been corrected (and the person has diligently updated their memory).
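To make the distinction concrete, here is a deliberately simplistic toy sketch (my illustration, not a model from the studies discussed here): a claim is stored together with a negation tag, and a truth judgment either consults that tag (strategic retrieval) or relies on familiarity alone (automatic retrieval).

```python
# Toy illustration only: a claim stored with a "negation tag", and two
# retrieval routes. This is a didactic sketch, not the model tested in the
# studies discussed in this post.

memory = {
    "Listerine alleviates cold symptoms": {"familiarity": 0.9, "negated": True},
    "Dogs shouldn't eat chocolate":        {"familiarity": 0.8, "negated": False},
}

def automatic_judgment(claim, threshold=0.5):
    """Fast, familiarity-based route: ignores the negation tag."""
    return memory[claim]["familiarity"] > threshold

def strategic_judgment(claim, threshold=0.5):
    """Effortful route: retrieves the negation tag along with the claim."""
    trace = memory[claim]
    return trace["familiarity"] > threshold and not trace["negated"]

claim = "Listerine alleviates cold symptoms"
print(automatic_judgment(claim))   # True  -> familiar claim accepted despite the correction
print(strategic_judgment(claim))   # False -> negation tag correctly retrieved
```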

In support of this idea, misinformation effects have been found to be reduced when conditions encourage reliance on strategic processing. In an earlier study, Ullrich Ecker, colleagues, and I found that presenting participants with a pre-exposure warning detailing the continued-influence effect greatly reduced reliance on misinformation, and that such a warning was as effective as providing a factual alternative. We suggested that those warnings not only allowed individuals to more effectively tag misinformation as false at encoding, but also boosted later recall of the “negation tag” because people were more likely to engage strategic processes.

It follows that the recent studies showing the absence of a familiarity backfire effect should be examined more closely to see if the data contain a signature of the presumed familiarity process. If that signature could be detected, then the studies might best be interpreted as showing that the familiarity backfire effect is not always observed but that there are theoretical reasons to expect it to occur in some circumstances. Note that this is quite different from showing that the familiarity backfire effect does not exist.

It turns out that the study by Swire, Ecker, and Lewandowsky contains fairly strong evidence for the presence of the presumed familiarity process. The figure below shows one aspect of this evidence (it replicates across measures and experiments):

The bars on the left show belief ratings for facts after they were affirmed across three retention intervals. It can be seen that affirmation raises belief above the pre-intervention baseline (the dotted line), and that the increase in belief ratings is remarkably stable over time (we can ignore the ‘brief’ vs. ‘detailed’ manipulation as it clearly had little effect).  

The bars on the right show the same measure for myths after they were rebutted. Again, the rebuttal was effective because it lowered belief far below the pre-intervention baseline. However, unlike the effect on facts, the effects of the correction obviously wore off over time: After a week, people’s belief in the myths had increased considerably compared to the low level immediately after the correction had been received.

In other words, misinformation began to be “re-believed” after a week, whereas newly-acquired fact belief remained stable during that time.

This asymmetrical forgetting of the intervention is arguably a signature of the familiarity process. As we note in the paper:

“In the case of an affirmed fact, it does not matter if an individual relies on the recollection of the affirmation or on the boosted familiarity of the factual statement—familiarity and recollection operate in unison and lead to the individual assuming the item to be true. However, in the case of a retracted myth, recollection of the retraction will support the statement’s correct rejection, whereas the myth’s boosted familiarity will foster its false acceptance as true, as familiarity and recollection stand in opposition.”

The asymmetry of forgetting therefore suggests that the familiarity of the material persists whereas the strategic component of memory retrieval becomes less effective over time: The strategic component is required to retrieve the negation tag and “unbelieve” a myth, whereas familiarity is sufficient to believe affirmed facts even when strategic processes have become less effective.

A clear implication of this interpretation is that under conditions in which strategic processes are even less effective, a familiarity-based backfire effect may well emerge. In a nutshell, our recent studies show that familiarity-based responding is important but that it can be overridden by a correction when strategic processing remains intact.

What our studies do not show is that the familiarity backfire effect will never be observed.

Although our experiments showed that familiarity does not lead to a backfire effect in some cases where strategic processes are compromised (i.e. in older adults, when little detail is provided about why the misconception is incorrect, after a long period of time between encoding the retraction and recalling it), our studies do not address whether familiarity backfire effects occur in other circumstances. For example, strategic memory processes are also compromised when people pay little attention during encoding of the correction because they are otherwise occupied (e.g., driving a car while listening to a corrective message on the radio). Those expectations remain to be examined by experimentation.

A recent, as-yet unpublished study by Gordon Pennycook and colleagues reports results that point in the direction of a familiarity backfire effect. Participants were shown “fake news” headlines that were sometimes accompanied by warnings that they were false (“disputed by independent fact checkers”). At a later stage of the study, participants rated headlines that had earlier been flagged as false, and that were now presented for a second time, as more accurate than novel fake news headlines. In other words, the familiarity afforded by repetition outweighed the effect of the warnings about the veracity of the content.

In light of these latest results, which point to a fluid balance between familiarity-based processing and strategic processing, what should be our recommendations to communicators who are tasked to debunk misinformation?

The picture below shows the recommendations from the Debunking Handbook annotated in light of the recent results:

Qualifying the Familiarity Backfire Effect

With ‘fake news’ fast becoming a global issue, the ability to effectively correct inaccurate information has never been more pertinent. Unfortunately, the task of correcting misinformation is far from trivial. In many instances, corrections are only partially effective and people often continue to rely on outdated information. This is known as the continued-influence effect, and it has been of great interest for cognitive psychology, political science, and communication studies.

One recommendation that has emerged from the literature on optimal misinformation correction is that it is best to avoid repeating the initial misconception. This is because repeating the misconception risks increasing its familiarity. For example, truthfully stating that “playing Mozart to an infant will not boost its IQ” mentions both “Mozart” and “boosting IQ”. This makes the link between the two concepts more familiar, even though the statement aims to correct the myth. The potential problem that arises from increased familiarity is that people are more likely to think that familiar information is true.

The Familiarity Backfire Effect

Some reports even suggest that the increased familiarity associated with a correction can be so detrimental that it causes a familiarity backfire effect. A backfire effect occurs when a correction ironically increases an individual’s belief in the original misconception. For example, if people were more likely to think that listening to Mozart can boost a child’s IQ after they had been told that this was false (in comparison to their belief levels prior to the correction), this would be considered a backfire effect.

However, scientific evidence for this phenomenon has been fairly thin on the ground. In fact, the most cited example of the familiarity backfire effect comes from an unpublished manuscript, which reports an experiment that corrected misinformation regarding vaccines. We know that attempts to correct misinformation about contentious subjects are likely to backfire because people’s worldview is being challenged. It is therefore unclear whether the backfire effect described in the manuscript arose solely due to familiarity.

Ullrich Ecker, Stephan Lewandowsky, and I designed a set of experiments to see how detrimental familiarity really is to the updating of beliefs. We focused on many different topics to avoid confounding worldview backfire effects and familiarity backfire effects. The experiments were reported in an article that recently appeared in the Journal of Experimental Psychology: Learning, Memory, and Cognition.

Memory Retrieval

The experiments were based upon what we know about how memory works. It is a common assumption that there are two types of memory retrieval: strategic and automatic. Strategic memory allows you to remember details such as where or when you learnt the information, and whether the information is true or false. However, strategic memory takes effort and may therefore fail when you are distracted or cannot expend effort for other reasons.

Automatic memory retrieval, by contrast, does not take effort and is based largely on a perception of familiarity. For example, you will have little difficulty recognizing that the word “Mozart” occurred in this post based on its sense of familiarity alone, whereas you might have to engage in more strategic retrieval to recall in what context it appeared.

You are less likely to be able to use strategic memory retrieval (and are more likely to rely on automatic memory retrieval) if (1) you are an older adult, (2) you are not provided with enough detail about why the information is incorrect, or (3) there is a long period of time between the presentation of the correction and when you are asked to remember it.

Given that we know under what circumstances people are more likely to use automatic processes, we can manipulate the extent to which people rely on familiarity when they are updating their beliefs.

Experiment 1: Young Adults

We presented 100 undergraduate students with 20 familiar myths (for example, “Ostriches hide their head in the sand when frightened”) and 20 facts (for example, “Dogs shouldn’t eat chocolate”). We asked participants to rate how much they believed each item on a scale from 0 to 10. We then informed them as to what was true and what was false, either by briefly stating “this is true/false” or by giving participants a short evidence-based blurb as to why the information was true or false. The participants then re-rated their belief either immediately, after half an hour, or after one week.

If a familiarity backfire effect were to be elicited in an undergraduate population, we would expect it to occur in cases where they had brief corrections and/or after a long delay. For example, the figure below shows what a backfire effect could hypothetically look like. Belief levels in the myth after correction would be greater than people’s belief prior to the correction:

 

We did not find this to be the case—the figure below shows the pattern of results that we actually found. Note that the dotted horizontal line refers to the belief prior to the correction. Any bar that falls below that line therefore represents successful memory updating. As belief levels after the correction consistently remained below belief prior to the correction, participants updated their beliefs in the right direction.

Familiarity did play a role in that both brief corrections and a long delay led to corrections being less effective, but there was no evidence for a familiarity-based backfire effect.

As previously noted, age is also known to be a factor in whether people can use strategic memory processes. It is therefore possible that the familiarity backfire effect only exists in older adult populations. This idea was tested in our second experiment.

Experiment 2: Older adults

We asked 124 older adults over the age of 50 to participate in a study very similar to Experiment 1. The only change we made to the Experiment 1 design was that we added a three-week delay condition to maximize the chance of eliciting a familiarity backfire effect. The results of this study can be seen below:

Belief after the correction remained well below belief levels prior to the correction. Even under circumstances most conducive to a familiarity backfire effect (i.e. after a long period of time, with a correction that provided little detail, in older adults), we failed to find it.

However, we again found that familiarity impacted the efficacy of the correction: middle-aged participants under the age of 65 were better at sustaining their updated belief over time than those over the age of 65.

It might be a relief for fact-checkers and news accuracy experts that repeating the misinformation when correcting it will most likely not make people’s belief worse. This is also good news as correcting information without mentioning the original misconception can be extremely challenging. In fact, there is evidence to suggest that repeating the misconception prior to the correction can be beneficial to belief updating, as we showed in a previous post.

One concern that stems from our data is that people over the age of 65 are potentially worse than middle-aged and younger adults at sustaining their post-correction belief that myths are inaccurate. It is therefore even more important that we not only understand the underlying mechanisms of why people continue to believe in inaccurate information, but develop techniques to facilitate evidence-based updating, so that all sectors of the population can be on the same page as to what is fact and what is fiction.

So where do our results leave the familiarity backfire effect? Should we avoid repeating the myth when correcting it? Can we confidently use the phrase “playing Mozart does not boost an infant’s IQ” without ramifications? Those issues will be taken up in the next post.

Can Repeating False Information Help People Remember True Information?

Last Saturday, a powerful earthquake struck the Philippines.

It was first reported as having a magnitude of 7.2; this was later corrected to 6.8.

Last Friday, a wharf collapsed in Gloucester Harbor in Massachusetts. It was first reported as a wharf belonging to Cape Ann Ice, but later identified as a wharf used by Channel Fish.

Last Thursday, President Trump announced plans regarding NAFTA. He originally claimed that he would withdraw from the agreement entirely, but later indicated plans to renegotiate.

Corrections and retractions are common — not only in the news, but also in science and in everyday life. Sometimes it’s as simple as correcting a careless mistake; other cases involve new information that leads to a reinterpretation of the evidence and the rejection of some prior assumption. We discover that the complaint wasn’t made by our neighbor after all, or that the purported link between vaccines and autism was based on deliberate fraud.

The trouble is that initial beliefs are sometimes hard to dislodge. Dozens of studies in experimental psychology have identified a phenomenon known as the continued influence effect: Even after misinformation is retracted, many people continue to treat it as true. In other words, it has a continued influence on their thinking.

When misinformation concerns something like the safety of vaccines or the perpetrators behind some atrocity, getting it wrong can be personally and societally consequential. That’s one reason why psychologists have been eager to understand precisely what drives the continued influence effect, and what kinds of corrections are most likely to be effective.

A new paper by Ullrich Ecker, Joshua Hogan and Stephan Lewandowsky, forthcoming in the Journal of Applied Research in Memory and Cognition, takes up one important question regarding the correction of misinformation: Is it better to explicitly state and retract the false claim, or is it better to avoid repeating something false, and instead simply state what’s now believed to be true?

Both possibilities are suggested by prior research. On the one hand, repeating a false claim could make it more familiar. Familiarity, in turn, could be mistaken for fact, or at least the suspicion that there’s something to the (false) claim. Weeks after reading a brochure about vaccine safety, for example, there might be something familiar about the idea that vaccines are associated with autism, but you might not remember precisely what was claimed, and in particular that the association was refuted.

On the other hand, there’s evidence that explicitly articulating a misconception can facilitate the process of updating one’s beliefs. For instance, some approaches to student learning emphasize the value of engaging with students’ initial (mistaken) beliefs as a precursor to conceptual change. Perhaps drawing attention to a false belief is a good way to assimilate the new information in a way that replaces, rather than merely co-exists with, the initial misinformation.

Given these competing possibilities, Ecker and his colleagues designed an experiment in which 60 university undergraduates read a series of scenarios that were written as pairs of news stories, half of which involved a retraction in the second story of some misinformation stated in the first story. The crucial variation was in how the retraction occurred: by merely stating the new claim; by implying that the new claim revised a prior claim (but without stating what the prior claim was); or by including both the initial claim and the new claim that superseded it.

To measure the “continued influence” of the initial misinformation, participants were asked a series of questions relevant to that aspect of the news story. The researchers found that people’s reasoning often showed an influence of the initial, retracted claim, thus replicating prior work. However, they also found that this influence was most pronounced when the new claim was simply stated, and least pronounced when the retraction included both the initial claim and the new claim that superseded it. At least for these scenarios, the most effective retractions were those that repeated the initial misinformation.

The study’s authors are cautious about making strong recommendations on the basis of this single result. For instance, they still suggest that unnecessary repetitions of misinformation should be avoided; if someone doesn’t already believe the misinformation, repeating it could do more harm than good.

It’s also important to know how robust these findings are to different kinds of (mis)information and different ways in which it is presented. One important factor could be time. Does it matter if the retraction follows the initial information almost immediately, versus after a long delay? Moreover, it could be that the retraction that’s most effective for the few minutes after it’s been read doesn’t have the most staying power as weeks and months go by.

These caveats aside, the new results offer an important qualification to prior recommendations concerning misinformation and its correction, some of which encouraged educators and communicators to avoid repeating false claims. At least sometimes, there may be value in repeating misinformation alongside the alternative we now consider to be true.

This post was originally published at NPR’s 13.7: Cosmos & Culture page. It is reposted here as a first post in a series of three posts on recent research on misinformation by Ulli Ecker, Stephan Lewandowsky, and colleagues.

The next post reports a recent study, with Briony Swire as first author, that takes a further look at whether repeating a myth during its debunking is always harmful.

Constraining the social discount rate by consideration of uncertainties

An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and ambiguity.

In the previous three posts, I outlined the basics of the discounting problem, discussed the ethical considerations and value judgments that enter into setting the discount rate, and explained how uncertainty about the proper discount rate can be “integrated out” via gamma discounting to yield a single certainty-equivalent declining discount rate.

Those three posts provided us with the background needed to understand the simulation experiment that formed the core of our paper.

Basic Procedure

The goal of our simulation experiment was to explore different sources of uncertainty that are relevant to decision making in climate economics. In particular, we wanted to constrain the social discount rate, ρ, within a prescriptive framework embodied by the Ramsey rule:

ρ = δ + η × g.

As explained earlier, the parameters δ and η represent ethical considerations relating to pure time preference and inequality aversion, respectively. The anticipated future economic growth is represented by g.
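As a quick illustration, once δ, η, and g are fixed, the rule yields a candidate discount rate directly. The sketch below uses the expert-survey values for δ and η that enter our experiment (described below) together with a purely hypothetical growth rate of 2%:

```python
# Candidate social discount rates from the Ramsey rule: rho = delta + eta * g.
# The (delta, eta) values are the expert-survey values used in our design;
# the growth rate g here is purely hypothetical.

def ramsey_rate(delta, eta, g):
    """Social discount rate rho = delta + eta * g (all as decimal fractions)."""
    return delta + eta * g

g = 0.02  # hypothetical 2% average annual growth
for delta in (0.0, 0.0315):     # pure time preference: 0% or 3.15%
    for eta in (0.5, 2.2):      # inequality aversion
        print(f"delta = {delta:.2%}, eta = {eta}: rho = {ramsey_rate(delta, eta, g):.2%}")
```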

To derive candidate discount rates from this framework we therefore need estimates of future economic growth. We obtained these estimates of g in our experiment by projecting global warming till the end of the century using a climate model (a simple emulator), and converting that warming into a marginal effect on baseline economic growth through an empirical model of the temperature-growth relationship reported by Marshall Burke, Solomon Hsiang and Edward Miguel in 2015.

Their model is shown in the figure below:

It can be seen that, controlling for all other variables, economic productivity is maximal at an annual average temperature of around 13°C, with temperatures below or above that leading to a reduction in economic output. This descriptive model has been shown to be quite robust and we relied on it to convert warming forecasts to economic growth rates.
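For readers who want a feel for this inverted-U relationship, the sketch below implements a generic concave growth response that peaks near 13°C. Only the location of the optimum is taken from the description above; the baseline growth rate and the curvature coefficient are placeholders, not the published estimates of Burke and colleagues.

```python
# Illustrative concave temperature-growth response, peaking near 13 deg C.
# The functional form mirrors the inverted-U shape described above; the
# baseline and curvature values are placeholders, not published estimates.

T_OPT = 13.0        # approximate temperature of maximal productivity (deg C)
CURVATURE = 0.0005  # illustrative only

def growth_effect(temp_c, baseline_growth=0.02):
    """Illustrative effect of annual average temperature on economic growth."""
    return baseline_growth - CURVATURE * (temp_c - T_OPT) ** 2

for temp in (5, 13, 20, 25):
    print(f"{temp:>2} deg C: projected growth = {growth_effect(temp):.2%}")
```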

Experimental Design

We projected economic growth as a function of three variables that are the source of considerable uncertainty: the sensitivity of the climate to carbon emissions, the emissions trajectory that results from our policy choices, and the socio-economic development pathway that the world is following. We formed all possible combinations of those three variables to examine their effect on projected global growth.

The figure below shows our experimental design.

  • We fixed climate sensitivity at a constant mean but varied the uncertainty of that sensitivity, expressed as its standard deviation, in 6 steps from 0.26°C to 1.66°C.
  • We employed the climate forcings (i.e., the imbalance of incoming and outgoing energy that results from atmospheric greenhouse gases) provided by several of the IPCC’s Representative Concentration Pathways (RCPs). Specifically, we used RCP 2.6, RCP 4.5, RCP 6.0, and RCP 8.5 for the period 2000 through 2100. These RCPs span the range from aggressive mitigation that limits global temperature rise to approximately 2°C (RCP 2.6), to continued business as usual and extensive warming (RCP 8.5).
  • We compared two Shared Socio-Economic Pathways (SSPs). SSPs form the basis of the IPCC’s projections of future global development in Working Group 3. We employed two scenarios, SSP3 and SSP5. SSP3 assumes low baseline growth and slow global income convergence between rich and poor countries; SSP5 assumes high baseline growth and fast global income convergence.

Our experiment thus consisted of 48 cells, obtained by fully crossing 6 levels of uncertainty about climate sensitivity with 4 RCPs and 2 SSPs. For each cell, 1,000 simulation replications were performed by sampling a realization of climate sensitivity from the appropriate distribution. For each realization, global temperatures were projected to the end of the century and the economic effects of climate change were derived by considering the relevant SSP in conjunction with the empirical model relating temperature to economic production. Cumulative average growth rates for the remainder of the century were then computed across the 1,000 replications in each cell of the design.
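The sketch below illustrates the structure of this design (it is not our actual code): the climate emulator and the temperature-growth model are replaced by simple placeholder functions, and the numbers inside them, as well as the intermediate standard-deviation steps, are illustrative only.

```python
# Schematic of the 6 x 4 x 2 design with 1,000 replications per cell.
# The placeholder functions and their numbers are illustrative stand-ins for
# the climate emulator and the empirical temperature-growth model.
import random
import statistics

SENSITIVITY_SDS = [0.26 + i * (1.66 - 0.26) / 5 for i in range(6)]   # 6 steps; intermediate values assumed
RCPS = {"RCP2.6": 0.6, "RCP4.5": 0.9, "RCP6.0": 1.1, "RCP8.5": 1.5}  # illustrative forcing scalings
SSPS = {"SSP3": 0.015, "SSP5": 0.030}                                # illustrative baseline growth rates
MEAN_SENSITIVITY = 3.0                                               # held constant; value illustrative

def project_warming(sensitivity, rcp):
    """Placeholder for the climate emulator: warming to 2100 for one sampled sensitivity."""
    return sensitivity * RCPS[rcp]

def growth_from_warming(warming, ssp):
    """Placeholder for the SSP baseline growth combined with a climate-damage term."""
    return SSPS[ssp] - 0.004 * warming

projected_growth = {}
for sd in SENSITIVITY_SDS:
    for rcp in RCPS:
        for ssp in SSPS:
            draws = []
            for _ in range(1000):                        # 1,000 replications per cell
                sensitivity = random.gauss(MEAN_SENSITIVITY, sd)
                draws.append(growth_from_warming(project_warming(sensitivity, rcp), ssp))
            projected_growth[(round(sd, 2), rcp, ssp)] = statistics.mean(draws)

print(len(projected_growth))  # 48 cells, one projected growth rate per cell
```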

These 48 projected global economic trajectories to the end of the century, each of which represented the expectation under one set of experimental conditions, were then converted into candidate social discount rates.

At this stage the ethical considerations (top left of the above figure; see my previous post here for a discussion) were applied to each trajectory, by combining each of the 48 projected global economic growth rates (g) with four combinations of η and δ. Specifically, we used values for η and δ obtained by a recent expert survey, such that δ was either 0% or 3.15% with probability 65% and 35%, respectively, and η was 0.5 or 2.2 with equal probability.

This yielded a final set of 192 candidate discount rates across all combinations of experimental variables which were then integrated via gamma discounting into a single certainty-equivalent declining discount rate. I explained gamma discounting in a previous post, and you may wish to re-read that if the process is not clear to you.

Results

Although the experiment was quite complex—after all, we explored 3 sources of scientific, socio-economic, and policy uncertainty plus 2 sources of ethical ambiguity!—the crucial results are quite straightforward and consist of a single declining discount rate that is integrated across all those sources of ambiguity and uncertainty.

The figure below shows the main result (the article itself contains lots more but we skip over those data here).

The solid black line represents the (spot) certainty-equivalent declining discount rate that applies at any given point in time. For example, if we are concerned with a damage cost that falls due in 2050, then we would discount that cost at 3%. If we are worried about damages at the end of the century, then we would discount that cost by less than 2%.

The figure also shows various previous estimates of declining discount rates that were derived by different means but all based on the underlying principle of gamma discounting.

Our approach differs from those precedents in two important ways: First, we explicitly consider many (if not most) of the major sources of uncertainty and ambiguity, and we encompass their effects via gamma discounting. Second, our approach explicitly models the impact of future climate change on economic production.

When the likely impact of climate change on the global economy is considered, a more rapid decline of the discount rate is observed than in previous work. By 2070, our estimate of the spot rate dips below the other benchmark estimates in the figure above. It should be noted that our results mesh well with the median long-run social discount rate elicited from experts.

We consider this article to provide a proof of concept, with much further exploration remaining to be performed. We take up some of those open issues and the limitations of our work in the article itself.

There is one clear message from our work: uncertainty is no reason to delay climate mitigation. Quite the contrary, our extensive exploration of uncertainty yielded a lower discount rate (from around 2070 onward) than existing proposals. This lower discount rate translates into a considerable increase in the social cost of carbon emissions, and hence even greater impetus to mitigate climate change.

One caveat to our conclusion is that our discounting model assumes that things can be done only now or never. This makes sense in many situations when individuals or firms are confronted with a choice about a potential project. However, there are limitations to this approach. To take an extreme example, suppose we knew that the precise value of climate sensitivity would be revealed to us by some miraculous process in exactly a year’s time. In that case, we might well decide to wait that year to learn the precise climate sensitivity before acting.

A possible alternative approach that stretches the decision path over time involves so-called real options models. Real options analyses account for the sequential nature and path dependence of choice processes. We flag this alternative briefly, but it remains for future work to apply it to climate economics in a more systematic fashion.

Harnessing uncertainty: A single certainty-equivalent social discount rate

An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and ambiguity.

In the previous two posts, I first outlined the basics of the discounting problem and highlighted the importance of the discount rate in climate economics. In the second post, I discussed the ethical considerations and value judgments that are relevant to determining the discount rate within a prescriptive Ramsey framework.

I showed that those ethical considerations can yield unresolvable ambiguity: different people have different values, and sometimes those values cannot be reconciled. Fortunately, in the discounting context, we can “integrate out” those ambiguities by a process known as gamma discounting. This is the topic of the remainder of this post.

A final post explains our simulation experiment and the results.

Gamma Discounting

We know from the last post that in a recent survey of 200 experts, Moritz Drupp and colleagues found that the distribution of expert responses was closely approximated by setting δ to zero with 65% probability and setting it to 3.15% with 35% probability. (If you don’t know what δ refers to, please read the previous post first.)

It turns out that recent work in economics has proposed a way to resolve such ambiguity or uncertainty. This process is known as gamma discounting. In a nutshell, instead of averaging the candidate discount rates, the process averages the discounted future values for each candidate rate.

The table below illustrates gamma discounting using an example provided by Ken Arrow and colleagues.

The table shows discounted values of $1,000 at various times t in the future for three different candidate discount rates (namely, 1%, 4%, and 7%). For example, if the rate is 4%, then the discounted value of $1,000 after 50 years is $135.34, and so on.

So how do we deal with the uncertainty about the discount rate? Suppose we assume that the rate is either 1% or 7% with equal probability; then, 50 years from now, our $1,000 can be worth either $606.53 or $30.20 (also with equal probability).

It follows that the average of those two uncertain values represents the probability-weighted expectation for our $1,000, which 50 years from now is ($30.20 + $606.53)/2 = $318.36.

These averages are called the “mean expected present values” and are shown in the column labeled MEV. They form the basis of our final computation. The ratio between successive MEVs yields a single certainty-equivalent discount rate (columns labeled CE-DDR) for any given point in time. For example, the MEV at t = 50 is $318.36, and the MEV at t = 51 is $314.33. The ratio between those successive values, $318.36/$314.33 = 1.0128, corresponds to a rate of 1.28%; this is the CE-DDR at time t = 50, known as the “forward” rate, and those values are shown in the second-to-last column of the table.

Several important points can be made about that column: First, there is only one column. No matter how many candidate discount rates we started out with, and what their probability weighting might be, we end up with a single certainty-equivalent discount rate that can be applied with 100% certainty, but that has embedded within it the uncertainty we started out with.

Second, the discount rate is not constant: as can be seen in the table, it starts out at nearly 4% and converges towards 1% after 100 years or more. The discount rate is therefore declining over time. (In the limit, when time heads towards infinity, the discount rate converges towards the smallest candidate rate being considered. The choice of lowest candidate rate is therefore crucial, although this primarily affects the distant future.)

Finally, the second-to-last column captures the slope of the declining discount rate function between times t and t + 1. Those forward values, however, cannot be used to discount an amount from the present to time t—for example, the MEV at time t = 50 cannot be obtained by discounting $1,000 at a rate of 1.28% over 50 years.

Instead, to obtain a rate that covers all of those 50 years, we need a different certainty-equivalent discount rate that is also declining but generally has higher values. This rate is called the “spot” certainty-equivalent declining discount rate and is shown in the final column.

If we apply the “spot” rate to our present-day $1,000 for time t, it will yield the MEVs for that time shown in the table. For example, $1,000 discounted at 2.32% over 50 years (i.e., $1,000/1.0232^50) yields the MEV of $318 (± rounding error).
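For readers who like to verify such numbers, the arithmetic of the table can be reproduced in a few lines. The figures quoted above ($606.53, $135.34, $30.20) imply that the candidate rates are applied with continuous discounting, which the sketch below follows; only the equally weighted 1% and 7% rates enter the MEV, forward, and spot columns discussed here.

```python
# Reproduces the gamma-discounting arithmetic from the example above:
# $1,000 discounted under candidate rates of 1% and 7% (equal weight),
# using the continuous discounting implied by the table's figures.
import math

AMOUNT = 1000.0
CANDIDATE_RATES = [0.01, 0.07]   # equally probable candidate discount rates

def mev(t):
    """Mean expected present value of $1,000 at time t."""
    return AMOUNT * sum(math.exp(-r * t) for r in CANDIDATE_RATES) / len(CANDIDATE_RATES)

def forward_rate(t):
    """Certainty-equivalent forward rate between t and t + 1."""
    return mev(t) / mev(t + 1) - 1

def spot_rate(t):
    """Certainty-equivalent spot rate from the present out to t (annual compounding)."""
    return (AMOUNT / mev(t)) ** (1 / t) - 1

for t in (1, 10, 50, 100):
    print(f"t={t:>3}: MEV=${mev(t):8.2f}  forward={forward_rate(t):.2%}  spot={spot_rate(t):.2%}")
# t= 50: MEV=$  318.36  forward=1.28%  spot=2.32%   (matches the example above)
```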

To summarize: We start out by being uncertain about the discount rate. For example, we don’t know whether to set the pure time preference to zero or to permit a value greater than that. We apply gamma discounting and this uncertainty has “disappeared”. Of course, it hasn’t really disappeared, it has just been taken into account in the final certainty-equivalent discount rate.

But for all practical intents and purposes, we now have a single number that we can apply in economic decision making.

In the first post, we considered the implications of the discount rate if climate change were to cause $5 trillion (i.e., $5,000,000,000,000) in damages by the end of the century. We noted that the present discounted cost could be as large as $2.2 trillion (discounted at 1%) or as little as $18 billion (at 7%). If we assume that 1% and 7% are equally likely to be “correct”, then from the table above we can obtain a certainty-equivalent spot rate of somewhere below 2% (the end of the century is 83 years away, but that’s reasonably close to the 100 years that yield a spot rate of 1.71%).

It follows that to avert $5 trillion in damages, it would be economically advisable to expend in excess of $1 trillion now on climate mitigation even if we are uncertain about which discount rate to apply.
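As a rough check on that figure, using the same two equally weighted candidate rates as in the table above:

```python
# Rough check: present value of $5 trillion of damages 83 years out,
# discounted at the certainty-equivalent spot rate from the example above.
import math

damages = 5e12
years = 83
candidate_rates = [0.01, 0.07]

mev = sum(math.exp(-r * years) for r in candidate_rates) / len(candidate_rates)
spot = (1 / mev) ** (1 / years) - 1
present_value = damages * mev

print(f"spot rate over {years} years: {spot:.2%}")             # roughly 1.8%
print(f"present value: ${present_value / 1e12:.2f} trillion")  # in excess of $1 trillion
```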

Combining sources of uncertainty

This post explained the basic idea behind gamma discounting. We now have a mathematical platform to convert ambiguity and uncertainty about the discount rate into a single certainty-equivalent discount rate.

The beauty of gamma discounting is that it rests on particularly firm theoretical ground when the candidate discount rates (the first three columns in the above table) arise from irreducible heterogeneity among expert opinions rather than from random variation about an imprecise estimate.

Different ethical positions about inequality aversion (η; see previous post) and pure time preference (δ) are clear instances of such irreducible heterogeneity. In our simulation experiment, we considered the uncertainty about three other relevant variables as similar cases of irreducible heterogeneity; namely, uncertainty about climate sensitivity, uncertainty about emissions policy, and uncertainty about future global development.

To briefly foreshadow the final post, we conducted a simulation experiment that forecast economic growth till the end of the century under all possible combinations of those variables. We then applied gamma discounting as in the table above to extract a single certainty-equivalent declining discount rate that policy makers can apply in the knowledge that a broad range of uncertainties has been considered.

We must discount, but how much?

An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and ambiguity.

In a previous post, I outlined the basics of the discounting problem and highlighted the importance of the discount rate in climate economics. In the remainder of this post, I will outline the ethical considerations and value judgments that are relevant to determining the discount rate.

Because those considerations may yield unresolvable ambiguity, they must be “integrated out” by a process known as gamma discounting. This will be explained in the next post.

A further post will explain our simulation experiment and the results.

Considerations about the social discount rate in climate economics

When individuals or businesses make decisions about investments, they tend to use the prevailing market interest rates to set the discount rate. This approach, known as the descriptive approach to setting the discount rate, makes perfect sense for short or medium-term time horizons when the costs and benefits of a project involve the same people and the same markets. The approach is called descriptive because the discount rate correctly describes how society actually discounts, as determined by the markets.

An alternative approach, called the prescriptive approach, prefers to estimate the social discount rate directly from its primitives rather than using market rates of interest. In this context, the discount rate is usually called the social discount rate because it applies not to individuals or firms but to society overall. The approach is called prescriptive because it imposes a rate on social planners that is, at least in part, based on value judgments.

There are a number of arguments that support the prescriptive approach.

For example, many economists and philosophers would argue that we cannot discount with respect to future generations. That is, present-day decision makers should not endorse policies that inevitably disadvantage future generations, who have no power to resist or retaliate against present-day decisions. In addition, those most affected by climate change—the poor, often in developing countries—do not influence market interest rates. This arguably places a burden on governments to take a wider ethical perspective than investors who trade in financial markets. 

Our article therefore took the prescriptive approach to setting the discount rate, consistent with governmental recommendations in the UK and much of Europe. (Although US authorities have generally preferred descriptive approaches to intergenerational discounting.)

The prescriptive approach is conventionally understood within the framework of the Ramsey rule:

ρ = δ + η × g.

It can be seen that the social discount rate, ρ, results from two distinct components: A component known as the “pure time preference”, encapsulated by δ, and a component that combines the expected average annual real economic growth rate, g, with a parameter η that turns out to capture people’s inequality aversion. (It also does other things but here we focus on inequality aversion).

The pure time preference is simply our impatience: It’s our impulse that $50 today is “worth more” than $51 in a month, even though the accrual during this delay would correspond to a whopping annual interest rate of nearly 27%.

The rationale for inclusion of the growth rate is that growing wealth makes a given cost for future generations more bearable than it appears to us now, in the same way that $100 is valued a great deal more by a poor student than by a billionaire.

Within the Ramsey framework we thus have to consider three quantities to determine the social discount rate: Future economic growth, inequality aversion, and pure time preference. Future growth rates can be estimated by economic modeling—and that is precisely what we did in our article, and I will describe the details of that in the next post.

Determination of the other two quantities, by contrast, involves ethical value judgments that are necessarily subjective. (Given the inter-generational context, we ignore the possibility of estimating η and δ from asset markets and behavioral experiments, respectively.)

To illustrate the ethical implications I focus on δ, the pure time preference. I will ignore issues surrounding η for simplicity.

It has been argued that it is ethically indefensible for δ to be greater than zero, as it would embody “a clear violation of the attitude of impartiality that is foundational to ethics”. That is, we should not disadvantage future generations simply because we happen to have been born before them. If one wanted to treat future generations equally to ours, as most people seem prepared to do, one would therefore want to constrain δ to be zero—and indeed, in the U.K.’s influential Stern report, δ was set to (near) zero for that reason.

However, the seemingly attractive idea of treating all generations equally by setting δ to zero entails some unnerving consequences. In general, the lower the discount rate, the more future consumption (or cost) matters and hence the more we should set aside for the benefit of future generations. Partha Dasgupta computed the mathematically implied savings rate when δ is set to the value recommended in the Stern report and found it to be 97%. That is, out of $100 we currently own, we may only consume $3, with the remainder being tucked away for the benefit of our children. Our children, in turn, would also only be allowed to spend $3 of their considerably greater wealth, with the remainder being passed on to their children, and so on. An implication of δ being near zero therefore is the impoverishment of each current generation for the benefit of the succeeding one!

And it doesn’t stop there: low discounting, although it may appear benign in the climate context, has dangerous implications elsewhere. As William Nordhaus put it: “Countries might start wars today because of the possibility of nuclear proliferation a century ahead; or because of a potential adverse shift in the balance of power two centuries ahead; or because of speculative futuristic technologies three centuries ahead. It is not clear how long the globe could survive the calculations and machinations of zero-discount-rate military strategists.” 

So what is the “correct” value of δ?

We don’t know.

But we do know that in a recent survey of 200 experts, Moritz Drupp and colleagues found that the distribution of expert responses was closely approximated by setting δ to zero with 65% probability and setting it to 3.15% with 35% probability.

So now what?

Do we make policy decisions based on majority rule? Or based on the average of the two sets of expert opinions? Or do we decide that experts are no good and that we should ask Bruce at the pub?

The next post presents a solution to this dilemma known as gamma discounting.

The future is certainly uncertain

The future is uncertain. So how do we best cope with this uncertainty? Nowhere is this question more acute than in the climate arena where today’s policy decisions have an impact on people centuries hence.

The existence of scientific uncertainty has often been used in support of arguments that climate mitigation is unnecessary or too costly. Those arguments are flawed because, if anything, greater uncertainty about the future evolution of the climate should compel us to act with even greater urgency than if there were no (or less) uncertainty. I published two articles that sketched out this analysis a few years ago, and in earlier posts I explained their underlying logic and mathematics in some detail here, here, and here. Climate scientist Michael Mann also made this point during his recent Congressional testimony to the House Committee on Science, Space, and Technology.

In a nutshell, uncertainty is not your friend but a Dutch uncle advising you to roll up your sleeves and start working towards climate mitigation.

Our initial work was not the final word on the matter, but it stimulated follow-up research by an economist from the UK, Mark Freeman, who, together with colleagues Gernot Wagner and Richard Zeckhauser from Harvard’s Kennedy School, published a more extensive mathematical analysis of the problem that came to roughly the same conclusions.

One limitation of our existing work on uncertainty has been that we were unable to say anything that was specifically policy relevant. That is, although we could make a strong case for mitigation and against “business as usual”, we were unable to specify how much mitigation would be appropriate on the basis of our work to date.

An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, tackled this problem. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and ambiguity.

I have written a series of posts that unpack this rather dense summary statement of our article and that will appear here during the next few days:

  • In the remainder of this post, I describe the basics of discounting.
  • The next post describes the ethical considerations that enter into setting of the discount rate.
  • A further post explains how uncertainty about the proper discount rate can be “integrated out” to yield a single certainty-equivalent declining discount rate.
  • A final post explains our simulation experiment and the results.

Discounting the future[1]

We value the present more than the future. When given the choice, very few people would prefer to wait a month to receive $51 if the alternative were to receive $50 today, even though the accrual during this delay would correspond to a whopping annual interest rate of nearly 27%.

This entrenched preference for the present, and the discounting of the future it entails, appears to be an immutable aspect not just of human cognition but of organisms more generally. When given the choice between a smaller reward now and a larger reward later, most animals prefer the immediate reward.

In humans, decisions relating to the present involve regions of the brain (viz. limbic and paralimbic cortical structures) that are also consistently implicated in impulsive behavior and cravings such as heroin addiction, whereas decisions that pertain to the future involve brain regions (viz. lateral prefrontal and parietal areas) known to support deliberative processing and numerical computation.

Our strong preference for immediate rewards may therefore reflect the proverbial “reptilian brain,” which competes with our “rational brain” that is telling us to consider and plan for the future.

However, that does not mean that discounting is irrational: On the contrary, discounting is a standard aspect of inter-temporal decision making in economics. Whenever costs and benefits of projects are evaluated, the comparison must be adjusted by the delay between current costs and future benefits (or vice versa). This is done by setting an interest rate known as the discount rate.

The discount rate is at the same time both quite simple and surprisingly nuanced. For now, let’s focus on its simplicity and introduce it with the following example: Suppose you are faced with the decision whether to attend university now, thereby incurring tuition costs and deferring earned income, or to enter the job market straight away. Ignoring all non-economic variables (not recommended in reality!), this decision boils down to figuring out whether the cost of tuition and deferred income will be recouped in the future by the higher income you are likely to earn with a university degree than without one. (A peer-reviewed paper that works this out in detail can be found here.)

Economists often use the prevailing market interest rates to make inter-temporal decisions of this type. To illustrate, let’s suppose the prevailing annual interest rate is 3%. Let’s furthermore suppose you are trying to decide whether to service your car engine now, because you have a pretty good idea that if you didn’t, you’d incur a repair bill of $100 in a year’s time. Now here is the crucial feature of discounting: If you had $100 now and invested it at 3%, then you could pay the repair bill in a year’s time and pocket $3 profit. Or conversely, the repair bill of $100 in a year’s time is only “worth” a little over $97 today (because $97.09 invested at 3% would be $100 in a year). Thus, an economist might argue that you should get your car serviced now only if the cost is less than $97—any more than that, and you’d be better off investing the money and using it to pay off the future repair bill.

This trivial example illustrates the discount rate: it is simply the interest rate you would accrue on current costs (or benefits) until some future point in time when the benefits (or costs) come due.

Determining the discount rate for personal decisions, such as whether to service your car or attend university, is relatively straightforward because we have a very good historical record of the prevailing interest rates and may extrapolate those to the medium-term future with some confidence.

Enter climate change.

The situation changes dramatically when inter-temporal decisions cross generational boundaries and extend into the distant future. Today’s policy decisions with respect to climate change will affect people who have not yet been born, and whom today’s decision makers will never meet.

The extended temporal horizon renders the setting of the discount rate ever more important and tricky. To illustrate, suppose climate change will cause $5 trillion (i.e., $5,000,000,000,000) in damages by the end of the century. At a discount rate of 1%, this would be “worth” $2.2 trillion today—a whopping amount, to be sure, but still less than half the value at the end of the century. At a discount rate of 3%, this damage would be “worth” around $430 billion today—considerably less than at 1%. Incidentally, $430 billion is a little over two thirds of the U.S. military budget. At a discount rate of 7%, finally, the damage in today’s dollars would be only $18 billion, an amount equivalent to the foreign investment in Vietnam during 2016.
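These present values follow from the standard formula PV = FV/(1 + r)^t; the short calculation below reproduces them, assuming (as in the examples above) that the end of the century is 83 years away:

```python
# Present value of $5 trillion in end-of-century damages at different discount
# rates, using annual compounding over an assumed 83-year horizon.
damages = 5e12
years = 83

for rate in (0.01, 0.03, 0.07):
    present_value = damages / (1 + rate) ** years
    print(f"{rate:.0%}: ${present_value / 1e9:,.0f} billion")
# 1%: $2,189 billion   3%: $430 billion   7%: $18 billion
```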

Seemingly slight variations in the discount rate can thus make future climate damages appear either very large (two-thirds of the Pentagon budget or more) or small (Vietnam is a pretty small economy). Taking mitigative action is more compelling in the former case than the latter.

The choice of an appropriate discount rate for climate economics has therefore been hotly contested in policy, economics, and ethics. This debate has failed to yield a consensus value, with some scholars proposing that the discount rate for climate change should be negative and others permitting rates of 5% or more.

In the next post, I discuss the ethical and economic considerations that typically enter into setting the discount rate.


[1] Parts of this section are derived from a post I previously published at https://featuredcontent.psychonomic.org/2016/04/26/when-todays-grass-is-greener-than-tomorrows-gold/

A post-Shakespearean farce for a post-fact political age

Never before has so much deception unraveled so quickly and with so little shame.

Within a few hours of the EU referendum, the major planks of the Leave campaign had evaporated. We learned that the additional £350,000,000 that could be spent on the NHS every week, if we only left the EU, never existed. After all the fear-mongering and denigration of immigrants, we learned that withdrawal from the EU would not reduce immigration.

And we are currently learning the hard way that Leaving is economically disastrous: today’s downgrade in the UK’s credit rating may sound like a distant and hardly-relevant rumbling, but we may start to care when we find that our retirement annuities are now worth less.

Perhaps that explains why the Leave campaign diligently wiped its webpage of any content, leaving behind the message “thank you” but no record of its promises. (Don’t worry, it’s been archived. After all, the Leave campaign was the evil twin of climate denial, and so the reality-based community knows how to prevent things from going down the memory hole.)

We now get to watch in fascination (and terror?) as another “hockey-stick” graph unfolds in front of our very eyes.

Even Arctic ice doesn’t melt that fast.

While the “Project Fear” of the Remain campaign may now turn out to have been a “Project Understatement”, we should briefly summarize the activities of the various main actors in this unfolding “Project Farce”:

  • The soon-to-be-no-more Prime Minister has stopped tweeting about huggable heroes; he addressed Parliament today and said something that contained a lot of words. If you want to know what he really said, you need to read this.
  • The still-completely-in-charge Chancellor assured everyone this morning that Great Britain is Great again and that we should keep calm and carry on with being unemployed or deprived (but quietly). A few hours later, and innumerable Footsie and Dow points further south, the UK’s credit rating was downgraded (after the markets in London closed, so as not to upset anyone).
  • Before the referendum, Leave operative Michael Gove likened the experts who predicted the current economic fallout to Nazis. He has not been heard from since Friday morning. (Please report any sightings if he leaves the bunker for air or to check on his investments.)
  • The leader of the Leave campaign, who may yet become our next Prime Minister, played cricket all weekend before he reassuringly reappeared in the Telegraph this morning, pronouncing the economy to be in good health and guaranteeing us access to the EU’s free market without any bothersome human rights legislation. He gets paid £5,000 for each of those columns, so he is bound to be back next week.

A Shakespearean tragedy in a world of post-fact politics.

Actually, no.

Shakespeare’s tragedies, like their Greek counterparts, included evil villains who cunningly conspired to bring down kings and empires.

The villains of the Brexit tragedy are not evil and cunning.

Their banal evil arises from an infantile recklessness that gave them license to turn the future of this country, Europe, and the world economy into a Riot Club frat-boy tussle. Unchecked by the jingoist tabloids, their abject recklessness turned a decision of grave consequence into a platform for dull patriotic cheer-leading.

Who are the adults in this post-Shakespearean farce?

  •  President Obama (are you missing him already?) sent his Secretary of State, John Kerry, to Europe to exercise some adult supervision. Perhaps pointedly, Kerry stopped in Europe before visiting London, the home of special relationships.
  • Angela Merkel directly addressed the 48% of Britons who wanted to avoid this mess, and has generally struck a balance between giving the UK time to re-constitute itself and insisting on speedy action to commence divorce proceedings for the sake of the world economy.
  • Nicola Sturgeon, the First Minister of Scotland, continued to calmly clarify that the Riot Club frat boys did not have license to tear down Scotland as well. 
  •  The columnists and the millions who will not abandon Europe to the deceptions of demagogues.

How will this “Project Farce” play out in the end?

No one knows, but one ghoul is emerging from the fog: taking leadership of the country now, and being the one who pushes the irreversible button of Article 50 to commence separation from the EU, must surely be the most poisoned chalice in recent history.

There once was a referendum about whether or not the UK should remain in the EU.

It is no more.

Thursday’s referendum is no longer about the EU but about the UK and what kind of society it wants to be.

The referendum has effectively turned into a plebiscite on diversity and tolerance versus bigotry and hatred. The Leave campaign started out with arguments that the UK might be better off economically outside the EU, but as those arguments were demolished by everyone from President Obama to the Bank of England and the vast majority of economists, the campaign remolded itself into an appeal to increasingly shrill and ugly emotion.

The Leave campaign is no longer about leaving the EU, it is about moving the UK closer to Weimar and the Munich beer halls of the 1920s.

How could it have come to that? How could a campaign find so much popular traction by explicitly disavowing rational and informed deliberation? How could an idea that is favored by Vladimir Putin and Donald Trump, but opposed by nearly every other world leader, find so much popular support?

Some commentators have responded to those questions with bewilderment and—at least partial—resignation, as if rightwing populism and hatred were unavoidable socio-political events, much like volcanic eruptions or earthquakes.

Far from it. Populism and hatred do not erupt, they are stoked. We now know from painstakingly detailed research that the “Tea Party” in the U.S. was not a spontaneous eruption of “grassroots” opposition to President Obama’s healthcare initiative but the result of long-standing efforts by Libertarian “think tanks” and political operatives. Donald Trump did not come out of nowhere but learned his trade from Sen. Joe McCarthy’s chief counsel, who was the brains behind the paranoid hunt for communist infiltrators in the 1950s.

Likewise, the present demagoguery in the UK against the EU arises at least in part from the same well-funded but nebulous network of organizations that deny that the climate is changing because of human activity. And public emotion cannot remain calm and reasoned when UK tabloids run more than 8,000 stories about asylum seekers—many of them inflammatory—in a 6-year period alone. When more than 1,400 of those articles use the terms “immigrant” and “asylum-seeker” interchangeably, it is a short step from that semantic equivalence to Nigel Farage’s recent poster, which used Nazi imagery to draw a visual connection between EU membership and masses of Syrian refugees.

Populism is not an inevitable natural disaster but the result of political choices made by identifiable individuals who ultimately can be held accountable for those choices.

There is of course another side to the story: Joe McCarthy could not have destroyed countless careers without public paranoia about communists, and Donald Trump could not have compared immigrants to snakes without an audience that would embrace such rhetoric. Likewise, the Leave campaign could not resort to conspiracy theories with impunity if there were no receptive audience for their paranoia.

The public’s willingness to endorse rightwing populism can be explained and predicted by a variety of variables.

For example, one particularly detailed recent analysis by a team of economists led by Manuel Funke of the Free University in Berlin shows that over a period of nearly 150 years, every financial crisis was followed by a 10-year surge in support for far-right populist parties: On average, far-right votes increased by 30% after a financial crisis—but not after “normal” recessions (that is, contractions of the economy that were not accompanied by a systemic financial crisis).

Although it may at first glance appear paradoxical that “normal” recessions are not also followed by greater endorsement of far-right populism, this finding is consonant with other research which has shown that support for populism is not directly predicted by a person’s economic position. In one study involving a large Belgian sample, neither economic status nor life satisfaction predicted populism directly. Instead, what matters is how people interpret their economic position: feelings of relative personal deprivation and a general view of society being in decline were found to be the major predictors of populism.

It’s not the economy, stupid, it’s how people feel.

There is now reasonably consistent evidence that populism thrives on a perceived lack of political efficacy, on the belief that the world is unfair and that people do not get what they deserve, and on the sense that the world is changing too quickly for them to retain control. Whenever people attribute the origins of their perceived vulnerability to factors outside themselves, populism is not far away.

So what about immigration?

The answer is nuanced and complex, but as a first approximation here, too, what seems to matter is not the actual numbers but how they are interpreted. For example, in 1978, when net migration to the UK was around zero, up to 70% of the British public felt that they were in danger of ‘being swamped’ by other cultures. Conversely, in the early 2010s, the white Britons who were least concerned about immigration were those who lived in highly diverse areas in “Cosmopolitan London”.

It’s not just immigration, it’s how people feel about their new neighbors.

Where do we go from here?

On the supply side, hate-filled politicians and journalists alike must be held accountable for their choices and their words through the media, the rule of law, and ultimately, elections. McCarthy was brought down when Joseph Welch, chief counsel of the U.S. Army, confronted him in the U.S. Senate. “You’ve done enough,” Welch said. “Have you no sense of decency, sir, at long last? Have you left no sense of decency?”

A sense of decency may not be the first thing that comes to mind in connection with today’s demagogues, but London voters recently sent a clear signal about their decency when they rejected the fear-mongering of one candidate by resoundingly electing his Muslim opponent.

On the demand side, several recommendations to counter populism have been put forward, although the debate is still in its early stages. Two insights are promising. First, there is a need to offer a vision of a better society that people can identify with. The Remain campaign has thus far focused on highlighting the risks of an EU exit. Those risks loom large, but highlighting them, by itself, does not create a better world. It would be advisable to highlight the many ways in which the EU has contributed to such a better world—how many UK voters remember that the EU won the Nobel Peace Prize in 2012 for transforming Europe from a continent of war into a continent of peace? How many realize that the EU is one of the few institutions able to stand up to multinational tax avoidance, and that it appears poised to extract billions from Apple? The list goes on and deserves to be heard.

Second, we know with some degree of confidence that fear of the “other”, and hostility towards immigrants, can be overcome by interaction if certain key conditions are met. This work, mainly at the local level, is essential to heal the wounds of this divisive debate, whatever the outcome on Thursday. Lest one be pessimistic about the possibility of success, we need to remind ourselves how quickly and thoroughly we have tackled homophobia in Western societies: Whereas gay people were feared, marginalized, and excluded not so long ago, the UK Parliament is now the “queerest legislature in the world”, with 32 MPs calling themselves gay, lesbian, or bisexual.


Does the UK have a government?

It has now been 12 hours or more since the close of the first day of trading after the UK’s referendum vote to leave the EU, by which time more than $2 trillion had been wiped off stock markets around the world. This response is pretty much what had been expected by the preponderance of national and international experts, whom a leading “Leave” campaigner likened to the Nazis in the closing days of the campaign.

During those 12 hours, on what should be a relatively quiet Saturday in summer, a number of remarkable things have transpired:

  • The Leave campaign took less than 48 hours to abandon its pre-referendum fantasies, acknowledging that the mythical £350,000,000 supposedly sent to Brussels every week does not actually exist and therefore cannot be used to fund the NHS, and expressing surprise at the expectation that immigration would now decline. If anything will decline, it might be funding for the NHS.
  • Scotland has a government. The government of Scotland met and expressed its intention to remain part of the EU, in accordance with the overwhelming will of its people.
  • The EU has a governing structure. It met in Berlin and decided to move forward at a rapid clip, to keep the inevitable period of economic uncertainty as short as possible and to commence the Brexit negotiations.
  • France has a government, and its Foreign Minister made the rather obvious observation that it would be nice for the UK to have a new Prime Minister in a few days so that Brexit negotiations could commence. Not an unreasonable request at face value.

One thing that has been remarkably absent from this list of events, as of 1pm Saturday, is any mention or appearance of any sort of government of the United Kingdom. We have not heard from the current-but-soon-to-be-former Prime Minister, nor from any possible future Prime Minister.

Does the UK even have a government at this most crucial time in its history during my lifetime? Events are unfolding on a millisecond time scale, all the world’s market analysts are tracking events and waiting to pounce when the markets open on Monday, and the UK government has gone AWOL.

Driven by demagogues and arsonists, the UK ignored all the experts and all the facts and, to its own horror, set off a global crisis and a national recession on Thursday. On Friday, the pound sterling suffered a record loss of value and stock markets worldwide lost $2,000,000,000,000. Also on Friday, the Leave campaign revealed itself to be the scam that it was.

On Saturday, the arsonists and demagogues are nowhere to be seen, while the frat boys in the Tory party are trying to figure out what to do next.

Eventually the adults will have to clean up the mess.

Updated 2:10pm:

Now this from the Defence Secretary, Michael Fallon:

“The prime minister goes on, the government goes on until the autumn, until there is a new leader and a new government. We’ll remain at our posts and we have a big agenda. We were elected only a year ago and we’ve set out fresh legislation, which we’re taking through parliament at the moment. Cabinet is meeting on Monday. We were all elected just a year ago on a big programme of continuing to move the economy forward, creating more jobs, a programme of social reform, and investment in defence which you can see today.”

Oh dear. Seriously?

Updated 2:33pm:

London has a government too: Mayor Khan came out strongly with a declaration of his own.

We also have a video message from the currently-still-not-quite-former Prime Minister about celebrating Gay Pride.

This is actually the second tweet of the day by No 10. I missed the first one because it was about huggable heroes and did not show up in my news feed. Apologies to the huggables.