Sunday, September 29, 2013

Gilles Saint-Paul's blog

My attention has been drawn to the blog of my (former) colleague from TSE, Gilles Saint-Paul. Gilles's views are not for the faint-hearted, but they are always grounded in insightful economic analysis. For instance, have a look at his analysis (entitled "Ubu sets prices") of the recent French law establishing rent controls.

Wednesday, September 25, 2013

Graphical description of policy making in the US

A graphical description of what the US Congress must do in order to prevent a government shutdown by October 1st. It is very far from the simple "majority voting equilibrium" I am used to playing with...

What Congress Must Do to Avoid a Shutdown

The House on Friday passed a bill that would keep the government open through Dec. 15, but only if the health care law is stripped of all financing. That sent the fight to the Senate, where the most ardent conservatives, led by Senator Ted Cruz, Republican of Texas, began waging a procedural war to stretch out the debate. Here is what needs to happen before financing runs out Sept. 30.

[Flowchart from the original NYT graphic: a first Senate vote to cut off debate on the motion to take up the House spending bill (needs 60 votes); up to 30 hours of debate, which Republicans could waive; Harry Reid, the Senate majority leader, would introduce an amendment stripping the health care language from the House spending bill, essentially a substitute bill, possibly with provisions intended to attract Republican votes (needs 51 votes); a second and final vote to cut off debate (needs 60 votes), followed by another 30 hours of debate that Republicans could also waive; at that point the Senate is expected to pass the amended bill and send it back to the House; John A. Boehner, the House speaker, could then call a vote on the Senate bill: if it passes, the government is financed through Nov. 15; if it fails, or if Boehner sends the bill back to the Senate with additional Republican policy language, the government shuts down Oct. 1. To avoid a shutdown, this last step would need to happen by Wednesday.]

Tuesday, September 24, 2013

Econlolcats

or, as they write, "The world's first peer-remewed pictorial economics journal."

I love their latest pictorial:

Ain’t no cat likes a corner solution. Except this one. From @RHTGreen.

Sunday, September 15, 2013

The obesity paradox

An excellent presentation of the data supporting the "obesity paradox", whereby being slightly overweight seems to decrease mortality:

The big fat truth

Late in the morning on 20 February, more than 200 people packed an auditorium at the Harvard School of Public Health in Boston, Massachusetts. The purpose of the event, according to its organizers, was to explain why a new study about weight and death was absolutely wrong.
The report, a meta-analysis of 97 studies including 2.88 million people, had been released on 2 January in the Journal of the American Medical Association (JAMA) [1]. A team led by Katherine Flegal, an epidemiologist at the National Center for Health Statistics in Hyattsville, Maryland, reported that people deemed 'overweight' by international standards were 6% less likely to die than were those of 'normal' weight over the same time period.
The result seemed to counter decades of advice to avoid even modest weight gain, provoking coverage in most major news outlets — and a hostile backlash from some public-health experts. “This study is really a pile of rubbish, and no one should waste their time reading it,” said Walter Willett, a leading nutrition and epidemiology researcher at the Harvard school, in a radio interview. Willett later organized the Harvard symposium — where speakers lined up to critique Flegal's study — to counteract that coverage and highlight what he and his colleagues saw as problems with the paper. “The Flegal paper was so flawed, so misleading and so confusing to so many people, we thought it really would be important to dig down more deeply,” Willett says.
But many researchers accept Flegal's results and see them as just the latest report illustrating what is known as the obesity paradox. Being overweight increases a person's risk of diabetes, heart disease, cancer and many other chronic illnesses. But these studies suggest that for some people — particularly those who are middle-aged or older, or already sick — a bit of extra weight is not particularly harmful, and may even be helpful. (Being so overweight as to be classed obese, however, is almost always associated with poor health outcomes.)
Source: Childers, D.K. & Allison, D.B., Int. J. Obesity 34, 1231–1238 (2010).
The paradox has prompted much discussion in the public-health community — including a string of letters in JAMA last month [2] — in part because the epidemiology involved is complex, and eliminating confounding factors is difficult. But the most contentious part of the debate is not about the science per se, but how to talk about it. Public-health experts, including Willett, have spent decades emphasizing the risks of carrying excess weight. Studies such as Flegal's are dangerous, Willett says, because they could confuse the public and doctors, and undermine public policies to curb rising obesity rates. “There is going to be some percentage of physicians who will not counsel an overweight patient because of this,” he says. Worse, he says, these findings can be hijacked by powerful special-interest groups, such as the soft-drink and food lobbies, to influence policy-makers.
But many scientists say that they are uncomfortable with the idea of hiding or dismissing data — especially findings that have been replicated in many studies — for the sake of a simpler message. “One study may not necessarily tell you the truth, but a bulk of studies saying the same thing and being consistent, that really is reinforcing,” says Samuel Klein, a physician and obesity expert at Washington University in St Louis, Missouri. “We need to follow the data just like the yellow brick road, to the truth.”

Throwing a curve

The notion that excess weight hastens death can be traced back to studies from the US insurance industry. In 1960, a thick report based on data from policy-holders at 26 life-insurance companies found that mortality rates were lowest among people who weighed a few kilograms less than the US average, and that mortality climbed steadily with weight above this point. This spurred the Metropolitan Life Insurance Company (MetLife) to update its table of 'desirable weights', creating standards that were widely used by doctors for decades to come.
In the early 1980s, Reubin Andres, who was the director of the US National Institute on Aging in Bethesda, Maryland, made headlines for challenging the dogma. By reanalysing actuarial tables and research studies, Andres reported that the relationship between height-adjusted weight and mortality follows a U-shaped curve. And the nadir of that curve — the weight at which death rates are lowest — depends on age (see 'Weight watching'). The weights recommended by MetLife may be appropriate for people who are middle-aged, he calculated, but not for those in their 50s or older [3], who were better off 'overweight'. It was the first glimmer of the obesity paradox.
Andres's ideas were roundly rejected by the mainstream medical community. In an often-cited JAMA paper [4] published in 1987, for example, Willett and JoAnn Manson, an epidemiologist at the Harvard School of Public Health, analysed 25 studies of weight–death relationships and claimed that most were tainted by two confounders: smoking and sickness. Smokers tend to be leaner and die earlier than non-smokers, and many people who are chronically ill also lose weight. These effects could make thinness itself seem to be a risk.
Manson and Willett backed up that idea in a 1995 report that analysed body-mass index (BMI) — the 'gold-standard' measure of weight, defined as weight in kilograms divided by height in metres squared — in more than 115,000 female nurses enrolled in a long-term health study [5]. When the researchers excluded women who had ever smoked and those who died during the first four years of the study (reasoning that these women may have had disease-related weight loss), they found a direct linear relationship between BMI and death, with the lowest mortality at BMIs below 19. (That is about 50 kilograms for a woman who is 1.63 metres tall.)
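(To make the arithmetic concrete, here is a minimal sketch, my own illustration rather than part of the Nature piece, of the BMI formula and its inverse, reproducing the "about 50 kilograms at 1.63 metres" figure quoted above.)

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def weight_for_bmi(target_bmi: float, height_m: float) -> float:
    """Invert the formula: the weight that corresponds to a target BMI at a given height."""
    return target_bmi * height_m ** 2

# The example quoted above: a BMI of 19 for a woman who is 1.63 metres tall.
print(round(weight_for_bmi(19, 1.63), 1))   # ~50.5 kg, i.e. "about 50 kilograms"

# The WHO cut-offs mentioned later in the article, at the same height (illustrative only).
for label, cutoff in [("normal (18.5)", 18.5), ("overweight (25)", 25), ("obese (30)", 30)]:
    print(label, round(weight_for_bmi(cutoff, 1.63), 1), "kg")
```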
“It didn't seem to be biologically plausible that overweight and obesity could both increase the risk of life-threatening diseases and yet lower mortality rates,” Manson says. The study proved, she says, that this idea “was more artefact than fact”.
Around the same time, the world was waking up to obesity. Since 1980, rates of overweight and obesity had begun to rocket [6,7,8], and in 1997, the World Health Organization (WHO) held its first meeting on the subject, in Geneva, Switzerland. That meeting resulted in the introduction of new criteria for 'normal weight' (BMI of 18.5–24.9), 'overweight' (BMI of 25–29.9) and 'obese' (BMI of 30 or higher). In 1998, the US Centers for Disease Control and Prevention (CDC) lowered its BMI cut-offs to match the WHO's classifications. “We used to call [obesity] the Cinderella of risk factors, because nobody was paying attention to it,” says Francisco Lopez-Jimenez, a cardiac physician at the Mayo Clinic in Rochester, Minnesota. They were now.

Statistical sparring

Flegal was one of those raising the alarm. At the statistics centre, which is part of the CDC, she has at her fingertips data from the agency's National Health and Nutrition Examination Survey (NHANES). Based on interviews and physical examinations of about 5,000 people a year, the NHANES has been running since the 1960s. Flegal and her colleagues used it to show that rates of overweight and obesity in the United States were climbing [6,7].
In 2005, however, Flegal found that NHANES data confirmed Andres's U-shaped mortality curve. Her analysis showed that people who were overweight — but not obese — had a lower mortality rate than those of normal weight, and that the pattern held even in people who had never smoked [9].
Flegal's study got a lot of press, says Willett, because she works at the CDC and it seemed to be a sanction for gaining weight. “A lot of people interpreted this as being the official statement of the US government,” he says. Just as they did earlier this year, Willett and his colleagues criticized the work and put together a public symposium to discuss it. The academic kerfuffle drew a lot of negative media attention to Flegal's study. “I was pretty surprised by the vociferous attacks on our work,” says Flegal, who prefers to focus on the finer points of epidemiological number-crunching, rather than the policy implications of the resulting statistics. “Particularly initially, there were a lot of misunderstandings and confusion about our findings, and trying to clear those up was time-consuming and somewhat difficult.”
Over the next few years, other researchers found the same trend, and Flegal decided to carry out the meta-analysis that she published earlier this year [1]. “We felt it was time to put all of this stuff together,” she says. “We might not understand what it all means, but this is what's out there.” Her analysis included all prospective studies that assessed all-cause mortality using standard BMI categories — 97 studies in total. All the studies used standard statistical adjustments to account for the effects of smoking, age and sex. When the data from all adult age groups were combined, people whose BMIs were in the overweight range (between 25 and 29.9) showed the lowest mortality rates.
The Harvard group contends, however, that Flegal's approach did not fully correct for age, sickness-related weight loss and smoking. They say that the effect would have vanished in younger age groups if Flegal had separated them out. They also argue that not all smokers have the same level of exposure — people who smoke heavily tend to be leaner than occasional smokers, for example — so the best way to remove smoking as a confounder is to focus on people who have never smoked. Willett points to one of his studies [10], published in 2010, that was not included in Flegal's analysis because it did not use standard BMI categories. Analysing data from 1.46 million people, Willett and his colleagues found that among people who have never smoked, the lowest mortality occurs in the 'normal' BMI range, of 20–25.
Flegal, in turn, criticizes the Willett study for scrapping large swathes of the raw data set: nearly 900,000 people in all. “Once you delete such large numbers, and they are really large, you don't quite know how the never-smokers in the sample differ from the others,” she says. Never-smokers could be richer or more educated, for example. What is more, says Flegal, Willett's study relies on participants' self-reported heights and weights, rather than objective measures. “It's a huge deal,” Flegal says, because people tend to underestimate how much they weigh. This could skew death risks upwards if, for example, people who are obese and at high risk say that they are merely overweight.

Healthy balance

Many obesity experts and health biostatisticians take issue with the harsh tone of Willett's statements about Flegal's work. They say that there is merit in both Willett's and Flegal's studies, that the two are simply looking at data in different ways and that enough studies support the obesity paradox for it to be taken seriously. “It's hard to argue with data,” says Robert Eckel, an endocrinologist at University of Colorado in Denver. “We're scientists. We pay attention to data, we don't try to un-explain them.”
What they are trying to explain is the reason for the paradox. One hint lies in the growing number of studies over the past decade showing that in people with serious illnesses such as heart disease, emphysema and type 2 diabetes, those who are overweight have the lowest death rates. A common explanation is that people who are overweight have more energy reserves to fight off illness. They are like contestants on the television show Survivor, says Gregg Fonarow, a cardiologist at the University of California, Los Angeles: “Those that started off pretty thin often don't come out successful.”
Metabolic reserves could also be important as people age. “Survival is a balance of risks,” says Stefan Anker, a cardiology researcher at Charité Medical University in Berlin. “If you are young and healthy, then obesity, which causes problems in 15 or 20 years, is relevant,” he says. With age, though, the balance may tip in favour of extra weight.
Genetic and metabolic factors may also be at play. Last year, Mercedes Carnethon, a preventive-medicine researcher at Northwestern University in Chicago, Illinois, reported that adults who develop type 2 diabetes while they are of normal weight are twice as likely to die over a given period as those who are overweight or obese [11]. Carnethon says that the trend is probably driven by a subset of people who are thin yet 'metabolically obese': they have high levels of insulin and triglycerides in their blood, which puts them at a higher risk for developing diabetes and heart disease.
All this suggests that BMI is a crude measure for evaluating the health of individuals. Some researchers contend that what really matters is the distribution of fat tissue on the body, with excess abdominal fat being most dangerous; others say that cardiovascular fitness predicts mortality regardless of BMI or abdominal fat. “BMI is just a first step for anybody,” says Steven Heymsfield, an obesity researcher and the executive director of the Pennington Biomedical Research Center in Baton Rouge, Louisiana. “If you can then add waist circumference and blood tests and other risk factors, then you can get a more complete description at the individual level.”
If the obesity-paradox studies are correct, the issue then becomes how to convey their nuances. A lot of excess weight, in the form of obesity, is clearly bad for health, and most young people are better off keeping trim. But that may change as they age and develop illnesses.
Some public-health experts fear, however, that people could take that message as a general endorsement of weight gain. Willett says that he is also concerned that obesity-paradox studies could undermine people's trust in science. “You hear it so often, people say: 'I read something one month and then a couple of months later I hear the opposite. Scientists just can't get it right',” he says. “We see that time and time again being exploited, by the soda industry, in the case of obesity, or by the oil industry, in the case of global warming.”
Preventing weight gain in the first place should be the primary public-health goal, Willett says. “It's very challenging to lose weight once you're obese. That's the most serious consequence of saying there's no problem with being overweight. We want to have people motivated not to get there in the first place.” But Kamyar Kalantar-Zadeh, a nephrologist at the University of California, Irvine, says that it is important not to hide subtleties about weight and health. “We are obliged to say what the real truth is,” he says.
Flegal, meanwhile, says that the public's reaction to her results is not her primary concern. “I work for a federal statistical agency,” she says. “Our job is not to make policy, it's to provide accurate information to guide policy-makers and other people who are interested in these topics.” Her data, she says, are “not intended to have a message”.

Academic fraud

The amazing story of Diederik Stapel, a social psychologist at Tilburg University, who fabricated most of his experimental results for many years and has finally been caught.

Definitely, this could not happen within the field of experimental economics ...

See the NYT article here (too long to post here).

Re-industrialization is not the solution

I wish someone would send this post by Lane Kenworthy to the French minister for re-industrialization...
Many of the rich countries, when they return to reasonably robust economic growth, will face two potential obstacles to shared prosperity. One is a shortage of jobs. The other is stagnant (or falling) wages for those in the lower half.
The quantity of jobs is easier to solve, as there is considerable scope for expansion of employment in helping-caring services. These jobs will be valuable to society; we will benefit from having more people educate children, keep us healthy and care for us when we are ill, and give us personalized assistance in transitioning from school to work, switching from one type of work to another in middle age, improving our family life, transitioning into retirement, flourishing during retirement years, and much more. There will be plenty of demand for these services. As we get richer, most of us are happy to outsource tasks that we lack the expertise and/or time to perform ourselves. And we will likely be able to afford them as the cost of food, manufactured items, and possibly also energy falls [1].
But some of these jobs, maybe many of them, will be low paying. Moreover, an array of economic shifts coupled with likely weakening of unions and collective bargaining may cause pay for workers in the lower half to stagnate or even decrease. The potential result: a replication of the American experience since the 1970s, featuring decoupling between growth of the economy and growth of household incomes for those in the middle and below (see figure 1). The economy will grow, but little of the gain will trickle down to the bottom half.
Figure 1. Economic growth and household income growth in the United States, 1947-79 versus 1979-2007
Each series is displayed as an index set to equal 1 in 1947. Q1, Q2, and Q3 are the first (lowest), second, and third quintiles of the income distribution. Inflation adjustment for each series is via the CPI-U-RS. Data sources: Bureau of Economic Analysis; Census Bureau; Congressional Budget Office.
What can we do to ensure that the incomes and living standards of lower-half households more closely track growth of the economy?
Strategies unlikely to work
Let me begin with three strategies that are traditional favourites of the left but probably aren’t up to the task.
Reindustrialise
For persons with limited education, a job in manufacturing is one of the few paths to decent and rising pay. Protecting existing manufacturing jobs, bringing back lost ones, and creating new ones is a perennial aim of the left. But possibilities here are limited. As figure 2 shows, manufacturing’s share of employment has been shrinking steadily in all rich nations. (I’ve highlighted Denmark, Germany, and the UK in this and several later charts, purely for illustrative purposes.) There are no exceptions. Even South Korea, which didn’t industrialise until the 1970s and 1980s, has joined the downward march.
Figure 2. Manufacturing’s share of employment

Manufacturing employment as a share of total employment. 21 countries. The lines are loess curves. Average in 1979: 23%. Average in 2007: 15%. Data source: OECD.
Figure 3. Manufacturing employment and total employment, 2007
Employment rate: employed persons as a share of the population age 25-64. Data source: OECD.
As in agriculture, this employment decline is due partly to automation. It is also due, of course, to opportunities for low-cost production in poorer nations. Neither is likely to abate. Two decades from now, manufacturing jobs will have shrunk to less than 10 per cent of employment in most affluent countries.
This is a problem for wages and wage growth, but it is not necessarily an obstacle to high employment. Looking across the rich countries, there is no tendency for those with a larger share of employment in manufacturing to have a higher employment rate, as figure 3 indicates.
Strengthen collective bargaining
Strong labour unions can blunt the downward pressure on wages. For several decades following World War 2, unions ensured that firms passed a healthy portion of profit growth on to employees in the form of pay increases, and that has continued in the countries where unions remain strong.
But as figure 4 shows, unionisation has been falling in most affluent nations. Only five now have rates above 40 per cent, and four of those (Belgium, Denmark, Finland, and Sweden) are countries in which access to unemployment insurance is tied to union membership.
Figure 4. Unionisation
 
Union members as a share of all employees. 20 countries. The lines are loess curves. Data source: Jelle Visser, “ICTWSS: Database on Institutional Characteristics of Trade Unions, Wage Setting, State Intervention, and Social Pacts,” version 3, 2011, Amsterdam Institute for Advanced Labour Studies, series ud.
Figure 5. Collective Bargaining Coverage
Share of employees with wages determined by collective agreements. 20 countries. The lines are loess curves. Data source: Jelle Visser, “ICTWSS: Database on Institutional Characteristics of Trade Unions, Wage Setting, State Intervention, and Social Pacts,” version 3, 2011, Amsterdam Institute for Advanced Labour Studies, series adjcov.
Figure 5 shows that despite the near-universal decline in unionisation, collective bargaining coverage has held up in many nations. Will it continue to hold up? That’s difficult to predict, but the German experience is worrisome. It’s a non-Anglo country with a long history of successful pattern bargaining, yet collective agreement coverage has fallen by about 20 percentage points.
Even if there is no further reduction in bargaining coverage going forward, in all but a handful of the rich countries 20 per cent or more of the employed already are outside the reach of collective agreements. And in half of the countries it’s 40 per cent or more. That’s a lot of people facing the prospect of no sustained wage improvement.
Tighten the labour market
Full employment can help push wages up even in an otherwise inhospitable market and institutional context. Indeed, in the United States, an unemployment rate around 4 per cent was the key to the past generation’s one brief period of nontrivial wage growth – the late 1990s. But monetary authorities aren’t likely to cooperate, particularly given that monetary accommodation is widely thought to have contributed to the housing bubble and bust that precipitated the 2008 economic crash.
More promising routes to lift living standards
Here are four strategies I see as more promising routes to shared prosperity in the new economic context.
Educate
Schooling is not a cure-all. It can’t guarantee high employment, rising wages, broadly shared prosperity, or any other element of a good society. But it helps. The better we do with education, the larger the share of the population who will be able to work in decent-paying analytical professional jobs [4].
Public services
Public goods, services, spaces, and mandated free time – from childcare to roads and bridges to health care to holidays and vacations and paid parental leave – increase the sphere of consumption for which the cost to households is zero or minimal. They lift the living standards of households directly and free up income for purchasing other goods and services. Their addition to material well-being doesn’t show up in income statistics, but it’s real nonetheless.
Universal early education would be a particularly fruitful path to pursue. Denmark and Sweden point the way forward. Danish and Swedish parents can take a paid year off work following the birth of a child. After that, parents can put the child in a public or cooperative early education centre. Early education teachers get training and pay comparable to elementary school teachers. Parents pay a fee, but the cost is capped at around 10% of a household’s income.
Early education has three benefits. First, it facilitates employment of parents, especially mothers, thereby boosting family incomes. In a context of flat or declining wages, adding employment hours is the only way for families to increase their earnings.
Second, early education helps parents balance work and family, which is a quality-of-life improvement in and of itself.
Third, early education enhances capabilities, particularly for those from less advantaged homes. In the Nordic countries, the influence of parents’ education, income, and parenting practices on their children’s cognitive abilities, likelihood of completing high school and college, and labour market success is weaker than elsewhere. Evidence increasingly suggests that the early years are the most important ones for developing cognitive and noncognitive skills, so the Nordic countries’ success in equalising opportunity very likely owes partly, perhaps largely, to early education [5].
A statutory minimum wage that rises with prices
If union decline continues and collective bargaining coverage follows suit, a statutory minimum wage will be needed to secure a decent wage floor. To ensure that the floor rises, the statutory minimum should be tied (indexed) to prices and also periodically adjusted upward in real terms.
Though vital, a wage floor is of limited help to many. Its main effect is to compress the bottom of the wage distribution rather than to push up wages for everyone in the lower half [6].
Decoupling insurance
I recommend a government programme that can compensate for stagnant wages in a context of robust economic growth – insurance against decoupling, if you will [7]. Countries that already have an employment-conditional earnings subsidy (Earned Income Tax Credit, Universal Credit, etc.) could build on that. The ideal, in my view, would be to make receipt conditional on earnings, give it to everyone with earnings rather than only to those with low income, tax it for households with relatively high income, and index it to average compensation (or perhaps GDP per capita). This would ensure that when the economy grows, household incomes do too.
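(A minimal sketch of the indexing idea, my own illustration rather than Kenworthy's proposal, with all parameter values invented: a subsidy that is conditional on earnings, clawed back at higher household incomes, and indexed to average compensation rises automatically when the economy grows.)

```python
def decoupling_subsidy(earnings: float, household_income: float, avg_compensation: float,
                       base_share: float = 0.05, phase_out_start: float = 60_000,
                       phase_out_rate: float = 0.2) -> float:
    """Illustrative earnings subsidy indexed to economy-wide average compensation.

    Receipt is conditional on having any earnings; the base amount is a fixed
    share of average compensation (so it grows when the economy grows); and it
    is clawed back for households with relatively high incomes. All parameter
    values here are hypothetical.
    """
    if earnings <= 0:
        return 0.0
    base = base_share * avg_compensation
    clawback = max(0.0, household_income - phase_out_start) * phase_out_rate
    return max(0.0, base - clawback)

# If average compensation rises from 50,000 to 55,000, the subsidy rises with it,
# even though the worker's own earnings are flat.
print(decoupling_subsidy(earnings=20_000, household_income=30_000, avg_compensation=50_000))  # 2500.0
print(decoupling_subsidy(earnings=20_000, household_income=30_000, avg_compensation=55_000))  # 2750.0
```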
Some will ask why taxpayers rather than employers should bear the cost of ensuring that household incomes rise. It’s an understandable sentiment. But consider how we think about health insurance, pensions, unemployment insurance, and sickness/disability insurance. Like income, these contribute to economic security and material well-being. In all affluent nations, they are financed at least partly by taxes or social contributions. Few object to the fact that firms aren’t the sole funders.
Why propose a new (or expanded) government social programme at a moment when economic conditions and political sentiment in many countries militate in favour of spending cuts? First, this is a strategy for the medium- and long run. Second, the logic of public policy as a mechanism to insure against risk remains as compelling as ever. If we want shared prosperity and if markets and institutions no longer can provide it, offering a simple public insurance remedy such as this can be both smart policy and smart politics.
Lane Kenworthy is professor of sociology and political science at the University of Arizona.
Lane Kenworthy will speak at the Policy Network/Global Progress conference on "Progressive Governance: Towards Growth and Shared Prosperity" taking place on the 11th and 12th of April 2013.
1. William J. Baumol, The Cost Disease, Yale University Press, 2012.
2.  Jess Bailey, Joe Coward, and Matthew Whittaker, “Painful Separation: An International Study of the Weakening Relationship between Economic Growth and the Pay of Ordinary Workers,” Commission on Living Standards, Resolution Foundation, 2011.
3.  Lane Kenworthy, Progress for the Poor, Oxford University Press, 2011; Kenworthy, “When Does Economic Growth Benefit People on Low-to-Middle Incomes – and Why?” Commission on Living Standards, Resolution Foundation, 2011.
4.  Lane Kenworthy, “Two and a Half Cheers for Education,” pp. 111-123 in After the Third Way: The Future of Social Democracy in Europe, edited by Olaf Cramme and Patrick Diamond, a Policy Network book, I.B. Tauris, 2012.
5.  James J. Heckman, “Schools, Skills, and Synapses,” NBER Working Paper 14064, 2008; Christopher Ruhm and Jane Waldfogel, “Long-Term Effects of Early Childcare and Education,” IZA Discussion Paper 6149, 2011; John Ermisch, Markus Jäntti, and Timothy Smeeding, eds., From Parents to Children: The Intergenerational Transmission of Advantage, Russell Sage Foundation, 2012; Gøsta Esping-Andersen and Sandra Wagner, “Asymmetries in the Opportunity Structure: Intergenerational Mobility Trends in Europe,” Research in Social Stratification and Mobility 30: 473-487, 2012.
6.  For a useful illustration, see figure 4.12 in Resolution Foundation Commission on Living Standards, Gaining from Growth, 2012.
7.  Robert Shiller’s “inequality insurance” proposal is similar in spirit. See Shiller, The New Financial Order, Princeton University Press, 2003, ch. 11. See also Robert B. Reich, Aftershock, Knopf, 2010

How misunderstanding of basic probability can have huge consequences...

Or why Prob(A given B) differs from Prob(B given A), and how confusing the two can land you in jail (and why people educated in law – Hi Quentin! – should take math courses):

The Prosecutor’s Fallacy

Later this month – or it could be next month – a group of three judicial “wise men” in the Netherlands should finally settle the fate of a very unlucky woman named Lucia de Berk. A 45-year-old nurse, de Berk is currently in a Dutch prison, serving a life sentence for murder and attempted murder. The “wise men” – an advisory judicial committee known formally as the Posthumus II Commission – are reconsidering the legitimacy of her conviction four years ago.
Lucia is in prison, it seems, mostly because of human susceptibility to mathematical error – and our collective weakness for rushing to conclusions as a single-minded herd.
When a court first convicted her, the evidence seemed compelling. Following a tip-off from hospital administrators, investigators looked into a series of “suspicious” deaths or near deaths in hospital wards where de Berk had worked from 1999 to 2001, and they found that Lucia had been physically present when many of them took place. A statistical expert calculated that the odds were only 1 in 342 million that it could have been mere coincidence.

Open and shut case, right? Maybe not. A number of Dutch scientists now argue convincingly that the figure cited was incorrect and, worse, irrelevant to the proceedings, which were in addition plagued by numerous other problems.
For one, it seems that the investigators weren’t as careful as they might have been in collecting their data. When they went back, sifting through hospital records looking for suspicious cases, they classified at least some events as suspicious only after they realized that Lucia had been present. So the numbers that emerged were naturally stacked against her.
Mathematician Richard Gill of the University of Leiden, in the Netherlands, and others who have redone the statistical analysis to sort out this problem and others suggest that a more accurate number is something like 1 in 50, and that it could be as low as 1 in 5.
More seriously still – and here’s where the human mind really begins to struggle – the court, and pretty much everyone else involved in the case, appears to have committed a serious but subtle error of logic known as the prosecutor’s fallacy.
The big number reported to the court was an estimate (possibly greatly inflated) of the chance that so many suspicious events could have occurred with Lucia present if she was in fact innocent. Mathematically speaking, however, this just isn’t at all the same as the chance that Lucia is innocent, given the evidence, which is what the court really wants to know.
To see why, suppose that police pick up a suspect and match his or her DNA to evidence collected at a crime scene. Suppose that the likelihood of a match, purely by chance, is only 1 in 10,000. Is this also the chance that they are innocent? It’s easy to make this leap, but you shouldn’t.
Here’s why. Suppose the city in which the person lives has 500,000 adult inhabitants. Given the 1 in 10,000 likelihood of a random DNA match, you’d expect that about 50 people in the city would have DNA that also matches the sample. So the suspect is only 1 of 50 people who could have been at the crime scene. Based on the DNA evidence only, the person is almost certainly innocent, not certainly guilty.
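(Here is the same arithmetic as a minimal Bayes-rule sketch; the 500,000 inhabitants and the 1-in-10,000 match probability come from the example above, while the uniform prior over inhabitants is my simplifying assumption.)

```python
# Assumptions: exactly one guilty person among 500,000 adults, a uniform prior,
# and a 1-in-10,000 probability that an innocent person's DNA matches by chance.
p_match_given_innocent = 1 / 10_000
p_match_given_guilty = 1.0                 # the guilty person's DNA matches by construction
p_guilty = 1 / 500_000
p_innocent = 1 - p_guilty

# Bayes' rule: P(guilty | match) = P(match | guilty) * P(guilty) / P(match)
p_match = p_match_given_guilty * p_guilty + p_match_given_innocent * p_innocent
p_guilty_given_match = p_match_given_guilty * p_guilty / p_match

print(round(p_guilty_given_match, 4))   # ~0.02, i.e. roughly 1 chance in 50 of guilt
print(p_match_given_innocent)           # 0.0001, the very different number the fallacy confuses it with
```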
This kind of error is so subtle that the untrained human mind doesn’t deal with it very well, and worse yet, usually cannot even recognize its own inability to do so. Unfortunately, this leads to serious consequences, as the case of Lucia de Berk illustrates. Worse yet, our strong illusion of certainty in such matters can also lead to the systematic suppression of doubt, another shortcoming of the de Berk case.
Indeed, de Berk’s defense team presented other numbers that should have created serious doubt in the mind of the court, but apparently didn’t. When de Berk worked on the hospital wards in question, from 1999 to 2001, six suspicious deaths occurred. In the same wards, in a similar period of time before de Berk started working there, there were actually seven suspicious deaths.
If de Berk were a serial killer, it certainly would be bizarre that her presence would lead to a decrease in the overall number of deaths.
Of course, the de Berk case is hardly an isolated example of statistical error in the courtroom. In a famous case in the United Kingdom a few years ago, Sally Clark was found guilty of killing her two infants, largely on the basis of testimony given by Roy Meadows, a physician who told the court that the chance that the two both could have died from Sudden Infant Death Syndrome (SIDS) was only 1 in 73 million. Meadows arrived at this number by squaring the estimated probability for one such death, which is an elementary mistake. Because SIDS may well have genetic links, the chance that a mother who already had one child die from SIDS would have a second one may be considerably higher.
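(A small numerical sketch of that mistake; the 1-in-8,543 per-family SIDS figure is the one usually reported from the Clark trial, and the dependent-risk number below is purely illustrative.)

```python
p_one = 1 / 8_543            # per-family probability of one SIDS death cited at trial (assumed here)

# Meadows' calculation: treat the two deaths as independent and square the probability.
p_two_independent = p_one ** 2
print(f"1 in {1 / p_two_independent:,.0f}")   # ~1 in 73 million

# If a first SIDS death raises the risk of a second (genetic or environmental links),
# the joint probability is p_one * P(second | first), which can be far larger.
# Purely illustrative: suppose the conditional risk is 1 in 100 rather than 1 in 8,543.
p_two_dependent = p_one * (1 / 100)
print(f"1 in {1 / p_two_dependent:,.0f}")     # ~1 in 854,300
```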
Here, too, the prosecutor’s fallacy seems to have loomed large, as the likelihood of two SIDS deaths, whatever the number, is not the chance that the mother is guilty, though the court may have interpreted it as such.
Even our powerful intuitive belief that “common sense” is a reliable guide can be extremely dangerous. In Sally Clark’s first appeal, statistician Philip Dawid of University College London was called as an expert witness, but judges and lawyers ultimately decided not to take his advice, as the statistical matters in question were not, they decided, “rocket science.” The conviction was upheld on this appeal (although it was subsequently overturned).
Legal experts in the United States and the United Kingdom are taking some tentative steps to rectify this problem – by organizing further education in statistics for judges and lawyers, and by arranging for the use of special scientific panels in court. Still, it will remain difficult to counteract the timeless process of social amplification that can turn the opinions of a few, based on whatever reasoning, into the near certainty of the crowd.
In the wake of the impressive 1-in-342 million number, the Dutch press piled on de Berk, demonizing her as a cold, remorseless killer. They noted, as if it were somehow relevant, that she had suspiciously spent a number of years outside of the Netherlands, and had even worked for a time as a prostitute. Other “evidence” at the trial was an entry from de Berk’s diary, on the same date as one of the deaths, which said that she had “given in to her compulsion.” Elsewhere she wrote that she had “a very great secret,” which she insisted was reading Tarot cards, but the prosecution alleged, and many people believed, referred to her murdering patients.
What ensued was something akin to the Salem witch hunt. Throughout the trial, Lucia maintained her innocence. But the prosecution called an expert witness who testified that serial killers often refuse to confess. So her protestations became yet more evidence against her.
But now that the evidence has been called into question, social opinion, expressed most clearly in the press, has swung the other way. As Gill, the Leiden mathematician, said to me in an e-mail message, the media suddenly have begun pushing the view that maybe there’s been a miscarriage of justice.
“Suddenly we’re seeing real photos of Lucia de Berk as a normal person,” said Gill, “rather than as a kind of caricature of a modern witch. It’s a fascinating glimpse of group psychology, and a huge change seeded by a little bit of information at the right moment.”
In ordinary usage, “common sense” is taken to be something of value. Albert Einstein had a less charitable view. “Common sense,” he wrote, “is nothing more than a deposit of prejudices laid down by the mind before you reach age 18.”
Our ability as people to understand our habitual failings, both individually and socially, is a great part of what sets us apart from the rest of nature. We excel precisely insofar as we manage to use that ability. Sadly, in the legal setting at least, we still have lots of room for improvement.
__________________
In the case of Lucia de Berk, several Dutch scientists deserve enormous credit for their determined exploration of the way Lucia’s case was handled, and especially for exposing the flawed nature of the statistical arguments. Richard Gill has an extensive summary of the details of the case on the web. Ton Derksen, a Dutch philosopher of science, has written a book critical of the case. Both have submitted presentations to a Dutch committee of legal “wise men” which is now considering whether the case should be reopened.

Who pays the corporate income tax?

The following is a nice NYT piece on the incidence of corporate income taxation. I was surprised to read the following statement:

"Probably most people assume that the corporate income tax is largely paid by consumers of its products or services. That is, they assume that although the tax is nominally levied on the corporation as a whole, in fact the burden of the tax is shifted onto customers in the form of higher prices."

I may be wrong, but I would surmise that nearly everyone in France (including current and past members of government and parliament) thinks that the corporate income tax is paid by shareholders, and thus by rich people. I would be surprised if more than a tiny minority of people in France thought about taxes being shifted to consumers (and to workers, for that matter)...

Who Pays the Corporate Income Tax?

Bruce Bartlett held senior policy roles in the Reagan and George H.W. Bush administrations and served on the staffs of Representatives Jack Kemp and Ron Paul. He is the author of “The Benefit and the Burden: Tax Reform – Why We Need It and What It Will Take.”
The United States has had a corporate income tax since 1909, but in all the years since there is a major question about it that economists haven’t been able to answer satisfactorily: who pays it? The possibility that Congress may act on corporate tax reform this year makes this a highly salient question.
The problem, of course, is that people must ultimately pay all taxes. Corporations, contrary to the views of some Republicans, are not people. They are legal entities that exist only because governments permit them to and are artificial vehicles through which sales, wages and profits flow. Hence, the actual burden of the corporate tax may fall on any of the groups that receive such flows; namely, customers, workers and shareholders, the ultimate owners of the corporation.
Probably most people assume that the corporate income tax is largely paid by consumers of its products or services. That is, they assume that although the tax is nominally levied on the corporation as a whole, in fact the burden of the tax is shifted onto customers in the form of higher prices.
All economists reject that idea. They point out that prices are set by market forces and the suppliers of goods and services aren’t only C-corporations, which pay taxes on the corporate tax schedule, but also sole proprietorships, partnerships and S-corporations that are taxed under the individual income tax. Other suppliers include foreign corporations and nonprofits.
Therefore, corporations cannot raise prices to compensate for the corporate income tax because they will be undercut by businesses to which the tax does not apply. It should also be noted that the states have substantially different corporate tax regimes, including some that do not tax corporations at all, and we do not observe that prices for goods and services vary from state to state depending on its taxation of corporations.
That leaves two remaining groups that may bear the burden of the corporate tax: workers and shareholders.
In 1962, the University of Chicago economist Arnold C. Harberger published an important article arguing that the corporate tax was borne entirely by shareholders. This was unquestionably true in the first instance; that is, when the corporate income tax was first imposed. The tax simply reduced corporate profits and had to come out of the pockets of shareholders, given that it could not be shifted onto consumers.
But as time went by, some economists argued that a substantial portion of the corporate income tax was ultimately paid by workers in the form of lower wages. The reasoning is that the tax lowers the after-tax return on capital, so the supply of capital shrinks until the rate of return rises enough to compensate. A smaller capital stock would reduce the productivity of labor and cause real wages to be lower in the long run.
Most economists now agree that the burden of the corporate income tax falls on labor to some extent, but there is disagreement over the degree. This is important because the political prospects for cutting the statutory corporate tax rate, a goal shared by all tax reformers, may depend on the extent to which it can be shown that workers will benefit.
The just-published March 2013 issue of The National Tax Journal, the principal academic journal devoted to tax analysis, contains four articles by top scholars who have sought to clarify the incidence of the corporate income tax. Unfortunately, there is no consensus.
The first article, by a Reed College economist, Kimberly Clausing, supports the traditional idea that capital bears all of the corporate tax. She notes that large multinational corporations have a great deal of flexibility in determining where to locate production, incur costs and realize profits.
A company may borrow in one country and take the deduction for interest there, locate actual production facilities and employ workers in another country, and realize profits in a third country by transferring intellectual property such as patents there or by adjusting prices on internal sales among its foreign subsidiaries.
Moreover, Professor Clausing notes, corporate shareholders may live in many different countries, each facing a different tax regime with respect to the taxation of dividends and capital gains.
For these reasons, she argues that it is impossible for workers to bear any significant portion of the corporate tax in the form of lower wages. It all falls on capital. A second article, by Jennifer Gravelle, a Congressional Budget Office economist, agrees with this conclusion.
But a third article, by an Oxford University economist, Li Liu, and a Rutgers economist, Rosanne Altshuler, argues in favour of the idea that labor bears most of the burden of the corporate tax.
They take advantage of the fact that different industries bear different tax burdens because of various provisions of the tax law, and also that concentration and competition varies among industries. They empirically examine wages among industries and conclude that labor bears about 60 percent of the corporate tax burden.
That is, a $1 increase in corporate taxes will reduce wages by about 60 cents.
Finally, four Treasury Department economists detail the method the Treasury uses to allocate the corporate tax in distribution tables. They have the advantage of access to actual corporate tax returns and far greater detail on corporate finances than available to private researchers.
The Treasury economists conclude that 82 percent of the corporate tax falls on capital and 18 percent on labor. This is very close to the methodology of the private Tax Policy Center, whose analyses are frequently cited in policy debates. It assumes that 80 percent of the corporate tax is borne by capital and 20 percent by labor.
Of course, all of these assumptions may be called into question when dealing with any specific tax reform proposal. For example, a change in depreciation allowances is mainly going to affect manufacturing companies, whereas a change in the taxes on foreign-source income will have an impact only on multinationals.
To build support for or opposition to particular changes in corporate taxation, many claims will be made about the constituencies that will benefit or be harmed. People should be aware that even the best academic economists disagree on the basics of who actually pays the corporate tax.

Welcome to the new French empire!

From Krugman's blog:

Here are the Eurostat population projections out to 2060:
[Chart: Eurostat population projections to 2060]
If we assume that major European nations will have similar levels of GDP per capita, which seems reasonable, then by mid-century France, not Germany, will be the biggest European economy, through sheer force of numbers. If the EU is still holding together, this could mean that France is in turn the leader of one of the world’s great economic powers. Welcome to the new French empire!
OK, maybe that’s going too far. But I am surprised that France’s relative demographic advantage within Europe doesn’t get more attention.

Cliodynamics

From Peter Turchin: "It is time for history to become an analytical, and even a predictive, science." He proposes to found cliodynamics "from Clio, the muse of history, and dynamics, the study of temporally varying processes and the search for causal mechanisms. Let history continue to focus on the particular. Cliodynamics, meanwhile, will develop unifying theories and test them with data generated by history, archaeology and specialized disciplines such as numismatics (the study of ancient coins)."

This sounds a lot like Hari Seldon and psychohistory to me! Very ambitious, but most interesting. I have always liked books about patterns in history, dating back to Paul Kennedy's "The Rise and Fall of the Great Powers". And bringing analytical tools to it makes a lot of sense of course, and fits my interests.

Anyway, go check Peter Turchin's blog; it is full of interesting posts. In the meantime, I am going to look at his book on Secular Cycles.