Saturday, December 6, 2014

Metric system in the US

Only three countries in the world do not use the metric system: Liberia, Myanmar, and the United States.

As Vox’s Susannah Locke wrote, "The measuring system that the United States uses right now isn’t really a system at all. It’s a hodgepodge of various units that often seem to have no logical relationship to one another — units collected throughout our history here and there, bit by bit. Twelve inches in a foot, three feet in a yard, 1,760 yards in a mile." That’s why the rest of the world uses the metric system, where "all you need to do is multiply or divide by some factor of ten. 10 millimeters in a centimeter, 100 centimeters in a meter, 1,000 meters in a kilometer. Water freezes at 0°C and boils at 100°C."
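To see the contrast in practice, here is a tiny Python sketch (my own illustration, not from the Vox piece): converting a mile to inches means memorizing a different factor at each step, while converting a kilometer to millimeters is just a matter of shifting the decimal point.

```python
# Illustration of why base-ten units are easier to work with (my example, not Vox's).

# US customary: miles -> yards -> feet -> inches, a different factor at each step
miles = 1
inches = miles * 1760 * 3 * 12   # 1,760 yd/mi, 3 ft/yd, 12 in/ft -> 63,360 inches

# Metric: kilometers -> meters -> millimeters, powers of ten all the way down
kilometers = 1
millimeters = kilometers * 1000 * 1000   # 1,000 m/km, 1,000 mm/m -> 1,000,000 mm

print(inches)        # 63360
print(millimeters)   # 1000000
```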


But, as some people still remember (hello, Michel T!), there was a movement in the 1970s to move to the metric system in the US as well. This is documented in the recent book "Whatever Happened to the Metric System?", reviewed by the New York Times below.

WHATEVER HAPPENED TO THE METRIC SYSTEM?
How America Kept Its Feet
By John Bemelmans Marciano
Illustrated. 310 pp. Bloomsbury. $26.

In the 1970s, children across America were learning the metric system at school, gas stations were charging by the liter, freeway signs in some states gave distances in kilometers, and American metrication seemed all but inevitable. But Dean Krakel, director of the National Cowboy Hall of Fame in Oklahoma, saw things differently: “Metric is definitely Communist,” he solemnly said. “One monetary system, one language, one weight and measurement system, one world — all Communist.” Bob Greene, syndicated columnist and founder of the WAM! (We Ain’t Metric) organization, agreed. It was all an Arab plot “with some Frenchies and Limeys thrown in,” he wrote.
Krakel and Greene might sound to us like forerunners of the Tea Party, but in the 1970s meter-bashing was not limited to right-wing conservatives. Stewart Brand, publisher of the Whole Earth Catalog, advised that the proper response to the meter was to “bitch, boycott and foment,” and New York’s cultural elite danced at the anti-metric “Foot Ball.” Assailed from both right and left, the United States Metric Board gave up the fight and died a quiet death in 1982.
In his entertaining and enormously informative new book, “Whatever Happened to the Metric System?,” John Bemelmans Marciano tells the story of the rise and fall of metric America. With a keen ear for anecdotes and a sharp eye for human motivations, Marciano brings to life the fight over the meter, its champions and its enemies. The 1970s bookend his narrative, but the reader soon finds the struggle lasted not a decade but centuries. And in what was to me the book’s greatest revelation, the meter — that alleged vehicle of international Communism — turns out to be American through and through.
The father of American metrication was none other than Thomas Jefferson, who in the 1780s turned his attention to replacing the menagerie of doubloons, pistoles and Spanish dollars then in use in the states. Jefferson proposed minting a new dollar, but whereas the European coins were divided into halves, eighths, sixteenths, etc., the American coin would be divided into tenths, hundredths and thousandths. When Jefferson’s plan was approved by Congress, the United States became the first country to adopt the decimal system for its currency.
That money is related to measurement might seem counterintuitive today. But as Marciano points out, until very recently the value of coins was ultimately dependent on their weight in gold or silver, which means the divisions of a currency imply a division of weight. And so, when Jefferson arrived in Paris as a diplomat in 1784, he joined forces with French luminaries in promoting a complete reform of weights and measures. Their opportunity came only a few years later, when at the height of the French Revolution its leaders cast away all traditional measures and replaced them with the new meter, kilogram and liter. Jefferson, who had returned home in 1789, was convinced the new system would be promptly adopted in America.
It didn’t turn out that way. As France descended into terror and war, the metric system became entangled in a worldwide struggle over its legacy. To its supporters it stood for reason and democracy; to its detractors, godlessness and the guillotine. It was not until the aftermath of World War II, when new global institutions were established and a host of new nations adopted the meter, that its place as the near-universal measure was secured.
In America, however, repeated efforts at metrication, from Jefferson to Jimmy Carter, were scuttled by a formidable combination of hostility and indifference. According to Marciano, the debate is now over, since the digital revolution has made conversion instantaneous and a change of system pointless. Still, as his book beautifully shows, clashes over the meter were more often about ideology than utility. And so, as long as the struggle continues over reason and faith, universalism and tradition, I wouldn’t count the meter out.


Serendipity in research

Fascinating reading in The New Yorker on how geologists discovered that a meteorite had caused a massive extinction: basically, they used iridium as a simple marker for the time it took a layer of clay to be deposited, and they found so much iridium in that clay that they hypothesized that a large meteorite had collided with Earth at that time. Serendipity indeed!



All the way into the nineteen-sixties, paleontologists continued to give talks with titles like “The Incompleteness of the Fossil Record.” And this view might have persisted even longer had it not been for a remarkable, largely inadvertent discovery made in the following decade.
In the mid-nineteen-seventies, Walter Alvarez, a geologist at the Lamont Doherty Earth Observatory, in New York, was studying the earth’s polarity. It had recently been learned that the orientation of the planet’s magnetic field reverses, so that every so often, in effect, south becomes north and then vice versa. Alvarez and some colleagues had found that a certain formation of pinkish limestone in Italy, known as the scaglia rossa, recorded these occasional reversals. The limestone also contained the fossilized remains of millions of tiny sea creatures called foraminifera. In the course of several trips to Italy, Alvarez became interested in a thin layer of clay in the limestone that seemed to have been laid down around the end of the Cretaceous. Below the layer, certain species of foraminifera—or forams, for short—were preserved. In the clay layer there were no forams. Above the layer, the earlier species disappeared and new forams appeared. Having been taught the uniformitarian view, Alvarez wasn’t sure what to make of what he was seeing, because the change, he later recalled, certainly “looked very abrupt.”
Alvarez decided to try to find out how long it had taken for the clay layer to be deposited. In 1977, he took a post at the University of California at Berkeley, where his father, the Nobel Prize-winning physicist Luis Alvarez, was also teaching. The older Alvarez suggested using the element iridium to answer the question.
Iridium is extremely rare on the surface of the earth, but more plentiful in meteorites, which, in the form of microscopic grains of cosmic dust, are constantly raining down on the planet. The Alvarezes reasoned that, if the clay layer had taken a significant amount of time to deposit, it would contain detectable levels of iridium, and if it had been deposited in a short time it wouldn’t. They enlisted two other scientists, Frank Asaro and Helen Michel, to run the tests, and gave them samples of the clay. Nine months later, they got a phone call. There was something seriously wrong. Much too much iridium was showing up in the samples. Walter Alvarez flew to Denmark to take samples of another layer of exposed clay from the end of the Cretaceous. When they were tested, these samples, too, were way out of line.

The Alvarez hypothesis, as it became known, was that everything—the clay layer from the scaglia rossa, the clay from Denmark, the spike in iridium, the shift in the fossils—could be explained by a single event. In 1980, the Alvarezes and their colleagues proposed that a six-mile-wide asteroid had slammed into the earth, killing off not only the forams but the dinosaurs and all the other organisms that went extinct at the end of the Cretaceous. “I can remember working very hard to make that 1980 paper just as solid as it could possibly be,” Walter Alvarez recalled recently. Nevertheless, the idea was greeted with incredulity.
(...)
Today, it’s generally accepted that the asteroid that plowed into the Yucatán led, in very short order, to a mass extinction, but scientists are still uncertain exactly how the process unfolded. One theory holds that the impact raised a cloud of dust that blocked the sun, preventing photosynthesis and causing widespread starvation. According to another theory, the impact kicked up a plume of vaporized rock travelling with so much force that it broke through the atmosphere. The particles in the plume then recondensed, generating, as they fell back to earth, enough thermal energy to, in effect, broil the surface of the planet.
Whatever the mechanism, the Alvarezes’ discovery wreaked havoc with the uniformitarian idea of extinction. The fossil record, it turned out, was marked by discontinuities because the history of life was marked by discontinuities.

How turning frogs into pregnancy tests is now killing them worldwide

Frogs all over the world are disappearing fast. The culprit is a fungus. According to this New Yorker article, here is how the fungus spread:


Chytrid fungi are older even than amphibians—the first species evolved more than six hundred million years ago—and even more widespread. In a manner of speaking, they can be found—they are microscopic—just about everywhere, from the tops of trees to deep underground. Generally, chytrid fungi feed off dead plants; there are also species that live on algae, species that live on roots, and species that live in the guts of cows, where they help break down cellulose. Until two pathologists, Don Nichols and Allan Pessier, identified a weird microorganism growing on dead frogs from the National Zoo, chytrids had never been known to attack vertebrates. Indeed, the new chytrid was so unusual that an entire genus had to be created to accommodate it. It was named Batrachochytrium dendrobatidis (batrachos is Greek for “frog”), or Bd for short.


Rick Speare is an Australian pathologist who identified Bd right around the same time that the National Zoo team did. From the pattern of decline, Speare suspected that Bd had been spread by an amphibian that had been moved around the globe. One of the few species that met this condition was Xenopus laevis, commonly known as the African clawed frog. In the early nineteen-thirties, a British zoologist named Lancelot Hogben discovered that female Xenopus laevis, when injected with certain types of human hormones, laid eggs. His discovery became the basis for a new kind of pregnancy test and, starting in the late nineteen-thirties, thousands of African clawed frogs were exported out of Cape Town. In the nineteen-forties and fifties, it was not uncommon for obstetricians to keep tanks full of the frogs in their offices.

To test his hypothesis, Speare began collecting samples from live African clawed frogs and also from specimens preserved in museums. He found that specimens dating back to the nineteen-thirties were indeed already carrying the fungus. He also found that live African clawed frogs were widely infected with Bd, but seemed to suffer no ill effects from it. In 2004, he co-authored an influential paper that argued that the transmission route for the fungus began in southern Africa and ran through clinics and hospitals around the world.
“Let’s say people were raising African clawed frogs in aquariums, and they just popped the water out,” Speare told me. “In most cases when they did that, no frogs got infected, but then, on that hundredth time, one local frog might have been infected. Or people might have said, ‘I’m sick of this frog, I’m going to let it go.’ And certainly there are populations of African clawed frogs established in a number of countries around the world, to illustrate that that actually did occur.”

Saturday, November 22, 2014

Maps and charts

I love maps and charts. You will find 22 surprising ones here.

Here are some of my favorites:

Africa is REALLY big:

Compare Canada and Continental Europe:




Colombia’s Data-Driven Fight Against Crime

From the New York Times


"Colombia has always been a violent country, but after the 1948 assassination of Jorge Eliécer Gaitán, a politician who threatened to break the oligarchy’s stranglehold, hundreds of thousands of people were killed over the next five years. The killings were due to scorched-earth battles between the two major political parties — both parties of the upper classes whose major difference was their names. Violence acquired capital letters and became known as La Violencia.

(...)

The dark ages returned with the rise of cocaine in the late 1970s. By the late 1980s and early 1990s, the leading cause of death in several of Colombia’s major cities was homicide. The most deadly and infamous was Medellín, but this was also true in Cali.

In 1983, there were 23 homicides per 100,000 people in Cali. Ten years later the rate had reached 120, and some counts put it slightly higher. That’s more than twice as high as the homicide rate ever reached in Detroit, the most dangerous city in any developed country, which peaked in 1993 at 57.6.

But crime in Cali changed after Rodrigo Guerrero became mayor.
What Guerrero did to make Cali safer was remarkable because it worked, and because of the novelty of his strategy. Before becoming mayor, Guerrero was not a politician, but a Harvard-trained epidemiologist who was president of the Universidad del Valle in Cali. He set out to prevent murder the way a doctor prevents disease.
(...)
When Guerrero became mayor in 1992, the conventional wisdom was that the vast bulk of Cali’s murders stemmed from disputes over cocaine trafficking — at the time, the Cali Cartel was overtaking the Medellín Cartel in control of the cocaine trade.
But Guerrero didn’t assume, he measured. The police, courts and every other institution that counted murders all came up with different figures. Guerrero had weekly meetings with these groups and academic researchers to find more accurate figures. Then they mapped homicides by time and neighborhood.
That took about a year — and his term was only two and a half years — but he found something important: deaths were concentrated on weekends, especially payday weekends. 
(...)
“Things that happen on the weekend in our country are often associated with alcohol,” Guerrero said. So Cali started to look at alcohol in the blood of victims (few perpetrators were caught) — and found a large percentage of victims had very high levels. “My initial hypothesis was that this was drug trafficking,” he said. “But the traffickers were not going to wait for weekends to resolve their conflicts — and get their victims drunk.”
The astronomical murder rate was related to the cocaine trade, Guerrero concluded — but only indirectly. Cocaine created social disruption and intensified an already-violent culture. 
(...)
Guerrero banned the sale of alcohol after 1 a.m. on weeknights and 2 a.m. on Fridays and Saturdays. (That 2 a.m. is considered an early closing time says a lot about the problem.) As he expected, bar owners — and bar patrons — objected. Guerrero asked bars to try it for three months, but success was obvious nearly instantly. The effects were big enough to overcome the objections.
The other decree banned the carrying of guns — enforced by checkpoints and pat-downs — on payday weekends and holidays. The army, which held a monopoly on the manufacture and sale of guns, fought the law. But again, success was persuasive. Researchers compared gun ban days to similar days with no ban in Cali and in Bogotá, which replicated the program. They found that neighborhoods with the ban saw 14 percent fewer homicides in Cali and 13 percent fewer in Bogotá than neighborhoods without restrictions.
Together, those two decrees cut the homicide rate where they were instituted by 35 percent.
There was more: Since the data showed that a large majority of offenders were under 24, Guerrero instituted a curfew for young people in high crime neighborhoods between 11 p.m. and 5 a.m. on weekends.
(...)
The homicide rate declined. From its peak of 120 in 1994, it declined to 80 in 1997. Homicides also dropped in Bogotá and Medellín, which adopted Cali’s program.
Was Guerrero’s program responsible for the drop? It’s hard to separate the effects of the program from outside factors, such as the economy, Colombia’s guerrilla wars and the drug trade (most of the heads of the Cali Cartel were captured in 1995, which could cut either way on homicide). But the reduction in homicide in these three cities was much greater than in the rest of the country.
It’s difficult to say which parts were responsible. There is strong evidence that the gun and alcohol restrictions worked, but the fact that the overall rate in Cali didn’t start to drop until 1994 indicates that other pieces of the program — which were slower to implement and didn’t yield immediate results — mattered as well.
(...)
Perhaps the showcase of Guerrero’s ideas is not Cali, but Bogotá, which never wavered in its implementation of the program. The city’s homicide rate, which was more than 80 per 100,000 in 1993, is 16.7 per 100,000 today.
The gains did not hold in Cali, however. Mayors after Guerrero dismantled almost all of the program (two left office early due to accusations of corruption or ties to traffickers). The murder rate went back up again.
But Guerrero is once again mayor of Cali, and once again, the homicide rate is dropping. This year it is on track to drop by more than 30 percent, which would put it at under 60 per 100,000."

Thursday, November 13, 2014

Bayesian statistics applied to Ebola mortality rate

According to this article, the Ebola mortality rate varies a lot across countries:

About two-thirds of the 2,387 people who’d contracted the disease before this year’s outbreak died worldwide. The fatality rate among the more than 14,000 people who have been infected with Ebola in West Africa this year is 71 percent. But eight of the nine people who have been treated for the disease in the U.S. have recovered and been released from the hospital. One person has died.


The question addressed in the article is then: how to aggregate this information?

Bayesian statistics seemed like a promising option for estimating U.S. mortality, because it provides a framework for updating prior informed belief (mortality rate in prior outbreaks in the U.S. and elsewhere) with new information (the lower mortality rate in the U.S. in this outbreak).

(...)

Tony O’Hagan, emeritus professor of statistics at the University of Sheffield in the U.K., (...) started with the assumption that the Ebola mortality rate in the U.S. would be 30 percent, about half that in Africa — peppered with a liberal amount of uncertainty because it was essentially an educated guess. Then, once he factored in the eight recoveries in nine U.S. cases, his estimate of the mortality rate was 17 percent.
Waller offered some ideas for how to refine the analysis. What we want is a model that takes into account the individual attributes of each case when estimating the likelihood of death. Those attributes include age and health of the patient, time from first symptoms to start of treatment, training of medical staff and treatments used. What factors led to lower mortality in the U.S., and which can be replicated in the West African countries with climbing caseloads?
Building such a model would require detailed data not just on the nine U.S. patients, but on as many Ebola patients worldwide as possible. That data isn’t always collected and compiled in a usable way, though — especially in an emergency treatment setting. 
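For readers curious about the mechanics, here is a minimal Python sketch of the kind of Bayesian update described above, using a Beta-Binomial model. The particular prior (a Beta distribution with mean 30% and only a handful of pseudo-observations, to encode the "liberal amount of uncertainty") is my own illustrative assumption; the article does not report O'Hagan's actual prior.

```python
# Illustrative Beta-Binomial update for the U.S. Ebola mortality rate.
# The prior parameters below are an assumption: mean 0.30 with ~4 pseudo-observations,
# chosen to mimic "30 percent, peppered with a liberal amount of uncertainty".
# They are NOT the prior O'Hagan actually used.

alpha_prior, beta_prior = 1.2, 2.8      # prior mean = 1.2 / (1.2 + 2.8) = 0.30

deaths, recoveries = 1, 8               # the nine U.S. cases reported in the article

# With a Beta prior and Binomial data, the posterior is Beta(alpha + deaths, beta + recoveries)
alpha_post = alpha_prior + deaths
beta_post = beta_prior + recoveries

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean mortality rate: {posterior_mean:.0%}")   # ~17%
```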

Saturday, October 18, 2014

China's people love capitalism, but hate inequality. That's bad news for the government.

At least, that's the argument in this VOX article, mainly based on the following answers to the 2014 Pew Global Attitudes survey, showing first very broad support for the market economy


but at the same time blaming the government for the inequality the country has experienced:


Out of curiosity, I have looked at the answers to those questions in France. French people are much less convinced than the Chinese about the market system: only 60% agree that most people are better off in a free market economy, while 39% disagree. As for the fraction of people who see inequality as a major challenge, it is much larger in France (60%) than in China (42%)! Finally, compared with the Chinese, the French are less likely to blame their own government's policies (23%) or trade (4%) for inequality, and more likely to blame workers' pay (26%), their educational system (10%), their tax system (19%), or the fact that some work harder than others (17%).


Thursday, October 16, 2014

5 reasons why 2013 was the best year in human history

Read them here.

Here is a quick summary, with graphs:

1. Fewer people are dying young, and more are living longer.


2. Fewer people suffer from extreme poverty, and the world is getting happier.


3. War is becoming rarer and less deadly.

(chart from Steven Pinker)


4. Rates of murder and other violent crimes are in free-fall.

5. There’s less racism, sexism, and other forms of discrimination in the world.


Monday, September 29, 2014

What's going on in Hong Kong right now is a very big deal

Read why in the VOX article here.

Excerpts:

Protest marches and vigils are fairly common in Hong Kong, but what began on Friday and escalated dramatically on Sunday is unprecedented. Mass acts of civil disobedience were met by a shocking and swift police response, which has led to clashes in the streets and popular outrage so great that analysts can only guess at what will happen next.
What's going on in Hong Kong right now is a very big deal, and for reasons that go way beyond just this weekend's protests. Hong Kong's citizens are protesting to keep their promised democratic rights, which they worry — with good reason — could be taken away by the central Chinese government in Beijing. This moment is a sort of standoff between Hong Kong and China over the city's future, a confrontation that they have been building toward for almost 20 years.
You have to remember that this is Hong Kong: an affluent and orderly place that prides itself on its civility and its freedom. Hong Kongers have a bit of a superiority complex when it comes to China, and see themselves as beyond the mainland's authoritarianism and disorder. But there is also deep, deep anxiety that this could change, that Hong Kong could lose its special status, and this week's events have hit on those anxieties to their core.
Hong Kong police try to retake downtown Central (Anthony Kwan/Getty)
So these protests aren't just about Beijing's plan to hand-pick candidates for the 2017 election, they're about whether Hong Kong will remain fundamentally free, an ongoing and open-ended question that will continue for years no matter how these protests resolve.
The other thing you have to understand is that the memory of the 1989 Tiananmen Square massacre, in which the Chinese military mowed down 2,600 peaceful pro-democracy protesters in Beijing and other cities, looms awfully large in Hong Kong. While Hong Kong was unaffected by the massacre (it was under British rule at the time), the city holds an annual vigil in memory of the event, which has been so heavily censored in China itself that many young people have never heard of it.
Hong Kongers feel they have a responsibility to keep memory of Tiananmen for the fellow Chinese who cannot, but they also earnestly fear that it could happen to them. So that is a big part of why Hong Kong's residents are so upset to see their police donning military-like uniforms and firing tear gas this weekend; it feels like an echo, however faint, of 1989's violence.
This crisis comes in the middle of a political division among the citizens of Hong Kong, who embrace their freedoms but also tend to be conservative, over their future as part of China. Some are okay with integrating with the rest of China, or at least accept it as inevitable and don't want to kick up too much of a fuss, while some want to fight for democracy and autonomy. (There are other layers to this debate, such as Chinese nationalism versus Hong Kong exceptionalism; there's also a strong law-and-order constituency.)
In other words, both the pro-democracy protesters and Beijing are hoping to force Hong Kong's public to choose whether or not to accept, at a fundamental level, China's growing control over Hong Kong politics. If the public tacitly accepts Beijing's terms for the 2017 election, it will likely be taken as a green light for more limits on Hong Kong's democracy and autonomy, however subtle those limits end up being. But if Hong Kong residents join the protesters en masse, they will be rejecting not just the 2017 election terms, but the basic terms of Hong Kong's relationship with the central Chinese government.

Tuesday, September 23, 2014

Gerrymandering in Alabama

The New Republic has a very interesting article on the "new racism" and the Civil Rights movement in the South of the US, with a focus on Alabama. The article mentions two instances of gerrymandering.

The first one illustrates very well how Democrats lost their 136-year-old (!) majority in the Alabama legislature by making sure that black legislators were (re)elected, even at the expense of their white colleagues of the same party:

"It was the Democrats themselves who helped Hubbard realize his goal. During the 2001 legislative redistricting process, Joe Reed and other prominent black leaders were eager to further protect black incumbents. They successfully pushed to fill the House’s 27 majority-minority and the Senate’s eight majority-minority districts with even more black voters. In the process, they endangered the seats of white Democrats, who increasingly relied on African Americans to make up for the growing number of whites defecting to the GOP. James Blacksher, a civil rights attorney who advised Democrats on redistricting, is still stunned by the shortsightedness of this plan. It wasn’t so much a gerrymander, he told me, as a “dummymander.”

(...)

The transformation of Alabama politics was nearly instantaneous. Prior to the 2010 election, the Alabama House had 60 Democratic members, 34 of them white and 26 black. Afterward, there were 36 Democrats: ten white, 26 black. Meanwhile, in the Alabama Senate, the number of black Democrats remained seven, while the number of white Democrats fell from 13 to four."

This story nicely complements the one usually told about voters, who (especially in the South) have become aligned more by race than by class, with poor white voters joining their richer white brethren in voting for the Republicans, even at the expense of their economic interests.

But the story of gerrymandering does not end there. It was now the Republicans' turn to enjoy redistricting:

"After targeting white Democrats in 2010, Alabama Republicans then used the redistricting process to cement their supermajority by a tactic Democrats refer to as “bleaching.” The method involves even further increasing the African American percentage of voters in Alabama’s majority- minority districts and, by the same token, further decreasing their share of the vote in majority-white districts.

All it took was a little statistical tinkering. In past rounds of Alabama redistricting, which Democrats controlled, the mapmakers allowed for what’s known as a 5 percent maximum population deviation restriction, meaning that a given district had to fall within 5 percent of the ideal population size (137,000 for Senate districts; 40,000 for House districts). After 2010, however, Republican mapmakers chose to work under a 2 percent maximum population deviation. This was a key distinction. Many of Alabama’s rural districts are underpopulated, and so in order to meet the 2 percent deviation, the mapmakers had to add tens of thousands of voters to them. Since many of those districts are majority-minority, and since Republicans sought to maintain the same number of majority-minority districts, that usually meant taking black voters from districts represented by white Democrats and moving them into districts represented by black ones. Bradley Davidson, the former executive director of the Alabama Democratic Party, says, “The Alabama Republican Party wants it so that, whenever you see a person with a D next to his or her name on TV, that person is black.”"
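To see what that "statistical tinkering" means in numbers, here is a quick sketch of the deviation arithmetic. The 137,000 ideal Senate-district size and the 5%/2% rules come from the article; the 125,000-person rural district is a hypothetical example of mine.

```python
# How a tighter population-deviation rule forces mapmakers to add voters to a district.
# Ideal district size and the 5%/2% rules are from the article;
# the 125,000-person rural district is a hypothetical illustration.

IDEAL = 137_000   # ideal Alabama Senate district population

def shortfall(population, max_deviation_pct):
    """People that must be added for the district to reach the lowest allowed size."""
    floor = IDEAL * (100 - max_deviation_pct) // 100
    return max(0, floor - population)

rural_district = 125_000
print(shortfall(rural_district, 5))   # old 5% rule: 5,150 people must be added
print(shortfall(rural_district, 2))   # new 2% rule: 9,260 people must be added
```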

The result has been striking:

"According to research by David Bositis, in 1994, 99.5 percent of black state legislators in the South served in the majority. By 2010, the percentage had fallen to 50.5. Today, it’s a mere 4.8 percent."


Beyond gerrymandering, the article also provides a nice nutshell history of political developments in the South since the end of the Civil War:

"The Southern historian C. Vann Woodward famously described the civil rights movement as the Second Reconstruction. The First Reconstruction, of course, began at the conclusion of the Civil War and led to the election of hundreds of black politicians across the South. One of those black politicians, a South Carolina legislator named Thomas Miller, later described the era with great pride: “We had built schoolhouses, established charitable institutions, built and maintained the penitentiary system, provided for the education of the deaf and dumb ... rebuilt the bridges and reestablished the ferries. In short, we had reconstructed the State and placed it upon the road to prosperity.”

But in 1877, the Republican Party agreed to withdraw federal troops from the South in exchange for putting its presidential candidate, Rutherford B. Hayes, in the White House, and the period of biracial democratic government came to an end. White “Redeemers,” as they were known, undid all the Reconstruction-era reforms they could. (...) A mere dozen years after it began, the First Reconstruction was over."

Then, in the 1960s, came the Civil Rights movement and the Voting Rights Act:

"Political scientists distinguish between descriptive representation and substantive representation. The former focuses on the number of, say, African Americans who are elected to a legislative body, while the latter focuses on the effect of those African American representatives on the legislation passed by that body. It was easy to see, by the early ’80s, that the Voting Rights Act had successfully achieved descriptive representation for African Americans in the Southern state legislatures. But, as time went on, it began to achieve substantive representation, as well. “There was a thirty-year period in the South, from about 1980 to 2010, where there really was biracial collaboration and cooperation in politics,” says Bositis. “And it was a genuine biracial politicsmore genuine than in some northern states.”"

This blessed period then came to an abrupt end.

Tuesday, September 9, 2014

Economic Freedom Indices

CESifo has an amazing graphical interface here, where you can chart the evolution of various indices of economic freedom (including property rights, rule of law, openness of markets) over time and space.

On most if not all measures, Canada is doing extremely well and France is at best average, often worse. This fits my experience so far in Québec, where I am spending a few months in Montréal... Montréal feels like a North American town (with all the efficiency it implies in terms of public transport, public services, etc) populated by Europeans (with all the warmth in interpersonal relationships). So far, so good!

Monday, August 25, 2014

L'impôt confisqué

I warmly recommend a short book entitled "L'impôt confisqué," by Martin Collet, published by Odile Jacob (Corpus collection). The author, a legal scholar, describes in fewer than 100 pages the evolving practice of the Conseil Constitutionnel, which strikes down French tax legislation more and more often. The book is very clear and well argued. It first shows how the Conseil Constitutionnel broadened its own mission in 1971 by deciding that it had to review laws not only against the text of the 1958 Constitution, but also against various principles set out in the Declaration of the Rights of Man and of the Citizen of August 26, 1789 and in the Preamble of the 1946 Constitution! A fine "interpretive power grab," as the author calls it.

The book presents both the advantages and the drawbacks of this practice. Among the advantages is the assessment of whether a tax measure is coherent with its stated aims. This forces a degree of transparency and sincerity about those aims, and puts a brake on clientelist drift. The book nevertheless highlights a fairly recent drift in the Conseil Constitutionnel's practice: deciding arbitrarily whether a tax is "confiscatory" or not. The concrete examples given seem to establish that a marginal tax rate of 68% is acceptable while a marginal rate of 75% is not. The book's author finds no legal rationale in this, and for my part I see no economic rationale either. Besides, why focus on marginal rather than average rates if the goal is to judge whether a levy is confiscatory?

This brief summary does not do justice to this excellent book, which I recommend to anyone interested in public (and political) economics in France.

Friday, August 1, 2014

Economics of climate change

I am tidying up my stack of interesting newspaper articles before going on vacation (and remaining offline!).

Here are several interesting pieces on the economics of climate change:

A. My colleague (and boss ;-) Christian Gollier points out (in French, but with graphs) that we are still emitting more and more carbon into the atmosphere, because the real progress made in the First World is outweighed by changes in the developing world. Here are the graphs:

1 - Clean-energy consumption is rising... but less than fossil-fuel consumption


2 - The carbon intensity of consumption has not declined since 1999


3 - World coal consumption is hitting record levels


4 - Clean energy is expanding rapidly... but still represents only a small share of consumption


5 - Conclusion: carbon dioxide emissions are still growing



Read more at http://www.atlantico.fr/decryptage/5-graphiques-pour-comprendre-quel-point-monde-echoue-maitriser-consommation-energies-christian-gollier-1629698.html


B. A nice piece showing all the potential downsides of natural gas in the US: although it has a smaller carbon footprint than coal, it often replaces nuclear energy, it is often burned off as a byproduct of oil fracking, it escapes unburned, it reduces the growth of wind energy, etc.

C. How the Northeast of the US has cut emissions and enjoyed growth (no excuse anymore for the other US states not to join their cap-and-trade program then!)

D. How a recent deal between Russia and China to bring natural gas from the former to the latter has played out.

E. The challenges the US faces in reducing its emissions.

F. The climate scientist and campaigner James E. Hansen pressing the climate case for ... nuclear energy!

G. Carbon tax versus cap-and-trade in Australia.

H. The incredibly dumb German energy policy, attaining something close to a Pareto-worsening allocation.

Theory of value

Superb article making three points:

1. How De Beers has managed to corner the market for diamonds,
2. How it has manipulated consumers' preferences for these pieces of carbon,
3. That diamonds are very bad investments.


Diamonds Are Bullshit


American males enter adulthood through a peculiar rite of passage - they spend most of their savings on a shiny piece of rock. They could invest the money in assets that will compound over time and someday provide a nest egg. Instead, they trade that money for a diamond ring, which isn’t much of an asset at all. As soon as you leave the jeweler with a diamond, it loses over 50% of its value. 
Americans exchange diamond rings as part of the engagement process, because in 1938 De Beers decided that they would like us to. Prior to a stunningly successful marketing campaign in 1938, Americans occasionally exchanged engagement rings, but it wasn’t a pervasive occurrence. Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high.
Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But here’s the thing - this obligation only exists because the company that stands to profit from it willed it into existence.  
So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that as a society we got tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense.
The Concept of Intrinsic Value
In finance, there is a concept called intrinsic value. An asset’s value is essentially driven by the (discounted) value of the future cash that asset will generate. For example, when Hertz buys a car, its value is the profit they get from renting it out plus what they get from selling the car at the end of its life (the “terminal value”). For Hertz, a car is an investment. When you buy a car, unless you make money from it somehow, its value corresponds to its resale value. Since a car is a depreciating asset, the amount of value that the car loses over its lifetime is a very real expense you pay.
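To put rough numbers on the idea, here is a small sketch of a discounted-cash-flow calculation. All figures are invented for illustration; they are not real Hertz (or diamond) economics.

```python
# Toy discounted-cash-flow (DCF) calculation of intrinsic value.
# All numbers are made up for illustration; they are not real Hertz or diamond figures.

def intrinsic_value(annual_profits, terminal_value, discount_rate):
    """Sum of discounted yearly cash flows plus the discounted terminal (resale) value."""
    value = sum(cf / (1 + discount_rate) ** (t + 1)
                for t, cf in enumerate(annual_profits))
    value += terminal_value / (1 + discount_rate) ** len(annual_profits)
    return value

# A rental car: $4,000 of rental profit a year for 5 years, then resold for $8,000,
# all discounted at 8% a year.
print(round(intrinsic_value([4000] * 5, 8000, 0.08)))   # roughly 21,400

# A diamond ring generates no cash flow; in this sense its intrinsic value is just
# its (heavily marked-down) resale value.
print(round(intrinsic_value([0] * 5, 600, 0.08)))       # roughly 400
```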
A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue.
Gold and silver are commodities that can be purchased on financial markets. They can appreciate and hold value in times of inflation. You can even hoard gold under your bed and buy gold coins and bullion (albeit at a ~10% premium to market rates). If you want to hoard gold jewelry, however, there is typically a 100-400% retail markup, so that’s probably not a wise investment.
But with that caveat in mind, the market for gold is fairly liquid and gold is fungible - you can trade one large piece of gold for ten small ones, just as you can trade a ten-dollar bill for ten one-dollar bills. These characteristics make it a feasible potential investment.
Diamonds, however, are not an investment. The market for them is not liquid, nor are they fungible.
The first test of a liquid market is whether you can resell a diamond. In a famous piece published by The Atlantic in 1982, Edward Epstein explains why you can’t sell used diamonds for anything but a pittance:
Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The “keystone,” or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. 
Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers.
When you buy a diamond, you buy it at retail, which is a 100% to 200% markup. If you want to resell it, you have to pay less than wholesale to incent a diamond buyer to risk their own capital on the purchase. Given the large markup, this will mean a substantial loss on your part. The same article puts some numbers around the dilemma:
Because of the steep markup on diamonds, individuals who buy retail and in effect sell wholesale often suffer enormous losses. For example, Brod estimates that a half-carat diamond ring, which might cost $2,000 at a retail jewelry store, could be sold for only $600 at Empire.
Some diamonds are perhaps investment grade, but you probably don’t own one, even if you spent a lot.
The appraisers at Empire Diamonds examine thousands of diamonds a month but rarely turn up a diamond of extraordinary quality. Almost all the diamonds they find are slightly flawed, off-color, commercial-grade diamonds. The chief appraiser says, “When most of these diamonds were purchased, American women were concerned with the size of the diamond, not its intrinsic quality.” He points out that the setting frequently conceals flaws, and adds, “The sort of flawless, investment-grade diamond one reads about is almost never found in jewelry.”
As with televisions and mattresses, the diamond classification scheme is extremely complicated. Diamonds are not fungible and can’t be easily exchanged with each other. Diamond professionals use the 4 C’s when classifying and pricing diamonds: carat, color, cut, and clarity. Due to the complexity of these 4 dimensions, it’s hard to make apples-to-apples comparisons between diamonds.
But even when looking at the value of one stone, professionals seem like they’re just making up diamond prices:
In 1977, for example, Jewelers’ Circular Keystone polled a large number of retail dealers and found a difference of over 100 percent in offers for the same quality of investment-grade diamonds.
So let’s be very clear: a diamond is not an investment. You might want one because it looks pretty or because it’s a status symbol to have a “massive rock”, but not because it will store value or appreciate in value.
But among all the pretty, shiny things out there - gold and silver, rubies and emeralds - why do Americans covet diamond engagement rings in the first place?
A Diamond is Forever a Measure of your Manhood
"The reason you haven’t felt it is because it doesn’t exist. What you call love was invented by guys like me, to sell nylons."
Don Draper, Mad Men
We like diamonds because Gerold M. Lauck told us to. Until the mid-20th century, diamond engagement rings were a small and dying industry in America. Nor had the concept really taken hold in Europe. Moreover, with Europe on the verge of war, it didn’t seem like a promising place to invest.
Not surprisingly, the American market for diamond engagement rings began to shrink during the Great Depression. Sales volume declined and the buyers that remained purchased increasingly smaller stones. But the US market for engagement rings was still 75% of De Beers’ sales. If De Beers was going to grow, it had to reverse the trend.
And so, in 1938, De Beers turned to Madison Avenue for help. They hired Gerold Lauck and the N. W. Ayer advertising agency, who commissioned a study with some astute observations. Men were the key to the market:
Since “young men buy over 90% of all engagement rings” it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship.
However, there was a dilemma. Many smart and prosperous women didn’t want diamond engagement rings. They wanted to be different. 
The millions of brides and brides-to-be are subjected to at least two important pressures that work against the diamond engagement ring. Among the more prosperous, there is the sophisticated urge to be different as a means of being smart…. the lower-income groups would like to show more for the money than they can find in the diamond they can afford…
Lauck needed to sell a product that people either did not want or could not afford. His solution would haunt men for generations. He advised that De Beers market diamonds as a status symbol:
“The substantial diamond gift can be made a more widely sought symbol of personal and family success — an expression of socio-economic achievement.”
"Promote the diamond as one material object which can reflect, in a very personal way, a man’s … success in life."
The next time you look at a diamond, consider this. Nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines your self-worth. They created this convention - that unless a man purchases (an intrinsically useless) diamond, his life is a failure - while sitting in a room, racking their brains on how to sell diamonds that no one wanted.
With this insight, they began marketing diamonds as a symbol of status and love:
Movie idols, the paragons of romance for the mass audience, would be given diamonds to use as their symbols of indestructible love. In addition, the agency suggested offering stories and society photographs to selected magazines and newspapers which would reinforce the link between diamonds and romance. Stories would stress the size of diamonds that celebrities presented to their loved ones, and photographs would conspicuously show the glittering stone on the hand of a well-known woman. 
Fashion designers would talk on radio programs about the “trend towards diamonds” that Ayer planned to start. The Ayer plan also envisioned using the British royal family to help foster the romantic allure of diamonds. 
Even the royal family was in on the hoax! The campaign paid immediate dividends. Within 3 years, despite the Great Depression, diamond sales in the US increased 55%! Twenty years later, an entire generation believed that an expensive diamond ring was a necessary step in the marriage process. 
The De Beers marketing machine continued to churn out the hits. They circulated marketing materials suggesting, apropos of nothing, that a man should spend one month’s salary on a diamond ring. It worked so well that De Beers arbitrarily decided to increase the suggestion to two months’ salary. That’s why you think that you need to spend two months’ salary on a ring - because the suppliers of the product said so.
Today, over 80% of women in the US receive diamond rings when they get engaged. The domination is complete.
A History of Market Manipulation
What, you might ask, could top institutionalizing demand for a useless product out of thin air? Monopolizing the supply of diamonds for over a century to make that useless product extremely expensive. You see, diamonds aren’t really even that rare.
Before 1870, diamonds were very rare. They typically ended up in a Maharaja’s crown or a royal necklace. In 1870, enormous deposits of diamonds were discovered in Kimberley, South Africa. As diamonds flooded the market, the financiers of the mines realized they were making their own investments worthless. As they mined more and more diamonds, they became less scarce and their price dropped.
The diamond market may have bottomed out were it not for an enterprising individual by the name of Cecil Rhodes. He began buying up mines in order to control the output and keep the price of diamonds high. By 1888, Rhodes controlled the entire South African diamond supply, and in turn, essentially the entire world supply. One of the companies he acquired was eponymously named after its founders, the De Beers brothers.
Building a diamond monopoly isn’t easy work. It requires a balance of ruthlessly punishing and cooperating with competitors, as well as a very long term view. For example, in 1902, prospectors discovered a massive mine in South Africa that contained as many diamonds as all of De Beers’ mines combined. The owners initially refused to join the De Beers cartel, joining three years later after new owner Ernest Oppenheimer recognized that a competitive market for diamonds would be disastrous for the industry:
Common sense tells us that the only way to increase the value of diamonds is to make them scarce, that is to reduce production.
Here’s how De Beers has controlled the diamond supply chain for most of the last century. De Beers owns most of the diamond mines. For mines that they don’t own, they have historically bought out all the diamonds, intimidating or co-opting any that think of resisting their monopoly. They then transfer all the diamonds over to the Central Selling Organization (CSO), which they own. 
The CSO sorts through the diamonds, puts them in boxes and presents them to the 250 partners that they sell to. The price of the diamonds and quantity of diamonds are non-negotiable - it’s take it or leave it. Refuse your boxes and you’re out of the diamond industry.
For most of the 20th century, this system has controlled 90% of the diamond trade and been solely responsible for the inflated price of diamonds. However, as Oppenheimer took over leadership at De Beers, he keenly assessed the primary operational risk that the company faced:
Our only risk is the sudden discovery of new mines, which human nature will work recklessly to the detriment of us all.
Because diamonds are “valuable”, there will always be the risk of entrepreneurs finding new sources of diamonds. Controlling the discoverers of new mines, however, often actually meant working with communists. In 1957, the Soviet Union discovered a massive deposit of diamonds in Siberia. Though the diamonds were a bit on the smallish side, De Beers still had to swoop in and buy all of them from the Soviets, lest they risk the supply being unleashed on the world market.
Later, in Australia, a large supply of colored diamonds was discovered. When the mine refused to join the syndicate, De Beers retaliated by unloading massive amounts of colored diamonds that were similar to the Australian ones to drive down their price. Similarly, in the 1970s, some Israeli members of the CSO started stockpiling the diamonds they were allocated rather than reselling them. This made it difficult for De Beers to control the market price and would eventually cause a deflation in diamond prices when the hoarders released their stockpile. Eventually, these offending members were banned from the CSO, essentially shutting them out from the diamond business.
In 2000, De Beers announced that they were relinquishing their monopoly on the diamond business. They even settled a US antitrust lawsuit related to price-fixing of industrial diamonds to the tune of $10 million (How generous! What is that, the price of one investment banker’s engagement ring?).
Today, De Beers’ hold on the industry supply chain is less strong. And yet, prices continue to rise as new deposits haven’t been found recently and demand for diamonds is increasing in India and China. For now, it’s less necessary that the company monopolize the supply chain, because its lie that a diamond is a proxy for a man’s worth in life has infected the rest of the world.
Conclusion
“I didn’t get a bathroom door that looks like a wall by being bad at business”
Jack Donaghy, 30 Rock
We covet diamonds in America for a simple reason: the company that stands to profit from diamond sales decided that we should. De Beers’ marketing campaign single-handedly made diamond rings the measure of one’s success in America. Despite diamonds’ complete lack of inherent value, the company manufactured an image of them as a status symbol. And to keep the price of diamonds high, despite the abundance of new diamond finds, De Beers executed the most effective monopoly of the 20th century. Okay, we get it, De Beers, you guys are really good at business!
The purpose of this post was to point out that diamond engagement rings are a lie - they’re an invention of Madison Avenue and De Beers. This post has completely glossed over the sheer amount of human suffering that we’ve caused by believing this lie: conflict diamonds funding wars, supporting apartheid for decades with our money, and pillaging the earth to find shiny carbon. And while we’re on the subject, why is it that women need to be asked and presented with a ring in order to get married? Why can’t they ask and do the presenting?
Diamonds are not actually scarce, make a terrible investment, and are purely valuable as a status symbol.
Diamonds, to put it delicately, are bullshit.
This post was written by Rohin Dhar. He has a very patient wife.