Sunday, March 17, 2013

Price vs. quantity instruments to regulate CO2 emissions

We teach students (1) that price regulation (through a corrective tax) and quantity regulation (with tradable permits) are two equivalent ways to reach the socially optimal level of pollution under certainty, and (2) that under uncertainty the two instruments are no longer equivalent. When the marginal damage from pollution is fairly constant over a large range of emissions, it is socially much better to use a price instrument (a tax) than a quantity instrument. The canonical example is ... global warming and the taxation of CO2 emissions: see Gruber (2013), Figure 5.10.
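This is the standard Weitzman "prices vs. quantities" argument. As a rough illustration (a minimal Python simulation with made-up parameters, not taken from Gruber or from the article below), a tax set equal to a flat marginal damage lets abatement adjust to cost shocks, while a cap fixed in advance cannot:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (invented for the sketch)
b = 30.0          # constant marginal damage avoided per ton abated
c, k = 5.0, 2.0   # marginal abatement cost: MC(q) = c + theta + k*q
sigma = 10.0      # std. dev. of the cost shock theta, unknown to the regulator

theta = rng.normal(0.0, sigma, size=100_000)   # realized cost shocks

def welfare(q, th):
    # social welfare = avoided damage minus abatement cost
    return b * q - ((c + th) * q + 0.5 * k * q ** 2)

# Price instrument: tax t = b; firms abate until MC = t, so abatement adapts to theta.
q_tax = np.clip((b - c - theta) / k, 0.0, None)

# Quantity instrument: cap fixed ex ante at the expected optimum, cannot adapt.
q_cap = (b - c) / k

print("Expected welfare, tax:", welfare(q_tax, theta).mean())
print("Expected welfare, cap:", welfare(q_cap, theta).mean())
# With flat marginal damage, the tax does at least as well as the cap for every
# realization of the shock, because abatement can respond to realized costs.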

This textbook analysis is apparently way too sophisticated for European policy makers, who have opted for a quantity instrument, with the obvious consequences reported below:


European Parliament Approves Plan to Bolster Carbon Trading

LONDON — Lawmakers in Brussels moved on Tuesday to shore up the sagging market for carbon emissions permits, a central component of the European Union’s efforts to reduce air pollution.
Prices of carbon allowances, which let companies emit greenhouse gases, fell last month to as low as 2.80 euros, or about $3.75, a metric ton, compared with 9 euros a ton a year ago and 30 euros a ton in 2008. To reduce the supply of permits and drive up the price, the environmental committee of the European Parliament voted to allow the European Commission to reduce the number of allowances to be auctioned over the next three years.
After the committee’s vote, prices fell to about 4.60 euros a ton, from a close of 5.13 euros on Monday. But the panel’s vote had been expected, and the plan still needs approval from the full European Parliament and the governments of the 27 member states.
“It is really the first step in a long, long process,” said Kash Burchett, an analyst at the energy research firm IHS. The committee’s vote — 38 to 25, with two abstentions — is “a lifeline for the carbon market and for emissions trading as a policy tool for curbing emissions,” said Stig Schjoelset, head of carbon analysis at Thomson Reuters Point Carbon, a market research firm in Oslo.
If the vote had gone the other way, Mr. Schjoelset said, the Emissions Trading System would have been “more or less dead.”
The European Union introduced the system in 2005 in an effort to force utilities and manufacturers to reduce their carbon emissions. Under the system, companies are allocated a certain number of permits, each allowing them to emit one metric ton of carbon dioxide each year. If emissions exceed the level allowed by the permits, the companies must buy additional permits. Noncompliance risks heavy fines.
The total number of permits is scheduled to be reduced over time, forcing a corresponding reduction in emissions. The European Union is on track to meet its goal of reducing emissions in 2020 to 80 percent of 1990 levels, but that is mainly because the recession has reduced industrial activity and energy use. As a result, companies have a surplus of permits on hand, which depresses their price.
The plan approved Tuesday would take 900 million tons of carbon credits that are now scheduled to be auctioned from 2013 to 2015 and “backload” them so they are auctioned in 2019 and 2020. That will put a dent in the surplus of carbon credits, which is estimated at two billion tons.
It is widely thought that the European Commission has handed out too many credits. In 2012, for example, ArcelorMittal, the Luxembourg-based steel maker, sold 21.8 million tons of credits — about one quarter of the number it received from the commission — for $220 million. The company said it spent the proceeds on energy-saving investments.
Advocates say that carbon pricing, if properly managed, is the most efficient way to lower emissions. By putting a hefty price on carbon, the system lets investment decisions drive emissions reductions rather than having governments dictate investment in particular clean energy sources like solar or wind.
But industrialists and analysts say that single-digit prices for carbon permits do not provide sufficient incentive for companies to switch to cleaner fuels and energy-efficient technology.
Mr. Schjoelset said a price of 30 to 40 euros a ton was needed to encourage electricity producers to switch from coal to natural gas, a cleaner fuel.
© 2013 The New York Times Company.


A statistician's perspective on research into the dangers of GMOs

A hard-hitting piece by a statistician on the studies that purport to demonstrate the (non-)dangerousness of GMOs.

I would add two remarks:
- I find it hard to understand how scientific journals can publish articles that draw conclusions resting on such "methodological weaknesses", especially in fields as sensitive as GMOs. I can hardly imagine "serious" economics journals publishing articles with such flaws...
- none of this is very reassuring about the quality of the research on the harmlessness or dangerousness of GMOs...




There's something wrong here!

Marc Lavielle


Research Director, Inria Saclay (web page)

"The NK603 affair"

Tuesday afternoon, September 18: a phone call from a journalist at a major daily newspaper: "A new two-year study by G.E. Séralini shows effects of the NK603 maize. Le Nouvel Obs is running a feature on the subject tomorrow. We are going to publish a piece as well. What do you think of it? Oh, and by the way... I had to sign a confidentiality agreement, so you cannot read Séralini's article before its publication tomorrow at 3 p.m."

Er... a bit as if I were asked what I think of Mireille Mathieu's next record without being allowed to listen to it...

Although I sit on the Scientific Committee of the Haut Conseil des Biotechnologies (as a statistician), I had to resign myself to doing like ordinary mortals (you know, everyone who is not a journalist at a major weekly or daily) and wait for the embargo to be lifted, set for Wednesday at 3 p.m.

In the meantime, the "buzz" had begun: we learned from the press that it had been demonstrated, indisputably and definitively, that all GMOs are poisons (even at low doses, dixit Guillaume Malaurie of Le Nouvel Obs)...

Well... if the newspapers say so... it must be true!

Once Séralini's article landed in my inbox at 3:01 p.m., all that remained was to read it so I could explain to my colleagues at the HCB why we were all going to die in atrocious suffering if we had had the misfortune of eating this GMO (even at low doses, I insist, dixit Guillaume Malaurie of Le Nouvel Obs).

Well now, that's odd... there's something wrong here... I'm going right back to it!

The conclusions of the study

A sample of 200 rats, made up of 100 males and 100 females, was randomized into 20 groups of 10 same-sex rats: for each sex, this gives 1 control group and 9 experimental groups (9 diets based on NK603, treated or not with Roundup, plus Roundup administered in liquid form). The study lasted 2 years, during which several analyses were carried out:
  • A mortality analysis,
  • A study of anatomical pathologies,
  • An analysis of biochemical parameters.

The body of the article is essentially limited to a description of the results obtained in this study. The remarks concerning this part of the article bear on the choice of which observed differences to highlight. Indeed, such a descriptive analysis would call for no particular remarks if the authors had confined themselves to objectively describing what they observed in the various groups of 10 rats (mortality curves, anatomical pathologies, ...). Unfortunately, they sometimes tend to select carefully which comparisons to present. One can thus read in §3.1: "Before this period, 30% control males (three in total) and 20% females (only two) died spontaneously, while up to 50% males and 70% females died in some groups on diets containing the GM maize (Fig. 1)." But while some experimental groups of males do indeed show a mortality rate of 50% (5 dead rats) at 600 days, the experimental groups of males that received the highest doses of NK603 and/or Roundup show mortality rates of only 10% or 20% (1 or 2 dead rats). Why was this difference not described?
And why show only photos of rats from the experimental groups? Are the tumors of the rats in the control groups not similar? Here again, as with the mortality curves, a partial (and biased) presentation of the results does not reflect the experiment as it was actually conducted.
The content of the article becomes frankly questionable when the authors leave the purely descriptive domain of the observations and seek to explain the results obtained and to generalize them. One can thus read in the conclusion:
  • The results of the study presented here clearly demonstrate that lower levels of complete agricultural glyphosate herbicide formulations, at concentrations well below officially set safety limits, induce severe hormone-dependent mammary, hepatic and kidney disturbances.
  • Altogether, the significant biochemical disturbances and physiological failures documented in this work confirm the pathological effects of these GMO and R treatments in both sexes, with different amplitudes.
Such assertions, worded in this way and leaving no room for doubt, absolutely must be rigorously justified and validated. Yet it is utterly impossible to conclude definitively that NK603 is toxic on the basis of such limited data.
Remember: we are operating in an uncertain environment!
The fact that only 2 of the 10 female rats in the control group had died by the end of the study, versus 6 in the 22% GMO group, does not allow one to conclude that a female rat's risk of dying within 2 years is 3 times higher if she is fed a diet containing 22% NK603.
The role of inferential statistics is precisely to assess the uncertainties and the probabilities of being wrong when concluding that effects are present or absent. It is regrettable that the authors completely neglected this aspect of statistics, while allowing themselves unjustified over-interpretations of their experimental results.
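To illustrate the point (a minimal Python sketch, not part of Lavielle's article), a standard exact test applied to the 2-out-of-10 versus 6-out-of-10 mortality figures quoted above shows that such a difference is entirely compatible with sampling fluctuations:

from scipy.stats import fisher_exact

# 2x2 table built from the figures quoted above:
# rows = (control group, 22% NK603 group), columns = (died, survived)
table = [[2, 8],
         [6, 4]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p-value = {p_value:.3f}")
# With only 10 rats per group, the p-value is well above 0.05, so this
# difference alone cannot be distinguished from pure sampling variability.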

Following the authors' approach (which consists of directly generalizing what is observed on a small sample to the whole population), why not take up the difference observed between the males fed 33% NK603 and the control group and conclude that a high dose of NK603 reduces mortality in males? (All of this is ironic, of course... nobody would dare question the fact that this difference is due only to sampling fluctuations... just like the other observed differences...)

The protocol and the statistical tools used suffer from serious gaps and methodological weaknesses that completely call into question the conclusions put forward by the authors. A rigorous statistical analysis of the results obtained in this study reveals

  • no significant difference in mortality between the control and experimental groups,
  • no significant difference in the biochemical parameters.

Other studies, other conclusions

But the press also has surprises in store for us... In February 2009, for instance, one could read:
[front-page image]
Or, more recently:
[front-page image]
This front page, too, is based on a scientific publication, by Snell et al. (published... once again in the same journal, Food and Chemical Toxicology). That article reviews 24 studies on the subject and concludes:
- The studies reviewed present evidence to show that GM plants are nutritionally equivalent to their non-GM counterparts and can be safely used in food and feed.

But here again, the conclusion as formulated by the article's authors goes well beyond what the studies allow one to say. Recall that many of these studies involve groups of only 10 animals (sometimes even 5 or 3). The authors can be reproached in the same way as Séralini: systematically drawing such definitive conclusions on the basis of such limited information makes no sense! Moreover, the tests implemented in these studies are statistical comparison tests, which in no way allow one to conclude that there is a total absence of risk or to establish biological equivalence. The inferential-statistics tool theoretically suited to this question is the statistical equivalence test.
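For illustration only (a hedged sketch with simulated numbers, not data from any of the studies discussed here), this is what a two-one-sided-tests (TOST) equivalence test looks like, and why it is so demanding with groups of 10 animals:

import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(1)

# Simulated biochemical measurements for two groups of 10 animals
# (purely illustrative numbers).
control = rng.normal(loc=100.0, scale=15.0, size=10)
gm_diet = rng.normal(loc=102.0, scale=15.0, size=10)

# Equivalence (TOST) test: can we claim the two means differ by less than
# +/- 10 units, a margin that would itself need a biological justification?
p_equiv, lower, upper = ttost_ind(control, gm_diet, low=-10.0, upp=10.0)
print(f"TOST p-value for equivalence: {p_equiv:.3f}")
# With n = 10 per group and this much variability, the test typically cannot
# establish equivalence: the absence of a significant difference is not
# evidence of equivalence.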
Risk assessment follows a so-called "substantial equivalence" strategy, based on comparing various characteristics of the GM plant and its non-transgenic counterpart.
Such an assessment rests on an analysis of experimental data. Statistics therefore plays an essential role in this analytical framework, but its decision-making role remains limited. Indeed, statistical arguments alone cannot establish that a GM plant is harmless or dangerous.
A statistical test is used to assess the risk of being wrong when making a decision. A statistical comparison test, for instance, evaluates the probability of wrongly concluding that a difference exists. The conclusions that can be drawn from such a test are limited, for several reasons (see the power-analysis sketch after this list):
  • A biologically significant difference may not be statistically significant if the available data are insufficient. A power analysis is therefore essential to assess what effect size can be detected with a given sample size,
  • A statistically significant difference is not necessarily biologically significant. A comparison test seeks to detect differences, whatever their size. With a sufficiently large sample size, even a tiny difference will almost always be detected and therefore be statistically significant.
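As a concrete sketch of the first bullet point (the effect size and thresholds below are conventional values chosen only for illustration):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a "large" standardized effect (Cohen's d = 0.8)
# with two groups of 10 animals and a 5% two-sided significance level.
power_n10 = analysis.power(effect_size=0.8, nobs1=10, alpha=0.05)

# Sample size per group needed to detect the same effect with 80% power.
n_needed = analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05)

print(f"power with 10 animals per group: {power_n10:.2f}")
print(f"animals per group for 80% power: {n_needed:.0f}")
# With only 10 animals per group the power is well below the usual 80% target,
# so even fairly large effects can easily go undetected.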
One of the statistician's roles is to prevent shortcuts from being taken without due care. Thus,
  • Séralini's study concludes that GMOs are toxic because it takes a rather surprising shortcut (one that nobody had yet dared to take...):
Observed effect => Statistically significant effect => Biologically significant effect
=> DANGER!
  • the applicants and many other studies conclude that GMOs are harmless by cheerfully taking shortcuts that are very well-trodden (especially by our biologist colleagues...):
Statistically non-significant effect => Biologically non-significant effect => GMO/non-GMO equivalence
=> NO DANGER!
The role of agencies and bodies such as EFSA, ANSES and the HCB is to flag the use of such shortcuts and to provide recommendations for better use of statistical tools.

Thus, new EFSA guidelines recommend implementing new statistical procedures such as power analysis.

ANSES has also published Recommendations for implementing the statistical analysis of data from 90-day sub-chronic rat toxicity studies submitted in support of applications for marketing authorization of GMOs.

These new guidelines will not settle the debate once and for all, but they should make it possible, in the future, to reach a consensus on the conclusions that can be drawn from a toxicity study.

The attitude of certain media outlets

[front-page image]

This front page of Le Nouvel Obs is emblematic. It illustrates very well newspapers' ever-growing need to publish, as quickly as possible, the most media-friendly and marketable stories. One can understand that... but to treat GMOs like the topless photos of the princess of England... pfff... One can still assume that the journalist who put this feature together has solid scientific knowledge (I genuinely believe so). Fine, but is his statistical culture so limited that he does not know that a study based on groups of 10 rats inevitably has limits? That the level of uncertainty is terribly high in such a context?

[headline image]

What reasoning could he possibly have followed to go from a study on groups of 10 rats, concerning a single GMO, to the conclusion that all GMOs are poisons (implicitly, for humans)?

[headline image]

... low dose? ... turns out to be? ... highly toxic? ... often lethal? On what information is he basing himself to string all these assertions together in a single sentence?

Drawing such conclusions on such fragile grounds, without having taken the trouble to look closely at the study's results, is surreal and utterly irresponsible!

I am thinking of all my fellow scientists, and more particularly the statisticians, researchers and teachers who work tirelessly to publish articles promoting good statistical practice and who patiently pass on to their students the culture of reasoning under uncertainty. What can they possibly think today? What message can we convey when such untruths can be hammered home on the front page of a major weekly without the slightest check?

Moreover, there were precedents that should have alerted these journalists: several earlier works by G.E. Séralini had already been criticized by the scientific community. Why, then, not take the time to check the validity of the study's conclusions before publishing this feature?

The article published in Food and Chemical Toxicology only went online on Wednesday, September 19, at 3 p.m.... Le Nouvel Obs was on newsstands the very next day! Some journalists were able to receive the article well before it went online, on condition that they sign a confidentiality agreement... and therefore not circulate it before the fateful release date. Such practices are unacceptable when everyone knows perfectly well that such studies absolutely require independent re-analysis (see, on this subject, the posts by Sylvestre Huet and Pascal Lapointe).

What will be the consequences of such media hype? Whistle-blowers conflated with bomb hoaxers... genuine scandal conflated with a politically and media-driven operation staged from scratch... science journalism conflated with sensationalist journalism... the general interest conflated with personal interest...

One thing is certain as far as I am concerned... from now on I will know what to make of Le Nouvel Obs front pages!

The credibility of scientists

We face a paradoxical situation in which everyone recognizes expertise as essential for assessing the health risks associated with GMOs, yet scientists have enormous difficulty making themselves heard... and being listened to!

There are several things in this story that are truly striking for a scientist:

1) Any purely scientific comment is systematically treated as the taking of a position!
I have experienced this myself on several occasions:

I have always been critical and uncompromising toward any study that claims to demonstrate a total absence of health risks associated with GMOs (Le Monde, Les Echos). Indeed, the statistical methodology used generally does not allow such definitive conclusions. For many people, taking such "public" stances made me an anti-GMO activist! In the same way, I have always been critical and uncompromising toward any study that claims to demonstrate the existence of a risk on the basis of incorrect arguments (Inf'OGM). Speaking out openly in that direction then turned me into a pro-GMO advocate! Well, no... like the vast majority of my colleagues, I am not manipulated by any anti-GMO movement or by any lobby, and I am not funded by any biotechnology company. Just as one can say, in complete independence, that no, 2 and 2 do not make 5, one must be able to state that no, the experiment carried out does not support the conclusions drawn by the authors of the article.

2) A well-argued scientific opinion carries hardly more weight than an unfounded comment!
First of all, we cannot go on indefinitely granting the same scientific credit to every study on the pretext that it was published in an international peer-reviewed journal. The entire scientific community knows perfectly well that the 2 or 3 referees in charge of evaluating a submitted article are no more competent than anyone else, and that it is common for articles to be published even though they contain imprecisions, gaps, or even errors. A published article is not a collection of truths carved in stone. A particularly telling illustration can be found in the same issue of the same journal, where an article by Zhu et al. claims to demonstrate the safety of a glyphosate-tolerant GM maize. Its abstract states: "These results indicated that the GM glyphosate-tolerant maize was as safe and nutritious as conventional maize". These two articles went through the same review process and were both accepted... Everyone can be satisfied, since one can, as one pleases:
  • assert that GMOs are poisons, thanks to Séralini's article,
  • assert that GMOs are harmless, thanks to Zhu's article.

This example illustrates well the fact that the scientific credibility of an article can and must be continually re-examined. It is obviously the role of bodies such as the Scientific Committee of the HCB to issue a scientific opinion on the scientific content of such studies. But it is also the role of every scientist to cast a critical eye on a study, even a published one. When there is such a controversy, the point for the scientific community is not to hand out good and bad marks, or to declare one side right and the other wrong, but to exercise "authority", that is, to restate what current scientific knowledge does and does not allow one to say and write.

In the particular case at hand, the very numerous methodological weaknesses described above are indisputable. No statistician can reasonably refute them! No statistician can rigorously justify the conclusions found in this article. But no matter... some media outlets and some politicians clearly do not burden themselves with such considerations and keep tirelessly repeating the same untruths.

Several of G.E. Séralini's previous articles received wide media coverage (without reaching the level of this NK603 article) but were also widely criticized by the scientific community for their lack of rigor and their methodological weaknesses.

A short memory: an article by G.E. Séralini published in 2007 claimed to demonstrate the toxicity of MON863 maize.
[headline image]

A re-analysis of that study was carried out and clearly showed that the article contained several errors (confusing fixed and random effects in the analysis of the weight curves, improper handling of multiple statistical testing). These errors completely undermine the article's conclusions, which in fact demonstrate no sign of toxicity (I am not thereby claiming that GMOs are not toxic: I am merely saying that the study in question in no way supports a conclusion of toxicity). This process of re-analysis is entirely healthy and desirable: one might welcome it and think that, since the scientific content of this article has been shown to be wrong, it should under no circumstances be cited again to demonstrate possible toxicity of MON863. Well, no... some outlets, such as Le Nouvel Obs, dug this study up again 5 years later without the slightest qualm...

[headline image]

The same story started again 2 years later, this time with a new article claiming to demonstrate the toxicity of 3 GM maize varieties. Same scenario: newspapers seized on the study, not hesitating to speak of "proof" and "demonstration" (!!!)

[headline image]

A counter-assessment promptly refuted the conclusions of that study (again for methodological reasons...)

Despite this track record, the new G.E. Séralini study released in 2012 was taken at face value by most media outlets and relayed without the slightest check, without the slightest verification...

[headline image]

No doubt about it... there's something wrong here!


To cite this article: Marc Lavielle, « Y'a quelque chose qui cloche là-dedans ! » ("There's something wrong here!") — Images des Mathématiques, CNRS, 2012.

Poverty rates among children and retirees in the US

Another example showing that the best use of $15 a month is to buy a subscription to The New York Times: a most interesting discussion of the different ways of measuring poverty, including their very different consequences for the relative poverty rates of the young and the old.

Oh, and I had never heard this quote: “Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital.”


by THOMAS B. EDSALL, opinionator.blogs.nytimes.com
March 13th 2013 11:12 PM
Tom Edsall on politics inside and outside of Washington.

There are three ways of defining poverty in America: the official Census Bureau method, which uses a set of income thresholds that vary by family size and composition; an experimental income-based method called the Supplemental Poverty Measure that factors in government programs designed to help people with low incomes; and a consumption-based method that measures what households actually spend.

By defining poverty according to different criteria, these three methods capture surprisingly different populations of men, women and children. In a perfect world, these three methods would all tell us to do the same thing to alleviate poverty, but it’s not like that. Each method suggests a different approach toward how our government should direct its poverty-fighting resources.

According to the two income-based methods of calculation, poverty is increasing; according to the consumption-based method, it is decreasing. Confusingly, I am afraid, both the official method and the consumption method of defining poverty suggest that we should shift benefits away from the elderly and increase programs serving poor children and their families, but the Supplemental Poverty Measure, which is also income-based, does exactly the opposite.

Needless to say, these three methods and their distinct outcomes have led to substantial disagreement among policy experts and social scientists. The lack of definition in our definition of poverty is part of the problem; it helps to answer the question of how the richest country in the history of the world could have so many people living in a state of deprivation.


Let’s go over this a bit. Start with the two alternative measures of poverty based on income. The official definition was established in 1963 by the Kennedy Administration and uses as a point of reference the average dollar value of all the food needed for a week, times three. Income is calculated on a pre-tax basis including earnings, unemployment benefits, Social Security, disability, welfare, pensions, alimony and child support. The poverty threshold is set at the point at which a family would have to spend more than a third of its income on food.
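In code form, the logic of the official (food-budget-times-three) threshold is simply this; the dollar figures below are placeholders, not actual Census values:

# Sketch of the official threshold logic; the food budget is a made-up number.
weekly_food_budget = 150.0                       # hypothetical weekly food cost for a family
poverty_threshold = weekly_food_budget * 52 * 3  # food assumed to be one third of the budget

family_income = 25_000.0   # hypothetical pre-tax income
officially_poor = family_income < poverty_threshold
print(f"threshold = ${poverty_threshold:,.0f}, officially poor: {officially_poor}")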

The second income-based method of calculating poverty, the Supplemental Poverty Measure, is also published by the Census. It was first released in 2011. The S.P.M. adds together cash income, tax credits (in particular, the Earned Income Tax Credit, the benefit most important to the working poor), plus the value of in-kind benefits used to pay for food  (food stamps), clothing, shelter and utilities, and then subtracts taxes paid, work expenses (including child care), out-of-pocket medical costs and child support paid to another household.

The differences in the results of these two income-based measures are readily apparent in Fig. 1, a chart published by the Census. The bar on the right represents the S.P.M., and the bar on the left represents the official method.


[Fig. 1: chart from the U.S. Census Bureau comparing poverty rates under the official measure and the S.P.M.]

The poverty rate for children, under the official measure, is 22.3 percent; under the S.P.M. it is only 18.1 percent. The rate of poverty for those 65 and older is 8.7 percent under the official measure, but it nearly doubles to 15.1 percent under the S.P.M.

If the S.P.M. were adopted as the official measure used by government agencies to define poverty, millions of poor children would either lose, or face reductions in, benefits from means-tested programs, while millions of those over the age of 65 would qualify for government assistance.

On Nov. 4, 2011, The Times reported the findings of its own study of poverty calculations. When deploying measures almost identical to those used in the S.P.M., the study produced very similar results, showing higher levels of poverty among the elderly than the official measure and lower levels of poverty for poor children and households led by single mothers. The following example from the Times involved a retired city employee in Charlotte, N.C.:

Such is the case for John William Springs, 69, a retired city worker in Charlotte who gets nearly $12,000 a year in Social Security and disability checks. That leaves him about $1,300 above the poverty threshold for a single adult his age — officially not poor. Then again, Mr. Springs had a heart attack last summer and struggles with lung disease. Factor in the $2,500 a year that he estimates he spends on medicine, and Mr. Springs crosses the statistical line into poverty.

The Times also found that the S.P.M. approach reduced poverty rates among many of the non-elderly. Take the case of Angelique Melton, a divorced mother with two children, who lost her job in 2009:


Struggling to pay the rent and keep the family adequately fed, she took the only job she could find: a part-time position at Wal-Mart that paid less than half her former salary. With an annual income of about $7,500 — well below the poverty line of $17,400 for a family of three — Ms. Melton was officially poor. Unofficially she was not.
After trying to stretch her shrunken income, Ms. Melton signed up for $3,600 a year in food stamps and received $1,800 in nutritional supplements from the Women, Infants and Children program. And her small salary qualified her for large tax credits, which arrive in the form of an annual check — in her case for about $4,000. Along with housing aid, those subsidies gave her an annual income of nearly $18,800 — no one's idea of rich, but by the new count not poor.
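The two Times examples above follow the same S.P.M.-style arithmetic: add cash income, tax credits and in-kind benefits, subtract out-of-pocket costs. A minimal sketch using the figures quoted in the article (Ms. Melton's housing aid is not itemized, so it is left out here):

def spm_resources(cash_income, tax_credits=0.0, in_kind=0.0, out_of_pocket=0.0):
    # S.P.M.-style resources: income plus credits and in-kind benefits,
    # minus out-of-pocket expenses such as medical costs.
    return cash_income + tax_credits + in_kind - out_of_pocket

# Mr. Springs: ~$12,000 in Social Security and disability, ~$2,500 a year on medicine.
print(spm_resources(12_000, out_of_pocket=2_500))   # falls below his threshold

# Ms. Melton: $7,500 in wages, ~$4,000 in tax credits, $3,600 in food stamps, $1,800 from WIC.
print(spm_resources(7_500, tax_credits=4_000, in_kind=3_600 + 1_800))   # plus housing aid -> ~$18,800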


The consumption method of measuring poverty — which was the subject, to some extent, of a column I wrote on Jan. 30 about the so-called hidden prosperity of the poor — finds a substantial decline in the overall rate of poverty, especially among the elderly.

The consumption-based method, which was disparaged in an email to the Times by Shawn Fremstad of the Center for Economic and Policy Research as “not ready for prime time,” is vigorously defended by Bruce D. Meyer and James X. Sullivan, economists at the University of Chicago and Notre Dame respectively. They argue in a series of papers that both the official and supplemental methods overestimate the level of poverty among those 65-plus for two reasons: “because older Americans are more likely (than other age groups) to be spending out of savings and using assets (like homes and cars) that they own” and because the two income-based measures of calculating poverty overstate the rate of inflation.

By Meyer and Sullivan’s computations, consumption poverty among those 65 and older has fallen by 83 percent since 1980 to just 3.2 percent in 2010.

Sullivan, in an email to The Times, suggested that 

there is something in this paper for everyone (on both the left and right) to like, but others might say there’s something for everyone to dislike. The right likes being able to say that poverty is declining.

He added that poverty scholars who have worked on development of the S.P.M. “have been very cold to our argument that a consumption-based measure of poverty is clearly better than one based on income.”

This heap of contradictions demonstrates the wisdom of the warning from Aaron Levenstein, a business professor at Baruch College who died in 1986: “Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital.”

The battle lines have formed. The claim that the poverty rate among the elderly is low is grist for the mill for those arguing along the lines of Harry Holzer and Isabel Sawhill in a March 8 Washington Post op-ed headlined “Payments to elders are harming our future.”

Holzer, a professor of public policy at Georgetown, and Sawhill, a senior fellow at the Brookings Institution, write:

Social Security and Medicare alone cost the federal government about $1.3 trillion last year, accounting for more than 37 percent of federal spending; they are slated, along with interest on the debt, to absorb virtually all currently projected federal revenue within the next several decades.

They go on to ask:

For how long will we continue to sacrifice investments in our nation's children and youth, as well as its future productivity, to spend more and more on the aged?

Arrayed against Sawhill and Holzer and others who share their views are numerous experts, including David Betson, a professor of public policy and economics at Notre Dame. Betson argued in a phone interview that “the game here is the elderly.” Measures

ignoring the family’s cost of medical care not only understate the incidence of poverty for all groups but greatly understate the poverty rates of the demographic group that has the largest average out of pocket medical expenses, the elderly.

James Firman, president and C.E.O. of the National Council on Aging, told me that the official poverty measure is “grossly inadequate” because “it does not account for the 20 to 40 percent of total income that the elderly must pay out of pocket for health care, thus underestimating the poverty level among those 65 and older.”

Alicia H. Munnell, director of the Boston College Center for Retirement Research, warned that many are convinced the elderly have it relatively easy because the official and most publicized poverty measure “does not take into account their high medical costs.”

According to Munnell, credit card debt is rising faster among the elderly than other groups because of demands to pay growing medical fees and premiums just when higher and higher percentages of those reaching retirement age do not have defined benefit pension plans to provide support. Social Security, according to Munnell, which provides the average retired worker $15,168.36 a year, “plays a bigger and bigger role” as pensions diminish.

All three methods of measurement may undercount the poor. Kathryn Edin, a professor of public policy and management at Harvard’s Kennedy School, said in a phone interview and in a series of emails that a major problem with all three attempts to measure poverty

is that the poverty level has no real empirical basis — it is not a good measure of how much it takes to survive nor is it a relative measure meant to reflect what is required for social inclusion in the society. The poverty level is most certainly too low. Most people can’t actually live on incomes that hover around the poverty threshold.

An even sharper criticism of poverty measures, generally, comes from Shawn Fremstad of the C.E.P.R.:

There is broad recognition that the current poverty line ($21,756 for a family of four in 2009) falls far below the amount of income needed to “make ends meet” at a basic level. When established in the early 1960s, the poverty line was equal to nearly 50 percent of median income. Because it has only been adjusted for inflation since then, and not for increases in mainstream living standards, the poverty line has fallen to just under 30 percent of median income. As a result, to be counted as officially “poor,” you have to be much poorer today, compared to a typical family, than you would have in the 1960s.

One of the most deeply informed analyses of this issue comes from Pat Ruggles, a senior fellow at the independent research group NORC at the University of Chicago. She shared her personal thoughts in an email to the Times:

Arguing that there is a moral case for spending less on the elderly and more on poor children, to my mind, has very little to do with poverty.  Indeed, society may want to invest more in poor children because they will be around longer and will ultimately determine our future prosperity.  Alternatively, some people feel that we owe a higher standard of living to the elderly because they supported us and gave us the foundation for our current prosperity.  But neither of these arguments has anything to do with whether specific elderly or families with poor children need more or less.

The cost of caring for the elderly has risen substantially, Ruggles believes,

because we can now treat many things that we couldn’t treat 50 years ago when the poverty measure was first established. Elderly Americans live longer and have a better quality of life for more years than they would have two generations ago. In that sense they are clearly better off.

These new, and often expensive, medical treatments have become necessities of life for the elderly, according to Ruggles, and treating them otherwise will lead to a brutal, income-based rationing system:

Not taking these new necessities — for that is what they have become — into account in measuring and dealing with poverty among the elderly means making a decision that the improvements in technology and medicine that we have seen over the past decades should only be available to those who have sufficient incomes to cover their costs.  This in effect says that low-income elderly who can’t meet their co-pays should be allowed to suffer and to die early from things that are completely preventable.

Throughout the country, often with the active support of state governments, adults of all ages, but especially the elderly, are under mounting pressure to sign cost-saving advance directives, allowing hospitals and doctors to end intensive procedures at various end-of-life stages. Three states now permit physician-assisted suicide. Another potentially controversial approach to cost control has been adopted by the 30 states that have "filial support" or "filial responsibility" laws on the books that make it the responsibility of adult children to care for their indigent parents.

Such statutes were rarely invoked in the past, but they establish grounds for shifting medical care in certain circumstances from the state to families, authorizing nursing homes and other long-term care providers to sue the families of patients unable to pay their bills. Indeed, there has been a spate of attention-grabbing stories about recent cases in which these laws have been reactivated.

In a seemingly unrelated development, the Wall Street Journal recently reported on surging adult hospital admissions to repair heart surgeries performed years earlier on infants born with congenital heart defects. What was arresting was that the article dealt with the phenomenon in large part as a matter of costs and as opportunities for doctors to make money.

Jared O’Leary, a researcher at Brigham and Women’s Hospital and Boston Children’s Hospital, told the Journal that “In 2010, there were more than 100,000 hospital admissions of adults with congenital heart disease, incurring average charges per admission of $43,346, he said, or a total of more than $4 billion.” The story also noted that “late last year, two professional groups approved a new medical subspecialty for adults with congenital heart disease, which is expected to increase the number of doctors who treat such patients.”

In some ways, especially for those who fall demographically on either side of the prime of life, every day seems to be more and more about the money, with competition accelerating among worthy claimants for access to limited resources. Virtual Mentor: The Journal of Ethics of the American Medical Association reported a story on care in hospital neonatal units for premature babies:

it is not unusual for costs to top $1 million for a prolonged stay. Expenditures to preserve life are limited in every society, and, although third-party payers have questioned this level of expenditures, courts have consistently reaffirmed the rights of parents to determine the treatment of their newborns.

In other words, both the beginning and the end of life are becoming increasingly subject to market decisions, cost-benefit analyses, and bottom line considerations that had not been so glaringly explicit in the past.

© 2013 The New York Times Company.

Friday, March 8, 2013

Pros and cons of minimum wage increases


The Business of the Minimum Wage

RAISING the minimum wage, as President Obama proposed in his State of the Union address, tends to be more popular with the general public than with economists.
I don’t believe that’s because economists care less about the plight of the poor — many economists are perfectly nice people who care deeply about poverty and income inequality. Rather, economic analysis raises questions about whether a higher minimum wage will achieve better outcomes for the economy and reduce poverty.
First, what’s the argument for having a minimum wage at all? Many of my students assume that government protection is the only thing ensuring decent wages for most American workers. But basic economics shows that competition between employers for workers can be very effective at preventing businesses from misbehaving. If every other store in town is paying workers $9 an hour, one offering $8 will find it hard to hire anyone — perhaps not when unemployment is high, but certainly in normal times. Robust competition is a powerful force helping to ensure that workers are paid what they contribute to their employers’ bottom lines.
One argument for a minimum wage is that there sometimes isn’t enough competition among employers. In our nation’s history, there have been company towns where one employer truly dominated the local economy. As a result, that employer could affect the going wage for the entire area. In such a situation, a minimum wage can not only make workers better off but can also lead to more efficient levels of production and employment.
But I suspect that few people, including economists, find this argument compelling today. Company towns are largely a thing of the past in this country; even Wal-Mart Stores, the nation’s largest employer, faces substantial competition for workers in most places. And many employers paying the minimum wage are small businesses that clearly face strong competition for workers.
Instead, most arguments for instituting or raising a minimum wage are based on fairness and redistribution. Even if workers are getting a competitive wage, many of us are deeply disturbed that some hard-working families still have very little. Though a desire to help the poor is largely a moral issue, economics can help us think about how successful a higher minimum wage would be at reducing poverty.
An important issue is who benefits. When the minimum wage rises, is income redistributed primarily to poor families, or do many families higher up the income ladder benefit as well?
It is true, as conservative commentators often point out, that some minimum-wage workers are middle-class teenagers or secondary earners in fairly well-off households. But the available data suggest that roughly half the workers likely to be affected by the $9-an-hour level proposed by the president are in families earning less than $40,000 a year. So while raising the minimum wage from the current $7.25 an hour may not be particularly well targeted as an anti-poverty proposal, it’s not badly targeted, either.
A related issue is whether some low-income workers will lose their jobs when businesses have to pay a higher minimum wage. There’s been a tremendous amount of research on this topic, and the bulk of the empirical analysis finds that the overall adverse employment effects are small.
Some evidence suggests that employment doesn't fall much because the higher minimum wage lowers labor turnover, which raises productivity and labor demand. But it's possible that productivity also rises because the higher minimum attracts more efficient workers to the labor pool. If these new workers are typically more affluent — perhaps middle-income spouses or retirees — and end up taking some jobs held by poorer workers, a higher minimum could harm the truly disadvantaged.
Another reason that employment may not fall is that businesses pass along some of the cost of a higher minimum wage to consumers through higher prices. Often, the customers paying those prices — including some of the diners at McDonald’s and the shoppers at Walmart — have very low family incomes. Thus this price effect may harm the very people whom a minimum wage is supposed to help.
It’s precisely because the redistributive effects of a minimum wage are complicated that most economists prefer other ways to help low-income families. For example, the current tax system already subsidizes work by the poor via an earned-income tax credit. A low-income family with earned income gets a payment from the government that supplements its wages. This approach is very well targeted — the subsidy goes only to poor families — and could easily be made more generous.
By raising the reward for working, this tax credit also tends to increase the supply of labor. And that puts downward pressure on wages. As a result, some of the benefits go to businesses, as would be the case with any wage subsidy. Though this mutes some of the direct redistributive value of the program — particularly if there’s no constraining minimum wage — it also tends to increase employment. And a job may ultimately be the most valuable thing for a family struggling to escape poverty.
What about the macroeconomic argument that is sometimes made for raising the minimum wage? Poorer people typically spend a larger fraction of their income than more affluent people. So if an increase in the minimum wage successfully redistributed some income to the poor, it could increase overall consumer spending — which could stimulate employment and output growth.
All of this is true, but the effects would probably be small. The president’s proposal would raise annual income by $3,500 for a full-time minimum-wage worker. A recent analysis found that 13 million workers earn less than $9 an hour. If they were all working full time at the current minimum — and a majority are not — the income increase from the higher minimum wage would be only about $50 billion. Even assuming that all of that higher income was redistributed from the wealthiest families, the difference in spending behavior between low-income and high-income consumers is likely to translate into only about an additional $10 billion to $20 billion in consumer purchases. That’s not much in a $15 trillion economy.
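The back-of-the-envelope arithmetic in the preceding paragraph can be written out explicitly (the gap in spending propensities is an illustrative assumption, not a figure from the column):

workers_affected = 13_000_000       # workers earning less than $9 an hour
annual_raise_full_time = 3_500      # income gain for a full-time minimum-wage worker

# Upper bound on the income shifted, assuming every affected worker is full time.
income_shift = workers_affected * annual_raise_full_time
print(f"income shifted: about ${income_shift / 1e9:.0f} billion")

# Suppose low-income households spend 20 to 40 cents more of each extra dollar
# than the high-income households the money comes from (illustrative range only).
for mpc_gap in (0.2, 0.4):
    extra_spending = income_shift * mpc_gap
    print(f"extra consumption at MPC gap {mpc_gap:.1f}: about ${extra_spending / 1e9:.0f} billion")
# Roughly $10 billion to $20 billion of additional purchases in a ~$15 trillion economy.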
SO where does all of this leave us? The economics of the minimum wage are complicated, and it’s far from obvious what an increase would accomplish. If a higher minimum wage were the only anti-poverty initiative available, I would support it. It helps some low-income workers, and the costs in terms of employment and inefficiency are likely small.
But we could do so much better if we were willing to spend some money. A more generous earned-income tax credit would provide more support for the working poor and would be pro-business at the same time. And pre-kindergarten education, which the president proposes to make universal, has been shown in rigorous studies to strengthen families and reduce poverty and crime. Why settle for half-measures when such truly first-rate policies are well understood and ready to go?
Christina D. Romer is an economics professor at the University of California, Berkeley, and was the chairwoman of President Obama’s Council of Economic Advisers.
© 2013 The New York Times Company.


Thursday, March 7, 2013

US political arithmetic

Another gem by Nate Silver... This time, it compares the Electoral College with the district system used in the House of Representatives (from gerrymandering to the fact that Republican supporters are less geographically concentrated than Democrats). An article full of very interesting and clever remarks...


Did Democrats Get Lucky in the Electoral College?

President Obama won the Electoral College fairly decisively last year despite a margin of just 3.8 percentage points in the national popular vote. In fact, Mr. Obama would probably have won the Electoral College even if the popular vote had slightly favored Mitt Romney. The “tipping-point state” in the election — the one that provided Mr. Obama with his decisive 270th electoral vote — was Colorado, which Mr. Obama won by 5.4 percentage points. If all states had shifted toward Mr. Romney by 5.3 percentage points, Mr. Obama would still have won Colorado and therefore the Electoral College — despite losing the national popular vote by 1.5 points.
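The "tipping-point" arithmetic in that paragraph amounts to applying a uniform swing to every state's margin. A toy sketch in Python (only Colorado's 5.4-point margin is taken from the text; the other blocs are invented aggregates chosen so the 538 electoral votes add up):

# Toy uniform-swing calculation with invented state blocs.
states = {
    "Safely Democratic bloc": (20.0, 191),   # (Obama margin in points, electoral votes)
    "Lean Democratic bloc":   (8.0,   70),
    "Colorado":               (5.4,    9),
    "Other swing states":     (2.0,   20),
    "Republican bloc":        (-10.0, 248),
}

def obama_electoral_votes(swing_toward_romney):
    # A state stays in Obama's column if his margin survives the uniform swing.
    return sum(ev for margin, ev in states.values()
               if margin - swing_toward_romney > 0)

for swing in (0.0, 5.3, 5.5):
    print(f"uniform swing of {swing:.1f} pts -> Obama electoral votes: "
          f"{obama_electoral_votes(swing)}")
# At a 5.3-point swing, Colorado's 5.4-point margin still holds and supplies the
# 270th electoral vote; at 5.5 points it flips and the majority is gone.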
Contrast this Democratic advantage in the Electoral College with the Republican advantage in the House of Representatives. Democrats actually won slightly more votes in the House elections last year (about 59.5 million votes to the G.O.P.’s 58 million). Nevertheless, Republicans maintained a 234-201 majority in the House, losing only eight seats.
Democrats are quick to attribute the Republican advantage in the House to gerrymandering. This is certainly a part of the story. Republicans benefited from having an extremely strong election in 2010, giving them control of the redistricting process in many states. (Although Democrats were no less aggressive about creating gerrymandered districts in states like Illinois.)
However, much or most of the Republican advantage in the House results from geography rather than deliberate attempts to gerrymander districts. Liberals tend to cluster in dense urban centers, creating districts in which Democrats might earn as much as 80 or 90 percent of the vote. In contrast, even the most conservative districts in the country tend not to give more than about 70 or 75 percent of their vote to Republicans. This means that Democrats have more wasted votes in the cities than Republicans do in the countryside, depriving Democrats of votes at the margin in swing districts. Eliminating partisan gerrymandering would reduce the G.O.P.’s advantage in the House but not eliminate it.
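The "wasted votes" mechanism described here is easy to see with a toy example (four made-up districts of 1,000 votes each, not real election data):

# Toy illustration of wasted votes: votes beyond the winning threshold in a won
# district, plus every vote cast in a lost district.
districts = [
    ("urban",    850, 150),   # Democrats pile up lopsided margins in cities
    ("suburb A", 480, 520),
    ("suburb B", 490, 510),
    ("rural",    300, 700),   # even safe Republican seats are less extreme
]

def wasted(dem, rep):
    need = (dem + rep) // 2 + 1          # votes needed to win the district
    if dem > rep:
        return dem - need, rep           # (wasted Democratic, wasted Republican)
    return dem, rep - need

wasted_d = sum(wasted(d, r)[0] for _, d, r in districts)
wasted_r = sum(wasted(d, r)[1] for _, d, r in districts)
seats_d = sum(1 for _, d, r in districts if d > r)

print(f"total votes: D {sum(d for _, d, r in districts)}, R {sum(r for _, d, r in districts)}")
print(f"seats won:   D {seats_d}, R {len(districts) - seats_d}")
print(f"wasted votes: D {wasted_d}, R {wasted_r}")
# Democrats win more votes overall but only one of the four seats, because so
# many of their votes are "wasted" in the lopsided urban district.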
But if this geographic principle holds true for the House, why doesn’t the same apply for the Electoral College?
Actually, it might hold true, if state boundaries were drawn a different way, and the states were required to have equal populations (as Congressional districts are). Neil Freeman, a graphic artist and urban planner, created just such a map in which the nation’s population was divided into 50 states of equal population. Mr. Freeman’s map also sought to keep metro areas within the same state — so, for instance, Kansas City and its suburbs would be entirely within the new state of “Nodaway” rather than divided between Kansas and Missouri.
Nate Cohn, of The New Republic, calculated what would have happened had the Electoral College been contested under Mr. Freeman’s map. He found that Mr. Romney probably would have won, by virtue of narrow victories in the new tipping-point states of Susquehanna (which consists of portions of Pennsylvania, West Virginia and Maryland) and Pocono (formed from rural and suburban portions of present-day Pennsylvania and New York).
We must qualify Mr. Cohn’s answer because the margin would have been so close in these states that the election would have gone to a recount. Nonetheless, the new boundaries would have been enough to shift us from a map in which Democrats had an Electoral College advantage (relative to their share of the popular vote) to one in which it would have considerably helped Mr. Romney.
Mr. Cohn concludes from this that the Democrats’ apparent advantage in the Electoral College is “a product of luck.” If state boundaries were drawn just slightly differently, the Electoral College might harm them rather than help them, he argues.
I’ve seen a couple of objections to Mr. Cohn’s claim, one of which is that Mr. Obama’s strategy was dictated by the Electoral College as currently configured. Had the new states of Susquehanna and Pocono been the tipping-point states, instead of Colorado and Pennsylvania, Mr. Obama would have directed more resources there and might have won them as a result.
This is an intriguing argument, and an important one for thinking about the Electoral College in 2016 and beyond. If Mr. Obama’s apparent advantage in the Electoral College in 2008 and 2012 was the result of superior voter-targeting operations, then Democrats will maintain that advantage as long as they remain ahead in the voter-targeting game, but no longer.
Still, I doubt that this is enough to explain all of the difference between Mr. Freeman’s map and the actual Electoral College. Most empirical research on Mr. Obama’s “ground game” has found that it might have been worth an extra one to three percentage points in the swing states. In other words, Mr. Obama’s turnout operation might be enough to explain why the Electoral College slightly favored him rather than being essentially neutral. However, the inference we might make from Mr. Freeman’s map, and from the distribution of votes in Congressional districts, is that the Electoral College should not merely have been neutral but should actually have favored Republicans by several percentage points because of the concentration of Democratic voters in urban areas.
So why hasn’t the tendency of Democrats to cluster in urban areas harmed them in the Electoral College, as it has in the House of Representatives?
We can gain some insight by comparing the distribution of votes under the actual Electoral College to that which would have resulted under Mr. Freeman’s map. I’ve done that in the chart below. The chart orders the states and the District of Columbia based on what share of the vote Mr. Obama received in each one. (The percentages listed are two-way vote shares, meaning that they exclude votes for third-party candidates.)
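For readers unfamiliar with the term, the two-way share is simply each major-party candidate’s fraction of the combined major-party vote. A minimal sketch, using rounded 2012 national totals (about 65.9 million Obama votes and 60.9 million Romney votes) as illustrative inputs:

def two_way_share(dem_votes, rep_votes):
    # Fraction of the combined major-party vote, ignoring third-party ballots.
    return dem_votes / (dem_votes + rep_votes)

print(round(two_way_share(65_900_000, 60_900_000), 3))   # about 0.520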
There is one technicality to explain in these results. Although Mr. Freeman assigns most of the population of Washington, D.C., to a new state along with portions of Virginia and Maryland, he preserves a small region consisting of the National Mall, major monuments and federal buildings “set off as the seat of the federal government,” as might be required under the Constitution. This remaining part of the District of Columbia would still have three electoral votes despite having a permanent population of only about 33 people. (This is as best I can infer from census data: the area corresponds to census tract 62.02 in the District of Columbia. Presumably, it would still be extremely Democratic, as its population might consist largely of Mr. Obama and his family, and high-level officials in his administration.)
That aside, the key facet of the chart is what happens in the upper-right corner, where the orange line (which represents how electoral votes would be allocated under Mr. Freeman’s system) diverges significantly from the black line (which reflects how they are allocated today). This reflects the results in the new states that are centered around Philadelphia, Washington, Chicago, Los Angeles, San Francisco and New York, all of which would have given Mr. Obama at least 65 percent of their vote. Collectively, these “city-states” would represent about 65 electoral votes in Mr. Freeman’s map. By comparison, the only present-day states to have given Mr. Obama at least 65 percent of their vote were Hawaii and Vermont, which together have just seven electoral votes. The superfluous votes in these “city-states” wind up costing Democrats dearly in swing states like Pocono.
These large cities create much less electoral wastage for Democrats under the current map. Let’s consider each of them individually:
Philadelphia. Mr. Obama’s margin of victory in Pennsylvania (about 300,000 votes) was less than his margin of victory in the city of Philadelphia alone (about 470,000 votes). Mr. Obama also netted about 100,000 votes from the Philadelphia suburbs. If Philadelphia and its suburbs seceded from the rest of Pennsylvania, Mr. Obama would win the city-state of Philadelphia overwhelmingly but would probably lose what remained of Pennsylvania.
Washington. The District of Columbia itself yields some wasted votes for Democrats. (Although it should be noted that the District is overrepresented in the Electoral College: it has roughly one electoral vote per 100,000 voters, versus a national average of about 0.4 electoral votes per 100,000 voters; a quick check of this arithmetic appears after this list.) However, Washington’s suburbs have now also become Democratic enough to swing Virginia to Mr. Obama in the last two elections. Thus, Democrats get considerable leverage out of the Washington metro area under the current Electoral College. Under Mr. Freeman’s map, Democrats would win the city-state centered around Washington overwhelmingly, but the regions just beyond it would mostly go Republican.
Chicago. This is roughly the same case as Philadelphia. Mr. Obama actually lost Illinois outside of Cook County, which consists of Chicago and its immediate suburbs. Thus, Democrats won all 20 electoral votes in Illinois. Had Cook County been separated from the rest of the state, by contrast, Mr. Obama would have won its roughly 10 electoral votes but lost the 10 belonging to the rest of Illinois.
San Francisco and Los Angeles. Mr. Obama won California by about 3 million votes last year. Of this advantage, about 2 million votes came from the San Francisco and Los Angeles metro areas, as Mr. Freeman defines them. California would still be Democratic-leaning without them, but Republicans would have some chance of competing instead of Democrats automatically having 55 electoral votes in their column. The G.O.P. would be further helped if California were broken apart into a total of four or five states, as Republicans could perform well in states centered around San Diego or the Central Valley.
New York. Mr. Obama won New York state by about eight percentage points, excluding votes from New York City itself. Without the five boroughs, therefore, New York state would be a blue-leaning swing state, similar to Michigan, Wisconsin or Minnesota, instead of a safely Democratic one.
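As an aside on the Washington entry above, the representation figures cited there are easy to check. A minimal sketch using approximate 2012 turnout, where the roughly 290,000 District ballots and 129 million national ballots are rounded assumptions:

def ev_per_100k(electoral_votes, ballots_cast):
    # Electoral votes per 100,000 ballots cast.
    return electoral_votes / (ballots_cast / 100_000)

print(round(ev_per_100k(3, 290_000), 2))         # D.C.: about 1.03
print(round(ev_per_100k(538, 129_000_000), 2))   # national average: about 0.42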
In other words, under the current map, the votes in these big cities don’t wind up being redundant. They allow Democrats to win Pennsylvania, Illinois and Virginia when they would otherwise usually lose them. California and New York would still be Democratic-leaning even without San Francisco, Los Angeles, and New York City, but Democrats get to win all their electoral votes whereas some regions would be competitive if they were subdivided. (The aforementioned swing state of Pocono consists partly of upstate New York under Mr. Freeman’s map, for example.)
I would also take issue with Mr. Cohn’s notion that the allocation of the United States into its 50 states should be thought of as a matter of “luck,” as though it reflects one draw from a randomly generated pool of alternatives. Certainly, the boundaries of the states are quirky in some ways: Vermont, for instance, could easily have wound up as part of New York or New Hampshire.
But as books like “How The States Got Their Shapes” make clear, many other states have boundaries that were the result of careful deliberation by Congress. In particular, there was an effort to grant them roughly equal amounts of geographic territory, and to allow them to share access to important natural resources like the Great Lakes. (Most of the exceptions are in states that were brought into the nation whole-hog, like California and Texas, or the 13 original colonies.)
Here’s a thought experiment: if you could play geographer king, and were charged with dividing the United States into 50 political units with the goal of maximizing the nation’s collective economic well-being, what would your map look like? Would it be more like Mr. Freeman’s map, with states divided based on equal populations and urban continuity, or more like our actual map of 50 states, however haphazard it might seem?
I don’t think this question has a simple answer, but there are some things to be said for the status quo.
Mr. Freeman’s map runs the risk of creating some small, urban states that are rich in human capital but lack natural resources, and some gargantuan, rural states that have the opposite problem. Under the actual map, most states have a reasonably good balance of urban and rural areas. The chart below reflects the percentage of voters in each state who live in urban, suburban and rural areas, according to 2008 exit polls. (The exit polls did not contain good data for Alaska and Hawaii, so I had to infer these separately.) Some 33 of the 50 states have somewhere between 20 and 50 percent of their populations in urban areas. Only one state, Nevada, has more than half its population in urban centers (it occupies a large amount of territory, but most of its population is in Las Vegas). Only eight have under 10 percent of their population in urban areas (including New Jersey, which is mostly suburban rather than rural).
This is not to say that the allocation of territory and resources into the states is perfect. From an economic standpoint, it’s hard to justify Delaware being its own state, or California being one state instead of two or more. And the geographic size of a state would have been a better proxy for its economic potential in the early days of the Republic, when the United States was primarily an agrarian nation.
But it also shouldn’t be thought of as merely coincidental that Chicago, for example, happens to be attached to the territory that makes up the rest of Illinois. Seeking to equalize populations across the states would have made it harder for Congress to equalize other types of resources between them. Illinois “needs” to have a larger-than-average population because the alternative would be to create a rich city-state of Chicago (one lacking agricultural or mining resources) and a poor state of Downstate Illinois (one with lots of farmland but no large cities and no access to Lake Michigan).
As a byproduct of Congress’s goal of equalizing geographic resources across the states, most states have reasonably diverse populations and economic interests, and the income distribution across the states is reasonably even. The poorest state in 2009 was Mississippi, which had a median household income of about $35,000, while the wealthiest was New Jersey (about $65,000). This range is narrow when compared with almost any other type of geographic division. More than 90 of the 435 Congressional districts, for instance, fell somewhere outside this range.
As a result, the Electoral College does not convey all that much advantage to rural voters versus urban ones, or wealthy voters versus poorer ones, and therefore does not provide all that much long-term advantage to either party. The Democrats slightly benefited from the Electoral College in 2008 and 2012, but the opposite was true as recently as 2000.
The Electoral College may nevertheless be a flawed system in that some votes count much more than others. This is not intended as an enthusiastic defense of it so much as a warning that attempts to reform it, as opposed to eliminating it entirely (which would be my preference), could wind up exacerbating its flaws.
The best feature of the Electoral College is that it takes advantage of the 50 states. And those states got their shapes not by luck but by design.

© 2013 The New York Times Company.