By Pepe Escobar THE ROVING EYE Asia Times April 26, 2013
This is an abridged version of a lecture delivered this week at the 13th Seminar on Political Solidarity, in memoriam Don Juan Chavez, at the University of Zaragoza, Spain.
How cozy it would be to summon the retro-spirit of Burt Bacharach to define our geopolitical future and start singing, “What the world needs now, is love, sweet love”.
Sorry to scratch the vinyl. We interrupt this lovey-dovey to bring you breaking news. You have been catapulted to the age of the new Hobbesian “hero” – digital and virtual as well as physical.
Casino capitalism – aka turbocharged neoliberalism – is ruthlessly destroying the last vestiges of the welfare state and the egalitarian consensus in the industrialized West, possibly with the odd Scandinavian exception. It has established a “New Normal” consensus, intruding into private lives, dominating the political debate and institutionalizing for good the marketization of life itself – the final act of fierce corporate exploitation of natural resources, land and cheap labor.
Integration, socialization and multiculturalism are being corroded by disintegration, segregation, and widespread de-socialization – a direct consequence of the David Harvey-coined notion of “dis-accumulation” (society devouring its own).
This state of things is what Flemish philosopher and art historian Lieven De Cauter, in his book Entropic Empire, calls “the Mad Max phase of globalization”.
It is a Hobbesian world, a latent global civil war, a war of all against all; the economic haves against the have-nots; intolerant Wahhabis against “apostate” Shi’ites; the children of the Enlightenment against all manner of fundamentalists; the Pentagon militarization of Africa against Chinese mercantilism.
The disintegration and balkanization of Iraq, detonated by the Pentagon’s Shock and Awe 10 years ago, was a sort of prelude for this Brave New Disorder. The neo-con worldview, from 2001 to 2008, advanced the project with its ideology of Let’s Finish Off The State, everywhere; once again Iraq was the best example. But from bombing a sovereign nation back to the Stone Age, the project moved to civil war engineering – as in Libya and, hopefully for the engineers, Syria.
When we have armchair analysts, influential or otherwise, paid by flush foundations – usually in the US but also in Western Europe – pontificating about “chaos and anarchy”, they are just reinforcing a self-fulfilling prophecy. If “chaos and anarchy” turns them on, it’s because they are just reflecting the predominant libidinal economy, from reality TV to all sorts of what De Cauter describes as “psychotic games” – inside a room, inside an octagon, inside an island or virtually inside a digital box.
So welcome to the geopolitics of the young 21st century: an age of non-stop war (virtualized or not), sharp polarization and a pile-up of catastrophes.
After Hegel, Marx and that mediocre functionary of Empire, Fukuyama; but also after brilliant deconstructions by Gianni Vattimo, Baudrillard or Giorgio Agamben, this is what we get.
For Marx the end of history was a classless society. How romantic. Instead, in the second half of the 20th century, capitalism married Western liberal democracy till death do them part. Well, death is now upon them both. The Red Dragon, as in China, has joined the party and come up with a new toy: single-party neoliberalism.
An individualistic, self-indulgent, passive, easily controllable consumer drowned in a warped form of democracy that basically favors insiders – and very wealthy players; how could that be a humanist ideal? Yet the PR was so good that this is what legions in Asia, Africa, the Middle East and South America aspire to. But it’s still not enough for the geo-economic Masters of the Universe.
Thus post-history as the ultimate reality show. And war neoliberalism as its favorite weapon.
Choose your camp

We are now familiar with Giorgio Agamben’s paradigm of the state of emergency – or state of exception. The ultimate example, until the mid-20th century, was the concentration camp. But post-history is more creative.
We have the Muslim-only concentration camp – as in Guantanamo. We have the simulacrum of a concentration camp – as in Palestine, which is virtually walled and under 24/7 surveillance, and where “the law” is dictated by an occupying power. And we have what happened – as a dry run – last week in Boston; the euphemistic “lockdown”, which is a suspension of the law to the benefit of martial law; no freedom of movement, no cell phone network, and if you go to the corner shop to buy a soft drink you may be shot. A whole city in the industrialized North turned into a high-tech concentration camp.
Agamben talked about the state of exception as a top-down excess of sovereignty, and the state of nature – as in Hobbes – as a bottom-up absence of sovereignty. After the Global War on Terror (GWOT) – which, whatever the Pentagon says, is indeed eternal (or The Long War, as defined in 2002, part of the Pentagon doctrine of Full Spectrum Dominance) – we can talk about a merger.
The war on terror, seductively normalized by the Obama administration, was and remains a global state of exception, even though its trappings come and go: the Patriot Act; shadowy executive orders; torture – a recent US bipartisan panel accused all top officials of the George W Bush administration of torture; extraordinary rendition, with which then secular allies of the West such as Libya and Syria collaborated, not to mention Eastern European nations and the usual Arab puppets, Egypt under Mubarak included; and the sprawling apparatus of homeland security.
As for a real concentration camp, once again we don’t need to look further than Guantanamo – which, contrary to Obama’s campaign promise, will remain open indefinitely, as well as some among the vast number of Bush-era CIA “secret” prisons.
In all these cases, whatever happens to social life – suspension, dissolution, balkanization, implosion, a state of emergency – the citizenship (bios) of ordinary people evaporates. But ruling elites – political, economic, financial – don’t care about citizenship. They’re only interested in passive consumers.
Pick your dystopia

The dystopias of the New Global Disorder are all being normalized. We’re familiar with state terrorism – as in the CIA’s “secret” drone war over tribal areas in Pakistan, in Yemen, Somalia and soon in other African latitudes. And we’re also familiar with non-state terrorism, as applied by that nebula that we in the West describe as “al-Qaeda”, with its myriad franchises and copycats.
We have a bunch of hyper-states – such as the US, China and Russia, and the EU as a whole – and myriad infra-states or failed states, some by design (Libya, and Syria is on the way), as well as satellite states, some essential to the Western-controlled system such as the Gulf Counter-Revolution Club (GCC – Gulf Cooperation Council).
It’s always enlightening to look back at how the Pentagon interprets this world. Here we find an “integrating core” opposed to a “non-integrated gap”. The “core” is what matters, in this case North America and most, but not all, of the EU. Sheepish, passive populations, with a consumer elite – the fast, mobile elites of liquid modernity, described by Bauman – and a vast mass of surviving toilers, a great many of them expendable (like the millions of European victims of troika austerity policies who will never find a decent job again).
For the non-integrated gap, it’s Hobbes all the way. In the case of Africa – until virtually yesterday derided as a black hole – there’s an added geopolitical power play; how to counter-attack the extraordinary penetration of Chinese mercantilism over the last decade. The Pentagon’s response is to deploy Africom everywhere; to subdue nations that are too independent, such as Libya; and in the case of the French elite, also on the bandwagon, to try to regain some imperial muscle in Mali, profiting exactly from the implosion and balkanization of Libya.
The look of post-history, its aesthetic ideal, is the city as theme park. Los Angeles may have been the archetype but the best examples are Las Vegas, Dubai and Macao. In the absence of Umberto Eco and Baudrillard, who reveled in the mirror images of simulacra, we may follow master architect Rem Koolhaas – a keen observer of the urban dementia in southern China – to learn what junk space is all about.
Then there’s the security obsession – from cities like London turning into a sprawling version of Bentham’s Panopticon to the pathetic striptease ritual at every airport, not to mention the gated condo or “community”, more like gated atoms, as the emblem of capsular civilization. Guerrilla counter-attacks, though, may be as lethal as Sunni Iraqis fighting the Americans in the “triangle of death” in the mid-2000s. In Sao Paulo, Brazil – the ultimate violent megalopolis – gangs “clone” cars and license plates, fool security at the door of gated condos, drive into the garage, and proceed to systematically rob each apartment on every floor.
You’re history

Conceptually, post-history cuts all corners. The flow of history is degraded as fake. Simulacrum trumps reality. We see history repeating not as tragedy and farce but as a double farce; an overlapping example is jihadis in Syria weaponized just like the former “freedom fighters” of the 1980s anti-Soviet jihad in Afghanistan, conflating with the Western gang in the UN Security Council trying to apply to Syria what they got away with in Libya: regime change.
We also have history repeating itself as cloning; neoliberalism with Chinese characteristics beating the West in its industrialization game – in terms of speed – while at the same time repeating the same mistakes, from the mindless excesses of an acquisition mentality to no respect for the environment.
It goes without saying that post-history buries the Enlightenment – as favoring the emergence of all sorts of fundamentalisms. So it had also to bury international law; from bypassing the UN to launch a war on Iraq in 2003 to using a UN resolution to launch a war on Libya in 2011. And now Britain and France are taking no prisoners trying to bypass the UN or even NATO itself and weaponize the “rebels” in Syria.
So we have a New Medievalism that cannot but fit wealthy neo-theocracy – as in Saudi Arabia and Qatar; because they are Western allies, or puppets, internally they may remain medieval. Superimposed, we have the politics of fear – which essentially rules Fortress America and Fortress Europe; fear of The Other, which can be occasionally Asian but most of the time Islamic.
What we don’t have is a political/philosophical vision of the future. Or a historical political program; political parties are only worried about winning the next election.
What would a post-state system look like? Independent minds don’t trust mammoth, asymmetrical, wobbly blocs like the EU, or the G-20, or even aspiring multipolars such as the BRICS (Brazil, Russia, India, China, South Africa – which still do not represent a real alternative to the Western-controlled system). No one is thinking in terms of a structural mutation of the system. Marx was beyond right on this: what determines history are objective, concrete, palpable processes – some of them very complex – affecting the economic and technological infrastructure.
What is possible to infer is that the real historical subject from now on is technology – as Jean-Francois Lyotard and Paul Virilio were already conceptualizing in the 1980s and 1990s. Technology will keep advancing way beyond the capitalist system. Techno-science is in the driving seat of history. But that also means war.
War and technology are Siamese twins; virtually all technology gets going as military technology. The best example is how the Internet completely changed our lives, with immense geo-economic and political ramifications; Beijing, in a 2010 white paper, may have hailed the Internet as a “crystallization of human wisdom”, but no state filters more information on the Internet than China. Pushing the scenario to a dystopian limit, Google’s Eric Schmidt argues, correctly, that soon, with the flip of a switch, an entire country could disappear from the Internet.
So, essentially, we may forget about a utopian regression to the state of the tribal nomad – as much as we may be fascinated by nomads, be they in Africa or in the Wakhan corridor of Afghanistan, bordering Tajikistan. If we survey the geopolitical landscape from Ground Zero to Boston, the only “models” are declensions of entropy.
Meet the neoliberal Adam

Now for post-history’s favorite weapon: war neoliberalism. The best analysis of these past few years, by far, is to be found in French geostrategist Alain Joxe’s book Les Guerres de L’Empire Global.
Joxe mixes it all up, because it is all interconnected – the euro crisis, the European debt crisis, occupations and wars, the restriction of civil liberties, totally corrupted elites – to unmask the project of Neoliberalism’s Global Empire, which goes way beyond the American Empire.
Financialization’s ultimate goal is the unlimited accumulation of profit – a system where the wealthy get much wealthier and the poor get literally nothing (or, at best, austerity). The real-life Masters of the Universe are a denationalized rentier class – one cannot even call them a noblesse, because their absence of taste and critical sense is mostly appalling, as in purveyors of unabashed bling-bling. What they do benefits corporations rather than the protective functions of states. In this state of things, military adventures become police doctrine. And new information technology – from drones to “special” munitions – can be used against popular movements, not only in the South but also in the North.
Joxe is able to show how a technological revolution led at the same time to the IT management of that goddess, The Market, as well as the robotization of war. So here we have a mix of economic, military and technological mutations, in parallel, leading to an acceleration of decisions that totally pulverize the long span of politics, generating a system incapable of regulating either finance or violence. Between the dictatorship of the “markets” and social democracy, guess who’s winning hands down.
In fact, Slavoj Zizek had already posed the key question, at least in terms of the Decline of the West. The (closet) winner is in fact “capitalism with Asian values” – which, of course, has nothing to do with Asian people and everything to do with the clear and present tendency of contemporary capitalism to limit or even suspend democracy.
French philosopher Jean-Claude Michea takes the political analysis further. He argues that post-modern politics has in fact become a negative art – defining the least bad society possible. That’s how liberalism – which shaped modern Western civilization – became, as neoliberalism, the “politics of the lesser evil”. Well, “lesser evil” for whoever’s in control, of course, and damn the rest.
In another crucial book, Michea comes up with the delightful metaphor of the neoliberal Adam as a new Orpheus, condemned to climb the path of Progress with no authorization to look back.
Not many contemporary thinkers are equipped to thrash Left and Right in equally devastating measure. Michea tells us that both Left and Right have submitted to the original myth of capitalist thinking: this “noir anthropology” that makes Man an egoist by nature. And he asks how the institutionalized Left could have abandoned the ambition of a just, decent society – or how the neoliberal wolf could have wreaked such havoc among the socialist sheep.
Beyond neoliberalism and/or a desire for social democracy, what the reality show tells us is that an internecine global civil war is at hand – the hypothesis I explored in my 2007 book Globalistan. When we mix Washington’s pivoting to Asia; the obsession with regime change in Iran; the Western elites’ fear of the rise of China; the real Arab Spring, which has not even started, driven by young generations who want political participation without being constrained by religious fundamentalism; Muslim resentment against what is perceived as a New Crusade against them; the growth of neo-fascism in Europe; and the advanced pauperization of the Western middle class, it’s hard to think about love.
And still – Burt Bacharach to the rescue – that’s exactly what the world needs now.
Pepe Escobar is the author of Globalistan: How the Globalized World is Dissolving into Liquid War (Nimble Books, 2007), Red Zone Blues: a snapshot of Baghdad during the surge (Nimble Books, 2007), and Obama does Globalistan (Nimble Books, 2009).
He may be reached at email@example.com.
By Steven Brill
Feb. 20, 2013
Corrections Appended: February 26, 2013
1. Routine Care, Unforgettable Bills
When Sean Recchi, a 42-year-old from Lancaster, Ohio, was told last March that he had non-Hodgkin’s lymphoma, his wife Stephanie knew she had to get him to MD Anderson Cancer Center in Houston. Stephanie’s father had been treated there 10 years earlier, and she and her family credited the doctors and nurses at MD Anderson with extending his life by at least eight years.
Because Stephanie and her husband had recently started their own small technology business, they were unable to buy comprehensive health insurance. For $469 a month, or about 20% of their income, they had been able to get only a policy that covered just $2,000 per day of any hospital costs. “We don’t take that kind of discount insurance,” said the woman at MD Anderson when Stephanie called to make an appointment for Sean.
Stephanie was then told by a billing clerk that the estimated cost of Sean’s visit — just to be examined for six days so a treatment plan could be devised — would be $48,900, due in advance. Stephanie got her mother to write her a check. “You do anything you can in a situation like that,” she says. The Recchis flew to Houston, leaving Stephanie’s mother to care for their two teenage children.
About a week later, Stephanie had to ask her mother for $35,000 more so Sean could begin the treatment the doctors had decided was urgent. His condition had worsened rapidly since he had arrived in Houston. He was “sweating and shaking with chills and pains,” Stephanie recalls. “He had a large mass in his chest that was … growing. He was panicked.”
Nonetheless, Sean was held for about 90 minutes in a reception area, she says, because the hospital could not confirm that the check had cleared. Sean was allowed to see the doctor only after he advanced MD Anderson $7,500 from his credit card. The hospital says there was nothing unusual about how Sean was kept waiting. According to MD Anderson communications manager Julie Penne, “Asking for advance payment for services is a common, if unfortunate, situation that confronts hospitals all over the United States.”
The total cost, in advance, for Sean to get his treatment plan and initial doses of chemotherapy was $83,900.
The first of the 344 lines printed out across eight pages of his hospital bill — filled with indecipherable numerical codes and acronyms — seemed innocuous. But it set the tone for all that followed. It read, “1 ACETAMINOPHE TABS 325 MG.” The charge was only $1.50, but it was for a generic version of a Tylenol pill. You can buy 100 of them on Amazon for $1.49 even without a hospital’s purchasing power.
(In-Depth Video: The Exorbitant Prices of Health Care)
Dozens of midpriced items were embedded with similarly aggressive markups, like $283.00 for a “CHEST, PA AND LAT 71020.” That’s a simple chest X-ray, for which MD Anderson is routinely paid $20.44 when it treats a patient on Medicare, the government health care program for the elderly.
Every time a nurse drew blood, a “ROUTINE VENIPUNCTURE” charge of $36.00 appeared, accompanied by charges of $23 to $78 for each of a dozen or more lab analyses performed on the blood sample. In all, the charges for blood and other lab tests done on Recchi amounted to more than $15,000. Had Recchi been old enough for Medicare, MD Anderson would have been paid a few hundred dollars for all those tests. By law, Medicare’s payments approximate a hospital’s cost of providing a service, including overhead, equipment and salaries.
On the second page of the bill, the markups got bolder. Recchi was charged $13,702 for “1 RITUXIMAB INJ 660 MG.” That’s an injection of 660 mg of a cancer wonder drug called Rituxan. The average price paid by all hospitals for this dose is about $4,000, but MD Anderson probably gets a volume discount that would make its cost $3,000 to $3,500. That means the nonprofit cancer center’s paid-in-advance markup on Recchi’s lifesaving shot would be about 400%.
When I asked MD Anderson to comment on the charges on Recchi’s bill, the cancer center released a written statement that said in part, “The issues related to health care finance are complex for patients, health care providers, payers and government entities alike … MD Anderson’s clinical billing and collection practices are similar to those of other major hospitals and academic medical centers.”
The hospital’s hard-nosed approach pays off. Although it is officially a nonprofit unit of the University of Texas, MD Anderson has revenue that exceeds the cost of the world-class care it provides by so much that its operating profit for the fiscal year 2010, the most recent annual report it filed with the U.S. Department of Health and Human Services, was $531 million. That’s a profit margin of 26% on revenue of $2.05 billion, an astounding result for such a service-intensive enterprise.1
The president of MD Anderson is paid like someone running a prosperous business. Ronald DePinho’s total compensation last year was $1,845,000. That does not count outside earnings derived from a much publicized waiver he received from the university that, according to the Houston Chronicle, allows him to maintain unspecified “financial ties with his three principal pharmaceutical companies.”
DePinho’s salary is nearly two and a half times the $750,000 paid to Francisco Cigarroa, the chancellor of the entire University of Texas system, of which MD Anderson is a part. This pay structure is emblematic of American medical economics and is reflected on campuses across the U.S., where the president of a hospital or hospital system associated with a university — whether it’s Texas, Stanford, Duke or Yale — is invariably paid much more than the person in charge of the university.
I got the idea for this article when I was visiting Rice University last year. As I was leaving the campus, which is just outside the central business district of Houston, I noticed a group of glass skyscrapers about a mile away lighting up the evening sky. The scene looked like Dubai. I was looking at the Texas Medical Center, a nearly 1,300-acre, 280-building complex of hospitals and related medical facilities, of which MD Anderson is the lead brand name. Medicine had obviously become a huge business. In fact, of Houston’s top 10 employers, five are hospitals, including MD Anderson with 19,000 employees; three, led by ExxonMobil with 14,000 employees, are energy companies. How did that happen, I wondered. Where’s all that money coming from? And where is it going? I have spent the past seven months trying to find out by analyzing a variety of bills from hospitals like MD Anderson, doctors, drug companies and every other player in the American health care ecosystem.
When you look behind the bills that Sean Recchi and other patients receive, you see nothing rational — no rhyme or reason — about the costs they faced in a marketplace they enter through no choice of their own. The only constant is the sticker shock for the patients who are asked to pay.
Yet those who work in the health care industry and those who argue over health care policy seem inured to the shock. When we debate health care policy, we seem to jump right to the issue of who should pay the bills, blowing past what should be the first question: Why exactly are the bills so high?
What are the reasons, good or bad, that cancer means a half-million- or million-dollar tab? Why should a trip to the emergency room for chest pains that turn out to be indigestion bring a bill that can exceed the cost of a semester of college? What makes a single dose of even the most wonderful wonder drug cost thousands of dollars? Why does simple lab work done during a few days in a hospital cost more than a car? And what is so different about the medical ecosystem that causes technology advances to drive bills up instead of down?
Recchi’s bill and six others examined line by line for this article offer a closeup window into what happens when powerless buyers — whether they are people like Recchi or big health-insurance companies — meet sellers in what is the ultimate seller’s market.
The result is a uniquely American gold rush for those who provide everything from wonder drugs to canes to high-tech implants to CT scans to hospital bill-coding and collection services. In hundreds of small and midsize cities across the country — from Stamford, Conn., to Marlton, N.J., to Oklahoma City — the American health care market has transformed tax-exempt “nonprofit” hospitals into the towns’ most profitable businesses and largest employers, often presided over by the regions’ most richly compensated executives. And in our largest cities, the system offers lavish paychecks even to midlevel hospital managers, like the 14 administrators at New York City’s Memorial Sloan-Kettering Cancer Center who are paid over $500,000 a year, including six who make over $1 million.
Taken as a whole, these powerful institutions and the bills they churn out dominate the nation’s economy and put demands on taxpayers to a degree unequaled anywhere else on earth. In the U.S., people spend almost 20% of the gross domestic product on health care, compared with about half that in most developed countries. Yet in every measurable way, the results our health care system produces are no better and often worse than the outcomes in those countries.
According to one of a series of exhaustive studies done by the McKinsey & Co. consulting firm, we spend more on health care than the next 10 biggest spenders combined: Japan, Germany, France, China, the U.K., Italy, Canada, Brazil, Spain and Australia. We may be shocked at the $60 billion price tag for cleaning up after Hurricane Sandy. We spent almost that much last week on health care. We spend more every year on artificial knees and hips than what Hollywood collects at the box office. We spend two or three times that much on durable medical devices like canes and wheelchairs, in part because a heavily lobbied Congress forces Medicare to pay 25% to 75% more for this equipment than it would cost at Walmart.
The Bureau of Labor Statistics projects that 10 of the 20 occupations that will grow the fastest in the U.S. by 2020 are related to health care. America’s largest city may be commonly thought of as the world’s financial-services capital, but of New York’s 18 largest private employers, eight are hospitals and four are banks. Employing all those people in the cause of curing the sick is, of course, not anything to be ashamed of. But the drag on our overall economy that comes with taxpayers, employers and consumers spending so much more than is spent in any other country for the same product is unsustainable. Health care is eating away at our economy and our treasury.
The health care industry seems to have the will and the means to keep it that way. According to the Center for Responsive Politics, the pharmaceutical and health-care-product industries, combined with organizations representing doctors, hospitals, nursing homes, health services and HMOs, have spent $5.36 billion since 1998 on lobbying in Washington. That dwarfs the $1.53 billion spent by the defense and aerospace industries and the $1.3 billion spent by oil and gas interests over the same period. That’s right: the health-care-industrial complex spends more than three times what the military-industrial complex spends in Washington.
When you crunch data compiled by McKinsey and other researchers, the big picture looks like this: We’re likely to spend $2.8 trillion this year on health care. That $2.8 trillion is likely to be $750 billion, or 27%, more than we would spend if we spent the same per capita as other developed countries, even after adjusting for the relatively high per capita income in the U.S. vs. those other countries. Of the total $2.8 trillion that will be spent on health care, about $800 billion will be paid by the federal government through the Medicare insurance program for the disabled and those 65 and older and the Medicaid program, which provides care for the poor. That $800 billion, which keeps rising far faster than inflation and the gross domestic product, is what’s driving the federal deficit. The other $2 trillion will be paid mostly by private health-insurance companies and individuals who have no insurance or who will pay some portion of the bills covered by their insurance. This is what’s increasingly burdening businesses that pay for their employees’ health insurance and forcing individuals to pay so much in out-of-pocket expenses.
1. Here and elsewhere I define operating profit as the hospital’s excess of revenue over expenses, plus the amount it lists on its tax return for depreciation of assets — because depreciation is an accounting expense, not a cash expense. John Gunn, chief operating officer of Memorial Sloan-Kettering Cancer Center, calls this the “fairest way” of judging a hospital’s financial performance.
The original version of this article misidentified William Powers Jr., the president of the University of Texas at Austin, as the head of the entire University of Texas system. That is in fact Francisco Cigarroa, the chancellor of the University of Texas system.
Breaking these trillions down into real bills going to real patients cuts through the ideological debate over health care policy. By dissecting the bills that people like Sean Recchi face, we can see exactly how and why we are overspending, where the money is going and how to get it back. We just have to follow the money.
The $21,000 Heartburn Bill
One night last summer at her home near Stamford, Conn., a 64-year-old former sales clerk whom I’ll call Janice S. felt chest pains. She was taken four miles by ambulance to the emergency room at Stamford Hospital, officially a nonprofit institution. After about three hours of tests and some brief encounters with a doctor, she was told she had indigestion and sent home. That was the good news.
The bad news was the bill: $995 for the ambulance ride, $3,000 for the doctors and $17,000 for the hospital — in sum, $21,000 for a false alarm.
Out of work for a year, Janice S. had no insurance. Among the hospital’s charges were three “TROPONIN I” tests for $199.50 each. According to a National Institutes of Health website, a troponin test “measures the levels of certain proteins in the blood” whose release from the heart is a strong indicator of a heart attack. Some labs like to have the test done at intervals, so the fact that Janice S. got three of them is not necessarily an issue. The price is the problem. Stamford Hospital spokesman Scott Orstad told me that the $199.50 figure for the troponin test was taken from what he called the hospital’s chargemaster. The chargemaster, I learned, is every hospital’s internal price list. Decades ago it was a document the size of a phone book; now it’s a massive computer file, thousands of items long, maintained by every hospital.
Stamford Hospital’s chargemaster assigns prices to everything, including Janice S.’s blood tests. It would seem to be an important document. However, I quickly found that although every hospital has a chargemaster, officials treat it as if it were an eccentric uncle living in the attic. Whenever I asked, they deflected all conversation away from it. They even argued that it is irrelevant. I soon found that they have good reason to hope that outsiders pay no attention to the chargemaster or the process that produces it. For there seems to be no process, no rationale, behind the core document that is the basis for hundreds of billions of dollars in health care bills.
Because she was 64, not 65, Janice S. was not on Medicare. But seeing what Medicare would have paid Stamford Hospital for the troponin test if she had been a year older shines a bright light on the role the chargemaster plays in our national medical crisis — and helps us understand the illegitimacy of that $199.50 charge. That’s because Medicare collects troves of data on what every type of treatment, test and other service costs hospitals to deliver. Medicare takes seriously the notion that nonprofit hospitals should be paid for all their costs but should remain nonprofit once those costs are covered. Thus, under the law, Medicare is supposed to reimburse hospitals for any given service, factoring in not only direct costs but also allocated expenses such as overhead, capital expenses, executive salaries, insurance, differences in regional costs of living and even the education of medical students.
It turns out that Medicare would have paid Stamford $13.94 for each troponin test rather than the $199.50 Janice S. was charged.
Janice S. was also charged $157.61 for a CBC — the complete blood count that those of us who are ER aficionados remember George Clooney ordering several times a night. Medicare pays $11.02 for a CBC in Connecticut. Hospital finance people argue vehemently that Medicare doesn’t pay enough and that they lose as much as 10% on an average Medicare patient. But even if the Medicare price should be, say, 10% higher, it’s a long way from $11.02 plus 10% to $157.61.

Yes, every hospital administrator grouses about Medicare’s payment rates — rates that are supervised by a Congress that is heavily lobbied by the American Hospital Association, which spent $1,859,041 on lobbyists in 2012. But an annual expense report that Stamford Hospital is required to file with the federal Department of Health and Human Services offers evidence that Medicare’s rates for the services Janice S. received are on the mark. According to the hospital’s latest filing (covering 2010), its total expenses for laboratory work (like Janice S.’s blood tests) in the 12 months covered by the report were $27.5 million. Its total charges were $293.2 million. That means it charged about 11 times its costs.

As we examine other bills, we’ll see that like Medicare patients, the large portion of hospital patients who have private health insurance also get discounts off the listed chargemaster figures, assuming the hospital and insurance company have negotiated to include the hospital in the insurer’s network of providers that its customers can use. The insurance discounts are not nearly as steep as the Medicare markdowns, which means that even the discounted insurance-company rates fuel profits at these officially nonprofit hospitals. Those profits are further boosted by payments from the tens of millions of patients who, like the unemployed Janice S., have no insurance or whose insurance does not apply because the patient has exceeded the coverage limits.
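The markup arithmetic above can be checked directly. This is a minimal sketch using only the dollar figures reported in the article; the 10% "Medicare underpays" adjustment is the hospitals' own claim, taken at face value for the sake of argument.

```python
# Lab-work figures from Stamford Hospital's 2010 federal filing,
# as reported in the article.
lab_costs = 27.5e6        # total annual expenses for laboratory work
lab_charges = 293.2e6     # total annual charges for that same work

markup = lab_charges / lab_costs
print(round(markup, 1))   # roughly 11x costs

# Even granting the hospitals' claim that Medicare underpays by ~10%,
# a "fair" CBC price would be far below the $157.61 billed:
medicare_cbc = 11.02                  # Medicare's CBC rate in Connecticut
generous_cost = medicare_cbc * 1.10   # Medicare rate plus the claimed 10%
print(round(generous_cost, 2))        # -> 12.12, versus $157.61 billed
```

Even under the hospitals' most favorable assumption, the billed CBC price is more than 12 times the adjusted Medicare rate.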
These patients are asked to pay the chargemaster list prices.
If you are confused by the notion that those least able to pay are the ones singled out to pay the highest rates, welcome to the American medical marketplace.
Pay No Attention to the Chargemaster
No hospital’s chargemaster prices are consistent with those of any other hospital, nor do they seem to be based on anything objective — like cost — that any hospital executive I spoke with was able to explain. “They were set in cement a long time ago and just keep going up almost automatically,” says one hospital chief financial officer with a shrug.
At Stamford Hospital I got the first of many brush-offs when I asked about the chargemaster rates on Janice S.’s bill. “Those are not our real rates,” protested hospital spokesman Orstad when I asked him to make hospital CEO Brian Grissler available to explain Janice S.’s bill, in particular the blood-test charges. “It’s a list we use internally in certain cases, but most people never pay those prices. I doubt that Brian [Grissler] has even seen the list in years. So I’m not sure why you care.”
Orstad also refused to comment on any of the specifics in Janice S.’s bill, including the seemingly inflated charges for all the lab work. “I’ve told you I don’t think a bill like this is relevant,” he explained. “Very few people actually pay those rates.”
But Janice S. was asked to pay them. Moreover, the chargemaster rates are relevant, even for those unlike her who have insurance. Insurers with the most leverage, because they have the most customers to offer a hospital that needs patients, will try to negotiate prices 30% to 50% above the Medicare rates rather than discounts off the sky-high chargemaster rates. But insurers are increasingly losing leverage because hospitals are consolidating by buying doctors’ practices and even rival hospitals. In that situation — in which the insurer needs the hospital more than the hospital needs the insurer — the pricing negotiation will be over discounts that work down from the chargemaster prices rather than up from what Medicare would pay. Getting a 50% or even 60% discount off the chargemaster price of an item that costs $13 and lists for $199.50 is still no bargain. “We hate to negotiate off of the chargemaster, but we have to do it a lot now,” says Edward Wardell, a lawyer for the giant health-insurance provider Aetna Inc.
That so few consumers seem to be aware of the chargemaster demonstrates how well the health care industry has steered the debate from why bills are so high to who should pay them.
The expensive technology deployed on Janice S. was a bigger factor in her bill than the lab tests. An “NM MYO REST/SPEC EJCT MOT MUL” was billed at $7,997.54. That’s a stress test using a radioactive dye that is tracked by an X-ray computed tomography, or CT, scan. Medicare would have paid Stamford $554 for that test.
Janice S. was charged an additional $872.44 just for the dye used in the test. The regular stress test patients are more familiar with, in which arteries are monitored electronically with an electrocardiograph, would have cost far less — $1,200 even at the hospital’s chargemaster price. (Medicare would have paid $96 for it.) And although many doctors view the version using the CT scan as more thorough, others consider it unnecessary in most cases.
According to Jack Lewin, a cardiologist and former CEO of the American College of Cardiology, “It depends on the patient, of course, but in most cases you would start with a standard stress test. We are doing too many of these nuclear tests. It is not being used appropriately … Sometimes a cardiogram is enough, and you don’t even need the simpler test. But it usually makes sense to give the patient the simpler one first and then use nuclear for a closer look if there seem to be problems.”
We don’t know the particulars of Janice S.’s condition, so we cannot know why the doctors who treated her ordered the more expensive test. But the incentives are clear. On the basis of market prices, Stamford probably paid about $250,000 for the CT equipment in its emergency room. It costs little to operate, so the more it can be used and billed, the quicker the hospital recovers its costs and begins profiting from its purchase. In addition, the cardiologist in the emergency room gave Janice S. a separate bill for $600 to read the test results on top of the $342 he charged for examining her.
According to a McKinsey study of the medical marketplace, a typical piece of equipment will pay for itself in one year if it carries out just 10 to 15 procedures a day. That’s a terrific return on capital equipment that has an expected life span of seven to 10 years. And it means that after a year, every scan ordered by a doctor in the Stamford Hospital emergency room would mean pure profit, less maintenance costs, for the hospital. Plus an extra fee for the doctor.
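The payback dynamic described above can be sketched with simple arithmetic. The equipment price, chargemaster charge and Medicare rate are the figures the article reports; the 10-scans-per-day volume is McKinsey's lower bound, and treating the full billed amount as revenue (ignoring operating costs, which the article describes as small) is a simplifying assumption.

```python
# Rough payback sketch using figures reported in the article.
equipment_cost = 250_000      # estimated price of the CT equipment
scans_per_day = 10            # McKinsey's low-end utilization figure
charge_per_scan = 7_997.54    # chargemaster price billed to Janice S.
medicare_rate = 554.00        # what Medicare would have paid per test

# Days needed to recoup the purchase price at each payment level:
days_at_chargemaster = equipment_cost / (scans_per_day * charge_per_scan)
days_at_medicare = equipment_cost / (scans_per_day * medicare_rate)

print(round(days_at_chargemaster, 1))  # about 3 days at list prices
print(round(days_at_medicare))         # about 45 days even at Medicare rates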
Another McKinsey report found that health care providers in the U.S. conduct far more CT tests per capita than those in any other country — 71% more than in Germany, for example, where the government-run health care system offers none of those incentives for overtesting. We also pay a lot more for each test, even when it’s Medicare doing the paying. Medicare reimburses hospitals and clinics an average of four times as much as Germany does for CT scans, according to the data gathered by McKinsey.
Medicare’s reimbursement formulas for these tests are regulated by Congress. So too are restrictions on what Medicare can do to limit the use of CT and magnetic resonance imaging (MRI) scans when they might not be medically necessary. Standing at the ready to make sure Congress keeps Medicare at bay is, among other groups, the American College of Radiology, which on Nov. 14 ran a full-page ad in the Capitol Hill–centric newspaper Politico urging Congress to pass the Diagnostic Imaging Services Access Protection Act. It’s a bill that would block Medicare’s efforts to discourage overtesting by paying doctors less per test when they read multiple scans of the same patient. (In fact, six of Politico’s 12 pages of ads that day were bought by medical interests urging Congress to spend or not cut back on one of their products.)
The costs associated with high-tech tests are likely to accelerate. McKinsey found that the more CT and MRI scanners are out there, the more doctors use them. In 1997 there were fewer than 3,000 machines available, and they completed an average of 3,800 scans per year. By 2006 there were more than 10,000 in use, and they completed an average of 6,100 per year. According to a study in the Annals of Emergency Medicine, the use of CT scans in America’s emergency rooms “has more than quadrupled in recent decades.” As one former emergency-room doctor puts it, “Giving out CT scans like candy in the ER is the equivalent of putting a 90-year-old grandmother through a pat-down at the airport: Hey, you never know.”
Selling this equipment to hospitals — which has become a key profit center for industrial conglomerates like General Electric and Siemens — is one of the U.S. economy’s bright spots. I recently subscribed to an online headhunter’s listings for medical-equipment salesmen and quickly found an opening in Connecticut that would pay a salary of $85,000 and sales commissions of up to $95,000 more, plus a car allowance. The only requirement was that applicants have “at least one year of experience selling some form of capital equipment.”
In all, on the day I signed up for that jobs website, it carried 186 listings for medical-equipment salespeople just in Connecticut.
2. Medical Technology’s Perverse Economics
In the medical marketplace, unlike almost any other we can think of, the dynamics seem to be such that the advance of technology has made care more expensive, not less. First, it appears to encourage more procedures and treatment by making them easier and more convenient. (This is especially true for procedures like arthroscopic surgery.) Second, there is little patient pushback against higher costs because new technology seems to (and often does) result in safer, better care and because the customer getting the treatment is either not going to pay for it or not going to know the price until after the fact.
Beyond the hospitals’ and doctors’ obvious economic incentives to use the equipment and the manufacturers’ equally obvious incentives to sell it, there’s a legal incentive at work. Giving Janice S. a nuclear-imaging test instead of the lower-tech, less expensive stress test was the safer thing to do — a belt-and-suspenders approach that would let the hospital and doctor say they pulled out all the stops in case Janice S. died of a heart attack after she was sent home.
“We use the CT scan because it’s a great defense,” says the CEO of another hospital not far from Stamford. “For example, if anyone has fallen or done anything around their head — hell, if they even say the word head — we do it to be safe. We can’t be sued for doing too much.”
His rationale speaks to the real cost issue associated with medical-malpractice litigation. It’s not as much about the verdicts or settlements (or considerable malpractice-insurance premiums) that hospitals and doctors pay as it is about what they do to avoid being sued. And some no doubt claim they are ordering more tests to avoid being sued when it is actually an excuse for hiking profits. The most practical malpractice-reform proposals would not limit awards for victims but would allow doctors to use what’s called a safe-harbor defense. Under safe harbor, a defendant doctor or hospital could argue that the care provided was within the bounds of what peers have established as reasonable under the circumstances. The typical plaintiff argument that doing something more, like a nuclear-imaging test, might have saved the patient would then be less likely to prevail.
When Obamacare was being debated, Republicans pushed this kind of commonsense malpractice-tort reform. But the stranglehold that plaintiffs’ lawyers have traditionally had on Democrats prevailed, and neither a safe-harbor provision nor any other malpractice reform was included.
To the extent that they defend the chargemaster rates at all, the defense that hospital executives offer has to do with charity. As John Gunn, chief operating officer of Sloan-Kettering, puts it, “We charge those rates so that when we get paid by a [wealthy] uninsured person from overseas, it allows us to serve the poor.”
A closer look at hospital finance suggests two holes in that argument. First, while Sloan-Kettering does have an aggressive financial-assistance program (something Stamford Hospital lacks), at most hospitals it’s not a Saudi sheik but the almost poor — those who don’t qualify for Medicaid and don’t have insurance — who are most often asked to pay those exorbitant chargemaster prices. Second, there is the jaw-dropping difference between those list prices and the hospitals’ costs, which enables these ostensibly nonprofit institutions to produce high profits even after all the discounts. True, when the discounts to Medicare and private insurers are applied, hospitals end up being paid a lot less overall than what is itemized on the original bills. Stamford ends up receiving about 35% of what it bills, which is the yield for most hospitals. (Sloan-Kettering and MD Anderson, whose great brand names make them tough negotiators with insurance companies, get about 50%). However, no matter how steep the discounts, the chargemaster prices are so high and so devoid of any calculation related to cost that the result is uniquely American: thousands of nonprofit institutions have morphed into high-profit, high-profile businesses that have the best of both worlds. They have become entities akin to low-risk, must-have public utilities that nonetheless pay their operators as if they were high-risk entrepreneurs. As with the local electric company, customers must have the product and can’t go elsewhere to buy it. They are steered to a hospital by their insurance companies or doctors (whose practices may have a business alliance with the hospital or even be owned by it). Or they end up there because there isn’t any local competition. But unlike with the electric company, no regulator caps hospital profits.
Yet hospitals are also beloved local charities.
The result is that in small towns and cities across the country, the local nonprofit hospital may be the community’s strongest business, typically making tens of millions of dollars a year and paying its nondoctor administrators six or seven figures. As nonprofits, such hospitals solicit contributions, and their annual charity dinner, a showcase for their good works, is typically a major civic event. But charitable gifts are a minor part of their base; Stamford Hospital raised just over 1% of its revenue from contributions last year. Even after discounts, those $199.50 blood tests and multithousand-dollar CT scans are what really count.
Thus, according to the latest publicly available tax return it filed with the IRS, for the fiscal year ending September 2011, Stamford Hospital — in a midsize city serving an unusually high 50% share of highly discounted Medicare and Medicaid patients — managed an operating profit of $63 million on revenue actually received (after all the discounts off the chargemaster) of $495 million. That’s a 12.7% operating profit margin, which would be the envy of shareholders of high-service businesses across other sectors of the economy.
Its nearly half-billion dollars in revenue also makes Stamford Hospital by far the city’s largest business serving only local residents. In fact, the hospital’s revenue exceeded all money paid to the city of Stamford in taxes and fees. The hospital is a bigger business than its host city.
There is nothing special about the hospital’s fortunes. Its operating profit margin is about the same as the average for all nonprofit hospitals, 11.7%, even when those that lose money are included. And Stamford’s 12.7% was tallied after the hospital paid a slew of high salaries to its management, including $744,000 to its chief financial officer and $1,860,000 to CEO Grissler.
In fact, when McKinsey, aided by a Bank of America survey, pulled together all hospital financial reports, it found that the 2,900 nonprofit hospitals across the country, which are exempt from income taxes, actually end up averaging higher operating profit margins than the 1,000 for-profit hospitals after the for-profits’ income-tax obligations are deducted. In health care, being nonprofit produces more profit.
Nonetheless, hospitals like Stamford are able to use their sympathetic nonprofit status to push their interests. As the debate over deficit-cutting ideas related to health care has heated up, the American Hospital Association has run daily ads on Mike Allen’s Playbook, a popular Washington tip sheet, urging Congress not to cut hospital payments because that would endanger the “$39.3 billion” in uncompensated care for the poor that hospitals now provide either through charity programs or because of patients failing to pay their debts. Based on the formula hospitals use to calculate the cost of this charity care, that amounts to approximately 5% of their total revenue for 2010.
Under Internal Revenue Service rules, nonprofits are not prohibited from taking in more money than they spend. They just can’t distribute the overage to shareholders — because they don’t have any shareholders.
So, what do these wealthy nonprofits do with all the profit? In a trend similar to what we’ve seen in nonprofit colleges and universities — where there has been an arms race of sorts to use rising tuition to construct buildings and add courses of study — the hospitals improve and expand facilities (despite the fact that the U.S. has more hospital beds than it can fill), buy more equipment, hire more people, offer more services, buy rival hospitals and then raise executive salaries because their operations have gotten so much larger. They keep the upward spiral going by marketing for more patients, raising prices and pushing harder to collect bill payments. Only with health care, the upward spiral is easier to sustain. Health care is seen as even more of a necessity than higher education. And unlike in higher education, in health care there is little price transparency — and far less competition in any given locale even if there were transparency. Besides, a hospital is typically one of the community’s larger employers if not the largest, so there is unlikely to be much local complaining about its burgeoning economic fortunes.
In December, when the New York Times ran a story about how a deficit deal might threaten hospital payments, Steven Safyer, chief executive of Montefiore Medical Center, a large nonprofit hospital system in the Bronx, complained, “There is no such thing as a cut to a provider that isn’t a cut to a beneficiary … This is not crying wolf.”
Actually, Safyer seems to be crying wolf to the tune of about $196.8 million, according to the hospital’s latest publicly available tax return. That was his hospital’s operating profit, according to its 2010 return. With $2.586 billion in revenue — of which 99.4% came from patient bills and 0.6% from fundraising events and other charitable contributions — Safyer’s business is more than six times as large as that of the Bronx’s most famous enterprise, the New York Yankees. Surely, without cutting services to beneficiaries, Safyer could cut what have to be some of the Bronx’s better non-Yankee salaries: his own, which was $4,065,000, or those of his chief financial officer ($3,243,000), his executive vice president ($2,220,000) or the head of his dental department ($1,798,000).
Shocked by her bill from Stamford hospital and unable to pay it, Janice S. found a local woman on the Internet who is part of a growing cottage industry of people who call themselves medical-billing advocates. They help people read and understand their bills and try to reduce them. “The hospitals all know the bills are fiction, or at least only a place to start the discussion, so you bargain with them,” says Katalin Goencz, a former appeals coordinator in a hospital billing department who negotiated Janice S.’s bills from a home office in Stamford.
Goencz is part of a trade group called the Alliance of Claim Assistant Professionals, which has about 40 members across the country. Another group, Medical Billing Advocates of America, has about 50 members. Each advocate seems to handle 40 to 70 cases a year for the uninsured and those disputing insurance claims. That would be about 5,000 patients a year out of what must be tens of millions of Americans facing these issues — which may help explain why 60% of the personal bankruptcy filings each year are related to medical bills.
“I can pretty much always get it down 30% to 50% simply by saying the patient is ready to pay but will not pay $300 for a blood test or an X-ray,” says Goencz. “They hand out blood tests and X-rays in hospitals like bottled water, and they know it.”
After weeks of back-and-forth phone calls, for which Goencz charged Janice S. $97 an hour, Stamford Hospital cut its bill in half. Most of the doctors did about the same, reducing Janice S.’s overall tab from $21,000 to about $11,000.
But the best the ambulance company would offer Goencz was to let Janice S. pay off its $995 ride in $25-a-month installments. “The ambulances never negotiate the amount,” says Goencz.
A manager at Stamford Emergency Medical Services, which charged Janice S. $958 for the pickup plus $9.38 per mile, says that “our rates are all set by the state on a regional basis” and that the company is independently owned. That’s at odds with a trend toward consolidation that has seen several private-equity firms making investments in what Wall Street analysts have identified as an increasingly high-margin business. Overall, ambulance revenues were more than $12 billion last year, or about 10% higher than Hollywood’s box-office take.

Paying off a $1,000 four-mile ambulance ride on the layaway plan is no great deal. Neither is a 50% discount on a $199.50 blood test that should cost $15, or half off a $7,997.54 stress test that was probably all profit and may not have been necessary. But, says Goencz, “I don’t go over it line by line. I just go for a deal. The patient usually is shocked by the bill, doesn’t understand any of the language and has bill collectors all over her by the time they call me. So they’re grateful. Why give them heartache by telling them they still paid too much for some test or pill?”
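The ambulance figures above reconcile with one another, as a quick sketch shows; the four-mile distance and the $25-a-month installment plan are both taken from the article.

```python
# Ambulance-bill arithmetic from the figures reported in the article.
base_charge = 958.00    # charge for the pickup
per_mile = 9.38         # mileage rate
miles = 4               # the ride was about four miles

total = base_charge + per_mile * miles
print(round(total, 2))  # -> 995.52, billed as $995

# Paying it off on the $25-a-month "layaway plan":
months = 995 / 25
print(round(months, 1))  # -> 39.8 months, well over three years
```

A four-mile ride, in other words, takes more than three years to pay off at the installment rate the company offered.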
A Slip, a Fall and a $9,400 Bill
The billing advocates aren’t always successful. Just ask Emilia Gilbert, a school-bus driver who got into a fight with a hospital, associated with Connecticut’s most venerable nonprofit institution, that racked up quick profits on multiple CT scans, then refused to compromise at all on its chargemaster prices. Gilbert, now 66, is still making weekly payments on the bill she got in June 2008 after she slipped and fell on her face one summer evening in the small yard behind her house in Fairfield, Conn. Her nose bleeding heavily, she was taken to the emergency room at Bridgeport Hospital.
Along with Greenwich Hospital and the Hospital of St. Raphael in New Haven, Bridgeport Hospital is now owned by the Yale New Haven Health System, which boasts a variety of gleaming new facilities. Although Yale University and Yale New Haven are separate entities, Yale–New Haven Hospital is the teaching hospital for the Yale Medical School, and university representatives, including Yale president Richard Levin, sit on the Yale New Haven Health System board.
“I was there for maybe six hours, until midnight,” Gilbert recalls, “and most of it was spent waiting. I saw the resident for maybe 15 minutes, but I got a lot of tests.”
In fact, Gilbert got three CT scans — of her head, her chest and her face. The last one showed a hairline fracture of her nose. The CT bills alone were $6,538. (Medicare would have paid about $825 for all three.) A doctor charged $261 to read the scans.
Gilbert got the same troponin blood test that Janice S. got — the one Medicare pays $13.94 for and for which Janice S. was billed $199.50 at Stamford. Gilbert got just one. Bridgeport Hospital charged 20% more than its downstate neighbor: $239.
Also on the bill were items that neither Medicare nor any insurance company would pay anything at all for: basic instruments and bandages and even the tubing for an IV setup. Under Medicare regulations and the terms of most insurance contracts, these are supposed to be part of the hospital’s facility charge, which in this case was $908 for the emergency room.
Gilbert’s total bill was $9,418.
“We think the chargemaster is totally fair,” says William Gedge, senior vice president of payer relations at Yale New Haven Health System. “It’s fair because everyone gets the same bill. Even Medicare gets exactly the same charges that this patient got. Of course, we will have different arrangements for how Medicare or an insurance company will not pay some of the charges or discount the charges, but everyone starts from the same place.” Asked how the chargemaster charge for an item like the troponin test was calculated, Gedge said he “didn’t know exactly” but would try to find out. He subsequently reported back that “it’s an historical charge, which takes into account all of our costs for running the hospital.”
Bridgeport Hospital had $420 million in revenue and an operating profit of $52 million in 2010, the most recent year covered by its federal financial reports. CEO Robert Trefry, who has since left his post, was listed as having been paid $1.8 million. The CEO of the parent Yale New Haven Health System, Marna Borgstrom, was paid $2.5 million, which is 58% more than the $1.6 million paid to Levin, Yale University’s president.
“You really can’t compare the two jobs,” says Yale–New Haven Hospital senior vice president Vincent Petrini. “Comparing hospitals to universities is like apples and oranges. Running a hospital organization is much more complicated.” Actually, the four-hospital chain and the university have about the same operating budget. And it would seem that Levin deals with what most would consider complicated challenges in overseeing 3,900 faculty members, corralling (and complying with the terms of) hundreds of millions of dollars in government research grants and presiding over a $19 billion endowment, not to mention admitting and educating 14,000 students spread across Yale College and a variety of graduate schools, professional schools and foreign-study outposts. And surely Levin’s responsibilities are as complicated as those of the CEO of Yale New Haven Health’s smallest unit — the 184-bed Greenwich Hospital, whose CEO was paid $112,000 more than Levin.
“When I got the bill, I almost had to go back to the hospital,” Gilbert recalls. “I was hyperventilating.” Contributing to her shock was the fact that although her employer supplied insurance from Cigna, one of the country’s leading health insurers, Gilbert’s policy was from a Cigna subsidiary called Starbridge that insures mostly low-wage earners. That made Gilbert one of millions of Americans like Sean Recchi who are routinely categorized as having health insurance but really don’t have anything approaching meaningful coverage.
Starbridge covered Gilbert for just $2,500 per hospital visit, leaving her on the hook for about $7,000 of a $9,400 bill. Under Connecticut’s rules (states set their own guidelines for Medicaid, the federal-state program for the poor), Gilbert’s $1,800 a month in earnings was too high for her to qualify for Medicaid assistance. She was also turned down, she says, when she requested financial assistance from the hospital. Yale New Haven’s Gedge insists that she never applied to the hospital for aid, and Gilbert could not supply me with copies of any applications.
In September 2009, after a series of fruitless letters and phone calls from its bill collectors to Gilbert, the hospital sued her. Gilbert found a medical-billing advocate, Beth Morgan, who analyzed the charges on the bill and compared them with the discounted rates insurance companies would pay. During two court-required mediation sessions, Bridgeport Hospital’s attorney wouldn’t budge; his client wanted the bill paid in full, Gilbert and Morgan recall. At the third and final mediation, Gilbert was offered a 20% discount off the chargemaster fees if she would pay immediately, but she says she responded that according to what Morgan told her about the bill, it was still too much to pay. “We probably could have offered more,” Gedge acknowledges. “But in these situations, our bill-collection attorneys only know the amount we are saying is owed, not whether it is a chargemaster amount or an amount that is already discounted.”
On July 11, 2011, with the school-bus driver representing herself in Bridgeport superior court, a judge ruled that Gilbert had to pay all but about $500 of the original charges. (He deducted the superfluous bills for the basic equipment.) The judge put her on a payment schedule of $20 a week for six years. For her, the chargemaster prices were all too real.
The One-Day, $87,000 Outpatient Bill
Getting a patient in and out of a hospital the same day seems like a logical way to cut costs. Outpatients don’t take up hospital rooms or require the expensive 24/7 observation and care that come with them. That’s why in the 1990s Medicare pushed payment formulas on hospitals that paid them for whatever ailment they were treating (with more added for documented complications), not according to the number of days the patient spent in a bed. Insurance companies also pushed incentives on hospitals to move patients out faster or not admit them for overnight stays in the first place. Meanwhile, the introduction of procedures like noninvasive laparoscopic surgery helped speed the shift from inpatient to outpatient.
By 2010, average days spent in the hospital per patient had declined significantly, while outpatient services had increased even more dramatically. However, the result was not the savings that reformers had envisioned. It was just the opposite.
Experts estimate that outpatient services are now packed with so much hidden profit that about two-thirds of the $750 billion annual U.S. overspending identified by the McKinsey research on health care comes in payments for outpatient services. That includes work done by physicians, laboratories and clinics (including diagnostic clinics for CT scans or blood tests) and same-day surgeries and other hospital treatments like cancer chemotherapy. According to a McKinsey survey, outpatient emergency-room care averages an operating profit margin of 15% and nonemergency outpatient care averages 35%. On the other hand, inpatient care has a margin of just 2%. Put simply, inpatient care at nonprofit hospitals is, in fact, almost nonprofit. Outpatient care is wildly profitable.
“An operating room has fixed costs,” explains one hospital economist. “You get 10% or 20% more patients in there every day who you don’t have to board overnight, and that goes straight to the bottom line.”
The 2011 outpatient visit of someone I’ll call Steve H. to Mercy Hospital in Oklahoma City illustrates those economics. Steve H. had the kind of relatively routine care that patients might expect would be no big deal: he spent the day at Mercy getting his aching back fixed.
A blue-collar worker who was in his 30s at the time and worked at a local retail store, Steve H. had consulted a specialist at Mercy in the summer of 2011 and was told that a stimulator would have to be surgically implanted in his back. The good news was that with all the advances of modern technology, the whole process could be done in a day. (The latest federal filing shows that 63% of surgeries at Mercy were performed on outpatients.)
Steve H.’s doctor intended to use a RestoreUltra neurostimulator manufactured by Medtronic, a Minneapolis-based company with $16 billion in annual sales that bills itself as the world’s largest stand-alone medical-technology company. “RestoreUltra delivers spinal-cord stimulation through one or more leads selected from a broad portfolio for greater customization of therapy,” Medtronic’s website promises. I was not able to interview Steve H., but according to Pat Palmer, a medical-billing specialist based in Salem, Va., who consults for the union that provides Steve H.’s health insurance, Steve H. didn’t ask how much the stimulator would cost because he had $45,181 remaining on the $60,000 annual payout limit his union-sponsored health-insurance plan imposed. “He figured, How much could a day at Mercy cost?” Palmer says. “Five thousand? Maybe 10?”
Steve H. was about to run up against a seemingly irrelevant footnote in millions of Americans’ insurance policies: the limit, sometimes annual or sometimes over a lifetime, on what the insurer has to pay out for a patient’s claims. Under Obamacare, those limits will not be allowed in most health-insurance policies after 2013. That might help people like Steve H. but is also one of the reasons premiums are going to skyrocket under Obamacare.
Steve H.’s bill for his day at Mercy contained all the usual and customary overcharges. One item was “MARKER SKIN REG TIP RULER” for $3. That’s the marking pen, presumably reusable, that marked the place on Steve H.’s back where the incision was to go. Six lines down, there was “STRAP OR TABLE 8X27 IN” for $31. That’s the strap used to hold Steve H. onto the operating table. Just below that was “BLNKT WARM UPPER BDY 42268” for $32. That’s a blanket used to keep surgery patients warm. It is, of course, reusable, and it’s available new on eBay for $13. Four lines down there’s “GOWN SURG ULTRA XLG 95121” for $39, which is the gown the surgeon wore. Thirty of them can be bought online for $180. Neither Medicare nor any large insurance company would pay a hospital separately for those straps or the surgeon’s gown; that’s all supposed to come with the facility fee paid to the hospital, which in this case was $6,289.
In all, Steve H.’s bill for these basic medical and surgical supplies was $7,882. On top of that was $1,837 under a category called “Pharmacy General Classification” for items like bacitracin ($108). But that was the least of Steve H.’s problems.
The big-ticket item for Steve H.’s day at Mercy was the Medtronic stimulator, and that’s where most of Mercy’s profit was collected during his brief visit. The bill for that was $49,237.
According to the chief financial officer of another hospital, the wholesale list price of the Medtronic stimulator is “about $19,000.” Because Mercy is part of a major hospital chain, it might pay 5% to 15% less than that. Even assuming Mercy paid $19,000, it would make more than $30,000 selling it to Steve H., a profit margin of more than 150%. To the extent that I found any consistency among hospital chargemaster practices, this is one of them: hospitals routinely seem to charge 2½ times what these expensive implantable devices cost them, which produces that 150% profit margin.
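For readers who want to check the arithmetic, that two-and-a-half-times pattern is easy to verify. A minimal sketch in Python, using the $19,000 wholesale figure cited above (the function name and the exact 2.5 multiplier are illustrative assumptions, not hospital policy):

```python
def chargemaster_markup(wholesale_cost, multiplier=2.5):
    """Apply the roughly 2.5x chargemaster multiple observed
    for expensive implantable devices."""
    charge = wholesale_cost * multiplier
    margin = (charge - wholesale_cost) / wholesale_cost  # profit as a share of cost
    return charge, margin

charge, margin = chargemaster_markup(19_000)
# charge = 47,500, close to the $49,237 Mercy billed; margin = 1.5, i.e. 150%
```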
As Steve H. found out when he got his bill, he had exceeded the $45,000 that was left on his insurance policy’s annual payout limit just with the neurostimulator. And his total bill was $86,951. After his insurance paid that first $45,000, he still owed more than $40,000, not counting doctors’ bills. (I did not see Steve H.’s doctors’ bills.)
Mercy Hospital is owned by an organization under the umbrella of the Catholic Church called Sisters of Mercy. Its mission, as described in its latest filing with the IRS as a tax-exempt charity, is “to carry out the healing ministry of Jesus by promoting health and wellness.” With a chain of 31 hospitals and 300 clinics across the Midwest, Sisters of Mercy uses a bill-collection firm based in Topeka, Kans., called Berlin-Wheeler Inc. Suits against Mercy patients are on file in courts across Oklahoma listing Berlin-Wheeler as the plaintiff. According to its most recent tax return, the Oklahoma City unit of the Sisters of Mercy hospital chain collected $337 million in revenue for the fiscal year ending June 30, 2011. It had an operating profit of $34 million. And that was after paying 10 executives more than $300,000 each, including $784,000 to a regional president and $438,000 to the hospital president.
That report doesn’t cover the executives overseeing the chain, called Mercy Health, of which Mercy in Oklahoma City is a part. The overall chain had $4.28 billion in revenue that year. Its hospital in Springfield, Mo. (pop. 160,660), had $880.7 million in revenue and an operating profit of $319 million, according to its federal filing. The incomes of the parent company’s executives appear on other IRS filings covering various interlocking Mercy nonprofit corporate entities. Mercy president and CEO Lynn Britton made $1,930,000, and an executive vice president, Myra Aubuchon, was paid $3.7 million, according to the Mercy filing. In all, seven Mercy Health executives were paid more than $1 million each. A note at the end of an Ernst & Young audit that is attached to Mercy’s IRS filing reported that the chain provided charity care worth 3.2% of its revenue in the previous year. However, the auditors state that the value of that care is based on the charges on all the bills, not the actual cost to Mercy of providing those services — in other words, the chargemaster value. Assuming that Mercy’s actual costs are a tenth of these chargemaster values — they’re probably less — all of this charity care actually cost Mercy about three-tenths of 1% of its revenue, or about $13 million out of $4.28 billion.
Mercy’s website lists an 18-member media team; one member, Rachel Wright, told me that neither CEO Britton nor anyone else would be available to answer questions about compensation, the hospital’s bill-collecting activities through Berlin-Wheeler or Steve H.’s bill, which I had sent her (with his name and the date of his visit to the hospital redacted to protect his privacy).
Wright said the hospital’s lawyers had decided that discussing Steve H.’s bill would violate the federal HIPAA law protecting the privacy of patient medical records. I pointed out that I wanted to ask questions only about the hospital’s charges for standard items — such as surgical gowns, basic blood tests, blanket warmers and even medical devices — that had nothing to do with individual patients. “Everything is particular to an individual patient’s needs,” she replied. Even a surgical gown? “Yes, even a surgical gown. We cannot discuss this with you. It’s against the law.” She declined to put me in touch with the hospital’s lawyers to discuss their legal analysis.
Hiding behind a privacy statute to avoid talking about how it prices surgeons’ gowns may be a stretch, but Mercy might have a valid legal reason not to discuss what it paid for the Medtronic device before selling it to Steve H. for $49,237. Pharmaceutical and medical-device companies routinely insert clauses in their sales contracts prohibiting hospitals from sharing information about what they pay and the discounts they receive. In January 2012, a report by the federal Government Accountability Office found that “the lack of price transparency and the substantial variation in amounts hospitals pay for some IMD [implantable medical devices] raise questions about whether hospitals are achieving the best prices possible.”
A lack of price transparency was not the only potential market inefficiency the GAO found. “Although physicians are not involved in price negotiations, they often express strong preferences for certain manufacturers and models of IMD,” the GAO reported. “To the extent that physicians in the same hospitals have different preferences for IMDs, it may be difficult for the hospital to obtain volume discounts from particular manufacturers.”
“Doctors have no incentive to buy one kind of hip or other implantable device as a group,” explains Ezekiel Emanuel, an oncologist and a vice provost of the University of Pennsylvania who was a key White House adviser when Obamacare was created. “Even in the most innocent of circumstances, it kills the chance for market efficiencies.”
The circumstances are not always innocent. In 2008, Gregory Demske, an assistant inspector general at the Department of Health and Human Services, told a Senate committee that “physicians routinely receive substantial compensation from medical-device companies through stock options, royalty agreements, consulting agreements, research grants and fellowships.”
The assistant inspector general then revealed startling numbers about the extent of those payments: “We found that during the years 2002 through 2006, four manufacturers, which controlled almost 75% of the hip- and knee-replacement market, paid physician consultants over $800 million under the terms of roughly 6,500 consulting agreements.”
Other doctors, Demske noted, had stretched the conflict of interest beyond consulting fees: “Additionally, physician ownership of medical-device manufacturers and related businesses appears to be a growing trend in the medical-device sector … In some cases, physicians could receive substantial returns while contributing little to the venture beyond the ability to generate business for the venture.” In 2010, Medtronic, along with several other members of a medical-technology trade group, began to make the potential conflicts transparent by posting all payments to physicians on a section of its website called Physician Collaboration. The voluntary move came just before a similar disclosure regulation promulgated by the Obama Administration went into effect governing any doctor who receives funds from Medicare or the National Institutes of Health (which would include most doctors). And the nonprofit public-interest-journalism organization ProPublica has smartly organized data on doctor payments on its website. The conflicts have not been eliminated, but they are being aired, albeit on searchable websites rather than through a requirement that doctors disclose them to patients directly.
But conflicts that may encourage devices to be overprescribed or that lead doctors to prescribe a more expensive one instead of another are not the core problem in this marketplace. The more fundamental disconnect is that there is little reason to believe that what Mercy Hospital paid Medtronic for Steve H.’s device would have had any bearing on what the hospital decided to charge Steve H. Why would it? He did not know the price in advance.
Besides, studies delving into the economics of the medical marketplace consistently find that a moderately higher or lower price doesn’t change consumer purchasing decisions much, if at all, because in health care there is little of the price sensitivity found in conventional marketplaces, even on the rare occasion that patients know the cost in advance. If you were in pain or in danger of dying, would you turn down treatment at a price 5% or 20% higher than the price you might have expected — that is, if you’d had any informed way to know what to expect in the first place, which you didn’t?
The question of how sensitive patients will be to increased prices for medical devices recently came up in a different context. Aware of the huge profits being accumulated by devicemakers, Obama Administration officials decided to recapture some of the money by imposing a 2.3% federal excise tax on the sales of these devices as well as other medical technology such as CT-scan equipment. The rationale was that getting back some of these generous profits was a fair way to cover some of the cost of the subsidized, broader insurance coverage provided by Obamacare — insurance that in some cases will pay for more of the devices. The industry has since geared up in Washington and is pushing legislation that would repeal the tax. Its main argument is that a 2.3% increase in prices would so reduce sales that it would wipe out a substantial portion of what the industry claims are the 422,000 jobs it supports in a $136 billion industry.
That prediction of doom brought on by this small tax contradicts the reams of studies documenting consumer price insensitivity in the health care marketplace. It also ignores profit-margin data collected by McKinsey that demonstrates that devicemakers have an open field in the current medical ecosystem. A 2011 McKinsey survey for medical-industry clients reported that devicemakers are superstar performers in a booming medical economy. Medtronic, which performed in the middle of the group, delivered an amazing compounded annual return of 14.95% to shareholders from 1990 to 2010. That means $100 invested in the company in 1990 was worth $1,622 20 years later. So if the extra 2.3% would be so disruptive to the market for products like Medtronic’s that it would kill sales, why would the industry pass it along as a price increase to consumers? It hardly has to, given its profit margins.
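The compounding behind that shareholder-return figure checks out. A one-line verification in Python, using the rate and period from the McKinsey survey cited above:

```python
# $100 compounding at 14.95% a year over the 20 years from 1990 to 2010
value = 100 * (1 + 0.1495) ** 20
# rounds to $1,622, matching the figure in the text
```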
Medtronic spokeswoman Donna Marquad says that for competitive reasons, her company will not discuss sales figures or the profit on Steve H.’s neurostimulator. But Medtronic’s October 2012 quarterly SEC filing reported that its spine “products and therapies,” which presumably include Steve H.’s device, “continue to gain broad surgeon acceptance” and that its cost to make all of its products was 24.9% of what it sells them for.
That’s an unusually high gross profit margin — 75.1% — for a company that manufactures real physical products. Apple also produces high-end, high-tech products, and its gross margin is 40%. If the neurostimulator enjoys that company-wide profit margin, it would mean that if Medtronic was paid $19,000 by Mercy Hospital, Medtronic’s cost was about $4,500 and it made a gross profit of about $14,500 before expenses for sales, overhead and management — including CEO Omar Ishrak’s compensation, which was $25 million for the 2012 fiscal year.
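The gross-profit arithmetic in that paragraph follows directly from Medtronic’s reported 24.9% cost-of-sales ratio. A sketch in Python (the text rounds the results to “about $4,500” and “about $14,500”; the variable names are mine):

```python
sale_price = 19_000   # the assumed wholesale price Mercy paid Medtronic
cogs_ratio = 0.249    # Medtronic's reported cost of sales as a share of revenue

cost = sale_price * cogs_ratio    # ≈ $4,731
gross_profit = sale_price - cost  # ≈ $14,269, before sales, overhead and management
```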
When Pat Palmer, the medical-billing specialist who advises Steve H.’s union, was given the Mercy bill to deal with, she prepared a tally of about $4,000 worth of line items that she thought represented the most egregious charges, such as the surgical gown, the blanket warmer and the marking pen. She restricted her list to those she thought were plainly not allowable. “I didn’t dispute nearly all of them,” she says. “Because then they get their backs up.”
The hospital quickly conceded those items. For the remaining $83,000, Palmer invoked a 40% discount off chargemaster rates that Mercy allows for smaller insurance providers like the union. That cut the bill to about $50,000, for which the insurance company owed 80%, or about $40,000. That left Steve H. with a $10,000 bill.
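The cascade of discounts Palmer invoked works out as described. A minimal sketch in Python, using the rounded figures in the passage:

```python
remaining_bill = 83_000                          # chargemaster charges left after concessions
after_discount = remaining_bill * (1 - 0.40)     # 40% discount for small insurers -> $49,800
insurer_share = after_discount * 0.80            # the union's plan owed 80% -> $39,840
patient_share = after_discount - insurer_share   # ≈ $9,960, the "$10,000 bill" Steve H. faced
```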
Sean Recchi wasn’t as fortunate. His bill — which included not only the aggressively marked-up charge of $13,702 for the Rituxan cancer drug but also the usual array of chargemaster fees for basics like generic Tylenol, blood tests and simple supplies — had one item not found on any other bill I examined: MD Anderson’s charge of $7 each for “ALCOHOL PREP PAD.” This is a little square of cotton used to apply alcohol to an injection. A box of 200 can be bought online for $1.91.
We have seen that to the extent that most hospital administrators defend such chargemaster rates at all, they maintain that they are just starting points for a negotiation. But patients don’t typically know they are in a negotiation when they enter the hospital, nor do hospitals let them know that. And in any case, at MD Anderson, the Recchis were made to pay every penny of the chargemaster bill up front because their insurance was deemed inadequate. That left Penne, the hospital spokeswoman, with only this defense for the most blatantly abusive charges for items like the alcohol squares: “It is difficult to compare a retail store charge for a common product with a cancer center that provides the item as part of its highly specialized and personalized care,” she wrote in an e-mail. Yet the hospital also charges for that “specialized and personalized” care through, among other items, its $1,791-a-day room charge.
Before MD Anderson marked up Recchi’s Rituxan to $13,702, the profit taking was equally aggressive, and equally routine, at the beginning of the supply chain — at the drug company. Rituxan is a prime product of Biogen Idec, a company with $5.5 billion in annual sales. Its CEO, George Scangos, was paid $11,331,441 in 2011, a 20% boost over his 2010 income. Rituxan is made and sold by Biogen Idec in partnership with Genentech, a South San Francisco–based biotechnology pioneer. Genentech brags about Rituxan on its website, as did Roche, Genentech’s $45 billion parent, in its latest annual report. And in an Investor Day presentation last September, Roche CEO Severin Schwan stressed that his company is able to keep prices and margins high because of its focus on “medically differentiated therapies.” Rituxan, a cancer wonder drug, certainly meets that test.
A spokesman at Genentech for the Biogen Idec–Genentech partnership would not say what the drug cost the companies to make, but according to its latest annual report, Biogen Idec’s cost of sales — the incremental expense of producing and shipping each of its products compared with what it sells them for — was only 10%. That’s lower than the incremental cost of sales for most software companies, and the software companies usually don’t produce anything physical or have to pay to ship anything.
This would mean that Sean Recchi’s dose of Rituxan cost the Biogen Idec–Genentech partnership as little as $300 to make, test, package and ship to MD Anderson for $3,000 to $3,500, whereupon the hospital sold it to Recchi for $13,702.
As 2013 began, Recchi was being treated back in Ohio because he could not pay MD Anderson for more than his initial treatment. As for the $13,702-a-dose Rituxan, it turns out that Biogen Idec’s partner Genentech has a charity-access program that Recchi’s Ohio doctor told him about that enabled him to get those treatments free. “MD Anderson never said a word to us about the Genentech program,” says Stephanie Recchi. “They just took our money up front.”
Genentech spokeswoman Charlotte Arnold would not disclose how much free Rituxan had been dispensed to patients like Recchi in the past year, saying only that Genentech has “donated $2.85 billion in free medicine to uninsured patients in the U.S.” since 1985. That seems like a lot until the numbers are broken down. Arnold says the $2.85 billion is based on what the drugmaker sells the product for, not what it costs Genentech to make. On the basis of Genentech’s historic costs and revenue since 1985, that would make the cost of these donations less than 1% of Genentech’s sales — not something likely to take the sizzle out of CEO Schwan’s Investor Day.
Nonetheless, the company provided more financial support than MD Anderson did to Recchi, whose wife reports that he “is doing great. He’s in remission.”
Penne of MD Anderson stressed that the hospital provides its own financial aid to patients but that the state legislature restricts the assistance to Texas residents. She also said MD Anderson “makes every attempt” to inform patients of drug-company charity programs and that 50 of the hospital’s 24,000 inpatients and outpatients, one of whom was from outside Texas, received charitable aid for Rituxan treatments in 2012.
3. Catastrophic Illness — And the Bills to Match
When medical care becomes a matter of life and death, the money demanded by the health care ecosystem reaches a wholly different order of magnitude, churning out reams of bills to people who can’t focus on them, let alone pay them.

Soon after he was diagnosed with lung cancer in January 2011, a patient whom I will call Steven D. and his wife Alice knew that they were only buying time. The crushing question was, How much is time really worth? As Alice, who makes about $40,000 a year running a child-care center in her home, explained, “[Steven] kept saying he wanted every last minute he could get, no matter what. But I had to be thinking about the cost and how all this debt would leave me and my daughter.” By the time Steven D. died at his home in Northern California the following November, he had lived for an additional 11 months. And Alice had collected bills totaling $902,452.

The family’s first bill — for $348,000 — which arrived when Steven got home from the Seton Medical Center in Daly City, Calif., was full of all the usual chargemaster profit grabs: $18 each for 88 diabetes-test strips that Amazon sells in boxes of 50 for $27.85; $24 each for 19 niacin pills that are sold in drugstores for about a nickel apiece. There were also four boxes of sterile gauze pads for $77 each. None of that was considered part of what was provided in return for Seton’s facility charge for the intensive-care unit for two days at $13,225 a day, 12 days in the critical unit at $7,315 a day and one day in a standard room (all of which totaled $120,116 over 15 days). There was also $20,886 for CT scans and $24,251 for lab work.

Alice responded to my question about the obvious overcharges on the bill for items like the diabetes-test strips or the gauze pads much as Mrs. Lincoln, according to the famous joke, might have had she been asked what she thought of the play. “Are you kidding?” she said. “I’m dealing with a husband who had just been told he has Stage IV cancer. That’s all I can focus on … You think I looked at the items on the bills? I just looked at the total.”
Steven and Alice didn’t know that hospital billing people consider the chargemaster to be an opening bid. That’s because no medical bill ever says, “Give us your best offer.” The couple knew only that the bill said they had maxed out on the $50,000 payout limit on a UnitedHealthcare policy they had bought through a community college where Steven had briefly enrolled a year before. “We were in shock,” Alice recalls. “We looked at the total and couldn’t deal with it. So we just started putting all the bills in a box. We couldn’t bear to look at them.”
The $50,000 that UnitedHealthcare paid to Seton Medical Center was worth about $80,000 in credits because any charges covered by the insurer were subject to the discount it had negotiated with Seton. After that $80,000, Steven and Alice were on their own, not eligible for any more discounts.

Four months into her husband’s illness, Alice by chance got the name of Patricia Stone, a billing advocate based in Menlo Park, Calif. Stone’s typical clients are middle-class people having trouble with insurance claims. Stone felt so bad for Steven and Alice — she saw the blizzard of bills Alice was going to have to sort through — that, says Alice, she “gave us many of her hours,” for which she usually charges $100, “for free.”

Stone was soon able to persuade Seton to write off $297,000 of its $348,000 bill. Her argument was simple: There was no way the D.’s could pay it now or in the future, though they would scrape together $3,000 as a show of good faith. With the couple’s $3,000 on top of the $50,000 paid by the UnitedHealthcare insurance, that $297,000 write-off amounted to an 85% discount.

According to its latest financial report, Seton applies so many discounts and write-offs to its chargemaster bills that it ends up with only about 18% of the revenue it bills for. That’s an average 82% discount, compared with an average discount of about 65% that I saw at the other hospitals whose bills were examined — except for the MD Anderson and Sloan-Kettering cancer centers, which collect about 50% of their chargemaster charges. Seton’s discounting practices may explain why it is the only hospital whose bills I looked at that actually reported a small operating loss — $5 million — on its last financial report.
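The effective discount in that negotiation is straightforward to compute. A sketch in Python using the figures above (the variable names are mine):

```python
billed = 348_000       # Seton's chargemaster bill
paid = 50_000 + 3_000  # UnitedHealthcare's payout limit plus the family's $3,000

effective_discount = 1 - paid / billed
# ≈ 0.85, the roughly 85% discount the $297,000 write-off amounted to

seton_average = 1 - 0.18  # Seton keeps ~18 cents per billed dollar -> 82% average discount
```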
Of course, had the D.’s not come across Stone, the incomprehensible but terrifying bills would have piled up in a box, and the Seton Medical Center bill collectors would not have been kept at bay.

Robert Issai, the CEO of the Daughters of Charity Health System, which owns and runs Seton, refused through an e-mail from a public relations assistant to respond to requests for a comment on any aspect of his hospital’s billing or collections policies. Nor would he respond to repeated requests for a specific comment on the $24 charge for niacin pills, the $18 charge for the diabetes-test strips or the $77 charge for gauze pads. He also declined to respond when asked, via a follow-up e-mail, if the hospital thinks that sending patients who have just been told they are terminally ill bills that reflect chargemaster rates that the hospital doesn’t actually expect to be paid might unduly upset them during a particularly sensitive time.

To begin to deal with all the other bills that kept coming after Steven’s first stay at Seton, Stone was also able to get him into a special high-risk insurance pool set up by the state of California. It helped but not much. The insurance premium was $1,000 a month, quite a burden on a family whose income was maybe $3,500 a month. And it had an annual payout limit of $75,000. The D.’s blew through that in about two months.

The bills kept piling up. Sequoia Hospital — where Steven was an inpatient as well as an outpatient between the end of January and November following his initial stay at Seton — weighed in with 28 bills, all at chargemaster prices, including invoices for $99,000, $61,000 and $29,000. Doctor-run outpatient chemotherapy clinics wanted more than $85,000. One outside lab wanted $11,900.
Stone organized these and other bills into an elaborate spreadsheet — a ledger documenting how catastrophic illness in America unleashes its own mini-GDP.
In July, Stone figured out that Steven and Alice should qualify for Medicaid, which is called Medi-Cal in California. But there was a catch.

Medicaid is the joint federal-state program directed at the poor that is often spoken of in the same breath as Medicare. Although most of the current national debate on entitlements is focused on Medicare, when Medicaid’s subsidiary program called Children’s Health Insurance, or CHIP, is counted, Medicaid actually covers more people: 56.2 million compared with 50.2 million. As Steven and Alice found out, Medicaid is also more vulnerable to cuts and conditions that limit coverage, probably for the same reason that most politicians and the press don’t pay the same attention to it that they do to Medicare: its constituents are the poor. The major difference between the two programs is that while Medicare’s rules are pretty much uniform across state lines, the states set the key rules for Medicaid because they finance a big portion of the claims.

According to Stone, Steven and Alice immediately ran into one of those rules. Even with their modest income, the D.’s would have to pay $3,000 a month in medical bills before Medi-Cal would kick in. That amounted to most of Alice’s monthly take-home pay.
Medi-Cal was even willing to go back five months, to February, to cover the couple’s mountain of bills, but first they had to come up with $15,000. “We didn’t have anything close to that,” recalls Alice.
Stone then convinced Sequoia that if the hospital wanted to see any of the Medi-Cal money necessary to pay its bills (albeit at the big discount Medi-Cal would take), it should give Steven a “credit” for $15,000 — in other words, write it off. Sequoia agreed to do that for most of the bills. This was clearly a maneuver that Steven and Alice never could have navigated on their own.

Covering most of the Sequoia debt was a huge relief, but there were still hundreds of thousands of dollars in bills left unpaid as Steven approached his end in the fall of 2011. Meantime, the bills kept coming.

“We started talking about the cost of the chemo,” Alice recalls. “It was a source of tension between us … Finally,” she says, “the doctor told us that the next one scheduled might prolong his life a month, but it would be really painful. So he gave up.”
By the one-year anniversary of Steven’s death, late last year, Stone had made a slew of deals with his doctors, clinics and other providers whose services Medi-Cal did not cover. Some, like Seton, were generous. The home health care nurse ended up working for free in the final days of Steven’s life, which were over the Thanksgiving weekend. “He was a saint,” says Alice. “He said he was doing it to become accredited, so he didn’t charge us.”
Others, including some of the doctors, were more hard-nosed, insisting on full payment or offering minimal discounts. Still others had long since sold the bills to professional debt collectors, who, by definition, are bounty hunters. Alice and Stone were still hoping Medi-Cal would end up covering some or most of the debt.
As 2012 closed, Alice had paid out about $30,000 of her own money (including the $3,000 to Seton) and still owed $142,000 — her losses from the fixed poker game that she was forced to play in the worst of times with the worst of cards. She was still getting letters and calls from bill collectors. “I think about the $142,000 all the time. It just hangs over my head,” she said in December.
One lesson she has learned, she adds: “I’m never going to remarry. I can’t risk the liability.”2
2. In early February, Alice told TIME that she had recently eliminated “most of” the debt through proceeds from the sale of a small farm in Oklahoma her husband had inherited and after further payments from Medi-Cal and a small life-insurance policy.
$132,303: The Lab-Test Cash Machine
As 2012 began, a couple I’ll call Rebecca and Scott S., both in their 50s, seemed to have carved out a comfortable semiretirement in a suburb near Dallas. Scott had successfully sold his small industrial business and was working part time advising other industrial companies. Rebecca was running a small marketing company.

On March 4, Scott started having trouble breathing. By dinnertime he was gasping violently as Rebecca raced him to the emergency room at the University of Texas Southwestern Medical Center. Both Rebecca and her husband thought he was about to die, Rebecca recalls. It was not the time to think about the bills that were going to change their lives if Scott survived, and certainly not the time to imagine, much less worry about, the piles of charges for daily routine lab tests that would be incurred by any patient in the middle of a long hospital stay.

Scott was in the hospital for 32 days before his pneumonia was brought under control. Rebecca recalls that “on about the fourth or fifth day, I was sitting around the hospital and bored, so I went down to the business office just to check that they had all the insurance information.” She remembered that there was, she says, “some kind of limit on it.”
“Even by then, the bill was over $80,000,” she recalls. “I couldn’t believe it.”
The woman in the business office matter-of-factly gave Rebecca more bad news: Her insurance policy, from a company called Assurant Health, had an annual payout limit of $200,000. Because of some prior claims Assurant had processed, the S.’s were well on their way to exceeding the limit. Just the room-and-board charge at Southwestern was $2,293 a day. And that was before all the real charges were added. When Scott checked out, his 161-page bill was $474,064. Scott and Rebecca were told they owed $402,955 after the payment from their insurance policy was deducted. The top billing categories were $73,376 for Scott’s room; $94,799 for “RESP SERVICES,” which mostly meant supplying Scott with oxygen and testing his breathing and included multiple charges per day of $134 for supervising oxygen inhalation, for which Medicare would have paid $17.94; and $108,663 for “SPECIAL DRUGS,” which included mostly not-so-special drugs such as “SODIUM CHLORIDE .9%.” That’s a standard saline solution probably used intravenously in this case to maintain Scott’s water and salt levels. (It is also used to wet contact lenses.) You can buy a liter of the hospital version (bagged for intravenous use) online for $5.16. Scott was charged $84 to $134 for dozens of these saline solutions.
Then there was the $132,303 charge for “LABORATORY,” which included hundreds of blood and urine tests ranging from $30 to $333 each, for which Medicare either pays nothing because it is part of the room fee or pays $7 to $30. Hospital spokesman Russell Rian said that neither Daniel Podolsky, Texas Southwestern Medical Center’s $1,244,000-a-year president, nor any other executive would be available to discuss billing practices. “The law does not allow us to talk about how we bill,” he explained. Through a friend of a friend, Rebecca found Patricia Palmer, the same billing advocate based in Salem, Va., who worked on Steve H.’s bill in Oklahoma City. Palmer — whose firm, Medical Recovery Services, now includes her two adult daughters — was a claims processor for Blue Cross Blue Shield. She got into her current business after she was stunned by the bill her local hospital sent after one of her daughters had to go to the emergency room after an accident. She says it included items like the shade attached to an examining lamp. She then began looking at bills for friends as kind of a hobby before deciding to make it a business.
The best Palmer could do was get Texas Southwestern Medical to provide a credit that still left Scott and Rebecca owing $313,000. Palmer claimed in a detailed appeal that there were also overcharges totaling $113,000 — not because the prices were too high but because the items she singled out should not have been charged for at all. These included $5,890 for all of that saline solution and $65,600 for the management of Scott’s oxygen. These items are supposed to be part of the hospital’s general room-and-services charge, she argued, so they should not be billed twice.
In fact, Palmer — echoing a constant and convincing refrain I heard from billing advocates across the country — alleged that the hospital triple-billed for some items used in Scott’s care in the intensive-care unit. “First they charge more than $2,000 a day for the ICU, because it’s an ICU and it has all this special equipment and personnel,” she says. “Then they charge $1,000 for some kit used in the ICU to give someone a transfusion or oxygen … And then they charge $50 or $100 for each tool or bandage or whatever that there is in the kit. That’s triple billing.” Palmer and Rebecca are still fighting, but the hospital insists that the S.’s owe the $313,000 balance. That doesn’t include what Rebecca says were “thousands” in doctors’ bills and $70,000 owed to a second hospital after Scott suffered a relapse. The only offer the hospital has made so far is to cut the bill to $200,000 if it is paid immediately, or for the full $313,000 to be paid in 24 monthly payments. “How am I supposed to write a check right now for $200,000?” Rebecca asks. “I have boxes full of notices from bill collectors … We can’t apply for charity, because we’re kind of well off in terms of assets,” she adds. “We thought we were set, but now we’re pretty much on the edge.”
Insurance That Isn’t
“People, especially relatively wealthy people, always think they have good insurance until they see they don’t,” says Palmer. “Most of my clients are middle- or upper-middle-class people with insurance.”
Scott and Rebecca bought their plan from Assurant, which sells health insurance to small businesses that will pay only for limited coverage for their employees or to individuals who cannot get insurance through employers and are not eligible for Medicare or Medicaid. Assurant also sold the Recchis their plan that paid only $2,000 a day for Sean Recchi’s treatment at MD Anderson. Although the tight limits on what their policies cover are clearly spelled out in Assurant’s marketing materials and in the policy documents themselves, it seems that for its customers the appeal of having something called health insurance for a few hundred dollars a month is far more compelling than comprehending the details. “Yes, we knew there were some limits,” says Rebecca. “But when you see the limits expressed in the thousands of dollars, it looks O.K., I guess. Until you have an event.”
Millions of plans have annual payout limits, though the more typical plans purchased by employers usually set those limits at $500,000 or $750,000 — which can also quickly be consumed by a catastrophic illness. For that reason, Obamacare prohibited lifetime limits on any policies sold after the law passed and phases out all annual dollar limits by 2014. That will protect people like Scott and Rebecca, but it will also make everyone’s premiums dramatically higher, because insurance companies risk much more when there is no cap on their exposure.
But Obamacare does little to attack the costs that overwhelmed Scott and Rebecca. There is nothing, for example, that addresses what may be the most surprising sinkhole — the seemingly routine blood, urine and other laboratory tests for which Scott was charged $132,000, or more than $4,000 a day. By my estimates, about $70 billion will be spent in the U.S. on about 7 billion lab tests in 2013. That’s about $223 per person, or roughly 22 tests apiece at an average of $10 a test. Cutting the overordering and overpricing could easily take $25 billion out of that bill. Much of that overordering involves patients like Scott S. who require prolonged hospital stays. Their tests become a routine, daily cash generator. “When you’re getting trained as a doctor,” says a physician who was involved in framing health care policy early in the Obama Administration, “you’re taught to order what’s called ‘morning labs.’ Every day you have a variety of blood tests and other tests done, not because it’s necessary but because it gives you something to talk about with the others when you go on rounds. It’s like your version of a news hook … I bet 60% of the labs are not necessary.”
The country’s largest lab tester is Quest Diagnostics, which reported revenues in 2012 of $7.4 billion. Quest’s operating income in 2012 was $1.2 billion, about 16.2% of sales.
But that’s hardly the spectacular profit margin we have seen in other sectors of the medical marketplace. The reason is that the outside companies like Quest, which mostly pick up specimens from doctors and clinics and deliver test results back to them, are not where the big profits are. The real money is in health care settings that cut out the middleman — the in-house venues, like the hospital testing lab run by Southwestern Medical that billed Scott and Rebecca $132,000. In-house labs account for about 60% of all testing revenue. Which means that for hospitals, they are vital profit centers. Labs are also increasingly being maintained by doctors who, as they form group practices with other doctors in their field, finance their own testing and diagnostic clinics. These labs account for a rapidly growing share of all testing revenue. They have no selling costs, and as pricing surveys repeatedly find, they can charge more because they have a captive consumer base in the hospitals or group practices. They also have an incentive to order more tests because they’re the ones profiting from the tests. The Wall Street Journal reported last April that a study in the medical journal Health Affairs had found that doctors’ urology groups with their own labs “bill the federal Medicare program for analyzing 72% more prostate tissue samples per biopsy while detecting fewer cases of cancer than counterparts who send specimens to outside labs.”
If anything, the shift toward in-house testing, and with it the incentive to do more of it, is accelerating doctors’ consolidation into practice groups. More important, hospitals are aligning with these practice groups, in many cases even getting them to sign noncompete clauses requiring that they steer all patients to the partner hospital. Some hospitals are buying physicians’ practices outright; 54% of physician practices were owned by hospitals in 2012, according to a McKinsey survey, up from 22% 10 years before. This is primarily a move to increase the hospitals’ leverage in negotiating with insurers. An expensive by-product is that it brings testing into the hospitals’ high-profit labs.
4. When Taxpayers Pick Up the Tab
Whether it was Emilia Gilbert trying to get out from under $9,418 in bills after her slip and fall or Alice D. vowing never to marry again because of the $142,000 debt from her husband’s losing battle with cancer, we’ve seen how the medical marketplace misfires when private parties get the bills.
When the taxpayers pick up the tab, most of the dynamics of the marketplace shift dramatically.
In July 2011, an 88-year-old man whom I’ll call Alan A. collapsed from a massive heart attack at his home outside Philadelphia. He survived, after two weeks in the intensive-care unit of the Virtua Marlton hospital. Virtua Marlton is part of a four-hospital chain that, in its 2010 federal filing, reported paying its CEO $3,073,000 and two other executives $1.4 million and $1.7 million from gross revenue of $633.7 million and an operating profit of $91 million. Alan A. then spent three weeks at a nearby convalescent-care center.
Medicare made quick work of the $268,227 in bills from the two hospitals, paying just $43,320. Except for $100 in incidental expenses, Alan A. paid nothing because 100% of inpatient hospital care is covered by Medicare.
The ManorCare convalescent center, which Alan A. says gave him “good care” in an “O.K. but not luxurious room,” got paid $11,982 by Medicare for his three-week stay. That is about $571 a day for all the physical therapy, tests and other services. As with all hospitals in nonemergency situations, ManorCare does not have to accept Medicare patients and their discounted rates. But it does accept them. In fact, it welcomes them and encourages doctors to refer them.
Health care providers may grouse about Medicare’s fee schedules, but Medicare’s payments must be producing profits for ManorCare. It is part of a for-profit chain owned by Carlyle Group, a blue-chip private-equity firm.
About a decade ago, Alan A. was diagnosed with non-Hodgkin’s lymphoma. He was 78, and his doctors in southern New Jersey told him there was little they could do. Through a family friend, he got an appointment with one of the lymphoma specialists at Sloan-Kettering. That doctor told Alan A. he was willing to try a new chemotherapy regimen on him. The doctor warned, however, that he hadn’t ever tried the treatment on a man of Alan A.’s age.
The treatment worked. A decade later, Alan A. is still in remission. He now travels to Sloan-Kettering every six weeks to be examined by the doctor who saved his life and to get a transfusion of Flebogamma, a drug that bucks up his immune system.
With some minor variations each time, Sloan-Kettering’s typical bill for each visit is the same as or similar to the $7,346 bill he received during the summer of 2011, which included $340 for a session with the doctor.
Assuming eight visits (but only four with the doctor), that makes the bill $57,408 a year to keep Alan A. alive. His actual out-of-pocket cost for each session is a fraction of that. For that $7,346 visit, it was about $50.
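For readers who want to check the arithmetic, the annual figure follows directly from the numbers above, treating the $7,346 bill (which already includes the $340 doctor’s session) as representative of every visit:

```python
# Reconstructing Alan A.'s annual Sloan-Kettering bill from the figures in the text.
# Assumption: each of the eight visits is billed at the representative $7,346,
# and the four visits without the doctor simply drop his $340 fee.
PER_VISIT = 7346          # typical bill, doctor's session included
DOCTOR_FEE = 340          # billed only when he actually sees the doctor
visits_per_year = 8
visits_with_doctor = 4

annual = (visits_per_year * PER_VISIT
          - (visits_per_year - visits_with_doctor) * DOCTOR_FEE)
print(annual)  # 57408 -- the $57,408 annual figure in the text
```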
In some ways, the set of transactions around Alan A.’s Sloan-Kettering care represents the best the American medical marketplace has to offer. First, obviously, there’s the fact that he is alive after other doctors gave him up for dead. And then there’s the fact that Alan A., a retired chemist of average means, was able to get care that might otherwise be reserved for the rich but was available to him because he had the right insurance.
Medicare is the core of that insurance, although Alan A., like 90% of those on Medicare, has a supplemental-insurance policy that kicks in and generally pays 90% of the 20% of costs for doctors and outpatient care that Medicare does not cover.
Here’s how it all computes for him using that summer 2011 bill as an example.
Not counting the doctor’s separate $340 bill, Sloan-Kettering’s bill for the transfusion is about $7,006.
In addition to a few hundred dollars in miscellaneous items, the two basic Sloan-Kettering charges are $414 per hour for five hours of nurse time for administering the Flebogamma and a $4,615 charge for the Flebogamma.
According to Alan A., the nurse generally handles three or four patients at a time. That would mean Sloan-Kettering is billing more than $1,200 an hour for that nurse. When I asked Paul Nelson, Sloan-Kettering’s director of financial planning, about the $414-per-hour charge, he explained that 15% of these charges is meant to cover overhead and indirect expenses, 20% is meant to be profit that will cover discounts for Medicare or Medicaid patients, and 65% covers direct expenses. That would still leave the nurse’s time being valued at about $800 an hour (65% of $1,200), again assuming that just three patients were billed for the same hour at $414 each. Pressed on that, Nelson conceded that the profit is higher and is meant to cover other hospital costs like research and capital equipment.
Whatever Sloan-Kettering’s calculations may be, Medicare — whose patients, including Alan A., are about a third of all Sloan-Kettering patients — buys into none of that math. Its cost-based pricing formulas yield a price of $302 for everything other than the drug, including those hourly charges for the nurse and the miscellaneous charges. Medicare pays 80% of that, or $241, leaving Alan A. and his private insurance company together to pay about $60 more to Sloan-Kettering. Alan A. pays $6, and his supplemental insurer, Aetna, pays $54.
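That split can be checked with Medicare’s 80/20 rule plus the supplemental policy’s 90% share of the remainder. A quick sketch (the percentages come from the text; the pennies are my rounding):

```python
# Splitting Medicare's $302 approved price for everything other than the drug.
approved = 302.00

medicare_pays = approved * 0.80       # 241.60 -> the ~$241 in the text
remainder = approved - medicare_pays  # 60.40 owed by Alan A. and his supplemental insurer
aetna_pays = remainder * 0.90         # 54.36 -> Aetna's ~$54
alan_pays = remainder - aetna_pays    # 6.04  -> Alan A.'s ~$6

print(f"Medicare ${medicare_pays:.2f}, Aetna ${aetna_pays:.2f}, Alan A. ${alan_pays:.2f}")
```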
Bottom line: Sloan-Kettering gets paid $302 by Medicare for about $2,400 worth of its chargemaster charges, and Alan A. ends up paying $6.
The Cancer Drug Profit Chain
It’s with the bill for the transfusion that the peculiar economics of American medicine take a different turn, even when Medicare is involved. We have seen that even with big discounts for insurance companies and bigger discounts for Medicare, the chargemaster prices on everything from room and board to Tylenol to CT scans are high enough to make hospital costs a leading cause of the $750 billion Americans overspend each year on health care. We’re now going to see how drug pricing is a major contributor to the way Americans overpay for medical care.
By law, Medicare has to pay hospitals 6% above what Congress calls the drug company’s “average sales price,” which is supposedly the average price at which the drugmaker sells the drug to hospitals and clinics. But Congress does not control what drugmakers charge. The drug companies are free to set their own prices. This seems fair in a free-market economy, but when the drug is a one-of-a-kind lifesaving serum, the result is anything but fair.
Applying that formula of average sales price plus the 6% premium, Medicare cuts Sloan-Kettering’s $4,615 charge for Alan A.’s Flebogamma to $2,123. That’s what the drugmaker tells Medicare the average sales price is plus 6%. Medicare again pays 80% of that, and Alan A. and his insurer cover the remaining 20%, with him paying 10% of it and the insurer 90%, which makes Alan A.’s cost about $42.50.
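The same arithmetic can be run in reverse from Medicare’s $2,123 price, on the assumption that the reported average sales price is simply that payment divided by 1.06:

```python
# Working backward from Medicare's $2,123 drug payment via the ASP + 6% formula.
medicare_price = 2123.00
implied_asp = medicare_price / 1.06       # ~ $2,003 reported "average sales price"

medicare_pays = medicare_price * 0.80     # ~ $1,698.40 (Medicare's 80%)
remainder = medicare_price - medicare_pays
alan_pays = remainder * 0.10              # ~ $42.46, the ~$42.50 in the text
insurer_pays = remainder * 0.90           # ~ $382.14

print(f"Implied ASP ${implied_asp:.2f}, Alan A. pays ${alan_pays:.2f}")
```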
In practice, the average sales price does not appear to be a real average. Two other hospitals I asked reported that after taking into account rebates given by the drug company, they paid an average of $1,650 for the same dose of Flebogamma, and neither hospital had nearly the leverage in the cancer-care marketplace that Sloan-Kettering does. One doctor at Sloan-Kettering guessed that it pays $1,400. “The drug companies give the rebates so that the hospitals will make more on the drug and therefore be encouraged to dispense it,” the doctor explained. (A spokesperson for Medicare would say only that the average sales price is based “on manufacturers’ data submitted to Medicare and is meant to include rebates.”)
Nelson, the Sloan-Kettering head of financial planning, said the price his hospital pays for Alan A.’s dose of Flebogamma is “somewhat higher” than $1,400, but he wasn’t specific, adding that “the difference between the cost and the charge represents the cost of running our pharmacy — which includes overhead cost — plus a markup.” Even assuming Sloan-Kettering’s real price for Flebogamma is “somewhat higher” than $1,400, the hospital would be making about 50% profit from Medicare’s $2,123 payment. So even Medicare contributes mightily to hospital profit — and drug-company profit — when it buys drugs.
Flebogamma’s Profit Margin
The Spanish business at the beginning of the Flebogamma supply chain does even better than Sloan-Kettering.
Made from human plasma, Flebogamma is a sterilized solution that is intended to boost the immune system. Sloan-Kettering buys it from either Baxter International in the U.S. or, as is more likely in Alan A.’s case, a Barcelona-based company called Grifols.
In its half-year 2012 shareholders report, Grifols featured a picture of the Flebogamma plasma serum and its packaging — “produced at the Clayton facility, North Carolina,” according to the caption. Worldwide sales of all Grifols products were reported as up 15.2%, to $1.62 billion, in the first half of 2012. In the U.S. and Canada, sales were up 20.5%. “Growth in the sales … of the main plasma derivatives” was highlighted in the report, as was the fact that “the cost per liter of plasma has fallen.” (Grifols operates 150 donation centers across the U.S. where it pays plasma donors $25 apiece.)
Grifols spokesman Christopher Healey would not discuss what it cost Grifols to produce and ship Alan A.’s dose, but he did say that the company’s average cost to produce its bioscience products, Flebogamma included, was approximately 55% of what it sells them for. However, a doctor familiar with the economics of cancer-care drugs said that plasma products typically have some of the industry’s higher profit margins. He estimated that the Flebogamma dose for Alan A. — which Sloan-Kettering bought from Grifols for $1,400 or $1,500 and sold to Medicare for $2,123 — “can’t cost them more than $200 or $300 to collect, process, test and ship.”
In Spain, as in the rest of the developed world, Grifols’ profit margins on sales are much lower than they are in the U.S., where it can charge much higher prices. Aware of the leverage that drug companies — especially those with unique lifesaving products — have on the market, most developed countries regulate what drugmakers can charge, limiting them to certain profit margins. In fact, the drugmakers’ securities filings repeatedly warn investors of tighter price controls that could threaten their high margins — though not in the U.S.
The difference between the regulatory environment in the U.S. and the environment abroad is so dramatic that McKinsey & Co. researchers reported that overall prescription-drug prices in the U.S. are “50% higher for comparable products” than in other developed countries. Yet those regulated profit margins outside the U.S. remain high enough that Grifols, Baxter and other drug companies still aggressively sell their products there. For example, 37% of Grifols’ sales come from outside North America.
More than $280 billion will be spent this year on prescription drugs in the U.S. If we paid what other countries did for the same products, we would save about $94 billion a year. The pharmaceutical industry’s common explanation for the price difference is that U.S. profits subsidize the research and development of trailblazing drugs that are developed in the U.S. and then marketed around the world. Apart from the question of whether a country with a health-care-spending crisis should subsidize the rest of the developed world — not to mention the question of who signed Americans up for that mission — there’s the fact that the companies’ math doesn’t add up.
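The $94 billion savings estimate is roughly consistent with McKinsey’s 50%-higher figure: if U.S. prices are 1.5 times those abroad, paying foreign prices would cut the U.S. bill by about a third. A back-of-the-envelope check, assuming the full $280 billion is subject to that differential:

```python
# Rough check of the savings estimate: U.S. spending at foreign price levels.
us_spend = 280e9        # projected 2013 U.S. prescription-drug spending
markup = 1.50           # U.S. prices ~50% above comparable foreign prices

at_foreign_prices = us_spend / markup
savings = us_spend - at_foreign_prices
print(f"${savings / 1e9:.0f} billion")  # $93 billion -- in line with the $94 billion cited
```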
According to securities filings of major drug companies, their R&D expenses are generally 15% to 20% of gross revenue. In fact, Grifols spent only 5% on R&D for the first nine months of 2012. Neither 5% nor 20% is enough to have cut deeply into the pharmaceutical companies’ stellar bottom-line net profits. This is not gross profit, which counts only the cost of producing the drug, but the profit after those R&D expenses are taken into account. Grifols made a 32.3% net operating profit after all its R&D expenses — as well as sales, management and other expenses — were tallied. In other words, even counting all the R&D across the entire company, including research for drugs that did not pan out, Grifols made healthy profits. All the numbers tell one consistent story: Regulating drug prices the way other countries do would save tens of billions of dollars while still offering profit margins that would keep encouraging the pharmaceutical companies’ quest for the next great drug.
Handcuffs On Medicare
Our laws do more than prevent the government from restraining prices for drugs the way other countries do. Federal law also restricts the biggest single buyer — Medicare — from even trying to negotiate drug prices. As a perpetual gift to the pharmaceutical companies (and an acceptance of their argument that completely unrestrained prices and profit are necessary to fund the risk taking of research and development), Congress has continually prohibited the Centers for Medicare and Medicaid Services (CMS) of the Department of Health and Human Services from negotiating prices with drugmakers. Instead, Medicare simply has to determine that average sales price and add 6% to it.
Similarly, when Congress passed Part D of Medicare in 2003, giving seniors coverage for prescription drugs, Congress prohibited Medicare from negotiating.
Nor can Medicare get involved in deciding that a drug may be a waste of money. In medical circles, this is known as the comparative-effectiveness debate, which nearly derailed the entire Obamacare effort in 2009.
Doctors and other health care reformers behind the comparative-effectiveness movement make a simple argument: Suppose that after exhaustive research, cancer drug A, which costs $300 a dose, is found to be just as effective as or more effective than drug B, which costs $3,000. Shouldn’t the person or entity paying the bill, e.g. Medicare, be able to decide that it will pay for drug A but not drug B? Not according to a law passed by Congress in 2003 that requires Medicare to reimburse patients (again, at average sales price plus 6%) for any cancer drug approved for use by the Food and Drug Administration. Most states require insurance companies to do the same thing.
Peter Bach, an epidemiologist at Sloan-Kettering who has also advised several health-policy organizations, reported in a 2009 New England Journal of Medicine article that Medicare’s spending on the category dominated by cancer drugs ballooned from $3 billion in 1997 to $11 billion in 2004. Bach says costs have continued to increase rapidly and must now be more than $20 billion.
With that escalating bill in mind, Bach was among the policy experts pushing for provisions in Obamacare to establish a Patient-Centered Outcomes Research Institute to expand comparative-effectiveness research efforts. Through painstaking research, doctors would try to determine the comparative effectiveness not only of drugs but also of procedures like CT scans.
However, after all the provisions spelling out elaborate research and review processes were embedded in the draft law, Congress jumped in and added eight provisions that restrict how the research can be used. The prime restriction: Findings shall “not be construed as mandates for practice guidelines, coverage recommendations, payment, or policy recommendations.”
With those 14 words, the work of Bach and his colleagues was undone. And costs remain unchecked.
“Medicare could see the research and say, Ah, this drug works better and costs the same or is even cheaper,” says Gunn, Sloan-Kettering’s chief operating officer. “But they are not allowed to do anything about it.”
Along with another doomed provision that would have allowed Medicare to pay a fee for doctors’ time spent counseling terminal patients on end-of-life care (but not on euthanasia), the Obama Administration’s push for comparative effectiveness is what brought opponents’ cries that the bill was creating “death panels.” Washington bureaucrats would now be dictating which drugs were worth giving to which patients and even which patients deserved to live or die, the critics charged.
The loudest voice sounding the death-panel alarm belonged to Betsy McCaughey, former New York State lieutenant governor and a conservative health-policy advocate. McCaughey, who now runs a foundation called the Committee to Reduce Infection Deaths, is still fiercely opposed to Medicare’s making comparative-effectiveness decisions. “There is comparative-effectiveness research being done in the medical journals all the time, which is fine,” she says. “But it should be used by doctors to make decisions — not by the Obama bureaucrats at Medicare to make decisions for doctors.”
Bach, the Sloan-Kettering doctor and policy wonk, has become so frustrated with the rising cost of the drugs he uses that he and some colleagues recently took matters into their own hands. They reported in an October op-ed in the New York Times that they had decided on their own that they were no longer going to dispense a colorectal-cancer drug called Zaltrap, which cost an average of $11,063 per month for treatment. All the research shows, they wrote, that a drug called Avastin, which cost $5,000 a month, is just as effective. They were taking this stand, they added, because “the typical new cancer drug coming on the market a decade ago cost about $4,500 per month (in 2012 dollars); since 2010, the median price has been around $10,000. Two of the new cancer drugs cost more than $35,000 each per month of treatment. The burden of this cost is borne, increasingly, by patients themselves — and the effects can be devastating.”
The CEO of Sanofi, the company that makes Zaltrap, initially dismissed the article by Bach and his Sloan-Kettering colleagues, saying they had taken the price of the drug out of context because of variations in the required dosage. But four weeks later, Sanofi cut its price in half.
Bureaucrats You Can Admire
By the numbers, Medicare looks like a government program run amok. After President Lyndon B. Johnson signed Medicare into law in 1965, the House Ways and Means Committee predicted that the program would cost $12 billion in 1990. Its actual cost by then was $110 billion. It is likely to be nearly $600 billion this year. That’s due to the U.S.’s aging population and the popular program’s expansion to cover more services, as well as the skyrocketing costs of medical services generally. It’s also because Medicare’s hands are tied when it comes to negotiating the prices for drugs or durable medical equipment. But Medicare’s growth is not the doing of those “bureaucrats” Betsy McCaughey complains about; in how they operate the program, they have hardly gone off the rails.
In fact, seeing the way Alan A.’s bills from Sloan-Kettering were vetted and processed is one of the more eye-opening and least discouraging aspects of a look inside the world of medical economics.
The process is fast, accurate, customer-friendly and impressively high-tech. And it’s all done quietly by a team of nonpolitical civil servants in close partnership with the private sector. In fact, despite calls to privatize Medicare by creating a voucher system under which the Medicare population would get money from the government to buy insurance from private companies, the current Medicare system is staffed with more people employed by private contractors (8,500) than government workers (700).
$1.5 Billion A Day
Sloan-Kettering sends Alan A.’s bills to Medicare electronically, all elaborately coded according to Medicare’s rules.
There are two basic kinds of codes for the services billed. The first is a number identifying which of the 7,000 procedures were performed by a doctor, such as examining a chest X-ray, performing a heart transplant or conducting an office consultation for a new patient (which costs more than a consultation with a continuing patient — coded differently — because it typically takes more time). If a patient presents more complicated challenges, then these basic procedures will be coded differently; for example, there are two varieties of emergency-room consultations. Adjustments are also made for variations in the cost of living where the doctor works and for other factors, like whether doctors used their own office (they’ll get paid more for that) or the hospital. A panel of doctors set up by the American Medical Association reviews the codes annually and recommends updates to Medicare. The process can get messy as the doctors fight over which procedures in which specialties take more time and expertise or are worth relatively more. Medicare typically accepts most of the panel’s recommendations.
The second kind of code is used to pay the hospital for its services. Again, there are thousands of codes based on whether the person checked in for brain surgery, an appendectomy or a fainting spell. To come up with these numbers, Medicare takes the cost reports — including allocations for everything from overhead to nursing staff to operating-room equipment — that hospitals across the country are required to file for each type of service and pays an amount equal to the composite average costs.
The hospital has little incentive to overstate its costs because it’s against the law and because each hospital gets paid not on the basis of its own claimed costs but on the basis of the average of every hospital’s costs, with adjustments made for regional cost differences and other local factors. Except for emergency services, no hospital has to accept Medicare patients and these prices, but they all do.
Similar codes are calculated for laboratory and diagnostic tests like CT scans, ambulance services and, as we saw with Alan A.’s bill, drugs dispensed.
“When I tell my friends what I do here, it sounds boring, but it’s exciting,” says Diane Kovach, who works at Medicare’s Maryland campus and whose title is deputy director of the provider billing group. “We are implementing a program that helps millions and millions of people, and we’re doing it in a way that makes every one of us proud,” she adds.
Kovach, who has been at Medicare for 21 years, operates some of the gears of a machine that reviews the more than 3 million bills that come into Medicare every day, figures out the right payments for each and churns out more than $1.5 billion a day in wire transfers.
The part of that process that Kovach and three colleagues, with whom I spent a morning recently, are responsible for involves overseeing the writing and vetting of thousands of instructions for coders, themselves private contractors employed by HP, General Dynamics and other major technology companies. The software code they write is supposed to ensure that Medicare pays what it is supposed to pay and catches anything in a bill that should not be paid.
For example, hundreds of instructions for code changes were needed to address Obamacare’s requirement that certain preventive-care visits, such as those for colonoscopies or contraceptive services, no longer be subject to Medicare’s usual outpatient co-pay of 20%. Adding to the complexity, the benefit is limited to one visit per year for some services, meaning instructions had to be written to track patient timelines for the codes assigned to those services.
When performing correctly, the codes produce “edits” whenever a bill is submitted with something awry on it — if a doctor submits two preventive-care colonoscopies for the same patient in the same year, for example. Depending on the code, an edit will result in the bill’s being sent back with questions or being rejected with an explanation. It all typically happens without a human being reading it. “Our goal at the first stage is that no one has to touch the bill,” says Leslie Trazzi, who focuses on instructions and edits for doctors’ claims.
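The once-a-year screen described above amounts to a small bookkeeping routine: track the last paid date for each patient and service, and bounce any repeat that falls inside the window. Here is a minimal sketch in Python; the service-code name, claim layout and 365-day window are illustrative assumptions, not Medicare's actual rules or software.

```python
from datetime import date, timedelta

# Hypothetical sketch of a once-per-year frequency edit, not Medicare's
# actual code. The service code and claim layout are invented.
FREQUENCY_LIMITED = {"SCREEN-COLO"}  # a once-per-year preventive service

def apply_frequency_edit(claim, history):
    """Accept or reject a claim; `history` maps (patient, code) -> last paid date."""
    key = (claim["patient_id"], claim["code"])
    if claim["code"] in FREQUENCY_LIMITED:
        last = history.get(key)
        if last is not None and claim["date"] - last < timedelta(days=365):
            # Kick the bill back instead of paying it
            return "rejected", "exceeds once-per-year limit"
    history[key] = claim["date"]
    return "accepted", ""

history = {}
first = {"patient_id": "A1", "code": "SCREEN-COLO", "date": date(2013, 1, 10)}
second = {"patient_id": "A1", "code": "SCREEN-COLO", "date": date(2013, 6, 1)}
status1, _ = apply_frequency_edit(first, history)
status2, reason = apply_frequency_edit(second, history)
```

The point of the sketch is the "no one has to touch the bill" property: both the acceptance and the rejection happen mechanically, with the explanation generated by the edit itself.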
Alan A.’s bills from Sloan-Kettering are wired to a data center in Shelbyville, Ky., run by a private company (owned by WellPoint, the insurance company that operates under the Blue Cross and Blue Shield names in more than a dozen states) that has the contract to process claims originating from New York and Connecticut. Medicare is paying the company about $323 million over five years — which, as with the fees of other contractors serving other regions, works out to an average of 84¢ per claim.
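As a sanity check, the contract figures quoted above imply the claim volume the New York–Connecticut contractor handles; this back-of-the-envelope calculation uses only the numbers in the text.

```python
# Figures from the text: a five-year contract and an average cost per claim
contract_total = 323e6   # dollars over five years
per_claim = 0.84         # dollars per claim

implied_claims = contract_total / per_claim  # claims over five years
per_year = implied_claims / 5                # annual volume for NY + CT
```

The division implies roughly 385 million claims over the contract, or on the order of 77 million claims a year from just two states.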
In Shelbyville, Alan A.’s status as a beneficiary is verified, and then the bill is sent electronically to a data center in Columbia, S.C., operated by another contractor, also a subsidiary of an insurance company. There, the codes are checked for edits, after which Alan A.’s Sloan-Kettering bill goes electronically to a data center in Denver, where the payment instructions are prepared and entered into what Karen Jackson, who supervises Medicare’s outside contractors, says is the largest accounting ledger in the world. The whole process takes three days — and that long only because the data is sent in batches.
There are multiple backups to make sure this ruthlessly efficient system isn’t just ruthless. Medicare keeps track of and publicly reports the percentage of bills processed “clean” — i.e., with no rejected items — within 30 days. Even the speed with which the contractors answer the widely publicized consumer phone lines is monitored and reported. The average time to answer a call from a doctor or other provider is 57.6 seconds, according to Medicare’s records, and the average time to answer one of the millions of calls from patients is 2 minutes 41 seconds, down from more than eight minutes in 2007. These times might come as a surprise to people who have tried to call a private insurer. That monitoring process is, in turn, backstopped by a separate ombudsman’s office, which has regional and national layers.
Beyond that, the members of the House of Representatives and the Senate loom as an additional 535 ombudsmen. “We get calls every day from congressional offices about complaints that a beneficiary’s claim has been denied,” says Jonathan Blum, the deputy administrator of CMS. As a result, Blum’s agency has an unusually large congressional liaison staff of 52, most of whom act as caseworkers trying to resolve these complaints.
All the customer-friendliness adds up to only about 10% of initial Medicare claims’ being denied, according to Medicare’s latest published Composite Benchmark Metric Report. Of those initial Medicare denials, only about 20% (2% of total claims) result in complaints or appeals, and the decisions in only about half of those (or 1% of the total) end up being reversed, with the claim being paid.
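The funnel of denials, appeals and reversals described above is straightforward compound-percentage arithmetic; this sketch simply restates the article's figures.

```python
initial_denial_rate = 0.10  # ~10% of initial claims denied
appeal_rate = 0.20          # ~20% of those denials are appealed
reversal_rate = 0.50        # ~half of appeals end in reversal

appealed_share = initial_denial_rate * appeal_rate  # share of all claims
reversed_share = appealed_share * reversal_rate     # share ultimately paid

print(f"{appealed_share:.0%} of all claims appealed")
print(f"{reversed_share:.0%} of all claims reversed and paid")
```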
The astonishing efficiency, of course, raises the question of whether Medicare is simply funneling money out the door as fast as it can. Some fraud is inevitable — even a rate of 0.1% is enough to make headlines when $600 billion is being spent. It’s also possible that people can game the system without committing outright fraud. But Medicare has multiple layers of protection against fraud that the insurance companies don’t and perhaps can’t match because they lack Medicare’s scale.
According to Medicare’s Jackson, the contractors are “vigorously monitored for all kinds of metrics” and required every quarter “to do a lot of data analysis and submit review plans and error-rate-reduction plans.”
And then there are the RACs — a wholly separate group of private “recovery audit contractors.” Established by Congress during the George W. Bush Administration, the RACs, says one hospital administrator, “drive the doctors and the hospitals and even the Medicare claims processors crazy.” The RACs’ only job is to review provider bills after they have been paid by Medicare claims processors and look for system errors, like faulty processing, or errors in the bills as reflected in doctor or hospital medical records that the RACs have the authority to audit.
The RACs have an incentive that any champion of the private sector would love. They get no up-front fees but instead are paid a percentage of the money they retrieve. They eat what they kill. According to Medicare spokeswoman Emma Sandoe, the RAC bounty hunters retrieved $797 million in the 2011 fiscal year, for which they were paid 9% to 12.5% of what they brought in, depending on the region where they were operating.
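The contingency arrangement translates into a concrete fee range; using the recovery figure and percentages above:

```python
recovered = 797_000_000          # FY2011 RAC recoveries, per Medicare
fee_low, fee_high = 0.09, 0.125  # contingency rate, depending on region

fees_low = recovered * fee_low    # fees at the low end of the range
fees_high = recovered * fee_high  # fees at the high end
```

So the "eat what they kill" model paid the auditors somewhere between roughly $72 million and $100 million that year.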
This process can “get quite anal,” says the doctor who recently treated me for an ear infection. Although my doctor is on Park Avenue, she, like 96% of all specialists, accepts Medicare patients despite the discounted rates it pays, because, she says, “they pay quickly.” However, she recalls getting bills from Medicare for 21¢ or 85¢ for supposed overpayments.
The DHHS’s inspector general is also on the prowl to protect the Medicare checkbook. It reported recovering $1.2 billion last year through Medicare and Medicaid audits and investigations (though the recovered funds had probably been doled out over several fiscal years). The inspector general’s work is supplemented by a separate, multiagency federal health-care-fraud task force, which brings criminal charges against fraudsters and issues regular press releases claiming billions more in recoveries.
This does not mean the system is airtight. If anything, all that recovery activity suggests fallibility, even as it suggests more buttoned-up operations than those run by private insurers, whose payment systems are notoriously erratic.
Too Much Health Care?
A review of other Medicare patients' bills revealed a pattern of deep, deep discounting of chargemaster charges that mirrored the way Alan A.'s bills were shrunk down to reality. A $121,414 Stanford Hospital bill for a 90-year-old California woman who fell and broke her wrist became $16,949. A $51,445 bill for the three days an ailing 91-year-old spent getting tests and being sedated in the hospital before dying of old age became $19,242. Before Medicare went to work, the bill was chock-full of creative chargemaster charges from the California Pacific Medical Center — part of Sutter Health, a dominant nonprofit Northern California chain whose CEO made $5,241,305 in 2011.
Another pattern emerged from a look at these bills: some seniors apparently visit doctors almost weekly or even daily, for all varieties of ailments. Sure, as patients age they are increasingly in need of medical care. But at least some of the time, the fact that they pay almost nothing to spend their days in doctors’ offices must also be a factor, especially if they have the supplemental insurance that covers most of the 20% not covered by Medicare.
Alan A. is now 89, and the mound of bills and Medicare statements he showed me for 2011 — when he had his heart attack and continued his treatments at Sloan-Kettering — seemed to add up to about $350,000, although I could not tell for sure because a few of the smaller ones may have been duplicates. What is certain — because his insurance company tallied it for him in a year-end statement — is that his total out-of-pocket expense was $1,139, or about 0.3% of his overall medical bills. Those bills included what seemed to be 33 visits in one year to 11 doctors who had nothing to do with his recovery from the heart attack or his cancer. In all cases, he was routinely asked to pay almost nothing: $2.20 for a check of a sinus problem, $1.70 for an eye exam, 33¢ to deal with a bunion. When he showed me those bills, he chuckled.
A comfortable member of the middle class, Alan A. could easily afford the burden of higher co-pays that would encourage him to use doctors less casually or would at least stick taxpayers with less of the bill if he wants to get that bunion treated. AARP (formerly the American Association of Retired Persons) and other liberal entitlement lobbies oppose these types of changes and consistently distort the arithmetic around them. But it seems clear that Medicare could save billions of dollars if it required that no Medicare supplemental-insurance plan for people with certain income or asset levels could result in their paying less than, say, 10% of a doctor’s bill until they had paid $2,000 or $3,000 out of their pockets in total bills in a year. (The AARP might oppose this idea for another reason: it gets royalties from UnitedHealthcare for endorsing United’s supplemental-insurance product.)
Medicare spent more than $6.5 billion last year to pay doctors (even at the discounted Medicare rates) for the service codes that denote the most basic categories of office visits. By asking people like Alan A. to pay more than a negligible share, Medicare could recoup $1 billion to $2 billion of those costs yearly.
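The implied shift is modest: recouping $1 billion to $2 billion of a $6.5 billion pool means moving roughly 15% to 30% of basic office-visit costs onto patients like Alan A. A quick check of the arithmetic:

```python
office_visit_spend = 6.5e9        # annual Medicare spending on basic visit codes
recoup_low, recoup_high = 1e9, 2e9  # savings range suggested in the text

share_low = recoup_low / office_visit_spend    # fraction shifted, low end
share_high = recoup_high / office_visit_spend  # fraction shifted, high end
```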
Too Much Doctoring?
Another doctor’s bill, for which Alan A.’s share was 19¢, suggests a second apparent flaw in the system. This was one of 50 bills from 26 doctors who saw Alan A. at Virtua Marlton hospital or at the ManorCare convalescent center after his heart attack or read one of his diagnostic tests at the two facilities. “They paraded in once a day or once every other day, looked at me and poked around a bit and left,” Alan A. recalls. Other than the doctor in charge of his heart-attack recovery, “I had no idea who they were until I got these bills. But for a dollar or two, so what?”
The “so what,” of course, is that although Medicare deeply discounted the bills, it — meaning taxpayers — still paid from $7.48 (for a chest X-ray reading) to $164 for each encounter.
“One of the benefits attending physicians get from many hospitals is the opportunity to cruise the halls and go into a Medicare patient’s room and rack up a few dollars,” says a doctor who has worked at several hospitals across the country. “In some places it’s a Monday-morning tradition. You go see the people who came in over the weekend. There’s always an ostensible reason, but there’s also a lot of abuse.”
When health care wonks focus on this kind of overdoctoring, they complain (and write endless essays) about what they call the fee-for-service mode, meaning that doctors mostly get paid for the time they spend treating patients or ordering and reading tests. Alan A. didn’t care how much time his cancer or heart doctor spent with him or how many tests he got. He cared only that he got better.
Some private care organizations have made progress in avoiding this overdoctoring by paying salaries to their physicians and giving them incentives based on patient outcomes. Medicare and private insurers have yet to find a way to do that with doctors, nor are they likely to, given the current structure that involves hundreds of thousands of private providers billing them for their services.
In passing Obamacare, Congress enabled Medicare to drive efficiencies in hospital care based on the notion that good care should be rewarded and the opposite penalized. The primary lever is a system of penalties Obamacare imposes on hospitals for bad care — a term defined as unacceptable rates of adverse events, such as infections or injuries during a patient’s hospital stay or readmissions within a month after discharge. Both kinds of adverse events are more common than you might think: 1 in 5 Medicare patients is readmitted within 30 days, for example. One Medicare report asserts that “Medicare spent an estimated $4.4 billion in 2009 to care for patients who had been harmed in the hospital, and readmissions cost Medicare another $26 billion.” The anticipated savings that will be produced by the threat of these new penalties are what has allowed the Obama Administration to claim that Obamacare can cut hundreds of billions of dollars from Medicare over the next 10 years without shortchanging beneficiaries. “These payment penalties are sending a shock through the system that will drive costs down,” says Blum, the deputy administrator of the Centers for Medicare and Medicaid Services.
There are lots of other shocks Blum and his colleagues would like to send. However, Congress won’t allow him to. Chief among them, as we have seen, would be allowing Medicare, the world’s largest buyer of prescription drugs, to negotiate the prices that it pays for them and to make purchasing decisions on the basis of comparative effectiveness. But there’s also the cane that Alan A. got after his heart attack. Medicare paid $21.97 for it. Alan A. could have bought it on Amazon for about $12. Other than in a few pilot regions that Congress designated in 2011 after a push by the Obama Administration, Congress has not allowed Medicare to drive down the price of any so-called durable medical equipment through competitive bidding.
This is more than a matter of the 124,000 canes Medicare reports that it buys every year. It’s about mail-order diabetic supplies, wheelchairs, home medical beds and personal oxygen supplies too. Medicare spends about $15 billion annually for these goods.
In the areas of the country where Medicare has been allowed by Congress to conduct a competitive-bidding pilot program, the process has produced savings of 40%. But so far, the pilot programs cover only about 3% of the medical goods seniors typically use. Taking the program nationwide and saving 40% of the entire $15 billion would mean saving $6 billion a year for taxpayers.
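The savings claim works out as follows; the "current savings" line is my extrapolation from the 3% coverage figure, not a number stated in the text.

```python
annual_spend = 15e9        # durable-medical-equipment spending, per the text
pilot_savings_rate = 0.40  # discount achieved in competitive-bidding pilots
pilot_coverage = 0.03      # share of goods the pilots cover so far

nationwide_savings = annual_spend * pilot_savings_rate  # ~$6B/year if national
current_savings = nationwide_savings * pilot_coverage   # rough current effect
```

At 3% coverage, the pilots are capturing something on the order of $180 million of a potential $6 billion a year.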
The Way Out of the Sinkhole
“I was driving through central Florida a year or two ago,” says Medicare’s Blum. “And it seemed like every billboard I saw advertised some hospital with these big shiny buildings or showed some new wing of a hospital being constructed … So when you tell me that the hospitals say they are losing money on Medicare and shifting costs from Medicare patients to other patients, my reaction is that Central Florida is overflowing with Medicare patients and all those hospitals are expanding and advertising for Medicare patients. So you can’t tell me they’re losing money … Hospitals don’t lose money when they serve Medicare patients.”
If that’s the case, I asked, why not just extend the program to everyone and pay for it all by charging people under 65 the kinds of premiums they would pay to private insurance companies? “That’s not for me to say,” Blum replied.
In the debate over controlling Medicare costs, politicians from both parties continue to suggest that Congress raise the age of eligibility for Medicare from 65 to 67. Doing so, they argue, would save the government tens of billions of dollars a year. So it’s worth noting another detail about the case of Janice S., which we examined earlier. Had she felt those chest pains and gone to the Stamford Hospital emergency room a month later, she would have been on Medicare, because she would have just celebrated her 65th birthday.
If covered by Medicare, Janice S.’s $21,000 bill would have been deeply discounted and, as is standard, Medicare would have picked up 80% of the reduced cost. The bottom line is that Janice S. would probably have ended up paying $500 to $600 for her 20% share of her heart-attack scare. And she would have paid only a fraction of that — maybe $100 — if, like most Medicare beneficiaries, she had paid for supplemental insurance to cover most of that 20%.
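Working backward from those figures shows how steep the Medicare discount would have been; the implied allowed amount is my inference from the 20% share, not a number stated in the text.

```python
chargemaster_bill = 21_000                 # Janice S.'s original bill
patient_share_low, patient_share_high = 500, 600  # her likely 20% share
patient_rate = 0.20                        # standard outpatient cost share

# Implied Medicare-allowed amount (derived, not from the text)
implied_allowed_low = patient_share_low / patient_rate
implied_allowed_high = patient_share_high / patient_rate
discount = 1 - implied_allowed_high / chargemaster_bill  # vs. chargemaster
```

In other words, a $21,000 chargemaster bill would have shrunk to roughly $2,500 to $3,000, a discount of about 86%.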
In fact, those numbers would seem to argue for lowering the Medicare age, not raising it — and not just from Janice S.’s standpoint but also from the taxpayers’ side of the equation. That’s not a liberal argument for protecting entitlements while the deficit balloons. It’s just a matter of hardheaded arithmetic.
As currently constituted, Obamacare is going to require people like Janice S. to get private insurance coverage and will subsidize those who can’t afford it. But the cost of that private insurance — and therefore those subsidies — will be much higher than if the same people were enrolled in Medicare at an earlier age. That’s because Medicare buys health care services at much lower rates than any insurance company. Thus the best way both to lower the deficit and to help save money for people like Janice S. would seem to be to bring her and other near seniors into the Medicare system before they reach 65. They could be required to pay premiums based on their incomes, with the poor paying low premiums and the better off paying what they might have paid a private insurer. Those who can afford it might also be required to pay a higher proportion of their bills — say, 25% or 30% — rather than the 20% they’re now required to pay for outpatient bills.
Meanwhile, adding younger people like Janice S. would lower the overall cost per beneficiary to Medicare and help cut its deficit still more, because younger members are likelier to be healthier.
From Janice S.’s standpoint, whatever premium she would pay for this age-64 Medicare protection would still be less than what she had been paying under the COBRA plan that she wished she could have kept after the rules dictated that she be cut off after she lost her job.
The only way this would not work is if 64-year-olds started using health care services they didn’t need. They might be tempted to, because, as we saw with Alan A., Medicare’s protection is so broad and supplemental private insurance costs so little that it all but eliminates patients’ obligation to pay the 20% of outpatient-care costs that Medicare doesn’t cover. To deal with that, a provision could be added requiring that 64-year-olds taking advantage of Medicare could not buy insurance freeing them from more than, say, 5% or 10% of their responsibility for the bills, with the percentage set according to their wealth. It would be a similar, though more stringent, provision of the kind I’ve already suggested for current Medicare beneficiaries as a way to cut the cost of people overusing benefits.
If that logic applies to 64-year-olds, then it would seem to apply even more readily to healthier 40-year-olds or 18-year-olds. This is the single-payer approach favored by liberals and used by most developed countries.
Then again, however much hospitals might survive or struggle under that scenario, no doctor could hope for anything approaching the income he or she deserves (and that will attract future doctors to the profession) if 100% of his or her patients yielded anything close to the low rates Medicare pays.
“If you could figure out a way to pay doctors better and separately fund research … adequately, I could see where a single-payer approach would be the most logical solution,” says Gunn, Sloan-Kettering’s chief operating officer. “It would certainly be a lot more efficient than hospitals like ours having hundreds of people sitting around filling out dozens of different kinds of bills for dozens of insurance companies.” Maybe, but the prospect of overhauling our system this way, displacing all the private insurers and other infrastructure after all these decades, isn’t likely. For there would be one group of losers — and these losers have lots of clout. They’re the health care providers like hospitals and CT-scan-equipment makers whose profits — embedded in the bills we have examined — would be sacrificed. They would suffer because of the lower prices Medicare would pay them when the patient is 64, compared with what they are able to charge when that patient is either covered by private insurance or has no insurance at all.
That kind of systemic overhaul not only seems unrealistic but is also packed with all kinds of risk related to the microproblems of execution and the macro issue of giving government all that power.
Yet while Medicare may not be a realistic systemwide model for reform, the way Medicare works does demonstrate, by comparison, how the overall health care market doesn’t work.
Unless you are protected by Medicare, the health care market is not a market at all. It’s a crapshoot. People fare differently according to circumstances they can neither control nor predict. They may have no insurance. They may have insurance, but their employer chooses their insurance plan and it may have a payout limit or not cover a drug or treatment they need. They may or may not be old enough to be on Medicare or, given the different standards of the 50 states, be poor enough to be on Medicaid. If they’re not protected by Medicare or they’re protected only partly by private insurance with high co-pays, they have little visibility into pricing, let alone control of it. They have little choice of hospitals or the services they are billed for, even if they somehow know the prices before they get billed for the services. They have no idea what their bills mean, and those who maintain the chargemasters couldn’t explain them if they wanted to. How much of the bills they end up paying may depend on the generosity of the hospital or on whether they happen to get the help of a billing advocate. They have no choice of the drugs that they have to buy or the lab tests or CT scans that they have to get, and they would not know what to do if they did have a choice. They are powerless buyers in a seller’s market where the only sure thing is the profit of the sellers.
Indeed, the only player in the system that seems to have to balance countervailing interests the way market players in a real market usually do is Medicare. It has to answer to Congress and the taxpayers for wasting money, and it has to answer to portions of the same groups for trying to hold on to money it shouldn’t. Hospitals, drug companies and other suppliers, even the insurance companies, don’t have those worries.
Moreover, the only players in the private sector who seem to operate efficiently are the private contractors working — dare I say it? — under the government’s supervision. They’re the Medicare claims processors that handle claims like Alan A.’s for 84¢ each. With these and all other Medicare costs added together, Medicare’s total management, administrative and processing expenses are about $3.8 billion for processing more than a billion claims a year worth $550 billion. That’s an overall administrative and management cost of about two-thirds of 1% of the amount of the claims, or less than $3.80 per claim. According to its latest SEC filing, Aetna spent $6.9 billion on operating expenses (including claims processing, accounting, sales and executive management) in 2012. That’s about $30 for each of the 229 million claims Aetna processed, and it amounts to about 29% of the $23.7 billion Aetna pays out in claims.
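The per-claim and overhead comparisons reduce to simple division; all the inputs are the figures quoted above.

```python
# Medicare (figures from the text)
medicare_admin = 3.8e9    # total management/administrative/processing cost
medicare_claims = 1.0e9   # "more than a billion" claims a year
medicare_paid = 550e9     # value of claims paid

# Aetna, 2012 (figures from the text)
aetna_opex = 6.9e9        # operating expenses per SEC filing
aetna_claims = 229e6      # claims processed
aetna_paid = 23.7e9       # claims paid out

medicare_per_claim = medicare_admin / medicare_claims  # <= $3.80 a claim
medicare_overhead = medicare_admin / medicare_paid     # ~0.7% of claim value
aetna_per_claim = aetna_opex / aetna_claims            # ~$30 a claim
aetna_overhead = aetna_opex / aetna_paid               # ~29% of claims paid
```

Since Medicare actually processes more than a billion claims, its true per-claim cost comes in under the $3.80 the division yields.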
The real issue isn’t whether we have a single payer or multiple payers. It’s whether whoever pays has a fair chance in a fair market. Congress has given Medicare that power when it comes to dealing with hospitals and doctors, and we have seen how that works to drive down the prices Medicare pays, just as we’ve seen what happens when Congress handcuffs Medicare when it comes to evaluating and buying drugs, medical devices and equipment. Stripping away what is now the sellers’ overwhelming leverage in dealing with Medicare in those areas and with private payers in all aspects of the market would inject fairness into the market. We don’t have to scrap our system and aren’t likely to. But we can reduce the $750 billion that we overspend on health care in the U.S. in part by acknowledging what other countries have: because the health care market deals in a life-or-death product, it cannot be left to its own devices.
Put simply, the bills tell us that this is not about interfering in a free market. It’s about facing the reality that our largest consumer product by far — one-fifth of our economy — does not operate in a free market.
So how can we fix it?
Changing Our Choices
We should tighten antitrust laws related to hospitals to keep them from becoming so dominant in a region that insurance companies are helpless in negotiating prices with them. The hospitals’ continuing consolidation of both lab work and doctors’ practices is one reason that trying to cut the deficit by simply lowering the fees Medicare and Medicaid pay to hospitals will not work. It will only cause the hospitals to shift the costs to non-Medicare patients in order to maintain profits — which they will be able to do because of their increasing leverage in their markets over insurers. Insurance premiums will therefore go up — which in turn will drive the deficit back up, because the subsidies on insurance premiums that Obamacare will soon offer to those who cannot afford them will have to go up.
Similarly, we should tax hospital profits at 75% and have a tax surcharge on all nondoctor hospital salaries that exceed, say, $750,000. Why are high profits at hospitals regarded as a given that we have to work around? Why shouldn’t those who are profiting the most from a market whose costs are victimizing everyone else chip in to help? If we recouped 75% of all hospital profits (from nonprofit as well as for-profit institutions), that would save over $80 billion a year before counting what we would save on tests that hospitals might not perform if their profit incentives were shaved.
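The $80 billion figure implies an estimate of total hospital profits; dividing it out makes the assumption visible.

```python
target_recovery = 80e9  # annual savings claimed from the 75% recapture
recapture_rate = 0.75   # share of hospital profits taxed away

# Implied total annual hospital profits (derived, not stated in the text)
implied_total_profits = target_recovery / recapture_rate
```

Recouping $80 billion at a 75% rate presumes hospitals, nonprofit and for-profit alike, earn roughly $107 billion a year in profits.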
To be sure, this too seems unlikely to happen. Hospitals may be the most politically powerful institution in any congressional district. They’re usually admired as their community’s most important charitable institution, and their influential stakeholders run the gamut from equipment makers to drug companies to doctors to thousands of rank-and-file employees. Then again, if every community paid more attention to those administrator salaries, to those nonprofits’ profit margins and to charges like $77 for gauze pads, perhaps the political balance would shift.
We should outlaw the chargemaster. Everyone involved, except a patient who gets a bill based on one (or worse, gets sued on the basis of one), shrugs off chargemasters as a fiction. So why not require that they be rewritten to reflect a process that considers actual and thoroughly transparent costs? After all, hospitals are supposed to be government-sanctioned institutions accountable to the public. Hospitals love the chargemaster because it gives them a big number to put in front of rich uninsured patients (typically from outside the U.S.) or, as is more likely, to attach to lawsuits or give to bill collectors, establishing a place from which they can negotiate settlements. It’s also a great place from which to start negotiations with insurance companies, which also love the chargemaster because they can then make their customers feel good when they get an Explanation of Benefits that shows the terrific discounts their insurance company won for them.
But for patients, the chargemasters are both the real and the metaphoric essence of the broken market. They are anything but irrelevant. They’re the source of the poison coursing through the health care ecosystem.
We should amend patent laws so that makers of wonder drugs would be limited in how they can exploit the monopoly our patent laws give them. Or we could simply set price limits or profit-margin caps on these drugs. Why are the drug profit margins treated as another given that we have to work around to get out of the $750 billion annual overspend, rather than a problem to be solved?
Just bringing these overall profits down to those of the software industry would save billions of dollars. Reducing drugmakers’ prices to what they get in other developed countries would save over $90 billion a year. It could save Medicare — meaning the taxpayers — more than $25 billion a year, or $250 billion over 10 years. Depending on whether that $250 billion is compared with the Republican or Democratic deficit-cutting proposals, that’s a third or a half of the Medicare cuts now being talked about.
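The Medicare piece of the drug-price savings compounds as follows; the ~28% share is my derived figure, not one stated in the text.

```python
annual_savings_all = 90e9       # savings at other developed countries' prices
annual_savings_medicare = 25e9  # Medicare's (taxpayers') annual share

decade_medicare = annual_savings_medicare * 10           # $250B over 10 years
medicare_share = annual_savings_medicare / annual_savings_all  # ~28% of total
```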
Similarly, we should tighten what Medicare pays for CT or MRI tests a lot more and even cap what insurance companies can pay for them. This is a huge contributor to our massive overspending on outpatient costs. And we should cap profits on lab tests done in-house by hospitals or doctors.
Finally, we should embarrass Democrats into stopping their fight against medical-malpractice reform and instead provide safe-harbor defenses for doctors so they don't have to order a CT scan whenever, as one hospital administrator put it, someone in the emergency room says the word "head." Trial lawyers, who make their bread and butter from civil suits, have been the Democrats' biggest financial backers for decades. Republicans are right when they argue that tort reform is overdue. Eliminating the rationale or excuse for all the extra doctor exams, lab tests and use of CT scans and MRIs could cut tens of billions of dollars a year while drastically cutting what hospitals and doctors spend on malpractice insurance and pass along to patients.
Other options are more tongue in cheek, though they illustrate the absurdity of the hole we have fallen into. We could limit administrator salaries at hospitals to five or six times what the lowest-paid licensed physician gets for caring for patients there. That might take care of the self-fulfilling peer dynamic that Gunn of Sloan-Kettering cited when he explained, “We all use the same compensation consultants.” Then again, it might unleash a wave of salary increases for junior doctors.
Or we could require drug companies to include a prominent, plain-English notice of the gross profit margin on the packaging of each drug, as well as the salary of the parent company’s CEO. The same would have to be posted on the company’s website. If nothing else, it would be a good test of embarrassment thresholds.
None of these suggestions will come as a revelation to the policy experts who put together Obamacare or to those before them who pushed health care reform for decades. They know what the core problem is — lopsided pricing and outsize profits in a market that doesn’t work. Yet there is little in Obamacare that addresses that core issue or jeopardizes the paydays of those thriving in that marketplace. In fact, by bringing so many new customers into that market by mandating that they get health insurance and then providing taxpayer support to pay their insurance premiums, Obamacare enriches them. That, of course, is why the bill was able to get through Congress.
Obamacare does some good work around the edges of the core problem. It restricts abusive hospital-bill collecting. It forces insurers to provide explanations of their policies in plain English. It requires a more rigorous appeal process conducted by independent entities when insurance coverage is denied. These are all positive changes, as is putting the insurance umbrella over tens of millions more Americans — a historic breakthrough. But none of it is a path to bending the health care cost curve. Indeed, while Obamacare’s promotion of statewide insurance exchanges may help distribute health-insurance policies to individuals now frozen out of the market, those exchanges could raise costs, not lower them. With hospitals consolidating by buying doctors’ practices and competing hospitals, their leverage over insurance companies is increasing. That’s a trend that will only be accelerated if there are more insurance companies with less market share competing in a new exchange market trying to negotiate with a dominant hospital and its doctors. Similarly, higher insurance premiums — much of them paid by taxpayers through Obamacare’s subsidies for those who can’t afford insurance but now must buy it — will certainly be the result of three of Obamacare’s best provisions: the prohibitions on exclusions for pre-existing conditions, the restrictions on co-pays for preventive care and the end of annual or lifetime payout caps.
Put simply, with Obamacare we’ve changed the rules related to who pays for what, but we haven’t done much to change the prices we pay.
When you follow the money, you see the choices we’ve made, knowingly or unknowingly.
Over the past few decades, we’ve enriched the labs, drug companies, medical device makers, hospital administrators and purveyors of CT scans, MRIs, canes and wheelchairs. Meanwhile, we’ve squeezed the doctors who don’t own their own clinics, don’t work as drug or device consultants or don’t otherwise game a system that is so gameable. And of course, we’ve squeezed everyone outside the system who gets stuck with the bills.
We’ve created a secure, prosperous island in an economy that is suffering under the weight of the riches those on the island extract.
And we’ve allowed those on the island and their lobbyists and allies to control the debate, diverting us from what Gerard Anderson, a health care economist at the Johns Hopkins Bloomberg School of Public Health, says is the obvious and only issue: “All the prices are too damn high.”
By Frederick Kaufman
The history of food took an ominous turn in 1991, at a time when no one was paying much attention. That was the year Goldman Sachs decided our daily bread might make an excellent investment.
Agriculture, rooted as it is in the rhythms of reaping and sowing, had not traditionally engaged the attention of Wall Street bankers, whose riches did not come from the sale of real things like wheat or bread but from the manipulation of ethereal concepts like risk and collateralized debt. But in 1991 nearly everything else that could be recast as a financial abstraction had already been considered. Food was pretty much all that was left. And so with accustomed care and precision, Goldman’s analysts went about transforming food into a concept. They selected eighteen commodifiable ingredients and contrived a financial elixir that included cattle, coffee, cocoa, corn, hogs, and a variety or two of wheat. They weighted the investment value of each element, blended and commingled the parts into sums, then reduced what had been a complicated collection of real things into a mathematical formula that could be expressed as a single manifestation, to be known thenceforward as the Goldman Sachs Commodity Index. Then they began to offer shares.
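The blending-and-weighting step described above can be sketched in a few lines. The commodities, prices, and weights below are invented for illustration; the actual Goldman Sachs Commodity Index uses production-based weights across its full list of components, and its exact composition is not given in this article.

```python
# Illustrative sketch: collapsing many commodity prices into a single
# index number via a weighted average. All figures here are invented.

def index_level(prices, weights):
    """Weighted average of commodity prices over the given weights."""
    assert set(prices) == set(weights)
    total_weight = sum(weights.values())
    return sum(prices[c] * weights[c] for c in prices) / total_weight

prices = {"wheat": 6.0, "corn": 4.0, "cattle": 95.0, "coffee": 1.4}
weights = {"wheat": 0.3, "corn": 0.3, "cattle": 0.2, "coffee": 0.2}

print(round(index_level(prices, weights), 2))
```

The point of the construction is visible even in this toy: once the parts are "blended and commingled," a buyer of the index owns a single number, not any particular commodity.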
As was usually the case, Goldman’s product flourished. The prices of cattle, coffee, cocoa, corn, and wheat began to rise, slowly at first, and then rapidly. And as more people sank money into Goldman’s food index, other bankers took note and created their own food indexes for their own clients. Investors were delighted to see the value of their venture increase, but the rising price of breakfast, lunch, and dinner did not align with the interests of those of us who eat. And so the commodity index funds began to cause problems.
Wheat was a case in point. North America, the Saudi Arabia of cereal, sends nearly half its wheat production overseas, and an obscure syndicate known as the Minneapolis Grain Exchange remains the supreme price-setter for the continent’s most widely exported wheat, a high-protein variety called hard red spring. Other varieties of wheat make cake and cookies, but only hard red spring makes bread. Its price informs the cost of virtually every loaf on earth.
As far as most people who eat bread were concerned, the Minneapolis Grain Exchange had done a pretty good job: for more than a century the real price of wheat had steadily declined. Then, in 2005, that price began to rise, along with the prices of rice and corn and soy and oats and cooking oil. Hard red spring had long traded between $3 and $6 per sixty-pound bushel, but for three years Minneapolis wheat broke record after record as its price doubled and then doubled again. No one was surprised when in the first quarter of 2008 transnational wheat giant Cargill attributed its 86 percent jump in annual profits to commodity trading. And no one was surprised when packaged-food maker ConAgra sold its trading arm to a hedge fund for $2.8 billion. Nor when The Economist announced that the real price of food had reached its highest level since 1845, the year the magazine first calculated the number.
Nothing had changed about the wheat, but something had changed about the wheat market. Since Goldman’s innovation, hundreds of billions of new dollars had overwhelmed the actual supply of and actual demand for wheat, and rumors began to emerge that someone, somewhere, had cornered the market. Robber barons, gold bugs, and financiers of every stripe had long dreamed of controlling all of something everybody needed or desired, then holding back the supply as demand drove up prices. But there was plenty of real wheat, and American farmers were delivering it as fast as they always had, if not even a bit faster. It was as if the price itself had begun to generate its own demand—the more hard red spring cost, the more investors wanted to pay for it.
“It’s absolutely mind-boggling,” one grain trader told the Wall Street Journal. “You don’t ever want to trade wheat again,” another told the Chicago Tribune.
“We have never seen anything like this before,” Jeff Voge, chairman of the Kansas City Board of Trade, told the Washington Post. “This isn’t just any commodity,” continued Voge. “It is food, and people need to eat.”
The global speculative frenzy sparked riots in more than thirty countries and drove the number of the world’s “food insecure” to more than a billion. In 2008, for the first time since such statistics have been kept, the proportion of the world’s population without enough to eat ratcheted upward. The ranks of the hungry had increased by 250 million in a single year, the most abysmal increase in all of human history.
Then, like all speculative bubbles, the food bubble popped. By late 2008, the price of Minneapolis hard red spring had toppled back to normal levels, and trading volume quickly followed. Of course, the prices world consumers pay for food have not come down so fast, as manufacturers and retailers continue to make up for their own heavy losses.
The gratuitous damage of the food bubble struck me as not merely a disgrace but a disgrace that might easily be repeated. And so I traveled to Minneapolis—where the reality of hard red spring and the price of hard red spring first went their separate ways—to discover how such a thing could have happened, and if and when it would happen again.
The name of the Minneapolis Grain Exchange may conjure images of an immense concrete silo towering over the prairie, but the exchange is in fact a rather severe neoclassical steel-frame building that shares the downtown corner of Fourth Street and Fourth Avenue with City Hall, the courthouse, and the jail. I walked through its vestibule of granite and Italian marble, past renderings of wheat molded into the terra-cotta cartouches, and as I waited for the wheat-embossed elevator I tried not to gawk at the gold-plated mail chute. For more than a century, the trading floor of the Minneapolis Grain Exchange had been the place where wheat acquired a price, but as I stepped out of the elevator the opening bell tolled and echoed across a vast, silent, and chilly chamber. The place was abandoned, the phones ripped out of the walls, the octagonal grain pits littered with snakes of tangled wire.
I wandered across the wooden planks of the old pits, scarred by the boots of countless grain traders, and I peered into the dark and narrow recesses of the phone booths where those traders had scribbled down their orders. Beyond the booths loomed the massive cash-grain tables, starkly illuminated by rays of sunlight. In the old days, when brokers and traders looked into one another’s faces, not computer screens, they liked to examine the grain before they bought it.
Now an electronic board began to populate with green, red, and yellow numbers that told the price of barley, canola, cattle, coffee, copper, cotton, gold, hogs, lumber, milk, oats, oil, platinum, rice, and silver. Beneath them shimmered the indices: the Dow, the S&P 500, and, at the very bottom, the Goldman Sachs Commodity Index. Even the video technology was quaint, a relic from the Carter years, when trade with the Soviet Union was the final frontier, long before that moment in 2008 when the chief executive officer of the Minneapolis Grain Exchange, Mark Bagan, decided that the future of wheat was not on a table in Minneapolis but within the digital infinitude of the Internet.
As a courtesy to the speculators who for decades had spent their workdays executing trades in the grain pits, the exchange had set up a new space a few stories above the old trading floor, a gray-carpeted room in which a few dozen beige cubicles were available to rent, some featuring a view of a parking lot. I had expected shouting, panic, confusion, and chaos, but no more than half the cubicles were occupied, and the room was silent. One of the grain traders was reading his email, another checking ESPN for the weekend scores, another playing solitaire, another shopping on eBay for antique Japanese vases.
“We’re trading wheat, but it’s wheat we’re never going to see,” Austin Damiani, a twenty-eight-year-old wheat broker, would tell me later that afternoon. “It’s a cerebral experience.”
Today’s action consisted of a gray-haired man padding from cubicle to cubicle, greeting colleagues, sucking hard candy. The veteran eventually ambled off to a corner, to a battered cash-grain table that had been moved up from the old trading floor. A dozen aluminum pans sat on the table, each holding a different sample of grain. The old man brought a pan to his face and took a deep breath. Then he held a single grain in his palm, turned it over, and found the crease.
“The crease will tell you the variety,” he told me. “That’s a lost art.”
His name was Mike Mullin, he had been trading wheat for fifty years, and he was the first Minneapolis wheat trader I had seen touch a grain of the stuff. Back in the day, buyers and sellers might have spent hours insulting, cajoling, bullying, and pleading with one another across this table—anything to get the right price for hard red spring—but Mullin was not buying real wheat today, nor was anybody here selling it.
Above us, three monitors flickered prices from America’s primary grain exchanges: Chicago, Kansas City, and Minneapolis. Such geographic specificities struck me as archaic, but there remain essential differences among these wheat markets, vestiges of old-fashioned concerns such as latitude and proximity to the Erie Canal.
Mullin stared at the screens and asked me what I knew about wheat futures, and I told him that whereas Minneapolis traded the contract in hard red spring, Kansas City traded in hard red winter and Chicago in soft red winter, both of which have a lower protein content than Minneapolis wheat, are less expensive, and are more likely to be incorporated into a brownie mix than into a baguette. High protein content makes Minneapolis wheat elite, I told Mullin.
He nodded his head, and we stood in silence and watched the desultory movement of corn and soy, soft red winter and hard red spring. It was a slow trading day even if commodities, as Mullin told me, were overpriced 10 percent across the board. Mullin figured he knew the real worth of a bushel and had bet the price would soon head south. “Am I short?” he asked. “Yes I am.”
I asked him what he knew about the commodity indexes, like the one Goldman Sachs created in 1991.
“It’s a brainless entity,” Mullin said. His eyes did not move from the screen. “You look at a chart. You hit a number. You buy.”
Grain trading was not always brainless. Joseph parsed Pharaoh’s dream of cattle and crops, discerned that drought loomed, and diligently went about storing immense amounts of grain. By the time famine descended, Joseph had cornered the market—an accomplishment that brought nations to their knees and made Joseph an extremely rich man.
In 1730, enlightened bureaucrats of Japan’s Edo shogunate perceived that a stable rice price would protect those who produced their country’s sacred grain. Up to that time, all the farmers in Japan would bring their rice to market after the September harvest, at which point warehouses would overflow, prices would plummet, and, for all their hard work, Japan’s rice farmers would remain impoverished. Instead of suffering through the Osaka market’s perennial volatility, the bureaucrats preferred to set a price that would ensure a living for farmers, grain warehousemen, the samurai (who were paid in rice), and the general population—a price not at the mercy of the annual cycle of scarcity and plenty but a smooth line, gently fluctuating within a reasonable range.
While Japan had relied on the authority of the government to avoid deadly volatility, the United States trusted in free enterprise. After the combined credit crunch, real estate wreck, and stock-market meltdown now known as the Panic of 1857, U.S. grain merchants conceived a new stabilizing force: In return for a cash commitment today, farmers would sign a forward contract to deliver grain a few months down the line, on the expiration date of the contract. Since buyers could never be certain what the price of wheat would be on the date of delivery, the price of a future bushel of wheat was usually a few cents less than that of a present bushel of wheat. And while farmers had to accept less for future wheat than for real and present wheat, the guaranteed future sale protected them from plummeting prices and enabled them to use the promised payment as, say, collateral for a bank loan. These contracts let both producers and consumers hedge their risks, and in so doing reduced volatility.
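The risk-reduction the forward contract offered can be shown with a toy comparison. The prices and quantities below are invented; the only feature taken from the text is that the forward price sat a few cents under the present price.

```python
# Toy comparison of a farmer's revenue with and without a forward
# contract. All figures are invented for illustration.

def revenue_unhedged(spot_at_harvest, bushels):
    # Without a contract, revenue swings with the harvest-time price.
    return spot_at_harvest * bushels

def revenue_hedged(forward_price, bushels):
    # The forward locks in the sale price regardless of the spot market.
    return forward_price * bushels

bushels = 10_000
forward_price = 5.80             # a few cents under the present price
for spot in (4.00, 6.00, 8.00):  # possible spot prices at delivery
    print(spot, revenue_unhedged(spot, bushels),
          revenue_hedged(forward_price, bushels))
```

The hedged revenue is the same in every scenario; the farmer gives up the upside of an $8 harvest-time price in exchange for never facing the $4 one, which is exactly the trade that made the promised payment usable as loan collateral.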
But the forward contract was a primitive financial tool, and when demand for wheat exploded after the Civil War, and ever more grain merchants took to reselling and trading these agreements on a fast-growing secondary market, it became impossible to figure out who owed whom what and when. At which point the great grain merchants of Chicago, Kansas City, and Minneapolis set about creating a new kind of institution less like a medieval county fair and more like a modern clearinghouse. In place of myriad individually negotiated and fulfilled forward contracts, the merchants established exchanges that would regulate both the quality of grain and the expiration dates of all forward contracts—eventually limiting those dates to five each year, in March, May, July, September, and December. Whereas under the old system each buyer and each seller vetted whoever might stand at the opposite end of each deal, the grain exchange now served as the counterparty for everyone.
The exchanges soon attracted a new species of merchant interested in numbers, not grain. This was the speculator. As the price of futures contracts fluctuated in daily trading, the speculator sought to cash in through strategic buying and selling. And since the speculator had neither real wheat to sell nor a place to store any he might purchase, for every “long” position he took (a promise to buy future wheat), he would eventually need to place an equal and opposite “short” position (a promise to sell). Farmers and millers welcomed the speculator to their market, for his perpetual stream of buy and sell orders gave them the freedom to sell and buy their actual wheat just as they pleased.
Under the new system, farmers and millers could hedge, speculators could speculate, the market remained liquid, and yet the speculative futures price could never move too far from the “spot” (or actual) price: every ten weeks or so, when the delivery date of the contract approached, the two prices would converge, as everyone who had not cleared his position with an equal and opposite position would be obligated to do just that. The virtuality of wheat futures would settle up with the reality of cash wheat, and then, as the contract expired, the price of an ideal bushel would be “discovered” by hedger and speculator alike.
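The bookkeeping behind "clearing a position with an equal and opposite position" is simple enough to sketch. The contract counts below are invented.

```python
# Minimal sketch of position-clearing before expiry: every long a
# speculator holds must be offset by a short (or vice versa), so no
# speculator ends up obliged to deliver, or take delivery of, real
# wheat. Contract counts are invented.

def net_position(trades):
    """Sum signed contract counts: +n means long n contracts, -n short."""
    return sum(trades)

speculator = [+10, -4, +2]          # currently net long 8 contracts
assert net_position(speculator) == 8

# To clear before the delivery date, take the equal and opposite side:
speculator.append(-net_position(speculator))
assert net_position(speculator) == 0
```

It is this forced squaring-up at expiry that pulled the futures price back toward the spot price every ten weeks or so, which is why the two could never drift far apart under the old system.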
No less an economist than John Maynard Keynes applied himself to studying this miraculous interplay of supply and demand, buyers and sellers, real wheat and virtual wheat, and he gave the standard futures-pricing model its own special name. He called it “normal backwardation,” because in a normal market for real goods, he found, futures prices (for things that did not yet exist) generally stayed in back of spot prices (for things that actually existed).
Normal backwardation created the occasion for so many people to make so much money in so many ways that numerous other futures exchanges soon emerged, featuring contracts for everything from butter, cottonseed oil, and hay to plywood, poultry, and cat pelts. Speculators traded molasses futures on the New York Coffee and Sugar Exchange, and if they lost their shirts they could head over to the New York Burlap and Jute Exchange or the New York Hide Exchange. And despite the occasional market collapse (onions in 1957, Maine potatoes in 1976), for more than a century the basic strategy and tactics of futures trading remained the same, the price of wheat remained stable, and increasing numbers of people had plenty to eat.
The decline of volatility, good news for the rest of us, drove bankers up the wall. I put in a call to Steven Rothbart, who traded commodities for Cargill way back in the 1980s. I asked him what he knew about the birth of commodity index funds, and he began to laugh. “Commodities had died,” he told me. “We sat there every day and the market wouldn’t move. People left. They couldn’t make a living anymore.”
Clearly, some innovation was in order. In the midst of this dead market, Goldman Sachs envisioned a new form of commodities investment, a product for investors who had no taste for the complexities of corn or soy or wheat, no interest in weather and weevils, and no desire for getting into and out of shorts and longs—investors who wanted nothing more than to park a great deal of money somewhere, then sit back and watch that pile grow. The managers of this new product would acquire and hold long positions, and nothing but long positions, on a range of commodities futures. They would not hedge their futures with the actual sale or purchase of real wheat (like a bona-fide hedger), nor would they cover their positions by buying low and selling high (in the grand old fashion of commodities speculators). In fact, the structure of commodity index funds ran counter to our normal understanding of economic theory, requiring that index-fund managers not buy low and sell high but buy at any price and keep buying at any price. No matter what lofty highs long wheat futures might attain, the managers would transfer their long positions into the next long futures contract, due to expire a few months later, and repeat the roll when that contract, in turn, was about to expire—thus accumulating an everlasting, ever-growing long position, unremittingly regenerated.
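The perpetual roll described above can be sketched as a simple cycle through the five delivery months the exchanges had established. The month list matches the text; everything else is invented.

```python
# Sketch of an index fund's perpetual roll: as each long contract nears
# expiry, the position is moved into the next contract month, so the
# fund is always long and never takes delivery.

MONTHS = ["Mar", "May", "Jul", "Sep", "Dec"]  # the five delivery months

def roll(position_month):
    """Return the next contract month in the cycle."""
    i = MONTHS.index(position_month)
    return MONTHS[(i + 1) % len(MONTHS)]

month = "Mar"
held = []
for _ in range(6):   # six successive rolls: the long never closes
    held.append(month)
    month = roll(month)
print(held)
```

Note what is absent: there is no sell step and no delivery step. The position simply regenerates, which is what the text means by "an everlasting, ever-growing long position."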
“You’ve got to be out of your freaking mind to be long only,” Rothbart said. “Commodities are the riskiest things in the world.”
But Goldman had its own way to offset the risks of commodities trading—if not for its clients, then at least for itself. The strategy, standard practice for most index funds, relied on “replication,” which meant that for every dollar a client invested in the index fund, Goldman would buy a dollar’s worth of the underlying commodities futures (minus management fees). Of course, in order to purchase commodities futures, the bankers had only to make a “good-faith deposit” of something like 5 percent. Which meant that they could stash the other 95 percent of their investors’ money in a pool of Treasury bills, or some other equally innocuous financial cranny, which they could subsequently leverage into ever greater amounts of capital to utilize to their own ends, whatever they might be. If the price of wheat went up, Goldman made money. And if the price of wheat fell, Goldman still made money—not only from management fees, but from the profits the bank pulled down by investing 95 percent of its clients’ money in less risky ventures. Goldman even made money from the roll into each new long contract, every instance of which required clients to pay a new set of transaction costs.
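The economics of that arrangement fit in a back-of-the-envelope calculation. The fee rate and Treasury-bill yield below are invented round numbers; only the roughly 5 percent margin figure comes from the text.

```python
# Back-of-the-envelope sketch of the replication strategy: ~5% of
# client money posted as futures margin, the rest parked in T-bills,
# plus management fees. Fee and interest rates are invented.

def bank_income(client_dollars, fee_rate=0.0075, tbill_rate=0.04,
                margin=0.05):
    fees = client_dollars * fee_rate
    parked = client_dollars * (1 - margin)   # ~95% sits in T-bills
    interest = parked * tbill_rate
    # Earned whether the price of wheat rises or falls:
    return fees + interest

print(round(bank_income(1_000_000), 2))
```

Whatever wheat does, the fee and interest streams flow, which is the sense in which the bank, unlike its clients, carried little of the risk it had introduced.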
The bankers had figured out how to extract profit from the commodities market without taking on any of the risks they themselves had introduced by flooding that same market with long orders. Unlike the wheat producers and the wheat speculators, or even Goldman’s own customers, Goldman had no vested interest in a stable commodities market. As one index trader told me, “Commodity funds have historically made money—and kept most of it for themselves.”
No surprise, then, that other banks soon recognized the rightness of this approach. In 1994, J.P. Morgan established its own commodity index fund, and soon thereafter other players entered the scene, including the AIG Commodity Index and the Chase Physical Commodity Index, along with initial offerings from Bear Stearns, Oppenheimer, and Pimco. Barclays joined the group with eight index funds and, in just over a year, raised close to $3 billion.
Government regulators, far from preventing this strange new way of accumulating futures, actively encouraged it. Congress had in 1936 created a commission that curbed “excessive speculation” by limiting large holdings of futures contracts to bona-fide hedgers. Years later, the modern-day Commodity Futures Trading Commission continued to set absolute limits on the amount of wheat-futures contracts that could be held by speculators. In 1991, that limit was 5,000 contracts. But after the invention of the commodity index fund, bankers convinced the commission that they, too, were bona-fide hedgers. As a result, the commission issued a position-limit exemption to six commodity index traders, and within a decade those funds would be permitted to hold as many as 130,000 wheat-futures contracts at any one time.
“We have not seen U.S. agriculture rely this much on the market for almost seventy years,” was how Joseph Dial, the head of the commission, assessed his agency’s regulatory handiwork in 1997. “This paradigm shift in the government’s farm policy has created a new era for agriculture.”
Goldman and all the other banks that followed them into commodity index funds had figured out how to safeguard themselves, but there was a lot more money to be made if the banks could somehow convince everyone else that an inherently risky product designed to protect the banks—and only the banks—was in fact also safe for investors.
Good news came on February 28, 2005, when Gary Gorton, of the University of Pennsylvania, and K. Geert Rouwenhorst, of the Yale School of Management, published a working paper called “Facts and Fantasies About Commodities Futures.” In forty graph-and-equation-filled pages, the authors demonstrated that between 1959 and 2004, a hypothetical investment in a broad range of commodities—such as an index—would have been no more risky than an investment in a broad range of stocks. What’s more, commodities showed a negative correlation with equities and a positive correlation with inflation. Food was always a good investment, and even better in bad times. Money managers could hardly wait to spread the news.
“Since this discovery,” reported the Financial Times, investors had become attracted to commodities “in the hope that returns will differ from equities and bonds and be strong in case of inflation.” Another study noted as well that commodity index funds offered “an inherent or natural return that is not conditioned on skill.” And so the long-awaited legion of new investors began buying into commodity index funds, and the food bubble truly began to inflate.
A few years after “Facts and Fantasies” appeared, and almost as if to prove Gorton and Rouwenhorst’s point, the financial crisis hit mortgage, credit, and real estate markets—and, just as the scholars had predicted, those who had invested in commodities prospered. Money managers had to decide where to park what remained of their endowment, hedge, and pension funds, and the bankers were ready with something that looked very safe: in 2003, commodity index holdings amounted to a not particularly awe-inspiring $13 billion, but by 2008, $317 billion had poured into the funds. As long as the commodities brokers kept rolling over their futures, it looked as though the day of reckoning might never come. If no one contemplated the effects that this accumulation of long-only futures would eventually have on grain markets, perhaps it was because no one had ever seen such a massive pile of long-only futures.
From one perspective, a complicated chain of cause and effect had inflated the food bubble. But there were those who understood what was happening to the wheat markets in simpler terms. “I don’t have to pay anybody for anything, basically,” one long-only indexer told me. “That’s the beauty of it.”
Mark Bagan, CEO of the Minneapolis Grain Exchange, invited me to his office for a talk. A self-proclaimed “grain brat,” Bagan grew up among bales, combines, and concrete silos all across the United States before attending Minnesota State to play football. As I settled into his oversize couch, admired his neatly tailored pinstriped suit, and listened to his soft voice, it occurred to me that if the grain markets were a casino, Mark Bagan was the biggest bookie. Without him, there could be no bets on hard red spring.
“From our perspective, we’re price neutral, value neutral,” Bagan said.
I asked him about the commodity index funds and whether they had transformed the traditional wheat market into something wholly speculative, artificial, and hidden. Why did anyone except bankers even need this new market?
“There are plenty of markets out there that have yet to be thought of and will be very successful,” Bagan said. Then he veered into the intricacies of running a commodities exchange. “With our old system, we could clear forty-eight products,” he said. “Now we can have more than fifty thousand products traded. It’s a big number, building derivatives on top of derivatives, but we’ve got to be prepared for that: the financial world is evolving so quickly, there will always be a need for new risk-management products.”
Bagan had not answered my question about the funds, so I asked again, as directly as I could: What did he make of the fact that speculation in commodity index funds had caused a global run on hard red spring?
Bagan slowly shook his head, as though he were an elementary-school teacher trying to explain a basic concept—subtraction? ice?—to a particularly dense child. The Goldman Sachs Commodity Index did not include a single hard red spring future, he told me. Minneapolis wheat may have set records in 2008 and led global food prices into the stratosphere, but it had nothing to do with Goldman’s fund. There just wasn’t enough speculation in the hard red spring market to satisfy the bankers. Not enough liquidity. Bagan smiled. Was there anything else I wanted to know?
Plenty, but there was nothing more Bagan was about to disclose. As I left the office, I remembered the rumors I’d heard at a grain-crisis conference in Washington, D.C., a few months earlier. Between interminable speeches about price ceilings and grain reserves, more than one wheat expert had confided, strictly on background, that at the height of the bubble, Minneapolis wheat had been cornered. No one could say whether the culprit had been Cargill or the Canadian Wheat Board or any other party, but the consensus was that as the world had cried for food, someone, somewhere, had been hoarding wheat.
Imaginary wheat bought anywhere affects real wheat bought everywhere. But as it turned out, index traders had purchased the majority of their long wheat futures on the oldest and largest grain clearinghouse in America, the Chicago Mercantile Exchange. And so I found myself pushing through the frigid blasts of the LaSalle Street canyon. If I could figure out precisely how and when wheat futures traded in Chicago had driven up the price of actual wheat in Minneapolis, I would know why a billion people on the planet could not afford bread.
The man who had agreed to escort me to the floor of the exchange traded grain for a transnational corporation, and he told me several times that he could not talk to the press, and that if I were to mention his name in print he would lose his job. So I will call him Mr. Silver.
In the basement cafeteria of the exchange I bought Mr. Silver a breakfast of bacon and eggs and asked whether he could explain how index funds that held long-only Chicago soft red winter wheat futures could have come to dictate the spot price of Minneapolis hard red spring. Had the world starved because of a corner in Chicago? Mr. Silver looked into his scrambled eggs and said nothing.
So I began to tell him everything I knew, hoping he would eventually be inspired to fill in the blanks. I told him about Joseph in Egypt, Osaka in 1730, the Panic of 1857, and futures contracts for cat pelts, molasses, and onions. I told him about Goldman’s replication strategy, Gorton and Rouwenhorst’s 2005 paper, and the rise and rise of index funds. I told him that at least one analyst had estimated that investments in commodity index funds could easily increase to as much as $1 trillion, which would result in yet another global food catastrophe, much worse than the one before.
And I told Mr. Silver something else I had discovered: About two thirds of the Goldman index remains devoted to crude oil, gasoline, heating oil, natural gas, and other energy-based commodities. Wheat was nothing but an indexical afterthought, accounting for less than 6.5 percent of Goldman’s fund.
Mr. Silver sipped his coffee.
Even 6.5 percent of the Goldman Sachs Commodity Index made for a historically unprecedented pile of long wheat futures, I went on. Especially when those index funds kept rolling over the contracts they already had—all of them long, only a smattering bought in Kansas City, none in Minneapolis.
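The arithmetic behind "historically unprecedented" is quick to check, granting the simplifying assumption that the whole $317 billion in index holdings cited earlier was weighted like Goldman's fund.

```python
# Rough scale check: if commodity index holdings reached $317 billion
# by 2008 and wheat was just under 6.5 percent of the Goldman index,
# the wheat slice alone was on the order of $20 billion. Treating all
# index money as Goldman-weighted is a simplifying assumption.

index_holdings = 317e9
wheat_share = 0.065
wheat_dollars = index_holdings * wheat_share
print(f"${wheat_dollars / 1e9:.1f} billion")   # roughly $20.6 billion
```

Even as an "indexical afterthought," wheat absorbed tens of billions of long-only dollars.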
And then it occurred to me: It was neither an individual nor a corporation that had cornered the wheat market. The index funds may never have held a single bushel of wheat, but they were hoarding staggering quantities of wheat futures, billions of promises to buy, not one of them ever to be fulfilled. The dreaded market corner had emerged not from a shortage in the wheat supply but from a much rarer economic occurrence, a shock inspired by the ceaseless call of index funds for wheat that did not exist and would never need to exist: a demand shock. Instead of a hidden mastermind committing a dastardly deed, it was old Mike Mullin’s “brainless entity,” the investment instrument itself, that had taken over and created the effects of a traditional corner.
Mr. Silver had stopped eating his eggs.
I said that I understood how the index funds’ unprecedented accumulation of Chicago futures could create the appearance of a market corner in Chicago. But there was still something I didn’t get. Why had the wheat market in Minneapolis begun to act as though it too had been cornered when none of the index funds held hard red spring? Why had the world’s most widely exported wheat experienced a sudden surge in price, a surge that caused a billion people to go hungry?
At which point Mr. Silver interrupted my monologue.
Index-fund buying had pushed up the price of the Chicago contract, he said, until the price of a wheat future had come to equal the spot price of wheat on the Chicago Mercantile Exchange—and still, the futures price surged. The result was contango.
I gave Mr. Silver a blank look. Contango, he explained, describes a market in which future prices rise above current prices. Rather than being stable and steady, contango markets tend to be overheated and hysterical, with spot prices rising to match the most outrageously escalated futures prices. Indeed, between 2006 and 2008, the spot price of Chicago soft red winter shot up from $3 per bushel to $11 per bushel.
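The distinction Mr. Silver drew reduces to a single comparison between the futures price and the spot price. The prices below are invented except for the $3-to-$11 move in Chicago soft red winter reported above.

```python
# Sketch of the backwardation/contango distinction: in normal
# backwardation the futures price sits below the spot price; in
# contango it sits above it, and the spot can get dragged upward
# toward escalating futures. Prices are invented for illustration.

def market_state(spot, futures):
    return "contango" if futures > spot else "backwardation"

# The normal case Keynes described: futures stay in back of spot.
assert market_state(spot=6.00, futures=5.90) == "backwardation"

# The overheated case: futures above spot, e.g. early in the run
# that carried Chicago soft red winter from $3 toward $11 a bushel.
assert market_state(spot=3.00, futures=3.50) == "contango"
```

The danger the text describes is not the label itself but the feedback: once spot prices start rising to "match the most outrageously escalated futures prices," the mechanism meant to discover a price begins to manufacture one.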
The ever-escalating price of wheat and the newfound strength of grain markets were excellent news for the new investors who had flooded commodity index funds. No matter that the mechanism created to stabilize grain prices had been reassembled into a mechanism to inflate grain prices, or that the stubbornly growing discrepancy between futures and spot prices meant that farmers and merchants no longer could use these markets to price crops and manage risks. No matter that contango in Chicago had disrupted the operations of the nation’s grain markets to the extent that the Senate Committee on Homeland Security and Governmental Affairs had begun an investigation into whether speculation in the wheat markets might pose a threat to interstate commerce. And then there was the question of the millers and the warehousers—those who needed actual wheat to sell, actual bread that might feed actual people.
Mr. Silver lowered his voice as he informed me that as the price of Chicago wheat had bubbled up, commercial buyers had turned elsewhere—to places like Minneapolis. Although hard red spring historically had been more expensive than soft red winter, it had begun to look like a bargain. So brokers bought hard red spring and left it to the chemists at General Mills or Sara Lee or Domino’s to rejigger their dough recipes for a higher-protein variety.
The grain merchants purchased Minneapolis hard red spring much earlier in the annual cycle than usual, and they purchased more of it than ever before, as real demand began to chase the ever-growing, everlasting long. By the time the normal buying season began, drought had hit Australia, floods had inundated northern Europe, and a vogue for biofuels had enticed U.S. farmers to grow less wheat and more corn. And so, when nations across the globe called for their annual hit of hard red spring, they discovered that the so-called visible supply was far lower than usual. At which point the markets veered into insanity.
Bankers had taken control of the world’s food, money chased money, and a billion people went hungry.
Mr. Silver finished his bacon and eggs and I followed him upstairs, beyond two sets of metal detectors, dozens of security staff, and a gaudy stained-glass image of Hermes, god of commerce, luck, and thievery. Through the colored glass that outlined the deity I caught my first glimpse of the immense trading floor of the Chicago Mercantile Exchange. The electronic board had already begun to populate with green, yellow, and red numbers.
The wheat harvest of 2008 turned out to be the most bountiful the world had ever seen, so plentiful that even as hundreds of millions slowly starved, 200 million bushels were sold for animal feed. Livestock owners could afford the wheat; poor people could not. Rather belatedly, real wheat had shown up again—and lots of it. U.S. Department of Agriculture statistics eventually revealed that 657 million bushels of 2008 wheat remained in U.S. silos after the buying season, a record-breaking “carryover.” Soon after that bounteous oversupply had been discovered, grain prices plummeted and the wheat markets returned to business as usual.
The worldwide price of food had risen by 80 percent between 2005 and 2008, and unlike other food catastrophes of the past half century or so, the United States was not insulated from this one, as 49 million Americans found themselves unable to put a full meal on the table. Across the country demand for food stamps reached an all-time high, and one in five kids came to depend on food kitchens. In Los Angeles nearly a million people went hungry. In Detroit armed guards stood watch over grocery stores. Rising prices, mused the New York Times, “might have played a role.”
On the plane to Minneapolis I had read a startling prediction: “It may be hard to imagine commodity prices advancing another 460 percent above their mid-2008 price peaks,” hedge-fund manager John Hummel wrote in a letter to clients of AIS Capital Management. “But the fundamentals argue strongly,” he continued, that “these sectors have significant upside potential.” I made a quick calculation: 460 percent above 2008 peaks meant hamburger meat priced at $20 a pound.
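That back-of-the-envelope figure can be checked. The baseline is an assumption of mine (the letter does not state one): a mid-2008 retail peak for hamburger of roughly $3.57 a pound.

```python
# Checking the "460 percent above 2008 peaks" projection.
# The $3.57/lb mid-2008 peak is an assumed baseline, not a figure from the letter.

peak_2008 = 3.57                    # assumed mid-2008 retail peak, $/lb
projected = peak_2008 * (1 + 4.60)  # "460 percent above" the peak = 5.6x
print(round(projected))             # → 20
```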
On the ground in Minneapolis I put the question to Michael Ricks, chairman of the Minneapolis Grain Exchange. Could 2008 happen again? Could prices rise even higher?
“Absolutely,” said Ricks. “We’re in a volatile world.”
I put the same question to Layne Carlson, corporate secretary and treasurer of the Minneapolis Grain Exchange. “Yes,” said Carlson, who then told me the two principles that govern the movement of grain markets: “fear and greed.”
But wasn’t it part of a grain exchange’s responsibility to ensure a stable valuation of our daily bread?
“I view what we’re working with as widgets,” said Todd Posthuma, the exchange’s associate director of market operations and information technology, the man responsible for clearing $100 million worth of trades every day. “I think being an employee at an exchange is different from adding value to the food system.”
Above Mark Bagan’s oversize desk hangs a jagged chart of futures prices for the hard red spring wheat contract, mapping every peak and valley from 1973 to 2006. The highs on Bagan’s chart reached $7.50. Of course, had 2008 been included, the spikes would have, literally, gone through the roof.
Would the price of wheat rise again?
“The flow of money into commodities has changed significantly in the last decade,” explained Bagan. “Wheat, corn, soft commodities—I don’t see these dollars going away. It already has happened,” he said. “It’s inevitable.”
School children in the US were served 200,000 kilos of meat contaminated with a deadly antibiotic-resistant bacterium before the nation’s second largest meat packer issued a recall in 2009. A year earlier, six babies died and 300,000 others got horribly sick with kidney problems in China when one of the country’s top dairy producers knowingly allowed an industrial chemical into its milk supply. Across the world, people are getting sick and dying from food like never before. Governments and corporations are responding with all kinds of rules and regulations, but few have anything to do with public health. The trade agreements, laws and private standards used to impose their version of “food safety” only entrench corporate food systems that make us sick and devastate those that truly feed and care for people, those based on biodiversity, traditional knowledge, and local markets. People are resisting, whether it’s movements against GMOs in Benin and “mad cow” beef in Korea or campaigns to defend street hawkers in India and raw milk in Colombia. The question of who defines “food safety” is increasingly central to the struggle over the future of food and agriculture.
The growing global menace
Food should be a source of health, not harm. But food can maim, cripple, and kill. The leading cause of food poisoning in the United Kingdom today is Campylobacter, a tiny bacterium, rife throughout the country’s chicken supply, that causes in humans diarrhoea, fever, abdominal pain and cramping, and in some cases chronic, even life-threatening, conditions. People get it from touching raw poultry or eating undercooked birds. Some 85% of the chicken population in the UK may be infected. In the United States, the top culprits these days are Norovirus, mostly transmitted from dirty hands, and Salmonella, contracted from eating food with faeces on it. Norovirus will give you acute vomiting and diarrhoea, while Salmonella causes vomiting, fever and cramps.
Graph: Data compiled by GRAIN from government and UN sources, 2008-2010 (except Australia=2005)
Among the more notorious food safety incidents in recent years was the melamine scandal in China in 2008. Six babies died and 300,000 others got horribly sick with kidney problems when the industrial chemical melamine got into the commercial milk distribution circuit. There was also a dioxin scandal in Germany in January 2011, where the German authorities shut down more than 4,000 farms after it was discovered that a German company had sold 200,000 tonnes of dioxin-tainted animal feed, which had subsequently entered the food chain. Dioxins are cancer-causing poisons formed in the burning of waste and other industrial processes. 1
How bad is the problem globally? Believe it or not, there are no global statistics or tracking mechanisms on food safety incidents worldwide; reliable data on their frequency and impact are grossly inadequate. Nevertheless, the available data do show that food poisoning is quite common in most countries (see Graph 1). 2 According to the Singaporean authorities, who run a pretty tight food hygiene system, roughly 1.5 billion people worldwide are affected by food-borne disease outbreaks each year, resulting in 3 million deaths. 3
The price of this food safety mess is huge. The UK puts the annual costs to the British economy at US$1.92 billion, which its Food Standards Agency bluntly calls “too much”. Australia’s annual bill is US$1.23 billion. The World Health Organisation says that the annual cost to Vietnam is US$210 million. In the US, the Centers for Disease Control (CDC) has long given the figure of US$35 million per year, but a new study released by the Pew Charitable Trusts at Georgetown University in 2010 puts the figure astronomically higher, at US$152 billion. 4
What makes food unsafe?
What constitutes safe or unsafe food is a controversial question. A range of things can make food unsafe: bad practices (poor hygiene, animal abuse, reliance on antibiotics and pesticides), unproven or risky technologies (genetic modification, nanotechnology, irradiation, cloning), deliberate contamination (such as tampering), or just poor supervision. One thing is clear though: the industrial food system is – in and of itself – the biggest source of food safety problems, because of its intensive practices, its sheer size, and the level of concentration and power that it has accumulated.
A small farm that produces some bad meat will have a relatively small impact. Networks of small and mid-sized farms producing food for regional consumption spread risk widely, diluting it. A global system built around geographically concentrated factory-sized farms does the opposite: it accumulates and magnifies risk, subjecting particular areas to industrial-style pollution and consumers globally to poisoned products (see Box: Superbugs and megafarms).
Both large- and small-scale systems are capable of producing tainted foods, but the potential impact is inherently different. There is simply bigger risk attached to bigger scale. In addition, the corporate food industry – as opposed to small farms and food operators – is highly integrated. This also generates higher risk, because it relies on combining and handling foods through a range of manufacturing, processing and distribution activities. Of course, people can get food poisoning anywhere, in school canteens or in their own homes. But the industrial food system has itself more and more become the problem, given the type of practices and the issue of scale and concentration (see Box: Food safety in the fast food nation).
Food safety in the Fast Food Nation
Does US-style production represent the future of global food? Possibly. Certainly, elite Western opinion shapers and policymakers – the editors of The Economist, the directors of the Bill and Melinda Gates Foundation, certain key elements in the Obama administration – think it should. So it is worthwhile to consider how the US food safety regime has responded to the dilemmas of scale in recent years.
In an industrialised, highly consolidated food system geared to maximising profit by selling vast volumes of cheap food, pressure exists at every phase of the production chain to cut costs by cutting corners, including safe food practices. Moreover, the very scale of modern food production means that seemingly isolated lapses can become quite grave, subjecting millions of people to danger based on the actions of a single production facility.
The case of Peanut Corp. of America demonstrates the perils of scale. Until recently, the company ran two plants: one in Texas, one in Georgia. These two facilities processed 2.5% of the peanuts produced in the United States, and sold “peanut paste” to the entire US processed food industry. By late 2007, the company had evidently given up trying to maintain hygienic conditions at its facilities. In late 2008, people started coming down with salmonella from a dizzying array of products containing Peanut Corp.’s paste, prompting the FDA to initiate a “voluntary recall”. By the time all was said and done, the recall affected no fewer than 1,800 supermarket brands. The tainted products killed nine people and officially sickened around 700 – half of them children – in 46 US states. The Centers for Disease Control (CDC) reckons that for every reported case of salmonella, another 38 cases go unreported – so the real number of people made ill from the output of just two facilities may be up to 26,000. In the wake of the fiasco, US journalists showed that the FDA had “outsourced” inspection of the Georgia plant to state authorities, and then ignored the state inspectors’ findings of atrocious hygiene practices. Moreover, it turned out that the company’s own testing had found salmonella in huge batches of peanut paste, which it proceeded to send out anyway. i
In another incident in 2009, a company called Beef Packers, owned by transnational agribusiness giant Cargill, had to declare two “voluntary recalls” involving over 500 tonnes of ground beef infected with antibiotic-resistant salmonella. ii The USDA announced that consuming the suspect meat could cause “treatment failure” iii – that is, death – because of its ability to withstand drugs. At least 39 people in 11 states reported getting sick, and more than 200,000 kilos of the tainted meat were served to school children through the National School Lunch program. iv
The official response to such incidents has been minimal. In January 2011, a hotly debated piece of legislation called the Food Safety Modernisation Act was signed into law. The intention of the original Bill was to update and inject some resources into the US food safety system. It basically called for more inspections, gave the government authority to mandate food recalls, and provided some traceability to an otherwise fairly unregulated industrial sector. Who would oppose such a move? The fat cats from the food industry, you might think – the Cargills and the Tysons, who don’t want to be controlled. But you would be wrong. The new rules would hardly affect them.
According to an analysis by the US NGO Food & Water Watch, nothing in the Act would have prevented Peanut Corp. of America from sending out its tainted paste. Worse, the rules would not even touch the meat sector, the biggest source of food-borne illness in the United States. v The main opponents of the bill throughout the debate were small family farm activists who, because of the way the bill was framed, saw themselves falling under these controls when they are not the problem. So instead of instigating real food safety reform in a country where one out of four people gets sick and 5,000 people die from eating contaminated food each year, the law might do next to nothing.
In the absence of stricter public action around food safety, corporations have moved to fill the void — sometimes to tragicomic effect. A case in point: in the mid-2000s, a company called Beef Products Inc. had an ingenious idea: it would buy slaughterhouse scraps – which are extremely likely to be infected by bacterial pathogens – from large-scale beef processors at cut-rate prices. It would purée those parts into a paste, which it would then mix with ammonia to kill bacterial pathogens. It would sell the product back to the beef industry as a cheap filler for ground beef, with the added feature that the ammonia in the paste would sterilise the ground beef it was mixed with. The beef industry had found a “solution” to the problem of bacterial pathogens in ground beef! The product, known in the industry as “pink slime” for its distinctive look, could be found in 70% of hamburgers consumed in the United States by the end of the decade. The USDA’s Food Safety Inspection Service, which oversees meat safety, applauded — it recognised “pink slime” as safe without requiring testing, on the grounds that it had been sterilised by ammonia. But in 2009, a New York Times exposé found that pink slime in fact tended to be ridden with pathogens — and was actively adding to the pathogen load of the ground beef it was mixed with. Beef Products Inc. responded by merely upping the ammonia dose for its mix. To this day, the product remains widely used in the vast US ground beef market, including at fast-food chains nationwide. vi
If the official US response to highly visible manifestations of food poisoning, like Salmonella-tainted meat and peanut butter, has been underwhelming and industry-friendly, then the response to low-level exposure to pathogens that cause cumulative damage has been virtually non-existent. The first kind causes spectacular, impossible-to-ignore symptoms like vomiting and diarrhoea; the second entails subtle, easy-to-ignore ones that can cause significant long-term damage. Corporate-led food safety regimes like the one in the United States have to at least gesture at the first kind; the second kind, not so much.
It turns out that the USDA’s Food Safety Inspection Service (FSIS), which oversees the safety of the US meat supply, routinely endorses meat that it knows to be tainted with residues of “veterinary drugs, pesticides, and heavy metals”, the USDA Inspector General revealed in a 2010 report. vii The damning report was met with silence by the US media – probably because small amounts of substances like heavy metals don’t cause dramatic immediate symptoms, but rather hard-to-trace, slow-to-develop conditions like cancer. As the report puts it, the “effects of residue are generally chronic as opposed to acute, which means that they will occur over time, as an individual consumes small traces of the residue”. In its report, the USDA Inspector General’s office expressed confidence that the FSIS would redouble efforts to keep heavy metals and antibiotic traces out of the meat supply going forward. Yet it had expressed the same thing, after exposing the same problem, in its report two years earlier. viii
Another example is the US Food and Drug Administration’s refusal to act on mounting evidence that Bisphenol A, an industrial compound found in many food containers, is an endocrine disrupter. If the food safety regime for spectacular pathogens could be described as porous, that for the second, more subtle, kind barely exists at all.
Written with contributions from Tom Philpott, senior writer on food and agriculture for Grist magazine.
i “Peanut Corp. Shipped Product After Finding Salmonella”, Bloomberg News, 27 January 2009, http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aeXwqlMnIWU0; and “Peanut Plant Had History of Health Lapses”, New York Times, 26 January 2009, http://www.nytimes.com/2009/01/27/health/27peanuts.html?_r=1&ref=health
ii “Antibiotic-resistant salmonella, school lunches, and Cargill’s dodgy California beef plant”, Grist, 10 December 2010, http://www.grist.org/article/2009-12-10-meat-wagon-cargill-salmonella/
iii “California Firm Recalls Ground Beef Products Due to Possible Salmonella Contamination”, USDA Food Safety and Inspection Service, 9 December 2009, http://www.fsis.usda.gov/News_&_Events/Recall_065_2009_Release/index.asp
iv “Why a recall of tainted beef didn’t include school lunches”, USA Today, 2 December 2009, http://www.usatoday.com/news/education/2009-12-01-beef-recall-lunches_N.htm
v Responsibility for food safety in the US is divided between two agencies. The US Department of Agriculture is responsible for meat, poultry and egg products, which accounts for 20% of the US food supply. The Food and Drug Administration, within the US Department of Health, takes care of the rest. The Food Safety Modernisation Act addresses only the work of the FDA. The top sources of food poisoning in the United States are, however, poultry, beef and leafy vegetables (in that order, 2007). See: “Can Congress make a food-safety omelette without breaking the wrong eggs?”, Grist, 25 October 2010.
vi “Safety of Beef Processing Method Is Questioned”, New York Times, 30 December 2009, http://www.nytimes.com/2009/12/31/us/31meat.html?_r=1&partner=rss&emc=rss&pagewanted=all; See also, “Lessons on the food system from the ammonia-hamburger fiasco”, Grist, 5 January 2010, http://www.grist.org/article/2010-01-05-cheap-food-ammonia-burgers
vii “FSIS National Residue Program for Cattle”, Office of the Inspector General, US Department of Agriculture, http://www.usda.gov/oig/webdocs/24601-08-KC.pdf
viii “USDA Inspector General: meat supply routinely tainted with harmful residues”, Grist, 15 April 2010: http://www.grist.org
This is “food safety”?
Government and industry action on food safety gives little indication that they recognise any fundamental problem with industrial food production. Rarely do their regulations or standards hinder corporate practices in any significant way. On the contrary, they tend to reinforce the power of large industry while undermining, or even criminalising, small-scale production and local food cultures. Colombia, for instance, is in the process of implementing legislation to prevent the sale of raw milk in urban areas. Well over two million farmers and vendors depend for their livelihoods on these sales of raw milk, and around 20 million Colombians, most of them poor, depend on raw milk as an affordable and essential source of nutrition, easily made safe by boiling it at home. Hard pressed to justify its moves on public health grounds, the government says that the legislation is part of its commitment to the WTO, and that it will help to “modernise” the dairy sector, making it better able to compete with imports when a looming free trade agreement with the EU kicks in. 5
These days, in Colombia and elsewhere, “food safety” policy has little to do with public health or consumers. It has become a battleground among contesting interests, the site of power struggles for control over food and agriculture, with decisions being increasingly taken far from producers and consumers, in the obscure world of trade negotiations and multilateral agencies, where politics and commerce, not science and public health, are what drive things.
Consider the case of bovine spongiform encephalopathy (BSE), the fatal brain-wasting condition popularly known as mad cow disease. People get the human strain of it by eating the meat of cows that have been fed diseased animals as a cheap source of protein – a practice common in industrial feedlots since the 1970s. The US and Canada lost Japan, Korea and several other major export markets for beef when BSE was found in their herds in 2003, and have had a tough time regaining those markets because risks remain from their industries’ feeding practices. 6 Indeed, in March 2011, a new case of BSE was identified in a Canadian cow. 7 But through constant pressure, particularly at the trade negotiating table, both countries have secured some concessions to allow certain parts of the cow, or the meat of younger animals, to cross borders freely. Both countries also went to the World Organisation for Animal Health (OIE) in Paris, which plays a role similar to that of the Codex Alimentarius Commission in Rome, but for the animal kingdom, to get their beef declared generally safe for consumption. Where does that leave Japan? Unmoved. It says that its standards are higher than those of the OIE or the US, and have to be given priority.
And then there’s the case of ractopamine, a growth promoter added to pig feed. China and the European Union, which together produce 70% of the world’s pork, say that it is not safe for humans and have banned its use in meat production. The same is true for more than 150 other countries. In the United States, however, home to Eli Lilly, the pharmaceutical giant that produces ractopamine by way of its subsidiary Elanco, the drug is fed every day to pigs, cows, and turkeys, and Washington fights tooth and nail to defend the interests of US corporations and prevent countries from rejecting US pork for containing residues of the stuff. The US and Eli Lilly are working hard to try to convince Codex to declare it safe for human consumption.
Beijing, for its part, has so far refused to budge. But that doesn’t mean that Chinese consumers are getting ractopamine-free pork. The same government fighting off ractopamine-laced US pork is aggressively pushing, in the name of “food safety”, a consolidation and modernisation of the country’s pig production based on the US factory farm model. China’s two largest, vertically integrated pork producers, Yurun and Shineway, both of which have been heavily funded by the US bank Goldman Sachs, were implicated in recent food safety incidents involving ractopamine and clenbuterol (another banned drug added to pig feed for the same purposes). 8 In March 2011, Chinese consumers were shocked when a CCTV television report uncovered how ractopamine and clenbuterol are widely used in the farms supplying Shineway in Henan Province. 9 The report found that Shineway was actually offering farmers higher prices for pigs fed ractopamine. 10
Superbugs and megafarms
“Superbug” is a term used to describe bacteria that have acquired the ability to resist commonly used antibiotics. One of the most notorious is Methicillin-resistant Staphylococcus aureus (MRSA), which emerged in the 1960s in the UK and has since spread around the world, with deadly consequences. In the US alone, 17,000 people died from MRSA infection in 2005. i
MRSA is typically associated with hospitals, where the superbug has a tendency to get into open wounds and cause difficult-to-heal infections. But in recent years these superbugs have found another place to thrive: industrial pig farms. ii
In 2004, Dutch researchers identified a new strain of MRSA, later labelled ST398 or “pig MRSA”, which they found in people in close contact with Dutch pig farms. Within two years ST398 had become a leading source of human MRSA infection in the country, accounting for more than one in five human MRSA cases. Studies showed that these cases were closely linked to contact with pigs, and further research revealed that ST398 was running rampant in pigs on Dutch farms. A 2007 survey found ST398 in 39% of pigs and 81% of local piggeries. iii
New surveys of farms outside of the Netherlands have turned up similar numbers. iv The first-ever EU-wide survey for MRSA on pig farms in 2009, using a method that “largely underestimates MRSA prevalence”, found ST398 in more than two-thirds of EU member states. Spain and Germany had the highest incidence, with over 40% of pig holdings testing positive for MRSA. v Not surprisingly, given the European pig industry’s heavy exports overseas, ST398 is turning up in pigs beyond Europe’s borders, too. A study of pigs in the Canadian province of Ontario, for instance, found ST398 in a quarter of local pigs, as well as in one-fifth of the pig farmers tested. vii Only one study has been conducted in the US so far: it was a pilot study of two large hog operations in the Midwest that found ST398 in 49% of the pigs and 45% of the workers. vii
MRSA has the potential to evolve in very dangerous ways in its new home on pig farms. The density of animals in factory farms allows the bacteria to evolve rapidly and in diverse ways. Also, the use of antibiotics on factory farms is ubiquitous. Pigs are routinely fed antibiotics in their feed and water, often as a preventive measure against disease outbreaks and even simply to increase growth rates.
In the US, 80% of all antibiotics consumed annually are consumed by livestock. viii In China, the figure is nearly 50%. ix Even in the EU, where the non-therapeutic use of antibiotics for animals is banned and where the types of antibiotics allowed for livestock are controlled, the use of antibiotics for animals still exceeds their use for humans. In Germany, for example, three times as many antibiotics are given to animals as to humans. x Such widespread use of antibiotics in factory farms speeds up the development of antibiotic resistance among bacteria. Unlike other strains of MRSA, ST398 can already withstand tetracyclines, a group of antibiotics that is given heavily and regularly to pigs in factory farms. The medical profession is getting increasingly worried about what this will mean for the future of human health care, as antibiotics may become useless. The WHO now calls it “the greatest threat to human health”. xi
The good news, however, is that ST398 still hasn’t shown much virulence in humans, nor is it easily transmitted between people. Not yet, at least.
In 2010, a 14-year-old girl in France, recovering in hospital from pneumonia, was infected with a superbug. She soon began having serious respiratory problems, her lungs started bleeding, and within six days she died. The superbug that killed her was a clone of MRSA ST398 that is known to circulate in humans. The most alarming issue for the French doctors studying the case was that this was the first incident on record in which this strain of MRSA had acquired the capacity to produce a lethal toxin in humans, something that certain other strains of superbugs are able to do. They reasoned that if the clone of MRSA ST398 could do it, then surely “pig MRSA” has the same capacity. xii
It is not much of a stretch to imagine a situation where “pig MRSA” passes from a pig to a farm worker carrying another MRSA strain with virulence to humans, mixes with that strain, and acquires its capacity for virulence. The new virulent strain of ST398 could then easily pass back into the pigs, where it would rapidly amplify and spread. ST398 is transmitted to humans not only through contact with live pigs: the bacterium is also present on meat sold in supermarkets and can be carried over large distances by the insects that pass in and out of farms. xiii
The EU is slowly starting to take action to defend against such a possibility. It has implemented several measures to restrict the use of antibiotics in livestock production and, at national and at EU level, some surveillance of farms is being carried out. In 2009, a panel of the European Food Safety Authority recommended that the EU move towards “systematic surveillance and monitoring of MRSA in intensively reared animals”. South Korea, for its part, banned the use of seven antibiotics in animal feed in 2008, and implemented a national programme to reduce the use of antibiotics on livestock farms. But such restrictions on the use of antibiotics for livestock hardly exist in the US, although proposed legislation restricting the non-therapeutic use of certain antibiotics in feed is currently before Congress. As for surveillance, the US National Antimicrobial Resistance Monitoring System doesn’t even test for MRSA. xiv Outside the industrialised countries, where the meat industry is expanding most rapidly, there is an almost complete absence of controls on the use of antibiotics in agriculture and of surveillance for pathogens such as MRSA.
Enhancing surveillance and cutting back on the use of antibiotics in factory farms are important measures. But they aren’t enough to deal effectively with the threat posed by MRSA and the myriad other pathogens that thrive in factory farms. A staggering 61% of all human pathogens, and 75% of new human pathogens, are transmitted by animals, with many of the most dangerous – such as bird flu, BSE, swine flu and the Nipah virus – having emerged from intensive livestock farms. xv It is the way that animals are farmed that is fundamentally at issue. xvi
i E. Klein, D.L. Smith, R. Laxminarayan, “Hospitalizations and Deaths Caused by Methicillin-Resistant Staphylococcus aureus, United States, 1999–2005″, Emerg. Infect. Dis. Vol. 13, No. 12, 2007, pp. 1840–46.
ii Ed Yong, “MRSA in pigs and pig farmers”, 23 January 2009, http://scienceblogs.com/notrocketscience/2009/01/mrsa_in_pigs_and_pig_farmers.php
iii X.W. Huijsdens et al., “Community-acquired MRSA and pig-farming”, Ann. Clin. Microbiol. Antimicrob., Vol. 5, No. 26, 2006; A.J. de Neeling et al., “High prevalence of methicillin resistant Staphylococcus aureus in pigs”, Vet. Microbiol., Vol. 122, No. 3–4, 21 June 2007, pp. 366–72; I. van Loo et al., “Emergence of methicillin-resistant Staphylococcus aureus of animal origin in humans”, Emerg. Infect. Dis., Vol. 13, No. 12, 2007, pp. 1834–9.
iv Danish Integrated Antimicrobial Resistance Monitoring and Research Programme, http://www.danmap.org/pdfFiles/Danmap_2009.pdf
v “Pig MRSA widespread in Europe”, Ecologist, 25 November 2009; Broens et al., “Diagnostic validity of pooling environmental samples to determine the status of sow-herds for the presence of methicillin-resistant Staphylococcus aureus (MRSA)”, Poster presented at the ASM–ESCMID Conference on Methicillin-resistant Staphylococci, in Animals: Veterinary and Public Health Implications, London, 2009.
vi “Guelph Researchers Find MRSA in Pigs”, University of Guelph, 8 November 2007, http://www.uoguelph.ca/news/2007/11/post_75.html.
vii T.C. Smith, M.J. Male, A.L. Harper, J.S. Kroeger, G.P. Tinkler et al., “Methicillin-Resistant Staphylococcus aureus (MRSA) Strain ST398 Is Present in Midwestern US Swine and Swine Workers”, PLoS ONE, Vol. 4, No. 1, 2009.
viii See “New FDA Numbers Reveal Food Animals Consume Lion’s Share of Antibiotics”, Center for a Liveable Future, Johns Hopkins University, 23 December 2010, http://www.livablefutureblog.com/2010/12/new-fda-numbers-reveal-food-animals-consume-lion%E2%80%99s-share-of-antibiotics
See also Margaret Mellon, Charles Benbrook, Karen Lutz Benbrook, “Hogging It!: Estimates of Antimicrobial Abuse in Livestock”, Union of Concerned Scientists, 2001, http://www.ucsusa.org
ix “Half of China’s antibiotics fed to animals: expert”, Xinhua, 26 November 2010.
x Kristen Kerksiek, “Farming out Antibiotics: The fast track to the post-antibiotic era”, Infection Research, Germany, 22 March 2010, http://www.infection-research.de/perspectives/detail/pressrelease/farming_out_antibiotics_the_fast_track_to_the_post_antibiotic_era/
xi AAP, “Greatest threat to human health”, Sydney Morning Herald, 16 February 2011, http://www.smh.com.au/lifestyle/wellbeing/greatest-threat-to-human-health-20110216-1awai.html
xii Frédéric Laurent, “Les souches de staphylococcus aureus ST398 sont-elles virulents”, Bull. Acad. Vét. France, Vol. 163, No. 3, May 2010.
xiii See Aqeel Ahmad et al., “Insects in confined swine operations carry a large antibiotic resistant and potentially virulent enterococcal community”, BMC Microbiology, 2011, http://www.biomedcentral.com/1471-2180/11/23/abstract
xiv Maryn McKenna, “Alarm over ‘pig MRSA’ – but not in the US”, Wired, 30 October 2010, http://www.wired.com/wiredscience/2010/10/alarm-over-pig-mrsa-%E2%80%94-but-not-in-the-us/
xv John McDermott and Delia Grace, “Agriculture-Associated diseases: Adapting Agriculture to improve Human Health”, ILRI, February 2011.
xvi GRAIN, “Germ warfare: Livestock disease, public health and the military-industrial complex”, Seedling, January 2008, http://www.grain.org/seedling/?id=533
Food safety and global trade: Europe and the US impose their standards
As the two examples above help to show, trade agreements have become the core mechanism to expand and enforce food safety standards around the world. Since the 1980s and the Uruguay Round of GATT negotiations, which gave rise to the World Trade Organisation (WTO), agricultural markets have been profoundly liberalised, with tariffs and quotas coming down, particularly in developing countries. 11 This has led to a boom in global food trade, with few countries free to impose tariffs or take similar measures to regulate the flow of imports and exports any more. As a result, governments and corporations have turned to other measures to manipulate market access and control. In agriculture, food safety is the major method.
In essence, as quantitative restrictions no longer exist (as a tool to open and close markets), qualitative ones have been invented to take their place. The WTO has played a direct role in this shift. (See Annex: Who does what?) But today, it is mainly through so-called free trade agreements, negotiated at the bilateral or regional level, that governments recalibrate the rules of food safety. Too often, the food safety rules that emerge from trade negotiations become mechanisms to force open markets, or backdoor ways to limit market access; they do little to protect public health, serving only corporate growth imperatives and profit margins.
Take the EU, which has become expert at defending some of the most ridiculous standards. In the late 1990s, the EU banned fishery products from India because of unacceptable sanitation risks supposedly found there. But the EU’s definition of “sanitary” can be absurd. It demanded, for instance, that the floors and ceilings of fish landing units be washed with potable water 12 – this in a country where a sizeable fraction of the population lacks access to potable water. For Indian fishers and processors, the point of such rules is not to protect the end consumer; it is to discourage Indian companies from entering the EU market, by imposing conditions that only EU companies can comply with.
Experiences in Africa bear this out. According to the United Nations, Tanzanian fishermen dependent on exports to the EU lost 80% of their income under a ban similar to the one placed on India. 13 Uganda, in the same situation, lost almost US$40 million. Did the Europeans stop eating fish? No. In fact, while these bans were conveniently in place, EU firms, such as the Spanish group Pescanova, aggressively expanded their fishing activities in African waters to serve the lucrative European market by buying up quotas and licenses. 14 Today, with Brussels pursuing a flurry of new generation trade deals, things are getting worse (see Box: EU–India FTA).
Consider peanuts. The EU has long posed problems to the rest of the world with its excessively high standards related to aflatoxins. Aflatoxins are mycotoxins produced by certain fungi and moulds. In humans they can attack or even shut down the liver, as well as cause cancer. While adults have a high tolerance to aflatoxin poisoning, children do not, and can be exposed to it through grains, nuts, fruit or cheese. With the growing prominence of food safety as a concern for EU authorities, Brussels has set tolerance limits for aflatoxins grossly out of proportion to the risks. 15 This has hit Iranian pistachio producers, Gabonese peanut exporters, Bolivian brazil nut harvesters and Filipino coconut farmers. The World Bank calculates that the exaggerated aflatoxin tolerance level imposed by the EU costs African countries US$670 million a year in export losses. 16 For many observers, it is hard to square those losses against the benefit of preventing the potential death of 0.7 people in a population of 500 million per year. 17 In fact, there are cases where the overzealous aflatoxin restrictions have only led to bidding wars to drive peanut prices down – for the benefit of European importers, of course. 18
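The scale of that trade-off can be made concrete with a back-of-the-envelope calculation using only the two figures just cited; the resulting cost-per-death figure is derived here for illustration, not stated by the World Bank itself:

```python
# Back-of-the-envelope sketch: implied cost per statistical death averted
# by the EU aflatoxin limit, using the figures cited in the text.
export_losses_usd = 670e6        # annual export losses to African countries (World Bank)
deaths_averted_per_year = 0.7    # estimated deaths prevented per year in a population of 500 million

cost_per_death = export_losses_usd / deaths_averted_per_year
print(f"~US${cost_per_death / 1e6:.0f} million per statistical death averted")
# → ~US$957 million per statistical death averted
```

On these numbers, each statistical death averted costs African exporters on the order of a billion dollars in lost trade, which is the disproportion the critics cited here are pointing at.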
The United States is slightly different in its demands. To begin with, the US is generally seen to have lower standards than Europe with regard to pesticide and chemical residues. In fact, Brussels seems constantly to be engaged in some spat with Washington DC. For instance, US poultry destined for export is routinely dunked in chlorine just before it is shipped. This is to kill the bacteria that have accumulated in the birds’ carcasses through the quintessentially American “factory farming” production process. 19 The Europeans do not allow the import of chickens bathed in chlorine, so no US poultry enters the EU market. The US also carries out fewer physical checks on its own food imports. It examines only 2% of all incoming fish shipments, for instance, even though some 80% of fish consumed in the US is imported. This laxity exemplifies a US food safety system which has long relied on self-regulation by the industry, particularly through Hazard Analysis and Critical Control Points (HACCP) checks, rather than public oversight and accountability. 20
At the trade negotiating table, the US government is well known – and feared – for pushing lax standards on genetically modified foods. Indeed, a diplomatic cable uncovered by Wikileaks shows that the George W. Bush administration pressured the French government to ease its stance against GMOs. In a 2007 cable, the US ambassador to France went so far as to suggest that “we calibrate a target retaliation list that causes some pain across the EU since this [acceptance of GMOs] is a collective responsibility, but that also focuses in part on the worst culprits”. He added: “The list should be measured rather than vicious and must be sustainable over the long term, since we should not expect an early victory”. 21
Such “diplomacy” is for the clear and direct benefit of Monsanto, DuPont and other agricultural biotechnology corporations that do not like foreign countries banning GM seeds or foods, much less requiring labels that inform consumers of the presence of GM ingredients. US firms, especially the members of the Biotechnology Industry Organisation, religiously use Washington’s FTA talks as a platform to secure market access for GMOs through aggressive regulatory reforms. 22 Besides GMOs, US trade policy is also seen as destabilising other countries’ sovereignty over food safety and health matters, insofar as Washington regularly demands relaxation of rules against the import of US farm products that others deem risky, such as beef (BSE, hormones), veal (hormones), chicken (chlorine) and pork (swine flu).
The US and the EU have much in common, though (see Box: How EU and US use free trade deals to twist other people’s taste buds). Both rely on inspecting and accrediting specific farms, fisheries or manufacturers as meeting or surpassing US or EU standards before they may export food to them (see Box: “Falling through the GAP”). While this might seem extraordinarily protective of EU or US consumers, it also invites corporate takeover and concentration. For example, when the EU lifted a six-year import ban on Chinese poultry in 2008, in reality it gave the nod to only a handful of meat factories in Shandong Province certified to export to the EU, one of which had been taken over just two weeks earlier by Tyson, the world’s second-largest meat company. 23 Both the US and the EU also create bilateral committees with their trade partners to continue the conversation on “harmonisation” – to develop not only mutually agreed food safety practices but also new standards, including international ones. The EU is using these mechanisms to pursue its agenda of introducing “animal welfare” into the pool of world food trade norms.
Free Trade Agreements (FTAs) are used to fight food safety battles not only by the US and the EU, of course. Countries like India or Australia or Brazil are not just on the receiving end of US or EU pressures. They have their own sanitary standards, agendas and needs. India, for instance, through a gradually maturing FTA strategy, is fighting an uphill battle to attract foreign investment while still controlling its agricultural markets. During US President Obama’s visit to India in November 2010, Indian Agriculture Minister Sharad Pawar made it clear that the United States can produce all the scientific studies it wants, and they will be respectfully reviewed, but India will not import US dairy products that offend domestic religious sensitivities. 24 The Japanese government, in its zeal to sign FTAs, especially with Australia and the US, also has a difficult tightrope to walk on the issue of GMOs, as it needs to respect its own electorate’s preference for GM-free foods. Southern African states such as Namibia have raised serious questions about how to be proactive in pushing their own “development” strategies and needs in trade negotiations with the EU, where Sanitary and Phytosanitary Standards (SPS) requirements – which are very costly to comply with – can undermine local benefits. The difference is that these countries are not out to change others’ food safety standards. The US and the EU most clearly are.
EU-India FTA: Bad news for small fishers and fishmongers
An excellent report from Focus on the Global South in collaboration with Intercultural Resources shows how the EU’s upcoming free trade agreement with India will affect small-scale fisherfolk and fish vendors, particularly women, in the subcontinent. The findings can be summarised thus: i
What the EU will get from the EU–India FTA
* Tariff cuts (for EU fish going to India).
* Traceability requirements (fish going to the EU must comply with EU certification – not the FAO’s – against illegal fishing), thereby cutting out competition from Indian operators.
* The right to sell Indian fish in the Indian market (probably through supermarkets).
* General investment protections (the right for EU firms to go to India and set up shop).
* National treatment (though it is still to be seen whether India will exempt access to its Exclusive Economic Zone, as Chile did in its EU FTA, or to its coastal lands, both of which are crucial for local fishers).
What India will get
* Slightly greater market access (EU tariffs not being high to begin with) but at the cost of very high food safety standards (barriers to entry), which is of no use to small fishers or traders.
i See “Economic liberalisation and gender dynamics in traditional small-scale fisheries: Reflections on the proposed EU–India free trade agreement”, Focus on the Global South Occasional Paper 8, New Delhi, August 2010, http://www.focusweb.org/content/occasional-paper-8-economic-liberalisation-and-gender-dynamics-traditional-small-scale-fishe
New standards open new markets
Food safety, strictly speaking, is a matter of preventing illness. But the boundaries of what we bundle under this concept can be stretched to include broader issues of food quality. Halal, GM-free, cruelty-free and organic foods are all examples of growing markets that are generally handled, for practical purposes, by the current food safety regime (standards, audits, certification, traceability and dispute mechanisms). Similarly, at the policy level these considerations are regulated by food safety authorities, and in trade talks they form part of sanitary and phytosanitary chapters or agreements. 25 Many of these broader food quality concerns are not necessarily about product standards, but processes. Therefore they tend to get defined and controlled through schemes rather than standards per se. And if care is not taken, they can be quite arbitrarily defined to suit the needs of transnationals like Cargill or Carrefour, rather than the needs of local communities or of public health generally.
While demands for GM labelling and organic foods are relatively more integrated into food safety or food marketing regimes, a shake-out is needed soon with regard to halal foods and animal welfare issues. 26
The halal food market, currently valued at around US$600 billion, or 16% of the global food retail market, is expanding fast, and will continue to grow in the coming years. 27 But what constitutes halal food is a highly contested issue. There is no global standard, and within any given country there may be different or even competing standards. 28 At the international level, the Organisation of the Islamic Conference is the forum that needs to come to terms with this. In 2008, Malaysia and Turkey agreed to develop jointly some harmonised or common standards, for adoption by the OIC at large, but this is unlikely to pass uncontested (see Box: Religion as a racket).
Religion as a racket i
For some, the very idea of formalising norms and standards for halal food production reeks of a racket to make money out of people’s spiritual sensitivities. In a Muslim country like Algeria, why would there be any need to legislate on what constitutes halal food when the food produced in Algeria is halal? The push to define, and communicate to consumers, official halal food is really aimed at denting the pockets of Muslim consumers in Christian and other non-Muslim countries.
Even in the Philippines, if you listen to media reports of what the political class is up to, you could hardly be blamed for understanding that the momentum to develop domestic halal standards and guarantees is primarily aimed at facilitating the export of Philippine mangoes and other such foods to Saudi Arabia and neighbouring Gulf states. Any benefit for the Philippines’ Muslim population would seem secondary. If Islamic states and organisations now push for harmonisation of halal food standards, it is to serve purely commercial interests.
Isla Délice poster, France (“Proudly Halal”)
i This commentary is based on an interview with Meriem Louanchi of AREA-ED in Algeria.
Animal welfare is another issue altogether. It seems to be a predominantly European regulatory concern, but this alone means that it is fast becoming a responsibility for the rest of the world. By 2013, the EU will implement new standards on animal slaughter, including stunning, and these norms will have to be followed by anyone planning to export meat to the EU. As already noted, the EU increasingly includes animal welfare in its bilateral trade agreements, making explicit demands on partners to work with the EU to draw up international standards in this area. So far, Chile, Korea, Colombia, Peru and Central America have accepted the EU’s demands, including commitments to work with the Europeans on drawing up global legal standards. 29
Internationally, the OIE is expected to adopt, very soon, a recommended set of principles for animal welfare in international trade. 30 But who defines these principles, and who enforces them as international norms? There are no international legal standards for animal welfare. At the OIE, the debate is divided along North–South lines. The major complaint from the South is that the OIE’s proposed animal welfare framework is based on private standards. Developing countries have already had bad experiences with private standards on animal health, and expect more of the same if the task of drawing up animal welfare norms falls to non-public entities. 31
In these emerging fields, the question truly is: whose norms are we talking about — and for whose benefit?
Recap: How EU and US use free trade deals to twist other people’s taste buds
* Get GMOs accepted (US).
* Wrest space for GM policy-making outside the United Nations system (US).
* Impose high standards to keep competition down (EU).
* Require market openings for banned or unwanted foods (US).
* Create bilateral committees to continue shaping policy, away from public scrutiny (both).
* Impose farm-based accreditation systems, creating vulnerability to corporate takeover (both).
* Require bilateral cooperation on international standard setting, including the development of new standards (both).
Food safety, now on offer at Walmart
It would be wrong to take diplomatic or legislative wrangling as evidence that governments are getting serious about food safety. While they spare no expense in ensuring that regulations do not harm export markets for their food companies, when it comes to managing the risks generated by the industrial food system, deregulation and hands-off attitudes are very much the order of the day. Governments may define and administer the legal framework of food safety and similar standards, but the action and the agenda are very much left in the hands of the private sector. One could even say that food safety is hardly a matter of public policy at all any more, as so much revolves around private standards, voluntary controls and obscure industry bodies, all under the thumb of the largest food corporations.
Consider beef. The US government insists that US beef is the safest in the world, but buyers know better. “If you look at food recalls over the past two years, there’s been a significant increase”, says Frank Yiannas, vice-president for food safety at Walmart, one of the country’s largest beef retailers. The US government’s response to this alarming rise in meat recalls: no new measures. Walmart’s response: a set of its own new standards to which its US beef suppliers will have to conform by June 2012. Walmart says that its standards will provide its customers with an “additional layer” of protection beyond the tests for Escherichia coli and other pathogens that the meat industry already conducts. “This is really a response to long-term trends in beef recalls”, says Yiannas. 32
Supermarkets: Setting their own standards
US beef regulations, and even the regulations that the Japanese government imposes on US beef imports, aren’t good enough for Japan’s food service sector. Although Tokyo lifted, in 2006, its ban on US cattle aged 20 months or younger, Zensho, Japan’s largest food service company, wants US beef suppliers to provide it with special safeguards, particularly concerning BSE. In December 2010, Zensho announced that it had struck a deal with JBS, a Brazilian company that is one of the largest beef producers in the US, to provide Zensho with beef from cattle certified to have been raised without feed containing “BSE-responsible material”. Under the terms of the agreement, JBS must segregate “Zensho cattle” during the transportation, finishing and processing stages. JBS must also ensure that “Zensho cattle” are processed only at the beginning of a production shift and only after the equipment and facilities have been specially sanitised. Zensho inspectors will be physically present to monitor the process, and the final product will be marketed in Japan as “Zensho SFC beef”. 33
Along the same lines, French supermarket behemoth Carrefour announced in November 2010 that it will start labelling 300 of its own-brand, animal-based products sold in its stores as “Fed GM-free” (“Nourri sans OGM”).
The customers of these companies may appreciate such measures. But what about everyone else? The only accountability in such a system is to shareholders, not the public; private standards are all about the bottom line. To give one example of how this can play out, poultry companies in South Africa regularly take frozen chicken that is past its best-before date from supermarkets in wealthy neighbourhoods, recycle it by thawing, washing and injecting it with flavouring, and then sell it to shops in black townships. The poultry companies deny that the practice is racist, and claim that they are actually following standards higher than those required by the Department of Health. 34
Walmart in Central America
Traditional markets are disappearing fast in Central America. Already at least one in four quetzales spent by Guatemalans on food is spent in a Walmart-owned supermarket, while Costa Ricans spend one in three colones there. And yet, nearly all the horticultural products sourced from the region by Walmart’s Central American operations come from its own subsidiary, Hortifruti, which sources from a mere 1,800 growers. In Honduras, Hortifruti accepts supplies from 395 horticultural growers out of a total of 18,000 in the country, with most of the produce coming from a core of 45 preferred producers, who have at least 4 ha under drip irrigation and their own trucks – all trained by Bayer in “good agricultural practices”. i Moreover, half of the produce sold by Walmart stores in Central America is imported, much of it from big farms in Chile. ii
i For more on Hortifruti, see Madelon Meijer, Ivan Rodriguez, Mark Lundy and Jon Hellin, “Supermarkets and small farmers: the case of Fresh Vegetables in Honduras”, in E.B. McCullough et al., The Transformation of Agri-Food Systems, Earthscan, 2008; Alvarado and Charmel, “The Rapid Rise of Supermarkets in Costa Rica”, 2002; Berdegué et al., “The Rise of Supermarkets in Central America”, 2003.
ii Thomas Reardon, Spencer Henson and Julio Berdegué, “‘Proactive fast-tracking’ diffusion of supermarkets in developing countries: Implications for market institutions and trade”, Journal of Economic Geography, Vol. 7, No. 4, 2007.
Small farmers at the losing end
More and more of the food that people buy is delivered to them through the supply chains of transnational supermarkets and food service corporations (see Box Supermarket tsunami). These companies now wield enormous power in deciding where food is produced and where it is sold, and they increasingly want to dictate exactly how it is produced and handled. Food standards have become a central way for them to organise global markets.
Thomas Reardon and fellow economists Spencer Henson and Julio Berdegué have tracked the rise of supermarkets in the South. They find that supermarket development moved along very slowly outside industrialised countries between the 1950s and the 1980s. During those years, supermarkets remained confined to small pockets of wealthy consumers in large cities, who could afford the higher prices. But things changed “abruptly and spectacularly” in the 1990s.
Reardon and his colleagues divide this supermarket take-off in the South into three waves.
The first wave occurred in the early 1990s in much of South America, East Asia (outside China and Japan), northern Central Europe and South Africa. In these countries, supermarkets quickly moved from a 10% share of the overall retail food market to a 50–60% share. In Brazil the current figure is 70%, and in Argentina Carrefour alone has 25%.
The second wave began in the mid-1990s, in Central America, Mexico, much of South-east Asia and southern Central Europe. In these countries, the supermarkets’ share of overall food retail moved from 5–10% in 1990 to 30–50% by the early 2000s. Today, one out of three pesos spent on food in Mexico goes to Walmart.
The third wave started in the late 1990s and early 2000s in some countries in Africa, such as Kenya, in Latin America, such as Peru and Bolivia, and in Asia, such as Vietnam, China, India and Russia. This third wave is now in full swing, with multinationals pouring into these countries alongside domestic competitors. Even in Africa, supermarket expansion is taking off, led by African-based companies like Nakumatt and Shoprite. TNCs are now also moving in. In December 2010, Walmart put forward an offer to buy 51% of South African retailer Massmart, one of the largest distributors of consumer goods in the region, with some 290 outlets across 13 countries in Africa. The deal is being hotly contested by South African unions and still needs to be approved by the country’s regulatory authorities.
Overall, supermarket expansion is happening five times as fast in developing countries as it did in the US or the UK. What accounts for this sudden take-off? Reardon and his colleagues say the main factor was the liberalisation of foreign investment policy during the 1990s, which opened the door to investment from large foreign retailers. They also point to the “proactive fast-tracking” strategy of supermarkets to create the “enabling conditions” for their expansion, mainly by setting up direct, standardised procurement systems, which can keep costs down. They say that municipal policies favouring supermarkets also played an important role. i
i Thomas Reardon, Spencer Henson and Julio Berdegué, “‘Proactive fast-tracking’ diffusion of supermarkets in developing countries: Implications for market institutions and trade”, Journal of Economic Geography, Vol. 7, No. 4, 2007.
Supermarket standards for fresh fruit and vegetables reveal much about who wins and loses within the corporate regulatory apparatus. Fresh fruit and vegetables are extremely important to retailers because they bring shoppers into their stores on a more regular basis, keeping overall sales up. Supermarkets have tried to capture this market by offering low costs and quality assurances. Their main strategy in this regard has been to source from “preferred suppliers” that can provide large volumes from low-cost production areas, assure traceability of the produce all the way back to the farm, and ensure that it was grown according to the standards stipulated by the supermarkets.
Today, big food retailers such as Tesco, Walmart, Carrefour and Lotte are focusing on expanding their operations in the South, where markets are growing. India, China, Brazil and Indonesia are among the prime targets. In these and other developing countries, however, produce markets are still dominated by informal supply chains, from peasants and small co-operatives to local wholesalers and street vendors. So the supermarkets impose their own procurement models, using a common set of standards as a basis for restructuring. They also have to deal with the competition from local and regional elites, such as the Matahari chain in Indonesia, or Big C in Thailand.
The basic picture of these global supply chains is arranged as follows. At the top stand the big retailers – the word “big” here being an understatement. Walmart, the globe’s largest food retailer, rings up annual food sales of US$405 billion – more than the annual GDP of Austria, Norway, Saudi Arabia, Iran, Greece, Venezuela, Denmark, or Argentina. The four largest global food retailers – Walmart, Carrefour, Metro, and Tesco – have combined annual food sales of US$705 billion. That’s more turnover than the annual output of Turkey or Switzerland. Their sheer size and buying power gives them tremendous leverage over the entire global food system: they are able to dictate terms to all their suppliers, from farmers to food processors. 35
They work together, with input from the biggest food companies and agribusiness firms, to develop common standards for foods (from farming to packaging) that their suppliers have to follow. An example is GlobalGAP. In the context of a largely laissez-faire – or at least industry-friendly – global food safety policy regime, these standards are emerging as the shadow food safety structure for much of the world (See Annex: Who does what?). And to emphasise a key point, these gigantic companies are accountable to their shareholders – and to a small extent their customers – but to no one else.
Below the supermarket giants are the suppliers. These are large companies that source and ship from around the globe, and increasingly from their own farms or from contract production schemes that they manage. Then there are the producers. More and more, production is centralised in “hubs” or “zones” where production of specific fruits or vegetables is cheap and organised according to the standards dictated by the supermarkets. Some well-known examples are grapes in Chile, green beans in Kenya, and apples in China.
Much has been said about how countries can position themselves to benefit from this global supermarket expansion. To gain access to supermarket shelves, local governments and donors devote huge resources to trying to build production capacity in poor countries. Supermarket growth is even portrayed as an “opportunity” for small growers. The reality is quite different (see Box: Walmart in Central America).
First, foreign retailers moving into southern countries compete directly with local and traditional markets. As they expand, they capture space from small vendors, traders and farmers’ markets, which are served primarily by small-scale growers and vendors. Developing countries are not merely sites for export production to Western supermarket supply chains; increasingly, they are consumer markets for those chains as well (see Box: The supermarket tsunami).
Second, supermarkets have access to global procurement networks through which they can access cheap produce and force down prices. If local oranges are too costly for its Indonesian stores, Carrefour can bring in oranges from its suppliers in Pakistan or China. A whopping 70–80% of the fruits sold in supermarkets in Indonesia are imported, mostly from regional supermarket supply hubs in Thailand and China. 36
Third, the suppliers that serve supermarkets, and the standards that they are obliged to follow, leave no room for traditional farming (see Box: Falling through the GAP). The only window of opportunity for a small-scale grower who wants to sell to supermarkets is tightly controlled contract production, where the company dictates everything, from the seeds to the pesticides used. Such contract farming schemes erode biodiversity and local food systems and cultures. But even this option is usually not possible, as compliance is generally too costly and impractical for small-scale growers. So more and more of the actual farming is being carried out and managed by the “preferred suppliers” themselves, with heavy involvement from the supermarkets (see Box: Cold shoulder for Ugandan farmers).
Of course, many domestic supermarkets and supply chains – from ShopRite of South Africa to DMA of Brazil – are implementing this model as well. And while some will surely grow and become regional giants, they are easy prey for buyout by Northern cousins.
US-based Fresh Del Monte Produce is one such “preferred supplier” of fresh fruit and vegetables to global supermarket chains. According to the company’s CEO, Mohammad Abu-Ghazaleh, “Retailers today are more inclined to work with someone who can assure them that his product has come from his own farm, has been packed under his own packing plant, with shipping under his control and delivering it to his customer, also under his control”. His company produces 39% of its bananas, 84% of its pineapples, and 81% of its melons on its own plantations, mainly in Central America, and runs a vertically integrated poultry business in Jordan that supplies retailers and transnational corporations (TNCs) in the Middle East. In 2009, 13% of its total sales were to Walmart.
Peru is described as a success in penetrating supermarket supply channels. It was prodded into the business under Washington’s so-called “war on drugs” 20 years ago. Since then, exports of asparagus to the EU and North America have taken off. But this has dramatically transformed local agriculture. Asparagus used to be produced by small-scale farmers, but today they account for less than 10% of the country’s production, which is now dominated by large-scale export-oriented firms. Just two companies – Del Monte and Green Giant, both of the US – today control a quarter of Peru’s asparagus exports. 37
In 2000, Ghana tried a similar programme, but with a focus on the production of pineapples for European supermarkets. In the first four years, exports of pineapples to Europe surged, from around 20,000 tonnes to around 50,000 tonnes, and much of it was supplied by small Ghanaian farmers and mid-sized traders. 38 But in 2005, Ghana’s market crumbled. Without warning, European retailers, lobbied by Del Monte, unilaterally decided to begin purchasing only the MD2 variety of pineapple, and no longer to accept the Sweet Cayenne variety produced in Ghana. They also began requiring EurepGAP certification from their suppliers, especially on pesticide residues. The sudden shift was too much for Ghana’s pineapple farmers and exporters. Both EurepGAP certification and the MD2 variety, due to the high costs of plantlets and the extra logistics required, were beyond their reach. They were forced to shut down, and TNCs moved in. In 2004 there were 65 pineapple exporters in Ghana. Today, just two companies control nearly 100% of Ghana’s pineapple exports: Dole of the US, which sources mainly from its own farms, and HPW Services of Switzerland, which sources from three large growers. 39
In Vietnam, small fish breeders and businesses trying to ride the wave of popularity of Tra – or catfish, as it is now being marketed (as a cheap family food) in Europe and North America – have had to jump a number of hurdles. In the US, a massive campaign run by domestic catfish producers, who cannot compete with the low-priced Tra, tries to paint Vietnamese fish as “filthy”. In Europe, the World Wide Fund for Nature (WWF) put Tra on its “red list” of products that conscientious consumers should avoid. The boom in intensive Tra farming for these lucrative new export markets has indeed attracted the worst of practices and people. But to be fair, a number of businesses have been trying to meet the global standards. The problem is, precisely, these standards. One Tra fish farmer, Nguyen Huu Nghia, bitterly called it a “labyrinth”. 40 He and other small fish breeders were told first to follow the Safe Quality Food (SQF) standards, run by a private certification outfit in the US. Then they were told to follow something called SQF-1000. Then it was recommended that they adopt GlobalGAP standards. And now, in order to shake off the bad name given to Vietnamese fish by WWF, they are told to comply with the WWF’s criteria through the Aquaculture Stewardship Council (ASC). If all Tra producers followed, say, the GlobalGAP and the ASC standards for a squeaky clean product that is safe for international consumption, it would cost the Vietnamese no less than US$22 million per year! 41 Apart from the bewildering array of private standards that no one can really vouch for, who can afford this and what is the point? (See Box: “Falling through the GAP”.)
Bigger players will pay the extra costs for the GlobalGAP “stamp” because, for them, privileged access to the expanding empires that supermarkets are building is worth the price. As one Kenyan exporter puts it, “I tend to be particularly positive about this [certification]. It might sound a bit cynical, but it’s an entry barrier to the business. The more standards there are, the less competition we are going to have”. 42 Tough luck for Kenyan small outgrowers, more than half of whom were dropped immediately once supermarkets began demanding adherence to their GAP norms. 43
It needs to be emphasised that it is not just in exports that this concentration is happening. As supermarkets take over larger shares of the food markets in the South, the distinction between export markets and domestic markets is disappearing, with the same standards being applied for both. This leaves small farmers, and the biodiversity they maintain, with a dwindling space in which to survive.
Falling through the GAP
In 2002, the US closed its border to imports of cantaloupe melons from Mexico after several Salmonella outbreaks were traced back to Mexican fruits. i A year later, under an agreement worked out between US and Mexican authorities, the ban was lifted for cantaloupes that showed compliance with the Mexican government’s new “Programme of federal recognition requirements for production, harvest, packaging, processing and transport of cantaloupe”. But with the enforcement of this GAP programme, modelled on standards set by US retailers, few Mexican growers could re-enter the market.
Under the GAP requirements, farms have to have portable toilets for use during planting and harvest. A survey of small growers in one of the important cantaloupe-producing states found that 94% did not have toilet facilities in the vicinity; the nearest were most often more than half an hour away. The GAP norms also require periodic analyses of water for microbial counts. But 88% of the surveyed growers said they used water from rivers, where it is difficult to control water quality.
In the end, only two large farms in the state where the survey was carried out regained market access to the US. Now, like other Mexican growers, they have to comply with extensive GAP standards, such as regular soil and water tests, keeping registers on land use, fencing plantation areas and using water from a well that is tested every month during production for microbial contamination. They have also invested in osmosis plants to guarantee water quality, and have on-farm toilet facilities with running water, wash stations, soap and paper. On top of this, they have to pay for third-party certification, which averages US$3,000 per farm.
The US imposes no such obligations on its own cantaloupe growers. But in any case, the effectiveness of the Mexican programme is questionable. From late 2006 to early 2007, the US FDA issued six recalls of cantaloupes, four of which involved Mexican melons grown on FDA-approved farms. ii At that point, only nine growers in Mexico had managed to get approval to export to the US. iii
Similar stories can be found around the world. One recent FAO/WHO paper points to data showing that the true cost per farm of small-farmer certification for GlobalGAP is over €1,200, leading the authors to conclude, “The ‘bottom line’ from the small farmer perspective is that GlobalGAP does not make economic sense”. iv
i This case from Mexico is found in Clare Narrod, Devesh Roy, Belem Avendano and Julias Okello, “Impact of International Food Safety Standards on Smallholders: Evidence from Three Cases”, in E.B. McCullough et al., The Transformation of Agri-Food Systems, Earthscan, 2008.
ii Julie Schmit, “US food imports outrun FDA resources”, USA Today, 18 March 2007, http://www.usatoday.com/money/industries/food/2007-03-18-food-safety-usat_N.htm
iii “Timco issues voluntary cantaloupe recall”, The Packer, 20 November 2006, http://thepacker.com/Timco-issues-voluntary-cantaloupe-recall/Article.aspx?oid=268606&fid=PACKER-TOP-STORIES
iv Spencer Henson and John Humphrey, “The Impacts of Private Food Safety Standards on the Food Chain and on Public Standard-Setting Processes”, paper prepared for FAO/WHO, May 2009.
Privatised Food Safety in the Global South
In China, where supermarkets are expanding at a furious pace, these trends are biting hard. The major supermarket chains, both foreign and domestic, are working hand-in-glove with suppliers and local governments to develop farms to supply fruit and vegetables. As part of a drive to improve food safety and integrate its 700 million small-scale farmers into “high value food chains” with “scientific methods of farming”, the Chinese government has been pursuing the establishment of fruit- and vegetable-growing bases in partnership with the private sector. In each of these designated production zones, local authorities negotiate deals with private companies whereby the company comes in, leases an area of land from the farmers currently occupying it, or acquires their land use rights, and then sets up large-scale production, hiring the displaced farmers as labourers or in contract production arrangements.
Hong Kong Yue Teng Investment is one of these companies. Over the last few years it has emerged as a major vegetable producer in China’s Guizhou Province, where it has two large-scale production bases that supply vegetables to Walmart’s stores in southern China. Walmart’s preferred fruit supplier is the Xingyeyuan Company, which has several thousand hectares of orchards north of Dalian City. For eggs, Walmart deals with Dalian Hongjia, a massive factory farm complex with 470,000 laying hens and an annual production capacity of 7,400 tonnes of fresh eggs.
Walmart has 56 such “direct purchase bases” with companies in 18 provinces and cities in China, covering a total of at least 33,000 ha of farmland. It calls its network the “Direct Farm Program” and claims that, by 2011, these arrangements will bring benefits to one million farmers. Of course, Walmart does not actually deal directly with farmers, but with companies that hire and manage farmers for their large-scale operations.
Walmart’s moves in agriculture are part of its overall strategy to source more directly and reduce costs in its supply chain. The companies supplying Walmart have to ensure that production happens strictly in accordance with Walmart’s demands, and the company runs training programmes to show the companies and the farmers working for them exactly how they want farming done. “As a multinational corporation with a strong sense of local social responsibility, we have helped farmers to better adapt to market conditions, encouraged them to choose standardised and scaled production methods, and provided instructions on ways to preserve the environment in production activities via sustainable agriculture programs”, says Ed Chan, president and CEO of Walmart China. 44
Chongqing Cikang Vegetables and Fruits, which manages Walmart’s Direct Farm operation in Chongqing Province, says that its production process is fully monitored by third party inspectors approved by Walmart, from variety selection to harvesting and storage. The same goes for companies in China supplying Carrefour, which runs its own direct farm program, called the Carrefour Quality Line, or national retailer Wumart, which has a direct farm programme in the Shandong Province. 45
What do these companies mean by “sustainable agriculture”? For Walmart, at least with its Direct Farm Programs in India and Honduras, the task has been handed over to one of the world’s largest pesticide companies and GMO seed producers, Bayer CropScience of Germany (see Box: “Bye-bye biodiversity”). In Honduras, Bayer, through its Food Chain Partnership programme, trains 700 growers who supply Walmart on “responsible agricultural practices”. In India, the company operates 80 of these Food Chain Partnership projects with Walmart and other retailers, covering an area of 28,000 ha. Participating farmers must use a Bayer “passport” to keep track of their practices. 46
Bayer says that it has 250 Food Chain Partnership projects around the world. In Colombia it works with Carrefour, while in Mexico it directly partners with the national certification authority, Calidad Suprema, a “Civil Association without lucrative ends” that helps the Mexican government with “strengthening the competitiveness of the countryside” and the “promotion of the trademark México Calidad Suprema”, which is owned by the government. 47 Bayer trains Calidad Suprema officials on good agricultural practices, using its BAYGAP tool, and the two sides conduct joint farm visits. 48 Not to be outdone, Syngenta, the world’s second-largest pesticide company, has a food chain programme of its own, called “Fresh Trace”, that it is implementing in Thailand, and both companies are active members of GlobalGAP.
With the pesticide industry so intimately involved in developing and implementing supermarket standards, it’s hardly surprising that pesticide contamination remains prevalent on supermarket produce. Tests done by Greenpeace in China in 2008 and 2009 on popular vegetables and fruit found far more serious pesticide pollution on those collected from Walmart and the other major supermarkets than on those collected at wet markets. 49
One of Bayer’s Food Chain Partnership projects in India is with Indian supermarket major ABRL for the supply of uniformly sized okra. A Bayer promotional video recounts the experience of one farmer who supposedly participated in the Bayer project:
We used to grow our own food here in small fields. Now, on an area of approximately 2.4 ha, I grow okra. We, the farmers, learn from the professionals about sustainable crop growing in line with good agricultural practice.… This includes the controlled and environmentally friendly use of state-of-the-art crop protection products from Bayer CropSciences’ research.… This knowledge is good, not only for my wallet but also for the environment.… I used to grow only local okra varieties. But Food Chain Partnership experts from Bayer CropScience India convinced me to grow the variety Sonal in my fields. This new variety of okra from Nunhems is precisely suited to the regional conditions and the rising standards of domestic food retailers. Every stage of growing and every crop protection measure is recorded in detail in my Bayer passport.… It serves as proof to the food retailers that I have grown my vegetables correctly. i
Carrefour representatives visit a farm in India.
i See the video at http://www.youtube.com/watch?v=oVRMmYTqsCE
People’s resistance to corporate food safety
In recent years we have seen some amazing social struggles and solid initiatives emerge to counteract this corporate hijack of food safety policy-making and praxis. Some of them have been triggered by the restructuring of international food trade, such as the resistance to US beef waged by citizens’ movements in Taiwan, Australia, Japan and South Korea. Others have been reactions to domestic nightmares, such as the social activism in China following the melamine milk tragedy. All countries, too, are occasionally rocked by short-lived food poisoning outbreaks. But we are increasingly seeing much more structural and political questioning of the industrial food system, of capitalist development and of who decides what, because people’s health and livelihoods are being directly affected.
The struggles around mad-cow beef and GMOs are good examples. Many times, social movements have organised to keep them out of their countries not so much because of the health or food safety implications per se, but because of the broader social and economic directions that these symbols of industrial agriculture, corporate power or Western imperialism represent. The Korean people’s resistance to US beef has grown into an expression of profound distrust toward Korea’s system of representational democracy, including the state’s relationship with the US, not an irrational fear of prions. 50 In Australia, the campaign has been more about keeping Australian food within Australian hands, a concern that many peoples across the world share with regard to governance and control of their own country’s food supplies. As to anti-GMO struggles, they are as diverse as the anti-US beef campaigns, but they have also been about profound issues of democracy, the survival of local cultures and food systems against the onslaught of Western “solutions”, about keeping seeds and knowledge alive in communities’ hands and challenging whole models of development.
On a deeper level, people are organising to overcome the health, environmental and social costs of the expanding industrial food system. Movements and campaigns for organic food or to “go local”, in other words to buy food produced nearby and boycott products shipped from far away, have been spreading in many countries. The alarming rise in obesity, type 2 diabetes, cancers and other diseases that are directly linked to unhealthy eating is mobilising many people to change their lifestyles and work with others to promote more wholesome food and farming options. Specific campaigns and actions to stop the demonisation and destruction of local alternatives to an over-sanitised food system, such as street hawkers, raw foods and backyard or traditionally raised livestock, are also growing in popularity. The global peasant/smallholder rights group La Vía Campesina has mounted a campaign to establish the concept of food sovereignty: the “right of peoples to healthy and culturally appropriate food produced through ecologically sound and sustainable methods, and their right to define their own food and agriculture systems”. 51 Following the lead of Vía Campesina, several townships in the US state of Maine have recently declared their “food independence”. 52 Food safety and broader aspects of food quality are clearly central to these developments.
Certainly, the defence and development of peasant agriculture and non-industrial food systems, particularly in industrial countries, require their own approaches to food safety. This doesn’t mean working outside the mainstream in the sense of breaking laws or creating dangerous underground economies, although some corporate groups try to vilify and eradicate raw foods and other tradition-conscious food cultures. 53 The challenge is to ensure that different knowledge systems and criteria can exist outside the monopolistic grip of supermarkets and their supply chains. As French farmer Guy Basitanelli of La Confédération Paysanne puts it:
For small businesses that have few staff and operate at an artisanal level, the management of food safety risks hinges on training and direct human contact. Managing microbial balances, and protecting and producing specific flora based on a respect for traditional and local practices, are what best guarantees safety. You do not get safety from a “zero tolerance” approach to microorganisms and sterilisation equipment that destroy these balances. 54
Many producer organisations and consumers groups, not to mention large movements like Slow Food, are convinced that biodiversity and ecological complexity – as opposed to extreme hygiene – are the keys to healthy and stable systems. Nature abhors a vacuum, after all. Of course, these sounder approaches to food safety also rely on short distribution circuits, getting food from the farm or the small-scale processing plant into people’s homes through less complex, more direct distribution schemes (food clubs, all sorts of community-support agriculture systems, co-ops, and so on).
Another big part of people’s resistance to the corporate takeover of food safety and food cultures are the campaigns, investigative work and public education efforts devoted to exposing how supermarkets – and the supply chains that they dictate to, if not outright run – really operate, stopping the spread of big retail and protecting street vendors from annihilation (see Box: “The lobby that dares not put its name on food labels”). Walmart’s anti-union culture is well known all over the world, thanks to decades of civic activism which today informs groups trying to resist Walmart’s entry into new markets such as India. In fact, India has a vibrant movement of hawkers and street vendors who stand to lose their livelihoods if the central government allows foreign retailers to come in. They have the support of farmers, intellectuals and civil society groups that are part of a growing fabric of resistance against TNCs coming in and taking over India’s food supply. Investigative research and political work into other corporate structures, like Carrefour or Tesco, has also been important in helping civil society, not to mention legislators, to understand better how big retail works and the exploitative pressures it puts on biodiversity, farmers and food workers. 55
Food industry workers – from seasonal harvesters to the women and men involved in slaughtering or processing – are just as central to what food safety is or should be. After all, they are on the front line of the work, and they are usually paid as little as possible. They often face difficult organising conditions, especially migrant workers, children and undocumented immigrants. When they do manage to organise and get support from other groups, their capacity to secure changes can be huge. The struggle of migrant farmworkers in Immokalee, Florida, for instance, has been phenomenal. Apart from securing higher wages for tomato pickers, the Coalition of Immokalee Workers has helped demonstrate that the industrial food system, which was set up to provide cheap food, is the problem – socially, environmentally and in terms of safety and health. 56 Today, there is a significant momentum across the US to change the way food is produced, including the food safety standards, by reviving the use of anti-trust legislation. It may turn out to be a smart way to break up the industrial food system and return power to smallholders, local processors, regional markets, and other more democratic structures.
Cold shoulder for Ugandan farmers
In 2000, Icelandic investors set up a company in Uganda called Icemark Africa, to provide logistics for fresh fish exports to European markets, with a complementary sideline in fresh fruit and vegetable exports. Icemark is now the largest exporter of fresh fruits and vegetables from Uganda, with three flights a week delivering products to Europe. Until a few years ago, 90% of Icemark’s produce was sourced from its chain of small-scale out-growers. But then the company began to establish its own farms, where GlobalGAP certification is easier to achieve. It now sources 40% of its produce from its own three farms on 270 ha in central Uganda. i
i Thomas Pere, “Mashamba: the identity of quality fruits, vegetables”, The New Vision, http://www.enteruganda.com/brochures/manifesto_7.html
The lobby that dares not put its name on food labels
Corporate agendas can be deceptively hidden from view as governments and legislators haggle over what appears to be public policy. Take the fight over food labelling in the EU: corporate-driven globalisation and changes in lifestyles brought on by urbanisation and new technologies are creating a new set of food-related health problems, especially obesity and adult-onset diabetes. These are not restricted to the affluent West; they are penetrating all regions of the world, including fast-changing China and Africa. These diseases are not only painful and debilitating for the affected families, but they incur huge costs to society.
In the EU’s drive to tackle these rising health problems and their causes at home, the challenging task of harmonising food labels to inform consumers of what they are buying has naturally come up. In 2010, a pitched battle was fought between two options: on the one hand, a graphic “traffic light” label showing, on food packages or restaurant menus, how much an item contained of the main ingredients of concern – fat, saturated fat, sugar and salt; on the other, a written list of the ingredients with a calculation of how much of a daily allowance you would consume per serving. The traffic light is used in various EU countries, such as the UK, and is extremely blunt and pro-consumer. The daily-allowance listing has proved not very intelligible to most consumers (the whole matter of what a serving is can be very deceptive), and for that reason is the industry’s preference.
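The bluntness of the traffic-light scheme is easy to see in a short sketch: each nutrient is simply binned into green, amber or red by comparing its content per 100 g against two cut-offs. The threshold figures below are rough approximations of the UK Food Standards Agency’s per-100 g guidance, included only as an assumption for illustration, not the official values:

```python
# Minimal sketch of a "traffic light" front-of-pack label, as used in the UK.
# Thresholds are illustrative approximations (g per 100 g of food), not the
# official FSA figures.
THRESHOLDS = {                     # (green_max, amber_max)
    "fat":           (3.0, 17.5),
    "saturated fat": (1.5, 5.0),
    "sugars":        (5.0, 22.5),
    "salt":          (0.3, 1.5),
}

def traffic_light(nutrient: str, grams_per_100g: float) -> str:
    """Return 'green', 'amber' or 'red' for one nutrient."""
    green_max, amber_max = THRESHOLDS[nutrient]
    if grams_per_100g <= green_max:
        return "green"
    if grams_per_100g <= amber_max:
        return "amber"
    return "red"

# Example: a chocolate biscuit with 25 g fat and 30 g sugars per 100 g
print(traffic_light("fat", 25.0))     # red
print(traffic_light("sugars", 30.0))  # red
print(traffic_light("salt", 0.2))     # green
```

Because the verdict is per 100 g rather than per "serving", there is no room for the portion-size games that make the daily-allowance format so opaque, which is precisely why industry fought it.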
According to sleuth work by civil society group Corporate Europe Observatory, the EU food and drinks industry – the third largest economic sector of the union, after agriculture and chemicals – spent a whopping €1 billion to defeat the “traffic light” label and keep consumers in the dark. This was the single most expensive lobbying exercise in EU history. i
i See CEO, “A red light for consumer information”, Brussels, 11 June 2010, http://www.corporateeurope.org/lobbycracy/content/2010/06/red-light-consumer-information. As the EU is now operating under the Lisbon Treaty, a German group called Foodwatch (http://www.foodwatch.de) is proposing to launch a citizens’ initiative which, if it gains the required number of signatures, could oblige the European Commission to review the food labelling issue based on grassroots concern from ordinary people. Of course, the obligation on the Commission is only to take note and review, not actually to change anything, but some groups may use the momentum to build greater awareness of corporate control over the European food system and how that directly affects people’s health and living standards.
In most countries around the world, farming sectors are being rapidly restructured to make way for more agribusiness. With food safety standards playing a critical role in justifying new forms of corporate control, it is high time to reassess what food safety means. At present, it translates into “audit culture”, involving a transfer of power from people (consumers, small farmers, local food shops, markets, eateries) to the private sector (Cargill, Nestlé, Unilever, Walmart … the list goes on). It can instead be about local control and more community-based food and farming systems. In fact, it can be much more aggressively and explicitly integrated into people’s food sovereignty campaigns and initiatives. In that process, we may want to stop talking about food safety altogether and assert instead our own demands for food quality, or something similarly more holistic.
Food safety, or food quality in broader terms, is ground on which big corporate agriculture and supermarket cultures cannot outperform small producers and local markets. The challenge is to ensure that the small and the local can remain alive and turn today’s heightened concern for food safety to our advantage.
Going further
GRAIN, “Food safety: rigging the game”, Seedling, July 2008, http://www.grain.org/seedling/?id=555
Christine Ahn and GRAIN, “Food safety on the butcher’s block”, Foreign Policy In Focus, Washington DC, 18 April 2008, http://www.grain.org/o/?id=83
The SPS-food safety section of the activist website bilaterals.org has a range of highly focused articles tracking how countries use bilateral trade and investment agreements to move food safety standards and policies in favour of their corporations. http://www.bilaterals.org/spip.php?mot185
Sunita Narain, “Control your food. It’s your business”, Centre for Science and Environment, New Delhi, 1 October 2010, http://www.cseindia.org/content/control-your-food-it-your-business
Susan Freidberg, “Supermarkets and imperial knowledge”, Cultural Geographies, 2007, http://www.dartmouth.edu/~geog/facstaff/CVs/Freidberg/ImpKnowledge.pdf
ANNEX: Food safety: Who does what?
World Trade Organisation (WTO)
In the realm of food safety, the WTO is responsible for implementing the Agreement on Sanitary and Phytosanitary Standards (SPS Agreement) and has an SPS Committee composed of the member states to do this. The SPS Agreement spells out a number of rules that aim to limit the blockage of agricultural trade due to food safety concerns, which it sees as a trade barrier. One of these rules is that countries should use the standards adopted by specialised intergovernmental agencies, such as OIE for animal health and the Codex Alimentarius for food products. But these “standards” are, in many cases, recommendations or guidelines. Countries retain the right to apply “higher” standards of food safety so long as they are justified on “scientific” grounds. They can even follow different standards that produce equivalent results, if they can get away with it. After all, anyone can defend their grounds as scientific. a What we get, as a result of all this, is a politics of “might makes right” (countries bully and argue their way forward), with the risk that some governments will just follow OIE or Codex guidelines for lack of a better alternative (as the industry would wish).
The WTO’s SPS Agreement does have teeth, in so far as any disagreement between members can result in a dispute panel and trade sanctions. The US has repeatedly used this route to try to overturn EU policies banning the entry of hormone-treated beef and GM foods.
One major weakness of the WTO SPS Agreement is that so many food safety standards, which have been exploding in number and complexity, are developed by the private sector, not by governments. And they are voluntary, not mandatory. How do you bring this under the control of trade policy? Developing countries are particularly resistant to the notion of being held responsible for industry standards, especially at a forum like the WTO. Why should the government of Kenya, for instance, work to promote standards developed by Tesco for Tesco’s clients? Who is the government accountable to, after all: Kenya’s citizens or Tesco’s shareholders? This is the pickle that WTO members have driven themselves into.
All told, this means there is something of an SPS deadlock at the WTO. The organisation can advocate certain standards, but it cannot enforce them in a fully predictable or deterrent way. It can serve as a public venue where national policy changes or events are notified for everyone’s information, but most policy-making is actually done by and through the weight of the corporate sector in other fora.
The Codex Alimentarius (Codex for short) is a commission set up in 1963 by the UN Food and Agriculture Organisation and the World Health Organisation. Codex debates and adopts guidelines, standards and recommendations related to food safety, such as what is an acceptable level of pesticide x in bananas. Its purpose is to establish common ground on health and safety in food.
The problem is that the Codex does not operate in a democratic, transparent fashion. Its membership is composed of governments, but the private sector participates very actively in its work, whether as part of official government delegations or as observers. Non-profit public interest, public health, or consumer groups, on the other hand, are barely in the room.
We can say that:
* Codex wields a lot of power, as it draws up official standards for what can pass as food and enter the commercial food chain with a view to achieving global uniformity.
* Apart from civil servants, the main participants at Codex are industry officials.
* The WTO gives the role of Codex a veneer of legitimacy that it never had before.
One major issue that Codex is debating right now is the labelling of GM products. A large group of countries wants to define and promote a common approach to GM food labelling. Others consider labelling a discriminatory practice (because it sets a GM tomato apart from a non-GM tomato!) and do not want any international standards on it. In what may be a welcome development at Codex, the pro-label bloc is gaining ground. b
World Organisation for Animal Health (OIE)
The OIE has a similar role to Codex, but for the animal kingdom. It was set up in Paris in 1924 to stop a rinderpest outbreak. Today, the OIE is a fairly large intergovernmental institution that monitors and assesses animal diseases (including those that affect humans, like bird flu or BSE) and draws up sanitary standards for world trade in animal products. Like Codex, the OIE has been given a veneer of authority and legitimacy to shape national and international policy on animal health, thanks to the WTO. But also like Codex, it is very disconnected from people, in so far as few farmers, consumers or grassroots public health advocates seem to know what it is, let alone have any influence over it.
The OIE gained some notoriety in recent years because of the way it was used to break a logjam between the US and Korean governments over mad cow disease. c The victory for the US, which was conveniently declared a “controlled risk” country for beef, was short-lived, however. The OIE has never been able to impose its standards on countries whose people resist US beef, such as Taiwan, Japan and Korea. The OIE also, surprisingly, had little role to play during the recent bird flu and swine flu outbreaks.
Right now, the OIE is trying to develop international norms or standards for animal welfare as a food trade issue. The push clearly comes from the EU. Since the early 2000s, the EU has been trying to introduce animal welfare as an SPS issue through its bilateral free trade agreements with partners like Chile and Korea, and it also forms part of the EU’s current talks with India, ASEAN countries, Canada and Mercosur. This goes beyond what was agreed at the WTO, which does not even mention animal welfare, and appears to be more about restricting trade along the lines of EU preferences to favour EU businesses. d The OIE animal welfare “standards” related to food that are currently emerging will probably amount to the five freedoms: from hunger, thirst and malnutrition; from fear and distress; from physical and thermal discomfort; from pain, injury and disease; and the freedom to express normal patterns of behaviour.
Food and Agriculture Organisation (FAO)/World Health Organisation (WHO)
Apart from housing the Codex Alimentarius, the FAO and the WHO both deal with food safety from their respective standpoints (food production and health), but they seem to do very little in this field. Not even their joint International Food Safety Authorities Network (INFOSAN) has the resources or commitment to produce adequate global information related to food safety (such as a database on food safety alerts). Unsurprisingly, at the UN level, food safety seems to be treated much more as a trade issue than as a food production or public health issue.
GlobalGAP and Global Food Safety Initiative (GFSI)
Over the past ten years, the global food industry has developed probably hundreds, if not thousands, of schemes – it is perhaps best to think of them as checklists – to identify products that are “OK” to move through the system, from farm to mouth. These schemes are sets of standards. For example, they may say that a jalapeño pepper should be x green, y slender and have a heat index of z. The complexity of these lists becomes enormous – down to what variety a farmer should sow – but they are central to the industrial food system, and the institutions that control them wield hidden power in shaping our food supply. In the 2000s, any country that wanted to participate seriously in the global food trade developed its own national benchmarking and standards system for food producers under the name of GAP (good agricultural practices). Thailand, for example, developed ThaiGAP as an assurance of quality control for Thai agricultural products. This turned out to be crucial for Thai exporters even to sell products to China under the 2003 China–Thailand free trade agreement. These GAPs are voluntary private standards developed by the industry (originally led by retailers) to regulate itself. A whole battery of firms has sprung up to implement these standards: auditors, controllers, certifiers and companies that process the data.
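To make concrete what such a checklist amounts to in practice, here is a minimal sketch in Python. The attribute names and thresholds are invented stand-ins for the “x, y and z” in the text, not any real GAP specification.

```python
# Hypothetical GAP-style product checklist: each attribute must fall
# inside an allowed range for the product to pass. All names and
# thresholds below are illustrative, not taken from any real scheme.

def check_spec(product: dict, spec: dict) -> list:
    """Return the list of spec criteria the product fails."""
    failures = []
    for attribute, (low, high) in spec.items():
        value = product.get(attribute)
        if value is None or not (low <= value <= high):
            failures.append(attribute)
    return failures

# An invented "jalapeño" spec: greenness score, diameter in mm,
# Scoville heat units -- stand-ins for the x, y and z in the text.
jalapeno_spec = {
    "greenness": (0.8, 1.0),
    "diameter_mm": (15, 25),
    "scoville": (2500, 8000),
}

sample = {"greenness": 0.9, "diameter_mm": 30, "scoville": 5000}
print(check_spec(sample, jalapeno_spec))  # -> ['diameter_mm']
```

Multiply this toy checklist by hundreds of clauses per crop, then add the auditors and certifiers needed to verify each one, and the scale of the private standards industry described above becomes clearer.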
Two institutions are worth noting for their ambitions to serve as global leaders in this web of private food controllers. In 2007, EurepGAP – a network of European GAPs formed in 1997 – relaunched itself as GlobalGAP. This move amounted to nothing less than the European food industry globalising its standards to serve as world standards. As a consequence, other national GAPs (KenyaGAP, ThaiGAP, and so on) had to reorient themselves and work to get accepted by GlobalGAP as national benchmarks of the new system. Today, GlobalGAP holds global authority over standards for agricultural products. This means that any farm that wants its products to enter the mainstream of global food trade and retail – and end up on Tesco’s shelves, for instance, with all the traceability and control assurances that that implies – must get GlobalGAP accreditation (via local members). Hence the power of those who define these standards.
GFSI was set up in 2000 by CIES – the Food Business Forum (since renamed the Consumer Goods Forum), a club of the world’s most important food industry CEOs. The argument behind GFSI is that Codex, which is supposed to harmonise national standards, is too slow. GFSI bypasses harmonisation to create a system for the global approval of foods based on benchmarked private-sector schemes. If GAP guarantees a product’s quality (the jalapeño pepper that is x, y and z), GFSI accreditation is a mark of adherence to a host of broader food safety measures – including GlobalGAP.
GFSI insists that it is not a standard in itself but a forum that “benchmarks” best practices, almost like a brand. Composed of the top 400 food industry players, who collectively boast an annual turnover of €2.1 trillion (US$2.9 trillion), GFSI can be expected to have an important influence in reshaping food safety policy in the years to come.
a For example, on 7 April 2010, Japan’s then Agriculture Minister Hirotaka Akamatsu told reporters after meeting US Department of Agriculture head Tom Vilsack in Tokyo, “For us, food safety based on Japan’s scientific standards is the priority. The OIE standards are different from the Japanese scientific ones.” This was the Japanese government’s way of rebuffing US insistence that Tokyo open its market to all forms of US beef. See Jae Hur and Ichiro Suzuki, “Japan, US to Continue Dialogue on Beef Import Curbs”, Bloomberg, 8 April 2010, http://www.bloomberg.com/news/2010-04-07/u-s-japan-face-some-distance-as-talks-on-beef-import-curbs-to-continue.html
b At its meeting on the issue in Quebec in May 2010, the Codex commission was mostly in favour of GM labelling, through the voices of the EU, many individual European countries, Brazil, India, Morocco, Kenya, Mali, Ghana, Cameroon and Korea. Staunchly against GM labelling were the US, Canada, Australia, New Zealand, Costa Rica, Mexico and Argentina. This anti-labelling bloc seems to be cracking, however. The next set of discussions will be held in 2011.
c See GRAIN, “Food safety: rigging the game”, Seedling, July 2008, http://www.grain.org/seedling/?id=555
d It is true that animal welfare is a concern among people in the EU, and rightly so. But the European trade negotiators’ argument that it is a major societal demand that must be imposed on EU trade partners is undermined by the latest Eurobarometer survey of EU consumers, who do not even mention animal welfare when asked spontaneously to identify the issues that concern them about food quality and food safety. See European Food Safety Authority, “2010 Eurobarometer survey report on risk perception in the EU”, November 2010, http://www.efsa.europa.eu/en/riskcommunication/riskperception.htm
ACP African, Caribbean and Pacific states
AREA-AD Association de Réflexion, d’Echanges et d’Actions pour l’Environnement et le Développement (Algeria)
ASEAN Association of South-East Asian Nations
ASC Aquaculture Stewardship Council (WWF)
BSE bovine spongiform encephalopathy
CDC Centers for Disease Control and Prevention (US)
CEO chief executive officer
CIES Consumer Goods Forum (formerly Food Business Forum)
FAO UN Food and Agriculture Organisation
FTA free trade agreement
GAP good agricultural practices
GATT General Agreement on Tariffs and Trade
GFSI Global Food Safety Initiative
GM(O) genetically modified (organism)
HACCP Hazard Analysis and Critical Control Points
INFOSAN International Food Safety Authorities Network (WHO/FAO)
MRSA Methicillin-resistant Staphylococcus aureus
NASA National Aeronautics and Space Administration (US)
OIC Organisation of the Islamic Conference
OIE World Organisation for Animal Health
ppb parts per billion
SPS sanitary and phytosanitary standards
SQF Safe Quality Food (US)
TNC transnational corporation
WHO World Health Organisation
WTO World Trade Organisation
WWF World Wide Fund for Nature
GRAIN would like to thank various friends and colleagues who commented on or helped knock this briefing into shape. These include Phil Bereano, Brewster Kneen, Meriem Louanchi, Marta Rivera Ferre and Tom Philpott, plus our board and staff.
1 “Germany approves anti-dioxin action plan”, Reuters, 19 January 2011, http://af.reuters.com/article/worldNews/idAFTRE70I2CC20110119?sp=true
2 The FAO and WHO collaborate on these issues, particularly through INFOSAN, but there is no global database or tracking tool. Individual countries have (or don’t have) their own alert systems, plus they band together in various groupings. Australia and New Zealand share competency on food safety, and the EU as a whole has, apart from its highly contested European Food Safety Authority, what seems to be an extremely effective rapid alert system. See http://ec.europa.eu/food/food/rapidalert/index_en.htm
3 Agri-Food and Veterinary Authority of Singapore, “Importance of Food Safety”, 13 April 2010, http://www.ava.gov.sg/FoodSector/ FoodSafetyEducation/ AboutFoodSafetyPublicEduProg/ ImptFoodSafety/index.htm
4 The data do not reflect the increasing privatisation of food safety. To give just one example of a private legal cost generated by the failings of the US food system: in April 2010, Cargill settled a lawsuit with Stephanie Smith, a 22-year-old dancer who was paralysed for life after eating an Escherichia coli-tainted hamburger made from Cargill beef. The amount of the settlement will never be known, but it is said to provide for Ms Smith’s lifelong health costs related to coping with her affliction (and she is committed to walking again.) In the US context, this may climb to millions of dollars.
5 Aurelio Suarez Montoya, “Colombia, una pieza mas en la conquista de un ‘nuevo mundo’ lacteo”, RECALCA, November 2010, http://www.recalca.org.co/Colombia-una-pieza-mas-en-la.html
6 US regulation now forbids feeding cow protein to cows, but allows the feeding of “poultry litter”, which can contain “restricted feed ingredients including meat and bone meal from dead cattle”. See “Downright Scary: Cows fed chicken feces, recycled cow remains”, Consumers Union, 29 October 2009, http://www.consumersunion.org/pub/core_food_safety/015272.html
7 Lee Eun-joo, “New mad cow disease case in Canada noted”, JoongAng Daily, 7 March 2011, http://joongangdaily.joins.com/article/view.asp?aid=2933089
8 “Goldman Sachs may sell stake in Shineway to CDH: report,” China Knowledge, 6 November 2009.
9 Video of the news report is available here: http://video.sina.com.cn/v/b/48370817-1290078633.html. See also, “The clenbuterol crisis,” Dim Sums, 22 March 2011: http://dimsums.blogspot.com/2011/03/clenbuterol-crisis.html
10 Wang Qingchu, “Banned drug used widely in pig trade,” Shanghai Daily, 16 March 2011.
11 The rich countries still use subsidies to protect and promote their own agricultural businesses.
12 Veena Jha, chapter on South Asia in Environmental regulation and food safety: Studies of protection and protectionism, International Development Research Centre, Ottawa, 2006, http://www.idrc.ca/en/ev-93090-201-1-DO_TOPIC.html
13 Gumisai Mutume, “New barriers hinder African trade”, Africa Renewal, January 2006, http://www.un.org/ecosocdev/geninfo/afrec/vol19no4/194trade.html
14 This process has been dubbed the “Senegalisation” of EU fishing vessels, because of where it began. See ActionAid, “SelFish Europe”, June 2008, http://www.actionaid.org/main.aspx?PageID=1114, and Jean Sébastien Mora, “L’Europe pêche en eaux troubles”, Politis, 27 May 2010, http://www.bilaterals.org/spip.php?article17454.
15 For peanuts, the level adopted by the EU in the 1990s was 4 parts per billion (ppb). The level recommended by Codex Alimentarius is 15 ppb. Many countries practise the standard of 15 (Canada, Australia, Peru), 20 (Thailand, US, China) or 30 (India, Brazil). Data from the Almond Board of California, November 2009, http://californiaalmonds.fr/Handlers/Documents/Intnl-Aflatoxin-Limits.pdf
16 Timothy Josling, Donna Roberts and David Orden, “Food regulation and trade: toward a safe and open global system”, Institute for International Economics, Washington DC, 2004, p. 113.
17 T. Otsuki et al., “Saving two in a billion: quantifying the trade effect of European food safety standards on African exports”, Food Policy, Vol. 26, No. 5, October 2001, pp. 495–514.
18 See Veena Jha (ed.), Environmental regulation and food safety: Studies of protection and protectionism, International Development Research Centre, Ottawa, 2006, p. 16.
19 It is also to get rid of slime and odour.
20 HACCP is a method of controlling risks in a food production process by identifying the key points to monitor, and keeping an eye on them. It was developed by the Pillsbury Corporation to create foods suitable for NASA space flights, so one can imagine the ramifications! It is basically just a system of private checklists.
21 “Subject: France and the WTO ag biotech case”, Wikileaks cable Reference ID 07PARIS4723, dated 14 December 2007, http://126.96.36.199/cable/2007/12/07PARIS4723.html
22 For details, see bilaterals.org and GRAIN, “FTAs and biodiversity”, in Fighting FTAs, 2008, http://www.bilaterals.org/spip.php?article15225, and GRAIN, “Food safety: rigging the game”, Seedling, July 2008, http://www.grain.org/seedling/?id=555
23 GRAIN, “Big Meat is growing in the South”, Seedling, October 2010, http://www.grain.org/seedling/?type=82
24 This includes milk of cattle fed with feeds produced from internal organs, blood meal and tissues of ruminant origin or products that may contain animal rennet. See Gargi Parsai, “No import of US dairy products for now”, The Hindu, 15 November 2010, http://www.bilaterals.org/spip.php?article18483
25 They also fall under the remit of Technical Barriers to Trade (TBT) disciplines, the close cousin of SPS. TBT rules govern labelling, and many food safety and broader food quality issues require proper labelling.
26 The same is true for nanomaterials.
27 Exact figures of the market size vary, but come to US$550–630 billion per year. The main reasons why this market is booming are population growth and conversion rates. But practicalities facing the food service industry also weigh in. For instance, the catering firms that supply the airline industry at the world’s major hubs (e.g. Heathrow and Frankfurt) are increasingly opting to use only halal meat.
28 Whether GMOs – like cloning and other new technologies – are halal or haram has long been an issue of debate, and the answer often depends on the country or the authority giving it.
29 Outside the SPS arena, Canada filed a WTO dispute in August 2010 against the EU’s seal trade ban. While this conflict is not over food safety, it does challenge how far the EU can go in pushing its animal welfare standards on other countries. This issue will also have to be dealt with in the current EU–Canada FTA negotiations.
30 This involves not just food but testing and cosmetics.
31 Their main concerns are lack of harmonisation, lack of transparency, lack of scientific basis and no consultation. For OIE’s overview of the discussion process, see “Implications of private standards in international trade of animals and animal products”, updated 23 June 2010, http://www.oie.int/eng/normes/en_Implications%20of%20private%20standards.htm. For an account of developing country concerns, see the final report of the OIE questionnaire on private standards, http://www.oie.int/eng/normes/A_AHG_PS_NOV09_2.pdf
32 Bruce Blythe, “Walmart will require stricter safety tests for beef suppliers”, Drovers CattleNetwork, 29 April 2010, http://www.cattlenetwork.com/cattle-news/latest/wal-mart-will-require-stricter-safety-tests-for-beef-suppliers-114326579.html
33 Zensho statement of 30 November 2010, http://www.zensho.co.jp/en/ZENSHO_SFC_20101130.pdf
34 “South African poultry makers ‘racist’, politician says”, BBC, 29 December 2010, http://www.bbc.co.uk/news/world-africa-12090741
35 For an excellent discussion of Walmart’s role in the US food system, see Barry C. Lynn, “Breaking the chain: the antitrust case against Wal-Mart”, Harper’s, July 2006, http://www.harpers.org/archive/2006/07/0081115
36 Thomas Reardon, Spencer Hensen and Julio Berdegué, “‘Proactive fast-tracking’ diffusion of supermarkets in developing countries: implications for market institutions and trade”, Journal of Economic Geography, Vol. 7, No. 4, 2007.
37 GRAIN, “Global agribusiness: two decades of plunder”, Seedling, July 2010, http://www.grain.org/seedling/?type=81
38 Niels Fold, “Transnational Sourcing Practices in Ghana’s Perennial Crop Sectors”, Journal of Agrarian Change, Vol. 8, No. 1, January 2008, pp. 94–122.
39 Peter Jaeger, “Ghana export horticulture cluster strategic profile study”, prepared for World Bank, Ghana Ministry of Food and Agriculture and EU ACP Agricultural Commodities Programme, 2008.
40 See “Don’t let Vietnam’s Tra fish be ‘stricken down’” , Voice of Vietnam, 13 February 2011, http://english.vovnews.vn/Home/Dont-let-Vietnams-Tra-fish-be-stricken-down/20112/123832.vov
41 Ibid. WWF’s ASC certification alone costs US$7,500 per 5 hectares per year.
42 Spencer Henson and John Humphrey, “The Impacts of Private Food Safety Standards on the Food Chain and on Public Standard-Setting Processes”, paper prepared for FAO/WHO, May 2009.
43 Clare Narrod, Devesh Roy, Belem Avendano and Julius Okello, “Impact of International Food Safety Standards on Smallholders: Evidence from Three Cases”, in McCullough, Pingali and Stamoulis (eds), The Transformation of Agri-Food Systems: globalization, supply chains and smallholder farmers, London, Earthscan, 2008.
44 Walmart press release, 25 October 2010, http://en.prnasia.com/pr/2010/10/25/100984911.shtml
45 “Large Corporations Engaging Small Producers – Fruits and Vegetables in India and China”, live case prepared and presented by Nancy Barry, President of NBA Enterprise Solutions to Poverty, at the Harvard Business School Forum on the Future of Market Capitalism, 9–10 October 2009,
46 See Bayer’s Food Chain Partnership promotional video for India, http://www.youtube.com/watch?v=oVRMmYTqsCE ; “Wal-Mart Centroamérica y el Grupo Bayer firman convenio para impulsar agricultura”, La Tribuna, 15 January 2010, http://www.latribuna.hn/web2.0/?p=86331
47 See México Calidad Suprema website at http://www.mexicocalidadsuprema.com.mx/nosotros.php
48 Bayer CropScience, “An exceptional collaboration with Mexico Calidad Suprema”, http://www.bayercropscience.com/bcsweb/cropprotection.nsf/id/ EN_Mexico_Calidad_Suprema_English/$file/MEXICO_CS_web_EN_NEW.pdf
49 Greenpeace, “Pesticides: not your problem?”, 9 April 2009, http://www.greenpeace.org/eastasia/news/China-pesticides
50 See Jo Dongwon, “Real-time networked media activism in the 2008 Chotbul protest”, Interface, Vol. 2, No. 2, November 2010, pp. 92–102.
51 See the Via Campesina web site: http://viacampesina.org
52 David Gumpert, “Maine towns reject one-size-fits-all regulation, declare ‘food sovereignty’”, Grist, 15 March 2011: http://www.grist.org/article/2011-03-15-maine-towns-reject-one-size-fits-all-regulation-declare-food
53 The armed raid on Rawesome Foods in the US in 2010, which was captured on security camera and circulated over the internet, is one example (see http://www.youtube.com/watch?v=X2jgpGyyQW8). In France, two years earlier, industrial dairy processors wanting a bigger share of the market tried to dismantle the rule that only raw milk can be used to make Camembert cheese, on the grounds that it is not safe. They were quickly defeated, partly for lack of scientific evidence that there is any meaningful safety problem with raw-milk cheese. This debate has also flared up in Canada, but the government of Quebec has decided to keep the production of raw-milk cheese legal.
54 Quoted by Cécile Koehler in “Le risque zéro: du ‘sur mesure’ pour l’agriculture industrielle”, Campagnes solidaires, FADEAR, Bagnolet, November 2008. This dossier also points out that no study can show a correlation between heavy investment in industrial and administrative practices and a high level of food safety.
55 Western journalists and academics such as Christian Jacquiau, Marion Nestle, Felicity Lawrence and Michael Pollan have been doing a great job in helping the public to understand how supermarkets and food safety systems really work, and how citizens can retake control of such matters.
56 “Historic breakthrough in Florida’s tomato fields”, joint press release from Coalition of Immokalee Workers and the Florida Tomato Growers Exchange, 16 November 2010, http://www.ciw-online.org/FTGE_CIW_joint_release.html See also: “The human cost of industrial tomatoes”, Grist, 6 March 2009, http://www.grist.org/article/Immokalee-Diary-part-I/
By CAPT Wayne Porter, USN and Col Mark “Puck” Mykleby, USMC
Woodrow Wilson International Center For Scholars
By Anne-Marie Slaughter
Bert G. Kerstetter ’66 University Professor of Politics and International Affairs,
Director of Policy Planning, U.S. Department of State, 2009-2011
The United States needs a national strategic narrative. We have a national security strategy, which sets forth four core national interests and outlines a number of dimensions of an overarching strategy to advance those interests in the 21st century world. But that is a document written by specialists for specialists. It does not answer a fundamental question that more and more Americans are asking. Where is the United States going in the world? How can we get there? What are the guiding stars that will illuminate the path along the way? We need a story with a beginning, middle, and projected happy ending that will transcend our political divisions, orient us as a nation, and give us both a common direction and the confidence and commitment to get to our destination.
These questions require new answers because of the universal awareness that we are living through a time of rapid and universal change. The assumptions of the 20th century, of the U.S. as a bulwark first against fascism and then against communism, make little sense in a world in which World War II and its aftermath is as distant to young generations today as the War of 1870 was to the men who designed the United Nations and the international order in the late 1940s. Consider the description of the U.S. president as “the leader of the free world,” a phrase that encapsulated U.S. power and the structure of the global order for decades. Yet anyone under thirty today, a majority of the world’s population, likely has no idea what it means.
Moreover, the U.S. is experiencing its latest round of “declinism,” the periodic certainty that we are losing all the things that have made us a great nation. In a National Journal poll conducted in 2010, 47 percent of Americans rated China’s economy as the world’s strongest, even though today the U.S. economy is still 2 ½ times larger than the Chinese economy with only 1/6 of the population. Our crumbling roads and bridges reflect a crumbling self-confidence. Our education reformers often seem to despair that we can ever educate new generations effectively for the 21st century economy. Our health care system lags increasingly behind those of other developed nations – even behind the British National Health Service in terms of the respective overall health of the British and American populations.
Against this backdrop, Captain Porter’s and Colonel Mykleby’s “Y article” could not come at a more propitious time. In 1947 George Kennan published “The Sources of Soviet Conduct” in Foreign Affairs under the pseudonym X, so as not to reveal his identity as a U.S. Foreign Service Officer. The X article gave us an intellectual framework within which to understand the rise and eventual fall of the Soviet Union and a strategy to hasten that objective. Based on that foundation, the strategic narrative of the Cold War was that the United States was the leader of the free world against the communist world; that we would invest in containing the Soviet Union and limiting its expansion while building a dynamic economy and as just and prosperous a society as possible. We often departed from that narrative in practice, as George Kennan was one of the first to recognize. But it was a narrative that fit the facts of the world we perceived well enough to create and maintain a loose bipartisan national consensus for forty years.
Porter and Mykleby give us a non-partisan blueprint for understanding and reacting to the changes of the 21st century world. In one sentence, the strategic narrative of the United States in the 21st century is that we want to become the strongest competitor and most influential player in a deeply inter-connected global system, which requires that we invest less in defense and more in sustainable prosperity and the tools of effective global engagement.
At first reading, this sentence may not seem to mark much of a change. But look closer. The Y article narrative responds directly to five major transitions in the global system:
1) From control in a closed system to credible influence in an open system. The authors argue that Kennan’s strategy of containment was designed for a closed system, in which we assumed that we could control events through deterrence, defense, and dominance of the international system. The 21st century is an open system, in which unpredictable external events/phenomena are constantly disturbing and disrupting the system. In this world control is impossible; the best we can do is to build credible influence – the ability to shape and guide global trends in the direction that serves our values and interests (prosperity and security) within an interdependent strategic ecosystem. In other words, the U.S. should stop trying to dominate and direct global events. The best we can do is to build our capital so that we can influence events as they arise.
2) From containment to sustainment. The move from control to credible influence as a fundamental strategic goal requires a shift from containment to sustainment (sustainability). Instead of trying to contain others (the Soviet Union, terrorists, China, etc), we need to focus on sustaining ourselves in ways that build our strengths and underpin credible influence. That shift in turn means that the starting point for our strategy should be internal rather than external. The 2010 National Security Strategy did indeed focus on national renewal and global leadership, but this account makes an even stronger case for why we have to focus first and foremost on investing our resources domestically in those national resources that can be sustained, such as our youth and our natural resources (ranging from crops, livestock, and potable water to sources of energy and materials for industry). We can and must still engage internationally, of course, but only after a careful weighing of costs and benefits and with as many partners as possible. Credible influence also requires that we model the behavior we recommend for others, and that we pay close attention to the gap between our words and our deeds.
3) From deterrence and defense to civilian engagement and competition. Here in many ways is the hard nub of this narrative. Chairman of the Joint Chiefs Admiral Mike Mullen has already said publicly that the U.S. deficit is our biggest national security threat. He and Secretary of Defense Robert Gates have also given speeches and written articles calling for “demilitarizing American foreign policy” and investing more in the tools of civilian engagement – diplomacy and development. As we modernize our military and cut spending on the tools of 20th century warfare, we must also invest in a security complex that includes all domestic and foreign policy assets. Our credibility also requires a willingness to compete with others. Instead of defeatism and protectionism, we must embrace competition as a way to make ourselves stronger and better (e.g. Ford today, now competing with Toyota on electric cars). A willingness to compete means a new narrative on trade and a new willingness to invest in the skills, education, energy sources, and infrastructure necessary to make our products competitive.
4) From zero sum to positive sum global politics/economics. An interdependent world creates many converging interests and opportunities for positive-sum rather than zero-sum competition. The threats that come from interdependence (economic instability, global pandemics, global terrorist and criminal networks) also create common interests in countering those threats domestically and internationally. President Obama has often emphasized the significance of moving toward positive sum politics. To take only one example, the rise of China as a major economic power has been overall very positive for the U.S. economy and the prosperity and stability of East Asia. The United States must be careful to guard our interests and those of our allies, but we miss great opportunities if we assume that the rise of some necessarily means the decline of others.
5) From national security to national prosperity and security. The piece closes with a call for a National Prosperity and Security Act to replace the National Security Act of 1947. The term “national security” only entered the foreign policy lexicon after 1947 to reflect the merger of defense and foreign affairs. Today our security lies as much or more in our prosperity as in our military capabilities. Our vocabulary, our institutions, and our assumptions must reflect that shift. “National security” has become a trump card, justifying military spending even as the domestic foundations of our national strength are crumbling. “National prosperity and security” reminds us where our true security begins. Foreign policy pundits have long called for an overhaul of NSC 68, the blueprint for the national security state that accompanied the grand strategy of containment. If we are truly to become the strongest competitor and most influential player in the deeply interconnected world of the 21st century, then we need a new blueprint.
A narrative is a story. A national strategic narrative must be a story that all Americans can understand and identify with in their own lives. America’s national story has always see-sawed between exceptionalism and universalism. We think that we are an exceptional nation, but a core part of that exceptionalism is a commitment to universal values – to the equality of all human beings not just within the borders of the United States, but around the world. We should thus embrace the rise of other nations when that rise is powered by expanded prosperity, opportunity, and dignity for their peoples. In such a world we do not need to see ourselves as the automatic leader of any bloc of nations. We should be prepared instead to earn our influence through our ability to compete with other nations, the evident prosperity and wellbeing of our people, and our ability to engage not just with states but with societies in all their richness and complexity. We do not want to be the sole superpower that billions of people around the world have learned to hate from fear of our military might. We seek instead to be the nation other nations listen to, rely on and emulate out of respect and admiration.
The Y article is the first step down that new path. It is written by two military men who have put their lives on the line in the defense of their country and who are non-partisan by profession and conviction. Their insights and ideas should spark a national conversation. All it takes is for politicians, pundits, journalists, businesspeople, civic leaders, and engaged citizens across the country to read and respond.
A NATIONAL STRATEGIC NARRATIVE
By CAPT Wayne Porter, USN and Col Mark “Puck” Mykleby, USMC
This Strategic Narrative is intended to frame our National policy decisions regarding investment, security, economic development, the environment, and engagement well into this century. It is built upon the premise that we must sustain our enduring national interests – prosperity and security – within a “strategic ecosystem,” at home and abroad; that in complexity and uncertainty, there are opportunities and hope, as well as challenges, risk, and threat. The primary approach this Strategic Narrative advocates to achieve sustainable prosperity and security is the application of credible influence and strength, the pursuit of fair competition, acknowledgement of interdependencies and converging interests, and adaptation to complex, dynamic systems – all bounded by our national values.
From Containment to Sustainment: Control to Credible Influence
For those who believe that hope is not a strategy, America must seem a strange contradiction of anachronistic values and enduring interests amidst a constantly changing global environment. America is a country conceived in liberty, founded on hope, and built upon the notion that anything is possible with enough hard work and imagination. Over time we have continued to learn and mature even as we strive to remain true to those values our founding fathers set forth in the Declaration of Independence and our Constitution.
America’s national strategy in the second half of the last century was anchored in the belief that our global environment is a closed system to be controlled by mankind – through technology, power, and determination – to achieve security and prosperity. From that perspective, anything that challenged our national interests was perceived as a threat or a risk to be managed. For forty years our nation prospered and was kept secure through a strategy of containment. That strategy relied on control, deterrence, and the conviction that given the choice, people the world over share our vision for a better tomorrow. America emerged from the Twentieth Century as the most powerful nation on earth. But we failed to recognize that dominance, like fossil fuel, is not a sustainable source of energy. The new century brought with it a reminder that the world, in fact, is a complex, open system – constantly changing. And change brings with it uncertainty. What we really failed to recognize is that in uncertainty and change, there is opportunity and hope.
It is time for America to re-focus our national interests and principles through a long lens on the global environment of tomorrow. It is time to move beyond a strategy of containment to a strategy of sustainment (sustainability); from an emphasis on power and control to an emphasis on strength and influence; from a defensive posture of exclusion, to a proactive posture of engagement. We must recognize that security means more than defense, and sustaining security requires adaptation and evolution, the leverage of converging interests and interdependencies. To grow we must accept that competitors are not necessarily adversaries, and that a winner does not demand a loser. We must regain our credibility as a leader among peers, a beacon of hope, rather than an island fortress. It is only by balancing our interests with our principles that we can truly hope to sustain our growth as a nation and to restore our credibility as a world leader.
As we focus on the opportunities within our strategic environment, however, we must also address risk and threat. It is important to recognize that developing credible influence to pursue our enduring national interests in a sustainable manner requires strength with restraint, power with patience, deterrence with detente. The economic, diplomatic, educational, military, and commercial tools through which we foster that credibility must always be tempered and hardened by the values that define us as a people.
Our Values and Enduring National Interests
America was founded on the core values and principles enshrined in our Constitution and proven through war and peace. These values have served as both our anchor and our compass, at home and abroad, for more than two centuries. Our values define our national character, and they are our source of credibility and legitimacy in everything we do. Our values provide the bounds within which we pursue our enduring national interests. When these values are no longer sustainable, we have failed as a nation, because without our values, America has no credibility.
As we continue to evolve, these values are reflected in a wider global application: tolerance for all cultures, races, and religions; global opportunity for self-fulfillment; human dignity and freedom from exploitation; justice with compassion and equality under internationally recognized rule of law; sovereignty without tyranny, with assured freedom of expression; and an environment for entrepreneurial freedom and global prosperity, with access to markets, plentiful water and arable soil, clean and abundant energy, and adequate health services.
From the earliest days of the Republic, America has depended on a vibrant free market and an indomitable entrepreneurial spirit to be the engines of our prosperity. Our strength as a world leader is largely derived from the central role we play in the global economy. Since the Bretton Woods agreement of 1944, the United States has been viewed as an anchor of global economic security and the U.S. dollar has served as an internationally recognized medium of exchange, the monetary standard. The American economy is the strongest in the world and likely to remain so well into the foreseeable future. Yet, while the dramatic acceleration of globalization over the last fifteen years has provided for the cultural, intellectual and social comingling among people on every continent, of every race, and of every ideology, it has also increased international economic interdependence and has made a narrowly domestic economic perspective an unattractive impossibility. Without growth and competition economies stagnate and wither, so sustaining America’s prosperity requires a healthy global economy. Prosperity at home and through global economic competition and development is then, one of America’s enduring national interests.
It follows logically that prosperity without security is unsustainable. Security is a state of mind, as much as it is a physical aspect of our environment. For Americans, security is very closely related to freedom, because security represents freedom from anxiety and external threat, freedom from disease and poverty, freedom from tyranny and oppression, freedom of expression but also freedom from hurtful ideologies, prejudice and violations of human rights. Security cannot be safeguarded by borders or natural barriers; freedom cannot be secured with locks or by force alone. In our complex, interdependent, and constantly changing global environment, security is not achievable for one nation or by one people alone; rather it must be recognized as a common interest among all peoples. Otherwise, security is not sustainable, and without it there can be no peace of mind. Security, then, is our other enduring national interest.
Our Three Investment Priorities
As Americans we have access to a vast array of resources. Perhaps the most important first step we can take, as part of a National Strategy, is to identify which of these resources are renewable and sustainable, and which are finite and diminishing. Without doubt, our greatest resource is America’s young people, who will shape and execute the vision needed to take this nation forward into an uncertain future. But this may require a reawakening, of sorts. Perhaps because our nation has been so blessed over time, many of us have forgotten that rewards must be earned, there is no “free ride” – that fair competition and hard work bring with them a true sense of accomplishment. We can no longer expect the ingenuity and labor of past generations to sustain our growth as a nation for generations to come. We must embrace the reality that with opportunity comes challenge, and that retooling our competitiveness requires a commitment and investment in the future.
Inherent in our children is the innovation, drive, and imagination that have made, and will continue to make, this country great. By investing energy, talent, and dollars now in the education and training of young Americans – the scientists, statesmen, industrialists, farmers, inventors, educators, clergy, artists, service members, and parents, of tomorrow – we are truly investing in our ability to successfully compete in, and influence, the strategic environment of the future. Our first investment priority, then, is intellectual capital and a sustainable infrastructure of education, health and social services to provide for the continuing development and growth of America’s youth.
Our second investment priority is ensuring the nation’s sustainable security – on our own soil and wherever Americans and their interests take them. As has been stated already, Americans view security in the broader context of freedom and peace of mind. Rather than focusing primarily on defense, the security we seek can only be sustained through a whole of nation approach to our domestic and foreign policies. This requires a different approach to problem solving than we have pursued previously and a hard look at the distribution of our national treasure. For too long, we have underutilized sectors of our government and our citizenry writ large, focusing intensely on defense and protectionism rather than on development and diplomacy. This has been true in our approach to domestic and foreign trade, agriculture and energy, science and technology, immigration and education, public health and crisis response, Homeland Security and military force posture. Security touches each of these and must be addressed by leveraging all the strengths of our nation, not simply those intended to keep perceived threat a safe arm’s length away.
America is a resplendent, plentiful and fertile land, rich with natural resources, bounded by vast ocean spaces. Together these gifts are ours to be enjoyed for their majesty, cultivated and harvested for their abundance, and preserved for following generations. Many of these resources are renewable, some are not. But all must be respected as part of a global ecosystem that is being tasked to support a world population projected to reach nine billion people midway through this century. These resources range from crops, livestock, and potable water to sources of energy and materials for industry. Our third investment priority is to develop a plan for the sustainable access to, cultivation and use of, the natural resources we need for our continued wellbeing, prosperity and economic growth in the world marketplace.
Fair Competition and Deterrence
Competition is a powerful, and often misunderstood, concept. Fair competition – of ideas and enterprises, among individuals, organizations, and nations – is what has driven Americans to achieve greatness across the spectrum of human endeavor. And yet with globalization, we seem to have developed a strange apprehension about our ability to apply the innovation and hard work necessary to successfully compete in a complex security and economic environment. Further, we have misunderstood interdependence as a weakness rather than recognizing it as a strength. The key to sustaining our competitive edge, at home or on the world stage, is credibility – and credibility is a difficult capital to foster. It cannot be won through intimidation and threat; it cannot be sustained through protectionism or exclusion.
Credibility requires engagement, strength, and reliability – imaginatively applied through the national tools of development, diplomacy, and defense.
In many ways, deterrence is closely linked to competition. Like competition, deterrence in the truest sense is built upon strength and credibility and cannot be achieved solely through intimidation and threat. For deterrence to be effective, it must leverage converging interests and interdependencies, while differentiating and addressing diverging and conflicting interests that represent potential threats. Like competition, deterrence requires a whole of nation effort, credible influence supported by actions that are consistent with our national interests and values.
When fair competition and positive influence through engagement – largely dependent on the tools of development and diplomacy – fail to dissuade the threat of destructive behavior, we will approach deterrence through a broad, interdisciplinary effort that combines development and diplomacy with defense.
A Strategic Ecology
Rather than focusing all our attention on specific threats, risks, nations, or organizations, as we have in the past, let us evaluate the trends that will shape tomorrow’s strategic ecology, and seek opportunities to credibly influence these to our advantage. Among the trends that are already shaping a “new normal” in our strategic environment are the decline of rural economies, joblessness, the dramatic increase in urbanization, an increasing demand for energy, migration of populations and shifting demographics, the rise of grey and black markets, the phenomenon of extremism and anti-modernism, the effects of global climate change, the spread of pandemics and lack of access to adequate health services, and an increasing dependency on cyber networks.
At first glance, these trends are cause for concern. But for Americans with vision, guided by values, they represent opportunities to reestablish and leverage credible influence, converging interests, and interdependencies that can transform despair into hope. This focus on improving our strategic ecosystem, and favorably competing for our national interests, underscores the investment priorities cited earlier, and the imaginative application of diplomacy, development, and defense in our foreign policy.
Many of the trends affecting our environment are conditions-based. That is, they have developed within a complex system as the result of conditions left unchecked for many years. These global trends, whether manifesting themselves in Africa, the Middle East, Asia, Eurasia, or within our own hemisphere impact the lives of Americans in ways that are often obscure as they propagate over vast areas with cascading and sometimes catastrophic effect.
Illiteracy, for example, is common in countries with high birth rates. High birth rates and illiteracy contribute to large labor pools and joblessness, particularly in rural areas in which changing weather conditions have resulted in desertification and soil erosion. This has led to the disruption of family and tribal support structures and the movement of large numbers of young, unskilled people into urban areas that lack infrastructure. This rapid urbanization has taxed countries with weak governance that lack rule of law, permitting the further growth of exploitive, grey and black market activities. Criminal networks prey upon and contribute to the disenfranchisement of a sizeable portion of the population in many underdeveloped nations.
This concentration of disenfranchised youth, with little-to-no licit support infrastructure has provided a recruiting pool for extremists seeking political support and soldiers for local or foreign causes, often facilitated through the internet. The wars and instability perpetrated by these extremists and their armies of the disenfranchised have resulted in the displacement of many thousands more, and the further weakening of governance. This displacement has, in many cases, produced massive migrations of disparate families, tribes, and cultures seeking a more sustainable existence. This migration has further exacerbated the exploitation of the weak by criminal and ideological profiteers and has facilitated the spread of diseases across natural barriers previously considered secure. The effect has been to create a kind of subculture of despair and hopelessness that is self-perpetuating. At some point, these underlying conditions must be addressed by offering choices and options that will nudge global trends in a positive direction. America’s national interests and values are not sustainable otherwise.
We cannot isolate our own prosperity and security from the global system. Even in a land as rich as ours, we too, have seen the gradual breakdown of rural communities and the rapid expansion of our cities. We have experienced migration, crime, and domestic terrorism. We struggle with joblessness and despite a low rate of illiteracy, we are losing our traditional role of innovation dominance in leading edge technologies and the sciences. We are, in the truest sense, part of an interdependent strategic ecosystem, and our interests converge with those of people in virtually every corner of the world. We must remain cognizant of this, and reconcile our domestic and foreign policies as being complementary and largely congruent.
As we pursue the growth of our own prosperity and security, the welfare of our citizens must be seen as part of a highly dynamic, and interconnected system that includes sovereign nations, world markets, natural and man-generated challenges and solutions – a system that demands adaptability and innovation. In this strategic environment, it is competition that will determine how we evolve, and Americans must have the tools and confidence required to successfully compete.
This begins at home with quality health care and education, with a vital economy and low rates of unemployment, with thriving urban centers and carefully planned rural communities, with low crime, and a sense of common purpose underwritten by personal responsibility. We often hear the term “smart power” applied to the tools of development and diplomacy abroad, empowering people all over the world to improve their own lives and to help establish the stability needed to sustain security and prosperity on a global scale. But we cannot export “smart power” until we practice “smart growth” at home. We must seize the opportunity to be a model of stability, a model of the values we cherish for the rest of the world to emulate. And we must ensure that our domestic policies are aligned with our foreign policies. Our own “smart growth” can serve as the exportable model of “smart power.” Because, truthfully, it is in our interest to see the rest of the world prosper and the world market thrive, just as it is in our interest to see our neighbors prosper and our own urban centers and rural communities come back to life.
Closing the “Say-do” Gap – the Negative Aspects of “Binning”
An important step toward re-establishing credible influence and applying it effectively is to close the “say-do” gap. This begins by avoiding the very western tendency to label or “bin” individuals, groups, organizations, and ideas. In complex systems, adaptation and variation demonstrate that “binning” is not only difficult, it often leads to unintended consequences. For example, labeling, or binning, Islamist radicals as “terrorists,” or worse, as “jihadis,” has resulted in two very different and unfortunate misperceptions: that all Muslims come to be thought of as “terrorists”; and that those who pervert Islam into a hateful, anti-modernist ideology to justify unspeakable acts of violence are seen as truly motivated by a religious struggle (the definition of “jihad,” and the obligation of all Muslims), rather than as apostates waging war against society and innocents. This has resulted in the alienation of vast elements of the global Muslim community and has only frustrated efforts to accurately depict and marginalize extremism.
Binning and labeling are legacies of a strategy intent on viewing the world as a closed system. Another significant unintended consequence of binning is that it creates divisions within our own government and between our own domestic and foreign policies. As has been noted, we cannot isolate our own prosperity and security from the global system. We exist within a strategic ecology, and our interests converge with those of people in virtually every corner of the world. We must remain cognizant of this, and reconcile our domestic and foreign policies as being complementary and largely congruent. Yet we have binned government departments, agencies, laws, authorities, and programs into lanes that lack the strategic flexibility and dynamism to effectively adapt to the global environment. This, in turn, further erodes our credibility, diminishes our influence, inhibits our competitive edge, and exacerbates the say-do gap. The tools to be employed in pursuit of our national interests – development, diplomacy, and defense – cannot be effective if they are restricted to one government department or another. In fact, if these tools are not employed within the context of a coherent national strategy, vice being narrowly applied in isolation to individual countries or regions, they will fail to achieve a sustainable result. By recognizing the advantages of interdependence and converging interests, domestically and internationally, we gain the strategic flexibility to sustain our national interests without compromising our values. The tools of development do not exist within the domain of one government department alone, or even one sector of society, any more than do the tools of diplomacy or defense.
Another form of binning that impedes strategic flexibility, interdependence, and converging interests in the global system, is a geo-centric approach to foreign policy. Perhaps since the Peace of Westphalia in 1648, westerners have tended to view the world as consisting of sovereign nation-states clearly distinguishable by their political borders and physical boundaries.
In the latter half of the Twentieth Century a new awareness of internationalism began to dominate political thought. This notion of communities of nations and regions was further broadened by globalization. But the borderless nature of the internet, and the accompanying proliferation of stateless organizations and ideologies, has brought with it a new appreciation for the interconnectivity of today’s strategic ecosystem. In this “new world order,” converging interests create interdependencies. Our former notion of competition as a zero sum game that allowed for one winner and many losers, seems as inadequate today as Newton’s Laws of Motion (written about the same time as the Westphalia Peace) did to Albert Einstein and quantum physicists in the early Twentieth Century. It is time to move beyond a narrow Westphalian vision of the world, and to recognize the opportunities in globalization.
Such an approach doesn’t advocate the relinquishment of sovereignty as it is understood within a Westphalian construct. Indeed, sovereignty without tyranny is a fundamental American value. Neither does the recognition of a more comprehensive perspective place the interests of American citizens behind, or even on par with those of any other country on earth. It is the popular convergence of interests among peoples, nations, cultures, and movements that will determine the sustainability of prosperity and security in this century. And it is credible influence, based on values and strength that will ensure America’s continuing role as a world leader. Security and prosperity are not sustainable in isolation from the rest of the global system. To close the say-do gap, we must stop behaving as if our national interests can be pursued without regard for our values.
Credible Influence in a Strategic Ecosystem
Viewed in the context of a strategic ecosystem, the global trends and conditions cited earlier are seen to be borderless. The application of credible influence to further our national interests, then, should be less about sovereign borders and geographic regions than the means and scope of its conveyance. By addressing the trends themselves, we will attract others in our environment also affected. These converging interests will create opportunities for both competition and interdependence, opportunities to positively shape these trends to mutual advantage. Whether this involves out-competing the grey and black market, funding research to develop alternate and sustainable sources of energy, adapting farming for low-water-level environments, anticipating and limiting the effects of pandemics, generating viable economies to relieve urbanization and migration, marginalizing extremism and demonstrating the futility of anti-modernism, or better managing the global information grid – international divisions among people will be less the focus than flexible and imaginative cooperation. Isolation – whether within national borders, physical boundaries, ideologies, or cyberspace – will prove to be a great disadvantage for any competitor in the evolution of the system.
The advent of the internet and world wide web, which ushered in the information age and greatly accelerated globalization, brought with it profound second and third order effects whose implications have yet to be fully recognized or understood. These effects include the near-instantaneous and anonymous exchange of ideas and ideologies; the sharing and manipulation of previously protected and sophisticated technologies; vast and transparent social networking that has homogenized cultures, castes, and classes; the creation of complex virtual worlds; and, a universal dependence on the global grid from every sector of society that has become almost existential. The worldwide web has also facilitated the spread of hateful and manipulative propaganda and extremism; the theft of intellectual property and sensitive information; predatory behavior and the exploitation of innocence; and the dangerous and destructive prospect of cyber warfare waged from the shadows of non-attribution and deception.
Whether this revolution in communication and access to information is viewed as the democratization of ideas, or as the technological catalyst of an apocalypse, nothing has so significantly impacted our lives in the last one hundred years. Our perceptions of self, society, religion, and life itself have been challenged. But cyberspace is yet another dimension within the strategic ecosystem, offering opportunity through complex interdependence. Here, too, we must invest the resources and develop the capabilities necessary to sustain our prosperity and security without sacrificing our values.
Opportunities beyond Threat and Risk
As was stated earlier, while this Strategic Narrative advocates a focus on the opportunities inherent in a complex global system, it does not pretend that greed, corruption, ancient hatreds and newborn apprehensions won’t manifest as very real risks that could threaten our national interests and test our values. Americans must recognize this as an inevitable part of the strategic environment and continue to maintain the means to minimize, deter, or defeat these diverging or conflicting interests that threaten our security. This calls for a robust, technologically superior, and agile military – equally capable of responding to low-end, irregular conflicts and to major conventional contingency operations. But it also requires a strong and unshakable economy, a more diverse and deployable Inter Agency, and perhaps most importantly a well-informed and supportive citizenry. As has also been cited, security means far more than defense, and strength denotes more than power. We must remain committed to a whole of nation application of the tools of competition and deterrence: development, diplomacy, and defense. Our ability to look beyond risk and threat – to accept them as realities within a strategic ecology – and to focus on opportunities and converging interests will determine our success in pursuing our national interests in a sustainable manner while maintaining our national values. This requires the projection of credible influence and strength, as well as confidence in our capabilities as a nation.
As we look ahead, we will need to determine what those capabilities should include. As Americans, our ability to remain relevant as a world leader, to evolve as a nation, depends as it always has on our determination to pursue our national interests within the constraints of our core values. We must embrace and respect diversity and encourage the exchange of ideas, welcoming as our own those who share our values and seek an opportunity to contribute to our nation. Innovation, imagination, and hard work must be applied through a national unity of effort that recognizes our place in the global system. We must accept that to be great requires competition and to remain great requires adaptability, that competition need not demand a single winner, and that through converging interests we should seek interdependencies that can help sustain our interests in the global strategic ecosystem. To achieve this we will need the tools of development, diplomacy and defense – employed with agility through an integrated whole of nation approach. This will require the prioritization of our investments in intellectual capital and a sustainable infrastructure of education, health and social services to provide for the continuing development and growth of America’s youth; investment in the nation’s sustainable security – on our own soil and wherever Americans and their interests take them, including space and cyberspace; and investment in sustainable access to, cultivation and use of, the natural resources we need for our continued wellbeing, prosperity and economic growth in the world marketplace. Only by developing internal strength through smart growth at home and smart power abroad, applied with strategic agility, can we muster the credible influence needed to remain a world leader.
A National Prosperity and Security Act
Having emerged from the Second World War with the strongest economy, most powerful military, and arguably the most stable model of democracy, President Truman sought to better align America’s security apparatus to face the challenges of the post-war era. He did this through the National Security Act of 1947 (NSA 47). Three years later, with the rise of Chinese communism and the first Soviet test of a nuclear device, he ordered his National Security Council to consider the means with which America could confront the global spread of communism. In 1950, President Truman signed into law National Security Council finding 68 (NSC 68). Often called the “blueprint” for America’s Cold War strategy of containment, NSC 68 leveraged not only the National Security structures provided by NSA 47, but recommended funding and authorization for a Department of Defense-led strategy of containment, with other agencies and departments of the Federal government working in supporting roles. NSA 47 and NSC 68 provided the architecture, authorities and necessary resources required for a specific time in our nation’s progress.
Today, we find ourselves in a very different strategic environment than that of the last half of the Twentieth Century. The challenges and opportunities facing us are far more complex, multinodal, and interconnected than we could have imagined in 1950. Rather than narrowly focus on near term risk and solutions for today’s strategic environment, we must recognize the need to take a longer view, a generational view, for the sustainability of our nation’s security and prosperity. Innovation, flexibility, and resilience are critical characteristics to be cultivated if we are to maintain our competitive edge and leadership role in this century. To accomplish this, we must take a hard look at our interagency structures, authorities, and funding proportionalities.
We must seek more flexibility in public / private partnerships and more fungibility across departments. We must provide the means for the functional application of development, diplomacy, and defense rather than continuing to organizationally constrain these tools. We need to pursue our priorities of education, security, and access to natural resources by adopting sustainability as an organizing concept for a national strategy. This will require fundamental changes in policy, law, and organization.
What this calls for is a National Prosperity and Security Act, the modern day equivalent of the National Security Act of 1947. This National Prosperity and Security Act would: integrate policy across agencies and departments of the Federal government and provide for more effective public/private partnerships; increase the capacity of appropriate government departments and agencies; align Federal policies, taxation, research and development expenditures and regulations to coincide with the goals of sustainability; and, converge domestic and foreign policies toward a common purpose. Above all, this Act would provide for policy changes that foster and support the innovation and entrepreneurialism of America that are essential to sustain our qualitative growth as a people and a nation. We need a National Prosperity and Security Act and a clear plan for its application that can serve us as well in this strategic environment, as NSA 47 and NSC 68 served a generation before us.
A Beacon of Hope, a Pathway of Promise
This Narrative advocates for America to pursue her enduring interests of prosperity and security through a strategy of sustainability that is built upon the solid foundation of our national values. As Americans we needn’t seek the world’s friendship or to proselytize the virtues of our society. Neither do we seek to bully, intimidate, cajole, or persuade others to accept our unique values or to share our national objectives. Rather, we will let others draw their own conclusions based upon our actions. Our domestic and foreign policies will reflect unity of effort, coherency and constancy of purpose. We will pursue our national interests and allow others to pursue theirs, never betraying our values. We will seek converging interests and welcome interdependence. We will encourage fair competition and will not shy away from deterring bad behavior. We will accept our place in a complex and dynamic strategic ecosystem and use credible influence and strength to shape uncertainty into opportunities. We will be a pathway of promise and a beacon of hope, in an ever changing world.
CAPT Wayne Porter, USN and Col Mark “Puck” Mykleby, USMC are actively serving military officers. The views expressed herein are their own and do not reflect the official policy or position of the U.S. Navy, the U.S. Marine Corps, the Department of Defense or the U.S. government.
by Emeka Thaddeus Njoku
Foreign Policy Journal
August 13, 2011
This paper examines the interface between globalization and terrorism in Nigeria. Terrorism has been on the rise in Nigeria, and the success of recent attacks suggests that the government lacks the capacity to curb this emerging trend. The attacks in Abuja, the nation’s capital, during Independence Day celebrations, and in Jos, Plateau State, on Christmas Eve in 2010, readily come to mind. This paper thus examines the following questions: Is terrorism in Nigeria a consequence of globalization? Are terrorists in Nigeria exploiting the tools of modern globalization in carrying out terrorist activities? The paper submits that the genesis of terrorism can be traced to colonialism, which preceded modern globalization. Therefore, while terrorism in Nigeria is a legacy of colonialism, modern globalization has also helped create the conditions for its continued existence. Lastly, the paper submits that terrorists in Nigeria are comfortably using the tools of globalization to perpetrate their nefarious acts to the detriment of the government, and warns that these activities could graduate to even more dangerous levels should government activities move completely online, given the country’s large population of cyber criminals, popularly known as “yahoo yahoo boys,” and hackers.
Globalization and terrorism are two intertwined concepts. Although globalization has brought development to every stratum of our society—economic, political, technological, and socio-cultural—it has been argued that globalization begets terrorism. In other words, terrorism and other related violent activities are consequences of globalization. Cronin asserted that “The current wave of international terrorism, characterized by unpredictable and unprecedented threats from non-state actors not only is a reaction to globalization but is facilitated by it.” Rourke likewise held that the gap between rich and poor countries has widened over the last 20 years owing to the effects of globalization, fuelling animosity and violence among the poor, marginalized countries of the Third World toward the Western pioneers of globalization and its attendant characteristics, expressed in economic and political terms. “Whether deliberately intending to or not, the United States (and her Western counterparts) are projecting uncoordinated economic, social, and political power even more sweepingly than it is in military terms.” Cronin thus concluded that this results in aggression, in the form of terrorism, in the Third World against the pioneers of the policies that disarticulate its economies and leave them with nothing. Terrorism in the Third World cannot, among its root causes, be divorced from poverty, an end-product of the evil effects of globalization facilitated by the Bretton Woods institutions, such as the International Monetary Fund (IMF), the World Bank, and the World Trade Organization (WTO), which are largely controlled by the Western industrialized capitalist states. The economic policies emanating from these institutions have helped to maul the economies of Third World countries, especially in Africa, and have ensured perpetual domination.
The effects of these policies, such as the structural adjustment programs, have resulted in extreme poverty and in hatred toward governments that dance to the tune of these institutions. That hatred is expressed through violent attacks on government institutions, both foreign and local. In the words of Paul Martin, Canada’s Finance Minister, after the September 11, 2001 terrorist attacks on the United States:
For the terrorists, however, the aim of their criminal act was not only the destruction of life—they were seeking to destroy our way of life. The terrorists did not choose their target randomly. New York’s World Trade Center stood at the heart of the international financial district. It was a symbol of accomplishment and confidence. It was targeted for that reason. The terrorists sought to cripple economic activity, to paralyze financial relations, to create new barriers between economies, countries and people.
Karacasulu stated that “today global terror is a giant problem for all humanity. September 11 gave a message that the target was the main leader of globalization, the United States. The World Trade Center as one target in the United States symbolized the economic dimension, while the Pentagon symbolized the political and military dimension.”
Like other countries in the Third World, Nigeria has had its fair share of the evil effects of globalization, which have both given rise to terrorist activities and aided them. The various government policies sold to Nigeria by the IMF and World Bank in the 1980s inflicted untold economic hardship on citizens, prompting various groups to react violently against these policies. This period set the stage for terrorist violence in Nigeria, from the militants of the Niger Delta, who adopted terrorist tactics to fight a government they believe to be an agent of foreign capital, to the Boko Haram followers who, frustrated by poverty and unemployment, tore up their university and college certificates and destroyed the institutions of government they believed were the cause of their plight. All this begs the questions: What is the relationship between globalization and terrorism in Nigeria? Is terrorism in Nigeria a consequence of globalization? Have terrorists in Nigeria been exploiting the tools of globalization to wreak havoc on the populace?
This paper seeks to examine whether terrorism in Nigeria is a consequence of globalization and how terrorists in Nigeria have exploited the instruments of globalization against the institutions and forces they hold responsible for their plight. First, however, it is expedient that these two concepts be thoroughly discussed.
The concept of terrorism is today shrouded in controversy, with persistent questions as to what constitutes terrorism. For the purpose of this study, the definition of terrorism can be approached from two schools of thought: the idealist conception of terrorism and the realist conception of terrorism. The idealist school stresses that every act that produces fear, terror, or death, whether legitimately carried out or not, by an individual, group, or state, is an act of terrorism. The realist school, on the other hand, sees terrorism essentially as an attack by clandestine groups on non-combatants or civilians, intended to draw attention and to instill fear in the public so as to coerce a state actor in pursuit of the attackers’ political objectives.
One of the proponents of the realist school of thought is the United States government. Title 22, United States Code (USC), Section 2656f provides the United States’ definition: “terrorism is defined as premeditated, politically motivated violence perpetrated against noncombatant targets by subnational groups or clandestine agents, usually intended to influence an audience.” The US State Department defines a terrorist group as “any group practicing, or that has significant subgroups that practice, international terrorism.” Viotti and Kauppi defined terrorism as “politically motivated violence, aimed at achieving a demoralizing effect on publics and governments.” Wilkinson defines terrorism as “a systematic use of coercive intimidation usually to serve political ends. It is used to create and exploit a climate of fear among a wider target group than the immediate victims of the violence and to publicize a cause as well as to coerce a target into assenting to aims.” Cronin conceives of terrorism as “the threat or use of seemingly random violence against innocents for political ends by a non-state actor.”
The essential feature of the realist definitions of terrorism is their emphasis on non-state actors, such as clandestine groups, perpetrating violence on the public in pursuit of political objectives. Moreover, these definitions shield state actors. This raises two questions: is a lone attacker or bomber a terrorist, and what political objectives does a lone attacker or bomber have? Consider, for instance, the Unabomber, the name given by the Federal Bureau of Investigation (FBI) to the elusive perpetrator of a series of bombings between 1975 and 1995 that killed three people and wounded many more. What were his objectives? An individual terrorist can be motivated by personal reasons, such as unfair dismissal from the workplace, divorce, death of loved ones, frustration, depression, unstable homes or insecurity at home, or, as has been suggested by Lee and Pearl, financial motivation.
The idealists, on the other hand, view every act, legitimate or not, that breeds an atmosphere of fear or destroys lives and property as terrorism. Among the proponents of this school of thought are Spiegel and Wehling, who define terrorism as “violence across international boundaries intended to coerce a target group into meeting political demands.” The African Union defines terrorism as “any act which is a violation of the criminal laws of a state party and which may endanger the life, physical integrity or freedom of, or cause serious injury or death to, any person, any member or group of persons, or causes or may cause damage to public or private property, natural resources, environmental or cultural heritage.” The Nigerian government has very recently provided its own definition, describing a terrorist as “anyone who [is] involved or who causes an attack upon a person’s life which may cause serious bodily harm or death; kidnapping of a person; destruction to a government or public facility, transport system, an infrastructural facility including an information system, a fixed platform located on the continental shelf, public place or private property likely to endanger human life or result in major economic loss.”
Proponents of this school further argue that “the consequences of an action are what matters and not the intent. Collateral or unintended damage to civilians from an attack is the same as a terrorist bomb directed deliberately at the civilian target with the intent of creating that damage.”
This is why it has been said that “one man’s terrorist is another man’s freedom fighter.” This school of thought adopts a moralistic and pacifist viewpoint in its conception of terrorism. One question raised by this definition, however, is its emphasis on violence perpetrated outside the country. What about terrorism perpetrated against citizens within the boundaries of the state?
Irrespective of these schools of thought, the United Nations, the platform where the world’s countries meet, has not reached a consensus on the definition of the concept. Some countries, especially in the Middle East, are careful not to subscribe to any definition of terrorism that opposes a legitimate fight for freedom from foreign occupation. Despite this obstacle, the UN has loosely agreed that terrorism violates certain principles on which the institution is established. The UN Policy Working Group on Terrorism enumerated these principles, which include “assault on the principles of law or order, human rights and peaceful settlement of disputes.” For the purpose of this discussion, terrorism may be defined as a violent attack by faceless groups, individuals, or the state in order to push forward political, primordial, or personal objectives. This definition incorporates both the realist and idealist conceptions of terrorism. It takes into consideration all groups, individuals, and states, and it looks at the factors that engender terrorism: political reasons, which may take the form of liberation from foreign occupation, one country trying to influence the decisions or policies of another, or a state trying to force its policies on its own citizens through coercive means; primordial reasons, such as age-long acrimony between two groups, especially religious groups; personal reasons, such as loss of a job, marginalization, frustration, depression, or instability or lack of security in the family; or financial reasons, born of excessive poverty and the drive to enrich oneself. This definition illuminates the concept of terrorism.
Alasuutari posited that “the term globalization has been used to refer to a number of developments”; thus, the concept is considered very significant. Tomlinson defines globalization as “a process whereby a global network of interconnections and interdependence, uniting different countries and regions, is getting more and more dense.” Friedman conceives of globalization as “the integration of everything with everything else”; that is, “the integration of markets, finance, and technology in a way that shrinks the world from a size medium to a size small.” From the foregoing definitions, scholars agree that globalization is a process that breaks down the barriers that previously existed among the states of the world, integrating the world into a single entity or unit in which barriers of culture, communication, governance, and geography become extinct. It should, however, be noted that the concept of globalization has come a long way, and “many precursors of modern globalization date back into history, even into antiquity.” Marx and Engels, in their Communist Manifesto of 1848, discussed the concept of globalization, especially in economic and intellectual terms:
The bourgeoisie has, through its exploitation of the world market, given a cosmopolitan character to production and consumption in every country, to the great chagrin of reactionaries. It has drawn from under the feet of industry the national ground on which it stood. All old-established national industries have been destroyed or are daily being destroyed. They are dislodged by new industries whose introduction becomes a life and death question for all civilized nations, by industries that no longer work up indigenous raw material, but raw materials drawn from the remotest zones, industries whose products are consumed, not only at home, but in every quarter of the globe. In place of the old wants, satisfied by the production of the country, we find new wants, requiring for their satisfaction the products of distant lands and climes. In place of the old local and national seclusion and self-sufficiency, we have intercourse in every direction, universal inter-dependence of nations. And as in material, so also in intellectual production. The intellectual creations of individual nations become common property. National one-sidedness and narrow-mindedness become more and more impossible, and from the numerous national and local literatures, there arises a world literature.
This attests to the fact that globalization is not a new concept. Scholars have divided the term into several historical epochs, such as modern or contemporary globalization and pre-modern globalization. Alasuutari asserted that globalization is both old and new: old in that human efforts to overcome distance and other barriers to increased interchange have long existed; the first canoes and signal fires are part of the history of globalization. Rourke termed the stage of globalization in the 1800s “creeping globalization,” in which the world metamorphosed incrementally. In our contemporary society, however, globalization moves at a very rapid pace, far different from what might be termed old globalization. Tomlinson attests that modern globalization is a “rapidly accelerating process, and especially so from the early 1980s to the late 1990s.” Alasuutari concluded that globalization refers to a long historical process whose contemporary stage represents a “distinctive historical form with a unique conjuncture of social, political, economic and technological forces.”
Globalization has closed the gaps that once separated states. Today, issues such as environmental degradation, disease, and terrorism do not affect one state alone, but all states. States are increasingly interdependent in political, economic, and socio-cultural terms. Rourke corroborated this viewpoint when he stated that “because security and prosperity of individual states are increasingly linked to politics, economy and environmental conditions in other states… the result is a true global village where states must work together to achieve a common goal.”
TERRORISM: A CONSEQUENCE OF GLOBALIZATION IN NIGERIA
The history of terrorism in Africa can be traced to the period of colonialism. Prior to colonialism there was evidence of violent clashes among groups within African states fighting for one cause or another, but terrorism became more evident during the colonial period. Hübschle posited that “historical data shows that the African continent has witnessed a wide array of terror incidents including revolutionary, state sponsored and state terrorism.” Past liberation movements that fought for the independence of their countries, such as the “African National Congress (ANC), the Zimbabwe African National Union–Patriotic Front (ZANU-PF), the South West Africa People’s Organisation (SWAPO) and the Frente de Libertação de Moçambique (FRELIMO), are labeled as terrorist organizations.” Hübschle stressed the irony that terrorism perpetrated by colonial powers went unrecorded, while the revolutionary activities of liberation movements in Africa were labeled terrorist. The colonialists committed atrocities upon the African populace. Hübschle termed this pattern of terrorism “colonial terror,” “a distinct form of terrorism perpetrated during the colonial and post-colonial period.”
Colonialism in Africa was largely facilitated by globalization. Pre-modern globalization ushered in the period of colonization in Africa, Asia, and the Middle East; in Africa, this historical epoch marked the beginning of modern terrorism. Pre-modern globalization brought three essentially linked developments. First, globalization brought about colonialism in Africa and on other continents such as Asia and the Middle East. This, in turn, led to acts of resistance by the natives, creating a culture of violence among the people that persists to this day. Finally, in trying to suppress these violent resistances, the colonialists employed every manner of state terrorism to instill an atmosphere of terror and tension among the colonized people. Chinweizu, in his book The West and the Rest of Us, best captures the relationship between globalization and colonialism, and consequently the terrorist activities of the colonial authorities:
For nearly six centuries now Western Europe and its Diaspora have been disturbing the peace of the world. Enlightened through their Renaissance by the learning of the ancient Mediterranean; armed with the gun, the making of whose powder they had learned from Chinese firecrackers; equipping their ships with lateen sails, astrolabes and nautical compasses, all invented by the Chinese and transmitted to them by the Arabs; fortified in aggressive spirit by an arrogant, messianic Christianity of both the popish and protestant varieties; and motivated by the lure of enriching plunder, white hordes have sallied forth from their western European homelands to explore, assault, loot, occupy, rule and exploit the rest of the world. And even now the fury of their expansionist assault upon the rest of us has not abated.
The Nigerian state has its own story of colonial terror and of terrorist activities mobilized in response to colonial policies, in the form of revolutionary movements and violent inter-ethno-religious clashes. Suberu and Osaghae stated that ethnic and other violent clashes can be traced to colonialism and its attendant policies. Colonialism produced socio-economic inequality through the institutionalization of classes, and thereby class struggle, as well as a state of mutual suspicion among the major ethnic groups in Nigeria. Violent clashes among these groups have an economic undertone: each group is keen to control the central government because all resources are centralized, making positions in the central government highly lucrative. Furthermore, Falola, in his book Colonialism and Violence in Nigeria, argued that the root causes of violent activities in Nigeria today, such as the Jos crisis and the Niger Delta violence, in the northern and southern parts of the country respectively, can be traced to colonialism. At that time, the natives challenged colonial rule through violence, creating a “public culture” in the Nigerian polity in which the citizenry were inclined to commit acts of violence in response to exploitative colonial policies. Notable violent protests under colonial rule include the Aba Women’s Riot of 1929 and the Ekumeku wars, in which guerilla resistance was mounted against the British occupation of Nigeria.
These violent activities by colonized people in Africa, and in Nigeria in particular, can be traced to the policies of the colonial authorities during and after the Great Depression of 1929 to 1939. Even earlier, the economic depression of the 1870s had been a major factor leading to colonialism in parts of Nigeria. The depression of the 1930s was felt in various ways: there were “falling export prices for crops and tin and declining trade profits and revenue, as British firms either ceased importing European manufactures or sought tax relief.” These developments disrupted an established economic pattern in which agricultural and other reserves were accumulated from the taxes paid to the colonialists. The colonial authorities responded with austerity measures: salary cuts, the dismissal of workers, expanded taxation, an aggressive revenue drive, the suspension of public works, price controls, and the expansion of export crops.
The people of Nigeria were prematurely integrated into the world market. According to Ochonu, “they were placed in the web of an uncertain, volatile and exploitative world market.” He further stated that “during the depression, the British colonial authority implemented a contradictory policy of both incorporation and imperial closure, of colonially mediated globalization and deglobalization.”
These economic policies of the colonial authority, which affected the incomes of colonial subjects, stirred up all forms of violence and domestic terrorism against the colonial authorities. These problems are attributable to the negative effects of globalization, or pre-modern globalization: by the 1920s and 1930s, the colonized nation’s economy had been fully integrated into the world economy in a center–periphery relationship, so that economic depression in the Western developed states was transmitted to the colonial states.
Furthermore, in the 1980s another economic depression hit the nation. In this period of “oil doom” there was a sharp drop in sales of crude oil, which had rapidly become the country’s major export earner. Responding to the economic crisis, the government, on the advice of the IMF, introduced the structural adjustment program. This austerity measure, involving wage cuts, dismissals, cuts in government expenditure, and the like, resulted in severe hardship among the populace, and the end product was violent protest and domestic terrorism directed at the government. “In 1988, in response to an increase in the price of fuel, riots broke out in Jos and Sokoto state, which turned out to be more intense….” Moreover, in May and June of 1989, several towns, including Lagos, Ibadan, Benin City, and Port Harcourt, revolted against the IMF’s plans, resulting in the loss of hundreds of lives and the destruction of property worth millions of naira (the Nigerian currency).
The economic crisis of the 1980s saw the emergence of groups involved in terrorist activities in the country, including the Ogoni Youth, the Niger Delta Volunteer Force (NDVF), the Oodua People’s Congress (OPC), the Arewa Youth Consultative Forum, the Movement for the Actualization of the Sovereign State of Biafra (MASSOB), the Movement for the Survival of the Ogoni People (MOSOP), the Movement for the Emancipation of the Niger Delta (MEND), the Ijaw Youth Council (IYC), the Egbesu Boys of Africa (EBA), the Niger Delta Vigilante (NDV), the Isoko National Youth Movement (INYM), and the Egi Women’s Movement. “[S]everal factors underline the growth and development of these groups… economic recession of the 1980s, falling commodity prices, OPEC price increases, privatization, economic liberalization, deregulation, currency devaluation, cold war politics, trade barriers.”
This period also witnessed state terrorism. The government’s response to these violent protests was brutal: successive military administrations from the 1980s through the 1990s responded violently, creating an atmosphere of fear. Ogundiya and Amzat posited that certain incidents capture the fact that, in suppressing opposition to the military government’s economic policies, diverse acts of state terrorism were carried out successfully. For instance, Ken Saro-Wiwa, the leader of the Movement for the Survival of the Ogoni People (MOSOP), was executed under the military ruler General Sani Abacha; Dele Giwa, a journalist and editor, was assassinated with a letter bomb in October 1986; Kudirat Abiola was assassinated on June 4, 1996; and Moshood Abiola was assassinated on July 7, 1998.
All of the factors highlighted above affirm the view that terrorism in Nigeria is among the consequences of globalization. The economic policies and advice of the Bretton Woods institutions, such as the IMF, the World Bank, and the World Trade Organization, which are agents and forces of globalization, have had a negative influence on the economies of developing nations such as Nigeria. This kindles the fire of hatred among the people of these developing countries toward their governments and their Western collaborators.
GLOBALIZATION AS A MEANS OF TERRORISM IN NIGERIA
The tools of globalization have become a veritable means for terrorists to carry out their activities successfully, making most terrorists ideological hypocrites. Murphy observes that the major instruments of globalization, the information and communication technologies such as mobile phones, the internet, and the mass media, have ensured that terrorist plans are executed with the same ease with which commerce is carried out among the nations of the world. He further states that, “using technological advances in communication, these groups (terrorist) can easily contact and operate.” Pillar adds that “the use of information technologies such as the internet, mobile phones, and instant messaging has extended the global reach of many terrorist groups.” Of one very essential tool of globalization, the internet, Theohary and Rollins state that “it is used by insurgents, jihadists, and terrorist organizations as a tool for radicalization and recruitment, a method of propaganda distribution, a means of communication, and ground for training.”
In Nigeria there is an emerging trend among terrorist groups: they have adopted the methods of other terrorists, particularly in the Middle East and North Africa, using information technology, and particularly the internet, to communicate their activities to the people and the government. In the same vein, the leader of the Al Qaeda network, Osama Bin Laden, was able to communicate with terrorists in Nigeria through the media, promising to support the quest to destroy their fellow countrymen. Karon reported that, in February 2003, Osama Bin Laden stated that Nigeria was a country ripe for “liberation,” that is, a country worthy of jihad. This statement was made available by Al Jazeera through a video broadcast. Furthermore, The Guardian newspaper reported in 2004 that Al Qaeda had been communicating with certain terrorist groups in Nigeria by email. In addition, terrorist groups in Nigeria, such as the Movement for the Emancipation of the Niger Delta (MEND), the Niger Delta Volunteer Force (NDVF), and other militant groups, have used the internet to communicate with the government, claiming responsibility for attacks on crude oil installations. The Boko Haram sect claimed responsibility through the internet for the Christmas Eve bombings in parts of Plateau State.
Furthermore, the media have helped ensure that the objectives of the terrorists in Nigeria are achieved, because news of impending terrorist attacks, spread by the mass media, creates an atmosphere of fear and suspicion in the country. There have been reported cases of threats by terrorists to attack Lagos, the country’s commercial nerve center, and Abuja. These threats were quickly disseminated, sending the message that the government could no longer protect its citizens from the activities of these terrorists.
Another emerging trend of terrorism in Nigeria is cyber terrorism. Although Nigeria has not reached the level of information and communication technology needed to run every government activity electronically, the existence of cybercrime in the country is a warning sign that the future may be bleak if the government goes completely online. “It is estimated that there are about 40 million computers in Nigeria. Without the owners ever suspecting it, each of these computers can be deployed as foot soldiers, even by an attacker in another country, to do the biddings of some evil geniuses.” Cases abound of cybercriminals and hackers swindling companies inside and outside the country. Terrorist groups could hire these hackers and cybercriminals, popularly known in Nigeria as “yahoo yahoo boys,” to wreak havoc on the nation’s network systems in the future.
This article has attempted to establish the link between globalization and terrorism in Nigeria; specifically, how globalization results in terrorism and how it has continued to aid terrorism in Nigeria. Pre-modern globalization brought about colonialism in Africa and other parts of the Third World. Colonial authorities established an economic system that ensured the continued subordination of the Nigerian people. Resistance to the economic policies of the colonial authority led to the culture of violent resistance, or domestic terrorism, seen in the country today. Moreover, in the quest to suppress opposition by the colonial subjects, the colonial authorities applied all manner of state terrorism against the people. This period kicked off modern terrorism in Nigeria. After independence, the ruling elites, trapped in an international economic system into which they had been prematurely integrated and in which competition is fiercest, relied on the advice of the Western capitalist states, delivered through their instruments of globalization such as the IMF, World Bank, and WTO, thereby guaranteeing the continued subordination of the economy. As a result, the people reacted with terrorist activities, this time against their own government, whose officials are now perceived as agents of foreign capital. In another vein, instruments of globalization, such as the mass media, the internet, and mobile phones, have continued to make the activities of terrorists in Nigeria easy to carry out and difficult for the government to tackle. Terrorists in Nigeria are creating an atmosphere of fear and insecurity through bombings and threats of bombings, and this information is quickly disseminated by the media. The internet has become a useful tool for communicating with the government and the people to claim responsibility for attacks, a tactic also employed by terrorist networks elsewhere.
Moreover, given the porosity of Nigeria’s network systems, where cybercriminals abound, there is danger ahead should the government decide to carry out its tasks completely online.
1. Cronin, Audrey Kurth. "Behind the Curve: Globalization and International Terrorism." International Security, Vol. 27, No. 3, 2003.
2. Rourke, John T. Taking Sides: Clashing Views on Controversial Issues in World Politics, 11th ed., 2005.
3. Ibid, p. 45.
4. Huntington, Samuel P. "The Clash of Civilizations?" Foreign Affairs, Vol. 72, No. 3 (Summer 1993); Barber, Benjamin R. Jihad vs. McWorld: Terrorism's Challenge to Democracy. New York: Random House, 1995; Huntington, Samuel P. The Clash of Civilizations and the Remaking of World Order. New York: Simon and Schuster, 1996.
5. Canadian Minister of Commerce Paul Martin's statement after the terrorist attacks on the United States of America, 2001.
6. Karacasulu, Nilüfer. "Security and Globalization in the Context of International Terrorism." Uluslararası Hukuk ve Politika, Cilt 2, No. 5, ss. 1-17 (2006), p. 3.
7. Kalic, Sean. Combating a Modern Hydra: Al Qaeda and the Global War on Terrorism. Global War on Terrorism Occasional Paper 8, Combat Studies Institute Press, Fort Leavenworth, Kansas, 2005.
8. US State Department. Patterns of Global Terrorism, 1990.
9. Viotti, P. and Kauppi, M. International Relations and World Politics: Security, Economy, Identity, 4th ed. Pearson Education Inc., New Jersey, 2009.
10. Wilkinson, Paul. International Relations: A Very Short Introduction. New York: Oxford University Press, 2007.
11. Ibid, p. 33.
12. Lee, Rensselaer and Perl, Raphael. "Terrorism, the Future, and U.S. Foreign Policy." Foreign Affairs, Defense & Trade Division, Congressional Research Service, The Library of Congress (Order Code IB95112), CRS-3, 2002.
13. Spiegel, Steven L. and Wehling, Fred L. World Politics in a New Era, 2nd ed. Harcourt Brace College Publishers, California, 1999.
14. African Union Convention, cited in Omotola, Shola J. Assessing Counter-Terrorism Measures in Africa: Implications for Human Rights and National Security, 2008.
15. The Nation, February 23, 2011.
16. International Terrorism and Security Research, 2008.
17. The UN Policy Working Group on Terrorism, 2001, p. 5.
18. Alasuutari, Pertti. "Globalization and the Nation-State: An Appraisal of the Discussion." Department of Sociology and Social Psychology, University of Tampere, Finland, 2000.
19. Tomlinson, J. Globalization and Culture. Cambridge: Polity Press, 1999.
20. Ibid, p. 4.
21. Ibid, p. 2.
22. Ibid, p. 260.
23. Ibid, p. 22.
24. Ibid, p. 8.
25. Ibid, p. 9.
26. Ibid, p. 43.
27. Ibid, p. 463.
28. Oyeniyi, A.B. "Terrorism in Nigeria: Groups, Activities, and Politics." International Journal of Politics and Good Governance, Vol. 1, No. 1.1, Quarter I, ISSN No. 0976-1195, 2010.
29. Hübschle, Annette. "The T-word: Conceptualising Terrorism." African Security Review 15.3, Institute for Security Studies, 2005.
30. Ibid, p. 8.
31. Ibid, p. 9.
32. Chinweizu. The West and the Rest of Us. Nok Publishers Nigeria Ltd, Lagos, 1978.
33. Osaghae, Eghosa E. and Suberu, Rotimi T. A History of Identities, Violence, and Stability in Nigeria. Centre for Research on Inequality, Human Security and Ethnicity (CRISE), Working Paper No. 6, University of Oxford, 2005.
34. Falola, T. (2009). Colonialism and Violence in Nigeria. Bloomington: Indiana University Press.
35. Ochonu, Moses E. Colonial Meltdown. Ohio University Press, 2009.
36. Ibid, p. 5.
37. Ibid, p. 7.
38. Ibid, p. 7.
39. Ibid, p. 8.
40. Libcom.org. "The Development of Class Struggle in Nigeria," ICG, 2006.
41. Ibid, p. 3.
42. Ibid, p. 4.
43. Ogundiya, S. and Amzat, J. (2008). "Nigeria and the Threats of Terrorism: Myth or Reality." Journal of Sustainable Development in Africa, Vol. 10, No. 2, ISSN 1520-5509, Clarion University of Pennsylvania, Clarion, Pennsylvania.
44. Murphy, D. (2002). "Activated Asian Terror Web Busted," Christian Science Monitor; and "Al Qaeda South Asia Reach," Washington Post.
45. Ibid, p. 8.
46. Pillar, P.R. (2001). Terrorism and U.S. Foreign Policy. Washington, D.C., p. 47.
47. Ibid, p. 47.
48. Theohary, C.A. and Rollins, J. (2011). "Terrorist Use of the Internet: Information Operations in Cyberspace." Congressional Research Service, 7-5700.
49. Karon, T. (2003). "Why Africa Has Become a Bush Priority." Time Magazine, July 7.
50. The Guardian (2004). "Suspect Links E-mail Address to Al Qaeda," August 5, p. 1.
51. www.nigeriabestforum.com/index.php?topic=90059.0
Emeka Thaddues Njoku holds a Bachelor of Science degree in Political Science from Enugu State University of Science and Technology and a Master of Science degree in Political Science from the University of Ibadan. He is currently a PhD student in the Department of Political Science, University of Ibadan, where he is also a Tutorial Assistant in the Department of Political Science Distance Learning Programme. His research focus is on terrorism in Africa. Contact him at firstname.lastname@example.org. Read more articles by Emeka Thaddues Njoku.
Syria and the Delusions of the Western Press
Disgraceful Distortions by The Guardian
By PETER LEE
April 15 – 17, 2011
On April 10, a mysterious and bloody incident occurred near the seaside town of Banyas, in Syria.
Nine members of a Syrian army patrol were shot to death and twenty-five were wounded—the single bloodiest incident in the Syrian uprising to date.
Western news services largely ignored the incident and concentrated on reports of the army’s move to encircle and pacify Banyas.
When they did report the incident, some were in thrall to their preconceptions and their sources in the democracy movement, and credulously entertained the most improbable explanations for the incident: that the soldiers were murdered by one of their own number, who refused orders to fire on demonstrators; or that the Syrian secret service ordered officers to shoot their men in order to foment a provocation.
The most likely explanation—that infiltrators may be working to create chaos and destabilize the regime under cover of the demonstrations, and simply pumped two army trucks full of bullets in a carefully-planned ambush—has for the most part eluded them.
But even the paranoid have real enemies, and Syria’s President Bashar al-Assad—who chattered vaguely and counterproductively about “conspiracies” in his address to the Syrian parliament—has reason to worry about dangerous opponents, now in exile but perhaps willing to stir up trouble.
The list of potential opponents includes Rifaat al-Assad, Bashar’s uncle, brother of Hafez al-Assad. Rifaat tried to mount a coup against Hafez, but was forced into exile in 1984.
A more dangerous opponent is perhaps Abdul Halim Khaddam. The all-around fixer for Hafez and number three in the regime, he could not reconcile himself to the elevation of the relatively unproven Bashar, then aged 34, on Hafez’s death.
He went into exile in Paris, followed by an indictment for treason. In France, he claimed leadership of an opposition organization, the National Salvation Front, and offered damaging statements on the involvement of the Syrian leadership in the murder of Rafiq Hariri, the Lebanese statesman.
Khaddam’s home town is Banyas, where the massacre occurred.
By a remarkable coincidence, the events in Banyas attracted the close attention of one of America’s chief Syria watchers: Dr. Joshua Landis of the University of Oklahoma.
Dr. Landis’ wife is Syrian, and her cousin, Lt. Colonel Yasir Qash`ur, was one of the two Syrian army officers who died in the incident.
In an April 13 post titled “Western Press Misled: Who Shot the Nine Syrian Soldiers in Banyas? Not Syrian Security Forces”, Landis debunked the claims reported by Agence France-Presse and the Guardian. He also highlighted the pathetic ordeal of one wounded soldier, badgered by anti-government activists but denying that he had been shot by security forces—only to have the video go out on YouTube in the West with the canard attached.
Landis, an extremely circumspect and careful observer, wrote bluntly:
“A number of news reports by AFP, the Guardian, and other news agencies and outlets are suggesting that Syrian security forces were responsible for shooting nine Syrian soldiers, who were killed in Banyas on Sunday. Some versions insist that they were shot for refusing orders to shoot at demonstrators.
Considerable evidence suggests this is not true and that western journalists are passing on bad information.
My wife spoke this morning to one witness who denied the story. He is colonel `Uday Ahmad, brother-in-law of Lt. Col. Yasir Qash`ur, who was shot and killed in Banyas with eight other Syrian soldiers on Sunday April 10, 2011. Uday Ahmad was sitting in the back seat of the truck which Yasir was driving when he was shot dead on the highway outside Banyas. Uday said that shooting was coming from two directions. One was from the roof of a building facing the highway and another from people hiding behind the cement median of the highway. They jumped up and shot into the two trucks carrying Syrian troops, killing 9. Col. Uday survived.
Here is video of the shooting shown on Syrian TV sent by my brother-in-law, Firas, who lives in Latakia.”
“Video of one soldier purportedly confessing to being shot in the back by security forces and linked to by the Guardian has been completely misconstrued. The Guardian irresponsibly repeats a false interpretation of the video provided by an informant.
This is what the Guardian writes: ‘Footage on YouTube shows an injured soldier saying he was shot in the back by security forces.’
The video does not “support” the story that the Guardian says it does. The soldier denies that he was ordered to fire on people. Instead, he says he was on his way to Banyas to enforce security. He does not say that he was shot at by government agents or soldiers. In fact he denies it. The interviewer tries to put words in his mouth but the soldier clearly denies the story that the interviewer is trying to make him confess to. In the video, the wounded soldier is surrounded by people who are trying to get him to say that he was shot by a military officer. The soldier says clearly, ‘They [our superiors] told us, ‘Shoot at them IF they shoot at you.’’
The interviewer tried to get the wounded soldier to say that he had refused orders to shoot at the people when he asked : “When you did not shoot at us what happened?” But the soldier doesn’t understand the question because he has just said that he was not given orders to shoot at the people. The soldier replies, “Nothing, the shooting started from all directions”. The interviewer repeats his question in another way by asking, ‘Why were you shooting at us, we are Muslims?’ The soldier answers him, ‘I am Muslim too.’ The interviewer asks, ‘So why were you going to shoot at us?’ The soldier replies, ‘We did not shoot at people. They shot at us at the bridge.’”
The Guardian’s pseudonymous reporter in Damascus reported the allegations (incorrect, at least in the matter of the injured soldier shown on YouTube) and used them to paint a dire picture of a military and a regime facing disintegration:
“Syrian soldiers shot for refusing to fire on protesters
Katherine Marsh – a pseudonym – in Damascus
The Guardian, Tuesday 12 April 2011
‘Witnesses claim soldiers who disobeyed orders in Banias were shot by security services as crackdown on protests intensifies.
Syrian soldiers have been shot by security forces after refusing to fire on protesters, witnesses said, as a crackdown on anti-government demonstrations intensified.
Witnesses told al-Jazeera and the BBC that some soldiers had refused to shoot after the army moved into Banias in the wake of intense protests on Friday.
Human rights monitors named Mourad Hejjo, a conscript from Madaya village, as one of those shot by security snipers. “His family and town are saying he refused to shoot at his people,” said Wassim Tarif, a local human rights monitor.
Footage on YouTube shows an injured soldier saying he was shot in the back by security forces, while another video shows the funeral of Muhammad Awad Qunbar, who sources said was killed for refusing to fire on protesters.’
‘Signs of defections will be worrying to Syria’s regime. State media reported a different version of events, claiming nine soldiers had been killed in an ambush by an armed group in Banias.’”
According to Landis’ informants, the threat of Khaddamist infiltrators, though of limited interest to the Western media, is a matter of considerable anxiety among the pro-democracy activists.
Landis quoted an e-mail from the Damascus correspondent of la Republica, Alix van Buren, who wrote him:
“Josh, the picture is extremely confusing and it is often impossible to confirm data on the web. The absence of most foreign media here in Syria adds to that murky picture. What I can contribute about the question of “foreign meddling” is the following. These are direct quotes from leading and respected opposition members:
‘Sunday two of ex-Vice President Khaddam’s men were arrested in Banyas. A human rights activist confirmed that they were sowing trouble by distributing money and weapons. I don’t know what to make of the confessions of the three guys shown on Syrian tv today. However, several Syrian dissidents believe in the presence and the role of “infiltrators”. Michel Kilo, though he accepts that possibility, cautioned that the issue of “infiltrators and conspiracies” should not be exploited as an obstacle in the quick transition towards democracy.’
Haytham al-Maleh was the most explicit in pointing to the meddling of Khaddam people in and around Banias. He also mentioned the ‘loose dogs’ loyal to Rifa’t al-Assad. According to him they are active particularly along the coast between Tartous and Latakya. Here is a link to my interview with al-Maleh in La Repubblica.”
The veteran blogger Ahmed Abu ElKheir, unfortunately now in prison for the second time in less than a month, and not yet released, has links to Banyas. The first, peaceful demonstration of Saturday morning was also sparked by the request for his release. In his Facebook profile, before being arrested, he too lashed out against Khaddam. Several commentators from that area agreed with him, cursing Khaddam for meddling “with the blood of the innocents”.
If Dr. Landis is correct about the events in Banyas, the democratic stew in Syria has a dangerous element of foreign provocateurs delivering arms and money—and disinformation of a certain intensity and sophistication direct to Western journalists.
Landis writes dismissively of a literally bloodstained order allegedly issued by the Mukhabarat instructing officers that it was “acceptable” to shoot their own men:
A three-page document purporting to be a “top secret” Mukhabarat memo, giving instruction to intelligence forces that “it is acceptable to shoot some of the security agents or army officers in order to further deceive the enemy” has been published on the web and republished by all4Syria. A copy was sent to me with a translation by a journalist with a leading magazine for my thoughts. It has blood splattered on it and is clearly a fake. What army, after all, would survive even days if its top officers were publishing orders to shoot its own officers? Not a good moral [sic] booster for the troops.
Alleged mischief-making by Khaddam and Rifaat al-Assad has an additional, regional dimension.
Saudi Arabia is quietly directing a pushback against Shi’a and Iranian influence in the Middle East, most conspicuously by its suppression of the largely Shi’a demonstrators in Bahrain, but also through a confrontational war of words (and expulsion of Iranian diplomats) conducted through the Gulf Cooperation Council of Saudi Arabia, Kuwait, Qatar, and the other sheikdoms.
Iran is anxious that Saudi Arabia is determined to destabilize Iran’s chief Middle East ally, Syria, as part of its effort to roll back Iranian influence and buttress the power of Sunni forces in the region.
The Assad regime is vulnerable to sectarian, anti-Shi’a agitation because the Assad family belongs to a minority sect, the Alawites, who are somewhat Shi’aesque and mystical in their observances. The Alawites comprise only 12 per cent of the population. Their religious practices are eyed askance by strict Sunni observers, and opponents of the Iranian alliance, such as Khaddam, sometimes stir the sectarian pot with warnings of the creeping “Shi’aization” of Syria.
The level of Iranian concern—and much interesting tittle-tattle concerning Khaddam and his alleged activities against the Assad regime—can be extracted from an op-ed carried on the website of the Iranian media outlet Press TV.
Titled Saudi Arabia, Jordan Behind Syria Unrest, it states:
“Saudi Arabia, which often bows to US and Israel’s policies in the region, tried to destabilize Bashar al-Assad’s government by undermining his rule.
To this end, Saudi Arabia paid 30 million dollars to former vice president Abdul Halim Khaddam to quit Assad’s government.
Khaddam sought asylum in France in 2005 with the aid of Saudi Arabia and began to plot against the Syrian government with the exiled leaders of the Muslim Brotherhood.
Khaddam, who is a relative of Saudi King Abdullah and former Lebanese premier Rafiq Hariri, used his great wealth to form a political group with the aim of toppling Bashar al-Assad.
The triangle of Khaddam-Abdullah-Hariri is well-known in the region as their wives are sisters.
Khaddam’s entire family enjoys Saudi citizenship and the value investment by his sons, Jamal and Jihad, in Saudi Arabia is estimated at more than USD 3 billion.
Therefore, with the start of popular protests in Tunisia, Egypt, Libya, Yemen and Bahrain, the Saudi regime saw an opportunity to drive a wedge between Tehran, Damascus and Beirut axis.
Due to the direct influence of the Saudi Wahhabis on Syria’s Muslim Brotherhood, the people of the cities of Daraa and Homs, following Saudi incitement and using popular demands as an excuse began resorting to violence.
It is reported that the United States, Israel, Jordan and Saudi Arabia formed joint operational headquarters in the Saudi Embassy in Belgium to direct the riots in southern Syria. Abdul Halim Khaddam, who held the highest political, executive and information posts in the Syrian government for more than 30 years, is said to have been transferred from Paris to Belgium to direct the unrest.
The reason for this was that based on French law, political asylum seekers cannot work against their countries of origin in France and therefore Khaddam was transferred to Brussels to guide the riots.
Jordan equipped the Muslim Brotherhood in the two cities with logistical facilities and personal weapons.
Although Bashar al-Assad promised implementation of fundamental changes and reforms after the bloody riot in the country, the Brotherhood continued to incite protesters against him.
The Syrian state television recently broadcast footage of armed activity in the border city of Daraa by a guerilla group, which opened fire on the people and government forces. It is said that the group, which is affiliated to Salafi movements, obtained its weapons from Jordan and Saudi Arabia.
Because Syria’s ruling party is from the Alevi tribes associated with the Shias, the Brotherhood, due to its anti-Shia ideas, has tried for three decades to topple the Alevi establishment of the country.
Hence, the recent riots in Syria are not just rooted in popular demands and harbor a tribal aspect and Saudi Arabia, Jordan and the US are directing the unrest for their future purposes.”
It looks like some enemies of Bashar al-Assad’s regime are ready to fight with violence on the streets and roads of Syria—and disinformation on the front pages of the newspapers of the world.
Peter Lee is a businessman who has spent thirty years observing, analyzing, and writing about international affairs. Lee writes frequently for CounterPunch and can be reached at peterrlee-2000@yahoo.
by Dan Colman
April 1st, 2011
In late February, Charles Ferguson’s film Inside Job won the Academy Award for Best Documentary. And now the film documenting the causes of the 2008 global financial meltdown has made its way online (thanks to the Internet Archive). A corrupt financial industry, its corrosive relationship with politicians, academics and regulators, and the trillions of dollars in damage done: it all gets documented in this film, which runs a little shy of two hours.
Inside Job, now listed in our Free Movie collection, can be purchased on DVD at Amazon. We all love free, but let’s remember that good projects cost real money to develop, and they could use real financial support. So please consider buying a copy.
Hopefully watching or buying this film won’t be a pointless act, even though it can rightly feel that way. As Charles Ferguson reminded us during his Oscar acceptance speech, we are three years beyond the Wall Street crisis and taxpayers (you) got fleeced for billions. But still not one Wall Street exec is facing criminal charges. Welcome to your plutocracy…
Virtual Group NZ
Western business is dominated by the left brain (the part that’s good at analysis) and weak in the right brain (the part that’s good at creativity).
Herrmann’s Thinking Preferences, as shown in the first chart, expands this into four quadrants according to how our brain is structured. The two left hand quadrants (A and B) are the left brain quadrants and the two right hand quadrants (D and C) are the right brain quadrants. The two upper quadrants (A and D) are the cerebral “thinking” quadrants and the two lower quadrants are the limbic “feeling” quadrants.
According to extensive research by Herrmann Asia, over 90% of large corporations in Australia and Asia have strong thinking preferences in the A quadrant (concerned mostly with facts, efficiencies, technology, performance, measurement and objectives) and the B quadrant (concerned mostly with form, methods, risk reduction, control, timing and policy). A much smaller percentage are as competent in the D quadrant (concerned mostly with innovation, new concepts, the future, strategy, the big picture, competition, vision and purpose) and very few are as competent in the C quadrant (concerned mostly with feelings, communicating, dealing with people, culture, values and people development).
Firstly, Western business is dominated by men. A major study by Kevin Ho, undertaken as part of his doctoral dissertation, clearly shows that women are significantly more right-brained than men. It would seem that women really are from Venus.
The study is reported in Ned Herrmann’s book “The Creative Brain”; it covered 7,989 individuals, approximately one third of whom were women. Ho was careful to avoid any bias due to qualifications or occupations. The second chart shows the Herrmann thinking preferences for men (the blue line) and women (the red line).
Quadrant D is a measure of creativity and in Ho’s study women scored an average of 79.1 compared with men at 73.9.
Quadrant C is a measure of people skills and in Ho’s study women scored an average of 74.9 compared with men at 55.5.
When these two right brain sides are combined women have an average right brain score of 102.3 compared with men of 86.0.
The current scientific explanation of the difference between men and women focuses on the fact that men evolved as hunters. They had to concentrate on hunting prey, a risky task that required them to develop a sense for risk and aggression. While the men hunted, the women collected food around the dwellings and organised activities around the home. Unlike the men, they moved between a number of activities. While the men executed their task as hunters in silence, the women always had others around to share their joy, sorrow and problems. It is therefore unsurprising that women learned to give vent to their emotions and to communicate well with others.
Because of these beginnings, today, men demonstrate a preference for focus, precision, directness, logic, strategy and risk while the women have a preference for organising, strategy, feelings and empathy.
The second reason
The second reason why business is dominated by the left brain is our education system. By the time we are six years old, most of the creativity has been trained out of us. When children go to school they are like question marks; by the time they leave school they are more like periods, with all the answers. We need to learn how to look for alternatives and “both/and” solutions, but we are taught mostly analysis and “either/or” thinking. A recent study of American 18-year-old students found that, on average, they had completed over 2,000 examinations requiring a right-or-wrong answer.
Want more evidence? Reflect on the early school report of Albert Einstein. “Albert is a very poor student. He is mentally slow, unsociable, and always day dreaming. He is spoiling it for the rest of the class. It would be in the best interest for all if he were to be removed from school at once.”
Although I strongly believe in left AND right brain thinking, in practice, because of the bias within the environment, I often find myself forced to work in ways that correct this bias. If you’d like to discuss these matters more fully, phone 0800 4 virtual.
The difference between left and right brain thinking and why it matters
In 1981 Roger Sperry received the Nobel Prize in Physiology or Medicine “for his discoveries concerning the functional specialisation of the cerebral hemispheres”. Sperry, his student Michael Gazzaniga and the neurosurgeon Joseph Bogen performed the first ‘split brain’ operation, and can be credited with some of the most important insights we have into the physiology of the brain today. They found that the left side of the brain is concerned with language, words, analysis, and figures. The right side is concerned with patterns, relationships, art, and music.
The left brain is the clever part. It is so clever that it has taken us to the moon and developed our wonderful technologies. The trouble is, it is so clever that if we’re not careful it will kill us off. It’s the part that developed the nuclear bomb and is in the process of polluting the world.
It’s the piece of the brain that’s always scheming; it never stops. It’s the bit that wakes you up in the middle of the night with a wonderful idea that in the morning never looks quite so good. This part of our brain is like a bossy manager who needs to be in control, demands to be heard and thinks he is the only one with any ideas. Indeed, it is so bossy that sometimes when it is scheming, worrying or thinking in the middle of the night, even the best strategies are insufficient to keep it under control and quiet. For example, sometimes I try to get back to sleep by counting down slowly from 20 to 0, relaxing more after each number, but unless I’m totally disciplined, in between the numbers my left brain will race off onto some new subject.
The left brain is a straight-line calculator that deals in words and numbers, likes things in sequence, needs to explain things rationally and needs always to be in control. It likes logic and things that are 100% correct. It has trouble dealing with ambiguity, partial truth and uncertainty. It needs to be right, and it needs to be 100 percent right: if something is only partly right, even if it is only slightly wrong, the left brain is inclined to reject the whole notion rather than play with the idea and work with it to see what can be extracted from the good parts.
This is why it’s so important to set up conditions and expectations in creativity sessions so that things can be wrong, or at least partly wrong; we need to change the inclination to reject the whole notion by playing with ideas and using words like “that’s interesting”.
Most religions are at least partly based on trying to slow down and control the left brain; in essence, this is the purpose of prayer and meditation. The act of creativity is largely based on practices designed to fool the left brain into slowing down or turning off. These include non-dominant handwriting, analogue drawings, brain gym, telling stories, colour and drawing techniques, meditation, lateral thinking techniques, random word association, and starting at the end and working back towards the start.
The right brain, on the other hand, has no need to be in control. It is an image processor: it deals with pictures and emotions, feelings and relationships. It is creative, intuitive and trusting. It is far better connected to the enormous power of the subconscious than the left brain is. Compared with the subconscious, the conscious mind is very limited, and yet this is where most of us try to solve our problems.
Weather as a Force Multiplier: Owning the Weather in 2025
A Research Paper Presented To Air Force 2025
“The purpose of this paper is to outline a strategy for the use of a future weather-modification system to achieve military objectives…”
“A high-risk, high-reward endeavor, weather-modification offers a dilemma not unlike the splitting of the atom.”
“From enhancing friendly operations or disrupting those of the enemy via small-scale tailoring of natural weather patterns to complete dominance of global communications and counterspace control, weather-modification offers the war fighter a wide-range of possible options to defeat or coerce an adversary.”
Col Tamzy J. House
Lt Col James B. Near, Jr.
LTC William B. Shields (USA)
Maj Ronald J. Celentano
Maj David M. Husband
Maj Ann E. Mercer
Maj James E. Pugh
2025 is a study designed to comply with a directive from the chief of staff of the Air Force to examine the concepts, capabilities, and technologies the United States will require to remain the dominant air and space force in the future. Presented on 17 June 1996, this report was produced in the Department of Defense school environment of academic freedom and in the interest of advancing concepts related to national defense. The views expressed in this report are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States government.
This report contains fictional representations of future situations/scenarios. Any similarities to real people or events, other than those specifically cited, are unintentional and are for purposes of illustration only.
This publication has been reviewed by security and policy review authorities, is unclassified, and is cleared for public release.
Why Would We Want to Mess with the Weather?
What Do We Mean by “Weather-modification”?
The Global Weather Network
Applying Weather-modification to Military Operations
Concept of Operations
Exploitation of “NearSpace” for Space Control
Opportunities Afforded by Space Weather-modification
Communications Dominance via Ionospheric Modification
Concept of Operations Summary
How Do We Get There From Here?
A Why Is the Ionosphere Important?
B Research to Better Understand and Predict Ionospheric Effects
C Acronyms and Definitions
3-1. Global Weather Network
3-2. The Military System for Weather-Modification Operations
4-1. Crossed-Beam Approach for Generating an Artificial Ionospheric Mirror
4-2. Artificial Ionospheric Mirrors Point-to-Point Communications
4-3. Artificial Ionospheric Mirror Over-the-Horizon Surveillance Concept
4-4. Scenarios for Telecommunications Degradation
5-1. A Core Competency Road Map to Weather Modification in 2025
5-2. A Systems Development Road Map to Weather Modification in 2025
1 – Operational Capabilities Matrix
We express our appreciation to Mr Mike McKim of Air War College who provided a wealth of technical expertise and innovative ideas that significantly contributed to our paper. We are also especially grateful for the devoted support of our families during this research project. Their understanding and patience during the demanding research period were crucial to the project’s success.
In 2025, US aerospace forces can “own the weather” by capitalizing on emerging technologies and focusing development of those technologies to war-fighting applications. Such a capability offers the war fighter tools to shape the battlespace in ways never before possible. It provides opportunities to impact operations across the full spectrum of conflict and is pertinent to all possible futures. The purpose of this paper is to outline a strategy for the use of a future weather-modification system to achieve military objectives rather than to provide a detailed technical road map.
A high-risk, high-reward endeavor, weather-modification offers a dilemma not unlike the splitting of the atom. While some segments of society will always be reluctant to examine controversial issues such as weather-modification, the tremendous military capabilities that could result from this field are ignored at our own peril. From enhancing friendly operations or disrupting those of the enemy via small-scale tailoring of natural weather patterns to complete dominance of global communications and counterspace control, weather-modification offers the war fighter a wide-range of possible options to defeat or coerce an adversary. Some of the potential capabilities a weather-modification system could provide to a war-fighting commander in chief (CINC) are listed in table 1.
Technology advancements in five major areas are necessary for an integrated weather-modification capability: (1) advanced nonlinear modeling techniques, (2) computational capability, (3) information gathering and transmission, (4) a global sensor array, and (5) weather intervention techniques. Some intervention tools exist today and others may be developed and refined in the future.
Table 1 – Operational Capabilities Matrix

DEGRADE ENEMY FORCES

Precipitation Enhancement
- Flood Lines of Communication
- Reduce PGM/Recce Effectiveness
- Decrease Comfort Level/Morale

Storm Enhancement
- Deny Operations

Precipitation Denial
- Deny Fresh Water
- Induce Drought

Space Weather
- Disrupt Communications/Radar
- Disable/Destroy Space Assets

Fog and Cloud Removal
- Deny Concealment
- Increase Vulnerability to PGM/Recce

Detect Hostile Weather Activities

ENHANCE FRIENDLY FORCES

Precipitation Avoidance
- Maintain/Improve LOC
- Maintain Visibility
- Maintain Comfort Level/Morale

Storm Modification
- Choose Battlespace Environment

Space Weather
- Improve Communication Reliability
- Intercept Enemy Transmissions
- Revitalize Space Assets

Fog and Cloud Generation
- Increase Concealment

Fog and Cloud Removal
- Maintain Airfield Operations
- Enhance PGM Effectiveness

Defend against Enemy Capabilities
Current technologies that will mature over the next 30 years will offer anyone who has the necessary resources the ability to modify weather patterns and their corresponding effects, at least on the local scale. Current demographic, economic, and environmental trends will create global stresses that provide the impetus necessary for many countries or groups to turn this weather-modification ability into a capability.
In the United States, weather-modification will likely become a part of national security policy with both domestic and international applications. Our government will pursue such a policy, depending on its interests, at various levels. These levels could include unilateral actions, participation in a security framework such as NATO, membership in an international organization such as the UN, or participation in a coalition. Assuming that in 2025 our national security strategy includes weather-modification, its use in our national military strategy will naturally follow. Besides the significant benefits an operational capability would provide, another motivation to pursue weather-modification is to deter and counter potential adversaries.
In this paper we show that appropriate application of weather-modification can provide battlespace dominance to a degree never before imagined. In the future, such operations will enhance air and space superiority and provide new options for battlespace shaping and battlespace awareness.1 “The technology is there, waiting for us to pull it all together;”2 in 2025 we can “Own the Weather.”
Scenario: Imagine that in 2025 the US is fighting a rich, but now consolidated, politically powerful drug cartel in South America. The cartel has purchased hundreds of Russian- and Chinese-built fighters that have successfully thwarted our attempts to attack their production facilities. With their local numerical superiority and interior lines, the cartel is launching more than 10 aircraft for every one of ours. In addition, the cartel is using the French système probatoire d’observation de la terre (SPOT) positioning and tracking imagery systems, which in 2025 are capable of transmitting near-real-time, multispectral imagery with 1 meter resolution. The US wishes to engage the enemy on an uneven playing field in order to exploit the full potential of our aircraft and munitions.
Meteorological analysis reveals that equatorial South America typically has afternoon thunderstorms on a daily basis throughout the year. Our intelligence has confirmed that cartel pilots are reluctant to fly in or near thunderstorms. Therefore, our weather force support element (WFSE), which is a part of the commander in chief’s (CINC) air operations center (AOC), is tasked to forecast storm paths and trigger or intensify thunderstorm cells over critical target areas that the enemy must defend with their aircraft. Since our aircraft in 2025 have all-weather capability, the thunderstorm threat is minimal to our forces, and we can effectively and decisively control the sky over the target.
The WFSE has the necessary sensor and communication capabilities to observe, detect, and act on weather-modification requirements to support US military objectives. These capabilities are part of an advanced battle area system that supports the war-fighting CINC. In our scenario, the CINC tasks the WFSE to conduct storm intensification and concealment operations. The WFSE models the atmospheric conditions to forecast, with 90 percent confidence, the likelihood of successful modification using airborne cloud generation and seeding.
In 2025, uninhabited aerospace vehicles (UAV) are routinely used for weather-modification operations. By cross-referencing desired attack times with wind and thunderstorm forecasts and the SPOT satellite’s projected orbit, the WFSE generates mission profiles for each UAV. The WFSE guides each UAV using near-real-time information from a networked sensor array.
Prior to the attack, which is coordinated with forecasted weather conditions, the UAVs begin cloud generation and seeding operations. UAVs disperse a cirrus shield to deny enemy visual and infrared (IR) surveillance. Simultaneously, microwave heaters create localized scintillation to disrupt active sensing via synthetic aperture radar (SAR) systems such as the commercially available Canadian search and rescue satellite-aided tracking (SARSAT) that will be widely available in 2025. Other cloud seeding operations cause a developing thunderstorm to intensify over the target, severely limiting the enemy’s capability to defend. The WFSE monitors the entire operation in real-time and notes the successful completion of another very important but routine weather-modification mission.
This scenario may seem far-fetched, but by 2025 it is within the realm of possibility. The next chapter explores the reasons for weather-modification, defines the scope, and examines trends that will make it possible in the next 30 years.
Why Would We Want to Mess with the Weather?
According to Gen Gordon Sullivan, former Army chief of staff, “As we leap technology into the 21st century, we will be able to see the enemy day or night, in any weather, and go after him relentlessly.”3 A global, precise, real-time, robust, systematic weather-modification capability would provide war-fighting CINCs with a powerful force multiplier to achieve military objectives. Since weather will be common to all possible futures, a weather-modification capability would be universally applicable and have utility across the entire spectrum of conflict. The capability of influencing the weather even on a small scale could change it from a force degrader to a force multiplier.
People have always wanted to be able to do something about the weather. In the US, as early as 1839, newspaper archives tell of people with serious and creative ideas on how to make rain.4 In 1957, the president’s advisory committee on weather control explicitly recognized the military potential of weather-modification, warning in their report that it could become a more important weapon than the atom bomb.5
However, controversy since 1947 concerning the possible legal consequences arising from the deliberate alteration of large storm systems meant that little further experimentation could be conducted on storms which had the potential to reach land.6 In 1977, the UN General Assembly adopted a resolution prohibiting the hostile use of environmental modification techniques. The resulting “Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques (ENMOD)” committed the signatories to refrain from any military or other hostile use of weather-modification which could result in widespread, long-lasting, or severe effects.7 While these two events have not halted the pursuit of weather-modification research, they have significantly inhibited its pace and the development of associated technologies, while producing a primary focus on suppressive versus intensification activities.
The influence of the weather on military operations has long been recognized. During World War II, Eisenhower said,
[i]n Europe bad weather is the worst enemy of the air [operations]. Some soldier once said, “The weather is always neutral.” Nothing could be more untrue. Bad weather is obviously the enemy of the side that seeks to launch projects requiring good weather, or of the side possessing great assets, such as strong air forces, which depend upon good weather for effective operations. If really bad weather should endure permanently, the Nazi would need nothing else to defend the Normandy coast!8
The impact of weather has also been important in more recent military operations. A significant number of the air sorties into Tuzla during the initial deployment supporting the Bosnian peace operation aborted due to weather. During Operation Desert Storm, Gen Buster C. Glosson asked his weather officer to tell him which targets would be clear in 48 hours for inclusion in the air tasking order (ATO).9 But current forecasting capability is only 85 percent accurate for no more than 24 hours, which doesn’t adequately meet the needs of the ATO planning cycle. Over 50 percent of the F-117 sorties were weather-aborted over their targets, and A-10s flew only 75 of 200 scheduled close air support (CAS) missions due to low cloud cover during the first two days of the campaign.10 The application of weather-modification technology to clear a hole over the targets long enough for F-117s to attack and place bombs on target, or to clear the fog from the runway at Tuzla, would have been a very effective force multiplier. Weather-modification clearly has potential for military use at the operational level to reduce the elements of fog and friction for friendly operations and to significantly increase them for the enemy.
What Do We Mean by “Weather-modification”?
Today, weather-modification is the alteration of weather phenomena over a limited area for a limited period of time.11 Within the next three decades, the concept of weather-modification could expand to include the ability to shape weather patterns by influencing their determining factors.12 Achieving such a highly accurate and reasonably precise weather-modification capability in the next 30 years will require overcoming some challenging but not insurmountable technological and legal hurdles.
Technologically, we must have a solid understanding of the variables that affect weather. We must be able to model the dynamics of their relationships, map the possible results of their interactions, measure their actual real-time values, and influence their values to achieve a desired outcome. Society will have to provide the resources and legal basis for a mature capability to develop. How could all of this happen? The following notional scenario postulates how weather-modification might become both technically feasible and socially desirable by 2025.
Between now and 2005, technological advances in meteorology and the demand for more precise weather information by global businesses will lead to the successful identification and parameterization of the major variables that affect weather. By 2015, advances in computational capability, modeling techniques, and atmospheric information tracking will produce a highly accurate and reliable weather prediction capability, validated against real-world weather. In the following decade, population densities put pressure on the worldwide availability and cost of food and usable water. Massive life and property losses associated with natural weather disasters become increasingly unacceptable. These pressures prompt governments and/or other organizations who are able to capitalize on the technological advances of the previous 20 years to pursue a highly accurate and reasonably precise weather-modification capability. The increasing urgency to realize the benefits of this capability stimulates laws and treaties, and some unilateral actions, making the risks required to validate and refine it acceptable. By 2025, the world, or parts of it, are able to shape local weather patterns by influencing the factors that affect climate, precipitation, storms and their effects, fog, and near space. These highly accurate and reasonably precise civil applications of weather-modification technology have obvious military implications. This is particularly true for aerospace forces, for while weather may affect all mediums of operation, it operates in ours.
The term weather-modification may have negative connotations for many people, civilians and military members alike. It is thus important to define the scope to be considered in this paper so that potential critics or proponents of further research have a common basis for discussion.
In the broadest sense, weather-modification can be divided into two major categories: suppression and intensification of weather patterns. In extreme cases, it might involve the creation of completely new weather patterns, attenuation or control of severe storms, or even alteration of global climate on a far-reaching and/or long-lasting scale. In the mildest and least controversial cases it may consist of inducing or suppressing precipitation, clouds, or fog for short times over a small-scale region. Other low-intensity applications might include the alteration and/or use of near space as a medium to enhance communications, disrupt active or passive sensing, or other purposes. In conducting the research for this study, the broadest possible interpretation of weather-modification was initially embraced, so that the widest range of opportunities available for our military in 2025 were thoughtfully considered. However, for several reasons described below, this paper focuses primarily on localized and short-term forms of weather-modification and how these could be incorporated into war-fighting capability. The primary areas discussed include generation and dissipation of precipitation, clouds, and fog; modification of localized storm systems; and the use of the ionosphere and near space for space control and communications dominance. These applications are consistent with CJCSI 3810.01, “Meteorological and Oceanographic Operations.”13
Extreme and controversial examples of weather-modification (creation of made-to-order weather, large-scale climate modification, creation and/or control or “steering” of severe storms, etc.) were researched as part of this study but receive only brief mention here because, in the authors’ judgment, the technical obstacles preventing their application appear insurmountable within 30 years.14 If this were not the case, such applications would have been included in this report as potential military options, despite their controversial and potentially malevolent nature and their inconsistency with standing UN agreements to which the US is a signatory.
On the other hand, the weather-modification applications proposed in this report range from technically proven to potentially feasible. They are similar, however, in that none are currently employed or envisioned for employment by our operational forces. They are also similar in their potential value for the war fighter of the future, as we hope to convey in the following chapters. A notional integrated system that incorporates weather-modification tools will be described in the next chapter; how those tools might be applied are then discussed within the framework of the Concept of Operations in chapter 4.
Our vision is that by 2025 the military could influence the weather on a mesoscale (&lt;200 km²) or microscale (immediate local area) to achieve operational capabilities such as those listed in table 1. The capability would be the synergistic result of a system consisting of (1) highly trained weather force specialists (WFS) who are members of the CINC’s weather force support element (WFSE); (2) access ports to the global weather network (GWN), where worldwide weather observations and forecasts are obtained near-real-time from civilian and military sources; (3) a dense, highly accurate local area weather sensing and communication system; (4) an advanced computer local area weather-modification modeling and prediction capability within the area of responsibility (AOR); (5) proven weather-modification intervention technologies; and (6) a feedback capability.
The Global Weather Network
The GWN is envisioned to be an evolutionary expansion of the current military and civilian worldwide weather data network. By 2025, it will be a super high-speed, expanded bandwidth, communication network filled with near-real-time weather observations taken from a denser and more accurate worldwide observation network resulting from highly improved ground, air, maritime, and space sensors. The network will also provide access to forecast centers around the world where sophisticated, tailored forecast and data products, generated from weather prediction models (global, regional, local, specialized, etc.) based on the latest nonlinear mathematical techniques are made available to GWN customers for near-real-time use.
By 2025, we envision that weather prediction models, in general, and mesoscale weather-modification models, in particular, will be able to emulate all-weather producing variables, along with their interrelated dynamics, and prove to be highly accurate in stringent measurement trials against empirical data. The brains of these models will be advanced software and hardware capabilities which can rapidly ingest trillions of environmental data points, merge them into usable data bases, process the data through the weather prediction models, and disseminate the weather information over the GWN in near-real-time.15 This network is depicted schematically in figure 3-1.
Figure 3-1. Global Weather Network
Source: Microsoft Clipart Gallery © 1995 with courtesy from Microsoft.
Evidence of the evolving future weather modeling and prediction capability as well as the GWN can be seen in the national oceanic and atmospheric administration’s (NOAA) 1995-2005 strategic plan. It includes program elements to “advance short-term warning and forecast services, implement seasonal to inter-annual climate forecasts, and predict and assess decadal to centennial change;”16 it does not, however, include plans for weather-modification modeling or modification technology development. NOAA’s plans include extensive data gathering programs such as Next Generation Radar (NEXRAD) and Doppler weather surveillance systems deployed throughout the US. Data from these sensing systems feed into over 100 forecast centers for processing by the Advanced Weather Interactive Processing System (AWIPS), which will provide data communication, processing, and display capabilities for extensive forecasting. In addition, NOAA has leased a Cray C90 supercomputer capable of performing over 1.5×10¹⁰ operations per second that has already been used to run a Hurricane Prediction System.17
Applying Weather-modification to Military Operations
How will the military, in general, and the USAF, in particular, manage and employ a weather-modification capability? We envision this will be done by the weather force support element (WFSE), whose primary mission would be to support the war-fighting CINCs with weather-modification options, in addition to current forecasting support. Although the WFSE could operate anywhere as long as it has access to the GWN and the system components already discussed, it will more than likely be a component within the AOC or its 2025-equivalent. With the CINC’s intent as guidance, the WFSE formulates weather-modification options using information provided by the GWN, local weather data network, and weather-modification forecast model. The options include range of effect, probability of success, resources to be expended, the enemy’s vulnerability, and risks involved. The CINC chooses an effect based on these inputs, and the WFSE then implements the chosen course, selecting the right modification tools and employing them to achieve the desired effect. Sensors detect the change and feed data on the new weather pattern to the modeling system which updates its forecast accordingly. The WFSE checks the effectiveness of its efforts by pulling down the updated current conditions and new forecast(s) from the GWN and local weather data network, and plans follow-on missions as needed. This concept is illustrated in figure 3-2.
Figure 3-2. The Military System for Weather-Modification Operations.
Source: Microsoft Clipart Gallery © 1995 with courtesy from Microsoft.
WFSE personnel will need to be experts in information systems and well schooled in the arts of both offensive and defensive information warfare. They would also have an in-depth understanding of the GWN and an appreciation for how weather-modification could be employed to meet a CINC’s needs.
Because of the nodal web nature of the GWN, this concept would be very flexible. For instance, a WFSE could be assigned to each theater to provide direct support to the CINC. The system would also be survivable, with multiple nodes connected to the GWN.
A product of the information age, this system would be most vulnerable to information warfare. Each WFSE would need the most current defensive and offensive information capabilities available. Defensive abilities would be necessary for survival. Offensive abilities could provide spoofing options to create virtual weather in the enemy’s sensory and information systems, making it more likely for them to make decisions producing results of our choosing rather than theirs. It would also allow for the capability to mask or disguise our weather-modification activities.
Two key technologies are necessary to meld an integrated, comprehensive, responsive, precise, and effective weather-modification system. Advances in the science of chaos are critical to this endeavor. Also key to the feasibility of such a system is the ability to model the extremely complex nonlinear system of global weather in ways that can accurately predict the outcome of changes in the influencing variables. Researchers have already successfully controlled single variable nonlinear systems in the lab and hypothesize that current mathematical techniques and computer capacity could handle systems with up to five variables. Advances in these two areas would make it feasible to affect regional weather patterns by making small, continuous nudges to one or more influencing factors. Conceivably, with enough lead time and the right conditions, you could get “made-to-order” weather.18
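The idea of steering a nonlinear system with small, continuous nudges can be made concrete with a toy model. The sketch below is a hypothetical illustration, not something from the paper: it uses the chaotic logistic map as a stand-in for a weather-like single-variable system and applies OGY-style control, i.e., tiny, bounded adjustments to one parameter r, made only when the state wanders close to the desired operating point.

```python
# Hypothetical illustration: holding a chaotic system at an unstable state
# using only small, bounded "nudges" to one influencing parameter.
# The logistic map x -> r*x*(1-x) stands in for a weather-like nonlinear
# system; r is the single variable we are allowed to adjust slightly.

def nudge_control(r0=3.9, x0=0.3, steps=20000, max_nudge=0.05):
    x_star = 1.0 - 1.0 / r0      # unstable fixed point we want to hold
    slope = 2.0 - r0             # df/dx at x_star; |slope| > 1 (unstable)
    sens = x_star / r0           # df/dr at x_star (parameter sensitivity)
    x = x0
    history = []
    for _ in range(steps):
        # Nudge chosen to cancel the next step's deviation (linearized),
        # but applied only when it is small -- otherwise do nothing and
        # wait for the chaotic trajectory to re-enter the capture region.
        nudge = -slope * (x - x_star) / sens
        if abs(nudge) > max_nudge:
            nudge = 0.0
        x = (r0 + nudge) * x * (1.0 - x)
        history.append(x)
    return x_star, history

x_star, hist = nudge_control()
# Once captured, the trajectory locks onto x_star and stays there.
```

Without the nudges the trajectory wanders chaotically forever; with them, the parameter never moves by more than max_nudge, yet the system settles onto the chosen state. This is the sense in which "small, continuous nudges to one or more influencing factors" could, in principle, shape the behavior of a nonlinear system.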
Developing a true weather-modification capability will require various intervention tools to adjust the appropriate meteorological parameters in predictable ways. It is this area that must be developed by the military based on specific required capabilities such as those listed in table 1 (located in the Executive Summary). Such a system would contain a sensor array and localized battle area data net to provide the fine level of resolution required to detect intervention effects and provide feedback. This net would include ground, air, maritime, and space sensors as well as human observations in order to ensure the reliability and responsiveness of the system, even in the event of enemy countermeasures. It would also include specific intervention tools and technologies, some of which already exist and others which must be developed. Some of these proposed tools are described in the following chapter titled Concept of Operations. The total weather-modification process would be a real-time loop of continuous, appropriate, measured interventions, and feedback capable of producing desired weather behavior.
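The continuous intervention-and-feedback loop described above can be sketched abstractly. All names in the sketch below (forecast, intervene, sense) are hypothetical placeholders rather than systems from this paper; the point is only the loop structure of predict, act, measure, and repeat until the forecast falls within tolerance of the desired state.

```python
# Minimal sketch of a closed intervention/feedback loop: model the expected
# evolution, apply a measured intervention if the prediction misses the
# target, sense the new state, and repeat. Purely illustrative.

def modification_loop(target, state, forecast, intervene, sense,
                      tolerance=0.1, max_cycles=10):
    for cycle in range(max_cycles):
        predicted = forecast(state)        # model the expected evolution
        error = target - predicted
        if abs(error) <= tolerance:        # prediction close enough: done
            return state, cycle
        state = intervene(state, error)    # measured nudge toward target
        state = sense(state)               # sensors report the new pattern
    return state, max_cycles

# Toy usage: a scalar "weather index" that drifts and responds to nudges.
final, cycles = modification_loop(
    target=5.0,
    state=0.0,
    forecast=lambda s: s * 1.05,           # assumed drift model
    intervene=lambda s, e: s + 0.5 * e,    # apply half the correction
    sense=lambda s: s,                     # ideal noise-free sensing
)
# The loop settles near the target within a handful of cycles.
```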
Concept of Operations
The essential ingredient of the weather-modification system is the set of intervention techniques used to modify the weather. The number of specific intervention methodologies is limited only by the imagination, but with few exceptions they involve infusing either energy or chemicals into the meteorological process in the right way, at the right place and time. The intervention could be designed to modify the weather in a number of ways, such as influencing clouds and precipitation, storm intensity, climate, space, or fog.
For centuries man has desired the ability to influence precipitation at the time and place of his choosing. Until recently, success in achieving this goal has been minimal; however, a new window of opportunity may exist resulting from development of new technologies and an increasing world interest in relieving water shortages through precipitation enhancement. Consequently, we advocate that the DOD explore the many opportunities (and also the ramifications) resulting from development of a capability to influence precipitation, or to conduct “selective precipitation modification.” Although the capability to influence precipitation over the long term (i.e., for more than several days) is still not fully understood, by 2025 we will certainly be capable of increasing or decreasing precipitation over the short term in a localized area.
Before discussing research in this area, it is important to describe the benefits of such a capability. While many military operations may be influenced by precipitation, ground mobility is most affected. Influencing precipitation could prove useful in two ways. First, enhancing precipitation could decrease the enemy’s trafficability by muddying terrain, while also affecting their morale. Second, suppressing precipitation could increase friendly trafficability by drying out an otherwise muddied area.
What is the possibility of developing this capability and applying it to tactical operations by 2025? Closer than one might think. Research has been conducted in precipitation modification for many years, and an aspect of the resulting technology was applied to operations during the Vietnam War.19 These initial attempts provide a foundation for further development of a true capability for selective precipitation modification.
Interestingly enough, the US government made a conscious decision to stop building upon this foundation. As mentioned earlier, international agreements have prevented the US from investigating weather-modification operations that could have widespread, long-lasting, or severe effects. However, possibilities do exist (within the boundaries of established treaties) for using localized precipitation modification over the short term, with limited and potentially positive results.
These possibilities date back to our own previous experimentation with precipitation modification. As stated in an article appearing in the Journal of Applied Meteorology,
[n]early all the weather-modification efforts over the last quarter century have been aimed at producing changes on the cloud scale through exploitation of the saturated vapor pressure difference between ice and water. This is not to be criticized but it is time we also consider the feasibility of weather-modification on other time-space scales and with other physical hypotheses.20
This study by William M. Gray, et al., investigated the hypothesis that “significant beneficial influences can be derived through judicious exploitation of the solar absorption potential of carbon black dust.”21 The study ultimately found that this technology could be used to enhance rainfall on the mesoscale, generate cirrus clouds, and enhance cumulonimbus (thunderstorm) clouds in otherwise dry areas.
The technology can be described as follows. Just as a black tar roof easily absorbs solar energy and subsequently radiates heat during a sunny day, carbon black also readily absorbs solar energy. When dispersed in microscopic or “dust” form in the air over a large body of water, the carbon becomes hot and heats the surrounding air, thereby increasing the amount of evaporation from the body of water below. As the surrounding air heats up, parcels of air will rise and the water vapor contained in the rising air parcel will eventually condense to form clouds. Over time the cloud droplets increase in size as more and more water vapor condenses, and eventually they become too large and heavy to stay suspended and will fall as rain or other forms of precipitation.22 The study points out that this precipitation enhancement technology would work best “upwind from coastlines with onshore flow.” Lake-effect snow along the southern edge of the Great Lakes is a naturally occurring phenomenon based on similar dynamics.
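The chain described above (absorbed sunlight, heating, evaporation, condensation) lends itself to a back-of-envelope energy balance. The sketch below is illustrative only; the solar flux, absorption fraction, latent heat value, and function name are our own assumptions, not figures from the Gray study.

```python
# Back-of-envelope upper bound on extra evaporation driven by solar
# absorption in a carbon dust layer over water. All numeric values
# are illustrative assumptions, not values from the cited study.

SOLAR_FLUX = 1000.0        # W/m^2, typical clear-sky insolation
ABSORPTION_FRACTION = 0.3  # assumed fraction absorbed by the dust layer
LATENT_HEAT_VAP = 2.45e6   # J/kg, latent heat of vaporization of water

def extra_evaporation_rate(area_m2, hours):
    """Upper-bound mass of water (kg) evaporated if all solar energy
    absorbed by the layer over `area_m2` for `hours` drove evaporation."""
    energy_j = SOLAR_FLUX * ABSORPTION_FRACTION * area_m2 * hours * 3600.0
    return energy_j / LATENT_HEAT_VAP

# Example: a 10 km x 10 km dust plume over water for 6 daylight hours
mass_kg = extra_evaporation_rate(10_000 * 10_000, 6)
print(f"{mass_kg:.2e} kg of additional water vapor (upper bound)")
```

The real yield would be far smaller, since much of the absorbed heat goes into warming air rather than evaporating water, but the estimate shows why the study targeted large bodies of water upwind of the area of interest.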
Can this type of precipitation enhancement technology have military applications? Yes, if the right conditions exist. For example, if we are fortunate enough to have a fairly large body of water available upwind from the targeted battlefield, carbon dust could be placed in the atmosphere over that water. Assuming the atmospheric dynamics are supportive, the rising saturated air will eventually form clouds and rain showers downwind over the land.23 While one cannot count on a body of water being located upwind of the battlefield, the technology could prove enormously useful when the right conditions do exist. Only further experimentation will determine to what degree precipitation enhancement can be controlled.
If precipitation enhancement techniques are successfully developed and the right natural conditions also exist, we must also be able to disperse carbon dust into the desired location. Transporting it in a completely controlled, safe, cost-effective, and reliable manner requires innovation. Numerous dispersal techniques have already been studied, but the most convenient, safe, and cost-effective method discussed is the use of afterburner-type jet engines to generate carbon particles while flying through the targeted air. This method is based on injection of liquid hydrocarbon fuel into the afterburner’s combustion gases. This direct generation method was found to be more desirable than another plausible method (i.e., the transport of large quantities of previously produced and properly sized carbon dust to the desired altitude).
The carbon dust study demonstrated that small-scale precipitation enhancement is possible and has been successfully verified under certain atmospheric conditions. Since the study was conducted, no known military applications of this technology have been realized. However, we can postulate how this technology might be used in the future by examining some of the delivery platforms conceivably available for effective dispersal of carbon dust or other effective modification agents in the year 2025.
One method we propose would further maximize the technology’s safety and reliability, by virtually eliminating the human element. To date, much work has been done on UAVs that can closely (if not completely) match the capabilities of piloted aircraft. If this UAV technology were combined with stealth and carbon dust technologies, the result could be a UAV invisible to radar while en route to the targeted area, which could spontaneously create carbon dust in any location. However, minimizing the number of UAVs required to complete the mission would depend upon the development of a new and more efficient system to produce carbon dust by a follow-on technology to the afterburner-type jet engines previously mentioned. In order to effectively use stealth technology, this system must also have the ability to disperse carbon dust while minimizing (or eliminating) the UAV’s infrared heat source.
In addition to using stealth UAV and carbon dust absorption technology for precipitation enhancement, this delivery method could also be used for precipitation suppression. Although the previously mentioned study did not significantly explore the possibility of cloud seeding for precipitation suppression, this possibility does exist. If clouds were seeded (using chemical nuclei similar to those used today or perhaps a more effective agent discovered through continued research) before their downwind arrival to a desired location, the result could be a suppression of precipitation. In other words, precipitation could be “forced” to fall before its arrival in the desired territory, thereby making the desired territory “dry.” The strategic and operational benefits of doing this have previously been discussed.
In general, successful fog dissipation requires some type of heating or seeding process. Which technique works best depends on the type of fog encountered. In simplest terms, there are two basic types of fog: cold and warm. Cold fog occurs at temperatures below 32°F. The best-known dissipation technique for cold fog is to seed it from the air with agents that promote the growth of ice crystals.24
Warm fog occurs at temperatures above 32°F and accounts for 90 percent of the fog-related problems encountered by flight operations.25 The best-known dissipation technique is heating because a small temperature increase is usually sufficient to evaporate the fog. Since heating usually isn’t practical, the next most effective technique is hygroscopic seeding.26 Hygroscopic seeding uses agents that absorb water vapor. This technique is most effective when accomplished from the air but can also be accomplished from the ground.27 Optimal results require advance information on fog depth, liquid water content, and wind.28
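The decision rules above can be collapsed into a short sketch. The function name, threshold constant, and `heating_practical` flag are our own illustrative choices, not terms from the cited studies.

```python
# Minimal decision sketch for selecting a fog-dissipation technique,
# following the rules in the text: cold fog (below 32 F) -> ice-crystal
# seeding; warm fog -> heating where practical, else hygroscopic seeding.

FREEZING_F = 32.0

def dissipation_technique(temp_f, heating_practical=False):
    if temp_f < FREEZING_F:
        return "ice-crystal seeding"      # cold fog
    if heating_practical:
        return "heating"                  # warm fog, heat source available
    return "hygroscopic seeding"          # warm fog, the usual fallback

print(dissipation_technique(25.0))                       # cold fog case
print(dissipation_technique(40.0))                       # warm fog case
print(dissipation_technique(40.0, heating_practical=True))
```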
Decades of research show that fog dissipation is an effective application of weather-modification technology with demonstrated savings of huge proportions for both military and civil aviation.29 Local municipalities have also shown an interest in applying these techniques to improve the safety of high-speed highways transiting areas of frequently occurring dense fog.30
There are some emerging technologies which may have important applications for fog dispersal. As discussed earlier, heating is the most effective dispersal method for the most commonly occurring type of fog. Unfortunately, it has proved impractical for most situations and would be difficult at best for contingency operations. However, the development of directed radiant energy technologies, such as microwaves and lasers, could provide new possibilities.
Lab experiments have shown microwaves to be effective for the heat dissipation of fog. However, results also indicate that the energy levels required exceed the US large power density exposure limit of 100 watts/m² and would be very expensive.31 Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Generating 1 watt/cm², which is approximately the US large power density exposure limit, the system raised visibility to one quarter of a mile in 20 seconds.32 Laser systems described in the Space Operations portion of this AF 2025 study could certainly provide this capability as one of their many possible uses.
With regard to seeding techniques, improvements in the materials and delivery methods are not only plausible but likely. Smart materials based on nanotechnology are currently being developed with gigaops computer capability at their core. They could adjust their size to optimal dimensions for a given fog seeding situation and even make adjustments throughout the process. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. They will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and can also change their temperature and polarity to improve their seeding effects.33 As mentioned above, UAVs could be used to deliver and distribute these smart materials.
Recent army research lab experiments have demonstrated the feasibility of generating fog. They used commercial equipment to generate thick fog in an area 100 meters long. Further study has shown fogs to be effective at blocking much of the UV/IR/visible spectrum, effectively masking emitters of such radiation from IR weapons.34 This technology would enable a small military unit to avoid detection in the IR spectrum. Fog could be generated quickly to conceal the movement of tanks or infantry, or it could conceal military operations, facilities, or equipment. Such systems may also be useful in inhibiting observations of sensitive rear-area operations by electro-optical reconnaissance platforms.35
Modifying storms to support military objectives is the most aggressive and controversial type of weather-modification. The damage caused by storms is indeed horrendous. For instance, a tropical storm has an energy equal to 10,000 one-megaton hydrogen bombs,36 and in 1992 Hurricane Andrew totally destroyed Homestead AFB, Florida, caused the evacuation of most military aircraft in the southeastern US, and resulted in $15.5 billion of damage.37 However, as one would expect based on a storm’s energy level, current scientific literature indicates that there are definite physical limits on mankind’s ability to modify storm systems. By taking this into account along with political, environmental, economic, legal, and moral considerations, we will confine our analysis of storms to localized thunderstorms and thus do not consider major storm systems such as hurricanes or intense low-pressure systems.
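To put the cited energy figure in SI terms, the standard TNT equivalence of 4.184e15 J per megaton puts such a storm at roughly 4 x 10^19 J, which is why the study treats whole storm systems as beyond human modification:

```python
# Quick arithmetic check on the energy comparison in the text,
# using the standard TNT equivalence for a one-megaton device.
J_PER_MEGATON = 4.184e15   # joules of energy per megaton of TNT

tropical_storm_energy = 10_000 * J_PER_MEGATON  # per the figure cited
print(f"~{tropical_storm_energy:.2e} J")        # ~4.18e+19 J
```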
At any instant there are approximately 2,000 thunderstorms taking place. In fact, 45,000 thunderstorms, which contain heavy rain, hail, microbursts, wind shear, and lightning, form daily.38 Anyone who has flown frequently on commercial aircraft has probably noticed the lengths to which pilots will go to avoid thunderstorms. The danger of thunderstorms was clearly shown in August 1985 when a jumbo jet crashed killing 137 people after encountering microburst wind shears during a rain squall.39 These forces of nature impact all aircraft and even the most advanced fighters of 1996 make every attempt to avoid a thunderstorm.
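The two thunderstorm figures are roughly consistent with one another if we assume an average storm lifetime of about one hour (our assumption; single-cell storms typically last tens of minutes to an hour):

```python
# Consistency check on the two figures cited in the text: daily storm
# formation rate versus the number active at any instant.
storms_per_day = 45_000
assumed_lifetime_hours = 1.0  # assumed average storm lifetime

concurrent = storms_per_day * assumed_lifetime_hours / 24.0
print(round(concurrent))  # ~1875, close to the ~2,000 cited
```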
Will bad weather remain an aviation hazard in 2025? The answer, unfortunately, is “yes,” but projected advances in technology over the next 30 years will diminish the hazard potential. Computer-controlled flight systems will be able to “autopilot” aircraft through rapidly changing winds. Aircraft will also have highly accurate, onboard sensing systems that can instantaneously “map” and automatically guide the aircraft through the safest portion of a storm cell. Aircraft are envisioned to have hardened electronics that can withstand the effects of lightning strikes and may also have the capability to generate a surrounding electropotential field that will neutralize or repel lightning strikes.
Assuming that the US achieves some or all of the above outlined aircraft technical advances and maintains the technological “weather edge” over its potential adversaries, we can next look at how we could modify the battlespace weather to make the best use of our technical advantage.
Weather-modification technologies might involve techniques that would increase latent heat release in the atmosphere, provide additional water vapor for cloud cell development, and provide additional surface and lower atmospheric heating to increase atmospheric instability. Critical to the success of any attempt to trigger a storm cell are the preexisting atmospheric conditions, both local and regional. The atmosphere must already be conditionally unstable and the large-scale dynamics must be supportive of vertical cloud development. The focus of the weather-modification effort would be to provide additional “conditions” that would make the atmosphere unstable enough to generate cloud and eventually storm cell development. The path of storm cells once developed or enhanced is dependent not only on the mesoscale dynamics of the storm but the regional and synoptic (global) scale atmospheric wind flow patterns in the area, which are currently not subject to human control.
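The "conditionally unstable" requirement has a standard textbook formulation: the environmental lapse rate must lie between the moist-adiabatic and dry-adiabatic rates, so that saturated (cloudy) parcels keep rising while dry parcels do not. A minimal sketch, using a fixed representative moist-adiabatic rate (in reality it varies with temperature and pressure):

```python
# Textbook test for conditional instability: the environmental lapse
# rate lies between the moist-adiabatic and dry-adiabatic rates.
# Rates are in K/km; the fixed moist rate is a representative
# mid-level assumption, not a universal constant.

DRY_ADIABATIC = 9.8    # K/km
MOIST_ADIABATIC = 6.0  # K/km, representative assumed value

def conditionally_unstable(env_lapse_rate):
    """True when saturated parcels rise but unsaturated ones do not."""
    return MOIST_ADIABATIC < env_lapse_rate < DRY_ADIABATIC

print(conditionally_unstable(7.5))   # True: favorable for triggering
print(conditionally_unstable(5.0))   # False: stable even when saturated
```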
As indicated, the technical hurdles for storm development in support of military operations are obviously greater than enhancing precipitation or dispersing fog as described earlier. One area of storm research that would significantly benefit military operations is lightning modification. Most research efforts are being conducted to develop techniques to lessen the occurrence or hazards associated with lightning. This is important research for military operations and resource protection, but some offensive military benefit could be obtained by doing research on increasing the potential and intensity of lightning. Concepts to explore include increasing the basic efficiency of the thunderstorm, stimulating the triggering mechanism that initiates the bolt, and triggering lightning such as that which struck Apollo 12 in 1969.40 Possible mechanisms to investigate would be ways to modify the electropotential characteristics over certain targets to induce lightning strikes on the desired targets as the storm passes over their location.
In summary, the ability to modify battlespace weather through storm cell triggering or enhancement would allow us to exploit the technological “weather” advances of our 2025 aircraft; this area has tremendous potential and should be addressed by future research and concept development programs.
Exploitation of “NearSpace” for Space Control
This section discusses opportunities for control and modification of the ionosphere and near-space environment for force enhancement; specifically to enhance our own communications, sensing, and navigation capabilities and/or impair those of our enemy. A brief technical description of the ionosphere and its importance in current communications systems is provided in appendix A.
By 2025, it may be possible to modify the ionosphere and near space, creating a variety of potential applications, as discussed below. However, before ionospheric modification becomes possible, a number of evolutionary advances in space weather forecasting and observation are needed. Many of these needs were described in a Spacecast 2020 study, Space Weather Support for Communications.41 Some of the suggestions from this study are included in appendix B; it is important to note that our ability to exploit near space via active modification is dependent on successfully achieving reliable observation and prediction capabilities.
Opportunities Afforded by Space Weather-modification
Modification of the near-space environment is crucial to battlespace dominance. General Charles Horner, former commander in chief, United States Space Command, described his worst nightmare as “seeing an entire Marine battalion wiped out on some foreign landing zone because he was unable to deny the enemy intelligence and imagery generated from space.”42 Active modification could provide a “technological fix” to jam the enemy’s active and passive surveillance and reconnaissance systems. In short, an operational capability to modify the near-space environment would ensure space superiority in 2025; this capability would allow us to shape and control the battlespace via enhanced communication, sensing, navigation, and precision engagement systems.
While we recognize that technological advances may negate the importance of certain electromagnetic frequencies for US aerospace forces in 2025 (such as radio frequency (RF), high-frequency (HF) and very high-frequency (VHF) bands), the capabilities described below are nevertheless relevant. Our nonpeer adversaries will most likely still depend on such frequencies for communications, sensing, and navigation and would thus be extremely vulnerable to disruption via space weather-modification.
Communications Dominance via Ionospheric Modification
Modification of the ionosphere to enhance or disrupt communications has recently become the subject of active research. According to Lewis M. Duncan and Robert L. Showen, the Former Soviet Union (FSU) conducted theoretical and experimental research in this area at a level considerably greater than comparable programs in the West.43 There is a strong motivation for this research, because induced ionospheric modifications may influence, or even disrupt, the operation of radio systems relying on propagation through the modified region. The controlled generation or accelerated dissipation of ionospheric disturbances may be used to produce new propagation paths, otherwise unavailable, appropriate for selected RF missions.44
A number of methods have been explored or proposed to modify the ionosphere, including injection of chemical vapors and heating or charging via electromagnetic radiation or particle beams (such as ions, neutral particles, x-rays, MeV particles, and energetic electrons).45 It is important to note that many techniques to modify the upper atmosphere have been successfully demonstrated experimentally. Ground-based modification techniques employed by the FSU include vertical HF heating, oblique HF heating, microwave heating, and magnetospheric modification.46 Significant military applications of such operations include low frequency (LF) communication production, HF ducted communications, and creation of an artificial ionosphere (discussed in detail below). Moreover, developing countries also recognize the benefit of ionospheric modification: “in the early 1980s, Brazil conducted an experiment to modify the ionosphere by chemical injection.”47
Several high-payoff capabilities that could result from the modification of the ionosphere or near space are described briefly below. It should be emphasized that this list is not comprehensive; modification of the ionosphere is an area rich with potential applications and there are also likely spin-off applications that have yet to be envisioned.
Ionospheric mirrors for pinpoint communication or over-the-horizon (OTH) radar transmission. The properties and limitations of the ionosphere as a reflecting medium for high-frequency radiation are described in appendix A. The major disadvantage in depending on the ionosphere to reflect radio waves is its variability, which is due to normal space weather and events such as solar flares and geomagnetic storms. The ionosphere has been described as a crinkled sheet of wax paper whose relative position rises and sinks depending on weather conditions. The surface topography of the crinkled paper also constantly changes, leading to variability in its reflective, refractive, and transmissive properties.
Creation of an artificial uniform ionosphere was first proposed by Soviet researcher A. V. Gurevich in the mid-1970s. An artificial ionospheric mirror (AIM) would serve as a precise mirror for electromagnetic radiation of a selected frequency or a range of frequencies. It would thereby be useful for both pinpoint control of friendly communications and interception of enemy transmissions.
This concept has been described in detail by Paul A. Kossey et al. in a paper entitled “Artificial Ionospheric Mirrors (AIM).”48 The authors describe how one could precisely control the location and height of the region of artificially produced ionization using crossed microwave (MW) beams, which produce atmospheric breakdown (ionization) of neutral species. The implications of such control are enormous: one would no longer be subject to the vagaries of the natural ionosphere but would instead have direct control of the propagation environment. Ideally, the AIM could be rapidly created and then would be maintained only for a brief operational period. A schematic depicting the crossed-beam approach for generation of an AIM is shown in figure 4-1.49
An AIM could theoretically reflect radio waves with frequencies up to 2 GHz, which is nearly two orders of magnitude higher than those waves reflected by the natural ionosphere. The MW radiator power requirements for such a system are roughly an order of magnitude greater than 1992 state-of-the-art systems; however, by 2025 such a power capability is expected to be easily achievable.
Figure 4-1. Crossed-Beam Approach for Generating an Artificial Ionospheric Mirror
Source: Microsoft Clipart Gallery © 1995, courtesy of Microsoft.
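The 2 GHz claim can be bounded with the standard plasma-frequency relation f_p ~ 8.98 * sqrt(n_e) Hz (n_e in electrons per cubic meter): a layer reflects vertically incident waves up to its plasma frequency. The sketch below inverts this to estimate the electron density an AIM would need, compared against an assumed representative natural F-layer critical frequency of 10 MHz:

```python
# Estimate the electron density needed for a plasma layer to reflect
# a given frequency, via the standard relation f_p ~ 8.98*sqrt(n_e) Hz.
# The 10 MHz "natural" critical frequency is an assumed typical value.

def required_density(freq_hz):
    """Electron density (m^-3) whose plasma frequency equals freq_hz."""
    return (freq_hz / 8.98) ** 2

natural = required_density(10e6)  # ~1.2e12 m^-3, typical F2-peak order
aim = required_density(2e9)       # ~5.0e16 m^-3 for a 2 GHz mirror
print(f"natural: {natural:.1e}  AIM: {aim:.1e}  ratio: {aim/natural:.0f}")
```

The required density scales with the square of frequency, which makes plain why an AIM demands radiator powers well beyond 1992 state-of-the-art systems.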
Besides providing pinpoint communication control and potential interception capability, this technology would also provide communication capability at specified frequencies, as desired. Figure 4-2 shows how a ground-based radiator might generate a series of AIMs, each of which would be tailored to reflect a selected transmission frequency. Such an arrangement would greatly expand the available bandwidth for communications and also eliminate the problem of interference and crosstalk (by allowing one to use the requisite power level).
Figure 4-2. Artificial Ionospheric Mirrors Point-to-Point Communications
Source: Microsoft Clipart Gallery © 1995, courtesy of Microsoft.
Kossey et al. also describe how AIMs could be used to improve the capability of OTH radar:
AIM based radar could be operated at a frequency chosen to optimize target detection, rather than be limited by prevailing ionospheric conditions. This, combined with the possibility of controlling the radar’s wave polarization to mitigate clutter effects, could result in reliable detection of cruise missiles and other low observable targets.50
A schematic depicting this concept is shown in figure 4-3. Potential advantages over conventional OTH radars include frequency control, mitigation of auroral effects, short-range operation, and detection of a smaller cross-section target.
Figure 4-3. Artificial Ionospheric Mirror Over-the-Horizon Surveillance Concept.
Source: Microsoft Clipart Gallery © 1995, courtesy of Microsoft.
Disruption of communications and radar via ionospheric control. A variation of the capability proposed above is ionospheric modification to disrupt an enemy’s communication or radar transmissions. Because HF communications are controlled directly by the ionosphere’s properties, an artificially created ionization region could conceivably disrupt an enemy’s electromagnetic transmissions. Even in the absence of an artificial ionization patch, high-frequency modification produces large-scale ionospheric variations which alter HF propagation characteristics. The payoff of research aimed at understanding how to control these variations could be high as both HF communication enhancement and degradation are possible. Offensive interference of this kind would likely be indistinguishable from naturally occurring space weather. This capability could also be employed to precisely locate the source of enemy electromagnetic transmissions.
VHF, UHF, and super-high frequency (SHF) satellite communications could be disrupted by creating artificial ionospheric scintillation. This phenomenon causes fluctuations in the phase and amplitude of radio waves over a very wide band (30 MHz to 30 GHz). HF modification produces electron density irregularities that cause scintillation over a wide range of frequencies. The size of the irregularities determines which frequency band will be affected. Understanding how to control the spectrum of the artificial irregularities generated in the HF modification process should be a primary goal of research in this area. Additionally, it may be possible to suppress the growth of natural irregularities, resulting in reduced levels of natural scintillation. Creating artificial scintillation would allow us to disrupt satellite transmissions over selected regions. Like the HF disruption described above, such actions would likely be indistinguishable from naturally occurring environmental events. Figure 4-4 shows how artificially ionized regions might be used to disrupt HF communications via attenuation, scatter, or absorption (fig. 4-4a) or degrade satellite communications via scintillation or energy loss (fig. 4-4b) (from Ref. 25).
Figure 4-4 (a) and (b). Scenarios for Telecommunications Degradation
Source: Microsoft Clipart Gallery © 1995, courtesy of Microsoft.
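The link between irregularity size and affected frequency band has a standard first-order expression: scintillation is strongest for irregularities near the Fresnel scale, d_F = sqrt(lambda * z), where z is the distance to the irregularity layer. A minimal sketch, with an assumed layer distance of 350 km:

```python
# Fresnel-scale sketch: the irregularity size that most strongly
# scintillates a signal of a given frequency. The 350 km distance to
# the irregularity layer is an assumed representative value.
import math

C = 3.0e8        # m/s, speed of light
LAYER_Z = 350e3  # m, assumed slant distance to the irregularity layer

def fresnel_scale(freq_hz):
    """Irregularity scale (m) producing the strongest scintillation."""
    wavelength = C / freq_hz
    return math.sqrt(wavelength * LAYER_Z)

for f in (30e6, 1e9, 30e9):
    print(f"{f/1e6:>7.0f} MHz -> irregularity scale ~{fresnel_scale(f):.0f} m")
```

The kilometer-scale irregularities relevant at HF shrink to tens of meters at SHF, which is why controlling the spectrum of artificial irregularities determines which band is affected.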
Exploding/disabling space assets traversing near-space. The ionosphere could potentially be artificially charged or injected with radiation at a certain point so that it becomes inhospitable to satellites or other space structures. The result could range from temporarily disabling the target to its complete destruction via an induced explosion. Of course, effectively employing such a capability depends on the ability to apply it selectively to chosen regions in space.
Charging space assets by near-space energy transfer. In contrast to the injurious capability described above, regions of the ionosphere could potentially be modified or used as-is to revitalize space assets, for instance by charging their power systems. The natural charge of the ionosphere may serve to provide most or all of the energy input to the satellite. There have been a number of papers in the last decade on electrical charging of space vehicles; however, according to one author, “in spite of the significant effort made in the field both theoretically and experimentally, the vehicle charging problem is far from being completely understood.”51 While the technical challenge is considerable, the potential to harness electrostatic energy to fuel the satellite’s power cells would have a high payoff, enabling service life extension of space assets at a relatively low cost. Additionally, exploiting the capability of powerful HF radio waves to accelerate electrons to relatively high energies may also facilitate the degradation of enemy space assets through directed bombardment with the HF-induced electron beams. As with artificial HF communication disruptions and induced scintillation, the degradation of enemy spacecraft with such techniques would be effectively indistinguishable from natural environment effects. The investigation and optimization of HF acceleration mechanisms for both friendly and hostile purposes is an important area for future research efforts.
While most weather-modification efforts rely on the existence of certain preexisting conditions, it may be possible to produce some weather effects artificially, regardless of preexisting conditions. For instance, virtual weather could be created by influencing the weather information received by an end user. The end user’s perception of parameter values or images from global or local meteorological information systems would differ from reality. This difference in perception would lead the end user to make degraded operational decisions.
Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could provide tremendous capability. Interconnected, atmospherically buoyant, and having navigation capability in three dimensions, such clouds could be designed to have a wide range of properties. They might exclusively block optical sensors or could adjust to become impermeable to other surveillance methods. They could also provide an atmospheric electrical potential difference, which otherwise might not exist, to achieve precisely aimed and timed lightning strikes. Even if power levels achieved were insufficient to be an effective strike weapon, the potential for psychological operations in many situations could be fantastic.
One major advantage of using simulated weather to achieve a desired effect is that unlike other approaches, it makes what are otherwise the results of deliberate actions appear to be the consequences of natural weather phenomena. It is also potentially quite inexpensive. According to J. Storrs Hall, a scientist at Rutgers University conducting research on nanotechnology, production costs of these nanoparticles could be about the same price per pound as potatoes.52 This of course discounts research and development costs, which will be primarily borne by the private sector and be considered a sunk cost by 2025 and probably earlier.
Concept of Operations Summary
Weather affects everything we do, and weather-modification can enhance our ability to dominate the aerospace environment. It gives the commander tools to shape the battlespace. It gives the logistician tools to optimize the process. It gives the warriors in the cockpit an operating environment literally crafted to their needs. Some of the potential capabilities a weather-modification system could provide to a war-fighting CINC are summarized in table 1 of the executive summary.
How Do We Get There From Here?
To fully appreciate the development of the specific operational capabilities weather-modification could deliver to the war fighter, we must examine and understand their relationship to associated core competencies and the development of their requisite technologies. Figure 5-1 combines the specific operational capabilities of Table 1 into six core capabilities and depicts their relative importance over time. For example, fog and cloud modification are currently important and will remain so for some time to come to conceal our assets from surveillance or improve landing visibility at airfields. However, as surveillance assets become less optically dependent and aircraft achieve a truly global all-weather landing capability, fog and cloud modification applications become less important.
In contrast, artificial weather technologies do not currently exist. But as they are developed, the importance of their potential applications rises rapidly. For example, the anticipated proliferation of surveillance technologies in the future will make the ability to deny surveillance increasingly valuable. In such an environment, clouds made of smart particles such as described in chapter 4 could provide a premium capability.
Figure 5-1. A Core Competency Road Map to Weather Modification in 2025.
Legend for Figure 5-1
PM Precipitation Modification
(F&C)M Fog and Cloud Modification
SM Storm Modification
CW Counter Weather
SWM Space Weather-modification
AW Artificial Weather
Even today’s most technologically advanced militaries would usually prefer to fight in clear weather and blue skies. But as war-fighting technologies proliferate, the side with the technological advantage will prefer to fight in weather that gives it an edge. The US Army has already alluded to this approach in its concept of “owning the weather.”53 Accordingly, storm modification will become more valuable over time. The importance of precipitation modification is also likely to increase as usable water sources become more scarce in volatile parts of the world.
As more countries pursue, develop, and exploit increasing types and degrees of weather-modification technologies, we must be able to detect their efforts and counter their activities when necessary. As depicted, the technologies and capabilities associated with such a counter weather role will become increasingly important.
The importance of space weather-modification will grow with time. Its rise will be more rapid at first as the technologies it can best support or negate proliferate at their fastest rates. Later, as those technologies mature or become obsolete, the importance of space weather-modification will continue to rise but not as rapidly.
To achieve the core capabilities depicted in figure 5-1, the necessary technologies and systems might be developed according to the process depicted in figure 5-2. This figure illustrates the systems development timing and sequence necessary to realize a weather-modification capability for the battlespace by 2025. The horizontal axis represents time. The vertical axis indicates the degree to which a given technology will be applied toward weather-modification. As the primary users, the military will be the main developer for the technologies designated with an asterisk. The civil sector will be the main source for the remaining technologies.
Figure 5-2. A Systems Development Road Map to Weather Modification in 2025.
Legend for Figure 5-2
ADV Aerospace Delivery Vehicles DE Directed Energy
AIM Artificial Ionospheric Mirrors GWN Global Weather Network
CHEM Chemicals SC Smart Clouds (nanotech)
CBD Carbon Black Dust SENSORS Sensors
COMM Communications VR WX Virtual Weather
COMP Computer Modeling
*Technologies to be developed by DOD
WFSE Weather Force Support Element
The world’s finite resources and continued needs will drive the desire to protect people and property and more efficiently use our crop lands, forests, and range lands. The ability to modify the weather may be desirable both for economic and defense reasons. The global weather system has been described as a series of spheres or bubbles. Pushing down on one causes another to pop up.54 We need to know when another power “pushes” on a sphere in their region, and how that will affect either our own territory or areas of economic and political interest to the US.
Efforts are already under way to create more comprehensive weather models primarily to improve forecasts, but researchers are also trying to influence the results of these models by adding small amounts of energy at just the right time and space. These programs are extremely limited at the moment and are not yet validated, but there is great potential to improve them in the next 30 years.55
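Those efforts rest on a well-known property of chaotic systems: sensitive dependence on initial conditions. A minimal sketch, using the textbook Lorenz-63 convection model rather than any model the study cites, shows how a tiny, well-timed input is amplified until two trajectories bear no resemblance to each other:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a classic
    low-order model of atmospheric convection."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

def run(state, n_steps):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # the "small amount of energy"

# After 30 model time units the perturbation has grown from one part
# in a million to roughly the full diameter of the attractor.
print(run(a, 3000))
print(run(b, 3000))
```

This is also why the study’s own caveat (note 14) holds: the timing and magnitude of a required input may someday be predictable, but the precise outcome of that input never will be.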
The lessons of history indicate a real weather-modification capability will eventually exist despite the risk. The drive exists. People have always wanted to control the weather and their desire will compel them to collectively and continuously pursue their goal. The motivation exists. The potential benefits and power are extremely lucrative and alluring for those who have the resources to develop it. This combination of drive, motivation, and resources will eventually produce the technology. History also teaches that we cannot afford to be without a weather-modification capability once the technology is developed and used by others. Even if we have no intention of using it, others will. To call upon the atomic weapon analogy again, we need to be able to deter or counter their capability with our own. Therefore, the weather and intelligence communities must keep abreast of the actions of others.
As the preceding chapters have shown, weather-modification is a force multiplier with tremendous power that could be exploited across the full spectrum of war-fighting environments. From enhancing friendly operations or disrupting those of the enemy via small-scale tailoring of natural weather patterns to complete dominance of global communications and counter-space control, weather-modification offers the war fighter a wide range of possible options to defeat or coerce an adversary. But while offensive weather-modification efforts would certainly be undertaken by US forces with great caution and trepidation, it is clear that we cannot afford to allow an adversary to obtain an exclusive weather-modification capability.
Why Is the Ionosphere Important?
The ionosphere is the part of the earth’s atmosphere beginning at an altitude of about 30 miles and extending outward 1,200 miles or more. This region consists of layers of free electrically charged particles that transmit, refract, and reflect radio waves, allowing those waves to be transmitted great distances around the earth. The interaction of the ionosphere with impinging electromagnetic radiation depends on the properties of the ionospheric layer, the geometry of transmission, and the frequency of the radiation. For any given signal path through the atmosphere, a range of workable frequency bands exists. This range, between the maximum usable frequency (MUF) and the lowest usable frequency (LUF), is where radio waves are reflected and refracted by the ionosphere much as a partial mirror may reflect or refract visible light.56 The reflective and refractive properties of the ionosphere provide a means to transmit radio signals beyond direct “line-of-sight” transmission between a transmitter and receiver. Ionospheric reflection and refraction have therefore been used almost exclusively for long-range HF (from 3 to 30 MHz) communications. Radio waves with frequencies ranging from above 30 MHz to 300 GHz are usually used for communications requiring line-of-sight transmissions, such as satellite communications. At these higher frequencies, radio waves propagate through the ionosphere with only a small fraction of the wave scattering back in a pattern analogous to a sky wave. Communicators receive significant benefit from using these frequencies since they provide considerably greater bandwidths and thus have greater data-carrying capacity; they are also less prone to natural interference (noise).
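The MUF for a given path can be estimated from the layer’s critical frequency with the classic secant law, MUF = foF2 × sec θ, where θ is the angle of incidence at the reflecting layer. The sketch below is a flat-earth simplification for illustration only; the function name and sample numbers are ours, not the study’s:

```python
import math

def approx_muf(critical_freq_mhz, elevation_deg):
    """Secant-law estimate of the maximum usable frequency:
    MUF = foF2 * sec(theta), with theta the angle of incidence
    measured from the vertical (flat-earth approximation)."""
    theta = math.radians(90.0 - elevation_deg)  # incidence angle
    return critical_freq_mhz / math.cos(theta)

# A layer with a 7 MHz critical frequency supports roughly 14 MHz
# on a path arriving at 30 degrees elevation (60 degrees incidence).
print(round(approx_muf(7.0, 30.0), 1))  # -> 14.0

# At vertical incidence the MUF collapses to the critical frequency.
print(round(approx_muf(7.0, 90.0), 1))  # -> 7.0
```

Lower elevation angles (longer hops) raise the MUF, which is why long-range HF circuits can use the upper end of the 3-30 MHz band.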
Although the ionosphere acts as a natural “mirror” for HF radio waves, it is in a constant state of flux, and thus, its “mirror property” can be limited at times. Like terrestrial weather, ionospheric properties change from year to year, from day to day, and even from hour to hour. This ionospheric variability, called space weather, can cause unreliability in ground- and space-based communications that depend on ionospheric reflection or transmission. Space weather variability affects how the ionosphere attenuates, absorbs, reflects, refracts, and changes the propagation, phase, and amplitude characteristics of radio waves. These weather dependent changes may arise from certain space weather conditions such as: (1) variability of solar radiation entering the upper atmosphere; (2) the solar plasma entering the earth’s magnetic field; (3) the gravitational atmospheric tides produced by the sun and moon; and (4) the vertical swelling of the atmosphere due to daytime heating of the sun.57 Space weather is also significantly affected by solar flare activity, the tilt of the earth’s geomagnetic field, and abrupt ionospheric changes resulting from events such as geomagnetic storms.
In summary, the ionosphere’s inherent reflectivity is a natural gift that humans have used to create long-range communications connecting distant points on the globe. However, natural variability in the ionosphere reduces the reliability of our communication systems that depend on ionospheric reflection and refraction (primarily HF). For the most part, higher frequency communications such as UHF, SHF, and EHF bands are transmitted through the ionosphere without distortion. However, these bands are also subject to degradation caused by ionospheric scintillation, a phenomenon induced by abrupt variations in electron density along the signal path, resulting in signal fade caused by rapid signal path variations and defocusing of the signal’s amplitude and/or phase.
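Scintillation severity is conventionally summarized by the amplitude scintillation index S4, the normalized standard deviation of received signal intensity. The definition is standard in the space-weather literature, though the formula and the synthetic sample data below are ours, not the study’s:

```python
import numpy as np

def s4_index(intensity):
    """S4 = sqrt((<I^2> - <I>^2) / <I>^2): near 0 for a steady
    channel, approaching (or exceeding) 1 under severe fading."""
    i = np.asarray(intensity, dtype=float)
    mean = i.mean()
    return float(np.sqrt((np.mean(i**2) - mean**2) / mean**2))

steady = np.full(1000, 5.0)                       # quiet ionosphere
rng = np.random.default_rng(0)
fading = 5.0 + rng.normal(0.0, 2.0, size=1000)    # strong fluctuations

print(s4_index(steady))   # -> 0.0
print(s4_index(fading))   # roughly 0.4
```

An operational user watching this index could, as the study later suggests, distinguish a naturally disturbed channel (high S4) from jamming or hardware failure.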
Understanding and predicting ionospheric variability and its influence on the transmission and reflection of electromagnetic radiation has been a much studied field of scientific inquiry. Improving our ability to observe, model, and forecast space weather will substantially improve our communication systems, both ground and space-based. Considerable work is being conducted, both within the DOD and the commercial sector, on improving observation, modeling, and forecasting of space weather. While considerable technical challenges remain, we assume for the purposes of this study that dramatic improvements will occur in these areas over the next several decades.
Research to Better Understand and Predict Ionospheric Effects
According to a SPACECAST 2020 study titled, “Space Weather Support for Communications,” the major factors limiting our ability to observe and accurately forecast space weather are (1) current ionospheric sensing capability; (2) density and frequency of ionospheric observations; (3) sophistication and accuracy of ionospheric models; and (4) current scientific understanding of the physics of ionosphere-thermosphere-magnetosphere coupling mechanisms.58 The report recommends that improvements be realized in our ability to measure the ionosphere vertically and spatially; to this end an architecture for ionospheric mapping was proposed. Such a system would consist of ionospheric sounders and other sensing devices installed on DoD and commercial satellite constellations (taking advantage in particular of the proposed IRIDIUM system and replenishment of the GPS) and an expanded ground-based network of ionospheric vertical sounders in the US and other nations. Understanding and predicting ionospheric scintillation would also require launching of an equatorial remote sensing satellite in addition to the currently planned or deployed DOD and commercial constellations.
The payoff of such a system is an improvement in ionospheric forecasting accuracy from the current range of 40-60 percent to an anticipated 80-100 percent accuracy. Daily worldwide ionospheric mapping would provide the data required to accurately forecast diurnal, worldwide terrestrial propagation characteristics of electromagnetic energy from 3-300 MHz. This improved forecasting would assist satellite operators and users, resulting in enhanced operational efficiency of space systems. It would also provide an order of magnitude improvement in locating the sources of tactical radio communications, allowing for location and tracking of enemy and friendly platforms.59 Improved capability to forecast ionospheric scintillation would provide a means to improve communications reliability by the use of alternate ray paths or relay to undisturbed regions. It would also enable operational users to ascertain whether outages were due to naturally occurring ionospheric variability as opposed to enemy action or hardware problems.
These advances in ionospheric observation, modeling, and prediction would enhance the reliability and robustness of our military communications network. In addition to their significant benefits for our existing communications network, such advances are also requisite to further exploitation of the ionosphere via active modification.
Acronyms and Definitions
AOC air operations center
AOR area of responsibility
ATO air tasking order
EHF extremely high frequency
GWN global weather network
HF high frequency
LF low frequency
LUF lowest usable frequency
Mesoscale less than 200 km2
Microscale immediate local area
MUF maximum usable frequency
PGM precision-guided munitions
RF radio frequency
SAR synthetic aperture radar
SARSAT search and rescue satellite-aided tracking
SHF super high frequency
SPOT satellite positioning and tracking
UAV uninhabited aerospace vehicle
VHF very high frequency
WFS weather force specialist
WFSE weather force support element
Appleman, Herbert S. An Introduction to Weather-modification. Scott AFB, Ill.: Air Weather Service (MAC), September 1969.
AU-18, Space Handbook, An Analyst’s Guide Vol. II. Maxwell AFB, Ala.: Air University Press, December 1993.
AWS PLAN 813, Appendix I, Annex Alfa. Scott AFB, Ill.: Air Weather Service (MAC), 14 January 1972.
Banks, Peter M. “Overview of Ionospheric Modification from Space Platforms.” In Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems, AGARD Conference Proceedings 485, October 1990.
Battan, Louis J. Harvesting the Clouds. Garden City, N.Y.: Doubleday & Co., 1969.
Bown, William. “Mathematicians Learn How to Tame Chaos.” New Scientist, 30 May 1992.
Byers, Horace R. “History of Weather-modification.” In Wilmot N. Hess, ed., Weather and Climate Modification. New York: John Wiley & Sons, 1974.
Centner, Christopher, et al., “Environmental Warfare: Implications for Policymakers and War Planners.” Maxwell AFB, Ala.: Air Command and Staff College, May 1995.
Coons, Capt Frank G. “Warm Fog Dispersal-A Different Story.” Aerospace Safety 25, no. 10 (October 1969).
CJCSI 3810.01, Meteorological and Oceanographic Operations, 10 January 1995.
Dawson, George. “An Introduction to Atmospheric Energy.” In Wilmot N. Hess, ed., Weather and Climate Modification. New York: John Wiley & Sons, 1974.
Duncan, Lewis M., and Robert L. Showen. “Review of Soviet Ionospheric Modification Research.” In Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems, AGARD Conference Proceedings 485, October 1990.
Dwyer, Maj Roy. Category III or Fog Dispersal, M-U 35582-7 D993a. Maxwell AFB, Ala.: Air University Press, May 1972.
Eisenhower, Dwight D. “Crusade in Europe.” Quoted in John F. Fuller, ed., Thor’s Legions. Boston: American Meteorological Society, 1990.
Facts on File 55, No. 2866 (2 November 1995).
Frisby, E. M. “Weather-modification in Southeast Asia, 1966-1972.” The Journal Of Weather-modification 14, no. 1 (April 1982).
Gray, William M., et al. “Weather-modification by Carbon Dust Absorption of Solar Energy.” Journal of Applied Meteorology 15 (April 1976).
Halacy, Daniel S. The Weather Changers. New York: Harper & Row, 1968.
Hall, J. Storrs. “Overview of Nanotechnology.” Adapted from papers by Ralph C. Merkle and K. Eric Drexler. Internet address: http://nanotech.rutgers.edu/nanotech/-intro.html (Rutgers University, November 1995).
Horner, Gen Charles. “Space Seen as Challenge, Military’s Final Frontier” (Prepared Statement to the Senate Armed Services Committee) Defense Issues, 22 April 1993.
Hume, Capt Edward E., Jr. Atmospheric and Space Environmental Research Programs in Brazil (U). Foreign Aerospace Science and Technology Center, AF Intelligence Command, March 1993. (Secret) Information extracted is unclassified.
James, G. E. “Chaos Theory: The Essentials for Military Applications.” ACSC Theater Air Campaign Studies Coursebook, AY96, Vol. 8. Maxwell AFB, Ala.: Air University Press, 1995.
Jiusto, James E. “Some Principles of Fog Modification with Hygroscopic Nuclei.” Progress of NASA Research on Warm Fog Properties and Modification Concepts, NASA SP-212. Washington, D.C.: Scientific and Technical Information Division of the Office of Technology Utilization of the National Aeronautics and Space Administration, 1969.
Johnson, Capt Mike. Upper Atmospheric Research and Modification-Former Soviet Union (U). Supporting document DST-18205-475-92, Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992. (Secret) Information extracted is unclassified.
Kasemir, Heinz W. “Lightning Suppression by Chaff Seeding and Triggered Lightning.” In Wilmot N. Hess, ed., Weather and Climate Modification. New York: John Wiley & Sons, 1974.
Keaney, Thomas A., and Eliot A. Cohen, Gulf War Air Power Survey Summary Report. Washington D.C.: GPO, 1993.
Klein, Milton M. A Feasibility Study of the Use of Radiant Energy for Fog Dispersal Abstract. Hanscom AFB, Mass.: Air Force Material Command, October 1978.
Kocmond, Warren C. “Dissipation of Natural Fog in the Atmosphere,” Progress of NASA Research on Warm Fog Properties and Modification Concepts, NASA SP-212. Washington, D.C.: Scientific and Technical Information Division of the Office of Technology Utilization of the National Aeronautics and Space Administration, 1969.
Kossey, Paul A., et al. “Artificial Ionospheric Mirrors (AIM) A. Concept and Issues,” In Ionospheric Modification and its Potential to Enhance or Degrade the Performance of Military Systems, AGARD Conference Proceedings 485, October 1990.
Maehlum, B. N., and J. Troim. “Vehicle Charging in Low Density Plasmas.” In Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems, AGARD Conference Proceedings 485, October 1990.
McLare, James. Pulp & Paper 68, no. 8, August 1994.
Meyer, William B. “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986).
Petersen, Rear Adm Sigmund. “NOAA Moves Toward The 21st Century.” The Military Engineer 20, no. 571 (June-July 1995).
Riley, Lt Col Gerald F. Staff Weather Officer to CENTCOM OIC of CENTAF Weather Support Force and Commander of 3d Weather Squadron. In “Desert Shield/Desert Storm Interview Series,” interviewed by Dr William E. Narwyn, AWS Historian, 29 May 1991.
Seagraves, Mary Ann, and Richard Szymber. “Weather a Force Multiplier.” Military Review, November/December 1995.
SPACECAST 2020. Space Weather Support for Communications. White Paper G. Maxwell AFB, Ala.: Air War College/2020, 1994.
Stuart, Gene S. “Whirlwinds and Thunderbolts,” In Nature on the Rampage. Washington D.C.: National Geographic Society, 1986.
Sullivan, Gen Gordon R. “Moving into the 21st Century: America’s Army and Modernization.” Military Review, July 1993. Quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995.
Sutherland, Robert A. “Results of Man-Made Fog Experiment,” In Proceedings of the 1991 Battlefield Atmospherics Conference. Fort Bliss, Tex.: Hinman Hall, 3-6 December 1991.
Tascione, Thomas F. Introduction to the Space Environment. Colorado Springs: USAF Academy Department of Physics, 1984.
Tomlinson, Edward M., Kenneth C. Young, and Duane D. Smith. Laser Technology Applications for Dissipation of Warm Fog at Airfields, PL-TR-92-2087. Hanscom AFB, Mass.: Air Force Materiel Command, 1992.
USAF Scientific Advisory Board. New World Vistas: Air and Space Power for the 21st Century, Summary Volume. Washington, D.C.: USAF Scientific Advisory Board, 15 December 1995.
US Department of State. The Department of State Bulletin 76, no. 1981 (13 June 1977).
1. The weather-modification capabilities described in this paper are consistent with the operating environments and missions relevant for aerospace forces in 2025 as defined by AF/LR, a long-range planning office reporting to the CSAF [based on AF/LR PowerPoint briefing “Air and Space Power Framework for Strategy Development” (jda-2lr.ppt)].
2. General Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993) quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75.
3. Gen Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993) quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75.
4. Horace R. Byers, “History of Weather-modification,” in Wilmot N. Hess, ed. Weather and Climate Modification, (New York: John Wiley & Sons, 1974), 4.
5. William B. Meyer, “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986), 48.
6. Byers, 13.
7. US Department of State, The Department of State Bulletin 74, no. 1981 (13 June 1977): 10.
8. Dwight D. Eisenhower, “Crusade in Europe,” quoted in John F. Fuller, Thor’s Legions (Boston: American Meteorological Society, 1990), 67.
9. Interview of Lt Col Gerald F. Riley, Staff Weather Officer to CENTCOM OIC of CENTAF Weather Support Force and Commander of 3rd Weather Squadron, in “Desert Shield/Desert Storm Interview Series,” by Dr William E. Narwyn, AWS Historian, 29 May 1991.
10. Thomas A. Keaney and Eliot A. Cohen. Gulf War Air Power Survey Summary Report (Washington D.C.: Government Printing Office, 1993), 172.
11. Herbert S. Appleman, An Introduction to Weather-modification (Scott AFB, Ill.: Air Weather Service/MAC, September 1969), 1.
12. William Bown, “Mathematicians Learn How to Tame Chaos,” New Scientist, 30 May 1992, 16.
13. CJCSI 3810.01, Meteorological and Oceanographic Operations, 10 January 1995. This CJCS Instruction establishes policy and assigns responsibilities for conducting meteorological and oceanographic operations. It also defines the terms widespread, long-lasting, and severe, in order to identify those activities that US forces are prohibited from conducting under the terms of the UN Environmental Modification Convention. Widespread is defined as encompassing an area on the scale of several hundred km; long-lasting means lasting for a period of months, or approximately a season; and severe involves serious or significant disruption or harm to human life, natural and economic resources, or other assets.
14. Concern about the unintended consequences of attempting to “control” the weather is well justified. Weather is a classic example of a chaotic system (i.e., a system that never exactly repeats itself). A chaotic system is also extremely sensitive: minuscule differences in conditions greatly affect outcomes. According to Dr. Glenn James, a widely published chaos expert, technical advances may provide a means to predict when weather transitions will occur and the magnitude of the inputs required to cause those transitions; however, it will never be possible to precisely predict changes that occur as a result of our inputs. The chaotic nature of weather also limits our ability to make accurate long-range forecasts. The renowned physicist Edward Teller recently presented calculations he performed to determine the long-range weather forecasting improvement that would result from a satellite constellation providing continuous atmospheric measurements over a 1 km2 grid worldwide. Such a system, which is currently cost-prohibitive, would only improve long-range forecasts from the current five days to approximately 14 days. Clearly, there are definite physical limits to mankind’s ability to control nature, but the extent of those physical limits remains an open question. Sources: G. E. James, “Chaos Theory: The Essentials for Military Applications,” in ACSC Theater Air Campaign Studies Coursebook, AY96, 8 (Maxwell AFB, Ala: Air University Press, 1995), 1-64. The Teller calculations are cited in Reference 49 of this source.
15. SPACECAST 2020, Space Weather Support for Communications, white paper G (Maxwell AFB, Ala.: Air War College/2020, 1994).
16. Rear Adm Sigmund Petersen, “NOAA Moves Toward The 21st Century,” The Military Engineer 20, no. 571 (June-July 1995): 44.
18. William Bown, “Mathematicians Learn How to Tame Chaos,” New Scientist (30 May 1992): 16.
19. A pilot program known as Project Popeye conducted in 1966 attempted to extend the monsoon season in order to increase the amount of mud on the Ho Chi Minh trail, thereby reducing enemy movements. A silver iodide nuclei agent was dispersed from WC-130, F-4, and A-1E aircraft into the clouds over portions of the trail winding from North Vietnam through Laos and Cambodia into South Vietnam. Positive results during this initial program led to continued operations from 1967 to 1972. While the effects of this program remain disputed, some scientists believe it resulted in a significant reduction in the enemy’s ability to bring supplies into South Vietnam along the trail. E. M. Frisby, “Weather-modification in Southeast Asia, 1966-1972,” The Journal of Weather-modification 14, no. 1 (April 1982): 1-3.
20. William M. Gray et al., “Weather-modification by Carbon Dust Absorption of Solar Energy,” Journal of Applied Meteorology 15 (April 1976): 355.
23. Ibid., 367.
24. AWS PLAN 813, Appendix I, Annex Alfa (Scott AFB, Ill.: Air Weather Service (MAC), 14 January 1972), 11. Hereafter cited as Annex Alfa.
25. Capt Frank G. Coons, “Warm Fog Dispersal-A Different Story,” Aerospace Safety 25, no. 10 (October 1969): 16.
26. Annex Alfa, 14.
27. Warren C. Kocmond, “Dissipation of Natural Fog in the Atmosphere,” Progress of NASA Research on Warm Fog Properties and Modification Concepts, NASA SP-212 (Washington, D.C.: Scientific and Technical Information Division of the Office of Technology Utilization of the National Aeronautics and Space Administration, 1969), 74.
28. James E. Jiusto, “Some Principles of Fog Modification with Hygroscopic Nuclei,” Progress of NASA Research on Warm Fog Properties and Modification Concepts, NASA SP-212 (Washington, D.C.: Scientific and Technical Information Division of the Office of Technology Utilization of the National Aeronautics and Space Administration, 1969), 37.
29. Maj Roy Dwyer, Category III or Fog Dispersal, M-U 35582-7 D993a c.1 (Maxwell AFB, Ala.: Air University Press, May 1972), 51.
30. James McLare, Pulp & Paper 68, no. 8 (August 1994): 79.
31. Milton M. Klein, A Feasibility Study of the Use of Radiant Energy for Fog Dispersal, Abstract (Hanscom AFB, Mass.: Air Force Material Command, October 1978).
32. Edward M. Tomlinson, Kenneth C. Young, and Duane D. Smith, Laser Technology Applications for Dissipation of Warm Fog at Airfields, PL-TR-92-2087 (Hanscom AFB, Mass.: Air Force Material Command, 1992).
33. J. Storrs Hall, “Overview of Nanotechnology,” adapted from papers by Ralph C. Merkle and K. Eric Drexler, Internet address: http://nanotech.rutgers.edu/nanotech-/intro.html, Rutgers University, November 1995.
34. Robert A. Sutherland, “Results of Man-Made Fog Experiment,” Proceedings of the 1991 Battlefield Atmospherics Conference (Fort Bliss, Tex.: Hinman Hall, 3-6 December 1991).
35. Christopher Centner et al., “Environmental Warfare: Implications for Policymakers and War Planners” (Maxwell AFB, Ala.: Air Command and Staff College, May 1995), 39.
36. Louis J. Battan, Harvesting the Clouds (Garden City, N.Y.: Doubleday & Co., 1969), 120.
37. Facts on File 55, no. 2866 (2 November 1995).
38. Gene S. Stuart, “Whirlwinds and Thunderbolts,” Nature on the Rampage (Washington, D.C.: National Geographic Society, 1986), 130.
39. Ibid., 140.
40. Heinz W. Kasemir, “Lightning Suppression by Chaff Seeding and Triggered Lightning,” in Wilmot N. Hess, ed., Weather and Climate Modification (New York: John Wiley & Sons, 1974), 623-628.
41. SPACECAST 2020, Space Weather Support for Communications, white paper G, (Maxwell AFB, Ala.: Air War College/2020, 1994).
42. Gen Charles Horner, “Space Seen as Challenge, Military’s Final Frontier,” Defense Issues, (Prepared Statement to the Senate Armed Services Committee), 22 April 1993, 7.
43. Lewis M. Duncan and Robert L. Showen, “Review of Soviet Ionospheric Modification Research,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 2-1.
45. Peter M. Banks, “Overview of Ionospheric Modification from Space Platforms,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990) 19-1.
46. Capt Mike Johnson, Upper Atmospheric Research and Modification-Former Soviet Union (U), DST-18205-475-92 (Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992), 3. (Secret) Information extracted is unclassified.
47. Capt Edward E. Hume, Jr., Atmospheric and Space Environmental Research Programs in Brazil (U) (Foreign Aerospace Science and Technology Center, AF Intelligence Command, March 1993), 12. (Secret) Information extracted is unclassified.
48. Paul A. Kossey et al. “Artificial Ionospheric Mirrors (AIM),” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 17A-1.
49. Ibid., 17A-7.
50. Ibid., 17A-10.
51. B. N. Maehlum and J. Troim, “Vehicle Charging in Low Density Plasmas,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 24-1.
53. Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 69.
54. Daniel S. Halacy, The Weather Changers (New York: Harper & Row, 1968), 202.
55. William Bown, “Mathematicians Learn How to Tame Chaos,” New Scientist, 30 May 1992, 16.
56. AU-18, Space Handbook, An Analyst’s Guide Vol. II. (Maxwell AFB, Ala.: Air University Press, December 1993), 196.
57. Thomas F. Tascione, Introduction to the Space Environment (Colorado Springs: USAF Academy Department of Physics, 1984), 175.
58. SPACECAST 2020, Space Weather Support for Communications, white paper G, (Maxwell AFB, Ala.: Air War College/2020, 1994).
59. Referenced in ibid.