October 25, 2012

Science on Trial



Surely among the most absurd judicial decisions in recent memory is the verdict by an Italian court sending seven Italian earthquake experts to jail for six-year terms. Their alleged crime, deemed worthy of manslaughter convictions and damage claims of $10.2 million, was minimizing the risks of an earthquake to the residents of an Italian town after tremors struck the region. In the subsequent earthquake, over 300 people died. Though the prosecutors insist that the crime was not failing to predict an earthquake, it amounts to the same thing. The guilty verdict must elicit incredulity; it presumes knowledge that does not exist, in effect postulating certainty in what is clearly a vastly uncertain enterprise. Though one expert says that the decision will lead scientists to keep their mouths shut, the more likely consequence (pointed out in the BBC piece below) is a mountain of false alarms.


Seven prominent Italian earthquake experts were convicted of manslaughter on Monday and sentenced to six years in prison for failing to give adequate warning to the residents of a seismically active area in the months preceding an earthquake that killed more than 300 people.

Speaking in a hushed courtroom in L’Aquila, the city whose historic center was gutted by the April 2009 earthquake, the judge, Marco Billi, read a long list of names of those who had died or been injured in the disaster before he handed down the sentences to six scientists and a former government official. The defendants, who said they would appeal the decision, will also have to pay court costs and damages of $10.2 million.

The seven, most of them seismologists and geologists, were members of the National Commission for the Forecast and Prevention of Major Risks, which met shortly before the quake struck — after weeks of frequent small tremors — but did not issue a safety warning.

The verdicts jolted the international scientific community, which feared they might open the way to an onslaught of legal actions against scientists who evaluate the risks of natural hazards. “This is the death of public service on the part of professors and professionals,” said Luciano Maiani, the current president of the risks commission, according to the news agency Ansa. The legal and media pressure prompted by the trial has made it impossible to carry out professional consultancies for the state, he said, adding, “This doesn’t happen anywhere else in the world.”

Thomas H. Jordan, a professor at the University of Southern California, led a commission that after the disaster advised the Italian government about better ways to communicate earthquake risks to the public. He described the verdicts as incredible, “given that they have just convicted scientists for basically doing their job during a time of crisis.”

“I’m afraid it’s going to teach scientists to keep their mouths shut,” he added.

Scientists said the case raised the issue of when a public warning is appropriate. While predicting the exact time and location of an earthquake is not possible, seismologists are increasingly able to forecast the likelihood that a quake might occur in a certain area within a certain time. But if the likelihood is very low — as it was in this case, despite the increased seismic activity in the weeks before — a warning may do more harm than good.

Lawyers for the defendants were unanimous on Monday in their condemnation of the sentence, which exceeded the prosecution’s request of four years in prison, and vowed to appeal.

“I wasn’t expecting this,” said Alfredo Biondi, a defense lawyer. He described the ruling as one of the most erroneous that he had encountered in his long career.

“This was a trial that should not have been held in L’Aquila” because the emotional impact of the quake is still felt so strongly in the city, said Filippo Dinacci, who represents two of the defendants.

More than three years after the earthquake, L’Aquila, in the Abruzzo region east of Rome, has yet to recover fully. Its architecturally rich center is still largely abandoned, and residents are still mourning the dead. There are some sporadic signs of reconstruction around the center, including the inauguration last month of an auditorium designed by Renzo Piano, but the overall mood in the city speaks more of discouragement and dismay.

The city and surrounding towns were felled by the magnitude 6.3 quake in the early hours of April 6, 2009. The disaster left thousands homeless and killed 309, many of them in their sleep.

Six days before the quake, the risks commission met to assess the situation after the period of frequent small quakes. The seismic activity had made the public anxious, as had a series of specific quake predictions — none of which proved to be accurate — by a local man who is not a scientist. After the meeting, some commission members gave encouraging statements to the news media, which prosecutors said gave residents an overly reassuring picture of the risks they faced. The commission, prosecutors charged, did not uphold its mandate and consequently did not allow residents to make informed decisions about whether to stay or leave their homes.

In his closing arguments on Monday the prosecutor, Fabio Picuti, cited a United States court ruling that blamed the Army Corps of Engineers for “monumental negligence” for some of the flooding from Hurricane Katrina, Ansa reported. That case, Mr. Picuti said, demonstrates that experts can be held accountable for falling short in preventing and predicting a risk.

Relatives of the victims cheered the decision. “It’s just a tiny bit of justice so that it doesn’t happen again,” said an unidentified woman on Sky television.

The court did not rule on whether earthquakes can be predicted. But Fabio Alessandroni, a civil lawyer who represents the relatives of more than a dozen victims, said the sentence showed that it is possible to have a “culture of prevention.”

“It is possible to predict a risk and to adopt measures that mitigate that risk,” Mr. Alessandroni said. “It’s what the commission is supposed to do,” taking various elements, like a city’s seismic history, into account. “And this was not done in L’Aquila.”

Elisabetta Povoledo and Henry Fountain, “Italy Orders Jail Terms for 7 Who Didn’t Warn of Deadly Earthquake,” New York Times, October 22, 2012.

* * *


Years of research, much of it conducted by distinguished seismologists in your own country, have demonstrated that there is no accepted scientific method for earthquake prediction that can be reliably used to warn citizens of an impending disaster. To expect more of science at this time is unreasonable. It is manifestly unfair for scientists to be criminally charged for failing to act on information that the international scientific community would consider inadequate as a basis for issuing a warning. Moreover, we worry that subjecting scientists to criminal charges for adhering to accepted scientific practices may have a chilling effect on researchers, thereby impeding the free exchange of ideas necessary for progress in science and discouraging them from participating in matters of great public importance.

* * *

A report from the BBC provides further detail on the state of the scientific understanding, noting that predicting an earthquake is extremely difficult:

When a large amount of stress is built up in the Earth's crust, it will mostly be released in a single large earthquake, but some smaller-scale cracking in the build-up to the break will result in precursor earthquakes. These small quakes precede around half of all large earthquakes, and can continue for days to months before the big break.

Some scientists have even gone so far as to try to predict the location of the large earthquake by mapping the small tremors. The "Mogi Doughnut Hypothesis" suggests that a circular pattern of small precursor quakes will precede a large earthquake emanating from the centre of that circle.

While around half of large earthquakes have precursor tremors, only around 5% of small earthquakes are associated with a large quake. So even if small tremors are felt, they cannot serve as a reliable prediction that a large, devastating earthquake will follow. "There is no scientific basis for making a prediction", said Dr Richard Walker of the University of Oxford. . . .
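The asymmetry in these two figures is worth making concrete. Here is a back-of-the-envelope calculation in Python, treating the article's rough percentages as exact rates (an assumption made purely for illustration):

    # Rough rates taken from the BBC article, treated as exact for illustration.
    p_precursor_given_large = 0.5   # ~half of large quakes are preceded by small tremors
    p_large_given_precursor = 0.05  # ~5% of precursor episodes are followed by a large quake

    # If authorities issued a warning after every precursor episode, the
    # expected ratio of false alarms to correct warnings would be:
    false_alarms_per_hit = (1 - p_large_given_precursor) / p_large_given_precursor
    print(f"False alarms per correct warning: {false_alarms_per_hit:.0f}")   # ~19

    # And since only about half of large quakes have precursors at all, even
    # this trigger-happy policy would still miss roughly half of the disasters.
    print(f"Share of large quakes missed anyway: {1 - p_precursor_given_large:.0%}")

On these numbers, a warn-on-every-swarm policy buys roughly nineteen false alarms for every quake it correctly anticipates, while still missing about half of all large quakes.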

The minute changes in ground movement and tilt, and in the water, gas and chemical content of the ground, that are associated with earthquake activity can be monitored on a long-term scale. Measuring devices have been integrated into early warning systems that can trigger an alarm when a certain amount of activity is recorded.

Such early warning systems have been installed in Japan, Mexico and Taiwan, where the population density and high earthquake risk pose a huge threat to people's lives. But because of the nature of all of these precursor reactions, the systems may only be able to provide up to 30 seconds' advance warning.
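The 30-second ceiling is a consequence of wave physics rather than of instrument quality: an alert can travel no faster than the detection of the first-arriving P-wave, while the destructive S-wave follows close behind. A minimal sketch, using typical crustal wave speeds that are assumptions of mine rather than figures from the article:

    # Early warning races the slow, damaging S-wave using the fast, weak P-wave.
    # Wave speeds are typical crustal values, assumed here for illustration.
    V_P = 6.5  # P-wave speed in km/s (assumed)
    V_S = 3.5  # S-wave speed in km/s (assumed)

    for distance_km in (50, 100, 200):
        t_detect = distance_km / V_P       # earliest possible detection
        t_shaking = distance_km / V_S      # arrival of destructive shaking
        lead_time = t_shaking - t_detect   # best case, ignoring processing delays
        print(f"{distance_km:>3} km from the epicentre: ~{lead_time:.0f} s of warning")

Even 200 km from the epicentre the lead time is under half a minute, and a city sitting directly on the fault gets essentially none.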

"In the history of earthquake study, only one prediction has been successful", explains Dr Walker. The magnitude 7.3 earthquake in 1975 in Haicheng, North China was predicted one day before it struck, allowing authorities to order evacuation of the city, saving many lives. But the pattern of seismic activity that this prediction was based on has not resulted in a large earthquake since, and just a year later in 1976 a completely unanticipated magnitude 7.8 earthquake struck nearby Tangshan causing the death of over a quarter of a million people. The "prediction" of the Haicheng quake was therefore just a lucky unrepeatable coincidence.

A major problem in predicting earthquake events severe enough to require evacuation is the threat of issuing false alarms. Scientists could warn of a large earthquake every time a potential precursor event is observed; however, this would result in huge numbers of false alarms, which would strain public resources and might ultimately reduce the public's trust in scientists.

"Earthquakes are complex natural processes with thousands of interacting factors, which makes accurate prediction of them virtually impossible," said Dr Walker.

Seismologists agree that the best way to limit the damage and loss of life resulting from a large earthquake is to predict and manage the longer-term risks in an earthquake-prone area. These include assessing the likelihood of buildings collapsing and implementing emergency plans.

"Detailed scientific research has told us that each earthquake displays almost unique characteristics, preceded by foreshocks or small tremors, whereas others occur without warning. There simply are no rules to utilise in order to predict earthquakes," said Dr Dan Faulkner, senior lecturer in rock mechanics at the University of Liverpool.

"Earthquake prediction will only become possible with a detailed knowledge of the earthquake process. Even then, it may still be impossible."



Leila Battison, “Can we predict when and where quakes will strike?” BBC News, September 20, 2011.

October 7, 2012

The End of Growth



The two pieces that follow, from Martin Wolf of the Financial Times and Michael Feller of Macro Strategists, explore the implications of a new paper by Robert Gordon on the limits to growth.

Wolf:

Might growth be ending? This is a heretical question. Yet an expert on productivity, Robert Gordon of Northwestern University, has raised it in a provocative paper. In it, he challenges the conventional view of economists that “economic growth ... will continue indefinitely.”

Yet unlimited growth is a heroic assumption. For most of history, next to no measurable growth in output per person occurred. What growth did occur came from rising population. Then, in the middle of the 18th century, something began to stir. Output per head in the world’s most productive economies – the UK until around 1900 and the US thereafter – began to accelerate. Growth in productivity reached a peak in the two and a half decades after World War II. Thereafter growth decelerated again, despite an upward blip between 1996 and 2004. In 2011 – according to the Conference Board’s database – US output per hour was a third lower than it would have been if the 1950-72 trend had continued. Prof Gordon goes further. He argues that productivity growth might continue to decelerate over the next century, reaching negligible levels.
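The "third lower" figure is essentially a statement about compounding. A minimal sketch with illustrative growth rates (my assumptions, not Gordon's or the Conference Board's exact numbers) shows how a roughly one-percentage-point slowdown, sustained for four decades, opens a gap of that size:

    # How a modest slowdown in annual growth compounds into a large level gap.
    # The growth rates below are illustrative assumptions, not Gordon's figures.
    trend_growth = 0.028    # assumed 1950-72 trend for US output per hour, ~2.8%/yr
    actual_growth = 0.017   # assumed average rate actually achieved, 1972-2011
    years = 2011 - 1972

    trend_level = (1 + trend_growth) ** years    # where the old trend would have led
    actual_level = (1 + actual_growth) ** years  # both indexed to 1972 = 1.0

    shortfall = 1 - actual_level / trend_level
    print(f"Output per hour relative to trend: {shortfall:.0%} lower")   # roughly a third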

The future is unknowable. But the past is revealing. The core of Prof Gordon’s argument is that growth is driven by the discovery and subsequent exploitation of specific technologies and – above all – by “general purpose technologies”, which transform life in ways both deep and broad.

The implementation of a range of general purpose technologies discovered in the late 19th century drove the mid-20th century productivity explosion, Prof Gordon argues. These included electricity, the internal combustion engine, domestic running water and sewerage, communications (radio and telephone), chemicals and petroleum. These constitute “the second industrial revolution”. The first, between 1750 and 1830, started in the UK. That was the age of steam, which culminated with the railway. Today, we are living in a third, already some 50 years old: the age of information, whose leading technologies are the computer, the semiconductor and the internet.

Prof Gordon argues, to my mind persuasively, that in its impact on the economy and society, the second industrial revolution was far more profound than the first or the third. Motor power replaced animal power, across the board, removing animal waste from the roads and revolutionising speed. Running water replaced the manual hauling of water and domestic waste. Oil and gas replaced the hauling of coal and wood. Electric lights replaced candles. Electric appliances revolutionised communications, entertainment and, above all, domestic labour. Society industrialised and urbanised. Life expectancy soared. Prof Gordon notes that “little known is the fact that the annual rate of improvement in life expectancy in the first half of the 20th century was three times as fast as in the last half.” The second industrial revolution transformed far more than productivity. The lives of Americans, Europeans and, later on, Japanese, were changed utterly.

Many of these changes were one-offs. The speed of travel went from the horse to the jet plane. Then, some fifty years ago, it stuck. Urbanisation is a one-off. So, too, are the collapse in child mortality and the tripling of life expectancy. So, too, is control over domestic temperatures. So, too, is the liberation of women from domestic drudgery.

By such standards, today’s information age is full of sound and fury signifying little. Many of the labour-saving benefits of computers occurred decades ago. There was an upsurge in productivity growth in the 1990s. But the effect petered out.

In the 2000s, the impact of the information revolution has come largely via enthralling entertainment and communication devices. How important is this? Prof Gordon proposes a thought-experiment. You may keep either the brilliant devices invented since 2002 or running water and inside lavatories. I will throw in Facebook. Does that make you change your mind? I thought not. I would not keep everything invented since 1970 if the alternative were losing running water.

What we are now living through is an intense, but narrow, set of innovations in one important area of technology. Does it matter? Yes. We can, after all, see that a decade or two from now every human being will have access to all of the world’s information. But the view that overall innovation is now slower than a century ago is compelling.

What does this analysis tell us? First, the US remains the global productivity frontier. If the rate of advance of the frontier has slowed, catch-up should now be easier. Second, catch-up could still drive global growth at a high rate for a long time (resources permitting). After all, the average gross domestic product per head of developing countries is still only a seventh of that of the US (at purchasing power parity). Third, growth is not just a product of incentives. It depends even more on opportunities. Rapid increases in productivity at the frontier are possible only if the right innovations occur. Transport and energy technologies have barely changed in half a century. Lower taxes are not going to change this.
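How long could catch-up alone power global growth? A quick sketch using Wolf's sevenfold income gap; the four-point growth differential is an assumption of mine for illustration:

    import math

    # Years of catch-up growth implied by Wolf's sevenfold gap in GDP per head.
    income_gap = 7.0         # US GDP per head / developing-country average (PPP)
    growth_advantage = 0.04  # assumed catch-up growth differential, 4 pts/yr

    years_to_converge = math.log(income_gap) / math.log(1 + growth_advantage)
    print(f"Years before the gap closes: {years_to_converge:.0f}")   # ~50

On those assumptions, catch-up growth could run for roughly half a century before the gap closes.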

Prof Gordon notes further obstacles to rising standards of living for ordinary Americans. These include: the reversal of the demographic dividend that came from the baby boomers and movement of women into the labour force; the levelling-off of educational attainment; and obstacles to the living standards of the bottom 99 per cent. These hurdles include globalisation, rising resource costs and high fiscal deficits and private debts. In brief, he expects the rise in the real disposable incomes of those outside the elite to slow to a crawl. Indeed, it appears to have already done so. Similar developments are occurring in other high-income countries.

For almost two centuries, today’s high-income countries enjoyed waves of innovation that made them both far more prosperous than before and far more powerful than everybody else. This was the world of the American dream and American exceptionalism. Now innovation is slow and economic catch-up fast. The elites of the high-income countries quite like this new world. The rest of their populations like it vastly less. Get used to this. It will not change.

Feller summarizes the Gordon paper as well, citing Wolf, but then goes on to offer a range of fascinating connections in the history of economic thought:

These are questions others have asked as well, ranging from the longue durée historians of the Sorbonne, who are attempting to pinpoint capitalism’s demise by 2100 (economic systems, like those of feudal Europe or the Roman Empire, apparently last 600 years), to UBS strategist Andy Lees, who last year provocatively claimed the world had hit its innovation peak in the 1840s.

Furthermore, these questions are not new. William Morris, better known for his wallpaper designs, wrote of a cashless society in late Victorian England. In 1516, philosopher Thomas More described the isle of Utopia where gold and silver were cast aside for pursuits of real prosperity, the metals only used for the “humblest items of domestic equipment”.

And of course, there was John Stuart Mill, who in 1848 would advance the notion of the ‘stationary state’, where objectives of economic quality were to be pursued over objectives of economic quantity. This no-growth model would later have appeal for Kibbutzniks, survivalists, and environmentalists such as the authors of the 1972 Club of Rome report – who reintroduced Mill’s concept of the limits to growth to a new Malthusian audience. Even John Maynard Keynes, an admirer of Mill, would at times lament the obsession politicians would come to have with GDP, an instrument of measurement he helped devise for limited use during the Second World War.

Yet Mill’s legacy is perhaps most relevant today with global populations now stabilising, the risks of catastrophic climate change becoming ever more apparent, technology supplanting labour and productivity seen by many economists as the last great hope for growth. Indeed, outside of canonising modern liberalism or the idea of falsification in the scientific method, Mill’s most important contribution to political economy was arguably his theory of development: that growth was a function of capital, labour and land (or natural resources). Mill felt that sustainable development was only possible if growth in labour was exceeded by growth in land and capital productivity, rather than debt. With middle class wages stagnating and the so-called 99% seeing few of the economic gains that we are supposed to have made since the economic deregulation of the 1980s, Mill’s dictums speak a remarkable truth across the gulf of time.

In a week where prominent fund manager Bill Gross likened America’s credit-based economic model to a crystal meth addiction (like any ‘hopium’ or narcotic, debt borrows the benefits of tomorrow for the enjoyment of today) and when central banks, from Australia to Russia, are joining peers in Europe, Britain, the US and Japan in pushing down rates or pump-priming markets with liquidity, one can see another of Mill’s classic warnings – the tyranny of the majority – coming true, with quick fixes and short-term solutions the order of the day.

Yet perhaps no country is as apt for Mill’s analysis as China: a country in self-imposed demographic decline, where utilitarianism and capital factor productivity have been warped into a fixed-asset bubble and where land is quite literally denuded of soil, drained of moisture and acidified by the detritus of industrialisation.

In an op-ed for the New York Times last week, economist Richard Easterlin described China’s belief that it could purchase social stability through rapid economic growth as a “Faustian bargain”, a phrase also used recently by Bundesbank chief Jens Weidmann in criticising the European Central Bank’s outright monetary transactions.

In surveys of life satisfaction conducted between 1990 and 2011, Easterlin and his colleagues found that the average Chinese citizen is no more satisfied today than in the aftermath of Tiananmen Square. Moreover, satisfaction scores for urbanites actually declined for most of the period, as old social safety nets – China’s so-called ‘iron rice bowl’ – were removed in the name of productivity, efficiency and, ultimately, economic liberalisation, and as the competitive urges of capitalism overtook the cooperation, forced or not, of socialism.

Yet unlike other post-Enlightenment political philosophies – whether the capitalism of Adam Smith or the communism of Karl Marx – Mill saw growth as more than material. Borrowing from its classical antecedent – the Ancient Greeks spoke of humanity’s goal as eudaimonia, or flourishing – modern society is built on the ideal of improvement, development, growth and expansion, but it needn’t be so mono-dimensional. Indeed, Aristotle considered moderation, one of the four cardinal virtues, to be an essential pillar of eudaimonia.

The problem is that in many ways we may have already reached the limits of growth. Politicians like to talk about the glass being half full or half empty, yet ultimately we’re still drinking from the same poisoned chalice. By looking again at John Stuart Mill, we may find a more refreshing alternative.

* * *

The one thing that neither writer focuses on—and a very big thing it is—is the implications of the limits to growth for our debt-bound economies. Adjustment to the end of growth would be far easier in the absence of the debt overhang. Growth as usual threatens the environment, it may be ventured, but the end of growth threatens to crash the entire financial architecture.

* * *

Martin Wolf, “Is unlimited growth a thing of the past?” Financial Times, October 2, 2012.


Robert J. Gordon, “Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds,” National Bureau of Economic Research Working Paper No. 18315 (August 2012).