Commentary

Too Many Health Clinics Hurt Developing Countries

FREETOWN, SIERRA LEONE – Donors like the World Bank and the World Health Organization often urge developing countries to invest in national health systems. But while rushing to construct clinics and other medical facilities in even the remotest regions may seem like a straightforward approach to ensuring universal health coverage, it has not turned out that way.


The recent Ebola epidemic in West Africa highlighted the urgent need for stronger, more efficient, and more resilient health-care systems in developing countries. But when countries rush to build more clinics, the resulting facilities tend to be hastily constructed and lacking in the equipment, supplies, and staff needed to deliver vital health services effectively.

In my frequent visits to rural areas of my native Sierra Leone, I have seen more than a few health facilities that communities could do without. A newly refurbished facility in Masunthu, for example, had scant equipment and no water in the taps. The facilities in nearby Maselleh and Katherie had cracked walls, leaky roofs, and so few cupboards that supplies like syringes and medical registers had to be stacked on the floor.

This situation is the direct result of a piecemeal and hurried approach to investment in health-care infrastructure. At the end of the civil war in 2002, Sierra Leone had fewer than 700 health facilities, according to the 2004 Primary Health Care Handbook. In 2003, the cash-strapped government decided to “decentralize” various public services to the district level, fueling fierce competition for limited resources.

Local councils, seeking to grab the biggest possible slice of the pie, began to push forward new projects, leading to rapid and uncontrolled expansion of the health system. Today, Sierra Leone – with a population of just seven million – has nearly 1,300 health facilities. The Ministry of Health has been unable to equip all of these new facilities and cover staff and operational costs, as its budget has not risen to match the system’s expansion. In fact, very few (if any) of the African countries that signed the 2001 Abuja Declaration to allocate 15% of their budget to health have been able to do so.

Last September, Sierra Leone conducted an assessment of the distribution of public-health facilities and health workers in the country, in order to guide discussions on the Human Resources for Health Strategy 2017-2021. The results were stark: only 47% of the country’s health facilities employed more than two health workers, including unsalaried workers and volunteers. Seven percent of health facilities had no health workers assigned to them at all – an empty promise in physical form.

This situation is not unique to Sierra Leone – or to Africa. In Indonesia, the government invested oil revenue in the massive and rapid expansion of basic social services, including health care. But today a shortage of doctors plagues many of these facilities, particularly in remote areas, where absenteeism also is high. There are many nurses, but most are inadequately trained. Still, they are left to run remote facilities on their own.

Beyond personnel, remote health facilities in Indonesia lack adequate supporting infrastructure: clean water, sanitation, reliable electricity, and basic medicine and equipment. Decentralized local governments, which have little authority over remote clinics, cannot supervise their activities. Small wonder that Indonesia has one of the highest rates of maternal mortality in East Asia.

An excess of poorly equipped health facilities is not only ineffective; it can actually make matters worse, owing to factors like poor sanitation and weak emergency referral systems. During the recent Ebola crisis, underequipped facilities caused even more deaths, not just among patients, but also among the health workers committed to helping them.

Rather than continuing to pursue the uncontrolled proliferation of poorly equipped and operated health-care facilities, policymakers should consider a more measured approach. Of course, people living in remote areas need access to quality health care, without having to navigate rough and dangerous roads that can become virtually impassable during some periods of the year. But outreach services and community health workers could cover these areas much more effectively. The value of such an approach has recently been demonstrated in Ethiopia, where health outcomes have improved.

While most of Sierra Leone’s facilities were built with donor funds, the government has gone along with plans to accelerate the construction drive. The government and donors have a joint responsibility to pursue a more cautious approach that guarantees quality service delivery.

At the WHO’s World Health Assembly this month, participants should shine a spotlight on this responsibility and begin to rethink current strategies for achieving universal health coverage. With a more measured approach, it will take longer to build the same number of clinics. But more lives will be saved. And that’s the only indicator that should count.

Samuel Kargbo is Director of Policy and Planning in Sierra Leone, a member of the UHC2030 Steering Committee, and a 2016 Aspen Institute New Voices fellow.

By Samuel Kargbo

Manchester’s Bright Future

MANCHESTER – I am a proud Mancunian (as the people of Manchester are known), despite the fact I haven’t lived there permanently since I left school for university when I was 18. I was born in St. Mary’s hospital near the city center, was raised in a pleasant suburb in South Manchester, and attended a normal primary and junior school in a nearby, tougher neighborhood, before attending Burnage for secondary school. Thirty-eight years after I attended Burnage, so, apparently, did Salman Abedi, the suspected Manchester Arena bomber.


The atrocity carried out by Abedi, for which the Islamic State has claimed credit, is probably worse than the dreadful bombing by the Irish Republican Army that destroyed parts of the city center 21 years ago, an event that many believe played a key role in Manchester’s renaissance. At least in that case, the bombers gave a 90-minute warning that helped avoid loss of life. Abedi’s barbaric act, by contrast, killed at least 22 people, many of them children.

In recent years, I have been heavily involved in the policy aspects of this great city’s economic revival. I chaired an economic advisory group to the Greater Manchester Council, and then served as Chair of the Cities Growth Commission, which advocated for the “Northern Powerhouse,” a program to link the cities of the British north into a cohesive economic unit. Subsequently, I briefly joined David Cameron’s government, to help implement the early stages of the Northern Powerhouse.

I have never attended a concert at the Manchester Arena, but it appears to be a great venue for the city. Just as Manchester Airport has emerged as a transport hub serving the Northern Powerhouse, the arena plays a similar role in terms of live entertainment. As the sad reports about those affected indicate, attendees came from many parts of northern England (and beyond).

In the past couple of years, Manchester has received much praise for its economic revival, including its position at the geographic heart of the Northern Powerhouse, and I am sure this will continue. Employment levels and the regional PMI business surveys indicate that, for most of the past two years, economic momentum has been stronger in North West England than in the country as a whole, including London. Whether this is because of the Northern Powerhouse policy is difficult to infer; whatever the reason, it is hugely welcome and important to sustain.

To my occasional irritation, many people still wonder what exactly the Northern Powerhouse is. At its core, it represents the economic geography that lies within Liverpool to the west, Sheffield to the east, and Leeds to the northeast, with Manchester in the middle. The distance from Manchester to the center of any of those other cities is less than 40 miles (64 kilometers), which is shorter than the London Underground’s Central, Piccadilly, or District lines. If the 7-8 million people who live in those cities – and in the numerous towns, villages, and other areas between them – can be connected via infrastructure, they can act as a single unit in terms of their roles as consumers and producers.

The Northern Powerhouse would then be a genuine structural game changer for Britain’s economy. Indeed, along with London, it would be a second dynamic economic zone that registers on a global scale. It is this simple premise that led the previous government to place my ideas at the core of its economic policies, and why the Northern Powerhouse has become so attractive to business here in the United Kingdom and overseas.

It is a thrilling prospect, and, despite being less than three years old, it is showing signs of progress. In fact, given the broader economic benefits of agglomeration, the Northern Powerhouse mantra can be extended to the whole of the North of England, not least to include Hull and the North East. But it is what I often inelegantly call “Man-Sheff-Leeds-Pool” that distinguishes the Northern Powerhouse, and Manchester, which sits at the heart of it, is certainly among the early beneficiaries.

Despite this, I have frequently said to local policy leaders, business people, those from the philanthropic world, and others that unless the areas lying outside the immediate vicinity of central Manchester benefit from regional dynamism, Greater Manchester’s success will be far from complete. Anyone who looks little more than a mile north, south, east, or west of Manchester’s Albert Square – never mind slightly less adjacent parts such as Oldham and Rochdale – can see that much needs to be improved, including education, skills training, and inclusiveness, in order to ensure long-term success.

Whatever the warped motive of the 22-year-old Abedi, who evidently blew himself up along with the innocent victims, his reprehensible act will not tarnish Manchester’s bright, hopeful future. I do not claim to understand the world of terrorism, but I do know that those who live in and around Manchester and other cities need to feel part of their community and share its aspirations. Residents who identify with their community are less likely to harm it – and more likely to contribute actively to its renewed vigor.

Now more than ever, Manchester needs the vision that the Northern Powerhouse provides. It is a vision that other cities and regions would do well to emulate.

Jim O’Neill, a former chairman of Goldman Sachs Asset Management and a former UK Treasury Minister, is Honorary Professor of Economics at Manchester University and former Chairman of the British government's Review on Antimicrobial Resistance.

By Jim O’Neill

The Six-Day War at 50

NEW YORK – The world is about to mark the 50th anniversary of the June 1967 war between Israel and Egypt, Jordan, and Syria – a conflict that continues to stand out in a region with a modern history largely defined by violence. The war lasted less than a week, but its legacy remains pronounced a half-century later.

The war itself was triggered by an Israeli preemptive strike on the Egyptian air force, in response to Egypt’s decision to expel a United Nations peacekeeping force from Gaza and the Sinai Peninsula and to close the Straits of Tiran to Israeli shipping. Israel struck first, but most observers regarded what it did as a legitimate act of self-defense against an imminent threat.

Israel did not intend to fight on more than one front, but the war quickly expanded when both Jordan and Syria entered the conflict on Egypt’s side. It was a costly decision for the Arab countries. After just six days of fighting, Israel controlled the Sinai Peninsula and the Gaza Strip, the Golan Heights, the West Bank, and all of Jerusalem. The new Israel was more than three times larger than the old one. It was oddly reminiscent of Genesis: six days of intense effort followed by a day of rest, in this case the signing of a cease-fire.

The one-sided battle and its outcome put an end to the notion (for some, a dream) that Israel could be eliminated. The 1967 victory made Israel permanent in ways that the wars of 1948 and 1956 did not. The country finally acquired a degree of strategic depth. Most Arab leaders came to shift their strategic goal from Israel’s disappearance to its return to the pre-1967 war borders.

The Six-Day War did not, however, lead to peace, even a partial one. That would have to wait until the October 1973 war, which set the stage for what became the Camp David Accords and the Israel-Egypt peace treaty. The Arab side emerged from this subsequent conflict with its honor restored; Israelis for their part emerged chastened. There is a valuable lesson here: decisive military outcomes do not necessarily lead to decisive political results, much less peace.

The 1967 war did, however, lead to diplomacy, in this case UN Security Council Resolution 242. Approved in November 1967, the resolution called for Israel to withdraw from territories occupied in the recent conflict – but also upheld Israel’s right to live within secure and recognized boundaries. The resolution was a classic case of creative ambiguity. Different people read it to mean different things. That can make a resolution easier to adopt, but more difficult to act on.

It thus comes as little surprise that there is still no peace between Israelis and Palestinians, despite countless diplomatic undertakings by the United States, the European Union and its members, the UN, and the parties themselves. To be fair, Resolution 242 cannot be blamed for this state of affairs. Peace comes only when a conflict becomes ripe for resolution, which happens when the leaders of the principal protagonists are both willing and able to embrace compromise. Absent that, no amount of well-intentioned diplomatic effort by outsiders can compensate.

But the 1967 war has had an enormous impact all the same. Palestinians acquired an identity and international prominence that had largely eluded them when most were living under Egyptian or Jordanian rule. What Palestinians could not generate was a consensus among themselves regarding whether to accept Israel and, if so, what to give up in order to have a state of their own.

Israelis could agree on some things. A majority supported returning the Sinai to Egypt. Various governments were prepared to return the Golan Heights to Syria under terms that were never met. Israel unilaterally withdrew from Gaza and signed a peace treaty with Jordan. There was also broad agreement that Jerusalem should remain unified and in Israeli hands.

But agreement stopped when it came to the West Bank. For some Israelis, this territory was a means to an end, to be exchanged for a secure peace with a responsible Palestinian state. For others, it was an end in itself, to be settled and retained.

This is not to suggest a total absence of diplomatic progress since 1967. Many Israelis and Palestinians have come to recognize the reality of one another’s existence and the need for some sort of partition of the land into two states. But for now the two sides are not prepared to resolve what separates them. Both sides have paid and are paying a price for this standoff.

Beyond the physical and economic toll, Palestinians continue to lack a state of their own and control over their own lives. Israel’s objective of being a permanent Jewish, democratic, secure, and prosperous country is threatened by open-ended occupation and evolving demographic realities.

Meanwhile, the region and the world have mostly moved on, concerned more about Russia or China or North Korea. And even if there were peace between Israelis and Palestinians, it would not bring peace to Syria, Iraq, Yemen, or Libya. Fifty years after six days of war, the absence of peace between Israelis and Palestinians is part of an imperfect status quo that many have come to accept and expect.

Richard N. Haass is the president of the Council on Foreign Relations and the author, most recently, of A World in Disarray: American Foreign Policy and the Crisis of the Old Order.

By Richard N. Haass

The Promise of Digital Health

BASEL – Africa has changed remarkably, and for the better, since I first worked as a young doctor in Angola some 20 years ago. But no change has been more obvious than the way the continent has adopted mobile technology. People in Africa – and, indeed, throughout low- and middle-income countries – are seizing the opportunities that technology provides, using mobile phones for everything from making payments to issuing birth certificates, to gaining access to health care.


The benefit of mobile technologies lies in access. Barriers like geographical distance and low resources, which have long prevented billions of people from getting the care they need, are much easier to overcome in the digital age. And, indeed, there are countless ways in which technology can be deployed to improve health-care access and delivery.

Of course, this is not new information, and a growing number of technology-based health initiatives have taken shape in recent years. But only a few have reached scale, and achieved long-term sustainability; the majority of projects have not made it past the pilot phase. The result is a highly fragmented landscape of digital solutions – one that, in some cases, can add extra strain to existing health systems.

The first step to addressing this problem is to identify which factors breed success – and which impede it. Here, perhaps the most important observation relates to how the solution is linked to the reality on the ground. After all, technology is an enabler of innovation in health-care delivery, not an end in itself.

Solutions that focus on end-users, whether health practitioners or patients, have the best chance of succeeding. Fundamental to this approach is the recognition that what users need are not necessarily the most advanced technologies, but rather solutions that are easy to use and implement. In fact, seemingly outdated technologies like voice and text messages can be far more useful tools for the intended users than the latest apps or cutting-edge innovations in, say, nanotechnology.

Consider the Community-based Hypertension Improvement Project in Ghana, run by the Novartis Foundation, which I lead, and FHI 360. The project supports patients in self-managing their condition through regular mobile medication reminders, as well as advice on necessary lifestyle changes. This approach is successful because it is patient-centered and leverages information and communication technology (ICT) tools that are readily available and commonly used. In a country where mobile penetration exceeds 80% but relatively few people have smartphones, such simple solutions can have the greatest impact.

For health practitioners, digital solutions must be perceived as boosting efficiency, rather than adding to their already-heavy workload. Co-creating solutions with people experienced in delivering health care in low-resource settings can help to ensure that the solutions are adopted at scale.

For example, the telemedicine network that the Novartis Foundation and its partners rolled out with the Ghana Health Service was a direct response to the need, expressed by health-care practitioners on the ground, to expand the reach of medical expertise. The network connects frontline health workers, via a simple phone call, to consultation centers in referral hospitals several hours away, where doctors and specialists are available around the clock. From the outset, the project was fully operated on the ground by Ghana Health Service staff, which made the model sustainable at scale.

To realize the full potential of digital health, solutions need to be integrated into national health systems. Only then can digital technology accelerate progress toward universal health coverage and address countries’ priority health needs.

Collaboration across the health and ICT sectors, both public and private, is essential. Multidisciplinary partnerships driven by the sustained leadership of senior government officials must guide progress, beginning at the planning stage. Intra-governmental collaboration, dedicated financing for digital health solutions, and effective governance mechanisms will also be vital to successful strategies.

Digital technologies offer huge opportunities to improve the way health care is delivered. If we are to seize them, we must learn from past experience. By remaining focused on the reality of end-users and on priority health needs, rather than being dazzled by the latest technology, we can fulfil the promise of digital health.

Ann Aerts is Head of the Novartis Foundation and Chair of the Broadband Commission for Sustainable Development Working Group on Digital Health.

By Ann Aerts

Energy, Economics, and the Environment

LONDON – To secure a low-carbon future and begin to address the challenge of climate change, the world needs more investment in renewable energy. So how do we get there? No system of power production is perfect, and even “green” power projects, given their geographic footprint, must be managed carefully to mitigate “energy sprawl” and the associated effects on landscapes, rivers, and oceans.


Hydropower offers one of the clearest examples of how the location of renewable energy infrastructure can have unintended consequences. Dam-generated electricity is currently the planet’s largest source of renewable energy, delivering about twice as much power as all other renewables combined. Even with massive expansion in solar and wind power projects, most forecasts assume that meeting global climate mitigation goals will require at least a 50% increase in hydropower capacity by 2040.

Despite hydropower’s promise, however, there are significant economic and ecological consequences to consider whenever dams are installed. Barriers that restrict the flow of water are particularly disruptive to inland fisheries, for example. More than six million tons of fish are harvested annually from river basins with projected hydropower development. Without proper planning, these projects could jeopardize a key source of food and income generation for more than 100 million people.

Consequences like these are not always apparent when countries plan dams in isolation. In many parts of Asia, Latin America, and Sub-Saharan Africa, hydropower is an important source of energy and economic development. But free-flowing rivers are also essential to the health of communities, local economies, and ecosystems. By some estimates, if the world completes all of the dam projects currently underway or planned without mitigation measures, the resulting infrastructure would disrupt 300,000 kilometers (186,411 miles) of free-flowing rivers – a length equivalent to seven trips around the planet.

There is a better way. By taking a system-scale approach – looking at dams in the context of an entire river basin, rather than on a project-by-project basis – we can better anticipate and balance the environmental, social, and economic effects of any single project, while at the same time ensuring that a community’s energy needs are met. The Nature Conservancy has pioneered such a planning approach – what we call “Hydropower by Design” – to help countries realize the full value within their river basins.

Even one dam changes the physical attributes of a river basin. Multiplied through an entire watershed, the impact is magnified. Hydropower projects planned in isolation not only often cause more environmental damage than necessary; they often fail to achieve their maximum strategic potential and may even constrain future economic opportunities.

As a result, even dams that meet their power-generation goals may fail to maximize the long-term value of other water-management services such as flood control, navigation, and water storage. Our research shows that these services add an estimated $770 billion annually to the global economy. Failure to design dams to their fullest potential, therefore, carries a significant cost.

In the past, some developers have been resistant to this sort of strategic planning, believing that it would cause delays and be expensive to implement. But, as the Conservancy’s latest report – The Power of Rivers: A Business Case – demonstrates, accounting for environmental, social, and economic risks up front can minimize delays and budget overruns while reducing the possibility of lawsuits. More important, for developers and investors, employing a holistic or system-wide approach leverages economies of scale in dam construction.

The financial and development benefits of such planning enable the process to pay for itself. Our projections show that projects sited using a Hydropower by Design approach can meet their energy objectives, achieve a higher average rate of return, and reduce adverse effects on environmental resources. With nearly $2 trillion of investment in hydropower anticipated between now and 2040, the benefits of smarter planning represent significant value.

System-scale hydropower planning does not require builders to embrace an entirely new process. Instead, governments and developers can integrate principles and tools into existing planning and regulatory processes. Similar principles are being applied to wind, solar, and other energy sources with large geographic footprints.

Completing the transition to a low-carbon future is perhaps the preeminent challenge of our time, and we won’t succeed without expanding renewable-energy production. In the case of hydropower, if we plan carefully using a more holistic approach, we can meet global goals for clean energy while protecting some 100,000 kilometers of river that would otherwise be disrupted. But if we don’t step back and see the whole picture, we will simply be trading one problem for another.

Giulio Boccaletti is Chief Strategy Officer and Global Managing Director for Water at The Nature Conservancy.

By Giulio Boccaletti

Germany Will Lose if Macron Fails

FRANKFURT – When Emmanuel Macron won the French presidential election, many Germans breathed a loud sigh of relief. A pro-European centrist had soundly defeated a far-right populist, the National Front’s Marine Le Pen. But if the nationalist threat to Europe is truly to be contained, Germany will have to work with Macron to address the economic challenges that have driven so many voters to reject the European Union.


This will not be easy. In fact, within a couple of days of the election, core planks of Macron’s economic platform were already under attack in Germany. For starters, his proposed reforms of eurozone governance have been met with substantial criticism.

Macron’s campaign manifesto embraced the idea of more eurozone federalism, characterized by a shared budget for eurozone public goods, administered by a eurozone economics and finance minister and accountable to a eurozone parliament. It also called for greater coordination of tax regimes and border controls, stronger protection of the integrity of the internal market, and, in view of the rising threat of protectionism in the United States, a “made in Europe” procurement policy.

An attempt at re-opening the debate about Eurobonds, or the partial mutualization of eurozone public-sector liabilities, was viewed as a pie-in-the-sky suggestion, mostly just a distraction. And, incidentally, it appears nowhere in Macron’s platform. Far more disturbing to German pundits and policymakers is Macron’s desire for Germany to make use of its fiscal capacity to boost domestic demand, thereby reducing its massive current-account surplus.

These are not new ideas: the European Commission, the International Monetary Fund, Macron’s predecessors, and economists throughout Europe have advanced them often. And, just as predictably, Germany’s government has roundly rejected them, relying on reasoning that, like its counter-arguments, is well rehearsed.

For the most part, German economists and officials believe that economic policy should focus almost exclusively on the supply side, diagnosing and addressing structural problems. And German officials also regularly suggest that their economy is already performing at close to its supply-determined limits.

In fact, far from viewing the current-account surplus as a policy problem, the German government sees it as a reflection of the underlying competitiveness of German firms. It is the benign upshot of responsible labor unions, which allow for appropriate firm-level wage flexibility.

The accumulation of foreign assets is a logical corollary of these surpluses, not to mention an imperative for an aging society. Indeed, German policymakers view as essential a reduction of Germany’s debt-to-GDP ratio toward the 60% ceiling set by European rules. When, if not in good times, does one have the chance to save?

This stance does not align particularly smoothly with Macron’s economic program. While Macron’s program includes significant proposals for addressing supply-side issues with the French economy, it also favors output stabilization and, more important, increased spending in areas like public infrastructure, digitization, and clean energy to boost potential growth.

Despite Macron’s decisive victory, he faces an uphill battle implementing his economic agenda. Even if the National Assembly, to be elected in June, endorses his reform program, street-level resistance will be no less fierce than it has been over the last few years.

Germany, however, has good reason to support Macron’s supply- and demand-side reforms. After all, France and Germany are deeply interdependent, meaning that Germany has a stake in Macron’s fate.

While it is true that the German government cannot (fortunately) fine-tune wages, it could, out of sheer self-interest, provide for its future by investing more in its human and social capital – including schools, from kindergartens to universities, and infrastructure like roads, bridges, and bandwidth. This approach would reduce the private user cost of capital, thereby making private investment more attractive. It would also create domestic real assets, reducing Germany’s exposure to foreign credit risk. A lower current-account surplus implies a more sustainable net-financial-liability position for Germany’s partners.

If Germany and Macron don’t find common ground, the costs to both will be massive. No malicious external actor is imposing populism on Europe; it has emerged organically, fueled by real and widespread grievances. While those grievances are not exclusively economic, the geography of populism does fit that of the EU’s economic malaise: too many Europeans have been losing out for too long. So, if Macron fails to deliver on his promises, a Euroskeptic like Le Pen could well win France’s next election.

To avoid this outcome, Macron must be firmer than his predecessors in pursuing difficult but ultimately beneficial policies. He might take a page from former German Chancellor Gerhard Schröder’s playbook. In 2003, Schröder prioritized reforms over rigorous obedience to the EU’s Stability and Growth Pact. Additional fiscal leeway was needed to smooth the economy’s adjustment to the bold labor-market reforms that he was introducing. The decision to prioritize reforms over obstinate rule-following proved to be a good one.

Now is Macron’s Schröderian moment. He, too, appears to have opted for reasoned pragmatism over the blind implementation of rigid rules (which cannot make sense in all circumstances). Fortunately, policy principles are not written in stone, not even in Germany. Recall that the German government adamantly rejected the eurozone banking union and the European Stability Mechanism, both of which were ultimately launched (though some say it was too little, too late).

Europe is experiencing a seismic shift, with its political system being undermined from within (and becoming vulnerable to Russian pressure from without). Fear of the “other” and perceptions of trade as a zero-sum game are taking hold. These circumstances call for bold and committed action, not only by France, but also by Germany, which, ultimately, has the most to lose.

Hans-Helmut Kotz, Program Director of the SAFE Policy Center at the Goethe University in Frankfurt, is a visiting professor of economics and a resident fellow at the Center for European Studies at Harvard University.

By Hans-Helmut Kotz

Toward a Global Treaty on Plastic Waste

BERLIN – If there are any geologists in millions of years, they will easily be able to pinpoint the start of the so-called Anthropocene – the geological age during which humans became the dominant influence on our planet’s environment. Wherever they look, they will find clear evidence of its onset, in the form of plastic waste.


Plastic is a key material in the world economy, found in cars, mobile phones, toys, clothes, packaging, medical devices, and much more. Worldwide, 322 million metric tons of plastic were produced in 2015. And the figure keeps growing; by 2050, it could be four times higher.

But plastic already is creating massive global environmental, economic, and social problems. Despite requiring resources to produce, plastic is so cheap that it often is used for disposable – often single-use – products. As a result, a huge amount of it ends up polluting the earth.

Plastic clogs cities’ sewer systems and increases the risk of flooding. Larger pieces can fill with rainwater, providing a breeding ground for disease-spreading mosquitos. Up to 13 million tons of plastic waste end up in the ocean each year; by 2050, there could be more plastic in there than fish. The plastic that washes up on shores costs the tourism industry hundreds of millions of dollars every year.

Moreover, all that plastic poses a serious threat to wildlife. Beyond the dead or dying seals, penguins, and turtles that had the bad fortune of becoming entangled in plastic rings or nets, biologists are finding dead whales and birds with stomachs stuffed with plastic debris.

Plastic products may not be all that good for humans, either. While the plastics used, say, to package our foods are usually nontoxic, most plastics are laden with chemicals, from softeners (which can act as endocrine disruptors) to flame retardants (which can be carcinogenic or toxic in higher concentrations). These chemicals can make it through the ocean and its food chain – and onto our plates.

Addressing the problem will not be easy; no single country or company, however determined, can do so on its own. Many actors – including the biggest plastic producers and polluters, zero-waste initiatives, research labs, and waste-picker cooperatives – will have to tackle the problem head-on.

The first step is to create a high-level forum to facilitate discussion among such stakeholders, with the goal of developing a cooperative strategy for reducing plastic pollution. Such a strategy should go beyond voluntary action plans and partnerships to focus on developing a legally binding international agreement, underpinned by a commitment from all governments to eliminate plastic pollution. Negotiations on such a treaty could be launched this year, at the United Nations Environment Assembly in Nairobi in December.

Scientists have already advanced concrete proposals for a plastic-pollution treaty. One of the authors of this article proposed a convention modeled after the Paris climate agreement: a binding overarching goal combined with voluntary national action plans and flexible measures to achieve them. A research team from the University of Wollongong in Australia, taking inspiration from the Montreal Protocol, the treaty that safeguards the ozone layer, has suggested caps and bans on new plastic production.

Some might ask whether we should embark on yet another journey down the long, winding, and tiresome road of global treaty negotiations. Can’t we engineer our way out of our plastic problem?

The short answer is, probably not. Biodegradable plastics, for example, make sense only if they decompose quickly enough to avoid harming wildlife. Even promising discoveries like bacteria or moths that can dissolve or digest plastics can provide only auxiliary support.

The only way truly to address the problem is to slash our plastic waste. Technology might be able to help, offering more options for substitution and recycling; but, as the many zero-waste communities and cities around the world have shown, it is not necessary.

For example, Capannori, a town of 46,700 inhabitants near Lucca in Tuscany, adopted a zero-waste strategy in 2007. A decade later, it has reduced its waste by 40%. With 82% of municipal waste now separated at source, just 18% of residual waste ends up in landfills. Such experiences should inform and guide the national action plans that would form part of the treaty on plastics.

The European Commission’s “circular economy package” may provide another example worth emulating. Though it has not yet been implemented, its waste targets have the potential to save the European Union 190 million tons of CO2 emissions per year. That is the equivalent of annual emissions in the Netherlands.

Of course, the transition to zero waste will require some investment. Any international treaty on plastic must therefore include a funding mechanism, and the “polluter pays” principle is the right place to start. The global plastic industry, with annual revenues of about $750 billion, surely could find a few hundred million dollars to help clean up the mess it created.

A comprehensive, binding, and forward-looking global plastics treaty will not be easy to achieve. It will take time and cost money, and it will inevitably include loopholes and have shortcomings. It certainly will not solve the plastic pollution problem on its own. But it is a prerequisite for success.

Plastic pollution is a defining problem of the Anthropocene. It is, after all, a global scourge that is entirely of our making – and entirely within our power to solve as well.


Nils Simon is a political scientist and Senior Project Manager at adelphi research. Lili Fuhr heads the Ecology and Sustainable Development Department at the Heinrich Böll Foundation.

By Nils Simon and Lili Fuhr

How Culture Shapes Human Evolution

ST. ANDREWS – Is there an evolutionary explanation for humanity’s greatest successes – technology, science, and the arts – with roots that can be traced back to animal behavior? I first asked this question 30 years ago, and have been working to answer it ever since.


Plenty of animals use tools, emit signals, imitate one another, and possess memories of past events. Some even develop learned traditions that entail consuming particular foods or singing a particular kind of song – acts that, to some extent, resemble human culture.

But human mental ability stands far apart. We live in complex societies organized around linguistically coded rules, morals, and social institutions, with a massive reliance on technology. We have devised machines that fly, microchips, and vaccines. We have written stories, songs, and sonnets. We have danced in Swan Lake.

Developmental psychologists have established that when it comes to dealing with the physical world (for example, spatial memory and tool use), human toddlers’ cognitive skills are already comparable to those of adult chimpanzees and orangutans. In terms of social cognition (such as imitating others or understanding intentions), toddlers’ minds are far more sophisticated.

The same gap is observed in both communication and cooperation. Vaunted claims that apes produce language do not stand up to scrutiny: animals can learn the meanings of signs and string together simple word combinations, but they cannot master syntax. And experiments show that apes cooperate far less readily than humans.

Thanks to advances in comparative cognition, scientists are now confident that other animals do not possess hidden reasoning powers and cognitive complexity, and that the gap between human and animal intelligence is genuine. So how could something as extraordinary and unique as the human mind evolve?

A major interdisciplinary effort has recently solved this longstanding evolutionary puzzle. The answer is surprising. It turns out that our species’ most extraordinary characteristics – our intelligence, language, cooperation, and technology – did not evolve as adaptive responses to external conditions. Rather, humans are creatures of their own making, with minds that were built not just for culture, but by culture. In other words, culture transformed the evolutionary process.

Key insights came from studies on animal behavior, which showed that, although social learning (copying) is widespread in nature, animals are highly selective about what and whom they copy. Copying confers an evolutionary advantage only when it is accurate and efficient. Natural selection should therefore favor structures and capabilities in the brain that enhance the accuracy and efficiency of social learning.

Consistent with this prediction, research reveals strong associations between behavioral complexity and brain size. Big-brained primates invent new behaviors, copy the innovations of others, and use tools more than small-brained primates do. Selection for high intelligence almost certainly derives from multiple sources, but recent studies imply that selection for the intelligence to cope with complex social environments in monkeys and apes was followed by more restricted selection for cultural intelligence in the great apes, capuchins, and macaques.

Why, then, haven’t gorillas invented Facebook, or capuchins built spacecraft? To achieve such high levels of cognitive functioning requires not just cultural intelligence, but also cumulative culture, in which modifications accumulate over time. That demands transmission of information with a degree of accuracy of which only humans are capable. Indeed, small increases in the accuracy of social transmission lead to big increases in the diversity and longevity of culture, as well as to fads, fashions, and conformity.

Our ancestors were able to achieve such high-fidelity information transmission not just because of language, but also because of teaching – a practice that is rare in nature, but universal among humans (once the subtle forms it takes are recognized). Mathematical analyses reveal that, while it is generally difficult for teaching to evolve, cumulative culture promotes teaching. This implies that teaching and cumulative culture co-evolved, producing a species that taught relatives across a broad range of circumstances.

It is in this context that language appeared. Evidence suggests that language originally evolved to reduce the costs, increase the accuracy, and expand the domains of teaching. That explanation accounts for many properties of language, including its uniqueness, power of generalization, and the fact that it is learned.

All of the elements that have underpinned the development of human cognitive abilities – encephalization (the evolutionary increase in the size of the brain), tool use, teaching, and language – have one key characteristic in common: the conditions that favored their evolution were created by cultural activities, through selective feedback. As theoretical, anthropological, and genetic studies all attest, a co-evolutionary dynamic – in which socially transmitted skills guided the natural selection that shaped human anatomy and cognition – has underpinned our evolution for at least 2.5 million years.

Our potent capacity for imitation, teaching, and language also encouraged unprecedented levels of cooperation among individuals, creating conditions that not only promoted longstanding cooperative mechanisms such as reciprocity and mutualism, but also generated new mechanisms. In the process, gene-culture co-evolution created a psychology – a motivation to teach, speak, imitate, emulate, and connect – that is entirely different from that of other animals.

Evolutionary analysis has shed light on the rise of the arts, too. Recent studies of the development of dance, for example, explain how humans move in time to music, synchronize their actions with others, and learn long sequences of movements.

Human culture sets us apart from the rest of the animal kingdom. Grasping its scientific basis enriches our understanding of our history – and why we became the species we are.

Kevin Laland is Professor of Behavioral and Evolutionary Biology at the University of St Andrews, and the author of Darwin’s Unfinished Symphony: How Culture Made the Human Mind.

By Kevin Laland

Taking Eurozone Growth Seriously

LONDON – I have been out of the world of international finance and economic forecasting for more than four years, but much of what I learned during my 30 years working full-time in the field still influences how I view the world. One lesson was to measure an entity’s economic and financial performance by how it compares both to the entity’s underlying potential and the market’s valuation of its performance. Applying this approach to the major economies gives rise to some surprising observations – and possibilities.


For starters, contrary to popular belief, world growth hasn’t been especially disappointing so far this decade. From 2010 to 2016, global output rose at an average annual rate of 3.4%, according to the International Monetary Fund. That may be lower than the 2000-2010 average, but it is higher than the growth rate in the 1980s and 1990s – decades that are not typically viewed as economically disappointing.

A breakdown of particular countries’ performance offers further insights. Despite significant political trauma, the United States and the United Kingdom have performed as expected. China, India, and Japan have also grown close to their potential. In a rare occurrence, no major economy has dramatically outperformed its potential.

Three economies have, however, genuinely disappointed: Brazil, Russia, and the eurozone. Could that mean that many observers, including me, overestimated these economies’ potential? Or does it reflect extenuating circumstances? If it is the latter, one must ask whether, contrary to prevailing expectations, new developments or shifts in any or all three of these economies might surprise us on the upside for the rest of the decade.

When it comes to the eurozone, embracing the idea that economic growth may be about to take off might have been enough, at least until recently, to earn one a referral to a mental-health specialist. But, in my old life, I would be encouraging my analysts to spend more time considering just that possibility, because, on the off-chance that this crackpot notion were true, there would be some serious money to be made in today’s generously valued markets.

And, in fact, the prospect of a growth pickup in the eurozone might be only partly insane. Cyclically, the eurozone is currently doing well both by its own standards and relative to others. In the first quarter of this year, the eurozone grew more strongly than the US or the UK, and most of the eurozone’s larger countries have been showing stronger relative growth for some time.

Nonetheless, the eurozone’s long-term structural outlook remains uninspiring. The prospects for the two key drivers of long-term growth – the size and growth of the working-age population and productivity – look grim for the eurozone’s largest countries, even Germany, the one economy that most acknowledge is, from a cyclical perspective, doing just fine.

But – to indulge further that outlandish notion of an impending eurozone growth surge – what if something changes significantly to strengthen those growth drivers? With refugees – many of them young – continuing to pour into Europe from troubled parts of the Middle East and Africa, that may not be an altogether fanciful prospect.

Of course, tapping the potential of refugees requires assimilating them into European societies and economies – a challenge that has many Europeans justifiably worried. But, if that need were met, it would certainly mitigate Europe’s mounting demographic challenge, especially in Germany and Italy.

There is also the possibility that new developments will bring about a more constructive policy approach. Most eurozone members’ fiscal positions have undergone considerable, though often unnoticed, improvement in recent years – so much so that the eurozone-wide fiscal deficit is now less than 3% of GDP, much better than the US or the UK. Moreover, soaring tax receipts in some parts of the eurozone – notably Germany – are feeding almost embarrassingly large fiscal surpluses. Could now be the moment to push for an ambitious Franco-German-led stimulus effort?

If France’s new president, Emmanuel Macron, manages to obtain sufficient backing in the National Assembly in the June election, perhaps he could do something about reducing France’s structural government spending, while pursuing tax cuts and improved labor-market flexibility. Labor-market reform, in particular, could be crucial, not just for France itself, but also to convince German Chancellor Angela Merkel, if she is returned to power this September, to move toward greater fiscal integration, including the creation of a eurozone finance minister, which Macron advocates.

Much of this is probably a long shot, but nowhere near as long as it was just a few months ago. And, given market valuations, it is much more interesting to explore such possibilities than it is to focus on many of the other issues that analysts obsess about.

Carrying the scenario further, one could even dream up an optimistic outlook for the UK’s trade balance, with a highly competitive exchange rate significantly boosting demand for British exports in its major market, the eurozone. That could more than compensate for the challenges that arise from the end of single-market access. With this, the fantasy may have jumped the shark. But you never know.

Jim O’Neill, a former chairman of Goldman Sachs Asset Management and a former UK Treasury Minister, is Honorary Professor of Economics at Manchester University and former Chairman of the British government's Review on Antimicrobial Resistance.

The Right to Agricultural Technology

STANFORD – In the 1960s, when biologist Paul Ehrlich was predicting mass starvation due to rapid population growth, plant breeder Norman Borlaug was developing the new crops and approaches to agriculture that would become mainstays of the Green Revolution. Those advances, along with other innovations in agricultural technology, are credited with preventing more than a billion deaths from starvation and improving the nutrition of the billions more people alive today. Yet some seem eager to roll back these gains.


Beyond saving lives, the Green Revolution saved the environment from massive despoliation. According to a Stanford University study, since 1961, modern agricultural technology has reduced greenhouse-gas emissions significantly, even as it has led to increases in net crop yields. It has also spared the equivalent of three Amazon rainforests – or double the area of the 48 contiguous US states – from having to be cleared of trees and plowed up for farmland. Genetically engineered crops, for their part, have reduced the use of environmentally damaging pesticides by 581 million kilograms (1.28 billion pounds), or 18.5%, cumulatively since 1996.

Surprisingly, many environmentalists are more likely to condemn these developments than they are to embrace them, promoting instead a return to inefficient, low-yield approaches. Included in the so-called agroecology that they advocate is primitive “peasant agriculture,” which, by lowering the yields and resilience of crops, undermines food security and leads to higher rates of starvation and malnutrition.

Promoting that lunacy, the United Nations Human Rights Council recently published a report by Special Rapporteur on the Right to Food Hilal Elver that called for a global agroecology regime, including a new global treaty to regulate and reduce the use of pesticides and genetic engineering, which it labeled human-rights violations.

The UNHRC – a body that includes such stalwart defenders of human rights as China, Cuba, Qatar, Saudi Arabia, and Venezuela – usually satisfies itself by bashing Israel. But in 2000, at the Cuban government’s urging, it created the post of special rapporteur on the right to food. Befitting the UNHRC’s absurd composition, the first person to fill the position, the Swiss sociologist Jean Ziegler, was the co-founder and a recipient of the Muammar al-Qaddafi International Human Rights Prize.

For her part, Elver has, according to UN Watch, cited works that claim the September 11, 2001, terrorist attacks were orchestrated by the United States government to justify its war on Muslims. Elver’s position on food reflects the same paranoid mindset. She opposes “industrial food production” and trade liberalization, and frequently collaborates with Greenpeace and other radical environmentalists.

Much of Elver’s new UNHRC report parrots the delusional musings of organic-industry-funded nongovernmental organizations. It blames agricultural innovations like pesticides for “destabilizing the ecosystem” and claims that they are unnecessary to increase crop yields.

This all might be dismissed as simply more misguided UN activism. But it is just one element of a broader and more consequential effort by global NGOs, together with allies in the European Union, to advance an agroecology model, in which critical farm inputs, including pesticides and genetically engineered crop plants, are prohibited. That agenda is now being promoted through a vast network of UN agencies and programs, as well as international treaties and agreements, such as the Convention on Biological Diversity, the Codex Alimentarius Commission, and the International Agency for Research on Cancer.

The potential damage of this effort is difficult to overstate. The UN’s Food and Agriculture Organization (which hasn’t yet completely succumbed to radical activists) estimates that, without pesticides, farmers would lose up to 80% of their harvests to insects, disease, and weeds. (Consider, for example, the impact of the fall armyworm, which, in the last 18 months alone, has devastated maize crops across much of Sub-Saharan Africa.) Developing countries are particularly vulnerable to radical regulatory regimes, because foreign aid is often contingent on compliance with them, though they can also reshape agriculture in the developed world, not least in the EU.

Millions of smallholder farmers in the developing world need crop protection. When they lack access to herbicides, for example, they must weed their plots by hand. This is literally backbreaking labor: to weed a one-hectare plot, farmers – usually women and children – have to walk ten kilometers (6.2 miles) in a stooped position. Over time, this produces painful and permanent spinal injuries. Indeed, that is why the state of California outlawed hand-weeding by agricultural workers in 2004, though an exception was made for organic farms, precisely because they refuse to use herbicides.

Depriving developing countries of more efficient and sustainable approaches to agriculture relegates them to poverty and denies them food security. That is the real human-rights violation.

Henry I. Miller is Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution. He was the founding director of the Office of Biotechnology at the US Food and Drug Administration.

By Henry I. Miller
