The Perils of Public Media Funding

BUDAPEST – Hungary’s state media corporation, MTVA, operated last year with a budget of roughly $309 million, most of it coming from the government’s coffers. That means that MTVA – which runs television stations, a radio network, and a news agency – had a daily budget of $846,000. For a country of just ten million people, that is the definition of a spendthrift quango.

One might assume that MTVA’s financial strength is an exception in an industry plagued by dwindling revenue and broken business models. But among the world’s state-supported media companies, MTVA’s bloated budget is the norm. In newsrooms from Serbia to South Africa, taxpayer-generated funding is growing. Unfortunately, while this windfall might be putting more programming on the air, it is only deepening the industry’s woes.

Governments have played a major role in domestic media for decades, using regulation of broadcast frequencies and licensing requirements to shape the market. Yet, in recent years, governments have also stepped up their budgetary influence. Today, government budget allocations are among the leading sources of media revenue.

Public support is typically delivered in one of three ways. One method is to levy licensing fees on households, a de facto tax on content. While public media budgets have not grown everywhere – between 2011 and 2015, for example, funding for public media dropped in 40% of the European Broadcasting Union’s 56 member countries – government cash remains influential. In January 2017, the Romanian government approved a $360 million budget for state broadcaster SRTV, a massive amount in a country of just 20 million people. Similar infusions of public money are common elsewhere.

Purchases of advertising are a second method of providing government support. State spending in this category can be significant. During the first half of 2013, for example, Malaysia’s government spent $118.5 million more on ads than the next four advertisers combined.

Finally, states often provide cash contributions to struggling media outlets, especially those offering favorable coverage. In 2014, the government of Montenegro, a country with just 622,000 people, spent $33.6 million on state aid for media outlets. According to the Center for International Media Assistance, the donations included “generous” support to the “reliably pro-government” newspaper Pobjeda.

Financial contributions will always be welcomed by the media, especially by cash-strapped independent outlets. But when funding comes with strings attached, as government money often does, journalistic integrity can suffer. Many public media organizations are little more than government mouthpieces, and authorities regularly meddle in editorial affairs.

Hungary is a case in point. In 2010, not long after the right-wing populist Fidesz party came to power, government officials fired a number of MTVA journalists who had been critical of Fidesz during the election campaign. Since then, the authorities have dramatically reshaped media legislation, a move that some fear will “restrict media pluralism in the long term.”

Similar overreach has been reported in Macedonia, where, in 2014, the European Commission criticized the government for using advertising money to cement state control over news content. There are countless other examples of similar interference in media markets around the world.

Generally, governments tend to finance friendly media outlets, or news organizations that are ready to toe the line. According to a 2014 report on the future of digital journalism, which I co-edited for the Open Society Foundations, governments used financial pressure to manipulate news organizations in more than half of the markets we examined. No doubt that proportion has only increased in the years since.

More broadly, by favoring docile journalism, or by cutting subsidies to critical media voices, governments are distorting media markets to their advantage. In 2012, a cash injection from Serbian authorities into the state-controlled news agency, Tanjug, gave it a massive competitive edge over the independent news service Beta. In Hungary, too, independent journalism is struggling to keep pace with state-funded behemoths. One example is Atlatszo, an intrepid investigative startup funded almost entirely by donations; its annual budget is less than half of MTVA’s daily allocation.

While public money is reshaping the media business, taxpayers are not the biggest beneficiaries in many countries. If even a fraction of the budgetary windfall received by state media was redirected to independent news organizations, journalism would thrive and the public would be better informed. At the moment, however, the biggest winners in the public media marketplace are the governments manipulating a struggling industry.

Marius Dragomir is Director of the Center for Media, Data, and Society at Central European University. He previously managed the research and policy portfolio of the Program on Independent Journalism in London.

By Marius Dragomir

The Making of Lehman Brothers II

WASHINGTON, DC – Last week, with some fanfare, the US Treasury Department released a report on what to do about the Orderly Liquidation Authority. The OLA, created under the Dodd-Frank financial reform legislation of 2010, was intended to prevent a recurrence of what happened in September 2008, when one failing firm, Lehman Brothers, was able to trigger a cascade effect that nearly destroyed the financial system.

The OLA allows the Federal Deposit Insurance Corporation (FDIC), subject to reasonable safeguards, to take over a failing financial firm and wind it down in an orderly manner – very much in line with what happens, with some regularity, when a small bank becomes insolvent. Although the Treasury report reads more like a political document than a well-reasoned technical assessment, it still comes to a sensible conclusion: keep the OLA in place. Unfortunately, the report also masks a broader legislative and regulatory agenda that will add unnecessary risk – and a lot of it – to the financial system.

The OLA has attracted a great deal of bipartisan support in recent years, including on the FDIC’s Systemic Resolution Advisory Committee (the SRAC, of which I am a member). But some highly influential Republicans on the House Financial Services Committee have attacked the OLA relentlessly, arguing that it represents a government bailout-in-waiting. They want to abolish it, and insist that failing financial firms simply go through a court-supervised bankruptcy process.

Lehman Brothers, of course, went bankrupt – and it was the spreading effects of that failure that caused so much damage in September 2008 and subsequently. House Republicans, drawing on work by scholars at the Hoover Institution, have argued that modifying the bankruptcy code – creating a so-called Chapter 14 – would allow such firms to fail without the risk of adverse systemic consequences.

The good news from the Treasury report is that the Trump administration is not prepared to support this position. Treasury recognizes, albeit implicitly, that no bankruptcy court can deal with the complex globally interconnected liabilities of JPMorgan Chase, Citigroup, Goldman Sachs, or other bank holding companies with over $500 billion on their balance sheets. (Lehman Brothers owed more than $600 billion when it failed.)

The Treasury report makes a big deal of demanding that bankruptcy must be the first and preferred option when a big bank is in trouble. But this is exactly what the Dodd-Frank legislation said – and it is what the FDIC and other regulators have worked hard to implement. (All SRAC meetings are public and broadcast online, and the details of implementing the OLA have been reviewed many times by Paul Volcker, Sheila Bair, and other experts.)

The Treasury report does sketch out a new Chapter 14, but this would achieve little. The main problem with the bankruptcy approach is the lack of debtor-in-possession financing for a complex global financial institution with an enormous balance sheet; without access to operational funding from the private sector, the entire process collapses – exactly the Lehman scenario.

The second problem with the bankruptcy approach is that international regulators would find themselves unable to cooperate – for their own legal and procedural reasons – with a US process that affects a major part of their own economies. Senior officials at the Bank of England, for example, have been commendably forthright about this – including at public SRAC meetings.

The Treasury report mentions these issues, but it fails to address them in any meaningful way. The Chapter 14 proposal is a hamburger with almost no beef. It is hard to see how the Senate Judiciary Committee (which has jurisdiction over the bankruptcy code) could be persuaded to waste time on this.

Much more worrying, however, is what lurks unmentioned behind the Treasury report: a serious legislative effort, supported by the Trump administration, to reduce the level of scrutiny applied to banks that are on the verge of becoming systemically important. The proposed bill, the Economic Growth, Regulatory Relief, and Consumer Protection Act, is misleadingly named. Title IV of the bill would raise the threshold for applying “enhanced prudential standards” from $50 billion to $250 billion.

The main lesson from the experience of 2008 and the subsequent deep recession is that it is much better to prevent big banks from failing than to deal with the consequences when they do. I testified to Congress that $50 billion, as defined under Dodd-Frank, is a sensible threshold at which the Federal Reserve should pay more attention to financial institutions. Art Wilmarth of George Washington University Law School has also written persuasively on this point: at $250 billion, a bank’s failure can have major ripple effects.

To be fair, even under the proposed legislation, the Fed would retain significant discretion regarding how to prevent big banks (and nonbanks with bank-type structures) from creating structures – organizational and financial – that could bring down other parts of the system, including across borders.

But fairness cuts both ways: there is no indication that Donald Trump’s appointees to the Board of Governors of the Fed will be careful or limit what big banks want to do. As in 2008, we risk learning the hard way why adequate regulation of systemically important financial institutions is essential.

Simon Johnson is a professor at MIT’s Sloan School of Management and the co-author of White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.

By Simon Johnson

Improving the Sustainability of Development Finance

WASHINGTON, DC – To achieve the United Nations Sustainable Development Goals by 2030, trillions of dollars in state spending, investment, and aid will be needed annually. Although estimates vary widely, one UN report from 2014 suggests that total investment of as much as $7 trillion will be required for infrastructure improvements alone. But whatever the final tally, these sums are far beyond the means of governments, and leaders working to implement the 17 SDGs will expect their domestic banking sectors to provide much of the funding.

This is a reasonable expectation. In emerging markets, banks hold assets estimated at more than $50 trillion, meaning that they could dramatically affect how sustainable development is financed.

At the moment, however, many lenders don’t have the capacity to evaluate properly the financial, environmental, social, and governance-related risks associated with these types of projects. If the international community is to meet its SDG targets, sustainable finance practices will need to be strengthened.

Fortunately, collaboration is already producing results. In May 2012, banking regulators from ten countries asked my organization, the International Finance Corporation (IFC), to help them establish the Sustainable Banking Network (SBN) to fund initiatives that are “greener, environmentally friendly, and socially inclusive.” Since its formation, the network has grown to include 34 countries, accounting for $42.6 trillion in bank assets – equivalent to more than 85% of emerging markets’ total bank holdings.

Today, the SBN connects regulators, bankers, and agencies in emerging economies to improve finance practices for sustainability projects. These efforts, though entirely voluntary, are already having a measurable impact. For example, in 2016, the SBN became a key partner to the G20’s Green Finance Study Group, which helped advance the bloc’s global “green finance” agenda, and underscored the importance of environmental risk management within financial systems.

Moreover, many of the network’s biggest economies have developed policies for sustainability financing that are in line with international best practices. Together, these efforts are encouraging regulators in member and non-member countries to deepen their support for socially conscious lending.

To maintain this momentum, the SBN needs tools to measure progress accurately, which is why the IFC has just released its first annual SBN Global Progress Report. The report’s measurement framework, designed to track the adoption and impact of policies by member organizations and states, was developed and agreed upon by all SBN participants, with support from the IFC. It represents a remarkable level of global consensus and breaks new ground for financial-sector analysis.

In the report, eight SBN countries (Bangladesh, Brazil, China, Colombia, Indonesia, Mongolia, Nigeria, and Vietnam) received high marks for innovation. Reforms in these countries included the introduction of large-scale and transparent monitoring programs, and new regulations that require banks to include environmental and social risk assessments in their decision-making processes. These countries also introduced market incentives to entice banks to finance more environmental projects.

One motivation for compiling an annual report is to document insights and lessons learned, and thereby help banking sectors engage in more productive reforms. In this regard, the IFC views this inaugural report largely as a blueprint to accelerate and streamline change.

Much work remains to be done to improve practices for financing sustainability in the world’s emerging economies. For example, the SBN is now focused on helping developing countries capitalize on climate-related investment opportunities, which are estimated to be worth some $23 trillion. The network is also working to accelerate growth in the green bond market, which would help push other parts of the global financial system to participate in planning and initiatives.

Still, SBN members have much to celebrate. In just five years, the organization has grown from an ambitious idea into a network of committed regulators, bankers, policymakers, and international development organizations. As I have noted before, with the support of the SBN, countries committed to building better finance frameworks are putting their ideas to work.

Ending poverty, protecting the planet, and building a more equitable future for humanity – the overarching goals of the SDGs – will be costly. But with the right financial frameworks in place, and with new ways to measure progress, the investments we make today don’t need to break the bank.

Ethiopis Tafara is Vice President for Corporate Risk and Sustainability and General Counsel at the International Finance Corporation.

By Ethiopis Tafara

The Cancer Threat to Africa’s Future

CHICAGO – One of the most pressing public-health challenges in Africa today is also one of the least reported: cancer, a leading cause of death worldwide. Every year, some 650,000 Africans are diagnosed with cancer, and more than a half-million die from the disease. Within the next five years, there could be more than one million cancer deaths annually in Africa, a surge in mortality that would make cancer one of the continent’s top killers.

Throughout Sub-Saharan Africa, tremendous progress has been made in combating deadly infectious diseases. In recent decades, international and local cooperation has reduced Africa’s malaria deaths by 60%, pushed polio to the brink of eradication, and extended the lives of millions of Africans infected with HIV/AIDS.

Unfortunately, similar gains have not been made in the fight against non-communicable diseases (NCDs), including cancer. Today, cancer kills more people in developing countries than AIDS, malaria, and tuberculosis combined. But, with Africa receiving only 5% of global funding for cancer prevention and control, the disease is outpacing efforts to contain it. Just as the world united to help Africa beat infectious disease outbreaks, a similar collaborative approach is needed to halt the cancer crisis.

Surviving cancer requires many things, but timely access to specialists, laboratories, and second opinions is among the most basic. Yet, in much of Africa, a lack of affordable medications and a dearth of trained doctors and nurses mean that patients rarely receive the care they need. On average, African countries have fewer than one trained pathologist for every million people, meaning that most diagnoses come too late for treatment. According to University of Chicago oncologist Olufunmilayo Olopade, a diagnosis of cancer in Africa is “nearly always fatal.”

Building health-care systems that are capable of managing infectious diseases, while also providing quality cancer care, requires a significant investment in time, money, and expertise. Fortunately, Africa already has a head start. Past initiatives – like the Global Fund to Fight AIDS, Tuberculosis, and Malaria, the US President’s Emergency Plan for AIDS Relief, and the World Bank’s East Africa Public Health Laboratory Networking Project – have greatly expanded the continent’s medical infrastructure. National efforts are also strengthening pharmaceutical supply chains, improving medical training, and increasing the quality of diagnostic networks.

Still, Africans cannot face down this threat alone. That is why the American Society for Clinical Pathology, where I work, is cooperating with other global health-care innovators to attack the region’s growing cancer crisis. We have teamed up with the American Cancer Society (ACS) and the pharmaceutical company Novartis to support cancer treatment and testing efforts in four countries: Ethiopia, Rwanda, Tanzania, and Uganda. Together, we have brought immunohistochemistry, a key diagnostic tool, to seven regional laboratories, an effort that we hope will lead to more timely cancer diagnoses and greatly improve the quality of care.

To complement these technical efforts, the ACS is also training African health-care professionals to carry out biopsies and deliver chemotherapy. That initiative, funded by Novartis, is viewed as a pilot program that could expand to other countries in the region.

Finally, our organizations are advocating for enhanced cancer-treatment guidelines in national health-care planning efforts, protocols that we believe are essential to improving health outcomes. These initiatives complement other undertakings, such as a joint ACS-Clinton Health Access Initiative program to broaden access to cancer medications.

When the world took notice that infectious diseases like HIV/AIDS, polio, and malaria were ravaging Africa, action plans were drawn up and solutions were delivered. Today, a similar global effort is needed to ensure that every African with a cancer diagnosis can get the treatment they need. Now, as then, success depends on coordination among African governments, health-service providers, drug makers, and non-governmental organizations.

There is no place on Earth that is immune from the dread of a cancer diagnosis; wherever the news is delivered, it is often devastating to recipients and their families. But geography should never be the deciding factor in patients’ fight to survive the disease. Cancer has been Africa’s silent killer for far too long, and the global health community must no longer remain quiet in the face of this crisis.

Danny A. Milner, Jr. is Chief Medical Officer of the American Society for Clinical Pathology.

By Danny A. Milner, Jr.

The Fed Should Be Careful What It Wishes For

CAMBRIDGE – Empirical relationships in economics are sufficiently fragile that there is even a “law” about their failure. As British economist Charles Goodhart explained in the 1980s, “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” Central banks in advanced economies have recently been providing a few more case studies confirming Goodhart’s Law, as they struggle to fulfill their promises to raise inflation to the stable plateau of their numerical targets.

Major central banks’ fixation on inflation betrays a guilty conscience for serially falling short of their targets. It also raises the risk that in fighting the last war, they will be poorly prepared for the next – the battle against too-high inflation.

Consider the United States Federal Reserve, which at the beginning of 2012 quantified its Congressional mandate of “promoting maximum employment, stable prices, and moderate long-term interest rates.” These goals would be best achieved by keeping inflation, measured by the Fed’s preferred personal consumption expenditures price index, at 2% in the long run. Since then, the four-quarter growth in that index has been below this target in every quarter but one, as Fed forecasts of inflation consistently fell short of the mark. Goodhart’s Law still has teeth.

The Fed’s solution to this failure, like that of other central banks, has been to talk more about the subject. The minutes of the January meeting of the Federal Open Market Committee (FOMC) reveal an extensive discussion among policymakers about what determines US inflation. More than a thousand words (an enormous footprint in a normally succinct document) were required to summarize three separate staff briefings on the subject. Readers learned of alternative approaches to forecasting inflation, of the prevailing low level of inflation expectations, and of the diminished pressure that resource slack places on costs (that is, a less reliable Phillips curve). Fed officials wrung their hands about missing the target and reaffirmed their commitment to a symmetric goal of 2% inflation in the longer run.

The summary may have inadvertently revealed part of what the Fed has been getting wrong. The description of its efforts to explain inflation, with its blinkered focus on the domestic economy, is a throwback to the 1960s. Nowhere among those thousand words were the phrases “trading partners,” “the foreign exchange value of the dollar,” “commodity prices,” or “global supply chains” to be found. But the rest of the world economy exists, is bigger than it once was, and acts less like the US than it once did. All of this implies a discipline on costs in a sluggish economy and a potential accelerant in an overheated one.

As for the first observation, total US exports and imports of goods and services relative to nominal GDP (the standard international measure of openness to trade) currently stands at close to 30%. This is more than three times its average in the 40 years prior to the break-up of the managed fixed-exchange rate system, when the Phillips curve yielded more robust guidance. The rest of the world exists.

Second, while the US economy remains the largest in the world by most measures, comprising one-quarter of global GDP, this share is ten percentage points lower than in the 1960s, when US factories produced the most steel, autos, and aircraft in the world. Low transport and communications costs and freer trade have knitted markets more closely together, implying that this relative decline in the US share of the global economy loosens the link between domestic capacity constraints and international pricing. The rest of the world is bigger.

Third, early in the post-Bretton-Woods era, US trade was predominantly with the “Old World” of Europe, Canada, and Japan. Based on bilateral trade shares, transactions with Asian and Latin American economies caught up by 2006, and their relative trade significance for the US has more than doubled since 1972. While over-generalizations are risky, these other important trading partners have relatively larger pools of lower-wage workers to draw upon and discipline costs along the global value chain. The rest of the world is not entirely like the US.

These observations may explain why costs are sticky on the way up, but they do not imply that costs are stuck forever. With the US unemployment rate close to 4% and headed lower this year, inflation will move up, though less than the long-term record predicts. Fortunately, Fed officials are aware of the role of resource slack in driving inflation, with the January minutes noting that “estimates of the strength of those effects had diminished noticeably in recent years.”

The discussion, however, would have been more reassuring if it had included the rest of the world, in part because doing so will continue to pose a critical challenge for policymakers. A more trade-reliant economy is more sensitive to fluctuations in the foreign exchange value of its currency.

True, much of global trade is invoiced in dollars, but the Chinese renminbi is muscling into that turf, and producers ultimately care about how their revenues translate into domestic purchasing power. The upside risk to US inflation stems from that translation – the value of the dollar.

The legislative one-two punch of tax reform and spending increases puts the US federal debt on an upward path. If fiscal laxity tarnishes the safe-haven status of Treasury securities, and the monetary authority is perceived to be slow in removing policy accommodation, Fed Chair Jerome Powell and his colleagues may get more of the inflation they are hoping for.

Carmen Reinhart is Professor of the International Financial System at Harvard Kennedy School. Vincent Reinhart is Chief Economist and Macro Strategist at BNY Mellon Asset Management North America.

By Carmen M. Reinhart and Vincent Reinhart

Africa’s Year of Opportunity

GENEVA – We are still near the start of 2018, and already it feels like tension and disorder will be the year’s defining characteristics. From anti-immigration policies in the United States to flaring geopolitical hotspots in the Middle East and East Asia, disruption, upheaval, and uncertainty seem to be the order of the day.

But at least one metric offers reason for cautious optimism: economic growth. The International Monetary Fund estimates that global growth will reach 3.7% this year, up from 3.6% in 2017. As Christine Lagarde, the Fund’s managing director, put it in a speech in December, “The sun is shining through the clouds and helping most economies generate the strongest growth since the financial crisis.”

It was fitting that Lagarde made that observation in Addis Ababa, because it is in Africa where the rays of prosperity are shining brightest. In fact, I predict that 2018 will be a breakout year for many – though not all – African economies, owing to gains in eight key areas.

For starters, Africa is poised for a modest, if fragmented, growth recovery. Following three years of weak economic performance, overall growth is expected to accelerate to 3.5% this year, from 2.9% in 2017. This year’s projected gains will come amid improved global conditions, increased oil output, and the easing of drought conditions in the east and south.

To be sure, growth will be uneven. While nearly a third of African economies will grow by around 5%, slowdowns are likely in at least a dozen others. Sharp increases in public debt, which has reached 50% of GDP in nearly half of Sub-Saharan countries, are particularly worrying. But, overall, Africa is positioned for a positive year.

Second, Africa’s political landscape is liberalizing. Some of Africa’s longest-serving presidents – including Zimbabwe’s Robert Mugabe, Angola’s José Eduardo dos Santos, and the Gambia’s Yahya Jammeh – exited in 2017. In South Africa, Jacob Zuma’s resignation allowed Cyril Ramaphosa to become president. In January, Liberians witnessed their country’s first peaceful transfer of power since 1944, when former soccer star George Weah was sworn into office.

All of these gains will be tested, however, as voters in 18 countries go to the polls this year. Adding to Africa’s story of divergence will be continued political fragility in a number of states, including the Central African Republic, Burundi, Nigeria, South Sudan, and Somalia.

A third source of optimism is Africa’s agricultural sector, where the potential of smallholder farmers, the majority of whom are women, is finally being realized. African agricultural output is forecast to reach $1 trillion by 2030. This maturation could not have come at a more opportune time; roughly two-thirds of Africans depend on agriculture to make ends meet. Large tracts of uncultivated land, a youthful workforce, and the emergence of tech-savvy “agropreneurs” – agricultural entrepreneurs – are lifting production and transforming entire economies.

Fourth, Africans are benefiting from technological disruption. With more than 995 million mobile subscribers, Africa’s increasing connectivity is being used to power innovation. Key sectors like farming, health, education, banking, and insurance are already being transformed, greatly enhancing the region’s business landscape.

Fifth, African leaders are getting serious about curbing illicit financial outflows from corrupt practices that rob African countries of some $50 billion annually, much of it in the oil and gas sector. While US lawmakers are pushing to repeal portions of the 2010 Dodd-Frank financial reform legislation – which contains a provision requiring oil, gas, and mining companies to disclose payments they make to governments – the broader trend is toward greater transparency and accountability.

For example, the Panama Papers and the Paradise Papers pulled back the curtain on the murky system of tax havens and shell companies that shelter billions of dollars from some of the world’s poorest countries, including many in Africa. And with the G20 and the OECD working to stop tax avoidance, Africa may soon benefit from global efforts to end shady accounting.

Sixth, Africa’s energy sector is set to thrive. While 621 million Africans still lack reliable access to electricity, innovations like renewables, mini-grids, and smart metering are bringing power to more people than ever before. In South Africa, renewable energy has taken off; the price of wind power is now competitive with coal. Ethiopia, Kenya, Morocco, and Rwanda are also attracting large investments in renewable energy.

A seventh area showing signs of progress is education. To be sure, Africa’s educational offerings remain dismal; more than 30 million children in Sub-Saharan Africa are not in school, and those who do attend are not learning as much as they could. But many African leaders and publics have recognized these deficiencies; in some countries, such as Ghana, education has even become a deciding issue for voters.

As the Education Commission highlights, some countries are boosting investments in education. This represents an opportunity to align learning outcomes with future employment needs. But with more than a billion young people expected to be living in Africa by 2050, greater investment in education is urgently needed.

Finally, greater attention is being paid to developing a pan-African identity, and African fashions, films, and foods are expanding to new markets. As these cultural connections grow, Africa’s soft power will continue to rise and extend far beyond the continent.

In many corners of the world, 2018 is shaping up to be yet another disappointing year, as inequality and poverty continue to fuel anger and populism. Africa will not be entirely immune from such developments. Nonetheless, the continent’s inhabitants have at least eight good reasons – far more than most people elsewhere – to be optimistic.

Caroline Kende-Robb, former chief adviser to the International Commission on Financing Global Education Opportunity, is a senior fellow at the African Center for Economic Transformation.

By Caroline Kende-Robb

When Fighting Fake News Aids Censorship

WASHINGTON, DC – Many media analysts have rightly identified the dangers posed by “fake news,” but often overlook what the phenomenon means for journalists themselves. Not only has the term become a shorthand way to malign an entire industry; autocrats are invoking it as an excuse to jail reporters and justify censorship, often on trumped-up charges of supporting terrorism.

Around the world, the number of journalists jailed on charges of publishing false or fictitious news has reached an all-time high of at least 21. As non-democratic leaders increasingly use the “fake news” backlash to clamp down on independent media, that number is likely to climb.

The United States, once a world leader in defending free speech, has retreated from this role. President Donald Trump’s Twitter tirades about “fake news” have given autocratic regimes an example by which to justify their own media crackdowns. In December, China’s state-run People’s Daily newspaper posted tweets and a Facebook post welcoming Trump’s fake news mantra, noting that it “speaks to a larger truth about Western media.” This followed the Egyptian government’s praise for the Trump administration in February 2017, when the country’s foreign ministry criticized Western journalists for their coverage of global terrorism.

And in January 2017, Turkish President Recep Tayyip Erdoğan praised Trump for berating a CNN reporter during a live news conference. Erdoğan, who criticized the network for its coverage of pro-democracy protests in Turkey in 2013, said that Trump had put the journalist “in his place.” Trump returned the compliment when he met Erdoğan a few months later. Praising his counterpart for being an ally in the fight against terrorism, Trump made no mention of Erdoğan’s own dismal record on press freedom.

It is no accident that these three countries have been quickest to embrace Trump’s “fake news” trope. China, Egypt, and Turkey jailed more than half of the world’s journalists in 2017, continuing a trend from the previous year. The international community’s silence in the face of these governments’ attacks on independent media seems to have been interpreted as consent.

In Turkey, the world’s top jailer of journalists two years in a row, the erosion of free speech has been particularly swift. Since a failed coup attempt in 2016, Turkey’s courts have processed some 46,000 cases involving people accused of insulting the president, the nation, or its institutions. Each of the 73 journalists currently behind bars is being investigated for, or charged with, anti-state crimes. The most common charge against reporters is belonging to, aiding, or propagandizing for an alleged terrorist organization.

Vaguely worded laws that conflate reporting about terrorism with supporting it provide cover for regimes intent on preventing unfavorable news coverage. For example, attempting to write about the Kurdistan Workers’ Party (PKK) in Turkey, the Muslim Brotherhood in Egypt, or Uighurs in China can quickly land reporters in jail for harboring terrorist sympathies. Nearly three-quarters of the 262 journalists in prison around the world are being held on anti-state charges, according to the Committee to Protect Journalists’ most recent survey.

Even when journalists aren’t arrested, autocrats are increasingly invoking the claim of “fake news” to discredit legitimate reporting. And here, ironically, efforts by some Western governments to sanitize social media of fake or violent material have played into the autocrats’ hands. While the goals of these cleansing efforts – to prevent the type of electoral interference that Russia has perfected, for example – are laudable, an unintended consequence has been censorship of honest journalists reporting on real stories in some of the world’s most dangerous places.

Consider what happened last year to video coverage of the civil war in Syria. In an effort to rein in extremist content, YouTube removed hundreds of videos related to the conflict, including many posted by Shaam News Network, Qasioun News Agency, and Idlib Media Center – all independent news outlets documenting the disaster.

Similarly, Facebook closed accounts of individuals and organizations that were using the platform to document violence against Muslim Rohingya in Myanmar, a crisis that the United Nations has called a “textbook example of ethnic cleansing.” Facebook said it acted in response to violations of the platform’s “community standards.”

And in Egypt and Syria, Twitter has blocked citizen journalists from reporting on human-rights abuses, according to journalists whose accounts have been closed. Twitter’s censors have even hit the heart of Europe; in January, a German satire magazine was blocked from the platform after the Bundestag enacted legislation imposing fines of up to €50 million ($61 million) on social media firms that fail to remove illegal content in a timely manner. Other European countries are considering similar measures to compel Internet companies to battle misinformation and extremism.

Laws meant to curb hate speech, violence, or “fake news” may be well intentioned, but their implementation has been sloppy, with few mechanisms to ensure accountability, transparency, or reversibility. Governments are outsourcing censorship to the private sector, where maximizing shareholder value, not upholding journalistic freedom, drives decision-making.

Leaders of the world’s democracies must resist the illiberal assault on independent news organizations, and that means rethinking loosely crafted content laws that are vulnerable to abuse. A free, vibrant media is vital to the functioning of a healthy society, and misinformation can undermine it. But official remedies that end up silencing those reporting the news are worse than the disease.

Courtney C. Radsch is Advocacy Director at the Committee to Protect Journalists and author of Cyberactivism and Citizen Journalism in Egypt: Digital Dissidence and Political Change.

By Courtney C. Radsch

Working Toward the Next Economic Paradigm

LAGUNA BEACH – For decades, the Western world put its faith in a well-defined and broadly accepted economic paradigm with applications at both the national and global levels. But, against a background of declining confidence in the ability of “experts” to explain, let alone predict, economic developments, that faith has deteriorated. With a new paradigm having yet to emerge, the world economy faces a heightened risk of fragmentation, with already-vulnerable countries being left even further behind.

The paradigm that, until recently, dominated much of economic thinking and policymaking is embodied in the so-called Washington Consensus – a set of ten broadly applicable policy prescriptions for individual countries – and, at the international level, in the pursuit of economic and financial globalization. The idea, simply put, was that countries would benefit from embracing market-based pricing and deregulation at home, while fostering free trade and relatively open cross-border capital flows.

Deepening the economic and financial linkages among countries was viewed as the best way to deliver durable gains, enhance efficiency and productivity, and mitigate the threat of financial instability. This approach was also deemed to yield collateral benefits, from enhancing internal social mobility to reducing the risk of violent conflict among countries. And it promised to support the positive convergence of developing and developed countries, thereby reducing both absolute and relative poverty and weakening economic incentives for illegal cross-border migration.

Supported by the traditional economic theories taught at most universities, this approach was energized after the fall of the Berlin Wall and the disintegration of the Soviet Union, when the former communist countries, together with China, joined the Western-dominated world order, boosting total production and consumption.

But, at a certain point, confidence in the Washington Consensus turned into something like blind faith. The resulting complacency, among policymakers and economists alike, contributed to the world economy becoming more vulnerable to a series of small shocks that, in 2008, culminated in a crisis that pushed the world to the brink of a devastating multi-year economic depression.

Suddenly, the advantages of globalization paled in comparison to the risks. It didn’t help that the crisis originated in the United States, which had hitherto been the main advocate for the Washington Consensus and unbridled globalization, including through its role in multilateral organizations like the G7, the International Monetary Fund, the World Bank, and the World Trade Organization.

Analytical failures were partly to blame for this. The economics profession did not go far enough to develop a comprehensive understanding of the connection between a rapidly growing and increasingly deregulated financial sector and the real economy. The impact of major technological innovations was poorly understood. And insights from behavioral science were inadequately regarded – if not shunned altogether – in favor of analytically elegant microeconomic underpinnings that were model-friendly, but unrealistic and overly simplistic.

Meanwhile, policymakers overlooked the economic, political, and social consequences of rising inequality – not just of income and wealth, but also of opportunity – thereby allowing the middle class gradually to be hollowed out, a trend that was exacerbated by both technological and non-technological developments. They also underestimated the risks of financial contagion and surges in migration flows. As a result, behavioral norms and rules lagged far behind realities on the ground, and political polarization intensified.

At the international level, the established post-war order was increasingly challenged by a rising China, whose sheer size, in terms of both geography and population, enabled it to achieve systemic importance, despite a relatively low per capita income and a political system that seemed at odds with a liberal market-based economy. The major global economic institutions struggled to adapt quickly enough.

In fact, notwithstanding a few tweaks, the governance structure of the IMF and the World Bank remained more reflective of past realities, with Europe, in particular, maintaining disproportionate influence. Even the G20, which emerged when the G7 proved too narrow and exclusive to support effective economic-policy coordination, failed to change the game. A lack of operational continuity, together with disagreements among countries, quickly undermined the G20’s effectiveness, especially after the threat of a global depression had passed.

Given all this, it should come as no surprise that enthusiasm for economic and financial globalization has faltered. Indeed, both advanced and emerging economies have long balked at the notion of strengthening regional and international institutions by delegating more national authority to them.

Now, some countries are adopting a more inward-looking approach and/or shifting their focus to bilateral and, in Asia, to regional linkages. Such shifts give larger economies like the US and China a distinct advantage, while some economies and regions – particularly in Africa – face increasing marginalization.

Building consensus around a revised unifying paradigm will not be easy. It will be an analytically challenging, politically demanding, and time-consuming process that will probably entail the consideration and rejection of a few bad ideas before good ones take root. It will also be a more multidisciplinary and intellectually inclusive process – more bottom-up than top-down – than the one that preceded it. It will need to adapt intelligently to innovations in artificial intelligence, Big Data, and mobility.

In the meantime, both economists and policymakers have an important role to play in improving the existing situation. At the international level, the concept of “fair trade” – not to mention social displacement – should be a bigger part of policy discussions. And major economies – especially in Europe – need to work actively to reform a tired system of multilateral governance that increasingly lacks credibility.

Moreover, feedback loops between the real economy and finance need to be examined in greater depth. Distributional issues, including pressures on the middle class and the predicament of population segments vulnerable to slipping through stretched social safety nets, need to be better understood and addressed. This demands deeper comprehension of technology-driven structural changes, with Big Tech recognizing and adjusting to its growing systemic importance in step with government.

Complacency was a central reason for the last economic paradigm’s loss of credibility. Let us not allow it to do any more damage than it already has.

Mohamed A. El-Erian, Chief Economic Adviser at Allianz, was Chairman of US President Barack Obama’s Global Development Council and is the author of The Only Game in Town: Central Banks, Instability, and Avoiding the Next Collapse.

By Mohamed A. El-Erian

The Future of Fish Farming

NEW HAVEN – Demand for seafood is skyrocketing, and will continue to rise throughout this century. The only way to meet it will be through aquaculture. Yet, while next-generation aquaculture will be far more ecologically responsible than its predecessors, it will also use far more energy. If that additional energy is not clean and cheap, new aquaculture technologies cannot serve our broader environmental and climate goals.

The rise in demand for seafood is a good thing, to a point. Fish are more efficient to produce than pork or beef, because they require fewer inputs to yield the same amount of protein. So, as global meat consumption continues to rise, it makes sense for a sizeable share of it to come from the sea.

On the other hand, rising demand for seafood poses significant ecological risks. According to the United Nations Food and Agriculture Organization, nearly one-third of global fish stocks are already harvested at an unsustainable level, meaning wild populations cannot regenerate quickly enough to make up for the rate at which they’re fished. And, because wild populations lack the carrying capacity to meet the increase in demand, more fish have to be farmed.

For that reason, aquaculture is already the leading mode of fish production for human consumption. But, like fishing, it also poses ecological risks. Because aquaculture systems often operate on coastlines or near inland rivers or ponds, they tend to disrupt natural habitats, contribute to nitrogen pollution, and add undue pressure on feeder fish stocks. For example, fish farming is one of the main drivers of mangrove deforestation in Southeast Asia.

But even with these conservation challenges in mind, aquaculture remains the only option for meeting future demand. The path that the industry takes today will thus have far-reaching environmental implications for years to come.

In the near term, fish farms can in fact be made cleaner. A few responsible producers have introduced new techniques and technologies to combat pollution, from monitoring feed uptake with video cameras to integrating filter feeders like shellfish and seaweed into their systems. Others are attempting to reduce their reliance on forage fish by replacing fish meal with plant proteins, or by adopting new biotechnologies to produce fish feed more sustainably. But so long as these aquaculture systems are embedded in coastal or freshwater environments, they will continue to contribute to habitat loss and ecological disruption.

For the long term, then, experts generally offer two ways forward: land-based recirculating systems and offshore aquaculture. Both could potentially mitigate the negative externalities of aquaculture and make fish production sustainable well into the future.

In the first approach, fish farms would be moved from the ocean to recirculating aquaculture systems (RAS), in which fish are housed in indoor tanks that are regulated by pumps, heaters, aerators, and filters. One of the biggest advantages of this approach is its adaptability: an RAS can be located almost anywhere, from urban lots to retired hog barns.

Better yet, these systems are designed to recycle nearly all of the water they use, which eliminates the problem of coastal pollution. Accordingly, the advocacy organization Seafood Watch currently gives all RAS-farmed fish a “Best Choice” tag.

The other option is to move aquaculture in the opposite direction: out to sea. Offshore systems harness the forces of the ocean, by using deeper waters and stronger currents to funnel excess nutrients and waste away from sensitive coastal ecosystems. As a result, they have no need for mechanical pumps or filters (although motorized pens may eventually take to the seas).

In the United States, the aquaculture industry has started to move toward RAS production. For example, a Norwegian firm has just announced plans to build a huge new land-based salmon farm in Maine. And examples of offshore projects can be found off the coasts of Norway, California, and Hawaii. But both systems are still a niche, rather than the norm.

One of the primary problems with cleaner approaches to aquaculture is that they are energy-intensive. With land-based systems, natural processes such as filtration and water exchange and dispersal must be carried out mechanically, which requires a lot of electricity. That isn’t necessarily a problem in places with low-carbon electricity grids, like France, but it would be in a place like Nova Scotia, which relies heavily on coal.

Likewise, offshore operations require diesel fuel for transport and maintenance, and that will remain the case until electric boats or low-carbon liquid fuels become more viable. Although open-ocean aquaculture should still require less diesel fuel than commercial fishing – and could run on renewable energy sources like solar, wind, or waves – offshore aquaculture is more energy-intensive than conventional fish farms. And even if newer aquaculture systems can overcome their current operational and regulatory challenges, their biggest hurdle will be the unavailability of cheap, low-carbon energy. As long as fossil fuels account for most global energy use, the environmental promise of next-generation aquaculture will go unrealized.

This is true for a wide range of industries. Without cleaner and cheaper energy across the board, we will not be able to meet our broader environmental and climate goals. Our current energy technologies – nuclear and renewables included – still have a way to go to meet energy demand. In the meantime, the aquaculture industry will need to make further investments and develop new innovations to put itself on a path to sustainability – be it on land or in the sea.

Linus Blomqvist is Director of the Conservation Program and the Food and Farming Program at the Breakthrough Institute.

By Linus Blomqvist

Fighting Cybercrime with Neuro-Diversity

LONDON – Cybersecurity is one of the defining challenges of the digital age. Everyone, from households to businesses to governments, has a stake in protecting our era’s most valuable commodity: data. The question is how that can be achieved.

The scale of the challenge should not be underestimated. With attackers becoming increasingly nimble and innovative, armed with an increasingly diverse array of weapons, cyber-attacks are happening at a faster pace and with greater sophistication than ever before. The security team of my company, BT, a network operator and Internet service provider, detects 100,000 malware samples every day – that’s more than one per second.

Creative thinking among cyber attackers demands creative thinking among those of us fending them off. Here, the first step is ensuring that there are enough talented and trained individuals engaged in the fight. After all, according to a recent survey by the International Data Corporation, 97% of organizations have concerns about their security skills. By 2022, another study estimates, there will be 1.8 million vacant cybersecurity jobs.

Amid this critical shortage of security specialists, it is imperative that we develop new approaches to attracting, educating, and retaining talented individuals, in order to create a deep pool of highly skilled cyber experts prepared to beat cybercriminals at their own game.

The key to success is diversity of talents and perspectives. This includes neurological diversity, such as that represented by those with autism, Asperger syndrome, and attention-deficit disorder. People with Asperger syndrome or autism, for example, tend to think more literally and systematically, making them particularly adept at mathematics and pattern recognition – critical skills for cybersecurity.

The problem is that neurologically exceptional people tend to be disadvantaged by the traditional interview process, which relies heavily on good verbal communication skills. As a result, such people often struggle to find employment, and even when they do find a job, their work environment may not be able to support them adequately.

The United Kingdom’s National Autistic Society reports that just 16% of autistic adults in Britain have full-time paid employment, and only 32% have any kind of paid work, compared to 47% for disabled people and 80% for non-disabled people. This highlights the scale of the challenge faced by such candidates, as well as the vast untapped resource that they represent.

Recognizing the potential of neurological diversity to contribute to strengthening cybersecurity, we at BT have reframed how we interact with candidates during interviews. We encourage them to talk about their interests, rather than expecting them simply to answer typical questions about their employment goals or to list their strengths and weaknesses. This approach has already been applied with great success by the likes of Microsoft, Amazon, and SAP in the areas of coding and software development, and by the UK’s GCHQ intelligence and security organization, one of the country’s biggest employers of autistic people.

Of course, an updated approach to interviewing candidates will not work for everyone. But it is a start. More broadly, we must do more not just to expand the opportunities available to neurologically exceptional candidates, but also to ensure that these opportunities are well publicized.

Delivering this change will require leadership by – and cooperation between – government and business. I am pleased to say that, on this front, BT is already taking a leading role, including by working with the British government on its Cyber Discovery program, a special initiative to attract schoolchildren into the cyber industry, and through our own apprenticeship programs.

In the digital age, neuro-diversity should be viewed as a competitive advantage, not a hindrance. We now have a chance to invest in talented people who are often left behind when it comes to work, benefiting them, business, and society as a whole. By recognizing and developing the skills of this widely overlooked talent pool, we can address a critical skills shortage in our economies and enhance our ability to fight cybercrime. Such opportunities are not to be missed.

Gavin Patterson is CEO at BT Group.

By Gavin Patterson
