When Will Tech Disrupt Higher Education?

CAMBRIDGE – In the early 1990s, at the dawn of the Internet era, an explosion in academic productivity seemed to be around the corner. But the corner never appeared. Instead, teaching techniques at colleges and universities, which pride themselves on spewing out creative ideas that disrupt the rest of society, have continued to evolve at a glacial pace.

Sure, PowerPoint presentations have displaced chalkboards, enrollments in “massive open online courses” often exceed 100,000 (though the number of engaged students tends to be much smaller), and “flipped classrooms” replace homework with watching taped lectures, while class time is spent discussing homework exercises. But, given education’s centrality to raising productivity, shouldn’t efforts to reinvigorate today’s sclerotic Western economies focus on how to reinvent higher education?

One can understand why change is slow to take root at the primary and secondary school level, where the social and political obstacles are massive. But colleges and universities have far more capacity to experiment; indeed, in many ways, that is their raison d’être.

For example, what sense does it make for each college in the United States to offer its own highly idiosyncratic lectures on core topics like freshman calculus, economics, and US history, often with classes of 500 students or more? Sometimes these giant classes are great, but anyone who has gone to college can tell you that is not the norm.

At least for large-scale introductory courses, why not let students everywhere watch highly produced recordings by the world’s best professors and lecturers, much as we do with music, sports, and entertainment? This does not mean a one-size-fits-all scenario: there could be a competitive market, as there already is for textbooks, with perhaps a dozen people dominating much of the market.

And videos could be used in modules, so a school could choose to use, say, one package to teach the first part of a course, and a completely different package to teach the second part. Professors could still mix in live lectures on their favorite topics, but as a treat, not as a boring routine.

A shift to recorded lectures is only one example. The potential for developing specialized software and apps to advance higher education is endless. There is already some experimentation with using software to help understand individual students’ challenges and deficiencies in ways that guide teachers on how to give the most constructive feedback. But so far, such initiatives are very limited.

Perhaps change in tertiary education is so glacial because the learning is deeply interpersonal, making human teachers essential. But wouldn’t it make more sense for the bulk of faculty teaching time to be devoted to helping students engage in active learning through discussion and exercises, rather than to sometimes hundredth-best lecture performances?

Yes, outside of traditional brick-and-mortar universities, there has been some remarkable innovation. The Khan Academy has produced a treasure trove of lectures on a variety of topics, and it is particularly strong in teaching basic mathematics. Although the main target audience is advanced high school students, there is a lot of material that college students (or anyone) would find useful.

Moreover, there are some great websites, including Crash Course and TED-Ed, that contain short general education videos on a huge variety of subjects, from philosophy to biology to history. But while a small number of innovative professors are using such methods to reinvent their courses, the tremendous resistance they face from other faculty holds down the size of the market and makes it hard to justify the investments needed to produce more rapid change.

Let’s face it: college faculty are no keener than any other group to see technology cut into their jobs. And, unlike most factory workers, university faculty members have enormous power over the administration. Any university president who tries to run roughshod over them will usually lose her job long before any faculty member does.

Of course, change will eventually come, and when it does, the potential effect on economic growth and social welfare will be enormous. It is difficult to suggest an exact monetary figure, because, like many things in the modern tech world, money spent on education does not capture the full social impact. But even the most conservative estimates suggest the vast potential. In the US, tertiary education accounts for over 2.5% of GDP (roughly $500 billion), and yet much of this is spent quite inefficiently. The real cost, though, is not the squandered tax money, but the fact that today’s youth could be learning so much more than they do.

Universities and colleges are pivotal to the future of our societies. But, given impressive and ongoing advances in technology and artificial intelligence, it is hard to see how they can continue playing this role without reinventing themselves over the next two decades. Education innovation will disrupt academic employment, but the benefits to jobs everywhere else could be enormous. If there were more disruption within the ivory tower, economies just might become more resilient to disruption outside it.

Kenneth Rogoff, a former chief economist of the IMF, is Professor of Economics and Public Policy at Harvard University.

Burying the Legal Ghosts of Brazil’s Hyperinflation

SÃO PAULO – A decades-old legal fight between consumers and financial institutions over the impact of Brazil’s economic policies of the 1980s and 1990s is nearing conclusion. In December, lawyers representing claimants presented Brazil’s Supreme Federal Court with a request to ratify a settlement reached with the banks.

If the court approves the deal, the settlement would put billions of reals into the pockets of savers. But more than a long-awaited payday for some one million claimants, the court-ordered restitution would also mark an official end to Brazil’s seemingly endless war on hyperinflation.

During the late 1980s and early 1990s, the Brazilian government struggled to stabilize the country’s economy and currency. At the height of the crisis, annual inflation reached 2,477%; at that rate, prices for food and household goods increased daily. A string of unsuccessful policies had accelerated inflation in public and private contracts, affecting wages, rents, and bank deposits. Some highly controversial measures – such as a move in 1990 to commandeer deposits – briefly halted inflation but contributed to a deep recession.

The implementation in 1994 of Plano Real (“The Real Plan”) brought some relief, by introducing a set of stabilization measures that created the current currency. But these were also troubled times for Brazil politically, as the country was still consolidating its transition to democracy after 20 years of military dictatorship. Hyperinflation and its social consequences amounted to an economic gauntlet for Brazil’s new leaders, and rising inequality posed a severe threat to the country’s democratic hopes.

Although politicians eventually navigated the currency crisis, the economic tensions never really vanished, even after the 1994 plan took hold. Many of the failed monetary measures cost people significant savings; as a result, nearly every stabilization plan from that period was litigated, including the successful Plano Real. Lawsuits in that case are still pending at the Supreme Court.

Many in Brazil, especially central bankers, have warned that continued litigation of past monetary policies could result in a breakdown of the current financial system, leading to new levels of economic dysfunction, such as insufficient credit.

That possibility seems to have had a sobering effect on the country’s top court. Historically, the Supreme Court has sided with consumers in cases related to inflation adjustments on savings deposits. But the justices have also tempered their decisions in cases with sweeping economic implications. And, in the case of the lawsuits implicating economic policy, no final decision has been reached, despite their being on the docket for years. That is why the settlement reached in December – which was ratified by the attorney general – could be interpreted as giving the court a way out.

Full details of the settlement have not been disclosed. But it seems increasingly clear that after nearly three decades of legal wrangling, consumers and financial institutions have agreed that a negotiated approach is the only way forward.

As a result, whatever the final tally, banks will likely get off easier than authorities had feared. According to reports, the final settlement will be in the vicinity of R$12 billion ($3.8 billion), a far cry from a previous central bank estimate of R$150 billion, or the R$341 billion predicted by Febraban, the Brazilian Federation of Banks.

If this long legal journey does indeed end this year, updates to the Brazilian Code of Civil Procedure will deserve much of the credit. Changes implemented in March 2016 encourage litigants to pursue mediation and arbitration, a move meant to reduce the tens of millions of civil lawsuits that are currently clogging the courts. These revisions could also help other old legal cases move toward conclusion.

An approved settlement would mark the end of a complex and divisive legal fight that for too long has prevented Brazil’s leaders from putting a legacy of failed economic initiatives behind them. That would be welcome news for Brazil’s consumers, financial institutions, and overall economic health.

Camila Villard Duran is a professor of law at the University of São Paulo, and a former Oxford-Princeton Global Leaders Fellow at the Woodrow Wilson School of Public and International Affairs. Arnoldo Wald is a professor of law at Rio de Janeiro State University.

Water Management Is Health Management

LONDON – With climate change accelerating and its effects exacerbating other geopolitical and development crises, the role of environmental protection in preserving and improving human wellbeing has become starkly apparent. This recognition lies at the heart of the concept of “planetary health,” which focuses on the health of human civilization and the condition of the natural systems on which it depends.

The concept’s logic is simple: if we try to deliver better health to a growing population, without regard for the health and security of our natural resources, we will not just struggle to make new strides; we will reverse the progress already made. Where things get complicated is in applying the concept, particularly when addressing the nexus of water services, health, and ecosystem integrity.

Since at least 1854, when John Snow discovered that cholera was spread through contaminated water supplies in central London, humans have understood that polluted water is bad for our health. The degradation of freshwater ecosystems often brings disease, just as the protection or strengthening of such ecosystems improves health outcomes.

But, while it is now well understood that progress in one area improves outcomes in another, such co-beneficial dynamics often are insufficient to spur investment in both areas.

For example, investing to protect a watershed can also protect biodiversity and improve water quality in associated rivers, thereby benefiting human health. But if the goal is explicitly to improve human health, it might be more cost-effective simply to invest in a water-treatment plant.

A more compelling dynamic is complementarity: when investment in one area increases the returns on investment in other areas. In this scenario, investments in protecting a watershed would aim not just to produce returns directly, but also to boost the returns of simultaneous investments in human health. Complementarity produces mutually reinforcing dynamics that improve outcomes across the board.

A well-functioning water sector already attempts to balance complementary interventions. Indeed, such a system amounts to a multidisciplinary triumph of human ingenuity and cooperation – involving engineering, hydrology, governance, and urban planning – with far-reaching complementary impacts on both human health and economic development.

In 1933, through the Tennessee Valley Authority Act, the United States established an agency whose purpose was to build hydroelectric dams on the Tennessee River. That effort benefited industry, agriculture, flood control, and conservation throughout the Tennessee Valley watershed, until then one of the country’s most disadvantaged regions.

Since then, governments worldwide have recognized the potential of water infrastructure to complement other economic and social policies, including those intended to improve health outcomes. It is no coincidence that one of the World Bank’s largest lending portfolios – $35 billion worth of investments – comprises water projects.

But understanding the potential of complementarity is just the first step. To maximize results, we must design a coherent strategy that takes full advantage of the dynamic, at the lowest possible cost. The question is whether there is an optimal mix of environmental protection and direct health interventions on which policymakers can rely to maximize investment returns for both.

A recent analysis suggests that, in rural areas, a 30% increase in upstream tree cover produces a 4% reduction in the probability of diarrheal disease in children – a result comparable to investing in an improved sanitation facility. But even if that is true, we have yet to determine at what point reforestation becomes a better investment than improving sanitation, let alone at what point it would maximize the returns of other health interventions.

Another study found that an estimated 42% of the global malaria burden, including a half-million deaths annually, could be eliminated through policies focused on issues like land use, deforestation, water resource management, and settlement siting. But the study didn’t cover the potential benefits of employing insecticide-treated nets as a tool for fighting malaria, ruling out a comparison of the two investments’ returns.

Worldwide, around 40% of cities’ source watersheds show high to moderate levels of degradation. Sediment from agricultural and other sources increases the cost of water treatment, while loss of natural vegetation and land degradation can change water-flow patterns. All of this can adversely affect supply, thereby increasing the need to store water in containers – such as drums, tanks, and concrete jars – that serve as mosquito larval habitats. Could ecological restoration of watersheds do more than insecticides or mosquito nets to support efforts to reduce malaria (and dengue) in cities?

In all of these cases, finding the best option requires not just knowing the relative contribution of each intervention, but also understanding their complementarity. In a world of limited resources, policymakers must prioritize their investments, including by differentiating the necessary from the desirable. To that end, finding ways to identify and maximize complementarity is vital.

Some 2.1 billion people worldwide lack access to safe, readily available water at home, and more than twice as many – a whopping 4.5 billion – lack safely managed sanitation, severely undermining health outcomes and fueling river pollution. With a growing share of the world’s population – including many of the same people – feeling the effects of environmental degradation and climate change firsthand, finding solutions that simultaneously advance environmental protection, water provision, and health could not be more important. Global health and conservation professionals must cooperate more closely to find those solutions – and convince policymakers to pursue them.

Giulio Boccaletti is Chief Strategy Officer and Global Managing Director for Water at The Nature Conservancy.

Africa’s Arrival

NEW YORK – The African Development Bank (AfDB) has just published its African Economic Outlook for 2018. This year’s revamped publication – shorter than usual, analytically well-structured, and written in lucid prose, without hyperbole – in some ways mirrors Africa’s own transformation, as it raises hopes that we may at last be witnessing the continent’s long-promised economic arrival.

Africa’s rise has been a long time coming. In the 1960s, hopes were high. The remarkable leaders of the independence generation – such as Ghana’s Kwame Nkrumah and Kenya’s Jomo Kenyatta – received advice from the world’s top economists. The Caribbean-born Nobel laureate Arthur Lewis became Nkrumah’s Chief Economic Adviser.

In India, we read about these leaders’ friendship with our own post-independence prime minister, Jawaharlal Nehru, and the hope for a new dawn for all emerging economies. And many emerging economies did indeed take off. In the late 1960s, some East Asian economies surged ahead. Beginning in the early 1980s, China began its decades-long rise. And, from the early 1990s, India’s economy also began to grow robustly, with annual rates reaching the 9% range by 2005.

But Africa remained stagnant, mired in poverty. Ironically, it was the continent’s resource wealth that hampered economic progress, as it fueled conflicts among governments and insurgents eager to control it. The resulting political instability attracted outsiders keen to exploit governments’ weakness. As the Indian poet and Nobel laureate Rabindranath Tagore put it in his 1936 poem “Ode to Africa,” which played on perceptions about who is “civilized,” the continent fell prey to “civilization’s barbaric greed,” as the colonists “arrived, manacles in hand/Claws sharper by far than any of your wolves.”

Finally, at the turn of the twenty-first century, things began to change for Africa. A few dynamic leaders, democratic stirrings, and emerging regional cooperation led to a decline in poverty and a pickup in growth. Commodity exporters faced a setback around 2014, when prices plummeted. But this turned out to be a blessing in disguise, because it forced countries to diversify their economies and increase production – factors that supported renewed growth.

According to the AfDB report, Africa’s 54 economies grew by 2.2% in 2016, on average, and 3.6% in 2017. In 2018, the AfDB predicts, average growth will accelerate to 4.1%, while the World Bank expects Ghana to grow by 8.3%, Ethiopia by 8.2%, and Senegal by 6.9%, placing these countries among the world’s fastest-growing economies. And these figures are not wishful thinking: in 2016, Ethiopia’s GDP grew by 7.6%.

Of course, serious challenges remain. South Africa, the continent’s strongest economy, is now facing the difficult task of tackling its deep-rooted corruption. Yet, with the African National Congress now apparently determined to replace President Jacob Zuma’s scandal-ridden administration with one led by the party’s new leader, Cyril Ramaphosa, there is reason for hope.

More broadly, many African countries need to find ways to create more employment – and fast. The share of the working-age population is rising faster in Africa than in any other region. This “demographic dividend” has immense potential. But if job creation stalls, the unemployed or under-employed are likely to become frustrated – a recipe for conflict.

Consider the case of Tanzania. Thanks to President John Magufuli’s effort to mobilize more domestic revenue to support increased development spending, the economy is doing well. But, with roughly 800,000 individuals entering the labor force each year, Tanzania needs much more working capital, better infrastructure, and educational reform aimed at ensuring that workers have the skills, resources, and opportunities to secure decent jobs.

The same is true of Ethiopia. In the last couple of decades, the country has made great strides in export-led growth, supported by a growing industrial sector and large investments from China. Now, it is poised to take over as the economic powerhouse of East Africa. Yet the urban youth unemployment rate stands at 23.3%. Left unchecked, this situation could easily end up fueling ethnic conflict and political turmoil.

Another, related challenge concerns resource mobilization: countries need funds to invest in infrastructure, human capital, and the creation of trade and digital links within and beyond Africa. The AfDB report estimates that, for infrastructure investment alone, the continent needs some $170 billion per year, which is $100 billion more than is currently available. As it stands, Africa receives a total of about $60 billion in foreign direct investment each year.

To close the gap, African governments must attract more money. That will require establishing effective regulatory structures that facilitate long-term borrowing and repayment, while ensuring that lenders do not exploit borrowers, as has occurred everywhere from rural India to the United States mortgage market.

The challenges are daunting, to say the least. But there are lessons that African countries can learn from one another. For example, Ghana’s smooth transfer of power after the December 2016 election set a positive democratic example. Nigeria’s Lagos State and Tanzania have done a good job of mobilizing internal resources for development. Add to that the emergence of an indigenous intelligentsia in the region, exemplified by organizations like the AfDB, and it seems that Africa’s moment may have arrived at last.

Kaushik Basu, former Chief Economist of the World Bank, is Professor of Economics at Cornell University and Nonresident Senior Fellow at the Brookings Institution.

Oil’s Uncertain Comeback

CALGARY – As global economic growth picks up practically everywhere, oil producers are becoming increasingly hopeful that the recent impressive price recovery will continue. But, if those hopes are to be fulfilled, not only will producers have to control what they can (by maintaining production discipline); what lies beyond their control (output from shale and the value of the dollar) will also have to work in their favor.

Just over three years ago, oil (WTI) was trading above $100 per barrel. But, by early 2016, prices had plummeted to around $30 per barrel, owing to a combination of sluggish demand, alternative supply (particularly shale oil and gas from the United States), and a new OPEC production paradigm under which the cartel, led by Saudi Arabia, withdrew from acting as a “swing producer.”

In the wake of the resulting collapse of export receipts and budget revenues, OPEC adopted a new approach, based on a modernized production agreement with two key features: greater flexibility for countries facing especially complex internal conditions (such as Libya) and the inclusion of non-OPEC producers, particularly Russia. Together, OPEC and non-OPEC countries established a floor from which oil prices could bounce. With the pickup in global growth and the emergence of geopolitical uncertainties (which could constrain output in some oil-producing countries), oil prices have rebounded to above $60 per barrel.

The current global growth phase is particularly good for the price of oil (and other commodities), because it is synchronized, real, and, increasingly, self-reinforcing. It is being powered by simultaneous recovery in the systemically important economies of Europe, Japan, the US, and the emerging world. And it is based on durable gains in economic activity, rather than just financial engineering.

Given these features, today’s global growth spurt is starting to generate a virtuous cycle among consumption, investment, and trade. And that dynamic could pick up even more momentum, especially if the recent pro-growth measures in the US and the endogenous healing in Europe are buttressed by structural reforms, more balanced demand management, and improved international policy coordination.

In fact, the downside risks for oil prices have shifted from the demand side to the supply side. Higher oil prices tend to erode production discipline in OPEC, particularly among members (such as Nigeria and Venezuela) that have historically rushed to secure higher revenues to mitigate difficult budgetary conditions, at the expense of their peers (such as Saudi Arabia and the United Arab Emirates). This tendency makes coordination with non-OPEC producers more difficult. Add to that the increased production from alternative sources (most consequentially, shale) that higher prices encourage, and the beneficial demand effects are offset, if not overwhelmed.

Yet, with some minor modifications to the current agreement, OPEC members should be able to maintain their collective production discipline, assuming the will is there. They may find it harder to continue to rein in non-OPEC countries. But, with thoughtful negotiations that incorporate insights from game theory, this, too, is possible.

When it comes to the factors over which oil producers have less control, the outlook is less hopeful. The depreciation of the US dollar – which fell 10%, in trade-weighted terms, in 2017 – has helped to drive up oil prices, but it is likely to be halted and then partly reversed. Avoiding that outcome would require Europe and Japan to continue to outperform market expectations, both overall and, more important, relative to the US. Moreover, the European Central Bank and the Bank of Japan would need to tighten monetary policy – including accelerating the taper of their large-scale asset purchases – faster than markets expect.

Finally, there is the challenge posed by increased shale production. And the fact is that there is little the traditional oil producers can do to counter shale producers’ likely response to higher prices.

Given this, oil producers would be well advised to treat recent oil-price gains as a temporary windfall, not a permanent state of affairs or even – unless there is a notable geopolitical shock – a trend that is likely to intensify in the year ahead. This means that producers should resist the temptation to use their higher revenues for new recurrent spending. And they should act quickly to reinforce their collective discipline to minimize the risk of a free-for-all that negates the hard-earned gains of recent years.

Mohamed A. El-Erian, Chief Economic Adviser at Allianz, was Chairman of US President Barack Obama’s Global Development Council and is the author of The Only Game in Town: Central Banks, Instability, and Avoiding the Next Collapse.

A Year of Successes in Global Health

BANGKOK – In the field of human development, the year that just ended was better than many predicted it would be. A decade after the Great Recession began, economic recovery continued in 2017, and progress was made on issues like poverty, education, and global warming.

But perhaps the most significant achievements of the last 12 months were in global health. I count 18 unique successes in 2017, many of which will help sow the seeds of progress for the months and years ahead.

The first notable success occurred early in the year, when a Guinness World Record was set for the most donations of medication made during a 24-hour period. On January 30, more than 207 million drug doses were donated to treat neglected tropical diseases including guinea-worm disease, leprosy, and trachoma. This extraordinary feat was made possible by the Bill & Melinda Gates Foundation, and by pharmaceutical firms including Bayer, Novartis, Pfizer, and my company, Sanofi Pasteur.

India’s elimination of active trachoma was another milestone, as it marked an important turning point in the global fight against a leading infectious cause of blindness. Last year, trachoma was also eliminated in Mexico, Cambodia, and the Lao People’s Democratic Republic.

A third key health trend in 2017 was further progress toward the elimination of human onchocerciasis, which causes blindness, impaired vision, and skin infections.

Fourth on my list is a dramatic drop in the number of guinea-worm disease infections. A mere 26 cases were recorded worldwide in 2017, down from 3.5 million cases in 1986.

Efforts to eradicate leprosy earned the fifth spot on my list, while vaccine advances in general were sixth. Highlights included a new typhoid vaccine, shown to improve protection for infants and young children, and a new shingles vaccine.

Number seven is the dramatic progress made in eliminating measles. Bhutan, the Maldives, New Zealand, and the United Kingdom were all declared measles-free last year.

The war on Zika is number eight on my list of health achievements in 2017. Thanks to coordinated global efforts, most people in Latin America and the Caribbean are now immune to the mosquito-borne virus, and experts believe transmission will continue to slow.

Number nine is polio eradication. Fewer than 20 new cases were reported globally, a 99% reduction since 1988. Although the year ended with reports of cases in Pakistan, health experts remain optimistic that polio can be fully eradicated in 2018.

Rounding out my top ten was the creation of the Coalition for Epidemic Preparedness Innovations (CEPI), which was established to develop vaccines for infectious disease threats. Launched with nearly $600 million in funding from Germany, Japan, Norway, the UK charity Wellcome Trust, and the Bill & Melinda Gates Foundation, CEPI aims to reduce sharply the time it takes to develop and produce vaccines.

Huge gains in disease control and prevention were made last year, and the next few items on my list (11 through 16) reflect progress on specific illnesses. For example, rates of premature death fell for non-communicable diseases like cardiovascular disease, cancer, diabetes, and chronic respiratory conditions. Another highlight was the historic approval of a sophisticated cancer treatment, CAR T-cell therapy, which uses a patient’s own immune cells to attack tumors.

Improvements were also made in treating HIV. Clinical trials for an HIV vaccine started at the end of 2017, while doctors in South Africa reported curing a young boy of the disease after he received treatments as an infant. These and other initiatives give new hope to the many who are still suffering from this chronic condition.

Advances in treating gonorrhea, a common sexually transmitted infection that has become increasingly resistant to antibiotics, are also worthy of mention. Wrapping up my list of disease-specific gains of 2017 is the renewed commitment made by global health ministers to eradicate tuberculosis by 2030.

The final two successes are reminders of how much work remains. In August, the fast food giant McDonald’s unveiled a Global Vision for Antimicrobial Stewardship in Food Animals. Although recognition of the food industry’s ethical responsibilities for public health is to be welcomed, the pledge also represents a cautionary note about how closely connected food and health really are.

Finally, rounding out my list was the historic Universal Health Coverage Forum held in Tokyo, where global leaders gathered to discuss how to improve health-care access. The World Bank and the WHO note that half of the world’s population still cannot obtain essential health services. I therefore count the December meeting as a “success” not for its achievements, but because it was a reminder to the international community that improving health-care access remains a long-term endeavor.

As the global health community resets its annual clock – and I begin cataloguing the big health stories of 2018 – we should take a moment to reflect on the 12 months recently ended. Even in a mediocre year, the global health community saved millions of lives. Imagine what we will achieve in an extraordinary year.

Melvin Sanicas, a public health physician and vaccinologist, is a regional medical expert at Sanofi Pasteur.

By Melvin Sanicas

Building a Gender-Inclusive Workplace

NEW YORK – The wave of high-profile sexual harassment cases that began with revelations from Hollywood is having a profound impact on far less glamorous work environments. Just as major film studios have been forced to take action against abuse, a similar revolution – powered by the #MeToo movement of women speaking out – is sweeping workplaces everywhere.

It has been terrible to learn of the abuse that women suffered at the hands of powerful men like Harvey Weinstein, Matt Lauer, and Al Franken. But it is also deeply encouraging to see the corporate world take this issue seriously, by attempting to create a “shared future” for its female employees. The collective response to the #MeToo movement could mark a turning point in the way employers think about sexual harassment and other issues involving gender – like pay and power.

But the workplace revolution is far from over. New strategies are needed to encourage healthy interactions among employees. When handled properly, gender equality promotes business output and productivity, whereas sexual discrimination, if ignored, can destroy an office culture – and so much more.

Companies have traditionally taken a box-ticking approach to addressing harassment, using written policies and trainings in a feeble bid to encourage respect. But this top-down approach has proven ineffective, as scandals at Uber and other tech firms have demonstrated. If workplace abuse is to be curtailed, business leaders and C-Suite executives need a fresh approach.

The first priority is to achieve gender balance at the top. Diversity in leadership encourages employee cooperation and leads to healthier organizations. This is not a new idea; a 2016 study published in the Harvard Business Review found that companies with more high-level female executives generate higher profits. Other studies have shown that women perform better under stress, often making smarter decisions. But, despite the obvious benefits that women bring, they remain under-represented in senior leadership positions at companies around the world.

Change is needed in the digital workplace as well. Predators may lurk around the water cooler, but they are also active in online communities, chat rooms, and forums. Concerns raised by the #MeToo movement spread virally on social media within hours, and similar anger could engulf an organization at any time. Companies must therefore place a premium on promoting online decorum, and take seriously any comments and concerns expressed by their employees. Most companies already monitor social media for reputational risks and customer satisfaction; they should do the same to protect their staff.

Finally, companies must be responsive to the concerns of their youngest employees, who will inherit the office of the future. With more millennials entering the workforce and demanding greater equality, the youngest employees already have a stronger voice at work than previous generations. A recent Boston Consulting Group study found that young male employees are often more open-minded than their superiors on issues like family leave and diversity, suggesting that true leadership on gender equality may actually come from a company’s youngest staff members.

Moreover, researchers at Rutgers University have shown that more than 50% of millennials would consider a pay cut if it meant working for a company that shared their values, while the Society for Human Resource Management notes that 94% of young workers want to use their skills to benefit a good cause. Rather than resist these trends, companies should look to harness the benevolence of their youngest talent.

To build a more inclusive workplace, management must craft narratives that support the changes their employees are demanding. Most important, employees need role models. The willingness of celebrities like Salma Hayek, Rose McGowan, and Reese Witherspoon to share their stories of sexual harassment empowered women from many walks of life to speak out, too. Changing workplace culture will demand similarly strong leadership.

That shift is on the horizon, and I am inspired by the women and men who are calling on future generations to work together more equitably. It is easy to feel overwhelmed by the complexity of these issues, but if managers and employees can commit to building purpose-driven and inclusive work environments, change is inevitable.

The women of Hollywood may have initiated what has become a global call for equality at work, but the workplace revolution is no less significant for those of us who walk on less colorful carpets.

Kathy Bloomgarden is CEO of Ruder Finn.

By Kathy Bloomgarden

India’s Urban Awakening

WASHINGTON, DC – When the United Kingdom became the first country in the world to undergo large-scale urbanization in the nineteenth and early twentieth centuries, the process transformed its economy and society. Today, India is facing a similar transformation, only it is happening at 100 times the pace. By 2030, India’s urban population will reach 600 million people, twice the size of America’s.

For India, rapid urbanization is particularly vital to enable the country to take full advantage of the demographic dividend afforded by its young population. With 12 million more people joining the country’s labor force every year, the potential of that dividend is huge. As the urbanization process continues, connectivity, proximity, and diversity will accelerate knowledge diffusion, spark further innovation, and enhance productivity and employment growth.

For all of its benefits, however, rapid urbanization also poses enormous challenges, from managing congestion and pollution to ensuring that growth is inclusive and equitable. As a latecomer to urbanization, India will benefit from technological innovations – including digital technologies, cleaner energy, innovative construction materials, and new modes of transport – that will enable it to leapfrog some of its more developed counterparts. But taking advantage of those technologies will require effective policies, including smart infrastructure investments and measures to make cities more competitive, particularly in modern industries.

Making its cities more competitive will require India to decide whether to emphasize specialization (with an industry concentrated in a particular city) or diversification (with each city home to a range of industries, roughly in line with the national average). This is no easy choice: the debate over which approach is better has been raging for nearly a century.

In 1991, around the time India’s economic liberalization began, the country’s cities tended toward specialization. But, in recent years, there has been a notable shift toward diversification, with some major urban centers, like Mumbai and Bangalore, experiencing the largest and fastest shifts away from specialization.

Specialization tends to be much higher in traditional industries than in modern industries. Though some modern industries – like office accounting and computing machinery, and radio, television, and communication equipment – tend to be located in more specialized districts, roughly three-quarters of Indian districts with higher specialization levels rely on traditional industries. Of India’s 600 districts, those that remain the most specialized are Kavaratti (water transport), Darjiling (paper products), Panchkula (office accounting and computing machinery), and Wokha (wood products).

Though India’s specialization levels were much higher than those in the United States in the early 1990s, the two countries have converged over time. All of this suggests that, as technology continues to advance, so will diversification – a trend that will shape future urbanization patterns in India.

This bodes well for employment, because more diversified cities and districts tend to experience greater job growth. Initial clusters of modern services have also experienced abnormally high employment growth since 2000.

And there’s more good news: the strongest job gains due to diversification are occurring in rural areas and among small enterprises, suggesting that India’s urbanization can bring inclusive growth and prosperity. Evidence also shows that high growth rates, which support poverty reduction, are concentrated in the rural areas of particular districts.

Taking full advantage of these positive trends, however, will require India to boost infrastructure investment. Despite a slowdown in the manufacturing sector’s growth – a trend mirrored in much of the rest of the world – urbanization has continued to accelerate in India, especially in districts with access to better infrastructure.

In the developing world, a billion people lack access to electricity and roads, and more than a half-billion lack reliable access to safe drinking water. Addressing these deficiencies is critical to development – and India is no exception. Access to better infrastructure will enable millions more entrepreneurs, especially women, to benefit from the country’s urban awakening. The key to success will be to improve the efficiency of public spending, while attracting more private investment.

There is certainly an economic incentive for private actors to channel their money toward developing-country infrastructure. After all, high-income countries, where populations are aging rapidly, often have an excess of savings ready to be allocated to high-yield investments. Lower-income countries, with their younger populations and large infrastructure needs, provide just such opportunities.

As it stands, however, less than 1% of the $68 trillion managed by pension funds, life insurance companies, and others is channeled toward infrastructure projects. And, given the low risk appetite among investors, not to mention the small size of city-level projects, municipal governments will struggle to raise that share.

But it isn’t impossible. What is needed is visionary leadership at the local level, with municipal governments identifying infrastructure projects that promote entrepreneurship, increase their cities’ competitiveness, and promote regional development by strengthening urban-rural connectivity. Those governments should also leverage their assets, including land; mobilize user revenue; and modify financial regulations and incentives to increase investors’ risk appetite. Add to that greater technical and financial capacity, and it would become much easier to attract the needed private funds and build partnerships benefiting India’s urban transformation.

India has all of the tools it needs to advance its urbanization process in a way that promotes inclusive and sustainable growth. It must use them wisely.

Ejaz Ghani is Lead Economist at The World Bank.

By Ejaz Ghani

Liberal Democracy in Africa Can Wait

YAOUNDÉ – Africa’s policymakers understand that strong economic and political leadership is essential to growth and stability. For years, African economies have fared better than expected, owing to a commitment to improving governance. The question now is how to sustain the momentum.

Current strategies do not provide an adequate answer. Although leaders at a recent African Economic Conference in Addis Ababa, Ethiopia, committed to keeping governance reforms at the top of Africa’s agenda, they offered no blueprint. From my perspective, this void presents an opportunity to consider new governance paradigms, including those that borrow from two commonly discussed models: the “Washington Consensus” and the “Beijing Model.”

Development practitioners have long debated which model offers the best framework for reform. Put simply, “governance” refers to a dynamic framework of rules, structures, and processes that help a government manage its economic, political, and administrative affairs.

But which principles a government focuses on varies by approach. The model championed by the West places a premium on human rights and democracy, while the one advocated by China is more concerned with political stability and economic growth.

Since the election of President Donald Trump, the United States, which remains one of Africa’s top donors, has focused more on the principles China favors – like political stability, trade, and counterterrorism – than on human rights. The rationale is that the Beijing Model is better for Africa in the short and medium term. And, while it might not be popular to admit, Trump has a point.

Simply put, food, shelter, health, and good sanitation are more relevant for most Africans than the right to vote. Moreover, only a moderately wealthy population, with a healthy middle class, can adequately demand the rights that democracy provides. Paradoxically, the fastest way to build a strong middle class in Africa would be to move toward the hierarchy of principles that China’s model promotes.

For Africa to reorient its governance approach, and embrace a post-Washington Consensus, its leaders must commit to improving institutional effectiveness and economic management.

The first set of reforms would involve establishing clear lines of sovereignty with international partners. Africa’s relationship with Western donors, for example, has historically placed individual rights over national rights. But in my view, individual rights should not supersede sovereign ones. Punishing entire countries for laws that affect a minority is counterproductive.

An example of such collective punishment occurred in Uganda in 2014, when the World Bank froze some $90 million in loans following the government’s enactment of legislation criminalizing homosexuality. As a Ugandan government spokesman said at the time, the bank “should not blackmail its members” to adopt Western values. Yet, when governance models are judged solely through the lens of the Washington Consensus, there is very little alternative.

Along the same lines, the second set of reforms pertains to prioritizing economic rights over political rights. For example, politicians who manage an economy well should not be subject to term limits. Neither Singapore nor China is a democracy; but leaders in both countries have used their political power to improve living standards. Forcing leaders to step down in the middle of economic reforms seems counterproductive.

These are not far-fetched ideas. Today, leaders in Rwanda, which is widely considered an African success story, have improved stability by moving away from the Washington Consensus approach to governance.

Politically, Rwanda is strong, disciplined, and organized, but it is not liberal. The landslide reelection of President Paul Kagame last year had more to do with power than democracy. Although Kagame remains popular, his government was criticized for stifling free speech and human rights in the run-up to the vote. The conclusion I draw is not that human rights don’t matter, but that political discipline and imperfect forms of democracy are acceptable if the tradeoff is sustained progress in economic and institutional governance.

We should be intellectually honest and call a spade a spade. Rwandans should not be ashamed to value economic and administrative strength more than fair elections. The question for other African states seeking to reform their governance models, then, is how much of Rwanda’s approach to emulate.

Neither the Washington Consensus nor the Beijing Model has all the answers. But, as Rwanda has demonstrated, if discipline and strong leadership are improving lives and delivering public goods, perhaps liberal democracy should be a long-term priority.

Simplice A. Asongu is Lead Economist in the research department of the African Governance and Development Institute.

By Simplice A. Asongu

The Transformative Power of Africa’s Youth

TORONTO – A few years ago, during a conversation with young people from some of Senegal’s poorest communities, a pair of social entrepreneurs told me about projects they were working on to help their peers succeed. One young man said he planned to put more computers into primary schools; another had set up a network to connect rural job seekers in the urban tumult of Dakar, Senegal’s capital.

After they finished sharing their plans, I congratulated them, and said that their parents must be very proud. But instead of accepting the compliment, they demurred. “My parents are against what I’m doing,” they said, almost in unison, before explaining that young people face family pressure to get a government job or use their English skills to work as a tour guide – not to become a risk-taking entrepreneur.

For ambitious young Africans, there are many obstacles to success. The journey to a job – whether formal or informal, entrepreneurial or traditional – is often a solitary one. Many young people lack access to skills training or even a favorable social environment to try something new. As I was reminded that day in Senegal, helping young people find gainful employment is the most important thing that the international community can do to help Africa develop.

Africa is home to the world’s largest population of young people. In about 25 years, those young people will be part of the biggest workforce in the world, with more than 1.1 billion people of working age. By some forecasts, 11 million people will enter Africa’s labor market each year for the next decade, most of whom will be first-time job seekers.

If African countries boost job growth and equip young people with employable skills, this youth bulge can deliver rapid, inclusive, and sustainable economic growth to the continent. In turn, millions would have the opportunity to lift themselves out of poverty.

But Africa cannot achieve this future alone. At the Mastercard Foundation, we believe that, if Africa is to reach its potential, gaps in two key areas must be closed.

The first area is access to financial products and services. According to the World Bank, some two billion people around the world currently lack such access. In Sub-Saharan Africa, just 34% of adults have a bank account, making it difficult for people to put money aside for unplanned events, like a bad harvest, or to save for school. This must change, with Africans gaining not only better access to banking systems, but also improved financial literacy.

The second key challenge that must be addressed is exclusion from secondary and higher education. While progress has been made in some regions, only about one-third of Africa’s young people graduate from high school. Girls are particularly disadvantaged; according to UNESCO, in Sub-Saharan Africa, an estimated nine million girls under the age of 11 have never been to school, compared to six million boys.

To address these issues, the Mastercard Foundation has established partnerships with local organizations to design education and financial-literacy programs aimed at helping young people find and keep jobs. By building a better-trained workforce, the Foundation’s programs are helping to empower the next generation of Africa’s community members and leaders, so that they can help their families, communities, and countries achieve a brighter and more prosperous future.

Already, a new generation of educated and ethical entrepreneurs, like those I met in Senegal, is emerging across Africa, demonstrating a profound commitment to building a stronger Africa. For example, when I ask young people participating in our Scholars Program what they plan to do with their new skills, they almost always reply that after getting a job, they plan to help somebody else, by returning to their secondary schools to serve as mentors to younger students.

Some of our program’s graduates have even established community projects in their villages to address HIV/AIDS or to build shelters for orphans and young children. Every one of these bright young Africans – examples of what the Mastercard Foundation calls “transformative leadership” in action – has the potential to drive change in their own countries and communities.

Those of us working in the field of international development can help level the playing field even more, by giving young Africans from all backgrounds an opportunity to lead in transformative ways. If we succeed, Africa’s dreamers of today will be the catalysts of positive change tomorrow.

Reeta Roy is President and CEO of the Mastercard Foundation.

By Reeta Roy
