
End of year Globe and Mail oped

Some year end reflections and thoughts on the year to come for the Globe and Mail:

Big Tech’s net loss: How governments can turn anger into action

It has been a game-shifting 2018 for Big Tech. It was the year that long-simmering concerns about its potential negative effects on our economy, on our personal lives and even on our democracy broke into public debate.

It was the year that much of the media got serious about tech journalism, when the balance of tech journalism tipped from gadget reviews and chief-executive profiles to treating Silicon Valley as a node of power in society to be held accountable.

It was the year that tech company employees began holding their employers to account. At Google, walkouts were staged over gender policy, and petitions demanded an end to Chinese expansion plans and to the development of “warfare technology.” Protests were held over Microsoft contracts with the U.S. Immigration and Customs Enforcement agency. There was backlash against the use of facial recognition to assist law enforcement at Amazon. And at Facebook, employees started speaking far more openly to the media as the company careened from scandal to scandal.

It was the year that tech executives awoke to their new operating environment. The U.S. Congress and parliaments around the world ordered CEOs, accustomed to being adored as the leaders of venerated companies, to testify and answer tough questions. It was also the year that these same CEOs, whether motivated by sincere interest in fixing structural problems with their products or concern for protecting public image and shareholder value, began making meaningful reforms to their companies, to varying degrees of success.

Finally, and perhaps most consequentially, 2018 was the year that tech companies lost the benefit of the doubt from governments. This was a result of a growing body of investigations, academic research and enterprise reporting detailing the ways in which social platforms have been used to undermine democracy. It also stems from a concern that the economic benefits of the digital economy are flowing predominantly to a small handful of U.S.-based global companies. But the final straw for many legislators was a November article in The New York Times revealing a disconnect between Facebook’s public statements about abuses on their platform and the aggressive tactics being used by executives to fight the story. To many in government, this confirmed that the tech-sector giants should be treated like any other large multinational corporation, and that it’s time to get serious about governing Big Tech.

Luckily, there are some relatively easy places for governments to start. They can bring sunlight to the world of micro-targeted advertising through new transparency laws. They can overhaul data-privacy regimes that are limited in scope, weak in capacity and unco-ordinated globally. They can mandate the identification of automated accounts so that citizens know when they are engaging with a machine or a human. They can modernize tax and competition policy for the digital economy. And they can fund large-scale digital literacy initiatives for citizens of all ages.

But beyond these short-term Band-Aids, 2019 must also be the year we start grappling with a set of thornier questions at the intersection of technology and democracy.

Democratic governments will need to wrestle with how their speech laws apply to the digital world. This is going to require bringing together the private sector and civil society in a hard discussion about the nature and limits of free speech, about who is censored online and how, about responsibilities for moderating speech at scale, and about universal versus national speech norms.

And while the idea that platform companies are simply intermediaries – and therefore not liable for how their services are used – has been foundational to the innovation, growth and empowerment created by the open internet, the sheer breadth of the economic and social services now provided by platforms might demand a more nuanced approach to how they are governed. If this comes at the cost of that innovation, democracies must be allowed to decide about the trade-off.

Such democracies will need to start co-ordinating their public-policy efforts around emerging technologies, too. There is currently a disconnect between the global scale, operation and social impact of technology companies and the national jurisdiction of most countries’ tech laws and regulations. As former BlackBerry co-CEO Jim Balsillie has argued, the digital economy may need its Bretton Woods moment.

How we handle these challenges will set the tone for how we’ll grapple with the even knottier ones that are to come. As de facto public places increasingly involve private interests – such as Alphabet’s planned smart city in Toronto, or Amazon’s competition over which city would earn the right to be home to its HQ2 headquarters – governments will need to lead a conversation about what this collision looks like. What, for instance, would it mean to treat the data created by the citizens of cities as a public good?

And while governments devote substantial resources to growing the business of artificial intelligence, which promises to reshape broad aspects of our lives, we must work ahead to ensure these nodes of decision-making power are brought into the norms of accountability and transparency that we demand in democracies.

This year was defined by outrage against tech – but 2019 will be the year that the long and messy process of governing it begins.


Globe and Mail Oped: We can save democracy from destructive digital threats

I had the privilege of speaking to the Federal Cabinet retreat this week. Details from the event can be found here. I was there to address the challenges of misinformation and disinformation in relation to the upcoming election. This oped, published in advance of the retreat, provides some context for the issue, and is based on the recent Democracy Divided report that Ed Greenspon and I wrote.

A decade ago, governments and regulators allowed Wall Street to run amok in the name of innovation and freedom until millions of jobs were lost, families were forced from their homes and trust in the financial system was decimated.

Today, the same kinds of systemic risks – so-called because the damage ripples way beyond its point of origin – are convulsing the information markets that feed our democracy.

The growth of the internet has created tremendous opportunities for previously marginalized groups to gain voice, but the absence of a public-interest governance regime, or even a civic-minded business ethos, has resulted in a flood of disinformation and hate propagated by geopolitical, ideological, partisan and commercial operatives.

The result is that the giant digital platforms that now constitute a new public sphere are far too often being used to weaponize information, with a goal of deepening social divisions, fostering unrest and ultimately undermining democratic institutions and social cohesion. As we've seen in other countries, the integrity of elections themselves is at risk.

What can be done?

Some people say we need to invest in digital literacy. This is true, as is the broader need to increase civic knowledge and sharpen critical thinking skills. Yet this isn’t sufficient in itself. When Lake Erie was badly polluted a generation ago, signs were erected along the beaches warning swimmers to stay out of the water. But governments also passed laws and enforced regulations to get at the source of the pollution.

Others say these issues are not present in Canada. That would be a welcome kind of exceptionalism if remotely true. But misogynists, racists and other hate groups foment resentment online against female politicians and just about anyone else. Both the Quebec City mosque shooter and the suspect in the Toronto van attack were at least partially radicalized via the internet. That said, research into digital threats to our democracy is so thin in this country that we know almost nothing about who is purchasing our attention or exploiting our media ecosystem. There’s certainly no basis for complacency about protecting Canada’s 2019 federal election against attacks that would never be tolerated if they manifested themselves physically rather than digitally.

Here are some measures that merit serious consideration. First, the Elections Act needs to be reformed to bring complete transparency to digital advertising. Publishers and broadcasters are legally obligated to inform their audiences about who purchases political ads in election campaigns. Canadians have the same right to know who is paying for digital ads and at whom they are being targeted.

Secondly, we need to do more to make sure that individuals exercise greater sovereignty over the data collected on them and then resold to advertisers or to the Cambridge Analyticas of the world. This means data profiles must be exportable by users, algorithms and AI must be explained, and consent must be freely, clearly and repeatedly given – not coerced through denial of services.

Thirdly, platforms such as YouTube, Facebook and Twitter need to be made subject to the same legal obligations as newspapers and broadcasters for defamation, hate and the like. Some people say this would amount to governments getting into the censorship business. That's simply wrong; newspaper publishers and editors abide by these laws – or face the consequences – without consulting government minders. These digital platforms use algorithms to perform the same functions as editors: deciding which readers will see what content, and with what prominence.

A fake news law would be a trickier proposition, but it is not impossible to think anew about a statute that existed in Canada's Criminal Code from 1892 until 1992, when it was struck down as unconstitutional in a split decision. It said that anyone who "wilfully publishes a statement, tale or news that he knows is false and that causes or is likely to cause injury or mischief to a public interest is guilty of an indictable offence." The key words here are "wilfully" and causing "injury" to the public interest. We're not sure such a measure is warranted, but as with the 1960s commission that recommended hate laws in Canada, we think it's worth public discussion.

In the new digital public sphere, hate runs rampant, falsehood often outperforms truth, emotion trumps reason, extremism muscles out moderation. These aren’t accidents. They are products of particular structures and incentives. Let’s get with the program before democracy has its own Great Recession.


Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere

Ed Greenspon and I have just published a report as a collaboration between the UBC School of Public Policy and Global Affairs and the Public Policy Forum, called Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere. The report outlines what we see as a structural problem in our information ecosystem that has led to the current flood of mis- and disinformation, and details a range of policy ideas being discussed and tested around the world.

The report can be downloaded here.

And the Introduction is below.

Introduction:
For more than a quarter-century, the internet developed as an open web—a system to retrieve and exchange information and ideas, a way of connecting individuals and building communities and a digital step forward for democratization. It largely remains all these things. Indeed, the internet is supplanting the old concept of a public square, in which public debate occurs and political views are informed and formed, with a more dynamic and, in many ways, inclusive public sphere. But along the way, particularly in the last half-dozen years, the “open internet” has been consolidated by a handful of global companies and its integrity and trustworthiness attacked by malevolent actors with agendas antithetical to open societies and democratic institutions. These two phenomena are closely interrelated in that the structures, ethos and the economic incentives of the consolidators—Google (and YouTube), Facebook and Twitter in particular—produce an incentive system that aligns well with the disseminators of false and inflammatory information.

The digital revolution is famous for having disrupted broad segments of our economy and society. Now this disruption has come to our democracy. The Brexit referendum and the 2016 American election awakened the world to a dark side of digital communications technologies. Citizens and their governments are learning that a range of actors—foreign and domestic, political and economic, behaving in licit and illicit ways—can use disinformation, hate, bullying and extremist recruitment to erode democratic discourse and social cohesion both within and outside of election periods. And the problem is getting worse.

By and large, the internet has developed within a libertarian frame as compared, for instance, to broadcasting and cable. Until recently there was an almost reflexive assumption that public authorities had little or no role to play. To some extent, the logic flows from a view that the internet is not dependent on government for access to spectrum, so no justification exists for a government role. So long as the internet evolved in ways consistent with the public interest and democratic development, this logic—although flawed—was rarely challenged. And so governments around the world—and tech companies, too—were caught flat-footed when they discovered the internet had gone in directions unanticipated and largely unnoticed.

Today, the question is how to recapture and build on the values of the open internet so that it continues to promote the public good without also facilitating the run-off of social effluents and contaminants that pollute public discourse and the very security of open societies. “Keeping the web open isn’t enough,” said World Wide Web founder Tim Berners-Lee in 2017. “We need to make sure that it’s used in a way that’s constructive and promotes truth and supports democracy.”

It is not surprising that, more than 50 years after the internet's creation and a quarter-century after the development of the World Wide Web, a sweeping review is required. With this paper, we seek to explore the fundamental challenges that have arisen. We will offer a range of policy options for consideration because there is no single fix. We do so understanding that the urgency and novelty of these threats create a tension: corporate and public policy must be executed quickly, yet with high precision, given the possibility of unintended consequences for innovation and free expression. Nobody wants to suppress individual rights on the way to rebuilding trust or discourage the pioneering spirits that have made the internet so central to our lives. Yet doing nothing is not an option either; the current track is unacceptable for both civic life and fair and open marketplaces.

In some cases, this report will suggest actions; in others, the need for more study and more public engagement. In all instances, we believe that certain behaviours need to be remedied; that digital attacks on democracy can no more be tolerated than physical ones; that one raises the likelihood of the other in any case; and that a lowering of standards simply serves to grant permission to those intent on doing harm.

On April 5-6, 2018, PPF and the University of British Columbia's School of Public Policy and Global Affairs convened a mix of subject matter experts, public officials and other interested parties from academia, philanthropy and civil society. This workshop flowed out of PPF's 2017 report, The Shattered Mirror: News, Democracy and Truth in the Digital Age, which provided a diagnostic of the deteriorating economics of journalistic organizations, an analysis of negative impacts on Canadian democracy and recommendations for improving the situation. Named in recognition of a 1970 Senate of Canada study of mass media called The Uncertain Mirror, the PPF report noted that in the intervening decades this mirror has cracked and shattered under the pressure of content fragmentation, revenue consolidation and indifference to truth.

Now we are speaking of the need for the internet to become a more faithful mirror of the positive attributes of greater human connectivity. This latest piece of work is part of continuing efforts by PPF to work with a wide range of partners in addressing two distinct but intertwined strands (think of a double helix in biology): how to sustain journalism and how to clean up a now-polluted—arguably structurally so—internet.

The April workshop succeeded in sharing and transferring knowledge among experts and policy-makers about recent developments and what might be done about them. It was capped by a public event featuring some of the leading thinkers in the world on the state of the digital public sphere. This report advances the process by canvassing a range of possible policy responses to a rapidly evolving environment replete with major societal consequences still in the process of formation.

PPF hosted a follow-up workshop on May 14-15, 2018, which brought international and Canadian experts together to discuss policy and industry responses to disinformation and threatening speech online, a report from which will be published in the fall.

The report is divided into three parts:

  • Discussion on the forces at play;
  • Assumptions and principles underlying any actions; and
  • A catalogue of potential policy options.

We submit Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere in the hopes of promoting discussion and debate and helping policy-makers steer the public sphere back toward the public good.


Globe and Mail oped: The era of Big Tech self-governance has come to an end

My piece in The Globe and Mail on the Zuckerberg hearings:

The era of Big Tech self-governance has come to an end

Twenty years ago, another young Silicon Valley tycoon was grilled in front of the U.S. Congress. Then, as this week, Congressional leaders grandstanded, asked long-winded questions, and showed at times shocking ignorance about how technology worked. And then, as this week, a tech CEO was contrite and well-rehearsed, and obfuscated on key aspects of his business practices.

But the hearings had consequences. They led to an anti-trust lawsuit brought against Microsoft by the U.S. Department of Justice and the Attorneys General of 20 U.S. states. Instead of trusting Bill Gates and Microsoft to behave better or act differently, the government punished them for perceived wrongdoings.

This is how democratic governance is supposed to work. We don't have to simply trust citizens and corporations to act for the benefit of society; we impose rules, regulations and appropriate punishments to incentivize them to do so.

In the years since Mr. Gates's testimony, a new generation of digital technology monopolies has emerged, reshaping online life and concentrating activity on a series of giant, global platforms. And they have done so in a policy context virtually devoid of regulation.

But in 2018, it’s hard to ignore the many troubling cases of abuse regularly perpetrated on and by platforms, from the manner in which the Russian government used the tools provided by companies such as Facebook and Google to interfere in the 2016 U.S. election, to the way in which hate groups in countries such as Myanmar have organized mass violence against minority populations.

Both the government and Mark Zuckerberg know that citizens are finally paying attention to the political impact of Facebook and its effect on our elections, that citizens are understandably concerned about the way Facebook has repeatedly and consistently flouted and neglected user privacy, and that they are concerned about the hateful and divisive character of the civic discourse that results from Facebook's business model.

And so this week the era of Silicon Valley self-regulation came to an end. It’s now time for a difficult debate about how the new internet – an internet of multinational corporations, and of platforms – will be governed.

While Congressmen and Mr. Zuckerberg appeared to agree that they could work together to develop the “right” regulations, this week’s hearing revealed clear tensions on several key policy issues.

First, while Mr. Zuckerberg says that Facebook now supports digital advertising transparency laws that it had previously lobbied against, it is unclear whether the proposed Honest Ads Act will go far enough or whether it will even pass.

Second, on privacy: The world is watching the response to Europe's General Data Protection Regulation (GDPR), and while Mr. Zuckerberg argued that the privacy tools that Facebook will roll out in response to the GDPR will be available in other markets, the U.S. (and Canada) still seem unwilling to enshrine the punitive mechanisms that will be needed to ensure these new data rights. While he claims that he supports the principles of the GDPR, the details will be litigated in European courts for years to come.

Third, when pressed on whether Facebook has any competitors, Mr. Zuckerberg strained to name one. Having aggressively acquired many potential competitors, Facebook – as well as Google and Amazon – will surely fight hard against a new generation of competition policy.

Fourth, Mr. Zuckerberg surprised many by agreeing that Facebook is responsible for the content on its platforms. While this seems anodyne, the debate over whether Facebook is a neutral platform or a media company is rife with legal and regulatory implications.

Finally, Mr. Zuckerberg suggested that lawmakers should focus their attention on governing artificial intelligence. They repeatedly changed the subject. Since Facebook operates at a mind-boggling global scale, it uses AI to implement and even determine its policies, regulations and norms. How states will in turn govern these algorithms is certain to be a central challenge for democracy. Mr. Zuckerberg knows it; Congress was uninterested.

Over the past 20 years, the internet has shown flashes of its empowering potential. But the recent Facebook revelations also demonstrate what can happen if we fail to hold it accountable.

Mr. Zuckerberg’s testimony is only the beginning of a long-overdue conversation about whether we will govern platforms or be governed by them.


Globe and Mail oped: The new rules for the internet

There has been lots of discussion lately about regulating social media but much less on what this might look like. Ben Scott (a former tech policy advisor to Obama and Clinton) and I suggest some options in The Globe and Mail. In short, it will take a broad new approach to how we think about governing the internet. The piece is here, and below.

The new rules for the internet – and why deleting Facebook isn’t enough

While it is easy to be pessimistic about the depressing tableau of Silicon Valley malfeasance, let us not forget that the internet has brought tremendous value to our society. The answer, therefore, is not to lock down the open internet or even to delete Facebook (however satisfying that might feel, with 2.2 billion users it is embedded in our society). Instead, we urgently need new democratic rules for the internet that enhance the rights of citizens, protect the integrity of our public sphere and tackle the structural problems of our current digital economy.

Here are seven ideas:

Data rights. Much of the internet economy is built on trading personal data for free services with limited consumer protection. This model has metastasized into a vast complex of data brokers and A.I.-driven micro-targeting with monopolists such as Google and Facebook at the centre. With the curtain pulled back, there may at last be political will to build a rights-based framework for privacy that adapts as technologies change. For starters, we need major new restrictions on the political exploitation of personal data (including by political parties themselves, which remain exempt from our privacy law) and much greater user control over how data is collected and used. Europe's new General Data Protection Regulation sets a high standard, though since it took 10 years to legislate, it was out of date before it was implemented. We must evolve it to the next level.

Modernize and enforce election law. Few dispute that citizens deserve to know who is trying to sway them during elections, but our laws were designed for TV and radio. We need to update them for the internet era, where ads can be purchased from anywhere, disguised as normal social media posts, micro-targeted to polarize voters, and loaded up with sensational and divisive messages. All online ads should carry a clearly visible cache of information that states who bought them, the source of the funds, how much they spent, who saw them, and the specific targeting parameters they selected.
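The "cache of information" proposed above is, in effect, a small structured record attached to every ad. As a purely illustrative sketch (the field names here are hypothetical, not drawn from any actual statute or platform API), such a disclosure record might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AdDisclosure:
    """Hypothetical disclosure record for one online political ad,
    mirroring the items the op-ed lists: buyer, funding source,
    amount spent, reach, and targeting parameters."""
    buyer: str                 # who bought the ad
    funding_source: str        # where the money came from
    spend: float               # amount spent, in dollars
    impressions: int           # how many people saw it
    targeting: dict = field(default_factory=dict)  # targeting parameters selected

    def summary(self) -> str:
        """One-line, human-readable disclosure for display alongside the ad."""
        targets = ", ".join(self.targeting) or "none"
        return (f"Paid for by {self.buyer} (funded by {self.funding_source}); "
                f"${self.spend:,.2f} spent; {self.impressions:,} impressions; "
                f"targeted on: {targets}")
```

For example, `AdDisclosure("Example PAC", "Example Fund", 1200.0, 50000, {"age": "18-34"}).summary()` would render the kind of visible label the op-ed calls for. The point is not this particular schema but that every item on the list is ordinary structured data that platforms already hold.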

Audit artificial intelligence. Facebook and Google monetize billions of data points a day using powerful A.I. to target and influence specific audiences. The social and ethical implications of A.I. are a blinking red light as this technology advances, and we need to lay some ground rules for accountability. Just as we require drug manufacturers and car makers to submit to rigorous public safety checks, we need to develop a parallel system for algorithms.

Tax Silicon Valley fairly. The titans of technology dominate the list of the most valuable companies on the planet. And yet, they are still coddled by tax law as if they were an emerging industry. It is time for Silicon Valley to pay unto Caesar — not least so that we plebeians can use the tax revenue to fix the things they keep breaking, like journalism, for example.

Aggressive competition policy. Before we start a decade-long trust-busting crusade, let's begin with a competition policy agenda that delivers immediate, tangible value. This might include restrictions on the acquisition of up-and-coming competitors, structural separation of behaviour-tracking and ad-targeting businesses, and consumer data portability from one service provider to another.

Improve digital security. What the Russians did in 2016 to exploit digital media should be a wake-up call. Without unleashing a surveillance dragnet, we need effective capabilities to counter foreign disinformation operations using measures such as “know your customer” rules for ad buyers and closing down the armies of fake accounts.

Transform civic literacy, and scale civic journalism. As social-media users, we all own part of this problem. It is our appetite for sensationalism, outrage and conspiracy that creates the audience for disinformation. Instead of relying on tech-funded literacy campaigns, the government needs to rebuild our civic literacy from the ground up, and couple these efforts with serious investments and policy changes to reinvigorate public service and accountability journalism.

Ironically, Facebook's own conduct has awoken its vast user base to the need for a new generation of internet regulation. And with the United States mired in the politics of Donald Trump and the European Union slowed by a complex bureaucracy, there is an opportunity for Canada to present this new vision. But we will only be effective if the rigour and scale of our response are commensurate with the threat posed to our democracy.


Ungoverned Space

I have an essay in CIGI’s new data governance series, called Ungoverned Space: How Surveillance Capitalism and AI Undermine Democracy. My key points are:

  • The threat to democracy from misinformation is enabled by two structural problems in our digital infrastructure: the way data is collected and monetized (surveillance capitalism), and how our reality is algorithmically determined through artificial intelligence (AI).
  • Governments face a particular challenge in governing platforms as any efforts must engage with issues of competing jurisdiction, differing notions of free speech and large-scale technological trends toward automation.
  • Policy mechanisms that enable the rights of individuals (data protection and mobility) are likely to be more effective than those that seek to limit or regulate speech.

Full essay is here.

And here is a video that CIGI produced to accompany the article:


Public Salon talk

I recently had the opportunity to give a talk at Sam Sullivan's Public Salon in Vancouver, a great regular event hosted by the former mayor and current MLA. My talk was on the design problems at the core of our current crisis of misinformation. In short, I conclude: "Facebook didn't fail when it used AI to match foreign agitators with micro-targeted US voter audiences, or offered 'How to burn jews' as an ad group; it is actually working as it was designed. And it is this definition of 'working' and this design which presents the threat to our democracy, which needs to be held accountable, and for which governance oversight is urgently needed."


How safe are Canada’s elections from fake news on Facebook?

Here is an interview I recently did on CBC's The Current on the digital threat to the next Canadian election. My argument is that a focus on discrete threats (from, say, Russia) is distracting us from what is ultimately a structural problem: the very design of Facebook is the root cause. Until we start talking about this root cause, and begin quickly testing policies that both address the flaws in this design and hold platforms accountable for its social outcomes, we are missing the plot. Governments that continue to make the policy choice of self-regulation will soon also have to answer for these outcomes. Here is the episode page, and below is the full audio (my segment starts at 8:00).