Category: Uncategorized

Globe and Mail Oped: We can save democracy from destructive digital threats

I had the privilege of speaking to the Federal Cabinet retreat this week. Details from the event can be found here. I was there to address the challenges of misinformation and disinformation in relation to the upcoming election. This oped, published in advance of the retreat, provides some context for the issue, and is based on Democracy Divided, the recent report that Ed Greenspon and I wrote.

A decade ago, governments and regulators allowed Wall Street to run amok in the name of innovation and freedom until millions of jobs were lost, families were forced from their homes and trust in the financial system was decimated.

Today, the same kinds of systemic risks – so-called because the damage ripples way beyond its point of origin – are convulsing the information markets that feed our democracy.

The growth of the internet has resulted in tremendous opportunities for previously marginalized groups to gain voice, but an absence of a public-interest governance regime or even a civic-minded business ethos has resulted in a flood of disinformation and hate propagated by geopolitical, ideological, partisan and commercial operatives.

The result is that the giant digital platforms that now constitute a new public sphere are far too often being used to weaponize information, with a goal of deepening social divisions, fostering unrest and ultimately undermining democratic institutions and social cohesion. As we’ve seen in other countries, the integrity of elections themselves is at risk.

What can be done?

Some people say we need to invest in digital literacy. This is true, as is the broader need to increase civic knowledge and sharpen critical thinking skills. Yet this isn’t sufficient in itself. When Lake Erie was badly polluted a generation ago, signs were erected along the beaches warning swimmers to stay out of the water. But governments also passed laws and enforced regulations to get at the source of the pollution.

Others say these issues are not present in Canada. That would be a welcome kind of exceptionalism if remotely true. But misogynists, racists and other hate groups foment resentment online against female politicians and just about anyone else. Both the Quebec City mosque shooter and the suspect in the Toronto van attack were at least partially radicalized via the internet. That said, research into digital threats to our democracy is so thin in this country that we know almost nothing about who is purchasing our attention or exploiting our media ecosystem. There’s certainly no basis for complacency about protecting Canada’s 2019 federal election against attacks that would never be tolerated if they manifested themselves physically rather than digitally.

Here are some measures that merit serious consideration. First, the Elections Act needs to be reformed to bring complete transparency to digital advertising. Publishers and broadcasters are legally obligated to inform their audiences about who purchases political ads in election campaigns. Canadians have the same right to know who is paying for digital ads and at whom they are being targeted.

Second, we need to do more to ensure that individuals exercise greater sovereignty over the data collected on them and then resold to advertisers or to the Cambridge Analyticas of the world. This means data profiles must be exportable by users, algorithms and AI must be explainable, and consent must be freely, clearly and repeatedly given – not coerced through denial of service.

Third, platforms such as YouTube, Facebook and Twitter need to be made subject to the same legal obligations as newspapers and broadcasters for defamation, hate and the like. Some people say this would amount to governments getting into the censorship business. That’s simply wrong; newspaper publishers and editors abide by these laws – or face the consequences – without consulting government minders. These digital platforms use algorithms to perform the same functions as editors: deciding which readers will see which content, and with what prominence.

A fake-news law would be a trickier proposition, but it is not impossible to think anew about a statute that existed in Canada’s Criminal Code from 1892 until 1992, when it was struck down as unconstitutional in a split decision. It said that anyone who “wilfully publishes a statement, tale or news that he knows is false and that causes or is likely to cause injury or mischief to a public interest is guilty of an indictable offence.” The key words here are “wilfully” and causing “injury” to the public interest. We’re not sure such a measure is warranted, but as with the 1960s commission that recommended Canada’s hate laws, we think it’s worth public discussion.

In the new digital public sphere, hate runs rampant, falsehood often outperforms truth, emotion trumps reason, and extremism muscles out moderation. These aren’t accidents. They are products of particular structures and incentives. Let’s get with the program before democracy has its own Great Recession.

Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere

Ed Greenspon and I have just published a report as a collaboration between the UBC School of Public Policy and Global Affairs and the Public Policy Forum, called Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere. The report outlines what we see as a structural problem in our information ecosystem that has led to the current flood of mis- and disinformation, and details a range of policy ideas being discussed and tested around the world.

The report can be downloaded here.

And the Introduction is below.

Introduction:
For more than a quarter-century, the internet developed as an open web—a system to retrieve and exchange information and ideas, a way of connecting individuals and building communities and a digital step forward for democratization. It largely remains all these things. Indeed, the internet is supplanting the old concept of a public square, in which public debate occurs and political views are informed and formed, with a more dynamic and, in many ways, inclusive public sphere. But along the way, particularly in the last half-dozen years, the “open internet” has been consolidated by a handful of global companies and its integrity and trustworthiness attacked by malevolent actors with agendas antithetical to open societies and democratic institutions. These two phenomena are closely interrelated in that the structures, ethos and the economic incentives of the consolidators—Google (and YouTube), Facebook and Twitter in particular—produce an incentive system that aligns well with the disseminators of false and inflammatory information.

The digital revolution is famous for having disrupted broad segments of our economy and society. Now this disruption has come to our democracy. The Brexit referendum and the 2016 American election awakened the world to a dark side of digital communications technologies. Citizens and their governments are learning that a range of actors—foreign and domestic, political and economic, behaving in licit and illicit ways—can use disinformation, hate, bullying and extremist recruitment to erode democratic discourse and social cohesion both within and outside of election periods. And the problem is getting worse.

By and large, the internet has developed within a libertarian frame as compared, for instance, to broadcasting and cable. Until recently there was an almost reflexive assumption that public authorities had little or no role to play. To some extent, the logic flows from a view that the internet is not dependent on government for access to spectrum, so no justification exists for a government role. So long as the internet evolved in ways consistent with the public interest and democratic development, this logic – although flawed – was rarely challenged. And so governments around the world – and tech companies, too – were caught flat-footed when they discovered the internet had gone in directions unanticipated and largely unnoticed.

Today, the question is how to recapture and build on the values of the open internet so that it continues to promote the public good without also facilitating the run-off of social effluents and contaminants that pollute public discourse and the very security of open societies. “Keeping the web open isn’t enough,” said World Wide Web founder Tim Berners-Lee in 2017. “We need to make sure that it’s used in a way that’s constructive and promotes truth and supports democracy.”

It is not surprising that more than 50 years after its creation and a quarter century following the development of the World Wide Web, a sweeping review is required. With this paper, we seek to explore the fundamental challenges that have arisen. We will offer a range of policy options for consideration because there is no single fix. We do so understanding that the combination of the urgency and novelty of these threats creates a tension of needing to execute corporate and public policy in quick order yet with high precision given the possibility of unintended consequences to innovation and free expression. Nobody wants to suppress individual rights on the way to rebuilding trust or discourage the pioneering spirits that have made the internet so central to our lives. Yet doing nothing is not an option either; the current track is unacceptable for both civic life and fair and open marketplaces.

In some cases, this report will suggest actions; in others, the need for more study and more public engagement. In all instances, we believe that certain behaviours need to be remedied; that digital attacks on democracy can no more be tolerated than physical ones; that one raises the likelihood of the other in any case; and that a lowering of standards simply serves to grant permission to those intent on doing harm.

On April 5-6, 2018, PPF and the University of British Columbia’s School of Public Policy and Global Affairs convened a mix of subject-matter experts, public officials and other interested parties from academia, philanthropy and civil society. This workshop flowed out of PPF’s 2017 report, The Shattered Mirror: News, Democracy and Truth in the Digital Age, which provided a diagnostic of the deteriorating economics of journalistic organizations, an analysis of negative impacts on Canadian democracy and recommendations for improving the situation.

Named in recognition of a 1970 Senate of Canada study of mass media called The Uncertain Mirror, the PPF report noted that in the intervening decades this mirror has cracked and shattered under the pressure of content fragmentation, revenue consolidation and indifference to truth. Now we are speaking of the need for the internet to become a more faithful mirror of the positive attributes of greater human connectivity. This latest piece of work is part of continuing efforts by PPF to work with a wide range of partners in addressing two distinct but intertwined strands (think of a double helix in biology): how to sustain journalism and how to clean up a now-polluted – arguably structurally so – internet.

The April workshop succeeded in sharing and transferring knowledge about recent developments, and what might be done about them, among experts and policy-makers. It was capped by a public event featuring some of the leading thinkers in the world on the state of the digital public sphere. This report advances the process by canvassing a range of possible policy responses to a rapidly evolving environment replete with major societal consequences still in the process of formation.

PPF hosted a follow-up workshop on May 14-15, 2018, which brought international and Canadian experts together to discuss policy and industry responses to disinformation and threatening speech online, a report from which will be published in the fall.

The report is divided into three parts:

  • A discussion of the forces at play;
  • The assumptions and principles underlying any actions; and
  • A catalogue of potential policy options.

We submit Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere in the hopes of promoting discussion and debate and helping policy-makers steer the public sphere back toward the public good.

Globe and Mail oped: The era of Big Tech self-governance has come to an end

Here is my piece in The Globe and Mail on the Zuckerberg hearings:

The era of Big Tech self-governance has come to an end

Twenty years ago, another young Silicon Valley tycoon was grilled in front of the U.S. Congress. Then, as this week, Congressional leaders grandstanded, asked long-winded questions, and showed at times shocking ignorance about how technology worked. And then, as this week, a tech CEO was contrite, well-rehearsed, and obfuscated on key aspects of his business practices.

But the hearings had consequences. They led to an anti-trust lawsuit brought against Microsoft by the U.S. Department of Justice and the Attorneys General of 20 U.S. states. Instead of trusting Bill Gates and Microsoft to behave better or act differently, the government punished them for perceived wrongdoings.

This is how democratic governance is supposed to work. We don’t simply trust citizens and corporations to act for the benefit of society; we impose rules, regulations and appropriate punishments to incentivize them to do so.

In the years since Mr. Gates’s testimony, a new generation of digital technology monopolies has emerged, reshaping online life and concentrating activity on a series of giant, global platforms. And they have done so in a policy context virtually devoid of regulation.

But in 2018, it’s hard to ignore the many troubling cases of abuse regularly perpetrated on and by platforms, from the manner in which the Russian government used the tools provided by companies such as Facebook and Google to interfere in the 2016 U.S. election, to the way in which hate groups in countries such as Myanmar have organized mass violence against minority populations.

Both the government and Mark Zuckerberg know that citizens are finally paying attention to the political impact of Facebook and its effect on our elections, that citizens are understandably concerned about the way Facebook has repeatedly and consistently flouted and neglected user privacy, and that they are concerned about the hateful and divisive character of the civic discourse that results from Facebook’s business model.

And so this week the era of Silicon Valley self-regulation came to an end. It’s now time for a difficult debate about how the new internet – an internet of multinational corporations, and of platforms – will be governed.

While Congressmen and Mr. Zuckerberg appeared to agree that they could work together to develop the “right” regulations, this week’s hearing revealed clear tensions on several key policy issues.

First, while Mr. Zuckerberg says that Facebook now supports digital advertising transparency laws that they had previously lobbied against, it is unclear whether the proposed Honest Ads Act will go far enough or whether it will even pass.

Second, on privacy: The world is watching the response to Europe’s General Data Protection Regulation (GDPR), and while Mr. Zuckerberg argued that the privacy tools Facebook will roll out in response to the GDPR will be available in other markets, the U.S. (and Canada) still seem unwilling to enshrine the punitive mechanisms that will be needed to ensure these new data rights. While he claims to support the principles of the GDPR, the details will be litigated in European courts for years to come.

Third, when pressed on whether they have any competitors, Mr. Zuckerberg strained to name any. Having aggressively acquired many potential competitors, Facebook – as well as Google and Amazon – will surely fight hard against a new generation of competition policy.

Fourth, Mr. Zuckerberg surprised many by agreeing that Facebook is responsible for the content on its platform. While this seems anodyne, the debate over whether Facebook is a neutral platform or a media company is rife with legal and regulatory implications.

Finally, Mr. Zuckerberg suggested that lawmakers should focus their attention on governing artificial intelligence. They repeatedly changed the subject. Because Facebook operates at a mind-boggling global scale, it uses AI to implement and even determine its policies, regulations and norms. How states will in turn govern these algorithms is certain to be a central challenge for democracy. Mr. Zuckerberg knows it; Congress was uninterested.

Over the past 20 years, the internet has shown flashes of its empowering potential. But the recent Facebook revelations also demonstrate what can happen if we fail to hold it accountable.

Mr. Zuckerberg’s testimony is only the beginning of a long-overdue conversation about whether we will govern platforms or be governed by them.

Globe and Mail oped: The new rules for the internet

There has been lots of discussion lately about regulating social media, but much less about what this might look like. Ben Scott (a former tech-policy adviser to Obama and Clinton) and I suggest some options in The Globe and Mail. In short, it will take a broad new approach to how we think about governing the internet. The piece is here, and below.

The new rules for the internet – and why deleting Facebook isn’t enough

It is easy to be pessimistic about the depressing tableau of Silicon Valley malfeasance, but let us not forget that the internet has brought tremendous value to our society. The answer, therefore, is not to lock down the open internet or even to delete Facebook (however satisfying that might feel; with 2.2 billion users, it is embedded in our society). Instead, we urgently need new democratic rules for the internet that enhance the rights of citizens, protect the integrity of our public sphere and tackle the structural problems of our current digital economy.

Here are seven ideas:

Data rights. Much of the internet economy is built on trading personal data for free services, with limited consumer protection. This model has metastasized into a vast complex of data brokers and A.I.-driven micro-targeting, with monopolists such as Google and Facebook at the centre. With the curtain pulled back, there may at last be political will to build a rights-based framework for privacy that adapts as technologies change. For starters, we need major new restrictions on the political exploitation of personal data (including by political parties themselves, which remain exempt from our privacy law) and much greater user control over how data is collected and used. Europe’s new General Data Protection Regulation sets a high standard, though since it took 10 years to legislate, it was out of date before it was implemented. We must evolve it to the next level.

Modernize and enforce election law. Few dispute that citizens deserve to know who is trying to sway them during elections, but our laws were designed for TV and radio. We need to update them for the internet era, where ads can be purchased from anywhere, disguised as normal social media posts, micro-targeted to polarize voters, and loaded up with sensational and divisive messages. All online ads should carry a clearly visible cache of information that states who bought them, the source of the funds, how much they spent, who saw them, and the specific targeting parameters they selected.
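To make the disclosure idea above tangible, here is a minimal sketch in Python of what a machine-readable record attached to an online political ad might contain. Every field name and value below is a hypothetical illustration for discussion, not a proposed standard or anything specified in the piece:

```python
from dataclasses import dataclass, asdict

@dataclass
class AdDisclosure:
    """Hypothetical record a regulated online political ad could be required to carry."""
    buyer: str           # who bought the ad
    funding_source: str  # where the money came from
    spend_cad: float     # how much was spent
    impressions: int     # how many people saw it
    targeting: dict      # the specific targeting parameters selected

# Example: the cache of information a voter could inspect on any ad
ad = AdDisclosure(
    buyer="Example Advocacy Group",
    funding_source="Example Fund",
    spend_cad=1500.0,
    impressions=42000,
    targeting={"region": "Ontario", "age": "18-34", "interest": "politics"},
)

# A structured export regulators or researchers could audit at scale
print(asdict(ad))
```

The point of a structured record like this, rather than a free-text disclaimer, is that it could be collected and audited at scale by regulators, researchers and journalists.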

Audit artificial intelligence. Facebook and Google monetize billions of data points a day, using powerful A.I. to target and influence specific audiences. The social and ethical implications of A.I. are a blinking red light as this technology advances, and we need to lay some ground rules for accountability. Just as we require drug manufacturers and carmakers to submit to rigorous public safety checks, we need to develop a parallel system for algorithms.

Tax Silicon Valley fairly. The titans of technology dominate the list of the most valuable companies on the planet. And yet, they are still coddled by tax law as if they were an emerging industry. It is time for Silicon Valley to pay unto Caesar — not least so that we plebeians can use the tax revenue to fix the things they keep breaking, like journalism, for example.

Aggressive competition policy. Before we start a decade-long trust-busting crusade, let’s begin with a competition policy agenda that delivers immediate, tangible value. This might include restrictions on the acquisition of up-and-coming competitors, structural separation of behaviour-tracking and ad-targeting businesses, and consumer data portability from one service provider to another.

Improve digital security. What the Russians did in 2016 to exploit digital media should be a wake-up call. Without unleashing a surveillance dragnet, we need effective capabilities to counter foreign disinformation operations using measures such as “know your customer” rules for ad buyers and closing down the armies of fake accounts.

Transform civic literacy, and scale civic journalism. As social-media users, we all own part of this problem. It is our appetite for sensationalism, outrage and conspiracy that creates the audience for disinformation. Instead of relying on tech-funded literacy campaigns, the government needs to rebuild our civic literacy from the ground up, and couple these efforts with serious investments and policy changes to reinvigorate public service and accountability journalism.

Ironically, Facebook’s own conduct has awoken its vast user base to the need for a new generation of internet regulation. And with the United States mired in the politics of Donald Trump and the European Union slowed by a complex bureaucracy, there is an opportunity for Canada to present this new vision. But we will only be effective if the rigour and scale of our response are commensurate with the threat posed to our democracy.

Ungoverned Space

I have an essay in CIGI’s new data governance series, called Ungoverned Space: How Surveillance Capitalism and AI Undermine Democracy. My key points are:

  • The threat to democracy from misinformation is enabled by two structural problems in our digital infrastructure: the way data is collected and monetized (surveillance capitalism), and how our reality is algorithmically determined through artificial intelligence (AI).
  • Governments face a particular challenge in governing platforms as any efforts must engage with issues of competing jurisdiction, differing notions of free speech and large-scale technological trends toward automation.
  • Policy mechanisms that enable the rights of individuals (data protection and mobility) are likely to be more effective than those that seek to limit or regulate speech.

Full essay is here.

And here is a video that CIGI produced to accompany the article:

Public Salon talk

I recently had the opportunity to give a talk at Sam Sullivan’s Public Salon in Vancouver. A great regular event hosted by the former mayor and current MLA. My talk was on the design problems at the core of our current crisis of misinformation. In short, I conclude: “Facebook didn’t fail when it used AI to match foreign agitators with micro-targeted US voter audiences, or offered ‘How to burn jews’ as an ad group, it is actually working as it was designed. And it is this definition of “working” and this design which presents the threat to our democracy, which needs to be held accountable, and for which governance oversight is urgently needed.”

How safe are Canada’s elections from fake news on Facebook?

Here is an interview I recently did on CBC’s The Current on the digital threat to the next Canadian election. My argument is that a focus on discrete threats (from, say, Russia) distracts us from what is ultimately a structural problem. It is the very design of Facebook that is the root cause. Until we start talking about this root cause, and begin quickly testing policies that both address the flaws in this design and hold its social outcomes accountable, we are missing the plot. Governments that continue to make the policy choice of self-regulation will soon also have to answer for these outcomes. Here is the episode page, and below is the full audio (my segment starts at 8:00).

How Internet Monopolies Threaten Democracy

I have been thinking a lot lately about the internet and what it means for journalism and democracy. I am currently writing a book on the topic, so will have much more to say soon. But last month I had the honour of giving the Dalton Camp Lecture in Journalism, which gave me the chance to summarize some of my latest thinking on, and feelings about, this problem. The lecture just aired on an episode of CBC IDEAS, and can be found here.

This is the summary from the IDEAS site:

How Internet Monopolies Threaten Democracy (The 2017 Dalton Camp Lecture):  The internet began with great hope that it would strengthen democracy. Initially, social media movements seemed to be disrupting corrupt institutions. But the web no longer feels free and open, and the disenfranchised are feeling increasingly pessimistic. The unfulfilled promise of the internet has been a long-term concern of Digital Media and Global Affairs expert Dr. Taylor Owen, who delivers the 2017 Dalton Camp Lecture in Journalism. He argues the reality of the internet is now largely one of control, by four platform companies — Google, Facebook, Amazon and Apple — worth a combined $2.7 trillion — and their impact on democracy is deeply troubling.

The episode can be streamed HERE:

The Podcast can be downloaded HERE.

They also asked me to write a short intro letter framing the episode, the text of which is included below:

Dear IDEAS Listener,

I am hoping to entice you to listen to my lecture and interview on IDEAS.

Why?

Because one of the greatest challenges to democracy is happening right under our noses. In fact, we are full participants, with most of us not even realizing it.

Four internet platforms — Facebook, Google, Amazon, Apple — increasingly control our lives, our opinions, our democracy. We urgently need to start talking about how we are going to respond as a society.

Here’s some context:

Over the past year, I have begun to write and speak more publicly and with greater alarm over what I believe to be a growing crisis in our democracies. I have long studied and promoted the positive attributes of digital technologies, but my concern about the influence of internet platforms on how we live is deepening. And my concerns are shared more and more by those I work with and admire. Something fundamental has shifted in the debate about the internet.

But my view is also often met with surprise. The internet has become so normalized, so entwined in people’s lives that questioning its impact can feel jarring. The result is that I am regularly approached with two questions. Why is this happening now? And what on earth can be done about it? Let me spend a moment on each of these questions, and I hope you will be interested in listening to my wider argument in the program.

First, why now? Or, put another way, why are we seeing a crescendo of serious global concern over a set of technologies that have been seen largely as democratizing forces for over a decade?

I believe the answer lies in the structure of the internet that we have built. Far from the decentralized web imagined by its founders, the internet of today is mediated by four global platform companies: Facebook, Google, Amazon and Apple. These companies shape our digital lives, and increasingly what we know, how we know it, and ultimately who we are. They determine our public sphere, the character of our civic discourse, and the nature of our democratic society.

What’s worth underlining is that while these companies shape our public sphere, they do so as private actors. They are publicly traded companies with boards of directors that have fiduciary responsibilities to make more than they did the year prior. In the case of Google and Facebook, this dynamic means collecting and selling more data about their users, incentivizing greater volumes of engagement, and maximizing the time we spend on their sites. These incentives have a pernicious effect on our civic discourse, leading to what I believe is an epistemological and ontological crisis in our democracy. Our common grounding and ability to act as a collective are being undermined.

Which brings me to the second question I am regularly asked: what can we do about this? I think there are two answers: an individual one, and a collective one.

We must take ownership of our digital lives. This does not mean simple digital literacy — trying to spot misinformation and hoaxes. The algorithms shaping our digital experiences are far more sophisticated at nudging our behaviour than this.

It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our lives and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data it collects about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. is creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth giving Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’. So I have begun to change my behaviour accordingly.

But acting as individuals is insufficient. Platform companies are among the largest and most profitable in the world. They shape the internet, serve as the world’s marketplace, and are even planning and developing our cities. Their scale and power demand a collective response. This will mean altering our model of governance.

These companies simply must be brought into the fold of the laws and norms of democratic society. This doesn’t mean forcing them into old governance paradigms. Nor does it mean blindly letting them scale growth in our markets and in our lives. The task is more challenging. It demands a rethinking of how we enforce collective constraints on a new type of economic and social actor in our society.

There is no doubt in my mind that how we choose to govern technology is the central question facing democracy itself in our time. But how this governance will work is not predetermined; the responsibility to insist on its creation begins with us. That responsibility requires, first and foremost, better understanding of – and speaking out against – the ways technologies shape our lives and our society.

And I hope that my lecture contributes to this rethinking, and that you’ll listen in.