

Recent writing

I am in the process of building a new site, but in the meantime, here is some recent writing.

We have the regulatory tools we need to fix Facebook, Globe and Mail, October 13, 2021, Taylor Owen, Beverley McLachlin, and Peter MacLeod

In the digital age, who has a right to be anonymous and whose information has a right to be secure? Globe and Mail

China’s Digital Dystopia Threatens Us All, National Post

Canada shouldn’t turn away from the difficult task of regulating online speech, Globe and Mail

Is Big Tech Ungovernable?, Globe and Mail

To govern Big Tech, listen to those most harmed by it, The National Post

Trump’s social-media ban clouds a bigger crisis: the power and systemic failure of Big Tech, Globe and Mail


Election integrity and platform governance updates

I haven’t been doing a good job of updating this site, and a full refresh is in development, but I wanted to pin some recent work at the top. Below is information on the Canadian election monitoring project, on platform governance, and on a range of related writing, talks and media.

Platform Governance

As part of my work with the Centre for International Governance Innovation I have recently published two projects.

The first is a paper, The Case For Platform Governance, in which I argue for platform governance as a frame for organizing a wide range of often disparate digital policies, and I propose a typology for a comprehensive platform governance agenda.

The second is an essay series, Models for Platform Governance, which I edited and which features 16 pieces proposing models for platform governance. My introduction to the series is here.

The Digital Democracy Project

This is a joint initiative led by the Max Bell School of Public Policy at McGill University and the Public Policy Forum. We are currently conducting a large-scale monitoring project of the Canadian election.

The project has two components: an online monitoring effort, led by Derek Ruths at McGill, that collects and analyzes social and traditional media data; and a survey effort, led by Peter Loewen at the University of Toronto, that conducts weekly national surveys and a metered online-consumption study. Together we are seeking to track the online media ecosystem during the election and to assess behaviour change based on exposure to both political and media narratives as well as disinformation. We published weekly reports throughout the election and will produce an extensive post-election report. We have released seven reports to date, which can be found here.

I am also the Co-PI of the Digital Election Research Challenge, with Elizabeth Dubois, which is funding 18 research teams to study the online ecosystem during the Canadian election. Further details of this collaboration can be found here.

Finally, I am involved in the production of an election podcast called Attention Control. The show is hosted by Kevin Newman and produced by Antica. I do weekly segments on the show, our research is integrated into the podcast, and a detailed interview about the project can be found in the second half of this episode (starts at 21:28). Some reflections on the election can be found in our post-election episode here.

Recent Media

Recent Writing

Recent Presentations


Statement to the International Grand Committee on Big Data, Privacy and Democracy

Last week I had the privilege of appearing before the International Grand Committee (representatives from 12 countries) alongside Maria Ressa, Shoshana Zuboff, Jim Balsillie, Heidi Tworek, Jason Kint, Ben Scott and Roger McNamee. After years of working on this wicked set of problems, it was a major milestone to see smart and focused lawmakers zero in on the structural problems at the core of this issue. They agree on the problem; now it’s time to act.

Here is the video and text of my statement.


Co-Chairs Zimmer and Collins, Committee Members;

Thank you for having me; it is an honor to be here. I am particularly heartened because even three years ago a meeting like this would have seemed unnecessary to many in the public, the media, the technology sector, and to governments themselves.

But we are now in a very different public policy moment, about which I will make five observations.

First, self-regulation (and most forms of co-regulation) has proven, and will continue to prove, insufficient to this problem. As in the lead-up to the 2008 financial crisis, the financial incentives are powerfully aligned against meaningful reform. These are publicly traded, largely unregulated companies, whose shareholders and directors expect growth by maximizing a revenue model that is itself part of the problem. This growth may or may not be aligned with the public interest.

Second, the problem is not one of bad actors but one of structure. Disinformation, hate speech, election interference, privacy breaches, mental health issues, and anti-competitive behavior must be treated as the symptoms of the problem, not its cause. Public policy should therefore focus on the design and the incentives of the platforms themselves.

It is the design of the attention economy which incentivizes virality and engagement over reliable information. It is the design of the financial model of surveillance capitalism which incentivizes data accumulation and its use to influence our behavior. It is the design of group messaging, which allows for harmful speech, even the incitement of violence, to spread without scrutiny. It is the design for global scale that has incentivized imperfect automated solutions to content filtering, moderation and fact checking. And it is the design of our unregulated digital economy that has allowed our public sphere to become monopolized.

If democratic governments determine that this structure is leading to negative social and economic outcomes, then it is their responsibility to govern.

Third, governments that are taking this issue seriously are converging on a similar platform-governance agenda. This agenda recognizes that there are no silver bullets, and that policies must instead be domestically implemented and internationally coordinated across three domains.

Content policies which seek to address a wide range of both supply and demand issues about the nature, amplification, and legality of content in our digital public sphere.

Data policies which ensure that public data is used for public good and that citizens have far greater rights over the use, mobility and monetization of their data.

And competition policies which promote free and competitive markets in the digital economy.

Fourth, the propensity in the platform governance conversation to overcomplicate solutions serves the interests of the status quo. There are actually many sensible policies that could and should be implemented immediately.

The online ad microtargeting market must be made radically more transparent, and in some cases suspended entirely.

Data privacy regimes should be updated to provide far greater rights to individuals and greater oversight and regulatory power to punish abuses.

Tax policy can be modernized to better reflect the consumption of digital goods and to crack down on tax base erosion and profit shifting.

Modernized competition policy can be used to restrict and roll back acquisitions and to separate platform ownership from application or product development.

Civic media can be supported as a public good.

And large-scale, long-term civic literacy and critical-thinking efforts can be funded by national governments.

That few of these have been implemented is a problem of political will, not policy or technical complexity.

Finally, there are three policy questions for which there are no easy solutions, no meaningful consensus and no appropriate existing institutions, and where there may be irreconcilable tensions between the design of the platforms and the objectives of public policy.

The first is how we regulate harmful speech in the digital public sphere. At the moment, we have largely outsourced the application of national laws, as well as the interpretation of difficult tradeoffs between free speech and personal and public harms, to the platforms themselves – companies that seek solutions that can be implemented at scale globally. In this case, what is possible technically and financially might be insufficient for the public good.

The second is who is liable for content online? We have clearly moved beyond the notion of platform neutrality and absolute safe harbor, but what legal mechanisms are best suited to holding platforms, their design, and those that run them accountable?

Third, as artificial intelligence increasingly shapes the character and economy of our digital public sphere, how will we bring these opaque systems into our laws, norms and regulations?

These difficult conversations should not be outsourced to the private sector; they need to be led by democratically accountable governments and their citizens. But this is going to require political will and policy leadership – precisely what this committee represents.

Thank you again for this opportunity.


Oped on Christchurch Call, IGC and Digital Charter

Here is a recent oped in the Globe and Mail in advance of my appearance before the International Grand Committee on Big Data, Privacy and Democracy:

Who will answer the Christchurch Call? Nobody, if tech platforms continue ungoverned

Speaking to a technology conference in Paris last week, Prime Minister Justin Trudeau – a leader who has long championed the political and economic benefits of digital technology – channelled our cultural moment of tech backlash.

“What we’re seeing now is a digital sphere that’s turned into the Wild West,” he argued. “And it’s because we – as governments and as industry leaders – haven’t made it a real priority.”

This change in tone came the day after he signed the Christchurch Call – an effort led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron to curb the problem of viral hate speech and violent content online in the wake of a massacre that was livestreamed and distributed on platforms such as Facebook and YouTube.

But while it was a helpful rallying call, the Christchurch compact was also ultimately a missed opportunity. It has no enforceable mandates, it focuses overwhelmingly on technical fixes to what are also political, social and economic problems, and its framing around terrorism and hate speech is far too narrow, treating the symptom of the problem while ignoring the underlying disease. We don’t need to militarize the problem or play Whac-A-Mole with extremists: We need to govern platforms. The Christchurch Call won’t accomplish that.

In its wake, this week, the International Grand Committee on Disinformation and Fake News, a group of parliamentarians from 14 countries, continues its work with a second set of hearings in Ottawa. The work of the committee (before which I will appear as a witness) has become a catalyst for a community of scholars, policy-makers and technologists who believe that a broader conversation about tech governance – one that squarely addresses problems embedded in the design of digital platforms themselves – is long overdue.

These problems include the financial model of what Harvard professor Shoshana Zuboff calls “surveillance capitalism,” by which vast stores of data about our lives are used to target content designed to change our behaviour. They also include the way platforms manage their vast scale using opaque, commercially driven and poorly understood algorithmic systems, and the market dominance of a small number of global companies with rapidly growing purchase on our social, political and economic lives.

While governments have been slow to take on the challenge of governing big tech, those that have turned their attention to this policy space in a serious way are coming to markedly similar conclusions: in short, that there are no silver bullets for the social and economic costs caused by the platform economy.

Instead, governments in France, Germany, Britain, the European Union, New Zealand, Australia, Canada and even a growing number of political leaders in the United States are articulating a need for a broad and comprehensive platform-governance agenda that is both nuanced to account for domestic differences in areas such as free-speech laws, and internationally coordinated to create sufficient market pressure.

The contours of this agenda are taking shape through three policy frameworks. The first: content policies. Democratic governments need to decide whether their speech laws require updating for the digital world and how they will be enforced. At the moment, we have delegated this regulatory role to the platforms, who hire thousands of moderators to enforce their terms of service agreements. Democratizing this system will involve difficult decisions around liability (who should be liable for speech online, the individual who spoke, or the company that amplified, and profited off, the speech?), moderation (who is responsible for implementation, the platforms who host and filter content, or the governments that are ultimately democratically accountable?) and transparency (how can we bring daylight to the secretive art of microtargeting, by which advertisers target and effectively influence narrow bands of people using extremely precise data?). Early experiments in content policy by Germany and France are yielding evidence of what works and what doesn’t, examples upon which other countries can iterate.

Second: data policies. If we believe in the premise that society should be able to leverage public data for the public good, citizens should have far greater rights over the use, mobility and monetization of their data, and regulation must be matched with meaningful enforcement. Even the reported US$5-billion FTC fine against Facebook was seen as inconsequential by the markets. The EU’s General Data Protection Regulation provides an example of such a package, one now being adapted in other jurisdictions, including California, where the California Consumer Privacy Act is poised to push Silicon Valley directly from its home state.

Third: competition policies. The EU and Britain have begun to explore new ways to curb the power of digital giants, and several of the U.S. Democratic presidential candidates have come out in favour of pursuing antitrust regulation. Such efforts could also include restrictions or rollbacks on how services and platforms are acquired and developed, as well as antitrust oversight that accounts for more than price increases in judging a company’s market power, but also how much data it controls, whether it constrains innovation and whether it threatens consumer welfare.

On Monday, Innovation Minister Navdeep Bains built on Mr. Trudeau’s speech, laying out the 10 principles of a proposed Digital Charter. It’s a signal that Ottawa might finally be ready to take a broader view of its responsibilities. But whether this charter can be more than a collection of digital initiatives and instead become a co-ordinated policy agenda, implemented with the urgency that the problem demands, remains to be seen.


End of year Globe and Mail oped

Some year end reflections and thoughts on the year to come for the Globe and Mail:

Big Tech’s net loss: How governments can turn anger into action

It has been a game-shifting 2018 for Big Tech. It was the year that long-simmering concerns about its potential negative effects on our economy, on our personal lives and even on our democracy broke into public debate.

It was the year that much of the media got serious about tech journalism, when the balance of tech journalism tipped from gadget reviews and chief-executive profiles to treating Silicon Valley as a node of power in society to be held accountable.

It was the year that tech company employees began holding their employers to account. At Google, walkouts were staged over gender policy, and petitions demanded an end to Chinese expansion plans and to the development of “warfare technology.” Protests were held over Microsoft contracts with the U.S. Immigration and Customs Enforcement agency. There was backlash against the use of facial recognition to assist law enforcement at Amazon. And at Facebook, employees started speaking far more openly to the media as the company careened from scandal to scandal.

It was the year that tech executives awoke to their new operating environment. The U.S. Congress and parliaments around the world ordered CEOs, accustomed to being adored as the leaders of venerated companies, to testify and answer tough questions. It was also the year that these same CEOs, whether motivated by sincere interest in fixing structural problems with their products or concern for protecting public image and shareholder value, began making meaningful reforms to their companies, to varying degrees of success.

Finally, and perhaps most consequentially, 2018 was the year that tech companies lost the benefit of the doubt from governments. This was a result of a growing body of investigations, academic research and enterprise reporting detailing the ways in which social platforms have been used to undermine democracy. It also stems from a concern that the economic benefits of the digital economy are flowing predominantly to a small handful of U.S.-based global companies. But the final straw for many legislators was a November article in The New York Times revealing a disconnect between Facebook’s public statements about abuses on their platform and the aggressive tactics being used by executives to fight the story. To many in government, this confirmed that the tech-sector giants should be treated like any other large multinational corporation, and that it’s time to get serious about governing Big Tech.

Luckily, there are some relatively easy places for governments to start. They can bring sunlight to the world of micro-targeted advertising through new transparency laws. They can overhaul data-privacy regimes that are limited in scope, weak in capacity and unco-ordinated globally. They can mandate the identification of automated accounts so that citizens know when they are engaging with a machine or a human. They can modernize tax and competition policy for the digital economy. And they can fund large-scale digital literacy initiatives for citizens of all ages.

But beyond these short-term Band-Aids, 2019 must also be the year we start grappling with a set of thornier questions at the intersection of technology and democracy.

Democratic governments will need to wrestle with how their speech laws apply to the digital world. This is going to require bringing together the private sector and civil society in a hard discussion about the nature and limits of free speech, about who is censored online and how, about responsibilities for moderating speech at scale, and about universal versus national speech norms.

And while the idea that platform companies are simply intermediaries – and therefore not liable for how their services are used – has been foundational to the innovation, growth and empowerment created by the open internet, the sheer breadth of the economic and social services now provided by platforms might demand a more nuanced approach to how they are governed. If this comes at the cost of that innovation, democracies must be allowed to decide about the trade-off.

Such democracies will need to start co-ordinating their public-policy efforts around emerging technologies, too. There is currently a disconnect between the global scale, operation and social impact of technology companies and the national jurisdiction of most countries’ tech laws and regulations. As former BlackBerry co-CEO Jim Balsillie has argued, the digital economy may need its Bretton Woods moment.

How we handle these challenges will set the tone for how we’ll grapple with the even knottier ones that are to come. As de facto public places increasingly involve private interests – such as Alphabet’s planned smart city in Toronto, or Amazon’s competition over which city would earn the right to be home to its HQ2 headquarters – governments will need to lead a conversation about what this collision looks like. What, for instance, would it mean to treat the data created by the citizens of cities as a public good?

And while governments devote substantial resources to growing the business of artificial intelligence, which promises to reshape broad aspects of our lives, we must work ahead to ensure these nodes of decision-making power are brought into the norms of accountability and transparency that we demand in democracies.

This year was defined by outrage against tech – but 2019 will be the year that the long and messy process of governing it begins.


Globe and Mail Oped: We can save democracy from destructive digital threats

I had the privilege of speaking to the Federal Cabinet retreat this week. Details from the event can be found here. I was there to address the challenges of misinformation and disinformation in relation to the upcoming election. This oped, published in advance of the retreat, provides some context to this issue and is based on Democracy Divided, the recent report Ed Greenspon and I wrote.


A decade ago, governments and regulators allowed Wall Street to run amok in the name of innovation and freedom until millions of jobs were lost, families were forced from their homes and trust in the financial system was decimated.

Today, the same kinds of systemic risks – so-called because the damage ripples way beyond its point of origin – are convulsing the information markets that feed our democracy.

The growth of the internet has resulted in tremendous opportunities for previously marginalized groups to gain voice, but an absence of a public-interest governance regime or even a civic-minded business ethos has resulted in a flood of disinformation and hate propagated by geopolitical, ideological, partisan and commercial operatives.

The result is that the giant digital platforms that now constitute a new public sphere are far too often being used to weaponize information, with a goal of deepening social divisions, fostering unrest and ultimately undermining democratic institutions and social cohesion. As we’ve seen in other countries, the integrity of elections themselves is at risk.

What can be done?

Some people say we need to invest in digital literacy. This is true, as is the broader need to increase civic knowledge and sharpen critical thinking skills. Yet this isn’t sufficient in itself. When Lake Erie was badly polluted a generation ago, signs were erected along the beaches warning swimmers to stay out of the water. But governments also passed laws and enforced regulations to get at the source of the pollution.

Others say these issues are not present in Canada. That would be a welcome kind of exceptionalism if remotely true. But misogynists, racists and other hate groups foment resentment online against female politicians and just about anyone else. Both the Quebec City mosque shooter and the suspect in the Toronto van attack were at least partially radicalized via the internet. That said, research into digital threats to our democracy is so thin in this country that we know almost nothing about who is purchasing our attention or exploiting our media ecosystem. There’s certainly no basis for complacency about protecting Canada’s 2019 federal election against attacks that would never be tolerated if they manifested themselves physically rather than digitally.

Here are some measures that merit serious consideration. The Elections Act needs to be reformed to bring complete transparency to digital advertising. Publishers and broadcasters are legally obligated to inform their audiences about who purchases political ads in election campaigns. Canadians have the same right to know about who is paying for digital ads and to whom they are being targeted.

Secondly, we need to do more to make sure that individuals exercise greater sovereignty over the data collected on them and then resold to advertisers or to the Cambridge Analyticas of the world. This means data profiles must be exportable by users, algorithms and AI must be explained, and consent must be freely, clearly and repeatedly given – not coerced through denial of services.

Thirdly, platforms such as YouTube, Facebook and Twitter need to be made subject to the same legal obligations as newspapers and broadcasters for defamation, hate and the like. Some people say this would amount to governments getting into the censorship business. That’s simply wrong; newspaper publishers and editors abide by these laws – or face the consequences – without consulting government minders. These digital platforms use algorithms to perform the same functions as editors: deciding which readers will see what content and with what prominence.

A fake news law would be a trickier proposition, but it is not impossible to think anew about a statute that existed in Canada’s Criminal Code from 1892 until 1992, when it was deemed unconstitutional in a split decision. It said that anyone who “wilfully publishes a statement, tale or news that he knows is false and that causes or is likely to cause injury or mischief to a public interest is guilty of an indictable offence.” The key words here are “wilfully” and causing “injury” to the public interest. We’re not sure such a measure is warranted, but as with the 1960s commission that recommended hate laws in Canada, we think it’s worth public discussion.

In the new digital public sphere, hate runs rampant, falsehood often outperforms truth, emotion trumps reason, extremism muscles out moderation. These aren’t accidents. They are products of particular structures and incentives. Let’s get with the program before democracy has its own Great Recession.


Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere

Ed Greenspon and I have just published a report as a collaboration between the UBC School of Public Policy and Global Affairs and the Public Policy Forum, called Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere. The report outlines what we see as a structural problem in our information ecosystem that has led to the current spread of mis- and disinformation, and details a range of policy ideas being discussed and tested around the world.

The report can be downloaded here.

And the Introduction is below.

For more than a quarter-century, the internet developed as an open web—a system to retrieve and exchange information and ideas, a way of connecting individuals and building communities and a digital step forward for democratization. It largely remains all these things. Indeed, the internet is supplanting the old concept of a public square, in which public debate occurs and political views are informed and formed, with a more dynamic and, in many ways, inclusive public sphere. But along the way, particularly in the last half-dozen years, the “open internet” has been consolidated by a handful of global companies and its integrity and trustworthiness attacked by malevolent actors with agendas antithetical to open societies and democratic institutions. These two phenomena are closely interrelated in that the structures, ethos and the economic incentives of the consolidators—Google (and YouTube), Facebook and Twitter in particular—produce an incentive system that aligns well with the disseminators of false and inflammatory information.

The digital revolution is famous for having disrupted broad segments of our economy and society. Now this disruption has come to our democracy. The Brexit referendum and the 2016 American election awakened the world to a dark side of digital communications technologies. Citizens and their governments are learning that a range of actors—foreign and domestic, political and economic, behaving in licit and illicit ways—can use disinformation, hate, bullying and extremist recruitment to erode democratic discourse and social cohesion both within and outside of election periods. And the problem is getting worse.

By and large, the internet has developed within a libertarian frame as compared, for instance, to broadcasting and cable. There has been until recently an almost autokinetic response that public authorities had little or no role to play. To some extent, the logic flows from a view that the internet is not dependent on government for access to spectrum, so therefore no justification exists for a government role. So long as it evolved in ways consistent with the public interest and democratic development, this logic—although flawed—was rarely challenged. And so governments around the world—and tech companies, too—were caught flat-footed when they discovered the internet had gone in directions unanticipated and largely unnoticed.

Today, the question is how to recapture and build on the values of the open internet so that it continues to promote the public good without also facilitating the run-off of social effluents and contaminants that pollute public discourse and the very security of open societies. “Keeping the web open isn’t enough,” said World Wide Web founder Tim Berners-Lee in 2017. “We need to make sure that it’s used in a way that’s constructive and promotes truth and supports democracy.”

It is not surprising that more than 50 years after its creation and a quarter century following the development of the World Wide Web, a sweeping review is required. With this paper, we seek to explore the fundamental challenges that have arisen. We will offer a range of policy options for consideration because there is no single fix. We do so understanding that the combination of the urgency and novelty of these threats creates a tension of needing to execute corporate and public policy in quick order yet with high precision given the possibility of unintended consequences to innovation and free expression. Nobody wants to suppress individual rights on the way to rebuilding trust or discourage the pioneering spirits that have made the internet so central to our lives. Yet doing nothing is not an option either; the current track is unacceptable for both civic life and fair and open marketplaces.

In some cases, this report will suggest actions; in others, the need for more study and more public engagement. In all instances, we believe that certain behaviours need to be remedied; that digital attacks on democracy can no more be tolerated than physical ones; that one raises the likelihood of the other in any case; and that a lowering of standards simply serves to grant permission to those intent on doing harm.

On April 5-6, 2018, PPF and the University of British Columbia’s School of Public Policy and Global Affairs convened a mix of subject matter experts, public officials and other interested parties from academia, philanthropy and civil society.

This workshop flowed out of PPF’s 2017 report, The Shattered Mirror: News, Democracy and Truth in the Digital Age, which provided a diagnostic of the deteriorating economics of journalistic organizations, an analysis of negative impacts on Canadian democracy and recommendations for improving the situation. Named in recognition of a 1970 Senate of Canada study of mass media called The Uncertain Mirror, the PPF report noted that in the intervening decades this mirror has cracked and shattered under the pressure of content fragmentation, revenue consolidation and indifference to truth. Now we are speaking of the need for the internet to become a more faithful mirror of the positive attributes of greater human connectivity.

This latest piece of work is part of continuing efforts by PPF to work with a wide range of partners in addressing two distinct but intertwined strands (think of a double-helix in biology): how to sustain journalism and how to clean up a now-polluted—arguably structurally so—internet. The April workshop succeeded in sharing and transferring knowledge about recent developments and what might be done about them among experts and policy-makers. It was capped by a public event featuring some of the leading thinkers in the world on the state of the digital public sphere. This report advances the process by canvassing a range of possible policy responses to a rapidly evolving environment replete with major societal consequences still in the process of formation.

PPF hosted a follow-up workshop on May 14-15, 2018, which brought international and Canadian experts together to discuss policy and industry responses to disinformation and threatening speech online, a report from which will be published in the fall.

The report is divided into three parts:

  • Discussion on the forces at play;
  • Assumptions and principles underlying any actions; and
  • A catalogue of potential policy options.

We submit Democracy Divided: Countering Disinformation and Hate in the Digital Public Sphere in the hopes of promoting discussion and debate and helping policy-makers steer the public sphere back toward the public good.