Last month I had the pleasure of interviewing Ben Scott about his great New America report “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet” at a Brookfield Institute for Innovation + Entrepreneurship conference called AI + Public Policy: Understanding the shift. The video of our conversation is here:
A short video that the awesome Taylor Gunn and Civix Canada made with me about social platforms, journalism and democracy, for use in Canadian classrooms, focused on grade nine students:
A piece in the Globe and Mail on the Zuckerberg hearings:
The era of Big Tech self-governance has come to an end
Twenty years ago, another young Silicon Valley tycoon was grilled in front of the U.S. Congress. Then, as this week, Congressional leaders grandstanded, asked long-winded questions, and showed at times shocking ignorance about how technology worked. And then, as this week, a tech CEO was contrite, well-rehearsed, and obfuscated on key aspects of his business practices.
But the hearings had consequences. They led to an anti-trust lawsuit brought against Microsoft by the U.S. Department of Justice and the Attorneys General of 20 U.S. states. Instead of trusting Bill Gates and Microsoft to behave better or act differently, the government punished them for perceived wrongdoings.
This is how democratic governance is supposed to work. We don’t have to simply trust citizens and corporations to act for the benefit of society; we impose rules, regulations and appropriate punishments to incentivize them to do so.
In the years since Mr. Gates’s testimony, a new generation of digital technology monopolies has emerged, reshaping online life and concentrating activity on a series of giant, global platforms. And they have done so in a policy context virtually void of regulation.
But in 2018, it’s hard to ignore the many troubling cases of abuse regularly perpetrated on and by platforms, from the manner in which the Russian government used the tools provided by companies such as Facebook and Google to interfere in the 2016 U.S. election, to the way in which hate groups in countries such as Myanmar have organized mass violence against minority populations.
Both the government and Mark Zuckerberg know that citizens are finally paying attention to the political impact of Facebook and its effect on our elections, that citizens are understandably concerned about the way Facebook has repeatedly and consistently flouted and neglected user privacy, and that they are concerned about the hateful and divisive character of the civic discourse that results from Facebook’s business model.
And so this week the era of Silicon Valley self-regulation came to an end. It’s now time for a difficult debate about how the new internet – an internet of multinational corporations, and of platforms – will be governed.
While Congressmen and Mr. Zuckerberg appeared to agree that they could work together to develop the “right” regulations, this week’s hearing revealed clear tensions on several key policy issues.
First, while Mr. Zuckerberg says that Facebook now supports digital advertising transparency laws that they had previously lobbied against, it is unclear whether the proposed Honest Ads Act will go far enough or whether it will even pass.
Second, on privacy: The world is watching the response to Europe’s General Data Protection Regulation (GDPR), and while Mr. Zuckerberg argued that the privacy tools that Facebook will roll out in response to GDPR will be available in other markets, the U.S. (and Canada) still seem unwilling to enshrine the punitive mechanisms that will be needed to ensure these new data rights. While he claims that he supports the principles of the GDPR, the details will be litigated in European courts for years to come.
Third, when pressed on whether they have any competitors, Mr. Zuckerberg strained to name any. Having aggressively acquired many potential competitors, Facebook – like Google and Amazon – will surely fight against a new generation of competition policy.
Fourth, Mr. Zuckerberg surprised many by agreeing that Facebook is responsible for the content on its platforms. While this seems anodyne, the debate over whether Facebook is a neutral platform or a media company is rife with legal and regulatory implications.
Finally, Mr. Zuckerberg suggested that lawmakers should focus their attention on governing artificial intelligence. They repeatedly changed the subject. Since Facebook operates at a mind-boggling global scale, it uses AI to implement and even determine its policies, regulations and norms. How states will in turn govern these algorithms is certain to be a central challenge for democracy. Mr. Zuckerberg knows it; Congress was uninterested.
Over the past 20 years, the internet has shown flashes of its empowering potential. But the recent Facebook revelations also demonstrate what can happen if we fail to hold it accountable.
Mr. Zuckerberg’s testimony is only the beginning of a long-overdue conversation about whether we will govern platforms or be governed by them.
There has been lots of discussion lately about regulating social media but much less on what this might look like. Ben Scott (former tech policy advisor to Obama and Clinton) and I suggest some options in The Globe and Mail. In short, it will take a broad new approach to how we think about governing the internet. The piece is here, and below.
The new rules for the internet – and why deleting Facebook isn’t enough
While being pessimistic about the depressing tableau of Silicon Valley malfeasance is easy, let us not forget that the internet has brought tremendous value to our society. Therefore, the answer is not to lock down the open internet or even to delete Facebook (however satisfying that might feel, with 2.2-billion users it is embedded in our society). Instead, we urgently need new democratic rules for the internet that enhance the rights of citizens, protect the integrity of our public sphere and tackle the structural problems of our current digital economy.
Here are seven ideas:
Data rights. Much of the internet economy is built on trading personal data for free services with limited consumer protection. This model has metastasized into a vast complex of data brokers and A.I.-driven micro-targeting with monopolists such as Google and Facebook at the centre. With the curtain pulled back, there may at last be political will to build a rights-based framework for privacy that adapts as technologies change. For starters, we need major new restrictions on the political exploitation of personal data (including by political parties themselves, who remain exempt from our privacy law) and much greater user control over how data is collected and used. Europe’s new General Data Protection Regulation sets a high standard, though since it took 10 years to legislate, it was out of date before it was implemented. We must evolve it to the next level.
Modernize and enforce election law. Few dispute that citizens deserve to know who is trying to sway them during elections, but our laws were designed for TV and radio. We need to update them for the internet era, where ads can be purchased from anywhere, disguised as normal social media posts, micro-targeted to polarize voters, and loaded up with sensational and divisive messages. All online ads should carry a clearly visible cache of information that states who bought them, the source of the funds, how much they spent, who saw them, and the specific targeting parameters they selected.
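The disclosure requirements described above amount to a simple data record. As a rough sketch only (the field names, values and validation rule are all hypothetical illustrations, not part of any proposed law), such a per-ad disclosure might look like this:

```python
from dataclasses import dataclass

@dataclass
class AdDisclosure:
    """Hypothetical disclosure record attached to one online political ad."""
    buyer: str               # who bought the ad
    funding_source: str      # the original source of the funds
    amount_spent_cad: float  # how much was spent
    impressions: int         # how many people saw it
    targeting: dict          # the specific targeting parameters selected

# An illustrative record with invented values.
ad = AdDisclosure(
    buyer="Example Advocacy Group",
    funding_source="Example Political Fund",
    amount_spent_cad=12500.00,
    impressions=480000,
    targeting={"age": "18-34", "region": "Ontario", "interests": ["pipelines"]},
)

def is_complete(d: AdDisclosure) -> bool:
    """A regulator or researcher could check that no required field is missing."""
    return all([d.buyer, d.funding_source, d.amount_spent_cad > 0,
                d.impressions >= 0, d.targeting])

print(is_complete(ad))  # → True
```

The point of the sketch is that none of this information is exotic: platforms already hold every one of these fields internally; the policy question is only whether they must be published alongside the ad.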
Audit artificial intelligence. Facebook and Google monetize billions of data points a day using powerful A.I. to target and influence specific audiences. The social and ethical implications of A.I. are a blinking red light as this technology advances, and we need to lay some ground-rules for accountability. Just as we require drug manufacturers and car makers to submit to rigorous public safety checks, we need to develop a parallel system for algorithms.
Tax Silicon Valley fairly. The titans of technology dominate the list of the most valuable companies on the planet. And yet, they are still coddled by tax law as if they were an emerging industry. It is time for Silicon Valley to pay unto Caesar — not least so that we plebeians can use the tax revenue to fix the things they keep breaking, like journalism, for example.
Aggressive competition policy. Before we start a decade-long trust-busting crusade, let’s begin with a competition policy agenda that delivers immediate, tangible value. This might include restrictions on the acquisition of up-and-coming competitors, structural separation of behaviour-tracking and ad-targeting businesses, and consumer data portability from one service provider to another.
Improve digital security. What the Russians did in 2016 to exploit digital media should be a wake-up call. Without unleashing a surveillance dragnet, we need effective capabilities to counter foreign disinformation operations using measures such as “know your customer” rules for ad buyers and closing down the armies of fake accounts.
Transform civic literacy, and scale civic journalism. As social-media users, we all own part of this problem. It is our appetite for sensationalism, outrage and conspiracy that creates the audience for disinformation. Instead of relying on tech-funded literacy campaigns, the government needs to rebuild our civic literacy from the ground up, and couple these efforts with serious investments and policy changes to reinvigorate public service and accountability journalism.
Ironically, Facebook’s own conduct has awoken its vast user base to the need for a new generation of internet regulation. And with the United States mired in the politics of Donald Trump and the European Union slowed by a complex bureaucracy, there is an opportunity for Canada to present this new vision. But we will only be effective if the rigour and scale of our response are commensurate with the threat posed to our democracy.
I have an essay in CIGI’s new data governance series, called Ungoverned Space: How Surveillance Capitalism and AI Undermine Democracy. My key points are:
- The threat to democracy from misinformation is enabled by two structural problems in our digital infrastructure: the way data is collected and monetized (surveillance capitalism), and how our reality is algorithmically determined through artificial intelligence (AI).
- Governments face a particular challenge in governing platforms as any efforts must engage with issues of competing jurisdiction, differing notions of free speech and large-scale technological trends toward automation.
- Policy mechanisms that enable the rights of individuals (data protection and mobility) are likely to be more effective than those that seek to limit or regulate speech.
Full essay is here.
And here is a video that CIGI produced to accompany the article:
I recently had the opportunity to give a talk at Sam Sullivan’s Public Salon in Vancouver. A great regular event hosted by the former mayor and current MLA. My talk was on the design problems at the core of our current crisis of misinformation. In short, I conclude: “Facebook didn’t fail when it used AI to match foreign agitators with micro-targeted US voter audiences, or offered ‘How to burn jews’ as an ad group, it is actually working as it was designed. And it is this definition of “working” and this design which presents the threat to our democracy, which needs to be held accountable, and for which governance oversight is urgently needed.”
Here is an interview I recently did on CBC’s The Current on the digital threat to the next Canadian election. My argument is that a focus on discrete threats (from, say, Russia) is distracting us from what is ultimately a structural problem. It is the very design of Facebook that is the root cause. Until we start talking about this root cause, and begin quickly testing policies that both address the flaws in this design and hold its social outcomes accountable, we are missing the plot. Governments that continue to make the policy choice of self-regulation will soon also have to answer for these outcomes. Here is the Episode page, and below is the full audio (my segment starts at 8:00).
I have been thinking a lot about the internet and what it means for journalism and democracy lately. I am currently writing a book on the topic, so will have much more to say soon. But last month I had the honour of giving the Dalton Camp Lecture in Journalism, which gave me the chance to summarize some of my latest thinking on, and feelings about, this problem. The lecture just aired on an episode of CBC IDEAS, and can be found here.
This is the summary from the IDEAS site:
How Internet Monopolies Threaten Democracy (The 2017 Dalton Camp Lecture): The internet began with great hope that it would strengthen democracy. Initially, social media movements seemed to be disrupting corrupt institutions. But the web no longer feels free and open, and the disenfranchised are feeling increasingly pessimistic. The unfulfilled promise of the internet has been a long-term concern of Digital Media and Global Affairs expert Dr. Taylor Owen, who delivers the 2017 Dalton Camp Lecture in Journalism. He argues the reality of the internet is now largely one of control, by four platform companies — Google, Facebook, Amazon and Apple — worth a combined $2.7 trillion — and their impact on democracy is deeply troubling.
The episode can be streamed HERE:
The Podcast can be downloaded HERE.
They also asked me to write a short intro letter framing the episode, the text of which is included below:
Dear IDEAS Listener,
I am hoping to entice you to listen to my lecture and interview on IDEAS.
Because one of the greatest challenges to democracy is happening right under our noses. In fact, we are full participants, with most of us not even realizing it.
Four internet platforms — Facebook, Google, Amazon, Apple — increasingly control our lives, our opinions, our democracy. We urgently need to start talking about how we are going to respond as a society.
Here’s some context:
Over the past year, I have begun to write and speak more publicly and with greater alarm over what I believe to be a growing crisis in our democracies. I have long studied and promoted the positive attributes of digital technologies, but my concern about the influence of internet platforms on how we live is deepening. And my concerns are shared more and more by those I work with and admire. Something fundamental has shifted in the debate about the internet.
But my view is also often met with surprise. The internet has become so normalized, so entwined in people’s lives that questioning its impact can feel jarring. The result is that I am regularly approached with two questions. Why is this happening now? And what on earth can be done about it? Let me spend a moment on each of these questions, and I hope you will be interested in listening to my wider argument in the program.
First, why now? Or, put another way, why are we seeing a crescendo of serious global concerns over a set of technologies which have been seen largely as democratizing forces for over a decade?
I believe the answer lies in the structure of the internet that we have built. Far from the decentralized web imagined by its founders, the internet of today is mediated by four global platform companies: Facebook, Google, Amazon and Apple. These companies shape our digital lives, and increasingly what we know, how we know it, and ultimately who we are. They determine our public sphere, the character of our civic discourse, and the nature of our democratic society.
What’s worth underlining is that while these companies shape our public sphere, they do so as private actors. They are publicly traded companies with boards of directors that have fiduciary responsibilities to make more than they did the year prior. In the case of Google and Facebook, this dynamic means collecting and selling more data about their users, incentivizing greater volumes of engagement, and maximizing the time we spend on their sites. These incentives have a pernicious effect on our civic discourse, leading to what I believe is an epistemological and ontological crisis in our democracy. Our common grounding and ability to act as a collective are being undermined.
Which brings me to the second question I am regularly asked: what can we do about this? I think there are two answers: an individual one, and a collective one.
We must take ownership of our digital lives. This does not mean simple digital literacy — trying to spot misinformation and hoaxes. The algorithms shaping our digital experiences are far more sophisticated at nudging our behaviour than this.
It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our lives and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.
For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data it collects about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. is creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth giving Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’. So I have begun to change my behaviour accordingly.
But acting as individuals is insufficient. Platform companies are among the largest and most profitable in the world. They shape the internet, are the world’s marketplace, and are even planning and developing our cities. Their scale and power demand a collective response. This will mean altering our model of governance.
These companies simply must be brought into the fold of the laws and norms of democratic society. This doesn’t mean forcing them into old governance paradigms. Nor does it mean blindly letting them scale growth in our markets and in our lives. The task is more challenging. It demands a rethinking of how we enforce collective constraints on a new type of economic and social actor in our society.
There is no doubt in my mind that how we choose to govern technology is the central question facing democracy in our time. But how this governance will work is not pre-determined, and the responsibility to insist on its creation begins with us. This responsibility requires first and foremost better understanding, and speaking out against, the ways technologies shape our lives and our society.
And I hope that my lecture contributes to this rethinking, and that you’ll listen in.
I have an oped in the Globe today in reaction to Facebook’s Canadian Election Integrity Initiative.
In short, I think we are missing the structural problem: the system of surveillance capitalism that has resulted in a market for our attention. Here is a twitter thread that elaborates on this, and here is the oped:
The unfolding drama surrounding Silicon Valley and the 2016 U.S. presidential election has brought much needed attention to the role that technology plays in democracies. On Thursday, Facebook announced the Canadian Election Integrity Initiative, the very premise of which invites the question: Does Facebook threaten the integrity of Canadian democracy?
It is increasingly apparent that the answer is yes.
Facebook’s product is the thousands of data points they capture from each of their users, and their customers are anyone who wants to buy access to these profiles. This model is immensely profitable. The company’s annual revenue, nearly all of which comes from paid content, has more than tripled in the past four years to $27.6-billion (U.S.) in 2016. But the Facebook model has also incentivized the spread of low-quality clickbait over high-quality information, enabled a race to the bottom for monetized consumer surveillance, and created an attention marketplace where anyone, including foreign actors, companies or political campaigns, can purchase an audience.
A key feature of the platform is that each user sees a personalized news feed chosen for them by Facebook. This filtering is done through a series of algorithms, which, when combined with detailed personal data, allow ads to be delivered to highly specific audiences. This microtargeting enables buyers to define audiences in racist, bigoted and otherwise highly discriminatory ways, some of questionable legal status and others merely lacking any relation to moral decency.
The Facebook system is also a potent political weapon. It is increasingly clear that Russia leveraged Facebook to purchase hundreds of millions of views of content designed to foment divisions in American society around issues of race, immigration and even fracking. And it’s of course not just foreign actors using Facebook to foster hate. Just this week, Bloomberg reported that in the final weeks of the U.S. election, Facebook and Google employees collaborated with extreme activist groups to help them microtarget divisive ads to swing-state voters.
Even without this targeting, content regularly goes viral regardless of its quality or veracity, disorienting and misleading huge audiences. A recent fake video showing the impact of Hurricane Irma was viewed 25 million times and shared 855,000 times (it is still up).
And here’s the rub: when Facebook hooks up foreign agitators and microtargeted U.S. voters, or amplifies neo-Nazis using the platform to plan and organize the Charlottesville rally, or offers “How to burn jews” as an automatically-generated ad purchasing group, it is actually working as designed. It is this definition of “working” and this design for which Facebook needs to be held publicly accountable.
Some jurisdictions are starting to force this accountability. Germany recently passed a law that would fine social networks up to €50-million for failing to remove hate speech within 24 hours. Britain has proposed treating Facebook like any other media company. The EU is implementing new data privacy laws and is raising anti-trust questions. A U.S. Congressional committee is questioning Facebook, Google and Twitter officials on Russia, with lawmakers likely to impose new online election advertising and disclosure regulations.
Oddly, these policy debates are largely absent in Canada. Instead, Facebook is intertwined in the workings of governments, the development of public policies and the campaigns of political parties. Recent policy decisions have seen the company remain largely untaxed and called on to help solve the journalism crisis of which it is a leading cause.
Thursday’s announcement further illustrates the dilemma of this laissez-faire approach. How exactly should the Canadian government protect the integrity of the next federal election, in which interest groups, corporations, foreign actors and political campaigns may all run hundreds of thousands, or millions, of simultaneous microtargeted ads a day?
It could force complete transparency of all paid content of any kind shown to Canadians during the election period, as with other media. It could demand disclosure of all financial, location and targeting data connected to this paid content. It could place significant fines on the failure to quickly remove misinformation and hate speech. It could ensure that independent researchers have access to the platform’s data, rather than merely relying on Facebook’s good intentions. Political parties and the government could even model good behaviour themselves by ceasing to spend millions of dollars of our money on Facebook’s microtargeted ads.
None of these options are likely to be adopted voluntarily or unilaterally by Facebook. We have governments to safeguard the public interest.
In fact, the modest voluntary efforts announced Thursday, which aim to put the focus on users through news literacy initiatives, and hackers through better security, ignore the key structural problem that has undermined elections around the world – the very business model of Facebook.
Efforts such as the Canadian Election Integrity Initiative represent a shift in the public position of Facebook that should, if it goes further, be welcomed. But it must also be viewed as the action of a private corporation that extracts increasing profits from a de facto public space.
We are heading into new and immensely challenging public policy terrain, but what is certain is that the easy and politically expedient relationship between Silicon Valley and government must come to an end.
Technology and new media are facilitating a rapid shift in the ways in which we consume news. In the shift from print to digital, companies like Facebook – and the algorithms they engineer – are replacing traditional editors and publishers. The result is “surveillance capitalism”: a powerful system that can target specific groups to sell products, political ideas and fake news. In this lecture, I break down the challenges that these new social structures pose to civic discourse, as well as the governance problems at the core of our democracies in this new media landscape.