I recently had the opportunity to give a talk at Sam Sullivan’s Public Salon in Vancouver, a great regular event hosted by the former mayor and current MLA. My talk was on the design problems at the core of our current crisis of misinformation. In short, I conclude: “Facebook didn’t fail when it used AI to match foreign agitators with micro-targeted US voter audiences, or offered ‘How to burn jews’ as an ad group; it was actually working as it was designed. And it is this definition of ‘working’ and this design which presents the threat to our democracy, which needs to be held accountable, and for which governance oversight is urgently needed.”
Here is an interview I recently did on CBC’s The Current on the digital threat to the next Canadian election. My argument is that a focus on discrete threats (from, say, Russia) distracts us from what is ultimately a structural problem. It is the very design of Facebook that is the root cause. Until we start talking about this root cause, and begin quickly testing policies that both address the flaws in this design and hold its social outcomes accountable, we are missing the plot. Governments that continue to make the policy choice of self-regulation will soon also have to answer for these outcomes. Here is the episode page, and below is the full audio (my segment starts at 8:00).
I have been thinking a lot about the internet and what it means for journalism and democracy lately. I am currently writing a book on the topic, so will have much more to say soon. But last month I had the honour of giving the Dalton Camp Lecture in Journalism, which gave me the chance to summarize some of my latest thinking on, and feelings about, this problem. The lecture just aired on an episode of CBC IDEAS, and can be found here.
This is the summary from the IDEAS site:
How Internet Monopolies Threaten Democracy (The 2017 Dalton Camp Lecture): The internet began with great hope that it would strengthen democracy. Initially, social media movements seemed to be disrupting corrupt institutions. But the web no longer feels free and open, and the disenfranchised are feeling increasingly pessimistic. The unfulfilled promise of the internet has been a long-term concern of Digital Media and Global Affairs expert Dr. Taylor Owen, who delivers the 2017 Dalton Camp Lecture in Journalism. He argues that the reality of the internet is now largely one of control by four platform companies — Google, Facebook, Amazon and Apple — worth a combined $2.7 trillion, and that their impact on democracy is deeply troubling.
The episode can be streamed HERE:
The Podcast can be downloaded HERE.
They also asked me to write a short intro letter framing the episode, the text of which is included below:
Dear IDEAS Listener,
I am hoping to entice you to listen to my lecture and interview on IDEAS.
Because one of the greatest challenges to democracy is happening right under our noses. In fact, we are full participants in it, though most of us don’t even realize it.
Four internet platforms — Facebook, Google, Amazon, Apple — increasingly control our lives, our opinions, our democracy. We urgently need to start talking about how we are going to respond as a society.
Here’s some context:
Over the past year, I have begun to write and speak more publicly and with greater alarm over what I believe to be a growing crisis in our democracies. I have long studied and promoted the positive attributes of digital technologies, but my concern about the influence of internet platforms on how we live is deepening. And my concerns are shared more and more by those I work with and admire. Something fundamental has shifted in the debate about the internet.
But my view is also often met with surprise. The internet has become so normalized, so entwined in people’s lives that questioning its impact can feel jarring. The result is that I am regularly approached with two questions. Why is this happening now? And what on earth can be done about it? Let me spend a moment on each of these questions, and I hope you will be interested in listening to my wider argument in the program.
First, why now? Or, put another way, why are we seeing a crescendo of serious global concerns over a set of technologies which have been seen largely as democratizing forces for over a decade?
I believe the answer lies in the structure of the internet that we have built. Far from the decentralized web imagined by its founders, the internet of today is mediated by four global platform companies: Facebook, Google, Amazon and Apple. These companies shape our digital lives, and increasingly what we know, how we know it, and ultimately who we are. They determine our public sphere, the character of our civic discourse, and the nature of our democratic society.
What’s worth underlining is that while these companies shape our public sphere, they do so as private actors. They are publicly traded companies with boards of directors that have fiduciary responsibilities to make more than they did the year prior. In the case of Google and Facebook, this dynamic means collecting and selling more data about their users, incentivizing greater volumes of engagement, and maximizing the time we spend on their sites. These incentives have a pernicious effect on our civic discourse, leading to what I believe is an epistemological and ontological crisis in our democracy. Our common grounding and ability to act as a collective are being undermined.
Which brings me to the second question I am regularly asked: what can we do about this? I think there are two answers: an individual one, and a collective one.
We must take ownership of our digital lives. This does not mean simple digital literacy — trying to spot misinformation and hoaxes. The algorithms shaping our digital experiences are far more sophisticated at nudging our behaviour than this.
It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.
For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data it collects about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which AI is creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth giving Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’. So I have begun to change my behaviour accordingly.
But acting as individuals is insufficient. Platform companies are among the largest and most profitable in the world. They shape the internet, are the world’s marketplace, and are even planning and developing our cities. Their scale and power demand a collective response. This will mean altering our model of governance.
These companies simply must be brought into the fold of the laws and norms of democratic society. This doesn’t mean forcing them into old governance paradigms. Nor does it mean blindly letting them scale growth in our markets and in our lives. The task is more challenging. It demands a rethinking of how we enforce collective constraints on a new type of economic and social actor in our society.
There is no doubt in my mind that how we choose to govern technology is the central question facing democracy itself in our time. But how this governance will work is not pre-determined, and the responsibility to insist on its creation begins with us. That responsibility requires, first and foremost, better understanding of, and speaking out against, the ways technologies shape our lives and our society.
And I hope that my lecture contributes to this rethinking, and that you’ll listen in.
I have an op-ed in the Globe today in reaction to Facebook’s Canadian Election Integrity Initiative.
In short, I think we are missing the structural problem: the system of surveillance capitalism that has resulted in a market for our attention. Here is a Twitter thread that elaborates on this, and here is the op-ed:
The unfolding drama surrounding Silicon Valley and the 2016 U.S. presidential election has brought much needed attention to the role that technology plays in democracies. On Thursday, Facebook announced the Canadian Election Integrity Initiative, the very premise of which invites the question: Does Facebook threaten the integrity of Canadian democracy?
It is increasingly apparent that the answer is yes.
Facebook’s product is the thousands of data points they capture from each of their users, and their customers are anyone who wants to buy access to these profiles. This model is immensely profitable. The company’s annual revenue, nearly all of which comes from paid content, has more than tripled in the past four years to $27.6-billion (U.S.) in 2016. But the Facebook model has also incentivized the spread of low-quality clickbait over high-quality information, enabled a race to the bottom for monetized consumer surveillance, and created an attention marketplace where anyone, including foreign actors, companies or political campaigns, can purchase an audience.
A key feature of the platform is that each user sees a personalized news feed chosen for them by Facebook. This filtering is done through a series of algorithms, which, when combined with detailed personal data, allow ads to be delivered to highly specific audiences. This microtargeting enables buyers to define audiences in racist, bigoted and otherwise highly discriminatory ways, some of questionable legal status and others merely lacking any relation to moral decency.
The Facebook system is also a potent political weapon. It is increasingly clear that Russia leveraged Facebook to purchase hundreds of millions of views of content designed to foment divisions in American society around issues of race, immigration and even fracking. And it’s of course not just foreign actors using Facebook to foster hate. Just this week, Bloomberg reported that in the final weeks of the U.S. election, Facebook and Google employees collaborated with extreme activist groups to help them microtarget divisive ads to swing-state voters.
Even without this targeting, content regularly goes viral regardless of its quality or veracity, disorienting and misleading huge audiences. A recent fake video purporting to show the impact of Hurricane Irma was viewed 25 million times and shared 855,000 times (it is still up).
And here’s the rub: when Facebook hooks up foreign agitators and microtargeted U.S. voters, or amplifies neo-Nazis using the platform to plan and organize the Charlottesville rally, or offers “How to burn jews” as an automatically-generated ad purchasing group, it is actually working as designed. It is this definition of “working” and this design for which Facebook needs to be held publicly accountable.
Some jurisdictions are starting to force this accountability. Germany recently passed a law that would fine Facebook up to €50-million ($75-million) for failing to remove hate speech within 24 hours. Britain has proposed treating Facebook like any other media company. The EU is implementing new data privacy laws and is raising anti-trust questions. A U.S. Congressional committee is questioning Facebook, Google and Twitter officials on Russia, with lawmakers likely to impose new online election advertising and disclosure regulations.
Oddly, these policy debates are largely absent in Canada. Instead, Facebook is intertwined in the workings of governments, the development of public policies and the campaigns of political parties. Recent policy decisions have seen the company remain largely untaxed and called on to help solve the very journalism problem of which it is a leading cause.
Thursday’s announcement further illustrates the dilemma of this laissez-faire approach. How exactly should the Canadian government protect the integrity of the next federal election, in which interest groups, corporations, foreign actors and political campaigns may all run hundreds of thousands, or millions, of simultaneous microtargeted ads a day?
It could force complete transparency of all paid content of any kind shown to Canadians during the election period, as with other media. It could demand disclosure of all financial, location and targeting data connected to this paid content. It could place significant fines on the failure to quickly remove misinformation and hate speech. It could ensure that independent researchers have access to the platform’s data, rather than merely relying on Facebook’s good intentions. Political parties and the government could even model good behaviour themselves by ceasing to spend millions of dollars of our money on Facebook’s microtargeted ads.
None of these options are likely to be adopted voluntarily or unilaterally by Facebook. We have governments to safeguard the public interest.
In fact, the modest voluntary efforts announced Thursday, which aim to put the focus on users through news literacy initiatives, and hackers through better security, ignore the key structural problem that has undermined elections around the world – the very business model of Facebook.
Efforts such as the Canadian Election Integrity Initiative represent a shift in the public position of Facebook that should, if it goes further, be welcomed. But it must also be viewed as the action of a private corporation that extracts increasing profits from a de facto public space.
We are heading into new and immensely challenging public policy terrain, but what is certain is that the easy and politically expedient relationship between Silicon Valley and government must come to an end.
Technology and new media are facilitating a rapid shift in the ways in which we consume news. In the shift from print to digital, companies like Facebook – and the algorithms they engineer – are replacing traditional editors and publishers. The result is “surveillance capitalism”: a powerful system that can target specific groups to sell products, political ideas and fake news. In this lecture, I break down the challenges that these new social structures pose to civic discourse, as well as the governance problems at the core of our democracies in this new media landscape.
Below is the video of a talk I gave recently at a Canada2020 conference in Ottawa, titled “Fake News and the Crisis of Information” followed by a panel I was on with David Frum, Anand Giridharadas, Liz Plank, Susan Delacourt and Evan Solomon.
Related, here is a recent radio interview on misinformation and the looming challenge of fake video and audio on Roundhouse Radio, and here are a few recent articles that touch on similar issues:
- Is Facebook a threat to democracy?, The Globe and Mail
- Ethics and governance are getting lost in the AI frenzy, The Globe and Mail
- ‘Fake news 2.0’: A threat to Canada’s democracy, The Globe and Mail
- Can Journalism be Virtual, Columbia Journalism Review
- The Platform Press: How Silicon Valley Reengineered Journalism, Columbia Journalism Review
- Interview on NPR’s 1A on Silicon Valley and Journalism, Part 1 and Part 2 (responding after the interview with Facebook’s Campbell Brown).
- How Internet Monopolies Threaten Democracy (The 2017 Dalton Camp Lecture, broadcast on CBC Ideas)
- Ungoverned Spaces: #Fakenews, The Rise of Algorithms, and the Next Big Challenge for Democracy, CIGI Global Forum Lecture
Edward Greenspon and I have an op-ed on the likely evolution of #fakenews: a pernicious mix of AI, commercial surveillance, adtech and social platforms that is going to undermine democracy in some critical ways. If fake text has had a social impact on our understanding of events and news, imagine what the coming fake and micro-targeted video and audio will do. Watch this space; it’s going to get wild fast.
‘Fake news 2.0’: A threat to Canada’s democracy
Ed Greenspon and Taylor Owen, The Globe and Mail, May 29, 2017.
The muggings of liberal democracies over the past year by election hackers and purveyors of fake news are on the cusp of becoming far worse.
By Canada’s next federal election, a combination of artificial intelligence software and data analytics built on vast consumer surveillance will allow depictions of events and statements to be instantly and automatically tailored, manipulated and manufactured to the predispositions of tiny subsets of the population. Fact or fabrication may be almost impossible to sort out.
“Fake news 2.0” will further disorient and disillusion populations and undermine free and fair elections. If these were physical attacks on polling stations or election workers, authorities would respond forcefully. The same zero tolerance is required of the propagation and targeting of falsehoods for commercial, partisan or geopolitical purposes. The challenge is that unlike illegal voting, which is a clearly criminal act, the dissemination of misinformation is embedded in the very financial model of digital media.
This is serious stuff. Germany is looking to hold social media companies to account for false content on their sites. Britain’s Information Commissioner is investigating the political use of social media-generated data, including the activities of an obscure Canadian analytics firm that received millions from the Leave side in the Brexit campaign. In the United States, investigative reporters, foundations and academics are unearthing startling insights into how the dark side of the digital ecosystem operates.
Fake news is inexpensive to produce (unlike real news); makes strange political bedfellows of the likes of white supremacists, human rights activists, foreign powers and anti-social billionaires; and plays to the clickbait tendencies of digital platforms. A recent study, The Platform Press: How Silicon Valley Reengineered Journalism, argues that the incentives of the new system favour the shareable over the informative and the sensational over the substantial. Fake news that circulated during the 2016 U.S. election is not a one-off problem, but rather a canary in a coal mine for a structural problem in our information ecosystem. On platforms driven by surveillance and targeted advertising, serious journalism is generally downgraded while fake news rises alongside gossip, entertainment and content shared from family and friends.
As with classic propaganda, fake news seeks credibility via constant repetition and amplification, supplied by a network of paid trolls, bots and proxy sites. The core openness of the Web enables congregations of the disaffected to discover one another and be recruited by the forces of division – Breitbart News, ISIS or Vladimir Putin.
The classic liberal defence of letting truth and falsehood grapple, with the better prevailing, is undercut by filter bubbles and echo chambers. It has become almost impossible to talk to all of the people even some of the time.
And so the polluted tributaries of disinformation pouring into the Internet raise a critical governance challenge for open societies such as Canada: Who will speak for the public interest and democratic good in the highly influential, but privately owned, digital civic space? What does it mean for a handful of platform companies to exercise unprecedented control over audience and data? How does government clean up the pollution without risking free speech?
Canada needs to catch up on analyzing and responding to these new challenges. Here’s where we would start:
- A well-funded and ongoing research program to keep tabs on the evolving networks and methods of anti-democratic forces, including their use of new technologies. Government support for artificial intelligence is necessary; so is vigilance about how it is applied and governed.
- Upgraded reconnaissance and defences to detect and respond to attacks in the early stages, as with the European Union’s East StratCom Task Force. Prime Minister Justin Trudeau has already instructed his Minister of Democratic Institutions to help political parties protect against hackers. That’s good, but a total rethink of electoral integrity is required, including tightening political spending limits outside writ periods and appointing a digital-savvy chief electoral officer.
- Measures to ensure the vitality of genuine news reporting; fake news cannot be allowed vacant space in which to flourish.
- Transparency and accountability around algorithms and personal data. Recent European initiatives would require platform companies to keep data stored within the national boundaries where it was collected and empower individuals to view what’s collected on them.
Finally, the best safeguard against incursions on the commonweal is a truly inclusive democracy, meaning tireless promotion of economic opportunity and social empathy. As Brave New World author Aldous Huxley commented in 1936, propaganda preys on pre-existing grievances. “The propagandist is the man who canalizes an already existing stream. In a land where there is no water, he digs in vain.”
Mike Ananny and I have an op-ed in the Globe and Mail on the ethics and governance of AI. We wrote it in response to the federal government’s recent funding announcement for AI research and commercialization.
Ethics and governance are getting lost in the AI frenzy
Taylor Owen and Mike Ananny, The Globe and Mail, March 30, 2017
On Thursday, Prime Minister Justin Trudeau announced the government’s pan-Canadian artificial intelligence strategy.
This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.
There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper, markets more efficient, and discover new patterns that optimize much of life. From deciding what movies get made, to which voters are valuable, there is virtually no area of life untouched by the promise of efficiency and optimization.
Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.
First, what kind of assumptions do AI systems make?
Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed “in the wild.”
For example, Google cannot explain why certain search results appeared over others, Facebook cannot give a detailed account of why your newsfeed may look different from one day to the next, and Netflix is unable to explain exactly why you got one movie recommendation over another.
While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car chooses the life of a driver over that of a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm’s guess about your genetic make-up; or when people and behaviours are flagged as ‘abnormal’ by algorithms, AI is making an ethical judgment.
This leads to a second question: how should we hold AI accountable?
The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential for national security, and often not fully understood even by the technologists who make them. This is important because the existing ethics that are embedded in our governance institutions place human agency at their foundation. As such, it makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid, a combination of human choices and automated decisions.
Who or what can be held accountable in this cyborg mix? Is it individual engineers who design code, the companies that employ them and deploy the technology, the police force that arrests someone based on an algorithmic suggestion, the government that uses it to make a policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding systems accountable in the same way when such different types of error, injustice, consequences and freedom are at stake.
This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don’t foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action, and that can trace consequences across multiple systems, data-sets, and institutions.
We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don’t understand and cannot predict.
We welcome the government’s investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.
Emily Bell and I have written a Tow Center report exploring how Silicon Valley has reengineered journalism. We look at how publishers have been absorbed into the platform ecosystem, how ad tech has shaped both media economics and political campaigns, and do a deep dive into Facebook and the 2016 election. In short, it’s a structural problem.
The influence of social media platforms and technology companies is having a greater effect on American journalism than even the shift from print to digital. There is a rapid takeover of traditional publishers’ roles by companies including Facebook, Snapchat, Google, and Twitter that shows no sign of slowing, and which raises serious questions over how the costs of journalism will be supported. These companies have evolved beyond their role as distribution channels; they now control what audiences see, who gets paid for their attention, and even what format and type of journalism flourishes.
Publishers are continuing to push more of their journalism to third-party platforms despite no guarantee of consistent return on investment. Publishing is no longer the core activity of certain journalism organizations. This trend will continue as news companies give up more of the traditional functions of publishers.
This report, part of an ongoing study by the Tow Center for Digital Journalism at Columbia Journalism School, charts the convergence between journalism and platform companies. In the span of 20 years, journalism has experienced three significant changes in business and distribution models: the switch from analog to digital, the rise of the social web, and now the dominance of mobile. This last phase has seen large technology companies dominate the markets for attention and advertising and has forced news organizations to rethink their processes and structures.
- Technology platforms have become publishers in a short space of time, leaving news organizations confused about their own future. If the speed of convergence continues, more news organizations are likely to cease publishing—distributing, hosting, and monetizing—as a core activity.
- Competition among platforms to release products for publishers is helping newsrooms reach larger audiences than ever before. But the advantages of each platform are difficult to assess, and the return on investment is inadequate. The loss of branding, the lack of audience data, and the migration of advertising revenue remain key concerns for publishers.
- The influence of social platforms shapes the journalism itself. By offering incentives to news organizations for particular types of content, such as live video, or by dictating publisher activity through design standards, the platforms are explicitly editorial.
- The “fake news” revelations of the 2016 election have forced social platforms to take greater responsibility for publishing decisions. However, this is a distraction from the larger issue that the structure and the economics of social platforms incentivize the spread of low-quality content over high-quality material. Journalism with high civic value—journalism that investigates power, or reaches underserved and local communities—is discriminated against by a system that favors scale and shareability.
- Platforms rely on algorithms to sort and target content. They have not wanted to invest in human editing, to avoid both cost and the perception that humans would be biased. However, the nuances of journalism require editorial judgment, so platforms will need to reconsider their approach.
- Greater transparency and accountability are required from platform companies. While news might reach more people than ever before, for the first time, the audience has no way of knowing how or why it reaches them, how data collected about them is used, or how their online behavior is being manipulated. And publishers are producing more content than ever, without knowing who it is reaching or how—they are at the mercy of the algorithms.
In the wake of the election, we have an immediate opportunity to turn the attention focused on tech power and journalism into action. Until recently, the default position of platforms (and notably Facebook) has been to avoid the expensive responsibilities and liabilities of being publishers. The platform companies, led by Facebook and Google, have been proactive in starting initiatives focused on improving the news environment and issues of news literacy. However, more structural questions remain unaddressed.
If news organizations are to remain autonomous entities in the future, there will have to be a reversal in information consumption trends and advertising expenditure or a significant transfer of wealth from technology companies and advertisers. Some publishers are seeing a “Trump Bump” with subscriptions and donations rising post-election, and there is evidence of renewed efforts of both large and niche publishers to build audiences and revenue streams away from the intermediary platform businesses. However, it is too soon to tell if this represents a systemic change rather than a cyclical ripple.
News organizations face a critical dilemma. Should they continue the costly business of maintaining their own publishing infrastructure, with smaller audiences but complete control over revenue, brand, and audience data? Or, should they cede control over user data and advertising in exchange for the significant audience growth offered by Facebook or other platforms? We describe how publishers are managing these trade-offs through content analysis and interviews.
While the spread of misinformation online became a global story this year, we see it as a proxy for much wider issues about the commercialization and private control of the public sphere.