Is Facebook a threat to democracy?

I have an oped in the Globe today in reaction to Facebook’s Canadian Election Integrity Initiative.

In short, I think we are missing the structural problem: the system of surveillance capitalism that has resulted in a market for our attention. Here is a Twitter thread that elaborates on this, and here is the oped:

The unfolding drama surrounding Silicon Valley and the 2016 U.S. presidential election has brought much needed attention to the role that technology plays in democracies. On Thursday, Facebook announced the Canadian Election Integrity Initiative, the very premise of which invites the question: Does Facebook threaten the integrity of Canadian democracy?

It is increasingly apparent that the answer is yes.

Facebook’s product is the thousands of data points it captures from each of its users, and its customers are anyone who wants to buy access to these profiles. This model is immensely profitable. The company’s annual revenue, nearly all of which comes from paid content, has more than tripled in the past four years to $27.6-billion (U.S.) in 2016. But the Facebook model has also incentivized the spread of low-quality clickbait over high-quality information, enabled a race to the bottom for monetized consumer surveillance, and created an attention marketplace where anyone, including foreign actors, companies or political campaigns, can purchase an audience.

A key feature of the platform is that each user sees a personalized news feed chosen for them by Facebook. This filtering is done through a series of algorithms, which, when combined with detailed personal data, allow ads to be delivered to highly specific audiences. This microtargeting enables buyers to define audiences in racist, bigoted and otherwise highly discriminatory ways, some of questionable legal status and others merely lacking any relation to moral decency.
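
To make that mechanism concrete, here is a minimal illustrative sketch of attribute-based microtargeting; it is not Facebook’s actual system, and the profile fields, audience criteria and data are all hypothetical. An ad buyer specifies criteria, and only the matching slice of user profiles is ever shown the ad.

```python
# Toy illustration of attribute-based microtargeting.
# The profile schema and criteria are hypothetical, not Facebook's real system.
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: int
    age: int
    region: str
    interests: set

def select_audience(profiles, min_age, max_age, region, required_interests):
    """Return the IDs of users whose attributes match every criterion."""
    return [
        p.user_id
        for p in profiles
        if min_age <= p.age <= max_age
        and p.region == region
        and required_interests <= p.interests  # subset test: all required interests present
    ]

profiles = [
    Profile(1, 34, "Ontario", {"fishing", "politics"}),
    Profile(2, 51, "Ontario", {"politics", "immigration"}),
    Profile(3, 29, "Alberta", {"politics"}),
]

# A buyer defines a narrow audience; only matching users are shown the ad.
print(select_audience(profiles, 30, 60, "Ontario", {"politics"}))  # -> [1, 2]
```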

The Facebook system is also a potent political weapon. It is increasingly clear that Russia leveraged Facebook to purchase hundreds of millions of views of content designed to foment divisions in American society around issues of race, immigration and even fracking. And it’s of course not just foreign actors using Facebook to foster hate. Just this week, Bloomberg reported that in the final weeks of the U.S. election, Facebook and Google employees collaborated with extreme activist groups to help them microtarget divisive ads to swing-state voters.

Even without this targeting, content regularly goes viral regardless of its quality or veracity, disorienting and misleading huge audiences. A recent fake video showing the impact of Hurricane Irma was viewed 25 million times and shared 855,000 times (it is still up).

And here’s the rub: when Facebook hooks up foreign agitators with microtargeted U.S. voters, or amplifies neo-Nazis using the platform to plan and organize the Charlottesville rally, or offers “How to burn jews” as an automatically generated ad purchasing group, it is actually working as designed. It is this definition of “working” and this design for which Facebook needs to be held publicly accountable.

Some jurisdictions are starting to force this accountability. Germany recently passed a law that would fine Facebook up to €50-million ($75-million) for failing to remove hate speech within 24 hours. Britain has proposed treating Facebook like any other media company. The EU is implementing new data privacy laws and is raising anti-trust questions. A U.S. Congressional committee is questioning Facebook, Google and Twitter officials on Russia, with lawmakers likely to impose new online election advertising and disclosure regulations.

Oddly, these policy debates are largely absent in Canada. Instead, Facebook is intertwined in the workings of governments, the development of public policies and the campaigns of political parties. Recent policy decisions have seen the company remain largely untaxed and called on to help solve the journalism problem of which it is the leading cause.

Thursday’s announcement further illustrates the dilemma of this laissez-faire approach. How exactly should the Canadian government protect the integrity of the next federal election, in which interest groups, corporations, foreign actors and political campaigns may all run hundreds of thousands, or millions, of simultaneous microtargeted ads a day?

It could force complete transparency of all paid content of any kind shown to Canadians during the election period, as with other media. It could demand disclosure of all financial, location and targeting data connected to this paid content. It could place significant fines on the failure to quickly remove misinformation and hate speech. It could ensure that independent researchers have access to the platform’s data, rather than merely relying on Facebook’s good intentions. Political parties and the government could even model good behaviour themselves by ceasing to spend millions of dollars of our money on Facebook’s microtargeted ads.

None of these options are likely to be adopted voluntarily or unilaterally by Facebook. We have governments to safeguard the public interest.

In fact, the modest voluntary efforts announced Thursday, which aim to put the focus on users through news literacy initiatives, and hackers through better security, ignore the key structural problem that has undermined elections around the world – the very business model of Facebook.

Efforts such as the Canadian Election Integrity Initiative represent a shift in the public position of Facebook that should, if it goes further, be welcomed. But it must also be viewed as the action of a private corporation that extracts increasing profits from a de facto public space.

We are heading into new and immensely challenging public policy terrain, but what is certain is that the easy and politically expedient relationship between Silicon Valley and government must come to an end.

Lecture on fake news and democracy

Technology and new media are facilitating a rapid shift in the ways in which we consume news. In the shift from print to digital, companies like Facebook – and the algorithms they engineer – are replacing traditional editors and publishers. The result is “surveillance capitalism”: a powerful system that can target specific groups to sell products, political ideas and fake news. In this lecture, I break down the challenges that these new social structures pose to civic discourse, as well as the governance problems at the core of our democracies in this new media landscape.

Fake News and the Crisis of Information

Below is the video of a talk I gave recently at a Canada2020 conference in Ottawa, titled “Fake News and the Crisis of Information” followed by a panel I was on with David Frum, Anand Giridharadas, Liz Plank, Susan Delacourt and Evan Solomon.

Related, here is a recent radio interview on misinformation and the looming challenge of fake video and audio on Roundhouse Radio, and here are a few recent articles that touch on similar issues:

Oped on #fakenews in the Globe and Mail

Edward Greenspon and I have an oped on the likely evolution of #fakenews: a pernicious mix of AI, commercial surveillance, adtech and social platforms that is going to undermine democracy in some critical ways. If fake text had a social impact on our understanding of events and news, imagine what the coming fake and micro-targeted video and audio is going to do. Watch this space, it’s going to get wild fast.

 

‘Fake news 2.0’: A threat to Canada’s democracy

Ed Greenspon and Taylor Owen, The Globe and Mail, May 29, 2017.

The muggings of liberal democracies over the past year by election hackers and purveyors of fake news are on the cusp of becoming far worse.

By Canada’s next federal election, a combination of artificial intelligence software and data analytics built on vast consumer surveillance will allow depictions of events and statements to be instantly and automatically tailored, manipulated and manufactured to the predispositions of tiny subsets of the population. Fact or fabrication may be almost impossible to sort out.

“Fake news 2.0” will further disorient and disillusion populations and undermine free and fair elections. If these were physical attacks on polling stations or election workers, authorities would respond forcefully. The same zero tolerance is required of the propagation and targeting of falsehoods for commercial, partisan or geopolitical purposes. The challenge is that unlike illegal voting, which is a clearly criminal act, the dissemination of misinformation is embedded in the very financial model of digital media.

This is serious stuff. Germany is looking to hold social media companies to account for false content on their sites. Britain’s Information Commissioner is investigating the political use of social media-generated data, including the activities of an obscure Canadian analytics firm that received millions from the Leave side in the Brexit campaign. In the United States, investigative reporters, foundations and academics are unearthing startling insights into how the dark side of the digital ecosystem operates.

Fake news is inexpensive to produce (unlike real news); makes strange political bedfellows of the likes of white supremacists, human rights activists, foreign powers and anti-social billionaires; and plays to the clickbait tendencies of digital platforms. A recent study, The Platform Press: How Silicon Valley reengineered journalism, argues that the incentives of the new system favour the shareable over the informative and the sensational over the substantial. Fake news that circulated during the 2016 U.S. election is not a one-off problem, but rather a canary in a coal mine for a structural problem in our information ecosystem. On platforms driven by surveillance and targeted advertising, serious journalism is generally downgraded while fake news rises alongside gossip, entertainment and content shared from family and friends.
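
As a minimal sketch of that structural claim – a toy ranking function with invented stories and weights, not any platform’s real algorithm – consider a feed ordered purely by predicted engagement: the fabricated viral item rises to the top because accuracy never enters the objective.

```python
# Toy engagement-ranked feed; the stories and scoring weights are invented for illustration.
stories = [
    {"title": "City budget analysis", "clicks": 120, "shares": 15, "accurate": True},
    {"title": "Shocking fake hurricane video", "clicks": 9000, "shares": 2500, "accurate": False},
    {"title": "Local council investigation", "clicks": 300, "shares": 40, "accurate": True},
]

def engagement_score(story):
    # The feed optimizes for attention only; accuracy is never part of the score.
    return story["clicks"] + 10 * story["shares"]

for story in sorted(stories, key=engagement_score, reverse=True):
    print(f'{story["title"]}: {engagement_score(story)}')
# The false but shareable item ranks first: the shareable beats the informative.
```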

As with classic propaganda, fake news seeks credibility via constant repetition and amplification, supplied by a network of paid trolls, bots and proxy sites. The core openness of the Web enables congregations of the disaffected to discover one another and be recruited by the forces of division – Breitbart News, ISIS or Vladimir Putin.

The classic liberal defence of truth and falsehood grappling, with the better prevailing, is undercut by filter bubbles and echo chambers. It has become almost impossible to talk to all of the people even some of the time.

And so the polluted tributaries of disinformation pouring into the Internet raise a critical governance challenge for open societies such as Canada: Who will speak for the public interest and democratic good in the highly influential, but privately owned, digital civic space? What does it mean for a handful of platform companies to exercise unprecedented control over audience and data? How does government clean up the pollution without risking free speech?

Canada needs to catch up on analyzing and responding to these new challenges. Here’s where we would start:

  • A well-funded and ongoing research program to keep tabs on the evolving networks and methods of anti-democratic forces, including their use of new technologies. Government support for artificial intelligence is necessary; so is vigilance about how it is applied and governed.
  • Upgraded reconnaissance and defences to detect and respond to attacks in the early stages, as with the European Union’s East StratCom Task Force. Prime Minister Justin Trudeau has already instructed his Minister of Democratic Institutions to help political parties protect against hackers. That’s good, but a total rethink of electoral integrity is required, including tightening political spending limits outside writ periods and appointing a digital-savvy chief electoral officer.
  • Measures to ensure the vitality of genuine news reporting; fake news cannot be allowed vacant space in which to flourish.
  • Transparency and accountability around algorithms and personal data. Recent European initiatives would require platform companies to keep data stored within the national boundaries where it was collected and empower individuals to view what’s collected on them.

Finally, the best safeguard against incursions on the commonweal is a truly inclusive democracy, meaning tireless promotion of economic opportunity and social empathy. As Brave New World author Aldous Huxley commented in 1936, propaganda preys on pre-existing grievances. “The propagandist is the man who canalizes an already existing stream. In a land where there is no water, he digs in vain.”

Oped on the governance of AI in the Globe and Mail

Mike Ananny and I have an oped in the Globe and Mail on the ethics and governance of AI.  We wrote it in response to the Federal government’s recent funding announcement for AI research and commercialization.

Ethics and governance are getting lost in the AI frenzy

Taylor Owen and Mike Ananny, The Globe and Mail, March 30, 2017

On Thursday, Prime Minister Justin Trudeau announced the government’s pan-Canadian artificial intelligence strategy.

This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.

There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper, markets more efficient, and discover new patterns that optimize much of life. From deciding what movies get made, to which voters are valuable, there is virtually no area of life untouched by the promise of efficiency and optimization.

Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.

First, what kind of assumptions do AI systems make?

Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed “in the wild.”

For example, Google cannot explain why certain search results appeared over others, Facebook cannot give a detailed account of why your newsfeed may look different from one day to the next, and Netflix is unable to explain exactly why you got one movie recommendation over another.

While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car chooses the life of a driver over a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm’s guess about your genetic make-up; or when people and behaviours are flagged as ‘abnormal’ by algorithms, AI is making an ethical judgment.

This leads to a second question: how should we hold AI accountable?

The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential for national security, and often not fully understood even by the technologists who make them. This is important because the existing ethics that are embedded in our governance institutions place human agency at their foundation. As such, it makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid, a combination of human choices and automated decisions.

Who or what can be held accountable in this cyborg mix? Is it individual engineers who design code, the companies that employ them and deploy the technology, the police force that arrests someone based on an algorithmic suggestion, the government that uses it to make a policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding systems accountable in the same way when such different types of error, injustice, consequences and freedom are at stake.

This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don’t foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action, and that can trace consequences across multiple systems, data-sets, and institutions.

We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don’t understand and cannot predict.

We welcome the government’s investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.

The Platform Press

Emily Bell and I have written a Tow Center report exploring how Silicon Valley has reengineered journalism. We look at how publishers have been absorbed into the platform ecosystem, how ad tech has shaped both media economics and political campaigns, and do a deep dive into Facebook and the 2016 election. In short, it’s a structural problem.

Executive summary is below, the full report is here, and as a pdf here.

 

The Platform Press: How Silicon Valley Reengineered Journalism

Executive Summary

The influence of social media platforms and technology companies is having a greater effect on American journalism than even the shift from print to digital. There is a rapid takeover of traditional publishers’ roles by companies including Facebook, Snapchat, Google, and Twitter that shows no sign of slowing, and which raises serious questions over how the costs of journalism will be supported. These companies have evolved beyond their role as distribution channels, and now control what audiences see and who gets paid for their attention, and even what format and type of journalism flourishes.

Publishers are continuing to push more of their journalism to third-party platforms despite no guarantee of consistent return on investment. Publishing is no longer the core activity of certain journalism organizations. This trend will continue as news companies give up more of the traditional functions of publishers.

This report, part of an ongoing study by the Tow Center for Digital Journalism at Columbia Journalism School, charts the convergence between journalism and platform companies. In the span of 20 years, journalism has experienced three significant changes in business and distribution models: the switch from analog to digital, the rise of the social web, and now the dominance of mobile. This last phase has seen large technology companies dominate the markets for attention and advertising and has forced news organizations to rethink their processes and structures.

Findings

  • Technology platforms have become publishers in a short space of time, leaving news organizations confused about their own future. If the speed of convergence continues, more news organizations are likely to cease publishing—distributing, hosting, and monetizing—as a core activity.
  • Competition among platforms to release products for publishers is helping newsrooms reach larger audiences than ever before. But the advantages of each platform are difficult to assess, and the return on investment is inadequate. The loss of branding, the lack of audience data, and the migration of advertising revenue remain key concerns for publishers.
  • The influence of social platforms shapes the journalism itself. By offering incentives to news organizations for particular types of content, such as live video, or by dictating publisher activity through design standards, the platforms are explicitly editorial.
  • The “fake news” revelations of the 2016 election have forced social platforms to take greater responsibility for publishing decisions. However, this is a distraction from the larger issue that the structure and the economics of social platforms incentivize the spread of low-quality content over high-quality material. Journalism with high civic value—journalism that investigates power, or reaches underserved and local communities—is discriminated against by a system that favors scale and shareability.
  • Platforms rely on algorithms to sort and target content. They have not wanted to invest in human editing, to avoid both cost and the perception that humans would be biased. However, the nuances of journalism require editorial judgment, so platforms will need to reconsider their approach.
  • Greater transparency and accountability are required from platform companies. While news might reach more people than ever before, for the first time, the audience has no way of knowing how or why it reaches them, how data collected about them is used, or how their online behavior is being manipulated. And publishers are producing more content than ever, without knowing who it is reaching or how—they are at the mercy of the algorithms.

In the wake of the election, we have an immediate opportunity to turn the attention focused on tech power and journalism into action. Until recently, the default position of platforms (and notably Facebook) has been to avoid the expensive responsibilities and liabilities of being publishers. The platform companies, led by Facebook and Google, have been proactive in starting initiatives focused on improving the news environment and issues of news literacy. However, more structural questions remain unaddressed.

If news organizations are to remain autonomous entities in the future, there will have to be a reversal in information consumption trends and advertising expenditure or a significant transfer of wealth from technology companies and advertisers. Some publishers are seeing a “Trump Bump” with subscriptions and donations rising post-election, and there is evidence of renewed efforts of both large and niche publishers to build audiences and revenue streams away from the intermediary platform businesses. However, it is too soon to tell if this represents a systemic change rather than a cyclical ripple.

News organizations face a critical dilemma. Should they continue the costly business of maintaining their own publishing infrastructure, with smaller audiences but complete control over revenue, brand, and audience data? Or, should they cede control over user data and advertising in exchange for the significant audience growth offered by Facebook or other platforms? We describe how publishers are managing these trade-offs through content analysis and interviews.

While the spread of misinformation online became a global story this year, we see it as a proxy for much wider issues about the commercialization and private control of the public sphere.

Journalism After Snowden

The book Emily Bell and I edited for Columbia University Press, Journalism After Snowden: The Future of the Free Press in the Surveillance State, has recently been released. It includes chapters by a pretty wonderful group of journalists and scholars, including Steve Coll, Cass Sunstein, Clay Shirky, Alan Rusbridger, Jill Abramson, Glenn Greenwald, Ethan Zuckerman, Julia Angwin, David Sanger, Edward Snowden, Jonathan Zittrain and Lee Bollinger, among many others. I feel very lucky to have worked with them on this.

It goes without saying that the stakes are even higher now, but it’s worth remembering that the surveillance architecture, and the vulnerability of journalism it creates, are a long-running, international and bipartisan affair.

Details of the book are available here.

The launch event for the book will be in New York on March 7th and will include a panel with Julia Angwin, Barton Gellman, and Ben Wizner. Details and tickets are here.

It’s time to reform the CBC for the digital age

The oped below was written after the publication of The Shattered Mirror: News, Democracy and Trust in the Digital Age, on which I served as a research principal. I have been surprised that what I believe is the most radical and potentially transformative idea in the report has received almost no attention. So Elizabeth Dubois and I wrote an oped about the recommendation for the CBC to move to publishing under a Creative Commons licence. The piece below was first published in The Toronto Star.

It’s time to reform the CBC for the digital age

Canadian journalism is in the midst of industrial and market failure. Print and broadcast journalism are struggling to adapt to both the economic models of the digital economy and the media consumption habits of digitally enabled citizens. Meanwhile, our small market size, a lack of VC funding and the large presence of U.S. digital journalism companies, combined with the rise of Facebook and Google and the pernicious effects of the ad-tech industry, have led to a market failure in the funding model for Canadian digital journalism.

We simply do not have a digital ecosystem in waiting that will be able to replace, at scale, what will be lost in the reckoning that is looming in the traditional media space.

As a recent Public Policy Forum report (for which we were research principals) argues, it is time that Canadian media policy adapt to the realities of the digital age. While much of the coverage of the report has focused on the establishment of a Future of Journalism and Democracy Fund, in our minds the most critical recommendation concerns the CBC – namely, that the CBC should begin publishing all civic journalism content under a Creative Commons license.

There are six standard Creative Commons copyright licenses, and we believe the CBC could use Attribution-NoDerivatives (CC BY-ND), which would enable CBC journalism to be republished by anyone, anywhere, as long as it is unedited and attributed.

This change, when combined with our additional recommendation that the CBC cease digital advertising, could incentivize significant reform of the culture, structure and journalism of the public broadcaster, right at a time when Canadians need it most. Here’s how.

First, it would free the organization to re-focus on civic journalism, bolstering what is in our view the most critical component of its mandate – to inform Canadians. Given scale-driven ad models and the incentives of click-bait, the CBC has followed many of its market competitors down a path to lowest common denominator content. They are even, remarkably, publishing sponsored content. Moving to a Creative Commons model and getting out of digital ads would provide the CBC the freedom to escape this destructive cycle, and refocus its norms and journalism towards its critical civic function.

Second, a Creative Commons license would increase the reach of the CBC by enlisting citizens and organizations to become its distribution partners. While a core objective of the CBC should be to reach as many Canadians as possible, this is increasingly difficult to do in a fragmented media ecosystem. Whereas audiences could once be reached through a TV channel, radio station or website homepage, Canadians increasingly consume news via atomized content shared by friends and followers on social media sites. The days of a single national media discourse are over. Allowing anyone to publish CBC content would broaden its distribution and spur innovation in how it is delivered. Citizens should not care where CBC content is consumed, only that it reaches as many Canadians as possible.

Third, it will help to counter the growing challenge of misinformation online. The 2016 U.S. election highlighted structural challenges in the digital media ecosystem. While tremendous good has come from the decentralizing effect of the internet, we have recently seen the emergence of several large platforms as the primary distributors and intermediaries for journalism. Platforms such as Facebook and Google rely on opaque algorithms to determine what pieces of news content we see.

When combined with the advertising technology market that monetizes our attention across all of our internet activity (not just our news consumption), the result is an ecosystem of atomized conversations and filter bubbles. This environment has proved fertile for the distribution of misleading and outright false information. One way to counter this problem in Canada would be to encourage much more reliable journalistic content in the platform ecosystem by allowing CBC content to be published by dozens or hundreds of organizations, not just one.

Finally, allowing others to publish CBC content would provide tangible assistance to both traditional and new media organizations. For traditional organizations, access to high quality local and legislative reporting would allow them to focus their increasingly limited resources on other types of journalism. For digital native organizations, this would both shift the CBC from a competitor to a collaborator, and provide a base amount of quality civic content on which they can build their businesses.

The result would be a new ecosystem of digital companies innovating in the distribution of CBC content and developing their own value-added content on top of it. This proposal would turn the CBC into a constructive partner and hub in the civic journalism ecosystem.

Rightly or wrongly, many people that we spoke to for this project, in both the traditional and new media, described the CBC as a “predator.” This should concern all proponents of the CBC. At a time when Canadian civic journalism is both in decline and needed most, Canadians should expect our national broadcaster to be able to work with, rather than compete against, Canadian journalism. Moving to a Creative Commons model would be a big step in this transition.

Taylor Owen (@taylor_owen) is an Assistant Professor of Digital Media and Global Affairs at the University of British Columbia. Elizabeth Dubois (@lizdubois) is an Assistant Professor in the Department of Communications at the University of Ottawa. Both were research principals on the Public Policy Forum report, The Shattered Mirror: News, Democracy and Trust in the Digital Age.

Report on state of Canadian journalism

For the past year I have had the good fortune to work with Ed Greenspon, the Public Policy Forum, and a wonderful group of scholars and journalists on a report on the state of journalism in Canada. The resulting report, The Shattered Mirror: News, Democracy and Trust in the Digital Age, was recently released. My role was mainly to support the analysis of digital journalism, both in Canada and within the broader platform (Facebook, Google, etc.) ecosystem. I hosted a workshop in Vancouver to dive specifically into the challenges digital start-ups face in Canada, and I attended many of the roundtables that the PPF hosted across the country. I learned a ton, and while I don’t agree with everything in the report, I think it represents the most detailed current assessment of the state of the industry in Canada. Some of its recommendations, if adopted, would lead to a significant transformation of the digital journalism space in this country. The site for the report is here and a PDF can be downloaded here.