
Fake News and the Crisis of Information

Below is the video of a talk I gave recently at a Canada2020 conference in Ottawa, titled “Fake News and the Crisis of Information,” followed by a panel I was on with David Frum, Anand Giridharadas, Liz Plank, Susan Delacourt, and Evan Solomon.

Related, here is a recent radio interview on misinformation and the looming challenge of fake video and audio on Roundhouse Radio, and here are a few recent articles that touch on similar issues:


Op-ed on #fakenews in the Globe and Mail

Edward Greenspon and I have an op-ed on the likely evolution of #fakenews: a pernicious mix of AI, commercial surveillance, adtech and social platforms that is going to undermine democracy in some critical ways. If fake text had a social impact on our understanding of events and news, imagine what the coming fake and micro-targeted video and audio is going to do. Watch this space; it’s going to get wild fast.


‘Fake news 2.0’: A threat to Canada’s democracy

Ed Greenspon and Taylor Owen, The Globe and Mail, May 29, 2017.

The muggings of liberal democracies over the past year by election hackers and purveyors of fake news are on the cusp of becoming far worse.

By Canada’s next federal election, a combination of artificial intelligence software and data analytics built on vast consumer surveillance will allow depictions of events and statements to be instantly and automatically tailored, manipulated and manufactured to the predispositions of tiny subsets of the population. Fact or fabrication may be almost impossible to sort out.

“Fake news 2.0” will further disorient and disillusion populations and undermine free and fair elections. If these were physical attacks on polling stations or election workers, authorities would respond forcefully. The same zero tolerance is required of the propagation and targeting of falsehoods for commercial, partisan or geopolitical purposes. The challenge is that unlike illegal voting, which is a clearly criminal act, the dissemination of misinformation is embedded in the very financial model of digital media.

This is serious stuff. Germany is looking to hold social media companies to account for false content on their sites. Britain’s Information Commissioner is investigating the political use of social media-generated data, including the activities of an obscure Canadian analytics firm that received millions from the Leave side in the Brexit campaign. In the United States, investigative reporters, foundations and academics are unearthing startling insights into how the dark side of the digital ecosystem operates.

Fake news is inexpensive to produce (unlike real news); makes strange political bedfellows of the likes of white supremacists, human rights activists, foreign powers and anti-social billionaires; and plays to the clickbait tendencies of digital platforms. A recent study, The Platform Press: How Silicon Valley Reengineered Journalism, argues that the incentives of the new system favour the shareable over the informative and the sensational over the substantial. The fake news that circulated during the 2016 U.S. election is not a one-off problem, but rather a canary in a coal mine for a structural problem in our information ecosystem. On platforms driven by surveillance and targeted advertising, serious journalism is generally downgraded while fake news rises alongside gossip, entertainment and content shared from family and friends.

As with classic propaganda, fake news seeks credibility via constant repetition and amplification, supplied by a network of paid trolls, bots and proxy sites. The core openness of the Web enables congregations of the disaffected to discover one another and be recruited by the forces of division – Breitbart News, ISIS or Vladimir Putin.

The classic liberal defence, that truth and falsehood should be left to grapple and the better will prevail, is undercut by filter bubbles and echo chambers. It has become almost impossible to talk to all of the people even some of the time.

And so the polluted tributaries of disinformation pouring into the Internet raise a critical governance challenge for open societies such as Canada: Who will speak for the public interest and democratic good in the highly influential, but privately owned, digital civic space? What does it mean for a handful of platform companies to exercise unprecedented control over audience and data? How does government clean up the pollution without risking free speech?

Canada needs to catch up on analyzing and responding to these new challenges. Here’s where we would start:

  • A well-funded and ongoing research program to keep tabs on the evolving networks and methods of anti-democratic forces, including their use of new technologies. Government support for artificial intelligence is necessary; so is vigilance about how it is applied and governed.
  • Upgraded reconnaissance and defences to detect and respond to attacks in the early stages, as with the European Union’s East StratCom Task Force. Prime Minister Justin Trudeau has already instructed his Minister of Democratic Institutions to help political parties protect against hackers. That’s good, but a total rethink of electoral integrity is required, including tightening political spending limits outside writ periods and appointing a digital-savvy chief electoral officer.
  • Measures to ensure the vitality of genuine news reporting; fake news cannot be allowed vacant space in which to flourish.
  • Transparency and accountability around algorithms and personal data. Recent European initiatives would require platform companies to keep data stored within the national boundaries where it was collected and empower individuals to view what’s collected on them.

Finally, the best safeguard against incursions on the commonweal is a truly inclusive democracy, meaning tireless promotion of economic opportunity and social empathy. As Brave New World author Aldous Huxley commented in 1936, propaganda preys on pre-existing grievances. “The propagandist is the man who canalizes an already existing stream. In a land where there is no water, he digs in vain.”


Op-ed on the governance of AI in the Globe and Mail

Mike Ananny and I have an op-ed in the Globe and Mail on the ethics and governance of AI. We wrote it in response to the federal government’s recent funding announcement for AI research and commercialization.

Ethics and governance are getting lost in the AI frenzy

Taylor Owen and Mike Ananny, The Globe and Mail, March 30, 2017

On Thursday, Prime Minister Justin Trudeau announced the government’s pan-Canadian artificial intelligence strategy.

This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.

There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper and markets more efficient, and it may discover new patterns that optimize much of life. From deciding which movies get made to determining which voters are valuable, there is virtually no area of life untouched by the promise of efficiency and optimization.

Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.

First, what kind of assumptions do AI systems make?

Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed “in the wild.”

For example, Google cannot explain why certain search results appeared over others, Facebook cannot give a detailed account of why your newsfeed may look different from one day to the next, and Netflix is unable to explain exactly why you got one movie recommendation over another.

While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car decides to choose the life of a driver over a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm’s guess about your genetic make-up; or when people and behaviours are flagged as ‘abnormal’ by algorithms, AI is making an ethical judgment.

This leads to a second question: how should we hold AI accountable?

The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential for national security, and often not fully understood even by the technologists who make them. This is important because the existing ethics that are embedded in our governance institutions place human agency at their foundation. As such, it makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid, a combination of human choices and automated decisions.

Who or what can be held accountable in this cyborg mix? Is it individual engineers who design code, the companies that employ them and deploy the technology, the police force that arrests someone based on an algorithmic suggestion, the government that uses it to make a policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding systems accountable in the same way when such different types of error, injustice, consequences and freedom are at stake.

This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don’t foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action, and that can trace consequences across multiple systems, data-sets, and institutions.

We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don’t understand and cannot predict.

We welcome the government’s investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.


The Platform Press


Emily Bell and I have written a Tow Center report exploring how Silicon Valley has reengineered journalism. We look at how publishers have been absorbed into the platform ecosystem and how ad tech has shaped both media economics and political campaigns, and we take a deep dive into Facebook and the 2016 election. In short, it’s a structural problem.

The executive summary is below, the full report is here, and a PDF is here.


The Platform Press: How Silicon Valley Reengineered Journalism

Executive Summary

The influence of social media platforms and technology companies is having a greater effect on American journalism than even the shift from print to digital. There is a rapid takeover of traditional publishers’ roles by companies including Facebook, Snapchat, Google, and Twitter that shows no sign of slowing, and which raises serious questions over how the costs of journalism will be supported. These companies have evolved beyond their role as distribution channels, and now control what audiences see, who gets paid for their attention, and even what format and type of journalism flourishes.

Publishers are continuing to push more of their journalism to third-party platforms despite no guarantee of consistent return on investment. Publishing is no longer the core activity of certain journalism organizations. This trend will continue as news companies give up more of the traditional functions of publishers.

This report, part of an ongoing study by the Tow Center for Digital Journalism at Columbia Journalism School, charts the convergence between journalism and platform companies. In the span of 20 years, journalism has experienced three significant changes in business and distribution models: the switch from analog to digital, the rise of the social web, and now the dominance of mobile. This last phase has seen large technology companies dominate the markets for attention and advertising and has forced news organizations to rethink their processes and structures.

Findings

  • Technology platforms have become publishers in a short space of time, leaving news organizations confused about their own future. If the speed of convergence continues, more news organizations are likely to cease publishing—distributing, hosting, and monetizing—as a core activity.
  • Competition among platforms to release products for publishers is helping newsrooms reach larger audiences than ever before. But the advantages of each platform are difficult to assess, and the return on investment is inadequate. The loss of branding, the lack of audience data, and the migration of advertising revenue remain key concerns for publishers.
  • The influence of social platforms shapes the journalism itself. By offering incentives to news organizations for particular types of content, such as live video, or by dictating publisher activity through design standards, the platforms are explicitly editorial.
  • The “fake news” revelations of the 2016 election have forced social platforms to take greater responsibility for publishing decisions. However, this is a distraction from the larger issue that the structure and the economics of social platforms incentivize the spread of low-quality content over high-quality material. Journalism with high civic value—journalism that investigates power, or reaches underserved and local communities—is discriminated against by a system that favors scale and shareability.
  • Platforms rely on algorithms to sort and target content. They have not wanted to invest in human editing, to avoid both cost and the perception that humans would be biased. However, the nuances of journalism require editorial judgment, so platforms will need to reconsider their approach.
  • Greater transparency and accountability are required from platform companies. While news might reach more people than ever before, for the first time, the audience has no way of knowing how or why it reaches them, how data collected about them is used, or how their online behavior is being manipulated. And publishers are producing more content than ever, without knowing who it is reaching or how—they are at the mercy of the algorithms.

In the wake of the election, we have an immediate opportunity to turn the attention focused on tech power and journalism into action. Until recently, the default position of platforms (and notably Facebook) has been to avoid the expensive responsibilities and liabilities of being publishers. The platform companies, led by Facebook and Google, have been proactive in starting initiatives focused on improving the news environment and issues of news literacy. However, more structural questions remain unaddressed.

If news organizations are to remain autonomous entities in the future, there will have to be a reversal in information consumption trends and advertising expenditure or a significant transfer of wealth from technology companies and advertisers. Some publishers are seeing a “Trump Bump” with subscriptions and donations rising post-election, and there is evidence of renewed efforts of both large and niche publishers to build audiences and revenue streams away from the intermediary platform businesses. However, it is too soon to tell if this represents a systemic change rather than a cyclical ripple.

News organizations face a critical dilemma. Should they continue the costly business of maintaining their own publishing infrastructure, with smaller audiences but complete control over revenue, brand, and audience data? Or, should they cede control over user data and advertising in exchange for the significant audience growth offered by Facebook or other platforms? We describe how publishers are managing these trade-offs through content analysis and interviews.

While the spread of misinformation online became a global story this year, we see it as a proxy for much wider issues about the commercialization and private control of the public sphere.


Journalism After Snowden

My edited book with Emily Bell for Columbia University Press, Journalism After Snowden: The Future of the Free Press in the Surveillance State, has recently been released. It includes chapters by a pretty wonderful group of journalists and scholars, including Steve Coll, Cass Sunstein, Clay Shirky, Alan Rusbridger, Jill Abramson, Glenn Greenwald, Ethan Zuckerman, Julia Angwin, David Sanger, Edward Snowden, Jonathan Zittrain, and Lee Bollinger, among many others. I feel very lucky to have worked with them on this.

It goes without saying that the stakes are even higher now, but it’s worth remembering that the surveillance architecture, and the vulnerability of journalism it creates, is a long-running, international, and bipartisan affair.

Details of the book are available here.

The launch event for the book will be in New York on March 7th and will include a panel with Julia Angwin, Barton Gellman, and Ben Wizner. Details and tickets are here.


It’s time to reform the CBC for the digital age

The op-ed below was written after the publication of The Shattered Mirror: News, Democracy and Trust in the Digital Age, a report on which I served as a research principal. I have been surprised that what I believe is the most radical and potentially transformative idea in the report has received almost no attention. So Elizabeth Dubois and I wrote an op-ed about the recommendation that the CBC move to publishing under a Creative Commons licence. The piece below was first published in The Toronto Star.

It’s time to reform the CBC for the digital age

Canadian journalism is in the midst of industrial and market failure. Print and broadcast journalism are struggling to adapt to both the economic models of the digital economy and the media consumption habits of digitally enabled citizens. Meanwhile, our small market size, a lack of venture capital funding, and the large presence of U.S. digital journalism companies, combined with the rise of Facebook and Google and the pernicious effects of the ad-tech industry, have led to a market failure in the funding model for Canadian digital journalism.

We simply do not have a digital ecosystem in waiting that will be able to replace, at scale, what will be lost in the reckoning that is looming in the traditional media space.

As a recent Public Policy Forum report (for which we were research principals) argues, it is time that Canadian media policy adapt to the realities of the digital age. While much of the coverage of the report has focused on the establishment of a Future of Journalism and Democracy Fund, in our minds the most critical recommendation concerns the CBC – namely, that the CBC should begin publishing all civic journalism content under a Creative Commons license.

There are seven types of Creative Commons copyright licenses and we believe the CBC could use “Attribution + NoDerivatives,” which would enable CBC journalism to be re-published by anyone, anywhere as long as it is unedited and attributed.

This change, when combined with our additional recommendation that the CBC cease digital advertising, could incentivize significant reform of the culture, structure and journalism of the public broadcaster, right at a time when Canadians need it most. Here’s how.

First, it would free the organization to re-focus on civic journalism, bolstering what is in our view the most critical component of its mandate – to inform Canadians. Given scale-driven ad models and the incentives of click-bait, the CBC has followed many of its market competitors down a path to lowest common denominator content. They are even, remarkably, publishing sponsored content. Moving to a Creative Commons model and getting out of digital ads would provide the CBC the freedom to escape this destructive cycle, and refocus its norms and journalism towards its critical civic function.

Second, a Creative Commons license would increase the reach of the CBC by enlisting citizens and organizations to become its distribution partners. While a core objective of the CBC should be to reach as many Canadians as possible, this is increasingly difficult to do in a fragmented media ecosystem. Whereas audiences could once be reached through a TV channel, radio station or website homepage, Canadians increasingly consume news via atomized content shared by friends and followers on social media sites. The days of a single national media discourse are over. Allowing anyone to publish CBC content would broaden its distribution and spur innovation in how it reaches audiences. Citizens should not care where CBC content is consumed, but only that it reaches as many Canadians as possible.

Third, it will help to counter the growing challenge of misinformation online. The 2016 U.S. election highlighted structural challenges in the digital media ecosystem. While tremendous good has come from the decentralizing effect of the internet, we have recently seen the emergence of several large platforms as the primary distributors and intermediaries for journalism. Platforms such as Facebook and Google rely on opaque algorithms to determine what pieces of news content we see.

When combined with the advertising technology market that monetizes our attention across all of our internet activity (not just our news consumption), the result is an ecosystem of atomized conversations and filter bubbles. This environment has proved fertile for the distribution of misleading and outright false information. One way to counter this problem in Canada would be to encourage much more reliable journalistic content in the platform ecosystem by allowing CBC content to be published by dozens or hundreds of organizations, not just one.

Finally, allowing others to publish CBC content would provide tangible assistance to both traditional and new media organizations. For traditional organizations, access to high quality local and legislative reporting would allow them to focus their increasingly limited resources on other types of journalism. For digital native organizations, this would both shift the CBC from a competitor to a collaborator, and provide a base amount of quality civic content on which they can build their businesses.

The result would be a new ecosystem of digital companies innovating in the distribution of CBC content and developing their own value-added content in addition. This proposal would turn the CBC into a constructive partner and hub in the civic journalism ecosystem.

Rightly or wrongly, many people that we spoke to for this project, in both the traditional and new media, described the CBC as a “predator.” This should concern all proponents of the CBC. At a time when Canadian civic journalism is both in decline and needed most, Canadians should expect our national broadcaster to be able to work with, rather than compete against, Canadian journalism. Moving to a Creative Commons model would be a big step in this transition.

Taylor Owen (@taylor_owen) is an Assistant Professor of Digital Media and Global Affairs at the University of British Columbia. Elizabeth Dubois (@lizdubois) is an Assistant Professor in the Department of Communications at the University of Ottawa. Both were research principals on the Public Policy Forum report, The Shattered Mirror: News, Democracy and Trust in the Digital Age.


Report on state of Canadian journalism

For the past year I have had the good fortune to work with Ed Greenspon, the Public Policy Forum, and a wonderful group of scholars and journalists on a report on the state of journalism in Canada. The resulting report, The Shattered Mirror: News, Democracy and Trust in the Digital Age, was recently released. My role was mainly to support the analysis of digital journalism both in Canada and within the broader platform (Facebook, Google, etc.) ecosystem. I hosted a workshop in Vancouver to dive specifically into the challenges digital start-ups face in Canada, and I attended many of the roundtables that the PPF hosted across the country. I learned a ton, and while I don’t agree with everything in the report, I think it represents the most detailed current assessment of the state of the industry in Canada, and some of the recommendations, if adopted, would lead to significant transformation of the digital journalism space in this country. The site for the report is here and a PDF can be downloaded here.



Can Journalism be Virtual?

I have an article in the Columbia Journalism Review that explores virtual reality, Facebook, the challenges of doing journalism in and on virtual realities, and the importance of holding platform companies accountable for the worlds they are building. The full piece is here; below is an excerpt from the introduction:

As Facebook and others begin researching and developing technologies that could augment our lives in significant ways, a new space is opening up for journalism. And unlike early virtual journalism experiments in Second Life, which ultimately mimicked traditional “real world” reporting, journalism inside these new virtual worlds will require an entirely different set of skills and approaches, and will challenge three core journalistic concepts: representation, witnessing, and accountability.

First, virtual reality challenges the ways in which journalists think about representation. At the core of VR’s unique power is a deception: the user believes she is experiencing something she is not. The goal of journalism in VR, therefore, is to inform the user by blurring the act of journalistic representation. But journalists cannot appropriate the physiological power of virtual reality without also thinking seriously about how leveraging it for journalistic purposes changes the way the world is represented.

Second, virtual reality challenges journalists’ ability to serve as witnesses with agency. It is entirely unclear what tools will be needed to observe events and institutions in a virtual space that is created by a confluence of human intervention and algorithmic control. If the boundaries between observation, participation, audience, and social structure fundamentally break down in virtual worlds, it is uncertain whether virtual reality journalism can be done by a human at all.

Third, as Facebook begins to build a virtual world and signals its ambitions to augment human capabilities, there has never been a greater need for accountability journalism, both within virtual spaces and for the companies building them. These virtual experiences will be designed and increasingly automated to be as addictive as possible. They will be marketed aggressively and widely, and could radically change our lives. The technologies driving them will undoubtedly be used by governments and militaries to seek ever greater control. But as the cluster of Silicon Valley companies building these futures rises to significant and largely unchecked social, political, and economic power, technology journalism has proven insufficient to hold them responsible for their actions.

Whether or not these technology futures emerge, they are being discussed and researched at one of the largest companies in the world, with a user base of over 1.5 billion people and rich data about much of the world’s consumption, movements, knowledge, and networks. How these virtual worlds are designed and created, and how humans will evolve to engage with these new technologies, pose fundamental problems for journalism.


Article in Foreign Affairs

Last year I was fortunate enough to attend a remarkable workshop at the University of Sydney led by James Der Derian. It was part of a workshop series and documentary project that James is leading called Project Q: Peace and Security in the Quantum Age. The goal of the project is to bring together a broad range of interdisciplinary scholars (physicists, biologists, philosophers, political scientists, artists, poets) to explore the potential implications of a second generation of quantum science. James is better than anyone I have ever encountered at developing creative and thought-provoking conversations, and his documentary about this project is going to be remarkable. This article builds on the summary remarks I made at the Q3 workshop, and explores some of the potential implications of various strands of quantum science for international peace and security.

The full article, Quantum Leap, is here, and below is a concluding excerpt:

The promise of quantum science has always been epistemological. It changes how and what we know. As a second generation of quantum technology comes online, three questions raised by and explored through Project Q become critical.

The first is whether quantum technologies will prove emancipatory or will reconcentrate power in the hands of states. At the Q Symposium, Professor Michael Biercuk, an experimental physicist and director of the Quantum Control Laboratory at the University of Sydney, pointed out that “new technology drives radical social change.” If we are going to take seriously the proposition that quantum could be disruptive, let alone emancipatory, then we need to ask who the nimble outsiders developing these technologies to take on legacy institutions are, and at what point access to these technologies will be democratized and made available to the many in ways that challenge existing structures. It is far more likely that the early stages of the deployment of the technology will benefit incumbent actors.

Take the case of quantum positioning and quantum communications. On the one hand, these technologies have the potential to dramatically increase military capabilities. On the other hand, they could also profoundly empower individuals, providing new levels of privacy and agency if they trickle down into the public sector. For example, the tech journalist Patrick Tucker has suggested that quantum location technologies could potentially provide a replacement for the GPS in phones and hand-held devices, allowing them to run offline and perhaps keep the location data out of the hands of carriers or snooping government agencies. But power is often zero-sum. And it is worth assuming that the interests of those developing these technologies will determine who is empowered by them.

Observers also need to ask who is competing to get these technologies, and whether there is tension between and within emerging strategic alliances. As Biercuk pointed out, the research has moved from “things to study to things to exploit,” meaning there will be real competition for capabilities that can be monopolized. There is a profound tension between the spirit of cooperation (the U.S. government and Silicon Valley, international research labs) and the opportunities for strategic, scientific, and commercial gain: a confluence of interests that has led commentators to warn of an impending “quantum arms race.” We may have lost the window for a truly international project because the incentives for commercial and security gains are too strong. Along with the United States and China, Australia, Russia, and the United Kingdom are all involved in the global race for quantum computing.

Third, and perhaps most important, it is time to begin thinking through how the world will govern emerging quantum technologies. In order to control the digital space, one needs both data and the tools to give them meaning. With meaning will come control and power, which opens up a wide range of governance challenges. According to Jairus Grove, director of the University of Hawaii’s Research Center for Futures Studies, quantum technologies pose a “direct challenge to democratic decision-making and accountability.” As government agencies seek to collect “the whole haystack,” as the former NSA chief Keith Alexander once put it, and utilize increasingly algorithmically oriented forms of governance to rule their citizens, how do we ensure that even more opaque quantum algorithms are employed responsibly?

As a limited number of states and corporations seek fault-tolerant quantum technologies to exploit a decisive military advantage, they will surely change the ways in which we think about power and control in the international system. But even beyond shifts in power, so-called quantum social theory could be used to help researchers metaphorically and empirically understand social phenomena. In a new book on quantum theory, Alexander Wendt, a professor of political science at Ohio State, argues that although classical physics cannot explain concepts such as consciousness, perhaps thinking of collections of human minds as a quantum machine, and subject to the emerging scientific knowledge of quantum phenomena, can scientifically ground our understanding of social collectives. Quantum science could change how we know the world.

The first generation of quantum science unleashed not only the power of atomic weapons but new ways of understanding the universe. The scientists developing quantum technologies were actively engaged in heated debates about the moral responsibility of both. Project Q has sought to replicate this moment. As research continues at a breakneck pace, and as the hype around quantum technologies continues to escalate, it would be wise not to lose sight of the very tangible promise and peril that this new quantum era embodies, for much like the nuclear age, it may arrive sooner than we think.