Should journalism have an industry-wide ethics policy for covering artificial intelligence? 

Tech journalists have set their own standards because journalism organizations haven’t yet issued consistent guidelines on generative artificial intelligence, whether for how it’s used in newsroom processes or for how it’s covered.

Archival photo of journalists in the Radio-Canada/CBC newsroom in Montreal, Canada, edited with speech bubbles: one journalist asks, “How can we improve journalism’s most influential ethics code?” and another responds, “Ask Claude.” Speech bubble text from the Society of Professional Journalists’ 2026 ethics week and Anthropic’s Claude interface; edits by James Salanga.

The Society of Professional Journalists is in the process of revising its code of ethics for the first time since 2014, and AI is sure to be on the agenda, following a short 2023 statement from national president Claire Regan: “While there is no need for a ban on artificial intelligence in journalism, its use is best limited and considered on a case-by-case basis.”

As generative artificial intelligence has come to dominate headlines over the past few years, practically every beat a journalist may cover has been implicated in some way — even if many readers don’t necessarily think it’s a good thing. The speed with which the technology has touched journalists’ work and lives, reporter and author Cade Metz said, resembles the rise of the internet in the ’90s: “It suddenly becomes part of everyone’s beat.”

Tech reporters in particular have had to familiarize themselves with generative AI, but there are no standard guidelines for what that process looks like.

“When ChatGPT hit, I was essentially the only reporter covering this stuff at the [New York] Times,” Metz told The Objective. 

Given generative AI’s swift rise, journalism organizations haven’t issued consistent ethical standards for the technology, whether for how it’s used in newsroom processes or for how it’s covered.

The Center for News, Technology & Innovation found that “while AI is being used in newsrooms, formal codification within newsrooms and professional societies is not yet universal, and there are still barriers to implementing AI policies.” In general, their report states, current policies emphasize transparency of AI use and human verification of its outputs, but do not address biases within these tools, among other issues. 


Related: Baltimore Sun turns to AI for political analyses


More broadly, the practical logistics of coverage are colliding with other ethical concerns about generative AI’s expansion, from the threat it poses to journalists’ jobs in an already-unstable market to the methods used to train large language models like those behind ChatGPT, including the water usage of data centers built in rural communities, the Global South workers who help train these models, and the biases encoded in their algorithms. Reporters covering artificial intelligence as part of their beats are left to develop a code of ethics as they go, and much like newsrooms, they don’t all agree on what’s required to understand how the technology works.

Tech reporters and writers divided on framing, sourcing of AI coverage

The tech industry has long been marked by gender and racial biases, and tech reporting has mirrored the field’s makeup, remaining predominantly male. The most recent Pew Research Center survey of journalism’s demographics, conducted in 2022, found that women made up just 38% of science and technology reporters.

Radhika Rajkumar, an editor at ZDNet, said the advent of generative AI as a growing beat has helped diversify tech reporting. 

“Part of the reason that I went into AI journalism is because I think it is in such need of demystification,” she said. “My ethical commitment is … to the reader. It’s a very dense beat, so what separates better reporting is really that base-level understanding, the translation layer of tech jargon to layman’s terms.” 

Building up that base-level understanding, Rajkumar added, includes talking to people in person. That doesn’t mean you can’t do it otherwise, she said; “it just might make sourcing harder.”

Metz, who lives in San Francisco, instead emphasized the importance of traveling to other parts of the world where this technology is being built — like Toronto, Europe, and China — and, crucially, “where it’s perceived completely differently. A lot of people covering the [Silicon] Valley miss that the perspectives on this technology are far broader.”

Jasmine Sun, who publishes a newsletter on “AI and Silicon Valley culture” and was recently brought on as a contributing writer at The Atlantic to cover AI, takes a different tack: More tech journalists should report from the Bay Area, where most of these companies are headquartered.

Many tech reporters “don’t live in the right places or talk to the right people, and that narrows your perspective,” she said. “There’s not that many journalists that live in San Francisco, and that’s a huge problem. You wouldn’t cover the White House from San Francisco.” 

For her, the best writing on AI comes from outside of mainstream media: “I almost never learn more about it from any mainstream outlet as much as I learn from Twitter and Substack.”

That may be a sourcing issue, a recent study found. Its authors analyzed coverage from The New York Times, The Wall Street Journal, and The Washington Post, and saw “more positive tones when only developers or vendors discuss their own AI tools” — meaning journalists have the ability to “counterbalance and contextualize inherently biased perspectives.”

“Awareness of the phenomenon of technological overpromising is particularly vital, as projections about AI’s future are often shaped by the positioning and vested interests of the actors making them,” the study’s authors wrote. 

Still, Sun considers it important to take founders and engineers bullish about the technology at their word. 

“When Dario [Amodei, CEO of Anthropic] says 50% of white collar jobs are going to disappear by 2030, he believes that,” she said. “That doesn’t mean he’s right, but he isn’t making it up as a marketing claim.” 

For her, taking these claims in good faith means understanding that the “people working in tech are trying to do what they think is good or makes coherent sense to them,” so reporting ought to reflect that.  

Independent writer Ed Zitron, who hosts the podcast Better Offline, disagrees: He said the best reporting on AI “pierces through the marketing veil” and is aware that these companies are “actively incentivized to mislead you.”

Still, he said that he enters into every story “prepared to be wrong. The more I want something to be true, the more skeptical I am. That is probably the most valuable part of the process.” 

Setting some guidelines

Debates about how to cover artificial intelligence are compounded by evolving newsroom guidelines around the technology and the fact that many newsrooms have existing content-sharing partnerships with OpenAI.

The Online News Association, for instance, has led an AI in Journalism Initiative since 2024 in partnership with Microsoft. Around the same time, the Associated Press announced, alongside its licensing agreement with OpenAI, some guidelines, including that “while AP staff may experiment with ChatGPT with caution, they do not use it to create publishable content.” Similarly, Wired has released a rundown of how its journalists will and won’t use AI, noting they won’t “publish stories with text generated or edited by AI,” though they may use it to generate story ideas or conduct research.

In light of these competing forces, refusing to use or engage with AI may have become a liability for tech reporters, since it could undermine their authority when it comes to reporting on the technology, the industry, and its impacts.

Sun mentioned meeting journalists who cover tech but have never once used ChatGPT. “You don’t need to like it,” she said, but argued that “the way you describe the technology is out of date because you never use it.”

Rajkumar said understanding the technology is just practical: “Without understanding what an AI product is [and how it works], you won’t be able to measure that against what a marketing exec or a PR person is telling you.”

But acquiring that basic level of understanding may also chafe against the technology’s imposition on editorial workflows. Many reporters have taken a stand against what they see as generative AI’s encroachment on the editorial process, particularly mobilizing in their unions to fight for AI guardrails as a labor right.


Related: Workers at nation’s largest investigative newsroom, ProPublica, go on strike


These are just some of the pressing questions facing journalists who cover AI. Casey Newton, founder of the tech newsletter Platformer and co-host of the New York Times podcast Hard Fork with Kevin Roose, responded to a request for comment by saying: “I’m still thinking this through, and at this time my thoughts aren’t settled enough to do an interview.”

Zitron, who also writes a newsletter about AI and works in PR, calls generative AI “a unique exploitation of a weakness in the tech and business press.”

“AI is the apex of an era of trusting these companies, as the media, and now we’re seeing what happens when they run wild,” he said, adding that he sees deep institutional problems largely within the editorial class, who “don’t want to rock the boat. And we’re long past the point where it’s rational to do that.”

It could be, then, that basic tenets of journalism remain the best guide into the uncertain future. 

Zitron said it’s worthwhile for tech journalists to take a cue from the speech Philip Seymour Hoffman, as Lester Bangs, gives in Almost Famous: “You cannot make friends with the rock stars.”

To Metz, a veteran on the tech beat, “Silicon Valley has always been driven by hype, and sees the world in a different way and portrays it in a way that’s different from reality.” 

Covering artificial intelligence developments ethically and accurately, to him, is about “the skill to translate that and package it so the reader can understand … that’s not something that AI can replace.” 

And it means reckoning with the notion that all this is inevitable.

“Can’t we look really hard at the flaws in this technology and decide if this is really what we want for the future?” he said. “That’s what we need to do.” 


Jake Pitre is a writer and scholar in Montreal whose work has been published in The Globe and Mail, The Atlantic, Fast Company, and elsewhere.

This piece was edited by James Salanga. Copy edits by Jen Ramos Eisen.
