As AI in journalism takes root, safeguards and training are needed. Treat AI like an enthusiastic intern who has to be checked


As the use of artificial intelligence rapidly expands in journalism, experts say there is a growing need for stronger guardrails, intensive training and more transparency to make sure that AI is used responsibly.

Estimates vary, but surveys indicate that as many as half of all journalists are now using some sort of AI tool, most commonly for researching topics, transcribing audio, or summarizing texts. As many as a third of journalists may be using AI writing tools.

“You need to have really good guidelines in place,” said Alex Mahadevan, who leads the Poynter Institute’s AI steering committee and directs its MediaWise digital media fact-checking program. “We have seen many failures resulting from lack of AI literacy.”

Jared Schroeder, an associate professor and AI expert at the University of Missouri’s School of Journalism, says AI tools “are not magic wands, but extremely flawed tools.” He adds: “They should be treated a bit like an enthusiastic intern who has to be restrained and carefully checked.”

At New York University, journalism associate professor Hilke Schellmann also warns of flaws in the technology. Her investigative team found that AI tools for summarizing meeting transcripts had a “surprisingly poor” performance on long (three- to four-page) summaries, even when their short summaries were accurate. The analysis also found that AI tools were “more hype than help” in placing scholarly work in context.

Humans needed for context

Despite their concerns about AI tools, all three critics – Mahadevan, Schroeder, and Schellmann – told the GJR that generative AI is here to stay in journalism, especially as the tools are expected to improve.

“Whether they like it or not, journalists should study the potential advantages of AI tools, while at the same time being wary of the disadvantages,” Mahadevan says. While AI is evolving at a fast pace, he says, “there are now lots of limitations and ethical issues.”

Among the limitations: AI does not have the bigger picture, often cannot provide necessary context, and occasionally “hallucinates” (comes up with fictional information), so it must be very carefully checked. If used wrongly, Mahadevan says, AI tools “can end up adding work” rather than making a project quicker. 

Schroeder agrees that AI tools “lack that context that is essential to good journalism.” He adds: “Why does journalism matter? Because we need a human to provide that context, to tell the reader what is really happening.”

Such concerns about the limits of AI, as well as acknowledgement of the growing use of the rapidly evolving technology, have led many news organizations to issue AI guidelines.

In what Mahadevan called a “transformational moment,” the Associated Press got journalists’ attention in 2023 by issuing such guidelines, encouraging its staff to learn about AI technology while stipulating that AI tools should not be used to create publishable content or images for the news service.

In 2024, Poynter published its first “starter kit” for newsrooms to develop their own AI ethics policies, and updated that kit this year, adding information on visual journalism. Mahadevan says the kit does not tell newsrooms whether to use AI but helps them create a formal ethics policy and suggests how to inform their audience of that policy.

He says that many readers and viewers distrust AI, leading to what some call a “disclosure paradox” – the question of how and when to tell readers that AI has been used.

Deepfakes especially concerning

A Pew Research Center survey last year of more than 5,000 participants found that 41% felt that AI would do a worse job writing a news story than journalists. (But 19% said AI would do a better job and 20% felt it would do about the same.) The same survey found that 66% of respondents were “extremely” or “very” concerned about people getting inaccurate information from AI, while an additional 26% were “somewhat” worried.

So-called “deepfakes” – realistic but fabricated images created by AI – are of special concern, not only to readers and viewers but also to photojournalists and videographers. For example, after a recent hurricane, a photorealistic fake image of a girl on a boat clutching a puppy was widely circulated.

St. Louis Post-Dispatch staff photographer David Carson, vice president of the United Media Guild and a John S. Knight Journalism Fellow at Stanford University, is concerned about such “photorealistic AI-generated images.” Most news organizations – including Post-Dispatch owner Lee Enterprises, Inc., which publishes 72 daily newspapers in 25 states – forbid the use of AI-generated images for news stories.

Even so, Carson and others argue for the use of “content credentials” on news photos, video, and audio to document their provenance, history, and any edits that have been made. “The use of content credentials is the best way forward for building public trust,” Carson says, “but we may still be a year or two away from widespread implementation on news websites.”

Even though such AI-generated images are widely banned, some news organizations use AI tools to help produce graphics. (Many newspapers require a disclosure that AI assistance was used to create the illustration.) While graphic artists can use such tools to make their work easier, the long-term concern is that AI graphics might eventually decrease the number of artists employed by publications.

AI is also increasingly used to streamline search functions. This year, the AP introduced an AI-powered “content delivery platform” that lets clients navigate the news organization’s visual, audio and text content. The new AP Newsroom includes search and content recommendations using AI.

“Right now, there are certain things that AI tools can do well,” said Mahadevan, adding that some copy editors find the tools useful. “Others must be carefully checked.”

New initiatives 

Among the AI initiatives that have attracted attention are Hearst Newspapers’ DevHub group, Spotlight PA in Pennsylvania, AI efforts by the Texas Tribune, and several initiatives by the Washington Post. However, none of those publications allow the use of generative AI to actually write articles.

Among its AI initiatives, the Washington Post offers “Ask the Post AI,” which responds to readers’ questions using Post reporting, with the caveat: “Answers are AI generated from Washington Post reporting. Because AI can make mistakes, verify information by referencing provided sources for each answer.” The Post also adds an AI-generated summary of readers’ comments after major articles.

“We’re proud of our accuracy rate,” says Tim O’Rourke, who leads Hearst Newspapers’ DevHub team of a dozen journalists based in San Francisco. The team’s expertise is made available to Hearst’s 28 dailies and 50 weeklies. “We do all-hands reviews and focused training – what you should and shouldn’t do,” says O’Rourke. “We err on the side of caution. We do a ton of checking.”

He says DevHub is organized into six major groups, including those led by the Houston Chronicle, the San Francisco Chronicle and the San Antonio Express-News. DevHub’s most popular AI tool, Assembly, monitors public meetings such as school board and local government sessions. The tool transcribes audio feeds and makes them available to reporters as full text or time-stamped summaries. Reporters check quoted sections to make sure they are correct.

Another tool, called Producer-P, aims to streamline production tasks such as alerts, newsletters, and summaries for social media. It can also suggest headlines, although O’Rourke says most headline writers just use it to suggest ideas.

Another area is analyzing documents. In Oakland, he said, reporters used AI tools to summarize thousands of emails sent during an election cycle. In Albany, journalists used such tools to analyze years of medical complaints to find cases in which surgical tools were left in patients.

AI can’t replace a good journalist

How widespread is the use of AI tools in journalism? 

Companies that provide public relations software, such as Muck Rack and Cision, have done recent surveys of how journalists are using AI.  Cision reported this year that 53% of the 3,000 journalists surveyed worldwide were using generative AI tools like ChatGPT to support their work, and an additional 14% planned to start using AI soon.

However, U.S. journalists in the Cision survey reported lower usage: 49% of American journalists said they did not use AI and did not plan to use it in the future. The most frequent uses of AI tools were researching topics (25%), transcribing interviews and audio (23%), and summarizing text (20%).

A 2025 survey by Muck Rack – which develops cloud-based public relations software – found that 77% of the nearly 2,000 journalists surveyed sometimes used AI tools. (Most of the journalists in the survey were North American, and 57% of them were full-time staffers at a news organization; the rest were freelancers or self-published.) ChatGPT was used by 42%; transcription tools by 40%; and writing tools by 35%.

A 2024 AP survey of 292 journalists, mainly in the U.S. and Europe, found that, despite ethical concerns about the technology, nearly 70% of newsroom staffers from a variety of backgrounds and organizations said they used AI on occasion. The most common uses of AI tools were crafting social media posts, newsletters and headlines; translating and transcribing interviews; and drafting stories. One-fifth said they’d used AI for multimedia.

Aimee Rinehart, co-author of the survey and the AP’s senior product manager of AI strategy, said last year that “this technology is already presenting significant disruptions to how journalists and newsrooms approach their work.”

Surprisingly, only 6.8% of those surveyed in the AP study mentioned “job displacement” as a concern about AI. However, that is a big worry for communications unions and many others in the news business – especially over the long term as the use of AI tools expands.

Poynter’s Mahadevan says “you cannot replace a good journalist with AI. The function of the journalist’s job might change,” but the journalist won’t be replaced. “If used correctly, AI can help reporters do their work quicker and more comprehensively. And journalists who refuse to adopt AI might not be able to do as much.”

Mizzou’s Schroeder acknowledges that “any new tool can increase efficiency” in newsrooms. But he says AI tools can never replace the context provided by good reporters and editors. “Journalists must retain the value they provide to their audience. AI is not interested in creating a better community. Journalists are.”

Hearst’s O’Rourke also says he does not see AI displacing good journalists. “It can give journalists more time to do investigations and cover breaking news,” he says. “But there is always a need for local expertise.”

Asked about the future, he said Hearst’s approach is, “Innovate, but cautiously.”