AI in journalism is here. Now what? Educators debate how to ask and verify the answer.
At the start of the spring semester, the Journalism Department at San Francisco State University added a line to its student code prohibiting students from using “automated tools or assisted processes, such as machine learning or artificial intelligence” without citing the source.
Any assignment found to have represented the work of others in this way would automatically receive an F, and the student could potentially fail the course. The Office of Student Conduct would also be notified.
Rachele Kanigel, a journalism professor at San Francisco State, said she and her colleagues were concerned about students using generative AI to report and write stories. They made the change after a faculty meeting in January. “I do think generative AI has a place and could even be a useful learning tool for students, but I fear students will misuse it,” said Kanigel, who is also the faculty advisor to the student newspaper, the Golden Gate Xpress.
Like their peers across higher education, journalism and digital media educators have been wrestling with how to address generative AI in their classrooms and student newsrooms since the user-friendly ChatGPT was introduced last November. Some want to limit its use or restrict it entirely. Others have embraced it.
“I may be the enemy among peers, but I’m actively teaching my students how to use AI this semester instead of warning against it,” said Jennifer Sadler, who teaches marketing and branding at Columbia College Chicago. “We need to be agile, creative and teach students foundational concepts alongside tools they need for a rapidly changing society.”
ChatGPT and other large language models like it write responses to prompts based on patterns learned from training data that includes sources like Wikipedia. For ChatGPT, that training data only goes up to September 2021. (When prompted, for example, it admitted that it had no knowledge that former President Donald Trump had been indicted last month.) ChatGPT writes comparatively well but struggles with citations and will sometimes just make things up. It’s a toss-up in journalism which is the deadlier sin, plagiarism or fabrication. Even ChatGPT couldn’t say when I asked it.
“As an AI language model, I do not have personal beliefs or opinions. However, both plagiarism and fabrication are serious offenses in journalism and can have severe consequences for the journalist and the publication.” Plagiarism, ChatGPT told me, can lead to legal action. Fabrication “can lead to the complete loss of trust from readers.”
That’s what concerns Kanigel, who has played around with ChatGPT and Bard, a similar generative tool from Google.
“I have to admit that the writing is better than some of my students’ writing,” she said. “But I’ve also been struck by the hallucinations: made-up facts, quotes, etc. And I worry that students will use these tools when they are short on time or feeling pressured to produce.”
AI is not new to journalism. Four years before ChatGPT came onto the scene, Forbes declared in a headline that “The Future of Journalism is Artificial Intelligence.” By then, news outlets had already been using forms of AI for years.
- The Associated Press started using artificial intelligence in 2014 to automate stories about corporate earnings from its business news desk.
- The Los Angeles Times used a bot in 2014 to write a story about an earthquake.
- The Washington Post has also used bots to cover elections and sports, starting with the 2016 Rio de Janeiro Olympic Games. In 2018, the Post won top honors at the global BIGGIES awards for its in-house AI tools.
- Bloomberg has also embraced AI, using machine learning to more easily customize document searches on its subscription terminal (once a dedicated machine, now software) and to create stories based on financial reports.
In other words, long before ChatGPT made artificial intelligence understandable, even magical, to most people (type in a prompt and watch it answer), major news organizations were already using AI, even if most journalism schools were not teaching about it.
That means many educators now find themselves not only discovering the technology alongside their students but also having to teach authoritatively about it without coming across as naive or inept.
It’s not an easy balance, as many journalism professors have learned over the past 15 years of digital transformation within the industry.
“ChatGPT’s release in November was the inflection point,” said Adam Maksl, associate professor of journalism and media at Indiana University Southeast. “It had been building up to this. But it’s not new.”
Maksl, who is also a faculty fellow for eLearning Design & Innovation for the entire IU system, said AI creates opportunities for journalism.
“I have a colleague who would say these aren’t really tools but digital co-collaborators,” he said. “Many of us work collaboratively with other humans. What these tools present is a non-human collaborator. That’s what’s different.”
Through that collaboration, ChatGPT and models like it force us to ask better questions, Maksl said.
Or, as Lehigh University journalism professor Jeremy Littau wrote in a December post on Substack titled “Who’s afraid of ChatGPT,” the question itself becomes the more important part of the process.
“ChatGPT’s ability to synthesize billions of pages on the web and give us a starting-place answer is not the death of a form or an industry,” Littau wrote. “Those answers could be incorrect, or rooted in bias. They might actually be pretty decent. But either way, they should start conversations with the humans interrogating them at the point of research and prewrite, not be the definitive copy that gets turned in for a class or published somewhere. If we treat generative text that way, we might be on to something transformational in education and media. It’s a huge opportunity to spend our brainpower on pursuing novel questions of substance and importance.”
Sarah Murray, an assistant professor in the Film, Television and Media department at the University of Michigan, said she has been talking about AI a lot this semester in a digital media seminar.
She said it is important not to frame AI as cheating. “I push students to think about the problems that film and journalistic production have always faced,” she said. “The main example of this is industry standards of truth and authenticity, which both journalism and filmmaking ascribe to.”
Filmmakers think about the uncanny, she noted, and journalists think about reporting that operates in the realm of accessible literacies of credibility and objectivity. “Both of these have always been a problem for their respective artistic realms and both are historical problems that undergird how we teach creative arts.”
This is not the first time filmmakers and film scholars have dealt with the uncanny, she noted. “So we start by asking students, how has the uncanny been tackled in the past, and how might we lean into the creative affordances of AI to engage a new or different meaningful and trustworthy contract with the audience?”
In her Digital Media Strategies class at Columbia College Chicago, where I also teach, Sadler has students build content themselves and also use an AI generator. She then assigns them to write about the efficiencies and challenges of both.
“Professors are worried that students will use these things to cheat – if we should even call it that,” Sadler said. “I’m just not. College is not some wonderland where we should operate like the world outside of it isn’t rapidly changing. And we shouldn’t be scared or worried by it.”
In my Opinion class this semester, we’ve spent a lot of time talking about AI, particularly ChatGPT. One of the students’ first assignments was to ask the bot to write an editorial about itself. Then I had them write their own, without assistance, and we noted the differences. ChatGPT admitted it doesn’t do opinion well because it cannot apply human logic. It handled the facts well enough (which we made sure to verify), but it did not know how to structure an op-ed or editorial because it had not been trained to. After all, it only does what it has been trained to do.
As someone who works hard to accommodate neurodivergent students and non-native English speakers, I see how it can be useful as assistive technology for students who might benefit from prompts to get started.
But there are other uses, too.
- Some other ways ChatGPT could be useful in news production include personalizing newsletters, moderating content and translating copy, according to Nick Diakopoulos, a communication professor at Northwestern University who recently launched the Generative AI in the Newsroom project. (A rough sketch of what the translation use might look like in code follows this list.)
- Damian Radcliffe, a journalism professor at the University of Oregon and a fellow at the Tow Center for Digital Journalism at Columbia University, offered AI tips for publishers in a March piece for What’s New in Publishing, noting that smaller newsrooms, in particular, may not have the funds to invest in AI or may be wary that its benefits are being overpromised.
- Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, pointed out in a handy AI guide on Substack that the trick is figuring out what you want AI to do for you. It’s also important to know that AI lies, which is where it may be most problematic for journalists who don’t fact-check what it spits out.
“Every fact or piece of information it tells you may be incorrect,” Mollick wrote. “You will need to check it all. Particularly dangerous is asking it for math, references, quotes, citations, and information from the internet.”
He followed with a guide to avoiding hallucinations, the term for the falsehoods the bots put out. These happen when the AI doesn’t understand the question or misinterprets the data. If the bots don’t have an answer, they just fabricate one.
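For readers curious what these uses look like outside the chat window, here is a minimal sketch of the translation case Diakopoulos describes, written against OpenAI’s Python library as it existed this spring (the pre-1.0 `openai` package). The model choice, prompt wording and sample copy are illustrative assumptions, not a newsroom standard, and, per Mollick’s warning, a human still has to verify every line the model returns.

```python
# Illustrative sketch: asking a chat model to translate news copy.
# Assumes the pre-1.0 openai package and an OPENAI_API_KEY in the
# environment; the model, prompt and sample paragraph are placeholders.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder copy; any real paragraph from a story would go here.
paragraph = (
    "The city council voted 6-3 on Tuesday to extend the downtown "
    "transit pilot through the end of the year."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the translation as literal as possible
    messages=[
        {
            "role": "system",
            "content": (
                "You are a news translator. Translate the user's copy "
                "into Spanish without adding or removing facts."
            ),
        },
        {"role": "user", "content": paragraph},
    ],
)

# The draft translation; an editor who reads Spanish still checks it.
print(response["choices"][0]["message"]["content"])
```

The same pattern arguably covers the other items on Diakopoulos’s list: swap the system instruction for one about summarizing a reader’s interests or flagging abusive comments, and the newsroom’s copy stays the user message. Only the instruction changes, which is why Mollick’s point about knowing what you want the AI to do comes first.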
Maksl looks at AI differently from most journalism educators, perhaps because he understands the technology so well, including its potential for error.
He goes back to the purpose of journalism. “It’s to give people the information they need to be free and self-governing, to empower people, to serve the audience. Why does it matter if I use this word over that word if it’s not creating a problem with meaning?” he said. “We’re holding on to a traditional paradigm that may have been useful for other reasons. The most important part of journalism is reporting, so if we can give people more time to report and develop those relationships, why wouldn’t we?”
Maksl said he worries about educators who are too focused on teaching journalism students formulaic ways of writing, because eventually a computer will do that work.
“The value of a copy editor wasn’t just straight-up line editing but editing for bias, so how do we emphasize the human elements of the skills we are teaching?” he said. “We keep pointing to the nature of the industry. Do we want to teach students skills that are relevant and adaptable to a variety of circumstances, or the old way of doing this? This is the problem sometimes with journalism educators.”
Jackie Spinner is the editor of Gateway Journalism Review. She is an associate journalism professor at Columbia College Chicago and faculty advisor to the Columbia Chronicle. She has never taught at the University of Missouri in spite of what ChatGPT said.