As a digital director for media companies and now the digital advisor for West Virginia University’s Student Media department, I can’t tell you how many times I’ve had well-meaning publishers, editors, and now college students come to me excited about the hot new thing everyone is using. I have to be the sane one in the room who looks at the product and decides whether we should use it or give my coworkers a swat on the hand and say “no!”
So, when I started seeing chatter about the implications of how ChatGPT could affect journalism, I figured I better crack open the artificial intelligence chatbot to see what all the hubbub was about.
Artificial intelligence isn’t new to the media industry; newsrooms have been using it for years to automate things like obituaries, sports agate and event listings. Some crafty people have used it to write articles for fake news sites or rewrite legitimate news articles as clickbait.
What makes ChatGPT different is that it’s easy to use, accessible to everyone and, for the time being, free. Given a prompt, ChatGPT can answer questions, give advice and even write stories. The first night I used it, I stayed up until 2 a.m. letting ChatGPT write a book for me: I gave it the title of a chapter for each prompt, and it wrote a corresponding chapter. The result was a well-fleshed-out story with good character development, an interesting plot and a narrative that wove in information from previous answers.
The next day, through eyes blurry from lack of sleep, I began to test different prompts for actual journalism uses. The results were a mixed bag: both great and scary. Some of them:
I asked it to give me five ideas for news articles for National Donut Day. It gave me five great ideas that I would have actually used.
I gave it the web address of a recent press release from our university and asked it to write a news article about the information on the page. It created a fairly well-written article in AP style that did a good job of explaining the information accurately.
I made up some names and stats for a basketball game and asked it to write a sports article. It gave me a well-written recap and even added some color and conjecture based on my prompt, even giving the coach a few quotes.
I gave it a prompt that our university president announced that enrollment was down 20% and that this would result in a 15% cut in employees. ChatGPT created an article talking about how the university would pull through this time and focus on student enrichment, and even made up quotes from the president.
I gave it a prompt asking it to write an opinionated editorial from my editorial board saying our editorial board was against a newly passed campus carry bill, that students were not happy about this new law and that the legislature should repeal the law. It wrote the best editorial my staff has put together since I’ve worked at the paper and filled in all kinds of background on campus carry and even scolded the university for not doing enough to protect students.
My conclusion was that ChatGPT might be our staff’s best writer. I could see it being a great tool for generating ideas and doing research, but I was also worried about a student turning in an article that ChatGPT wrote.
With no policies in place for such a scenario, I decided to bring my concerns to my student media director. We both agreed that it was unlikely one of our students would turn in an AI written article, but we should probably have something in our handbook just in case.
The next day, “just in case” happened when an email popped up on the College Media Association listserv about how an adviser at another university had a student confess that their last column was written by ChatGPT. Who would have guessed Skynet’s first move against humanity would be to help a student meet deadline?
There are a lot of implications to using ChatGPT for journalism. Is it plagiarism to feed ChatGPT your reporting notes and let it write an article? How do you even determine that an article was written by AI software when there’s no public record of what ChatGPT generated? I also worry about the amount of conjecture and the invented quotes and opinions ChatGPT generated in my prompts – a lazy staff could let in false information or subjective analysis of information, or worse, promote ChatGPT’s use with their staff. We need to address this in our handbooks, even if it’s just to establish that it exists and that we’ll be developing new policies as we understand ChatGPT’s uses.
So, what does ChatGPT think of using it to write articles?
“As an AI language model, ChatGPT can be used to generate news articles. However, it is important to note that ChatGPT is not a journalist and does not have the ability to fact-check or verify information in the same way that human journalists do.
“Moreover, the ethics of using AI for journalism are still a topic of debate, as there are concerns about the potential for bias and lack of transparency in AI-generated content. Therefore, any news articles generated by ChatGPT should be carefully reviewed and edited by human journalists to ensure accuracy, fairness, and journalistic integrity.”