Tolerance for the speech we despise is the lesson of 232 years of the First Amendment. Yet the nation is awash today in public attacks on free speech and intellectual freedom from right, left and center.
Conservatives – some of whom picture themselves in the Oval Office – ban “woke” ideology, “critical race theory,” drag queen story hours, library books that mention diversity or sexual content and the popular TikTok social platform used by 150 million Americans. They empty school library shelves in Florida, want to end medical treatment for transgender adolescents and bar trans athletes from women’s sports.
At the same time, liberal law students recently shouted down a Trump-appointed judge at Stanford Law School, fortified by the moral support of a dean. Tirien Steinbach, dean for diversity, equity and inclusion, asked appeals Judge Kyle Duncan whether his words were “worth the squeeze,” given how upsetting they were to protesters. Meanwhile, in newsrooms from the New York Times to Washington University’s Student Life, reporters face dissension from colleagues and trans rights activists for reporting on criticism of transgender medical treatments.
If this weren’t confusing enough, artificial intelligence has entered the public sphere with soulless machines programmed with more facts than any human can learn. How long will these genius machines have patience with the relatively stupid people they serve?
The CEO of the most eye-catching AI experiment, ChatGPT, is a young man with whom some St. Louisans grew up, Sam Altman. He is remembered at John Burroughs as the student in the early 2000s who persuaded teachers to put up “Safe Space” signs for gays and who came out in his senior year. Altman dropped out of Stanford after realizing he was learning more from poker than the AI/robotics lab where he worked, according to a profile in the New York Times.
GJR devotes much of this issue to exploring what AI may mean for journalism and education.
Jackie Spinner, our editor, interviews educators who think it makes more sense to use AI as a tool to improve learning and journalism rather than branding it as cheating.
Mark Sableman, one of St. Louis’ most prominent media lawyers, asks what could go wrong once AI is added to the media world. “Everything,” he says, especially if AI is used as “reader ready content,” untouched by human hands.
Sableman’s point is illustrated by one of our stories that was written by ChatGPT itself. We asked it to tell us about Altman’s background in St. Louis. The 500-word response claimed Altman and his wife had made a major contribution to LaunchCode and that he had graduated from Stanford in 2007. But Altman is gay and did not graduate from Stanford. In a separate piece, ChatGPT identified Spinner, the GJR editor, as a professor at Mizzou rather than at Columbia College Chicago.
Altman, in his interviews, recognizes dangers of AI while making soaring claims about its importance to humanity.
In the Times’ story, Altman pointed out that he shares a birthday with Robert Oppenheimer, leader of the Manhattan Project, and quoted him to the effect that “technology happens because it is possible” – a plain statement of technological determinism.
He went on to say his company could “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.” He thinks OpenAI could capture much of the world’s wealth and redistribute it to ameliorate poverty.
Altman voiced similarly sweeping conclusions in a 2021 interview with radio personality Charlie Brennan, reminiscing about his childhood in Clayton, when he walked through the back gate to Captain Elementary School.
He said, “We started OpenAI because we thought this thing that was happening of us….like humanity, building digital intelligence is going to be one of the most important milestones in human history and it could go either really well or really badly and we did not think there was enough effort to make sure it happens safely and in a way that humans broadly benefit.
“It’s very hard to think about what the world is going to be like when we have superhuman capacity inside a computer, computers that can learn anything…that can think billions of times faster, smarter than the smartest human on any topic simultaneously and that eventually become self-aware and have their own desires and will and none of the limitations that humans have…
“This is going to be a bigger technological revolution than the three great ones so far, the agricultural revolution, the industrial revolution and the computer revolution all put together….Everything is going to change.”
Altman may be right, but his claims sound exaggerated. The Washington Post reported this month that some AI experts question the speed with which Altman introduced AI to the public and his company’s transition from a nonprofit to a “capped profit” structure that allows investors to earn 100 times their investment.
Altman’s dreams are reminiscent of the now tattered hopes, common around the turn of the century, that computer technology and the smartphone would democratize the media by putting a printing press in every person’s pocket. Comments at the end of online stories would bring immediate accountability to journalists who wrote distorted stories.
The promised land didn’t arrive. Yes, citizen journalists captured big stories such as the death of Michael Brown in Ferguson, yet some of the stories they sent the world were mythical, such as the Hands Up, Don’t Shoot storyline. And the comments at the end of stories often became forums for racism and misogyny.
LockerDome, renamed Decide, is a current example of how a crown jewel of St. Louis tech startups can end up fueling disinformation. Paul Wagman lays out how it has helped monetize dozens of sites promoting election denial, white supremacism, Christian nationalism, Covid skepticism, climate-change denial and other far-right passions and fantasies. And the St. Louisans who operate the company won’t even offer a public explanation of their behavior.
Meanwhile, a young generation of smartphone natives stares into its devices while losing personal contact, many of its members becoming increasingly isolated and depressed by the inches-high representation of the world that plays out on their screens.
Today’s parents – and grandparents – face a daunting task of protecting the next generation from the screens that seduce them into electronic isolation and despondency.
Sen. Josh Hawley, R-Mo., has a point when he talks about passing a law to cut off teen access to social media platforms until age 16, although such a law couldn’t be enforced.
Hawley seems mostly intent, though, on making headlines for his obvious pursuit of the White House.
Recently he got into a floor debate with fellow Republican Rand Paul, a libertarian from Kentucky, when he asked for unanimous consent to ban the TikTok app. Paul refused to give his consent and argued that such a law would violate the First Amendment. (Ironically, the only reason I saw the exchange is that it came across my TikTok feed.)
Hawley claimed TikTok wasn’t free speech because it made the private search data of Americans available to the Chinese Communist Party, an act of espionage. Paul countered that there was no proof that was happening and that U.S. search engines similarly mine private data and make it available to third parties.
Hawley also has been busy in congressional hearings bludgeoning social media platforms for taking the advice of government health officials and removing false Covid and anti-vax claims. He knows full well from his years as a brilliant student at Stanford and Yale and his time as a Supreme Court clerk that the First Amendment applies only to the government. But he misleads his followers into thinking that government advice to the social media companies is coercion.
Recently, Hawley and Missouri Attorney General Andrew Bailey launched investigations of the Washington University Transgender Center at St. Louis Children’s Hospital and called for it to halt its care. Chancellor Andrew Martin, after a weak initial response, refused to halt the treatment. Meanwhile, the Missouri Senate has passed a bill that would put a four-year moratorium on puberty blockers, hormone therapy and surgery for those under 18.
The Student Life newspaper’s straightforward coverage of the dispute ran into criticism both from some staff members and from trans activists who said the newspaper’s neutrality in its reporting harmed trans students.
New York Times editors have run into the same criticism from within and outside the newsroom. Newsroom employees wrote a letter criticizing the paper for “anti-trans bias” that they said aligned it with “far-right hate groups.”
Executive Editor Joseph Kahn responded sharply. “Participation in such a campaign is against the letter and spirit of our ethics policy. We do not welcome, and will not tolerate, participation by Times journalists in protests organized by advocacy groups or attacks on colleagues on social media and other public forums.”
Hawley should stay out of the business of medical professionals and leave the family decisions to parents and children in consultation with doctors. But reporters must present a straightforward story to the public in an unbiased way.
One way that news professionals respond to the news and information chaos of today’s public forum is to advocate for media literacy. Illinois was one step ahead of the rest of the country in passing the first media literacy requirement for public schools.
Emily Cooper Pierce, GJR’s student editor, spent a year traveling to Illinois schools to see how it is working out. She found that many teachers had not even heard about the requirement, few had received professional development and there was no funding to implement the new mandate.
It’s safe to say that Illinois students are no more media literate today than before the law passed.
So the nation barrels towards a brave new world of information technology as the presidential election season approaches with one leading candidate building his campaign for the world’s most powerful job on a mountain of false claims about winning the last election. The criminal investigations and trials he faces are just fake news conspiracies brought on by a weaponized legal system, he claims.
How will the new wizards of AI create an algorithm that deals with the fact-free delusions of almost half the people in the country?
All this plays out in a chaotic electronic world of trillions of bits of information and misinformation – a world in which Truth tries to catch up with Falsity but lags a lap behind because false news is more sensational, simplistic and exciting.
Oh, for the time when John Milton could confidently predict on behalf of free speech that when Truth and Falsehood grapple, “who ever knew Truth put to the worse in a free and open encounter?”
William H. Freivogel is a professor and former director of the School of Journalism at SIUC. He is the publisher of Gateway Journalism Review.