The most common question from parents at open days at Sheffield Hallam University is: “How will AI affect future job prospects for journalists?”
It’s a great question and a tough one to answer without the benefit of a crystal ball, but as researchers we can learn from the past and make educated predictions. Artificial intelligence is certainly reshaping the journalism industry in profound ways, just as it is transforming virtually every other trade.
History tells us that not all technological innovation results in job losses.
For example, the arrival of the iconic Model T Ford in the early 20th century could have spelled the end of the livelihoods of traditional blacksmiths, particularly those who specialised in horseshoeing and wagon-making. As automobiles became more affordable and widespread, the demand for horse-drawn transportation understandably declined.
Yet we now know that many blacksmiths transitioned into auto repair, using their metalworking skills to fabricate car parts. Others became mechanics, learning to work with engines and automotive components. In the process, they prospered.
Entire industries sprang up around the world, spawning new professions based entirely on the ease with which people could now get around: fast food restaurants and motels in America, and paved highways across Great Britain and the rest of Europe.
Economists call this the “productivity effect” – when automation in one sector creates a demand for jobs in another, sometimes unrelated sector.
The bad news is that economists also have something called the “displacement effect”. It’s the other side of the coin – when a rapidly expanding technology directly takes away the role of a worker.
For journalism to survive, it is vital that we don’t find ourselves falling into this category.
Some sectors of the media have in the past been notoriously slow to react to technological innovation. How else do we explain entire departments processing analogue film for photographers practically disappearing overnight when digital imaging hit the sector?
AI-powered tools are currently being used by news organisations like the Associated Press to routinely generate earnings reports. The technology can produce short, fact-based articles at a speed no human can match.
While this clearly increases efficiency, it has reduced the demand for entry-level reporters who traditionally would have been doing this kind of work as they learned their trade.
However, it is often only when things go wrong that people realise how much AI is already used in the media.
In January, Apple's AI-generated news alert service faced significant criticism after disseminating several inaccurate and misleading notifications. One embarrassing error involved a notification falsely claiming that tennis star Rafael Nadal had come out as gay. This mistake arose from the AI system confusing a BBC article about Brazilian tennis player Joao Lucas Reis da Silva with Nadal. Apple was forced to issue an apology and suspended the AI-generated news service.
This was a timely reminder that wherever possible, AI should be used as a journalistic assistant to do the heavy lifting, not as a replacement for the common sense of human reporters. News organisations must also maintain strict editorial guidelines and fact-checking mechanisms so that AI-generated content meets ethical standards.
AI can be very useful when leveraged to assist investigative journalism by analysing large datasets and uncovering patterns for further exploration. But to date, no AI-enabled robot reporter exists that can replace the likes of the Yorkshire Post’s own Greg Wright, whose tenacious coverage of the loan charge scandal has affected the lives of so many people. It is this level of emotional intelligence and humanity that newsrooms need to exploit if they are to survive the onslaught of AI.
This is why the industry needs to be protected – think of all the major investigations and scandals to have hit society in recent times. Almost every one of them was instigated by compassionate human journalists, not AI.
Ironically, in this age of increasing levels of fake news, particularly on social media, it could be argued that journalists need to find ways to harness AI-powered fact-checking tools to help detect deepfakes and AI-generated disinformation. Reuters and the BBC have developed their own in-house AI-enabled fact-checking systems to verify images and news reports. They do, however, still have human journalists sign off the final output.
The mass media should also take on the responsibility of educating audiences on how to identify AI-manipulated content. Transparency is crucial, and news outlets need to disclose when AI is used in content production.
There may come a time in the distant future when humans are replaced in far more sectors. This scenario is played out in multiple narratives in popular culture. In The Terminator (1984), global audiences were first introduced to “Skynet”, a self-aware AI system that deems humanity a threat and triggers a war against mankind.
We are not quite at that level yet, certainly not in the journalism industry. And for that reason, we tell parents at open days that we still believe in what we do: training students to become journalists in the newsrooms of tomorrow.