EDITORIAL

AI will test journalism like nothing before

Posted 2/7/23

Chances are you’ve at least heard of ChatGPT — the artificial intelligence chatbot that has whipped the world into a frenzy of extreme reactions: from advocates hailing it as the first step toward a technological utopia of knowledge and workplace efficiency, to detractors warning that it heralds the beginning of the end of truth and knowledge as we know them.

The boring truth, as always, is that there’s no way to know yet exactly what this technology means going forward, but we can certainly see merit at both ends of that spectrum of reactions.

If implemented properly, intuitive AI — the kind that can generate prose good enough to pass the licensing exams required of doctors and lawyers, as ChatGPT has done — has the potential to revolutionize our world, assist in groundbreaking research, and cut down on the working hours required in many professions.

Conversely, if implemented without enough thought or attention to the possible dangers, it could spread misinformation through propaganda on an unprecedented scale, displace millions of jobs, and forever blur the line between what is real and what is not.

From the perspective of a journalistic entity, we find the potential for actual fake news — not the kind that certain people in power deem as such because they don’t like the truth reported back to them, but authentic-looking journalism generated by an algorithm rather than by real reporting and research — to be of genuine concern.

To illustrate the point, we asked ChatGPT to write a version of this editorial, exploring the potential pitfalls of artificial intelligence in the journalism industry. Here’s what it had to say.

Prompt:

Write an editorial explaining why artificial intelligence is potentially problematic for journalism.

Answer:

“The advent of artificial intelligence (AI) has promised to revolutionize many aspects of our lives, including the way we consume and create news stories. AI can be used to generate news stories more quickly and efficiently than ever before. But while the potential gains in speed and efficiency are undeniable, there are also potential drawbacks to using AI in journalism.

First, AI-generated news stories lack the depth and accuracy of stories generated by experienced journalists. AI algorithms are unable to generate nuanced, insightful stories that require a human touch. AI-generated stories are prone to introducing errors and bias into the news, which can lead to misinformation.

Second, AI-generated news stories may inadvertently reproduce existing biases and power dynamics. AI algorithms are trained on existing data sets, which can include biased sources. This can lead to stories that perpetuate existing stereotypes and power dynamics, instead of challenging them.

Third, AI-generated news stories can lead to a decrease in jobs in the journalism industry. AI algorithms can generate news stories faster and more cheaply than humans, which can lead to job losses in the journalism industry.

Ultimately, the use of AI in journalism has the potential to revolutionize the industry, but it also has the potential to create problems.”

We couldn’t have said it better ourselves. And we have to admit, that’s a little concerning.
