The Associated Press (AP) has issued guidelines on artificial intelligence, saying the technology cannot be used to create publishable content and images for the news service, while encouraging staff members to become familiar with it.
AP is one of a handful of news organisations that have begun to set rules on how to integrate fast-developing tech tools such as ChatGPT into their work.
The service will couple this on Thursday with a chapter in its influential Stylebook that advises journalists on how to cover the story, complete with a glossary of terminology.
“Our goal is to give people a good way to understand how we can do a little experimentation but also be safe,” said Amanda Barrett, vice president of news standards and inclusion at AP.
Generative AI has the ability to create text, images, audio and video on command, but is not yet fully capable of distinguishing between fact and fiction.
As a result, AP said material produced by artificial intelligence should be vetted carefully, just like material from any other news source.
Similarly, AP said a photo, video or audio segment generated by AI should not be used, unless the altered material is itself the subject of a story.
That is in line with the tech magazine Wired, which said it does not publish stories generated by AI, “except when the fact that it’s AI-generated is the point of the whole story”.
“Your stories must be completely written by you,” Nicholas Carlson, Insider editor-in-chief, wrote in a note to employees that was shared with readers.
“You are responsible for the accuracy, fairness, originality and quality of every word in your stories.”
Highly publicised cases of AI-generated “hallucinations”, or made-up facts, make it important that consumers know that standards are in place to “make sure the content they’re reading, watching and listening to is verified, credible and as fair as possible”, Poynter said in an editorial.
Artificial intelligence can help editors at AP, for example, put together digests of stories in the works that are sent to its subscribers.
It could help editors create headlines or generate story ideas, Wired said.
Mr Carlson said AI could be asked to suggest possible edits to make a story concise and more readable, or to come up with possible questions for an interview.
AP has experimented with simpler forms of artificial intelligence for a decade, using it to create short news stories out of sports box scores or corporate earnings reports.
That is important experience, Ms Barrett said, but “we still want to enter this new phase cautiously, making sure we protect our journalism and protect our credibility”.
ChatGPT-maker OpenAI and The Associated Press last month announced a deal under which the artificial intelligence company will license AP’s archive of news stories for training purposes.
News organisations are concerned about their material being used by AI companies without permission or payment.
The News Media Alliance, representing hundreds of publishers, issued a statement of principles designed to protect its members’ intellectual property rights.
Some journalists have expressed worry that artificial intelligence could eventually replace jobs done by humans; the issue is a matter of keen interest, for example, in contract talks between AP and its union, the News Media Guild.
“We were encouraged by some provisions and have questions on others,” said Vin Cherwoo of the News Media Guild.
With safeguards in place, AP wants its journalists to become familiar with the technology, since they will need to report stories about it in coming years, Ms Barrett said.
AP’s Stylebook – a roadmap of journalistic practices and rules for use of terminology in stories – will explain in the chapter due to be released on Thursday many of the factors that journalists should consider when writing about the technology.
“The artificial intelligence story goes far beyond business and technology,” AP says.
“It is also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues. Successful AI stories show how these tools are affecting many areas of our lives.”
The chapter includes a glossary of terminology, including machine learning, training data, face recognition and algorithmic bias.
Little of it should be considered the final word on the topic.
A committee exploring guidance on the topic meets monthly, Ms Barrett said.
“I fully expect we’ll have to update the guidance every three months because the landscape is shifting,” she said.