The UK’s information watchdog has said Snapchat may be required to “stop processing data” related to its AI chatbot after issuing a preliminary enforcement notice against the technology company.
UK Information Commissioner John Edwards said the provisional findings of an investigation into the company suggested a “worrying failure” by Snap, the app’s parent company, over potential privacy risks.
The Information Commissioner’s Office (ICO) said it issued Snap with a “preliminary enforcement notice over potential failure to properly assess the privacy risks” posed by its generative AI chatbot My AI, particularly to children using it.
The regulator stressed that findings are “provisional” and conclusions should not yet be drawn.
However, it said that if a final enforcement notice were to be adopted, Snap might not be able to offer the My AI function to UK users until the company carries out “an adequate risk assessment”.
Mr Edwards said: “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI.
“We have been clear that organisations must consider the risks associated with AI, alongside the benefits.
“Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”
In a statement, Snap said:
“Like the ICO, we are committed to protecting the privacy of our users.
“In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available.
“We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”