The Italian Data Protection Authority, the Garante per la protezione dei dati personali, has temporarily suspended OpenAI’s artificial intelligence (AI) service, ChatGPT, in Italy. The watchdog opened an investigation into the chatbot’s compliance with Italian data protection rules and alleged that the service fails to prevent people under 13 from using it. The authority also cited concerns over the privacy implications of the data the service collects and stores.
Insufficient Measures to Protect Children’s Data
The Garante per la protezione dei dati personali criticized OpenAI’s lack of clear notice to users and its failure to establish a legal basis justifying the collection and storage of personal data used to train ChatGPT’s algorithms. The watchdog also maintained that OpenAI did not put sufficient measures in place to prevent children under 13 from accessing the service, in violation of Italian data protection rules.
ChatGPT’s Anonymity Aspect
According to Timothy Morris, Chief Security Advisor at Tanium, the heart of the issue in Italy appears to be ChatGPT’s anonymity. The chatbot’s ability to process enormous amounts of data and generate intelligible content that closely mimics human writing is an undeniable game-changer, but such capabilities are likely to attract further regulation and industry oversight.
Incorrect Handling of User Data
The Garante also criticized ChatGPT’s incorrect handling of user data, a consequence of the service’s limitations in processing personal information accurately. Edward Machin, a senior lawyer with Ropes & Gray LLP, commented that “users may be willing to accept the trade, but the allegation here is that users aren’t being given the information to allow them to make an informed decision. More problematically […] there may not be a lawful basis to process their data.”
ChatGPT’s Data Breach
In its announcement, the Italian privacy watchdog also cited the data breach that affected ChatGPT earlier this month, when a vulnerability in the service could have exposed some users’ payment information.
Potential for Misuse in Cybercrime
Mika Aalto, CEO of Hoxhunt, commented that “AI and Large Language Models like ChatGPT have tremendous potential to be used for good in cybersecurity, as well as for evil. But for now, the misuse of ChatGPT for phishing and smishing attacks will likely be focused on improving capabilities of existing cybercriminals more than activating new legions of attackers.”
OpenAI has until April 19 to respond to the Data Protection Authority’s investigation or face a potential fine of up to €20 million or 4% of its annual global turnover, whichever is higher. The company has yet to comment on the issue.
The Italian Data Protection Authority’s temporary suspension of ChatGPT underscores the need for AI companies to comply with privacy regulations and data protection laws. The incident highlights the risks inherent in AI’s ability to process vast amounts of data and the obligation on companies to implement appropriate measures to protect users’ personal information. While AI and large language models can be a force for good in cybersecurity, they can also be exploited by cybercriminals, making appropriate safeguards all the more important.