Italy’s data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek’s service within the country, citing a lack of information on its use of users’ personal data.
The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices & where it obtained its training data.
In particular, it wanted to know what personal data is collected by its web platform & mobile app, from which sources, for what purposes, on what legal basis, & whether it is stored in China.
Insufficient
In a statement issued Jan. 30, 2025, the Garante stated it arrived at the decision after DeepSeek provided information that it claimed was “completely insufficient.”
The entities behind the service, Hangzhou DeepSeek Artificial Intelligence, & Beijing DeepSeek Artificial Intelligence, have “declared that they do not operate in Italy & that European legislation does not apply to them,” it added.
As a result, the watchdog explained it is blocking access to DeepSeek with immediate effect, & that it is simultaneously opening a probe.
Temporary Ban
In 2023, the data protection authority also issued a temporary ban on OpenAI’s ChatGPT, a restriction that was lifted in late April after the artificial intelligence (AI) company stepped in to address the data privacy concerns raised. Subsequently, OpenAI was fined €15 million over how it handled personal data.
The DeepSeek’s ban comes as the company has been hugely popular this week, with millions of people flocking to the service & sending its mobile apps to the top of the download charts.
As well as being the target of “large-scale malicious attacks,” it has drawn the attention of lawmakers & regulators for its privacy policy, China-aligned censorship, propaganda, & the national security concerns it may pose. The company has implemented a fix as of Jan. 31 to address the attacks on its services.
Malicious or Prohibited
Adding to the challenges, DeepSeek’s large language models (LLM) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), & EvilBOT, thereby allowing bad players to generate malicious or prohibited content.
“They elicited a range of harmful outputs, from detailed instructions for creating dangerous items like Molotov cocktails to generating malicious code for attacks like SQL injection & lateral movement,” Palo Alto Networks Unit 42 said in a Thurs. report.
Appeared Benign
“While DeepSeek’s initial responses often appeared benign, in many cases, carefully crafted follow-up prompts often exposed the weakness of these initial safeguards.
The LLM readily provided highly detailed malicious instructions, demonstrating the potential for these seemingly innocuous models to be weaponized for malicious purposes.”
Vulnerable
Further evaluation of DeepSeek’s reasoning model, DeepSeek-R1, by AI security company HiddenLayer, has uncovered that it’s not only vulnerable to prompt injections but also that its Chain-of-Thought (CoT) reasoning can lead to inadvertent information leakage.
Interestingly, the company stated that the model also “surfaced multiple instances suggesting that OpenAI data was incorporated, raising ethical & legal concerns about data sourcing and model originality.”
‘Jailbreak’
The disclosure also follows the discovery of a jailbreak vulnerability in OpenAI ChatGPT-4o dubbed Time Bandit that makes it possible for an attacker to get around the safety guardrails of the LLM by prompting the chatbot with questions in a manner that makes it lose its temporal awareness. OpenAI has since mitigated the problem.
“An attacker can exploit the vulnerability by beginning a session with ChatGPT & prompting it directly about a specific historical event, historical time period, or by instructing it to pretend it is assisting the user in a specific historical event,” the CERT Co-ordination Center (CERT/CC) said.
“Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts.”
Sidestep Security
Similar jailbreak flaws have also been identified in Alibaba’s Qwen 2.5-VL model & GitHub’s Copilot coding assistant, the latter of which grant threat players the ability to sidestep security restrictions & produce harmful code simply by including words like “sure” in the prompt.
“Starting queries with affirmative words like ‘Sure’ or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant & risk-prone mode,” Apex researcher Oren Saban said. “This small tweak is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice.”
Apex explained it also found another vulnerability in Copilot’s proxy configuration that it outlined could be exploited to fully evade access limitations without paying for usage & even tamper with the Copilot system prompt, which serves as the foundational instructions that dictate the model’s behaviour.
Token
The attack, however, relies on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify it as an abuse issue following responsible disclosure.
“The proxy bypass & the positive affirmation jailbreak in GitHub Copilot are a perfect example of how even the most powerful AI tools can be abused without adequate safeguards,” Saban added.