![Chinese Deepseek AI Chinese Deepseek AI](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdnjIwm3UwRbdb8rxaPpDrsLi371o_nxzWnG165xXbCB1hV0CLaHAarFQ5jvoUxe7WVvBREiB-9MBR17BH89Ega8ECLRpo3MOvNINfqicXCpC3BZvQX8IV4lJ92OdM1DhcHxq37RxnnLDgDW06VtZZ7eMJ-A0BK8KxUEStJboKe-f8uTjtajUIoLGz0XEQ/s728-rw-e365/deepseek-banned.png)
Italian data protection watchdog blocks the DeepSeek service of China’s artificial intelligence (AI) in Japan, stating that information on the use of user personal data is insufficient.
Development will be made a few days after the authorities, Garante, send a series of questions to Deepseek and ask about the data processing practices and the location of training data.
In particular, I wanted to know what personal data was collected by the web platform and mobile app. I wanted to know what kind of purpose, what legal bases, and whether it was preserved in China.
In a statement issued on January 30, 2025, Garante stated that he had reached the decision after Deepseek provided information that was “completely insufficient.”
The entity behind the service, Hangzhou Deepseek, and the Beijing DeepSeek artificial intelligence, have added that “it does not work in Italy and that European law is not applied to them.”
As a result, the watchdog said that it was immediately blocking access to Deepseek, and at the same time it was open.
![Cyber security](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifIwPoAln2qWUbYcn9JBOEK_LE_AG5rsCwzy9mnmMDSfk5fpMyklov-ACfTc7FAOyXjsUEpq5u4OD_zTW3yOTFvtfUh8jzJWLzpqDsy5iyWDXrjofimwAhbySYJ4DyEfQhT-2ZoWWqcv93vwCY3x-AG7I_F-6cDW1FoqBLLhBs127r7ox0dukMACupZErT/s728-rw-e100/GartnerMQ-d-v1.jpg)
In 2023, the Data Protection Bureau also issued a temporary ban on Openai’s Chatgpt. This is a restriction that was lifted in late April after intervening in an intervening of the AI (AI) company to cope with data privacy concerns. Later, Openai was fined by 15 million euros for how to handle personal data.
Deepseek’s banning news is a crowd of services this week, as the company is on popular waves, and the mobile app is being sent to the top of the download chart.
In addition to being the target of “large -scale malicious attacks”, the members and regular members have been paying attention to the privacy policy, Chinese parallel censorship, propaganda, and the national security concerns. As of January 31, the company has been making corrections to address the service attack.
In addition to the tasks, Deepseek’s large language model (LLM) is a crescendo, a bad riccato judge, a deceptive joy, what is right now (Dan), and the bad actor is malicious or prohibited. It is known that it is susceptible to permitted jailbreak technology. content.
“They bring out a variety of harmful outputs, from detailed instructions to create dangerous items such as Morotov cocktails to the generation of malicious code for attacks such as SQL injection and horizontal movements. The Palo Alto Network Unit 42 said in a Thursday report.
“Deepseek’s first answer was often benign, but in many cases, carefully created follow -up prompts often expose these initial protection weaknesses. The purpose.”
![Chinese Deepseek AI Chinese Deepseek AI](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXBkN0SFJlWdwtjBQJBshIGsISNCbTSqiGtC97uPA3mxIRNG8c-luLogd2CZWvfJp-Z6fC7WwJ8rE8cY10Z4mV08paM0DdVQhyphenhyphenpeDjzhDfYsR1l-QmwtZRZZFahP8kV_mRaH5Th_1kC8Q6wGKxleaUaPoWPLbyNbNlS47ItSUbuxY8WDP1gsG7D_esBYtA/s728-rw-e365/deep.png)
The further evaluation of Deepseek-R1, a Deepseek-R1 by Hiddenlayer, is not only vulnerable to prompt injection, but also the possibility that the proportions of that concept will lead to inadvertent information leaks. I revealed that there was.
Interesting twists, the company stated that the model also emerged as a result of “multiple instances suggesting that Openai data is incorporated, and raises ethical and legal concerns on data procurement and originality of models.” 。
This disclosure is also under the discovery of the jailbreak vulnerability of Openai Chatgpt-4O dubbed Time Bandit. 。 Since then, Openai has reduced the problem.
“The attacker starts a session with Chatgpt, directs a specific historical event, directly encouraged, or pretends to support users at specific historical events. Can be abused, “said Cert/CC).
![Cyber security](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2DhAEcfZPomMkFjg_PBGRtXcqSQWz21i5YgcBHDXAjhJz4KVuiPktjD7s23mDT7Lwg5ksNAz_1NiUuj1W-8eE8etOwr48VJxkeQo0bgmcJs5BOnWwOJg2onaXTzXPrZNlczStGVo4Cya1_B4i3-R_PaYRch5wRxJ9FjH4KKLewchcG72H04aGgIR7jPTK/s1600/per-d.png)
“Once this is established, the user can pivot the received response to various illegal topics through subsequent prompts.”
GitHub’s Copilot Coding Assistant has identified the same jailbreak defect, and recognizes the ability to create harmful code simply by including words such as “SURE” at the prompt to avoid security restrictions. 。
“If you start a query with positive words such as” SURE “and other formats, it will work as a trigger and shift to a mode that tends to cause a more likely risk of a co -pilot.” Oren saban, a person, said. “This small fine adjustment is all necessary to unlock the response, from non -ethical proposals to complete dangerous advice.”
According to APEX, another vulnerability in the Copilot proxy configuration was discovered. It stated that it could completely avoid access restrictions without paying use and even falsify the Copilot system prompt.
However, this attack depends on the capture of the authentication tokens associated with the active FILP license, and urges GitHub to be a responsible abuse problem.
“The positive positive positive positive positive between proxy bypass and GitHub Copilot is a perfect example of how the most powerful AI tools are abused without appropriate protection,” said Saban.
Source link