![Taiwan prohibits DeepSeek AI](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSf8K2uzNtey879iblouCwOZBfGr8f00fY4dhe-C1hajrPLJMuvjTzHts3r-Nn6uTSXIxP_XOXnHjJY_A_Fc_0WivrKA-akVy_-M6-Q2YO-QNNp9POuTTe3PA-d9T7838LVunlFL05QnfLfwM2YgZsV6uyhyphenhypheng0cz8Hx2eCKUQkwXGtCbdZOYjEBA7ejEza/s728-rw-e365/deepseek-ai.png)
Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek's artificial intelligence (AI) platform, citing security risks.
"Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security," according to a statement released by Taiwan's Ministry of Digital Affairs, as reported by Radio Free Asia.
"DeepSeek's AI service is a Chinese product, and its operation involves cross-border transmission and information leakage, among other information security concerns."
DeepSeek's Chinese origins have prompted authorities in various countries to scrutinize the service's handling of personal data. Last week, it was blocked in Italy, citing a lack of information about its data processing practices. Several companies have also barred access to the chatbot over similar risks.
The chatbot has attracted mainstream attention over the past few weeks for the fact that it is open source and as capable as other leading models, but was built at a fraction of the cost.
However, the large language models (LLMs) powering the platform have been found to be susceptible to various jailbreak techniques, to say nothing of concerns that the chatbot censors responses to topics deemed sensitive by the Chinese government.
DeepSeek's popularity has also made it the target of "large-scale malicious attacks," with NSFOCUS revealing that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.
"The average attack duration was 35 minutes," it said. "Attack methods mainly include NTP reflection attacks and memcached reflection attacks."
In addition, the DeepSeek chatbot system was said to have been targeted twice by DDoS attacks on January 20, the day its reasoning model DeepSeek-R1 launched, using methods such as NTP reflection and SSDP reflection attacks.
The sustained attack activity primarily originated from the United States, the United Kingdom, and Australia.
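Reflection attacks of this kind abuse UDP services that answer small requests with much larger responses; the attacker spoofs the victim's IP address so the amplified replies flood the target. As a rough illustration of why they are attractive to attackers, the sketch below estimates the bandwidth amplification factor. The request and response sizes are illustrative assumptions, not figures from the NSFOCUS report:

```python
# Rough, illustrative estimate of reflection/amplification attack bandwidth.
# The payload sizes below are hypothetical examples, not measured values.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: response size divided by request size."""
    return response_bytes / request_bytes

# Hypothetical numbers: a small spoofed request to an open reflector can
# elicit a multi-packet response hundreds of times larger than the request.
request = 234       # bytes sent by the attacker (with a spoofed source IP)
response = 48_000   # bytes the reflector sends to the victim

factor = amplification_factor(request, response)
attacker_mbps = 10
victim_mbps = attacker_mbps * factor
print(f"Amplification factor: {factor:.0f}x")
print(f"{attacker_mbps} Mbps of spoofed requests -> ~{victim_mbps:,.0f} Mbps at the victim")
```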
Malicious actors have also capitalized on the buzz surrounding DeepSeek to publish bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. Ironically, there are indications that the Python script was written with the help of an AI assistant.
The packages, named deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times before being removed on January 29, 2025. The majority of the downloads came from the United States, China, Russia, Hong Kong, and Germany.
"The functions used in these packages are designed to collect user and computer data and steal environment variables," Russian cybersecurity company Positive Technologies said. "The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives the stolen data."
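For developers, one practical mitigation against typosquatting of this kind is to check dependency names against an allow-list before installing them. The sketch below is a minimal, hypothetical example (the allow-list and similarity threshold are assumptions for illustration, not part of the Positive Technologies report) that flags package names suspiciously close to, but not identical to, well-known ones:

```python
# Minimal typosquat check: flag dependency names that are suspiciously
# similar to, but not exactly, well-known package names.
# The allow-list and similarity threshold here are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_PACKAGES = {"requests", "numpy", "openai", "deepseek"}  # hypothetical allow-list

def looks_like_typosquat(name: str, threshold: float = 0.85) -> str | None:
    """Return the known package a name closely resembles, or None if it's safe."""
    name = name.lower()
    if name in KNOWN_PACKAGES:
        return None  # exact match: fine
    for known in KNOWN_PACKAGES:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known  # close but not identical: suspicious
    return None

for candidate in ["deepseeek", "deepseekai", "requests"]:
    hit = looks_like_typosquat(candidate)
    if hit:
        print(f"WARNING: '{candidate}' closely resembles '{hit}', verify before installing")
```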
The development comes as the European Union's Artificial Intelligence (AI) Act took effect on February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.
In a related move, the U.K. government introduced a new AI Code of Practice aimed at securing AI systems against hacking and sabotage through methods that include data poisoning, model obfuscation, and indirect prompt injection, as well as ensuring they are developed in a secure manner.
Meta, for its part, outlined its Frontier AI Framework, noting that it will stop development of AI models assessed to have reached a critical risk threshold that cannot be mitigated. Some of the highlighted cybersecurity-related scenarios include:

- Automated end-to-end compromise of a corporate-scale environment protected by security best practices (e.g., fully patched and MFA-protected)
- Automated discovery and exploitation of critical zero-day vulnerabilities in currently popular software, before defenders can find and patch them
- Automated end-to-end scam flows (e.g., romance baiting, also known as pig butchering) that could cause widespread economic damage to individuals or corporations
The risk that AI systems can be weaponized for malicious ends is not theoretical. Last week, Google's Threat Intelligence Group (GTIG) revealed that more than 57 distinct threat actors with ties to China, Iran, North Korea, and Russia attempted to use Gemini to enable and scale their operations.
Threat actors have also been observed attempting to jailbreak AI models to bypass their safety and ethical controls. A kind of adversarial attack, jailbreaking is designed to induce a model into producing output it has been trained not to, such as instructions for creating malware or building a bomb.
Amid ongoing concerns over jailbreak attacks, AI company Anthropic has devised a new line of defense called Constitutional Classifiers that it says can safeguard models against universal jailbreaks.
"These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead," the company said.
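Anthropic has not published its implementation, but the general pattern it describes (screening both the prompt going into a model and the completion coming out) can be sketched as follows. Everything here, including `classify_harmful`, `call_model`, and the threshold, is a hypothetical stand-in for illustration, not Anthropic's code:

```python
# Minimal sketch of the input/output classifier pattern described above.
# `classify_harmful` and `call_model` are hypothetical stand-ins; a real
# system would use trained classifiers (e.g., trained on synthetic data).

def classify_harmful(text: str) -> float:
    """Hypothetical classifier: return a 0..1 score that text is harmful."""
    blocked_terms = ["build a bomb", "write malware"]  # toy heuristic, not a real model
    return 1.0 if any(term in text.lower() for term in blocked_terms) else 0.0

def call_model(prompt: str) -> str:
    """Hypothetical LLM call."""
    return f"Model response to: {prompt}"

def guarded_completion(prompt: str, threshold: float = 0.5) -> str:
    # Input classifier: screen the prompt before it reaches the model.
    if classify_harmful(prompt) >= threshold:
        return "Request declined by input classifier."
    response = call_model(prompt)
    # Output classifier: screen the completion before it reaches the user.
    if classify_harmful(response) >= threshold:
        return "Response withheld by output classifier."
    return response

print(guarded_completion("Summarize today's AI security news."))
print(guarded_completion("Explain how to build a bomb."))
```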