Has the open-source AI from DeepSeek turned into a tool for online fraudsters?


Researchers found that DeepSeek’s open-source AI model can be exploited to create malware with minimal tweaks. Experts warn of potential misuse despite basic safeguards.

According to a recent study, malicious actors may be able to use DeepSeek’s free, open-source AI models to create malware, as the models’ built-in safeguards are weak.

Since generative AI entered the public eye, governments and regulators have been warning that LLMs such as ChatGPT and Gemini could be used to write harmful code.

A few LLMs have already been built expressly for illegal purposes, but those models usually cost money to access, and the major LLM providers keep safeguards in place. DeepSeek’s open-source, publicly available architecture gives scammers an easier route.

Research by Nick Miles of Tenable Research found that DeepSeek R1, a reasoning large language model (LLM) created by the Chinese company, could produce the “basic structure for malware,” with guardrails that are “easy to work around” and “vulnerable to a variety of jailbreaking techniques.”

For the test, Miles asked DeepSeek to develop a keylogger that could secretly record keystrokes on a device while evading the operating system’s defences.

Telling the LLM that the exercise was for “educational purposes only” was enough to persuade it to continue after its initial refusal.

By following DeepSeek’s directions, Miles eventually produced a working keylogger, although the model’s code had to be manually corrected in a few places.

Miles also tried to create a simple ransomware sample: software that locks users out of their files until a payment is made.

After some back and forth, Miles was able to produce a few working ransomware samples, though these too needed human editing to function. Once again, the model merely warned against the practice before complying.

The researcher concluded that, with a little tweaking, malicious actors can get around DeepSeek’s safeguards against malware production.

The results do not amount to a total catastrophe, since making use of the model’s outputs still requires a significant amount of prior coding knowledge.

“However, DeepSeek offers a helpful collection of methods and search terms that can enable someone who has never written malicious code before to become quickly acquainted with the pertinent ideas,” Miles said.

Based on his investigation, Miles believes DeepSeek will soon encourage criminals to create more dangerous AI-generated programs.
