Researchers Find That DeepSeek R1 Can Be Tricked Into Creating Malware

Whenever new technologies like generative AI pop up, it’s almost a given that cybercriminals will try to twist them for shady purposes. While most mainstream AI models come with built-in safety features to stop misuse, researchers at Tenable have discovered that DeepSeek R1, the reasoning model from Chinese AI company DeepSeek, can be manipulated into generating malware. This raises some serious red flags about how AI could make cybercrime easier and more dangerous.

To test the risks, Tenable’s team ran an experiment to see if DeepSeek R1 could be pushed into creating malicious code. They focused on two scenarios: building a keylogger (a tool that secretly records keystrokes) and a basic ransomware program (which locks up files until a ransom is paid).

DeepSeek (source: ChatLabs)

At first, DeepSeek R1 did what it was supposed to do—it refused to cooperate. But the researchers didn’t give up. Using some pretty simple jailbreaking tricks, they managed to get around the AI’s safeguards.

“Initially, DeepSeek rejected our request to generate a keylogger,” said Nick Miles, a staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”

Once the safeguards were bypassed, DeepSeek R1 was able to:

  1. Generate a keylogger that could encrypt logs and hide them on a device.
  2. Create a ransomware program capable of encrypting files.
Report on DeepSeek code (source: Techzine)

What’s really concerning here is how AI like DeepSeek could make cybercrime more accessible. While the code it produces still needs some tweaking to work effectively, it gives people with little to no coding experience a head start. By spitting out the basics and suggesting techniques, AI could fast-track the learning process for wannabe hackers.

“Tenable’s research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse,” Miles added. “As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime.”

The findings highlight a tricky balance: while AI has the potential to do a lot of good, it also opens the door to new risks. And as the tech keeps evolving, staying one step ahead of the bad guys is going to be more important than ever.

