
Creating malware with the ability to bypass antiviruses with ChatGPT


As you know, ChatGPT is being talked about across social networks these days. This AI chatbot offers many impressive capabilities and stands apart from the chatbots that came before it. ChatGPT can tell jokes, write code, and draft academic papers. It can even suggest a diagnosis for an illness and create text-based games set in the world of Harry Potter. Now experts have identified another capability: ChatGPT can be used to create malware that goes undetected on a system and bypasses antiviruses. This malware belongs to the family of polymorphic viruses.

According to cyber security firm CyberArk, the ChatGPT chatbot is very good at developing malicious code that can cause serious problems for your systems. Infosec experts are warning that this new AI-based tool could change the game when it comes to cybercrime. However, information on using chatbots to create more sophisticated types of malware is not yet widely available.

CyberArk researchers explain that code developed with the help of ChatGPT offers advanced capabilities and can be used to build malware that easily evades most security software.

Bypassing antiviruses with ChatGPT


Polymorphic viruses, sometimes referred to as mutating viruses, are designed so that their appearance and signature change repeatedly through new encryption and decryption routines. This is why many traditional cyber security products, such as antiviruses that rely on signatures to identify malicious programs, fail to block the threat.

In effect, this malware can cryptographically shapeshift its way past traditional security mechanisms that are designed to detect malicious file signatures.
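To see why re-encryption defeats signature matching, here is a minimal, entirely benign Python sketch (names and the toy payload are illustrative, not taken from any real malware or from CyberArk's research): the same harmless byte string, XOR-encoded with a fresh random key each time, yields a completely different hash, so a scanner that only matches known byte patterns or hashes never sees the same "signature" twice.

```python
# Benign demonstration: why static signatures miss re-encoded payloads.
# The payload here is just a harmless string; no malicious logic is involved.
import hashlib
import os

PAYLOAD = b"completely harmless demo payload"  # stand-in for any byte blob


def xor_encode(data: bytes, key: bytes) -> bytes:
    """Trivial XOR encoding, used only to show that the output bytes change."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


for attempt in range(3):
    key = os.urandom(16)                    # new random key -> new ciphertext
    encoded = xor_encode(PAYLOAD, key)
    # Each iteration prints a different hash, i.e. a different static "signature".
    print(attempt, hashlib.sha256(encoded).hexdigest()[:16])
```

This is also why defenders increasingly rely on behavioral and heuristic detection rather than static signatures alone.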

Although ChatGPT has filters intended to prevent the creation of malware, CyberArk experts managed to bypass these barriers and extract malware code from the service. In other words, users can coax the chatbot into generating the code they want, something other experimenters have reported as well.

CyberArk researchers say they were able to obtain specific pieces of malware code by repeatedly querying ChatGPT, and then used those snippets to assemble more complex programs. As a result, AI can make hacking considerably easier for people who are new to programming. Unfortunately, ChatGPT will also be useful to amateur cybercriminals who need help developing malware.

Earlier, a report by Check Point Research revealed that cybercriminals on underground chat forums are already trying to use ChatGPT to create data theft tools, encryption tools, and other malware. In addition, some have started building dark-web marketplace tools with the help of this artificial intelligence; these platforms are used for buying and selling stolen and illegal goods, including drugs and weapons.

CyberArk’s report adds that malware leveraging the ChatGPT API could pose major challenges for security professionals, and it stresses that this is not merely a hypothetical scenario but an issue that needs to be taken seriously.

