Almost as soon as generative artificial intelligence (AI) technologies, such as ChatGPT, came on the scene, the public began to imagine dystopian scenarios. Critics of emerging AI warned of its ability to create deepfakes. The comedian Sarah Silverman sued OpenAI, the maker of ChatGPT, for using her memoir to train the bot without her permission.
Even AI creators have implored the U.S. government to create regulations to prevent the technology from being abused. At a recent United Nations Security Council meeting, the U.N. Secretary-General warned that “Generative AI has enormous potential for good and evil at scale.” The U.N. is concerned that cyberterrorists or enemy nation-states might weaponize AI.
But how can emerging AI technology increase cybersecurity risk for companies?
Bad actors have been quick to leverage AI technology to find vulnerabilities, develop attack tools, and stage malware attacks and breaches of company networks. Cybercriminals who lack coding expertise have turned to chatbots for help developing malware. Generative AI models, such as GPT-4, can be used to generate malware code without specialized knowledge or an investment in malware-as-a-service on the Dark Web.
Hackers can give ChatGPT broad malware parameters and let it generate the code needed to stage an attack. Some cybercriminals have even used ChatGPT to build black-market platforms for buying and selling malware and other cybercrime tools.
For a while now, cybercriminals have been using social engineering techniques to draft phishing emails that convincingly imitate co-workers and authority figures. These emails trick employees into clicking malware-infected links and attachments, or even into sending gift cards and money to the attackers.
Now, generative AI can be used to create convincing phishing emails at scale. Because these models can imitate writing styles, they can easily craft an email in the voice of someone the targeted employee knows, such as a supervisor or colleague.
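One basic countermeasure is to screen inbound mail for impersonation, since a phishing email in a colleague’s voice still has to borrow that colleague’s name. The sketch below is illustrative only: the directory, the sender names, and the addresses are hypothetical, and a production filter would draw on a real employee directory and mail-gateway integrations rather than a hard-coded dictionary.

```python
# Minimal sketch (hypothetical data): flag emails whose display name matches a
# known employee while the sending address does not -- a common tell of
# impersonation-style phishing, whether AI-written or not.
from email.utils import parseaddr

# Hypothetical internal directory: display name -> legitimate address.
KNOWN_SENDERS = {
    "Jane Smith": "jane.smith@example.com",
}

def looks_like_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    expected = KNOWN_SENDERS.get(display_name.strip())
    # Suspicious when the display name belongs to a known colleague
    # but the actual address differs from the one on file.
    return expected is not None and address.lower() != expected

# A lookalike domain ("exarnple.com") paired with a known name is flagged.
print(looks_like_impersonation('"Jane Smith" <jane.smith@exarnple.com>'))  # True
```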
AI technology can also be used to find security vulnerabilities within a company’s systems. Hackers simply prompt ChatGPT to point them toward low-hanging fruit they can use to stage a breach.
Chatbots can help uncover software vulnerabilities, such as unpatched applications and bugs in new software, for staging zero-day exploits: attacks on vulnerabilities unknown to those responsible for mitigating them, such as the vendor of the target software. Cybercriminals can also use AI to probe a company’s network for weaknesses.
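The same low-hanging fruit attackers hunt for can be enumerated by defenders first. As a minimal sketch, the Python script below (using the widely available packaging library) compares installed package versions against the minimum versions known to be patched; the version mapping is hypothetical illustration data, where a real scanner would pull from a vulnerability database or advisory feed.

```python
# Minimal sketch: flag installed Python packages that fall below a patched version.
# MINIMUM_SAFE_VERSIONS is hypothetical illustration data, not a real advisory feed.
from importlib.metadata import distributions
from packaging.version import Version

MINIMUM_SAFE_VERSIONS = {
    "requests": Version("2.31.0"),   # hypothetical "first patched" versions
    "urllib3": Version("1.26.18"),
}

def find_unpatched():
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        safe = MINIMUM_SAFE_VERSIONS.get(name)
        if safe and Version(dist.version) < safe:
            findings.append((name, dist.version, str(safe)))
    return findings

if __name__ == "__main__":
    for name, installed, needed in find_unpatched():
        print(f"{name} {installed} is below patched version {needed}")
```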
As AI becomes more advanced and pervasive, companies like yours may struggle to keep up with the security threats the technology presents. Every company is different, so your business needs a clear picture of its slice of the cyber threat landscape, along with the gaps in its approach to fighting AI risk.
Derive Technologies can help you fight emerging risks by conducting a free security assessment. During the process, our analysts check for vulnerabilities and gauge risks such as your company’s likelihood of falling victim to a phishing attack.
As a Cisco Premier Certified Partner, we can provide your company with the leading network security solutions you need to defend against AI-generated threats. We also partner with Gradient Cyber, a leader in threat monitoring, and Check Point, a top security software provider.
With these leading partners, Derive Technologies helps our customers stay ahead of emerging AI threats.