Healthcare Organizations Warned About Use of AI for Developing Malware

Artificial intelligence (AI) tools have been incorporated into many cybersecurity solutions to improve their threat detection capabilities, but there is growing concern that these systems could be adopted by malicious actors to accelerate malware development and to support social engineering and phishing campaigns. One such tool is ChatGPT, which was launched in November 2022 and has proven extremely popular, and there are already indications that cybercriminals have been abusing it.

Recent advances in natural language AI tools such as ChatGPT, and growing evidence of their misuse, prompted the Health Sector Cybersecurity Coordination Center (HC3) to issue an analyst note warning the healthcare and public health sector about the threat posed by these tools. "Artificial intelligence (AI) has now evolved to a point where it can be effectively used by threat actors to develop malware and phishing lures. While the use of AI is still very limited and requires a sophisticated user to make it effective, once this technology becomes more user-friendly, there will be a major paradigm shift in the development of malware," warned HC3.

AI systems promise to provide huge benefits to the healthcare sector. Systems have been trained on vast data sets to identify biomarkers in medical images, and since they can conduct analyses 24/7/365 with a high degree of accuracy, they can greatly speed up the diagnosis of disease. AI systems have similarly been developed to improve cybersecurity, but there is considerable potential for misuse. AI itself, after all, has no morals.

To assess the risk posed by these technologies, IBM researchers developed a proof-of-concept tool dubbed DeepLocker, which explored how AI models could be combined with existing malware techniques to create more effective and evasive attacks. DeepLocker used a deep neural network (DNN) AI model to conceal its trigger conditions, and the researchers demonstrated that the tool could conduct stealthy attacks that remained dormant until the intended target was reached, with the payload launched only when the victim was identified through facial recognition technology. The technique could therefore be used to conduct targeted cyberattacks with pinpoint precision. While this exercise was conducted for research purposes, there is nothing stopping criminals from using other AI models for similar purposes.
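DeepLocker's code is not public, so the following is only a minimal conceptual sketch, in Python, of the gating idea described above: a model's output decides whether a gated action runs at all, which is what makes such trigger conditions difficult to uncover through static analysis. The `face_embedding` function and `TARGET_EMBEDDING` value are hypothetical stand-ins for a real face-recognition model, and the gated action is a harmless print statement.

```python
# Hypothetical sketch of a DNN-gated trigger condition, loosely modeled
# on IBM's public description of DeepLocker. The embedding function and
# target are toy placeholders; the point is only that execution hinges
# on a model's output rather than on inspectable program logic.
import numpy as np

TARGET_EMBEDDING = np.array([0.42, 0.17, 0.88])  # hypothetical enrolled target

def face_embedding(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a face-recognition DNN mapping a camera frame
    (an H x W x 3 array) to an identity embedding."""
    return frame.mean(axis=(0, 1)) / 255.0  # toy placeholder, not a real model

def trigger_condition(frame: np.ndarray, threshold: float = 0.05) -> bool:
    """Fires only when the frame's embedding is close to the target's."""
    distance = np.linalg.norm(face_embedding(frame) - TARGET_EMBEDDING)
    return distance < threshold

def run_if_target(frame: np.ndarray) -> None:
    if trigger_condition(frame):
        print("Trigger condition met")  # harmless placeholder for a gated action
```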

ChatGPT, which is based on the GPT-3.5 language model, was developed as a chatbot to provide better interactions with users and is capable of responding to queries in natural language. Since its release, initially free of charge to help test the system, it has been used for a wide range of purposes, from writing articles to composing songs and wedding speeches. The platform attracted more than 1 million users in the month after its launch.
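For context, the GPT-3.5 model family behind ChatGPT can also be queried programmatically. This is a minimal sketch using the OpenAI Python client's chat interface as it existed at the time; the API key and prompt are placeholders.

```python
# Minimal sketch of querying gpt-3.5-turbo via the OpenAI Python client
# (the pre-1.0 ChatCompletion interface of that era). The API key and
# prompt are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain what a phishing email is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```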

The popularity of the tool and the quality of its responses prompted security researchers to investigate whether it could be used to create phishing emails and even write malware code. Researchers at Check Point demonstrated that the tool could support an entire cyberattack: generating a convincing spear phishing email, writing the malicious code for an Excel attachment, and writing the code for the malicious payload.

Evidence has been growing that the tool is being misused. Hackers have allegedly used it to write malware, including a Python-based information stealer and an encryption script that could be used as ransomware. The cybersecurity community does not yet know how these tools can be prevented from being used to generate malware and write flawless phishing emails. "Current artificial intelligence technologies are widely believed to only be at the very beginning of what will likely be a whole array of capabilities that will cut across industries and enter into people's private lives," said HC3.

About Liam Johnson

Liam Johnson has produced articles about HIPAA for several years and has extensive experience in healthcare privacy and security. With a deep understanding of the complex legal and regulatory landscape surrounding patient data protection, Liam has dedicated his career to helping organizations navigate the intricacies of HIPAA compliance. Liam focuses on the challenges faced by healthcare providers, insurance companies, and business associates in complying with HIPAA regulations. He has been published in leading healthcare publications, including The HIPAA Journal, and was appointed Editor-in-Chief of The HIPAA Guide in 2023. Contact Liam via LinkedIn: https://www.linkedin.com/in/liamhipaa/