By Anthony Jirouschek, Security Architect
While artificial intelligence, or AI, has existed as a field for decades, the underlying concept has been around for centuries. In today’s world, it is a reality and can be observed in multiple applications, including chatbots, virtual assistants, fraud prevention, and more.
But is the rise of AI the downfall of cybersecurity?
Below are five threats that AI can pose to organizations and individuals, along with steps that can be taken to reduce or mitigate them.
1. Adversarial Attacks
Just like any other program or code, AI can be vulnerable to a dedicated adversary.
This may take the form of manipulating inputs or model logic to produce a desired outcome, undermining the legitimacy of the AI’s decisions and its overall reliability. The impact may be greatest in the finance, health care, and autonomous-vehicle industries, where a compromised system could lead to loss of life.
Solution: Input validation controls, anomaly detection, and good update hygiene can help reduce the likelihood of a successful attack by an adversary.
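As a minimal, hypothetical sketch of what input validation in front of a model can look like, the snippet below rejects requests whose features fall outside an expected schema before they ever reach a fraud-scoring model. The feature names and ranges are invented for illustration and would need to reflect your own systems.

```python
# Minimal sketch: validate inputs before they ever reach a model.
# Feature names and ranges are hypothetical; substitute your own schema.
EXPECTED_SCHEMA = {
    "transaction_amount": (0.0, 1_000_000.0),
    "account_age_days": (0, 36_500),
}

def validate_model_input(features: dict) -> list:
    """Return a list of validation errors; an empty list means the input is acceptable."""
    errors = []
    for name, (low, high) in EXPECTED_SCHEMA.items():
        if name not in features:
            errors.append(f"missing feature: {name}")
            continue
        value = features[name]
        if not isinstance(value, (int, float)):
            errors.append(f"{name} is not numeric")
        elif not low <= value <= high:
            errors.append(f"{name}={value} is outside the expected range [{low}, {high}]")
    unexpected = set(features) - set(EXPECTED_SCHEMA)
    if unexpected:
        errors.append(f"unexpected features: {sorted(unexpected)}")
    return errors

if __name__ == "__main__":
    # A request with an impossible amount and an unknown field is rejected.
    print(validate_model_input({"transaction_amount": -5, "account_age_days": 10, "extra": 1}))
```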
Engaging and collaborating with the AI community, whether inside your own organization or through public groups, may also help you stay one step ahead of the attacker.
2. Deepfake Technology
The rapid advancement of AI has brought us to a time where creating replicas of an individual’s face, voice, and even mannerisms is no longer in the realm of science fiction.
Deepfakes can be used in misinformation campaigns, to facilitate fraud, to manipulate public opinion, and more. It is likely that crime facilitated by deepfakes will continue to rise.
Solution: Robust authentication-and-verification processes, especially for high-risk activities like wire transfers and the sharing of sensitive information, will likely be the starting point to combat deepfakes within an organization.
Once the technology matures, deepfake detection tools may be the logical next step to combat these attacks. Until then, it is important that customer-facing partners understand how these attacks may occur; ultimately, though, it is the entire organization, not just customer-facing roles, that needs to raise its awareness.
3. AI-Powered Malware
Just as anyone can use Snapchat’s AI, threat actors and criminals can use AI to develop malware. This shortens the development cycle, but it can also produce lower-quality code.
Most large language models, or LLMs, are trained only on data up to a certain cutoff date, so their knowledge of libraries and vulnerabilities can be outdated, and the code they produce may be incomplete, insecure, or just plain incorrect.
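To make that point concrete, here is a hypothetical example of the kind of insecure pattern generated code sometimes contains, a SQL query built by string interpolation, alongside the parameterized version a reviewer should insist on. The table and function names are illustrative only.

```python
import sqlite3

# Hypothetical illustration: an insecure pattern that generated code sometimes
# contains, followed by the safer equivalent.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated directly into the SQL statement,
    # so a crafted username can inject arbitrary SQL.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user input as data, never as SQL.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()
```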
AI, however, can also help generate polymorphic malware (code that mutates with each iteration so each copy presents a different signature), which can be used to evade security tools and analysts. This threat is likely just beginning to be observed in the wild and will continue to be a staple of the threat landscape.
Solution: As always, it’s important to follow a defense-in-depth approach and maintain multiple layers in your security posture.
Regularly updating and patching, as well as regular vulnerability scans, will always be a benefit—regardless of the type of attack against your systems.
Employing AI or machine learning solutions can help identify irregularities and flag a potential attack.
Think of the adage, “Fight fire with fire.”
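As one small illustration of that idea, the sketch below uses scikit-learn’s IsolationForest to flag activity that deviates from a baseline of normal telemetry. The feature values are invented for the example; a real deployment would need far richer data and tuning.

```python
# Minimal sketch of "fighting fire with fire": an unsupervised model flags
# unusual activity. Features (MB sent, connections per minute) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows of [bytes_sent_mb, connections_per_minute] captured during normal operation.
baseline = np.array([[1.2, 10], [0.9, 8], [1.5, 12], [1.1, 9], [1.3, 11]])

detector = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# predict() returns -1 for anomalous observations and 1 for normal ones.
new_activity = np.array([[1.2, 10], [250.0, 400]])  # second row resembles exfiltration
print(detector.predict(new_activity))               # e.g. [ 1 -1]
```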
4. Privacy Concerns
The recent rise of AI-based chatbots (e.g., ChatGPT) presents a familiar threat to organizations.
Just as sensitive data and secrets should not be submitted to GitHub or Google, they should not be submitted to a chatbot or LLM, either.
As these platforms are used, they may learn from the prompts and feedback of their users. That means if a developer pastes sensitive data into a prompt while asking for help with code, that data could later surface in a response to another user whose question it happens to answer.
Solution: To prevent sensitive information from being shared with these platforms, many organizations block access to them on their corporate networks.
As mentioned above, this is not a new issue, but rather a new tool being used. Your existing data-sharing policies may already address this concern.
Don’t post your secret keys on Stack Overflow, and don’t post them in a chatbot, either. If you are using a publicly available platform, just assume the data is being recorded somewhere and follow best practices.
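As a rough sketch of that best practice, a simple pre-submission check can catch obvious secrets before a prompt ever leaves a developer’s machine. The patterns below are illustrative only; dedicated secret-scanning tools cover far more formats.

```python
import re

# Minimal sketch: scan a prompt for likely secrets before it is sent to a
# public chatbot. These patterns are examples, not an exhaustive list.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def prompt_is_safe(prompt: str) -> bool:
    """Return False (and report why) if the prompt appears to contain a secret."""
    findings = [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]
    if findings:
        print("Blocked prompt; possible secrets detected:", ", ".join(findings))
        return False
    return True

if __name__ == "__main__":
    prompt_is_safe("Why does this fail? api_key = 'sk-test-1234'")  # blocked
    prompt_is_safe("How do I parse ISO 8601 dates in Python?")      # allowed
```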
5. AI Supply-Chain Attacks
AI relies heavily on large datasets, which are commonly sourced from various providers or vendors. While unlikely, these datasets may contain malicious or manipulated records intentionally injected to poison the AI models.
Attackers could target the supply chain to compromise the legitimacy and integrity of the AI. This may lead to biased decision-making, compromised controls, or unauthorized access to data.
Remember: at one point in time, it also seemed unlikely that BGP would be hijacked, but it happened.
Solution: Because most of this data will be coming from third-party vendors, establish a comprehensive third-party-vendor risk-management program. The program should include assessments of the vendor’s security practices and data sources.
Be sure to develop a rigorous vetting process for the data, and use secure data-sharing practices as well as encryption.
Monitor the AI models to help identify suspicious or anomalous behavior that may indicate a compromise within the supply chain.
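One concrete, if modest, control within that vetting process is verifying a delivered dataset against a checksum the vendor publishes through a separate channel. The sketch below assumes a hypothetical file name and a placeholder SHA-256 digest.

```python
import hashlib
from pathlib import Path

# Minimal sketch: confirm a vendor-supplied training dataset matches the
# checksum the vendor published out of band. File name and digest are
# hypothetical placeholders.
EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"

def dataset_is_untampered(path: Path, expected_digest: str) -> bool:
    sha256 = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest() == expected_digest

if __name__ == "__main__":
    ok = dataset_is_untampered(Path("vendor_training_data.csv"), EXPECTED_SHA256)
    print("dataset integrity verified" if ok else "dataset does not match expected checksum")
```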
Final Word
These threat vectors and recommendations may not apply to everyone. They should be tailored to your organization’s specific needs and risk appetite.
As always, regular monitoring, detection, education, and iterative security practices are crucial to defending your organization.
Is the rise of AI the downfall of cybersecurity? It doesn’t have to be.