Robot holding shield protect icon showing hackers and AI

Enterprises vs. The Next-Generation of Hackers – Who’s Winning the AI Race?

The business landscape is evolving, and generative AI has become a top priority for leaders with 83% anticipating they’ll increase their investments in the technology by 50% or more in the next 6-12 months. But the growing use of AI tools among enterprises has ushered in a wave of emerging threats that security and IT teams are not yet equipped to address. Nearly half (47%) of IT professionals believe security threats are increasing in volume or severity and enterprise use of AI is already further exacerbating these risks. The race is on between security teams and next-generation hackers to see who will successfully take advantage of AI’s capabilities first.

A new wave of bad actors is on the rise

Following the initial launch of ChatGPT almost a year ago, AI tools have become widely accessible, not just to enterprises and everyday citizens, but also to cyber criminals. Amidst a push for responsible AI development, major players in the space are on a mission to secure their tools from malicious use but bad actors have already started to take advantage of the same tech to boost their skill sets.

Enterprises are increasingly finding new ways to integrate AI into internal workflows and external offerings, which in turn has created a new attack vector for hackers. This expanded surface has opened the door for a new wave of sophisticated attacks using advanced methods and unsuspecting entry points that enterprises previously didn’t have to secure against. Among the emerging techniques that IT and security teams must have on their radar:

  • Stealing the model: A method where threat actors target machine learning (ML) models that utilize public APIs by copying a specific model. Once a cybercriminal has access to a model’s make up, they’re able to study its abilities and actively attempt to exploit existing vulnerabilities all within a testing environment. If an existing vulnerability is uncovered, they can apply it to attack the public model.
  • Data Poisoning: Attacks targeting public datasets used to train deep-learning models, which are used by AI tools. If a hacker gains access to a public dataset, they can then manipulate it or corrupt it with spurious data. This kind of manipulation causes the machine learning models to provide biased or even malicious decisions. While an instance of data poisoning has yet to occur in the wild, it has the power to cause catastrophic damage.
  • Prompt Injection: A burgeoning technique that targets the foundational components of AI tools, Large Language Models (LLMs). Commonly used generative AI tools like chatbots use LLMs to instruct their decision making process and drive responses. Hackers use prompt injections to confuse chatbots by inputting a series of deceptive questions or prompts to attempt to impact the outcomes and, in some instances, override the application’s existing restrictions.

Today’s threat landscape is transforming — hackers have tools at their fingertips that can rapidly advance their impact and an entirely new attack vector to explore. With growing enterprise use of AI offering an opportunity to expedite attacks, now is the time to focus on transforming security defenses.

Not just a risk, AI can bolster enterprise security

Despite scrutiny for its ability to equip cybercriminals with more advanced techniques, AI models can be used just as effectively among security and IT teams to mitigate these mounting threats. Instead of just viewing AI as a risk or a threat, it’s more critical than ever for enterprises to instead view it as a way to enhance security defenses.

Among the benefits of AI, there are three areas in particular where enterprises can use the technology to enhance operations:

  1. Increasing threat intelligence: Many good products on the market already have AI and ML baked into them. As these tools continually collect data, they’ll process and analyze the information they’ve stored to provide valuable insight into bad actors’ motives, targets and behaviors. Threat intelligence gives enterprises data-backed security insights to help fight and prepare against cyberthreats.
  2. Improving anomaly detection and prediction: Enterprises can use AI/ML on the data lakes they generate to detect anomalies and predict threats. Alerting threats alleviates the burden of internal teams, allowing them to focus efforts on further developing their security strategy.
  3. Reducing diagnostic time: Diagnostic tools that are AI-powered speed up the time it takes to evaluate the security posture of an organization’s systems. This allows teams to quickly identify and address any existing errors or gaps in their strategies.

Enterprise use of AI may expand the attack surface for cybercriminals, but leveraging AI technologies can also allow security teams to get ahead in defending against and preventing adversarial AI and AI-powered cyber threats.

Enterprise use of AI will define if it’s an asset or a risk

Enterprises need to recognize that there’s a race between themselves and next-generation hackers to determine who will use AI to their benefit first. Currently, only 28% of organizations are using security AI extensively. As threats powered by, and targeting, AI platforms continue to surge, it’s imperative for IT and security teams to harness the benefits of integrating the technology into their security stacks and be better positioned to outpace the new wave of bad actors.