
Published: Jan 20th, 2025 / Modified: Apr 8th, 2025

OWASP Top 10 for LLM Applications in 2025

Bar Hofesh

Intro

The OWASP (Open Worldwide Application Security Project) Top 10 is one of the most widely referenced resources in cybersecurity. It's a list of the most critical security risks, updated every few years to keep up with the ever-changing threat landscape. You can think of it as the FBI's Ten Most Wanted list: while there are plenty of criminals out there, only ten stand out as the biggest and most dangerous. The same goes for OWASP's list: new vulnerabilities arise on a daily basis, but only a select few are dangerous enough that everyone has to take note and react accordingly.

However, with the rapid progression of LLMs (Large Language Models) in recent years, the cybersecurity landscape suddenly became far less predictable, given the sheer volume of threats that AI technologies have already generated and could still generate. This is why OWASP consulted leading experts to identify the ten most dangerous vulnerabilities for LLMs, which is how the first OWASP Top 10 for Large Language Model Applications list was released in 2023.

Key Changes

In the past year, we've seen a lot of ebb and flow, which resulted in a shuffled list for 2025. To give you a clear overview, here's a table showing exactly which vulnerabilities went up, which went down, and which disappeared from the Top 10 to make room for the newcomers.

2024                             | 2025
Prompt Injection                 | Prompt Injection
Insecure Output Handling         | Sensitive Information Disclosure
Training Data Poisoning          | Supply Chain
Model Denial of Service          | Data and Model Poisoning
Supply Chain Vulnerabilities     | Improper Output Handling
Sensitive Information Disclosure | Excessive Agency
Insecure Plugin Design           | System Prompt Leakage
Excessive Agency                 | Vector and Embedding Weaknesses
Overreliance                     | Misinformation
Model Theft                      | Unbounded Consumption

New Risks in 2025

On the surface, the OWASP Top 10 for LLMs stayed much the same. The main culprit, by far, is still prompt injection, which dominates the list simply because of how broad the range of possible attacks is.

As for the changes over the past year, there are four notable updates:

  • Denial of Service has been folded into Unbounded Consumption, which highlights the broader risks of poor resource management and unexpected costs
  • Vector and Embedding Weaknesses focuses on securing embedding-based techniques, most notably Retrieval-Augmented Generation (RAG)
  • System Prompt Leakage revolves around securing system prompts and making sure any secrets they contain never leak out
  • Misinformation is pretty self-explanatory: the model produces false information that appears credible

Unbounded Consumption

Think of LLM applications as King Kong on an island: a majestic beast that only grows larger as time goes on. Move the monster outside its restricted zone, though, and you get mayhem and all sorts of trouble. LLM applications are a prime example of this, because costs and resource limits can easily go out of the window if you're not very careful.

A few examples of unbounded consumption could be:

  • Overwhelming the system with enormous inputs
  • Unlimited API calls resulting in very high costs 
  • Infinite loops draining resources and taking down the system

As with everything else, OWASP suggests some mitigations for unbounded consumption (a short code sketch follows this list):

  • Rate-limiting API calls 
  • Validating user inputs
  • Keeping track of resources and automatically preventing enormous operations
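
A minimal sketch of what these mitigations might look like in practice, assuming a generic Python service that wraps an LLM API. The llm_client.complete call, the limit values, and the per-user bookkeeping are all illustrative, not a specific vendor's API:

```python
import time
from collections import defaultdict

MAX_INPUT_CHARS = 4_000        # reject enormous inputs outright
MAX_CALLS_PER_MINUTE = 20      # per-user rate limit
MAX_TOKENS_PER_RESPONSE = 512  # cap generation to bound cost

_call_log = defaultdict(list)  # user_id -> timestamps of recent calls


def guarded_completion(user_id: str, prompt: str, llm_client) -> str:
    """Wrap an LLM call with basic guards against unbounded consumption."""
    # 1. Validate input size before it ever reaches the model.
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("Prompt exceeds the maximum allowed size")

    # 2. Rate-limit API calls per user (simple sliding window).
    now = time.time()
    recent = [t for t in _call_log[user_id] if now - t < 60]
    if len(recent) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("Rate limit exceeded, try again later")
    recent.append(now)
    _call_log[user_id] = recent

    # 3. Cap the resources a single request may consume (hypothetical client API).
    return llm_client.complete(prompt, max_tokens=MAX_TOKENS_PER_RESPONSE)
```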

Vector and Embedding Weaknesses

Vector and Embedding Weaknesses is a new addition to the list for 2025, aimed primarily at applications that use Retrieval-Augmented Generation (RAG). The attacker's goal is to exploit how vectors and embeddings are generated, stored, or retrieved.

Some examples of vector and embedding weaknesses:

  • Unauthorized access, where the system could disclose personal data
  • Data poisoning attacks, which can happen both externally (from attackers) and internally (by accident)
  • Behaviour change, where retrieval augmentation alters the model's behaviour and diminishes its effectiveness

As for mitigation and prevention (see the sketch after this list):

  • Access control achieved through partitioning datasets in the vector database
  • Data validation that can be accomplished by regular audits and ensuring consistent data across the database
  • Monitoring and logging to consistently track the application's behaviour and catch anomalies early
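
To make the access-control idea concrete, here is a deliberately tiny, illustrative Python sketch: documents are stored with a tenant tag, and retrieval filters on that tag so one tenant's query can never surface another tenant's embeddings. The in-memory store and cosine-similarity search stand in for a real vector database, which would apply the same filter inside its own query API.

```python
import numpy as np


class PartitionedVectorStore:
    """Toy vector store that enforces per-tenant partitioning at retrieval time."""

    def __init__(self):
        self._records = []  # list of (tenant_id, unit-norm embedding, text)

    def add(self, tenant_id: str, embedding: np.ndarray, text: str) -> None:
        self._records.append((tenant_id, embedding / np.linalg.norm(embedding), text))

    def search(self, tenant_id: str, query_emb: np.ndarray, k: int = 3) -> list:
        # Only consider records belonging to the requesting tenant.
        query = query_emb / np.linalg.norm(query_emb)
        candidates = [(float(emb @ query), text)
                      for tid, emb, text in self._records
                      if tid == tenant_id]
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        return [text for _, text in candidates[:k]]
```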

System Prompt Leakage

While it may look similar to prompt injection, system prompt leakage is a whole different ball game. This vulnerability arises when an attacker manages to extract the internal prompts that steer the LLM, which can expose secrets embedded in those prompts and lead to data breaches and unauthorized access.
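
One partial but common defence is to treat the system prompt itself as a secret: keep credentials and business logic out of it wherever possible, and screen model output before it reaches the user. The check below is a naive, illustrative sketch that only catches verbatim echoes of the prompt; real filters would also handle paraphrasing and encoding tricks.

```python
def screen_response(response: str, system_prompt: str) -> str:
    """Block responses that appear to echo chunks of the system prompt."""
    # Slide a 40-character window over the system prompt and refuse output
    # that contains any chunk verbatim (case-insensitive).
    lowered = response.lower()
    for start in range(0, max(len(system_prompt) - 40, 0) + 1, 20):
        chunk = system_prompt[start:start + 40].lower()
        if len(chunk) >= 20 and chunk in lowered:
            return "Sorry, I can't share that."
    return response
```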

Misinformation

False information has never been more rampant, and the introduction of LLMs has exacerbated the issue to previously unseen heights. LLMs are also prone to so-called hallucinations: when a model lacks the context to answer, it fills the gap statistically, making assumptions that are based not on facts but on its own internal logic.
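
A common mitigation is to ground answers in retrieved sources and flag responses that barely overlap with any of them. The snippet below is a deliberately crude, illustrative check based on vocabulary overlap, not a production fact-checker:

```python
def looks_ungrounded(answer: str, sources: list, min_overlap: float = 0.2) -> bool:
    """Flag an answer whose vocabulary barely overlaps with the retrieved sources."""
    answer_terms = set(answer.lower().split())
    if not answer_terms:
        return True
    best = 0.0
    for src in sources:
        src_terms = set(src.lower().split())
        overlap = len(answer_terms & src_terms) / len(answer_terms)
        best = max(best, overlap)
    # Low overlap with every source suggests the model may be hallucinating.
    return best < min_overlap
```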

Removals compared to OWASP Top 10 for LLMs 2024

Model Denial of Service

More failsafe mechanisms and widespread API rate-limiting mean that Model DoS has automatically lost some of its prominence. On top of that, other vulnerabilities such as System Prompt Leakage could in themselves lead to denial of service, so DoS as a standalone issue isn't as important as it once was.

Insecure Plugin Design

OWASP's overall shift in approach to LLM security meant a move towards systematic defenses that apply across the board. As a result, standalone issues such as insecure plugin design were deprioritized. Furthermore, standardized plugin practices enabled better inherent security, as controls like user access management and API rate-limiting took precedence.

Overreliance

While it was, and still is, a big issue, overreliance as a standalone vulnerability was absorbed into broader risks. On top of that, the latest standards in LLM deployment put far more prevention mechanisms in place, along with human oversight via logging and monitoring, making LLM applications a much safer environment where overreliance is addressed from the ground up.

Model Theft

Model Theft mostly relied on gaining unauthorized access to steal sensitive data and access otherwise private intellectual property. However, with the greater prominence of Sensitive Information Disclosure, Model Theft found itself consumed by a greater vulnerability.

Biggest Improvements on the List

Sensitive Information Disclosure

The explanation for Sensitive Information Disclosure moving from #6 to #2 is that LLMs are increasingly integrated into enterprise systems, which dramatically increases the risk of data leakage and of sensitive information finding its way to an attacker.

The stakes are higher, and real-life incidents back this up. A few examples of these issues (with a simple redaction sketch after the list):

  • Samsung data leaks: developers at Samsung debugged their code using ChatGPT, resulting in the data being stored by the service and inadvertently exposed
  • A health app leaking sensitive user data via its LLM-based chatbot
  • ChatGPT exposing other users' chat histories
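
A common first line of defence against this class of incident is to scrub obvious secrets and personal data before a prompt ever leaves your infrastructure. The sketch below uses a few illustrative regular expressions; real deployments would rely on dedicated PII and secret scanners with far broader coverage.

```python
import re

# Illustrative patterns only; production systems use dedicated PII/secret scanners.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"(?:sk|key|token)[-_][A-Za-z0-9]{16,}"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(prompt: str) -> str:
    """Replace likely secrets and PII with placeholders before calling an external LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt
```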

Supply Chain

The involvement of external integrations has made the world of LLMs that much more complex, as if it wasn't complex enough already. As a result, thousands of APIs, libraries, and datasets have made their way into LLM applications, bringing with them a multitude of supply chain issues.

LLMs are also famous for relying on cloud services and open-source tools, which have traditionally been prone to supply chain vulnerabilities.
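
One concrete habit that helps here is pinning and verifying the artifacts you pull in, whether they are packages, datasets, or model weights. The sketch below shows checksum verification for a downloaded model file; the file path and the pinned hash are hypothetical and would normally come from the publisher's release notes or a signed manifest.

```python
import hashlib
from pathlib import Path


def verify_artifact(path: str, expected_sha256: str) -> None:
    """Refuse to use a model or dataset file whose hash doesn't match the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}: got {digest}")


# Hypothetical usage:
# verify_artifact("models/summarizer.bin", "<sha256 pinned from the publisher>")
```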

As if all of this weren't enough, growing pressure from regulatory agencies plays a major part in increasing the focus and scrutiny on potential supply chain vulnerabilities, as everyone is determined to drill as deep as possible and eliminate core issues in LLM apps.

The Future of LLM Vulnerabilities

The keywords for 2025 look to be privacy and data control, and it's fair to expect that trend to continue as LLMs grow. To keep these issues under control, more emphasis is being placed on secure core development practices, which leads to better-controlled LLMs throughout their lifecycle.

The key issue developers and architects will have to focus on is maintaining safe integrations. We've seen computer systems in the past that had a solid core but, due to a lack of safety standards in their plugins, still saw plenty of cybersecurity issues arise. This is a big challenge for LLMs as well, because the world of plugins is spreading rapidly, and keeping up with it is more important than ever.

Industry standards will also become increasingly important as time goes on, especially as regulatory agencies catch up with LLMs. This means that the OWASP Top 10 list will gain even more weight, given its authority in the cybersecurity industry.
