
Published: Mar 23rd, 2025 / Modified: Mar 25th, 2025

Can AI Secure Code… or Just Write Insecure Code Faster?

Time to read: 4 min
Bar Hofesh

In the past few years, AI has made its way into the developer’s toolkit in a big way. Tools like GitHub Copilot, ChatGPT, and various AI code assistants promise to boost productivity, automate tedious tasks, and even catch security flaws. But as we welcome these powerful new capabilities, a fundamental question looms over the application security (AppSec) world:

Can AI truly help us write secure code, or is it just making it easier to ship insecure code faster?

Let’s dig into both sides of the equation.

The Productivity Boom: AI as a Coding Co-Pilot

There’s no doubt AI tools are helping developers move faster. Ask an AI to scaffold a REST API, convert a SQL query, or even write a regex pattern, and you’ll get a fairly solid response in seconds. For junior devs especially, this can be a massive learning accelerant.

AI-assisted coding has the potential to reduce cognitive load and improve consistency in common tasks—two factors that often contribute to security flaws when developers are under pressure or context-switching frequently.

Some AI tools also have built-in security awareness. They can flag common vulnerabilities like SQL injection or hardcoded secrets. Static analysis engines powered by machine learning are also getting better at spotting insecure patterns in vast codebases.

So, yes—AI can absolutely assist in writing more secure code, especially when paired with proper guardrails.
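
To make the "hardcoded secrets" guardrail concrete, here is a minimal sketch in Python. The endpoint and the REPORT_API_KEY name are invented for illustration; the point is the pattern a security-aware assistant or secret scanner should flag, and the conventional fix of keeping the credential out of the codebase.

    import os

    import requests

    # The kind of snippet an assistant may generate verbatim: the API key is
    # hardcoded, so it ends up in version control and in every copy of the code.
    API_KEY = "sk_live_EXAMPLE_DO_NOT_USE"  # hardcoded secret (should be flagged)

    def fetch_report_insecure():
        return requests.get(
            "https://api.example.com/report",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=10,
        )

    # The guarded version: the key lives outside the codebase (environment
    # variable or secret manager) and the code fails loudly if it is missing.
    def fetch_report():
        api_key = os.environ["REPORT_API_KEY"]  # raises KeyError if not configured
        return requests.get(
            "https://api.example.com/report",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )

A secret scanner (or a security-aware assistant) should flag the first version; the second keeps the credential out of source control entirely.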

But Speed Isn’t Always a Good Thing

Here’s where the other side of the coin comes into view.

The same AI that helps you write code quickly can also help you generate vulnerable code just as fast—if not faster.

Why? Because AI doesn’t understand security the way a human does. It doesn’t reason about threats, or know the context of your specific application. It generates code based on patterns it has seen, including insecure or outdated ones from public code repositories.

There have already been multiple documented cases where AI-generated code included:

  • SQL injections from unsanitized inputs
  • Cross-site scripting (XSS) vulnerabilities
  • Improper use of cryptographic functions
  • Hardcoded secrets or keys
  • Broken authentication logic

In short, AI lacks the security intuition of an experienced developer or AppSec engineer. It doesn’t ask: “What could go wrong?” It just completes the pattern.
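
To make the first item on that list concrete, here is a short sketch using Python's built-in sqlite3 module (the users table and its columns are invented for illustration): the string-formatting query an assistant will happily pattern-complete, next to the parameterized version a reviewer should insist on.

    import sqlite3

    conn = sqlite3.connect("app.db")

    # Pattern-completion output: user input is interpolated straight into the
    # SQL string, so an input like  ' OR '1'='1  changes the query itself.
    def find_user_insecure(username: str):
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    # The reviewed version: a bound parameter keeps the input as data, never SQL.
    def find_user(username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchall()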

The Real Solution: Human-AI Collaboration

The future isn’t about replacing developers or AppSec teams with AI—it’s about augmenting them. Here’s what that looks like:

  • AI suggests code based on patterns
  • Developers review suggestions with a critical eye
  • Security teams integrate automated scanning and threat modeling into the pipeline (a minimal sketch of such a gate follows this list)
  • Secure defaults and policies are baked into the tools from the start
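
As one illustration of that third point, a pipeline gate can be a few lines of Python wrapped around an open-source scanner. The sketch below uses Bandit purely as a stand-in and fails the build on any high-severity finding; the tool choice and the threshold are assumptions, not a prescription.

    import json
    import subprocess
    import sys

    def main() -> int:
        # Run Bandit over the repository and capture its JSON report.
        result = subprocess.run(
            ["bandit", "-r", ".", "-f", "json", "-q"],
            capture_output=True,
            text=True,
        )
        report = json.loads(result.stdout or "{}")

        # Fail the build if any high-severity issue is reported.
        high = [
            issue for issue in report.get("results", [])
            if issue.get("issue_severity") == "HIGH"
        ]
        for issue in high:
            print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
        return 1 if high else 0

    if __name__ == "__main__":
        sys.exit(main())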

Some newer tools are already moving in this direction. For example, AI systems trained specifically on secure codebases or that integrate with SAST/DAST tools are becoming more common. Others include “explainability” features, helping developers understand why something might be insecure.

What Developers Can Do Today

While the tooling evolves, there are practical steps every developer can take:

  1. Treat AI-generated code like any other third-party code—review it carefully (a small example of what that review catches follows this list).
  2. Use AI for suggestions, not decisions. You’re still in the driver’s seat.
  3. Pair AI tools with automated security scans. Don’t rely on one layer of defense.
  4. Invest in security training. Even with AI, the developer’s intuition is the last line of defense.
  5. Stay updated on known AI limitations. Understanding where these tools struggle helps you use them more effectively.
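
As a small example of what point 1 looks like in practice (the token handling here is invented for illustration), consider an assistant-suggested check that compares secrets with ==. It works, it reads fine, and it leaks timing information; the standard-library alternative is a one-line change a careful reviewer would make.

    import hmac
    import os

    # In real code the expected token would come from a secret store; an
    # environment variable stands in for it here.
    EXPECTED_TOKEN = os.environ.get("API_TOKEN", "")

    # As an assistant might suggest it: == short-circuits at the first differing
    # character, which can leak timing information about the secret.
    def token_valid_insecure(presented: str) -> bool:
        return presented == EXPECTED_TOKEN

    # The reviewed version: hmac.compare_digest takes time independent of where
    # the values differ, which is the standard way to compare secrets.
    def token_valid(presented: str) -> bool:
        return hmac.compare_digest(presented, EXPECTED_TOKEN)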

So… Can AI Secure Code?

The answer, like most things in tech, is nuanced.

AI can help write more secure code—when used thoughtfully.
It can also write insecure code faster—when used carelessly.

The key lies not in the tool itself, but in how we wield it. If we treat AI as a shortcut to ship faster without accountability, we’ll see security debt balloon. But if we treat it as an assistant—one that still requires human oversight and security awareness—we can actually reduce vulnerabilities and empower dev teams.

The tools are getting smarter. But security still starts with us.
