
How to Write Secure AI-Generated Code
Bar Hofesh, co-founder of Bright Security, serves as its CTO. A globally recognized security and technology expert, Bar has held many roles, including CISO, system architect, security advisor, and DevSecOps advisor at more than 10 companies. As a leader and researcher, he has multiple publications and projects in cybersecurity. CISO and MCITP certified.
May 28, 2025
6 minutes

Generative AI has quickly become a staple in modern software development. Developers are using tools like GitHub Copilot and ChatGPT to build features, generate tests, and accelerate development timelines. But speed comes with a trade-off. AI may be able to write functional code, but it doesn’t understand context or intent, and it certainly doesn’t understand security.

If you’re relying on AI to help write your code, here’s the reality: unless you’re guiding it intentionally and reviewing its output thoroughly, it will likely introduce risks. That’s because AI models generate what looks statistically correct – not necessarily what’s secure or maintainable.

This article explores how to use AI coding tools without compromising your application’s security posture.

Table of Contents

  1. The Hidden Risks of AI-Created Code
  2. Write Secure Prompts, Not Just Code
  3. Never Skip Review, Even for “Simple” Code
  4. Validate Everything Because AI Often Doesn’t
  5. Be Careful with Dependencies
  6. Watch for Secrets and Unsafe Defaults
  7. Educate Your Team on AI Usage
  8. Final Thoughts

The Hidden Risks of AI-Created Code

AI models are trained on massive datasets, including public repositories and community Q&A forums. While that’s a rich source of examples, it also means AI often reproduces insecure practices that it’s seen before: outdated cryptographic functions, SQL queries without parameterization, or web handlers with no input validation.

In practice, that means developers can end up shipping vulnerable code that “works” – at least until attackers find the gap. These risks aren’t hypothetical. Researchers have already shown how large language models can generate code that’s exploitable, even when prompted with common use cases.
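To make the risk concrete, here is a minimal sketch (using Python's built-in sqlite3 module, with an illustrative users table) of the unparameterized-query pattern AI tools often reproduce, next to the parameterized version:

```python
import sqlite3

# In-memory demo database with one user row (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The pattern AI often reproduces: SQL built by string interpolation.
    # Input like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # leaks the row despite the wrong name
print(find_user_safe(payload))    # returns no rows, as it should
```

Both functions "work" for well-behaved input, which is exactly why the unsafe version tends to survive casual review.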

Write Secure Prompts, Not Just Code

The quality and safety of AI-generated code often comes down to how you ask for it. Vague prompts tend to produce code that’s generic and potentially insecure. For example, asking for a “login API in Node.js” may return something that stores plain-text passwords or relies on insecure query building.

Instead, you should explicitly ask the AI to use secure components: request password hashing with bcrypt, parameterized queries, and structured validation libraries. The more security expectations you include in the prompt, the more likely the output will reflect them. It’s also worth stating what to avoid – functions like eval, for example, or insecure serialization patterns.
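As a sketch of what a security-explicit prompt should produce, here is salted password hashing using only the Python standard library (PBKDF2-HMAC-SHA256; bcrypt or argon2, both third-party libraries, are equally reasonable things to request from the AI):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt plus a deliberately slow key-derivation
    # function -- never a bare, fast hash and never plain text.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```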

In a team setting, it helps to standardize secure prompt templates so that developers are nudged toward best practices from the start.
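A shared template might look like the following sketch (the requirements listed are illustrative and should be adapted to your stack):

```text
Role: You are writing production Node.js code for our API.
Task: Implement <FEATURE>.
Security requirements:
- Hash passwords with bcrypt; never store plain text.
- Use parameterized queries only; never concatenate SQL.
- Validate all request bodies with a schema validation library.
- Do not use eval, new Function, or insecure deserialization.
- Read secrets from environment variables, never from literals.
```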

Never Skip Review, Even for “Simple” Code

Treat AI-generated code the same way you’d treat code from a junior developer: don’t assume it’s right just because it compiles. Manual review is critical, especially when the code touches authentication, authorization, data access, or any user-facing component.

In addition to code review, apply static analysis and linters with security rules enabled. Tools like SonarQube, Bandit, and ESLint (with security plugins) can catch many of the obvious missteps that AI might introduce. It’s not just about correctness – it’s about risk reduction.
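As an example of the kind of misstep those tools flag, security linters warn on eval of user-controlled input; a minimal Python sketch of the problem and a safer stdlib alternative:

```python
import ast

user_input = "[1, 2, 3]"

# Flagged by security linters: eval() executes arbitrary expressions,
# so attacker-controlled input becomes attacker-controlled code.
# parsed = eval(user_input)

# Safer alternative: ast.literal_eval accepts only Python literals
# (strings, numbers, lists, dicts, ...) and raises on anything else.
parsed = ast.literal_eval(user_input)
print(parsed)  # [1, 2, 3]

try:
    ast.literal_eval("__import__('os').system('id')")
except (ValueError, SyntaxError):
    print("rejected")
```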

Security testing doesn’t end with static tools. Feeding AI-generated code into your SAST or DAST workflows helps detect deeper issues. If your organization has a security champion or AppSec team, have them weigh in on any AI-heavy codebase contributions.

Validate Everything Because AI Often Doesn’t

Input validation is one of the most frequently overlooked areas in AI-generated code. The code might look correct at a glance, but unless you’ve explicitly asked for it, there’s a good chance it won’t properly validate inputs or escape output.

Always double-check how inputs are handled, whether they come from HTTP requests, command-line arguments, or third-party APIs. Ensure your AI-generated code uses frameworks that support robust validation and sanitization.

And don’t just stop at validation. Think about encoding, escaping, and safe defaults. AI might not have the full picture of the attack surface you’re dealing with, so it’s your responsibility to review the code with adversarial thinking in mind.
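The points above can be sketched in a few lines of Python: allow-list input validation, plus output escaping at the point of use (the username pattern here is an illustrative assumption, not a universal rule):

```python
import html
import re

USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    # Allow-list validation: accept only a known-good pattern
    # instead of trying to enumerate every bad character.
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_greeting(name: str) -> str:
    # Output encoding: escape where the data is used, so it can
    # never be interpreted as markup.
    return f"<p>Hello, {html.escape(name)}</p>"

print(render_greeting("<script>alert(1)</script>"))
# <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```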

Be Careful with Dependencies

AI doesn’t vet packages. It often recommends libraries that are outdated, unmaintained, or even potentially malicious. That means developers need to take extra care when accepting package suggestions from generative tools.

Always review the libraries that AI suggests. Check their last update date, look for known vulnerabilities (via tools like npm audit or pip-audit), and avoid packages with low community adoption or suspicious commit histories. Even legitimate libraries can introduce risk if they’re misconfigured or misused.

To keep things safe over time, make sure to pin dependency versions and use automation tools like Dependabot to track updates and patch known issues.
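In a Python project, pinning plus auditing might look like this sketch (package versions shown are illustrative):

```text
# requirements.txt -- exact pins keep builds reproducible and auditable
flask==3.0.3
requests==2.32.3

# CI step (illustrative): fail the build on known vulnerabilities
#   pip-audit -r requirements.txt
```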

Watch for Secrets and Unsafe Defaults

It’s not uncommon for AI to include example API keys, JWT secrets, or hardcoded passwords in generated code. These are meant as placeholders, but if copied carelessly, they can easily make it into production environments.

You should never store secrets directly in code – AI-generated or otherwise. Use environment variables or a secret management system to keep sensitive data out of version control. It’s also good practice to add common secret file types (like .env, .pem, or .crt) to .gitignore by default in all generated scaffolds.
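A minimal sketch of that pattern in Python: read the secret from the environment and fail fast if it is missing, rather than falling back to a hardcoded default (the variable name here is an illustrative assumption):

```python
import os

def load_secret(name: str) -> str:
    # Fail fast and loudly when a required secret is absent, instead
    # of silently using a placeholder value that AI may have suggested.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

# In deployment, the value comes from the environment or a secret
# manager, never from source control. The line below is demo setup only.
os.environ["APP_JWT_SECRET"] = "demo-only-value"
print(load_secret("APP_JWT_SECRET"))
```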

Educate Your Team on AI Usage

One of the biggest risks with AI-generated code isn’t the model; it’s how humans use it. Developers might assume that code output by AI is trustworthy because it appears polished or comes with documentation. That’s dangerous.

Every team using AI tools should invest in internal guidance for safe usage. Clarify where AI tools are useful (like writing boilerplate or generating test cases) and where they require stricter oversight (like anything touching security, business logic, or data handling). Set clear expectations for review, testing, and validation.

Don’t just refine your prompts – train your team to think critically about AI’s limitations.

Final Thoughts

Generative AI is a powerful tool, but like all tools, it needs to be used responsibly. Writing secure code with AI isn’t about banning the technology, but rather about layering guardrails around it. From prompt design to post-generation review, developers and security teams must work together to ensure AI accelerates development without increasing risk.

The key takeaway: AI can help you write code faster, but it’s still your job to make sure that code is safe.
