What is fuzzing in cyber security?

What fuzzing means in a cyber security context, how it works, and why it is essential for testing protocol-based systems and connected devices

Fuzzing is one of the most effective techniques in cyber security for finding vulnerabilities that other methods miss. It is also one of the most misunderstood. The term appears in compliance frameworks, vendor proposals, and security assessments without always being clearly explained, which makes it difficult to evaluate whether a fuzzing capability is relevant, adequate, or genuine.

The concept is straightforward. Fuzzing means deliberately sending a system inputs it was not designed to handle and observing what happens. The vulnerabilities it finds are among the most consistently exploited in real-world attacks. The reason it finds them when other methods do not is that it systematically explores the input space that specification-driven testing leaves uncovered.

This guide explains what fuzzing means in a cyber security context, how it works, what it finds, and why organisations building or operating protocol-based systems and connected devices need to understand it.

What Is Fuzzing in Cyber Security?

Fuzzing is a software testing technique that generates large volumes of unexpected, invalid, or malformed inputs and delivers them to a target system to observe how it responds. In a cyber security context, the purpose is to find vulnerabilities: conditions under which the system behaves in ways its developers did not intend and an attacker could exploit.

The name comes from the technique’s origins. Early fuzz testing involved sending essentially random data to systems and watching for crashes. Modern fuzzing is considerably more sophisticated. It uses formal models of input formats and protocols, mutation and generation strategies that target specific boundaries and edge cases, state-aware approaches that drive systems through different operational states before testing each one, and monitoring that captures the precise conditions under which failures occur. The underlying principle is the same: test the system with inputs it was not designed for and find what breaks.
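
The original idea can be sketched in a few lines. The loop below sends random byte strings at a deliberately fragile parser and records the inputs that make it fail; both the harness and the planted defect are illustrative, not any real tool:

```python
import random

def random_fuzz(target, trials=1000, max_len=64, seed=0):
    """Send random byte strings to a target callable and record the inputs
    that make it raise -- the original 'fuzz testing' idea in miniature."""
    rng = random.Random(seed)
    findings = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)  # an unhandled exception stands in for a crash
        except Exception as exc:
            findings.append((data, repr(exc)))
    return findings

# Hypothetical parser with a planted defect: it assumes at least 4 header bytes.
def fragile_parser(data: bytes) -> int:
    return data[0] + data[1] + data[2] + data[3]  # IndexError on short inputs

crashes = random_fuzz(fragile_parser)
```

Even this naive loop finds the defect quickly, because short inputs occur often enough among random trials. The limits of the approach appear as soon as the target validates structure before acting on content, which is what motivates the model-driven techniques described below.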

The vulnerabilities fuzzing finds include buffer overflows, input validation failures, state machine bugs, authentication bypasses triggered by malformed inputs, and denial-of-service conditions caused by unexpected parsing behaviour. These are not edge case vulnerabilities. They are among the most exploited classes in published CVE data, and they are found disproportionately through fuzzing because they arise under conditions that specification-driven testing does not explore.


Fuzzing vs Other Security Testing Methods

Fuzzing sits alongside other security testing methods rather than replacing them. Understanding how it differs from the most common alternatives helps organisations identify where fuzzing adds value in their existing programme and which gaps it fills.

Penetration testing applies human expertise and judgment to assess the security of a system. A skilled penetration tester understands attack techniques, reasons about the system’s architecture, identifies likely vulnerability locations, and assesses how findings could be exploited. Penetration testing produces high-quality, contextualised findings. Its fundamental constraint is coverage. A penetration test explores the input space the tester thinks to explore, which is a small fraction of the space that fuzzing covers systematically. Fuzzing and penetration testing are complementary: fuzzing finds the vulnerabilities across a broad input space, penetration testing assesses how they can be exploited and what an attacker could achieve.

Static analysis examines source code or compiled binaries without executing them. It finds structural issues in the code itself: dangerous function calls, potential out-of-bounds writes, missing null checks. It is fast and integrates well into build pipelines. What it cannot do is observe runtime behaviour. A vulnerability that only manifests when a specific sequence of inputs drives the system into an unexpected state is invisible to static analysis. Fuzzing runs the code and observes what actually happens, which is why the two approaches find different things.

Functional testing verifies that the system behaves correctly under planned conditions. It tests the inputs the system is designed to handle and confirms that they produce the expected outputs. It does not test what happens outside those planned conditions, which is where the vulnerabilities fuzzing finds are located. Functional testing and fuzzing address different questions and a thorough testing programme needs both.


Why Fuzzing Matters for Security

The security case for fuzzing is direct. Attackers do not send valid inputs. They send the inputs most likely to cause the system to behave in ways its designers did not intend. They probe boundaries, send malformed messages, try unexpected sequences, and look for the conditions that trigger undefined behaviour. Fuzzing is the discipline of doing that systematically before attackers do it opportunistically.

The vulnerability classes fuzzing finds are the ones most consistently present in real-world exploits. Buffer overflows have been among the most exploited vulnerability classes for decades, not because they are difficult to understand but because they are difficult to find through specification-driven testing. Input validation failures underpin a significant proportion of published CVEs across every major software category. State machine bugs in protocol implementations are a consistently underassessed attack surface in industrial and connected device security.

For organisations building or operating protocol-based systems, the case is more specific. Protocol implementations handle binary data from potentially untrusted sources, maintain complex state across multiple message exchanges, and need to parse and respond correctly to every input they receive. The gap between the inputs a developer tests and the inputs a real network or a malicious actor might deliver is large. Fuzzing is the only method that explores that gap systematically, at the scale needed to find the vulnerabilities it contains before they are discovered in the field.

The regulatory dimension reinforces the operational case. IEC 62443-4-1 Practice 6 requires vulnerability testing that explicitly includes robustness and negative testing techniques for industrial automation and control system components. These requirements can only be satisfied through the kind of systematic invalid input generation that fuzzing provides. Compliance without fuzzing, for products subject to IEC 62443, is incomplete compliance.


How Fuzzing Works in Practice

Fuzzing in practice involves four elements: a target system with an interface that accepts inputs, a fuzzing tool that generates and delivers test cases, a monitoring capability that observes the target’s response to each test case, and an output mechanism that captures findings with enough precision to be acted on.

Test case generation is where the quality of a fuzzing approach is most clearly differentiated. Basic fuzzers generate random or lightly mutated inputs. These find vulnerabilities in framing and validation logic but miss the majority of vulnerabilities in application logic, because random inputs are rejected before they reach it. Effective fuzzing for protocol implementations uses formal models of the protocol being tested to generate inputs that conform to framing requirements while being invalid at the application layer. These inputs pass the framing checks and reach the logic where most vulnerabilities sit.
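
A minimal sketch of the idea, using an invented frame layout (magic bytes, function code, length, XOR checksum; nothing here describes a real protocol). The generated frames satisfy the framing checks, so they reach the dispatch logic rather than being dropped by the frame parser:

```python
import struct

# Hypothetical frame layout for illustration: 2-byte magic, 1-byte function
# code, 2-byte big-endian payload length, payload, 1-byte XOR checksum.
MAGIC = b"\xAB\xCD"

def build_frame(function: int, payload: bytes) -> bytes:
    body = MAGIC + bytes([function]) + struct.pack(">H", len(payload)) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def framing_ok(frame: bytes) -> bool:
    """The check a real implementation might apply before dispatching."""
    if len(frame) < 6 or frame[:2] != MAGIC:
        return False
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    return checksum == frame[-1]

# Generate frames that are well-formed at the framing layer but carry
# function codes the (invented) specification reserves -- invalid at the
# application layer, valid at the framing layer.
invalid_function_frames = [build_frame(fc, b"\x00\x01") for fc in (0x00, 0x7F, 0xFF)]
```

Varying only the function code while recomputing the length and checksum is what lets these test cases survive the framing layer; a fuzzer that flipped bits anywhere in the frame would usually just corrupt the checksum and never reach the application logic.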

Mutation-based fuzzing starts from valid inputs and modifies them systematically: changing field values to boundary cases and out-of-range values, extending or truncating lengths, flipping bits, and substituting prohibited values. It is effective for finding vulnerabilities close to the valid input space and requires only valid input samples to get started. Generation-based fuzzing constructs test cases from a protocol model, enabling coverage of areas of the input space that no existing valid sample represents. The most effective fuzzing programmes use both approaches in combination.
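
The mutation strategies listed above can be sketched as a generator over a valid sample; the specific strategies and byte values here are illustrative choices, not a canonical set:

```python
def mutate(sample: bytes):
    """Yield systematic mutations of a valid input sample: bit flips,
    truncations, extensions, and boundary-value substitutions."""
    # Flip each bit in turn.
    for i in range(len(sample) * 8):
        mutated = bytearray(sample)
        mutated[i // 8] ^= 1 << (i % 8)
        yield bytes(mutated)
    # Truncate to every proper prefix, down to the empty input.
    for i in range(len(sample)):
        yield sample[:i]
    # Extend with runs of a filler byte.
    for n in (1, 16, 256):
        yield sample + b"\xFF" * n
    # Substitute boundary values at each position.
    for i in range(len(sample)):
        for b in (0x00, 0x7F, 0x80, 0xFF):
            yield sample[:i] + bytes([b]) + sample[i + 1:]

seed = b"\x01\x02\x03"
cases = list(mutate(seed))
```

Starting from one three-byte sample, this yields 42 test cases. Real mutation engines apply many more strategies, but the shape is the same: systematic enumeration around a known-valid input rather than random perturbation.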

State-aware fuzzing drives the target through its state machine before testing each state with targeted invalid inputs. Protocol implementations behave differently depending on their current state, and vulnerabilities that only manifest in specific states are only reachable through testing that understands and navigates the state machine. This is one of the most significant differentiators between protocol-aware fuzzing platforms and generic fuzz testing tools.
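
A minimal sketch of the state-aware idea, against a toy session protocol invented for illustration: each state is reached by replaying a known-good message walk on a fresh target, then probed with invalid inputs. The planted defect only triggers in one state, which is exactly what state-blind fuzzing misses:

```python
class DemoSession:
    """Toy target with a planted state-dependent bug: it dereferences a
    subscription record that only exists in the SUBSCRIBED state, so a
    malformed message crashes there and is harmless everywhere else."""
    def __init__(self):
        self.state = "IDLE"
        self.subscription = None

    def handle(self, msg: bytes):
        if msg == b"AUTH":
            self.state = "AUTHENTICATED"
        elif msg == b"SUB" and self.state == "AUTHENTICATED":
            self.state = "SUBSCRIBED"
            self.subscription = {"topic": "t"}
        elif msg.startswith(b"PUB") and self.state == "SUBSCRIBED":
            topic = msg.split(b":", 1)[1]
            self.subscription[topic]  # planted defect: KeyError on unknown topic

# Known-good message walks that reach each state from a fresh session.
WALKS = {
    "IDLE":          [],
    "AUTHENTICATED": [b"AUTH"],
    "SUBSCRIBED":    [b"AUTH", b"SUB"],
}

def fuzz_all_states(new_session, invalid_inputs):
    """For each reachable state, replay its walk on a fresh session, then
    deliver every invalid input and record failures with their state."""
    findings = []
    for state, walk in WALKS.items():
        for bad in invalid_inputs:
            session = new_session()   # fresh target per test case
            for msg in walk:
                session.handle(msg)   # drive to the target state
            try:
                session.handle(bad)
            except Exception as exc:
                findings.append((state, bad, repr(exc)))
    return findings

findings = fuzz_all_states(DemoSession, [b"PUB:\xff"])
```

The same malformed message is delivered in all three states, but only the SUBSCRIBED delivery produces a finding; recording the state alongside the test case is what makes that finding reproducible.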

Monitoring captures the target’s response to each test case. Crashes, hangs, unexpected responses, and anomalous behaviour all indicate potential findings. The precision with which the monitoring captures the conditions around each finding determines whether the output is actionable.
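
A monitoring harness can classify each test case outcome into the categories described above. This sketch treats the target as a callable standing in for a real send/receive harness; the verdict names and timeout are illustrative:

```python
import time

def run_and_monitor(target, test_case, timeout_s=1.0, expected=b"OK"):
    """Execute one test case and classify the target's behaviour into the
    categories the text describes: crash, hang, unexpected response, pass.
    Each verdict carries the exact test case so the finding is reproducible."""
    start = time.monotonic()
    try:
        response = target(test_case)
    except Exception as exc:
        return {"verdict": "crash", "test_case": test_case, "detail": repr(exc)}
    elapsed = time.monotonic() - start
    if elapsed > timeout_s:
        return {"verdict": "hang", "test_case": test_case, "elapsed_s": elapsed}
    if response != expected:
        return {"verdict": "unexpected-response", "test_case": test_case,
                "response": response}
    return {"verdict": "pass", "test_case": test_case}
```

Against a real device the "crash" branch would be a liveness probe rather than an exception handler, but the structure is the same: every verdict is tied to the exact input that produced it.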


Fuzzing for Protocol-Based Systems and Connected Devices

Fuzzing for protocol-based systems and connected devices requires a different approach from fuzzing for web applications or desktop software. The protocols, the deployment constraints, and the consequences of a finding are all different.

Industrial protocols including Modbus, DNP3, IEC 61850, IEC 60870-5-104, and DLMS COSEM use binary formats with specific framing requirements, complex state machines, and application logic that generic fuzzing tools cannot reach. Telecoms protocols including GTP-C, GTP-U, SS7, and Diameter handle signalling and session management for critical communications infrastructure. IoT protocols including MQTT connect devices at scale in environments where a single exploitable vulnerability affects thousands of deployed instances. Each of these protocol families requires a fuzzing approach that understands the protocol structure well enough to generate test cases that reach the application logic.

Connected devices present specific challenges beyond the protocol layer. They often run resource-constrained operating systems without the debugging infrastructure available in desktop or server environments. They may not provide detailed error responses that allow test cases to be correlated with specific failure modes. They may be sensitive to the timing and volume of test traffic in ways that require the fuzzing approach to be adapted. And they are typically tested in a representative test environment rather than against production devices, which requires that the test environment accurately reflects the behaviour of the deployed device.

The operational constraint is particularly significant in OT environments. Fuzzing that disrupts the operational behaviour of a production system is not acceptable in environments where that system controls physical processes. Fuzzing in these environments is conducted in isolated test environments with representative devices, not against production systems, and the methodology is designed to avoid generating traffic patterns that would be disruptive if accidentally applied to a live environment.


Where Fuzzing Fits in a Security Assurance Programme

Fuzzing is most effective when it is integrated into a security assurance programme rather than conducted as a one-off exercise. Its value compounds over time as protocol models mature, test corpora grow, and regression testing catches vulnerabilities introduced by changes to implementations.

In the product development lifecycle, fuzzing fits at the integration and system testing stage, after unit testing has verified component behaviour under planned conditions. At this stage, fuzzing explores the input space that unit testing has left uncovered, finds the vulnerabilities that arise from component interactions under unexpected inputs, and produces the documented evidence that pre-release security assessment and compliance requirements demand.

For products subject to IEC 62443-4-1 certification, fuzzing provides the vulnerability testing evidence that Practice 6 SVV-3 requires. The scope, methodology, findings, and traceability to standard requirements all need to be documented in a form that will satisfy a certification audit. Fuzzing conducted without this documentation structure produces security improvement, but not compliance evidence.

In ongoing security assurance, fuzzing provides regression coverage as implementations change. Each significant update to a protocol implementation is an opportunity to introduce new vulnerabilities. Running the fuzzing corpus against updated implementations catches regressions before they reach the field, at a fraction of the cost of discovering them through a field incident or a customer security assessment. For organisations with active development cycles, continuous fuzzing integrated into the build pipeline provides this coverage automatically.
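
Regression replay is the simplest part of this to sketch: keep every triggering test case, and rerun the stored corpus against each updated implementation. The toy parser and corpus here are illustrative:

```python
def replay_corpus(target, corpus):
    """Replay previously stored triggering test cases against an updated
    implementation; any that still fail are regressions to fix before
    release. An unhandled exception stands in for a crash."""
    regressions = []
    for test_case in corpus:
        try:
            target(test_case)
        except Exception as exc:
            regressions.append((test_case, repr(exc)))
    return regressions

# Toy updated implementation with one surviving defect, for illustration.
def updated_parser(data: bytes) -> int:
    if not data:
        raise ValueError("empty frame")  # still fails on the empty input
    return len(data)

stored_corpus = [b"", b"\x00", b"\x01\x02"]
regressions = replay_corpus(updated_parser, stored_corpus)
```

In a build pipeline, a non-empty result fails the build; the corpus only grows as new findings are fixed, so the replay gets more valuable with each release.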


What Good Fuzzing Output Looks Like

The output of a fuzzing programme is what determines whether findings drive remediation or sit unactioned. Understanding what good output looks like helps organisations evaluate fuzzing capabilities and commission fuzzing work that produces results they can use.

Each finding needs the exact test case that triggered it. For protocol fuzzing, that means the precise message content, field values, and sequence that caused the failure, along with the protocol state at the time it was sent. Without this, the finding cannot be reproduced, the root cause cannot be identified, and the fix cannot be verified. A fuzzing report that describes vulnerability classes without providing triggering test cases is not actionable.
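
The minimum contents of such a finding can be sketched as a record; the field names and example values are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Sketch of the minimum an actionable protocol-fuzzing finding carries."""
    test_case: bytes          # exact triggering message, byte for byte
    protocol_state: str       # state the target was in when it was sent
    message_sequence: list    # messages that drove the target to that state
    observed_behaviour: str   # precise description, not just "crash"
    severity: str             # reflects exploitability in deployment context

finding = Finding(
    test_case=b"\x01\x03\xff\xff\x00\x01",
    protocol_state="connected-unauthenticated",
    message_sequence=["open-session"],
    observed_behaviour="target stopped responding after out-of-range register "
                       "address; required power cycle to recover",
    severity="high: reachable without authentication",
)
```

A report that carries this structure per finding can be replayed, root-caused, and re-verified after the fix; one that omits the test case or the state cannot.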

The observed behaviour needs to be documented precisely. A crash needs to be described with the specific conditions under which it occurred, not just noted as a crash. An unexpected response needs the exact response content and an explanation of why it is unexpected relative to the specification. A resource exhaustion condition needs the specific input that triggered it and the observable impact on the target.

Severity classification needs to reflect exploitability in the specific deployment context. A crash triggered by an unauthenticated connection presents a different risk from the same crash on an interface requiring prior authentication. The classification needs to give the engineering team a prioritised remediation list rather than an undifferentiated catalogue of failures.

For compliance purposes, fuzzing output needs to map findings and methodology to specific framework requirements. IEC 62443 SVV-3 requires documented scope, methodology, and traceability from test cases to the requirements being verified. Fuzzing output that does not include this structure does not satisfy the standard’s requirements, regardless of how thorough the testing was.


How ProtoCrawler Implements Fuzzing for Protocols

ProtoCrawler is CyTAL’s automated protocol fuzzing platform. It implements the approach described in this guide specifically for the protocol-based systems and connected devices where generic fuzzing tools cannot reach the vulnerabilities that matter.

For each supported protocol, ProtoCrawler uses a formal protocol model to generate test cases that combine mutation-based and generation-based approaches. Test cases pass framing layer checks and reach the application logic. State-aware testing drives the protocol implementation through its state machine and generates targeted invalid inputs at each state, finding the state machine bugs and state-dependent vulnerabilities that generic fuzzing misses.

The monitoring layer captures each finding with the precision that actionable output requires: the exact triggering test case, the protocol state at the time it was sent, the observed behaviour, and the severity classification based on exploitability. Every finding maps directly to IEC 62443 compliance requirements, producing audit-ready evidence for SVV-3 vulnerability testing, CR 3.5 input validation, and CR 7.1 denial-of-service protection assessments.

ProtoCrawler supports more than 100 protocols including Modbus, DNP3, IEC 61850, IEC 60870-5-104, GTP-C, GTP-U, DLMS COSEM, MQTT, SS7, and Diameter. For the full list, see the protocol models page. For a detailed plain-English explanation of how fuzz testing works, see What is fuzz testing?

Ready to see what fuzzing finds in the protocols your systems implement? Book a demo to see ProtoCrawler in action against your specific protocols.


Common Questions About Fuzzing in Cyber Security

Is fuzzing the same as fuzz testing?

Yes. Fuzzing and fuzz testing are the same thing described with slightly different terminology. Fuzzing is the more commonly used shorthand in security contexts. Fuzz testing is the more formal term used in testing and compliance frameworks. Both refer to the technique of generating unexpected inputs to find vulnerabilities in software and systems.

How is fuzzing different from stress testing?

Stress testing assesses how a system behaves under extreme load: high volumes of valid traffic, resource exhaustion from legitimate usage, and performance under peak demand. It tests the system’s capacity and resilience under operational stress. Fuzzing tests how the system behaves when it receives inputs it was not designed to handle. The two are related but distinct. Fuzzing can produce denial-of-service conditions as a side effect of finding vulnerabilities, but its purpose is finding vulnerabilities rather than assessing capacity.

What types of systems benefit most from fuzzing?

Any system that processes inputs from external or untrusted sources benefits from fuzzing. Protocol implementations, network-connected devices, embedded systems, and APIs are the categories where fuzzing consistently finds the most significant vulnerabilities, because these are the systems with the largest gap between the inputs developers test and the inputs a real-world adversary might deliver. Systems with complex state machines, binary data formats, or safety-critical functions benefit most because these are the characteristics associated with the vulnerability classes fuzzing finds most reliably.

Does fuzzing require specialist expertise to implement?

Manual fuzzing, which involves generating test cases and executing them by hand, requires significant expertise and produces limited coverage. Automated fuzzing platforms, including ProtoCrawler (https://cytal.co.uk/products/protocrawler/) for protocol implementations, encapsulate the expertise required to generate effective test cases in the platform itself. The expertise required from the user is understanding of the target system, its interfaces, and its deployment context, rather than expertise in test case generation methodology.

How does fuzzing relate to bug bounty programmes?

Bug bounty programmes invite external researchers to find and report vulnerabilities in exchange for rewards. Fuzzing is one of the techniques those researchers use. Organisations that conduct their own fuzzing before launching a bug bounty programme find and fix the vulnerabilities that would otherwise be discovered by researchers, improving security posture and reducing the cost of the programme. Fuzzing and bug bounty programmes are complementary: internal fuzzing finds the vulnerabilities before external researchers do, and bug bounty programmes provide an additional layer of coverage for issues that internal testing missed.

Ready to understand what fuzzing would find in your systems? Book a demo today.
