Practical foundations in network security testing
This article establishes a practical understanding of network security testing by exploring both the underlying concepts and the essential tools used in the field. It begins by differentiating network security testing from routine network monitoring and administration, placing it within the context of a continuous security process like the "Test" phase of a security cycle. Next, the article looks at how network and vulnerability scanners like Nmap and OpenVAS actively probe a network to map assets and identify potential vulnerabilities. In contrast, we will explore how packet analyzers like Wireshark and tcpdump passively capture and dissect network traffic to provide deep visibility into the actual data in transit. This dual approach allows security professionals not only to discover what is on the network and what weaknesses may exist, but also to verify, troubleshoot, and perform forensic analysis on the network's live data. Finally, the article will expand this view by distinguishing protocol analysis from software analysis, introducing the specialized tools and methodologies used for deeper vulnerability research and reverse engineering. By the end, you will understand how these varied techniques work together to monitor and improve an organization's security posture.
I obtained my PhD in Digital Transformation and Innovation in April 2020 from the University of Ottawa (PhD in DTI program, School of Electrical Engineering and Computer Science, Faculty of Engineering). My PhD thesis, titled Technoethics and sensemaking: Risk assessment and knowledge management of ethical hacking in a sociotechnical society, examined ethical hacking as a sociotechnology (thesis advisory committee: uOttawa professors Rocci Luppicini, Liam Peyton, and Andre Vellino).
You may also be interested in Practical foundations in penetration testing.
Introduction
Network administrators configure various monitoring tools and perform various testing activities to ensure smooth and secure network operation. One such activity is network security testing.
Network security testing
“Every organization on the planet that has any concern whatsoever for the security of its resources must perform various security assessments—and some don’t have a choice, if they need to comply with FISMA or other various government standards” (Walker, 2012, p. 312).
Network security is an ongoing process that can be described by the Cisco security wheel. The security wheel consists of the following four phases: Secure, Monitor, Test, and Improve. In the third phase, Test, or network security testing, netadmins verify the security design and discover vulnerabilities within the network.
Network security testing is also commonly referred to as security audit, security assessment, posture assessment, vulnerability assessment, penetration testing, and ethical hacking. All these terms are invoked to refer to "a legitimate process of attacking, discovering, and reporting security holes in a network" (Deveriya, 2005, p. 362).
Tools used for network security testing can be loosely classified into two categories: network and vulnerability scanners, which actively probe the network, and packet analyzers, which passively capture traffic.
This discussion focuses on the scanners Nmap and OpenVAS, covered in the section Penetration testing technologies, and the packet analyzers tcpdump and Wireshark, covered in the section Defensive cybersecurity technologies.
Network scanners
Network scanners are software tools that probe a network to determine the hosts present on the network. Network scanners also probe the discovered hosts to determine the TCP and UDP ports that are open. Furthermore, based on the response of the probes, scanners can identify the OS, the services that are running, and the associated security vulnerabilities present on the discovered hosts. Some scanners can also display the results in the form of graphical reports. (Deveriya, 2005, p. 365)
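The core of host and port discovery can be illustrated with a short sketch. The following is a minimal TCP connect scan, the same technique Nmap automates at scale (its -sT scan); the function name and the throwaway local listener are invented for this illustration, and a real scanner adds timing control, service fingerprinting, and raw-packet techniques on top.

```python
# Minimal sketch of a TCP connect scan (the technique behind nmap -sT).
# Illustration only: function names here are hypothetical.
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Stand up a throwaway local listener so the scan has something to find.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
target_port = server.getsockname()[1]

found = tcp_connect_scan("127.0.0.1", range(target_port - 2, target_port + 3))
print(f"open ports: {found}")
server.close()
```

Closed ports on localhost refuse the connection almost instantly, so even this naive loop is fast; against remote hosts, real scanners parallelize connections and randomize probe order.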
Packet analyzers
Packet analyzers, whether implemented as software applications or dedicated hardware devices, capture network traffic for inspection and analysis. These tools typically offer functionality for filtering, storing, and analyzing captured data. Many network intrusion detection systems (NIDS), for instance, function as specialized packet analyzers that monitor traffic for anomalous patterns associated with network attacks. Operating at the physical and data link layers (Layers 1 and 2 of the OSI model), packet analyzers can also decode protocol information from higher layers, providing networking professionals with a real-time, cross-sectional view of data traversing the network. This capability is invaluable when troubleshooting, as it allows administrators to inspect raw traffic at the packet level. It also serves as a learning tool for understanding protocol behavior and application communications, while simultaneously offering tangible proof that network components are functioning as intended (Deveriya, 2005).
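The decoding work described above can be sketched in a few lines. The snippet below unpacks the fixed 20-byte IPv4 header from raw bytes, which is the kind of field-by-field dissection an analyzer performs for every captured frame; the sample packet is hand-crafted for the illustration rather than captured from a live interface.

```python
# Decoding an IPv4 header (RFC 791) from raw bytes, as a packet
# analyzer does for each captured frame. The sample packet below is
# hand-crafted: a TCP packet from 192.168.1.10 to 10.0.0.5.
import struct

def parse_ipv4_header(data):
    """Unpack the fixed 20-byte IPv4 header."""
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,   # in bytes
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,                        # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

sample = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 1, 0, 64, 6, 0,
    bytes([192, 168, 1, 10]), bytes([10, 0, 0, 5]),
)
print(parse_ipv4_header(sample))
```

Tools like Wireshark apply hundreds of such dissectors in sequence, each layer's header telling it which dissector to run next.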
Basic vs deep packet inspection
Basic inspection at Layers 3-4 covers source and destination IPs, ports, TCP flags (SYN/ACK), packet size, and TTL (e.g., tcpdump -i eth0 'tcp port 80' shows HTTP traffic metadata).
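As a small sketch of what "basic inspection" means in practice, the snippet below names the flag bits in a TCP header's flags byte, the same field a capture filter like 'tcp[tcpflags] & tcp-syn != 0' tests.

```python
# Basic (L3-L4) inspection sketch: decode the flag bits in the TCP
# header's flags byte (offset 13 in the TCP header).
FLAGS = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
         0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def tcp_flags(flags_byte: int) -> list[str]:
    """Name the flags set in a TCP flags byte."""
    return [name for bit, name in FLAGS.items() if flags_byte & bit]

print(tcp_flags(0x12))   # 0x12 = SYN+ACK, step two of the handshake
```

Seeing SYN, SYN/ACK, ACK sequences (or their absence) in a capture is often enough to diagnose connectivity problems without ever looking at payloads.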
Deep inspection analyzes not just Layer 3 (IP) and Layer 4 (TCP/UDP) headers but also higher-layer protocols (L5-L7), such as HTTP requests, DNS queries, TLS handshakes, and even application-specific data (e.g., SSH encryption types, SMB file-sharing commands).
Basic vs deep packet inspection: tcpdump vs Wireshark
Key clarifications
tcpdump is lightweight and fast for basic L3-L4 inspection (e.g., "Show me all traffic to port 443").
tcpdump -i eth0 -X 'tcp port 443'
(Shows TCP metadata plus hex/ASCII payload snippets.)
tcpdump cannot decrypt modern encrypted traffic (e.g., HTTPS), but it can expose unencrypted metadata: the Server Name Indication (SNI) in the TLS handshake, source and destination IPs and ports, and packet sizes and timing.
Wireshark excels at deep L5-L7 analysis (e.g., "Decode this HTTP/2 stream" or "Find malformed DNS packets").
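What Wireshark's Layer 7 decoding involves can be sketched by hand. The snippet below extracts the queried hostname from the payload of a DNS query, which is what the analyzer does when it displays "Standard query A www.example.com"; the packet bytes are hand-made for the illustration, and a real dissector additionally handles name compression, multiple questions, and malformed input.

```python
# Deep (L7) inspection sketch: pull the queried hostname out of a DNS
# query payload. Packet bytes are hand-crafted for illustration.
import struct

def dns_query_name(payload: bytes) -> str:
    """Walk the length-prefixed labels in a DNS question section."""
    labels, pos = [], 12              # question starts after 12-byte header
    while payload[pos] != 0:
        length = payload[pos]
        labels.append(payload[pos + 1:pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels)

# 12-byte header (ID 0x1234, RD flag, 1 question) + QNAME + QTYPE/QCLASS
query = (
    struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    + b"\x03www\x07example\x03com\x00"
    + struct.pack("!HH", 1, 1)        # QTYPE=A, QCLASS=IN
)
print(dns_query_name(query))
```

In Wireshark, a display filter such as dns.flags.rcode != 0 applies this kind of decoded-field logic across an entire capture, surfacing only the DNS responses that report errors.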
Protocol analyzers vs software analyzers
A different class of tools becomes necessary when the focus shifts from observing network conversations to examining the internal logic of the applications themselves. The following discussion clarifies the difference between analyzing protocols, performed to identify network-level issues, and analyzing software, performed to discover flaws embedded within application code. The discussion contrasts protocol analyzers like Wireshark and tcpdump with software analysis tools such as disassemblers, debuggers, and decompilers, which are used in reverse engineering and vulnerability research.
Communications protocol analyzers
This is the domain of network traffic inspection. Wireshark and tcpdump are protocol analyzers (or packet sniffers). They see everything on the wire at the network and transport layers (e.g., IP, TCP, UDP, ICMP). They are passive observers. In comparison, Burp Suite and OWASP ZAP are Web Application Security Proxies. They operate as a man-in-the-middle between your browser and the web server, specifically for HTTP/HTTPS traffic. They are active manipulators.
Software analyzers
Protocol analyzers like Wireshark and tcpdump are used to understand the "conversation" between different components. They answer: "What data is being sent, in what order, and in what format?" Understanding the "brain" having the conversation—the software itself—requires a different set of tools to take the software apart and examine its internals.
Analyzing software products to determine the product architecture and security vulnerabilities is the core activity of reverse engineering and vulnerability research. The goal is to understand how a product works from the inside out, without necessarily having access to its original blueprints (source code). This process involves disassembling the binary, reconstructing the program's logic and data structures, and probing the result for exploitable flaws.
While a protocol analyzer is a tool for analyzing communications protocols, analyzing the software itself requires a different toolkit.
Disassemblers and Decompilers: translate a compiled binary back into assembly or C-like pseudocode (e.g., IDA Pro, Ghidra).
Debuggers: step through a running program to inspect registers, memory, and control flow (e.g., GDB, WinDbg).
Binary Analysis Frameworks: automate analysis tasks such as symbolic execution and binary instrumentation (e.g., angr, radare2).
Fuzzers (Fuzzing Tools): bombard a program with malformed or random input to provoke crashes (e.g., AFL++, libFuzzer).
Disassemblers convert machine code (1s and 0s) into assembly language, the human-readable processor instructions (e.g., MOV EAX, 0x42; CALL printf; JMP loop_start). Decompilers convert machine code into a high-level-language approximation: C-like code with variables, functions, loops, and conditionals. Fuzzing tools automatically throw malformed, unexpected, or random data at a program or protocol to try to crash it; a crash often indicates a discoverable security vulnerability.
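The fuzzing idea is simple enough to sketch in a few lines. Below is a toy mutation fuzzer in the spirit of tools like AFL: it randomly mutates a valid input and records which mutants make the target misbehave. The target here is a deliberately buggy length-prefixed parser invented for this sketch, and an uncaught exception stands in for a crash; real fuzzers instead monitor native processes for signals like SIGSEGV and use coverage feedback to guide mutation.

```python
# Toy mutation fuzzer: mutate a valid input, collect inputs that "crash"
# the target. The buggy parser and all names here are invented for this
# sketch; an exception stands in for a native crash.
import random

def parse_record(data: bytes) -> bytes:
    """Buggy target: trusts the length byte in data[0]."""
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:        # fails when length byte is inflated
        raise ValueError("truncated record")
    return payload

def fuzz(seed_input: bytes, rounds: int = 500) -> list[bytes]:
    """Randomly mutate one byte per round; collect crashing inputs."""
    rng = random.Random(1234)          # fixed seed for reproducibility
    crashers = []
    for _ in range(rounds):
        mutated = bytearray(seed_input)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_record(bytes(mutated))
        except (ValueError, IndexError):
            crashers.append(bytes(mutated))
    return crashers

crashes = fuzz(b"\x04ABCD")
print(f"{len(crashes)} crashing inputs out of 500")
```

Even this blind, single-byte mutator quickly finds the unchecked-length bug; coverage-guided fuzzers apply the same loop millions of times per minute while steering mutations toward unexplored code paths.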
Choosing your tools: protocol analysis vs software analysis
Compiled software vs source code
When software is developed, programmers write source code in human-readable languages like C, C++, Swift, or Rust. This looks like:
```c
// This is source code - humans can read it
#include <stdio.h>

int main() {
    printf("Hello, World!");
    return 0;
}
```
Compilation is the process of translating this human-readable source code into machine code - the raw 1s and 0s that the computer's processor understands directly. The result is a binary executable file (like .exe on Windows, or no extension on macOS/Linux). (An interpreter is a program that reads and executes source code such as .py files directly, line by line, without compiling it to machine code first.)
Security researchers do not have the source code for Windows, Pages, or any other commercial product. They only have the compiled binary. Their job is to work backwards: disassemble the binary, reconstruct the logic it implements, and hunt for exploitable flaws.
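As a loose analogy for this backwards journey, Python's standard-library dis module disassembles compiled Python bytecode the way a native disassembler (IDA, Ghidra, objdump) disassembles machine code: given only the compiled form, it recovers a readable instruction listing. The checksum function below is invented for the illustration, and Python bytecode is far simpler than native code, so treat this as a sketch of the idea rather than real reverse engineering.

```python
# Analogy sketch: recover an instruction listing from compiled code.
# dis does for Python bytecode what a disassembler does for machine code.
import dis

def checksum(data):
    """Tiny example target: a byte-wise additive checksum."""
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

# Disassemble the compiled function object and show a few instructions.
instructions = list(dis.get_instructions(checksum))
for ins in instructions[:5]:
    print(ins.opname, ins.argrepr)
```

Reading the listing, a researcher can spot the loop, the accumulator, and the modulo reduction without ever seeing the source, which is precisely the reconstruction step that disassemblers and decompilers enable for native binaries.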
Security testing vs vulnerability analysis
Key takeaways
References
Deveriya, A. (2005). Network Administrators Survival Guide. Pearson Education.
Sanders, C. (2017). Practical Packet Analysis: Using Wireshark to Solve Real-World Network Problems (3rd ed.). No Starch Press.
Walker, M. (2012). CEH Certified Ethical Hacker All-in-One Exam Guide. McGraw-Hill.