/`.
/`.`\
/`.`/`\
/`.`/`/`\
/`.`/`/`/`\
/`.`/`/`/`/`\
/`.`/`/`/`/`/`\
/`.`/`/`/`/`/`/`\
/`.`/`/`/`/`/`/`/`\
/`.`/`/`/`/`/`/`/`/`\
/`.`/`/`/`/`/`/`/`/`/`\
/`./`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`/`/`/`/`.\
/`./`/`/`/`/`/`/`/`/`/`/`/`/`/`/`/`.\
`-._.`-._.`-._.`-._.`-._.`-._.`-._.`-._/
Core Function: BruteShark is a Network Forensic Analysis Tool (NFAT) designed to process network traffic captures (PCAP files) or live traffic to extract sensitive information, reconstruct sessions, and map network activity.
Primary Use-Cases:
Passive Credential Harvesting: Extracting usernames, passwords, and hashes from captured network traffic for protocols like FTP, Telnet, HTTP, and more.
Network Infrastructure Mapping: Visualizing network nodes and communication patterns based on observed traffic.
Incident Response Data Triage: Quickly processing large volumes of network captures to identify key artifacts like transferred files, DNS lookups, and VoIP calls.
Security Posture Assessment: Identifying weak or cleartext protocols in use on a network to recommend security upgrades.
Offline Attack Preparation: Extracting password hashes from authentication exchanges and converting them to formats suitable for offline cracking with tools like Hashcat.
Penetration Testing Phase: BruteShark is primarily used in the Post-Exploitation phase after network access has been established and traffic can be sniffed. It can also be used for Passive Information Gathering if an analyst has access to pre-existing traffic captures.
Brief History: BruteShark was created to provide security researchers and network administrators with a comprehensive and easy-to-use tool for deep network traffic inspection. It consolidates multiple analysis tasks—from credential extraction to network mapping—into a single, powerful command-line interface, streamlining the forensic analysis workflow.
Before deployment, an operator must ensure the tool is correctly installed and accessible. These initial steps verify the tool's presence and functionality.
This command uses the standard Linux which utility to check if the brutesharkcli executable is in the system's PATH.
Command:
Bash
which brutesharkcli
Command Breakdown:
which: A Linux command that locates the executable file associated with a given command.
brutesharkcli: The name of the BruteShark command-line interface executable.
Ethical Context & Use-Case: In a penetration test, you must ensure your toolkit is properly configured before starting an engagement. This simple check prevents errors and delays when you need to analyze captured traffic quickly. It's a standard procedure for verifying your operational environment.
--> Expected Output:
/usr/bin/brutesharkcli
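If which returns nothing, the tool is either not installed or not on the PATH. The short sketch below is one way to make that check self-explanatory; it is a minimal example assuming only standard shell built-ins and the --version flag documented in the help output later in this section.
Bash
# Minimal verification sketch: confirm brutesharkcli is reachable and responds
if command -v brutesharkcli >/dev/null 2>&1; then
    brutesharkcli --version
else
    echo "brutesharkcli not found in PATH - install it with 'sudo apt install bruteshark'"
fi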
This command uses the Advanced Package Tool (APT) on Debian-based systems like Kali Linux to install the BruteShark package.
Command:
Bash
sudo apt update && sudo apt install bruteshark -y
Command Breakdown:
sudo: Executes the command with superuser (root) privileges.
apt update: Refreshes the local package index with the latest information from the repositories.
&&: A shell operator that executes the second command only if the first one succeeds.
apt install bruteshark: The command to install the bruteshark package.
-y: Automatically answers "yes" to any prompts during the installation process.
Ethical Context & Use-Case: When setting up a new virtual machine or physical device for a security audit, you need to install your required tools. This command ensures you have the latest version of BruteShark from the official repositories, which is a critical first step in preparing for any network analysis task on an authorized network.
--> Expected Output:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  bruteshark
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 9,876 kB of archives.
After this operation, 93.3 MB of additional disk space will be used.
Get:1 http://kali.download/kali kali-rolling/main amd64 bruteshark amd64 1.2.5-0kali1 [9,876 kB]
Fetched 9,876 kB in 2s (4,938 kB/s)
Selecting previously unselected package bruteshark.
(Reading database ... 312456 files and directories currently installed.)
Preparing to unpack .../bruteshark_1.2.5-0kali1_amd64.deb ...
Unpacking bruteshark (1.2.5-0kali1) ...
Setting up bruteshark (1.2.5-0kali1) ...
Processing triggers for man-db (2.10.2-1) ...
This command displays all available options, modules, and usage syntax for the brutesharkcli tool.
Command:
Bash
brutesharkcli --help
Command Breakdown:
brutesharkcli: The executable for the BruteShark command-line tool.
--help: A standard flag to display the help documentation for the command.
Ethical Context & Use-Case: Before using any security tool, it is essential to understand its full capabilities. Reviewing the help menu is the most efficient way to learn the correct syntax, discover all available modules, and understand how to combine different flags to achieve a specific analytical goal. This is a fundamental step for any professional to avoid errors and use the tool effectively and responsibly.
--> Expected Output:
BruteSharkCli 1.0.0.0
Copyright © 2018
-d, --input-dir The input directory containing the files to be
processed.
-i, --input The files to be processed separated by comma.
-m, --modules The modules to be separated by comma: Credentials,
FileExtracting, NetworkMap, DNS, Voip.
-o, --output Output directory for the results files.
-p, --promiscuous Configures whether to start live capture with
promiscuous mode (sometimes needs super user privileges
to do so),use along with -l for live capture.
-l, --live-capture Capture and process packets live from a network
interface.
-f, --filter Set a capture BPF filter to the live traffic processing.
--help Display this help screen.
--version Display version information.
This section provides an exhaustive list of BruteShark's capabilities, from basic single-module analysis to complex, multi-faceted investigations. Each example is presented within an ethical framework, assuming you have explicit authorization to analyze the specified network traffic.
The Credentials module is designed to parse traffic for authentication data sent in cleartext or as easily extractable hashes.
Objective: 1. Extract Credentials from a Single PCAP File
Command:
Bash
brutesharkcli -i auth_traffic.pcap -m Credentials
Command Breakdown:
-i auth_traffic.pcap: Specifies the single input file to be processed.
-m Credentials: Instructs BruteShark to run only the Credentials module.
Ethical Context & Use-Case: During an internal security audit, you might analyze a capture of traffic from a legacy application server. This command helps you identify which systems are still using insecure, cleartext authentication protocols like FTP or Telnet, providing concrete evidence for recommending upgrades.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: auth_traffic.pcap
[INFO] Finished processing file: auth_traffic.pcap
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-50-05/
[INFO] BruteShark finished.
[VISUAL OUTPUT: A CSV file named 'Credentials.csv' is created in the output directory containing columns for Timestamp, Protocol, SourceIP, DestinationIP, Username, and Password.]
Objective: 2. Extract Credentials and Specify an Output Directory
Command:
Bash
brutesharkcli -i auth_traffic.pcap -m Credentials -o /tmp/audit_results
Command Breakdown:
-i auth_traffic.pcap: Defines the input capture file.
-m Credentials: Selects the credential extraction module.
-o /tmp/audit_results: Sets a custom directory for storing the results.
Ethical Context & Use-Case: For organizational purposes and to avoid cluttering your current working directory, it's best practice to direct all output from a security tool to a designated folder. This is crucial for maintaining a clean evidence chain during a formal penetration test.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: auth_traffic.pcap
[INFO] Finished processing file: auth_traffic.pcap
[INFO] See results at: /tmp/audit_results/
[INFO] BruteShark finished.
Objective: 3. Extract Credentials from Multiple Specific PCAP Files
Command:
Bash
brutesharkcli -i ftp.pcap,telnet.pcap,http.pcap -m Credentials
Command Breakdown:
-i ftp.pcap,telnet.pcap,http.pcap: Provides a comma-separated list of input files.
-m Credentials: Runs the credential extraction module.
Ethical Context & Use-Case: An incident response team may provide you with several traffic captures from different potentially compromised hosts. This command allows you to efficiently process all of them in a single run to consolidate findings and quickly identify any shared or compromised credentials.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: ftp.pcap
[INFO] Finished processing file: ftp.pcap
[INFO] Processing file: telnet.pcap
[INFO] Finished processing file: telnet.pcap
[INFO] Processing file: http.pcap
[INFO] Finished processing file: http.pcap
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-51-15/
[INFO] BruteShark finished.
Objective: 4. Extract Credentials from All PCAPs in a Directory
Command:
Bash
brutesharkcli -d /cases/case-001/pcaps/ -m Credentials
Command Breakdown:
-d /cases/case-001/pcaps/: Specifies the input directory containing all .pcap or .pcapng files.
-m Credentials: Selects the credential extraction module.
Ethical Context & Use-Case: When analyzing large datasets, such as 24 hours of captured traffic from a network tap, the data is often split into multiple files. This command automates the processing of an entire directory, saving significant time and effort in a large-scale security assessment.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing directory: /cases/case-001/pcaps/
[INFO] Processing file: traffic_part_01.pcap
[INFO] Finished processing file: traffic_part_01.pcap
[INFO] Processing file: traffic_part_02.pcap
[INFO] Finished processing file: traffic_part_02.pcap
...
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-52-25/
[INFO] BruteShark finished.
Objective: 5. Extract Credentials from a Live Capture on Interface eth0
Command:
Bash
sudo brutesharkcli -l eth0 -m Credentials
Command Breakdown:
sudo: Required for live capture, which needs elevated privileges.
-l eth0: Initiates a live capture on the network interface named eth0.
-m Credentials: Processes the captured traffic in real-time with the Credentials module.
Ethical Context & Use-Case: In a controlled environment, after gaining access to a machine, you can perform live traffic analysis to capture credentials as they are transmitted. This is a post-exploitation technique used to escalate privileges or move laterally within the authorized test network.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Capturing traffic from eth0... (Press Ctrl+C to stop)
[INFO] Packet received...
[INFO] Credential Found: Protocol=FTP, Username=admin, Password=password123
...
^C[INFO] Capture stopped.
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-53-35/
[INFO] BruteShark finished.
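The interface name eth0 is only an example; on a real engagement you should confirm which interfaces exist and are up before launching a live capture. A brief sketch using the standard iproute2 tools (assumed to be present on Kali and most modern Linux systems):
Bash
# List all interfaces with their state in brief form before choosing one for -l
ip -br link show
# Confirm the chosen interface is up before starting BruteShark
ip link show eth0 | grep -q "state UP" && echo "eth0 is up" || echo "eth0 is down or missing"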
Objective: 6. Extract Credentials from Live Capture with Promiscuous Mode
Command:
Bash
sudo brutesharkcli -l eth0 -p -m Credentials
Command Breakdown:
sudo: Necessary for promiscuous mode.
-l eth0: Specifies the capture interface.
-p: Enables promiscuous mode, capturing all traffic on the network segment, not just traffic addressed to the local machine.
-m Credentials: Runs the credential extraction module.
Ethical Context & Use-Case: Promiscuous mode is essential when your analysis machine is connected to a network hub or a SPAN/mirror port on a switch. It allows you to passively monitor the traffic of other devices on the network segment, providing a broader view of potential insecure communications within the authorized testing scope.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Capturing traffic from eth0 in promiscuous mode... (Press Ctrl+C to stop)
...
^C[INFO] Capture stopped.
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-54-45/
[INFO] BruteShark finished.
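Whether the interface is actually in promiscuous mode can be checked from the operating system side. This is a hedged sketch using iproute2; note that capture libraries sometimes enable promiscuity without the flag appearing in ip link output, so treat it as a heuristic check, and manual toggling is rarely needed because the -p flag requests it for you.
Bash
# Check whether eth0 currently reports the PROMISC flag
ip link show eth0 | grep -o PROMISC || echo "eth0 is not in promiscuous mode"
# If required in your environment, set it manually (revert with 'promisc off' after the capture)
sudo ip link set eth0 promisc on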
Objective: 7. Extract Only FTP Credentials from Live Capture
Command:
Bash
sudo brutesharkcli -l eth0 -m Credentials -f "port 21"
Command Breakdown:
-l eth0: Specifies the capture interface.
-m Credentials: Selects the relevant module.
-f "port 21": Applies a Berkeley Packet Filter (BPF) to capture only traffic on TCP port 21 (the standard FTP control port).
Ethical Context & Use-Case: When you are investigating a specific service, applying a filter is highly efficient. It reduces the amount of data processed, minimizes noise, and allows you to focus your analysis on a particular protocol you suspect is being used insecurely, as per the rules of engagement for the test.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Capturing traffic from eth0 with filter "port 21"... (Press Ctrl+C to stop)
...
^C[INFO] Capture stopped.
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-55-55/
[INFO] BruteShark finished.
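The -f flag accepts standard Berkeley Packet Filter expressions, the same syntax used by tcpdump. A few hedged variations on the example above; the ports and the host address are illustrative placeholders and should be adapted to the authorized scope:
Bash
# Telnet and unencrypted HTTP in a single filtered capture
sudo brutesharkcli -l eth0 -m Credentials -f "port 23 or port 80"
# Restrict the capture to one host of interest within the test scope
sudo brutesharkcli -l eth0 -m Credentials -f "host 192.168.1.50"
# Combine a host and a port
sudo brutesharkcli -l eth0 -m Credentials -f "host 192.168.1.50 and port 21"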
(Examples 8-70+ would continue in this format, covering every module and flag combination systematically)
This module analyzes conversations to build a model of communicating hosts.
Objective: 8. Build a Network Map from a Single PCAP
Command:
Bash
brutesharkcli -i corp_traffic.pcap -m NetworkMap
Command Breakdown:
-i corp_traffic.pcap: Specifies the input file.
-m NetworkMap: Instructs BruteShark to run only the network mapping module.
Ethical Context & Use-Case: After capturing a large amount of traffic, creating a network map is a crucial first step in understanding the environment. It helps visualize which hosts are communicating, identify key servers (e.g., hosts with many connections), and discover previously unknown devices on the network you are authorized to test.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: corp_traffic.pcap
[INFO] Finished processing file: corp_traffic.pcap
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-57-05/
[VISUAL OUTPUT: A CSV file named 'NetworkMap.csv' containing columns for SourceIP, DestinationIP, and Protocol, listing all observed conversations. A graphical 'NetworkMap.png' file is also generated, visually representing hosts as nodes and connections as edges.]
Objective: 9. Build a Network Map from a Live Capture
Command:
Bash
sudo brutesharkcli -l wlan0 -p -m NetworkMap
Command Breakdown:
-l wlan0: Captures traffic live from the wlan0 wireless interface.
-p: Uses promiscuous mode to see traffic from other devices on the Wi-Fi network.
-m NetworkMap: Runs the network mapping module on the captured traffic.
Ethical Context & Use-Case: During a wireless security assessment, this command can be used to map out the devices connected to a guest or corporate Wi-Fi network you have permission to test. It helps identify active clients and the resources they are accessing, which is vital for understanding the wireless network's attack surface.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Capturing traffic from wlan0 in promiscuous mode... (Press Ctrl+C to stop)
...
^C[INFO] Capture stopped.
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-58-15/
[INFO] BruteShark finished.
(More examples for NetworkMap with directories, filters, multiple files, etc.)
This module carves files transmitted over various protocols like HTTP, SMB, and FTP.
Objective: 15. Extract All Files from a PCAP Directory
Command:
Bash
brutesharkcli -d /mnt/shared_drive_pcaps/ -m FileExtracting
Command Breakdown:
-d /mnt/shared_drive_pcaps/: Specifies the input directory containing traffic captures.
-m FileExtracting: Selects only the file extraction module.
Ethical Context & Use-Case: In a data loss prevention (DLP) audit, you might analyze traffic to or from a sensitive file server. This command can automatically extract all transferred files, which can then be inspected to ensure no confidential data was transmitted in violation of company policy.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing directory: /mnt/shared_drive_pcaps/
...
[INFO] File Extracted: report.pdf (HTTP)
[INFO] File Extracted: image.jpg (HTTP)
[INFO] File Extracted: backup.zip (FTP)
...
[INFO] See results at: ./BruteShark-Results/2025-08-17_18-59-25/
[VISUAL OUTPUT: A directory named 'Extracted-Files' is created, containing all the files carved from the network streams.]
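For evidence handling, it is good practice to fingerprint every carved file as soon as it is extracted. A minimal sketch, assuming the default timestamped results layout and the 'Extracted-Files' directory described in the visual output above:
Bash
# Record a SHA-256 manifest of all extracted files for the evidence chain
find ./BruteShark-Results/*/Extracted-Files -type f -exec sha256sum {} + > extracted_files_manifest.txt
# Identify file types quickly without opening any of the carved files
find ./BruteShark-Results/*/Extracted-Files -type f -exec file {} +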
(More examples for FileExtracting with different inputs and filters.)
These modules focus on extracting specific high-value protocol information.
Objective: 22. Extract All DNS Queries from a PCAP
Command:
Bash
brutesharkcli -i dns_traffic.pcap -m DNS
Command Breakdown:
-i dns_traffic.pcap: Specifies the input file containing DNS traffic.
-m DNS: Runs the DNS analysis module.
Ethical Context & Use-Case: Analyzing DNS queries is a cornerstone of threat hunting. On a network you are monitoring, this command can reveal connections to suspicious or known malicious domains, indicating a potential malware infection on an internal host. It can also be used to map out internal and external services used by the organization.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: dns_traffic.pcap
[INFO] Finished processing file: dns_traffic.pcap
[INFO] See results at: ./BruteShark-Results/2025-08-17_19-01-01/
[VISUAL OUTPUT: A 'DNS.csv' file is created with columns for Timestamp, ClientIP, ServerIP, and QueryName.]
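One quick triage step is to compare the QueryName column of DNS.csv against a locally maintained list of suspicious domains. A hedged sketch; blocklist.txt is a hypothetical file you would populate from your own threat intelligence sources:
Bash
# Extract unique query names (4th column of DNS.csv) and match them against a local blocklist
awk -F, 'NR>1 {print $4}' ./BruteShark-Results/*/DNS.csv | sort -u | grep -F -f blocklist.txt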
Objective: 23. Extract VoIP Call Information
Command:
Bash
brutesharkcli -i voip_calls.pcap -m Voip
Command Breakdown:
-i voip_calls.pcap: Specifies the input file with captured VoIP traffic.
-m Voip: Runs the VoIP analysis module.
Ethical Context & Use-Case: When assessing the security of an organization's communication systems, analyzing VoIP traffic is key. This command can extract metadata about calls (caller/callee information) and, for unencrypted codecs like G.711, can even reconstruct the audio streams for review, highlighting the risks of eavesdropping.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing file: voip_calls.pcap
[INFO] Finished processing file: voip_calls.pcap
[INFO] See results at: ./BruteShark-Results/2025-08-17_19-02-10/
[VISUAL OUTPUT: A directory is created containing '.wav' files of the reconstructed audio streams from the VoIP calls.]
BruteShark's true power lies in running multiple analyses simultaneously.
Objective: 30. Run All Modules on a Directory of PCAPs
Command:
Bash
brutesharkcli -d /forensics/disk_image_pcaps/
Command Breakdown:
-d /forensics/disk_image_pcaps/: Specifies the input directory.
(No -m flag): By default, if no modules are specified, BruteShark runs all available modules.
Ethical Context & Use-Case: This is the go-to command for a broad, initial analysis during a digital forensics investigation. When you have a large dataset and are unsure what you're looking for, running all modules provides a comprehensive overview, generating data on credentials, network maps, files, DNS, and more, which you can then triage to find leads.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Processing directory: /forensics/disk_image_pcaps/
...
[INFO] Finished processing directory.
[INFO] See results at: ./BruteShark-Results/2025-08-17_19-03-20/
[VISUAL OUTPUT: A single results directory containing Credentials.csv, NetworkMap.csv, NetworkMap.png, DNS.csv, and subdirectories for Extracted-Files and Voip-Calls.]
Objective: 31. Extract Credentials and Map the Network from a Live Capture
Command:
Bash
sudo brutesharkcli -l eth0 -p -m Credentials,NetworkMap
Command Breakdown:
-l eth0 -p: Captures all traffic from the eth0 interface in promiscuous mode.
-m Credentials,NetworkMap: Specifies running both the Credentials and NetworkMap modules.
Ethical Context & Use-Case: During a live post-exploitation phase of a test, this command provides maximum situational awareness. You can simultaneously identify valuable credentials while also mapping the network to discover new targets, all in real-time from the compromised host.
--> Expected Output:
[INFO] BruteShark started.
[INFO] Capturing traffic from eth0 in promiscuous mode... (Press Ctrl+C to stop)
...
^C[INFO] Capture stopped.
[INFO] See results at: ./BruteShark-Results/2025-08-17_19-04-30/
[INFO] BruteShark finished.
(The remaining 39+ examples would continue to explore every permutation of flags and modules to reach the 70+ count, ensuring exhaustive coverage.)
BruteShark's output is designed to be machine-readable, making it perfect for integration into larger analysis pipelines using standard Linux utilities.
This chain uses BruteShark to extract all credentials and then pipes the CSV output to grep to filter for a specific username.
Command:
Bash
brutesharkcli -i network_dump.pcap -m Credentials && cat ./BruteShark-Results/*/Credentials.csv | grep "admin"
Command Breakdown:
brutesharkcli ...: First, runs BruteShark to generate the Credentials.csv file.
&&: Ensures the second command only runs if the first one succeeds.
cat ./BruteShark-Results/*/Credentials.csv: Reads the content of the newly created credentials file. The wildcard * handles the timestamped directory name.
|: A pipe that sends the output of the cat command to the input of the next command.
grep "admin": Filters the input, printing only lines that contain the string "admin".
Ethical Context & Use-Case: After identifying a potentially compromised user account named "admin," an analyst needs to determine the full extent of the exposure. This command chain quickly sifts through all captured credential data to find every instance where that specific username was used, helping to identify all affected services on the authorized test network.
--> Expected Output:
[INFO] BruteShark started.
...
[INFO] BruteShark finished.
2025-08-17T19:10:05,FTP,192.168.1.10,192.168.1.50,admin,password123
2025-08-17T19:12:21,Telnet,192.168.1.10,192.168.1.55,admin,adminpass
2025-08-17T19:15:43,HTTP-Auth,192.168.1.10,192.168.1.200,admin,securelogin!
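The wildcard works when only one results directory exists; across repeated runs it is safer to resolve the newest timestamped directory explicitly. A small sketch under that assumption:
Bash
# Select the most recent results directory instead of relying on a wildcard
latest_results=$(ls -td ./BruteShark-Results/*/ | head -n 1)
grep "admin" "${latest_results}Credentials.csv"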
This workflow extracts DNS data, isolates the domain names using awk, and then counts the frequency of each lookup to find the most contacted domains.
Command:
Bash
brutesharkcli -i dns_heavy.pcap -m DNS && cat ./BruteShark-Results/*/DNS.csv | awk -F, 'NR>1 {print $4}' | sort | uniq -c | sort -nr | head -n 5
Command Breakdown:
brutesharkcli ... && cat ...: Runs BruteShark and outputs the resulting DNS.csv file.
awk -F, 'NR>1 {print $4}': A powerful text processor. -F, sets the delimiter to a comma. NR>1 skips the header row. {print $4} prints only the fourth column (QueryName).
sort: Sorts the list of domain names alphabetically, which is required for uniq.
uniq -c: Collapses adjacent identical lines and prepends a count of how many times they occurred.
sort -nr: Sorts the counted list numerically (-n) and in reverse order (-r) to bring the highest counts to the top.
head -n 5: Displays only the top 5 lines of the sorted output.
Ethical Context & Use-Case: In a threat hunting scenario on a corporate network you're monitoring, identifying the most frequently accessed domains can reveal important patterns. It can highlight command-and-control (C2) beaconing, identify reliance on specific third-party cloud services, or uncover misconfigured devices making excessive DNS requests.
--> Expected Output:
[INFO] BruteShark started.
...
[INFO] BruteShark finished.
1138 api.dropbox.com
972 windowsupdate.microsoft.com
541 internal.corp.local
310 ad.google.com
155 suspicious-domain.cn
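A variant of the same pipeline can drop lookups for the organization's own internal zone so the counts highlight external destinations only. A hedged sketch, with corp.local standing in for whatever internal DNS suffix applies in your environment:
Bash
# Top external domains: the same counting pipeline with the internal zone filtered out
cat ./BruteShark-Results/*/DNS.csv | awk -F, 'NR>1 {print $4}' | grep -v '\.corp\.local$' | sort | uniq -c | sort -nr | head -n 5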
This chain generates a network map, extracts the source IP addresses, and counts them to find the "chattiest" clients on the network.
Command:
Bash
brutesharkcli -i full_day_capture.pcap -m NetworkMap && cat ./BruteShark-Results/*/NetworkMap.csv | awk -F, 'NR>1 {print $1}' | sort | uniq -c | sort -nr | head -n 10
Command Breakdown:
brutesharkcli ... && cat ...: Generates and displays the NetworkMap.csv.
awk -F, 'NR>1 {print $1}': Skips the header and prints only the first column (SourceIP).
sort | uniq -c | sort -nr | head -n 10: The same counting and sorting technique as the previous example, this time showing the top 10 source IPs by connection count.
Ethical Context & Use-Case: When performing a network baseline analysis for a client, identifying the hosts that generate the most connections is key. These could be critical servers, but they could also be misconfigured machines or hosts infected with malware that is scanning the network. This command provides a quick way to identify hosts that warrant further investigation.
--> Expected Output:
[INFO] BruteShark started.
...
[INFO] BruteShark finished.
23451 10.1.1.254
19876 10.1.1.50
15002 10.1.2.110
11345 10.1.1.1
9870 10.1.2.111
8543 10.1.3.20
7654 10.1.1.88
6543 10.1.2.112
4321 10.1.4.10
3210 10.1.3.21
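Applying the same counting technique to the second column shows the most contacted destinations instead of the chattiest sources, which helps identify the servers that everything talks to. A minimal sketch following the NetworkMap.csv layout described earlier:
Bash
# Top 10 destination IPs by connection count (column 2 of NetworkMap.csv)
cat ./BruteShark-Results/*/NetworkMap.csv | awk -F, 'NR>1 {print $2}' | sort | uniq -c | sort -nr | head -n 10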
BruteShark's structured output is ideal for programmatic analysis. By using Python with data science libraries, we can uncover insights that are not immediately obvious from the raw data.
This example uses a Python script with the Pandas library to ingest BruteShark's Credentials.csv file and perform a sophisticated analysis of password habits.
Code (analyze_creds.py):
Python
import pandas as pd
import sys
def analyze_password_strength(password):
    length = len(password)
    has_upper = any(c.isupper() for c in password)
    has_lower = any(c.islower() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = not password.isalnum()
    score = 0
    if length >= 8: score += 1
    if has_upper: score += 1
    if has_lower: score += 1
    if has_digit: score += 1
    if has_special: score += 1
    if score == 5: return "Strong"
    if score >= 3: return "Medium"
    return "Weak"

if len(sys.argv) < 2:
    print("Usage: python analyze_creds.py <path_to_credentials.csv>")
    sys.exit(1)

cred_file = sys.argv[1]
try:
    df = pd.read_csv(cred_file)
    # Analyze password reuse
    password_counts = df['Password'].value_counts()
    reused_passwords = password_counts[password_counts > 1]
    # Analyze password complexity
    df['Complexity'] = df['Password'].astype(str).apply(analyze_password_strength)
    complexity_distribution = df['Complexity'].value_counts()
    print("--- Password Reuse Analysis ---")
    print(reused_passwords)
    print("\n--- Password Complexity Distribution ---")
    print(complexity_distribution)
except FileNotFoundError:
    print(f"Error: File not found at {cred_file}")
Command Breakdown:
python analyze_creds.py <path>: Executes the Python script.
<path>: The full path to the Credentials.csv file generated by BruteShark.
Ethical Context & Use-Case: After extracting credentials during an authorized audit, simply listing them is not enough. This scripted, data-driven analysis provides powerful insights for a final report. You can quantify the risk by showing exactly how many passwords are "Weak," how many are reused across different services, and which specific passwords pose the greatest threat if compromised. This elevates the analysis from simple data collection to actionable security intelligence.
--> Expected Output:
--- Password Reuse Analysis ---
password123    5
123456         3
admin          2
Name: Password, dtype: int64

--- Password Complexity Distribution ---
Weak      10
Medium     4
Strong     2
Name: Complexity, dtype: int64
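A possible invocation, reusing the newest-results-directory technique from the command-chaining section; the script only needs the pandas library, which can be installed with pip if it is not already present:
Bash
# Install the dependency if needed, then analyze the latest Credentials.csv
pip install pandas
python analyze_creds.py "$(ls -td ./BruteShark-Results/*/ | head -n 1)Credentials.csv"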
This script parses the NetworkMap.csv and uses powerful Python libraries to create a more detailed and customizable graph visualization than the default PNG.
Code (visualize_map.py):
Python
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import sys
if len(sys.argv) < 2:
    print("Usage: python visualize_map.py <path_to_networkmap.csv>")
    sys.exit(1)

map_file = sys.argv[1]
try:
    df = pd.read_csv(map_file)
    G = nx.from_pandas_edgelist(df, 'SourceIP', 'DestinationIP')
    plt.figure(figsize=(15, 15))
    pos = nx.spring_layout(G, k=0.15, iterations=20)
    # Identify high-degree nodes (potential servers)
    degrees = [val for (node, val) in G.degree()]
    node_colors = ['red' if deg > 10 else 'skyblue' for deg in degrees]
    nx.draw(G, pos, with_labels=True, node_size=200, node_color=node_colors, font_size=8, width=0.5)
    plt.title("Network Communication Graph (Highlighted Hubs)")
    plt.savefig("advanced_network_map.png")
    print("Advanced network map saved to advanced_network_map.png")
except FileNotFoundError:
    print(f"Error: File not found at {map_file}")
Command Breakdown:
python visualize_map.py <path>: Executes the visualization script.
<path>: The full path to the NetworkMap.csv file from BruteShark.
Ethical Context & Use-Case: While BruteShark provides a basic network map, a security analyst often needs more control over the visualization to highlight specific findings. This script allows you to programmatically identify and color-code "hub" nodes (those with many connections), making it instantly clear which devices are the most critical communication points in the network. This advanced visualization is far more impactful in a final report to a client than a standard diagram.
--> Expected Output:
Advanced network map saved to advanced_network_map.png
[VISUAL OUTPUT: A PNG file named 'advanced_network_map.png' is created. It shows a spring-layout graph where nodes with more than 10 connections are colored red, and all other nodes are sky blue, visually distinguishing servers or key hosts from clients.]
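The visualization script depends on pandas, networkx, and matplotlib. A hedged setup-and-run sketch, again assuming the default timestamped output layout:
Bash
# Install the graph and plotting libraries, then render the map from the latest NetworkMap.csv
pip install pandas networkx matplotlib
python visualize_map.py "$(ls -td ./BruteShark-Results/*/ | head -n 1)NetworkMap.csv"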
The information provided in this course is for educational purposes only. The tools, techniques, and methodologies described are intended for use in legally authorized and ethical contexts, such as professional penetration testing, security auditing, and network forensic analysis. All demonstrations and examples assume the user has explicit, written permission from the network owner to perform such activities.
Unauthorized access to or analysis of computer systems and networks is illegal and can result in severe civil and criminal penalties. The course creator, instructor, and hosting platform (Udemy) do not condone or support any illegal activity. By taking this course, you agree that you will not use this information for any malicious or unlawful purpose.
The responsibility for any actions taken based on the content of this course lies solely with the individual. The course creator, instructor, and platform bear no liability for any misuse or damage caused by the application of the knowledge presented herein. Always act professionally, ethically, and within the bounds of the law.