   .--.
  |o_o |
  |:_/ |
 //   \ \
(|     | )
/'\_   _/`\
\___)=(___/
Core Function: BloodHound uses graph theory to reveal hidden and often unintended relationships within an Active Directory (AD) environment, enabling the visualization of attack paths.
Primary Use-Cases:
Identifying complex attack paths to high-value targets like Domain Admins.
Auditing user, group, and computer permissions and privileges.
Validating Active Directory hardening and segmentation.
Understanding the "blast radius" of a compromised account.
Prioritizing remediation efforts based on attack path choke points.
Penetration Testing Phase: Post-Exploitation, Privilege Escalation, Lateral Movement.
Brief History: BloodHound was released in 2016 by Rohan Vazarkar, Andy Robbins, and Will Schroeder. It revolutionized Active Directory security analysis by making complex relationship data easily searchable and visual, fundamentally changing how both attackers and defenders approach AD environments.
This section covers the installation and initial setup of the BloodHound Community Edition on a Kali Linux system. All actions must be performed on a system you own or have explicit authorization to modify.
Command:
Bash
which bloodhound
Command Breakdown:
which: A Linux command that locates the executable file associated with a given command.
bloodhound: The name of the executable we are searching for.
Ethical Context & Use-Case: Before installing new software, it is a standard best practice to verify if it already exists on the system. This prevents potential conflicts with existing versions or package management issues.
--> Expected Output:
/usr/bin/bloodhound
(Note: If BloodHound is not installed, this command will produce no output.)
Command:
Bash
sudo apt update && sudo apt install -y bloodhound
Command Breakdown:
sudo: Executes the command with superuser (root) privileges.
apt update: Refreshes the local package index with the latest changes from the configured repositories.
&&: A shell operator that executes the second command only if the first command succeeds.
apt install -y bloodhound: Installs the BloodHound package. The -y flag automatically confirms the installation prompt.
Ethical Context & Use-Case: This is the standard, secure procedure for installing software from the official Kali Linux repositories. Keeping package lists updated ensures you are installing the latest stable and patched version of the tool, which is critical for security professionals.
--> Expected Output:
Hit:1 http://kali.download/kali kali-rolling InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
...
The following NEW packages will be installed:
  bloodhound
...
Setting up bloodhound (7.3.1) ...
...
Command:
Bash
sudo bloodhound-setup
Command Breakdown:
sudo: Executes the setup script with the necessary privileges to start services and create database users.
bloodhound-setup: The script that initializes the backend services required by BloodHound, including PostgreSQL and the Neo4j graph database.
Ethical Context & Use-Case: BloodHound requires several backend services to function. This one-time setup script correctly configures and initializes the database environment, ensuring the application runs correctly. This must be done before the first use.
--> Expected Output:
[*] Starting PostgreSQL service
[*] Creating Database
...
[*] Starting neo4j
...
Starting Neo4j.
Started neo4j (pid:12345). It is available at http://localhost:7474
...
[i] You need to change the default password for neo4j
Default credentials are user:neo4j password:neo4j
[!] IMPORTANT: Once you have setup the new password, please update /etc/bhapi/bhapi.json with the new password before running bloodhound
Ethical Context & Use-Case: Following the bloodhound-setup script, you must perform a critical security step: changing default credentials. In a professional engagement, leaving default credentials is a significant security flaw. The process involves accessing the Neo4j web interface, changing the password, and then updating the BloodHound configuration file to use this new password.
Step 1: Change Neo4j Password
Open a web browser and navigate to http://localhost:7474.
Log in with the default credentials: neo4j / neo4j.
You will be forced to change the password. Enter a new, strong password and record it securely.
[VISUAL OUTPUT: A screenshot of the Neo4j browser interface showing the login screen, followed by the password change prompt.]
Step 2: Update BloodHound API Configuration
Command:
Bash
sudo vim /etc/bhapi/bhapi.json
Command Breakdown:
sudo: Required to edit a system configuration file.
vim: A text editor. You can use any other editor like nano.
/etc/bhapi/bhapi.json: The path to the BloodHound API configuration file.
Action: Inside the file, locate the line "secret": "neo4j" and change "neo4j" to the new, strong password you just set for the Neo4j database. Save and close the file.
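If you need to script this change (for example, across several lab VMs), a minimal Python sketch follows. The exact layout of bhapi.json is an assumption here, so rather than hard-coding a path to the key, the function walks the whole structure and replaces every "secret" value it finds:

```python
import json

def update_neo4j_secret(config_path, new_password):
    """Replace every 'secret' value in a JSON config with the new password.

    NOTE: the bhapi.json layout is assumed, not documented here -- walking
    the full structure means a nested {"neo4j": {"secret": ...}} is also
    handled without knowing the exact nesting in advance.
    """
    with open(config_path) as f:
        config = json.load(f)

    def walk(obj):
        if isinstance(obj, dict):
            for key, value in obj.items():
                if key == "secret":
                    obj[key] = new_password
                else:
                    walk(value)
        elif isinstance(obj, list):
            for item in obj:
                walk(item)

    walk(config)
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
```

Run it against a copy of the file first and diff the result before touching the live configuration.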
Command:
Bash
sudo bloodhound
Ethical Context & Use-Case: This command starts the main BloodHound application. On the first run, you will be prompted to log in with default application credentials (admin/admin) and immediately change the password for the BloodHound user interface.
--> Expected Output: The terminal will show service startup logs. You should then see a login prompt in the BloodHound UI. After logging in with admin/admin, you will be prompted to set a new password for the application's admin user.
[VISUAL OUTPUT: A screenshot of the BloodHound GUI login page prompting for a username and password.]
Command:
Bash
bloodhound -h
Command Breakdown:
bloodhound: The main application executable.
-h: The flag to display the help menu. Note: If setup has not been run, it will prompt you to run bloodhound-setup first.
Ethical Context & Use-Case: Viewing the help menu is a fundamental step in understanding a tool's capabilities and available command-line options. It provides a quick reference for syntax and functionality.
--> Expected Output:
It seems it's the first time you run bloodhound
Please run bloodhound-setup first
(Note: If setup is complete, it will show command-line options, though most interaction is via the GUI.)
BloodHound's power is unlocked in a two-stage process: data collection from the target environment and subsequent analysis within the GUI. This section covers both stages. All data collection examples must only be run against a target Active Directory environment for which you have explicit, written permission to perform security testing.
SharpHound is the official data collector for BloodHound, typically run from a Windows machine within the target AD domain.
Command (run on a Windows host in the target domain):
PowerShell
.\SharpHound.exe
Command Breakdown:
.\SharpHound.exe: Executes the SharpHound collector with its default settings.
Ethical Context & Use-Case: The default collection method gathers most of the critical AD objects and relationships (users, groups, computers, sessions, local admins, GPOs) without being overly noisy. This is the most common starting point for a BloodHound analysis in a penetration testing engagement.
--> Expected Output:
2025/08/17 14:35:00 | SharpHound 2.0.0 | FINISHED | 00:05:12.6789123
2025/08/17 14:35:00 | Zip file created at C:\Path\To\20250817143500_BloodHound.zip
[A ZIP file containing several JSON files will be created in the same directory as SharpHound.exe.]
Command:
PowerShell
.\SharpHound.exe -c All
Command Breakdown:
.\SharpHound.exe: The collector executable.
-c All or --CollectionMethod All: Specifies the collection method. All attempts to collect everything SharpHound is capable of, including ACLs, GPO details, container information, and more.
Ethical Context & Use-Case: When a deep and thorough analysis is required, the All collection method provides the most comprehensive dataset. This is useful for detailed security audits but can be more network-intensive and generate more logs, potentially alerting defenders. This must be used with care and full awareness of the engagement's rules.
--> Expected Output:
2025/08/17 14:40:00 | SharpHound 2.0.0 | Starting Collection Method: All
... (extensive logging of collected objects) ...
2025/08/17 14:55:00 | SharpHound 2.0.0 | FINISHED | 00:15:00.1234567
2025/08/17 14:55:00 | Zip file created at C:\Path\To\20250817145500_BloodHound.zip
Command:
PowerShell
.\SharpHound.exe -c Session
Command Breakdown:
.\SharpHound.exe: The collector executable.
-c Session or --CollectionMethod Session: Restricts data collection to only active user sessions on domain computers.
Ethical Context & Use-Case: Collecting only session data is a stealthier approach. It focuses on finding where high-privilege users are currently logged on, which is a key piece of information for planning lateral movement. This method is faster and generates significantly less network traffic than a full collection.
--> Expected Output:
2025/08/17 15:00:00 | SharpHound 2.0.0 | Starting Collection Method: Session
... (logging session enumeration) ...
2025/08/17 15:02:00 | SharpHound 2.0.0 | FINISHED | 00:02:00.9876543
2025/08/17 15:02:00 | Zip file created at C:\Path\To\20250817150200_BloodHound.zip
Command:
PowerShell
.\SharpHound.exe --DomainController DC01.corp.local
Command Breakdown:
.\SharpHound.exe: The collector executable.
--DomainController DC01.corp.local: Forces SharpHound to direct all its LDAP queries to a specific Domain Controller.
Ethical Context & Use-Case: In a large environment with many Domain Controllers, you may want to target a specific one to reduce load on others or to pull data from a DC you know is accessible. This can also be a stealth measure to concentrate traffic towards a single source, which might be less suspicious than querying multiple DCs across the network.
--> Expected Output:
2025/08/17 15:05:00 | SharpHound 2.0.0 | Using Domain Controller: DC01.corp.local
... (logging default collection) ...
2025/08/17 15:10:00 | SharpHound 2.0.0 | FINISHED | 00:05:00.1122334
2025/08/17 15:10:00 | Zip file created at C:\Path\To\20250817151000_BloodHound.zip
Once data is collected with SharpHound and uploaded to the BloodHound GUI (via the "Upload Data" button), the real analysis begins. The following examples represent queries run in the "Raw Query" bar at the bottom of the UI.
Objective: Enumerate All Members of the Domain Admins Group (Direct and Nested).
GUI Action/Cypher Query:
Cypher
MATCH (u:User)-[:MemberOf*1..]->(g:Group) WHERE g.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN u,g
Query Breakdown:
MATCH: The Cypher clause to specify a pattern to find in the graph.
(u:User): Looks for a node labeled as a User, assigning it to the variable u.
-[:MemberOf*1..]->: Matches a MemberOf relationship. *1.. means it will traverse one or more levels of nested group memberships.
(g:Group): Looks for a node labeled as a Group, assigning it to the variable g.
WHERE g.name = 'DOMAIN ADMINS@CORP.LOCAL': Filters the results to only include paths that end at the "Domain Admins" group.
RETURN u,g: Returns the user and group nodes that match the pattern.
Ethical Context & Use-Case: Identifying Domain Admins is the primary objective in most internal Active Directory penetration tests. This query maps out every user who holds these high-level privileges, either directly or through nested group membership. For a defender, this helps audit and minimize the number of accounts with ultimate control.
[VISUAL OUTPUT: A graph diagram in the BloodHound UI. A central node representing the "DOMAIN ADMINS" group is visible. Multiple user nodes are connected to it via "MemberOf" edges, some directly, some through other intermediate group nodes.]
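The MemberOf*1.. traversal above is just the transitive closure of the membership relation. To make that concrete, here is a stdlib-only Python sketch of the same idea; the dictionary layout is invented for illustration, not BloodHound's storage format:

```python
def effective_members(direct_memberships, target_group):
    """Return every principal that reaches target_group via one or more
    MemberOf hops -- users and nested groups alike, mirroring MemberOf*1..

    direct_memberships maps each principal to the groups it is *directly*
    MemberOf (an assumed, illustrative structure).
    """
    # Invert the relation: group -> principals directly MemberOf it.
    reverse = {}
    for member, groups in direct_memberships.items():
        for group in groups:
            reverse.setdefault(group, set()).add(member)

    # Walk backwards from the target, collecting everything that reaches it.
    found, stack = set(), [target_group]
    while stack:
        group = stack.pop()
        for member in reverse.get(group, ()):
            if member not in found:
                found.add(member)
                stack.append(member)
    return found
```

Note that the result includes nested groups as well as users, just as the Cypher query's path traversal passes through intermediate group nodes.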
Cypher Query:
Cypher
MATCH (u:User) WHERE u.serviceprincipalname IS NOT NULL AND u.dontreqpreauth = false RETURN u.name
Query Breakdown:
MATCH (u:User): Finds all user nodes.
WHERE u.serviceprincipalname IS NOT NULL: Filters for users that have a Service Principal Name (SPN) set, meaning they are associated with a service.
AND u.dontreqpreauth = false: Excludes users that do not require Kerberos pre-authentication, which is a different vulnerability (AS-REP Roasting).
RETURN u.name: Returns the names of the identified users.
Ethical Context & Use-Case: Kerberoasting is a common attack where an attacker requests a service ticket (TGS) for a user with an SPN. The ticket is encrypted with the user's NTLM hash, which can be taken offline and cracked. This query identifies all user accounts that are vulnerable to this attack, allowing a pen tester to demonstrate risk and a defender to remediate it (e.g., by using strong, long passwords or switching to Group Managed Service Accounts).
[VISUAL OUTPUT: A list of usernames is displayed in the BloodHound results pane. In the graph view, all the identified user nodes are highlighted.]
GUI Action/Cypher Query: This is typically done via the GUI's pathfinding feature.
Search for the starting user (e.g., LOWPRIV.USER@CORP.LOCAL) in the search bar and select it.
Search for the target group (DOMAIN ADMINS@CORP.LOCAL) and select it.
Click the "Pathfinding" icon (looks like two nodes with a path between them).
The underlying Cypher query is complex, but the abstract version is:
Cypher
MATCH p = shortestPath((n1)-[*1..]->(n2)) WHERE n1.name = 'LOWPRIV.USER@CORP.LOCAL' AND n2.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN p
Ethical Context & Use-Case: This is the core function of BloodHound. Starting from a compromised low-privilege user, this visualizes the exact chain of relationships (e.g., user is a member of a group, that group has admin rights on a machine, a Domain Admin has an active session on that machine) that can be exploited to escalate privileges to Domain Admin. This provides a clear, actionable roadmap for both an attacker's lateral movement and a defender's remediation strategy.
[VISUAL OUTPUT: A visual graph showing a chain of nodes and edges. It starts with the "LOWPRIV.USER" node, connects through several intermediate computer and group nodes, and ends at the "DOMAIN ADMINS" group node. Each edge is labeled with the specific permission, e.g., "AdminTo", "MemberOf", "HasSession".]
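Under the hood, shortestPath() is essentially a breadth-first search over the edge types BloodHound knows about. The following stdlib-only sketch runs the same search on a toy graph; every node name and edge below is illustrative, not taken from a real collection:

```python
from collections import deque

# Toy adjacency list mirroring BloodHound edge semantics (illustrative):
# user -MemberOf-> group, group -AdminTo-> computer, computer -HasSession-> user
EDGES = {
    "LOWPRIV.USER@CORP.LOCAL": ["HELPDESK@CORP.LOCAL"],
    "HELPDESK@CORP.LOCAL": ["SRV-APP01.CORP.LOCAL"],
    "SRV-APP01.CORP.LOCAL": ["DA.USER@CORP.LOCAL"],
    "DA.USER@CORP.LOCAL": ["DOMAIN ADMINS@CORP.LOCAL"],
}

def shortest_path(start, goal):
    """Breadth-first search: the same idea Cypher's shortestPath() applies."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path exists
```

Because BFS explores level by level, the first path that reaches the goal is guaranteed to be among the shortest, which is why BloodHound's pathfinding favors it.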
BloodHound's data can be exported and integrated with other tools to create powerful testing and analysis workflows. The data is typically exported from the GUI by right-clicking in the results pane and selecting "Export as CSV/JSON".
Objective: Port-Scan Computers Where Domain Admins Have Active Sessions.
Command Chain:
In BloodHound GUI: Run a query to find computers with DA sessions.
Cypher
MATCH (c:Computer)<-[:HasSession]-(u:User)-[:MemberOf*1..]->(g:Group) WHERE g.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN DISTINCT c.name
Export: Export the list of computer names to a file named da_hosts.txt.
In Kali Terminal: Use the list with Nmap for a targeted port scan.
Bash
sudo nmap -iL da_hosts.txt -p- --open -oN nmap_da_hosts.txt
Command Breakdown:
BloodHound Part: The Cypher query finds computers (c:Computer) that have a session (<-[:HasSession]-) from a user (u:User) who is a member of Domain Admins. It returns the unique computer names.
Nmap Part:
sudo nmap: Runs Nmap with root privileges.
-iL da_hosts.txt: Takes the list of target hosts from the specified file.
-p-: Scans all 65,535 TCP ports.
--open: Only shows ports that are in an open state.
-oN nmap_da_hosts.txt: Saves the output in normal format to a file.
Ethical Context & Use-Case: This workflow efficiently combines BloodHound's high-level AD analysis with Nmap's network-level host analysis. After identifying high-value targets (machines where DAs are logged in), the ethical hacker can perform a detailed port scan to find potential services to exploit for lateral movement or remote code execution, providing a much more focused and less noisy approach than scanning the entire network.
--> Expected Output:
From BloodHound: [VISUAL OUTPUT: A list of computer hostnames (e.g., DC01.CORP.LOCAL, SRV-APP01.CORP.LOCAL) in the BloodHound results pane.]
From Nmap:
Starting Nmap 7.92 ( https://nmap.org ) at 2025-08-17 15:30 PKT
Nmap scan report for DC01.CORP.LOCAL (192.168.1.10)
Host is up (0.0010s latency).
Not shown: 65520 closed tcp ports (reset)
PORT     STATE SERVICE
53/tcp   open  domain
88/tcp   open  kerberos-sec
135/tcp  open  msrpc
...
3389/tcp open  ms-wbt-server

Nmap scan report for SRV-APP01.CORP.LOCAL (192.168.1.55)
Host is up (0.0012s latency).
Not shown: 65530 closed tcp ports (reset)
PORT    STATE SERVICE
80/tcp  open  http
135/tcp open  msrpc
445/tcp open  microsoft-ds
...

Nmap done: 2 IP addresses (2 hosts up) scanned in 15.78 seconds
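Exports from the GUI sometimes need cleanup before nmap will accept them (duplicates, empty rows). A small Python sketch can normalize a JSON export into da_hosts.txt; the row layout assumed here (a JSON array of objects keyed by the returned column name) is an assumption about the export format, so adjust the column parameter and indexing to match what your GUI version actually writes:

```python
import json

def hosts_from_export(raw_json, column="c.name"):
    """Normalize a BloodHound query export into a sorted, de-duplicated
    host list suitable for `nmap -iL`.

    ASSUMPTION: the export is a JSON array of row objects keyed by the
    returned column name; verify against your actual export before use.
    """
    rows = json.loads(raw_json)
    # A set drops duplicate hostnames; sorting keeps the file diff-friendly.
    names = {row[column] for row in rows if row.get(column)}
    return sorted(names)
```

Write the result with "\n".join(hosts) + "\n" to da_hosts.txt and feed that file to the nmap command above.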
Objective: Extract Kerberoastable Usernames from a BloodHound Export with jq.
Command Chain:
In BloodHound GUI: Run the Kerberoastable query and export the results as JSON to a file named kerberoastable.json.
In Kali Terminal: Use grep and jq to parse the JSON and format the hashes for cracking.
Bash
cat kerberoastable.json | jq -r '.data[].Properties.samaccountname' > kerberoastable_users.txt
(Note: This assumes you have another tool like Impacket's GetUserSPNs.py to actually request and export the TGS hashes. This chain focuses on extracting the usernames from BloodHound's output.)
Command Breakdown:
cat kerberoastable.json: Reads the content of the JSON file.
|: Pipes the output of cat to the jq command.
jq -r '.data[].Properties.samaccountname': jq is a command-line JSON processor.
-r: Outputs raw strings instead of JSON-formatted strings.
.data[]: Enters the data array in the JSON structure.
.Properties.samaccountname: For each object in the array, it extracts the value of the samaccountname key under the Properties object.
> kerberoastable_users.txt: Redirects the final output (a list of usernames) to a text file.
Ethical Context & Use-Case: While BloodHound identifies vulnerable accounts, it doesn't perform the attack. This workflow demonstrates how an ethical hacker can systematically export the list of targets from BloodHound and prepare it for use with other tools (like Impacket or Rubeus) that will actually perform the Kerberoast attack. This separation of reconnaissance and exploitation is a key part of a structured penetration test.
--> Expected Output:
svc_sql
svc_iis
svc_backup
admin_backup
...
(The file kerberoastable_users.txt now contains a clean list of usernames.)
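Where jq is not available, the same extraction can be sketched in a few lines of Python; this mirrors the jq filter above and assumes the same export layout (.data[].Properties.samaccountname):

```python
import json

def kerberoastable_usernames(raw_json):
    """Python equivalent of: jq -r '.data[].Properties.samaccountname'

    ASSUMPTION: the export layout matches the jq example above
    ({"data": [{"Properties": {"samaccountname": ...}}, ...]}).
    """
    parsed = json.loads(raw_json)
    return [row["Properties"]["samaccountname"] for row in parsed["data"]]
```

The returned list can be written one name per line to kerberoastable_users.txt, matching the jq workflow's output.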
BloodHound provides raw graph data. By exporting this data, we can leverage Python libraries like Pandas and NetworkX to perform advanced, AI-driven analysis that goes beyond the built-in GUI queries.
Objective: Load Exported BloodHound Data into Pandas DataFrames.
Code (Python Script):
Python
import pandas as pd
import json
# Objective: Export users and groups from BloodHound as JSON,
# then load them into Pandas DataFrames for analysis.
# Step 1: In BloodHound, run 'MATCH (n:User) RETURN n' and 'MATCH (n:Group) RETURN n'
# and export both results to 'users.json' and 'groups.json' respectively.
# Step 2: Use this Python script to load the data.
def load_bloodhound_json(filepath):
    """Loads exported BloodHound JSON into a Pandas DataFrame."""
    with open(filepath, 'r') as f:
        data = json.load(f)
    records = [item['n']['properties'] for item in data['data']]
    return pd.DataFrame.from_records(records)
users_df = load_bloodhound_json('users.json')
groups_df = load_bloodhound_json('groups.json')
print("Users DataFrame Info:")
users_df.info()
print("\nGroups DataFrame Info:")
groups_df.info()
print("\nFirst 5 Users:")
print(users_df.head())
Code Breakdown:
import pandas as pd, import json: Imports the necessary Python libraries for data manipulation and JSON parsing.
load_bloodhound_json(filepath): A function to open a JSON file, parse its structure (which follows BloodHound's export format), and convert the list of node properties into a Pandas DataFrame.
users_df = load_bloodhound_json(...): Calls the function to create DataFrames for users and groups.
users_df.info(), users_df.head(): Standard Pandas commands to display summary information and the first few rows of the DataFrame, verifying that the data was loaded correctly.
Ethical Context & Use-Case: The BloodHound GUI is excellent for visualization, but for large-scale, programmatic analysis, the data needs to be in a structured format. This script is the first step in any AI-augmented workflow: extracting the data from BloodHound's proprietary format into a universally usable structure like a Pandas DataFrame. This enables security auditors and data scientists to build custom analysis and reporting tools.
--> Expected Output:
Users DataFrame Info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1542 entries, 0 to 1541
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 1542 non-null object
1 domain 1542 non-null object
2 enabled 1542 non-null bool
...
14 pwdlastset 1500 non-null float64
dtypes: bool(1), float64(1), object(13)
memory usage: 182.2+ KB
Groups DataFrame Info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 215 entries, 0 to 214
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 215 non-null object
1 domain 215 non-null object
...
4 description 150 non-null object
dtypes: object(5)
memory usage: 8.5+ KB
First 5 Users:
name domain enabled ...
0 ADMINISTRATOR@CORP.LOCAL CORP.LOCAL True ...
1 GUEST@CORP.LOCAL CORP.LOCAL False ...
2 KRBTGT@CORP.LOCAL CORP.LOCAL False ...
3 JOHN.DOE@CORP.LOCAL CORP.LOCAL True ...
4 SVC_SQL@CORP.LOCAL CORP.LOCAL True ...
Objective: Identify Attack-Path Choke Points to Domain Admin.
Code (Python Script):
Python
import pandas as pd
import json
# Prerequisite: Exporting 'AllShortestPathsToDA.json' from BloodHound.
# Query: MATCH p = allShortestPaths((n)-[*1..]->(g:Group)) WHERE g.name='DOMAIN ADMINS@CORP.LOCAL' RETURN p
with open('AllShortestPathsToDA.json', 'r') as f:
    data = json.load(f)

# Flatten the nested path data and count node appearances
node_counts = {}
for path_data in data['data']:
    path_nodes = path_data['p']['nodes']
    for node_str in path_nodes:
        # Simple parsing to get the node name
        try:
            name = node_str.split('"name":"')[1].split('"')[0]
            node_counts[name] = node_counts.get(name, 0) + 1
        except IndexError:
            continue

# Convert to a Pandas Series for easy sorting
choke_points = pd.Series(node_counts).sort_values(ascending=False)

print("Top 10 High-Impact Choke Points to Domain Admin:")
print(choke_points.head(10))
Code Breakdown:
with open(...): Loads the JSON data containing all shortest paths to Domain Admins.
node_counts = {}: Initializes a dictionary to store the frequency of each node.
for path_data in data['data']:: Iterates through each path found by BloodHound.
node_str.split(...): A simple (brittle) parser to extract the name of each node in the path. A more robust solution would use regex or a proper JSON parser on the inner strings.
node_counts[name] = ...: Increments the count for each node every time it appears in a path.
pd.Series(...).sort_values(...): Converts the dictionary to a Pandas Series and sorts it to find the nodes that appear most frequently.
Ethical Context & Use-Case: This script automates the identification of "choke points." Nodes (users, groups, or computers) that appear in many different attack paths to Domain Admins are the most critical assets to protect. By programmatically counting node appearances, a blue team can use this AI-driven approach to prioritize remediation. Hardening the top 10 choke points will break the largest number of potential attack paths, providing the highest return on investment for defensive efforts.
--> Expected Output:
Top 10 High-Impact Choke Points to Domain Admin:
IT_ADMINS@CORP.LOCAL           152
SRV-HELPDESK@CORP.LOCAL        121
HELPDESK_TIER1@CORP.LOCAL      115
DC_ADMINS@CORP.LOCAL            98
PC012345.CORP.LOCAL             76
JANE.SMITH@CORP.LOCAL           65
BACKUP_OPERATORS@CORP.LOCAL     54
SRV-WSUS.CORP.LOCAL             49
GPO_LOCAL_ADMINS@CORP.LOCAL     41
DOMAIN ADMINS@CORP.LOCAL        35
dtype: int64
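The script's string-splitting parser is, as its own comment admits, brittle. If the exported node entries are themselves valid JSON, a sturdier version can parse them properly and lean on collections.Counter; whether the 'name' property sits at the top level or under 'properties' varies by export, so this sketch tries both (an assumption to verify against your own files):

```python
import json
from collections import Counter

def count_choke_points(paths):
    """Count how often each named node appears across attack paths.

    Each element of `paths` is a list of node entries; entries may be JSON
    strings or already-parsed dicts. ASSUMPTION: 'name' lives either at the
    top level or under 'properties' -- adjust to your actual export.
    """
    counts = Counter()
    for path in paths:
        for node in path:
            if isinstance(node, str):
                try:
                    node = json.loads(node)
                except json.JSONDecodeError:
                    continue  # skip malformed entries instead of crashing
            props = node if "name" in node else node.get("properties", {})
            if "name" in props:
                counts[props["name"]] += 1
    return counts.most_common()
```

most_common() already returns nodes in descending frequency order, so the top entries are the choke points to harden first.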
The information provided in this course is for educational purposes only and is intended for use in legally authorized and ethical cybersecurity activities. The tools and techniques described herein should only be used in professional contexts such as authorized penetration testing, vulnerability assessments, and security audits on systems that you own or have explicit, written permission to test from the system owner.
Unauthorized access to computer systems or networks is illegal and subject to civil and criminal penalties. The course creator, instructor, and the Udemy platform bear no responsibility or liability for any individual's misuse or illegal application of this information. By proceeding with this course, you acknowledge your responsibility to adhere to all applicable laws and to act in a strictly ethical and professional manner.