
Building my own SOC

Let's build our own SOC using open-source tools such as Zeek, Sigma and osquery. Please note, this is a bit of a notes dump of how I set it up. It is not a step-by-step guide, but you could pop this into an AI chatbot to generate one for your systems.


For quite some time now I have been concentrating on offensive security, malware analysis and reverse engineering.

But let's flip the script and start to look at how to defend against all of the above and more. 

The SOC is the command hub of enterprise defense: it is where large amounts of data from endpoints, networks, applications and cloud services are collected and analysed. A skilled SOC analyst balances technical fluency with adversary knowledge.

First step - Determine our adversary framework

First, we need to choose our adversary framework so we get a head start on knowing how to build our detection platform. We will engineer our detections directly against the MITRE ATT&CK framework (https://attack.mitre.org/).
The ATT&CK framework organises real-world attacker behaviors into a matrix of tactics and techniques. It is not tied to specific exploits or tools but to the behaviors themselves, which is important because tools and exploits change on a daily basis.

With this framework we will align the tools we will use to build our own SOC to the techniques outlined by ATT&CK. This will improve our defenses, confidence and resilience. 
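To make this concrete, here is a rough sketch of how the tool categories we'll set up below might map onto a few real ATT&CK technique IDs. The mapping choices here are mine, not prescribed by MITRE:

```yaml
# Illustrative mapping: tool -> example ATT&CK techniques it helps detect
osquery:          # endpoint visibility
  - T1059         # Command and Scripting Interpreter (process/command-line telemetry)
  - T1547.001     # Registry Run Keys / Startup Folder (registry tables)
zeek:             # network visibility
  - T1021.002     # Remote Services: SMB/Windows Admin Shares (lateral movement)
  - T1071.004     # Application Layer Protocol: DNS (suspicious lookups, tunnelling)
sigma:            # rule layer on top of the collected logs
  - any technique a community rule covers, tagged e.g. attack.t1059
```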


Second step - Determine the core categories of tools we need

We will need the ability to see actions across endpoints, networks and applications.
Three categories of tools we need are:
  1. SIEM
  2. EDR
  3. NDR
We'll need to unify the three to have the most effective SOC.

The SIEM (Security Information and Event Management) is the nervous system of the SOC: it allows us to aggregate the logs from our assets, such as cloud services, firewalls etc., into a central location. We can then use this to ask broad questions such as "Which account attempted to log into systems at unusual times?"
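Once those logs are in one place, a question like that becomes a query. As a hedged sketch in Elastic's KQL, assuming your data is normalised to ECS field names (yours may differ):

```
event.category: "authentication" and event.outcome: "failure"
```

Narrow it further with a time filter in the SIEM's UI to catch the "unusual times" part.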

EDR (Endpoint Detection and Response) provides fine-grained telemetry such as process execution, command-line arguments etc. We will use this to monitor and respond on endpoints.

NDR (Network Detection and Response) provides network layer coverage. We can use this to inspect traffic and how devices are communicating. For example, lateral SMB scanning.

So, let's first start with EDR, and the tool I want to use to query the OS is osquery.

Installing osquery

osquery exposes an operating system as a high-performance relational database. This allows you to write SQL queries to explore operating system data. 

You can follow along with the installation here - https://osquery.readthedocs.io/en/stable/installation/install-windows/#installing-osquery-via-the-msi-package
Also, we need to ensure we have Chocolatey installed on your Windows system (I already have) - https://chocolatey.org/install

You simply run "choco install osquery --params='/InstallService'" via PowerShell and follow any on-screen instructions to install osquery.

To test it has installed successfully, just run "osqueryi" to load the interactive shell:




To test that osquery is running as expected, we will run some test queries.
For example, run "SELECT DISTINCT process.name FROM processes AS process;" and it will list your system's processes:



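A few more queries worth trying in osqueryi. These use tables from osquery's standard schema; results will obviously vary by system:

```sql
-- Running processes whose binary no longer exists on disk (a classic malware tell)
SELECT pid, name, path FROM processes WHERE on_disk = 0;

-- What is listening on the network, joined back to the owning process
SELECT p.name, lp.port, lp.address
FROM listening_ports lp JOIN processes p ON lp.pid = p.pid;

-- Who is currently logged in
SELECT * FROM logged_in_users;
```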
Also, follow the "Enabling Windows Event Log support" instructions in the osquery docs, as we do want to check the event logs using osquery.
Once done, add the following to the osquery.conf file under the "options" setting at the start:

"logger_plugin": "windows_event_log,filesystem"

Don't forget: any changes you make to the conf file require you to restart osqueryd:

"Restart-Service osqueryd"


Ok, now that's installed, let's install our NDR which will be Zeek.

Installing Zeek

Zeek is an open-source network security monitoring tool: it listens on our network interfaces and quietly analyses the network traffic in real time. We will then feed this into our SIEM for detection and analysis. It does not work like a firewall or IPS, since it is not an active defence mechanism.


There are two options to install Zeek: one via Docker and the other via its binary packages.
I will personally install it from a binary package, as that is what Zeek recommends (https://zeek.org/get-zeek/).

Following - https://docs.zeek.org/en/master/install.html#docker-images, we need to perform the following:

echo 'deb https://download.opensuse.org/repositories/security:/zeek/xUbuntu_22.04/ /' | sudo tee /etc/apt/sources.list.d/security:zeek.list
curl -fsSL https://download.opensuse.org/repositories/security:zeek/xUbuntu_22.04/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/security_zeek.gpg > /dev/null


This adds the relevant openSUSE Build Service (OBS) package repository to our system.
Then update apt and install Zeek:


sudo apt update
sudo apt install zeek-7.0


To confirm installation run "ls /opt/zeek/bin/" and you should see the following:




Which is great, but if you try running "zeek" in your terminal, it won't be found because it hasn't been added to our PATH, so let's do that:


echo 'export PATH=/opt/zeek/bin:$PATH' >> ~/.bashrc

That adds the path to our .bashrc (so it's loaded every time a terminal spins up).
Then, reload your shell:

source ~/.bashrc

and then run:

zeek -v

and we should see:

"zeek version 7.0.11"

We have now installed Zeek and have added it to our path!
Now, let's run a test to ensure Zeek can listen and capture our packets.

If you run "zeek -i eth0 -C", Zeek will start listening on that interface (most likely you will need sudo so it can monitor the interface):



Once it is running, in another terminal or via a browser on the same host, just go to google.com.
Return to the terminal running Zeek and press Ctrl+C. Then you should see the following files:



The ssl.log will contain the request to Google, since the request was made over HTTPS.

Note: Since I am using WSL2, I will have to run Wireshark on my Windows host, send the pcap file to WSL2, and have a cron job run Zeek on the pcap.

Extra step for WSL


Since I am running Zeek in WSL and, at the time of writing, Zeek does not have a Windows release, I decided to just run Wireshark, save the pcap file, send it to my WSL mount, and then run Zeek via a cron job on the pcap file.


I will run a tshark command to capture the traffic on my local network and save it to a directory under my user:

tshark -i 6 -b filesize:100000 -b files:1 -w "C:\Users\user1\Documents\wireshark\local_pcap\capture.pcap"

The -b filesize option is specified in kilobytes, so filesize:100000 limits each capture file to roughly 100 MB, and files:1 keeps a single rotating file.

Check that the file has been created and is capturing data:


We will also want this command to run as a service so that if the host restarts, the capture starts up again.


Setting up Zeek + cron job

Then check that you can run zeek on the file via WSL:
For me, the path within WSL to the file is at 

/mnt/c/Users/user1/Documents/wireshark/local_pcap

and then simply run zeek on the pcap file:

sudo zeek -r ./capture_00001_20260226135936.pcap LogAscii::use_json=T

and we should see the results:


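Because we passed LogAscii::use_json=T, each log line is a JSON object, which makes quick checks easy with standard shell tools. A small self-contained sketch (the sample lines below are made up; real dns.log entries carry many more fields):

```shell
# Write a couple of fake dns.log lines in Zeek's JSON format (illustrative only)
cat > /tmp/dns_sample.log <<'EOF'
{"ts":1700000000.0,"id.orig_h":"192.168.1.10","query":"www.google.com","qtype_name":"A"}
{"ts":1700000001.0,"id.orig_h":"192.168.1.10","query":"www.youtube.com","qtype_name":"A"}
EOF

# Pull out just the queried domain names
grep -o '"query":"[^"]*"' /tmp/dns_sample.log | cut -d'"' -f4
```

The same pattern works on any of the JSON logs (conn.log, ssl.log, etc.).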

Excellent. But remember, Zeek was only run on the pcap file at that moment in time, so now we will set up a cron job to run it on a schedule.

I wrote the following script and saved it:

#!/bin/bash

cd /mnt/c/Users/user1/Documents/wireshark/local_pcap && sudo /opt/zeek/bin/zeek -r capture*.pcap LogAscii::use_json=T

Notice here that the file I run Zeek on is "capture*"; that's because tshark appends a sequence number and timestamp to the filename, so we tell the shell to match any file that starts with "capture".
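One caveat with the glob: zeek -r reads a single pcap, so if tshark ever leaves more than one capture file behind, the glob expands to several arguments. A hedged sketch of picking just the newest file instead (the directory and filenames here are stand-ins):

```shell
# Create a throwaway directory with two fake capture files to demonstrate
dir=$(mktemp -d)
touch "$dir/capture_00001_20260226135936.pcap"
sleep 1
touch "$dir/capture_00002_20260226140036.pcap"

# ls -t sorts newest-first, so head -n 1 gives the most recent capture;
# in the real cron job you would pass "$latest" to zeek -r
latest=$(ls -t "$dir"/capture*.pcap | head -n 1)
echo "$latest"
```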

Once written we then simply add a cron job with the following:

*/5 * * * * cd /mnt/c/Users/user/Documents/wireshark/local_pcap && /opt/zeek/bin/zeek -r capture*.pcap LogAscii::use_json=T

That tells the system to run the command every 5 minutes; of course, you can customise the timing.

Now that Zeek is set up to run every 5 minutes (even if the system restarts), we have to do the same for Wireshark.

Scheduled task for Wireshark

Since Wireshark runs on my Windows host, I'll set it up as a scheduled task to run on boot and in the background.

The way I did this was to write a .bat file and then schedule it to run on start-up. We can test this by rebooting the machine and checking it's running via Task Manager; if so, we are good to go.

This is what is in my bat file:

@echo off
"C:\Program Files\Wireshark\tshark.exe" -i 6 -b filesize:100000 -b files:1 -w "C:\Users\user1\Documents\wireshark\local_pcap\capture.pcap"

Great, now that we have Zeek set up and running, let's move onto the next step.

Rule Detections - Sigma

Now that we can use osquery and Zeek to investigate OS processes and network traffic, we will want a tool to perform detections based on rules. This is where Sigma comes in. Sigma is a generic, open, and structured detection format that allows security teams to describe relevant log events in a simple and shareable way.

You can find the installation steps here - https://sigmahq.io/docs/guide/getting-started.html
I already have a Python version higher than 3.9, so all I have to do is run "pip install sigma-cli".
And if you run:

C:\> sigma.exe version
2.0.1 (online pypi.org: 2.0.1)

You should get the version number installed. Great.

We will come back to fine tuning Sigma later.
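As a taste of what we'll come back to, here is a minimal Sigma rule in YAML. The title, placeholder id and exact field names are illustrative; only the overall structure comes from the Sigma format:

```yaml
title: Suspicious PowerShell Encoded Command
id: 00000000-0000-0000-0000-000000000000   # placeholder UUID
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: medium
tags:
  - attack.execution
  - attack.t1059.001
```

With sigma-cli plus the relevant backend plugin installed, a rule like this can be converted into a query for your SIEM, e.g. "sigma convert -t lucene -p ecs_windows rule.yml" (the available target and pipeline names depend on which plugins you have installed).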

SIEM

Elasticsearch

We will use Elasticsearch as it is compatible with Sigma, Zeek and osquery - https://www.elastic.co/downloads/elasticsearch

Once installed, go to the installation dir - C:\Users\user1\Documents\elasticsearch-9.3.0-windows-x86_64\elasticsearch-9.3.0\bin

and run .\elasticsearch.bat
Once it is running, check your console for the generated password; the default username is elastic.
Then head to http://127.0.0.1:9200 (Elasticsearch's HTTP API port) and it will prompt you for a username and password.
Provide the username as "elastic" and the password that was generated.

Once in you should see an output like such:



Great. Now, Elasticsearch out of the box does not come with all the visuals we need, so we need to install Kibana. Kibana is a powerful, open-source data visualization and exploration tool designed by Elastic specifically for Elasticsearch.

Kibana

To download it, head to https://www.elastic.co/downloads/kibana.

Create a Kibana service account token -

Kibana 9.x cannot use the elastic superuser internally.
So we need to create a service token:

cd C:\Elastic\elasticsearch-9.3.0\bin
.\elasticsearch-service-tokens.bat create elastic/kibana kibana

Copy the long token string that is generated.


Then edit the "kibana.yml" file at C:\Elastic\kibana-9.3.1\config\kibana.yml, remove the "#" from the following two lines and paste in the token we just generated:

elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.serviceAccountToken: "PASTE_YOUR_TOKEN_HERE"


Then start Kibana:

cd C:\Elastic\kibana-9.3.1\bin
.\kibana.bat

Once it is running, it will serve a web app - "Server running at http://localhost:5601".
Navigate to the web app and it will ask for a username and password; use the same "elastic" credentials we used to access localhost:9200.



Now that we have that set up, let's start to connect the components together.

Integrating Zeek Logs into Elasticsearch and Kibana Using Filebeat (Windows)

Zeek produces rich network telemetry, but the logs become far more useful once they’re parsed, indexed, and visualized in Elasticsearch and Kibana. Filebeat provides a built‑in Zeek module that handles parsing, field mapping, and dashboard creation automatically. This guide walks through a minimal, reliable setup for sending Zeek logs from a Windows system into Elasticsearch.

Install Filebeat on Windows
- Download the Filebeat Windows ZIP from Elastic. If you go to "http://localhost:5601/app/home#/tutorial/zeekLogs" it gives you the details on how to download and install it.

Once downloaded and installed, under C:\Program Files\Elastic\Beats\9.3.1\filebeat, change "filebeat.example.yml" to "filebeat.yml".

Then open "filebeat.yml" and configure the following to connect to elasticsearch and Kibana:


output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "YOUR_ELASTIC_PASSWORD"

setup.kibana:
  host: "http://localhost:5601"

You will need to remove the "#" from those lines, as it comments them out.

Don't worry about the SSL settings; we are not setting that up, since everything is local it will be OK. If it were cloud-based, then we would set up SSL.

Then enable the zeek module:

cd "C:\Program Files\Filebeat"
.\filebeat.exe modules enable zeek

This will create the config file:

C:\Program Files\Filebeat\modules.d\zeek.yml

Edit that file.

Enable the filesets you want Filebeat to ingest. A simple, effective configuration is to enable all major Zeek log types and point them to your log directory:

- module: zeek
  connection:
    enabled: true
  dns:
    enabled: true
  http:
    enabled: true
  ssl:
    enabled: true
  weird:
    enabled: true

  var.paths: ["C:/Users/user1/Documents/wireshark/local_pcap/*.log"]

You can also enable everything, which is fine.

Now load Zeek dashboards and ingest pipelines
Still in PowerShell:

cd "C:\Program Files\Filebeat"
.\filebeat.exe setup

This loads:

  • Zeek ingest pipelines
  • index templates
  • prebuilt Zeek dashboards in Kibana

If this step completes without errors, Filebeat is ready to run.

Now start the filebeat service within powershell:

Start-Service filebeat

and then check it is running:

> Get-Service filebeat

Status   Name               DisplayName
------   ----               -----------
Running  filebeat           Elastic Filebeat 9.3.1

Then head over to Kibana's dashboard and we should see our activity:



What you get from this setup

- Automatic parsing of all major Zeek log types

- ECS‑normalized fields for search and correlation

- Prebuilt dashboards for TLS, DNS, HTTP, and connection analysis

- Continuous ingestion via a Windows service

- Full visibility in Kibana’s Discover, Dashboard, and Lens tools

This creates a solid foundation for network‑level detection, threat hunting, and correlation with other data sources like osquery or Sigma‑based detections.

Next is to link OSQuery to Elasticsearch.

Guide to Setting Up osquery + Filebeat + Elasticsearch

Our structure will be:

osqueryd (local daemon)

   → writes JSON logs

Filebeat (osquery module)

   → parses + enriches

Elasticsearch

   → stores + indexes

Kibana

   → visualizes + searches

We already have osquery installed, so if you run "Get-Service osqueryd" it should be in a running state; if not, simply start it by running "Start-Service osqueryd" from PowerShell.



Now we need to ensure that our osquery configuration file is set up to run scheduled queries. The scheduled queries will run, save their results to the logs, and then Filebeat will ship them to Elasticsearch.

Edit C:\Program Files\osquery\osquery.conf and have it look like this:


{
  "options": {
    "logger_plugin": "filesystem"
  },
  "schedule": {
    "system_info": {
      "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
      "interval": 30
    }
  }
}


This tells osquery to:
- run the system_info query every 30 seconds

- write results to the filesystem. osquery reads this conf file by default, as described in https://osquery.readthedocs.io/en/stable/deployment/configuration/#schedule.

Then restart osquery - "Restart-Service osqueryd".

Give it about 30 seconds and you should see an entry at - C:\Program Files\osquery\log\osqueryd.results.log:



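For reference, a differential result line in osqueryd.results.log looks roughly like this (the hostname and values here are invented):

```json
{"name":"system_info","hostIdentifier":"DESKTOP-EXAMPLE","calendarTime":"Wed Feb 26 14:00:00 2026 UTC","unixTime":1772114400,"epoch":0,"counter":0,"columns":{"hostname":"DESKTOP-EXAMPLE","cpu_brand":"ExampleCPU","physical_memory":"17179869184"},"action":"added"}
```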

Enable the Osquery module


The osquery module automatically parses the JSON results and loads dashboards. Run the following from PowerShell:

filebeat modules enable osquery

 Configure the module
Edit: C:\Program Files\Filebeat\modules.d\osquery.yml

- module: osquery
  result:
    enabled: true
    var.paths:
      - "C:/Program Files/osquery/log/osqueryd.results.log"

Test it by running:

filebeat test config
filebeat test output

Load ingest pipelines and dashboards:

filebeat setup

This step creates:

- the osquery ingest pipeline

- ECS mappings

- Kibana dashboards

Then start filebeat:

Start-Service filebeat

 Verify Data in Elasticsearch:

Open Kibana / Elastic UI → Discover
Select index pattern: filebeat-*

Filter by:

event.module : "osquery"

You should now see your scheduled query results flowing in.

Note about the osquery schedule:

If you want results written every interval (rather than only when something changes), you need snapshot mode.

Osquery has two logging modes:
1. Differential mode (default)
- Only writes when the result changes
- Great for detecting state changes
- Quiet when nothing changes

2. Snapshot mode
- Writes the full result every time
- Great for heartbeats, testing, and time‑series data

Ok, great. Now let's move on to using Sigma, our rule-based detection tool.

Sigma

Before moving on: I ran into a roadblock with the encryption settings in Kibana, and because of it you won't be able to access the security features within Elasticsearch/Kibana. To fix that, edit your kibana.yml file and add this line:

xpack.encryptedSavedObjects.encryptionKey: "KEY" 

You can add it anywhere. Then in PowerShell run:

[guid]::NewGuid().ToString("N").Substring(0,32)

Take the 32-character output, replace "KEY" in the xpack line you added to the yml, and then restart Kibana.

 Creating a DNS Alert in Elastic SIEM

The goal was to alert whenever a user resolved a YouTube domain. This is just a test, but in future you can add malicious domains to see if someone or some process is trying to resolve them.

Go to Security > Rules in Kibana and add a new rule that follows this pattern:

Create a custom detection rule using the KQL query:

"dns.question.name: *youtube.com*"

Alternate fields that also work:

zeek.dns.query: *youtube*

dns.question.registered_domain: youtube.com


As for the other settings, that is up to you, but I wanted it to run every 3 minutes.

Then go to "youtube.com" and, after your specified run interval, we should see an alert:


Conclusion 


Right, so our SOC is at a good place: we have network logs and system logs connected, Wireshark listening to our network, Zeek parsing the traffic, and Filebeat connected to our SIEM passing the logs on for us to review, set up alerts and so on.

The system logs require a bit more work because we are not yet constantly monitoring the system, but that's OK for now. We can build queries to capture malicious activity based on known malware/threat-actor practices, set up alerts in our SIEM and so on.

Moving forward, I'll want to start seeing what's happening on my system, set up more alerts that follow the MITRE ATT&CK framework and so on. There is a lot we can do here, so I am excited to see where I go with this.
