[HomeLab] RaspberryPi3 NSM Design

System Requirements

  • TP-LINK TL-SG105E 5-Port Smart Switch
  • RaspberryPi 3

Switch port 3 is used as the mirroring port and is connected to the Raspberry Pi's eth0.
There is no need to assign an IP to eth0.
The other interface, wlan0, will therefore serve as the management port.

$ sudo nano /etc/network/interfaces
iface eth0 inet static
static ip_address=0.0.0.0
$ sudo ifconfig eth0 down && sudo ifconfig eth0 up
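A quick way to confirm that mirrored traffic is actually reaching eth0 is to sniff the interface directly (tcpdump is an assumption here; install it via apt if it's missing):

$ sudo tcpdump -i eth0 -c 10

If packets between other hosts on the switch show up, the mirror port is doing its job.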

BRO IDS on RaspberryPi 3

BRO IDS Basic Installation and Configuration

Install OpenSSL v1.0 headers:
$ sudo apt install libssl1.0-dev
Verify the headers have been properly replaced. We should see libssl1.0-dev instead of libssl-dev:
$ apt list --installed | grep libssl
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libssl1.0-dev/stable,now 1.0.2l-2+deb9u3 armhf [installed]
libssl1.0.2/stable,now 1.0.2l-2+deb9u3 armhf [installed]
libssl1.1/now 1.1.0f-3+deb9u1 armhf [installed,upgradable to: 1.1.0f-3+deb9u2]
Install the other required dependencies (libssl-dev is deliberately omitted here, since it would replace the libssl1.0-dev headers installed above):
$ sudo apt-get install cmake make gcc g++ flex bison libpcap-dev python-dev swig zlib1g-dev
Download Bro using git clone or wget:

Method 1

$ sudo git clone --recursive git://git.bro.org/bro

Method 2

$ sudo wget https://www.bro.org/downloads/bro-2.5.3.tar.gz
$ sudo tar xvzf bro-2.5.3.tar.gz
Install BRO using make & make install:
$ cd bro
$ ./configure --prefix=/nsm/bro
$ sudo make
$ sudo make install
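As a quick sanity check that the build landed under the chosen prefix, ask the installed binary for its version (the output line is illustrative):

$ /nsm/bro/bin/bro --version
bro version 2.5.3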
Select which NIC and which subnet will be used for monitoring:
$ nano /nsm/bro/etc/node.cfg
$ nano /nsm/bro/etc/networks.cfg
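For a single-sensor box like this one, a minimal standalone configuration might look like the sketch below; the interface and subnet are assumptions for this lab, not values from the original configs:

# node.cfg
[bro]
type=standalone
host=localhost
interface=eth0

# networks.cfg
192.168.1.0/24    Home network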
Command to start BRO IDS:
$ /nsm/bro/bin/broctl
[BroControl] > install
[BroControl] > exit
Add the following line to /etc/rc.local for auto-start:
/nsm/bro/bin/broctl start
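The line has to appear before the final exit 0; a minimal sketch of the resulting /etc/rc.local:

#!/bin/sh -e
/nsm/bro/bin/broctl start
exit 0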
Optimization using Cron
#### Clean /nsm/bro/logs at 23:59 every day
$ sudo crontab -e
59 23 * * * rm -rf /nsm/bro/logs/20??-*-*

#### Restart cron service
$ sudo /etc/init.d/cron restart

#### Check the cron running status in Debian
$ sudo /etc/init.d/cron status
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2018-05-14 21:41:54 NZST; 5min ago
Docs: man:cron(8)
Main PID: 12368 (cron)
CGroup: /system.slice/cron.service
└─12368 /usr/sbin/cron -f
Start Bro and ensure that the files needed by BroControl & Bro are brought up to date based on the current configuration:
$ sudo broctl deploy

BRO IDS Email Notifications

Configure /etc/ssmtp/ssmtp.conf

Debug=YES
root=xxxx@gmail.com
mailhub=smtp.gmail.com:465
rewriteDomain=gmail.com
hostname=raspberrypi3
FromLineOverride=YES
UseTLS=Yes
UseSTARTTLS=No
AuthUser=username
AuthPass=password

Configure /etc/bro/broctl.cfg. This file controls the emails that Bro and BroControl send out, such as the system health reports.

# Recipient address for all emails sent out by Bro and BroControl.
MailTo = root@localhost

# This will auto-send a "Connection Summary" email every hour
LogRotationInterval = 3600

Test email sending function using mail.

$ mail -s "This is a Test Email" user@gmail.com < /dev/null
... ...
mail: Null message body; hope that's ok

[Reference: 5 Ways to Send Email From Linux Command Line - https://tecadmin.net/ways-to-send-email-from-linux-command-line/]

BRO IDS Command Lines

Use bro-cut to parse the logs

$ cat http.log | bro-cut -d ts uid id.orig_h id.orig_p id.resp_h id.resp_p method host uri user_agent status_code status_msg username password

Reference for HTTP.log: https://www.bro.org/sphinx/scripts/base/protocols/http/main.bro.html

Use the query field to check the top DNS queries

$ bro-cut query < dns.log | sort | uniq -c | sort -rn | head -n 10

Exercise - Understanding and Examining Bro Logs

https://www.bro.org/current/solutions/logs/index.html

Convert a pcap to Bro log format (below, four different *.log files are generated)

$ sudo bro -r http.pcap
total 3988
-rw-r--r-- 1 root root 13403 May 23 21:03 conn.log
-rw-r--r-- 1 root root 27396 May 23 21:03 files.log
-rw-r--r-- 1 root root 71154 May 23 21:03 http.log
-rw-r--r-- 1 root root 3956760 Nov 7 2013 http.pcap
-rw-r--r-- 1 root root 253 May 23 21:03 packet_filter.log

What are the 5 most commonly visited web sites?

$ bro-cut host < http.log | sort | uniq -c | sort -n | tail -n 5
$ zcat http.xxxx.log.gz | bro-cut host | sort | uniq -c | sort -n | tail -n 5

Top 10 GeoIP country codes (resp_cc) in conn.log

$ bro-cut resp_cc < conn.log | sort | uniq -c | sort -rn | head -n 10

Critical Stack on RaspberryPi 3

Add “Collector”, “Sensor” and “Feed” from

https://intel.criticalstack.com

Configure Critical Stack with Bro

$ sudo -u critical-stack critical-stack-intel config --set bro.path=/nsm/bro
$ sudo -u critical-stack critical-stack-intel config --set bro.include.path=/usr/share/bro/site/local.bro
$ sudo -u critical-stack critical-stack-intel config --set bro.broctl.path=/usr/bin/broctl
$ sudo -u critical-stack critical-stack-intel api xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx

Check Critical Stack IOCs

$ sudo -u critical-stack critical-stack-intel list

Fetch from the Intel Market using API key

$ sudo -u critical-stack critical-stack-intel pull

critical-stack 19:11:13 [INFO] Downloading feed information. Run with the `--debug` flag for more information.
7 / 7 [====] 100.00 % 9s
critical-stack 19:11:22 [INFO] Creating master file: master-public.bro.dat. Please wait.
critical-stack 19:11:22 [INFO] Master file created successfully.
critical-stack 19:11:22 [INFO] Checking bro configuration files.
critical-stack 19:11:22 [INFO] Intel include exists in: /usr/share/bro/site/local.bro
critical-stack 19:11:22 [WARN] --- RESTART NOTICE ---
critical-stack 19:11:22 [WARN] You need to restart bro for changes to take effect.
critical-stack 19:11:22 [INFO] * sudo broctl check
critical-stack 19:11:22 [INFO] * sudo broctl install
critical-stack 19:11:22 [INFO] * sudo broctl restart
critical-stack 19:11:22 [INFO] For automatic restarts run: `critical-stack-intel config --set bro.restart=true`
critical-stack 19:11:22 [INFO] Intel files located at: /opt/critical-stack/frameworks/intel
critical-stack 19:11:22 [INFO] API Requests Remaining: 999 of 1000/minute

Restart Bro for changes to take effect

$ broctl check
$ broctl install
$ broctl restart

Set automatic restarts

$ sudo -u critical-stack critical-stack-intel config --set bro.restart=true

Test if our threat intelligence (TI) is working properly

# IP 185.170.42.14 is on the ET known Compromised Host list
# We make an SSH attempt from a testing machine, then we can successfully find this event in intel.log

[Testing] $ ssh 185.170.42.14

[Bro Log] $ cd /nsm/bro/logs/current

[Bro Log] $ cat intel.log
1526984588.191862 CrOHZ311FHM0wYnonf 115.xxx.xxx.74 53092 185.170.42.14 22 185.170.42.14 Intel::ADDR Conn::IN_RESP bro Intel::ADDR from http://rules.emergingthreats.net/open/suricata/rules/compromised.rules via intel.criticalstack.com - - -

[Bro Log] $ cat intel.log | bro-cut -d ts id.orig_h id.orig_p id.resp_h id.resp_p proto query
2018-05-22T22:23:08+1200 115.xxx.xxx.74 53092 185.170.42.14 22

Suricata IDS on Ubuntu

Install Dependencies

$ sudo apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libjansson-dev pkg-config

Install libhtp via OISF

$ git clone https://github.com/OISF/libhtp.git
$ cd libhtp/
$ ./autogen.sh
$ ./configure
$ make && sudo make install

Download Suricata and extract it

$ wget https://www.openinfosecfoundation.org/download/suricata-4.0.4.tar.gz
$ tar -xvf suricata-4.0.4.tar.gz

Compile Suricata

By default, Suricata runs in IDS mode

$ cd suricata-4.0.4
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var

Alternatively, Suricata can run in IDS & IPS mode

$ sudo apt-get -y install libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0  
$ ./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var

Install Suricata with the default configuration

The commands below use the default configuration:

$ make && sudo make install
$ sudo make install-conf # Using Default Configuration
$ sudo make install-rules # Install Default Rules
$ sudo ldconfig # Refresh the shared library cache
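A quick check that the binary installed and reports the expected version (the output line below is illustrative of the -V format):

$ suricata -V
This is Suricata version 4.0.4 RELEASE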

If we don't want the default configuration, we can instead set things up manually:

$ sudo mkdir /var/log/suricata
$ sudo mkdir /etc/suricata

$ cd /etc/suricata
$ wget http://rules.emergingthreats.net/open/suricata/emerging.rules.tar.gz
$ tar zxvf emerging.rules.tar.gz
$ sudo cp -R rules/ /etc/suricata/

$ cd suricata-4.0.4
$ sudo cp suricata.yaml classification.config reference.config /etc/suricata/

Edit /etc/suricata/suricata.yaml

Key configuration fields

# Network Configuration
HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
EXTERNAL_NET: "!$HOME_NET"

# Log file path
default-log-dir: /var/log/suricata/

# Number of packets preallocated per thread. Default 1024
max-pending-packets: 1024

# Preallocated size for packet. Default 1514
default-packet-size: 1514

Suricata Security Rules

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Rules

Create our own rules for testing purposes

Edit /etc/suricata/suricata.yaml

default-rule-path: /etc/suricata/rules
rule-files:
- my.rules

Add the following example rules to my.rules:

alert icmp $HOME_NET any -> $EXTERNAL_NET any (msg:"TEST :ICMP PING"; itype:8; sid:20000; rev:3;)
alert tcp any any -> any 80 (msg:"http test"; sid:20001; rev:1;)
alert http any any -> any any (msg:"Filemagic jgp(1)"; flow:established; filemagic:"JPEG image data"; filestore; sid:10; rev:1;)

Suricata Running Mode

$ sudo suricata --list-runmodes

------------------------------------- Runmodes ------------------------------------------
| RunMode Type | Custom Mode | Description
|----------------------------------------------------------------------------------------
| PCAP_DEV | single | Single threaded pcap live mode
| ---------------------------------------------------------------------
| | autofp | Multi threaded pcap live mode. Packets from each flow are assigned to a single detect thread, unlike "pcap_live_auto" where packets from the same flow can be processed by any detect thread
| ---------------------------------------------------------------------
| | workers | Workers pcap live mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| PCAP_FILE | single | Single threaded pcap file mode
| ---------------------------------------------------------------------
| | autofp | Multi threaded pcap file mode. Packets from each flow are assigned to a single detect thread, unlike "pcap-file-auto" where packets from the same flow can be processed by any detect thread
|----------------------------------------------------------------------------------------
| PFRING(DISABLED) | autofp | Multi threaded pfring mode. Packets from each flow are assigned to a single detect thread, unlike "pfring_auto" where packets from the same flow can be processed by any detect thread
| ---------------------------------------------------------------------
| | single | Single threaded pfring mode
| ---------------------------------------------------------------------
| | workers | Workers pfring mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| NFQ | autofp | Multi threaded NFQ IPS mode with respect to flow
| ---------------------------------------------------------------------
| | workers | Multi queue NFQ IPS mode with one thread per queue
|----------------------------------------------------------------------------------------
| NFLOG | autofp | Multi threaded nflog mode
| ---------------------------------------------------------------------
| | single | Single threaded nflog mode
| ---------------------------------------------------------------------
| | workers | Workers nflog mode
|----------------------------------------------------------------------------------------
| IPFW | autofp | Multi threaded IPFW IPS mode with respect to flow
| ---------------------------------------------------------------------
| | workers | Multi queue IPFW IPS mode with one thread per queue
|----------------------------------------------------------------------------------------
| ERF_FILE | single | Single threaded ERF file mode
| ---------------------------------------------------------------------
| | autofp | Multi threaded ERF file mode. Packets from each flow are assigned to a single detect thread
|----------------------------------------------------------------------------------------
| ERF_DAG | autofp | Multi threaded DAG mode. Packets from each flow are assigned to a single detect thread, unlike "dag_auto" where packets from the same flow can be processed by any detect thread
| ---------------------------------------------------------------------
| | single | Singled threaded DAG mode
| ---------------------------------------------------------------------
| | workers | Workers DAG mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| AF_PACKET_DEV | single | Single threaded af-packet mode
| ---------------------------------------------------------------------
| | workers | Workers af-packet mode, each thread does all tasks from acquisition to logging
| ---------------------------------------------------------------------
| | autofp | Multi socket AF_PACKET mode. Packets from each flow are assigned to a single detect thread.
|----------------------------------------------------------------------------------------
| NETMAP(DISABLED) | single | Single threaded netmap mode
| ---------------------------------------------------------------------
| | workers | Workers netmap mode, each thread does all tasks from acquisition to logging
| ---------------------------------------------------------------------
| | autofp | Multi threaded netmap mode. Packets from each flow are assigned to a single detect thread.
|----------------------------------------------------------------------------------------
| UNIX_SOCKET | single | Unix socket mode
| ---------------------------------------------------------------------
| | autofp | Unix socket mode

Disable NIC LRO/GRO

$ sudo ethtool -k ensxx # check whether LRO/GRO is enabled
$ sudo ethtool -K ensxx lro off # disable LRO
$ sudo ethtool -K ensxx gro off # disable GRO

Run Suricata

Events will be logged to fast.log under /var/log/suricata/.

$ sudo suricata -c /etc/suricata/suricata.yaml -i ens33

29/5/2018 -- 13:47:43 - <Notice> - This is Suricata version 4.0.4 RELEASE
29/5/2018 -- 13:47:52 - <Notice> - all 1 packet processing threads, 4 management threads initialized, engine started.
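As a smoke test of the custom ICMP rule defined earlier, generate a ping from a monitored host and watch fast.log. The alert line below illustrates the fast.log format under the assumption that the sid:20000 rule is loaded; it is not verbatim output:

$ ping -c 3 8.8.8.8
$ tail -f /var/log/suricata/fast.log
05/29/2018-13:50:01.123456  [**] [1:20000:3] TEST :ICMP PING [**] [Priority: 3] {ICMP} 10.0.0.200:8 -> 8.8.8.8:0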

Scirius CE (Portal) on Ubuntu

Scirius Community Edition is a web interface dedicated to Suricata ruleset management. It handles the rules files and updates the associated files.

Installation Steps

Install pip on Debian

$ sudo apt-get install aptitude
$ sudo aptitude install python-pip python-dev

If you have these packages installed, you need to remove them so that Scirius can work with the latest Python dependencies.

$ sudo apt-get remove django-tables python-django python-django-south python-git

Install Django and its dependencies

$ pip install django django-tables2 South GitPython pyinotify daemon
$ pip install -r requirements.txt
$ pip install pyinotify
$ pip install gitpython==0.3.1-beta2
$ pip install gitdb

We need a stable version of npm and webpack version 3.11

$ sudo apt-get install npm
$ sudo npm install -g npm@latest webpack@3.11
root@hostname:/opt/scirius# npm install

$ npm install --save-dev webpack

[Optional] If errors occur

$ sudo npm install eslint-loader --save-dev
$ sudo npm install eslint --save-dev
$ sudo npm install babel-eslint --save-dev

$ sudo npm install webpack
$ sudo npm install babel-loader
$ sudo npm install "babel-loader@^8.0.0-beta" @babel/core @babel/preset-env webpack
$ sudo npm i extract-text-webpack-plugin

$ sudo npm install style-loader css-loader --save
$ sudo npm install sass-loader -D
$ sudo npm install node-sass -D

Clone the latest version

$ cd /opt
$ sudo git clone https://github.com/StamusNetworks/scirius.git
$ cd scirius

Configure Scirius CE

Initialize the Django database from inside the source directory.

$ cd /opt/scirius
$ sudo python manage.py migrate

Authentication is enabled by default in Scirius, so you will need to create a superuser account:

$ sudo python manage.py createsuperuser

Before starting the application, you need to build the bundles by running webpack:

$ sudo webpack

Permanent way of making ES Replicas = 0

https://discuss.elastic.co/t/permanent-way-of-making-es-replicas-0/58206

$ curl -XPUT 'localhost:9200/_template/priority1' -d '
{
"template" : "*",
"settings" : {"number_of_replicas" : 0 }
} '
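To verify the template was accepted, read it back with the standard template endpoint; the response should echo the JSON above:

$ curl -XGET 'localhost:9200/_template/priority1?pretty'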

Run Scirius CE

One of the easiest ways to try Scirius CE is to run the Django test server. You can then connect to http://localhost:8000.

$ sudo python manage.py runserver

Django version 1.11.13, using settings 'scirius.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

If you need the application to listen on a reachable address, you can run something like:

$ sudo python manage.py runserver 192.168.1.220:8000

Also, don't forget to add the host IP that you actually want to access to ALLOWED_HOSTS in scirius/scirius/settings.py:

ALLOWED_HOSTS = ['192.168.1.220']

Initialize Scirius CE

To interact with Scirius CE, you need to detect when the /etc/suricata/rules/scirius.reload file is created, initiate a reload or restart of Suricata when it is, and delete the reload file once that is done.

Run the command below in one terminal [scirius]:

$ cd /opt/scirius/suricata/scripts
$ sudo ./suri_reloader -p /etc/suricata/rules/ -l /var/log/suri-reload.log -r

Start Suricata in another terminal [suricata]:

$ sudo suricata -c /etc/suricata/suricata.yaml -i ens33

Create a new source on scirius web portal

Source -> Emerging Threats Open Ruleset


Create a new Ruleset on scirius web portal

Rulesets -> myScirius 
[Using "Emerging Threats Open Ruleset" as a source]


Edit Suricata on scirius web portal

Name: suricata
Descr: suricata
Rules directory: /etc/suricata/rules/
Suricata configuration file: /etc/suricata/suricata.yaml
Ruleset: myScirius

Tick Suricata Ruleset Actions on scirius web portal

Actions: Update | Build | Push

Then, IDS rules will be auto-generated under /etc/suricata/rules:

xxx@hostname:/etc/suricata/rules$ ls -l
total 13800
-rw-rw-r-- 1 root root 1673 May 30 21:53 BSD-License.txt
-rw-rw-r-- 1 root root 2638 May 30 21:53 classification.config
-rw-rw-r-- 1 root root 9013 May 30 21:53 compromised-ips.txt
-rw-rw-r-- 1 root root 3349 May 30 21:53 emerging.conf
-rw-rw-r-- 1 root root 18273 May 30 21:53 gen-msg.map
-rw-rw-r-- 1 root root 18092 May 30 21:53 gpl-2.0.txt
-rw-rw-r-- 1 root root 2243 May 30 21:53 LICENSE
-rw-rw-r-- 1 root root 1377 May 30 21:53 reference.config
-rw-r--r-- 1 root root 32 May 30 22:06 scirius.reload
-rw-r--r-- 1 root root 10307949 May 30 22:06 scirius.rules
-rw-rw-r-- 1 root root 3570798 May 30 21:53 sid-msg.map
-rw-rw-r-- 1 root root 32438 May 30 21:53 suricata-1.2-prior-open.yaml
-rw-rw-r-- 1 root root 37450 May 30 21:53 suricata-1.3-etpro-etnamed.yaml
-rw-rw-r-- 1 root root 37589 May 30 21:53 suricata-1.3-open.yaml
-rw-rw-r-- 1 root root 0 May 30 21:53 suricata-4.0-enhanced-open.txt
-rw-r--r-- 1 root root 0 May 30 22:06 threshold.config
-rw-rw-r-- 1 root root 53841 May 30 21:53 unicode.map

Install Elasticsearch

$ sudo apt-get install openjdk-8-jre
$ curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.deb
$ sudo dpkg -i elasticsearch-6.2.4.deb
$ sudo /etc/init.d/elasticsearch start

Install Logstash

An important advantage of this approach is that you can use Logstash to modify the data captured by Beats in any way you like. You can also use Logstash's many output plugins to integrate with other systems.

$ sudo apt-get install openjdk-8-jre
$ curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.deb
$ sudo dpkg -i logstash-6.2.4.deb

Logstash setup

https://www.elastic.co/guide/en/beats/libbeat/current/logstash-installation.html

Run Logstash

$ sudo /etc/init.d/logstash start

To set up the Elasticsearch connection, you can edit settings.py or create a local_settings.py file under the scirius directory.

Elasticsearch is activated if a variable named USE_ELASTICSEARCH is set to True in /opt/scirius/scirius/settings.py.

The address of the Elasticsearch server is stored in the ELASTICSEARCH_ADDRESS variable and uses the format IP:Port.

$ sudo nano /opt/scirius/scirius/settings.py
USE_ELASTICSEARCH = True
ELASTICSEARCH_ADDRESS = "192.168.1.210:9200"
ELASTICSEARCH_VERSION = 2 # Set to 1, 2 or 5 depending on the ES major version


SELKS (Suricata + Scirius + Kibana + Logstash + Evebox)

SELKS 4.0 Stamus Networks

https://www.stamus-networks.com/2017/08/22/selks-4-0/

The SELKS Wiki - User Documentation

https://github.com/StamusNetworks/SELKS/wiki

Moloch (with Elasticsearch) on Ubuntu

Modify Hostname

$ sudo nano /etc/hostname
$ sudo nano /etc/hosts
$ sudo reboot

Install dependencies including JRE/JDK

$ sudo apt-get install libjson-perl
$ sudo apt-get install default-jre

Modify max file descriptors

$ sudo nano /etc/security/limits.conf
* - nofile 128000
* - memlock unlimited
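The new limits only apply to fresh login sessions; after logging in again, a quick check should return the nofile value set above:

$ ulimit -n
128000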

[Optional] Disable paging and swapping to increase efficiency

Method 1

$ sudo swapoff -a # re-enable later with: sudo swapon -a

Method 2

$ sudo nano /etc/fstab
# UUID=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx none swap sw 0 0

Download Moloch v0.17 for Ubuntu 16.04

$ wget http://files.molo.ch/builds/ubuntu-16.04/moloch_0.17.0-1_amd64.deb

Install Moloch

$ sudo dpkg -i moloch_0.17.0-1_amd64.deb

Install Dependencies (if the previous step halts due to errors)

$ sudo apt-get install -f

Configure Moloch

$ sudo /data/moloch/bin/Configure

Found interfaces: ens33,lo
Interface to monitor [eth1] ens33
Install Elasticsearch server locally for demo, must have at least 3G of memory, NOT recommended for production use (yes or no) [no] yes
... ...
... ...
Moloch - Configured - Now continue with step 4 in /data/moloch/README.txt

Configure elasticsearch.yml in /data/moloch/elasticsearch/config

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: moloch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node1
#
# Add custom attributes to the node:
# node.rack: r1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
# Path to log files:
#path.logs: /path/to/logs
#path.logs: /data/moloch/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1

#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

Run Elasticsearch

$ cd /data/moloch/elasticsearch/bin
$ ./elasticsearch
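Before initializing the Moloch database, it's worth confirming Elasticsearch answers on port 9200 (standard ES root endpoint; the JSON response should echo the cluster name configured above):

$ curl http://localhost:9200/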

Initialise Elasticsearch Database on port 9200 and add a new user

$ /data/moloch/db/db.pl http://localhost:9200 init
$ /data/moloch/bin/moloch_add_user.sh admin admin PASSWORDGOESHERE --admin

Install Moloch capture on Ubuntu in order to capture data

$ sudo apt-get install wget curl libpcre3-dev uuid-dev libmagic-dev pkg-config g++ flex bison zlib1g-dev libffi-dev gettext libgeoip-dev make libjson-perl libbz2-dev libwww-perl libpng-dev xz-utils

# This bash file can be downloaded from Git
$ sudo ./easybutton-build.sh

# Specify the thirdparty software location (/data/moloch/thirdparty)
$ sudo ./Configure --prefix=/data/moloch --with-libpcap=/data/moloch/thirdparty/libpcap-1.8.1 --with-yara=thirdparty/yara/yara-3.7.1 --with-maxminddb=thirdparty/libmaxminddb-1.3.2 --with-glib2=thirdparty/glib-2.54.3 --with-curl=thirdparty/curl-7.59.0 --with-lua=thirdparty/lua-5.3.4

Start molochcapture.service and molochviewer.service

$ systemctl start molochcapture.service molochviewer.service

Access Moloch

http://#.#.#.#:8005

[Optional] Auto-restart Moloch and Elasticsearch at system boot

$ sudo systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /etc/systemd/system/elasticsearch.service.

$ sudo systemctl enable molochcapture.service
Created symlink from /etc/systemd/system/multi-user.target.wants/molochcapture.service to /etc/systemd/system/molochcapture.service.

$ sudo systemctl enable molochviewer.service
Created symlink from /etc/systemd/system/multi-user.target.wants/molochviewer.service to /etc/systemd/system/molochviewer.service.

[Optional] Delete all Elasticsearch indexes

Note: You need to initialise the Moloch database again after running this command

$ systemctl stop molochcapture.service molochviewer.service
$ curl -X DELETE 'http://localhost:9200/_all'
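Since this wipes the Moloch indexes as well, re-run the same initialization used during installation before bringing the services back:

$ /data/moloch/db/db.pl http://localhost:9200 init
$ systemctl start molochcapture.service molochviewer.service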

[Optional] Moloch Raw Data Storage Location

root@ubuntu:/data/moloch/raw# ls -l
total 2816
-rw-r----- 1 nobody daemon 262144 Jun 5 19:24 ubuntu-180605-00000007.pcap
-rw-r----- 1 nobody daemon 0 Jun 5 19:22 ubuntu-180605-00000008.pcap
-rw-r----- 1 nobody daemon 1048576 Jun 5 19:47 ubuntu-180605-00000010.pcap
-rw-r----- 1 nobody daemon 1572864 Jun 5 19:50 ubuntu-180605-00000011.pcap

[Optional] Moloch optimization

# Set ring buffer size; see the max with ethtool -g eth0
$ sudo ethtool -G eth0 rx 4096 tx 4096

# Turn off features; see available features with ethtool -k eth0
$ sudo ethtool -K eth0 rx off tx off sg off tso off gso off

[Optional] Moloch upgrade

$ wget https://files.molo.ch/builds/ubuntu-16.04/moloch_0.18.2-1_amd64.deb

$ sudo dpkg -i moloch_0.18.2-1_amd64.deb
(Reading database ... 345537 files and directories currently installed.)
Preparing to unpack moloch_0.18.2-1_amd64.deb ...
Unpacking moloch (0.18.2-1) over (0.17.0-1) ...
Setting up moloch (0.18.2-1) ...
READ /data/moloch/README.txt and RUN /data/moloch/bin/Configure

$ systemctl stop molochcapture.service
$ systemctl stop molochviewer.service

$ /data/moloch/db/db.pl http://localhost:9200 upgrade

$ systemctl start molochcapture.service
$ systemctl start molochviewer.service

[Optional] Configure Rsyslog Receiver on Moloch Instance

$ sudo nano /etc/rsyslog.conf

# To enable these modules and servers, uncomment the lines so the file now contains:

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

Reference

Ubuntu Server v16.04+ + Moloch v0.17
http://moloch.3ilson.com/2017/01/moloch-installubuntu-16moloch-v017.html

Moloch: Network Traffic Retrospective Analysis System (in Chinese)
https://paper.seebug.org/427/

Splunk (Receiver) on Ubuntu

Install Splunk on Linux Machine

$ sudo dpkg -i splunk-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb

Configure and run Splunk

$ sudo cp /opt/splunk/etc/system/default/web.conf /opt/splunk/etc/system/local/
$ sudo /opt/splunk/bin/splunk start

Remove Indexed Splunk Eventdata

$ sudo /opt/splunk/bin/splunk help clean
$ sudo /opt/splunk/bin/splunk clean eventdata

Access Splunk Web Interface

http://localhost:8000

Splunk Universal Forwarder (Client) on RaspberryPi3

Download Splunk Universal Forwarder

https://www.splunk.com/en_us/download/universal-forwarder.html

Configure a client name by setting the clientName attribute:

$ sudo nano  ~/splunkforwarder/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
targetUri = 192.168.1.210:8089
clientName = pi3

Open TCP Port 9997 in Splunk Receiver (No Need to ‘Add Splunk Forwarder’)

Splunk -> Receive Data -> Forwarding and Receiving -> New Receiving Port

Configure and Start Splunk Universal Forwarder

## Splunk Forwarder Auto-Restart

$ sudo splunkforwarder/bin/splunk enable boot-start

### This is for the local mgmt port 8089
$ sudo splunkforwarder/bin/splunk set deploy-poll 192.168.1.210:8089

## Specify which system log folder to be monitored.
## For example /var/log

$ sudo splunkforwarder/bin/splunk add monitor /var/spool/bro/bro/
$ sudo splunkforwarder/bin/splunk add forward-server x.x.x.x:9997

## Check inputs.conf
[default]
host = raspberrypi3
index = soc
source = /var/spool/bro/bro/

## Check props.conf
[source::/var/log/suricata/fast.log]
sourcetype = suricata


## Check deploymentserver.conf
[target-broker:deploymentServer]
targetUri = 192.168.1.210:8089

## Start Splunk Forwarder
$ sudo splunkforwarder/bin/splunk start

Splunk> 4TW

Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /home/pi/splunkforwarder/var/lib/splunk
Creating: /home/pi/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /home/pi/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /home/pi/splunkforwarder/var/run/splunk/upload
Creating: /home/pi/splunkforwarder/var/spool/splunk
Creating: /home/pi/splunkforwarder/var/spool/dirmoncache
Creating: /home/pi/splunkforwarder/var/lib/splunk/authDb
Creating: /home/pi/splunkforwarder/var/lib/splunk/hashDb
New certs have been generated in '/home/pi/splunkforwarder/etc/auth'.
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/home/pi/splunkforwarder/splunkforwarder-7.1.1-8f0ead9ec3db-Linux-arm-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done

Access the Whole System from the Internet

Raspberry Pi3

$ ssh px@xx.xxx.com -p 22000

SELKS

$ ssh pxxx@xx.xxx.com -p 22020

Enable remote web access

$ ssh -L 4443:localhost:443 pippo@xx.xxx.com -p 22020

Restart services

$ systemctl restart elasticsearch logstash kibana

Mini-Ubuntu (VM Machine acting as Internal Client)

  • IP: 10.0.0.200
  • Network: 10.0.0.0/24
  • Gateway: 10.0.0.1
$ ssh pxxx@xx.xxx.com -p 22030

Prerequisites on other machines

1) Enable the DNAT on Home Router
-- Allow incoming traffic towards port 22030

2) Enable the DNAT on 192.168.1.220:22030
pxxx@selks:~$ sudo iptables -t nat -A PREROUTING -d 192.168.1.220 -p tcp --dport 22030 -j DNAT --to 10.0.0.200:22

3) Enable the DNAT on 192.168.1.220:5901
pxxx@selks:~$ sudo iptables -t nat -A PREROUTING -d 192.168.1.220 -p tcp --dport 5901 -j DNAT --to 10.0.0.200:5901
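One assumption worth stating about these DNAT rules (not from the original notes): for the forwarded packets to flow between interfaces, IP forwarding must be enabled on the SELKS box:

$ sudo sysctl -w net.ipv4.ip_forward=1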

[Optional] VNC Tunnel for encryption purposes

$ ssh -L 5901:127.0.0.1:5901 -N -f -l username server_ip_address
