
Threat Hunting with Open Source Software

I’ve begun working on a new project with a spiffy/catchy/snazzy name:
Threat Hunting: With Open Source Software, Suricata and Bro

I’ve planned out multiple chapters, moving from raw PCAP analysis, through session reassembly, and into full-on network monitoring and hunting with Suricata and Elasticsearch.

This project will take a long time. While I work through it, I’ll be posting here regularly. I very much welcome feedback.

Here’s a little introduction video; more will follow as I record them.

The next video will look at how data is transmitted over a network… anyone ready for a super-brief OSI network model overview?

Elasticsearch Maintenance with Jenkins


Maintaining production systems is one of those unfortunate tasks we all have to deal with… I mean, why can’t they just run themselves? I get tired of daily chores extremely quickly, so now that I have a few Elasticsearch clusters to look after, I had to come up with a way to keep them singing.

As a developer, I usually don’t have to deal with this kind of thing, but in the startup world I get to do it all: maintenance, monitoring, development, etc.

Jenkins makes this kind of stuff super easy. With a slew of Python programs that use parameters/environment variables to connect to the right Elasticsearch cluster (see the connection sketch after the list), I’m able to perform the following tasks, in order (order is key):

1. Create Snapshot
2. Monitor Snapshot until it’s done
3. Delete Old Data (especially interesting in our use case: we have a lot of intentional False Positive data for connectivity testing)
4. Force Merge Indices
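
Each task is its own small Python program, and they all share the same connection boilerplate, roughly like this (a simplified sketch; the ES_HOST/ES_PORT variable names are just my illustration of the parameters Jenkins passes in):

import os

from elasticsearch import Elasticsearch

def get_client():
    # Jenkins injects these per job, so one script can target any cluster.
    host = os.environ.get("ES_HOST", "localhost")
    port = int(os.environ.get("ES_PORT", "9200"))
    return Elasticsearch([{"host": host, "port": port}])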

I have Jenkins set up to trigger each downstream job after the prior one completes.

I could do a cool Jenkins Pipeline… in my spare time.

Snapshots:

Daily snapshots are critical in case of cluster failure. With a four-node cluster I’m running a fairly safe setup, but if something goes catastrophically wrong, I can always restore from a snapshot. My snapshots go to AWS S3 buckets.
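
The create/monitor pair boils down to something like this (a sketch; “my_s3_repo” is a stand-in for the real repository name, which was registered against the S3 bucket ahead of time):

import time
from datetime import date

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])
repo = "my_s3_repo"  # placeholder for the S3-backed snapshot repository
name = "daily-%s" % date.today().isoformat()

# Job 1: kick off the snapshot without tying up the Jenkins executor.
es.snapshot.create(repository=repo, snapshot=name, wait_for_completion=False)

# Job 2: poll until the snapshot leaves the IN_PROGRESS state.
while True:
    state = es.snapshot.get(repository=repo, snapshot=name)["snapshots"][0]["state"]
    if state != "IN_PROGRESS":
        print("snapshot %s finished: %s" % (name, state))
        break
    time.sleep(30)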

Delete Old Data:

When dealing with network monitoring, network sensors, and storing NSM data (see Suricata NSM Fields), we determined that one easy way to test end-to-end integration is to insert some obviously fake False Positives into the system. We stood up a Threat Intelligence Platform (Soltra Edge) to serve fake Indicators/Observables: Google.com, Yahoo.com, etc. These show up in everyone’s network as long as there is user traffic. That’s great for confirming connectivity, but long term it adds up to LOTS of traffic that I really don’t need to store… so it gets deleted.
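
With daily indices, deleting old data mostly means dropping whole indices past the retention window. A minimal sketch, assuming a suricata-YYYY.MM.DD naming scheme and a 30-day window (both illustrative, not necessarily what we run):

from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])
cutoff = datetime.utcnow() - timedelta(days=30)

for index in es.indices.get(index="suricata-*"):
    # Names look like suricata-2016.02.02; parse the date suffix.
    day = datetime.strptime(index.split("-", 1)[1], "%Y.%m.%d")
    if day < cutoff:
        print("deleting %s" % index)
        es.indices.delete(index=index)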

Force Merge Indices:

There is a lot of magic that happens in Elasticsearch. That’s fantastic. Force merging lets ES shrink the number of segments in a shard, thereby increasing query performance. It’s really only useful for indices that are no longer receiving data; in our use case, that’s historical data. I delete the old data first, then force merge what remains.
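
The force merge job itself is nearly a one-liner (a sketch, with the same illustrative suricata-* naming as above):

from datetime import date

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])
today = "suricata-%s" % date.today().strftime("%Y.%m.%d")

for index in es.indices.get(index="suricata-*"):
    if index == today:
        continue  # skip the index still receiving writes
    # One segment per shard is the biggest read-speed win for
    # indices that are done receiving data.
    es.indices.forcemerge(index=index, max_num_segments=1)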

A day in the life… of Jenkins.

[Screenshots of the Jenkins jobs]

Installing Filebeat to ship data to Elasticsearch

This is going to assume you already have Elasticsearch and Logstash installed; I’m specifically going to cover installing Filebeat. These instructions borrow heavily from here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html

Step one, install:
$ curl -L -O https://download.elastic.co/beats/filebeat/filebeat_1.1.0_amd64.deb
$ sudo dpkg -i filebeat_1.1.0_amd64.deb

I’m already going to diverge from the official documentation. In this use case, I want to install filebeat on the same server as logstash. What what what?!

Obviously not a real-world scenario, but I’m taking baby steps to simulate one. I currently have my eve.json file going into Elasticsearch from Logstash. Now I want the same results, just using Filebeat to ship the file, so that in theory I can run the shipper on a remote box.
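
For context, eve.json is newline-delimited JSON, one event per line, which is why the json codec shows up in the configs below. A trimmed, illustrative alert record (not pulled from my system) looks roughly like:

{"timestamp": "2016-02-02T20:55:10.123456", "event_type": "alert", "src_ip": "10.0.0.5", "dest_ip": "93.184.216.34", "alert": {"signature": "Fake connectivity-test indicator", "severity": 2}}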

Let’s kill Logstash:
$ sudo service logstash stop

Here’s my starting logstash file (well, the input part at least)
$ sudo vi /etc/logstash/conf.d/logstash.conf


input {
  file {
    path => ["/var/log/eve.json"]
    sincedb_path => ["/var/lib/somepath/eve_since.db"]
    codec => json
    type => "MyInputEVE"
  }
}

There are a lot of settings in Filebeat! This looks to be the minimal setup to just ship each line to Logstash. In the real world, I’d use TLS, etc.
$ sudo vi /etc/filebeat/filebeat.yml

############################# Filebeat ######################################
filebeat:
  prospectors:
    -
      paths:
        - /var/log/eve.json
      input_type: log

  registry_file: /var/lib/filebeat/registry

############################# Output ##########################################
output:
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    worker: 1

    # Set gzip compression level.
    compression_level: 3

Now, it’s time to get logstash to read it!

$ sudo vi /etc/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
    codec => json
    type => "SuricataIDPS"
  }
}

output {
  elasticsearch {
  }
}

Restart logstash
$ cd /var/log/logstash
$ sudo rm /var/log/logstash/*
$ sudo service logstash start
logstash started.
$ sudo netstat -tulpn | grep 5044
tcp6 0 0 :::5044 :::* LISTEN 30511/java

Seems to be listening!

Start filebeat
$ sudo service filebeat start
$ ps -eaf | grep filebeat
root 30728 1 0 20:51 pts/0 00:00:00 /usr/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
root 30729 30728 0 20:51 pts/0 00:00:00 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
cf 30739 29753 0 20:52 pts/0 00:00:00 grep --color=auto filebeat

Check Kibana… NO DATA!!!!
[Screenshot: Kibana with no records yet]

Then I start my data collector to begin filling the eve.json file!

$ ls -latr /var/log/eve.json
-rw-r--r-- 1 root root 11450 Feb 2 20:55 /var/log/eve.json

Yep… records are showing up!!
BAM!

[Screenshot: Kibana showing the new records]
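
If you’d rather not wait on Kibana, a quick count from Python confirms the same thing (a sketch; logstash-* is the default index pattern the empty elasticsearch{} output writes to, assuming it hasn’t been overridden):

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])

# Anything above zero means events are flowing end to end.
print(es.count(index="logstash-*")["count"])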

---- QUICK UPDATE ----

After the first pass at this, since my Logstash installation was a few months old, I hit an error in /var/log/logstash/logstash.out, something like this:
logstash The field '@timestamp' must be a (LogStash::Timestamp

A quick update to the plugin fixed it:
$ cd /opt/logstash
$ sudo ./bin/plugin update logstash-input-beats

That fixed it.
Thanks: https://discuss.elastic.co/t/issue-with-filebeat-logstash-beats-input-unhandled-exception/33934
