De-Coder’s Ring

Consumable Security and Technology

Installing Filebeat to ship data to Elasticsearch

This post assumes you already have Elasticsearch and Logstash installed; I'm specifically going to cover installing Filebeat. These instructions borrow heavily from here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html

Step one, install:
curl -L -O https://download.elastic.co/beats/filebeat/filebeat_1.1.0_amd64.deb
sudo dpkg -i filebeat_1.1.0_amd64.deb
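Before moving on, it's worth confirming the package actually landed. `dpkg -s filebeat` is the authoritative check; `parse_deb_version` below is just a hypothetical convenience wrapper of mine for scripting that check:

```shell
# Pull the Version: field out of `dpkg -s` output on stdin.
# (parse_deb_version is my own helper, not part of dpkg or filebeat.)
parse_deb_version() {
  grep '^Version:' | awk '{ print $2 }'
}

# On the real box:
#   dpkg -s filebeat | parse_deb_version    # should print 1.1.0
```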

I’m already going to diverge from the official documentation. In this use case, I want to install filebeat on the same server as logstash. What what what?!

Obviously not a real-world scenario, but I'm taking baby steps to simulate one. I currently have my eve.json file going into Elasticsearch via Logstash. Now I want the same results, just using Filebeat to ship the file, so that in theory I could run it from a remote host.

Let’s kill logstash
sudo service logstash stop

Here’s my starting logstash file (well, the input part at least)
$ sudo vi /etc/logstash/conf.d/logstash.conf


input {
  file {
    path => ["/var/log/eve.json"]
    sincedb_path => "/var/lib/somepath/eve_since.db"
    codec => json
    type => "MyInputEVE"
  }
}
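Before editing this file further, a cheap sanity check helps. Logstash 2.x has the authoritative one built in (`sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf`); the rough helper below is a sketch of my own that just confirms the braces balance, and it will miscount braces inside quoted strings:

```shell
# Crude check that { and } pair up in a logstash config file.
# (braces_balanced is a hypothetical helper; --configtest is the real check.)
braces_balanced() {
  awk '{ for (i = 1; i <= length($0); i++) {
           c = substr($0, i, 1)
           if (c == "{") n++
           if (c == "}") n--
         } }
       END { exit (n != 0) }' "$1"
}
```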

There are a lot of settings in filebeat! This looks like a basic, simple configuration that just ships each line to logstash. In the real world, I'd use TLS, etc.
$ sudo vi /etc/filebeat/filebeat.yml

############################# Filebeat ######################################
filebeat:
  prospectors:
    -
      paths:
        - /var/log/suricata/eve.json
      input_type: log

  registry_file: /var/lib/filebeat/registry

############################# Output ##########################################
output:
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    worker: 1

    # Set gzip compression level.
    compression_level: 3
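One easy mistake with this file is leaving the `logstash:` output stanza commented out, which silently orphans the `hosts` setting indented under it. A quick scripted sanity check (`has_active_logstash_output` is a hypothetical helper, not a filebeat tool):

```shell
# Succeed only if the file contains an uncommented "logstash:" line.
has_active_logstash_output() {
  grep -Eq '^[[:space:]]*logstash:' "$1"
}

# On the real box:
#   has_active_logstash_output /etc/filebeat/filebeat.yml && echo "output enabled"
```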

Now, it’s time to get logstash to read it!

$ sudo vi /etc/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
    codec => json
    type => "SuricataIDPS"
  }
}

output {
  elasticsearch {
  }
}
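That empty elasticsearch block works because the logstash 2.x elasticsearch output defaults to localhost:9200 and a logstash-%{+YYYY.MM.dd} index. Spelled out with those assumed defaults made explicit, it would look like:

```conf
output {
  elasticsearch {
    # Defaults made explicit; point hosts at your own cluster.
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```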

Restart logstash
$ cd /var/log/logstash
$ sudo rm /var/log/logstash/*
$ sudo service logstash start
logstash started.
$ sudo netstat -tulpn | grep 5044
tcp6 0 0 :::5044 :::* LISTEN 30511/java

Seems to be listening!
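Beyond eyeballing netstat, you can script the same check. `listening_on` is a throwaway helper of my own that scans netstat output on stdin; `nc -z localhost 5044` is another quick reachability test:

```shell
# Succeed if any LISTEN line in netstat output mentions the given TCP port.
# Usage: sudo netstat -tulpn | listening_on 5044
listening_on() {
  grep LISTEN | grep -q ":$1 "
}
```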

Start filebeat
$ sudo service filebeat start
$ ps -eaf | grep filebeat
root 30728 1 0 20:51 pts/0 00:00:00 /usr/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
root 30729 30728 0 20:51 pts/0 00:00:00 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
cf 30739 29753 0 20:52 pts/0 00:00:00 grep --color=auto filebeat

Check Kibana….. NO DATA!!!!
[Screenshot: Kibana, no data yet]

Then I start my data collector to start filling out the eve.json file!

$ ls -latr /var/log/eve.json
-rw-r--r-- 1 root root 11450 Feb 2 20:55 /var/log/eve.json
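Besides Kibana, the quickest way to confirm documents are arriving is to ask Elasticsearch directly. The `_count` API is real; `extract_count` below is just a throwaway helper of mine for pulling the number out of the JSON response:

```shell
# Real query on the box:
#   curl -s 'http://localhost:9200/logstash-*/_count?pretty'
# Extract the "count" value from a _count JSON response on stdin.
extract_count() {
  grep -o '"count"[[:space:]]*:[[:space:]]*[0-9]\+' | grep -o '[0-9]\+$'
}
```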

Yep .. Records are showing up!!
BAM!

[Screenshot: Kibana showing the Suricata records]

---- QUICK UPDATE ----

After the first pass at this, since my logstash installation was a few months old, I had an error in /var/log/logstash/logstash.out… something like this:
logstash The field '@timestamp' must be a (LogStash::Timestamp

Quick update to the plugin fixed it:
$ cd /opt/logstash
$ sudo ./bin/plugin update logstash-input-beats

That fixed it.
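After the update, you can confirm what's installed with `./bin/plugin list --verbose | grep beats` (a real logstash 2.x command). If you want to script a minimum-version check against that output, something like `version_ge` below works (a hypothetical helper built on `sort -V`):

```shell
# True if version $1 is greater than or equal to version $2.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
```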
Thanks: https://discuss.elastic.co/t/issue-with-filebeat-logstash-beats-input-unhandled-exception/33934


1 Comment

  1. I made an adaptation of the nginx log format for the suricata log, so I can have the geoip information in the suricata logs. I do the adaptation through swatch and send it to a log file that filebeat is configured to read. If you are interested, I can share it here.

    Ex:
    nginx.access.referrer: ET INFO Session Traversal Utilities for NAT (STUN Binding Request) [**

    nginx.access.geoip.location:
    {
    "lon": -119.688,
    "lat": 45.8696
    }


© 2017 De-Coder’s Ring
