Fauie Technology

eclectic blogging, technology and hobby farming

Month: June 2016

Enough About Enough

One of my last posts, “Your work life balance sucks,” was all about work-life balance.  We all have issues with prioritizing our time and allocating enough to each bucket of our lives: family, work, self, others.  Here, I want to talk about your knowledge, and how it’s important to be intentional in that aspect of your life too.

I seem to be surrounded by people who either know the nitty-gritty details of the technology or subject matter they are engrossed in, OR are the stereotypical “jack of all trades”.  You know the two types.  With the first, you can’t have a casual discussion about their subject.  They know WAY more than you do, and take the conversation very deep, very quickly.  They miss the forest for the trees.  These are your experts.  This person knows the ins and outs of the tool you’re using, and their knowledge lets you be confident in your approach and in whether an idea will work or not.

The second type of person is the typical ‘jack of all trades’.  I say typical because, in this case, they know a little about a lot.  I’d even say a little about ‘some’ things.  In the technology world, this person can work around a bash shell, write database queries and make some updates to a web page.  The counter to this is the Java developer who doesn’t know how to write a query, or the web designer who doesn’t know a lick of HTML or CSS.  My point is that this person is wide and shallow, as opposed to the first person, who is super deep but very narrow.

The mental image that just came to me is the iceberg.

The first person will tell you all about that little piece of ice at the bottom of the iceberg.  While that’s important to someone, and completely valid knowledge, 99% of us don’t care, and unless the conversation is about the bottom tip of the iceberg, it’s inappropriate.

The second person can tell you generally about icebergs, but maybe only if they’re covered in snow.  If it’s just ice, they may not recognize it.

The challenge a lot of us have is how to balance in the middle of this scale.  Depending on your role, you need to find your sweet spot.  For me, as a consultant/architect/VP of Engineering, I need to know “enough about enough”.  I need deep and wide.  I’d argue that I have to know 80% about an iceberg, but more importantly, know how ice works well enough to make some assumptions that can later be validated or experimented on.

In the world of technology, this manifests in a lot of different ways.  Mostly, it comes down to being educated enough to decide between two or more options, and picking an initial path to go down.  Now, anyone can pick a path, but the sweet spot means being able to get to the fork in the path as soon as possible, to determine whether it’s the right path or not.  Which database engine do we pick?  Which time-series data store do we use?  Which visualization engine will work with our Python framework?  Etc.

There’s absolutely no way everyone can know everything about everything.  Seriously, look at the ecosystem for devops these days (back when I was first writing code, we didn’t have no automation!).  It’s amazing!  There are dozens of tools that do almost the same task.  They overlap, they have their sweet spots too, but it takes a special kind of person with a very specific set of skills (hmm… I’ve heard that somewhere) to determine which tool to use in a specific situation.

I want to say this is another instance of the 80/20 rule, but not exactly.  Let’s go with that anyway.  Instead of learning 100% of the details of something, spend 80% of the time, then keep going on to other things.  Don’t be so narrowly focused.  Think about the days of Turbo Pascal.  If all you knew was 100% TP, how’s the job market looking these days?

Balance that against only learning 20% about something: you will never be seen as an expert.  No matter the subject matter (technologies, development approaches, managerial styles, etc.), you need to be deep enough to be an authority and make an impact on the organization you’re in if you want to excel and succeed.

Everything in life needs balance: diet, exercise, hobbies, work/life, etc.  Knowledge and learning are in there too.  Be intentional about where you focus your energies when learning something new, and figure out how much is enough.

Suricata Stats to Influx DB/Grafana

For everyone unfamiliar, Suricata is a high-performance network IDS (Intrusion Detection System), IPS (Intrusion Prevention System) and NSM (Network Security Monitor).  Suricata is built to monitor high-speed networks, looking for intrusions using signatures.  The first/original tool in this space was Snort (by Sourcefire, since acquired by Cisco).

NSM mode for Suricata produces some pretty fantastic output.  In the early days of nPulse’s pivot to a network security company, I built a prototype ‘reassembly’ tool.  It would take a PCAP file, shove the payloads of the packets together, in order, by flow, and extract a chunk of data.  Then I had to figure out what was in that jumbo payload.  Finding things like FTP or HTTP was pretty easy… but then the possibilities became almost endless.  I’ll provide links at the bottom for Suricata and other tools in the space.  Suricata can do this extraction, in real time, on a really fast network.  It’s a multi-threaded tool that scales.

Suricata can log alerts, NSM events and statistics to a JSON log file, eve.json.  On a typical Unix box, that file will be in /var/log/suricata.  The stats event type is a nested JSON object with tons of valuable data.  The hierarchy of the object looks something like this:

Stats layout for the Suricata eve.json log file

For a project I’m working on, I wanted to get this Suricata stats data into a time-series database.  For this build-out, I’m not interested in a full Elasticsearch installation like I’m used to, since I’m not collecting the NSM data, only this time-series data.  From my experience, RRD is a rigid pain, and Graphite (as promising as it is) can be frustrating as well.  My visualization target was Grafana, and it seems one of its favored data storage platforms is InfluxDB, so I thought I’d give it a shot.  Note, Influx has the ‘TICK’ stack, which includes a visualization component, but I really wanted to use Grafana.  So, I dropped Chronograf in favor of Grafana.

Getting Telegraf (machine data: CPU/memory) injected into Influx and visualized within Grafana took 5 minutes.  Seriously.  I followed the instructions and it just worked.  Sweet!  Now it’s time to get the Suricata stats file working.

A snippet of that stats object:

   "timestamp": "2016-06-27T14:38:34.000147+0000",
   "event_type": "stats",
   "stats": {
      "uptime": 245534,
      "capture": {
         "kernel_packets": 359737,
         "kernel_drops": 0
      },
      "decoder": {
         "pkts": 359778,
         "bytes": 312452344,
         "invalid": 1000,
         "ipv4": 343734,
         "ipv6": 1,
         "ethernet": 359778,
         "raw": 0,
As you can see, the nested JSON is there.  I really want that “343734” “ipv4” number shown over time, in Grafana.  After I installed Logstash (duh) to read the eve.json file, I had to figure out how to get the data into Influx.  There is a nice plugin to inject the data, but unfortunately, the documentation doesn’t come with good examples, ESPECIALLY good examples using nested JSON.  Well, behold, here’s a working document which gets all the yummy Suricata stats into Influx.

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}

output {
    influxdb {
        data_points => {
            "event_type" => "%{[event_type]}"
            "stats-capture-kernel_drops" => "%{[stats][capture][kernel_drops]}"
            "stats-capture-kernel_packets" => "%{[stats][capture][kernel_packets]}"
            "stats-decoder-avg_pkt_size" => "%{[stats][decoder][avg_pkt_size]}"

            # ======= TRUNCATED, FULL DOCUMENT IN GITHUB =======

            "stats-uptime" => "%{[stats][uptime]}"
            "timestamp" => "%{[timestamp]}"
        }
        host => ["localhost"]
        user => "admin"
        password => "admin"
        db => "telegraf"
        measurement => "suricata"
    }
}
WHOAH! That’s a LOT of fields.  Are you kidding me?!  Yep, it’s excellent.  Tons of data will now be ‘graphable’.  I whipped together a quick Python script to read an example of the JSON object and spit out the data_points entries, so I didn’t have to type anything by hand.  I’ll set up a quick gist in GitHub to show my work.
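My actual script is in the GitHub repo linked at the bottom; a minimal sketch of the idea looks something like this (the sample dict here is a trimmed, hypothetical stats event, not a full eve.json record):

```python
# Walk a nested stats dict and emit Logstash data_points lines, e.g.
#   "stats-capture-kernel_drops" => "%{[stats][capture][kernel_drops]}"
def data_points(obj, parts):
    for key, value in obj.items():
        path = parts + [key]
        if isinstance(value, dict):
            # Recurse into nested objects (capture, decoder, ...)
            yield from data_points(value, path)
        else:
            ref = "%{" + "".join(f"[{p}]" for p in path) + "}"
            yield "-".join(path), ref

# Trimmed sample of a stats event (illustration only)
sample = {
    "uptime": 245534,
    "capture": {"kernel_packets": 359737, "kernel_drops": 0},
}

for name, ref in data_points(sample, ["stats"]):
    print(f'"{name}" => "{ref}"')
```

Run against a real stats event, this prints every `"name" => "%{[path]}"` pair, ready to paste into the `data_points` hash.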

Let’s break it down a little bit.

This snippet tells Logstash to read the eve.json file, and that each line is a JSON object:

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

This section tells Logstash to drop every event that does not have an “event_type” of “stats”:

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}

Suricata also has a dedicated stats log file that could probably be read by Logstash too, but I may tackle that another day.  It’s way more complicated than JSON.
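That stats.log file is a pipe-separated table rather than JSON, so a reader would need its own line parser.  A hypothetical sketch (the three-column layout assumed here is from memory; check your own stats.log before relying on it):

```python
# Hypothetical parser for pipe-separated stats.log lines such as:
#   capture.kernel_packets    | Total    | 359737
def parse_stats_line(line):
    parts = [p.strip() for p in line.split("|")]
    if len(parts) != 3:
        return None  # skip headers, separator rows, blanks
    counter, thread, value = parts
    try:
        return counter, thread, int(value)
    except ValueError:
        return None  # non-numeric value column

print(parse_stats_line("capture.kernel_packets | Total | 359737"))
# prints ('capture.kernel_packets', 'Total', 359737)
```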

The last section is the tough one.  I found documentation that showed the general format of “input field” => “output field”… but that was it.  It took a ton of time over the past working day to figure out exactly how to nail this.  First, fields like ‘host’, ‘user’, ‘password’, ‘db’ and ‘measurement’ are very straightforward Influx concepts.  A db is much like a namespace, or a ‘table’ in a traditional sense.  A db contains multiple measurements.  The measurement is the thing we’re going to track over time; the data_points within it are the actual low-level details we want to see, for instance, ‘stats-capture-kernel_drops’.

Here’s an example:

"stats-decoder-ipv4" => "%{[stats][decoder][ipv4]}"    

On the left, ‘stats-decoder-ipv4’ is the name that will end up in InfluxDB.  The right side is how Logstash knows where to find the value in this event from eve.json.  The %{ } syntax indicates the value will come from the record; Logstash then just follows the chain down the JSON document: stats -> decoder -> ipv4.
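To make that lookup concrete, here’s a tiny Python sketch of the idea (this is an illustration of the walk, not Logstash internals):

```python
# Mimic how a %{[stats][decoder][ipv4]} field reference is resolved
# against an event dict: strip the wrapper, then walk key by key.
def resolve(event, ref):
    keys = ref[2:-1].strip("[]").split("][")  # "%{[a][b]}" -> ["a", "b"]
    value = event
    for key in keys:
        value = value[key]
    return value

event = {"stats": {"decoder": {"ipv4": 343734}}}
print(resolve(event, "%{[stats][decoder][ipv4]}"))  # prints 343734
```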

That’s it.  The Logstash config, eve.json sample and the little Python script are all in GitHub, and here’s a picture!

Grafana Screen Shot showing CPU, Memory and Suricata Information

Pretty straightforward.  Example configuration of one of the stats:

Configure graph in grafana

Please leave me a comment!  What could be improved here?  (Besides my Python not being elegant… it works… and sorry Richard, but I’m a spaces guy, tabs are no bueno.)

  1. GitHub: https://github.com/chrisfauerbach/suricata_stats_influx
  2. Suricata: https://suricata-ids.org/
  3. Cisco acquires Sourcefire (makers of Snort): http://www.cisco.com/web/about/ac49/ac0/ac1/ac259/sourcefire.html
  4. Grafana: http://grafana.org/
  5. InfluxDB/TICK stack: https://influxdata.com
  6. Chronograf: https://influxdata.com/time-series-platform/chronograf/
  7. Telegraf: https://influxdata.com/time-series-platform/telegraf/
  8. Logstash: https://www.elastic.co/products/logstash

SELinux: Causing a pain, time and time again

Once again, SELinux bit me.  What a pain.  It’s good for something, I’m sure, but dang, it’s always to blame.

I was trying to set up an Apache reverse proxy and kept getting a 503 error:

Permission denied: AH00957: HTTP: attempt to connect to ( failed

Did some googling, and thanks to Justin Ellison at sysadminsjourney.com, who saved the day.

A simple command allows the reverse proxy (the -P flag makes the boolean persist across reboots):

/usr/sbin/setsebool -P httpd_can_network_connect 1


Tencent buys majority Supercell stake (for more than a few cents!)

Oh my goodness.  Not since the Instagram acquisition have I been BLOWN AWAY by the valuation of a company.  I realize that I know nothing about the revenue of Supercell, but a $10B valuation?  Wow!  Their new app, Clash Royale, has definitely swept the household’s phones/iPods, etc., and that’s ALL the kids are playing these days.




Your work-life balance sucks

Cross posted at:  https://www.linkedin.com/pulse/your-work-life-balance-sucks-chris-fauerbach

From wikipedia:

Work–life balance is a concept including proper prioritizing between “work” (career and ambition) and “lifestyle” (health, pleasure, leisure, family and spiritual development/meditation). This is related to the idea of lifestyle choice.

What does that mean?

  • If you’re working more than 40 hours a week, you are bad at balancing your work and your non-work life.  You should quit your job and work at Walmart where, if you read the same hype I do, you can’t work a solid 40!
  • Only work 20 hours a week?  Quit slacking! Balance work and life, but only up to 40 hours…

Am I allowed to say “Just Kidding”, in a blog post?  Seems too unprofessional… but JUST KIDDING!

Work-life balance is a goal one has to strive for.  If you work too much, your family and friends will feel neglected (ever put a family in “The Sims” into a room and removed the door?  They feel neglected).  If you don’t work enough, you won’t earn enough money, and your family will not eat: probably the same result as the Sims family with the removed door.

In order to succeed, like most aspects of life, you need to be intentional about your work-life balance.  It can’t just magically happen and be good.  Some of us tend to overwork, some of us tend to underwork.  A lot of us are not good at being intentional about family time.

Guidance #1:

Think about your life as a whole: on your deathbed, would you regret spending too much time with family, or too much time at work?  This is the ultimate target.  In your life in its entirety, you need to have a focus on family.  Leisure activities and memories are invaluable.  Whether you die rich or poor, I can guarantee you won’t be thinking about how much money is in the bank when you’re dead.  You’ll be thinking about the love and experiences you’ve had.  The adage “Work to live, don’t live to work” is embodied here.  Prioritize your life over your work in the long run, and you will not have any regrets.

Guidance #2:

The balance changes at various stages of our lives.  Single and right out of college?  Work hard during the day, have fun in the evenings and weekends!  Have kids?  Work your tail off during the day, go home and forget about work.  Weekends: family time.  Kids older and out of the house?  Crank up the work if you still need to.  There are various ‘macro’ stages in your life and career.  Sometimes work has the higher percentage, sometimes life has the higher percentage.  Things change and are fluid.  Heck, there are micro changes in life too.  In the software development profession, we have times when stuff breaks and we have to scramble to fix it.  Project deadline coming up and we’re behind?  Crap, time to work more.  These are short sprints of extra work to meet a deliverable or to fix a mistake.  It happens.  It sucks.  Family can get mad, but it’s short-lived.  Do NOT do that to yourself for a long time.

Guidance #3:

It’s no one’s fault but your own.  This one gets me in trouble when I talk to ‘certain’ people.  “My boss made me stay late again last night.”  “Once the kids went to bed, I knocked out another few hours last night” “I worked all weekend to get my presentation ready for Monday”.

Those kill me, because I know that’s not how life is supposed to be.  Again, once in a while, no big deal.  But there are tons of people I work with for whom that’s habitual; that’s all they do.  You know the type: burned out, unhappy and, frankly, not really impressing anyone at work.  If it takes you 70 hours a week to do a 40-hour job, you either haven’t learned how to say “No” (which is a learned skill, absolutely) or your time management stinks.  Both are bad news and need to be worked on.  The good news is that both can be fixed!


You decide how much you work.  No one else does.  If you have a boss that ‘demands’ 60 hours for a traditionally 40-hour-a-week job, then those are unrealistic and unsustainable demands.  If you ‘phone it in’ after 35 hours for a traditionally 40-hour-a-week job, then pick it up.  Hold your own and work for what you’re paid.

Realize that work changes over time.  Sometimes you have to prove yourself, work extra hard, meet a crunch deadline, etc.  But if you’re up every night and working on weekends in a traditionally 9-5, Monday-Friday job, then you may have a problem.  Take some time and figure it out.  It may take some uncomfortable conversations with your manager, but do it.  Life is too short, and too important, to de-prioritize.




© 2023 Fauie Technology
