If you host a web site, or host one for someone else, download your own copy of the jQuery library (or any other library, for that matter).

jQuery, for example, offers a CDN for web sites that want to link to a hosted copy.

You can find that information, and links to the actual files, on jQuery's CDN page.

My personal preference: host your own.  That way, no one can change the file without your knowledge and break your web site.
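Hosting your own copy pairs nicely with pinning the file's contents. Here's a minimal sketch (the library bytes are a stand-in) of computing a Subresource Integrity (SRI) value, the same mechanism browsers use to verify a fetched script hasn't changed:

```python
import base64
import hashlib

def sri_hash(data: bytes) -> str:
    """Compute a Subresource Integrity string (the value you'd put in a
    script tag's integrity attribute) for the given file contents."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Pin the exact bytes you audited; if the hosted file ever changes,
# the hash (and the browser's integrity check) will no longer match.
library = b"/* jquery-3.x.min.js contents */"
print(sri_hash(library))
```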

Please, for super serious, don't link to some unknown site that shows up on the fifth page of Google. That's how you end up hosting malware.




Filed under Uncategorized

Finding Relationships in Elasticsearch document data with Neo4j

I'm working on a new project, in all my spare time (ha ha ha), that I thought was interesting.  We've been building relational databases for a bajillion years (or almost as long as computers have been storing data).  Data gets normalized, stored, related, queried, joined, etc.

These days, we have Elasticsearch, a NoSQL database that stores documents.  These documents are semi-structured, with almost no relationships that can be defined.  That's all well and good when you're just looking at data in a full-text index, but when looking for insights or patterns, we need some tricks to find those needles in the giant haystacks we're tackling.

Finding Unknown Relationships

Elasticsearch is a staple in my tool belt.  I use it for almost any project that collects data over time or has a large ingestion rate.  Look at the ELK stack and you'll see some cool things demoed.  Using Kibana, Logstash and Elasticsearch, you can build a very simple monitoring solution, log parser and collector, dashboarding tool, etc.

One of the ideas I came up with today while working on the Perch Security platform was a way to discover relationships within the data.  Almost taking a denormalized data set and normalizing it.  Well, that's not entirely accurate.  What if we can find relationships in seemingly unrelated data?

Pulling a log file from Apache?  Also getting data from a user group on cyber attack victims?  Your data structures may be set up to relate those data components, and you may not have thought about it.  That's kind of a contrived example, because it's pretty obvious you can search 'bad' data looking for indicators that are in your network, but my point is: what can we find out there?
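The matching step at the heart of that idea can be sketched without any infrastructure. In this toy version, the two document lists stand in for data pulled from two Elasticsearch indices (the field names are invented for the example); in a real pipeline each match would become a relationship in Neo4j:

```python
from collections import defaultdict

def find_shared_values(docs_a, docs_b, field_a, field_b):
    """Find values that appear in field_a of one document set and
    field_b of another -- candidate relationships between otherwise
    unrelated indices (e.g. Apache log IPs vs. threat-report IPs)."""
    index = defaultdict(list)
    for doc in docs_a:
        if field_a in doc:
            index[doc[field_a]].append(doc)
    matches = []
    for doc in docs_b:
        value = doc.get(field_b)
        for other in index.get(value, []):
            matches.append((other, value, doc))
    return matches

# Toy data standing in for two unrelated Elasticsearch indices:
apache_logs = [{"client_ip": "203.0.113.7", "path": "/login"}]
threat_posts = [{"indicator_ip": "203.0.113.7", "campaign": "phish-kit"}]
for log, ip, report in find_shared_values(apache_logs, threat_posts,
                                          "client_ip", "indicator_ip"):
    print(ip, report["campaign"])  # each shared value links two documents
```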

Here's a GitHub repo for what I'm going to be working on.  It's not really meant for public consumption, but I threw an MIT license on it if you're interested in helping.

Oh!  I forgot to mention, I'm going to be using a lot of my recently learned graph database skills (thanks again, Neo4j) to help discover some relationships.  It will help with the 'obvious' relationships, and maybe even when I get into figuring out "significant" terms and whatnot.  Let's see if I can get some cool hits!

Any data sets?






Filed under api, cybersecurity, elasticsearch, neo4j, technical

Neo4J: Driver Comparison


I'm not going to get into a detailed, analyzed discussion here, with key points on WHY there's a huge performance difference, but I want to illustrate why it's good to attempt different approaches.  This ties into my Enough About Enough post from a while ago.  If I only knew Python, I'd be stuck.  Thankfully I don't, and I was able to whip out a Java program to do this comparison... ok, I'm getting ahead of myself, let me start from scratch.

As I've written about before, I'm dealing with graph data in Neo4j.  Big shout out to the @Neo4j folks, as they've been instrumental in guiding me through some of my Cypher queries and whatnot.  Especially Michael H, who is apparently the Cypher Wizard to beat all Wizards.

Cyber threat intelligence data can be transmitted in TAXII format.  TAXII is an XML-based standard for transporting threat intelligence.  Read more in my blog post Stix, Taxii: Understanding Cybersecurity Intelligence.

Since there are nested relationships in this data that aren't exactly 'easy' to model in an RDBMS or a document store, I decided to shove it into a graph database.  (Honestly, I've been looking for a Neo4j project for a while, and this time it worked!)  At Perch Security, our customers have access to threat intelligence data, paid and open source.  We want to give them access to that data in a specific way, so I have to store and query it.  Storage is straightforward, and I can get into that more later, but right now I'm looking at querying this data.

After learning a few tricks for my Cypher, again thanks @michael.neo on Slack, I plugged it into my Python implementation.  It took a while to figure out how to get access to the cursor so I could stream results and not pull the whole record set.  After all, I'm trying to create 100k records (which turns out to require approximately 500k nodes from Neo4j).

My Data Flow

The gist of my data flow is simple: over time, I'm constantly polling an external data source and shoving the TAXII data into Neo4j.  My relationships are set, I try to consolidate objects where possible, and I'm off to the races.

When I query, I issue one big statement to execute my Cypher with a huge max results: basically, give me all the records until I stop reading them.  In other words, I use a LIMIT in my Cypher that's much higher than what I will actually need.

My code starts streaming the results and, one by one, shoves them into a 'collector' object.  When the collector hits a batch size (5MB+), I add a part to my AWS S3 multipart upload.  When I'm done reading records (either I ran out of records or hit my limit), I upload the rest of the data in my collector, finalize the multipart upload, and that's it.
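That collector pattern can be sketched independently of the drivers. This is a toy version: the injected callback stands in for a boto3 `upload_part` call in a real S3 multipart upload, and the batch size is shrunk so the mid-stream flush is visible:

```python
class Collector:
    """Buffer streamed records and flush them in fixed-size parts,
    mirroring an S3 multipart upload (each flush would be one part)."""

    def __init__(self, upload_part, batch_bytes=5 * 1024 * 1024):
        self.upload_part = upload_part  # callback; boto3 in real life
        self.batch_bytes = batch_bytes
        self.buffer = []
        self.size = 0

    def add(self, record: bytes):
        self.buffer.append(record)
        self.size += len(record)
        if self.size >= self.batch_bytes:
            self.flush()

    def flush(self):
        # Upload whatever is buffered, even a short final part.
        if self.buffer:
            self.upload_part(b"".join(self.buffer))
            self.buffer, self.size = [], 0

parts = []
collector = Collector(parts.append, batch_bytes=10)  # tiny batch for the demo
for record in (b"node-a", b"node-b", b"node-c"):     # stands in for streamed rows
    collector.add(record)
collector.flush()  # finalize: push the remainder before completing the upload
print(len(parts))  # 2 parts: one flushed mid-stream, one final
```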

My Python code took about 15 minutes.  No lie, 15 minutes.  I tried optimizing, I tried smaller batches, etc., and my results were always linear.  I tried 'paginating' (using SKIP and LIMIT), but that didn't help... actually, I did skip/limit first, then I went to the big streaming block.

Ugh. 15 minutes.  I'm seeing visions of a MAJOR problem when we're in full production.  Imagine: I have 300 customers.  I have to run a daily (worst case) batch job for each customer.  Holy cow.  I'm going to have to scale like CRAZY if I'm going to match that... I'm sweating.  The doubt kicks in.  Neo4j wasn't the right choice!  I wanted a pet project, and it's going to kick my butt and get me fired!!!!

Last ditch effort: I rewrite it in Java.

I've done a lot of Java in my life, 10+ years full time, but it's been over 6 years since I've REALLY written much Java code.  Man, has it changed.

But I digress.  I download Eclipse and do some googling on Java/Maven/Docker (I ended up using the Azul Docker image, way smaller than the standard Java 8: azul/zulu-openjdk:8), and I'm off to the races.  I get to learn how to read from an SQS queue, query Neo4j, and write to S3, all in 4 hours, so I don't get fired.

After a bunch of testing, getting Docker running and uploaded to AWS ECR... I run it.  It runs... then craps out after 15 seconds.  Shoot.  Where did I break something?

I go to my logger output.  Hmm.  No exceptions.  No bad code blocks or stupid logic (I got rid of those in testing).

I run it again.

15 seconds.


15 seconds?  I check my output file.  It looks good.  It matches the one created from Python.  15 seconds?!

Something is wrong with py2neo.. that’s for darn sure.

Would anyone be interested in an example chunk of code to do this?

Email me if you do.


Filed under api, career, cybersecurity, neo4j, technical

Tackling Expensive and Complicated Information Security

Information Security:  It doesn’t have to be so expensive (or complicated!)


The Bad News

Small and medium businesses (SMBs) can't approach information security the same way their bigger brothers do.  Face it: Capital One has a much larger information security (infosec) budget than the Downtown Credit Union in Powhatan, VA.  Small companies don't have the same staffing models, technology expertise or highly specialized analysts who focus solely on protecting data.  Sure, there are free and open source tools, but they still require expertise and time to get up and running, not to mention tuned, maintained, updated, etc.!


Here's another challenge.  A good information security practice relies on intelligence about threats, attacks, vulnerabilities, etc.  There are open source data sets that can help your SMB know what to look for in network scans, packet-matching signatures and queries in your SIEM, but that open source data tends to be stale.  Don't get me wrong, it's table stakes.  You NEED to be on the lookout for what Emerging Threats publishes, but it's not sufficient.  That data will protect you, but it covers only a tiny part of the known bad things out there.


Ok, one more bit of 'bad news'.  There are vendors who will sell you cyber threat intelligence (CTI) data.  Some aggregate data from intelligence providers; they're called TIPs, Threat Intelligence Platforms.  They provide tools and technologies to help you get known intelligence data.  Others research, probe and monitor the internet and private networks looking for 'things' that are bad.  They'll either sell you the data, or sell it to an aggregation company who will sell it to you.  They provide a great service and deserve to be paid for the work they do, but again, this may be pricey and out of your budget.


The Good News!

There is a new reality out there.  Sharing communities (ISACs and ISAOs) are being formed to share this threat intelligence data.  These groups are focused on specific industries (Health Care, Financial Services, Aviation, etc.) and provide a platform to share more RELEVANT data: data that affects your industry, has a much higher chance of being relevant to your company, and is typically far more targeted than the data served from large generic repositories.


Size doesn’t always matter.  With finite resources, both technical and human, it’s nearly impossible for SMBs to look out for all the bad things; and why should they?  A bank doesn’t care about a command and control channel for a botnet that is targeting manufacturing equipment.


Sharing communities are becoming the KEY source of threat intelligence data for small to mid-size businesses, putting control of the infosec spend back into their hands.


By leveraging shared community data as the primary (but still not only!) source of intelligence, we substantially reduce the cost of a comprehensive cyber intelligence and threat mitigation plan.  Once we embrace this new world of industry-specific, relevant cyber intel, we'll have new ways to connect in a USABLE way.  What's "usable"?  In order to reap the benefits of your sharing community memberships, you need tools that:

  • Don't require a skilled analyst behind the dashboard 24×7.
  • Don't require a SIEM.
  • Don't require knowledge of code.
  • Don't require more than a basic understanding of CTI (STIX, TAXII) terminology.


Now What

Who’s going to provide a tool like this?  Ha!  I’m not good at keeping secrets, but I’m working on something that will help bring the promise of a sharing community to reality.



Filed under cybersecurity, technical

Five Features of a Successful API Platform – PDF

“API Culture”

I've coined this phrase to describe a healthy technology organization that strives to build a solid set of capabilities that can be leveraged across a wider audience than an engineering team typically reaches.

Building an API is more than just writing a web service.  It's more than using AJAX or a REST framework in your code.  Building an "API Culture" is about giving your development teams the structure and ability to be as effective as they can be: increase collaboration, increase code quality, and increase the reuse of the applications they build.  We're not talking about a specific tool or methodology; we're talking about an attitude your teams adopt in order to enjoy their day-to-day working life a lot more.

Read the rest after a free download –


Filed under api, software development, technical

Stix, Taxii: Understanding Cybersecurity Intelligence

Cyber Intelligence Takes Balls


I spent years building a packet capture and network forensics tool.  Slicing and dicing packets makes sense to me.  Headers, payloads, etc.  Easy peasy (no, it's not really easy, but like I said, years).  Understanding complex data structures comes with the territory, and so far, I haven't met a challenge that took me too long to understand.

Then I met TAXII.  Then STIX.  I forgot how painful XML was.

TAXII: Trusted Automated eXchange of Indicator Information

STIX: Structured Threat Information eXpression

FYI:  All the visualizations and screenshots here are grabbed from Neo4j, the top-rated and most-used graph database in the world.  My work has some specific requirements that I think are best suited to nodes, edges and finding relationships between data, so I thought I'd give it a shot.  Nice to see a built-in browser that does some pretty fantastic drawing and layouts without any work on my part.  (Docker image, to boot!)

TAXII is a set of instructions, or standards, for how to transport intelligence data. The standard (now an OASIS standard) defines the interactions with a web server: HTTP(S) requests to query and receive intelligence. For most use cases, there are three main phases of interaction with a server:

  1. Discovery – Figure out the 'other' end points; this is where you start.
  2. Collection Information – Determine how the intelligence is stored. Think of collections as repositories, or groupings of intelligence data within the server.
  3. Poll (pull) – (Or push, but I'm focusing on pull.) Receive intelligence data for further processing. Poll requests will return STIX packages (more to come).
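The three phases above can be modeled as a tiny client. This is a hedged sketch, not the real TAXII message format: the transport is injected (a real client POSTs XML over HTTP(S)), and the URLs and response shapes are invented for illustration:

```python
class TaxiiClient:
    """Toy model of the three TAXII interaction phases."""

    def __init__(self, transport, discovery_url):
        self.transport = transport  # callable(url, message) -> response
        self.discovery_url = discovery_url

    def discover(self):
        # Phase 1: learn the server's other endpoints.
        return self.transport(self.discovery_url, {"type": "discovery"})

    def collections(self, collection_url):
        # Phase 2: list the repositories of intel on the server.
        return self.transport(collection_url, {"type": "collection_information"})

    def poll(self, poll_url, collection):
        # Phase 3: pull STIX packages from one collection.
        return self.transport(poll_url, {"type": "poll", "collection": collection})

def fake_transport(url, message):
    # Canned responses standing in for a live TAXII server.
    responses = {
        "discovery": {"collection_url": "/coll", "poll_url": "/poll"},
        "collection_information": {"collections": ["guest.phishtank_com"]},
        "poll": {"packages": ["<stix:STIX_Package .../>"]},
    }
    return responses[message["type"]]

client = TaxiiClient(fake_transport, "/discovery")
endpoints = client.discover()
names = client.collections(endpoints["collection_url"])["collections"]
packages = client.poll(endpoints["poll_url"], names[0])["packages"]
print(len(packages))  # 1
```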

I'm not going to go into details on the interactions here, but the Python library for TAXII (libtaxii) does a good enough job to get you started.  It's not perfectly clear, but it helps.

STIX defines data structures around intelligence data.  Everything is organized in a 'package'.  The package contains different pieces of information about itself and about the intelligence.  In this article, I'll focus on 'observables' and 'indicators'.  The items I won't talk much about are:

  • TTPs:  Tactics, Techniques and Procedures.  What mechanisms are the 'bad guys' using?  Software packages, exploit kits, etc.
  • Exploit Target:  What’s being attacked
  • Threat Actor: If known, who/what’s attacking?
  • TLPs, Kill chains, etc


Observables are the facts.  They are pieces of data that you may see on your network, on a host, in an email, etc.  These can be URLs, email addresses, files (and their corresponding hashes), IP addresses, etc.  A fact is a fact.  There's no context around it; it's just a fact.

A URL that can be seen on a network




Indicators are the 'why' around the facts.  They tell you what's wrong with an IP address, or give the context and story about an email that was seen.

Context around an observable


In the pictures above, you'll see a malicious URL (hulk**; seriously, don't follow it).  The observable component is the URL.  The indicator component tells us that it's malicious.  The description tells us that the provider's intelligence center identified the URL as part of a phishing scheme.

Source of data

All security analysts are aware of some open source intelligence data: Emerging Threats, PhishTank, etc.  This data is updated regularly and provided in each project's own format.  Since we're talking about using TAXII to transport this data, we need an open source/free TAXII source.  Enter Hail a TAXII.

When you make a query against Hail a TAXII's discovery endpoint, you learn the collection and poll URLs, plus the inbox URL, but we're not using that today.  (Coincidentally, HAT's URLs are all the same.)

Once you query the collection information endpoint, you see approximately 11 collections (at the time of writing); I list them below.  From there, we can make poll requests to each collection and start receiving hundreds (thousands?) of STIX packages.

STIX Package

Since I'm a network monitoring junky, I want to see the observables I can monitor, specifically IPs and URLs.  Parsing through the data, I find some interesting tidbits.  Some packages have observables at the top level, and some have observables as children of the indicators.  No big deal; we'll keep it all and start storing/displaying.
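Both observable locations can be handled in a few lines. A simplified sketch: the XML below is a namespace-free stand-in for a real STIX package (real packages are heavily namespaced and much richer), but the two cases, top-level observables and observables nested under indicators, work the same way:

```python
import xml.etree.ElementTree as ET

# Namespace-free stand-in for a STIX package, for illustration only.
PACKAGE = """
<Package>
  <Observables>
    <Observable value="http://malicious.example/hulk"/>
  </Observables>
  <Indicators>
    <Indicator title="Phishing URL">
      <Observable value="203.0.113.9"/>
    </Indicator>
  </Indicators>
</Package>
"""

def extract_observables(xml_text):
    """Return (indicator_title_or_None, observable_value) pairs."""
    root = ET.fromstring(xml_text)
    found = []
    # Case 1: observables at the top level of the package.
    for obs in root.findall("./Observables/Observable"):
        found.append((None, obs.get("value")))
    # Case 2: observables nested as children of indicators.
    for ind in root.findall("./Indicators/Indicator"):
        for obs in ind.findall("Observable"):
            found.append((ind.get("title"), obs.get("value")))
    return found

print(extract_observables(PACKAGE))
```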

Once it's all parsed using some custom Python (what a mess!), I'm able to start loading my nodes and edges.  Straightforward: I build nodes for the Community (Hail a TAXII), the Collection, the Package, Indicators and Observables.  The observables can be related to the Indicator and/or the Package.
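The load step is essentially flattening each parsed package into rows that feed a parameterized Cypher MERGE. A sketch, with labels and property names mirroring this post rather than any official schema:

```python
def build_graph_rows(community, collection, package_id, indicators):
    """Flatten one parsed package into parameter dicts for a Cypher
    statement along the lines of:
      MERGE (c:Community {name: $community})
      MERGE (o:Observable {value: $observable})
      MERGE (c)-[:CONTAINS]->(o)  ...
    (labels/relationships here are illustrative, not a standard)."""
    rows = []
    for indicator, observable in indicators:
        rows.append({
            "community": community,
            "collection": collection,
            "package": package_id,
            "indicator": indicator,
            "observable": observable,
        })
    return rows

rows = build_graph_rows(
    "Hailataxii", "guest.phishtank_com", "pkg-001",
    [("Phishing URL", "http://malicious.example/hulk")],
)
print(rows[0]["observable"])
```

Passing rows like these as query parameters (rather than string-building Cypher) keeps the statements cacheable and safe.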

Community view from the top down


The yellow circle is the community, the green circle is the collection, the small blue circles are the packages (told you there could be hundreds), purple is the indicator and reddish is the observable.

Indicators and Observables


That’s about it!  Don’t forget to check out my last post on Suricata NSM fields to see how some of these observables can be found on a network.

Suricata NSM Fields

Please leave feedback if you have any questions!








Collections from Hail a TAXII:

  1. guest.dataForLast_7daysOnly
  2. guest.EmergingThreats_rules
  3. guest.phishtank_com
  4. system.Default
  5. guest.EmergineThreats_rules
  6. guest.dshield_BlockList
  7. guest.Abuse_ch
  8. guest.MalwareDomainList_Hostlist
  9. guest.Lehigh_edu
  10. guest.CyberCrime_Tracker
  11. guest.blutmagie_de_torExits


Filed under cybersecurity, technical

Suricata NSM Fields

Value of NSM data


Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation. Suricata is developed by the OISF and its supporting vendors.

Snort was an early IDS that matched signatures against packets seen on a network.  Suricata is a next-generation open source tool that expands on the capabilities of Snort.  While it continues to monitor packets and can 'alert' on matches (think a bad IP address, or a sequence of bytes inside a packet), it adds Network Security Monitoring.  NSM watches a network (or a PCAP file), reassembles packet payloads in order, and does further analysis.  From the payloads, Suricata can extract HTTP, FTP, DNS, SMTP, SSL certificate info, etc.

This data can provide invaluable insight into what's going on in your company or in your home.  Store the data for future searches, monitor it actively for immediate notification of wrongdoing, or anything else you want to do.  NSM data allows an analyst to track the spread of malware, or trace how a malicious email came through.

Beyond the metadata, Suricata can also extract files from monitored sessions.  These files can be analyzed, replayed, or shoved into a sandbox and detonated.  Build your own!

Here's a breakdown of the fields available, but remember: they're not always there.  Be careful in your coding.

All records contain general layer 3/4 network information:

  • timestamp
  • protocol
  • in_iface
  • flow_id
  • proto
  • src_ip
  • src_port
  • dest_ip
  • dest_port
  • event_type

This covers TCP/IP, UDP, etc.  Each event that gets logged (check out /var/log/suricata/eve.json) has this information and more.  "event_type" indicates the 'rest' of the important data in this NSM record.  Values in 'event_type' will be one of:

  • http
  • ssh
  • dns
  • smtp
  • email
  • tls
  • fileinfo*


HTTP events carry the request and response header fields:

  • accept
  • accept_charset
  • accept_encoding
  • accept_language
  • accept_datetime
  • authorization
  • cache_control
  • cookie
  • from
  • max_forwards
  • origin
  • pragma
  • proxy_authorization
  • range
  • te
  • via
  • x_requested_with
  • dat
  • x_forwarded_proto
  • x_authenticated_user
  • x_flash_version
  • accept_range
  • age
  • allow
  • connection
  • content_encoding
  • content_language
  • content_length
  • content_location
  • content_md5
  • content_range
  • content_type
  • date
  • etag
  • expires
  • last_modified
  • link
  • location
  • proxy_authenticate
  • referrer
  • refresh
  • retry_after
  • server
  • set_cookie
  • trailer
  • transfer_encoding
  • upgrade
  • vary
  • warning
  • www_authenticate


For SSH events, client and server are child objects when parsed from JSON:

  • client
    •   proto_version
    •   software_version
  • server
    •   proto_version
    •   software_version


DNS events include:

  • tx_id
  • rrtype
  • rrname
  • type
  • id
  • rdata
  • ttl
  • rcode


SMTP/email events include:

  • reply_to
  • bcc
  • message_id
  • subject
  • x_mailer
  • user_agent
  • received
  • x_originating_ip
  • in_reply_to
  • references
  • importance
  • priority
  • sensitivity
  • organization
  • content_md5
  • date


TLS events include:

  • fingerprint
  • issuerdn
  • version
  • sni
  • subject


Fileinfo is special.  It can be associated with other types, like HTTP and SMTP/email.  Watch the object carefully; you'll get a mix of fields:

  • size
  • tx_id
  • state
  • stored
  • filename
  • magic
  • md5
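Putting the field tables above to use: a sketch of parsing eve.json lines, dispatching on event_type and using `.get()` everywhere since, as noted, the fields are not guaranteed to be present. The sample log line is invented:

```python
import json

def parse_eve_line(line):
    """Parse one eve.json line, dispatching on event_type.
    Every field access uses .get() because fields may be absent."""
    event = json.loads(line)
    common = {
        "timestamp": event.get("timestamp"),
        "src_ip": event.get("src_ip"),
        "dest_ip": event.get("dest_ip"),
    }
    etype = event.get("event_type")
    # Type-specific data lives under a child object named after the type.
    detail = event.get(etype, {}) if etype else {}
    if etype == "http":
        common["summary"] = (detail.get("hostname"), detail.get("url"))
    elif etype == "dns":
        common["summary"] = (detail.get("rrname"), detail.get("rrtype"))
    elif etype == "fileinfo":
        common["summary"] = (detail.get("filename"), detail.get("md5"))
    return etype, common

line = ('{"timestamp":"2017-01-01T00:00:00","event_type":"dns",'
        '"src_ip":"10.0.0.5","dns":{"rrname":"example.com","rrtype":"A"}}')
etype, record = parse_eve_line(line)
print(etype, record["summary"])
```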


Filed under technical

Agile: Why Enterprises Are Struggling

Let's start with the Agile Manifesto.  It's been 15 years since it was written, and agile is all over the place in different organizations.

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

If you've worked with me in the recent past, you'll recognize a few of my frequently vented statements from the office.  The word vented is quite important; it's usually the result of being unable to change a process, or of observing a ritual that's ridiculous.  BUT! This is all caveated with the following statement:

“Companies need to get things done.  They need to understand how their development organizations are organized and working.  They need to be able to plan and forecast expenses and timelines.  I understand their need for process, even if they call it agile.” – Chris F.  (yeah, that’s me)

Individuals and interactions over processes and tools

Data is king.  As a data and process guy, I understand the value of knowing how well a team performs, especially over time.  Let's compare this to a retail business.  If you don't have historical data on sales, you can't reliably plan your upcoming budgets.  Look at seasonal retail: you need to know when you're in a high-income month to hold back some income for the upcoming low seasons.  It's cash flow planning 101.  The same goes for enterprises managing their development resources.  How many projects can we realistically plan and promise in the next 6 months?  There's no way to do that without knowing the output trends of your teams.

Where groups get in trouble is sacrificing development time and resources to make sure these metrics are buttoned down.  I'm a big fan of planning poker (or whatever the kids are calling it these days).  Using a Fibonacci scale (1, 2, 3, 5, 8, 13), you let your team tell you the complexity of the task they're working on.  This isn't a number of hours, and it isn't the number of days a task will take to complete.  It's a little more granular than a t-shirt-size estimate (S, M, L, XL), but not much.  The valuable outcome is an approximate velocity.  It's not a set-in-stone metric, it's an approximation.  Don't make it anything more than that.

There are plenty of tools out there to help track stories, points, and sprints (oh my!).  Pick a lightweight one that matches your natural processes.  Pivotal Tracker works well; Jira works well too.  Don't overplay the tool, though.  Just get work done!

Working software over comprehensive documentation

Documentation has been the bane of my existence as a software engineer for my entire career.  One of my first jobs was on a CMMI-certified project.  Talk about documentation for documentation's sake.  Documents that never get read are seriously stupid.  Talk about a waste of time!

No documentation is even stupider, in my opinion.  Remember that proverbial bus?  When your strongest developer gets a much higher-paying job (or gets hit by that proverbial bus) and doesn't come to work, how do you know what the heck he was working on?  Sure, there are team members, but in my experience, you typically build a lot of domain knowledge in one (maybe two) developers.

Here's my advice: document the process and the components.  How does code get from the source code repository to a production environment?  What steps are automated, and what steps are manual?  How does data flow from start to finish?  What components pick up data when it comes from a source system?  If you want to document interfaces, that's a great next step.  Document your APIs, including inputs/outputs, required fields, etc.  Define what it takes to integrate with other pieces of tech.

Find the balance between useful and wasteful.   It’s not always a clear answer, but lean to the side of less.

Customer collaboration over contract negotiation

Flash back to the days of waterfall.  Those days are not totally gone, unfortunately.  Much like the previous point about over-documenting, this tenet is similar.  Think about selling a house.  You're the seller and I'm the buyer.  You write an offer.  I read it and send a counter offer.  You sign it and we're done!  We never had to meet.  We didn't talk, and heck, we were separated by another layer: the real estate agent.  This is the extreme of waterfall.  Your customer writes up a gigantic requirements document, ships it over the wall (via a people person, darn it) and the development staff has at it.  6-12+ months later, the developers show the product to the customer, and guess what!  BAM, it is NOT what they wanted.  Are you surprised?

Now imagine the customer and the engineering team working together in much faster cycles.  Instead of the customer seeing a quarterly release (or a final release, I shudder to think), imagine them seeing daily progress, heck, hourly progress.  Embed a customer representative with the developers.  Then the real-time collaboration hits, and boy, is it magical.  "What do you think of this color?" or "Here's how the data is flowing; how does this look for a user interaction?"  Real-time (or close to it) feedback is the best way to make sure the final result is really what the customer wants.

Responding to change over following a plan

Ever met someone who hates change?  We all get in our groove and really don't like when others rock the boat, but there are folks who truly cannot deal with change.  It gives them anxiety, and it's a weird thing to observe.  One of my natural traits is that I'm a rule follower***.  I understand the value and comfort a plan provides.  I'm also a realist and understand that the best-laid plans (of mice and men) NEVER stick.  Timelines change, people get sick and, most importantly, requirements change!  (I'm sensing a reason these points are laid out the way they are in the manifesto.)  Go with the flow!  If you're 20% through a project and BAM, you get the best idea ever, then go ahead with your customer and figure out whether (or how) to change the project, timelines, etc.  Your customer might change what they want from you.  If you lose a few weeks or months of time because of it, it's up to them.  Don't stress it (but keep notes that it's the customer that changed plans mid-project; don't get dinged in your performance appraisal!).

There's another side to this, though.  I've worked at two startups where we were building a product that had to be sold in the market.  Every week, our sales guys would come in and say, "Customer XYZ needs feature ABC in order to close this 300 million dollar deal!!"  It's really easy to let the tail (sales) wag the dog (product and the overall roadmap).  Don't chase every sale by losing sight of your product roadmap.  Having a strategy is key.  Product management needs to be really clear and intentional if they change product delivery timelines or feature priorities when sales comes calling.  Good product management and executive leadership will never let a sale change the vision or strategy, but they may allow a slight deviation from the original plan in order to reprioritize!  (Although if it really were a 300 million dollar sale, that may be big enough to derail a lot!)


The tenets laid out in the Agile Manifesto were put together by some really, really smart people.  Don't let the ceremonies and the calendar be the only things that 'make you agile'.  For me, "Individuals and Interactions Over Processes and Tools" is the foundation for the rest of the bullets.  Write good software, and don't focus on tools that don't help write good software.


*** (More to come on what that means, but it shows mainly in respecting org charts, staying within the roles and responsibilities given, etc.  I do well in chaotic, flexible environments, and love when my rule is "no rules!")

Cross Posted @ LinkedIn


Filed under Uncategorized

Enough About Enough

One of my last posts was all about work-life balance, "Your work life balance sucks".  We all have issues with prioritizing our time and allocating enough to each bucket of our lives: family, work, self, others.  Here, I want to talk about your knowledge, and how it's important to be intentional in that aspect of your life too.

I seem to be surrounded by people who either know the nitty-gritty detail of the technology or subject matter they're engrossed in, OR are the stereotypical "jack of all trades".  You know the two types.  The first, you can't have a casual discussion with.  They know WAY more than you do, and take the conversation very deep, very quickly.  They miss the forest for the trees.  These are your experts.  This person knows the ins and outs of the tool you're using, and their knowledge allows you to be confident in your approach and whether an idea will work or not.

The second type of person is the typical 'jack of all trades'.  I say typical because, in this case, they know a little about a lot.  I'd even say a little about 'some' things.  In the technology world, this person can work around a bash shell, write database queries and make some updates to a web page.  The counter to this is the Java developer who doesn't know how to write a query, or the web designer who doesn't know a lick of HTML or CSS.  My point here is that this person is wide and shallow, as opposed to the first person, who's super deep but very narrow.

The mental image just came to me of the iceberg.  You know, something like this:

The first person will tell you about that little piece of ice that sits at the bottom of the iceberg.  While that's important to someone, and is completely valid knowledge, 99% of us don't care, and unless the conversation is about the bottom tip of the iceberg, it's inappropriate.

The second person can tell you generally about icebergs, but maybe only if they're covered in snow.  If it's just ice, they may not recognize it.

The challenge a lot of us have is how to balance in the middle of this scale.  Depending on your role, you need to find your sweet spot.  For me, as a consultant/architect/VP of Engineering, I need to know "enough about enough".  I need deep and wide.  I'd argue that I have to know 80% about an iceberg, but more importantly, know how ice works well enough to make some assumptions that can later be validated or experimented on.

In the world of technology, this manifests in a lot of different ways.  Mostly, it comes down to being educated enough to decide between two or more options, and picking an initial path to go down.  Now, anyone can pick the path, but the sweet spot means being able to get to the fork in the path as soon as possible to determine if it’s the right path or not.  Which database engine do we pick?  Which time-series data storage do we use?  Which visualization engine will work with our python framework?  Etc.

There’s absolutely no way everyone can know everything about everything.  Seriously, look at the ecosystem for devops these days (Back when I was first writing code, we didn’t have no automation!).  It’s amazing!  There are dozens of tools that do almost the same task.  They overlap, they have their sweet spots too, but it takes a special kind of person with a very specific set of skills (hmm.. I’ve heard that somewhere) to determine which tool to use in a specific situation.

I want to say this is another instance of the 80/20 rule, but not exactly.  Let’s go with that anyway.  Instead of learning 100% of the details of something, spend 80% of the time, then keep going on to other things.  Don’t be so narrowly focused.  Think about the days of Turbo Pascal.  If all you knew was 100% TP, how’s the job market these days?

Balance that with only learning 20% about something: you will never be seen as an expert, no matter what the subject matter is — technologies, development approaches, managerial styles, etc.  You need to be deep enough to be an authority and make an impact on the organization you’re in if you want to excel and succeed.

Everything in life needs a balance.  Diet, exercise, hobbies, work/life, etc.  Knowledge and learning is in there too.  Be intentional about where you focus your energies in learning about something new, and figure out how much is enough.

1 Comment

Filed under career, software development, technical

Suricata Stats to Influx DB/Grafana

For everyone unfamiliar, Suricata is a high performance network IDS (Intrusion Detection System), IPS (Intrusion Prevention System) and NSM (Network Security Monitor).  Suricata is built to monitor high speed networks and look for intrusions using signatures.  The first/original tool in this space was Snort (by Sourcefire, acquired by Cisco).

NSM mode for Suricata accomplishes some pretty fantastic outputs.  In the early days of nPulse’s pivot to a network security company, I built a prototype ‘reassembly’ tool.  It would take a PCAP file, shove the payloads of the packets together, in order by flow, and extract a chunk of data.  Then I had to figure out what was in that jumbo payload.  Finding things like FTP or HTTP was pretty easy… but then the possibilities became almost endless.    I’ll provide links at the bottom for Suricata and other tools in the space.  Suricata can do this extraction, in real time, on a really fast network.  It’s a multi-threaded tool that scales.

Suricata can log alerts, NSM events and statistics to a json log file, eve.json.  On a typical unix box, that file will be in /var/log/suricata.  The stats event type is a nested JSON object with tons of valuable data.   The hierarchy of the object looks something like this:

Suricata Stats Organization

Stats layout for Suricata eve.json log file

For a project I’m working on, I wanted to get this Suricata stats data into a time-series database.  For this build-out, I’m not interested in a full Elasticsearch installation, like I’m used to, since I’m not collecting the NSM data, only this time-series data.  From my experience, RRD is a rigid pain, and graphite (as promising as it is) can be frustrating as well.  My visualization target was Grafana, and it seems one of the favored data storage platforms is InfluxDB, so I thought I’d give it a shot.  Note, Influx has the ‘TICK’ stack, which includes a visualization component, but I really wanted to use Grafana.   So, I dropped Chronograf in favor of Grafana.

Getting Telegraf (machine data, CPU/Memory) ingested into Influx and visualized within Grafana took 5 minutes.  Seriously.  I followed the instructions and it just worked.  Sweet!  Now it’s time to get the Suricata stats file working.

snippet of the above picture:

   "timestamp": "2016-06-27T14:38:34.000147+0000",
   "event_type": "stats",
   "stats": {
      "uptime": 245534,
      "capture": {
         "kernel_packets": 359737,
         "kernel_drops": 0
      },
      "decoder": {
         "pkts": 359778,
         "bytes": 312452344,
         "invalid": 1000,
         "ipv4": 343734,
         "ipv6": 1,
         "ethernet": 359778,
         "raw": 0,
As you can see, the nested JSON is there.  I really want that “343734” “ipv4” number shown over time, in Grafana.  After I installed Logstash (duh) to read the eve.json file, I had to figure out how to get the data into Influx.  There is a nice plugin to inject the data, but unfortunately the documentation doesn’t come with good examples, ESPECIALLY good examples using nested JSON.   Well, behold, here’s a working document, which gets all the yummy Suricata stats into Influx.

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}
output {
    influxdb {
        host => ["localhost"]
        user => "admin"
        password => "admin"
        db => "telegraf"
        measurement => "suricata"
        data_points => {
            "event_type" => "%{[event_type]}"
            "stats-capture-kernel_drops" => "%{[stats][capture][kernel_drops]}"
            "stats-capture-kernel_packets" => "%{[stats][capture][kernel_packets]}"
            "stats-decoder-avg_pkt_size" => "%{[stats][decoder][avg_pkt_size]}"

            =======TRUNCATED, FULL DOCUMENT IN GITHUB ==========

            "stats-uptime" => "%{[stats][uptime]}"
            "timestamp" => "%{[timestamp]}"
        }
    }
}
WHOAH! That’s a LOT of fields.  Are you kidding me?!  Yep, it’s excellent. Tons of data will now be ‘graphable’.  I whipped together a quick python script to read an example of the JSON object, and spit out the data points entries, so I didn’t have to type anything by hand.  I’ll set up a quick gist in github to show my work.
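The generator script is straightforward to sketch.  Here’s a minimal version of the idea, assuming a sample stats object loaded as a Python dict — the function names are illustrative, not the actual gist.  It walks the nested JSON, builds dash-separated measurement names, and prints one `data_points` line per leaf:

```python
def flatten(obj, prefix=""):
    """Recursively collect dash-separated key paths for every leaf value."""
    names = []
    for key, value in obj.items():
        name = f"{prefix}-{key}" if prefix else key
        if isinstance(value, dict):
            names.extend(flatten(value, name))
        else:
            names.append(name)
    return names

def data_points_lines(stats):
    """Emit one Logstash data_points entry per leaf in the stats object."""
    lines = []
    for name in flatten(stats, "stats"):
        # "stats-capture-kernel_drops" -> "[stats][capture][kernel_drops]"
        path = "][".join(name.split("-"))
        lines.append(f'"{name}" => "%{{[{path}]}}"')
    return lines

# illustrative subset of a real stats event
sample = {"uptime": 245534, "capture": {"kernel_packets": 359737, "kernel_drops": 0}}
for line in data_points_lines(sample):
    print(line)
```

Note the assumption that no individual key contains a dash; that holds for the Suricata stats keys above, which use underscores.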

Let’s break it down a little bit.

This snippet tells logstash to read the eve.json file, and tell it that each line is a JSON object:

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

This section tells logstash to drop every event that does not have an “event_type” of “stats”:

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}

Suricata also has a dedicated stats log file that could probably be parsed by Logstash, but I may do that another day.  It’s way more complicated than JSON.

The last section is the tough one.  I found documentation that showed the general format of “input field” => “output field”… but that was it.  It took a ton of time over the past working day to figure out exactly how to nail this.  First, fields like ‘host’, ‘user’, ‘password’, ‘db’ and ‘measurement’ are very straightforward Influx concepts.  A db is much like a namespace, or a ‘table’ in the traditional sense.   A db contains multiple measurements.  The measurement is the thing we’re going to track over time.  In our case, these are the actual low-level details we want to see, for instance, ‘stats-capture-kernel_drops’.

Here’s an example:

"stats-decoder-ipv4" => "%{[stats][decoder][ipv4]}"    

On the left, ‘stats-decoder-ipv4’ is the measurement name that will end up in InfluxDB.  The right side is how Logstash knows where to find the value in the event from eve.json.  The %{ prefix indicates the value will come from the record, and Logstash then follows the chain down the JSON document: stats -> decoder -> ipv4.
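If the field-reference syntax is unfamiliar, here’s a rough sketch in Python of what Logstash is doing when it resolves one of these references against an event — the `resolve` function is just an illustration of the lookup, not Logstash’s actual implementation:

```python
import re

def resolve(event, ref):
    """Resolve a Logstash-style field reference like %{[stats][decoder][ipv4]}."""
    # pull out the bracketed key names: ["stats", "decoder", "ipv4"]
    keys = re.findall(r"\[([^\]]+)\]", ref)
    value = event
    for key in keys:
        value = value[key]  # walk one level deeper per key
    return value

event = {"stats": {"decoder": {"ipv4": 343734}}}
print(resolve(event, "%{[stats][decoder][ipv4]}"))  # prints 343734
```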

That’s it.   The logstash config, eve.json, little python script, and here’s a picture!

Grafana Screen Shot showing CPU, Memory and Suricata Information


Pretty straightforward.  Example configuration for one of the stats:

Configure graph in grafana


Please leave me a comment!  What could be improved here?   (Besides my python not being elegant… it works… and sorry Richard, but I’m a spaces guy, tabs are no bueno.)

  1. GITHUB:
  2. Suricata:
  3. Snort Acquired Sourcefire:
  4. Grafana –
  5. InfluxDB/TICK stack:
  6. Chronograf:
  7. Telegraf:
  8. Logstash:


June 27, 2016 · 3:42 PM