De-Coder’s Ring

Consumable Security and Technology


The Cyber Pivot

Over the years, I’ve put a lot of thought into pivoting. Not in the startup sense, but in the data sense: pivoting from one piece of data to another in order to build a picture.

Data analysis is all about pivoting. When I’m investigating an alert, I very rarely have a good picture of all the events and correlating data surrounding it. That leads to frustrating, repeated steps for every alert triage: looking at external sources, internal sources, etc.

The ‘simplest’ challenge to solve would be to auto-pivot around the internal log data we have. Since I’m a wanna-be SOC analyst but a pretty good software engineer, I need to build some code that will auto-pivot. Basically, given a specific moment in time, or a specific NetFlow record, spider out until I get six degrees of separation. Effectively, build my own graph database from log storage.

To the code!

I’ve started writing some code and will probably post it here, as long as we don’t decide to claim the IP at Perch (https://perchsecurity.com). We might, though, so hang tight. Keep an eye out here:

https://github.com/chrisfauerbach
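
In the meantime, here is a rough Python sketch of the auto-pivot idea: a breadth-first spider from a seed record out to a fixed number of hops. The fetch_related() helper is hypothetical; in practice it would wrap a handful of Elasticsearch queries that find flows, alerts and NSM records sharing an IP, port or time window.

from collections import deque

MAX_HOPS = 6  # six degrees of separation

def pivot(seed_record, fetch_related):
    """Return (from_id, to_id) edges reachable from seed_record within MAX_HOPS pivots."""
    seen = {seed_record["id"]}
    edges = []  # a home-grown graph, built straight from log storage
    queue = deque([(seed_record, 0)])
    while queue:
        record, depth = queue.popleft()
        if depth == MAX_HOPS:
            continue
        for related in fetch_related(record):
            edges.append((record["id"], related["id"]))
            if related["id"] not in seen:
                seen.add(related["id"])
                queue.append((related, depth + 1))
    return edges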

If this were Neo4J:

MATCH (a:Alert)-[]-(f:Flow)-[]-(n:NSM)
OPTIONAL MATCH (n)-[r:REFERS]-(n2:NSM {type:"HTTP"})
RETURN a, n, f, n2

… or something like that.


Elasticsearch Maintenance with Jenkins

 

Maintaining production systems is one of those unfortunate tasks that we need to deal with… I mean, why can’t they just run themselves? I get tired of daily tasks extremely quickly. Now that I have a few Elasticsearch clusters to look after, I had to come up with a way to keep them singing.

As a developer, I usually don’t have to deal with these kinds of things, but in the startup world I get to do it all: maintenance, monitoring, development, etc.

Jenkins makes this kind of stuff super easy. With a slew of Python programs that use parameters/environment variables to connect to the right Elasticsearch cluster, I’m able to perform the following tasks, in order (order is key):

1. Create Snapshot
2. Monitor Snapshot until it’s done
3. Delete Old Data (this is especially interesting in our use case; we have a lot of intentional False Positive data for connectivity testing)
4. Force Merge Indices

I have Jenkins set up to trigger each downstream job after the prior one completes.

I could do a cool Jenkins Pipeline…. in my spare time.

Snapshots:

Daily snapshots are critical in case of cluster failure. With a four-node cluster, I’m running a fairly safe setup, but if something goes catastrophically bad, I can always restore from a snapshot. My setup sends snapshots to AWS S3 buckets.
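
The snapshot and monitor jobs boil down to a few client calls. A minimal sketch, assuming the official elasticsearch-py client, an ES_HOST environment variable set by the Jenkins job to pick the right cluster, and a pre-registered S3 snapshot repository (the repository and index names here are illustrative, not the real ones):

import os
import time

from elasticsearch import Elasticsearch

# Jenkins points each run at the right cluster via an environment variable
es = Elasticsearch([os.environ["ES_HOST"]])

snapshot_name = "daily-" + time.strftime("%Y-%m-%d")

# Step 1: kick off the snapshot without blocking the request
es.snapshot.create(
    repository="s3_backups",
    snapshot=snapshot_name,
    body={"indices": "*", "include_global_state": True},
    wait_for_completion=False,
)

# Step 2: poll until the snapshot succeeds, or fail the Jenkins job if it doesn't
while True:
    info = es.snapshot.get(repository="s3_backups", snapshot=snapshot_name)
    state = info["snapshots"][0]["state"]
    if state == "SUCCESS":
        break
    if state in ("FAILED", "PARTIAL"):
        raise RuntimeError("snapshot %s ended in state %s" % (snapshot_name, state))
    time.sleep(30)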

Delete Old Data:

When dealing with network monitoring, network sensors, and storing NSM data (see Suricata NSM Fields), we have determined that one easy way to test end-to-end integration is to insert some obviously fake False Positives into our system. We have stood up a Threat Intelligence Platform (Soltra Edge) to serve some fake Indicators/Observables: Google.com, Yahoo.com, etc. They show up in everyone’s network if there is user traffic. This is great for confirming connectivity, but long term it becomes LOTS of traffic that I really don’t need to store… so it gets deleted.
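
The cleanup job itself is basically a delete-by-query. A minimal sketch, again assuming elasticsearch-py; the index pattern and field name are placeholders, not the real mapping:

import os

from elasticsearch import Elasticsearch

es = Elasticsearch([os.environ["ES_HOST"]])

# Connectivity-test indicators that generate intentional False Positives
TEST_DOMAINS = ["google.com", "yahoo.com"]

# Remove the matching hits from the NSM indices ("nsm-*" and "dns.query" are illustrative)
es.delete_by_query(
    index="nsm-*",
    body={"query": {"terms": {"dns.query": TEST_DOMAINS}}},
)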

Force Merge Indices

There is a lot of magic that happens in Elasticsearch. That’s fantastic. Force merging allows ES to shrink the number of segments in a shard, thereby increasing query performance. This is really only useful for indices that are no longer receiving data; in our use case, that’s historical data. I delete the old data, then force merge it.
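
The force-merge job is the smallest of the bunch. A minimal sketch, assuming elasticsearch-py and daily, date-suffixed indices (the naming convention is an assumption):

import os
from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

es = Elasticsearch([os.environ["ES_HOST"]])

# Only merge indices that are no longer being written to: yesterday's and older
yesterday = (datetime.utcnow() - timedelta(days=1)).strftime("%Y.%m.%d")
es.indices.forcemerge(index="nsm-" + yesterday, max_num_segments=1)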

 

A day in the life… of Jenkins.


Did the Russians use Yahoo for the Grizzly Steppe attack? JAR-16-20296

No, no they didn’t. At least not from what my investigation finds. I just wanted to put out another example of a False Positive in the DHS/US-CERT JAR that I talk about in this article: Grizzly Steppe: Lighting up Like A Christmas Tree.

IP address:  66.196.116.112

NSLookup output:

$ nslookup 66.196.116.112
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
112.116.196.66.in-addr.arpa name = pr.comet.vip.bf1.yahoo.com.

WHOIS output can be found here:  Inktomi whois

Inktomi was acquired by Yahoo back in 2002.

Wikipedia entry

Now, this 66.196.116.112 IP address resolves via comet.yahoo[.]com.

This looks to be a service utilized by Yahoo mail.

