Fauie Technology

eclectic blogging, technology and hobby farming


Cybersecurity Arms Race: The Nuclear Option

I see you

Sandboxing files and detecting unexpected system behaviors is one of the best approaches to finding exploits. FireEye did it really well when they first came out with their network monitoring products: watch a network, extract files, shove them into a sandbox, let them explode, see what happens. They were credited with finding a ton of 0-day events. Now we can do the same with open source software.

Then you hear about malware that can detect it's running in a sandbox or a virtual machine. If it detects the virtual environment, it doesn't explode, doesn't infect, doesn't do the bad things. So we invest money into figuring out how to hide the virtual host or sandbox from the malware. Arms race! Who can do it better? I hide, you detect; you hide, I detect. It's one example of the cybersecurity arms race.
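To illustrate the detection side, here's a minimal sketch of the kind of check sandbox-aware malware performs. The MAC OUI prefixes below are real vendor assignments, but the function is my own simplified, hypothetical classifier, not any specific sample's logic:

```python
# Well-known MAC address OUI prefixes assigned to virtualization vendors.
VM_MAC_PREFIXES = {
    "00:05:69": "VMware",
    "00:0c:29": "VMware",
    "00:50:56": "VMware",
    "08:00:27": "VirtualBox",
    "00:15:5d": "Hyper-V",
}

def looks_like_vm(mac_address: str) -> bool:
    """Return True if the NIC's MAC prefix belongs to a known hypervisor vendor."""
    prefix = mac_address.lower()[:8]
    return prefix in VM_MAC_PREFIXES
```

Real samples combine many such checks (CPUID hypervisor bits, registry keys, timing artifacts), which is exactly why the hide-and-detect race never ends.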

In traditional warfare, the winner of the arms race has a bigger gun. Well, a bigger stick, then a bigger rock, then a bigger bow, gun, missile, etc. There's an end game there: the nukes. Whoever has the nukes is on top. But once the foe has a nuke too, the arms race can't continue. We're at a stalemate. Mutually assured destruction if we all use our nuclear arms. When we all have the biggest gun, none of us can use it. (Another blog post later about moving traditional warfare to the cybers, but that's for later.)

What's the nuclear option for cyber warfare?

The closest thing I can think of for mutually assured destruction would be taking down the Internet as a whole. It may not even be possible. Can someone wipe out all the core routers, heck, all the routers in the world? Is that the end? It makes me think of my favorite definition of envy (vs. jealousy). Jealousy is "I want what you have" .. not totally bad, it can help motivate someone. Envy is "I want what you have, but since I don't, you can't have it either." Going nuclear on the Internet would drastically affect every life on the planet (ok, maybe not every, but anyone who's in a 'civilized' place). If it's even possible…

Good Intel Is Hard

“Good Intel Is Hard”

– Perch SOC Lead, Patrick S

Today, Patrick and I were discussing some intelligence that we're sensing on. This intel comes straight from a private intelligence source that's supposed to have highly accurate, targeted intel. Our focus is on private sharing communities, e.g. ISACs, ISAOs, etc. In our experience, these sources of intel are supposed to be highly relevant and vetted, to make sure members of said communities are watching out for the most significant threats to their industry or community.

Contrast that to a threat feed like the open source Emerging Threats data. It's excellent data that everyone needs to be detecting against; it's just not specialized. It's valid data that's publicly available, and you should detect on it. (I'm trying to be super clear here: I'm not knocking ET at all. Use ET data, pay for ET Pro, you need it. But it's table stakes, and getting the data from that source is key.)

There are a few issues in the state of cyber intel that I see so far:

  1. Even targeted, industry-specific intelligence ingests 'other' intel, thereby making it not very targeted. (One Stop Shop for Intel vs Highly Focused and Relevant)
  2. Intel is shared before it's vetted, leading to a lot of garbage (BUT I tend to prefer this, compared to ….)
  3. Intel is researched, vetted, and analyzed before it's shared, slowing down the release of information

TIPs and Private Communities

TIP = Threat Intelligence Platform: a content management system that specializes in the creation, collaboration, ingest, and export of cybersecurity intelligence data in standardized formats, for human and machine consumption.

ISACs (Information Sharing and Analysis Centers) and ISAOs (Information Sharing and Analysis Organizations) offer communities a fantastic resource when they're run well. They provide a common center for analysis, research, and communication with other groups (FBI/DOJ, three-letter agencies, etc.), and are chartered to disseminate intelligence to their members. The issue that I'm currently running into while automating intelligence to detection is that these highly focused groups are ingesting data from other organizations or intelligence sources. They're ingesting some commercial and public feeds. This dilutes their value, in my opinion. Any tool that's worth its salt (what a weird saying) already pulls in open source intelligence and even popular closed source intelligence. Continue to add value by focusing and sharing highly relevant data.

Vetting of data

There's a balance between sitting on data too long, paralyzed by analysis, and sharing data too early that turns out to be wrong. I'd lean toward sharing too early rather than too late, though. It's very easy to tell that comet.yahoo[.]com is a False Positive. I don't mind an analyst taking 5 minutes to figure that out. I tend toward that, compared to holding valuable intelligence too long "just to make sure it's super bad." By then, my systems may be "super dead" (to quote A. Hamilton, or at least the musical, Hamilton).
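That 5-minute check can even be scripted. Here's a minimal sketch of how an analyst might pre-triage IP indicators by reverse DNS; the benign-suffix list and the injectable resolver are my own illustration, not anything a real product ships:

```python
import socket

# Suffixes we consider benign enough to route to human review (illustrative list).
LIKELY_BENIGN_SUFFIXES = (".yahoo.com", ".google.com", ".akamai.net")

def triage_ip(ip, resolver=socket.gethostbyaddr):
    """Return 'review' if reverse DNS points at a well-known benign domain,
    'alert' otherwise (including when there is no PTR record at all)."""
    try:
        hostname, _aliases, _addrs = resolver(ip)
    except OSError:
        return "alert"  # no PTR record: nothing exonerating, keep the indicator
    if hostname.endswith(LIKELY_BENIGN_SUFFIXES):
        return "review"  # probably a False Positive like comet.yahoo[.]com
    return "alert"
```

With the default resolver this does a live lookup; passing a fake resolver makes it testable offline.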

If you're pretty sure it's bad, push it out. Let the boots on the ground figure it out for sure. Worst case scenario, we spend 5-10 minutes investigating alerts because of it. Best case scenario, I alerted on outbound traffic to new C&C infrastructure and was able to squash it REALLY quickly.


Quick Heimdall Data Install

Has anyone played with Heimdall Data's software? I was introduced a few weeks ago. They're a super early startup (love!) with some pretty cool technology.

The feature that I really latched onto was the invisible caching. The first time I talked about a write-through cache was with nPulse tech, dealing with some of the indexing we did, and there wasn't really any easy technology to use. Typically it's (pseudocode, Python-ish):

def my_function_by_id(id):
    # Check the cache first; on a miss, hit the database and populate the cache.
    out_object = check_redis_cache(id)
    if not out_object:
        out_object = db.execute("select * from my_table where id = %s", (id,))
        set_redis_cache(id, out_object)
    return out_object


What happens when you switch cache services? Go from Redis (simple) to Hazelcast (complex)?

Wouldn’t it be better to just:

def my_function_by_id(id):
    out_object = db.execute("select * from my_table where id = %s", (id,))
    return out_object

Yes, that didn't save much code. But how many places in your code do you interact with your database?

Enter the Heimdall Data system. I can write my code and connect to their proxy (since I love Python, I'll use the proxy, but they DO have a JDBC driver: update your config and you're off to the races).

The software identifies my SQL statements, extracts patterns, extracts parameters, and automatically sets up the cache.
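In practice that means the only change in application code is where you connect. A sketch, assuming a hypothetical Postgres database sitting behind a Heimdall proxy listener; the hosts, ports, and names are made up for illustration:

```python
# Instead of connecting straight to the database...
DB_DIRECT = {"host": "db.internal", "port": 5432}

# ...point the same client at the proxy; the SQL and the calling code don't change.
DB_VIA_PROXY = {"host": "127.0.0.1", "port": 5433}  # hypothetical proxy listener

def connection_dsn(cfg, dbname="app", user="app"):
    """Build a libpq-style DSN; switching cache layers becomes a config change only."""
    return f"host={cfg['host']} port={cfg['port']} dbname={dbname} user={user}"
```

With psycopg2 you'd pass `connection_dsn(DB_VIA_PROXY)` to `psycopg2.connect(...)` and keep the two-line `my_function_by_id` exactly as written.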

Let’s follow instructions from here:


Alright, let's start from a fresh CentOS 7 virtual machine:

sudo su -
bash <(curl -s http://download.heimdalldata.com/downloads/serverinstall.sh)

Then, grab the service file from here: https://rayed.com/wordpress/?p=1496

Put the file contents in /etc/rc.d/init.d/supervisord

chmod +x  /etc/rc.d/init.d/supervisord

sudo chkconfig --add supervisord

sudo chkconfig supervisord on

sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # edit in place rather than clobbering the whole file

firewall-cmd --zone=public --add-port=8087/tcp --permanent

shutdown -r now  # Yeah, hack that SELinux out of here

I had a little issue running the proxy on my VM: when it booted, it preferred, and only listened on, the IPv6 addresses. Quick fix to /etc/supervisor/conf.d/heimdallserver.conf

changed:

command=java -server  -jar /opt/heimdall/heimdallserver.jar

to

command=java -server -Djava.net.preferIPv4Stack=true  -jar /opt/heimdall/heimdallserver.jar

Reloaded supervisord

sudo service supervisord restart

Waited a few minutes… tried my web browser, and there it is!

Stay tuned. In the near future, I'll do a video walk-through of this, and at least two more videos overall.

This is a good start for me to practice some on-screen time. More on that to come! I can't wait to share the big news… you'll hear LOTS more from me.


chris.


Did the Russians use Yahoo for the Grizzly Steppe attack? JAR-16-20296

No, no they didn't. At least not from what my investigation finds. I just wanted to put out there another example of a False Positive in the DHS/US-CERT JAR that I talk about in this article, Grizzly Steppe: Lighting Up Like A Christmas Tree.

IP address:  66.196.116.112

NSLookup output:

$ nslookup 66.196.116.112
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
112.116.196.66.in-addr.arpa name = pr.comet.vip.bf1.yahoo.com.

WHOIS output can be found here:  Inktomi whois

Inktomi was acquired by Yahoo back in 2002.

Wikipedia entry

Now, this 66.196.116.112 IP address is what comet.yahoo[.]com resolves to.

This looks to be a service utilized by Yahoo mail.


Mini Rant: STIX Confuses My Computer

For some background, please see my post here


Here at Perch Security (perchsecurity.com) we’re building a community intelligence sharing platform that adds value to existing intelligence sharing communities… (our marketing folks say that better than I do!).

I'm building a bunch of services to ingest external intelligence, normalize, persist, etc. I'm very thankful for standards such as TAXII (transport) and STIX (intelligence).

Reading a STIX doc makes sense for an analyst. The document gives great context around some piece of intelligence. An analyst can look it over, figure out some bad IP addresses and figure out what to do with them. The challenge comes with machine to machine communication. STIX can be used to relate Observables (IP addresses) into context.

For example, when analysts set up a new Indicator (the story of why we care about these IP addresses), they can set a relationship between ports and IP addresses. The facts are the Observables (Port 80, IP: 8.8.8.8), etc.

So, looking at this netflow, 192.168.1.1:6633 -> 8.8.8.8:53, we can very quickly determine that it's a DNS request from an internal IP to Google's DNS server. If that were determined to be malicious, the STIX should be set up like this:

Observable 1: IP 8.8.8.8
Observable 2: Port 53
Observable 3: IP 192.168.1.1
Observable 4: Port 6633
Observable 5: “Observable 1” AND “Observable 2”
Observable 6: “Observable 3” AND “Observable 4”
Observable 7: “Observable 6” AND “Observable 5”

See the tree?

Indicator
– Description: “There’s a story here about malicious DNS traffic going to host 8.8.8.8 on port 53. We observed this on our network from an internal client 192.168.1.1:6633”
– Observables: “Observable 7”

So, an analyst reading that understands what's going on. The immediate issue for a machine, though, is that the 'internal' IP part is effectively useless. No big deal; I can programmatically ignore internal IP addresses.

BUT, the REAL problem comes with fast analysts or bad tools. My STIX document comes back looking like this:

Indicator
– Description: "Bad IP address found"
– Observables: "Observable 1" OR "Observable 2" OR "Observable 3" OR "Observable 4"

That's just plain wrong! Now my signature triggers on ALL DNS requests.
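To see why machines care, here's a minimal sketch that evaluates both compositions against an ordinary, benign DNS flow. The data structures are my own illustration, not actual STIX/CybOX classes:

```python
# A flow is just the four facts a sensor sees (illustrative, not real STIX).
flow = {"src_ip": "192.168.1.1", "src_port": 6633,
        "dst_ip": "8.8.8.8", "dst_port": 53}

def observed(flow, value):
    """A single Observable matches if its value appears anywhere in the flow."""
    return value in flow.values()

def matches_and_tree(flow):
    # Observable 7 = (IP 8.8.8.8 AND Port 53) AND (IP 192.168.1.1 AND Port 6633)
    o5 = observed(flow, "8.8.8.8") and observed(flow, 53)
    o6 = observed(flow, "192.168.1.1") and observed(flow, 6633)
    return o5 and o6

def matches_or_list(flow):
    # The sloppy version: Observable 1 OR 2 OR 3 OR 4
    return any(observed(flow, v) for v in ("8.8.8.8", 53, "192.168.1.1", 6633))

# A perfectly ordinary lookup to Google DNS from a different internal client:
benign = {"src_ip": "192.168.1.50", "src_port": 40001,
          "dst_ip": "8.8.8.8", "dst_port": 53}
```

`matches_and_tree(benign)` is False, because the full context has to line up, while `matches_or_list(benign)` is True: every DNS request to 8.8.8.8 now alerts.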

I see an education opportunity here for analysts using the tools that create this intelligence. I'm not calling them wrong; it's an opportunity to learn the tooling a little better and improve all of our communities.


© 2021 Fauie Technology
