Five Features of a Successful API Platform – PDF

“API Culture”

I’ve coined this phrase to describe a healthy technology organization: one that strives to build a solid set of capabilities that can be leveraged by a wider audience than an engineering team typically reaches.

Building an API is more than just writing a web service. It’s more than using AJAX or a REST framework within your code. Building an “API Culture” is all about giving your development teams the structure and ability to be as effective as they can be: increasing collaboration, increasing code quality and increasing the reuse of the applications they build. We’re not talking about a specific tool or methodology; we’re talking about an attitude your teams will adopt in order to enjoy their day-to-day working life a lot more.

Read the rest after a free download.


Filed under api, software development, technical

STIX, TAXII: Understanding Cybersecurity Intelligence

Cyber Intelligence Takes Balls

I spent years building a packet capture and network forensics tool. Slicing and dicing packets makes sense to me. Headers, payloads, etc.: easy peasy (no, it’s not really easy, but like I said, years). Understanding complex data structures comes with the territory, and so far, I haven’t met a challenge that took me too long to understand.

Then I met Taxii. Then Stix. I forgot how painful XML was.

TAXII: Trusted Automated eXchange of Indicator Information

STIX: Structured Threat Information eXpression

FYI: all the visualizations and screenshots here are grabbed from Neo4j, the top-rated and most-used graph database in the world. My work has some specific requirements that I think are best suited to nodes, edges and finding relationships between data, so I thought I’d give it a shot. It’s nice to see a built-in browser that does some pretty fantastic drawing and layouts without any work on my part. (Docker image, to boot!)

TAXII is a set of instructions or standards for how to transport intelligence data. The standard (now an OASIS standard) defines the interactions with a web server: HTTP(S) requests to query and receive intelligence. For most use cases, there are three main phases of interaction with a server:

  1. Discovery: figure out the ‘other’ endpoints. This is where you start.
  2. Collection Information: determine how the intelligence is stored. Think of collections as a repository, or grouping, of intelligence data within the server.
  3. Poll (pull): receive intelligence data for further processing. (There’s also push, but I’m focusing on pull.) Poll requests will result in different STIX packages (more on those to come).

I’m not going to go into detail on the interactions here, but the Python library for TAXII (libtaxii) does a good enough job to get you started. It’s not perfectly clear, but it helps.
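To give a flavor, here’s a minimal discovery sketch using libtaxii (assuming the 1.1.x API; the Hail a TAXII host and discovery path are from memory and may have changed, so adjust for your server):

import libtaxii as t
import libtaxii.clients as tc
import libtaxii.messages_11 as tm11

# Build a TAXII 1.1 discovery request and send it over plain HTTP
client = tc.HttpClient()
client.set_use_https(False)

discovery_req = tm11.DiscoveryRequest(message_id=tm11.generate_message_id())
http_resp = client.call_taxii_service2(
    'hailataxii.com',            # assumed host
    '/taxii-discovery-service',  # assumed discovery path
    t.VID_TAXII_XML_11,
    discovery_req.to_xml())

# Parse the HTTP response back into a TAXII message and list the services
discovery_resp = t.get_message_from_http_response(http_resp, discovery_req.message_id)
for service in discovery_resp.service_instances:
    print(service.service_type, service.service_address)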

STIX defines the data structures around intelligence data. Everything is organized in a ‘package’. The package contains different pieces of information about itself and about the intelligence it carries. In this article, I’ll focus on ‘observables’ and ‘indicators’. The items I won’t talk much about are:

  • TTPs: Tactics, Techniques and Procedures. What mechanisms the ‘bad guys’ are using: software packages, exploit kits, etc.
  • Exploit Target: what’s being attacked.
  • Threat Actor: if known, who or what is attacking?
  • TLPs, kill chains, etc.


Observables are the facts. They are pieces of data that you may see on your network, on a host, in an email, etc. These can be URLs, email addresses, files (and their corresponding hashes), IP addresses and so on. A fact is a fact. There’s no context around it; it’s just a fact.

A URL that can be seen on a network



Indicators are the ‘why’ around the facts. These tell you what’s wrong with an IP address, or give the context and story around an email that was seen.

Context around an observable

In the pictures above, you’ll see a malicious URL (hulk**; seriously, don’t follow it). The observable component is the URL. The indicator component tells us that it’s malicious. The description tells us that the originating intelligence center identified the URL as part of a phishing scheme.

Source of data

All security analysts are well aware of some of the open source intelligence data out there: Emerging Threats, PhishTank, etc. This data is updated regularly and provided in each project’s own format. Since we’re talking about using TAXII to transport this data, we need an open source/free TAXII source. Enter Hail a TAXII.

When you make a query against Hail a TAXII’s discovery endpoint, you learn the collection management and poll URLs, plus the inbox URL, but we’re not using that today. (Coincidentally, HAT’s URLs are all the same.)

Once you query the collection information endpoint, you see approximately 11 collections (at the time of writing); I list them at the bottom of this post. From there, we can make poll requests to each collection and start receiving (hundreds? thousands?) of STIX packages, as in the sketch below.
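A poll sketch, again assuming the libtaxii 1.1.x API (the host, poll path and guest credentials are my assumptions; the collection name comes from the list at the bottom of this post):

import libtaxii as t
import libtaxii.clients as tc
import libtaxii.messages_11 as tm11

client = tc.HttpClient()
client.set_auth_type(tc.HttpClient.AUTH_BASIC)
client.set_auth_credentials({'username': 'guest', 'password': 'guest'})

# Poll one collection; an empty PollParameters asks for everything available
poll_req = tm11.PollRequest(
    message_id=tm11.generate_message_id(),
    collection_name='guest.phishtank_com',
    poll_parameters=tm11.PollRequest.PollParameters())

http_resp = client.call_taxii_service2(
    'hailataxii.com', '/taxii-data', t.VID_TAXII_XML_11, poll_req.to_xml())
poll_resp = t.get_message_from_http_response(http_resp, poll_req.message_id)

# Each content block carries one STIX package (XML) for downstream parsing
for block in poll_resp.content_blocks:
    print(block.content[:200])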

STIX Package

Since I’m a network monitoring junkie, I want to see the observables I can monitor, specifically IPs and URLs. Parsing through the data, I find some interesting tidbits. Some packages have observables at the top level, and some have observables as children of the indicators. No big deal; we’ll keep it all and start storing/displaying (see the sketch below).
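Here’s a sketch of that walk using the python-stix library (assuming STIX 1.x content pulled from a TAXII content block; attribute access is from memory, so treat it as a starting point, not gospel):

from io import BytesIO
from stix.core import STIXPackage

def observables_in(package_xml):
    # Parse one STIX 1.x package from raw XML bytes
    package = STIXPackage.from_xml(BytesIO(package_xml))
    # Case 1: observables hung off the top level of the package
    if package.observables:
        for observable in package.observables.observables:
            yield observable
    # Case 2: observables nested as children of indicators
    for indicator in package.indicators or []:
        for observable in indicator.observables or []:
            yield observable

# Feed it the content of a TAXII content block from the poll above, e.g.:
# for observable in observables_in(block.content): print(observable.id_)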

Once it’s all parsed using some custom Python (what a mess!), I’m able to start loading my nodes and edges. Straightforward: I build nodes for the community (Hail a TAXII), the collection, the package, the indicators and the observables. The observables can be related to the indicator and/or the package.
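The loading looks roughly like this with py2neo (assuming the v4+ API; the labels, relationship names and credentials are my own choices, not anything standard):

from py2neo import Graph, Node, Relationship

# Connect to a local Neo4j instance (adjust the credentials for your install)
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

community = Node("Community", name="Hail a TAXII")
collection = Node("Collection", name="guest.phishtank_com")
package = Node("Package", stix_id="example-package-id")
indicator = Node("Indicator", title="Malicious URL")
observable = Node("Observable", value="http://example.invalid/phish")

# Wire up the hierarchy: community -> collection -> package,
# then hang the indicator and observable off the package
graph.create(Relationship(community, "HAS_COLLECTION", collection))
graph.create(Relationship(collection, "HAS_PACKAGE", package))
graph.create(Relationship(package, "HAS_INDICATOR", indicator))
graph.create(Relationship(indicator, "INDICATES", observable))
graph.create(Relationship(package, "HAS_OBSERVABLE", observable))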

Community view from the top down

The yellow circle is the community, the green circle is the collection, the small blue circles are the packages (told you there could be hundreds), purple is the indicator and reddish is the observable.

Indicators and Observables

That’s about it!  Don’t forget to check out my last post on Suricata NSM fields to see how some of these observables can be found on a network.


Please leave feedback if you have any questions!

Collections from Hail a TAXII:

  1. guest.dataForLast_7daysOnly
  2. guest.EmergingThreats_rules
  3. guest.phishtank_com
  4. system.Default
  5. guest.EmergineThreats_rules
  6. guest.dshield_BlockList
  7. guest.Abuse_ch
  8. guest.MalwareDomainList_Hostlist
  9. guest.Lehigh_edu
  10. guest.CyberCrime_Tracker
  11. guest.blutmagie_de_torExits


Filed under cybersecurity, technical

Suricata NSM Fields

Value of NSM data


Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF and its supporting vendors.

Snort was an early IDS that matched signatures against packets seen on a network. Suricata is the next generation of open source tools, looking to expand on the capabilities of Snort. While it continues to monitor packets, and can ‘alert’ on matches (think a bad IP address, or a sequence of bytes inside a packet), it expands those capabilities by adding Network Security Monitoring. NSM watches a network (or a PCAP file), jams packet payloads together (in order), and does further analysis. From the payloads, Suricata can extract HTTP, FTP, DNS, SMTP, SSL certificate info, etc.

This data can provide invaluable insight into what’s going on in your company or in your home. Store the data for future searches, monitor it actively for immediate notification of wrongdoing, or do anything else you want with it. NSM data allows an analyst to track the spread of malware, or trace how a malicious email came through.

Beyond the meta-data, Suricata can also extract the files from monitored sessions.  These files can be analyzed, replayed or shoved into a sandbox and detonated.  Build your own!

Here’s a breakdown of the fields available, but remember: they’re not always there. Be careful in your coding (a defensive parsing sketch follows the field lists below).

All records contain general layer 3/4 network information:

  • timestamp
  • protocol
  • in_iface
  • flow_id
  • proto
  • src_ip
  • src_port
  • dest_ip
  • dest_port
  • event_type

This covers TCP/IP, UDP, etc. Each event that gets logged (check out /var/log/suricata/eve.json) has this information and more. “event_type” indicates the ‘rest’ of the important data in this NSM record. Values of ‘event_type’ will be one of:

  • http
  • ssh
  • dns
  • smtp
  • email
  • tls
  • fileinfo*


HTTP events can carry a large set of request and response header fields:

  • accept
  • accept_charset
  • accept_encoding
  • accept_language
  • accept_datetime
  • authorization
  • cache_control
  • cookie
  • from
  • max_forwards
  • origin
  • pragma
  • proxy_authorization
  • range
  • te
  • via
  • x_requested_with
  • dat
  • x_forwarded_proto
  • x_authenticated_user
  • x_flash_version
  • accept_range
  • age
  • allow
  • connection
  • content_encoding
  • content_language
  • content_length
  • content_location
  • content_md5
  • content_range
  • content_type
  • date
  • etag
  • expires
  • last_modified
  • link
  • location
  • proxy_authenticate
  • referrer
  • refresh
  • retry_after
  • server
  • set_cookie
  • trailer
  • transfer_encoding
  • upgrade
  • vary
  • warning
  • www_authenticate


SSH events contain client and server child objects when parsed from JSON:

  • client
    •   proto_version
    •   software_version
  • server
    •   proto_version
    •   software_version


DNS events include:

  • tx_id
  • rrtype
  • rrname
  • type
  • id
  • rdata
  • ttl
  • rcode


SMTP/email events include:

  • reply_to
  • bcc
  • message_id
  • subject
  • x_mailer
  • user_agent
  • received
  • x_originating_ip
  • in_reply_to
  • references
  • importance
  • priority
  • sensitivity
  • organization
  • content_md5
  • date


TLS events include:

  • fingerprint
  • issuerdn
  • version
  • sni
  • subject


File info is special. It can be associated with other types, like HTTP and SMTP/email. Watch the object carefully; you’ll get a mix of fields:

  • size
  • tx_id
  • state
  • stored
  • filename
  • magic
  • md5
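As promised, a minimal defensive reader sketch in Python (assuming the eve.json path above; the point is to use .get() everywhere, since none of these fields is guaranteed to be present):

import json

with open("/var/log/suricata/eve.json") as eve:
    for line in eve:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # skip partially written lines
        etype = event.get("event_type")
        if etype == "http":
            http = event.get("http", {})
            print(event.get("src_ip"), "->", event.get("dest_ip"),
                  http.get("hostname"), http.get("url"))
        elif etype == "fileinfo":
            # fileinfo mixes in fields from the protocol that carried the file
            fileinfo = event.get("fileinfo", {})
            print(fileinfo.get("filename"), fileinfo.get("md5"),
                  fileinfo.get("magic"))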


Filed under technical

Agile: Why Enterprises Are Struggling

Let’s start out with the Agile Manifesto. It’s been 15 years since it was written, and agile is all over the place in different organizations.

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

If you’ve worked with me in the recent past, you’ll recognize a few of my frequently vented statements from the office. The word ‘vented’ is quite important; it’s usually the result of being unable to change a process, or of observing a ritual that’s ridiculous. BUT! This is all caveated with the following statement:

“Companies need to get things done.  They need to understand how their development organizations are organized and working.  They need to be able to plan and forecast expenses and timelines.  I understand their need for process, even if they call it agile.” – Chris F.  (yeah, that’s me)

Individuals and interactions over processes and tools

Data is king. As a data and process guy, I understand the value in knowing how well a team performs, especially over time. Let’s compare this to a retail business. If you don’t have historical data on sales, you can’t reliably plan your upcoming budgets. Look at seasonal retail: you need to know when you’re in a high income month so you can hold back some income for the upcoming low seasons. It’s cash flow planning 101. The same goes for enterprises managing their development resources. How many projects can we realistically plan and promise in the next six months? There’s no way to answer that without knowing the output trends of your teams.

Where groups get in trouble is sacrificing development time and resources to make sure these metrics are buttoned down. I’m a big fan of planning poker (or whatever the kids are calling it these days). Using a Fibonacci scale (1, 2, 3, 5, 8, 13), you let your team tell you the complexity of the task they’re working on. This isn’t a number of hours, and it isn’t the number of days a task will take to complete. It’s a little more granular than a t-shirt-size estimate (S, M, L, XL), but not much. The valuable outcome is an approximate velocity. It’s not a set-in-stone metric; it’s an approximation. Don’t make it anything more than that.

There are plenty of tools out there to help track stories, points and sprints (oh my!). Pick a lightweight one that matches your natural processes. Pivotal Tracker works well; Jira works well too. Don’t overplay the tool, though. Just get work done!

Working software over comprehensive documentation

Documentation has been the bane of my existence as a software engineer for my entire career. One of my first jobs was on a CMMI-certified project. Talk about documentation for documentation’s sake. Documents that never get read are seriously stupid. Talk about a waste of time!

No documentation is even stupider, in my opinion. Remember that proverbial bus? When your strongest developer gets a much higher paying job (or gets hit by the proverbial bus) and doesn’t come to work, how do you know what the heck he was working on? Sure, there are other team members, but in my experience, you typically build a lot of domain knowledge in one (maybe two) developers.

Here’s my advice: document the process and the components. How does code get from the source code repository to a production environment? What steps are automated, and what steps are manual? How does data flow from start to finish? Which components pick up data when it comes from a source system? If you want to document interfaces, that’s a great next step. Document your APIs, including inputs/outputs, required fields, etc. Define what it takes to integrate with other pieces of tech.

Find the balance between useful and wasteful.   It’s not always a clear answer, but lean to the side of less.

Customer collaboration over contract negotiation

Flash back to the days of waterfall. Those days are not totally gone, unfortunately. Much like the previous point about over-documenting, this tenet is similar. Think about selling a house. You’re the seller and I’m the buyer. You write an offer. I read it and send a counter offer. You sign it and we’re done! We never had to meet. We didn’t talk, and heck, we were separated by another layer: the real estate agent. This is the extreme of waterfall. Your customer writes up a gigantic requirements document, ships it over the wall (via a people person, darn it) and the development staff has at it. Six to twelve-plus months later, the developers show the product to the customer, and guess what: BAM, it is NOT what they wanted. Are you surprised?

Now imagine the customer and the engineering team working together in much faster cycles. Instead of the customer seeing a quarterly release (or a final release, I shudder to think), imagine them seeing daily progress, heck, hourly progress. Embed a customer representative with the developers. Then the real-time collaboration hits, and boy is it magical. “What do you think of this color?”, or, “Here’s how the data is flowing; how does this look for a user interaction?” Real-time (or close to it) feedback is the best way to make sure the final result is really what the customer wants.

Responding to change over following a plan

Ever met someone who hates change? We all get in our groove and really don’t like when others rock the boat, but there are folks who truly cannot deal with change. It gives them anxiety, and it’s a weird thing to observe. One of my natural traits is that I’m a rule follower***. I understand the value and comfort a plan provides. I’m also a realist and understand that the best laid plans (of mice and men) NEVER stick. Timelines change, people get sick and, most importantly, requirements change! (I’m sensing a reason these points are laid out the way they are in the manifesto.) Go with the flow! If you’re 20% through a project and, BAM, you get the best idea ever, then go ahead with your customer and figure out how (or if) to change the project, timelines, etc. Your customer might change what they want from you. If you lose a few weeks or months of time because of it, that’s up to them. Don’t stress it (but keep notes showing that it was the customer that changed plans mid-project; don’t get dinged in your performance appraisal!).

There’s another side to this, though. I’ve worked at two startups where we were building a product that had to be sold in the market. Every week, our sales guys would come in and say, “Customer XYZ needs feature ABC in order to close this 300 million dollar deal!!” It’s really easy to let the tail (sales) wag the dog (product and the overall roadmap). Don’t chase every sale by losing sight of your product roadmap. Having a strategy is key. Product management needs to be really clear and intentional if they change product delivery timelines or feature priorities when sales comes calling. Good product management and executive leadership will never let a sale change the vision or strategy, but they may allow a slight deviation from the original plan in order to reprioritize! (Although if it really were a 300 million dollar sale, that may be big enough to derail a lot!)


The tenets laid out in the Agile Manifesto were put together by some really, really smart people. Don’t let the ceremonies and the calendar be the only things that ‘make you agile’. For me, “Individuals and Interactions Over Processes and Tools” is the foundation for the rest of the bullets. Write good software, and don’t focus on tools that don’t help write good software.


*** (More to come on what that means, but it shows mainly in respecting org charts, staying within the roles and responsibilities I’m given, etc. I do well in chaotic, flexible environments, and love when my rule is “no rules!”)

Cross Posted @ LinkedIn


Filed under Uncategorized

Enough About Enough

One of my last posts was all about work-life balance: “Your work life balance sucks“. We all have issues with prioritizing our time and allocating enough to each bucket of our lives: family, work, self, others. Here, I want to talk about your knowledge, and how it’s important to be intentional in that aspect of your life too.

I seem to be surrounded by people who either know the nitty-gritty detail of the technology or subject matter they are engrossed in, OR are the stereotypical “jack of all trades”. You know the two types. The first, you can’t have a discussion with. They know WAY more than you do, and take the conversation very deep, very quickly. They miss the forest for the trees. These are your experts. This person knows the ins and outs of the tool you’re using, and their knowledge allows you to be confident in your approach and in whether an idea will work or not.

The second type of person is the typical ‘jack of all trades’. I say typical because, in this case, they know a little about a lot. I’d even say a little about ‘some’ things. In the technology world, this person can work around a bash shell, write database queries and make some updates to a web page. The counter to this is the Java developer who doesn’t know how to write a query, or the web designer who doesn’t know a lick of HTML or CSS. My point here is that this person is wide and shallow, as opposed to the first person, who’s super deep but very narrow.

The mental image of an iceberg just came to me. You know, something like this:

The first person will tell you about that little piece of ice that sits at the bottom of the iceberg. While that’s important to someone, and is completely valid knowledge, 99% of us don’t care, and unless the conversation is about the bottom tip of the iceberg, it’s inappropriate.

The second person can tell you generally about icebergs, but maybe only if they’re covered in snow. If it’s just ice, they may not recognize it.

The challenge a lot of us have is how to balance in the middle of this scale. Depending on your role, you need to find your sweet spot. For me, as a consultant/architect/VP of Engineering, I need to know “enough about enough”. I need deep and wide. I’d argue that I have to know 80% about the iceberg but, more importantly, know how ice works well enough to make some assumptions that can later be validated or experimented on.

In the world of technology, this manifests in a lot of different ways. Mostly, it comes down to being educated enough to decide between two or more options and picking an initial path to go down. Now, anyone can pick a path, but the sweet spot means being able to get to the fork in the path as soon as possible to determine whether it’s the right one. Which database engine do we pick? Which time-series data storage do we use? Which visualization engine will work with our Python framework? Etc.

There’s absolutely no way anyone can know everything about everything. Seriously, look at the ecosystem for devops these days (back when I was first writing code, we didn’t have no automation!). It’s amazing! There are dozens of tools that do almost the same task. They overlap, and they have their sweet spots too, but it takes a special kind of person with a very specific set of skills (hmm… I’ve heard that somewhere) to determine which tool to use in a specific situation.

I want to say this is another instance of the 80/20 rule, but not exactly. Let’s go with that anyway. Instead of learning 100% of the details of something, spend 80% of the time, then keep going on to other things. Don’t be so narrowly focused. Think about the days of Turbo Pascal. If all you knew was 100% TP, how’s the job market these days?

Balance that against only learning 20% about something: you will never be seen as an expert. No matter what the subject matter is (technologies, development approaches, managerial styles, etc.), you need to be deep enough to be an authority and make an impact on the organization you’re in if you want to excel and succeed.

Everything in life needs a balance.  Diet, exercise, hobbies, work/life, etc.  Knowledge and learning is in there too.  Be intentional about where you focus your energies in learning about something new, and figure out how much is enough.


Filed under career, software development, technical

Suricata Stats to Influx DB/Grafana

For everyone unfamiliar, Suricata is a high performance network IDS (Intrusion Detection System), IPS (Intrusion Prevention System) and NSM (Network Security Monitor).  Suricata is built to monitor high speed networks to look for intrusions using signatures.  The first/original tool in this space was Snort (by Sourcefire, acquired by Cisco).

NSM mode for Suricata produces some pretty fantastic outputs. In the early days of nPulse’s pivot to a network security company, I built a prototype ‘reassembly’ tool. It would take a PCAP file, shove the payloads of the packets together, in order, by flow, and extract a chunk of data. Then I had to figure out what was in that jumbo payload. Finding things like FTP or HTTP was pretty easy… but then the possibilities became almost endless. I’ll provide links at the bottom for Suricata and other tools in the space. Suricata can do this extraction, in real time, on a really fast network. It’s a multi-threaded tool that scales.

Suricata can log alerts, NSM events and statistics to a json log file, eve.json.  On a typical unix box, that file will be in /var/log/suricata.  The stats event type is a nested JSON object with tons of valuable data.   The hierarchy of the object looks something like this:

Stats layout for the Suricata eve.json log file

For a project I’m working on, I wanted to get this Suricata stats data into a time-series database. For this build-out, I’m not interested in a full Elasticsearch installation like I’m used to, since I’m not collecting the NSM data, only this time-series data. In my experience, RRD is a rigid pain, and graphite (as promising as it is) can be frustrating as well. My visualization target was Grafana, and it seems one of its favored data storage platforms is InfluxDB, so I thought I’d give it a shot. Note that Influx has the ‘TICK’ stack, which includes a visualization component, but I really wanted to use Grafana. So, I dropped Chronograf in favor of Grafana.

Getting Telegraf (machine data: CPU/memory) injected into Influx and visualized within Grafana took 5 minutes. Seriously. I followed the instructions and it just worked. Sweet! Now it’s time to get the Suricata stats file working.

A snippet of the structure shown above:

   "timestamp": "2016-06-27T14:38:34.000147+0000",
   "event_type": "stats",
   "stats": {
      "uptime": 245534,
      "capture": {
         "kernel_packets": 359737,
         "kernel_drops": 0
      "decoder": {
         "pkts": 359778,
         "bytes": 312452344,
         "invalid": 1000,
         "ipv4": 343734,
         "ipv6": 1,
         "ethernet": 359778,
         "raw": 0,

As you can see, the nested JSON is there. I really want that “343734” “ipv4” number shown over time, in Grafana. After I installed Logstash (duh) to read the eve.json file, I had to figure out how to get the data into Influx. There is a nice plugin to inject the data but, unfortunately, the documentation doesn’t come with good examples, ESPECIALLY good examples using nested JSON. Well, behold: here’s a working document which gets all the yummy Suricata stats into Influx.

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}

output {
    influxdb {
        data_points => {
            "event_type" => "%{[event_type]}"
            "stats-capture-kernel_drops" => "%{[stats][capture][kernel_drops]}"
            "stats-capture-kernel_packets" => "%{[stats][capture][kernel_packets]}"
            "stats-decoder-avg_pkt_size" => "%{[stats][decoder][avg_pkt_size]}"

            =======TRUNCATED, FULL DOCUMENT IN GITHUB ==========

            "stats-uptime" => "%{[stats][uptime]}"
            "timestamp" => "%{[timestamp]}"
        }
        host => ["localhost"]
        user => "admin"
        password => "admin"
        db => "telegraf"
        measurement => "suricata"
    }
}

WHOAH! That’s a LOT of fields. Are you kidding me?! Yep, it’s excellent: tons of data will now be ‘graphable’. I whipped together a quick Python script to read an example of the JSON object and spit out the data_points entries, so I didn’t have to type anything by hand. I’ll set up a quick gist in GitHub to show my work.
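Here’s a sketch of that generator (the input file name, and the choice to only descend into event_type and stats, are my assumptions):

import json

def emit(obj, keys=()):
    # Recursively walk nested dicts, printing one logstash data_points
    # entry per leaf, e.g. "stats-uptime" => "%{[stats][uptime]}"
    for key, value in sorted(obj.items()):
        path = keys + (key,)
        if isinstance(value, dict):
            emit(value, path)
        else:
            name = "-".join(path)
            field = "".join("[%s]" % part for part in path)
            print('"%s" => "%%{%s}"' % (name, field))

# stats_example.json holds one stats event copied out of eve.json
with open("stats_example.json") as f:
    event = json.load(f)

emit({"event_type": event["event_type"], "stats": event["stats"]})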

Let’s break it down a little bit.

This snippet tells Logstash to read the eve.json file, and that each line is a JSON object:

input {
    file {
        path => "/var/log/suricata/eve.json"
        codec => "json"
    }
}

This section tells Logstash to drop every event that does not have an “event_type” of “stats”:

filter {
    if !([event_type] == "stats") {
        drop { }
    }
}

Suricata also has a dedicated stats log file that could probably be used by Logstash, but I may do that another day. It’s way more complicated than JSON.

The last section is the tough one. I found documentation that showed the general format of “input field” => “output field”, but that was it. It took a ton of time over the past working day to figure out exactly how to nail this. First, fields like ‘host’, ‘user’, ‘password’, ‘db’ and ‘measurement’ are very straightforward Influx concepts. A db is much like a namespace, or a ‘table’ in the traditional sense. A db contains multiple measurements. The measurement is the thing we’re going to track over time. In our case, these are the actual low-level details we want to see, for instance ‘stats-capture-kernel_drops’.

Here’s an example:

"stats-decoder-ipv4" => "%{[stats][decoder][ipv4]}"    

On the left, ‘stats-decoder-ipv4’ is the measurement name that will end up in InfluxDB. The right side is how Logstash knows where to find the value in this event from eve.json. The %{ indicates the value will come from the record; then Logstash just follows the chain down the JSON document: stats -> decoder -> ipv4.

That’s it: the Logstash config, eve.json, the little Python script, and here’s a picture!

Grafana screenshot showing CPU, memory and Suricata information

Pretty straightforward. Example configuration for one of the stats:

Configuring a graph in Grafana

Please leave me a comment! What could be improved here? (Besides my Python not being elegant… it works. And sorry, Richard, but I’m a spaces guy; tabs are no bueno.)

  1. GITHUB:
  2. Suricata:
  3. Snort (Sourcefire, acquired by Cisco):
  4. Grafana –
  5. InfluxDB/TICK stack:
  6. Chronograf:
  7. Telegraf:
  8. Logstash:


June 27, 2016 · 3:42 PM

SELinux: Causing a pain, time and time again

Once again, SELinux bit me. What a pain. It’s good for something, I’m sure, but dang, it’s always to blame.

Trying to set up an Apache reverse proxy, I kept getting a 503 error:

Permission denied: AH00957: HTTP: attempt to connect to ( failed

Did some googling, and thanks to Justin Ellison, who saved the day.

Simple command to allow the reverse proxy (the -P flag makes the change persist across reboots):

/usr/sbin/setsebool -P httpd_can_network_connect 1





Filed under simple tips, technical

Tencent buys majority Supercell stake (for more than a few cents!)

Oh my goodness. Not since the Instagram acquisition have I been this BLOWN AWAY by the valuation of a company. I realize that I know nothing about the revenue of Supercell, but a $10B valuation? Wow! Their new app, Clash Royale, has definitely swept the household’s phones/iPods, etc., and that’s ALL the kids are playing these days.




Filed under news

Your work-life balance sucks

Cross posted at:

From wikipedia:

Work–life balance is a concept including proper prioritizing between “work” (career and ambition) and “lifestyle” (health, pleasure, leisure, family and spiritual development/meditation). This is related to the idea of lifestyle choice.

What does that mean?

  • If you’re working more than 40 hours a week, you are bad at balancing your work and your non-work life. You should quit your job and work at Walmart where, if you read the same hype I do, you can’t even get a solid 40!
  • Only work 20 hours a week? Quit slacking! Balance work and life, but only up to 40 hours…

Am I allowed to say “Just kidding” in a blog post? Seems too unprofessional… but JUST KIDDING!

Work-life balance is a goal one has to strive for. If you work too much, your family and friends will feel neglected (ever put a family in The Sims into a room and removed the door? They feel neglected). If you don’t work enough, you won’t earn enough money and your family will not eat; probably the same result as The Sims with a removed door.

In order to succeed, like most aspects of life, you need to be intentional about your work life balance.  It can’t just magically happen and be good.   Some of us tend to over work, some of us tend to under work.  A lot of us are not good at being intentional about family time.

Guidance #1:

Think about your life as a whole. On your deathbed, would you regret spending too much time with family, or too much time at work? This is the ultimate target. In your life in its entirety, you need to have a focus on family. Leisure activities and memories are invaluable. Whether you die rich or poor, I can guarantee you won’t be thinking about how much money is in the bank when you’re dead. You’ll be thinking about the love and experiences you’ve had. The adage “Work to live, don’t live to work” is embodied here. Prioritize your life over your work, in the long run, and you will not have any regrets.

Guidance #2:

The balance changes at various stages of our lives. Single and right out of college? Work hard during the day and have fun in the evenings and weekends! Have kids? Work your tail off during the day, then go home and forget about work. Weekends: family time. Kids older and out of the house? Crank up the work if you still need to. There are various ‘macro’ stages in your life and career. Sometimes work has the higher percentage, sometimes life has the higher percentage. Things change and are fluid. Heck, there are micro changes in life too. In the software development profession, we have times when stuff breaks and we have to scramble to fix it. Project deadline coming up and we’re behind? Crap, time to work more. These are short sprints of extra work to meet a deliverable or to fix a mistake. It happens. It sucks. Family can get mad, but it’s short-lived. Do NOT do that to yourself for a long time.

Guidance #3

It’s no one’s fault but your own. This one gets me in trouble when I talk to ‘certain’ people. “My boss made me stay late again last night.” “Once the kids went to bed, I knocked out another few hours last night.” “I worked all weekend to get my presentation ready for Monday.”

Those kill me, because I know that’s not how life is supposed to be. Again, once in a while, no big deal. But there are tons of people I work with for whom it’s habitual, and that’s all they do. You know the type: burned out, unhappy and, frankly, not really impressing anyone at work. If it takes you 70 hours a week to do a 40-hour job, you either haven’t learned how to say “no” (which is absolutely a learned skill) or your time management stinks. Both are bad news and need to be worked on. The good news is that both can be fixed!


You decide how much you work. No one else does. If you have a boss that ‘demands’ 60 hours for a traditionally 40-hour-a-week job, then those are unrealistic and unsustainable demands. If you ‘phone it in’ after 35 hours for a traditionally 40-hour-a-week job, then pick it up. Hold your own and work for what you’re paid.

Realize that work changes over time. Sometimes you have to prove yourself, work extra hard, meet a crunch deadline, etc. If you’re up every night and working on weekends for a traditionally 9-to-5, Monday-to-Friday job, then you may have a problem. Take some time and figure it out. It may take some uncomfortable conversations with your manager, but do it. Life is too short, and too important, to de-prioritize.




Filed under Uncategorized

Quit your job already!

Today is day two at PrairieFire for me. A few brave souls are working to create a new tool for the cybersecurity industry. This is my second time around as a very early employee at a startup, and I’m VERY excited about it. The funny thing I noticed is…

“Congrats on the new job! Let me know if you’re hiring XYZ”

“Way to go! Position sounds great, when will you be hiring a ABC?”

How many folks out there are unhappy in their jobs? Is it the day-to-day rut you can’t stand? Is it your boss? Why the heck don’t you do something about it?

In my career, I’ve only left a job “to leave the job” once or twice that I can think of. The other times, it was to GO somewhere else for a reason: relocation, compensation (the dumb one, by the way) or an amazing opportunity to build something new (nPulse!). This time, leaving Capital One was exactly the same as before. I had a good job, wasn’t really looking for a new one, and BAM, a REALLY good opportunity showed up.

My career advice, since you’re reading: don’t stay in a job that you hate! Work can be fun; it can be challenging and exciting. Don’t stay somewhere because it’s “secure”, and don’t be afraid of leaving and trying something new. I can’t tell you how many people have said, “Wow! I want to go work at a startup because…”. Just find it and do it! Find a job you really enjoy. Don’t get stuck in the traps that hold us back.

Stupid reasons to stay in a job:

  • Job security, need that pay check!
  • After this year’s performance review/bonus/raise, I’ll start looking
  • My boss sucks, but one of us will find a new position in the company soon
  • We have great benefits here!

Sure, a lot of those SOUND like good reasons to stay in a job (kind of?), but don’t let comfort or the myth of stability hold you back. Find something you like and enjoy. It’s the age-old question:

“What would you do if you had a million dollars?”

No, it’s not about your cousin and low-risk mutual funds, and no, you can’t do “nothing”. Is there a task you enjoy doing so much that it doesn’t seem like work? Maybe it won’t pay as much at first, but it will probably pay a ton more in the long run. Think about it: will you get a better raise doing something that bores you to death, or something that really lights your fire and gets you pumped up to go to work every day?

Check out the book, 48 Days to the Work You Love, by Dan Miller:

(( This may seem like a little stab at my last job, and it’s absolutely not. Capital One is doing some REALLY cool things in technology: investing in startup companies, embracing cloud and open source across the board. It’s a VERY exciting time to be there, and my bosses were good too. Ha! ))

Cross posted at:


Filed under career