De-Coder’s Ring

Software doesn’t have to be hard

Category: software development

Neo4J Tutorial – Published Video!

You may have noticed my stream-of-thought posts on Neo4j.  It's pained me, cause you know, drawing the balls is fun.

Today, I get to announce a published video tutorial on Neo4J by Packt Publishing!

We developed an in-depth course covering a bunch of graph database and Neo4j topics, including:

  • Installation
  • What is a graph database?
  • Comparing graph databases to relational databases
  • Using the Cypher Query Language (Cypher, CQL)
  • Looking at various functions in Neo4j
  • Query profiling

It was a ton of work over a few months, but the support from Packt was great.  I'm really looking forward to getting feedback on the course!

Packt Publishing -  Learning Neo4j Graphs and Cypher





Simple Tip: Provision an Elasticsearch Node Automatically!

I built out a new Elasticsearch 5.4 cluster today.

Typically, it’s a tedious task.   I haven’t invested in any sort of infrastructure automation technology, because, well, there aren’t enough hours in the day.  But I remembered a trick a few of us came up with at a bank I used to work for: put a shell script in AWS S3, download it from an EC2 user-data init script, and bam, off to the races!

I won’t give away any of our current tricks here, since my boss would kick me… again, but since this process was used heavily by me and my team previously, I don’t mind sharing.

We didn’t use it specifically for Elasticsearch, but, you can get the gist of how to use it in other applications.

First step: upload the script to AWS S3.    Here, I’ll use an example bucket of “” – that’s my bucket, don’t try to own it.  For reals.

Let’s call the script “”

The provision file can look something like this:

You’ll see reference to an that’s super simple:

I made up a security group, etc.  Configure the security group to whatever you’re using for your ES cluster, and change the bucket to your bucket.

Each ES host needs a unique name (beats me what happens to Elasticsearch if you have multiple nodes with the same name… they’re geniuses, it’s probably fine, but you can test that, not me).  Alternatively, use your EC2 instance ID as your node name!
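For instance, here's a minimal Python sketch of that instance-ID idea. The metadata URL is the standard EC2 one, but the `es-` prefix and the yml path in the comment are just my assumptions, not anyone's standard:

```python
import urllib.request

# Standard EC2 instance metadata endpoint (only answers from inside an instance).
METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"


def fetch_instance_id(url: str = METADATA_URL) -> str:
    # Ask the metadata service for this instance's ID at boot time.
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode().strip()


def node_name_line(instance_id: str) -> str:
    # Line to append to /etc/elasticsearch/elasticsearch.yml so every
    # node in the cluster gets a name that is unique per instance.
    return f"node.name: es-{instance_id}"
```

Your provisioning script would call `fetch_instance_id()` on the instance and append the result of `node_name_line()` to the config before starting Elasticsearch.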

Then your user init data looks super stupid and simple:

Add user data
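To give the flavor (since I'm keeping our real script to myself), here's a hedged Python sketch that builds that user-data bootstrap. The bucket name, key, and file paths are all made up; substitute your own:

```python
# Hypothetical sketch: build the EC2 user-data that downloads a provisioning
# script from S3 at boot and runs it. All names/paths here are made up.
def build_user_data(bucket: str, key: str) -> str:
    """Return a user-data shell script that bootstraps from S3."""
    return "\n".join([
        "#!/bin/bash",
        # Pull the real provisioning script down from S3...
        f"aws s3 cp s3://{bucket}/{key} /tmp/provision.sh",
        "chmod +x /tmp/provision.sh",
        # ...then run it, capturing output for later debugging.
        "/tmp/provision.sh > /var/log/provision.log 2>&1",
    ])


print(build_user_data("my-example-bucket", "provision.sh"))
```

Paste the result into the EC2 "user data" box at instance creation (or pass it as `UserData` to boto3's `run_instances`); the instance's IAM role needs read access to the bucket for the `aws s3 cp` to work.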

Once you complete your EC2 creation, you can verify the output in:



Ship it!

Every time I forget the mantra “F(orget) It, Ship It”, things don’t go well.  Analysis paralysis sets in.  I develop toward a stale goal.

Historically, projects get bogged down for ages making sure everything is “perfect”.

Face it, it’s never perfect.  Ever.

This applies to software, companies, features, church activities, and anything else new and untried.  Analysis and rework are the killers of new ideas.

I build products for people to use.  I know the data that my products use.  I know some of the pain points I’m trying to solve for customers (current and future!).  It’s SO easy to say “oh dang, let’s just add this one more XYZ widget before we call it an MVP”.  It’s easier to add new features than it is to declare a product “good enough”.


Yes, I did, and will again.  Nothing is ever perfect, and “good enough” is not a declaration on the quality/reliability/security of a new piece of software code.  It’s ‘good enough’ for someone to use.  This is why we strive for a minimally viable product, or MVP.

Counter that with the bad attitude “good enough”: a statement of laziness, of lacking professional quality standards, and of not giving a crap about what happens once something leaves your desk.  That is NOT what I’m advocating for.

Draw a line in the sand

Before you build, define your target.  Define your MVP.  Define what is ‘good enough’ for your customer.   It can’t suck.  It has to add value.  It has to be easy (enough) to use.  It can’t be ugly, but it doesn’t have to be a work of art.  Ever see the first Google home page, or the first version of Splunk?   Compare them to the current interfaces.  That’s “good enough” at work.



Quick Heimdall Data Install

Has anyone played with Heimdall Data’s software?  I was introduced a few weeks ago.  They’re a super early startup (love!) with some pretty cool technology.

The feature I really latched onto was the invisible caching.  The first time I talked about a write-through cache was at nPulse, when dealing with some of the indexing we did, and there wasn’t really any easy technology to use.  Typically it’s (pseudocode, Python-ish):


What happens when you switch cache services?  Say, go from Redis (simple) to Hazelcast (complex)?

Wouldn’t it be better to just:

Yes, that didn’t save much code… but how many places in your code interact with your database?
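To make the idea concrete, here's a minimal Python sketch of that check-the-cache-then-query pattern, with a plain dict standing in for Redis or Hazelcast (the class and names are made up for illustration):

```python
class CachedDB:
    """Wrap a database call behind a cache so query sites never change."""

    def __init__(self, query_fn, cache=None):
        self.query_fn = query_fn  # the real (slow) database call
        # Swap this dict for a Redis or Hazelcast client in one place.
        self.cache = cache if cache is not None else {}

    def get(self, key):
        if key in self.cache:          # cache hit: skip the database
            return self.cache[key]
        value = self.query_fn(key)     # cache miss: go to the database...
        self.cache[key] = value        # ...and remember the answer
        return value
```

The point isn't the dict; it's that switching Redis for Hazelcast now means changing the cache object in one constructor instead of touching every query site. That's the same property Heimdall's proxy aims to give you without changing application code at all.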

Enter the Heimdall Data system.      I can write my code and connect to their proxy (since I love Python, I’ll use the proxy, but they DO have a JDBC driver; update your config and you’re off to the races).

The software identifies my SQL statements, extracts patterns, extracts parameters, and automatically sets up the cache.

Let’s follow instructions from here:


Alright, let’s start from a fresh CentOS 7 virtual machine.

Then I grabbed the service file from here:

Put the file contents in /etc/rc.d/init.d/supervisord

I had a little issue running the proxy on my VM: when it booted, it preferred, and only listened on, IPv6 addresses.  Quick fix in /etc/supervisor/conf.d/heimdallserver.conf:
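For reference, the fix was along these lines. This is a sketch from memory, and the program name, jar path, and flag are my assumptions about a typical supervisord entry for a Java service, not Heimdall's documented config:

```ini
; /etc/supervisor/conf.d/heimdallserver.conf (sketch; paths are assumptions)
[program:heimdallserver]
; preferIPv4Stack keeps the JVM from binding only to IPv6 addresses
command=java -Djava.net.preferIPv4Stack=true -jar /opt/heimdall/heimdallserver.jar
autostart=true
autorestart=true
```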



Reloaded supervisord

Waited a few minutes… tried my web browser, and there it is!

Stay tuned.  In the near future, I’ll do a video walkthrough of these steps, plus at least two more videos overall.

This is a good start for me to practice some on-screen time.   More on that to come!  I can’t wait to share the big news… you’ll hear LOTS more from me.





© 2017 De-Coder’s Ring

Theme by Anders Norén