I built out a new Elasticsearch 5.4 cluster today.

Typically, it’s a tedious task. I haven’t invested in any sort of infrastructure automation technology because, well, there aren’t enough hours in the day. Then I remembered a trick a few of us came up with at a bank I used to work for: put a shell script in AWS S3, have the EC2 user data (init) script download it, and bam, off to the races!

I won’t give away any tricks here, since my boss would kick me… again, but since my team and I used this process heavily in the past, I don’t mind sharing.

We didn’t use it specifically for Elasticsearch, but you’ll get the gist of how to apply it to other applications.

First step: upload the script to AWS S3. Here, I’ll use an example bucket of “notmybucket.com” – that’s my bucket, don’t try to own it. For reals.

Let’s call the script “provision.es.sh”
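With the AWS CLI, getting it up there is one command (assuming your credentials are already configured):

```bash
# Upload the provisioning script to the example bucket
aws s3 cp provision.es.sh s3://notmybucket.com/provision.es.sh
```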

The provision file can look something like this:
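Something along these lines. This is a minimal sketch, not the exact script: the ES version, the RPM install steps, and the file paths are my assumptions, and it assumes Amazon Linux with the AWS CLI already on the box:

```bash
#!/bin/bash
# provision.es.sh -- pulled from S3 by the EC2 user data script and run at boot.
# Sketch only: assumes Amazon Linux (yum/service) and Elasticsearch 5.4.
set -euo pipefail

BUCKET="notmybucket.com"   # change the bucket to your bucket
ES_VERSION="5.4.0"

# Java 8 is required for Elasticsearch 5.x
yum install -y java-1.8.0-openjdk

# Install Elasticsearch from the official RPM
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
rpm -ivh "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ES_VERSION}.rpm"

# EC2 discovery plugin so the nodes can find each other by security group
/usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2

# Pull down the config template and fill in this node's name
aws s3 cp "s3://${BUCKET}/elasticsearch.data.yml.template" /tmp/elasticsearch.yml.template
NODE_NAME="$(hostname -s)"
sed "s/NODE_NAME/${NODE_NAME}/" /tmp/elasticsearch.yml.template > /etc/elasticsearch/elasticsearch.yml

# Start ES now and on every reboot
chkconfig elasticsearch on
service elasticsearch start
```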

You’ll see a reference to an elasticsearch.data.yml.template in there… that one’s super simple:
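Mine looks roughly like this. The cluster name, the security group name, and the NODE_NAME placeholder are just example values, not the real ones:

```yaml
# elasticsearch.data.yml.template -- copied to /etc/elasticsearch/elasticsearch.yml
# by provision.es.sh, with NODE_NAME swapped in per host.
cluster.name: my-es-cluster
node.name: NODE_NAME
network.host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: my-es-security-group
discovery.zen.minimum_master_nodes: 2
```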

I made up the security group, etc… configure the security group to whatever you’re actually using for your ES cluster, and change the bucket to your bucket.

Each ES host needs a unique name (beats me what happens to Elasticsearch if you run multiple nodes with the same name… they’re geniuses… it’s probably fine, but you can test that one, not me). Alternatively, use your instance ID as your node name!
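If you go the instance ID route, the EC2 instance metadata service hands it to you in one line:

```bash
# Use the EC2 instance ID as the Elasticsearch node name
NODE_NAME="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
```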

Then your user init data looks super stupid and simple:
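Something like this, reusing the example bucket and script name from above. It assumes the instance profile (IAM role) can read from the bucket; if not, make the object public and curl it instead:

```bash
#!/bin/bash
# EC2 user data: grab the provisioning script from S3 and run it.
aws s3 cp s3://notmybucket.com/provision.es.sh /tmp/provision.es.sh
chmod +x /tmp/provision.es.sh
/tmp/provision.es.sh
```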

Paste that into the “User data” field (under Advanced Details) when you launch the instance.

Once you complete your EC2 creation, you can verify the output in:

/var/log/cloud-init-output.log
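If you want to watch it happen, or sanity-check afterward that the node actually joined the cluster, something like:

```bash
# Watch cloud-init run the user data / provision script
tail -f /var/log/cloud-init-output.log

# Once Elasticsearch is up, confirm the node joined the cluster
curl -s http://localhost:9200/_cat/nodes?v
```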

 
