De-Coder’s Ring

Consumable Security and Technology


AWS Fargate: What ECS Should Have Been in the First Place

Introducing AWS Fargate

AWS has been able to manage and run Docker containers for a long time using ECS, the Elastic Container Service. I found that it's difficult to operate unless you start with a solid understanding of the service and manage it with infrastructure as code. In startup mode, that's not always easy, so I steered myself wrong and got stuck in a manual maintenance mode of ECS.

ECS lets you store Docker images in a registry, such as Docker Hub or any other Docker registry. You then create Tasks, which are the definitions of how to run a Docker image. The task definition configures things like port mappings and disk mounts, and keeps a link to the tagged Docker image. The next step to actually run the task is to set up a Service. The service keeps a certain number of tasks running, determines which ECS cluster the tasks run on, and defines the conditions to auto scale the service by adding or removing instances of the task/image.
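To make that flow concrete, here's a minimal sketch of registering a task definition and creating a service with boto3. The family, image, cluster, and service names are placeholders, not anything from a real account.

```python
import boto3

ecs = boto3.client("ecs")

# Task definition: how to run the tagged image, including port
# mappings and resource limits (names and image are placeholders).
ecs.register_task_definition(
    family="my-web-app",
    containerDefinitions=[{
        "name": "web",
        "image": "myrepo/my-web-app:1.0",   # tagged Docker image
        "memory": 512,
        "portMappings": [{"containerPort": 8080, "hostPort": 8080}],
    }],
)

# Service: keep two copies of the task running on an existing ECS cluster.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-web-service",
    taskDefinition="my-web-app",
    desiredCount=2,
)
```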

Here’s a snippet from the AWS blog:

AWS Fargate is an easy way to deploy your containers on AWS. To put it simply, Fargate is like EC2 but instead of giving you a virtual machine you get a container. It’s a technology that allows you to use containers as a fundamental compute primitive without having to manage the underlying instances. All you need to do is build your container image, specify the CPU and memory requirements, define your networking and IAM policies, and launch. With Fargate, you have flexible configuration options to closely match your application needs and you’re billed with per-second granularity.

Fargate solves an important pain point with ECS: the ECS cluster of EC2 instances. You, as an administrator or DevOps engineer on your AWS account, need to provision an ECS cluster. That's a fancy, abbreviated way of saying: "Create an auto scaling group, using a launch configuration with User Data configured to join the ECS cluster that you've defined." Not difficult by any stretch, but it's always felt like a layer that shouldn't be there.

I always found auto scaling to be a challenge. Are you auto scaling your ECS service? Are you auto scaling the ECS cluster? Yeah, you kind of have to do both.
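The service half of that is handled through Application Auto Scaling. Here's a minimal sketch with boto3, using the same placeholder cluster and service names as above; scaling the EC2 instances underneath is still a separate auto scaling group exercise.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Scale the ECS service (the number of running tasks) between 2 and 10 copies.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-web-service",   # placeholder names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target-tracking policy: add or remove tasks to hold roughly 50% average CPU.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```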

Frankly, since I was primarily in startup mode when I was dealing with ECS, and believe me, I dealt with ECS a ton, I never took the time to figure out how to ‘get it right’…

I got it working. In startup land, getting it working is first and foremost… getting it done “right” is secondary (as long as you aren’t getting it done awfully poorly)…

Back to the point of Fargate. This is a major simplification of the ECS/Docker process. Now you can configure a group of ECS tasks to run without provisioning or managing an EC2 cluster at all.
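Here's a minimal sketch of what that looks like with boto3: the same kind of task definition (assumed to use awsvpc networking with task-level CPU and memory), launched with the FARGATE launch type. The subnet ID is a placeholder, and there are no EC2 instances anywhere in the picture.

```python
import boto3

ecs = boto3.client("ecs")

# Run two copies of the task on Fargate; AWS supplies the compute,
# so there is no ECS cluster of EC2 instances to manage.
ecs.run_task(
    cluster="default",
    launchType="FARGATE",
    taskDefinition="my-web-app",          # assumes awsvpc mode + task-level cpu/memory
    count=2,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```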

The magic happens behind the scenes, managed 100% by AWS.  You are even presented with a “Fargate” cluster when you look at your ECS clusters in the web user interface for ECS.


Amazon has taken away the need to be particular about how your tasks are running across your instances.  You don’t have to stress about making sure you’re using your ECS Cluster optimally. AWS takes care of scaling your tasks to meet your jobs’ needs.

This simplification makes Docker containers first-class citizens within AWS. This is a huge change and will definitely streamline administration and provisioning of your containers.


Podcast – Jon Bodner

This week, we branch out a little from the previous topics of Cyber Security, which we’ll be back to soon, don’t worry.   (that was a lot of commas in that sentence)

Today is an interview with Jon Bodner. Jon is a very popular writer and speaker on all things software development, with a new focus on the Go programming language. I'll add a few links below to Jon's various talks, but please subscribe and listen to this episode!

You got your engineering in my data process

GopherCon2017 – Runtime Generated, Typesafe, and Declarative: Pick Any Three

The Story Behind Capital One’s Fork Of An Open Source Project

Routing Messages through Kafka

I'm going through a major project with a client regarding a migration to a Kafka-based streaming message platform. This is a heckuva lot of fun. Previously, I've written about how Kafka made the message bus dumb, on purpose: find it here.

Building out a new streaming platform is an interesting challenge. There are so many ways to handle the logic. I'll provide more details as time goes on, but there are at least three ways of dealing with a stream of data.

The Hard Wired Bus

A message bus needs various publishers and subscribers. You can very tightly couple each service by having it be aware of what's upstream and what's downstream. Upstream is where the message came from; downstream is where it goes next. When each component is aware of the route a message must take, the system becomes brittle and hard to change over time. Imagine spaghetti.

The Smart Conductor

A conductor travels on the train and determines the best route while moving along the tracks. The conductor can handle every message after every service to determine which service is next in line. This cleans up the function of each service along the line, but it makes the conductor pretty brittle too. The more complex the system gets, the more complex the conductor gets. A rules engine would be a great component to add to the conductor if you choose this path.

The Map Maker

A map maker plots a route for a hiker to follow. In our case, the map maker is a component that sits at the very beginning of every stream. When an event arrives on the map maker's topic (in Kafka), the map maker determines the best route for the message to take. Using metadata or embedded data in the event, the map maker can send the route down the chain with the event itself. Each service can use a common library to read from its configured topic, let the custom service do some work, and then use the wrapper again to pass the message downstream. The biggest advantage here is that each service doesn't care where the message comes from or where it goes next. This works great for streams that are static, where the route can be determined up front. If there are decisions downstream, then you may need a 'switch' service that is allowed to update the route.
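Here's a minimal sketch of the map maker idea using the kafka-python client. The broker address, topic names, and routing rules are all hypothetical; the point is that the route rides along inside the event, and each hop just pops the next stop off the list.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

BROKERS = "localhost:9092"               # placeholder broker address

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def plan_route(event):
    """Map maker: decide the whole route up front from event metadata."""
    if event.get("type") == "invoice":
        return ["validate", "enrich", "billing"]   # hypothetical topics
    return ["validate", "archive"]

def forward(event):
    """Shared wrapper: send the event to whatever topic is next on its route."""
    if event["route"]:
        producer.send(event["route"].pop(0), event)

# The map maker sits on the entry topic and stamps the route onto each event.
consumer = KafkaConsumer(
    "incoming-events",                   # hypothetical entry topic
    bootstrap_servers=BROKERS,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for msg in consumer:
    event = msg.value
    event["route"] = plan_route(event)
    forward(event)   # downstream services call the same forward() after their own work
```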

What’s the best path for your application?    Have something better that I haven’t thought of yet?


Threat Hunting: Wireshark

Here's the delayed 4th video: Wireshark!

I do a quick overview of loading a PCAP file in Wireshark and doing some analysis of packets and TCP reassembly.
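The video sticks to the Wireshark GUI, but if you want to poke at the same PCAP from a script, here's a minimal sketch using scapy. The filename is a placeholder, and the payload grouping is only a rough stand-in for Wireshark's real TCP reassembly (it ignores sequence numbers and retransmissions).

```python
from scapy.all import rdpcap, IP, TCP

# Load the capture file (placeholder filename) and walk the packets.
packets = rdpcap("capture.pcap")
print(f"{len(packets)} packets loaded")

# Group TCP payload bytes by conversation, roughly like "Follow TCP Stream".
streams = {}
for pkt in packets:
    if IP in pkt and TCP in pkt and bytes(pkt[TCP].payload):
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        streams[key] = streams.get(key, b"") + bytes(pkt[TCP].payload)

for key, data in streams.items():
    print(key, len(data), "payload bytes")
```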

Sign up for my mailing list above to get information on new podcasts and videos.

This is the last step in the education before jumping into Suricata next time!

Network Monitoring on the Cheap

I've regularly blogged about Suricata, Logstash and Elasticsearch. Shoot, I've built multiple successful commercial tools using that technical stack. The thing that made us successful wasn't the tech; it was how we used the tech to solve a problem our customers had at that moment in time.

Now it’s time for me to share the secret on how to do it.

Ok, not a secret at all.  If you google, you can figure it out.

With this podcast, I want to introduce the topic to put some context around why those tools are the right tools.

I want to evangelize the idea of EVERYONE monitoring their home or work network with basic rules from places like Emerging Threats. It's free, and it's invaluable for finding and stopping malware and viruses on your network. Do it now!
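To put a little code around how those pieces snap together, here's a minimal sketch that reads Suricata's eve.json output and indexes the alerts into Elasticsearch with the official Python client. In a real deployment, Logstash does this job (see the wiki link below); the file path, URL, and index name here are assumptions.

```python
import json
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local Elasticsearch

# Suricata writes one JSON event per line to eve.json (path is an assumption).
with open("/var/log/suricata/eve.json") as eve:
    for line in eve:
        event = json.loads(line)
        if event.get("event_type") == "alert":
            # Index each alert so Kibana can search and graph it.
            es.index(index="suricata-alerts", document=event)
```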

Suricata

https://www.elastic.co/

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output

https://rules.emergingthreats.net/open/suricata-1.3/

Subscribe here : https://fauie.com/feed/podcast


© 2017 De-Coder’s Ring
