Leaky cauldron: A case of a vulnerable container


Is your container truly secure?

Containerised application development is a common engineering practice. However, putting your code in a container doesn't make it secure out of the box. Yup, you heard that right. And believe me, you are not the only one who assumed otherwise. Count me on that list as well.

The moment of realisation dawned upon me during a conversation with our AppSec engineer, Aaron. Here is how the conversation went (at least as far as I can recall).

Me: Hey Aaron, how are you doing mate!

Aaron: Good, OZ. How about you?

Me: Same old same old. Hey, I would like to run a few things with you regarding the new service I am working on.

Aaron: Sure! Shoot

Me: Well, it will be quick as I have it all covered (me showing off). It is a containerised application, so security wouldn't be an issue...

Aaron: Hold it right there. What makes you say that?

Me: (caught a bit off-guard) Aa... as I said, it is CONTAINERISED.

Aaron: News flash, OZ: putting your code in a container doesn't make it secure out of the box. At worst, it gives you an illusion of security which can catch you off-guard.

Me: I have to be honest with you, I wasn't aware of it.

This conversation was a bit embarrassing, but I have been a developer long enough to know what I don't know, and I am never shy to admit it. It made me think about the security aspects of container-based applications and started a journey to explore the notion of security in that context.

In this blog, I will share a quick summary of the useful stuff that helped me create a better, more secure application.

Without further ado!

Drum rolls please!!


5 simple ways to improve your container security

1: Smaller container = Smaller attack surface

The container should only contain enough supporting structure (OS features, tools, libraries, running processes, etc.) to run the desired application as expected. Everything else does not belong in your container ecosystem.

Given the hereditary nature of container frameworks like Docker, you inherit not only the functionality of your base image but also its security vulnerabilities. It is important to evaluate the base image carefully.

One possible solution is to use DockerSlim, which minifies the image and can create security profiles (more on that later). Check out this short video for details.

Don’t take my word for it… see it in action!

This principle will also guide you towards a microservice-based architecture. A smaller image is easier to sync with remote repositories and improves deployment time.
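To make this concrete, here is a minimal sketch of a multi-stage build. The Go toolchain and the distroless base image are purely illustrative choices: the point is that the build tools live only in the first stage, and the final image ships nothing but the compiled artefact.

```dockerfile
# Stage 1: build with the full toolchain (illustrative Go example)
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# Stage 2: ship only the binary on a minimal base image
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image here contains no shell, package manager or compiler, which shrinks both its size and its attack surface.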

2: Contain the container

Control what the container can see and do!

The operating system (OS) provides a large number of system calls to perform various OS/kernel-related operations, like modifying I/O privileges or profiling.

It is important to control the system calls a container can make. By default, Docker applies a "seccomp" profile which whitelists a subset of calls. However, you can take it a step further and create a tighter list.

Ensure that the container runs with the least possible privileges. Aside from seccomp, you can isolate containers (limit what a container can see or do) by running them in the context of an unprivileged user. DON'T USE THE ROOT USER unless there is a clear case for it. Remember: the root user in a container is the root user outside it.
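Putting these together, a hardened docker run invocation might look like the following sketch. The user ID, profile path and image name are all illustrative; the flags themselves are standard Docker options.

```
# Run as an unprivileged user, drop all Linux capabilities and
# apply a custom (tighter) seccomp profile
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt seccomp=./tight-profile.json \
  my-app:latest
```

Dropping all capabilities and adding back only what the application actually needs is a good default posture.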

3: Run vulnerability scans

Containers are created hierarchically. Your container uses a base image that may itself inherit from another, and so on. This establishes a system built largely on vendor or third-party layers. With this model, you inherit the good, the bad and the ugly.

If your base image has a security vulnerability, guess what: it has become your application's security vulnerability. This serious problem has led to the rise of vulnerability scanning tools like Twistlock, Anchore, Clair, etc. There are a lot of options out there, open source and proprietary. In fact, Docker Enterprise also provides scanning when you push your image to its registry. Personally, I am using Twistlock and it works like a charm.

To put it simply, the main purpose of a vulnerability scanning tool is to perform a static analysis of the image's dependency tree and identify potential security loopholes by comparing it against a database of known security issues.

The scan results may look like this:

Recently, I used these tools extensively with the service I am working on. I will share my insights from a comparative analysis of the JRuby image on OpenJDK, AWS Corretto, AdoptOpenJDK and Azul Zulu. Let's keep that topic for another blog :)

Pro tip: integrate the security vulnerability scan into your CI/CD pipeline and make it a blocker for your deployment.
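As a sketch, a blocking pipeline step could look like this. Here scanner-cli is a hypothetical stand-in for whichever tool you pick (Twistlock, Anchore, Clair, etc.), and the flag names are illustrative; the real point is that a failed scan must fail the build.

```
# Build, scan, and only push if the scan passes; a non-zero exit
# code from the scanner stops the pipeline
docker build -t my-app:$BUILD_ID .
scanner-cli scan my-app:$BUILD_ID --fail-on high || exit 1
docker push my-app:$BUILD_ID
```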

4: Apply security best practices

I had to have this one on my list. It is always handy to have a tool for course correction, especially for a lazy, lousy developer like me.

Lads! Please welcome "Docker Bench for Security". Shout out to Thomas & Diogo for this Swiss army knife of a tool for Docker developers.

When you run this tool against your container or image, it executes a script that checks common best practices (like the stuff we discussed in point #2) and gives you a quick summary and report. Running it while developing apps will ensure that all bases are covered.

# clone the repo and run the script from its root
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sh docker-bench-security.sh -i <container-or-image-name>

One area of improvement for this script: I would love to have something like it as part of my continuous development process, or as a plugin for my IDE. I would like to know your thoughts. If I get 10 or more yeas, I will work on it.

5: Use signed images

This may sound obvious and like common knowledge but, more often than not, it slips through the cracks. You can blame the triviality of the task, but it can be consequential.

As a general rule of thumb, you should know the owner of any Docker image you are using (as a base or as an element of a composite). This is where "signed images" come into the picture. The idea revolves around using digital signatures to ensure integrity.

The author should sign the images and the consumer should use signed ones. Let's break this down a bit.

Author Checklist

[Image: one of my signed images #self-marketing]

– Generate a "root key" using docker trust. See the link below for more details.

– Use the root key to create a signer for the repository using docker trust signer add. The public key will be shared by the Docker Notary service.

– Use the private key to sign the tag.

– There is no need to sign all the tags. However, you should sign the ones your consumers will use, for example the latest or LTS tag.

– Make this part of your build process to avoid manual work.

– Think about whether it makes sense for you to make it an official Docker image. This is not straightforward; you need to comply with a list of requirements that may affect your timelines.

More details here.
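As a rough sketch of the author-side flow using the docker trust subcommands (the signer name "alice" and the repository name are illustrative):

```
# Generate a key pair for the signer "alice"
docker trust key generate alice

# Register alice as a signer for the repository
docker trust signer add --key alice.pub alice myorg/my-app

# Sign (and push trust metadata for) a specific tag
docker trust sign myorg/my-app:latest

# Verify the signatures on the tag
docker trust inspect --pretty myorg/my-app:latest
```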

Consumer Checklist

– Use official or user-signed images.

– Enforce this check on the docker daemon to avoid non-compliance.
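On the consumer side, Docker's content trust can be switched on with an environment variable so that unsigned images are rejected (the image name below is illustrative):

```
# With content trust enabled, docker pull/run refuse tags that
# do not carry valid signatures
export DOCKER_CONTENT_TRUST=1
docker pull myorg/my-app:latest
```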

Application security is a vast and evolving field, and I am not an expert in it by any means. The main purpose of this blog is to share my experience and the things I learned by reviewing articles and documentation and going through many iterations of trial and error.

I hope it helps in your journey! Let me know your feedback.

Stay Safe! Stay Hopeful!

Why Startups Fail

APAC CIO Outlook recently published my article in their engineering special edition.

Entrepreneur’s Cauldron: Why Startup Fails

I am sure many of you who have walked this walk can relate to my analysis.

Let me know what you think.

Have you experienced any pitfalls in your entrepreneurial life? Share them in the comments.

Deep Dive in UX fundamentals

We are living in an era of specializations and strong affiliations. In the paradigm of software development, this phenomenon can be seen in full swing. For better or worse, we have established rigid criteria on the basis of skill sets. A very common division for a web development team is front-end developer, server-side developer, and web designer or user experience engineer.

Even though these types of categorization, or separation by skill, enhance and encourage specialization, they often cause compartmentalization and stereotypes. I see this as a limitation that can hamper the creative and learning process.

Recently, this became more evident to me. I participated in a Visual Design Hackathon at Autodesk, where I got the opportunity to see how user experience folks work. I saw the amount of thought and effort involved in the whole process, which starts with product requirements, branding, and client and stakeholder expectations, and translates into visual design and the overall experience. It was an excellent opportunity for a developer like me to understand the details (not just talking about mocks and style guides :D).

I am glad I participated in such an event. I strongly encourage fellow devs to venture into uncharted territories whenever an opportunity arises. This will help bridge the gap between UX and UI professionals. Moreover, it allows the development of a shared understanding and vocabulary.

A user interface is the outcome of collaboration and teamwork, not just an aspect or part of the product. Collaborations like the Visual Design Hackathon play a vital role in the development of an open culture.


Keeping true to agile

It is always nice to revisit the principles that you endorse or believe in. Being a Scrum Master, I try to remain true to the Agile Manifesto. Recently, while working on a blog related to inner source (more on that later), I visited the Agile Manifesto site. It had been a while since I last went there, and it serves as a good reminder of what is important and what is not.

I encourage all agile practitioners to check out the Agile Manifesto and, if you feel like endorsing it, do so (I surely did :)).

In fact, this applies to everything.

Go revisit the principles you believe in and, if you feel like endorsing them, do so.

Go revisit an old friend or a dear old book and remind yourself that you still believe.

because time, influence and thoughts all have the tendency to blur, modify and change...

Take care

Angular 2.0: Chapter 1 'New Horizon'

Time for some 'Hello World'. This is by far the most effective way of learning a new tool or technology. Let's do it a bit differently: we will call our first app 'New Horizon'. It is a hello world on steroids (or steroids of mild potency). The basic idea is taken from https://angular.io/docs/ts/latest/quickstart.html.

1: Install Node

We will use the Node Package Manager (NPM) to set up the project. In order to use NPM, install Node. Note: this is not mandatory; I chose it because it is simple and scalable. Feel free to use anything you like. An alternative is the non-installation path of CDN-hosted libraries (out of the scope of this tutorial).

2: Setup Project and Modules

Create a folder for the project (mkdir is a handy command for that). From the command prompt, go to the newly created folder and run the following commands:

  • npm init (this command will ask all sorts of questions; if you want the default settings, go for npm init -y).
  • npm i angular2@2.0.0-alpha.44 systemjs@0.19.2 --save --save-exact (this command installs the alpha of Angular 2 and the SystemJS library. This was the latest alpha at the time of writing; do check angular.io for the latest one).
  • npm i typescript live-server --save-dev

Q: Hmm, I can understand the fact that we have to install Angular 2, but what's up with all the other stuff?

Good catch. Let's break things down.

SystemJS: a universal dynamic module loader. This will be a useful tool when creating a multi-module application.

TypeScript: a superset of ECMAScript 6 with support for data types. As most of you know, JavaScript is a weakly typed language; with TypeScript, this is no longer a weakness. Plus, Angular 2 itself is written in TypeScript. Having said that, it is not mandatory, as an ES5 version of Angular 2 is also available (along with one for Dart). So, long story short, we will write the code in TypeScript (with the 'ts' extension). Since browsers only understand ES5 (and some ES6), the code is transpiled to ES5 before going to the browser.

LiveServer: a quick way to spin up a web server to publish and see your work.

After running these commands, check the 'package.json' file in the project folder. This file now contains the details of the installed packages. The npm package file can also be used to write rudimentary scripts to help manage the application. Add the following scripts to the "scripts" section of the package.json file:

"tsc": "tsc -p src -w",
"start": "live-server --open=src"

We will run these commands later. Note: if you want to go further, you can set up Grunt or Gulp to manage your application; they have more functionality.

3: Finalize directory structure.

Following is the folder structure I am using for the demo application. You can do the same.

[Image: project folder structure]

4: Entry point to the application: create index.html

index.html is the entry point of the application.

[Image: index.html source]

This is a very simple setup for an application, with the angular2 and SystemJS libraries included. SystemJS is used to import the application modules. Note: System is a global object created by SystemJS.
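For reference, a minimal index.html along these lines might look like the following. The exact script paths and bundle names depend on your installed versions and are illustrative:

```html
<html>
  <head>
    <!-- SystemJS module loader and the Angular 2 alpha bundle -->
    <script src="node_modules/systemjs/dist/system.src.js"></script>
    <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
  </head>
  <body>
    <!-- The root component's selector -->
    <my-app></my-app>
    <script>
      // Tell SystemJS where to find the compiled modules, then load the app
      System.config({ packages: { src: { defaultExtension: 'js' } } });
      System.import('src/app');
    </script>
  </body>
</html>
```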

As an alternative, you can do the following:

System.import('scripts/app.js'); // No need to add config.

OR

If you are not interested in SystemJS altogether, then you have to rely on the DOMContentLoaded event to manually bootstrap your root component. Check out this nice article: http://blog.thoughtram.io/angular/2015/05/09/writing-angular-2-code-in-es5.html

5: Root Component: App.ts

The first step is to import the required modules. Unlike Angular 1, not every component or module is available in the context; we need to import the ones we need. For the root app component, we need the basic stuff (Component, View & bootstrap).

import {
  Component,
  View,
  bootstrap
} from "angular2/angular2";

The import statement defines the modules we want to use in our code.

The second step is to create a root component. Consider the root component like the 'ng-app' directive in Angular 1. There are three main aspects to a new Angular component:

Component Annotation: The annotation that provides basic information about the directive and how it will interact with the outside world (DOM).

View Annotation: The annotation that provides the basic template of the component along with the information of the directives that can be used in the template.

Definition class: provides meaning to the component. The properties of the class can be used in the template.

@Component({
    selector:'my-app'
})
@View({
    template: `Hello World on Steroids`
})
class AppComponent {}

The last part is to tie everything together: bootstrap the new component at the root level of the application.

// Bootstrap (load) the new component at the root level of the application.
bootstrap(AppComponent);
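With the package.json scripts from step 2 in place, compiling and serving the app comes down to two commands (run them in separate terminals):

```
# Terminal 1: transpile the TypeScript sources in watch mode
npm run tsc

# Terminal 2: serve the app with live reload
npm start
```

live-server should open the browser at the src folder, and you should see the "Hello World on Steroids" template rendered inside the my-app element.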