Monday, January 14, 2019

DevOps is NOT about Automation



In my experience talking to folks and working with development teams, I have noticed that too many people focus on the idea of automation, the speed of automation, and the amount of automation when it comes to DevOps. Some of these teams take pride in, and even compete on, the percentage of automation in their delivery processes. This heavy emphasis on automation, and the lack of focus on knowledge sharing and problem discovery, doesn’t just originate from the development teams; it seems to propagate up and down the engineering leadership chains.


"I believe automation is a great thing to do and talk about, but it is not the measure or essence of success with DevOps."
Don’t get me wrong. I believe automation is a great thing to do and talk about, but it is not the measure or essence of success with DevOps. For example, I could optimize my entire SDLC process and automate it 100%, only to find that the biggest bottleneck to delivery is the Authorization-to-Operate (ATO) process. From an organizational and investment perspective, all that automation, while certainly good, is not improving our ability to deliver business value faster or enabling faster returns on investment!

This myopic view is partly the fault of the DevOps label because, to me, it dilutes what the DevOps movement wanted to evangelize. For better or worse, due to its roots, we are stuck with the name, but the objectives of DevOps aren’t purely about automation and they certainly aren’t tied to just Development, Operations, and Security (feel free to re-order based on your convictions… that is a whole other post).


"The whole purpose is to reduce the time it takes from identification of a business need to its delivery into the hands of the user."
To me, based on my own application of and experience with DevOps, the whole purpose is to reduce the time it takes from the identification of a business need to its delivery into the hands of the user. DevOps engages multiple techniques, not just automation, to positively impact those who care about or assign value to the business need, whether that's sales, marketing, finance, compliance, customers, or partners. Seen this way, automation isn't, and shouldn't be, the focus of DevOps. It's about breaking down organizational silos and barriers to collaboration in order to facilitate knowledge sharing and understanding. This gives us the opportunity to identify the true impediments in our organizational delivery pipeline, from the identification of a need to its final delivery into users' hands.

For DevOps to succeed, we must map our pipeline across much wider portions of the organization to capture the entire picture and identify the true bottlenecks. Only once we have done this can we identify game-changing opportunities that are ripe for automation, achieve drastic reductions in our time-to-market, and improve our delivery capabilities as an organization.

Cross-posted from my LinkedIn: https://www.linkedin.com/pulse/devops-automation-rizwan-tanoli-pmp-pmi-acp-csm/

Image credit: Jack Moreh / freerangestock.com

Wednesday, June 28, 2017

Scaling Agile - MITRE/ATARC DevOps White Paper

I had the privilege of being the industry lead for the Scaling Agile collaboration session at the ATARC DevOps Summit 2017. MITRE has published the white paper capturing the conversation and the great dialogue that took place during the session.
I also got the opportunity to work with the MITRE folks on compiling the Scaling Agile portion of the white paper. It is definitely a great and informative read. Follow the link below to read or download it:

MITRE-ATARC DevOps Whitepaper.pdf

Monday, August 29, 2016

Loving Amazon Lambda!

If you have worked with AWS, then you are already aware of some of the services it has to offer, such as EC2 and S3, and of their benefits in terms of cost, overhead, and ownership.

We have been working with several AWS services, particularly EC2, S3, DynamoDB, and Kinesis Streams. Recently, we started looking at AWS Lambda and found it to be an awesome fit for the microservices architecture that we are moving towards.

"By allowing us to only consume resources when our code is invoked...we reduce our resource consumption and increase our billing efficiency."


The benefits that Lambda offers are truly awesome and significant. Simply put, without having to worry about configuring, provisioning, or doing any kind of grunt work to set up and launch AMIs, we can build on the on-demand resource access and cost control that AWS already offers. Because we only consume resources when our code is invoked, instead of keeping at least one instance running at all times, we reduce our resource consumption and increase our billing efficiency.

As an example, we have apps that rely on various business/eligibility rules; some of these are quite repetitive in nature, like eligibility verification, funding availability, and location capabilities. Historically, these have been bundled in a common library added as a dependency to the applications. However, as we move to a microservices-based, auto-scaling architecture on AWS, we are finding this to be a bit limiting.

To make life easier, and to remove the library dependency from our applications, we began creating individual services for some of these complex business rules, accessible through RESTful endpoints. This has been working great for us and is definitely a vast improvement over our previous design. Now, using Lambda, we are taking it a step further. Instead of keeping instances of these microservices running at all times, we have uploaded their code to AWS Lambda.
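To give a feel for what one of these rule functions looks like on Lambda, here is a stripped-down sketch of a Node.js handler for an eligibility check. The field names and the rule itself are simplified placeholders for illustration, not our actual business logic:

    // eligibility-check/index.js -- illustrative placeholder rule, not real business logic
    // Lambda calls this handler with the request payload passed in as `event`.
    exports.handler = function (event, context, callback) {
        // e.g. event = { age: 42, state: 'VA' }
        var eligible = typeof event.age === 'number' && event.age >= 18 &&
                       ['VA', 'MD', 'DC'].indexOf(event.state) !== -1;

        // Hand a small JSON result back to whoever invoked the function.
        callback(null, { eligible: eligible, checkedAt: new Date().toISOString() });
    };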

Now, whenever our apps need one of these services, we simply invoke the code uploaded to Lambda and we are good to go. No more multiple copies of the same code, no library dependencies between apps, and no idle running instances. The code for each service lives in one place, is accessed through one point, and only executes when needed, saving us cost, resource consumption, and overhead.
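Calling the rule from one of our apps is just a Lambda invocation through the AWS SDK for Node.js. The function name, region, and payload below are made up for the example:

    // Somewhere in the calling application (Node.js, aws-sdk v2).
    var AWS = require('aws-sdk');
    var lambda = new AWS.Lambda({ region: 'us-east-1' });

    lambda.invoke({
        FunctionName: 'eligibility-check',                 // hypothetical function name
        Payload: JSON.stringify({ age: 42, state: 'VA' })
    }, function (err, data) {
        if (err) { return console.error(err); }
        // data.Payload is the JSON string returned by the Lambda callback.
        var result = JSON.parse(data.Payload);
        console.log('Eligible?', result.eligible);
    });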

This is just one example of how we have started leveraging AWS Lambda, and it fits nicely into the microservices architecture and approach that we have going. Of course, this is not to say it is the only way to do things, but if you are already using AWS, Lambda is an awesome service and definitely worth looking into.

Monday, March 28, 2016

Using Liferay Theme resources in layout templates and views

Recently I created a custom theme for a Liferay 6.2 portal along with custom layout templates. I bundled some JavaScript files and images that were needed with the theme. Some of these were jQuery plugins for social media integration and sliders, while others were images for backgrounds, tabs, etc.

Originally, I was using these resources from the theme by hard-coding the theme name into the URL, but I realized that this was highly inefficient and would break things if the theme name ever changed or was updated.

Luckily, Liferay has built-in functionality for dynamically getting resources from the active theme without actually specifying the name of the theme that the resources come from. All the information I needed for this I found in ThemeDisplay.java.

This class has methods that allow developers to get the path to the js, css, images, templates, etc. folders of a theme. Usage inside a Velocity template would be as follows:
  • For JS:
    <script src="$theme_display.getPathThemeJavaScript()/my_js.js"></script>
  • For CSS:
    <link rel="stylesheet" type="text/css" href="$theme_display.getPathThemeCss()/theme.css" />
  • For images:
    <img src="$theme_display.getPathThemeImages()/my_custom_image.gif" alt="..." />
As you can see, this makes using the active theme's resources in our Liferay views very easy, and we avoid hard-coding the theme name. Of course, if there are multiple themes deployed and we wish to use resources from those other themes, then we would have to resort to hard-coding their names, but we don't need to do so for the active theme or in a single-theme deployment.

Tuesday, March 22, 2016

Shopping Cart Microservice app with Node.js - Part 1


I have been reading up on Node.js and playing with some test programs here and there. I finally decided to build a proper service with Node.js to fully showcase the capabilities of the platform. After thinking for some time about what to build, I landed on the idea of a shopping cart microservice.

Following are the planned features as of now:
  • Add Item
  • Remove Item
  • Update Item
  • Save Cart (for retrieval later)
  • Update Cart
  • Empty Cart (remove everything from the cart)
My setup:
  • Node.js
  • WebStorm IDE 
  • Express JS
  • Bootstrap
  • LokiJS (for in-memory db/persistence) 
  • Mocha (for testing)
With this setup in place, the default Express app renders a nice screen with the words Express JS; setup is done, so I guess 90% of the battle is fought :) I was delighted at how straightforward the setup was.

The reason I chose a microservice implementation is that it can be plugged into other programs and, depending on deployment, can scale up and down based on usage without impacting the overall application's deployment or maintenance. To achieve this, I plan to expose the app's functionality through RESTful web services. Persistence will be handled by LokiJS, which supports both in-memory storage and persistence to disk. I might change the database later, but for now I am keeping it.
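To give a sense of the direction, here is a rough first cut of what the Add Item endpoint could look like with Express and LokiJS. The route path, collection name, and cart shape below are placeholders and will almost certainly change as the implementation evolves:

    // app.js -- early sketch of the Add Item endpoint; names are placeholders
    var express = require('express');
    var bodyParser = require('body-parser');
    var loki = require('lokijs');

    var app = express();
    app.use(bodyParser.json());

    var db = new loki('carts.db');             // LokiJS keeps this in memory
    var carts = db.addCollection('carts');     // one document per shopping cart

    // POST /carts/:cartId/items with a body like { "sku": "ABC-123", "qty": 2, "price": 9.99 }
    app.post('/carts/:cartId/items', function (req, res) {
        var cart = carts.findOne({ cartId: req.params.cartId });
        if (!cart) {
            cart = carts.insert({ cartId: req.params.cartId, items: [] });
        }
        cart.items.push(req.body);
        carts.update(cart);
        res.status(201).json(cart);
    });

    app.listen(3000, function () {
        console.log('Shopping cart service listening on port 3000');
    });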

All the code will be available on GitHub as I continue with my implementation, and more posts will be forthcoming as the app progresses.