Improving your deployments with Blue/Green strategies

The most common scenario goes like this: you spent weeks tuning your deployment to production. You suffered through testing and, finally, the end of this release is here. The deployment to production starts… And the application isn’t initializing. A chill runs down your spine as the consequences of this downtime slowly dawn on you…

Does this short story sound familiar? I bet all of us technical folks have suffered through this experience at least once. Then we define rollback plans, contingency interventions, prayers and offerings to the tyrannical data gods… And, in the end, we learn about Blue/Green deployments.

So, what is this about? Let’s picture a basic scenario for what our production environment could be:

Simple enough: a server holds an application that connects to a database, and the users connect to our server. Now, instead of deploying directly, we will publish the new version of our software to a clone of this machine, and redirect part of our user base to it. Check the following diagram:

If the “green” environment has any defect that prevents the application from starting, or there are errors that automated testing can easily detect, the promotion can be rejected without any impact on our users: they will still be using version 1.0. However, if everything looks correct, we slowly redirect access to our version 1.1.
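To make the gradual redirection concrete, here is a minimal sketch (names and numbers are illustrative, not taken from any specific load balancer): hashing a user identifier deterministically means each user always lands on the same environment, while we turn up the share of traffic sent to green.

```python
import hashlib

def route(user_id: str, green_fraction: float) -> str:
    """Deterministically route a user to 'blue' or 'green'.

    The same user always gets the same environment for a given
    green_fraction, so sessions stay stable while traffic shifts.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 256.0  # map the first byte to [0.0, 1.0)
    return "green" if bucket < green_fraction else "blue"

# Gradually promote version 1.1 by raising the fraction step by step.
for fraction in (0.0, 0.25, 1.0):
    counts = {"blue": 0, "green": 0}
    for uid in range(1000):
        counts[route(f"user-{uid}", fraction)] += 1
    print(fraction, counts)
```

With a fraction of 0.0 everyone stays on blue (version 1.0); at 1.0 the promotion is complete and everyone is on green.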

Notice how both versions of our software are using the same database (now highlighted with a third color to make clear that it’s not tied to a single environment). This means that we have to take additional considerations into account when altering its structure, as the current and previous versions of the software will coexist for a certain amount of time.

A best-practice approach to this obstacle is to detach the column or table from the data model in version n, then execute the script that actually removes the element in version n+1. Since there are no “hard links” to the feature, no errors will be generated in either the blue or the green environment when performing deployments.
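A sketch of the two-release idea, using Python’s built-in sqlite3 and a hypothetical `legacy_code` column (the table rebuild stands in for whatever column-drop mechanism your engine offers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, legacy_code TEXT)")
conn.execute("INSERT INTO users (name, legacy_code) VALUES ('Ada', 'X1')")

# Version n: the application simply stops selecting legacy_code.
# Both blue (n-1) and green (n) still run fine against the old schema.
row = conn.execute("SELECT id, name FROM users").fetchone()

# Version n+1: nothing references the column any more, so it is safe
# to drop. Rebuilding the table keeps this portable to older SQLite.
conn.executescript("""
    CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users_new SELECT id, name FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")

columns = [c[1] for c in conn.execute("PRAGMA table_info(users)")]
print(columns)  # ['id', 'name']
```

At no point in this sequence does a deployed version query a column that does not exist, which is exactly what keeps both environments error-free.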


The key to successfully adopting Blue/Green tactics is to automate your environment creation. “Infrastructure as code” approaches that generate your servers on demand in a repeatable and consistent way eliminate human error from the equation. Anything from something as simple as a template for your virtual machine to something as complex as configuration management and prerequisite inventories allows you to roll out new versions of your product with minimal impact on your user base.

Oh, and let’s not forget about the mental health of your operations employees. They will be grateful to stop deploying with a sense of impending doom.

Why you are doing DevOps wrong

Oh my! Is that a flaming clickbait post title? I believe that management really needs a reality check on this one, and maybe this direct attack will help catch people’s attention.

It looks like DevOps is the “flavor of the month”. Companies do not know why they need this shiny new term in their development department, but damn if they are going to be left behind in this new trend. So they start setting up small DevOps teams, defining deadlines, and then… These initiatives fail miserably.

DevOps is not something you can develop outside of the usual development chain. Remember: we are talking about the joint effort of development, operations and testing. There is already expertise in these three pillars within your business, so why are you looking “outside”? Perhaps you believe that someone outside of the actual development chain can bring in some fresh air. A new point of view can be handy to solve the current predicaments. New hires are then sought, and brought into… Their own “DevOps department”.

That’s mistake number one.

DevOps is all about collaboration, information sharing, and speeding up processes. “External agents” to the workflow will see the sticking points in the pipeline, but will hardly have the power to drive any real change. They might even be perceived as a threat by those whose work they are trying to change, despite pushing improvements. After all, who are these “new guys” to tell them how to do their job?

This reasoning is even worse when the new DevOps hires are young or inexperienced, which brings us to mistake number two: junior hires.

You could think that “doing” DevOps is about technology and automation. Allow me to correct this misconception: it’s about people and communication. It’s about removing unnecessary paperwork and outdated protocols. It’s about reacting to change, about daring to be wrong and fixing what was broken. And thus, in the end, DevOps is about managing people and their work. Someone without the adequate mileage will be utterly lost trying to use technology to automate a process when the real issue is the existence of the process itself, and the goal should be improving procedures. Or, if they have the right ideas, those could be shut down because senior workers will not heed what the new blood knows better.

Now, don’t despair. Not everything is lost yet. You can still make DevOps work. What do you have to do for that? Locate and designate the senior people in your department who have the drive to polish and refine methods. Find out who talks loudly about change, who tries to bring in new ideas. Do not put a label on someone, because DevOps is a verb: you are doing collaboration; you are improving what you already have and making it something better.

If you really want your business to succeed and expand, start by taking the following test: . It’s a handy way to see how far your agile evolution has come. Then note down the items that you failed, and let your teams know what needs to be improved.

And then, you will also be DevOps-ing.

A siege named ‘Change’


Let’s take a small break from all those new and shiny practices, processes and approaches, and focus for a minute on the business side of things. Or rather, let’s take a closer look from the business-culture point of view. I would like to deliver a sort of philosophical monologue on what DevOps means for the people involved in operations change.

Executives in a company aim to keep stability as an objective. After all, the idea behind all business effort is to generate profit in a steady and regular way, slowly evolving and growing. This means that there are two big balls to juggle: evolution and reliability. As most things tech-related tend to evolve very quickly, change management must settle on a reasonable pace for variation.

In an earlier post, I showed a handy diagram describing DevOps as the meeting point of Development, Quality Assurance and Operations. However, in terms of management resistance, the “sticking point” will appear between the Development and Operations individuals – and it’s not by chance that those two departments are the ones that give the DevOps movement its name.


This friction is born from opposite mindsets: developers are the force of evolution on the software side of things, while infrastructure works towards making the system as reliable as it can be. Without proper handling, their objectives can be perceived as harmful to each other. That is when the silo mentality comes into play and information stops flowing freely. Enmity between departments can be perceived as childish behavior, lack of training or team inefficiency. That is when management must take a step forward, recognizing this risk and creating solutions that build up teamwork and instill in everyone involved that we are all aboard the same ship: it’s not their toy or the others’, but rather the whole company’s.

What can you do when silos block a healthy work flow?

Start by listening to the challenges each “side” brings to the table. In most cases there will be a legitimate reason for a change, or for avoiding one. The way to solve each issue is to find a compromise among all involved parties. Where is the origin of this complaint? Is it a matter of capacity, a lack of knowledge, or is fear taking shape? Understanding the origin of all that noise can show you the way to break down these artificial walls.

Another point to take into consideration is personal conflict: act as a middleman to dissolve hostilities. Many workers take rejection of their ideas as a personal attack on their craft, or consider that objections are rooted in their persona rather than in the changes they are pushing or holding back. This might be the trickiest obstacle to overcome, and unless checked, it can quickly turn into a toxic work environment, destroying your change initiatives.

Finally, as a manager, don’t assume you know all the answers. Maybe the latest version of a technology is not adequate for you. Perhaps this development refactoring does not add value to your product or service. Make sure to rely on the opinion of the experts that you hired, as the expertise they bring to the table is what drives your business’ success.

Counting changes


Decoupling your applications is tough work, and if by any chance you started after reading the previous topic in this series, I don’t really expect you to be even halfway by now! Still, you might wonder what the next step in this evolution could be. Database versioning is a sensible change to follow. If you put your code under source control, why wouldn’t you make the same choice for your database?

Keeping your database scripts under a version control system allows you to keep a detailed history of what has been done, as well as an easy audit of who has changed what. This discourages last-minute quick fixes, reducing the problems derived from undocumented changes. You could even include Continuous Integration or Continuous Deployment, launching the new scripts against your database server and executing tests to make sure that nothing is broken after any change.

The benefits are clear. We are invested in this change so… Where do we begin?

If this is a new project, it will be easy to define a versioning strategy from the start. Sadly, that won’t be the case in most businesses, which means we need to define a baseline for our database: a “version 0” of sorts. We will take a snapshot of the current database, taking into account the whole structure and only the minimum data necessary for our application to work. All this input will be saved into a single file, which we can use to create our database from scratch, if necessary.

After deciding on the baseline, we have to include a new table to hold the associated changes. It could be something as simple as this:
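The original table isn’t reproduced here, but a minimal sketch of what it could hold (field names are my assumption) looks like this, using Python’s built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE schema_version (
        major      INTEGER NOT NULL,   -- breaking structural changes
        minor      INTEGER NOT NULL,   -- backwards-compatible additions
        bugfix     INTEGER NOT NULL,   -- data fixes for earlier scripts
        script     TEXT    NOT NULL,   -- file that applied this change
        applied_at TEXT    DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO schema_version (major, minor, bugfix, script) "
             "VALUES (1, 0, 0, '1_0_0_Baseline')")

# Current state of the database as a "Major.Minor.Bugfix" string.
version = conn.execute(
    "SELECT major || '.' || minor || '.' || bugfix FROM schema_version "
    "ORDER BY major DESC, minor DESC, bugfix DESC LIMIT 1"
).fetchone()[0]
print(version)  # 1.0.0
```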


The idea behind these fields is to be able to apply semantic versioning to our scripts. This means that we provide a numeric representation of the state of the database as it grows and changes over time. Each number is incremented depending on the changes in the script that will be executed:

  • “Major” is increased when changes are not backwards compatible: changing a column’s name, deleting fields or tables…
  • “Minor” grows when the script only expands what already exists: adding a new column or table, or applying conditional changes to existing data. Inserts could also be considered a “minor” change.
  • “Bugfix” must be updated when the script just solves issues created by previous scripts, without affecting structure. Fixing the content of a row with a specific ID is a good example.

We can check the state of our database with a simple query that just outputs “Major.Minor.Bugfix”. In order for this versioning to work, every single script should update this table after executing and committing its changes. As well, the script files should follow strict naming rules that ensure they are always executed in the same order. For example, all scripts could follow the rule {Major}_{Minor}_{Bug}_{FreeText}, and a glance at the filename would reveal which version the script is intended for. This would make 1_0_0_Baseline an adequate name for the “version 0” that we talked about, and 1_1_0_NewTableForBilling the following script; after that, it’s just a matter of making sure that everyone follows the guidelines.
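A quick sketch of why the naming rule needs care: a plain alphabetical sort would run 1_10_0 before 1_2_1, so the version parts should be compared numerically. The filenames below are made up for illustration.

```python
import re

def script_key(filename: str) -> tuple:
    """Extract (major, minor, bugfix) from '{Major}_{Minor}_{Bug}_{FreeText}'."""
    m = re.match(r"(\d+)_(\d+)_(\d+)_", filename)
    if not m:
        raise ValueError(f"bad script name: {filename}")
    return tuple(int(n) for n in m.groups())

scripts = [
    "1_10_0_AddInvoiceTable.sql",
    "1_0_0_Baseline.sql",
    "1_2_1_FixBillingRow.sql",
    "1_1_0_NewTableForBilling.sql",
]

# Numeric sort: 1_2_1 correctly runs before 1_10_0,
# which a plain alphabetical sort would get wrong.
ordered = sorted(scripts, key=script_key)
print(ordered)
```

A deployment tool only has to apply, in this order, the scripts whose version is greater than the one recorded in the version table.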

To make life easier for everyone involved in database evolution, you should keep these rules in mind:

  • No changes to a file after it is committed: scripts are meant to be one-shots. Editing files already established with a version ruins the concept behind this control. Though inefficient, creating a second script to solve an error created by the first one will make errors easier to track and fix.
  • No branching: applying branching policies to your source-controlled files adds several degrees of complexity to the naming convention. There is no easy way to fix this, so I strongly advise against working anywhere other than on the main timeline.

The goal behind database versioning is to push changes in a consistent and repeatable way. This will reduce impacts between teams when there are strong dependencies on the database, and will make any error tracking less of a hassle.

Brick and mortar

In the previous entry I was about to talk about N-tiered architecture. Let’s dive into what this software design brings to the table.

Essentially, we are separating a single piece of software into smaller ones that perform very specific functions depending on the layers of abstraction that we define. For example, we could break a single project into a data tier to perform the management and persistence of objects, a logical tier to apply different business rules, and a presentation tier to render this information into pages, forms and reports.

Let’s take this concept to a practical exercise. Consider a starting point similar to the one in the following diagram.


It is usually way more complicated, but drawing more lines will hardly make the problem easier to understand. Essentially, we have two applications that talk to each other, and both read and write from a database… But one of the databases is shared. Whenever Team 1, which works on App1 and Database 1, makes changes to Database 2, Team 2 must update App2 as well. So a lot of “alignment meetings” are created to keep the changes matched.

Now let’s see what a possible n-tiered solution to this problem could look like.


In a layered architecture, instead of the convoluted system of multiple information links, there is an obvious point to retrieve information and a different one to submit it. As well, we have included a data layer that sits between the persistence and the business level, creating a level of abstraction that reduces complexity. We have also separated all the elements in Database 2 that App1 used, and created a new database for them, along with its own Data Tier application (though moving those components to Database 1 and its Data Tier could also be an option).

We could even have a common bus communication service that each component connects to, listens to events, and then reacts if the event triggered is relevant to the application, or discards it otherwise.
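A minimal sketch of that bus idea in Python (the class, topic and payload names are hypothetical): subscribers react only to the topics they registered for, and silently discard everything else.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe bus: components listen to topics
    and ignore events that are not relevant to them."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: dict):
        # Only subscribers of this topic react; everyone else discards it.
        for handler in self._handlers.get(topic, []):
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("billing.created", lambda e: received.append(e))

bus.publish("billing.created", {"invoice": 42})  # relevant: handled
bus.publish("user.deleted", {"user": 7})         # irrelevant: discarded
print(received)  # [{'invoice': 42}]
```

In a real deployment this role is usually played by a message broker, but the contract is the same: publishers never know who is listening, which keeps the components decoupled.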

Now, it is true that there are more elements, and the apparent complexity has increased. Instead of maintaining two applications, two databases and a front-end website, we have nine different elements!

Bear in mind, though, that the cross-impacts we used to have in the first scenario no longer happen. The first team’s modifications do not disrupt the workflow of the second team. This will also make all test scenarios easier, and open the door to creating automated tests for each separate component. And those three separate, untangled databases open a new door: database versioning. We will take a look at that in our next entry.

Giving back to the world

As you can see in the sidebar, I created a GitHub profile. The philosophy of “free software” always called to me, but I never really took the step.

No more. My first repository holds a few custom tasks for TFS2015 and, hopefully, will soon start growing with new tools. I will also expand on the contents of those projects with articles on the blog.


Unraveling databases

We talked about how a closely coupled system becomes a stagnation point for a company that wants to perform releases in a more nimble way. The first issue that we will address is database sharing: several components accessing a relational database directly. In a successful business, software grows constantly, and that means new requirements and features are constantly being added and improved.

At some point, the team will realize that the old architecture no longer meets the scalability and stability needs of the components. Additionally, if new teams are brought into the department and start working on the same database, changes to the structure impact each other’s performance and reliability.

Case 1 – Metadata-driven

The first and typical answer to this problem is making all queries dynamic instead of fixed. Rather than fleshing out a detailed and optimized request to the server, this job is delegated to another component. However, the recursive and reflective algorithms required for this approach have a major caveat: high consumption of processing power and memory, which means that the tool will not scale correctly in high-demand environments.

Case 2 – NoSQL could be an option

Eliminating the “relational” part of the equation means that there are no changes to the structure – as there is no fixed structure to begin with. Adding a new attribute to the object increases complexity only in the handling of null and empty values (which you should be doing anyway).
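For instance, with schemaless documents, older records simply lack the attributes added later, so readers must default the missing values. A toy sketch with plain dicts (the `discount` field is invented for illustration):

```python
# Documents written by different application versions: the newer one
# added a "discount" attribute, the older ones simply lack it.
orders = [
    {"id": 1, "total": 100.0},                   # old document
    {"id": 2, "total": 80.0, "discount": 0.25},  # new document
]

def effective_total(order: dict) -> float:
    # The only extra complexity is defaulting the missing attribute.
    discount = order.get("discount", 0.0)
    return order["total"] * (1 - discount)

print([effective_total(o) for o in orders])  # [100.0, 60.0]
```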

The challenges within this approach lie in performing analytical research and building effective reporting services. Querying becomes non-trivial when you can’t expect consistent fields returned from your requests. Solid data modeling is the biggest weakness of non-relational databases, and that could make this strategy wrong for your business.

Case 3 – N-tiered architecture

The most valid strategy is also the most expensive in terms of time and effort invested. Since this approach alters the software architecture as a whole, we will take an in-depth look at it in the next entry in this series.

Fixing the monolith

Large companies that have built up technical debt for a long time, focusing their development capacity on releasing new features, reach an impasse when they try to implement Continuous Integration or Continuous Deployment strategies. Coupled processes, shared databases and a lack of proper documentation make self-provisioning, environment creation and test automation an endless chore. Instead of several independent components, they host a monolith.

The moment of realization usually comes when a new project starts, and the assigned team starts setting up a workable integration context. Confronted with the task of finding the minimum amount of elements required to arrange the environment, the answer seems to be “all of them”.

Having deadlines to meet, each team takes ownership of its own R&D instance and adds the maintenance tasks to its chores. This means that every group works isolated from the others, and “alignment meetings” are created in order to reduce the impact that changes to one element cause on every other component. Release times increase, testing phases extend longer and longer to cope with test-scenario complexity, and even hotfixes take months to release.

If you work in a somewhat big company’s software department – one that has been around for, let’s say, more than ten years – this is most probably a close picture of your daily routine. That is because refactoring is often seen as “wasted effort”, since the resulting release holds no new features.

So, consider the following situation: instead of what was previously described, a new team starts working on day one, as the only required component is the connection to a central communication interface. They set up their own separate database, and develop a communication API that every other component can talk to, instead of allowing direct communication to their schema. Test cases can be narrowed down to the application’s specific user stories, and every code commit runs them automatically, reporting their success to the user that generated those changes.

Can the first scenario be transformed into the second? It’s a long journey, but one that is worth investing effort into. In the next series of posts we will attack each individual bottleneck, aiming to untangle the mess that technical debt has created.

Increasing Product Value

Living in the outskirts of a big city means taking a long daily commute by train – not that I complain about this, as the advantages outweigh the inconveniences. However, there is this long tunnel with zero connectivity in which I used to patiently wait for the signal to return. That is, until somebody hinted that I had to tap the dinosaur.

For those unfamiliar with this amusing gimmick: on Android devices, this opens a very simple minigame in the form of an endless runner. Tapping the screen instructs our cute dinosaur to jump over the different obstacles in our path. You can take a look at the standalone version at this link.

This simple challenge allows the user to ease the boring wait for navigation data, chat, or whatever application of choice is executing at the moment.

Other than the curious trivia, from the usability point of view this makes for an interesting reflection: companies now understand that it is the end user who chooses to use their product. As a perfect example, the dino run solves in a very satisfactory manner one of the biggest pet peeves of mobile browsing (the other being, I think, websites badly converted for small displays).

Ignoring potential roughness in the intended experience is a risk. Fluent use is transparent, but bad design is punished. This is a rule to keep in mind with every interface. The five-second rule or Mandel’s Golden Rules are good starting points, but feedback from your user base is the real source of information regarding the additional development and investment your application experience requires.

As for me, I will keep playing this silly little game for about ten minutes a day, patiently waiting to resume my browsing.

Embracing DevOps

The field of software development grows and changes quickly. Learning new skills becomes one more of the responsibilities that a worker in computer science must take on. And, just as the available specializations increase, so do the possible job positions. “DevOps” is one of the newest, and it brings with it a change in business culture.

Historically, each specialization in Information Technologies created its own silo: developers and software architects on one side, infrastructure on another, DBAs on their own, etc. Thus, knowledge and documentation are rarely shared among teams. Even now, most big businesses keep up the barriers between departments, and hold meetings under the excuse of “information exchange”. DevOps was born with the idea of bringing down this artificial blockade, as it benefits no one in the long run.

The DevOps role is deeply rooted in the mindset of Agile methodologies. And thus, there is no valid job description without proper context in a specific business culture. Some might consider “DevOps” a “disposition”, a way to define work to be done between departments. Others might create a transversal position that collaborates with and brings together the different actors.

There are no specific tools of the trade when it comes to daily work. No, allow me to correct myself: documentation is THE weapon of choice. Wiki-style software, or a common document repository open to everyone involved, allows for an efficient knowledge pool. That is a strong basis on which to build proper cooperation. Orchestration software, build pipelines, automated tests… All of that is useless without the proper approach to information sharing. Preventing the loss of know-how should be a priority for management, and a critical mission for everyone doing DevOps.

So, is your company riding this cultural wave, or stuck in its old ways?