A siege named ‘Change’


Let’s take a small break from all those new and shiny practices, processes and approaches, and focus for a minute on the business side of things. Or rather, let’s take a closer look at the business-culture point of view. I would like to offer a sort of philosophical monologue on what DevOps means for the people involved in operational change.

Executives in a company keep stability as an objective: the idea behind all business effort is to generate profit in a steady, regular way, slowly evolving and growing. This means there are two big balls to juggle: evolution and reliability. As most things tech-related tend to evolve very quickly, change management must settle on a reasonable pace for variation.

In an earlier post, I showed a handy diagram describing DevOps as the meeting point of Development, Quality Assurance and Operations. However, in terms of management resistance, the real sticking point arises between the Development and Operations individuals – and it’s no coincidence that those two departments give the DevOps movement its name.


This friction is born from opposite mindsets: developers are the force of evolution on the software side of things, while infrastructure works towards making the system as reliable as it can be. Without proper handling, their objectives can be perceived as harmful to each other. That is when the silo mentality comes into play and information stops flowing freely. Enmity between departments can be mistaken for childish behavior, lack of training or team inefficiency. That is when management must take a step forward, recognizing this risk and creating solutions that build up teamwork, ingraining in everyone involved that we are all aboard the same ship: it’s not their toy or the others’, but the whole company’s.

What can you do when silos block a healthy work flow?

Start by listening to the challenges each “side” brings to the table. In most cases there will be a legitimate reason for a change, or for avoiding one. The way to solve each issue is to find a compromise among all involved parties. Where does this complaint originate? Is it a matter of capacity, a lack of knowledge, or is fear taking shape? Understanding the origin of all that noise can show you the way to break down these artificial walls.

Another point to take into consideration is personal conflict: act as a middleman to dissolve hostilities. Many workers take rejection of their ideas as a personal attack on their craft, or assume that objections are rooted in their persona rather than in the changes they are pushing or holding back. This might be the trickiest obstacle to overcome, and unless checked, it can quickly turn into a toxic work environment, destroying your change initiatives.

Finally, as a manager, don’t assume you know all the answers. Maybe the latest version of a technology is not adequate for you. Perhaps this development refactoring does not add value to your product or service. Make sure to rely on the opinion of the experts you hired, as the expertise they bring to the table is what drives your business’ success.

Counting changes


Decoupling your applications is tough work, and if by any chance you started after reading the previous topic in this series, I don’t really expect you to be even halfway by now! Still, you might wonder what the next step in this evolution could be. Database versioning is a sensible change to follow. If you put your code under source control, why wouldn’t you make the same choice with your database?

Keeping your database scripts under version control allows you to keep a detailed history of what has been done, as well as an easy audit of who changed what. This discourages last-minute freebies, reducing the problems derived from undocumented changes. You could even include Continuous Integration or Continuous Deployment, launching the new scripts against your database server and executing tests to make sure that nothing is broken after any change.

The benefits are clear. We are invested in this change so… Where do we begin?

If this is a new project, it will be easy to define a versioning strategy from the start. Sadly, that won’t be the case in most businesses, which means we need to define a baseline for our database. A “version 0” of sorts. We will take a snapshot of the current database, taking into consideration the whole structure and only the minimum data necessary for our application to work. All this input will be saved into a single file, which we will use to create our database from scratch, if necessary.

After deciding the baseline, we have to include a new table to hold the associated changes. It could be something as simple as this:

(image: the DatabaseVersioning table schema)

The idea behind these fields is to include semantic versioning in our scripts. This means that we provide a numeric representation of the state of the database as it grows and changes over time. One of the numbers is increased depending on the nature of the script that will be executed:

  • “Major” is increased when changes are not backwards compatible: changing a column’s name, deleting fields or tables…
  • “Minor” grows when the script only expands what already exists: adding a new column or table, or applying conditional changes to existing data. Inserts could also be considered a “minor” change.
  • “Bugfix” must be updated when the script just solves issues created by previous scripts, without affecting structure. Fixing the content of a row with a specific ID is a good example.
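The versioning table described above could be sketched like this. This is a minimal illustration using SQLite; the column names are assumptions on my part (the original schema lived in the image above), so adapt them to your own engine and conventions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # hypothetical database; use your real server
conn.execute("""
    CREATE TABLE DatabaseVersion (
        Major   INTEGER NOT NULL,   -- breaking changes
        Minor   INTEGER NOT NULL,   -- backwards-compatible additions
        Bugfix  INTEGER NOT NULL,   -- fixes to previous scripts
        Applied TEXT    NOT NULL,   -- when the script ran
        Comment TEXT                -- free-text description of the change
    )
""")
# Each script inserts its own row after executing and committing its changes.
conn.execute(
    "INSERT INTO DatabaseVersion VALUES (1, 0, 0, datetime('now'), 'Baseline')"
)

# Current state: the highest row, rendered as "Major.Minor.Bugfix".
row = conn.execute(
    "SELECT Major, Minor, Bugfix FROM DatabaseVersion "
    "ORDER BY Major DESC, Minor DESC, Bugfix DESC LIMIT 1"
).fetchone()
print(".".join(str(n) for n in row))  # → 1.0.0
```

The `ORDER BY … LIMIT 1` query is the “simple query” mentioned below that reports the database’s current version at a glance.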

We can check the state of our database with a simple query that just outputs “Major.Minor.Bugfix”. For this versioning to work, every single script should update this table after executing and committing its changes. Likewise, the script files should follow strict naming rules that ensure they are always executed in the same order. For example, all scripts could follow the pattern {Major}_{Minor}_{Bug}_{FreeText}, so a glance at the filename reveals which version the script is intended for. This would make 1_0_0_Baseline an adequate name for the “version 0” that we talked about, and 1_1_0_NewTableForBilling the following script; after that, it is just a matter of making sure that everyone follows the guidelines.
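Sorting scripts by filename needs a little care: a plain string sort would put version 10 before version 9. A sketch of a numeric sort over the naming pattern, using hypothetical script names of my own invention:

```python
import re

# Hypothetical scripts following the {Major}_{Minor}_{Bug}_{FreeText} rule.
scripts = [
    "1_1_0_NewTableForBilling.sql",
    "1_0_0_Baseline.sql",
    "2_0_0_RenameCustomerColumn.sql",
    "1_1_1_FixBillingTypo.sql",
]

def version_key(name):
    """Extract (major, minor, bug) as integers so that 10 sorts after 9."""
    major, minor, bug = re.match(r"(\d+)_(\d+)_(\d+)_", name).groups()
    return int(major), int(minor), int(bug)

for name in sorted(scripts, key=version_key):
    print(name)
# → 1_0_0_Baseline.sql
# → 1_1_0_NewTableForBilling.sql
# → 1_1_1_FixBillingTypo.sql
# → 2_0_0_RenameCustomerColumn.sql
```

A deployment pipeline could use the same key to pick out only the scripts newer than the version recorded in the database.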

To make life easier for everyone involved in database evolution, you should keep these rules in mind:

  • No changes to a file after it is committed: scripts are meant to be one-shots. Editing files already tied to a version ruins the concept behind this control. Though less efficient, creating a second script to solve an error introduced by the first will make errors easier to track and fix.
  • No branching: branching policies on your source-controlled files add several degrees of complexity to the naming convention. There is no easy way around this, so I strongly advise against working anywhere other than the main timeline.

The goal behind Database Versioning is to push changes in a consistent and repeatable way. This will reduce impacts between teams when there are strong dependencies with the database, and will make any error tracking less of a hassle.

Brick and mortar

In the previous entry I was about to talk about n-tiered architecture. Let’s dive into what this software design brings to the table.

Essentially, we are separating a single piece of software into smaller ones that perform very specific functions depending on the layers of abstraction that we define. For example, we could break a single project into a data tier to perform the management and persistence of objects, a logical tier to apply different business rules, and a presentation tier to render this information into pages, forms and reports.
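The three tiers just described can be sketched in a few lines. This is a toy illustration with names I made up (repository, service, renderer); the point is only that each tier talks exclusively to the one below it:

```python
class OrderRepository:
    """Data tier: owns persistence; nothing else touches storage directly."""
    def __init__(self):
        # In-memory stand-in for a real database.
        self._orders = {1: {"id": 1, "total": 120.0}}

    def get(self, order_id):
        return self._orders[order_id]


class OrderService:
    """Logic tier: applies business rules on top of the data tier."""
    def __init__(self, repo):
        self._repo = repo

    def total_with_tax(self, order_id, tax_rate=0.21):
        order = self._repo.get(order_id)
        return round(order["total"] * (1 + tax_rate), 2)


def render_order(service, order_id):
    """Presentation tier: formats the result for display."""
    return f"Order {order_id}: {service.total_with_tax(order_id):.2f} EUR"


print(render_order(OrderService(OrderRepository()), 1))  # → Order 1: 145.20 EUR
```

Because the presentation tier never sees the repository, swapping the in-memory store for a real database changes exactly one class.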

Let’s take this concept to a practical exercise. Consider a starting point similar to the one in the following diagram.

(diagram: nTierFrom – the starting architecture)

It is usually way more complicated, but drawing more lines would hardly make the problem easier to understand. Essentially, we have two applications that talk to each other, and both read from and write to a database… but one of the databases is shared. Whenever Team 1, which works on App1 and Database 1, makes changes to Database 2, Team 2 must update App2 as well. So a lot of “alignment meetings” are held to keep the changes in sync.

Now let’s see what a possible n-tiered solution to this problem could look like.

(diagram: nTier2 – a possible n-tiered solution)

In a layered architecture, instead of the convoluted system of multiple information links, there is one obvious point to receive information and a different one to send it. We have also included a data layer that mediates between persistence and the business level, creating a level of abstraction that reduces complexity. Finally, we have separated all the elements in Database 2 that App1 used, creating a new database for them along with its own data-tier application (though moving those components to Database 1 and its data tier could also be an option).

We could even have a common bus communication service that each component connects to, listens to events, and then reacts if the event triggered is relevant to the application, or discards it otherwise.
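The bus idea can be illustrated with a minimal in-process sketch (a real system would use a message broker; the event names here are hypothetical):

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: components only see events they care about."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._listeners[event_type].append(handler)

    def publish(self, event_type, payload):
        # Components not subscribed to this event type simply never see it,
        # which is the "react if relevant, discard otherwise" behavior.
        for handler in self._listeners[event_type]:
            handler(payload)


bus = EventBus()
received = []
bus.subscribe("billing.created", received.append)

bus.publish("billing.created", {"invoice": 42})  # relevant: handled
bus.publish("user.deleted", {"user": 7})         # irrelevant: discarded
print(received)  # → [{'invoice': 42}]
```

The key property is that the publisher knows nothing about its consumers, so adding a tenth component does not require touching the other nine.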

Now, it is true that there are more elements, and the apparent complexity has increased. Instead of maintaining two applications, two databases and a front-end website, we have nine different elements!

Bear in mind that the cross impacts that we used to have on the first scenario no longer happen. The first team’s modifications do not disrupt the workflow of the second team. This will also make all test scenarios easier, and open the door to create automatic testing for each separated component. And those three separated and untangled databases open a new door: database versioning. We will take a look at that on our next entry.

Giving back to the world

As you can see on the sidebar, I created a GitHub profile. The philosophy of “free software” always appealed to me, but I never really took the step.

No more. My first repository holds a few custom tasks for TFS2015, and, hopefully, will soon start increasing with new tools. I will also expand the contents of those projects with articles on the blog.


Unraveling databases

We talked about how a closely coupled system becomes a stagnating point for a company that wants to perform releases in a more nimble way. The first issue we will address is database sharing: several components accessing a relational database directly. In a successful business, software grows constantly, which means new requirements and features are continuously being added and improved.

At some point, the team will realize that the old architecture no longer meets the scalability and stability needs of the components. Additionally, if new teams are brought into the department and start working on the same database, changes to the structure cause impacts on each other’s performance and reliability.

Case 1 – Metadata-driven

The first and most typical answer to this problem is making all queries dynamic instead of fixed. Rather than fleshing out a detailed, optimized request to the server, this job is delegated to another component. However, the recursive and reflective algorithms required for this approach have a major caveat: high consumption of processing power and memory, which means the tool will not scale correctly in high-demand environments.
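To make the delegation concrete, here is a toy sketch of a metadata-driven query builder: the query shape is computed at runtime from field metadata instead of being hand-written. The table and column names are invented for illustration:

```python
# Hypothetical metadata describing which columns each table exposes.
metadata = {"customers": ["id", "name", "city"]}

def build_select(table, filters):
    """Assemble a parameterized SELECT from metadata at runtime."""
    cols = ", ".join(metadata[table])
    sql = f"SELECT {cols} FROM {table}"
    if filters:
        where = " AND ".join(f"{key} = ?" for key in filters)
        sql += f" WHERE {where}"
    return sql, list(filters.values())

sql, params = build_select("customers", {"city": "Madrid"})
print(sql)     # → SELECT id, name, city FROM customers WHERE city = ?
print(params)  # → ['Madrid']
```

The flexibility is real, but every request now pays the cost of assembling and re-planning the query, which is exactly the scaling caveat described above.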

Case 2 – NoSQL could be an option

Eliminating the “relational” part of the equation means there are no changes to the structure – as there is no fixed structure to begin with. Including a new attribute on an object adds complexity only in the handling of null and empty values (which you should be doing anyway).

The challenges of this approach lie in performing analytical research and building effective reporting services. Querying becomes non-trivial when you cannot expect consistent fields in the results of your requests. Designing good data models is the biggest weakness of non-relational databases, and that could make this strategy wrong for your business.

Case 3 – N-tiered architecture

The most valid strategy is also the most expensive in terms of time and effort invested. Since this approach alters the software architecture as a whole, we will take an in-depth look in the next entry in this series.

Fixing the monolith

Large companies that have built up technical debt for a long time, focusing their development capacity on releasing new features, hit an impasse when they try to implement Continuous Integration or Continuous Delivery strategies. Coupled processes, shared databases and a lack of proper documentation make self-provisioning, environment creation and test automation an endless chore. Instead of several independent components, they host a monolith.

The moment of realization usually comes when a new project starts, and the assigned team starts setting up a workable integration context. Confronted with the task of finding the minimum amount of elements required to arrange the environment, the answer seems to be “all of them”.

Having deadlines to meet, the teams take ownership of their own R&D instance and add its maintenance tasks to their chores. This means that every group works isolated from the others, and “alignment meetings” are created to reduce the impacts that changes to one element cause on every other component. Release times increase, testing phases stretch longer and longer to cope with the test-scenario complexity, and even hotfixes take months to release.

If you work in the software department of a somewhat big company – one that has been around for, let’s say, more than ten years – this is most probably a close picture of your daily routine. That is because refactoring is often seen as “wasted effort”, since a refactoring release brings no new features.

So, consider the following situation: instead of what was previously described, a new team starts working on day one, as the only required component is the connection to a central communication interface. They set up their own separate database, and develop a communication API that every other component can talk to, instead of allowing direct communication to their schema. Test cases can be narrowed down to the application’s specific user stories, and every code commit runs them automatically, reporting their success to the user that generated those changes.

Can the first scenario be transformed into the second? It’s a long journey, but one worth investing effort into. In the next series of posts we will attack each individual bottleneck, aiming to untangle the mess that technical debt has created.

Increasing Product Value

Living in the outskirts of a big city means a long daily commute by train – not that I complain about this, as the advantages outweigh the inconveniences. However, there is a long tunnel with zero connectivity in which I used to wait patiently for the signal to return. That is, until somebody hinted that I should tap the dinosaur.

For those unfamiliar with this amusing gimmick, on Android devices this opens a very simple mini-game in the form of an endless runner. Tapping the screen instructs our cute dinosaur to jump over the different obstacles in our path. You can take a look at the stand-alone version at this link.

This simple challenge allows the user to ease the boring wait for navigation data, chat, or whatever application of choice is executing at the moment.

Curious trivia aside, from the usability point of view this makes for an interesting reflection: companies now understand that it is the end user who chooses to use their product. As a perfect example, Dino Run solves very satisfactorily one of the biggest pet peeves of mobile browsing (the other being, I think, websites badly adapted for small displays).

Ignoring potential roughness in the intended experience is a risk. Fluent use is transparent, but bad design is punished. This is a rule to keep in mind with every interface. The five-second rule or Mandel’s Golden Rules are good starting points, but feedback from your user base is the real source of information regarding the additional development and investment your application experience requires.

As for me, I will be playing this silly little game for around ten minutes a day, patiently waiting to resume my browsing.

Embracing DevOps

The field of software development grows and changes quickly. Learning new skills becomes one more of the responsibilities that a worker in computer science must keep up with. And just as the available specializations multiply, so do the possible job positions. “DevOps” is one of the newest, and it brings with it a change in business culture.

Historically, each specialization in Information Technologies created its own silo. Developers and software architects on one side, infrastructure on another, DBAs on their own, and so on. Thus, knowledge and documentation are rarely shared among teams. Even now, most big businesses keep up the barriers between departments, holding meetings under the excuse of “information exchange”. DevOps was born with the idea of bringing down this artificial blockade, as the barrier benefits no one in the long run.

The role of DevOps is deeply rooted in the mindset of Agile methodologies, and thus there is no valid job description without the context of a specific business culture. Some might consider “DevOps” a disposition, a way of defining work done between departments. Others might create a transversal position that collaborates with, and brings together, the different actors.

There are no specific tools of the trade when it comes to daily work. No, allow me to correct myself: documentation is THE weapon of choice. Wiki-style software, or a common document repository open to everyone involved, allows for an efficient knowledge pool. That is a strong basis on which to build proper cooperation. Orchestration software, build pipelines, automated tests… all of that is useless without the proper approach to information sharing. Preventing the loss of know-how should be a priority for management, and a critical mission for everyone involved in DevOps.

So, is your company riding this cultural wave, or stuck in the old way of doing things?

The blurry edges of privacy

It might shock you to learn that restricting your content to friends and relatives on the social media website of your choice is not enough to block outsiders’ access to it.

An experiment by the Russian photographer Egor Tsvetkov has destroyed the illusion of anonymity on the website VKontakte. The service he used, FindFace, allows searching within the image database of the social network. So far, accuracy with regular photos reaches 70%, according to Tsvetkov himself.

While this, in itself, isn’t much of a revelation – nowadays our computational algorithms are capable of very precise results – the experiment offers a glimpse of an unnerving future. Your personal content gives away very private information about your customs, routines and life habits. A malicious stalker can use these findings for his own benefit. Companies could use this knowledge to feed their marketing departments for aggressive campaigns.

Usually, social media companies take their clients’ data very seriously. Defining the privacy filters on your profile will easily block most external access to your photos and posts. However, unaware of the risks posed by the malevolent use of their personal information, most people prefer to share their updates with the whole internet, seeing it as a trade-off for more visibility.

In any case, always use the golden rules when it comes to information sharing:

  • Never include your contact data, banking information or anything related.
  • Don’t post on behalf of someone else. Don’t share their schedules, or their plans.
  • Avoid including your phone number, or home address.
  • Be mindful of who is tagged in your photographs. Asking for permission before uploading them anywhere is good manners, and common sense.

You can take a look at Egor Tsvetkov’s experiment, named “Your Face is Big Data”, at this link.

Developing a personal brand


It’s all about the networking. Your contacts, your ideas, your creations: everything that shapes your “business persona” should pour out of your two-page resume into the digital world. Every recruiter nowadays uses the internet as another tool to research candidates’ backgrounds, so the best choice is to use this to your benefit.

Gaining visibility on the net, and marketing yourself, means that you will do a lot of “social media”. More specifically, you will be creating content. Actually, it’s possible that you are doing that very thing already. Working on a personal project? Collaborating on open source software on GitHub? Tweeting about the latest technology developments? Advertising your personal brand is simply using those channels to link what you do, share and create with your name and image.

Opening a blog sounds like a good place to start. Share those bits of knowledge that you have been learning during your business experience with everyone willing to listen. Open up a channel with your readers, and invite discussion on the comments section. After all, this is not a strategy to boost your ego – or rather, it shouldn’t be only about that. Define an objective: do you want to broaden your professional career? Increase the market reach of your own product? Define a strategy to achieve an objective. Focus on a goal to get a sense of improvement.

What’s out there? Are there other professionals in your area of expertise? Then start by standing on the shoulders of giants: follow them, interact, and share with them what you create. Either they will notice and help spread word of your content, or other creators and readers will, and thus your audience will grow. Write about what YOU know. Be passionate, and don’t be afraid to give your opinion.

Welcome, everyone. I’m Alvaro, and this is my blog on technology and software development.