The Importance of Modernizing Legacy Code

The term legacy code conjures dread in developers everywhere. It’s code that’s perceived (justly or unjustly) to be tightly coupled, hard to understand, hard to change, and just plain outdated. It’s an immovable object.

The reality is, legacy code is everywhere, and it isn’t going anywhere. So why not make it better, and make our lives working with it every day easier?

This isn’t just about upgrading your libraries and frameworks; it’s also about modernizing the code itself, and its use of those libraries and frameworks, over time, which in turn makes future improvements and upgrades easier.

But upgrades can be hard work and they do carry risk, so teams often put them off as long as possible. It’s a common best practice to avoid “big bang”, high-risk changes by instead making small, incremental, low-risk changes over time. This minimizes risk by spreading changes out over several releases, and it makes isolating issues much easier when things do go wrong.

Big framework upgrades, for example, should be no different. These upgrades often require updating a lot of existing code as well, such as meeting minimum library version requirements or refactoring out uses of previously deprecated code that has since been removed. Why include the framework upgrade and all of the upgrade-related code changes in the same release? Foresight and an ongoing mentality of modernization are required to prevent these upgrades from becoming a really painful process. This is one of the things that makes legacy code feel more and more legacy: a fear of change and a sense of helplessness to improve it. It’s a vicious cycle.

By managing modernization in small and steady increments as a regular maintenance activity, these upgrades become much less risky, and in some cases can be close to non-events. The remainder of this post covers some tips I’ve found helpful. I recently employed this approach successfully on a large legacy codebase, migrating from an aging Spring 3 framework to the latest Spring 4 release, with some key improvements along the way.

Why you should always be modernizing the libraries/frameworks in your codebase

There are of course obvious reasons to do this, including developer productivity/happiness/retention, performance improvements, and avoiding end-of-life release versions (no more bug fixes, security patches, etc.). But there are also other, more subtle reasons that people don’t often think about but that are equally important:

  • Many people learn by example. Junior developers should see exemplary code. There’s also a strong mental bias toward using things that are already known to work. In both cases, this can lead to bad or outdated code and patterns being replicated and proliferated around your codebase.
  • There are also other more subtle forms of copying, like build scripts or configuration. These are especially important in multi-repository codebases. You should stay on top of these as well.

So when you do upgrade, don’t just do the bare minimum to get on the new version. Actively modernize the code to use the new APIs, techniques, and other improvements. Increase the chances that the proliferations that do occur are of good patterns and code, not bad ones.

As a very simple example to illustrate the point: when you upgrade from Spring 3.x to 4.x, you can eliminate the need for default constructors in your Spring beans. If a developer is unfamiliar with this improvement and comes across other code in the codebase that still has a default constructor even on the new version, they might assume it’s still necessary. Suddenly you find more default constructors popping up in new classes being created around your codebase.
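Under the hood, this works because Spring 4’s CGLIB-based class proxying (now backed by Objenesis) no longer requires a no-arg constructor. Here’s a minimal plain-Java sketch of the resulting bean shape; the class names are hypothetical, and the Spring annotations are omitted to keep it self-contained:

```java
// Hypothetical Spring 4.x-style service bean with constructor injection only.
// Under Spring 3.x, creating a CGLIB class proxy for a bean like this
// (e.g. for class-level @Transactional) required an extra no-arg
// constructor; Spring 4.x can proxy it as-is.
public class AccountService {

    private final AccountRepository repository;

    public AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    public int accountCount() {
        return repository.count();
    }
    // No default constructor needed on 4.x.
}

// Stand-in dependency, just to keep the sketch compilable on its own.
class AccountRepository {
    int count() {
        return 0;
    }
}
```

In a real codebase the class would typically carry annotations like @Service or @Transactional; the point is simply that the no-arg constructor can be deleted once you’re on 4.x.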

How to upgrade

OK, so you’re convinced (I hope). How do you go about doing upgrades in the least painful way?

Obviously, the first thing you should do is look at release notes for notable changes. I like to create one or more throwaway/scratch branches to discover unknown/unforeseen issues and ticket them separately and independently where possible. This can really help map out an upgrade and come up with the best plan of attack in terms of mitigating risk.

I like to make the small/easy changes first to clear away the simple stuff and get clarity on the more complex changes. It also helps me see things from different angles and find bugs. And as always, review your own PRs.

Your goal should be to make upgrades as low risk as possible. Obviously the upgrade itself presents some risk, but generally speaking, if the release is of high quality, you can expect that it has been thoroughly tested in many scenarios by its authors.

Another important and related maintenance activity is staying on top of deprecation notices. A good library/framework will deprecate something for at least one release before it is completely removed; a library respecting semantic versioning will only remove it in a major version (i.e., one with breaking API changes). This means the newer alternative is available in a current version you are already using. If you stay on top of deprecation notices and use the newest APIs, you may be able to simply do a version bump in your build file and have that be your only change. This is where you want to be.
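As a sketch of that end state, assuming a Maven build that centralizes the framework version in a property (the version numbers here are illustrative), the entire upgrade can be a one-line diff:

```xml
<properties>
  <!-- Was 3.2.18.RELEASE; if no removed or deprecated APIs are in use,
       bumping this property can be the only change in the release. -->
  <spring.version>4.3.2.RELEASE</spring.version>
</properties>
```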

For example, in a Spring 3.x codebase, deprecated classes like CommonsClientHttpRequestFactory and TimerFactoryBean already have ready-to-use replacements in HttpComponentsClientHttpRequestFactory and ScheduledExecutorFactoryBean, respectively. In many of these cases, the new classes are drop-in replacements for the old ones, or require only minor code changes. After these refactors, this code remains unchanged in the version bump to 4.x.
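For beans defined in XML, many of these refactors amount to a one-line class swap. A sketch using the first pair of classes above (the bean id is made up):

```xml
<!-- Before: deprecated in Spring 3.x, removed in 4.x -->
<bean id="httpRequestFactory"
      class="org.springframework.http.client.CommonsClientHttpRequestFactory"/>

<!-- After: replacement already available while still on 3.x -->
<bean id="httpRequestFactory"
      class="org.springframework.http.client.HttpComponentsClientHttpRequestFactory"/>
```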

Post-upgrade: Identify modernization opportunities

Once you’ve upgraded, you probably already have a laundry list of known improvements you’d like to leverage. Ticket and backlog them as tech debt. But dig a little deeper: spend a little time familiarizing yourself with all the changes and improvements, and map out which ones you’d like to explore further.

In the case of our Spring upgrade, there was a slew of new improvements to leverage, from the core container to easier testing to improved messaging support, not to mention major version bumps of many other Spring umbrella projects (Spring Security, Spring Integration, Spring Batch, and more).

Undertaking some of these modernization efforts can go a long way towards producing more of that exemplary code in your codebase.

Then, Look & Plan Ahead. Now!

Continuing with our Spring 4 upgrade example, why not start looking ahead to Spring 5? It’s at milestone 4 (M4) at the time of this writing, but how about making a plan now? The upgrade raises the minimum versions of many dependencies. Are you on Hibernate 5 yet? What about Servlet API 3.1? You get the picture.

If you have more interest in the topic of improving legacy code, you might also want to check out this great Software Engineering Daily podcast episode on the subject for some inspiration.

Truly rapid development of admin apps with json-editor

At work we have a lot of configurable settings in our application. Like, a lot. For a long time, nobody tackled the task of properly exposing the management of these settings in a UI because it seemed like way too much work. The settings have a flat, unified structure in the back end, making them awkward to manage logically as sets of related settings. Building support for a large set of arbitrary data types, different editors for each, custom validation, etc. seemed like a daunting task.

In this blog post, I’m going to show how you can use the json-editor library to build these kinds of complex back-office admin apps really quickly and easily.

Continue reading “Truly rapid development of admin apps with json-editor”

Hello Idris

I heard a great podcast interview recently with Edwin Brady, in which he discussed his upcoming book Type-Driven Development with Idris. After listening to the podcast, I immediately picked up a copy of his book. Having now completed the book (well, it’s a MEAP, so what’s been published so far), I’m finding Idris the language really intriguing.

I’ve always had a preference for statically typed languages. I just like the ability to specify type constraints and have some level of confidence in the correctness of my programs before I run them.

Continue reading “Hello Idris”

From plumatic’s schema to clojure.spec

In a previous blog post, I showed an example of using plumatic’s schema with test.check and test.chuck. With the introduction of Clojure’s new spec library, I thought it would be interesting to revisit that post and port it from schema to spec. The code from this post is available on GitHub.

Overall, the port was relatively straightforward, though spec took some getting used to. spec provides facilities similar to what I was using in schema, and it integrated with both test.check and test.chuck with no significant modifications!

Continue reading “From plumatic’s schema to clojure.spec”

Emacs Keyboard Setup – OSX

Before you get going with Emacs on a Mac, there are a number of keyboard settings that you generally want to tweak to get the most fluid and comfortable experience. This post outlines the keyboard settings changes I’ve made that I find essential.

Note: this is targeted towards OSX users. Not all of this will apply to other systems.

Continue reading “Emacs Keyboard Setup – OSX”

Why I’m learning Emacs

Emacs is a classic piece of software that has stood the test of time. It has been around for decades and will probably be around for decades to come, so even though it has a bit of a learning curve, it’s well worth the effort to learn.

Although my original motivation for learning Emacs was the pursuit of the ultimate Clojure IDE experience, I quickly realized it is extremely valuable as a general-purpose editor. My background has always been as an IDE guy (Eclipse, IntelliJ IDEA, etc.), aside from knowing just enough vi to get around on the command line.

Here are some of the reasons I chose to invest in learning Emacs whole hog:

Continue reading “Why I’m learning Emacs”

Stackoverflow: Road to 5K

I recently reached 5K reputation on Stack Overflow. It’s nothing mind-blowing reputation-wise, for sure, but it feels like a nice milestone that took some time and effort to reach. I thought I’d write a quick post summarizing my experiences on the site and sharing some tips and tricks I’ve found helpful along the way.

Continue reading “Stackoverflow: Road to 5K”

Making Sense of Stream Processing – A Must Read

I stumbled on Making Sense of Stream Processing this week and devoured it in an afternoon.

Written by Martin Kleppmann, a distributed systems researcher and former engineer at LinkedIn (where Kafka was born), this book explores the ideas of stream processing and outlines how they can apply broadly to application architectures. It’s a small book in a report format, synthesized from a series of blog posts (linked on Martin’s website).

Continue reading “Making Sense of Stream Processing – A Must Read”

AWS Cloud Best Practices

I’ve been on a bit of an AWS whitepaper binge as of late. This post catalogs some of the important highlights and takeaways from reading through a number of them. Despite the fact that it’s all presented in the context of AWS products and services, a lot of the information is generally applicable to any cloud architecture. Reading these is a great way to get familiar with the space, as other cloud providers (Google Cloud, Microsoft Azure, etc.) no doubt have or will have similar offerings.

Check out the References section at the bottom of this post. I’ve linked to some specific whitepapers that I found the most interesting/generally applicable.

Continue reading “AWS Cloud Best Practices”

New DevOps Reading List

I’ve been listening to a lot of DevOps Cafe podcast episodes lately. They’ve opened up a new world of reading material on various subjects, technical and non-technical.

Just writing a short post to list some books I’ve been adding to my reading queue:

The Phoenix Project: A Novel About IT, DevOps, And Helping Your Business Win


This is a novel about the IT organization of a fictional company called Parts Unlimited. Meant to be a cautionary tale, it follows a successful middle manager suddenly thrust into the role of VP of IT Operations. He’s immediately faced with daily fires, a broken IT organization, and an already years-late, do-or-die, save-the-company project way off the rails. I’ve started this one already, and so far it’s very entertaining. A plus is that it’s available as an audiobook.

Continue reading “New DevOps Reading List”