This post is inspired by Kent Beck's excellent talk at JavaZone 2011, Software G Forces: The Effects of Acceleration, in which he describes how the development process, practices, and partly the whole organization change, or have to change, as you go from annual to monthly to weekly, daily, or hourly deployments. I'd like to summarize some of the points he made and use them as grounds for arguing that more frequent deployments are, in general, better.
I highly recommend watching his presentation, as I will only reproduce parts of it (and, taken out of their original context, they may well not represent exactly what Kent wanted to communicate).
Kent argues that as you deploy more and more frequently, many things have to change, including the business side of the software. What is a best practice at one of these speeds becomes an impediment at another. With more frequent deployments, teams have to progress towards the following practices, while leaving some others behind:
- Better, more automated testing: developer testing and automated acceptance testing (a.k.a. Specification by Example). Notice that this implies the software has to be designed for testability, which, I firmly believe, forces a better, more decoupled architecture (aside from the other benefits of well-done SbE).
- Merging of the QA/testing and operations teams with the development team. There may still be a testing specialist and an operations specialist, but they form one team with shared responsibility and commitment, and everyone does a little of everything.
- Improved feedback about production use and the state of the application. (This makes, for example, data-driven, experimentation-based usability/UI design possible, instead of guess-driven design.)
- Automated deployments, including support for rollback. (I'd also assume it becomes necessary to implement gradual deployment, where functionality is first released to only a small subset of users and progressively enabled for more and more of them.)
- Less up-front design and more empirically based design refactoring and evolution.
- More suitable pricing model (per upgrade -> subscription -> per use …)
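The gradual-deployment idea above can be sketched as a simple percentage-based feature flag. This is only a minimal illustration; the names (`is_enabled`, `rollout_percent`) are hypothetical and not taken from any particular feature-flag library:

```python
import hashlib

def is_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) based on a hash of
    the feature name and user id, and enable the feature only for users
    whose bucket falls below the rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Release to 5% of users first, then raise the percentage as confidence
# grows; rolling back is simply setting the percentage to 0.
enabled = is_enabled("new-checkout", "user-42", 5)
```

Because the bucketing is deterministic, a given user keeps seeing the feature as the percentage grows, which keeps the experience stable while the rollout progresses.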
As a developer, I find a couple of things associated with more frequent deployments very attractive. The key terms are rapid feedback and high quality. Both are also cornerstones of lean development: you cannot go fast without achieving excellent quality (as Mary Poppendieck argues in her book), and feedback is far superior to guesswork in the complex world where we live and develop.
One of the positive changes is that the disconnect between developers and their application's production life disappears. When programmers are isolated from the application by the QA and Ops teams, they don't really care how difficult it is to test or operate. When they are one team, communication becomes much better (=> less wasted time) and the application becomes much easier to test and operate (=> even less wasted time, and fewer defects thanks to more effective testing and better defect detection). For me as a developer, it is a real pain not to be part of Ops: not to have good insight into how my application is doing, how many people are using it and in which ways, how the latest features have influenced this, and not to have rapid defect detection and alerting (without which deployments become frightening ventures).
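As a minimal sketch of the rapid defect detection mentioned above, assume we periodically compare the error rate of the latest deployment window against a threshold. Both the 1% threshold and the window idea are my assumptions for illustration, not something from the post or any specific monitoring tool:

```python
def error_rate(errors: int, requests: int) -> float:
    """Fraction of failed requests in the current monitoring window."""
    return errors / requests if requests else 0.0

def should_alert(errors: int, requests: int, threshold: float = 0.01) -> bool:
    """Fire an alert when more than `threshold` of requests fail,
    e.g. right after a deployment."""
    return error_rate(errors, requests) > threshold

# 30 errors out of 1000 requests is a 3% error rate, above a 1% threshold.
alert = should_alert(30, 1000)
```

In practice a monitoring system would do this over sliding time windows and per endpoint, but even a check this simple turns a frightening deployment into one you find out about within minutes.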
I very much agree with Kent that it is great to have daily (or more frequent) deployments and rapid feedback from production, so that you can “feel the software” under your toes: whatever you decide now will affect users tomorrow. But it is not only about developers and feelings. As others have argued (just ask uncle Google about continuous delivery), more frequent deployments of small increments are much safer; good feedback makes it possible to detect and remove defects in virtually no time; it is cheaper (as I've reasoned above); and it has many other benefits. Why, then, isn't everybody doing it? Well, radical changes aren't easy, and it isn't only development that is transformed but the whole organization using the software. It also takes effort to set up and experience to do correctly. So far continuous delivery is still a “new” paradigm, and it will take time for the mainstream to understand and accept it.
Me On (Design) Refactoring
Frequent deployments make it possible to collect feedback and adjust the application accordingly. Refactoring and design refactoring become an essential part of the process and replace today's speculative design, that is, designing for what you think, usually incorrectly, will be needed.
No matter how good your design is, the requirements will change, and so the design has to change too to keep the software from turning into a growing pile of crap. Unfortunately, in the name of speed, people too often just cut corners, misusing and twisting the old, insufficient design instead of reworking it, which causes exponentially growing complexity and technical debt, and thus considerable loss.
Continuous, feedback-based design refactoring and evolution is the most reasonable and cost-effective approach to application design. Approach software development as continual improvement, as a kaizen event.
The software development of the future will be an empirical process based on releasing often and adjusting based on feedback. It requires higher process quality, eliminating wasted time through efficient automation, removing communication barriers between all stakeholders (including testers and ops), and transforming how the organization approaches software development, its relationship with customers, and its business model. DevOps is a great model that can improve developer satisfaction and prevent considerable waste.
- Greg Linden: Frequent Releases Change Software Engineering
- (Norwegian) O. C. Rynning: Felles ansvar for produksjon!
- Alistair Cockburn: Design as knowledge creation – the value of early feedback; also: “In most cases, the bigger the design that is done in the beginning, the more the unvalidated decisions produced, i.e. no “knowledge” is produced, only hypothesis, which is inventory.”
- Regarding continuous system design refactoring – Alistair Cockburn: Incremental Rearchitecture – “Starting from a simple working architecture and applying Incremental Rearchitecture is a winning strategy for most, though not all systems”. (It also provides guidelines on “How completely designed should the system architecture and infrastructure be during the early stages of the project?”)