So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, given at GOTO 2017 a couple of months ago, and I saved it in the “to watch” list on YouTube, as I so often do, and I forgot about it, as I so often do, until yesterday. It’s a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!
I have been doing CI/CD for many years in many projects and learnt a lot along the way. I think understanding what it is and how you can implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost about how to get there.
One thing I often hear is people talking about how Gitflow is a CI workflow process. It really is not. Feature branching is a hard-to-kill phenomenon that is just the opposite of continuous integration. I really like the phrase “continuous isolation”, because that is exactly what it is.
Separated teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite. Isolation. Handovers take time, and information and knowledge get lost along the way.
I often try to push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.
The process I tend to implement looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.
The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If all is green, it packs the tested application, uploads it to package storage, triggers the release process and deploys to the test environments.
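To make that flow concrete, here is a minimal sketch of those stages as a plain Python driver. The commands, paths and package-storage URL (pytest, dist/app.tar.gz, packages.example.com, deploy.sh) are illustrative assumptions, not any specific CI tool’s API.

```python
# A minimal sketch of the CI stages described above.
# Every command, path and URL here is a placeholder for illustration.
import subprocess
import sys

def run(step, cmd):
    """Run one pipeline step and stop the pipeline if it fails."""
    print(f"--- {step}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{step} failed, stopping the pipeline")

def ci_pipeline():
    run("unit tests", ["pytest", "tests/unit"])
    run("integration tests", ["pytest", "tests/integration"])
    run("package", ["tar", "czf", "dist/app.tar.gz", "app/"])
    # Upload to package storage, then kick off the release and test deploy.
    run("publish", ["curl", "-T", "dist/app.tar.gz", "https://packages.example.com/app/"])
    run("deploy to test", ["./deploy.sh", "test"])

if __name__ == "__main__":
    ci_pipeline()
```

The point is not the tooling; any build server does the same thing: a linear chain of steps where green means “keep going” and red means “stop and fix it now”.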
The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log-aggregation systems so you can visualize how the application behaves, but also for debugging purposes.
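For the aggregation to be useful, the tests need to emit data in a form the monitoring systems can actually index. A sketch of what that could look like, with made-up field names and endpoints, is emitting one JSON object per measurement:

```python
# A sketch of structured metric output from the heavier test runs.
# The endpoints and field names are made up for illustration.
import json
import time
import urllib.request

ENDPOINTS = ["https://test.example.com/health", "https://test.example.com/orders"]

def probe(url):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
    except Exception as exc:
        status = f"error: {exc}"
    elapsed_ms = (time.monotonic() - start) * 1000
    # One JSON object per line is easy for a log aggregator to parse and index.
    print(json.dumps({
        "timestamp": time.time(),
        "url": url,
        "status": status,
        "elapsed_ms": round(elapsed_ms, 1),
    }))

for url in ENDPOINTS:
    probe(url)
```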
You really can’t have too many logs and metrics. However, it is important to have the right tools for structuring and mining all this data in a usable way; otherwise it will only be a big pile of data that no one is ever going to touch. It also needs to happen in real time.
Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want an alert every time a server request times out. But if it happens a lot during some time period, and nothing else can explain the cause, then you might want to look into what is going on. This is again where you want to go for simplicity. You do not want to spend hours or days going through log entries; you might not even have that time, depending on the severity of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.
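A toy illustration of that kind of context-based alerting: ignore a single timeout, but raise an alert when timeouts cross a threshold within a sliding time window. The window size and threshold below are arbitrary example values.

```python
# Alert only when timeouts pile up within a sliding window,
# not on every individual timeout. Values are example assumptions.
from collections import deque
import time

WINDOW_SECONDS = 300      # look at the last 5 minutes
ALERT_THRESHOLD = 20      # alert only if this many timeouts occur in the window

timeouts = deque()

def record_timeout(now=None):
    now = now if now is not None else time.time()
    timeouts.append(now)
    # Drop events that have fallen out of the window.
    while timeouts and now - timeouts[0] > WINDOW_SECONDS:
        timeouts.popleft()
    if len(timeouts) >= ALERT_THRESHOLD:
        print(f"ALERT: {len(timeouts)} timeouts in the last {WINDOW_SECONDS}s")

# Example: 25 timeouts spread over one minute trips the alert.
start = time.time()
for i in range(25):
    record_timeout(start + i)
```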
Watch the video. It might not be a deep dive into CI/CD processes and how to implement them, but it does explain how to think and why.