
Month: July 2017

Microservices – It’s All About Tradeoffs

Everybody has probably heard the much-hyped word “Microservices”: the architecture that supposedly solves just about everything, now that computers aren’t getting much faster (kind of) and the need for scaling and distribution keeps growing as more and more people use the internet.

Don’t get me wrong, I love microservices! However, it is important to know that, as with most things in the world, they come with tradeoffs.

Lessons to be Learned

I’ve been developing systems using microservices for quite a few years now, and there are a lot of lessons learned (and still lessons to be learned). I saw this presentation by Matt Ranney from Uber last year, where he talks about the almost ridiculous number of services Uber has, and gives insight into all the problems that come with communicating between all these independent and loosely coupled services. If you have ever developed asynchronous applications, you probably know what kind of complexity they can generate and how hard it can be to understand how everything fits together. With microservices, this can be even harder.

The World of Computer Systems are Changing

I recognize many of the insights he shares from my own experience of building microservices. I recently did some development using akka.net and had similar insights, but on a whole new level: microservices within microservices. I won’t jump into that now; maybe I’ll share those thoughts on another occasion. Microservice architectures are, however, becoming more and more important today. One reason is the stagnation of hardware speed: CPUs have changed from the traditional single core, where the clock frequency increased between models, to today’s designs, where the cores are multiplied without the frequency increasing much. Another is that it gives you freedom as a developer when facing huge applications and organisations. And then there is this thing called zero downtime. You might have heard of it. Everything has to work, all the time.

Synchronisation

While I do tend to agree with most of what Matt says, I don’t agree with the “blocked by other teams” statement. If you get “blocked” as a team, you are doing something wrong, especially if you are supposed to be microservice oriented.

Being blocked tends to mean you need some sort of synchronisation of information, and that you cannot continue before you have it. While synchronisation between systems must occur at some point, it does not mean that you cannot develop and release code that cannot be fully utilized until other systems have been developed and deployed. Remember agile, moving fast, autonomy, and everything working all the time? The synchronisation part is all about the mutual understanding of the contract between the services. Once you have that, it’s just a matter of using techniques like feature toggling to do parallel development without ever being blocked. There needs to be a “we are finished, let’s integrate” moment at some point, but it’s not a blockage per se. It’s all about continuation, and it is even more important with microservices, as integration gets more complex with more services and functionality being developed in parallel.
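
To make that concrete, here is a minimal feature-toggle sketch in C# (all names are hypothetical, just for illustration): the new integration code ships to production but stays dormant until the other team’s side of the contract is live and the toggle is flipped.

```csharp
public class Order { /* ... */ }

public interface IFeatureToggles
{
    bool IsEnabled(string feature);
}

public class OrderService
{
    private readonly IFeatureToggles _toggles;

    public OrderService(IFeatureToggles toggles)
    {
        _toggles = toggles;
    }

    public void PlaceOrder(Order order)
    {
        if (_toggles.IsEnabled("publish-to-new-billing-service"))
        {
            // New integration: released, but dark until the other
            // service is deployed and the contract is live.
            PublishOrderPlaced(order);
        }
        else
        {
            // Existing behaviour keeps working in the meantime.
            CallLegacyBilling(order);
        }
    }

    private void PublishOrderPlaced(Order order) { /* ... */ }
    private void CallLegacyBilling(Order order) { /* ... */ }
}
```

Both teams keep releasing continuously; the “let’s integrate” moment becomes a configuration change rather than a merge-and-pray event.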

Context

Systems developed as microservices also face the problem of process context, or the start-and-end problem. As a user doing something, you usually see that action from a bubble perspective: you update a piece of information and expect to see a result of that action. But in a microservice-based system there might be a lot of things going on in many systems at the same time during that action. The action does not necessarily have the effect you think, and the context gets chopped into smaller pieces as it now spans multiple systems.

This leads to the distribution problem: how do you visualize and explain what’s happening when a lot of things happen at roughly the same time in different places? People tend to rely on synchronisation to explain things, but synchronisation is really hard to achieve if you do not have all the context in the same time and place, which is next to impossible with the parallelism, distribution and scaling you often get, and want, with microservices; in a word, asynchronicity. You might want to rethink the way you perceive the systems you are working with. Do you really need synchronisation, and why? It’s probably not an easy thing to just move away from, as it is deeply rooted in many people’s minds as a way to simplify what is happening. But things are seldom synchronous in the real world, and as computers and systems become more distributed, it will make less and less sense to keep treating them that way.

REST

I also think the overhyped use of the word REST might compound the problem. REST implies that model state is important, but many microservices are not built around the concept of states and models; they continuously change things. Transitions are really hard to represent as states. I’m not talking about what something transitioned TO, but about how to visualize the transition, the causation, the relationship between cause and effect. Sometimes functionality is best represented as functions, not as the states before and after. The days of RPC services are back. Why not represent an API as a stream of events? Stateless APIs! It’s all about watching things happen. Using commands and events can be a powerful thing.
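
As a sketch of what a command/event style API could look like (all names are hypothetical), the service accepts commands, intents to change something, and publishes a stream of events, immutable facts about what happened:

```csharp
using System;

public class ChargeCustomer                // command: "please do this"
{
    public Guid CustomerId { get; set; }
    public decimal Amount { get; set; }
}

public class CustomerCharged               // event: "this happened", past tense
{
    public Guid CustomerId { get; set; }
    public decimal Amount { get; set; }
    public DateTime OccurredAt { get; set; }
}

public interface IEventStream
{
    void Publish(object @event);
    IDisposable Subscribe<TEvent>(Action<TEvent> handler);
}

public class BillingService
{
    private readonly IEventStream _stream;

    public BillingService(IEventStream stream)
    {
        _stream = stream;
    }

    public void Handle(ChargeCustomer command)
    {
        // ...perform the charge...
        _stream.Publish(new CustomerCharged
        {
            CustomerId = command.CustomerId,
            Amount = command.Amount,
            OccurredAt = DateTime.UtcNow
        });
    }
}

// Any interested service watches things happen instead of polling state:
// stream.Subscribe<CustomerCharged>(e => Console.WriteLine(e.CustomerId));
```

Nothing here exposes a model to GET and PUT; consumers observe the transitions themselves rather than diffing snapshots of state.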

Anyhow, microservices are great, but they might get you into situations where you feel confused as you try to apply things that used to work great but do not seem to fit anymore. By rethinking the way you perceive computer systems you will soon find new ways, and that might open up great new possibilities. Dare to try new angles, and remember that it’s all about tradeoffs. Simulating the world exactly as it is might not be feasible in a computer system, but treating it as a bunch of state models within a single process might not be the right way either.


Continuous Delivery – Are You Fast Enough?

So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, presented at GOTO 2017 a couple of months ago. I saved it in the “to watch” list on YouTube, as I so often do, and forgot about it, as I so often do, until yesterday. It’s a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!

I have been doing CI/CD for many years in many projects and learned a lot along the way. I think understanding what it is and how to implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost on how to get there.

One thing I often hear is people talking about how Gitflow is a CI workflow process. It really is not. Feature branching is a die-hard phenomenon that is just the opposite of continuous integration. I really like the phrase “continuous isolation”, because that is exactly what it is.

Separated teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite: isolation. Handovers take time, and information and knowledge get lost along the way.

I often push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.

The process I tend to implement looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.

The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If all is green, it packs the tested application, uploads it to package storage, triggers the release process and deploys to the test environments.
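
As a rough sketch of what that chain could look like in practice, here is a Cake-style C# build script; all task names, paths and the feed URL are placeholders, not a real project:

```csharp
// Cake (C#) build script sketch: build, test, pack and publish.
var target = Argument("target", "Default");

Task("Build")
    .Does(() =>
{
    MSBuild("./src/MyApp.sln");
});

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
{
    // Unit and integration tests; a red test stops the pipeline here.
    NUnit3("./src/**/bin/Release/*.Tests.dll");
});

Task("Pack")
    .IsDependentOn("Test")
    .Does(() =>
{
    NuGetPack("./src/MyApp.nuspec", new NuGetPackSettings
    {
        OutputDirectory = "./artifacts"
    });
});

Task("Publish")
    .IsDependentOn("Pack")
    .Does(() =>
{
    // Upload the tested package; the release process picks it up from
    // here and deploys to the test environments.
    NuGetPush("./artifacts/MyApp.1.0.0.nupkg", new NuGetPushSettings
    {
        Source = "https://example.com/nuget"   // placeholder feed
    });
});

Task("Default").IsDependentOn("Publish");

RunTarget(target);
```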

The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log-aggregation systems in order to visualize how the application behaves, but also for debugging purposes.

You really can’t have too many logs and metrics. However, it is important to have the right tools for structuring and mining all this data in a usable way, otherwise it will only be a big pile of data that no one is ever going to touch. It also needs to happen in real time.

Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want an alert every time a server request times out, but if it happens a lot during some time period, and nothing else can explain it, then you might want to look into what is going on. This is again where you want to go for simplicity: you do not want to spend hours or days going through log entries, and you might not even have that time, depending on the severity of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.
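
A toy sketch of that “alert on cause, not on every event” idea: one timeout is noise, but many within a short window is a signal. The window size and threshold below are made-up numbers for illustration.

```csharp
using System;
using System.Collections.Generic;

public class TimeoutAlertRule
{
    private readonly Queue<DateTime> _timeouts = new Queue<DateTime>();
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(5);
    private const int Threshold = 50;

    // Call this for every request timeout seen in the metrics stream.
    // Returns true when the rate within the window justifies an alert.
    public bool RecordAndCheck(DateTime occurredAt)
    {
        _timeouts.Enqueue(occurredAt);

        // Forget everything that has fallen out of the sliding window.
        while (_timeouts.Count > 0 && occurredAt - _timeouts.Peek() > Window)
        {
            _timeouts.Dequeue();
        }

        return _timeouts.Count >= Threshold;
    }
}
```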

See the video. It might not be a deep dive into CI/CD processes and how to implement them, but it does explain how to think and why.


Integration Testing

Sexy title, isn’t it? 🙂 Well, maybe not, but still, it’s an important aspect of system development.

What is Integration Testing?

Good question, Fredrik. Well thank you, Fredrik.

Yeah yeah, okay, enough with this, let’s get serious.

When I talk about testing I almost always get misunderstood. Why? Because we all have different views and vocabularies when we talk about testing, and especially automated tests. Have you seen the explanation of integration testing on Wikipedia? Well, it’s not exactly precise, to say the least 🙂 https://en.wikipedia.org/wiki/Integration_testing

When I talk about integration tests, I usually mean firing up my application in memory and probing its integration points. I like this approach because it lets me get rid of third-party application dependencies. It means I can run my tests anywhere, without installing and managing third-party applications such as a database during testing.

It fits nicely into the continuous integration process.
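
As a toy illustration of the approach (a hand-rolled stand-in with made-up names, not the API of any particular framework), the test below hosts a minimal “application” in memory and swaps the database for a fake:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// --- Application side: a simplified stand-in for a real service ---
public class OrderReceived
{
    public string OrderId { get; set; }
}

public interface IOrderRepository
{
    void Save(string orderId);
}

public class OrderApplication
{
    private readonly IOrderRepository _repository;

    public OrderApplication(IOrderRepository repository)
    {
        _repository = repository;
    }

    // In production this is invoked by a message bus subscription.
    public void Handle(OrderReceived message)
    {
        _repository.Save(message.OrderId);
    }
}

// --- Test side: fake the integration point, host the app in memory ---
public class InMemoryOrderRepository : IOrderRepository
{
    public List<string> Saved { get; } = new List<string>();
    public void Save(string orderId) => Saved.Add(orderId);
}

[TestFixture]
public class WhenReceivingAnOrder
{
    [Test]
    public void It_should_store_the_order()
    {
        var repository = new InMemoryOrderRepository();
        var application = new OrderApplication(repository); // no real database

        // Probe the integration point: push a message in...
        application.Handle(new OrderReceived { OrderId = "42" });

        // ...and assert on what reached the other side.
        Assert.That(repository.Saved, Does.Contain("42"));
    }
}
```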

Test It While Hosting Your Windows Service

So, I’d like to introduce you to some of the home-brewed testing frameworks I like to use when building applications. Let’s start with Test.It.While.Hosting.Your.Windows.Service (C#, .NET Framework 4.6.2), a testing framework that helps you simplify how you write integration tests for Windows Service applications.

I’m not going to go into details; you will find those if you follow the link. BUT, I think integration testing is something all developers should do to some degree when developing applications. Integration tests cover much more than a unit test, but at the same time they are autonomous and dependency-free, which means you can run them anywhere the targeted framework is installed.

Just by being able to start your application in a unit-test-like manner, you have come far in testing your application’s functionality. How many times have you run into problems with the IoC registrations of your application?
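
One cheap way to catch that early (a sketch, using Autofac as an example container and reusing the types from the sketch above) is a test that builds the container the same way the application does at startup and resolves the root components:

```csharp
using Autofac;
using NUnit.Framework;

[TestFixture]
public class ContainerRegistrationTests
{
    [Test]
    public void All_root_components_can_be_resolved()
    {
        // Ideally this is the exact registration code the
        // application itself runs at startup, reused by the test.
        var builder = new ContainerBuilder();
        builder.RegisterType<InMemoryOrderRepository>().As<IOrderRepository>();
        builder.RegisterType<OrderApplication>().AsSelf();

        using (var container = builder.Build())
        {
            // A missing or broken registration throws here, in a test,
            // instead of when the service starts in production.
            Assert.That(container.Resolve<OrderApplication>(), Is.Not.Null);
        }
    }
}
```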

Okay, I’ll cut the sales pitch. I think you get the point anyway. Testing is important. Multiple levels of testing are important. Finding issues fast is important, at least in an agile world, and you are agile, aren’t you? 🙂

Hi!

My name is Fredrik. I’m a system developer, newly gone freelance, and (obviously) I like automated testing. I would not call myself a ‘tester’; I’m a system developer who likes simplicity and things that just work.

This is my first blog post for my new company, FKAN Consulting. I’ll try to continuously publish new (and hopefully interesting) posts here about my experiences as a developer and consultant.
