
Scaling in-process migrations with Kubernetes

I was recently involved in migrating data from one database to another for an application deployed to Kubernetes. The idea was to use an in-process migration routine that kicks in during the startup process of the application. Resilience and scaling were to be handled entirely by Kubernetes. Any failure would cause the application to fail fast and let k8s restart it.

This turned out to be quite simple to implement!

We use a rolling update strategy so that the newly deployed pods can migrate the data side by side with the old pods, which keep running the actual application. This is defined in the application's deployment manifest:

strategy:
  type: RollingUpdate
  rollingUpdate:
     maxUnavailable: 50%
     maxSurge: 50%

With a replica count of 8, maxUnavailable: 50% keeps at least 4 pods serving the old version, while maxSurge: 50% allows up to 4 extra pods, so up to 8 new pods can be running the migration at the same time.

However, for the rolling update strategy to work, we also need a readiness probe so the deployment knows when it is safe to swap out pods. Since the application is a message streaming service hosted in a generic .NET Core host, we could simply use an ExecAction probe that runs cat against a file whose existence we control during the application's life cycle. Simple!

The application’s life cycle went from:

private static async Task Main(string[] args)
{
    using var host = new HostBuilder().Build();
    await host.RunAsync();
}

…to something like this:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

internal class Program
{
    private static async Task Main(string[] args)
    {
        using var host = new HostBuilder().Build();
        await host.StartAsync();              // runs the migration as part of startup
        File.Create("/Ready").Dispose();      // signal readiness to the exec probe (dispose the file handle)
        await host.WaitForShutdownAsync();    // block until SIGTERM triggers shutdown
        File.Delete("/Ready");                // no longer ready to serve
        await host.StopAsync();
    }
}

During the startup phase, the migration takes place. At this point, no file called "Ready" exists in the application's execution directory. When startup finishes (the migration is done and the application is ready to serve), the Ready file is created. When SIGTERM is received, the Ready file is deleted and the shutdown process begins; from that point the application is no longer considered ready to serve.
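
In case you are wondering where the migration routine itself hooks in, here is a minimal sketch under the same setup; the MigrationService type and its RunMigrationAsync call are made-up placeholders for the real migration logic. A hosted service's StartAsync runs as part of host.StartAsync(), so the Ready file is not created until the migration has completed:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

internal class MigrationService : IHostedService
{
    // host.StartAsync() does not complete until this returns, so a failed
    // migration makes the process exit before the pod ever reports ready.
    public Task StartAsync(CancellationToken cancellationToken) =>
        RunMigrationAsync(cancellationToken);

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    private Task RunMigrationAsync(CancellationToken cancellationToken) =>
        Task.CompletedTask; // stand-in for the actual data migration
}

// Registered on the host builder from the snippet above:
// new HostBuilder().ConfigureServices(s => s.AddHostedService<MigrationService>()).Build();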

What about the probe configuration? Easy!

readinessProbe:
  exec:
    command: 
    - cat
    - /Ready
  periodSeconds: 2

Every other second, the kubelet runs cat against /Ready inside the container. If the file exists, the pod is considered ready and the deployment continues rolling out the next pods according to the update strategy.

Need more scaling? Just add more replicas!


Distributed Tracing with OpenTracing and Kafka

OpenTracing has become a standardized way to add distributed tracing to microservice architectures. It has been adopted by many libraries across all kinds of programming languages. I was missing a .NET library for creating distributed traces between Kafka clients, so I decided to write one!

OpenTracing.Confluent.Kafka

OpenTracing.Confluent.Kafka is a library built against .NET Standard 1.3. With the help of some extension methods and decorators for the Confluent.Kafka consumer and producer, it makes distributed tracing between the producer and the consumer quite simple.
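
To illustrate the underlying mechanism (this is a conceptual sketch, not the library's API), the producing side injects the active span context into the message headers and the consuming side extracts it again to continue the trace. Here a plain dictionary stands in for the Kafka message headers:

using System.Collections.Generic;
using OpenTracing;
using OpenTracing.Propagation;
using OpenTracing.Util;

public static class KafkaTracePropagation
{
    // Producing side: start a span and inject its context into the headers.
    public static void InjectIntoHeaders(IDictionary<string, string> headers)
    {
        var tracer = GlobalTracer.Instance;
        using (var scope = tracer.BuildSpan("kafka-produce").StartActive(finishSpanOnDispose: true))
        {
            tracer.Inject(scope.Span.Context, BuiltinFormats.TextMap, new TextMapInjectAdapter(headers));
            // ... produce the message with these headers ...
        }
    }

    // Consuming side: extract the parent context and start a child span.
    public static void StartConsumerSpan(IDictionary<string, string> headers)
    {
        var tracer = GlobalTracer.Instance;
        var parentContext = tracer.Extract(BuiltinFormats.TextMap, new TextMapExtractAdapter(headers));
        using (tracer.BuildSpan("kafka-consume").AsChildOf(parentContext).StartActive(finishSpanOnDispose: true))
        {
            // ... handle the consumed message ...
        }
    }
}

Mapping that dictionary onto the actual Kafka message headers is the kind of plumbing the decorators in the library are meant to hide.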

Example

There is a simple console app that serves as an example of how to use the decorators to create a trace with two spans, one for the producing side and one for the consuming side. It's built in .NET Core, so you can run it wherever you like. It integrates with Kafka and Jaeger, so in order to set up these systems I have provided a simple script and some Docker and Kubernetes YAML files for a one-click setup. If you don't run under Windows, you need to set everything up manually; the script should give a good indication of what needs to be done.

The result when running the app should look something like the image below.

Happy tracing!


4 Ways to Include Symbols and Source Files when Shipping C# Libraries

The need to debug NuGet-packaged assemblies has always been around, but did you know there are a couple of ways to achieve it? Let's take a closer look.

One Package to Rule Them All

Back in the day, I mostly did symbol packaging by including the PDB symbol file and the source code in the NuGet package along with the assembly. Everything in one single NuGet package.

pros

  • No extra communication to download symbols and source
  • No need to know where symbols and source files are located

cons

  • Larger NuGet package

Symbol Package

To avoid bloated packages carrying data that is only needed when debugging, you can pack the symbol file and source code into a separate symbol package. This is done by creating a .nupkg package containing the assembly and a .symbols.nupkg package containing the assembly, symbols and source code. The package containing the assembly is uploaded to a NuGet package server, and the symbol package is uploaded to a symbol source server such as SymbolSource (nuget.smbsrc.net).

pros

  • Small NuGet package 

cons

  • Need to know where the symbols are stored
  • Extra communication to download symbols and source files

Note: SymbolSource does not support the portable PDBs that the .NET Core CLI tool generates.

SourceLink

Now part of the .NET Foundation, SourceLink support has been integrated into Visual Studio 2017 15.3 and the brand new smoking hot .NET Core 2.1 SDK. SourceLink embeds source file information (the repository and revision) into the symbol files, making symbol packages obsolete. The debugger can then download the source files on demand from the source code repository using this information.

pros

  • Medium NuGet package
  • No symbol source package needed
  • Only download the source files needed for debugging

cons

  • Might call the repository for source code many times

JetBrains dotPeek

Did you know you could utilize dotPeek as a local symbol server? Well now you do. dotPeek will automatically construct symbol files from the assemblies loaded, which will be associated with the decompiled source files.

pros

  • Small NuGet package
  • No symbols or source files needed!

cons

  • Quite slow
  • Need to disable Just My Code when debugging, making symbol loading even slower
  • Not the real source code; you debug decompiled output, which is harder to read

Conclusion

There you have it! Whichever approach you prefer, there will, as usual, be trade-offs. However, SourceLink seems to be gaining traction and has a shot at becoming the de facto standard for shipping symbols and sources.


Integration Testing with AMQP

So, last week I finally released the first version of my shiny new integration testing framework for AMQP, Test.It.With.AMQP. It comes with an implementation of the AMQP 0.9.1 protocol and integration with RabbitMQ's popular .NET AMQP client.

Oh yeah, it’s all compatible with .NET Core 🙂

AMQP – Advanced Message Queuing Protocol

Wikipedia:

The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security.

Example

A common test scenario is that you have an application that consumes messages from a queue and you want to assert that the application retrieves the messages correctly.

var testServer = new AmqpTestFramework(Amqp091.Protocol.Amqp091.ProtocolResolver);
testServer.On<Basic.Consume>((connectionId, message) => AssertSomething(message));
myApplicationsIOCContainer.RegisterSingleton(() => testServer.ConnectionFactory.ToRabbitMqConnectionFactory());

This is simplified, though. In reality there is a lot of setup negotiation that needs to happen before you can consume any messages, like creating a connection and a channel. A real working test with a made-up application and the test framework Test.It.While.Hosting.Your.Windows.Service can be found here.
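
For context, this is roughly what the consuming side of an application looks like with the RabbitMQ .NET client (assuming an older client version where the delivered body is a byte array; the queue name is a placeholder). It is this connection and channel setup that the test framework negotiates in memory:

using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class MessageConsumer
{
    public void Start(IConnectionFactory connectionFactory)
    {
        // With the test framework, this factory is the in-memory one registered above,
        // so no real broker is involved.
        IConnection connection = connectionFactory.CreateConnection();
        IModel channel = connection.CreateModel();

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, args) => HandleMessage(args.Body);

        channel.BasicConsume(queue: "my-queue", autoAck: true, consumer: consumer);
    }

    private void HandleMessage(byte[] body)
    {
        // ... application logic ...
    }
}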

Why?

The purpose of this test framework is to mock an AMQP-based service in order to test the AMQP integration points and behaviour within an application, without the need for a shared, installed instance of the actual AMQP service. It's kind of like what the OWIN TestServer does for HTTP in Katana.

Fast

The test framework runs in memory, which means no time-consuming network traffic or interop calls.

Isolated

All instances are set up by the test scenario and have no shared resources. This means there is no risk of two or more tests affecting each other.

Testable

The framework makes it possible to subscribe to and send all AMQP methods defined in the protocol, and you can even extend the protocol with your own methods!

Easy Setup and Tear Down

Create an instance when setting up your test, verify your result, and dispose of it when you're done. No hassle with connection pools or locked resources.

Integration testing made easy.

Continuous Delivery with .NET Framework Applications in the Cloud – for Free!

Yep, you read that correctly. I've started to set up continuous delivery processes in the cloud using AppVeyor. AppVeyor is a platform for achieving CI/CD in the cloud, particularly for applications written for Windows. It has integration plugins for all the common popular services, like GitHub, GitLab, Bitbucket, NuGet etc., and supports a mishmash of different languages centered around a Windows environment. The cost? It's FREE for open source projects!

I've written about continuous integration and continuous delivery before here, and this will be a sort of extension of that topic. I thought I would describe my go-to CI/CD process, and how you can set up your own in a few simple steps!

The Project

One of my open source projects is a hosting framework for integration testing Windows Services. It’s a class library built in C# and released as a NuGet package. It’s called Test.It.While.Hosting.Your.Windows.Service. Pretty obvious what the purpose of that application is, right? 🙂

Source Control

The code is hosted on GitHub, and I use a trunk-based branching strategy, which means I use one branch, master, for simplicity.

AppVeyor integrates with a bunch of popular source control systems, among them GitHub. It uses webhooks to be notified of new code pushed to the repository, which can then trigger a new build of an AppVeyor project.

The Process

Since my project is open source and I use the free version of AppVeyor, the CI/CD process information is publicly available. You can find the history for version 2.1.3 here.

The following picture shows the General tab on the settings page of my AppVeyor project for Test.It.While.Hosting.Your.Windows.Service.

Github – Blue Rectangles

If you look at the build history link, you will see at row 2 that the first thing that happens is the cloning of the source code from GitHub.

When you create an AppVeyor project, you need to integrate with your source control system. You can choose which type of repository to integrate with. I use GitHub, which means I can configure GitHub-specific settings as seen in the settings picture above. You don't need to manually enter the link to your repository; AppVeyor uses OAuth to authenticate against GitHub and then lets you choose, with a simple click, which repository to create the project for.

I chose to trigger the build from any branch because I want to support pull requests, since that is a great way to let strangers contribute to your open source project without risking that someone, for example, deliberately destroys your repository. However, I don't want to increment the build version during pull requests, so I check the "Pull Requests do not increment build number" checkbox. This causes pull request builds to add a random string to the build version instead of bumping the number.

That's basically it for the GitHub integration. You might notice that I have checked the "Do not build tags" checkbox. I will come back to why later, in the bonus part of this article 🙂

Build – Green Rectangles

There are some configuration options for the build version format. I like using SemVer, and I use the build number to increment the patch version. When adding new functionality or making breaking changes, it's important to change the build version manually before pushing the changes to your source control system. Remember that every change on master will trigger a new build, which in turn will trigger a release and deployment.

I also like to write the version generated by AppVeyor into the AssemblyInfo files of the C# projects being built. This is later used when generating the NuGet package that will be released on NuGet.org. You can see the AssemblyInfo files being patched at rows 4-6 in the build output.

In the Build tab, I choose MSBuild as the build tool and Release as the configuration, which means the projects will be built in Release configuration using MSBuild. You can also see this at row 8 in the build output, and the actual build at lines 91-99.

On row 7 it says

7. nuget restore

This is just a before-build cmd script configured to restore the NuGet packages referenced by the .NET projects. The NuGet CLI tool comes pre-installed in the build agent image. You can see the NuGet restore process at lines 9-90.

The picture above shows the Environment tab, where the build worker image is chosen.

Tests

The next step after building is running automated tests.

As you can see in the Tests tab, I have actually not configured anything. Tests are automatically discovered and executed. AppVeyor has support for the most common test frameworks, in my case xUnit.net. You can see the tests being run and the test results being reported at lines 113-121.

Packaging (Red Rectangles)

After the build completes, it's time to package the NuGet target projects into NuGet packages. AppVeyor is integrated with NuGet, or rather exposes the NuGet CLI tool in the current OS image. The "Package NuGet projects" checkbox will automatically look for .nuspec files in the root directory of every project, package them accordingly, and upload the results to the internal artifact storage.

One of the projects includes a .nuspec file, can you see which one? (Hint: Check line 108)

If you look closely, you can see that the packaging is done before the tests are run. That doesn't really make much sense, since packaging is not needed if any test fails, but that's a minor gripe.

Deploying

The last step is to deploy the NuGet package to my NuGet feed on NuGet.org. There are a lot of deployment providers available in AppVeyor, like NuGet, Azure, Web Deploy, SQL etc.; you can find them under the Deployment tab.

I chose NuGet as the deployment provider and left the NuGet server URL empty, as it automatically falls back to nuget.org. Since I've also left Artifacts empty, it will automatically pick up all NuGet package artifacts uploaded to my artifact store during the build process; in this case there is just one, as shown at lines 122-123. I only deploy from the master branch in order to avoid publishing packages by mistake should I push to another branch. Remember that I use a trunk-based source control strategy, so that should never happen.

Notice the placeholder under API key. This is where the NuGet API key for my feed goes, authorizing AppVeyor to publish NuGet packages onto my feed on my behalf. Since this is a sensitive piece of information, I have stored it as an environment variable (you might have noticed it in the picture of the Environment tab, enclosed in a purple rectangle).

Environment variables are available throughout the whole CI/CD process. There are also a bunch of pre-defined ones that can come in handy.

The actual deployment to NuGet.org can be seen at lines 124-126, and the package can then be found at my NuGet feed.

Some Last Words

AppVeyor is a powerful tool for CI/CD. It really makes it easy to set up fully automated processes from source control through the build and test stages to release and deployment.

I have used both Jenkins and TFS together with Octopus Deploy to achieve different levels of continuous delivery, but this is so much easier to set up in comparison, and without having to host anything except the applications you build.

Not a fan of UI-based configuration? No problem, AppVeyor also supports a YAML-based definition file (appveyor.yml) for the project configuration.

Oh yeah, almost forgot. There are also some super nice badges you can show off with on, for example, your README.md on GitHub.

The first one comes from AppVeyor, and the second one from BuildStats. Both are supported in markdown. Go check them out!

BONUS! (Black Rectangles)

If you were observant when looking at the build output and at the bottom of the Build and the Deployment tabs, you might have seen some PowerShell scripts.

Release Notes

The first script sets the release notes for the NuGet package based on the commit message from Git. It is applied before packaging and updates the .nuspec file used to define the package. Note the usage of the pre-defined environment variables mentioned earlier.

$path = "src/$env:APPVEYOR_PROJECT_NAME/$env:APPVEYOR_PROJECT_NAME.nuspec"
[xml]$xml = Get-Content -Path $path
$xml.GetElementsByTagName("releaseNotes").set_InnerXML("$env:APPVEYOR_REPO_COMMIT_MESSAGE $env:APPVEYOR_REPO_COMMIT_MESSAGE_EXTENDED")
Set-Content $path -Value $xml.InnerXml -Force

It opens the .nuspec file, reads its content, updates the releaseNotes tag with the commit message and then saves the changes.

The release notes can be seen on the NuGet feed, reading "Update README.md added badges". They can also be seen in the Visual Studio NuGet Package Manager UI.

Git Version Tag

The second script pushes a tag with the deployed version back to the GitHub repository, on the commit that was fetched at the beginning of the process. This makes it easy to trace which commit resulted in which NuGet package.

git config --global credential.helper store
Add-Content "$env:USERPROFILE\.git-credentials" "https://$($env:git_access_token):x-oauth-basic@github.com`n"
git config --global user.email "fresa@fresa.se"
git config --global user.name "Fredrik Arvidsson"
git tag v$($env:APPVEYOR_BUILD_VERSION) $($env:APPVEYOR_REPO_COMMIT)
git push origin --tags --quiet

  1. In order to authenticate with GitHub, we use the git credential store. This could be a security issue, since the credentials (here a Git access token) are stored on disk on the AppVeyor build agent. However, since nothing on the build agent is ever shared and the agent is destroyed after the build process, it's not an issue.
  2. Store the credentials. The Git access token generated from my GitHub account is kept in a secure environment variable.
  3. Set the user email.
  4. Set the user name.
  5. Create a git tag based on the build version and apply it to the commit fetched at the beginning of the CI/CD process.
  6. Push the created tag to GitHub. Notice the --quiet flag suppressing the output from the git push command, which otherwise would be reported as an error in the PowerShell script execution task run by AppVeyor.

Do you remember the "Do not build tags" checkbox mentioned in the GitHub section above? Well, it is checked in order to prevent a never-ending loop of new builds being triggered when the tag is pushed to the remote repository.


C# 8 Preview with Mads Torgersen

A colleague sent this video to me today containing a presentation of the upcoming C# 8 features.

For those of you who do not recognize this guy, he's Microsoft's program manager for the C# language, and has been for many years! Not to be mixed up with Mads Kristensen (who wrote this awesome blog engine), who also works at Microsoft, both originating from Denmark.

So, let me give you a little summary if you don’t have the time to watch the video, which btw you really should if you’re a .NET developer like me.

Nullable Reference Types

If you have coded C# before, you probably know about the nullable type operator '?' for value types.

int? length = null;

This means that the integer length can now be set to null, which the primitive type int on its own never can be. Nullable value types were introduced way back in 2005, when .NET Framework 2.0 was released.

Now the C# team is introducing nullable reference types. This means you can make, for example, a string nullable. Well, a string can already be null, you might say, so what's the big deal? Intention. I bet all programmers have had their share of the infamous NullReferenceException, am I right? Enter nullable reference types. Want your string to be nullable by intent? Use string?. This means that the intent of the string variable is that it can be null, and the compiler will track this. Dereferencing the nullable string variable without first checking for null will produce a compiler warning. Awesome!
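
A minimal sketch of how this ended up working when the feature shipped (the variable names are just for illustration):

#nullable enable

string? maybeName = null;        // explicitly allowed to be null
string alwaysName = "Mads";      // assumed by the compiler to never be null

int length = maybeName.Length;   // warning CS8602: dereference of a possibly null reference

if (maybeName != null)
{
    length = maybeName.Length;   // no warning: the compiler knows maybeName is not null here
}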

Async Streams

Streams are naturally push-based, meaning things happen at any time, outside the control of the consumer (think a river and a dam). This can build up congestion, where you need some sort of throttling. With async streams you get this naturally! The consumer now has a say in whether it is ready to consume or not. If you have ever used a message queue, this is probably already familiar. For distributed transport systems like RabbitMQ it is a natural thing: you spin up more consumers as the queue grows and use a throttling mechanism so each consumer only takes a couple of messages at a time.
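
A minimal sketch of what async streams ended up looking like in C# 8, with a consumer that pulls values at its own pace (the names and the delay are made up):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

internal class AsyncStreamDemo
{
    // The producer yields values lazily; nothing is produced until the consumer asks for it.
    private static async IAsyncEnumerable<int> ProduceAsync()
    {
        for (var i = 0; i < 10; i++)
        {
            await Task.Delay(100); // simulate waiting for the next value
            yield return i;
        }
    }

    private static async Task Main()
    {
        // The consumer awaits each element, effectively throttling the producer.
        await foreach (var number in ProduceAsync())
        {
            Console.WriteLine(number);
        }
    }
}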

Default Interface Implementations

Now this one is interesting. It means you can implement functionality in an Interface definition.

Say what?

You mean like an abstract class? Well, yes and no. It behaves more like an explicit interface implementation. You have probably seen an explicit interface implementation before, but let me demonstrate anyway.

public interface IGossip
{
    string TellGossip();
}

public class GossipingNeighbor : IGossip
{
    string IGossip.TellGossip()
    {
        return "Haha, you won't know about this!";
    }
}

public class NosyNeighbor
{
    private readonly GossipingNeighbor _gossipingNeighbor;

    public NosyNeighbor(GossipingNeighbor gossipingNeighbor)
    {
        _gossipingNeighbor = gossipingNeighbor;
    }

    public void PleaseTellMe()
    {
        // Compiler error
        var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();
    }
}

So, extending interface definitions in C# 8, you can actually provide implementations directly in the interface definition! If you add a method to the above interface along with a default implementation, existing implementations of the interface do not break, which means they do not need to explicitly implement the new method. It does not even affect the implementing class until it is cast to the interface.

public void PleaseTellMe()
{
    // Compiler error
    var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();

    // This works!
    var iDidntKnowThat = ((IGossip) _gossipingNeighbor).TellGossip();
}
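
A minimal sketch of what adding such a default implementation could look like (the Whisper method is made up for illustration):

public interface IGossip
{
    string TellGossip();

    // New member with a default implementation: GossipingNeighbor keeps compiling
    // without any changes and only exposes this member when cast to IGossip.
    string Whisper()
    {
        return $"Psst... {TellGossip()}";
    }
}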

Default interface implementations (default methods) were introduced in Java 8 back in 2014, so here C# is actually behind Java!

Extend Everything!

Well, maybe not. Since 2007 (.NET Framework 3.5, C# 3.0) we have had extension methods: you can add methods to an existing class without touching the class itself. Previously this only covered methods; the proposal extends it to properties, operators and maybe even constructors! There are limitations, however. You cannot hold instance state, but you can hold definition state, i.e. static state. Maybe not revolutionary, since you can already do a great deal with extension methods, but there will still be times when this is useful.
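
For reference, this is what today's extension methods look like; the proposed extension members build on the same idea (the example method is made up):

public static class StringExtensions
{
    // Adds a method to System.String without touching the class itself.
    public static bool IsNullOrShort(this string value, int maxLength = 3)
    {
        return string.IsNullOrEmpty(value) || value.Length <= maxLength;
    }
}

// Usage: "Hi".IsNullOrShort() returns true.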

Mads also talks about extending extensions with interfaces. He does not go into detail about what that means, and also states that this is probably far into the future of the evolution of C#.

Conclusion

No generic attributes?

Still, lots of new goodies that might be included in C# 8. However, bear in mind that many of these ideas might not turn out like this when C# 8 is actually released. If you watch the video you'll hear Mads point out the uncertainty about what will actually ship, but I think it corresponds quite closely to the C# 8 milestone.


Don’t Branch

Git is a great, popular, distributed source control system that most of us probably have encountered in various projects. It’s really simple:

1. Pull changes from the remote origin master branch to your local master branch.

2. Code.

(3). Merge any changes from the remote origin master branch to your local master branch.

4. Push your local changes on the master branch to the remote master origin branch.

That’s it! Simple, isn’t it? Master is always deployable and changes fast.

So why do many people use complex git branching strategies?

Check out this google search result: https://www.google.com/search?q=git+workflow&tbm=isch

The horror!

If you are living by continuous delivery, you do not want to see that. It's the opposite of continuous integration: continuous isolation. You diverge, you do not integrate. Well, technically you have to diverge for a while when using a distributed source control system (otherwise it would not be distributed), but you want to diverge for as short a time as possible. Why? Read my post Continuous Delivery – Are You Fast Enough? 🙂

Open Source

So, is branching always bad? Well, no, it would probably not exist if it were 🙂 Open source software published on GitHub is a perfect example of when branching might be necessary. If you develop an application and make the source code publicly available on GitHub, anyone can clone your code, create a branch, make changes and open a pull request before the changes are merged into master.

https://guides.github.com/introduction/flow/

This makes sense. Why? Because you do not necessarily know the person changing your code. It could be a rival wanting to destroy your work. It wouldn't work if that person could merge directly into master. A security gate is needed.

Tight Teams

Being part of a team at a company, you work towards the same agenda. You probably have an agreed coding standard and a process the team follows. No one is working against the team, so there is no need for a security gate in the source control system. Hence, keep it simple: don't branch, use master.

– But we need feature branchi…

Feature Branching

So, you think you need to branch to develop new features? You don't. There are some nice strategies for making small changes and committing them to the production code continuously, even though the functionality might not be fully working yet.

Feature Toggling

This is a great tool for hiding any functionality that is not ready for production yet. If you haven't heard about all the other nice perks of feature toggling, I highly recommend you read this article by Martin Fowler: https://martinfowler.com/articles/feature-toggles.html
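
As a minimal illustration of the idea (the IFeatureToggles abstraction and the toggle name are made up), unfinished functionality simply stays behind a toggle that is off in production:

public interface IFeatureToggles
{
    bool IsEnabled(string feature);
}

public class CheckoutService
{
    private readonly IFeatureToggles _toggles;

    public CheckoutService(IFeatureToggles toggles)
    {
        _toggles = toggles;
    }

    public void Checkout()
    {
        if (_toggles.IsEnabled("NewCheckoutFlow"))
        {
            // New, not yet finished code path: merged to master but hidden in production.
        }
        else
        {
            // Current production behaviour.
        }
    }
}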

Branch by Abstraction

No, it's not source control branching. This technique lets you make large changes to the code incrementally while continuously integrating with the production code. Again I'd like to refer you to an excellent explanation of the subject by Martin: https://martinfowler.com/bliki/BranchByAbstraction.html
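
A minimal sketch of the idea (the IReportStorage abstraction and its implementations are made up): introduce an abstraction over the part you want to replace, keep the old implementation wired in, build the new one behind the same abstraction, and switch when it is ready. All of it lives on master the whole time:

public interface IReportStorage
{
    void Save(string report);
}

// Step 1: wrap the existing behaviour behind the abstraction.
public class LegacyFileReportStorage : IReportStorage
{
    public void Save(string report) { /* existing file-based code */ }
}

// Step 2: build the replacement incrementally behind the same abstraction.
public class BlobReportStorage : IReportStorage
{
    public void Save(string report) { /* new storage code */ }
}

public static class ReportStorageFactory
{
    // Step 3: the composition root decides which implementation is used; switching
    // this line (or combining it with a feature toggle) completes the "branch".
    public static IReportStorage Create() => new LegacyFileReportStorage();
}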

Conclusion

Don’t use branching strategies if you work in a tight team that has the same goal. Keep it simple, stupid.


Microservices – It’s All About Tradeoffs

Everybody has probably heard the much-hyped word "microservices": the architecture that supposedly solves just about everything, now that computers aren't getting much faster (kind of) and the need for scaling and distribution keeps growing as more and more people use the internet.

Don’t get me wrong, I love microservices! However, it is important to know that as with most stuff in the world, everything has tradeoffs.

Lessons to be Learned

I've been developing systems using microservices for quite a few years now, and there are a lot of lessons learned (and still lessons to be learned). I saw this presentation by Matt Ranney from Uber last year, where he talks about the almost ridiculous number of services Uber has, and the problems that come with communicating between all these independent and loosely coupled services. If you have ever developed asynchronous applications, you probably know what kind of complexity they can generate and how hard it can be to understand how everything fits together. With microservices, this can be even harder.

The World of Computer Systems are Changing

I recognize many of the insights he shares from my own experience of building microservices. I recently did some development using Akka.NET and had similar insights, but on a whole new level: microservices within microservices. I won't jump into that now; maybe I'll share those thoughts on another occasion. However, microservice architectures are becoming more and more important. One reason is the stagnation of hardware speed, with CPUs changing from the traditional single core whose clock frequency increased between models to today's designs where the cores are multiplied instead, without the frequency increasing. Another is that it gives you freedom as a developer when facing huge applications and organisations. And then there is this thing called zero downtime. You might have heard of it. Everything has to work all the time.

Synchronisation

While I tend to agree with most of what Matt says, I don't agree with the "blocked by other teams" statement. If you get "blocked" as a team, you are doing something wrong, especially if you are supposed to be microservice oriented.

Being blocked suggests that you need some sort of synchronisation of information, and that you cannot continue until you have it. While synchronisation between systems must occur at some point, it does not mean that you cannot develop and release code that cannot be fully utilized until other systems have been developed and deployed. Remember agile, moving fast, autonomy, and everything having to work all the time? The synchronisation part is all about a mutual understanding of the contract between the services. Once you have that, it's just a matter of using different techniques for parallel development without ever being blocked, like feature toggling. There needs to be a "we are finished, let's integrate" moment at some point, but it's not a blockage per se. It's all about continuation, and it is even more important with microservices, as integration gets more complex when more services and functionality are developed in parallel.

Context

Systems built as microservices also face the problem of process context, or the start-and-end problem. As a user doing something, you usually see that action from a bubble perspective: you update a piece of information and expect to see a result of that action. But in a microservice-based system, a lot of things may be going on in many systems during that action. The action does not necessarily have the effect you think, and the context gets chopped into smaller pieces as it now spans multiple systems. This leads to the distribution problem: how do you visualize and explain what is happening when a lot of things happen at roughly the same time in different places? People tend to rely on synchronisation to explain things, but synchronisation is really hard to achieve if you do not have all the context at the same time and place, which is next to impossible when it comes to parallelism, distribution and scaling, the very things you often get and want with microservices: asynchronicity. You might want to rethink the way you perceive the systems you are working with. Do you really need synchronisation, and why? It's probably not an easy thing to just move away from, as it is deeply rooted in many people's minds as a way to simplify what is happening. But things are seldom synchronous in the real world, and as computers and systems become more distributed, it will make less and less sense to keep pretending they are.

REST

I also think the overhyped use of the word REST might add to the problem. REST implies that model state is important. But many microservices are not built around the concept of states and models; they continuously change things. Transitions are really hard to represent as states. I'm not talking about what something transitioned TO, but about how to visualize the transition or the causation, the relationship between cause and effect. Sometimes functionality is best represented as functions, not as the states before and after. Back are the days of RPC services. Why not represent an API as a stream of events? Stateless APIs! It's all about watching things happen. Using commands and events can be a powerful thing.

Anyhow, microservices are great, but they might get you into situations where you feel confused as you try to apply things that used to work great but no longer seem to fit. By rethinking the way you perceive computer systems, you will soon find new ways of working, and that might open up great new possibilities. Dare to try new angles, and remember that it's all about trade-offs. Simulating the world exactly as it is might not be feasible in a computer system, but treating it as a bunch of state models within a single process might not be the right way either.


Continuous Delivery – Are You Fast Enough?

So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, presented at GOTO 2017 a couple of months ago. I saved it in the "to watch" list on YouTube, as I so often do, and forgot about it, as I so often do, until yesterday. It's a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!

I have been doing CI/CD for many years in many projects and have learned a lot along the way. I think understanding what it is and how to implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost as to how to get there.

One thing I often hear is people talking about how Gitflow is a CI workflow. It really is not. Feature branching is a phenomenon that dies hard, and it is the very opposite of continuous integration. I really like the phrase continuous isolation, because that is exactly what it is.

Separate teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite: isolation. Handovers take time, and information and knowledge get lost along the way.

I often push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.

The process I tend to set up looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.

The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If everything is green, it packages the tested application, uploads it to package storage, triggers the release process and deploys to the test environments.

The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log-aggregation systems in order to visualize how the application behaves, but also for debugging purposes.

You really can't have too many logs and metrics; however, it is important to have the right tools for structuring and mining all this data in a usable way, otherwise it will just be a big pile of data that no one is ever going to touch. It also needs to happen in real time.

Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want an alert every time a server request times out, but if it happens a lot during some time period and nothing else explains it, then you might want to look into what is going on. This is again where you want to go for simplicity. You do not want to spend hours or days going through log entries; you might not even have that time, depending on the severity of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.

See the video. It might not be a deep dive into CI/CD processes and how to implement them, but it does explain how to think and why.


Integration Testing

Sexy title, isn’t it? 🙂 Well, maybe not, but still, it’s an important aspect of system development.

What is Integration Testing?

Good question, Fredrik. Well thank you, Fredrik.

Yeah yeah, okay, enough with this, let’s get serious.

When I talk about testing I almost always get misunderstood. Why? Because we all have different views and vocabularies when we talk about testing, and especially automated tests. Have you seen the explanation of integration testing on Wikipedia? Well, it's not exactly precise, to say the least 🙂 https://en.wikipedia.org/wiki/Integration_testing

When I talk about integration tests, I usually mean firing up my application in memory and probing its integration points. I like this approach because it lets me get rid of third-party application dependencies. It means I can run my tests anywhere, without having to install and manage third-party applications during testing, for example a database.
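
As a rough sketch of the general idea (using the .NET generic host and xUnit purely for illustration, not the API of any specific test framework; IMessageBus and FakeMessageBus are made up), the test starts the application's host in memory with the integration point swapped for an in-memory fake and asserts on what reaches it:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Xunit;

// Made-up integration point and in-memory fake, standing in for e.g. a message broker client.
public interface IMessageBus
{
    void Publish(string message);
}

public class FakeMessageBus : IMessageBus
{
    public List<string> PublishedMessages { get; } = new List<string>();
    public void Publish(string message) => PublishedMessages.Add(message);
}

public class WhenStartingTheApplication
{
    [Fact]
    public async Task It_should_wire_up_without_external_dependencies()
    {
        var fakeBus = new FakeMessageBus();

        // Build the application's host in memory, swapping the real integration point for the fake.
        using var host = new HostBuilder()
            .ConfigureServices(services => services.AddSingleton<IMessageBus>(fakeBus))
            .Build();

        await host.StartAsync();   // exercises startup, including the IOC registrations

        Assert.Empty(fakeBus.PublishedMessages);

        await host.StopAsync();
    }
}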

It fits nicely into the continuous integration process.

Test It While Hosting Your Windows Service

So, I'd like to introduce you to some of the home-brewed testing frameworks I like to use when building applications. Let's start with Test.It.While.Hosting.Your.Windows.Service (C#, .NET Framework 4.6.2), a testing framework that simplifies how you write integration tests for Windows Service applications.

I'm not going to go into details; you will find those if you follow the link. BUT, I think integration testing is something all developers should do to some degree when developing applications. Integration tests cover much more than a unit test, but at the same time they are autonomous and dependency-free, which means you can run them anywhere the targeted framework is installed.

Just by being able to start your application in a unit-test-like manner, you have come far in testing your application's functionality. How many times have you experienced problems with the IOC registrations of your application?

Okay, I'll stop the sales pitch. I think you get the point anyway. Testing is important. Multiple levels of testing are important. Finding issues fast is important, at least in an agile world, and you are agile, aren't you? 🙂

Hi!

My name is Fredrik. I'm a system developer who recently went freelance, and (obviously) I like automated testing. I would not call myself a 'tester'; I'm a system developer who likes simplicity and things that just work.

This is my first blog post for my new company, FKAN Consulting. I’ll try to continuously post new (hopefully interesting) posts here about my experiences as a developer and consultant.
