Tag: Testing

ApiCompat has Moved into the .NET SDK

A while ago I wrote about how to detect breaking changes in .NET using Microsoft.DotNet.ApiCompat. Since then, ApiCompat has moved into the .NET SDK.

What has Changed?

Since ApiCompat is now part of the .NET SDK, the Arcade package feed no longer needs to be referenced.

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

The package reference can also be removed.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

In order to continue running assembly validation from MSBuild, install the Microsoft.DotNet.ApiCompat.Task.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat.Task" Version="8.0.404" PrivateAssets="all" IsImplicitlyDefined="true" />
</ItemGroup>

Enable assembly validation.

<PropertyGroup>
  <ApiCompatValidateAssemblies>true</ApiCompatValidateAssemblies>
</PropertyGroup>

The contract assembly reference directive has also changed: the old ResolvedMatchingContract item is replaced by the ApiCompatContractAssembly property.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

<PropertyGroup>
  <ApiCompatContractAssembly>LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll</ApiCompatContractAssembly>
</PropertyGroup>

The property controlling suppression of breaking changes has changed from BaselineAllAPICompatError to ApiCompatGenerateSuppressionFile.

<PropertyGroup>
  <BaselineAllAPICompatError>false</BaselineAllAPICompatError>
  <ApiCompatGenerateSuppressionFile>false</ApiCompatGenerateSuppressionFile>
</PropertyGroup>

That’s it, you’re good to go!

Compatibility Baseline / Suppression Directives

Previously the suppression file, ApiCompatBaseline.txt, contained text directives describing suppressed compatibility issues.

Compat issues with assembly Kafka.Protocol:
TypesMustExist : Type 'Kafka.Protocol.ConsumerGroupHeartbeatRequest.Assignor' does not exist in the implementation but it does exist in the contract.

This format has changed to an XML based format, written by default to a file called CompatibilitySuppressions.xml.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig</Target>
    <Left>LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll</Left>
    <Right>obj\Debug\netstandard2.1\Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

This format is more verbose than the old one and, if you ask me, a bit harder to read from a human perspective. Descriptions of the various DiagnosticIds can be found in this list.

Path Separator Mismatches

Looking at the suppression example, you might notice that the suppressions contain references to the compared assembly and the baseline contract. It’s not a coincidence that the path separators differ between the reference to the contract assembly and the reference to the assembly being compared. The Left reference is a templated copy of the ApiCompatContractAssembly directive using OS-agnostic forward slashes, but the Right reference is generated by ApiCompat and is not OS-agnostic, hence the backslash path separators when executing under Windows. If ApiCompat were executed under Linux, it would generate forward slash path separators.

You might also notice that the reference to the assembly being compared contains the build configuration name. This might not match the build configuration used in a build pipeline, for example (Debug vs Release).

Both of these path differences will make ApiCompat ignore the suppressions when they don’t match. There is no documentation on how to consolidate them, but fortunately there are a couple of somewhat hidden transformation directives that control how these paths are formatted.

<PropertyGroup>
  <_ApiCompatCaptureGroupPattern>.+%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)</_ApiCompatCaptureGroupPattern>
</PropertyGroup>

<ItemGroup>
  <!-- Make sure the Right suppression directive is OS-agnostic and disregards configuration -->
  <ApiCompatRightAssembliesTransformationPattern Include="$(_ApiCompatCaptureGroupPattern)" ReplacementString="obj/$1/$2" />
</ItemGroup>

The _ApiCompatCaptureGroupPattern regex captures path segment groups which the ApiCompatRightAssembliesTransformationPattern directive then uses to rewrite the assembly reference path into something that is compatible with both Linux and Windows and that drops the build configuration segment.

Using this will cause the Right directive to change accordingly.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig</Target>
    <Left>LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll</Left>
    <Right>obj/netstandard2.1/Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

There is a similar directive for the Left reference, named ApiCompatLeftAssembliesTransformationPattern.

Detecting Breaking Changes

When integrating with other applications and libraries, it’s important to detect when APIs change in an incompatible way, as that might cause downtime for downstream applications or libraries. SemVer is a popular versioning strategy that can hint at breaking changes by bumping the major version part, but as an upstream developer it can be difficult to detect that a code change is in fact a breaking change.

ApiCompat

Microsoft.DotNet.ApiCompat is a tool built for .NET that can compare two assemblies for API compatibility. It’s built and maintained by the .NET Core team for usage within the Microsoft .NET development teams, but it’s also open source and available to anyone.

Installation

The tool is provided as a NuGet package on the .NET Core team’s NuGet feed, which is not nuget.org (the feed most package managers reference by default) but a custom NuGet server hosted in Azure. The URL of the feed needs to be explicitly specified in any project that wants to use it.

In the project file, add a reference to the NuGet feed:

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

Add a reference to the Microsoft.DotNet.ApiCompat package:

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

The most recent version of the package can be found by browsing the feed.

Usage

The tool can execute as part of the build process and fail the build if the current source code contains changes that are not compatible with a provided contract, for example an assembly from the latest major release.

While it is possible to commit a contract assembly to Git, another approach is to automatically fetch it from its source early in the build process. This example uses a NuGet feed as the release source, but it could just as well be an asset in a GitHub release or something else.

<Target Name="DownloadLastMajorVersion" BeforeTargets="PreBuildEvent">
  <DownloadFile SourceUrl="https://www.nuget.org/api/v2/package/MyLibrary/2.0.0" DestinationFolder="LastMajorVersionBinary">
    <Output TaskParameter="DownloadedFile" PropertyName="LastMajorVersionNugetPackage" />
  </DownloadFile>
  <Unzip SourceFiles="$(LastMajorVersionNugetPackage)" DestinationFolder="LastMajorVersionBinary" />
</Target>

This will download a NuGet package to the folder LastMajorVersionBinary in the project directory using the DownloadFile task. If the directory doesn’t exist it will be created. If the file already exists and has not been changed, this becomes a no-op.

The Unzip task then unpacks the .nupkg file to the same directory. Same thing here: if the files already exist, this is a no-op.

The last step is to instruct ApiCompat to target the unpacked assembly file as the contract the source code will be compared with. This is done by setting the ResolvedMatchingContract item, which is the only setting required to run the tool.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

The path points to where the assembly file is located in the unpacked NuGet package directory.

Building the project will now download the contract and execute ApiCompat with the default settings. Remember to use proper access modifiers when writing code: internal is by default not treated as part of the public contract1, as the referenced member or type cannot be reached by other assemblies, unlike public.

1 InternalsVisibleToAttribute can make internal members and types accessible to other assemblies and should be used with care.
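
As a minimal sketch of what that means (the type and members here are made up), only the externally reachable surface is part of the contract:

public class KafkaClient
{
    // Public member: removing or changing this signature is a breaking change
    // that ApiCompat will report.
    public void Connect(string host) { }

    // Internal member: not reachable from other assemblies, so it is not part
    // of the public contract and can change freely (unless exposed via
    // InternalsVisibleToAttribute).
    internal void ResetState() { }
}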

Handling a Breaking Change

Breaking changes should be avoided whenever possible, as they break integrations downstream. This is true for both libraries and applications. Most of the time an indicated breaking change should be mitigated in a non-breaking fashion, like adding another API and deprecating the current one. But at some point the old API needs to be removed, and the resulting breaking change will fail the build.

The property BaselineAllAPICompatError can be used to accept breaking changes. A specification of the breaking changes will be written to a file called ApiCompatBaseline.txt in the root of the project, which ApiCompat uses to ignore these tracked incompatible changes from then on. This property should only be set when a breaking change should be accepted, and accepting one should result in a new major release.

<PropertyGroup>
  <BaselineAllAPICompatError>true</BaselineAllAPICompatError>
</PropertyGroup>

Once a new build has been executed and the new contract baseline has been established, remember to set the property back to false or remove it.

The baseline file should be committed to source control, where it can be referenced as documentation for the breaking change, for example by including it in a BREAKING CHANGE conventional commit message.

Once a new major release has been published, there are two options.

  1. Reference the assembly in the new major release as the contract and delete ApiCompatBaseline.txt.
  2. Do nothing 🙂

As git can visualize diffs between two references, the content of the ApiCompatBaseline.txt will show all the breaking changes between two version tags, which can be quite useful.

Summary

ApiCompat is a great tool for automating the detection of breaking changes during development. It helps avoid introducing incompatible changes and potential headaches for downstream consumers, both for applications and libraries.

A complete example of the mentioned changes is available here.

In-Memory Service Integration Tests

One concept I always use when writing applications or libraries is to test them using fast, reliable and automated integration tests.

The Test Trophy

You have probably heard about the test pyramid before, but if you haven’t heard about the test trophy, you should read this article.

Coming from the .NET world, I have come to rely on real-time code analyzers such as Roslyn and ReSharper; they help me write cleaner code faster and with little effort. It’s great having continuous feedback on the code being written more or less instantly.

For similar reasons I use NCrunch for continuous test execution. I want instant feedback on the functionality I write. This means I need tests that are isolated and stable, run fast and continuously and test business functionality as realistically as possible.

Integration Tests

While unit tests are valuable when writing small isolated pieces of functionality, they are often too isolated to verify the bigger picture, hence the need for integration tests.

In order for integration tests to give near real-time feedback, they need to work similarly to unit tests, yet test real business scenarios end-to-end using stable integration points that do not break when refactoring. This is where in-memory service integration tests come in.

  • In-memory – because it’s fast and can run anywhere.
  • Service – because it exposes stable APIs that don’t break compatibility.
  • Integrations – because they also use stable APIs and are pluggable.

These tests encapsulate an in-memory environment where the service is deployed to a host that acts similarly to a real host, through which the test can invoke the service. Third-party in-memory representations of integrating services are used to mimic dependencies and observe the service’s behavior. The test short-circuits the service’s I/O integrations, preferably as close to the network level as possible, and redirects traffic back to itself for instrumentation and assertions.

An example of an integration test can be found here. It tests a web server that provides port forwarding functionality to Kubernetes, similar to kubectl port-forward.

Examples of in-memory versions of different components are Kafka.TestFramework, Test.It.With.AMQP, Entity Framework Effort as well as the AspNetCore TestServer.
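
As a rough sketch of what such a test can look like, here is a minimal example using the last of those, the AspNetCore TestServer through WebApplicationFactory. The Program entry point and the /health endpoint are assumptions made up for the example.

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// The service is hosted in memory; CreateClient returns an HttpClient wired
// directly to the in-memory TestServer, so no network sockets are involved.
public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthEndpointTests(WebApplicationFactory<Program> factory)
    {
        _factory = factory;
    }

    [Fact]
    public async Task It_should_respond_over_the_in_memory_host()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}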

So how fast do these integration tests run? Let’s churn.

Fast enough to run continuously!

JIT Lag

It is worth mentioning that most test runners do not run each test in a completely isolated application domain, because of the overhead caused by the JIT compiler when loading assemblies. Therefore it is important to design services to be instantiable and not have mutable static properties, as these would be shared when running tests in parallel. Reusing the same application domain when executing multiple tests simultaneously is a trade-off that increases performance considerably.
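
As a small illustration of that guideline (the types are made up): prefer instance state over mutable statics, so that tests sharing an application domain cannot interfere with each other.

// Avoid: mutable static state is shared by every test running in the same
// application domain, so parallel tests can interfere with each other.
public static class StaticMessageCounter
{
    public static int Count;
}

// Prefer: instantiable state that each test (and each service instance)
// creates and owns in isolation.
public class MessageCounter
{
    private int _count;

    public void Increment() => _count++;

    public int Count => _count;
}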

Summary

Running integration tests continuously while writing code enables very fast feedback loops. These tests are, however, not a replacement for post-deployment end-to-end tests, which test application functionality on production-like infrastructure.

Their main purpose is to give fast, continuous feedback on business functionality during development by mimicking a production-deployed instance of the service with minimal mocking, while using well-known, loosely coupled integration points for refactoring stability.

Integration Testing with AMQP

So, last week I finally released the first version of my shiny new integration testing framework for AMQP, Test.It.With.AMQP. It comes with an implementation of the AMQP 0.9.1 protocol and integration with RabbitMQ’s popular .NET AMQP client.

Oh yeah, it’s all compatible with .NET Core 🙂

AMQP – Advanced Message Queuing Protocol

Wikipedia:

The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security.

Example

A common test scenario is that you have an application that consumes messages from a queue and you want to assert that the application retrieves the messages correctly.

var testServer = new AmqpTestFramework(Amqp091.Protocol.Amqp091.ProtocolResolver);
testServer.On<Basic.Consume>((connectionId, message) => AssertSomething(message));
myApplicationsIOCContainer.RegisterSingleton(() => testServer.ConnectionFactory.ToRabbitMqConnectionFactory());

This is simplified, though. In reality there is a lot of setup negotiation that needs to be done before you can consume any messages, like creating a connection and a channel. A real working test with a made-up application and the test framework Test.It.While.Hosting.Your.Windows.Service can be found here.
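
To give an idea of what that negotiation involves, this is roughly what the application side looks like with the RabbitMQ .NET client; the queue name and the empty handler are made up, and the connection factory is the piece the test replaces via testServer.ConnectionFactory.ToRabbitMqConnectionFactory().

using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Roughly what an application under test does: negotiate a connection and a
// channel before it can start consuming messages.
var connectionFactory = new ConnectionFactory { HostName = "localhost" };
using var connection = connectionFactory.CreateConnection();
using var channel = connection.CreateModel();

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, args) =>
{
    // Handle the consumed message (args.Body) here.
};
channel.BasicConsume(queue: "my-queue", autoAck: true, consumer: consumer);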

Why?

The purpose of this test framework is to mock an AMQP-based service in order to test the AMQP integration points and behaviour within an application, without the need for a shared, installed instance of the actual AMQP service. It’s kind of like what the OWIN TestServer does for HTTP in Katana.

Fast

The test framework runs in memory, which means no time-consuming network traffic or interop calls.

Isolated

All instances are set up by the test scenario and have no shared resources. This means there is no risk that two or more tests affect each other.

Testable

The framework makes it possible to subscribe to and send all AMQP methods defined in the protocol, and you can even extend the protocol with your own methods!

Easy Setup and Tear Down

Create an instance when setting up your test, verify your result, and dispose of it when you’re done. No hassle with connection pools and locked resources.
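
In practice that boils down to something like this sketch (assuming AmqpTestFramework implements IDisposable, which the dispose step implies):

// Create the in-memory AMQP server per test and dispose it afterwards; nothing
// is shared between test runs.
using (var testServer = new AmqpTestFramework(Amqp091.Protocol.Amqp091.ProtocolResolver))
{
    // Register method subscriptions, hand the connection factory to the
    // application, run the scenario and assert on the outcome here.
}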

 

Integration testing made easy.

 

 

Continuous Delivery – Are You Fast Enough?

So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, presented at GOTO 2017 a couple of months ago. I saved it in the “to watch” list on YouTube, as I so often do, and I forgot about it, as I so often do, until yesterday. It’s a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!

I have been doing CI/CD for many years in many projects and learned a lot along the way. I think understanding what it is and how you can implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost on how to get there.

One thing I often hear is people talking about how Gitflow is a CI workflow. It really is not. Feature branching is a phenomenon that dies hard, and it is just the opposite of continuous integration. I really like the phrase continuous isolation, because that is exactly what it is.

Separate teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite: isolation. Handovers take time, and information and knowledge get lost along the way.

I often try to push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.

The process I tend to implement looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.

The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If everything is green, it packs the tested application, uploads it to a package store, triggers the release process and deploys to test environments.

The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log-aggregating systems in order to visualize how the application behaves, but also for debugging purposes.

You really can’t get too many logs and metrics; however, it is important to have the right tools for structuring and mining all this data in a usable way, otherwise it will only be a big pile of data that no one is ever going to touch. It also needs to be done in real time.

Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want alerts every time a server request times out. But if it’s happening a lot during some time period, and nothing else can explain the cause, then you might want to look into what is going on. This is again where you want to go for simplicity. You do not want to spend hours or days going through log entries; you might not even have that time, depending on the severity of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.

See the video. It might not be a deep dive into CI/CD processes and how to do them, but it does explain how to think and why.

Integration Testing

Sexy title, isn’t it? 🙂 Well, maybe not, but still, it’s an important aspect of system development.

What is Integration Testing?

Good question, Fredrik. Well thank you, Fredrik.

Yeah yeah, okay, enough with this, let’s get serious.

When I talk about testing I almost always get misunderstood. Why? Because we all have different views and words to use when we talk about testing, especially automated tests. Have you seen the explanation of integration testing on Wikipedia? Well, it’s not exactly explicit, that much you can say 🙂 https://en.wikipedia.org/wiki/Integration_testing

When I talk about integration tests, I usually mean firing up my application in memory and probing the integration points. I like this approach because it lets me get rid of third-party application dependencies. It means I can run my tests anywhere without needing to install and manage third-party applications during testing, for example a database.

It fits nicely into the continuous integration process.

Test It While Hosting Your Windows Service

So, I’d like to introduce you to some of the home-brewed testing frameworks I like to use when building applications. Let’s start with Test.It.While.Hosting.Your.Windows.Service (C#, .NET Framework 4.6.2), a testing framework that helps you simplify how you write integration tests for Windows Service applications.

I’m not going to go into details; you will find those if you follow the link. BUT, I think integration testing is something all developers should do to some degree when developing applications. Integration tests cover much more than a unit test, but at the same time they are autonomous and self-contained, which means you can run them anywhere the targeted framework is installed.

Just by being able to start your application in a unit-test-like manner, you have come far in testing your application’s functionality. How many times have you run into problems with the IoC registrations of your application?

Okay, I’ll stop the sales pitch. I think you get the point anyway. Testing is important. Multiple levels of testing are important. Finding issues fast is important, at least in an agile world, and you are agile, aren’t you? 🙂

Hi!

My name is Fredrik. I’m a system developer, recently gone freelance, and (obviously) I like automated testing. I would not call myself a ‘tester’; I’m a system developer who likes simplicity and when things just work.

This is my first blog post for my new company, FKAN Consulting. I’ll try to continuously post new (hopefully interesting) posts here about my experiences as a developer and consultant.
