
Category: CI/CD

ApiCompat has Moved into the .NET SDK

A while ago I wrote about how to detect breaking changes in .NET using Microsoft.DotNet.ApiCompat. Since then, ApiCompat has moved into the .NET SDK.

What has Changed?

Since ApiCompat is now part of the .NET SDK, the Arcade package feed no longer needs to be referenced.

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

The package reference can also be removed.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

In order to continue running assembly validation from MSBuild, install the Microsoft.DotNet.ApiCompat.Task package.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat.Task" Version="8.0.404" PrivateAssets="all" IsImplicitlyDefined="true" />
</ItemGroup>

Enable assembly validation:

<PropertyGroup>
  <ApiCompatValidateAssemblies>true</ApiCompatValidateAssemblies>
</PropertyGroup>

The contract assembly is also referenced differently; the old ResolvedMatchingContract item is replaced by the ApiCompatContractAssembly property.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

<PropertyGroup>
  <ApiCompatContractAssembly>LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll</ApiCompatContractAssembly>
</PropertyGroup>

The property controlling suppression of breaking changes, BaselineAllAPICompatError, has been replaced by ApiCompatGenerateSuppressionFile.

<PropertyGroup>
  <BaselineAllAPICompatError>false</BaselineAllAPICompatError>
  <ApiCompatGenerateSuppressionFile>false</ApiCompatGenerateSuppressionFile>
</PropertyGroup>

That’s it, you’re good to go!

Compatibility Baseline / Suppression directives

Previously the suppression file, ApiCompatBaseline.txt, contained text directives describing suppressed compatibility issues.

Compat issues with assembly Kafka.Protocol:
TypesMustExist : Type 'Kafka.Protocol.ConsumerGroupHeartbeatRequest.Assignor' does not exist in the implementation but it does exist in the contract.

This format has changed to an XML based format, written by default to a file called CompatibilitySuppressions.xml.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig</Target>
    <Left>LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll</Left>
    <Right>obj\Debug\netstandard2.1\Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

This format is more verbose than the old one, and a bit harder to read from a human perspective if you ask me. Descriptions of the various DiagnosticIds can be found in this list.

Path Separator Mismatches

Looking at the suppression example, you might notice that the suppressions contain references to both the compared assembly and the baseline contract. It’s not a coincidence that the path separators differ between the two. The Left reference is a templated copy of the ApiCompatContractAssembly directive using OS-agnostic forward slashes, but the Right reference is generated by ApiCompat and is not OS-agnostic, hence the backslash path separators when executing under Windows. If ApiCompat is executed under Linux, it generates forward slash path separators.

You might also notice that the reference to the assembly being compared contains the build configuration name, which might not match the build configuration used in a build pipeline (Debug vs. Release, for example).

Either of these path mismatches will make ApiCompat ignore the suppressions. There is no documentation on how to consolidate them, but fortunately there are a couple of somewhat hidden transformation directives which can control how these paths are formatted.

<PropertyGroup>
  <_ApiCompatCaptureGroupPattern>.+%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)</_ApiCompatCaptureGroupPattern>
</PropertyGroup>

<ItemGroup>
  <!-- Make sure the Right suppression directive is OS-agnostic and disregards configuration -->
  <ApiCompatRightAssembliesTransformationPattern Include="$(_ApiCompatCaptureGroupPattern)" ReplacementString="obj/$1/$2" />
</ItemGroup>

The _ApiCompatCaptureGroupPattern regex captures the last two path segments, which the ApiCompatRightAssembliesTransformationPattern directive uses to rewrite the assembly reference path to something that is compatible with both Linux and Windows and that drops the build configuration segment.

Using this will cause the Right directive to change accordingly.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig</Target>
    <Left>LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll</Left>
    <Right>obj/netstandard2.1/Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

There is a similar directive for the Left reference, named ApiCompatLeftAssembliesTransformationPattern.
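
To get a feel for what the capture pattern and replacement string do, the transformation can be approximated outside MSBuild with sed (just an illustration; the doubled backslashes play the role of the escaped Windows directory separator):

# Simulate the Right-path rewrite: capture the last two path segments
# and re-root them under obj/ with forward slashes.
printf '%s\n' 'obj\Debug\netstandard2.1\Kafka.Protocol.dll' |
  sed -E 's|.+\\(.+)\\(.+)|obj/\1/\2|'
# prints: obj/netstandard2.1/Kafka.Protocol.dll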


Detecting Breaking Changes

When integrating with other applications and libraries, it’s important to detect when APIs change in an incompatible way, as that might cause downtime for downstream applications or libraries. SemVer is a popular versioning strategy that can hint about breaking changes by bumping the major version part, but as an upstream application developer it can be difficult to detect that a code change is in fact a breaking change.

ApiCompat

Microsoft.DotNet.ApiCompat is a tool built for .NET that can compare two assemblies for API compatibility. It’s built and maintained by the .NET Core team for use within the Microsoft .NET development teams, but it’s also open source and available to anyone.

Installation

The tool is provided as a NuGet package, not on nuget.org (which most package managers reference by default) but on a custom NuGet server hosted in Azure by the .NET Core team. The URL of the feed needs to be explicitly specified in any project that wants to use it.

In the project file, add a reference to the NuGet feed in a property group:

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

Add a reference to the Microsoft.DotNet.ApiCompat package:

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

The most recent version of the package can be found by browsing the feed.

Usage

The tool can execute as part of the build process and fail the build if the current source code contains changes that are not compatible with a provided contract, for example an assembly from the latest major release.

While it is possible to commit a contract assembly to Git, another approach is to fetch it automatically from its source early in the build process. This example uses a NuGet feed as the release source, but it could just as well be an asset in a GitHub release, or something else.

<Target Name="DownloadLastMajorVersion" BeforeTargets="PreBuildEvent">
  <DownloadFile SourceUrl="https://www.nuget.org/api/v2/package/MyLibrary/2.0.0" DestinationFolder="LastMajorVersionBinary">
    <Output TaskParameter="DownloadedFile" PropertyName="LastMajorVersionNugetPackage" />
  </DownloadFile>
  <Unzip SourceFiles="$(LastMajorVersionNugetPackage)" DestinationFolder="LastMajorVersionBinary" />
</Target>

This will download a NuGet package to the folder LastMajorVersionBinary in the project directory using the DownloadFile task. If the directory doesn’t exist it will be created. If the file already exists and has not been changed, this becomes a no-op.

The Unzip task then unpacks the .nupkg file to the same directory. Same thing here: if the files already exist, this is a no-op.

The last step is to instruct ApiCompat to use the unpacked assembly file as the contract the current source code will be compared with. This is done by setting the ResolvedMatchingContract item, which is the only setting required to run the tool.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

The path points to where the assembly file is located in the unpacked NuGet package directory.

Building the project will now download the contract and execute ApiCompat with the default settings. Remember to use proper access modifiers when writing code: internal is by default not treated as part of the public contract¹, as the referenced member or type cannot be reached by other assemblies, unlike public.

¹ InternalsVisibleToAttribute can make internal members and types accessible to other assemblies and should be used with care.

Handling a Breaking Change

Breaking changes should be avoided as far as possible, as they break integrations downstream. This is true for both libraries and applications. Most of the time an indicated breaking change should be mitigated in a non-breaking fashion, like adding another API and deprecating the current one. But at some point the old API needs to be removed, and thus a breaking change is introduced, which will fail the build.

The property BaselineAllAPICompatError can be used to accept breaking changes. The specification of the breaking changes will be written to a file called ApiCompatBaseline.txt in the root of the project, which ApiCompat uses to ignore these tracked incompatible changes from now on. This property should only be set when a breaking change is being accepted, and should result in a new major release.

<PropertyGroup>
  <BaselineAllAPICompatError>true</BaselineAllAPICompatError>
</PropertyGroup>

Once a new build has been executed and the new contract baseline has been established, remember to set the property back to false or remove it.
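
An alternative to editing the project file back and forth is to pass the property for a single build from the command line; a sketch, assuming the property is picked up as a global MSBuild property:

# Accept the current breaking changes and regenerate ApiCompatBaseline.txt
# for this build only, without touching the project file.
dotnet build -p:BaselineAllAPICompatError=true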

The baseline file should be committed to source control, where it can be referenced as documentation for the breaking change, for example by including it in a BREAKING CHANGE conventional commit message.

Once a new major release has been published, there are two options.

  1. Reference the assembly in the new major release as the contract and delete ApiCompatBaseline.txt.
  2. Do nothing 🙂

As git can visualize diffs between two references, the content of ApiCompatBaseline.txt will show all the breaking changes between two version tags, which can be quite useful.
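
For example, with two hypothetical release tags:

# List the breaking changes recorded between releases v1.0.0 and v2.0.0.
git diff v1.0.0 v2.0.0 -- ApiCompatBaseline.txt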

Summary

ApiCompat is a great tool for automating detection of breaking changes during development. It helps avoid introducing incompatible changes, and potential headaches, to downstream consumers, both for applications and libraries.

A complete example of the mentioned changes is available here.


Trunk Based Release Versioning – The Simple Way

A while ago I wrote an article about release versioning using Trunk-Based Development where I used GitVersion for calculating the release version. Since then I have experienced multiple issues with that setup, to the point where I had enough and decided to write my own release versioning script, which I simply call Trunk-Based Release Versioning!

There is no complex configuration; it’s just a bash script that can be executed in a git repository. It follows the Trunk-Based Development branching model and identifies previous release versions by looking for SemVer-formatted tags. It outputs at which commits the next release starts and ends, and includes the version and the commit messages. Bumping the version is done by writing commit messages that follow the Conventional Commits specification.
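
The core idea can be sketched in a few lines of bash (this is not the script itself, just the gist of how the release range is identified):

# Find the latest SemVer-formatted release tag...
last_release=$(git tag --list 'v[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -1)
# ...and list the commits since then; these make up the next release.
git log --format='%h %s' "${last_release:-$(git rev-list --max-parents=0 HEAD)}..HEAD"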

Trunk-Based Development For Smaller Teams

This is the simplest strategy: there is only the default branch. A release contains everything between two release tags.

Scaled Trunk-Based Development

Development is done on short-lived development branches. It is possible, and preferable, to continuously release by merging into the default branch. All commits between two release tags make up a release, same as for the previous strategy, but it is also possible to pre-release from the development branch, as pictured by the red boxes. A pre-release contains all commits on the branch since the last release, and when merging it all becomes the new release. Note that previous merge commits from the development branch onto the default branch are also considered release points, in order to be able to continuously work from a single branch. This also works when merging the opposite way, from the default branch into the development branch, for example to enforce gated check-ins to the default branch.

For a full example using Github Actions, see this workflow.

That’s it! Keep releasing simple.


Semantic Release Notes with Conventional Commits

Recently I wrote about how to use Conventional Commits to calculate the next release version based on semantic versioning. Today I’m going to show how to turn conventional commit messages into release notes using a simple Github Action called Semantic Release Notes Generator that I’ve written.

release-notes-generator

The action is basically just a wrapper for semantic-release/release-notes-generator, which is a plugin for semantic-release, an opinionated version management tool. When trying out semantic-release I found that it entangled too much functionality in a way I couldn’t get to work the way I wanted, but I really liked the release notes generator! Being a fan of the Unix philosophy, I decided to wrap it in a neat little action that I could pipe.

Usage

The generator fetches all commit messages between two git references and feeds them into the release-notes-generator, which formats the messages into a nice-looking release log.
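
Conceptually, the commit range it consumes boils down to something like this, using the action’s from_ref_exclusive and to_ref_inclusive inputs (the tag values are hypothetical):

from_ref_exclusive=v1.0.0  # previous release tag (hypothetical)
to_ref_inclusive=v1.1.0    # next release tag (hypothetical)
# Full commit messages in the range, which the generator formats into release notes.
git log --format='%B' "${from_ref_exclusive}..${to_ref_inclusive}"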

The Semantic Release Notes Generator actually uses a Github Actions workflow to test itself (more about that in another post). It uses GitVersion to determine the next version and pipes that to the generator to generate release notes based on the commits logged since the last release.

Here’s how that could look:

...
- name: Determine Version
  id: gitversion
  uses: gittools/actions/gitversion/execute@v0
  with:
    useConfigFile: true
    configFilePath: .github/version_config.yml
- name: Determine Release Info
  id: release
  run: |
    from_tag=$(git tag --points-at ${{ steps.gitversion.outputs.versionSourceSha }} | grep -m 1 ^v[0-9]*\.[0-9]*\.[0-9]* | head -1)
    tag=v${{ steps.gitversion.outputs.majorMinorPatch }}
    # from_ref_exclusive is the last release tag, falling back to the version
    # source commit when no previous release tag exists
    from_ref_exclusive=${from_tag:-${{ steps.gitversion.outputs.versionSourceSha }}}

    echo "::set-output name=tag::$tag"
    echo "::set-output name=from_ref_exclusive::$from_ref_exclusive"
- name: Create Tag
  uses: actions/github-script@v3
  with:
    script: |
      github.git.createRef({
        owner: context.repo.owner,
        repo: context.repo.repo,
        ref: "refs/tags/${{ steps.release.outputs.tag }}",
        sha: "${{ steps.gitversion.outputs.sha }}"
      });
- name: Generate Release Notes
  id: release_notes
  uses: fresa/release-notes-generator@v0
  with:
    version: ${{ steps.release.outputs.tag }}
    from_ref_exclusive: ${{ steps.release.outputs.from_ref_exclusive }}
    to_ref_inclusive: ${{ steps.release.outputs.tag }}
- name: Create Release
  uses: softprops/action-gh-release@v1
  with:
    body: ${{ steps.release_notes.outputs.release_notes }}
    tag_name: ${{ steps.release.outputs.tag }}
...

It calculates the next version and finds the last release’s git reference. Then it creates a tag to mark the next release, which will also be used as to_ref_inclusive when generating the release notes. Lastly it creates the Github release using the release notes and the tag created.

If there’s a need to change the release notes, it’s always possible to edit the release in Github either after the release has been published or by releasing it as a draft and manually publishing it after a review.

Summary

release-notes-generator is a great tool for generating release notes when using conventional commits. With Semantic Release Notes Generator you can, in an unopinionated way, choose how to use release notes within your automated continuous delivery processes!


Simple Release Versioning with Trunk-Based Development, SemVer and Conventional Commits

I have scripted versioning strategies for Trunk-Based Development many times throughout my career, including using simple Github Actions like Github Tag Action, but the result was never spot on. So I finally gave in and took on the complexity of tuning GitVersion!

Trunk Based Development

From trunkbaseddevelopment.com:

A source-control branching model, where developers collaborate on code in a single branch called ‘trunk’ *, resist any pressure to create other long-lived development branches by employing documented techniques. They therefore avoid merge hell, do not break the build, and live happily ever after.

A great fit for Continuous Delivery, and very easy to apply!

SemVer

Versioning does not need to be just about incrementing a number to distinguish releases. It can also be semantic: hence SemVer, short for Semantic Versioning.

From the site:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards compatible manner, and
  3. PATCH version when you make backwards compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

This is all nice and such, but how do you integrate that with the process of continuously delivering code?

Conventional Commits

Conventional Commits is a convention for how to turn human-written commit messages into machine-readable meaning. It also happens to map nicely onto SemVer, as the sketch after the examples below shows.

  • Major / Breaking Change
refactor!: drop support for Node 6

BREAKING CHANGE: refactor to use JavaScript features not available in Node 6.
  • Minor / Feature
feat(lang): add polish language
  • Patch
fix: correct minor typos in code
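
As a rough illustration of how machine-readable this is, here is a small bash sketch that maps a commit message to a bump level (simplified; it treats everything that is neither breaking nor a feature as a patch):

# Usage: git log -1 --format=%B | ./bump.sh
msg=$(cat)
subject=$(printf '%s\n' "$msg" | head -1)
if printf '%s\n' "$subject" | grep -qE '^[a-z]+(\([a-z]+\))?!: ' ||
   printf '%s\n' "$msg" | grep -q '^BREAKING CHANGE: '; then
  echo major
elif printf '%s\n' "$subject" | grep -qE '^feat(\([a-z]+\))?: '; then
  echo minor
else
  echo patch
fi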

GitVersion

GitVersion looks simple and promising at first glance.

However, when digging into the documentation, a different picture emerges.

GitVersion comes with three different strategies: Mainline Development, Continuous Delivery and Continuous Deployment, all quite difficult to decipher and grasp. At first, all of them seem like a possible fit for Trunk-Based Development, but I only got Continuous Delivery to follow the simple philosophy of Trunk-Based Development while remaining flexible about when to actually bump versions, without bumping any of major/minor/patch more than once until the merge to main happens.

Configuration

GitVersion uses a config file to configure strategies, and there are quite a few configurable aspects of said strategies. Let’s get started!

Branches

Trunk Based Development is all about getting into trunk (main/master in git lingo) fast. We have two branch strategies: either we are on main (where we don’t do any development) or we are on a development branch. GitVersion, however, comes with a lot of branch configurations that need to be disabled.

  feature:
    # Match nothing
    regex: ^\b$
  develop:
    # Match nothing
    regex: ^\b$
  release:
    # Match nothing
    regex: ^\b$
  pull-request:
    # Match nothing
    regex: ^\b$
  hotfix:
    # Match nothing
    regex: ^\b$
  support:
    # Match nothing
    regex: ^\b$

We consider all branches except main to be development branches. These should be incremented with patch by default.

  development:
    increment: Patch
    # Everything except main and master
    regex: ^(?!(main|master)$)

Detecting Conventional Commits

The following regular expressions detect when to bump major/minor/patch, and are based on the Conventional Commit v1.0.0 specification.

  • Major
    major-version-bump-message: "(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\\([a-z]+\\))?(!: .+|: (.+\\n\\n)+BREAKING CHANGE: .+)"
  • Minor
    minor-version-bump-message: "(feat)(\\([a-z]+\\))?: .+"
  • Patch
    patch-version-bump-message: "(build|chore|ci|docs|fix|perf|refactor|revert|style|test)(\\([a-z]+\\))?: .+"

Here’s the complete config:

mode: ContinuousDelivery
# Conventional Commits https://www.conventionalcommits.org/en/v1.0.0/
# https://regex101.com/r/Ms7Vx6/2
major-version-bump-message: "(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\\([a-z]+\\))?(!: .+|: (.+\\n\\n)+BREAKING CHANGE: .+)"
# https://regex101.com/r/Oqhi2m/1
minor-version-bump-message: "(feat)(\\([a-z]+\\))?: .+"
# https://regex101.com/r/f5C4fP/1
patch-version-bump-message: "(build|chore|ci|docs|fix|perf|refactor|revert|style|test)(\\([a-z]+\\))?: .+"
# Match nothing
no-bump-message: ^\b$
continuous-delivery-fallback-tag: ''
branches:
  development:
    increment: Patch
    # Everything except main and master
    regex: ^(?!(main|master)$)
    track-merge-target: true
    source-branches: []
  feature:
    # Match nothing
    regex: ^\b$
  develop:
    # Match nothing
    regex: ^\b$
  main:
    source-branches: []
  release:
    # Match nothing
    regex: ^\b$
  pull-request:
    # Match nothing
    regex: ^\b$
  hotfix:
    # Match nothing
    regex: ^\b$
  support:
    # Match nothing
    regex: ^\b$

Tagging

A common practice when releasing in git is to tag the commit with the release number, e.g. v1.2.3. When practicing continuous delivery this can become quite verbose, especially when producing pre-releases, as every commit becomes a potential release. GitVersion’s Continuous Delivery strategy traverses each commit from the last version tag when calculating the current version, so pre-releases do not necessarily need to be tagged. Instead, the commit sha can be used as the pre-release identifier for back-referencing any pre-release artifacts.
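
As a tiny sketch of that idea (the version value is hypothetical; only the sha lookup is real git):

next_version=1.2.3                 # assumed to come from GitVersion
sha=$(git rev-parse --short HEAD)  # current commit
echo "${next_version}-${sha}"      # e.g. 1.2.3-3f9d2c1, usable as a pre-release identifier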

Of course, when using Github or similar platforms, a release is a tag, so in that case you might want to use tagging for pre-releases anyway.

Caveats

Forgetting to merge a merge commit back to the development branch, when continuing to work on the same branch, means the automatic patch increment that happens for commits after branching does not occur. This will skew any pre-release versions. According to the documentation the parameter track-merge-target should solve that scenario, but it seems it has not been implemented!

Summary

GitVersion turned out to be a really nice versioning tool once I overcame the knowledge curve of understanding all its concepts, features and configurations. Check out the GitVersion Github Action and Azure Pipeline Task for simple integration!


Continuous Delivery with .NET Framework Applications in the Cloud – for Free!

Yep, you read that correctly. I’ve started to set up continuous delivery processes in the cloud using AppVeyor. AppVeyor is a platform for achieving CI/CD in the cloud, particularly for applications written for Windows. It has integration plugins for the common popular services, like Github, Gitlab, Bitbucket, NuGet etc., and supports a mishmash of different languages concentrated around a Windows environment. The cost? It’s FREE for open source projects!

I’ve written about continuous integration and continuous delivery before here, and this will be a sort of extension of that topic. I thought I would describe my go-to CI/CD process, and how you can set up your own in some simple steps!

The Project

One of my open source projects is a hosting framework for integration testing Windows Services. It’s a class library built in C# and released as a NuGet package. It’s called Test.It.While.Hosting.Your.Windows.Service. Pretty obvious what the purpose of that application is, right? 🙂

Source Control

The code is hosted on Github, and I use a trunk-based branching strategy, which means I use one branch, master, for simplicity.

AppVeyor has integrations towards a bunch of popular source control systems, among them Github. It uses webhooks in order to be notified of new code pushed to the repository, which can be used to trigger a new build in an AppVeyor project.

The Process

Since my project is open source and I use the free version of AppVeyor, the CI/CD process information is publicly available. You can find the history for version 2.1.3 here.

The following picture shows the General tab in the settings page of my AppVeyor project for Test.It.While.Hosting.Your.Windows.Service.

Github – Blue Rectangles

If you look at the build history link, you will see at row 2 that the first thing that happens is cloning of the source code from Github.

When you create an AppVeyor project you need to integrate with your source control system. You can choose which type of repository to integrate with. I use Github, which means I can configure Github-specific data, as seen in the settings picture above. You don’t need to manually enter the link to your repository; AppVeyor uses OAuth to authenticate towards Github and then lets you choose, with a simple click, which repository to create the project for.

I choose to trigger the build from any branch because I would like to support pull requests, since that is a great way to let strangers contribute to your open source project without risking that someone, for example, deliberately destroys your repository. However, I don’t want to increment the build version during pull requests, so I check the “Pull Requests do not increment build number” checkbox. This causes pull request builds to add a random string to the build version instead of bumping the number.

That’s basically it for the integration part with Github. You might notice that I have checked the “Do not build tags” checkbox. I will come to why later, in the bonus part of this article 🙂

Build – Green Rectangles

There are some configurations available for how to choose a build version format. I like using SemVer, and use the build number to iterate the patch version. When adding new functionality or making breaking changes, it’s important to change the build version manually before pushing the changes to your source control system. Remember that all changes to master will trigger a new build, which in turn will trigger a release and deployment.

I also like to update the version generated by AppVeyor in the AssemblyInfo files of the C# projects being built. This is later used to generate the NuGet package that will be released on NuGet.org. You can see the AssemblyInfo files being patched at rows 4-6 in the build output.

In the Build tab, I choose MSBUILD as build tool and Release as configuration, which means the projects will be built with release configuration using msbuild. You can also see this at row 8 in the build output, and the actual build at lines 91-99.

On row 7 it says

7. nuget restore

This is just a before-build cmd script configuration to restore the NuGet packages referenced by the .NET projects. The NuGet CLI tool comes pre-installed on the build agent image. You can see the NuGet restore process at lines 9-90.

The above picture shows the Environment tab, where the build worker image is chosen.

Tests

The next step after building is running automated tests.

As you can see in the Tests tab, I have actually not configured anything. Tests are automatically discovered and executed. AppVeyor has support for the most common test frameworks, in my case xUnit.net. You can see the tests being run and the test results being provided at lines 113-121.

Packaging (Red Rectangles)

After the build completes, it’s time to package the NuGet target projects into NuGet packages. AppVeyor is integrated with NuGet, or rather exposes the NuGet CLI tool in the current OS image. The checkbox “Package NuGet projects” will automatically look for .nuspec files in the root directory of all projects, package them accordingly, and automatically upload them to the internal artifact storage.

One of the projects includes a .nuspec file; can you see which one? (Hint: check line 108)

If you look closely, you can see that the packaging is done before the tests are run. That doesn’t really make much sense, since packaging is not needed if any test fails, but that’s a minor thing.

Deploying

The last step is to deploy the NuGet package to my NuGet feed at NuGet.org. There are a lot of deployment providers available in AppVeyor, like NuGet, Azure, Web Deploy, SQL etc.; you can find them under the Deployment tab.

I choose NuGet as deployment provider and leave the NuGet server URL empty, as it automatically falls back to nuget.org. As I’ve also left Artifacts empty, it will automatically choose all NuGet package artifacts uploaded to my artifact store during the build process; in this case there is just one, as shown at lines 122-123. I only deploy from the master branch in order to avoid publishing packages by mistake should I push to another branch. Remember that I use a trunk-based source control strategy, so that should never happen.

Notice the placeholder under API key. This is where the NuGet API key for my feed goes, authorizing AppVeyor to publish NuGet packages onto my feed on my behalf. Since this is a sensitive piece of information, I have stored it as an environment variable (you might have noticed it in the picture of the Environment tab, enclosed in a purple rectangle).

Environment variables are available throughout the whole CI/CD process. There are also a bunch of pre-defined ones that can come in handy.

The actual deployment to NuGet.org can be seen at lines 124-126, and the package can then be found at my NuGet feed.

Some Last Words

AppVeyor is a powerful tool to help with CI/CD. It really makes it easy to set up fully automated processes from source control through the build and test processes to release and deployment.

I have used both Jenkins and TFS together with Octopus Deploy to achieve different levels of continuous delivery, but this is so much easier to set up in comparison, and without you needing to host anything except the applications you build.

Not a fan of UI-based configuration? No problem, AppVeyor also supports a YAML-based definition file for the project configuration.

Oh yeah, almost forgot: there are also some super nice badges you can show off with on, for example, your README.md on Github.

The first one comes from AppVeyor, and the second one from BuildStats. Both are supported in markdown. Go check them out!

BONUS! (Black Rectangles)

If you were observant when looking at the build output and at the bottom of the Build and the Deployment tabs, you might have seen some PowerShell scripts.

Release Notes

The first script sets release notes for the NuGet package based on the commit message from Git. It is applied before packaging and updates the .nuspec file used to define the package. Note the usage of the pre-defined build parameters mentioned earlier.

$path = "src/$env:APPVEYOR_PROJECT_NAME/$env:APPVEYOR_PROJECT_NAME.nuspec"
[xml]$xml = Get-Content -Path $path
$xml.GetElementsByTagName("releaseNotes").set_InnerXML("$env:APPVEYOR_REPO_COMMIT_MESSAGE $env:APPVEYOR_REPO_COMMIT_MESSAGE_EXTENDED")
Set-Content $path -Value $xml.InnerXml -Force

It opens the .nuspec file, reads its content, updates the releaseNotes tag with the commit message and then saves the changes.

The release notes can be seen at the NuGet feed, reading “Update README.md added badges”. It can also be seen in the Visual Studio NuGet Package Manager UI.

Git Version Tag

The second script pushes a tag with the deployed version back to the Github repository on the commit that was fetched in the beginning of the process. This makes it easy to back-track what commit resulted in what NuGet package.

git config --global credential.helper store
Add-Content "$env:USERPROFILE\.git-credentials" "https://$($env:git_access_token):x-oauth-basic@github.com`n"
git config --global user.email "fresa@fresa.se"
git config --global user.name "Fredrik Arvidsson"
git tag v$($env:APPVEYOR_BUILD_VERSION) $($env:APPVEYOR_REPO_COMMIT)
git push origin --tags --quiet
  1. In order to authenticate with Github we use the git credential store. This could be a security issue since the credentials (here a git access token) will be stored on disk on the AppVeyor build agent. However, since nothing on the build agent is ever shared and the agent is destroyed after the build process, it’s not an issue.
  2. Store the credentials. The git access token generated from my Github account is securely stored using a secure environment variable.
  3. Set user email.
  4. Set user name.
  5. Create a git tag based on the build version and apply it on the commit fetched in the beginning of the CI/CD process.
  6. Push the created tag to Github. Notice the --quiet flag suppressing the output from the git push command, which otherwise produces an error in the PowerShell script execution task run by AppVeyor.

Do you remember the checkbox called “Do not build tags” mentioned in the Github chapter above? Well, it is checked in order to prevent triggering a never-ending loop of new builds when pushing the tag to the remote repository.


Don’t Branch

Git is a great, popular, distributed source control system that most of us have probably encountered in various projects. It’s really simple:

1. Pull changes from the remote origin master branch to your local master branch.

2. Code.

(3). Merge any changes from the remote origin master branch to your local master branch.

4. Push your local changes on the master branch to the remote origin master branch.

That’s it! Simple, isn’t it? Master is always deployable and changes fast.
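
In plain git commands, that loop is nothing more than this (a minimal sketch, with a hypothetical commit message):

git pull origin master                # 1. pull remote changes
git commit -am "fix: correct a typo"  # 2. code, then commit locally
git pull origin master                # (3). merge any new remote changes
git push origin master                # 4. push to the remote origin master branch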

So why do many people use complex git branching strategies?

Check out this google search result: https://www.google.com/search?q=git+workflow&tbm=isch

The horror!

If you are living by continuous delivery, you do not want to see that. That’s the opposite of continuous integration: continuous isolation. You part, you do not integrate. Well, technically you have to part for a while when using distributed source control systems (otherwise it would not be distributed), but you’d like to part for as little time as possible. Why? Read my post Continuous Delivery – Are You Fast Enough? 🙂

Open Source

So, is branching always bad? Well, no, it would probably not exist if it were 🙂 Open source software published on Github is a perfect example of when branching might be necessary. If you develop an application and make the source code publicly available on Github, anyone can clone your code, create a branch, make changes and open a pull request before the changes are merged with master.

https://guides.github.com/introduction/flow/

This makes sense. Why? Because you do not necessarily know the person changing your code. It could be a rival wanting to destroy your work. It wouldn’t work if that person could merge directly into master. A security gate is needed.

Tight Teams

Being part of a team at a company, you work towards the same agenda. You probably have an agreed code standard, and a process the team follows. No one is working against the team, so there is no need for a security gate in the source control system. Hence, keep it simple: don’t branch, use master.

– But we need feature branchi…

Feature Branching

So, you think you need to branch for developing new features? You don’t. There are some nice strategies for making small changes and committing them to the production code continuously, even though the functionality might not be fully functioning yet.

Feature Toggling

This is a great tool for hiding any functionality that is not ready for production yet. If you haven’t heard about all the other nice perks of feature toggling, I highly recommend reading this article by Martin Fowler: https://martinfowler.com/articles/feature-toggles.html

Branch by Abstraction

No, it’s not source control branching. This technique lets you incrementally make large changes to the code while continuously integrating with the production code. Again I’d like to forward you to an excellent explanation of the subject by Martin: https://martinfowler.com/bliki/BranchByAbstraction.html

Conclusion

Don’t use branching strategies if you work in a tight team that has the same goal. Keep it simple, stupid.


Continuous Delivery – Are You Fast Enough?

So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, presented at GOTO 2017 a couple of months ago. I saved it in the “to watch” list on YouTube, as I so often do, and forgot about it, as I so often do, until yesterday. It’s a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!

I have been doing CI/CD for many years in many projects and learned a lot along the way. I think understanding what it is and how you can implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost in how to get there.

One thing I often hear is people talking about how Gitflow is a CI workflow process. It really is not. Feature branching is a hard-lived phenomenon that is just the opposite of continuous integration. I really like the phrase continuous isolation, because that is exactly what it is.

Separated teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite: isolation. Handovers take time, and information and knowledge get lost along the way.

I often push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.

The process I tend to implement looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.

The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If all is green, it packs the tested application, uploads it to a package storage, triggers the release process and deploys to test environments.

The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log aggregation systems in order to visualize how the application behaves, but also for debugging purposes.

You really can’t get too many logs and metrics; however, it is important to have the right tools for structuring and mining all this data in a usable way, otherwise it will only be a big pile of data that no one is ever going to touch. It also needs to be done in real time.

Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want alerts every time a server request times out, but if it’s happening a lot during some time period, and nothing else can explain the cause, then you might want to look into what is going on. This is again where you want to go for simplicity. You do not want to spend hours or days going through log posts; you might not even have that time, depending on the importance of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.

See the video. It might not deep-dive into CI/CD processes and how to do them, but it does explain how to think and why.
