
Year: 2018

4 Ways to Include Symbols and Source Files when Shipping C# Libraries

The need for debugging NuGet-packaged assemblies has always been around, but did you know there are several ways to achieve it? Let’s take a closer look.

One Package to Rule Them All

Back in the day, I mostly did symbol packaging by including the pdb symbol file and the source code in the NuGet package along with the assembly. Everything in one single NuGet package.

pros

  • No extra communication to download symbols and source
  • No need to know where symbols and source files are located

cons

  • Larger NuGet package
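As a sketch, a .nuspec for this all-in-one approach might list the assembly, its symbol file and the sources explicitly (MyLib and the paths are made-up placeholders):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyLib</id>
    <version>1.0.0</version>
    <authors>Author Name</authors>
    <description>A library packaged together with its symbols and source.</description>
  </metadata>
  <files>
    <!-- The runtime assembly and its pdb symbol file side by side -->
    <file src="bin\Release\MyLib.dll" target="lib\net45" />
    <file src="bin\Release\MyLib.pdb" target="lib\net45" />
    <!-- The source files, so the debugger can step into them -->
    <file src="**\*.cs" target="src" />
  </files>
</package>
```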

 

Symbol Package

To mitigate bloated packages and shipping debug data nobody asked for, there is the possibility to pack the symbol file and source code in a separate symbol package. This is done by creating a .nupkg containing the runtime assembly and a .symbols.nupkg containing the runtime assembly, symbols and source code. The package containing the runtime is uploaded to a NuGet package server, and the symbol package is uploaded to a symbol source server, such as SymbolSource (nuget.smbsrc.net).
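For illustration, creating the two packages with the NuGet CLI could look like this (MyLib is a placeholder; these commands assume nuget.exe on the PATH and a configured feed, so they are not runnable as-is):

```shell
# -Symbols produces both MyLib.1.0.0.nupkg and MyLib.1.0.0.symbols.nupkg
nuget pack MyLib.nuspec -Symbols
# Pushing the runtime package; when a .symbols.nupkg is present alongside it,
# the CLI has historically also pushed it to the configured symbol server
nuget push MyLib.1.0.0.nupkg
```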

pros

  • Small NuGet package 

cons

  • Need to know where the symbols are stored
  • Extra communication to download symbols and source files

Note: SymbolSource does not support the portable PDBs that the .NET Core CLI tool generates.

SourceLink

Now part of the .NET Foundation, SourceLink support has been integrated into Visual Studio 2017 15.3 and the brand new .NET Core 2.1 SDK. SourceLink embeds source file information (such as the repository URL and commit) into the symbol files, making symbol packages obsolete. The debugger can then use this information to download the source files on demand from the source code repository.
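For a GitHub-hosted project built with the .NET Core SDK, enabling SourceLink is typically a matter of a few csproj properties and a package reference (a minimal sketch; the package version may differ):

```xml
<PropertyGroup>
  <!-- Embed the repository URL and commit hash into the pdb -->
  <PublishRepositoryUrl>true</PublishRepositoryUrl>
  <!-- Ship the pdb inside the NuGet package instead of a separate symbol package -->
  <AllowedOutputExtensionsInPackageBuildOutputFolder>$(AllowedOutputExtensionsInPackageBuildOutputFolder);.pdb</AllowedOutputExtensionsInPackageBuildOutputFolder>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0" PrivateAssets="All" />
</ItemGroup>
```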

pros

  • Medium NuGet package
  • No symbol source package needed
  • Only download the source files needed for debugging

cons

  • Might call the repository for source code many times

JetBrains dotPeek

Did you know you could utilize dotPeek as a local symbol server? Well, now you do. dotPeek will automatically construct symbol files from the loaded assemblies and associate them with the decompiled source files.

pros

  • Small NuGet package
  • No symbols or source files needed!

cons

  • Quite slow
  • Need to disable Just My Code when debugging, making symbol loading even slower
  • Not the real source code; poorer code quality

Conclusion

There you have it! Whichever approach you prefer, there will, as usual, be trade-offs. However, SourceLink seems to be gaining traction and has a shot at becoming the de facto standard for symbol and source packaging.


Integration Testing with AMQP

So, last week I finally released the first version of my new shiny integration testing framework for AMQP, Test.It.With.AMQP. It comes with an implementation of the AMQP 0.9.1 protocol and integration with RabbitMQ’s popular .NET AMQP client.

Oh yeah, it’s all compatible with .NET Core 🙂

AMQP – Advanced Message Queuing Protocol

Wikipedia:

The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security.

Example

A common test scenario is that you have an application that consumes messages from a queue and you want to assert that the application retrieves the messages correctly.

var testServer = new AmqpTestFramework(Amqp091.Protocol.Amqp091.ProtocolResolver);
testServer.On<Basic.Consume>((connectionId, message) => AssertSomething(message));
myApplicationsIOCContainer.RegisterSingleton(() => testServer.ConnectionFactory.ToRabbitMqConnectionFactory());

This is simplified though. In reality there is a lot of setup negotiation that needs to be done before you can consume any messages, like creating a connection and a channel. A real working test with a made-up application and the test framework Test.It.While.Hosting.Your.Windows.Service can be found here.

Why?

The purpose of this test framework is to mock an AMQP communication-based service in order to test the AMQP integration points and behaviour within an application, without the need for a shared, installed instance of the actual AMQP service. It’s similar to what the OWIN TestServer does for HTTP in Katana.

Fast

The test framework runs in memory, which means no time-consuming network traffic or interop calls.

Isolated

All instances are set up by the test scenario and have no shared resources. This means there is no risk that two or more tests affect each other.

Testable

The framework makes it possible to subscribe to and send all AMQP methods defined in the protocols, and you can even extend the protocol with your own methods!

Easy Setup and Tear Down

Create an instance when setting up your test, verify your result, and dispose of it when you’re done. No hassle with connection pools and locked resources.

 

Integration testing made easy.

 

 


Continuous Delivery with .NET Framework Applications in the Cloud – for Free!

Yepp, you read that correctly. I’ve started to set up continuous delivery processes in the cloud using AppVeyor. AppVeyor is a platform for achieving CI/CD in the cloud, particularly for applications written for Windows. It has integration plugins for all your common popular services, like GitHub, GitLab, Bitbucket and NuGet, and supports a mishmash of different languages centered around a Windows environment. The cost? It’s FREE for open source projects!

I’ve written about continuous integration and continuous delivery before here, and this will be a sort of extension of that topic. I thought I would describe my go-to CI/CD process, and how you can set up your own in some simple steps!

The Project

One of my open source projects is a hosting framework for integration testing Windows Services. It’s a class library built in C# and released as a NuGet package. It’s called Test.It.While.Hosting.Your.Windows.Service. Pretty obvious what the purpose of that application is, right? 🙂

Source Control

The code is hosted on GitHub, and I use a trunk-based branching strategy, which means I use one branch, master, for simplicity.

AppVeyor has integration towards a bunch of popular source control systems, among them GitHub. It uses webhooks in order to be notified of new code pushed to the repository, which can be used to trigger a new build of an AppVeyor project.

The Process

Since my project is open source and I use the free version of AppVeyor, the CI/CD process information is publicly available. You can find the history for version 2.1.3 here.

The following picture shows the General tab in the settings page of my AppVeyor project for Test.It.While.Hosting.Your.Windows.Service.

GitHub – Blue Rectangles

If you look at the build history link, you will see at row 2 that the first thing that happens is cloning the source code from GitHub.

When you create an AppVeyor project, you need to integrate with your source control system. You can choose which type of repository to integrate with. I use GitHub, which means I can configure GitHub-specific data as seen in the settings picture above. You don’t need to manually enter the link to your repository; AppVeyor uses OAuth to authenticate towards GitHub and then lets you choose, with a simple click, which repository to create the project for.

I chose to trigger the build from any branch because I would like to support pull requests, since that is a great way to let strangers contribute to your open source project without risking that someone, for example, deliberately destroys your repository. However, I don’t want to increment the build version during pull requests, so I check the “Pull Requests do not increment build number” checkbox. This causes pull request builds to add a random string to the build version instead of bumping the number.

That’s basically it for the integration part with GitHub. You might notice that I have checked the “Do not build tags” checkbox. I will come back to why later, in the bonus part of this article 🙂

Build – Green Rectangles

There are some configurations available for how to choose a build version format. I like using SemVer, and use the build number to iterate the patch version. When adding new functionality or making breaking changes, it’s important to change the build version manually before pushing the changes to your source control system. Remember that all changes in master will trigger a new build, which in turn will trigger a release and deployment.

I also like to update the version generated by AppVeyor in the AssemblyInfo files of the C# projects being built. This will later be used to generate the NuGet package that will be released on NuGet.org. You can see the AssemblyInfo files being patched at rows 4-6 in the build output.

In the Build tab, I choose MSBUILD as build tool and Release as configuration, which means the projects will be built in release configuration using msbuild. You can also see this at row 8 in the build output, and the actual build at lines 91-99.

On row 7 it says

7. nuget restore

This is just a before-build cmd script configuration to restore the NuGet packages referenced by the .NET projects. The NuGet CLI tool comes pre-installed in the build agent image. You can see the NuGet restore process at lines 9-90.

The above picture shows the Environment tab, where the build worker image is chosen.

Tests

The next step after building is running automated tests.

As you can see in the Tests tab, I have actually not configured anything. Tests are automatically discovered and executed. AppVeyor has support for the most common test frameworks, in my case xUnit.net. You can see the tests being run and the test results being provided at lines 113-121.

Packaging (Red Rectangles)

After the build completes, it’s time to package the NuGet target projects into NuGet packages. AppVeyor is integrated with NuGet, or rather exposes the NuGet CLI tool in the current OS image. The “Package NuGet projects” checkbox will automatically look for .nuspec files in the root directory of all projects, package them accordingly and automatically upload them to the internal artifact storage.

One of the projects includes a .nuspec file, can you see which one? (Hint: Check line 108)

If you look closely, you can see that the packaging is done before the tests are run. That doesn’t really make much sense, since packaging is not needed if any test fails, but that’s a minor thing.

Deploying

The last step is to deploy the NuGet package to my NuGet feed at NuGet.org. There are a lot of deployment providers available at AppVeyor, like NuGet, Azure, Web Deploy, SQL etc; you can find them under the Deployment tab.

I chose NuGet as deployment provider, and left the NuGet server URL empty as it automatically falls back to nuget.org. As I’ve also left Artifacts empty, it will automatically choose all NuGet package artifacts uploaded to my artifact store during the build process; in this case there is just one, as shown at lines 122-123. I only deploy from the master branch in order to avoid publishing packages by mistake should I push to another branch. Remember that I use a trunk-based source control strategy, so that should never happen.

Notice the placeholder under API key. This is where the NuGet API key for my feed goes, authorizing AppVeyor to publish NuGet packages onto my feed on my behalf. Since this is a sensitive piece of information, I have stored it as an environment variable (you might have noticed it in the picture of the Environment tab, enclosed in a purple rectangle).

Environment variables are available throughout the whole CI/CD process. There are also a bunch of pre-defined ones that can come in handy.

The actual deployment to NuGet.org can be seen at lines 124-126, and the package can then be found at my NuGet feed.

Some Last Words

AppVeyor is a powerful tool to help with CI/CD. It really makes it easy to setup fully automated processes from source control through the build and test processes to release and deployment.

I have used both Jenkins and TFS together with Octopus Deploy to achieve different levels of continuous delivery, but this is so much easier to set up in comparison, and without you needing to host anything except the applications you build.

Not a fan of UI-based configuration? No problem, AppVeyor also supports a yml-based definition file for the project configuration.
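A yml definition mirroring the UI configuration described above could look roughly like this (a sketch; the solution name and the encrypted API key are placeholders):

```yaml
version: 2.1.{build}
image: Visual Studio 2017
configuration: Release
# Patch the AppVeyor build version into the AssemblyInfo files
assembly_info:
  patch: true
  file: AssemblyInfo.*
  assembly_version: '{version}'
  assembly_file_version: '{version}'
  assembly_informational_version: '{version}'
# Do not build tags, and do not bump the build number for pull requests
skip_tags: true
pull_requests:
  do_not_increment_build_number: true
before_build:
  - nuget restore
build:
  project: Test.It.While.Hosting.Your.Windows.Service.sln
  publish_nuget: true
deploy:
  - provider: NuGet
    api_key:
      secure: <encrypted-api-key>
    on:
      branch: master
```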

Oh yeah, almost forgot. There are also some super nice badges you can show off with on, for example, your README.md on GitHub.

The first one comes from AppVeyor, and the second one from BuildStats. Both are supported in markdown. Go check them out!

BONUS! (Black Rectangles)

If you were observant when looking at the build output and at the bottom of the Build and Deployment tabs, you might have seen some PowerShell scripts.

Release Notes

The first script sets the release notes for the NuGet package based on the commit message from Git. It is applied before packaging and updates the .nuspec file that defines the package. Note the usage of the pre-defined environment variables mentioned earlier.

$path = "src/$env:APPVEYOR_PROJECT_NAME/$env:APPVEYOR_PROJECT_NAME.nuspec"
# Read the .nuspec as XML
[xml]$xml = Get-Content -Path $path
# Replace the releaseNotes element content with the Git commit message
$xml.GetElementsByTagName("releaseNotes").set_InnerXML("$env:APPVEYOR_REPO_COMMIT_MESSAGE $env:APPVEYOR_REPO_COMMIT_MESSAGE_EXTENDED")
# Write the updated XML back to the .nuspec file
Set-Content $path -Value $xml.InnerXml -Force

It opens the .nuspec file, reads its content, updates the releaseNotes tag with the commit message and then saves the changes.

The release notes can be seen at the NuGet feed, reading “Update README.md added badges”. It can also be seen in the Visual Studio NuGet Package Manager UI.

Git Version Tag

The second script pushes a tag with the deployed version back to the GitHub repository, on the commit that was fetched at the beginning of the process. This makes it easy to back-track which commit resulted in which NuGet package.

git config --global credential.helper store
Add-Content "$env:USERPROFILE\.git-credentials" "https://$($env:git_access_token):x-oauth-basic@github.com`n"
git config --global user.email "fresa@fresa.se"
git config --global user.name "Fredrik Arvidsson"
git tag v$($env:APPVEYOR_BUILD_VERSION) $($env:APPVEYOR_REPO_COMMIT)
git push origin --tags --quiet
  1. In order to authenticate with GitHub, we use the git credential store. This could be a security issue, since the credentials (here a Git access token) will be stored on disk on the AppVeyor build agent. However, since nothing on the build agent is ever shared, and the agent is destroyed after the build process, it’s not an issue.
  2. Store the credentials. The git access token generated from my Github account is securely stored using a secure environment variable.
  3. Set user email.
  4. Set user name.
  5. Create a git tag based on the build version and apply it on the commit fetched in the beginning of the CI/CD process.
  6. Push the created tag to GitHub. Notice the --quiet flag, suppressing the output from the git push command that otherwise would produce an error in the PowerShell script execution task run by AppVeyor.

Do you remember the checkbox called “Do not build tags” mentioned in the GitHub chapter above? Well, it is checked in order to prevent a never-ending loop of new builds triggered when pushing the tag to the remote repository.
