
Tag: .net

ApiCompat has Moved into the .NET SDK

A while ago I wrote about how to detect breaking changes in .NET using Microsoft.DotNet.ApiCompat. Since then, ApiCompat has moved into the .NET SDK.

What has Changed?

Since ApiCompat is now part of the .NET SDK, the Arcade package feed no longer needs to be referenced.

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

The package reference can also be removed.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

In order to continue running assembly validation from MSBuild, install the Microsoft.DotNet.ApiCompat.Task package.

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat.Task" Version="8.0.404" PrivateAssets="all" IsImplicitlyDefined="true" />
</ItemGroup>

Enable assembly validation.

<PropertyGroup>
  <ApiCompatValidateAssemblies>true</ApiCompatValidateAssemblies>
</PropertyGroup>

The contract assembly reference directive has also changed, so the old ResolvedMatchingContract item needs to be replaced with the ApiCompatContractAssembly property.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

<PropertyGroup>
  <ApiCompatContractAssembly>LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll</ApiCompatContractAssembly>
</PropertyGroup>

The property controlling suppression of breaking changes has changed from BaselineAllAPICompatError to ApiCompatGenerateSuppressionFile.

<PropertyGroup>
  <BaselineAllAPICompatError>false</BaselineAllAPICompatError>
  <ApiCompatGenerateSuppressionFile>false</ApiCompatGenerateSuppressionFile>
</PropertyGroup>
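As with any MSBuild property, it can also be set for a single build from the command line instead of in the project file, which is handy when a breaking change is deliberately accepted and a new suppression file should be generated (the property is the one shown above; the command line is just a sketch):

dotnet build /p:ApiCompatGenerateSuppressionFile=true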

That’s it, you’re good to go!

Compatibility Baseline / Suppression directives

Previously the suppression file, ApiCompatBaseline.txt, contained text directives describing suppressed compatibility issues.

Compat issues with assembly Kafka.Protocol:
TypesMustExist : Type 'Kafka.Protocol.ConsumerGroupHeartbeatRequest.Assignor' does not exist in the implementation but it does exist in the contract.

This format has changed to an XML based format, written by default to a file called CompatibilitySuppressions.xml.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig</Target>
    <Left>LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll</Left>
    <Right>obj\Debug\netstandard2.1\Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

This format is more verbose than the old one and, if you ask me, a bit more difficult to read from a human perspective. Descriptions of the various DiagnosticIds can be found in this list.

Path Separator Mismatches

Looking at the suppression example, you might notice that the suppressions contain references to the compared assembly and the baseline contract. It’s not a coincidence that the path separators differ between the reference to the contract assembly and the reference to the assembly being compared. The Left reference is a templated copy of the ApiCompatContractAssembly directive using OS-agnostic forward slashes, but the Right reference is generated by ApiCompat and is not OS-agnostic, hence the backslash path separators generated when executing under Windows. If ApiCompat is executed under Linux it would generate forward slash path separators instead.

You might also notice that the reference to the assembly being compared contains the build configuration name. This might not match the build configuration used in, for example, a build pipeline (Debug vs Release).

Both of these path differences will make ApiCompat ignore the suppressions when the paths don’t match. There is no documentation on how to consolidate them, but fortunately there are a couple of somewhat hidden transformation directives which can help control how these paths are formatted.

<PropertyGroup>
  <_ApiCompatCaptureGroupPattern>.+%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)%5C$([System.IO.Path]::DirectorySeparatorChar)(.+)</_ApiCompatCaptureGroupPattern>
</PropertyGroup>

<ItemGroup>
  <!-- Make sure the Right suppression directive is OS-agnostic and disregards configuration -->
  <ApiCompatRightAssembliesTransformationPattern Include="$(_ApiCompatCaptureGroupPattern)" ReplacementString="obj/$1/$2" />
</ItemGroup>

The _ApiCompatCaptureGroupPattern regex directive captures path segment groups which can be used in the ApiCompatRightAssembliesTransformationPattern directive to rewrite the assembly reference path to something that is compatible with both Linux and Windows, and removes the build configuration segment.

Using this will cause the Right directive to change accordingly.

<?xml version="1.0" encoding="utf-8"?>
<Suppressions xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Suppression>
    <DiagnosticId>CP0001</DiagnosticId>
    <Target>
T:Kafka.Protocol.CreateTopicsRequest.CreatableTopic.CreateableTopicConfig
    </Target>
    <Left>
LastMajorVersionBinary/lib/netstandard2.1/Kafka.Protocol.dll
    </Left>
    <Right>obj/netstandard2.1/Kafka.Protocol.dll</Right>
  </Suppression>
</Suppressions>

There is a similar directive for the Left directive named ApiCompatLeftAssembliesTransformationPattern.
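A minimal sketch of what that could look like, assuming the same Include/ReplacementString shape as for the Right directive (the replacement path here is purely illustrative):

<ItemGroup>
  <!-- Rewrite the Left (contract) assembly reference the same way -->
  <ApiCompatLeftAssembliesTransformationPattern Include="$(_ApiCompatCaptureGroupPattern)" ReplacementString="LastMajorVersionBinary/$1/$2" />
</ItemGroup>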


ConfigureAwait or Not

I often get into discussions about whether you should disable continuing on the captured context when awaiting a task, so I’m going to write down some of my reasoning around this rather complex functionality. Let’s start with some fundamentals first.

async/await

Tasks wrap operations and schedule them using a task scheduler. The default scheduler in .NET schedules tasks on the ThreadPool. async and await are syntactic sugar telling the compiler to generate a state machine that can keep track of the state of the task. It iterates forward by keeping track of the current awaiter’s completion state and the current location of execution. If the awaiter is completed it continues forward to the next awaiter, otherwise it asks the current awaiter to schedule the continuation.

If you are interested in deep-diving into what actually happens with async and await, I recommend this very detailed article by Stephen Toub.

What is the Synchronization Context?

Different components have different models for how scheduling of operations needs to be synchronized; this is where the synchronization context comes into play. The default SynchronizationContext synchronizes operations on the thread pool, while others might use other means.

The most common awaiters, like those implemented for Task and ValueTask, consider the current SynchronizationContext when scheduling an operation’s continuation. When the state machine moves forward executing an operation it goes via the current awaiter, which, when not completed, might schedule the state machine’s current continuation via SynchronizationContext.Post.

ConfigureAwait

The only thing this method actually does is wrap the current awaiter together with its single argument, continueOnCapturedContext. This argument tells the awaiter to use any configured custom synchronization context or task scheduler when scheduling the continuation. Turned off, i.e. ConfigureAwait(false), it simply bypasses them and schedules on the default scheduler, i.e. the thread pool. If there is no custom synchronization context or task scheduler, ConfigureAwait becomes a no-op. The same applies if the awaiter doesn’t need queueing when the state machine reaches the awaitable, i.e. the task has already completed.

Continue on Captured Context or not?

If you know there is a context that any continuation must run on, for example a UI thread, then yes, the continuation must be configured to capture the current context. As this is the default behavior it’s not technically required to declare it, but explicitly configuring the continuation signals to the next developer that the continuation matters here. If configured implicitly there won’t be anything hinting that it wasn’t just a mistake to leave it out, or that the continuation was ever considered or understood.

In the most common scenario though, the current context is not relevant. We can declare that by explicitly stating that the continuation doesn’t need to run on any captured context, i.e. ConfigureAwait(false).
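A minimal sketch of what that looks like in library-style code (the method name is made up for illustration):

public async Task<string> ReadAllTextAsync(string path)
{
    using var reader = new StreamReader(path);
    // The continuation has no need for the captured context,
    // so let it continue on the thread pool.
    return await reader.ReadToEndAsync().ConfigureAwait(false);
}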

Enforcing ConfigureAwait

Since configuring the continuation is not required, it’s easy to forget. Fortunately there is a Roslyn analyzer that can be enabled to enforce that all awaiters have been configured.
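One option, assuming the analyzer in question is the built-in CA2007 rule (which flags awaits that don’t call ConfigureAwait), is to raise its severity in .editorconfig:

[*.cs]
dotnet_diagnostic.CA2007.severity = error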

Summary

Always declare ConfigureAwait to show intent that the continuation behavior has explicitly been considered. Only continue on a captured context if there is a good reason for doing so, otherwise reap the benefits of executing on the thread pool.


OpenAPI Evaluation

Json Schema has a validation vocabulary which can be used to set constraints on json structures. OpenAPI uses Json Schemas to describe parameters and content, so wouldn’t it be nice to be able to evaluate HTTP request and response messages?

OpenAPI.Evaluation is a .NET library which can evaluate HTTP request and response messages according to an OpenAPI specification. Json schemas are evaluated using JsonSchema.NET together with the JsonSchema.Net.OpenApi vocabulary. It supports the standard HttpRequestMessage and HttpResponseMessage and comes with a DelegatingHandler, OpenApiEvaluationHandler, which can be used by HttpClient to intercept and evaluate requests going out and responses coming in according to an OpenAPI 3.1 specification. It’s also possible to manually evaluate requests and responses by traversing the parsed OpenAPI specification and feeding its evaluators the corresponding extracted content.

ASP.NET

OpenAPI.Evaluation.AspNet integrates OpenAPI.Evaluation with the ASP.NET request pipeline to enable server side evaluation. It comes with extension methods to evaluate HttpRequest and HttpResponse abstractions. It also supports integration via the HttpContext enabling easy access for ASP.NET request and response pipelines and controllers. A middleware is provided for simple integration and can be enabled via the application builder and service collection.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddOpenApiEvaluation(
    OpenAPI.Evaluation.Specification.OpenAPI.Parse(
        JsonNode.Parse(File.OpenRead("openapi.json"))));
var app = builder.Build();
// Registers the middleware into the request pipeline
app.UseOpenApiEvaluation();

To evaluate directly from a request pipeline, the OpenAPI specification first needs to be loaded and registered as described above. Extension methods for HttpContext can then be used for request and response evaluation:

var requestEvaluationResult = context.EvaluateRequest();
...
var responseEvaluationResult = context.EvaluateResponse(200, responseHeaders, responseContent);

Evaluation Result

JsonSchema.NET implements the Json Schema output format, which OpenAPI.Evaluation is influenced by. The OpenAPI specification doesn’t define annotations the way Json Schema does, so I decided to adopt something similar. The evaluation result contains information describing each specification object traversed and what path the evaluation process took through the specification. An example produced by the default evaluation result json converter is shown below; it uses a hierarchical output format.

{
  "valid": false,
  "evaluationPath": "",
  "specificationLocation": "http://localhost/#",
  "details": [
    {
      "valid": false,
      "evaluationPath": "/paths",
      "specificationLocation": "http://localhost/#/paths",
      "details": [
        {
          "valid": false,
          "evaluationPath": "/paths/~1user~1{user-id}",
          "specificationLocation": "http://localhost/#/paths/%7e1user%7e1%7buser-id%7d",
          "details": [
            {
              "valid": false,
              "evaluationPath": "/paths/~1user~1{user-id}/get",
              "specificationLocation": "http://localhost/#/paths/%7e1user%7e1%7buser-id%7d/get",
              "details": [
                {
                  "valid": false,
                  "evaluationPath": "/paths/~1user~1{user-id}/get/parameters",
                  "specificationLocation": "http://localhost/#/paths/%7e1user%7e1%7buser-id%7d/get/parameters",
                  "details": [
                    {
                      "valid": false,
                      "evaluationPath": "/paths/~1user~1{user-id}/get/parameters/0",
                      "specificationLocation": "http://localhost/#/paths/%7e1user%7e1%7buser-id%7d/get/parameters/0",
                      "details": [
                        {
                          "valid": false,
                          "evaluationPath": "/paths/~1user~1{user-id}/get/parameters/0/$ref/components/parameters/user/content",
                          "specificationLocation": "http://localhost/#/components/parameters/user/content",
                          "details": [
                            {
                              "valid": false,
                              "evaluationPath": "/paths/~1user~1{user-id}/get/parameters/0/$ref/components/parameters/user/content/application~1json",
                              "specificationLocation": "http://localhost/#/components/parameters/user/content/application%7e1json",
                              "details": [
                                {
                                  "valid": false,
                                  "evaluationPath": "/paths/~1user~1{user-id}/get/parameters/0/$ref/components/parameters/user/content/application~1json/schema",
                                  "specificationLocation": "http://localhost/#/components/parameters/user/content/application%7e1json/schema",
                                  "schemaEvaluationResults": [
                                    {
                                      "valid": false,
                                      "evaluationPath": "",
                                      "schemaLocation": "http://localhost#",
                                      "instanceLocation": "",
                                      "errors": {
                                        "required": "Required properties [\"first-name\"] are not present"
                                      },
                                      "details": [
                                        {
                                          "valid": true,
                                          "evaluationPath": "/properties/last-name",
                                          "schemaLocation": "http://localhost/#/properties/last-name",
                                          "instanceLocation": "/last-name"
                                        }
                                      ]
                                    }
                                  ]
                                }
                              ]
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ]
            }
          ]
        }
      ]
    },
    {
      "valid": true,
      "evaluationPath": "/servers",
      "specificationLocation": "http://localhost/#/servers",
      "details": [
        {
          "valid": true,
          "evaluationPath": "/servers/0",
          "specificationLocation": "http://localhost/#/servers/0",
          "annotations": {
            "url": "http://localhost/v1",
            "description": "v1"
          }
        }
      ]
    }
  ]
}

Parameter Value Parsers

Headers, path values, query strings and cookies can be described in an OpenAPI specification using a combination of instructive metadata, like styles, and schemas. It’s designed to cater for simple data structures and is complemented by content media types for more complex scenarios.

OpenAPI.Evaluation supports all the styles described in the specification, but how complex scenarios the specification should support is not explicitly defined; that is left to implementors to decide. In order to cater for more complex scenarios, it’s possible to define custom parsers per parameter by implementing IParameterValueParser and registering them when parsing the OpenAPI specification.

OpenAPI.Evaluation.Specification.OpenAPI.Parse(jsonDocument, parameterValueParsers: new[] { customParameterValueParser });

DynamoDB as an Event Store

I’ve been wondering how well Amazon DynamoDB would fit an event store implementation. There were two different designs I wanted to explore, both described in this article, of which I implemented one. The source code is available on GitHub, including a NuGet package on nuget.org.

Event Store Basics

An event store is a storage concept that stores events in chronological order. These events can describe business-critical changes within a domain aggregate. Besides storing the events of the aggregate, an aggregate state can be stored as a snapshot to avoid reading all events each time the aggregate needs to be rebuilt. This can boost performance both in regards to latency and the amount of data that needs to be transferred.

DynamoDB

DynamoDB is a large-scale distributed NoSQL database that can handle millions of requests per second. It’s built to handle structured documents grouped in partitions, which can be stored in order within a partition key. This has obvious similarities to what an event stream for an aggregate looks like, which seems promising!

Another neat feature of DynamoDB is DynamoDB Streams and Kinesis Data Streams with DynamoDB, both of which can stream changes in a table to various other AWS services and clients. No need to implement an outbox and integrate with a separate message broker. Add point-in-time recovery and it is possible to stream the whole event store at any time!

Separated Snapshot and Events

Let’s start with the first design that uses composite keys to store snapshots and events grouped by aggregate.

The events are grouped into commits to create an atomic unit that has consistency guarantees without using transactions. Commits use a monotonically increasing id as sort key, while the snapshot uses zero. Since sort keys determine in what order items are stored, the commits become ordered chronologically with the snapshot leading, meaning the commits of events can be fetched with a range query while the snapshot can be fetched separately. It makes little sense to fetch them all, even though that would also be possible. The snapshot includes the sort key of the commit it represents up until, in order to know where to start querying for any events that have not yet been applied to the snapshot.

Do note that there is an item size limit of 400 KB in DynamoDB, which should be more than enough to represent both commits and snapshots, but as both are stored as opaque binary data they can be compressed. Besides lowering the size of the items and the round-trip latency, this can also lower read and write cost.

When storing a commit the sort key is monotonically increased. This is predictable and can therefore be used as a condition to introduce both optimistic concurrency and idempotency, preventing inconsistency when multiple competing consumers write events at the same time or when a network error occurs during a write operation.

"ConditionExpression": "attribute_not_exists(PK)"

A snapshot can be stored once it is smaller than the accumulated size of the non-snapshotted commits, to save cost and lower latency. As snapshots are detached from the event stream, it doesn’t matter if storing one after a commit succeeds or fails; if it fails, the write operation can be re-run at any time. Updating a snapshot keeps the guarantees for optimistic concurrency and idempotency by only writing if the version the new snapshot points to is higher than that of the currently stored snapshot, or if the attribute is missing altogether, which means no snapshot exists yet.

"ConditionExpression": "attribute_not_exists(version) OR version < :version"

More about conditional writes can be found here.

This was the solution I chose to implement!

Interleaving Snapshots and Events

This was an alternative I wanted to try out, interleaving snapshots and events in one continuous stream.

The idea was to only require a single request to fetch both the snapshot and the trailing, non-snapshotted, events, lowering the number of round trips to DynamoDB and increasing possible throughput. Reading commits and the latest snapshot would be done by reading in reverse chronological order until a snapshot is found.

This however presents a problem. If a snapshot is stored after, say, every 10th commit, 11 items have to be queried to avoid multiple round trips, even though the first item could be a snapshot making the 10 other items redundant. Furthermore, there are no guarantees for when a snapshot gets written, hence there is no way to know upfront exactly how many items to read to reach the latest snapshot.

Another problem is that all snapshots have to be read when reading the whole event stream.

Conclusion

DynamoDB turns out to be quite a good candidate for the job as persistence engine for an event store, supporting the design of an ordered list of events including snapshots, and it has the capabilities to stream the events to other services. Replaying a single aggregate can be done with a simple ranged query, and its schemaless design makes it easy to store both commits and snapshots in the same table. Its distributed nature enables almost limitless scalability, and the fact that it is a managed service makes operating it a breeze.

Very nice indeed!


Detecting Breaking Changes

When integrating with other applications and libraries it’s important to detect when APIs change in an incompatible way, as that might cause downtime for the downstream application or library. SemVer is a popular versioning strategy that can hint about breaking changes by bumping the major version part, but as an upstream application developer it can be difficult to detect that a code change is in fact a breaking change.

ApiCompat

Microsoft.DotNet.ApiCompat is a tool built for .NET that can compare two assemblies for API compatibility. It’s built and maintained by the .NET Core team for usage within the Microsoft .NET development teams, but it’s also open source and available to anyone.

Installation

The tool is provided as a NuGet package on the .NET Core team’s NuGet feed, which is not nuget.org (the feed most package managers reference by default) but a custom NuGet server hosted in Azure. The URL to the feed needs to be explicitly specified in the project that wants to use it.

In the project file, add a reference to the NuGet feed under the Project element:

<PropertyGroup>
  <RestoreAdditionalProjectSources>
    https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json;
  </RestoreAdditionalProjectSources>
</PropertyGroup>

Add a reference to the Microsoft.DotNet.ApiCompat package:

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ApiCompat" Version="7.0.0-beta.22115.2" PrivateAssets="All" />
</ItemGroup>

The most recent version of the package can be found by browsing the feed.

Usage

The tool can execute as part of the build process and fail the build if the current source code contains changes that are not compatible with a provided contract, for example an assembly from the latest major release.

While it is possible to commit a contract assembly to Git, another approach is to automatically fetch it from its source early in the build process. This example uses a NuGet feed as the release source, but it could just as well be an asset in a GitHub release or something else.

<Target Name="DownloadLastMajorVersion" BeforeTargets="PreBuildEvent">
  <DownloadFile SourceUrl="https://www.nuget.org/api/v2/package/MyLibrary/2.0.0" DestinationFolder="LastMajorVersionBinary">
    <Output TaskParameter="DownloadedFile" PropertyName="LastMajorVersionNugetPackage" />
  </DownloadFile>
  <Unzip SourceFiles="$(LastMajorVersionNugetPackage)" DestinationFolder="LastMajorVersionBinary" />
</Target>

This will download a NuGet package to the folder LastMajorVersionBinary in the project directory using the DownloadFile task. If the directory doesn’t exist it will be created. If the file already exists and has not been changed, this becomes a no-op.

The Unzip task will then unpack the .nupkg file to the same directory. Same thing here: if the files already exist this is a no-op.

The last step is to instruct ApiCompat to use the unpacked assembly file as the contract the source code will be compared with. This is done by adding the ResolvedMatchingContract item, which is the only setting required to run the tool.

<ItemGroup>
  <ResolvedMatchingContract Include="LastMajorVersionBinary/lib/$(TargetFramework)/$(AssemblyName).dll" />
</ItemGroup>

The path points to where the assembly file is located in the unpacked NuGet package directory.

Building the project will now download the contract and execute ApiCompat with the default settings. Remember to use proper access modifiers when writing code: internal is by default not treated as part of a public contract1, as the referenced member or type cannot be reached by other assemblies, as opposed to public.

1 InternalsVisibleToAttribute can make internal members and types accessible to other assemblies and should be used with care.
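For reference, exposing internals to, say, a test assembly looks something like this (the assembly name is made up):

// AssemblyInfo.cs, or any other file in the project
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]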

Handling a Breaking Change

Breaking changes should be avoided as far as possible, as they break integrations downstream. This is true for both libraries and applications. An indicated breaking change should most of the time be mitigated in a non-breaking fashion, like adding another API and deprecating the current one. But at some point the old API needs to be removed, and thus a breaking change is introduced, which will fail the build.

The property BaselineAllAPICompatError can be used to accept breaking changes. The specification of the breaking changes will be written to a file called ApiCompatBaseline.txt in the root of the project, which ApiCompat uses to ignore these tracked incompatible changes from then on. This property should only be set when a breaking change is to be accepted, which should result in a new major release.

<PropertyGroup>
  <BaselineAllAPICompatError>true</BaselineAllAPICompatError>
</PropertyGroup>

Once a new build has been executed and the new contract baseline has been established, remember to set the property to false or remove it.

The baseline file should be committed to source control, where it can be referenced as documentation for the breaking change, for example by including it in a BREAKING CHANGE conventional commit message.

Once a new major release has been published, there are two options.

  1. Reference the assembly in the new major release as the contract and delete ApiCompatBaseline.txt.
  2. Do nothing 🙂

As git can visualize diffs between two references, the content of the ApiCompatBaseline.txt will show all the breaking changes between two version tags, which can be quite useful.
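For example, something like this would show the breaking changes introduced between two hypothetical release tags:

git diff v2.0.0 v3.0.0 -- ApiCompatBaseline.txt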

Summary

ApiCompat is a great tool for automating detection of breaking changes during development. It helps avoid introducing incompatible changes and potential headaches for downstream consumers, both for applications and libraries.

A complete example of the mentioned changes is available here.


C# 8 Preview with Mads Torgersen

A colleague sent this video to me today containing a presentation of the upcoming C# 8 features.

For those of you who do not recognize this guy, he’s Microsoft’s Program Manager for the C# language, and has been for many years! Not to be confused with Mads Kristensen (who wrote this awesome blog engine), also working at Microsoft, both originating from Denmark.

So, let me give you a little summary if you don’t have the time to watch the video, which btw you really should if you’re a .NET developer like me.

Nullable Reference Types

If you have coded C# before, you probably know about the explicit nullable type operator ‘?’.

int? length = null;

This means that the integer length now can be set to null, which the primitive type int never can be. This was introduced way back in 2005 when .NET Framework 2.0 was released.

So now the C# team introduces nullable reference types. This means that you can make, for example, a string nullable. Well, a string can already be null, you might say, so what’s the big deal? Intent. I bet all programmers have had their share of the infamous NullReferenceException, am I right? Enter nullable reference types. Want your string to be nullable by intent? Use string?. This means that the intent of the string variable is that it can be null, and your compiler will notice this. Using a member on the nullable string variable without first checking for null will cause a compiler warning (which can be escalated to an error). Awesome!
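A small sketch of how this plays out (variable names are made up; the exact diagnostics depend on the final compiler behavior):

#nullable enable
string? name = null;

// Warning: possible dereference of null
Console.WriteLine(name.Length);

if (name != null)
{
    // No warning: the compiler knows name isn't null here
    Console.WriteLine(name.Length);
}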

Async Streams

Streams are naturally a pushing thing, meaning stuff happens at any time outside the control of the consumer (think a river and a dam). This might build up to congestion where you need some sort of throttling. With async streams you get this naturally! The consumer now has a say in whether it is ready to consume or not. If you have ever used a message queue, this is probably already a natural thing for you. For distributed transport systems like RabbitMQ, this is natural: you spin up more consumers when the queue gets bigger and bigger and use a throttling mechanism so each consumer only consumes a couple of messages at a time.
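A minimal sketch of the pull-based model async streams enable (the names are made up; this builds on the IAsyncEnumerable support presented in the talk):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class Sensor
{
    public async IAsyncEnumerable<int> ReadMeasurementsAsync()
    {
        for (var i = 0; i < 10; i++)
        {
            await Task.Delay(100); // simulate waiting for the next measurement
            yield return i;
        }
    }

    public async Task PrintAllAsync()
    {
        // The consumer pulls values at its own pace instead of being pushed to
        await foreach (var measurement in ReadMeasurementsAsync())
        {
            Console.WriteLine(measurement);
        }
    }
}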

Default Interface Implementations

Now this one is interesting. It means you can implement functionality in an Interface definition.

Say what?

You mean like an abstract class? Well, yes and no. It’s more a usage of the private explicit implementation of an interface. You have probably seen an explicit implementation of an interface before, but let me demonstrate anyway.

public interface IGossip
{
    string TellGossip();
}

public class GossipingNeighbor : IGossip
{
    string IGossip.TellGossip()
    {
        return "Haha, you won't know about this!";
    }
}

public class NosyNeighbor
{
    private readonly GossipingNeighbor _gossipingNeighbor;

    public NosyNeighbor(GossipingNeighbor gossipingNeighbor)
    {
        _gossipingNeighbor = gossipingNeighbor;
    }

    public void PleaseTellMe()
    {
        // Compiler error
        var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();
    }
}

So with the extended interface definitions in C# 8, you can actually provide implementations directly in the interface definition! If you add a method to the above interface and provide a default implementation, the implementations of that interface do not break, which means they do not need to explicitly implement the new method. This does not even affect the implementation until it is cast to the interface.

public void PleaseTellMe()
{
    // Compiler error
    var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();

    // This works!
    var iDidntKnowThat = ((IGossip) _gossipingNeighbor).TellGossip();
}
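To make that concrete, here is a sketch of what such a default implementation could look like on the interface above (the added member is purely illustrative):

public interface IGossip
{
    string TellGossip();

    // New member with a default implementation; GossipingNeighbor
    // keeps compiling without implementing it.
    string Whisper() => $"psst... {TellGossip()}";
}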

Default interface implementations were introduced in Java 8 back in 2014, so here C# is actually behind Java!

Extend Everything!

Well, maybe not. Since 2007 (.NET Framework 3.5, C# 3.0) we have had extension methods: you can add methods to an existing class definition without touching the class. Formerly this only included methods; now you may be able to use properties, operators and maybe constructors as well! There are limitations, however. You cannot hold instance state, but you can hold definition state, i.e. static state. Maybe not revolutionizing, you can already do a great amount of stuff with extension methods, but still, there will be times where this might be useful.

Mads also talks about extending extensions with interfaces. He does not go into detail about what that means, and also states that this is probably way into the future of the evolution of C#.

Conclusion

No generic attributes?

Still, lots of new goodies that might be included in C# 8. However, bear in mind that many of these ideas might not turn out like this when C# 8 is actually released. If you watch the video you’ll hear Mads state the uncertainties of what will actually be shipped, but I think it corresponds quite closely to the C# 8 Milestone.
