
Year: 2025

Source Generators and Package Dependencies

Source generators that have package dependencies require some extra directives for Roslyn to load them properly, and the directives differ depending on whether the generator is consumed as a package reference or as a project reference.

Consider a source generator, MySourceGenerator, with a dependency on a fictional .NET Standard 2.0 package, Foo.Bar. The package reference directive would look something like:

<ItemGroup>
    <PackageReference Include="Foo.Bar" Version="1.2.3" PrivateAssets="all" GeneratePathProperty="true" />
</ItemGroup>

PrivateAssets="all" means that the consuming project doesn’t take a direct dependency on the package. It isn’t the consuming project that needs the dependency per se; it’s the compiler when running the source generator.

GeneratePathProperty="true" makes MSBuild expose the path to the package’s folder (usually inside the global packages folder) as a property named Pkg followed by the package id with dots replaced by underscores, in this case $(PkgFoo_Bar). The path will be used in both scenarios to reference the dependency.

Project Reference

A project consuming a source generator as a project reference would have a directive like:

<ItemGroup>
    <ProjectReference Include="MySourceGenerator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false"/>
</ItemGroup>

We use the path (generated by GeneratePathProperty) to produce target path references to the dependency’s assembly to let MSBuild know about the dependency:

<PropertyGroup>
    <GetTargetPathDependsOn>$(GetTargetPathDependsOn);GetDependencyTargetPaths</GetTargetPathDependsOn>
</PropertyGroup>
<Target Name="GetDependencyTargetPaths">
    <ItemGroup>
        <TargetPathWithTargetPlatformMoniker Include="$(PkgFoo_Bar)\lib\netstandard2.0\*.dll" IncludeRuntimeDependency="false" />
    </ItemGroup>
</Target>

IncludeRuntimeDependency="false" states that this is a compile-time dependency rather than a runtime dependency, which prevents it from being copied to the output directory.

Note that all transitive dependencies used also need to be referenced!

Package Reference

A project consuming a source generator as a package reference would have a directive like:

<ItemGroup>
    <PackageReference Include="MySourceGenerator" Version="x.y.z" PrivateAssets="all" />
</ItemGroup>

There is no MSBuild build of the source generator project involved here; instead, the dependencies need to be packed into the source generator’s NuGet package. We can use the path in a similar fashion to include the dependency in the package using this directive:

<ItemGroup>
    <None Include="$(PkgFoo_Bar)\lib\netstandard2.0\*.dll" Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
</ItemGroup>

analyzers/dotnet/cs is a special path in a NuGet package where analyzers and source generators are located. Their dependencies can be included at the same path.

This scenario is actually pretty well documented in the Source Generators Cookbook.

Don’t forget transitive dependencies, as described above.

Generated Code Dependencies

We’ve covered how to handle dependencies that the source generator itself depends on, but what about dependencies that the code it generates depends on? There is currently no way to declare such a package reference, unless it targets the same framework as the source generator, meaning .NET Standard 2.0, since PackageReference directives must be compatible with the project where they are declared. This means that required dependencies must be referenced manually by the consumer.

It can be a good idea to check from the source generator that the expected dependency is referenced by the target assembly (context.Compilation.ReferencedAssemblyNames) and produce a diagnostic if it is missing.
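As a sketch, such a check could look like the following (the diagnostic id, category, and the Foo.Bar package name are illustrative assumptions):

```csharp
// Sketch: verify that the consumer references the dependency the
// generated code needs; descriptor values are assumptions.
private static readonly DiagnosticDescriptor MissingDependency = new(
    id: "MYGEN001",
    title: "Missing dependency",
    messageFormat: "The generated code requires a reference to '{0}'",
    category: "MySourceGenerator",
    defaultSeverity: DiagnosticSeverity.Error,
    isEnabledByDefault: true);

public void Execute(GeneratorExecutionContext context)
{
    if (!context.Compilation.ReferencedAssemblyNames
            .Any(identity => identity.Name == "Foo.Bar"))
    {
        context.ReportDiagnostic(
            Diagnostic.Create(MissingDependency, Location.None, "Foo.Bar"));
        return;
    }
    // ... generate code ...
}
```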

Conclusion

Handling dependencies in source generators is clunky and requires quite a lot of manually added directives. Hopefully we’ll see a more streamlined solution in the future where Roslyn deals with this automatically.


Reusable TestContainers and xUnit Parallelism

Testcontainers’ xUnit library includes fixtures that simplify writing integration tests. However, it does not work well with reusable containers, since Testcontainers doesn’t support creating containers idempotently across parallel processes.

Why Reuse?

There are a couple of common strategies when orchestrating integration dependencies when writing integration tests:

  • In-memory orchestration
  • Real dependencies
  • Third-party emulators

Testcontainers enables the latter two while simplifying orchestration, but the downside of these strategies compared to the first is start and stop latency, which may become a serious problem as the number of tests increases. Enter Reuse.

Lack of Idempotency

Testcontainers uses the create container endpoint of the Docker API to create a new container. If Reuse is enabled, it first checks whether there is a container matching a hash representing the labels of the current test container. If there is, it reuses it; otherwise it creates a new container with a random pseudo name. That name is not deterministic, which makes the process non-idempotent. This works fine when running tests in a single, sequential process, but will eventually fail when using parallel processes, or a process that runs tests in parallel without synchronization.

An example is NCrunch, which by default parallelizes test execution across multiple processes.

Testcontainers will eventually create more than one container with the same hash, which causes the framework to fail on the next execution: it doesn’t support multiple containers with the same hash, nor does that align with the purpose of reusability.

TestContainers.Xunit.Reusable

TestContainers.Xunit.Reusable is a drop-in replacement for Testcontainers.XunitV3 that achieves idempotency by simply using the reuse hash as the name of the reusable container. It implements optimistic concurrency: if a process fails to create a reusable container due to a naming conflict, it falls back to reusing the container that another parallel process managed to create first.

Simply replace <PackageReference Include="Testcontainers.XunitV3" Version="x.y.z" /> with <PackageReference Include="Reusable.XunitV3.TestContainers" Version="a.b.c" />. Enable reusability for a ContainerFixture by overriding the Reuse property or by setting the TESTCONTAINERS_REUSE_ENABLE environment variable to "true".
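A minimal sketch of the property override, assuming the ContainerFixture base class and Reuse property described above (RedisBuilder and RedisContainer are placeholder types, and the IMessageSink constructor parameter follows the usual xUnit fixture pattern):

```csharp
// Hypothetical fixture enabling reuse; the builder/container types are
// placeholders and Reuse comes from the replacement package.
public sealed class RedisFixture(IMessageSink messageSink)
    : ContainerFixture<RedisBuilder, RedisContainer>(messageSink)
{
    protected override bool Reuse => true;
}
```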



Are You Terraform Applying State Changes?

Did you know that terraform apply might apply important state changes even if the plan states that no changes are detected?

The output should be familiar. When running terraform plan, Terraform compares the resource directives defined in code with the live state of the real resources they represent, via the module’s remote state. If both the resource directives and the state match the actually provisioned resources, no changes are needed.
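When no drift is detected, the plan ends with a summary along the lines of:

```
No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
```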

Except, it might.

State Upgrades

Terraform modules keep track of the expected state of managed resources via what’s called the State. Besides tracking the expected resources, it also contains metadata about the resource definitions currently used by the configured providers. In particular, it might contain information about resource schema migrations, or state upgrade directives. These are not well covered in the user documentation and are supposedly an internal mechanism for provider developers; however, they might affect users as well.

Potential state upgrades are applied during apply, whether or not actual resource changes are needed. Not even the SDK documentation specifies this explicitly. There is also no record of such upgrades in the plan.

Always Apply

It might be tempting to skip applying a plan that states that no changes are needed, but this could lead to incompatible and difficult upgrade problems later on, due to missing state migrations and other metadata changes caused by upgrading a provider, or Terraform itself. For example, subtle changes to an identifier of a resource, like case sensitivity, may cause planning to fail for future versions of the provider. Add accidental breaking changes to the underlying APIs that might be mitigated by future releases of the provider, and you might end up in a catch-22: you can’t apply state changes due to downstream API changes, and you can’t upgrade due to missing state upgrades.

Always apply the plan.
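In a pipeline, that means running apply unconditionally on the saved plan file, even when the plan reports no changes (a sketch; the plan file name is arbitrary):

```
# Save the plan and apply it even when it reports "No changes", so that
# pending provider state upgrades are still written to the remote state.
# Applying a saved plan file does not prompt for confirmation.
terraform plan -out=tfplan
terraform apply tfplan
```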


Migrating from Text Templating to Source Generator

I recently ported Kafka.Protocol‘s source code generation functionality from Text Templating (T4) to a Source Generator, and I thought I would share my experience with how they differ and what to expect.

Text Templating

Text Templating has been around since 2005 and is available on .NET Framework using C# 6. Mono.TextTemplating, which has been around for a couple of years, supports .NET and C# 10, and recently Visual Studio 2022 started to ship with a revamped CLI tool for text templating.

A text template combines text blocks and directives to generate source code; it is reminiscent of classic ASP, or Razor templates. Here’s an example borrowed from Microsoft:

<#@ output extension=".cs" #>
<#@ assembly name="System.Xml" #>
<#
 System.Xml.XmlDocument configurationData = ...; // Read a data file here.
#>
namespace Fabrikam.<#= configurationData.SelectSingleNode("jobName").Value #>
{
  ... // More code here.
}

This is a design-time template: it runs when the text template file is saved and produces the output in a separate file. There are also run-time templates, which can generate code from other code at runtime; these can simplify splitting generated code into multiple files but require some other code to run in order to do so. Design-time templates produce a single file per text template. There is tooling that can get around this limitation, but it has its own limitations.

Source Generator

Source generators were first introduced in .NET 5 and run at compile time. A generator has to target .NET Standard 2.0 but can use any C# version. It can generate source code from input like a data model specification, or based on the objects being compiled. Generated code is added to the compilation, meaning both ordinary handwritten code and source generated code get compiled together into the same assembly. This means that the generated code isn’t written to any files, as with T4 templates; it’s written directly to the output assembly.
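To make this concrete, here’s a minimal sketch of an incremental generator (the generator name and the emitted content are made up) that adds one source file to every compilation referencing it:

```csharp
using Microsoft.CodeAnalysis;

// Minimal illustrative generator: contributes one generated file, which
// is compiled into the consuming assembly alongside the handwritten code.
[Generator]
public sealed class HelloGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        context.RegisterPostInitializationOutput(ctx => ctx.AddSource(
            "Hello.g.cs",
            "internal static class Hello { public const string Message = \"Hello\"; }"));
    }
}
```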

Writing Generated Code to Files

Emitting generated code to files can be enabled with some simple project directives:

<PropertyGroup>
    <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
    <CompilerGeneratedFilesOutputPath>Generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>

Make sure to exclude the emitted files from compilation; remember, the generated source code is already added to the compilation!

<ItemGroup>
  <Compile Remove="$(CompilerGeneratedFilesOutputPath)/**/*.cs" />
</ItemGroup>

There are no limitations on how many files a generator can produce or where they should be output. Storing generated source code in files is great if you want to track how changes in the source generator affect the generated code in source control, especially if you are generating code from a specification rather than from content in the compilation.

Using a Generator

To use a source generator, add a reference to it from a project:

<ItemGroup>
  <ProjectReference Include="..\Path\To\Generator\Generator.csproj" 
    OutputItemType="Analyzer" 
    ReferenceOutputAssembly="false" />
</ItemGroup>

…or a NuGet package:

<PackageReference Include="My.Generator" Version="1.2.3" PrivateAssets="all" />

Note the OutputItemType="Analyzer" directive in the project reference. It tells the compiler that the project is to be treated as an analyzer instead of a runtime reference. Output from an analyzer can be found under Dependencies in Visual Studio, and that’s where we find the generated types.

Note that they appear as files even though they are not; the file name is just an identifier. To find them in Visual Studio you need to search for the type they contain, or navigate to them in the Solution Explorer.

It would be possible to include the emitted files in a project and exclude them from compilation, which would make them searchable like any other file. But since they aren’t part of the compilation they will lack some analysis, disabling functionality like symbol navigation. I recommend keeping emitted files solely for source control purposes.

Limitations

Source generators, like analyzers, have limited exception handling. All exceptions thrown by a source generator are wrapped in a standard error message that contains very little information about what the problem is:

CSC : warning CS8785: Generator 'SourceGenerator' failed to generate source. It will not contribute to the output and compilation errors may occur as a result. Exception was of type 'NullReferenceException' with message 'Object reference not set to an instance of an object.'.

It’s possible to export the full exception, including the stack trace, by using the ErrorLog directive to output it as a SARIF formatted file. This isn’t great to work with, as you’d like proper diagnostic feedback from the compiler directly. A workaround can be to construct a diagnostic error manually and include the stack trace, but neither multiline messages nor the description property is output, so everything needs to be packed into a single-line message. Locations from stack traces are also problematic with incremental source generators: if a generator reruns with no code changes, the stack frame location is gone.
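A sketch of that workaround (the descriptor values are assumptions), flattening the exception into a single-line diagnostic:

```csharp
// Assumed descriptor; everything is packed into a single-line message
// since multiline diagnostic messages are not shown.
private static readonly DiagnosticDescriptor GeneratorError = new(
    id: "MYGEN000",
    title: "Source generator failed",
    messageFormat: "{0}",
    category: "MySourceGenerator",
    defaultSeverity: DiagnosticSeverity.Error,
    isEnabledByDefault: true);

public void Execute(GeneratorExecutionContext context)
{
    try
    {
        // ... generation logic ...
    }
    catch (Exception exception)
    {
        // Flatten the exception and stack trace onto one line.
        context.ReportDiagnostic(Diagnostic.Create(
            GeneratorError, Location.None,
            exception.ToString().Replace(Environment.NewLine, " | ")));
    }
}
```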

Diagnostic reporting only works for errors and warnings; other severities are ignored. A proposal for how informational diagnostic output should work can be found here.

Generated Code and Analyzers

Assemblies containing generated source code have had issues with not being properly analyzed, because analyzers were not reloaded when the generated code in an assembly changed; this required deleting the .vs cache directory and restarting Visual Studio to force-reload them. This was resolved in the Visual Studio 2022 17.12 release.

Conclusion

Even though source generators still have a few quirks, they are much easier to work with than T4 text templates. They enable unit testing and file splitting, and do not require running under Windows. They are also easier to distribute, as they can be packed into NuGet packages. And I’ve barely mentioned content-based generators, which open up a whole other world of opportunities! Check out the Source Generators Cookbook to get started.
