

C# 8 Preview with Mads Torgersen

A colleague sent me this video today, a presentation of the upcoming C# 8 features.

For those of you who do not recognize this guy, he’s Microsoft’s Program Manager for the C# language, and has been for many years! Not to be confused with Mads Kristensen (who wrote this awesome blog engine), who also works at Microsoft; both originate from Denmark.

So, let me give you a little summary in case you don’t have time to watch the video, which by the way you really should if you’re a .NET developer like me.

Nullable Reference Types

If you have coded C# before, you probably know about the nullable type modifier ‘?’.

int? length = null;

This means that the integer length can now be set to null, which the value type int on its own never can. This was introduced way back in 2005 when .NET Framework 2.0 was released.

So now, the C# team introduces nullable reference types. This means that you can make, for example, a string nullable. Well, a string can already be null, you might say, so what’s the big deal? Intention. I bet all programmers have had their share of the infamous NullReferenceException, am I right? Enter nullable reference types. Want your string to be nullable by intent? Use string?. This means that the intent of the string variable is that it can be null, and the compiler will take notice. Using a member on the nullable string variable without first checking for null will produce a compiler warning. Awesome!
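
A minimal sketch of how this is intended to look (syntax as presented in the talk; the details may change before release):

string? greeting = null;

// Calling a member like greeting.Length right here would make the
// compiler complain, since greeting is declared as nullable by intent.

if (greeting != null)
{
    // After the null check, the compiler knows greeting is safe to use.
    Console.WriteLine(greeting.Length);
}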

Async Streams

Streams are naturally a pushing thing, meaning stuff happens at any time, outside the control of the consumer (think of a river and a dam). This might build up to congestion, where you need some sort of throttling. With async streams you get this naturally! The consumer now has a say in whether it is ready to consume or not. If you have ever used a message queue, this is probably already familiar. For distributed transport systems like RabbitMQ, this is a natural thing: you spin up more consumers when the queue gets bigger and bigger, and use a throttling mechanism so each consumer only takes a couple of messages at a time.
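
A rough sketch of the proposed shape, an async iterator consumed with await foreach (the exact types and names were not final at the time of the talk):

public async IAsyncEnumerable<int> ReadMessagesAsync()
{
    for (var i = 0; i < 10; i++)
    {
        await Task.Delay(100); // simulate waiting for the next message
        yield return i;        // hand one message to the consumer
    }
}

public async Task ConsumeAsync()
{
    // The consumer pulls at its own pace, which is where the natural
    // throttling comes from.
    await foreach (var message in ReadMessagesAsync())
    {
        Console.WriteLine(message);
    }
}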

Default Interface Implementations

Now this one is interesting. It means you can implement functionality directly in an interface definition.

Say what?

You mean like an abstract class? Well, yes and no. It’s closer to the explicit implementation of an interface, where the member is hidden from the implementing class’s public surface. You have probably seen an explicit implementation of an interface before, but let me demonstrate anyway.

public interface IGossip
{
    string TellGossip();
}

public class GossipingNeighbor : IGossip
{
    string IGossip.TellGossip()
    {
        return "Haha, you won't know about this!";
    }
}

public class NosyNeighbor
{
    private readonly GossipingNeighbor _gossipingNeighbor;

    public NosyNeighbor(GossipingNeighbor gossipingNeighbor)
    {
        _gossipingNeighbor = gossipingNeighbor;
    }

    public void PleaseTellMe()
    {
        // Compiler error: TellGossip is explicitly implemented, so it is
        // only reachable through the IGossip interface.
        var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();
    }
}

With C# 8, you can use this kind of implementation directly in the interface definition! If you add a method to the above interface and provide a default implementation, the existing implementations of that interface do not break, which means they do not need to explicitly implement this new method. It does not even affect the implementing class until it is cast to the interface.

public void PleaseTellMe()
{
    // Compiler error
    var theMotherLoadOfSecrets = _gossipingNeighbor.TellGossip();

    // This works!
    var iDidntKnowThat = ((IGossip) _gossipingNeighbor).TellGossip();
}
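
For completeness, here is a sketch of what a default implementation itself could look like; the syntax was still a proposal at this point, and SpreadRumor is a hypothetical addition of mine:

public interface IGossip
{
    string TellGossip();

    // A new member with a default body. GossipingNeighbor keeps
    // compiling without implementing it.
    string SpreadRumor()
    {
        return "Rumor has it: " + TellGossip();
    }
}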

Default interface implementations (default methods) were introduced in Java 8 back in 2014, so here C# is actually behind Java!

Extend Everything!

Well, maybe not. Since 2007 (.NET Framework 3.5, C# 3.0) we have had extension methods: you can add methods to an existing class definition without touching the class. Formerly this only included methods; now you would be able to add properties, operators and maybe even constructors as well! However, there are limitations. You cannot hold instance state, but you can hold definition state, i.e. static state. Maybe not revolutionary, since you can already do a great amount of stuff with extension methods, but still, there will be times when this might be useful.
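
As a baseline, this is what extension methods already let us do today; the proposal extends the same idea to other member kinds (the new syntax was not final at the time of the talk):

public static class StringExtensions
{
    // Adds a Shout method to string without touching the string class.
    public static string Shout(this string text)
    {
        return text.ToUpperInvariant() + "!";
    }
}

// Usage: "hello".Shout() returns "HELLO!"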

Mads also talks about combining extensions with interfaces. He does not go into detail about what that means, and also states that this is probably far into the future of C#’s evolution.

Conclusion

No generic attributes?

Still, lots of new goodies that might be included in C# 8. However, bear in mind that many of these ideas might not turn out like this when C# 8 is actually released. If you watch the video you’ll hear Mads state the uncertainty about what will actually be shipped, but I think it corresponds quite closely to the C# 8 milestone.


Don’t Branch

Git is a great, popular, distributed source control system that most of us probably have encountered in various projects. It’s really simple:

1. Pull changes from the remote origin master branch to your local master branch.

2. Code.

3. (If needed) Merge any changes from the remote origin master branch into your local master branch.

4. Push your local changes on the master branch to the remote origin master branch.

That’s it! Simple, isn’t it? Master is always deployable and changes fast.
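
In plain git commands, one lap around that loop might look like this:

git pull origin master                      # steps 1 and 3 in one go
# ...code...
git commit -am "Small, integrable change"   # step 2, committed locally
git push origin master                      # step 4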

So why do many people use complex git branching strategies?

Check out this google search result: https://www.google.com/search?q=git+workflow&tbm=isch

The horror!

If you are living by continuous delivery, you do not want to see that. That’s the opposite of continuous integration: continuous isolation. You part ways, you do not integrate. Well, technically you have to part for a while when using distributed source control systems (otherwise it would not be distributed), but you’d like to part for as little time as possible. Why? Read my post Continuous Delivery – Are You Fast Enough? 🙂

Open Source

So, is branching always bad? Well, no, it would probably not exist if it were 🙂 Open source software published on GitHub is a perfect example of when branching might be necessary. If you develop an application and make the source code publicly available on GitHub, anyone can clone your code, create a branch, make changes and open a pull request before the changes are merged into master.

https://guides.github.com/introduction/flow/

This makes sense. Why? Because you do not necessarily know the person changing your code. It could be a rival wanting to destroy your work. It wouldn’t work if that person could merge directly into master. A security gate is needed.

Tight Teams

Being part of a team at a company, you work towards the same agenda. You probably have an agreed coding standard and a process the team follows. No one is working against the team, so there is no need for a security gate in the source control system. Hence, keep it simple, don’t branch, use master.

– But we need feature branchi…

Feature Branching

So, you think you need to branch to develop new features? You don’t. There are some nice strategies for making small changes and committing them to the production code continuously, even though the functionality might not be fully working yet.

Feature Toggling

This is a great tool for hiding any functionality that is not ready for production yet. If you haven’t heard about all the other nice perks of feature toggling, I highly recommend you read this article by Martin Fowler: https://martinfowler.com/articles/feature-toggles.html
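
A minimal hand-rolled sketch of the idea (real toggle frameworks offer much more, and the toggle value would normally come from configuration rather than being hard-coded):

public class CheckoutService
{
    // Hard-coded here for illustration only.
    private static bool UseNewCheckout => false;

    public void Checkout()
    {
        if (UseNewCheckout)
        {
            NewCheckout(); // merged to master, but hidden until the toggle flips
        }
        else
        {
            OldCheckout(); // the current production path
        }
    }

    private void NewCheckout() { /* work in progress */ }
    private void OldCheckout() { /* existing behavior */ }
}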

Branch by Abstraction

No, it’s not source control branching. This technique lets you incrementally make large changes to the code while continuously integrating with the production code. Again I’d like to refer you to an excellent explanation of the subject by Martin: https://martinfowler.com/bliki/BranchByAbstraction.html
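
The gist of it, as a sketch (the payment gateway names are made up for illustration):

// Step 1: introduce an abstraction over the code you want to replace.
public interface IPaymentGateway
{
    void Charge(decimal amount);
}

// Step 2: the existing code becomes one implementation behind it.
public class LegacyPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* old code path */ }
}

// Step 3: the replacement grows as another implementation, committed to
// master continuously. Switch over when it is done, then delete the old one.
public class NewPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* new code path */ }
}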

Conclusion

Don’t use branching strategies if you work in a tight team that has the same goal. Keep it simple, stupid.


Microservices – It’s All About Tradeoffs

Everybody has probably heard the much-hyped word “microservices”: the architecture that supposedly solves just about everything, as computers aren’t getting any faster (kind of) and the need for scaling and distribution grows more important as more and more people use the internet.

Don’t get me wrong, I love microservices! However, it is important to know that as with most stuff in the world, everything has tradeoffs.

Lessons to be Learned

I’ve been developing systems using microservices for quite a few years now, and there are a lot of lessons learned (and still lessons to be learned). I saw this presentation by Matt Ranney from Uber last year, where he talks about the almost ridiculous number of services Uber has, and gives insight into all the problems that come with communicating between all these independent and loosely coupled services. If you have ever developed asynchronous applications, you probably know what kind of complexity they can generate and how hard it can be to understand how everything fits together. With microservices, this can be even harder.

The World of Computer Systems Is Changing

I recognize many of the insights he shares from my own experience of building microservices. I recently did some development using Akka.NET and had similar insights, but on a whole new level. Microservices within microservices. I won’t jump into that now; maybe I’ll share those thoughts on another occasion. However, microservice architectures are getting more and more important today. One reason is the stagnation of hardware speed: CPUs have gone from the traditional single core, where the clock frequency increased between models, to today’s designs where cores are multiplied instead, without the frequency increasing. But also because it gives you freedom as a developer when facing huge applications and organisations. And then there is this thing called zero downtime. You might have heard of it. Everything has to work all the time.

Synchronisation

While I do tend to agree with most of what Matt says, I don’t agree with the “blocked by other teams” statement. If you get “blocked” as a team, you are doing something wrong, especially if you are supposed to be microservice oriented.

“Blocked” tends to mean you need some sort of synchronisation of information, and that you cannot continue before you have it. While synchronisation between systems must occur at some point, that does not mean you cannot develop and release code that cannot be fully utilized until other systems have been developed and deployed. Remember agile, moving fast, autonomy, and everything has to work all the time? The synchronisation part is all about the mutual understanding of the contract between the services. When you have that, it’s just a matter of using different techniques, like feature toggling, to do parallel development without ever being blocked. There needs to be a “we are finished, let’s integrate” moment at some point, but it’s not a blockage per se. It’s all about continuation, and it is even more important with microservices, as integration gets more complex with more services and functionality being developed in parallel.

Context

Systems developed as microservices also face the problem of process context, or the start-and-end problem. As a user doing something, you usually see that action from a bubble perspective: you update a piece of information and expect to see a result of that action. But in microservice-based systems, a lot of things may be going on at the same time, in many systems, during that action. The action does not necessarily have the effect you think, and the context gets chopped into smaller pieces as it now spans multiple systems.

This leads to the distribution problem. How do you visualize and explain what’s happening when a lot of things are happening at roughly the same time in different places? People tend to rely on synchronisation to explain things, but synchronisation is really hard to achieve if you do not have all the context at the same time and place, which is next to impossible with the parallelism, distribution and scaling you often get, and want, with microservices; asynchronicity. You might want to rethink the way you perceive the systems you are working with. Do you really need synchronisation, and why? It’s probably not an easy thing to just move away from, as it is deeply rooted in many people’s minds; a way to simplify what’s happening. But things are seldom synchronous in the real world, and as computers and systems become more distributed, it will make less and less sense to keep pretending they are.

REST

I also think the overhyped use of the word REST might add to the problem. REST implies that model state is important. But many microservices are not built around the concept of states and models; they continuously change things. Transitions are really hard to represent as states. I’m not talking about what something transitioned TO, but how to visualize the transition, or the causation: the relationship between cause and effect. Sometimes functionality is best represented as functions, not as the states before and after. The days of RPC services are back. Why not represent an API as a stream of events? Stateless APIs! It’s all about watching things happen. Using commands and events can be a powerful thing.
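
As a sketch of that last point (the message types are made up for illustration):

// A command expresses the intent to change something...
public class ChangeAddress
{
    public string CustomerId { get; set; }
    public string NewAddress { get; set; }
}

// ...while an event states what actually happened. Consumers subscribe
// to a stream of these instead of polling a resource for its state.
public class AddressChanged
{
    public string CustomerId { get; set; }
    public string NewAddress { get; set; }
}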

Anyhow, microservices are great, but they might put you in situations where you feel confused as you try to apply things that used to work great but do not seem to fit anymore. By rethinking the way you perceive computer systems you will soon find new ways, and that might give you great new possibilities. Dare to try new angles, and remember that it’s all about tradeoffs. Simulating the world exactly as it is might not be feasible with a computer system, but still, treating it as a bunch of state models within a single process might not be the right way either.


Continuous Delivery – Are You Fast Enough?

So, I came across this nice presentation by Ken Mugrage @ ThoughtWorks, given at GOTO 2017 a couple of months ago. I saved it in the “to watch” list on YouTube, as I so often do, and forgot about it, as I so often do, until yesterday. It’s a short presentation on how to succeed with continuous integration and continuous delivery, and I like it. You should watch it!

I have been doing CI/CD for many years in many projects and learned a lot along the way. I think understanding what it is and how you can implement such processes is crucial for becoming successful in application development. Still, I frequently meet people who seem lost about how to get there.

One thing I often hear is people talking about how Gitflow is a CI workflow. It really is not. Feature branching is a stubbornly persistent phenomenon that is just the opposite of continuous integration. I really like the phrase continuous isolation, because that is exactly what it is.

Separate teams and team handovers in the development process are also something I often see. Dividing teams into test, operations and development does not contribute to a more effective continuous delivery process. It is the opposite. Isolation. Handovers take time, and information and knowledge get lost along the way.

I often try to push for simplicity when it comes to continuous delivery. If it is not simple, people tend not to use it. It should also be fast and reliable. It should give you that feeling of trust when the application hits production.

The process I tend to set up looks somewhat like what Ken talks about in the video. I would draw it something like the diagram below.

The continuous integration part is usually pretty straightforward. You have your source control system, which triggers a build on your build server, which runs all unit and integration tests. If all is green, it packs the tested application, uploads it to package storage, triggers the release process and deploys to test environments.

The deploy process triggers different kinds of more complex, heavier and more time-consuming tests, which Ken also talks about. These tests produce a lot of metrics in the form of logs, load and performance data, which are indexed and analyzed by monitoring and log-aggregating systems in order to visualize how the application behaves, but also for debugging purposes.

You really can’t get too many logs and metrics. However, it is important to have the right tools for structuring and mining all this data in a usable way; otherwise it will only be a big pile of data that no one is ever going to touch. It also needs to be done in real time.

Ken talks about the importance of alerting when it makes sense, based on context and cause. You might not want an alert every time a server request times out. But if it happens a lot during some time period, and nothing else can explain it, then you might want to look into what is going on. This is again where you want to go for simplicity. You do not want to spend hours or days going through log entries; you might not even have that time, depending on the severity of the incident. This is also where continuous delivery is important and a powerful tool for identifying and solving such issues fast. It might even be crucial for survival, like the Knight Capital example he brings up at the end.

See the video. It might not be a deep dive into CI/CD processes and how to implement them, but it does explain how to think and why.


Integration Testing

Sexy title, isn’t it? 🙂 Well, maybe not, but still, it’s an important aspect of system development.

What is Integration Testing?

Good question, Fredrik. Well thank you, Fredrik.

Yeah yeah, okay, enough with this, let’s get serious.

When I talk about testing I almost always get misunderstood. Why? Because we all have different views and vocabularies when we talk about testing, and especially automated tests. Have you seen the explanation of integration testing on Wikipedia? Well, it’s not exactly explicit, that much you can say 🙂 https://en.wikipedia.org/wiki/Integration_testing

When I talk about integration tests, I usually mean firing up my application in memory and probing its integration points. I like this approach because it lets me get rid of dependencies on third-party applications. It means I can run my tests anywhere, without needing to install and manage third-party applications such as a database during testing.

It fits nicely into the continuous integration process.

Test It While Hosting Your Windows Service

So, I’d like to introduce you to some of the home-brewed testing frameworks I like to use when building applications. Let’s start with Test.It.While.Hosting.Your.Windows.Service (C#, .NET Framework 4.6.2), a testing framework that simplifies how you write integration tests for Windows Service applications.

I’m not going to go into details; you will find those if you follow the link. BUT, I think integration testing is something all developers should do to some degree when developing applications. Integration tests cover much more than a unit test, but at the same time they are autonomous and dependency-free, which means you can run them anywhere the targeted framework is installed.

Just by being able to start your application in a unit-test-like manner, you have come far in testing your application’s functionality. How many times have you experienced problems with the IoC registrations of your application?
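
As a framework-agnostic sketch of the idea (ApplicationBootstrapper and IApplication are hypothetical names standing in for your real composition root), a test like this catches broken IoC registrations long before production:

[Fact]
public void Starting_the_application_should_resolve_all_registrations()
{
    // Build the same IoC container the production application uses...
    var container = ApplicationBootstrapper.BuildContainer();

    // ...then resolve the application root. A missing or circular
    // registration throws here instead of at deploy time.
    var application = container.Resolve<IApplication>();

    Assert.NotNull(application);
}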

Okay, I’ll cut the sales pitch. I think you get the point anyway. Testing is important. Multiple levels of testing are important. Finding issues fast is important, at least in an agile world, and you are agile, aren’t you? 🙂

Hi!

My name is Fredrik. I’m a system developer who recently went freelance, and (obviously) I like automated testing. I would not call myself a ‘tester’; I’m a system developer who likes simplicity and things that just work.

This is my first blog post for my new company, FKAN Consulting. I’ll try to continuously post new (hopefully interesting) posts here about my experiences as a developer and consultant.
