

Mitigating Infrastructure Drift by using Software Development Principles – Part 2

If you haven’t read the first part on how to mitigate infrastructure drift using software development principles, you should! I will refer to parts mentioned in that post during this second, hands-on part.

Work Process

Let’s start by exploring what a simple work process might look like during software development.

I have included post-deployment testing because it is important, but I’m not going to talk about how to orchestrate it within deployments; that is out of scope for this article.

Let’s add source control, automation and feedback loops. Traditionally there might be multiple different automation tools responsible for the different parts.

Say hi to trunk based development!

When the team grows, the capability to build functionality in parallel grows, and with it the process complexity. The simplest way of doing parallel work is to duplicate the process and let it run side by side. However, it makes little sense to develop functionality within a domain if we cannot use it all together; we need to integrate.

Remember that automation is key for moving fast, and the goal is to deploy to production as fast as possible with as high quality as possible. The build and test processes increase quality, while automation enables fast feedback within all of the processes pictured.

Integration introduces a risk of breaking functionality; therefore we’d like to integrate changes that are as small as possible, as fast as possible, to not lose momentum moving towards production.

Let’s expand trunk based development with a gated check-in strategy using pull requests.

With automation and repeating this process many times a day we now have continuous integration.

A great source control platform that can enable this workflow is for example GitHub.

GitOps

In the previous article I talked about the power of GitOps, where we manage processes and configuration declaratively with source control. By using GitHub Actions we define this process declaratively using YAML and source control it together with the application code. By introducing a simple branching strategy to isolate changes during the parallel and iterative development cycle we also isolate the definition for building and testing.

But why stop there? Git commits are immutable and signed with a checksum, and the history is a directed acyclic graph of all changes. That means that anything stored in Git has strong integrity. What if we treat each commit as a release? Three in one! Source control, quality checks and release management.

This is called continuous delivery.

Infrastructure as Code

Let’s see if a similar automation process can be used for managing declarative cloud infrastructure.

Almost the same. Deployment has been moved into the process, enabling continuous deployment. Deployment becomes tightly coupled to a branch; let’s explore this further.

Environments

Traditionally, deployments are decoupled from the development process because the number of environments to deploy to is limited, causing environments to become a bottleneck for delivery. Deploying to the local machine might also be a completely different process, which further complicates the development process and makes it deviate. It would make sense to have an environment for every development cycle, to further build on the simplicity of trunk based development: one environment for every branch, no matter what the purpose of the environment is.

Using cloud infrastructure we can do that by simply moving the environments to the cloud!

Since each branch represents an environment, it makes working with environments simple.

  • Need a new environment? Create a branch!
  • Switch environment? Switch branch!
  • Delete an environment? Delete the branch!
  • Upgrade an environment? Merge changes from another branch!
  • Downgrade an environment? git reset!

Cost

A common concern with infrastructure and environments is cost. Continuously observing how cloud infrastructure related costs change over time becomes even more important when all environments are in the cloud, since more of the cost becomes related to resource utilization. Most cloud providers have tools available for tracking and alerting on cost fluctuations, and since all environments are built the same, the tools can be used the same way for all environments. This also enables observing cost changes faster and acting on them even earlier in the development process.

If development environment costs do become too steep, those environments usually do not need the same amount of resources as a production environment. For performance related development it might still be relevant, but in all other cases cost is quite easy to lower by using lower resource tiers and auto-scaling as a built-in strategy. The latter also lowers cost and increases efficiency for production environments by maximizing resource utilization.

In comparison, how much does building and maintaining local infrastructure for each employee cost? How much does it cost to set up a new on-prem environment, local or shared?

Example with Terraform

There are different tools that can help build cloud infrastructure. We’re going to use Terraform and the HashiCorp Configuration Language as the declarative language.

Let’s start by defining how to build, test and deploy. Here’s a simple GitHub Action workflow that automatically builds infrastructure using the previously mentioned workflow:

name: Build Environment

on: push

jobs:
  build:
    name: Build Environment
    runs-on: ubuntu-latest

    env:
      branch: ${{ github.ref }}

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Creating Environment Variable for Terraform
        run: |
          branch=${{ env.branch }}
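          # Strip the "refs/heads/" prefix and replace "/" and "_" with "-"
          # so the branch name can be used as an environment name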
          branch_name=${branch#refs/heads/}
          env=${branch_name////-}
          env=${env//_/-}

          cat << EOF > env.auto.tfvars
          env = "$env"
          EOF

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - name: Terraform Init
        run: terraform init 
        
      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        id: plan
        run: terraform plan -out=terraform.tfplan -var-file=config.tfvars
        continue-on-error: true

      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      - name: Terraform Apply
        run: terraform apply -auto-approve terraform.tfplan

Building could be translated to initializing, validating and planning in Terraform.

When initializing, Terraform sets up the working directory by configuring backend state storage and loading referenced modules. Since many people might work against the same environment at the same time, it is a good idea to share the Terraform state for the environment module by setting up remote backend state storage that can be locked in order to guarantee consistency. Each environment should have its own tracked state, which means it needs to be stored and referenced explicitly per environment. This can be done using a partial backend configuration.
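As a minimal sketch, assuming the backend block in the Terraform configuration is left partially empty (for example an azurerm backend without a state key) and that the environment name from the earlier step is also exported to $GITHUB_ENV, the init step could pass an environment specific state key like this (the storage account and container names are placeholders):

      - name: Terraform Init
        run: |
          terraform init \
            -backend-config="storage_account_name=stterraformstate" \
            -backend-config="container_name=tfstate" \
            -backend-config="key=${{ env.env }}.tfstate"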

To validate that the configuration files are syntactically valid, terraform validate is executed.

Running terraform plan creates an execution plan containing the resources that need to be created, updated or deleted, by comparing the current configuration files with the state and the objects created previously. The env.auto.tfvars configuration file created in the second step contains the environment name based on the branch name and can be used to create environment specific resources through naming conventions.

The last step is to apply the execution plan, deploying the changes to the targeted resources.

Application Platforms

The infrastructure architecture we have explored so far is quite simple and mostly suited for one-to-one matches between infrastructure and application. This might work well for managed services or serverless but if you need more control you might choose an application platform like Kubernetes.

A service does not comprise only a binary; it needs a host, security, network, telemetry, certificates, load balancing etc., which quickly increases overhead and complexity. Even though such platforms would fit single application needs, it becomes unnecessarily complex to operate and orchestrate an application platform per application.

Let’s have a look how Azure Kubernetes Service could be configured to host our applications:

An application platform like Kubernetes works like a second infrastructure layer on top of the cloud infrastructure. It does help simplify operating complex distributed systems and systems architected as microservices, especially when running as a managed service. Kubernetes infrastructure abstractions make it easier for applications to provision and orchestrate functionality while preserving much of their independence. The applications also still control provisioning of infrastructure specific to their needs outside Kubernetes.

Provisioning Kubernetes and all its cloud specific integrations has become separated from application specific infrastructure provisioning. Application infrastructure, on the other hand, has taken a dependency on the Kubernetes API alongside cloud resource APIs. The environment has become tiered and partitioned.

I do again want to emphasize the importance of having autonomous, independent and loosely coupled cross-functional teams. Each team should own their own infrastructure for the same reason they own their applications. A Kubernetes cluster should not become a central infrastructure platform that the whole company depends on.

Separated Application Workflow

Since the Kubernetes infrastructure has been decoupled from the application workflow, it could make sense to move the applications to separate repositories. As they have become autonomous, we need to figure out a simple way to reintegrate the applications downstream. Since branches define the environment of the application platform infrastructure, we could simply go for a similar branch naming standard, e.g. MyEnvironment/SomethingBeingDeveloped.

Looking at the Kubernetes platform architecture, the GitOps continuous delivery tool ArgoCD is responsible for deploying applications in the cluster. The process for delivering applications becomes similar to the GitOps process described earlier, where deployment becomes a reversed dependency. Instead of deploying to a specific environment after releasing, ArgoCD is instructed to watch for new releases in an application’s repository, and if a release matches the strategy, it is deployed. This means that many ArgoCD instances can monitor and independently deploy many applications across many Kubernetes clusters without any intervention.
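For reference, a hypothetical application.yaml matching the fields patched by the workflow further down could look something like this (the repository URL, chart path and namespaces are placeholders, not taken from the original setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application
  namespace: argocd          # ArgoCD watches Application resources in its own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-application   # placeholder application repository
    path: charts/my-application                         # placeholder Helm chart path
    targetRevision: refs/heads/MyEnvironment/SomethingBeingDeveloped
    helm:
      values: |
        image_tag: 1.2.3
  destination:
    server: https://kubernetes.default.svc
    namespace: my-application
  syncPolicy:
    automated: {}            # deploy automatically when a matching release appears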

Here is the process again:

We still provision application specific infrastructure, which works the same way as described earlier, except that we now have an upstream dependency: the Kubernetes cluster(s) in the targeted environment. To keep the applications separated in Kubernetes, we use separate namespaces. This is also where we share platform defined details, for example where to find the environment’s container registry. We can do this by creating ConfigMaps.
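A minimal sketch of such a ConfigMap, shaped after the metadata that the application workflow below reads (the names and values are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: metadata
  namespace: my-application               # the application's namespace (placeholder)
data:
  metadata.yaml: |
    acr:
      name: acrmyenvironment              # the environment's container registry (placeholder)
      resourceGroupName: rg-myenvironment # its resource group (placeholder)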

Namespaces can also be used to limit resource access for users and service principals, minimizing the exposed attack surface. Access rights for each application can be defined in the upstream infrastructure repository. Since we use a managed Kubernetes service, which has integration with Active Directory, we can manage access to both Kubernetes and cloud infrastructure through managed identities.
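As a sketch of what such access rights could look like using Kubernetes RBAC together with the Active Directory integration, a RoleBinding in the application’s namespace might grant an AD group edit permissions (the group object ID and namespace are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-application-team-edit
  namespace: my-application
subjects:
  - kind: Group
    name: "00000000-0000-0000-0000-000000000000"  # Azure AD group object ID (placeholder)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                                      # built-in role granting read/write in the namespace
  apiGroup: rbac.authorization.k8s.io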

Tying it all together with a GitHub Action workflow, it could look something like this:

name: Build and Release Application

on: push

jobs:
  build:
    runs-on: ubuntu-latest

    env:
      branch: ${{ github.ref }}
      version: 1.2.3

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Extracting Environment From Branch Name
        run: |
          branch=${{ env.branch }}
          branch_name=${branch#refs/heads/}
          env=${branch_name%/*}
          echo "env=$env" >> $GITHUB_ENV

      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Set AKS Context
        uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}'
          cluster-name: my-cluster
          resource-group: rg-${{ env.env }}

      - name: Fetch Environment Metadata
        run: |
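          # Read the metadata ConfigMap, convert the embedded YAML to JSON with yq
          # and extract the container registry details with jq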
          ENVIRONMENT_METADATA=$(kubectl get configmap/metadata -o go-template='{{index .data "metadata.yaml"}}' | docker run -i --rm mikefarah/yq eval -j)
          ACR_NAME=$(echo "$ENVIRONMENT_METADATA" | jq .acr.name | xargs)
          echo "ACR_NAME=$ACR_NAME" >> $GITHUB_ENV
          ACR_RESOURCE_GROUP_NAME=$(echo "$ENVIRONMENT_METADATA" | jq .acr.resourceGroupName | xargs)
          echo "ACR_RESOURCE_GROUP_NAME=$ACR_RESOURCE_GROUP_NAME" >> $GITHUB_ENV

      # Here could any application specific infrastructure be applied with Terraform
      # ...

      - name: Build and Push Application Container Images
        run : |
          az acr build --registry ${{ env.ACR_NAME }} --resource-group ${{ env.ACR_RESOURCE_GROUP_NAME }} --file path/to/Dockerfile --image my-application:latest --image my-application:${{ env.version }} .

      - name: Update Deployment Strategy (optional)
        run: |
          # Read the Helm values to inject into the Application manifest
          # (|| true because read returns non-zero when it reaches end of input)
          read -r -d '' helmvalues <<- YAML || true
          image_tag: ${{ env.version }}
          YAML

          # Patch the Helm values and the branch to track, then write the result back
          cat application.yaml | \
            docker run -e HELMVALUES="$helmvalues" -i --rm mikefarah/yq eval '(.spec.source.helm.values=strenv(HELMVALUES)) | (.spec.source.targetRevision="${{ env.branch }}")' - | \
            tee application.yaml
          
          kubectl apply -f application.yaml -n argocd

What about running and debugging an application during development? Use Bridge to Kubernetes straight from your IDE!

Shared Components

An important strategy when building environments (and applications) is that they need to be autonomous. However some components might need to be shared or at least moved upstream, like the backend state storage and permissions to create and manage components on the cloud platform.

These components should live in separate repositories with similar development strategies.

Permissions

Defining different roles is good practice in order to align with the least privilege principle. Depending on the size of the company, people might have multiple roles, but remember that autonomous, cross-functional teams are important to move fast, so each team should have all the roles needed to deliver their applications.

Lessons Learned

One important thing about mitigating drift between environments is to continuously integrate between them. In an ideal world, each change is directly integrated into all environments. However, in reality, that will not always happen, which might cause incompatibility issues when introducing changes to environments. Upgrading from A to B to C is not the same thing as upgrading directly from A to C. That is usually what happens when a branch is merged into another branch, and with cloud infrastructure this might lead to unexpected problems. An example is that no minor versions can be skipped when upgrading Azure Kubernetes Service.

Can I skip multiple AKS versions during cluster upgrade?

When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped.

This means that Terraform needs to apply each commit in order. This can be a bit cumbersome to orchestrate.

Another important aspect of infrastructure as code is that it is quite easy to render yourself into an inconsistent state by manually changing configurations, which can be tempting in pressing situations and with the cloud platform tools easily available. Don’t fall into that trap.

Conclusions

Working cross-functionally is powerful. It enables teams to work autonomously, end-to-end, within a loosely coupled piece of the domain. It also enables teams to use development workflows that include both applications and infrastructure from the start, simplifying how to make them work efficiently together. Using the same infrastructure for all environments, and continuously merging changes downstream, helps mitigate drift while simplifying how infrastructure changes are managed.


Mitigating Infrastructure Drift by using Software Development Principles

According to this survey from driftctl, 96% of the teams asked report manual changes as the main cause of infrastructure drift. Other concerns are not moving from development to production fast enough and introducing many changes at once.

Drift is a big problem in system development. Test environments break and halt the whole development process, or they end up in an unknown state.

As a software developer, you have probably experienced drift many times due to parallel, isolated development processes and not merging into production fast enough. We mitigate some of this drift by introducing continuous integration and continuous deployment processes that can reliably move software from development to production faster while still guaranteeing quality through test automation and gated check-ins. We use DevOps to shift knowledge left in order to remove blocking phases and speed up the process, and we use GitOps to source control operational changes in order to get control over unknown configuration states.

Development Goals

Before we further explore what concepts we have adopted to mitigate drift in application development, and how we can use them to also mitigate infrastructure drift, let’s have a look at some objectives for general product development. In particular, let’s study three core objectives:

  • Fast Deliveries
  • Quality
  • Simplicity

Fast deliveries make the product reach the market and potential consumers quickly, preferably faster than the competition. Quality makes the customer stay with the product, and simplicity makes both of the previous objectives easier to achieve and maintain.

Automation

A key part of moving fast is automating repetitive tasks. Automation decreases the risk of drift by continuously moving us forward faster and with higher accuracy, replicating and reproducing test scenarios, releases and deployments, and continuously reporting feedback that helps us steer in the right direction. The more automation, the faster we move, the lower the risk of drift, and the higher the quality.

Least Privilege / Zero Trust / BeyondCorp

Security is something that should be embraced by software developers during the development cycle, not something that is forced upon them from the side or bolted on afterwards. This is maybe even more important when building infrastructure. When security becomes a real problem, it is not uncommon that it is too late to do something about it. Trust is fragile, and so is the weakest link to our customers’ precious data.

Applying a least privilege policy does not only minimize the risk of granting god mode to perpetrators, it also minimizes the possibility of introducing manually applied drift.

While least privilege can lower the attack surface, Zero Trust simplifies the way we do security. If there are no hurdles in the way of development progress, there is less risk of succumbing to the temptation of disabling security mechanisms in order to make life easier.

Infrastructure as Code / GitOps

Source controlling application code has been common practice for many years. By using manifests to define desired states and declarative code describing how to get there, source control follows naturally. The reasons why infrastructure as code is powerful are the same as for application source code: to track and visualize how functionality changes while being able to apply it in a reproducible way through automation.

Less risk of drift.

GitOps makes it possible to detect risky configuration changes by introducing reviews and running gated checks. It simplifies moving changes forward (and backwards) in a controllable way by enabling small increments while keeping a breadcrumb trail on how we got where we are.

DevSecInfraOps

Cross-functional teams help us bridge knowledge gaps, removing handovers and synchronization problems. They bring the needed knowledge closer to the development process, shortening the time to production and building security and operational requirements in from the start. Infrastructure is very much part of this process.

Local Environments

Using a different architecture for local development increases drift as well as the cost to build and maintain it. Concepts like security, network and scalability are built into cloud infrastructure, and often provided by products that are not available for local development. Distributed systems are also hard to run locally, since they run elastically over multiple hosts, possibly across geographic boundaries.

What if we could minimize both application and infrastructure drift by reusing the same cloud native infrastructure architecture and automation to produce the same type of environment, anywhere, any time, for any purpose, while adhering to the above criteria? Test and staging all shift left into the development stage, shortening the path to production and enabling all environments to be used as production-like environments.

In the next part we will deep dive into how we can work with cloud infrastructure similar to how we work with application development while adopting all of the above concepts.


Pulsar – The New Kafka?

I’ve been following Apache Pulsar for a while now, reading whatever articles I come by. I thought I should write a short summary of my thoughts about this “new”, exciting, distributed message streaming platform.

Kafka

I have been working with Apache Kafka for some time, and it does have a lot in common with Pulsar. It has been around for a couple more years, and I think it is a good message streaming platform. I wrote a protocol definition of the Kafka protocol earlier this year, together with an in-memory test framework, including an example of how you can test applications that use, for example, Confluent’s .NET Kafka client. You can read all about it here.

Pulsar vs Kafka

Last week I came across a couple of articles by Confluent (managed cloud-native platform for Kafka) and StreamNative (managed cloud-native platform for Pulsar) comparing the two platforms’ performance, which I think are really interesting (though maybe a bit biased).

Spoiler alert! Both are best! 😉

You can study them here and here.

Performance

What I found really interesting is the high throughput and the almost constant low latency Pulsar seems to achieve no matter the amount of partitions used while still being able to guarantee really good durability (consistency and availability). If you know about the CAP theorem, you would almost think this is too good to be true!

Different Storage Strategies

Kafka stores messages in partitions, while Pulsar has another storage level called segments that make up a partition. This might partly explain why Pulsar can perform better in some scenarios.

Credit: Splunk

In Kafka, as a partition grows, more data needs to be copied when brokers leave or join the cluster. In Pulsar, segments can be formed in smaller, static sizes. An important observation here is that the partitions can have an infinite number of segments spread out over the Apache BookKeeper storage nodes, also known as Bookies. Therefore partitions in Pulsar can hold an infinite number of messages, while in Kafka, partitions are bound by the hardware they are stored on. Replicating a partition will become slower and slower in Kafka, but in Pulsar it could theoretically stay almost constant because of the possibility to scale storage infinitely.

However, brokers in Pulsar potentially need to talk to many storage nodes to serve a partition, while a Kafka broker has the whole partition stored directly on its own disk. This could cause Pulsar brokers to end up handling a lot of connections to the storage nodes.

Consumer Group / Subscription Rebalancing

Kafka has made some improvements in its partition rebalancing strategy lately, like Static Membership and Incremental Cooperative Rebalancing, which have sped up the consumer group rebalancing act.

In Pulsar, consumers use hash ranges in order to share the load when consuming from a shared subscription. The broker handles the rebalancing, and as soon as a consumer leaves or joins, no messages will be delivered to any consumers until the hash ranges have been redistributed, either by the broker when using auto hash distribution, or by the consumers when using sticky hash ranges. This might cause downtime and latency due to the stop-the-world strategy used when redistributing the hashes. This was the case in Kafka as well before it was mitigated by Incremental Cooperative Rebalancing.

Broker Rebalancing

Kafka does not have any automated broker rebalancing built in. Instead, users are left depending on tools like LinkedIn’s Cruise Control. Since Kafka stores the topic partition data and replicas directly on the brokers, this data needs to be copied and rebalanced when adding a new broker.

Pulsar’s architecture, on the other hand, which separates computation (brokers) from storage (partitions), enables almost instant rebalancing, since brokers can just switch which storage nodes they read from or write to. Apache ZooKeeper is in charge of monitoring broker health and initiates recovery when a broker is deemed lost. This will cause other brokers to take over ownership of the lost broker’s topics. Any split-brain scenarios between the brokers are handled by BookKeeper through a fencing mechanism, allowing only one broker at a time to write to a topic’s ledgers.


Credits: Jack Vanlightly

Metadata Storage

Both Pulsar and Kafka use Apache ZooKeeper as their metadata store. The Kafka team announced a while back that they are dropping ZooKeeper in favor of bringing metadata into Kafka itself, for less complexity, better replication and scalability, and bootstrap efficiency. It would be interesting to know if there has been a similar discussion around ZooKeeper within the Pulsar project, and what potential performance gains scrapping ZooKeeper might give Kafka.

As a side note, ZooKeeper is a central part of BookKeeper as well, so dropping it all together would probably prove very difficult. If ZooKeeper goes down, Pulsar goes down.

Conclusion

Comparing two seemingly similar distributed platforms can be complex. With similar test setups but a few knobs tweaked, the results can differ a lot. I think this quote from one of StreamNative’s articles sums it up pretty well:

– “Ultimately, no benchmark can replace testing done on your own hardware with your own workloads. We encourage you to evaluate additional variables and scenarios and to test using your own setups and environments.”

Understanding the impact infrastructure and architecture have on systems and applications, together with observability and operations, has become more important than ever.


Microservices – It’s All About Tradeoffs

Everybody has probably heard about the much hyped word “microservices”; the architecture that solves just about everything, now that computers aren’t getting much faster (kind of) and the need for scaling and distribution grows as more and more people use the internet.

Don’t get me wrong, I love microservices! However, it is important to know that as with most stuff in the world, everything has tradeoffs.

Lessons to be Learned

I’ve been developing systems using microservices for quite a few years now, and there are a lot of lessons learnt (and still lessons to be learned). I saw this presentation by Matt Ranney from Uber last year, where he talks about the almost ridiculous amount of services Uber has, and the insight into all the problems that come with communicating between all these independent and loosely coupled services. If you have ever developed asynchronous applications, you probably know what kind of complexity it might generate and how hard it can be to understand how everything sticks together. With microservices, this can be even harder.

The World of Computer Systems are Changing

I recognize many of the insights he shares from my own experience of building microservices. I recently did some development using akka.net and had similar insights, but on a whole new level: microservices within microservices. I won’t jump off to that now; maybe I’ll share those thoughts on another occasion. However, microservice architectures are getting more and more important today. One reason is the stagnation of hardware speed, with CPUs changing from the traditional single core, where the clock frequency increased between models, to today, where the cores are multiplied without the frequency increasing. Another is that it gives you freedom as a developer when facing huge applications and organisations. And then there is this thing called zero downtime. You might have heard of it. Everything has to work all the time.

Synchronisation

While I do tend to agree with most of what Matt says, I don’t agree with the “blocked by other teams” statement. If you get “blocked” as a team, you are doing something wrong, especially if you are supposed to be microservice oriented.

Being blocked tends to point towards needing some sort of synchronisation of information before you can continue. While synchronisation between systems must occur at some point, it does not mean that you cannot develop and release code that cannot be fully utilized until other systems have been developed and deployed. Remember agile, moving fast, autonomy, and everything having to work all the time? The synchronisation part is all about the mutual understanding of the contract between the services. When you have that, it’s just a matter of using different techniques to do parallel development without ever being blocked, like feature toggling. There needs to be a “we are finished, let’s integrate” moment at some point, but it’s not a blockage per se. It’s all about continuation, and it is even more important with microservices, as integration gets more complex with more services and functionality being developed in parallel.

Context

Systems developed as microservices also face the problem of process context, or the start and end problem. As a user doing something, you usually see that action from a bubble perspective: you update a piece of information and expect to see a result from that action. But with microservice based systems, there might be a lot of things going on in many systems during that action. The action does not necessarily have the cause you think, and the context gets chopped into smaller pieces as it now spans multiple systems. This leads to the distribution problem: how do you visualize and explain what’s happening when a lot of things are happening at roughly the same time in different places? People tend to rely on synchronisation to explain things, but synchronisation is really hard to achieve if you do not have all the context at the same time and place, which is next to impossible when it comes to parallelism, distribution and scaling, something you often get and want to have with microservices: asynchronicity. You might want to rethink the way you perceive the systems you are working with. Do you really need synchronisation, and why? It’s probably not an easy thing to just move away from, as it is deeply rooted in many people’s minds as a way to simplify actions happening. But things are seldom synchronous in the real world, and as computers and systems become more distributed, it will make less and less sense to keep treating them that way.

REST

I also think the overhyped use of the word REST might extend the problem. REST implies that model states are important, but many microservices are not built around the concept of states and models; they continuously change things. Transitions are really hard to represent as states. I’m not talking about what something transitioned TO, but how to visualize the transition or the causation, the relationship between cause and effect. Sometimes functionality is best represented as functions and not the states before and after. Back are the days of RPC services. Why not represent an API as a stream of events? Stateless APIs! It’s all about watching things happen. Using commands and events can be a powerful thing.

Anyhow, microservices are great, but they might get you into situations where you feel confused as you try to apply things that used to work great but do not seem to fit anymore. By rethinking the way you perceive computer systems you will soon find new ways, and it might give you great new possibilities. Dare to try new angles, and remember that it’s all about tradeoffs. Simulating the world exactly as it is might not be feasible with a computer system, but treating it as a bunch of state models within a single process might not be the right way either.
