Mesos vs. Kubernetes

1. A General Overview
In this article, we’ll learn why a container orchestration system is necessary.
We’ll assess the desirable features of such a system. From there, we’ll compare Apache Mesos and Kubernetes, two of the most prominent container orchestration systems in use today.
2. Container Management

Before we compare Mesos with Kubernetes, let’s take a look at what containers are and why we need container orchestration in the first place.
2.1. Containers
A container is a standardized software unit that bundles together code and all of its dependencies.
As a result, it offers platform independence as well as operational simplicity. One of the most widely used container platforms is Docker.
Docker makes use of Linux kernel features like cgroups and namespaces to provide isolation between processes.

It’s quite easy to create Docker images; all we need is a Dockerfile:

FROM openjdk:8-jdk-alpine
COPY target/hello-world-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

These few lines are enough to build a Docker image of a Spring Boot application using the Docker CLI:

docker build -t hello_world .

2.2. Container Orchestration
So far, we’ve seen how containers can help with application deployment consistency and reliability. But why do we need container orchestration in the first place?

The Docker CLI works fine while we only have a few containers to manage, and we can automate some of the more routine tasks. But what happens when there are hundreds of containers to manage?

Consider a system with several microservices, each with its own scalability and availability needs.

As a result, things can easily spiral out of control, and this is where a container orchestration system comes into play. A container orchestration system treats a cluster of machines running a multi-container application as a single deployment unit. It automates everything from the initial deployment and scheduling through updates to monitoring, scaling, and failover.

3. Apache Mesos: A Quick Overview
Apache Mesos is an open-source cluster manager that was originally developed at the University of California, Berkeley. It provides applications with APIs for resource management and scheduling across the cluster. Mesos lets us run both containerized and non-containerized workloads in a distributed manner.

Mesos is highly flexible and allows frameworks to schedule and execute tasks through well-defined APIs. However, it’s not convenient to use these primitives directly, especially when we want to schedule custom applications, for example, orchestrating applications packaged as containers.

This is where a framework like Marathon can help us. Marathon is a container orchestration framework that runs on Mesos. In this regard, Marathon acts as a framework for the Mesos cluster. Marathon offers features such as service discovery, load balancing, monitoring, and container management APIs, which we would expect from an orchestration platform.

Marathon treats a long-running service as an application, and an instance of an application as a task. A typical scenario can have multiple applications with dependencies, referred to as Application Groups.

4. Kubernetes: A Quick Overview
Kubernetes is an open-source container orchestration system originally created by Google. It is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a platform for automating the deployment, scaling, and operations of application containers across a cluster of hosts.

5. Comparison
5.1. Supported Workloads
Mesos is designed to handle diverse types of workloads, which can be containerized or even non-containerized. It depends on the framework we use. As we’ve seen, with a framework like Marathon, it’s quite simple to support containerized workloads in Mesos.

Kubernetes, on the other hand, works exclusively with containerized workloads. We most widely use it with Docker containers, but it has support for other container runtimes like rkt as well. In the future, Kubernetes may support more types of workloads.

5.2. Scalability Support
Marathon supports scaling an application by modifying the application definition or through the UI. Autoscaling is also supported in Marathon. We can also scale Application Groups, which automatically scales all the dependencies.
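As a sketch, a Marathon application definition sets the desired task count through the instances field. The app id and image below are made up for illustration; the field names follow Marathon’s app definition format:

```json
{
  "id": "/hello-world",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "hello_world" }
  },
  "cpus": 0.5,
  "mem": 256,
  "instances": 3
}
```

Scaling then amounts to changing instances and re-submitting the definition to Marathon.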

The pod is the core unit of execution in Kubernetes. Pods can be scaled when managed by a Deployment, which is why pods are invariably defined as part of a Deployment. The scaling can be manual or automated.
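To illustrate, a minimal Deployment manifest (the names and image here are hypothetical) pins the desired pod count through the replicas field:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3            # desired number of pod replicas
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello_world
```

We can then scale manually with `kubectl scale deployment hello-world --replicas=5`, or automatically with a Horizontal Pod Autoscaler.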

5.3. Managing a High Availability Environment
Marathon distributes application instances among Mesos agents for high availability. A Mesos cluster is usually made up of many agents. ZooKeeper also ensures the Mesos cluster’s high availability by using quorum and leader election.

Pods in Kubernetes are likewise replicated across multiple nodes to provide high availability. Typically, a Kubernetes cluster consists of multiple worker nodes. Moreover, the cluster can also have multiple masters. Hence, a Kubernetes cluster is capable of providing high availability.

5.4. Load Balancing and Service Discovery
Applications can use Mesos-DNS for service discovery and basic load balancing. Mesos-DNS generates an SRV record for each Mesos task and maps it to the IP address and port of the machine that runs the task. For Marathon applications, we can also use Marathon-lb to provide port-based discovery using HAProxy.

A Deployment in Kubernetes creates and destroys pods dynamically. Hence, we generally expose pods in Kubernetes through a Service, which provides service discovery. The Service acts as a dispatcher to the pods and hence provides load balancing as well.
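For instance, a Service that selects pods by label (the names and ports below are illustrative) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world     # matches pods carrying this label
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # port the container listens on
```

Traffic sent to the Service is distributed across all healthy pods matching the selector, regardless of which pods come and go.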

5.5. Upgrades and Rollbacks
Marathon treats changes to application definitions as a deployment. A deployment can start, stop, upgrade, or scale applications.

Marathon also supports rolling starts to deploy newer versions of applications. Rolling back is just as straightforward and usually requires deploying an updated definition.

A Deployment in Kubernetes supports both upgrade and rollback. We can provide the strategy for the Deployment to use when replacing old pods with new ones. Recreate and RollingUpdate are two common strategies. Kubernetes keeps the rollout history of a Deployment by default, which makes it trivial to roll back to a previous revision.
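As a sketch, the rolling-update behavior can be tuned in the Deployment spec; the values below are illustrative:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired replica count
```

Reverting to the previous revision is then a single command, for example `kubectl rollout undo deployment/hello-world` (the deployment name is hypothetical).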

5.6. Monitoring and Logging
Mesos has a diagnostic utility that scans all cluster components and makes data available related to health and other metrics. The data can be queried and aggregated through the available APIs.

We can capture a lot of this data with an external tool like Prometheus.

Kubernetes publishes detailed information about the different objects in the cluster as resource metrics or through full metrics pipelines. Typical practice is to deploy an external tool like ELK or Prometheus with Grafana on a Kubernetes cluster. Such tools can ingest cluster metrics and present them in a much more user-friendly way.

5.7. Storage
Mesos supports persistent local volumes for stateful applications. We can only create persistent volumes from reserved resources. It can also support external storage, with some limitations. Mesos has experimental support for the Container Storage Interface (CSI), a common set of APIs that enables communication between storage providers and container orchestration platforms.

Kubernetes offers multiple types of persistent volumes for stateful containers. This includes storage like iSCSI and NFS. Moreover, it supports external storage like AWS and GCP as well. The Volume object in Kubernetes supports this concept and comes in a variety of types, including CSI.
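As an illustration, a pod typically requests storage through a PersistentVolumeClaim; the name and size below are made up:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-world-data
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 1Gi         # requested capacity
```

Kubernetes then binds the claim to a suitable PersistentVolume, and the pod mounts the claim rather than a specific backing store.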

5.8. Networking
The container runtime in Mesos offers two types of networking support: IP-per-container and network-port-mapping. Mesos defines a common interface to specify and retrieve networking information for a container.

Applications on Marathon can declare a network in host mode or bridge mode.
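As a hedged sketch, in Marathon’s app definition format this is expressed through the networks field (the app id is hypothetical; bridge mode is spelled "container/bridge"):

```json
{
  "id": "/hello-world",
  "networks": [ { "mode": "host" } ]
}
```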

Networking in Kubernetes assigns a unique IP address to each pod. This eliminates the need to map container ports to host ports. It further defines how these pods can talk to each other across nodes. Network plugins like Cilium and Contiv implement this in Kubernetes.

6. When Should You Use What?
At the end of a comparison, we typically expect a clear verdict! However, it wouldn’t be entirely fair to declare one technology better than the other. As we’ve seen, both Kubernetes and Mesos are powerful systems with quite similar features.

Performance, however, is one critical aspect.

A Kubernetes cluster can scale to 5,000 nodes, while a Marathon on Mesos cluster is known to support up to 10,000 agents. In most practical cases, we won’t be dealing with such large clusters.

Finally, the choice largely comes down to the flexibility we need and the types of workloads we have. If we’re starting fresh and only plan to use containerized workloads, Kubernetes can offer a quicker solution. However, if we have existing workloads that are a mix of containers and non-containers, Mesos with Marathon may be a better choice.

7. Additional Options
Although Kubernetes and Apache Mesos are powerful, they are not the only options available. We have several promising alternatives at our disposal. While we won’t go into their details, here are a few of them:

Docker Swarm: Docker Swarm is an open-source clustering and scheduling tool for Docker containers. It comes with a command-line utility to manage a cluster of Docker hosts. Unlike Kubernetes and Mesos, it is restricted to Docker containers.

Nomad: HashiCorp’s Nomad is a versatile workload orchestrator for managing containerized and non-containerized applications. For deploying apps like Docker containers, Nomad offers declarative infrastructure-as-code.

OpenShift: Red Hat’s OpenShift is a container platform orchestrated and managed by Kubernetes underneath. On top of what Kubernetes provides, OpenShift offers capabilities like an integrated image registry, source-to-image builds, and a native networking solution, to name a few.

8. Final Thoughts
To summarize, in this article, we explored containers and container orchestration systems. We briefly went through Kubernetes and Apache Mesos, two of the most widely used container orchestration solutions. We also compared these systems based on several characteristics. Finally, we looked at some of the other options available in this space.

Before we close, it’s important to understand that the point of a comparison like this is to present facts and data. It isn’t to declare one superior to the other; that entirely depends on the use case. Hence, we must apply the context of our problem when deciding on the best solution for us.