Kubernetes vs. Mesos

September 29, 2022

Container orchestration engines (COEs) make managing containerized workloads easier by automating operational tasks like scheduling, load balancing, scaling, networking, ensuring high availability (HA), and managing logs. Kubernetes and Apache Mesos are two of the most popular COEs.

These two technologies take different approaches to container management. Kubernetes works purely as a container orchestrator. Mesos is more like an “operating system for your data center.”

In this article, we’ll discuss Kubernetes and Mesos and compare their key features. However, we’ll start by introducing COEs and why they’re essential for managing containers.

Why Do You Need Container Orchestration Engines?

Most distributed applications today are built on containers. Containers require fewer resources than virtual machines and help make application development faster and more secure.

Managing ten or twenty containers is quite simple, but a team can quickly become overwhelmed when the number of containers grows to hundreds or thousands across a distributed network. Although containers are lightweight and short-lived, running them in large clusters makes for many moving pieces that need simultaneous coordination.

In addition, most production container environments can be complex. They can run multiple operating systems (or the same OS with different kernel versions) and have complex network and security configurations. Multi-cloud or hybrid environments add even more complexity to the mix.

This is where COEs come in.

COEs simplify and automate tasks related to container management, and those tasks include:

  • Deployment
  • Load balancing
  • Container scheduling
  • Resource allocation
  • Performance monitoring
  • Configuring networks

The automation makes it much easier to run large-scale containerized environments, freeing up the DevOps team to pursue more value-added tasks.

COEs also ensure application availability by automating health checks. Load balancing ensures that requests are automatically routed to healthy container instances, and autoscaling ensures a sufficient number of containers are available to handle the present load.
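
To see how health checks and load balancing work together, here's a minimal Python sketch of a round-robin balancer that skips instances whose most recent health check failed. The pod names are made up for illustration; this is a conceptual model, not how any particular COE is implemented:

```python
class RoundRobinBalancer:
    """Round-robin load balancer that skips unhealthy instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        # Health status as reported by the most recent health check.
        self.health = {name: True for name in self.instances}
        self._next = 0

    def mark(self, instance, healthy):
        """Record the result of a health check."""
        self.health[instance] = healthy

    def route(self):
        """Return the next healthy instance, round-robin."""
        for _ in range(len(self.instances)):
            candidate = self.instances[self._next]
            self._next = (self._next + 1) % len(self.instances)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instances available")


lb = RoundRobinBalancer(["pod-a", "pod-b", "pod-c"])
lb.mark("pod-b", False)  # a health check on pod-b just failed
requests = [lb.route() for _ in range(4)]  # pod-b receives no traffic
```

Once a later health check succeeds, marking the instance healthy again returns it to the rotation automatically, which is the behavior that keeps requests flowing during rolling failures.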

A Brief Introduction to Kubernetes and Mesos

Kubernetes is an open-source container management and orchestration system released by Google in 2014. Drawing on its years of experience running containerized workloads internally, Google developed Kubernetes to deploy and schedule containers at scale, manage cluster resources, implement HA, and route application traffic.

Today, most cloud service providers (including the major ones) support Kubernetes and provide infrastructure and integrations for running Kubernetes-hosted workloads.

Some of the main Kubernetes features include:

  • Auto-scaling
  • Storage orchestration
  • Volume management
  • Secret and configuration management
  • Automatic rollbacks
  • Batch execution
  • Service discovery
  • Automatic bin-packing

Kubernetes also has strong support from the DevOps community, and many vendors offer free or commercial applications that add extra features on top of Kubernetes.

Mesos is a distributed systems kernel created by Ph.D. students at UC Berkeley in 2009. It abstracts compute resources like CPU, memory, and storage away from machines (both physical and virtual) running on-premises or in the cloud. The Mesos kernel runs on each machine in the cluster, and both containerized and non-containerized workloads can use the Mesos APIs for resource management and scheduling.

When comparing Mesos to Kubernetes in this article, we’ll refer to Mesos together with Marathon. Marathon is a container orchestration framework for Apache Mesos and Mesosphere’s Datacenter Operating System (DC/OS) that makes it easy to deploy and manage containers.

Features from the Mesos and Marathon combination include:

  • APIs
  • Linear scalability
  • Pluggable isolation
  • Cross-platform support
  • Two-level scheduling
  • Fault-tolerance
  • Replicated master using ZooKeeper
  • Multi-language support

Mesos is a popular choice among tech giants like Twitter, Netflix, and Airbnb.

Kubernetes and Mesos: Feature Comparison

When evaluating a COE platform, some of the most important factors to consider are high availability, load balancing, auto-scaling, storage, and networking.

High Availability

Kubernetes pods can replicate across multiple nodes (VMs and physical servers) to ensure the application remains online even if one of the cluster nodes fails.

The Kubernetes control plane manages the pods and worker nodes across the cluster based on node health, taking care of scheduling as well as detecting and responding to failures. You can make the control plane itself highly available by implementing a stacked or external etcd topology and replicating its key components (for example, running multiple control plane nodes and etcd replicas).

For Mesos, applications run on clusters with multiple Mesos agents to increase availability. You can make Mesos itself highly available by running an odd number of masters (typically three or five): one is elected leader, and the rest act as standbys. Apache ZooKeeper handles leader election and lets masters, agents, and scheduler drivers detect the current leader.
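
The election logic can be sketched in Python. This is a toy model of the ephemeral-sequential-znode pattern ZooKeeper uses (lowest sequence number wins), not a real ZooKeeper client, and the master names are illustrative:

```python
def elect_leader(masters):
    """Toy model of ZooKeeper-style leader election: each master
    registers an ephemeral sequential znode, and the master holding
    the lowest sequence number becomes the leader. The rest watch
    the leader and stand by as backups."""
    znodes = {name: seq for seq, name in enumerate(masters)}
    leader = min(znodes, key=znodes.get)
    standbys = [name for name in masters if name != leader]
    return leader, standbys


leader, standbys = elect_leader(["master-1", "master-2", "master-3"])
# If the leader fails, its ephemeral znode disappears and re-running
# the election over the survivors promotes the next-lowest znode:
new_leader, _ = elect_leader(standbys)
```

The odd master count matters because ZooKeeper itself needs a majority quorum to keep serving; three masters tolerate one failure, five tolerate two.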

Load Balancing

Kubernetes exposes pods using Services (a group of pods under a common name). Services let workloads discover each other dynamically without hard-coded IP addresses, and each Service distributes incoming connections across its healthy pods to provide load balancing.

You can implement different load balancing strategies with Kubernetes, including:

  • Kube-proxy L4 round-robin load balancing
  • L7 round-robin load balancing (via an ingress controller)
  • Consistent hashing/ring hash
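
As an illustration of the last strategy, here's a minimal consistent-hash ring in Python. The pod names and virtual-node count are arbitrary, and real implementations add weighting and replication on top of this idea:

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring: each backend is hashed onto the
    ring at several points ("virtual nodes"), and a request key is
    routed to the first point clockwise from the key's own hash."""

    def __init__(self, backends, replicas=100):
        self.ring = sorted(
            (self._hash(f"{backend}-{i}"), backend)
            for backend in backends
            for i in range(replicas)
        )
        self._points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def route(self, key):
        # Find the next ring point clockwise, wrapping around at the end.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]


ring = HashRing(["pod-a", "pod-b", "pod-c"])
backend = ring.route("client-42")  # the same key always maps to the same pod
```

The appeal of consistent hashing over plain round-robin is stability: when a backend is added or removed, only the keys that hashed to its ring points move, so most clients keep hitting the same pod.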

Mesos-DNS provides basic load balancing for your applications. It generates an SRV record for each Mesos task, mapping the task’s service name to the IP address and port on the machine where it runs. You can also use Marathon-lb, an HAProxy-based tool, for more capable load balancing.
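
Here is a sketch of what SRV-based discovery looks like from an application's point of view. The records below are hard-coded stand-ins for what Mesos-DNS would serve (real names follow the pattern `_<task>._<protocol>.<framework>.mesos`), and the hosts and ports are made up for illustration:

```python
# Hypothetical SRV-style records of the kind Mesos-DNS generates for
# Marathon tasks; each entry maps a service name to every (host, port)
# endpoint where a task instance is running.
SRV_RECORDS = {
    "_web._tcp.marathon.mesos": [
        ("agent-1.example.com", 31001),
        ("agent-2.example.com", 31002),
    ],
}


def resolve_service(name):
    """Return every (host, port) endpoint registered for a service."""
    return SRV_RECORDS.get(name, [])


endpoints = resolve_service("_web._tcp.marathon.mesos")
```

A client can then pick any returned endpoint (or rotate through them), which is the "basic load balancing" Mesos-DNS offers without a dedicated proxy in the data path.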

Mesos supports advanced functionality, such as:

  • Sticky connections
  • SSL offloading
  • VHost-based load balancing, which routes requests by virtual host so you can map individual domains to specific applications

Auto-Scaling

Kubernetes lets you define a target number of pod replicas using Deployments. With the Horizontal Pod Autoscaler (HPA), you can also define resource metric targets, such as CPU or memory utilization, to trigger auto-scaling.
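
The HPA's scaling decision follows a simple documented formula, sketched here in Python (the metric values are illustrative):

```python
import math


def desired_replicas(current_replicas, current_metric, target_metric):
    """The HPA's documented scaling rule:
    desired = ceil(current * currentMetricValue / targetMetricValue)."""
    return math.ceil(current_replicas * current_metric / target_metric)


# Four pods averaging 90% CPU against a 60% target scale out to six:
scale_out = desired_replicas(4, 90, 60)
# Three pods averaging 30% against the same target scale in to two:
scale_in = desired_replicas(3, 30, 60)
```

Because the ratio is taken against the average across pods, scaling out dilutes the load until the average approaches the target, at which point the replica count stabilizes.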

Mesos continuously monitors running containers and reschedules a container on another agent node if it fails. It doesn’t natively support auto-scaling based on resource metrics, but some community-supported components add this capability.

Storage

Kubernetes supports non-persistent, ephemeral volumes like emptyDir, configMap, downwardAPI, and CSI ephemeral volumes for short-term storage.

It also supports persistent storage (file or block), including iSCSI, NFS, FC, and cloud storage such as the volume types offered by AWS or Azure. Applications running in Kubernetes-hosted containers don’t communicate with storage directly; Kubernetes abstracts the storage layer away.

Mesos supports persistent local volumes on reserved resources for stateful applications. Because these volumes are created locally on a node, the containers that use them must run on that same node.

Mesos supports persistent external storage, but bypassing resource management makes quota control, reservation, and fair sharing challenging to enforce.

Networking

Kubernetes allocates a unique IP to each pod, removing the need to map container ports to host ports. It operates a flat network, with one address space for pods and another for Services, and pods can communicate freely with other pods and Services. iptables rules (managed by kube-proxy) control connectivity between pods and handle most networking and port-forwarding rules.

Mesos supports two types of networking: IP-per-container and network port mapping. Containers don’t get their own IPs by default, but the Calico integration gives every Mesos container its own IP. This prevents port conflicts, removes the need for dynamic port assignment, and allows DNS A-record-based service discovery. In the default port-mapping mode, containers cannot communicate with each other over localhost.

Log Everything, Answer Anything – For Free

Falcon LogScale Community Edition (previously Humio) offers a free modern log management platform for the cloud. Leverage streaming data ingestion to achieve instant visibility across distributed systems and prevent and resolve incidents.

Falcon LogScale Community Edition, available instantly at no cost, includes the following:

  • Ingest up to 16GB per day
  • 7-day retention
  • No credit card required
  • Ongoing access with no trial period
  • Index-free logging, real-time alerts and live dashboards
  • Access our marketplace and packages, including guides to build new packages
  • Learn and collaborate with an active community
