Kubernetes (also known as k8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
What are Kubernetes clusters?
You can cluster together groups of hosts running Linux® containers, and Kubernetes helps you easily and efficiently manage those clusters.
Kubernetes clusters can span hosts across on-premises infrastructure and public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.
What can you do with Kubernetes?
The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud, is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines (VMs).
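The unit Kubernetes schedules is a pod, a group of one or more containers. As a minimal sketch (the names and image here are illustrative, not prescriptive), a pod manifest might look like:

```yaml
# A minimal pod: one container that the Kubernetes scheduler
# places onto any suitable node in the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: hello-app          # illustrative name
spec:
  containers:
  - name: hello
    image: nginx:1.25      # any container image works here
    ports:
    - containerPort: 80
```

Applying a manifest like this (for example with `kubectl apply -f pod.yaml`) asks the control plane to find a node with free resources and run the container there for you.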
More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things other application platforms or management systems let you do—but for your containers.
Developers can also create cloud-native apps with Kubernetes as a runtime platform by using Kubernetes patterns. Patterns are the tools a Kubernetes developer needs to build container-based applications and services.
With Kubernetes you can:
Orchestrate containers across multiple hosts.
Make better use of hardware by maximizing the resources available to run your enterprise apps.
Control and automate application deployments and updates.
Mount and add storage to run stateful apps.
Scale containerized applications and their resources on the fly.
Declaratively manage services, which helps ensure that deployed applications always run the way you intended them to run.
Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.
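Several of the capabilities above come together in a single Deployment manifest. The following is a hedged sketch (names and values are illustrative): it declares a desired replica count, a rolling update strategy, resource requests that help the scheduler use hardware efficiently, and a liveness probe that drives self-healing restarts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment           # illustrative name
spec:
  replicas: 3                    # scale on the fly by changing this value
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate          # controlled, automated application updates
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:              # lets the scheduler place pods efficiently
            cpu: 100m
            memory: 128Mi
        livenessProbe:           # failed checks trigger an automatic restart
          httpGet:
            path: /
            port: 80
```

Because the manifest is declarative, Kubernetes continuously reconciles the cluster toward this desired state: if a pod crashes or a node fails, replacement pods are created automatically.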
However, Kubernetes relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):
Registry, through projects like Docker Registry.
Networking, through projects like Open vSwitch and intelligent edge routing.
Telemetry, through projects such as Kibana, Hawkular, and Elasticsearch.
Security, through projects like LDAP, SELinux, RBAC, and OAuth with multitenancy layers.
Automation, with the addition of Ansible playbooks for installation and cluster life cycle management.
Services, through a rich catalog of popular app patterns.
What is Kubernetes-native infrastructure?
Today, the majority of on-premises Kubernetes deployments run on top of existing virtual infrastructure, with a growing number of deployments on bare metal servers. This is a natural evolution in data centers. Kubernetes serves as the deployment and lifecycle management tool for containerized applications, and separate tools are used to manage infrastructure resources.
But what if you designed the data center from scratch to support containers, including the infrastructure layer?
You would start directly with bare metal servers and software-defined storage, deployed and managed by Kubernetes to give the infrastructure the same self-installing, self-scaling, and self-healing benefits that containers enjoy. This is the vision of Kubernetes-native infrastructure.