How Spotify Uses Kubernetes

What is Kubernetes?
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
In simple terms,
Kubernetes is an open-source container management tool.
Why do we need Kubernetes?
Consider a production environment: if a container goes down, another container needs to start in its place. Wouldn’t it be easier if this behaviour were handled automatically by a system?
That’s where Kubernetes comes to the rescue!
Kubernetes takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
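To make "desired state" concrete, here is a minimal, illustrative sketch using the official Kubernetes Python client (the kubernetes package). The hello-server name, the nginx:1.25 image, and the local kubeconfig are assumptions made for this example, not anything specific to Spotify:

```python
# Minimal sketch: declare a Deployment with 3 replicas; Kubernetes then keeps
# 3 containers running, replacing any that fail. Names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # assumes a cluster reachable via your kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-server"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the desired state
        selector=client.V1LabelSelector(match_labels={"app": "hello-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-server"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="hello-server", image="nginx:1.25"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

A canary rollout can then be approximated by running a second, much smaller Deployment of the new version behind the same Service selector and watching how it behaves before scaling it up.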
Some features of Kubernetes:
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve (see the sketch after this list).
- Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
- Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
…and many more.
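Building on the previous sketch, here is roughly how the first two features might look with the same Python client. The /healthz path, port 80, the new image tag, and the surge settings are assumptions for illustration:

```python
# Sketch only: health checks and a controlled rollout for the hypothetical
# hello-server container from the previous example.
from kubernetes import client

container = client.V1Container(
    name="hello-server",
    image="nginx:1.26",  # a new version to roll out
    # Self-healing: restart the container if this check keeps failing.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
    # Readiness: don't send traffic until the container reports it can serve.
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        period_seconds=5,
    ),
)

# Automated rollout: replace old pods with new ones at a controlled rate.
strategy = client.V1DeploymentStrategy(
    type="RollingUpdate",
    rolling_update=client.V1RollingUpdateDeployment(max_surge=1, max_unavailable=0),
)
# `container` goes into the Deployment's pod template and `strategy` into its
# spec before creating or patching the Deployment, as in the earlier example.
```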
How is Kubernetes used by Spotify?
Spotify, launched in 2008, is an audio-streaming platform that has grown to over 200 million monthly active users across the world.
Spotify had been using container technology since 2013 and had built its own container management tool called Helios. But by late 2017, it became clear that having a small team working on Helios features was just not as efficient as adopting something supported by a much bigger community. Hence, Spotify moved to Kubernetes.
The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.”
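As general context (and not Spotify’s actual configuration), autoscaling at this level is typically driven by a HorizontalPodAutoscaler. A minimal sketch with the same Python client, assuming the hypothetical hello-server Deployment and a 70% CPU target, might look like this:

```python
# Generic sketch: keep the hypothetical hello-server Deployment between
# 3 and 20 replicas based on average CPU utilisation.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="hello-server"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-server"
        ),
        min_replicas=3,
        max_replicas=20,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```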
“We saw the amazing community that’s grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.”
— JAI CHAKRABARTI, DIRECTOR OF ENGINEERING, INFRASTRUCTURE AND OPERATIONS, SPOTIFY
A small percentage of Spotify’s fleet, containing over 150 services, has been migrated to Kubernetes so far.
Another plus: “Kubernetes fit very nicely as a complement and now as a replacement to Helios, so we could have it running alongside Helios to mitigate the risks,” says Chakrabarti. “During the migration, the services run on both, so we’re not having to put all of our eggs in one basket until we can validate Kubernetes under a variety of load circumstances and stress circumstances.”
“It’s been surprisingly easy to get in touch with anybody we wanted to, to get expertise on any of the things we’re working with. And it’s helped us validate all the things we’re doing.”
— JAMES WEN, SITE RELIABILITY ENGINEER, SPOTIFY
Thank you for reading!