
What exactly is the big deal about Kubernetes?

Kubernetes is highly popular these days, and it's not just for show: it provides good abstractions for hosting many types of web applications and can scale to serve massive services. If you wish to deliver a dependable web service to your users, you must consider several factors:

  • What happens if one of the servers that hosts the service fails?
  • How can new software versions be reliably deployed without causing downtime?
  • How can we add computing resources as needed?
  • How can the service be made known to the public?


We'll need to solve these and many other issues one way or another, and Kubernetes can help.
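To make this concrete, the standard Kubernetes answer to the questions above is a Deployment (which keeps several replicas running and rolls out new versions gradually) paired with a Service (which exposes them). The sketch below is a minimal, hypothetical example - the names, image, and port are placeholders, not a real application:

```yaml
# Hypothetical sketch: names, image and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                # keep 3 copies running, so one failed server doesn't take the service down
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # replace pods one by one: new versions deploy without downtime
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: LoadBalancer         # make the service reachable from the outside
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
```

Changing `replicas` answers the resourcing question, and updating the image tag triggers the rolling deployment - Kubernetes handles the mechanics of each.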



It's not the only way to solve these problems, but it's a good one. One significant advantage is its de facto standard API: a lot of supporting software interfaces with Kubernetes, so we'll have plenty of options for monitoring, security, and other tooling. With many managed systems, integrating with other software inside the same operator's walled garden is the easiest or even the only option, bringing lock-in problems that Kubernetes does not have. Kubernetes is particularly well-suited to certain use cases. If our use case matches one of these, the chances are that Kubernetes is a good fit:

  • Multi-cloud. If we use various cloud providers, Kubernetes can operate as an abstraction layer, allowing us to manage our apps across all of them using the same API.
  • Hybrid cloud. Similarly, if we need to run things both on-premises and in the public cloud, Kubernetes lets us manage applications in both environments the same way.
  • Massive scale. If we serve millions of daily active users, a properly designed Kubernetes solution is likely to be cost-effective. Serving a small number of users can be relatively costly, but as we scale up, Kubernetes expenses do not rise as quickly as with many other solutions.
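As an illustration of the scaling point above, Kubernetes can also adjust capacity automatically with a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `web-app` exists (the name and the thresholds are placeholders):

```yaml
# Hypothetical sketch: target name and limits are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3             # never drop below the baseline
  maxReplicas: 20            # cap costs during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU use exceeds 70%
```

This is the kind of mechanism behind the cost curve mentioned above: capacity follows demand instead of being provisioned for the peak.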

Let's have a look at some of the choices. If the operational overhead of running Kubernetes is currently not justifiable for our use case, these alternatives may be preferable. Because there are many different alternatives, this is by no means an exhaustive list.


What other choices are there?

To begin with, plain old virtual machines are not a viable alternative to Kubernetes for greenfield projects. If your company is already experienced with virtual machines, it may be tempting to stick with what you know, but you should think carefully about the long-term ramifications. It's difficult to match the reliability and fault tolerance of Kubernetes with a standard virtual machine infrastructure, and because your VM configuration will be unique to your company, training new staff to run it will be much harder. Using Kubernetes can save you a lot of time and effort.

Having said that, if you already have a configuration like this that works for you, migrating to Kubernetes is a large project, and whether it is worthwhile depends on your circumstances - if it ain't broke, don't fix it.

If Kubernetes is currently too complex for your company, consider managed platforms where the vendor handles the upkeep. Microsoft Azure Container Instances, AWS ECS and Google Cloud Run all let you run a containerized workload without worrying about where and how it runs. If you decide to move to Kubernetes later, the transition will be easier because your app is already running in containers. There are also platform-as-a-service offerings, with Heroku being the most well-known example. They let you run your application as-is, with no need to worry about infrastructure. These services can be a great way to get started quickly, but their restrictions may become evident later, and expenses can escalate quickly as you scale.

Google Cloud Functions and AWS Lambda are serverless alternatives where you write code and specify when it runs. They can be cost-effective for specialized use cases with limited traffic and occasional request spikes, and open-source alternatives such as the Serverless Framework or Knative reduce the vendor lock-in that comes with serverless. Transitioning from serverless to other architectures may be difficult because the model is so different. Suppose you've been running serverless functions in the cloud and must move the app on-premises: converting the program to a traditional architecture, or implementing serverless on-premises, may be a lot of work. You can also combine options - using serverless for some activities while hosting the core application on traditional infrastructure is common.


It depends on the case

Here's a basic guideline - when your scale is small, buying is more cost-effective than building, but as your scale grows, building more yourself becomes more cost-effective. However, you must weigh this advice against your own situation. To demonstrate how context influences the choice, consider the following concrete cases where Kubernetes could be a viable option:

  • Your company is a large enterprise with numerous development teams. Each is in charge of one or more microservices that make up an application that has millions of daily users.
  • You have a startup whose app requires a lot of data processing, and you expect your user base to increase consistently. In this case, managed services can soon become prohibitively expensive, or they may lack the computational capacity that you want.
  • You must host your program or parts of it on-premises, but you want equivalent functionality to those accessible on cloud platforms. If you're using Kubernetes in the cloud, it may also make sense to run it on-premises so that you can use the same tools and workflows everywhere.

Kubernetes, on the other hand, may not be the greatest choice if one of the following applies:

  • You have a tiny startup with limited resources that wants to deliver a minimum version of its product to end users as quickly as feasible.
  • You have an established architecture that works for you, is cost-effective, and has no major flaws.
  • Your app is straightforward, and you have no plans to scale to a big number of users.

Whatever option you choose, it's critical to keep your architecture adaptable. This allows you to change course if your circumstances change. For example, if you use containers, you'll find it easier to transition to Kubernetes if you decide to adopt it. Some services, such as Google Cloud Run, are also specifically built to make it easy to switch to Kubernetes later on.


Conclusion

You should always have a clear notion of the exact problems you are attempting to solve before adopting Kubernetes. There is no universally right answer to whether you should run it or not, and you are the best judge of your individual situation. Talk to us and our experts will give you some pointers on what aspects of using Kubernetes you ought to take into consideration.


 
Reg. ID:5592362353 | Jörgen Ankersgatan 11, Malmö 211 47 Sweden | info@devpals.se | + 46406820504 | Terms & Conditions | Privacy Policy | Cookies