
I am a solo developer (full stack, but primarily frontend), and Kubernetes has been a game changer for me. I could never run a scalable service on the cloud without Kubernetes. The alternative to Kubernetes is learning proprietary technologies like "Elastic Beanstalk" and "Azure App Service" and so on. No thank you. Kubernetes is very well designed, a pleasure to learn and a breeze to use. This article seems to be about setting up your own Kubernetes cluster. That may be hard; I don't know; I use Google Kubernetes Engine.

For others considering Kubernetes: go for it. Sometimes you learn a technology because your job requires it, sometimes you learn a technology because it is so well designed and awesome. Kubernetes was the latter for me, although it may also be the former for many people.

The first step is to learn Docker. Docker is useful in and of itself, whether you use Kubernetes or not. Once you learn Docker you can take advantage of things like deploying an app as a Docker image to Azure, on-demand Azure Container Instances and so on. Once you know Docker you will realize that all other ways of deploying applications are outmoded.
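
For instance, the whole build/run/push loop is a handful of commands (the image and registry names below are made up for illustration):

    docker build -t myapp:1.0 .                       # build an image from the Dockerfile in the current directory
    docker run -d -p 8080:80 --name myapp myapp:1.0   # run it locally, mapping host port 8080 to the container's port 80
    docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
    docker push myregistry.azurecr.io/myapp:1.0       # push to a registry (here an Azure Container Registry) so a cloud service can pull it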

Once you know Docker it is but a small step to learn Kubernetes. If you have microservices then you need a way for services to discover each other. Kubernetes lets you use DNS to find other services. Learn about Kubernetes' Pods (one or more Containers that must reside on the same machine to work), ReplicaSets (run multiple copies of a Pod), Services (expose a microservice internally using DNS), Deployments (let you reliably roll out new software versions without downtime, and restart pods if they die), and Ingress (HTTP load balancing). You may also need to learn PersistentVolumes and StatefulSets.
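
As a rough sketch of how those pieces connect (the names here are invented; these imperative kubectl commands create the same objects you would normally describe in YAML):

    kubectl create deployment web --image=myregistry/web:1.0     # a Deployment that manages a ReplicaSet of Pods
    kubectl scale deployment web --replicas=3                    # run 3 copies
    kubectl expose deployment web --port=80 --target-port=8080   # a Service; other pods can now reach it at http://web via DNS
    kubectl get pods,svc                                         # inspect the resulting Pods and Service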

The awesome parts of Kubernetes include the kubectl exec command, which lets you log into any container with almost no setup and no password, kubectl logs to view stdout from your process, kubectl cp to copy files in and out, kubectl port-forward to make remote services appear to be running on your dev box, and so on.
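
For example (the pod name and paths below are made up):

    kubectl exec -it web-6d4cf56db6-abcde -- /bin/sh              # shell into a running container
    kubectl logs -f web-6d4cf56db6-abcde                          # stream its stdout/stderr
    kubectl cp web-6d4cf56db6-abcde:/var/log/app.log ./app.log    # copy a file out
    kubectl port-forward svc/web 8080:80                          # http://localhost:8080 now hits the remote Service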



> Once you know Docker you will realize that all other ways of deploying applications are outmoded.

This is a strong and absolute statement to be making in a field as broad and diverse as software engineering. My experience from being on both sides of these statements is that they're often wrong, or at least short-sighted.

In this case, while I get the packaging benefits of Docker, there are other ways to package applications that don't require as much extra software/virtualization/training. So the question isn't as much about whether Docker/K8S/etc. provides useful benefits as whether or not those benefits are worth the associated costs. Nothing is free, after all, and particularly for small to moderate sized systems, the answer is often that the costs are too high. (And with hardware as good as it is these days, small-to-moderate is an awful lot of capacity.)

I've personally gotten a lot of value out of packaging things up into an uber jar, setting up a standard install process/script, and then using the usual unix tooling (and init.d) to manage and run the thing. I guess that sounds super old fashioned, but the approach has been around a long time, is widely understood, and known to work in many, many, many worthwhile circumstances.
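
For what it's worth, the whole "pipeline" can be as small as something like this (the paths and names are invented for the sake of the sketch):

    # idempotent-ish install script, run on the target host
    install -d /opt/myapp                            # create the directory if it doesn't exist
    cp build/libs/myapp-all.jar /opt/myapp/myapp.jar # the uber jar produced by the build
    cp deploy/myapp.initd /etc/init.d/myapp          # standard init.d script
    chmod +x /etc/init.d/myapp
    service myapp restart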


Indeed. Containers suck when your entire filesystem is 60 megabytes.


When I know how to use a hammer, everything starts looking like nails?


When something breaks you log in to the machine and make incremental updates to fix it, right? This approach leads to non-reproducible deployment environments. Immutable systems are better, and a Dockerfile is essentially a written record of how to reproduce the environment.


> When something breaks you log in to the machine and make incremental updates to fix it, right?

Not generally, and you do a good job explaining why I don't in your next sentence.

> This approach leads to non-reproducible deployment environments.

It's true that there's some discipline involved, but it's not necessarily a huge amount. For me, what it tends to look like is a build that produces some sort of deployable artifact, an idempotent install script, and following standard Unix patterns. Except for maybe that last bit, this is exactly what you'd do in a Docker environment. And of course, Docker and the like are always still candidates for adoption, if the circumstances warrant.

Part of what surprises me about conversations like this is that the idea of an environment in a known and stable state isn't a novel development. The question is really about what degree of environment stability you need to achieve to meet your requirements and then the specific tools and procedures you choose to adopt to meet that goal. Docker is one choice, but not the only choice, and even if you chose it, there is still a set of disciplines and procedures you'll need to follow manually for it to be effective.


Everybody feels confident in the stack they have spent time using. You like Kubernetes because you took the time to learn it; someone else will find Elastic Beanstalk or AWS ECS equally easy to set up and scale. It's not that Docker is the only way to deploy an application either; there are virtues of learning the serverless deployment modes as well on the various clouds. For many of the "proprietary lock-ins" you run into, you often get something back.

I do agree that Kubernetes and Docker are nice, of course :)


Another advantage of Kubernetes over things like Elastic Beanstalk is portability. Your app can move from one cloud to another with minimal effort.

Yet another advantage is portability and durability of your knowledge. Kubernetes has so much momentum that it is here to stay. It is extensible so third parties can innovate without leaving Kubernetes, which is yet another reason it is going to be around for a long time.


That's clearly also a disadvantage, because part of the source of k8s complexity is that it has to be a generic platform for arbitrary services.

Please apply some level of critical thinking before copying/pasting generic selling points that could apply to almost any other open-source IaC framework.


EB or ECS specific knowledge is AWS specific. I can (and do) run k8s on my laptop and can (and do) deploy Helm charts (the ones I wrote or 3rd party ones) on any k8s install. So that's quite different from the usual vendor lock-in that comes with proprietary cloud services.
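
e.g. the same chart install works against a local cluster (minikube, kind, etc.) or a managed one, just by switching kubectl contexts (the release and chart names below are only an example):

    kubectl config use-context minikube               # or an EKS/GKE/on-prem context
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-nginx bitnami/nginx               # same command, any conformant cluster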


...or you could deploy your app on Google App Engine or Heroku and spend all your time developing features your customers care about.


I have no idea how to deploy my app on Google App Engine or Heroku. So instead of spending time developing features my customers care about, I'll spend time learning how to deploy my app on those services.


You will spend orders of magnitude more time fiddling with K8s. You may end up with employees working on infrastructure full-time.

These are not even remotely comparable things.


This is true for any way of deploying, & depends on what you already know versus what you need to learn about. But different deployment approaches require you to understand different things, or different volumes of stuff.

There's also the difference between what you need to know to get started vs what you need to know to run a service reliably.

If you deploy to a platform that uses thing X for your app in production, and thing X has unhelpful defaults or will behave poorly in some situation and cause or amplify an outage, then you not only need to learn the minimum about how to deploy, but also need to learn about the pitfalls and what you need to do to overcome or mitigate them -- either proactively, or reactively when production breaks and you don't understand why & don't understand how to fix it.

The amount of the latter you need to learn to have a reliable production system that you're able to maintain is going to be much larger in a more complicated, configurable deployment system, even if it happens to be quick & easy for you to get started.


> This is true for any way of deploying, & depends on what you already know versus what you need to learn about.

The difference is that Kubernetes is portable from cloud to cloud. Also, when you invest in learning Kubernetes your knowledge is both portable and durable. This fact made a huge difference for me, because I am not a backend dev, so I am not willing to invest time in learning something unless the knowledge I acquire is both portable and durable.


> Also, when you invest in learning Kubernetes your knowledge is both portable and durable.

this may be true, let's check back in 10 years to validate the durability!

e.g. to give a non-tech counterpoint: I'm currently working on some logic to fit statistical models to data. The foundations of much of this knowledge are hundreds or thousands of years old (e.g. algebra, calculus, statistics). Orders of magnitude more durable than any knowledge related to the particular tech stack I am using.


I'm skeptical that the service is any more scalable than it would be with regular instances and multi-AZ, mainly because in my experience scalability has way more to do with network topology and the architecture of how requests flow than with the tech used for implementation.


> I could never run a scalable service on the cloud without Kubernetes

Can you give us an indication of the scale of your app? e.g. rpm (requests per minute).


It is still in development, so no rpm at the moment.

That’s another thing: some people think Kubernetes is something you use if you need high scalability. I disagree. Kubernetes should be the default if your app consists of more than 1 service. If you don’t have high scalability requirements you can rent a single-node GKE “cluster” for about $60 per month.
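
Something along these lines will do it (the zone and machine type are just an example; the actual cost depends on what you pick):

    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --num-nodes 1 \
        --machine-type e2-small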

If you have just 1 service then a single Docker container is all you need, so Kubernetes is not needed.


This mentality is how we end up with overly engineered piles of dung. Instead of building something in the most simple way practical which would fulfill our requirements, we go all out. Now changing things takes longer because to do anything you have to weave through 10+ layers of opaque abstraction. No thanks.


If you don't have high scalability requirements, virtually anything will work. You're probably paying $55/month over the odds.


What will you use for service discovery?


off the top of my head?

a) Shared data source; each service writes pid/state to a file in the shared data store (rough sketch below). It could be a single directory in a single-server setup or a dedicated NFS/SMB server for hundreds/thousands of nodes.

b) Pub/Sub service; Kafka, et al, in which services simply subscribe to and publish to a central channel to see everyone else.

c) Determinism; You use predictable naming/addressing and simply infer. This is tricky to scale but not impossible.

d) Any number of standalone discovery services a la ZooKeeper or Eureka. They all end up being effectively the same pub/sub model as (b), just prepackaged.

e) You don't discover shit; you have a single load-balanced endpoint that can scale out instances as needed behind the balancer, with zero knowledge required by the rest of the system.

Pick one to suit your needs. Service discovery is not that hard and has been way over-engineered.
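
To make option (a) concrete, a deliberately crude sketch (paths, names, and ports invented):

    # on each service instance, at startup:
    echo "$(hostname -i):8080" > /shared/services/billing
    # on a consumer, when it needs the billing service:
    BILLING=$(cat /shared/services/billing)
    curl "http://$BILLING/health"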


As I was reading this, I thought to myself "How does this scale" and then I re-read the parent comment that said "If you don't have high scalability requirements, virtually anything will work."

The fact of the matter is that Kubernetes solves certain problems well but also presents other problems/challenges. For some organizations, the problems K8s solves are bigger than the problems/challenges it creates. It's all about trade-offs.

Some people do want to hop on the next big thing in order to keep their imposter syndrome in check. Others know a certain technology and stick with it.

Sorry, I'm just ranting.


There are lots of ways to avoid learning Kubernetes, but why? Kubernetes is so well designed and easy to learn and use!


This is the comment you see from people on EKS or GKE. Many companies have compelling reasons to keep a large part, or all, of their services in-house. Nobody who actually has to install and administer K8s is on here commenting about how easy it is to run, maintain, and upgrade on their bare metal hosts. Troubleshoot, I almost forgot troubleshoot! All of those moving pieces, and something is hosed at 3am. This will be fun.

It will be great if that changes someday, and there's certainly been progress, but for places where they'd need to run it themselves, K8s is a tough proposition.


I took the time to learn it, and for just my side projects it's a ridiculous amount of overkill.


/s


If you're not worrying about scalability like the OP said, static configs. Add a new service? Roll out config changes. Server goes down? Let the redundancy handle it, roll out config changes in the morning.
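
i.e. something as plain as a file of name=address pairs that you push out with whatever tooling you already have (the names, addresses, and hosts below are invented):

    # write the map of service name -> address
    printf 'billing=10.0.1.10:8080\nsearch=10.0.1.11:8080\n' > services.conf
    # ship it however you normally ship config, e.g.:
    for h in web1 web2 web3; do scp services.conf "$h:/etc/myapp/services.conf"; done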


If you are a single dev writing a couple of small services all by yourself, then the odds are you don't need a technical solution for service discovery.


Your comment makes me so irrationally angry. I totally disagree. But I'll be civil.

I write scalable apps without K8s. I moved away from it. Stateless services are trivial to scale.


[flagged]


Most of us have to deal with the design decisions made by others. I can see how poor decisions can make someone angry down the road.


That's exactly what my therapist said.


It was probably more the "if you have more than 1 service you need Kubernetes".

No. You don't.


Well, even if you have 1 service k8s is already useful. Try to do blue/green or any other no-downtime deployment, especially with database changes.

In k8s, deployments are deterministic: it will roll out X containers at a time (X is configurable and defaults to 1).
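
e.g. a zero-downtime image bump is just (the deployment and container names here are invented, assuming a container named web):

    kubectl set image deployment/web web=myregistry/web:2.0   # start a rolling update
    kubectl rollout status deployment/web                     # wait until the new pods are ready
    kubectl rollout undo deployment/web                       # roll back if something looks wrong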


This is my experience too. I've used smaller-scale tools (such as docker-compose, Dokku, Heroku, etc.) but I've found them to be either unreliable or unsuitable once complexity becomes even fairly modest.

Eventually I turned to Kubernetes to see how it compared. I spent a day-ish reading through the 'core concepts' in the docs, which was plenty to get me started on GKE. It took me a week or two to migrate our workloads over, and once everything stabilised it has been pretty much fire-and-forget.

I have about twenty pieces of software deployed for my current client and I feel that I can trust Kubernetes to just get on with making everything run.

I've since deployed clusters manually (i.e. bare metal), but I certainly wouldn't recommend it for anyone starting out. Personally I'm keeping a close eye on k3s.

I think my main learning during this process – at least for my situation – was to run any critical stateful services outside of Kubernetes (Postgres, message queues, etc). I think this applies less now than it did when I started out (v1.4), but nonetheless it is a choice that is still serving me well.


"I could never run a scalable service on the cloud without Kubernetes."

But also

"The alternative to Kubernetes is learning proprietary technologies like "Elastic Beanstalk" and "Azure App Service" and so on. No thank you"

So can we clarify that you truly meant: "I decided not to run a scalable service in the cloud using any of the existing cloud tools that do and have supported that scenario for years. And decided to use k8s instead" :)


> I could never run a scalable service on the cloud without Kubernetes.

I find this statement quite bizarre.


Not bizarre at all - it's perfectly fine - this poster could never run a service without kubernetes.

Doesn't make any kind of judgement, just stating their personal fact.

I could never make a souffle without a recipe. Do you find this statement bizarre as well?


> I could never make a souffle without a recipe. Do you find this statement bizarre as well?

Of course. You most likely could, after making it dozens of times with a recipe.


I’m in a similar situation and Kubernetes is honestly pretty easy to use once you get it. If your team is small, use a managed Kubernetes like GKE or EKS.

It’s worth noting that Kubernetes uses containers, which can be created via Docker, but it is not dependent on Docker.


Can you point me to a good doc on deploying a small production service on k8s?

The official documentation provides a super simple tutorial, and then nothing. There's not even documentation of the primary config file. Frustrating.

https://github.com/kubernetes/website/issues/19139


    If you have microservices then you need
    a way for services to discover each other
Why not run them in docker containers with fixed IPs?


What happens when the IP address changes? You need some way to lookup current IP addresses. Why re-invent DNS? Also, how do you protect these services from unauthorized access?


    What happens when the IP address changes?
Changes how? It's not as if the IP of a server magically changes out of the blue.

    Why re-invent DNS?
There is no reason to re-invent DNS. Each docker container will have to know where the other containers are. So you could write that into /etc/hosts of the containers, for example.
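
e.g., plain Docker already lets you pin addresses and pre-populate /etc/hosts (the subnet, names, and images below are just an example):

    docker network create --subnet 172.20.0.0/16 appnet
    docker run -d --net appnet --ip 172.20.0.10 --name db mydb:1.0
    docker run -d --net appnet --add-host db:172.20.0.10 --name api myapi:1.0   # /etc/hosts in the api container now maps "db"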

    Also, how do you protect these services
    from unauthorized access?
You need to do this whether you use Kubernetes or your own config scripts.


> What happens when the IP address changes?

Erm, he literally said "with fixed IPs" (i.e. a "static IP").

You DO realize this is possible and easy to configure, right? If it changes anyway after that, that's an entirely new problem.

I feel like some networking knowledge will fall through the cracks eventually; static IPs might be one of those things.


Because you want to scale, or roll out during a deploy. Or one goes down and you need a new host.


Do you have any resources you'd recommend to learn Docker?


So much this.


I'm also enjoying Kubernetes. I started a hobby project on GKE just to learn, but now the project has 8,000 MAU or so and will be scaling up more in the near future. K8s is totally overkill, but I've had a good time and it's worked well so far.



