A while ago there was a thread asking for experiences from people who regret using Elixir. One of the most popular answers was that Erlang is now somewhat obsolete, because everything it offers is now available through technologies like Kubernetes.
The thing is however, with Erlang you get all of that in a pretty easy to learn language and it will scale to practically anything. Erlang is also an incredibly productive language. You can get a whole lot of things done in a short amount of time.
I am learning Elixir... and I really get a kick out of it. It's zany and pushes me to bend my brain.
It's sort of a Dr. Seuss riff for an experienced programmer.
So you've done a lot of program?
Can you, can you, without a reference?
Could you, could you, with nary a pointer?
Will you, will you, use no return?
No for loop? And less control flow goop?
I had to switch back to do some Python and Swift maintenance/development last week after 2 solid weeks of climbing the Elixir newbie staircase, and I was amazed at how much code I write in "normal" languages whose sole purpose is to decide where the code goes next. For the first hour or two my brain kept screaming "why do you keep putting all of these if blocks everywhere?!"
I liked the book ‘Elixir in Action’. It shows some of the language, then immediately jumps into building a way-too-overcomplicated program (a distributed TODO list) that nicely shows off the features of Elixir.
After that I just read some codebases. I find Elixir codebases extraordinarily easy to read. The language does a great job of making ‘the good way’ also ‘the easy way’, and the linear nature of functional code is much clearer than the usual indirect dependency-injected hell. Papercups is a fairly good starting point, I think. I liked the Livebook codebase too.
I've been writing a lot about Elixir as I've been learning it: https://inquisitivedeveloper.com/. The idea is for people to learn Elixir along with me as I learn it.
It's not a quick start though: I really dig into how Elixir works and move fairly slowly. I've finished covering the language itself, and I'm about to move into the OTP functionality that's inherited from Erlang.
I used Dave Thomas's Programming Elixir > 1.6 and can recommend it highly for learning the language.
As others have mentioned in some other threads, learning about OTP and GenServers is really worthwhile. Apart from the language, the book teaches a bit about that too. But maybe just enough.
Elixir in Action is in a class by itself. I'd also recommend Phoenix in Action as a nice quick start and the many, many Elixir books from Pragmatic Bookshelf. I've read almost all of them and all of them are well-written and well-edited.
Also, the official docs and guides are fantastic once you have your bearings.
The Elixir-lang Slack channel. The Dave Thomas book. Lots of trial and error in iex. For me, I have to have a meaningful thing to do with a language/platform in order to learn it. The To Do/hello-world apps just do nothing for me.
If-expressions are a specialized construct that only works on booleans; pattern matching is a more general concept (which of course can be used to implement if-expressions, but so can pure lambda calculus).
I did not mean to imply that there are no if blocks in Erlang/Elixir. Just that I use them a whole lot less. Probably an order of magnitude less.
And you still have to decide where your code goes at some level. The pattern of having two function variants (one that matches an empty list, one that matches the head/tail) is a different idiom from looping. So while for loops go away, you still traverse sequences. And more generally, a lot of the if/else clauses I would normally write now get solved with pattern matching.
With control-flow languages, where the code goes and what the code does feel very intertwined, whereas with pattern-matching idioms I find that where it goes and what it does are declared more orthogonally.
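A minimal sketch of that idiom (the `Walk` module name is made up): each clause declares the shape of input it handles and what to do with it in the same place, with no if/else ladder or loop.

```elixir
defmodule Walk do
  # One clause per shape of input: "where control goes" and
  # "what happens" are declared together.
  def sum([]), do: 0
  def sum([head | tail]), do: head + sum(tail)

  def describe([]), do: "empty"
  def describe([_only]), do: "one element"
  def describe([_ | _]), do: "many elements"
end

IO.inspect(Walk.sum([1, 2, 3]))        # => 6
IO.inspect(Walk.describe([:a, :b]))    # => "many elements"
```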
I always remember Joe Armstrong's quote (which is an adaptation of Greenspun's tenth rule):
“Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”
That is, while other languages and systems barely scratched the surface in terms of distributed programming, Erlang was perfecting all the models needed for it, because that's what it was built for. So everyone else is now playing catch-up with Erlang.
It is easier to get a distributed computing runtime on existing ecosystems than rewriting the world.
I have learned Erlang, and I like its Prolog-inspired syntax, but that is about it; there is hardly a place for it at the Java/.NET/C++ shops I work in.
Could you find the link? I'm not sure Kubernetes can really replace Erlang. Some of the key things that distinguish Erlang are:
- light "threads" and supervision of them
- OTP, which is largely a library that other languages with message passing and light threads could implement
- garbage collection per light "thread" rather than stop-the-world
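A minimal Elixir sketch of the first point plus supervision (the `Flaky` module is made up): a worker that is just a lightweight process with its own independently collected heap, whose crash is isolated and healed by a restart.

```elixir
defmodule Flaky do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: {:ok, 0}

  @impl true
  def handle_call(:ping, _from, count), do: {:reply, :pong, count + 1}
end

# Place the worker under a supervisor with a restart strategy.
{:ok, _sup} = Supervisor.start_link([{Flaky, []}], strategy: :one_for_one)

:pong = GenServer.call(Flaky, :ping)

# Kill the worker; the supervisor restarts it and re-registers the name,
# without taking the VM (or any other process) down.
Process.exit(Process.whereis(Flaky), :kill)
Process.sleep(100)
:pong = GenServer.call(Flaky, :ping)
IO.puts("worker survived a kill via supervision")
```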
Akka was on the right track in supporting the same approach to highly concurrent and fault-tolerant software. The JVM might have gotten better at garbage collection (without paid plugins) since 2015, when I last used Erlang?
But I'm curious which stack in particular is seen as superseding Erlang.
Ah yeah. I'm using Python now for the same reason. I don't need more concurrency than a couple of servers with a few web threads and worker threads would give me.
PAAS level deployment and failover is sufficient. (Edit: for my needs at the moment)
If you keep your async tasks in memory in any system they'll disappear when you deploy.
So they weren't really reliant on the things that set Erlang apart. So the cost of the small support base in tooling and libraries is more significant.
Speaking only for myself, I really start to miss the concurrency options in Elixir when I start coding in something else. I start trying to mash concurrency into the other language & dreaming of GenServers & receive functions.
Akka, Erlang, Service Fabric, and so on are based on the Actor Model. You can implement that pattern in any modern language, I would say. It is good for concurrency handling and can be very simple to understand and use, but async programming is the modern way to solve many of those problems.
I went from working with Erlang to working with Java/K8s microservices.
While I understand there are some path-dependent reasons why we ended up where we are, setting up a new service as part of an Erlang release is just an order of magnitude simpler. It's not even close.
"setting up a new service as part of an Erlang release is just an order of magnitude simpler."
Maybe this is something to consider: through the use of gen_servers, Erlang/Elixir allows mere developers to build a service, which in the end is just a (lightweight) process managed by OTP (the Erlang runtime, so to speak). No ops involved.
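A sketch of the "a service is just a process" point, with a made-up `KV` module: a named process holding state in a receive loop. Starting the "service" is a function call, not a deployment step.

```elixir
defmodule KV do
  def start do
    # The "service" is just a process with a registered name.
    pid = spawn(fn -> loop(%{}) end)
    Process.register(pid, :kv_service)
    pid
  end

  defp loop(state) do
    receive do
      {:put, key, value} ->
        loop(Map.put(state, key, value))

      {:get, key, reply_to} ->
        send(reply_to, {:kv, Map.get(state, key)})
        loop(state)
    end
  end
end

KV.start()
send(:kv_service, {:put, :answer, 42})
send(:kv_service, {:get, :answer, self()})

receive do
  {:kv, value} -> IO.inspect(value)  # => 42
end
```

This bare receive loop is exactly what a gen_server formalizes; in practice you would write it as a GenServer and put it under a supervisor, but no extra ops machinery is involved either way.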
I read the same article, then realised that the author didn't understand the full capabilities of OTP and maybe had a bit of survivorship bias from their Kubernetes hostage experience.
I don't remember seeing Erlang as the main criticism against Elixir.
The common problems as far as I recall:
1) Stale or missing libraries
2) Small community (hard to hire, missing resources)
3) High learning curve (GenServers, OTP, etc.) and the fact that you basically need to learn 2 languages at once (Elixir and Erlang).
There are a few more concerning proplists/keyword lists: [{:x, "y"}], [x: "y"], ["x": "y"], and x: "y" are the same thing (if in args). Erlang dictates one syntax for this instead ([{x, <<"y">>}]) (ok you COULD say that technically 'x' works too)
The pinning operator x = :x; {^x, y} = {x, "y"}
Exceptions are slightly different.
There are a few others that made me realize "oh shit I need to learn Erlang first."
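For reference, the equivalences above can be checked directly in iex or a script; the pin operator asserts equality in a match instead of rebinding.

```elixir
# The keyword-list spellings really are the same underlying term:
# a list of {atom, value} tuples.
kw = [{:x, "y"}]
^kw = [x: "y"]          # pin: match against kw's value, don't rebind

# The pin operator inside a tuple match:
x = :x
{^x, y} = {x, "y"}      # left ^x must equal :x for the match to succeed
IO.inspect(y)           # => "y"
```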
My point was that Erlang and Elixir aren't that different. You can totally not use the syntactic sugar for proplists, it will still work the same.
Therefore saying "you need to learn 2 languages" isn't correct. It's like saying JavaScript/TypeScript are 2 different languages. Well, yes, but if you're fluent in JavaScript, you'll be able to read TypeScript, and if you're fluent in TypeScript, chances are that you are fluent in JavaScript too.
But I agree that one should learn Erlang first, because most of the documentation about Elixir seems to assume that you already know Erlang/OTP.
There are a handful of things that make Elixir way more ergonomic. For example, in a really big codebase, if you want to zoom to a function definition you just have to search for "def function_name" (or "defp function_name"). Aliasing is very nice, though confusing if you are coming from a language where module names are first-class (and not just atoms), like Julia.
Last point isn't true. I've been working full time with Elixir for 3 years and I can't write any Erlang. I can read it, just because it's not that complex syntactically.
I'll say the high learning curve statement isn't really true for Erlang either. I was up and running with Erlang as a mid level dev back in 2013 or so within 2 weeks. Worked through LYSE, and basically grokked everything I needed to, genservers and OTP included. Only thing tricky after that was building a release (since at the time the best tooling was rebar; Elixir's mix makes this really simple).
Same, I can read Erlang extremely effectively. Even better than some Erlang devs. I can also patch Erlang libraries (I fixed my friend's HTTP library to be able to do streaming HTTP 1.1). I can't write it, though.
"The thing is however, with Erlang you get all of that in a pretty easy to learn language and it will scale to practically anything."
But as I said in that thread, that's actually one of the traps.
While Erlang/Elixir as languages and runtimes will scale to "practically anything", a lot of the technologies in the Erlang stack don't. Mnesia is almost useless, if not totally useless. Erlang's not the slowest language, but it's not very fast, and using something 5-10x faster can "scale" nicely too by needing that many fewer systems in the first place. The message system is not what I'd call best-of-breed anymore; very ahead of its time, but most messaging APIs have more functionality in them now for a reason. Erlang's delivery is 1-or-none (at most once), and 1-or-many (at least once) seems to be the choice winning out in general.
As long as you stay in the ecosystem you're OK, but if you're using Erlang to speak to a non-Erlang DB, to interact with a non-Erlang message bus like Kafka or something, to call APIs from non-Erlang systems and provide APIs to non-Erlang systems, all this integration becomes more a trap than an advantage. Plus Erlang's type system starts to become a bear in those contexts; Erlang's types integrate well with Erlang, but are just quirky enough that it's a pain to integrate with anything else. There isn't even a clean embedding of JSON into Erlang types. (Erlang, like many functional languages of the era, made the mistake of defining a "string" as a "linked list of numbers representing characters", which makes lists ambiguous as to whether they are lists or strings.) Interaction with non-Erlang systems is always kind of a pain because of the lack of clean embeddings into the Erlang type system.
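The charlist ambiguity is easy to demonstrate from the Elixir side (Elixir's own strings are UTF-8 binaries, but Erlang-style strings remain lists of integer codepoints):

```elixir
# An Erlang-style string is literally a list of codepoints, so a list
# of small integers is indistinguishable from a string at the type level.
charlist = String.to_charlist("hello")
true = charlist == [104, 101, 108, 108, 111]
true = is_list(charlist)

# Elixir's own strings avoid the ambiguity by being binaries:
true = is_binary("hello")
false = is_binary(charlist)

# Inspection has to guess: a printable integer list renders as text
# unless you force list rendering.
IO.inspect([104, 101, 108, 108, 111])
IO.inspect([104, 101, 108, 108, 111], charlists: :as_lists)
```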
In 2005, scaling to "practically anything" was a pretty decent advantage for Erlang. Scaling to "practically anything" is table stakes for a language/runtime now. Someone else says languages are "catching up" to Erlang, but to a large degree, they have, and exceeded it in many places. To be honest I think it's Erlang trying to catch up now. And it can't. It's too integrated, too opinionated. What was absolutely an advantage in 1999 and 2010 is now a significant disadvantage.
Almost no one uses Mnesia. Those 5-10x faster languages are not as productive as Erlang. Python, Ruby, PHP, and Node.js are all slow.
> As long as you stay in the ecosystem you're OK, but if you're using Erlang to speak to a non-Erlang DB, to interact with a non-Erlang message bus like Kafka or something, to call APIs from non-Erlang systems and provide APIs to non-Erlang systems, all this integration becomes more a trap than an advantage.
I fail to see why. You can write an API and communicate with an API just like any other language.
> Erlang's types integrate well with Erlang, but are just quirky enough that it's a pain to integrate with anything else.
You integrate through APIs. I do not see how it has anything to do with Erlang's internal types. Any service designed to be used by a diverse set of clients requires a clean API.
> (Erlang, like many functional languages of the era, made the mistake of defining a "string" as a "linked list of numbers representing characters", which makes lists ambiguous as to whether they are lists or strings.)
Where does this matter in practice? I've been building Erlang software for 10+ years and people keep criticizing it for this and I still do not understand why this is such a big deal to everyone.
> Scaling to "practically anything" is table stakes for a language/runtime now.
Python/PHP/Ruby/Node.js are pretty terrible at taking advantage of many core systems.
> Someone else says languages are "catching up" to Erlang, but to a large degree, they have, and exceeded it in many places.
Erlang is still unmatched when it comes to ease of use building reliable distributed systems and it is not even close. Yeah you could run Kafka, Kubernetes and a bunch of other technologies that do their job better than Erlang does it natively. But at the end of the day, Erlang is one technology that is good enough for most and does not require large operational experience to run. Your entire stack is in your own code, understandable by your entire team.
People understand Erlang scaling wrong. Erlang’s distribution model, and its apps like Mnesia, were never designed for horizontal shared-nothing scaling. They were designed for SOA: taking the components or operational roles of a system (e.g. “master” vs “hot standby”) and codifying them into distinct Erlang nodes (which can be developed as separate POSIX processes on the same machine, before being moved to prod to live on separate machines.)
People try to scale Erlang systems by trying to get one distribution set to have thousands of nodes; and then taking that thousands-of-nodes cluster, calling that the “master” cluster, and then having another thousands-of-nodes cluster and calling that the “hot standby” cluster. But that’s precisely backwards.
Think like Ericsson. You’re designing a switch. You have two independent problems:
1. How do you make your switch Highly Available / fault-tolerant?
2. How do you handle higher load than one switch could possibly handle (for either hardware or software reasons)?
To solve problem 1, you create nodes, with roles (e.g. a master and hot standby), where there are an exact number of pre-defined nodes, each with exact static roles. These nodes have defined fault-tolerance relationships. Databases like Mnesia that replicate from one node to another. Applications that have defined chain-of-command such that it’s clear which node becomes the leader if the first leader fails. Etc. You design your system so that some of these nodes can go down, or netsplit, without the system itself going down/crashing. And then you deploy these nodes to separate machines, so that hardware failures will end up being treated the same as node failures.
To solve problem 2, you stamp out copies of this entire system of nodes, and add more infrastructure external to it to route between them.
In this model, a system of nodes — a distribution-set — is the same type of thing as a Kubernetes Pod. It’s several static processes, one of each role, that inter-depend, and are “wired” to one-another; and then get managed together / treated as one workload. The big difference is that, in contrast to K8s pods that all must get scheduled to one host together, Erlang distribution-sets usually get “scheduled” across several machines. But they still have one identity. They’re a single abstract machine that happens to consist of workloads running on multiple physical machines; where those workloads going down is a designed-for and tolerated aspect of the abstract machine they compose.
Everything that happens inside that single abstract machine, is the purview of Erlang and Erlang’s stdlib. Application failover, Mnesia, the thin-client code_server: they’re all designed to make a single abstract machine out of a defined set of Erlang nodes where each node has a static role.
Everything that happens outside of the abstract machine, is the purview of user code. Want to horizontally scale the abstract machine? Don’t use the distribution protocol; it’s not tuned for that. Use a userspace library. Have an application that does abstract-machine-to-abstract-machine peering. Make it the role of one or more of your nodes to run that application. Make it fault-tolerant, too.
Under this model, I hope you can see that statements like this are mostly incoherent:
> Erlang's not the slowest language, but it's not very fast, and using something 5-10x faster can "scale" nicely too by needing that many fewer systems in the first place.
...because an Erlang system architecture that builds one Highly Available abstract machine out of five-to-ten nodes, cannot be swapped out for a system architecture composed of one node, no matter the language it’s written in.
The other system, to implement the same fault-tolerance strategies, will need exactly the same number of nodes, because each node performs one or more roles, and serves as a VM-level failure-kernel / bulkhead for HA within those roles.
The non-Erlang-implemented system will just have a harder time implementing such strategies, because it’ll mostly have to use userspace libraries/frameworks — rather than VM-level abstractions — to do so. (Though you can get close with languages that have basic syntax that compiles down to framework code in other languages, e.g. if there were a hypothetical “Erleans” that compiled down to C# + .NET Orleans framework calls.)
——————
Of course, none of this matters if you don’t have the problem that Erlang solves. If you don’t need a stateful abstract machine that keeps itself online and computing over that state at all costs with fancy multi-node footwork, then you really don’t need any of Erlang’s distribution stuff.
But there are still, today, problems that do put you in that position. Stateful packet switching is still a thing. Game servers are another good example. App-layer cache servers (Redis, memcached, etc.) where your architecture couldn’t survive the thundering-herd that would happen if you needed to re-warm the cache from a cold start. Etc.
In those contexts, do you really want to try to jury-rig a solution using e.g. Hazelcast + IPVS failover? Better to just use Erlang. Because it’s still ahead of its time, for this problem domain.
As someone who has mostly been an outside observer to Erlang, you have succinctly described why I have not felt the need to use it.
It's amazing at what it does - If you had lots of physical nodes and needed to build a robust, fixed-size system at any moment in time, it's an incredible option.
But with container orchestrators, you get the ability to design a polyglot system with more fluidity in your design. The admission that different languages are best suited to different tasks throws a monkey wrench into the Erlang ecosystem, because now all of a sudden I can't use Erlang for everything.
It does sound like an excellent choice for a control plane in many cases, however. I think it's phenomenal at what it does, but I think you have provided excellent context regarding its ideal use case.
The other commenter said it best: "But at the end of the day, Erlang is one technology that is good enough for most and does not require large operational experience to run. Your entire stack is in your own code, understandable by your entire team."
So the difference is doing your scaling and distribution natively in code, versus hiring and maintaining devops expertise to manage your containers. On top of that you now have one more "node" in your team (whether a single devops engineer or a group of them) that needs to collaborate with everyone.
> The admission that different languages are best suited to different tasks throws a monkey wrench into the Erlang ecosystem, because now I all of a sudden can't use Erlang for everything.
I would note that Erlang is pretty good at absorbing other things into the Erlang abstraction. Depending on your needs, you can use:
• NIFs (library code loaded into the Erlang VM — good for performance, but not good for fault-tolerance)
• Port programs: arbitrary POSIX processes (as long as they speak a specified protocol on their stdio) spawned by the Erlang VM and communicated with + managed through a fancy socket abstraction (Erlang ports).
That last one is especially interesting if you're building a polyglot system where Erlang is just one component: it allows you to build an Erlang "node" entirely in C or Go or Rust or Java or Haskell or whatever, that — from Erlang's perspective — can occupy one of the node-roles in your Erlang system architecture; while from the other software's side, can treat the Erlang abstract-machine as a guest within some other abstracted system. Through the erl_interface library, the C node gets access to all the interesting distribution and fault-tolerance tools of Erlang (e.g. it can interact with other nodes' ports; it can hold monitor refs; etc.) So you still can "use Erlang for everything" (related to distribution and fault-tolerance, at least) even in your other software.
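A minimal port example from the Elixir side (assuming a POSIX system with `cat` on the PATH): the external OS process is spawned and managed by the VM, and its stdio becomes ordinary Erlang messages.

```elixir
# Spawn "cat" as a port and echo a line through its stdio. If the
# external process dies, the owning Erlang process receives a message
# instead of crashing.
port = Port.open({:spawn, "cat"}, [:binary])
send(port, {self(), {:command, "hello from the BEAM\n"}})

receive do
  {^port, {:data, data}} -> IO.write(data)
after
  1_000 -> IO.puts("no reply from port")
end

Port.close(port)
```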
After reading the parent's explanation, I see it as certainly one of the most sensible models for building scalable and efficient software.
This is a great way to build things:
- You have N cores on the machine
- You create N threads to match; you don't "create/destroy/create/destroy", no, you already know the size of the problem set!
- You assign roles, maybe just split between "heavy" and "light"
- Your context switching is for computing small things on the light threads
- You do sequential code, tight loops, and heavy I/O on the heavy ones
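For what it's worth, this is roughly the model the BEAM gives you out of the box: one scheduler thread per core, with processes multiplexed over them, and worker pools sized off that number. A small Elixir sketch:

```elixir
# The VM starts one scheduler thread per core; pools are sized from it.
n = System.schedulers_online()
IO.puts("scheduler threads (roughly cores): #{n}")

# Run a workload with exactly n concurrent workers, results in order:
squares =
  1..10
  |> Task.async_stream(fn i -> i * i end, max_concurrency: n)
  |> Enum.map(fn {:ok, result} -> result end)

IO.inspect(squares)  # => [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```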
I truly wish I could do something like this in most languages (i.e., have it be part of the paradigm/semantics). I use Rust now, and async is great, but the mental model is HARD.
- You don't have an easy way to split between workloads; anything could suddenly stall.
- When it's easy to launch async tasks/goroutines, you launch them like there's no tomorrow.
- So this means I don't know the size of the threads or tasks that are (or could be) running without manually inspecting or assuming.
- Looking at an async function, I can't tell which workload it is for without inspecting the code and its flow (OK, in Rust you can maybe infer it if you are clever in using types as markers... but that is still disconnected from the running of the program).
"container orchestrators" is like put a hammer to handle another hammer ("heavy OS + VM/DbEngines") under the pretense of being "light" and efficient, when actually this is how most software stack are:
- A "heavy" DB engine
- A "light" logic server that connect to it
- MAYBE out-of-band processing for send emails and stuff
So, the thing is, having Kubernetes or anything like put the solution too far away. I need, as developer, to MODEL that relationships and KNOW in what workloads this or that must operate. And the most fixed/predictable is it, the better.
>But with container orchestrators, you get the ability to design a polyglot with more fluidity in your design.
You are then adding Redis/memcached, Kafka or something similar, perhaps etcd, and Kubernetes.
A bunch of technologies your developers need to use properly and be running to develop on.
As opposed to having an easy to use single technology named Erlang which is almost certainly good enough for your needs. And your developers can easily be running a distributed system of erlang nodes locally without any issues or complex setups.
Does it really? A wrench? Maybe a banana. With the jungle and the gorilla holding it.
And anecdotally, for me, who never had the need to use distributed Erlang in practice, the benefits lie completely elsewhere. Just looking at programs written out there in the wild, some ubiquitous, and knowing the mess that is software development, makes one wonder.
Could you elaborate on why Mnesia is useless? I've only used Erlang and Elixir in hobbyist projects, but in theory Mnesia felt like a great fit for the Erlang ecosystem.
From my experience, even though we deploy with Kubernetes, they complement each other just fine. When writing code, I can easily partition things into processes, which makes total sense from a pure application-development viewpoint. Whether Kubernetes does its own thing on top or not does not matter to me.
I wouldn't spawn and kill thousands of processes with Kubernetes. That's a bit heavy-handed. A couple of Erlang pods that do the same? Sure.
How can Erlang and Kubernetes be compared at all? I actually don’t understand that.
Erlang is a programming language that allows message passing between concurrent actors. Are you saying that, containers are comparable to actors, and that using IPC between them is equivalent to message passing?
Maybe - but that requires the internet to work. That's a pretty large dependency to bring in for what Erlang does in a single process.
I think the idea that you don't need k8s if you have Erlang isn't entirely wrong, but it also does people a disservice by making them sound equivalent when they complement each other really well.
K8s is like an additional level of supervisor that applies to an entire node, and provides really easy scaling and clustering. It's a great tool to run an Erlang app, especially if you already have a managed k8s cluster anyway.
I had the pleasure of watching an amazing talk by Joe Armstrong at Reaktor Breakpoint 2015 in Helsinki.
The funniest part of the talk was when Joe presented a diagram showing that Ericsson's stock market value was always up when he was working at Ericsson. :-D
I even had a chance to have a beer with him afterwards and talk about some programming ideas related to audio programming and also some related to typography!
RIP Joe Armstrong, you were an amazing person and programmer.
> The language grew and evolved and somewhere along the line acquired a name, Erlang, named in honor of the Danish mathematician Agner Krarup Erlang (1878-1929) whose name is associated with the telecoms industry.
Huh, I always assumed it was a portmanteau of Ericsson and Language.
Unfortunately, Pony has not shown up much lately on HN or Reddit (or Twitter, or anywhere I used to hear about it) but it is a very neat language based on Actors:
to be honest I think pony focuses on Actors from a theoretical perspective. Erlang accidentally implemented Actors, and the driving motivation was fault tolerance. Specifically, the Erlang system really cares about having a sane error model.
But it's like the way there are evolutionary tracks that lead to the same end result. The eye has evolved up to 40 times independently (see: https://www.nature.com/articles/eye2017226), which is all 'accidental' but for very good reasons: the advantages of having eyes are very large.
The motivations matter. I think Prof. Hewitt was looking for something that could defeat the Church-Turing hypothesis. The Erlang creators cared about fault tolerance. So other Actor systems out there have different design lineages, which means different low-level design choices, and in many ways they aren't quite the same, in ways that are subtle, hard to see, but still quite important to the ergonomics of the developer or maintainer who is in the weeds.
I'm currently trying out Bastion (also Rust) in a project and while I still have more to learn I must say: multithreading never felt this simple. Sending data between actors feels as easy as defining onclick listeners in JS, but with added benefits like typing and pattern matching on messages. Panicking threads are restarted so instantly by default I only notice it via their loss of state. And everything that I usually found cumbersome in that area is just...gone.
Regarding your recommendation of Actix: according to its website it seems to be mainly a web framework?
Given its functional language, built-in fault tolerance, asynchronous message passing, and scalability, one would assume that it should ride the current wave of distributed computing. Any reason why it is relegated to being a niche technology?
No large company pushing it (it came out of Ericsson, but it was never meant as a mass market language from their eyes, so it was really only its creators pushing it), strange syntax (Prolog rather than Algol/C derivative), FP with immutability (so common things like for loops don't exist, making it feel alien to the common dev), and the benefits being things that require understanding and likely experiencing to really value.
This answer is true, but also a bit depressing. It's like saying "we know this would be the right way, but we prefer the old ways. So we'll just keep shooting ourselves in the foot with our C-looking code, with its for loops and its concurrency boilerplate".
(Just to be clear: "We" does not refer just to devs/engineers, companies may also be wary of using an alternate technology, even though it could bring benefits down the road)
Well, yes, but there is definitely a bit of dissonance in tech. Likely everywhere, but it's easy as a dev to see it here.
Every place wants to be the best. And yet, most places, when confronted with new technology, ask the same questions. "Who else is using it?", "How easy is it to hire for?", and if they're a bit sharper, "How much library/community support is there?" And they ask these without realizing that all of those questions will ensure they adhere to the mean. Positioning themselves that way prevents the risk of dropping below the mean due to technology choices, but it also prevents rising above it. But that's okay with most corporations, which are all about risk management, and who don't equate a tech win -> faster/better software -> better user experience -> a business win.
I agree with the opening sentence that there is no large company pushing, or rather peddling, it. But the fact that it supports FP with immutability is really a well-positioned design, and with a little bit of practice programmers get the hang of it very fast. The lack of a for loop should not be an alienating factor, in my opinion. With NIFs one can integrate with native code easily, so a team does not have to rewrite the entire codebase just because Erlang has been chosen.