Cell Lang: Why yet another programming language? (cell-lang.net)
313 points by luu on June 30, 2022 | 124 comments


> The state of the application can be partitioned in separate components (called automata) that do not share any mutable state and can be safely updated concurrently.

This is the right direction! Pure computations on (somewhat) centralized data have proven to be very resilient to bugs. It's kind of like how React organizes its programs.

I like how Cell is tackling this problem at the language level. A language's programs will (in practice) often be a reflection of what the language makes easy and what it makes hard, so designing a language to handle this complexity would be a boon to the world.

Pony also has this via its `iso` keyword and actors [0]. Cone is also exploring this notion, with an interesting actor/async/await hybrid [1].

> The first one is the ability to "replay" the execution of a Cell program. One can easily reconstruct the exact state of a Cell program at any point in time. Obviously that's very useful for debugging, but there are other, more interesting ways of taking advantage of that.

This sounds like Vale's "Perfect Replayability" feature [2] which captures all IO for a program and deterministically replays it. It's an incredibly useful feature, and I hope Cell exposes it well!

[0] https://tutorial.ponylang.io/reference-capabilities/referenc...

[1] https://cone.jondgoodwin.com/

[2] https://verdagon.dev/blog/perfect-replayability-prototyped


> A language's programs will (in practice) often be a reflection of what the language makes easy and what it makes hard…

In industrial design this is called an affordance. The product is designed to encourage correct usage and make incorrect usage difficult. I would like it if more language designers would carefully consider their language’s features like this. The Elm language is a good example.


I think this may also be a good way to separate libraries from application code, by considering the library as another actor instead of a direct function call. A panic within the same abstraction layer (application or library) is fine, but a panic in a library should not cause the application using it to abort; there should be a way to let the application recover from it and decide whether to cancel the operation or take other action. If mutable state is not shared and execution can be recorded, we can just restart the library after the abort and restore its previous state.


This description makes me immediately think of Erlang (and Elixir).


I heartily recommend you take a look at Elm if you find this interesting.


> and how tedious and time consuming it often is to implement even trivial things like sending data from the client to the server and vice-versa.

Let's be clear here, the complications there are as follows:

1. Authentication and permissions in regard to the end user who is making the change or fetching the data. (E.g. internal web app, the AWS hosted database has no knowledge of my corporate AD accounts)

2. Authentication/permissions of the service/software that is talking to the database

3. The crap tons of network problems and errors that can occur.

4. Dealing with other possible errors, such as malformed input, or data not being present, data being stale, write conflicts, etc.

5. My backend service is running in a private cluster and my DB is running in a different private cluster.

6. My backend service is running in a private cluster, so the nifty subscription DB engine can't reach out and reconnect to my service if something goes wrong with the connection, because there isn't a routable IP address.

Writing data to a table isn't the problem!


I think a more charitable interpretation would be that they're highlighting how much work has gone into that area (web server frameworks come to mind), compared to how little work they've seen on cleaner techniques and language assistance for managing local state.

That's how I read it at least, reasonable interpretations may differ.


> compared to how little work they've seen on cleaner techniques and language assistance for managing local state.

Literally every new web framework is some new take on how to manage local state.

You could spend an entire year learning about different state management solutions. From systems that keep backend and front end state in sync, to the entire redux/mobx family, to the stuff that Svelte does.

We are a long way away from the bad old days of a global "state.c(pp)" file.


But I cannot agree. I am very down on this. I think this is a colossal waste of time.

I do not see the problem they are trying to solve. Doing I/O is hard, not because sending bytes on a wire is hard but because of everything that can go wrong. It is the "everything that can go wrong" part that is hard.

They are going to solve the problem of state with "algebraic data types". That is not new. I am trying to think of a language at a higher level than machine code that does not use algebraic data types. ("Algebraic data types" are not an invention, they are a description)

I cannot see anything new in this. I fear it is another example of making easy things easier and pretending the hard things do not matter.

There have been huge strides in the development of computer languages in recent years. Rust, Go, Swift, Dart.... All of them with slightly different use cases and solving slightly different problems in slightly different ways. But the author knows of Clojure... Yes, but.

"One can easily reconstruct the exact state of a Cell program at any point in time." Really? I spend my entire professional life doing exactly that, whatever the system I work in.

This is engineering with machines. State is part of the nature of those machines. It is very well understood and is not a problem (handling state) that we are struggling to solve.


It's pretty bold to say that their programming language project is a colossal waste of time. I recommend steel-manning [0] and reading their post with a more open and curious mindset.

They could have phrased their article better, sure. But let's not cherry-pick one sentence that's barely relevant to the rest of the article and use it to fuel our anger that they aren't solving Our Favorite Problem. Instead, let's focus on what the author is really trying to get across: that a language can help manage the complexity that can come from mutable state.

If you read the article with a more charitable lens, you'll see that the author is saying a lot that could be new to many readers: a language can combine relational data with functional transformations, a language can expose these patterns to enable more solid architecture, a language might support first-class reactive programming, and a language could (if high level enough) offer replaying and snapshots.

That's honestly fascinating, and I would love to see what kind of ecosystem can grow around a language offering such features.

[0] https://themindcollection.com/steelmanning-how-to-discover-t...


If you are interested in approaches to combining programming language and database semantics, there is a bunch of work in this area from the 80s… Look up “Persistent Programming” and “Database Programming Languages” (DBPL).

And yes, get off my lawn :-P


> State is part of the nature of those machines. It is very well understood and is not a problem (handling state) that we are struggling to solve.

Wow, really? If handling state is not a problem, then what problems do we actually have?


Mostly, trust in some form. That is, what we know about someone else's state.


So basically an actor system with relations between actors, reactivity and automatic persistence.

Very interesting concept!

Although I do wonder if this couldn't be implemented as a library for an existing language.

Doesn't seem like it is actively developed, though; the compiler repo hasn't seen updates since November and the runtime repo since 2018.


> Although I do wonder if this couldn't be implemented as a library for an existing language.

It definitely could. So could Elm, which — nonetheless — is an interesting and useful language (bordering on DSL).


Elm is very interesting. Also the evolution of Elm from something more resembling Reactive Extensions to the current, more opinionated Elm-Architecture version.

Also there is an Elm-Architecture clone in almost every language. You could also emulate it in React (probably using top-level state, prop drilling and following the same patterns; TypeScript would help).


temporal.io = durable actors for microservices workflows



I'm learning so much about a market I'm entering. My company is https://www.adama-platform.com/


Neat! Something, something, Greenspun but Erlang. You should check out Lumen. Not to dissuade you at all, but doesn't making a new language greatly limit who can use your platform? Or was it on purpose so that only frolicking goat angels can fly through the eye of a needle?

BTW, would love to see a higher resolution image of your logo.

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

https://github.com/lumen/lumen


It does, but it offers some unique advantages as well. One of them is durability: maintaining state between async/await calls. That is, the process can bounce and no one notices.

The key challenge that I see is that I should focus less, in marketing, around the programming language and more on the data side.


Also, https://www.adama-platform.com/i/adama-moderate.png is a better quality image. I paid someone on fiverr to make it.



Trying to understand what this is. A kind of Erlang runtime for Go/Java/PHP/TS?


Ok, found a good explanation now: https://news.ycombinator.com/item?id=30368838


I'd describe it as a system that lets you write software without regard for the lifecycle of the OS and hardware running it. The Temporal runtime takes care of rehydrating your program in the correct state if the underlying computer is killed. This eliminates a tremendous amount of the incidental complexity found in most traditional monolithic and distributed architectures.


The "Concepts" section is a good intro: https://docs.temporal.io/temporal


This looks very interesting. I'm curious if the author has looked at Erlang/Elixir — my first reaction on reading this is that it sounds similar to the actor model and the way that Erlang handles state and state updates: as messages to autonomous processes that are modeled as a continuous fold over the message stream.
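For readers less familiar with that framing, here's a minimal sketch of state-as-a-fold, in Haskell rather than Erlang (the counter messages are made up for illustration, not anything from Cell or OTP):

    -- State lives nowhere except as the result of folding the message stream.
    data Msg = Increment | Reset

    step :: Int -> Msg -> Int
    step _     Reset     = 0
    step count Increment = count + 1

    -- Replaying the same messages always reconstructs the same state,
    -- which is also what makes replay-style debugging tractable.
    main :: IO ()
    main = print (foldl step 0 [Increment, Increment, Reset, Increment])  -- prints 1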


I too am disappointed by the lack of innovation around state management in applications. But I don't think this solves the problem.

A lot of developers think what we need is additional abstraction on top of the way data is stored and retrieved, to remove all the annoying CRUD boilerplate that goes into building an app. This seems to make sense because devs think in terms of simple examples like TODO lists.

But in a real production application with heavy scale, you need a lot of low-level control over precisely how state is managed. Tweaking a SQL query or table structure can be the difference between minutes of latency and milliseconds.

I'm not sure what the path forward is here, but I hope smart people keep thinking about it.


We've also seen that sometimes working at a lower level can actually make performance worse; there are cases where eg. describing something abstractly to a smart compiler gives it more opportunities to optimize

Personally I think a better way is "high-level with trapdoors", where you can pierce the abstraction if you really need to spot-optimize but higher-level code is the norm


> But in a real production application with heavy scale, you need a lot of low-level control over precisely how state is managed.

Very much this. "State is managed automatically!" can very quickly become "this frequent operation is automatically doing a huge amount of DB work which should actually be deferred until later"


>Then there's support for reactive programming, which is still in its infancy

State management for UI should, I would think, be a first-class objective rather than third on the to-do list. See SwiftUI for something evolving in this direction.

>The best way to illustrate the advantages of functional/relational programming over OOP is through a case study

No, the best way is to be right up front with real-life common problems and code examples that make life better. I’d prefer not to read this many dry theoretical pages before seeing any of those.


> real life common problems and code examples that make life better

That is exactly what a case study is. Looking at examples of real world cases.


No it’s not. Case studies are usually retrospective. That’s not very practical when a language is in the design/implementation stages.

It is possible to present common problem scenarios and examples up front, even if something is at the idea stage.

It’s commonly done, and can be an effective way of conveying your ideas and getting feedback without pages on end of speculative requirements.


I'm curious what kind of examples would be compelling. I'm writing a language which takes exceptional control over state (https://www.adama-platform.com/)

And I'm having a bunch of fun with it now that I'm popping up the stack into UI. My docs have a minimal tic-tac-toe (http://book.adama-platform.com/examples/tic-tac-toe.html).


I've always been intrigued by the idea of a language that enforces a separation of queries and commands. Functions (queries) return values and have no side effects, while procedures (commands) can have side effects and don't return a value.
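Haskell is one place where the type system already draws roughly this line; a minimal sketch (the account example is invented for illustration):

    import Data.IORef

    -- A query: pure, returns a value, cannot perform side effects.
    balanceAfterInterest :: Double -> Double -> Double
    balanceAfterInterest rate balance = balance * (1 + rate)

    -- A command: runs in IO, mutates state, returns no useful value.
    deposit :: IORef Double -> Double -> IO ()
    deposit account amount = modifyIORef' account (+ amount)

    main :: IO ()
    main = do
      account <- newIORef 100.0
      deposit account 50.0                        -- command
      balance <- readIORef account                -- read current state
      print (balanceAfterInterest 0.05 balance)   -- query; prints 157.5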


Nice! I had the same thought :)


Solidity does this out of necessity. A command might cost you a week’s rent, whereas a pure query is free.

The pure query inside a command might still add cost to the command though.


This reminded me of the Ceylon language. So I created https://news.ycombinator.com/item?id=31932085


I think robust information software (AKA "CRUD" apps, data visualization and interaction etc.) should definitely be built with higher level paradigms, DSLs or a language that provides such capabilities out of the box for these reasons:

- principle of least power

- can often be talked and reasoned about more easily

- lends itself to visualization

- can be tested more easily and uniformly

- eliminates certain classes of accidental complexity

So yes it makes sense to use...

- relational algebra for data manipulation where feasible, functional programming for the rest

- (data driven / functional) state machines for control

...like the article is suggesting. I also kind of like the idea about I/O but I'm not 100% sure there, because I never played with such a system. Reactive programming is powerful too to manage data flow and again hides away plumbing and side-effects.

In a sense this is kind of already how many are programming in modern FP languages like Clojure (via libraries).

What we possibly lose with these approaches is performance, unless the runtime/macros are fine-tuned to make automatic optimizations, in which case some of that can be gained back in the long term. But I generally agree that there is tons of leverage in programming these things at a higher level.


> higher level paradigms, DSLs or a language that provides such capabilities out of the box for these reasons

It was based on exactly these arguments that we created HerbsJS [1], a domain-first library for building microservices. While the essential complexity can't be removed [2], we should put effort into removing as much of the accidental complexity as possible.

BTW, great to see this kind of discussion here and congratulations to the team responsible for Cell Lang.

[1] https://herbsjs.org/

[2] https://en.wikipedia.org/wiki/No_Silver_Bullet


> this is kind of already how many are programming in modern FP languages like Clojure (via libraries).

Would you have links about this programming style and some of these libraries?


No links. However, as a Clojure dev, typically you build your programs as one big folding operation. You can stray away from this to a degree (much more than in, say, Haskell), but in the end most programs are just one call to the reduce function with data.

This is something that becomes very obvious when you become more familiar with FP, yet it is the most surprising/eye-opening thing when you are not.


I came across cell-lang a few years ago and really liked the ideas in it. It does a lot of the things that were proposed in Out of the Tar Pit, which is probably the paper that has influenced my opinions about programming the most.

For applications that don't have high performance requirements I could see this getting adopted - automation and IoT devices are a couple of areas that spring to mind


Summary of that paper [0]. Referenced paper is linked.

[0] http://kmdouglass.github.io/posts/summary-out-of-the-tar-pit...


> Why yet another programming language?

'Because I wanted to' is a sufficient reason. Create all the programming languages you want and share it with the world!


I know some people are afraid of change and competition, but I'm thrilled every time someone shares their new language or library. I don't have to learn or use them all, but it's great to basically have unlimited research and approaches to skim on GitHub/GitLab/etc. for any language and any problem.


Programming languages can be similar to bacterial DNA. Even if the language never gains broad acceptance, parts of it may be incorporated into other languages.


I like to think of it like music. For each Nirvana type band there needed to be many Pixies and Melvins type bands to influence them.


I know it's a total pipe dream, but this is what makes me most sad about lisp never being taken seriously.

It could have been common practice to create and pull in new language constructs like what this is doing.


Languages that rely on the user creating their own language constructs suffer from a balkanization of the language - each team creates their own language, and cannot share code.

This is why D does not have a macro preprocessor.


I agree it's much easier to make a mess, but I can see doing it the right way being extremely powerful.

One (slightly contrived) example is JavaScript's different (sub?) languages: JSX, await/async, TypeScript, pre-processors, etc.

It would be best practice to pull in popular versions of these things as opposed to rolling your own.

However, I also don't think there's much issue with some custom languages as part of a normal codebase when used in moderation.

There's not that much difference between that and using a traditional API that requires function calls in the right order, maybe some testing to ensure you're using it correctly, etc.

With a "new" language you can put a lot of that as static-typing style compile requirements. Essentially more guard rails to using a library correctly, and an opportunity to make a set of scoped primitives specific to what problem you're trying to solve. Ruby on rails is the best example of this sort of system.

Again though, I know it's a huge pipe dream, and for these ideas to go forward will require a LOT of change in our intuitions and methods of developing software. I'll never stop dreaming though ;)


There's no doubt it is extremely powerful. But the result is everyone invents their own DSL, which is inevitably undocumented and nobody else wants to touch it.

> However, I also don't think there's much issue with some custom languages as part of a normal codebase when used in moderation.

Easy to say, but too many step over the line.


With great power comes great responsibility, and unfortunately relying on everyone to be responsible doesn’t scale.


I hear this concern a lot, but I’ve never actually seen it happen any more than it does in languages without macros.

https://news.ycombinator.com/item?id=31520381


I've seen it enough. Case in point, back in the 80's a friend of mine worked at Microsoft. A manager came to him with a lament that there was a program that compiled to 50K, written entirely in macro assembler. It had a bug in it. He had assigned it to multiple programmers, each of whom failed after trying to fix it for several weeks.

The problem was the author had invented his own language with the macros, left it completely undocumented, and nobody was able to make heads or tails of it.

My friend said, no problem, he'd give it a try. 2 hours later he had fixed it and checked it in.

Astonished, the manager asked him how he'd figured it out. My friend said he didn't. He ran it through the disassembler I'd written https://www.digitalmars.com/ctg/obj2asm.html (which is why he related the story to me) because it turned object code into asm source code. Found the bug, fixed it, and checked in the new source code.

Not only that, I discovered I could not read my own macros a few years after I'd written the code.

I've seen similar terrible examples that make use of expression templates in C++. They were all the rage for a couple of years. Fortunately for C++ programmers, expression templates are so miserably slow to compile that a blessed damper was put on the exploitation of this discovery.

Feel free to disagree with my assessment of this. You'll have plenty of company. But use macros enough and eventually you'll agree with me :-)


> But use macros enough and eventually you'll agree with me :-)

I spent a long time writing Clojure professionally, and macros were used extremely rarely. People definitely didn’t create their own languages.

I guess it may also be a function of the other abstraction facilities in the language, the culture around the language, etc.


On the other hand, Rust's macros enable one to use future language features without ever touching the compiler.


Clojure seems to be gaining more and more momentum, unfortunately just as I changed jobs away from a clojure shop.


How is it gaining momentum? "Who's hiring" posts have zero clojure positions and that is a place where you might expect to see such positions.


While a fair assumption, we're only seeing a small subset of software jobs here: mostly SV-style big tech. And with that a set of specific technologies that have become popular in the space.

A lot of smaller shops use more unique technologies for different fields that I'm sure are using clojure for some things.

Frankly, I'm just guessing, and would love to see some hard data on this stuff.


in some sense Lisp was taken seriously:

Go is pretty similar to a lovechild of C and Scheme, for instance, Python and Ruby have many handy pragmatisms from Lisp style development, and so on.


Agreed. I think a reason some would react negatively is every new language decreases the ratio of things-I-know : things-I-dont-know which feels threatening to those who are in a habit of zero-sum thinking.


It is a sufficient reason, but that does not mean that you cannot also mention other reasons, and how it might be better for some kinds of uses, etc.


As a programming language author, I find this attitude awesome. I love it. Thank you.

I'll share an example of my current fun example: http://book.adama-platform.com/examples/tic-tac-toe.html


Yeah, I plan on making at least 5 of them for one project. I think more people/projects should reach for making/using different programming languages instead of trying to solve every problem in the same language.


I'm going to leave some simple directions for creating and running Cell-o World. The website is verbose and it doesn't spell out blow-by-blow how to do this in "getting-started":

1. Download the Cell compiler from the website and ensure you have a recent enough version of Java installed.

2. Make a copy of one of the example project directories, /send-msgs/, called /hello-world/.

3. Change directory to /hello-world/.

4. Command to compile: >java -jar ../../bin/cellc-java.jar project.txt ../hello-world/

5. Command to run: >java Generated.java init-state.txt msg-list.txt output.txt

Now all you have to do is read the docs, get hands-on, experiment, and you can get a feel for how Cell works.


> ensure you have a recent enough version of Java installed

Now we know why TFA thinks clojure is the only lang to innovate wrt state problems :)

Joking aside, I no longer have a JDK in any of my builds and couldn't be happier. Oracle has made the JDK practically into spyware.

Given the success of rust, haskell and go, it seems antiquated to be designing another JVM lang in this day and age ...


> Because there's currently no high-level programming language for writing stateful software that I'm aware of.

Wot? Here is how you handle state in a pure functional language like Haskell:

update :: Event -> State -> State

and if you want a return result:

update :: Event -> State -> (State, Result)

and if you can't be bothered to type "State" all the time:

Use a state monad (the clue is in the name).
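For that third option, a minimal sketch using Control.Monad.State (from the mtl/transformers packages; the Event type and the Int counter are invented for the example):

    import Control.Monad.State

    -- Invented example: events update an Int counter; the State monad
    -- threads the counter through implicitly instead of as an argument.
    data Event = Add Int | Query

    handle :: Event -> State Int String
    handle (Add n) = do
      modify (+ n)                         -- update the state
      pure ("added " ++ show n)
    handle Query = do
      n <- get                             -- read the state
      pure ("current value is " ++ show n)

    main :: IO ()
    main = do
      let (results, finalCount) = runState (mapM handle [Add 2, Add 3, Query]) 0
      mapM_ putStrLn results
      print finalCount                     -- prints 5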

I highly recommend that language designers do a bit of research before making claims about the uniqueness of their new language designs. As far as I can see there isn't a single feature in the language design that hasn't been done already in other languages.


I think the claim is in regards to the persistence of state. I've dealt with this as I created https://www.adama-platform.com/

A key finding is that the data model needs to handle process failures and upgrades/downgrades. This is why the memory model for Adama is simply JSON.


Where all of these fall over is data migration, which I wasn't surprised to see was missing from TFA. Having a runtime data model that "serializes itself" is lovely until you have to change the model of an existing dataset.

It's why Protobuf is such a complicated mess. It's why ORMs are usually a bad idea. Persistence is just hard over time.
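To make that concrete, here's a toy sketch of the explicit migration step that "the data model serializes itself" tends to hide once the shape of the data changes (the Person types are invented for illustration):

    -- Version 1 of the persisted record had only a name.
    data PersonV1 = PersonV1 { nameV1 :: String }

    -- Version 2 adds a field, so old data has no value for it.
    data PersonV2 = PersonV2 { name :: String, email :: Maybe String }

    -- Someone has to decide, explicitly, what old records mean under the new schema.
    migrateV1toV2 :: PersonV1 -> PersonV2
    migrateV1toV2 (PersonV1 n) = PersonV2 { name = n, email = Nothing }

    main :: IO ()
    main = putStrLn (name (migrateV1toV2 (PersonV1 "Ada")))  -- prints Ada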


Great stuff! Reactive programming is seriously undervalued as a paradigm. I suspect we'll see a lot more work with it in the future.


This is really great, because it touches on the concepts of hardware design. I've thought a lot about this myself, and I'm honestly surprised to see someone else discussing this. Maybe I'm way out of my depth here, but the idea of stateful programming doesn't actually fall under the paradigm of sequential programming (think single-threaded programming). We all like sequential programming for the same reasons we like linear signal processing and Euclidean geometry. When things break those paradigms, we often create mental models (transformation functions) to try to emulate them back in the linear world where the math is easier (for the most part; I'm not too well versed in those subjects, it's more of an analogy). And I think that's what this language is trying to do, except for certain concepts of non-sequential programming.

Now I sort of disagree with the author when they state that no high-level language exists for stateful programming, because an HDL (used for designing chips and programming FPGAs) is exactly this. Now, is SystemVerilog a high-level language? Maybe not; maybe the author is right.

Interestingly enough, a lot of hardware blocks (think of an Ethernet controller) use a special-purpose hardware-based state machine to control and manage the link status. While this is less flexible and less programmable than a CPU (which is also a state machine at its core), it uses far less power.

You can think of a CPU as a state machine that can emulate other state machines (though that's probably not a great way of thinking of a CPU because it can do many other things.) However, an FPGA can do it far more efficiently, but it's a pain in the butt to program. Now I'm not suggesting we do everything on FPGAs, but as we start to run up against the physical limitations of speed and transistor density, I think it's good to think about other paradigms of computing and how we can more efficiently compute things.


This is interesting. I've been trying to model UI flows in a "CPU-like" manner, as incremental state transformations that are committed at most once per frame. That's the basis of some UI frameworks like Jetpack Compose, which bundle the incremental state in a global snapshot, sidestepping most of the pitfalls of mutable state by controlling when mutations are applied. What's neat about this paradigm is that you can write fairly naive code that reads and mutates state with abandon, and the snapshot mechanism lets the code just work without modeling the whole app as a state machine.


Previously: https://news.ycombinator.com/item?id=15797831

Interesting that there is no author information, even at the Github project where there's just a pseudonymous "cell-lang" user. Also no license information.


I'm still reading this, but I have this nagging feeling about how great it would be to team up.

My programming language chops are not cutting edge, but I'm bleeding edge in real-time distributed systems. I've built https://www.adama-platform.com/ and I'm playing with the language to map idioms that work into a language which can enforce good discipline.

Now, I'm highly unfocused as I'm a retired-monk code machine (I just turned my serverless VM into a tiny webserver so I can build a serverless Twilio bot). I'm curious about things, so I plan to deep-dive into this and provide feedback in the coming weeks.


I'm also working in the languages space; I would love to hear your feedback on them and your ideas in this realm in general, especially since they don't seem to be that active anymore. My email is my username @gmail.com, feel free to reach out!


This is an inspiring read. Without looking at the code, there are so many revolutionary ideas in here for more usable programming languages. The ideas regarding stateful execution seem both useful for highly parallel tasks and even hardware description.


Where is the source? Is this project still active (https://github.com/cell-lang)? The compiler hasn't seen any activity in the last 4 months.


"Because there's currently no high-level programming language for writing stateful software that I'm aware of. That's the short answer."

Huh?


It's frustrating that innovators are constantly having to justify "why yet another". Folks should just be able to freely innovate without fear of getting spammed with xkcd#927. You wanna make yet another programming language, compiler, JSON-alternative, container format, or trivial FTP+SVN killer, go for it! This is how we innovate.


When people write "why yet another", they usually mean "why this one specifically". And that's the case here, where the author lists what they think are the weaknesses of the state of the art, and explains why they think their innovations help with these problems. Innovation on its own isn't particularly interesting - I want to know why the author is innovating.


My Bayesian priors want to know more about the author.


> innovators are constantly having to justify ... This is how we innovate

Quite the opposite.

Having to explain the benefits of an idea does not thwart innovation - if anything it promotes it.

The software industry is littered with novelties that have little innovative value other than being the new shiny thing that people want to put on their CV.

Churn is at a historical high.

Providing solid reasons for adopting new technology is a breath of fresh air.


Agreed! Even if the new thing doesn't reach mainstream adoption it might still cause change in the mainstream choices - like how Redux was inspired by Elm.

Secondly I think it's sad that someone building something will be asked to justify their decision in terms of economics. If it makes that person excited and happy to build a thing then I don't think any further justification is needed


In my experience, people believe that programming languages are a solved space, and we should stick with what we have. It's an unfortunate view.

Languages are actually very polarized today. I think there's a lot of room for a mainstream language that could be safe, fast, and most importantly, easy. Today's languages are generally two out of three.

Luckily, a lot of languages are exploring that space!

* Vale is blending generational references with regions, to have memory-safe single ownership without garbage collection or a borrow checker. [0]

* Cone is adding a borrow checker on top of GC, RC, single ownership, and even custom user allocators. [1]

* Lobster found a way to add borrow-checker-like static analysis to reference counting. [2]

* HVM is using borrowing and cloning under the hood to make pure functional programming ridiculously fast. [3]

* Ante is using lifetime inference and algebraic effects to make programs faster and more flexible. [4]

* D is adding a borrow checker! [5]

[0] https://vale.dev/

[1] https://cone.jondgoodwin.com/

[2] https://www.strlen.com/lobster/

[3] https://github.com/Kindelia/HVM

[4] https://antelang.org/

[5] https://dlang.org/blog/2022/06/21/dip1000-memory-safety-in-a...


Languages are not that polarized. Most languages are in fact compromises, design-wise, rather than very principled (polarized). Rust[1] and ATS are the only no-compromise newish (this century) languages that are very principled. (EDIT: in the high-level with low-level control category.)

Being supposedly easier is also easier (design-wise) than being principled, since you are able to compromise on whatever dimension (like e.g. memory safety) as long as you make things just a bit easier than the competition.[2] A language like Rust, on the other hand, has to (1) make Safe Rust memory safe and (2) make it possible to create Safe Rust APIs using Unsafe Rust.

Polarized design is the road less travelled.

[1] All your examples mention “borrow checker”…

[2] For propaganda purposes, a subjective criteria like “easier to use” (than e.g. Rust, if that is applicable) is better than a technical criteria like being memory safe.


I suppose you could add a few to this list: Koka has formalised side effects, and thanks to that can do in-place mutation after static analysis, similar to HVM. It also uses static analysis to manage memory with elided reference counting. Lastly, Koka has managed to utilise static typing with dynamic binding, in contrast with cell-lang, which avoids dynamic binding (which has its use cases). https://koka-lang.github.io/koka/doc/book.html#sec-effect-ty...

Flix does something similar to Cell, though the typing is worked out better, as lattice types ensure that there is an unambiguous top and bottom type. Additionally, where Cell cannot compute dependent values, Flix can, as it uses constraint modelling rather than reactive computation, i.e. the algorithm computing the rules is formally worked out to cover the edge cases. https://flix.dev/principles/

Maude handles subtyping and typechecking of said subtypes through equational and rewrite logic. It has the concept of purely functional modules as well as impure (system) modules, but adds to that the math theories that represent the modules, so you get a lot of formal verification techniques at your disposal while programming. http://maude.cs.illinois.edu/w/index.php/Maude_Overview

Composita covers the idea of removing pointers, and restricting components so you can use concurrency in anger (and managed memory without GC at the component level) https://concurrency.ch/Content/publications/Blaeser_Componen...

Kali makes a good job of migrating processes - where cell restricts the ability to do closures due to the extended value set, kali can walk the call tree to migrate all linked state. http://community.schemewiki.org/?Kali-Scheme-Revival

I think cell looks interesting, but there seem to be restrictions here to simplify/avoid some of the harder problems. That in itself is not a bad thing, but it's worth noting given that some of these problems have been tackled individually above. I'm not quite ready with a blend of these techniques, but it looks as though they are compatible with each other.


Wow, I've never heard of most of these languages, and I consider myself a language expert! How did you learn of these?


Mainly reading threads here :) I take a stab at looking at the top headlines. There's a pattern in a headline, e.g. mentions of GC, Datalog and so on, which I flag.

The exceptions are Composita and Maude. Composita is off the back of Oberon, which is one of the smaller language-based OSes. Maude I found on a keyword search around formal verification- I'm looking to build a new language so interesting stuff keeps coming up when you go down the rabbit holes ;)


I don't see this as a problem. If someone is creating something new they have a reason, something that is motivating them. Explaining these reasons lets others know whether the project is something that interests them or not. This is especially important for collaborative projects: it attracts people whose goals meet your own and who can contribute additional ideas that you wouldn't have thought of, without being a distraction by trying to pull the project in a different direction than you had in mind. It is also important for attracting users, if that is something you want.

It's not justification, it is just communication.


I think the reason for the justification is because in a field like programming where we have so many different languages, if there are any newcomers you'd want to know what sort of gap they have found which existing languages do not fill.

As others have pointed out, there's a difference between creating something just for the experience of it, and creating something because no other similar solutions exist. This project seems to fit in the latter category hence why a justification has been offered.


IMO it’s important to distinguish between projects that are following a well-beaten path just to follow it (i.e. writing a toy lisp), and projects that are intentionally trailblazing. If you want to find innovation you want to follow the latter projects rather than the former.


I find it valuable that they spelled out their motivations and agenda for creating this language, which will help prospective users to decide whether they need to take this language seriously or not. This helps attract users, and arguably the main value of any language is how well users can interoperate with it.

Folks can freely create anything, but it might be a shame for such efforts to go to something that only few people will use.


Very interesting.

In my domain of backend business rule + SQL development I think a perfect programming language would combine ideas from "Out of the tar pit" with event sourcing.

So perhaps something in sort of the same direction as Cell, but also focused on state changing (really, aggregate computation) from certain events having happened...


This seems like an interesting idea for microcontrollers, where you're often wrestling with state and concurrency.

I'm guessing it's too heavyweight, though.


On a technology that has proper closures, there is no reason why OOP used functionally wouldn't be a good fit for the proposed problem.


When I clicked on the page I wanted to see an example and know if the language is meant for someone like me

I saw a huge page and clicked on links to find more pages just as big. I figure it's not for me because I love mini examples and low level and saw nothing like that.

So I wanted to say, you're probably not targeting me but make sure whoever you are targeting will know it's for them based on your site. No matter how nice the page looks, it's a large page and it's hard to know who it's for at a glance


It has a link labelled "Introductory example" at the top left of the linked page. ...in the section called "Start Here". I'm not sure how much easier they could make it to see some example code with detailed explanations.


wc says it's 5645 words. That's 10 pages. Are you saying 10 printed pages isn't a large page? I'm not sure how much easier you could have understood me.


Don’t worry yourself about when the next programming language will be written.

Worry instead about when the last one will be written.


I hope that Rust is the last language to be written, because frankly there is so much rewriting-into-Rust going on that it will be super demotivating if it turned out that everything has yet to be rewritten again!


Isn’t SQL the language of state keeping?


yes. and we could really use some effective new blood in that space.


A spreadsheet becoming a language. :)


Reminds me a lot of Alan Kay's ideas about biology and code.


It's neat this can compile to C# and Java.


I am in industrial automation and functional safety of machinery, especially complex sequenced machinery like burner management systems, mine winders, bulk handling stackers/reclaimers/shiploaders, route sequencing for bulk handling/grain, etc., and architecturally this is very similar to the point I have gravitated to over 25-odd years of trying to find optimal ways of specifying highly predictable and reliable machine behaviour, with optimal code outcomes and strong traceability back to the specification.

Basically state machines, but a little bit more, and a formal way of specifying them.

Eventually I got to the point where I built a package in PyQt that allows formal specification in a regularised way, using a form of a "dynamic cause and effects" chart that at first glance looks a little like Excel, per state machine. But it allowed the specification to be executable (so it is also a simulator you test-run your code on as you develop your state machine(s) to verify behaviour), could generate documentation programmatically, and could also effectively write export files for the code directly for the target controller, if desired.

Originally it was a means to fully defined and predictable behaviour, with easily demonstrable traceability of requirements to code, but it also turned out to be the sweet spot to bridge the world of, say, the combustion engineer who knew the general logic/steps in his mind but did not code, and the systems engineer coding the controller for a BMS system.

The combustion engineer got full control of specifying the desired behaviour, and the systems coder only had to worry about implementing exactly what was specified into code for the safety PLC controller of choice. Often there is a large overlap there when the specification is the traditional English-language "crappy narrative" and there are endless TQs and functionality updates flowing back and forth.

Then I discovered there is an IEC standard related to distributed real time control architectures, where they are narrowing down on how to best specify and implement such schemes. But one of the core underlying essences is the executable specification, which they do by trying to go down the path of XML and OPC. You might like to check out IEC61499. But they are still working on a lot of details and last I checked there are only a few reference projects where they got it anywhere near right at scale and made it really work, but a workable framework seems to be emerging.

Based on my experience of real-time controls, this state-based methodology solves a lot of problems of randomly stateful code. For my work, you usually know if you got it right pretty quickly because your iron ore stacker or shiploader or burner system does not misbehave unexpectedly, though there can be latent issues in automation code that only come out in certain circumstances years later, but much less so with a state-based approach. I'm aware of one case where, over ten years after delivery of some special controllers, circumstances lined up and a code deficiency led to a loss of well over a billion dollars for one of the biggest miners on the planet.

I think Cell lang is really something that could take off if enough people realise what it is and the benefits it might give.

But it needs a slightly different way of thinking about your problems and how to solve them in a way that best fits to the architectural framework that statefulness uses. Some people seem to take to it, some adapt poorly.

Because it lends itself to creating code that is effectively an engine and "configured" with data in the possible languages available on PLC type controllers, I've had some problems with a few ICS engineers who just don't get it and can only see the code as near meaningless.

The first time you build a larger dynamic system out of a hierarchical state machine arrangement it can be a little challenging, but you definitely get the hang of it; UML timing diagrams can be your friend.

But you can end up writing incredibly efficient code with the state machine approach: each machine can only be in one state, only logic related to the current state is executed, and all outputs are a function of state only (assuming Moore machines are enacted), so troubleshooting can be dramatically reduced as you effectively get encapsulation by architecture.
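As a tiny illustration of that Moore-machine style (the states, events and outputs here are a made-up toy, not anything from a real burner or stacker controller):

    -- Moore machine: outputs are a function of the current state only;
    -- transitions are a function of (state, event).
    data MachineState = Idle | Purging | Running  deriving (Show, Eq)
    data Event        = Start | PurgeDone | Stop  deriving (Show, Eq)

    transition :: MachineState -> Event -> MachineState
    transition Idle    Start     = Purging
    transition Purging PurgeDone = Running
    transition _       Stop      = Idle
    transition s       _         = s              -- ignore events that don't apply

    output :: MachineState -> String
    output Idle    = "burner off"
    output Purging = "fan on, fuel valve closed"
    output Running = "fuel valve open"

    main :: IO ()
    main = mapM_ (putStrLn . output) (scanl transition Idle [Start, PurgeDone, Stop])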

I look forward to seeing how Cell lang develops, I believe it has many, many advantages it might offer and has only just begun to yield some of the possible benefits. I believe it could also potentially be a new higher level language for real time and distributed controls/automation in the future if the standards committees can see past their nose, and ultimately be the language of choice for such applications, or something like it might be.

Final point: state machines lend themselves well to formal methods, so there is also the possibility of a TLA+ transcriber from Cell to allow very painless formal-methods verification of things like: all states can be reached, all states have an achievable exit transition condition, and so on. Real-time guys love this sort of shit when it matters (aerospace etc.) and really everyone should at some level.

Final wish: I would like to see an XML interface that allows translation in and out, even if only for a key restricted subset of the language. It doesn't look too hard and could open up a world of interfacing to existing tools of various kinds; a lot of ICS and real-time engineers love to exchange interface and functionality data in XML if their systems are organised to use it.


Check out ballerina.io


That's awesome, that would be great to convert to use in real time applications.


I love the concept and would like to see some real-world programs implemented in this language :)


What's with people tacking "Lang" on to the end of language names? I can kind of understand it with Go, since Google didn't exactly come up with a very searchable name, but now I'm seeing it spread to other languages like Rust, and now here with Cell.


I like the convention. Even if you pick a word that isn't very common, you are still unlikely to be able to register a domain name for that word, and will have to amend it somehow, and search engines will still have to guess which use of the word you meant. Having a common convention simplifies searching for things, as opposed to having to type out "Rust Programming Language" every search.


First result in Google for "Rust" is this:

https://store.steampowered.com/app/252490/Rust/

Rust the language is three years older, but I suppose those who bought the domain were aware that it's a common noun likely to be used elsewhere.


Lisp does it too (https://lisp-lang.org/) - I think it's a reasonably familiar convention


I'm not talking about domain names. I'm talking about people referring to the language itself as, for example, Rustlang.


Right, but the colloquial usage stemmed from the domain name :)


I don't think I've actually seen anyone refer to it as Lisplang.


"Rust", "go", "cell", all of those words have a preexisting meaning which is more common than the corresponding programming languages.


This is true of everything that is named after something, and it wasn't really something people worried about until Go, which is interesting in itself.


I think the internet was less pervasive before, so it mattered less for C, C++, Java, and it matters a lot more now.

So now, if you want to call your language Duck or Car, you better add that "-lang"! It seems to work well enough.


Well, "cell" isn't too searchable either


"Cell" is going to have lots of search-collisions, and it can also help with finding a domain name


Odin is one language that could use this convention. My searches often turn up things on The Odin Project.



