The feature just makes the program run identically to the last run. It's useful for reproducing bugs: if you encounter one, start the program in "replay" mode, supply the recording from the last run, fire up the debugger, and watch as the bug is reproduced perfectly.
It's kind of like a time-traveling debugger, but with stronger guarantees.
I'm particularly excited about combining this with concurrency: we'll never have to spend hours hunting down mysterious race conditions again!
Total functional programming is indeed another example of a valid approach to weakening cycles and thereby guaranteeing termination. Although it works for that, it is too restrictive for my taste. I am looking for less restrictive forms of weakening that allow creation of a broader range of algorithms (including "imperative") that we can still guarantee will terminate.
> Although it works for that, it is too restrictive for my taste. I am looking for less restrictive forms of weakening that allow creation of a broader range of algorithms (including "imperative") that we can still guarantee will terminate.
How is it weak? Surely it's equivalent to any other approach: given a function and any form of termination guarantee for that function, you can encode that guarantee as a phantom type and then you have an expression of that function in a total functional language.
I mean, how long does it take to convince yourself a function will halt vs. how long does it take to implement it with well-founded recursion, with comparable readability? Not that it's not possible, it's just more of a pain.
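To make the comparison concrete, here's a rough sketch (in Rust, purely as an illustration; the function is my own, not from this thread) of what recursion over a well-founded measure looks like. Convincing yourself informally is a one-liner; a total language wants that decreasing measure spelled out before it will accept the definition.

```rust
// Illustrative only: Euclid's gcd terminates because the second argument
// strictly decreases on every recursive call (a % b < b when b > 0), i.e.
// the recursion is well-founded on `b`. A total language asks you to make
// that decreasing measure explicit before the definition is accepted.
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

fn main() {
    assert_eq!(gcd(48, 18), 6);
}
```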
AIUI actors are regularly idle. However, you are right that Pony offers a clever mechanism not offered by Erlang, which allows an actor to effectively despawn itself. I believe it does this by noticing when no other actor has a reference to it. If no other actor has a reference to it, no one can send it any messages. If it also has no messages in its queue and it is idle (not currently dispatched on a message), we know it will never again receive any messages, and therefore it is safe for it to free itself, because it will never again have work to do.
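In rough pseudocode (Rust-flavored, and purely my paraphrase of the condition described above, not Pony's actual runtime logic):

```rust
// My paraphrase of the despawn condition, not Pony's actual runtime code:
// once nothing can ever send the actor another message and it has nothing
// left to do, it can safely free itself.
struct Actor {
    external_refs: usize,   // references other actors hold to this actor
    queued_messages: usize, // pending messages in its queue
    running: bool,          // currently dispatched on a message?
}

fn can_collect(a: &Actor) -> bool {
    a.external_refs == 0 && a.queued_messages == 0 && !a.running
}

fn main() {
    let idle = Actor { external_refs: 0, queued_messages: 0, running: false };
    assert!(can_collect(&idle));
}
```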
Author here. You are 100% correct that Pony makes a strong claim about being deadlock-free: "It’s deadlock free. This one is easy, because Pony has no locks at all! So they definitely don’t deadlock, because they don’t exist." (from the first page of the tutorial). Sylvan Clebsch is a really smart guy and Pony is a stunning piece of engineering based on rigorous analysis. He would surely know.
So far as I can determine, this part of the claim is true: Pony makes no use of locks, not in the runtime, and none are surfaced to the Pony programmer. The only synchronization mechanism Pony uses is the MPSC queues that make message passing between actors possible. Each actor has its own queue, and it is only "active" when it has messages on its queue that need to be serviced. The work-stealing scheduler gives every actor with messages an opportunity to process at least one message. When it does so, it never blocks and (hopefully) finishes with no requirement to return a result to any other actor (although it may pass messages to other actors if it wishes).
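As a loose illustration of that mailbox discipline (ordinary Rust channels standing in for Pony's runtime queues, so the details are only analogous):

```rust
use std::sync::mpsc;

// Rust channels standing in for Pony's per-actor MPSC queues: many senders,
// one receiving actor, and a scheduling "turn" that drains pending messages
// with try_recv so the actor's behaviour never blocks waiting for anything.
fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Other "actors" (here, just the same thread) enqueue messages.
    tx.send("hello".to_string()).unwrap();
    tx.send("world".to_string()).unwrap();

    // One turn: process whatever is queued, then yield back to the scheduler.
    while let Ok(msg) = rx.try_recv() {
        println!("processed: {msg}");
    }
}
```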
Now consider Wikipedia's definition of a deadlock: "a deadlock is a state in which each member of a group is waiting for another member, including itself, to take action, such as sending a message or more commonly releasing a lock". Notice that deadlocks can occur not only because of locks (which Pony does not have), but also because of waiting on another actor to send a message. The wiki article lists four conditions for a deadlock, which (it turns out) can also occur in a message-passing architecture.
How would Pony trigger this sort of deadlock? Consider implementing the dining philosophers problem in Pony by having the main actor spawn five fork actors (each in an on-table state) and then five philosopher actors, telling each the fork actor to their left and the fork actor to their right. The philosopher actors all wake up hungry, so they reach for their left fork first by sending it an acquire message, and then effectively suspend. If the left fork A is in the on-table state, it changes its state to indicate it has been acquired by a specific philosopher and sends a message back to the philosopher to say it has been acquired. That reactivates the philosopher, which now sends an acquire message to the right fork B. But maybe by this point in time another philosopher has already acquired that right fork B as its left fork! So fork B sends a message back to the requesting philosopher to say it is not currently available to be acquired, try again later.
If you walk through the implications of this carefully, you will see that although every philosopher is being activated by messages regularly, at a certain point in time it is possible for meaningful forward progress to cease for at least one philosopher. At least some philosophers will starve: they become deadlocked, unable to get the two forks needed to eat, because they are effectively contending over access to shared resources.
The situation is semantically equivalent to using locks for the forks instead. And if we apply Dijkstra's partial order to how the philosopher actors acquire their forks in an actor-based message-passing architecture, the deadlock risk vanishes and all the philosophers have a reasonable chance, eventually, to eat, thereby guaranteeing eventual forward progress.
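To sketch what I mean (in Rust rather than Pony, with made-up names, so treat it only as an outline of the message protocol and of the ordering fix, not a working implementation):

```rust
// An outline of the acquire/grant/busy protocol described above, plus the
// partial-order fix. Names are invented; no actor machinery is shown.
#[allow(dead_code)]
enum ToFork {
    Acquire { philosopher: usize },
    Release { philosopher: usize },
}

#[allow(dead_code)]
enum ToPhilosopher {
    Granted { fork: usize },
    Busy { fork: usize }, // the "try again later" reply that lets starvation creep in
}

// Dijkstra's fix, expressed as the order in which a philosopher issues its
// Acquire messages: always request the lower-numbered fork first, which
// breaks the circular wait even though no locks are involved anywhere.
fn acquisition_order(left: usize, right: usize) -> (usize, usize) {
    if left < right { (left, right) } else { (right, left) }
}

fn main() {
    // Philosopher 4 sits between fork 4 (left) and fork 0 (right); under the
    // partial order they must request fork 0 first.
    assert_eq!(acquisition_order(4, 0), (0, 4));
}
```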
Does this example answer your question?
Author here. Cool story about the odd perfect number example. That said, my post is not trying to find a way to crack the halting problem and thereby demonstrate it is not real. Turing wrote a most excellent proof, and I stand by it wholeheartedly. I even cite another famous numeric example of a program we still do not know will terminate for all inputs: the Collatz conjecture.
Notice my description: "Turing proved that no general algorithm can be formulated that can answer this question across all possible programs." The operative part here is "across ALL possible programs". The proof does not stop us from knowing that SOME programs will or will not halt. It only stops us from being able to determine this for EVERY possible program. My post explores a very specific subset of programs that we can prove will terminate, then asks what pattern(s) underlie such programs, and then explores the use of such patterns in a variety of interesting problems. It is this last result that most intrigued me and caused me to write about it.
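For concreteness, the Collatz iteration is tiny to write down, yet nobody can prove the loop below exits for every input; it is exactly the kind of program that falls outside the subset my post explores (a plain Rust sketch of my own, just for illustration):

```rust
// The Collatz iteration: trivially small, halts for every input anyone has
// tried, yet no proof exists that it halts for all n. (For illustration only;
// this ignores the possibility of 3*n + 1 overflowing u64 for huge inputs.)
fn collatz_steps(mut n: u64) -> u64 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    assert_eq!(collatz_steps(6), 8); // 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
}
```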
After some digging I found several blogs that make the same distinction as you do.
However, basically all of the literature calls the proof of the undecidability of the halting problem a proof by contradiction.
So I guess that there are some intuitionists that make that distinction. I'm not sure that I'd call it a direct or constructive proof though.
The thing is that if you follow constructive reasoning and you assume that the Church-Turing thesis is true, then all of math becomes somewhat small. No more uncountably infinite stuff, no dense reals, computable objects only, etc.
I feel like we've painted mathematics into a dogmatic corner, and tbh I have no idea how to get out of it.
Hey - I am thrilled to hear about your positive experience with interning strings. I never actually did a performance test, so I am delighted to hear your gains were substantial.
As for your questions about performance on bigger source files and twiddling optimization options, I too am curious about that. I will likely revisit those questions at some point in the future. It will be easier to do once I have baked these diagnostics into the compiler.
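For anyone reading along who hasn't tried it, the core of the technique is small; a rough sketch (my own simplified version, not the compiler's actual code) looks something like this:

```rust
use std::collections::HashMap;

// A simplified interner, not the compiler's actual code: each distinct string
// gets a small integer handle, repeated occurrences share that handle, and
// symbol comparisons become integer comparisons. (A production interner would
// also avoid storing the string twice, as this toy version does.)
#[derive(Default)]
struct Interner {
    ids: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> u32 {
        if let Some(&id) = self.ids.get(s) {
            return id;
        }
        let id = self.strings.len() as u32;
        self.strings.push(s.to_owned());
        self.ids.insert(s.to_owned(), id);
        id
    }

    fn resolve(&self, id: u32) -> &str {
        &self.strings[id as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("ident");
    let b = interner.intern("ident");
    assert_eq!(a, b); // same symbol, same handle
    assert_eq!(interner.resolve(a), "ident");
}
```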
Very well said. This is very much what I am after. Some common idioms in Rust or C++ just feel unnecessarily verbose to me (e.g., `<exp>.unwrap().borrow_mut()`), and I am looking for straightforward ways, often borrowed from other languages, to express the same intent more succinctly and clearly.
> There is no available reference type that can target objects from different allocators.
Cone actually does support Rust-like, lifetime-constrained borrowed references which can do exactly that safely. Cone also supports raw pointers (however, de-reference safety is the responsibility of the programmer).
I appreciate the chance to learn about your language's unique form of reference type. I am less likely to call them safe than you are, no doubt because I use a different criterion for safety. A key requirement I have placed on references (vs. pointers) is that you can always de-reference them and get a valid value with no chance of exception. I don't think your references would comply with this.
A Cone programmer would need to use raw pointers to throw off the shackles of lifetime constraints but, unlike with your references, they could not expect such pointers to turn into nullptr if the object they refer to has been freed. Given the nature of Cone's design, there is no way to accomplish this mechanic with decent performance, especially given that borrowed references and raw pointers are both able to point inside an allocated object.
I do appreciate your bringing it to my attention and wish you all the best with getting others to learn about and adopt your language.
> Cone actually does support Rust-like, lifetime-constrained borrowed references
Ah, so kind of a super-set of Rust functionality. Presumably these would require a "borrow checker" or equivalent? Is that already implemented? So how do you address the safety of, say, taking a reference to an element in a (resizable) vector? Rust's "exclusivity of mutable references" restriction intrinsically makes the vector immutable while the borrowed reference exists, but do I understand that Cone doesn't have that restriction? The "C++ lifetime profile checker", on the other hand, makes the vector non-resizable (but leaves the data mutable).
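For reference, this is the Rust restriction I mean, in miniature (a toy example of my own, not code from either language):

```rust
// A toy example of the restriction in question: while a borrow of an element
// is live, the vector cannot be mutated (e.g. resized), because a resize
// could reallocate the buffer and invalidate the reference.
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into the vector's buffer
    // v.push(4);      // error[E0502]: cannot borrow `v` as mutable
                       //               because it is also borrowed as immutable
    println!("{first}");
}
```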
> A key requirement I have placed on references (vs. pointers) is that you can always de-reference them and get a valid value
SaferCPlusPlus provides both a pointer that throws an exception if you attempt an invalid memory access (though it could just as easily return an optional<>, and you can always query if the target is valid), and one that terminates the program if its target is ever deallocated prematurely (and thus (technically) satisfies your criteria). (The latter has less/minimal overhead.)
> A Cone programmer would need to use raw pointers to throw off the shackles of lifetime constraints
Or reference counting pointers or GC, right? The features needed to implement the pointers I mentioned are either support for calling a destructor on move operations or the ability to make an object non-movable, and support for copy constructors or the ability to make an object uncopyable. Does/could your language support some combination of those features?
I explain the reason the pointers are important in an article called "Implications of the Core Guidelines lifetime checker restrictions" [1]. Specifically, I give an example of reasonable C++ code [2] that historically had no corresponding efficient implementation in Safe Rust. (I think it's still the case, but I haven't fully investigated the implications of Rust's new "pinning" feature.) It can, however, be implemented in a memory safe way using the SaferCPlusPlus pointers in question [3]. Basically the example just temporarily inserts a reference to a (stack allocated) local variable into a list (or whatever dynamic container), given that the local variable does not outlive the container.
> especially given that borrowed references and raw pointers are both able to point inside an allocated object
SaferCPlusPlus has the equivalent of "borrowed references"[4] (though even more restricted until C++'s "borrow checker" (the aforementioned "lifetime profile checker") is completed), and they can safely point "inside" (allocated) objects. Note that the second safe pointer type (the one that potentially terminates the program), is a "strong" pointer, and there is a simple mechanism for obtaining a "borrowed reference" from a strong pointer [5]. And from there, a simple mechanism for obtaining a reference to an (interior?) member [6].
The other pointer (the one that potentially throws an exception) would be considered a "weak" pointer, and you cannot obtain a "borrowed reference" directly from a weak pointer. But often, the weak pointer is used to target an object that can yield a borrowed reference (like a strong pointer, for example), or a borrowed reference directly.
> all the best with getting others to learn about and adopt your language.
You too :) I can see the appeal of this sort of clean, flexible language. But in the case of SaferCPlusPlus the goal is not necessarily (just) for programmers to adopt it. In part it's maybe a demonstration (to language designers such as yourself :) of a set of language elements that use run-time safety enforcement mechanisms but are a little more flexible than (and might be a good / (unintuitively) needed complement to) their counterparts that rely on strictly compile-time safety enforcement.
Oh and if you do get around to checking out SaferCPlusPlus in more depth, apologies for the inadequate documentation in advance. Feel free to post any questions you might have. :)
Yes, borrowed references being lifetime-constrained means that I have a "borrow checker" that ensures that. It is only partially implemented.
You are correct that Cone supports a static, shared mutability permission, including on borrowed references into resizable arrays. The short safety answer is that array resizing is only possible when you have a unique reference to the array, so you can't run into the trouble you describe. I wrote a post about it.[1]
You left out an important clause I specified in my criteria: "with no chance of exception". Terminating the program in the event of dereferencing a reference does not meet the safety requirements I set for Cone references.
Yes, only borrowed references have lifetime constraints. I did not mention the allocator-based references in that quote because of context. Cone does support a distinction between move vs. copy types. Unlike with Rust, the distinction is typically inferred from the definition of the type. Currently, all memory is "pinned", but that may become more flexible in the future.
The safety strategy for Cone involves versatility: giving the programmer a curated collection of permissions and memory allocators, each with distinct advantages and disadvantages. The safety of certain options can be completely determined statically, making them inflexible but fast. Others will use a mix of static and runtime mechanisms, which offer greater flexibility but incur a runtime cost.
That said, I admit I am somewhat uncomfortable with temporarily injecting a borrowed reference into a longer-lived container, as snippet 4 shows. I feel like any logic able to ensure this is only done safely would be too complicated for my taste, at least for now. I understand how your mechanism would address this scenario, but again it does not conform to my more restrictive notion of safety. If the program does it wrong, it crashes.
I just read it, and I thought it was great. I had a similar, if perhaps not-as-well-thought-out, reaction to Manish's (I agree, excellent) post. I think SaferCPlusPlus basically implements the permission mechanisms you listed in the summary (as well as the preceding "Race-Safe Strategies" post). (Although with some of the restrictions enforced at run-time rather than compile-time.) Looking forward to Cone 1.0. :)
p.s.: btw, the link on your post to the preceding "Race-Safe Strategies" post is broken
No, I just read it. SaferCPlusPlus does support all the permission "modes" listed, except the "opaque" one, and transitioning between them (though being stuck with C++'s move semantics). The permissions modes that aren't intrinsic to C++ are enforced at run-time. Objects to be mutably (non-atomically) shared need to be put in an "access control" wrapper, which is kind of like a generalized version of Rust's RefCell<> wrapper.
As I noted in my original comment, in the "mutex"/"RwLock" case, SaferCPlusPlus allows you to simultaneously hold read-locks and write-locks in the same thread. Which seems natural, since SaferCPlusPlus (and Cone) allows const and non-const pointer/references to coexist in the same thread. But in this case it actually provides increased functionality. It is the functional equivalent of replacing your mutex (and Rust's RwLock) with an "upgradable mutex", which facilitates better resource utilization in some cases, right? It also provides new opportunities to create deadlocks, so the mutex has to detect those.
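For comparison, here is the hazard with a plain readers-writer lock, shown with Rust's std RwLock (just an illustration, not SaferCPlusPlus code): re-acquiring for write while the same thread still holds a read guard is not an upgrade, and the standard library only promises that it may deadlock or panic.

```rust
use std::sync::RwLock;

// Illustration with std's RwLock, not SaferCPlusPlus: taking the write lock
// while this same thread still holds a read guard is not an "upgrade"; the
// standard library only promises that it may deadlock or panic.
fn main() {
    let lock = RwLock::new(0);

    let read_guard = lock.read().unwrap();
    println!("read: {}", *read_guard);

    // Uncommenting the next line while `read_guard` is alive risks deadlock:
    // the write lock waits for all readers, including this very thread.
    // let mut w = lock.write().unwrap();

    drop(read_guard);            // release the read lock first...
    *lock.write().unwrap() += 1; // ...then re-acquiring for write is fine
    println!("after write: {}", *lock.read().unwrap());
}
```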
Btw, I am certainly a pot talking to a kettle here, but your "mutex1" urgently needs a better name, right?
Cone is an imperative, multi-paradigmatic language in the family of systems programming languages (C, C++, Rust, D).
It is fundamentally statically-typed. However, a language that is too rigidly typed can sometimes degrade programmer productivity. Abstractions like variant types and various forms of type polymorphism can improve code flexibility and reuse, which is why I plan for Cone to include these features (the home page has a link to a page that describes this in more detail).
High-level attributes of a language are always hard to get right, because terms mean different things to different people, but several responders do a better job than I did of explaining why concise and readable don't necessarily have to be contradictory goals, at least for me. My primary aim here is that code be maintainable. By concise, I am actually far more focused on how much of a function's logic can fit readably in the editor window without scrolling than on the size of keywords. Certain common C++/Rust patterns feel unnecessarily verbose to me (e.g., borrowing a reference in Rust), and I am trying to incorporate syntactic patterns from a number of other languages that I feel encode the same intent consistently, in a way that a programmer familiar with Cone will be able to process and edit more quickly. Of course, that's always a judgement call...
As a language designer myself I truly do applaud your effort. I am just a fellow skeptical human who loves strongly typed languages!
Keep it up, I will be following the progress.
> ...focused on how much of a function's logic can fit readably on the editor window without scrolling than I am the size of keywords
This seems like a silly reason; we have large widescreen high-res monitors now. C's curly braces were invented to fit inside 70-char by 90-char terminals. Let's just be honest with ourselves and say it's because it's what people generally like and/or are used to these days : )