Hacker News | throwaway17_17's comments

Since you are in the comments, I’ll just ask directly…

Are you planning to enable a local-only version of chat history, and maybe an option for local-first instancing? In my line of work Slack is basically a non-starter due to the off-site, non-employee-managed nature of the storage/centralized transport and the pass-through nature of their business model. I would love to have something similar for my various teams and employee groupings; almost everything we do is asynchronous comms via email or direct phone calls. Being able to act like it's 2026 instead of 1997 would be a huge win for me.


We use a custom in-house sync engine (similar to how Git works) to propagate messages between peers, but the "source of truth" is local-first, with a sync server.

This gives you full ownership of your data, instant search (even offline), and naturally supports the kind of privacy/custody requirements you're describing.


As an aside, have you looked at Zulip?

While I’m always thankful when people give the broad perspective and context in a discussion, which your comment does, the specifics of this particular project’s usage of almost-C are not something I could have quickly figured out, so thanks. For such a large program, and one as old as Qt is at this point, I find it impressive and slightly amazing that it has in some sense self-limited its divergence from standard C. It would be interesting to see what something like SQLite includes in its almost-C.

The more portable a project is, the less weird stuff it’s likely to do. The almost-C parts become more of a headache the more OSes and compilers you support. This seems pretty tame, and I’d expect SQLite to be similar. I work on some projects that only support a single OS, compiler, and CPU architecture, and they’re full of dependencies on things like the OS’s actual address space (few 64-bit archs use all 64 bits).

I’m apparently comment-happy on this OP, but the typing looks funny because it starts the sentence: I’m pretty sure OP was saying safER, as opposed to SAFE (comparatively safer rather than totally safe). I have been quite charitable to OP in some sibling comments and will do so here. I think OP is attempting to give Fil-C some credit for being an attempt to increase the overall memory safety of existing code without incurring the complexity of a new language or of rewriting long-running/widely distributed code. It is a decent sentiment and a viable methodology toward a laudable goal, but it is certainly susceptible to caveats like the performance penalty you mention.

I’m pretty sure there is no realistically feasible way to ever prove your statement. But I hope a majority of people can recognize the sheer magnitude of C++ as a language and accept that it may not be possible to master the whole thing. That Rust is a ‘smaller’ language than C++ by some metrics (most metrics, really) is another thing I would hope most people can accept. So, given that comparing the two languages as wholes is a semi-intractable discussion, I would propose the following:

When considering some chosen subset of functionality for some specified use case, how do Rust and C++ compare in their ability to be ‘mastered’? There are wide and varied groups (practically infinite) of features, constructs, techniques, and implementations that achieve targeted use cases in both languages, so when constructing a given subset, which language grants the most expressivity and capability in the more ‘tight’ (i.e. masterable) package?

I think that’s a way more interesting discussion to have. Obviously, where the specified use case requires Rust’s definition of memory safety to be upheld 100% of the time (excluding a small-ish percentage of delimited ‘unsafe but identifiable’ sections), the Rust subset will be smaller, due to the mandatory abstractions required to put C++ anywhere near complete coverage. So it may make sense to define the subset not only over constructs in the base language, but to include sealed abstractions (philosophically if not in reality) as potential components in the constructed subsets.

I may have to try to formulate some use cases to pose in a longer write-up, to see if any truly experienced devs can lay out their preferred language’s best candidate subset in response. It would also be fascinating to see what abstractions and metaprogramming would be used to implement the subset candidates, and to figure out how that could factor into an overall measurement of the ‘masterable-ness’ of the given language (i.e. how impossible a task it is to rely on a subject matter expert to implement any proposed subset for any given use case).


I’m going to start this comment by specifying that I don’t know what OP was considering complex about Rust; unfortunately, a large amount of discussion on the topic tends toward strawmanning by people looking to argue the ‘anti-Rust’ side of said discussions. Additionally, the lack of a contextual, well-considered position against some aspect of Rust, as a language, is very common; at worst the negative take is really just an overall confrontational stance against Rust’s uptick in usage broadly and against its community of users, as perceived (and also strawmanned) in a generally negative light. Since borrowing is not explicitly mentioned by GP, I will give a slightly different position than he might, but I think this is an interesting perspective difference to discuss and not a blatant ad hom argument used to ‘fight’ Rust users on the internet.

From my position, the complexity incurred by ownership semantics in Rust does not stem from Rust’s ‘formalization’ and semi-reification of a particular view of ownership as a means of program constraint. The complexity of Rust, in relation to ownership, comes from the lengths I would have to go to in order to design systems using other logical means of handling references (particularly plain hardware-implemented pointers) to semantic objects: their creation, specification, and deletion; and other means of handling resources (particularly memory acquired via allocation): their acquisition, transport through local and distributed processes (from different cores to over the wire), and their deletion or handing back to the OS.

Rust adopts ownership semantics (and value semantics, to a large degree) to the maximum extent possible and has enmeshed those semantics throughout all levels of abstraction in the language definition (as far as a singular authoritative ‘definition’ can be said to exist as a non-implementation-related formalism). At the level of Rust the language, not merely discussions and discourse about the language, ownership semantics are baked in at a specified granularity, and that ownership is compositional over the abstraction mechanisms provided. These semantics dictate how everything from a single variable on the stack, to large allocations in a general heap, to non-memory ‘resources’ like files, textures, databases, and whole processes, is handled in a program. On top of the ownership semantics sit the rest of Rust’s semantics, and they are all checked at compile time by a singular oracular subsystem (i.e. the borrow checker).

The complexity really begins to rise, for me, if I want to attempt to program without engaging with ownership as the methodology or semantics for handling all of the above-mentioned ‘resources’. I prefer, and believe, that a broader set of formalisms should be available for ‘handling’ resources, that those formalisms should exist with parameterized granularity, and that the foundational semantics for those mechanisms should come from type systems’ ability to encode capabilities and conditions for particular types in a program. That position is in contrast to the universal and foundational ownership semantics, especially with the individualistic fixed granularity, that Rust chose.

That being said, it borders on insanity to attempt to program in such a ‘style’/paradigm/method in Rust. My preferences make Rust’s chosen focus on ownership seem complex at the outset, and attempts to impose an alternate formalism in Rust (which would, by necessity, have to be some abstraction over Rust’s ownership semantics that hid those semantics and presented a different set of semantics to the programmer) take that complexity to even higher levels.

The real problem with trying to frame my position here as complexity is the following: to me, Rust and its ownership semantics are complex because I do not like its chosen core semantic construct, so when I think about achieving something using Rust I have to deal with additional semantics, semantic objects, and their constraints on my program that I do not think are fit for purpose. But if I wanted to program in Rust without trying to circumvent, ignore, or disregard its choices as a language, and just decided to accept (or embrace) its semantic choices, the complexity I perceive would decrease significantly and immediately.

For me, Rust’s ownership semantics create an impedance mismatch that, at the level of language use, FEELS like complexity (and acts like complexity in a lot of ways), but is probably more correctly identified as just what it is… an impedance mismatch, nothing more and nothing less. I simply choose not to use Rust to avoid that, but others fixate on these issues, never get to the bottom of them, and just default to calling it complexity during discussion.

All in all, I am probably being entirely too optimistic about the comments on the complexity of Rust and ownership, and most commenters are just fighting to fight, but I genuinely believe there is much to discuss and work through in programming language design theory, and writing walls of text on HN helps me do that.


Rust's strict ownership model isn't flexible enough sometimes (particularly when dealing with hardware concepts), so I consider it a "mostly good enough" abstraction that is a good default but should be circumvented sometimes.

My friend, this is the comment section; surely this could be said in fewer words.

Tl;Dr


The TL;DR was exhaustion and medication make me chatty. I didn’t realize how long the comment was until I got back on just now.

I agree that your previous comment could be shorter, maybe half as long, with some effort. For example, this sentence could just be removed without changing the content or tone:

> But I think this is an interesting perspective difference to discuss and not a blatant ad hom argument used to ‘fight’ Rust users on the internet.

(It is already clear that you share your perspective because you think that it's interesting and not a fallacy.)

Then, there are some paragraphs close to the end that seem to repeat ideas.

But IMHO it's a good, nuanced comment that both addresses shortcomings in current discourse and adds ideas to the discussion. This is obviously difficult to do in few words.

A few concepts that come to mind from your comment, but I missed from others:

- Are Rust-style ownership semantics complex or just hard for me? Is it an issue of familiarity only? Is it ergonomics?

- Are they hard in general or just due to the way I like to program? How can I change my style to better fit the model?

- Are they even fit for the software I write? i.e. is it worth it to change how I program to better fit the model?

- What other tools are there to deal with resources?

- What could a programming language do to offer multiple of those? How can we mix GC, region types, linear/uniqueness types, manual management, etc. in a single language?

- A bunch of stuff about discourse in this thread and HN/the internet in general (which is maybe not the point of the comment and could be omitted?)


I would love for this to turn out to be some internal constraint where the LLM can not ‘reason’ about LEM and will always go to an understanding based in constructive logic. However, I am more ready to accept that LLM aren’t actually ‘reasoning’ about anything and it’s an inherent flaw in how we talk about the algorithms as though they were actually thinking ‘minds’ instead of very fancy syntax completion machines.

The problem is that both constructive logic and "normal" logic are part of the training data. You might be able to say "using constructive logic, prove X". But even that depends on none of the non-constructive training data "leaking" into the part of the model that it uses for answering such a query. I don't think LLMs have hard partitions like that, so you may not get a purely constructive proof even if that's what you ask for. Worse, the non-constructive part may be not obvious.
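The constructive/classical split being discussed can be made concrete in a few lines of Lean (a sketch; the theorem names `dni`/`dne` are chosen here for illustration):

```lean
-- Constructively provable: if p holds, its double negation holds.
theorem dni (p : Prop) : p → ¬¬p :=
  fun hp hnp => hnp hp

-- The converse (double negation elimination) has no constructive proof;
-- it requires the law of the excluded middle, imported via Classical.em.
theorem dne (p : Prop) (h : ¬¬p) : p :=
  (Classical.em p).elim id (fun hnp => absurd hnp h)
```

This is exactly the kind of "leak" described above: a proof that looks complete may quietly depend on `Classical.em` somewhere, and spotting that dependency in an LLM-produced argument written in prose rather than a checked formal system is much harder.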

I know this reply is late, but what the hell.

Your comment is certainly correct, and I agree that the various implementations of LLMs probably cannot actually partition attempts to find proofs into any given logical system.

My comment was more tongue-in-cheek than your response assumed; I phrased it awkwardly, obviously. I was humorously hoping that, at some fundamental level, by way of some previously unknown foundational rule of computer science, LLMs as ‘thinking’ algorithms simply could not understand or utilize non-constructive logical means to formulate a proof.

As I said, I did not think this is actually what’s going on with GPT not being able to actually, or convincingly, ‘understand’ the law of the excluded middle. It was more of a backhanded insult at LLMs particularly, and at those salespeople who want to talk about them as thinking, reasoning, semi-conscious algorithmic ‘beings’.


Jai does not compile to C. It has a bytecode representation that is used primarily for compile-time execution of code, a native backend used mostly for iteration speed and debug builds, and an LLVM target for optimized release builds.

Seriously, in the discussion happening in this thread C is clearly not a high-level language in context.

I get your statement and even agree with it in certain contexts. But in a discussion where high-level languages are presumed (in context) to not require manual memory management, and to have looping constructs defined over a semantically inferred range of some given types, overloading of functions (maybe even operators), algebraic datatypes, and other functional-language mixins: C most certainly IS NOT a high-level language.

This is pedantic to the point of being derailing, and in some ways seems geared to end the discussion by sticking a bar in the conversation’s spokes.


Glad you brought up context in this note. I find C high-level too, but you are right: in a comparison you can still say it’s really low-level.

C was originally called high-level because the alternatives were things like assembler; the term is rooted in comparison more than anything.


Thanks; my parent’s comment is almost a thought-terminating cliché in this kind of discussion. However, Chisnall’s now-classic ‘C Is Not a Low-Level Language’ article is one of my favorite papers on language theory and potential hardware design. A discussion about the shortcomings of viewing C as a low-level language could be profitable, deep, and interesting; but context is king.

I’d say that the point of async/await is to create a syntactic demarcation between functions which may suspend themselves (or be suspended by a supervisory system) and functions that run to completion and cannot be suspended (particularly by a supervisory system). The means of suspending a computation and allowing other computations to proceed during that suspension are implementation details.

So, having an async function run on a separate thread from those functions that are synchronous seems a viable way to achieve the underlying goal of continuous processing in the face of computations that involve waiting for some resource to become available.

I will agree that, given C#’s origination and then JavaScript’s popularization of the syntax, it is not a stretch to assume async/await is implemented with an event loop (since both languages use one in their implementations).


There are two distinct constructs that are referred to using the name ‘variable’ in computer science:

1) A ‘variable’ is an identifier which is bound to a fixed value by a definition;

2) a ‘variable’ is a memory location, or a higher level approximation abstracting over memory locations, which is set to and may be changed to a value by an assignment;

Both of the above are acceptable uses of the word. I am of the mindset that the non-independent existence of these two meanings, both in languages and in discourse, is a large and fundamental problem.

I take the position that, inspired by mathematics, ‘variable’ should mean #1, thereby making variables immutably bound to a fixed value. Meaning #2 should have some other name and require explicit use thereof.

From a PLT and maths background, a ‘mutable variable’ is somewhat oxymoronic. So, I agree: let’s not copy JavaScript, but let’s also not be dismissive of terminology that has long-standing meanings (even when the varied meanings of a single term are quite opposite).

