
Which languages would that be? Not being snarky, actually curious.

Rocq or another in that wheelhouse.

Once you go further down the spectrum, you have to start relying on tests, which end up indirectly validating the same things as the type system anyway, at which point you are no further ahead than if you had used a dynamically-typed language.

Statically-typed languages with partial verification, like Rust, are pretty cool for things like highlighting basic mistakes in your editor as you type, automatic refactoring, etc. That is all very useful for humans writing code, even if a bit redundant on some technical level. But if an LLM is doing the work, none of that matters.


Are LLMs any good at generating Rocq code?

I asked several LLMs on your behalf and they all said "yes". But they might be a little biased.

Why not put them to the test? It is only a prompt away. Rust is an available extraction target for Rocq, so you can even integrate it into your existing codebase.


>Why not put them to the test?

haha I shall. I like the idea of tighter constraints on the LLM, hence my enthusiasm for Rust over Python for vibecoding.

I did just try it and the first thing it did was try to download half a gig. Which is fine, except my fiber install is next week and I'm hotspotting off a phone. So that'll need to wait.

...it did seem to fundamentally understand the ask of "make me a plan to make a http hello world server", though.

Though at the same time it flagged that this isn't ideal:

>Rocq/Coq is primarily for theorem proving; HTTP server implementation will be minimal


> Though at the same time it flagged that this isn't ideal

I suppose that depends how deep down the rabbit hole you want to go. HTTP is pretty minimal in and of itself. The complexity is due to it being layered on top of many other things, like TCP/IP, sockets, network devices, etc. Is it worth proving that your NIC functions according to spec? Probably not. But if an HTTP server is what you are trying to build, something like parsing an HTTP request is quite suitable.
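
To make that concrete, here's a rough sketch in Rust (only because Rust is the extraction target mentioned above; the parser shape and names are mine, purely illustrative) of the kind of request-line parsing whose behaviour is small enough to state precisely:

    // Illustrative only: a tiny request-line parser. The property it should
    // satisfy ("splits the line into method, target, and HTTP version, or
    // rejects it") is small enough to state precisely and, in principle, prove.
    #[derive(Debug, PartialEq)]
    struct RequestLine {
        method: String,
        target: String,
        version: String,
    }

    fn parse_request_line(line: &str) -> Option<RequestLine> {
        let mut parts = line.trim_end().splitn(3, ' ');
        Some(RequestLine {
            method: parts.next()?.to_string(),
            target: parts.next()?.to_string(),
            version: parts.next()?.to_string(),
        })
    }

    fn main() {
        // Prints: Some(RequestLine { method: "GET", target: "/hello", version: "HTTP/1.1" })
        println!("{:?}", parse_request_line("GET /hello HTTP/1.1\r\n"));
    }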

But it's a good case. I would suggest that Rocq is already proving itself. I expect that by "make a http hello world server" you don't mean: First, invent the universe. Although maybe you do! Who knows? Your prompt is not clear. And since the type system is comprehensive enough to force you to actually think about what you really want, it compels you to be much more specific.


>I suppose that depends how deep down the rabbit hole you want to go.

I'm intellectually curious about whether a formal-verification style of vibecoding would work. It seems like the logical outcome of the current trajectory: if compute increases exponentially, then something where the LLM can just hammer away until it checks out seems like the ideal way of vibecoding while getting a bulletproof outcome.

Knowing absolutely nothing about formal verification, I'm quite skeptical it's viable right now though. (For hobby projects, anyway; I'm sure the formal verification academics are having fun.)

So still leaning towards Rust as the current optimum. I am ofc biased (Python/Rust is my preference depending on the task).

>I expect by "make a http hello world server" you don't mean

There is no deeper thinking behind it. Selected it purely because if I ask an LLM to one-shot something without guidance, I think keeping it simple and within training data is fair.


> Knowing absolutely nothing about formal verification

Are you sure about that? That is what we've been talking about all along. After all, the thing that separates Rust from Python with respect to our discussion here, the thing that allows code to be less fragile, is its type system. Which is formal verification! Rust is very limited in that regard, not allowing verification of much, but it still very much lives on the formal verification spectrum.

The trouble when you can't verify much is that you end up in a situation where something like:

    fn add(a: i32, b: i32) -> i32 {
        a + b + 1
    }
is able to compile just fine despite being completely wrong. To deal with that you need to write tests (which are essentially runtime formal verification). But once you've written the tests, you've also largely covered the properties that are statically verified, leaving no real benefit to LLMs.
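
For example, a minimal sketch of the kind of test that covers what the type checker misses above (the test name and expected value are mine, for illustration only):

    #[cfg(test)]
    mod tests {
        use super::add;

        // The type system accepts `add` as written, but this check pins down
        // the behaviour actually wanted, so the `a + b + 1` bug fails here.
        #[test]
        fn add_actually_adds() {
            assert_eq!(add(2, 2), 4);
        }
    }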

But I infer that you aren't writing tests either, and that Rust's type system is feeling advantageous as it at least provides some limited verification. You seem to be enjoying that workflow, which is why I mentioned that there are languages with way better type systems that you might enjoy even more. There are probably hundreds of languages between Rust and full verification on the spectrum, so if Rocq isn't the exact one for you, try some others! They're only a prompt away.


Another convert back to the light from the darkness here.

My eyes are getting old. Black text on white background is more legible/clear.


When we built our 2015 house our requirements were simple: well insulated, whole-house ducted heating/cooling, and CAT6 to every room. Everything that has an Ethernet port gets wired. Our front door still requires physical keys. Our TV is dumb. Dishwasher has no screen, just knobs; likewise the oven and fridge. Heating has one panel to control everything, buttons not touch screen. Comfortable house, all my tech works and basically never dies. Not reliant on some buggy software that never gets updated from a company that's out of business for my lights or heating or dishwasher or fridge to work.

Lost my uptime today due to a power outage, the first one in years. The house in the OP's article sounds like absolute hell.


This is the way. May you enjoy your home.

I’m in a market where we can buy the Chinese cars.

Lots of hi-res graphics, modern-looking interfaces, and flashy animations do not good software make.

Function > form.


Yeah, this is a regression since macOS Tahoe. Amazing that it still exists after several patch releases. Is working audio not a basic test case for Apple?

I’ve found it to be worst when using Xcode / simulator and having headphones on for music.

sudo killall coreaudiod seems to fix it for a while.


It's ludicrous. I remember when Apple took pride in the audio never stopping. Once a very long time ago my entire Mac froze, even the Force Touch trackpad was unresponsive, but the music kept playing. Now, press Volume Up and the audio stutters. Or AirPods randomly get choppy and then stop playing until you reconnect them. The heck?


To add insult to injury, I recently connected my headphones to my Windows gaming PC.

Sounded absolutely amazing: no skipping or crackling no matter what I was doing on the PC, nor how hard I was cranking it. Quality was better than on macOS as well.

My M1 sounded like an AM radio when driving the same headphones using the same software (Spotify lossless).


> sudo killall coreaudiod seems to fix it for a while

For me this fixes it for about 30 minutes then I have to do it again… and again… and again…

I wonder why some folks need to do it more than others


I've had it since the first macOS that shipped with my M1 Pro MacBook!


I also cancelled my All Products subscription a while ago. I have been an IntelliJ user since the early 2000s and gave up in 2025, after it would still forget how a Maven project with some generated files should be built, with everything turning into a sea of red until you reimported the project and redid all your settings again. Job #1.

There was always a regression like this in every new build, along with the performance issues. Also switched to Zed + Claude Code/Codex.

I will miss the debugger (a little bit).


Never ran into this. This seems odd; I use the 2025 build and it works just fine for me, with Maven and Gradle.


Arguably Google and others like them would not even exist without access to cheap off-the-shelf hardware in their early days.


Yes, and that's a part of the appeal to companies like Google. They've climbed the ladder, and now they're pulling it up behind them so others can't climb up to catch them.


Definitely. And there's a tendency for individuals and particularly corporations to pull up the ladder behind them. They know that leaving things accessible means they could face major competition 5 years down the road. So they do what they can to prevent that.


Exactly! Apple wouldn't have existed without access to the MOS 6502 and other electronics, which allowed Woz to carry out his dream of building a personal computer. Microsoft might not have existed without the Altair 8800. Many 1990s and 2000s web startups got off the ground with affordable, available hardware, whether it's hand-me-down RISC workstations or commodity x86 PCs.

Granted, to be fair, many of today's startups and small businesses are made possible by AWS, Google Cloud, Microsoft Azure, and other cloud services. It is sometimes cheaper to rent a server than to own one, and there are fewer system administration chores. However, there's something to be said for owning your own infrastructure rather than renting it, and I think a major risk of compute power being concentrated in just a few major players is the terms of computation being increasingly dictated by those players.


I am fairly skeptical about many AI companies, but as someone else pointed out, Anthropic has 10x'ed their revenue for the past 3 years: 100m -> 1b -> 10b. While past performance is no predictor of future results, their product is solid and to me it looks like they have found PMF.


Oh damn, this is awesome.

I wonder if https://github.com/zed-industries/zed/discussions/18129 is still accurate. Would love to be able to use Ghostty as my Zed terminal.


We'd love to (and, tbh, likely will).

Search had been a blocker, but that's coming soon; beyond that not sure that there's any reason other than inertia. Alacritty is totally fine, but excited for the Ghostty focus on performance (obviously), and the font rendering stuff looks excellent (though TBD how much of that we can "just use" vs needing to copy-pasta)


Note search has landed in main, and the core of it is cross platform and exposed via libghostty (Zig API, C to follow).


Woohoo! Thank you!

(Edit) Download it here: https://github.com/ghostty-org/ghostty/releases/tag/tip


They could certainly compile Ghostty and link into it from Rust. I couldn't imagine it'd be that large of an undertaking.


I switched from Alacritty -> Ghostty, and occasionally use Zed.

I can't recall why I made the transition (maybe just to try it out, and it became the default?). I can't think of any practical consequences of this transition.

Why do you care about whether Zed uses Alacritty or Ghostty under the hood?


Kitty Graphics Protocol support and subtle font rendering differences between Ghostty and Alacritty that drive me nuts.

I have reported font rendering issues to Alacritty in the past and let's just say the developer was not exactly receptive to fixing them since they occur on macOS and not his preferred OS of Linux.


Just want to second this. I find Zed nearly unusable due to weird rendering in the terminal.


I haven’t visited StackOverflow for years.


I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over the entries in a JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: so simple for the simple stuff!)

When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.


I think that's OP's point though: AI can do it better now. No searching, no looking. Just drop your question into AI with your exact data or function and 10 seconds later you have a working solution. Stack Overflow is great, but AI is just better for most people.

Instead of running a Google query or searching in Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.


It's not a pejorative statement; I used to live on Stack Overflow.

But the killer feature of an LLM is that it can synthesize something based on my exact ask, does a great job of creating a PoC to prove something, and is cheap from a time-investment point of view.

And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)


I stopped using it much even before the AI wave.


I've honestly never intentionally visited it (as in, went to the root page and started following links); it was just where Google sent me when searching for answers to specific technical questions.


It became as annoying as Experts Exchange, the very thing it railed against!


What was annoying about it?


Often the answer to the question was simply wrong, as it answered a different question that nobody asked. A lot of times you had to follow a maze of links to related questions, which may have had an answer or may have led to yet another one. The languages for which it was most useful (due to bad ecosystem documentation) evolved at a rate way faster than SO could update its answers, so most of the answers on those were outdated...

There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.


They SEO'd their way into being a top search result by showing crawlers both questions and answers, but when you visited, the answer would be paywalled.

Stack Overflow’s moderation is overbearing and all, but that’s nowhere near the same level as Experts Exchange’s baiting and switching.


That despite their URL's claim, they didn't actually have any sex change experts.


The gatekeeping, gaming of the system, capricious moderation (e.g. questions flagged as duplicates), and general attitude led it to be quite an insufferable part of the internet. There was a meme about how the best way to get a response was to answer your own question in an obviously incorrect fashion, because people want to tell you why you're wrong rather than actively help.


Why do you think those people behave that way?


Unpaid labor finds a variety of impulses to satisfy


Memories of years ago on Stack Overflow, when it seemed like every single beginner Python question was answered by one specific guy. And all his answers were streams of invective directed at the question's author. Whatever labor this guy was doing, he was clearly getting a lot of value in return by getting to yell at hapless beginners.


That doesn't seem like the kind of thing that's ever been allowed on Stack Overflow.


So? It is commonplace there. The comments are even worse.


I don't think it matters. Whether it was a fault of incentives or some intrinsic nature of people given the environment, it was rarely a pleasant experience. And this is one of the reasons it's fallen to LLM usage.


Nope. The main problem with expertsexchange was their SEO + paywall: they'd sneak into top Google hits by showing the crawler the full data, then present a paywall when an actual human visited. (Have no idea why Google tolerated them, btw...)

SO was never that bad; even with all their moderation policies, they had no paywalls.

