The article didn’t claim that “last wins” is in and of itself an issue, but that differences in which value wins across parsers in different services/languages can cause issues. Their position was that everyone should standardize on “last wins,” since that is the most common behavior.
The problem with trying to ensure that every parser behaves the same for all input is twofold:
- The JSON and XML specifications are complex and full of quirks, so it isn’t feasible in practice.
- Even if it were, it doesn’t solve the fundamental issue: the processing layer isn’t using the same data that was verified in the verification layer.
Note: the processing layer parses the original input bytes, while the verification layer verifies a struct produced by a different parser. A sketch of the resulting differential is below.
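To make that concrete, here’s a minimal Rust sketch. It assumes serde_json as the processing-layer parser (serde_json keeps the last value when an object repeats a key while parsing into a Value); the first-wins verifier it contrasts with is hypothetical:

```rust
// Minimal sketch of a parser differential on duplicate JSON keys.
// Assumption: serde_json keeps the *last* value for a repeated key when
// parsing into a serde_json::Value. A verification layer built on a
// hypothetical first-wins parser would have seen "alice" instead.
fn main() {
    let raw = r#"{ "user": "alice", "user": "admin" }"#;
    let v: serde_json::Value = serde_json::from_str(raw).unwrap();

    // Last wins here, so the processing layer acts on "admin" even though
    // a first-wins verifier approved the request as coming from "alice".
    println!("processing layer sees user = {}", v["user"]);
}
```

Run against the same bytes, the two layers disagree, which is exactly the verified-struct-vs-original-bytes gap described in the note above.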
Doesn’t seem vibe-coded to me. The crates aren’t unreasonable: data structures, parser, formatter, REPL, “util,” some proc macros (which have to be in a separate crate in Rust), and a VM.
For folks who seek a rule of thumb, I’ve found SPoT (single point of truth) a better maxim than DRY: ideally there should be one place where business logic is defined. Other stuff can be duplicated as needed; that isn’t inherently a bad thing.
To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code are fine; after that, we should think about abstracting.
Of course, no rule of thumb applies in all cases, and the sense for when it doesn’t is hard to teach.
> To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code are fine; after that, we should think about abstracting
Just for fun, this more or less already exists as another acronym: WET, “Write Everything Twice.”
It basically just means exactly what you said. Don't bother DRYing your code until you find yourself writing it for the third time.
And for those who feel the compulsion, and the fear of forgetting, creeping in when writing something twice: put TODOs in both spots, so that when the third time comes you can find the other two easily. If you’re the backlogging type, add a JIRA reference to the TODOs to make them even easier to find.
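For example (file names and the ticket number are made up):

```rust
// In src/invoice.rs:
// TODO(JIRA-1234): duplicated in src/report.rs; extract on the third copy.

// In src/report.rs:
// TODO(JIRA-1234): duplicated in src/invoice.rs; extract on the third copy.
```

A grep for the ticket number then finds every copy at once.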
> I’ve found SPoT (single point of truth) a better maxim than DRY
I totally agree. For example, having five variables that all happen to hold the same value but mean very different things is good. Combining them into one variable would be “DRY” but would defeat separation of concerns. With variables it’s obvious, but the same applies to more complex concepts like functions, classes, and, to a degree, programs.
It’s fine to share code across abstractions, but you’ve got to make sure it doesn’t end up tying those things too tightly together just for the sake of DRY.
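A contrived Rust sketch of that variable example (all names invented): three constants happen to equal 3, but merging them would couple unrelated concerns.

```rust
// These all happen to equal 3, but each is its own point of truth:
// changing the retry policy must not silently change the replica count.
const MAX_RETRIES: u32 = 3;
const DEFAULT_REPLICAS: u32 = 3;
const RULE_OF_THREE_THRESHOLD: u32 = 3;

fn main() {
    println!("{MAX_RETRIES} {DEFAULT_REPLICAS} {RULE_OF_THREE_THRESHOLD}");
}
```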
The benefit of (some) DSLs is that they make invalid states unrepresentable, which isn’t possible with the entire surface area of a general-purpose programming language at your (or the LLM’s) disposal.
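A minimal Rust sketch of the idea, using a made-up connection type as the domain: the struct-of-flags version can represent nonsense, the enum can’t.

```rust
// Representable-but-invalid: nothing stops `connected: false` from being
// paired with a live session_id.
#[allow(dead_code)]
struct ConnectionFlags {
    connected: bool,
    session_id: Option<u64>,
}

// Invalid states unrepresentable: a session_id only exists while connected.
#[derive(Debug)]
enum Connection {
    Disconnected,
    Connecting { retries: u32 },
    Connected { session_id: u64 },
}

fn main() {
    let _bad = ConnectionFlags { connected: false, session_id: Some(42) }; // compiles, but lies
    let good = Connection::Connected { session_id: 42 }; // no way to encode the lie
    println!("{good:?}");
}
```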
Regardless of the business need for near-instantaneous global consistency of the data (e.g., quota-management settings are global), replication needs to be propagated incrementally, with sufficient time to validate and detect issues.
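Something like this staged-rollout loop, sketched in Rust with invented stubs (push_config, validate, rollback are illustrative, not any real API):

```rust
// Propagate a change one region at a time, validating before widening the
// blast radius; on failure, undo only what was touched instead of going global.
fn push_config(region: &str) { println!("pushing config to {region}"); }
fn validate(region: &str) -> bool { println!("validating {region}"); true }
fn rollback(regions: &[&str]) { println!("rolling back {regions:?}"); }

fn rollout(regions: &[&str]) {
    for (i, region) in regions.iter().enumerate() {
        push_config(region);
        if !validate(region) {
            rollback(&regions[..=i]);
            return;
        }
        // A real pipeline would also wait ("bake time") here so slow-burn
        // issues can surface before the next, larger stage.
    }
}

fn main() {
    rollout(&["canary", "us-east-1", "eu-west-1", "global"]);
}
```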
This reads to me like someone finally won an argument they’d been having for some time.
Third. The ability to uprank sites makes the “finding known content” use case absolutely amazing. The specific Postgres docs case in the blog post is exactly the kind of thing I’m constantly using Kagi as my external brain for.
This is tangential to the discussion at hand, but a point I haven’t seen much in these conversations is the odd impedance mismatch between knowing you’re interacting with a tool and being asked to interact with it like a human.
I personally am much less patient and forgiving with tools I use regularly than I am with my colleagues (as I would hope is true for most of us), but it would make me uncomfortable to “treat” an LLM with the same expectations of consistency and “get out of my way” that I apply to vim or emacs, even though I know intellectually that it is also a non-thinking machine.
I wonder about the long-term psychological effects of this kind of language-based machine interaction on myself and others: will it affect our interactions with other people, or influence how we think about, and what we expect from, our tools?
Would be curious if your experience gives you any insight into this.
I feel bad being rude to an LLM even though it doesn’t care, so I add words like “please” and sometimes even compliment it on good work, even though I know this is useless. Will I learn to stop doing that, and if so, will I also stop doing it to humans?
I'm hoping the answer is simply "no". Plenty of people are rude in some contexts and then polite in others (especially private vs. public, or when talking to underlings vs. superiors), so it should be no problem to learn to be polite to humans even if you aren't polite to LLMs, I think? But I guess we'll see.