I searched the page for Cărtărescu and was disappointed to find no mention. And then I scrolled down and saw your comment lol.
I read Theodoros this year (in Romanian) and I was really impressed. Best novel I've read in years. I'm currently reading Orbitor III. I bought Solenoid, but don't yet feel ready for it.
For one, it's the right arrow key to complete most things (but tab for others).
But by FAR the worst thing is that oftentimes you'll type a command and try to tab/arrow complete an argument, and the module/DLL or whatever isn't loaded into memory, so there's a blocking operation that loads the module, which takes 10+ seconds. This happens to me almost every day.
I do love PowerShell otherwise though; after 20+ years in bash, there are actually some things to like about it.
If you like PowerShell but have some complaints, you might find nushell to be the best of both worlds. My elevator pitch for it would be: imagine the object-oriented / typed nature of PowerShell, minus the verbosity and the Windows-centric design. As someone who develops on and for Windows computers, nushell is a real breath of fresh air.
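To give a flavor of what I mean (just an illustrative sketch, nothing project-specific): a pipeline like this operates on real typed values (file sizes, dates) rather than on text.

```
# list files over 1 MB, newest first; size and modified are typed values, not strings
ls | where size > 1mb | sort-by modified --reverse | select name size modified
```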
I have a command line program at work which outputs json. Pure JSON in all situations.
I thought nushell would be able to make sense of that and display it semi-nicely.
Nushell pukes on it, errors out, and doesn’t even show the output of the command. As far as sins go for a shell, not showing the output of the program it just ran is very high among them.
With external commands you might have to collect the output of the program before doing any sort of manipulation. I've been bitten by this before too; the fix is simple (for me at least). `external.exe | collect | from json` et voilà.
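Spelled out a bit more, the pattern looks like this (the program name and field names are placeholders for illustration; swap in whatever your tool actually emits):

```
# collect the raw output into a single value first, then parse it;
# after that it's ordinary structured data you can filter and reshape
# (external.exe, status, id, and message are all placeholders,
#  assuming the JSON is a list of records)
external.exe | collect | from json | where status == "error" | select id message
```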
I like PowerShell too, but in what universe other than ours (clearly the worst one) is it even possible for loading a module to take more time than the blink of an eye?
Microsoft should find it embarrassing how long it takes powershell to load a module. Pushing <tab> to autocomplete a cmdlet name should never take more than maybe 100 milliseconds.
Slow loading times are surely not a problem unique to PowerShell. The more complex and advanced a piece of software gets, the longer it takes to load data into RAM that, to the user, seems redundant.
This is most noticeable with startup times. My favorite software (Firefox) has this solved; it opens in a reasonable amount of time, even if it takes a moment afterwards to show the first website. My second favorite software (Inkscape), meanwhile, takes so long just to show the main UI that the developers didn't think anything of adding a splash screen: an overt acknowledgement that you're keeping the user waiting.
I, too, wish that everything were more lean and snappy, but clearly this is still an unsolved problem.
As someone who records myself playing music, I've been meaning to see whether any of these tools are good enough yet to not only separate vocals from another instrument (acoustic guitar, for example), but to do so without any loss of fidelity (or at least not a perceptible one).
The reason I'm interested in this is that recording with multiple microphones (one on the guitar, one on the vocal) has its own set of problems with phase relationships and bleed between the microphones, which causes issues when mixing.
Being able to capture a singing guitarist with a single microphone placed in just the right spot, while still being able to process the tracks individually (with EQ, compression, reverb, etc.), could be really helpful.
We weren't able to agree on a good way to measure this. Curious - what's your opinion on code churn as a metric? If code simply persists over some number of months, is that an indication that it's good quality code?
I've seen code persist a long time because it is unmaintainable gloop that takes forever to understand and nobody is brave enough to rebuild it.
So no, I don't think persistence-through-time is a good metric. Probably better to look at cyclomatic complexity, and maybe, for a given code path or module or class hierarchy, how many calls it makes within itself vs. to things outside the hierarchy - some measure of how many files you need to jump between to understand it.
I second the point about persistence. Some of the most persistent code we own persists because it's untested and poorly written, but managed to become critical infrastructure early on. Most new tests are best-effort black-box tests and guesswork, since the original authors left a long time ago.
Of course, feeding the code to an LLM makes it really go to town. And break every test in the process. Then you start babying it to do smaller and smaller changes, but at that point it’s faster to just do it manually.
Might be harder to track, but what about CFR (change failure rate) or some other metric to measure how many bugs are getting through review before versus after the introduction of your product?
You might respond that ultimately, developers need to stay in charge of the review process, but tracking that kind of thing reflects how the product is actually getting used. If you can prove it helps to ship features faster as opposed to just allowing more LOC to get past review (these are not the same thing!) then your product has a much stronger demonstrable value.
I remember him as Meathead, and my 22-year-old son remembers him as the dad from Wolf of Wall Street. It's really amazing how many generations his work spans.
The Primeagen video about the bash scripts underpinning the GitHub Actions runner was crazy. I'm a half-assed programmer at best and I don't even think I would make some of those mistakes.
"Solenoid" - Mircea Carterescu"
"The Notebook, The Proof, and The Third Lie" - Agota Kristof
"Septology" - Jon Fosse