Random nerd note: the history is slightly wrong. Netscape had their own "interactive script" language at the time Sun started talking about Java, which somehow got the front page of the Mercury News when they announced it in March of 1995. At the Third International World Wide Web Conference in Darmstadt, Germany, everyone was talking about it, and I was roped into giving a session on it during the lunch break (which then had to be stopped because no one was going to the keynote by SGI :-)). Everyone there was excited and saying "forget everything, this is the future." So Netscape wanted to incorporate it into Netscape Navigator (their browser), but they had a small problem: this was kind of a competitor to their own scripting language. They wanted to call it JavaScript to ride the coattails of the Java excitement, and Sun legal only agreed to let them do that if they would promise to ship Java in their browser when it hit 1.0 (which it did in September of that year).
So Netscape got visibility for their language, Sun got the #1 browser to ship their language, and they had leverage over Microsoft to extortionately license it for Internet Explorer. There were debates among the Java team about whether this was a "good" thing; for Sun, sure, but the confusion over what was "Java" was not. The politics won, of course, and when they refused to let the standards organization use the name "JavaScript", the term ECMAScript was created.
So there's that. But how we got here isn't particularly germane to the argument that yes, we should all be able to call it the same thing.
This response turned into more of an essay in general, and not specifically a response to your post, marginalia_nu. :)
Sharing information, to me, was what made things so great in the hacker culture of the 80s and 90s. Just people helping people explore and no expectation of anything in return. What could you possibly want for? There was tons of great information[1] all around everywhere you turned.
I'm disappointed by how so much of the web has become commercialized. Not that I'm against capitalism or advertising (on principle) or making money; I've done all those, myself. But while great information used to be a high percentage of the information available, now it's a tiny slice of signal in the chaff--when people care more about making money on content than sharing content, the results are subpar.
So I love the small internet movement. I love hanging out on a few Usenet groups (now that Google has fucked off). I love neocities. And I LOVE just having my own webpage where I can do my part and share some information that people find entertaining or helpful.
There's that gap from being clueless to having the light bulb turn on. (I've been learning Rust on and off and, believe me, I've opened plenty of doors to dark rooms, and in most of those I have not yet found the light switch.) And I love the challenge of finding helpful ways to bridge that gap. "If only they'd said X to begin with!" marks what I'm looking for.
I'm not always correct (I challenge anyone to write 5000 words on computing with no errors, let alone 750,000) or as clear as I could be, but I think that's OK. Anyone aspiring to write helpful information and put it online should just go for it! People will correct you if you're wrong[2] :) and you'll learn a *ton*. And your readers will learn something. And you'll have made the small web a slightly larger place, giving us more freedom to ignore the large web.
[1] When I say "great information", I don't necessarily mean "high quality". But the intention was there, and I feel that makes the difference.
[2] It can be really embarrassing to put bad information out there (for me, anyway). I don't want people to find out I don't know something and think less of me. But that's really illogical--I don't even personally know my critics! And here's the thing: when the critics are right (and they're often right!), you can go fix your material. And then it becomes more correct. After a short time of fixing mistakes critics point out, you get on the long tail of errors, and these are things that people are a lot less judgmental about. The short of it is, do the best you can, put your writing out there, correct errors as they are reported or as you find them, and repeat. I cannot stress how grateful I am to everyone who has helped me improve my guides, whether mean-spirited or not, because it's helped me and so many others learn the right thing.
An interesting fact is that while almost all of the Solar System started as gas, which condensed here into solid bodies that then aggregated into planets, a small part of the original matter of the Solar System consisted of solid dust particles that arrived as such from the stellar explosions that propelled them.
So we can identify, in meteorites or on the surfaces of other bodies not affected by weather, like the Moon or asteroids, small mineral grains that are true stardust, i.e. interstellar grains that have remained unchanged since long before the formation of the Earth and of the Solar System.
We can identify such grains by their isotopic composition, which is abnormal in comparison with the matter of the Solar System. While many such interstellar grains should be just silicates, those are hard to extract from the rocks formed here, which are chemically similar.
Because of that, the best-known interstellar grains are those which come from stellar systems that are chemically unlike the Solar System. In most stellar systems there is more oxygen than carbon; those stellar systems are like ours, with planets having iron cores covered by mantles and crusts made of silicates, covered in turn by a layer of ice.
In the other kind of stellar system there is more carbon than oxygen, and there the planets would be formed from minerals that are very rare on Earth, i.e. mainly from silicon carbide and various metallic carbides, along with great amounts of graphite and diamond.
So most of the interstellar grains (i.e. true stardust) that have been identified and studied are grains of silicon carbide, graphite, diamond or titanium carbide, which are easy to separate from the silicates formed in the Solar System.
Thanks HN! I regularly open HN during lectures. There is no better way to show my students what software engineering entails and why I focus on certain topics.
Is SCRUM really as great as its evangelists claim? Let's read HN comments.
What are good use cases for UML? Let's check out HN.
Does anyone actually care about CoCoMo or CMMI? Let's read ... oh - nearly nobody's talking about it there. Maybe it won't be that relevant to the students.
Here's a crazy idea. I personally prefer the fidelity of an active ambient in-ear monitor (IEM), as used by musicians on stage, over the best hearing aids. Once a year I do a month-long trial with the latest hearing aid models, and IMO the fidelity (especially the low end) and the comfort just aren't there compared with the best active ambient IEMs. The difference between hearing aids and IEMs is blurring, but they are not yet fully interchangeable.
Standard IEMs isolate you from the world, which is the opposite of what a hearing aid does. However, a specific category called "Active Ambient" IEMs bridges this gap. These are IEMs with embedded high-fidelity microphones on the outer shell. They pick up the sound of the room (bandmates, crowd, conductor), amplify it, and blend it with your monitor mix. The accompanying bodypack or app often includes a multi-band EQ and Limiter. You can boost specific frequencies where you have hearing loss (e.g., boosting highs to hear cymbals or speech clearly) and set a volume ceiling to protect your remaining hearing. I have no ownership/sponsorship in the product, but I personally LOVE the ASI Audio 3DME (powered by Sensaphonics), which is the industry standard for this. [1] It allows you to use an app to shape the ambient sound to your hearing needs.
The Pros: It provides hearing protection + monitoring + hearing enhancement in one device.
The Cons (Why they aren't daily hearing aids):
1) Form Factor: You are tethered to a belt pack. You likely won't wear a wired bodypack to a grocery store or dinner party.
2) Social Barrier: Wearing full-shell custom IEMs creates a "do not disturb" look that discourages conversation in social settings. This can be more socially alienating than a comparatively inconspicuous hearing aid.
3) Battery Life: IEM systems typically last 6–8 hours, whereas hearing aid batteries can last days or weeks.
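The EQ-plus-limiter idea from above can be sketched in code. This is a deliberately crude illustration (a one-pole high-shelf boost and a hard limiter in plain Python); real products like the 3DME use proper multi-band DSP, and the names and parameters here are mine, not theirs:

```python
def high_shelf_boost(samples, gain_db=12.0, alpha=0.2):
    """Split each sample into low/high bands with a one-pole low-pass,
    boost the high band, and recombine -- a crude high-shelf EQ, the
    kind of boost that helps high-frequency hearing loss."""
    gain = 10 ** (gain_db / 20)
    low = 0.0
    out = []
    for x in samples:
        low += alpha * (x - low)   # one-pole low-pass estimate
        high = x - low             # residual = high-frequency content
        out.append(low + gain * high)
    return out

def limiter(samples, ceiling=0.5):
    """Hard volume ceiling to protect remaining hearing."""
    return [max(-ceiling, min(ceiling, x)) for x in samples]
```

In a real device the same two stages run per frequency band with smooth gain reduction instead of hard clipping, but the signal flow (ambient mic, EQ shaped to your audiogram, limiter, ear) is the same.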
Well… we have a culture of transparency we take seriously. I spent 3 years in law school that many times over my career have seemed like a waste, but days like today prove useful. I was in the triage video bridge call nearly the whole time. Spent some time after we got things under control talking to customers. Then went home.
I'm currently in Lisbon at our EUHQ. I texted John Graham-Cumming, our former CTO and current Board member, whose clarity of writing I've always admired. He came over. Brought his son ("to show that work isn't always fun"). Our Chief Legal Officer (Doug) happened to be in town. He came over too.
The team had put together a technical doc with all the details: a tick-tock of what had happened and when. I locked myself on a balcony and started writing the intro and conclusion in my trusty BBEdit text editor. John started working on the technical middle. Doug provided edits here and there in places where we weren't clear. At some point John ordered sushi, but from a place with limited delivery selection, and I'm allergic to shellfish, so I ordered a burrito. The team continued to flesh out what happened. As we wrote, we'd discover questions: how could a database permission change impact query results? Why were we making a permission change in the first place? We asked in the Google Doc. Answers came back.
A few hours ago we declared it done. I read it top-to-bottom out loud for Doug, John, and John's son. None of us were happy — we were embarrassed by what had happened — but we declared it true and accurate. I sent a draft to Michelle, who's in SF. The technical teams gave it a once-over. Our social media team staged it to our blog. I texted John to see if he wanted to post it to HN. He didn't reply after a few minutes, so I did. That was the process.
Hey, guy who made this here. This probably deserves a little explanation. First off, I'd like to tell you I'm really, really unemployed, and have the freedom to do some cool stuff. So I came up with a project idea. This is only a small part of a project I'm working on, but you'll see where this is going.
I was inspired by this video: https://www.youtube.com/watch?v=HRfbQJ6FdF0 from bitluni that's a cluster of $0.10-0.20 RISC-V microcontrollers. For ten or twenty cents, these have a lot of GPIOs compared to other extremely low-cost microcontrollers. 18 GPIOs on the CH32V006F4U6. This got me thinking, what if I built a cluster of these chips. Basically re-doing bitluni's build.
But then I started thinking: at ten cents a chip, you could scale this to thousands. But how do you connect them? That problem was already solved in the 80s, with the Connection Machine. The basic idea here is to get 2^(whatever) chips and connect them so each chip connects to (whatever) many other chips. The Connection Machine sold this as a hypercube, but it's better described as a Hamming-distance-one graph or something.
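The Hamming-distance-one wiring is easy to state in code. A minimal sketch (Python; `node_id` and `dim` are my illustrative names):

```python
def neighbors(node_id, dim):
    """In a dim-dimensional hypercube, each node links to the dim nodes
    whose IDs differ from its own in exactly one bit (Hamming distance 1).
    Flipping bit b is an XOR with (1 << b)."""
    return [node_id ^ (1 << b) for b in range(dim)]

# A 12-dimensional cube has 2**12 = 4096 nodes, each with 12 links --
# the same scale as the cluster described here.
```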
So I started building that. I did the LEDs first, just to get a handle on thousands of parts: https://x.com/ViolenceWorks/status/1987596162954903808 and started laying out the 'cards' of this thing. With a 'hypercube topology' you can split up the cube into different parts, so this thing is made of sixteen cards (2^4), with 256 chips on each card (2^8), meaning 4096 (2^12) chips in total. This requires a backplane. A huge backplane with 8196 nets. Non-trivial stuff.
So the real stumbling block for this project is the backplane, and this is basically the only way I could figure out how to build it: write an autorouter. It's a fun project that really couldn't have been done before the launch of KiCad 9; the new IPC API was a necessity to make this a reality. After that it was just some CuPy for the sparse matrices and a few blockers while adapting PathFinder to circuit boards.
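For the curious, the core of PathFinder-style negotiated congestion fits in a few dozen lines. This is a toy model in plain Python (uniform base costs, no timing terms, shared endpoints not special-cased), not the GPU/CuPy implementation described here:

```python
import heapq

def dijkstra(graph, src, dst, usage, history):
    """Shortest path where a node's cost grows with present sharing
    and with how often it has been congested in past iterations."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == dst:
            break
        if d > dist.get(n, float("inf")):
            continue
        for m in graph[n]:
            # PathFinder-flavored cost: (1 + history) * (1 + sharing)
            nd = d + (1 + history[m]) * (1 + usage[m])
            if nd < dist.get(m, float("inf")):
                dist[m], prev[m] = nd, n
                heapq.heappush(pq, (nd, m))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def route_nets(graph, nets, iters=20):
    """Each iteration rips up and re-routes every net; congested nodes
    accumulate history cost so nets negotiate their way apart."""
    history = {n: 0 for n in graph}
    routes = {}
    for _ in range(iters):
        usage = {n: 0 for n in graph}
        for net_id, (src, dst) in nets.items():
            path = dijkstra(graph, src, dst, usage, history)
            routes[net_id] = path
            for n in path:
                usage[n] += 1
        congested = [n for n, u in usage.items() if u > 1]
        if not congested:
            break
        for n in congested:
            history[n] += 1   # make hot spots pricier next time
    return routes
```

On a real board the "nodes" are a routing grid with copper and via costs, which is where the sparse matrices and GPU come in.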
Last week I finished up the 'cloud routing' functionality and was able to run this on an A100 80GB instance on Vast.io; the board wouldn't fit in my 16GB 5080 I used for testing. That instance took 41 hours to route the board, and now I have the result back on my main battlestation ready for the bit of hand routing that's still needed. No, it's not perfect, but it's an autorouter. It's never going to be perfect.
This was a fun project, but what I really should have been doing the past three months or so is grinding leetcode. It's hard out there, and given that I've been rejected from every technician job I've applied to, I don't think this project is going to help me. Either way, this project... is not useful. There are probably a dozen engineers out there in the world that this _could_ help.
So, while it's working for my weird project, this is really not what hiring managers want to see.
I was a tube amp tech for several years, have built multiple guitar amps from scratch, and still dabble in it.
What may not be obvious is that modern tube amp designs are an evolutionary branch from 1930's technology, with only a little coming across from the transistor->digital tech tree. The amps of the 40s and 50s were pretty closely based on reference designs that came from RCA and other tube manufacturers.
Modern passive components (resistors, diodes and caps) are made to a far higher tolerance and are better understood, but tubes and transformers are a mixed bag. The older designs were somewhat overbuilt and can be more reliable or have tonal characteristics that are not available in modern parts.
Back when I was in Uni, so late 80s or early 90s, my dad was Project Manager on an Air Force project for a new F-111 flight simulator, when Australia upgraded the avionics on their F-111 fighter/bombers.
The sim cockpit had a spherical dome screen and a pair of Silicon Graphics Reality Engines. One of them projected an image across the entire screen at a relatively low resolution. The other projector was on a turret that panned/tilted with the pilot's helmet and projected a high-resolution image, but only in a circle of perhaps 1.5m directly in front of where the helmet was aimed.
It was super fun being the project manager's kid and getting to "play with it" on weekends sometimes. You could see what was happening while wearing the helmet and sitting in the seat if you tried - mostly by intentionally pointing your eyes in a different direction to your head - but when you were "flying around" it was totally believable, and it _looked_ like everything was high resolution. It was also fun watching other people fly it, and being able to see where they were looking - and where they weren't looking and the enemy was sneaking up on them.
Aha! I used to work in film and was very close to the film scanning system.
When you scan in a film you need to dust-bust it, and generally clean it up (because there are physical scars on the film from going through the projector. There's also a shit ton of dust that needs to be physically or digitally removed, i.e. "busted").
If you're unlucky you'll use a telecine machine, https://www.ebay.co.uk/itm/283479247780 which runs much faster, but has less time to dust-bust and properly register the film (so it'll warp more).
However! That doesn't affect the colour. Those colour changes are deliberate and are a result of grading. I.e., a colourist has gone through and made changes to make each scene feel more effective. Ideally they'd alter the colour for emotion, but that depends on who's making the decision.
As someone who has been the target of an FBI investigation for what was effectively criminal copyright infringement (later arrested and did time in prison), my only takeaway is that this, if anything, should just be a civil suit just like so many other similar cases of copyright issues.
In my personal experience, the priorities of the FBI are typically highly politically motivated. The exceptions are if you’re doing something seriously icky, or doing fraud that deceives people.
For those interested in what’s reported and what actually happens, I’ve made some comments on my case and my experience here: https://prison.josh.mn
Interestingly, it took another 7 years for stack overflows to be taken seriously, despite a fairly complete proof of concept that was widely written about. For years, pretty much everybody slept on buffer overflows of all sorts; if you found an IFS expansion bug in an SUID, you'd only talk about it on hushed private mailing lists with vendor security contacts, but nobody gave a shit about overflows.
It was Thomas Lopatic and 8lgm that really lit a fire under this (though likely they were inspired by Morris' work). Lopatic wrote the first public modern stack overflow exploit, for HPUX NCSA httpd, in 1995. Later that year, 8lgm teased (but didn't publish --- which was a big departure for them) a remote stack overflow in Sendmail 8.6.12 (it's important to understand what a big deal Sendmail vectors were at the time).
That 8lgm tease was what set Dave Goldsmith, Elias Levy, San Mehat, and Pieter Zatko (and presumably a bunch of other people I just don't know) off POC'ing the first wave of public stack overflow vulnerabilities. In the 9-18 months surrounding that work, you could look at basically any piece of privileged code, be it a remote service or an SUID binary or a kernel driver, and instantly spot overflows. It was the popularization with model exploits and articles like "Smashing The Stack" that really raised the alarm people took seriously.
That 7-year gap is really wild when you think about it: during that period, while people jealously guarded fairly dumb bugs (like an errant pipe-filter input to the calendar manager service that ran by default on SunOS, shelling out to commands), you could have owned literally any system on the Internet, so prevalent were the bugs. And people blew them off!
I wrote a thread about this on Twitter back in the day, and Neil Woods from 8lgm responded... with the 8.6.12 exploit!
My dad worked on the Space Shuttle main engine program in the 80s. One of the things they built was the turbopump [0], which generated 23,000 HP (and could drain your average home swimming pool in one minute).
Seeing the test firings of the pump was pretty amazing, draining one "swimming pool" and filling another in a minute.
I implemented the same behavior in a different Google product.
I remember the PM working on this feature showing us their research on how iPhones rendered bars across different versions.
They had different spectrum ranges, one for each of maybe the last 3 iPhone versions at the time. And overlaid were lines that indicated the "breakpoints" where iPhones would show more bars.
And you could clearly see that with every release, iPhones were shifting all the breakpoints further and further to the left, rendering more bars with less signal strength.
We tried to implement something that matched the most recent iPhone version.
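The breakpoint scheme is just a set of thresholds. A sketch with made-up dBm values (the real per-version breakpoints aren't published, which is why the PM had to measure them):

```python
# Hypothetical breakpoints in dBm: one threshold per bar.
BREAKPOINTS = [-110, -100, -90, -80]

def bars(rssi_dbm, breakpoints=BREAKPOINTS):
    """One bar per threshold met. Shifting the thresholds left
    (more negative) shows more bars for the same signal strength,
    which is the change the research showed across releases."""
    return sum(rssi_dbm >= t for t in breakpoints)
```

With the default thresholds a -95 dBm reading shows 2 bars; shift every threshold 5 dB left and the same reading shows 3.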
Personal anecdote time, which enough time has passed that it can finally be told.
About 30 years ago, a family came down from the mountains near San Luis Obispo to ask whether my mother could teach them piano. They were an unusual family -- a mother and a number of children; apparently their father wouldn't leave his homestead up in the mountains. The children were all homeschooled. They were perhaps a bit raggedy, but all quite brilliant and free-thinking, and quickly became excellent piano players. Our family became friends with theirs, and eventually we were invited to visit their homestead up in the mountains.
The homestead was an off-grid hand-built house and working organic dairy farm, lovingly stuffed to the rafters with various arts and crafts, including a large collection of medieval-style musical instruments which the patriarch of the family, Hal, had built by hand. Hal was an enigma within an enigma: he refused to talk about his past, looked like a Santa Claus-style mountain man, wouldn't engage with the outside world in person, but was relentlessly curious about it -- able to keep up with conversations about the latest in politics and technology. He also had a keen interest in the archaeology of the upper Colorado Plateau, and soon we were making trips to the Cal Poly library to check out the latest archaeology books on his behalf. One day, on a whim, we looked for his name in the index of one of those books, and that's when we found out that we already knew who he was.
Haldon Chase[1] had been at the absolute epicenter of the Beat movement. He was the one who introduced Allen Ginsberg to Jack Kerouac, and most of the other Beats to each other. He'd gone by the pseudonym "Chad King" in "On the Road". He didn't have a Wikipedia entry at the time, and all anybody knew was that he had vanished at some point. Of course my family felt privileged to know the rest of the story.
Thinking now about Hal's life, in the few retrospectives I've seen of it, he's framed as having rejected the whole Beat lifestyle. I'm not sure that's accurate. In many ways the life he managed to carve out for himself was the apotheosis of much of the Beat philosophy: genuinely free-thinking, self-reliant, non-conformist, creative, and in his way, spiritual. All very Beat. What he certainly rejected was the limelight. The publicity, the drama, the ego. He wanted absolutely nothing to do with any of that. So he managed to get away and just live a good (if unconventional) life. His kids have all gone on to live really good, non-messed-up lives as well.
So when reading stories about messed-up Beats and their messed-up kids, it's worth considering that there's a kind of anti-survivor-bias at play: where everything worked out, where the trauma didn't explode dramatically or get passed down the generations, you're probably not going to hear about it.
I miss OS/2 a lot. For what it was at the time (intel, not ppc) it worked really well. When I was at Netscape, my build machine was OS/2 so I could do windows builds and still actually work. Machines then were much less capable than now, but I rarely had any bogging down of the system.
I've told this story before, but it's super relevant here. When I set up all the servers for reddit, I set them all to Arizona time. Arizona does not have DST, and is only one hour off of California (where we all were) for five months, and the same as here the rest of the year.
I didn't use UTC because subtracting eight when reading raw logs is a lot harder than subtracting one.
They use UTC now, which makes sense since they have a global workforce and log reading interfaces that can automatically change timezones on display.
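The tradeoff is easy to see in code. A sketch using a fixed UTC-7 offset, which happens to be exactly right for Arizona precisely because it skips DST (names here are mine):

```python
from datetime import datetime, timedelta, timezone

# Arizona observes no DST, so it is UTC-7 year-round -- a fixed
# offset, which is what made it a convenient wall clock for raw logs.
ARIZONA = timezone(timedelta(hours=-7), "MST")

def to_local(epoch_seconds, tz=ARIZONA):
    """Convert a UTC epoch timestamp from a raw log into display time."""
    return datetime.fromtimestamp(epoch_seconds, tz)
```

A display layer that does this conversion automatically is what removes the original argument for server clocks set to anything other than UTC.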
Back in 2007, I published the first YouTube bypass of the Master Lock #175 (very common 4-digit code lock), using a paperclip.
After the video reached 1.5M views (over a couple of years), it was eventually demonetized (no official reason given). I suspect there was a frivolous DMCA claim or similar, but at that point in my life I didn't have any money (my net worth was negative), so I just accepted YouTube's ruling.
Eventually I shut down the account, not wanting to help thieves bypass one of the most common utility locks around — but I'm definitely in a position now where I understand that videos like mine and McNally's force manufacturers to actually improve their locks' security/mechanisms.
It is lovely now to see that the tolerances on the #175 have been tightened enough that a paperclip no longer defeats the lock (at least non-destructively); but thin high-tensile picks still do the trick (of bypassing the lock) via the exact same mechanism.
Locks keep honest people honest, but to claim that Master's products offer high security is inherently dishonest (e.g. in their advertising). Thievery is about ease of opportunity; if I were stealing from a jobsite with multiple lockboxes, the ones with Master locks would be attacked first (particularly the wafer cylinders).
So I once brought down an alerting system using Excel.
(btw, this story is more about unintended consequences instead of MSFT)
- I own an alerting system
- For log based alerts, it looks for a keyword e.g. "alert_log"
- I make a spreadsheet to track data about alerts and call one of the sheets "alert_log"
- Alert system starts going crazy: using tons of CPU, number of alerts processed goes through the roof but not a lot of alerts generated
- Turns out that I was using the cloud version of Excel so any text entered transited the firewall
- Firewall logs store the text "alert_log"
- Alert system thinks it's an alert BUT it's not a real alert so triggers an alert processing alert
- That second alert contains the text from the firewall log and so cycle begins
In other words, systems can operate in weird ways and then cause things to happen you didn't anticipate. It's why things like audits, red teaming and defense in depth all matter.
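The feedback loop above can be modeled in a few lines. A toy sketch (the keyword comes from the story; the round cap stands in for whatever eventually broke the real cycle):

```python
def process(log_lines, keyword="alert_log", max_rounds=5):
    """Toy model of the loop: any log line containing the keyword
    raises an alert, and a malformed alert emits a processing-error
    log line that quotes the offending text -- keyword included --
    feeding the next round."""
    alerts = 0
    rounds = 0
    queue = list(log_lines)
    while queue and rounds < max_rounds:
        rounds += 1
        next_queue = []
        for line in queue:
            if keyword in line:
                alerts += 1
                # the "alert about a bad alert" quotes the bad line
                next_queue.append(f"processing error for: {line}")
        queue = next_queue
    return alerts
```

One innocuous firewall log line containing the spreadsheet's sheet name keeps the system busy indefinitely; a line without it does nothing.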
Fun fact: this is one of the few situations in the US where a prosecutor could claim that this is criminal speech (though I hope and trust they would not, and if they did, it would get thrown out by any court respecting the First Amendment).
Not a civil issue, like libel or fraud, but the sort of talk that can get a policeman to come and drag you off to jail. If you've ever wondered why DRM is so roundly hated by engineers of a certain age, it's because not only is it dumb makework that they are required to implement, and not only is it extremely irritating to discover it interfering with your own computer, but if you do effectively point out how dumb, irritating, and eminently circumventable it is, they've made it against the law to even tell anyone.
Way back in the 90s, I had a hacked satellite dish. This meant that I could get local channels from across the USA. My roommate used this for a school assignment. He looked at how much time local news spent on each topic, categorized by city. Here is what he found:
- All newscasts featured crime more than anything else ("if it bleeds it leads").
- All newscasts had a local feel-good story.
- All newscasts had weather (although East Coast and Midwest stations spent more time on it).
- All newscasts had a local sports update.
But what was most interesting was what they spent the rest of their time on:
- In New York, it was mostly financial news.
- In Los Angeles, it was mostly entertainment news.
- In San Francisco, it was mostly tech-related news.
- In Chicago it was often manufacturing related.
That homework was really what drove home for me that the news is very cherry picked and I basically stopped watching after that.
In safety-critical systems, we distinguish between accidents (actual loss, e.g. lives, equipment, etc.) and hazardous states. The equation is
hazardous state + environmental conditions = accident
Since we can only control the system, and not its environment, we focus on preventing hazardous states, rather than accidents. If we can keep the system out of all hazardous states, we also avoid accidents. (Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.)
One such hazardous state we have defined in aviation is "less than N minutes of fuel remaining when landing". If an aircraft lands with less than N minutes of fuel on board, it would only have taken bad environmental conditions to make it crash, rather than land. Thus we design commercial aviation so that planes always have N minutes of fuel remaining when landing. If they don't, that's a big deal: they've entered a hazardous state, and we never want to see that. (I don't remember if N is 30 or 45 or 60 but somewhere in that region.)
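The fuel-reserve rule above comes down to two predicates, mirroring the equation `hazardous state + environmental conditions = accident`. A sketch (the reserve value is a placeholder, since the exact N isn't pinned down here):

```python
RESERVE_MIN = 45   # placeholder: somewhere in the 30-60 minute range

def hazardous_state(fuel_min_at_landing):
    """Flag the hazardous state itself, not the accident: landing under
    the reserve means only benign conditions stood between the flight
    and fuel exhaustion."""
    return fuel_min_at_landing < RESERVE_MIN

def accident(hazardous, bad_environment):
    """Loss requires BOTH terms. We only control the first operand,
    which is why safety engineering targets hazardous states."""
    return hazardous and bad_environment
```

Keeping `hazardous_state` false makes `accident` false regardless of the environment, which is exactly the design argument.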
For another example, one of my children loves playing around cliffs and rocks. Initially he was very keen on promising me that he wouldn't fall down. I explained the difference between accidents and hazardous states to him in children's terms, and he slowly realised that he cannot control whether or not he has an accident, so it's a bad idea to promise me that he won't have one. What he can control is whether or not bad environmental conditions lead to an accident, and he does that by keeping out of hazardous states. In this case, the hazardous state would be standing less than a child-height from a ledge when there is nobody below ready to catch him. He can promise me to avoid that, and that satisfies me a lot more than a promise not to fall.
Tangential, but I practically owe my life to this guy. He wrote the Flask mega-tutorial, which I followed religiously to launch my first website. Then right before launch, I got stuck on the most critical part of my entire application: piping a fragged file in Flask. He answered my Stack Overflow question, I put his fix live, and the site went viral. Here's the link for posterity's sake: https://stackoverflow.com/a/34391304/4180276
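I won't presume to restate the linked answer, but the general pattern for piping a large file out of Flask is a chunk-yielding generator wrapped in a streamed `Response`. The generator half, stand-alone (plain Python, my names):

```python
def stream_file(path, chunk_size=8192):
    """Yield a file in fixed-size chunks instead of reading it whole.
    In Flask you would wrap this generator in a Response so the file
    streams to the client without sitting in memory."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```

In Flask terms that is roughly `Response(stream_file(path), mimetype=...)`, assuming nothing about the specifics of the question being answered in the link.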
I started out my automotive software career with Ford, and as part of the new college hire training program, I actually got to see the process of how "book rate" is determined. They take a brand new car, straight off the assembly line and give a master mechanic a process sheet (head gasket remove and replace, for instance). He has a tool cart with a computer next to it, about 6 feet away from the vehicle. For each step he starts a timer on the computer for that step, picks up the necessary ratchet and socket or whatever, loosens the next bolt, walks the ratchet and socket back to the tool box, puts it away and then finally stops the timer. He probably practices the procedure a few times before the timed run, but basically this prevents the company from setting the time to do a job super crazy low.
He's also not allowed to take any shortcuts from the book procedure, of which there are frequently a few available (use a long wobble extension bar and a universal joint and you can get in without taking off all of the stuff above that bolt, whatever). On the other hand, this is the warranty rate (meaning new cars, largely less rust, etc.). Independent/non-dealer mechanics will typically charge more time than the warranty time estimate from the manufacturer to account for things like rusty vehicles with harder-to-remove bolts, though this is usually in the rate book they subscribe to from whatever information source they pay for (warranty + 20% or so).
The issue is that the estimated time for a job is probably a high estimate for a brand new car and probably a low estimate for a several year old car, and the risk of that is on the dealership. The dealership then pays mechanics an hourly wage ($20+, fairly high for well certified master mechanics) and assumes that the hours listed on the job from the manufacturer are accurate, leaving the mechanic to take the risk if it goes over. Generally, the dealership loses on this proposition too, since they lose out on business/bay/electric/heat/etc for the lost time, so they don't like warranty work. They can upcharge/charge for more time/etc on a job for a customer, not for warranty repair due to contractual obligations to the OEM. This is particularly bad for Ford, since they currently lead the industry in recalls and warranty spend, meaning that their dealership networks are getting a lot more of that kind of work with limited profit and no ability to turn it down.
Hey guys, I learned electronics from a nobel laureate!
Throughout my physics career, including my PhD, analog electronics was the most difficult but probably also the most rewarding class for me. I fondly remember staying until 2am in Broida at UCSB trying to get a filter to work, getting a few hours of sleep, then being back in the lab before sunrise. Of course, this was mostly the result of procrastination, but damn were those good times.
One thing that really bothered me then was the idea of a current source. I was perfectly happy with a voltage source, perhaps naively(1). But a current source seemed magical. I asked Martinis about this and he seemed dumbfounded that I didn't understand. Of course, the answer is feedback. And, of course, good voltage sources also require feedback. But he was so familiar with feedback control that it didn't even occur to him to say that's what's happening, while I had never even heard of control theory.
Long story short, sometime later I asked to join his lab as an undergrad researcher. He said no, and to this day I think it's because I didn't understand current sources. Or maybe I was too late, or maybe it was the A- (see the aforementioned procrastination). That led me to ask a biophysicist, and so I became a biophysicist instead of going into condensed matter/QI/QC. In hindsight, I think this was fortunate. I would never have considered biophysics, which has been one of the loves of my life since then. Who knows, maybe I would've been just as happy with quantum stuff. I'm working through Mike and Ike now and find it fascinating.
Funny enough, after my PhD, I co-founded a startup in industrial control & automation. Now I understand feedback quite well, and thus current sources, albeit many years too late.
(1) Of course, good voltage sources vary their resistance just like good current sources vary their voltage. My best guess as to why I was more bothered by current sources is that I was so familiar with voltage sources that confidently claimed constant voltage (batteries). Not a very good reason; I should've questioned it more. In practice, it's much easier to make a near-ideal voltage source (very low internal resistance) than a near-ideal current source (very high internal resistance).
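The "a current source is just feedback" point above can be sketched in a few lines of code. This is a toy simulation, not a real circuit design: a proportional-integral controller adjusts the voltage of an ordinary voltage source so that the measured current through a load stays at a setpoint, even when the load resistance suddenly doubles. All gains and component values here are illustrative.

```python
# Toy sketch: a current source built from a voltage source plus feedback.
# A PI controller drives v_out so that i_meas tracks setpoint_a, even as
# the load resistance changes. Numbers are illustrative, not a real design.

def simulate_current_source(setpoint_a=0.010, steps=4000, dt=1e-4):
    kp, ki = 500.0, 1e5        # PI gains (hypothetical, chosen for stability)
    integral = 0.0
    v_out = 0.0                # voltage the source is currently driving
    history = []
    for step in range(steps):
        # Load resistance jumps from 1 kohm to 2 kohm mid-simulation,
        # the kind of disturbance an ideal current source shrugs off.
        r_load = 1000.0 if step < steps // 2 else 2000.0
        i_meas = v_out / r_load            # Ohm's law: measure the current
        error = setpoint_a - i_meas        # how far off the setpoint we are
        integral += error * dt             # accumulate error over time
        v_out = kp * error + ki * integral # PI control law sets the voltage
        history.append(i_meas)
    return history

currents = simulate_current_source()
```

After the transient dies out, the measured current sits at the 10 mA setpoint on both sides of the resistance step, which is exactly the behavior that seemed "magical" without the feedback framing.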
Hah, this is my time to shine. I worked in anime subtitling and timing for a number of years. I helped write our style guide — things like how to handle signs, overlapping dialogue, colors etc.
It wound up being quite a large document!
But the thing to realize here is that all of these subs have to be placed by hand. There are AI tools that can help you match in and out times, but they have a difficult time matching English subs to Japanese dialogue. So what you have to do is have a human with some small grasp of Japanese place each of these in/out times by hand.
If you’re really good you can do one 25 minute episode in about 35 minutes. But that’s ONLY if you don’t spend any extra time coloring and moving the subs around the screen (as you would song and sign captions).
Elite-tier subs can take up to two or even three or four hours per episode. That's why the best subs are always fan subs! Because a business will never put 8x more time into an episode's subtitles than the bare minimum.
Crunchyroll looks to have at least gone halfway for a while… but multiply those times across thousands of episodes over X years… and you can see why some manager somewhere finally decided 35 minutes was good enough.
I am in the Product world now, and I do think this was a bad move. Anime fans LOVE anime. The level of customer delight (and hate) in the anime industry is like no other. I really miss the excitement that my customers would get (and happily telegraph!) when I launched a product in those days. Which is all to say, you HAVE to factor delight into your product. Especially with a super fan base like you have in anime.
This is a solved problem. The answer is to add extra relevant information to the context as part of answering the user's prompt.
This is sometimes called RAG, for Retrieval Augmented Generation.
These days the most convincing way to do this is via tool calls.
Provide your LLM harness with a tool for running searches, and tell it to use that tool any time it needs additional information.
A good "reasoning" LLM like GPT-5 or Claude 4 can even handle contradictory pieces of information - they can run additional searches if they get back confusing results and work towards a resolution, or present "both sides" to the user if they were unable to figure it out themselves.
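The tool-call pattern described above can be sketched without any real LLM in the loop. The harness shape is what matters: the model asks for a search, the harness runs it, and the results are appended to the context before the model answers. All names here (`run_search`, the `SEARCH:` convention, the scripted stand-in model) are illustrative, not any vendor's actual API.

```python
# Hedged sketch of Retrieval Augmented Generation via tool calls.
# The "model" is a scripted stub; the harness loop is the real pattern.

DOCS = [
    "JavaScript was renamed from LiveScript in 1995.",
    "ECMAScript is the standardized specification of JavaScript.",
    "Java 1.0 shipped in January 1996.",
]

def run_search(query):
    """Toy retrieval tool: return documents sharing a keyword with the query."""
    words = set(query.lower().split())
    return [d for d in DOCS if words & set(d.lower().split())]

def answer_with_tools(prompt, model):
    """Harness loop: let the model request searches until it can answer."""
    context = [{"role": "user", "content": prompt}]
    while True:
        reply = model(context)
        if reply.startswith("SEARCH:"):           # model wants more info
            results = run_search(reply[len("SEARCH:"):].strip())
            context.append({"role": "tool", "content": "\n".join(results)})
        else:
            return reply                           # final grounded answer

# Scripted stand-in for the LLM: first it searches, then it answers
# from whatever the retrieval tool returned.
def scripted_model(context):
    if context[-1]["role"] == "user":
        return "SEARCH: ECMAScript JavaScript"
    return "Answer based on: " + context[-1]["content"]

print(answer_with_tools("What is ECMAScript?", scripted_model))
```

A real harness replaces `scripted_model` with an API call and lets the model loop through multiple searches, which is how the contradiction-resolving behavior described above emerges.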
I’ve recently thrown out all my masking tape (crepe paper) in favor of Washi tape (rice/mulberry paper with a 3M adhesive). I use Blue Dolphin for house painting and Nichiban for airbrushing. Very nice quality of life upgrade.
Masking tape would bleed or lift paint (even Frog Tape). I've seen a 10x reduction in these problems since switching to washi.