userbinator's comments | Hacker News

Looks like whatever LLM you used is not doing a very good job.

Chrome has become much worse than IE6. Microsoft was not in the business of tracking users and selling ads back then.

The latest and greatest is not great for you, but for them.

I am reminded of the old AMD CPUs with "unlockable" extra cores, which, when unlocked, would change the model name to something unusual.

"GenuineIotel" is definitely odd, but difficult to research more about; I suspect these CPUs might actually end up being collector's items sometime in the future.

because inserting no-op instructions after them prevents the issue.

The early 386s were extremely buggy and needed the same workaround: https://devblogs.microsoft.com/oldnewthing/20110112-00/?p=11...


Some of the 386 bugs described there sound to me like the classic kind of "multiple different subsystems interact in the wrong way" issue that can slip through the testing process and get into hardware, like this one:

> For example, there was one bug that manifested itself in incorrect instruction decoding if a conditional branch instruction had just the right sequence of taken/not-taken history, and the branch instruction was followed immediately by a selector load, and one of the first two instructions at the destination of the branch was itself a jump, call, or return.

Even if you write up a comprehensive test plan for the branch predictor, and for selector loads, and so on, it might easily not include that particular corner case. And pre-silicon testing is expensive and slow, which also limits how much of it you can do.


The 80386 (1985) did not have a branch predictor; branch prediction first appeared in the Intel Pentium (1993).

Nevertheless, the states of the internal pipelines, which were supposed to be stopped, flushed and restarted cleanly by taken branches, depended on whether the previous branches had been taken or not taken.


Ah, thanks for that correction -- I jumped straight from "depends on the history of conditional branches" to "branch predictor" without stopping to think that that would have been unlikely in the 386.

Before branch predictors existed, most CPUs that used any kind of instruction pipelining behaved like a modern CPU in which all branches are predicted as not taken.

Thus, on an 80386 or 80486, not-taken branches behaved like correctly predicted branches on a modern CPU, and taken branches behaved like mispredicted branches on a modern CPU.

The 80386 bug described above was probably caused by some kind of incomplete flushing of a pipeline after a taken branch, which left it in a partially invalid state that could be exposed by a specific sequence of subsequent instructions.
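
As a toy illustration of that failure mode (purely hypothetical, not the actual 386 microarchitecture): imagine a prefetch/decode stage whose flush clears the prefetch queue but forgets one latch of decode state, so a taken branch can leave something stale behind that corrupts the instruction at the branch target.

    # Toy model of an incomplete pipeline flush (hypothetical, not real 386 behavior).
    class ToyDecoder:
        def __init__(self, buggy_flush):
            self.buggy_flush = buggy_flush
            self.prefetched = []        # instructions fetched past the branch
            self.pending_prefix = None  # leftover decode state (e.g. a prefix byte)

        def feed(self, insn, prefix=None):
            self.prefetched.append(insn)
            self.pending_prefix = prefix

        def taken_branch(self, target_insns):
            self.prefetched = list(target_insns)   # prefetch queue is refilled...
            if not self.buggy_flush:
                self.pending_prefix = None         # ...the buggy flush forgets this latch

        def decode_next(self):
            insn = self.prefetched.pop(0)
            # Any stale prefix from before the branch wrongly modifies the target instruction.
            return (self.pending_prefix or "") + insn

    good, bad = ToyDecoder(False), ToyDecoder(True)
    for d in (good, bad):
        d.feed("add", prefix="lock ")   # branch not yet resolved, prefix latched
        d.taken_branch(["mov"])
        print(d.decode_next())          # good: "mov", bad: "lock mov"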


This sort of bug, especially in and around pipelines, is always hard to find. On chips I've built, we had one guy who built a system that would generate random instruction streams to try to trigger as many of them as we possibly could.

Yeah, I think random-instruction-sequence testing is a pretty good approach to try to find the problems you didn't think of up front. I wrote a very simple tool for this years ago to help flush out bugs in QEMU: https://gitlab.com/pm215/risu

Though the bugs we were looking to catch there were definitely not the multiple-interacting-subsystems type, and more just the "corner cases in input data values in floating point instructions" variety.
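
For anyone who hasn't seen the technique: the idea is to generate random (but valid) instruction sequences, run them on two implementations, and diff the resulting architectural state. A rough sketch of the concept with a made-up two-register ISA and a deliberately buggy second implementation; real tools like risu do this with actual machine instructions against real hardware or an emulator, not like this:

    import random

    OPS = ["add", "sub", "swap"]

    def random_program(length):
        # Random stream of (opcode, immediate) pairs from a tiny made-up ISA.
        return [(random.choice(OPS), random.randint(0, 9)) for _ in range(length)]

    def run_reference(prog):
        r0, r1 = 0, 0
        for op, imm in prog:
            if op == "add":   r0 += imm
            elif op == "sub": r0 -= imm
            elif op == "swap": r0, r1 = r1, r0
        return r0, r1

    def run_under_test(prog):
        r0, r1 = 0, 0
        for op, imm in prog:
            if op == "add":   r0 += imm
            elif op == "sub": r0 -= imm
            elif op == "swap" and imm != 0:  # planted bug: "swap 0" is silently dropped
                r0, r1 = r1, r0
        return r0, r1

    for i in range(1000):
        prog = random_program(20)
        if run_reference(prog) != run_under_test(prog):
            print("mismatch found by random testing:", prog)
            break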


I think FP needs its own custom tests (billions of them!) - I hate building FP units, they are really the pits

The revenge of the MIPS delay slot (the architecture simply didn't handle certain aspects of pipelining, so NOPs were required and documented as such).

The old trick years ago was to translate from English to a different language and back (possibly repeating). I'd be curious how helpful that is against stylometry detection.

If you want to be grouped with foreigners who don't know English, it might work well, although word choices may still be distinctive enough to differentiate even when translated.


Assuming the source language is English, going to a Romance language and back wouldn't be too hard grammar-wise, but could easily wipe out a lot of non-Latin-descended words if you use the right approach to translation.
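
The round-trip itself is trivial to script; the translation backend is the only interesting part. A minimal sketch, where translate_fn is a placeholder for whatever service or library you actually use, not a real API:

    # Round-trip a text through a pivot language and back, possibly repeating.
    # translate_fn(text, src, dst) is a hypothetical placeholder, not a real library call.
    def launder(text, translate_fn, pivot="fr", rounds=1):
        for _ in range(rounds):
            text = translate_fn(text, src="en", dst=pivot)  # English -> pivot
            text = translate_fn(text, src=pivot, dst="en")  # pivot -> English
        return text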

In my experience you will need to think even harder with AI if you want a decent result, although the problems you'll be thinking about will be more along the lines of "what the hell did it just write?"

The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.


Handcuffs already exist.

Sadly, nobody has time or budget for beauty any more.

It's amazing how ornately decorated early equipment was --- especially 19th century and earlier.

https://en.wikipedia.org/wiki/File:Cooke_and_Wheatstone_elec...


There has been an inversion in cost. It used to be that materials cost a lot and labor cost very little. There were no cheap plastics, particle board, or synthetic rubber; it was real rubber, real wood, real metal. So it was worth it to buy the better-quality thing made of nicer materials that was more artisanal and decorative. It was also cheaper to employ people just to enhance the look in a time-consuming fashion. Plus, since there was no miniaturization, if parts broke you could replace them instead of buying a new one.

Or the utilitarian cost has gone down much faster than the cost of decoration. If your bulb cost $10, spending $5 to make a beautiful lamp post makes sense. But if the bulb's cost has fallen to 10 cents, it's now difficult to justify even $2.

It's down to taste and materials too - in the past 50 years or so the general design trend has been toward minimalism, wood has become more and more expensive (and good quality harder to come by), etc.

But the main reason is of course cost; the linked device looks like something a woodworker would spend a few hours building (although they probably already had machinery to automate parts of the process).


you aren’t going to make non-binary-sized chips

TLC flash actually has a total number of bits that's a multiple of 3, but it and QLC are so unreliable that a significant number of extra bits are used for error correction and such.

SSDs haven't been real binary sizes since the early days of SLC flash, which didn't need more than basic ECC. (I have an old 16MB USB drive, which actually has a user-accessible capacity of 16,777,216 bytes. The NAND flash itself stores 17,301,504 bytes.)
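
The overhead in that example is easy to check, and the raw size works out exactly to the classic small-page NAND layout of 512 data bytes plus 16 spare bytes per sector (that layout is my inference from the numbers, not something I've verified on that particular drive):

    user = 16_777_216            # user-accessible bytes (exactly 16 MiB)
    raw  = 17_301_504            # bytes the NAND actually stores
    spare = raw - user           # 524,288 bytes = 512 KiB of spare area
    sectors = user // 512        # 32,768 sectors of 512 data bytes
    print(spare, raw / user)            # 524288, 1.03125 (i.e. 3.125% overhead)
    print(raw == sectors * (512 + 16))  # True: 16 spare bytes per 512-byte sector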


Ctrl+F "SIP" - 0 results before this comment.

There are decades-old standards for VoIP and teleconferencing, which even the proprietary solutions will often let you interoperate with (at additional cost). Now would be a good time to actually promote them.

