That is only true as the speed of the system performing the emulation approaches infinity ;-)
Yes, you can do all the same things in software; in fact it is trivial: just take the same output from your EDA tools and run it in a simulator. Of course that is so slow it cannot interface with (most) real external HW like CRTs and accessories, but in some technical sense it is software taking the exact same set of inputs as an FPGA and generating the exact same outputs (just much, much slower).
If we accept that as the premise, then we can consider emulators an optimization: instead of using the simulated Verilog, we try to manually write code that performs equivalent operations but can run fast enough to hit the original timing constraints of the HW we are replacing. The thing is that the code is constrained by the limits of the modern HW it is running on, and sometimes the modern HW just cannot do what the legacy HW did.
An NES does not have a frame buffer (it does not even have enough RAM to hold ~5% of a rendered frame of its output!). To cope with that, games generate their output line by line as the video signal is being generated. What that means is that when you click a button on the controller it can change the output of the scanline that is currently being written to the screen (and releasing it before the frame has finished being generated changes subsequent lines). IOW, the input latency is less than a single frame. That is not true with modern computers, where we render into a memory mapped frame buffer which is then transmitted to the screen by a complex series of chips including the GPU and display controller, and ultimately synchronized on the blanking intervals.
On an FPGA you can design a display pipeline that matches that of legacy consoles, and get the same latency. Of course you could also do the same thing in software emulation on a computer if you clock it so high that it renders and outputs one frame of video for each scanline of output on the original, but given the NES had a framerate of ~60 (59.94) fps and a vertical resolution of 240p, that comes out to a framerate of 240 × ~60 ≈ 14,400 fps to hit the latency target for accurate emulation.
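To make that concrete, here is a rough sketch in plain C of what scanline-granularity emulation looks like; every function name is a hypothetical stand-in rather than any real emulator's API, and the timing figures are just the ones quoted above:

```c
/* Illustrative sketch only: a scanline-granularity emulation loop.
 * read_controller(), emulate_one_scanline(), and output_scanline()
 * are hypothetical stand-ins, not any real emulator's API. */
#include <stdint.h>

#define VISIBLE_SCANLINES 240     /* 240p output */
#define FIELD_RATE_HZ     59.94   /* NTSC refresh rate */

static uint8_t read_controller(void)                    { return 0; }
static void emulate_one_scanline(int line, uint8_t pad) { (void)line; (void)pad; }
static void output_scanline(int line)                   { (void)line; }

int main(void) {
    /* 240 lines * ~59.94 fields/sec ~= 14,400 scanline-times per second:
     * the rate at which input can influence what reaches the screen. */
    for (int frame = 0; frame < 60; frame++) {          /* ~1 second of emulation */
        for (int line = 0; line < VISIBLE_SCANLINES; line++) {
            uint8_t pad = read_controller();   /* sampled every line...        */
            emulate_one_scanline(line, pad);   /* ...so a press can change the */
            output_scanline(line);             /* very next line on the screen */
        }
    }
    return 0;
}
```

The point of the sketch is only the loop structure: output (and input sampling) happens per line, not per frame, which is where the ~14,400 updates-per-second requirement for a software emulator comes from.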
Now in practice most of the time it is a non-issue and emulation is more than sufficient, but some old games do very funky things to exploit whatever they could on the limited HW they run on.
It is also worth noting that FPGAs are a lot more interesting for older systems. Once you get to more modern systems that look more like modern computers the strict timing becomes less important. In particular, once you get to consoles that have frame buffers the timing becomes much less sensitive because the frame buffer acts as a huge synchronization point where you can absorb and hide a lot of timing mismatches.
I'm not saying that FPGAs aren't interesting, or don't have the possibility of allowing one to produce highly accurate emulators. I think that FPGA emulators are really cool -- I've written one! Check the website linked from my profile.
I'm taking issue with how Analogue markets their products. If they want to talk about the specific benefits their products have (like low latency, or video output, or original cartridges, or accuracy), great! But the claim that they're making is that their work involves "No Emulation". And through that, they're implying that FPGA based emulators are inherently better. And they're not. It's a different set of tradeoffs, and they certainly have the possibility to have certain advantages, but that's still all up to the quality of the implementation. Which is exactly the way that software emulators work too.
Regardless of framebuffering and all that nonsense, at the end of the day, pixels have to be pushed onto the physical display, bit-by-bit. I've wondered if it would be possible for a screen to simulate CRT-style line drawing by exploiting that.
So, if the multiplexing is done line-by-line, for example, then one could simulate a CRT by driving the pixels directly, without a framebuffer. It wouldn't be easy, but that's partly why framebuffers are a thing - they're easier than not having one.
It is a pretty easy mistake to make if you are used to how fast new processors come out now, but you are comparing an i386 from 1991 to a 68020 from the mid 80s.
In 1985 when the 386 came out I believe the fastest speed you could get was 16MHz. They added higher speed variants for years afterwards. Intel made a 40MHz 386 in 1991 that was strictly aimed at embedded users who wanted more perf but were not ready to move to 486 based designs (386CX40); I doubt almost anyone used one in a PC. AMD made an Am386 at 40MHz which was a reverse engineered clone of the 386, but again that came out in the 90s (the big selling point was that you could reuse your existing 386 motherboards instead of replacing them like you needed to for a 486).
Generally speaking, macOS does not guarantee syscall stability, and does not guarantee compatibility for any binaries not linked against `libSystem.dylib` (that is the supported ABI boundary)[1]. This has a number of implications, including (but not limited to):
* The most obvious is the commonly mentioned fact that syscalls may change. Here is an example where a Go program broke because it was directly issuing the `gettimeofday()` syscall[2] (see the sketch below).
* The interface between the kernel and the dynamic linker (which is required since ABI stability for statically linked executables is not guaranteed) is private and may change between versions. That means if your chroot contains a `dyld` from an OS version that is not the same as the host kernel it may not work.
* The format of the dyld shared cache changes most releases, which means you can't just use the old dyld that matches the host kernel in your chroot because it may not work with the dyld shared cache for the OS you are trying to run in the chroot.
* The system maintains a number of security policies around platform binaries, and those binaries are enumerated as part of the static trust cache[3]. Depending on what you are doing and what permissions it needs you may not be able to even run the system binaries from another release of macOS.
In practice you can often get away with a slight skew (~1 year), but you can rarely get away with skews of more than 2-3 years.
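For illustration, here is roughly what staying on the supported side of that boundary looks like in C; the only point is to call the wrapper exported by `libSystem` rather than hard-coding a syscall number the way the Go runtime once did:

```c
/* Minimal sketch: call the libSystem/libc wrapper, which is the stable ABI,
 * instead of issuing a raw syscall number that may change between releases. */
#include <stdio.h>
#include <sys/time.h>   /* gettimeofday() as exported by libSystem */

int main(void) {
    struct timeval tv;
    if (gettimeofday(&tv, NULL) != 0) {
        perror("gettimeofday");
        return 1;
    }
    printf("seconds since the epoch: %ld\n", (long)tv.tv_sec);
    return 0;
}
```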
I presume the reason they do it is that the premise of Nushell is that it uses pipes of structured output instead of simple text streams. That means they need all the tools to output data in that form. They could include wrappers for all OS provided binaries and handle the conversion in those wrappers, but that makes you incredibly fragile to minor output or flag changes, and in many cases those wrappers would end up being more complex than the commands themselves.
Not to say there has never been software based market segmentation, but this example is just not right.
First off, the LC was in no way a threat to the IIci. The IIci had a 32 bit data bus with a 25MHz 68030 and supported a CPU cache. The LC had a 16MHz 68020 with a 16 bit bus. The IIci was conservatively twice as fast.
Second, the LC HW did not support nearly as much ram as the IIci. It shipped with 2MB soldered down (which you can logically think of as 2 1MB SIMMs) and had 2 slots that each supported 4MB SIMMs, which were the highest density commonly available at the time. The (cheaper) memory controller used in the LC only supported 24 bits of physical address (and only in so many configurations), resulting in a maximum of ~16MB. Once you account for the soldered down 2MB and how the slots had to be configured, that left you with the ability to install 4MB into each slot: 2MB + 4MB + 4MB = 10MB.
Technically speaking it was probably possible to get it to support 12MB or 16MB with a ROM patch, if you desoldered the builtin memory and wired the address lines on the controller to some custom memory board. But as shipped, with the builtin RAM and the controller chip they included, 10MB was the most it could reasonably use.
The LCII did up the builtin memory to 4MB and had a software limit of 10MB like the LC (which meant if you installed 4MB SIMMs you would be missing 2MB), but I suspect that was more a result of how quickly it came to market (it was essentially just an LC with a 68030 and 4MB of ram, both of which greatly improved the experience of using the machine with System 7, which shipped after the original Mac LC).
Within a year or so after the LCII, the LCIII shipped with a completely redesigned board, and it supported 36MB of ram.
Source: I owned a Mac LC, paid for and installed a 2MB memory upgrade to get it to 4MB, then eventually did a motherboard swap to upgrade it to an LCIII. I can even still tell you how much each of those upgrades cost ;-)
I was being a bit tongue in cheek about Apple saving the IIci. But the ROM on the machine is hardwired to only support up to 10MB of memory, even if you drop in larger sticks that would otherwise be supported. There's no real reason to do this except to protect the higher priced products. But as you noted, there wasn't really a higher end product to protect because the LC was already crippled by its slow bus and obsolete CPU.
I think you missed my point. It wasn’t just ROM limited: the ram controller they used in the LC did not have enough address lines to address more than 4MB per SIMM slot, end of story. No amount of firmware hacking can ever make it support 16MB SIMMs without what amounts to a total board redesign. Given the 2MB soldered to the board (which took the address lines for two of the slots) that meant the machine was physically limited to 10MB unless you wanted to break out a soldering iron. Yes, the ROM has a software limit, but it reflected the actual limits of the HW (and was more likely due to how the ROM went about detecting the ram than any explicit intent to limit things… it is not shocking that the software only works with supported physical configurations and not board reworks).
The LCII on the other hand is a bit less excusable, since it could physically hold 12MB but only 10 was usable. As I said, I suspect the reason is that it was a fairly quick revision they squeezed in before the redesigned LCIII and they just didn’t rev those parts of the ROM, but it still seemed pretty bad.
This is awesome. macOS actually enables the same env var protections by default if your process is opted into the hardened runtime. You can do that by passing `--options=runtime` to your codesign invocation.
iMessage is E2E even without ADP, even with groups and multiple devices. The details are complex, but they are publicly documented here[1].
The issue (I think) you are referring to is that if you enable iCloud backup[2] or iCloud for Messages[3] (both of which effectively move the storage of the messages to the cloud, either as part of the device backup or as the canonical representation that devices sync from, respectively) then the messages decoded on device will be stored in blobs that iCloud has the keys to, unless you enable Advanced Data Protection.
HW accelerated rendering was supported at least as far back as System 6.0 in 1990.
Classic Mac OS used an API called QuickDraw, which was implemented as a series of graphics primitives (originally written for the Lisa as part of LisaGraf). QuickDraw implemented support for things like drawing lines, rectangles, etc. The original implementation rendered them in software onto the system frame buffer.
Essentially all drawing went through QuickDraw, which made it a natural chokepoint to introduce acceleration, and that is exactly what happened when Apple shipped the Macintosh Display Card 8*24GC in 1990. The card included a separate Am29000 processor (which was often higher performance than the host CPU) with its own memory and an implementation of QuickDraw. Its driver patched the QuickDraw calls in the OS to RPC them over the bus to the card, which would then render them on behalf of the host. It also supported off screen rendering and DMAing the results back to other cards.
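As a purely illustrative sketch (none of these names are the real QuickDraw or 8*24GC driver interfaces), the idea is that all drawing funnels through one dispatch point, so a driver can swap the software implementation for one that forwards the primitive to the card:

```c
/* Hypothetical illustration of the "patch the chokepoint" idea; all names
 * here are made up, not the real QuickDraw or driver APIs. */
#include <stdio.h>

typedef struct { int left, top, right, bottom; } Rect;

/* Default path: render the primitive in software into the host frame buffer. */
static void fill_rect_software(const Rect *r) {
    printf("software: fill (%d,%d)-(%d,%d)\n", r->left, r->top, r->right, r->bottom);
}

/* Accelerated path: package the primitive and ship it over the bus to the card. */
static void fill_rect_card(const Rect *r) {
    printf("card: FillRect (%d,%d)-(%d,%d) sent to accelerator\n",
           r->left, r->top, r->right, r->bottom);
}

/* The chokepoint: every caller goes through this pointer, so a driver can
 * install its own implementation at load time. */
static void (*fill_rect)(const Rect *) = fill_rect_software;

int main(void) {
    Rect r = {10, 10, 100, 50};
    fill_rect(&r);                   /* software rendering by default        */
    fill_rect = fill_rect_card;      /* driver "patches" the entry point     */
    fill_rect(&r);                   /* same call, now offloaded to the card */
    return 0;
}
```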
You could argue that is still software rendering, just on another CPU, but at the end of the day that is sort of orthogonal, almost all GPUs have some programmable components you need to load firmware into in order to operate. The key point is that there was an abstract interface the OS could use to offload rendering to some other device besides the main application processor, and the UI used it.
I honestly can't recall how much of this was still in common use by the time Mac OS 9 came around. CPUs were also much faster by then, and the move to PCI meant it was possible to use fast off the shelf PCI GPUs which may have changed the cost benefit ratios enough that it was best to just take whatever the GPU vendors were offering and software render into their frame buffers even if it could not fully accelerate all the same operations a bespoke earlier design could.
posix_spawn() is specified such that it is possible (but not required!) to implement it in terms of [v]fork() and exec().
On macOS it is a syscall and is generally faster than using vfork() and exec(). It also has a number of extensions like POSIX_SPAWN_CLOEXEC_DEFAULT, POSIX_SPAWN_SETEXEC, and `posix_spawn_file_actions_addchdir_np()` that allow it to actually be used in many cases where other systems need to resort to fork() and exec().
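For reference, a minimal portable sketch of the posix_spawn() pattern looks something like this; the Apple-specific flags mentioned above would be set on a `posix_spawnattr_t` via `posix_spawnattr_setflags()`, which is omitted here for brevity:

```c
/* Minimal sketch of replacing fork()+exec() with posix_spawn(). */
#include <spawn.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *child_argv[] = { "/bin/echo", "hello from the child", NULL };

    /* NULL file actions and attributes mean "inherit the parent's state";
     * extensions like POSIX_SPAWN_CLOEXEC_DEFAULT would be enabled through a
     * posix_spawnattr_t passed as the fourth argument. */
    int rc = posix_spawn(&pid, child_argv[0], NULL, NULL, child_argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawn: %s\n", strerror(rc));
        return EXIT_FAILURE;
    }

    int status = 0;
    waitpid(pid, &status, 0);   /* reap the child like any other */
    return EXIT_SUCCESS;
}
```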