If software development taught me anything, it is that everything that can go wrong will go wrong, and the impossible will happen. As a result I prefer having fewer things that can go wrong in the first place.
Since I acknowledge my own fallibility and the remote possibility of bad things happening, I have come to prefer reliability above everything else. I don't want a bucket that leaks from a thousand holes. I want the leaks to be visible, in places I am aware of and where I can find and fix them easily. I am unable to write C code to that standard in an economical fashion, which is why I avoid C as much as possible.
This is, perhaps surprisingly, what I consider the strength of C. It doesn't hide the issues behind some language abstraction; you are in full control of what the machine does. The bug is right there in front of you, if you are able to spot it (assuming it's not hiding away in some 3rd party library, of course). Spotting it takes many years of practice, but once you have your own best practices nailed down this doesn't happen as often as you might expect.
Also, code doesn't need to be bulletproof. When you design your program you also design a scope: this program will only work under these conditions. Programs that misbehave outside that scope are actually totally fine.
Empirically speaking, programmers as a whole are quite bad at avoiding such bugs. Humans are fallible, which is why I personally think it's good to have tools to catch when we make mistakes. One man's "this takes control away from the programmer" is another man's "friend that looks at my work to make sure it makes sense".
None of that is written in pure C as defined by the ISO C standard.
Rather, they rely on a mix of C compiler language extensions and helper functions written in inline or external assembly, which any compiled language also has available when it needs to step outside its standard.
When most people say "I write in C", they don't mean the abstract ISO C standard, with the possibility of CHAR_BIT=9. They mean "C for my machine" - C with compiler extensions, assumptions about the memory model, and yes, occasional inline assembly.
That is not an argument. ANSI/ISO C standardizes the hardware-independent parts of the language, but at some point you have to meet the hardware. The concept of an "implementation platform" (i.e. CPU arch + OS + ABI) is well known for all language runtimes.
All apps using the above-mentioned are written in standard ANSI/ISO C. The implementations themselves are "system level" code and hence have language/HW/OS-specific extensions, which is standard practice when interfacing with low-level code.
> any compiled language also has available
In theory yes, but in practice never with the ease or flexibility with which you can use C for the job. This is what people mean when they say "C is close to the metal" or "C is a high-level assembly language".
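For a concrete taste of what "C for my machine" means in practice, here is a small sketch. It assumes GCC or Clang on x86-64, and both the extended inline assembly and __builtin_expect are compiler extensions rather than ISO C:

    #include <stdint.h>

    /* Read the x86-64 time stamp counter. ISO C has no way to express this;
       it relies on the GCC/Clang extended inline-assembly extension. */
    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* __builtin_expect is another common extension: a branch hint the
       language itself does not provide. */
    #define likely(x) __builtin_expect(!!(x), 1)

The same thing is possible from most compiled languages, but C makes mixing standard code and these platform-specific escapes particularly frictionless.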
You're assuming that CAD tooling is mature enough to enable that.
There is no standard format for CAD projects/design files. STEP is a standard format for exporting finished designs.
There are many CAD-with-code platforms, but none of them has converged on a shared language the way multiple competing C compilers converged on C.
I might be beating a dead horse with this one, but standardizing around FreeCAD isn't possible either, because it isn't good enough to compete with commercial CAD software like Solidworks, OnShape or Autodesk Fusion. Blender is almost there, but only for mesh-based free-form 3d modeling.
Then on top of that, a 3d model on its own is useless: you still need to manufacture the part. Printers have varying capabilities, and something that has turned out to be particularly essential is multi-color printing, since it allows embedding text and markings onto a print; not every printer is capable of doing that, or of doing it economically.
Ordering individual parts is expensive, which means you'd rather buy a full kit from someone who is getting volume discounts.
Lightshot is a cloud-first screenshot tool. This means you shouldn't screenshot sensitive information. That's all I could find.
Considering that there are many tools like ShareX where uploading the screenshot is a feature, I don't really see reboot81's poor attempt at "spreading awareness" as genuine.
Humans update their model of the world as they receive new information.
LLMs have static weights, so they cannot have a concept of truth. If the world changes, they keep insisting on the information that was in their training data. There is nothing that forces an LLM to follow reality.
As far as I know, static recompilation is thwarted by self modifying code (primarily JITs) and the ability to jump to arbitrary code locations at runtime.
The latter means that even in the absence of a JIT, you would need to achieve 100% code coverage (akin to unit testing or fuzzing) to perform static recompilation, otherwise you need to compile code at runtime at which point you're back to state of the art emulation with a JIT. The only real downside of JITs is the added latency similar to the lag induced by shader compilation, but this could be addressed by having a smart code cache instead. That code cache realistically only needs to store a trace of potential starting locations, then the JIT can compile the code before starting the game.
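A rough sketch of that "record starting locations, compile before launch" idea; the names (jit_compile, cache_insert, blocks.trace) are invented for illustration and just stand in for a real emulator's recompiler and code cache:

    #include <stdint.h>
    #include <stdio.h>

    typedef void (*compiled_block_fn)(void);

    /* Stand-ins for a real emulator's recompiler and code cache. */
    static compiled_block_fn jit_compile(uint32_t pc)           { (void)pc; return NULL; }
    static void cache_insert(uint32_t pc, compiled_block_fn fn) { (void)pc; (void)fn; }

    /* Warm the code cache from a trace of previously seen block entry points,
       so translation happens before gameplay instead of causing hitches at
       runtime (the shader-compilation analogy above). */
    static void warm_code_cache(const char *trace_path)
    {
        FILE *f = fopen(trace_path, "rb");
        if (!f)
            return;                       /* first run: nothing recorded yet */

        uint32_t pc;
        while (fread(&pc, sizeof pc, 1, f) == 1)
            cache_insert(pc, jit_compile(pc));

        fclose(f);
    }

    int main(void)
    {
        warm_code_cache("blocks.trace");  /* hypothetical trace file name */
        return 0;
    }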
Yes, but in practice that isn't a problem. People do write self-modifying code and jump to random places today, but it is much less common than in the past. It is safe to say that most games are developed and run on the developer's PC and then ported to the target system. If the developers know the target system they will make sure it works there from day one, but most are going to prefer running their latest changes on their current machine over sending them to the target system. If you really need to take advantage of the hardware you can't do this, but most games don't.
Many games are written in a high-level language (like C...) which doesn't give you easy access to self-modifying code. (Even higher-level languages like Python do, but they are not compiled and so not part of this discussion.) Likewise, jumping to arbitrary code is limited to function calls for most programmers.
Many games just run on a game engine, and the game engine is something we can port or rewrite to other systems and then enable running the game.
Be careful of the above: most games don't become popular. It is likely the "big ticket" games people are most interested in emulating had the development budget and the need to take advantage of the hardware in the hard ways. That is, the small minority of exceptions are the ones we care about the most.
JIT isn't _that_ common in games (although it is certainly present in some, even from the PS2 era), but self-modifying or even self-referencing executables were a quite common memory saving trick that lingered into the PS2 era - binaries that would swap different parts in and out of disk were quite common, and some developers kept using really old school space-saving tricks like reusing partial functions as code gadgets, although this was dying out by the PS2 era.
Emulation actually got easier after around the PS2 era, because hardware got a little closer to commodity and console makers realized they would need to emulate their own consoles in the future, so they banned things like self-modifying code as policy (AFAIK the PowerPC code segment on both PS3 and Xbox 360 is mapped read-only; although I think SPE code could technically self-modify, I'm not sure that was widespread).
The fundamental challenges in this style of recompilation are mostly offset jump tables and virtual dispatch / function pointer passing; this is usually handled with some kind of static analysis fixup pass to deal with jump tables and some kind of function boundary detection + symbol table to deal with virtual dispatch.
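For concreteness, those two constructs look roughly like this in C source. A compiler typically lowers the dense switch into an offset jump table plus an indirect branch, and the function-pointer call's target is only known at runtime, so neither target set falls out of simply scanning for call instructions:

    #include <stdio.h>

    /* Dense switch: compilers usually emit an offset jump table and an
       indirect jump, whose targets a recompiler has to recover with a
       static-analysis fixup pass. */
    static const char *opcode_name(int op)
    {
        switch (op) {
        case 0: return "nop";
        case 1: return "add";
        case 2: return "sub";
        case 3: return "mul";
        case 4: return "div";
        default: return "unknown";
        }
    }

    /* Function pointer passed through a struct: the moral equivalent of C++
       virtual dispatch. The call site alone doesn't say where control goes,
       which is why function-boundary detection and a symbol table help. */
    struct renderer {
        void (*draw)(void);
    };

    static void draw_triangles(void) { puts("triangles"); }

    int main(void)
    {
        struct renderer r = { draw_triangles };
        r.draw();                 /* indirect call, target chosen at runtime */
        puts(opcode_name(3));
        return 0;
    }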
I believe the main interest in recompilation is in using the recompiled source code as a base for modifications.
Otherwise, yeah, a normal emulator JIT basically points a recompiler at each jump target encountered at runtime, which avoids the static analysis problem. AFAIK translating small basic blocks and not the largest reachable set is actually desirable since you want frequent "stopping points" to support pausing, evaluating interrupts, save states, that kind of stuff, which you'd normally lose with a static recompiler.
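In rough pseudo-C, that runtime loop looks something like the following; lookup_block, translate_block and the interrupt hook are made-up stand-ins for a real emulator's internals. The point is that control returns to the dispatcher after every small block, which is where the pausing, interrupts and save states happen:

    #include <stdint.h>
    #include <stdbool.h>

    /* Each translated basic block runs a bit of guest code and returns the
       next guest program counter. */
    typedef uint32_t (*block_fn)(void);

    /* Hypothetical emulator hooks (not a real API). */
    extern block_fn lookup_block(uint32_t pc);      /* code cache lookup   */
    extern block_fn translate_block(uint32_t pc);   /* recompile one block */
    extern void     service_interrupts_and_savestates(void);

    void run_guest(uint32_t pc, bool *running)
    {
        while (*running) {
            block_fn fn = lookup_block(pc);
            if (!fn)
                fn = translate_block(pc); /* new jump target seen at runtime */

            pc = fn();                    /* execute one small basic block */

            /* Because blocks are small, we get back here often enough to
               pause, take interrupts, or snapshot a save state. */
            service_interrupts_and_savestates();
        }
    }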
How many PS2-era games used JIT? I would be surprised if there were many of them - most games for the console were released between 2000 and 2006. JIT was still considered a fairly advanced and uncommon technology at the time.
A lot of PS2-era games unfortunately used various self-modifying executable tricks to swap code in and out of memory; Naughty Dog games are notorious for this. This got easier in the Xbox 360 and PS3 era where the vendors started banning self-modifying code as a matter of policy, probably because they recognized that they would need to emulate their own consoles in the future.
The PS2 is one of the most deeply cursed game console architectures (VU1 -> GS pipeline, VU1 microcode, use of the PS1 processor as IOP, etc) so it will be interesting to see how far this gets.
Ah - so, not full-on runtime code generation, just runtime loading (with some associated code-mangling operations like applying relocations). That seems considerably more manageable than what I was thinking at first.
Yeah, at least in the case of most Naughty Dog games the main ELF binary is in itself a little binary-format loader that fixes up and relocates proprietary binaries (compiled GOAL Lisp) as they are streamed in by the IOP. It would probably be a bit pointless to recompile Naughty Dog games this way anyway, though: since the GOAL compiler didn't do a lot of optimization, the original code can be recovered fairly effectively (OpenGOAL) and recompiled from that source.
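As a toy illustration of what "fixes up and relocates" means (this is an invented relocation record, not the actual GOAL object layout): once the loader knows where the streamed-in blob landed in memory, it turns stored offsets into absolute addresses.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Invented record: "add the blob's load address to the 32-bit value
       stored at `offset` inside the blob". Real formats (ELF, or a
       proprietary loader's) are richer, but the core idea is the same. */
    struct reloc {
        uint32_t offset;
    };

    static void apply_relocations(uint8_t *blob_base,
                                  const struct reloc *relocs, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            uint32_t value;
            memcpy(&value, blob_base + relocs[i].offset, sizeof value);
            value += (uint32_t)(uintptr_t)blob_base;  /* offset -> absolute address */
            memcpy(blob_base + relocs[i].offset, &value, sizeof value);
        }
    }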
I'd say practically none; we were quite memory-starved most of the time, and even regular scripting engines were a hard sell at times (perhaps more due to GC than to interpretation performance).
Games on PS2 were C or C++ with some VU code (asm or some specialized HLL) for most parts, often with Lua (chosen for its low memory usage) or similar scripting added for minor parts, with bindings to native C/C++ functions.
"Normal" self-modifying code went out of favour a few years earlier in the early-mid 90s, and was perhaps more useful on CPU's like the 6502s or X86's that had few registers so adjusting constants directly into inner-loops was useful (The PS2 MIPS cpu has plenty of registers, so no need for that).
However, by the mid/late 90s CPUs like the PPro already added penalties for self-modifying code, so it was already frowned upon; also, PS2-era games often ran with PC versions side by side, so you didn't want more platform dependencies than needed.
Most of the PS2 performance tuning we did was around resources/memory and the VUs, helped by DMA chains.
Self modifying code might've been used for copy-protection but that's another issue.
OS package managers are the equivalent of a massive monorepo: you have to ask them to let you in, they have many reasons to refuse you, and yet you have to do your job anyway.
Unless you're truly car sharing with a bunch of other people going the same way, I don't see how that makes sense. You have to wait for the car to arrive and you're paying a premium for it.
It works if there is no scheduler, or you tell the scheduler what you're doing.
Turns out the first scenario is rare outside of embedded or OS development. The second scenario defeats the purpose because you're doing the same thing a mutex would be doing. It's not like mutexes were made slow on purpose to bully people. They're actually pretty fast.
How can you guarantee that the OS doesn't preempt your thread in the middle of the spinlock? Suddenly your 100-cycle spinlock turns into millions or billions of wasted cycles, because the other threads trying to acquire the same lock keep spinning, and they never informed the OS scheduler that they need the thread holding the spinlock (which also didn't inform the OS) to finish its business ASAP.
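To make that failure mode concrete, here is a minimal C11 spinlock sketch. Nothing in it tells the scheduler anything, so if the thread holding the lock gets preempted, every waiter just burns its entire timeslice in the while loop:

    #include <stdatomic.h>

    /* Naive test-and-set spinlock: correct for mutual exclusion, but
       completely invisible to the OS scheduler. */
    typedef struct {
        atomic_flag locked;
    } spinlock_t;

    #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

    static void spin_lock(spinlock_t *l)
    {
        /* If the lock holder is preempted, every waiter spins here until
           its own timeslice runs out; the scheduler has no idea which
           thread it should be running instead. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;   /* burn cycles (optionally a pause/yield hint) */
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

A contended pthread mutex, by contrast, ends up in a kernel wait (a futex on Linux), which is exactly the "informing the scheduler" step described above.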
https://business.facebook.com/messaging/partner-showcase