An "explorative" hex editor where you can do "fuzzy" searches, e.g., searching for a header with specific values for certain fields. (I thought ImHex should be able to do this (and still think it might), but haven't really figured out a good work flow...)
If you want to recognize all the common patterns, the code can get very verbose. But it's all still just one analysis or transformation, so it would be artificial to split it into multiple files. I haven't worked much in LLVM, but I'd guess that the external interface to these packages is pretty reasonable and hides a large amount of the complexity that took 16kloc to implement.
If you don’t rely on IDE features or completion plugins in an editor like vim, it can be easier to navigate tightly coupled complexity when it is all in one file. You can’t scan it or jump to the right spot as easily as with smaller files, but in vim, searching for the exact symbol under the cursor is a single-character shortcut, and that only works if the symbol is in the current buffer. This type of development works best for academic-style code with a small number (usually one or two) of experts who are familiar with the implementation, but in that context it’s remarkably effective. Not great for merge conflicts in frequently updated code, though.
If it was 16K lines of modular "compositional" code, or a DSL that compiles in some provably-correct way, that would make me confident. A single file with 16K lines of -- let's be honest -- unsafe procedural spaghetti makes me much less confident.
Compiler code tends to work "surprisingly well" because it's beaten to death by millions of developers throwing random stuff at it, so bugs tend to be ironed out relatively quickly, unless you go off the beaten path... then it rapidly turns out to be a mess of spiky brambles.
The Rust development team, for example, found a series of LLVM optimiser bugs related to (no)aliasing, because C/C++ code didn't use that attribute much, but Rust can utilise it aggressively.
I would be much more impressed by 16K lines of provably correct transformations with associated Lean proofs (or something), and/or something based on EGG: https://egraphs-good.github.io/
On the other end of the optimizer size spectrum, a surprising place to find a DSL is LuaJIT’s “FOLD” stage: https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/lj_opt_fold.c (it’s just pattern matching, more or less, that the DSL compiler distills down to a perfect hash).
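The gist of the idea (very roughly -- this is not LuaJIT's actual FOLD syntax or IR, and every name below is made up) is a table of rewrite rules keyed on an opcode plus the kinds of its operands:

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    // Hypothetical toy IR, nothing like LuaJIT's real data structures.
    enum Op : uint8_t { ADD, MUL, KONST };
    struct Ins { Op op; int64_t val; };          // val is only meaningful for KONST

    // A fold rule is keyed on (opcode, kind of left operand, kind of right operand).
    constexpr uint32_t key(Op op, Op l, Op r) { return (op << 16) | (l << 8) | r; }

    using Handler = Ins (*)(const Ins&, const Ins&);
    Ins fold_add_kk(const Ins& l, const Ins& r) { return {KONST, l.val + r.val}; }
    Ins fold_mul_kk(const Ins& l, const Ins& r) { return {KONST, l.val * r.val}; }

    int main() {
        // LuaJIT's DSL distills rules like these into a perfect hash at build
        // time; a plain runtime map stands in for that here.
        std::unordered_map<uint32_t, Handler> rules = {
            {key(ADD, KONST, KONST), fold_add_kk},
            {key(MUL, KONST, KONST), fold_mul_kk},
        };
        Ins left{KONST, 2}, right{KONST, 40};
        Op op = ADD;
        if (auto it = rules.find(key(op, left.op, right.op)); it != rules.end())
            std::printf("folded to constant %lld\n", (long long)it->second(left, right).val);
    }

Same shape of dispatch, just with the lookup table generated offline instead of built at runtime.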
Part of the issue is that it suggests the code grew in a spaghettified way. A huge single file is neither sufficient nor necessary evidence of that, but lacking external constraints (like an entire library deliberately developed as a single C header), it hints that the code organisation is not great.
Hardware is often spaghetti anyway. There are a large number of considerations and conditions that can invalidate the ability to use certain ops, which would change the compilation strategy.
The idea of good abstractions and such falls apart the moment the target environment itself is not a good abstraction.
I find the real question is: are all 16,000 of those lines required to implement the optimization? How much of that is dealing with LLVM’s internal representation and the varying complexity of LLVM’s other internal structures?
For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory.
I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there are no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system to multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL-capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processor is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level.
> And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”)
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
> All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up. The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs "human-written code, human-written bugs, but an overall much simpler system." And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.
Codegen from Matlab/Simulink/whatever is good for proof-of-concept design. It largely helps engineers who are not very good with coding to hypothesize about different algorithmic approaches. Engineers who actually implement that algorithm in a system that will be deployed come from a different group with different domain expertise.
Not my experience. I work with a -fno-exceptions codebase. Still quite a lot of std left. (Exceptions come with a surprisingly hefty binary size cost.)
Apparently, according to some ACCU and CppCon talks by Khalil Estell, this can largely be mitigated even in embedded, lowering the size cost by orders of magnitude.
Yeah. I unfortunately moved to an APU where code size isn't an issue so I never got the chance to see how well that analysis translated to the work I do.
Provocative talk though, it upends one of the pillars of deeply embedded programming, at least from a size perspective.
Not exactly sure what your experience is, but if you work in an -fno-exceptions codebase then you know that STL containers are not usable in that regime (with the exception of std::tuple, it seems; see the freestanding comment below). I would argue that the majority of use cases of the STL are for its containers.
So, what exact parts of the STL do you use in your code base? Must be mostly compile-time stuff (types, type traits, etc.).
Of course you can, you just need to check your preconditions and limit sizes ahead of time - but you need to do that with exceptions too because modern operating systems overcommit instead of failing allocations and the OOM killer is not going to give you an exception to handle.
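For what it's worth, the pattern ends up looking something like this (a minimal sketch with made-up names; the bound and the error handling are application-specific):

    #include <cstdio>
    #include <vector>

    // Sketch only: with -fno-exceptions the strategy is to make allocation and
    // bounds failures impossible by construction, not to handle them at runtime.
    constexpr std::size_t kMaxSamples = 4096;   // hypothetical, application-defined bound

    bool append_sample(std::vector<int>& buf, int sample) {
        if (buf.size() >= kMaxSamples)          // precondition checked explicitly...
            return false;                       // ...and reported without an exception
        buf.push_back(sample);                  // cannot reallocate: capacity was reserved up front
        return true;
    }

    int main() {
        std::vector<int> samples;
        samples.reserve(kMaxSamples);           // the one allocation, at a controlled point
        if (!append_sample(samples, 42))
            std::puts("buffer full");
    }

And if that one reserve() can't get the memory, you're in overcommit/OOM-killer territory either way, which is exactly the point above.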
I don't think it would be typical to depend on exception handling when dealing with boundary conditions with C++ containers.
I mean, .at is great and all, but it's really for the benefit of eliminating undefined behavior, and if the program just terminates then you've achieved this. I've seen decoders that just catch the std::out_of_range or even std::exception to handle the remaining bugs in the logic, though.
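For anyone following along, the distinction is roughly this (illustrative only; the index is made up):

    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> table(16);
        std::size_t idx = 99;                 // hypothetical out-of-range index from a decoder

        // table[idx] here would be undefined behavior. table.at(idx) is guaranteed
        // to throw std::out_of_range instead, and an uncaught throw terminates the
        // program, which is already enough to rule out the UB.
        try {
            std::printf("value: %d\n", table.at(idx));
        } catch (const std::out_of_range&) {
            // Catching it like this is the "paper over remaining logic bugs" approach;
            // letting it terminate is the simpler option.
            std::puts("index out of range");
        }
    }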
Not scaffolding in the same way, but, two examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations" are
- the fashion for unpainted marble statues and architecture
- the aesthetic of running film slightly too fast in the projector (or slightly too slow in the camera) for an old-timey effect
The industry decided on 24 FPS as something of an average of the multiple existing company standards and it was fast enough to provide smooth motion, avoid flicker, and not use too much film ($$$).
Over time it became “the film look”. One hundred-ish years later, we still record TV shows and movies in it when we want them to look “good” as opposed to “fake”, like a soap opera.
And it’s all happenstance. The movie industry could’ve moved to something higher at any point; nothing stopped it but inertia. With TV being 60i, it would have made plenty of sense to go to 30p for film to allow them to show it on TV better once that became a thing.
Now, don't get me wrong, I'm a fan of pixel art and retro games.
But this reminds me of when people complained that the latest Monkey Island didn't use pixel art, and Ron Gilbert had to explain the original "The Curse of Monkey Island" wasn't "a pixel art game" either, it was a "state of the art game (for that time)", and it was never his intention to make retro games.
Many classic games had pixel art by accident; it was the most feasible technology at the time.
I don't think anyone would have complained if the art had been more detailed but in the same style as the original or even using real digitized actors.
Monkey Island II's art was slightly more comic-like than, say, The Last Crusade, but still with realistic proportions and movements, so that was the expectation before CoMI.
The art style changing to silly-comic is what got people riled up.
(Also a correction: by original I meant "Secret of" but mistyped "Curse of").
I meant Return to Monkey Island (2022), which was no more abrupt a change than say, "The Curse of Monkey Island" (1997).
Monkey Island was always "silly comic", it's its sine qua non.
People whined because they wanted a retro game, they wanted "the same style" (pixels) as the original "Secret", but Ron Gilbert was pretty explicit about this: "Secret" looked what it looked like due to limitations of the time, he wasn't "going for that style", it was just the style that they managed with pixel art. Monkey Island was a state-of-the-art game for its time.
So my example is fully within the terms of the concept we're describing: people growing attached to technical limitations, or in the original words:
> [...] examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations"
I wouldn't call it "fetishizing" though; not all of them anyway.
Motion blur happens with real vision, so anything without blur would look odd. There's cinematic exaggeration, of course.
24 FPS is indeed entirely artificial, but I wouldn't call it a fetish: if you've grown with 24 FPS movies, a higher frame rate will paradoxically look artificial! It's not a snobby thing, maybe it's an "uncanny valley" thing? To me higher frame rates (as in how The Hobbit was released) make the actors look fake, almost like automatons or puppets. I know it makes no objective sense, but at the same time it's not a fetishization. I also cannot get used to it, it doesn't go away as I get immersed in the movie (it doesn't help that The Hobbit is trash, of course, but that's a tangent).
Grain, I'd argue, is the true fetish. There's no grain in real life (unless you have a visual impairment). You forget fast about the lack of grain if you're immersed in the movie. I like grain, but it's 100% an esthetic preference, i.e. a fetish.
> Motion blur happens with real vision, so anything without blur would look odd.
You watch the video with your eyes so it's not possible to get "odd"-looking lack of blur. There's no need to add extra motion blur on top of the naturally occurring blur.
On the contrary, an object moving across your field of vision will produce a level of motion blur in your eyes. The same object recorded at 24fps and then projected or displayed in front of your eyes will produce a different level of motion blur, because the object is no longer moving continuously across your vision but instead moving in discrete steps. The exact character of this motion blur can be influenced by controlling what fraction of that 1/24th of a second the image is exposed for (vs. having the screen black).
The most natural level of motion blur for a moving picture to exhibit is not that traditionally exhibited by 24fps film, but it is equally not none (unless your motion picture is recorded at such a high frame rate that it substantially exceeds the reaction time of your eyes, which is rather infeasible).
In practice, I think the kind of blur that happens when you're looking at a physical object vs an object projected on a crisp, lit screen, with postprocessing/color grading/light meant for the screen, is different. I'm also not sure whatever is captured by a camera looks the same in motion as what you see with your eyes; in effect even the best camera is always introducing a distortion, so it has to be corrected somehow. The camera is "faking" movement, it's just that it's more convincing than a simple cartoon as a sequence of static drawings. (Note I'm speaking from intuition, I'm not making a formal claim!).
That's why (IMO) you don't need "motion blur" effects for live theater, but you do for cinema and TV shows: real physical objects and people vs whatever exists on a flat surface that emits light.
You're forgetting about the shutter angle. A large shutter angle will have a lot of motion blur and feel fluid even at a low frame rate, while a small shutter angle will make movement feel stilted, but every frame will be fully legible, which is very useful for chaotic scenes. Saving Private Ryan, for example, used a small shutter angle. And until digital, you were restricted to a shutter angle of 180, which meant that very fast moving elements would still jump from frame to frame in between exposures.
I suspect 24fps is popular because it forces the videographer to be more intentional with motion. Too blurry, and it becomes incomprehensible. That, and everything staying sharp at 60fps makes it look like TikTok slop.
24fps looks a little different on a real film projector than on nearly all home screens, too. There's a little time between each frame when a full-frame black is projected (the light is blocked, that is) as the film advances (else you'd get a horrid and probably nausea-inducing smear as the film moved). This (oddly enough!) has the effect of apparently smoothing motion—though "motion smoothing" settings on e.g. modern TVs don't match that effect, unfortunately, but looks like something else entirely (which one may or may not find intolerably awful).
Some of your fancier, brighter (because you lose some apparent brightness by cutting the light for fractions of a second) home digital projectors can convincingly mimic the effect, but otherwise, you'll never quite get things like 24fps panning judder down to imperceptible levels, like a real film projector can.
Me at every AirBnB: turn on TV "OH MY GOD WTF MY EYES ARE BLEEDING where is the settings button?" go turn off noise reduction, upscaling, motion smoothing.
I think I've seen like one out of a couple dozen where the motion smoothing was already off.
I think the "real" problem is not matching shutter speed to frame rate. With 24fps you have to make a strong choice - either the shutter speed is 1/24s or 1/48s, or any panning movement is going to look like absolute garbage. But, with 60+fps, even if your shutter speed is incredible fast, motion will still look decent, because there's enough frames being shown that the motion isn't jerky - it looks unnatural, just harder to put your finger on why (whereas 24fps at 1/1000s looks unnatural for obvious reasons - the entire picture jerks when you're panning).
The solution is 60fps at 1/60s. Panning looks pretty natural again, as does most other motion, and you get clarity for fast-moving objects. You can play around with different framerates, but imo anything faster than 1/120s (a 180 degree shutter in film speak, at 60fps) will start severely degrading the watch experience.
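For reference, "shutter angle" is just the fraction of each frame interval that the shutter is open, times 360, so the numbers in this thread convert like so (trivial sketch):

    #include <cstdio>

    // shutter_angle = 360 * frame_rate * exposure_time
    double shutter_angle(double fps, double exposure_s) { return 360.0 * fps * exposure_s; }

    int main() {
        std::printf("24 fps at 1/48 s   -> %5.1f degrees\n", shutter_angle(24, 1.0 / 48));    // 180
        std::printf("60 fps at 1/60 s   -> %5.1f degrees\n", shutter_angle(60, 1.0 / 60));    // 360
        std::printf("60 fps at 1/120 s  -> %5.1f degrees\n", shutter_angle(60, 1.0 / 120));   // 180
        std::printf("60 fps at 1/1000 s -> %5.1f degrees\n", shutter_angle(60, 1.0 / 1000));  // 21.6
    }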
I've been doing a good bit of filming of cars at autocross and road course circuits the past two years, and I've received a number of compliments on the smoothness and clarity of the footage - "how does that video out of your dslr [note: it's a Lumix G9 mirrorless] look so good" is a common one. The answer is 60fps, 1/60s shutter, and lots of in-body and in-lens stabilization so my by-hand tracking shots aren't wildly swinging around. At 24/25/30fps everything either degrades into a blurry mess, or is too choppy to be enjoyable, but at 60fps and 1/500s or 1/1000s, it looks like a (crappy) video game.
Is getting something like this wrong why e.g. The Hobbit looked so damn weird? I didn't have a strong opinion on higher FPS films, and was even kinda excited about it, until I watched that in theaters. Not only did it have (to me, just a tiny bit of) the oft-complained-about "soap opera" effect due to the association of higher frame rates with cheap shot-on-video content—the main problem was that any time a character was moving it felt wrong, like a manually-cranked silent film playing back at inconsistent speeds. Often it looked like characters were moving at speed-walking rates when their affect and gait were calm and casual. Totally bizarre and ruined any amount of enjoyment I may have gotten out of it (other quality issues aside). That's not something I've noticed in other higher FPS content (the "soap opera" effect, yes; things looking subtly sped-up or slowed-down, no).
[EDIT] I mean, IIRC that was 48fps, not 60, so you'd think they'd get the shutter timing right, but man, something was wrong with it.
Not necessarily heavy (except sometimes as an effect), but some compression almost all the time for artistic reasons, yes.
Most people would barely notice it as it's waaaay more subtle than your distorted guitar example. But it's there.
Part of the likeable sound of albums made on tape is the particular combination of old-time compressors used to make sure enough level gets to the tape, plus the way tape compresses the signal again on recording by its nature.
I work in VFX, and we had a lecture from one of the art designers who worked with some Formula 1 teams on the color design for cars. It was really interesting how much work goes into making the car look "iconic" while also highlighting sponsors, etc.
But to your point, back during the PAL/NTSC analog days, the physical color of the cars was chosen so that, when viewed on analog broadcast, the color would be correct (very similar to film scanning).
He worked for a different team but brought in a small piece of Ferrari bodywork, and it was more of a day-glo red-orange than the delicious red we all think of with Ferrari.
Yes. The LEON series of microprocessors is quite common in the space industry. It is based on SPARC v8, and SPARC is big-endian. And also, yes, SPARC v8 is a 33-year-old 32-bit architecture; in space we tend to stick to the trailing edge of technology.
Also remember: Even though many of these articles/books/papers/etc. are good, even great, some of them are starting to get a bit old. When reading them, check what modern commentators are saying about them.
E.g.:
What every programmer should know about memory (18 years old) [1]
How much of ‘What Every Programmer Should Know About Memory’ is still valid? (13 years old) [2]
While I cannot comment on the specifics you listed, I don't think the fundamentals have changed much concerning memory. Always good to have something more digestible, though.
I've always wondered how well these RPi based cubesats really work in space. Really hard to find out. Also, people (naturally) aren't always eager to talk about failed projects. Maybe some people here on HN have experiences to share?
In my experience, having provided advice to a lot of academic CubeSats: the issues usually aren't related to the parts; the problems are lack of testing and general inexperience.
Yes, a Raspberry Pi isn't radiation hardened, but in LEO (say around 400-500 km) the radiation environment isn't that severe. Total ionizing dose is not a problem. High energy particles causing single event effects are an issue, but these can be addressed with design mitigations: a window watchdog timer to reset the Pi, multiple copies of flight software on different flash ICs to switch between if one copy is corrupted, latchup detection circuits, etc. None of these mitigations require expensive space qualified hardware to reasonably address.
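None of these need exotic parts. E.g. the "multiple copies of flight software" mitigation can be as simple as a boot selector that takes the first image whose checksum still matches (a hypothetical sketch, with the flash access stubbed out as plain arrays):

    #include <cstdint>
    #include <cstdio>

    // Hypothetical boot-selector sketch, not any particular flight stack.
    struct Image { const char* name; const uint8_t* data; uint32_t len; uint32_t stored_crc; };

    // Plain CRC-32 (reflected, poly 0xEDB88320), bit-by-bit for brevity.
    uint32_t crc32(const uint8_t* p, uint32_t n) {
        uint32_t c = 0xFFFFFFFFu;
        for (uint32_t i = 0; i < n; ++i) {
            c ^= p[i];
            for (int b = 0; b < 8; ++b)
                c = (c >> 1) ^ (0xEDB88320u & (0u - (c & 1u)));
        }
        return ~c;
    }

    // Return the first copy that passes its integrity check, or nullptr if all
    // copies are corrupted (stay in the bootloader and wait for a new uplink).
    const Image* select_boot_image(const Image* imgs, int count) {
        for (int i = 0; i < count; ++i)
            if (crc32(imgs[i].data, imgs[i].len) == imgs[i].stored_crc)
                return &imgs[i];
        return nullptr;
    }

    int main() {
        static const uint8_t good[]    = {1, 2, 3, 4};
        static const uint8_t flipped[] = {1, 2, 3, 5};          // simulate a bit flip in copy 0
        Image images[] = {
            {"flash0", flipped, 4, crc32(good, 4)},             // CRC no longer matches its data
            {"flash1", good,    4, crc32(good, 4)},
        };
        if (const Image* img = select_boot_image(images, 2))
            std::printf("booting from %s\n", img->name);
        else
            std::puts("no valid image; staying in bootloader");
    }

The window watchdog and latchup supervisor fall in the same bucket: cheap, boring parts plus a bit of glue logic.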
The usual issues I see in academic CubeSats are mostly programmatic. These things are usually built by students, and generally speaking a CubeSat project is just a bit too long (3-4 years design and build + 1-2 years operations) to have good continuity of personnel; by the end, you usually have nobody left who has been there since the beginning except the principal investigator and maybe a couple of PhD students.
And since everyone is very green (for many students, this is their first serious multidisciplinary development effort), people are bound to make mistakes. Now, that's a good thing; the whole point is learning. The problem is that extensive testing is usually neglected on academic CubeSats, either because of time pressure to meet a launch date or because the team simply doesn't know how to test effectively. So they'll launch it, and it'll be DOA on orbit because nobody did a fully integrated test campaign.
As someone who has successfully flown an RPi CM4-based payload on a CubeSat, I fully agree with this. There's not enough funding in my research group to hire a dedicated test engineer, so I had to both design and test my payload. It was a long, lonely road.
It does work in the end, but shortly after we got our first data from space, I decided to quit the space industry and become a test engineer at a terrestrial embedded company instead.
It's a bit like balloon projects that have a transmitter. By now, I think the 20th group has found out that standard GPS receivers stop reporting data above a specific height because of their COCOM limit implementation (they 'or' speed and height). Well, there are quite a few modules around that 'and' this rule and so work perfectly fine at great heights.
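In other words, roughly the difference between these two checks (illustrative values; the commonly cited thresholds are ~515 m/s and ~18 km):

    #include <cstdio>

    constexpr double kLimitSpeedMps = 515.0;    // ~1000 knots
    constexpr double kLimitAltM     = 18000.0;  // ~60000 ft

    // Stricter reading: no fix if EITHER limit is exceeded (what the balloon folks keep hitting).
    bool blocked_or(double speed_mps, double alt_m)  { return speed_mps > kLimitSpeedMps || alt_m > kLimitAltM; }

    // Looser reading: no fix only if BOTH are exceeded (these receivers keep working up high).
    bool blocked_and(double speed_mps, double alt_m) { return speed_mps > kLimitSpeedMps && alt_m > kLimitAltM; }

    int main() {
        double balloon_speed = 10.0, balloon_alt = 30000.0;     // slow, but well above 18 km
        std::printf("'or' receiver blocks the balloon:  %s\n", blocked_or(balloon_speed, balloon_alt) ? "yes" : "no");
        std::printf("'and' receiver blocks the balloon: %s\n", blocked_and(balloon_speed, balloon_alt) ? "yes" : "no");
    }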
It's all about the learning experience and the evolution of these projects. Mistakes will happen... but learning from them should take place too.
That's kind of how I was thinking about it. Why does each CubeSat project have to start over from scratch? Why isn't there a basic set of projects that a team can build on top of, adding their own custom sensors for their purpose? The basic operational stuff, like the suggested multiple storage types with redundant code, shouldn't need to be recreated each time. Just continue using what worked, and tweak what didn't. No need to constantly reinvent the wheel just because it's students learning.
I agree though, my dream for years has been an open source CubeSat bus design that covers say 80% of academic CubeSat use cases and can be modified by the user for the other 20%. Unfortunately I have very little free time these days with family commitments.
Well, the point of a student's project is to reinvent the wheel.
One should limit the number of wheels being reinvented each time, though. That would also reduce the time-to-space of those projects. The design should cover 100% of the CubeSat, so the students can redesign any part they want.
> I agree though, my dream for years has been an open source CubeSat bus design that covers say 80% of academic CubeSat use cases and can be modified by the user for the other 20%
Seems like we have similar thoughts, as we wrote more or less the same comment 10 minutes apart :) Would love to chat about this; maybe we can figure out a way to get there? Email is on my profile.
Imagine a group building and managing a robust power supply design for CubeSats that can be immediately ordered from JLCPCB, with a well-maintained BOM.
My dream is to build an open source CubeSat kit (hardware, software, mission control software) with an experience similar to Arduino. Download the GUI, load up some examples, and you're directly writing space applications. Ideally it should be capable of high-end functions like attitude control and propulsion. The problem is that designing and testing such a thing is a rather expensive endeavour. So far I haven't found a way to get funds to dedicate time to this kind of "abstract"/generic project; most funding organizations want a specific mission proposal that ends up generating useful data from space.
The annoying answer is "it depends." The main drivers are reliability (ie: how much risk of failure are you willing to accept) and mission life (ionizing dose is cumulative, so a 2 year vs. 10 year mission will have different requirements).
I would say you certainly need to start seriously considering at least some radiation hardening at around 600 km, but missions that can accept a large amount of risk to keep costs down still operate at that altitude with non-hardened parts. Likewise, missions with critical reliability requirements like the International Space Station use radiation hardening even down at 400 km.
The "hard" limit is probably around 1000 km, which is where the inner Van Allen Belt starts. At this altitude, hardware that isn't specifically radiation hardened will fail quickly.
The inner Van Allen Belt also has a bulge that goes down as low as 200 km (the South Atlantic Anomaly), so missions in low inclined orbits that spend a lot of time there or missions that need good reliability when flying through the SAA may also need radiation hardening at comparatively low altitudes.
Always wondered if you could mitigate this somewhat by basically putting your sat in a bag of water and leaving the antenna and solar panels sticking out.
Not really. Radiation shielding has diminishing returns with thickness as the relationship is logarithmic. A few millimeters of aluminum cuts down most of your ionizing dose by orders of magnitude over unshielded, but doing appreciably better requires impractically thick shields.
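A toy attenuation model shows the shape of that curve (the numbers are made up purely for illustration; for a real orbit you'd use something like SPENVIS, linked below):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Toy model: dose behind a shield of thickness x falls roughly as exp(-x / L).
        // L is a made-up effective attenuation length, not a value to design with.
        const double L_mm = 1.5;
        for (double mm = 0.0; mm <= 10.0; mm += 2.0)
            std::printf("%4.1f mm of shielding -> %6.2f%% of the unshielded dose\n",
                        mm, 100.0 * std::exp(-mm / L_mm));
        // Every extra 2 mm buys roughly the same ~3.8x factor in this toy model, so the
        // absolute gain keeps shrinking while the shield mass keeps growing linearly.
    }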
And that only helps with ionizing dose, which is already not really a problem in LEO. The issue is more high energy particles like cosmic rays, which cause single event effects (SEEs) - things like random bit flips in RAM or CPU registers, or transistor latchup that can cause destructive shorts to ground if not mitigated. These are impractical to shield against, unless you want to fly a few feet of lead. So instead we mitigate them (ECC memory, watchdog timers, latchup supervisor circuits that can quickly power cycle a system to clear a latchup before it can cause damage, etc).
If you want to get an idea of how much shielding is effective in a particular orbit, you can use ESA's SPENVIS software (online, free): https://www.spenvis.oma.be/. Despite being free, it's the tool of choice for initial radiation studies for many space missions worldwide.
There are many Raspberry Pis on the International Space Station (AstroPis). They're subject to a similar amount of space radiation as CubeSats in LEO, and they work just fine. There's also an increasing trend of building CubeSat On-Board Computers (OBCs) as some form of Linux System-on-Module (these would traditionally be microcontrollers). I think Raspberry Pis (especially the Compute Modules) are quite suitable for Payload Data Handling (PDH) systems, although I've personally not had a chance to launch an RPi chip yet.
I personally haven’t seen confirmed SEUs in the satellites I’ve designed/operated (as in, an ionized particle affecting a transistor/MOSFET in a way that creates a short circuit and can only be cleared with a power cycle). But it’s good practice to design space systems to have current monitoring and automatically power off in case of such events.
Resets etc. are common, most likely caused by software bugs. This is more or less assumed as a fact of life; software for space applications is often as stateless as possible, and when it’s required you’d implement frequent state checkpoints, redundant data storage, etc. These are all common practices that you’d do anyway, it doesn’t make a huge difference if the software is running on a rad-hard microcontroller or off the shelf Linux processor - although (IMO) there are many benefits to the latter (and some downsides as well.) Assuming a base level of reliability, of course - you don’t want your OBC/PDH to overheat or reboot every 5 minutes.
About 50% of cubesats fail, at least partially. I've worked with a dozen or so of them, supporting different people and companies trying to use them. Only one failed to work at all. But many of the others had serious problems of one kind or another that limited their usefulness.
We’ve been using Raspberry Pis in CubeSats for a while, for LEO they are good enough for a year or two. It’s the common consumer grade SD cards that are the weakest point. There are more robust industrial grade SD cards and there are RPis with flash (the compute modules) that can work great.
I've participated in the design or manufacture or launch of dozens of cubesats. The ones with RPis as their flight computers either accept that they'll get messed up by radiation with some regularity throughout their mission (and design other components accordingly, such as timeout watchdog resets), or accept that they'll have a quite limited mission lifetime.
Great project! Been using it for years together with VTS [1] to visualize real-time and propagated satellite positions and attitudes, and also star tracker and payload "beams".