arghwhat's comments | Hacker News

> Chemistry trumps psychology

To nitpick: The mind is applied biochemistry. Psychology intervenes in the chemistry, like many other activities do. The goal of that is to solve the root cause so that in the future your levels stay where they should on their own, instead of just forcing them by sourcing the respective chemicals externally.

A good rule of thumb in biology, and in particular for any kind of hormone production and balance, is "use it or lose it" - if you start regularly receiving something externally, internal production will scale back and atrophy in response, in many cases permanently.


Psychology can change neurochemistry but only in certain limited ways. Many people are on antidepressants long term because that's the only thing that works for them. Taking antidepressants is already stigmatized enough. People should just do what makes them feel best over the long run. Your rule of thumb does not trump hard-won personal experiences.

We don't really know how SSRIs work, but there's some evidence that it's through desensitizing serotonin receptors, not directly addressing the lack of serotonin. If so, "use it or lose it" doesn't apply; long-term adaptation is the point, and SOMETIMES does persist after quitting.


>A good rule of thumb in biology and particular any kind of hormone production and balance is "use it or lose it" - if you start regularly receiving something externally, internal production will scale back and atrophy in response, in many cases permanently.

There are ways to "hack it".

For example, ~6 months ago I started TRT (testosterone replacement). It was the best health decision I've ever made. I feel way better psychologically, and for the first time in my life I've managed to stick with cardio training this long (before, 3 months was the most). There are other benefits too.

So what about the "lose it" part? Well, there is a hormone called HCG one can take twice a week to trick one's balls into producing some natural testosterone. Its use prevents atrophy and infertility.


> Some cancerous tumors produce this hormone; therefore, elevated levels measured when the patient is not pregnant may lead to a diagnosis of cancer and, if high enough, of paraneoplastic syndromes. It is unknown however whether this production is a contributing cause or an effect of carcinogenesis.

Interesting.

Well, I don't think you'll be able to avoid testicle atrophy even if it minimizes it, but the important part is understanding the tradeoff. Particularly, that adding testosterone will cause changes throughout your entire body (including, for example, shortening life expectancy a bit), and that adding other hormones to the mix will likewise cause changes around the entire body and not just one single process or organ.

But it's your body, your life, your priorities and decision. I also wouldn't consider it a good decision health-wise to take steroids to get huge, but I have no problem with someone deciding that absurd bulk is their main goal in life and worth the tradeoff.


>A good rule of thumb in biology and particular any kind of hormone production and balance is "use it or lose it" -

Very basic and very often wrong rule, so take it with a grain of salt.

Insulin, for example, is the opposite. "Lose it then use it" would be a general rule for type 2 diabetics, where insulin resistance, commonly due to weight gain, is the primary problem. Losing the weight leads to better uptake and usage. For type 1, "lose it then use it" means you typically lose the ability to produce insulin due to an autoimmune disorder, then are stuck using insulin for the rest of your life.

The body itself typically attempts to maintain homeostasis, but at population scales this is something that is going to have a massive range of ways it shows up. Evolution, at grand scales, doesn't care if you survive as long as enough of your population survives and breeds. At the end of the day you might just be one of those people that was born broken, and to work properly you need replacement parts/chemicals. A working medical system should be there to figure out which case is which.


> Insulin for example is the opposite.

You're describing entirely orthogonal issues. In case of insulin resistance, your natural production is running full blast with demand exceeding supply because the consumer stopped caring about the hormone. In case of autoimmune disease, the natural production was killed - you can neither use nor lose what is already dead, and even if some capacity was left it will either soon be killed or atrophy under external insulin, but it will not be mourned.

So no I would say it is exactly the same - "use it or lose it" - but that does not mean that there is never a reason to manually overrule your body's attempt at homeostasis through direct manipulation. It just means that there is a very significant consequence to the process.

> The body itself typically attempts to maintain homeostasis, but at population scales this is something that is going to have a massive range of ways it shows up.

As somewhat of a sidenote, this is also why I dislike the idea of trying to classify people into "normal" and "divergent/atypical". In my eyes we're all normal people, and an entirely normal aspect of being a human is that we all differ and have individually specific needs by virtue of being built by a trillion micrometer-sized workers, each with their own hand-copied version of the blueprint, only caring about the millimeter of you in their immediate vicinity and not really talking to any of the others.


I believe that is indeed what they meant. The perception of being given a remedy is very powerful indeed, especially for issues ultimately linked to the mind.

That placebos can work should not be seen as undermining the severity or pain of the depression, but rather underline the power of tricking the mind into improvement.


For reference, golang's mutex also spins up to 4 times before parking the goroutine on a semaphore. A lot less than the 40 times in the webkit blogpost, but I would definitely consider spinning an appropriate amount before sleeping to be common practice for a generic lock. Granted, as they have a userspace scheduler things do differ a bit there, but most concepts still apply.

https://github.com/golang/go/blob/2bd7f15dd7423b6817939b199c...

https://github.com/golang/go/blob/2bd7f15dd7423b6817939b199c...
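
Not Go's actual runtime code, but a rough sketch of the general spin-then-park pattern in C. The spin budget, names, and the yield fallback are illustrative assumptions; a real lock would park on a semaphore or futex instead of yielding in a loop:

    /* Illustrative sketch only - constants and names are made up. */
    #include <stdatomic.h>
    #include <sched.h>

    #define SPIN_LIMIT 4  /* bounded spinning before giving up */

    typedef struct { atomic_flag held; } spin_park_lock;

    static void lock_init(spin_park_lock *l) { atomic_flag_clear(&l->held); }

    static void lock(spin_park_lock *l) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (!atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
                return;  /* acquired while spinning - no sleep needed */
        }
        /* Contended past the spin budget: stop burning CPU. A real
         * implementation would park the thread/goroutine here until woken. */
        while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
            sched_yield();
    }

    static void unlock(spin_park_lock *l) {
        atomic_flag_clear_explicit(&l->held, memory_order_release);
    }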


There are third-party tags out there compatible with both Google's and Apple's networks that are roughly the same size and use the same battery, yet have a giant lanyard opening in the design to fit anything.

Apple could trivially have fit a usable hole if they wanted to. They just don't want to, because they get to sell accessories for it now. Also, looking cleaner on its own helps sell, even if that is an entirely useless quality for a tag that needs to go into a bloody case.


Do the third-party tags have all the same features, size, capabilities, range, durability, etc.? Or have they made other tradeoffs instead of eliding the attachment point?

Nothing related to the attachment point.

I don't know of any third-party AirTag-compatible trackers that have UWB right now, but this applies equally to tags that are much larger than the AirTag. The rest is identical - good battery life, range, loud speaker, ...

I have a few theories on the lacking UWB:

1. Given that UWB is also super slow to roll out to Google Find, with only the Moto Tag available, there might be a technical/regulatory hurdle that manufacturers don't think is worth it

2. Apple/Google might make it a pain to be allowed to integrate with their UWB stuff

3. Cost - maybe the UWB stack is comparatively expensive, with third-party tags aiming for price brackets as low as 1/10th the cost of an AirTag

As a note, I don't know if this is because of regional differences in spectrum limits, but at least with AirTag and Moto Tag v1 EU versions, I could never get UWB to give any meaningful directions until I was already staring at the thing. Once you were in range to even consider UWB, playing a sound would be way more effective.


I'm pleasantly surprised Apple allows third-party manufacturers to make trackers that work with Find My. I've bought a bunch for as low as $2 per tracker. The only missing feature, like you mentioned, is UWB.

I do appreciate the visual of driving a forklift into the gym.

The activity would train something, but it sure wouldn't be your ability to lift.


A version of this does happen with regard to fitness.

There are enthusiasts who will spend an absolute fortune to get a bike that is a few grams lighter and then use it to ride up hills for the exercise.

Presumably a much cheaper bike would mean you could use a smaller hill for the same effect.


From an exercise standpoint, sure, but with sports there is more to it than just maximizing exercise.

If you practice judo you're definitely exercising but the goal is defeating your opponent. When biking or running you're definitely exercising but the goal is going faster or further.

From an exercise optimization perspective you should be sitting on a spinner with a customized profile, or maybe doing some entirely different motion.

If sitting on a carbon fiber bike, shaving half a second off your multi-hour time, is what brings you joy and motivation, then I say screw further justification. You do you. Just be mindful of others, as the path you ride isn't your property.


Dynamic libraries have been frowned upon since their inception as being a terrible solution to a non-existent problem, generally amplifying binary sizes and harming performance. Some fun quotes of quite notable characters on the matter here: https://harmful.cat-v.org/software/dynamic-linking/

In practice, a statically linked system is often smaller than a meticulously dynamically linked one - while there are many copies of common routines, programs only contain tightly packed, specifically optimized and sometimes inlined versions of the symbols they use. The space and performance gain per program is quite significant.

Modern apps and containers are another issue entirely - linking doesn't help if your issue is gigabytes of graphical assets or using a container base image that includes the entire world.


Statically linked binaries are a huge security problem, as are containers, for the same reason. Vendors are too slow to patch.

When dynamically linking against shared OS libraries, updates are far quicker and easier.

And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...


This is the theory, but not the practice.

In decades of using and managing many kinds of computers I have seen only a handful of dynamic libraries for which security updates have been useful, e.g. OpenSSL.

On the other hand, I have seen countless problems caused by updates of dynamic libraries that have broken various applications, not only on Linux, but even on Windows and even for Microsoft products, such as Visual Studio.

I have also seen a lot of space and time wasted by the necessity of having installed in the same system, by using various hacks, a great number of versions of the same dynamic library, in order to satisfy the conflicting requirements of various applications. I have also seen systems bricked by a faulty update of glibc, if they did not have any statically-linked rescue programs.

On Windows such problems are much less frequent only because a great number of applications bundle with them, in their own directory, the desired versions of various dynamic libraries, and Windows is happy to load those libraries. On UNIX derivatives, this usually does not work as the dynamic linker searches only standard places for libraries.

Therefore, in my opinion static linking should always be the default, especially for something like the standard C library. Dynamic linking shall be reserved for some very special libraries, where there are strong arguments that this should be beneficial, i.e. that there really exists a need to upgrade the library without upgrading the main executable.

Golang is probably an anomaly. C-based programs are rarely much bigger when statically linked than when dynamically linked. Even just using "printf" typically links a lot into a statically-linked program because of how it is implemented, so the C standard libraries intended for embedded computers typically have special lightweight "printf" versions to avoid this overhead.


> In decades of using and managing many kinds of computers I have seen only a handful of dynamic libraries for whom security updates have been useful, e.g. OpenSSL.

> On the other hands, I have seen countless problems caused by updates of dynamic libraries that have broken various applications,

OpenSSL is a good example of both useful and problematic updates. The number of updates that fixed a critical security problem but needed application changes to work was pretty high.


I've heard this many times, and while there might be data out there in support of it, I've never seen that, and my anecdotal experience is more complicated.

In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:

1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM and tried to upgrade it to the patched version and then install whatever piece of software I was running in a docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "software I wanted didn't have a distribution on its website for the very latest Debian yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").

2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).

3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).

That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.

And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.


Would be nice if there was a binary format where you could easily swap out static objects for updated ones

Imagine a fully statically linked version of Debian. What happens when there’s a security update in a commonly used library? Am I supposed to redownload a rebuild of basically the entire distro every time this happens, or else what?

Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

> Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

This was indeed common for Unix. The only way to tune the system (or even change the timezone) was to edit the very few source files and run make, which compiled those files and then linked them into a new binary.

Linking-only is (or was) much faster than recompiling.


But if I have to relink everything, I need all the makefiles, linker scripts and source code structure. I might as well compile it outright. On the other hand, I might as well just link it whenever I run it, like, dynamically ;)

And then how would this be any different in practice from dynamic linking?

Libraries already break their ABI so often that continuously rebuilding/relinking everything is inevitable.

Debian manages perfectly well without.

Only because of the enormous effort put in by Debian package maintainers and its infrastructure.

If you're an indie developer wanting your application to run on various Debian-based distros but the Debian maintainers won't package your application, that's when you'd see why it's called DLL hell, how horribly fragmented Linux packaging is, and why even Steam ships its whole runtime.


Everything inside Debian is fine. That's most of the ecosystem, apart from the very new stuff that isn't mature enough yet. Usually the reason something notable stays out of Debian long term is that it has such bad dependency hygiene that it cannot easily be brought up to standard.

Then you update those dependencies. Not very difficult with a package manager. And most dependencies aren't used by a ton of programs in a single system anyway. It is not a big deal in practice.

This would only work if you use dynamic linking. Updating dependencies in a statically built distribution would have no effect.

Honestly, that doesn't sound too bad if you have decent bandwidth.

> And once you're banned you cant [..] make a data request

glares in GDPR


Rust does not have a garbage collector in any way or form. It's just automatic memory management like we're used to (e.g., stack variables in C++), with the compiler injecting free/drop when an object goes out of scope.

What Rust brings is ownership with very extensive lifetime tracking, but that is a guard rail that gives compile-time failures, not something that powers memory management.

(If you consider the presence of Rc<T> to make Rust garbage collected, then so is C garbage collected as developers often add refcounting to their structs.)
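
To make that parenthetical concrete, here is a minimal sketch of the kind of hand-rolled reference counting C developers bolt onto their structs. The struct and function names are hypothetical, and real code would also need atomic counters for thread safety:

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int refcount;   /* manual equivalent of Rc<T>'s strong count */
        char *data;
    } buffer;

    buffer *buffer_new(const char *s) {
        buffer *b = malloc(sizeof *b);
        b->refcount = 1;
        b->data = strdup(s);
        return b;
    }

    buffer *buffer_retain(buffer *b) {
        b->refcount++;              /* another owner, like Rc::clone */
        return b;
    }

    void buffer_release(buffer *b) {
        if (--b->refcount == 0) {   /* last owner gone: free everything */
            free(b->data);
            free(b);
        }
    }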


> so is C garbage collected as developers often add refcounting to their structs.

Absolutely, C also can use a garbage collector. Obviously you can make any programming language do whatever you want if you are willing to implement the necessary pieces. That isn't any kind of revelation. It is all just 1s and 0s in the end. C, however, does not come with an expectation of it being provided. For all practical purposes you are going to have to implement it yourself or use some kind of third-party solution.

The difference with Limbo and Rust is that they include reference counting GCs out of the box. It is not just something you have the option of bolting onto the side if you are willing to put in the effort. It is something that is already there to use from day one.


> Absolutely, C also can use a garbage collector.

It is not C using the garbage collector - it is you writing a garbage collector in C. The application or library code you develop with the language is not itself a feature of the language, and the language you wrote the code in is not considered to be "using" your code.

Rust and C are unaware of any type of garbage collection, and therefore never "use" garbage collection. They just have all the bells and whistles to allow you to reference count whatever type you'd like, and in case of Rust there's just a convenient wrapper in the standard library to save you from writing a few lines of code. However, this wrapper is entirely "bolted onto the side": You can write your own Rc<T>, and there would be no notable difference to the std version.

So no, neither Rust nor C can use a garbage collector, but you can write code with garbage collection in any feature-complete language. This is importantly very different from languages that have garbage collection as a feature, like Limbo, Go, JavaScript, etc.


> it is you writing a garbage collector in C.

That's right, you can write a garbage collector in C for C to use. You can also write a garbage collector in C for Javascript to use, you could even write a garbage collector in C for Rust to use, but in this case we are talking about garbage collector for C to use.

If you are writing a garbage collector that will not be used, why bother?


This feels like it's turning into trolling, so one last round:

The C language does not use your C code, it is your C code that uses the C language.

The tools available to C are what the language specification dictated and what the compiler implemented. For example, C might use stack memory and related CPU instructions, because the C specification described "automatic memory" and the compiler implemented it with the CPU's stack functionality. It might insert calls to "memcpy" as this function is part of the C language spec. For C++, the compiler will insert calls to constructors and destructors as the language specified them.

The C language does not specify a garbage collector so it can never use one.

You, however, can use C to write a garbage collector to manually use in your C code. C remains entirely unaware of the garbage collector's existence as it has no idea what the code you write does - it will never call it on its own and the compiler will never make any decisions based on its existence. From C's perspective, it's still just memory managed manually by your application with your logic.

In JavaScript and Go, the language specifies the presence of garbage collection and how that should work, and so any runtime is required to implement it accordingly. You can write that runtime in C, but the C code and C compiler will still not be garbage collected.


The C standard is actually carefully written to allow for placing distinct "objects" in separate memory segments of a non-flat address space, such that ordinary pointer arithmetic cannot be expected to reach across to a separate "object". This is not far from allowing for some sort of GC as part of low-level C implementation, and in fact the modern Fil-C relies on it.


This is actually quite an interesting topic in its own right, but not quite the discussion above, which is about whether C or Rust includes a GC or is considered to "use" the GC you hand-rolled or pulled in from a library.

I wouldn't consider Fil-C's GC a GC in the conventional sense either, in that code compiled with Fil-C is still managed manually, with the (fully-fledged) GC effectively only serving as a way to do runtime validation and to turn what would otherwise be a UAF with undefined behavior into a well-defined and immediate panic. This aspect is in essence an alternative approach to what is done by AddressSanitizer.

I'll have to look a bit more into Fil-C though. Might be interesting to see how it compares to the usual sanitizers in practice.


> The C language does not use your C code

Your impression that there is a semantic authority misses the mark. While you are free to use English as you see fit, so too is everyone else. We already agreed on the intent of the message, so when I say something like "C uses C code", it absolutely does, even if you wouldn't say it that way yourself. I could be alone in this usage and it would remain valid. Only intent is significant.

However, I am clearly not alone in that style of usage. I read things like "Rust can use code written in C" on here and in other developer venues all the time. Nobody ever appears confused by such a statement even. If Rust can use code written in C, why can't C use code written in C?

> The C language does not specify a garbage collector so it can never use one.

The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list. Please take a photo when they look at you like you have two heads. Admittedly I lack the ability to say something so outlandish to another human with a straight face, but for the sake of science I put that into an LLM. It called me out on the bullshit, pointing out that C can, in fact, use a linked list.

For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same. LLMs are really good at figuring out how humans generally use terminology. It is inherent to how they are trained. If an LLM is making that connection, so too would many humans.

> In JavaScript and Go, the language specifies the presence of garbage collection

The Go language spec does, no doubt as a result of Pike's experience with Alef. The JavaScript spec[1] does not. Assuming you aren't making things up, I am afraid your intent was lost. What were you actually trying to say?

> C code and C compiler will still not be garbage collected.

That depends. GC use isn't typical in the C ecosystem, granted, but you absolutely can use garbage collection in a C program. You can even use something like the CCured compiler to have GC added automatically. The world is your oyster. There is no way you couldn't have already realized that, though, especially since we already went over it earlier. It is apparent that your intent wasn't successfully transferred again. What are you actually trying to say here?

> This is turning into trolling.

The mightiest tree in the forest could be cut down with that red herring!

[1] The standard calls itself ECMAScript, but I believe your intent here is understood.


> The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list.

They would not blink because the statement is accurate. To the C language and to the C compiler, there are no linked lists - just random structs with random pointers pointing to god knows what. C does know about arrays though.

> The JavaScript spec[1] does not [specify garbage collection].

I have good reason to believe that you are not familiar with the specification, although to be fair most developers would not be familiar with its innards.

The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...

Just like with Go, it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything, which means that the application will continuously leak memory until it runs out and crashes, as the language provides no mechanism to free memory - for Go, you can switch to this with `GOGC=off`. The specific wording from ECMA-262:

> This specification does not make any guarantees that any object or symbol will be garbage collected. Objects or symbols which are not live may be released after long periods of time, or never at all. For this reason, this specification uses the term "may" when describing behaviour triggered by garbage collection.

If you're not used to reading language specs the rest of the details can be a bit dry to extract, but the general idea is that the spec outlines automatic allocation, permission for a runtime to deallocate things that are not considered "live", and the rules under which something is considered to be "live". And importantly, it provides no means within the language to take on the task of managing memory yourself.

This is how languages specify garbage collection as the language does not want to limit you to a specific garbage collection algorithm, and only care about what the language needs to guarantee.

> [1] The standard calls itself ECMAScript, but I believe your intent here is understood.

sigh.

> For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same.

more sigh. LLMs always just wag their tails when you beg the question on an opinion. They do not do critical thinking for you or have any strong opinions to give of their own.

I'm done, have a nice day.


> To the C language and to the C compiler, there are no linked lists

But are most certainly able to use one. Just as they can use a garbage collector. You are quite right that these are not provided out of the box, though. If you want to use them, you are on your own. Both Limbo and Rust do provide a garbage collector to use out of the box, though, so that's something different.

> The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...

But, again, does not specify use of a garbage collector. It could use one, or not. That is left up to the implementer.

> it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything

It's perfectly valid as far as the computer is concerned, but in the case of Go not spec-compliant. Obviously you don't have to follow the spec. It is not some fundamental law of the universe. But if you want to be compliant, that is not an option. I get you haven't actually read the spec, but you didn't have to either as this was already explained in the earlier comment.

> This is how languages specify garbage collection

That is how some languages specify how you could add garbage collection if you so choose. It is optional, though. At very least you can always leak memory. Go, however, explicitly states that it is garbage collected, always. An implementation of Go that is GC-less and leaks memory, while absolutely possible to do and something the computer will happily execute, does not meet the conditions of the spec.

> I'm done

Done what? It is not clear what you started.


> The best apis are those that are hated by the developer and loved by the end users.

No, just those loved by the API consumer. Negative emotions on one end don't do anything positive.

In the case of plan9, not everything can be described elegantly in the filesystem paradigm and a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse. It also handicaps performance due to the number of filesystem operation roundtrips you usually end up making.

Maybe if combined with something io_uring-esque, but the complexity of that wouldn't be very plan9-esque.


> a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse.

These are no different in principle than ioctl calls in *ix systems. The `ctl` approach is at least a proper generalization. Being able to use simple read/write primitives for everything else is nonetheless a significant gain.


They are very different. ioctls on a file take an operation and arguments that are often userspace pointers, as the kernel can freely access any process's memory space. ctl files, on the other hand, are merely human-readable strings that are parsed.

Say, imagine an API where you need to provide a 1KiB string. The plan9 version would have to process the input byte by byte to sort out what the command was, then read the string into a dynamic buffer while unescaping it until it finds, say, the newline character.

The ioctl would just have an integer for the operation, and if it wanted to it could set the source page up for CoW so it didn't even have to read or copy the data at all.

Then we have to add the consideration of context switches: The traditional ioctl approach is just calling process, kernel and back. Under plan9, you must switch from calling process, to kernel, to fileserver process, to kernel, to fileserver process (repeat multiple times for multiple read calls), to kernel, and finally to calling process to complete the write. Now if you need a result you need to read a file, and so you get to repeat the entire process for the read operation!

Under Linux we're upset with the cost of the ioctl approach, and for some APIs plan to let io_uring batch up ioctls - the plan9 approach would be considered unfathomably expensive.
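
To illustrate the contrast with a hypothetical device: the request code, device paths, and command string below are made up for the example, not real interfaces.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define FOO_SET_SPEED 0x4004f001  /* hypothetical ioctl request code */

    /* ioctl style: one syscall, and the kernel reads the argument straight
     * from the caller's memory. */
    void set_speed_ioctl(void) {
        int fd = open("/dev/foo", O_RDWR);
        int speed = 100;
        ioctl(fd, FOO_SET_SPEED, &speed);
        close(fd);
    }

    /* ctl-file style: the command is a text string that the fileserver on
     * the other end has to parse, with the write bouncing through the
     * kernel to that server process and back. */
    void set_speed_ctl(void) {
        int fd = open("/dev/foo/ctl", O_WRONLY);
        const char *cmd = "speed 100\n";
        write(fd, cmd, strlen(cmd));
        close(fd);
    }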

> The `ctl` approach is at least a proper generalization.

ioctl is already a proper generalization of "call operation on file with arguments", but because it was frowned upon originally it never really got the beauty-treatment it needed to not just be a lot of header file defines.

However, ioctl'ing a magic define is no different than writing a magic string.


It's perfectly possible to provide binary interfaces that don't need byte-wise parsing or that work more like io_uring as part of a Plan9 approach, it's just not idiomatic. Providing zero-copy communication of any "source" range of pages across processes is also a facility that could be provided by any plan9-like kernel via segment(3) and segattach(2), though the details would of course be somewhat hardware-dependent and making this "sharing" available across the network might be a bit harder.


Indeed, you can disregard plan9 common practice and adopt the ioctl pattern, but then you just created ioctl under a different name, having gained nothing over it.

You will still have the significant context switching overhead, and you will still need distinct write-then-read phases for any return value. Manual buffer sharing is also notably more cumbersome than having a kernel just look directly at the value, and the neat part of being able to operate these fileservers by hand from a shell is lost.

So while I don't disagree with you on a technical level, taking that approach seems like it misses the point of the plan9 paradigm entirely and converts it to a worse form of the ioctl-based approach that it is seen as a cleaner alternative to.


Being able to do everything in user space looks like it might be a worthwhile gain in some scenarios. You're right that there can be some context switching overhead to deal with, though even that might possibly be mitigated; the rendezvous(2) mechanism (which works in combination with segattach(2) in plan9) is relevant, depending on how exactly it's implemented under the hood.


I must admit that the ability to randomly bind on top of your "drivers" to arbitrarily overwrite functionality, whether to VPN somewhere by binding a target machine's network files, or how rio's windows were merely /dev/draw proxies and you could forward windows by just binding your own /dev/draw on the target, holds a special place in my heart. 9front, if nothing else, is fun to play with. I just don't necessarily consider it the most optimal or most performant design.

(I also have an entirely irrational love for the idea of the singular /bin folder with no concept of $PATH, simply having everything you need bound on top... I hate $PATH and the gross profile scripts that go with it with a passion.)


> I also have an entirely irrational love for the idea of the singular /bin folder with no concept of $PATH, simply having everything you need bound on top

That's really an easy special case of what's called containerization or namespacing on Linux-like systems. It's just how the system works natively in plan9.


can you give a few examples of this "lot of things"? What operations do not map naturally to file access?


can you paper over the *ixian abstraction using transformer based metamodeling language oriented programming and the individual process namespace Lincos style message note passing hierarchy lets the Minsky society of mind idea fall out?


> For years I thought they'd "damage" my system...

Well, would you argue that the office apps you installed from them didn't cause you damage, physically or emotionally?

