

If there is one thing that is a testament to the power of microkernels, it is that one. And that 2011 one was avoidable, imo.

The reduction in scope is really gold; it makes it so much easier just to have a small, defined interface per program. It is a bit like Erlang/OTP but with C as the core language: the IPC is so lightweight that it becomes the driver behind all library-level isolation. So what in a macrokernel would be a massive monolith with all manner of stuff in the same execution ring turns into a minuscule kernel that does nothing but IPC and scheduling, with everything else as a user process, including all of the luxuries that you normally associate with user processes: dumps, debuggers, consoles.
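As a toy illustration of that split (plain Python, made-up names, nothing like the real system's C interfaces): the "kernel" below does nothing except route small fixed-format messages, every service is just a registered handler standing in for a user process, and a crashing service can be replaced without touching the kernel.

    # Toy sketch only; the message format and names are made up.
    from dataclasses import dataclass

    @dataclass
    class Message:
        src: str        # sending "process"
        dst: str        # receiving "process"
        op: str         # small, defined interface per program
        payload: object

    class Kernel:
        """Does nothing but IPC (a real kernel would also schedule)."""
        def __init__(self):
            self.procs = {}                       # name -> handler ("user process")

        def register(self, name, handler):
            self.procs[name] = handler            # start or restart a service

        def send(self, msg):
            try:
                return self.procs[msg.dst](msg)   # deliver the message
            except Exception as exc:
                # a faulty service does not bring the kernel down;
                # it can simply be re-registered by a supervisor
                return Message("kernel", msg.src, "error", str(exc))

    kernel = Kernel()
    kernel.register("fs", lambda m: Message("fs", m.src, "ok", f"contents of {m.payload}"))
    print(kernel.send(Message("shell", "fs", "read", "/etc/motd")).payload)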


You can in an enterprise environment where following SOPs is mandatory due to cybersecurity and infrastructure requirements.

Wait until automation is itself automated.

What about the reverse: after Claude Code implements it, let Gemini/Codex do a code review for bugs and architecture revisions? I found it is important to prompt it to make only absolutely minimal changes to the working code, or unwanted code clobbering will happen.

That works great too. Will be adding the ability to tag another agent in an upcoming release.

Have you looked at Titans and MIRAS, where they use an online, updating associative memory that happens to be read out via next-token prediction?

https://research.google/blog/titans-miras-helping-ai-have-lo...

https://arxiv.org/abs/2501.00663

https://arxiv.org/pdf/2504.13173

Much research is going in these directions, but I'm more interested in mind-wandering tangents involving both attentional control and additional mechanisms (memory retrieval, self-referential processing).
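For intuition, here is a heavily simplified numpy sketch of the online/updating associative memory idea: a linear memory is nudged at inference time by a gradient step on its own recall error and read out by a projection. This is only in the spirit of the test-time memorization in those papers, not their actual architectures; the dimensions and step size are made up.

    import numpy as np

    d = 8
    M = np.zeros((d, d))      # the memory: maps key vectors to value vectors
    lr = 0.5                  # made-up step size

    def write(M, k, v, lr=lr):
        # one online update: shrink the recall error ||M @ k - v||^2
        err = M @ k - v                       # the "surprise" for this pair
        return M - lr * np.outer(err, k)      # gradient step on the memory

    def read(M, q):
        return M @ q                          # readout is just a projection

    key, val = np.eye(d)[0], np.arange(d, dtype=float)
    for _ in range(20):                       # keep updating at "test time"
        M = write(M, key, val)
    print(np.allclose(read(M, key), val, atol=1e-3))   # True: association recalled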


Memory in world models is interesting. But I think the main issue is that it's holding everything in pixel space (it's not, but it feels like that) rather than concept space. That's why it's hard for it to synthesise consistently.

However, I am not really qualified to make that assertion.


You can do specialized SLMs with different roles working on problems, and also deterministic workflows. That is what I gathered its use to be. I know last year multi-agent scenarios were topping the benchmarks, but I don't know if 2025 has been the same.

Start with ReactOS and go from there.

So look at the dependency tree and add back what is missing. We are hackers here, after all.

Linux is behind Windows wrt (hybrid) microkernel vs. monolith; Windows's design helps with having drivers and subsystems in user mode and with supporting multiple personalities (the Win32, POSIX, OS/2 and WSL subsystems). Linux can hot-patch the kernel, but replacing core components is risky, and drivers and filesystems cannot be restarted independently.

It is also common for authors to misspell names (proper nouns) in an attempt to determine who leaks docs (and to force non-matches for FOIA requests).

If you want to fingerprint text, you can also do it with small, insignificant changes that don't change the meaning.

If you have a number of such locations with alternatives, then you can make a large number of identifiable versions by combining the alternates.
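A quick sketch of the numbers (the wording variants are made up): with k locations that each have two equivalent alternatives you get 2^k distinguishable copies, so a handful of harmless tweaks is enough to tag every recipient.

    from itertools import product

    # each slot: two wordings that mean the same thing (made-up examples)
    slots = [
        ("e-mail", "email"),
        ("7 July", "July 7"),
        ("per cent", "percent"),
        ("can not", "cannot"),
    ]

    template = "Reply by {} before {}; ninety {} of staff {} attend."
    versions = [template.format(*choice) for choice in product(*slots)]

    print(len(set(versions)))    # 16 == 2**4 distinct, individually traceable copies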


Random side fact, but this was also a thing mapmakers did back in the day, including fake towns. That way they could identify who was stealing their work.
