Hacker News | rahen's comments

My only complaint regarding the Zed editor is the inability to display two panes of the sidebar one below the other. Not only is it impossible to display them together, but switching between them requires clicking a tiny button in the status bar. To make matters worse, performing a search hides the symbols and the tree view.

So right now I'm sticking to Emacs.


> I think all of ML being in Python is a colossal mistake that we'll pay for for years.

Market pressure. Early ML frameworks were in Lisp, then eventually Lua with Torch, but demand dictated the choice of Python because "it's simple" even if the result is cobbled together.

Lisp is arguably still the most suitable language for neural networks for a lot of reasons beyond the scope of this post, but the tooling is missing. I’m developing such a framework right now, though I have no illusions that many will adopt it. Python may not be elegant or efficient, but it's simple, and that's what people want.


Gee, I wonder why the tooling for ML in Lisp is missing even though the early ML frameworks were in Lisp. Perhaps there is something about the language that stifles truly wide collaboration?

I doubt it, considering there are massive Clojure codebases with large teams collaborating on them every day. The lack of Lisp tooling and the prevalence of Python are more a result of inertia, a low barrier to entry, and ecosystem lock-in.

What sort of tooling is missing in Lisp? I'd love to check out your framework if you've shared it somewhere

Lisp isn't missing anything, it's a natural fit for AI/ML. It’s the ecosystem's tooling that needs catching up.

The code hasn't reached RC yet, but I'll definitely post a Show HN once it's ready for a preview.


I love it, instant GitHub star. I wrote an MLP in Fortran IV for a punched-card machine from the sixties (https://github.com/dbrll/Xortran), so this really speaks to me.

The interaction is surprisingly good despite the lack of an attention mechanism and the limitation of the "context" to trigrams from the last sentence.
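
The trigram scheme described above can be sketched in a few lines of Python (purely illustrative, not the project's actual code):

```python
from collections import defaultdict
import random

def train_trigrams(text):
    """Count trigram continuations: (w1, w2) -> {w3: count}."""
    words = text.split()
    model = defaultdict(lambda: defaultdict(int))
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        model[(w1, w2)][w3] += 1
    return model

def generate(model, w1, w2, n=10, rng=random.Random(42)):
    """Extend a two-word seed by sampling continuations weighted by count."""
    out = [w1, w2]
    for _ in range(n):
        followers = model.get((out[-2], out[-1]))
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

model = train_trigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the", "cat"))
```

The "context" is literally just the last two words, which is why it's so striking that the interaction feels coherent at all.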

This could have worked on 60s-era hardware and would have completely changed the world (and science fiction) back then. Great job.


Stuff like this is fascinating. Truly the road not taken.

Tin foil hat on: I think a huge part of the massive RAM buyout by AI companies is to keep people from realising that we are essentially at the home-computer-revolution stage of LLMs. I have a 1 TB RAM machine which, with custom agents, outperforms all the proprietary models. It's private, secure, and won't let me be monetized.


How so? Sounds like you're running Kimi K2 / GLM? What agents do you give it, and how do you handle web search and computer use well?


The reverse is true though, and I find that fascinating with Fortran.

I recently learned Fortran IV to build a backpropagated neural network for the IBM 1130 (1965) and was amazed to see it compile with no warning on both the punched card compiler from IBM and on a modern Fortran compiler (gfortran).
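
Out of curiosity, here's roughly what such a backpropagated MLP looks like in modern Python: a toy 2-2-1 network learning XOR. The structure and hyperparameters are my own illustration, not the IBM 1130 code.

```python
import math
import random

random.seed(1)

# XOR training set: inputs and target output
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 network: each neuron stores [w_x1, w_x2, bias]
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def epoch(lr=0.5):
    """One pass over the data: forward, backpropagate, update. Returns loss."""
    loss = 0.0
    for x, t in DATA:
        h, o = forward(x)
        loss += (t - o) ** 2
        # gradient of the squared error through the sigmoid derivatives
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
    return loss

first = epoch()
for _ in range(5000):
    last = epoch()
print(first, "->", last)  # loss shrinks as the network learns XOR
```

The whole thing fits in well under a hundred statements, which is part of why it could plausibly be ported to sixties-era Fortran in the first place.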

Some Fortran II conventions, like the arithmetic IF, are now deprecated, but -std=legacy is all it takes to make it work.
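
For readers who never met it, the arithmetic IF branches three ways on the sign of an expression. A minimal fixed-form sketch (illustrative, not the project's code):

```fortran
C     ARITHMETIC IF: BRANCH ON THE SIGN OF X (NEG / ZERO / POS)
      X = -1.5
      IF (X) 10, 20, 30
   10 WRITE (*,*) 'NEGATIVE'
      GO TO 40
   20 WRITE (*,*) 'ZERO'
      GO TO 40
   30 WRITE (*,*) 'POSITIVE'
   40 CONTINUE
      END
```

gfortran still compiles this; -std=legacy suppresses the obsolescence diagnostics it would otherwise emit.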


You mean everywhere. It's just hidden behind abstraction layers or Fortran libraries like BLAS/LAPACK, which are used by NumPy, R, Julia, MATLAB, Excel, TensorFlow, PyTorch (for some backends), and basically anything that involves linear algebra.
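
A quick way to see that lineage (a sketch assuming NumPy is installed; which BLAS/LAPACK libraries sit underneath depends on your particular build):

```python
import numpy as np

# np.linalg.solve dispatches to LAPACK's *gesv driver; np.show_config()
# prints which BLAS/LAPACK libraries this NumPy build links against.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)   # the heavy lifting happens in compiled LAPACK code
print(x)  # -> [2. 3.]
```

The Python on top is essentially glue; the numerical kernel is the same Fortran-lineage code everyone has depended on for decades.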


Unless you need horizontal scalability or clustering, Compose + Terraform is all you need.

With Compose, you get proper n-tier application containerization with immutability. By adding an infrastructure-as-code tool such as Terraform to abstract your IT environment, you can deploy your application on-premises, in the cloud, or at a customer site with a single command.
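
As a sketch of what that looks like, a minimal hypothetical two-tier Compose file (every name here is illustrative):

```yaml
# docker-compose.yml -- hypothetical two-tier app
services:
  web:
    image: example/web:1.4.2       # pin image tags for immutability
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # use secrets in real deployments
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Terraform then provisions whatever runs this file, whether that's a VM, a cloud instance, or a customer host, so the same `docker compose up -d` works everywhere.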

For clustering needs, there's Incus, and finally Kubernetes for very fast (sub-minute) scaling, massive deployments on large clusters, cloud offloading, and microservices.

Almost nobody truly needs the complexity of Kubernetes. The ROI simply isn’t there for the majority of use cases.


Part 9 elaborates on GOOL, the Lisp dialect they designed in-house to create the gameplay.

This is my favorite part: https://all-things-andy-gavin.com/2011/03/12/making-crash-ba...


The best challenger to systemd in terms of feature parity is probably dinit: https://davmac.org/projects/dinit/

Have a look at Chimera Linux if you want to give it a try: https://chimera-linux.org/

runit, s6, and OpenRC don't have the downsides of systemd, but they also only cover a subset of its features.


The article and discussion are about runit, why bring systemd into it? Diversity in solutions is a good thing, there’s no need to feel threatened by that.


I'm okay with it for comparison's sake.

As a long-time runit user, I'll admit systemd does far better with sequencing. With runit you have to provide a check executable, then call 'sv check servicename' in the run script of any service that depends on another.
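
A minimal sketch of that pattern (service names and paths are hypothetical):

```shell
#!/bin/sh
# /etc/sv/myapp/run -- hypothetical runit run script.
# runit has no dependency graph: the dependent service polls its prerequisite.
# 'sv check postgresql' runs postgresql's ./check script; if it fails we exit
# non-zero and runsv retries this run script about a second later.
sv check postgresql >/dev/null || exit 1
exec chpst -u myapp /usr/local/bin/myapp --config /etc/myapp.conf
```

It works, but every dependency edge is hand-rolled polling, which is exactly the sequencing work systemd does declaratively.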


Least sane systemd hater, try using runit or sysvinit on a production system and come back crying when your runit bash scripts fail all the time


You never addressed GP's point...

This is just a thread about runit; what good is bringing tribal, console-war-style arguments about systemd into it?


I've had systemd fail/freeze in weird ways very few times. I've had non-systemd init scripts fail zero times.


"My opinions are universal fact and not a skill issue"


It is bad enough that systemd developers belittle users for the breakages they suffer from systemd, it is worse that you join this behavior. The lockup/freeze bugs from fifos/cdevs/sockets are real and systemd is the only init system affected, as the other init systems do not have functionality that would need the open() calls. Example bug: https://github.com/systemd/systemd/issues/30690


The number of times I've had something break is not an opinion.

You claimed that anything but systemd would "fail all the time", I pointed out that in many years of using other options I've never had them do that.

The most egregious systemd bug I've hit was it just freezing on shutdown, with zero error message or even warnings; there's nothing a user should be able to do to produce that outcome that's a "skill issue" and not a bug. In any event, you're just making up claims without evidence.


The first convolutional neural network, the Neocognitron, was AFAIK implemented on a PDP-11 as well: https://www.semanticscholar.org/paper/Neocognitron%3A-A-neur...

No backpropagation back then, this only appeared around 1986 with Rumelhart, probably on VAX machines by that time.

The 11/34 was hardly a powerhouse (roughly a turbo XT), but it was sturdy, could handle sustained load, and its FPU made all the difference.


If I remember right, that FORTRAN IV compiler really sucked: it generated code for a stack machine, and the floating-point accelerator "sucked" by normal standards but was actually 100% effective at accelerating that stack machine. The FORTRAN 77 compiler that came later was better.


Author here. They call it a FORTRAN IV compiler but it uses some F66 extensions, such as proper types and functions, although it lacks some of the nicer constructs of F66 like If/Then/Else, which would have been handy.

Regarding floating point, I realized the code actually works fine without an FPU, so I assume it uses soft-float. There's no switch to enable the FP11 opcodes, maybe that was in their F77 compiler.

It's indeed rough and spartan, but using a 64KB optimizing compiler requiring just 8KB of memory was a refreshing change for me.


Yes! It took 73 years, but Fortran 77 was definitely better than Fortran IV.


Why 73 years?


Fortran IV -- released in 1904. Fortran 77: 1977.



> it used a stack machine

Do you have some reading for this? I've used that compiler but I never read the resulting assembly language.


I always found it annoying that Rumelhart and McClelland named their books with the acronym “PDP” - Parallel Distributed Processing. Now I know that they were probably aware of the name collision…

