Author here. Not ridiculous at all; the name is a bit of a misnomer. I had tried doing true ASCII, but moving data back to the CPU to render it all was too slow. So I opted to recreate the characters as glyphs drawn using signed distance functions, which gets pretty close to looking like real ASCII while still being incredibly performant, since the data never leaves the GPU.
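For anyone curious what "glyphs drawn using signed distance functions" means in practice, here's a rough CPU-side sketch of the idea in Go. The real thing runs as a GPU shader, and the specific distance function and coordinates below are my own illustration, not the project's code:

```go
package main

import (
	"fmt"
	"math"
)

// sdSegment returns the distance from point (px, py) to the line segment
// (ax, ay)-(bx, by). A few such strokes combined can approximate a glyph.
func sdSegment(px, py, ax, ay, bx, by float64) float64 {
	pax, pay := px-ax, py-ay
	bax, bay := bx-ax, by-ay
	h := (pax*bax + pay*bay) / (bax*bax + bay*bay)
	h = math.Max(0, math.Min(1, h))
	return math.Hypot(pax-bax*h, pay-bay*h)
}

// glyphSlash is a toy "/" glyph: one stroke in a unit cell. A fragment
// shader would evaluate something like this per pixel and shade the pixel
// whenever the distance falls inside the stroke width.
func glyphSlash(x, y float64) bool {
	return sdSegment(x, y, 0.2, 0.9, 0.8, 0.1) < 0.12
}

func main() {
	// Crude CPU-side preview of the SDF glyph on a 20x10 grid.
	for row := 0; row < 10; row++ {
		for col := 0; col < 20; col++ {
			x, y := float64(col)/19, float64(row)/9
			if glyphSlash(x, y) {
				fmt.Print("#")
			} else {
				fmt.Print(".")
			}
		}
		fmt.Println()
	}
}
```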
Suggestion: continue in the current LLM-generated track and ask Claude (or whatever) to create an example + unit tests validating the idiom. Then tell Claude to remove half the example, leaving only a stub + failing unit tests. Add a go.mod at the root + instructions on how to run all tests. The Go initiate is "certified" once they have forked the repository and made the tests pass.
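A minimal sketch of what such an exercise could look like, collapsed into one file for brevity (the function, package, and module names are hypothetical, not from any existing repo):

```go
// dedupe_test.go — the stub fails until the initiate restores the idiom
// that the deleted half of the example demonstrated. In a real repo the
// stub would live in its own non-test file.
package idioms

import "testing"

// Dedupe should return s with duplicates removed, preserving order.
// Stub left behind after deleting half the generated example.
func Dedupe(s []string) []string {
	return nil // TODO: make the test below pass
}

func TestDedupe(t *testing.T) {
	got := Dedupe([]string{"a", "b", "a", "c", "b"})
	want := []string{"a", "b", "c"}
	if len(got) != len(want) {
		t.Fatalf("Dedupe() = %v, want %v", got, want)
	}
	for i := range want {
		if got[i] != want[i] {
			t.Fatalf("Dedupe() = %v, want %v", got, want)
		}
	}
}
```

With a go.mod at the root (go mod init example.com/idioms), running go test ./... fails until the stub is filled in, which is the "certification" step.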
There are many open source alternatives to Claude Code. Crush[0] is one, Clai[1] another, opencode[2] a third. These are all vendor-agnostic and use API credits from different providers.
I'm a bit jealous. I would like to experiment with a similar setup, but 10x Opus 4.5 running practically non-stop must amount to a very high inference bill. Is the output really worth it?
From experimentation, I need to coach the models quite closely in order to get enough value. Letting them loose only works when I've given very specific instructions. But I'm using Codex and Clai; perhaps Claude Code is better.
I have a coworker who is basically doing this right now. He leads our team and is in second place overall. He regularly runs Opus in parallel and is burning through 1k worth of credits a day on his own.
I've tried running a number of Claudes in parallel on a CRUD full-stack JS app. Yes, it got features made faster; yes, it definitely did not leave me enough time to actually look at what they did; and yes, it definitely produced sub-par code.
At the moment, with one Claude + manually fixing the crap it produces, I am faster at solving "easier" features (think: add an API endpoint, rebuild the API client, implement frontend logic for the endpoint + UI) than if I write them myself.
For things that are more logic-dense, it tends to produce so many errors that it's faster to solve them myself.
I get some of the skepticism in this thread, but I don't get takes like this. How are you using CC such that the output you look at is "full of errors"? By the time I look at the output of a session, the agent has already run linting, formatting, testing, and so on. The things I look at are adherence to the conventions, files touched, libraries used, and so on. And the "error rate" on those has been steadily coming down, especially if you also use a review loop (with Codex, since it has been the best at review lately).
You have to set these things up for success. You need loops with clear feedback. You need a project that has lots of clear things to adhere to. You need tight integrations. But once you have these things, if you're looking at "errors", you're doing something wrong IMO.
I don't think he meant syntax errors, but thinking errors. I get these a lot with CC, especially with CSS, for example. It produces so much useless code, it blows my mind. Once I deleted 50 lines of code and manually added 4, which was enough to fix the error.
If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.
Hi, author here. Honestly, I just used this as a bookmarking place for myself, which you could infer if you go through some of the patterns. I've created a flow with CC where I just dump in a new source, like a podcast, post, or whatever, to have it for reference.
Thank you for putting it together. I looked at a couple of the references and they look like they point to your blog. Do you have any view of which patterns are most popular in terms of citations? Might be useful.
See my comment above. The repository is from May when I was intensely exploring everything agentic. I used it as a public bookmarking tool and also in the hope of receiving contributions. Thanks to this HN share, I received four PRs.
What paradigms are people using to have AI help generate better specs and then convert those specs into code and test cases? I felt the Kiro IDE from Amazon was a step in the direction of applying AI across the entire SDLC.
An agentic media player, intended as a home media server for... uhh... seasonal vacation videos with subtitles. I've experimented a lot with different "levels" of AI automation, starting from simple workflows, moving to more advanced ones, and soon to fully agentic.
Pretty good practice project! All written in Go with minimal dependencies and an embedded vanilla-JS frontend built into the binary (it's so small it's negligible).
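For anyone wondering how the embedded frontend works, here is a minimal sketch using Go's standard embed package (the directory layout and names are illustrative, not taken from the actual project):

```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// The frontend directory (index.html, app.js, ...) is compiled straight
// into the binary, so deployment is just copying a single file.
//
//go:embed frontend
var frontendFS embed.FS

func main() {
	// Strip the "frontend/" prefix so a request for /app.js maps to frontend/app.js.
	sub, err := fs.Sub(frontendFS, "frontend")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(sub)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```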
Question being: WHY would I be doing RAG locally?