Gitea has a built-in defense against this, `REQUIRE_SIGNIN_VIEW=expensive`, that completely stopped AI traffic issues for me and cut my VPS's bandwidth usage by 95%.
This is the surest way to make sure you remain the only user of your stuff.
I highly encourage folks to put stuff out there! Put your stuff on the internet! Even if you don't need it, even if you don't think you'll necessarily benefit: leave the door open to possibility!
Crawlers will find everything on the internet eventually, regardless of subdomain (e.g. from crt.sh logs, or from Google seeing them in 8.8.8.8 queries).
REQUIRE_SIGNIN_VIEW=true means signin is required for all pages - that's great and definitely stops AI bots. The signin page is very cheap for Gitea to render. However, it is a barrier for regular human visitors to your site.
"expensive" is a middle ground that lets normal visitors browse and explore repos, view READMEs, and download release binaries. Signin is only required for "expensive" pageloads, such as viewing file content at specific commits or browsing git history.
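For reference, the setting lives in Gitea's app.ini under the [service] section. A minimal sketch (the rest of your config stays as-is):

```ini
; app.ini
[service]
; "true" gates every page behind signin; "expensive" (experimental)
; only gates costly pages like per-commit file views and git history.
REQUIRE_SIGNIN_VIEW = expensive
```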
From Gitea's docs I was under the impression that it went further than "true", so I didn't understand the point, since "true" was enough to keep the bots from bothering me.
But in your case you want a middle ground, which is exactly what "expensive" provides!
> Enable this to force users to log in to view any page or to use API. It could be set to "expensive" to block anonymous users accessing some pages which consume a lot of resources, for example: block anonymous AI crawlers from accessing repo code pages. The "expensive" mode is experimental and subject to change.
Forgejo doesn't seem to have copied that feature yet
Go exposes raw pointers to the programmer, and its current GC is entirely non-moving. Even excluding cgo, I think a moving one would probably break real programs that rely on pointer values.
Yes, there's a case to be made that exposing "real" pointers in a GC'd language was a substantial mistake, but I guess it simplified _some_ parts of FFI. The trade-off so far is maybe fine, but it is a shame that there are certain things that can't be done without introducing substantial new costs. Maybe the compiler could learn to do something suuuper clever, like recognizing when pointers are being used non-transparently and automatically pinning those, but that seems fraught with potential error; the trivial example is stuff like &a[0] (that one's easier to catch, others might not be).
True, I forgot about the unsafe package. They would probably have to make it a Go 2 thing and either add indirection to raw pointers or require "pinning" them. Since pinning already exists for cgo, I suspect that would make more sense and wouldn't carry a performance penalty.
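To make the pinning idea concrete, here's a minimal Go sketch. runtime.Pinner has existed since Go 1.21 for exactly the cgo case; passToC is a hypothetical stand-in for a cgo call:

```go
package main

import (
	"fmt"
	"runtime"
	"unsafe"
)

// passToC is a hypothetical stand-in for a cgo call that reads
// through the pointer while it is pinned.
func passToC(ptr unsafe.Pointer) { _ = ptr }

func main() {
	buf := make([]byte, 64)

	// Converting a pointer to uintptr hides it from the GC. Today's
	// non-moving collector happens to keep addr valid; a moving one
	// could relocate buf and leave addr dangling.
	addr := uintptr(unsafe.Pointer(&buf[0]))
	fmt.Printf("raw address: %#x\n", addr)

	// runtime.Pinner (Go 1.21+) is the explicit opt-out a moving GC
	// would need everywhere: a pinned object cannot move until Unpin.
	var p runtime.Pinner
	p.Pin(&buf[0])
	passToC(unsafe.Pointer(&buf[0])) // address is stable while pinned
	p.Unpin()
}
```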
I can’t find it either. It may have been him; washing machines are the kind of alternating-current appliance he avoided in preparation for living in space.
HN was so enamored with him back when his whole deal was hacking eating to be more productive by not having to chew.
> I enjoy doing laundry about as much as doing dishes. I get my clothing custom made in China for prices you would not believe and have new ones regularly shipped to me. Shipping is a problem. I wish container ships had nuclear engines but it’s still much more efficient and convenient than retail. Thanks to synthetic fabrics it takes less water to make my clothes than it would to wash them, and I donate my used garments.
I'm sure I remember a few more details, like his claim that black t-shirts worn only once are the most stylish possible garment, but I'm willing to put that part down to the Berenstein Bears effect.
What gives a WAF the ability to respond to a 0-day incident is rapid rollout to 100% of endpoints, which is a SPOF no matter whether it's done by one big company or by a distributed system.
Assuming there are still two WAF makers, they hopefully do two mostly independent rollouts, at least with separate reviewers. It is a little shocking to me how far we have slid down the slope of letting one monopoly decide when each part of our computing environment is up. But if the bigger organizations are down too, it is socially acceptable to have an outage.
An AOT TS -> C compiler is fantastic - how much of the language is supported, and what are the limitations on TS support? I assume highly dynamic stuff and eval are out of scope?
Most of the TS language is supported; anything that isn't can be considered a bug we need to fix. Eval is supported, but it won't be able to capture variables outside of the eval string itself. We took the reverse approach from most other TS-to-native compiler projects: we wanted the compiler to be as compatible with JS as possible, at the expense of reducing performance initially, to make it possible to adopt the native compiler incrementally at scale.
There are significant trade-offs with this compiler at the moment: it uses much more binary size than minified JS or JS bytecode, and performance improvements range from 2x down to sometimes zero. It's a work in progress; it's pretty far along in what it supports, but its value proposition is not yet where it needs to be.
IIRC, his "LibAV" fork was malicious and his people lied a lot to the community ("ffmpeg is now deprecated!"). Ultimately, they failed, but I see a lot of their rhetoric and resentment in Kostya's post today.
This isn't my place to argue, and certainly he was involved at the time, but LibAV wasn't really "his fork". Reading the full list of names who signed off (https://lwn.net/Articles/423703/), I'm more interested in some other names, including darkshikari's deadname and the other heavy hitters of x264.
And if you browse the ffmpeg mailing list for that historic month (https://ffmpeg.org/pipermail/ffmpeg-devel/2011-January/), you'll find his name mostly attached to esoteric video game format patches, not the big flamewar threads.
Actually - it looks like in that same month you can also see his post of the first SMUSH codec implementation, the one we're discussing in this thread. That's probably a bigger emotional factor than LibAV.
That's my point and also why I won't dig more. The author hasn't come to terms with his bad experience with ffmpeg, in LibAV or otherwise. Whatever the baggage is, it feels quite messy and heavy. It weighs down the article, with too much bitterness and resentment. That could work for self-therapy, less so for a credible account. As he confesses at the end:
> I wrote this post with an ulterior motive—I don’t want to feel shame when remembering that I took part in that project. So far though the more I hear about it the more disgusted I become.
It’s very… sad, I guess, watching a lot of software engineering discourse on social media (at least, what I see from Twitter) just become this attention-grabbing shitposting. ffmpeg is very much a big player in this field, and it has paid off handsomely - those tweets are often popular on the site, and shared across other social media.
The most interesting part of that is the admission that they used decompilers to reverse engineer the codecs. I wonder if making that output freely available is legal.
Reverse engineering for interoperability is generally legal. Even if it weren't, copyright does not follow the "fruit of the poisonous tree" idea, so if the new code isn't substantially similar to the original, it doesn't matter.
That requires an ISA emulation layer; this new implementation doesn't. Here, every binary is compiled to Wasm, each child process runs as a new Wasm WebWorker, and the kernel ABI is exposed as Wasm export functions.
Removing the ISA translation layer has the potential to be massively faster for full-system environments, at the expense of maybe some new bugs.
The performance should ultimately be similar to compiling your userspace application directly to Wasm, but you now get to take advantage of the full kernel ABI instead of just the minimal shims Emscripten gives you, or whatever DOM glue you write yourself.
Languages like Odin, ISPC, and Jai have annotations that can automatically transform AoS to SoA. A key benefit is that you can easily experiment to see whether this helps your application without doing a major refactor, as in the sketch below.
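Go has nothing like those annotations, but as a hedged sketch of the rewrite they automate (Particle is a made-up example type):

```go
package main

import "fmt"

// AoS: each particle's fields are interleaved in memory.
type Particle struct {
	X, Y, Z, Mass float32
}

// SoA: the same data, but each field gets its own contiguous slice.
// This is the layout that #soa-style annotations derive for you.
type Particles struct {
	X, Y, Z, Mass []float32
}

// toSoA is the mechanical rewrite the annotations automate: one loop
// that scatters each struct field into its own array.
func toSoA(aos []Particle) Particles {
	n := len(aos)
	soa := Particles{
		X: make([]float32, n), Y: make([]float32, n),
		Z: make([]float32, n), Mass: make([]float32, n),
	}
	for i, p := range aos {
		soa.X[i], soa.Y[i], soa.Z[i], soa.Mass[i] = p.X, p.Y, p.Z, p.Mass
	}
	return soa
}

func main() {
	aos := []Particle{{1, 2, 3, 10}, {4, 5, 6, 20}}
	soa := toSoA(aos)

	// A reduction over one field now streams through contiguous memory
	// instead of striding over whole structs.
	var total float32
	for _, m := range soa.Mass {
		total += m
	}
	fmt.Println("total mass:", total)
}
```

The point of the annotation is that the language generates this rewrite (and the matching accessors) for you, so flipping between the two layouts is a one-line change rather than a refactor.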