I used to religiously subscribe to Outside magazine in print. I had to go check whether it was still being published [0], and it is, although it's perhaps quarterly now?
[0] Since so many magazines and newspapers are going out of business and just selling their domains to dogshit spam factories for the incredible Page Rank they have.
It's still published, I get a print issue probably every quarter, yeah. I flip through really quickly before it gets tossed in the recycling bin. Sometimes I flip quickly enough that it doesn't even make it into the house before it goes to recycling.
It used to be great, then turned into kind of an airport magazine (you know, the kind you'll read on the plane but not subscribe to), and after it got bought out it's garbage now (see above: I mean this literally). Personally, I'm extra miffed that they took Trail Running magazine with them.
Why do I continue to subscribe? Because along with Outside magazine they (I forget who "they" are, exactly) bought the Gaia GPS app which I use extensively. So I'm basically buying the Gaia subscription and get a shitty print magazine thrown in for free (oh, yeah, and access to their online edition, which redefines "garbage". It's awful, I could spend pages on the topic.) I am currently reevaluating how much I really use Gaia GPS, and what a suitable alternative would be. In many cases, Footpath (an HN user creation, IIRC) might do the trick.
...By clicking “Accept All Cookies” you consent to the setting of these cookies and technologies. By clicking “Decline All Cookies” you decline all non-necessary cookies and similar technologies...
[Accept All Cookies]
There was no [Decline All Cookies] button at all. Why even bother with the pretense of a consent warning?
Even if it wasn't outright beneficial for decoding by itself, it would still allow you to connect a second machine running a smaller, more heavily quantized version of the model for speculative decoding, which can net you a >4x speedup without quality loss.
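For anyone who hasn't seen how that works, the core loop is simple enough to sketch. This is the greedy variant, with hypothetical `draft`/`target` objects standing in for whatever inference stack you actually run (not any particular library's API):

```python
# Hypothetical interfaces, purely for illustration:
#   draft.next_token(seq)   -> greedy next-token id from the small/quantized model
#   target.greedy_all(seq)  -> list g where g[i] is the big model's greedy next
#                              token given seq[:i+1], computed in ONE forward pass

def speculative_step(target, draft, tokens, k=4):
    # 1. The cheap draft model proposes k tokens autoregressively.
    proposal = list(tokens)
    for _ in range(k):
        proposal.append(draft.next_token(proposal))

    # 2. A single pass of the expensive target model scores every position.
    preds = target.greedy_all(proposal)

    # 3. Keep draft tokens while the target agrees; on the first disagreement
    #    substitute the target's own token, so each target pass emits between
    #    1 and k+1 tokens and the output matches plain greedy decoding.
    out = list(tokens)
    for i in range(k):
        target_tok = preds[len(tokens) + i - 1]
        out.append(target_tok)
        if target_tok != proposal[len(tokens) + i]:
            break
    else:
        out.append(preds[-1])
    return out

# Usage: call repeatedly, e.g. tokens = speculative_step(target, draft, tokens)
```

The win is that the big model only does one forward pass per 1 to k+1 emitted tokens, so even a modest acceptance rate from the draft multiplies throughput without changing the output.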
The rock they dug through for Koralm is, no hyperbole, about as bad as it gets. It's the gnarliest part of what's under the Alps and required them to switch back and forth between boring and blasting.
Being two separate tunnels, it also needs twice as much excavation work. It's also ~25x deeper than Toei Oedo (4000ft vs 157ft). At 4000ft the rock itself is 45-50°C!
The Koralm tunnel has a different temperature gradient, as the depth is a consequence of a mountain on top of the tunnel rather than increased proximity to the Earth's hot interior.
> "The undisturbed rock temperature varies from 10 °C, in tunnel sections close to the portals, to 32 °C in the tunnel centre"
32°C is still a significant engineering concern, but not as consequential as 45–50°C.
Looks like they reported a peak of 39°C in the summer. Either way, I figured that would still be pretty miserable, especially if it gets up around 100% humidity.
I assumed they would at least have to bring their own air supply in the sections that didn't have aircon/ventilation while it was being built. They don't even need to do that anymore! The ventilation systems they used are as advanced and bespoke as the boring machines.
Because they were blasting too, they couldn't use full-face pressurization of the entire tunnel to maintain negative pressure and suck out all of the fumes, dust, silicates, etc. like they would if it were only boring. That's 1-3 kPa, "leaks are jets of air, can pull an airlock door closed hard enough to break bones" territory.
Instead, they have a bunch of dedicated supply and exhaust vents going to the surface (some up to 2m in diameter) and sets of connections between the two tunnels with huge axial fans. It allows them to selectively apply "slight" negative pressure to any of the individual segments when they need to clear them. 50 Pa is ~10x what you encounter in a negative-pressure high-rise, and it's described as a "constant slight breeze".
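For a rough feel for those two pressure figures (the door size here is my assumption, just for scale):

```python
# Back-of-envelope: force = pressure difference x door area.
door_area_m2 = 0.9 * 2.0                  # assume a ~0.9 m x 2.0 m airlock door

for dp_pa in (1_000, 3_000, 50):          # the 1-3 kPa full-face range vs. the 50 Pa segments
    force_n = dp_pa * door_area_m2
    print(f"{dp_pa:>5} Pa -> {force_n:6.0f} N (~{force_n / 9.81:4.0f} kgf) across the door")

# 1-3 kPa means a few hundred kgf holding the door shut; 50 Pa is under 10 kgf,
# which matches the "constant slight breeze" description.
```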
I found this short video on some of the safety features of the finished tunnel. It almost looks "too serious", like something out of a James Bond movie https://www.youtube.com/watch?v=I8trt96huf0
A tunnel on the Kurobe gorge railway (originally used for dam construction, now partially open to tourists) reached 160 °C (!!) during construction, but has since cooled down to a manageable 40 °C.
It's also really hard to make the tunnel remain a tunnel over its expected 150-year lifespan, given that it basically runs through a fault line. They had to study and test the local geology for about 15 years, build certain sections to expect some movement over time, and kit everything out with a lot of sensors.
Overall an amazing achievement, and unsurprising it took this long to figure out!
After seeing some of the safety features in a short video I linked in another comment, I get the impression that this is either going to last much longer than 150 years, or that something so catastrophic will happen that nothing anyone could have built would have survived.
Good point about "boring vs blasting". I didn't think about that. I remember reading about the longest tunnel in Japan between Honshu and Hokkaido (Seikan Tunnel). I recall that it was entirely hand drilled due to unusual soil conditions. I wonder if that would still be true today with state of the art tunnel boring machines.
> Being two separate tunnels, it also needs twice as much excavation work.
Yet another great point. At some of the Toei Oedo stations, you can see a miniature model of the weird overlapping twin tunnel boring machines. So, in theory it is two tunnels, but in practice, it was dug as a single, weird overlapping twin tunnel.
Obviously this does not give any indication of the complexity of each project. Tunnelling and building a railway through a metropolis is, I would imagine, quite challenging.
Still seems insanely more expensive in the UK. I understand they have a higher cost to carry because their project is indeed more complex, but that's almost 13x more expensive while not even being twice the length.
Natural gas turbines are pretty common (power plants, large on-site/mobile generators) and the efficiency levels of these are the same as what you'd see in similar use cases. Turbines don't really care what they're doing (within reason); these just happen to share a lot of parts with a plane engine.
The cost issue is completely unrelated to supply or usage; there is a cyclical problem of power companies using their profits for lobbying in order to push through measures that allow them to raise their rates further. That spending is often far more than is publicly disclosed.
For example, last year in this state my power company made billions of dollars and claims they spent less than a million on political contributions. But if you look at their donations, grants, and development programs, there is over a hundred million dollars mostly going to companies and nonprofits owned in part by the same politicians or their family members, as well as to the municipalities where the policymakers live.
In my state, the combined rate increases over the past five years for both electricity and natural gas come to more than 1.5x inflation. Each time it is framed in the press as a good thing: "we reached a solid deal, for less than half of what they were asking!" Every year the profits exceed their expectations by a few percent, and every year more people are having their power shut off.
Or a Strix Halo Ryzen AI Max. Lots of "unified" memory that can be dedicated to the GPU portion, for not that expensive. Read through benchmarks to know if the performance will be enough for your needs though.
Do you think the larger Mistral model would fit on an AI Max 395? I've been thinking about buying one of those machines, but haven't convinced myself yet.
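A back-of-envelope fit estimate is easy enough; the parameter counts, quant width, and overhead below are my assumptions, with "the larger Mistral" taken to mean something in the ~123B class:

```python
# Rough memory-fit estimate: weights + an allowance for KV cache / runtime buffers.
def fit_estimate_gb(params_b, bits_per_weight=4.5, overhead_gb=8):
    return params_b * bits_per_weight / 8 + overhead_gb

for params_b in (24, 123):                    # e.g. a ~24B model vs. a ~123B model
    print(f"{params_b}B @ ~4-bit: ~{fit_estimate_gb(params_b):.0f} GB")

# ~22 GB vs. ~77 GB: the big one only works if you can dedicate most of a
# 128 GB unified-memory configuration to the GPU; smaller configs won't cut it.
```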
I've been running local models on an AMD 7800 XT with ollama-rocm. I've had zero technical issues. It's really just that the usefulness of a model constrained to 16GB of VRAM + 64GB of main RAM is questionable, but that isn't an AMD-specific issue. It was a similar experience running locally with an Nvidia card.
Hopefully this trend continues; there are too many dogshit people in positions they shouldn't have at Apple.
Two of the three worst interviews I've ever had were with them. I basically got flown out twice to be insulted by team leads or upper management. Everyone insists I'm supposed to keep trying until I don't encounter someone like that, but that doesn't seem right to me, not for a company like this. I can wait.
I exclusively download AV1 encodes from places like tbp. It has fantastic quality for the filesize, and AV1 also benefits the most from the trick of encoding SDR content in 10 bit (more accurate quantization at a smaller size). Crazy that we can fit ~two hours of 1080p video at better than Netflix quality (they bias their PSNR/etc. a little low for my eyes) on a single CD.
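Quick back-of-envelope on the CD claim, assuming a plain 700 MB data disc:

```python
# What bitrate does "two hours on one CD" imply?
cd_bytes = 700 * 1000 * 1000           # ~700 MB data CD (assumed)
runtime_s = 2 * 60 * 60                # ~two hours of video
bitrate_kbps = cd_bytes * 8 / runtime_s / 1000
print(f"~{bitrate_kbps:.0f} kbps total")   # ~780 kbps for video + audio combined
```

Call it roughly 700 kbps of video once you subtract audio, which is exactly the kind of starved budget where AV1's efficiency matters.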
I'm not sure it's fair to call reencodes expensive. Sure, it's relatively expensive compared to just running ffprobe, but any 40-series Nvidia GPU with 2 NVENC engines can handle five (?) simultaneous realtime encodes, or will get up to near 180fps if it isn't being streamed. Our "we have AJA at home" box with four of them churned through something like 20,000 hours of video in just under two weeks.
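Those numbers roughly hang together, if I assume the 180fps figure is per NVENC engine and the source material is ~24fps:

```python
# Needed aggregate speed to clear the backlog in the stated time.
hours_of_video = 20_000
wall_clock_h = 14 * 24                      # "just under two weeks"
print(f"needed: ~{hours_of_video / wall_clock_h:.0f}x realtime")   # ~60x

# Available speed under my assumptions (180 fps per engine, ~24 fps sources).
gpus, engines_per_gpu, fps_per_engine, source_fps = 4, 2, 180, 24
print(f"available: ~{gpus * engines_per_gpu * fps_per_engine / source_fps:.0f}x realtime")  # ~60x
```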
My understanding is that you shouldn't be using HW accelerated encoding for any archival purpose except realtime capture.
The PSNR at a given bitrate is much lower for HW encodes, but the encode rate is typically realtime or better. That's a great tradeoff if you are transcoding so that a device with limited bandwidth can receive the video while streaming, or so that you can encode a raw livestream from a capture device or camera. It's not so great if you are saving to disk and planning to watch it multiple times.
It's not just great. It's so good that even on Android phones much older than the ones tested in those links, the brightness of the screen has a larger impact than the decoding.
This is by design, so that even extremely dated smart TVs and the like can also benefit from the bandwidth savings.
Fun fact: I can't say which, but some of the oldest devices (smart TVs, home security products, etc.) work around their dated hardware decoders by buzzsawing 4K video in half, running each piece through the decoder at a resolution it supports, then stitching the halves back together.
Safari's JSC (and, much more recently, WebAssembly) are the only ones that actually implement it. In practice I don't think it ends up being any better than V8, which I believe has some amount of logic to replace them with iterators or trampolines when it can.
IIRC ES6 introduced PTC (proper tail calls); Chrome had an experimental flag for a while, but it introduced more issues than it solved (stack traces became a mess, and the stack-inspection changes came with realistic security concerns). Microsoft and Firefox refused to implement it. Safari refused to un-implement it, and also refused to adopt an opt-in per-function flag.
It's crazy how fast JavaScript has gotten. I ported a classic game earlier this year using all of the new stuff in ES5/6/onwards, and the benchmarks are within a couple of percent of the perf it would have as a standalone game. It runs with 250x the monsters at 30x the original tick rate, or >1000x as many monsters at the original tick rate.