Yeah, gotta say I'm not enthusiastic about handing over any health data to OpenAI. I'd be more likely to trust Google or maybe even Anthropic with this data and that's saying something.
> Job openings in the US fell by 303,000 to 7.146 million in November 2025, the lowest since December 2020 and well below market expectations of 7.60 million. The number of job openings decreased in accommodation and food services (-148,000); transportation, warehousing, and utilities (-108,000); and wholesale trade (-63,000). On the other hand, openings increased in construction (+90,000). Meanwhile, hires were little changed and total separations were unchanged at 5.1 million each. Within separations, both quits (3.2 million) and layoffs and discharges (1.7 million) were little changed.
Personally, I'd say it's worthwhile for Americans to know where to get the canonical data directly: https://www.bls.gov/news.release/jolts.a.htm. Everything else is some sort of spin, interpretation, or at best selective reporting of the underlying primary data.
I generally agree. But with the current administration firing the former BLS chief (possibly due to bad numbers being reported) and changing economic reporting (e.g. PPI and GDP estimates), I'm not sure I trust the government data to not also have some sort of spin or selective releasing.
Craig Fuller, the CEO of FreightWaves, has been indicating that their freight data clearly suggests the US economy is in much worse shape than official reporting shows.
Certainly sounds like canaries telling us the rest of the economy is not doing great. (Not warning us that it's going to have problems. Telling us it already does.)
Happily, this wasn't actually the case. Canaries faint long before they die, and miners would carry small resuscitation chambers where the canaries could be revived in an oxygen-rich atmosphere. The Science and Industry Museum in Manchester has one in its collection: https://blog.scienceandindustrymuseum.org.uk/canary-resuscit...
Interesting, but I frankly doubt the birds remained utterly unharmed. Birds are really sensitive to many gases; the common advice is not to cook with nonstick pans if you have a parrot.
> Some economists have questioned the validity of the JOLTS data, in part due to the survey’s low response rate and sometimes sizable revisions. A separate index by job-posting site Indeed, which is reported on a daily basis, showed openings rebounded in November after reaching a multiyear low.
Igalia is a bit unique in that it serves as a single corporate entity for organizing a lot of sponsored work on the Linux kernel and open source projects. You'll notice in their blog posts that they collaborate with a number of large companies seeking to sponsor very specific development work; Google, for example, works with them a lot. I think it really just simplifies a lot of the logistics of paying folks to do this kind of work, plus the Igalia employees get shared efficiencies and savings on things like benefits, etc.
This seems to be a win-win where developers benefit from more work in niche areas, companies benefit by getting better developers for the things they want done, and Igalia gets paid (effectively) for matching the two together, sourcing sufficient work/developers, etc.
I don't know much about Igalia, but they're worker-owned and I always see them working on tasks with high skill requirements. Makes me wish I were good enough to work for them.
I suspect Ollama is at least partly moving away from open source as they look to raise capital; when they released their replacement desktop app, they did so as closed source. You're absolutely right that people should be using llama.cpp: not only is it truly open source, but it's also significantly faster, has better model support and many more features, is better maintained, and its development community is far more active.
The only issue I've found with llama.cpp is getting it working with my AMD GPU. Ollama almost works out of the box, both in Docker and directly on my Linux box.
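For what it's worth, the Vulkan backend has been the path of least resistance for me on AMD, since it sidesteps ROCm version matching. A rough sketch of a build (flag names are from llama.cpp's current CMake options, so double-check against the repo; the model path and gfx target are placeholders):

```shell
# Build llama.cpp with the Vulkan backend (usually the least
# fragile route for AMD GPUs -- no ROCm/kernel version dance).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Or, if your card is on the ROCm support list, the HIP backend:
#   cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100

# Then serve a model, offloading all layers to the GPU:
./build/bin/llama-server -m ./models/your-model.gguf -ngl 99
```

`-ngl 99` just says "offload as many layers as fit"; drop it down if you run out of VRAM.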
ik_llama is almost always faster when tuned. However, when untuned I've found them to be very similar in performance with varied results as to which will perform better.
But vLLM and SGLang tend to be faster than both of those.
Imagine a Steam TV with the Steam Box simply built-in. That would be incredibly nice. The worst part of my brand new LG G5 OLED TV is the software itself. I'd pay a good deal more to have Valve responsible for the software running on my TV.
It might be nice for a little while, but the PC component is going to age much more poorly than the display will.
I think the better move would be for Valve to make a really nice gamer-oriented dumb TV that's essentially a 50"+ monitor. Kind of like those BFGDs (Big Format Gaming Displays) sans the exorbitant prices. The size of a Steam Box is in comparison quite diminutive, so finding a place to put it shouldn't be too much of an issue and the ability to swap it out for a newer model with the same screen 5+ years down the road would be nice.
There's actually a quasi-standard interface between TVs and compute units, made for industrial displays. This could be really nice for things like Steam cards that just slot into TVs with whatever performance you need.
And even better, make it as open as the Steam Deck/Machine and allow installing any GNU/Linux distribution onto it, maybe even something like KDE Plasma Bigscreen or similar if desired.
Seems okay. It's no Opus 4.5 or Gemini 3 Pro according to the benchmarks. Also, still a good chance the AWS team is benchmaxing the same as last time.
Additionally, my experience with Bedrock hasn't made me a huge fan. If anything, it's pushed me towards OpenRouter: way too many 500 errors when we're well below our service quotas.
I've had to repeatedly tell our AWS account reps that we're not even a little interested in the Trainium or Inferentia instances unless they have a provably reliable track record of working with the standard libraries we have to use like Transformers and PyTorch.
I know they claim they work, but that's only on their happy path with their very specific AMIs and the nightmare that is the Neuron SDK. Try to do any real work with them using your own dependencies, and things tend to fall apart immediately.
It was just in the past couple of years that it really became worthwhile to use TPUs if you're on GCP, and that's only because of Google's huge investment in software support. I'm not going to sink hours and hours into beta testing AWS's software just to use their chips.
IMO, once you get off the core services, AWS is full of beta-quality offerings. S3, DynamoDB, Lambda, ECS, etc. are all solid, but a lot of their other services have some big rough patches.
RDS, Route 53, and ElastiCache are decent, too. But yes, I've also been bitten badly in the distant past by attempting to rely on their higher-level services. I guess some things don't change.
I wonder if the difference is stuff they dogfood versus stuff they don't?
I once used one of their services (I forget which, but I think it was their serverless product) that “supported” Java.
… but the official command-line tools had show-stopper bugs if you were deploying Java to this service, bugs that had been known for months; some features couldn't be used from Java at all, and the docs were only about 20% complete.
But this work-in-progress alpha (not even beta quality because it couldn’t plausibly be considered feature complete) counted as “supported” alongside other languages that were actually supported.
(This was a few years ago and this particular thing might be a lot better now, but it shows how little you can trust their marketing pages and the AWS dashboard GUIs.)
I'm assuming you're talking about Lambda. I don't mess with their default images. Write a Dockerfile and use containerized Lambdas. Saves so many headaches. Still have to deal with RIE though, which is annoying.
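For reference, a minimal sketch of what that looks like, using AWS's public Python Lambda base image (the handler module/function names and file names here are placeholders):

```dockerfile
# Container-image Lambda: the public.ecr.aws/lambda base images bundle
# the runtime interface client, so you only add your code and deps.
FROM public.ecr.aws/lambda/python:3.12

# Install your own dependencies instead of fighting the managed runtime.
COPY requirements.txt .
RUN pip install -r requirements.txt

# LAMBDA_TASK_ROOT is set by the base image.
COPY app.py ${LAMBDA_TASK_ROOT}

# "module.function" that Lambda invokes.
CMD ["app.handler"]
```

Locally, the bundled RIE means you can `docker run -p 9000:8080 <image>` and then `curl -d '{}' http://localhost:9000/2015-03-31/functions/function/invocations` to test an invoke before pushing to ECR.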
But yes, the less of a core building block the specific service is (or widely used internally in Amazon), the more likely you are to run into significant issues.
Hmm, is it actually that bad? Keep in mind R2 is only stored in one region, which is chosen when the bucket is first created, so that might be what you're seeing.
But I've never really looked too closely, because I just use it for non-latency-critical blob storage.
Personally, EMR has never shaken off the "scrappy" feeling (sometimes it feels OK if you're using Spark), and it feels even more neglected recently as they seem to want you on AWS Glue or Athena. LakeFormation is... a thing that I'm sure is good in theory if you're using only managed services, but in practice is like taking a quick jaunt on the Event Horizon.
Glue Catalog has some annoying assumptions baked in.
Frankly, the entire analytics space on AWS feels like a huge mess of competing teams and products instead of a uniform vision.