Hacker News | ZeroCool2u's comments


Yeah, gotta say I'm not enthusiastic about handing over any health data to OpenAI. I'd be more likely to trust Google or maybe even Anthropic with this data and that's saying something.


Not necessary; this information is free:

https://finviz.com/calendar/economic/detail/UNITEDSTAJOBOFF?...

> Job openings in the US fell by 303,000 to 7.146 million in November 2025, the lowest since December 2020 and well below market expectations of 7.60 million. The number of job openings decreased in accommodation and food services (-148,000); transportation, warehousing, and utilities (-108,000); and wholesale trade (-63,000). On the other hand, openings increased in construction (+90,000). Meanwhile, hires were little changed and total separations were unchanged at 5.1 million each. Within separations, both quits (3.2 million) and layoffs and discharges (1.7 million) were little changed.


I'd say personally it's worthwhile for Americans to know where to get the canonical data directly: https://www.bls.gov/news.release/jolts.a.htm. Everything else is some sort of spin, interpretation, or at best selective reporting of the underlying primary data.

I generally agree. But with the current administration firing the former BLS chief (possibly due to bad numbers being reported) and changing economic reporting (e.g. PPI and GDP estimates), I'm not sure I trust the government data to not also have some sort of spin or selective releasing.

transportation, warehousing, and utilities being a headlining loser here is the most striking.... perhaps?

Heavy Truck Sales Collapsed in Q4; Down 32.5% Year-over-Year in December - https://news.ycombinator.com/item?id=46514490 - January 2026

(heavy trucks sales collapse is a recession indicator)


Craig Fuller, the CEO of FreightWaves, has been indicating that their freight data clearly suggests the US economy is in much worse shape than official reporting shows.

this is usually more accurate than most indicators, as America's economy runs on trucks, not donuts.

more trucks on the road means more goods flowing: heavy machinery included, things getting built, etc. fewer trucks means trouble


Certainly sounds like canaries telling us the rest of the economy is not doing great. (Not warning us that it's going to have problems. Telling us it already does.)

The economy canaries can't tell us anything: they're already dead.

That's literally how the canaries would inform the miners about toxic gases.

Happily, this wasn't actually the case. Canaries faint long before they die, and miners would carry small resuscitation chambers where the canaries could be reawakened in an oxygen-rich atmosphere. The Science and Industry Museum in Manchester has one in its collection: https://blog.scienceandindustrymuseum.org.uk/canary-resuscit...

Interesting, but I frankly doubt the birds remained utterly unharmed. Birds are really sensitive to many gases, with the common anecdote being to not cook with nonstick pans if you have a parrot.

"Yes, Ted. That was the joke."

Gift link costs me nothing :)

> Some economists have questioned the validity of the JOLTS data, in part due to the survey’s low response rate and sometimes sizable revisions. A separate index by job-posting site Indeed, which is reported on a daily basis, showed openings rebounded in November after reaching a multiyear low.

Thank you!


Igalia is a bit unique in that it serves as a single corporate entity for organizing a lot of sponsored work on the Linux kernel and open source projects. You'll notice in their blog posts that they have collaborations with a number of other large companies seeking to sponsor very specific development work. For example, Google works with them a lot. I think it really just simplifies a lot of the logistics of paying folks to do this kind of work, plus the Igalia employees get shared efficiencies and savings for things like benefits, etc.


Oh ok, so Igalia owns the developer sweatshops now. Got it.


This seems to be a win-win where developers benefit from more work in niche areas, companies benefit by getting better developers for the things they want done, and Igalia gets paid (effectively) for matching the two together, sourcing sufficient work/developers, etc.


I don't know much about Igalia but they are worker owned and I always see them work on high skill requirement tasks. Makes me wish I was good enough to work for them.


It's a cooperative sweatshop in that sense.


And the developers own Igalia.


Just because work is 'out-sourced' to contractors does not mean it is a sweatshop....


LMStudio is so much better than Ollama it's silly it's not more popular.


LMStudio is not open source though, ollama is

but people should use llama.cpp instead


I suspect Ollama is at least partly moving away from open source as they look to raise capital; when they released their replacement desktop app, they did so as closed source. You're absolutely right that people should be using llama.cpp: not only is it truly open source, it's significantly faster, has better model support and many more features, is better maintained, and its development community is far more active.


Only issue I have found with llama.cpp is trying to get it working with my AMD GPU. Ollama almost works out of the box, in Docker and directly on my Linux box.


>Only issue I have found with llama.cpp is trying to get it working with my amd GPU.

I had no problems with ROCm 6.x but couldn't get it to run with ROCm 7.x. I switched to Vulkan and the performance seems ok for my use cases


Desktop app is open-source now.


> but people should use llama.cpp instead

MLX is a lot more performant than Ollama and llama.cpp on Apple Silicon, comparing both peak memory usage + tok/s output.

edit: LM Studio benefits from MLX optimizations when running MLX compatible models.


> LMStudio is not open source though, ollama is

and why should that affect usage? it's not like ollama users fork the repo before installing it.


It was worth mentioning.


Note that there's also "LlamaBarn" (macOS app): https://github.com/ggml-org/LlamaBarn


Ollama did not open source their GUI.



Thanks, I stand corrected.


ik_llama is almost always faster when tuned. However, when untuned I've found them to be very similar in performance with varied results as to which will perform better.

But vLLM and SGLang tend to be faster than both of those.


Besides, optimizations specific to running locally land in llama.cpp first.


Imagine a Steam TV with the Steam Box simply built-in. That would be incredibly nice. The worst part of my brand new LG G5 OLED TV is the software itself. I'd pay a good deal more to have Valve responsible for the software running on my TV.


It might be nice for a little while, but the PC component is going to age much more poorly than the display will.

I think the better move would be for Valve to make a really nice gamer-oriented dumb TV that's essentially a 50"+ monitor. Kind of like those BFGDs (Big Format Gaming Displays) sans the exorbitant prices. The size of a Steam Box is in comparison quite diminutive, so finding a place to put it shouldn't be too much of an issue and the ability to swap it out for a newer model with the same screen 5+ years down the road would be nice.


Would they? A gaming PC from 2015 is still a decent machine today, just don't use laggy ahh win11


There's actually a quasi-standard TV/compute-unit interface made for industrial displays. This could be really nice for something like Steam cards that could just slot into TVs with whatever performance you need.

https://youtu.be/q9a3dCd1SQI


And even better, make it as open as the Steam Deck/Machine and allow installing any GNU/Linux distribution onto it, maybe even something with KDE Plasma Bigscreen or similar if desired.


You can get TVs with a "PC slot" like the Sharp M431-2. Just need a Steam Slot.


That's only 60Hz though. Are there any dumb TVs with 120+ Hz VRR and HDR?


Is this an actual thing people can buy, or only companies?


I see one for sale at B&H Photo Video.


Yeah, the reverse breakup fee is ~2.6B I believe, but the Paramount takeover doesn't have to succeed for that fee to kick in. WB just has to back out.


Right, but if it does succeed, does it then kick in?


Yes


Seems okay. It's no Opus 4.5 or Gemini 3 Pro according to the benchmarks. Also, still a good chance the AWS team is benchmaxing the same as last time.

Additionally, my experience with Bedrock hasn't made me a huge fan. If anything, it's pushed me towards OpenRouter. Way too many 500 errors when we're well below our service quotas.
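FWIW, the workaround we landed on for the intermittent 500s was plain client-side retries with exponential backoff and jitter. A minimal sketch; `TransientServerError` and `fn` are hypothetical stand-ins for whatever error type and API call your client surfaces, not Bedrock API names:

```python
import random
import time


class TransientServerError(Exception):
    """Hypothetical stand-in for a 5xx/throttling error from the model API."""


def call_with_backoff(fn, max_retries=5, base=0.5, cap=8.0):
    """Call fn(), retrying transient server errors with exponential backoff + jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TransientServerError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error to the caller
            # Sleep a random amount up to min(cap, base * 2^attempt) ("full jitter")
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The jitter matters: if every client retries on the same fixed schedule, the retries themselves arrive in synchronized waves.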


I've had to repeatedly tell our AWS account reps that we're not even a little interested in the Trainium or Inferentia instances unless they have a provably reliable track record of working with the standard libraries we have to use like Transformers and PyTorch.

I know they claim they work, but that's only on their happy path with their very specific AMIs and the nightmare that is the Neuron SDK. You try to do any real work with them using your own dependencies and things tend to fall apart immediately.

It was just in the past couple of years that it really became worthwhile to use TPUs if you're on GCP, and that's only with the huge investment on Google's part into software support. I'm not going to sink hours and hours into beta testing AWS's software just to use their chips.


IMO AWS, once you get off the core services, is full of beta services. S3, Dynamo, Lambda, ECS, etc. are all solid. But a lot of the services they offer have some big rough patches.


RDS, Route53, and Elasticache are decent, too. But yes, I've also been bitten badly in the distant past by attempting to rely on their higher-level services. I guess some things don't change.

I wonder if the difference is stuff they dogfood versus stuff they don't?


I once used one of their services (I forget which, but I think it was their serverless product) that "supported" Java.

… but the official command line tools had show-stopper bugs if you were deploying Java to this service, that’d been known for months, and some features couldn’t be used in Java, and the docs were only like 20% complete.

But this work-in-progress alpha (not even beta quality because it couldn’t plausibly be considered feature complete) counted as “supported” alongside other languages that were actually supported.

(This was a few years ago and this particular thing might be a lot better now, but it shows how little you can trust their marketing pages and GUI AWS dashboards)


I'm assuming you're talking about Lambda. I don't mess with their default images. Write a Dockerfile and use containerized Lambdas. Saves so many headaches. Still have to deal with RIE though, which is annoying.
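For reference, a containerized Lambda image is only a few lines; roughly like this, where `app.py` and the `app.handler` entry point are illustrative names, not anything AWS mandates:

```dockerfile
# AWS-provided Python base image for Lambda (ships the runtime interface client)
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the Lambda task root
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the function code and point Lambda at the handler
COPY app.py "${LAMBDA_TASK_ROOT}"
CMD ["app.handler"]
```

The AWS base images also bundle the Runtime Interface Emulator, which is why `docker run -p 9000:8080 <image>` lets you POST test events locally; convenient, but that's the RIE layer being referred to above.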


A big problem is when three AWS teams launch the same thing. Lowers confidence in dogfooding the "right" one.


Or when your AWS account rep is schmoozing your boss trying to persuade them to use something that is officially deprecated, lol.


Amazon Connect is a solid higher level offering. But only because it is a productized version of Amazon Retail’s call center


My understanding is that AWS productizes lots of one-offs for customers (like Snowball), so that makes sense


I'd add SQS to the solid category.

But yes, the less of a core building block the specific service is (or widely used internally in Amazon), the more likely you are to run into significant issues.


Lightsail fortunately behaves like a core service.


True with Cloudflare too. Just stick with Workers, R2, Durable Objects, etc...


Not even sure about R2 with its unpredictable latencies.


Hmm, is it actually that bad? Keep in mind R2 data lives in a single region, chosen when the bucket is first created, so that might be what you're seeing.

But I've never really looked too closely because I just use it for non-latency critical blob storage


>But there are a lot of services they have that have some big rough patches.

Enlighten us...


Personally, EMR has never shaken off the "scrappy" feeling (sometimes it feels OK if you're using Spark), and it feels even more neglected recently as they seem to want you on AWS Glue or Athena. LakeFormation is... a thing that I'm sure is good in theory if you're using only managed services, but in practice is like taking a quick jaunt on the Event Horizon.

Glue Catalog has some annoying assumptions baked in.

Frankly the entire analytics space on AWS feels like a huge mess of competing teams and products instead of a uniform vision.


Kinesis is decent


That's heartening to know. I find running Kafka less pleasant.


Checkout redpanda


This. 100 times this.


Agree, Google put a ton of work into making TPUs usable with the ecosystem. Given Amazon’s track record I can’t imagine they would ever do that.


There might be enough market pressure right now to make them think about it, but the stock price went up enough from just announcing it so whatever


Amazon has no interest in making their platform interoperable.



spoiler alert, they don't work without a lot of custom code

