Hacker Newsnew | past | comments | ask | show | jobs | submit | musictubes's commentslogin

JRiver is an advanced media player that works cross-platform, including Linux. It isn’t the prettiest thing around, and figuring out everything it can do can be frustrating, but it will do just about anything you’d like a media player to do.

I rarely interact with it directly. I usually use JRemote on my iPad or iPhone to control it. There is also an incredibly fast web front end you can use on whatever device you want.

Does the old Logitech music server (or whatever it is called these days) work on Linux? There have been a bunch of front end programs to use those servers.


The problem with that is that things like remasters, special editions, etc. screw up the timeline. Those are listed by the year they came out, which means they are no longer in original release order.

Just edit the year?

I don't think it is possible to have a locked-down development machine. You have to be able to run arbitrary code on a development machine, so they can never lock it down the way iOS is.

There are plenty of other ways they can be less open and hackable than Linux, but it can never get to the point of the iPhone.


That’s a reasonable take. The never part seems strong though.

If I may offer a slight consideration? “arbitrary code vs arbitrary signed code”.

What’s realistically stopping Apple from requiring all code and processes to be signed? Including on-device dev code, with a trust chain going back to Apple and TPM / Secure Enclave enforcement.


Nothing.

That's confusing "will boot anything" with "will run any userspace software".

Abortion is currently too divisive in the US to get a national health care system going. One side will absolutely refuse to include it and the other will absolutely require it. If one side brute forces it there will be immense backlash.

Along similar lines it isn't clear that having the federal government controlling healthcare at a more fundamental level is a good idea. Many (most?) would shudder at the thought of this administration controlling healthcare.


They are prioritizing safety, both personal and legal. Apple markets it as a way to find lost things, not stolen things. There are trackers you can buy for tracking stolen things. I'm only familiar with ones designed for cars, but I'm sure there are others as well.

Sigh.

Apple cannot lock down the Mac. You can’t have a development machine that is incapable of running arbitrary code. Back when they still did WWDC live, they said that software development was the biggest professional bloc of Mac users. I’m certain that these days development is the biggest driver of the expensive Macs. No one has ever made a decent argument as to why Apple would lock down the Mac that would also explain why they haven’t done it yet.

Passivity isn’t hostility. There isn’t any evidence that Apple is considering locking down the Mac. They could have easily done that with the transition to their own silicon but they didn’t despite the endless conspiracy theories.


Apple can lock down the Mac. You might not think it is likely, but without UEFI there is no path of recourse if Apple decides to update iBoot. How do you launch Asahi if Apple quits reading the EFI from the secure partition?

> They could have easily done that with the transition to their own silicon

They already did, that's what my last comment just outlined. Macs do not ship with UEFI anymore, you are wholly at the mercy of a proprietary bootloader that can be changed at any time.


Again, why haven't they done it yet? It's because you cannot lock down a development platform. Yes, they could do it, but it doesn't make any sense. You haven't articulated why they would do it, only that they could.

Why people continue to think Apple will treat the Mac like the iPhone I have no idea. Will Microsoft take the same approach with Windows as they did with Xbox? Different product, different strategy.


App.net was a wonderful experience with great developer buy in. It is also my understanding that it was operating at break even when it was mothballed. The VC backing it wanted Facebook returns. It was an amazing experience because it didn’t depend on advertisers. I have no idea how it would have fared through Covid and election dramas but it remains my platonic ideal for a social network.


It isn’t clear to me that Apple will ever pursue their own chatbot like Gemini, ChatGPT, etc. There’s lots of potential for on device AI functions without it ever being a general purpose agent that tries to do everything. AI and LLMs are not synonymous.


From a UX perspective, they already have Siri for that.


There are some visualizers in the Mac App Store. I'm using Ferromagnetic right now and like it well enough. There are still visualizers in Apple Music left over from the iTunes days but they're kind of lame.


I stumbled onto one years ago by accident, maybe an Easter egg or something. I came back to my computer (Mac) after several hours of iTunes playback to see a hitherto unknown visualization running, with fairly primitive-looking graphics by today's standards. It was not any of the visualizations available in iTunes at the time.

I filed a bug on it with Apple and they got back to me asking how the hell I had invoked this, because they'd never seen it before. Never did get to the bottom of it.


Intentional pun?


That article points out that GB5 and GB6 test multi-core differently. The author notes that GB6 is supposed to approach performance the way most consumer programs actually work. GB5 is better suited for testing things like servers where every core is running independent tasks.

The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.

If you’re going to use synthetic benchmarks it’s important to use the one that reflects your actual use case. Sounds like GB6 is a good general purpose benchmark for most people. It doesn’t make any sense for server use, maybe it also isn’t useful for other use cases but GB6 isn’t trash.


> The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.

The problem with this rejoinder is, of course, that you are then testing applications that don't use more cores while calling it a "multi-core" test. That's the purpose of the single core test.

Meanwhile "most consumer programs" do use multiple cores, especially the ones you'd actually be waiting on. 7zip, encryption, Blender, video and photo editing, code compiles, etc. all use many cores. Even the demon scourge JavaScript has had thread pools for a while now and on top of that browsers give each tab its own process.

It also ignores how people actually use computers. You're listening to music with 30 browser tabs open while playing a video game and the OS is doing updates in the background. Even if the game would only use 6 cores by itself, that's not what's happening.
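To make the thread-pool point concrete, here's a minimal Python sketch (the hashing workload is just an illustration I picked, not something from the thread). Hashing large buffers is one of the cases where the Python standard library releases the GIL, so the workers genuinely run on multiple cores:

```python
# Minimal sketch of the thread-pool pattern described above.
# The SHA-256 workload is illustrative; hashlib releases the GIL
# for large buffers, so these hashes can run on several cores.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Eight 1 MB chunks of work, like slices of a file being processed.
chunks = [bytes([i]) * 1_000_000 for i in range(8)]

# Fan the chunks out across a pool of worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(digest, chunks))

print(len(digests))  # 8
```

This is the same shape as the JavaScript worker pools and per-tab browser processes mentioned above: independent chunks, no shared state, so it scales with cores.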


Ok, I had time to read through this, and yeah, I agree: a multicore test should not be waiting on so much shared state.

There are examples of programs that aren't totally parallel or serial; they'll scale to maybe 6 cores on a 32-core machine. But there's so much variation in that, idk how you'd pick the right amount of sharing, so the only reasonable thing to test is something embarrassingly parallel or close to it. Geekbench 6's scaling curve is way too flat.
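A toy Amdahl's-law calculation shows why those partially parallel programs top out around 6 cores (the 90% parallel fraction here is made up for illustration, not measured from any real program):

```python
# Toy Amdahl's-law model. The parallel fraction is an assumption
# chosen for illustration, not a measurement.
def speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup on `cores` cores for a program whose runtime is
    `parallel_fraction` parallelizable and the rest serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A 90%-parallel program gains little beyond ~6 cores:
for cores in (1, 2, 6, 32):
    print(cores, round(speedup(0.9, cores), 2))
# 1 -> 1.0, 2 -> 1.82, 6 -> 4.0, 32 -> 7.8
```

Even at 32 cores the speedup is under 8x, which is why "how much sharing to model" completely determines what the curve looks like.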


Yeah. I think it might even be worse than that.

The purpose of a multi-core benchmark is that if you throw a lot of threads at something, it can move where the bottleneck is. With one thread neither a desktop nor HEDT processor is limited by memory bandwidth, with max threads maybe the first one is and the second one isn't. With one thread everything is running at the boost clock, with max threads everything may be running at the base clock. So the point of distinguishing them is that you want to see to what extent a particular chip stumbles when it's fully maxed out.

But tanking the performance with shared state will load up the chip without getting anything in return, which isn't even representative of the real workloads that use an in-between number of threads. The 6-thread consumer app isn't burning max threads on useless lock contention, it just only has 6 active threads. If you have something with 32 cores and 64 threads and it has a 5GHz boost clock and a 2GHz base clock, it's going to be running near the boost clock if you only put 6 threads on it.
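As a toy model of that clock behavior, using the hypothetical chip from above (64 threads, 5 GHz boost, 2 GHz base; real boost behavior depends on power and thermal limits, so the linear interpolation here is only a sketch):

```python
# Toy model: effective clock vs. active threads for a hypothetical
# 64-thread chip with a 5 GHz boost clock and 2 GHz base clock.
# Linear interpolation is an assumption for illustration; real chips
# step down based on power and thermal budgets.
def clock_ghz(active_threads: int, max_threads: int = 64,
              boost: float = 5.0, base: float = 2.0) -> float:
    frac = active_threads / max_threads
    return boost - (boost - base) * frac

# 6 active threads run near the boost clock...
print(clock_ghz(6))   # ~4.72 GHz
# ...while a fully loaded chip drops to the base clock.
print(clock_ghz(64))  # 2.0 GHz
```

That gap is the whole point: a benchmark that loads all 64 threads with lock contention measures the 2 GHz regime, while the real 6-thread app lives in the ~4.7 GHz regime.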

It's basically measuring the performance you'd get from a small number of active threads at the level of resource contention you'd have when using all the threads, which is the thing that almost never happens in real-world cases because they're typically alternatives to each other rather than things that happen at the same time.


It is worse. The use case of many threads, resource contention, diminishing and eventually negative returns does exist and I've run into it, but it's not common at all for regular users and not even that interesting to me. I want to know how the CPU responds to full util (not being able to do full turbo like you said).

