I'm not sure if you're being facetious and poking fun at my saying "stream a timestamp" instead of "stream to a particular time in the video," but if so, I guess https://news.ycombinator.com/item?id=46364765 suggests a way.
I just expected you would stream from point X in the stream (the starting timestamp) to point Y (the end). Obviously the tool would have to figure out how the streamed file maps to time, which I don't know how to do, which is why I said I would like a tool that did it rather than announcing I had made one myself.
Of course, some tools like yt-dlp already have this capability via the --download-sections option, but I want something for torrents.
I vibe coded a little tool [0] that can stream range requests from torrents.
It's a little buggy and super rough around the edges, but it's definitely possible: your torrent client can prioritize piece requests, and HTTP supports range requests, i.e. requesting just part of a document. I lightly tested it with VLC, seeking playback to the middle of a video.
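For the torrent side, here's a rough sketch of what piece prioritization can look like with the libtorrent Python bindings; the byte offset, deadlines, and piece window are illustrative assumptions, not the actual code of the tool:

    import libtorrent as lt

    ses = lt.session()
    info = lt.torrent_info("video.torrent")                # hypothetical .torrent file
    h = ses.add_torrent({"ti": info, "save_path": "."})

    piece_len = info.piece_length()
    byte_offset = 1_500_000_000                            # e.g. where the player seeked to
    first_piece = byte_offset // piece_len

    # Bump priority/deadline for the pieces the player needs next,
    # so the client fetches them before the rest of the file.
    for i in range(first_piece, min(first_piece + 20, info.num_pieces())):
        h.piece_priority(i, 7)                             # 7 = highest priority
        h.set_piece_deadline(i, 1000 * (i - first_piece))  # stagger deadlines in ms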
You negotiate the headers to find the file length, then issue HTTP GET requests with the byte offset corresponding to the timestamp. Sometimes there's an API that cuts the clip with ffmpeg and returns the buffer; sometimes you just need to fetch the raw bytes between offset+0 and offset+n.
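In the simple raw-bytes case, that negotiation is just a HEAD for Content-Length plus a GET with a Range header. A minimal sketch (the URL and offsets are made up, and mapping a timestamp to a byte offset is left out since it depends on the container format):

    import requests

    url = "http://localhost:8080/stream/video.mkv"    # hypothetical endpoint exposed by the tool

    # Negotiate: learn the total size and check that the server accepts ranges.
    head = requests.head(url)
    total = int(head.headers["Content-Length"])
    assert head.headers.get("Accept-Ranges") == "bytes"

    # Fetch n bytes starting at offset (the Range end is inclusive).
    offset, n = total // 2, 4 * 1024 * 1024
    resp = requests.get(url, headers={"Range": f"bytes={offset}-{offset + n - 1}"})
    assert resp.status_code == 206                     # 206 Partial Content
    chunk = resp.content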
I can confirm this from my experience. Many organizations have forgone MS in favor of Red Hat and Oracle, and Red Hat is slowly injecting OpenShift wherever they can fit it.
You can see why people like the Docker experience: you can manage all of that in a single interface, instead of one-off scripts touching a ton of little things.
What's wrong with using an LLM to learn about politics and religion?
I've found Claude to be an excellent tool to facilitate introspective psychoanalysis. Unlike most human therapists I've worked with, Claude will call me on my shit and won't be talked into agreeing with my neurotic fantasies (if prompted correctly).
Because unlike a human who can identify that some lines of reasoning are flawed or unhealthy, an LLM will very happily be a self-contained echo chamber that will write whatever you want with some nudging.
It can drive people further and further into their own personal delusions or mental health problems.
You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it. That's not how therapy works.
> You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it.
That's been my experience with human therapists. When I tell Claude to stop being sycophantic, it complies. When I tell a human to stop being sycophantic, they get defensive.
I agree that an ideal human therapist would be better than Claude, but most that I've worked with are far from ideal. Most are not very bright, easily manipulated, and quick to defensiveness when questioned. And Claude won't try to get me to take random meds with the only justification for the specific medication being 'got to start somewhere'.
He wants to be called a good boy, so the LLM calls him a good boy. Since the LLM is a machine that does what you want, he's essentially doing it to himself. It might not be a conscious choice, but there's still intention behind it. Kein Herr im eigenen Haus. (No master in one's own house.) - Sigmund Freud. He was wrong about a lot of stuff but this is one thing that still stands.
I'm sure you can be unconsciously intent on things, but gaslighting is a unique concept. Here's the definition I am relying on: manipulate (someone) using psychological methods into questioning their own sanity or powers of reasoning.
In your provided example, the user is obviously not trying to manipulate someone into questioning their sanity, nor their powers of reasoning. Quite the opposite. Lying to themselves (your example), for sure.
That might actually be interesting if there were enough content, something like the "beta-level" AIs in Alastair Reynolds' Revelation Space books.
But that isn't what I've seen done when people said they did that. Instead they just told ChatGPT a bit about the person and asked it to playact. The result was nothing like the person-- just the same pathetic ChatGPT persona, but in their confusion, grief, and vulnerability they thought it was a recreation of the deceased person.
A particularly shocking and public example is Jim Acosta's interview with a simulacrum of a Parkland shooting victim.
Mental health issues in the population are never going away, people using software tools to prey on those with issues is never going away. Arguably the entire consumer software industry preys on addiction and poor impulse control already.
yeah, I know, I'm addicted to this hellhole of a site. I hate it but I still open it every five minutes.
But that's not what I'm talking about, I'm talking specifically about people who've made a SoTA model their buddy, like the people who were sad when 4o disappeared. Users of character.ai. That sort of thing. It's going to get very, very expensive and provides very little value. People are struggling with rent. These services won't be able to survive, I hope, purely through causing psychosis in vulnerable people.
Reliability and consistency? Humans have bad days. Humans have needs and can't be available 24/7/365 for years on end. OF creators burn out or grow up or lose interest or cash out and retire.
It's not like the "real women" of OnlyFans are consistently real, or women. And there's some percentage that are already AI-by-proxy. There's definitely opportunity for someone to just skip the middleman.
When I was at a game studio working on a big MMORPG, I had the valuable experience of sitting next to the monetization team. It was a third-rate MMO with gacha mechanics, and our whales spent 20-30k every month... for years.
It doesn't require a particularly powerful AI because the human's own hope is doing the heavy lifting. 70B models run just fine on hardware you can have sitting under your desk.
People have spent much more on pig-butchering scam boyfriends who don't even exist. I bet you could get some people to pay quite a lot to keep what they see as their significant other alive.
I'm guessing this feature is intended to prevent two scenarios: A) Grandma accidentally undressing during a FaceTime call (for instance, she forgot to hang up and the call kept recording); and B) Grandma getting on a call with a stranger who wanted to shock her by exposing themselves.
At least, that's what the child protection feature† in current versions of iOS is supposed to do.
† Which this feature clearly evolved from, and the article suggests that it might be a bug that has enabled this for adults as well.