I cloned my voice and had it generate audio for a paragraph from something I wrote. It definitely kind of sounds like me, but I like it much better than listening to my real voice. Some kind of uncanny peak.
You do realize that you don't normally hear your real voice; you have to record it to hear it the way others do. What you hear when you speak includes your skull resonating, which others do not hear.
Huh, it's true. I thought an organization that needs $50M yearly to function[1] would employ more people. Still, I think it's fair to call them "pretty big" given how much media exposure they get and their operating costs. Perhaps it was a bit misleading on my part to say "company"; I'm not a native English speaker, and every type of firm, company, or foundation translates to "company" in my head. Sorry about that, I'll be clearer next time :)
No worries, I don't think "company" is even technically wrong. But I do think that, given the nonprofit structure (and Moxie Marlinspike's track record), there are fewer incentives for Signal to lie about its privacy guarantees than for a messaging app backed by a commercially driven big company.
Ollama is a Y Combinator startup, so I guess they have to find some ROI at some point.[1]
I personally found Ollama an easy way to try out local LLMs and appreciate it for that (I still use it to download small models on my laptop and phone (via Termux)), but I've long since switched to llama.cpp + llama-swap[2] on my dev desktop. I download whatever GGUFs I want from Hugging Face, and whenever I want to update I just run `git pull` and `cmake --build build --config Release` from my llama.cpp directory.
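For anyone curious what the llama-swap side of that setup looks like: it's driven by a YAML config that maps model names to the llama-server command that serves them, and it swaps models in and out on demand. A minimal sketch (the model name and GGUF path are made up, and the exact keys may differ between versions, so check the llama-swap README):

```yaml
# llama-swap config sketch (hypothetical paths; verify keys against the README)
models:
  "qwen-small":
    # llama-swap substitutes ${PORT} with the port it proxies to
    cmd: llama-server --port ${PORT} -m /models/qwen-small.gguf
```

Requests to llama-swap's OpenAI-compatible endpoint with `"model": "qwen-small"` then get routed to that llama-server instance, starting it first if needed.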
For your target audience it might not matter, but for the HN crowd this is potentially a very confusing naming collision, especially since the X logo already looks so much like the X Window System logo.
Haha, the fact that an LLM tries to offload NLP jobs to Python libraries is funny. But I just tried this prompt (with "Don't use python") and uploaded a text file and it worked well (it immediately started outputting entities without running Python):
"NER task: extract all named entities from this text. Don't use python. JSON format"
I tried Gemini too, and it works without needing to tell it to not use Python.
The API is still better, of course, because you can constrain output to a JSON schema of your choosing... but all models now seem quite good at outputting valid JSON when asked, and can follow one-shot schema examples.
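Even without API-level schema constraints, you can sanity-check the model's plain chat output yourself. A minimal sketch (the reply string and the `text`/`type` entity shape are made up for illustration, matching the kind of one-shot schema example you'd put in the prompt):

```python
import json

# Hypothetical raw model reply; in practice this comes back from the chat API.
reply = '[{"text": "Ada Lovelace", "type": "PERSON"}, {"text": "London", "type": "LOC"}]'

def validate_entities(raw: str) -> list[dict]:
    """Parse the model's JSON and check each entity matches the expected shape."""
    entities = json.loads(raw)  # raises ValueError if the reply isn't valid JSON
    for ent in entities:
        # Every entity must have exactly the keys from our one-shot example
        assert set(ent) == {"text", "type"}, f"unexpected keys: {ent}"
        assert isinstance(ent["text"], str) and isinstance(ent["type"], str)
    return entities

entities = validate_entities(reply)
print(entities)
```

If validation fails, a common trick is to feed the error message back to the model and ask it to retry.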
It starts a webserver to serve its UI, which is what your comment's parent meant. It doesn't provide its own OpenAI-style API, which I guess is what you meant.