
@kaiwren thanks for the shoutout, looking forward to your feedback


@nandx thanks for the shoutout, more features coming soon. Feel free to raise issues or feature requests using GitHub issues.

Thanks


@kartik7153 - looking forward to your feedback


@irfn - that's an interesting idea. I will definitely try to put together a benchmark on my local M2 machine with llama3-7b, just for comparison.

Yes, Ollama and Bodhi App both use llama.cpp, but our approaches are different. Ollama embeds the llama.cpp server binary within its own binary, copies it to a tmp folder, and runs it as a webserver. Any request that comes to Ollama is then forwarded to this server, and the reply is sent back to the client.

Bodhi embeds the llama.cpp server directly, so there is no tmp binary to copy. When a request comes to Bodhi App, it invokes the llama.cpp code in-process and sends the response back to the client, so there is no request hopping.

Hopefully that approach provides us with some benefits.

Also, Bodhi uses Rust as its programming language. IMHO Rust has an excellent interface with C/C++ libraries, so the C code is invoked over the C FFI bridge. And given Rust's memory safety, fearless concurrency, and zero-cost abstractions, that should provide some performance benefit to Bodhi's approach.
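To make the C FFI bridge idea concrete, here is a minimal sketch of the pattern (this is not Bodhi's actual code; strlen from libc stands in for a llama.cpp entry point):

    // Minimal illustration of Rust calling a C function in-process over FFI.
    // strlen from the C standard library stands in for a llama.cpp entry point;
    // a real integration would declare the library's request-handling functions instead.
    use std::ffi::CString;
    use std::os::raw::c_char;

    extern "C" {
        // Declared here, provided by libc, which Rust links by default.
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let prompt = CString::new("hello from rust").expect("no interior NUL bytes");
        // The call crosses the FFI boundary directly: no subprocess, no HTTP hop.
        let len = unsafe { strlen(prompt.as_ptr()) };
        println!("C strlen saw {} bytes", len);
    }

Scaled up, that same direct in-process call is what avoids proxying each request to a separate server process.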

Will get back to you once I have results for these benchmarks. Thanks for the idea.

Hope you try Bodhi, and have some equally valuable feedback on the app.

Cheers.


thanks @abhinavrai, looking forward to your feedback, hope you get to use it when you are developing GenAI apps locally as well.


thanks @akashkahlon, looking forward to your feedback


So the splash image is generated by Midjourney.

@levmiseri are you using APIs, or have you fine-tuned prompts for each of the locations? Do share these as well; the GitHub repo doesn't include this.


The AI generation code with all prompts is available on GitHub as well: https://github.com/riesvile/meoweler-content


The website is beautiful. Thank you for sharing the source code. I took a look at the source code and could not find where you batch-processed the Midjourney API calls. I am interested to learn whether you generated the Midjourney images by hand or by script.


Midjourney sadly doesn't have a public API. The images were prompted manually via Discord. Thanks to permutations [0], I could do 20 images at a time, but it's still a painful process. (The generation of the permutation strings is in the shared code.)

[0] https://docs.midjourney.com/docs/permutations
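To make the permutation trick concrete, here is a hypothetical sketch of how such a prompt string can be assembled (the real script lives in the shared code; the city names and wording here are made up):

    // Hypothetical sketch: build a Midjourney permutation prompt that fans out
    // into one image request per city. Names and phrasing are illustrative only.
    fn main() {
        let cities = ["Tokyo", "Lisbon", "Oaxaca", "Reykjavik"];
        // Midjourney expands {a, b, c} into one prompt per option, so a single
        // pasted string in Discord yields a whole batch of generations.
        let prompt = format!(
            "a mindful cat exploring {{{}}}, watercolor illustration",
            cities.join(", ")
        );
        println!("{}", prompt);
        // => a mindful cat exploring {Tokyo, Lisbon, Oaxaca, Reykjavik}, watercolor illustration
    }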


I see. Thanks a lot for the detailed info. Many of the cats you generated look quite mindful.


agree

You cannot run a Gumroad-like financial transaction business without a licence and an agreement with Stripe. It is bound to be shut down on short notice, and all the earnings will be clawed back.

So don't use this app. Period.


Running Gumroad and running “as a Gumroad customer would” are two different things


Just glad that the 2019 IPO didn't materialize, otherwise innocent retail shareholders would have paid for all the nuisance


I'm also glad the 2019 IPO didn't materialize, but if it had, anyone who bought in would have been deserving of no pity and I wouldn't call them "innocent retail investors". As badly managed as WeWork was, this was not a case of fraud. Everything was right out in the open, for anyone with eyes and a brain to see. Heck, the whole reason the 2019 IPO didn't go through is that WeWork's original S-1 was such a shit show of epic proportions, with nothing but red ink as far as the eye could see and completely made-up vanity metrics, that Wall Street's collective reaction was "Are you fucking kidding me?"


Matt Levine's contemporaneous take on the WeWork IPO and associated shenanigans by Neumann: https://archive.is/qkJXl


Matt Levine is a national treasure.


++

good problem to solve, nice personal story, but disappointed it is translating into clickbait+data scraping


How so?

