CuriouslyC on Aug 1, 2024 | on: Flux: Open-source text-to-image model with 12B par...
3090 Tis should be able to handle it without much in the way of tricks, for a "reasonable" (for the HN crowd) price.
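Something like this should do it (untested sketch; assumes diffusers >= 0.30, which added FluxPipeline, and the FLUX.1-schnell checkpoint; bf16 weights plus CPU offload should fit in a 3090 Ti's 24 GB):

    # Sketch: Flux on a single 24 GB consumer GPU via diffusers.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # stream layers to the GPU as needed

    image = pipe(
        "a photo of a forest at dawn",
        num_inference_steps=4,  # schnell is distilled for few steps
        guidance_scale=0.0,
    ).images[0]
    image.save("forest.png")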
fl0id on Aug 1, 2024
Higher-RAM Apple Silicon should be able to run it too, if they don't use some ancient PyTorch version or something.
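Rough sketch of what that would look like (untested; assumes a PyTorch build recent enough to support bfloat16 on MPS, and enough unified memory, since the 12B weights alone are ~24 GB in bf16):

    # Sketch: Flux on Apple Silicon via the MPS backend.
    import torch
    from diffusers import FluxPipeline

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to(device)

    image = pipe(
        "a watercolor of a lighthouse",
        num_inference_steps=4,
        guidance_scale=0.0,
    ).images[0]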
phkahler on Aug 1, 2024
Why not on a CPU with 32 or 64 GB of RAM?
holoduke on Aug 1, 2024
Much slower memory and limited parallelism. A GPU has ~8k or more CUDA cores vs. ~16 cores on a regular CPU, and there's less memory swapping between operations. The GPU is much, much faster.
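A toy benchmark makes the gap concrete (sketch only; times one transformer-sized matmul on CPU vs. CUDA, and the exact numbers will vary by hardware):

    # Sketch: time a 4096x4096 matmul on CPU and GPU.
    import time
    import torch

    x = torch.randn(4096, 4096)
    w = torch.randn(4096, 4096)

    t0 = time.perf_counter()
    for _ in range(10):
        x @ w
    cpu_s = time.perf_counter() - t0

    if torch.cuda.is_available():
        xg, wg = x.cuda(), w.cuda()
        torch.cuda.synchronize()  # finish the copies before timing
        t0 = time.perf_counter()
        for _ in range(10):
            xg @ wg
        torch.cuda.synchronize()  # wait for queued kernels to complete
        gpu_s = time.perf_counter() - t0
        print(f"CPU {cpu_s:.2f}s, GPU {gpu_s:.2f}s, ~{cpu_s / gpu_s:.0f}x faster")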
CuriouslyC on Aug 1, 2024
Performance, mostly. It'll work, but image generation is shitty to do slowly compared to text inference.