
3090 Tis should be able to handle it without much in the way of tricks, for a "reasonable" (for the HN crowd) price.


Higher-RAM Apple Silicon should be able to run it too, as long as they don't use some ancient PyTorch version or something.


Why not on a CPU with 32 or 64 GB of RAM?


Much slower memory and limited parallelism. A GPU has ~8k or more CUDA cores vs. ~16 cores on a regular CPU, and there's less memory swapping between operations. The GPU is much, much faster.
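The other half of the question is whether the weights even fit in 24 GB of VRAM or 32/64 GB of system RAM. A rough sketch (the ~12B parameter count is a made-up illustration, not a claim about this model; real usage adds activations and framework overhead on top):

```python
# Back-of-the-envelope weight-memory estimate at common precisions.
# The parameter count below is an assumed example value.

def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory needed for raw weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

n_params = 12e9  # hypothetical ~12B-parameter model
for name, bytes_pp in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name:>9}: {weight_memory_gib(n_params, bytes_pp):5.1f} GiB")
```

At fp16 that's roughly 22 GiB, which is why a 24 GB 3090 Ti is about the floor for running such a model unquantized, while 32 GB of system RAM fits it comfortably on CPU, just slowly.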


Performance, mostly. It'll work, but image generation is shitty to do slowly compared to text inference.



