Hacker News

btw I got a 4-bit quantized Vicuna working on my 16GB laptop, and the results seem very good; perhaps the best I've gotten running locally so far.


Did you have to apply the LLaMA weight diff yourself? Did you use EasyLM?


No, I found it ready-made for download here: https://huggingface.co/eachadea/ggml-vicuna-13b-4bit
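For anyone else wanting to try: ggml-format weights like these are typically run with llama.cpp. A rough sketch of the steps (the exact model filename inside the HF repo is an assumption; check the repo's file list):

```shell
# Build llama.cpp (the standard runner for ggml model files)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download the 4-bit Vicuna weights from the HF repo linked above
# (filename assumed; verify it in the repo's "Files" tab)
# then run an interactive prompt:
./main -m ./models/ggml-vicuna-13b-4bit.bin \
  -p "Hello, how are you?" \
  -n 128
```

A 13B model at 4-bit quantization needs roughly 8-10GB of RAM, which is why it fits on a 16GB laptop.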



