
Because I wanted to run it in the browser and have a document object in the context of the LLM response being eval’d!

Also, having the exemplars typed has saved me from sending broken few-shots many times!

That is, I keep all of the few-shot exemplars in TypeScript and then compile them into an array of system/user/assistant message strings at some point before making any calls to an LLM.
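For concreteness, a minimal sketch of what that could look like. The Exemplar type and toMessages helper here are hypothetical names for illustration, not the commenter's actual code:

```typescript
// Hypothetical sketch: typed few-shot exemplars compiled into chat messages.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Each exemplar pairs a user prompt with the assistant reply the model
// should imitate. Typing it means a missing or misspelled field fails at
// compile time instead of silently shipping a broken few-shot.
interface Exemplar {
  user: string;
  assistant: string;
}

const exemplars: Exemplar[] = [
  {
    user: "Turn the page background red.",
    assistant: `document.body.style.background = "red";`,
  },
  {
    user: "Add a heading that says Hello.",
    assistant: `const h = document.createElement("h1");
h.textContent = "Hello";
document.body.appendChild(h);`,
  },
];

// Flatten the typed exemplars into the message array sent to the LLM.
function toMessages(system: string, shots: Exemplar[]): ChatMessage[] {
  return [
    { role: "system", content: system },
    ...shots.flatMap((s): ChatMessage[] => [
      { role: "user", content: s.user },
      { role: "assistant", content: s.assistant },
    ]),
  ];
}

const messages = toMessages(
  "Reply only with JavaScript that manipulates the DOM.",
  exemplars,
);
```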



Nice, although did you do that so you can avoid having an API when running some web application you make for yourself? Or am I misunderstanding you, sorry?

Because the other distribution paradigms are...

- sharing your key with the user client-side is risky, so you end up with server-side API requests.
- one day LLMs might be local and could then run off-browser.


The approach I’ve been using is to keep the API requests server-side and to expose a client interface, thus keeping the keys safe, but the response is eval’d client-side, so when OpenAI references document.body in a completion it affects the browser runtime directly.
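Roughly, that split might look like the sketch below. The /api/complete route, its JSON shape, and the model name are assumptions made for illustration; only the structure (key stays server-side, eval happens client-side) comes from the comment itself.

```typescript
// --- server side (e.g. an Express-style handler; the key never leaves here):
//
// app.post("/api/complete", async (req, res) => {
//   const r = await fetch("https://api.openai.com/v1/chat/completions", {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify({ model: "gpt-4o", messages: req.body.messages }),
//   });
//   const data = await r.json();
//   res.json({ code: data.choices[0].message.content });
// });

// --- client side: call the proxy, then eval the returned JavaScript.
// Because this runs in the browser, `document` is already in scope, so a
// completion that references document.body mutates the live page.
async function runCompletion(prompt: string): Promise<void> {
  const res = await fetch("/api/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  const { code } = await res.json();
  // eval of model output is deliberate here, per the comment above:
  // the whole point is letting the completion touch the DOM directly.
  eval(code);
}
```

Keeping eval on the client is what gives the completion direct access to the live DOM; the server only forwards messages and never needs to know what the returned code does.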


Yeah, it's a smart idea, I see now: you can use it as a sort of universal database for all clients, like having a Python dict for all the outputs, but you can also easily spin up the UIs enabling your cool examples.



