
really need a simple "put your source stuff in this directory, then press this button, then chat with your contents" type app/module/library.

too much implementation detail is required, which makes it inaccessible for any non-significant use case. i imagine privateGPT will get there slowly



I wrote a simple implementation to do this in ChatGPT via local plugin [0]. Obviously it doesn’t hit the “fully private” requirement but I imagine it would be relatively straightforward to integrate into a local LLM. The question is whether a local LLM would be as good at grabbing enough context and nuance from the project to answer meaningfully as GPT-4 is able to do with plugins.

[0] https://github.com/samrawal/chatgpt-localfiles
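
For a sense of what the fully-local version of this might look like, here is a minimal sketch (not the plugin's code): it assumes sentence-transformers and llama-cpp-python are installed, and the docs directory, GGUF model path, and embedding model name are placeholders.

  # Local "chat with a directory" sketch: embed file chunks, retrieve the closest
  # ones for a question, and feed them as context to a local LLM.
  from pathlib import Path

  import numpy as np
  from sentence_transformers import SentenceTransformer
  from llama_cpp import Llama

  DOCS_DIR = Path("./docs")            # hypothetical source directory
  MODEL_PATH = "./models/llama.gguf"   # hypothetical local model file

  # 1. Load and chunk the files in the directory.
  chunks = []
  for path in DOCS_DIR.glob("**/*.txt"):
      text = path.read_text(errors="ignore")
      chunks += [text[i:i + 1000] for i in range(0, len(text), 1000)]

  # 2. Embed the chunks once, up front.
  embedder = SentenceTransformer("all-MiniLM-L6-v2")
  chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

  # 3. Retrieve the most similar chunks for a question and ask the local LLM.
  llm = Llama(model_path=MODEL_PATH, n_ctx=4096)

  def ask(question: str, k: int = 3) -> str:
      q_vec = embedder.encode([question], normalize_embeddings=True)[0]
      top = np.argsort(chunk_vecs @ q_vec)[-k:][::-1]   # cosine similarity (vectors are normalized)
      context = "\n---\n".join(chunks[i] for i in top)
      prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
      out = llm(prompt, max_tokens=256)
      return out["choices"][0]["text"]

  print(ask("What does this project do?"))

The retrieval part is the easy bit; whether a local model gets "enough context and nuance" out of it, as the parent asks, is mostly down to the model itself.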


On one of my streams I essentially build this from scratch: https://www.youtube.com/watch?v=kBB1A2ot-Bw&t=236s. It's a retriever-reader model. Let me know if you want the code; I think I link the Colab in the comments, but let me know if you need more.
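
Roughly, a retriever-reader setup looks like the toy sketch below (not the code from the stream or the Colab): it assumes scikit-learn for a TF-IDF retriever and Hugging Face transformers for an extractive reader, with made-up passages.

  # Retriever-reader sketch: rank passages by TF-IDF similarity to the question,
  # then let an extractive QA model pull the answer span out of the best passage.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity
  from transformers import pipeline

  passages = [
      "PrivateGPT lets you ask questions about your documents without an internet connection.",
      "A retriever-reader system first finds relevant passages, then extracts an answer span.",
      "GGUF is a file format used to distribute quantized local language models.",
  ]

  # Retriever: TF-IDF vectors for all passages.
  vectorizer = TfidfVectorizer().fit(passages)
  passage_vecs = vectorizer.transform(passages)

  # Reader: extractive question-answering model.
  reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

  def answer(question: str) -> str:
      q_vec = vectorizer.transform([question])
      best = cosine_similarity(q_vec, passage_vecs).argmax()
      result = reader(question=question, context=passages[best])
      return result["answer"]

  print(answer("What does a retriever-reader system do?"))

The retriever only narrows down where to look; the reader is what actually produces the answer, which is also why the implementation details matter so much in practice.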


At this stage of AI, the implementation details matter a lot for the chat to actually be meaningful… RAG is over-hyped.



