Hacker News

Should it instead be a larger frontier model, with this small model as a tool call (the big model tool-calls the smaller LLM) to verify the larger one's output?
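A minimal sketch of that verify-via-tool-call loop, with stand-in callables for both models (nothing here is a real API; the prompts and the "OK"/critique protocol are made up for illustration):

```python
def generate_with_verifier(big_llm, small_llm, prompt, max_tries=3):
    """Hypothetical draft-then-verify loop: the large model answers,
    then tool-calls a smaller LLM as a verifier; on rejection it
    retries with the verifier's critique appended to the prompt."""
    critique = ""
    answer = ""
    for _ in range(max_tries):
        answer = big_llm(prompt + critique)
        verdict = small_llm(f"Verify: {answer}")
        if verdict.startswith("OK"):
            return answer
        # Feed the critique back so the next draft can correct itself.
        critique = f"\n[verifier said: {verdict}]"
    return answer  # give up after max_tries, return last draft
```

The small model only has to judge an answer, which is usually easier than producing one, which is the whole appeal of the asymmetry.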

Why not go nuts with it and put it in the speculative decoding algorithm?
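For reference, a toy greedy version of speculative decoding (real implementations score all draft positions in a single batched forward pass and accept/reject by probability ratio; this sketch uses sequential argmax agreement just to show the control flow):

```python
def speculative_decode(target, draft, prompt, k=4, max_len=20):
    """Toy speculative decoding: the cheap draft model proposes k
    tokens, the expensive target model checks them, and we keep the
    longest agreeing prefix plus one corrected token from the target.
    `target` and `draft` are stand-in callables ctx -> next token."""
    seq = list(prompt)
    while len(seq) < max_len:
        # Draft model guesses k tokens autoregressively (cheap).
        guesses, ctx = [], seq[:]
        for _ in range(k):
            t = draft(ctx)
            guesses.append(t)
            ctx.append(t)
        # Target model verifies the guesses (in practice: one pass).
        accepted = 0
        for i, g in enumerate(guesses):
            t = target(seq + guesses[:i])
            if t == g:
                accepted += 1
            else:
                seq.extend(guesses[:accepted])
                seq.append(t)  # target's correction; guarantees progress
                break
        else:
            seq.extend(guesses)  # all k guesses accepted
    return seq[:max_len]
```

Even when the draft is always wrong, each round still emits one target-corrected token, so the loop degrades to plain decoding rather than stalling.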



If we could somehow weave a reasoning tool directly into the inference process, without spending context on it, that'd be something. Perhaps compile it to weights and pretend that part is pretrained? No idea if it's feasible, but it'd definitely be a breakthrough if AI had access to z3 in its hidden layers.




