Hacker News | nmca's comments

In a maximally earnest way — to what degree should we be sure these language harms are real on net? Are there data as opposed to anecdata? (Of course many phenomena are real without data; we can just be more confident in cases with data)


This is also a nice way to combine the ratings of a number of noisy annotators with variable annotation noise.
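One standard way to do that pooling (a minimal sketch of inverse-variance weighting, not necessarily what the linked work does; the data and variances below are made up) is to downweight the noisier annotators:

```python
# Sketch: pool ratings from annotators with different noise levels
# via inverse-variance weighting (all numbers hypothetical).
def pool_ratings(ratings, variances):
    # Each annotator's weight is 1 / (their noise variance),
    # so noisier annotators contribute less to the estimate.
    weights = [1.0 / v for v in variances]
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Three annotators rate the same item; the third is much noisier,
# so their outlier rating of 1.0 barely moves the estimate.
estimate = pool_ratings([4.0, 5.0, 1.0], [1.0, 1.0, 10.0])
```

If the variances aren't known, they can be estimated from each annotator's disagreement with the pooled consensus and the two steps iterated.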


This is not work by any of the high profile new hires, in case folks are confused.


Lilian’s blog is extremely good in general, and if it’s new to you I suggest checking out the other posts too. I particularly enjoyed the one on human data.


right, but you definitely shouldn’t be using any formatter other than ruff, and this helps with that
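Concretely, ruff can be made the single tool for both linting and formatting from one config (a hypothetical pyproject.toml fragment; ruff reads its settings from the `[tool.ruff]` tables, and `ruff format` is its built-in black-compatible formatter):

```toml
# Hypothetical pyproject.toml fragment: one tool, one config.
# Run `ruff check .` to lint and `ruff format .` to format.
[tool.ruff]
line-length = 88

[tool.ruff.format]
quote-style = "double"
```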


Isn’t the answer to the question just classic economies of scale?

You can’t run GPT4 for yourself because the fixed costs are high. But the variable costs are low, so OAI can serve a shit ton.

Or equivalently: the smallest available unit of “serving GPT4” is more GPT4 than one person needs.

I think all the inference optimisation answers are plain wrong for the actual question asked?



You can dream of better yet! If the spec were required to be open source for the government project, then you could have commercial choices and a less feature-rich open source version.


I feel the AI safety community has not made enough of Lehrer’s masterpiece on the topic:

https://youtu.be/frAEmhqdLFs?si=DYsY5Juco-kJ5eWD


I assure you we sing it sometimes!


We sang it at solstice


Indeed, it’s like saying a jet plane can fly!


It takes Tim Gowers more than an hour and a half to go through q4! (Sure, he could go faster without video. But Tim Gowers! An hour and a half!!)

