I'm not certain, but I'd bet Watson was explicitly not allowed to use something like J!Archive as training data. For one, the questions used in the Jeopardy games it played were drawn randomly from previous questions. More importantly, though, learning a stilted, domain-specific language model to play Jeopardy isn't anywhere near as challenging, impressive, or worth pursuing as building something that includes Jeopardy as a subset of its capacity.

Now, Watson was tuned on Jeopardy questions. I'm sure the learning processes were adjusted in light of mistakes made on the Jeopardy corpus, but that kind of interpolation is a far smaller deal than a full language model.



"the questions used in the Jeopardy games it played were drawn randomly from previous questions"

I've not heard that, and if true, it would have given Jennings and Rutter, both excellent crammers, a knowledge advantage.

Further, human contestants absolutely review the J!Archive before competing, so why wouldn't Watson?

We don't yet know for sure that Jeopardy is only one subset of all the impressive things Watson can do. Notably, in the 'Ask Reddit' answers, the Watson team says: "At this point, all Watson can do is play Jeopardy and provide responses in the Jeopardy format."

So it seems like they're trying to claim the accolades for solving a bigger problem, when in fact they've only done well on a very constrained problem.


Can't find the quote at the moment, but IIRC the questions (or technically, answers) were drawn from previously prepared questions, but not previously used ones. The point being that, aside from eliminating audio/video-based questions, these had been designed with humans in mind and there was no tailoring of the content to be "Watson friendly/unfriendly".

That may help explain the confusion.



