
It's natural to question how someone can unproblematically impose a metric on the value of game rulesets (particularly without a generous helping of philosophy and psychology to back it up). Pragmatically, though, it's not a silly thing to try. If there were game-design IDEs in the future, we'd like them to provide the equivalent of spell/grammar check -- if an alternate design within an edit distance of two of the current design scored dramatically better on some canned metric, it might be worth putting a human eye on that alternative.
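To make that concrete, here's a minimal sketch of the loop in Python. Everything in it is invented for illustration: the toy ruleset parameters, the OPTIONS table, and the canned playtest_score metric are all hypothetical stand-ins.

    from itertools import combinations, product

    # Toy ruleset: a handful of discrete design parameters.
    current = {"hand_size": 5, "max_turns": 20, "draw_per_turn": 1}
    OPTIONS = {"hand_size": [3, 5, 7, 10],
               "max_turns": [10, 15, 20, 30],
               "draw_per_turn": [1, 2]}

    def playtest_score(rs):
        # Stand-in for a canned metric (e.g. aggregated automated playtests).
        # Purely illustrative: prefer mid-length games and manageable hands.
        return -abs(rs["max_turns"] - 15) / 15.0 - abs(rs["hand_size"] - 5) / 5.0

    def neighbors(rs, max_edits=2):
        # Yield every variant that changes at most max_edits parameters.
        keys = list(rs)
        for n in range(1, max_edits + 1):
            for changed in combinations(keys, n):
                pools = [[v for v in OPTIONS[k] if v != rs[k]] for k in changed]
                for values in product(*pools):
                    variant = dict(rs)
                    variant.update(zip(changed, values))
                    yield variant

    def flag_alternatives(rs, margin=0.2):
        # Surface variants that score dramatically better, for human review.
        base = playtest_score(rs)
        return [v for v in neighbors(rs) if playtest_score(v) > base + margin]

The point isn't that the score is right; it's that anything it flags gets a human eye, the same way a grammar checker's squiggle does.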

If you think of the error function as the combined output of a bunch of "obvious flaw" detectors rather than as a "theory of fun", then speculative optimization seeded from a human's in-progress design becomes a potentially interesting bit of design automation. Think of design rule checks in CAD, but with a bit of fuzziness and a bias toward suggestion rather than rejection -- instead of just saying yes/no, it can say "X might be a better alternative according to the metrics you've enabled in the preference window; consider adopting some of its edits".
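In code, that error function looks less like a utility over fun and more like a linting pass. A minimal sketch, with the same caveat as above: the detector names, predicates, and penalty weights are all made up.

    # Each detector is a cheap predicate over the ruleset; the user enables a
    # subset (the "preference window"), and the error is the sum of penalties.
    DETECTORS = {
        "degenerate_length": lambda rs: 1.0 if rs["max_turns"] < 5 else 0.0,
        "dead_turns":        lambda rs: 1.0 if rs["draw_per_turn"] == 0 else 0.0,
        "unwieldy_hand":     lambda rs: 0.5 if rs["hand_size"] > 10 else 0.0,
    }

    def error(rs, enabled=frozenset(DETECTORS)):
        # Combined output of the enabled flaw detectors; lower is better.
        return sum(DETECTORS[name](rs) for name in enabled)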


