I really appreciate the feedback. This is one of the main things I think about, and it relates directly to what I mentioned about puzzle generation and difficulty variability. I do want this to feel like a consistently great puzzle.
For comparison, the median number of tries it took people to solve yesterday's puzzle (a 10-move puzzle) was 7. Figure #35's median was 19 tries! Today's puzzle is definitely one of the easier ones, fwiw.
I've noticed, if only vaguely, some patterns of tile arrangements that create interesting sequences: nontrivial, but solvable with some thought. I would love to develop a more formal system where those patterns are used to build puzzles with more intent than the random boards I'm currently generating. If this sort of thing is in anyone's wheelhouse, I'd love to hear from you.
This is excellent, thanks for sharing. My solver is designed to find every possible solution so that I can evaluate the ratio of "best" solutions to all solutions. A lot of these optimizations are super interesting, but since I'm intentionally going for a comprehensive brute force (rather than finding any winning solution as quickly as possible), they're mostly off the table, at least as long as puzzles are randomly generated and I need to filter them on solution-space characteristics like this.
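Roughly, the exhaustive pass looks something like this (a simplified sketch in Python; `legal_moves`, `apply_move`, `is_solved`, and `start_board` are hypothetical stand-ins for the real game rules):

```python
# Simplified sketch of the exhaustive pass. legal_moves, apply_move,
# is_solved, and start_board are hypothetical stand-ins for the real game.
def all_solutions(board, max_depth, path=(), found=None):
    """Depth-first enumeration of every solution up to max_depth moves."""
    if found is None:
        found = []
    if is_solved(board):
        found.append(path)
    elif len(path) < max_depth:
        for move in legal_moves(board):
            all_solutions(apply_move(board, move), max_depth, path + (move,), found)
    return found

solutions = all_solutions(start_board, max_depth=12)
best = min(len(s) for s in solutions)
ratio = sum(len(s) == best for s in solutions) / len(solutions)  # "best"-to-all
```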
I did realize while reading this that I could get a little more sophisticated: first use an aggressively optimized fast solver to find the number of moves in a best solution. If that number is within a range that's fun to play, say 8-11 moves, then go ahead and find all the rest of the solutions with the full brute force. If not, scrub the whole board, since I wouldn't use that puzzle anyway.
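Sketched out, the two-stage filter would be something like this (reusing `all_solutions` from above; `fast_optimal_length` and the depth slack are hypothetical):

```python
FUN_RANGE = range(8, 12)  # 8-11 moves inclusive: the range that's fun to play
SLACK = 3                 # hypothetical cap on how far past optimal to enumerate

def evaluate_candidate(board):
    """Two-stage filter: cheap optimal-length check, then full enumeration."""
    best = fast_optimal_length(board)  # the aggressively optimized fast solver
    if best not in FUN_RANGE:
        return None                    # scrub: wouldn't use this puzzle anyway
    return all_solutions(board, max_depth=best + SLACK)  # full brute force
```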
I dunno how you'd use genetic algorithms here, since solutions are paths, and there's usually no meaningful way to splice segments of candidate paths together for a crossover step.
This kind of problem is known as planning in old-school AI, and the field is languishing. Most techniques are just variations of A*, with things like symmetry detection and heuristics tailored to game benchmark problems. It's the kind of thing (IMO) that deep RL fails at: unlike Chess or Go, where your solution only needs to beat a competitor's, here your solution needs to be exactly correct (and ideally minimal). The reward signal is very sparse.
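For the unfamiliar, the A* skeleton those variations share looks roughly like this (a generic sketch, not specific to this game; all the domain tailoring lives in `neighbors` and `heuristic`):

```python
import heapq
from itertools import count

def a_star(start, is_goal, neighbors, heuristic):
    """Generic A*. neighbors(state) yields (move, next_state, cost);
    heuristic(state) must be an admissible lower bound on remaining cost."""
    tie = count()  # tie-breaker so the heap never has to compare states
    frontier = [(heuristic(start), 0, next(tie), start, [])]
    best_g = {}
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path  # move list; minimal if the heuristic is consistent
        if best_g.get(state, float("inf")) <= g:
            continue     # already expanded via a route at least as cheap
        best_g[state] = g
        for move, nxt, cost in neighbors(state):
            heapq.heappush(frontier, (g + cost + heuristic(nxt), g + cost,
                                      next(tie), nxt, path + [move]))
    return None  # unsolvable from this state
```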
The “number of tries” metric might be skewed if anyone else solved it like I did (i.e. going back to see if there was some smarter solution).
As for your comment about nontrivial puzzles - absolutely, 100% agree! The joy in a lot of puzzle games like Bejeweled is unexpectedly clearing half the board with a clever move!
Not sure exactly what you mean. It only logs the number of tries for your first solve, but you can keep playing around with it after that without affecting your stats (or the global stats).
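Conceptually it's just a one-shot flag, something like this (a sketch of the idea, not the site's actual code; `record_stats` is a hypothetical stats hook):

```python
# Sketch only, not the site's actual code: the try count is recorded once,
# on the first solve; everything after that is free play.
class PuzzleSession:
    def __init__(self):
        self.tries = 1          # you're on your first try as soon as you start
        self.recorded = False

    def reset(self):
        self.tries += 1

    def on_solve(self):
        if not self.recorded:
            record_stats(self.tries)  # hypothetical global-stats hook
            self.recorded = True
```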
* “Oh, maybe if I did it the other way around I could get that in one” <Reset>
* “ok that worked but I wonder if it’s different if I did this…” <Reset>
* “Ok actually all of those have been exactly the same number of moves, but how about if I just reset and do the first again?” <Reset then complete>
I just mean I'm not sure I would class the above as “4 attempts” for showing difficulty, since the player is just using resets to test scenarios (i.e. the person above found it trivial even though it took four ‘attempts’).
Good luck with the site - the production value is great and it looks amazing! Finding a difficulty balance that satisfies both repeat and new visitors will be tough, but worthwhile!