After I wrote a solver for the game Black Hole, I figured I might as well add support for Worm Hole, as well as my variation Unstable Worm Hole (in which the free cell can be used only one time). It would need to be fast enough to solve at least a million deals (to match the sample size of the results Solvitaire previously reported) within a reasonable amount of time (preferably less than a year). I also always strive to have as few inconclusive results as possible, ideally none. Given how much memory my Black Hole solver needed, I was concerned that some Worm Hole deals might require more than I have available.
I am running Ubuntu Linux on a 16-core/32-thread AMD Threadripper CPU with 32 GB of RAM. I am writing code in C#, producing executables that can run on both Windows and Linux.
I started with my existing solver for Black Hole and added the capability to play cards to a free cell. Normally, I like to get a solver's basic functionality working first, then add optimizations as necessary to boost performance and/or reduce memory consumption. But in this case, I implemented some of the memory optimizations during initial development, anticipating that they would surely be needed.
There are often trade-offs between CPU speed and memory consumption. Given my computer's configuration (many CPU threads, but relatively starved for RAM) and my desire to avoid inconclusive results due to excessive memory consumption, I generally favor memory over speed.
For Worm Hole, the solver still uses the same breadth-first search that I implemented for Black Hole. It might seem odd that I didn't use a depth-first search, especially for a game with a high win rate. But I stuck with my general approach of getting basic functionality working first, especially since some of the optimizations I had in mind are easier to implement with breadth-first search. My plan was to experiment later with depth-first search if the solver was too slow (but this turned out to be unnecessary).
This almost doesn't need mentioning. As in Black Hole, and almost all of my other solvers, I am using a HashSet to store positions so that duplicates are eliminated. This is very important, because in Worm Hole many different paths often lead to the same positions, and solving would take much longer if the solver had to keep revisiting them repeatedly.
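As a rough illustration (the structure and names here are mine, and the game-specific pieces are stubbed out), the search with duplicate elimination boils down to this:

    using System;
    using System.Collections.Generic;

    static class BfsSketch
    {
        // Game-specific details, omitted in this sketch:
        static bool IsWon(ulong pos) => throw new NotImplementedException();
        static IEnumerable<ulong> GenerateMoves(ulong pos) => throw new NotImplementedException();

        // Breadth-first search; the HashSet eliminates duplicate positions.
        public static bool Solve(ulong start)
        {
            var visited = new HashSet<ulong> { start };
            var queue = new Queue<ulong>();
            queue.Enqueue(start);
            while (queue.Count > 0)
            {
                ulong pos = queue.Dequeue();
                if (IsWon(pos)) return true;
                foreach (ulong child in GenerateMoves(pos))
                    if (visited.Add(child))   // Add returns false for duplicates
                        queue.Enqueue(child);
            }
            return false;
        }
    }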
The Black Hole solver was already storing positions as 64-bit integers, but this needed a slight modification to also store the rank of the card in the free cell (or a sentinel value to indicate that the free cell is empty). Of course, there is a trade-off: the solver has to keep decoding every saved position and encoding all the new positions, but in my opinion it is well worth it for the memory savings.
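Here is a minimal sketch of the encoding idea, assuming one possible bit layout (the actual layout in my solver may differ): the low bits hold the Black Hole state, and the top four bits hold the free cell.

    static class PositionCodec
    {
        // Assumed layout: bits 60-63 store the free cell, where 0 is the
        // sentinel for "empty" and 1..13 is the rank of the card in the cell.
        const int FreeCellShift = 60;
        const ulong FreeCellMask = 0xFUL << FreeCellShift;

        public static ulong WithFreeCell(ulong position, int rank) =>
            (position & ~FreeCellMask) | ((ulong)rank << FreeCellShift);

        public static int FreeCellRank(ulong position) =>
            (int)(position >> FreeCellShift);
    }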
This is another optimization in common with the Black Hole solver, but it needed a modification to account for the free cell card. The idea is that when you discard the final card of a given rank to the hole, some cards may become permanently unreachable (in which case the game is definitely in a losing state, and there is no need to keep exploring further moves from that position).
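A simplified sketch of one form of this test (my reconstruction; a full implementation could be smarter): a rank is dead once no cards of either neighboring rank remain and the hole's top card isn't adjacent to it either.

    // remaining[r] counts the cards of rank r (1..13, Ace = 1) not yet in
    // the hole, including a card parked in the free cell; holeTop is the
    // rank currently on top of the hole.
    static bool HasPermanentlyUnreachableCards(int[] remaining, int holeTop)
    {
        for (int r = 1; r <= 13; r++)
        {
            if (remaining[r] == 0) continue;
            int below = r == 1 ? 13 : r - 1;   // ranks wrap (King-Ace)
            int above = r == 13 ? 1 : r + 1;
            // A rank-r card can only ever be played onto a neighboring rank,
            // so with no neighbors left anywhere (and none on the hole right
            // now), any remaining rank-r card is stranded forever.
            if (remaining[below] == 0 && remaining[above] == 0 &&
                holeTop != below && holeTop != above)
                return true;
        }
        return false;
    }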
There is never a need to play a card to the free cell unless the very next move is to play the newly uncovered card to the hole. My solver implements both of these as parts of a single "move". Besides reducing the number of positions to process, this optimization has a nice side effect: each and every "move" results in exactly one card going to the hole.
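In move-generation terms, that fused move looks something like the sketch below (the state representation here is simplified and mine, not the solver's):

    using System;
    using System.Collections.Generic;

    static class FusedMoves
    {
        static bool IsAdjacent(int a, int b) =>
            Math.Abs(a - b) == 1 || Math.Abs(a - b) == 12;   // Ace-King wrap

        // Each pile is a list of ranks with the top card last. A free-cell
        // play is only generated together with its follow-up discard, so
        // every move sends exactly one card to the hole.
        public static IEnumerable<(int pile, int toCell, int toHole)> Generate(
            List<List<int>> piles, int holeTop, bool freeCellEmpty)
        {
            if (!freeCellEmpty) yield break;
            for (int i = 0; i < piles.Count; i++)
            {
                var p = piles[i];
                if (p.Count < 2) continue;
                int covering = p[p.Count - 1];   // goes to the free cell
                int covered = p[p.Count - 2];    // goes straight to the hole
                if (IsAdjacent(covered, holeTop))
                    yield return (i, covering, covered);
            }
        }
    }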
In Worm Hole, the game simplifies with every move (at least in the sense that there is one less card left after each discard). Therefore, when using breadth-first search, it is not necessary for the solver to keep track of positions it has already visited (it is guaranteed that they will never come up again). This resulted in huge memory savings, but at a very high cost: the solver lost the ability to display solutions! Although I don't really need solution display for the purposes of reporting win rates on this web site, it's still a nice feature to have, and it comes in useful for testing and debugging.
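Sketched with the same stubs as the earlier BFS example, the optimized search keeps only the current level of positions:

    // Every move discards a card, so all positions in one BFS level have
    // the same number of cards left, and a finished level can never be
    // revisited. The HashSet therefore only needs to deduplicate within
    // the level currently being built; earlier levels can be dropped.
    public static bool SolveLevelByLevel(ulong start)
    {
        var frontier = new HashSet<ulong> { start };
        while (frontier.Count > 0)
        {
            var next = new HashSet<ulong>();
            foreach (ulong pos in frontier)
            {
                if (IsWon(pos)) return true;
                foreach (ulong child in GenerateMoves(pos))
                    next.Add(child);
            }
            frontier = next;   // the previous level becomes garbage
        }
        return false;
    }

Notice that nothing here remembers how a position was reached, which is exactly why solution display is lost.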
My imperfect plan to fix solution display has two parts. First, when solution display is optionally requested, the solver will try keeping the old positions in memory (disabling this optimization), so that the solution can be reconstructed later just as it would have been without the optimization. However, if the solving attempt fails due to insufficient memory, the solver can resort to a back-up technique I have used before: it notes the position just before the win, solves the game again treating that position as the winning state, notes the position before that one, solves again to reach it, and so on. In this way, the solution is generated backwards by repeatedly solving the same deal, going one move less each time. Of course, this will cause a noticeably long delay for some deals.
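Sketched as a fragment of the same hypothetical solver class (SolveTo is a stand-in for a full re-solve toward a target position):

    // Full re-solve that treats "target" as the winning state and returns
    // the position one move before reaching it; omitted in this sketch.
    static ulong SolveTo(ulong start, ulong target) =>
        throw new NotImplementedException();

    // Rebuild the solution backwards, one full solve per move.
    static List<ulong> ReconstructSolution(ulong start, ulong winningPosition)
    {
        var path = new List<ulong> { winningPosition };
        ulong target = winningPosition;
        while (target != start)
        {
            target = SolveTo(start, target);   // position one move earlier
            path.Add(target);
        }
        path.Reverse();                        // now runs start -> win
        return path;
    }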
Every Black Hole solution is also a valid solution for Worm Hole (one in which no moves are made to the free cell). So solving Worm Hole deals without free cell moves can find wins 87% of the time (which is how often Black Hole is won). If this results in a loss, the solver tries again with the free cell included, but allowing it to be used only once (as in the game Unstable Worm Hole, which is won 98.76% of the time). For the remaining 1.24%, the solver tries again, this time allowing the free cell to be used multiple times, but only under certain memory conditions (this finds wins for almost half of the remaining deals). For the remaining losses (about 0.7% of the original deals), the solver tries yet again, finally allowing unlimited moves to the free cell. All of this increases the solving time for a few deals, but is a significant speedup overall. Even more important, this optimization significantly decreases the average memory consumption (especially because it seems to eliminate the need to fully solve the otherwise heaviest memory-consuming deals). Bonus: for each Worm Hole deal solved, this optimization has the nice side-effect of providing, for free, results for the same deal of Black Hole and Unstable Worm Hole as well!
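A sketch of that escalation (the policy names and the memory test are my own shorthand, not the solver's actual code):

    enum FreeCellPolicy { Never, Once, LimitedUses, Unlimited }

    // Stand-ins for the real solver entry point and memory heuristic:
    static bool Solve(ulong deal, FreeCellPolicy policy) =>
        throw new NotImplementedException();
    static bool MemoryLooksSafe(ulong deal) =>
        throw new NotImplementedException();

    // Try the cheapest variants first; a win at an early stage doubles as
    // a Black Hole or Unstable Worm Hole result for the same deal.
    static bool SolveWormHole(ulong deal)
    {
        if (Solve(deal, FreeCellPolicy.Never)) return true;   // Black Hole win
        if (Solve(deal, FreeCellPolicy.Once)) return true;    // Unstable Worm Hole win
        if (MemoryLooksSafe(deal) &&
            Solve(deal, FreeCellPolicy.LimitedUses)) return true;
        return Solve(deal, FreeCellPolicy.Unlimited);         // last resort
    }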
This is not implemented yet, but I plan to experiment with attempting to find wins after playing a number of random moves (similar to how my Accordion solver works), thus avoiding the need to fully solve such deals. Iterative sampling can be done using very little memory, so finding some wins in this manner will reduce the average memory consumption (it might be a speedup as well).
Currently, the program can solve nearly 3,000 deals per day when using a single CPU thread (about 30 seconds per deal average). The vast majority of deals can be solved using less than 4 GB of RAM, but a few take slightly more. The average memory consumption per deal is low enough that it can solve up to ten deals at a time (without exceeding my 32 GB of RAM). When solving many deals at the same time, each thread is slightly slower (I assume this is due to all the processes competing for access to the computer's RAM), but using ten threads still results in a total of 20,000 deals per day (about 4 seconds per deal average). I have completed over a million deals without a single inconclusive result, thus satisfying the goals of the project.
As you can see, depending upon the game, a lot of thought, effort, and experimentation can go into writing a successful solitaire solver. A few improvements are still needed, but I am pleased with this project's results.
Any comments are welcome; please send them to Mark Masten at:
Last modified March 30, 2022
Copyright ©2022 by Mark Masten. All rights reserved.