Multiple solvers had already been written for the game of Gaps/Montana (the variant not allowing any redeals), but some uncertainty about the win rate remained due to inconclusive results for some of the deals. The best previous result I am aware of came from Solvitaire (by Ian Gent and Charles Blake), which reported a win rate of 85.815% ± 3.717% based on 10,000 deals, more than 6% of which were inconclusive. My plan was to improve upon this result by solving 1,000 deals with fewer than 3% inconclusive.
As a secondary goal, I wanted to determine win rates for variants with fewer than 13 ranks. In addition, I was curious how often it is necessary to move a Two that occupies the first column in order to complete a winnable deal (I noticed that Solvitaire's solutions often included a surprisingly high number of such moves).
I am running Ubuntu Linux on a 16-core/32-thread AMD Threadripper CPU. During the project, my RAM was upgraded from 32 GB to 96 GB (which was definitely helpful, but not essential). I am writing code in C#, producing executables that can run on both Windows and Linux.
I knew it would be difficult to achieve the project goals, and I was not even sure it would be practical to do so. Given the large number of possible positions that can be reached during a game (thanks to being able to move Twos repeatedly from one row to another), limiting both memory consumption and execution time is very important. This would not be a simple solver; many optimizations would be needed, and trade-offs would have to be carefully considered.
Normally, I start by writing a solver that plays the regular game I am interested in, before adding options for variants or similar games. In this case, I decided to implement support for playing with fewer ranks right away, since this would be easier to work with before all the optimizations required to solve the full-deck game were in place. I also included support for the "Fixed Suits" variant (allowing a Two to be moved only if its suit matches a predetermined suit for the row it is being moved to), which I anticipated would come in useful for the full solver. In addition, I wrote a "throwaway" solver to answer my question about how often you need to move the leftmost Twos, but ended up keeping it, since it also proved useful elsewhere (as described later).
While I could envision a number of opportunities to improve execution speed through parallel processing with threads, for simplicity I decided to stick with my normal single-threaded approach. As memory allows, I take advantage of the many CPU cores by running multiple instances of the program at a time, solving multiple deals simultaneously.
I had a dilemma about which search technique to use. My normal breadth-first approach seemed doomed to fail, given the high number of possible positions for many deals. However, I assumed that a depth-first approach would probably yield results similar to Solvitaire's, and that it would likely be very time-consuming to beat my 3% target for inconclusive deals. So, I took on the challenge of figuring out a way to "flatten the curve" of exponential memory growth that happens with breadth-first search. It was necessary to experiment with ways of breaking the search space into smaller parts that can be solved one at a time. Fortunately, I had a few such ideas in mind (otherwise, I probably would not have started the project).
To prevent loops from extending the search forever, I simply store visited positions in a HashSet (unless they are dead ends), and ignore those positions if they are encountered again. This structure is periodically freed before processing the next group of positions, but it obviously requires a lot of memory on top of all the positions waiting to be explored. I suspect there may be better, more memory-efficient ways of accomplishing this.
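As a rough illustration, the bookkeeping amounts to something like the following minimal sketch (not the solver's actual code; the position type, its 64-bit encoding, and the move generator are assumed to be supplied by the caller):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of loop prevention during a breadth-first pass over one
// group of positions. TPosition, encode, and expand are placeholders for
// whatever the solver actually uses.
static class GroupSearch
{
    public static void ExploreGroup<TPosition>(
        IEnumerable<TPosition> startingPositions,
        Func<TPosition, ulong> encode,                   // pack a position into a compact key
        Func<TPosition, IEnumerable<TPosition>> expand)  // generate successor positions
    {
        var visited = new HashSet<ulong>();              // freed when this group is done
        var queue = new Queue<TPosition>(startingPositions);
        foreach (var p in queue)
            visited.Add(encode(p));

        while (queue.Count > 0)
        {
            foreach (var next in expand(queue.Dequeue()))
                if (visited.Add(encode(next)))           // Add() returns false for repeats
                    queue.Enqueue(next);
        }
    }
}
```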
There are up to three different search processes performed for each deal.
First, the fixed-suits variation of Gaps is solved in hopes of finding a quick win (any win found using fixed suits is also a valid solution for the regular game). Since the fixed-suits win rate is around 24.9% of all deals, this phase solves a winnable deal about 29.2% of the time. Usually, this process takes only minutes to complete (and sometimes just seconds). It was tempting to try solving fixed-suits with all 24 combinations of suit order, but there is quickly a point of diminishing returns, and the next search phase can find such wins fairly quickly anyway.
During the second search phase, positions are explored in order of how many times a Two occupying the first column has been moved (this reuses the "throwaway" solver I wrote to determine how many such moves winnable games require). All positions reached without moving a leftmost Two are exhausted first, then all positions in which one leftmost Two was moved, then two such moves, and so on. This may seem like a strange heuristic, especially since these moves are often required in order to win a deal. It works fairly well for most winnable deals because most possible game positions are reachable only after multiple moves of leftmost Twos, yet most winnable games can be won with only a few such moves. In fact, about 73.2% of winnable games can be won without moving a leftmost Two at all! As a result, a win can usually be found within a few hours, and often within minutes.
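In outline, the ordering can be implemented by bucketing positions by how many leftmost-Two moves were made to reach them, and exhausting bucket k before touching bucket k + 1. This is my own sketch, not the solver's code; the encoding and move generation are again left abstract, and win detection is omitted:

```csharp
using System;
using System.Collections.Generic;

static class PhaseTwo
{
    public static void Search<TPosition>(
        TPosition start,
        Func<TPosition, ulong> encode,
        Func<TPosition, IEnumerable<(TPosition next, bool movedLeftmostTwo)>> expand,
        int maxLeftmostTwoMoves)
    {
        var visited = new HashSet<ulong> { encode(start) };
        var buckets = new List<Queue<TPosition>>();
        for (int i = 0; i <= maxLeftmostTwoMoves; i++)
            buckets.Add(new Queue<TPosition>());
        buckets[0].Enqueue(start);

        for (int k = 0; k <= maxLeftmostTwoMoves; k++)   // exhaust bucket k first
        {
            while (buckets[k].Count > 0)
            {
                foreach (var (next, movedLeftmostTwo) in expand(buckets[k].Dequeue()))
                {
                    int bucket = k + (movedLeftmostTwo ? 1 : 0);
                    if (bucket <= maxLeftmostTwoMoves && visited.Add(encode(next)))
                        buckets[bucket].Enqueue(next);
                }
            }
        }
    }
}
```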
As an experiment, I added an option to automatically keep filling the gap created by each move. When enabled, moves continue in a chain until either no continuing move is possible (for example, if the new gap follows a King), or the previous move opened up a choice of multiple possible next moves (for example, a gap in the first column could be filled by four different Twos), in which case each of the possible continuations is tried in turn. I found that this option resulted in finding wins faster for most deals, but a few deals either took longer or could no longer be solved at all (the chaining prunes the moves their solutions require). I suspect that, much like the avoidance of leftmost-Two moves, this heuristic usually helps by cutting out search space rather than by being a good rule of thumb. With these heuristics applied within a time limit, they can be viewed as a form of iterative sampling.
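Only the forced part of a chain is shown in the sketch below (when a first-column gap offers a choice of Twos, each continuation is tried separately). The board representation is an assumption of mine that the later sketches reuse: board[row, col] holds suit * 12 + rank, with rank 0 = Two through 11 = King, and -1 for a gap:

```csharp
using System;

static class MoveChains
{
    const int Gap = -1, KingRank = 11;

    // Keep filling the gap at (row, col) while the continuation is forced.
    public static void ChainFrom(int[,] board, int row, int col)
    {
        while (true)
        {
            if (col == 0) return;                 // up to four Twos could fit: a real choice
            int left = board[row, col - 1];
            if (left == Gap || left % 12 == KingRank)
                return;                           // nothing can ever follow this gap

            int needed = left + 1;                // next rank of the same suit
            var (r2, c2) = Find(board, needed);
            board[row, col] = needed;             // fill the gap...
            board[r2, c2] = Gap;                  // ...which creates a new one
            (row, col) = (r2, c2);                // continue the chain there
        }
    }

    static (int, int) Find(int[,] board, int card)
    {
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 13; c++)
                if (board[r, c] == card)
                    return (r, c);
        throw new ArgumentException("card not on board");
    }
}
```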
Although this process can eventually produce a conclusive result for most deals, even losses (if the move-chaining option is not used), it is far from perfect. Some deals require five or more moves of leftmost Twos, and therefore take days of processing to find a solution. And losses can take months to complete (or even years, I would assume) because positions are processed multiple times. Although I can think of a number of optimizations to improve this process, it works well enough to find most wins within a few hours, so I decided to focus my efforts on optimizations for the third search process.
During the final search phase, the possible positions of a deal are exhaustively searched to produce a conclusive result for losses as well as the elusive winnable deals (in which no win was found during the first two search phases). In order to reduce memory consumption while still using a technique similar to breadth-first search, positions are separated into many different sets, and processed one set at a time. The key to defining the sets is that subsequent moves will result in positions that are either in the same set, or a higher-numbered set (in which case further processing of the position is deferred until later), but never a lower-numbered set that has already been processed.
At first, it may seem impossible to define separate sets in such a way that you can never reach a set already processed, since it is hard to prove that a card's position is fixed until all four Twos have been moved to the left. However, note that it is much easier to determine that a card's column is fixed. Once a Two has been moved to the first (leftmost) column (or if it happened to start there), it may be moved to another row later, but it will still occupy the first column of that row. If a Three is moved to the second column, it will remain in that column for the rest of the game. The same goes for a Four moved next to a fixed Three, and so on. Note that a card that merely happens to be in its correct column may still be moved to a different column, if any lower-ranked cards of the same suit are not in their correct final columns.
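With the same assumed representation as above, counting how many of a suit's cards are locked into their columns is a simple scan:

```csharp
// Number of ranks of the given suit that are "column-fixed": the suit's Two
// is in column 0, its Three in column 1 (in any row), and so on.
static int FixedRanks(int[,] board, int suit)
{
    for (int rank = 0; rank < 12; rank++)        // rank 0 = Two .. 11 = King
        if (!InColumn(board, suit * 12 + rank, rank))
            return rank;                         // first rank that is not locked in
    return 12;                                   // the whole suit is in place
}

static bool InColumn(int[,] board, int card, int col)
{
    for (int r = 0; r < 4; r++)
        if (board[r, col] == card) return true;
    return false;
}
```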
There are many possible ways to define the sets, but I chose the following scheme of 100 sets (a sketch of computing a position's set number follows the table):
| Set | Contents |
| --- | --- |
| 1 | All positions without any leftmost Twos. Most deals start in this set, unless a Two happens to already be present in the first column. |
| 2 | All positions with exactly one Two (regardless of suit) in the first column. |
| 3-8 | One set for each of the six possible combinations of two Twos in the first column (depending upon their suits). |
| 9-12 | One set for each of the four possible combinations of three Twos in the first column (depending upon which Two is absent). Positions having a gap in the first column are excluded. |
| 13-76 | The same four combinations of three Twos in the first column, but with a gap in the first column as well. Sixteen sets are used for each combination (64 sets in total), depending on the number of Threes, Fours, etc. that are also fixed into their corresponding columns. |
| 77-100 | One set for each of the 24 possible orderings of all four Twos in the first column. Unlike most other sets, suit order can be used to differentiate, because the positions of all four Twos are locked. |
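To make the scheme concrete, here is my rough reconstruction of how a position might be mapped to its set number (the board encoding follows the earlier sketches, and capping the fixed-card count at 15 to get the 16 buckets for sets 13-76 is my assumption):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SetNumbering
{
    const int Gap = -1;

    // Map a position to its set number (1-100), per the table above.
    public static int SetNumber(int[,] board)
    {
        var twos = new List<int>();          // suits with a Two in column 0, in row order
        bool gapInFirstColumn = false;
        for (int r = 0; r < 4; r++)
        {
            int card = board[r, 0];
            if (card == Gap) gapInFirstColumn = true;
            else if (card % 12 == 0) twos.Add(card / 12);
        }

        switch (twos.Count)
        {
            case 0: return 1;
            case 1: return 2;
            case 2: return 3 + PairIndex(twos[0], twos[1]);        // sets 3-8
            case 3:
                int missing = 6 - twos.Sum();                      // suits sum to 0+1+2+3 = 6
                if (!gapInFirstColumn) return 9 + missing;         // sets 9-12
                return 13 + missing * 16
                          + Math.Min(15, FixedBeyondTwos(board));  // sets 13-76
            default: return 77 + PermutationIndex(twos);           // sets 77-100
        }
    }

    // Count the Threes, Fours, etc. already fixed into their correct columns.
    static int FixedBeyondTwos(int[,] board)
    {
        int count = 0;
        for (int suit = 0; suit < 4; suit++)
        {
            int rank = 0;
            while (rank < 12 && InColumn(board, suit * 12 + rank, rank)) rank++;
            count += Math.Max(0, rank - 1);   // the Two itself is not counted
        }
        return count;
    }

    static bool InColumn(int[,] board, int card, int col)
    {
        for (int r = 0; r < 4; r++)
            if (board[r, col] == card) return true;
        return false;
    }

    // Index 0-5 for an unordered pair of distinct suits.
    static int PairIndex(int a, int b)
    {
        int lo = Math.Min(a, b), hi = Math.Max(a, b);
        return lo * 3 - lo * (lo - 1) / 2 + (hi - lo - 1);
    }

    // Index 0-23 (Lehmer code) of the suit order down the first column.
    static int PermutationIndex(List<int> suits)
    {
        int index = 0;
        for (int i = 0; i < suits.Count; i++)
        {
            int smaller = 0;
            for (int j = i + 1; j < suits.Count; j++)
                if (suits[j] < suits[i]) smaller++;
            index = index * (suits.Count - i) + smaller;
        }
        return index;
    }
}
```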
I assume that using far more sets (and therefore fewer positions per set) would have been a good idea, because less memory would be required at any one time, and the CPU could do more work from its cache without waiting for RAM access.
As a game of Gaps progresses, there tends to be an increasing number of cards that sequentially follow the previous card of the same suit (even if these cards are not yet in their final positions). To further conserve memory, positions within each set are processed in order from fewest to most sequential cards, almost as if each group of positions with a given count of sequential cards were its own set. Unfortunately, it would not quite work to define them as separate sets, because the count can go down when a Two is moved, which would violate the rule that future positions never have a lower set number. The sequential-card count stays the same if the moved card was to the left of the next card in its sequence; otherwise the count usually increases by one (sometimes by two, if the card is moved to a gap that happens to connect on both sides).
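One way to compute the count under the same assumed representation is to check every card's right-hand neighbor:

```csharp
// Count the cards whose right-hand neighbor is the next rank of the same suit.
static int SequentialCards(int[,] board)
{
    int count = 0;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 12; c++)
        {
            int card = board[r, c];
            // card % 12 == 11 is a King; card + 1 would wrap to another suit's Two.
            if (card != -1 && card % 12 != 11 && board[r, c + 1] == card + 1)
                count++;
        }
    return count;
}
```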
I hate to use this optimization, because it makes it difficult to display a solution for winnable games. But with such a need to keep memory consumption down, and since I do not need solution display to achieve my goal of determining the win rate, the savings are too good to pass up. I have some ideas for how I could support optional solution display (at the cost of a longer process when the option is used), but no plans to implement it at this time.
It is not a coincidence that I dedicated nearly two thirds of the sets to positions with three leftmost Twos plus a leftmost gap, since these positions tend to account for a large share of the total positions and processing time. Note that the leftmost Twos can easily be moved around, so if one arrangement of a position is reachable, the other 23 combinations are reachable as well. Therefore, there is no need to store all 24 combinations; it is sufficient to store one position without the leftmost column, and then process it 24 times, once with each possible configuration.
Similarly, if the fourth Two is moved into a gap in the first column, the other 23 combinations must be possible as well (because the three leftmost Twos could be rearranged as desired before moving the fourth Two). So not storing the first column cuts memory consumption to 1/24 for these positions too (this applies only to positions initially stored with four Twos in the first column, not to subsequent positions).
This optimization builds upon the previous one, using the sets having three Twos and a gap in the unstored first column. For each of the 24 positions produced by reconstructing an entry's first column in its various combinations, any Threes that can be moved into the second column are processed, as is moving the fourth Two into the gap. All other possible moves need to be made on only one (arbitrarily chosen) of the 24 positions! This works because, without the first column being stored, all 24 versions of those moves would have produced identical entries anyway. An interesting and nifty thing about this optimization is that it does most of the work for many similar positions simultaneously, even without using threads for parallel processing!
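A sketch of the flow, with delegates standing in for the solver's real move generators (the structure is my illustration of the idea, not the actual code):

```csharp
using System;
using System.Collections.Generic;

static class SharedExpansion
{
    // Process one stored entry (a position minus its first column).
    public static void Process<TEntry>(
        TEntry strippedEntry,
        Func<TEntry, int[], IEnumerable<TEntry>> columnMoves,  // Threes into column 1, fourth Two into the gap
        Func<TEntry, IEnumerable<TEntry>> otherMoves,          // moves that never touch column 0
        Action<TEntry> emit)
    {
        // Suit-order-dependent moves are generated for all 24 arrangements.
        foreach (var arrangement in Permute(new[] { 0, 1, 2, 3 }, 0))
            foreach (var next in columnMoves(strippedEntry, arrangement))
                emit(next);

        // All other moves would yield identical first-column-less entries for
        // every arrangement, so they are generated just once.
        foreach (var next in otherMoves(strippedEntry))
            emit(next);
    }

    // Enumerate all permutations of the array (24 suit orders for 4 suits).
    static IEnumerable<int[]> Permute(int[] a, int i)
    {
        if (i == a.Length) { yield return (int[])a.Clone(); yield break; }
        for (int j = i; j < a.Length; j++)
        {
            (a[i], a[j]) = (a[j], a[i]);
            foreach (var p in Permute(a, i + 1)) yield return p;
            (a[i], a[j]) = (a[j], a[i]);
        }
    }
}
```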
There are not many opportunities in Gaps to automatically play safe moves (at least none that I could think of), but autoplay can speed up endgame processing once all four rows start with a Two. For each row whose leftmost cards form an ascending same-suit sequence starting with a Two, the gap following that sequence can be safely filled in certain cases, such as when the needed card occupies the last column, or when it is immediately followed by the next card of the same suit.
When solving the fixed-suits variation, there is no need for all four rows to start with a Two. In addition, a gap in the first column is automatically filled with the appropriate Two if it qualifies as described above (the Two is located in the last column, or is followed by the Three of the same suit).
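A sketch of the safe-fill test under the two conditions recoverable from the description above (the exact set of safe cases in the solver may differ; representation as before):

```csharp
// True if the gap following the row's ascending same-suit run (starting with
// a Two in column 0) can be filled without risk of losing a winnable game.
static bool CanSafelyFill(int[,] board, int row)
{
    int first = board[row, 0];
    if (first == -1 || first % 12 != 0) return false;        // no leftmost Two
    int len = 1;                                             // length of the run
    while (len < 12 && board[row, len] == first + len) len++;
    if (len == 12 || board[row, len] != -1) return false;    // row finished, or no gap after the run

    int needed = first + len;                                // next card of the suit
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 13; c++)
            if (board[r, c] == needed)
                return c == 12                               // it sits in the last column...
                    || (needed % 12 != 11 && board[r, c + 1] == needed + 1); // ...or its successor follows it
    return false;
}
```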
I thought of a few other situations in which a card could be safely autoplayed, but none that seemed worth implementing.
Once all four Twos have been moved into the first column, some patterns of blocked positions can be detected. For example, if the first four cards of a row are the Two, Four, Three, and Five of the same suit, the game cannot be won from that position (the Three, Four, and Five each need to move into the slot another of them occupies, so none of them can ever move). Or perhaps the second column contains all four Threes, but two or more of them do not match the suit of the Two on their left; since a Three in the second column can move only into the slot beside its own Two, and every such slot is occupied by another Three, the mismatches can never be corrected. Since winning the game from such positions is impossible, there is no need to explore them any further.
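For instance, the all-Threes pattern might be tested like this (my sketch, same representation):

```csharp
// With a Two in column 0 of every row, detect the blocked pattern where the
// second column holds all four Threes and two or more are next to the wrong Two.
static bool BlockedByMisplacedThrees(int[,] board)
{
    int mismatched = 0;
    for (int r = 0; r < 4; r++)
    {
        int two = board[r, 0], three = board[r, 1];
        if (two == -1 || two % 12 != 0 || three == -1 || three % 12 != 1)
            return false;                      // the pattern does not apply
        if (three != two + 1) mismatched++;    // Three of a different suit
    }
    return mismatched >= 2;                    // the Threes can never be untangled
}
```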
For some deals, this optimization resulted in a surprisingly good improvement in both speed and memory usage. Other patterns of blocked positions could potentially be detected for further improvement (I implemented only those that seemed the most common and easiest to detect).
Normally, I am loath to have any hard disk I/O as part of a solving process. It is very annoying to have your system grind to a halt while you hear your hard drive cranking, wondering if it will still be OK weeks or months from now when it finally finishes! But with the lengthy solution times common for Gaps deals, disciplined saving and loading of positions to and from disk can conserve a lot of memory with relatively negligible impact.
While processing the positions with three Twos in the first column plus a gap also in the first column, there is no need to keep in memory the three quarters of those positions that belong to a different combination of leftmost Twos, since no future position can end up in those sets. So, the sets that are not immediately needed are saved to disk and freed from memory, then reloaded when it is time to process them.
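A minimal sketch of the parking step (the file layout and names are mine; positions are assumed to already be encoded as 64-bit keys):

```csharp
using System.Collections.Generic;
using System.IO;

static class SetSpill
{
    public static void Save(string path, IReadOnlyCollection<ulong> positions)
    {
        using var writer = new BinaryWriter(File.Create(path));
        writer.Write(positions.Count);
        foreach (ulong key in positions)
            writer.Write(key);
        // The caller drops its in-memory copy after saving.
    }

    public static HashSet<ulong> Load(string path)
    {
        using var reader = new BinaryReader(File.OpenRead(path));
        int count = reader.ReadInt32();
        var positions = new HashSet<ulong>(count);
        for (int i = 0; i < count; i++)
            positions.Add(reader.ReadUInt64());
        return positions;
    }
}
```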
As a bonus, this optimization made it easy to add enhancements that occasionally save progress, so the solving process can be interrupted and resumed later without starting over (which turned out to be very useful). It also allowed the final sets (those with all four Twos in the first column) to be copied to other instances of the program and solved in parallel, which was helpful for some of the deals requiring the longest execution times.
Thousands of deals have now been completed without a single inconclusive result. When I started the project, I assumed it would be impractical to avoid some inconclusive deals, so I am thrilled with this outcome. With 1801 wins and 299 losses, the win rate is 85.697% ± 1.495%. To narrow this estimate further, I plan to solve additional deals. For the latest results, including for variants with fewer than 13 ranks, see the Gaps article.
Gaps has been the most challenging Solitaire solver I have written so far. A number of optimizations were required, many of which exploited insights about how the game works. Some of the optimizations I figured out will be useful for future solvers. For example, splitting the search space into subsets is a technique that can be directly applied to any game with foundations, assuming the rules do not allow moving cards off the foundations. I will use this technique for my Shamrocks solver (currently in progress).
For another article about the writing of Solitaire solvers, see the Technical Details of my Worm Hole solver.
Any comments are welcome; please send them to Mark Masten at:
Last modified February 9, 2024
Copyright ©2024 by Mark Masten. All rights reserved.