Elo ratings were introduced in December 2020. We use a customized version of the Elo rating model widely used in chess and sporting events for predicting outcomes between two competitors.
The most distinctive aspect of this implementation is that freecell players don’t compete head-to-head over a freecell game a la the more traditional Elo, so our model handles this indirectly by rating games as well as players. In other words, each specific deal is assigned a rating, and players and games exchange points based on how expected or unexpected the outcome was. In this way players compete against each other using the games as a proxy.
Although we refer to it as a "total awesomeness rating" it's just another stat for viewing players' performance. This is a fairly complex stat that looks at a player's strength in solving deals (their winning percentage coupled with the difficulty of the deals they play), and as such may be somewhat influenced by the variants they choose (see Power Rankings below). At some point in the future the system will have properly normalized all the variants and there will be little advantage to playing a particular variant.
Only streak play is rated. Separate ratings have been developed for tournament and HotStreak play. Rankings are for current players only. Names are removed from published rankings after 14 days of inactivity. While most players favor the standard 8x4 game, the site offers a vast array of variants, each with its own lists of best streaks. Elo ratings bring this all together with their ability to account for differences in difficulty between variants and levels, to provide a single ranking for overall player performance. The idea here is that we can now start to compare streak players across variants and difficulty levels, and we can now compare game variants as well (see chart below).
The ratings are designed to answer a simple question: given a particular deal in streak play, how likely is this player to win? The premise of the Elo model is that we can quantify this likelihood based on the difference in ratings between game and player, and then use the actual outcome to improve our prediction for the next event.
Every new player begins with a rating of 1500, a common starting point in Elo systems. Games could have also been assigned 1500 to start, but this would have ignored information we already have about individual games and the set they belong to. We also have far more games than players to rate, thousands of players versus millions of games, so a better starting point was needed.
To that end, the win/loss record of each game in December 2020 was used to assign an initial rating. Note that this was a one-time event, and game stats no longer play any role in Elo ratings. Game stats differ in important ways from ratings, because we don’t know who played those games and we don’t know if it was streak play or a timed event. But as a historical note, here’s how ratings were assigned.
Basically we took the game's play history, adjusted it slightly toward the mean for that level, and then assigned the rating the Elo formula associates with that win%, assuming a 1500-level opponent. Ratings for levels 6–12 were scaled up based on a calculation of how a player's rating increases after winning ten games in the level below. And finally whole-level adjustments were made in almost every level based on play testing to bring them into parity with each other, and now the Elo model is continuing to fine-tune things.
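That level-to-level scaling can be sketched in a few lines. This is a hedged illustration, not the site's actual code: it assumes the bump equals the cumulative K=8 rating gain from ten straight wins against deals at the lower level's average rating.

```python
def level_bump(avg_game_rating, player_rating=1500, wins=10, k=8):
    """Cumulative rating gain from winning `wins` straight games
    against deals rated at `avg_game_rating` (illustrative sketch)."""
    r = player_rating
    for _ in range(wins):
        expected = 1 / (1 + 10 ** ((avg_game_rating - r) / 400))
        r += k * (1 - expected)  # standard Elo update for a win
    return r - player_rating

# Ten wins against 1500-rated deals gains a bit under 40 points,
# so under this assumption the next level up would start roughly
# that much higher than the level below it.
```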
As an example of how ratings were assigned, let’s say a 7x4-5 game had been beaten 1 time out of 10 plays, for a 10% player win rate. Before assigning its initial rating we adjusted to account for the fact that we don't know that much about a game after ten plays. For instance one more win would have doubled its win%, which is significant.
So since the cumulative player win% for 7x4-5 is 64% we add five more fictitious plays at the average win rate for this set of games, meaning we pretend it was beaten 3.2 times out of the next 5, giving an adjusted win rate of 4.2 wins out of 15 plays = 28%. In other words the fewer plays a game has the more we assume it’s a typical game for that variant and rate it accordingly.
If the same game showed 10 wins out of 100 plays, it has the same win% but now we know more about it. This time adding the 5-game adjustment has much less impact, and the game is rated as 13.2/105=12.6%. The 1 out of 10 game would be rated 1668 and the 10 out of 100 game would be rated 1837. This method was used for all levels 5 through 12 where streak play is possible, with an additional bump in the ratings of games beyond level 5 to account for the presumably higher average rating of the players there. New games being played for the first time are assigned the average rating for their respective variant and difficulty level.
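The whole initial-assignment calculation can be reproduced with a short sketch. The five fictitious plays and the 64% variant average come straight from the text; the exact published ratings may differ by a few points due to rounding and the level adjustments described above.

```python
import math

def initial_game_rating(wins, plays, variant_win_pct, prior_plays=5, base=1500):
    """Blend the game's record with its variant's average win rate,
    then invert the Elo expectation formula for a 1500-rated opponent."""
    adj_wins = wins + prior_plays * variant_win_pct   # add fictitious plays
    p = adj_wins / (plays + prior_plays)              # adjusted player win%
    # Solve p = 1 / (1 + 10**((rating - base) / 400)) for the game's rating
    return base + 400 * math.log10((1 - p) / p)

print(round(initial_game_rating(1, 10, 0.64)))    # 1 win in 10 plays
print(round(initial_game_rating(10, 100, 0.64)))  # 10 wins in 100 plays
```

The 10-out-of-100 case lands right on the quoted 1837, confirming that more plays mean the smoothing matters less.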
Elo ratings represent the likelihood that a player will win or lose a particular game. The formula for expected win% is the inverse of:

1 + 10**((game rating - player rating) / 400)

So if a 1500-rated player is dealt a 1000-rated 8x4 game, we would say there’s a

1 / (1 + 10**((1000 - 1500) / 400)) = 94.68%

chance the player wins and only a 5.32% chance she loses.
These percentages also define how ratings adjust based on the actual outcome. We use a constant K of 8 points, which is the max point exchange between player and game. If the result was expected, and the player above wins, her rating increases by 5.32% of 8 or 0.43 points. If she loses, her rating decreases by 94.68% of 8 or 7.57 points. Points gained by a player are taken by the game, and vice versa. So her new rating will be 1500.43 if she wins or 1492.43 if she loses.
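The expectation and update rules above fit in a few lines. A minimal sketch, using the same K = 8 and the 1500-vs-1000 example from the text:

```python
def expected_win(player_rating, game_rating):
    """Probability the player beats this deal (standard Elo expectation)."""
    return 1 / (1 + 10 ** ((game_rating - player_rating) / 400))

def update(player_rating, game_rating, won, k=8):
    """Exchange up to K points between player and game based on the outcome."""
    e = expected_win(player_rating, game_rating)
    delta = k * ((1 if won else 0) - e)
    return player_rating + delta, game_rating - delta

print(expected_win(1500, 1000))       # ~0.9468, the 94.68% from the text
print(update(1500, 1000, won=True))   # player ~1500.43, game ~999.57
print(update(1500, 1000, won=False))  # player ~1492.43, game ~1007.57
```

Note that the update is zero-sum by construction: whatever the player gains, the game loses, and vice versa.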
That’s the whole story in terms of the player ratings. There’s more going on behind the scenes though when it comes to the games, as we needed to create some leverage to balance the impact on players and games. We do this by taking a few hundredths of every point gain or loss on an individual game and applying it to all 32,768 games in that variant/difficulty level. In other words all the games in 8x3-8 get a small boost up or down based on what happens to any individual 8x3-8 that gets played. This gives us years’ worth of adjustment in days, which is not too much given how many more games there are than players. This extra “boost,” either up or down, is scaled to the frequency of play for that level so as long as a variant gets some play we’re able to get enough adjustment to bring it in line with the others.
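The spreading mechanism might look something like the sketch below. The actual fraction ("a few hundredths") and the play-frequency scaling aren't specified, so spread_frac here is a placeholder assumption:

```python
def apply_result_to_level(level_ratings, played_idx, delta, spread_frac=0.02):
    """Apply the point exchange to the played game, then nudge every
    game in the variant/level by a small fraction of that exchange.
    spread_frac is an assumed stand-in for "a few hundredths"."""
    level_ratings[played_idx] += delta      # the game that was actually played
    boost = spread_frac * delta             # small level-wide adjustment
    return [r + boost for r in level_ratings]
```

In practice a single stored per-level offset would likely be used rather than touching all 32,768 records on every play; the effect is the same.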
Elo ratings are a self-correcting predictive tool and not a score. If this were a head-to-head competition like chess, a 200-point difference means the higher rated player would expect to win 76% of the time. Some top players are 600 or more points above the starting rating, meaning they'd expect to outsolve an average player 97% of the time.
A rating is also focused on recent performance. You can think of it like a thermometer: it’s always adjusting based on the current temperature. The previous temperature is the starting point, but once it moves it doesn’t remember the old reading. A rating provides an interesting measure of overall solving ability, but may frustrate players who make it their primary focus. Ideally you check out your rating to see how you stack up and to be amazed at the talented field of players we have here, and then go back to running up streaks in your favorite variants.
Individual game ratings are only an approximation, and except perhaps in 8x4 will never reach their true level. That’s fine, as long as the average for the whole level reaches its true rating, since presumably players will face a large sample of games and some will be rated too high and others too low. Also, at this point the ratings don’t know the difference between really hard games and unwinnable games, so variants with lots of unwinnables will tend to have higher average ratings to compensate.
On the player side, ratings reach their true level much faster. To get there fastest some may opt for what a chess player might call “sharp” play, choosing variants with ratings close to their own where something, good or bad, is bound to happen. Opponents with close ratings push apart like magnets with like charges. Others will choose to protect a rating by only playing specific variants.
Eventually it won’t matter where you play, because the variants will naturally move toward parity with each other. And since player ratings are set relative to game ratings, over time it will become impossible to maintain a rating built on play in specific variants that were previously overrated. In the meantime, if you want to know that your rating is an accurate representation of your ability, the best bet is to play in a variety of variants and difficulty levels. This has the added benefit of speeding along the process of getting all the variants into parity with each other. Feel free to look for variants you feel are overrated, though; your play will help bring them in line.
There’s nothing you have to do to improve your rating, except play better, obviously. Good and bad streaks will happen, and it’s normal to see a rating fluctuate even by dozens of points if you play a lot. Note that if a player wins exactly the number of games predicted by their rating during a day, the rating will be unchanged. If you lose one more game than expected, your rating will drop by 8 points. Players are human and deals are random. Performance can vary by a lot more than one game, even if the ratings were perfect. So if you get down, keep playing. Ratings have no memory; they’re free floating and not held back by previous performance.
One point of caution: Elo ratings do not care if this is the first game of your streak or the hundredth, so play every game like it matters and don’t let your guard down on those early ones. Also, where Winnable versions of a variant exist it’s marginally preferable to play them over the regular version of the same variant, where you might risk losing points to a game that other players won’t have to face. This difference is minor and transient, since most unwinnable games in these variants have already been assigned very high ratings, and any points they take you’ll begin to get back with your next game, but this may help you add a few Elo points.
Here are the current best returns for playing for Elo. The "power ranking" is simply the variant's average game win percent multiplied by its average Elo. These are ever-changing and of course your mileage may vary.
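As described, the power ranking is a single multiplication; a tiny sketch with hypothetical numbers:

```python
def power_ranking(avg_win_pct, avg_elo):
    """Average game win% for the variant times its average Elo."""
    return avg_win_pct * avg_elo

# Hypothetical variant: 64% average win rate, 1550 average game Elo
print(power_ranking(0.64, 1550))  # ≈ 992
```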
Note: to play these specific variants and difficulty levels, use the Custom mode and leave the game number selection on Random and check Streak mode so the game will count. Read here for more information on selecting a particular variant and difficulty level.
Before he passed away, SlowPoker (part of the original Ratings Crew) imagined devising a rating system for streak play here. He wanted to use the Elo system, but with each game given its own rating, sort of a man-against-machine approach. So basically each game would develop its own rating over time, as would each player. These ratings represent the fruition of that idea. After the initial launch extensive play testing was done and manual adjustments made to the games level by level. Then more adjustments were made based on anomalies players uncovered, and finally the “secret sauce” part of the algorithm was fully implemented to let the machine do the work of boosting game averages up or down. We continue to monitor the adjustments the model is making to game averages, and it’s working very well.
Keep branching out, everyone. Play those odd variants and higher difficulty levels if you aren’t worried about protecting a streak. It all helps. Don’t worry, none of you are breaking the rating system. If you choose to play up instead of starting at level 5 that actually helps us get some coverage in lesser played games. Just know that if a rating is built on games that seem to be rated too high, you'll find that playing anything else will bring it back down.
Those games with a high number of plays and no wins have already been assigned ratings near 3000 to minimize their impact. We used this number so as not to distort the averages too much since we're already near parity and averages are important for assigning new ratings. This will continue to be refined. Meanwhile it’s helpful to remember that every game’s rating is off to one degree or another, the unwinnables only stand out because we can tell when it’s off. The system is designed to work despite that.