For the first time in recorded history, I’ve changed my mind on something.
After some input from a few people (including one of the guys from http://www.smartfootballrankings.com/) and a bit of experimenting, I've made some very significant changes to how the Elo Ratings are calculated. As of right now I haven't updated the original pdf, so you can still check out the basics of the original calculations there. I expect to have the new version up, with all the changes spelled out, by the end of the week.
Or you can just read on!
The first and biggest change is that the ratings are now iterative. What does that mean exactly? It means I run through the whole season 15 times in a row, recalculating after each game, to get the final ratings. I settled on 15 loops because that's the number of iterations it took for the ratings to change by less than 1/100 of 1% on each additional go-around.
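The post doesn't spell out the mechanics, so here is a minimal sketch of one way the iterative approach could work: ratings carry forward from pass to pass, and the loop stops once no rating moves more than 1/100 of 1%. The function names, the game-tuple format, and the carry-forward interpretation are my assumptions, not the author's actual implementation.

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score for the team rated r_a."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def iterate_ratings(games, start, k=1.0, max_loops=15, tol=1e-4):
    """Run the season's games repeatedly, recalculating after each game.

    games: list of (home, away, home_score) where home_score is
           1 for a home win, 0.5 for a draw, 0 for a home loss.
    start: dict of team -> starting rating.
    Stops after max_loops passes, or sooner if every rating changed
    by less than tol (a relative 1/100 of 1%) on the last pass.
    """
    ratings = dict(start)
    for _ in range(max_loops):
        before = dict(ratings)
        for home, away, s_home in games:
            delta = k * (s_home - expected_score(ratings[home], ratings[away]))
            ratings[home] += delta  # winner gains what the loser gives up
            ratings[away] -= delta
        if max(abs(ratings[t] - before[t]) / before[t] for t in ratings) < tol:
            break
    return ratings
```

Because each pass starts from the previous pass's output, opponents' final strength feeds back into how early-season games are scored, which is the whole point of iterating.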
The two big pros I see from this method:
1) It makes sure that teams are accurately rated for each match earlier in the season. If Real Salt Lake turns out to be terrible this year (they won’t) we don’t want to give opponents too much credit for beating them early in the season when we thought they were the best team in the league.
2) It gives games more meaning. Each game now factors into the calculation 15 times rather than just once. In a league that will only play 34 games this year, that means I should be able to get reliable ratings earlier than the halfway point, and hopefully have a better sense of what the results mean rather than just where each team started the season.
The biggest con:
I could not find one single example of anyone else doing this, so I kinda just played around with stuff until I was happy with how the numbers were coming out.
So with that gigantic looming potential negative in mind, allow me to explain some of the other changes.
K-factors, the values that change based on the game’s setting, have been drastically reduced. Last time it was 15 for a regular season game, 30 for a playoff game, and 45 for the MLS Cup Final. Now those numbers are 1, 1.25, and 1.5 respectively. This was done mostly for cosmetic reasons. When I kept the numbers at 15/30/45 for 15 iterations, the ratings tended to span from 1200-1700 rather than the 1350-1625 that we’re used to, and seemed to drastically overvalue the MLS Cup Final as a rating point. 1/1.25/1.5 got the numbers in line with where they were previously.
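In code, that per-setting K-factor could be as simple as a lookup; the setting names below are illustrative, and only the three values come from the text.

```python
# Per-setting K-factors from the post; the key names are my own labels.
K_FACTORS = {
    "regular_season": 1.0,
    "playoff": 1.25,
    "mls_cup_final": 1.5,
}

def k_factor(setting):
    """Return the K-factor for a game's setting, defaulting to regular season."""
    return K_FACTORS.get(setting, K_FACTORS["regular_season"])
```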
What I really should do is go back through the ratings with a few different variations, and then see how the real results match up to the expected results given each matchup’s ratings. For instance, a team with a 100-point advantage playing at a neutral site should win 64% of the time. That would give us the most accurate model, but it’s a lot of work I haven’t put in yet.
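For reference, that 64% figure falls out of the standard Elo expected-score curve. The post doesn't state its scaling constants, so I'm assuming the usual 400-point logistic here:

```python
def expected_score(r_a, r_b):
    # Standard Elo logistic: a 400-point gap corresponds to roughly 10:1 odds.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A team with a 100-point advantage at a neutral site:
# expected_score(1550, 1450) comes out to about 0.64, matching the text.
```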
As for the calculations, I still set all beginning-of-season ratings by halving the difference between the previous year’s end-of-season rating and the 1500 base. So if Philadelphia Union ended at 1450, they would start the next season at 1475. I will re-run the entire season’s results off of those base values after each match day, incorporating all new results from that day at the same time. One other idea I played with was varying the home field advantage by team, but I couldn’t decide if that was a good idea, so I held off.
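That preseason regression rule is a one-liner; the function name is my own, but the arithmetic matches the Philadelphia Union example above.

```python
def preseason_rating(final_rating, base=1500.0):
    # Start the new season halfway between last season's final rating
    # and the 1500 league base (i.e., regress 50% toward the mean).
    return (final_rating + base) / 2.0

# preseason_rating(1450) gives 1475.0, as in the Philadelphia Union example.
```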
I’ll do a follow-up post either tomorrow or Wednesday with the end-of-year rankings under each method. Suffice it to say that I like the newer version’s output a lot more. I think the previous method gave too much weight to results and not enough to goal differential.
And one final note, if you’ve actually made it this far: I caution you to remember what the MLS Elo Ratings are and what they are not. They do have a large component of saying which team is the best in MLS, but they are not strictly a power ranking. They look at the sum of a team’s results on the season, and rate teams from most to least impressive.
Phew, that was a lot of words for a lot of work.