| Pilot | Rating | Confidence |
| --- | --- | --- |
| Jediluke | 29.24 | 8.58 |
| Mark392 | 23.1 | 2.42 |
| bahamut | 22.9 | 2.68 |
| b2af | 22.54 | 4.5 |
| Code | 17.68 | 7.08 |
| Morfod | 16.8 | 4.66 |
| Swarthy | 14.66 | 5.47 |
| RETHINK | 14.46 | 5.77 |
| Lee | 13.62 | 6.34 |
| Drakona | 13.56 | 2.36 |
| ByeByeStyle | 13.22 | 1.49 |
| Entropy | 12.86 | 2.11 |
| Daz | 11.12 | 4.86 |
| PuDLeZ | 6.06 | 2.36 |
| Apo | 5.26 | 4.08 |
| Mandioca | 5.16 | 4.61 |
| Godin1984 | 4.2 | 2.12 |
| Borjarnon | 3.82 | 2.29 |
| Carl Spackler | 3.78 | 1.88 |

Skill ratings were the output of the algorithm that awarded the bronze, silver, and gold tiers. Here is the numerical output as it stood at the end of each season.

While the math was complex, the basic idea was that the ratio of two pilots' skill ratings was the ratio of kills the ladder expected them to get. So if one pilot had a 10.0 rating and another a 5.0 rating, the ladder expected the first pilot to win 20-10. Likewise, if you see a rating of 8.0 next to a rating of 7.0, you can read that as the ladder expecting the first pilot to get 8 kills for every 7 the second pilot does.
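The ratio interpretation above can be sketched as a few lines of arithmetic. This is a hypothetical helper, not the ladder's actual math (which the text says was more complex): it just splits a game's total kills in proportion to the two ratings.

```python
def expected_score(rating_a, rating_b, total_kills=30):
    """Split total_kills between two pilots in proportion to their ratings.

    Hypothetical illustration of the ratio reading described in the text;
    the real ladder algorithm was more complex than this.
    """
    share_a = rating_a / (rating_a + rating_b)
    kills_a = round(total_kills * share_a)
    return kills_a, total_kills - kills_a

# A 10.0-rated pilot vs. a 5.0-rated pilot in a 30-kill game:
print(expected_score(10.0, 5.0))  # -> (20, 10)

# Ratings of 8.0 and 7.0 give an 8:7 kill split:
print(expected_score(8.0, 7.0))  # -> (16, 14)
```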

The dividing lines between the classes were algorithmically calculated, too, and changed with the pilots. They were not saved, so I cannot tell you exactly where those lines stood at the end of each season. But I can tell you that the tiers were generally intended to be such that the lowest pilot in each was expected to lose no worse than 13-20 to the highest pilot. This was where we felt the line of no hope was: pilots who scored further apart than this were not competitive with each other.
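Under the ratio reading, the 13-20 "line of no hope" corresponds to a rating ratio of 20/13 (about 1.54). A minimal sketch of that check, assuming a hypothetical `competitive` helper (the real tier boundaries were computed algorithmically and, as noted, were not preserved):

```python
def competitive(rating_a, rating_b, threshold=20 / 13):
    """True if the weaker pilot is expected to lose no worse than 13-20.

    Hypothetical check based on the tier-width description in the text;
    20/13 is the rating ratio implied by a 13-20 expected score.
    """
    hi, lo = max(rating_a, rating_b), min(rating_a, rating_b)
    return hi / lo <= threshold

# Using ratings from the table: Jediluke (29.24) vs. Mark392 (23.1)
# is within the threshold, while Jediluke vs. Lee (13.62) is not.
print(competitive(29.24, 23.1))
print(competitive(29.24, 13.62))
```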

The rating confidence has to do with how sure the ladder is of its read on a pilot. Pilots who play rarely, who don't play many opponents, or who only play very lopsided games are hard to judge accurately against their peers. Roughly speaking, this number represents the number of pilots against whom the ladder thinks you have given it enough data to judge accurately where you stand relative to them.