A few years ago, the NCAA officially ditched the RPI (finally) and introduced a new metric, the NET, that is supposed to encompass more modern ways of evaluating team performance. Along with the NET, the NCAA also uses five other official metrics when the selection committee is deciding which teams make the NCAA Tournament and what seed they will receive.

Between the change to the NET being new and the many misconceptions of other advanced metrics and rating systems like KenPom that I see on my Twitter timeline quite often this time of year, I thought I would try to do my best to explain the different metrics and how the NCAA uses them to pick the 68 teams that take part in the best sporting event of the year.


The NCAA has a nice page explaining their new selection criteria if you want to double-check the things I cover, but I am going to try to go into more detail and also explain the other five metrics that they officially use.

At the end of the day, there is still a group of people deciding which teams make the NCAA Tournament and what seed they receive. One of the more common misconceptions I see is the belief that the NET rankings directly indicate what seed a team will receive, which isn’t the case at all. No ranking seeds the tournament on its own.

The selection committee officially uses the NET, ESPN’s Strength of Record and BPI, KPI, KenPom, and Sagarin rankings to evaluate each team. The NET is also used with a new quadrant system that categorizes wins and losses as Quad 1, 2, 3 or 4 games based on the ranking of the opponent and where the game is played.
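The quadrant system described above can be sketched in a few lines of code. The cutoffs below are the publicly reported NET-rank thresholds for each quadrant; treat them as an assumption, since the NCAA could adjust them.

```python
# Reported NET-rank cutoffs per game location (assumed values):
# location: (max rank for Quad 1, max rank for Quad 2, max rank for Quad 3)
QUAD_CUTOFFS = {
    "home":    (30, 75, 160),
    "neutral": (50, 100, 200),
    "away":    (75, 135, 240),
}

def game_quadrant(opponent_net_rank: int, location: str) -> int:
    """Return 1-4 for the quadrant of a single game."""
    q1, q2, q3 = QUAD_CUTOFFS[location]
    if opponent_net_rank <= q1:
        return 1
    if opponent_net_rank <= q2:
        return 2
    if opponent_net_rank <= q3:
        return 3
    return 4

# The same opponent can produce different quadrants depending on venue:
# a road game against the NET #60 team is Quad 1, but hosting them is Quad 2.
print(game_quadrant(60, "away"))  # 1
print(game_quadrant(60, "home"))  # 2
```

This is why road wins are so valuable on a team sheet: the Quad 1 window on the road (top 75) is more than twice as wide as it is at home (top 30).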

The committee has also said that they take a player’s missing games into consideration as long as the player is back for the NCAA Tournament. I have not seen any specifics on how exactly they consider this, but it’s safe to say a team that loses a few games while a top player is out is viewed more favorably than a team with a similar or slightly better resume that didn’t lose any key players to injury.

The five other metrics that the NCAA uses are broken into two groups: results-based metrics and predictive metrics. The NET itself combines elements of both.

Now, here is an overview of the six metrics included on each team’s official team sheet:

The NCAA Evaluation Tool (NET)

As I mentioned above, the NET is the new official metric of the NCAA and is an improved replacement for the RPI. The NET is supposed to be a modernized metric that includes “the Team Value Index, which is based on game results and factors game location, the opponent and outcome, as well as net efficiency, winning percentage, adjusted winning percentage, and a capped scoring margin.”

The official formula has not been released for the NET, but the rankings are updated daily for most of the season.

The NET is used to determine which quadrant each game falls into, and the rankings as of the end of the season are what count on Selection Sunday. For instance, a team may get a win that counts as a Quad 1 win at the time, but if that opponent finishes the year poorly, it may slide to a Quad 2 win by the end of the year.

While the NET is said to cap the impact of the margin of victory, some still criticize it because the efficiency component is most likely not capped. This is still technically speculation, because seemingly no one knows the official algorithm used to generate the rankings.
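To make the capped-margin idea concrete, here is a minimal sketch. The 10-point cap is the commonly reported figure; since the official NET formula is unpublished, both the cap value and this exact mechanism are assumptions for illustration.

```python
def capped_margin(team_score: int, opp_score: int, cap: int = 10) -> int:
    """Scoring margin clamped to +/- cap, as the NET is reported to do."""
    margin = team_score - opp_score
    return max(-cap, min(cap, margin))

# A 35-point blowout counts exactly the same as a 10-point win...
print(capped_margin(95, 60))  # 10
# ...while close games keep their true margin.
print(capped_margin(70, 68))  # 2
```

The criticism in the paragraph above is that even if this margin is capped, a blowout still inflates a team’s raw efficiency numbers, so margin may leak into the NET through the efficiency component anyway.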

ESPN Strength of Record

Strength of Record is one of the two results-based metrics that the NCAA uses, and ESPN describes it as “a measure of team accomplishment based on how difficult a team’s W-L record is to achieve.”

Strength of Record is similar to a strength of schedule metric, but it actually matters whether a team wins the tough games instead of just playing them. For instance, Kansas has the 7th-best strength of schedule according to ESPN, but their 23-3 record gives them the 2nd-ranked Strength of Record, while Baylor currently has the best Strength of Record with a 24-1 record against the 33rd-toughest schedule (and a road win over Kansas) as of 2/20/2020.

Currently, Minnesota has the toughest strength of schedule but is still ranked just 60th in Strength of Record because they’ve gone 12-13 against it.
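A hedged sketch of the intuition behind a record-difficulty metric: assign a baseline team a win probability against each opponent on the schedule, sum those probabilities to get the baseline’s expected wins, and compare that to the actual record. ESPN’s actual Strength of Record method is more involved; the probabilities here are invented for illustration.

```python
def expected_wins(win_probs):
    """Expected wins for a baseline team given per-game win probabilities."""
    return sum(win_probs)

# Two hypothetical 3-game schedules: tough (low baseline win probabilities)
# versus easy (high baseline win probabilities).
tough = [0.35, 0.40, 0.55]
easy  = [0.85, 0.90, 0.95]

# Going 3-0 against the tough slate beats the baseline expectation by far
# more than going 3-0 against the easy one, so it earns more credit.
print(round(3 - expected_wins(tough), 2))  # 1.7
print(round(3 - expected_wins(easy), 2))   # 0.3
```

This captures the Kansas/Minnesota contrast above: a great record against a hard schedule outranks a great record against a soft one, and a losing record erases the benefit of a tough schedule entirely.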

Kevin Pauga Index (KPI)

The KPI is a results-based metric developed by Kevin Pauga, now an Assistant Director of Athletics at Michigan State. The KPI started as a large Excel sheet and has developed into a comprehensive metric used by college basketball and other sports over the last decade-plus.

The KPI assigns a value for each game based on the opponent and location and then uses the aggregate of each game’s score to rank teams. For instance, Auburn’s best win of the season was 0.77 at Mississippi State while their worst loss was -0.11 at Missouri. Auburn currently sits in 5th overall with a rating of 0.367, just 0.001 ahead of both Villanova and Creighton at 0.366 as of 2/20/2020.

Each team also has a rating for the strength of schedule, overall differential, offensive differential, and defensive differential.

Because each game is assigned a score, it’s easy to see the best wins and losses of the season. Baylor has the best win of the season so far with a 1.13 at Kansas while Grand Canyon and Hampton have the worst losses of the season with -1.01 results against a non-D1 opponent and Howard, both at home.
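The aggregation idea behind the KPI can be sketched simply: score every game, then combine the per-game scores into one rating. The per-game values below are in the range the article cites, but the schedule is hypothetical, and averaging the scores is an assumption on my part, since Pauga’s exact formula is not public.

```python
def kpi_rating(game_scores):
    """Combine per-game KPI scores into a single rating (simple average,
    assumed here; the real KPI aggregation is not published)."""
    return sum(game_scores) / len(game_scores)

# Hypothetical 4-game resume: a great road win (0.77), two decent wins,
# and a mildly bad loss (-0.11).
games = [0.77, 0.45, 0.36, -0.11]
print(round(kpi_rating(games), 3))
```

One nice property of a per-game scoring system like this is exactly what the paragraph above notes: the single best win and worst loss of a season fall straight out of the data by taking the max and min.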

Ken Pomeroy Ratings (KenPom)

KenPom has been the metric I reference the most on Twitter, mostly because the site also provides a ton of other good information for each team. The KenPom rankings are one of the predictive metrics used and they are “not trying to measure accomplishment.”

In the most basic sense, KenPom ranks teams based on the difference between their adjusted offensive efficiency and adjusted defensive efficiency. KenPom does not disclose his exact formula, but it adjusts each team’s efficiency (how many points they score or give up per 100 possessions) to account for the level of competition they have played.

This means that how efficient a team is matters more than results, which is why the rankings are meant to indicate how a team will perform in the future rather than assess its resume. Yes, winning helps, because a win means a team had a higher offensive efficiency than defensive efficiency for that game, but wins (even over good teams) don’t always lead to big jumps in the rankings, especially later in the season.
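The raw quantity underneath all of this is simple to compute: points per 100 possessions, for and against, and the difference between the two. The opponent adjustment is KenPom’s unpublished step, so this sketch covers only the unadjusted starting point, with made-up game numbers.

```python
def efficiency(points: int, possessions: int) -> float:
    """Points per 100 possessions."""
    return 100 * points / possessions

def efficiency_margin(pts_for: int, pts_against: int, possessions: int) -> float:
    """Raw (unadjusted) net efficiency for a single game."""
    return efficiency(pts_for, possessions) - efficiency(pts_against, possessions)

# A team that scores 78 and allows 64 over a 68-possession game played
# at roughly +20.6 net efficiency for that game.
print(round(efficiency_margin(78, 64, 68), 1))  # 20.6
```

Note how this explains the point above about wins not moving the rankings much: a narrow win over a good team might be only a +3 efficiency game, which barely nudges a season-long average.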

KenPom also allows for easy comparisons between teams from year to year. For instance, Kansas is the best KenPom team with an adjusted net efficiency of +30.99 as of me writing this. However, they would only be the 3rd best team last year with that efficiency. The top 8 teams at the end of last season are “better” than the second-best team this season (Baylor, +27.54).

Like many of the other advanced metrics, KenPom also calculates strength of schedule as well as things like “luck,” which are seemingly factored into the adjustments to the efficiencies that produce the final ratings and rankings.

ESPN Basketball Power Index (BPI)

The BPI is ESPN’s predictive metric that they describe as “a measure of team strength that is meant to be the best predictor of performance going forward.”

I have not been able to find a detailed description of how the BPI is calculated or what it’s mostly based on, but it is at least broken down into two metrics for offense and defense that are added together to get the actual BPI rating which is used to rank teams.

Currently, Duke is the overall BPI leader at 18.4 while Gonzaga has the best offense (12.9) and West Virginia has the best defense (10.3).

Sagarin Ratings

Jeff Sagarin’s ratings, like the KPI, are used for many other sports and are not basketball-specific. The ratings only factor in games against D1 opponents. Sagarin’s ratings are a predictive metric; he even says that to predict the outcome of a given game, you just compare the two teams’ ratings and add an adjustment for the home team (the exact adjustment changes over the season as more data is gathered).
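The prediction rule Sagarin describes is simple enough to write down directly. The ratings and the home-edge value below are made up for illustration; Sagarin publishes the actual home-court figure on his page and it drifts over the season.

```python
def predicted_margin(home_rating: float, away_rating: float,
                     home_edge: float = 3.0) -> float:
    """Expected home-team margin: rating difference plus home-court edge
    (the 3.0 default is an assumed placeholder, not Sagarin's value)."""
    return home_rating - away_rating + home_edge

# A 90-rated team hosting an 85-rated team is favored by about 8...
print(predicted_margin(90.0, 85.0))  # 8.0
# ...but would be a slight underdog on the road against the same opponent.
print(predicted_margin(85.0, 90.0))  # -2.0
```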

I do not reference Sagarin as often and am not as familiar with the ratings, but you can get a pretty good explanation directly on the site. Like the others, Sagarin has his own strength-of-schedule rating, which ranks teams based on the overall strength of their opponents.

Sagarin has three ratings that are combined to create the overall rating for each team: a predictor rating based on expected future scores, a golden mean rating that utilizes actual scores from games played, and a recent rating which puts more emphasis on recent games in order to account for upward and downward trends for a team.

As an example, Auburn is currently 44th overall, with a predictor ranking of 28, a golden mean ranking of 22, and a recent ranking of 78, which drags the overall rating down quite a bit following two straight double-digit losses to bad teams as of writing this.

At the end of the day, computer rankings are a major asset for the NCAA Tournament selection committee because college basketball, in particular, has a large number of teams competing against each other and with just 30ish regular season games to consider, schedules are certainly not going to overlap much. Computer rankings and metrics allow for teams to be normalized and easily compared based on many different factors.

The NCAA selection committee uses six different metrics (and two different types of metrics) because each one looks at different things, and the rankings can be quite different across the board. If the committee went only off the KPI, for example, Auburn would be a top two-seed and has been a one-seed for most of the recent weeks. Based on Sagarin, however, Auburn would barely be an 11-seed in the tournament.

The committee looks at each of these metrics to get a sense of how good a team is, but then still considers head-to-head results, “the eye test,” and the other things I mentioned at the top.

Human opinion polls like the AP and Coaches poll are not factored into this. Those polls are great for teams to market and TV networks to use to put numbers by team names, but at the end of the season, they are pretty much meaningless.

I’ve linked to different explanations and rankings throughout the article, but to make it easy, here is where you can find each official ranking if you want to check throughout the rest of the season to see how your team is doing:

I hope that this post did a good enough job of explaining each of the metrics used by the NCAA selection committee and breaking down the components that go into each rating and how they should be interpreted. At the end of the day, it is still humans picking teams and seeding the NCAA tournament, so there will be things that cannot always be explained, but hopefully this post makes it easier to understand what factors went into your team getting a certain seed or missing the tournament altogether.

As always, if you have any additional questions, or think something is explained wrong in this post, please feel free to let me know on Twitter.

Thanks for reading.