
[Charts: Top 3 Models (by accuracy rank over time); Model Accuracy Over Time (Experimental); Correlation Matrix; Model Projections By Pick. Note: only models with multiple draft predictions are shown.]

How are model correlations calculated?

Each model ranking has a numeric value, or projection, associated with each prospect (the number in parentheses). For rankings that don't publish projections, we use the expected value of each pick as a replacement. From there, we calculate the Pearson coefficient of the projections over the intersection of prospects from each pair of rankings.

How do you calculate the expected value of a draft pick?

There are a number of published methods for calculating the expected value of a draft pick. While each has its advantages and disadvantages, we decided to use Basketball-Reference's Win Shares-based expected value formula, mostly because it was externally published, based on a stat commonly used to judge impact, and had a decent distribution.

How do you determine the Simple Consensus ranking?

Given the top 60 projections from each model ranking, we linearly map the domain [min(projections), max(projections)] to the range [0, 1] to create a normalized scale. We use these scales to calculate each prospect's normalized projections, and use the average of each prospect's normalized projections as their consensus projection. Sorting prospects by their consensus projection gives us our Consensus ranking.

Note: This is not perfect, and it assumes that each model has a similar distribution of projections. This is almost certainly false, but the Model Projections By Pick chart above (for the top 60 picks) shows that they're generally consistent.
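The normalize-then-average procedure described above can be sketched as follows (hypothetical prospect names and illustrative projection values; a real run would use each model's top 60 projections):

```python
def normalize(projections):
    # Linearly map [min(projections), max(projections)] onto [0, 1].
    lo, hi = min(projections.values()), max(projections.values())
    return {name: (p - lo) / (hi - lo) for name, p in projections.items()}

# Hypothetical projections from two models (illustrative values only).
models = [
    {"Alpha": 14.0, "Bravo": 10.0, "Charlie": 6.0},
    {"Alpha": 0.90, "Charlie": 0.80, "Bravo": 0.55},
]

scaled = [normalize(m) for m in models]

# Consensus projection: the average of each prospect's normalized
# projections, taken over the models that include that prospect.
prospects = set().union(*scaled)
consensus = {p: sum(s[p] for s in scaled if p in s) / sum(p in s for s in scaled)
             for p in prospects}

# Sorting by consensus projection (descending) yields the Consensus ranking.
ranking = sorted(consensus, key=consensus.get, reverse=True)
```

Because each model is rescaled to [0, 1] before averaging, a model that projects win shares and a model that outputs a 0-to-1 score contribute on equal footing.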

How do you determine the Advanced Consensus ranking?

A blog post is coming!

Why don't Advanced Consensus rankings have correlation data?

This is a consequence of how the Advanced Consensus is calculated.

How do you ensure the historical model projections are out of sample?

There is no way to enforce that the models' projections are fully out of sample; we trust the authors to do their due diligence in ensuring that no over-fitting biases exist in their back-testing methods. We have followed up with the authors and suggested that historical projections be generated via year-by-year cross-validation (fully withholding all data for a draft year while generating its projections). The majority of the models use some variation of this technique; you can follow up with the authors for the specific back-testing techniques they used. That said, we do apply one sanity check to validate that a model's projections were indeed calculated out of sample: we ensure that Michael Beasley is ranked #1 in the 2008 draft class. If a draft model says otherwise, you shouldn't trust its methods.
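The suggested year-by-year cross-validation can be sketched like this (the `train` and `predict` functions are trivial stand-ins, not any actual model; the data values are made up for illustration):

```python
# Leave-one-draft-year-out cross-validation: to generate out-of-sample
# projections for a draft year, train on every other year's data.

def train(rows):
    # Stand-in for a real training routine: "learn" the mean target value.
    return sum(r["value"] for r in rows) / len(rows)

def predict(model, row):
    # Stand-in for scoring one prospect with the trained model.
    return model

data = [  # illustrative prospect rows keyed by draft year
    {"year": 2007, "value": 4.0},
    {"year": 2008, "value": 6.0},
    {"year": 2009, "value": 5.0},
]

projections = {}
for held_out_year in {r["year"] for r in data}:
    # Fully withhold the held-out year's data while fitting...
    model = train([r for r in data if r["year"] != held_out_year])
    # ...then project only that year's prospects with the fitted model.
    for row in data:
        if row["year"] == held_out_year:
            projections[row["year"]] = predict(model, row)
```

The key property is that each year's projection is produced by a model that never saw that year's data, which is exactly the out-of-sample guarantee the back-testing suggestion aims for.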

How is the Experimental Accuracy Score calculated?

Check out this blog post by Jesse Fischer for an in-depth explanation of how we calculate scores.

Additionally, use the 'Show prospect-level score penalties' checkbox to see how each individual player affects that model's scores, and the 'Info' modal next to each score to see its top 5 biggest misses.

There are some known limitations with this methodology, including players missing from rankings (which negatively affects scores) and model prediction selection bias (which negatively affects models that predict more players). We are working to improve our scores and are flagging them as experimental for now.

Why is my model score so low?

Some models were built using only available college data and could not include high school and international prospects. To compare those models' accuracy more fairly, filter prospects to 'Only college prospects'. Check 'Show prospect scores' to see how your score was derived from your rankings.

Why don't you have RPM or [my favorite metric]?

We strive to give the people what they want - just let us know using a contact link below.

Suggestions, comments, questions? Found an error?

If you have suggestions, comments, or questions about this draft board, please contact Viraj Sanghvi via Twitter or email.

If you found an error in any data in this tool, please contact Jesse Fischer via Twitter or email.

Want to add or remove your rankings?

All rankings are the property of their creators. If you are a ranking creator, and would like to add or update your rankings, change links, or have your rankings removed, please contact Jesse Fischer for support.