OT: And the best drafting team of the past 20 years is...


The Bucks? I aint even gonna click on the link if that's who they think is the best drafting team the past 20 years.
 
The Bucks? I aint even gonna click on the link if that's who they think is the best drafting team the past 20 years.

Well 82games tries to make a formula which then calculates a draft rating

Rating = points/game + rebounds/game + assists/game

Why use this definition? It's the data I have easily on hand, which, while not a good player rating system, is a decent wag for these purposes. Then I group players as follows:

* Star -- 20+ rating
* Solid -- 15 to 19.9
* Role Player -- 10 to 14.9
* Deep Bench -- 5 to 9.9
* Complete Bust -- less than 5
* DNP -- (never played in the NBA)
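
A quick Python sketch of that rating and bucketing, as I read it (the function names and the sample line are mine, not from the article):

def draft_rating(ppg, rpg, apg):
    """Rating = points/game + rebounds/game + assists/game."""
    return ppg + rpg + apg

def category(rating, played=True):
    """Bucket a career rating into the groups listed above."""
    if not played:
        return "DNP"
    if rating >= 20:
        return "Star"
    if rating >= 15:
        return "Solid"
    if rating >= 10:
        return "Role Player"
    if rating >= 5:
        return "Deep Bench"
    return "Complete Bust"

# Example: a 10/7/3 player rates exactly 20.0 and counts as a "Star"
print(category(draft_rating(10, 7, 3)))  # Star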

It says Portland has 5 stars, 4 solids, 8 role players, 8 deep bench, and 7 DNPs. I wonder if we can figure out all the players and what category they fall into.

EDIT: and 8 BUSTS
 
I also wonder if the Timberwolves get credited for Brandon Roy because they technically drafted him but then traded him to us 10 minutes later.
 
Without looking at the article, my first reaction is:

This method would need to be normalized by where the team gets to draft. Obviously a team constantly drafting in the lottery has a better chance of drafting a "star" than the Spurs, who are always drafting at the end of the 1st round.
 
Well 82games tries to make a formula which then calculates a draft rating



It says Portland has 5 stars, 4 solids, 8 role players, 8 deep bench, and 7 DNPs. I wonder if we can figure out all the players and what category they fall into.

You left out Portland's 8 busts. :wink:
 
Without looking at the article, my first reaction is:

This method would need to be normalized by where the team gets to draft. Obviously a team constantly drafting in the lottery has a better chance of drafting a "star" than the Spurs, who are always drafting at the end of the 1st round.

Yup. It needs to be compared to the expected value of all their picks.

Also, not counting defense is going to flaw this study enormously.
 
Also, "star" is 20+? Considering their "formula," that's silly. A 10/7/3 player is a star? Hardly.
 
There are tons of holes, but I guarantee the guy that runs this site is more aware of them than we are. The thing is, he doesn't deny them or try to pass off conclusions drawn from the data he presents as objective.

I actually had a quick exchange with the guy that runs that site... he agreed with me that it probably should've had an additional category at the top (i.e. Superstar), but said that soon the study will be broken down by team so that you can see each of the players and how they've fared.
 
Seems like they need to come up with a new formula if you get the Bucks as No. 1.
 
Seems like they need to come up with a new formula if you get the Bucks as No. 1.

I don't think systems should generally be evaluated by how their results match one's expectations. After all, if a system told you exactly what you already believed, what use would it be? The idea of developing new evaluation systems is to try to bring some non-obvious truths to light.

A system should be evaluated by its methodology, in my opinion. And this system has all sorts of problems. ;) But I assume it's just done as a quick toy, for fun.
 
Seems like they need to come up with a new formula if you get the Bucks as No. 1.

I like that the person who came up with the system didn't alter it because the end results may not fit into conventional wisdom. If he has a team in mind that should be #1 and then tweaks his system to reflect it, it isn't an objective system. Now, because the system was not tweaked, it can be evaluated objectively, and with that in mind, I see a lot of holes in it. I wonder about Brandon Roy and who gets credit for him, as was posted by another member earlier in this thread. I also feel that using "20" as the baseline shortchanges players who truly are "stars".
 
I don't think systems should generally be evaluated by how their results match one's expectations. After all, if a system told you exactly what you already believed, what use would it be? The idea of developing new evaluation systems is to try to bring some non-obvious truths to light.

A system should be evaluated by its methodology, in my opinion. And this system has all sorts of problems. ;) But I assume it's just done as a quick toy, for fun.

Yeah, true, I guess. But it should prove a little or at least lend some validity to what people already believe. I guess all those draft picks of Lamont Strothers, Dave Johnsons, Marcus Browns, Randolph Childresses, Reggie Smiths, Bryon Irvins and Alaa Abdelnabys didn't help our score any. Oh well.
 
There are tons of holes, but I guarantee the guy that runs this site is more aware of them than we are. The thing is, he doesn't deny them or try to pass off conclusions drawn from the data he presents as objective.

I actually had a quick exchange with the guy that runs that site... he agreed with me that it probably should've had an additional category at the top (i.e. Superstar), but said that soon the study will be broken down by team so that you can see each of the players and how they've fared.


I would say that an easy improvement would be to do something like the following:

1) Assign each category (star, solid, etc) a weighting factor. Maybe 10, 8, 6, etc.

2) Total up the weighted values for each team in the last 20 years.

3) Divide the weighted total by the number of times that team's pick was in the lottery.

Still not very scientific, but at least it would capture the fact that teams draft at different spots. It would also be easy to implement.
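
A quick Python sketch of those three steps (the weights and the lottery counts here are just made-up examples, not anything from the 82games study):

# Hypothetical category weights, per step 1
WEIGHTS = {"Star": 10, "Solid": 8, "Role Player": 6,
           "Deep Bench": 4, "Complete Bust": 2, "DNP": 0}

def lottery_adjusted_score(categories, lottery_pick_count):
    """Steps 2 and 3: total the weighted categories for a team's picks
    over 20 years, then divide by how many of those picks were in the lottery."""
    total = sum(WEIGHTS[c] for c in categories)
    return total / lottery_pick_count if lottery_pick_count else total

# Hypothetical team: two Stars and a Complete Bust on three lottery picks
print(lottery_adjusted_score(["Star", "Star", "Complete Bust"], 3))  # ~7.33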


Better yet,

Take the weighted value of a "star," "solid," etc., and divide that value by (number of draft locations) - (location of pick)

So a "star" selected at number 10, with 60 picks total (2 rounds) would be better drafting than a "star" selected at number 1.

Then sum up the normalized-weighted values.
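
And a sketch of that "better yet" version (again, the weights and the 60-pick total are only the example numbers above):

TOTAL_PICKS = 60  # two rounds

def normalized_pick_value(weight, pick_number):
    """Weighted category value discounted by draft position:
    weight / (number of draft locations - location of pick).
    A "Star" weighted 10 taken at #10 scores 10 / 50 = 0.20, while the same
    Star at #1 scores 10 / 59 = ~0.17, so later finds count for more.
    (The very last pick would divide by zero, so this is strictly a sketch.)"""
    return weight / (TOTAL_PICKS - pick_number)

def team_score(picks):
    """Sum the normalized-weighted values; picks is a list of
    (category weight, pick number) pairs."""
    return sum(normalized_pick_value(w, n) for w, n in picks)

# Hypothetical: a Star (weight 10) at #10 plus a Role Player (weight 6) at #45
print(team_score([(10, 10), (6, 45)]))  # 0.2 + 0.4 = 0.6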
 
Well 82games tries to make a formula which then calculates a draft rating



It says Portland has 5 stars, 4 solids, 8 role players, 8 deep bench, and 7 DNPs. I wonder if we can figure out all the players and what category they fall into.

EDIT: and 8 BUSTS

I don't know about the rest of you, but I always have been a bit of a breast man, so it's tough for me to complain about the Blazers draft picks.:drumroll:
 
I don't know about the rest of you, but I always have been a bit of a breast man, so it's tough for me to complain about the Blazers draft picks.:drumroll:

I guess I've always been more of an ass man ... no shortage of those in the past twenty years either.
 
I guess I've always been more of an ass man ... no shortage of those in the past twenty years either.

We drafted the J-Lo of asses over the past 20 years. Sadly, however, now I think we're drafting pussies.:sigh:
 
Without looking at the article, my first reaction is:

This method would need to be normalized by where the team gets to draft. Obviously a team constantly drafting in the lottery has a better chance of drafting a "star" than the Spurs, who are always drafting at the end of the 1st round.

I agree.

The system should be set up so that it grades on a curve, not just for each draft year (important), but for multiple drafts. Some drafts are better than others.

Additionally, it should account for draft day trades, which are common.

Additionally, it should try to account for defense.

Do something like this: list all drafted players each year and assign a category to each player (bust, deep bench, bench, 6th man, starter, above average, star, superstar).

Then, create a formula for each team's draft pick(s) that scores how many better players they passed on with each pick. The more better players passed up, and the more significantly better those players turned out to be, the higher the score.

Total all the draft picks scores. The team with the lowest score is the better (and luckier due to injuries, etc.) drafting team.
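
A rough Python sketch of that scoring idea (the category order and the "how much better" weighting are just one way to read it):

# Categories ordered worst to best, per the list above
CATEGORIES = ["bust", "deep bench", "bench", "6th man",
              "starter", "above average", "star", "superstar"]
RANK = {name: i for i, name in enumerate(CATEGORIES)}

def pick_penalty(drafted, passed_on):
    """Score one pick by the better players passed on with it: each player
    left on the board who turned out better adds to the penalty, weighted
    by how many categories better he turned out to be."""
    d = RANK[drafted]
    return sum(RANK[p] - d for p in passed_on if RANK[p] > d)

def team_penalty(draft_history):
    """Total the penalties across a team's picks; the lowest total wins.
    draft_history is a list of (drafted category, [passed-on categories])."""
    return sum(pick_penalty(drafted, passed) for drafted, passed in draft_history)

# Hypothetical pick: drafting a "bench" player while a future "star" and
# a "starter" were still on the board
print(pick_penalty("bench", ["star", "starter", "bust"]))  # (6-2) + (4-2) = 6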
 
