Rankings Methodology


IPEDS/College Scorecard Ranking Algorithm: Ranking and Scoring Methodology

Scoring By Individual Criteria

The ranking algorithm was developed to provide a tool to assist potential undergraduate and graduate students in the selection of an academic program housed within a college or university in the United States.

The algorithm ranks based on the following criteria:

  • Estimated yearly expense
  • Selectivity
  • Student-faculty ratio
  • Six-year graduation rate
  • Academic and career counseling (YES/NO)
  • Ten-year median earnings
  • Post-graduation salary

Each criterion (except academic and career counseling) was scored out of 10 as follows:

Expense Score

Estimated yearly expense – The estimated expense variable in the College Scorecard data set captures the estimated yearly cost of attendance per student at a given college or university as a whole number. Yearly expense varied greatly, with a range of nearly $59,000 per year between the most and least expensive schools. To score expense out of 10 points, we used percentile scores. The raw estimated yearly expense variable was sorted in ascending order and ranked by percentile. We developed an expense score (out of 10 points) by multiplying the expense percentile by 1.0101 (to bring the top percentile of 99 up to 100) and dividing by ten (so that final scores top out at 10).
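
The short Python sketch below illustrates one way this percentile scoring could be implemented. The original analysis was done in SAS, and the function name, column values, and the exact percentile formula here are assumptions, not the published code.

    import pandas as pd

    def percentile_score(values: pd.Series, multiplier: float = 1.0101) -> pd.Series:
        # Spread ascending ranks over a 0-99 percentile scale (an assumption about
        # how the percentiles were computed; the original analysis used SAS).
        pct = (values.rank(method="min") - 1) / (len(values) - 1) * 99
        # Multiply by 1.0101 to curve the 99th percentile up to 100, then scale to 10.
        return pct * multiplier / 10

    costs = pd.Series([9_500, 22_300, 41_800, 68_400])    # hypothetical yearly expenses
    print(percentile_score(costs).round(2).tolist())      # [0.0, 3.33, 6.67, 10.0]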

Acceptance Rate Score

Acceptance rate – We calculated an institution’s selectiveness by dividing the total number of students who were accepted by the total number who applied. The resulting proportion can be interpreted as the share of the applicant pool who were offered admission. We did not take into account the number who actually enrolled, because enrollment is a choice of the applicant rather than a property of the institution.

Selectiveness was sorted in ascending order, with the lowest proportion indicating the most selective college or university. We subtracted each institution’s acceptance proportion from one and multiplied the result by 10 to produce a score out of 10 points. As is our custom, the best-rated institution in each category receives a score of 10/10. To accomplish this, we calculated the difference between 10 and the highest score and added that difference to every score.

Thus, the selectiveness/acceptance rate score provides a relative measure that indicates exactly how selective a given institution is compared to the most selective in the ranking group.

For example, if ABC university receives a 10 for selectiveness and XYZ university receives an 8.75, ABC university’s acceptance rate is 12.5 percentage points lower than XYZ university’s.
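
A minimal sketch of this scoring, assuming acceptance rates expressed as proportions (the function name is hypothetical):

    def selectiveness_scores(acceptance_rates):
        # Raw score: 10 * (1 - acceptance proportion), so lower acceptance scores higher.
        raw = [10 * (1 - rate) for rate in acceptance_rates]
        # Shift every score so the most selective institution lands exactly on 10.
        shift = 10 - max(raw)
        return [score + shift for score in raw]

    # ABC accepts 5% of applicants, XYZ accepts 17.5%: scores of 10.0 and 8.75.
    print(selectiveness_scores([0.05, 0.175]))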

Student to Faculty Ratio

Student-faculty ratio – The College Scorecard data set provides a variable that represents an institution’s student-faculty ratio (X:1) as a whole number. Student-faculty ratio was sorted in ascending order and ranked by percentile. We developed a student-faculty ratio score (out of 10) by multiplying the student-faculty ratio percentile by 1.01 and dividing by ten.
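
Assuming the same percentile mechanics as the expense score, the hypothetical percentile_score helper sketched under the Expense Score section could be reused here with the 1.01 multiplier:

    ratios = pd.Series([6, 11, 17, 25])                        # hypothetical X:1 ratios
    sf_ratio_score = percentile_score(ratios, multiplier=1.01)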

Graduation rate

Six-year graduation rate – Graduation rates were provided as whole numbers representing the percentage of the total enrolled student body who graduated within six years of enrollment. Six-year graduation rate values were sorted in descending order. The highest rate was divided into 100 (100/six_year_graduation_rate), which provided a multiplier by which every original six-year graduation rate was multiplied to produce a score out of 100. The multiplier serves as a curving mechanism that forces the highest six-year graduation rate to equal 100. We then divided this score by 10.
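
A minimal sketch of this curving step, using hypothetical whole-number graduation rates:

    def graduation_rate_scores(rates):
        multiplier = 100 / max(rates)             # curves the highest rate to 100
        return [rate * multiplier / 10 for rate in rates]

    print(graduation_rate_scores([91, 78, 55]))   # highest rate curves to exactly 10.0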

Career Counseling

Academic and Career Counseling (YES/NO) – Schools offering academic and career counseling are flagged in the original College Scorecard data set. We formed a composite variable indicating whether a school offers both types of programs, as has been done in other ranking algorithms. For on-screen display, we simply flag programs as either “Yes” or “No”.
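
A sketch of the composite flag, assuming the two underlying Scorecard indicators are available as booleans (the names here are hypothetical):

    def counseling_flag(has_academic: bool, has_career: bool) -> str:
        # "Yes" only when a school offers both academic and career counseling.
        return "Yes" if has_academic and has_career else "No"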

Median Earnings

Ten-year median earnings – Ten-year median earnings were sorted in descending order. The highest income served as the divisor for all other incomes: each income was divided by the highest income, and the result was multiplied by ten to yield a score out of 10 points. Thus, each ten-year median earnings score is ten times the proportion that a school’s earnings represent relative to the highest-earning school.
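
A minimal sketch of this proportional scoring, with hypothetical earnings values:

    def earnings_scores(median_earnings):
        top = max(median_earnings)
        # Ten times each school's earnings as a proportion of the top school's.
        return [10 * income / top for income in median_earnings]

    print(earnings_scores([68_000, 51_000, 42_500]))   # [10.0, 7.5, 6.25]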

Value/ROI

Ten-year value factor – We created a value factor from the median student debt and the cost of schooling, divided by the median income of a teacher employed in that state. This figure gives a student a sense of how much they will spend to attend a college, the debt they may incur, and their income after graduating. Colleges were then ranked on this calculation.
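
The exact combination of debt and cost is not spelled out above, so the sketch below simply sums them before dividing by the state’s median teacher income; treat that combination as an assumption rather than the published formula.

    def value_factor(median_debt, yearly_cost, state_median_teacher_income):
        # Assumed combination: total outlay (debt plus cost of schooling) relative
        # to the median income of a teacher employed in that state; lower is better.
        return (median_debt + yearly_cost) / state_median_teacher_income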

Final Rankings

Ranking criteria – All criteria scores were ranked using the PROC RANK procedure in SAS. Ties all receive the same rank as the first tied observation in the data set, and the next rank skips ahead by the number of tied observations. For example, if three identical scores occur at rank 13 for a given criterion, the other two institutions with that score are also ranked 13th, and the next distinct value is ranked 16th for that criterion. The final score was ranked in the same way.
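
The original ranking used PROC RANK in SAS; for illustration only, pandas’ “min” tie method reproduces the same competition-style behavior:

    import pandas as pd

    scores = pd.Series([9.9, 9.8, 9.8, 9.8, 9.7])
    # Three ties share the lowest rank in the tied block; the next value skips ahead.
    print(scores.rank(method="min", ascending=False).tolist())   # [1.0, 2.0, 2.0, 2.0, 5.0]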

Final score – The final score was calculated through a three-step process using a weighted average of percentiles for each criterion, as follows:

Step 1: Final Ranking Metric = (% estimated yearly expense)*0.16667 + (% selectiveness)*0.16667 + (% student-faculty ratio)*0.16667 + (% six-year graduation rate)*0.16667 + (% academic and career counseling)*0.16667 + (% ten-year median earnings)*0.16667

Step 2: Multiplier = 100/Highest Final Ranking Metric

As with other scores, we computed a multiplier that acts as a curving mechanism to force the top school’s score to initially be equal to 100.

Step 3: Final Ranking Score = (Final Ranking Metric * Multiplier)/10

We then multiplied the final ranking metric by the multiplier for every school in the data set and divided by ten.
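
A compact sketch of Steps 1 through 3 (the function names are hypothetical); the final print reproduces the worked example that follows.

    def final_ranking_metric(criterion_percentiles, weight=1 / 6):
        # Step 1: equally weighted average of the six criterion percentiles.
        return sum(criterion_percentiles) * weight

    def final_ranking_scores(metrics):
        multiplier = 100 / max(metrics)                            # Step 2
        return [round(m * multiplier / 10, 2) for m in metrics]    # Step 3

    print(final_ranking_scores([77.85, 58.10, 72.0]))   # [10.0, 7.46, 9.25]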

Example of ranking methodology Steps 2 and 3:

Step 2: The top school receives a ranking metric of 77.85, which means that the weighted average of all of its percentiles was equal to 77.85. This number divided into 100 is:

100/77.85 = 1.28452152

Step 3: We then multiply each school’s ranking metric by that number and divide by ten to calculate the final ranking scores.

Three examples:

Top school

Top school’s ranking metric = 77.85

Top school’s Final Ranking Score = (77.85*1.28452152)/10 = 10

Two other schools in data set 

School A

School A’s computed ranking metric = 58.10

School A’s Final Ranking Score = (58.10*1.28452152)/10 = 7.46

School B 

School B’s computed ranking metric = 72.0

School B’s Final Ranking Score = (72.0*1.28452152)/10 = 9.25

Copyright 2016 tobecomeateacher.org All Rights Reserved