Approaches to Assessment & Institutional Research
Actuarial Data:
The data are gathered by the institution and are published annually in the WC FactBook. Data are also submitted to the federal government (IPEDS), the state government (MDHE annual factbook), and to a variety of third parties, e.g., publishing companies participating in the Common Data Set initiative. Examples include racial/ethnic/gender composition of the student body, level of endowment, student/faculty ratios, selectivity ratio, test scores and class rank of entering students, graduation rates, etc.
Data are relatively easy to collect and permit comparisons over time and across institutions.
There is little evidence supporting causal links between these variables and an institution's educational effectiveness defined in terms of the student outcomes it produces. Discussion of quality tends to center on variables that are easy to collect.
Ratings of Institutional Quality:
U.S. News & World Report Rankings of National Liberal Arts Institutions and similar publications.
Rankings are relatively simple to report and appear easy to understand.
In spite of efforts to combine actuarial data with the ratings of "experts", rankings remain highly subjective and appear to be very dependent on proprietary systems that differ in the variables compared and the weights assigned. There is also no clear evidence that rankings reflect actual student learning. Institutional reputations tend to influence results and may or may not accurately reflect institutional quality.
Indirect Measures of Student Learning:
NSSE; Senior Surveys and Exit Interviews; Career Development Survey; Westminster Seminar EBI First-year Initiative; CIRP; IDEA; SSI; CORE; HERI Faculty Survey.
Self-reported student data from well-designed survey processes are highly correlated with quantifiable measures of student learning progress. National surveys also tend to be relatively inexpensive to administer and comparisons over time and across institutions are possible.
Surveys are indirect measures of student learning. Because they depend on self-evaluations, knowledge of survey design and administration is necessary to generate meaningful data.
Direct Measures of Student Learning:
CLA; writing samples; ETS Major Field Exams; embedded assessments; performances; portfolios.
Student learning is assessed by directly measuring what students have learned. Direct measures can be used to answer the question of value added with appropriate design.
Performance may vary with changes in student motivation. Comparisons over time and across institutions are difficult to make in ways that are valid and reliable without significant investments for implementation and maintenance.
Adapted from: Chun, Marc. "Looking Where the Light is Better: A Review of the Literature on Assessing Higher Education Quality." AAC&U Peer Review, Winter/Spring 2002.