WestConn Assessment Information

The Busy Person’s Guide to Assessment

Introduction

Assessment of student learning outcomes is mandated by the New England Commission of Higher Education (NECHE), the Board of Regents, the Connecticut Legislature, and many professional organizations. The following set of questions and answers is provided by the WCSU Assessment Committee as a brief guide for reviewing, implementing, and reporting your department’s assessment program. If you’re extremely busy, start with Part II.

QUESTIONS

Part I

  1. What is a departmental assessment program?
  2. What should be included in the department’s list of educational objectives?
  3. Who should be assessed?
  4. How can we ensure that all students are assessed? What if students don’t want to participate?
  5. What if an appropriate nationally normed test of achievement in the major is not available?
  6. Can you be more specific about acceptable and unacceptable measures of student learning?
  7. Is one good measure of student learning enough to satisfy the assessment requirement?
  8. Whatever happened to “value-added” assessment?
  9. Can you be more specific about standards? How do you set up and apply standards in assessment?
  10. How can we add assessment to the busy schedules of faculty and students?
  11. When and where do we report the results of assessment?

Part II

  12. What should be included in the assessment report?
  13. Who will read my assessment report?
  14. If the office of Institutional Research and Assessment already has the department assessment plan on file, then why do I need to include it in the annual report?
  15. Are there required formats for reporting data?
  16. What do you mean by analysis and interpretation?
  17. Do we have to use assessment results for the purpose of improvement?

APPENDICES

  A. Format for Reporting Data
  B. Departmental Assessment Grid


ANSWERS


Part I

1. What is a departmental assessment program?

A departmental assessment program evaluates the effectiveness of the department's academic programs in terms of measurable student learning outcomes. The program consists of

a) lists of educational objectives for each of the department’s major programs expressed in terms of student learning outcomes
b) measures of student achievement for each of the objectives
c) methods of collecting data, realistic timetables, and needed resources
d) procedures for involving departmental faculty in reviewing and using the results of assessment to improve student learning — including program revision and revision of the assessment plan when necessary
e) annual collection, analysis, and reporting of the results of assessment.

2. What should be included in the department’s list of educational objectives?

The list of educational objectives for each academic program in the department may include knowledge, skills, competencies, conduct, and values specific to the major. The department should identify no more than three to five principal objectives and may concentrate on meeting one specific objective in a given year. Every stated objective should be feasible to assess. Formulate each objective so that there is a credible connection between the objective and the method of assessing it.

3. Who should be assessed?

All majors should be assessed, typically as they near completion of their program. Voluntary testing in which only some majors participate is not advisable. Some departments may wish to assess the learning of non-majors in their service courses and the learning of students transferring from other colleges and universities. However, the focus of assessment should be on graduates from the department’s major programs.

4. How can we ensure that all students are assessed? What if students don’t want to participate?

The most viable solution is to integrate assessment into the curriculum. For example, a department might do one or more of the following:

a) design internship evaluations so that they provide useful information about student performance on key objectives
b) incorporate projects and exit exams into a capstone course
c) pre-test students in an introductory course.

Students will engage in assessment activities that are an integral, logical part of their education. Since voluntary participation is highly unlikely to produce satisfactory levels of student involvement, both the benefits of participating and the costs of abstaining need to be made evident to the students in terms that make sense to them.

5. What if an appropriate nationally normed test of achievement in the major is not available?

Departments are not required to use nationally normed tests. In fact, we discourage the use of nationally normed tests if they do not provide relevant information about student achievement in the major. One advantage of nationally normed tests is that they provide a comparative standard of performance. A disadvantage is that they often do not relate directly to a department’s program objectives. Popular alternatives to the nationally normed exam are locally developed exams and performance-based assessments (including capstone projects, portfolios, and recitals). Locally developed exams are scored objectively. Performance-based assessments typically use a criterion-referenced scoring guide — a set of guidelines for distinguishing levels of excellence, sometimes called a “rubric.”
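
To make “rubric” concrete, the sketch below expresses a criterion-referenced scoring guide for a single objective in Python. The criteria, level descriptors, and the score_report helper are hypothetical illustrations, not a WCSU template.

```python
# A minimal sketch of a criterion-referenced scoring guide ("rubric") for one
# objective. Criteria and descriptors are invented for illustration.

writing_rubric = {
    "thesis": {
        1: "No identifiable thesis",
        2: "Thesis present but vague",
        3: "Clear, focused thesis",
        4: "Clear thesis that frames a sophisticated argument",
    },
    "evidence": {
        1: "Claims unsupported",
        2: "Some relevant evidence, weakly integrated",
        3: "Relevant evidence consistently supports claims",
        4: "Compelling evidence, critically evaluated",
    },
}

def score_report(scores):
    """Map a student's per-criterion scores (1-4) to rubric descriptors."""
    return {criterion: writing_rubric[criterion][s] for criterion, s in scores.items()}

print(score_report({"thesis": 3, "evidence": 2}))
```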

6. Can you be more specific about acceptable and unacceptable measures of student learning?

Acceptable measures can be divided into direct and indirect measures of student learning.

Direct measures include

a) the capstone experience
b) portfolio assessment
c) standardized tests
d) certification and licensure exams
e) locally developed exams
f) essay exams blind scored by multiple scorers
g) juried review of student performances and projects
h) external evaluation of student performance in internships.

Indirect measures include

a) surveys (Survey of Graduates, National Survey of Student Engagement, etc.)
b) grade point averages (GPA)
c) grades in the major
d) exit interviews
e) placement and graduate program acceptance data.

Unacceptable measures include

a) SAT scores, Accuplacer scores, or other tests administered to entering students (unless accompanied by comparable post-test scores)
b) faculty/student ratios
c) curriculum review documents
d) accreditation reports
e) retention and transfer rates
f) graduation rates and length of time to degree
g) demographic, biographic, and administrative data.

7. Is one good measure of student learning enough to satisfy the assessment requirement?

No. Departments should use two or more measures of student learning. Over several years a department might employ a combination of the following methods in its assessment plan:

a) a capstone project, thesis, or other culminating experience
b) internship evaluations (self-report compared to supervisor’s report)
c) writing test scores
d) Academic Profile pre-test and post-test scores
e) graduate school placement and acceptance data
f) surveys, interviews, and/or focus groups with alumni or students in their last semester.

Once a measure has been shown to be valid and reliable, evaluators will look for consistency in its use and interpretation. Using one method for a year and then switching to another without good reason raises a “red flag” for evaluators. One measure can be used for several objectives. For example, a capstone project might be used to measure knowledge in the major, research skills, and communication skills.

8. Whatever happened to “value-added” assessment?

“Value added” is still with us. We can demonstrate value added by testing both entering and exiting students. However, pre-testing is not necessary if one is highly confident that students know little or none of the content they are to master through completing the degree program. Pre-testing is particularly appropriate for transfer and graduate students, but the ongoing assessment of majors through various levels of development is always impressive. More fundamental than demonstrating growth, however, is the need to measure student achievement against clearly stated standards. Though it is difficult to demonstrate that a department’s programs are the primary contributor to student learning, it is less difficult to show that students are completing the department’s programs having reached an acceptable level of achievement relative to specific educational objectives.
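
As a rough illustration of the arithmetic, the following sketch compares hypothetical entering and exiting cohorts. All scores are invented, and for simplicity it compares cohort means rather than matched pre/post pairs for the same students.

```python
# A minimal sketch, with hypothetical scores, of demonstrating "value added"
# by comparing entering (pre-test) and exiting (post-test) performance.
# Cohort means are used here; a matched-pairs design would track the same
# students at entry and exit.

from statistics import mean

pre = [42, 55, 38, 60, 47]    # hypothetical entering-student scores
post = [71, 80, 65, 88, 74]   # hypothetical exiting-student scores

print(f"Mean pre: {mean(pre):.1f}, mean post: {mean(post):.1f}")
print(f"Mean gain: {mean(post) - mean(pre):.1f} points")
```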

9. Can you be more specific about standards? How do you set up and apply standards in assessment?

Standards constitute performance goals and should be defined in terms appropriate to the relevant method of measurement. Where comparative data are available, a department might define standards in terms of the percentage of students at or above a particular percentile. A department might state that all of its students should score above the 70th percentile on a standardized test in the major — provided that this is a meaningful expression of standards. Departments with licensure exams might want to state that no fewer than 95% of their students will pass the exam on the first attempt. Similarly, departments with a criterion-referenced capstone project (or internship evaluations) might want to state that all students will receive at least a satisfactory score in each substantive area, with 30% performing at a level higher than satisfactory.
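
For instance, a department could check whether a cohort meets standards of the kinds just described. The sketch below uses hypothetical results and thresholds; the helper name at_or_above is illustrative only.

```python
# A minimal sketch, with hypothetical data and thresholds, of checking a
# cohort's results against the standards described above.

def at_or_above(scores, cutoff):
    """Fraction of students scoring at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

# Hypothetical first-attempt licensure results (True = passed).
first_attempt = [True] * 19 + [False]
licensure_rate = sum(first_attempt) / len(first_attempt)
print(f"First-attempt pass rate: {licensure_rate:.0%} (standard: at least 95%)")

# Hypothetical capstone rubric scores (1 = unsatisfactory, 2 = satisfactory,
# 3 = above satisfactory).
capstone = [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]
print(f"All at least satisfactory: {at_or_above(capstone, 2) == 1.0}")
print(f"Above satisfactory: {at_or_above(capstone, 3):.0%} (standard: at least 30%)")
```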

Performance-based assessments present specific problems. Although standards are usually written into scoring criteria, performance-based assessments have little credibility unless results are analyzed by comparison to performance of students outside the department, by external review, or through conscientious discussion among faculty of the relative strengths and weaknesses of student performance. An evaluation of projects would be strengthened by the use of external judges.

Similarly, evaluators would look favorably upon a department that identifies areas of weakness indicated by particular measures and proposes actions to address them (for example, holding a faculty workshop on teaching a particular method). Whatever your approach, remember that statements such as “All graduating students passed the department’s exit exam” are not credible indicators of standards unless supplemented with appropriate analysis, interpretation, and follow-up.

10. How can we add assessment to the busy schedules of faculty and students?

To the extent that one can incorporate assessment into daily practice, assessment will not appear as an additional burden. We need to find creative ways to incorporate assessment into curriculum and instruction so that it is part of our normal workload. The burden will seem unbearable to a chairperson who tries to pull together disparate elements of an uncoordinated assessment program on the weekend before the departmental annual report is due. For the chairperson who plans ahead and fully involves faculty in the collection, interpretation, and use of assessment data, the burden will be less onerous.

11. When and where do we report the results of assessment?

  1. The annual assessment update should be a separate section of each academic department’s annual report, beginning on a new page and clearly labeled “Assessment Update.” When the annual report is sent from an academic department to the dean’s office, the annual assessment update section should be sent to the office of Institutional Research and Assessment (IRA). The IRA director will be responsible for delivering copies to the Assessment Committee.
  2. When a new or revised program or option is proposed, the appropriate assessment documents should be included in the proposal for review by CUCAS or the Graduate Council. At the same time, a copy of the proposal should also be forwarded to the director of Institutional Research and Assessment for review.

Part II

12. What should be included in the assessment report?

The four areas A through D below should be addressed in the annual assessment update.

  A. EXPECTED EDUCATIONAL OUTCOMES – Goals for Student Learning
    1. Define each goal so that it is specific and measurable.
    2. Include an action verb in each goal statement. Indicate what you intend for students to know, do, or value when they have completed the program (e.g., skills, competencies, attitudes, behaviors).
    3. It may be useful to state the source of your goals. For example, some goals are implicit in our mission statement and General Education program, while others are mandated by outside agencies or associations (e.g., professional accreditation standards).
  B. PLAN – For Gathering Information to Measure Student Attainment of Each Goal
    1. For each goal, specify the timetable, procedures, and indicators. These may vary depending on the goal.
    2. If pre-test and post-test data are needed, include a schedule for obtaining them.
    3. There should be evidence that the methods chosen are both valid and reliable. Methods may be qualitative and/or quantitative.
  C. RESULTS – Progress Made in Implementing the Plan
    1. List the results of the information you gathered on student attainment of the goals.
    2. Interpret information collected during the current academic year in the context of the goals your department has set over the previous five or more years.
    3. You may also include material not directly related to student learning outcomes, such as alumni surveys and input from advisory boards.
  D. ACTIONS – The Feedback Loop
    1. What changes have you made in response to the information gathered in step C? For example, what are the changes or recommended changes in goals, programs, options, and/or instruction?
    2. Are the assessment methods you are using adequately measuring what you want to measure? If not, what measures do you plan to introduce?
    3. Describe the use of findings on student learning in your curriculum review process.
    4. What is the timetable for implementing proposed changes?
    5. List specific examples of changes, or report that changes were judged to be unnecessary after departmental review of the findings.

13. Who will read my assessment report?

Like other planning and assessment documents, the annual Assessment Update will be read beyond your department: summaries and recommendations will be provided to members of the university community. In addition, summaries of assessment reports will be used in the Self-Study for NECHE reaccreditation, as well as in Performance Measures reports to the CSU System Office, the Department of Higher Education, and the state legislature.

14. If the office of Institutional Research and Assessment already has the department assessment plan on file, then why do I need to include it in the annual report?

Plans previously filed provide baseline information. Most departments have refined their assessment plans since the initial filing in 2000-01. Please be sure that the plan now on file for your department is current and representative of the approaches you use to assess student learning in all of your programs. You should confirm the validity of this plan by the end of the fall 2005 semester. Thereafter, beginning in May 2006, your Assessment Update should include only revisions to the plan and/or actions taken as a result of assessment evidence.

15. Are there required formats for reporting data?

No, but Appendices A and B of this report contain formats strongly recommended for reporting both quantitative and qualitative measures of student learning outcomes. Members of the Assessment Committee and staff in the office of Institutional Research and Assessment can provide examples of exemplary practices.

Data should be collected so as to supply credible information about student achievement and to identify relative strengths and weaknesses. Numerous styles of table are possible, depending on the nature of the data to be reported. Tabular summaries should also support longitudinal comparison of departmental graduates and show how close the department is to assessing all of its graduates with that particular measure. Most data can be reported in tables if they are being analyzed within an explicit conceptual framework. Tables help the reader digest the assessment results at a glance.

However, tables should not be inserted into your report without comment. Analysis and interpretation are essential components of assessment, and the reader will want to know what you make of the data in the tables. No standard format is prescribed for the analysis section, but it is your analysis and interpretation that will lead to program improvement, the objective of assessing student learning outcomes.
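
As one illustration of such a table, the sketch below prints a longitudinal summary with a coverage column. The cohort figures are hypothetical; the Appendix formats remain the recommended templates.

```python
# A minimal sketch, with hypothetical cohort figures, of a longitudinal
# summary table: results by graduating class, plus a coverage column showing
# how close the department is to assessing all of its graduates.

cohorts = [
    # (year, graduates, assessed, mean rubric score on a 4-point scale)
    (2003, 28, 21, 2.9),
    (2004, 31, 27, 3.0),
    (2005, 26, 26, 3.2),
]

print(f"{'Year':<6}{'Graduates':>10}{'Assessed':>10}{'Coverage':>10}{'Mean/4':>8}")
for year, grads, assessed, mean_score in cohorts:
    coverage = assessed / grads
    print(f"{year:<6}{grads:>10}{assessed:>10}{coverage:>10.0%}{mean_score:>8.1f}")
```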

16. What do you mean by analysis and interpretation?

Analysis helps the reader understand the data by describing general trends in the data and pointing out differences and similarities among data points. Interpretation relates data to the objectives they are supposed to measure, explores the relationships between multiple measures of an educational objective, qualifies, amplifies, draws inferences, and evaluates.
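
A small sketch of both steps, using invented scores: analysis describes the trend in one measure across years, and interpretation relates two measures of the same objective.

```python
# A minimal sketch, with hypothetical scores, of the two steps: analysis
# (describing a trend across years) and interpretation (relating two
# measures of the same objective for the same students).

from statistics import mean

# Analysis: mean writing-rubric score by graduating year.
writing_by_year = {2003: [2.4, 2.8, 2.6], 2004: [2.9, 3.0, 2.7], 2005: [3.1, 3.0, 3.3]}
for year, scores in sorted(writing_by_year.items()):
    print(f"{year}: mean writing score {mean(scores):.2f}")

# Interpretation: do capstone writing scores agree with internship
# supervisors' communication ratings for the same five students?
capstone = [2.5, 3.0, 3.5, 2.0, 3.0]
supervisor = [3.0, 3.0, 4.0, 2.5, 3.5]
gap = mean(abs(a - b) for a, b in zip(capstone, supervisor))
print(f"Mean absolute gap between the two measures: {gap:.2f} on a 4-point scale")
```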

Analysis and interpretation address questions such as the following.

  • What do the data say about your students’ mastery of subject matter, of research skills, of writing and speaking, and so on?
  • What do the data say about your students’ preparation for taking the next step in their careers?
  • Are there respects in which your students are outstanding?
  • Do they consistently score at the 85th percentile or above on certain subjects in the MFAT?
  • Do they receive high praise from internship supervisors?
  • Are they consistently weak in some respects?
  • Are many of them getting good jobs, being accepted into good graduate programs, reporting that they are satisfied with the education they have received from your department?
  • Does their performance on capstone projects indicate that the research skills of your students are relatively weak? Are there areas where their performance is adequate but undistinguished?

An attempt to address such questions through analysis and interpretation is an essential piece of any conscientious assessment program.

17. Do we have to use assessment results for the purpose of improvement?

Yes. The purpose of student learning outcomes assessment is to improve programs. In any given year it may not be necessary or appropriate to launch a program improvement initiative based on assessment results; however, evaluators consistently fault assessment programs whose results are not being used to improve curriculum and instruction, and we expect to see a significant increase in the number of departments using assessment for improvement.

Where efforts to improve programs on the basis of assessment information are under way, their progress should be recognized, supported, and reported; such reports are telling indicators of a vital, ongoing assessment program. If your assessment program is not giving you useful information for program improvement, then the assessment program itself should be improved until it does. Even disconcerting results are better than non-results, provided that they are discussed in the department and used to make recommendations for program improvement.

Note: The primary source of the structure and content was the Office of the Provost, Southeast Missouri State University. This document was edited and tailored to WestConn’s needs by Carol Hawkes, Jerry Wilcox and members of the Assessment Committee.