National Institute for Learning Outcomes Assessment

NILOA Guest Viewpoints

We’ve invited learning outcomes experts and thought leaders to craft a Viewpoint. We hope that these pieces will spark further conversations and actions that help advance the field. To join the conversation, click the link below the Viewpoint. You can also sign up here to receive monthly newsletters that headline these pieces along with NILOA updates, current news items, and upcoming conferences.

 

Using Evidence to Make a Difference
Stan Ikenberry and George Kuh, NILOA

 

Imagine a physician's office that routinely orders blood tests but never reviews the results. Malpractice? Or an automobile manufacturer that collects product safety data only to file them away. A cover-up? Or images from a global satellite network that no one examines. A waste? Or a university that boasts of a high-quality nursing program even though a fifth of its graduates regularly fail the licensing exam? False advertising?

Gathering performance data is certainly a worthwhile activity, but what ultimately matters most is using the evidence. The capacity of American colleges and universities to assess student learning outcomes has expanded significantly over the last two decades. During the same period, however, the actual use of assessment results to improve student success and institutional performance has lagged. Why is that? And how can the gap be closed?

The failure of campuses to use assessment findings in consequential ways is due in large part to the origins of the assessment movement itself. On many campuses, assessment emerged in response to external forces. Two separate surveys conducted by the National Institute for Learning Outcomes Assessment (NILOA) confirmed that regional accreditors of academic institutions and specialized accreditors of specific programs are the primary forces prompting the expansion of assessment work on college campuses (Kuh & Ikenberry, 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014). And while accreditors were at the top of the list of those calling for more and better assessment, they were not the only interested outside party. Government, employers, and several higher education associations also asked for more information about what students know and can do.

These external demands for accountability stimulated more assessment activity, but they also inadvertently nurtured a culture of compliance. The process of assessment became a prime marker of "compliance." Simply doing assessment was seen as sufficient.

If assessment is to be consequential to the future of American higher education, assessment practice must be driven by genuine needs and challenges faced by campuses and the students they serve. The work must be informed by campus needs and by internal priorities shaped by faculty members, academic leaders, and governing boards. Only then will assessment findings be used to productive ends and inform improvement efforts.

How can this shift to assessment driven by authentic internal needs be brought about? No two campuses are alike, and their needs and priorities differ. Still, certain challenges are common to thousands of campuses across America:

  • Changing student characteristics and needs.
  • Increasing variation in when, where, and how students learn.
  • Unprecedented competition for students.
  • An unforgiving economic environment.

Compounding these challenges is a nagging public skepticism about the quality of higher education.

Assessment results that respond to these broader systemic challenges and also align with specific campus needs and priorities are more likely to be harnessed and prove useful to students and institutions. How to make this happen is the question.

An effective assessment program requires both partners and end users who have the capacity to:

  • inform and shape the questions to be studied.
  • contribute to the development of an assessment methodology that will yield evidence the partners and end users will find useful.
  • set the stage for the use of evidence in ways that will improve students’ prospects for success and institutional performance.

Accreditors and governments are rarely assessment partners. In most instances they are not in a position to use assessment results to advance student success. Rather, their prime goal is to confirm institutional accountability and compliance.

Potential partners and end users of assessment work are on campus: faculty members, faculty committees, academic leaders, and governing boards. To have a consequential impact on student learning and the health of academic institutions, these players must be engaged. The focus of assessment work needs to shift inward, toward the campus and the academics who genuinely need and can productively use the evidence (Banta & Blaich, 2011).

Faculty and staff members are best positioned to understand the challenges related to student success. Although the literature underscores the importance of faculty engagement in assessment (Hutchings, 2010), in NILOA's most recent survey, when chief academic officers were asked what their institutions most needed to advance assessment work, their top two priorities related to faculty: more professional development opportunities for faculty and more faculty members using and applying assessment results. Too often, faculty members are not effectively cultivated as potential end users or recruited as active partners.

Students are frequently overlooked as potential partners. They can offer a unique perspective and needed advice about how to garner the cooperation and participation of their peers. They also can help interpret assessment findings and translate the results into policy and programmatic implications.

Among the more obvious partners for assessment work are the faculty and staff members who serve on campus committees: for example, members of standing committees on undergraduate or general education, or members of special ad hoc committees focused on particular questions such as student retention and graduation rates. Still other faculty members teach high-enrollment or gateway courses that define and shape much of the undergraduate experience for thousands of students.

Also crucial to the consequential use of assessment data is the engagement of provosts, deans, directors, and department heads and chairs. These front-line academic leaders are indispensable to an effective institutional assessment program because they provide direction and focus for the work. They also must play a cheerleading role to help pave the way for the support and involvement of others. Provosts are the key problem solvers, and they allocate resources. As a result, provosts must help shape the assessment agenda, articulate the key questions about student and institutional performance, and signal where evidence of what students know and can do can be put to good use.

Presidents and governing boards influence the campus culture, shape the institution's strategic priorities, and set the tone for assessment work. As guardians and fiduciaries of an institution of higher learning, governing boards are responsible for the oversight of the institution's academic quality as well as its financial soundness. Boards and their audit committees understand their financial fiduciary duty; they are often less clear about their duty of care for the academic program.

Above all, assessment work needs to begin with the end in mind, which is why many assessment experts favor "backward design"; that is, shaping assessment that "anticipates use" of evidence for specific purposes: advising, curriculum revision, pedagogical change, resource allocation, faculty development, and program review (American Association for Higher Education, 1992; Beld, 2014; Blaich & Wise, 2011).

The good news is that the capacity to assess student learning continues to grow and evolve: there are more tools and approaches, better technology, and more people involved. The news will get even better when the focus of assessment is squarely on the challenge of using evidence to help students and campuses. The resources to address this agenda lie within, on every campus, and they need to be tapped.

American Association for Higher Education (AAHE). (1992). Principles of good practice for assessing student learning (developed under the auspices of the AAHE Assessment Forum). Washington, DC: Author.

Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22-27.

Beld, J. M. (2014). Making assessment matter: How not to let your data die on the vine. Presentation at the 2014 Assessment Institute, Indiana University-Purdue University Indianapolis.

Blaich, C. F., & Wise, K. S. (2011, January). From gathering to using assessment results: Lessons from the Wabash National Study (NILOA Occasional Paper No. 8). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Hutchings, P. (2010, April). Opening doors to faculty involvement in assessment (NILOA Occasional Paper No. 4). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Check out our past Viewpoints:

Assessment - More than Numbers
Sheri Barrett

Challenges and Opportunities in Assessing the Capstone Experience in Australia
Nicolette Lee

Making Assessment Count
Maggie Bailey

Some Thoughts on Assessing Intercultural Competence
Darla K. Deardorff

Catalyst for Learning: ePortfolio-Based Outcomes Assessment
Laura M. Gambino and Bret Eynon

The Interstate Passport: A New Framework for Transfer
Peter Quigley, Patricia Shea, and Robert Turner

College Ratings: What Lessons Can We Learn from Other Sectors?
Nicholas Hillman

Guidelines to Consider in Being Strategic about Assessment
Larry A. Braskamp and Mark E. Engberg

An "Uncommon" View of the Common Core
Paul L. Gaston

Involving Undergraduates in Assessment: Documenting Student Engagement in Flipped Classrooms
Adriana Signorini & Robert Oschner

The Surprisingly Useful Practice of Meta-Assessment
Keston H. Fulcher & Megan Rodgers Good

Student Involvement in Assessment: A 3-Way Win
Josie Welsh

Internships: Fertile Ground for Cultivating Integrative Learning
Alan W. Grose

What if the VSA Morphed into the VST?
George Kuh

Where is Culture in Higher Education Assessment and Evaluation?
Nora Gannon-Slater, Stafford Hood, and Thomas Schwandt

Embedded Assessment and Evidence-Based Curriculum Mapping: The Promise of Learning Analytics
Jane M. Souza

The DQP and the Creation of the Transformative Education Program at St. Augustine University
St. Augustine University

Why Student Learning Outcomes Assessment is Key to the Future of MOOCs
Wallace Boston & Jennifer Stephens

Measuring Success in Internationalization: What are Students Learning?
Madeleine F. Green

Demonstrating How Career Services Contribute to Student Learning
Julia Panke Makela & Gail S. Rooney

The Culture Change Imperative for Learning Assessment
Richard H. Hersh & Richard P. Keeling

Comments on the Commentaries about "Seven Red Herrings"
Roger Benjamin

Ethics and Assessment: When the Test is Life Itself
Edward L. Queen

Discussing the Data, Making Meaning of the Results
Anne Goodsell Love

Faculty Concerns About Student Learning Outcomes Assessment
Janet Fontenot

What to Consider When Selecting an Assessment Management System
R. Stephen RiCharde

AAHE Principles of Good Practice: Aging Nicely
A Letter from Pat Hutchings, Peter Ewell, and Trudy Banta

The State of Assessment of Learning Outcomes
Eduardo M. Ochoa

What is Satisfactory Performance? Measuring Students and Measuring Programs with Rubrics
Patricia DeWitt

Being Confident about Results from Rubrics
Thomas P. Judd, Charles Secolsky & Clayton Allen

What Assessment Personnel Need to Know About IRBs
Curtis R. Naser

How Assessment and Institutional Research Staff Can Help Faculty with Student Learning Outcomes Assessment
Laura Blasi

Why Assess Student Learning? What the Measuring Stick Series Revealed
Gloria F. Shenoy

Putting Myself to the Test
Ama Nyamekye

From Uniformity to Personalization: How to Get the Most Out of Assessment
Peter Stokes

Transparency Drives Learning at Rio Salado College
Vernon Smith

Navigating a Perfect Storm
Robert Connor

Avoiding a Tragedy of the Commons in Postsecondary Education
Roger Benjamin

In Search of a Standard of Quality
Michael Bassis

It is Time to Make our Academic Standards Clear
Paul E. Lingenfelter