National Institute for Learning Outcomes Assessment

NILOA Guest Viewpoints

We’ve invited learning outcomes experts and thought leaders to craft Viewpoints. We hope that these pieces will spark further conversations and actions that help advance the field. NILOA’s monthly newsletter headlines these pieces along with NILOA updates, current news items, and upcoming conferences.

 

Improvement Matters
Peter Felten, Elon University

 

This Viewpoint summarizes important takeaways found in Chapter 6 of “The Undergraduate Experience: Focusing Institutions on What Matters Most” (2016) by Peter Felten, John N. Gardner, Charles C. Schroeder, Leo M. Lambert, and Betsy O. Barefoot. The seven principles summarized here inform good assessment practice and improvement. Keeping these action principles in mind and using them to guide assessment and improvement efforts on your campus can help lead to meaningful change. The full text of this chapter, and additional resources, are available on the book’s web site.

Assessment is a vital tool for improvement, especially when it is used in ways that serve what matters most in the undergraduate experience. In an era of often intrusive external oversight, many on campus are suspicious—or just plain tired—of initiatives promising change. Yet, a large study of student success in college found that effective institutions are characterized by “positive restlessness,” which is “an acculturated wariness that what and how we are doing now can well be improved” (Kuh, Kinzie, Schuh, Whitt, & Associates, 2010, p. 146). This “we can do better” ethos not only works dynamically to improve the institution but also models for students the processes of growth and change. The following seven action principles can guide your work.

Recognize That Assessment Is Fundamental to Improvement

Understanding is the first step toward improvement. Until you understand what is, you cannot identify a reasonable path toward what could be. Unfortunately, assessment in higher education too often operates in a culture of compliance. This “assessment-for-others” orientation has created a chasm between routine assessment practices at many institutions and the people on campus who are most able to act on the results of those assessments to improve student learning: the faculty, staff, and students. By focusing on improvement, assessment becomes “problem-specific and user-centered” (Bryk, Gomez, Grunow, & LeMahieu, 2015, p. 12). Those characteristics make it possible for academics to do what they do best: apply their critical capacities to understand and systematically act on complex issues related to both student learning and institutional performance. In other words, assessment as improvement is a key to student and institutional effectiveness.

Focus Assessment on Improving What Matters Most

Assessment can be a powerful lens for improvement, but only when it is focused on what matters most. Effective assessments require clearly articulated goals that are linked to the institution’s mission and priorities. For example, St. Olaf College in Minnesota threads this needle by supporting department-level assessment. When the department of religion sought to assess its students’ performance on a core disciplinary and liberal arts goal, the capacity to “form, evaluate, and communicate critical and normative interpretations of religious life and thought,” the faculty worked together to evaluate senior essays. When the management studies concentration weighed the merits of team-based pedagogies, which gave students practice with challenging group work but also consumed considerable class time, the faculty compared student performance on individual and group quizzes. In both cases, assessment led to significant improvements, including new writing assignments in religion courses and expanded use of team-based learning in management (Beld, 2010).

Commit to Using Evidence to Inform Changes

Although institutions have invested vast sums and great hopes in the power of data to serve as a catalyst for change, research demonstrates that evidence alone is rarely sufficient to spark meaningful reform (Banta & Blaich, 2011). Nobel Prize–winning physicist Carl Wieman and his colleagues, for example, conclude that research results seldom are “compelling enough by themselves to change faculty members’ pedagogy” in science, technology, engineering, and mathematics (STEM) disciplines (Wieman, Perkins, & Gilbert, 2010, p. 13). This problem of individuals and institutions not applying what is learned from research is so pervasive in higher education, extending far beyond STEM classrooms, that one of the primary findings from the 49-institution Wabash National Study is that “it is incredibly difficult to translate assessment evidence into improvements in student learning” (Blaich & Wise, 2011, p. 11).

Change is hard, of course, but the human and organizational tendency to remain static may not be sufficient to explain why so little is done with so much evidence in higher education. To counter initiative fatigue, and to enhance the chances of evidence-based action, institutions and individuals should commit to the following (Blaich & Wise, 2011; Kuh & Hutchings, 2015; Walvoord, 2010):

  • Establish clear improvement priorities for sustained focus.
  • Communicate the educational value and anticipated outcomes of each initiative.
  • Gather enough data to have a reasonable basis for action.
  • Foster conversations about and engagement with that data so those in positions to act have the opportunity to understand the evidence and shape further actions.
  • Identify and celebrate successes along the way.

Involve Everyone in the Process of Making Change

Too often, assessment is done to or for people rather than with them. Students, for instance, complete surveys like NSSE or develop portfolios of their best work yet may not know what happens with, or as a result of, these efforts. Trustees often review assessment reports that provide a lot of information but offer little nuanced or benchmarked evidence to support program oversight or appropriate board action (Sullivan, 2015).

To counter this, improvement initiatives should be designed from the start as partnerships among all of the relevant parties. Effective partnerships draw on the distinct expertise and perspectives of different participants. For example, diverse institutions ranging from Bryn Mawr College to North Carolina A&T State University are developing student-faculty and student-staff partnerships to bring undergraduates into the institutional processes used to gather, analyze, and make decisions about how to act on evidence of learning, teaching, and other important aspects of the student experience (Cook-Sather, Bovill, & Felten, 2014).

Adapt Best Practices from Elsewhere

Assessment often focuses internally. That is essential, but institutions also should look externally to identify effective practices at other institutions and within the scholarly literature that could be adapted to meet local goals and needs. Many students and institutions, for example, struggle with developmental math and statistics. While the particulars vary by campus, common challenges exist, including student habits and beliefs that make success unlikely. Drawing on research and a multi-institutional network of faculty, the Carnegie Foundation for the Advancement of Teaching sponsored the creation of a set of strategies to support students in cultivating productive persistence. Both scholarly studies and classroom experience demonstrated that students’ beliefs about themselves as mathematical thinkers and about their sense of belonging in a mathematical environment had a profound influence on their performance in developmental courses. The results of the productive persistence interventions are striking, dramatically increasing the rate of student success in roughly half the time (American Association of Community Colleges, 2014; Yamada, 2014). As the number of faculty and campuses adapting these interventions in their own local contexts grows, the results vary within a small range while the impact of this best practice spreads to thousands of students in many states (Carnegie Foundation for the Advancement of Teaching, 2015).

Cultivate an Ethos of Positive Restlessness

Improvement requires not only specific actions but also a certain orientation toward ourselves and our institutions. Although we need to act with resolve, we also need to remain humble—what scholars studying improvement refer to as the assumption that your ideas and practices are “possibly wrong and definitely incomplete” (Bryk et al., 2015, p. 163). Or, as one of the authors of The Undergraduate Experience (Felten et al., 2016, p. 128) was told by a campus leader during a very positive accreditation visit, “We are pleased you think we are doing well. We want you to help us figure out how we can be even better.”

Model the Process of Improvement for Students and the Institution

Paying attention to the processes that support improvement has two distinct benefits: (a) We can actually get better at getting better, and (b) we can model and teach students (and others) to learn how to think about and work on improvement in many aspects of their own lives. The field of improvement science began with industry and medicine and recently has been adapted for education by the Carnegie Foundation for the Advancement of Teaching. This approach rests on a set of principles, three of which are particularly appropriate here:

  • Make the work problem specific and user centered: Effective improvement efforts typically focus on concrete, clearly defined problems that are of concern to the people involved in the effort.
  • See the system that produces the current outcomes: Whatever you are trying to improve exists within a context, and that context matters. By looking at both specific problems and the environment that produces and sustains those problems, you will be more apt to recognize both resources that can aid your improvement efforts and challenges that will need to be addressed.
  • Use inquiry to drive improvement: Inquiry is a powerful tool for improvement, particularly at academic institutions where many people are trained and motivated by research (Bryk et al., 2015, pp. 12–17).

By publicly modeling the improvement process and bringing students into the work, projects like this help the institution to get better and students (and others in the campus community) to develop the kinds of practical reasoning capacities that are essential to working and living in the modern world (Sullivan & Rosin, 2008).

Final Words

In short, a personal orientation toward and an institutional culture of positive restlessness are necessary for us to fulfill our aspirations for our students and our communities. Developing these can be challenging in a time of constraints and cynicism, but a persistent focus on what matters most—and on the vital purposes of higher education for our students and our world—can help individuals and institutions to do the hard work necessary to make positive, lasting change.

 

Excerpted from chapter 6 of Felten, Gardner, Schroeder, Lambert, & Barefoot, The Undergraduate Experience: Focusing Institutions on What Matters Most (Wiley, 2016; ISBN 9781119050742).
The full text of chapter 6 is online at http://theundergraduateexperience.org/#resources

Check out our past Viewpoints:

Improvement Matters
Peter Felten

Working Together to Define and Measure Learning in the Disciplines
Amanda Cook, Richard Arum, and Josipa Roksa

The Simplicity of Cycles
Mary Catharine Lennon

Helping Faculty Use Assessment Data to Provide More Equitable Learning Experiences
Mary-Ann Winkelmes

Ignorance is Not Bliss: Implementation Fidelity and Learning Improvement
Sara J. Finney and Kristen L. Smith

Student Learning Outcomes Alignment through Academic and Student Affairs Partnerships
Susan Platt and Sharlene Sayegh

The Transformation of Higher Education in America: Understanding the Changing Landscape
Michael Bassis

Learning-Oriented Assessment in Practice
David Carless

Moving Beyond Anarchy to Build a New Field
Hamish Coates

The Tools of Intentional Colleges and Universities: The DQP, ELOs, and Tuning
Paul L. Gaston, Trustees Professor, Kent State University

Addressing Assessment Fatigue by Keeping the Focus on Learning
George Kuh and Pat Hutchings, NILOA

Evidence of Student Learning: What Counts and What Matters for Improvement
Pat Hutchings, Jillian Kinzie, and George D. Kuh, NILOA

Using Evidence to Make a Difference
Stan Ikenberry and George Kuh, NILOA

Assessment - More than Numbers
Sheri Barrett

Challenges and Opportunities in Assessing the Capstone Experience in Australia
Nicolette Lee

Making Assessment Count
Maggie Bailey

Some Thoughts on Assessing Intercultural Competence
Darla K. Deardorff

Catalyst for Learning: ePortfolio-Based Outcomes Assessment
Laura M. Gambino and Bret Eynon

The Interstate Passport: A New Framework for Transfer
Peter Quigley, Patricia Shea, and Robert Turner

College Ratings: What Lessons Can We Learn from Other Sectors?
Nicholas Hillman

Guidelines to Consider in Being Strategic about Assessment
Larry A. Braskamp and Mark E. Engberg

An "Uncommon" View of the Common Core
Paul L. Gaston

Involving Undergraduates in Assessment: Documenting Student Engagement in Flipped Classrooms
Adriana Signorini & Robert Ochsner

The Surprisingly Useful Practice of Meta-Assessment
Keston H. Fulcher & Megan Rodgers Good

Student Involvement in Assessment: A 3-Way Win
Josie Welsh

Internships: Fertile Ground for Cultivating Integrative Learning
Alan W. Grose

What if the VSA Morphed into the VST?
George Kuh

Where is Culture in Higher Education Assessment and Evaluation?
Nora Gannon-Slater, Stafford Hood, and Thomas Schwandt

Embedded Assessment and Evidence-Based Curriculum Mapping: The Promise of Learning Analytics
Jane M. Souza

The DQP and the Creation of the Transformative Education Program at St. Augustine University
St. Augustine University

Why Student Learning Outcomes Assessment is Key to the Future of MOOCs
Wallace Boston & Jennifer Stephens

Measuring Success in Internationalization: What are Students Learning?
Madeleine F. Green

Demonstrating How Career Services Contribute to Student Learning
Julia Panke Makela & Gail S. Rooney

The Culture Change Imperative for Learning Assessment
Richard H. Hersh & Richard P. Keeling

Comments on the Commentaries about "Seven Red Herrings"
Roger Benjamin

Ethics and Assessment: When the Test is Life Itself
Edward L. Queen

Discussing the Data, Making Meaning of the Results
Anne Goodsell Love

Faculty Concerns About Student Learning Outcomes Assessment
Janet Fontenot

What to Consider When Selecting an Assessment Management System
R. Stephen RiCharde

AAHE Principles of Good Practice: Aging Nicely
A Letter from Pat Hutchings, Peter Ewell, and Trudy Banta

The State of Assessment of Learning Outcomes
Eduardo M. Ochoa

What is Satisfactory Performance? Measuring Students and Measuring Programs with Rubrics
Patricia DeWitt

Being Confident about Results from Rubrics
Thomas P. Judd, Charles Secolsky & Clayton Allen

What Assessment Personnel Need to Know About IRBs
Curtis R. Naser

How Assessment and Institutional Research Staff Can Help Faculty with Student Learning Outcomes Assessment
Laura Blasi

Why Assess Student Learning? What the Measuring Stick Series Revealed
Gloria F. Shenoy

Putting Myself to the Test
Ama Nyamekye

From Uniformity to Personalization: How to Get the Most Out of Assessment
Peter Stokes

Transparency Drives Learning at Rio Salado College
Vernon Smith

Navigating a Perfect Storm
Robert Connor

It is Time to Make our Academic Standards Clear
Paul E. Lingenfelter

In Search of a Standard of Quality
Michael Bassis

Avoiding a Tragedy of the Commons in Postsecondary Education
Roger Benjamin