National Institute for Learning Outcomes Assessment

NILOA Guest Viewpoints

We’ve invited learning outcomes experts and thought leaders to craft a Viewpoint. We hope that these pieces will spark further conversations and actions that help advance the field.

 

Ignorance is Not Bliss: Implementation Fidelity and Learning Improvement
Sara J. Finney and Kristen L. Smith
Center for Assessment & Research Studies, James Madison University

 

As higher education faculty and administrators, we aim to implement established or promising learning practices, pedagogy, and programming that make our campuses effective learning environments. We engage in outcomes assessment to evaluate and enhance educational programming with respect to student learning (Figure 1). Yet Banta and Blaich (2011) reported that few institutions use outcomes assessment data to change programming and subsequently demonstrate improved student learning.

Figure 1. Typical Outcomes Assessment Process

One reason outcomes assessment data are rarely used for learning improvement is that these data alone are insufficient. We agree with Hutchings, Kinzie, and Kuh (2015): “Assessment that is truly focused on improving students’ educational experiences means putting a premium on evidence. It also means being smart about what constitutes evidence and how to use it effectively.” Outcomes assessment data, on their own, have limited utility. If student learning outcomes (SLOs) are not met, we are in the difficult position of answering “why” and making evidence-based changes to programming that will increase learning in the future; yet outcomes data simply indicate whether the outcomes were met, not why. Thus, we argue that “evidence” must include implementation fidelity if evidence-based changes to educational programming and improved student learning are our goals.

“The bridge between a promising idea and the impact on students is implementation, but innovations are seldom implemented as intended” (Berman & McLaughlin, 1978, p. 349). Implementation fidelity data demonstrate the extent to which the designed programming was implemented as intended, allowing us to determine how the designed programming differed from the programming students actually experienced (e.g., Gerstner & Finney, 2013). In short, if students don’t experience the activities and curriculum deemed necessary to learn concepts and develop skills, we shouldn’t be surprised when outcomes data suggest learning hasn’t occurred. Furthermore, we can’t change programming to increase learning if we don’t know what programming was implemented. As we better understand the programming students actually experienced, and thus the programming we actually assessed, we can make more intentional, informed, and effective program changes that contribute to demonstrably improved student learning (e.g., Fisher, Smith, Finney, & Pinder, 2014).

Implementation fidelity is represented by five components (Figure 2). Program differentiation, the most important component, aligns with the second step of the outcomes assessment process: program theory is specified and corresponding curriculum and learning experiences are developed and mapped to SLOs. Engaging in program differentiation helps us better conceptualize and refine our program theory, which subsequently helps us implement intentional learning interventions. It affords dedicated time to discuss pedagogical techniques, best practices we have successfully implemented in our classes, and potential barriers to student learning (an invaluable faculty development opportunity). The remaining implementation fidelity components involve collecting data that provide insight into what students actually experienced (e.g., Swain, Finney, & Gerstner, 2013).

Figure 2. Five Components of Implementation Fidelity

Coupling implementation fidelity and outcomes data can improve inferences about program impact on student learning (e.g., O’Donnell, 2008). If the designed programming wasn’t implemented, the outcomes assessment data reflect nothing about the designed programming (Figure 3). Without implementation fidelity data, however, it is unclear what programming was implemented and thus which inferences are valid. Too often, we assume the designed programming was implemented and then draw (potentially invalid) inferences about that programming from the outcomes data.
 

Figure 3. Interpretations when Pairing Outcomes Assessment and Implementation Fidelity Data

As you strive to make informed program changes and demonstrably improve student learning on your campus, consider the following strategies:

  • Meet with faculty involved in creating and implementing programming to discuss implementation fidelity: what it is and why it’s important. This discussion validates the tremendous effort expended creating the programming and acknowledges that students must have the opportunity to receive the programming as designed.
  • Even if data can’t be collected immediately regarding all aspects of implementation fidelity, engaging in program differentiation is critical to learning improvement. If the program theory isn’t articulated in terms of the specific curriculum and activities that should result in students’ achieving the SLOs, then implementation fidelity can’t be assessed and outcomes data have limited utility. Engaging in program differentiation requires faculty to communicate explicitly about programming, which should increase the likelihood that the designed programming is delivered.
  • Emphasize learning improvement. Outcomes assessment is most useful when it identifies both effective and ineffective learning experiences. If the goal of increasing student learning is kept at the forefront, the need for implementation fidelity data is obvious.
  • Don’t wait to collect implementation fidelity data. Implementation fidelity and outcomes data don’t need to be collected together. Once SLOs and programming have been specified and mapped, implementation fidelity can be evaluated while outcomes assessment measures are being created or examined. In fact, we encourage evaluating implementation fidelity before gathering outcomes data: if programming isn’t implemented with high fidelity, outcomes data are useless for making inferences back to the designed programming, so it may be the best use of resources to delay collecting outcomes data until implementation issues have been identified and addressed.

References

Banta, T., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22–27.

Berman, P., & McLaughlin, M. W. (1978). Federal programs supporting educational change, Vol. VIII: Implementing and sustaining innovations. Santa Monica, CA: Rand.

Fisher, R., Smith, K. L., Finney, S. J., & Pinder, K. (2014). The importance of implementation fidelity data for evaluating program effectiveness. About Campus, 19, 28–32.

Gerstner, J. J., & Finney, S. J. (2013). Measuring the implementation fidelity of student affairs programs: A critical component of the outcomes assessment cycle. Research and Practice in Assessment, 8, 15–28.

Hutchings, P., Kinzie, J., & Kuh, G. (2015). Evidence of student learning: What counts and what matters for improvement. National Institute for Learning Outcomes Assessment: Guest Viewpoints. Retrieved from https://illinois.edu/blog/view/915/141541

O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78, 33–84.

Swain, M. S., Finney, S. J., & Gerstner, J. J. (2013). A practical approach to assessing implementation fidelity. Assessment Update, 25(1), 5–7, 13.

 
