National Institute for Learning Outcomes Assessment

NILOA Guest Viewpoints

We’ve invited learning outcomes experts and thought leaders to craft a Viewpoint. We hope that these pieces will spark further conversations and actions that help advance the field. To join the conversation, click the link below the Viewpoint. You can also sign up here to receive monthly newsletters that headline these pieces along with NILOA updates, current news items, and upcoming conferences.

 

Assessment - More than Numbers
Sheri Barrett, Johnson County Community College

 

From my first forays into assessment work, I heard colleagues assert that assessing student learning is the responsibility of faculty, with the goal of improving student outcomes. I agree wholeheartedly, yet I then encountered assessment processes that seemed driven more by the need to satisfy accreditors, state agencies, or other external bodies, all ill-suited to foster faculty interest and involvement in assessment. Institutional Research and Assessment offices are valuable partners that assist faculty, but the actual work of assessment belongs to faculty, and institutional assessment processes should be constructed to ensure faculty ownership and use of assessment results.

At Johnson County Community College we have found that using the cycle of assessment provides a theoretical and practical framework to engage faculty in authentic assessment activities.

What was the Question?

The cycle of assessment begins with a “Question.” It is the faculty’s role to identify the relevant question to assess learning in their classes, courses, or programs. Basic guidelines for writing a good assessment question are to make the question:

  • Meaningful – the question is one faculty want answered.
  • Relatable – the question is tied to course objectives, program goals, and campus-wide student learning outcomes.
  • Measurable – the question can be answered! Usually that means tying the question to an observable student performance.
    • Too broad: What attitudes do students need to possess to pass the problem-solving essay portion of the mid-term exam?
    • Narrower: What key concepts are students not understanding in the curriculum, as reflected in the problem-solving essay portion of the mid-term?
  • Manageable – the process of collecting data is manageable. Complex assessment systems with multiple variables make for interesting research projects, but they can be burdensome to faculty.
  • Actionable – the answers to the question provide faculty with information to make changes.

Planning Makes Perfect

After the question has been developed, faculty “Plan” for the collection of the assessment data.  Some universal decisions must be made:

  • Which program goal, college-wide learning outcome, and/or course objectives are going to be assessed? Creating a curriculum map is often a good starting point for determining which assessment initiatives to address first.
  • What tools are best to conduct the assessment, such as portfolios, rubrics, embedded test question(s), pre/post tests, etc.?
  • What are faculty expectations of student performance? This is an important conversation to have before collecting the data, as expectations can otherwise “sink” to the level of the data collected. Expectations also provide a roadmap for determining when students have “arrived” at an expected level of performance.

I have data, so what?

How the data will be collected and scored is best considered before a single data point is gathered. On campus we frequently help faculty create Excel templates to facilitate recording the data. Once the data have been collected, deciding what the data are saying can be challenging. It is important that faculty grapple with this task themselves rather than handing it off to an office like Institutional Research; faculty must understand the data to make meaning of the results. Faculty have shared that the assessment office’s workshops on making sense of data, which use real data from their own courses and departments for discussion and analysis, have been especially helpful.
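
To make this concrete, here is a minimal sketch of the kind of tally such a template supports, written in Python with pandas; the file name, column names, benchmark, and target rate below are hypothetical stand-ins rather than our actual templates:

    # Minimal sketch: tally scores from a spreadsheet export against a
    # faculty-set benchmark. File and column names are hypothetical.
    import pandas as pd

    BENCHMARK = 70      # expected score, agreed on before data collection
    TARGET_RATE = 0.75  # e.g., 75% of students should meet the benchmark

    scores = pd.read_csv("midterm_essay_scores.csv")  # columns: student_id, score
    met = scores["score"] >= BENCHMARK                # True where benchmark is met
    rate = met.mean()                                 # proportion meeting it

    print(f"{met.sum()} of {len(scores)} students met the benchmark ({rate:.0%}).")
    if rate >= TARGET_RATE:
        print("Target reached: consider assessing a different outcome.")
    else:
        print("Below target: discuss which concepts students are missing.")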

Do something!

The final steps in the assessment cycle, “Report and Act,” involve making curricular or instructional changes informed by the assessment results. In a very broad sense, there are four possible outcomes of an assessment.

First, faculty may find the assessment instrument was ill-suited to measure the intended learning outcome. When this happens, faculty need to either modify the instrument or choose a different one.

Another common result of analysis is that the findings indicate an area of challenge for students in the course or overall program. Determining what changes faculty should make to the curriculum or program to improve student learning is an obvious but challenging next step. Assessment data can provide an opportunity for robust discussion by the faculty.  

Sometimes it is time to move on from an assessment simply because results indicate that students are successfully hitting the benchmark criteria.  Continuing to assess a learning outcome in which students show proficiency may not be the best use of time and energy for departments. 

Faculty may question when it is time to move on and choose a different outcome to assess. Some questions to explore:

  • Was there improvement?
  • Did students meet the benchmark performance? Setting these benchmarks early in the process is important; set them later and expectations tend to “sink” to the level of observed performance. (A quick check of these first two questions is sketched after this list.)
  • Are faculty satisfied with student performance?
  • Do faculty see a greater need or question that should be pursued? Often what emerges from an assessment that has reached its benchmark is another question.
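
Where scores from successive cycles are available, the improvement and benchmark questions above can be checked quickly. A minimal sketch in Python with pandas, again with hypothetical file and column names standing in for a department’s actual exports:

    # Minimal sketch: compare benchmark rates across two assessment cycles.
    # File names and the "score" column are hypothetical stand-ins.
    import pandas as pd

    BENCHMARK = 70  # faculty-set expectation, fixed at the start of the cycle

    def benchmark_rate(path: str) -> float:
        """Proportion of students scoring at or above the benchmark."""
        return (pd.read_csv(path)["score"] >= BENCHMARK).mean()

    before = benchmark_rate("fall_scores.csv")    # earlier cycle
    after = benchmark_rate("spring_scores.csv")   # later cycle
    print(f"Benchmark rate moved from {before:.0%} to {after:.0%} "
          f"({(after - before) * 100:+.0f} percentage points).")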

Finally, assessment can lead to changes not in student learning but in faculty training. When discussing assessment of student learning, the assumption is that all data should point to actions faculty can take to improve student outcomes. Sometimes, however, the assessment process points not to the students but to the faculty. In one instance, a general education assessment project indicated that faculty were uncomfortable assessing student performance on visual communications in their discipline because they themselves lacked a basic understanding of good visual communications. Assessment projects may also indicate that faculty need more training on a new textbook, a learning management system, or a program concept.

Write about it

It is important to report results that are meaningful to multiple stakeholders, internal and external. Good reports provide a history of assessment activities, help crystallize what was learned, and provide a road map for next steps.

An assessment report should answer the following questions:

  • What was the question that needed to be answered to improve student learning?
  • What assessment was completed to answer the question?
  • What do the assessment results suggest in terms of actions faculty and others must take?
  • What are the next steps?

Conclusion

A significant challenge facing institutions is designing assessment practices that serve a dual purpose: engaging faculty in meaningful assessment while addressing ever-increasing accountability requirements. How institutions answer calls for accountability while sustaining robust and authentic assessment practices will determine whether a true culture of assessment takes hold.

In using the cycle of assessment as a means of framing assessment at Johnson County Community College, we have found that its most important result is a highly engaged faculty invested in a process that focuses on student learning and contributes to developing assessment practices in ways that benefit students and the institution.

 

We invite you to contribute your thoughts regarding this Viewpoint here.

Check out our past Viewpoints:

Assessment - More than Numbers
Sheri Barrett

Challenges and Opportunities in Assessing the Capstone Experience in Australia
Nicolette Lee

Making Assessment Count
Maggie Bailey

Some Thoughts on Assessing Intercultural Competence
Darla K. Deardorff

Catalyst for Learning: ePortfolio-Based Outcomes Assessment
Laura M. Gambino and Bret Eynon

The Interstate Passport: A New Framework for Transfer
Peter Quigley, Patricia Shea, and Robert Turner

College Ratings: What Lessons Can We Learn from Other Sectors?
Nicholas Hillman

Guidelines to Consider in Being Strategic about Assessment
Larry A. Braskamp and Mark E. Engberg

An "Uncommon" View of the Common Core
Paul L. Gaston

Involving Undergraduates in Assessment: Documenting Student Engagement in Flipped Classrooms
Adriana Signorini & Robert Ochsner

The Surprisingly Useful Practice of Meta-Assessment
Keston H. Fulcher & Megan Rodgers Good

Student Involvement in Assessment: A 3-Way Win
Josie Welsh

Internships: Fertile Ground for Cultivating Integrative Learning
Alan W. Grose

What if the VSA Morphed into the VST?
George Kuh

Where is Culture in Higher Education Assessment and Evaluation?
Nora Gannon-Slater, Stafford Hood, and Thomas Schwandt

Embedded Assessment and Evidence-Based Curriculum Mapping: The Promise of Learning Analytics
Jane M. Souza

The DQP and the Creation of the Transformative Education Program at St. Augustine University
St. Augustine University

Why Student Learning Outcomes Assessment is Key to the Future of MOOCs
Wallace Boston & Jennifer Stephens

Measuring Success in Internationalization: What are Students Learning?
Madeleine F. Green

Demonstrating How Career Services Contribute to Student Learning
Julia Panke Makela & Gail S. Rooney

The Culture Change Imperative for Learning Assessment
Richard H. Hersh & Richard P. Keeling

Comments on the Commentaries about "Seven Red Herrings"
Roger Benjamin

Ethics and Assessment: When the Test is Life Itself
Edward L. Queen

Discussing the Data, Making Meaning of the Results
Anne Goodsell Love

Faculty Concerns About Student Learning Outcomes Assessment
Janet Fontenot

What to Consider When Selecting an Assessment Management System
R. Stephen RiCharde

AAHE Principles of Good Practice: Aging Nicely
A Letter from Pat Hutchings, Peter Ewell, and Trudy Banta

The State of Assessment of Learning Outcomes
Eduardo M. Ochoa

What is Satisfactory Performance? Measuring Students and Measuring Programs with Rubrics
Patricia DeWitt

Being Confident about Results from Rubrics
Thomas P. Judd, Charles Secolsky & Clayton Allen

What Assessment Personnel Need to Know About IRBs
Curtis R. Naser

How Assessment and Institutional Research Staff Can Help Faculty with Student Learning Outcomes Assessment
Laura Blasi

Why Assess Student Learning? What the Measuring Stick Series Revealed
Gloria F. Shenoy

Putting Myself to the Test
Ama Nyamekye

From Uniformity to Personalization: How to Get the Most Out of Assessment
Peter Stokes

Transparency Drives Learning at Rio Salado College
Vernon Smith

Navigating a Perfect Storm
Robert Connor

Avoiding a Tragedy of the Commons in Postsecondary Education
Roger Benjamin

In Search of a Standard of Quality
Michael Bassis

It is Time to Make our Academic Standards Clear
Paul E. Lingenfelter