NILOA Guest Viewpoints
We’ve invited learning outcomes experts and thought leaders to craft a Viewpoint. We hope that these pieces will spark further conversations and actions that help advance the field. You can also sign up to receive monthly newsletters that headline these pieces along with NILOA updates, current news items, and upcoming conferences.
The Simplicity of Cycles
Mary Catharine Lennon, Ontario Institute for Studies in Education, University of Toronto, and Postsecondary Education Quality Assessment Board, Government of Ontario
Education is fundamentally based on the joint notions of “teach, practice, and assess.” When working with learning outcomes, the cycle repeats as “articulate, incorporate, and measure.” The basic idea involves identifying what you are trying to achieve, integrating it into activities, and assessing any changes. Until the final stage is complete, the initial phase merely identifies goals, and the activities are guesses at how to achieve those goals. Only through evaluation do we ‘close the loop’ and learn whether we are achieving our goals.
As soldiers in the field of learning outcomes, we constantly reinforce the value of this cycle and the importance of appropriate measures to close the loop. We argue that it is critical to provide assessment feedback to improve the teaching and learning experience and to demonstrate the quality of postsecondary education.
Knowing that a cyclic approach is a best practice, why don’t we apply it to the policies we develop for learning outcomes? Policies themselves are based on the same cyclic model: “formulate, implement, and evaluate.”
Somehow, we are so focused on determining whether learning outcomes make a difference at the micro-level that we often neglect broader work examining whether there has actually been any policy impact at the macro-level. For example, what has been the impact of the regulatory agency (i.e. quality assurance, accreditation, accountability) policies that are driving many institutional activities?
To answer this simple question, members of the International Network for Quality Assurance Agencies in Higher Education (INQAAHE) and the CHEA International Quality Group (CIQG) were surveyed about their policy activities¹, and 74 agencies from around the world responded. The survey’s primary questions asked what the policy goals were, how those goals were being pursued, and how their achievement was assessed. Agencies that had conducted actual research on the policy were asked follow-up questions about the achievement of those goals. The question of impact was also put to respondents who had not conducted research, who were instead asked about the perceived impact of the learning outcomes policies. The Figure below shows the results of the questions on goals, impacts, and impressions of impact.
The first thing to consider when reviewing the Figure is the wide range of targeted goals indicated by the agencies. Improved student learning was identified by over 80% of the policies, followed by labour market alignment and institutional improvement. This suggests that there are widely differing intentions behind the policies stemming from quality assurance agencies.
The second point of interest concerns comparing the stated policy goals to the research findings on impact. Those agencies that had conducted research on their policies found little impact except in the area of economic development. Note, however, that these data reflect only the percentage of agencies that indicated the goal (N = 29) and the percentage that indicated impact (N = 14). They are not necessarily the same agencies and, significantly, not all of the 74 agencies answered the first question (perhaps because the policy goals were not clearly identified at the outset).
An intriguing result of the survey compares perceived impact to actual impact. Across the board, the perception is that the policies are having little impact on their targets. Yet, when compared with the actual research findings, the policies are often more effective than perceived. The largest differences, for example, are seen in the impact on credit transfer and on teaching and learning.
Ultimately, evidence of achievement is critical to properly understanding the impact of activities. Current perceptions of the value of learning outcomes policies are misaligned with reality, which suggests that policy decisions are likely being made on the basis of impressions rather than facts. More than commentary on the impact of learning outcomes policies, the findings suggest that more policy research needs to be conducted: we need to evaluate the outcomes of our activities in order to fairly value the impact of the work.
This is not news, but it is an important reminder that in both the microcosm of the learning outcomes cycle of ‘articulate, incorporate, measure’ and the macrocosm of the ‘formulate, implement, evaluate’ policy cycle, closing the loop through evaluation is critical for success.
Lennon, M. C. (2016). In search of quality: Evaluating the impact of learning outcomes policies in higher education regulation (Doctoral dissertation).

¹ The survey was part of a broader research study conducted as a PhD dissertation (Lennon, 2016).