Learning Design Consultant Doreen Richards describes how her collaboration with MSL Program Director Mary Ellen Joyce produced a Canvas-based outcome assessment system that serves both grading and NECHE accreditation review.
This past fall, Learning Design Consultant Doreen Richards partnered with MSL Program Director Mary Ellen Joyce to build a Canvas-based outcome assessment system for the M.S. in Leadership program. Their work centered on a specific question: how could the program extend its existing assessment architecture (a common weekly module structure, the CARE rubric, and clearly articulated Program Learning Outcomes, or PLOs) to generate the outcome-level evidence the New England Commission of Higher Education (NECHE) will require for the program’s 2027 accreditation review? The result is a program-level Signature Assignment rubric in Canvas that produces PLO-aligned analytics as a byproduct of normal faculty grading practice, without asking faculty to adopt any new workflow. In this article, I sit down with Doreen to learn about the project and what it may offer other programs.
Tell us more about outcome analytics. Why did you decide to implement them in the MSL program?
The MSL program’s NECHE accreditation review, part of the broader university reaccreditation process, is scheduled for 2027, which means the program will need to demonstrate that it has met its established Program Learning Outcomes. In anticipation of the self-study, Mary Ellen and I began working together on a way to generate that evidence efficiently. The challenge, frankly, is that pulling outcome artifacts together across many courses and faculty members can become a significant lift at review time, and we wanted to build a system that continuously produces the evidence rather than requiring a retrospective effort.
The MSL program was an unusually good candidate for this kind of work. Mary Ellen had already built a shared weekly module structure across all MSL courses, a standardized CARE rubric for all written assignments, and detailed curriculum maps documenting where each PLO is introduced, developed, and advanced across the Core and Executive tracks. Those elements gave us a foundation we did not have to build from scratch. Our job was to connect that existing architecture to Canvas analytics in a way that preserved what faculty valued and produced the outcome-level evidence NECHE expects.
Can you speak more about the process of introducing Canvas analytics?
The starting point was a design decision about which assignment in each course would carry the outcome assessment load. Mary Ellen and I agreed that every MSL core course in both program tracks should have one Signature Assignment, designated by the faculty member, that most directly demonstrates the course’s alignment to its assigned PLO. The CARE rubric continues to govern all written assignments in the program, including the Signature Assignment, as the shared standard for academic writing. The Signature Assignment rubric sits alongside CARE and does a different job: it captures PLO-level performance for accreditation purposes.
From there, we partnered with CDIL’s Learning Technology Team, including Peter Hess, to secure program-level access to institutional and program outcomes in Canvas. We created the PLO structure in Canvas, built the Signature Assignment rubric so it could be imported into each MSL course, and ensured it would generate usable analytics at the program level. The design goal throughout was that faculty would simply attach the Signature Assignment rubric to the assignment they believed best met the signature expectation for their course and use it as they normally grade. The outcome data would be produced as a byproduct of that grading, not as an additional reporting burden.
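For readers who want to see the mechanics, the sketch below shows one way program-level outcome data can be pulled from Canvas once a structure like MSL's is in place. It uses Canvas's documented Outcome Results API; the base URL, API token, and course IDs are placeholders rather than values from the MSL project, so treat this as a minimal illustration under those assumptions, not the team's actual tooling.

```python
# Minimal, read-only sketch using the Canvas LMS Outcome Results API.
# BASE_URL, API_TOKEN, and MSL_COURSE_IDS are placeholders (assumptions),
# not values from the MSL project.
import requests

BASE_URL = "https://canvas.example.edu"   # your institution's Canvas host (placeholder)
API_TOKEN = "YOUR_CANVAS_API_TOKEN"       # user-generated access token (placeholder)
MSL_COURSE_IDS = [1101, 1102, 1103]       # hypothetical MSL core course IDs

def course_plo_averages(course_id: int) -> dict[str, float]:
    """Fetch the course-level average score for each outcome (PLO) in one course.

    Uses GET /api/v1/courses/:course_id/outcome_rollups with aggregate=course,
    which combines all student rollups into a single course-wide rollup.
    """
    resp = requests.get(
        f"{BASE_URL}/api/v1/courses/{course_id}/outcome_rollups",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"aggregate": "course", "include[]": "outcomes"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    if not data["rollups"]:
        return {}  # no assessed outcomes in this course yet

    # "linked.outcomes" carries the outcome objects; map IDs to titles so the
    # report reads as PLO names rather than numeric IDs. IDs are cast to str
    # because the rollup links use string IDs.
    titles = {str(o["id"]): o["title"] for o in data["linked"]["outcomes"]}
    course_rollup = data["rollups"][0]  # one combined rollup when aggregate=course
    return {
        titles[str(score["links"]["outcome"])]: score["score"]
        for score in course_rollup["scores"]
    }

if __name__ == "__main__":
    for cid in MSL_COURSE_IDS:
        print(cid, course_plo_averages(cid))
```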
What was the end result? How does this support MSL’s accreditation review?
The MSL program now has a standardized Signature Assignment rubric, aligned to its PLOs, that produces in the gradebook the outcome-level data the accreditation review will need. The approach is sustainable because it is built from the program’s existing practices, minimizes faculty effort, and serves grading and outcome tracking through a single dual-purpose design. Evidence accumulates continuously over the years rather than being assembled retrospectively in the months before the self-study is due.
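To make "evidence accumulates continuously" concrete: once per-course rollups like those in the earlier sketch are available, producing a program-level PLO summary for the self-study is a small aggregation step. The helper below is a hypothetical continuation of that sketch, averaging course-level scores per PLO across courses; a real report would carry more detail, such as sample sizes and score distributions.

```python
# Hypothetical continuation of the earlier sketch: fold per-course PLO
# averages into one program-level summary for the self-study.
from collections import defaultdict
from statistics import mean

def program_plo_summary(per_course: dict[int, dict[str, float]]) -> dict[str, float]:
    """per_course maps course_id -> {PLO title: course average score}."""
    by_plo: dict[str, list[float]] = defaultdict(list)
    for averages in per_course.values():
        for plo, score in averages.items():
            by_plo[plo].append(score)
    # One program-wide mean per PLO, rounded for readability.
    return {plo: round(mean(scores), 2) for plo, scores in by_plo.items()}

# Example usage, building on the earlier sketch:
#   summary = program_plo_summary({cid: course_plo_averages(cid)
#                                  for cid in MSL_COURSE_IDS})
```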
What can other programs take away from MSL’s experience?
The first lesson is that outcome analytics work best when they extend existing assessment practices rather than replace them. A program with well-defined PLOs, a stable rubric culture, and documented curriculum maps has much of the foundation already in place. The job of learning design in that context is to connect those elements to the institutional technology in a way that faculty experience as a small addition to familiar practice, not as a new compliance workflow.
The second lesson is the importance of collaboration between program leadership and learning design from the earliest stages. A learning designer working alone can build a technically excellent system that faculty resist because it does not reflect what the program actually cares about. A program director working alone may know exactly what the program needs but lack the Canvas expertise to implement it efficiently. The MSL project worked because Mary Ellen brought the program’s pedagogical architecture and I brought the learning design and Canvas expertise, and we built the system together from both ends.
The result is a sustainable outcome assessment system that preserves what MSL values about its academic culture while meeting NECHE’s evidentiary expectations for 2027. For both Mary Ellen and me, the project also demonstrated how much can be accomplished when program leadership and learning design work closely together on a problem that matters to both.
