How to assess the value of the goods we provide

There are multiple outcome measures we use to shed light on the end-purpose issues of quality and educational effectiveness, workforce preparation and retraining, quality of technical/vocational education, and continuing education and lifelong learning, each of which affects our sector's impact as a provider of a public good. Here is a generic assessment framework for evaluating how good our public good is:

  1. Quality and Educational Effectiveness. Are we working smarter and not harder? We live with constraints, so are we as effective as possible?
  2. Workforce Preparation and Retraining. Are we listening to our PACs about what our students need in order to be prepared? Do we offer our graduates access to retraining options?
  3. Quality of Vocational/Technical Education. Are we staying in contact with employers? How often is our career services specialist bringing in employers and asking for real feedback about our students?
  4. Continuing Education and Lifelong Learning. What example are we setting? Do we make CEUs and lifelong learning cornerstones of our staff and faculty development? It will be hard to ensure this outcome is accomplished if we don't support it at all levels. Are our faculty members promoting this idea?

What tools are we using to assess the outcomes we expect? All of the examples I give below are valuable, but each has shortcomings on its own:

  1. IPEDS, as much of a pain as it is to submit, is built around a series of interrelated surveys that collect institution-level data on enrollments, program completions, faculty, staff, and finances. These data provide insight into the educational success of the institution, which relates directly to quality and educational effectiveness.
  2. Economic indicators such as revenues, gross profit, EBITDA comparisons, and net income all help describe our educational effectiveness, but they still miss quality.
  3. Regulatory bodies provide a great service by promoting quality education practices, and they measure some aspects of every end-purpose issue listed above, but they cannot remain ever vigilant. Despite their appreciated brutal honesty at times, they are not always around to tell you the real deal.
  4. Qualitative feedback from instructors, students, administrators, and people intimately involved in your programs does exist, but the perspective remains myopic: we compare what we are doing to what we have done, not to how what we are doing affects the workforce. There is also the risk that the "yes" women and men around us dilute the quality of that qualitative feedback.
  5. Experts in their respective fields can provide some of the most critical, and perhaps most crucial, feedback available. We can capitalize on access to these experts through externships, capstone education events, and special events. This necessary complement to the other assessment tools gives us candid external feedback about the product before it enters the workforce.

The one I think we fail to capture adequately is qualitative feedback from experts. At my school I promote a capstone event where external experts evaluate and assess students before they hit the workforce.

I think these types of learning opportunities are valuable to the student and perhaps even more valuable to the school as candid critiques of the public good we are providing: quality education. I want to learn more, so please share with all of us, or send me, your innovations in capstone events (internships, externships, capstone courses, external training centers, etc.). Let's find out how we can all benefit from your quality education ideas.
