Making it REAL! Recruitment, Education, And Learning:
Creating a New Generation of Librarians to Serve All New Yorkers
IMLS Grant Partners Program Evaluation Workshops
Presentations, June 1-2, 2005
Outcomes Based Evaluation Concepts Review
By Stephen C. Maack
REAP Change Consultants
June 2005
[Text version of presentation; also available in .PDF]
What is Evaluation?
- Evaluation is a systematic process for an organization to obtain information on its activities, its impacts,
and the effectiveness of its work, so that it can improve its activities and describe its accomplishments.
(Paul W. Mattessich 2003 The Manager's Guide to Program Evaluation: Planning, Contracting and
Managing for Useful Results, Saint Paul, MN: Wilder Research Center, p. 3)
Why Evaluate?
- Find out what works and how well it works.
- Hear directly from clients about what they like and dislike.
- Identify unanticipated results and unanticipated problems.
- Document needs of clients.
- Share evaluation information in order to recruit and retain talented staff, volunteers, participants, and collaborators.
(Based on The Manager's Guide to Program Evaluation, pp. 4-6)
More Reasons to Evaluate
- Gain support for innovative efforts.
- Gain public recognition.
- Respond to funder and public calls with evidence of outcomes and outcomes-based management.
- Monitor and manage implementation.
- Monitor and increase service effectiveness.
- Better allocate resources.
- Retain or increase funding.
(Source: ibid.)
Benefits of Outcomes Measurement
Program executives in 298 programs agreed or strongly agreed that outcome measurement helps their programs:
- Focus staff on shared goals (88%)
- Communicate results to stakeholders (88%)
- Clarify program purpose (86%)
- Identify effective practices (84%)
- Compete for resources (83%)
- Enhance record keeping (80%)
- Improve service delivery (76%)
(Source: United Way of America 2000 Agency Experiences with Outcome Measurement: Survey Findings)
Formative Evaluation
- Data collection activities and analysis that occur over the course of program implementation. A process used
to determine whether or not a program is working: What progress is being made toward program objectives? How do
we use feedback information to improve the program, refine data collection activities, and identify problems or
issues of importance that were not evident before a program began?
(Sally L. Bond et al., 1997 Taking Stock: A Practical Guide to Evaluating Your Own Programs,
Chapel Hill, NC: Horizon Research, Inc., Glossary, p. 90)
Summative Evaluation
- Data collection activities and analysis which help determine how successful a program has been at achieving
its goals. These activities generally occur toward the end of a program, or at appropriate breakpoints in multi-year
or ongoing programs.
(Taking Stock, p. 91).
Why are YOU doing the evaluation?
- What do you hope to get out of it?
- What specific kinds of organizational learning and program improvements are you seeking?
- How will you be contributing to a funder's strategic initiative or effort?
- Are you testing out a new approach and need to know if it works?
- Is something not working, so that you have to figure out why?
- Are you trying to justify or seek new funding, and do you need to document outcomes or efficient processes to do that?
Questions and Evaluation
"Being clear about what questions you want your evaluation to answer is the key to getting an evaluation
that meets your needs. While this seems like it should be easy, the way questions are worded can have a big impact
on what information is collected, and requires a great deal of thought."
(W.K. Kellogg Foundation, Evaluation Toolkit)
What does your logic model look like?
Logic Model: A graphic, with text-filled boxes and connecting arrows, that visually links a program's resources,
activities, intended outcomes, and performance measures. A Logic Model is usually based on a program theory that
logically links the program to its outcomes.
(First 5 LA, Program Evaluation Kit, p. 49).
Typical Logic Model Elements
- Inputs. Resources a program uses to carry out its activities, for example, staff, supplies, volunteers, money.
- Activities. The actual work or services of a program. Things that staff and volunteers do.
- Outputs. The accomplishments, products, or service units of a program.
- Outcomes. Changes that occur in people, policies, or something else as a result of a program's activities.
(Source: Paul W. Mattessich 2003 The Manager's Guide to Program Evaluation: Planning, Contracting
and Managing for Useful Results, Saint Paul, MN: Wilder Research Center, p. 28)
The Difference between Outputs and Outcomes
"Outcome: not how many worms the bird feeds it young, but how well the fledgling flies."
(United Way)
IMLS Examples of Output and Outcome Difference
Outputs
- 42 staff members will complete training
- 37 libraries will participate in reference training
- 4 workshops will be held
- participants will receive 3 CEUs
Outcomes
- Library staff will provide faster, more accurate, and more complete answers to reference questions
- Customers will report high satisfaction with reference service
Outcome Varieties
- Initial (Short-Term) Outcomes. Changes that a program immediately produces in participants (e.g., a scholarship
student's knowledge, skills, or attitudes).
- Intermediate Outcomes. Changes that occur later as a result of the initial outcomes. For example, IMLS students
begin to provide library services in their libraries in new ways.
- Longer-term Outcomes. Changes that a program ultimately strives to accomplish and that follow from the intermediate
outcomes (n.b., some call these "Impacts"). For example, IMLS grant graduates lead change in their library
systems.
Poor Outcome Statements (IMLS Examples)
- Students will know how to use the Web
- Patrons will use the automated ILL system
- Users will have better health information
- Library staff will be trained in reference skills
- Democracy will flourish
Better Outcome Statements (IMLS Examples)
- Students will demonstrate information literacy skills
- Patrons will report high satisfaction with the automated ILL service
- Patrons will make healthier life-style choices
- Library staff will provide faster, more accurate, and more complete answers to reference questions
- Visitors will register to vote
Indicators – IMLS Definition
Indicators are the specific, observable, and measurable characteristics, actions, or conditions that tell a
program whether a desired achievement or change has happened. To measure outcomes accurately, indicators must be
concrete, well-defined, and observable; usually they are also countable.
IMLS Indicator Examples
Poor Indicators
- The # and % of students who know how to use the Web.
- Patrons will report high satisfaction with the automated ILL service.
- Users will make healthier choices.
Better Indicators
- The # and % of participating students who can bring up an Internet search engine, enter a topic in the search
function, and bring up one example of the information being sought within 15 minutes.
- The # and % of patrons who say they are "satisfied" or "very satisfied" with the automated
ILL service after using the service and who use the service more than once a month for six months.
- The # and % of users who report they made one or more life-style changes from a list of 10 key life-style health
factors in the last six months.
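Because the better indicators above are countable, tallying one is simple arithmetic: count the responses that meet the indicator's definition, convert to a percentage, and compare against the target. The following sketch is illustrative only and is not from the presentation; the response wording and the 75% target are hypothetical.

```python
# Illustrative sketch: tallying an indicator such as "the # and % of
# patrons who say they are 'satisfied' or 'very satisfied' with the
# automated ILL service." The survey data and target are invented.

def tally_indicator(responses, positive, target_pct):
    """Return (count, percent, target_met) for an indicator."""
    count = sum(1 for r in responses if r in positive)
    pct = 100.0 * count / len(responses) if responses else 0.0
    return count, pct, pct >= target_pct

survey = ["very satisfied", "satisfied", "neutral", "satisfied",
          "dissatisfied", "very satisfied", "satisfied", "satisfied"]
count, pct, met = tally_indicator(
    survey, {"satisfied", "very satisfied"}, target_pct=75.0)
print(f"{count} of {len(survey)} patrons ({pct:.0f}%); target met: {met}")
# → 6 of 8 patrons (75%); target met: True
```

The same pattern works for any of the "better" indicators: the hard part is defining the positive category and the time window up front, not the counting.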
Best Output, Outcome and Indicator Statements are SMART
- Specific,
- Measurable (amount, speed, change, dollars),
- Aggressive, yet Achievable,
- Relevant (related to what you are trying to measure), and
- Time bound (target is expected to be reached in a specified amount
of time).
(Source: Douglas K. Smith 1999 Make Success Measurable: A Mindbook-Workbook for Setting Goals
and Taking Action. New York: John Wiley and Sons, Inc.)
Quantitative versus Qualitative Evaluation Methods – One Can Use Both!
"Not everything that can be counted counts and not everything that counts can be counted." Albert
Einstein
An Example of a Qualitative Performance Based Outcome Measurement for an Individual
An IMLS scholarship student has been studying how to provide reference services to diverse populations. Three knowledgeable
reference librarians observe the student providing reference services to 3 diverse clients (or for 15 minutes).
Using previously discussed indicators and rating criteria, each rates the student as "excellent," "good,"
"fair," or "poor" for each reference encounter. The observers compare results at the end. The criterion
might be that by the end of the school term all observers rate the student at least "good" on all indicators. If
done as a formative evaluation and the student does not meet the criterion, the observers could discuss why, mentor the
student, and check again later.
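The decision rule in the example above ("all observers rate the student at least 'good' on all indicators") can be stated precisely. This sketch is a hypothetical illustration, not part of the presentation; the indicator names and sample ratings are invented.

```python
# Hypothetical sketch of the criterion described above: the student meets
# the criterion only if EVERY observer rates EVERY indicator at least
# "good". The four-point scale comes from the text; the data is invented.

SCALE = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def meets_criterion(observer_ratings, minimum="good"):
    """observer_ratings: one dict per observer, mapping indicator -> rating."""
    return all(
        SCALE[rating] >= SCALE[minimum]
        for ratings in observer_ratings
        for rating in ratings.values()
    )

ratings = [
    {"accuracy": "excellent", "approachability": "good"},
    {"accuracy": "good", "approachability": "good"},
    {"accuracy": "good", "approachability": "fair"},  # third observer
]
print(meets_criterion(ratings))  # → False: one "fair" rating falls short
```

In a formative evaluation, a False result is a prompt for mentoring and re-observation rather than a final judgment.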
Other Qualitative Outcomes Measurement Approaches
- Portfolio creation over time with evaluation at the end. Criteria for a "good" portfolio need to
be discussed but depend to some extent on expert (MLS/PhD) librarian judgment.
- Performance observation and rating (cf. music student recitals and contests, or English majors being required
to publish). Performance could be checked during staged situations or in natural library service settings.
- Diverse clients' ratings of library services.
Teaching Library Outcomes Example For the Project
- Create an enthusiastic competent children's librarian
- Assure that the candidate has not only the skills but the MLS needed to become a leader in the profession
- Design a mentoring model that can be replicated
- Develop relationships among libraries, library systems, library schools and professional organizations like
NYLA that last beyond this project
- Engender interest in more people becoming professional librarians
- Create a seamless network of library-related organizations working together to educate well-qualified librarians
Teaching Library Outcomes Example For the Student
- Candidate will be able to present story hours for a variety of age groups
- Candidate will be able to plan and run a summer reading program
- Candidate will be able to evaluate books and materials and create a collection development policy for a children's
room
- Candidate will be an active and contributing member of a professional organization such as NYLA
- Candidate will be well-qualified to handle a variety of children's and young adult reference questions
- Candidate will be able to demonstrate the competencies identified in Competencies For Librarians Serving
Children in Public Libraries, Revised Edition
University Outcomes Example Developed in OBE Training
(Note: this is a Long-Term [or Impact] Outcome)
U.S. Library schools use curriculum developed for University MLS
Indicators:
- The # (emails + faxes + telephone) of requests for additional information within five years of completion and
% of library schools requesting (target 50%).
- The # and % of library schools that adopt at least one element of the University MLS curriculum, based on follow-up
inquiries to U.S. Library Schools (n=56, target = 100%).
- The # (emails + faxes + telephone) and % of libraries or library systems that adopt at least one element of
the curriculum for in-service training (n = ?, target = 5%)
IMLS Provides Library-Specific Online Evaluation Resources