Competency N

Evaluation. Each graduate of the Master of Library and Information Science program is able to evaluate programs and services using measurable criteria.

Introduction

To improve library services and envision their library’s role in the future of the community, library administrators engage in a continual planning process. In addition to improving existing services, libraries’ strategic plans seek to expand their offerings to meet the needs of diverse users and to keep pace with the rapidly changing information environment of the 21st century. Evaluation of programs and services helps library managers make economically, demographically, and politically sound decisions during the strategic planning process, aimed at maximizing the library’s impact in the community.

“Evaluation is the process of determining the success, impact, results, costs, outcomes, or other factors related to a library activity, program, service, or resource use. Typically such library evaluation can be summative—done at the end of program or service—or it can be formative—done on an ongoing basis to monitor a program or service,” McClure explains. “At its best, evaluation provides input for planning and decision-making, provides high quality qualitative and quantitative data describing library activities, and serves as a basis to constantly assess and improve library services” (2008, p. 179). The author links evaluation to planning, viewing these two processes as “the two sides of the same coin—each contributing to a successfully managed library” (ibid., p. 181).

In Information Services Today, Stenström distinguishes two primary objectives of evaluation in libraries and information organizations:

  • to inform decision making regarding the organization’s strategic directions, and
  • to aid in demonstrating the value of the organization’s services to its stakeholders (2015, p. 272).

Evaluation criteria

Definitions of library value vary, as do approaches to specific evaluation criteria for library services. While discussing a culture of assessment in libraries, Stenström distinguishes three defining variables in the library value construct: “User Satisfaction: This measures the perception of a client, customer, user, or patron regarding whether they believe their needs have been met. Economic Impact: This measures the financial impact an information organization has on its immediate community. For example, a university paying salaries to library employees who then turn around and purchase goods in their local communities demonstrates both direct and indirect economic impact. Social Impact: This is multifaceted and difficult to measure—as many factors beyond the information organization can contribute to the growth or decline of an organization’s social impact; however, many consider it more important than economic impact. Examples include improved literacy levels, community pride, a sense of belonging, greater levels of social trust, and increased opportunities for democratic participation” (2015, p. 272).

Stenström also points out that the measured value can be direct or indirect, depending on whether it is geared toward an individual user or society in general. Direct value to the user, for example, is measured by increased skills, knowledge, enjoyment, or financial savings (with an emphasis on user satisfaction). Indirect value to society in general, on the other hand, is measured by greater levels of education, generalized trust, job creation, or support of the publishing cycle and national cultural growth (with an emphasis on the broader social impact) (ibid., p. 273).

In The Portable MLIS: Insights from the Experts, McClure, in turn, identifies six assessment criteria—extensiveness, efficiency, effectiveness, service quality, impact, and usefulness—and defines a measurement scope for each of them. Extensiveness measures how much of a particular program or service a library provides, e.g., the number of reference transactions per week. Efficiency measures the use of resources required for the provision of services, e.g., the time required per reference transaction. Effectiveness measures the extent to which a library program or service meets users’ information needs, e.g., the reference transaction success rate. Service quality measures the quality of library programs or services. Impact measures the difference a particular library program or service makes in the community. McClure equates this criterion with an outcome: “Outcomes typically determine the degree of change in a person’s knowledge, skills, attitudes, or behavior” (2008, pp. 182-183). Finally, usefulness measures the degree to which a library program or service was able to assist or solve problems of a particular user or user group.
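
Three of these criteria lend themselves to straightforward arithmetic. The following minimal Python sketch, which uses entirely hypothetical transaction data (the field names and figures are invented for illustration), shows how extensiveness, efficiency, and effectiveness might be computed from one week of reference statistics:

    # Hypothetical log of one week of reference transactions (illustration only).
    transactions = [
        {"minutes": 8, "answered": True},
        {"minutes": 15, "answered": True},
        {"minutes": 5, "answered": False},
        {"minutes": 12, "answered": True},
    ]

    # Extensiveness: how much of the service was provided (transactions per week).
    extensiveness = len(transactions)

    # Efficiency: resources used per unit of service (average minutes per transaction).
    efficiency = sum(t["minutes"] for t in transactions) / len(transactions)

    # Effectiveness: extent to which user needs were met (transaction success rate).
    effectiveness = sum(t["answered"] for t in transactions) / len(transactions)

    print(f"Extensiveness: {extensiveness} transactions this week")
    print(f"Efficiency: {efficiency:.1f} minutes per transaction")
    print(f"Effectiveness: {effectiveness:.0%} success rate")

Figures like these become meaningful when they are tracked over time or compared against a target, which is what ties measurement back to the planning process McClure describes.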

The decision to use a particular set of evaluation criteria is informed by the specific library evaluation context. For example, while measuring effectiveness can produce meaningful results in the evaluation of reference service, the same criterion may not be well suited to evaluating library programs of indirect value with an emphasis on the broader social impact, e.g., outreach programs geared toward supporting culturally diverse communities. In the latter scenario, managers might prefer assessing user satisfaction instead, in order to determine whether library users believe their culturally diverse needs have been met. It is with the local context in mind that libraries and information organizations determine the evaluation criteria appropriate to their particular assessment needs: they may follow assessment guidelines already established by professional associations, such as the Reference and User Services Association’s Guidelines for Behavioral Performance of Reference and Information Service Providers (RUSA Guidelines), or they may independently assemble a set of evaluation criteria that best fits their assessment scope.

Evaluation methods

The individual context plays an equally important role in the selection of effective evaluation methods. “To understand the impacts, benefits, and value of library services and resources, library decision makers must select evaluation strategies appropriate to targeted data needs within specific situational contexts,” Snead, Bertot, McClure, and Jaeger point out. “There are many different kinds of evaluation data that a library may need and evaluation approaches that a library might employ. As a result, many libraries struggle with the problem of choosing the best evaluation approaches to effectively and efficiently demonstrate the value they provide” (2006, p. 225). The authors discuss three evaluation approaches (resource-based, problem-based, and multiple-evaluation processes) and identify the factors that determine the best-fit evaluation: the purpose of the evaluation, the type of data needed, the knowledge and skills of library staff, the degree of difficulty, and organizational and situational factors such as available resources and political context (ibid., pp. 228-229).

In addition to the professional associations’ guidelines, libraries use a variety of methods in the effort to assess their value: impact surveys, value toolkits, return-on-investment calculators, contingent valuation analysis (CVA), focus groups, cost/benefit worksheets, web analytics, and others. In close collaboration with ALA, New Knowledge Organization Ltd., for example, employs both qualitative and quantitative methods in the evaluation of the Libraries Transforming Communities (LTC) initiative, including surveys, interviews, focus groups, and web-use monitoring.
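
To illustrate the arithmetic behind one of these tools, the sketch below works through a simple cost/benefit, or return-on-investment, calculation in Python. All of the per-use values, usage counts, and the operating budget are hypothetical placeholders; an actual calculator would use locally determined values for each service and counts drawn from the library’s own statistics:

    # Illustrative library ROI / cost-benefit worksheet (hypothetical figures only).
    estimated_value_per_use = {
        "book_loan": 17.00,          # assumed retail value of one circulated item
        "reference_question": 15.00,
        "program_attendance": 10.00,
    }
    annual_use = {
        "book_loan": 120_000,
        "reference_question": 8_000,
        "program_attendance": 5_500,
    }

    total_benefit = sum(estimated_value_per_use[s] * annual_use[s] for s in annual_use)
    annual_operating_cost = 1_500_000  # hypothetical annual budget

    roi = total_benefit / annual_operating_cost
    print(f"Estimated community benefit: ${total_benefit:,.0f}")
    print(f"Return on investment: ${roi:.2f} of benefit per dollar spent")

It is precisely this kind of purely monetary summary that Neal, discussed below, finds inadequate for academic libraries.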

In recent years, it has been argued that, when measuring library value, libraries and their funding agencies should be concerned not only with numbers, or quantitative evaluation, but also with their context, and that quantitative results should be reflected in a personalized, more contextualized evaluation narrative. James G. Neal, for instance, argues against the return-on-investment evaluation method in the academic library environment. “ROI instruments and calculations fundamentally do not work for academic libraries, and present naïve and misinterpreted assessments of our role and impacts at our institutions and across higher education,” the author maintains. “New and rigorous qualitative measures of success are needed” (2011, p. 424). Instead, he recommends that, in support of the user-oriented academic library, evaluations focus on user expectations, embracing the “human” objectives of success, happiness, productivity, progress, relationships, experiences, and impact. Neal proposes formulas for measuring the value of academic libraries that factor in the variables of quality, price, and success (ibid., p. 428):

  • Value = Quality + Traffic
  • Quality = Content + Functionality
  • Price ≠ Cost of Inputs
  • Price = Perceived Quality + Value
  • Success ≠ Resource Allocation
  • Success = Resource Attraction

Evidence

To support my skills in this competency, I present the following examples of my work:

  1. The Digital Reference Interview Evaluation assignment from LIBR 210 (Reference and Information Services)
  2. The Library Website Review assignment from INFO 241 (Automated Library Systems)
  3. The ILS Vendor Evaluation assignment from INFO 241 (Automated Library Systems)

LIBR 210 Digital Reference Interview Evaluation assignment

The first piece of evidence to demonstrate my mastery of competency N is the Digital Reference Interview Evaluation assignment from LIBR 210 (Reference and Information Services). In this assignment, I worked with a classmate to analyze two synchronous online chat-based reference interviews and to provide recommendations on how the reference service might have been improved. As the primary author of the Interview #1 transcript analysis and a contributor to the Interview #2 transcript analysis, in this paper I explore how the reference service reflected in the transcripts did or did not demonstrate the evaluation criteria my classmate and I had agreed upon—the RUSA Guidelines for Behavioral Performance of Reference and Information Service Providers.

The following evaluation criteria from the RUSA Guidelines are used in this paper, particularly as they apply to general and remote reference transactions:

  1. Visibility/Approachability
  2. Interest
  3. Listening/Inquiring
  4. Searching
  5. Follow-up

A detailed description of the evaluation criteria is available in Appendix A of this paper; the text of the transcripts is included in Appendix B.

The analysis presented in this paper demonstrates my competence in digital reference evaluation and my recognition of the importance of adhering to professional guidelines when evaluating reference services. The ideas I present in this paper support my reference evaluation skills, as I identify the ways in which the librarian-patron interactions might have been improved based on the established evaluation criteria.

Click here to read the paper.

INFO 241 Library Website Review assignment

The second piece of evidence to demonstrate my mastery of competency N is the Library Website Review assignment from INFO 241 (Automated Library Systems). In this paper, I present a heuristic evaluation of the usability of the World Digital Library (WDL) website. Usability, according to Jakob Nielsen, is “a quality attribute that assesses how easy user interfaces are to use. The word ‘usability’ also refers to methods for improving ease-of-use during the design process” (2012). Based on Nielsen’s definition of usability, the paper presents three strong points of the WDL website’s interface design and then discusses three points that need improvement. For my evaluation criteria, I use the usability principles developed by Purdue University (Lehman & Nikkel, 2008, pp. 10-11):

  • Clarity of communication
  • Accessibility
  • Consistency
  • Navigation
  • Flexibility and minimalist design
  • Visual presentation
  • Recognition rather than recall

This piece of evidence demonstrates my understanding of heuristic evaluation principles as they apply to web usability evaluation in digital libraries.

Click here to read the paper.

INFO 241 ILS Vendor Evaluation assignment

The third piece of evidence to demonstrate my mastery of competency N is the ILS Vendor Evaluation assignment from INFO 241 (Automated Library Systems). In this paper, I present a proposal to adopt the Koha ILS as a replacement for a traditional, proprietary ILS in a small academic library. Following a discussion of the library’s background, its main user groups, and their information needs, I provide the rationale and criteria for selecting Koha as the new ILS vendor by way of a detailed vendor evaluation. My evaluation is based on the following criteria:

  • Available modules
  • Compliance with library standards
  • Next-generation offerings
  • Customization
  • Consortial interoperability / interaction with consortial partners
  • Pricing
  • Available vendor support

The information used in the evaluation draws on several sources, including the vendor’s website, published vendor reviews, implementation studies, comparative analyses of open-source integrated library systems, and library technology reports on specific initiatives on the part of the vendor to provide services to libraries.
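
To show how criteria such as these can be rolled up into a single comparison, the following Python sketch applies a simple weighted-scoring matrix. The weights, the competing option, and the 1-5 scores are hypothetical placeholders for illustration and are not the figures used in the assignment itself:

    # Hypothetical weighted-scoring matrix for comparing ILS options (illustration only).
    weights = {
        "available_modules": 0.20,
        "standards_compliance": 0.20,
        "next_gen_offerings": 0.10,
        "customization": 0.15,
        "consortial_interoperability": 0.15,
        "pricing": 0.10,
        "vendor_support": 0.10,
    }

    scores = {  # 1 (poor) to 5 (excellent), assigned by the evaluator
        "Koha": {
            "available_modules": 4, "standards_compliance": 5, "next_gen_offerings": 3,
            "customization": 5, "consortial_interoperability": 4, "pricing": 5,
            "vendor_support": 4,
        },
        "Incumbent proprietary ILS": {
            "available_modules": 5, "standards_compliance": 5, "next_gen_offerings": 4,
            "customization": 2, "consortial_interoperability": 4, "pricing": 2,
            "vendor_support": 4,
        },
    }

    for option, option_scores in scores.items():
        total = sum(weights[criterion] * option_scores[criterion] for criterion in weights)
        print(f"{option}: weighted score {total:.2f} out of 5")

A matrix like this keeps the comparison transparent: the criteria and weights can be presented to decision makers alongside the narrative rationale.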

From this assignment, I learned to conduct a comprehensive evaluation of integrated library system vendors tied to the evidence-based information needs of academic libraries and to develop proposals for ILS implementation based on the evaluation results. This piece of evidence demonstrates my ability not only to evaluate library technology systems against applicable criteria, but also to present my findings in a concise and convincing way to academic library decision makers (the Board, the Trustees, the Partners, and the University), explaining why this particular technology is important to the specific needs of this library system and its users.

Click here to read the paper.

Conclusion

Libraries provide unique value to their communities, a value that in the past went unquestioned, was commonly assumed to be a public good, and granted libraries steady sources of public funding. Changes in the political and economic climate, however, have brought about a shift in this fiscal reality, requiring that libraries justify the allocations they receive in government budgets. “Given the complexity of today’s information management and services delivery environment, librarians must be able to assess the degree to which a particular resource, service, program, or activity meets library goals and objectives and ultimately improves the quality of services to the library’s community,” McClure maintains. “Without ongoing and meaningful evaluation the library profession will not be able to attack and resolve successfully the issues and challenges of today or the future” (2008, p. 180). With the evaluation skills I have acquired during my MLIS studies, I am confident that I can determine best-fit evaluation strategies and apply evaluation criteria appropriate to a specific library context, providing for successful evaluation of professional services and programs in the effort to maximize the library’s impact and demonstrate the library’s value to local stakeholders and the community at large.

References

Lehman, T., & Nikkel, T. (2008). Making library Web sites usable: A LITA guide. New York, NY: Neal-Schuman Publishers.

McClure, C. R. (2008). Learning and using evaluation: A practical introduction. In K. Haycock & B. E. Sheldon (Eds.), The portable MLIS: Insights from the experts (pp. 179-192). Westport, CT: Libraries Unlimited.

Neal, J. G. (2011). Stop the madness: The insanity of ROI and the need for new qualitative measures of academic library success. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/confsandpreconfs/national/2011/papers/stop_the_madness.pdf

Nielsen, J. (2012, January 4). Usability 101: Introduction to usability. Retrieved from http://www.nngroup.com/articles/usability-101-introduction-to-usability/

Snead, J. T., Bertot, J. C., McClure, C. R., & Jaeger, P. T. (2006, September). Developing best-fit evaluation strategies. In F. DeFranco, S. Hiller, L. J. Hinchliffe, K. Justh, M. Kyrillidou, J. Self, & J. Stein (Eds.), Proceedings of the Library Assessment Conference: Building Effective, Sustainable, Practical Assessment (pp. 225-232). Washington, DC: Association of Research Libraries. Retrieved from http://libraryassessment.org/bm~doc/proceedings-lac-2006.pdf

Stenström, C. (2015). Demonstrating value. In S. Hirsh (Ed.), Information services today: An introduction (pp. 271-277). Lanham, MD: Rowman & Littlefield.