An Empirical Review of Library Discovery Tools
Xi Shi*, Sarah Levy
SUNY Rockland Community College, New York, USA.
DOI: 10.4236/jssm.2015.85073


The Internet search concept has fostered an expectation that all users need to do is feed relevant terms to a search engine to describe a topic or ask a question, and click “search”. The search engine is then expected to return a list of possibly relevant and useful results for users to choose from. Based on this search concept, library system developers have been constructing software programs for library databases that manage scholarly information. These programs are known as library discovery tools, or web-scale discovery (WSD) tools. In this article, the term “library discovery tools” is used when discussing search engines designed for libraries; WSD tools are included under this term. Library discovery tools are intended for intelligent searches for educational or research purposes. This article provides a practical analysis of available library discovery tools in the context of the present-day explosion of open search engines available on the Internet. Our analysis focuses on how discovery tools are expected to manage library collections and provide access to scholarly information content, as well as other factors, such as budgetary considerations, that bear on choosing or adding a discovery tool for a library.

Share and Cite:

Shi, X. and Levy, S. (2015) An Empirical Review of Library Discovery Tools. Journal of Service Science and Management, 8, 716-725. doi: 10.4236/jssm.2015.85073.

1. Introduction

Although the concept of library discovery tools is no longer novel to many, including librarians, the understanding of these tools is far from complete, and there is no consensus on what they do, how they do it, and the cost of delivering what is expected.

As both authors are librarians at a community college library, for the purposes of this article, we discuss library information management systems and library information search tools, particularly discovery tools, as IS products and services. Our library users, mostly our community college students, are observed and discussed as a specific sub-group of computing end users, or computing consumers. A library discovery tool is defined in this paper as a search engine that builds on unified indexes of licensed scholarly information, searches across multiple library databases provided by different vendors, and may be customized in its size, range and comprehensiveness of data inclusion for targeted solutions. Deliberations are presented on which search tools in academic libraries are most appreciated by college students and faculty, highlighting the differences between library search tools, including discovery tools, and commercial search engines.

Currently, the Internet is exploding with free and fee-based web-scale search tools for scholarly information, and more appear daily; some are commercial, such as Google Scholar, while others are sponsored by academic institutions, such as CiteSeerX. Our analysis includes only those commonly recognized and habitually used by college students, selected for the comprehensiveness of their academic content coverage and the satisfying search results they return.

A practical review is offered on how search tools are evaluated by users, and which tools are often identified and selected by specific clientele as appropriate for satisfying their information needs. First, a comparative analysis of commercial search engines versus library search tools is provided. Search experiences are discussed in order to describe characteristics of the various search engines, thereby identifying what information users expect from these search engines, and the factors users value when selecting one of many search engines for their information needs. Second, the most popular library databases and search tools are evaluated. The strengths and weaknesses are discussed to understand the practical functionality of each tool, and to facilitate the formation of realistic expectations of the currently available library discovery tools. Finally, financial cost-benefit considerations are reviewed as an additional factor to be weighed when library databases or search tools are being investigated.

2. Commercial Search Engines

2.1. Google and Its Impact on the Library Market

Google, like it or not, has “created a model that librarians, as information providers, must meet head on.” (Luther, 2003) [1] . It is in the best interest of librarians and library IS developers to understand Google, what it does, how it progressed to today’s performance level and to learn from it, rather than distancing ourselves and our information users from it.

Most academic librarians do not, and generally speaking librarians should not, discourage students from using Google, especially Google Scholar, as Google Scholar never fails to provide relevant information for any set of search terms. Very often, Google Scholar includes full text information offered by educational institutions at no cost to the users. In some instances, the search will direct the users to their home library if the full text is found to be available via their library subscriptions. Even if the full text is not offered from the initial Google Scholar search, once the bibliographic information is provided, a targeted search by author(s) or title can be followed up in library databases rather easily.

The quality of content retrieval from a Google search has no doubt been recognized by most educated users. Google’s large-scale search algorithms, indexing preferences, and ranking and display techniques have been observed, and its capacity to provide relevant content, often described by librarians as “good enough content” (Luther, 2003) [1] , has been recognized and accepted. This is one reason that many library IS developers imitate what Google does by creating “Google-like” search tools to manage library information resources. However, one information retrieval characteristic of Google that has not received as much attention is that the relevant content from a Google search is not always displayed directly or instantly. This statement may appear to conflict with many existing findings in the literature discussing how much information content Google does provide. We noticed that Google often functions as a pointer when it does not present full text content, and sometimes even when it does, so as to purposefully provide additional information from different sources. For example, when searching Google for the term “define plagiarism”, a seven-digit result count is indicated. The first entry listed on the result page is a definition of plagiarism provided by Google. Following this full text definition are links pointing to definitions offered by other sources, e.g., Wikipedia and academic dictionaries such as Oxford Dictionaries, as well as definitions provided by many educational institutions, e.g., Princeton University.

Many studies have reported and confirmed that academic library users use Google, love Google and complete their assignments on Google (Al-Maskari & Sanderson, 2011) [2] ; (Gross & Sheridan, 2011) [3] ; (Luther, 2003) [1] ; (Thompson, Obrig & Abate, 2013) [4] . We believe this user behavior is likely influenced by users’ expectations. However, how users decide to turn to Google and return to Google still needs comprehensive investigation.

2.2. Search Engine Success Determinants

In the IS field, both researchers and practitioners, such as software developers, believe that user satisfaction is an indication of successful IS product design (Au, 2008) [5] ; (Bolton & Drew, 1991) [6] ; (Heinbokel, et al. 1996) [7] ; (Hsieh et al. 2012) [8] . This same belief is shared in library science (Condit Fagan et al. 2012) [9] ; (Gross & Sheridan, 2011) [3] . It is well researched that satisfied information users tend to continue with the systems or programs they have experienced, while dissatisfied users will most likely look for different tools for the task at hand, or return to tools with which they had prior positive experiences. It is well documented that user satisfaction closely influences the selection of library IS products.

According to satisfaction researchers, consumer satisfaction is heavily influenced by consumers’ expectations of the product or service being consumed. Therefore, many studies define the satisfaction formation process as a comparison in which consumers compare their pre-purchase expectations with the product or service after the purchase (Bhattacherjee, 2001) [10] ; (Churchill & Surprenant, 1982) [11] ; (Shi et al., 2004) [12] ; (Spreng et al., 1993) [13] ; (Spreng et al., 1996) [14] . The most renowned theory applied to explain information user satisfaction in both the IS and library fields is disconfirmation of expectations. This theory proposes that consumer satisfaction results from a comparison of pre- and post-purchase experiences. Before the purchase, consumers have expectations of what they are buying. During the purchasing process, consumers evaluate and select products and services according to those expectations. After the purchase, consumers compare what they have purchased with their pre-purchase expectations. If the products or services exceed expectations, satisfaction occurs; if they fall short, dissatisfaction occurs. In other words, disconfirmation is defined as the discrepancy between consumers’ expectations and the actual purchase experience. Outcomes better than expected lead to positive disconfirmation; outcomes worse than expected lead to negative disconfirmation.
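The disconfirmation comparison described above can be expressed as a minimal sketch. The numeric rating scale and the function names below are illustrative assumptions for exposition only; the theory itself does not prescribe a measurement scale.

```python
def disconfirmation(expectation: float, perceived_performance: float) -> float:
    """Disconfirmation = perceived performance minus pre-purchase expectation.

    Positive values correspond to positive disconfirmation (better than
    expected); negative values to negative disconfirmation (worse than
    expected). Scale is an illustrative assumption, e.g. ratings out of 10.
    """
    return perceived_performance - expectation


def satisfaction_label(expectation: float, perceived_performance: float) -> str:
    d = disconfirmation(expectation, perceived_performance)
    if d > 0:
        return "satisfied"      # outcome exceeded expectations
    if d < 0:
        return "dissatisfied"   # outcome fell short of expectations
    return "confirmed"          # expectations exactly met


# A user expected search quality of 7/10 but experienced 9/10:
print(satisfaction_label(7, 9))  # satisfied
```

The sketch makes the theory's asymmetry explicit: the same objective search outcome can produce satisfaction for one user and dissatisfaction for another, depending solely on the expectation each brought to the search.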

As for information users, specifically library users who are consumers of IS products and services, satisfaction with search results is heavily influenced by their expectations. On one hand, Google has changed information seekers’ expectations, and has defined and redefined the search experiences of those seeking information, scholarly or otherwise. On the other hand, library IS developers, as well as librarians, have created very high user expectations for library databases and discovery tools for scholarly information searching, promising that discovery tools will do as well as, if not better than, Google.

For the purpose of a comparative analysis, one search example is offered. Searching Google for the article Cognitive Test Anxiety and Academic Performance, by J. C. Cassady & R. E. Johnson in Contemporary Educational Psychology, the first listing shows the exact article with a brief description. Clicking on the listing opens the article in Science Direct, and the full text can be retrieved as a PDF. In addition, attached to the listing, users can easily find “Cited by” information and formatting guides to “Cite” in APA, MLA and Chicago, or choose to “Save” the listing into “My library” for later review. Please note that Science Direct is one of our library’s subscription databases, although the library made no effort to have this article listed and linked to the database by Google.

The same search is then conducted in Ebsco Discovery Service (EDS), to which our library subscribes. Searching by exact title, nine records are displayed. One of the nine records shows the exact title, authors and publication information; however, the user is pointed to Scopus, a database our library does not subscribe to. If the search is revised to exact title and author(s), limited to full text only, two records are listed. The record for the article from Scopus still shows, although no full text is available to our library users. The other record shows the title as “Regular Article: Cognitive Test Anxiety and Academic Performance”, with a link indicating “View record from Science Direct”, which will retrieve the full text article.

The above sample search demonstrates that the Google search probably exceeded the user’s expectations, in the sense that the full text is quickly displayed in the search results and can be retrieved with no additional effort and at no cost to the user. In addition, citation guides are handily provided for college students, and “Cited by” information offers immediate access to other research articles, mostly in the same field, where the original article is cited and discussed. By contrast, library discovery tools such as EDS require intensive customization from the library end, including constructing a “profile” to include all subscription databases, journals and other print and electronic materials. Regular updates of all holdings are required, including the loading of catalog MARC records if the library wishes its book collection to be incorporated in the discovery search. Updating library collection information, e.g., additions and deletions of journals and databases, is also the library’s responsibility. Google, on the other hand, requires no effort from the user to refer a search to a more appropriate database. All Google search features are integrated into one interface, and any given search will be directed to the more appropriate search tools, which can also be activated at “More” and “Even more from Google”. For example, if a search is detected as likely being a scholarly article search, it is automatically redirected to Google Scholar. Similarly, if a search is detected as likely being a book search, it will go to Google Books.
In addition, as Google announced in December 2009, it would begin to customize search results based on information gleaned from the user, meaning that different individuals conducting the same search might receive different results, believed to better fit what those individuals expect (Petter, DeLone & McLean, 2013) [15] .
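The routing behavior described above can be sketched as a toy query classifier. The keyword heuristics below are purely illustrative assumptions; Google's actual detection logic is proprietary and uses far richer signals than keyword matching.

```python
def route_query(query: str) -> str:
    """Toy sketch: route a query to a more specialized search tool.

    The trigger terms are illustrative assumptions, not real detection
    rules; a production system would use learned models and user context.
    """
    q = query.lower()
    if any(term in q for term in ("doi:", "journal", "cited by", "et al")):
        return "scholar"   # looks like a scholarly-article search
    if any(term in q for term in ("isbn", "book", "paperback")):
        return "books"     # looks like a book search
    return "web"           # default: general web search


print(route_query("cognitive test anxiety journal article"))  # scholar
print(route_query("isbn 9780131103627"))                      # books
```

Even this crude sketch illustrates why the routing feels effortless to users: the classification happens before any results are shown, so the user never has to know that multiple specialized indexes exist behind the single search box.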

3. Users’ Perceptions and Evaluations of Library Discovery Tools

In a dynamic information environment, consumers look for one-stop shopping sites for their daily necessities, whether on the Internet or at local stores. Similarly, library users look for one-click search engines to fulfill their information needs.

The most widely recognized WSD services in the library market include Ebsco’s Discovery Service, Ex Libris’ Primo Central Index, Serials Solutions’ Summon Service, and OCLC’s WorldCat Local. Although these discovery tools are all web-based computer programs, they were not designed from the same conceptualization. The following discussion explores some aspects we believe are essential to understanding discovery tools in general, before attempting to select the most appropriate one for a specific library.

Ebsco’s Discovery Service (EDS) is one of the discovery search tools highly recommended by librarians. Ebsco has won many academic contracts for the coming years; the State University of New York (SUNY), for example, signed a multi-year contract with EDS in 2013.

EDS obtains all licensed data from other suppliers. Wherever possible, EDS copies the bibliographic data and includes it in Ebsco’s metadata for processing and retrieval. In theory, this conceptualization should provide seamless searching across all information suppliers as well as direct retrieval without the use of pointers or linkers, since the data now resides with Ebsco. However, if any information is not included in Ebsco’s central index, for example because a supplier does not agree to supply its data, the discovery service will either omit that information from the search or need to rely on a linker to retrieve the full text. Another design conceptualization depends on common programming standards, also labeled in the field of library science as “discovery layers” (Hoeppner, 2012) [16] . Summon is an example of this design approach. Layers apply when indexing across the different platforms used by different data suppliers for data retrieval. Although the interface, display, and searching process of this design approach may present as a single application to users, it is in fact an improved federated search, adjusted by adding more layers to include more data access, under the condition that all suppliers comply with the same standards.

The following provides a description of the authors’ evaluations based on our own trials and experiences at our library.

EDS, a rather new Ebsco product in the library market, is developed and marketed as a “Google-like” search engine. EDS is supposed to return search results from all library resources across the different platforms provided by different vendors. After the user enters keywords or subject terms for a topic, EDS performs a smart search that leads the user to results and offers suggested adjustments to the original search, such as narrowing or limiting it. It may also provide recommendations for further exploration, e.g., different search terms or different areas to search, as well as directions and assistance for further steps in the research endeavor; one example of such assistance is the citation guides, which include the most academically accepted citation formats. Ebsco has a solid reputation for comprehensive collection content, sophisticated search options, and responsive technical support. However, with information content from other vendors included, such as Science Direct, LexisNexis, etc., the discovery module does not appear to be designed to return search results with equal ranking across providers. Some articles might be missing from the EDS search results but would be retrieved if the search were conducted in the native database where the articles reside. This phenomenon confirms the bias in search engine coverage and information retrieval systems reported in other studies (Buttcher & Soboroff, 2007 [17] ; Vaughan & Thelwall, 2004 [18] ). Consequently, users must understand that the search results provided by a discovery tool from one specific vendor may not be content neutral, whether due to indexing technique or classification standards. In other words, the Ebsco Discovery Service user should expect to be pointed to Ebsco information first and foremost.

From our experience, help from librarians often enhances EDS search outcomes. In fact, librarians’ assistance, which might include subject phrasing choices and search technique guidance, is found most effective, as library users appreciate librarians’ comprehensive knowledge not only of the subject matter of the information content, but also of the performance of library subscription databases. Unlike Google searching, librarians are expected to be familiar with the content specialty of each database and the search functionality of different vendors, as well as being knowledgeable about their institution’s academic curriculum. EDS, although a discovery tool, works better with librarians’ intervention matching the available information to users’ needs, in our case mostly students’ information needs to complete their academic assignments. For example, librarians would recommend Science Direct as an intelligent choice for scholarly information on “joint injury”, but for a specific legal case involving “joint injury” the user would receive better results in LexisNexis.

To better appreciate the consequences of design concepts, how library discovery tools might function differently from one another, and how they differ collectively from commercial search engines such as Google, the following offers a factual analysis from our observations as librarians. The discussion of application evaluations and users’ selection of search tools is presented from three perspectives: first, the content orientation of the provider; second, users’ information needs, information literacy level and perceived net benefits; and third, budgetary or resource-based considerations and their influence on the selection of a search engine or search tool.

3.1. Content Orientation of Search Engine Providers

First, we should acknowledge that every library database has its own content orientation, in much the same way that commercial search engines are constructed. Although Google can be viewed as a discovery tool, when specific information is searched for, the Google user will be redirected to a more appropriate search engine. Sophisticated users may directly choose the most effective engine for their particular search. For example, when looking for the definition of a term, a user may go directly to a dictionary or encyclopedia, or try Google first and then be redirected to a more appropriate site. Most current-generation college students know to go directly to Amazon when seeking to purchase books, to Wikipedia for quick information on a subject, and to their college website for academic-related information, such as registration dates, the academic calendar or course information.

Library databases and search tools also have their own content specialties. Therefore, each provider’s database construction concept may be influenced and governed by distinctive knowledge of its collections, and created to work most efficiently with those collections. For example, Gale specializes in literature; Ebsco started its library business with journal and serial publications; and OCLC is known as a bibliographic utility. Understanding the specialties of each vendor is important because it puts users in a position to form realistic expectations. For example, if users recognize that Gale specializes in literature content, they may reconsider whether a discovery tool is the most sensible way to find literary criticism, even if the Gale content is included in the discovery module. Understanding that Ebsco began as a journal service, users can expect its serial data collection to be one of the most inclusive. Similarly, OCLC’s WorldCat should provide superior functionality when searching catalogs for library-specific holding information, resource sharing possibilities, etc. Periodical or journal information at the publication level provided by OCLC derives from MARC records entered or loaded from other sources; consequently, the information quality may not be predictable, because the host of such content may or may not behave as described. Similarly, Ebsco may not be the best tool for searching library catalog information, especially union catalogs, since this information is loaded in MARC format from other sources, such as OCLC or a local library system, into the EDS module. Any time data is introduced from an external source, uncertainty can be anticipated, and users may experience less reliable search outcomes, which inevitably leads to less confidence in the searching process.

The following scenario offers two search examples to illustrate the impact of the selection of a search engine on a search outcome. Students in a social studies class are assigned to research the correlation between quality and the cost of higher education. The results from different search engines are listed in Table 1 below for analysis. The same search term “higher education cost and quality” was applied to all searches.

The data (retrieved July 2015) in Table 1 show the search tools selected, the search strategy applied and the results retrieved from the initial search conducted when the assignment was received. Please note that of the 65,746 EDS results, 36,827 are full text articles, and all Science Direct listings have full text. The Google Scholar search was not initiated by the user; instead, the Google Scholar results were pointed to by the Google web search.

We conducted the same search in EbscoHost with Academic Search Complete, Education Source and ERIC simultaneously selected. The search returned 4813 results, of which 1488 are displayed when limited to full text scholarly journals. See Table 2 for details of this search.

After careful evaluation of the results, we found that the EbscoHost search displayed in Table 2 provided much more focused content than the same search performed in Ebsco EDS. Like most library databases, EDS has the ability to suggest subject terms within the search context to narrow down the results, and most databases also build in “smart search” functionality to suggest similar or broader terms for expanding a search.

Our students, like most undergraduates, prefer to spend the least possible time retrieving a minimum amount of acceptable information (Gross & Sheridan, 2011) [3] ; (Hoy, 2012) [19] . This behavior is discussed further in later sections of this paper. From the analysis of our sample searches, two indications of why selecting an effective search tool matters warrant attention. First, a wise selection of a search tool can result in speedy and satisfactory information retrieval, creating an immediate positive perception of the tool in the user’s information search process. Second, this positive perception will most likely be reinforced by repeated positive experiences with an effective search tool. In turn, these positive experiences enhance users’ learning practice of evaluating available search tools and selecting the one best suited to their information needs.

The following sample searches further illustrate the importance of selecting an effective search tool. Searching in Google for “Wharton Puts First-Year MBA Courses Online for Free”, published by Business Week on September 13, 2013, pulls up the full text article. The same search was conducted in Ebsco EDS, EbscoHost, ProQuest Central, and Gale. No result was found in any of the tested databases, nor was the user pointed to other sites, although Business Week (also indexed as Bloomberg Businessweek) is covered by Ebsco Business Complete, to which our library subscribes, and is also included in our individual journal subscriptions. Such retrieval failures of newspaper articles, including from major papers such as The New York Times and The Wall Street Journal, are often found in library databases, including discovery tools. Explanations from the providers include incompatibility between the indexing of the dot-com version and the print paper, misinformation received from the newspaper(s), etc.

3.2. Information Users’ Needs and Perceived Net Benefits

Table 1. Search result comparison.

Table 2. Ebsco education research complete, academic search complete & ERIC.

To begin our discussion on users’ information needs and the role those needs may play in information searching and retrieval, a brief literature review is provided.

Many satisfaction researchers in the fields of IS and marketing consider it necessary to include a “needs” construct when studying information searching and retrieval; findings from theory development and testing have been reported (Oliver, 1995) [20] ; (Spreng et al., 1996) [21] . Needs theory can be traced back to the 1940s, when Maslow (1943) developed the theory of a hierarchy of needs. The basic assumption of needs theories is that when deficiencies of need are realized and identified, people are motivated to take action to correct or reduce those deficiencies to satisfy the need. Since human beings have different levels of needs, according to Maslow, the actions taken reflect different behavior patterns in the effort to fulfill those needs. Two areas of concern are worth noting. First, many satisfaction researchers argue that needs should be one of the driving factors in understanding consumer behavior, and that needs are likely to have some effect on predicting or influencing consumer satisfaction; therefore, the needs construct should be incorporated into the disconfirmation equation. In doing so, some researchers find that consumers may compare product performance, such as that of IS programs or search engines, with their identified needs. Accordingly, if the product performance exceeds the innate needs, the consumer realizes personal benefits and is satisfied. On the other hand, if the product performance does not fulfill the pre-identified needs, for example if the search failed to provide the sought-after information, the consumer perceives no net benefits and is dissatisfied (Petter et al., 2013) [15] ; (Sirgy, 1984) [22] ; (Spreng et al., 1996) [14] ; (Shi et al., 2004) [12] .

Researchers in the field of information science have noted another perspective on the impact of needs on satisfaction. This perspective holds that, unlike other human needs, such as the need for food, water or shelter, “what is required to satisfy an information need is often not known to the individual concerned” (Cole, 2011, p. 1216) [23] . Information needs must be placed in the context of the specific user’s situation in order to be meaningful. Saracevic (2007) [24] stated that while most research on information needs concerns the user-centered concept, computer scientists designing information retrieval systems often ignore the information-situation condition. Because information needs are intangible and non-specifiable by nature, information users may be engaged in a long process of activities as they search and then revise their searches to find the needed information. No matter how long the process takes or how it progresses, e.g., an academic assignment or school project may last weeks or months, there will be different types of information searches throughout that time while the information needs for the assignment remain constant. This information seeking behavior can be witnessed in libraries as students look for information to satisfy an assignment or research project. Very often students come into the library without knowing or understanding what information they may need, or where to start searching for the information that may fulfill their assignments. Over the course of completing the assignment, students learn to recognize and better understand their information needs, which become clearer and clearer as the information seeking activities continue. It is a learning process, as students conduct searches, trying different tools and reviewing various results. This learning process helps students realize and clarify their information needs. In the meantime, their experience with searching activities and the tools applied forms expectations of what they can find, which tools are effective, where they should start, and how to get the most needed information in the shortest time.

Additional theories on how users’ needs should be integrated into IS design warrants further review. Users’ information needs have been widely studied and recognized by researchers in many different fields. However, how the needs are understood and integrated into the IS design is approached differently. The concept of cognitive approach is attention worthy. The cognitive approach considers that it is the information specialists’ knowledge that determines the successful design of an effective information retrieval system, and that knowledge is not based on the empirical studies of users (Hjorland, 2013) [25] . Examples of this approach include Apple computers and iPhones. Apple’s success suggests that understanding what users may want or need as individuals, as well as socially and culturally, offers insight for their design. Apple’s success demonstrates that even if a company does not interact directly with users, even if the design is not based on a review of the market, it can still be much more insightful about what people could want (Verganti, 2009) [26] . The library catalog as an IS retrieval system is a similar example. The most popular information organization schemes for library catalogs used by American libraries include Library of Congress and Dewey classification systems. Whether a catalog is an LC or Dewey based system, it is not likely that most college students will show interest in learning to understand the classification scheme. Most students simply need a tool to identify and locate the needed information. In this situation, librarians’ expertise in information organization and retrieval would be most appreciated in helping students find needed information quickly and effectively from a library catalog, which is now much more inclusive than just book collections. 
In addition, librarians’ help, including instruction on how to best utilize the available search tools, benefits not only those students, especially undergraduates, who often prefer to spend the least possible time retrieving a minimum amount of acceptable information, but also those serious users who demand, and care to check, the quality of the available information (Hjørland, 2013) [25].

When discussing characteristics of information users, equity theory should also be reviewed. According to DeLone and McLean’s IS Success Model (1992) [27]; (2003) [28], IS success can be measured by system variables, including system quality, information quality, and service quality, and by user variables, including system use and user satisfaction. The outcome of the Model is the “net benefits” perceived by either individual users or the organization/business. The basic concept of equity theory is that consumers’ satisfaction with a product or service is determined by a cost-benefit analysis. Consumers feel satisfied if they believe their input for the transaction to obtain the product, such as the time sacrificed or the money spent, is adequately rewarded. In other words, if consumers believe the time was justly spent and the product was well worth the money, they feel satisfied. On the other hand, if they do not perceive any benefit from the consumption, for example, if they believe the time was wasted and the product or service was not worth the money paid, they will be dissatisfied (Boddy & Paton, 2005) [29]; (Staples et al., 2002) [30]; (Woodroof & Kasper, 1998) [31].

Physical effort and time expended can be seen as two major inputs when using a library for educational materials or using a library database to find scholarly information. According to equity theory, the end results of library users’ efforts, whether or not the materials are available in the library or online from library subscription databases, will be judged against how much time and effort was required to retrieve the information. For example, does the student need to make a physical trip to the library? Is the information available from the library databases? How easy is it to identify, locate and retrieve the information? Does the database supply the full text right away, or is only bibliographic information offered, so that further searching is needed to access the full text? Bear in mind that library users always conduct cost-benefit analyses, knowingly or unknowingly. Recalling our earlier sample search, for example, if a newspaper article can easily be pulled up from Google with a title keyword search, why should students go to the library homepage and then select a discovery tool, or familiarize themselves with the database from which that newspaper may be retrieved?

4. Conclusion

This paper is offered to promote further discussion of library discovery search tools and their development, assessment and selection. The authors believe that individual libraries’ experiences with an applicable discovery product, and the understanding of how a discovery product should function, call for further deliberation by and among librarians and professionals in the IT field. With this objective in mind, the authors present their experiences and evaluations of these search engines. Practical examples are analyzed within the context of database design and management frameworks, and expectations of how library discovery products should work to satisfy our information users’ academic and research needs are reviewed. The examples provided in this paper are all drawn from actual information-seeking activities at our library, a community college library; these searches include both popular Internet sites, such as Google, and library subscription databases. The relevant theoretical models that guided these evaluations are reviewed where appropriate. We believe the current market for discovery products is too aggressively promoted for libraries to reach sensible and comprehensive decisions. Librarians may feel compelled to provide their users with a discovery tool whether or not these products are well designed and mature enough to commit to, especially at their currently marketed prices. If the determination is made that discovery tools should be promoted, a careful evaluation of all products is recommended to ensure the selection of the one best suited to the targeted user group of a specific library. The library should then define, within its budget limitations, what it considers a reasonable expenditure for the selected product. In addition, we encourage librarians to investigate other models beyond the discovery products available on the library market, one suggestion being an exploration of partnerships with unconventional contractors, for example, Google.
Perhaps the library world could consider working with Google to develop a discovery tool designed for library tasks. This suggestion is derived from our experience with Google’s email system, which has been adopted by our college and is now employed by many other academic institutions as well. Many of these institutions, ours included, trialed other email services before adopting Google’s Gmail. Following this model, further exploration is recommended to develop more thoughtful and on-target library discovery search tools from the librarians’ perspective, with our users’ needs in mind.


*Corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.


[1] Luther, J. (2003) Trumping Google: Metasearching’s Promise. Library Journal, 128, 36-39.
[2] Al-Maskari, A. and Sanderson, M. (2011) The Effect of User Characteristics on Search Effectiveness in Information Retrieval. Information Processing and Management, 47, 719-729.
[3] Gross, J. and Sheridan, L. (2011) Web Scale Discovery: The User Experience. New Library World, 112, 236-247.
[4] Thompson, J., Obrig, K. and Abate, L. (2013) Web-Scale Discovery in an Academic Health Sciences Library. Medical Reference Services Quarterly, 32, 26-41.
[5] Au, N.N., Ngai, E.T. and Cheng, T.E. (2008) Extending the Understanding of End User Information Systems Satisfaction Formation: An Equitable Needs Fulfillment Model Approach. MIS Quarterly, 32, 43-66.
[6] Bolton, R.N. and Drew, J.H. (1991) A Multistage Model of Customers’ Assessments of Service Quality and Value. Journal of Consumer Research, 17, 375-384.
[7] Heinbokel, T., Sonnentag, S., Frese, M. and Stolte, W. (1996) Don’t Underestimate the Problems of User Centeredness in Software Development Projects—There Are Many! Behaviour & Information Technology, 15, 226-236.
[8] Hsieh, J., Rai, A., Petter, S. and Ting, Z. (2012) Impact of User Satisfaction with Mandated CRM Use on Employee Service Quality. MIS Quarterly, 36, 1065-A3.
[9] Condit Fagan, J., Mandernach, M., Nelson, C.S., Paulo, J.R. and Saunders, G. (2012) Usability Test Results for a Discovery Tool in an Academic Library. Information Technology & Libraries, 31, 83-112.
[10] Bhattacherjee, A. (2001) Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Quarterly, 25, 351-370.
[11] Churchill Jr., G.A. and Surprenant, C. (1982) An Investigation into the Determinants of Customer Satisfaction. Journal of Marketing Research, 19, 491-504.
[12] Shi, X., Holahan, P.J. and Jurkat, M. (2004) Satisfaction Formation Processes in Library Users: Understanding Multisource Effects. Journal of Academic Librarianship, 30, 122-131.
[13] Spreng, R.A. and Olshavsky, R.W. (1993) A Desires Congruency Model of Consumer Satisfaction. Journal of the Academy of Marketing Science, 21, 169-177.
[14] Spreng, R.A., MacKenzie, S.B. and Olshavsky, R.W. (1996) A Reexamination of the Determinants of Consumer Satisfaction. Journal of Marketing, 60, 15-32.
[15] Petter, S., DeLone, W. and McLean, E.R. (2013) Information Systems Success: The Quest for the Independent Variables. Journal of Management Information Systems, 29, 7-62.
[16] Hoeppner, A. (2012) The Ins and Outs of Evaluating Web-Scale Discovery Services. Computers in Libraries, 32, 6-40.
[17] Büttcher, S. and Soboroff, I. (2007) Reliable Information Retrieval Evaluation with Incomplete and Biased Judgments. Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, 23-27 July 2007, 63-70.
[18] Vaughan, L. and Thelwall, M. (2004) Search Engine Coverage Bias: Evidence and Possible Causes. Information Processing & Management, 40, 693-707.
[19] Hoy, M.B. (2012) An Introduction to Web-Scale Discovery Systems. Medical Reference Services Quarterly, 31, 323-329.
[20] Oliver, R.L. (1995) Attribute Need Fulfillment in Product Usage Satisfaction. Psychology & Marketing, 12, 1-17.
[21] Maslow, A.H. (1943) A Theory of Human Motivation. Psychological Review, 50, 370-396.
[22] Sirgy, M. (1984) A Social Cognition Model of Consumer Satisfaction/Dissatisfaction: An Experiment. Psychology & Marketing, 1, 27-44.
[23] Cole, C. (2011) A Theory of Information Need for Information Retrieval That Connects Information to Knowledge. Journal of the American Society for Information Science and Technology, 62, 1216-1231.
[24] Saracevic, T. (2007) Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science. Part II: Nature and Manifestations of Relevance. Journal of the American Society for Information Science & Technology, 58, 1915-1933.
[25] Hjørland, B. (2013) User-Based and Cognitive Approaches to Knowledge Organization: A Theoretical Analysis of the Research Literature. Knowledge Organization, 40, 11-27.
[26] Venkatesh, V. and Goyal, S. (2010) Expectation Disconfirmation and Technology Adoption: Polynomial Modeling and Response Surface Analysis. MIS Quarterly, 34, 281-303.
[27] DeLone, W.H. and McLean, E.R. (1992) Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60-95.
[28] DeLone, W.H. and McLean, E.R. (2003) The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19, 9-30.
[29] Boddy, D. and Paton, R. (2005) Maintaining Alignment over the Long-Term: Lessons from the Evolution of an Electronic Point of Sale System. Journal of Information Technology, 20, 141-151.
[30] Staples, D., Wong, I. and Seddon, P.B. (2002) Having Expectation of Information Systems Benefits That Match Received Benefits: Does It Really Matter? Information & Management, 40, 115-131.
[31] Woodroof, J.B. and Kasper, G.M. (1998) A Conceptual Development of Process and Outcome User Satisfaction. Information Resources Management Journal, 11, 37-43.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.