Information Retrieval: A Health and Biomedical Perspective, Third Edition

William Hersh, M.D.

Chapter 7 Update

This update contains all new references cited in the author's OHSU BMI 514/614 course for Chapter 7.

Measuring the frequency of use of IR systems in the era of ubiquitous computers, smartphones, tablets, and other devices is increasingly difficult. One reason is that users and their myriad devices are harder to track (at least for researchers), not so much in what they do but in their purpose for doing it. Another challenge in tracking usage is that some applications route users into search applications, the most common example being Web browsers that send non-URL text entered into the address box directly to a search engine (Apple Safari even combines the search box and URL address box in one place). This means that essentially all Web users, which is probably all computer users, are using IR systems in one way or another. Unfortunately, this all makes direct measurement of usage more difficult.

Some studies, however, have looked at the usage issue from various vantage points. One study (Franko et al., 2011) looked at trainees and attending physicians at Accreditation Council for Graduate Medical Education (ACGME) training programs, finding that 85.9% owned smartphones, with ownership highest among residents (88.4%) and attending physicians in the first five years of their careers (88.8%). Even attending physicians of more than 15 years, the group with the lowest rate, still had an ownership rate of 78.2%. Most physicians in the study used apps, the most common being drug references (79%) and other references (18%). Another study compared the use of electronic vs. print textbooks at a university health sciences library, finding fivefold greater use of electronic textbooks over their print counterparts (Ugaz and Resnick, 2008).

The difficulty of directly measuring usage makes indirect measurement studies more important. In one survey of physicians in the US, Canada, and UK, physicians in the US were most likely to report searching "at least once a week," while UK physicians were more likely to report searching "less than once a week," with Canadian physicians in between (Davies, 2010). Another study of nearly 4000 internal medicine physicians who took the Internal Medicine Maintenance of Certification Examination (IM–MOCE) between 2006 and 2008 and who held individual licenses to ACP PIER and/or UpToDate found that they used the electronic resources on about 20 days over a one-year period (Reed et al., 2012). More recently, a knowledge resource integrated with the EHR at Mayo Clinic (MayoExpert) was found to be used at least once by 71% of staff physicians, 66% of midlevel providers, and 75% of residents and fellows (Cook et al., 2014). In any given month, about 10% of providers used the system two or more times. General internal medicine physicians were found to use the system most, with about five uses over a one-year period.

However, a widely cited study by Google and Manhattan Research (cited in the Chapter 1 update) noted that essentially all physicians reported searching on digital devices daily, with most searches resulting in action, such as changing treatment decisions or sharing information with a colleague or patient (Anonymous, 2012). Similarly, a recent study of internal medicine residents at three sites found that nearly all survey respondents searched daily, with the most commonly searched resource being UpToDate (Duran-Nelson, 2013). The next most frequent source of information was consultation with attending faculty, followed by the Google search engine, the Epocrates drug reference, and various other "pocket" references.

Patient and consumer searching of the Web for health information also continues to be reported as high. In the most recent update of her ongoing survey of health-related searching, Fox and Duggan (2013) found that 72% of US adult Internet users (59% of all US adults) had looked for health information in the last year. As noted in the Chapter 1 update, the most common types of searches done by these users were for a specific disease or medical condition and for a certain medical treatment or procedure (Fox, 2011).
Anonymous (2012). From Screen to Script: The Doctor's Digital Path to Treatment. New York, NY, Manhattan Research; Google.
Cook, DA, Sorensen, KJ, et al. (2014). A comprehensive information technology system to support physician learning at the point of care. Academic Medicine. 90: 33-39.
Davies, K. (2010). Physicians and their use of information: a survey comparison between the United States, Canada, and the United Kingdom. Journal of the Medical Library Association, 99: 88-91.
Duran-Nelson, A, Gladding, S, et al. (2013). Should we Google it? resource use by internal medicine residents for point-of-care clinical decision making. Academic Medicine, 88: 788-794.
Fox, S (2011). Health Topics. Washington, DC, Pew Internet & American Life Project.
Fox, S and Duggan, M (2013). Health Online 2013. Washington, DC, Pew Internet & American Life Project.
Franko, O. and Tirrell, T. (2011). Smartphone app use among medical providers in ACGME training programs. Journal of Medical Systems. 36: 3135-3139.
Reed, DA, West, CP, et al. (2012). Relationship of electronic medical knowledge resource use and practice characteristics with Internal Medicine Maintenance of Certification Examination scores. Journal of General Internal Medicine. 27: 917-923.
Ugaz, A. and Resnick, T. (2008). Assessing print and electronic use of reference/core medical textbooks. Journal of the Medical Library Association, 96: 145-147.
Three focus groups convened by Mayo Clinic researchers asked consumers about their online searching use and needs (Fiksdal, 2014). Subjects reported searching, filtering, and comparing information retrieved, with the process stopping due to saturation and fatigue.

Another study from Mayo Clinic analyzed search queries submitted through general search engines but leading users into a consumer health information portal from computers and mobile devices (Jadhav, 2014). The most common types of searches were on symptoms (32-39%), causes of disease (19-20%), and treatments and drugs (14-16%). Health queries tended to be longer and more specific than general (non-health) queries. Health queries were somewhat more likely to come from mobile devices. Most searches used key words, although some were also phrased as questions (wh- or yes/no).
Fiksdal, AS, Kumbamu, A, et al. (2014). Evaluating the process of online health information searching: a qualitative approach to exploring consumer perspectives. Journal of Medical Internet Research. 16(10): e224.
Jadhav, A, Andrews, D, et al. (2014). Comparative analysis of online health queries originating from personal computers and smart devices on a consumer health information portal. Journal of Medical Internet Research. 16(7): e160.
There are very few new studies of satisfaction with searching or use of knowledge-based resources. In their study of internal medicine residents, Duran-Nelson et al. (2013) assessed attributes of search systems that users found valuable. UpToDate was valued for its speed and linkage to the medical record. Most resources were noted for their trustworthiness, including electronic and paper textbooks.

Most of the major surveys of consumer users find general satisfaction with the information found. The survey by Taylor (2011) noted that, across all users, searching was rated very successful (44%) or successful (46%).
Duran-Nelson, A, Gladding, S, et al. (2013). Should we Google it? resource use by internal medicine residents for point-of-care clinical decision making. Academic Medicine, 88: 788-794.
Taylor, H (2011). The Growing Influence and Use Of Health Care Information Obtained Online. New York, NY, Harris Interactive.
Just as the era of ubiquitous, multi-device computing has changed the nature of usage studies, it has also altered the types of studies assessing searching quality, from both the system and user perspectives.

A variety of new studies have addressed different research questions. Some compare the efficacy of different search systems and their content in terms of the speed and quality with which clinicians can answer questions.

Several studies have looked at attributes of point-of-care (POC) information resources such as quality, timeliness, and comprehensiveness. Banzi et al. (2010) reviewed a variety of POC resources for volume of content, editorial process quality, and evidence-based methodology in preparation. They found substantial differences among products, with none ranking in the top quartile for all three dimensions. Ahmadi et al. (2011) compared four resources touted as evidence-based for rate of answer retrieval and mean time taken to obtain it with 112 residents in training, each of whom answered three of 24 questions. The resources that were most successful in answering questions also tended to have the shortest time to answer: UpToDate (86% answered, 14.6 minutes to answer), First Consult (69%, 15.9 minutes), ACP PIER (49%, 17.3 minutes), and Essential Evidence (45%, 16.3 minutes). The differences were statistically significant.

Prorok et al. (2012) performed a similar study, also noting differences in timeliness, breadth, and quality, with DynaMed and UpToDate ranking highest overall. Jeffery et al. (2012) focused just on timeliness, searching for 200 clinical topics in four online textbooks. The proportion of topics for which one or more recently published articles in a continuous online evidence rating system (McMaster PLUS) had evidence differing from the textbook's treatment recommendations was:
  • DynaMed - 23%
  • UpToDate - 52%
  • PIER - 55%
  • Best Practice - 60%
They also found that the time of last update for each textbook varied from 170 days for DynaMed to 488 days for PIER, indicating that even in the era of instant information, timeliness of information is still a challenge.

Another study looked at the impact of Wikipedia results appearing in search results (Laurent and Vickers, 2009). The researchers took queries that had been entered into health-specific database systems (MedlinePLUS, NHS Direct Online, and the National Organization of Rare Diseases) and entered them into Google, Yahoo, and MSN (the latter was Microsoft's earlier search engine before Bing). Wikipedia pages ranked in the top ten search results in the general search systems 71-85% of the time, with even higher proportions for rare diseases, indicating that Wikipedia results are likely to show up in health-related searches.

Some studies have looked at the impact of different approaches to indexing and retrieval. One newer study in this area compared retrieval results searching over the full text of articles versus only titles and abstracts for articles in the TREC 2007 Genomics Track collection of 162,259 full-text articles from Highwire Press (Lin, 2009). (Most coverage of the TREC Genomics Track is provided in Chapter 8, but this study is presented here because it compares retrieval based on titles and abstracts versus full text.) Two retrieval algorithms (standard Lucene and BM25) and three retrieval measures (mean average precision, precision at 20 retrieved, and IP@R50) were assessed. Results showed that searching the full text outperformed searching titles and abstracts, especially when the former used spans within the text as retrieval units rather than the entire document.
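BM25, one of the two ranking functions assessed in the Lin (2009) study, weights each query term by its inverse document frequency and a saturating, length-normalized term-frequency component. The following is a minimal illustrative sketch in Python with hypothetical toy documents, not the Lucene implementation actually used in the study:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document in `docs` (lists of tokens) against the query
    terms using a simplified BM25 formula; returns one score per document."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            # idf weights rare terms more heavily; the +1 keeps the value positive
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # term frequency saturates via k1; b controls document-length normalization
            score += idf * (tf[t] * (k1 + 1)) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# Toy example: score three short "documents" against a two-term query
docs = [["gene", "expression", "in", "yeast"],
        ["protein", "folding"],
        ["gene", "gene", "expression"]]
print(bm25_scores(["gene", "expression"], docs))
```

The parameter defaults k1=1.2 and b=0.75 are the conventional starting values in the IR literature; production systems typically tune them per collection.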

Another comparison of different strategies for retrieval of evidence was carried out in a study that looked at retrieval of articles included in systematic reviews (Agoritsas, 2012). For 30 clinical questions derived from systematic reviews on the topic, searches were composed using a variety of approaches, from publication type limits to the PICO framework to the PubMed Clinical Queries. Output was assessed based on the results from the first two pages (40 articles) of the standard PubMed output. Searches using the Clinical Queries narrow filter and the PICO framework had the best overall results, although there was substantial variation across topics.
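Strategies of this kind can be composed programmatically. The sketch below builds a Clinical Queries-style PubMed search URL with the NCBI E-utilities esearch service; the PICO terms are hypothetical, and the filter string follows the published narrow ("specific") therapy filter, which NLM revises periodically, so the current documentation should be checked before relying on it:

```python
from urllib.parse import urlencode

# Hypothetical PICO elements for a therapy question (illustrative only)
pico = ["type 2 diabetes", "metformin", "mortality"]

# Narrow (specific) therapy filter as published for PubMed Clinical Queries
NARROW_THERAPY = ('(randomized controlled trial[Publication Type] OR '
                  '(randomized[Title/Abstract] AND controlled[Title/Abstract] '
                  'AND trial[Title/Abstract]))')

# Combine the PICO terms and the methodologic filter with Boolean AND
term = " AND ".join(pico + [NARROW_THERAPY])

# Build an NCBI E-utilities esearch URL; retmax=40 mirrors the study's
# assessment of the first two pages (40 articles) of PubMed output
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urlencode({"db": "pubmed", "term": term, "retmax": 40}))
print(url)
```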

Kim et al. (2014) looked at the ability of internal medicine interns to answer questions starting from Google vs. an evidence-based summary resource developed by a local medical library. Ten questions were given to each subject, with each participant randomized to start in either Google or the summary resource for half of the questions. Answers were found for 82% of the questions administered, with no difference between groups in correct answers (58-62% correct) or time taken (136-139 seconds). Those starting in the summary resource found answers in resources that were part of the summary system 93% of the time, while those starting with Google found answers in commercial medical portals (25.7%), hospital Web sites (12.6%), Wikipedia (12.0%), US government Web sites (9.4%), PubMed (9.4%), evidence-based summary resources (9.4%), and others (18%).
Banzi, R., Liberati, A., et al. (2010). A review of online evidence-based practice point-of-care information summary providers. Journal of Medical Internet Research, 12(3): e26.
Ahmadi, S., Faghankhani, M., et al. (2011). A comparison of answer retrieval through four evidence-based textbooks (ACP PIER, Essential Evidence Plus, First Consult, and UpToDate): a randomized controlled trial. Medical Teacher, 33: 724-730.
Prorok, JC, Iserman, EC, et al. (2012). The quality, breadth, and timeliness of content updating vary substantially for 10 online medical texts: an analytic survey. Journal of Clinical Epidemiology. 65: 1289-1295.
Jeffery, R, Navarro, T, et al. (2012). How current are leading evidence-based medical textbooks? An analytic survey of four online textbooks. Journal of Medical Internet Research. 14(6): e175.
Laurent, M. and Vickers, T. (2009). Seeking health information online: does Wikipedia matter? Journal of the American Medical Informatics Association, 16: 471-479.
Lin, J. (2009). Is searching full text more effective than searching abstracts? BMC Bioinformatics, 10: 46.
Agoritsas, T, Merglen, A, et al. (2012). Sensitivity and predictive value of 15 PubMed search strategies to answer clinical questions rated against full systematic reviews. Journal of Medical Internet Research. 14(3): e85.
Kim, S, Noveck, H, et al. (2014). Searching for answers to clinical questions using Google versus evidence-based summary resources: a randomized controlled crossover study. Academic Medicine. 89: 940-943.

Little data exists on the efficacy of different training approaches to improve usage or search skills with clinical IR systems. A systematic review identified nine studies assessing various approaches, showing modest success for a variety of interventions (Gagnon et al., 2010). Another study assessed the value of librarian assistance with searches for pediatric residents (Gardois et al., 2011). The quality of searches was assessed by a ten-item tool that included the following:
  1. Population-intervention-comparator-outcome (PICO) formulation
  2. Number of PICO terms translated into search terms
  3. Search string syntax (Boolean operators)
  4. Medical Subject Headings (MESH) use (thesaurus, details, tree structure, age groups, subheadings)
  5. Publication date limit utilization
  6. Language limit utilization
  7. Filters (Subsets, Clinical Queries, other limits not listed above)
  8. Percentage of relevant articles collected according to EBM criteria
  9. Saving of search strings/results
  10. Pertinent use of other sources
Those assisted by librarians scored an average of 73.6 out of 100 on the search assessment tool, compared with 50.4 for an unassisted control group.

Some studies have focused on search system features that are used (or not used). A study of user logs of the Turning Research into Practice (TRIP) database, which provides access to a variety of evidence-based resources, found that most searches entered used just one term and no Boolean operators (Meats, 2007). A follow-up study of clinician users of the system found interest expressed in learning to become better searchers. Lau et al. (2010) compared the search behaviors of resource-based vs. task-based systems, with the former allowing the user to select one of six information resources (e.g., PubMed, a pharmaceutical database, evidence-based guidelines, etc.) and the latter allowing the user to select one of six tasks (e.g., diagnosis, therapy, drug information, etc.). A total of 44 physicians and 31 senior nurse consultants were randomized to one approach or the other. Clinicians randomized to the resource-based system tended to use a "breadth-first" strategy of entering the same query into different databases, whereas those randomized to the task-based system tended to use a "depth-first" strategy of entering and refining queries within single resources.

The majority of more recent studies have assessed the ability of systems to help clinician (usually physician) users. Some studies have focused on aspects of the searching process (e.g., recall and precision) while more have assessed the ability of users to correctly answer clinical questions.

One study of the search process compared three meta-search systems: Medtextus (keyword suggestions, results organized by folders), Helpful Med (keyword suggestions, topic maps), and NLM Gateway (access to a wide variety of resources) (Leroy et al., 2007). The researchers assessed 23 users searching on 12 topics. The outcome measures were effectiveness (subtopics retrieved) and efficiency (searches required). They found no difference in effectiveness, but Medtextus was more efficient and Helpful Med had higher user satisfaction.

Another study used the 2004 TREC Genomics Track collection to assess the value of MeSH terms for different types of searchers (Liu and Wacholder, 2017). The researchers recruited four types of searchers:
  • Search Novice (SN) - undergraduates with no formal search training or advanced knowledge in biomedicine
  • Domain Expert (DE) - biomedical graduate students
  • Search Expert (SE) - library and information science graduate students
  • Medical Librarian (ML)
The searchers used a digital library system to search on 20 of the 50 topics selected from the original test collection. Searchers assigned to search with MeSH were provided access to a MeSH browser. As with other studies, recall and precision (shown as recall/precision) were relatively close across the different groups:
  • SN - 0.21/0.29
  • DE - 0.15/0.40
  • SE - 0.15/0.30
  • ML - 0.23/0.35
MeSH terms had little impact on recall in the four groups, but they substantially increased precision for the searchers without search expertise (SN and DE) and decreased it for the search experts (SE and ML) (recall/precision with MeSH; without MeSH):
  • SN - 0.21/0.36; 0.20/0.23
  • DE - 0.15/0.29; 0.15/0.51
  • SE - 0.13/0.38; 0.16/0.21
  • ML - 0.24/0.42; 0.22/0.28
User characteristics that improved precision were the number of undergraduate biology courses for SN and of graduate biology courses for DE. User characteristics associated with improved recall included having taken online search courses and having MeSH experience. Factors with no association with search results included gender, native language, age, and experience with or frequency of database searching.

A number of studies have assessed the value of evidence-based search filters. Lokker et al. (2011) assessed the search results of 40 physicians who searched PubMed on a topic of their choice, with their results sent in a random and blinded way to the standard PubMed interface or to the Clinical Queries filter. For searches on treatment topics, the number of relevant articles retrieved was not significantly different between the two types of search processing, although a higher proportion of articles from the Clinical Queries searches met methodologic criteria and were published in core internal medicine journals. For diagnosis topics, the Clinical Queries results returned more relevant articles and fewer nonrelevant articles. Participants were noted to vary greatly in their search performance.

Another study of search filters focused on Canadian nephrologists searching on aspects of renal therapy (Shariff et al., 2012). The searchers entered topics into PubMed. Further analysis of the search output was done with filters for best evidence (PubMed Clinical Queries) and pertinence to nephrology, each in a "broad" and "narrow" configuration. Recall (called "comprehensiveness" in the paper) and precision (called "efficiency" in the paper) were measured for the first 40 articles in the retrieval output as well as all articles retrieved. Not surprisingly, the total search output had relatively high recall (45.9%) and very low precision (6.0%). The narrow Clinical Queries filter was most effective in raising precision (22.9%), with no additional benefit provided by the narrow subject filter. The nephrology subject filter did, however, raise recall to as high as 54.5%. Also not surprisingly, the search limited to the first 40 articles retrieved had lower baseline recall (12.7%) with little change in precision (5.5%). Similar to the full retrieval, the narrow Clinical Queries filter was most effective in raising precision (to 23.1%) and also in raising recall (to 26.1%).
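Recall ("comprehensiveness") and precision ("efficiency"), with or without a cutoff such as the first 40 articles, are straightforward to compute from a retrieval output and a relevance gold standard. A minimal sketch with hypothetical article identifiers:

```python
def recall_precision(retrieved, relevant, cutoff=None):
    """Recall and precision of a ranked retrieval, optionally truncated
    to the first `cutoff` results (e.g., cutoff=40 for two PubMed pages)."""
    if cutoff is not None:
        retrieved = retrieved[:cutoff]
    hits = len(set(retrieved) & set(relevant))  # relevant articles actually retrieved
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical example: 6 retrieved articles, 4 known-relevant articles
retrieved = ["a1", "a2", "a3", "a4", "a5", "a6"]
relevant = ["a2", "a4", "a7", "a8"]
r, p = recall_precision(retrieved, relevant)        # recall 2/4, precision 2/6
r3, p3 = recall_precision(retrieved, relevant, 3)   # only the first 3 examined
```

As the Shariff et al. (2012) results illustrate, truncating the output lowers recall (fewer relevant articles can be seen) while precision depends on how the relevant articles are distributed through the ranking.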

The bulk of more recent physician user studies have focused on the ability of users to answer clinical questions.

Hoogendam et al. (2008) compared UpToDate with PubMed for questions that arose in patient care among residents and attending physicians in internal medicine. For 1305 questions, they found that both resources provided complete answers 53% of the time, but UpToDate was better at providing partial answers (83% full or partial answers for UpToDate compared to 63% for PubMed).

A similar study compared Google, Ovid, PubMed, and UpToDate for answering clinical questions among trainees and attending physicians in anesthesiology and critical care medicine (Thiele, 2010). Users were allowed to select which tool to use for a first set of four questions, while 1-3 weeks later they were randomized to a single tool to answer another set of eight questions. For the first set of questions, users most commonly selected Google (45%), followed by UpToDate (26%), PubMed (25%), and Ovid (4.4%). The rate of answering questions correctly in the first set was highest for UpToDate (70%), followed by Google (60%), Ovid (50%), and PubMed (38%). The time taken to answer these questions was lowest for UpToDate (3.3 minutes), followed by Google (3.8 minutes), PubMed (4.4 minutes), and Ovid (4.6 minutes). In the second set of questions, the correct answer was most likely to be obtained with UpToDate (69%), followed by PubMed (62%), Google (57%), and Ovid (38%). Subjects randomized to a new tool generally fared comparably, with the exception of those randomized from another tool to Ovid.

Another study compared searching with UpToDate and PubMed Clinical Queries by 44 medical residents at the conclusion of an information mastery course (Ensan et al., 2011). Subjects were randomized to one system for two questions and then to the other system for another two questions. The correct answer was retrieved 76% of the time with UpToDate versus only 45% of the time with PubMed Clinical Queries. Median time to answer was lower for UpToDate (17 minutes) than for PubMed Clinical Queries (29 minutes). User satisfaction was higher with UpToDate.

Fewer studies have been done assessing non-clinicians searching on health information. Lau et al. (2008, 2011) found that use of a consumer-oriented medical search engine that included PubMed, MedlinePLUS, and other resources by college undergraduates led to answers being correct at a higher rate after searching (82.0%) than before searching (61.2%). Providing a feedback summary from prior searches boosted the success rate of using the system even higher, to 85.3%. Confidence in one's answer was not found to be highly associated with correctness of the answer, although confidence was likely to increase for those provided with feedback from other searchers on the same topic.

Despite the ubiquity of search systems, many users have skill-related problems when searching for information. van Deursen (2012) assessed a variety of computer-related and content-related skills in randomly selected subjects in the Netherlands. Older age and lower educational level were associated with reduced skills, including use of search engines. While younger subjects were more likely than older subjects to have better computer and searching skills, they were also more likely to use nonrelevant search results and unreliable sources in answering health-related questions. This latter phenomenon has also been seen outside the health domain among the "millennial" generation, sometimes referred to as "digital natives" (Taylor, 2012).
Gagnon, M., Pluye, P., et al. (2010). A systematic review of interventions promoting clinical information retrieval technology (CIRT) adoption by healthcare professionals. International Journal of Medical Informatics, 79: 669-680.
Gardois, P., Calabrese, R., et al. (2011). Effectiveness of bibliographic searches performed by paediatric residents and interns assisted by librarians - a randomised controlled trial. Health Information and Libraries Journal, 28: 273-284.
Meats, E., Brassey, J., et al. (2007). Using the Turning Research Into Practice (TRIP) database: how do clinicians really search? Journal of the Medical Library Association, 95: 156-163.
Lau, A., Coiera, E., et al. (2010). Clinician search behaviors may be influenced by search engine design. Journal of Medical Internet Research, 12(2): e25.
Leroy, G., Xu, J., et al. (2007). An end user evaluation of query formulation and results review tools in three medical meta-search engines. International Journal of Medical Informatics, 76: 780-789.
Liu, YH and Wacholder, N (2017). Evaluating the impact of MeSH (Medical Subject Headings) terms on different types of searchers. Information Processing & Management. 53: 851-870.
Lokker, C, Haynes, RB, et al. (2011). Retrieval of diagnostic and treatment studies for clinical use through PubMed and PubMed's Clinical Queries filters. Journal of the American Medical Informatics Association. 18: 652-659.
Shariff, SZ, Sontrop, JM, et al. (2012). Impact of PubMed search filters on the retrieval of evidence by physicians. Canadian Medical Association Journal. 184: E184-E190.
Hoogendam, A., Stalenhoef, A., et al. (2008). Answers to questions posed during daily patient care are more likely to be answered by UpToDate than PubMed. Journal of Medical Internet Research, 10(4): e29.
Thiele, R., Poiro, N., et al. (2010). Speed, accuracy, and confidence in Google, Ovid, PubMed, and UpToDate: results of a randomised trial. Postgraduate Medical Journal, 86: 459-465.
Ensan, L., Faghankhani, M., et al. (2011). To compare PubMed Clinical Queries and UpToDate in teaching information mastery to clinical residents: a crossover randomized controlled trial. PLoS ONE, 6: e23487.
Lau, A. and Coiera, E. (2008). Impact of web searching and social feedback on consumer decision making: a prospective online experiment. Journal of Medical Internet Research, 10(1): e2.
Lau, A., Kwok, T., et al. (2011). How online crowds influence the way individual consumers answer health questions. Applied Clinical Informatics, 2: 177-189.
van Deursen, A. (2012). Internet skill-related problems in accessing online health information. International Journal of Medical Informatics, 81: 61-72.
Taylor, A. (2012). A study of the information search behaviour of the millennial generation. Information Research, 17(1).


Some recent studies have assessed the impact of IR systems on a variety of outcomes. Although cause and effect cannot be established, hospitals with UpToDate available have been found to have better patient outcomes in the form of shorter length of stay, reduced risk-adjusted mortality rates, and improved performance on quality indicators (Isaac et al., 2012). When UpToDate was brought on bedside rounds, it was used 157 times over a three-month period (Phua, 2012). Searches took a median time of three minutes, providing a useful answer 75% of the time, a partial answer 17% of the time, and no answer 9% of the time. The search results led to a change of diagnosis or management plans 37% of the time, confirmed original plans 38% of the time, and had no effect 25% of the time. In the Reed et al. (2012) study of nearly 4000 internal medicine physicians who took the IM–MOCE exam between 2006 and 2008 and who held individual licenses to ACP PIER and/or UpToDate, more frequent use of electronic information resources was associated with modestly higher exam scores.

Other studies have demonstrated the beneficial impact of librarians. A pair of studies found that provision of a rapid evidence-based question-answering service by librarians led to clinician satisfaction (McGowan et al., 2009) and potential cost savings due to time efficiency (McGowan et al., 2012).

Marshall et al. (2013) recently showed that libraries and librarians continue to add value to clinical care and research. They conducted a Web-based survey of physicians, residents, and nurses at 56 library sites serving 118 hospitals, along with 24 follow-up telephone interviews. Those surveyed were asked to respond based on a recent episode of seeking information for patient care. Over 16,000 individuals responded, with about three-quarters indicating that the library resources led them to handle the patient care episode in a different manner. The reported changes included advice given to the patient (48%), diagnosis (25%), choice of drugs (33%), other treatment (31%), and tests (23%). Nearly all respondents (95%) said the information resulted in a better informed clinical decision. Respondents also reported that the information allowed them to avoid different types of adverse events, including patient misunderstanding of their disease (23%), additional tests (19%), misdiagnosis (13%), adverse drug reactions (13%), medication errors (12%), and patient mortality (6%).

One concern about the volume of information easily accessible through search engines such as Google is its adverse impact on human cognition. Sparrow et al. (2011) demonstrated a "Google effect," in which individuals rely less on memorizing information and more on knowing where to find it online. A former New England Journal of Medicine editor, Kassirer (2010), laments that instant access to "compiled" information may undermine clinical cognition. Another study showed that high "media multitaskers" are more susceptible to interference from environmental stimuli as well as irrelevant representations in memory, leading to poorer ability to task-switch (Ophir et al., 2009). At least one clinical case report found that distraction via text messaging led to overt patient harm through failure to complete a physician order (Halamka, 2011).
Isaac, T, Zheng, J, et al. (2012). Use of UpToDate and outcomes in US hospitals. Journal of Hospital Medicine. 7: 85-90.
Phua, J., See, K., et al. (2012). Utility of the electronic information resource UpToDate for clinical decision-making at bedside rounds. Singapore Medical Journal, 53: 116-120.
Reed, DA, West, CP, et al. (2012). Relationship of electronic medical knowledge resource use and practice characteristics with Internal Medicine Maintenance of Certification Examination scores. Journal of General Internal Medicine. 27: 917-923.
McGowan, J., Hogg, W., et al. (2009). A rapid evidence-based service by librarians provided information to answer primary care clinical questions. Health Information and Libraries Journal, 27: 11-21.
McGowan, J., Hogg, W., et al. (2012). A cost-consequences analysis of a primary care librarian question and answering service. PLoS ONE, 7(3): e33837.
Marshall, JG, Sollenberger, J, et al. (2013). The value of library and information services in patient care: results of a multisite study. Journal of the Medical Library Association. 101: 38-46.
Sparrow, B., Liu, J., et al. (2011). Google effects on memory: cognitive consequences of having information at our fingertips. Science, 333: 776-778.
Kassirer, J. (2010). Does instant access to compiled information undermine clinical cognition? Lancet, 376: 1510-1511.
Ophir, E., Nass, C., et al. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106: 15583-15587.
Halamka, J. (2011). Order interrupted by text: multitasking mishap. AHRQ WebM&M.

Last updated April 25, 2017