

Literature Review: Systematic literature reviews
Systematic reviews
Systematic and systematic-like reviews
Charles Sturt University library has produced a comprehensive guide for Systematic and systematic-like literature reviews. A comprehensive systematic literature review can often take a team of people up to a year to complete. This guide provides an overview of the steps required for systematic reviews:
- Identify your research question
- Develop your protocol
- Conduct systematic searches (including the search strategy, text mining, choosing databases, documenting and reviewing)
- Critical appraisal
- Data extraction and synthesis
- Writing and publishing
- Systematic and systematic-like reviews Library Resource Guide
Systematic literature review
A systematic literature review (SLR) identifies, selects and critically appraises research in order to answer a clearly formulated question (Dewey & Drahota, 2016). The systematic review should follow a clearly defined protocol or plan in which the criteria are clearly stated before the review is conducted. It is a comprehensive, transparent search conducted over multiple databases and grey literature that can be replicated and reproduced by other researchers. It involves planning a well-thought-out search strategy that has a specific focus or answers a defined question. The review identifies the type of information searched, critiqued and reported within known timeframes. The search terms, search strategies (including database names, platforms and dates of search) and limits all need to be included in the review.
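The documentation requirement described above can be illustrated with a minimal search log. The sketch below is hypothetical: the field names and the `searches.csv` file are assumptions for illustration, not part of any cited guide, but they capture the terms, database, platform, date and limits that the review needs to report.

```python
import csv
import os
from datetime import date

SEARCH_LOG_FIELDS = ["date", "database", "platform", "search_string", "limits", "results"]

def log_search(path, database, platform, search_string, limits, results):
    """Append one database search to a CSV log so it can be reported and replicated."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=SEARCH_LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "database": database,
            "platform": platform,
            "search_string": search_string,
            "limits": limits,
            "results": results,
        })

# Example entry (values are invented for illustration)
log_search(
    "searches.csv",
    database="MEDLINE",
    platform="Ovid",
    search_string='("systematic review" OR "evidence synthesis") AND education',
    limits="English; 2010-2023",
    results=412,
)
```

Appending one row per search as it is run keeps the log complete enough for the methods section and for later replication.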
Pittaway (2008) outlines seven key principles behind systematic literature reviews, including:
- Transparency
- Integration
- Accessibility
Systematic literature reviews originated in medicine and are linked to evidence-based practice. According to Grant and Booth (2009, p. 91), "the expansion in evidence-based practice has led to an increasing variety of review types". They compare and contrast 14 review types, listing the strengths and weaknesses of each.
Tranfield et al. (2003) discuss the origins of the evidence-based approach to undertaking a literature review and its application to other disciplines, including management and science.
References and additional resources
Dewey, A., & Drahota, A. (2016). Introduction to systematic reviews: Online learning module. Cochrane Training. https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews
Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. London: SAGE.
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91-108.
Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x
Pittaway, L. (2008). Systematic literature reviews. In R. Thorpe & R. Holt (Eds.), The SAGE dictionary of qualitative management research. SAGE Publications. doi:10.4135/9780857020109
Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207-222.
Evidence based practice - an introduction: Literature reviews/systematic reviews
Evidence based practice - an introduction is a library guide produced by CSU Library for undergraduates. The information in the guide is also relevant for postgraduate study and will help you to understand the types of research and levels of evidence required to conduct evidence-based research.
- Evidence based practice an introduction

Systematic style literature reviews for education and social sciences
What is a systematic style literature review?
A systematic literature review is a method of reviewing relevant literature in your field through a highly rigorous and 'systematic' process. It covers not only the content found in the literature but also the methods used to find it: the search strategies you used, and how and where you searched. A systematic literature review also, importantly, focuses on the criteria you have used to evaluate the literature for inclusion in, or exclusion from, the review. Like any literature review, a systematic literature review is undertaken to give you a broad understanding of your topic area, to show you what work has already been done in the subject area, and what research methods and theories are being used. The literature review will help you find your research gap and direct your research.
A literature review "...creates a firm foundation for advancing knowledge. A successful literature review facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed" (Webster & Watson, 2002, p.8). Fink (2014, p.3) describes a systematic literature review as a "systematic, explicit and reproducible method for identifying, evaluating and synthesizing the existing body of completed and recorded work produced by researchers, scholars and practitioners".
The purpose of your literature review will be to build a knowledge base for your research. The knowledge base will help direct your research, assist with research gap analysis, and give you a strong platform to direct original research to address any gaps and support your hypothesis.
A systematic literature review differs from other styles of literature review in that it applies a much more rigorous and explicit methodology to the process. The EPPI-Centre, a research centre at University College London, states that the key features of a systematic literature review are:
- its use of explicit and transparent methods
- its adherence to following a standard set of research stages
- its requirement that the review is accountable, replicable and updateable
- its requirement of user involvement to ensure reports are relevant and useful
Systematic literature reviews aim to find as much relevant research on a particular research question as possible, using explicit methods to identify what can reliably be said on the basis of these studies. Methods should be explicit and systematic with the aim of producing valid and reliable results. In this way, systematic reviews reduce the bias that can occur in other approaches to reviewing research evidence (EPPI 2015).
There are three principal reasons to undertake a systematic approach to literature reviews: clarity, validity and auditability (Booth, Papaioannou & Sutton 2012).
Clarity
A focused research question and explicit search strategies help to clarify considerations of scope and terminology (Booth, Papaioannou & Sutton 2012, p. 23).
In this case, clarity means that the review should have a defined structure and should document both its methods and the searching process. This will allow for easy navigation and interpretation of the literature search. It will also allow others to understand what you have done and why certain research materials have been included while others have been excluded. Be very clear about what you are trying to achieve with your literature review. Keep the review focused and show each step of your methodology so the reader can follow your arguments and see where you are going and why.
Validity
For a literature review to be a valid research output, it should seek to be unbiased regarding the literature that is reviewed. When crafting a literature review, you need to be mindful to include a range of voices and to show clear reasoning behind the inclusion of particular papers and theories. Pitfalls to be aware of and avoid in your review process include:
- selection bias - only including materials that support your hypothesis or personal ideology
- publication bias - an over-reliance on a particular database or set of journals for your materials.
To avoid publication bias, be sure to search a wide range of resources for the materials you include in your literature review.
Auditability
Auditability, a key feature of a systematic literature review, pertains to keeping accurate records of your systematic search strategies. Accurate record keeping allows others to verify your results and gives the reader an understanding of how you came to find and choose the materials in your review. It will give your review an extra layer of authority. Auditability is a crucial part of the review process: the review must be consistent and systematic throughout.
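To make the idea of an audit trail concrete, here is a small, hedged sketch that assumes a CSV search log with `database` and `results` columns (an illustrative format, like the one sketched earlier in this document); it summarises the logged searches so that a reader or co-author can verify how the material was found.

```python
import csv
from collections import Counter

def summarise_search_log(path="searches.csv"):
    """Print a per-database summary of logged searches so others can verify the trail."""
    searches, hits = Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            searches[row["database"]] += 1
            hits[row["database"]] += int(row["results"])
    for database in sorted(searches):
        print(f"{database}: {searches[database]} searches, {hits[database]} records retrieved")

summarise_search_log()
```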

How to Write a Systematic Review of the Literature
Affiliations: 1. Texas Tech University, Lubbock, TX, USA; 2. University of Florida, Gainesville, FL, USA.
- PMID: 29283007
- DOI: 10.1177/1937586717747384
This article provides a step-by-step approach to conducting and reporting systematic literature reviews (SLRs) in the domain of healthcare design and discusses some of the key quality issues associated with SLRs. An SLR, as the name implies, is a systematic way of collecting, critically evaluating, integrating, and presenting findings from across multiple research studies on a research question or topic of interest. An SLR provides a way to assess the quality level and magnitude of existing evidence on a question or topic of interest. It offers a broader and more accurate level of understanding than a traditional literature review. A systematic review adheres to standardized methodologies/guidelines in systematic searching, filtering, reviewing, critiquing, interpreting, synthesizing, and reporting of findings from multiple publications on a topic/domain of interest. The Cochrane Collaboration is the most well-known and widely respected global organization producing SLRs within the healthcare field and a standard to follow for any researcher seeking to write a transparent and methodologically sound SLR. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), like the Cochrane Collaboration, was created by an international network of health-based collaborators and provides a framework for SLRs to ensure methodological rigor and quality. The PRISMA statement is an evidence-based guide consisting of a checklist and flowchart intended to be used as tools for authors seeking to write SLRs and meta-analyses.
Keywords: evidence based design; healthcare design; systematic literature review.
A systematic literature review of researchers’ and healthcare professionals’ attitudes towards the secondary use and sharing of health administrative and clinical trial data
Elizabeth Hutchings, Max Loomes, Phyllis Butow & Frances M. Boyle
Systematic Reviews, volume 9, Article number: 240 (2020). Open Access. Published: 12 October 2020.
Abstract
A systematic literature review of researchers’ and healthcare professionals’ attitudes towards the secondary use and sharing of health administrative and clinical trial data was conducted using electronic data searching. Eligible articles included those reporting qualitative or quantitative original research and published in English. No restrictions were placed on publication dates, study design, or disease setting. Two authors were involved in all stages of the review process; conflicts were resolved by consensus. Data were extracted independently using a pre-piloted data extraction template. Quality and bias were assessed using the QualSyst criteria for qualitative and quantitative studies. Eighteen eligible articles were identified and categorised into four key themes: barriers, facilitators, access, and ownership; 14 subthemes were identified. While respondents were generally supportive of data sharing, concerns were expressed about access to data, data storage infrastructure, and consent. Perceptions of data ownership and acknowledgement, trust, and policy frameworks influenced sharing practice, as did age, discipline, professional focus, and world region. Younger researchers were less willing to share data, although they were more willing to do so in circumstances where they would be acknowledged. While there is a general consensus that increased data sharing in health is beneficial to the wider scientific community, substantial barriers remain.
Systematic review registration
PROSPERO CRD42018110559
Background
Healthcare systems generate large amounts of data; approximately 80 MB of data are generated per patient per year [ 1 ]. It is projected that this figure will continue to grow with an increasing reliance on technologies and diagnostic capabilities. Healthcare data provide an opportunity for secondary data analysis with the capacity to greatly influence medical research, service planning, and health policy.
There are many forms of data collected in the healthcare setting including administrative and clinical trial data which are the focus of this review. Administrative data collected during patients’ care in the primary, secondary, and tertiary settings can be analysed to identify systemic issues and service gaps, and used to inform improved health resourcing. Clinical trials play an essential role in furthering our understanding of disease, advancing new therapeutics, and developing improved supportive care interventions. However, clinical trials are expensive and can take several years to complete; a frequently quoted figure is that it takes 17 years for 14% of clinical research to benefit the patient [ 2 , 3 ].
Those who argue for increased data sharing in healthcare suggest that it may lead to improved treatment decisions based on all available information [ 4 , 5 ], improved identification of causes and clinical manifestations of disease [ 6 ], and provide increased research transparency [ 7 ]. In rare diseases, secondary data analysis may greatly accelerate the medical community’s understanding of the disease’s pathology and influence treatment.
Internationally, there are signs of movement towards greater transparency, particularly with regard to clinical research data. This change has been driven by governments [ 8 ], peak bodies [ 9 ], and clinician-led initiatives [ 5 ]. One initiative led by the International Committee of Medical Journal Editors (ICMJE) now requires a data sharing plan for all clinical research submitted for publication in a member scientific journal [ 9 ]. Further international examples of data sharing can be seen in projects such as The Cancer Genome Atlas (TCGA) [ 10 ] dataset and the Surveillance, Epidemiology, and End Results (SEER) [ 11 ] database, which have been used extensively for cancer research.
However, consent, data ownership, privacy, intellectual property rights, and potential for misinterpretation of data [ 12 ] remain areas of concern to individuals who are more circumspect about changing the data sharing norm. To date, there has been no published synthesis of views on data sharing from the perspectives of diverse professional stakeholders. Thus, we conducted a systematic review of the literature on the views of researchers and healthcare professionals regarding the sharing of health data.
Methods
This systematic literature review was part of a larger review of articles addressing data sharing, undertaken in accordance with the PRISMA statement for systematic reviews and meta-analyses [ 13 ]. The protocol was prospectively registered on PROSPERO (www.crd.york.ac.uk/PROSPERO, CRD42018110559).
The following databases were searched: EMBASE/MEDLINE, Cochrane Library, PubMed, CINAHL, Informit Health Collection, PROSPERO Database of Systematic Reviews, PsycINFO, and ProQuest. The final search was conducted on 21 October 2018. No date restrictions were placed on the search; key search terms are listed in Table 1. Papers were considered eligible if they: were published in English; were published in a peer-reviewed journal; reported original research, either qualitative or quantitative with any study design, related to data sharing in any disease setting; and included subjects over 18 years of age. Systematic literature reviews were included in the wider search but were not included in the results. Reference list and hand searching were undertaken to identify additional papers. Papers were considered ineligible if they focused on electronic health records, biobanking, or personal health records, or were review articles, opinion pieces/articles/letters, editorials, or theses from masters or doctoral research. Duplicates were removed, and title/abstract and full-text screening were undertaken using the Cochrane systematic literature review program Covidence [ 14 ]. Two authors were involved in all stages of the review process; conflicts were resolved by consensus.
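Screening at this scale is normally managed in dedicated software such as Covidence, but the underlying bookkeeping is simple to illustrate. The sketch below is a simplified, hypothetical example (the record fields, sample records and criteria are assumptions, not the authors' actual procedure) of removing duplicates and flagging clearly ineligible records before manual title/abstract screening.

```python
# Illustrative records; in practice these would be exported from the databases searched.
records = [
    {"doi": "10.1000/a1", "title": "Attitudes to data sharing", "language": "English", "type": "original research"},
    {"doi": "10.1000/a1", "title": "Attitudes to data sharing", "language": "English", "type": "original research"},
    {"doi": "10.1000/b2", "title": "Data sharing: an editorial", "language": "English", "type": "editorial"},
]

def deduplicate(records):
    """Keep the first occurrence of each DOI (or title, when the DOI is missing)."""
    seen, unique = set(), []
    for record in records:
        key = record.get("doi") or record["title"].lower()
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

def eligible(record):
    """Explicit, pre-specified criteria of the kind described above (illustrative only)."""
    return record["language"] == "English" and record["type"] == "original research"

unique = deduplicate(records)
to_screen = [r for r in unique if eligible(r)]
print(f"{len(records)} retrieved, {len(unique)} after deduplication, {len(to_screen)} to screen")
```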
Quality and bias were assessed at a study level using the QualSyst system for quantitative and qualitative studies as described by Kmet et al. [ 15 ]. A maximum score of 20 is assigned to articles of high quality and low bias; the final QualSyst score is a proportion of the total, with a possible score ranging from 0.0 to 1.0 [ 15 ].
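As a worked example of the scoring rule just described, the final QualSyst score is simply the points awarded divided by the maximum available, giving a proportion between 0.0 and 1.0. The item values below are placeholders, not the actual checklist wording.

```python
def qualsyst_score(item_scores, max_per_item=2):
    """Summary score = points awarded / maximum possible, giving 0.0-1.0.

    item_scores holds one value per applicable checklist item
    (illustratively 2 = criterion met, 1 = partially met, 0 = not met);
    items judged "not applicable" are omitted, which lowers the maximum.
    """
    maximum = max_per_item * len(item_scores)
    return round(sum(item_scores) / maximum, 2)

# Ten applicable items scored out of 2 each (maximum 20, as noted above):
print(qualsyst_score([2, 2, 2, 1, 2, 2, 1, 2, 2, 2]))  # 18 / 20 -> 0.9
```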
Data extraction was undertaken using a pre-piloted form in Microsoft Office Excel. Data points included author, country and year of study, study design and methodology, health setting, and key themes and results. Where available, detailed information on research participants was extracted including age, sex, clinical/academic employment setting, publication and grant history, career stage, and world region.
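A pre-piloted extraction form of this kind can be represented as a simple structured record. The sketch below is a hypothetical Python rendering whose fields follow the data points listed in this paragraph; the authors' actual template was an Excel form, so the class and field names here are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of a hypothetical extraction form; fields mirror the data points above."""
    author: str
    country: str
    year: int
    study_design: str
    methodology: str
    health_setting: str
    key_themes: List[str] = field(default_factory=list)
    key_results: str = ""
    # Participant details, extracted where available
    participant_age: Optional[str] = None
    participant_sex: Optional[str] = None
    employment_setting: Optional[str] = None
    career_stage: Optional[str] = None
    world_region: Optional[str] = None

example = ExtractionRecord(
    author="Example et al.",
    country="Australia",
    year=2015,
    study_design="cross-sectional survey",
    methodology="quantitative",
    health_setting="general health research",
    key_themes=["barriers", "acknowledgement"],
    key_results="Most respondents supported sharing in principle.",
)
```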
Quantitative data were summarised using descriptive statistics. Synthesis of qualitative findings used a meta-ethnographic approach, in accordance with guidelines from Lockwood et al. [ 16 ]. The main themes of each qualitative study were first identified and then combined, if relevant, into categories of commonality. Using a constant comparative approach, higher order themes and subthemes were developed. Quantitative data relevant to each theme were then incorporated. Using a framework analysis approach as described by Gale et al. [ 17 ], the perspectives of different professional groups (researchers, healthcare professionals, data custodians, and ethics committees) towards data sharing were identified. Where differences occurred, they are highlighted in the results. Similarly, where systematic differences occurred according to other characteristics (such as age or years of experience), these are highlighted.
Results
This search identified 4019 articles, of which 241 underwent full-text screening; 73 articles met the inclusion criteria for the larger review. Five systematic literature reviews were excluded, as was one article which presented duplicate results; this left a total of 67 articles eligible for review. See Fig. 1 for the PRISMA diagram describing study screening.

Fig. 1: PRISMA flow diagram
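The screening flow reported in the Results reduces to a short running tally. The sketch below simply re-derives the counts from the numbers quoted in the paragraph above; it is illustrative arithmetic, not the published PRISMA diagram.

```python
# Counts quoted in the Results paragraph above; illustrative arithmetic only.
identified = 4019                 # records identified by the searches
full_text_screened = 241          # records taken to full-text screening
met_inclusion_criteria = 73       # included in the larger review
excluded_systematic_reviews = 5   # systematic reviews excluded from this analysis
excluded_duplicate_results = 1    # article presenting duplicate results

not_taken_to_full_text = identified - full_text_screened
eligible_for_this_review = (met_inclusion_criteria
                            - excluded_systematic_reviews
                            - excluded_duplicate_results)

print(f"Not taken to full-text screening: {not_taken_to_full_text}")   # 3778
print(f"Eligible for this review: {eligible_for_this_review}")         # 67
```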
This systematic literature review was originally developed to identify attitudes towards secondary use and sharing of health administrative and clinical trial data in breast cancer. However, as there was a paucity of material identified specifically related to this group, we present the multidisciplinary results of this search, and where possible highlight results specific to breast cancer, and cancer more generally. We believe that the material identified in this search is relevant and reflective of the wider attitudes towards data sharing within the scientific and medical communities and can be used to inform data sharing strategies in breast cancer.
Eighteen [ 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ] of the 67 articles addressed the perspectives of clinical and scientific researchers, data custodians, and ethics committees and were analysed for this paper (Table 2). The majority (n = 16) of articles focused on the views of researchers and health professionals [ 18 , 19 , 20 , 21 , 22 , 24 , 25 , 26 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ]; only one article each focused on data custodians [ 27 ] and ethics committees [ 23 ]. Four articles [ 18 , 19 , 21 , 35 ] discussed the attitudes of both researchers/healthcare professionals and patients; only results relating to researchers/clinicians are included in this analysis (Fig. 1).
Study design, location, and disciplines
Several study methodologies were used, including surveys (n = 11) [ 24 , 25 , 26 , 27 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ], interviews and focus groups (n = 6) [ 18 , 19 , 20 , 21 , 22 , 23 ], and mixed methods (n = 1) [ 28 ]. Studies were conducted in several countries and regions; a breakdown by country and study is available in Table 3.
In addition to papers focusing on general health and sciences [ 18 , 21 , 22 , 24 , 25 , 26 , 29 , 30 , 31 , 32 , 33 , 34 ], two articles included views from both science and non-science disciplines [ 27 , 28 ]. Multiple sclerosis (MS) [ 19 ], mental health [ 35 ], and human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS)/tuberculosis (TB) [ 20 ] were each the subject of one article.
Study quality
Results of the quality assessment are provided in Table 2 . QualSyst [15] scores ranged from 0.7 to 1.0 (possible range 0.0 to 1.0). While none were blinded studies, most provided clear information on respondent selection, data analysis methods, and justifiable study design and methodology.
Four key themes (barriers, facilitators, access, and ownership) and 14 subthemes were identified. A graphical representation of article themes is presented in Fig. 2. Two articles reflect the perspectives of research ethics committees [ 23 ] and data custodians [ 27 ]; concerns noted by these groups are similar to those highlighted by researchers and healthcare professionals.

Fig. 2: Graphical representation of key themes and subthemes identified
Barriers and facilitators
Reasons for not sharing
Eleven articles identified barriers to data sharing [ 20 , 22 , 24 , 25 , 27 , 29 , 30 , 31 , 32 , 33 , 34 ]. Concerns cited by respondents included other researchers taking their results [ 24 , 25 ], having data misinterpreted or misattributed [ 24 , 27 , 31 , 32 ], loss of opportunities to maximise intellectual property [ 24 , 25 , 27 ], and loss of publication opportunities [ 24 , 25 ] or funding [ 25 ]. Results of a qualitative study showed respondents emphasised the competitive value of research data and its capacity to advance an individual’s career [ 20 ] and the potential for competitive disadvantage with data sharing [ 22 ]. Systemic issues related to increased data sharing were noted in several articles, where it was suggested that the barriers are ‘deeply rooted in the practices and culture of the research process as well as the researchers themselves’ [ 33 ] (p. 1), and that scientific competition and a lack of incentive in academia to share data remain barriers to increased sharing [ 30 ].
Insufficient time, lack of funding, limited storage infrastructure, and lack of procedural standards were also noted as barriers [ 33 ]. Quantitative results indicated that some researchers did not have the right to make their data public, or that the study sponsor imposed no requirement to share [ 33 ]. Maintaining the balance between investigator and funder interests and the protection of research subjects [ 31 ] was also cited as a barrier. Concerns about privacy were noted in four articles [ 25 , 27 , 29 , 30 ]; one study indicated that clinical researchers were significantly more concerned with issues of privacy than scientific researchers [ 25 ]. The results of one qualitative study indicated that clinicians were more cautious than patients regarding the inclusion of personal information in a disease-specific registry; the authors suggest this may be a result of the potential for legal challenges where explicit consent and consistent guidelines are lacking [ 19 ]. Researchers, particularly clinical staff, indicated that they did not see sharing data in a repository as relevant to their work [ 29 ].
Trust was also identified as a barrier to greater data sharing [ 32 ]. Rathi et al. identified that researchers were likely to withhold data if they mistrusted the intent of the researcher requesting the information [ 32 ]. Ethical, moral, and legal issues were other potential barriers cited [ 19 , 22 ]. In one quantitative study, 74% of respondents ( N = 317) indicated that ensuring appropriate data use was a concern; other concerns included data not being appropriate for the requested purpose [ 32 ]. Concerns about data quality were also cited as a barrier to data reuse; some respondents suggested that there was a perceived negative association of data reuse among health scientists [ 30 ].
Reasons for sharing
Eleven articles [ 19 , 20 , 21 , 22 , 24 , 25 , 29 , 30 , 31 , 32 , 33 ] discussed the reasons identified by researchers and healthcare professionals for sharing health data; broadly the principle of data sharing was seen as a desirable norm [ 25 , 31 ]. Cited benefits included improvements to the delivery of care, communication and receipt of information, impacts on care and quality of life [ 19 ], contributing to the advancement of science [ 20 , 24 , 29 ], validating scientific outputs, reducing duplication of scientific effort and minimising research costs [ 20 ], and promoting open science [ 31 , 32 ]. Professional reasons for sharing data included academic benefit and recognition, networking and collaborative opportunities [ 20 , 24 , 29 , 31 ], and contributing to the visibility of their research [ 24 ]. Several articles noted the potential of shared data for enabling faster access to a wider pool of patients [ 21 ] for research, improved access to population data for longitudinal studies [ 22 ], and increased responsiveness to public health needs [ 20 ]. In one study, a small percentage of respondents indicated that there were no benefits from sharing their data [ 24 ].
Analysis of quantitative survey data indicated that the perceived usefulness of data was most strongly associated with reuse intention [ 30 ]. The lack of access to data generated by other researchers or institutions was seen as a major impediment to progress in science [ 33 ]. In a second study, quantitative data showed no significant differences in reasons for sharing by clinical trialists’ academic productivity, geographic location, trial funding source or size, or the journal in which the results were published [ 32 ]. Attitudes towards sharing in order to receive academic benefits or recognition differed significantly based on the respondent’s geographic location; those from Western Europe were more willing to share compared to respondents in the USA or Canada, and the rest of the world [ 32 ].
Views on sharing
Seven articles [ 19 , 20 , 21 , 29 , 31 , 33 , 34 ] discussed researchers’ and healthcare professionals’ views relating to sharing data, with a broad range of views noted. Two articles, both qualitative, discussed the role of national registries [ 21 ], and data repositories [ 31 ]. Generally, there was clear support for national research registers and an acceptance for their rationale [ 21 ], and some respondents believed that sharing de-identified data through data repositories should be required and that when requested, investigators should share data [ 31 ]. Sharing de-identified data for reasons beyond academic and public health benefit were cited as a concern [ 20 ]. Two quantitative studies noted a proportion of researchers who believed that data should not be made available [ 33 , 34 ]. Researchers also expressed differences in how shared data should be managed; the requirement for data to be ‘gate-kept’ was preferred by some, while others were happy to relinquish control of their data once curated or on release [ 20 ]. Quantitative results indicated that scientists were significantly more likely to rank data reuse as highly relevant to their work than clinicians [ 29 ], but not all scientists shared data equally or had the same views about data sharing or reuse [ 33 ]. Some respondents argued that not all data were equal and therefore should only be shared in certain circumstances. This was in direct contrast to other respondents who suggested that all data should be shared, all of the time [ 20 ].
Differences by age, background, discipline, professional focus, and world region
Differences in attitudes towards shared data were noted by age, professional focus, and world region [ 25 , 27 , 33 , 34 ]. Younger researchers (aged 20–39 and 40–49 years) were less likely to share their data with others (39% and 38%, respectively) compared with other age groups; respondents aged over 50 years were more willing (46%) to share [ 33 ]. Interestingly, while less willing to share, younger researchers also believed that the lack of access to data was a major impediment to science and their research [ 33 ]. Where younger researchers were able to place conditions on access to their data, rates of willingness to share increased [ 33 ].
Respondents from the disciplines of education, medicine/health science, and psychology were more inclined than others to agree that their data should not be available for others to use in the first place [ 34 ]. However, results from one study indicated that researchers from the medical field and social sciences were less likely to share compared to other disciplines [ 33 ]. For example, results of a quantitative study showed that compared to biologists, who reported sharing 85% of their data, medical and social science researchers reported sharing their data 65% and 58% of the time, respectively [ 33 ].
One of the primary reasons for controlling access to data, identified in a study of data custodians, was due to a desire to avoid data misuse; this was cited as a factor for all surveyed data repositories except those of an interdisciplinary nature [ 27 ]. Limiting access to certain types of research and ensuring attribution were not listed as a concern for sociology, humanities or interdisciplinary data collections [ 27 ]. Issues pertaining to privacy and sensitive data were only cited as concerns for data collections related to humanities, social sciences, and biology, ecology, and chemistry; concerns regarding intellectual property were also noted [ 27 ]. The disciplines of biology, ecology, and chemistry and social sciences had the most policy restrictions on the use of data held in their repositories [ 27 ].
Differences in data sharing practices were also noted by world region. Respondents from outside North America and Europe were more willing to place their data in a central repository; however, they were also more likely to place conditions on the reuse of their data [ 33 , 34 ].
Experience of data sharing
The experience of data sharing among researchers was discussed in nine articles [ 20 , 24 , 25 , 26 , 28 , 29 , 30 , 31 , 32 , 33 ]. Data sharing arrangements were highly individual and ranged from ad hoc and informal processes to formal procedures enforced by institutional policies in the form of contractual agreements, with respondents indicating data sharing behaviour ranging from sharing no data to sharing all data [ 20 , 26 , 31 ]. Quantitative data from one study showed that researchers were more inclined to share data prior to publication with people that they knew compared to those they did not; post publication, these figures were similar between groups [ 24 ]. While many researchers were prepared to share data, results of a survey identified a preference of researchers to collect data themselves, followed by their team, or by close colleagues [ 26 ].
Differences in the stated rate of data sharing compared to the actual rate of sharing [ 25 ] were noted. In a large quantitative study ( N = 1329), nearly one third of respondents chose not to answer whether they make their data available to others; of those who responded to the question, 46% reported they do not make their data electronically available to others [ 33 ]. By discipline, differences in the rate of refusal to share were higher in chemistry compared to non-science disciplines such as sociology [ 25 ]. Respondents who were more academically productive (> 25 articles over the past 3 years) reported that they have or would withhold data to protect research subjects less frequently than those who were less academically productive or received industry funding [ 32 ].
Attitudes to sharing de-identified data via data repositories were discussed in two articles [ 29 , 31 ]. A majority of respondents in one study indicated that de-identified data should be shared via a repository and that it should be shared when requested. A lack of experience in uploading data to repositories was noted as a barrier [ 29 ]. When data were shared, most researchers included additional materials to support their data, such as metadata or a protocol description [ 29 ].
Two articles [ 28 , 30 ] focused on processes and variables associated with sharing. Factors such as norms, data infrastructure/organisational support, and research communities were identified as important in a researcher’s attitude towards data sharing [ 28 , 30 ]. A moderate correlation between data reuse and data sharing suggests that these two variables are not strongly linked; data sharing and self-reported data reuse were only moderately associated (Pearson’s correlation of 0.25, p ≤ 0.001) [ 26 ].
Predictors of data sharing and norms
Two articles [ 26 , 30 ] discussed the role of social norms and an individual’s willingness to share health data. Perceived efficacy and efficiency of data reuse were strong predictors of data sharing [ 26 ] and the development of a ‘positive social norm towards data sharing support(s)[ed] researcher data reuse intention’ [ 30 ] (p. 400).
Policy framework
The establishment of clear policies and procedures to support data sharing was highlighted in two articles [ 22 , 28 ]. The presence of ambiguous data sharing policies was noted as a major limitation, particularly in primary care and the increased adoption of health informatics systems [ 22 ]. Policies that support an efficient exchange system allowing for the maximum amount of data sharing are preferred and may include incentives such as formal recognition and financial reimbursement; a framework for this is proposed in Fecher et al. [ 28 ].
Research funding
The requirement to share data funded by public monies was discussed in one article [ 25 ]. Some cases were reported of researchers refusing to share data funded by tax-payer funds; reasons for refusal included a potential reduction in future funding or publishing opportunities [ 25 ].

Access and ownership
Articles relating to access and ownership were grouped together and seven subthemes were identified.
Access, information systems, and metadata
Ten articles [ 19 , 20 , 21 , 22 , 26 , 27 , 29 , 33 , 34 , 35 ] discussed the themes of access, information systems, and the use of metadata. Ensuring privacy protections in a prospective manner was seen as important for data held in registries [ 19 ]. In the setting of mental health, researchers indicated that patients should have more choices for controlling access to shared registry data [ 35 ]. The use of guardianship committees [ 19 ] or gate-keepers [ 20 ] was seen by some respondents as important in ensuring the security of, and access to, data held in registries; however, many suggested that a researcher should relinquish control of the data collection once curated or released, unless embargoed [ 20 ]. Reasons for maintaining control over registry data included ensuring attribution, restricting commercial research, protecting sensitive (non-personal) information, and limiting certain types of research [ 27 ]. Concerns about security and confidentiality were noted as important and assurances about these needed to be provided; accountability and transparency mechanisms also needed to be included [ 21 ]. Many respondents did not consider access to registry data by pharmaceutical companies and marketing agencies to be appropriate [ 19 ].
Respondents to a survey from medicine and social sciences were less likely to agree to have all data included on a central repository with no restrictions [ 33 ]; notably, this was also reflected in the results of qualitative research which indicated that health professionals were more cautious than patients about the inclusion of personal data within a disease specific register [ 19 ].
While many researchers stated that they commonly shared data directly with other researchers, most did not have experience with uploading data to repositories [ 29 ]. Results from a survey indicated that younger respondents placed more restrictions on access to their data, yet were significantly more likely than older respondents to consider their data easy to access [ 34 ]. In the primary care setting, concerns were noted about the potential for practitioners to block patient involvement in a registry by refusing access to a patient’s personal data or by not giving permission for the data to be extracted from their clinical system [ 21 ]. There was also resistance in primary care towards health data amalgamation undertaken for an unspecified purpose [ 22 ]; respondents were not in favour of systems which included unwanted functionality (do not want/need), inadequate attributes (capability and receptivity) of the practice, or undesirable impact on the role of the general practitioner (autonomy, status, control, and workflow) [ 22 ].
Access to ‘comprehensive metadata (is needed) to support the correct interpretation of the data’ at a later stage [ 26 ] (p. 4). When additional materials were shared, most researchers shared contextualising information or a description of the experimental protocol [ 29 ]. The use of metadata standards was not universal, with some respondents using their own [ 33 ].
Several articles highlighted the impact of data curation on researchers’ time [ 20 , 21 , 22 , 29 , 33 ] or finances [ 24 , 28 , 29 , 33 , 34 ]; these were seen as potential barriers to increased registry adoption [ 21 ]. Tasks required for curation included preparing data for dissemination in a usable format and uploading data to repositories. The importance of ensuring that data are accurately preserved for future reuse was highlighted; they must be presented in a retrievable and auditable manner [ 20 ]. The amount of time required to curate data ranged from ‘no additional time’ to ‘greater than ten hours’ [ 29 ]. In one study, no clinical respondent had their data in a sharable format [ 29 ]. In the primary care setting, health information systems which promote sharing were not seen as beneficial if they required standardisation of processes and/or sharing of clinical notes [ 22 ]. Further, spending time on non-medical issues in a time-poor environment [ 22 ] was identified as a barrier. Six articles described the provision of funding or technical support to ensure data storage, maintenance, and the ability to provide access to data when requested; all noted a lack of funding and time as a barrier to increased data sharing [ 20 , 24 , 28 , 29 , 33 , 34 ].
Consent
Results of qualitative research indicated a range of views regarding consent mechanisms for future data use [ 18 , 19 , 20 , 23 , 35 ]. Consenting for future research can be complex given that the exact nature of the study will be unknown; some respondents therefore suggested that a broad statement on future data uses be included during the consent process [ 19 , 20 ]. In contrast, other participants indicated that current consent processes were too broad and do not reflect patient preferences sufficiently [ 35 ]. The importance of respecting the original consent in all future research was noted [ 20 ]. It was suggested that seeking additional consent for future data use may discourage participation in the original study [ 20 ]. Differences in views regarding the provision of detailed information about sharing individual-level data were noted, suggesting that researchers wanted to exert some control over data they had collected [ 20 ]. An opt-out consent process was considered appropriate in some situations [ 18 ] but not all; some respondents suggested that consent to use a patient’s medical records was not required [ 18 ]. There was support among some researchers for providing patients with the option to ‘opt in’ to different levels of involvement in a registry setting [ 19 ]. Providing patients with more granular choices for controlling access to their medical data [ 35 ] was seen as important.
The attitudes of ethics and review boards ( N = 30) towards the use of medical records for research was discussed in one article [ 23 ]. While 38% indicated that no further consent would be required, 47% required participant consent, and 10% said that the requirement for consent would depend on how the potentially identifying variables would be managed [ 23 ]. External researcher access to medical record data was associated with a requirement for consent [ 23 ].
Acknowledgement
The importance of establishing mechanisms which acknowledge the use of shared data was discussed in four articles [ 27 , 29 , 33 , 34 ]. A significant proportion of respondents to a survey believed it was fair to use other researchers’ data if they acknowledged the originator and the funding body in all disseminated work or as a formal citation in published works [ 33 ]. Other mechanisms for acknowledging the data originator included opportunities to collaborate on the project, reciprocal data sharing agreements, allowing the originator to review or comment on results (but not approve derivative works), the provision of a list of products making use of the data, and co-authorship [ 33 , 34 ]. In the setting of controlled data collections, survey results indicated that ensuring attribution was a motivator for controlled access [ 27 ]. Over half of respondents in one survey believed it was fair to disseminate results based in whole or in part on shared data without the data provider’s approval [ 33 ]. No significant differences in mechanisms for acknowledgement were noted between clinical and scientific participants; mechanisms included co-authorship, recognition in the acknowledgement section of publications, and citation in the bibliography [ 29 ]. No consistent method for acknowledging shared data reuse was identified [ 29 ].
Data ownership
Data ownership was identified as a potential barrier to increased data sharing in academic research [ 28 ]. In the setting of control of data collections, survey respondents indicated that they wanted to maintain some control over the dataset, which is suggestive of researchers having a perceived ownership of their research data [ 28 ]. Examples of researchers extending ownership over their data include the right to publish first and the control of access to datasets [ 28 ]. Fecher et al. noted that the idea of data ownership by the researcher is not a position always supported legally; ‘the ownership and rights of use, privacy, contractual consent and copyright’ are subsumed [ 28 ] (p. 15). Rather, data sharing is restricted by privacy law, which is applied to datasets containing data from individuals. The legal uncertainty about data ownership and the complexity of the law can deter data sharing [ 28 ].
Promotion/professional criteria
The role of data sharing and its relation to promotion and professional criteria were discussed in two articles [ 24 , 28 ]. The requirement to share data is rarely a promotion or professional criterion; rather, such systems are based on grants and publication history [ 24 , 28 ]. One study noted that while the traditional link between publication history and promotion remains, it is ‘likely that funders will continue to get sub-optimal returns on their investments, and that data will continue to be inefficiently utilised and disseminated’ [ 24 ] (p. 49).
Discussion
This systematic literature review highlights the ongoing complexity associated with increasing data sharing across the sciences. No additional literature meeting the inclusion criteria was identified in the period between the data search and the submission of this manuscript. Data gaps identified include a paucity of information specifically related to the attitudes of breast cancer researchers and health professionals towards the secondary use and sharing of health administrative and clinical trial data.
While the majority of respondents believed the principles of data sharing were sound, significant barriers remain: issues of consent, privacy, information security, and ownership were key themes throughout the literature. Data ownership and acknowledgement, trust, and policy frameworks influenced sharing practice, as did age, discipline, professional focus, and world region.
Addressing concerns of privacy, trust, and information security in a technologically changing and challenging landscape is complex. Ensuring the balance between privacy and sharing data for the greater good will require the formation of policy and procedures, which promote both these ideals.
Establishing clear consent mechanisms would provide greater clarity for all parties involved in the data sharing debate. Ensuring that appropriate consent for future research, including secondary data analysis and sharing and linking of datasets, is gained at the point of data collection, would continue to promote research transparency and provide healthcare professionals and researchers with knowledge that an individual is aware that their data may be used for other research purposes. The establishment of policy which supports and promotes the secondary use of data and data sharing will assist in the normalisation of this type of health research. With the increased promotion of data sharing and secondary data analysis as an established tool in health research, over time barriers to its use, including perceptions of ownership and concerns regarding privacy and consent, will decrease.
The importance of establishing clear and formal processes associated with acknowledging the use of shared data has been underscored in the results presented. Initiatives such as the Bioresource Research Impact Factor/Framework (BRIF) [ 36 ] and the Citation of BioResources in journal Articles (CoBRA) [ 37 ] have sought to formalise the process. However, increased academic recognition of sharing data for secondary analysis requires further development and the allocation of funding to ensure that collected data is in a usable, searchable, and retrievable format. Further, there needs to be a shift away from the traditional criteria of academic promotion, which includes research outputs, to one which is inclusive of a researcher’s data sharing history and the availability of their research dataset for secondary analysis.
The capacity to identify and use already collected data was identified as a barrier. Moves to make data findable, accessible, interoperable, and reusable (FAIR) have been promoted as a means to encourage greater accessibility to data in a systematic way [ 38 ]. The FAIR principles focus on data characteristics and should be interpreted alongside the collective benefit, authority to control, responsibility, and ethics (CARE) principles established by the Global Indigenous Data Alliance (GIDA), which are people and purpose orientated [ 39 ].
Limitations
The papers included in this study were limited to those indexed on major databases. Some literature on this topic may have been excluded if it was not identified during the grey literature and hand searching phases.
Implications
Results of this systematic literature review indicate that while there is broad agreement for the principles of data sharing in medical research, there remain disagreements about the infrastructure and procedures associated with the data sharing process. Additional work is therefore required on areas such as acknowledgement, curation, and data ownership.
While the literature confirms that there is overall support for data sharing in medical and scientific research, there remain significant barriers to its uptake. These include concerns about privacy, consent, information security, and data ownership.
Availability of data and materials
All data generated or analysed during this study are included in this published article.
Abbreviations
BRIF: Bioresource Research Impact Factor/Framework
CARE: Collective benefit, authority to control, responsibility, and ethics
CoBRA: Citation of BioResources in journal Articles
FAIR: Findable, accessible, interoperable, and reusable
GIDA: Global Indigenous Data Alliance
HIV/AIDS: Human immunodeficiency virus/acquired immunodeficiency syndrome
ICMJE: International Committee of Medical Journal Editors
MS: Multiple sclerosis
SEER: Surveillance, Epidemiology, and End Results
TB: Tuberculosis
TCGA: The Cancer Genome Atlas
References
Huesch MD, Mosher TJ. Using it or losing it? The case for data scientists inside health care. NEJM Catalyst. 2017.
Green LW. Closing the chasm between research and practice: evidence of and for change. Health Promot J Australia. 2014;25(1):25–9.
Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.
Goldacre B. Are clinical trial data shared sufficiently today? No. Br Med J. 2013;347:f1880.
Goldacre B, Gray J. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials. 2016;17(1):164.
Kostkova P, Brewer H, de Lusignan S, Fottrell E, Goldacre B, Hart G, et al. Who owns the data? Open data for healthcare. Front Public Health. 2016;4.
Elliott M. Seeing through the lies: innovation and the need for transparency. Gresham College Lecture Series; 23 November 2016; Museum of London. 2016.
European Medicines Agency. Publication and access to clinical-trial data. London: European Medicines Agency; 2013.
Taichman DB, Backus J, Baethge C, Bauchner H, de Leeuw PW, Drazen JM, et al. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. J Am Med Assoc. 2016;315(5):467–8.
National Institutes of Health (NIH). The Cancer Genome Atlas (TCGA): program overview. United States of America: National Institutes of Health (NIH); 2019 [Available from: https://cancergenome.nih.gov/abouttcga/overview ].
National Institutes of Health (NIH). Surveillance, Epidemiology, and End Results (SEER) Program. Washington: The Government of the United States of America; 2019 [Available from: https://seer.cancer.gov ].
Castellani J. Are clinical trial data shared sufficiently today? Yes. Br Med J. 2013;347:f1881.
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
Veritas Health Innovation. Covidence systematic review software. Melbourne: Cochrane Collaboration; 2018.
Kmet LM, Cook LS, Lee RC. Standard quality assessment criteria for evaluating primary research papers from a variety of fields; 2004.
Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evidence Based Healthcare. 2015;13(3):179–87.
Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117.
Asai A, Ohnishi M, Nishigaki E, Sekimoto M, Fukuhara S, Fukui T. Attitudes of the Japanese public and doctors towards use of archived information and samples without informed consent: preliminary findings based on focus group interviews. BMC Medical Ethics. 2002;3(1):1.
Baird W, Jackson R, Ford H, Evangelou N, Busby M, Bull P, et al. Holding personal information in a disease-specific register: the perspectives of people with multiple sclerosis and professionals on consent and access. J Med Ethics. 2009;35(2):92–6.
Denny SG, Silaigwana B, Wassenaar D, Bull S, Parker M. Developing ethical practices for public health research data sharing in South Africa: the views and experiences from a diverse sample of research stakeholders. J Empir Res Hum Res Ethics. 2015;10(3):290–301.
Grant A, Ure J, Nicolson DJ, Hanley J, Sheikh A, McKinstry B, et al. Acceptability and perceived barriers and facilitators to creating a national research register to enable 'direct to patient' enrolment into research: the Scottish Health Research register (SHARE). BMC Health Serv Res. 2013;13(1):422.
Knight J, Patrickson M, Gurd B. Understanding GP attitudes towards a data amalgamating health informatics system. Electron J Health Inform. 2008;3(2):12.
Willison DJ, Emerson C, Szala-Meneok KV, Gibson E, Schwartz L, Weisbaum KM, et al. Access to medical records for research purposes: varying perceptions across research ethics boards. J Med Ethics. 2008;34(4):308–14.
Bezuidenhout L, Chakauya E. Hidden concerns of sharing research data by low/middle-income country scientists. Glob Bioethics. 2018;29(1):39–54.
Ceci SJ. Scientists' attitudes toward data sharing. Sci Technol Human Values. 1988;13(1-2):45–52.
Curty RG, Crowston K, Specht A, Grant BW, Dalton ED. Attitudes and norms affecting scientists’ data reuse. PLoS One. 2017;12(12):e0189288.
Eschenfelder K, Johnson A. The limits of sharing: controlled data collections. Proc Am Soc Inf Sci Technol. 2011;48(1):1–10.
Fecher B, Friesike S, Hebing M. What drives academic data sharing? PLoS One. 2015;10(2):e0118053.
Federer LM, Lu Y-L, Joubert DJ, Welsh J, Brandys B. Biomedical data sharing and reuse: attitudes and practices of clinical and scientific research staff. PLoS One. 2015;10(6):e0129506.
Joo S, Kim S, Kim Y. An exploratory study of health scientists’ data reuse behaviors: examining attitudinal, social, and resource factors. Aslib J Inf Manag. 2017;69(4):389–407.
Rathi V, Dzara K, Gross CP, Hrynaszkiewicz I, Joffe S, Krumholz HM, et al. Sharing of clinical trial data among trialists: a cross sectional survey. Br Med J. 2012;345:e7570.
Rathi VK, Strait KM, Gross CP, Hrynaszkiewicz I, Joffe S, Krumholz HM, et al. Predictors of clinical trial data sharing: exploratory analysis of a cross-sectional survey. Trials. 2014;15(1):384.
Tenopir C, Allard S, Douglass K, Aydinoglu AU, Wu L, Read E, et al. Data sharing by scientists: practices and perceptions. PLoS One. 2011;6(6):e21101.
Tenopir C, Dalton ED, Allard S, Frame M, Pjesivac I, Birch B, et al. Changes in data sharing and data reuse practices and perceptions among scientists worldwide. PLoS One. 2015;10(8):e0134826.
Grando MA, Murcko A, Mahankali S, Saks M, Zent M, Chern D, et al. A study to elicit behavioral health patients' and providers' opinions on health records consent. J Law Med Ethics. 2017;45(2):238–59.
Howard HC, Mascalzoni D, Mabile L, Houeland G, Rial-Sebbag E, Cambon-Thomsen A. How to responsibly acknowledge research work in the era of big data and biobanks: ethical aspects of the bioresource research impact factor (BRIF). J Commun Genetics. 2018;9(2):169–76.
Bravo E, Calzolari A, De Castro P, Mabile L, Napolitani F, Rossi AM, et al. Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA). BMC Med. 2015;13:33.
Boeckhout M, Zielhuis GA, Bredenoord AL. The FAIR guiding principles for data stewardship: fair enough? Eur J Human Genetics. 2018;26(7):931–6.
Global Indigenous Data Alliance (GIDA). CARE principles for indigenous data governance. GIDA; 2019 [Available from: https://www.gida-global.org/care ].
Acknowledgements
The authors would like to thank Ms. Ngaire Pettit-Young, Information First, Sydney, NSW, Australia, for her assistance in developing the search strategy.
Funding
This project was supported by the Sydney Vital, Translational Cancer Research, through a Cancer Institute NSW competitive grant. The views expressed herein are those of the authors and are not necessarily those of the Cancer Institute NSW. FB is supported in her academic role by the Friends of the Mater Foundation.
Author information
Authors and affiliations
Northern Clinical School, Faculty of Medicine, University of Sydney, Sydney, Australia
Elizabeth Hutchings & Frances M. Boyle
Department of Psychology, The University of Sydney, Sydney, NSW, Australia
Max Loomes & Phyllis Butow
Centre for Medical Psychology & Evidence-Based Decision-Making (CeMPED), Sydney, Australia
Phyllis Butow
Psycho-Oncology Co-Operative Research Group (PoCoG), The University of Sydney, Sydney, NSW, Australia
Patricia Ritchie Centre for Cancer Care and Research, Mater Hospital, North Sydney, Sydney, Australia
Frances M. Boyle
Contributions
EH, PB, and FB were responsible for developing the study concept and the development of the protocol. EH and ML were responsible for the data extraction and data analysis. FB and PB supervised this research. All authors participated in interpreting the findings and contributed to the intellectual content of the manuscript. All authors have read and approved the manuscript.
Corresponding author
Correspondence to Elizabeth Hutchings.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Competing interests
EH, ML, PB, and FB declare that they have no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Hutchings, E., Loomes, M., Butow, P. et al. A systematic literature review of researchers’ and healthcare professionals’ attitudes towards the secondary use and sharing of health administrative and clinical trial data. Syst Rev 9, 240 (2020). https://doi.org/10.1186/s13643-020-01485-5
Received: 27 December 2019
Accepted: 17 September 2020
Published: 12 October 2020
DOI: https://doi.org/10.1186/s13643-020-01485-5
Keywords: Secondary data analysis
- How to write a systematic literature review

- What is a systematic literature review?
A systematic literature review is a summary, analysis, and evaluation of all the existing research on a well-formulated and specific question. Put simply, it’s a study of studies.
- Where are systematic literature reviews used?
Systematic literature reviews can be utilized in various contexts, but they’re often relied on in clinical or healthcare settings.
Medical professionals read systematic literature reviews to stay up-to-date in their field, and granting agencies sometimes need them to make sure there’s justification for further research in an area. They can even be used as the starting point for developing clinical practice guidelines.
- What types of systematic literature review are there?
A classic systematic literature review can take different approaches:
- Effectiveness reviews assess the extent to which a medical intervention or therapy achieves its intended effect. They’re the most common type of systematic literature review.
- Diagnostic test accuracy reviews produce a summary of diagnostic test performance so that their accuracy can be determined before use by healthcare professionals.
- Experiential (qualitative) reviews analyze human experiences in a cultural or social context. They can be used to assess the effectiveness of an intervention from a person-centric perspective.
- Costs/economics evaluation reviews look at the cost implications of an intervention or procedure, to assess the resources needed to implement it.
- Etiology/risk reviews usually try to determine to what degree a relationship exists between an exposure and a health outcome. This can be used to better inform healthcare planning and resource allocation.
- Psychometric reviews assess the quality of health measurement tools so that the best instrument can be selected for use.
- Prevalence/incidence reviews measure both the proportion of a population who have a disease, and how often the disease occurs.
- Prognostic reviews examine the course of a disease and its potential outcomes.
- Expert opinion/policy reviews are based around expert narrative or policy. They’re often used to complement, or in the absence of, quantitative data.
- Methodology systematic reviews can be carried out to analyze any methodological issues in the design, conduct, or review of research studies.
Writing a systematic literature review can feel like an overwhelming undertaking. After all, they can often take 6 to 18 months to complete. But, as with any documentation, we can break them down into the sections that should be included. Below we’ve prepared a step-by-step guide on how to write a systematic literature review.
- 1. Decide on your team
When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.
You may also need to team up with a librarian to help with the search, literature screeners, a statistician to analyze the data, and the relevant subject experts.
- 2. Formulate your question
Define your answerable question. Then ask yourself, “has someone written a systematic literature review on my question already?” If so, yours may not be needed. A librarian can help you answer this.
You should formulate a ‘Well-Built Clinical Question’ – this is the process of generating a good search question. To do this, run through PICO:
- Patient or Population or Problem/Disease – Who or what is the question about? Are there factors about them (e.g. age, race) that could be relevant to the question you’re trying to answer?
- Intervention – Which main intervention or treatment are you considering for assessment?
- Comparison/s or Control – Is there an alternative intervention or treatment you’re considering? Your systematic literature review doesn’t have to contain a comparison, but you’ll want to stipulate at this stage, either way.
- Outcome/s – What are you trying to measure or achieve? What’s the wider goal for the work you’ll be doing?
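If you keep your protocol materials in plain text or code, it can help to record the PICO elements in a structured form that you can reuse when drafting search strings. Below is a minimal sketch in Python; the example question (alarms versus drug treatments for nocturnal enuresis) is borrowed from later in this guide, and the field names and synonym lists are purely illustrative, not part of any standard tool.

```python
# Minimal, illustrative PICO record (hypothetical terms and synonyms).
pico = {
    "population": "children with nocturnal enuresis",
    "intervention": "enuresis alarms",
    "comparison": "drug treatments (e.g. desmopressin)",
    "outcome": "prevention of nocturnal enuresis (dry nights)",
}

# Crude draft of a Boolean search string built from per-element synonym lists.
synonyms = {
    "population": ["nocturnal enuresis", "bedwetting", "child*"],
    "intervention": ["enuresis alarm*", "bell and pad"],
    "comparison": ["desmopressin", "drug therapy"],
}
blocks = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")" for terms in synonyms.values()]
print(" AND ".join(blocks))
```

Keeping the question in this form also makes it easier to show, later on, exactly how each search term maps back to a PICO element.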
- 3. Plan your research protocol
Now you need a detailed strategy for how you’re going to search for and evaluate the studies relating to your question.
The protocol for your systematic literature review should include:
- The objectives of your project
- The specific methods and processes that you’ll use
- The eligibility criteria of the individual studies
- How you plan to extract data from individual studies
- Which analyses you’re going to carry out
For a full guide on how to systematically develop your protocol, take a look at the PRISMA checklist. PRISMA has been designed primarily to improve the reporting of systematic literature reviews and meta-analyses.
- 4. Search for the literature
When writing a systematic literature review, your goal is to find all of the relevant studies relating to your question, so you need to search thoroughly.
This is where your librarian will come in handy again. They should be able to help you formulate a detailed search strategy, and point you to all of the best databases for your topic.
The places to consider in your search are electronic scientific databases (the most popular are PubMed, MEDLINE, and Embase), controlled clinical trial registers, non-English literature, raw data from published trials, references listed in primary sources, and unpublished sources known to experts in the field.
But don’t miss out on ‘grey literature’ sources – those sources outside of the usual academic publishing environment. They include non-peer-reviewed journals, pharmaceutical industry files, conference proceedings, pharmaceutical company websites, and internal reports. Grey literature sources are more likely to contain negative conclusions, so you’ll improve the reliability of your findings by including them.
You should document details such as:
- The databases you search and which years they cover
- The dates you first run the searches, and when they’re updated
- Which strategies you use, including search terms
- The numbers of results obtained
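If you prefer to keep this record programmatically rather than in a spreadsheet, a small append-only log is enough. The sketch below uses Python’s standard csv module; the file name and column names simply mirror the list above and are only a suggestion, not a required format.

```python
import csv
import os
from datetime import date

# Column names mirror the details recommended above; they are only a suggestion.
FIELDS = ["database", "coverage_years", "date_searched", "search_strategy", "n_results"]

def log_search(path, database, coverage_years, search_strategy, n_results):
    """Append one search run to a CSV log, writing the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "database": database,
            "coverage_years": coverage_years,
            "date_searched": date.today().isoformat(),
            "search_strategy": search_strategy,
            "n_results": n_results,
        })

# Hypothetical example entry.
log_search("search_log.csv", "PubMed", "1990-2024",
           '("enuresis alarm*") AND (child*)', 412)
```

Re-running the same searches later and appending a new row makes it straightforward to show when and how the searches were updated.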
- 5. Screen the literature
This should be performed by your two reviewers, using the criteria documented in your research protocol. The screening is done in two phases:
- Pre-screening all titles and abstracts, and selecting those appropriate
- Screening the full-text articles of the selected studies
Make sure reviewers keep a log of which studies they exclude, with reasons why.
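A simple shared log of screening decisions makes the exclusion reasons easy to report later (for example, in a PRISMA flow diagram). The snippet below is only a sketch; the study IDs, phase names, and reasons are invented.

```python
from collections import Counter

# Each entry records one decision at one screening phase, with a reason
# whenever a study is excluded (values here are invented examples).
decisions = [
    {"study": "Smith2021", "phase": "title_abstract", "include": True,  "reason": None},
    {"study": "Jones2019", "phase": "title_abstract", "include": False, "reason": "wrong population"},
    {"study": "Smith2021", "phase": "full_text",      "include": False, "reason": "no usable outcome data"},
]

exclusion_reasons = Counter(d["reason"] for d in decisions if not d["include"])
print(exclusion_reasons)
# Counter({'wrong population': 1, 'no usable outcome data': 1})
```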
- 6. Assess the quality of the studies
Your reviewers should evaluate the methodological quality of your chosen full-text articles. Make an assessment checklist that closely aligns with your research protocol, including a consistent scoring system, calculations of the quality of each study, and sensitivity analysis.
The kinds of questions you'll come up with are:
- Were the participants really randomly allocated to their groups?
- Were the groups similar in terms of prognostic factors?
- Could the conclusions of the study have been influenced by bias?
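If your checklist uses a consistent scoring system (for example, yes/partial/no items rolled up into a summary score, in the spirit of tools such as the Kmet criteria), the roll-up is easy to automate. The sketch below is illustrative only – the items, weights, and handling of “not applicable” answers should follow whatever instrument your protocol specifies.

```python
# Illustrative quality-score roll-up; not a validated instrument.
ITEM_SCORES = {"yes": 2, "partial": 1, "no": 0}

def quality_score(answers):
    """answers maps checklist item -> 'yes' | 'partial' | 'no' | 'n/a'."""
    scored = {item: a for item, a in answers.items() if a != "n/a"}
    if not scored:
        return None
    total = sum(ITEM_SCORES[a] for a in scored.values())
    return round(total / (2 * len(scored)), 2)  # proportion of the maximum possible

example = {
    "random_allocation": "yes",
    "groups_comparable_at_baseline": "partial",
    "outcome_assessor_blinded": "no",
    "analysis_controls_for_confounding": "yes",
}
print(quality_score(example))  # 0.62 for this invented study
```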
- 7. Extract the data
Every step of the data extraction must be documented for transparency and replicability. Create a data extraction form and set your reviewers to work extracting data from the qualified studies.
Here’s a free detailed template for recording data extraction, from Dalhousie University, Canada. It should be adapted to your specific question.
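If you want the extracted data in a machine-readable form from the start, each row of your extraction form can be mirrored by a small record like the one sketched below. The fields here are hypothetical – they should match your own form (for example, an adapted version of the Dalhousie template), not this example.

```python
from dataclasses import dataclass, asdict

# Hypothetical extraction record; align the fields with your own form.
@dataclass
class ExtractionRecord:
    study_id: str
    design: str
    n_participants: int
    intervention: str
    comparison: str
    outcome_measure: str
    result_summary: str
    extracted_by: str

record = ExtractionRecord(
    study_id="Smith2021",
    design="randomised controlled trial",
    n_participants=120,
    intervention="enuresis alarm",
    comparison="desmopressin",
    outcome_measure="dry nights per week",
    result_summary="mean difference 1.2 (95% CI 0.4 to 2.0)",
    extracted_by="Reviewer A",
)
print(asdict(record))
```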
- 8. Analyze the results
Establish a standard measure of outcome which can be applied to each study on the basis of its effect size.
Measures of outcome for studies with:
- Binary outcomes (e.g. cured/not cured) are odds ratio and risk ratio
- Continuous outcomes (e.g. blood pressure) are means, difference in means, and standardized difference in means
- Survival or time-to-event data are hazard ratios
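As a concrete illustration of the binary-outcome measures above, the sketch below computes an odds ratio and a risk ratio from a hypothetical 2×2 table (the counts are invented).

```python
# Hypothetical binary outcome: 30/100 cured with the intervention, 20/100 with control.
def odds_ratio(events_t, total_t, events_c, total_c):
    return (events_t * (total_c - events_c)) / ((total_t - events_t) * events_c)

def risk_ratio(events_t, total_t, events_c, total_c):
    return (events_t / total_t) / (events_c / total_c)

print(round(odds_ratio(30, 100, 20, 100), 2))  # 1.71
print(round(risk_ratio(30, 100, 20, 100), 2))  # 1.5
```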
Design a table and populate it with your data results. Draw this out into a forest plot, which provides a simple visual representation of variation between the studies. Then analyze the data for issues such as heterogeneity – variation in effect estimates between studies, which shows up in a forest plot when the studies’ confidence intervals have little or no overlap.
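To make the pooling step concrete, here is a minimal sketch of a fixed-effect (inverse-variance) pooled estimate on the log odds ratio scale, together with Cochran’s Q and the I² heterogeneity statistic. The effect sizes and standard errors are invented; in practice you would normally use a dedicated meta-analysis package (for example, metafor in R or statsmodels in Python) and let it draw the forest plot for you.

```python
import math

# Invented per-study results: (log odds ratio, standard error).
studies = [(0.54, 0.25), (0.30, 0.18), (0.75, 0.40)]

weights = [1 / se ** 2 for _, se in studies]                              # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))   # Cochran's Q
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0                # I^2 as a percentage

print(f"pooled OR = {math.exp(pooled):.2f}, I^2 = {i_squared:.0f}%")
```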
Again, record any excluded studies here for reference.
- 9. Interpret and present the results
Consider different factors when interpreting your results. These include limitations, strength of evidence, biases, applicability, economic effects, and implications for future practice or research.
Apply appropriate grading of your evidence and consider the strength of your recommendations.
It’s best to formulate a detailed plan for how you’ll present your systematic review results – take a look at these guidelines from Cochrane.
- Registering your systematic literature review
Before writing your systematic literature review, you can register it with OSF for additional guidance along the way.
Or, maybe you'd prefer to register your completed work with PROSPERO or TUScholarShare .
- Frequently Asked Questions about writing a systematic literature review
Systematic literature reviews are often found in clinical or healthcare settings. Medical professionals read systematic literature reviews to stay up-to-date in their field and granting agencies sometimes need them to make sure there’s justification for further research in an area.
The first stage in carrying out a systematic literature review is to put together your team. You should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.
Your systematic review protocol should include the objectives of your project, the specific methods and processes you’ll use, the eligibility criteria for individual studies, how you plan to extract data, and which analyses you’re going to carry out.
A literature review simply provides a summary of the literature available on a topic. A systematic review, on the other hand, is more than just a summary. It also includes an analysis and evaluation of existing research. Put simply, it's a study of studies.
The final stage of conducting a systematic literature review is interpreting and presenting the results. It’s best to formulate a detailed plan for how you’ll present your systematic review results; guidelines are available, for example, from Cochrane.
The difference between a systematic review and a literature review
Covidence takes a look at the difference between the two
Most of us are familiar with the terms systematic review and literature review. Both review types synthesise evidence and provide summary information. So what are the differences? What does systematic mean? And which approach is best 🤔 ?
‘Systematic’ describes the review’s methods. It means that they are transparent, reproducible and defined before the search gets underway. That’s important because it helps to minimise the bias that would result from cherry-picking studies in a non-systematic way.
This brings us to literature reviews. Literature reviews don’t usually apply the same rigour in their methods. That’s because, unlike systematic reviews, they don’t aim to produce an answer to a clinical question. Literature reviews can provide context or background information for a new piece of research. They can also stand alone as a general guide to what is already known about a particular topic.
Interest in systematic reviews has grown in recent years and the frequency of ‘systematic reviews’ in Google books has overtaken ‘literature reviews’ (with all the usual Ngram Viewer warnings – it searches around 6% of all books, no journals).

Let’s take a look at the two review types in more detail to highlight some key similarities and differences 👀.
🙋🏾♂️ What is a systematic review?
Systematic reviews ask a specific question about the effectiveness of a treatment and answer it by summarising evidence that meets a set of pre-specified criteria.
The process starts with a research question and a protocol or research plan. A review team searches for studies to answer the question using a highly sensitive search strategy. The retrieved studies are then screened for eligibility using the inclusion and exclusion criteria (this is done by at least two people working independently). Next, the reviewers extract the relevant data and assess the quality of the included studies. Finally, the review team synthesises the extracted study data and presents the results. The process is shown in figure 2 .

The results of a systematic review can be presented in many ways and the choice will depend on factors such as the type of data. Some reviews use meta-analysis to produce a statistical summary of effect estimates. Other reviews use narrative synthesis to present a textual summary.
Covidence accelerates the screening, data extraction, and quality assessment stages of your systematic review. It provides simple workflows and easy collaboration with colleagues around the world.
When is it appropriate to do a systematic review?
If you have a clinical question about the effectiveness of a particular treatment or treatments, you could answer it by conducting a systematic review. Systematic reviews in clinical medicine often follow the PICO framework, which stands for:
👦 Population (or patients)
💊 Intervention
💊 Comparison
📈 Outcome
Here’s a typical example of a systematic review title that uses the PICO framework: Alarms [intervention] versus drug treatments [comparison] for the prevention of nocturnal enuresis [outcome] in children [population]
Key attributes
- Systematic reviews follow prespecified methods
- The methods are explicit and replicable
- The review team assesses the quality of the evidence and attempts to minimise bias
- Results and conclusions are based on the evidence
🙋🏻♀️ What is a literature review?
Literature reviews provide an overview of what is known about a particular topic. They evaluate the material, rather than simply restating it, but the methods used to do this are not usually prespecified and they are not described in detail in the review. The search might be comprehensive but it does not aim to be exhaustive. Literature reviews are also referred to as narrative reviews.
Literature reviews use a topical approach and often take the form of a discussion. Precision and replicability are not the focus, rather the author seeks to demonstrate their understanding and perhaps also present their work in the context of what has come before. Often, this sort of synthesis does not attempt to control for the author’s own bias. The results or conclusion of a literature review is likely to be presented using words rather than statistical methods.
When is it appropriate to do a literature review?
We’ve all written some form of literature review: they are a central part of academic research ✍🏾. Literature reviews often form the introduction to a piece of writing, to provide the context. They can also be used to identify gaps in the literature and the need to fill them with new research 📚.
Key attributes
- Literature reviews take a thematic approach
- They do not specify inclusion or exclusion criteria
- They do not answer a clinical question
- The conclusions might be influenced by the author’s own views
🙋🏽 Ok, but what is a systematic literature review?
A quick internet search retrieves a cool 200 million hits for ‘systematic literature review’. What strange hybrid is this 🤯🤯 ?
Systematic review methodology has its roots in evidence-based medicine but it quickly gained traction in other areas – the social sciences for example – where researchers recognise the value of being methodical and minimising bias. Systematic review methods are increasingly applied to the more traditional types of review, including literature reviews, hence the proliferation of terms like ‘systematic literature review’ and many more.
Beware of the labels 🚨. The terminology used to describe review types can vary by discipline and changes over time. To really understand how any review was done you will need to examine the methods critically and make your own assessment of the quality and reliability of each synthesis 🤓.
Review methods are evolving constantly as researchers find new ways to meet the challenge of synthesising the evidence. Systematic review methods have influenced many other review types, including the traditional literature review.
Covidence is a web-based tool that saves you time at the screening, selection, data extraction and quality assessment stages of your systematic review. It supports easy collaboration across teams and provides a clear overview of task status.
Laura Mellor. Portsmouth, UK