
Вопросы современной педиатрии (Current Pediatrics)


STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration

https://doi.org/10.15690/vsp.v21i3.2427

Abstract

Diagnostic accuracy studies, like other clinical studies, are at risk of bias due to shortcomings in design and conduct, and their results may not be applicable to other patient groups or other settings. Readers should be informed about the design and conduct of a diagnostic accuracy study in sufficient detail to judge the trustworthiness and applicability of its findings. The STARD (Standards for Reporting of Diagnostic Accuracy Studies) statement was developed to ensure complete and transparent reporting of diagnostic accuracy studies. It contains a list of essential reporting items that can be used by authors, reviewers, and readers as a checklist to verify that a report provides sufficient information. Here we present the updated STARD guideline; all of its materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. This article provides the rationale for the 30 items of the guideline and describes what is expected from authors to produce sufficiently informative study reports. This article is a translation of the original publication, edited by Dr. Sci. (Med.) R.T. Saygitov. The translation was first published in Digital Diagnostics. doi: 10.17816/DD71031. It is reproduced here with minor changes related to literary editing of the translated text.
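As background to the accuracy measures that the STARD items refer to: a diagnostic accuracy study ultimately cross-classifies index-test results against the reference standard in a 2×2 table, from which sensitivity and specificity are estimated. A minimal illustrative sketch (the counts below are hypothetical, not data from any study discussed here):

```python
def accuracy_from_2x2(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Estimate (sensitivity, specificity) from the 2x2 counts of
    index-test results versus the reference standard."""
    sensitivity = tp / (tp + fn)  # proportion of diseased correctly detected
    specificity = tn / (tn + fp)  # proportion of non-diseased correctly ruled out
    return sensitivity, specificity

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 180 true negatives.
sens, spec = accuracy_from_2x2(tp=90, fp=20, fn=10, tn=180)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```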

About the authors

J. F. Cohen
University of Amsterdam; Paris Descartes University
Netherlands

Amsterdam, Paris


Disclosure of interest:

none



D. A. Korevaar
University of Amsterdam
Netherlands

Amsterdam


Disclosure of interest:

none



D. G. Altman
University of Oxford
United Kingdom

Oxford


Disclosure of interest:

none



D. E. Bruns
University of Virginia School of Medicine
United States of America

Charlottesville, Virginia


Disclosure of interest:

none



C. A. Gatsonis
Brown University School of Public Health
United States of America

Providence, Rhode Island


Disclosure of interest:

none



L. Hooft
University of Utrecht
Netherlands

Utrecht


Disclosure of interest:

none



L. Irwig
University of Sydney
Australia

Sydney, New South Wales


Disclosure of interest:

none



D. Levine
Beth Israel Deaconess Medical Center; Radiology Editorial Office
United States of America

Boston


Disclosure of interest:

Beth Israel Deaconess Medical Center; Radiology Editorial Office



J. B. Reitsma
University of Utrecht
Netherlands

Utrecht


Disclosure of interest:

Beth Israel Deaconess Medical Center; Radiology Editorial Office



H.C.W. de Vet
VU University Medical Center
Netherlands

Amsterdam


Disclosure of interest:

none



P.M.M. Bossuyt
University of Amsterdam
Netherlands

Amsterdam


Disclosure of interest:

none





For citation (in Russian):


Cohen J.F., Korevaar D.A., Altman D.G., Bruns D.E., Gatsonis C.A., Hooft L., Irwig L., Levine D., Reitsma J.B., de Vet H., Bossuyt P. Рекомендации по составлению отчетов о диагностических исследованиях (STARD 2015): разъяснения и уточнения. Вопросы современной педиатрии. 2022;21(3):209-228. https://doi.org/10.15690/vsp.v21i3.2427

For citation:


Cohen J.F., Korevaar D.A., Altman D.G., Bruns D.E., Gatsonis C.A., Hooft L., Irwig L., Levine D., Reitsma J.B., de Vet H., Bossuyt P. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. Current Pediatrics. 2022;21(3):209-228. (In Russ.) https://doi.org/10.15690/vsp.v21i3.2427



Content is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1682-5527 (Print)
ISSN 1682-5535 (Online)