TY - JOUR
T1 - Inter-rater and intra-rater reliability of the current assessment model and tools for laparoscopic suturing
AU - Wei, Chin Hung
AU - Shen, Shih Chiang
AU - Duh, Yih Cherng
AU - Tsai, Kuei-Yen
AU - Chen, Hsin An
AU - Huang, Shih Wei
N1 - Funding Information:
Drs. Hsin-An Chen and Chin-Hung Wei are speakers at IRCAD-Taiwan and recipients of a grant funded by Shuang Ho Hospital (Grant No. 110YSR-02). Dr. Shih-Wei Huang is the director of IRCAD-Taiwan. Drs. Kuei-Yen Tsai, Shih-Chiang Shen, and Yih-Cherng Duh have no conflicts of interest or financial ties to disclose.
Funding Information:
Funding was provided by Taipei Medical University, Shuang Ho Hospital (Grant No. 110YSR-02).
Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2022
Y1 - 2022
N2 - Background: The Moorthy checklist (MC) and the laparoscopic suturing competency assessment tool (LS-CAT) are commonly used to evaluate the quality of laparoscopic suturing. The current assessment model is a single measurement by multiple raters. Our aim was to examine the reliability of this assessment model and these tools. Methods: With IRB approval, participants from three different backgrounds, namely medical students, trainees, and surgeons, were enrolled. Each participant performed a standardized laparoscopic suturing task. The performances were video-recorded and reviewed with the LS-CAT and MC independently by three blinded raters. Intraclass correlation coefficients (ICC) were calculated for inter-rater and intra-rater reliability. Results: Twenty-six participants were enrolled, comprising 10 students, 10 trainees, and 6 surgeons. For inter-rater reliability, ICC values (95% CI) were 0.909 (0.768–0.961) and 0.868 (0.608–0.948) for the LS-CAT and MC, respectively. For students, ICC values were 0.908 (0.682–0.976) and 0.815 (0.408–0.951) for the LS-CAT and MC, respectively. For trainees, ICC values were 0.812 (0.426–0.947) and 0.717 (0.102–0.925), respectively. For surgeons, ICC values were 0.720 (0.064–0.955) and 0.868 (0.608–0.948), respectively. For intra-rater reliability, ICC values of the mean scores from the three raters were 0.956 (0.905–0.980) and 0.925 (0.842–0.966) for the LS-CAT and MC, respectively. Conclusion: Both the LS-CAT and the MC are qualified assessment tools for laparoscopic suturing. The LS-CAT is more reliable, particularly for medical students and trainees. The current assessment model of a single measurement by multiple raters provides excellent reliability.
AB - Background: The Moorthy checklist (MC) and the laparoscopic suturing competency assessment tool (LS-CAT) are commonly used to evaluate the quality of laparoscopic suturing. The current assessment model is a single measurement by multiple raters. Our aim was to examine the reliability of this assessment model and these tools. Methods: With IRB approval, participants from three different backgrounds, namely medical students, trainees, and surgeons, were enrolled. Each participant performed a standardized laparoscopic suturing task. The performances were video-recorded and reviewed with the LS-CAT and MC independently by three blinded raters. Intraclass correlation coefficients (ICC) were calculated for inter-rater and intra-rater reliability. Results: Twenty-six participants were enrolled, comprising 10 students, 10 trainees, and 6 surgeons. For inter-rater reliability, ICC values (95% CI) were 0.909 (0.768–0.961) and 0.868 (0.608–0.948) for the LS-CAT and MC, respectively. For students, ICC values were 0.908 (0.682–0.976) and 0.815 (0.408–0.951) for the LS-CAT and MC, respectively. For trainees, ICC values were 0.812 (0.426–0.947) and 0.717 (0.102–0.925), respectively. For surgeons, ICC values were 0.720 (0.064–0.955) and 0.868 (0.608–0.948), respectively. For intra-rater reliability, ICC values of the mean scores from the three raters were 0.956 (0.905–0.980) and 0.925 (0.842–0.966) for the LS-CAT and MC, respectively. Conclusion: Both the LS-CAT and the MC are qualified assessment tools for laparoscopic suturing. The LS-CAT is more reliable, particularly for medical students and trainees. The current assessment model of a single measurement by multiple raters provides excellent reliability.
KW - Assessment tool
KW - Laparoscopic suturing
KW - Laparoscopic suturing competency assessment tool
KW - Moorthy checklist
KW - Reliability
UR - http://www.scopus.com/inward/record.url?scp=85123930344&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123930344&partnerID=8YFLogxK
U2 - 10.1007/s00464-022-09061-9
DO - 10.1007/s00464-022-09061-9
M3 - Article
C2 - 35102428
AN - SCOPUS:85123930344
SN - 0930-2794
VL - 36
SP - 6586
EP - 6591
JO - Surgical Endoscopy
JF - Surgical Endoscopy
IS - 9
ER -