
Dissections OBSERVATIONAL

13 April 2009
Evidence-based Medicine for Surgeons

Can everyone achieve proficiency with the laparoscopic technique? Learning curve patterns in
technical skills acquisition
Authors: Grantcharov TP, Funch-Jensen P
Journal: The American Journal of Surgery 2009; 197: 447–449
Centre: Univ. of Toronto, St. Michael’s Hospital, Toronto, Canada; Aarhus Univ. Hospital, Aarhus, Denmark
BACKGROUND
Laparoscopic surgery is associated with a learning phase during which there can be an increased
incidence of serious complications. The technique is difficult to learn because of a number of
specific phenomena: loss of tactile perception, 2-dimensional to 3-dimensional conversion, hand-eye
co-ordination, and the fulcrum effect. Surgical trainees have varying psychomotor abilities,
and there is evidence that some cannot achieve proficiency despite extensive training.
RESEARCH QUESTION
Authors' claim(s): “...a group of trainees performed poorly and did not demonstrate any
improvement with practice, indicating that these subjects do not have the abilities necessary to
develop laparoscopic technical skills. Identifying subjects lacking in abilities to learn
laparoscopic technique will pose a challenge for the professional bodies responsible for training
and certification.”

Population: Thirty-seven surgical trainees with similar limited experience in laparoscopic surgery.
Indicator variable: A standardized skill acquisition program carried out on a laparoscopic,
virtual-reality trainer.
Outcome variable: Objective assessment of improvement in proficiency.
Comparison: None

IN SUMMARY
Proficiency after training | Number
Proficient from the beginning | 2
Achieved pre-defined expert criteria | 26
Improved but not able to achieve pre-defined expert criteria | 6
Underperformed and showed no tendency of skills improvement | 3

THE BOTTOM LINE


The authors perform a little trick that is annoying: reporting all results as percentages, when the total number involved in
the study is small. This tends to misrepresent reality. I had to calculate the numbers shown in the table above; they are
not available in the paper. Still, the conclusions are not as alarming as the authors make them out to be if you eyeball the
figure showing the results of their study (not reproduced for copyright reasons). The two superstars are way beyond
the rest of the crowd: classical outliers of the kind seen in every group of human performers. If you ignore them, the
remaining three groups are fairly closely bunched together around the competence cut-off line and not that alarmingly
different. The authors fail to tell us whether the differences are statistically significant. Considering the small numbers in the
study, I would bet that they are not. Yes, there are technically incompetent surgeons, but I don't think we are any closer
to an objective method of identifying them. And then, there will always be the argument that these trainers do not
replicate "real world" situations closely enough to have the power to discriminate between competent and incompetent
performers.
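The complaint about percentages from a sample of 37 can be made concrete with a confidence interval. The sketch below (my illustration, not anything in the paper; the Wilson score interval is my choice of method) uses the counts reconstructed in the table above and shows how imprecise each headline percentage really is:

```python
# Hedged sketch: how much uncertainty sits behind percentages from n = 37.
# Counts are those reconstructed by the reviewer; they are not in the paper.
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

counts = {
    "proficient from the beginning": 2,
    "achieved expert criteria": 26,
    "improved, below criteria": 6,
    "no improvement": 3,
}
n = sum(counts.values())  # 37 trainees in total

for label, k in counts.items():
    lo, hi = wilson_interval(k, n)
    print(f"{label}: {k}/{n} = {100 * k / n:.0f}% "
          f"(95% CI {100 * lo:.0f}%-{100 * hi:.0f}%)")
```

For the 3 trainees who showed no improvement, the interval runs from roughly 3% to 21% of trainees: far too wide to support any claim about what fraction of surgeons "cannot learn" the technique.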

EBM-O-METER
Evidence level: Double blind RCT | Randomized controlled trial (RCT) | Prospective cohort study -
not randomized | Case controlled study | Case series - retrospective
Overall rating: Trash (“Life's too short for this”) | Swiss cheese (“Full of holes”) |
Safe (“Holds water”) | Newsworthy (“Just do it”)
Bias levels: Sampling | Comparison | Measurement
Interesting | Novel | Feasible | Ethical | Resource saving

The devil is in the details (more on the paper) ... 

© Dr Arjun Rajagopalan
SAMPLING
Sample type: Simple random | Stratified random | Cluster | Consecutive | Convenience | Judgmental
Inclusion criteria: Surgical trainees with limited experience in laparoscopic surgery
Exclusion criteria: None mentioned

Final score card (Study): Target ? | Accessible ? | Intended 37 | Drop outs 0 | Study 37

✓ = Reasonable | ? = Arguable | ✗ = Questionable

Sampling bias: The study is based on a small, convenience sample of surgical trainees from,
presumably, one centre. At best, this can only be considered a pilot study, and it needs to be
validated on a larger, more representative group.

COMPARISON
Randomized | Case-control | Non-random | Historical | None

Controls - details
Allocation details: All participants performed 10 repetitions of 6 basic skills tasks on a
laparoscopic virtual-reality trainer within 1 month. Distribution of practice sessions was
standardized by performing no more than 3 repetitions per session and no more than 1 session per
day. Trainees were tested individually by the same instructor (TG). The subjects performed no
laparoscopic procedures in the operating room during the course of data collection.
Comparability: Proficiency criteria were established in a previous study by testing 8 surgeons
with extensive laparoscopic experience (i.e., performance of >100 laparoscopic procedures).
Disparity: -

Comparison bias: It would have been nice if the study had compared trainees with surgeons of
varying levels of experience. At least then, we would have had a fair comparison standard for
overall technical competence.

MEASUREMENT
Device used: Laparoscopic virtual-reality trainer (Procedicus MIST; Mentice, Gothenburg, Sweden)
Scored as: Y | ? | N
Measurement error: Device error | Observer error

Outcome measure | Gold std. | Device suited to task | Training | Scoring | Blinding | Repetition | Protocols
1. Time to complete task | Y | Y | Y | Y | Y | - | -
2. Number of errors | Y | Y | Y | Y | Y | - | -
3. Economy of motion | Y | Y | Y | Y | Y | - | -
Sufficient evidence is available to confirm the validity of several commercially available virtual-reality systems.

Measurement bias: All measurements of competence were made by a single observer.

