Consider two hypothetical randomized trials, each of which enters 1000 patients into each of the treatment and control groups, of whom 30 (3%) per group are lost to follow-up.

In trial A, treated patients die at half the rate of the control group (200 vs 400 deaths), a relative risk (RR) of 50%. To what extent does the loss to follow-up threaten our inference that treatment reduces the death rate by half? If we assume the worst (i.e., that all treated patients lost to follow-up died), the number of deaths in the experimental group rises to 230 (23%). If, at the same time, there were no deaths among the control patients lost to follow-up, the estimated RR of death shifts from 200/400, or 50%, to 230/400, or 58%; equivalently, the estimated relative risk reduction falls from 50% to about 42%. Thus, even assuming the worst makes little difference to the best estimate of the magnitude of the treatment effect. Our inference is therefore secure.
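As a minimal sketch of this worst-case (sensitivity) analysis, the arithmetic for trial A can be reproduced as follows; the function relative_risk and the way the counts are hard-coded are illustrative assumptions, not part of any actual trial report, although the numbers themselves restate those above.

```python
# Worst-case sensitivity analysis for loss to follow-up, trial A.
# Assumes 1000 patients per arm, 30 of whom are lost to follow-up in each arm.

def relative_risk(deaths_treated, n_treated, deaths_control, n_control):
    """Relative risk of death: event rate in the treated arm divided by
    the event rate in the control arm."""
    return (deaths_treated / n_treated) / (deaths_control / n_control)

N_PER_ARM = 1000
LOST_PER_ARM = 30

# Observed result: 200 deaths among treated patients, 400 among controls.
rr_observed = relative_risk(200, N_PER_ARM, 400, N_PER_ARM)  # 200/400 = 0.50

# Worst case: every treated patient lost to follow-up died, and no
# control patient lost to follow-up died.
rr_worst = relative_risk(200 + LOST_PER_ARM, N_PER_ARM, 400, N_PER_ARM)  # 230/400 = 0.575

print(f"Trial A: observed RR {rr_observed:.2f}, worst-case RR {rr_worst:.2f}")
```

Keeping the denominator at 1000 in the worst case treats the lost patients as followed-up patients who died, which mirrors the arithmetic in the paragraph above.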

Contrast this with trial B. Here, the RR of death is also 50%. In this case, however, the total number of deaths is much lower: 30 treated patients die (3%) and 60 control patients die (6%). In trial B, if we make the same worst-case assumption about the fate of the patients lost to follow-up, the results change markedly. If we assume that all patients initially allocated to treatment but subsequently lost to follow-up died, the number of deaths among treated patients increases from 30 to 60, equal to the number of control group deaths. Under this assumption, we would have 60 deaths in both the treatment and control groups, and the effect of treatment would fall to zero. Because of this marked change in the treatment effect (an RR of 50% if we ignore those lost to follow-up; an RR of 100% if we assume that all treated patients lost to follow-up died), the 3% loss to follow-up in trial B threatens our inference about the magnitude of the RR.
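The same sketch applied to trial B (again a hypothetical illustration, using only the counts given in the text) shows how much less robust this result is:

```python
# Worst-case sensitivity analysis for loss to follow-up, trial B.
# Same assumptions as the trial A sketch: 1000 patients and 30 losses per arm.

def relative_risk(deaths_treated, n_treated, deaths_control, n_control):
    """Relative risk of death: treated event rate / control event rate."""
    return (deaths_treated / n_treated) / (deaths_control / n_control)

rr_observed = relative_risk(30, 1000, 60, 1000)    # 30/60 = 0.50
rr_worst = relative_risk(30 + 30, 1000, 60, 1000)  # 60/60 = 1.00, no apparent benefit

print(f"Trial B: observed RR {rr_observed:.2f}, worst-case RR {rr_worst:.2f}")
```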