Federal Focus, Inc.

Science Policy

The London Principles for Evaluating Epidemiologic Data in Regulatory Risk Assessment

Preliminary and summary issues to be considered in assessing the utility of an epidemiologic study for risk assessment:

Question (answer: Yes / No / Not Known / Not Applicable)
a. Were the objectives of the study defined and stated?

b. Are the data relevant for risk assessment?

c. Was the study designed to have sufficient power to detect the effect(s) of interest?

d. Were good epidemiological practices followed?

e. Can the study findings be generalized for purposes of statutory regulation?

f. Were the principles enumerated below followed?

The listing of "preliminary and summary issues" is designed to help the reviewer to begin focusing on fundamental issues of the study's utility for risk assessment, and whether the study is suitable for either or both the hazard identification and dose-response components of a risk assessment.

The presence of questions in the format of a checklist in this preliminary section and under each of the hazard identification principles which follow does not imply that the principles and checklists can be applied in a mechanical fashion, that they are intended to produce some kind of numerical "score" or "grade", or that there are certain minimum quality hurdles a study must surmount. Nevertheless, when considered in their totality, the principles and subquestions are intended to assist the risk assessor, assisted by experts in epidemiology and other relevant disciplines, in forming an opinion as to the overall quality of the data and the weight they should be given in a risk assessment. While nonconformance with any single principle, or a "No" or "Not Known" answer to any subquestion, should not eliminate a study from consideration, review of the study in light of all the principles might result in its being given essentially no weight in a risk assessment.
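To make preliminary question (c) on statistical power concrete, the following sketch shows one common back-of-the-envelope calculation: the number of subjects needed per group to detect a difference between two proportions, using the standard normal approximation. The prevalence figures, significance level, and power target below are hypothetical values chosen purely for illustration, not drawn from the Principles themselves.

```python
# Sketch: approximate sample size per group needed to detect a difference
# between two proportions (two-sided test, normal approximation, equal
# group sizes). All numeric inputs below are hypothetical illustrations.
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Required subjects per group to detect p1 vs. p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a rise in disease prevalence from 10% to 15%
print(n_per_group(0.10, 0.15))
```

A study enrolling far fewer subjects per group than such a calculation suggests would likely merit a "No" answer to question (c).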

A. Principles for Evaluating an Epidemiologic Report for Cause-Effect Relationship


The numbered principles in this section apply only to the hazard identification portion of a risk assessment. The questions under each principle are designed to help elucidate the principle and to assist the expert reviewer in judging whether the study is consistent with that principle. The subquestions are framed so that a Yes answer is preferred.

The emphasis in these hazard identification principles is on evaluating individual studies, and the principles follow a logical progression from design and study population selection to reporting of results and evaluation of the results in a risk assessment context. Principle A-6, however, addresses interpretation of multiple studies through application of the "Bradford Hill criteria"; and Principle B-6 in the dose-response section, concerning meta-analysis, applies to consideration of multiple studies for hazard identification purposes as well as for dose-response purposes.

It must be emphasized that it is intended that application of these principles and interpretation of the data for risk assessment should be done by the risk assessor with the assistance of expert epidemiologists, and preferably with the assistance of a multidisciplinary team that includes not only epidemiologists, but also experts from other relevant disciplines, such as toxicology, medicine, biology, and industrial hygiene.

Finally, it is recognized that these principles set high standards, and that it is unlikely that any individual study can be considered perfect. The principles were drafted not only for the purpose of evaluating existing studies, but also with the hope that they will encourage greater rigor in future studies that are likely to be used in regulatory risk assessment.

Principle A-1.
The population studied should be pertinent to the risk assessment at hand, and it should be representative of a well-defined underlying cohort or population at risk.

Question (answer: Yes / No / Not Known / Not Applicable)
a. Were study subjects representative of exposed and unexposed persons (cohort study), or of diseased and non-diseased persons (case-control study)?

b. To minimize bias, were exposed and unexposed persons comparable "at baseline" (cohort study), or were cases similar to controls, prior to exposure, with respect to major risk factors for the disease or condition under study?

Principle A-2.
Study procedures should be described in sufficient detail, or available from the study's written protocol, to determine whether appropriate methods were used in the design and conduct of the investigation.

Question (answer: Yes / No / Not Known / Not Applicable)
a. To minimize the potential for bias, were interviewers and data collectors blind to the case/control status of study subjects and to the hypothesis being tested?

b. Were procedures for quality control in place for all major aspects of the study's design and implementation (e.g., ascertainment and selection of subjects, methods of data collection and analysis, follow-up, etc.)?

c. Were the effects of nonparticipation, a low response rate, or loss to follow-up taken into account in producing the study results?

Principle A-3.
The measures of exposure(s) or exposure surrogates should be:

  1. conceptually relevant to the risk assessment being conducted;
  2. based on principles that are biologically sound in light of present knowledge; and
  3. properly quantitated to assess dose-response relationships.
Question (answer: Yes / No / Not Known / Not Applicable)
a. Were well-documented procedures for quality assurance and quality control followed in exposure measurement and assessment (e.g., calibrating instruments, repeat measurements, re-interviews, tape recordings of interviews, etc.)?

b. Were measures of exposure consistent with current biological understanding of dose (e.g., with respect to averaging time, dose rate, peak dose, absorption via different exposure routes)?

c. If there is uncertainty about appropriate exposure measures, was a variety of measures used (e.g., duration of exposure, intensity of exposure, latency)?

d. If surrogate respondents were the source of information about exposure, was the proportion of the data they provided given, and were their relationships to the index subjects described?

e. To improve study power and enhance the generalizability of findings, was there sufficient variation in the exposure among subjects?

f. Were correlated exposures measured and evaluated to assess the possibility of competing causes, confounding, and potentiating effects (synergy)?

g. Were exposures measured directly rather than estimated? If estimated, have the systematic and random errors been characterized, either in the study at hand or by reference to the literature?

h. Were measurements of exposure, or human biochemical samples indicating exposure, obtained? Was a distinction made between exposure estimated from emissions and exposure estimated from body absorption?

i. If exposure was estimated by questionnaire, interview, or existing records, was reporting bias considered, and was it unlikely to have affected the study outcome?

j. Was there an explanation/understanding of why exposure occurred, the context of its occurrence, and the time period of exposure?

Principle A-4.
Study outcomes (endpoints) should be clearly defined, properly measured, and ascertained in an unbiased manner.

Question (answer: Yes / No / Not Known / Not Applicable)
a. Was the outcome variable a disease entity or pathological finding rather than a symptom or a physiological parameter?

b. Was variability in the possible outcomes understood and taken into account -- e.g., various manifestations of a disease considering its natural history?

c. Was the method of recording the outcome variable(s) reliable -- e.g., if the outcome was disease, did the design of the study provide for recording of the full spectrum of disease, such as early and advanced stage cancer; was a standardized classification system, such as the International Classification of Diseases, followed; were the data from a primary or a secondary source?

d. Has misclassification of the outcome(s) been minimized in the design and execution of the study? Has there been a review of all diagnoses by qualified medical personnel, and if so, were they blinded to study exposure?

Principle A-5.
The analysis of the study's data should provide both point and interval estimates of the exposure's effect, including adjustment for confounding, assessment of interaction (e.g., the effect of multiple exposures or differential susceptibility), and an evaluation of the possible influence of study bias.

Question (answer: Yes / No / Not Known / Not Applicable)
a. Was there a well-formulated and well-documented plan of analysis? If so, was it followed?

b. Were the methods of analysis appropriate? If not, is it reasonable to believe that better methods would not have led to substantially different results?

c. Were proper analytic approaches, such as stratification and regression adjustment, used to account for well-known major risk factors (potential confounders such as age, race, smoking, socio-economic status) for the disease under study?

d. Has a sensitivity analysis been performed in which quantitative adjustment was made for the effect of unmeasured potential confounders, e.g., any unmeasured, well-established risk factor(s) for the disease under study?

e. Did the report avoid selective reporting of results or inappropriate use of methods to achieve a stated or implicit objective? For example, are both significant and non-significant results reported in a balanced fashion?

f. Were confidence intervals provided in the main and subsidiary analyses?
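To illustrate the "point and interval estimates" that Principle A-5 and question (f) call for, the sketch below computes an odds ratio with a Wald-type 95% confidence interval from a 2×2 table. The cell counts are hypothetical, invented only to show the arithmetic; they do not come from any study discussed in the Principles.

```python
# Sketch: odds ratio (point estimate) with a Wald 95% confidence interval
# (interval estimate) from a hypothetical 2x2 table. Counts are illustrative.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.959964):
    """a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_est) - z * se_log)
    hi = math.exp(math.log(or_est) + z * se_log)
    return or_est, lo, hi

# Hypothetical table: 30 exposed cases, 70 exposed controls,
# 10 unexposed cases, 90 unexposed controls
or_est, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(f"OR = {or_est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Reporting the interval alongside the point estimate, as question (f) asks, conveys the statistical precision of the study's main result rather than a bare significance verdict.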

Principle A-6.
The reporting of the study should clearly identify both its strengths and limitations, and the interpretation of its findings should reflect not only an honest consideration of those factors, but also its relationship to the current state of knowledge in the area. The overall study quality should be sufficiently high that it would be judged publishable in a peer-reviewed scientific journal.

Question (answer: Yes / No / Not Known / Not Applicable)
a. Were the major results directly related to the a priori hypothesis under investigation?

b. Were the strengths and limitations of the study design, execution, and the resulting data adequately discussed?

c. Were loss to follow-up and non-response documented, and were they minimal? Has any major loss to follow-up or migration out of the study been taken into account?

d. Did the study's design and analysis account for competing causes of mortality or morbidity which might influence its findings?

e. Were contradictory or implausible results satisfactorily explained?

f. Were alternative explanations for the results seriously explored and discussed?

g. Were the Bradford Hill criteria (see Appendix B) for judging the plausibility of causation (strength of association, consistency within and across studies, dose response, biological plausibility, and temporality) applied when interpreting the results?

h. What are the public health implications of the results? For example, are estimates of absolute risk given, and is the size of the population at risk discussed?

Please continue with Part B and the Epilogue.

