Abstract
Objective: Physician-specific surveys are a frequently used tool in health services research, but attempts at ensuring adequate response rates are rarely reported. We reviewed the literature on survey methodology specific to physician surveys and report the techniques found to be most effective.
Data Sources: Studies were identified by searching MEDLINE and PsycINFO from 1967 through February 1999. We included all English-language studies that randomized physicians targeted for surveys to an experimental or a control group. The authors independently extracted data from 24 studies examining survey methodology of physician-specific surveys. We included Mantel–Haenszel chi-squares comparing treatment groups when they were reported; when they were not, we calculated them from study data.
Results: Pre-notification of survey recipients, personalizing the survey mailout package, and nonmonetary incentives were not associated with increased response rates. Monetary incentives, the use of stamps on both outgoing and return envelopes, and short questionnaires did increase response rates. Few differences were reported in response rates of phone surveys compared with mail surveys and between the demographics and practice characteristics of early survey respondents and late respondents.
Conclusions: We report some simple approaches that may significantly increase response rates of mail surveys. Surprisingly, the response rates of mail surveys of physicians compared favorably with those from telephone and personal interview surveys. Nonresponse bias may be of less concern in physician surveys than in surveys of the general public. Future research steps include specifically testing the more compelling results to allow for better control of confounders.
Introduction
Physicians play a key role in the rapidly changing health care and public health system, and it is essential to study their attitudes, beliefs, behaviors, and concerns. One of the most effective ways of doing this is through the use of surveys. Surveys of physicians are widely used for eliciting opinions on issues affecting practice,1 the delivery of clinical preventive services,2 implementation of public health interventions,3 the changing environment of medicine in the era of managed care,4 ethics,5 prevention efforts,6,7 and other topics. Epidemiologists, health administrators, and other public health professionals rely on mailed questionnaires to obtain data from physicians.8
However, low response rates to physician surveys are common, which could seriously impair the validity and generalizability of results.9,10,11,12,13 For example, one study examining response rates among 321 mail surveys published in medical journals over a one-year period showed that nonphysician surveys had a mean response rate of 68% compared to 54% in physician surveys.14
These circumstances have motivated investigators to consider the differences between the general public and physicians regarding survey response, and to specifically examine the best ways to survey physicians so as to maximize response rates and minimize nonresponse bias.
Mail surveys of highly educated, professional persons (e.g., physicians) should theoretically elicit higher response rates than those of less educated respondents. Overall, such persons (1) can read the survey and (2) are more likely to have considered the issues covered in the survey and to have formulated their opinions in advance.11 Alternatively, some professionals may resist surveys or questions that (1) stereotype or generalize issues, (2) are restrictive (i.e., multiple-choice questions), (3) do not make sense to them, and (4) take too much time out of an already overburdened schedule.5
Although abundant research exists regarding the way in which the general population responds to surveys,10,16,17,18,19,20,21,22 there are relatively few studies examining how and why physicians respond. In addition, 6 of the 24 studies reviewed in this paper were not available on MEDLINE (i.e., they were found on PsycINFO) and thus may not be readily available to researchers planning physician surveys. To assess techniques purported to increase the response rates of physicians, we reviewed the peer-reviewed literature and examined methods used to increase response rates to physician-specific surveys. While the limited number of studies and the difficulty of comparing across studies preclude universal conclusions, the methodologic issues raised in this review can be considered by researchers who conduct surveys of physicians.
Methods
We searched the MEDLINE and PsycINFO databases from 1967 through February 1999 to identify published research on survey methodology specific to physicians. The references of all retrieved articles were screened for further citations. The search was limited to articles in English. MEDLINE/PsycINFO index terms searched included “questionnaire,” “survey,” and “physician.” “Methodology” and “questionnaire” were also searched using the text-word option in MEDLINE.
Only randomized, controlled studies whose primary objective was to examine factors influencing response rates among physicians targeted for surveys were included in this review. No attempts to find unpublished reports of survey methods were made. All surveys in which researchers tested different methodologies were of an academic nature and were not conducted for, nor sponsored by, commercial interests.
If the authors of cited papers reported chi-square and p values comparing experimental treatments, we included them in our analysis. If not, we determined the numbers of respondents and nonrespondents from the text of the paper and calculated a Mantel–Haenszel chi-square or a chi-square for trend comparing the different treatments using Epi-Info, version 6.04a.23
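As an illustration of this calculation, the following minimal sketch (written in Python rather than Epi-Info, and not the software used in this review) shows how a Mantel–Haenszel chi-square for a single 2×2 table can be obtained from counts of respondents and nonrespondents in two treatment arms. The counts shown are reconstructed from the rounded percentages reported by Shiono et al. for stamped versus metered return envelopes (Table 1) and are therefore approximate.

```python
# A minimal sketch (not the authors' code) of the Mantel-Haenszel chi-square
# for a single 2x2 table: the uncorrected Pearson chi-square multiplied by
# (N - 1) / N. Counts are reconstructed from rounded percentages in Table 1.
from scipy.stats import chi2


def mantel_haenszel_chi2(resp_a, total_a, resp_b, total_b):
    """Return the Mantel-Haenszel chi-square and p-value for a 2x2 table."""
    a, b = resp_a, total_a - resp_a  # arm A: respondents, nonrespondents
    c, d = resp_b, total_b - resp_b  # arm B: respondents, nonrespondents
    n = a + b + c + d
    pearson = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    mh = (n - 1) / n * pearson
    return mh, chi2.sf(mh, df=1)  # p-value on 1 degree of freedom


# Stamped return envelope: ~71% of 5032; metered return envelope: ~68% of 5015
chi_sq, p = mantel_haenszel_chi2(3573, 5032, 3410, 5015)
print(f"chi-square = {chi_sq:.2f}, p = {p:.4f}")  # ~10.7, p ~ 0.001, as in Table 1
```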
Results
We found 24 published studies that examined methods to increase physicians’ response rates to mail, telephone, or personal interview surveys. These studies evaluated techniques to improve response rates, compared the efficacy of mail versus telephone surveys and of mail versus personal interview surveys, and examined the effect of nonresponse bias in physician surveys.
Five papers examined general attempts to increase response rates to the first mailing of a survey (Table 1). Pre-notification of survey participants advising that a questionnaire would be arriving soon did not significantly increase response rates.24 Strategies found to improve response rates included the use of first-class postage (on the original mailing envelope) compared with bulk mail-outs,25 the use of a stamp versus a metered frank or a business reply return envelope,24,26 the use of shorter (i.e., one- to two-page) surveys,27 and personalized packaging of the mail-outs.28
Table 1. Methods used to increase response rates to the first mailing of a physician survey, Mantel–Haenszel χ2 and p-value
Study | Method studied | Percent responding (total sample size) | Mantel–Haenszel χ2, p-value |
---|---|---|---|
Notification | |||
Shiono et al.24 | Prenotification letter | 69% (5018) | χ2=2.68, p=0.102 |
No prenotification letter | 70.5% (5029) | ||
Postage | |||
Shiono et al.24 | Stamped return envelope | 71% (5032) | χ2=10.73, p=0.001 |
Metered return envelope | 68% (5015) | ||
Urban et al.26 | Stamped return envelope | 84% (183) | χ2=5.29, p=0.02 |
Business reply mail return envelope | 72% (197) | ||
Gullen et al.25 | Personalized mailout, first-class metered postage | 57% (382) | χ2=0.75, p=0.386 |
Personalized mailout, stamped postage | 54% (380) | ||
Gullen et al.25 | Nonpersonalized mailout, first-class metered postage | 45% (771) | χ2=9.98, p=0.001 |
Nonpersonalized mailout, bulk-mail postage | 38% (1115) | ||
Survey length | |||
Cartwright27 | Short (<2 pages) | 92% (115) | χ2=12.15, p=0.0005 |
Long (>2 pages) | 75% (117) | ||
Packaging | |||
Asch et al.28 | Veterans affairs hospital envelope | 41% (449) | χ2=4.58, p=0.03 |
University medical school envelope | 34% (452) |
a Percentage responding of all individuals assigned to each experimental treatment.
b Bold type in rightmost column indicates statistically significant results.
Interventions to increase response rates after the first mailing were investigated in five papers (Table 2). No differences in response rates were noted among physicians receiving a nonmonetary incentive versus a follow-up phone call on the third mailing29; those receiving a follow-up phone call versus a personalized note on the second mailing30; or those receiving a personalized thank-you note versus no note on the second mailing.8 Alternatively, a personalized mail-out package on the third mailing (i.e., a typed vs computer-printed mail label and a postage stamp vs a metered frank on the outgoing and return envelopes)8; the inclusion of a replacement questionnaire on subsequent mailings31; and the use of certified mail with a return receipt on the third mailing32 all significantly increased response rates.
Table 2. Interventions on the second and third mailings of physician surveys, Mantel–Haenszel χ2 and p-value
Study | Number of mailing | Type of intervention | Percent responding (sample size) | Mantel–Haenszel χ2, p-value |
---|---|---|---|---|
Incentive | ||||
Sallis et al.29 | 3rd mailing | Incentive (pencil printed with an attractive design) | 30% (56) | χ2=0.4, p=0.528 |
Phone call follow-up | 25% (56) | |||
Packaging | ||||
Ogborne et al.30 | 2nd mailing | Personalized note | 38% (39) | χ2=0.22, p=0.639 |
Telephone follow-up | 33% (39) | |||
Maheux et al.8 | 2nd mailing | Personalized thank-you note | 33% (170) | χ2=2.54, p=0.111 |
No personalized thank-you note | 25% (186) | |||
Maheux et al.8 | 3rd mailing | Personalized mail-out package | 46% (126) | χ2=4.43, p=0.035 |
No personalized mail-out package | 33% (127) | |||
Reminder | ||||
Vogel et al.31 | 3rd mailing | Inclusion of replacement questionnaire | 24% (137) | χ2=3.94, p=0.05 |
Reminder letter only | 14% (137) | |||
Postage | ||||
Del Valle et al.32 | 3rd mailing | Certified mail with return receipt | 41% (205) | χ2=47.96, p<0.0001 |
First-class postage | 25% (204) |
a Percent of initial nonrespondents returning a survey after the stated intervention.
b Bold type in rightmost column indicates statistically significant results.
The use of incentives to increase response rates on the first mailing of a survey was examined in eight separate studies (Table 3). Monetary incentives, regardless of the amount, were consistently associated with increased response rates,33,34,35,36,37 as was the timing of such incentives, prepayment being preferable to postpayment.38 The two nonmonetary incentives reported here (i.e., a free pencil and educational software) did not affect response rates.29,39
Table 3. Use of incentives on the first mailing of a physician survey, Mantel–Haenszel χ2 and p-value
Study | Type of incentive | Percent responding (sample size) | Mantel–Haenszel χ2, p-value |
---|---|---|---|
Tambor et al.33 | $25 incentive and 10 category I CME credits | 65% (1759) | χ2=243.9, p=0.0001 |
No incentive | 20% (352) | ||
Gunn et al.34 | $50 prepayment | 77% (145) | χ2 for trend=12.36, |
$25 prepayment | 69% (147) | p=0.0004 | |
No incentive | 58% (137) | ||
Asch et al.35 | $5 prepayment | 61% (484) | χ2=22.72, p=0.0001 |
$2 prepayment | 46% (482) | ||
Easton et al.36 | $1 prepayment | 79% (305) | χ2=14.93, p=0.001 |
Information booklet on survey topic | 64% (189) | ||
Everett et al.37 | $1 prepayment | 63% (300) | χ2=19.53, p=0.0001 |
No incentive | 45% (300) | ||
Berry et al.38 | $20 prepayment, before survey completed | 78% (1011) | χ2=36.75, p=0.0001 |
$20 postpayment, after survey completed | 66% (1017) | ||
Sallis et al.29 | Incentive (pencil printed with an attractive design) | 32% (41) | χ2=2.35, p=0.125 |
No incentive | 17% (41) | ||
Bonito et al.39 | Incentive (educational software) | 67% (1186) | χ2=0.00, p=0.98 |
No incentive | 67% (315) |
a Percentage responding of all individuals assigned to each experimental treatment.
b Bold type indicates statistically significant results.
CME, continuing medical education.
Three studies comparing phone with mail surveys or personal interviews with mail surveys were reviewed (Table 4). Phone surveys yielded slightly higher (but not statistically significantly higher) response rates than an identical mail survey.40,41 Cost estimates suggest that mail surveys are much less expensive than telephone surveys. Personal interview response rates were significantly higher than mail survey response rates, but the difference was small.27
Table 4. Comparison of response rates between mail and telephone or personal interview surveys of physicians, with cost estimates per completed survey, Mantel–Haenszel χ2 and p-value
Study | Survey type | Cost estimates total (per completed survey) | Percent responding (sample size) | Mantel–Haenszel χ2, p-value |
---|---|---|---|---|
Shosteck et al.40 | Phone | $18,692 ($63) | 74% (247) | χ2=1.15, p=0.284 |
$7,030 ($24) | 70% (296) | |||
Pendleton et al.41 | Phone | $416 ($2.74) | 68% (223) | χ2=0.32, p=0.569 |
$320 ($1.03) | 66% (500) | |||
Cartwright et al.27 | Personal interview | N/A | 76% (702) | χ2=4.35, p=0.037 |
Mail | N/A | 72% (1917) | |
a Percentage responding of all individuals assigned to each experimental treatment.
b Bold type indicates statistically significant results.
To understand the role of nonresponse bias in physician surveys, five papers were reviewed that examined the demographics of survey respondents based on their time of response (Table 5). In these studies, late respondents (i.e., nonrespondents to the first mailing of a survey) were considered proxies for nonrespondents, a standard technique for assessing survey representativeness42 (a comparison of this kind is sketched after Table 5). Two studies reported minor differences in medical practice variables (e.g., number of years in practice).9,43 However, demographic variables (e.g., income, area and type of practice, physician gender and age) of early and late survey respondents did not differ appreciably in the reviewed studies.43,44,45,46
Table 5. Variability of demographic variables among respondents based on time of response to first versus subsequent mailings
Study | Type of physician surveyed | 1st mailing, % responding (n) | 2nd mailing, % responding (n) | Overall, % responding | Difference between first and second mailing respondents |
---|---|---|---|---|---|
Sobal et al.43,a | Practicing family physicians regarding residency length and hospital privileges | 63% (38) | 46% (114) | 80% | Early respondents more likely to live in a suburban versus an urban or rural area |
Family practice residency directors | 79% (383) | 50% (79) | 89% | No differences in demographic variables |
Third-year family practice residents | 55% (319) | 46% (145) | 76% | No differences in demographic variables |
Guadagnoli et al.44,b | Primary care physicians regarding their perceptions of issues affecting adults with cancer | 35% (364) | 35% (236) | 63% | No differences in demographic variables |
Gough et al.45,c | Practicing physicians previously administered psychological tests while in medical school | 60% (1119) | 32% (449) | 81% | No differences in demographic variables |
Hovland et al.46,d | Knowledge of drug prices and attitudes towards basic science curriculum among dental school graduates | 48% (400) | 93% (207) | 96% | No differences in demographic variables |
Berk9,e | Data from Physicians Practice Survey | 26% (6685) | 31% (4923) | 49% (3406, 3rd mailing) | Early respondents’ annual income slightly higher than later respondents |
a Demographic variables include age, race, sex, income, state and area of practice, and size of community in which they practiced.
b Demographic variables include age, sex, specialty, board certification, and year of medical school graduation.
c Demographic variables include age, sex, performance in medical school, specialty, area of practice, number of siblings, respondent’s family background including parents’ education level, social class and responses to limited psychological tests; respondent’s scores on various psychological tests performed prior to entrance into medical school.
d Demographic variables include age, sex, race, class rank upon graduation from dental school, year of graduation, practice location, type of practice.
e Demographic variables include age, sex, income, board certification, specialty, foreign vs American medical school graduate.
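To illustrate the wave-analysis technique referred to above, the following sketch cross-tabulates a demographic variable against response wave and tests for independence. The counts are hypothetical and are not drawn from any of the reviewed studies; the sketch shows only the general form of the comparison.

```python
# A hypothetical illustration (not data from the reviewed studies) of the
# wave-analysis technique: cross-tabulate a demographic variable against
# response wave and test for independence. Similar distributions across
# waves are taken as indirect evidence against nonresponse bias.
from scipy.stats import chi2_contingency

# Rows: response wave; columns: practice location (urban, suburban, rural).
# All counts are invented for illustration only.
observed = [
    [120, 95, 40],  # respondents to the 1st mailing (early respondents)
    [45, 50, 20],   # respondents to later mailings (proxy for nonrespondents)
]

chi_sq, p_value, dof, _ = chi2_contingency(observed)
print(f"chi-square = {chi_sq:.2f}, df = {dof}, p = {p_value:.3f}")
# A large p-value would be consistent with the reviewed studies' finding
# that early and late respondents have similar characteristics.
```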
Discussion
We reviewed the available published literature and found several simple interventions that may increase response rates to physician surveys. Although the number of studies dealing directly with strategies to improve physician survey response is small compared with the body of research on surveying the general population, the studies we reviewed are of high quality, and their similarity to findings from general-population survey research supports our conclusions.
The strategies we found effective in these studies have also been proven effective in surveys of the general population. The positive effect of shorter questionnaire length (one or two pages), type of postage, and other mailing characteristics on response rates has been well established in studies of the general population.10,16,17,18 In addition, there is extensive evidence of an effect of prepaid monetary incentives on general survey response, although the actual psychological mechanism at work is not well understood.19,20
Although similarities between physicians and the general public regarding survey response are informative and reassuring, the areas where differences exist may be more useful in understanding how to improve response rates among physicians. For example, mail surveys of the general population often employ three or more follow-ups to increase response rates and to reduce potential nonresponse bias.21 Studies in this review found little difference in demographic variables (e.g., income, area and type of practice, physician gender and age) among respondents to first mailings, respondents to subsequent mailings, and late respondents (considered to be a proxy for nonrespondents by several researchers).
There are plausible reasons why responding and nonresponding physicians have similar characteristics. Physicians as a group are more homogeneous regarding knowledge, training, attitudes, and behavior than the general population. Variations that do exist among physicians may not be as strongly associated with willingness to respond or with survey content as they are in the general population. This finding, combined with the consistently positive effect of a monetary incentive, suggests that limited resources might be shifted from follow-up mailings to a sufficient monetary incentive in the first mailing. Furthermore, the finding has implications for the interpretation of data gathered from physician surveys with low response rates; that is, nonresponse bias may not be as crucial in physician surveys as in surveys of the general population.
Surprisingly, minimal differences in response rates between interviewer-administered and self-administered surveys were observed, contradicting the long-held belief that telephone surveys obtain higher response rates than mail surveys.21 The importance of an interviewer, via telephone or in person, is to explain the value of the study, to increase the difficulty of refusal, and to explain questions to persons of low literacy level, among other reasons.21,22,47 The mailed questionnaire is recommended when the respondent needs greater control over the time, pace, and sequence of response; when privacy of response is important; and when the sample is a highly literate population.10,47 Most physicians are intelligent, self-sufficient, and accustomed to making quick and autonomous decisions in their daily work. Thus, a physician’s decision to participate may not be readily influenced by an interviewer. It should be noted, however, that data quality may vary substantially with survey methodology.
All surveys must distribute fixed resources among survey design, data collection, and data analysis to maximize valid and representative data from the target population. Measures that can lead to cost savings in physician surveys are the use of a mail survey instead of an interviewer-assisted survey; a reduction in the number of follow-up mailings; and the use of a short survey instrument. Cost savings in these areas could be used for techniques proven to increase response rates, such as monetary incentives and the use of first-class postage stamps on the mail-out and return envelopes.
Clearly, more research on methodology for physician surveys is needed. Not only are the studies in this review limited in number, but they are also limited by the number of methods tested in a single study. Because so many factors can influence survey response, including a number not addressed by this review (e.g., questionnaire content, survey topic, question structure), comparing methodologic effects across studies is risky. The next step in physician survey research is to combine tests of the more compelling methodologic results into a single study to allow for better control of confounders. In addition, the studies described in this review should be replicated, particularly those with small sample sizes and those whose findings run contrary to findings for the general population. Finally, the findings of similar response rates in mail and telephone surveys and of nonrespondents being similar to respondents have major implications for survey research, not only for physicians but also for other homogeneous, highly educated populations faced with substantial time constraints.15
References
- Curbside consultation practices and attitudes among primary care physicians and medical subspecialists. JAMA. 1998; 280: 905-909
- Self-report of delivery of clinical preventive services by U.S. physicians. Am J Prev Med. 1999; 17: 62-72
- Implementing breast cancer screening guidelines. Am J Prev Med. 1997; 13: 143-149
- Managed care and physicians’ provision of charity care. JAMA. 1999; 281 (published erratum appears in JAMA 1999;282:943): 1087-1092
- Medical students’ attitudes toward physician-assisted suicide. JAMA. 1999; 282: 2080-2081
- Tuberculosis screening in private physicians’ offices, Pennsylvania, 1996. Am J Prev Med. 1999; 16: 178-181
- Skin cancer screening by Australian family physicians. Am J Prev Med. 1999; 17: 142-146
- Increasing response rates in physicians’ mail surveys. Am J Public Health. 1989; 79 (see comments): 638-639
- Interviewing physicians. Am J Public Health. 1985; 75: 1338-1340
- Survey research methods. 2nd ed. Sage Publications, Inc., Newbury Park, CA, 1993
- Mail surveys of reluctant professionals. Eval Rev. 1985; 9: 349-360
- Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997; 50: 1129-1136
- Physicians’ reactions to a mailed questionnaire. Public Opinion Q. 1956; 20: 599-604
- Don’t survey physicians! Center for Health Services Research and Development, American Medical Association, Chicago, IL, 1974
- Not another questionnaire! Eliciting the views of general practitioners. Fam Pract. 1995; 12: 335-338
- Stimulating responses to mailed questionnaires. Public Opinion Q. 1975; 39: 82-101
- Mail and telephone surveys. John Wiley and Sons, New York, 1978
- Questionnaire construction and item writing. In: Rossi P.H., Wright J.D., Anderson A.B., eds. Handbook of survey research. Academic Press, London, UK, 1983: 195-230
- Mail survey response rate. Public Opinion Q. 1988; 52: 467-491
- The effect of monetary incentives and follow-up mailings on the response rate and the response quality in mail surveys. Public Opinion Q. 1990; 54: 346-361
- How to conduct your own survey. John Wiley and Sons, New York, 1994
- Response effects. In: Rossi P.H., Wright J.D., Anderson A.B., eds. Handbook of survey research. Academic Press, London, UK, 1983: 195-230
- Epi Info, version 5.01b. Centers for Disease Control, Atlanta, GA, 1990
- The effect of two mailing strategies on the response to a survey of physicians. Am J Epidemiol. 1991; 134: 539-542
- Factors influencing physicians’ response to mailed questionnaires. Health Serv Rep. 1973; 88: 510-514
- Effects on response rates and cost of stamps vs. business reply in a mail survey of physicians. J Clin Epidemiol. 1993; 46: 455-459
- Professionals as responders. Br Med J. 1978; 2: 1419-1421
- Different response rates in a trial of two envelope styles in mail survey research. Epidemiology. 1994; 5: 364-365
- Increasing returns of physician surveys. Am J Public Health. 1989; 74: 1043
- Dealing with non-respondents in a mail survey of professionals. Eval Health Professions. 1986; 9: 121-128
- Impact of replacement questionnaire on the response rate of practicing physicians to mail questionnaire. J Med Educ. 1983; 58: 905
- A randomized trial of the impact of certified mail on response rate to a physician survey, and a cost-effectiveness analysis. Eval Health Professions. 1997; 20: 389-406
- Improving response rates through incentive and follow-up. Am J Public Health. 1993; 83: 1599-1603
- Physician response rates to a telephone survey. Public Opinion Q. 1981; 45: 109-115
- Conducting physician mail surveys on a limited budget. A randomized trial comparing $2 bill versus $5 bill incentives. Med Care. 1998; 36: 95-99
- An informational versus monetary incentive in increasing physicians’ response rates. Psychol Rep. 1997; 81: 968-970
- The effect of a monetary incentive in increasing the return rate of a survey to family physicians. Eval Health Professions. 1997; 20: 207-214
- Physician response to a mailed survey. Public Opinion Q. 1987; 51: 102-114
- Use of a non-monetary incentive to improve physician responses to a mail survey. Acad Med. 1997; 72: 73
- Physician response rates to mail and personal interview surveys. Public Opinion Q. 1979; 43: 206-216
- Studying medical opinion. Community Med. 1987; 9: 25-34
- Assessing sample representativeness in surveys of physicians. Eval Health Professions. 1990; 13: 367-372
- Comparing physicians’ response to the first and second mailings of a questionnaire. Eval Health Professions. 1989; 12: 329-339
- The effects of nonresponse and late response on a survey of physician attitudes. Eval Health Professions. 1989; 12: 318-328
- A comparison of physicians who did or did not respond to a postal questionnaire. J Appl Psychol. 1977; 62: 777-780
- Nonresponse bias to mail survey questionnaires within a professional population. J Dent Educ. 1980; 44: 270-274
- Assessing sample representativeness in surveys of physicians. Eval Health Professions. 1990; 13: 364-366
Copyright
© 2001 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.