Case - Can This Survey Be Saved?

"What's troubling me is that you can't just pick a new random sample just because somebody didn't like the results of the first survey. Please tell me more about what's been done." Your voice is clear and steady, trying to discover what actually happened and, hopefully, to identify some useful information without the additional expense of a new survey.
"It's not that we didn't like the results of the first survey," responded Steegmans, "it's that only 54% of the membership responded. We hadn't even looked at their planned spending when the decision [to sample again] was made. Since we had (naively) planned on receiving answers from nearly all of the 400 people initially selected, we chose 200 more at random and surveyed them also. That's the second sample." At this point, sensing that there's more to the story, you simply respond "Uh huh . . ." Sure enough, more follows:
"Then Eldredge had this great idea of following up on those who didn't respond. We sent them another whole questionnaire, together with a crisp dollar and a letter telling them how important their responses are to the planning of the industry. Worked pretty well. Then, of course, we had to follow up the second sample as well."
"Let me see if I understand," you reply. "You have two samples: one of 400 people and one of 200. For each, you have the initial responses and followup responses. Is that it?"
"Well, yes, but there was also the pilot study - 12 people in offices downstairs and across the street. We'd like to include them with the rest because we worked so hard on that at the start, and it seems a shame to throw them away. But all we really want is to know average spending to within about a hundred dollars."
At this point, you feel that you have enough background information to evaluate the situation and to either recommend an estimate or an additional survey. Additional details for the survey of the overall membership of 8,391, undertaken to determine planned spending over the next quarter, are provided below.
Discussion Questions
1. Do you agree that drawing a second sample was a good idea?
2. Do you agree that the followup mailings were a good idea?
3. How might you explain differences among averages in the results?
4. Should the pilot data be included? Why or why not?
5. Are any or all of the results here useful? How would you treat the data that has been collected? What additional data collection, if any, would you do?

From: Siegel, A.F. (1997). Practical Business Statistics, 3rd Edition. Irwin/McGraw-Hill.


                        Pilot Study   First Sample   Second Sample   Both Samples   All Combined

Initial Mailing
  Mailed                         12            400             200            600            612
  Responses                      12            216             120            336            348
  Average                $39,274.89      $3,949.40       $3,795.55      $3,894.45      $5,114.47
  Std. Dev.               $9,061.91        $849.26         $868.39        $858.02      $6,716.42

Followup Mailing
  Mailed                          0            184              80            264            264
  Responses                       0             64              18             82             82
  Average                         -      $1,238.34       $1,262.34      $1,243.60      $1,243.60
  Std. Dev.                       -        $153.19         $156.59        $153.29        $153.29

Initial and Followup Mailings Combined
  Mailed                         12            400             200            600            612
  Responses                      12            280             138            418            430
  Average                $39,274.89      $3,329.73       $3,465.13      $3,374.43      $4,376.30
  Std. Dev.               $9,061.91      $1,364.45       $1,179.50      $1,306.42      $6,229.77
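As a quick arithmetic check on the table, each "combined" figure is simply the count-weighted mean of the subgroup averages it pools. The short Python sketch below (the function name pooled_mean is ours, chosen for this illustration) reproduces a few of the combined averages from the subgroup counts and averages above.

```python
def pooled_mean(counts, means):
    """Count-weighted mean of subgroup means."""
    total = sum(counts)
    return sum(n * m for n, m in zip(counts, means)) / total

# "Both Samples" initial mailing = first sample + second sample initial responses
print(pooled_mean([216, 120], [3949.40, 3795.55]))   # ~3894.45, matches the table

# "Both Samples" combined = initial respondents + followup respondents
print(pooled_mean([336, 82], [3894.45, 1243.60]))    # ~3374.43, matches the table

# "All Combined" initial mailing = pilot study + both random samples
print(pooled_mean([12, 336], [39274.89, 3894.45]))   # ~5114.47, matches the table
```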

1. Do you agree that drawing a second sample was a good idea?

Drawing a second sample can be a good idea in certain situations. In this case, the second sample was drawn because only 54% of the first sample (216 of the 400 people mailed) responded, not because of anything in the results themselves. By choosing 200 more members at random, the aim was to increase the number of responses and, potentially, the representativeness of the sample. Whether that was a good idea depends on the specific circumstances and the goals of the survey.

To evaluate if drawing a second sample was a good idea, you can consider the reasons behind the low response rate and whether there were any biases in the initial sample. Additionally, it's important to assess if the second sample results addressed the limitations of the first sample and if it provided additional insight into the research question or problem at hand.

2. Do you agree that the follow-up mailings were a good idea?
Sending follow-up mailings to those who didn't respond to the initial survey can also be a good idea in some cases. In this situation, sending another questionnaire along with a dollar and a letter emphasizing the importance of their responses seemed to have worked relatively well.

To determine if the follow-up mailings were a good idea, you should assess the response rate and the quality of the responses received from the follow-up mailings. If the response rate improved significantly and the responses provided valuable insights, then it could be considered a successful strategy. However, it's important to note that follow-up mailings can add additional costs and may not always lead to significant improvements in response rates or data quality.

3. How might you explain differences among averages in the results?
Differences among averages can be explained by factors such as sample size, sampling method, response rates, and biases in who chose to respond. In this case, averages are reported for the pilot study, for each random sample, for the initial and followup mailings, and for the combined groups.

Some of the difference is ordinary sampling variation: smaller samples produce less precise estimates, so the first and second samples would not be expected to agree exactly. The larger gaps, however, point to systematic differences in who responded. The followup averages (around $1,240) are far below the initial-mailing averages (around $3,900), which suggests non-response bias: members planning to spend less were less likely to answer the first time. The pilot average ($39,274.89) is roughly ten times either figure, reflecting a convenience sample rather than a random one.

To judge how much of a difference could be due to chance alone, compare it with the standard errors of the means involved; the standard deviations and counts in the table make that calculation straightforward.
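For example, a rough way to judge whether two averages differ by more than sampling variation is to compare their gap with the combined standard error of the two means. The sketch below does this for the first sample's initial and followup averages; it is only indicative, since it ignores finite-population corrections and leans on the usual normal approximation.

```python
from math import sqrt

def standard_error(sd, n):
    """Standard error of a sample mean."""
    return sd / sqrt(n)

# First sample: initial mailing vs. followup mailing (figures from the table)
se_initial  = standard_error(849.26, 216)   # about $58
se_followup = standard_error(153.19, 64)    # about $19

gap = 3949.40 - 1238.34                     # about $2,711
se_gap = sqrt(se_initial**2 + se_followup**2)

print(f"gap = {gap:.2f}, z = {gap / se_gap:.1f}")
# z comes out around 44, so the gap between initial and followup respondents
# is far too large to be sampling noise alone.
```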

4. Should the pilot data be included? Why or why not?
The decision to include pilot data depends on the specific circumstances and the goals of the survey. In this case, the pilot study included 12 people from offices downstairs and across the street.

Including the pilot data can be beneficial if it provides valuable insights, helps to refine the survey methodology, or adds to the understanding of the research question. However, it's important to consider the representativeness of the pilot data. If the 12 people from the pilot study are not representative of the overall population, including them in the analysis could introduce biases.

To determine whether the pilot data should be included, compare the pilot sample with the random samples. If they were similar, including the pilot would add a little information at no cost. Here, however, the pilot consisted of 12 people in nearby offices chosen for convenience, and its average ($39,274.89) is roughly ten times the averages from the random samples; including it would pull the combined estimate sharply upward without making it any more representative, so it is more appropriate to exclude it from the estimate.

5. Are any or all of the results here useful? How would you treat the data that has been collected? What additional data collection, if any, would you do?
The usefulness of the results depends on the specific goals of the survey and the research question at hand. In this case, the results include averages and standard deviations for each sample as well as the combined samples.

To treat the collected data, you can perform various statistical analyses such as calculating confidence intervals, conducting hypothesis tests, or exploring relationships between variables. These analyses can provide further insights and help draw conclusions based on the data.
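As one concrete illustration of such an analysis (not an endorsement of these particular numbers, since the non-response pattern may bias them), a 95% confidence interval for mean planned spending based on the combined random samples, with a finite-population correction for the membership of 8,391, might be sketched as follows.

```python
from math import sqrt

# Figures from the "Both Samples" column, initial and followup mailings combined.
n, mean, sd = 418, 3374.43, 1306.42
N = 8391                       # total membership

# Standard error with finite-population correction
se = (sd / sqrt(n)) * sqrt((N - n) / (N - 1))

half_width = 1.96 * se         # normal approximation for a 95% interval
print(f"95% CI: {mean:.2f} +/- {half_width:.2f}")
# Roughly $3,374 +/- $122 -- already wider than the $100 target mentioned in
# the case, even before worrying about non-response bias.
```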

Regarding additional data collection, it depends on the unanswered questions, the level of precision required, and the available resources. If there are significant gaps in the data or if the desired level of precision has not been achieved, additional data collection might be necessary. This could involve increasing the sample size, targeting specific subgroups, or using different survey methods to address potential biases.

1. Do you agree that drawing a second sample was a good idea?

Based on the information provided, drawing a second sample was probably not a good idea. The decision to sample again was made because only 54% of the first sample responded, and it was made before anyone had looked at the results, so it reflected a low response rate rather than dissatisfaction with the answers. A second random sample, however, is subject to the same non-response problem as the first; without understanding why people failed to respond, it adds cost without addressing the underlying issue or producing more reliable results.

2. Do you agree that the follow-up mailings were a good idea?

The follow-up mailings can be considered a good idea in the context of increasing response rates. Sending another whole questionnaire, along with a monetary incentive and a letter emphasizing the importance of their responses, helped to motivate some individuals who did not initially respond to participate. This approach showed some success in increasing response rates and obtaining additional data.

3. How might you explain differences among averages in the results?

The differences among averages can be attributed to factors such as sample size, sampling bias, non-response bias, and the characteristics of the individuals who responded. The pilot study, first sample, second sample, and followup groups each capture a different mix of members, which shows up as variation in average spending; in particular, the much lower followup averages point to non-response bias in the initial mailings. Smaller samples also yield less precise averages, adding ordinary sampling variation on top of these systematic differences.

4. Should the pilot data be included? Why or why not?

Including the pilot data can be beneficial if it provides valuable insights or if it helps increase the overall sample size and representativeness of the data. However, it is important to evaluate the methodological rigor of the pilot study and its compatibility with the main survey. If there are concerns about the quality or validity of the pilot data, it might be more appropriate to exclude it and focus on the larger samples.

5. Are any or all of the results here useful? How would you treat the data that has been collected? What additional data collection, if any, would you do?

The results obtained from the various samples and follow-up mailings can be useful in providing some insights into planned spending. However, it is important to consider the limitations of the data, such as potential sampling biases and variations in response rates. To treat the collected data, it would be beneficial to analyze each sample separately to understand the characteristics and spending patterns specific to each group. It may also be necessary to adjust for biases and take into account the differences in response rates.
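One simple way to take the response rates into account is to treat the followup respondents as stand-ins for everyone who did not answer the initial mailing, and weight the two groups by how many members each represents. This rests on a strong assumption (that the remaining non-respondents resemble the late respondents), so the sketch below is illustrative only; the function name is ours.

```python
def nonresponse_adjusted_mean(n_initial, mean_initial, n_nonresp, mean_followup):
    """Weight initial respondents by their count and use the followup average
    as a proxy for everyone who did not respond to the initial mailing."""
    total = n_initial + n_nonresp
    return (n_initial * mean_initial + n_nonresp * mean_followup) / total

# Both random samples: 336 initial respondents (avg $3,894.45) and
# 264 initial non-respondents, proxied by the followup average ($1,243.60).
print(nonresponse_adjusted_mean(336, 3894.45, 264, 1243.60))
# Roughly $2,728 -- well below both the initial-only average and the simple
# pooled average, which is the direction non-response bias would suggest.
```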

If additional data collection is required, it would be advisable to conduct a new survey with a more rigorous sampling strategy, ensuring a higher response rate and improved representativeness. This would provide a more accurate and reliable estimate of planned spending within the industry.
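If a new survey were commissioned, the number of completed responses needed for the stated target of roughly $100 can be sketched with the usual margin-of-error calculation, using the combined-sample standard deviation as a rough planning value and interpreting "within about $100" as a 95% margin of error. Both choices are assumptions, so treat the result as a planning figure rather than a requirement.

```python
from math import ceil

def required_sample_size(sd, margin, N, z=1.96):
    """Sample size for a target margin of error, with finite-population correction."""
    n0 = (z * sd / margin) ** 2          # infinite-population requirement
    return ceil(n0 / (1 + n0 / N))       # adjust for a membership of N

# Planning value: sd of the combined random samples; target: +/- $100.
print(required_sample_size(1306.42, 100, 8391))
# A bit over 600 completed responses -- completed, not merely mailed, which is
# the real difficulty given the response rates seen so far.
```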