Statistics: Facts or Fallacies? (Part 2 of 2)

Researchers generally have an idea of what they are looking for, and they formulate questions intended to illuminate their research, whether pro or con. Bias can creep in when a researcher unconsciously words questions in such a way that the answers support his or her contention or opinion. Questions of this type include leading questions, loaded questions, and double-barrelled questions.

Leading questions are those that tell the respondent how to answer. Attorneys sometimes use them. For example, “Is it not true that on the night of the 27th you were drunk?” Such a question leads the respondent to say yes. Asking instead, “Were you drunk on the night of the 27th?” does not tell the witness how to respond.

Loaded questions are those for which any answer puts the respondent in the wrong. “Are you still beating your wife?” and “Are you still cheating on your income tax?” are examples. A loaded question appears to call for a yes or no answer, yet the honest answer may be neither yes nor no.

Double-barrelled questions are those that ask for more than one piece of information in a single question. For example, “Do you go uptown or downtown in the afternoon?” is double-barrelled.

Another point to consider is how the questions were worded. It is easy, and often subconscious, for the questioner to word the questions in such a way as to lead the respondent to reply in a certain way. For example, a survey on whaling could ask, “Should the only three countries in the world that do so continue to slaughter to extinction the helpless, harmless, intelligent giants of the deep?” I surmise that few people would respond with a yes.

It is the answers that sometimes cause difficulty for a researcher. The problem lies not only in how the respondents answer, but in how the researcher responds to the answers. Sometimes a response is not what the researcher wants or needs, or it contradicts expectations, and he or she must then account for the anomaly: revamp the original concept or theory, revamp the study, or even ignore the data. The researcher may fall prey to selective perception (seeing only what you want to see) or cognitive dissonance (rationalizing away anything that doesn’t fit your preconceptions). In addition, how the researcher interprets the words in the questions may be at odds with how the respondents interpreted them. For example, in a recent survey on the incidence of rape on college campuses, the questions used phrases such as “unwelcome sexual advance”; the researcher counted every unwelcome sexual advance as rape, while the respondents could well have been referring to a drunk at a bar making a pass, something most people would find disgusting, but not rape.

The order of the questions can also be a problem. Questions can lead a respondent to answer in a certain way because he or she has answered all the previous questions in the same way. In sales this is a common technique: lead the customer through a series of yes answers, from “It’s a nice day” to “Sign here.”

Thus “How were they asked?” requires an examination of the original study to see whether the researcher erred in asking the questions or in interpreting the answers.

Compared with What?

Finally, you need to examine statistics to determine what comparisons are being drawn and whether they are relevant and valid. For example, say your topic is gun control. You could find statistics on per-capita handgun murder rates in New York City, London and Tokyo. Such statistics would show much higher rates in New York than in the other two cities. It would therefore appear that gun control is a good idea, since guns are controlled in London and Tokyo. However, such statistics must be suspect, not because they are wrong (more people are indeed murdered with handguns in New York City than in London or Tokyo), but because they don’t tell the whole story.
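
As an aside, the phrase “per capita” is doing real work in that comparison: raw counts mean little until they are normalized by population. The short Python sketch below simply illustrates that normalization; the function name and all the figures are made-up placeholders for illustration, not real crime data.

# A minimal sketch of per-capita normalization; every number here is a
# hypothetical placeholder, not an actual crime statistic.
def rate_per_100k(count, population):
    """Convert a raw incident count into a rate per 100,000 residents."""
    return count / population * 100_000

# Hypothetical counts and populations, purely for illustration.
cities = {
    "City A": (400, 8_000_000),
    "City B": (40, 9_000_000),
    "City C": (30, 13_000_000),
}

for name, (murders, population) in cities.items():
    print(f"{name}: {rate_per_100k(murders, population):.1f} per 100,000")

Even a correctly computed rate, of course, only makes cities of different sizes comparable; it says nothing about why the rates differ, which is the point of the paragraphs that follow.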

For instance, New York has an extremely stringent weapons-control law (the Sullivan Act), yet its handgun murder rate is still far higher than London’s or Tokyo’s. If that is the case, what happens to the argument that control laws work? There must be something else influencing the murder rate.

What about culture? The United States is unlike any other country on Earth. Its society has a tradition of independence and self-sufficiency, in which, if you have a problem, you are expected to take care of it yourself, even if you cannot. It is also a country that used to be called “the melting pot” but is now described as a “mosaic”, with New York City a patchwork of often conflicting cultures, languages, customs and attitudes. Add in the traditions of the old West, and “gunslinging” becomes an apparently viable option for solving problems. Japan, on the other hand, is an extremely homogeneous and traditional culture, with little in the way of overt class or cultural conflict. England is also very traditional, with far less cultural conflict (a country that feels no need to arm its police does not have a tradition of individual use of force to solve problems). However, as England becomes more culturally and ethnically diverse, the incidence of violence and the use of guns are rising.

From the above it is clear that such statistics on murder rates say nothing about the efficacy of gun control laws, but rather reflect the cultural and societal factors that make such laws ineffective. If you wish statistics to serve as evidence for a gun control law, find something else.

For the above reasons you must search for other evidence to corroborate whatever statistics you use, if only to show that the statistics actually apply.

Do not, however, take all the problems outlined above as a condemnation of statistics as evidence. Statistics are excellent evidence, and often the easiest and most concise way to express it. I merely wish you to be aware that you must examine them for relevance, validity and authority, or they can do you more harm than good in proving your point.
