Chapter 3: Methods of Research
Section 1: Positivism and Quantitative Methods
It’s time to talk about methods of research!
No, look. We have to talk about methods of research. We have to get an understanding of how sociologists know what they know. Sociologists don’t just sit around thinking about society and writing whatever pops into our heads. There’s a process that differentiates sociology as a science from just ordinary, run-of-the-mill social commentary that anyone with a blog can do!
Not that there’s anything wrong with having a sociology blog!
Understanding methods of research is of especial importance in this day of alternative facts and fake news. How do you know the fake news from the real news, the alternative facts from the…well…factual facts? There is a difference, and understanding how sociologists do their work can help you in sifting through the tsunami of information that marks our age.
This chapter will focus on methods of research. This section will cover traditions of sociological exploration and quantitative methods of research. The next section will discuss qualitative methods of research. The third section will address some ethical concerns in doing sociological research.
Data and Theory
In order for sociologists to draw conclusions, their observations must be grounded and supported in two ways, with data and with theory. First, let’s talk about data. Data is factual information that the sociologist can subject to analysis. By factual, the scientist means that the data can be confirmed. In other words, it can be tested for validity and reliability. The sociologist can gather this data in two ways.
First, she can go out and get the data herself. This is called primary data, or data that is collected personally by the researcher. Doing this is beneficial because the researcher has control of the data collecting process. She can use methods of research that are specific to her population and research setting. On the other hand, collecting primary data is time consuming and possibly expensive. The researcher may also have difficulty gaining access to her research subjects. If she wants to do research on, say, drug dealers, she may not be able to get direct access to people who are willing to admit to dealing drugs.
Another option is for the researcher to use secondary data. This is data that has already been collected by other researchers, research agencies, or government agencies that collect official statistics. Case studies and documentaries are also examples of secondary data. Many universities, academic departments, and research institutes keep valuable datasets that are available to researchers. This is a huge time saver and even a money saver. On the other hand, data collected by one group of researchers may not quite fit the needs of another researcher in a different research setting. Also, by using data that others have collected, you are bound by the limitations of that research.
Theory is the next element of sociological research that really needs to be elaborated. In my mind, the word “theory” is one of the most misused words, at least in American culture. Many Americans use the word theory as a synonym for a guess, concept or personal opinion. Some believe that there’s some kind of hierarchy involved in knowing stuff. First you start off with a guess, then form a hypothesis. When the hypothesis is confirmed it becomes a theory. When the theory is confirmed it becomes a fact. These folks have the process very confused. Theories don’t ever become facts. Hypotheses do not become theories. Theories, hypotheses and facts are all different tools in the research process that really need to be understood.
This confusion has serious consequences. In the United States some people argue that evolution or global warming are just theories in the context of guesses or opinions. Since they are only theories, they don’t have to be taken seriously until they become facts. Until then, these mere theories are really no more relevant than any other opinion. This is not how the scientific process works. Evolution is a theory that has been tested and retested for a hundred and fifty years. Global warming is a theory that really does describe the climate phenomena that the world is experiencing. These theories need to be taken seriously because they carry with them real consequences for real life. They should not be rejected simply because people don’t understand what a theory is.
For something to be a theory it must satisfy two criteria. First, a theory must explain a given phenomenon. In other words, it must be valid. Secondly, one must be able to use a theory to make predictions about hypothetical outcomes. A researcher should be able to use a theory about a particular phenomenon to formulate a hypothesis. This hypothesis can then be subject to testing and retesting. If the testing confirms the hypothesis, then this validates the theory. If the hypothesis can be retested using various methods and new settings, the theory is deemed reliable. At no point does the theory ever become a “fact,” though after so much testing and validating it can often be treated as such.
The bottom line is if an offered “theory” does not consistently explain a given phenomenon, and/or cannot be tested, then you don’t have a theory. You might have a concept, or a belief. You may really like these concepts or beliefs. But they are not theories in the scientific sense.
Take, for instance, Intelligent Design “theory,” the idea that the universe is so complex that it must have been designed by some supernatural entity, much as you can’t get an airplane by just amassing random parts. Intelligent Design isn’t a theory. One might argue that Intelligent Design does explain the existence of diverse species, but the explanation breaks down when we consider the concept of complexity. Why, for instance, would an intelligent designer include vestigial organs or body parts such as leg bones in whales? Also, we cannot use Intelligent Design to formulate hypotheses…at least not without knowing the whims of the intelligent designer. It’s not a theory. You may believe it to be true, but that’s irrelevant. It is not, by definition, a theory.
Now the fact that the previous chapter was dedicated to introducing a bunch of theories should demonstrate the importance of theory in sociological research. Sociological research is, in essence, a process of creating theories to describe the social world and then testing those theories against reality. A hypothetical sociologist may collect observations, then formulate a theory to explain them. That theory can then be used to formulate testable hypotheses that can be used to validate the theory and confirm the value of the researcher’s observations. This is the research process.
Validity and Reliability
Validity means that our conclusions match what is going on in the real world. For instance, according to Pew Research Center, about 70% of Americans feel that crime rates have increased in the last ten years.1 This is the common sense notion when watching the news. However, real crime statistics show that crime is at historic lows, and though there are some specific regions experiencing an increase in violent crime, this pattern is not a reflection of the society overall. This misperception of crime may have very real consequences, as almost 80% of Trump voters believed crime was getting worse while this was only true for almost 40% of Clinton supporters. If more people had a valid understanding of crime, might that have influenced the results of a presidential election?
Reliability means that you consistently get the same results. So, for instance, when we are researching crime, we might use the Uniform Crime Report put out by the Department of Justice (see examples above). The UCR gives you the total number of crimes reported for each category, known as Index Crimes. When you look at the trends in the Uniform Crime Report over time you should get a relatively smooth trend. If you get a trend that is all over the place, you are probably not using a reliable measure.
But there’s more to it than that. To understand if a measure is valid or reliable you have to know exactly what is measured. The researcher has to operationalize his terms. For instance, the UCR measures the number of crimes reported to authorities. Crimes are divided into what are called Index Crimes: Violent Crimes specifically refer to murder and non-negligent manslaughter, forcible rape, robbery, and aggravated assault; Property Crimes specifically refer to burglary, larceny (theft), and motor vehicle theft. What about other crimes, like drug possession? Prostitution? Wage theft? Loan sharking? Well, those aren’t considered index crimes, so they are not part of the measure. Also, the UCR only counts crimes that are reported. What about crimes that are not reported? For instance, if someone steals your car, you are probably going to report it, but if someone steals your bike…meh…maybe not. If you get into a fistfight with a stranger, you might report the assault. If you get into a fistfight with your brother-in-law at your sister’s wedding you might not. So the UCR is useful, but it’s not comprehensive.
So another measure for crime is the National Crime Victimization Survey. The NCVS is a survey of about 75,000 people about their history of being victimized in the last year. Based on this measure, we can get an estimate of how many people were victimized. When we take the NCVS and compare it to the UCR, we should get consistent results. What do we get? Weeeeell. Some results are more consistent than others. Property crimes and sexual assaults are examples of measures that are higher for the NCVS than they are for the UCR, meaning that many such crimes are not reported. Robberies and serious domestic violence tend to get reported to the police. General theft, on the other hand, goes unreported at much higher levels. Both the UCR and NCVS, however, show that overall, there has been a decrease in crime since the mid-90s. This decrease in crime is a reliable finding because it can be confirmed using different measures, the UCR and the NCVS.
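The logic of that reliability check can be sketched in a few lines of code. This is a minimal illustration, and all the numbers below are invented for the example, not real UCR or NCVS figures:

```python
# A minimal sketch of a reliability check: do two independently collected
# crime measures (hypothetical UCR- and NCVS-style yearly rates) agree on
# the overall trend? All numbers below are illustrative, not real data.

def trend_direction(series):
    """Return 'down' if the last value is lower than the first, else 'up'."""
    return "down" if series[-1] < series[0] else "up"

# Hypothetical rates per 100,000 people, mid-90s to recent years
ucr_rates = [714, 611, 507, 469, 404, 387, 373]
ncvs_rates = [4100, 3500, 2800, 2400, 2100, 2000, 1900]

# If both independently collected measures agree on the direction,
# the finding (crime has declined) is more reliable.
consistent = trend_direction(ucr_rates) == trend_direction(ncvs_rates)
print(consistent)  # True: both series trend downward
```

The point is not the code but the principle: agreement between independent measures is what makes a finding reliable.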
The Positivist Approach: Quantitative Methods
How a researcher approaches his or her research is determined by a number of different variables. Society and the social impact on human behavior is a pretty complicated matter. Researching society’s influence on human behavior can be done through rigid conformity to scientific method, but often requires more nuanced and innovative approaches. Two big questions must be answered: how big a sample is necessary to get the data required, and does the researcher have the ability to directly control for extraneous variables? The theoretical perspective of the researcher also matters. The approach that a functionalist might use will likely be different from the approach used by a symbolic interactionist.
For our purposes, there are two major approaches to research in sociology, the positivist approach and the interpretivist approach.
The Positivist approach was pioneered by the philosopher Auguste Comte. You remember Comte from Lecture 3. Comte believed that the same scientific methods used to understand the natural world should be applied to understanding society. The Positivist approach mostly emphasizes what is called the Deductive Method. In other words, the researcher will use a theory to formulate a hypothesis and then test that hypothesis.
The Positivist is most likely to use what are called Quantitative Methods of Research. Quantitative Methods are those methods using numbers and measurable values. There are three main Quantitative Methods: Surveys, Experiments, and Content Analysis.
Surveys use closed-ended, often multiple-choice questions to gather data. Examples include questionnaires and structured or closed-ended interviews. Each of these techniques involves asking individuals a pre-established series of questions with limited choices that are given number values. These methods are relatively easy to put together and analyze. Using the same questions with the same responses improves the reliability of the data. As with any form of statistical data, the more people you have responses from, the more valid the data. Many surveys allow the researcher the ability to question thousands of people within a relatively short amount of time. Longitudinal surveys can also be done to gather data on a particular population over a long period of time. Cross-sectional surveys gather data at a single point in time, often to compare variables among two or more groups. For instance, we might want to analyze educational attainment by race. Quantitative methods make it relatively easy to do this.
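That kind of group comparison boils down to a cross-tabulation: counting how often each response appears within each group. Here is a minimal sketch, with made-up respondent records and generic group labels standing in for real survey data:

```python
# A minimal sketch of a cross-tabulation for a closed-ended survey item.
# The respondent records and category labels are invented for illustration.
from collections import Counter

# Each respondent: (group, highest degree attained)
responses = [
    ("Group A", "Bachelor's"), ("Group A", "High school"),
    ("Group A", "Bachelor's"), ("Group B", "High school"),
    ("Group B", "Bachelor's"), ("Group B", "High school"),
]

# Count how often each (group, response) combination occurs
crosstab = Counter(responses)
print(crosstab[("Group A", "Bachelor's")])  # 2
```

Because every respondent answered the same pre-established question with the same limited choices, the counts line up cleanly, which is exactly what makes closed-ended data so easy to analyze.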
There are some limits to these questioning methods, however. It might be difficult to get people to participate, especially when the topic is sensitive, personal or even painful. When people do participate, however, it’s impossible to know if they are telling the truth or giving an accurate account. This may not even have to do with respondents actually lying. Sometimes our responses to questions are influenced by how we happen to be feeling at any given time. If someone is conducting a structured interview on marriage and the respondent had just had a big argument with his spouse, he may be inclined to give answers that he otherwise would not have given. This limitation can be controlled by repeating the research and comparing the results. Even if people are lying or giving inaccurate responses, if they are doing so at a consistent rate, then the research is still useful. Another limitation is, by pre-establishing the questions and responses, the questions themselves are subject to the researcher’s assumptions and may be biased. Have you ever completed a survey in which none of the responses match your particular preference?
Another problem is what’s called the Researcher Effect or the Interviewer Effect. This happens during face to face questionnaires and structured interviews when the relationship between the researcher and the respondent shapes or biases the responses. I may find myself giving different responses to questions from people I like or trust than I would to people I don’t like or trust. Just a simple thing like a smile or a nod from the researcher during the questioning can shape the respondent’s answers. We often try to please the people we interact with. On the other hand, if the researcher doesn’t smile we may feel like we are giving the “wrong” answer.
Experimentation is another quantitative method. Experiments are difficult to do in sociology. Often, sociologists will use experiments done by social psychologists and such. Regardless, sociology experiments are set up in much the same way they are for other fields like psychology. The idea behind experiments is to test the relationship between different variables. Variables are those factors believed to be related, that the researcher is testing. The variable that the experimenter changes or manipulates is called the Independent Variable. The variables that the researcher is measuring are called Dependent Variables.
A basic experiment is set up by dividing subjects into at least two groups. First is the Control Group, or the group that the researcher does not manipulate. Beyond the Control Group, the experiment must have at least one Experimental Group. This is the group that is predicted to be influenced by the independent variable. The idea is, if the characteristics of the Control and Experimental Groups are the same going into the experiment, then any changes seen in the Experimental Group, but not seen in the Control Group, at the end of the experiment must be caused by the Independent Variable.
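The experimental logic can be boiled down to a simple comparison of outcomes between the two groups. This is only a sketch of the reasoning, with invented scores; real analysis would also test whether the difference could be due to chance:

```python
# A minimal sketch of two-group experimental logic: if the groups were
# comparable going in, a difference in the measured outcome (the dependent
# variable) is attributed to the independent variable.
# Scores below are invented for illustration.

def mean(xs):
    """Average of a list of scores."""
    return sum(xs) / len(xs)

control_scores = [2, 3, 2, 4, 3]       # no manipulation
experimental_scores = [5, 6, 5, 7, 6]  # exposed to the independent variable

# The difference in group means is the apparent effect
effect = mean(experimental_scores) - mean(control_scores)
print(effect)  # 3.0
```

A nonzero difference suggests an effect; whether it is statistically meaningful depends on sample size and variability, which is where the statistics discussed later come in.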
[Independent, dependent, control…what the…]
Look, it’s not rocket science…well, no, actually it kinda is rocket science, but…anyway a famous example of a classic social science experiment was the “Bobo Doll Experiment” conducted by Albert Bandura in 1963. In this experiment children were divided into four groups, three experimental groups and a control group. The children in the three experimental groups were shown films depicting different types of violence. These films were the Independent Variables. The control group was not shown the films. The children were then allowed to play with inflatable “Bobo Dolls.” The children who had seen the violent films exhibited violent behavior toward poor Bobo. This violent behavior was the Dependent Variable. The children in the control group did not show the same behavior. This indicated that, at the very least, there was a correlation between violent films and violent behavior.
It’s tough to do sociology in a lab, however. When sociologists do experiments, they are usually field or natural experiments. These are experiments that take place in the real world, across multiple settings with comparable characteristics. Consequently, it’s difficult to set up controls, so at best, natural experiments can offer correlations. Harold Garfinkel’s experiments were some of the most inventive. In one experiment some of Garfinkel’s students were sent home with instructions to behave as if their parents were strangers. In this case the students’ interactions were the independent variables and the parents’ responses were the dependent variables.
Experiments are great ways to find correlations and even to test causality. The parameters of experiments make them easy to reproduce to test for reliability. Experiments are often limited by virtue of the fact that they are conducted under very specific conditions, so the results may not be generalizable to the population as a whole. The children of the Bobo experiment were given an inflatable doll that was really ideal for punching. Frankly, I don’t know how anyone could resist punching a Bobo Doll. Would they have still exhibited violent behavior if they were given toy trucks to play with?
Another problem is called the Hawthorne Effect. This is named after a famous experiment in 1933 in which Elton Mayo went into the Hawthorne Works to test different variables and how they impact productivity. He set up a natural experiment in which he changed certain variables, such as lighting, in an experiment group to see which variables improved productivity. What he found, however, is that everything he did improved productivity. Increased lighting. Decreased lighting. It didn’t matter. In this case, the relevant independent variable wasn’t anything he was using in his experiment. He was the independent variable. The workers were responding to being observed by the researcher. So the Hawthorne Effect happens when the knowledge of being observed results in a change of behavior.
Finally, content analysis is done on texts and other media. In this case, the researcher is looking for recurring themes in the media. Looking at media content is useful not because media is a reflection of the real world, but because media does reflect cultural ideals. For instance, a master’s thesis done by Brittany M. Trimble-Clark at Minnesota State University examined representations of young women in Seventeen Magazine in 2011. She concluded that most articles contained anti-feminist content and focused largely on appearance. What does this suggest about our cultural ideals with regard to young women?
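At its simplest, quantitative content analysis means coding texts for themes and counting how often each theme appears. Here is a minimal sketch of that idea; the article snippets and theme keywords are invented for illustration, and real coding schemes are far more careful than keyword matching:

```python
# A minimal sketch of content analysis: counting how many texts in a
# sample touch on each coded theme. Snippets and keywords are invented.
articles = [
    "ten tips to perfect your look this spring",
    "how to get the look celebrities love",
    "study habits that actually work",
]

# Hypothetical coding scheme: each theme is defined by keywords
themes = {"appearance": ["look", "style", "beauty"],
          "achievement": ["study", "career", "work"]}

# Count each article at most once per theme
counts = {theme: sum(any(kw in text for kw in kws) for text in articles)
          for theme, kws in themes.items()}
print(counts)  # {'appearance': 2, 'achievement': 1}
```

Because the coding scheme assigns numbers to themes, the results can be compared across publications or over time, which is what makes content analysis a quantitative method.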
Looking at and conducting quantitative research requires an understanding of statistics. No…wait…don’t worry, we’re not going to cover statistics in this lecture. We’ll do that down the line. But there are quite a few resources that can help you get a basic understanding of statistics and get you started on using quantitative research to understand the real world.
Quantitative Methods Strengths and Weaknesses
Quantitative methods in general are a great way to get a big picture understanding of social phenomena. Using statistical tools makes the research relatively easy to validate and test for reliability. If the study is large enough, and uses representative samples, then it is likely that the conclusions are generalizable. In other words, the conclusions drawn from a study group are probably true for the population as a whole. If you have a sample of ten people and discover that five of them are fans of Game of Thrones, you are probably stretching it to conclude that fifty percent of the people in the country are Game of Thrones fans. That may just be a random situation. But if you get a thousand people and discover that five hundred of them are fans of Game of Thrones, it’s more likely these results are generalizable. Better still, if you get ten thousand people…etc.
With quantitative research, the bigger, the better. Just a quick note on statistics. In statistics, the letter “n” represents the sample size. One way to gauge the precision of one’s data is to calculate the Standard Error (SE), an estimate of how far a sample statistic is likely to fall from the true population value. The larger the n, the smaller the SE. It’s always best to have a huuuuuge quantitative study. On the other hand, quite often when testing hypotheses, the researcher finds that he or she has failed to reject the Null Hypothesis. In other words, the data suggest that the variables being tested are irrelevant to the predicted outcomes. Ouch! That hurts! Huge studies with large populations are hard to justify against the possibility of ending up at the Null Hypothesis. That’s why many large studies will be preceded by a Pilot Study. A pilot study is a small-scale version of a planned study used to determine if there is a reason to go through the time and expense of the larger project.
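The Game of Thrones example above can be made concrete. For a sample proportion p, the standard error is sqrt(p(1 − p)/n), so quadrupling precision requires multiplying the sample by roughly a hundred. A minimal sketch, with illustrative numbers:

```python
# A minimal sketch of why bigger samples are better: the standard error
# of a sample proportion shrinks as n grows. Numbers are illustrative.
import math

def se_proportion(p, n):
    """Standard error of a sample proportion p with sample size n."""
    return math.sqrt(p * (1 - p) / n)

p = 0.5  # e.g., half the sample are fans of the show
for n in (10, 1000, 10000):
    print(n, round(se_proportion(p, n), 4))
# SE falls from about 0.158 (n=10) to 0.016 (n=1000) to 0.005 (n=10000)
```

With n = 10, the “fifty percent are fans” estimate could easily be off by 15 points or more; with n = 10,000 it is pinned down to within about half a point.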
Quantitative research is so powerful and so useful that it’s almost impossible for you to get through the day without becoming a datapoint on some quantitative database. You may be asked to participate in a survey or study, but even if you are not, data is being collected on you pretty much all day every day. Your school keeps quantitative data on you. Your workplace quantifies your productivity. The search engines you use online all end up in databases. Your bank and credit agencies keep track of your spending habits. Your cell phone keeps track of where you go, who you call and what apps you use. We are entering a period characterized by what has been called Big Data, in which just about every aspect of your life can be broken down into quantifiable bits of information and used by governments or business interests or schools or any institution or organization that you participate in. It’s a form of intensive and possibly invasive surveillance.
But regardless of how powerful and important quantitative research is, there is one big limitation as it relates to sociology. If the sociologist is interested in the lived experience of individuals, then quantitative research is lacking. Sure, I could create a quantitative profile of racism in my community that would be valid and reliable. However, would it tell me about racism as it is experienced by real people? How do people of color navigate and deal with racist social structures and interactions? What are some processes by which people become racist or even reject racism? These are important questions that deserve attention by sociologists, but are not easy to analyze. Quantitative methods are inadequate for this task.
To research such subjective matters, sociologists have to use some innovative strategies from an interpretivist tradition. These strategies usually fall under the Interpretivist Approach and use what we call qualitative research. That is the topic of the next section.
In the meantime, go have fun with stats.
1. More recent data do show an uptick in violent crime. This uptick may be a transitory response to the Pandemic (this chapter was written before Covid). Furthermore, when you look at the actual data tables, the “increase” is hardly dramatic. Sources: Uniform Crime Report; National Crime Victimization Survey.