Statistical Analysis Research Paper Starter

Statistical Analysis

Statistical analysis encompasses the range of techniques used in quantitative studies, all of which are concerned with the examination of numerical data, with describing this data using quantifiable measures, and with comparing this data to theoretical models or to other experimental results. Statistical analysis is used to sample populations adequately, to determine relationships, correlations, and causality between different attributes or events, and to measure differences between sets of empirical data. Statistical analyses are grounded in the scientific method, and as such rely on experimental designs that are free of bias, reproducible, reliable, and valid. Statistical analysis is prevalent in the field of education research today, specifically in policy research and in studies of school management, funding, staffing, and student retention rates. It is less common in studies of curriculum development and analysis, though in the early to mid-twentieth century it dominated that field as well. In the early twenty-first century, qualitative research, in the form of interpretive and critical methods, has been the most commonly used approach in curriculum research.

Keywords: ANOVA; Bias; Cluster Sampling; Dependent Variables; Descriptive Statistics; Independent Variables; Inferential Statistics; Qualitative Research; Quantitative Research; Quota Sampling; Sampling Techniques; Simple Random Sampling; Stratified Sampling; T-Test

Research in Education: Statistical Analysis


Statistical analysis is used in quantitative research to collect, organize, and describe empirical data. All quantitative studies rely on statistical analyses because quantitative research is a method of approaching questions that is based on concrete, observable, "objective," and measurable data. Quantitative measures aim to explain causal relationships—though most often, in social science research, these methods can only determine correlations between the different factors studied, not make definite predictions about the universality of their application (Creswell, 2003).
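
As a brief, simplified illustration of this distinction, the sketch below computes a Pearson correlation coefficient for two invented classroom variables; the data, the variable names, and the use of the SciPy library are assumptions made for this example rather than material drawn from the studies cited here.

    # A minimal sketch, on invented data, of the kind of correlational
    # measure described above.
    from scipy import stats

    hours_of_instruction = [10, 12, 15, 18, 20, 22, 25, 28]  # hypothetical predictor
    test_scores = [61, 64, 70, 72, 75, 78, 83, 85]           # hypothetical outcome

    r, p_value = stats.pearsonr(hours_of_instruction, test_scores)
    print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

    # Even a near-perfect r describes only an association within this
    # sample; it does not, by itself, establish that more instruction
    # causes higher scores.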

A quantitative study in the social sciences follows the prescriptions of the scientific method: it must be reproducible or reliable, free of bias, accurate, and valid. Reproducibility is foundational to quantitative experimentation; if a study cannot be reproduced by others, or by the same researcher at a later time, by definition it does not follow the constructs of the scientific method. Because quantitative analysis applies mathematical equations to collected data to make the analysis objective, the social sciences researcher must be particularly attentive to the collection of unbiased data (Sax, 1985). If the data is biased, statistical analyses might lead to false or inaccurate theories or predictions. In the life or physical sciences, collecting unbiased information is less challenging than in educational research, as measurements of the widths of cells and of electron transport, for example, have been standardized. In educational research, however, particular attention must be paid to the ways in which an investigator's biases may lead to the collection of a specific kind of information, or to a particular sampling method that might not fairly represent the population in question (Creswell, 2003). Because all researchers hold values, qualitative education researchers have criticized quantitative studies since the late twentieth century for being only marginally useful in determining the "objective truth" of life in classrooms or of methods of curriculum construction (Pinar et al., 2004).
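
To make the sampling concern concrete, the sketch below contrasts simple random sampling with proportionate stratified sampling (two of the techniques named among this article's keywords) on an invented population of students; the population, the "school" attribute, and the helper function are assumptions made for illustration only.

    # A minimal sketch, on invented data, of two sampling techniques:
    # simple random sampling and proportionate stratified sampling.
    import random

    random.seed(0)  # fixed seed so the example is reproducible

    # Hypothetical population of 1,000 students, labeled by school type.
    population = (
        [{"school": "urban"} for _ in range(600)]
        + [{"school": "suburban"} for _ in range(300)]
        + [{"school": "rural"} for _ in range(100)]
    )

    # Simple random sampling: every student has an equal chance of selection,
    # but a small stratum (here, rural) can be over- or under-represented by chance.
    simple_sample = random.sample(population, 100)

    def stratified_sample(pop, key, n):
        """Sample each subgroup in proportion to its share of the population."""
        strata = {}
        for person in pop:
            strata.setdefault(person[key], []).append(person)
        sample = []
        for group in strata.values():
            share = round(n * len(group) / len(pop))
            sample.extend(random.sample(group, share))
        return sample

    stratified = stratified_sample(population, "school", 100)

    for label, sample in (("simple random", simple_sample), ("stratified", stratified)):
        counts = {s: sum(1 for p in sample if p["school"] == s)
                  for s in ("urban", "suburban", "rural")}
        print(label, counts)

The stratified draw guarantees each subgroup its proportional share of the sample, one simple guard against the kind of unrepresentative sample described above.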

In addition to concerns of reliability and bias, quantitative research must also be accurate and valid. Accuracy refers to the extent to which the experimental results accord with theoretical models, or the extent to which empirical results measure the phenomenon in question. Validity in the social sciences can be assessed in three ways: content, criterion, or construct (Twycross & Shields, 2004). Content validity refers to the suitability of a study's data collection and analysis methods for the questions being investigated. Criterion validity refers to whether a study uses previously validated methods or, if it is novel in its approach, whether it has predictive value. Construct validity is similar to accuracy, in that it indicates the strength of the relation between a study's findings and theoretical constructs or other studies' findings (Twycross & Shields, 2004).

Quantitative research in curriculum construction and evaluation grew out of the common school movement in the nineteenth century and, though popular in the early to mid-twentieth century, drew increasing criticism into the late twentieth and early twenty-first centuries (Pinar et al., 2004). However, at about the time it was losing popularity with educational researchers, quantitative research gained new importance in the newly developed field of education policy studies. The movement toward accountability, toward assessing "learning gaps" between various groups, and toward increasing the efficiency and effectiveness of policy implementations—along with the increasing role of the federal government and of national organizations in education—was founded upon quantitative analysis (Heck, 2004). The quality of quantitative research in education has been harshly criticized, however, and it remains a matter of concern for policy analysts and quantitative researchers. Common critiques are that educational researchers are not adequately trained to approach social problems quantitatively, and that researchers often dismiss as irrelevant data that does not seem to match the expected results, offering after-the-fact explanations of why this data was not included in the final analysis (Sax, 1985; Heck, 2004). Even quantitative researchers have posed these critiques; they continue to advise policy makers, students of education, and professionals in the field of the importance of critically appraising a quantitative educational study before relying on its analyses.

History of Statistical Analysis in Education Research

Quantitative research in education grew out of the common school movement in the nineteenth century. The common school, a term coined by educational advocate and reformer Horace Mann (1796–1859), was a government-funded free school in which any child from any socioeconomic background or status could enroll. The movement formed the foundation for modern-day public education (Travers, 1983). The establishment of common schools ushered in an era of educational management. The educators of the time, whose names remain well known today, were generally involved not directly in teaching but rather in making educational policies, effectively governing boards of education, advising and consulting with principals and teachers, and founding educational journals and conferences. These management decisions relied on adequate, accurate quantitative descriptions of school enrollment and retention rates, of effective organizational structures, and of a variety of other matters thought to correlate with a school's efficiency and productivity (Pinar et al., 2004).

The quantitative movement soon permeated the curriculum field, and educational theorists became increasingly interested in using scientific methods to measure and improve curriculum construction. Edward Thorndike (1874–1949) was an American psychologist who is credited as the leader of this movement toward quantifying curriculum (Travers, 1983). He emphasized the importance of establishing "facts"; once these basic facts were discovered, Thorndike wrote, one would have a grasp of a field of knowledge. His theory was criticized by many scientists as a misunderstanding of the scientific method as applied to social science, and it is cited as a primary reason for the misunderstanding of quantitative research in the field of education studies. Noam Chomsky, a prominent linguist and cognitive theorist, wrote that the scientific method is based not only on the collection of data but also on the theories subsequently developed from this data and their ability to encompass a comprehensive range of phenomena (cited in Travers, 1983).

Thorndike's work, despite the critiques voiced by natural and physical scientists, inspired a movement in education aimed at classifying, structuring, and analyzing all aspects of schooling, not just school management. IQ scores were increasingly used to evaluate students and soon inspired the movement toward subject tests (Pinar et al., 2004). The Scholastic Aptitude Test (SAT) grew out of Carl Brigham's research and development of army aptitude evaluations. Ralph Tyler (1902–1994) developed a method of curriculum construction based on quantifiable objectives, goals, procedures, and outcome assessments—a paradigm frequently used in schools today (Tyler, 1949). Educational psychology and school counseling, too, were founded on a basis of quantitative studies and data (Travers, 1983). Quantitative, statistical analysis thus firmly established assessment tools, school organizational structures, curriculum construction guidelines, and other objective methods and approaches to pedagogy.

After the initial wave of enthusiasm for scientific formulations of curriculum subsided in the mid-twentieth century, qualitative research methods gained appeal through their emphasis on the human, subjective factors previously ignored in the positivist paradigm. At about the same time, however, quantitative, statistical analysis formed the methodological foundation for the new fields of educational politics in the 1960s and 1970s and of education policy from the 1980s to the present day (Heck, 2004). Education policy shares the intent of the initial push toward standardization that grew out of the common school movement: it is concerned with the efficient and effective management of classrooms, schools, teachers, and funding. One such policy initiative is the Head Start Act of 1981 (the program was initially created in 1965), which provides education and health services for low-income children, as well as parental support and guidance. Another is the No Child Left Behind Act of 2001, legislation that provides federal funding to schools according to their effectiveness as measured by standardized test scores. A third is Race to the Top, begun in 2009 to reward state school systems for complying with and improving upon certain educational policies; the Common Core State Standards initiative is a major component of Race to the Top.

Applications: The Theory

Data Visualization

Statistical analysis is applied to experimental data after the data has been collected. Before that analysis can begin, a quantitative study must define the variables it will quantify and measure. Attributes, or variables, may be independent or dependent. Independent variables are measurable...
