What are college entrance examinations? Are they subject to cultural biases?

Quick Answer
College entrance examinations have been used since the 1920s in the United States to assist college administrators in making admissions decisions. The two most widely used exams are the ACT and the SAT. Although college entrance exams are useful in predicting first-year college grades, some critics of the tests have argued that the score gaps between racial and ethnic groups may be more reflective of cultural bias in the assessments than of actual differences in cognitive ability.
Expert Answers
eNotes Educator | Certified Educator

College entrance exams are standardized tests designed to predict student grades in the first year of college. Because research has shown that students’ scores on these assessments are related to their grade point averages as college freshmen, many US colleges and universities use these scores as a source of information for selection and admissions decisions. In addition, college entrance exam scores are used for decisions about financial aid, scholarships, and placement into remedial course work.

The most commonly used college entrance exams in the United States are the SAT Reasoning Test and the ACT. In 2012, about 1.6 million high school students completed the ACT, and 1.6 million completed the SAT. ACT test takers are largely residents of the Midwest and the South, while SAT test takers tend to be residents of the Northeast and West, although most institutions will accept scores from either assessment.


Before the development of the SAT Reasoning Test, information used to make college admissions decisions varied widely. Many elite institutions selected the children of alumni or graduates from highly ranked preparatory schools for admission. Some colleges did have entrance exams; however, these differed from college to college, so students interested in multiple institutions had to take multiple exams. The aim of the SAT as designed by its developers, the College Entrance Examination Board (later the College Board), was to provide a standardized way to assess students’ aptitude for college-level work, regardless of previous education or family lineage and, consequently, to select students for admittance on the basis of their own merits.

The SAT Reasoning Test, developed in the 1920s and originally called the Scholastic Aptitude Test, has evolved over the years. It was originally designed to measure aptitude, or an individual’s innate ability to perform well in school. Critics of the test argued that the SAT favored students from middle- and upper-income families, and that tests designed to measure curriculum-based learning were likely to be more egalitarian and better predictors of college grade point average. In response to this criticism and to the rising number of higher education institutions that dropped the SAT as an application requirement, the College Board added a writing component to the SAT in 2005 and revised the existing test to more closely match content covered in high school curricula. In 2014, the College Board announced that in all forthcoming rounds of testing, the essay portion of the SAT would be optional.

The SAT is a three-hour test (three hours and fifty minutes when including the essay) with three sections: critical reading (formerly verbal), mathematics, and writing. Both the critical reading and mathematics sections consist of multiple-choice and fill-in-the-blank questions. In the critical reading section, students complete sentences and read and assess written passages, and in the math section, they apply mathematical concepts and interpret data. The writing section is made up of an essay-writing portion and multiple-choice questions requiring students to recognize writing errors and improve sentences and paragraphs.

Scores on each section of the SAT range from 200 to 800. The average score varies slightly from year to year, but is relatively stable at approximately 500 on each of the sections with a standard deviation of about 100.
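Given those round numbers (a mean near 500 and a standard deviation near 100), a normal-curve model gives a rough sense of where a given section score falls. The sketch below is purely illustrative, since real score distributions are only approximately normal:

```python
from statistics import NormalDist

# Approximate model of one SAT section: mean ~500, SD ~100, per the text.
sat_section = NormalDist(mu=500, sigma=100)

def approximate_percentile(score):
    """Estimate the percentile rank of a section score under the normal model."""
    return round(sat_section.cdf(score) * 100)

print(approximate_percentile(500))  # a score at the mean falls near the 50th percentile
print(approximate_percentile(700))  # two SDs above the mean is roughly the 98th percentile
```

Under this model a 600, one standard deviation above the mean, would sit near the 84th percentile; actual published percentile tables differ somewhat from year to year.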


In the 1950s, E. F. Lindquist developed the American College Test (later known as the ACT) and founded the testing and measurement company ACT, Inc., in Iowa City, Iowa. Lindquist believed that although tests of aptitude such as the SAT measured an individual’s innate ability, such tests failed to recognize achievement, or what individuals had done with their ability. The ACT, therefore, was designed to measure what students had learned in core college-preparatory curriculum areas. ACT regularly conducts a survey of high school and college faculty to ensure that the assessment stays consistent with high school curricula.

The ACT is a two-hour, fifty-five-minute test. The writing section of the ACT is optional and takes an additional thirty minutes to complete. Besides writing, the ACT has four sections—English, mathematics, reading, and science—consisting entirely of multiple-choice questions. The English test measures knowledge of punctuation, grammar, sentence structure, organization, and style. Mathematics measures algebra, geometry, and trigonometry skills. Reading measures skills in reading college-level material, and the science section measures scientific reasoning skills, assuming that students have completed three years of science, including biology. The optional writing test is a single thirty-minute essay.

Scores on each of the four ACT scales range from 1 to 36. A composite score, which is the mean, or average, of the scores on all four scales, is also provided on a range from 1 to 36. The mean score varies slightly from year to year but remains relatively stable at approximately 20 on each of the scales and the composite, with a standard deviation of about 5. Scores on the writing section range from 2 to 12 and are reported in combination with the English subscale on a scale of 1 to 36.
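The composite described above is simply the mean of the four scale scores, rounded to a whole number. A minimal sketch with hypothetical scores follows; the half-up rounding convention is an assumption here, not something stated in the text:

```python
import math

def act_composite(english, mathematics, reading, science):
    """Average the four ACT scale scores and round to the nearest whole
    number; halves are rounded up (e.g., a mean of 24.5 becomes 25)."""
    mean = (english + mathematics + reading + science) / 4
    return math.floor(mean + 0.5)

print(act_composite(24, 26, 25, 23))  # mean is 24.5, so the composite is 25
```

Because the composite is an average, a one-point gain on a single scale can move the composite by at most one point, which is why test-preparation advice often targets a student's weakest scale.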

Although the ACT is used for college admissions decisions, it is also designed to provide feedback to teachers and students on academic areas of strength and areas for development. Students can use ACT subscale scores to plan what courses to take and where to focus their studies to improve their achievement and, consequently, their level of preparation for college. In addition, teachers and high school administrators can use ACT scores to evaluate the effectiveness of their teaching and of the curriculum. Because the ACT is linked to high school course content, several states have also mandated its use as a high school exit exam.

Advantages of the Tests

The recent revisions to the SAT have made it more similar in content to the ACT, although it still assesses critical thinking and problem-solving skills to a greater degree than does the ACT, which focuses on assessing acquired academic knowledge and skills. Regardless, both have been shown to be good predictors of college grade point average and, when considered in combination with high school grade point average, have proven to be better predictors than either the test score or grade point average alone. Some college admissions officials contend that looking at the tests in combination with a student’s high school performance allows for more efficient and effective selection of those students most likely to succeed in a college environment.

Test scores provide a uniform scale for the comparison of applicants. High school grade point averages and class rank vary widely depending on the school attended and the courses taken. A student might perform very well in remedial courses and poorly in honors courses, so the student’s choice of classes might result in significantly different grade point averages. In addition, a 3.0 at one high school might reflect a very different level of performance than a 3.0 at another high school. For this reason, supporters of college entrance exams have argued that test scores allow for more accurate comparisons of students from diverse schools, or of students with different course work at the same schools.

Criticisms of the Tests

The ACT and SAT have been criticized for a number of reasons, but two of the most common criticisms are that the tests are biased against certain racial and ethnic groups and that they ignore other key characteristics of applicants that may be useful in predicting college success.

Research has consistently shown that African Americans, Latinos, and Native Americans have lower mean scores on college entrance exams and other tests of achievement such as the National Assessment of Educational Progress than whites and Asian Americans. This difference is referred to as the achievement gap. Critics of entrance exams have argued that the difference is caused by the culture-specific nature of the tests, with items written to favor students from a white background and to put minorities at a disadvantage. Although there is evidence that such items once existed, the tests have been rewritten, researched, and extensively scrutinized by writers and consultants from diverse backgrounds so as to eliminate such bias. Researchers largely agree that this effort has met the educational standard for ensuring fairness. The achievement gap, however, continues to exist.

Some researchers have asserted that the achievement gap reflects differences that bear a relationship to race and ethnicity. These may include differences in quality and preparedness of teachers, rigor of the curriculum, quality and safety of the school, parental involvement and emphasis on school-related activities, socioeconomic status, and hunger and nutrition. Regardless of the reason, many institutions place greater emphasis on an applicant’s high school grade point average, courses taken, and involvement in extracurricular activities, so as to address the concern that differences between racial and ethnic groups on test scores might result in the underselection of minorities for entry into college.

Critics have also suggested that college entrance exams measure only one determinant of success in college and ignore other influential variables. For example, the motivation to perform well, feelings of connection to the college, and study skills have all been shown to predict college performance and are marginally related, if at all, to performance on the SAT and ACT. The use of these noncognitive factors could help predict college performance better than SAT or ACT scores alone and would give a fuller picture of an applicant.

Both of these criticisms have led institutions not to use SAT or ACT scores as the sole basis for admissions decisions. In the absence of another uniform and standardized measure, it is unlikely that college entrance exams will disappear entirely. Instead, it is likely that exam results, along with other information such as students’ personal statements, essays, extracurricular activities, high school coursework and grades, and letters of recommendation, will continue to be used to make admissions decisions.


