What method in statistics would I use to show that a student taking 2 different exams in two different subjects, scoring a 50% on one and an 80% on the other... what concept could be applied to show that the 50% is actually better than the 80%?
A student taking 2 different exams in 2 subjects received a 50% on one test and an 80% on the other. How could the 50% represent a better result than the 80%?
It all depends on the mean and the standard deviation of the tests.
For example: suppose that there were 100 students in each class.
In the class where the student received a 50% (class A), the mean score was 48% with a standard deviation of 2%.
In the class where the student scored an 80% (class B), the mean score was 85% with a standard deviation of 5%.
In class A, assuming the scores were normally distributed, the student scored better than 84% of her classmates. (The z-score is `z=(50-48)/2=1`, and the area to the left of `z=1` under the standard normal curve, found in a table or using technology, is about 84%.)
In class B, again assuming the scores are normally distributed, the student scored better than only 16% of the other students. (The z-score is `z=(80-85)/5=-1`, and the area to the left of `z=-1` is about 16%.)
So, relative to her classmates, the score of 50% is "better" than the score of 80%.
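The z-score comparison above can be checked with a short calculation. This sketch uses Python's standard-library `statistics.NormalDist` and the hypothetical class means and standard deviations from the example:

```python
from statistics import NormalDist

def fraction_below(score, mean, sd):
    """Fraction of classmates expected to score below `score`,
    assuming the class scores are normally distributed."""
    z = (score - mean) / sd          # standardize the score
    return NormalDist().cdf(z)       # area to the left of z

# Class A: student scored 50%, class mean 48%, sd 2%
print(round(fraction_below(50, 48, 2), 2))   # → 0.84

# Class B: student scored 80%, class mean 85%, sd 5%
print(round(fraction_below(80, 85, 5), 2))   # → 0.16
```

The 50% lands one standard deviation *above* its class mean, while the 80% lands one standard deviation *below* its class mean, which is exactly why the lower raw score is the stronger relative result.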