A cohort study was conducted to determine the relationship between stress and heart attack. High levels of stress were found to increase the risk of heart attack; the relative risk (RR) was 3.2. It was later found that the means used to determine whether a participant had a high level of stress was subject to misclassification for about 10% of all study subjects. This weakens the conclusion that high levels of stress cause heart attacks. Explain why this is true or false.

Asked by madgeflower

Answered by mathsworkmusic (Educator):

In a cohort study, participants are selected from a sub-population of some kind, for example from databases available to the researcher, rather than at random from the population as a whole. This in itself can introduce bias. However, it may be a practical constraint of the study, or deliberate, in order to target a sub-population of interest: certain age groups, persons of a particular socio-economic status, or those with or without competing risk factors such as smoking, diabetes or alcoholism.
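To make the quoted relative risk concrete before turning to the misclassification issue, here is a minimal sketch in Python, using made-up illustrative counts rather than data from the study, of how an RR of 3.2 would be computed from a cohort's 2x2 table:

```python
# Hypothetical 2x2 cohort table (illustrative counts only, not the study's data):
#                  heart attack   no heart attack
#  high stress          32               68
#  low stress           10               90

a, b = 32, 68   # high-stress group: events, non-events
c, d = 10, 90   # low-stress group: events, non-events

risk_exposed = a / (a + b)     # 32/100 = 0.32
risk_unexposed = c / (c + d)   # 10/100 = 0.10

rr = risk_exposed / risk_unexposed
print(f"RR = {rr:.1f}")        # prints: RR = 3.2
```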

In this case, the study population was partitioned into a 'high stress' (exposed) group and a 'low stress' (unexposed) group, classified by measuring each participant's level of stress. The summary notes that this classification was subject to about 10% error. That adds noise to the variable by which the data are partitioned: each subject has roughly a 10% chance of being placed in the wrong group.
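A small simulation makes the effect of this noise visible. The sketch below assumes non-differential misclassification (each subject's stress label is flipped independently with probability 0.10, regardless of outcome) and re-uses the illustrative risks from the table above; the observed RR comes out pulled towards 1:

```python
import random

random.seed(0)

N = 100_000                        # large cohort, so sampling noise is small
risk_high, risk_low = 0.32, 0.10   # illustrative true risks (true RR = 3.2)
p_flip = 0.10                      # 10% chance the stress label is wrong

events = {True: 0, False: 0}       # heart attacks, keyed by OBSERVED label
totals = {True: 0, False: 0}

for _ in range(N):
    truly_stressed = random.random() < 0.5
    # The outcome depends on the TRUE exposure...
    attack = random.random() < (risk_high if truly_stressed else risk_low)
    # ...but the analysis only ever sees the possibly-flipped label.
    observed = truly_stressed if random.random() >= p_flip else not truly_stressed
    totals[observed] += 1
    events[observed] += attack

obs_rr = (events[True] / totals[True]) / (events[False] / totals[False])
print(f"observed RR = {obs_rr:.2f}   (true RR = 3.2)")   # roughly 2.4
```

With these illustrative numbers, a 10% labelling error dilutes a true RR of 3.2 down to an observed RR of roughly 2.4: blurring the line between the groups systematically flattens the apparent effect.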

This noise has a knock-on effect on the analysis of the effect of stress on heart attack: the line between the high-stress and low-stress groups is blurred, so any connection found must be stated with less certainty than if there were less noise. The researcher must therefore be less sure of the estimated effect, both its size, quoted here as an RR of 3.2, and its direction. Indeed, if the misclassification is non-differential (equally likely regardless of whether the subject later has a heart attack), it tends to bias the estimated RR towards 1, i.e. towards no effect. And if the noise is such that the (e.g. 95%) confidence interval around the RR in fact crosses 1, then the researcher cannot be certain at the 5% level of significance (corresponding to the 95% confidence interval) that the effect does not go the other way, that is, that stress does not reduce the likelihood of a heart attack.
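To illustrate the confidence-interval point, here is a sketch using the standard large-sample (Katz) interval for log(RR), again with the hypothetical counts from the first sketch rather than the study's actual data:

```python
import math

a, n1 = 32, 100   # high-stress group: events, group size
c, n0 = 10, 100   # low-stress group: events, group size

rr = (a / n1) / (c / n0)

# Standard error of log(RR), large-sample (Katz) formula
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)

# 95% CI on the log scale, exponentiated back to the RR scale
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# If extra classification noise widened this interval until it included 1,
# even the direction of the effect would be in doubt.
```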

In rigorous scientific studies, counter-intuitive (or negative) results should be recognised and published. If there is a strong prior belief, for example here that stress increases the likelihood of a heart attack, this should be acknowledged formally in an a priori statement (as implemented in Bayesian analysis). This 'laying of your cards on the table' at the outset is far better for scientific rigour than perpetrating publication bias, where negative or counter-intuitive results are brushed under the carpet and ignored simply because they do not comply with (unacknowledged) strong prior opinions. Unfortunately, it can happen that prior beliefs are overstated, albeit 'formally', and only retrospectively once a negative or counter-intuitive result has been found, in order to avoid bad reactions from publishers and other researchers (papers can be rejected on this basis, because the research community at large is itself biased).
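As a hypothetical illustration of what such an a priori statement can look like in a Bayesian analysis: with a normal prior on log(RR) stated before the study, and an approximately normal likelihood from the data, the posterior is a simple precision-weighted combination. All numbers below are invented for illustration.

```python
import math

# Hypothetical prior, stated BEFORE the data are seen:
# the researcher expects stress to raise risk, centred on RR = 2.
prior_mean = math.log(2.0)   # prior mean of log(RR)
prior_sd = 0.5               # prior uncertainty on the log scale

# Approximate likelihood summary from the data (illustrative values):
data_mean = math.log(3.2)    # estimated log(RR)
data_sd = 0.33               # its standard error

# Normal-normal conjugate update: precisions (1/variance) simply add.
prior_prec, data_prec = 1 / prior_sd**2, 1 / data_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
post_sd = math.sqrt(1 / post_prec)

print(f"posterior RR = {math.exp(post_mean):.2f}")
print(f"posterior 95% interval = ({math.exp(post_mean - 1.96 * post_sd):.2f}, "
      f"{math.exp(post_mean + 1.96 * post_sd):.2f})")
```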

In summary, the result from the study could be incorrect, or the confidence about it overstated: the RR could be of the wrong magnitude, and conceivably even of the wrong direction (RR > 1 versus RR < 1).
