I'm in the military, and my friend gave me these questions to see if I would be interested in going into business intelligence. I don't know how to do them, but I would like to learn.
Pretend you are hiring for a company. You have an infinite number of applicants, and you want to find someone in the top 2% of quality with 90% confidence. How many interviews would you schedule? Explain your answer, reasoning, and any assumptions made.
Pretend you work for a company, and you are testing a new offer. Offers are shown to visitors online, and they either convert or pass on the offer. For this particular offer, a conversion is worth $1. So far, the offer has been shown a total of 500 times with 75 conversions.
Based on historical precedent, offers need an expected value of $0.20 per impression to be successful. Would you continue to test this offer? Explain your answer and reasoning, showing all work.
Hint: Consider this a random variable with two distinct outcomes in the probability space. One outcome results in $1 in revenue, the other results in $0
Pretend you work for a company. Revenue numbers just came in for last month, and all of the offers you manage underpaid by $20,000. The CEO is furious. He won't stop screaming and kicking things. You start to fear for the well-being of the dogs running around the office.
You decide to make adjustments to reporting, with the goal of overstating revenue by roughly 5%. Using the attached data set, make adjustments to the reported conversion values. Show all work and explain your reasoning.

Reported Revenue    Revenue Received
$   4,800.00        $   4,805.00
$  38,750.00        $  38,750.00
$  14,450.00        $  14,448.00
$   1,800.00        $     950.00
$ 390,000.00        $ 375,000.00
$  14,250.00        $  10,450.00
$     250.00        $     347.00
$   5,250.00        $   4,800.00
$  37,500.00        $  37,500.00
(1) Assume that the population of candidate scores is approximately normal. (If you plotted the frequencies of the scores versus the scores the graph would be roughly "bell"-shaped, symmetric about a single high point.)
We do not know either the mean (average) score or the standard deviation of the scores. (And since there are an infinite number of applicants, we cannot know them in principle.) However, assuming the population is approximately normal we can use the central limit theorem. For a sufficiently large sample, the mean of the sample can be used as an approximation for the population mean. With a small modification, the standard deviation of the sample can also be used to approximate the population standard deviation.
Let x represent the mean of the sample. This will be used in place of the population mean. Let s be the sample standard deviation. We can use this to approximate the population standard deviation. (In reality, s is smaller than the population standard deviation, but that will just increase the size of the sample we need a little.)
Then if n is the sample size we need, use n = (1.645*s/E)^2, where E is the maximum error you can tolerate in the estimate of the mean. The 1.645 is the z-value for the 90% confidence that you require. Round n up to the next integer. Once the mean and standard deviation are estimated, a candidate scoring above x + 2.05*s is in roughly the top 2%, since 2.05 is the z-value that cuts off the upper 2% of a normal distribution.
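A minimal sketch of that sample-size calculation, n = ceil((1.645*s/E)^2). The question gives no actual score data, so the values of s, E, and the sample mean below are hypothetical placeholders:

```python
import math

# Hypothetical inputs: suppose pilot interviews suggested a sample standard
# deviation of 10 score points, and we can tolerate an error of 2 points
# in our estimate of the mean score.
s = 10.0
E = 2.0

z90 = 1.645   # z-value for 90% confidence
z98 = 2.054   # z-value cutting off the top 2% of a normal distribution

# Sample size needed to estimate the mean within E at 90% confidence.
n = math.ceil((z90 * s / E) ** 2)
print(n)  # 68 interviews under these assumed values

# Given an estimated mean, the top-2% cutoff score would be:
x_bar = 70.0  # hypothetical sample mean
cutoff = x_bar + z98 * s
print(cutoff)  # 90.54
```

With different assumed values of s and E the required n changes quadratically, which is the main practical takeaway from the formula.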
(2) This can be modeled as a binomial experiment. Let the number of trials be n = 500 and the number of successes x = 75. Since each conversion is worth $1, the expected value per impression is $1 * p = p dollars, so the offer needs p >= .2 to be successful. (Equivalently, the expected number of conversions in 500 trials would be mu = n*p = 100.)
The question is: could you get as few as 75 successes in 500 trials by chance, if p = .2?
The null hypothesis is that p = .2, while the alternative hypothesis is that p != .2.
Let us test the claim at the 95% confidence level. Then the critical values are +-1.96.
The sample proportion is 75/500 = .15, so the test value is z = (.15 - .2)/sqrt((.2)(.8)/500) = -2.79. Since this falls in the critical region, reject the null hypothesis: the shortfall probably did not happen by chance.
Therefore, we should not continue to test this offer.
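The test above can be reproduced numerically; a minimal sketch using only the figures given in the question:

```python
import math

# Observed data from the offer test.
n = 500      # impressions
x = 75       # conversions, each worth $1
p0 = 0.20    # required expected value per impression ($0.20 per $1 conversion)

p_hat = x / n                          # observed conversion rate
se = math.sqrt(p0 * (1 - p0) / n)      # standard error under the null hypothesis
z = (p_hat - p0) / se

print(p_hat)           # 0.15
print(round(z, 3))     # -2.795
print(abs(z) > 1.96)   # True -> reject the null hypothesis at the 95% level
```

Since z is well below -1.96, the observed rate is significantly under the $0.20-per-impression threshold, matching the conclusion above.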
(3) It is tempting to just multiply each entry by 1.05, thus increasing each by 5%. There are at least two problems with this:
(a) The scaled results include cent amounts, where the original data are all whole-dollar figures.
(b) You may run afoul of Benford's Law: in many real-life data sets, leading digits are not uniformly distributed. Smaller leading digits occur more often than larger ones; 1 appears as the leading digit roughly 30% of the time, while 9 appears less than 5% of the time.
So I would round all results to whole numbers. Then notice that there are too many 4's and 5's as leading digits. You will need to add to some of the smaller numbers while taking away a bit from the larger numbers.
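A minimal sketch of the naive 5% scaling, the rounding to whole dollars, and a Benford's-Law check of the resulting leading digits, using the reported figures from the question:

```python
import math
from collections import Counter

# Reported revenue figures from the data set in the question.
reported = [4800, 38750, 14450, 1800, 390000, 14250, 250, 5250, 37500]

# Naive adjustment: scale everything up 5% and round to whole dollars.
adjusted = [round(v * 1.05) for v in reported]

# Confirm the overall overstatement is still roughly 5%.
print(sum(adjusted) / sum(reported))  # ~1.05

# Benford's Law: P(leading digit = d) = log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Compare observed leading-digit frequencies of the adjusted list to Benford.
leading = Counter(str(v)[0] for v in adjusted)
for d in range(1, 10):
    observed = leading[str(d)] / len(adjusted)
    print(d, round(observed, 2), round(benford[d], 2))
```

Note that Python's round uses round-half-to-even; any rounding rule works here, since only the approximate 5% total matters. The printed comparison shows the surplus of 4's and 5's as leading digits relative to Benford's expected frequencies, which is the cue to shift a few values up or down as described above.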
A new list might be something like: