title: "Chapter 3 Textbook exercises"
subtitle: "Solutions to even-numbered questions \nStatistics and statistical programming \nNorthwestern University \nMTS
date: "September 30, 2020"
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
All exercises are taken from the *OpenIntro Statistics* textbook, $4^{th}$ edition, Chapter 3.

### 3.12 School absences
(a) By the addition rule: $P(no~missed~days) = 1 - (0.25 + 0.15 + 0.28) = 0.32$
(b) $P(1~miss~or~less) = P(no~misses) + P(1~miss)$
$= 0.32 + 0.25 = 0.57$
(c) $P(at~least~1~miss) = P(1~miss) + P(2~misses) + P(\geq 3~misses)$
$= 1 - P(no~misses) = 1 - 0.32 = 0.68$
(d) Assume (foolishly!) that the absences are independent across children. This allows us to use the multiplication rule:
$P(neither~miss~any) = P(no~misses) \times P(no~misses) = 0.32^2 = 0.1024$
(e) Again, assume that the absences are independent across children and use the multiplication rule (a quick R check of parts (a)-(e) appears after part (f)):
$P(both~miss~some) = P(at~least~1~miss) \times P(at~least~1~miss) = 0.68^2 = 0.4624$
(f) Siblings often cohabitate and are therefore likely to get each other sick, so the independence assumption is not sound.
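As a quick sanity check, here is one way to reproduce the arithmetic for parts (a) through (e) in R (the vector `p` and its labels are my own shorthand, not from the textbook):

```{r}
# Probabilities of missed school days for one kid, from the exercise table
p <- c("0" = 0.32, "1" = 0.25, "2" = 0.15, "3+" = 0.28)

sum(p)            # sanity check: the probabilities sum to 1
p["0"] + p["1"]   # part (b): one missed day or fewer
1 - p["0"]        # part (c): at least one missed day
p["0"]^2          # part (d): neither kid misses any school (assumes independence)
(1 - p["0"])^2    # part (e): both kids miss at least one day (assumes independence)
```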
This one is all about conditional and compound probabilities and could be represented as a tree diagram (if you find those useful).

$$
P(support~|~college) = \frac{P(support~and~college)}{P(college)}\\
\phantom{P(support~|~college)} = \frac{0.1961}{0.1961 + 0.2068}\\
\phantom{P(support~|~college)} = 0.49
$$
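And the same division in R, just to confirm the rounding (the two joint probabilities are taken straight from the exercise table):

```{r}
# P(support | college) = P(support and college) / P(college)
p_support_and_college    <- 0.1961
p_no_support_and_college <- 0.2068
p_support_and_college / (p_support_and_college + p_no_support_and_college)
```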
(a) Once you have one person's birthday, the probability that the second person has the same birthday is:
$$P(first~two~share~birthday) = \frac{1}{365} = 0.0027$$

(b) This one is more challenging! There are many possible approaches, but I find it easiest to think about the probability that none of the three share a birthday in the following way: start with the probability that the first two *don't* share a birthday, followed by the probability that the next person doesn't share a birthday either. This makes it possible to apply the general multiplication rule:
$$
P(at~least~two~share~birthday) = 1-P(none~of~three~share~birthday)\\
\phantom{P(at~least~two~share~birthday)}=1-P(first~two~don't~share) \times P(third~doesn't~share~either)\\
\phantom{P(at~least~two~share~birthday)}=1-(\frac{364}{365}) \times (\frac{363}{365})\\
\phantom{P(at~least~two~share~birthday)}=0.0082
$$
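Here is the same calculation in R, plus a small helper that generalizes it to $n$ people (the helper function is my own sketch, assuming 365 equally likely birthdays; it is not part of the textbook solution):

```{r}
# Part (b): probability that at least two of three people share a birthday
1 - (364/365) * (363/365)

# Generalization to n people under the same assumptions
p_shared_birthday <- function(n) {
  1 - prod((365 - seq_len(n) + 1) / 365)
}
p_shared_birthday(3)
```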
### 3.34 Airline baggage fees

First, the average fee per passenger (let's call that $\bar{F}$) is the expected value of the fee, which we find by summing, across the three possible fee levels (determined by the number of bags checked), the fee at each level (let's write this $f_{b}$) times the probability (here, the observed proportion) of passengers checking that number of bags (call that $P_{b}$). We can now put that in slightly more formal notation and work out the arithmetic:
$$
\bar{F} = E(Fee~per~passenger) = \sum_{b=0}^2{(f_{b}\times P_{b})}\\
\phantom{\bar{F} } = \$0(0.54) + \$25(0.34) + \$60(0.12)\\
\phantom{\bar{F} } = \$0 + \$8.50 + \$7.20 = \$15.70
$$
To calculate the standard deviation of the expected value, we need to find the square root of the variance. To find the variance, we take the deviation (difference from the expected value) at each fee level, square it, weight each squared deviation by the probability (again, the observed proportion) of the respective fee level, and then sum them up. Here's what that looks like:
$$\begin{array}{l|r r}
\text{Bags} & (F - E(F))^2 = & \text{Squared deviation}\\
\hline
0 & (0-15.70)^2 = & 246.49 \\
1 & (25-15.70)^2 = & 86.49 \\
2 & (60-15.70)^2 = & 1962.49
\end{array}$$
$$\begin{array}{l| r r}
\text{Bags} & (F - E(F))^2\times P(F) = & \text{Weighted squared deviation}\\
\hline
0 & 246.49 \times 0.54 =& 133.10\\
1 & 86.49 \times 0.34 =& 29.41\\
2 & 1962.49 \times 0.12 =& 235.50
\end{array}$$
I sum that last column of values to find the variance (traditionally notated using the Greek letter sigma squared, $\sigma^2$):
$${\sigma_{\bar{F}}}^2 = \$133.10 + \$29.41 + \$235.50 = \$398.01$$

And take the square root to find the standard deviation (traditionally notated as sigma, $\sigma$):
$$ \sigma_{\bar{F}} = \sqrt{\$398.01} = \$19.95 $$
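The same arithmetic is easy to reproduce in R (the vectors `fee` and `prop` below just encode the fee schedule and passenger proportions from the exercise; the object names are my own):

```{r}
# Per-passenger expected fee, variance, and standard deviation
fee  <- c(0, 25, 60)           # fee at each bag level, in dollars
prop <- c(0.54, 0.34, 0.12)    # proportion of passengers at each level

exp_fee <- sum(fee * prop)                # expected value, about $15.70
var_fee <- sum((fee - exp_fee)^2 * prop)  # variance, about 398.01
sd_fee  <- sqrt(var_fee)                  # standard deviation, about $19.95

c(expected = exp_fee, variance = var_fee, sd = sd_fee)
```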
To calculate the revenue for a plane of 120 passengers using the tools introduced in the chapter, we'll need to assume independence between the baggage choices of individual passengers. (This assumption is probably wrong, but maybe not catastrophic for the precision of our estimate? Who knows.)

Once we assume independence between passengers, we can calculate the expected total revenue (let's call that $E(revenue)$) by summing the individual expected revenue over the 120 passengers. We can calculate the corresponding standard deviation of the expected total revenue by summing the individual variances and then taking the square root of that sum. Plug and chug using the values we calculated in Part a of this exercise to find the answers:
$$\begin{array}{r r r}
E(revenue) =& 120 \times \$15.70 =& \$1,884\\
{\sigma_{E(revenue)}}^2 =& 120 \times \$398.01 =& \$47,761.20\\
{\sigma_{E(revenue)}}\phantom{^2} =& \sqrt{\$47,761.20} =& \$218.54
\end{array}$$
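In R, reusing `fee`, `prop`, `exp_fee`, and `var_fee` from the chunk above (plus, as a bonus sketch that is not part of the textbook solution, a small simulation that leans on the same independence assumption):

```{r}
# Expected total revenue and its standard deviation for 120 independent passengers
n <- 120
c(expected_revenue = n * exp_fee,
  sd_revenue       = sqrt(n * var_fee))

# Simulation-based sanity check
set.seed(20200930)
sims <- replicate(10000, sum(sample(fee, size = n, replace = TRUE, prob = prop)))
c(mean(sims), sd(sims))
```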
### 3.38 Income and gender

(a) The distribution is right-skewed, with a median somewhere around \$35,000-\$50,000. There's a long tail out to the right (high positive values).
(b) By the addition rule:

$$P(Income \lt \$50k) = 2.2 + 4.7 + 15.8 + 18.3 + 21.2 = 62.2\%$$
(c) If we assume that income and gender are independent, then we can use the multiplication rule for independent processes to work out an answer based on compound probability (a quick R check of parts (b) and (c) appears after part (d)):

$$P(Income \lt \$50k~and~female) = P(Income \lt \$50k) \times P(female) = 0.622 \times 0.41 = 0.255$$
(d) If the variables income and gender were independent (unrelated), then we might expect the actual proportion of women with incomes less than $\$50k$ to equal the overall sample proportion. The actual proportion of women with incomes less than $\$50k$ ($71.8\%$) turned out to be a lot higher than the overall proportion ($62.2\%$ from the table). Shockingly, it seems that the assumption that income and gender are independent (unrelated) may not be valid for these data.
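And the quick R check of the parts (b) and (c) arithmetic (the percentages come from the exercise table, and 0.41 is the proportion of women given in the exercise):

```{r}
# Part (b): P(income < $50k), summing the relevant table percentages
p_below_50k <- sum(c(2.2, 4.7, 15.8, 18.3, 21.2)) / 100
p_below_50k

# Part (c): joint probability under the independence assumption
p_female <- 0.41
p_below_50k * p_female
```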