---
title: "Problem set 8: Worked solutions"
subtitle: "Statistics and statistical programming \nNorthwestern University \nMTS 525"
author: "Aaron Shaw"
date: "November 23, 2020"
output:
  pdf_document:
    toc: yes
    toc_depth: '3'
  html_document:
    toc: yes
    toc_depth: 3
    toc_float:
      collapsed: true
      smooth_scroll: true
    theme: readable
header-includes:
  - \newcommand{\lt}{<}
  - \newcommand{\gt}{>}
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, message=FALSE, tidy='styler')
```

# Import libraries

I'll start by loading some useful libraries...

```{r dependencies}
library(openintro)
library(tidyverse)
library(ggfortify)
library(haven)
```

# Part I: Mario kart replication/extension

## Import data and setup

```{r}
data(mariokart)
mariokart
```

To make things a bit easier to manage, I'll select the variables I want to use in the analysis and do some cleanup. Note that I convert the `cond_new` and `stock_photo` variables to logical first (using boolean comparisons) and then coerce them to numeric values (using `as.numeric()`). This yields 1/0 values corresponding to the observations shown/described in Figures 9.13 and 9.14 on p. 365 of the textbook.

```{r}
mariokart <- mariokart %>%
  select(
    price = total_pr,
    cond_new = cond,
    stock_photo,
    duration,
    wheels
  ) %>%
  mutate(
    cond_new = as.numeric(cond_new == "new"),
    stock_photo = as.numeric(stock_photo == "yes")
  )

mariokart
```

Now let's look at the variables in our model: summary statistics for each, univariate density plots for the continuous measures, and some bivariate plots of each predictor against the outcome variable.
```{r}
summary(mariokart)

sd(mariokart$price)
sd(mariokart$duration)
sd(mariokart$wheels)

qplot(data=mariokart, x=price, geom="density")
qplot(data=mariokart, x=duration, geom="density")
qplot(data=mariokart, x=wheels, geom="density")

## wrap the 1/0 indicators in factor() so geom_boxplot() draws one box per group
ggplot(data=mariokart, aes(factor(cond_new), price)) + geom_boxplot()
ggplot(data=mariokart, aes(factor(stock_photo), price)) + geom_boxplot()
ggplot(data=mariokart, aes(duration, price)) + geom_point()
ggplot(data=mariokart, aes(wheels, price)) + geom_point()
```

I'm also going to calculate correlation coefficients for all of the variables. We can discuss what to make of this in class:

```{r}
cor(mariokart)
```

## Replicate model results

The description of the model in the textbook corresponds to the following call to `lm()`:

```{r}
model <- lm(price ~ cond_new + stock_photo + duration + wheels, data=mariokart)
summary(model)
```

While I'm looking at that, I'll go ahead and calculate a confidence interval around the only parameter for which the model rejects the null hypothesis (`wheels`):

```{r}
confint(model, "wheels")
```

Overall, this resembles the model results in Figure 9.15, but notice that it differs in a number of ways! Without more information from the authors of the textbook, it's very hard to determine exactly where or why the differences emerge.

## Assess model fit/assumptions

I've already generated a bunch of univariate and bivariate summaries and plots. Let's inspect the residuals more closely to see what else we can learn. I'll use the `autoplot()` function from the `ggfortify` package to help me do this.

```{r}
autoplot(model)
```

Overall, there are a number of issues with this model fit that I'd want to mention/consider:

* The distribution of the dependent variable (`price`) is very skewed, with two extreme outliers. I'd recommend trying some transformations to see if it would look more appropriate for a linear regression and/or inspecting the cases that produced the outlying values more closely to understand what's happening there.
* The plots of the residuals reveal that those same two outlying points are also outliers with respect to the line of best fit. That said, they are not exerting huge amounts of leverage on the estimates, so it's possible that the estimates from the fitted model wouldn't change much without those two points. Indeed, based on the degrees of freedom reported in Figure 9.15 (136) vs. the number reported in our version of the model (138), my best guess is that the textbook authors silently dropped those two outlying observations from their model!

More out of curiosity than anything, I'll create a version of the model that drops the two largest values of price. From the plots, I can see that those two are the only ones above \$100, so I'll use that information here:

```{r}
summary(
  lm(price ~ cond_new + stock_photo + duration + wheels,
     data = mariokart[mariokart$price < 100, ])
)
```

What do you know. That was it.

## Interpret some results

The issues above notwithstanding, we can march ahead and interpret the model results. Here are some general comments and some specifically focused on the `cond_new` and `stock_photo` variables:

* Overall, the linear model regressing total auction price on condition, stock photo, duration, and number of Wii wheels shows evidence of a positive, significant relationship between number of wheels and price. According to this model fit, an increase of one wheel is associated with a total auction price increase of about \$10, with a 95% confidence interval of (\$4.57, \$15.32).
* The point estimate for selling a new condition game is positive, but with a large standard error. As a result, the model fails to reject the null of no association and provides no evidence of any relationship between the game condition and auction price.
* The point estimate for including a stock photo is negative, but again, the standard error is very large and the model fails to reject the null hypothesis.
There is no evidence of any relationship between including a stock photo and the final auction price.

## Recommendations

Based on this model result, I'd recommend that the prospective vendor of a **used** copy of the game not worry about it too much unless they can get their hands on some extra Wii wheels, since the number of Wii wheels explains variation in the auction price outcome more than anything else they can control (such as whether or not a stock photo is included).

# Part II: Hypothetical study

## Import and explore

I'll start off by importing things and summarizing the different variables we care about here:

```{r}
grads <- readRDS(url("https://communitydata.science/~ads/teaching/2020/stats/data/week_11/grads.rds"))

summary(grads)
table(grads$cohort)
sd(grads$gpa)
sd(grads$income)

qplot(data=grads, x=gpa, geom="density")
qplot(data=grads, x=income, geom="density")

ggplot(data=grads, aes(x=gpa, y=income)) + geom_point()
```

I'll also calculate some summary statistics and visual comparisons within cohorts:

```{r}
tapply(grads$income, grads$cohort, summary)
tapply(grads$income, grads$cohort, sd)
tapply(grads$gpa, grads$cohort, summary)
tapply(grads$gpa, grads$cohort, sd)
```

Huh. Those are remarkably similar values for the group means and the group standard deviations... Onwards to plotting:

```{r}
ggplot(data=grads, aes(x=cohort, y=income)) + geom_boxplot()
ggplot(data=grads, aes(x=gpa, y=income, color=cohort)) + geom_point()
```

Those plots are also a little strange. I know this is just a simulated analysis, but it still seems weird that the data overall look like a big mass of random points, yet when I add colors by cohort I can see lines and other regularities within groups. I wonder what happens when I plot each scatter within cohorts?

```{r}
ggplot(data=grads, aes(x=gpa, y=income, color=cohort)) +
  geom_point() +
  facet_wrap(vars(cohort))
```

Hmmm.
That's...absurd (in particular, cohort 8 looks like a sideways dinosaur). At this point, if I were really working as a consultant on this project, I would write to the client and start asking some uncomfortable questions about data quality (who collected this data? how did it get recorded/stored/etc.? what quality controls were in place?). I would also feel obligated to tell them that there's just no way the data correspond to the variables they think are here.

If you did that and the client was honest, they might tell you [where the data actually came from](https://www.autodesk.com/research/publications/same-stats-different-graphs).

In the event that you marched ahead with the analysis and are curious about what that could have looked like, I've provided some example code below. That said, *this is a situation where the assumptions and conditions necessary for ANOVA, t-tests, or regression to identify meaningful relationships are all pretty broken* because the data were generated programmatically in ways that undermine the kinds of interpretation you've been asked to make. The best response here (IMHO) is to abandon these kinds of analysis once you discover that there's something systematically weird going on. The statistical procedures will "work" in the sense that they will return a result, but because those results aren't even close to meaningful, any relationships you do observe in the data reflect something different than the sorts of relationships the statistical procedures were designed to identify.

```{r}
summary(aov(income ~ cohort, data=grads)) # no global differences of means across cohorts

summary(grads.model <- lm(income ~ gpa + cohort, data=grads)) # gpa percentile has a small, negative association with income

confint(grads.model, "gpa") # 95% confidence interval
```

Note that the failure to reject the null of any association between cohort and income in the ANOVA does not provide conclusive evidence that the relationship between GPA and income does not vary by cohort.
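One direct (and, again, not requested) way to probe whether the GPA slope varies by cohort would be to fit an interaction model. This is just a sketch: the `gpa * cohort` formula syntax expands to the main effects plus their interaction, so the `gpa:cohort` coefficients estimate how each cohort's slope differs from the baseline cohort's.

```r
## Sketch only: per-cohort slopes via an interaction model.
## gpa * cohort expands to gpa + cohort + gpa:cohort.
grads.int <- lm(income ~ gpa * cohort, data = grads)
summary(grads.int)

## An F-test comparing this fit against the additive model asks
## whether allowing the slopes to differ improves fit at all:
anova(lm(income ~ gpa + cohort, data = grads), grads.int)
```

Given how broken these data are, I'd treat any output here as illustrative of the mechanics, not as substantive evidence.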
There were several things you might have done here. One is to calculate correlation coefficients within groups. Here's some tidyverse code that does that:

```{r}
grads %>%
  group_by(cohort) %>%
  summarize(
    correlation = cor(income, gpa)
  )
```

Because these correlation coefficients are nearly identical, I would likely end my analysis here and conclude that the correlation between GPA and income appears to be consistently small and negative. If you wanted to go further, you could fit an interaction term (by including `gpa:cohort` in the model formula), but the analysis up to this point gives no indication that you'd be likely to find much of anything (and we haven't really talked about interactions yet).

# Part III: Trick or treating again

## Import and update data

```{r import}
## Reminder: the read_dta() function requires the haven library
df <- read_dta(url('https://communitydata.science/~ads/teaching/2020/stats/data/week_07/Halloween2012-2014-2015_PLOS.dta'))

df <- df %>%
  mutate(
    obama = as.logical(obama),
    fruit = as.logical(fruit),
    year = as.factor(year),
    male = as.logical(male),
    age = age - mean(age, na.rm=TRUE)  # mean-center age
  )

df
```

Let's fit and summarize the model:

```{r}
f <- formula(fruit ~ obama + age + male + year)
fit <- glm(f, data=df, family=binomial("logit"))
summary(fit)
```

Interesting. Looks like adjusting for these other variables in a regression setting can impact the results. Onwards to generating more interpretable results:

```{r}
## Odds ratios (exponentiated log-odds!)
exp(coef(fit))
exp(confint(fit))

## Model-predicted probabilities for prototypical observations:
fake.kids <- data.frame(
  obama = rep(c(FALSE, TRUE), 2),
  year = factor(rep(c("2015", "2012"), 2)),
  age = rep(c(9, 7), 2),
  male = rep(c(FALSE, TRUE), 2)
)

fake.kids.pred <- cbind(
  fake.kids,
  pred.prob = predict(fit, fake.kids, type="response")
)

fake.kids.pred
```

Note that [this UCLA logit regression tutorial](https://stats.idre.ucla.edu/r/dae/logit-regression/) also contains example code to help extract standard errors and confidence intervals around these predicted probabilities. You were not asked to produce them here, but if you'd like an example, here you go (I can try to clarify in class):

```{r}
fake.kids.more.pred <- cbind(
  fake.kids,
  predict(fit, fake.kids, type="link", se=TRUE)
)

within(fake.kids.more.pred, {
  pred.prob <- plogis(fit)
  lower <- plogis(fit - (1.96 * se.fit))
  upper <- plogis(fit + (1.96 * se.fit))
})
```

## Sub-group analysis

```{r}
f2 <- formula(fruit ~ obama + age + male)

summary(glm(f2, data=df[df$year == "2012",], family=binomial("logit")))
summary(glm(f2, data=df[df$year == "2014",], family=binomial("logit")))
summary(glm(f2, data=df[df$year == "2015",], family=binomial("logit")))
```

Interesting. The treatment effect seems to emerge overwhelmingly within a single year of the data.
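An equivalent way to see this in a single model (my own sketch, not part of the assignment) is to interact the treatment with year in the pooled data. The `obama:year` coefficients then estimate how the treatment effect in each later year differs from the 2012 baseline, which gets at the same question as the three separate sub-group fits.

```r
## Sketch only: pooled model with a treatment-by-year interaction.
## obama * year expands to obama + year + obama:year, so the
## interaction terms compare each year's treatment effect to 2012's.
fit.int <- glm(fruit ~ obama * year + age + male,
               data = df, family = binomial("logit"))
summary(fit.int)
```

One advantage of this approach over the sub-group fits is that it uses all of the data to estimate the `age` and `male` adjustments, and it yields a direct test of whether the treatment effect differs across years.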