X-Git-Url: https://code.communitydata.science/stats_class_2020.git/blobdiff_plain/4bd11a0174b122e4587d832ce9035acb5467e039..refs/heads/master:/r_tutorials/w11-R_tutorial.rmd

diff --git a/r_tutorials/w11-R_tutorial.rmd b/r_tutorials/w11-R_tutorial.rmd
index 919ff4c..69e1e2d 100644
--- a/r_tutorials/w11-R_tutorial.rmd
+++ b/r_tutorials/w11-R_tutorial.rmd
@@ -1,11 +1,8 @@
 ---
-title: "Week 11 (ha) R tutorial"
+title: "Week 11 R tutorial"
 author: "Aaron Shaw"
 date: "November 24, 2020"
 output:
-  pdf_document:
-    toc: yes
-    toc_depth: '3'
   html_document:
     toc: yes
     toc_depth: 3
@@ -13,6 +10,9 @@ output:
       collapsed: true
       smooth_scroll: true
   theme: readable
+  pdf_document:
+    toc: yes
+    toc_depth: '3'
 header-includes:
   - \newcommand{\lt}{<}
   - \newcommand{\gt}{>}
@@ -48,7 +48,7 @@ It is often necessary to transform variables for modeling (we'll discuss some re
 
 ### Interaction terms
 
-Interaction terms are best handled using the `I()` function (note the capitalization). I start with a "base" model and update it to add the interaction.
+Interaction terms are often best handled using the `I()` function (note the capitalization). You can also incorporate interactions by multiplying the two variables involved directly within the model formula. In the example below, I start with a "base" model and then update it to add the interaction term.
 
 ```{r}
 m.base <- formula(y ~ x + j)
@@ -59,9 +59,15 @@ m.i <- update.formula(m.base, . ~ . + I(x*j))
 summary(lm(m.i, data=d))
 ```
+Evaluating whether interaction terms are important or improve the fit of your model is a topic best left out of this discussion for now. That said, if you need to include an interaction term, you now have a basic idea of how to do it. Interpreting interaction terms is generally best done using model-predicted values. For example, in this case where `x` is continuous and `j` is dichotomous, you might generate a "hypothetical" dataset incorporating the range of observed values for `x` at each of the two values of `j` (that would yield a "fake" data frame with $n \times 2$ rows). You can then plot the predicted values of `y` for each of the `j` categories over the `x` distribution (imagine: a plot with `x` on the x-axis, `y` on the y-axis, and lines of different colors corresponding to the different levels of `j`).[^1]
+
+[^1]: An okay example of this appears in [this paper](https://doi.org/10.1093/joc/jqx003) that I worked on a few years ago.
+
+Needless to say, there's a lot more to be said about interactions. You can read more [in this econometrics textbook](https://www.econometrics-with-r.org/8-3-interactions-between-independent-variables.html), which seems to have pretty thorough coverage of the fundamentals as well as example R code.
+
 
 ### Polynomial (square, cube, etc.) terms
 
-Polynomial terms can easily be created using `I()` as well:
+Polynomial terms can easily be created using `I()` as well. You can also create a new variable in your data called `x.2` or something like that by writing `x.2 <- x^2` if that seems simpler. Either way, you'll wind up adding that to your model formula (either by creating a new formula or using the `update.formula()` function as I do here).
 
 ```{r}
 m.poly <- update.formula(m.base, . ~ . + I(x^2))
@@ -69,7 +75,7 @@ summary(lm(m.poly, data=d))
 ```
 Need higher order polynomials? Try including `I(x^3)` and so on...
 
-Creating polynomials this way is intuitive, but can also create a little bit of a messy situation for reasons that go beyond the scope of our course. In these circumstances, using the `poly()` function is useful (look up "orthogonalized polynomials" online to learn more). Generally speaking, creating polynomials in this way impacts the interpretation as well as the model estimates, so you should only use it if you need to and once you've taken the time to actually learn what is happening. That said, here's what the code could look like:
+Creating polynomials this way is intuitive, but can also create a little bit of a messy situation for reasons that go beyond the scope of our course (look up "orthogonalized polynomials" online to learn more). In these circumstances, using the `poly()` function is useful, but potentially confusing. Generally speaking, creating polynomials with `poly()` impacts the interpretation as well as the model estimates, so you should only use it if you need to and once you've taken the time to actually learn what is happening. That said, here's what the code could look like:
 
 ```{r}
 m.poly2 <- formula(y ~ j + poly(x,2))
@@ -79,7 +85,7 @@ Higher order (to the nth degree) terms can be created by using higher values of
 
 ### Log-transformations
 
-We covered log transformations (usually natural logarithms) before, but just in case, here they are again. I usually default to using `log1p()` because it is less prone ot fail in the event your data (like mine) contains many zeroes. That said, if you have a lot of -1 values you may need something else:
+We covered log transformations (usually natural logarithms) before, but just in case, here they are again. I usually default to using `log1p()` because it is less prone to fail in the event your data (like most of mine) contains many zeroes. That said, if you have a lot of -1 values you may need something else:
 
 ```{r}
 m.log <- update.formula(m.base, . ~ log1p(x) + j)
@@ -87,14 +93,16 @@ summary(lm(m.log, data=d))
 ```
 Keep in mind that you can use other bases for your logarithmic transformations. Check out the documentation for `log()` for more information.
 
-## Interpreting regression results with model-predicted values
+## Why model-predicted values?
 When you report the results of a regression model, you should provide a table summarizing the model as well as some interpretation that renders the model results back into the original, human-intelligible measures and units specific to the study.
 
-This was covered in one of the [resources](https://communitydata.science/~mako/2017-COM521/logistic_regression_interpretation.html) I distributed last week (the handout on logistic regression from Mako Hill), but I wanted to bring it back because it is **important**. Please revisit that handout to see a worked example that walks through the process. The rest of this text is a bit of a rant about why you should bother to do so.
+This was covered in one of the [resources](https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/logistic_regression_interpretation.html) I distributed last week (the handout on logistic regression), but I wanted to bring it back because it is **important**. Please revisit that handout to see a worked example that walks through the process. The rest of this text is a bit of a rant about why you should bother to do so.
+
+When is a regression table not enough? In textbook/homework examples, this is not an issue, but in real data it matters all the time. Recall that the coefficient estimated for any single predictor is the expected change in the outcome for a 1-unit change in the predictor *holding all the other predictors constant.* What value are those other predictors held constant at? Algebraically, the best answer is zero![^2] This is unlikely to be the most helpful or intuitive way to understand your estimates (for example, what if you have a dichotomous predictor, what does it mean then?). Once your models get even a little bit complicated (quick, exponentiate a log-transformed value and tell me what it means!), the regression-table-alone approach becomes arguably worse than useless.
 
-When is a regression table not enough? In textbook/homework examples, this is not an issue, but in real data it matters all the time. Recall that the coefficient estimated for any single predictor is the expected change in the outcome for a 1-unit change in the predictor *holding all the other predictors constant.* What value are those other predictors held constant at? Zero! This is unlikely to be the most helpful or intuitive way to understand your estimates (for example, what if you have a dichotomous predictor, what does it mean then?). Once your models get even a little bit complicated (quick, exponentiate a log-transformed value and tell me what it means!), the regression-table-alone approach becomes arguably worse than useless.
+[^2]: Think about it this way: an ordinary least squares regression equation can be written $\hat{y} = \alpha + \beta_1x_1 + \beta_2x_2$. When I want to talk about the change in $\hat{y}$ associated with a one-unit change in $x_1$, I am essentially taking $x_2$ out of the equation. In other words, I am acting as if $x_2=0$ (and as if $\alpha = 0$, although that's a separate topic) for the purposes of interpreting the $\beta_1$ coefficient directly.
 
-What is to be done? Provide predicted estimates for real, reasonable examples drawn from your dataset! For instance, if you were regressing lifetime earnings on a bunch of different predictors including years of education, gender, race, age, and height, you would, of course, start by showing your readers the table that includes all of the coefficients, standard errors, etc. Then you chould also provide some specific predicted values for "prototypical" individuals in your dataset. Regression models usually incorporate earnings as a square-root or log-transformed measure, so the table of raw model results won't be easy to interpret. It is probably far more helpful to translate the model results into estimates of how much more/less you would we estimate 30 year old woman of average height with a college degree to change if they were white vs. asian/pacific islander. These prototypical predicted values (also sometimes referred to as "marginal effects") may be presented as specific point-estimates in the text and/or using a visualization of some sort (e.g., lines plotting the predicted lifetime earnings by race over the observed range of age...). You can (and should!) even generate confidence intervals around them (But that's a whole separate rant...).
+What is to be done? Provide predicted estimates for real, reasonable examples drawn from your dataset! For instance, if you were regressing lifetime earnings on a bunch of different predictors including years of education, gender, race, age, and height, you would, of course, start by showing your readers the table that includes all of the coefficients, standard errors, etc. Then you could also provide some specific predicted values for "prototypical" individuals in your dataset. Regression models usually incorporate earnings as a square-root or log-transformed measure, so the table of raw model results won't be easy to interpret. This only gets worse if you have interactions, polynomial terms, etc. It is far more helpful to translate the model results into estimates of (for example) how much more/less we would expect the lifetime earnings of a 30 year old woman of average height with a college degree to be if she were white vs. Asian/Pacific Islander. These prototypical predicted values (also sometimes referred to as "marginal effects") may be presented as specific point-estimates in the text and/or using a visualization of some sort (e.g., lines plotting the predicted lifetime earnings by race over the observed range of age...). You can (and should!) even generate confidence intervals around them.
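+To make this concrete with the toy `d` data from the transformation examples above, here is a minimal sketch of generating and plotting model-predicted values. The interaction model, the assumption that `j` is a dichotomous 0/1 variable, and the choice of a 50-point grid for `x` are all illustrative choices here; adapt them to your own data.
+
+```{r}
+## Refit the interaction model from earlier
+fit <- lm(y ~ x + j + I(x*j), data=d)
+
+## "Fake" data frame: the observed range of x at each value of j
+fake <- expand.grid(x=seq(min(d$x), max(d$x), length.out=50), j=c(0, 1))
+
+## Model-predicted values of y with confidence intervals; predict()
+## returns columns named "fit", "lwr", and "upr"
+fake <- cbind(fake, predict(fit, newdata=fake, interval="confidence"))
+
+## Predicted lines (with CI ribbons) for each level of j over x
+library(ggplot2)
+ggplot(fake, aes(x=x, y=fit, color=factor(j), fill=factor(j))) +
+  geom_ribbon(aes(ymin=lwr, ymax=upr), alpha=0.2, color=NA) +
+  geom_line()
+```
+
+The same `predict()` call with a single-row data frame (say, `data.frame(x=mean(d$x), j=1)`) yields the point estimate and interval for one "prototypical" case.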
 The point that I hope you take away is that producing a regression table with some p-values and stars in it does not mean your job is done. You should always do the work to convey your results in a human-intelligible manner by translating the model back into model-predicted estimates for reasonable combinations of your predictor variables. Once you've got the hang of that, you should also work on conveying the uncertainty/confidence around your predictions given the data/model.
\ No newline at end of file