From: aaronshaw
Date: Thu, 30 May 2019 13:49:29 +0000 (-0500)
Subject: typo fix
X-Git-Url: https://code.communitydata.science/stats_class_2019.git/commitdiff_plain

typo fix
---
diff --git a/r_lectures/w09-R_lecture.Rmd b/r_lectures/w09-R_lecture.Rmd
index bf750d7..9167175 100644
--- a/r_lectures/w09-R_lecture.Rmd
+++ b/r_lectures/w09-R_lecture.Rmd
@@ -68,7 +68,7 @@ summary(lm(m.log, data=d))
 ```
 Keep in mind that you can use other bases for your logarithmic transformations. Check out the documentation for `log()` for more information.
-## Interpreting regression results wiht model-predicted values
+## Interpreting regression results with model-predicted values
 
 When you report the results of a regression model, you should provide a table summarizing the model as well as some interpretation that renders the model results back into the original, human-intelligible measures and units specific to the study.
diff --git a/r_lectures/w09-R_lecture.html b/r_lectures/w09-R_lecture.html
index b5f76f4..abb90ff 100644
--- a/r_lectures/w09-R_lecture.html
+++ b/r_lectures/w09-R_lecture.html
@@ -335,8 +335,8 @@ summary(lm(m.log, data=d))

Keep in mind that you can use other bases for your logarithmic transformations. Check out the documentation for log() for more information.
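For example, a minimal sketch of alternative bases (the vector `x` below is made up for illustration; it is not from the lecture data):

```r
# Illustrative only: log() takes a `base` argument, and base R also provides
# shortcuts for the most common bases.
x <- c(1, 10, 100, 1000)

log(x)             # natural log (base e), the default
log(x, base = 2)   # the same values on a base-2 scale
log2(x)            # shortcut for base 2
log10(x)           # shortcut for base 10
log1p(x)           # log(1 + x), convenient when x contains zeroes
```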

-Interpreting regression results wiht model-predicted values
+Interpreting regression results with model-predicted values
When you report the results of a regression model, you should provide a table summarizing the model as well as some interpretation that renders the model results back into the original, human-intelligible measures and units specific to the study.
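One hedged sketch of what that translation can look like when the outcome has been log-transformed (the data and variable names below are simulated for illustration; they are not the `m.log`/`d` objects from the lecture):

```r
# Simulated example (not the lecture data): a model with a logged outcome,
# translated back into percent-change terms.
set.seed(42)
sim <- data.frame(x = runif(500, 0, 10))
sim$y <- exp(0.2 * sim$x + rnorm(500, sd = 0.5))

fit <- lm(log(y) ~ x, data = sim)

b <- coef(fit)["x"]
b                    # expected change in log(y) per 1-unit change in x
exp(b)               # multiplicative change in y per 1-unit change in x
(exp(b) - 1) * 100   # the same association expressed as a percent change
```

Reporting something like "each additional unit of x is associated with an estimated N% change in y" is usually far easier for readers to digest than the raw coefficient on the log scale.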

This was covered in one of the resources I distributed last week (the handout on logistic regression from Mako Hill), but I wanted to bring it back because it is important. Please revisit that handout to see a worked example that walks through the process. The rest of this text is a bit of a rant about why you should bother to do so.

When is a regression table not enough? In textbook/homework examples, this is not an issue, but in real data it matters all the time. Recall that the coefficient estimated for any single predictor is the expected change in the outcome for a 1-unit change in the predictor holding all the other predictors constant. What value are those other predictors held constant at? Zero! This is unlikely to be the most helpful or intuitive way to understand your estimates (for example, if one of your predictors is dichotomous, what does holding it at zero even mean?). Once your models get even a little bit complicated (quick, exponentiate a log-transformed value and tell me what it means!), the regression-table-alone approach becomes arguably worse than useless.
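If it helps, here is one possible sketch of the predicted-values approach using base R's `predict()`. Everything below (data, variable names, model) is invented for illustration; it is not the lecture's model or data, and the worked example in the handout remains the reference:

```r
# Invented example: generate model-predicted values over a range of one
# predictor while holding the other predictor at meaningful values.
set.seed(7)
sim2 <- data.frame(
  x     = runif(500, 0, 10),
  group = rbinom(500, 1, 0.5)
)
sim2$y <- exp(0.2 * sim2$x + 0.4 * sim2$group + rnorm(500, sd = 0.5))

fit2 <- lm(log(y) ~ x + group, data = sim2)

# Choose the values to predict at explicitly, instead of implicitly holding
# everything at zero: a grid over x, at both observed levels of group.
newdata <- expand.grid(x = seq(0, 10, by = 2), group = c(0, 1))
preds   <- predict(fit2, newdata = newdata, interval = "confidence")

# Back-transform from the log scale so the predictions (and their confidence
# interval) are on the original scale of y.
cbind(newdata, exp(preds))
```

The key move is deciding, and reporting, what values the other predictors are held at (observed levels, sample means or medians, or whatever is most meaningful for your study) when you generate the predictions.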