##
## Number of Fisher Scoring iterations: 4</code></pre>
<p>Interesting. Looks like adjusting for these other variables in a regression setting allows us to uncover somewhat different results.</p>
-<p>Onwards to generating more interpretable results.</p>
-<p>First, odds-ratios instead of “raw” log-odds coefficients:</p>
+<p>Onwards to generating more interpretable results. You might recall that the big problem with interpreting logistic regression is that the results are given to you in “log-odds.” Not only is it difficult to have intuitions about odds, but intuitions about the natural logarithms of odds are just intractable (for most of us).</p>
+<p>To make things easier, the typical first step is to calculate odds-ratios instead of log-odds. This is done by exponentiating the coefficients (as well as the corresponding 95% confidence intervals):</p>
<pre class="r"><code>## Odds ratios (exponentiated log-odds!)
exp(coef(fit))
exp(confint(fit))</code></pre>
<pre><code>## (Intercept)   obamaTRUE         age    maleTRUE    year2014    year2015</code></pre>
<pre><code>##                 2.5 %    97.5 %
## maleTRUE    0.6697587 1.1280707
## year2014    0.6518923 1.5564945
## year2015    0.8649706 1.9799543</code></pre>
+<p>You can use these to construct statements about the change in odds of the dependent variable flipping from 0 to 1 (or <code>FALSE</code> to <code>TRUE</code>) predicted by a 1-unit change in the corresponding predictor (where an odds ratio of 1 corresponds to unchanged odds). We’ll interpret the <code>obamaTRUE</code> odds ratio below.</p>
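+<p>For instance, you can re-express each odds ratio as a percent change in the odds. Here is a minimal sketch that reuses the fitted model object <code>fit</code> from above (an odds ratio of 1 maps to a 0% change):</p>
+<pre class="r"><code>## Percent change in the odds implied by each coefficient
+100 * (exp(coef(fit)) - 1)</code></pre>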
<p>Now, model-predicted probabilities for prototypical observations. Recall that it’s necessary to create synthetic (“fake”), hypothetical individuals to generate predicted probabilities like these. In this case, I’ll create two versions of each fake kid: one assigned to the treatment condition and one assigned to the control. Then I’ll use the <code>predict()</code> function to generate fitted values for each of the fake kids.</p>
<pre class="r"><code>fake.kids <- data.frame(
  obama = c(FALSE, FALSE, TRUE, TRUE),        # control vs. treatment
  male = c(FALSE, TRUE, FALSE, TRUE),         # a girl and a boy
  age = c(9, 7, 9, 7),                        # aged 9 and 7
  year = c("2015", "2012", "2015", "2012")    # trick-or-treating in 2015 and 2012
)

## Fitted probability of taking fruit for each fake kid
predict(fit, newdata = fake.kids, type = "response")</code></pre>
</div>
<div id="interpret-and-discuss" class="section level2">
<h2>Interpret and discuss</h2>
-<p>Well, for starters, the model providing a “pooled” estimate of treatment effects while adjusting for age, gender, and study year suggests that the point estimate is “marginally” statistically significant (<span class="math inline">\(p <0.1\)</span>) indicating some evidence that the data support the alternative hypothesis (being shown a picture of Michelle Obama causes trick-or-treaters to be more likely to pick up fruit than the control condition). In more concrete terms, the trick-or-treaters shown the Obama picture were, on average, about 25% more likely to pick up fruit than those exposed to the control (95% CI: <span class="math inline">\(-4\%~-~+66\%\)</span>). In even more concrete terms, the estimated probability that a 9 year-old girl in 2015 and a 7 year-old boy in 2012 would take fruit increase about 17% and 19% respectively on average (from 29% to 34% in the case of the 9 year-old and from 21% to 25% in the case of the 7 year-old). These findings are sort of remarkable given the simplicity of the intervention and the fairly strong norm that Halloween is all about candy.</p>
-<p>All of that said, the t-test results from Problem set 7 and the “unpooled” results reported in the sub-group analysis point to some potential concerns and limitations. For starters, the fact that the experiment was run iteratively over multiple years and that the sample size grew each year raises some concerns that the study design may not have anticipated the small effect sizes eventually observed and/or was adapted on the fly. This would undermine confidence in some of the test statistics and procedures. Furthermore, because the experiment occurred in sequential years, there’s a very real possibility that the significance of a picture of Michelle Obama shifted during that time period and/or the house in question developed a reputation for being “that weird place where they show you pictures of Michelle Obama and offer you fruit.” Whatever the case, my confidence in the findings here is not so great and I have some lingering suspicions that the results might not replicate.</p>
+<p>Well, for starters, the model providing a “pooled” estimate of treatment effects while adjusting for age, gender, and study year suggests that the point estimate is “marginally” statistically significant (<span class="math inline">\(p <0.1\)</span>), indicating some evidence that the data support the alternative hypothesis (being shown a picture of Michelle Obama causes trick-or-treaters to be more likely to pick up fruit than the control condition). In more concrete terms, the trick-or-treaters shown the Obama picture were, on average, about 26% more likely to pick up fruit than those exposed to the control (95% CI: <span class="math inline">\(-4\%~-~+66\%\)</span>).<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a> In even more concrete terms, the estimated probabilities that a 9 year-old girl in 2015 and a 7 year-old boy in 2012 would take fruit increase by about 17% and 19%, respectively, on average (from 29% to 34% in the case of the 9 year-old and from 21% to 25% in the case of the 7 year-old). These findings are sort of remarkable given the simplicity of the intervention and the fairly strong norm that Halloween is all about candy.</p>
+<p>All of that said, the t-test results from Problem set 5 and the “unpooled” results reported in the sub-group analysis point to some potential concerns and limitations. For starters, the fact that the experiment was run iteratively over multiple years and that the sample size grew each year raises some concerns that the study design may not have anticipated the small effect sizes eventually observed and/or was adapted on the fly. This would undermine confidence in some of the test statistics and procedures. Furthermore, because the experiment occurred in sequential years, there’s a very real possibility that the significance of a picture of Michelle Obama shifted during that time period and/or the house in question developed a reputation for being “that weird place where they show you pictures of Michelle Obama and offer you fruit.” Whatever the case, my confidence in the findings here is not so great and I have some lingering suspicions that the results might not replicate.</p>
<p>On a more nuanced/advanced statistical note, I also have some concerns about the standard errors. This goes beyond the content of our course, but basically, a randomized controlled trial introduces clustering into the data by design (you can think of it as analogous to the observations coming from the treatment “cluster” and the control “cluster”). As a result, the usual standard error formulas can be biased. Luckily, there’s a fix for this: compute “robust” standard errors and re-calculate the corresponding confidence intervals. Indeed, robust standard errors are often considered the best choice even when you don’t know about potential latent clustering or heteroskedastic error structures in your data. <a href="https://oes.gsa.gov/assets/files/calculating-standard-errors-guidance.pdf">This short pdf</a> provides a little more explanation, citations, and example R code showing how you might calculate robust standard errors.</p>
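+<p>To make that concrete, here is a minimal sketch of how the re-calculation might look, assuming you have the <code>sandwich</code> and <code>lmtest</code> packages installed (the specific variance estimator, <code>HC2</code> here, is just one common choice; see the linked guidance for alternatives):</p>
+<pre class="r"><code>## Robust standard errors (sketch)
+library(sandwich)
+library(lmtest)
+
+## z-tests of the coefficients using a heteroskedasticity-consistent
+## ("robust") covariance matrix in place of the default one
+coeftest(fit, vcov = vcovHC(fit, type = "HC2"))
+
+## ...and the corresponding robust confidence intervals
+coefci(fit, vcov = vcovHC(fit, type = "HC2"))</code></pre>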
</div>
</div>
+<div class="footnotes">
+<hr />
+<ol>
+<li id="fn1"><p>Remember when I said we would use those odds ratios to interpret the parameter on <code>obamaTRUE</code>? Here we are. The parameter value is approximately 1.26, which means that the odds of picking fruit are, on average, 1.26 times as large for a trick-or-treater exposed to the picture of Michelle Obama versus a trick-or-treater in the control condition. In other words, the odds go up by about 26% (<span class="math inline">\(= \frac{1.26-1}{1}\)</span>).<a href="#fnref1" class="footnote-back">↩︎</a></p></li>
+</ol>
+</div>
Interesting. Looks like adjusting for these other variables in a regression setting allows us to uncover somewhat different results.
-Onwards to generating more interpretable results.
+Onwards to generating more interpretable results. You might recall that the big problem with interpreting logistic regression is that the results are given to you in "log-odds." Not only is it difficult to have intuitions about odds, but intuitions about the natural logarithms of odds are just intractable (for most of us).
-First, odds-ratios instead of "raw" log-odds coefficients:
+To make things easier, the typical first step is to calculate odds-ratios instead of log-odds. This is done by exponentiating the coefficients (as well as the corresponding 95\% confidence intervals):
```{r}
## Odds ratios (exponentiated log-odds!)
exp(coef(fit))
exp(confint(fit))
```
+You can use these to construct statements about the change in odds of the dependent variable flipping from 0 to 1 (or `FALSE` to `TRUE`) predicted by a 1-unit change in the corresponding predictor (where an odds ratio of 1 corresponds to unchanged odds). We'll interpret the `obamaTRUE` odds ratio below.
+
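+For instance, you can re-express each odds ratio as a percent change in the odds. Here is a minimal sketch that reuses the fitted model object `fit` from above (an odds ratio of 1 maps to a 0\% change):
+
+```{r}
+## Percent change in the odds implied by each coefficient
+100 * (exp(coef(fit)) - 1)
+```
+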
Now, model-predicted probabilities for prototypical observations. Recall that it's necessary to create synthetic ("fake"), hypothetical individuals to generate predicted probabilities like these. In this case, I'll create two versions of each fake kid: one assigned to the treatment condition and one assigned to the control. Then I'll use the `predict()` function to generate fitted values for each of the fake kids.
```{r}
fake.kids <- data.frame(
  obama = c(FALSE, FALSE, TRUE, TRUE),        # control vs. treatment
  male = c(FALSE, TRUE, FALSE, TRUE),         # a girl and a boy
  age = c(9, 7, 9, 7),                        # aged 9 and 7
  year = c("2015", "2012", "2015", "2012")    # trick-or-treating in 2015 and 2012
)

## Fitted probability of taking fruit for each fake kid
predict(fit, newdata = fake.kids, type = "response")
```

## Interpret and discuss
-Well, for starters, the model providing a "pooled" estimate of treatment effects while adjusting for age, gender, and study year suggests that the point estimate is "marginally" statistically significant ($p <0.1$) indicating some evidence that the data support the alternative hypothesis (being shown a picture of Michelle Obama causes trick-or-treaters to be more likely to pick up fruit than the control condition). In more concrete terms, the trick-or-treaters shown the Obama picture were, on average, about 25\% more likely to pick up fruit than those exposed to the control (95\% CI: $-4\%~-~+66\%$). In even more concrete terms, the estimated probability that a 9 year-old girl in 2015 and a 7 year-old boy in 2012 would take fruit increase about 17\% and 19\% respectively on average (from 29\% to 34\% in the case of the 9 year-old and from 21\% to 25\% in the case of the 7 year-old). These findings are sort of remarkable given the simplicity of the intervention and the fairly strong norm that Halloween is all about candy.
+Well, for starters, the model providing a "pooled" estimate of treatment effects while adjusting for age, gender, and study year suggests that the point estimate is "marginally" statistically significant ($p <0.1$), indicating some evidence that the data support the alternative hypothesis (being shown a picture of Michelle Obama causes trick-or-treaters to be more likely to pick up fruit than the control condition). In more concrete terms, the trick-or-treaters shown the Obama picture were, on average, about 26\% more likely to pick up fruit than those exposed to the control (95\% CI: $-4\%~-~+66\%$).[^1] In even more concrete terms, the estimated probabilities that a 9 year-old girl in 2015 and a 7 year-old boy in 2012 would take fruit increase by about 17\% and 19\%, respectively, on average (from 29\% to 34\% in the case of the 9 year-old and from 21\% to 25\% in the case of the 7 year-old). These findings are sort of remarkable given the simplicity of the intervention and the fairly strong norm that Halloween is all about candy.
+
+[^1]: Remember when I said we would use those odds ratios to interpret the parameter on `obamaTRUE`? Here we are. The parameter value is approximately 1.26, which means that the odds of picking fruit are, on average, 1.26 times as large for a trick-or-treater exposed to the picture of Michelle Obama versus a trick-or-treater in the control condition. In other words, the odds go up by about 26\% ($= \frac{1.26-1}{1}$).
-All of that said, the t-test results from Problem set 7 and the "unpooled" results reported in the sub-group analysis point to some potential concerns and limitations. For starters, the fact that the experiment was run iteratively over multiple years and that the sample size grew each year raises some concerns that the study design may not have anticipated the small effect sizes eventually observed and/or was adapted on the fly. This would undermine confidence in some of the test statistics and procedures. Furthermore, because the experiment occurred in sequential years, there's a very real possibility that the significance of a picture of Michelle Obama shifted during that time period and/or the house in question developed a reputation for being "that weird place where they show you pictures of Michelle Obama and offer you fruit." Whatever the case, my confidence in the findings here is not so great and I have some lingering suspicions that the results might not replicate.
+All of that said, the t-test results from Problem set 5 and the "unpooled" results reported in the sub-group analysis point to some potential concerns and limitations. For starters, the fact that the experiment was run iteratively over multiple years and that the sample size grew each year raises some concerns that the study design may not have anticipated the small effect sizes eventually observed and/or was adapted on the fly. This would undermine confidence in some of the test statistics and procedures. Furthermore, because the experiment occurred in sequential years, there's a very real possibility that the significance of a picture of Michelle Obama shifted during that time period and/or the house in question developed a reputation for being "that weird place where they show you pictures of Michelle Obama and offer you fruit." Whatever the case, my confidence in the findings here is not so great and I have some lingering suspicions that the results might not replicate.
On a more nuanced/advanced statistical note, I also have some concerns about the standard errors. This goes beyond the content of our course, but basically, a randomized controlled trial introduces clustering into the data by design (you can think of it as analogous to the observations coming from the treatment "cluster" and the control "cluster"). As a result, the usual standard error formulas can be biased. Luckily, there's a fix for this: compute "robust" standard errors and re-calculate the corresponding confidence intervals. Indeed, robust standard errors are often considered the best choice even when you don't know about potential latent clustering or heteroskedastic error structures in your data. [This short pdf](https://oes.gsa.gov/assets/files/calculating-standard-errors-guidance.pdf) provides a little more explanation, citations, and example R code showing how you might calculate robust standard errors.
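+
+To make that concrete, here is a minimal sketch of how the re-calculation might look, assuming you have the `sandwich` and `lmtest` packages installed (the specific variance estimator, `HC2` here, is just one common choice; see the linked guidance for alternatives):
+
+```{r}
+## Robust standard errors (sketch)
+library(sandwich)
+library(lmtest)
+
+## z-tests of the coefficients using a heteroskedasticity-consistent
+## ("robust") covariance matrix in place of the default one
+coeftest(fit, vcov = vcovHC(fit, type = "HC2"))
+
+## ...and the corresponding robust confidence intervals
+coefci(fit, vcov = vcovHC(fit, type = "HC2"))
+```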
\ No newline at end of file