X-Git-Url: https://code.communitydata.science/stats_class_2020.git/blobdiff_plain/a954b64f0f0464d9380c82ee71006b2a607d211e..f8c0af419e2d2b4c8aa350abd66d1fdac2fb76d1:/psets/pset4-worked_solution.html?ds=sidebyside

diff --git a/psets/pset4-worked_solution.html b/psets/pset4-worked_solution.html
index 85583b0..0d84fc9 100644
--- a/psets/pset4-worked_solution.html
+++ b/psets/pset4-worked_solution.html
@@ -1935,9 +1935,9 @@ for (i in 1:100) {

\[HR_{pos}1_0:~~ \Delta_{\mu_{pos}} = 0\] \[HR_{pos}1_a:~~ \Delta_{\mu_{pos}} \neq 0\]

And:

\[HR_{pos}2_0:~~ \Delta_{\mu_{neg}} = 0\] \[HR_{pos}2_a:~~ \Delta_{\mu_{neg}} \neq 0\]

Note that the theories the authors used to motivate the study imply directions for the alternative hypotheses, but nothing in the description of the analysis suggests that they used one-tailed tests. I’ve written these all in terms of undirected two-tailed tests to correspond to the analysis conducted in the paper. That said, given that the theories correspond to specific directions, you might (arguably more accurately) have written the hypotheses in terms of directed inequalities (e.g., “\(>\)” or “\(<\)”).
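To make the two-tailed vs. one-tailed distinction concrete, here is a minimal R sketch using `t.test()`. The simulated data and variable names are illustrative assumptions, not the paper’s actual (non-public) data or analysis code:

```r
# Simulated % positive words for treatment and control participants
# (means chosen arbitrarily for illustration).
set.seed(7)
pos_treat   <- rnorm(500, mean = 5.3)
pos_control <- rnorm(500, mean = 5.0)

# Two-tailed test, matching the undirected alternative (difference != 0):
p_two <- t.test(pos_treat, pos_control, alternative = "two.sided")$p.value

# Directed alternative (difference > 0), as the motivating theory implies:
p_one <- t.test(pos_treat, pos_control, alternative = "greater")$p.value
```

When the observed difference falls in the predicted direction, the one-tailed p-value is half the two-tailed one, which is why a directed test is more powerful but also a stronger prior commitment.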

-

(b) Describing the effects

+

(b) Describing the effects

The authors’ estimates suggest that reduced negative News Feed content causes an increase in the percentage of positive words and a decrease in the percentage of negative words in subsequent News Feed posts by study participants (supporting \(HR_{neg}1_a\) and \(HR_{neg}2_a\) respectively).

-

They also find that reduced positive News Feed content causes a decrease in the percentage of negative words and an increase in the percentage of positive words in susbequent News Feed posts (supporting \(HR_{pos}1_a\) and \(HR_{pos}2_a\))

+

They also find that reduced positive News Feed content causes a decrease in the percentage of negative words and an increase in the percentage of positive words in subsequent News Feed posts (supporting \(HR_{pos}1_a\) and \(HR_{pos}2_a\)).

(c) Statistical vs. practical significance

Cohen’s \(d\) puts estimates of experimental effects in standardized units (much like a Z-score!) in order to help understand their size relative to the underlying distribution of the dependent variable(s). The \(d\) values for each of the effects estimated in the paper are 0.02, 0.001, 0.02, and 0.008 respectively (in the order presented in the paper, not in the order of the hypotheses above). These are minuscule by the standards of most approaches to Cohen’s \(d\)! However, as the authors argue, the treatment itself is quite narrow in scope, suggesting that the presence of any treatment effect at all is an indication of the underlying phenomenon (emotional contagion).

Personally, I find it difficult to attribute much substantive significance to the results because I’m not even convinced that tiny shifts in the percentage of positive/negative words used in News Feed updates accurately index meaningful emotional shifts (I might call it linguistic contagion instead?). That said, I have a hard time thinking about micro-level psychological processes, and I’m probably being overly narrow/skeptical in my response. Despite these concerns and the ethical considerations that attracted so much public attention, I consider this a clever, well-executed study and I think it’s quite compelling. I expect many of you will have different opinions of various kinds, and I’m eager to hear about them.
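As a sketch of how Cohen’s \(d\) is computed, here is the standard pooled-standard-deviation formula in R. The simulated data below are illustrative assumptions chosen to produce a small effect, not the paper’s data:

```r
# Cohen's d: standardized mean difference between two groups,
# using the pooled standard deviation in the denominator.
cohens_d <- function(x, y) {
  nx <- length(x)
  ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}

# Illustrative data: a tiny shift in mean % positive words.
set.seed(42)
treatment <- rnorm(1000, mean = 5.02, sd = 1)
control   <- rnorm(1000, mean = 5.00, sd = 1)
cohens_d(treatment, control)
```

Because \(d\) divides the raw mean difference by the outcome’s (pooled) spread, a \(d\) of 0.02 means the groups differ by only two hundredths of a standard deviation, which is why these effects read as statistically detectable but practically tiny.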