diff --git a/simulations/robustness_check_notes.md b/simulations/robustness_check_notes.md
index 1c786e9..ac7e88f 100644
--- a/simulations/robustness_check_notes.md
+++ b/simulations/robustness_check_notes.md
@@ -2,19 +2,18 @@
 Tests how robust the MLE method for independent variables with differential error is when the model for $X$ is less precise. In the main paper, we include $Z$ on the right-hand-side of the `truth_formula`. In this robustness check, the `truth_formula` is an intercept-only model.
 
-The stats are in the list named `robustness_1` in the `.RDS` file.
-
+The stats are in the list named `robustness_1` in the `.RDS` file.
 
 # robustness\_1\_dv.RDS
 
-Like `robustness\_1.RDS` but with a less precise model for $w_pred$. In the main paper, we included $Z$ in the `outcome_formula`. In this robustness check, we do not.
+Like `robustness_1.RDS` but with a less precise model for $w_{pred}$. In the main paper, we included $Z$ in the `proxy_formula`. In this robustness check, we do not.
 
 # robustness_2.RDS
 
-This is just example 1 with varying levels of classifier accuracy.
+This is just example 1 with varying levels of classifier accuracy, indicated by the `prediction_accuracy` variable.
 
 # robustness_2_dv.RDS
 
-Example 3 with varying levels of classifier accuracy
+Example 3 with varying levels of classifier accuracy, indicated by the `prediction_accuracy` variable.
 
 # robustness_3.RDS