# robustness\_1.RDS

Tests the robustness of the MLE method for independent variables with differential error when the model for $X$ is less precise. In the main paper, we include $Z$ on the right-hand side of the `truth_formula`. In this robustness check, the `truth_formula` is an intercept-only model.

The stats are in the list named `robustness_1` in the `.RDS` file.

# robustness\_1\_dv.RDS

Like `robustness_1.RDS`, but with a less precise model for $w_{pred}$. In the main paper, we included $Z$ in the `outcome_formula`; in this robustness check, we do not.

# robustness\_2.RDS

This is example 1 with varying levels of classifier accuracy.
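
Below is a minimal sketch, in R, of how these result files could be loaded and how the `truth_formula` change described for `robustness_1` looks in formula syntax. The working directory, the list-element name, and the variable names `x` and `z` are assumptions made for illustration, not taken from the simulation code.

```r
## Minimal sketch: load the robustness-check results.
## Assumes the .RDS files sit in the current working directory and that
## robustness_1.RDS stores its statistics under the element "robustness_1",
## as noted above.
res_1    <- readRDS("robustness_1.RDS")
str(res_1$robustness_1, max.level = 1)   # overview of the stored stats

res_1_dv <- readRDS("robustness_1_dv.RDS")
res_2    <- readRDS("robustness_2.RDS")

## Illustration of the truth_formula change described for robustness_1.
## The variable names x and z are placeholders for the simulation's columns.
truth_formula_main  <- x ~ z   # main paper: Z on the right-hand side
truth_formula_check <- x ~ 1   # robustness check: intercept-only model for X
```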