---
title: "Interactive Self-Assessment"
subtitle: "Fall 2020 MTS 525 / COMMST 395 Statistics and Statistical Programming"
output: learnr::tutorial
runtime: shiny_prerendered
---

```{r setup, include=FALSE}
library(learnr)
library(tidyverse)
knitr::opts_chunk$set(echo = FALSE, tidy=TRUE)

t <- Sys.time()
question_filename <- paste("question_submission_", t, ".csv", sep="")
code_filename <- paste("code_", t, ".csv", sep="")

#df <- data.frame(label=c('test'), question=c('asd'), answer=c('asd'), correct=c(TRUE), stringsAsFactors=FALSE)
df <- data.frame()

tutorial_event_recorder <- function(tutorial_id, tutorial_version, user_id, event, data) {
  # quiz question answered
  if (event == "question_submission"){
    # the text before the first underscore in the question label identifies the section (category)
    data$category <- strsplit(data$label, '_')[[1]][[1]]
    df <<- rbind(df, data, stringsAsFactors=FALSE)
    #write.table(data, question_filename, append=TRUE, sep=",", row.names=TRUE, col.names=FALSE)
    write.table(df, question_filename, append=FALSE, sep=",", row.names=TRUE, col.names=TRUE)
  }
  # code submitted in an exercise chunk
  if (event == "exercise_submitted"){
    write.table(data, code_filename, append=TRUE, sep=",", row.names=TRUE, col.names=FALSE)
  }
}
options(tutorial.event_recorder = tutorial_event_recorder)
```

## Overview

This document contains R Markdown code for an *Interactive Self-Assessment*. Through this assessment, both students and the teaching team can check in on learning progress. The Self-Assessment is broken into five question sections, plus a formula reference and an answer report, all described below. You can navigate through the document using the left-hand column. In general, completing this assessment should take about 60 minutes.

* Overview: you are here.
* Section 1, Warm-up Exercises. Contains warm-ups to help you become familiar with the interactive `learnr` environment (learnr is the R package that this assessment relies on). 1 coding question, 1 multiple choice question. Time estimate: 5 min.
* Section 2, Writing and Debugging R Code. Contains a series of questions that will require you to work with existing R code. 3 coding questions, 4 multiple choice questions. Time estimate: 15 minutes.
* Section 3, Statistics Concepts and Definitions. 11 multiple choice questions about statistics concepts and definitions. Time estimate: 15 minutes.
* Section 4, Distributions. 3 multiple choice questions that involve some minor calculations. Time estimate: 5 minutes.
* Section 5, Computing Probabilities. 6 multiple choice questions that involve calculating probabilities of events. These calculations are more involved than Section 4. Time estimate: 15 minutes.
* Useful Formulas. Contains some formulas that may be useful for Sections 3-5 (it appears just after Section 1 in the sidebar).
* Answer Report. Time estimate: 5 minutes. Here, you can use R code (some of which is prepopulated for you) to analyze (or visualize) your performance on the assessment. This provides a way for you to (1) practice exploratory analyses in R with data you created yourself (by answering questions) and (2) get immediate feedback about your performance.

Note that you can clear your answers to **all** questions by clicking "Start Over" in the left-hand sidebar, but doing so erases all progress in the document. *Use with caution* (if at all)!

## Section 1, Warm-up Exercises

This section contains quick warm-up questions, so you can become familiar with how `learnr` works and what to expect from this activity.
### Code Chunk Warm-up

To get familiar with how code chunks work in `learnr`, let's write the R code required to add two numbers: 1234 and 5678 (the answer is 6912).

The code chunk below is editable and is "pre-populated" with an unfinished function definition. The goal is to add arguments and fill in the body of the function. When you're finished, click "Run Code" and you should see the answer below the chunk. That answer will persist as you navigate around this document.

You can clear your answer by clicking "Start Over" in the top-left of the chunk. You can also clear **all** your answers by clicking "Start Over" in the left-hand sidebar, but doing that erases all progress in the document. *Use with caution!*

```{r WarmUp_1, exercise=TRUE, exercise.lines=10}
add <- function() {

}

x = 1234
y = 5678
add(x,y)
```

```{r WarmUp_1-solution}
add <- function(value1, value2) {
  return(value1 + value2)
}

x = 1234
y = 5678
add(x,y)
```

### Multiple Choice Question Warm-up

The question below shows how multiple choice answering and feedback work.

```{r WarmUp_2}
quiz(
  question("Select the answer choice that will return `TRUE` in R.",
    answer("1 == 1", message="Good work! Feedback appears here.", correct=TRUE),
    answer("1 == 0", message="Not quite! Feedback appears here."),
    allow_retry = TRUE
  )
)
```

## Useful Formulas

Sample mean (sample statistic): $\bar{x}=\frac{\sum_{i=1}^n x_i}{n}$

Standard deviation: $s=\sqrt{\frac{\sum_{i=1}^n (x_i-\bar{x})^2}{n-1}}$

Variance: $var = s^2$

Useful probability rules:

Complement: $\mbox{Pr}(A^c)=1-\mbox{Pr}(A)$

Probability of two *independent* events both happening: Pr(A and B) = Pr(A) $\times$ Pr(B)

Probability of one of two events happening: Pr(A or B) = Pr(A) + Pr(B) - Pr(A and B)

Conditional probability: $\mbox{Pr}(A|B)=\frac{\mbox{Pr(A and B)}}{\mbox{Pr(B)}}$

Population mean (population parameter): $\mu = \sum_{i=1}^{n}x_i\mbox{Pr}(x_i)$

Z-score: $z=\frac{x-\mu}{\sigma}$

Standard errors: $SE=\frac{\sigma}{\sqrt{n}}$

$SE_{proportion}=\sqrt{\frac{p(1-p)}{n}}$

Identifying outliers using the Interquartile Range (IQR): $Q_1 - 1.5 \times IQR, \quad Q_3 + 1.5 \times IQR$

## Section 2: Writing and Debugging R Code

### Debugging a Function

Below, you'll see code to define a function that is *supposed* to perform a transformation on a vector. The problem is that it doesn't work right now.

In theory, the function will take a numeric vector as input (let's call it $x$) and scale the values so they lie between zero and one. (This is sometimes called min-max [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling), and is sometimes used for machine learning.) The way it *should* do this is by first subtracting the minimum value of $x$ from each element of $x$. Then, the function divides each element by the difference between the maximum value of $x$ and the minimum value of $x$.

As written now, however, the function does not work! There are at least three issues you will need to fix to get it working. Once you fix them, you should be able to confirm that your function works with the pre-populated example (the correct output is provided). You might also be able to make this code more "elegant" (or, alternatively, improve the comments and variable names as you see fit).

Bonus: how might we update this function to scale between any "floor" and "ceiling" value?
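For the bonus, here is one possible sketch of a rescaler generalized to an arbitrary floor and ceiling; the function name, argument names, and example values are just illustrative. Try the debugging exercise below first, then compare.

```{r R_bonusRescaler, exercise=TRUE}
# One possible generalization (illustrative, not the official solution):
# first rescale x to [0, 1] as in the exercise, then stretch and shift it
# so the values lie between `floor` and `ceiling`.
rangeRescaler <- function(x, floor = 0, ceiling = 1) {
  zero_to_one <- (x - min(x)) / (max(x) - min(x))
  floor + zero_to_one * (ceiling - floor)
}

rangeRescaler(c(1, 2, 3, 4, 5), floor = 10, ceiling = 20)
# Should print c(10.0, 12.5, 15.0, 17.5, 20.0)
```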
```{r R_debug1, exercise=TRUE}
zeroToOneRescaler <- function() {
  # the minimum value
  minval <- min(x)

  # let's "shift" our vector by subtracting the minimum value of x from each element
  shifted <- x - minval

  # let's find the difference between max val and min val
  difference <- min(x) - max(x)

  scaled <- shifted / difference
  scaled
}

test_vector = c(1,2,3,4,5)
# Should print c(0, 0.25, 0.5, 0.75, 1.00)
zeroToOneRescaler(test_vector)
```

```{r R_debug1-solution}
zeroToOneRescaler <- function(x) {
  shifted <- x - min(x)
  difference <- max(x) - min(x)
  return(shifted / difference)
}

test_vector = c(1,2,3,4,5)
# Should print c(0, 0.25, 0.5, 0.75, 1.00)
zeroToOneRescaler(test_vector)
```

```{r R_debug1-response}
quiz(
  question("Were you able to solve the debugging question? (this question is for feedback purposes)",
    answer("Yes", message="Nice work!", correct = TRUE),
    answer("No", message="Good try! If there were specific aspects that were challenging, feel free to reach out to the teaching team.")
  )
)
```

The following commented chunk has at least five (annoying) bugs. Can you uncomment the code, fix all the bugs, and get this chunk to run? These are drawn from your TA's real experiences!

```{r R_debug2, exercise=TRUE}
# ps2 <- readcsv(file = url(
#   " https://communitydata.science/~ads/teaching/2020/stats/data/week_04/group_03.csv"), row.names = NULL
# )
#
# ps2$y[is.na(ps2$y)] <- 0
# "ps2$'My First New Column' <- ps2$y * -1"
# ps2$'My Second New Column" <- ps2$y + ps2$'My First New Column'
#
# summary(ps2$'My Second New Column']
```

```{r R_debug2-solution}
ps2 <- read.csv(file = url("https://communitydata.science/~ads/teaching/2020/stats/data/week_04/group_03.csv"), row.names = NULL)

ps2$y[is.na(ps2$y)] <- 0
ps2$'My First New Column' <- ps2$y * -1
ps2$'My Second New Column' <- ps2$y + ps2$'My First New Column'

summary(ps2$'My Second New Column')
```

```{r R_debug2-response}
quiz(
  question("Were you able to solve the above debugging question? (this question is for feedback purposes)",
    answer("Yes", message="Nice work!", correct = TRUE),
    answer("No", message="Good try! If there were specific aspects that were challenging, feel free to reach out to the teaching team."),
    allow_retry = TRUE
  )
)
```

### Updating a visualization

Imagine you've created a histogram to visualize some data from your research (below, we'll use R's built-in "PlantGrowth" dataset). You show your collaborator the default base-R histogram, and they express some concerns about the plot's aesthetics. Replace the base-R histogram with a `ggplot2` histogram that also includes a density plot overlaid on it (maybe in a bright, contrasting color like red).

```{r R_ggplot, exercise=TRUE}
data("PlantGrowth")
hist(PlantGrowth$weight)
```

```{r R_ggplot-solution}
library(ggplot2)
ggplot(PlantGrowth, aes(weight, after_stat(density))) +
  geom_histogram() +
  geom_density(color = "red")
```

Bonus: How would you find more information about the source of this dataset?

```{r R_ggplot-response}
quiz(
  question("Were you able to solve the above plotting question? (this question is for feedback purposes)",
    answer("Yes", message="Nice work!", correct = TRUE),
    answer("No", message="Good try! If there were specific aspects that were challenging, feel free to reach out to the teaching team."),
    allow_retry = TRUE
  )
)
```
### Interpret a dataframe

```{r R_columns-setup, exercise=TRUE}
data <- mtcars
data$mpgGreaterThan20 <- data$mpg > 20
data$gear <- as.factor(data$gear)
data$mpgRounded <- round(data$mpg)
```

The questions below relate to the `data` data.frame defined above, which is a modified version of the classic `mtcars` dataset. For all answers, assume the above code chunk *has completely run*, i.e., assume all of the modifications described above were made.

```{r R_columns}
quiz(
  question("Which of the following best describes the `mpg` variable?",
    answer("Numeric, continuous", correct=TRUE),
    answer("Numeric, discrete"),
    answer("Categorical, dichotomous"),
    answer("Categorical, ordinal"),
    answer("Categorical")
  ),
  question("Which of the following best describes the `mpgGreaterThan20` variable?",
    answer("Numeric, continuous"),
    answer("Numeric, discrete"),
    answer("Categorical, dichotomous", correct=TRUE),
    answer("Categorical, ordinal"),
    answer("Categorical")
  ),
  question("Which of the following best describes the `mpgRounded` variable?",
    answer("Numeric, continuous"),
    answer("Numeric, discrete", correct=TRUE),
    answer("Categorical, dichotomous"),
    answer("Categorical, ordinal"),
    answer("Categorical")
  ),
  question("Which of the following best describes the `gear` variable?",
    answer("Numeric, continuous"),
    answer("Numeric, discrete"),
    answer("Categorical, dichotomous"),
    answer("Categorical, ordinal", correct=TRUE),
    answer("Categorical")
  )
)
```

## Section 3, Statistics Concepts and Definitions

The following is a series of short multiple choice questions. These questions focus on definitions, and should not require performing any computations or writing any code.

```{r StatsConcepts_lightninground}
m1 <- ""

wolf <- "Think of the 'Boy who cried wolf', with a null hypothesis that no wolf exists. First the boy claims the alternative hypothesis: there is a wolf. The villagers believe this, and reject the correct null hypothesis. Second, the villagers make an error by not believing the boy when he presents a correct alternative hypothesis."
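# `wolf` is reused below as the feedback message for the Type 1 error question.
# (`m1` is defined here but not currently used in this chunk.)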
quiz(
  question("A hypothesis is typically written in terms of a:",
    answer("p-value."),
    answer("population statistic.", correct = TRUE),
    answer("sample statistic.")
  ),
  question("A sampling distribution is:",
    answer("critical to report in your papers."),
    answer("theoretically helpful, but rarely available to researchers in practice.", correct = TRUE),
    answer("practically useful, but relies on assumptions that are rarely met.")
  ),
  question("Z-scores tell us about a value in terms of:",
    answer("mean and standard deviation.", correct = TRUE),
    answer("sample size and sampling strategy."),
    answer("if an effect is causal or not.")
  ),
  question("A distribution that is right-skewed has a long tail to the:",
    answer("right.", correct = TRUE),
    answer("left.")
  ),
  question("A normal distribution can be characterized with only this many parameters:",
    answer("1."),
    answer("2.", correct = TRUE),
    answer("3.")
  ),
  question("When we calculate a standard error, we look to understand",
    answer("the spread of our observed data based on the spread of the population distribution."),
    answer("the spread of the sampling distribution of our statistic based on the spread of our observed data.", correct = TRUE),
    answer("whether or not our result is causal.")
  ),
  # question("When we calculate standard error, we calculate",
  #   answer("using a different formula for every type of variable."),
  #   answer("the sample standard error, which is an estimate of the population standard error.", correct = TRUE),
  #   answer("whether or not our result is causal.")
  # ),
  question("P values tell us about",
    answer("the probability of observing the outcome."),
    answer("the world in which the null hypothesis is true.", correct = TRUE),
    answer("the world in which the null hypothesis is false."),
    answer("the probability that a difference is due to chance.")
    # answer("the world in which our data describe a causal effect.")
  ),
  question("P values are",
    answer("a conditional probability.", correct = TRUE),
    answer("completely misleading."),
    answer("an indication of the strength of an association."),
    answer("most useful when our data has a normal distribution.")
  ),
  question("A type 1 error occurs when",
    answer("we reject a correct null hypothesis (i.e. a false positive).", correct = TRUE, message=wolf),
    answer("we accept a correct null hypothesis.", message=wolf),
    answer("we accept an incorrect null hypothesis (i.e. a false negative).", message=wolf)
  ),
  question("Before we assume independence of two random samples, it can be useful to check whether",
    answer("they are correlated."),
    answer("both samples include over 90% of the population."),
    answer("both samples include less than 10% of the population.", correct = TRUE)
  )
)
```

```{r StatsConcepts_sampling}
quiz(
  question("A political scientist is interested in the effect of teaching style on standardized test performance. She plans to use a sample of 30 classes evenly spread among the Communication, Computer Science, and Business departments to conduct her analysis. What type of sampling strategy should she use to ensure that classes are selected from each discipline equally? Assume a limited research budget.",
    answer("A simple random sample"),
    answer("A cluster random sample"),
    answer("A stratified random sample", correct=TRUE),
    answer("A snowball sample")
  )
)
```

## Section 4: Distributions

The following questions are in the style of pen-and-paper statistics class exam questions. This section includes three questions about distributions, which involve some minor calculations.
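For the normal distribution questions in this section, R's built-in `qnorm()` and `pnorm()` functions may come in handy. The chunk below is a minimal sketch using made-up example values (a normal distribution with mean 100 and standard deviation 15), not the values from any question:

```{r Distributions_helpers, exercise=TRUE}
# qnorm() returns the value that sits at a given percentile of a normal distribution.
# For example, the 90th percentile of a Normal(mean = 100, sd = 15) distribution:
qnorm(0.90, mean = 100, sd = 15)

# pnorm() goes the other way: the proportion of the distribution at or below a value.
pnorm(115, mean = 100, sd = 15)
```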
### Percentiles and the Normal Distribution

For the following question, you may want to use this "scratch paper" code chunk.

```{r Distributions_quartile-scratch, exercise=TRUE}

```

```{r Distributions_quartile}
quiz(
  question("Heights of boys in a high school are approximately normally distributed with a mean of 175 cm and a standard deviation of 5 cm. What value most likely corresponds to the first quartile of heights?",
    answer("25 cm"),
    answer("167.3 cm"),
    answer("171.7 cm", correct=TRUE),
    answer("173.5 cm"),
    answer("178.3 cm")
  )
)
```

### Outliers and Skew

Suppose we are reading a paper which reports the following summary statistics for one column of a dataset:

* Minimum value: 0.00125; maximum value: 2.1100.
* Mean: 0.41100; median: 0.27800.
* 1st quartile: 0.13000; 3rd quartile: 0.56200.

```{r Distributions_summary-scratch, exercise=TRUE}

```

```{r Distributions_summary}
m1 <- "Under R's default setting, outliers are values that are either greater than the upper bound $Q_3 + 1.5\\times IQR$ OR less than the lower bound $Q_1 - 1.5\\times IQR$. Here, $IQR = 0.562-0.130=0.432$. The upper bound $= 0.562 + 1.5\\times (0.432) = 1.21$. The lower bound is $0.13 - 1.5\\times (0.432) = -0.518$. We see that the maximum value is 2.11, greater than the upper bound. Thus, there is at least one outlier in this sample."

m2 <- "There is at least one outlier on the right, whereas there is none on the left. $|Q_3-Q_2| > |Q_2-Q_1|$, so the whisker for this box plot would be longer on the right-hand side. The mean is larger than the median."

quiz(
  question("Are there outliers (in terms of IQR) in this sample?",
    answer("Yes", correct = TRUE, message=m1),
    answer("No", message=m1)
  ),
  question("Based on these summary statistics, we might expect the skew of the distribution to be:",
    answer("left-skewed", message=m2),
    answer("right-skewed", message=m2, correct=TRUE),
    answer("symmetric", message=m2)
  )
)
```

## Section 5, Computing Probabilities

For each of the questions below, you will need to calculate some probabilities by hand. You may want to use this "scratch paper" code chunk (possibly in conjunction with actual paper).

```{r Probabilities-scratch, exercise=TRUE}

```

```{r Probabilities_probs}
m1 <- "$P(\\text{Coffee} \\cap \\text{Milk}) = P(\\text{Coffee})\\cdot P(\\text{Milk}) = 0.5 \\cdot (1-0.1) = 0.45$"

m2 <- "Let H be the event of hypertension and M be the event of being male. We see here that $P(H) = 0.15$ whereas $P(H|M) = 0.18$. Since $P(H) \\neq P(H|M)$, hypertension is not independent of sex."

m3 <- "$P(HIV \\cap HCV) = P(HIV|HCV)\\cdot P(HCV) = 0.1\\cdot 0.02 = 0.002$"

quiz(
  question("Suppose in a population, half prefer coffee to tea, and assume that 10 percent of the population prefers no milk in their coffee or tea. If coffee vs. tea preference and milk use are independent, what fraction of the population both prefers coffee and puts milk in their coffee?",
    answer("40%", message=m1),
    answer("45%", correct = TRUE, message=m1),
    answer("50%", message=m1),
    answer("55%", message=m1)
  ),
  question("In the general population, about 15 percent of adults between 25 and 40 years of age are hypertensive. Suppose that among males of this age, hypertension occurs about 18 percent of the time. Is hypertension independent of sex?",
    answer("Yes", message=m2),
    answer("No", correct=TRUE, message=m2)
  ),
  question("Co-infection with HIV and hepatitis C (HCV) occurs when a patient has both diseases, and is on the rise in some countries.
Assume that in a given country, only about 2% of the population has HCV, but 25% of the population with HIV have HCV. Assume as well that 10% of the population with HCV have HIV. What is the probability that a randomly chosen member of the population has both HIV and HCV?",
    answer("0.001", message=m3),
    answer("0.01", message=m3),
    answer("0.002", correct=TRUE, message=m3),
    answer("0.02", message=m3)
  )
  #question("What might you search for (in Google, your notes, the OpenIntro PDF, etc.) to help with this question?",
  #  answer("t test"),
  #  answer("laws of probability", correct=TRUE),
  #  answer("linear regression"),
  #  answer("R debugging")
  #)
)
```

### Biostats Example

This question is adapted from a biostats midterm exam.

In the past (2015, to be specific), the US Preventive Services Task Force recommended that women under the age of 50 should not get routine mammogram screening for breast cancer. The Task Force argued that for a woman with a positive mammogram (one suggesting the presence of breast cancer), the chance that she has breast cancer was too low to justify a surgical biopsy.

Suppose the data below describe a cohort of 100,000 women aged 40-49 in whom mammogram screening and breast cancer behave just like in the larger population. For instance, in this table, the 3,333 women with breast cancer represent a rate of 1 in 30 women with undiagnosed cancer. The numbers in the table are realistic for US women in this age category.

|                           | Positive test result | Negative test result |
|---------------------------|---------------------:|---------------------:|
| Have breast cancer        |                3,296 |                   37 |
| Do not have breast cancer |                8,313 |               88,354 |

First, compute the "margins" of the above contingency table.

* Row margins: How many total women have breast cancer? How many total women do not have breast cancer?
* Column margins: How many total positive tests? How many total negative tests?

```{r Probabilities_mammogram-chunk, exercise=TRUE}

```

```{r Probabilities_mammogram}
m1 <- "$\\Pr(\\textrm{Test}^+|\\textrm{Cancer}) = \\dfrac{\\Pr(\\textrm{Test}^+ \\cap \\textrm{Cancer})}{\\Pr(\\textrm{Cancer})} = \\dfrac{3,296/100,000}{3,333/100,000} = \\dfrac{3,296}{3,333} = 0.989$"

m2 <- "$\\Pr(\\textrm{Cancer}|\\textrm{Test}^+) = \\dfrac{\\Pr(\\textrm{Cancer} \\cap \\textrm{Test}^+)}{\\Pr(\\textrm{Test}^+)} = \\dfrac{3,296/100,000}{11,609/100,000} = \\dfrac{3,296}{11,609} = 0.284$"

quiz(
  question("Based on these data, what is the probability that a woman has a positive test, given that she has cancer?",
    answer("98.9%", correct = TRUE, message=m1),
    answer("99.9%", message=m1),
    answer("89.9%", message=m1),
    answer("88.9%", message=m1)
  ),
  question("Based on these data, what is the probability that a woman who receives a positive test has cancer?",
    answer("28.4%", correct = TRUE, message=m2),
    answer("10.3%", message=m2),
    answer("50.7%", message=m2),
    answer("97.9%", message=m2)
  ),
  question("Is the Task Force correct to claim that there is a low probability that a woman between 40 and 49 who tests positive has breast cancer?",
    answer("Yes", correct=TRUE),
    answer("No")
  )
)
```

## Answer Report

Finally, let's generate a report that summarizes your answers to this evaluation. Answers are written to a file named like this: `question_submission_{CURRENT TIME}.csv`. Take note of this csv file: this is what you will submit to Canvas. Your answers are also stored in the R session's global environment as a data.frame called `df`. Run the code chunk below to see what `df` looks like.
```{r report1, exercise=TRUE}
df
```

To check the proportion of your answers that were correct:

```{r report2, exercise=TRUE}
mean(df$correct)
```

To check the proportion of correct answers by section:

```{r report3, exercise=TRUE}
df %>%
  group_by(category) %>%
  summarize(avg = mean(correct))
```
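If you'd like to visualize (rather than just tabulate) your performance by section, here is one possible sketch using `ggplot2`. It assumes the `category` and `correct` columns shown above; the chunk label and axis labels are just illustrative.

```{r report4, exercise=TRUE}
# Plot the share of correct answers in each section (category) of the assessment.
df %>%
  group_by(category) %>%
  summarize(avg = mean(correct)) %>%
  ggplot(aes(x = category, y = avg)) +
  geom_col() +
  labs(x = "Section", y = "Proportion correct")
```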