---
title: "ADA1: Cumulative project file"
subtitle: "Name your project"
author: "Your Name Here"
date: "`r format(Sys.time(), '%B %d, %Y')`"
output:
  html_document:
    toc: true
    number_sections: true
    toc_depth: 5
    code_folding: hide
    df_print: paged
    #toc_float: true
    #collapsed: false
    #smooth_scroll: TRUE
    theme: cosmo #spacelab #yeti #united
    highlight: tango
  pdf_document:
    df_print: kable
    #latex_engine: xelatex
    #sansfont: IBM Plex Sans
    #classoption: landscape
fontsize: 12pt
geometry: margin=0.25in
always_allow_html: yes
---
----------------------------------------
# Document overview
This document is organized by Week and Class number.
The in-class assignments are indicated by the Tuesday and Thursday Class numbers.
Each week's homework is often a combination of Tuesday and Thursday,
with a small extension.
Therefore, "fleshing out" the Tuesday and Thursday sections with a little addition
is often sufficient for your homework assignment;
that is, you won't need a separate "homework" section for a week but
just extend the in-class assignments.
Rarely, the homework assignment is different from the in-class assignments
and requires its own section in this document.
Consider your readers (graders):
* organize the document clearly (use this document as an example)
* label minor sections under each day (use this document as an example)
* For each thing you do, always have these three parts:
1. Say what you're going to do and why.
2. Do it with code, and document your code.
3. Interpret the results.
## Global code options
```{R}
# I set some GLOBAL R chunk options here.
# (to hide this message add "echo=FALSE" to the code chunk options)
# In particular, see the fig.height and fig.width (in inches)
# and notes about the cache option.
knitr::opts_chunk$set(comment = NA, message = FALSE, warning = FALSE)
options(width = 100)  # output width (an R option, not a chunk option)
knitr::opts_chunk$set(fig.align = "center", fig.height = 4, fig.width = 6)
# Note: The "cache=TRUE" option will save the computations of code chunks
# so R doesn't recompute everything every time you recompile.
# This can save _tons of time_ if you're working on a small section of code
# at the bottom of the document.
# Code chunks will be recompiled only if they are edited.
# The autodep=TRUE will also update dependent code chunks.
# A folder is created with the cache in it -- if you delete the folder, then
# all the code chunks will recompute again.
# ** If things are not working as expected, or I want to freshly compute everything,
# I delete the *_cache folder.
knitr::opts_chunk$set(cache = FALSE) #, autodep=TRUE) #$
```
## Document
### Naming
*Note: Each class, save this file with a new name,
updating the last two digits to the class number.
Then, you'll have a record of your progress,
as well as which files you turned in for grading.*
* `ADA1_ALL_05.Rmd`
* `ADA1_ALL_06.Rmd`
* `ADA1_ALL_07.Rmd` ...
A version I prefer uses a Year-Month-Day date, `YYYYMMDD`:
* `ADA1_ALL_20190903.Rmd`
* `ADA1_ALL_20190905.Rmd`
* `ADA1_ALL_20190910.Rmd` ...
### Structure
Starting in Week03, we will concatenate all our Homework assignments together to retain the
relevant information needed for subsequent classes.
You will also have an opportunity to revisit previous parts to make changes or improvements,
such as updating your codebook, modifying your research questions, or improving tables and plots.
I've provided an initial suggested organization of our
sections and subsections using the \# and \#\# symbols.
A table of contents is automatically generated by `toc: true` in the YAML header,
and the headings in the table of contents are clickable, jumping down
to each (sub)section.
----------------------------------------
# Research Questions
## Week01: Personal Codebook
### Class 02 Rmd, codebook
### Codebook
```
Copy your CODEBOOK here
```
Additional variables were created from the original variables:
```
CREATED VARIABLES
```
----------------------------------------
## Week02: Literature Review
### Tuesday ---------
### Class 03 Research questions
Copy your research question assignment here.
### Thursday ---------
### Class 04 Citations and Literature review
### Citations
Copy your citations assignment here.
### Week 02 Homework Literature review
Copy your literature review assignment here.
----------------------------------------
# Data Management
## Week03: Data Subset, Univariate Summaries And Plots
### Background
#### Purpose of study
#### Variables
### Tuesday ---------
### Data subset
Starting today, work directly in this document so that you always have all your
previous work here.
First, the data is loaded into the workspace.
```{R}
# data analysis packages
library(tidyverse) # Data manipulation and visualization suite
library(forcats) # Factor variables
library(lubridate) # Dates
## 1. Download the ".RData" file for your dataset into your ADA Folder.
## 2. Use the load() statement for the dataset you want to use.
##
## load("AddHealth.RData")
## load("addhealth_public4.RData")
## load("NESARC.RData")
# read data example
#load("NESARC.RData")
#dim(NESARC)
```
### Renaming Variables
### Coding missing values
There are two steps.
The first step is to recode any existing `NA`s to actual values, if necessary.
The method for doing this differs for numeric and categorical variables.
The second step is to recode any coded missing values, such as 9s or 99s, as actual `NA`.
#### Coding `NA`s as meaningful "missing"
First step: some existing blank (`NA`) values actually mean "never",
and "never" has a meaning different from "missing".
For each variable, we need to decide what "never" means
and code it appropriately.
##### `NA`s recoded as numeric
##### `NA`s recoded as categorical
#### Coding 9s and 99s as `NA`s
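A minimal sketch of both steps, assuming a hypothetical data frame `dat`
with a numeric variable `num_var` (where a blank means "never" and 99 is a
missing-value code) and a categorical variable `cat_var` (where a blank means
"Never"); substitute your own variables and codes:

```{r}
library(dplyr)    # mutate(), na_if()
library(tidyr)    # replace_na()
library(forcats)  # fct_na_value_to_level() (forcats >= 1.0.0)

# hypothetical example data
dat <- tibble(
  num_var = c(2, NA, 5, 99, 3),
  cat_var = factor(c("Yes", NA, "No", "Yes", "No"))
)

dat <- dat %>%
  mutate(
    # Step 1: existing NAs carry meaning ("never"), so recode them to values
    num_var = replace_na(num_var, 0),                            # numeric: NA -> 0 times
    cat_var = fct_na_value_to_level(cat_var, level = "Never"),   # factor: NA -> "Never" level
    # Step 2: coded missing values (here, 99) become actual NAs
    num_var = na_if(num_var, 99)
  )
summary(dat)
```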
### Creating new variables
#### From categories to numeric
#### From numeric to numeric
#### From numeric to categories based on quantiles
#### From many categories to a few
#### Review results of new variables
### Labeling Categorical variable levels
### Data subset rows
--------------------------------------------------------------------------------
# Graphing and Tabulating
### Thursday ---------
### Categorical variables
#### Tables for categorical variables
#### Graphing frequency tables
### Numeric variables
#### Graphing numeric variables
#### Creating Density Plots
----------------------------------------
## Week04: Bivariate graphs
### Tuesday ---------
### Scatter plot (for regression): x = numerical, y = numerical
### Box plots (for ANOVA): x = categorical, y = numerical
### Thursday ---------
### Mosaic plot or bivariate bar plots (for contingency tables): x = categorical, y = categorical
### Logistic scatter plot (for logistic regression): x = numerical, y = categorical (binary)
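As a hedged sketch of the first two plot types with `ggplot2`
(`x_num`, `x_cat`, and `y_num` are simulated stand-ins; substitute your own variables):

```{r}
library(ggplot2)

# hypothetical example data (stand-ins for your own variables)
set.seed(76)
dat <- data.frame(
  x_num = rnorm(100),
  x_cat = sample(c("A", "B", "C"), 100, replace = TRUE)
)
dat$y_num <- 2 * dat$x_num + rnorm(100)

# scatter plot (for regression): x = numerical, y = numerical
ggplot(dat, aes(x = x_num, y = y_num)) +
  geom_point()

# box plots (for ANOVA): x = categorical, y = numerical
ggplot(dat, aes(x = x_cat, y = y_num)) +
  geom_boxplot()
```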
# Statistical methods
## Week05: Simple linear regression, logarithm transformation
### Tuesday ---------
### 1. Scatter plot, add a regression line.
### 2. Fit the linear regression, interpret slope and $R^2$ (R-squared) values
### 3. Interpret the intercept. Does it make sense?
### Thursday ---------
### 4. Try plotting on log scale ($x$-only, $y$-only, both).
### 5. Does log transformation help?
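The steps above can be sketched as follows (simulated data; replace `x` and
`y` with your own variables):

```{r}
library(ggplot2)

# hypothetical example data
set.seed(76)
dat <- data.frame(x = runif(50, 1, 10))
dat$y <- 3 + 2 * dat$x + rnorm(50)

# 1. scatter plot with a regression line added
ggplot(dat, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)

# 2./3. fit the linear regression; the slope, intercept, and R^2 are in the summary
fit <- lm(y ~ x, data = dat)
summary(fit)

# 4. log-scale versions: replace x by log(x), y by log(y), or both, then refit
# ggplot(dat, aes(x = log(x), y = y)) + geom_point()
```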
## Week06: Correlation and Categorical contingency tables
### Tuesday ---------
### Correlation
### Interpretation of correlation
### Thursday ---------
### Contingency table
## Week07: Inference and Parameter estimation (one-sample)
### Tuesday ---------
### Dataset description of sampling
```{R}
#### Visual comparison of whether sampling distribution is close to Normal via Bootstrap
# a function to compare the bootstrap sampling distribution with
# a normal distribution with mean and SEM estimated from the data
bs.one.samp.dist <- function(dat, N = 1e4) {
  n <- length(dat)
  # resample from data
  sam <- matrix(sample(dat, size = N * n, replace = TRUE), ncol = N)
  # calculate the means
  sam.mean <- colMeans(sam)
  # save par() settings
  old.par <- par(no.readonly = TRUE)
  # make smaller margins
  par(mfrow = c(2, 1), mar = c(3, 2, 2, 1), oma = c(1, 1, 1, 1))
  # histogram overlaid with kernel density curve
  hist(dat, freq = FALSE, breaks = 6
    , main = "Plot of data with smoothed density curve")
  points(density(dat), type = "l")
  rug(dat)
  hist(sam.mean, freq = FALSE, breaks = 25
    , main = "Bootstrap sampling distribution of the mean"
    , xlab = paste("Data: n =", n
                 , ", mean =", signif(mean(dat), digits = 5)
                 , ", se =", signif(sd(dat) / sqrt(n), digits = 5)))
  # overlay a density curve for the sample means
  points(density(sam.mean), type = "l")
  # overlay a normal distribution, bold and red
  x <- seq(min(sam.mean), max(sam.mean), length = 1000)
  points(x, dnorm(x, mean = mean(dat), sd = sd(dat) / sqrt(n))
    , type = "l", lwd = 2, col = "red")
  # place a rug of points under the plot
  rug(sam.mean)
  # restore par() settings
  par(old.par)
}
#### Visual comparison of whether sampling distribution is close to Normal via Bootstrap
# a function to compare the bootstrap sampling distribution
# of the difference of means from two samples with
# a normal distribution with mean and SEM estimated from the data
bs.two.samp.diff.dist <- function(dat1, dat2, N = 1e4) {
  n1 <- length(dat1)
  n2 <- length(dat2)
  # resample from data
  sam1 <- matrix(sample(dat1, size = N * n1, replace = TRUE), ncol = N)
  sam2 <- matrix(sample(dat2, size = N * n2, replace = TRUE), ncol = N)
  # calculate the means and take difference between populations
  sam1.mean <- colMeans(sam1)
  sam2.mean <- colMeans(sam2)
  diff.mean <- sam1.mean - sam2.mean
  # save par() settings
  old.par <- par(no.readonly = TRUE)
  # make smaller margins
  par(mfrow = c(3, 1), mar = c(3, 2, 2, 1), oma = c(1, 1, 1, 1))
  # histogram overlaid with kernel density curve
  hist(dat1, freq = FALSE, breaks = 6
    , main = paste("Sample 1", "\n"
                 , "n =", n1
                 , ", mean =", signif(mean(dat1), digits = 5)
                 , ", sd =", signif(sd(dat1), digits = 5))
    , xlim = range(c(dat1, dat2)))
  points(density(dat1), type = "l")
  rug(dat1)
  hist(dat2, freq = FALSE, breaks = 6
    , main = paste("Sample 2", "\n"
                 , "n =", n2
                 , ", mean =", signif(mean(dat2), digits = 5)
                 , ", sd =", signif(sd(dat2), digits = 5))
    , xlim = range(c(dat1, dat2)))
  points(density(dat2), type = "l")
  rug(dat2)
  hist(diff.mean, freq = FALSE, breaks = 25
    , main = paste("Bootstrap sampling distribution of the difference in means", "\n"
                 , "mean =", signif(mean(diff.mean), digits = 5)
                 , ", se =", signif(sd(diff.mean), digits = 5)))
  # overlay a density curve for the sample means
  points(density(diff.mean), type = "l")
  # overlay a normal distribution, bold and red
  x <- seq(min(diff.mean), max(diff.mean), length = 1000)
  points(x, dnorm(x, mean = mean(diff.mean), sd = sd(diff.mean))
    , type = "l", lwd = 2, col = "red")
  # place a rug of points under the plot
  rug(diff.mean)
  # restore par() settings
  par(old.par)
}
```
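A hedged usage example for the two functions defined in the chunk above
(that chunk must be run first); `y1` and `y2` are simulated stand-ins for
your own numeric variables:

```{r}
# assumes bs.one.samp.dist() and bs.two.samp.diff.dist() from the chunk above
set.seed(76)
y1 <- rexp(40, rate = 1)    # a skewed sample
y2 <- rexp(35, rate = 1.5)  # a second skewed sample

bs.one.samp.dist(y1)           # is the sampling distribution of the mean close to normal?
bs.two.samp.diff.dist(y1, y2)  # same question for the difference in means
```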
### Thursday ---------
### Numeric variable confidence interval for mean $\mu$
### Categorical variable confidence interval for proportion $p$
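Hedged sketches of both intervals with base R's `t.test()` and `prop.test()`
(simulated data; substitute your own variables):

```{r}
set.seed(76)

# numeric variable: 95% CI for the mean mu
y <- rnorm(50, mean = 10, sd = 2)   # hypothetical numeric variable
t.test(y)$conf.int

# categorical variable: 95% CI for the proportion p of "Yes" responses
x_cat <- sample(c("Yes", "No"), 80, replace = TRUE)  # hypothetical binary variable
prop.test(x = sum(x_cat == "Yes"), n = length(x_cat))$conf.int
```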
## Week08: Hypothesis testing (one- and two-sample)
### Mechanics of a hypothesis test (review)
1. Set up the __null and alternative hypotheses__ in words and notation.
* In words: "The population mean for [what is being studied] is different from [value of $\mu_0$]."
(Note that the statement in words is in terms of the alternative hypothesis.)
* In notation: $H_0: \mu=\mu_0$ versus $H_A: \mu \ne \mu_0$
(where $\mu_0$ is specified by the context of the problem).
2. Choose the __significance level__ of the test, such as $\alpha=0.05$.
3. Compute the __test statistic__, such as $t_{s} = \frac{\bar{Y}-\mu_0}{SE_{\bar{Y}}}$, where $SE_{\bar{Y}}=s/\sqrt{n}$ is the standard error.
4. Determine the __tail(s)__ of the sampling distribution where the __$p$-value__ from the test statistic will be calculated
(for example, both tails, right tail, or left tail).
(Historically, we would compare the observed test statistic, $t_{s}$,
with the __critical value__ $t_{\textrm{crit}}=t_{\alpha/2}$
in the direction of the alternative hypothesis from the
$t$-distribution table with degrees of freedom $df = n-1$.)
5. State the __conclusion__ in terms of the problem.
* Reject $H_0$ in favor of $H_A$ if $p\textrm{-value} < \alpha$.
* Fail to reject $H_0$ if $p\textrm{-value} \ge \alpha$.
(Note: We DO NOT _accept_ $H_0$.)
6. __Check assumptions__ of the test (for now we skip this).
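The mechanics above can be sketched with base R's `t.test()` (simulated data;
`y` and `mu0` are hypothetical stand-ins for your variable and your $\mu_0$):

```{r}
set.seed(76)
y   <- rnorm(40, mean = 10.8, sd = 2)  # hypothetical sample
mu0 <- 10                              # null value mu_0 from the problem context

# two-sided test of H0: mu = mu0 vs HA: mu != mu0, alpha = 0.05
tt <- t.test(y, mu = mu0, alternative = "two.sided")
tt$statistic  # t_s = (ybar - mu0) / (s / sqrt(n))
tt$p.value    # reject H0 in favor of HA if this is < alpha

# the same test statistic computed by hand
t_s <- (mean(y) - mu0) / (sd(y) / sqrt(length(y)))
```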
### What do we do about "significance"?
Adapted from **[Significance Magazine](https://rss.onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2019.01295.x)**.
Recent calls have been made to abandon the term "statistical significance".
The American Statistical Association (ASA) issued its
[statement](https://www.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108) and
[recommendation](https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913)
on p-values (see the [special issue of p-values](https://www.tandfonline.com/toc/utas20/73/sup1?nav=tocList) for more).
In summary, the problem of "significance" is one of misuse, misunderstanding, and misinterpretation.
The recommendation in this class is that it is no longer sufficient to say that a
result is "statistically significant" or "non-significant" depending on whether a p-value is less than a threshold.
Instead, we will be looking for wording as in the following paragraph.
"The difference between the two groups turns out to be small (8%), while the
probability ($p$) of observing a result at least as extreme as this under the
null hypothesis of no difference between the two groups is $p = 0.003$ (that is,
0.3%). This p-value is statistically significant as it is below our pre-defined
threshold ($p < 0.05$). However, the p-value tells us only that the 8% difference
between the two groups is somewhat unlikely given our hypothesis and null model's
assumptions. More research is required, or other considerations may be needed,
to conclude that the difference is of practical importance and reproducible."
### Tuesday ---------
### Two-sample $t$-test
### Thursday ---------
Enjoy your Fall Break!
## Week09: ANOVA and Assessing Assumptions
### Tuesday ---------
### Thursday ---------
### ANOVA: Total cigarettes smoked by Ethnicity
#### Transform the response variable to satisfy assumptions
#### ANOVA Hypothesis test
#### Check assumptions
#### Post Hoc pairwise comparison tests
## Week10: Nonparametric methods and Binomial and multinomial proportion tests
### Tuesday ---------
### Thursday ---------
### Multinomial goodness-of-fit
#### Observed
#### Expected
### Perform $\chi^2$ Goodness-of-fit test
#### Chi-sq statistic helps us understand the deviations
#### Multiple Comparisons
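A hedged sketch of the goodness-of-fit test with base R's `chisq.test()`
(hypothetical observed counts and null proportions; substitute your own):

```{r}
# hypothetical observed counts for a 4-level categorical variable
observed <- c(A = 20, B = 30, C = 25, D = 25)
p0       <- rep(1/4, 4)   # null hypothesis: equal proportions

fit <- chisq.test(x = observed, p = p0)
fit
fit$expected   # expected counts under H0
fit$residuals  # Pearson residuals show which cells deviate most from H0
```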
## Week11: Two-way categorical tables and Simple linear regression, inference
### Tuesday ---------
### Two-way categorical analysis.
### Thursday ---------
### Simple linear regression.
## Week12: Logistic regression and Experiments vs Observational studies
### Tuesday ---------
### Logistic Regression
### Thursday ---------
### Experiments and observational studies
----------------------------------------
# Poster presentation
## Week13: Complete poster in HW document
### Tuesday ---------
### Thursday ---------
#### Title
#### 1. __(1 p)__ Introduction
#### 2. __(1 p)__ Research Questions
#### 3. __(1 p)__ Methods
#### 4. __(1 p)__ Discussion (while this would follow your results, let's put it here so you have a full column to show the results of the analysis of both research questions)
#### 5. __(1 p)__ Further directions or Future work or Next steps or something else that indicates there is more to do and you've thought about it.
#### 6. __(1 p)__ References
* By citing sources in your introduction, this section will automatically have your bibliography.
* [Bibliography will go here -- it's currently at the bottom of the document]
#### 7. __(2 p)__ Results for your first research question.
#### 8. __(2 p)__ Results for your second research question.
## Week14: Posters, finishing up
### Tuesday ---------
### Thursday ---------
## Week15: Posters, final touches
### Tuesday ---------
### Thursday ---------
*Thanksgiving break. Remember to print your posters*
## Week16: Posters, presentations
### Tuesday ---------
### Thursday ---------
# References (from Week02)