- 1 Document overview
- 2 Research Questions
- 3 Data Management
- 3.1 Class 05, Data subset and numerical summaries
- 3.2 Data is complete (Class 06)

- 4 Graphing and Tabulating
- 5 Statistical methods
- 5.1 Class 09, Simple linear regression (separate worksheet)
- 5.2 Class 10, Simple linear regression
- 5.3 Class 11, Logarithm transformation (separate worksheet)
- 5.4 Class 12, Logarithm transformation
- 5.5 Class 13, Correlation (separate worksheet)
- 5.6 Class 14, Categorical contingency tables (separate worksheet)
- 5.7 Class 15, Correlation and Categorical contingency tables
- 5.8 Class 16, Parameter estimation (one-sample) (separate worksheet)
- 5.9 Class 17, Inference and Parameter estimation (one-sample)
- 5.10 Class 18, Hypothesis testing (one- and two-sample) (separate worksheet)
- 5.11 Class 19, Paired data, assumption assessment (separate worksheet)
- 5.12 Class 20, Hypothesis testing (one- and two-sample)
- 5.13 Class 21, ANOVA, Pairwise comparisons (separate worksheet)
- 5.14 Class 22, ANOVA and Assessing Assumptions
- 5.15 Class 23, Nonparametric methods (separate worksheet)
- 5.16 Class 24, Binomial and Multinomial tests (separate worksheet)
- 5.17 Class 25, Two-way categorical tables (separate worksheet)
- 5.18 Class 26, Simple linear regression (separate worksheet)
- 5.19 Class 27, Two-way categorical and simple linear regression
- 5.20 Class 28, Logistic regression (separate worksheet)
- 5.21 Class 29, Logistic regression

This document is organized by Week and Class number. The worksheet assignments are indicated by the Tuesday and Thursday Class numbers.

Consider your readers (graders):

- Organize the document clearly (use this document as an example).
- Label minor sections under each day (use this document as an example).
- For each thing you do, always have these three parts:
  - Say what you're going to do and why.
  - Do it with code, and document your code.
  - Interpret the results.

```
# I set some GLOBAL R chunk options here.
# (to hide this message add "echo=FALSE" to the code chunk options)
# In particular, see the fig.height and fig.width (in inches)
# and notes about the cache option.
knitr::opts_chunk$set(comment = NA, message = FALSE, warning = FALSE, width = 100)
knitr::opts_chunk$set(fig.align = "center", fig.height = 4, fig.width = 6)
# Note: The "cache=TRUE" option will save the computations of code chunks
# so R doesn't recompute everything every time you recompile.
# This can save _tons of time_ if you're working on a small section of code
# at the bottom of the document.
# Code chunks will be recomputed only if they are edited.
# The autodep=TRUE option will also update dependent code chunks.
# A folder is created with the cache in it -- if you delete the folder, then
# all the code chunks will recompute again.
# ** If things are not working as expected, or I want to freshly compute
# everything, I delete the *_cache folder.
knitr::opts_chunk$set(cache = FALSE) #, autodep=TRUE) #$
```

*Note: Each class, save this file with a new name, updating the last two digits to the class number. Then you'll have a record of your progress, as well as of which files you turned in for grading.*

`ADA1_ALL_05.Rmd`

`ADA1_ALL_06.Rmd`

`ADA1_ALL_07.Rmd`

…

A version that I prefer is to use a date in Year-Month-Day format, `YYYYMMDD`:

`ADA1_ALL_20200903.Rmd`

`ADA1_ALL_20200905.Rmd`

`ADA1_ALL_20200910.Rmd`

…

We will include all of our assignments together in this document to retain the relevant information needed for subsequent assignments, since our analysis is cumulative. You will also have an opportunity to revisit previous parts to make changes or improvements, such as updating your codebook, recoding variables, and improving tables and plots. I've provided an initial predicted organization of our sections and subsections using the # and ## symbols. A table of contents is automatically generated using `toc: true` in the YAML header, and the headings in the table of contents are clickable links that jump down to each (sub)section.

These class assignments are in their own documents: 09, 11, 13, 14

**Rubric**

(1 p) Is there a topic of interest?

(2 p) Are the variables relevant to a set of research questions?

(4 p) Are there at least 2 categorical and 2 numerical variables (at least 4 “data” variables)?

- 1 categorical variable with only 2 levels
- 1 categorical variable with at least 3 levels
- 2 numerical variables with many possible unique values
- More variables are welcome and you’re likely to add to this later in the semester

(3 p) For each variable, is there a variable description, a data type, and coded value descriptions?

Compile this Rmd file to an html, print/save to pdf, and upload to UNM Learn.

**Topic:**

As you select variables from the bottom of this document, a general topic should reveal itself to you.

**Research questions:**

Question 1

Question 2

Question 3

National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III)

- Codebook: https://statacumen.com/teach/ADA1/PDS_data/NESARC_W1_CodeBook.pdf
- Official site: https://www.niaaa.nih.gov/research/nesarc-iii
- Introduction: https://pubs.niaaa.nih.gov/publications/arh29-2/74-78.htm

```
Dataset: NESARC
Primary association: nicotine dependence vs frequency and quantity of smoking
Key:
RenamedVarName
VarName original in dataset
Variable description
Data type (Continuous, Discrete, Nominal, Ordinal)
Frequency ItemValue Description
ID
IDNUM
UNIQUE ID NUMBER WITH NO ALPHABETICS
Nominal
43093 1-43093. Unique Identification number
Sex
SEX
SEX
Nominal
18518 1. Male
24575 2. Female
Age
AGE
AGE
Continuous
43079 18-97. Age in years
14 98. 98 years or older
Height_ft
S1Q24FT
HEIGHT: FEET
Continuous
42363 4-7. Feet
730 99. Unknown
* change 99 to NA
Height_in
S1Q24IN
HEIGHT: INCHES
Continuous
3572 0. None
38760 1-11. Inches
761 99. Unknown
* change 99 to NA
Weight_lb
S1Q24LB
WEIGHT: POUNDS
Continuous
41717 62-500. Pounds
1376 999. Unknown
* change 999 to NA
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
```
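The codebook's "change 99 to NA" notes can be carried out with `dplyr::na_if()`. A minimal sketch, assuming the renamed codebook variables live in a data frame called `dat` (a hypothetical name):

```
library(dplyr)

# Recode the codebook's "Unknown" codes as missing
# (assumes a data frame `dat` containing the renamed variables above)
dat <-
  dat %>%
  mutate(
    Height_ft = na_if(Height_ft, 99)    # 99  = Unknown
  , Height_in = na_if(Height_in, 99)    # 99  = Unknown
  , Weight_lb = na_if(Weight_lb, 999)   # 999 = Unknown
  )
```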

Additional variables were created from the original variables:

```
CREATED VARIABLES
Height_inches
Total height in inches
Height_ft * 12 + Height_in
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE (If you think you'll combine or transform any variables)
ADD MORE HERE
ADD MORE HERE
ADD MORE HERE
```
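The created variable above can be computed with a single `mutate()`; a sketch assuming the data frame is named `dat` (a hypothetical name):

```
library(dplyr)

# Total height in inches from the feet and inches components
dat <-
  dat %>%
  mutate(Height_inches = Height_ft * 12 + Height_in)
```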

**Rubric**

(4 p) The data are loaded and a data.frame subset is created by selecting only the variables in the personal codebook.

- Scroll down to sections labeled “(Class 05)”.

(1 p) Output confirms the subset is correct (e.g., using `dim()` and `str()`).

(3 p) Rename your variables to descriptive names (e.g., from “S3AQ3B1” to “SmokingFreq”).

- Scroll down to sections labeled “(Class 05)”.

(2 p) Provide numerical summaries for all variables (e.g., using `summary()`).

- Scroll down to sections labeled “(Class 05)”.

First, the data are loaded into the R workspace.

```
# data analysis packages
library(tidyverse) # Data manipulation and visualization suite
library(lubridate) # Dates
## 1. Download the ".RData" file for your dataset into your ADA Folder.
## 2. Use the load() statement for the dataset you want to use.
# read data example
#load("NESARC.RData")
#dim(NESARC)
```
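Once `NESARC` is loaded, the subset-and-rename steps from the rubric might look like the following sketch. The original variable names are taken from the codebook above; `dat` is a hypothetical name for the subset:

```
library(dplyr)

# Keep only the codebook variables and give them descriptive names
dat <-
  NESARC %>%
  select(IDNUM, SEX, AGE, S1Q24FT, S1Q24IN, S1Q24LB) %>%
  rename(
    ID        = IDNUM
  , Sex       = SEX
  , Age       = AGE
  , Height_ft = S1Q24FT
  , Height_in = S1Q24IN
  , Weight_lb = S1Q24LB
  )

# Confirm the subset is correct
dim(dat)     # number of rows and columns
str(dat)     # variable names and types
summary(dat) # numerical summaries for all variables
```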

There are two steps. The first step is to recode any existing `NA`s to actual values, if necessary. The method for doing this differs for numeric and categorical variables. The second step is to recode any coded missing values, such as 9s or 99s, as actual `NA`s.

**`NA`s as meaningful “missing”**

First step: the existing blank values with `NA` mean “never”, and “never” has a meaning different from “missing”. For each variable we need to decide what “never” means and code it appropriately.

**`NA`s recoded for numeric variables**

**`NA`s recoded for categorical variables**

**Rubric**

(3 p) For one categorical variable, a barplot is plotted with axis labels and a title. Interpret the plot: describe the relationship between categories you observe.

(3 p) For one numerical variable, a histogram or boxplot is plotted with axis labels and a title. Interpret the plot: describe the distribution (shape, center, spread, outliers).

(2 p) Code missing values, remove records with missing values, and indicate with R output that this was done correctly (e.g., `str()`, `dim()`, `summary()`).

- Scroll up to sections labeled “(Class 06)”.

(2 p) Label levels of factor variables.

- Scroll up to sections labeled “(Class 06)”.
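As a sketch of the factor-labeling and plotting steps, using variables from the codebook (assumes the subset data frame is named `dat`, a hypothetical name):

```
library(ggplot2)

# Label the levels of a factor variable (codes from the codebook)
dat$Sex <- factor(dat$Sex, levels = c(1, 2), labels = c("Male", "Female"))

# Barplot for one categorical variable, with axis labels and a title
ggplot(dat, aes(x = Sex)) +
  geom_bar() +
  labs(x = "Sex", y = "Count", title = "Number of respondents by sex")

# Histogram for one numerical variable
ggplot(dat, aes(x = Age)) +
  geom_histogram(binwidth = 5) +
  labs(x = "Age (years)", y = "Count", title = "Distribution of age")
```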

**Rubric**

Each of the following (2 p for plot, 2 p for labelled axes and title, 1 p for interpretation):

Scatter plot (for regression): \(x\) = numerical, \(y\) = numerical, include axis labels and a title. Interpret the plot: describe the relationship.

Box plots (for ANOVA): \(x\) = categorical, \(y\) = numerical, include axis labels and a title. Interpret the plot: describe the relationship.
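These two plot types might be sketched as follows, using hypothetical variables from the codebook (assumes a data frame `dat` with `Height_inches` created earlier):

```
library(ggplot2)

# Scatter plot (for regression): numerical x, numerical y
ggplot(dat, aes(x = Height_inches, y = Weight_lb)) +
  geom_point(alpha = 0.2) +
  labs(x = "Height (inches)", y = "Weight (pounds)",
       title = "Weight versus height")

# Box plots (for ANOVA): categorical x, numerical y
ggplot(dat, aes(x = Sex, y = Weight_lb)) +
  geom_boxplot() +
  labs(x = "Sex", y = "Weight (pounds)", title = "Weight by sex")
```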

**Rubric**

Each of the following (2 p for plot, 2 p for labelled axes and title, 1 p for interpretation):

Mosaic plot or bivariate bar plots (for contingency tables): \(x\) = categorical, \(y\) = categorical, include axis labels and a title. Interpret the plot: describe the relationship.

Logistic scatter plot (for logistic regression): \(x\) = numerical, \(y\) = categorical (binary), include axis labels and a title. Interpret the plot: describe the relationship.

**Rubric**

- With your previous (or new) bivariate scatter plot, add a regression line.
  - (2 p) plot with regression line,
  - (1 p) label axes and title.
- Use `lm()` to fit the linear regression and interpret slope and \(R^2\) (R-squared) values.
  - (2 p) lm `summary()` table is presented,
  - (2 p) slope is interpreted with respect to a per-unit increase of the \(x\) variable in the context of the variables in the plot,
  - (2 p) \(R^2\) is interpreted in a sentence.
- (1 p) Interpret the intercept. Does it make sense in the context of your study?
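A sketch of the fit-and-plot steps with `lm()`, using hypothetical variables (assumes a data frame `dat`):

```
# Fit the simple linear regression and present the summary table
fit_lm <- lm(Weight_lb ~ Height_inches, data = dat)
summary(fit_lm)  # slope, intercept, and R-squared for interpretation

# Scatter plot with the fitted regression line
library(ggplot2)
ggplot(dat, aes(x = Height_inches, y = Weight_lb)) +
  geom_point(alpha = 0.2) +
  geom_smooth(method = "lm", se = FALSE) +
  labs(x = "Height (inches)", y = "Weight (pounds)",
       title = "Weight versus height with fitted regression line")
```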

**Rubric**

- Try plotting the data on a logarithmic scale
- (6 p) Each of the logarithmic relationships is plotted, axes are labelled with scale.

- original scales
- \(\log(x)\)-only
- \(\log(y)\)-only
- both \(\log(x)\) and \(\log(y)\)

- What happened to your data when you transformed it?
- (2 p) Describe what happened to the relationship after each log transformation (compare the transformed scale to the original scale; is the relationship more linear or more curved?).
- (1 p) Choose the best scale for a linear relationship and explain why.
- (1 p) Does your relationship benefit from a logarithmic transformation? Say why or why not.

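One way to compare the four scales is with ggplot's log-scale axes; a sketch with placeholder variable names `x_var` and `y_var` (substitute your own):

```
library(ggplot2)

# Base plot (replace x_var and y_var with your variables)
p <- ggplot(dat, aes(x = x_var, y = y_var)) + geom_point()

p                                       # original scales
p + scale_x_log10()                     # log(x) only
p + scale_y_log10()                     # log(y) only
p + scale_x_log10() + scale_y_log10()   # both log(x) and log(y)
```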

**Rubric**

- With your previous (or a new) bivariate scatter plot, calculate the correlation and interpret.
  - (1 p) plot is repeated here, or the plot is referenced and easy to find above,
  - (1 p) correlation is calculated,
  - (2 p) correlation is interpreted (direction, strength of LINEAR relationship).
- With your previous (or a new) two- or three-variable categorical plot, calculate conditional proportions and interpret.
  - (1 p) frequency table of variables is given,
  - (2 p) conditional proportion tables are calculated of the outcome variable conditional on one or two other variables,
  - (1 p) a well-labelled plot of the proportion table is given,
  - (2 p) the conditional proportions are interpreted and compared between conditions.
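A sketch of the two calculations, using hypothetical variables (including a made-up two-level factor `Smoker`):

```
# Correlation between two numerical variables (drop missing pairs)
cor(dat$Height_inches, dat$Weight_lb, use = "complete.obs")

# Frequency table of two categorical variables
tab <- table(dat$Sex, dat$Smoker)
tab

# Conditional proportions of the outcome (Smoker) within each Sex
prop.table(tab, margin = 1)
```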

**Rubric**

- Using a numerical variable, calculate and interpret a confidence interval for the population mean.
  - (1 p) Identify and describe the variable,
  - (1 p) use `t.test()` to calculate the mean and confidence interval, and
  - (1 p) interpret the confidence interval.
  - (2 p) Using plotting code from the last two classes, plot the data, estimate, and confidence interval in a single well-labelled plot.
- Using a two-level categorical variable, calculate and interpret a confidence interval for the population proportion.
  - (1 p) Identify and describe the variable,
  - (1 p) use `binom.test()` to calculate the proportion and confidence interval, and
  - (1 p) interpret the confidence interval.
  - (2 p) Using plotting code from the last two classes, plot the data, estimate, and confidence interval in a single well-labelled plot.
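A sketch of the two interval calculations, using hypothetical variables (assumes `Sex` is a labeled factor):

```
# Confidence interval for a population mean
t.test(dat$Weight_lb)

# Confidence interval for a population proportion,
# here the proportion of female respondents
x <- sum(dat$Sex == "Female", na.rm = TRUE)  # number of "successes"
n <- sum(!is.na(dat$Sex))                    # number of trials
binom.test(x, n)
```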

Set up the **null and alternative hypotheses** in words and notation.

- In words: ``The population mean for [what is being studied] is different from [value of \(\mu_0\)].'' (Note that the statement in words is in terms of the alternative hypothesis.)
- In notation: \(H_0: \mu=\mu_0\) versus \(H_A: \mu \ne \mu_0\) (where \(\mu_0\) is specified by the context of the problem).

Choose the **significance level** of the test, such as \(\alpha=0.05\).

Compute the **test statistic**, such as \(t_{s} = \frac{\bar{Y}-\mu_0}{SE_{\bar{Y}}}\), where \(SE_{\bar{Y}}=s/\sqrt{n}\) is the standard error.

Determine the **tail(s)** of the sampling distribution where the **\(p\)-value** from the test statistic will be calculated (for example, both tails, right tail, or left tail). (Historically, we would compare the observed test statistic, \(t_{s}\), with the **critical value** \(t_{\textrm{crit}}=t_{\alpha/2}\) in the direction of the alternative hypothesis from the \(t\)-distribution table with degrees of freedom \(df = n-1\).)

State the **conclusion** in terms of the problem.

- Reject \(H_0\) in favor of \(H_A\) if \(p\textrm{-value} < \alpha\).
- Fail to reject \(H_0\) if \(p\textrm{-value} \ge \alpha\). (Note: We DO NOT *accept* \(H_0\).)

**Check assumptions** of the test (for now we skip this).
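The steps above map directly onto a single `t.test()` call; a sketch with a hypothetical variable and null value \(\mu_0 = 170\):

```
# One-sample t-test of H0: mu = 170 versus HA: mu != 170, with alpha = 0.05
t.test(dat$Weight_lb, mu = 170, alternative = "two.sided")
# The output reports t (the test statistic), df, and the p-value;
# compare the p-value to alpha to state the conclusion.
```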

Adapted from **Significance Magazine**.

Recent calls have been made to abandon the term “statistical significance”. The American Statistical Association (ASA) issued its statement and recommendation on p-values (see the special issue on p-values for more).

In summary, the problem of “significance” is one of misuse, misunderstanding, and misinterpretation. The recommendation in this class is that it is no longer sufficient to say that a result is “statistically significant” or “non-significant” depending on whether a p-value is less than a threshold. Instead, we will be looking for wording as in the following paragraph.

“The difference between the two groups turns out to be small (8%), while the probability (\(p\)) of observing a result at least as extreme as this under the null hypothesis of no difference between the two groups is \(p = 0.003\) (that is, 0.3%). This p-value is statistically significant as it is below our pre-defined threshold (\(p < 0.05\)). However, the p-value tells us only that the 8% difference between the two groups is somewhat unlikely given our hypothesis and null model’s assumptions. More research is required, or other considerations may be needed, to conclude that the difference is of practical importance and reproducible.”

**Rubric**

- Using a numerical response variable and a two-level categorical variable (or a categorical variable you can reduce to two levels), specify a two-sample \(t\)-test associated with your research questions.
  - (2 p) Specify the hypotheses in words and notation (either one- or two-sided test),
  - (0 p) use `t.test()` to calculate the mean, test statistic, and p-value,
  - (3 p) state the significance level, test statistic, and p-value, and
  - (2 p) state the conclusion in the context of the problem.
  - (1 p) Given your conclusion, could you have committed a Type-I or Type-II error?
  - (2 p) Provide an appropriate plot of the data and sample estimates in a well-labelled plot.
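A sketch of the two-sample test using the formula interface, with hypothetical variables:

```
# Two-sample t-test: numerical response by a two-level factor
t.test(Weight_lb ~ Sex, data = dat)

# Plot of the data and sample estimates (diamonds mark the group means)
library(ggplot2)
ggplot(dat, aes(x = Sex, y = Weight_lb)) +
  geom_boxplot() +
  stat_summary(fun = mean, geom = "point", shape = 18, size = 3) +
  labs(x = "Sex", y = "Weight (pounds)",
       title = "Weight by sex with group means")
```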

**Rubric**

- Using a numerical response variable and a categorical variable with three to five levels (or a categorical variable you can reduce to three to five levels), specify an ANOVA hypothesis associated with your research questions.
  - (1 p) Specify the ANOVA hypotheses in words and notation,
  - (1 p) plot the data in a way that is consistent with the hypothesis test (comparing means, assessing the equal-variance assumption),
  - (1 p) use `aov()` to calculate the hypothesis test statistic and p-value,
  - (1 p) state the significance level, test statistic, and p-value,
  - (1 p) state the conclusion in the context of the problem,
  - (2 p) assess the normality assumption of the residuals using appropriate methods (QQ-plot and Anderson-Darling test), and
  - (1 p) assess the assumption of equal variance between your groups using an appropriate test (also mention the standard deviation of each group).
- (2 p) If you rejected the ANOVA null hypothesis, perform follow-up pairwise comparisons using Tukey's HSD to indicate which groups have statistically different means, and summarize the results.
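A sketch of the ANOVA workflow, assuming a hypothetical factor `AgeGroup` with three or more levels:

```
# One-way ANOVA: numerical response by a multi-level factor
fit_aov <- aov(Weight_lb ~ AgeGroup, data = dat)
summary(fit_aov)  # F statistic and p-value

# Normality of residuals: QQ-plot and Anderson-Darling test
qqnorm(residuals(fit_aov))
qqline(residuals(fit_aov))
# library(nortest); nortest::ad.test(residuals(fit_aov))

# Follow-up pairwise comparisons if the ANOVA null is rejected
TukeyHSD(fit_aov)
```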

*(If required.)*

*(If required.)*

**Rubric**

- Two-way categorical analysis.
  - Using two categorical variables with two to five levels each, specify a hypothesis test for homogeneity of proportions associated with your research questions.
  - (1 p) Specify the hypotheses in words and notation.
  - (1 p) State the conclusion of the test in the context of the problem.
  - (1 p) Plot a mosaic plot of the data and Pearson residuals.
  - (1 p) Interpret the mosaic plot with reference to the Pearson residuals.
- Simple linear regression.
  - Select two numerical variables.
  - (1 p) Plot the data and, if required, transform the variables so a roughly linear relationship is observed. All interpretations will be done on this scale of the variables.
  - (0 p) Fit the simple linear regression model.
  - (1 p) Assess the residuals for lack of fit (interpret plots of residuals vs fitted values and residuals vs \(x\)-values).
  - (1 p) Assess the residuals for normality (interpret QQ-plot and histogram).
  - (1 p) Assess the relative influence of points.
  - (1 p) Test whether the slope is different from zero, \(H_A: \beta_1 \ne 0\).
  - (1 p) Interpret the \(R^2\) value.
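A sketch of the two-way categorical test and mosaic plot, using hypothetical variables:

```
# Chi-squared test of homogeneity of proportions
tab <- table(dat$Sex, dat$Smoker)
chi <- chisq.test(tab)
chi
chi$residuals  # Pearson residuals

# Mosaic plot shaded by Pearson residuals
mosaicplot(tab, shade = TRUE, main = "Sex by smoking status")
```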

**Rubric**

- Logistic regression.
  - Select a binary response and a continuous explanatory/predictor variable.
  - (1 p) Plot the data.
  - (1 p) Summarize the \(\hat{p}\) values for each value of the \(x\)-variable. Also, calculate the empirical logits.
  - (1 p) Plot the \(\hat{p}\) values vs the \(x\)-variable and plot the empirical logits vs the \(x\)-variable.
  - (1 p) Describe the logit-vs-\(x\) plot. Is it linear? If not, consider a transformation of \(x\) to improve linearity; describe the transformation you chose if you needed one.
  - (1 p) Fit the `glm()` model and assess the deviance lack-of-fit test.
  - (1 p) Calculate the confidence bands around the model fit/predictions. Plot on both the logit and \(\hat{p}\) scales.
  - (1 p) Interpret the sign (\(+\) or \(-\)) of the slope parameter and test whether the slope is different from zero, \(H_A: \beta_1 \ne 0\).
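A sketch of the `glm()` fit and confidence bands, assuming a hypothetical two-level factor response `Smoker` and continuous predictor `Age`:

```
# Logistic regression: binary response on a continuous predictor
fit_glm <- glm(Smoker ~ Age, data = dat, family = binomial)
summary(fit_glm)  # sign and test of the slope (HA: beta1 != 0)

# Deviance test of the model against the null (intercept-only) model
anova(fit_glm, test = "Chisq")

# Approximate 95% confidence bands on the logit scale,
# back-transformed to the p-hat scale with plogis()
pred  <- predict(fit_glm, se.fit = TRUE)  # logit scale by default
upper <- plogis(pred$fit + 1.96 * pred$se.fit)
lower <- plogis(pred$fit - 1.96 * pred$se.fit)
```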

[End]