---
title: "ADA1: Class 21, Simple linear regression"
author: "Your Name Here"
date: "`r format(Sys.time(), '%B %d, %Y')`"
output:
  html_document:
    toc: true
---
Include your answers in this document in the sections below the rubric.
# Rubric
Answer the questions with the data example.

---
# Height vs Hand Span
We collected this data earlier in the semester.
```{R, cache = FALSE}
# install.packages("gsheet")
# Height vs Hand Span
library(gsheet)
dat.hand.url <- "https://docs.google.com/spreadsheets/d/1_lax2SqNMhfGBpw1MBDnKiB1w2J5oC6LWvG7Gf2acWY"
dat.hand <- gsheet2tbl(dat.hand.url)
dat.hand <- as.data.frame(dat.hand)
dat.hand <- na.omit(dat.hand)
dat.hand$Gender_M_F <- factor(dat.hand$Gender_M_F, levels = c("F", "M"))
str(dat.hand)
```
Plot data for `Height_in` vs `HandSpan_cm` for Females and Males.
```{R}
library(ggplot2)
p <- ggplot(dat.hand, aes(x = HandSpan_cm, y = Height_in))
# linear regression fit and confidence bands
p <- p + geom_smooth(method = lm, se = TRUE)
# jitter a little to uncover duplicate points
p <- p + geom_jitter(position = position_jitter(.1), alpha = 0.75)
# separate for Females and Males
p <- p + facet_wrap(~ Gender_M_F, nrow = 1)
print(p)
```
__Choose either Females or Males for the remaining analysis.__
```{R}
# choose one:
dat.use <- subset(dat.hand, (Gender_M_F == "F"))
#dat.use <- subset(dat.hand, (Gender_M_F == "M"))
```
__Plan:__

1. Center the explanatory variable `HandSpan_cm`.
2. Fit a simple linear regression model.
3. Check model assumptions.
4. Interpret the parameter estimate table.
5. Interpret a confidence and prediction interval.
## Center the explanatory variable `HandSpan_cm`
Recentering the $x$-variable doesn't change the model,
but it does provide an interpretation for the intercept of the model.
For example, if you interpret the intercept for the regression lines above,
it's the "expected height for a person with a hand span of zero",
but that's not meaningful.
Choose a value to center your data on.
A good choice is a nice round number near the mean (or center) of your data.
This becomes the value for the interpretation of your intercept.
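One way to choose the centering value is to look at the sample mean first (a quick sketch using the `dat.use` object defined above):

```{R}
# summary of hand span; a good centering value is a round number near the mean
summary(dat.use$HandSpan_cm)
mean(dat.use$HandSpan_cm)
```

Below, 20 is used as the centering value; substitute a round number near your own sample mean.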
```{R}
dat.use$HandSpan_cm_centered <- dat.use$HandSpan_cm - 20
```
## Fit a simple linear regression model
```{R}
# fit model
lm.fit <- lm(Height_in ~ HandSpan_cm_centered, data = dat.use)
```
Here's the data you're using for the linear regression,
with the regression line and confidence and prediction intervals.
```{R}
library(ggplot2)
p <- ggplot(dat.use, aes(x = HandSpan_cm_centered, y = Height_in))
p <- p + geom_vline(xintercept = 0, alpha = 0.25)
# prediction bands
p <- p + geom_ribbon(aes(ymin = predict(lm.fit, data.frame(HandSpan_cm_centered)
                                      , interval = "prediction", level = 0.95)[, 2],
                         ymax = predict(lm.fit, data.frame(HandSpan_cm_centered)
                                      , interval = "prediction", level = 0.95)[, 3])
                   , alpha = 0.1, fill = "darkgreen")
# linear regression fit and confidence bands
p <- p + geom_smooth(method = lm, se = TRUE)
# jitter a little to uncover duplicate points
p <- p + geom_jitter(position = position_jitter(.1), alpha = 0.75)
p <- p + labs(title = "Regression with confidence and prediction bands")
print(p)
```
## Check model assumptions
Present and interpret the residual plots with respect to model assumptions.
```{R, fig.width = 8, fig.height = 6}
# plot diagnostics
par(mfrow=c(2,3))
plot(lm.fit, which = c(1,4,6))
# residuals vs HandSpan_cm_centered
plot(dat.use$HandSpan_cm_centered, lm.fit$residuals, main="Residuals vs HandSpan_cm_centered")
# horizontal line at zero
abline(h = 0, col = "gray75")
# Normality of Residuals
library(car)
qqPlot(lm.fit$residuals, las = 1, id = list(n = 3, cex = 1), main="QQ Plot")
# # residuals vs order of data
# plot(lm.fit$residuals, main="Residuals vs Order of data")
# # horizontal line at zero
# abline(h = 0, col = "gray75")
hist(lm.fit$residuals, breaks=10, main="Residuals")
```
__(1 p)__
If the normality assumption seems to be violated, perform a normality test on the standardized residuals.
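For example, a Shapiro-Wilk test is one reasonable choice (a sketch, assuming the `lm.fit` object from the chunk above; `rstandard()` returns the standardized residuals):

```{R}
# Shapiro-Wilk normality test on the standardized residuals;
# a small p-value suggests the normality assumption is violated
shapiro.test(rstandard(lm.fit))
```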
__(1 p)__
Do the residuals versus the fitted values and `HandSpan_cm_centered` values appear random?
Or is there a pattern?
## Investigate the relative influence of points
Investigate the leverages and Cook's Distance.
There are recommendations for what's considered large,
for example, a $3p/n$ cutoff for large leverages,
and a cutoff of 1 for large Cook's D values.
I find it more practical to compare the relative
leverage or Cook's D across all the points, and to
worry when only a few points are much more influential than the others.
Here's a plot that duplicates a plot above.
Here, the observation number is used as both the plotting point and a label.
```{R}
# plot diagnostics
par(mfrow=c(1,2))
plot(influence(lm.fit)$hat, main="Leverages", type = "n")
text(1:nrow(dat.use), influence(lm.fit)$hat, label=paste(1:nrow(dat.use)))
# horizontal line at the 3p/n leverage cutoff (p = 2 coefficients)
abline(h = 3*2/nrow(dat.use), col = "gray75")
plot(cooks.distance(lm.fit), main="Cook's Distances", type = "n")
text(1:nrow(dat.use), cooks.distance(lm.fit), label=paste(1:nrow(dat.use)))
# horizontal line at the Cook's D cutoff of 1
abline(h = 1, col = "gray75")
```
__(1 p)__
Interpret the leverages and Cook's D values with respect to whether
any observations are having undue influence on model fit.
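To help answer this, you can list the observations that exceed the rule-of-thumb cutoffs (a sketch, assuming the `lm.fit` and `dat.use` objects from above; $p = 2$ coefficients in simple linear regression):

```{R}
# observations with leverage above the 3p/n cutoff
p.coef <- length(coef(lm.fit))
lev    <- influence(lm.fit)$hat
which(lev > 3 * p.coef / nrow(dat.use))
# observations with Cook's D above 1
which(cooks.distance(lm.fit) > 1)
```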
## Interpret the parameter estimate table
Here's the parameter estimate table.
We're estimating the $\beta$ parameter coefficients in the regression model
$y_i = \beta_0 + \beta_1 x_i + e_i$.
```{R}
summary(lm.fit)
```
__(1 p)__
Assuming the model fits well, complete the equation below by filling in
the appropriate numbers from the table above
(3 numbers: each $\hat{\beta}$ estimate and the HandSpan centering value).
The regression line is
$\hat{\textrm{Height_in}} = \hat{\beta}_0 + \hat{\beta}_1 \textrm{(HandSpan_cm - 20)}$.
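The estimates appear in the `Estimate` column of the summary table; they can also be extracted directly (a sketch, assuming the `lm.fit` object from above):

```{R}
# estimated intercept and slope from the fitted model
coef(lm.fit)
# 95% confidence intervals for the coefficients
confint(lm.fit)
```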
__(2 p)__
State the hypothesis test related to the slope of the line,
indicate the p-value for the test,
and state the conclusion.
Words and notation:
* Words:
* Notation: $H_0:\beta_? = ?$ vs $H_A:\beta_? \ne ?$
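The test statistic and p-value for the slope appear in the `HandSpan_cm_centered` row of the coefficient table; they can also be pulled out directly (a sketch, assuming the `lm.fit` object from above):

```{R}
# estimate, standard error, t statistic, and p-value for the slope
summary(lm.fit)$coefficients["HandSpan_cm_centered", ]
```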
__(1 p)__
Interpret the slope coefficient in the context of the model.
__(1 p)__
State and interpret the $R^2$ value.
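The $R^2$ value is reported as `Multiple R-squared` in the summary output; it can also be extracted directly (a sketch, assuming the `lm.fit` object from above):

```{R}
# proportion of variability in Height_in explained by the regression
summary(lm.fit)$r.squared
```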
## Interpret a confidence and prediction interval
Below is a 95% confidence interval for the mean (the regression line)
and a prediction interval for a new observation
at $\textrm{HandSpan_cm_centered} = -1$.
See how these match up with the plot above.
```{R}
predict(lm.fit, data.frame(HandSpan_cm_centered = -1)
, interval = "confidence", level = 0.95)
predict(lm.fit, data.frame(HandSpan_cm_centered = -1)
, interval = "prediction", level = 0.95)
```
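To see how the interval widths relate to the plot, you could compute prediction intervals at several centered values; they are narrowest at the centering value $x = 0$ and widen with distance from it (a sketch, assuming the `lm.fit` object from above):

```{R}
# prediction intervals widen as we move away from the center of the data
new.x <- data.frame(HandSpan_cm_centered = c(-4, -1, 0, 4))
predict(lm.fit, new.x, interval = "prediction", level = 0.95)
```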
__(1 p)__
Interpret the CI.
__(1 p)__
Interpret the PI.