---
title: "ADA1: Class 22, Logistic regression"
author: anonymous
date: "11/08/2016"
output:
  html_document:
    toc: true
---
Include your answers in this document in the sections below the rubric.
# Rubric
---
# AddHealth W4: Age at first intercourse vs pregnancy
## Research question
__Research question:__
Is there a relationship between having at least one pregnancy
(F = has been pregnant, M = has made a partner pregnant)
and the age at first vaginal intercourse
for people in their fertile school years (12--22 years old)?
__Intuition:__
The earlier a person first has intercourse,
the more opportunities for pregnancy there are,
and the more likely at least one pregnancy becomes.
## Data
Code book variables used to address this question.
(Note: if you get in the habit of providing a personal codebook for your work,
you can give an analysis to someone else to read without having to explain it
to them. Everything is documented, _especially for your future self_,
who doesn't have as good a memory as you think.)
```
ADDHEALTH: WAVE4 IN-HOME INTERVIEW CODE BOOK
Wave IV Section 15: Suicide, Sexual Experiences, & Sexually Transmitted Diseases
H4SE7 (AgeAtVag)
7. How old were you the first time you ever had vaginal intercourse?
1-30 years
96 refused
97 legitimate skip
98 don't know
H4TR9 (NPreg)
9. Thinking about all the relationships and sexual encounters you have ever
had, (how many times have you ever been pregnant/how many times have you ever
made a partner pregnant)? Include all pregnancies, whether they resulted in
babies born alive, stillbirth, abortion, miscarriage, or ectopic or tubal
pregnancy. [If Q.7=1] say: Be sure to include your current pregnancy in your
count.
1-19 pregnancies
96 refused
98 don't know
Constructed variables
EverPreg = (NPreg > 0)
```
We create a data frame with nicely named variables,
code and remove missing values,
and code a binary variable for "ever pregnant" (`EverPreg`).
```{R}
library(PDS)
# NOTE:
# if you don't have PDS installed,
# then you'll need to download the addhealth_public4.RData data file
# and load it as described on UNM Learn / Resources / PDS Data:
# load("/PATH_TO_FILE/addhealth_public4.RData")
# assign to a data frame
df.piv.preg <- data.frame(AgeAtVag = addhealth_public4$h4se7
, NPreg = addhealth_public4$h4tr9)
# assign NA to missing values
df.piv.preg$AgeAtVag[(df.piv.preg$AgeAtVag > 95)] <- NA
df.piv.preg$NPreg[(df.piv.preg$NPreg > 95)] <- NA
# remove NAs (the NAs break the logi.hist.plot() below)
df.piv.preg <- na.omit(df.piv.preg)
# set as a binary variable, if ever pregnant, then 1, otherwise 0
df.piv.preg$EverPreg <- (df.piv.preg$NPreg > 0) # TRUE or FALSE (1 or 0)
# alternatively, to code as 1 and 0:
# df.piv.preg$EverPreg <- ifelse(df.piv.preg$NPreg > 0, 1, 0) # 1 or 0
# keep people between 12 and 22 years old at first vaginal intercourse
df.piv.preg <- subset(df.piv.preg, (AgeAtVag >= 12) & (AgeAtVag <= 22))
# plot histograms for each condition and fit a logistic curve
library(popbio)
logi.hist.plot(df.piv.preg$AgeAtVag, df.piv.preg$EverPreg
, logi.mod = 1 # logistic fit
, type = "hist", boxp = FALSE, rug = FALSE
, col = "gray"
, ylabel = "Probability at least one pregnancy (red)"
, ylabel2 = "Frequency"
, xlabel = "Age at first vaginal intercourse")
# Note (11/1/2015):
# there's a bug in the plotting function
# (I've emailed the package maintainer and it will be fixed in the next update):
# if the boxplots are on (boxp = TRUE), then both y-axis labels are drawn from ylabel2.
```
Summarize the observed probability of pregnancy
for each age at first vaginal intercourse.
```{R}
library(plyr)
df.piv.preg.sum <- ddply(df.piv.preg, "AgeAtVag", summarise
, Total = length(EverPreg)
, Success = sum(EverPreg)
)
# create estimated proportion of preg for each age group
df.piv.preg.sum$p.hat <- df.piv.preg.sum$Success / df.piv.preg.sum$Total
df.piv.preg.sum
```
Plots on the probability scale should follow a sigmoidal curve
(a little difficult to see here).
Note that the overlaid reference
curve (red) is a weighted smoothed curve (loess),
not the model fit.
```{R}
library(ggplot2)
p <- ggplot(df.piv.preg.sum, aes(x = AgeAtVag, y = p.hat))
p <- p + geom_point(aes(size = Total))
p <- p + geom_smooth(se = FALSE, colour = "red", aes(weight = Total)) # just for reference
p <- p + expand_limits(y = 0) + expand_limits(y = 1)
p <- p + labs(title = "Observed probability of at least one pregnancy")
print(p)
```
On the logit scale, if points follow a straight line,
then we can fit a simple linear logistic regression model.
Note that the overlaid reference
straight line (blue) is from weighted least squares (not the official fitted line),
and the curve (red) is a weighted smoothed curve (loess).
Both lines are weighted by the total number of observations that each point represents,
so that points representing few observations don't contribute as much as
points representing many observations.
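The empirical logit computed in the next chunk adds a small continuity correction of $0.5/n$ to both the numerator and the denominator, so that age groups with $\hat{p} = 0$ or $\hat{p} = 1$ still give finite logits:
$$
\textrm{emp.logit} = \log \left( \frac{\hat{p} + 0.5/n}{1 - \hat{p} + 0.5/n} \right),
$$
where $\hat{p}$ is the observed proportion and $n$ is the total number of observations in the age group.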
```{R}
# empirical logits
df.piv.preg.sum$emp.logit <- log(( df.piv.preg.sum$p.hat + 0.5/df.piv.preg.sum$Total) /
(1 - df.piv.preg.sum$p.hat + 0.5/df.piv.preg.sum$Total))
library(ggplot2)
p <- ggplot(df.piv.preg.sum, aes(x = AgeAtVag, y = emp.logit))
p <- p + geom_point(aes(size = Total))
p <- p + stat_smooth(method = "lm", se = FALSE, aes(weight = Total)) # just for reference
p <- p + geom_smooth(se = FALSE, colour = "red", aes(weight = Total)) # just for reference
p <- p + labs(title = "Empirical logits")
print(p)
```
__(1 p)__
For the plot above,
on the logit scale,
does it appear that a straight line fits the data well?
If not, what are the features in the data that aren't being captured by the model?
## Simple logistic regression model
The simple logistic regression model expresses the population proportion $p$ of
individuals with a given attribute (called the probability of success) as a
function of a single predictor variable $X$. The model assumes that $p$ is
related to $X$ through
$$
\log \left( \frac{p}{1-p} \right) = \beta_0 + \beta_1 X
$$
or, equivalently, as
$$
p = \frac{ \exp( \beta_0 + \beta_1 X ) }{ 1 + \exp( \beta_0 + \beta_1 X ) }.
$$
The logistic regression model is a __binary response model__, where the
response for each case falls into one of two exclusive and exhaustive
categories, success (cases with the attribute of interest) and failure (cases
without the attribute of interest).
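As a quick numeric check that the two forms of the model above are equivalent (a minimal sketch with hypothetical coefficient values, not the fitted model), R's built-in `plogis()` and `qlogis()` compute the inverse logit and logit:
```{R}
# Hypothetical coefficients for illustration only (not estimates from our data).
beta0 <- -4
beta1 <-  0.25
X     <- 15
eta <- beta0 + beta1 * X               # linear predictor, logit scale: log(p / (1 - p))
p.byhand <- exp(eta) / (1 + exp(eta))  # inverse logit computed by hand
p.plogis <- plogis(eta)                # built-in inverse logit
all.equal(p.byhand, p.plogis)          # the two forms agree
qlogis(p.byhand)                       # recovers the linear predictor eta
```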
Fit the model.
```{R}
# For our summarized data (with frequencies and totals for each age)
# The left-hand side of our formula binds two columns together with cbind():
# the columns are the number of "successes" and "failures".
# For logistic regression with logit link we specify family = binomial,
# where logit is the default link function for the binomial family.
# first-order linear model
glm.p.a <- glm(cbind(Success, Total - Success) ~ AgeAtVag, family = binomial, df.piv.preg.sum)
```
```{R, echo=FALSE}
# # Note that this method where every observation is distinct
# # gives the same parameter estimates,
# # but the deviance statistic for assessing lack-of-fit is not correct
# # because it treats every observation as a "category" rather than each age category.
# # Above, using the summarized data gives the appropriate lack-of-fit test.
# glm.p.a <- glm(EverPreg ~ AgeAtVag, family = binomial, df.piv.preg)
# summary(glm.p.a)
#
# ## INCORRECT FOR UNSUMMARIZED DATA
# # Test residual deviance for lack-of-fit (if > 0.10, little-to-no lack-of-fit)
# glm.p.a$deviance
# glm.p.a$df.residual
# dev.p.val <- 1 - pchisq(glm.p.a$deviance, glm.p.a$df.residual)
# dev.p.val
```
## Deviance statistic for lack-of-fit
Unfortunately, there aren't many model diagnostics for logistic regression.
One simple test is for lack-of-fit using the deviance.
Under the null hypothesis (that you'll state below),
the residual deviance follows a $\chi^2$ distribution with
the associated degrees of freedom.
Below we calculate the deviance p-value and perform the test for lack-of-fit.
```{R}
# Test residual deviance for lack-of-fit (if > 0.10, little-to-no lack-of-fit)
glm.p.a$deviance
glm.p.a$df.residual
dev.p.val <- 1 - pchisq(glm.p.a$deviance, glm.p.a$df.residual)
dev.p.val
```
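The upper-tail probability can be computed either of the two ways sketched below; the deviance and df here are hypothetical placeholders, not the values from this fit:
```{R}
# Hypothetical deviance statistic and residual df, for illustration only.
D  <- 12.3
df <- 9
1 - pchisq(D, df)                  # upper-tail p-value
pchisq(D, df, lower.tail = FALSE)  # equivalent, numerically more stable in the far tail
```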
__(1 p)__
State the null hypothesis for lack-of-fit.
__(1 p)__
For your preferred model, the deviance statistic is $D=???$ with $???$ df.
The p-value = ???.
__(1 p)__
What is your conclusion for the model fit?
## Visualize and interpret the model
The `glm()` call creates an object which we can use to compute
the fitted probabilities
and 95\% CIs for the population proportions at each age at first vaginal intercourse.
The fitted probabilities and the limits are stored in columns labeled
`fitted.values`, `fit.lower`, and `fit.upper`, respectively.
Below I create confidence bands for the model
and plot the fit against the data,
first on the probability scale and then on the logit scale.
```{R}
# put the fitted values in the data.frame
df.piv.preg.sum$fitted.values <- glm.p.a$fitted.values
pred <- predict(glm.p.a, data.frame(AgeAtVag = df.piv.preg.sum$AgeAtVag), type = "link", se.fit = TRUE) #$
df.piv.preg.sum$fit <- pred$fit
df.piv.preg.sum$se.fit <- pred$se.fit
# CI for fitted values
df.piv.preg.sum <- within(df.piv.preg.sum, {
fit.lower = exp(fit - 1.96 * se.fit) / (1 + exp(fit - 1.96 * se.fit))
fit.upper = exp(fit + 1.96 * se.fit) / (1 + exp(fit + 1.96 * se.fit))
})
```
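The inverse-logit back-transform in the chunk above can equivalently be written with `plogis()`; a small check with hypothetical logit-scale fit and standard-error values:
```{R}
# Hypothetical logit-scale fit and standard error, for illustration only.
fit    <- -0.3
se.fit <-  0.1
lo.byhand <- exp(fit - 1.96 * se.fit) / (1 + exp(fit - 1.96 * se.fit))
lo.plogis <- plogis(fit - 1.96 * se.fit)  # built-in inverse logit
all.equal(lo.byhand, lo.plogis)           # the two forms agree
```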
```{R}
# plot on probability scale
library(ggplot2)
p <- ggplot(df.piv.preg.sum, aes(x = AgeAtVag, y = p.hat))
# predicted curve and point-wise 95% CI
p <- p + geom_ribbon(aes(x = AgeAtVag, ymin = fit.lower, ymax = fit.upper), colour = "blue", linetype = 0, alpha = 0.2)
p <- p + geom_line(aes(x = AgeAtVag, y = fitted.values), colour = "blue", size = 1)
# fitted values
p <- p + geom_point(aes(y = fitted.values), colour = "blue", size=2)
# observed values
p <- p + geom_point(aes(size = Total), color = "black")
p <- p + expand_limits(y = 0) + expand_limits(y = 1)
p <- p + labs(title = "Observed and predicted pregnancy, probability scale")
print(p)
```
__(1 p)__
For the plot above,
describe the general pattern of the probability of pregnancy
given the age at first vaginal intercourse.
```{R}
# plot on logit scale
library(ggplot2)
p <- ggplot(df.piv.preg.sum, aes(x = AgeAtVag, y = emp.logit))
# predicted curve and point-wise 95% CI
p <- p + geom_ribbon(aes(x = AgeAtVag, ymin = fit - 1.96 * se.fit, ymax = fit + 1.96 * se.fit), linetype = 0, alpha = 0.2)
p <- p + geom_line(aes(x = AgeAtVag, y = fit), colour = "blue", size = 1)
# fitted values
p <- p + geom_point(aes(y = fit), colour = "blue", size=2)
# observed values
p <- p + geom_point(aes(size = Total), color = "black")
p <- p + labs(title = "Observed and predicted pregnancy, logit scale")
print(p)
```
__(1 p)__
For the plot above,
comment on the model fit.
Let's consider those people who first had vaginal intercourse at age 15.
```{R}
subset(df.piv.preg.sum, AgeAtVag == 15)
```
__(1 p)__
Complete the statement below using inline R code
to automatically print the values.
"Using the model,
the estimated population proportion of pregnancy
when age at first intercourse was 15 is
`r signif(subset(df.piv.preg.sum, AgeAtVag == 15, fitted.values), 3)`.
We are 95\% confident that the population proportion is between
???
and
???."
The summary table gives MLEs and standard errors for the regression parameters.
The z-value column is the parameter estimate divided by its standard error.
The p-values are used to test whether the corresponding parameters of the
logistic model are zero.
```{R}
summary(glm.p.a)
# see names(summary(glm.p.a)) to find the object that has the coefficients.
# can also use coef(glm.p.a)
```
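To see that the z-value column really is the estimate divided by its standard error, here is a self-contained sketch on a small synthetic binomial data set (not the AddHealth data):
```{R}
# Synthetic grouped binomial data, for illustration only.
set.seed(1)
x    <- 1:10
n    <- 20
succ <- rbinom(10, size = n, prob = plogis(-2 + 0.4 * x))
fit  <- glm(cbind(succ, n - succ) ~ x, family = binomial)
coefs <- summary(fit)$coefficients
# z-value by hand: estimate / standard error
z.byhand <- coefs[, "Estimate"] / coefs[, "Std. Error"]
all.equal(unname(z.byhand), unname(coefs[, "z value"]))
```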
__(1 p)__
Complete the equation below with the MLEs of the regression coefficients.
The MLEs of the predicted probabilities satisfy
$$
\log \left( \frac{\hat{p}}{1-\hat{p}} \right) =
??? + ??? \textrm{ AgeAtVag}
$$
Interpreting the model coefficients is tricky because they're on the logit
scale.
We'd prefer to think of them on the probability scale.
We'll cover other ways of interpreting these coefficients next semester.
For now, I want you to interpret qualities of the slope and intercept.
__(1 p)__
Interpret the sign ($+$ or $-$) of the slope.
__(1 p)__
What is the interpretation of the intercept?
Is it meaningful?