```
adv <- read.csv("http://www.stats.ox.ac.uk/~laws/LMs/data/advert.csv")
adv.lm <- lm(sales ~ TV + radio + newspaper, data = adv)
options(digits = 3)
summary(adv.lm)
```

```
##
## Call:
## lm(formula = sales ~ TV + radio + newspaper, data = adv)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.828 -0.891 0.242 1.189 2.829
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.93889 0.31191 9.42 <2e-16 ***
## TV 0.04576 0.00139 32.81 <2e-16 ***
## radio 0.18853 0.00861 21.89 <2e-16 ***
## newspaper -0.00104 0.00587 -0.18 0.86
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.69 on 196 degrees of freedom
## Multiple R-squared: 0.897, Adjusted R-squared: 0.896
## F-statistic: 570 on 3 and 196 DF, p-value: <2e-16
```

Each t-value is estimate/(standard error); the final column gives the p-value for the two-sided test of \(\beta_j=0\) (one separate test for each value of \(j\)).
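We can verify this directly from the coefficient table, assuming `adv.lm` as fitted above:

```
ctab <- coef(summary(adv.lm))               # matrix with columns: Estimate, Std. Error, t value, Pr(>|t|)
ctab[, "Estimate"] / ctab[, "Std. Error"]   # reproduces the "t value" column
```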

A convenient way to obtain a confidence interval for each of the regression parameters is:

`confint(adv.lm)`

```
## 2.5 % 97.5 %
## (Intercept) 2.3238 3.5540
## TV 0.0430 0.0485
## radio 0.1715 0.2055
## newspaper -0.0126 0.0105
```

We can do checks by hand: here there are \(n=200\) observations and 4 estimated coefficients, hence \(200-4=196\) degrees of freedom. E.g.:

`2 * pt(-0.18, df = 200 - 4)`

`## [1] 0.857`

`(a <- qt(0.975, df = 200 - 4))`

`## [1] 1.97`

`-0.00104 + c(-1, 1) * a * 0.00587`

`## [1] -0.0126 0.0105`

Observe that the 95% confidence interval for \(\beta_{\tt newspaper}\) contains zero. Because of the duality between hypothesis tests and confidence intervals, this indicates that (as we saw above) the test of \(\beta_{\tt newspaper}=0\) is not rejected at the 5% level. That is, the newspaper variable is not significant, so we could drop it from `adv.lm`.
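As a side note, R's `update()` gives a shorthand for refitting a model with a term dropped; this produces the same fit as the explicit `lm()` call used for `adv2.lm` below:

```
update(adv.lm, . ~ . - newspaper)   # refit adv.lm without the newspaper term
```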

Compare the summary of `adv.lm` above with:

```
adv2.lm <- lm(sales ~ TV + radio, data = adv)
summary(adv2.lm)
```

```
##
## Call:
## lm(formula = sales ~ TV + radio, data = adv)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.798 -0.875 0.242 1.171 2.833
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.92110 0.29449 9.92 <2e-16 ***
## TV 0.04575 0.00139 32.91 <2e-16 ***
## radio 0.18799 0.00804 23.38 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.68 on 197 degrees of freedom
## Multiple R-squared: 0.897, Adjusted R-squared: 0.896
## F-statistic: 860 on 2 and 197 DF, p-value: <2e-16
```

The two summaries are similar because the newspaper term is not significant.
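Since `adv2.lm` is nested in `adv.lm`, the same conclusion follows from an F-test comparing the two fits; when a single term is dropped, the F-statistic is the square of that term's t-value, so the p-value of this test agrees with the p-value 0.86 for newspaper in the summary of `adv.lm`:

```
anova(adv2.lm, adv.lm)   # F-test of H0: beta_newspaper = 0, given TV and radio in the model
```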

It is important to be clear about what hypothesis is being tested, e.g. we can consider a different test of \(\beta_{\tt newspaper}=0\):

`summary(lm(sales ~ newspaper, data = adv))`

```
##
## Call:
## lm(formula = sales ~ newspaper, data = adv)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.227 -3.387 -0.839 3.506 12.775
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 12.3514 0.6214 19.9 <2e-16 ***
## newspaper 0.0547 0.0166 3.3 0.0011 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.09 on 198 degrees of freedom
## Multiple R-squared: 0.0521, Adjusted R-squared: 0.0473
## F-statistic: 10.9 on 1 and 198 DF, p-value: 0.00115
```

The small \(p\)-value here indicates that \(\beta_{\tt newspaper}=0\) is clearly rejected. That is, when the \(H_1\)-model has just a newspaper term, plus an intercept, the newspaper coefficient is significantly different from zero.

The \(p\)-value for the test of \(\beta_{\tt newspaper}=0\) in `adv.lm` indicates that when the \(H_1\)-model has terms for TV, radio and newspaper, plus an intercept, the newspaper coefficient is not significantly different from zero.

Hence it is important to be clear which other predictors are included in a model when specifying the null and alternative hypotheses.