Exercise solutions: Section 10.7

Author

Rob J Hyndman and George Athanasopoulos

fpp3 10.7, Ex 1

This exercise uses the data set LakeHuron, giving the level of Lake Huron (in feet) from 1875 to 1972.

  1. Convert the data to a tsibble object using the as_tsibble() function.
  2. Fit a piecewise linear trend model to the Lake Huron data with a knot at 1920 and an ARMA error structure.
  3. Forecast the level for the next 30 years. Do you think the extrapolated linear trend is realistic?
huron <- as_tsibble(LakeHuron)
fit <- huron |>
  model(ARIMA(value ~ trend(knot = 1920)))
report(fit)
Series: value 
Model: LM w/ ARIMA(2,0,0) errors 

Coefficients:
         ar1      ar2  trend(knot = 1920)trend  trend(knot = 1920)trend_46
      0.9628  -0.3107                  -0.0572                      0.0633
s.e.  0.0973   0.0983                   0.0161                      0.0265
      intercept
       580.9391
s.e.     0.5124

sigma^2 estimated as 0.4594:  log likelihood=-98.86
AIC=209.73   AICc=210.65   BIC=225.24
fit |>
  forecast(h = 30) |>
  autoplot(huron) + labs(y = "feet")

It seems unlikely that there was an increasing trend from 1973 to 2002, but the prediction intervals are very wide, so they probably capture the actual values. Historical data on the level of Lake Huron can be obtained from NOAA.
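
To see just how wide these intervals become, the 95% intervals can be extracted directly with hilo() (a supplementary check, not part of the original solution):
# Extract the 95% prediction intervals from the fable
fit |>
  forecast(h = 30) |>
  hilo(level = 95)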

fpp3 10.7, Ex 2

Repeat Exercise 4 from Section 7.10, but this time adding in ARIMA errors to address the autocorrelations in the residuals.

  1. How much difference does the ARIMA error process make to the regression coefficients?
fit <- souvenirs |>
  mutate(festival = month(Month) == 3 & year(Month) != 1987) |>
  model(
    reg = TSLM(log(Sales) ~ trend() + season() + festival),
    dynreg = ARIMA(log(Sales) ~ trend() + season() + festival)
  )
tidy(fit) |> print(n = 50)
# A tibble: 31 × 6
   .model term           estimate std.error statistic  p.value
   <chr>  <chr>             <dbl>     <dbl>     <dbl>    <dbl>
 1 reg    (Intercept)      7.62    0.0742     103.    4.67e-78
 2 reg    trend()          0.0220  0.000827    26.6   2.32e-38
 3 reg    season()year2    0.251   0.0957       2.63  1.06e- 2
 4 reg    season()year3    0.266   0.193        1.38  1.73e- 1
 5 reg    season()year4    0.384   0.0957       4.01  1.48e- 4
 6 reg    season()year5    0.409   0.0957       4.28  5.88e- 5
 7 reg    season()year6    0.449   0.0958       4.69  1.33e- 5
 8 reg    season()year7    0.610   0.0958       6.37  1.71e- 8
 9 reg    season()year8    0.588   0.0959       6.13  4.53e- 8
10 reg    season()year9    0.669   0.0959       6.98  1.36e- 9
11 reg    season()year10   0.747   0.0960       7.79  4.48e-11
12 reg    season()year11   1.21    0.0960      12.6   1.29e-19
13 reg    season()year12   1.96    0.0961      20.4   3.39e-31
14 reg    festivalTRUE     0.502   0.196        2.55  1.29e- 2
15 dynreg ar1              0.556   0.179        3.11  2.53e- 3
16 dynreg ma1             -0.129   0.192       -0.670 5.05e- 1
17 dynreg ma2              0.340   0.114        2.99  3.68e- 3
18 dynreg trend()          0.0226  0.00150     15.1   1.17e-25
19 dynreg season()year2    0.252   0.0574       4.38  3.40e- 5
20 dynreg season()year3    0.297   0.118        2.51  1.42e- 2
21 dynreg season()year4    0.377   0.0729       5.17  1.56e- 6
22 dynreg season()year5    0.400   0.0789       5.07  2.30e- 6
23 dynreg season()year6    0.438   0.0817       5.36  7.19e- 7
24 dynreg season()year7    0.598   0.0827       7.23  2.04e-10
25 dynreg season()year8    0.573   0.0821       6.98  6.45e-10
26 dynreg season()year9    0.651   0.0799       8.16  2.94e-12
27 dynreg season()year10   0.725   0.0746       9.71  2.18e-15
28 dynreg season()year11   1.18    0.0629      18.7   1.14e-31
29 dynreg season()year12   1.93    0.0599      32.2   5.41e-49
30 dynreg festivalTRUE     0.461   0.119        3.86  2.19e- 4
31 dynreg intercept        7.60    0.0857      88.7   8.60e-85

The coefficient estimates from the two models are all relatively close.
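
To see how close, the two sets of estimates can be placed side by side (a supplementary sketch using tidyr; terms unique to one model, such as ar1, appear as NA in the other column):
# Compare coefficient estimates across the two models
tidy(fit) |>
  dplyr::select(.model, term, estimate) |>
  tidyr::pivot_wider(names_from = .model, values_from = estimate)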

  2. How much difference does the ARIMA error process make to the forecasts?
future_souvenirs <- new_data(souvenirs, n = 24) |>
  mutate(festival = month(Month) == 3)
fit |>
  forecast(new_data = future_souvenirs)  |>
  autoplot(souvenirs, level = 95)

The forecasts are also extremely close.
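
The forecast means can also be compared numerically (again a supplementary sketch, not part of the original solution):
# Put the point forecasts from the two models side by side
fit |>
  forecast(new_data = future_souvenirs) |>
  as_tibble() |>
  dplyr::select(.model, Month, .mean) |>
  tidyr::pivot_wider(names_from = .model, values_from = .mean)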

  3. Check the residuals of the fitted model to ensure the ARIMA process has adequately addressed the autocorrelations seen in the TSLM model.
fit |>
  select(dynreg) |>
  gg_tsresiduals()

These look fine; the residuals resemble white noise, so the ARIMA errors have adequately addressed the autocorrelation.
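
A Ljung-Box test gives a more formal check (a supplementary step; dof = 3 counts the ar1, ma1 and ma2 coefficients, and lag = 24 covers two seasonal periods of monthly data):
fit |>
  select(dynreg) |>
  augment() |>
  features(.innov, ljung_box, dof = 3, lag = 24)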

fpp3 10.7, Ex 4

This exercise concerns aus_accommodation: the total quarterly takings from accommodation and the room occupancy level for hotels, motels, and guest houses in Australia, between January 1998 and June 2016. Total quarterly takings are in millions of Australian dollars.

  1. Compute the CPI-adjusted takings and plot the result for each state.

aus_accommodation <- aus_accommodation |>
  mutate(
    adjTakings = Takings / CPI * 100
  )
aus_accommodation |>
  autoplot(adjTakings)

  2. For each state, fit a dynamic regression model of CPI-adjusted takings with seasonal dummy variables, a piecewise linear time trend with one knot at 2008 Q1, and ARIMA errors.
fit <- aus_accommodation |>
  model(
    ARIMA(adjTakings ~ season() + trend(knot = yearquarter("2008 Q1")))
  )
fit
# A mable: 8 x 2
# Key:     State [8]
  State                        ARIMA(adjTakings ~ season() + trend(knot = year…¹
  <chr>                                                                  <model>
1 Australian Capital Territory                       <LM w/ ARIMA(1,0,0) errors>
2 New South Wales                          <LM w/ ARIMA(1,0,0)(0,0,1)[4] errors>
3 Northern Territory                       <LM w/ ARIMA(0,0,1)(1,0,0)[4] errors>
4 Queensland                               <LM w/ ARIMA(1,0,0)(0,0,1)[4] errors>
5 South Australia                          <LM w/ ARIMA(1,0,0)(1,0,0)[4] errors>
6 Tasmania                                 <LM w/ ARIMA(0,0,1)(1,0,0)[4] errors>
7 Victoria                                 <LM w/ ARIMA(1,0,0)(0,0,1)[4] errors>
8 Western Australia                                  <LM w/ ARIMA(1,0,0) errors>
# ℹ abbreviated name:
#   ¹​`ARIMA(adjTakings ~ season() + trend(knot = yearquarter("2008 Q1")))`

The seasonal dummy variables have not adequately handled all of the seasonality, so most of the fitted models also include seasonal ARIMA components.
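
The estimated coefficients for any one state can be inspected with report() (a supplementary step; Victoria is used here as an example):
fit |>
  filter(State == "Victoria") |>
  report()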

  3. Check that the residuals of the model look like white noise.
fit |>
  filter(State == "Victoria") |>
  gg_tsresiduals()

No apparent problems. Similar plots are needed for the other states.
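
Rather than plotting each state in turn, a Ljung-Box test can be run across the whole mable at once (a rough supplementary check; dof = 2 matches the number of ARMA coefficients in most, but not all, of the fitted models):
fit |>
  augment() |>
  features(.innov, ljung_box, dof = 2, lag = 8)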

  4. Forecast the takings for each state to the end of 2017. (Hint: You will need to produce forecasts of the CPI first.)
# CPI forecasts
cpif <- aus_accommodation |>
  model(ARIMA(CPI)) |>
  forecast(h = 6) |>
  as_tsibble() |>
  select(Date, State, CPI = .mean)
fit |>
  forecast(new_data = cpif) |>
  mutate(Takings = adjTakings * CPI / 100)
# A fable: 48 x 7 [1Q]
# Key:     State, .model [8]
   State                    .model    Date   adjTakings .mean   CPI      Takings
   <chr>                    <chr>    <qtr>       <dist> <dbl> <dbl>       <dist>
 1 Australian Capital Terr… "ARIM… 2016 Q3   N(62, 9.9)  61.6  109.    N(67, 12)
 2 Australian Capital Terr… "ARIM… 2016 Q4    N(59, 12)  58.9  110.    N(65, 15)
 3 Australian Capital Terr… "ARIM… 2017 Q1    N(59, 13)  59.0  110.    N(65, 16)
 4 Australian Capital Terr… "ARIM… 2017 Q2    N(59, 13)  59.4  111.    N(66, 16)
 5 Australian Capital Terr… "ARIM… 2017 Q3    N(61, 13)  60.9  111.    N(68, 16)
 6 Australian Capital Terr… "ARIM… 2017 Q4    N(59, 13)  58.8  112.    N(66, 16)
 7 New South Wales          "ARIM… 2016 Q3 N(791, 1254) 791.   109. N(863, 1494)
 8 New South Wales          "ARIM… 2016 Q4 N(844, 1589) 844.   110. N(926, 1914)
 9 New South Wales          "ARIM… 2017 Q1 N(829, 1679) 829.   110. N(915, 2043)
10 New South Wales          "ARIM… 2017 Q2 N(734, 1703) 734.   111. N(814, 2094)
# ℹ 38 more rows
  5. What sources of uncertainty have not been taken into account in the prediction intervals?
  • The uncertainty in the CPI forecasts has been ignored.
  • As usual, the uncertainty due to parameter estimation and model selection has also not been accounted for.
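
In principle, the CPI uncertainty could be propagated by simulation (a sketch only, not part of the original solution):
# Simulate many future CPI sample paths (times = 200 is illustrative)
aus_accommodation |>
  model(ARIMA(CPI)) |>
  generate(h = 6, times = 200) |>
  rename(CPI = .sim)
# Each simulated path could then be passed to forecast(new_data = ...) in
# place of the point forecasts used above, and the resulting paths pooled.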

fpp3 10.7, Ex 5

We fitted a harmonic regression model to part of the us_gasoline series in Exercise 6 in Section 7.10. We will now revisit this model, and extend it to include more data and ARMA errors.

  1. Using TSLM(), fit a harmonic regression with a piecewise linear time trend to the full gasoline series. Select the position of the knots in the trend and the appropriate number of Fourier terms to include by minimising the AICc or CV value.

Let’s optimise using two knots and an unknown number of Fourier terms. Because the number of Fourier terms is an integer, we can’t simply use optim(). Instead, we will loop over a large number of possible values for the knots and the number of Fourier terms. More than 2000 models are fitted here, but TSLM() is relatively fast.

Note that the possible values of the knots must be restricted so that knot2 is always much larger than knot1. We have required them to be more than two years (104 weeks) apart here.

us_gasoline |> autoplot(Barrels)

# Function to compute CV given K and knots.
get_cv <- function(K, knot1, knot2) {
  us_gasoline |>
    model(TSLM(Barrels ~ fourier(K = K) + trend(c(knot1, knot2)))) |>
    glance() |>
    pull(CV)
}

models <- expand.grid(
  K = seq(25),
  knot1 = yearweek(as.character(seq(1991, 2017, 2))),
  knot2 = yearweek(as.character(seq(1991, 2017, 2)))
) |>
  filter(knot2 - knot1 > 104) |>
  as_tibble()
models <- models |>
  mutate(cv = purrr::pmap_dbl(models, get_cv)) |>
  arrange(cv)

# Best combination
(best <- head(models, 1))
# A tibble: 1 × 4
      K    knot1    knot2     cv
  <int>   <week>   <week>  <dbl>
1     6 2007 W01 2013 W01 0.0641
fit <- us_gasoline |>
  model(
    TSLM(Barrels ~ fourier(K = best$K) + trend(c(best$knot1, best$knot2)))
  )
  2. Now refit the model using ARIMA() to allow for correlated errors, keeping the same predictor variables as you used with TSLM().
fit <- us_gasoline |>
  model(ARIMA(Barrels ~ fourier(K = best$K) + trend(c(best$knot1, best$knot2)) + PDQ(0, 0, 0)))
fit |> report()
Series: Barrels 
Model: LM w/ ARIMA(1,0,1) errors 

Coefficients:
         ar1      ma1  fourier(K = best$K)C1_52  fourier(K = best$K)S1_52
      0.9277  -0.8414                   -0.1144                   -0.2306
s.e.  0.0256   0.0357                    0.0133                    0.0132
      fourier(K = best$K)C2_52  fourier(K = best$K)S2_52
                        0.0418                    0.0309
s.e.                    0.0105                    0.0105
      fourier(K = best$K)C3_52  fourier(K = best$K)S3_52
                        0.0836                    0.0343
s.e.                    0.0097                    0.0097
      fourier(K = best$K)C4_52  fourier(K = best$K)S4_52
                        0.0187                    0.0399
s.e.                    0.0094                    0.0094
      fourier(K = best$K)C5_52  fourier(K = best$K)S5_52
                       -0.0315                    0.0011
s.e.                    0.0092                    0.0092
      fourier(K = best$K)C6_52  fourier(K = best$K)S6_52
                       -0.0523                    0.0001
s.e.                    0.0092                    0.0092
      trend(c(best$knot1, best$knot2))trend
                                     0.0028
s.e.                                 0.0001
      trend(c(best$knot1, best$knot2))trend_831
                                        -0.0051
s.e.                                     0.0002
      trend(c(best$knot1, best$knot2))trend_1144  intercept
                                          0.0055     7.1065
s.e.                                      0.0006     0.0352

sigma^2 estimated as 0.06051:  log likelihood=-13.38
AIC=64.76   AICc=65.33   BIC=163.78
  3. Check the residuals of the final model using the gg_tsdisplay() function and a Ljung-Box test. Do they look sufficiently like white noise to continue? If not, try modifying your model, or removing the first few years of data.
gg_tsresiduals(fit)

augment(fit) |> features(.innov, ljung_box, dof = 2, lag = 26)
# A tibble: 1 × 3
  .model                                                       lb_stat lb_pvalue
  <chr>                                                          <dbl>     <dbl>
1 "ARIMA(Barrels ~ fourier(K = best$K) + trend(c(best$knot1, …    25.8     0.365

Usually, we choose the lag to be twice the seasonal period, but the seasonal period here is about 52, and 104 lags is too many for the test statistic to have good properties. So we’ve set the lag to 26, which should be plenty.

The model looks pretty good and passes the Ljung-Box test, although there is some heteroskedasticity in the residuals.
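
If the heteroskedasticity were a concern, one option (our suggestion, not pursued in the original solution) would be to refit the model on the log scale and compare the residuals:
fit_log <- us_gasoline |>
  model(ARIMA(log(Barrels) ~ fourier(K = best$K) +
    trend(c(best$knot1, best$knot2)) + PDQ(0, 0, 0)))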

  4. Once you have a model with white noise residuals, produce forecasts for the next year.
fit |>
  forecast(h = "1 year") |>
  autoplot(us_gasoline)