4th EXERCISE LIST - TIME SERIES ANALYSIS 02/02/01

1) (a) AR(1): y_t = \phi y_{t-1} + e_t, \ e_t \sim WN(0, \sigma^2)
\hat{y}_t(h) = \left\{ \begin{array}{ll} \phi y_t, & h=1 \\ \phi \hat{y}_t(h-1), & h>1 \end{array} \right.

(b) MA(1): y_t = e_t - \theta e_{t-1}, \ e_t \sim WN(0, \sigma^2)
\hat{y}_t(h) = \left\{ \begin{array}{ll} -\theta \hat{e}_t, & h=1 \\ 0, & h>1 \end{array} \right.

(c) ARMA(2,1): (1 - \phi_1 B - \phi_2 B^2) y_t = (1 - \theta B) e_t, \ e_t \sim WN(0, \sigma^2), so y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + e_t - \theta e_{t-1}
\hat{y}_t(h) = \left\{ \begin{array}{ll} \phi_1 y_t + \phi_2 y_{t-1} - \theta \hat{e}_t, & h=1 \\ \phi_1 \hat{y}_t(1) + \phi_2 y_t, & h=2 \\ \phi_1 \hat{y}_t(h-1) + \phi_2 \hat{y}_t(h-2), & h>2 \end{array} \right.

(d) ARIMA(0,1,0): \Delta y_t = e_t, \ e_t \sim WN(0, \sigma^2), so y_t = y_{t-1} + e_t
\hat{y}_t(h) = \left\{ \begin{array}{ll} y_t, & h=1 \\ \hat{y}_t(h-1), & h>1 \end{array} \right.

(e) ARIMA(0,1,1): \Delta y_t = e_t - \theta e_{t-1}, \ e_t \sim WN(0, \sigma^2), so y_t = y_{t-1} + e_t - \theta e_{t-1}
\hat{y}_t(h) = \left\{ \begin{array}{ll} y_t - \theta \hat{e}_t, & h=1 \\ \hat{y}_t(h-1), & h>1 \end{array} \right.

(f) AR(3): y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \phi_3 y_{t-3} + e_t, \ e_t \sim WN(0, \sigma^2)
\hat{y}_t(h) = \left\{ \begin{array}{ll} \phi_1 y_t + \phi_2 y_{t-1} + \phi_3 y_{t-2}, & h=1 \\ \phi_1 \hat{y}_t(1) + \phi_2 y_t + \phi_3 y_{t-1}, & h=2 \\ \phi_1 \hat{y}_t(2) + \phi_2 \hat{y}_t(1) + \phi_3 y_t, & h=3 \\ \phi_1 \hat{y}_t(h-1) + \phi_2 \hat{y}_t(h-2) + \phi_3 \hat{y}_t(h-3), & h>3 \end{array} \right.

(g) MA(3): y_t = e_t - 0.7 e_{t-1} - 0.2 e_{t-2} - 0.3 e_{t-3}, \ e_t \sim WN(0, \sigma^2)
\hat{y}_t(h) = \left\{ \begin{array}{ll} -0.7\hat{e}_t - 0.2\hat{e}_{t-1} - 0.3\hat{e}_{t-2}, & h=1 \\ -0.2\hat{e}_t - 0.3\hat{e}_{t-1}, & h=2 \\ -0.3\hat{e}_t, & h=3 \\ 0, & h>3 \end{array} \right.

(h) ARIMA(0,2,1): \Delta^2 y_t = e_t - \theta e_{t-1}, \ e_t \sim WN(0, \sigma^2)
(1-B)^2 y_t = e_t - \theta e_{t-1} \Rightarrow y_t = 2y_{t-1} - y_{t-2} + e_t - \theta e_{t-1}
\hat{y}_t(h) = \left\{ \begin{array}{ll} 2y_t - y_{t-1} - \theta \hat{e}_t, & h=1 \\ 2\hat{y}_t(1) - y_t, & h=2 \\ 2\hat{y}_t(h-1) - \hat{y}_t(h-2), & h>2 \end{array} \right.

(i) SARIMA(1,0,0)\times(1,0,0)_{12}: \Phi(B^{12})\phi(B) y_t = e_t, \ e_t \sim WN(0, \sigma^2)
(1 - \Phi B^{12})(1 - \phi B) y_t = e_t \Rightarrow y_t = \phi y_{t-1} + \Phi y_{t-12} - \phi\Phi y_{t-13} + e_t
\hat{y}_t(h) = \left\{ \begin{array}{ll} \phi y_t + \Phi y_{t-11} - \phi\Phi y_{t-12}, & h=1 \\ \phi \hat{y}_t(h-1) + \Phi y_{t+h-12} - \phi\Phi y_{t+h-13}, & h=2,\ldots,12 \\ \phi \hat{y}_t(12) + \Phi \hat{y}_t(1) - \phi\Phi y_t, & h=13 \\ \phi \hat{y}_t(h-1) + \Phi \hat{y}_t(h-12) - \phi\Phi \hat{y}_t(h-13), & h>13 \end{array} \right.

(j) SARIMA(0,1,1)\times(0,0,1)_{12}: \Delta_{12} y_t = (1 - \theta B)(1 - \Theta B^{12})\varepsilon_t, \ \varepsilon_t \sim WN(0, \sigma^2)
y_t = y_{t-12} + \varepsilon_t - \theta\varepsilon_{t-1} - \Theta\varepsilon_{t-12} + \theta\Theta\varepsilon_{t-13}
\hat{y}_t(h) = \left\{ \begin{array}{ll} y_{t-11} - \theta\hat{\varepsilon}_t - \Theta\hat{\varepsilon}_{t-11} + \theta\Theta\hat{\varepsilon}_{t-12}, & h=1 \\ y_{t+h-12} - \Theta\hat{\varepsilon}_{t+h-12} + \theta\Theta\hat{\varepsilon}_{t+h-13}, & h=2,\ldots,12 \\ \hat{y}_t(1) + \theta\Theta\hat{\varepsilon}_t, & h=13 \\ \hat{y}_t(h-12), & h>13 \end{array} \right.
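The recursive forecast functions of exercise 1 translate directly into code. Below is a minimal Python sketch (not part of the original list) that reproduces the h-step forecasts of the AR(1) in item (a) and of the MA(3) in item (g); the series value y_t, the fitted residuals e_hat, and the horizons used in the example calls are illustrative assumptions.

import numpy as np

def ar1_forecast(y_t, phi, H):
    # Item 1(a): y_hat_t(1) = phi*y_t and y_hat_t(h) = phi*y_hat_t(h-1) for h > 1.
    preds, prev = [], y_t
    for _ in range(H):
        prev = phi * prev
        preds.append(prev)
    return np.array(preds)

def ma3_forecast(e_hat, H, th=(0.7, 0.2, 0.3)):
    # Item 1(g): future errors are replaced by 0 and past errors by the fitted
    # residuals, so the forecast vanishes for h > 3.  e_hat = [..., e_{t-2}, e_{t-1}, e_t].
    th1, th2, th3 = th
    base = [-th1 * e_hat[-1] - th2 * e_hat[-2] - th3 * e_hat[-3],  # h = 1
            -th2 * e_hat[-1] - th3 * e_hat[-2],                    # h = 2
            -th3 * e_hat[-1]]                                      # h = 3
    return np.array((base + [0.0] * H)[:H])

# Illustrative inputs only (y_t = 1.2 is consistent with 0.2674 * y_t = 0.3209 in exercise 2).
print(ar1_forecast(1.2, 0.2674, H=3))
print(ma3_forecast([0.1, -0.3, 0.5], H=5))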
2) (a) AR(1) with \hat{\phi} = 0.2674 and \hat{\sigma}^2 = 0.3053:
\hat{y}_t(h) = \left\{ \begin{array}{ll} 0.2674\, y_t, & h=1 \\ 0.2674\, \hat{y}_t(1), & h=2 \end{array} \right.
The forecast error is \tilde{e}_t(h) = y_{t+h} - \hat{y}_t(h) \sim N(0, V(h)), with \varepsilon_t \sim N(0, \sigma^2) and V(h) = \sigma^2 \sum_{j=0}^{h-1} \psi_j^2, so CI(y_{t+h}; 95\%) = (\hat{y}_t(h) - 1.96\sqrt{\hat{V}(h)};\ \hat{y}_t(h) + 1.96\sqrt{\hat{V}(h)}).
For an AR(1), y_t = \psi(B)\varepsilon_t = (1 + \phi B + \phi^2 B^2 + \ldots)\varepsilon_t, so \psi_j = \phi^j.
For h = 1: \hat{y}_t(1) = 0.2674\, y_t = 0.3209, \hat{V}(1) = \hat{\psi}_0^2 \hat{\sigma}^2 = 0.3053, and CI(y_{t+1}; 95\%) = (0.3209 - 1.96\sqrt{0.3053};\ 0.3209 + 1.96\sqrt{0.3053}) = (-1.5440; 2.1858).
For h = 2: \hat{y}_t(2) = 0.2674\, \hat{y}_t(1) = 0.3058, \hat{V}(2) = (\hat{\psi}_0^2 + \hat{\psi}_1^2)\hat{\sigma}^2 = (1 + \hat{\phi}^2)\hat{\sigma}^2 = 0.8059, and CI(y_{t+2}; 95\%) = (-1.5447; 2.1905).

(b) MA(1) with \hat{\theta} = -0.4578 and \hat{\sigma}^2 = 0.3559:
\hat{y}_t(h) = \left\{ \begin{array}{ll} -\hat{\theta}\,\hat{\varepsilon}_t, & h=1 \\ 0, & h \ge 2 \end{array} \right.
For an MA(1), y_t = \psi(B)\varepsilon_t = (1 - \theta B)\varepsilon_t, so \psi_0 = 1 and \psi_1 = -\theta.
For h = 1: \hat{y}_t(1) = 0.0478 \times 1.05 = 0.0502, \hat{V}(1) = \hat{\psi}_0^2 \hat{\sigma}^2 = 0.3559, and CI(y_{t+1}; 95\%) = (0.0502 - 1.96\sqrt{0.3559};\ 0.0502 + 1.96\sqrt{0.3559}) = (-1.79631; 1.89635).
For h = 2: \hat{y}_t(2) = 0, \hat{V}(2) = (\hat{\psi}_0^2 + \hat{\psi}_1^2)\hat{\sigma}^2 = 0.3559 + (-0.4579)^2 \times 0.3559 = 1.0553, and CI(y_{t+2}; 95\%) = (-1.90491; 2.00449).
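As a numerical companion to exercise 2(a), the following sketch builds a 95% prediction interval from the psi-weights of an AR(1), using V(h) = \sigma^2 \sum_{j=0}^{h-1} \psi_j^2 with \psi_j = \phi^j. It is an illustration rather than the original computation: the last observation y_t = 1.2 is an assumed value (chosen so that 0.2674 * y_t is close to 0.3209), so the printed intervals are only indicative.

import numpy as np

def ar1_interval(y_t, phi, sigma2, h, z=1.96):
    # Point forecast y_hat_t(h) = phi**h * y_t and forecast-error variance
    # V(h) = sigma^2 * sum_{j=0}^{h-1} phi^(2j), as in exercise 2(a).
    y_hat = (phi ** h) * y_t
    V_h = sigma2 * np.sum(phi ** (2.0 * np.arange(h)))
    half = z * np.sqrt(V_h)
    return y_hat - half, y_hat + half

# Assumed inputs: phi_hat = 0.2674, sigma2_hat = 0.3053, y_t = 1.2 (illustrative).
print(ar1_interval(1.2, 0.2674, 0.3053, h=1))
print(ar1_interval(1.2, 0.2674, 0.3053, h=2))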
3) (a) We know that \mu_t | D_{t-1} \sim N(a_t, R_t), with a_t = m_{t-1} and R_t = C_{t-1} + W_t, and that y_t | D_{t-1} \sim N(f_t, Q_t), with f_t = a_t = m_{t-1} and Q_t = R_t + V_t. Moreover, Cov(y_t, \mu_t | D_{t-1}) = Cov(\mu_t + v_t, \mu_t | D_{t-1}) = Var(\mu_t | D_{t-1}) = R_t. Hence (y_t, \mu_t)' | D_{t-1} has a bivariate normal distribution:
\begin{pmatrix} y_t \\ \mu_t \end{pmatrix} \bigg| D_{t-1} \sim N\left[ \begin{pmatrix} f_t \\ a_t \end{pmatrix}, \begin{pmatrix} Q_t & R_t \\ R_t & R_t \end{pmatrix} \right].

(b) Properties of the normal distribution: if X = (X_1, X_2)' \sim N\left[ \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix} \right], then X_1 | X_2 = x_2 \sim N(\mu_{1|2}, V_{1|2}), with \mu_{1|2} = \mu_1 + V_{12} V_{22}^{-1}(x_2 - \mu_2) and V_{1|2} = V_{11} - V_{12} V_{22}^{-1} V_{21}. Applying this to the bivariate normal in (a), with X_1 = \mu_t and X_2 = y_t, we obtain \mu_t | D_t \sim N(m_t, C_t), with m_t = m_{t-1} + R_t Q_t^{-1}(y_t - m_{t-1}) = m_{t-1} + A_t(y_t - f_t) and C_t = R_t - R_t Q_t^{-1} R_t = R_t - A_t^2 Q_t, where A_t = R_t / Q_t.

(c) C_t = R_t - A_t^2 Q_t = R_t - R_t^2 / Q_t = R_t - R_t^2 / (R_t + V_t) = R_t V_t / (R_t + V_t) = A_t V_t = R_t(1 - A_t).

4) F = 1, G = 1, V_t = V and W_t = 0, i.e. y_t = \mu_t + v_t with v_t \sim N(0, V), and \mu_t = \mu_{t-1} = \mu, with \mu | D_0 \sim N(m_0, C_0). The updating equations of exercise 3 then apply with R_t = C_{t-1}, giving m_t = m_{t-1} + A_t(y_t - m_{t-1}) and C_t = C_{t-1} V / (C_{t-1} + V), where A_t = C_{t-1} / (C_{t-1} + V).

5) y_t = \mu_t + v_t with v_t \sim N(N, V_t), so that y_t | \mu_t \sim N(\mu_t + N, V_t), and \mu_t = \mu_{t-1} + w_t with w_t \sim N(0, W_t). At time t-1, \mu_t | D_{t-1} \sim N(a_t, R_t).

(a) By Bayes' theorem,
p(\mu_t | D_t) \propto p(y_t | \mu_t, D_{t-1})\, p(\mu_t | D_{t-1}) \propto \exp\left\{ -\frac{(y_t - \mu_t - N)^2}{2V_t} \right\} \exp\left\{ -\frac{(\mu_t - a_t)^2}{2R_t} \right\}.
Completing the square in \mu_t gives \mu_t | D_t \sim N(m_t, C_t), with
m_t = \frac{R_t (y_t - N) + V_t a_t}{R_t + V_t} = a_t + A_t (y_t - a_t - N) and C_t = \frac{R_t V_t}{R_t + V_t}, where A_t = \frac{R_t}{R_t + V_t} = \frac{R_t}{Q_t}.

(b) We know that \mu_t | D_{t-1} \sim N(a_t, R_t) and y_t | D_{t-1} \sim N(f_t, Q_t), with f_t = a_t + N = m_{t-1} + N, since E(y_t | D_{t-1}) = E(E(y_t | \mu_t) | D_{t-1}) = E(\mu_t + N | D_{t-1}) = E(\mu_t | D_{t-1}) + N = a_t + N, and Q_t = R_t + V_t, since Var(y_t | D_{t-1}) = Var(E(y_t | \mu_t) | D_{t-1}) + E(Var(y_t | \mu_t) | D_{t-1}) = Var(\mu_t + N | D_{t-1}) + E(V_t | D_{t-1}) = Var(\mu_t | D_{t-1}) + V_t = R_t + V_t. Also, Cov(y_t, \mu_t | D_{t-1}) = Cov(\mu_t + v_t, \mu_t | D_{t-1}) = Var(\mu_t | D_{t-1}) = R_t. Thus
\begin{pmatrix} y_t \\ \mu_t \end{pmatrix} \bigg| D_{t-1} \sim N\left[ \begin{pmatrix} m_{t-1} + N \\ m_{t-1} \end{pmatrix}, \begin{pmatrix} Q_t & R_t \\ R_t & R_t \end{pmatrix} \right],
so, using the properties of the normal distribution as in exercise 3,
m_t = m_{t-1} + R_t Q_t^{-1}(y_t - (m_{t-1} + N)) = m_{t-1} + A_t(y_t - a_t - N) and C_t = R_t - R_t Q_t^{-1} R_t = R_t - A_t^2 Q_t.
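The updating equations derived in exercises 3 and 5 amount to a single Kalman-filter step for the local-level DLM. The sketch below is a minimal illustration under the notation above (the numerical inputs are assumptions): it forms the prior (a_t, R_t), the one-step forecast (f_t, Q_t) and the posterior (m_t, C_t), with the known observation offset N of exercise 5 defaulting to 0 as in exercise 3, and W_t = 0 recovering the static-mean setting of exercise 4.

def dlm_update(m_prev, C_prev, y_t, V_t, W_t, N=0.0):
    # Prior at time t:   mu_t | D_{t-1} ~ N(a_t, R_t)
    a_t, R_t = m_prev, C_prev + W_t
    # One-step forecast: y_t  | D_{t-1} ~ N(f_t, Q_t)
    f_t, Q_t = a_t + N, R_t + V_t
    # Posterior:         mu_t | D_t ~ N(m_t, C_t), adaptive coefficient A_t = R_t / Q_t
    A_t = R_t / Q_t
    m_t = a_t + A_t * (y_t - f_t)
    C_t = R_t - A_t ** 2 * Q_t      # equivalently R_t * V_t / (R_t + V_t)
    return m_t, C_t

# Illustrative runs: W_t = 0 mirrors exercise 4 (static mean); N != 0 mirrors exercise 5.
print(dlm_update(m_prev=0.0, C_prev=1.0, y_t=1.5, V_t=0.5, W_t=0.0))
print(dlm_update(m_prev=0.0, C_prev=1.0, y_t=1.5, V_t=0.5, W_t=0.2, N=0.3))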