Friday, October 24, 2008

Dr. Thoma shows his compassion and not necessarily his Economics...

The Goal of Increasing Home Ownership
If someone has been paying rent month after month, year after year, and has a good credit record, it seems to me there ought to be some way for them to buy a house.
Maybe there should be, considering there may be some social benefit to the general population from home ownership. I cannot find it now, but there was an interesting article pointing out that homeowners tend to become more conservative politically once they own a home. They have more vested in the markets and the overall economy and want lower taxes overall. So it could be a selfish Republican desire to increase ownership and gain the political advantage of increased voter registration, similar to Democrats wanting to register prisoners and grant amnesty to illegal aliens.
We are about to start passing rules and regulations to try to prevent another financial crisis from happening, and I don't want to see people excluded from home ownership unnecessarily. I know it's unfashionable to stick up for the poor right now, to advocate for increased home ownership, and in particular to say that it was not a mistake to try to increase home ownership rates at lower income levels, but (1) poor households didn't cause the financial crisis, though in many cases they were victims of it, and (2) it's the right thing to do in any case.
Yes, the operative word is "unnecessarily". As for what is popular, it seems that, just like Thoma, we are supposed to be concerned only about the poor. Not that I am shedding any tears for Wall Street brokers, etc., but seeing everything in terms of implied benefit or harm to the "poor" is too much at times.

I cannot think of anyone better to blame than those who took out mortgages they could not repay. Sure, there were probably some unscrupulous lenders, but I have not heard of anyone being forced into signing the papers with a gun to their head. I also wonder if he differentiates between the poor and homeowners with mortgages beyond their means to pay. And maybe it is the "right thing to do", but at what cost? It does not seem to bother Dr. Thoma that the costs we are bearing now may exceed any benefit we could possibly get over the next 50 years.

After spelling out some benefits and drawbacks of homeownership, while missing the investment aspects, he pontificates on the following.
If we, say, require a 10% or 20% down payment for all buyers, that will impose a substantial barrier to purchasing a home. Many people can get access to a down payment somehow - real estate agents will fill you in on tricks such as how to borrow the money from family and have it look like a gift - but many others don't have access to those resources, and saving money when you are living close to the edge is not easy at all.

But what about all the lower income households who have never missed a rent payment, that have decent credit, but cannot possibly meet even, say, a 10% down payment hurdle, how do we ensure that they have a path to home ownership? They have shown themselves to be able to reliably pay a particular amount, and there ought to be a house they could buy with a similar payment profile.
There always seem to be alternatives, including borrowing from relatives. My first house we bought with an 80-15-5 loan package. I pointed out to my wife every month that her savings account was netting $5 a month while the second note was costing $150 in interest. Although we drew our savings down to an uncomfortably low number, we paid the second note off and saved thousands of dollars in interest.

But there could be a false dilemma here: the number of people he is talking about, those who pay their rent and have good credit but cannot come up with a down payment, may be very small. It could also be that "good renters" do not make good homeowners. Renters know that being one week late on a payment means looking for a new place, while homeowners have a lot more leeway, since foreclosure is a very expensive proposition for banks.
So let's fix that instead of excluding them from ownership. Households with a verifiable, reliable payment history and with decent credit need a way to buy a house if that's what they have their heart set on doing. But it has to be a house they can afford, the payments have to match their income and their rental history. The process has to ensure that this happens.

[Sketching something out quickly without intending to get every detail correct, perhaps something like the following would work. First, you only get one shot at this program. If you walk away or default, that's it, you can't ever use this program again. That probably means not buying a house again for a long, long time, if ever. The program would involve mortgage loans with minimal down payment requirements.

Second, if your household income is in the qualifying range, the government will grant you an equity stake in the house of, say, $5,000 (or pick an amount you like better). If you stay in the house for seven years or more, then the $5,000 is yours if you ever sell the house (perhaps as a tax credit). [There could be some payback mechanism if the homeowner makes an excessive amount on the sale, or not. Also, I don't like that there is an incentive to sell the house after seven years, so perhaps the $5,000 could go into an IRA or something similar if it is not used to purchase a new house, that way the cash would not be immediately available if the household went back to renting.]
OK, so an interesting proposal. And Dr. Thoma is at least economically sound in his assessment of which policies would be least distorting to the market, as compared with the ACORN and GSE problems of forcing markets to make bad decisions. If there is a benefit to society, and we can put a value on it like the $5,000 cash incentive to purchase a home he proposes above, then it is simply a lump-sum transfer and will be the least distorting to the market.

This idea could also be extended to what the Libs call "redlined" districts. If there is a bias against mortgage lending in certain areas, then provide an incentive to overcome that obstacle. That would be a market-enhancing measure instead of the market-destroying mechanism of forcing banks to make these extra loans through legislation that the police state would enforce, or worse yet, rule by the mob.

Anyway food for thought from Dr. Thoma.

Labels:

Wednesday, October 22, 2008

More Dweebs at Econospeak {Econoldge}

I found some more dweeb economists on the web. Maybe the Google CEO is right, the internet is a cesspool.
Can Destabilizing Speculation Both Be Profitable and Help Obama Win the Election?
Paul Krugman treats us to a recent rightwing claim that a bunch of rich socialists have generated the recent financial crisis in order to assure that Obama wins the election. Paul reminds us that this same crowd back in 2004 claimed George Soros would do the same in order to help John Kerry.
Oh boy those dweebs provide a link to another Dweeb. Paul the Dweeb tells us that Crooked Timber has the goods on the vast right-wing conspiracies as in:
Let me be the first to welcome our new Socialist International Conspirator Overlords
Some unkind lefties (including one of my co-bloggers) were a little dismissive towards this post by ‘Dr. Helen,’ blogger and Instaspouse of Professor Glenn Reynolds.
...
But now Barbara Ehrenreich (via Cosma ) has let the cat out of the bag and it’s even worse than Dr. Helen suspected.

So what do we have? A blog post by Dr. Helen, someone who does not have a background in economics and does not represent much of the right by my standard. And a comedy post by Barbara Ehrenreich, "Report from the Socialist International Conspiracy", via Three-Toed Sloth. So where are our right-wing nuts? Nowhere, when Barbara Ehrenreich is described on Wikipedia as:
is an American feminist, socialist and political activist. She is a widely read columnist and essayist, and the author of nearly 20 books.
So a socialist is our right wing?

Back to Paul Krugman the Dweeb:
But why should we be surprised? Before the 2004 election, there was a lot of talk on the right about how George Soros would engineer a financial crisis to swing the election:
This was just a segue back to EconoSpeak:
OK but aren’t there a lot of rich people who want McCain to win? If the rich lefties are engaged in what amounts to be destabilizing activity – couldn’t the rich righties make money by engaging in stabilizing speculation? Interestingly, Marxist.com brings us the thoughts of Milton Friedman on this issue:
Milton Friedman asserted that destabilising speculation was impossible. This was supposed to be the case because speculators who ‘got it wrong’ would be buying dear and selling cheap. They would lose money and soon disappear.

So are these rightwingers saying that Friedman got this issue wrong?
Maybe there are a lot of rich folks who want McCain to win, but the question is whether they are as willing to invest to see it happen. And considering that Obama is getting more money in donations, much of it coming from the CEO class that, based on preconceived biases, is supposed to support McCain, it comes down to who is more willing to risk their assets.

But it is true that if the Libs decided to destabilize the markets and the Right decided to counter it, the Right could make money in much the same way that governments do when they predict the natural level correctly.

I see no one telling Friedman he is wrong, just that the Libs seem more willing to invest large amounts of capital to push their candidate onto other people. Yes, Soros risked $30 million to get Bush defeated, which as we know failed. But Soros and a few of his friends could, if they wanted to mount a directed attack, disrupt the markets. We do have Soros being influential in the collapse of the British pound in the early 90s and again in the East Asian financial crisis of 1997-98.

So back to the Friedman point: the question becomes what counts as "soon" and whether the Libtards are willing to lose some serious change to throw the markets out of order. Taking the convicted insider trader Soros as an example, then yes, I believe some of them would be interested enough to do such things.

Labels:

Thursday, October 02, 2008

#1 Answer all parts of this question:

Question 1>(a): Explain the Chow test of parameter stability.
The Chow test can help us test for the presence of structural changes or ‘breaks’ by testing whether the coefficients of two linear regressions are the same over different sets of data (for example on two time periods).
Quote:
The consequence of including dummy variables in regression is essentially that we estimate two or more regressions simultaneously.
Quote:
Three separate regressions are run:
the entire data set: n observations, RSS(N)
the period before any parameter change: n1 observations, RSS(n1)
the period after the parameter change: n2 observations, RSS(n2)

Each regression estimates k parameters.

The Chow test compares the residual sum of squares from a regression run on the entire data set with the residual sums of squares resulting from two separate regressions on the two sub-groups within the sample. That is, the test compares RSS(N) with RSS(n1)+RSS(n2). If the two values are close, the same parameters are appropriate for the entire data set: the parameters are stable. The point of the F-test is to see whether the residual sum of squares (which measures the variation in the data not explained by the regression) is significantly reduced by fitting two separate regressions rather than just one.
RSS(UR) = RSS(1) + RSS(2)
F = ((RSS(R) - RSS(UR))/k) / (RSS(UR)/(n1 + n2 - 2k))

Or:
F = ([RSS(N) - (RSS(n1) + RSS(n2))]/k) / ((RSS(n1) + RSS(n2))/(n1 + n2 - 2k))


If the calculated value of the test statistic is greater than the critical value at a predetermined significance level, say .05, reject H(0); the same parameters are not appropriate for the entire sample period.

Quote:
{Computed F< Critical F}
6. Therefore, we do not reject the null hypothesis of parameter stability (i.e. no structural change) if the computed F value in an application does not exceed the critical F value obtained from the F table at the chosen level of significance (or the p value).

But it must first be determined whether the error variances in the regressions for n1 and n2 are equal.
Estimated error variances: Var(n1) = RSS(n1)/(n1 - k), Var(n2) = RSS(n2)/(n2 - k)
F = Var(n1)/Var(n2)
with an F distribution of F(n1 - k, n2 - k)
Quote:
Note: By convention we put the larger of the two estimated variances in the numerator.

Computing this F in an application and comparing it with the critical F value with the appropriate df, one can decide to reject or not reject the null hypothesis that the variances in the two subpopulations are the same. If the null hypothesis is not rejected then one can use the Chow test.

+++++++++++++++++++
(b) Using annual data, a consumption function has been estimated for an economy over two consecutive periods. The estimated equations and associated sums of squared residuals (RSS) are:

C(t) = 3.5 + .67*Y(t) + u(t)
-------(1.9)---(.18)----------Standard Errors
1960-1979 RSS(1) = 53.6

C(t) = 7.3 + .89*Y(t) + u(t)
------(2.21)---(.23)
1980-1999 RSS(2) = 198.6

in which C is consumption expenditure and Y is personal disposable income. When the equation is estimated for the combined data set, RSS = 641.2. Using the Chow test at the .05 significance level, test the hypothesis that the parameters are stable over time.

I use the following equations from the book:
RSS(UR) = RSS(1) + RSS(2)
F = ((RSS(R) - RSS(UR))/k) / (RSS(UR)/(n(1) + n(2) - 2*k))
k= 2 {parameters in equation}
n(1)=n(2)=20
RSS(UR)= 252.2

F (calculated/computed) = ((641.2 - 252.2)/2) / (252.2/(40 - 4)) = 194.5/7.006 = 27.76
Critical value: F(2,36) at the 5% level is about 3.3 (the tabulated F(2,40) at 1% is 5.18)
Quote:
6. {Page 277}
Therefore, we do not reject the null hypothesis of parameter stability (i.e. no structural change) if the computed F value in an application does not exceed the critical F value obtained from the F table at the chosen level of significance (or the p value).

Computed F < Critical F: do not reject the null hypothesis of parameter stability.
Computed F > Critical F: reject the null hypothesis of parameter stability.
Here the computed F of 27.76 far exceeds the critical value, so we reject the null hypothesis: the parameters are not stable across the two periods.
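As a quick numerical check of the calculation above (a minimal sketch in Python; the RSS values and sample sizes are the ones given in the question, and scipy is assumed to be available for the critical value):

from scipy.stats import f

# Chow test inputs from part (b)
rss_r = 641.2                  # RSS from the pooled (restricted) regression
rss_1, rss_2 = 53.6, 198.6     # RSS from the two sub-period regressions
n1 = n2 = 20                   # observations per sub-period
k = 2                          # parameters estimated in each regression

rss_ur = rss_1 + rss_2                      # unrestricted RSS = 252.2
df1, df2 = k, n1 + n2 - 2 * k               # (2, 36) degrees of freedom
f_calc = ((rss_r - rss_ur) / df1) / (rss_ur / df2)
f_crit = f.ppf(0.95, df1, df2)              # 5% critical value

print(f"F = {f_calc:.2f}, 5% critical F({df1},{df2}) = {f_crit:.2f}")
# F is roughly 27.8, well above the critical value, so parameter
# stability (no structural break) is rejected.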
+++++++++++++++++
(c) How might dummy variables be used to test the hypothesis that both the intercept and slope coefficients have changed?
By running one regression over both time periods with a dummy variable, we could determine whether either or both of the intercept and slope coefficients have changed, as in the following simple example:
Y(t) = a(1) + a(2)*D(t) + b(1)*X(t) +b(2)*(D(t)*X(t)) + u(t)
Where D is 0 for one period and 1 for the other period.
Thus if a(2) {the differential intercept} is significant, we would know that the intercept has changed, and if b(2) {the differential slope coefficient, or slope drifter} is significant, we would know that the slope has changed between the two periods.
More precisely:
Quote:
Thus if the differential intercept coefficient a(2) is statistically insignificant, we may accept the hypothesis that the two regressions have the same intercept.
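A minimal sketch of how the dummy-variable regression could be set up (the data here are simulated placeholders, not the consumption data above; statsmodels is assumed to be available):

import numpy as np
import statsmodels.api as sm

# Placeholder data: d = 0 before the suspected break, 1 after
rng = np.random.default_rng(0)
x = rng.normal(size=40)
d = np.r_[np.zeros(20), np.ones(20)]
y = 3.5 + 0.67 * x + d * (3.8 + 0.22 * x) + rng.normal(scale=1.0, size=40)

X = sm.add_constant(np.column_stack([d, x, d * x]))   # a(1), a(2)*D, b(1)*X, b(2)*(D*X)
res = sm.OLS(y, X).fit()
print(res.summary())
# The t statistics on the D and D*X terms test whether the intercept
# and slope, respectively, differ between the two periods.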


#2. (a) Explain the partial adjustment hypothesis.
Quote:
1.1.3 This partial adjustment model is an autoregressive model which is similar to the autoregressive model formed by using the Koyck transformation {in section 1.1.2}. Notice, however, that with this partial adjustment model the disturbance term is s*u(t), where s is a constant; s*u(t) is not autocorrelated if u(t) is not autocorrelated.
1. The short run or impact reaction of Y to a unit change in X is s*b(0).
2. The long run or total response is b(0). An estimate of b(0) can be obtained by dividing the estimate of s*b(0) by one minus the estimated coefficient on the lagged dependent variable, i.e. by 1 - (1 - s) = s.

Gujarati Book page 675
Quote:
The partial adjustment model resembles both the Koyck and adaptive expectation models in that it is autoregressive. But it has a much simpler disturbance term: the original disturbance term u(t) multiplied by a constant s. But bear in mind that although similar in appearance, the adaptive expectation and partial adjustment models are conceptually very different. The former is based on uncertainty (about the future course of prices, interest rates, etc.), whereas the latter is due to technical or institutional rigidities, inertia, cost of change, etc. However both of these models are theoretically much sounder than the Koyck model.

++++++++++++++++
(b) Using the long run demand for money function:

ln(exp(M(t))) = b(1) + b(2)*ln(R(t)) + b(3)*ln(Y(t)) + u(t)
in which exp(M) is the desired demand for real cash balances, R is a long term interest rate and Y is real national income, together with the logarithmic partial adjustment hypothesis:

ln(M(t)) - ln(M(t-1)) = ad{ln(exp(M(t))) - ln(M(t-1))}
in which M is real cash balances, derive the short run demand for money function.
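A sketch of the derivation, using the notation above (ad is the adjustment parameter): substituting the long-run function for ln(exp(M(t))) into the partial adjustment equation and collecting terms gives

ln(M(t)) = ad*b(1) + ad*b(2)*ln(R(t)) + ad*b(3)*ln(Y(t)) + (1 - ad)*ln(M(t-1)) + ad*u(t)

which is the short-run demand for money function: the short-run elasticities are ad*b(2) and ad*b(3), and the coefficient on ln(M(t-1)) is one minus the adjustment parameter.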
++++++++++++++++++
(c) The following demand for money function was estimated with annual data over the period 1960-1999 (40 observations):

ln(M(t)) = -3.21 - 0.45*ln(R(t)) + 0.65*ln(Y(t)) + 0.25*ln(M(t-1))
-----------(2.65)-----(0.07)----------(0.20)-----------(0.02)
(Standard errors)
R^2 = 0.987------DW = 1.5

(i) Calculate and interpret Durbin's h statistic.
h = (1 - d/2)*sqrt(N/(1 - N*[se(lamda)]^2)), where lamda is the coefficient on ln(M(t-1))
= (1 - 0.75)*sqrt(40/(1 - 40*(0.02)^2)) = 0.25*sqrt(40/0.984) = 1.59 (approximately)
Since |h| = 1.59 < 1.96, and h is approximately standard normal under the null, we do not reject the null hypothesis of no first-order autocorrelation at the 5% level (the DW statistic itself is unreliable here because of the lagged dependent variable).

(II) Derive an estimate of the adjustment parameter and interpret it.

(iii) What is long-run income elasticity of the demand for money and what does it tell us?

(iv) What is the long-run interest elasticity of the demand for money.
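A minimal sketch of the calculations for (i)-(iv), using only the estimates reported above and the partial adjustment interpretation (the coefficient on ln(M(t-1)) equals one minus the adjustment parameter):

import math

n = 40
d = 1.5
lag_coef, lag_se = 0.25, 0.02        # coefficient and s.e. on ln(M(t-1))
b_r, b_y = -0.45, 0.65               # short-run interest and income coefficients

# (i) Durbin's h (asymptotically N(0,1) under the null of no autocorrelation)
h = (1 - d / 2) * math.sqrt(n / (1 - n * lag_se ** 2))
print(f"h = {h:.2f}")                # about 1.59 < 1.96: do not reject no AR(1) errors

# (ii) adjustment parameter: 1 - coefficient on the lagged dependent variable
adj = 1 - lag_coef                   # 0.75: about 75% of the gap closed each year

# (iii)-(iv) long-run elasticities = short-run coefficient / adjustment parameter
print(f"long-run income elasticity   = {b_y / adj:.2f}")    # about 0.87
print(f"long-run interest elasticity = {b_r / adj:.2f}")    # about -0.60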

Question #3|Sample Examination|Unit 3
(a) In the context of a simultaneous equation model, explain clearly the differences between:
(i) endogenous, exogenous and predetermined variables:

Exercise 1.b
Quote:
Endogenous variables are those variables whose values are determined by the solution to the simultaneous equation system representing the economic model. The values of exogenous variables are not determined by the solution to the model, but are provided by information supplied from outside the model. Predetermined variables are exogenous variables (current or lagged values) plus any lagged endogenous variables.

So predetermined are any exogenous variable as well as any lagged endogenous variable.

(ii) behavioural equations, equilibrium conditions and identities:
Unit 3 Pages 3-4
Quote:
A model is a set of equations describing hypotheses about economic relations. Equations may be of three kinds: first, they may be definitions or identities, setting up identities between variables; Y=C+I+G, the second equation in the model (3.4), is of this type. Note that an identity is not an equation to be estimated by econometric procedures and it does not include a random disturbance term; it simply defines an equality. Secondly, equations may be behavioural, showing the assumptions made about the way in which economic agents, or groups of economic agents behave: in (3.3) and (3.4) the demand and supply functions and the consumption function are of this kind. These behavioural equations are the ones with parameters to be estimated by econometric methods from real world data. Finally, model equations may state equilibrium conditions; see, for example, the third equation of the market model (3.3) which states that for equilibrium in the marketplace the quantity demanded must equal the quantity supplied.


(iii) the structural and reduced forms of the model:
Exercise 1.c
Quote:
The structural form of the model represents a theory or hypothesis about the relationships in an economy or some part of an economy. The equations of the structural form may be behavioural equations, identities or equilibrium conditions, and the variables of the structural equations can be endogenously or exogenously determined. The reduced form of the model is the set of equations that express the endogenous variables of the model in terms of the predetermined variables. The equations in the final form are similar but, for each equation, only lagged values of the left-hand side endogenous variables are permitted on the right-hand side.

+++++++++++++++++
(b) Consider the following market model:

Qd(t) = a(1) + a(2)*P(t) + a(3)*Y(t)
Qs(t) = b(1) + b(2)*P(t) + b(3)*P(t-1)
Qd(t) = Qs(t)

in which Qd(t), Qs(t) and P(t) are endogenous and Y(t) exogenous.

(i) Find the reduced form of the model.
***Section 1.4
****Answer 3*****
(ii) Find the final form of the model.
***Section 1.5.1
(iii) Explain what determines the stability of the model.
Section 1.5.3 and answer 5 & 6
Quote:
...we know that the system will be stable if the coefficient on the lagged endogenous variable has an absolute value less than one.
...
If ... this expression will be negative .... then it will follow an oscillating path, tending to a stable equilibrium only if the absolute value of the numerator is less than the absolute value of the denominator.
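A minimal sketch of deriving the reduced form by computer algebra (sympy assumed available; the symbols mirror the model in part (b)):

import sympy as sp

P, P_lag, Y, Q = sp.symbols('P P_lag Y Q')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')

Qd = a1 + a2 * P + a3 * Y           # demand
Qs = b1 + b2 * P + b3 * P_lag       # supply

# Impose the equilibrium condition Qd = Qs and solve for the endogenous P and Q
sol = sp.solve([sp.Eq(Qd, Qs), sp.Eq(Q, Qd)], [P, Q], dict=True)[0]
print(sp.simplify(sol[P]))   # P(t) in terms of P(t-1) and Y(t): the reduced form
print(sp.simplify(sol[Q]))
# The coefficient on P_lag in the P equation, b3/(a2 - b2), governs the final
# form dynamics; stability requires its absolute value to be less than one.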


#4(a) Explain carefully the meaning and significance of saying that an equation is 'identified'.
Quote:
19.2 The Identification Problem (Page 739)
By the identification problem we mean whether numerical estimates of the parameters of a structural equation can be obtained from the estimated reduced-form coefficients. If this can be done, we say that the particular equation is identified.

To get a unique answer for each structural equation, enough variables must be excluded from it that it is identified; but if more than the minimum are excluded, the equation is overidentified and more than one set of estimates can be recovered from the reduced form.

Answers to Exercise 1:
Quote:
(a) The 'identification problem' refers to the question of whether numerical values for the parameters of the structural equations of a model can be determined from the estimated reduced form coefficients.

(b) (i) A structural equation is exactly identified if unique numerical values for its parameters can be obtained from the reduced form coefficients (rfc).

(ii) not identified if it is impossible to obtain numerical values for the structural parameters from the rfc.

(iii) overidentified if more than one set of numerical values can be calculated for the structural parameters from the rfc.

+++++++++
(b) Explain carefully the order and rank conditions for identification.
+++++++++++
Consider the following simultaneous equations system:

Y(1t) = a(1) + a(2)*Y(3t) + u(1t)
Y(2t) = b(1) + b(2)*X(1t) + b(3)*Y(1t) + b(4)*Y(3t) + u(2t)
Y(3t) = l(1) + l(2)*X(2t) + l(3)*X(3t) + l(4)*Y(2t) + u(3t)

in which the Y(i) are endogenous variables, the X(i) are exogenous variables and the u(i) are disturbances.
(c) Use the order and rank conditions to examine the identification of the equations.
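A sketch of the order condition applied to this system (my own working, with K = 3 predetermined variables in the system, k = the number included in a given equation, and m = the number of endogenous variables included; K - k >= m - 1 is necessary but not sufficient, so the rank condition must still be checked):
Equation 1: m = 2 (Y1, Y3), k = 0, so K - k = 3 > m - 1 = 1: overidentified by the order condition.
Equation 2: m = 3 (Y1, Y2, Y3), k = 1 (X1), so K - k = 2 = m - 1: exactly identified by the order condition.
Equation 3: m = 2 (Y2, Y3), k = 2 (X2, X3), so K - k = 1 = m - 1: exactly identified by the order condition.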
++++++++++
(d) What are the implications for identification if b(4) = 0.

#5. (a) What are the principal characteristics of a recursive model? How would you obtain estimates of the parameters?
There is a sequence of equations in which the first equation has no endogenous variables included as regressors, and each subsequent equation includes as regressors only the endogenous variables from the preceding equations. Thus causality is unidirectional rather than bidirectional.
Quote:
1.2
A recursive model has the characteristic that the equations can be ordered in a specific way. The endogenous variable in the first equation is determined only by exogenous variables. The dependent variable in the second equation is determined by the endogenous variable in the first equation and exogenous variables, but not by any other endogenous variable. The dependent variable in the third equation is determined by the endogenous variables in the first two equations and exogenous variables, but no other endogenous variable enters as a regressor.
...
There are two crucial requirements. First there is no feedback from a higher level endogenous variable to one lower down the causal chain. Secondly, the disturbances are assumed independent...
...
Furthermore, if none of the equations have lagged dependent variables, OLS estimators are unbiased.

++++++++++++
(b) Explain carefully one method for estimating the parameters of an overidentified equation in a system of simultaneous equations.
1.4 Overidentified Equations and 2SLS:
Unit 5 Page 6, 1.4
Quote:
When an endogenous variable appears as a regressor it is correlated with the disturbance term. The basic idea behind 2SLS is to replace the stochastic endogenous regressor with one that is non-stochastic and consequently independent of the disturbance term. The method is called two-stage least squares for the obvious reason that least squares estimation is applied in two stages. For the equation to be estimated:

Stage 1: Regress each endogenous regressor on all of the predetermined (exogenous and lagged endogenous) variables in the entire system, using OLS. That is, estimate the reduced form equations. Calculate the predicted values of the endogenous variables. This yields, for example, Y(t) = est(Y(t)) + est(u(t)), in which the est(u(t)) are estimated residuals which are uncorrelated with est(Y(t)) (proof p 59 Gujarati).

Stage 2: The predicted values are used as proxies or instruments for the endogenous regressors in the original equations

The method of 2SLS has been widely used in empirical work. It generates estimates which are biased but consistent and which are relatively robust in the presence of specification errors. It is desirable, however, that the R^2 for the estimated reduced form equations in stage 1 are not too low. ... For an equation that is exactly identified, 2SLS yields estimates identical to those obtained by ILS and so 2SLS is usually applied to all identified equations.
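A minimal sketch of the two stages done by hand with OLS (simulated placeholder data; statsmodels assumed; in practice a packaged IV/2SLS routine would also correct the second-stage standard errors, as noted in point 6 below):

import numpy as np
import statsmodels.api as sm

# Placeholder data: y2 is the endogenous regressor, x1 and x2 are predetermined
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
v = rng.normal(size=n)
y2 = 1.0 + 0.5 * x1 + 0.8 * x2 + v                        # endogenous regressor
y1 = 2.0 + 1.5 * y2 + 0.7 * x1 + v + rng.normal(size=n)   # y2 correlated with the error

# Stage 1: regress the endogenous regressor on all predetermined variables
Z = sm.add_constant(np.column_stack([x1, x2]))
y2_hat = sm.OLS(y2, Z).fit().fittedvalues

# Stage 2: replace y2 by its fitted values in the structural equation
X2 = sm.add_constant(np.column_stack([y2_hat, x1]))
print(sm.OLS(y1, X2).fit().params)   # consistent estimates of the true (2.0, 1.5, 0.7)
# Note: the standard errors printed here are not the correct 2SLS standard
# errors; packaged IV estimators apply the adjustment automatically.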

++++++++++
(c) What are the main features of a two-stage least squares (2SLS) estimators?
Gujarati Book page 773
Quote:
Note the following features of 2SLS.
1. It can be applied to an individual equation in the system without directly taking into account any other equation(s) in the system. Hence, for solving econometric models involving a large number of equations, 2SLS offers an economical method. For this reason the method has been used extensively in practice.

2. Unlike ILS, which provides multiple estimates of parameters in the overidentified equations, 2SLS provides only one estimate per parameter.

3. It is easy to apply because all one needs to know is the total number of exogenous or predetermined variables in the system without knowing any other variables in the system.

4. Although specifically designed to handle overidentified equations, the method can also be applied to exactly identified equations. But then ILS and 2SLS will give identical estimates.

5. If the R^2 values in the reduced-form regressions (that is, Stage 1 regressions) are very high, say, in excess of 0.8, the classical OLS estimates and the 2SLS estimates will be very close. But this result should not be surprising because if the R^2 value in the first stage is very high, it means that the estimated values of the endogenous variables are very close to their actual values, and hence the latter are less likely to be correlated with the stochastic disturbances in the original structural equations. If, however, the R^2 values in the first-stage regressions are very low, the 2SLS estimates will be practically meaningless because we shall be replacing the original Y's in the second-stage regression by the estimated est(Y)'s from the first-stage regressions, which will essentially represent the disturbances in the first-stage regressions. In other words, in this case, the est(Y)s will be very poor proxies for the original Y's.

6. Notice that in reporting the ILS regression in (20.3.15) we did not state the standard errors of the estimated coefficients (for reasons explained in footnote 10). But we can do this for the 2SLS estimates because the structural coefficients are directly estimated from the second-stage (OLS) regressions. There is, however, a caution to be exercised. The estimated standard errors in the second-stage regressions need to be modified because, as can be seen from Eq. (20.4.60), the error term u(t)* is, in fact, the original error term u(2t) plus b(21)*est(u(t)). Hence the variance of u(t)* is not exactly equal to the variance of the original u(2t). However, the modification required can be easily effected by the formula given in Appendix 20A, Section 20A.2.

7. In using the 2SLS, bear in mind the following remarks of Henry Theil:
Quote:
The statistical justification of the 2SLS is of the large-sample type. When there are no lagged endogenous variables...the 2SLS coefficient estimators are consistent if the exogenous variables are constant in repeated samples and if the disturbance(s) ...are independently and identically distributed with zero means and finite variances...If these two conditions are satisfied, the sampling distribution of 2SLS coefficient estimators becomes approximately normal for large samples...

When the equation system contains lagged endogenous variables, the consistency and large-sample normality of the 2SLS coefficient estimators require an additional condition...that as the sample increases the mean square of the values taken by each lagged endogenous variable converges in probability to a positive limit...

If the disturbances are not independently distributed, the lagged endogenous variables are not independent of the current operation of the equation system..., which means that these variables are not really predetermined. If these variables are nevertheless treated as predetermined in the 2SLS procedure, the resulting estimators are not consistent.


#6. (a) Explain what is meant by:

(i) a stationary time series
Quote:
The time series x(t) is weakly (covariance) stationary if the following three properties hold:
1. the mean is constant through time, E(x(t)) = m for all t
and
2. the variance is constant through time, var(x(t)) = E((x(t) - m)^2) = sigma^2 for all t
and
3. the covariance depends only upon the number of periods between two values,
cov(x(t), x(t-k)) = E((x(t) - m)*(x(t-k) - m)) = lamda(k), k <> 0, for all t.

If any one of 1, 2 or 3 is not true then the time series, x(t), is nonstationary.



(ii) a random walk with drift.
Quote:
Consider the process:
x(t) = a(0) + x(t-1) + e(t)
{with a(0) being the drift part of the equation either being negative or positive drift and e(t) giving the randomness of the value added to last x.}

and the initial value of x at time t=0, x(0), is fixed. The current value of the series is the sum of a fixed increment a(0), the previous value, and a purely random element. The random walk with drift may be viewed as an autoregressive model with an intercept and a coefficient of one on the lagged variable.
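A minimal sketch simulating a random walk with drift and its first difference (the drift value and sample size are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(42)
T = 478                      # same length as the SEMDEX sample below
a0 = 0.02                    # drift
e = rng.normal(size=T)

x = np.cumsum(a0 + e)        # x(t) = a0 + x(t-1) + e(t), with x(0) = 0
dx = np.diff(x)              # first difference: a0 + e(t), which is stationary

print(x[:5])                 # the level wanders (nonstationary, I(1))
print(dx.mean(), dx.std())   # the difference fluctuates around the drift a0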

++++++++++++
The following graphs are based upon weekly data for the Mauritius Stock Exchange for the period from 3 January 1990 until 24 February 1999 (478 observations). LMU is the logarithm of the stock market index (SEMDEX) and DLMU is its first difference.


(b) Explain what each of these four graphs illustrates.
Graph 1 (upper left):
This appears to be a random walk with a positive drift. It is not trend stationary: it shows extended rises and falls over time rather than white noise around a visible trend, as a trend-stationary series would. Figures 8 and 13 show this.

Graph 2 (upper right):
We are looking at the order of lags to see how quickly autocorrelation drops to zero, and in this example it takes quite a few. Thus we can see that the original time series is not stationary. Figure #20 is most similar.

Graph 3 (lower left):
This is the first difference of our time series, which gives a white-noise-looking graph, so the first difference is stationary. Figure #15 shows this effect.

Graph 4 (lower right):
This is the autocorrelation function of the first difference; as Figure #18 shows, it resembles that of an AR(1) with a coefficient of 0.8.
+++++++++++++++++
(c) What is the order of integration of a time series and why is it important to know.
Quote:
1.2.1 Integrated Series:
If x(t) is a nonstationary series and its first difference delta(x(t)) is stationary, then x(t) is said to be integrated of order 1, which is denoted by I(1). Notice that both the random walk and the random walk with drift, discussed in Sections 1.1.3 and 1.1.4 respectively, are I(1) series.
Quote:
Definition: Order of Integration
In general, if a series must be differenced a minimum of d times to generate a stationary series then it is said to be integrated of order d, denoted I(d).
For an economic time series, we usually only consider the possibilities of it being I(1) or I(2). If a series is I(1), it is also said to be Difference-Stationary (DS) or a Difference Stationary Process (DSP). A stationary series is integrated of order zero, I(0). The first difference of an I(0) series is also I(0).


Gujarati Book page 804-805; 21.6 Integrated Stochastic Processes:
Quote:
The random walk model is but a specific case of a more general class of stochastic processes known as integrated processes. Recall that the RWM without drift is nonstationary, but its first difference, as shown in {21.3.8}, is stationary. Therefore, we call the RWM without drift integrated of order 1, denoted as I(1). Similarly, if a time series has to be differenced twice (i.e., take the first difference of the first differences) to make it stationary, we call such a time series integrated of order 2. In general, if a (nonstationary) time series has to be differenced d times to make it stationary, that time series is said to be integrated of order d. A time series Y(t) integrated of order d is denoted Y(t) ~ I(d). If a time series Y(t) is stationary to begin with (i.e., it does not require any differencing), it is said to be integrated of order zero, denoted by Y(t) ~ I(0). Thus, we will use the terms "stationary time series" and "time series integrated of order zero" to mean the same thing.

Most economic time series are generally I(1); that is, they generally become stationary only after taking their first differences {and not more}.

++++++++++++
(d) Explain how you would test the hypothesis that a trended series is I(1).
This may actually be a two-step process.
The first step is to run the Dickey-Fuller regressions. From the time series plot and the autocorrelation function we check whether there is a trend; if there is, or if it is in question, we choose the DF regression with a linear trend (the lower portion of the test output). Placing more emphasis on the SBC and HQC than on the AIC, we pick the lag length with the highest (least negative) criterion value and read off the DF/ADF(X) test statistic for that line. If the calculated value is greater than the critical value provided with the test output (the 95% critical value for the ADF statistic), the null hypothesis that the series is I(1) is not rejected, and the series is treated as nonstationary.

So we have not rejected the null hypothesis that the series is I(1), but this does not rule out the possibility that the series is I(2) or greater, although that possibility is very unlikely.

For the second step, we obtain the first difference of the series, as in the example:
DLUS=LUSS-LUSS(-1)

Again we run the Dickey-Fuller regressions, this time looking at the first part of the table, since the model for the differenced series does not include an intercept. If the calculated DF statistic (the lowest line in the chart) is less than the 95% critical value indicated below the chart, the null hypothesis that the series is I(2) is rejected. Combining the two results (failing to reject I(1) for the levels, rejecting I(2) for the differences), we conclude that the series is I(1).
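A minimal sketch of the two-step procedure using statsmodels' adfuller (the series here is simulated as a placeholder; 'ct' and 'n' are the regression options for 'constant and trend' and 'no constant or trend'):

import numpy as np
from statsmodels.tsa.stattools import adfuller

# lmu: placeholder for the log of the stock index (here a simulated random walk)
rng = np.random.default_rng(0)
lmu = np.cumsum(0.01 + rng.normal(size=478))

# Step 1: ADF test on the level, with constant and linear trend
stat, pval = adfuller(lmu, regression='ct', autolag='AIC')[:2]
print(f"level: ADF = {stat:.2f}, p = {pval:.3f}")       # typically fails to reject a unit root

# Step 2: ADF test on the first difference, with no constant or trend
dlmu = np.diff(lmu)
stat, pval = adfuller(dlmu, regression='n', autolag='AIC')[:2]
print(f"difference: ADF = {stat:.2f}, p = {pval:.3f}")  # rejects a unit root, so the series is I(1)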

**********Insert the hypothesis tests from study book Pages 24-27???************

#7. (a) What is meant by spurious regression?
Quote:
1.1 Spurious Regression
Regression analysis may suggest a relationship exists between two or more variables when no causal relationship exists. This is called a nonsense or spurious regression. A classical example was provided by the statistician Yule in 1926. Using annual data for the period 1866-1911, he found a positive relationship between the death rate and the proportion of marriages in the Church of England. This implies that closing the church would result in immortality-clearly nonsense!
But it is quite possible that a third factor could be influencing both deaths and marriages. It also would have been interesting to see whether Granger causality tests could find in which direction, if any, the links run. If higher deaths lead to more marriages, it could just be a matter of people facing their own mortality.

From book pages 806-807:
Quote:
Yule showed that (spurious) correlation could persist in nonstationary time series even if the sample is very large. That there is something wrong in the preceding regression is suggested by the extremely low Durbin-Watson d value, which suggests very strong first-order autocorrelation. According to Granger and Newbold, an R^2> d is a good rule of thumb to suspect that the estimated regression is spurious,...

+++++++++++++
(b) What is the relationship between spurious regression and cointegration?
Section 1.2
To check whether there is a genuine relationship between two I(1) (or, in general, two I(d)) time series, we check whether they are "cointegrated"; if they are not, we would treat an apparent relationship between them as spurious (nonsense). So to guard against spurious regressions we test for cointegration.
Quote:
More formally, if x(t) and y(t) are both I(1) {or I(x)} and there exists a linear combination y(t) - (la(1) + la(2)*x(t)) which is I(0) then x and y are cointegrated.

Quote:
However, it has been shown that if x and y are I(1) and cointegrated, then the OLS estimator of the slope coefficient is superconsistent. What do we mean by this? With I(1) series and cointegration, the sampling distributions of the OLS estimators collapse to their true values at a faster rate than is the case with I(0) series and where all of the classical assumptions are valid. That is, the OLS estimators of the lambdas converge in probability to their true values faster in the nonstationary case than in the stationary case! The OLS estimators are consistent and asymptotically very efficient.

+++++++++++++
(c) Explain a test of cointegration based on OLS residuals.
In the error correction model, if H(0): b(3) = 0 cannot be rejected then there is no cointegration; the test uses the t statistic on b(3), the coefficient on the lagged difference between the variables (e.g. y(t-1) - x(t-1)).
Quote:
In general, a linear combination of two I(1) series is I(1). Therefore, if the series x and y are not cointegrated, the residuals e(t) will be I(1). If x and y are cointegrated then the e(t) will be stationary and we would expect them to behave like an I(0) process. Therefore, we have:
H(0): x and y are not cointegrated, the residuals are I(1); against
H(1): x and y are cointegrated, the residuals are I(0).


The two tests that answer this question are the Cointegrating Regression Dickey-Fuller (CRDF) test (Section 1.3.1.1) and the Cointegrating Regression Augmented Dickey-Fuller (CRADF) test (Section 1.3.1.2).
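A minimal sketch of the residual-based (Engle-Granger style) approach that the CRDF/CRADF tests implement; statsmodels' coint runs the cointegrating regression and the unit root test on its residuals in one call (placeholder data):

import numpy as np
from statsmodels.tsa.stattools import coint

# Placeholder data: x is I(1); y shares its stochastic trend, so they cointegrate
rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=500))
y = 2.0 + 0.5 * x + rng.normal(size=500)     # stationary deviation from 2 + 0.5*x

t_stat, pvalue, crit = coint(y, x)           # H0: no cointegration (residuals are I(1))
print(t_stat, pvalue, crit)
# A test statistic below the critical value (small p-value) rejects H0,
# i.e. the residuals look I(0) and y and x are judged cointegrated.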
++++++++++++
(d) Explain the nature of first-order error correction model (ECM).
Section 1.3.2 *******************
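A sketch of the first-order ECM in the notation used above (an assumption about the exact form in Section 1.3.2):

delta(y(t)) = b(1) + b(2)*delta(x(t)) + b(3)*(y(t-1) - la(1) - la(2)*x(t-1)) + e(t)

The bracketed term is last period's deviation from the long-run (cointegrating) relationship; b(3), expected to be negative, measures the fraction of that disequilibrium corrected each period, while the delta terms capture the short-run dynamics.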

#8. (a) Explain carefully how single equation econometric models can be used to generate forecasts.
It must first be decided whether the model is static or dynamic. Sections 1.2.2 and 1.2.3.

We first need a forecast of the exogenous variable, which we can derive from judgmental information or from an ARIMA model, but in any case "forecasts from an econometric model are always conditional forecasts -- conditional on the assumed values of for(X(T+1))".

Without an error term as in:
for(Y(T+1)) = est(a) + est(b)*for(X(T+1))

but if we want to add a "judgmental adjustment", which can be thought of as a forecast of the error term in T+1, it is as follows:
for(Y(T+1)) = est(a) + est(b)*for(X(T+1)) + for(e(T+1))

Quote:
There are several reasons for adding this judgmental adjustment:
* The pure forecast may not look plausible. Adding or subtracting a bit may make it look more plausible.
* The forecaster may have information about things that are likely to happen which have not been included in the model, for example strikes, or changes in government policy. {I would think one-time events or structural changes; no model could predict all possible outcomes.}
*The forecaster may believe that recent errors are going to persist and so set for(e(T+1)) equal to the value of the error in the last period or to an average of recent errors. {Judgment as to whether the bias will continue or not.}


Forecasting with a Single Equation Dynamic Econometric Model:
Quote:
The forecast for Y(T+1) is calculated in exactly the same way as with the static model

for(Y(T+1)) = est(a) + est(b)*for(X(T+1)) + lam*Y(T)

but when we forecast T+2 and subsequent periods, we use the previous period's forecast, as we did in the AR model:

for(Y(T+2)) = est(a) + est(b)*for(X(T+2)) + lam*for(Y(T+1))

This is known as a dynamic forecast because it uses the forecasts from the previous period on the right hand side.
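A minimal sketch of a one-step (static) forecast followed by dynamic multi-step forecasts, using the formulas above (the coefficient values and the assumed exogenous forecasts are placeholders):

# Placeholder estimates and assumed future values of the exogenous variable
a_hat, b_hat, lam_hat = 1.2, 0.8, 0.5
y_T = 10.0                        # last observed value of Y
x_forecast = [4.0, 4.2, 4.4]      # assumed for(X(T+1)), for(X(T+2)), for(X(T+3))

forecasts = []
y_prev = y_T
for x_f in x_forecast:
    y_f = a_hat + b_hat * x_f + lam_hat * y_prev   # for(Y) = a + b*for(X) + lam*Y(previous)
    forecasts.append(y_f)
    y_prev = y_f                  # dynamic: feed the forecast back in for the next period
print(forecasts)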

++++++++++++
(b) The following ARIMA(1,1,0) model has been estimated to period T.
delta(Y(t)) = m + est(p)*delta(Y(t-1)) + est(u(t))
How can this model be used to create forecasts?

Exercise 2.
To calculate the next period T+1 we would use the factors from our regression and calculate the following:
fore(delta(Y(T+1))) = m + est(p)*delta(Y(T))
This is possible since we have Y(T), the last number in our sequence (so delta(Y(T)) = Y(T) - Y(T-1) is known); since we do not include an error (judgmental) term, this is our simple estimate.

In terms of the level (the value of fore(Y(T+1))) we use the following:
fore(Y(T+1)) = Y(T) + fore(delta(Y(T+1)))
The last term was calculated above.
Then these numbers could be used for forecast T+2
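A minimal sketch that estimates delta(Y(t)) = m + p*delta(Y(t-1)) by OLS and builds the T+1 and T+2 forecasts exactly as described above (the series is a simulated placeholder; numpy only):

import numpy as np

# Simulated placeholder series with ARIMA(1,1,0)-type behaviour
rng = np.random.default_rng(7)
dy = np.empty(200)
dy[0] = 0.0
for t in range(1, 200):
    dy[t] = 0.3 + 0.4 * dy[t - 1] + rng.normal()
y = 100 + np.cumsum(dy)

# Estimate delta(Y(t)) = m + p*delta(Y(t-1)) by OLS
d = np.diff(y)
X = np.column_stack([np.ones(len(d) - 1), d[:-1]])
m_hat, p_hat = np.linalg.lstsq(X, d[1:], rcond=None)[0]

# Forecasts
d_T = d[-1]
d_f1 = m_hat + p_hat * d_T            # fore(delta(Y(T+1)))
y_f1 = y[-1] + d_f1                   # fore(Y(T+1)) = Y(T) + fore(delta(Y(T+1)))
d_f2 = m_hat + p_hat * d_f1           # use the forecast difference for T+2
y_f2 = y_f1 + d_f2
print(y_f1, y_f2)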
+++++++++++++++
(c) Explain any three measures of forecast accuracy and discuss their relative merits.
Section 1.4 in Unit 8.
Printout...
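The answer above defers to a printout; as a sketch of my own (not the unit's), three commonly used measures are the mean absolute error, the root mean squared error and the mean absolute percentage error:

import numpy as np

def forecast_accuracy(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    err = actual - forecast
    return {
        "MAE": np.mean(np.abs(err)),                    # average size of errors
        "RMSE": np.sqrt(np.mean(err ** 2)),             # penalises large errors more heavily
        "MAPE": np.mean(np.abs(err / actual)) * 100,    # scale-free, but breaks down near zero actuals
    }

print(forecast_accuracy([100, 102, 105], [98, 103, 107]))

MAE and RMSE are in the units of the variable (RMSE weighting large errors more), while MAPE is scale-free but unreliable when actual values are near zero.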

Labels:

C225-Macroeconomic Policy and Financial Markets-Sample Questions 1-8

C225-Macroeconomic Policy and Financial Markets-Sample Questions 1-8
Quote:
Question 1: Explain the ‘life cycle’ theory of saving and discuss its relevance for understanding aggregate saving in an economy.
From Unit 2-Saving and Finance
...
Keynes' idea for the macroeconomy was that consumption (and hence saving) is a function only of current income, a constant fraction of it plus a positive intercept. "Current Income"
C = A + b*Y
Consumption = autonomous spending + marginal propensity to consume (b < 1) * disposable income (after taxes).
Consumption and disposable income are highly correlated according to Figure 12.3, with US data 1980-2002.
Quote:
Keynes' concept of the m.p.c. is important in understanding demand in an economy because it contributes to a multiplier effect upon an initial increase in demand.

Keynesian Cross "Rule of Thumb":
Planned Expenditure = A + b*Y + G + I + X (in an open economy)
The PE line is less steep than the 45-degree line in the planned-spending versus output graph. An autonomous increase in A, I or G shifts the PE line up by the amount of the increase, and equilibrium output rises by the multiplier, 1/(1 - b). "The larger is the propensity to consume, the steeper is the line PE and the greater is the multiplier."
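For example (a sketch using the simple closed-economy multiplier): with b = 0.8 the multiplier is 1/(1 - 0.8) = 5, so a 10 unit autonomous increase in G raises equilibrium output by about 50 units.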


Milton Friedman's "Permanent Income" hypothesis holds that people base their consumption patterns on the trend of income rather than on single income periods. One aspect is that consumers tend to shop by grocery lists: they tend to purchase the same things period after period and keep the same bundle of consumption goods, not directly affected by individual paychecks. Thus they tend to change spending patterns slowly over time.

Franco Modigliani's "Life Cycle Income" approach predicts saving rates from where cohorts of similar age are in their life cycle.

Quote:
2.2.1 Definition of income
In the previous paragraph we wrote of individual consumption and saving
depending on ‘some measure of income’ denoted by y*. The powerful insights of modern theories of consumption and saving have been achieved after innovations were made in defining the appropriate concept of income. Saving in a particular period (current saving) may be related to three distinct measures of income:
• Current income: the original idea of the consumption function, developed
by Keynes in the 1930s, postulated a systematic relationship between
current consumption, current saving, and current income.
• Permanent income: a great theoretical innovation was achieved by Milton Friedman in his 1957 book A Theory of the Consumption Function where he demonstrated that a rational individual’s consumption would depend upon ‘permanent income’. A crude, highly simplified measure of an individual’s permanent income could be the trend of current income over time. When current income deviates positively from permanent income (or, let us say, deviates from trend) it is saved instead of causing an increase in consumption.
• Life cycle income: a similar important theoretical innovation was made by Franco Modigliani in the same decade as Friedman. Modigliani and his
collaborators proposed that an individual’s income follows a predictable
pattern over their life, a life cycle. Consumption depends on lifetime
income and the individual’s current saving depends on the stage of their
life cycle that they are at. In the simplest versions people save while they
are of working age and they dissave (consume their assets) in old age.

Analytically: "while consumption closely follows GDP fluctuations, it is not quite so volatile. Consumption tends to be a bit smoother than income." This tends to support Milton Friedman's theory.
...
Intertemporal Budget Constraint:
Quote:
The intertemporal budget constraint says that current consumption plus discounted future consumption must equal the sum of current income and future discounted income.
{Assuming no bequest motive on the part of individuals.}
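In symbols, for the two-period case (a sketch, with r the real interest rate): C(1) + C(2)/(1 + r) = Y(1) + Y(2)/(1 + r).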


Quote:
The m.p.c. out of shocks to permanent income is much greater than out of temporary shocks. In the case of permanent shocks to income, the implications of the forward-looking model and the Keynesian consumption function are very similar-consumers will spend most of any increase in current income.


Saving for a rainy day and the Importance of Uncertainty.
Quote:
One recent study suggests that as much as 46% of personal sector wealth is the result of higher savings aimed at insuring against future income uncertainty.
...
Introducing precautionary savings makes current income more important than future income.
...
Further, if consumers expect rising future income, then according to our "rainy day" story, savings will go negative as consumers avail themselves of the opportunity to smooth their consumption over time.
...
If borrowing constraints are important, it is current income rather than future income that matters.
Borrowing Constraints.
Quote:
If borrowing constraints are important, it is current income rather than future income that matters.
The Influence of Interest Rates:
Quote:
Under both Keynesian and forward-looking models, interest rates affect debtors and savers differently. In forward-looking models, the effects on savers are complicated by substitution and income effects.
Demographic Influence in the Life Cycle Model:
Quote:
Because savings is likely to vary substantially over people's lives, changes in the overall demographic structure of a country are likely to have a significant impact upon its aggregate rate of saving.
The Role of Wealth and Capital Gains:
Quote:
The dependence of the m.p.c. on consumers' expectations of future income, whether they perceive income increases to be temporary or permanent, the importance of uncertainty, and the variable impact of capital gains, combined with the issue of whether they have access to credit-all make the mpc a difficult number to pin down. This suggests that governments have limited ability to reliably influence consumption by shifting taxes or moving interest rates.


Quote:
#2. Explain and discuss the theory that firms’ investment is determined by the cost of capital.


This basically is along the same lines as the assignment one question:
Quote:
Examine the view that the cost of capital is the most important influence on the level of investment.
So I am not sure this needs any additional comments...
******Printout assignments*******

Quote:
#3. ‘To understand what determines the level of aggregate demand this year we must understand the principle that individuals make choices between consumption now and consumption in the future.’ Explain and discuss.
Basically a two-period graph with utility maximization as the determining factor in the allocation of income and saving across the two periods. I would also mention kinked budget constraint lines (where borrowing and lending rates differ) and the fact that some consumers may be financially constrained.

Quote:
#4. Central bankers can either be required to follow policy rules or can be given discretion over monetary policy. Discuss the merits of each approach.
This seems along the lines of Assignment Two question:
Quote:
Comment on the fiscal rules adopted by the UK government and assess whether the UK government has met these fiscal rules in recent years.
Although this is Fiscal Rules, there could be some overlap in explaining Monetary Rules.

Norway Fiscal Rules:
Economic survey of Norway 2007: Putting public finances on a sustainable path

Unit 4|Monetary Policy and the Central Bank.

****4.1 Eran
How does that match up with the speech by Milton Friedman?
Page 439, I. What Monetary Policy Cannot Do:
Quote:
(1) It cannot peg interest rates for more than very limited periods; (2) It cannot peg the rate of unemployment for more than very limited periods.

***Inflation Targeting-Bernanke
Quote:
...such as the adoption of money growth targets in the 1970s,...
The hallmark of inflation targeting is the announcement by the government, the central bank, or some combination of the two, that in the future the central bank will strive to hold inflation at or near some numerically specified level.
...
"Price stability" never in practice means literally zero inflation, however, but usually something closer to a 2 percent annual rate of price change, for reasons we discuss later.
...
In making inflation, a goal variable, the focus of monetary policy, the inflation-targeting strategy in most cases significantly reduces the role of formal intermediate targets, such as the exchange rate or money growth. {...Page 101}
...
Some countries, such as Canada, came to inflation targeting after unsuccessful attempts to use a money-targeting approach.
...
Notably, a number of economists have proposed that central banks should target the growth rate of nominal GDP rather than inflation (Taylor). Nominal GDP growth, which can be thought of as "velocity-corrected" money growth (that is, if velocity were constant, nominal GDP growth and money growth would be equal, by definition), has the advantage that it does put some weight on output as well as prices. ...


**** Use of Explicit target (91 economies)
***
Quote:
Because a security's price relates to the future stream of payments on it, new information that changes expectations has a powerful effect on financial markets.


"Bringing Back Regulation's Good Name"

Remarks by Governor Ben S. Bernanke At the Annual Washington Policy Conference of the National Association of Business Economists, Washington, D.C. March 25, 2003

Quote:
#5. ‘Ministers of Finance are in positions of great power because fiscal policy has a major effect on aggregate demand’. Discuss.


There is a question as to whether fiscal policy can have any effect on the economy, and maybe only a negative one. For part of this answer we need to go back to the IS-LM-BP/Mundell-Fleming model. Mundell-Fleming Model/Open Economy
Most importantly:
Quote:
Floating Exchange Rates and Perfect Capital Mobility

...with perfect capital mobility and floating exchange rate, fiscal policy is ineffective at influencing output.
...
The result is that fiscal policy is very effective at influencing output under fixed exchange rates, and monetary policy is very effective under floating exchange rates, with perfect capital mobility.
But also important to note:
Quote:
Fiscal Expansion Under Floating Exchange Rates

BP schedule is steeper than the LM schedule, which means that capital flows are relatively insensitive to interest-rate changes, while money demand is fairly elastic with respect to the interest rate.


Steps:
1. Expansionary fiscal policy shifts the IS schedule to the right.
2. Rise in domestic interest rate and domestic income.
3. Opposite effects on the BoP; expansion in real output-deterioration of CA, rise in interest rate improves capital account.
4. Since capital flows are relatively immobile, the current account effect dominates and the BoP moves into deficit.
5. Deficit leads to depreciation of the exchange rate.
6. BP shifts to the right.
7. LM shifts to the left.
8. IS shifts further to the right.
9. Thus higher interest rates, higher output but a depreciation of the exchange rate.

LM schedule is steeper than BP schedule.
Steps:
1. Expansionary fiscal policy shifts the IS schedule to the right.
2. Rise in interest rates (but less rise since of capital mobility is higher-BP flatter) and domestic income.
3. BoP moves into surplus since increased capital inflow more than offsets the deterioration in the CA due to increases in Income.
4. Appreciation of exchange rate moves the LM to the right.
5. IS shifts to the left.
6. Thus higher output, higher interest rate and exchange rate appreciation.

Hence a fiscal expansion can, according to the degree of international capital mobility, lead to either an exchange-rate depreciation or an exchange-rate appreciation.

***
Under a Classical Equilibrium Model,
1. Government spending financed from borrowing can only crowd out the same amount of investment, so that the increase in government spending has no independent effect on aggregate demand.
2. Tax cuts financed by issuing bonds results in the same: no aggregate demand increase.
3. But there are supply-side effects: a reduction in marginal income tax rates affects the aggregate supply of labor by shifting the labor supply curve out, since it raises after-tax real wages. This in turn increases (shifts to the right) the Aggregate Supply curve. Thus a reduction in marginal tax rates increases aggregate output at lower prices.

4. Monetarists view the IS curve as relatively flat (investment demand is highly sensitive to changes in the interest rate) and the LM curve as nearly vertical (the interest elasticity of money demand is small).
Quote:
"In the monetarist model such crowding out occurs almost dollar for dollar with an increase in government spending. On net, aggregate demand and, hence, income is increased very little by an increase in government spending."


From Macro-economics Second Edition-Richard T. Froyen.

Ricardian Equivalence:
Quote:
The theory of 'crowding out' usually posits that firms' investment expenditure declines to offset deficit financed increases in government spending. The theory of 'Ricardian equivalence argues that an increase in government spending financed by government borrowing causes an offsetting decline in households' consumption spending or, in other words, an increase in their saving rate to offset the government's dissaving. Both 'crowding out' and 'Ricardian equivalence' refer to hypotheses that a government policy leads to changes in private sector behaviour that offset it.
Basically it assumes that taxpayers are able to anticipate and calculate how increased spending or reduced taxes will have to be paid for, and that they compensate through changes in their own spending and saving behaviour.

But this does not "recognize the limits to and costs of information-processing and cognitive constraints that influence the expectations-formation process" ("Imperfect Knowledge, Inflation Expectations, and Monetary Policy", Orphanides and Williams, Federal Reserve System).
***
But I have not addressed Keynesian economics and the fiscal multiplier yet, and I am not sure that was mentioned in the unit this question seems to be derived from.

Quote:
6. Outline and discuss the role of expectations in macroeconomic policy.


Unit 6. Markets Reflect the Expected Future Today

Will Monetary Policy Become More of a Science? Frederic S. Mishkin Member Board of Governors of the Federal Reserve System September 2007

http://www.ifk-cfs.de/papers/Readings_5.pdf

Section 6.1.1: "The American Economic Review", "The Role of Monetary Policy", Milton Friedman, Volume 58, No 1, March 1968
Quote:
-as Irving Fisher pointed out decades ago. This price expectation effect is slow to develop and also slow to disappear. Fisher estimated that it took several decades for a full adjustment and more recent work is consistent with his estimates.


Reading from Text Book {Chapter 18}.
The "US government bond yield curve" shows time versus yield on a chart.
Quote:
The predictive ability of the forward spread is modest.
The yield curve appears to be slightly more informative in predicting inflation than in predicting interest rates.
...
Evidence shows that bond prices, specifically the shape of the yield curve, do provide useful information for predicting movements in output. For example, economists have found that when the yield curve has a shallow slope (or slopes down), recession is more likely. Under the expectations theory, a downward-sloping yield curve suggests that short-term interest rates are falling, which is likely if the economy goes into a recession.
...
The sensitivity of bond prices to expectations of what the central bank will do in the future gives monetary policy real teeth.
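As a small illustration of the expectations-theory reading of the yield curve quoted above, the one-year rate the market expects for next year can be backed out from current one- and two-year yields. The yields below are hypothetical:

# A small sketch of the expectations-theory logic behind reading the yield
# curve: the implied forward one-year rate is backed out from one- and
# two-year spot yields. The yields used here are made-up illustrations.
def implied_forward_1y(y1, y2):
    """One-year rate expected to prevail next year, under the pure
       expectations theory: (1 + y2)^2 = (1 + y1) * (1 + f)."""
    return (1 + y2) ** 2 / (1 + y1) - 1

for label, y1, y2 in [("upward-sloping", 0.03, 0.04),
                      ("inverted      ", 0.05, 0.04)]:
    f = implied_forward_1y(y1, y2)
    print(f"{label} curve: 1y={y1:.2%}, 2y={y2:.2%} -> implied future 1y ~ {f:.2%}")
# An inverted (downward-sloping) curve implies short rates are expected to
# fall, which is the pattern the text associates with a higher recession risk.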


"Why are TIIS Yields So High? The Case of the Missing Inflation-Risk Premium", Ben Craig
TIIS (Treasury Inflation Indexed Securities)
http://www.clevelandfed.org/research/com2003/0315.pdf
ECN 327: INTERMEDIATE MACROECONOMICS Professor Leonard Lardaro
Quote:
The difference between the TIIS yield and that of nominal Treasury securities should be a very good measure of expected inflation.
...
Surprisingly, the difference in yields between the two types of securities (their yield spread) for 10-year instruments is only about 1.90 percentage points for the 1997-2002 period.
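The calculation behind that figure is just the "breakeven" spread. A minimal sketch with hypothetical yield levels (only the roughly 1.90 point gap echoes the article):

# A quick sketch of the "breakeven inflation" calculation the article relies
# on: expected inflation ~ nominal Treasury yield minus TIIS (real) yield.
# The yield levels below are hypothetical; only the ~1.90 point spread echoes
# the figure quoted above.
nominal_10y = 0.0490   # hypothetical 10-year nominal Treasury yield
tiis_10y    = 0.0300   # hypothetical 10-year TIIS (inflation-indexed) yield

breakeven = nominal_10y - tiis_10y
print(f"approximate breakeven inflation: {breakeven:.2%}")
# ~1.90%: this looks low if investors also demanded a sizable inflation-risk
# premium on top of expected inflation, which seems to be the "missing
# premium" puzzle the Craig article is about.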


6.3.4 How quickly do inflation expectations change?
Quote:
In previous paragraphs we summarized an important principle of macroeconomics in these words: policy makers cannot maintain unemployment below the natural rate unless inflation expectations continue to be different from actual inflation, and that is generally thought to be impossible because it implies that employers and employees never learn from their experience of getting expectations wrong.


******Reference to "Disagreement about Inflation Expectations"*************
Quote:
The sticky-information model of Mankiw and Reis (2002) generates disagreement in expectations that is endogenous to the model and correlated with aggregate variables. In this model, costs of acquiring and processing information and of re-optimizing lead agents to update their information sets and expectations sporadically.
...
We follow Mankiw and Reis (2002) and assume that each period a fraction lambda of the population obtains new information about the state of the economy and recomputes optimal expectations based on this new information. Each person has the same probability of updating their information, regardless of how long it has been since the last update.
...
Our estimates are also consistent with the reasonable expectation that people in the general public update their information less frequently than professional economists. It is more surprising that the difference between the two is so small.
...
We believe we have established three facts about inflation expectations. First, not everyone has the same expectations. The amount of disagreement is substantial. Second, the amount of disagreement varies over time together with other economic aggregates. Third, a sticky-information model, according to which some people form expectations based on outdated information, seems capable of explaining many features of the observed evolution of both the central tendency and the dispersion of inflation expectations over the past fifty years.
Then not all of the population would be covered in any one year, or even over long periods of time. But I wonder if it is reasonable to assume that each individual has the same chance of updating. I would imagine that while updating might sometimes be random (happening to turn on a channel or read an article), for most people it would be idiosyncratic, with each person following their own pattern of updates.
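To see how the Mankiw-Reis updating scheme plays out, here is a small simulation under an assumed updating probability; lambda and the horizon are made-up values:

# A small simulation of the sticky-information updating scheme described in
# the quote: each period every agent updates with probability lambda_,
# independently of how long ago they last updated. lambda_ and the horizon
# are made-up values for illustration.
import random

random.seed(0)
lambda_ = 0.25          # fraction of the population updating each period
n_agents = 100_000
periods = 12

ages = [0] * n_agents   # periods since each agent last updated
for _ in range(periods):
    ages = [0 if random.random() < lambda_ else a + 1 for a in ages]

stale_share = sum(a >= 4 for a in ages) / n_agents
avg_age = sum(ages) / n_agents
print(f"average age of information: {avg_age:.2f} periods")
print(f"share relying on info at least 4 periods old: {stale_share:.1%}")
# Geometrically, a share (1 - lambda_)^k still holds information at least k
# periods old, so part of the population is always working from stale data.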

******Study econometric models of adaptation.************
**********"Imperfect Knowledge, Inflation Expectations, and Monetary Policy"*****
http://www.federalreserve.gov/pubs/feds/2002/200227/200227pap.pdf
******
How Economic News Moves Markets

Press Release How Economic News Moves Markets

What Type of Economic News Moves Markets?|Mark Thoma's Post

http://www.ny.frb.org/research/current_issues/ci14-6.pdf
******
Imperfect Knowledge, Inflation Expectations,
and Monetary Policy
Athanasios Orphanides
Board of Governors of the Federal Reserve System
and
John C. Williams
Federal Reserve Bank of San Francisco
May 2002

http://www.federalreserve.gov/pubs/feds/2002/200227/200227pap.pdf
Quote:
Missing from such models, as Benjamin Friedman points out, "is a clear outline of the way in which economic agents derive the knowledge which they then use to formulate expectations." To be sure, this does not reflect a criticism of the traditional use of the concept of "rationality" as reflecting the optimal use of information in the formation of expectations, taking into account an agent's objectives and resource constraints. The difficulty is that in Muth's (1961) original formulation, rational expectations are not optimizing in that sense. Thus, the issue is not that the "rational expectations" concept reflects too much rationality but rather that it imposes too little rationality in the expectation formation process. For example, as Sims (2001) has recently pointed out, optimal information processing subject to a finite cognitive capacity may result in fundamentally different processes for the formation of expectations than those implied by rational expectations. To acknowledge this terminological tension, Simon (1978) suggested that a less misleading term for Muth's concept would be "model consistent" expectations (p.2).

RATIONAL EXPECTATIONS - macroeconomics
John Muth From Wikipedia
Muth, John F. (1961), "Rational Expectations and the Theory of Price Movements," Econometrica, 29, 315-335, July.


Simon, Herbert A. (1978), "Rationality as Process and as Product of Thought," American Economic Review, 1-16, May.
Bounded rationality

Rational choice theory
Quote:
Benefits

Describing the decisions made by individuals as rational and utility maximizing may seem to be a tautological explanation of their behavior that provides very little new information. While there may be many reasons for a rational choice theory approach, two are important for the social sciences. First, assuming humans make decisions in a rational, rather than stochastic manner implies that their behavior can be modeled and thus predictions can be made about future actions. Second, the mathematical formality of rational choice theory models allows social scientists to derive results from their models that may have otherwise not been seen.



Stephen McNees: "At best, the adaptive expectations assumptions can be defended only as a 'working hypothesis' proxying for a more complex, perhaps changing expectations formulation mechanism." {Page 50}
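For reference, the "working hypothesis" McNees is describing is usually written as pi_e(t) = pi_e(t-1) + gamma*(pi(t-1) - pi_e(t-1)). A minimal sketch with a made-up inflation path and adjustment speed shows the systematic lag critics point to:

# A minimal sketch of the adaptive-expectations "working hypothesis" McNees
# refers to: expectations are revised by a fraction gamma of the latest
# forecast error. The inflation path and gamma are made-up illustrations.
def adaptive_expectations(actual, gamma=0.3, initial=0.02):
    """pi_e[t] = pi_e[t-1] + gamma * (pi[t-1] - pi_e[t-1])"""
    expected = [initial]
    for pi in actual[:-1]:
        expected.append(expected[-1] + gamma * (pi - expected[-1]))
    return expected

actual = [0.02, 0.03, 0.05, 0.06, 0.06, 0.05, 0.04]   # hypothetical inflation
for t, (pi, pe) in enumerate(zip(actual, adaptive_expectations(actual))):
    print(f"t={t}: actual={pi:.2%}  expected={pe:.2%}")
# Expectations lag behind a rising (or falling) inflation path, which is the
# systematic-error property that critics of adaptive expectations point to.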

Michael C Lovell, "Test of the Rational Expectations Hypothesis," American Economic Review, March 1966.

Quote:
Question #7: How do nominal exchange rates reflect countries' inflation rates? In your answer, please use both theoretical and empirical reasoning.

From class notes:
Quote:
The real exchange rate takes account of both the nominal exchange rate and the domestic price level, therefore a change of real exchange rate reflects both changes in the nominal exchange rate and changes in price level (inflation).


Covered Interest Parity?
****
Quote:
The nominal exchange rate is the rate at which currencies of two countries can be exchanged, whereas the real exchange rate is the ratio of what a specified amount of money will buy in one country compared with what it can buy in another.
...
Purchasing Power Parity (PPP), which says that identical bundles of goods should cost the same in different countries. This implies that the real exchange rate should be constant and equal to one and that changes in the nominal exchange rate are driven by inflation differences.


real exchange rate = nominal exchange rate * domestic price level / foreign price level
Quote:
Historic data shows that fluctuations in the real exchange rate track movements in nominal exchange rate quite closely.
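A tiny worked example of that definition, with made-up numbers and the nominal rate quoted as foreign currency per unit of domestic currency:

# A tiny worked example of the real-exchange-rate definition above. All
# numbers are hypothetical.
nominal = 0.80        # foreign currency per unit of domestic currency (made up)
p_domestic = 105.0    # domestic price index
p_foreign = 100.0     # foreign price index

real = nominal * p_domestic / p_foreign
print(f"real exchange rate: {real:.3f}")
# If domestic prices rise faster than foreign prices while the nominal rate
# is unchanged, the real exchange rate appreciates (domestic goods become
# relatively more expensive abroad).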

****
Law of One Price
Identical commodities should sell at the same price wherever they are sold, and the basis of the law is arbitrage.
dollar price of television in US = dollar/euro exchange rate * euro price of television in Barcelona

The law of one price fails for several real-life factors:
1. Transportation Costs
2. Border Effects-Technical Requirements (different voltage)
3. Pricing to Market (national supply and demand factors).

Quote:
Firms set prices based on local conditions and prices set by rivals. These prices tend to be sticky, but nominal exchange rates are very volatile. As a result, nominal exchange rate changes feed into real exchange rate changes and the law of one price fails to hold.

4. I am not sure how it fits in here, but "transaction costs" must be considered somewhere, mainly in connection with how efficient markets are and the degree of monopolistic competition in various countries. I guess this could fall into categories 2 and 3 to a certain degree.
Quote:
The law of one price is a key part of our first theory of real exchange rate determination-Purchasing Power Parity (PPP). The law of one price refers to particular commodities. PPP applies the law of one price to ALL commodities-whether they are tradeables or not.
...
In other words, PPP implies that currencies depreciate if they have higher inflation than other countries and appreciate if they have lower inflation.
...
PPP appears to be a useful model for explaining long-run data.
The above remarks describe relative PPP rather than absolute PPP.
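A small numeric illustration of relative PPP, with hypothetical inflation rates and starting exchange rate:

# A small numeric illustration of relative PPP as stated above: the change
# in the nominal exchange rate roughly offsets the inflation differential.
# Inflation rates and the starting exchange rate are hypothetical.
pi_domestic = 0.06    # domestic inflation
pi_foreign = 0.02     # foreign inflation
s0 = 1.50             # foreign currency per unit of domestic currency

# Exact relative-PPP prediction keeps the real exchange rate constant:
s1 = s0 * (1 + pi_foreign) / (1 + pi_domestic)
change = s1 / s0 - 1
print(f"predicted nominal exchange rate next year: {s1:.4f}")
print(f"predicted change: {change:.2%} (approx. pi_foreign - pi_domestic = "
      f"{pi_foreign - pi_domestic:.2%})")
# The higher-inflation country's currency depreciates by roughly the
# inflation differential, as the quote above says.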

Balassa-Samuelson effect.
Quote:
Nontradeable commodities are a key reason why the law of one price does not hold. Countries with high productivity in their tradeable sector tend to have high prices for nontradeables so that rich countries are more expensive than poor countries.

***
Nominal and real exchange rates are both volatile and their general fluctuations show a similar pattern. Two potential explanations:
First: volatile economic fundamentals lead to a volatile real exchange rate which, in turn, produces a volatile nominal exchange rate.
Second: because prices in a country are relatively sticky, changes in the nominal exchange rate feed through into changes in the real exchange rate.

Be sure to review Optional reading (Page 14/Unit 7-"Living with Flexible Exchange Rates:...").

Quote:
Question 8:
What do you understand by the concept 'uncovered interest parity'? How relevant is the concept for analyzing countries' international capital flows, exchange rates, and macroeconomic policy?


I guess it might be best to think about CIP (covered interest parity) first.
Quote:
Arbitrageurs can also make a profit in the forward exchange market by using Covered Interest Parity (CIP), via the following formula:
F = S + S*(r(f) - r)/(1 + r), which simplifies to F = S*(1 + r(f))/(1 + r), where F = forward rate, S = spot rate, r(f) = the 1-year foreign interest rate, and r = the 1-year domestic interest rate (with the exchange rate quoted as foreign currency per unit of domestic currency).
If there is a difference between the calculated forward rate and the market forward rate, then arbitrageurs will transfer money until the rates match. "Covered interest parity is achieved as a result of arbitrage between the spot and the forward markets."
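A short sketch of that calculation, using the yen/dollar convention from the workbook discussion and hypothetical rates:

# A short sketch of the CIP formula quoted above, with S and F in yen per
# dollar and the dollar as the home currency. All rates are hypothetical.
def cip_forward(spot, r_domestic, r_foreign):
    """F = S + S*(r_f - r)/(1 + r), equivalently S*(1 + r_f)/(1 + r)."""
    return spot * (1 + r_foreign) / (1 + r_domestic)

S = 110.0        # spot, yen per dollar
r_usd = 0.05     # 1-year dollar interest rate (home)
r_jpy = 0.01     # 1-year yen interest rate (foreign)

F = cip_forward(S, r_usd, r_jpy)
print(f"CIP forward rate: {F:.2f} yen per dollar")
# F < S here, i.e. the high-interest currency (the dollar) trades at a
# forward discount. If the market forward differed from this value,
# arbitrageurs could borrow in one currency, lend in the other, and lock in
# a riskless profit until the gap closed.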

A currency is said to be at a forward premium if the forward exchange-rate quotation for that currency represents an appreciation of that currency compared to the spot quotation. Or: "If F(0)>S(0), then the dollar is said to have a forward premium - the forward rate includes a dollar appreciation."
IRP (Interest Rate Parity), pages 9-12, workbook Unit 2.
The choice an investor makes is either to remain in the home currency and receive the going interest rate, OR to switch to the foreign currency, earn interest over the period to time 1, and then switch back to the home currency. Both paths should yield the same result, aside from the caveat that "CIP only holds exactly if there is absolutely no risk to either the Yen or Dollar side of the transactions"; that is, there is no difference in the risks of holding deposits in different banks.
Quote:
Arbitrage by investors will lead to the forward premium between two currencies being equal to the interest rate differential. High interest rate currencies are priced at a forward discount.
As money is moved from low-interest-rate countries to high-interest-rate countries, the forward rate for the high-interest currency will decrease and its interest rate will decline until CIP holds true.
Quote:
Does the CIP hold? The answer is a resounding "yes".

*****
Uncovered Interest Parity (UIP):
Under CIP the returns were compared as:
Y(1 + i(J)) to Y(1 + i(US))F(0)/S(0)
Now the comparison is the same, but the forward rate is replaced by the expected future spot rate:
Y(1 + i(J)) to Y(1 + i(US))S^e(1)/S(0)
Quote:
Under UIP, investors rearrange their portfolio until the return on the yen account is equal to the expected return on the dollar account. In the case of UIP, we have:
expected appreciation of the dollar = (S^e(1) - S(0))/S(0)
= Japanese interest rate - US interest rate
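A minimal sketch of that condition with hypothetical numbers (same yen-per-dollar convention as above):

# A minimal sketch of the UIP condition just quoted, with S in yen per
# dollar and hypothetical interest rates.
i_jpy = 0.010          # Japanese 1-year interest rate
i_usd = 0.045          # US 1-year interest rate
S0 = 110.0             # current spot, yen per dollar

# UIP (exact form): 1 + i_jpy = (1 + i_usd) * S_expected / S0
S_expected = S0 * (1 + i_jpy) / (1 + i_usd)
expected_appreciation = S_expected / S0 - 1

print(f"expected future spot: {S_expected:.2f} yen per dollar")
print(f"expected dollar appreciation: {expected_appreciation:.2%} "
      f"(approx. i_jpy - i_usd = {i_jpy - i_usd:.2%})")
# The higher-interest currency (the dollar) is expected to DEPRECIATE just
# enough to equalize expected yen returns, which is exactly the prediction
# the later quotes say fails empirically (high-interest currencies tend to
# appreciate instead).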

****
Covered and Uncovered Interest Parities

Category: Uncovered interest parity

The Exchange Risk Premium, Uncovered Interest Parity, and the Treatment of Exchange Rates in Multicountry Macroeconomic Models
****
Quote:
UIP is correct in predicting how exchange rates respond immediately to interest rates and monetary policy, but it is wrong in forecasting the exchange rate forward. Instead, high interest rate currencies tend to appreciate.
Under UIP, when interest rates increase in the USA, the dollar immediately strengthens, but in future periods the currency should then depreciate on a sliding scale, much as the J-curve works.
Quote:
In response to an unexpected increase in interest rates, UIP predicts that currencies should appreciate immediately so as to provide room for a larger future depreciation. Expectations of future currency strength also lead to an immediate appreciation. If overseas interest rates increase, then the currency should depreciate.


Quote:
UIP says that predictable changes in the exchange rate are due to interest rate differentials but that changes in expectations will lead to substantial unpredictable fluctuations in exchange rates.


Quote:
...a permanent change in monetary policy will exert a substantial impact on the current exchange rate and create volatility in the current exchange rates. Therefore, UIP implies that rational, forward-looking investors should generate a highly volatile exchange rate if changes in monetary policy are highly persistent.

******
Introducing Risk Averse Investors
This is an additional effort to make UIP fit the data better by loosening the assumption that investors ignore risk.
After adding the risk premium we have:
US Return = US interest rate + expected dollar appreciation
= Japanese interest rate + risk premium
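A minimal numeric sketch of this risk-adjusted condition, with a made-up premium:

# A minimal numeric sketch of the risk-adjusted condition above, again with
# hypothetical rates. The risk premium is the extra expected yen return
# investors demand for holding the (riskier) dollar account.
i_usd = 0.045          # US interest rate
i_jpy = 0.010          # Japanese interest rate
risk_premium = 0.020   # made-up premium demanded on dollar assets

# i_usd + expected dollar appreciation = i_jpy + risk_premium
expected_appreciation = i_jpy + risk_premium - i_usd
print(f"required expected dollar appreciation: {expected_appreciation:.2%}")
# Compared with plain UIP (-3.5% with these rates), the required depreciation
# is smaller; with a large enough premium it could even turn into an expected
# appreciation, and shifts in the premium itself become an extra source of
# exchange-rate movements.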

Of course, adding a risk premium can help explain capital flight. The risk is not just from nationalization but also from policies that could affect returns on assets, including changes in tax laws and capital restrictions.
"Introducing risk premiums produces a more volatile exchange rate," since expectations could be altered in even more dramatic ways.
Quote:
A risk premium adds an additional source of exchange rate volatility and can potentially explain why the currencies of countries with high interest rates tend to appreciate over time.


Quote:
Introducing risk premiums into UIP can help explain more of the volatility in exchange rates, but ultimately UIP fails to successfully account for short-run fluctuations in exchange rates. While it correctly predicts how exchange rates react to interest rate changes, there seem to be many other factors driving exchange rates that are not reflected in UIP. Over a six-month horizon there seems to be little role for macroeconomics in predicting exchange rates.

*****
Ultimately, since monetary policy affects interest rates, and these in turn affect exchange rates (at least in the short term), UIP is an important consideration for any macroeconomic policy, even if it does not hold over longer periods.

*** IS-LM-BP ***
Assuming a flexible exchange rate with a high but not complete degree of capital mobility: as the monetary authorities reduce the money supply, the LM curve shifts to the left. This creates an immediate increase in the exchange rate (actually an appreciation of the currency, so the terms of trade improve), which will shift the schedule horizontally to the left. Depending on the slope of the BP curve, interest rates could rise or fall, but in either case the BP will reinforce the effects of tighter monetary policy through reduced income. Thus these secondary effects could make it hard for monetary officials to judge the exact effects of their policy.
