Advanced Macroeconomics: An Easy Guide. Appendix A: A very brief mathematical appendix


A very brief mathematical appendix

Throughout this book, we make use of a few key mathematical tools that allow us to handle the problems that arise when dealing with issues of policy. In this appendix, we go over the key solution techniques that we use, as a simple user guide. Note that our focus here is not on rigour, but rather on intuition, which may lead to some uncomfortable moments for those more familiar with the formalism. For a more thorough (yet still accessible) treatment, the reader can consult a number of textbooks, such as Acemoglu (2009) or Dixit and Pindyck (1994). We now go over three key areas: (i) dynamic optimisation in continuous time; (ii) dynamic optimisation in discrete time; and (iii) differential equations.

Dynamic optimisation in continuous time

We have described policy problems in discrete and continuous time at different points, depending on convenience. In continuous time, we can solve these problems using the conditions from optimal control theory. What kinds of problems fit the optimal control framework?

The idea is that you choose a certain path for a choice variable (the control variable) so as to maximise the total value over time of an objective function affected by that variable. This would be relatively easy, and well within the realm of standard constrained optimisation, if whatever value you chose for the control variable at a certain moment in time had no implication for the values it may take at the next moment. What makes things trickier, and more interesting, is when that is not the case: that is to say, when what you do now affects what your options are for tomorrow or, in continuous time, for the next moment. That is what is captured by the state variable: a variable that contains the information from all the previous history of the dynamic system. The evolution of the state variable is described by a dynamic equation, the equation of motion.

(How to cite this book chapter: Campante, F., Sturzenegger, F. and Velasco, A. 2021. Advanced Macroeconomics: An Easy Guide. Appendix A: A very brief mathematical appendix. London: LSE Press.)

The simplest way to see all of this is to look at a simple example. Consider a consumer who chooses the path of their consumption c_t so as to solve

\max_{\{c_t\}} \int_0^\infty u(c_t) e^{-\rho t}\, dt,

subject to the budget constraint

\dot{a}_t = y_t + r a_t - c_t,

and to an initial level of assets a_0. In words, the consumer chooses the path for their consumption so as to maximise total utility over their lifetime, and whatever income (labour plus interest on assets) they do not consume is accumulated as assets. The control variable is c_t, that is, what the consumer chooses in order to maximise utility; and the state variable is a_t, that is, what links one instant to the next, as described by the equation of motion. The maximum principle can be applied as a series of steps.

Step 1: Set up the Hamiltonian function

The Hamiltonian is simply what is inside the integral (the instantaneous value, at time t, of the function you are trying to maximise over time) plus the right-hand side of the equation of motion multiplied by a function called the co-state variable, which we will denote as \lambda_t. In our example, we can write

H = u(c_t) + \lambda_t (y_t + r a_t - c_t).

This is the current-value version of the Hamiltonian, because utility at time t is being evaluated at its current value, that is, without the time discounting represented by the e^{-\rho t} term. The present-value version, where we would add that discounting term and write u(c_t) e^{-\rho t}, works just as well, with some minor adaptation in the conditions we will lay out below. We believe the current-value version lends itself to a more natural economic interpretation.
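For concreteness, here are the two versions side by side, together with the co-state conditions they imply (a sketch; the notation \mu_t for the present-value co-state variable is ours):

Current value: H = u(c_t) + \lambda_t (y_t + r a_t - c_t), with co-state condition \dot{\lambda}_t = \rho \lambda_t - \partial H / \partial a_t.

Present value: \tilde{H} = u(c_t) e^{-\rho t} + \mu_t (y_t + r a_t - c_t), with co-state condition \dot{\mu}_t = -\partial \tilde{H} / \partial a_t.

Setting \mu_t = \lambda_t e^{-\rho t} and differentiating shows that the two sets of conditions are equivalent.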

This looks a lot like the Lagrangian function from static optimisation, right? Well, the co-state variable \lambda_t has a natural economic interpretation that is analogous to the familiar Lagrange multiplier. It is the marginal benefit of a marginal addition to the stock of the state variable, that is, of relaxing the constraint. In economic parlance, it is the shadow value of the state variable. The key idea behind the maximum principle is that the optimal trajectory of control, state, and co-state variables must maximise the Hamiltonian function. But what are the conditions for that?

Step 2: Maximise the Hamiltonian with respect to the control variable(s)

There is no integral in the Hamiltonian; it is just a function evaluated at a point in time. So this is just like static optimisation! The intuition is pretty clear: if you were not maximising the Hamiltonian at each instant, considered separately, you probably could be doing better, right?

For our purposes, this will boil down to taking the first-order condition with respect to the control variable c_t. In our example, we would write

u'(c_t) = \lambda_t.

This has your usual interpretation: the marginal utility gain of increasing consumption has to be equal to the marginal cost, which is not adding to the stock of your assets, and is thus given by the co-state variable \lambda_t. Importantly, you could have more than one control variable in a given problem. What do you do then?

Well, as in static optimisation, you take first-order conditions for each of them.

Step 3: Figure out the optimal path of the co-state variable(s)

The Hamiltonian is static, but the problem is dynamic. This means that, at any given instant, you must make sure that whatever you leave to the next instant (your state variable) is consistent with optimisation. This is a key insight. Intuitively, maximising an intertemporal problem (i.e. choosing the right value for your control variable at every instant in a continuum) can be broken down into a sequence of choices between the current instant and the (immediate) next one. Fair enough, but how can we guarantee that?

The maximum principle tells you it is about satisfying the co-state equations. These are basically first-order conditions taken with respect to the state variable(s). The optimisation here is a bit different from what you are used to, as you will not set the derivatives equal to zero. Instead, you set them equal to \rho \lambda_t - \dot{\lambda}_t. In our example,

\frac{\partial H}{\partial a_t} = r \lambda_t = \rho \lambda_t - \dot{\lambda}_t.

It seems like this does not have much of an economic intuition, but think again. Consider your state variable as a (financial) asset. (It is literally true in our example, but the logic holds more broadly.) The condition can be rewritten as

\frac{\partial H}{\partial a_t} + \dot{\lambda}_t = \rho \lambda_t.

The left-hand side is the total marginal return of holding this asset for an additional instant: the dividend from the flow of utility coming from it (measured by the marginal impact on the Hamiltonian, \partial H / \partial a_t), plus the capital gain coming from the change in its valuation in utility terms (\dot{\lambda}_t). The right-hand side is the required marginal payoff for it to make sense for you to hold the asset this additional instant: it has to compensate for your discount rate. If the left-hand side is greater (resp. smaller) than the right-hand side, you should hold more (resp. less) of that asset. You can only be at an optimum if there is equality. In other words, you can think about this as an asset-pricing condition that is necessary for dynamic optimisation. What if there were more than one state variable?

Well, then you will have more than one equation of motion, and you will need to impose one such asset-pricing dynamic condition for each of them. The first-order condition u'(c_t) = \lambda_t and the co-state equation \dot{\lambda}_t = (\rho - r) \lambda_t, put together, yield a differential equation that contains the information from both the static and the dynamic requirements for optimality. As we will see in more detail later in this appendix, a differential equation allows for an infinite number of solutions, indexed here by two constants.
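To see what that differential equation looks like, take, for illustration, the standard CRRA utility function u(c_t) = c_t^{1-\theta} / (1-\theta) (an assumption we make here for concreteness). The first-order condition then reads c_t^{-\theta} = \lambda_t; taking logs and differentiating with respect to time gives -\theta \dot{c}_t / c_t = \dot{\lambda}_t / \lambda_t = \rho - r, so that

\frac{\dot{c}_t}{c_t} = \frac{r - \rho}{\theta},

which is the familiar consumption Euler equation: consumption grows whenever the interest rate exceeds the discount rate.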

How do we pin down which solution is the true optimum?

Step 4: Impose the transversality condition

We need two conditions to pin down the two constants that are left free by the differential equation. One of them is the initial condition: we know the state variable starts off at a value that is given at t = 0 (in our example, at a_0). The second is a terminal condition: how much should we have left at the end, to guarantee that we have indeed maximised our objective function?

This is what the transversality condition gives us; in the example,

\lim_{t \to \infty} a_t \lambda_t e^{-\rho t} = 0.

Intuitively, as long as our state variable has any positive value in terms of generating utility (and that shadow value is given by the co-state variable \lambda_t), you should not be left with any of it at the end. After all, you could have consumed it and generated additional utility! These conditions fully characterise the solution for any dynamic optimisation problem we will have encountered in this book.

Dynamic optimisation in discrete time

In dynamic problems, it is sometimes just more convenient to model time as evolving in discrete intervals, as opposed to continuously. This doesn't make a difference for the economic intuition, as we will see, but it does require different techniques. These techniques come from dynamic programming theory. The key (and truly remarkable) insight behind these techniques is to recognise the iterative (or recursive) nature of a lot of dynamic problems. Their structure essentially repeats itself over and over again through time. This means that, not coincidentally echoing the lessons from optimal control theory in a discrete-time context, you can break

down such problems into a sequence of smaller ones. Optimising through time is achieved by making sure that you choose properly between today and tomorrow, while accounting for the fact that tomorrow you will face a similar choice again. This insight is beautifully encapsulated in the Bellman equation. To see it in action, let's consider the discrete-time version of the consumer problem we have seen in the previous section:

\max \sum_{t=0}^{\infty} \beta^t u(c_t),

subject to the equation of motion

a_{t+1} = (1 + r) a_t + y_t - c_t.

The information in this recursive problem can be summarised using the concept of the value function V(a_t): it is the value of total utility at a given point in time (as a function of the state variable), conditional on optimal decisions being made over the entire future path. In other words, we can rewrite the problem as

V(a_t) = \max_{c_t} \{ u(c_t) + \beta V(a_{t+1}) \}.

Here is the intuition: choosing optimal consumption means maximising current utility, while also leaving the amount of the state variable that lets you make the optimal choice at t+1. Picking today's consumption is a much simpler task than choosing an entire path. If you do it right, it leads you to the same solution. That is the beauty of dynamic programming. That seems reasonable enough, but how do you solve it?

Step 1: Maximise the value function with respect to the control variable

Well, the first thing is, naturally enough, to take the first-order condition with respect to the control variable. In our example, where the control variable is c_t, we get

u'(c_t) = \beta V'(a_{t+1}),

where the right-hand side is obtained using the fact that, as per the equation of motion, a_{t+1} is a function of c_t (one-for-one: each unit consumed today is a unit not saved). The intuition is the same as ever: the optimal choice between today and tomorrow equates the marginal gain of additional consumption to the marginal cost of leaving one fewer marginal unit of assets for tomorrow. The latter is that it detracts from the future choice possibilities encapsulated in the value function. Just as with optimal control theory, this captures the fact that optimality requires static optimisation: otherwise, you could have been doing better!

Step 2: Figure out the optimal path for the state variable

Again, by analogy with the intuition from optimal control, we also need to make sure that the choice we are making today leaves the right amount of the state variable for an optimal decision tomorrow. This means differentiating the value function. To see how that works, let's look at our example again. Since our problem has a nice, continuously differentiable value function, we can differentiate the value function with respect to the state variable:

V'(a_t) = (1 + r) u'(c_t).

It is easy enough to see why we get this marginal utility term: consumption is a function of the state variable, as per the equation of motion. But shouldn't there be a term on V'(a_{t+1}) somewhere?

After all, a_{t+1} is also a function of a_t. The trick is that we used an envelope theorem. Intuitively, as with any envelope theorem, if you are maximising the value function, you should set the path of the state variable such that you cannot get additional utility from a marginal change, at the optimum. This means that the term on V'(a_{t+1}) goes to zero. In short, at the optimum, the only impact of an additional marginal unit of your state variable is the additional utility brought by converting it into consumption; the impact on the future value function is, at the optimum, equal to zero.
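To make the recursive logic concrete, here is a minimal value function iteration sketch for this savings problem. It is only an illustration: log utility, the parameter values, and the asset grid are our assumptions, not the text's.

    import numpy as np

    # Value function iteration for V(a) = max_c { u(c) + beta * V(a') },
    # where a' = (1 + r) * a + y - c and u(c) = log(c).
    beta, r, y = 0.95, 0.04, 1.0                 # illustrative parameters
    grid = np.linspace(0.1, 10.0, 400)           # grid of asset values a
    V = np.zeros_like(grid)                      # initial guess for V

    # c[i, j]: consumption if assets are grid[i] today and grid[j] tomorrow
    c = (1 + r) * grid[:, None] + y - grid[None, :]
    u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

    for _ in range(2000):                        # iterate the Bellman operator
        V_new = (u + beta * V[None, :]).max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:     # stop at (near) fixed point
            break
        V = V_new

    a_next = grid[(u + beta * V[None, :]).argmax(axis=1)]  # policy for a'

Iterating the Bellman operator in this way converges because discounting makes it a contraction mapping; the resulting policy function can then be used to trace out the optimal path from any initial a_0.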

Step 3: Putting it all together

You will have noticed that, once you know V'(a_t), you also know V'(a_{t+1}): just stick a t+1 wherever you see a t. We can put the first-order condition and the envelope condition together and obtain

u'(c_t) = \beta (1 + r) u'(c_{t+1}),

that is to say, our familiar Euler equation. That will do it for our purposes here, though you should keep in mind that, to pin down the full path of your control variable, you still need the initial and terminal conditions. In our example, it is pretty obvious that the consumer would choose to consume all of their assets in the last period of their life, which is the discrete-time counterpart of a transversality condition.
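For instance, with log utility u(c_t) = \ln c_t (an illustrative assumption), the Euler equation becomes 1/c_t = \beta (1 + r) / c_{t+1}, or

\frac{c_{t+1}}{c_t} = \beta (1 + r),

so consumption grows at a constant gross rate: it rises over time if \beta (1 + r) > 1 and falls if \beta (1 + r) < 1.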

Differential equations

Integrating factors

Typically, the solution to a dynamic optimisation problem will be a system of differential (or difference) equations, describing the evolution of the key variables of interest over time. In the main text of the book, we have introduced phase diagrams as a tool for analysing the behaviour of such systems. Oftentimes, though, we were interested in an analytical solution for a variable whose behaviour is described by a dynamic equation. In these cases, we used a method of solution involving integrating factors, which we will elaborate on now. As usual, this is easiest to motivate in the context of an example. Let's take the consumer problem discussed above, and in particular the equation of motion, which we rewrite here, for convenience, in slightly different form:

\dot{a}_t - r a_t = y_t - c_t.

This illustrates a kind of differential equation that can generally be written as

\dot{z}_t + \alpha_t z_t = b_t,

where \alpha_t and b_t are functions of t only. This is the kind that can be solved using integrating factors. The integrating factor is defined as e^{A(t)}, where A(t) = \int \alpha_t\, dt. The trick is to multiply both sides of the equation by the integrating factor, which yields

\dot{z}_t e^{A(t)} + \alpha_t z_t e^{A(t)} = b_t e^{A(t)}.

You will notice that the left-hand side of this equation is exactly what you get from differentiating z_t e^{A(t)} with respect to t, using the product rule of differentiation. In other words, we can integrate both sides, using the Fundamental Theorem of Calculus, and obtain

z_t e^{A(t)} = \int b_t e^{A(t)}\, dt + C.

This allows us to recover a general solution for z_t, up to the constant C (recall that any constant term would not affect the derivative, so any solution has to be pinned down up to a constant).

Let's look at that in the context of our consumer problem. You can see that z_t is a_t, \alpha_t is -r, and b_t is y_t - c_t. So the integrating factor is e^{-rt}. Multiplying both sides by that integrating factor yields

\frac{d}{dt}\left( a_t e^{-rt} \right) = (y_t - c_t) e^{-rt}.

Integrating both sides allows us to recover a general solution for a_t:

a_t e^{-rt} = C + \int_0^t (y_s - c_s) e^{-rs}\, ds,

where we are using s to denote each instant over which we are integrating, up to t. How do we pin down the constant C? This equation must hold for t = 0, which entails that C = a_0. In other words,

a_t = a_0 e^{rt} + \int_0^t e^{r(t-s)} (y_s - c_s)\, ds.

This tells us that the consumer's assets, at any given point in time, equal the initial value of assets (compounded by the interest rate) plus the compound sum of their savings over time.
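As a quick sanity check of this closed form, here is a sketch that integrates the budget constraint numerically and compares the result with the formula above (the constant paths for y_t and c_t and all parameter values are our illustrative assumptions):

    import numpy as np

    # Integrate a' = y + r*a - c numerically and compare with the closed form
    # a_T = a_0*exp(r*T) + integral_0^T exp(r*(T-s))*(y - c) ds.
    r, y, c, a0 = 0.04, 1.0, 0.8, 2.0      # illustrative constants
    dt, T = 1e-4, 10.0

    a = a0
    for _ in range(int(T / dt)):           # simple explicit Euler steps
        a += (y + r * a - c) * dt

    # With constant y and c the integral is (y - c) * (exp(r*T) - 1) / r
    closed_form = a0 * np.exp(r * T) + (y - c) * (np.exp(r * T) - 1.0) / r
    print(a, closed_form)                  # the two should nearly coincide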

The differential equations we deal with in this book can all be solved using this method, which is quite general. Depending on the problem, this solution then allows us to figure out the optimal level of consumption, or the path of the current account, or whatever else interests us in the case at hand. On other occasions, we do not need an analytical solution, but want a way of figuring out the dynamic properties of a system of differential equations. Here is where linear systems are very convenient, because we can use the tools of linear algebra to come to the rescue. This helps explain why we often focus on linearised systems (around a steady state, typically). Note that this focus entails important consequences: a linear approximation is good enough when you are close to the point around which you are doing the approximation. If a shock gets you far from that point, then maybe the approximation is not going to work that well as a description of your economy! Let's talk about the tools, and especially one concept that we mention quite a bit in the book. Consider a system of differential equations like the ones we have studied, describing the solution of the basic AK model. In its simplest version (with log utility), we can write it as

\dot{c}_t = (A - \rho) c_t,
\dot{k}_t = A k_t - c_t.

The nice thing is that this system is already linear in (c_t, k_t), meaning that we can write it in matrix form:

\begin{pmatrix} \dot{c}_t \\ \dot{k}_t \end{pmatrix} = \begin{pmatrix} A - \rho & 0 \\ -1 & A \end{pmatrix} \begin{pmatrix} c_t \\ k_t \end{pmatrix}.

Denoting the vector (c_t, k_t)' as x_t, we can write this more concisely as \dot{x}_t = \Omega x_t.

Here is where the trick comes in: solutions to a system like this can be written as

x_t = b_1 v_1 e^{\lambda_1 t} + b_2 v_2 e^{\lambda_2 t},

where \lambda_1 and \lambda_2 are the eigenvalues of the matrix \Omega, v_1 and v_2 are the corresponding eigenvectors, and b_1 and b_2 are the standard constants of integration, of which there are two because this is a system of two differential equations. This means that the dynamic behaviour of the system will be closely tied to the eigenvalues.
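Where does this expression come from? Guess a solution of the form x_t = v e^{\lambda t} for some constant vector v. Then \dot{x}_t = \lambda v e^{\lambda t}, and the system \dot{x}_t = \Omega x_t requires \Omega v = \lambda v: precisely the statement that \lambda is an eigenvalue of \Omega and v a corresponding eigenvector. Since the system is linear, any combination of two such solutions is also a solution, which yields the general form above.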

Why so? Imagine that both \lambda_1, \lambda_2 > 0. It is easy to see that the solution will behave explosively: as t grows, e^{\lambda_1 t} and e^{\lambda_2 t} will grow without bound, and so will x_t, in absolute value, for any (non-zero) constants b_1 and b_2. Such a system will not converge. What if \lambda_1, \lambda_2 < 0?

Then, eventually, the solution will converge to zero (which means that the solution to the general differential equation will converge to the particular solution). This is a stable system: it will converge no matter where it starts. Economists like stable systems that converge, but not quite so stable. After all, it seems natural to think that there is room for human choices to make a difference! Particularly since, in economic terms, there will often be the jumpy variables that we have been alluding to throughout the book, that is to say, those that need not follow an equation of motion. You will have noticed, though, that there is a third case: \lambda_1 < 0 < \lambda_2 (this labelling is without loss of generality, of course!). In that case, the system will converge only in the case where b_2 is exactly equal to zero. Such a system is, technically speaking, not stable: it generally will not converge. But we refer to it as saddle-path stable. These are the most interesting systems, from an economic perspective, as convergence depends on purposeful behaviour by the agents in the model. How do we know if a system is saddle-path stable, without having to compute the eigenvalues?

It suffices to recall, from linear algebra, that the determinant of a matrix is equal to the product of its eigenvalues. It immediately follows that, if det(\Omega) < 0, we are in a situation of saddle-path stability: the eigenvalues must have opposite signs. Such a system will converge only if the initial choice of the jumpy variable is the one that puts the system on the saddle path, that is to say, the one that delivers b_2 = 0 in the notation above.
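As an illustration, here is a sketch that checks these conditions numerically, both for the AK system above and for a generic saddle-path case (the numerical values are our assumptions):

    import numpy as np

    # Eigenvalues and determinant of the AK system matrix Omega
    A, rho = 0.3, 0.05                       # illustrative values with A > rho
    Omega = np.array([[A - rho, 0.0],
                      [-1.0,    A  ]])
    print(np.linalg.eigvals(Omega))          # both roots positive: explosive
    print(np.linalg.det(Omega))              # det > 0

    # A matrix with det < 0 instead has real roots of opposite sign:
    # the hallmark of saddle-path stability.
    Omega2 = np.array([[ 0.1, -0.5],
                       [-0.3, -0.2]])
    print(np.linalg.eigvals(Omega2), np.linalg.det(Omega2))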

Notes

1. These conditions are synthesised in Pontryagin's maximum principle, derived in the 1950s from standard principles of the classical calculus of variations that go back to Euler, Lagrange, and others.
2. Again, we are using subscripts to denote time (c_t), as opposed to parentheses (c(t)), as is more often done in continuous-time settings. We are economists!
3. With the present-value Hamiltonian, this becomes \dot{\lambda}_t = -\partial H / \partial a_t. It is easy to check that the two formulations are equivalent.
4. These techniques were developed by Richard Bellman in the 1950s.
5. The relevant envelope theorem here is due to Benveniste and Scheinkman (1979).
6. Recall that the eigenvalues of a matrix \Omega can be computed as the solutions \lambda to the equation det(\Omega - \lambda I) = 0, where I is the identity matrix. The eigenvectors are defined as the vectors v such that \Omega v = \lambda v.
7. More precisely, we are referring here to the real parts of the eigenvalues; the eigenvalues can be complex numbers, with an imaginary part.
8. It is easy to check that the system above does not converge if A > \rho. You will recall that this is exactly our conclusion from studying the AK model.