{"id":7197,"date":"2012-05-17T10:45:41","date_gmt":"2012-05-17T09:45:41","guid":{"rendered":"https:\/\/www.portfolioprobe.com\/?p=7197"},"modified":"2012-05-17T10:45:41","modified_gmt":"2012-05-17T09:45:41","slug":"exponential-decay-models","status":"publish","type":"post","link":"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/","title":{"rendered":"Exponential decay models"},"content":{"rendered":"<p>All models are wrong, some models are more wrong than others.<\/p>\n<h2>The streetlight model<\/h2>\n<p>Exponential decay models are quite common.\u00a0 But why?<\/p>\n<p>One reason a model might be popular is that it contains a reasonable approximation to the mechanism that generates the data.\u00a0 That is seriously unlikely in this case.<\/p>\n<p>When it is dark and you&#8217;ve lost your keys, where do you look?\u00a0 Under the streetlight.\u00a0 You look there not because you think that&#8217;s the most likely spot for the keys to be; you look there because that is the only place you&#8217;ll find them if they are there.<\/p>\n<p>Photo by takomabibelot via everystockphoto.com <a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/geotagged-folkstreet-julepstreet-92997-crop-4\/\" rel=\"attachment wp-att-7253\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-7253\" title=\"geotagged-folkstreet-julepstreet-92997-crop\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/geotagged-folkstreet-julepstreet-92997-crop3-520x673.jpg\" alt=\"\" width=\"520\" height=\"673\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/geotagged-folkstreet-julepstreet-92997-crop3-520x673.jpg 520w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/geotagged-folkstreet-julepstreet-92997-crop3-250x323.jpg 250w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/geotagged-folkstreet-julepstreet-92997-crop3.jpg 679w\" 
sizes=\"(max-width: 520px) 100vw, 520px\" \/><\/a><\/p>\n<p>Long ago and not so far away, I needed to compute the <a href=\"https:\/\/www.portfolioprobe.com\/2010\/08\/25\/what-the-hell-is-a-variance-matrix\/\">variance matrix of the returns<\/a> of a few thousand stocks.\u00a0 The machine that I had could hold three copies of the matrix but not much more.<\/p>\n<p>An exponential decay model worked wonderfully in this situation.\u00a0 The data I needed were:<\/p>\n<ul>\n<li>the previous day&#8217;s variance matrix<\/li>\n<li>the vector of yesterday&#8217;s returns<\/li>\n<\/ul>\n<p>The operations were:<\/p>\n<ul>\n<li>do an outer product with the vector of returns<\/li>\n<li>do a weighted average of that outer product and the previous variance matrix<\/li>\n<\/ul>\n<p>Compact and simple.<\/p>\n<p>There are better ways, it&#8217;s just that the time was wrong.<\/p>\n<p>The &#8220;right&#8221; way would be to use a long history of the returns of the stocks and fit a realistic model.\u00a0 Even if that computer could have held the history of returns (probably not), it is unlikely it would have had room to work with it in order to come up with the answer.\u00a0 Plus there would have been lots of data complications to work through with a more complex model.<\/p>\n<p><a href=\"https:\/\/www.portfolioprobe.com\/2012\/03\/05\/the-shadows-and-light-of-models\/\">&#8220;The shadows and light of models&#8221;<\/a> used a different metaphor of light in relation to models.\u00a0 If we can only use an exponential decay model, measuring ignorance is going to be a bit dodgy.\u00a0 We won&#8217;t know how much of the ignorance is due to the model and how much is inherent.<\/p>\n<h2>Exponential smoothing<\/h2>\n<p>We can do exponential smoothing of the daily returns of the S&amp;P 500 as an example.\u00a0 Figure 1 shows the unsmoothed returns.\u00a0 Figure 2 shows the exponential smooth with lambda equal to 0.97 &#8212; that is 97% weight on the previous smooth and 3% weight on the 
current point.\u00a0 Figure 3 shows the exponential smooth with lambda equal to 0.99 &#8212; only 1% weight on the current point.<\/p>\n<p><strong>Note<\/strong>: Often what is called lambda here is one minus lambda elsewhere.<\/p>\n<p>Figure 1: Log returns of the S&amp;P 500. <a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/spxreturn\/\" rel=\"attachment wp-att-7224\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-7224\" title=\"spxreturn\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxreturn.png\" alt=\"\" width=\"512\" height=\"480\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxreturn.png 512w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxreturn-250x234.png 250w\" sizes=\"(max-width: 512px) 100vw, 512px\" \/><\/a><\/p>\n<p>Figure 2: Exponential smooth of the log returns of the S&amp;P 500 with lambda equal to 0.97. <a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/spxretes97\/\" rel=\"attachment wp-att-7225\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-7225\" title=\"spxretes97\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes97.png\" alt=\"\" width=\"512\" height=\"480\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes97.png 512w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes97-250x234.png 250w\" sizes=\"(max-width: 512px) 100vw, 512px\" \/><\/a><\/p>\n<p>Figure 3: Exponential smooth of the log returns of the S&amp;P 500 with lambda equal to 0.99. 
<a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/spxretes99\/\" rel=\"attachment wp-att-7226\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-7226\" title=\"spxretes99\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes99.png\" alt=\"\" width=\"512\" height=\"480\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes99.png 512w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/spxretes99-250x234.png 250w\" sizes=\"(max-width: 512px) 100vw, 512px\" \/><\/a><\/p>\n<p>Notice that the start of the smooth in Figures 2 and 3 is a little strange.\u00a0 The first point of the smooth is the actual first datapoint, which happened to be a little over 1%.\u00a0 That problem can be solved by allowing a burn-in period &#8212; dropping the first 20, 50, 100 points in the smooth.<\/p>\n<h2>Chains of weights<\/h2>\n<p>The simple process of always doing a weighted sum of the previous smooth and the new data means that the smooth is actually a weighted sum of all the previous data with weights that decrease exponentially.\u00a0 Figure 4 shows the weights that result from two choices of lambda.<\/p>\n<p>Figure 4: Weights of lagged observations for lambda equal to 0.97 (blue) and 0.99 (gold). 
<a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/expweight\/\" rel=\"attachment wp-att-7227\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-7227\" title=\"expweight\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/expweight.png\" alt=\"\" width=\"512\" height=\"480\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/expweight.png 512w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/expweight-250x234.png 250w\" sizes=\"(max-width: 512px) 100vw, 512px\" \/><\/a>The weights drop off quite fast.\u00a0 If the process were stable, we would want the observations to be equally weighted.\u00a0 It is easy to do that compactly as well:<\/p>\n<ul>\n<li>keep the sum of the statistic you want<\/li>\n<li>note the number of observations<\/li>\n<li>add the new statistic to the sum and divide by the number of observations<\/li>\n<\/ul>\n<p>What we are likely to really want is some compromise between these extremes.<\/p>\n<h2>Half-life<\/h2>\n<p>The half-life of an exponential decay is often given.\u00a0 This is the number of lags at which the weight falls to half of the weight for the current observation.\u00a0 Figure 5 shows the half-lives for our two example lambdas.<\/p>\n<p>Figure 5: Half-lives and weights of lagged observations for lambda equal to 0.97 (blue) and 0.99 (gold). 
<a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/exphalflife\/\" rel=\"attachment wp-att-7230\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-7230\" title=\"exphalflife\" src=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/exphalflife.png\" alt=\"\" width=\"512\" height=\"480\" srcset=\"https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/exphalflife.png 512w, https:\/\/www.portfolioprobe.com\/wp-content\/uploads\/2012\/05\/exphalflife-250x234.png 250w\" sizes=\"(max-width: 512px) 100vw, 512px\" \/><\/a><\/p>\n<p>Generally the half-life is presented as if it is an intuitive value.\u00a0 Well, sounds cool &#8212; I&#8217;m not sure it tells <strong>me<\/strong> much of anything.<\/p>\n<h2>Summary<\/h2>\n<p>When an exponential decay model is being used, you should ask:<\/p>\n<p>Is there a good reason to use exponential decay, or is it only used because it has always been done like that?<\/p>\n<p>Advances in hardware mean that some of what could only be modeled with exponential decay in the past can now be modeled better.\u00a0 They also mean that what could not be done at all before <a href=\"http:\/\/mathbabe.org\/2012\/01\/27\/updating-your-big-data-model\/\" target=\"_blank\">can now be done with exponential decay<\/a>.<\/p>\n<h2>Epilogue<\/h2>\n<blockquote><p>He finds a convenient street light, steps out of the shade<br \/>\nSays something like, &#8220;You and me babe, how about it?&#8221;<\/p><\/blockquote>\n<p>from &#8220;Romeo and Juliet&#8221; by Mark Knopfler<br \/>\n<object width=\"520\" height=\"382\" classid=\"clsid:d27cdb6e-ae6d-11cf-96b8-444553540000\" codebase=\"http:\/\/download.macromedia.com\/pub\/shockwave\/cabs\/flash\/swflash.cab#version=6,0,40,0\"><param name=\"allowFullScreen\" value=\"true\" \/><param name=\"allowscriptaccess\" value=\"always\" \/><param name=\"src\" value=\"http:\/\/www.youtube.com\/v\/NtmorUXAwiI?version=3&amp;hl=en_GB\" 
\/><param name=\"allowfullscreen\" value=\"true\" \/><embed width=\"520\" height=\"382\" type=\"application\/x-shockwave-flash\" src=\"http:\/\/www.youtube.com\/v\/NtmorUXAwiI?version=3&amp;hl=en_GB\" allowFullScreen=\"true\" allowscriptaccess=\"always\" allowfullscreen=\"true\" \/><\/object><\/p>\n<h2>Appendix R<\/h2>\n<p>R is, of course, a <a href=\"https:\/\/www.portfolioprobe.com\/user-area\/some-hints-for-the-r-beginner\/\">wonderful place to do modeling<\/a> &#8212; exponential and other.<\/p>\n<h4>naive exponential smoothing<\/h4>\n<p>A naive implementation of exponential smoothing is:<\/p>\n<pre>&gt; pp.naive.exponential.smooth\r\nfunction (x, lambda=.97)\r\n{\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ans &lt;- x\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 oneml &lt;- 1 - lambda\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 for(i in 2:length(x)) {\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ans[i] &lt;- lambda * ans[i-1] + oneml * x[i]\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 }\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ans\r\n}<\/pre>\n<p>This is naive in at least two senses.<\/p>\n<p>It does the looping explicitly.\u00a0 For this simple case, there is an alternative.\u00a0 However, in more complex situations it may be necessary to use a loop.<\/p>\n<p>The function is also naive in assuming that the vector to be smoothed has at least two elements.<\/p>\n<pre>&gt; pp.naive.exponential.smooth(100)\r\nError in ans[i] &lt;- lambda * ans[i - 1] + oneml * x[i] :\r\n\u00a0 replacement has length zero<\/pre>\n<p>This sort of problem is a relative of Circle 8.1.60 in <a href=\"http:\/\/www.burns-stat.com\/pages\/Tutor\/R_inferno.pdf\" target=\"_blank\">The R Inferno<\/a>.<\/p>\n<h4>better exponential smoothing<\/h4>\n<p>Exponential smoothing can be done more efficiently by pushing the iteration down into a compiled language:<\/p>\n<pre>&gt; pp.exponential.smooth\r\nfunction (x, 
lambda=.97)\r\n{\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 xmod &lt;- (1 - lambda) * x\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 xmod[1] &lt;- x[1]\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ans &lt;- filter(xmod, lambda, method=\"recursive\",\r\n               sides=1)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 attributes(ans) &lt;- attributes(x)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ans\r\n}<\/pre>\n<p>This still retains some naivety in that it doesn&#8217;t check that <code>lambda<\/code> is of length one.<\/p>\n<p>See also the <code>HoltWinters<\/code> function in the <code>stats<\/code> package, and <code>ets<\/code> in the <code>forecast<\/code> package.<\/p>\n<h4>weight plots<\/h4>\n<p>A simplified version of part of Figure 5 is:<\/p>\n<pre>&gt; plot(.03 * .97^(400:1),\u00a0type=\"l\",\u00a0xaxt=\"n\")\r\n&gt; axis(1, at=c(0, 100, 200, 300, 400),\r\n+    labels=c(400, 300, 200, 100, 0))\r\n&gt; abline(v=400 - log(2) \/ .03)<\/pre>\n<h4>time plot<\/h4>\n<p>The plots over time were produced with <a href=\"https:\/\/www.portfolioprobe.com\/R\/blog\/pp.timeplot.R\" target=\"_blank\"><code>pp.timeplot<\/code><\/a>.<\/p>\n<p><a href=\"http:\/\/feedburner.google.com\/fb\/a\/mailverify?uri=PortfolioProbe&amp;loc=en_US\" target=\"_blank\">Subscribe to the Portfolio Probe blog by Email<\/a><\/p>\n<!-- AddThis Advanced Settings generic via filter on the_content --><!-- AddThis Share Buttons generic via filter on the_content -->","protected":false},"excerpt":{"rendered":"<p>All models are wrong, some models are more wrong than others. The streetlight model Exponential decay models are quite common.\u00a0 But why? One reason a model might be popular is that it contains a reasonable approximation to the mechanism that generates the data.\u00a0 That is seriously unlikely in this case. 
When it is dark and &hellip; <a href=\"https:\/\/www.portfolioprobe.com\/2012\/05\/17\/exponential-decay-models\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><!-- AddThis Advanced Settings generic via filter on get_the_excerpt --><!-- AddThis Share Buttons generic via filter on get_the_excerpt --><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,17],"tags":[244,245,136],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/posts\/7197"}],"collection":[{"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/comments?post=7197"}],"version-history":[{"count":0,"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/posts\/7197\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/media?parent=7197"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/categories?post=7197"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.portfolioprobe.com\/wp-json\/wp\/v2\/tags?post=7197"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
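The compact variance-matrix update described in the post (an outer product of yesterday's returns, averaged with the previous estimate) can be sketched in Python with NumPy. This is an illustration under stated assumptions, not the author's original code; the function and variable names are my own:

```python
import numpy as np

def ewma_covariance_update(prev_cov, returns, lam=0.97):
    """One step of an exponentially weighted covariance update.

    Keeps only the previous day's variance matrix and the new return
    vector in memory, as described in the post:
      - form the outer product of the return vector with itself
      - take a weighted average of it and the previous matrix
    """
    outer = np.outer(returns, returns)
    return lam * prev_cov + (1.0 - lam) * outer
```

With a few thousand stocks this needs space for only the matrix itself and the return vector, which is why it fit on the machine the post describes.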
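As a cross-check on the R code in Appendix R, here is a Python sketch of the same smoothing recursion, plus the exact half-life formula (solving lambda^n = 1/2 for n gives n = log 2 / -log lambda, close to the log(2)/.03 approximation used in the plot code for lambda = 0.97). The function names are my own:

```python
import math

def exponential_smooth(x, lam=0.97):
    """Exponential smooth of a sequence: each smoothed value is
    lam times the previous smooth plus (1 - lam) times the new point.
    Unlike the naive R version, this handles length-one (and empty) input."""
    if len(x) == 0:
        return []
    ans = [x[0]]                        # the smooth starts at the first data point
    for xi in x[1:]:
        ans.append(lam * ans[-1] + (1.0 - lam) * xi)
    return ans

def half_life(lam):
    """Number of lags at which the weight on an observation falls to half
    the weight on the current observation: lam**n == 0.5."""
    return math.log(2) / -math.log(lam)
```

For lambda = 0.97 the exact half-life is about 22.8 lags, so the log(2)/.03 shorthand (about 23.1) is a reasonable approximation when lambda is close to one.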