In A Short Pedagogical Note, Nassim Taleb outlines his specific objection to Extreme Value Theory (EVT), imploring us to forget about it (fuhgetaboudit!). Unfortunately, financial panics and hundred-year floods continue to remind us of EVT, as do pandemics. Juxtaposing A Short Pedagogical Note with Taleb’s treatise, Tail Risk of Contagious Diseases, one might conclude that the risk engineer occupies the ultimate hedged position as both an opponent and a proponent of EVT.

Diversification only insulates a portfolio from decline if its assets remain uncorrelated. Stock market crashes are largely non-diversifiable within the stock market, because correlations between stocks increase dramatically in volatile market conditions. Such is the fate of Direxion’s Daily Technology Bull fund, hereinafter referred to as TECL.
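That correlation spike can be illustrated with a toy one-factor model; every number below is hypothetical, and this is a Python sketch rather than anything from the Ticksift platform. When the volatility of the shared market factor jumps, pairwise correlations between otherwise independent stocks jump with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_stocks = 250, 10

def daily_returns(market_vol):
    # One-factor model: each stock = shared market factor + idiosyncratic noise.
    market = rng.normal(0, market_vol, size=(n_days, 1))
    idio = rng.normal(0, 0.01, size=(n_days, n_stocks))
    return market + idio

def mean_pairwise_corr(returns):
    c = np.corrcoef(returns, rowvar=False)
    # Average the off-diagonal entries of the correlation matrix.
    return (c.sum() - n_stocks) / (n_stocks * (n_stocks - 1))

calm = mean_pairwise_corr(daily_returns(market_vol=0.005))
panic = mean_pairwise_corr(daily_returns(market_vol=0.04))
print(f"calm: {calm:.2f}, panic: {panic:.2f}")
```

With calm factor volatility the idiosyncratic noise dominates and average correlation stays modest; with crash-level factor volatility the shared factor dominates and correlations approach one, which is exactly why the diversification inside a single-sector fund evaporates when it is needed most.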
The following companies comprise the top ten components of TECL, abstracted on 16 March 2020 and alphabetized by company name:
Accenture (ACN), Adobe (ADBE), Apple (AAPL), Cisco (CSCO), Intel (INTC), Mastercard (MA), Microsoft (MSFT), Nvidia (NVDA), Salesforce (CRM), and Visa (V). Each company was added to the Security Explorer on our Ticksift platform.

In a prior blog post, Is it Normal?, we began with two normal distributions and summed their frequencies to obtain a Gaussian mixture. In this post, we begin with a Gaussian mixture and deploy the Expectation-Maximization (EM) algorithm to decompose it into its component distributions. Example code is included, and the results are contrasted with those of the R package mixtools, a professional software release based upon work supported by the National Science Foundation under Grant No.
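As an illustrative sketch of the EM idea described above, here is a minimal two-component, one-dimensional version in Python; the post's actual code uses R and mixtools, and the data and starting values below are invented for demonstration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Simulated two-component mixture: N(-2, 1) and N(3, 1), mixed 40/60.
data = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])

# Initial guesses for the weights, means, and standard deviations.
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * norm.pdf(data[:, None], mu, sd)        # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the responsibility-weighted data.
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.round(mu, 2), np.round(w, 2))
```

Starting from deliberately poor guesses, the loop recovers means near -2 and 3 and weights near 0.4 and 0.6; mixtools' estimator follows the same E-step/M-step alternation with more numerical safeguards.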

The sum of random variables should not be confused with the sum of their distributions. If both distributions are normal, the former is also a normal distribution, with mean equal to the sum of the means and, assuming independence, variance equal to the sum of the variances. The latter is called a Gaussian mixture. This piece will illustrate both sums, beginning with two normal distributions with identical standard deviations yet different means.
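The distinction is easy to check by simulation. The post's own code is in R; as a hedged Python sketch with arbitrary parameters, summing draws from the two normals yields another normal, while mixing the two distributions yields a wider, bimodal result:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
mu1, mu2, sigma = -1.0, 2.0, 1.0

# Sum of the random variables: X + Y is N(mu1 + mu2, sqrt(2) * sigma).
x, y = rng.normal(mu1, sigma, n), rng.normal(mu2, sigma, n)
total = x + y

# Mixture of the distributions: each draw comes from one component or the other.
pick = rng.random(n) < 0.5
mixture = np.where(pick, rng.normal(mu1, sigma, n), rng.normal(mu2, sigma, n))

print(round(total.std(), 2))    # ~1.41: still one normal, sd = sqrt(2) * sigma
print(round(mixture.std(), 2))  # ~1.80: component spread plus the gap between means
```

The mixture's variance is the within-component variance plus the variance of the component means, which is why its standard deviation exceeds sqrt(2) even though each component has sigma = 1.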
First, we load necessary packages and define our distributions:
pacman::p_load("ggplot2", "RColorBrewer", "extrafont")
set.seed(1)  # seed for reproducibility; the original seed value is truncated in the source

Cash may be king, but cash flow is god, at least for small businesses, which can be simultaneously profitable yet bankrupt. A customer, perhaps with cash flow problems of its own, need only delay paying its invoices long enough that the small business in question lacks the resources to meet its short-term obligations.
Unfortunately, even fractional CFOs may be uninterested in a small business due to its size. The responsibility of cash flow management therefore frequently falls on the shoulders of those who feel ill-equipped to manage what amounts to the differencing of statistical distributions over time.
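That "differencing of distributions over time" can be made concrete with a toy Monte Carlo sketch in Python; every figure below is hypothetical, chosen only to show how random inflows and outflows translate into a probability of a cash shortfall:

```python
import numpy as np

rng = np.random.default_rng(3)

def shortfall_probability(opening_cash, weeks=13, trials=10_000):
    # Hypothetical weekly inflows (receivables collected) and outflows (payroll, rent).
    inflows = rng.normal(20_000, 8_000, size=(trials, weeks))
    outflows = rng.normal(18_000, 2_000, size=(trials, weeks))
    # Running cash balance: opening cash plus the cumulative net flow each week.
    balance = opening_cash + np.cumsum(inflows - outflows, axis=1)
    # Fraction of simulated quarters in which cash ever dips below zero.
    return (balance.min(axis=1) < 0).mean()

print(f"{shortfall_probability(25_000):.1%}")
```

Even though expected net flow is positive every week, the volatile receivables leave a material chance of running dry mid-quarter, and the risk rises sharply as the opening cash buffer shrinks.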

The correlation coefficient, r, measures the strength of a linear relationship between variables, but not its significance. The null hypothesis of zero correlation in the population can be tested, and the associated p-value is a function of both the magnitude of the sample correlation and the sample size. In general, larger |r| values and larger sample sizes yield smaller p-values. But how quickly does the p-value shrink as the sample size grows?
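The dependence on both |r| and n follows from the standard t-test for a correlation coefficient, t = r·sqrt(n − 2)/sqrt(1 − r²) with n − 2 degrees of freedom. A short Python sketch (scipy assumed available) makes the sample-size effect visible:

```python
import math
from scipy import stats

def pearson_p_value(r, n):
    # t-statistic for H0: the population correlation is zero, with n - 2 df.
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    # Two-sided p-value from the t distribution's survival function.
    return 2 * stats.t.sf(abs(t), df=n - 2)

# The same modest correlation becomes "significant" purely through sample size.
for n in (10, 30, 100, 1000):
    print(n, round(pearson_p_value(0.3, n), 4))
```

At r = 0.3 the null survives with ten observations but is soundly rejected with a hundred, which is why a p-value should always be read alongside both the effect size and n.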

Whenever past performance is indicative of future results, predictive modeling is prescient. Such is the case with electrical bills. Twenty-two months' worth of electrical bills for a four-bedroom, two-bath apartment in a 1,500-square-foot duplex in the Lincoln, Nebraska area were submitted by residents. The following billing-period statistics were abstracted from each electrical bill:
kWh, the total kilowatt-hour usage; avg_kWh_per_day, the average kilowatt-hour usage per day; avg_high, the average high temperature; and avg_low, the average low temperature.
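As a sketch of the kind of model these statistics support, here is a hedged Python example fitting ordinary least squares to synthetic billing data; the coefficients, temperatures, and noise level are invented for illustration, not the residents' actual figures:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 22  # one row per billing period, matching the 22 months of bills

# Synthetic billing-period temperatures and a linear usage model with noise.
avg_high = rng.uniform(30, 95, n)
avg_low = rng.uniform(10, 60, n)
kwh = 200 + 9.0 * avg_high - 4.0 * avg_low + rng.normal(0, 40, n)

# Ordinary least squares: kWh regressed on the two temperature features.
X = np.column_stack([np.ones(n), avg_high, avg_low])
coef, *_ = np.linalg.lstsq(X, kwh, rcond=None)
pred = X @ coef
r2 = 1 - ((kwh - pred) ** 2).sum() / ((kwh - kwh.mean()) ** 2).sum()
print(np.round(coef, 1), round(r2, 2))
```

With only 22 periods the fit recovers the generating coefficients and a high R², illustrating that when past performance really is indicative of future results, even a small billing history supports a usable forecast.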

George Moody of MIT-BIH noted back in 1996 that “neither first-order statistics nor frequency-domain analyses of HR (heart rate) time series reveal all of the information hidden in heart rate variations” (Moody 1996). This post will evaluate that claim using a time series similarity metric, contrasted with classical statistical tools, on heart-rate data first made available at the website listed in the Works Cited section of this post.
The Dynamic Time Warping algorithm (DTW) can detect similarities between time series missed by other statistical tests.