Goodness-of-Fit Testing with SQL Server Part 7.2: The Lilliefors Test

By Steve Bolton

…………Since I’m teaching myself as I go in this series of self-tutorials, I often have only a vague idea of the challenges that will arise when trying to implement the next goodness-of-fit test with SQL Server. In retrospect, had I known that the Lilliefors Test was so similar to the Kolmogorov-Smirnov and Kuiper’s Tests, I probably would have combined them into a single article. The code for this week’s T-SQL stored procedure is nearly the same, as are the execution plan and the performance. The results are also quite similar to those of the Kolmogorov-Smirnov Test for some of the practice data I’ve used throughout the series, differing in some cases by just a few decimal places. The slight differences may arise from one of the characteristics of the Lilliefors Test that differentiate it from its more famous cousin, namely that “this test of normality is more powerful than others procedures for a wide range of nonnormal conditions.”[i] Otherwise, they share many mathematical properties in common, like location and scale invariance – i.e., the proportions of the test statistics aren’t altered when using a different starting point or multiplying by a common factor.
…………On the other hand, the test is apparently more restrictive than the Kolmogorov-Smirnov, in that I’ve seen it referred to specifically as a normality test and I haven’t encountered any mention of it being applied to other distributions. Furthermore, its primary use cases seem to be those in which the variance of the data is unknown[ii], which often doesn’t apply in the types of million-row tables the SQL Server community works with daily. The late Hubert Lilliefors (1928-2008), a stats professor at George Washington University, published it in a Journal of the American Statistical Association article titled “On the Kolmogorov-Smirnov Test for Normality with Mean and Variance Unknown” back in 1967[iii] – so augmenting its more famous cousin in a few niche scenarios seems to have been its raison d’être from the beginning. We can always use more statistical tests in our toolbox to meet the never-ending welter of distributions that arise from actual physical processes, but I won’t dwell on the Lilliefors Test for long because its narrower use cases are less suited to our needs than those of the broader Kolmogorov-Smirnov Test.

Differences from the Kolmogorov-Smirnov

                Another reason for not dwelling on it for too long is that most of the code is identical to that of the stored procedure posted in last week’s article. The Lilliefors Test quantifies the difference between the empirical distribution function (EDF) and the cumulative distribution function (CDF) in exactly the same way as the Kolmogorov-Smirnov and Kuiper’s Tests do; in plain English, it orders the actual values and ranks them on a scale of 0 to 1, then computes the difference from the theoretical probability for the Gaussian “normal” distribution, or bell curve, which is also ranked on a scale of 0 to 1. A note of caution is in order here, because some of the sources I consulted mentioned inputting Z-Scores into the formula and using the standard normal distribution rather than the actual mean and standard deviation of the dataset, but I verified that the procedure is correct as it stands now against an example at Statd.com.[iv]
…………One of the main characteristics that set it apart from the Kolmogorov-Smirnov Test is that the test statistic is compared against the Lilliefors distribution, which apparently has a better Bahadur Slope[v] (one of many measures of the efficiency of test statistics) than its competitors in certain hypothesis testing scenarios. That is a broad topic I’ve downplayed for several reasons throughout the last two tutorial series. Among the reasons I’ve brought up in the past are the fact that SQL Server users are more likely to be using these tests for exploratory data mining than for proving specific points of evidence, as well as the ease of misinterpretation of p-values, critical values and confidence intervals even among professional academic researchers. What we need are continuous measures of how closely a dataset follows a particular distribution, not the simple Boolean either-or choices of the kind used in hypothesis testing, which reduce the information content of the test statistics as sharply as casting a float data type to a bit would in T-SQL. Furthermore, many of the lookup tables and approximations used in hypothesis testing are only valid up to a few hundred values, not the several million that we would need in Big Data scenarios.

Abdi and Molin’s Approximation

                The Lilliefors distribution was originally derived from Monte Carlo simulations (a broad term encompassing many types of randomized trials) and at least one attempt has been made to approximate it through a set of constants and equations.[vi] I implemented the approximation developed by Hervé Abdi and Paul Molin, but the two SELECTs following the comment “code for Molin and Abdi’s approximation,” along with the declarations that feed them, can be safely deleted if you don’t have a need for the P-values the block generates. I verified the P-values and the @A constants used to generate them against the examples given in their undated manuscript “Lilliefors/Van Soest’s Test of Normality,” but as is commonly the case with such workarounds in hypothesis testing, the algorithm is inapplicable when Big Data-sized values and counts are plugged into it.
…………Once @A falls below about 0.74, the approximation begins to return negative P-values, and when it climbs above about 5.66 it produces P-values greater than 1, both of which are invalid under the tenets of probability theory. Most of the practice datasets I plugged into the approximation returned invalid outputs, the majority of them strongly negative. This is a problem I’ve seen with other approximation techniques when they’re fed values beyond the expected ranges. Nevertheless, since I already coded it, I’ll leave that section intact in case anyone runs into scenarios where they can apply it to smaller datasets.
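…………For anyone who does keep the approximation, a simple guard like the sketch below can at least prevent the procedure from returning impossible probabilities. It assumes the @A and @PValue variables declared in Figure 1 and uses the rough cutoffs I observed in my own testing rather than any published bounds, so treat it as a safety net rather than a fix:

-- hypothetical guard to append after the approximation block in Figure 1;
-- the Abdi-Molin polynomial drifts outside the valid [0, 1] range for p-values
-- once @A strays below roughly 0.74 or above roughly 5.66
IF @PValue < 0 OR @PValue > 1
SET @PValue = NULL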

Figure 1: T-SQL Code for the Lilliefors Goodness-of-Fit Test
CREATE PROCEDURE [Calculations].[GoodnessOfFitLillieforsTestSP]
@Database1 as nvarchar(128) = NULL, @Schema1 as nvarchar(128), @Table1 as nvarchar(128),@Column1 AS nvarchar(128), @OrderByCode as tinyint
AS

DECLARE @SchemaAndTable1 nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTable1 = @Database1 + '.' + @Schema1 + '.' + @Table1

DECLARE @Mean float,
@StDev float,
@Count  float

DECLARE @EDFTable table
(ID bigint IDENTITY (1,1),
Value float,
ValueCount bigint,
EDFValue float,
CDFValue decimal(38,37),
EDFCDFDifference decimal(38,37)) 

DECLARE @ExecSQLString nvarchar(max), @MeanOUT nvarchar(200),@StDevOUT nvarchar(200),@CountOUT nvarchar(200), @ParameterDefinition nvarchar(max)
SET @ParameterDefinition = '@MeanOUT nvarchar(200) OUTPUT,@StDevOUT nvarchar(200) OUTPUT,@CountOUT nvarchar(200) OUTPUT '
SET @ExecSQLString = 'SELECT @MeanOUT = CAST(Avg(' + @Column1 + ') as float),@StDevOUT = CAST(StDev(' + @Column1 + ') as float),@CountOUT = CAST(Count(' + @Column1 + ') as float)
       FROM ' + @SchemaAndTable1 + '
       WHERE ' + @Column1 + ' IS NOT NULL'

EXEC sp_executesql @ExecSQLString,@ParameterDefinition, @MeanOUT = @Mean OUTPUT,@StDevOUT = @StDev OUTPUT,@CountOUT = @Count OUTPUT

SET @SQLString = 'SELECT Value, ValueCount, SUM(ValueCount) OVER (ORDER BY Value ASC) / CAST(' + CAST(@Count as nvarchar(50)) + ' AS float) AS EDFValue
       FROM (SELECT DISTINCT ' + @Column1 + ' AS Value, Count(' + @Column1 + ') OVER (PARTITION BY ' + @Column1 + ' ORDER BY ' + @Column1 + ') AS ValueCount
              FROM ' + @SchemaAndTable1 + '
              WHERE ' + @Column1 + ' IS NOT NULL) AS T1'

INSERT INTO @EDFTable (Value, ValueCount, EDFValue)
EXEC (@SQLString)

UPDATE T1
SET CDFValue = T3.CDFValue, EDFCDFDifference = EDFValue - T3.CDFValue
FROM @EDFTable AS T1
       INNER JOIN (SELECT DistinctValue, Calculations.NormalDistributionSingleCDFFunction (DistinctValue, @Mean, @StDev) AS CDFValue
       FROM (SELECT DISTINCT Value AS DistinctValue
              FROM @EDFTable) AS T2) AS T3
       ON T1.Value = T3.DistinctValue

DECLARE @b0 float = 0.37872256037043,
@b1 float = 1.30748185078790,
@b2 float = 0.08861783849346,
@A float,
@PValue float,
@LillieforsTestStatistic float 

SELECT @LillieforsTestStatistic = Max(ABS(EDFCDFDifference))
FROM @EDFTable

-- code for Molin and Abdi's approximation
-- =======================================
SELECT @A = ((-1 * (@b1 + @Count)) + Power(Power((@b1 + @Count), 2) - (4 * @b2 * (@b0 - Power(@LillieforsTestStatistic, 2))), 0.5)) / (2 * @b2)
SELECT @PValue = -0.37782822932809 + (1.67819837908004 * @A)
- (3.02959249450445 * Power(@A, 2))
+ (2.80015798142101 * Power(@A, 3))
- (1.39874347510845 * Power(@A, 4))
+ (0.40466213484419 * Power(@A, 5))
- (0.06353440854207 * Power(@A, 6))
+ (0.00287462087623 * Power(@A, 7))
+ (0.00069650013110 * Power(@A, 8))
- (0.00011872227037 * Power(@A, 9))
+ (0.00000575586834 * Power(@A, 10))

SELECT @LillieforsTestStatistic AS LillieforsTestStatistic, @PValue AS PValueAbdiMolinApproximation
SELECT ID, Value, ValueCount, EDFValue, CDFValue, EDFCDFDifference
FROM
@EDFTable
ORDER BY CASE WHEN @OrderByCode = 1 THEN ID END ASC,
CASE WHEN @OrderByCode = 2 THEN ID END DESC,
CASE WHEN @OrderByCode = 3 THEN Value END ASC,
CASE WHEN @OrderByCode = 4 THEN Value END DESC,
CASE WHEN @OrderByCode = 5 THEN ValueCount END ASC,
CASE WHEN @OrderByCode = 6 THEN ValueCount END DESC,
CASE WHEN @OrderByCode = 7 THEN EDFValue END ASC,
CASE WHEN @OrderByCode = 8 THEN EDFValue END DESC,
CASE WHEN @OrderByCode = 9 THEN CDFValue END ASC,
CASE WHEN @OrderByCode = 10 THEN CDFValue END DESC,
CASE WHEN @OrderByCode = 11 THEN EDFCDFDifference END ASC,
CASE WHEN @OrderByCode = 12 THEN EDFCDFDifference END DESC

Figure 2: Sample Results from the Lilliefors Goodness-of-Fit Test
EXEC   Calculations.GoodnessOfFitLillieforsTestSP
              @Database1 = N'DataMiningProjects',
              @Schema1 = N'Health',
              @Table1 = N'DuchennesTable',
              @Column1 = N'LactateDehydrogenase',
              @OrderByCode = 1


…………Aside from the approximation section, the code in Figure 1 is almost identical to that of last week’s procedure, so I won’t belabor the point by rehashing the explanation here. As usual, I used queries like the one in Figure 2 to test the procedure against several columns in a 209-row dataset on the Duchennes form of muscular dystrophy and an 11-million-row dataset on the Higgs Boson, which are made publicly available by the Vanderbilt University’s Department of Biostatistics and the University of California at Irvine’s Machine Learning Repository respectively. It is not surprising that the results nearly matched the Kolmogorov-Smirnov test statistic for many practice columns. For example, the LactateDehydrogenase enzyme scored 0.128712871287129 here and 0.131875117324784 on the Kolmogorov-Smirnov, while the less abnormal Hemopexin protein scored 0.116783569553499 on the Lilliefors and 0.0607407215998911 on the Kolmogorov-Smirnov Test. Likewise, the highly abnormal first float column and the Gaussian second column in the Higgs Boson table had test statistics of 0.276267238731715 and 0.0181893798916693 respectively, which were quite close to the results of the Kolmogorov-Smirnov. I cannot say whether the departure in the case of Hemopexin was the result of some property of the test itself, like its aforementioned higher statistical power for detecting non-normality, or perhaps a coding error on my part. If it reflects a genuine property of the test, then it would probably be worthwhile to calculate the Lilliefors test statistic together with the Kolmogorov-Smirnov and Kuiper’s measures and return them in one batch, to give end users a sharper picture of their data at virtually no additional computational cost.
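…………Since all three tests read from the same @EDFTable, combining them is mostly a matter of adding a couple of aggregates to the final SELECT. The sketch below reuses the EDFCDFDifference column from Figure 1 and follows the same max-minus-min formulation for Kuiper’s Test that appears in the Part 7.1 procedure; it is meant only to illustrate how cheaply the three statistics could be returned together, not as production code:

-- the Lilliefors and Kolmogorov-Smirnov statistics share the same supremum of the
-- absolute EDF-CDF differences; the Part 7.1 procedure derives Kuiper's Test by
-- subtracting the minimum positive difference from the maximum
SELECT (SELECT Max(ABS(EDFCDFDifference))
       FROM @EDFTable) AS KolmogorovSmirnovAndLillieforsStatistic,
       (SELECT Max(ABS(EDFCDFDifference)) - Min(ABS(EDFCDFDifference))
       FROM @EDFTable
       WHERE EDFCDFDifference > 0) AS KuipersTestStatistic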

Figure 3: Execution Plan for the Lilliefors Goodness-of-Fit Test (click to enlarge)
…………There were six queries in the execution plan, just as there were for last week’s tests, but the first accounted for 19 percent of the batch total and the second 82 percent. Both of those began with non-clustered Index Seeks, which is exactly what we want to see. Only the second would provide any worthwhile opportunities for further optimization, perhaps by targeting the only operators besides the seek that contributed anything significant to the query cost: a Hash Match (Aggregate) at 14 percent, a Stream Aggregate that accounted for 10 percent and two Parallelism (Repartition Streams) operators that together amounted to 53 percent. Optimization might not really be necessary, given that the first float column in the mammoth Higgs Boson dataset returned in just 23 seconds and the second in 27. Your marks are likely to be several orders of magnitude better, considering that the procedure was executed on an antiquated semblance of a workstation that is an adventure to start up, not a real database server. The only other fitness tests in this series this fast were the Kolmogorov-Smirnov and Kuiper’s Tests, which I would have calculated together with this test in a single procedure if I’d known there was so much overlap between them. The Anderson-Darling Test we’ll survey in the next installment of the series is also included in the same category of EDF-based fitness tests, but has less in common with the Lilliefors Test and its aforementioned cousins. Unfortunately, high performance is apparently not among the characteristics the Anderson-Darling Test shares with its fellow EDF-based methods. That’s something of a shame, since it is more widely used by real statisticians than many other goodness-of-fit tests.

[i] p. 1, Abdi, Hervé and Molin, Paul, undated manuscript “Lilliefors/Van Soest’s Test of Normality,” published at the University of Texas at Dallas School of Behavioral and Brain Sciences web address https://www.utdallas.edu/~herve/Abdi-Lillie2007-pretty.pdf

[ii] See the Wikipedia page “Lilliefors Test” at http://en.wikipedia.org/wiki/Lilliefors_test

[iii] Lilliefors, Hubert W., 1967, “On the Kolmogorov-Smirnov Test for Normality with Mean and Variance Unknown,” pp. 399-402 in Journal of the American Statistical Association, Vol. 62, No. 318. June, 1967.

[iv] See the Statd.com webpage “Lilliefors Normality Test” at http://statltd.com/articles/lilliefors.htm

[v] See Arcones, Miguel A., 2006, “On the Bahadur Slope of the Lilliefors and the Cramér–von Mises Tests of Normality,” pp. 196-206 in the Institute of Mathematical Statistics Lecture Notes – Monograph Series. No. 51. Available at the web address https://projecteuclid.org/euclid.lnms/1196284113

[vi] See p. 3, Abdi and Molin and the aforementioned Wikipedia page “Lilliefors Test” at http://en.wikipedia.org/wiki/Lilliefors_test

Goodness-of-Fit Testing with SQL Server Part 7.1: The Kolmogorov-Smirnov and Kuiper’s Tests

By Steve Bolton

…………“The names statisticians use for non-parametric analyses are misnomers too, in my opinion: Kruskal-Wallis tests and Kolmogorov-Smirnov statistics, for example. Good grief! These analyses are simple applications of parametric modeling that belie their intimidating exotic names.”[i]
                Apparently even experts like Will G. Hopkins, the author of the plain English online guide to stats A New View of Statistics, perceive just how dry the subject can be. They feel our pain. Sometimes the topics are simply too difficult to express efficiently without brain-taxing equations and really terse writing, but this is not the case with the Kolmogorov-Smirnov Test, the topic of this week’s mistutorial on how to perform goodness-of-fit tests with SQL Server. This particular test got its lengthy moniker from two Russian mathematicians, Vladimir Smirnov (1887-1974) and Andrey Kolmogorov (1903-1987), the latter of whom is well-known in the field but hardly a household name beyond it.[ii] He made many important contributions to information theory, neural nets and other fields directly related to data mining, which I hope to shed some light on in a future tutorial series, Information Measurement with SQL Server. Among them was Kolmogorov Complexity, a fascinating topic that can be used to embed data mining algorithms more firmly into the use of reason, in order to make inferences based on strict logical necessity. Even more importantly, he was apparently sane – unlike most famous mathematicians and physicists, who as I have noted before tend to be not merely eccentric, but often shockingly degenerate or dangerous, or both.[iii] Despite the imposing name, I was actually looking forward to coding this particular test because Kolmogorov’s work always seems to turn out to be quite useful. I wasn’t disappointed. The concepts aren’t half as hard to grasp as the name is to pronounce, because aside from the usual inscrutable equations (which I almost always omit from these articles after translating them into code) the logic behind it is really common sense. Perhaps best of all, the Kolmogorov-Smirnov Test is hands down the fastest and best-performing goodness-of-fit measure we have yet surveyed in this series. The code I provided for the last few articles was some of the weakest I’ve written in all of my tutorial series, which was compounded by the fact that the tests I surveyed aren’t a good match for SQL Server use cases, but all in all, the T-SQL below for the Kolmogorov-Smirnov is some of the best I’ve written to date. After several rewrites, it now executes on an 11-million-row dataset on a beat-up desktop in less than 30 seconds.

The Benefits of Kolmogorov’s Test

                Several studies comparing the various goodness-of-fit tests often rank the Kolmogorov-Smirnov measure near the bottom, along with other popular ones like the Chi-Squared Test, because it has lower statistical power (i.e., the ability to detect an effect on a variable when it is actually present) than rivals like the Shapiro-Wilk. As we have seen in previous articles, however, many of these alternate measures are not as well-suited to the use cases the SQL Server community is likely to encounter – particularly the popular Shapiro-Wilk Test, since it can only be applied to very small datasets. Our scenarios are distinctly different from those encountered in the bulk of academic research, since we’re using recordsets of millions or even billions of rows. These datasets are often big enough to reduce the need for random sampling, since they may represent the full population or something close to it. Furthermore, parameters like averages, counts, standard deviations and variances can be instantly calculated for the entire dataset, thereby obviating the need for the complicated statistical techniques often used to estimate them. This advantage forestalls one of the stumbling blocks otherwise associated with the Kolmogorov-Smirnov Test, i.e. the need to fully specify all of the parameters (typically aggregates) for the distribution being tested.[iv]
…………The ideal goodness-of-fit test for our purposes would be one applicable to the widest number of distributions, but many of them are limited to the Gaussian “normal” distribution or bell curve. That is not true of the Kolmogorov-Smirnov Test, which can be applied to any distribution that would have a continuous Content type in SQL Server Data Mining (SSDM). It is also an exact test whose accuracy is not dependent on the number of data points fed into it.[v] I would also count among its advantages the fact that it has clear bounds, between 0 and 1; other statistical tests sometimes continually increase in tandem with the values and counts fed into them and can be difficult to read, once the number of digits of the decimal place exceeds six or seven, thereby requiring users to waste time counting them. As we shall see, there is a lingering interpretation issue with this test, or at least my amateur implementation of it. The test can also be “more sensitive near the center of the distribution than at the tails,” but this inconvenience is heavily outweighed by its many other advantages.
…………Another plus in its favor is the relative ease with which the inner workings can be grasped. End users should always know how to interpret the numbers returned to them, but there is no reason to burden them with the internal calculations and arcane equations; I think most of the instructional materials on math and stats lose their audiences precisely because they bury non-experts under a mountain of internal details that require a lot of skills they don’t have, nor need. End users are like commuters, who don’t need to give a dissertation in automotive engineering in order to drive to work each day; what they should be able to do is read a speedometer correctly. It is the job of programmers to put the ideas of mathematicians and statisticians into practice and make them accessible to end users, in the same way that mechanics are the middlemen between drivers and automotive engineers. It does help, however, if the internal calculations have the minimum possible level of difficulty, so that programmers and end users alike can interpret and troubleshoot the results better; it’s akin to the way many Americans in rural areas become driveway mechanics on weekends, which isn’t possible if the automotive design becomes too complex for them to work on efficiently.

Plugging in CDFs and EDFs

                The Kolmogorov-Smirnov Test isn’t trivial to understand, but end users might find its inner workings easier to grasp than those of most other goodness-of-fit tests. The most difficult concept is that of the cumulative distribution function (CDF), which I covered back in Goodness-of-Fit Testing with SQL Server, part 2.1: Implementing Probability Plots in Reporting Services and won’t rehash here. Suffice it to say that all of the probabilities for all of the possible values of a column are arranged so that they accumulate from 0 to 1. The concept is easier to understand than to code, at least for the normal distribution. One of the strongest points of the Kolmogorov-Smirnov Test is that we can plug the CDF of any continuous distribution into it, but I’ll keep things short by simply reusing one of the CDF functions I wrote for the Implementing Probability Plots article.
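…………If you don’t have that earlier function handy, a stand-in like the sketch below will compile under the same name and let the procedures in this segment run. It is not the original code from the Probability Plots article; it simply standardizes the input and approximates the Gaussian CDF with the well-known Abramowitz-Stegun formula for the error function, which is accurate to roughly seven decimal places:

CREATE FUNCTION [Calculations].[NormalDistributionSingleCDFFunction]
(@Value float, @Mean float, @StDev float)
RETURNS float
AS
BEGIN
-- standardize the input, then approximate the Gaussian CDF as 0.5 * (1 + erf(z / sqrt(2))),
-- using the Abramowitz-Stegun 7.1.26 approximation of the error function
DECLARE @Z float = (@Value - @Mean) / @StDev
DECLARE @X float = ABS(@Z) / SQRT(2)
DECLARE @T float = 1.0 / (1.0 + (0.3275911 * @X))
DECLARE @Polynomial float = @T * (0.254829592 + @T * (-0.284496736 + @T * (1.421413741 + @T * (-1.453152027 + @T * 1.061405429))))
DECLARE @Erf float = 1.0 - (@Polynomial * EXP(-1 * @X * @X))
RETURN CASE WHEN @Z >= 0 THEN 0.5 * (1.0 + @Erf) ELSE 0.5 * (1.0 - @Erf) END
END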
…………All we have to do to derive the Kolmogorov-Smirnov metric is to add in the concept of the empirical distribution function (EDF), in which we merely put the recordset in the order of the values and assign an EDF value at each point that is equal to the reciprocal of the overall count. Some sources make cautionary statements like this: “Warning: ties should not be present for the Kolmogorov-Smirnov test,”[vi] which would render all of the tests based on EDFs useless for our purposes, since our billion-row tables are bound to have repeat values. I was fortunate to find a workaround in some undated, uncredited course notes I found at the Penn State University website, which turned out to be the most useful source of info I’ve yet found on implementing EDFs.[vii] To circumvent this issue, all we have to do is use the distinct count for values with ties as the dividend rather than one. Like the CDF, the EDF starts at 0 and accumulates up to a limit of 1, which means they can be easily compared by simply subtracting them at each level. The Kolmogorov-Smirnov test statistic is merely the highest absolute difference between the two.[viii] That’s it. All we’re basically doing is seeing if the order follows the probability we’d expect for each value, if they came from a particular distribution. In fact, we can get two measures for the price of one by also taking the minimum difference and subtracting it from the maximum to derive the test statistic for Kuiper’s Test, which is sometimes used in cases where cyclical variations in the data are an issue.[ix]
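…………To make the ties workaround concrete, here is a throwaway example against a handful of hard-coded values; note how the duplicate counts, rather than a flat value of one, drive the size of each step in the EDF:

DECLARE @Demo table (Value float)
INSERT INTO @Demo (Value) VALUES (2), (2), (3), (5), (5), (5), (8)

SELECT Value, ValueCount,
SUM(ValueCount) OVER (ORDER BY Value ASC) / CAST((SELECT Count(*) FROM @Demo) AS float) AS EDFValue
FROM (SELECT DISTINCT Value, Count(*) OVER (PARTITION BY Value) AS ValueCount
       FROM @Demo) AS T1
ORDER BY Value
-- returns EDF steps of 2/7, 3/7, 6/7 and 7/7 instead of climbing by a uniform 1/7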

Figure 1: T-SQL Code for the Kolmogorov-Smirnov and Kuiper’s Tests

CREATE PROCEDURE [Calculations].[GoodnessOfFitKolomgorovSmirnovAndKuipersTestsSP]
@Database1 as nvarchar(128) = NULL, @Schema1 as nvarchar(128), @Table1 as nvarchar(128),@Column1 AS nvarchar(128), @OrderByCode as tinyint
AS

DECLARE @SchemaAndTable1 nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTable1 = @Database1 + '.' + @Schema1 + '.' + @Table1

DECLARE @Mean float,
@StDev float,
@Count  float

DECLARE @EDFTable table
(ID bigint IDENTITY (1,1),
Value float,
ValueCount bigint,
EDFValue float,
CDFValue decimal(38,37),
EDFCDFDifference decimal(38,37)) 

DECLARE @ExecSQLString nvarchar(max), @MeanOUT nvarchar(200),@StDevOUT nvarchar(200),@CountOUT nvarchar(200), @ParameterDefinition nvarchar(max)
SET @ParameterDefinition = '@MeanOUT nvarchar(200) OUTPUT,@StDevOUT nvarchar(200) OUTPUT,@CountOUT nvarchar(200) OUTPUT '
SET @ExecSQLString = 'SELECT @MeanOUT = Avg(' + @Column1 + '),@StDevOUT = StDev(' + @Column1 + '),@CountOUT = Count(' + @Column1 + ')
       FROM ' + @SchemaAndTable1 + '
       WHERE ' + @Column1 + ' IS NOT NULL'

EXEC sp_executesql @ExecSQLString,@ParameterDefinition, @MeanOUT = @Mean OUTPUT,@StDevOUT = @StDev OUTPUT,@CountOUT = @Count OUTPUT
SET @SQLString = 'SELECT Value, ValueCount, SUM(ValueCount) OVER (ORDER BY Value ASC) / CAST(' +
CAST(@Count as nvarchar(50)) + ' AS float) AS EDFValue
       FROM (SELECT DISTINCT '
+ @Column1 + ' AS Value, Count(' + @Column1 + ') OVER (PARTITION BY ' + @Column1 + ' ORDER BY ' + @Column1 + ') AS ValueCount
              FROM ' + @SchemaAndTable1 + '
              WHERE ' + @Column1 + ' IS NOT NULL) AS T1'

INSERT INTO @EDFTable
(Value, ValueCount, EDFValue)
EXEC (@SQLString)

UPDATE T1
SET CDFValue = T3.CDFValue, EDFCDFDifference = EDFValue - T3.CDFValue
FROM @EDFTable AS T1
       INNER JOIN (SELECT DistinctValue, Calculations.NormalDistributionSingleCDFFunction (DistinctValue, @Mean, @StDev) AS CDFValue
 FROM (SELECT DISTINCT Value AS DistinctValue
       FROM @EDFTable) AS T2) AS T3
       ON T1.Value = T3.DistinctValue

SELECT KolomgorovSmirnovSupremum AS KolomgorovSmirnovTest, KolomgorovSmirnovSupremum - KolomgorovSmirnovMinimum AS KuipersTest
FROM (SELECT Max(ABS(EDFValue - CDFValue)) AS KolomgorovSmirnovSupremum, -- the supremum, i.e. the max
Min(ABS(EDFValue - CDFValue)) AS KolomgorovSmirnovMinimum
       FROM @EDFTable
       WHERE EDFCDFDifference > 0) AS T3

SELECT ID, Value, ValueCount, EDFValue, CDFValue, EDFCDFDifference
FROM
@EDFTable
       ORDER BY CASE WHEN @OrderByCode = 1 THEN ID END ASC,
CASE WHEN @OrderByCode = 2 THEN ID END DESC,
CASE WHEN @OrderByCode = 3 THEN Value END ASC,
CASE WHEN @OrderByCode = 4 THEN Value END DESC,
CASE WHEN @OrderByCode = 5 THEN ValueCount END ASC,
CASE WHEN @OrderByCode = 6 THEN ValueCount END DESC,
CASE WHEN @OrderByCode = 7 THEN EDFValue END ASC,
CASE WHEN @OrderByCode = 8 THEN EDFValue END DESC,
CASE WHEN @OrderByCode = 9 THEN CDFValue END ASC,
CASE WHEN @OrderByCode = 10 THEN CDFValue END DESC,
CASE WHEN @OrderByCode = 11 THEN EDFCDFDifference END ASC,
CASE WHEN @OrderByCode = 12 THEN EDFCDFDifference END DESC

…………The code above is actually a lot simpler than it looks, given that the last 12 lines are dedicated to implementing the @OrderByCode parameter, which I’ve occasionally provided as an affordance over the course of the last two tutorial series. It’s particularly useful in this test when the column values, distinct counts, EDF and CDF results in the @EDFTable are of interest in addition to the test statistic; ordinarily, this would be taken care of in an app’s presentation layer, so the ordering code can be safely deleted if you’re not using SQL Server Management Studio (SSMS). In this instance, 1 orders the results by ID ASC, 2 is by ID DESC, 3 is by Value ASC, 4 is by Value DESC, 5 is by ValueCount ASC, 6 is by ValueCount DESC, 7 is by EDFValue ASC, 8 is by EDFValue DESC, 9 is by CDFValue ASC, 10 is by CDFValue DESC, 11 is by EDFCDFDifference ASC and 12 is by EDFCDFDifference DESC. The rest of the parameters and the first couple of lines of dynamic SQL allow users to perform the tests against any column in any database they have sufficient access to. As usual, you’ll have to add in your own validation, null handling and SQL injection protection code. Two dynamic SQL statements are necessary because the count, mean and standard deviation have to be extracted from the original base table separately. The retrieval of those aggregates needed for subsequent calculations occurs shortly after the declarations section.
…………Note that this procedure was markedly faster after substituting the sp_executesql statement for a dynamic INSERT EXEC on the base table (which had been used to populate the @EDFTable in an inefficient way). One quirk I should point out though is the use of the DISTINCT clause in the UPDATE subquery, which is needed to prevent unnecessary repetitive calls to the somewhat expensive Calculations.NormalDistributionSingleCDFFunction in the case of duplicate values. This somewhat convoluted method actually saves a big performance hit on large tables with lots of duplicates. In the final query, I’m betting that the outer subquery is less expensive than retrieving the max twice in a single query. One of the few concerns I have about the procedure is the use of the actual mean and standard deviation in calculating the CDF values. Some sources recommended using the standard normal, but this typically resulted in ridiculous distortions for most of the recordsets I tested it against. On the other hand, I verified the correctness of the calculations as they stand now by working through the example in the Alion System Reliability Center’s Selected Topics in Assurance Related Technologies, a series of publications on stats I recently discovered and now can’t live without.[x]

Figure 2: Sample Results from the Kolmogorov-Smirnov and Kuiper’s Tests
EXEC   Calculations.GoodnessOfFitKolomgorovSmirnovAndKuipersTestsSP
              @Database1 = N'DataMiningProjects',
              @Schema1 = N'Health',
              @Table1 = N'DuchennesTable',
              @Column1 = N'LactateDehydrogenase',
              @OrderByCode = 1


…………Even more good news: when I tested it on the 209 rows of the tiny 9-kilobyte set of data on the Duchennes form of muscular dystrophy and the 11 million rows and nearly 6 gigs of data in the Higgs Boson dataset (which I downloaded from the Vanderbilt University’s Department of Biostatistics and the University of California at Irvine’s Machine Learning Repository respectively), I got pretty much the results I expected. After using the same datasets for the last dozen articles or so, I know which ones follow the Gaussian distribution and which do not, and the Kolmogorov-Smirnov Test consistently returned lower figures for the ones that followed a bell curve and higher ones for those that do not. For example, the query in Figure 2 returned a value of 0.131875117324784 for the LactateDehydrogenase enzyme, while the less abnormal Hemopexin scored a 0.0607407215998911. On the other hand, the highly abnormal, really lopsided first float column in the Higgs Boson dataset scored a whopping 0.276266847552121, while the second float column scored just a 0.0181892303151281, probably because it clearly follows a bell curve in a histogram.
…………Other programmers may also want to consider adding in their own logic to implement confidence intervals and the like, which I typically omit for reasons of simplicity, the difficulty of deriving lookup values on a Big Data scale and philosophical concerns about their applicability, not to mention the widespread concern among many professional statisticians about the rampant misuse and misinterpretation of hypothesis testing methods. Suffice it to say that my own interval of confidence in them is steadily narrowing, at least for the unique use cases the SQL Server community faces. The good news is that if you decide to use standard hypothesis testing methods, then the Kolmogorov-Smirnov test statistic doesn’t require modifications before plugging it into lookup tables, unlike the popular Shapiro-Wilk and Anderson-Darling tests.[xi]
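…………For readers who do want that quick check, the large-sample critical value for the Kolmogorov-Smirnov statistic is simply a constant divided by the square root of the count, roughly 1.36 at the customary 0.05 significance level and 1.63 at 0.01. Keep in mind that these standard values strictly apply only when the distribution’s parameters are specified in advance rather than estimated from the same data, which is precisely the gap the Lilliefors Test in the next article is meant to fill. The sketch below plugs in the LactateDehydrogenase figures from Figure 2, assuming all 209 rows are non-null; substitute the count the procedure actually returns:

-- asymptotic Kolmogorov-Smirnov critical value: c(alpha) / sqrt(n), with c = 1.36 at alpha = 0.05
DECLARE @KolmogorovSmirnovStatistic float = 0.131875117324784, @N float = 209
DECLARE @CriticalValue float = 1.36 / SQRT(@N)
SELECT @KolmogorovSmirnovStatistic AS TestStatistic, @CriticalValue AS CriticalValueAtAlpha05,
CASE WHEN @KolmogorovSmirnovStatistic > @CriticalValue THEN 'Reject the hypothesis of normality' ELSE 'Cannot reject normality' END AS Outcome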

Figure 3: Execution Plan for the Kolmogorov-Smirnov and Kuiper’s Procedure (click to enlarge)

…………When all is said and done, the Kolmogorov-Smirnov Test is the closest thing to the ideal goodness-of-fit measure for our use cases. It may have low statistical power, but it can handle big datasets and a wide range of distributions. The internals are shorter to code and moderately easier to explain to end users than those of some other procedures, and the final test statistic is easy to read because it has clear bounds. It also comes with some freebies, like the ability to simultaneously calculate Kuiper’s Test at virtually no extra cost. For most columns I tested there wasn’t much of a difference between the Kolmogorov-Smirnov and Kuiper’s Test results till we got down to the second through fifth decimal places, but there’s no reason not to calculate it if the costs are dwarfed by those incurred by the rest of the procedure. Note that I also return the full @EDFTable, including the ValueCount for each distinct Value, since there’s no point in discarding all that information once the burden of computing it all has been borne. One of the few remaining concerns I have about the test is that much of this information may be wasted in the final test statistics, since merely taking minimums and maximums is often an inefficient way of making inferences about a dataset. This means that more useful, expanded versions of the tests might be possible by calculating more sophisticated measures on the same EDF and CDF data.
…………Best of all, the test outperforms any of the others we’ve used in the last two tutorial series. After eliminating most of the dynamic SQL I overused in previous articles, the execution time actually worsened, till I experimented with some different execution plans. On the first float column in the 11-million-row, 6-gig Higgs Boson dataset, the procedure returned in just 24 seconds, while for the equally-sized second float column it returned in an average of 29. That’s not shabby at all for such a useful statistical test on such a huge dataset, on a clunker of a desktop that’s held together with duct tape. I can’t account for that difference, given that the execution plans were identical and the two columns share the same data type and count; the only significant difference I know of is that one is highly abnormal and the other follows a bell curve. For smaller datasets of a few thousand rows the test was almost instantaneous. I don’t think the execution plan in Figure 3 can be improved upon much, given that just two of the five queries account for practically all of the cost and both of them begin with Index Seeks. In the case of the first, that initial Seek accounts for 92 percent of the cost. The second ought to be the target of any optimization efforts, since it accounts for 85 percent of the batch; within it, however, the only operators that might be worth experimenting with are the Hash Match (Aggregate) and the Sort. Besides, the procedure already performs well enough as it is and should run practically instantaneously on a real database server. In the next installment, we’ll see whether the Lilliefors Test, another measure based on the EDF, can compete with the Kolmogorov-Smirnov Test, which is thus far the most promising measure of fit we’ve yet covered in the series.

[i] See Hopkins, 2014, “Rank Transformation: Non-Parametric Models,” published at the A New View of Statistics webpage http://www.sportsci.org/resource/stats/nonparms.html

[ii] See the Wikipedia pages “Andrey Kolmogorov” and “Vladimir Smirnov” at http://en.wikipedia.org/wiki/Andrey_Kolmogorov and  http://en.wikipedia.org/wiki/Vladimir_Smirnov_(mathematician) respectively.

[iii] I’m slowly compiling a list of the crazy ones and their bizarre antics for a future editorial or whatever – which will include such cases as Rene Descartes’ charming habit of carrying a dummy of his dead sister around Europe and carrying on conversations with it in public. I’m sure there’ll also be room for Kurt Gödel, who had a bizarre fear of being poisoned – so he forced his wife to serve as his food-taster. Nothing says romance like putting the love of your life in harm’s way when you think people are out to get you. When she was hospitalized, he ended up starving to death. Such tales are the norm among the great names in these fields, which is why I’m glad I deliberately decided back in fifth grade not to pursue my fascination with particle physics.

[iv] See National Institute for Standards and Technology, 2014,  “1.3.5.16 Kolmogorov-Smirnov Goodness-of-Fit Test,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm

[v] IBID.

[vi] p. 14, Hofmann, Heike, 2013, “Nonparametric Inference and Bootstrap { Q-Q plots; Kolmogorov Test,” lecture notes published Oct. 11, 2013 at the Iowa State University web address http://www.public.iastate.edu/~hofmann/stat415/lectures/07-qqplots.pdf

[vii] Penn State University, 2014, “Empirical Distribution Functions,” undated course notes posted at the Penn State University website and retrieved Nov. 5, 2014 from the web address https://onlinecourses.science.psu.edu/stat414/node/333

[viii] I also consulted the Wikipedia page “Kolmogorov-Smirnov Test” at http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test for some of these calculations.

[ix] See the Wikipedia article “Kuiper’s Test” at http://en.wikipedia.org/wiki/Kuiper’s_test

[x] See Alion System Reliability Center, 2014, “Kolmogorov-Simirnov: A Goodness of Fit Test for Small Samples,” published in Selected Topics in Assurance Related Technologies, Vol. 10, No. 6. Available online at the Alion System Reliability Center web address https://src.alionscience.com/pdf/K_STest.pdf

[xi] “Critical value beyond which the hypothesis is rejected in Anderson-Darling test is different when Gaussian pattern is being tested than when another distribution such a lognormal is being tested. Shapiro-Wilk critical value also depends on the distribution under test. But Kolmogorov-Smirnov test is distribution-free as the critical values do not depend on whether Gaussianity is being tested or some other form.” No author listed, 2014, “Checking Gaussianity,” published online at the MedicalBiostatistics.com web address http://www.medicalbiostatistics.com/checkinggaussianity.pdf

Goodness-of-Fit Testing with SQL Server Part 6.2: The Ryan-Joiner Test

By Steve Bolton

…………In the last installment of this amateur series of self-tutorials, we saw how the Shapiro-Wilk Test will probably prove less useful to SQL Server users, despite the fact that it is one of the most popular goodness-of-fit tests among statisticians and researchers. Its impressive statistical power is rendered impotent by the fact that the logic of its internal calculations limits it to inputs of just 50 rows (or up to 2,000 when certain revisions are applied), which is chump change when we’re talking about SQL Server tables that often number in the millions of rows. Thankfully, a little-known rival in the same category[1] is available that shares few of the drawbacks that make Shapiro-Wilk such a disappointing choice for our particular use cases. In fact, the Ryan-Joiner Test “is very highly correlated with that of Shapiro and Wilk, so either test may be used and will produce very similar results.”[2] When wading through various head-to-head comparisons of the goodness-of-fit tests published on the Internet, I noticed that on the occasions when the Ryan-Joiner Test was mentioned, it received favorable marks, like this review by Jim Colton at the Minitab Blog:

                “I should note that the three scenarios evaluated in this blog are not designed to assess the validity of the Normality assumption for tests that benefit from the Central Limit Theorem, such as 1-sample, 2-sample, and paired t-tests. Our focus here is detecting Non-Normality when using a distribution to estimate the probability of manufacturing defective (out-of-spec) unit.
…………“In scenario 1, the Ryan-Joiner test was a clear winner. The simulation results are below…
…………“…The Anderson-Darling test was never the worst test, but it was not nearly as effective as the RJ test at detecting a 4-sigma outlier. If you’re analyzing data from a manufacturing process that tends to produce individual outliers, the Ryan-Joiner test is the most appropriate…”
…………“…The RJ test performed very well in two of the scenarios, but was poor at detecting Non-Normality when there was a shift in the data. If you’re analyzing data from a manufacturing process that tends to shift due to unexpected changes, the AD test is the most appropriate.”[3]

…………The most striking drawback is the paucity of public information available on the test, which doesn’t even have a Wikipedia page, thereby forcing me to resort to even less professional sources like Answers.com for matter-of-fact explanations like this: “The Ryan-Joiner test is implemented in the Minitab software package but not widely elsewhere.”[4] It was apparently the brainchild of Brian Joiner and Barbara Ryan, “the founder of Minitab,” but I was unable to find a publicly available copy of the original academic paper they published on the test back in 1976 until after I’d already written most of the code below.[5] Publication of this kind signifies that it is not a proprietary algorithm exclusively owned by Minitab, so we are free to implement it ourselves – provided we can find adequate detail on its inner workings, which turned out to be a tall order. The main drawback of the Ryan-Joiner Test is the difficulty in finding information that can be applied to implementation and testing, which is certainly a consequence of its close association with Minitab, a stats package that competes only tangentially with SQL Server Data Mining (SSDM), as I addressed in Integrating Other Data Mining Tools with SQL Server, Part 2.1: The Minuscule Hassles of Minitab and Integrating Other Data Mining Tools with SQL Server, Part 2.2: Minitab vs. SSDM and Reporting Services. This makes it somewhat opaque, but I was able to overcome this inscrutability enough to get a T-SQL version of it up and running.
…………The underlying mechanisms are still somewhat unclear, but this brief introduction in the LinkedIn discussion group Lean Sigma Six Group Brazil is adequate enough for our purposes:  “This test assesses normality by calculating the correlation between your data and the normal scores of your data. If the correlation coefficient is near 1, the population is likely to be normal. The Ryan-Joiner statistic assesses the strength of this correlation; if it falls below the appropriate critical value, you will reject the null hypothesis of population normality.”[6] As usual, I’ll be omitting those critical values, because of the numerous issues with hypothesis testing I’ve pointed out in previous blog posts. Apparently my misgivings are widely shared by professional statisticians and mathematicians who actually know what they’re talking about, particularly when it comes to the ease and frequency with which all of the caveats and context that ought to accompany statements of statistical significance are carelessly dispensed with. It is not that significance level stats aren’t useful, but that the either-or nature of standard hypothesis testing techniques discards an awful lot of information by effectively shrinking our hard-won calculations down to simple Boolean either-or choices; not only is this equivalent to casting a float or decimal value down to a SQL Server bit data type, but it can also easily lead to errors in interpretation. For this reason and concerns about brevity and simplicity, I’ll leave out the critical values, which can be easily tacked on to my code by anyone with a need for them.

Interpreting the Results in Terms of the Bell Curve

                Aside from that, the final test statistic isn’t that hard to interpret: the closer we get to 1, the more closely the data follows the Gaussian or “normal” distribution, i.e. the bell curve. So far, my test results have all remained within the range of 0 to 1 as expected, but I cannot rule out the possibility that in some situations an undiscovered error will cause them to exceed these bounds. When writing the T-SQL code in Figure 1 I had to make use of just two incomplete sources[7], before finally finding the original paper by Ryan and Joiner at the Minitab website late in the game.[8] This find was invaluable because it pointed out that the Z-Scores (a basic topic I explained way back in Outlier Detection with SQL Server, part 2.1: Z-Scores) in the internal calculations should be done against the standard normal distribution, not the data points.
…………My standard disclaimer that I am still a novice in the fields of data mining and statistics and that my sample code has not yet been thoroughly tested ought not be glossed over, given the number of other mistakes I caught myself making when writing the code below. At one point I accidentally used a minus sign rather than an asterisk in the top divisor; I tested it once against the wrong online calculator, for the normal probability density function (PDF) rather than the cumulative density function (CDF); later, I realized I should have used the standard normal inverse CDF rather than the CDF or PDF; I also used several different improper step values for the RangeCTE, including one that was based on minimum and maximum values rather than the count and another based on fractions. Worst of all, I garbled my code at the last minute by accidentally (and not for the first time) using the All Open Documents option with Quick Replace in SQL Server Management Studio (SSMS). Once I figured out my mistakes, the procedure ended up being a lot shorter and easier to follow than I ever expected. Keep in mind, however, that I didn’t have any published examples to test it against, so there may be other reliability issues lurking within.

Figure 1: T-SQL Code for the Ryan-Joiner Test Procedure
CREATE PROCEDURE [Calculations].[NormalityTestRyanJoinerTestSP]
@Database1 as nvarchar(128) = NULL, @Schema1 as nvarchar(128), @Table1 as nvarchar(128),@Column1 AS nvarchar(128)
AS

DECLARE @SchemaAndTable1 nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTable1 = @Database1 + '.' + @Schema1 + '.' + @Table1

DECLARE @ValueTable table
(ID bigint IDENTITY (1,1),
Value float)

SET @SQLString = 'SELECT ' + @Column1 + ' AS Value
FROM ' + @SchemaAndTable1 + '
WHERE ' + @Column1 + ' IS NOT NULL'

INSERT INTO @ValueTable
(Value)
EXEC (@SQLString)

DECLARE  @Var AS decimal(38,11),
@Count bigint,
@ConstantBasedOnCount decimal(5,4),
@Mean AS decimal(38,11) 

SELECT @Count = Count(*), @Var = Var(Value)
FROM @ValueTable
-- the NOT NULL clause is not necessary here because that is taken care of in the @SQLString

SELECT @ConstantBasedOnCount = CASE WHEN @Count > 10 THEN 0.5 ELSE 0.375 END

; WITH RangeCTE(RangeNumber) as
(
    SELECT 1 as RangeNumber   

       UNION ALL

    SELECT RangeNumber + 1
        FROM RangeCTE
        WHERE RangeNumber  < @Count
) 

SELECT SUM(Value * RankitApproximation) / Power((@Var * (@Count - 1) * SUM(Power(CAST(RankitApproximation AS float), 2))), 0.5) AS RyanJoinerTestStatistic
FROM (SELECT RN, Value, Calculations.NormalDistributionInverseCDFFunction((RN - @ConstantBasedOnCount) / (@Count + 1 - (2 * @ConstantBasedOnCount))) AS RankitApproximation
       FROM (SELECT ROW_NUMBER() OVER (ORDER BY Value) AS RN, Value
              FROM @ValueTable AS T0) AS T1
       INNER JOIN RangeCTE AS T2
       ON  T1.RN = T2.RangeNumber) AS T3
OPTION (MAXRECURSION 0) 

…………Much of the code above is easy to follow if you’ve seen procedures I’ve posted over the last two tutorial series. As usual, the parameters and first couple of lines in the body allow users to perform the test on any column in any database they have sufficient access to, as well as to adjust the precision of the calculations to avoid arithmetic overflows. Starting with my last article, I began using a lot less dynamic SQL to code procedures like these, by instead caching the original values in a @ValueTable table variable. A couple of simple declarations and assignments needed for the rest of the computations follow this. The RangeCTE generates a set of integers that is fed to the Calculations.NormalDistributionInverseCDFFunction I introduced in Goodness-of-Fit Testing with SQL Server, part 2: Implementing Probability Plots in Reporting Services.
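…………One caveat about that step: a recursive CTE crowned with MAXRECURSION 0 generates its integers a row at a time, which can add up against multi-million-row columns. If the RangeCTE ever shows up as a bottleneck, a set-based tally along these lines could be swapped in; this is only a sketch and has not been benchmarked against the procedure above:

-- hypothetical set-based replacement for the RangeCTE; cross-joining a pair of large
-- system views yields far more rows than the @Count integers we need
SELECT TOP (@Count) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RangeNumber
FROM sys.all_columns AS C1
       CROSS JOIN sys.all_columns AS C2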
…………In lieu of making this article any more verbose and dry than it absolutely has to be, I’ll omit a rehash of that topic and simply point users back to the code from that previous article. Once those numbers are derived, the calculations are actually quite simple in comparison to some of the more complex procedures I’ve posted in the past. As usual, I’ll avoid a precise explanation of how I translated the mathematical formulas into code, for the same reason that driver’s ed classes don’t require a graduate degree in automotive engineering: end users need to be able to interpret the final test statistic accurately – which is why I’m not including the easily misunderstood critical values – but shouldn’t be bothered with the internals. I’ve supplied just enough detail so that the T-SQL equivalent of a mechanic can fix my shoddy engineering, if need be. It may be worth noting though that I can’t simply use the standard deviation in place of the root of the variance as I normally do, because the square root in the Ryan-Joiner equations is calculated after the variance has been multiplied by other terms.

Figure 2: Sample Results from Ryan-Joiner Test Procedure
EXEC   Calculations.NormalityTestRyanJoinerTestSP
              @Database1 = N'DataMiningProjects',
              @Schema1 = N'Health',
              @Table1 = N'DuchennesTable',
              @Column1 = N'Hemopexin'


…………The sample query in Figure 2 was run against a column of data on the Hemopexin protein contained within a dataset on the Duchennes form of muscular dystrophy, which I downloaded long ago from the Vanderbilt University’s Department of Biostatistics and converted to a SQL Server table. Since this table only has 209 rows and occupies just 9 kilobytes, I customarily stress test the procedures I post in these tutorials against an 11-million-row table of data on the Higgs Boson, which is made freely available by the University of California at Irvine’s Machine Learning Repository and now occupies nearly 6 gigabytes in the same database.
…………I also tested it against columns in other tables I’m quite familiar with and discovered a pattern that is both comforting and disconcerting at the same time: the test statistic is indeed closer to 0 than 1 on columns I already know to be abnormal, but there may be a scaling issue in the internal calculations because the values are still unexpectedly high for all columns. I know from previous goodness-of-fit and outlier detection tests that the Hemopexin column is more abnormal than some of the other Duchennes columns and as expected, it had a lower Ryan-Joiner statistic; the problem is that it was still fairly close to 1. Likewise, the histograms I posted way back in Outlier Detection with SQL Server, part 6.1: Visual Outlier Detection with Reporting Services clearly show that the first float column in the Higgs Boson dataset is hopelessly lopsided and therefore can’t come from a normal distribution, while the second follows a clear bell curve. It is not surprising then that the former scored a 0.909093348747035 and the latter a 0.996126753961487 in the Ryan-Joiner Test. The order of the values always seems to correctly match the degree of normality for every practice dataset I use, which is a good sign, but the gaps between them may not be proportionally correct. In the absence of example data to verify my procedure against, I can’t tell for sure if this is a problem or not.

Next Up: Kolmogorov and EDF-Based Tests

               Either way, the test is useful as-is, because it at least assigns test statistic values that are in the expected order, regardless of whether or not they are scaled correctly. These results come at a moderate, tolerable performance cost, clocking in at 6:43 for the first float column and 6:14 for the second. As usual, your results will probably be several orders of magnitude better than mine, given that I’m using a clunker of a development machine, not a real database server. The execution plans consist of two queries, the second of which accounts for 97 percent of the cost of the whole batch; out of the 24 operators in that query, a single Sort accounts for 96 percent of the cost. It occurs prior to a Merge Join, so there may be some way to optimize the procedure with join hints or recoding with optimization in mind. We’re unlikely to get much benefit out of analyzing the execution plan further, because it consists almost entirely of spools and Compute Scalar operators with infinitesimal costs, plus two Index Seeks, which is what we want to see.
…………The Ryan-Joiner Test performs well enough that DBAs and data miners might find it a more useful addition to their toolbelt than the far better-known Shapiro-Wilk Test, which is simply inapplicable to most Big Data scenarios because of its fatal limitations on input sizes. There may be some lingering concerns about its reliability, but this can be rectified through a more diligent search of the available literature for examples that we can test it against; if we really need this particular statistic, then conferring with a professional statistician for ten minutes to verify the correctness of the results might also get the job done. If misgivings about its reliability are a real concern, then we can always turn to the alternatives we’ll cover in the next segment of this series, like the Kolmogorov-Smirnov (my personal favorite, which was also invented by my favorite mathematician), Anderson-Darling, Kuiper’s and Lilliefors Tests, as well as the Cramér–von Mises Criterion. Judging from the fact that experts seem to divide the various goodness-of-fit tests into categories along the same lines[9], I was right to segregate the Jarque-Bera and D’Agostino’s K-Squared Test into a separate segment at the beginning of this series for measures based on kurtosis and skewness. The Shapiro-Wilk and Ryan-Joiner Tests likewise have a separate set of internal mechanisms in common, based on measures of correlation. In the next five articles, we’ll cover a set of goodness-of-fit measures that rely on a different type of internal mechanism, the empirical distribution function (EDF), which is a lot easier to calculate and explain than the long-winded name would suggest.

[1] These authors say it is  “similar to the SW test”:  p. 2142, Yap, B. W. and Sim, C. H., 2011, “Comparisons of Various Types of Normality Tests,” pp. 2141-2155 in Journal of Statistical Computation and Simulation, Vol. 81, No. 12. Also see the remark to the effect that “This test is similar to the Shapiro-Wilk normality test” at Gilberto, S. 2013, “Which Normality Test May I Use?” published in the Lean Sigma Six Group Brazil discussion group, at the LinkedIn web address http://www.linkedin.com/groups/Which-normality-test-may-I-3713927.S.51120536

[2] See the Answers.com webpage “What is Ryan Joiner Test” at http://www.answers.com/Q/What_is_Ryan_joiner_test

[3] Colton, Jim, 2013, “Anderson-Darling, Ryan-Joiner, or Kolmogorov-Smirnov: Which Normality Test Is the Best?” published Oct. 10, 2013 at The Minitab Blog web address http://blog.minitab.com/blog/the-statistical-mentor/anderson-darling-ryan-joiner-or-kolmogorov-smirnov-which-normality-test-is-the-best

[4] See the aforementioned Answers.com webpage.

[5] See the comment by the user named Mikel on Jan. 23, 2008 in the iSixSigma thread “Ryan-Joiner Test” at http://www.isixsigma.com/topic/ryan-joiner-test/

[6] Gilberto, S. 2013, “Which Normality Test May I Use?” published in the Lean Sigma Six Group Brazil discussion group, at the LinkedIn web address http://www.linkedin.com/groups/Which-normality-test-may-I-3713927.S.51120536

[7] No author listed, 2014, “7.5 – Tests for Error Normality,” published at the Penn State University web address https://onlinecourses.science.psu.edu/stat501/node/366 .This source has several other goodness-of-test formulas arranged in a convenient format. Also see Uaieshafizh, 2011, “Normality Test Dengan Menggunakan Uji Ryan-Joiner,” published Nov. 1, 2011 at the Coretan Uaies Hafizh web address http://uaieshafizh.wordpress.com/2011/11/01/uji-ryan-joiner/ . Translated from Indonesian by Google Translate.

[8] Ryan, Jr., Thomas A. and Joiner, Brian L., 1976, “Normal Probability Plots and Tests for Normality,” Technical Report, published by the Pennsylvania State University Statistics Department. Available online at the Minitab web address http://www.minitab.com/uploadedFiles/Content/News/Published_Articles/normal_probability_plots.pdf

[9] For an example, see p. 2143, Yap, B. W. and Sim, C. H., 2011, "Comparisons of Various Types of Normality Tests," pp. 2141-2155 in Journal of Statistical Computation and Simulation, Vol. 81, No. 12. Available online at the web address http://www.tandfonline.com/doi/pdf/10.1080/00949655.2010.520163 : "Normality tests can be classified into tests based on regression and correlation (SW, Shapiro–Francia and Ryan–Joiner tests), CSQ test, empirical distribution test (such as KS, LL, AD and CVM), moment tests (skewness test, kurtosis test, D'Agostino test, JB test), spacings test (Rao's test, Greenwood test) and other special tests." I have yet to see the latter two tests mentioned anywhere else, so I'll omit them from the series for now on the grounds that sufficient information will likely be even harder to find than it was for the Ryan-Joiner Test.

 

Goodness-of-Fit Testing with SQL Server Part 6.1: The Shapiro-Wilk Test

By Steve Bolton

…………Just as a good garage mechanic will fill his or her Craftsman with tools designed to fix specific problems, it is obviously wise for data miners to stockpile a wide range of algorithms, statistical tools, software packages and the like to deal with a wide variety of user scenarios. Some of the tests and algorithms I've covered in this amateur self-tutorial series and the previous one on outlier detection are applicable to a broad range of problems, while others are tailor-made to address specific issues; what works in one instance may be entirely inappropriate in a different context. For example, some fitness tests are specifically applicable only to linear regression and others to logistic regression, as explained in Goodness-of-Fit Testing with SQL Server, part 4.1: R2, RMSE and Regression-Related Routines and Goodness-of-Fit Testing with SQL Server part 4.2: The Hosmer-Lemeshow Test. Other measures we've surveyed recently, like the Chi-Squared, Jarque-Bera and D'Agostino-Pearson Tests, can only be applied to particular probability distributions or are calculated in ways that can be a drag on performance when run against the wrong type of dataset. The metric I'll be discussing this week stands out as one of the most popular goodness-of-fit tests, in large part because it has better "statistical power," which is a numerical measure of how often the actual effects of a variable are detected by a particular test.
…………The Shapiro-Wilk Test is also apparently flexible enough to be extended to other distributions beyond the "normal" Gaussian (i.e. the bell curve), such as the uniform, the exponential, and to a certain extent "to any symmetric distribution."[1] Its flexibility is augmented by scale and origin invariance, two properties that statisticians prefer to endow their metrics with because multiplying the terms by a common factor or choosing a different starting point doesn't lead to incomparable values.[2] For these reasons it is widely implemented in statistical software that competes in a tangential way with SQL Server Data Mining (SSDM), most notably "R, Stata, SPSS and SAS."[3] As we shall see, however, there is less incentive to implement it in SQL Server than in these dedicated stats packages, because of the specific nature of the datasets we work with.

The Fatal Flaw of Shapiro-Wilk for Big Data

                The usefulness of the Shapiro-Wilk Test is severely constrained by a number of drawbacks, such as sensitivity to outliers and the fact that its authors envisioned it as an adjunct to the kind of visualizations we covered in Goodness-of-Fit Testing with SQL Server, part 2: Implementing Probability Plots in Reporting Services, not as a replacement for them.[4] The fatal flaw, however, is that the Shapiro-Wilk Test can only handle datasets up to 50 rows in size; approximations have been developed by statisticians like Patrick Royston that can extend it to at least 2,000 rows, but that is still a drop in the bucket compared to the millions of rows found in SQL Server tables. As I've pointed out in previous articles, one of the great strengths of the "Big Data" era is that we can now plumb the depths of such huge treasure troves in order to derive information of greater detail, which is an advantage we shouldn't have to sacrifice merely to accommodate metrics that were designed generations ago with entirely different contexts in mind. Furthermore, the test is normally used in hypothesis testing on random samples when the means and variances are unknown, which as I have explained in the past, are not user scenarios that the SQL Server community will encounter often.[5] The means and variances of particular columns are trivial to calculate with built-in T-SQL functions. Moreover, random sampling is not as necessary in our field because we have access to such huge repositories of information, which are often equivalent to the full population, depending on what questions we choose to ask about our data.
…………I'll have to implement the T-SQL code for this article against a small sample of our available practice data, simply because of the built-in limitation on row counts. In order to accommodate larger datasets, we'd have to find a different way of performing the internal calculations, which are subject to combinatorial explosion. The main sticking point is a constant in the Shapiro-Wilk equations which must be derived through covariance matrices, which quickly become far too large to calculate for big datasets, regardless of how much of a performance hit we're willing to absorb. As Royston notes, deriving the constant for a 1,500-row table would require the storage of 1,126,500 reals, given that the covariance matrix requires a number of comparisons roughly equivalent to the count of the table multiplied by one less than itself, which is then halved. Strictly speaking that quadratic growth is still polynomial, but at the scale of the multi-million-row tables the SQL Server community works with it is prohibitively expensive all the same.
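To put a rough number on that growth, the back-of-the-envelope query below computes the approximate pairwise entry count and storage for a given row count. It assumes the roughly n(n – 1) / 2 comparisons described above, each stored as an 8-byte float, so treat it as a sanity check rather than part of the test procedure itself:

-- a rough sanity check on the combinatorial growth described above; assumes about
-- n * (n - 1) / 2 covariance entries, each stored as an 8-byte float
DECLARE @n bigint = 1500;
SELECT @n * (@n - 1) / 2 AS ApproximatePairwiseEntries,
       (@n * (@n - 1) / 2) * 8.0 / 1048576 AS ApproximateMegabytes;

For 1,500 rows that works out to a little over a million entries, in the same ballpark as Royston's figure; plug in a row count in the millions and the numbers become absurd very quickly.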

Workarounds for Combinatorial Explosion

                 My math may be off, but I calculated that stress-testing the Shapiro-Wilk procedure against the first float column in the 11-million-row Higgs Boson Dataset (which I downloaded from the University of California at Irvine's Machine Learning Repository and converted into a SQL Server table of about 6 gigabytes) would require about 1.2 trillion float values and 67 terabytes of storage space. I have the sneaking suspicion that no one in the SQL Server community has that much free space in their TempDB. And that is before factoring in such further performance hits as the matrix inversion and other such transforms.
…………While writing a recent article on Mahalanobis Distance, combinatorial explosion of matrix determinants forced me to scrap my sample code for a type of covariance matrix that compared the global variance values for each column against one another; even that was a cheap workaround for calculating what amounts to a cross product against each set of local values. In this case, we’re only talking about a bivariate comparison, so inserting the easily calculable global variance value would leave us with a covariance matrix of just one entry, which isn’t going to fly.[6] We can’t fudge the covariance matrix in this way, but it might be possible to use one of Royston’s approximations to derive that pesky constant in a more efficient way. Alas, I was only able to read a couple of pages in his 1991 academic journal article on the topic, since Springer.com charges an arm and a leg for full access. I had the distinct sense, however, that it would still not scale to the size of datasets typically associated with the Big Data buzzword. Furthermore, a lot of it was still over my head, as was the original 1965 paper by Samuel S. Shapiro and Martin B. Wilk (although not as far as such topics used to be, which is precisely why I am using exercises like these in order to acquire the skills I lack). Thankfully, that article in Biometrika provides an easily adaptable table of lookup values for that constant[7], as well as a legible example that I was able to verify my results against. Figure 1 below provides DDL for creating a lookup table to hold those values, which you’ll have to copy yourself from one of the many publicly available sources on the Internet, including the original paper.[8]

Figure 1: DDL for the Shapiro-Wilk Lookup Table
CREATE TABLE [Calculations].[ShapiroWilkLookupTable](
       [ID] [smallint] IDENTITY(1,1) NOT NULL,
       [ICount] bigint NULL,
       [NCount] bigint NULL,
       [Coefficient] [decimal](5, 4) NULL,
 CONSTRAINT [PK_ShapiroWilkLookupTable] PRIMARY KEY CLUSTERED ([ID] ASC)
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
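Populating the table is a manual step. The sketch below only illustrates the insert pattern: ICount is the coefficient's position for a given sample size and NCount is that sample size, but the numeric coefficients shown are placeholders rather than the published Shapiro-Wilk values, which you'd copy from the 1965 paper or the Zaiontz tables cited in the footnotes.

-- illustrative only: replace the placeholder coefficients below with the published
-- Shapiro-Wilk values for each sample size (NCount) and position (ICount)
INSERT INTO [Calculations].[ShapiroWilkLookupTable] (ICount, NCount, Coefficient)
VALUES (1, 7, 0.0000),
       (2, 7, 0.0000),
       (3, 7, 0.0000);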

Figure 2: T-SQL Code for the Shapiro-Wilk Test
CREATE PROCEDURE [Calculations].[GoodnessOfFitShapiroWilkTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128)
AS
DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName

DECLARE
@ValueTable table
(ID bigint IDENTITY (1,1),
Value float)

SET @SQLString = 'SELECT ' + @ColumnName + ' AS Value
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL'

INSERT INTO @ValueTable
(Value)
EXEC (@SQLString)

DECLARE @Count bigint,
@CountPlusOneQuarter decimal(38,2),
@CountIsOdd bit = 0,
@CountDivisor float,
@S2  float,
@ShapiroWilkTestStatistic  float,
@One  float = 1

SELECT @Count = Count(*)
FROM @ValueTable

SELECT @CountPlusOneQuarter = @Count + 0.25
SELECT @CountIsOdd = CASE WHEN @Count % 2 = 1 THEN  1 ELSE 0 END
SELECT @CountDivisor = CASE WHEN @CountIsOdd = 1 THEN (@Count / CAST(2 as float)) + 1 ELSE (@Count / CAST(2 as float)) END

-- the denominator S2 is the running sum of squares minus the squared running sum divided by the count,
-- taken at the final row so that it covers the whole dataset
SELECT  TOP 1 @S2 = Sum(Power(Value, 2)) OVER (ORDER BY Value) - (Power(Sum(Value) OVER (ORDER BY Value), 2) * (@One / CAST(@Count as float)))
FROM @ValueTable
ORDER BY Value DESC

-- fold the ordered dataset in half, pair each low value with its high counterpart,
-- multiply the differences by the matching lookup coefficients and square the sum
SELECT @ShapiroWilkTestStatistic = Power(CoefficientSum, 2) / @S2
FROM  (SELECT TOP 1 SUM(FactorByShapiroWilkLookup * Coefficient) OVER (ORDER BY Coefficient DESC) AS CoefficientSum
       FROM (SELECT T1.RN AS RN, T2.Value - T1.Value AS FactorByShapiroWilkLookup
              FROM (SELECT TOP 99999999999 Value, ROW_NUMBER () OVER (ORDER BY Value ASC) AS RN
                          FROM @ValueTable
                          WHERE Value IS NOT NULL
                          ORDER BY Value ASC) AS T1
                    INNER JOIN  (SELECT TOP 99999999999 Value, ROW_NUMBER () OVER (ORDER BY Value DESC) AS RN
                           FROM @ValueTable
                           WHERE Value IS NOT NULL
                           ORDER BY Value DESC) AS T2
                     ON T1.RN = T2.RN
                     WHERE T1.RN <= @CountDivisor) AS T3
              INNER JOIN Calculations.ShapiroWilkLookupTable
              ON RN = ICount AND NCount = @Count
       ORDER BY RN DESC) AS T4

SELECT @ShapiroWilkTestStatistic AS ShapiroWilkTestStatistic

…………The use of the lookup table removes the need for the complex matrix logic, which might have made the T-SQL in Figure 2 even longer than the matrix code I originally wrote for Outlier Detection with SQL Server, part 8: A T-SQL Hack for Mahalanobis Distance (which might have set a record for the lengthiest T-SQL samples ever posted in a blog, if I hadn’t found a workaround at the last minute). Longtime readers may notice a big change in the format of my SQL; gone is the @DecimalPrecision parameter, which enabled users to set their own precision and scale, but which made the code a lot less legible by requiring much bigger blocks of dynamic SQL. From now on, I’ll be using short dynamic SQL statements like the one included in @SQLString and performing a lot of the math operations on a table variable that holds the results. I ought to have done this sooner, but one of the disadvantages of working in isolation is that you’re missing the feedback that would ferret out bad coding habits more quickly. As usual, the parameters and first couple of lines within the body enable users to perform the test on any table column in any database they have sufficient access to.
…………Most of the internal variables and constants we'll need for our computations are declared near the top, followed by some simple assignments of values based on the record count. The @S2 assignment requires a little more code. It is then employed in a simple division operation in the last block, which is a series of subqueries and windowing operations that retrieve the appropriate lookup value, which depends on the record count. It also sorts the dataset by value, then derives order statistics by essentially folding the table in half, so that the first and last values are compared, then the second from the beginning and the second from the end, and so on right up to the midpoint. The final calculations on the lookup values and these order statistics are actually quite simple. For this part, I also consulted the National Institute for Standards and Technology's Engineering Statistics Handbook, which is one of the most succinctly written sources of information I've found to date on the topic of statistics.[9] Because I'm still a novice, the reasons why these particular calculations are used remain a mystery to me, although I've frequently seen Shapiro and Wilk mentioned in connection with Analysis of Variance (ANOVA), which is a simpler topic to grasp if not to implement. Since a plain float will do in place of variable precision here, the code is simplified by inserting the results of a query on the @SchemaAndTableName into a table variable and then performing all the math on it outside of the dynamic SQL block.
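For reference, the statistic the procedure computes can be written as follows, as I read the standard formula, with the a_i standing for the lookup coefficients and the x_(i) for the values sorted in ascending order:

$$W = \frac{\left(\sum_{i=1}^{\lfloor n/2 \rfloor} a_i\,\bigl(x_{(n-i+1)} - x_{(i)}\bigr)\right)^{2}}{\sum_{i=1}^{n} \bigl(x_i - \bar{x}\bigr)^{2}}$$

The numerator is the "folded" sum the last block builds, while the denominator is exactly what @S2 holds, since the sum of squares minus the squared sum divided by the count is algebraically equal to the sum of squared deviations from the mean.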

Figure 3: Sample Results from the Shapiro-Wilk Test
EXEC   Calculations.GoodnessOfFitShapiroWilkTestSP
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'First50RowsPyruvateKinaseView',
              @ColumnName = N'PyruvateKinase'

[Image: Shapiro-Wilk query results]

…………In Figure 3, I ran the procedure against a view created on the first 50 non-null values of the Pyruvate Kinase enzyme, derived from the 209-row table of Duchennes muscular dystrophy data I downloaded from the Vanderbilt University's Department of Biostatistics. Given that we can't calculate this on more than 50 rows at this point, the traditional performance test of the procedure on the HiggsBosonTable is basically pointless. Only if the lookup table could somehow be extended with new coefficients would it pay to look at the execution plan. When run against the trivial 7-row example in the Shapiro-Wilk paper, it had a couple of Clustered Index Scans that could probably be turned into Seeks with proper indexing on both the lookup table and the table being tested. It also had a couple of expensive Sort operators and a Hash Match that might warrant further inspection if the procedure could somehow be extended to datasets big enough to affect performance.
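If that day ever comes, something like the index below would be a reasonable first experiment; it is only a sketch against the lookup table DDL in Figure 1, not a tested recommendation, but it covers the join on NCount and ICount so the optimizer at least has the option of seeking rather than scanning:

-- a possible covering index for the lookup join; assumes the Figure 1 DDL above
CREATE NONCLUSTERED INDEX IX_ShapiroWilkLookupTable_NCount_ICount
ON [Calculations].[ShapiroWilkLookupTable] (NCount, ICount)
INCLUDE (Coefficient);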
…………Interpretation of the final test statistics is straightforward in one sense, yet tricky in another. The closer the statistic is to 1, the more closely the data approaches a normal distribution. It is common to assign confidence intervals, P-values and the like with the Shapiro-Wilk Test, but I am omitting this step out of growing concern about the applicability of hypothesis testing to our use cases. I’ve often questioned the wisdom of reducing high-precision test statistics down to simple Boolean, yes-no answers about whether a particular column is normally distributed, or a particular value is an outlier; not only is it akin to taking a float column in a table and casting it to a bit, but it prevents us from asking more sophisticated questions of our hard-won computations like, “How normally distributed is my data?”

More Misgivings About Hypothesis Testing-Style Metrics

                The more I read by professional statisticians and data miners who really know what they’re talking about, the less at ease I feel. Doubts about the utility of hypothesis tests of normality are routinely expressed in the literature; for some easily accessible examples that pertain directly to today’s metric, see the StackOverflow threads “Seeing if Data is Normally Distributed in R”[10] and “Perform a Shapiro-Wilk Normality Test”.[11] Some of the books I’ve read recently in my crash course in stats have not just echoed the same sentiments, but added dozens of different potential pitfalls in interpretation.[12] Hypothesis testing encompasses a set of techniques that are routinely wielded without the precision and skill required to derive useful information from them, as many professional statisticians lament. Worse still, the inherent difficulties are greatly magnified by Big Data, which comes with a unique set of use cases. The SQL Server user community might find bona fide niches for applying hypothesis testing, but for the foreseeable future I’ll forego that step and simply use the test statistics as measures in their own right, which still gives end users the freedom to implement confidence intervals and the like if they find a need.
…………The Shapiro-Wilk Test in its current form is likewise not as likely to be as useful to us as it is to researchers in other fields, in large part because of the severe limitations on input sizes. As a rule, DBAs and data miners are going to be more interested in exploratory data mining rather than hypothesis testing, using very large datasets where the means and variances are often easily discernible and sampling is less necessary. Perhaps the Shapiro-Wilk Test could be adapted to accommodate much larger datasets, as Royston apparently attempted to do by using quintic regression coefficients to approximate that constant the Shapiro-Wilk equations depend upon.[13] In fact, given that I’m still learning about the field of statistics, it is entirely possible that a better workaround is already available. I’ve already toyed with the idea of breaking up entire datasets into random samples of no more than 50 rows, but I’m not qualified to say if averaging the test statistics together would be a logically valid measure. I suspect that the measure would be incorrectly scaled because of the higher record counts.
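For what it's worth, drawing one such 50-row random sample is trivial in T-SQL; the sketch below pulls a sample of the same Pyruvate Kinase column used in Figure 3, which could then be wrapped in a view and fed to the procedure. Whether averaging W statistics across many such samples is statistically defensible is a question I'll leave to the professionals.

-- a minimal sketch of the 50-row random sampling idea; ORDER BY NEWID() is expensive
-- on large tables, so treat this as an illustration rather than a production pattern
SELECT TOP 50 PyruvateKinase
FROM Health.DuchennesTable
WHERE PyruvateKinase IS NOT NULL
ORDER BY NEWID();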
…………Until some kind of enhancement becomes available, it is unlikely that the Shapiro-Wilk Test will occupy a prominent place in any DBA's fitness testing toolbox. There might be niches where small random samples and hypothesis testing make it a good choice, but for now its input limit is simply too small to accommodate the sheer size of the data we're working with. I looked into another potential workaround in the form of the Shapiro-Francia Test, but since it is calculated in a similar way and is "asymptotically equivalent"[14] to the Shapiro-Wilk (i.e., they basically converge and become equal for all intents and purposes), I chose to skip that alternative for the time being. In next week's article we'll instead discuss the Ryan-Joiner Test, which is often lumped in the same category with the Shapiro-Wilk. After that, we'll survey a set of loosely related techniques that are likely to be of more use to the SQL Server community, encompassing the Kolmogorov-Smirnov, Anderson-Darling, Kuiper's and Lilliefors Tests, as well as the Cramér–von Mises Criterion.

[1] Royston, Patrick, 1991, “Approximating the Shapiro-Wilk W-Test for Non-Normality,” pp. 117-119 in Statistics and Computing, September, 1992. Vol. 2, No. 3. Available online at http://link.springer.com/article/10.1007/BF01891203#page-1

[2] p. 591, Shapiro, Samuel S. and Wilk, Martin B., 1965, “An Analysis of Variance Test for Normality (Complete Samples),” pp. 591-611 in Biometrika, December 1965. Vol. 52, Nos. 3-4.

[3] See the Wikipedia page “Shapiro-Wilk Test” at http://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test

[4] p. 610, Shapiro and Wilk, 1965.

[5] p. 593, Shapiro, Samuel S. and Wilk, Martin B., 1965, “An Analysis of Variance Test for Normality (Complete Samples),” pp. 591-611 in Biometrika, December 1965. Vol. 52, Nos. 3-4.

[6] Apparently there is another competing definition of the term, in which values are compared within a particular column rather than across columns. See the Wikipedia page "Covariance Matrix" at http://en.wikipedia.org/wiki/Covariance_matrix#Conflicting_nomenclatures_and_notations

[7] pp. 603-604, Shapiro, Samuel S. and Wilk, Martin B., 1965, “An Analysis of Variance Test for Normality (Complete Samples),” pp. 591-611 in Biometrika, December 1965. Vol. 52, Nos. 3-4.

[8] Another source of the Shapiro-Wilk coefficient is Zaiontz, Charles, 2014, “Shapiro-Wilk Tables,” posted at the Real Statistics Using Excel blog web address http://www.real-statistics.com/statistics-tables/shapiro-wilk-table/

[9] For this part, I also consulted the National Institute for Standards and Technology, 2014, “7.2.1.3 Anderson-Darling and Shapiro-Wilk Tests,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm

[10] See especially the comment by Ian Fellows on Oct. 17, 2011:

                “Normality tests don’t do what most think they do. Shapiro’s test, Anderson Darling, and others are null hypothesis tests AGAINST the assumption of normality. These should not be used to determine whether to use normal theory statistical procedures. In fact they are of virtually no value to the data analyst. Under what conditions are we interested in rejecting the null hypothesis that the data are normally distributed? I have never come across a situation where a normal test is the right thing to do. When the sample size is small, even big departures from normality are not detected, and when your sample size is large, even the smallest deviation from normality will lead to a rejected null…”
…………“…So, in both these cases (binomial and lognormal variates) the p-value is > 0.05 causing a failure to reject the null (that the data are normal). Does this mean we are to conclude that the data are normal? (hint: the answer is no). Failure to reject is not the same thing as accepting. This is hypothesis testing 101.”
…………“But what about larger sample sizes? Let’s take the case where there the distribution is very nearly normal.”
…………“Here we are using a t-distribution with 200 degrees of freedom. The qq-plot shows the distribution is closer to normal than any distribution you are likely to see in the real world, but the test rejects normality with a very high degree of confidence.”
…………“Does the significant test against normality mean that we should not use normal theory statistics in this case? (another hint: the answer is no:) )”

[11] Note these helpful comments by Paul Hiemstra on March 15, 2013:

                “An additional issue with the Shapiro-Wilks test is that when you feed it more data, the chances of the null hypothesis being rejected becomes larger. So what happens is that for large amounts of data even veeeery small deviations from normality can be detected, leading to rejection of the null hypothesis even though for practical purposes the data is more than normal enough…”
…………“…In practice, if an analysis assumes normality, e.g. lm, I would not do this Shapiro-Wilks test, but do the analysis and look at diagnostic plots of the outcome of the analysis to judge whether any assumptions of the analysis where violated too much. For linear regression using lm this is done by looking at some of the diagnostic plots you get using plot (lm()). Statistics is not a series of steps that cough up a few numbers (hey p < 0.05!) but requires a lot of experience and skill in judging how to analysis your data correctly.”

[12] A case in point with an entire chapter devoted to the shortcomings of hypothesis testing methods is Kault, David, 2003, Statistics with Common Sense. Greenwood Press: Westport, Connecticut.

[13] His approximation method is also based on Weisberg, Sanford and Bingham, Christopher, 1975, “An Approximate Analysis of Variance Test for Non-Normality Suitable for Machine Calculation,” pp 133-134 in Technometrics, Vol. 17.

[14] p. 117, Royston.

Goodness-of-Fit Testing with SQL Server Part 5: The Chi-Squared Test

By Steve Bolton

…………As I've cautioned before, I'm writing this series of amateur self-tutorials in order to learn how to use SQL Server to perform goodness-of-fit testing on probability distributions and regression lines, not because I already know the topic well. Along the way, one of the things I've absorbed is that the use cases for the various goodness-of-fit tests are more sharply delineated than those of the statistical tests for outlier detection, which was the topic of my last tutorial series. I covered some of the more general measures in Goodness-of-Fit Testing with SQL Server, part 1: The Simplest Methods, but even some of these – like the 3-Sigma Rule – are limited only to the Gaussian or "normal" distribution, i.e. the bell curve. Many of the other metrics we'll survey later in this series are likewise limited to specific data types, such as the popular Kolmogorov-Smirnov and Anderson-Darling Tests, which cannot be applied to nominal data (i.e. corresponding to the Discrete Content type in SQL Server Data Mining).[1] For that task, you need a metric like the Chi-Squared Test (or 𝜒²), which can handle nominal data types as well as continuous ones, which are measured in infinitesimal gradations; cases in point would include the decimal and float types in SQL Server.
…………In addition to the bell curve, the 𝜒² can handle such other popular distributions as the Poisson, log normal, Weibull, exponential, binomial and logistic, plus any others that have an associated cumulative distribution function (CDF).[2] Ralph B. D'Agostino, one of the inventors of the metric we discussed in Goodness-of-Fit Testing with SQL Server, part 3.2: D'Agostino's K-Squared Test, cautions though that analyses of the 𝜒² Test indicate this flexibility comes at the cost of decreased statistical power; as he and some of his colleagues put it in a 1990 academic paper, "The extensive power studies just mentioned have also demonstrated convincingly that the old warhorses, the chi-squared test and the Kolmogorov test (1933), have poor power properties and should not be used when testing for normality."[3] Some experts consider this flaw to be almost fatal, to the point where one writes, "If you want to test normality, a chi-squared test is a really bad way to do it. Why not, say, a Shapiro-Francia test or say an Anderson-Darling adjusted for estimation? You'll have far more power."[4] As we shall see in a few weeks, the Anderson-Darling Test has other limitations beyond its inability to handle nominal columns, whereas I believe the Shapiro-Francia Test is based on the Shapiro-Wilk, which is computationally expensive and limited to what the SQL Server community would consider very small sample sizes. Each test has its own unique set of strengths and weaknesses, which ought to strongly influence a data miner's choices.

More “Gotchas” with the 𝜒² Test (and Its Inventor)

                A further caveat of the 𝜒² Test is that the population ought to be ten times more numerous than the sample[5], but one of the strengths of Big Data-era analysis is that we can use modern set-based methods to traverse gigantic datasets, rather than taking dinky slices of the kind normally seen in hypothesis testing. As discussed over the course of the last two tutorial series, I’m shying away from the whole field of hypothesis testing because it is not well-suited to our use cases, which may involve hundreds of millions of rows that might represent a full population rather than 50 or 100 from a sample that rarely does; furthermore, the act of applying the usual confidence and significance levels and the like reduces such tests down to a simple Boolean, yes-no answer. This represents a substantial reduction in the information provided by the test statistics, akin to truncating a float or decimal column down to a SQL Server bit data type; by retaining the full statistic, we can measure how normal or exponential or uniform a particular dataset may be.[6]
…………That is why in the last article, I skipped the usual step of plugging the Hosmer-Lemeshow Test results into a 𝜒² Test, to derive confidence levels and the like based on how well they approximate a 𝜒² distribution.[7] In fact, such comparisons to the 𝜒² distribution seem to be as common in hypothesis testing as those to Student's T-distribution, or the F-distribution in the case of Analysis of Variance (ANOVA). Further adding to the confusion is the fact that there is also a 𝜒² Test of Independence, in which contingency tables are used to establish relationships between multiple variables. There is some overlap in the underlying concepts, but the two 𝜒² Tests are not identical.[8] The goodness-of-fit version we're speaking of here was developed by Karl Pearson, one of the most brilliant statisticians of the 19th Century – but also one of the most twisted. As I've pointed out several times since beginning my series on A Rickety Stairway to SQL Server Data Mining, ordinary mathematicians might be stable people, but a frightening share of the rare geniuses among them have been not just eccentric, but Norman Bates-level crazy. Pearson was a blatant racist and Social Darwinist who sought to extinguish the "unfit" through such schemes as eugenics[9], and thereby helped feed the intellectual current in Europe that eventually brought Hitler to power. We can still use his statistical tests, just as we use the rockets devised by Wernher von Braun and the quantum mechanics pioneered by Werner Heisenberg – provided that we put them to better purposes.

Deriving the 𝜒² from CDFs

                You don't have to be a proverbial rocket scientist in order to calculate the 𝜒² Test, nor do you need to do the kind of mental gymnastics required for the Heisenberg Uncertainty Principle. The equation is actually quite simple, especially since it follows a form similar to that of many of the other test statistics surveyed in the last two tutorial series. Like Z-Scores and so many other metrics, the 𝜒² Test involves subtracting one value from another, squaring the result and then summing across the entire dataset, all of which can be easily implemented with T-SQL windowing functions. The difference is that in this case, we're putting the data in ascending order, carving it into bands and, for each band, subtracting the expected frequency derived from the CDF of the distribution we're testing from the frequency actually observed.
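Written out, the statistic takes the familiar form below, with O_i and E_i standing for the observed and expected counts in band i, k for the number of bands, n for the record count and F for the CDF being tested. As I read the code in Figure 2, the "reduced" variant it returns simply divides this sum by the band count, whereas the textbook reduced chi-squared divides by the degrees of freedom; either way the goal is only to scale the result down to something readable.

$$\chi^{2} = \sum_{i=1}^{k} \frac{(O_i - E_i)^{2}}{E_i}, \qquad E_i = n\,\bigl[F(\mathrm{max}_i) - F(\mathrm{min}_i)\bigr]$$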
…………Some CDFs are trivial to calculate, but as I mentioned in Goodness-of-Fit Testing with SQL Server, part 2: Implementing Probability Plots in Reporting Services, I had a hell of a time deriving the correct values for the normal distribution's CDF – as do many novices, in large part because there is no closed-form solution to that particular formula. Rather than rehash that whole topic of how to use approximations to derive the normal CDF, I'll simply reuse most of the code from that article to implement that part of the 𝜒² equation. I had to tweak it a little so that I could calculate only the handful of CDF values we actually need, rather than every single probability in a defined range; this called for passing it a table parameter of the type shown below, which is populated in the middle of Figure 2. Keep in mind that this Gaussian CDF is based on the simplest approximation I could find, so once you get about five or six places right of the decimal point, some inaccuracy creeps in, which might be magnified in certain cases by the use of float rather than decimal in the type definition.
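For reference, the approximation the CDF procedure in Figure 1 appears to implement is the error-function form of the normal CDF combined with a well-known approximation of erf (often attributed to Winitzki); the 1.27324 constant in the code is just 4/π and a ≈ 0.147, with the negative half of the curve handled by symmetry in the CASE statement. Treat this as my reading of the code rather than gospel:

$$F(x) = \tfrac{1}{2}\Bigl[1 + \operatorname{erf}\Bigl(\tfrac{x-\mu}{\sigma\sqrt{2}}\Bigr)\Bigr], \qquad \operatorname{erf}(z) \approx \sqrt{1 - \exp\Bigl(-z^{2}\,\tfrac{4/\pi + a\,z^{2}}{1 + a\,z^{2}}\Bigr)}\ \text{for } z \ge 0,\ a \approx 0.147$$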

Figure 1: DDL for the 𝜒²Test
CREATE TYPE [Calculations].[SimpleFloatValueTableParameter] AS TABLE(
       [RN] [bigint] NULL,
       [Value] float(53) NULL)

 CREATE PROCEDURE [Calculations].[NormalDistributionCDFSupplyTableParameterSP]
@Mean decimal(38,21), @StDev decimal(38,21),  @InputTableParameter AS [Calculations].[SimpleFloatValueTableParameter] READONLY
AS

DECLARE @StDevTimesSquareRootOf2 as decimal(38,21), @One as decimal(38,37) = 1,  @Two as decimal(38,37) = 2,  @EulersConstant decimal(38,37) = 2.7182818284590452353602874713526624977
SELECT @StDevTimesSquareRootOf2 = @StDev * Power(@Two, 0.5)

SELECT ColumnValue, CASE WHEN ColumnValue >= @Mean THEN CDFValue ELSE 1 - CDFValue END AS CDFValue
FROM (SELECT Value AS ColumnValue, 0.5 + (0.5 * Power(@One - Power(@EulersConstant, ((-0.147 * Power(ErrorFunctionInput, 4)) - (1.27324 * Power(ErrorFunctionInput, 2))) / (@One + (0.147 * Power(ErrorFunctionInput, 2)))), 0.5)) AS CDFValue
FROM (SELECT Value, (Value - @Mean) / @StDevTimesSquareRootOf2 AS ErrorFunctionInput
       FROM  @InputTableParameter
       WHERE Value IS NOT NULL) AS T1) AS T2

…………As annoying as it might be to create these extra objects just to run the procedure in Figure 2, it saves us from having to calculate zillions of CDF values on large tables, when we only need the minimum and maximum values for each band. The 𝜒² Test is applied to distributions rather than regression lines as the Hosmer-Lemeshow Test is, but they have at least one thing in common: the division of the dataset into probability bands, which are then graded on how closely the expected values match the actual observations. The criteria for membership in these bands are up to you, but in my implementation, I'm simply using the NTILE windowing function to break up a dataset into subsets of almost equal size, in order of the values of the column being tested. Several sources caution that the type of banding can have a strong effect on the final test statistic. As the National Institute for Standards and Technology's Engineering Statistics Handbook (one of the best online resources for anyone learning statistics) puts it, "This test is sensitive to the choice of bins. There is no optimal choice for the bin width (since the optimal bin width depends on the distribution). Most reasonable choices should produce similar, but not identical, results… The chi-square goodness-of-fit test is applied to binned data (i.e., data put into classes). This is actually not a restriction since for non-binned data you can simply calculate a histogram or frequency table before generating the chi-square test. However, the value of the chi-square test statistic are dependent on how the data is binned."[10]
…………They're not kidding, as I observed first-hand. I set the @NumberOfBands parameter to a default of 10, but you're probably going to want to run several trials and experiment with higher and lower values, especially when it's calculated against large tables, because it can dramatically affect the test statistic. Many sources mention that the count of records in each bucket ought to be more than 5, so you don't want to set the @NumberOfBands so high that the bucket size falls below this threshold. I found it helpful to look at the output of the @FrequencyTable to make sure there weren't too many bands with identical bounds, which will happen if the @NumberOfBands is too high; a quick standalone check for that is sketched below. Use some common sense: if you're operating on nominals that can only be assigned integer values between 0 and 5, then a bin count of 6 might be a wise starting point.
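One way to eyeball a candidate band count before running the full procedure is to band the column directly and look for duplicate bounds. The sketch below uses a hypothetical schema, table and column, so substitute your own names; it is only a convenience query, not part of the test itself.

-- checks whether a proposed band count produces bands with identical bounds;
-- the schema, table and column names here are placeholders
SELECT BandNumber, MIN(Column1) AS BandMin, MAX(Column1) AS BandMax, COUNT(*) AS BandSize
FROM (SELECT Column1, NTILE(10) OVER (ORDER BY Column1) AS BandNumber
      FROM Physics.HiggsBosonTable
      WHERE Column1 IS NOT NULL) AS T1
GROUP BY BandNumber
ORDER BY BandNumber;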

An Explanation of the Sample Code

                Most of the rest of the code is self-explanatory, to those who have slogged their way through one of my procedures before. As usual, the first four parameters allow you to run it against any numerical column in any database you have adequate access to, while the first couple of lines in the body help implement this. The rest is all dynamic SQL, beginning with the usual declaration sections and assignments of the aggregate values we’ll need for other calculations. After that I declare a couple of table variables and a table parameter to hold the final results as well as some intermediate steps. Most of the work occurs in the first INSERT, which divides the dataset into bands; a few statements later, the distinct minimum and maximum values that are inserted in this step are fed to the CDF procedure to derive probability values. Note that it can be drastically simplified if the flexible variable ranges that the @DecimalPrecision parameter implements are not needed; in that case, simply return the results from @SchemaAndTableName into a table variable and perform all the math on it outside the dynamic SQL block.
…………If you receive NULL values for your CDFs in the final results, it’s a clue that you probably need to try a @DecimalPrecision parameter (which I normally provide to help end users avoid arithmetic overflows) with a smaller scale; it signifies that the procedure can’t match values in the joins properly due to rounding somewhere. For a distribution other than the normal, simply plug in a different CDF and adjust the degrees of freedom to account for additional parameters, such as the shape parameter used in the Weibull. There might be a more efficient way to do the updates to the @FrequencyTable that follow, but the costs of these statements compared to the rest of the batch are inconsequential, plus the procedure is easier to follow this way. The two cumulative frequency counts are provided just as a convenience and can be safely eliminated if you don’t need them. After that, I return the full @FrequencyTable to the user (since the costs of calculating it have already been incurred) and compute the final test statistic in a single line in the last SELECT.
…………As mentioned in previous articles, many of these older tests were not designed for datasets of the size found in modern relational databases and data warehouses, so there are no checks built in to keep the final test statistic from being grossly inflated by the accumulation of millions of values. For that reason, I'm using a variant known as "Reduced 𝜒²" that simply divides by the band count to scale the results back down to a user-friendly, easily readable stat. Note that in previous articles, I misidentified Euler's Number in my variable names as Euler's Constant, for obvious reasons. Adding to the confusion is the fact that the former is sometimes also known as Napier's Constant or the Exponential Constant, while the latter is also referred to as the Euler-Mascheroni Constant, which I originally thought to be distinct from Euler's Constant. I used the correct constant and high-precision value for it, but applied the wrong name in my variable declarations.

Figure 2: T-SQL for the 𝜒² Goodness-of-Fit Test
CREATE PROCEDURE [Calculations].[GoodnessOfFitChiSquaredTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128),@ColumnName AS nvarchar(128), @DecimalPrecision AS nvarchar(50), @NumberOfBands as bigint = 10
AS

DECLARE @SchemaAndTableName nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName
SET @SQLString = 'DECLARE @Mean decimal(' + @DecimalPrecision + '), @StDev decimal(' + @DecimalPrecision + '), @Count decimal(' + @DecimalPrecision + '),
@EulersNumber decimal(38,37) = 2.7182818284590452353602874713526624977

SELECT @Count=Count(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))), @Mean = Avg(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))), @StDev = StDev(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + ')))
FROM ' + @SchemaAndTableName + ' WHERE ' + @ColumnName + ' IS NOT NULL

DECLARE @CDFTable table
(ID bigint IDENTITY (1,1),
Value decimal(' + @DecimalPrecision + '),
CDFValue decimal(' + @DecimalPrecision + '))

DECLARE @FrequencyTable table
(ID bigint,
MinValue decimal(' + @DecimalPrecision + '),
MaxValue decimal(' + @DecimalPrecision + '),
LowerCDFValue decimal(' + @DecimalPrecision + '),
UpperCDFValue decimal(' + @DecimalPrecision + '),
ActualFrequencyCount bigint,
ExpectedFrequencyCount decimal(' + @DecimalPrecision + '),
CumulativeActualFrequencyCount decimal(' + @DecimalPrecision + '),
CumulativeExpectedFrequencyCount decimal(' + @DecimalPrecision + ')
)

INSERT INTO @FrequencyTable
(ID, MinValue, MaxValue, ActualFrequencyCount)

SELECT DISTINCT BandNumber, Min(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + '))) OVER (PARTITION BY BandNumber ORDER BY ' + @ColumnName + ') AS BandMin,
                           Max(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + '))) OVER (PARTITION BY BandNumber ORDER BY ' + @ColumnName + ' DESC) AS BandMax, -- note the DESC to go in the opposite order
                            Count(*) OVER (PARTITION BY BandNumber) AS BandFrequencyCount
FROM (SELECT ' + @ColumnName + ', NTile(' + CAST(@NumberOfBands as nvarchar(128)) + ') OVER (ORDER BY ' + @ColumnName + ') AS BandNumber
       FROM ' + @SchemaAndTableName + '
       WHERE ' + @ColumnName + ' IS NOT NULL) AS T1

DECLARE @InputTableParameter AS Calculations.SimpleFloatValueTableParameter

INSERT INTO @InputTableParameter
(Value)
SELECT DISTINCT Value FROM (SELECT MinValue AS Value
       FROM @FrequencyTable
       UNION
       SELECT MaxValue AS Value
       FROM @FrequencyTable)
AS T1

INSERT INTO @CDFTable
(Value, CDFValue)
EXEC Calculations.NormalDistributionCDFSupplyTableParameterSP @Mean, @StDev, @InputTableParameter

UPDATE T1
SET LowerCDFValue = T2.CDFValue
FROM @FrequencyTable AS T1
       INNER JOIN @CDFTable AS T2
       ON T1.MinValue = T2.Value

UPDATE T1
SET UpperCDFValue = T2.CDFValue
FROM @FrequencyTable AS T1
       INNER JOIN @CDFTable AS T2
       ON T1.MaxValue = T2.Value

UPDATE @FrequencyTable
SET ExpectedFrequencyCount = (UpperCDFValue - LowerCDFValue) * @Count

-- the Cumulatives are just for convenience and can be safely eliminated from the table if you do not need them
UPDATE T1
SET T1.CumulativeActualFrequencyCount = T2.CumulativeActualFrequencyCount,
T1.CumulativeExpectedFrequencyCount = T2.CumulativeExpectedFrequencyCount
FROM @FrequencyTable AS T1
INNER JOIN (SELECT ID, Sum(ActualFrequencyCount) OVER (ORDER BY ID) AS CumulativeActualFrequencyCount, Sum(ExpectedFrequencyCount)
OVER (ORDER BY ID) AS CumulativeExpectedFrequencyCount
FROM @FrequencyTable) AS T2
ON T1.ID = T2.ID

-- return all of the results
SELECT ID, MinValue, MaxValue, LowerCDFValue, UpperCDFValue, ActualFrequencyCount, ExpectedFrequencyCount, CumulativeActualFrequencyCount, CumulativeExpectedFrequencyCount
FROM @FrequencyTable
ORDER BY ID

-- this is an alternate version of the test called "reduced chi-squared," in which the sum is scaled back down by dividing it by the band count
SELECT Sum(ExpectedFrequencyCountSum) / Count(*) AS ChiSquaredTestReduced,
@Count AS FullPopulationCount, @Mean AS Mean, @StDev AS StDev
FROM (SELECT CASE WHEN ExpectedFrequencyCount = 0 THEN 0 ELSE Power(ActualFrequencyCount - ExpectedFrequencyCount, 2) / ExpectedFrequencyCount END AS ExpectedFrequencyCountSum
FROM @FrequencyTable) AS T1'

-- SELECT @SQLString -- uncomment this to debug dynamic SQL errors
EXEC (@SQLString)

…………As has become standard fare over the past two tutorial series, I first tested the results against a tiny 9-kilobyte table of data on the Duchennes form of muscular dystrophy from the Vanderbilt University’s Department of Biostatistics. Then I stress-tested it against the 11 million rows in the Higgs Boson Dataset I downloaded from the University of California at Irvine’s Machine Learning Repository and converted into a nearly 6-gigabyte SQL Server table. The query in Figure 3 on the Hemopexin protein produced the first resultset below it, while the much longer resultset was the product of a similar query on the first float column in the HiggsBosonTable. An unreasonable selection of bands can also apparently affect performance; on my first trial on the HiggsBosonTable, I forgot to set the number well above 7, which may be why it took 7:26. Subsequent trials with values around 100 took between 5:46 and 5:52; the results depicted here are only for the first 22 out of 110 bands.
…………I'm not surprised that the final test statistic has six digits to the left of the decimal point, given that I know from previous outlier detection and goodness-of-fit tests that Column 1 is highly abnormal. Column 2 follows an obvious bell curve when displayed in a histogram, so it is likewise not surprising that its 𝜒² Test result was only 1,326, or less than a hundredth of Column 1's. I have the feeling that the sheer size of the dataset can distort the final test statistic, thereby making it difficult to compare results across datasets, but probably not as severely as in other measures, particularly the Jarque-Bera and K² Tests. The query on the second float column likewise took 5:45 on my beat-up development machine – which more closely resembles the Bluesmobile than a real server, so your mileage will probably be a lot better. It's not as quick as the procedure I wrote in Goodness-of-Fit Testing with SQL Server Part 4.1: R2, RMSE and Regression-Related Routines, but certainly faster than many others I've done in past articles.

Figure 3: Sample Results from the Duchennes and Higgs Boson Datasets
DECLARE @return_value int
EXEC   @return_value = [Calculations].[GoodnessOfFitChiSquaredTestSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'DuchennesTable',
              @ColumnName = N'Hemopexin',
              @DecimalPrecision = N'38,17',
              @NumberOfBands = 7

[Images: Chi-Squared Test results for the Duchennes and Higgs Boson queries]

…………The full execution plan is too large to depict here, but suffice it to say that it consists of 11 separate queries – with one of them, the insert into the @FrequencyTable, accounting for 99 percent of the computational cost of the entire batch. I'm not sure at this point how to go about optimizing that particular query, given that it starts with an Index Seek, which is normally what we want to see; there are also a couple of Nested Loops operators and a Hash Match within that query, but together they only account for about 12 percent of its internal costs. Almost all of the performance hit comes from two Sort operators, which a better-trained T-SQL aficionado might be able to dispose of with a few optimizations.
…………Efficiency is something I'll sorely need for next week's article, in which I tackle the Shapiro-Wilk Test. Many sources I've stumbled upon while researching this series indicate that it has better statistical power than most of the competing goodness-of-fit tests, but it has many limitations which severely crimp its usability, at least for our purposes. First, it can apparently be calculated on just 50 values, although I've seen figures as high as a couple of hundred. Either way, that's a few hundred million rows short; the sheer sizes of datasets available to DBAs and data miners today are one of their strengths, and we shouldn't have to sacrifice that hard-won advantage by taking Lilliputian slices of them. Worst of all, the calculations are dogged by a form of combinatorial explosion, which can be the kiss of death for Big Analysis. I have learned to fear the dreaded factorial symbol n! and the more insidious menace posed by calculations upon infinitesimal reciprocals, of the kind that afflicted the Hosmer-Lemeshow Test in last week's article. My implementation of the Shapiro-Wilk Test will sink or swim depending on whether or not I can find a reasonable workaround for the covariance matrices, which are calculated based on a cross product of rows. In a table of a million rows, that means 1 trillion calculations just to derive an intermediary statistic. A workaround might be worthwhile, however, given the greater accuracy most sources ascribe to the Shapiro-Wilk Test.

[1] See National Institute for Standards and Technology, 2014, “1.3.5.15 Chi-Square Goodness-of-Fit Test,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm

[2] IBID.

[3] p. 316 D’Agostino, Ralph B.; Belanger, Albert and D’Agostino Jr., Ralph B, 1990, “A Suggestion for Using Powerful and Informative Tests of Normality,” pp. 316–321 in The American Statistician. Vol. 44, No. 4. Available online at http://www.ohio.edu/plantbio/staff/mccarthy/quantmet/D’Agostino.pdf

[4] See the reply by the user named Glen_b in the CrossValidated thread "How to Get the Expected Counts When Computing a Chi-Squared Test?" dated March 14, 2013, which is available at the web address http://stats.stackexchange.com/questions/52209/how-to-get-the-expected-counts-when-computing-a-chi-squared-test

[5] See the StatTrek webpage titled “When to Use the Chi-Square Goodness of Fit Test” at http://stattrek.com/chi-square-test/goodness-of-fit.aspx

[6] I may be a novice, but am apparently not alone in my reluctance to use tests that enforce either-or choices. See the reply by the user named Glen_b in the CrossValidated thread “How to Get the Expected Counts When Computing a Chi-Squared Test?” dated March 14, 2013, which is available at the web address http://stats.stackexchange.com/questions/52209/how-to-get-the-expected-counts-when-computing-a-chi-squared-test as well as the reply by the same user to the thread “What Tests Do I  Use to Confirm That Residuals are Normally Distributed?” posted Sept. 13, 2013 at the CrossValidated forum web address http://stats.stackexchange.com/questions/36212/what-tests-do-i-use-to-confirm-that-residuals-are-normally-distributed/36220#36220 He makes several very good points about goodness-of-fit testing that are worth quoting here. In the first, he says that

“No test will prove your data is normally distributed. In fact I bet that it isn’t. (Why would any distribution be exactly normal? Can you name anything that actually is?)
2) When considering the distributional form, usually, hypothesis tests answer the wrong question
What’s a good reason to use a hypothesis test for checking normality?
I can think of a few cases where it makes some sense to formally test a distribution. One common use is in testing some random number generating algorithm for generating a uniform or a normal.

In the second thread, he similarly points out that:

“1.No test will tell you your residuals are normally distributed. In fact, you can reliably bet that they are not.”
“2.Hypothesis tests are not generally a good idea as checks on your assumptions. The effect of non-normality on your inference is not generally a function of sample size*, but the result of a significance test is. A small deviation from normality will be obvious at a large sample size even though the answer to the question of actual interest (‘to what extent did this impact my inference?’) may be ‘hardly at all’. Correspondingly, a large deviation from normality at a small sample size may not approach significance…”
“…If you must use a test, Shapiro-Wilk is probably as good as anything else. (But it’s answering a question you already know the answer to – and every time you fail to reject, giving an answer you can be sure is wrong.)”

[7] Just a side note on terminology: I see both the tests and the distribution referred to as “Chi-Squared” with a final D as often as I do “Chi-Square” without one, which are sometimes mixed together in the same sources. I’ll stick with a closing D for the sake of consistency, even if it turns out to be semantically incorrect.

[8] For a readable explanation of the independence test, see Hopkins, Will G., 2001, “Contingency Table (Chi-Squared Test),” published at the A New View of Statistics website address http://www.sportsci.org/resource/stats/continge.html

[9] For a quick introduction to this sordid tale, see the Wikipedia page “Karl Pearson” at http://en.wikipedia.org/wiki/Karl_Pearson

[10] See National Institute for Standards and Technology, 2014,  “1.3.5.15 Chi-Square Goodness-of-Fit Test,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm The formula for the goodness-of-fit test is widely available, but I depended mostly on this NIST webpage when writing my code because their equation was more legible.

Goodness-of-Fit Testing with SQL Server Part 4.2: The Hosmer–Lemeshow Test with Logistic Regression

By Steve Bolton

…………The last installment of this amateur series of self-tutorials was the beginning of a short detour into using SQL Server to perform goodness-of-fit testing on regression lines, rather than on probability distributions. These are actually quite simple concepts; any college freshman ought to be able to grasp the idea of a single regression line, since all they do is graph the values of one variable as it changes in tandem with another, whereas distributions merely summarize the distinct counts for each value of a variable, as the famous bell curve does. In the first case we’re dealing with how well a regression line models the relationship between two variables – assuming one exists – and in the second, we’re assessing whether the distribution of a single variable matches one of many distributions that constantly recur in nature, like the Gaussian “normal” distribution, or other common ones like the Poisson, binomial, uniform and exponential distributions.
…………These topics can of course become quite complex rather quickly once we factor in modifications like multivariate cases, but I’ll avoid these topics for the sake of simplicity, plus the fact that there are an endless number of variants of them that could keep us busy to the end of time. I really can’t end this segment though without addressing a particular variant, since logistic regression is a widely used algorithm that addresses a distinctive set of use cases. There is likewise an endless array of alternative takes on the basic logistic regression algorithm, such as adaptations for multinomial cases, but I’ll stick to the old maxim, KISS: Keep It Simple, Stupid. The Hosmer-Lemeshow Test that is commonly used for fitness testing on logistic regression may not be applicable in some of these more advanced cases, but it is indispensable for the bare bones version of the algorithm.

Adapting Regression Lines to Use the Logistic Function

               This regression algorithm is a topic I became somewhat familiar with while writing A Rickety Stairway to SQL Server Data Mining, Algorithm 4: Logistic Regression a while back, in a different tutorial series. Suffice it to say that the algorithm is ideal for situations in which you need to place bounds on the outputs that a regression line can generate; most commonly this is a Boolean yes-no choice that ranges between 0 and 1, but it can be adapted to other scales, such as 0 to 100. A linear regression line would produce nonsensical values in these instances, since the values would be free to go off the charts, but a logistic regression is guaranteed to stay within its assigned bounds. This is accomplished by using the logistic function, which is really not much more difficult to implement than linear regression (with one critical limitation). The formula is widely available on the Internet, so when writing the code for this article I retrieved it from the most convenient source as usual: Wikipedia.[1]
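For the record, the function in question is the standard logistic curve applied to a linear combination of the predictors; in the single-predictor form I'm working with here it reduces, as I understand it, to the expression below, with β0 and β1 standing for the intercept and slope derived from the usual regression calculations:

$$p(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$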
…………In many ways, the logistic function behaves like the cumulative distribution functions (CDFs) I mentioned in Goodness-of-Fit Testing with SQL Server, Part 2: Implementing Probability Plots in Reporting Services, in that the probabilities assigned to the lowest value begin at 0 and accumulate up to 1 by the time we reach the final value in an ordered dataset. It also behaves in many ways like a weighted function, in that the pressure on it to conform to the bounds increases as it nears them; I think of it in terms of the way quarks are inextricably bound together by the strong force within hadrons, which increases as they come closer to breaking free. In between the upper and lower bounds the regression takes on the appearance of an S-curve rather than the lines seen in normal regression.
…………Another easily readable and insightful commentary on logistic regression can be found at University of Toronto Prof. Saed Sayad’s website[2], in which he provides a succinct explanation of the logistic function equation and some alternative measures for the accuracy of the mining models it generates. Three of these are subspecies of the R2 measure we discussed last week in connection with linear regression, which are discussed together under the rubric of Pseudo R2. Fitness testing of this kind is as necessary on regression lines as it is for distribution matching, because they rarely model the relationships between variables perfectly; as statistician George Box (one of the pioneers of Time Series) once put it so colorfully, “All models are wrong, but some are useful.”[3] Sayad also mentions alternative methods like Likelihood Ratio tests and the Wald Test. The measure of fit I’ve seen mentioned most often in connection with logistic regression goes by the memorable moniker of the Hosmer-Lemeshow Test. It apparently has its limitations, as we shall see, but it is not terribly difficult to implement – with some important caveats.

Banding Issues with Coding the Hosmer-Lemeshow Test

…………In fact, much of the dynamic SQL in Figure 1 is almost identical to the code used to calculate regression lines in last week’s article and a tutorial from a different series, Outlier Detection with SQL Server Part 7: Cook’s Distance, which I won’t waste time recapping here.[4] Steps 3 through 5 derive the aggregates, covariance, correlation, slope and intercept in the usual way, while Step 6 just applies the logistic function to the results, in a single, simple line of T-SQL. After this, I simply insert the logistic regression values into a table variable for later retrieval, including returning it to the user towards the end of the procedure; there’s really no reason not to return the correlation, covariance, slope, intercept and standard deviations for both variables as well, given that we’ve already calculated them. The banding in Step 2 and the final SELECT are where the meat and potatoes of the Hosmer-Lemeshow Test can be found. Its strategy is essentially to divide the values into bands, which are often referred to as “deciles of risk” when the test is employed in one of its most common applications, risk analysis.[5] The bands are then compared to the values we’d expect for them based on probabilistic calculations, and the gap between them is quantified and summarized.
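…………For reference, the usual textbook formulation of the statistic quantifies that gap by summing the squared differences between the observed and expected outcomes across the G bands, and is ordinarily compared against a chi-squared distribution with G - 2 degrees of freedom; the procedure below only approximates this, given the caveats that follow:

\hat{C} = \sum_{g=1}^{G} \frac{(O_g - N_g \bar{\pi}_g)^2}{N_g \, \bar{\pi}_g \, (1 - \bar{\pi}_g)}

where O_g is the observed count of events in the gth band, N_g is the number of records in that band and \bar{\pi}_g is its average predicted probability.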
…………It is now time for my usual disclaimer: I am writing this series in order to familiarize myself with these data mining tools, not because I have expertise in using them, so it would be wise to check my code thoroughly before putting it to use (or even, God forbid, into a production environment). Normally I check my code against the examples provided in various sources, especially the original academic papers whenever possible. In this case, however, I couldn’t find any at a juncture where they would have come in handy, given that I am not quite certain that I am splitting the data into bands on the right axis. I am still learning how to decipher the equations that underpin algorithms of this kind and some of them differ significantly in notation and nomenclature, so it may be that I ought to be counting the observed and expected values differently, which affects how they are split; from the wording in Applied Logistic Regression, a book the test’s inventors David W. Hosmer Jr. and Stanley Lemeshow wrote, it seems that the counts between the bands are supposed to vary much more significantly[6] than they do in my version, which could be a side effect of incorrect banding. There are apparently many different ways of doing banding[7], however, including my method below, in which the @BandCount parameter is plugged into the NTILE windowing function in Step 2.
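…………Stripped down to its essentials, the banding step looks something like the sketch below, which borrows the Higgs Boson practice table from the sample run in Figure 2; treat it as an illustration rather than a finished routine:

-- a bare-bones sketch of the banding step, using the practice table from the sample run in Figure 2
DECLARE @BandCount bigint = 10

SELECT Column1, Column2,
       NTILE(@BandCount) OVER (ORDER BY Column1) AS DecileNumber
FROM Physics.HiggsBosonTable
-- each band can then be collapsed to its averages with AVG(...) OVER (PARTITION BY DecileNumber),
-- as in Step 2 of the full procedure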
…………I’ve seen two general rules of thumb mentioned in the literature for setting the @BandCount to an optimal level: using groups of fewer than five members leads to incorrect results, while using fewer than six groups “almost always” leads to passing the fitness test.[8] Averages for both the X and Y axes are calculated for each band, then the regression line is derived through the usual methods in Steps 3 through 5, with one crucial difference: some of the aggregates have to be derived from the bands rather than the original table, otherwise we’d end up with an apples vs. oranges comparison. This is one of several points where the procedure can go wrong, since the banding obscures the detail of the original table and can lead to a substantially different regression line. Keep in mind though that the literature mentions several alternative methods of banding, so there may be better ways of accomplishing this.

Figure 1: T-SQL Code for the Hosmer–Lemeshow Test Procedure
CREATE PROCEDURE [Calculations].[GoodnessOfFitLogisticRegressionHosmerLemsehowTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128),@ColumnName1 AS nvarchar(128),@ColumnName2 AS nvarchar(128), @BandCount As bigint
AS

DECLARE @SchemaAndTableName nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName
SET @SQLString = 'DECLARE @MeanX float,@MeanY float, @StDevX float, @StDevY float,
@Count float, @Correlation float, @Covariance float, @Slope float, @Intercept float,
@ValueRange float, @HosmerLemeshowTest float

-- STEP #1 -- GET THE RANGE OF VALUES FOR THE Y COLUMN
SELECT @ValueRange = CASE WHEN RecordCount % 2 = 0 THEN ValueRange + 1 ELSE ValueRange END
FROM (SELECT Max(' + @ColumnName2 + ') - Min(' + @ColumnName2 + ') AS ValueRange, Count(*) AS RecordCount
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName2 + ' IS NOT NULL) AS T1

-- STEP #2 -- CREATE THE BANDS
DECLARE @LogisticRegressionTable table
(ID bigint, -- ID is the decile identifier
CountForGroup bigint,
AverageX float,
AverageY float,
RescaledY float,
LogisticResult float
)

INSERT INTO @LogisticRegressionTable
(ID, CountForGroup, AverageX, AverageY)

SELECT DISTINCT DecileNumber, COUNT(*) OVER (PARTITION BY DecileNumber) AS CountForGroup, Avg(CAST(' + @ColumnName1 + ' AS float)) OVER
(PARTITION BY DecileNumber ORDER BY DecileNumber) AS AverageX, Avg(CAST('
+ @ColumnName2 + ' AS float)) OVER (PARTITION BY DecileNumber
ORDER BY DecileNumber) AS AverageY
FROM (SELECT ' + @ColumnName1 + ', ' + @ColumnName2 + ', NTILE(' + CAST(@BandCount AS nvarchar(128)) + ') OVER (ORDER BY ' + @ColumnName1 + ') AS DecileNumber
FROM ' + @SchemaAndTableName + ') AS T1

UPDATE @LogisticRegressionTable -- this could be done in one step, but I am leaving it this way for legibility purposes
SET RescaledY = AverageY / @ValueRange

-- STEP #3 -- RETRIEVE THE GLOBAL AGGREGATES NEEDED FOR OTHER CALCULATIONS
-- note that we cannot operate on the original table here, otherwise the stats would be different from those of the bands
SELECT @MeanX = Avg(CAST(X AS float)), @MeanY = Avg(CAST(Y AS float)), @StDevX = StDev(CAST(X AS float)), @StDevY = StDev(CAST(Y AS float))
FROM (SELECT ' + @ColumnName1 + ' AS X, ' + @ColumnName2 + ' AS Y
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName1 + ' IS NOT NULL AND ' + @ColumnName2 + ' IS NOT NULL) AS T1

-- STEP #4 -- CALCULATE THE CORRELATION (BY FIRST GETTING THE COVARIANCE)
SELECT @Covariance = SUM((AverageX - @MeanX) * (AverageY - @MeanY)) / (' + CAST(@BandCount AS nvarchar(128)) + ' - 1)
FROM @LogisticRegressionTable

SELECT @Correlation = @Covariance / (@StDevX * @StDevY)

-- STEP #5 -- CALCULATE THE SLOPE AND INTERCEPT
SELECT @Slope = @Correlation * (@StDevY / @StDevX)
SELECT @Intercept = @MeanY - (@Slope * @MeanX)

-- STEP #6 -- CALCULATE THE LOGISTIC FUNCTION
UPDATE T1
SET LogisticResult = 1 / (1 + EXP(-1 * (@Intercept + (@Slope * AverageX))))
FROM @LogisticRegressionTable AS T1

-- RETURN THE RESULTS
SELECT @HosmerLemeshowTest = SUM(Power((RescaledY - LogisticResult), 2) / (CountForGroup * LogisticResult * (1 - LogisticResult)))
FROM @LogisticRegressionTable AS T1

SELECT * FROM @LogisticRegressionTable
SELECT @HosmerLemeshowTest AS HosmerLemeshowTest, @Covariance AS Covariance, @Correlation AS Correlation, @MeanX As MeanX, @MeanY As MeanY, @StDevX as StDevX, @StDevY AS StDevY, @Intercept AS Intercept, @Slope AS Slope'

--SELECT @SQLString -- uncomment this to debug dynamic SQL errors
EXEC (@SQLString)

…………Another potential problem arises when we derive the expected values of column Y for each band, using the logistic function in Step 6.[9] This is one of those maddening scaling issues that continually seem to arise whenever these older statistical tests are applied to Big Data-sized recordsets. This very simple and well-known function is implemented in the formula 1 / (1 + EXP(-1 * (@Intercept + (@Slope * AverageX)))), but when the result of the exponent operation is an infinitesimally small value, it gets truncated when adding the 1. This often occurs when the values for the @Intercept are high, particularly above 100. When 1 is divided by the resulting 1, the logistic function result is exactly 1, so the (1 - LogisticResult) term in the denominator of the test statistic becomes zero and the last SELECT assignment fails with division by zero errors. There might be a way for a more mathematically adept programmer to rescale the @Intercept, @Slope and other variables so that this truncation doesn’t occur, but I’m not going to attempt to implement a workaround unless I’m sure it won’t lead to incorrect results in some unforeseen way.
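…………The truncation is easy to reproduce in isolation, since a float only carries 15 or so significant digits; when the exponent is strongly negative, EXP returns a value far too small to survive the addition of 1:

-- EXP(-101) is roughly 1.4E-44, which vanishes entirely when 1 is added to it in float arithmetic
SELECT 1 / (1 + EXP(-101.0)) AS LogisticResult        -- returns exactly 1
SELECT 1 - (1 / (1 + EXP(-101.0))) AS Complement      -- returns 0, the source of the divide-by-zero errors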
…………Yet another issue is that my implementation allows the second column to be Continuous rather than the usual Boolean either-or choice seen in simple logistic regression. That requires rescaling to the range of permissible values, but the way I’ve implemented it through the RescaledY table variable column and @ValueRange variable may be incorrect. The SELECT that assigns the value to @HosmerLemeshowTest can also probably be done a little more concisely in fewer steps, but I want to highlight the internal logic so that it is easier to follow and debug. The rest of the code follows much the same format as usual, in which null handling, SQL injection protection, bracket handling and validation code are all omitted for the sake of legibility and simplicity. Most of the parameters are designed to allow the user to perform the regression on any two columns in the same table or view, in any database they have sufficient access to. The next-to-last line allows programmers to debug the dynamic SQL, which will probably be necessary before putting this procedure to professional use. In the last two statements I return all of the bands in the regression table plus the regression stats, since the costs of calculating them have already been incurred. It would be child’s play for us to also calculate the Mean Squared Error from these figures with virtually no computational cost, but I’m not yet sure if it enjoys the same validity and significance with logistic regression as it does with linear.

Figure 2: Sample Results from Duchennes and Higgs Boson Datasets
EXEC   Calculations.GoodnessOfFitLogisticRegressionHosmerLemsehowTestSP
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Physics',
              @TableName = N'HiggsBosonTable',
              @ColumnName1 = N'Column1',
              @ColumnName2 = N'Column2',
              @BandCount = 12

[Image: HosmerLemeshowResults]

…………I’ve tested most of the procedures for the last two tutorial series against a 9-kilobyte dataset on the Duchennes form of muscular dystrophy, which is made publicly available by Vanderbilt University’s Department of Biostatistics. For this week’s article, however, the intercepts were too high for most combinations of comparisons between the PyruvateKinase, CreatineKinase, LactateDehydrogenase and Hemopexin columns, resulting in the aforementioned divide-by-zero errors. For the few combinations that worked, the test statistic was ridiculously inflated; for other databases I’m familiar with, it returned results in the single and double digits (which is apparently permissible, since I’ve seen professionals post Hosmer-Lemeshow results online that fall in that range), but for whatever reason this was not the case with the Duchennes dataset.
…………That is why I derived the sample results in Figure 2 from the first two float columns of the Higgs Boson Dataset I downloaded from the University of California at Irvine’s Machine Learning Repository, which I normally use for stress-testing because its 11 million rows occupy nearly 6 gigabytes in the SQL Server table I converted it into. Given that the first column is obviously abnormal and the second clearly follows a bell curve, I expected the results to indicate a serious lack of fit, but the test statistic was only a minuscule 1.30909988070185E-05. In fact, the values seem to shrink in tandem with the record count, which makes me wonder if another, more familiar scaling issue is operative here. As we’ve seen throughout the last two tutorial series, many common statistical measures were not designed with today’s Big Data table sizes in mind and thus end up distorted when we try to cram too much data into them. Given that there are so many other issues with my implementation, it is hard to tell if that is an additional problem or some inaccuracy in my code. Substituting the AverageY value for the RescaledY I used in the test statistic only seems to introduce new problems, without solving this one.

The Case Against Hosmer-Lemeshow

…………Regardless of the quality of my implementation, the usefulness of the Hosmer-Lemeshow Test is apparently still questionable, given that even professionals publish articles with inspiring titles like “Why I Don’t Trust the Hosmer-Lemeshow Test for Logistic Regression.”[10] University of Pennsylvania Sociology Prof. Paul Allison lists several drawbacks to the test even when it is correctly implemented, including the most common one mentioned on the Internet, the weaknesses inherent in dividing the dataset into groups. This can even lead to significantly different results when different stats software is run against the same data.[11] Hosmer and Lemeshow themselves point out that the choice of boundaries (“cut points”) for the groups can lead to significantly different results.[12] Furthermore, as Frank Harrell puts it in a thread in the CrossValidated forum, “The Hosmer-Lemeshow test is to some extent obsolete because it requires arbitrary binning of predicted probabilities and does not possess excellent power to detect lack of calibration. It also does not fully penalize for extreme overfitting of the model. …More importantly, this kind of assessment just addresses overall model calibration (agreement between predicted and observed) and does not address lack of fit such as improperly transforming a predictor.”[13] He recommends alternatives like a “generalized R2,” while Allison gets into “daunting” alternatives like “standardized Pearson, unweighted sum of squared residuals, Stukel’s test, and the information matrix test.”[14]
…………Nevertheless, despite these well-known shortcomings, the Hosmer-Lemeshow Test remains perhaps the best-known goodness-of-fit measure for logistic regression. Fitness tests seem to have well-defined use cases in comparison to the outlier detection methods we covered in my last tutorial series, with the Hosmer-Lemeshow among those that occupy a very distinct niche. The other methods mentioned by these authors seem to be more advanced and less well-known, so I thought it still worthwhile to post the code, even if there are certain to be problems with it. On the other hand, it is not worth it at this point to optimize the procedure much until the accuracy issues can either be fixed or debunked. It performed well in comparison to other procedures in this series anyway, with a time of 3:08 for the trial run in Figure 2. Just one of the eight queries accounted for 89 percent of the cost of the whole execution plan, and that one contained two expensive Nested Loops operators, which might mean there’s room for further optimization if and when the accuracy can be verified.
…………Given the number of issues with my code as well as the inherent issues with the test itself, it might be fitting to write a rebuttal to my mistutorial with a title along the lines of Allison’s article, such as “Why I Don’t Trust Steve Bolton’s Version of the Hosmer-Lemeshow Test for Logistic Regression.” It may still be useful in the absence of any other measures, but I’d assign it a lot less trust than some of the other code I’ve posted in the last two series. On the other hand, this also gives me an opportunity to jump into my usual spiel about my own lack of trust in hypothesis testing methods. I have a lack of confidence in confidence intervals and the like, at least as far as our use cases go, for multiple reasons. First and foremost, plugging test statistics into distributions just to derive a simple Boolean pass/fail measure sharply reduces the information content. Another critical problem is that most of the lookup tables for the tests and distributions they’re plugged into stop at just a few hundred values and are often full of gaps; furthermore, calculating the missing values yourself for millions of degrees of freedom can be prohibitively expensive. Once again, these measures were designed long before Big Data became a buzzword, in an era when most statistical tests were done against a few dozen or a few hundred records at best. For that reason I have often omitted the hypothesis testing stage that accompanies many of the goodness-of-fit measures in this series, including the Hosmer-Lemeshow Test, which is normally plugged into a Chi-Squared distribution.[15] On the other hand, we can make use of the separate Chi-Squared goodness-of-fit measure, which, as we shall see next week, is a versatile metric that can be adapted to assess the fit of a wide variety of probability distributions – with a much higher degree of confidence than we can assign to the Hosmer-Lemeshow Test results on logistic regression.

 

[1] See the Wikipedia article “Logistic Regression” at http://en.wikipedia.org/wiki/Logistic_regression

[2] See Sayad, Saed, 2014, “Logistic Regression,” published at the SaedSayad.com web address http://www.saedsayad.com/logistic_regression.htm

[3] See the undated publication “Goodness of Fit in Linear Regression” retrieved from Lawrence Joseph’s course notes on Bayesian Statistics on Oct. 30, 2014, which are published at the website of the McGill University Faculty of Medicine. Available at the web address http://www.medicine.mcgill.ca/epidemiology/joseph/courses/EPIB-621/fit.pdf. No author is listed but I presume that Prof. Joseph wrote it.

[4] I derived some of the code for regular linear regression routine long ago from the Dummies.com webpage “How to Calculate a Regression Line” at  http://www.dummies.com/how-to/content/how-to-calculate-a-regression-line.html

[5] See the Wikipedia article “Hosmer-Lemeshow Test” at the web address http://en.wikipedia.org/wiki/Hosmer%E2%80%93Lemeshow_test

[6] p. 160, Hosmer Jr., David W.; Lemeshow, Stanley and Sturdivant, Rodney X., 2013, Applied Logistic Regression. John Wiley & Sons: Hoboken, New Jersey.

[7] IBID., pp. 160-163.

[8] IBID., p. 161 for the second comment.

[9] I was initially confused about the assignment of the expected values (as well as the use of mean scores), but they are definitely derived from the logistic function, according to p. 2 of the undated manuscript, “Logistic Regression,” published at the Portland State University web address http://www.upa.pdx.edu/IOA/newsom/da2/ho_logistic.pdf . It is part of the instructional materials for one of Prof. Jason Newsom’s classes so I assume he wrote it, but cannot be sure.

[10] Allison, Paul, 2013, “Why I Don’t Trust the Hosmer-Lemeshow Test for Logistic Regression,” published March 5, 2013 at the Statistical Horizons web address http://www.statisticalhorizons.com/hosmer-lemeshow

[11] Allison, Paul, 2014, untitled article published in March, 2014 at the Statistical Horizons web address http://www.statisticalhorizons.com/2014/04

[12] pp. 965-966, 968, Hosmer, D.W.; T. Hosmer; Le Cessie, S. and Lemeshow, S., 1997, “A Comparison of Goodness-of-Fit Tests for the Logistic Regression Model,” pp. 965-980 in Statistics in Medicine. Vol. 16.

[13] Harrell, Frank, 2011, “Hosmer-Lemeshow vs AIC for Logistic Regression,” published Nov. 22, 2011 at the CrossValidated web address http://stats.stackexchange.com/questions/18750/hosmer-lemeshow-vs-aic-for-logistic-regression

[14] See Allison, 2014.

[15] For more on the usual implementation involving the Chi-Squared distribution, see p. 977. Hosmer et al., 1997 and p. 158, Hosmer et al. 2013.

Goodness-of-Fit Testing with SQL Server Part 4.1: R2, RMSE and Regression-Related Routines

By Steve Bolton

…………Throughout most of this series of amateur self-tutorials, the main topic has been and will continue to be using SQL Server to perform goodness-of-fit testing on probability distributions. Don’t let the long syllables (or the alliteration practice in the title) fool you, because the underlying concept really isn’t all that hard; all these statistical tests tell us is whether the distinct counts of our data points approximate shapes like the famous bell curve, i.e. the Gaussian or “normal” distribution. While researching the topic, I found out that the term “goodness-of-fit” is also used to describe how much confidence we can assign to a particular regression line. Recall that in regression, we’re trying to learn something about the relationships between two or more variables, whereas in the case of probability distributions, we’re normally talking about univariate cases, so we’re really trying to learn something about the internal structure of a single variable (or in our case, a database column). Once again, don’t be intimidated by the big words though, because regression is really a very simple idea that every college freshman has been exposed to at some point.
…………As I explain in more detail in a post from an earlier mistutorial series, A Rickety Stairway to SQL Server Data Mining, Algorithm 2: Linear Regression, regression in its simplest form is just the graph of a line that depicts how much one variable increases or decreases as another changes in value. There are certainly complex variants of regression that could blow up someone’s brain like that poor guy in the horror film Scanners, but the fundamentals are not that taxing on the mind. Thankfully, coding simple regression lines in T-SQL isn’t that difficult either. There are some moderate performance costs, as can be expected whenever we have to traverse a whole dataset, but the actual calculations aren’t terribly difficult to follow or debug (presuming, that is, that you understand set-based languages like SQL). That is especially true for the metrics calculated upon those regression lines, which tell us how well our data mining model might approximate the true relationships between the variables.
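…………For readers who want to see the bare notation behind that simple idea, the regression line and its slope and intercept reduce to the expressions below, where r is the correlation, s_x and s_y are the standard deviations and the bars denote means; this is all the stored procedure in Figure 1 really computes before the fitness measures are layered on top:

\hat{y} = b_0 + b_1 x, \qquad b_1 = r \frac{s_y}{s_x}, \qquad b_0 = \bar{y} - b_1 \bar{x}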

The Foibles and Follies of Calculating R2 and RMSE

…………Once we’ve incurred the cost of traversing a dataset, there’s really little incentive not to squeeze all the benefit out of the trip by computing all of the relevant goodness-of-fit regression stats afterwards. For that reason, plus the fact that they’re not terribly challenging to explain, I’ll dispense with them all in a single procedure, beginning with R2. In my earlier article Outlier Detection with SQL Server Part 7: Cook’s Distance we already dealt with the coefficient of determination (also known as R2), which is simply the square of the correlation coefficient. This is a long name for the very simple process of quantifying the relationship between two variables: for each record we compute the difference of the first variable (usually labeled X) from its own mean, do the same for the second (usually labeled Y), then multiply the two differences together. Averaging those products gives us the covariance, which is then transformed into the correlation by dividing it by the product of the two standard deviations. All we need to do is implement the same code from the Cook’s Distance article, beginning with the regression calculations, then add a new step: squaring the result of the correlation. That changes all negative values to positives and thus scales the result for easier interpretation. The higher the R2, the more closely the two variables are related, and the closer to 0, the less linkage there is between their values.
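…………As a minimal illustration of that chain of calculations, the sketch below walks from the aggregates to the covariance, correlation and R2; the table and column names are borrowed from the Duchennes sample run in Figure 2, and it omits the null handling, precision parameters and dynamic SQL of the full procedure:

-- a bare-bones sketch of the covariance-correlation-R2 chain, using the Duchennes practice table from Figure 2
DECLARE @MeanX float, @MeanY float, @StDevX float, @StDevY float, @Count float, @Covariance float, @Correlation float

SELECT @Count = Count(*), @MeanX = Avg(CAST(Hemopexin AS float)), @MeanY = Avg(CAST(LactateDehydrogenase AS float)),
@StDevX = StDev(CAST(Hemopexin AS float)), @StDevY = StDev(CAST(LactateDehydrogenase AS float))
FROM Health.DuchennesTable
WHERE Hemopexin IS NOT NULL AND LactateDehydrogenase IS NOT NULL

SELECT @Covariance = SUM((CAST(Hemopexin AS float) - @MeanX) * (CAST(LactateDehydrogenase AS float) - @MeanY)) / (@Count - 1)
FROM Health.DuchennesTable
WHERE Hemopexin IS NOT NULL AND LactateDehydrogenase IS NOT NULL

SELECT @Correlation = @Covariance / (@StDevX * @StDevY)
SELECT Power(@Correlation, 2) AS R2 -- squaring flips any negative correlation to a positive value on the familiar scale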
…………One of the few pitfalls to watch out for is that the values normally stay below 1 but can exceed it in some circumstances. End users don’t need to know all of the implementation details and intermediate steps I just mentioned, but they must be able to read the result, which is highly intuitive and can be easily depicted in a visualization like a Reporting Services gauge; they don’t need to be burdened with the boring, technical internals of the computations any more than a commuter needs to give a dissertation on automotive engineering, but they should be able to interpret the result easily, and R2 is as mercifully simple as a gas gauge. The same is true of stats like covariance and correlation that it is built on, which cost us nothing to return to the users within the same queries.
…………Mean Squared Error (MSE) is a little more difficult to calculate, but not much harder to interpret, since all end users need to know is that zero represents “perfect accuracy”[i] and that values further from it signal a poorer fit; the only catch might be that the goodness-of-fit moves in the opposite direction from R2, which might cause confusion unless a tooltip or some other handy reminder is given to end users. Root Mean Square Error (RMSE, a.k.a. Root-Mean-Square Deviation) is derived from it by taking the square root, which brings the metric back into the same units as the original data. Keep in mind that SQL Server can easily calculate standard deviation through the T-SQL StDev function, which gives us a measure of how dispersed the values in a dataset are; practically all of the procedures I’ve posted in the last two tutorial series have made use of it. What RMSE does is take standard deviation to the next level, by measuring the dispersion between multiple variables instead of just one. I really can’t explain it any better than Will G. Hopkins does at his website A New View of Statistics, which I highly recommend to novices in the field of statistics like myself:

                “The RMSE is a kind of generalized standard deviation. It pops up whenever you look for differences between subgroups or for other effects or relationships between variables. It’s the spread left over when you have accounted for any such relationships in your data, or (same thing) when you have fitted a statistical model to the data. Hence its other name, residual variation. I’ll say more about residuals for models, about fitting models in general, and about fitting them to data like these much later.”
…………“Here’s an example. Suppose you have heights for a group of females and males. If you analyze the data without regard to the sex of the subjects, the measure of spread you get will be the total variation. But stats programs can take into account the sex of each subject, work out the means for the boys and the girls, then derive a single SD that will do for the boys and the girls. That single SD is the RMSE. Yes, you can also work out the SDs for the boys and girls separately, but you may need a single one to calculate effect sizes. You can’t simply average the SDs.”[ii]
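…………Boiled down to notation, the two error measures amount to nothing more than the average of the squared residuals and its square root, which is worth keeping in mind when reading Step 4 of the procedure below:

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}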

…………RMSE and R2 can be used for goodness-of-fit because they are intimately related to the differences between the actual and predicted values for a regression line; they essentially quantify how much of the standard deviation or variance can be ascribed to these errors, i.e. residuals.[iii] There are many complex variants of these stats, just as there are for regression models as a whole; for example, Wikipedia provides several alternate formulas for RMSE, including some for biased estimators, which is a topic we needn’t worry as much about given the whopping sizes of the datasets the SQL Server community works with.[iv] We have unique cases in which the standard methods of hypothesis testing are less applicable, which is why I’ve generally shied away from applying confidence intervals, significance levels and the like to the stats covered in my last two tutorial series. Such tests sharply reduce the information provided by our hard-won calculations, from float or decimal data types down to simple Boolean, yes-or-no answers that a particular value is an outlier, or that subsets of values do not fit a particular distribution; retaining that information allows us to gauge how much a value qualifies as an outlier, how closely a set of them follows a distribution, or how well a set of columns follows a regression line.
…………For that reason, I won’t get into a discussion of the F-Tests often performed on our last regression measure, Lack-of-Fit Sum-of-Squares, particularly in connection with Analysis of Variance (ANOVA). The core concepts with this measure are only slightly more advanced than with RMSE and R2. Once again, we’re essentially slicing up the residuals of the regression line in a way that separates the portion that can be ascribed to the inaccuracy of the model, just through alternate means. It is important here to note that with all three measures, the terms “error” and “residual” are often used interchangeably, although there is a strictly definable difference between them: a residual quantifies the difference between actual and predicted values, while errors refer to the gap between actual values and “the (unobservable) true function value.”[v] Despite this subtle yet distinguishable difference, the two terms are often used inappropriately even by experts, to the point that novices like myself can’t always discern which of the two is under discussion. Further partitioning of the residuals and errors occurs in the internal calculations of Lack-of-Fit Sum-of-Squares, but I can’t comment at length on the differences between such constituent components as Residual Sum-of-Squares and Sum-of-Squares for Pure Error, except to recommend the explanation by Mukesh Mahadeo, a frequent contributor on statistical concepts at Yahoo! Answers:

                “For certain designs with replicates at the levels of the predictor variables, the residual sum of squares can be further partitioned into meaningful parts which are relevant for testing hypotheses. Specifically, the residual sums of squares can be partitioned into lack-of-fit and pure-error components. This involves determining the part of the residual sum of squares that can be predicted by including additional terms for the predictor variables in the model (for example, higher-order polynomial or interaction terms), and the part of the residual sum of squares that cannot be predicted by any additional terms (i.e., the sum of squares for pure error). A test of lack-of-fit for the model without the additional terms can then be performed, using the mean square pure error as the error term. This provides a more sensitive test of model fit, because the effects of the additional higher-order terms is removed from the error.”[vi]
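…………In notation, that partition simply splits the residual sum of squares into two pieces, where i indexes the distinct values of the predictor, j the replicated observations at each one, \bar{y}_i the mean response at that value and \hat{y}_i the regression prediction:

SS_{\text{residual}} = \underbrace{\sum_{i} n_i \left(\bar{y}_i - \hat{y}_i\right)^2}_{\text{lack-of-fit}} + \underbrace{\sum_{i}\sum_{j} \left(y_{ij} - \bar{y}_i\right)^2}_{\text{pure error}}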

…………The important thing is that the code for the Lack-of-Fit Sum-of-Squares formulas[vii] gets the job done. Of course it always helps if a data mining programmer can write a dissertation on the logic and math of the equations they’re working with, but ordinarily, that’s best left to mathematicians; their assignment is analogous to that of automotive engineers, while our role is that of a garage mechanic, whose main responsibility is to make sure that the car runs, one way or another. If the owner can drive it away without the engine stalling, then mission accomplished.
…………We only need to add two elements to make the Lack-of-Fit Sum-of-Squares code below useful to end users, one of which is to simply interpret higher numbers as greater lack of fit. The second is to define what that is, since it represents the opposite of goodness-of-fit and therefore can cause the same kind of confusion in direction that’s possible when providing RMSE and R2 side-by-side. The two terms are sometimes used interchangeably, but in a more specific sense they’re actually polar opposites, in which measures that rise as fit improves can be termed goodness-of-fit and those that rise as the fit of a model declines can be termed lack-of-fit. CrossValidated forum contributor Nick Cox provided the best explanation of the difference I’ve seen to date: “Another example comes from linear regression. Here two among several possible figures of merit are the coefficient of determination R2 and the root mean square error (RMSE), or (loosely) the standard deviation of residuals. R2 could be described as a measure of goodness of fit in both weak and strict senses: it measures how good the fit is of the regression predictions to data for the response and the better the fit, the higher it is. The RMSE is a measure of lack of fit in so far as it increases with the badness of fit, but many would be happy with calling it a measure of goodness of fit in the weak or broad sense.”[viii] If end users need a little more detail on what the stat means, that constitutes the most succinct explanation I’ve yet seen. Ordinarily, however, they only need to know that the return values for the @LackOfFitSumOfSquares variable below will rise as the accuracy of their model gets worse and vice-versa.

Figure 1: T-SQL Code for the Regression Goodness-of-Fit Tests
CREATE PROCEDURE [Calculations].[GoodnessOfFitRegressionTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128),@ColumnName1 AS nvarchar(128),@ColumnName2 AS nvarchar(128), @DecimalPrecision AS nvarchar(50)
AS

DECLARE @SchemaAndTableName nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName
SET @SQLString = 'DECLARE @MeanX decimal(' + @DecimalPrecision + '),@MeanY decimal(' + @DecimalPrecision + '), @StDevX decimal(' + @DecimalPrecision + '),
@StDevY decimal(' + @DecimalPrecision + '),
@Count decimal(' + @DecimalPrecision + '), @Correlation decimal(' + @DecimalPrecision + '),
@Covariance decimal(' + @DecimalPrecision + '), @Slope decimal(' + @DecimalPrecision + '), @Intercept decimal(' + @DecimalPrecision + '),
@MeanSquaredError decimal(' + @DecimalPrecision + '), @LackOfFitSumOfSquares decimal(' + @DecimalPrecision + ')

-- STEP #1 - RETRIEVE THE GLOBAL AGGREGATES NEEDED FOR OTHER CALCULATIONS
SELECT @Count = Count(CAST(' + @ColumnName1 + ' AS Decimal(' + @DecimalPrecision + '))), @MeanX = Avg(CAST(' + @ColumnName1 + ' AS Decimal(' + @DecimalPrecision + '))),
@MeanY = Avg(CAST(' + @ColumnName2 + ' AS Decimal(' + @DecimalPrecision + '))),
@StDevX = StDev(CAST(' + @ColumnName1 + ' AS Decimal(' + @DecimalPrecision + '))), @StDevY = StDev(CAST(' + @ColumnName2 + ' AS Decimal(' + @DecimalPrecision + ')))
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName1 + ' IS NOT NULL AND ' + @ColumnName2 + ' IS NOT NULL

-- STEP #2 - CALCULATE THE CORRELATION (BY FIRST GETTING THE COVARIANCE)
SELECT @Covariance = SUM((' + @ColumnName1 + ' - @MeanX) * (' + @ColumnName2 + ' - @MeanY)) / (@Count - 1)
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName1 + ' IS NOT NULL AND ' + @ColumnName2 + ' IS NOT NULL

-- once we have the covariance, it is trivial to calculate the correlation
SELECT @Correlation = @Covariance / (@StDevX * @StDevY)

-- STEP #3 - CALCULATE THE SLOPE AND INTERCEPT
SELECT @Slope = @Correlation * (@StDevY / @StDevX)
SELECT @Intercept = @MeanY - (@Slope * @MeanX)

-- STEP #4 - CALCULATE THE MEAN SQUARED ERROR AND LACK OF FIT SUM OF SQUARES TOGETHER
SELECT @MeanSquaredError = SUM(Power((PredictedValue - ' + @ColumnName2 + '), 2)) * (1 / @Count), @LackOfFitSumOfSquares = SUM(LackofFitInput)
FROM (SELECT ' + @ColumnName1 + ', ' + @ColumnName2 + ', PredictedValue, Count(CAST(' + @ColumnName2 + ' AS Decimal(' + @DecimalPrecision + '))) OVER (PARTITION BY ' +
@ColumnName1 + ' ORDER BY ' + @ColumnName1 + ') *
(Power(Avg(CAST(' + @ColumnName2 + ' AS Decimal(' + @DecimalPrecision + '))) OVER (PARTITION BY ' +
@ColumnName1 + ' ORDER BY ' + @ColumnName1 + ') - PredictedValue, 2)) AS LackofFitInput
       FROM (SELECT ' + @ColumnName1 + ', ' + @ColumnName2 + ', (' + @ColumnName1 + ' * @Slope) + @Intercept AS PredictedValue
                     FROM ' + @SchemaAndTableName + '
                     WHERE ' + @ColumnName1 + ' IS NOT NULL AND ' + @ColumnName2 + ' IS NOT NULL) AS T1) AS T2

SELECT @MeanSquaredError AS MeanSquaredError, Power(@MeanSquaredError, 0.5) AS RMSE, @LackOfFitSumOfSquares AS LackOfFitSumOfSquares,
Power(@Correlation, 2) * 100 AS R2, @Covariance AS Covariance, @Correlation AS Correlation, @Slope AS Slope, @Intercept AS Intercept'

--SELECT @SQLString -- uncomment this to debug dynamic SQL errors
EXEC (@SQLString)

…………Most of the code for this procedure is identical to that of the aforementioned Cook’s Distance procedure, which requires regression, covariance and correlation computations.[ix] I won’t rehash here how to derive the slope, intercept and other such constituent calculations, for the sake of brevity. The really striking thing is how few lines of code it takes to derive all of these incredibly useful stats in one fell swoop, which we can thank the powerful T-SQL windowing functions introduced in SQL Server 2012 for. It is noteworthy though that the outer query in Step 4 is necessary because of T-SQL error 4109, “Windowed functions cannot be used in the context of another windowed function or aggregate,” which prevents us from performing the calculations in one big gulp and plugging them straight into the SUM. Besides a few departures like that, the procedure closely follows the format used in the last two tutorial series, in which I start with a common set of parameters that allow users to perform the test on any table in any database they have sufficient access to. The first two lines of code in the procedure body help make this happen, while the rest is dynamic SQL that begins with declarations of the constants, stats and variables the procedure needs to perform its calculations. As usual, the @DecimalPrecision parameter is provided to help users set their own precision and scale (to avoid errors like arithmetic overflows and still accommodate columns of all sizes), and the SELECT @SQLString near the end can be uncommented for debugging purposes.
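…………The restriction behind that error is easy to trip over; a toy example of the nesting problem and the derived-table workaround, against a hypothetical table T with columns X and Y, would look something like this:

-- nesting the windowed AVG inside the aggregate SUM like this trips the error quoted above
SELECT SUM(AVG(Y) OVER (PARTITION BY X))
FROM T

-- pushing the windowed calculation into a derived table first sidesteps the restriction
SELECT SUM(AvgY)
FROM (SELECT DISTINCT X, AVG(Y) OVER (PARTITION BY X) AS AvgY
FROM T) AS T1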

Figure 2: Sample Results from the Regression Goodness-of-Fit Test on the Duchennes Dataset (click to enlarge)
EXEC   [Calculations].[GoodnessOfFitRegressionTestSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'DuchennesTable',
              @ColumnName1 = N'Hemopexin',
              @ColumnName2 = N'LactateDehydrogenase',
              @DecimalPrecision = N'38,21'

[Image: GoodnessOfFitRegressionResults (1)]

Figure 3: Sample Results from the Regression Goodness-of-Fit Test on the Higgs Boson Dataset (click to enlarge)

EXEC   [Calculations].[GoodnessOfFitRegressionTestSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Physics',
              @TableName = N'HiggsBosonTable',
              @ColumnName1 = N'Column1',
              @ColumnName2 = N'Column2',
              @DecimalPrecision = N'38,29'

[Image: GoodnessOfFitRegressionResults (2)]

…………I’ve made it standard practice in the last two tutorial series to test my procedures first on the 209 rows of a dataset on the Duchennes form of muscular dystrophy provided by Vanderbilt University’s Department of Biostatistics, then on the first couple of float columns in the 11-million-row Higgs Boson Dataset, which is made publicly available by the University of California at Irvine’s Machine Learning Repository. As depicted in Figure 2, the regression line for the Hemopexin protein and Lactate Dehydrogenase enzyme data in the Duchennes table fits poorly, as indicated by the MeanSquaredError, RMSE, LackOfFitSumOfSquares and R2 results. The graphic below it demonstrates clearly that the first two float columns in the Higgs Boson dataset don’t fit well on a regression line either. Neither result is surprising, given that the correlation coefficients for both are near zero, which indicates a lack of relationship between the variables (a strongly negative value would indicate a strongly inverse relationship, whereas positive values would do the opposite).
…………What was truly surprising is how well the latter query performed on the Higgs Boson table, which takes up nearly 6 gigabytes in the DataMiningProjects database I assembled from various practice datasets. It only took 2:04 to execute on my clunker of a development machine, which hardly qualifies as a real database server. The execution plan in Figure 4 may provide clues as to why: most of the costs come in terms of three non-clustered index seeks, which is normally what we want to see. Nor are there any expensive Sort operators. Most of the others are parallelism and Compute Scalar operators that come at next to no cost. In last week’s article, I mentioned that it really doesn’t hurt to calculate both the Jarque-Bera and D’Agostino-Pearson Omnibus Test together, since the costs are incurred almost exclusively in traversing a whole table to derive the constituent skewness and kurtosis values. In the same way, it doesn’t cost us much to calculate the MSE, RMSE and Lack-of-Fit Sum-of-Squares together in Step 4, once we’ve already gone to the trouble of traversing the whole table by calculating one of them. It also costs us just a single operation to derive the R2 once we’ve done the regression and have the correlation, and nothing at all to return the covariance, correlation, slope and intercept if we’re going to go to the trouble of getting the R2. The execution plan essentially bears this out, since the Index Seeks perform almost all the work.

Figure 4: Execution Plan for the Regression Goodness-of-Fit Test on the Higgs Boson Dataset (click to enlarge)
[Image: GoodnessOfFitRegressionExecutionPlan]

…………There are of course limitations and drawbacks with this procedure and the formulas it is meant to reflect. It is always possible that I’m not implementing them accurately, since I’m writing this in order to learn the topic, not because I know it already; as usual, my sample code is more of a suggested means of implementation, not a well-tested workhorse ready to go into a production environment tomorrow. I still lack the level of understanding I wish I had of the internal mechanics of the equations; in fact, I’m still having trouble wrapping my head around such concepts as the difference between the coefficient of determination and variance explained, which seem to overlap quite closely.[x] Moreover, the MSE can place too much weight on outliers for some use cases, even when implemented accurately.[xi] The RMSE also can’t be used to compare regressions between different sets of columns, “as it is scale-dependent.”[xii]
…………The values for some of the stats returned above also suffer from a different scaling issue, in that they tend to increase too quickly as the number of records accumulates. They’re not in the same league as the truly astronomical values I’ve seen with other stats I’ve surveyed in the last two tutorial series, but the fact that the Lack-of-Fit Sum-of-Squares reaches eight digits above the decimal place is bothersome. That’s about the upper limit of what end users can read before they have to start counting the decimal places by hand, which rapidly degrades the legibility of the statistic.

Traditional Metrics and Tests in the “Big Data” Era

…………That just adds to my growing conviction that the vastly larger datasets in use today may require new statistical measures, or rescaling of tried-and-true ones, in order to accommodate their sheer size. We shouldn’t have to sacrifice the main strength of Big Data[xiii], which is the fact that we can now quickly derive very detailed descriptions of very large datasets, just to use these methods. As we have seen throughout the last two tutorial series, this issue has consistently thrown a monkey wrench into many of the established statistical procedures, which were designed decades or even centuries ago with datasets of a few dozen records in mind, not several million. We’ve seen it in the exponent and factorial operations required to derive many well-established measures, which simply can’t be performed at all on values of more than a few hundred without leading to disastrous arithmetic overflows and loss of precision. We’ve seen it again this week and last, in which the high record counts made the final statistics a little less legible.
…………We’ve also seen it in some of the hypothesis testing methods, which require lookup tables that often only go up to record counts of a few hundred at best. That’s a problem that will rear its head again in a few weeks when I try, and fail, to implement the popular Shapiro-Wilk Test of normality, which supposedly has excellent statistical power yet is only usable up to about 50 records.[xiv] Such goodness-of-fit tests for probability distributions can also be applied to regression, to determine if the residuals are distributed in a bell curve; cases in point include the histograms discussed in Outlier Detection with SQL Server, Part 6.1: Visual Outlier Detection with Reporting Services and the Chi-Squared Test, which I’ll cover in a few weeks.[xv] Rather than applying these tests to regression in this segment of the series, I’ll introduce the ones I haven’t covered yet separately. For the sake of simplicity, I won’t delve into complicated topics like lack-of-fit testing on variants like multiple regression at this point. It would be useful, however, to finish off this segment of the series next week by introducing the Hosmer-Lemeshow Test, which can be applied to logistic regression, one of the most popular alternative regression algorithms. As discussed in A Rickety Stairway to SQL Server Data Mining, Algorithm 4: Logistic Regression, a logistic function is applied to produce an S-shaped curve that bounds outcomes between 0 and 1, which fits many user scenarios. Thankfully, the code will be much simpler to implement now that we’ve got this week’s T-SQL and concepts out of the way, so it should make for an easier read.

 

[i] See the Wikipedia page “Mean Squared Error” at http://en.wikipedia.org/wiki/Mean_squared_error

[ii] Hopkins, Will G., 2001, “Root Mean-Square Error (RMSE),” published at the A New View of Statistics web address http://www.sportsci.org/resource/stats/rmse.html

[iii] For a more in depth explanation of the interrelationships between these stats and why they operate as they do, see Hopkins, Will G., 2001, “Models: Important Details,” published at the A New View of Statistics web address http://www.sportsci.org/resource/stats/modelsdetail.html#residuals

[iv] See the Wikipedia page “Root Mean Square Deviation” at http://en.wikipedia.org/wiki/Root-mean-square_deviation

[v] See the succinct explanation at the Wikipedia page “Errors and Residuals in Statistics” at http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics

[vi] Mukesh Mahadeo’s reply to the thread  “What is Mean by Lack of Fit in Least Square Method?” at the Yahoo! Answers web address https://in.answers.yahoo.com/question/index?qid=20100401082012AAf0yXg

[vii] Which I derived from the formulas at the Wikipedia webpage “Lack-of-Fit Sum of Squares” at http://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares

[viii] See Cox, Nick, 2013, reply to the CrossValidated thread “Are Goodness of Fit and Lack of Fit the Same?” on Aug. 2, 2013. Available at the web address http://stats.stackexchange.com/questions/66311/are-goodness-of-fit-and-lack-of-fit-the-same

[ix] As mentioned in that article, the original sources for the internal calculations included Hopkins, Will G., 2001, “Correlation Coefficient,” published at the A New View of Statistics web address http://www.sportsci.org/resource/stats/correl.html; the Dummies.com webpage “How to Calculate a Regression Line” at http://www.dummies.com/how-to/content/how-to-calculate-a-regression-line.html; the Wikipedia page “Mean Squared Error” at http://en.wikipedia.org/wiki/Mean_squared_error and the Wikipedia page “Lack-of-Fit Sum of Squares” at http://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares.

[x] I’ve seen competing equations in the literature, one based on residual sum-of-squares calculations and the other on squaring of the correlation coefficient. The wording often leads be to believe that they arrive at the same results through different methods, but I’m not yet certain of this.

[xi] See the Wikipedia page “Mean Squared Error” at http://en.wikipedia.org/wiki/Mean_squared_error

[xii] See the Wikipedia page “Root-Mean-Square Deviation” at http://en.wikipedia.org/wiki/Root-mean-square_deviation

[xiii] It’s a buzzword, I know, but it’s the most succinct term I can use here.

[xiv] Some sources say up to a couple hundred records, and I’m not familiar enough with the topic to discern which limit applies in which cases. It’s a moot point, however, because we need such tests to work on datasets of several hundred million rows.

[xv] See the undated publication “Goodness of Fit in Linear Regression” retrieved from Lawrence Joseph’s course notes on Bayesian Statistics on Oct. 30, 2014, which are published at the website of the McGill University Faculty of Medicine. Available at the web address http://www.medicine.mcgill.ca/epidemiology/joseph/courses/EPIB-621/fit.pdf. No author is listed but I presume that Prof. Joseph wrote it. This is such a good source of information for the topic of this article that I couldn’t neglect to work in a mention of it.

 

Goodness-of-Fit Testing with SQL Server, part 3.2: D’Agostino’s K-Squared Test

By Steve Bolton

…………In the last edition of this amateur series of self-tutorials on goodness-of-fit testing with SQL Server, we discussed the Jarque-Bera Test, a measure that unfortunately doesn’t scale well on datasets of the size that DBAs are accustomed to using. The problem is not with the usefulness of the statistics that it is composed of, because skewness and kurtosis are easy to interpret and valuable in their own right as measures of shape and for purposes of outlier detection. Usually scaling problems signify performance issues, but the resource consumption and execution of the Jarque-Bera Test aren’t bad by any means; the issue is that the statistic itself increases to ungodly numbers that are difficult to interpret, precisely because it was designed with smaller datasets in mind. In this week’s installment, I’ll provide an alternative measure that also builds upon skewness and kurtosis and can be calculated in almost exactly the same amount of time as Jarque-Bera, but without the cumbersome scaling issue.
…………The improved interpretability of D’Agostino’s K-Squared Test comes at the cost of more complicated internal calculations, which turn out to be trivial in comparison to the main computational costs, which consist almost exclusively of index seeks and sorts in the execution plan issued by the SQL Server query optimizer. This added complexity is only a problem if one wants to check to see what’s going on under the hood in these calculations, which is rarely necessary in most use cases after the code has been validated. As I pointed out at every opportunity in my earlier mistutorial series A Rickety Stairway to SQL Server Data Mining, most end users have about as much need to understand how such statistics are derived as the average driver needs to know the engineering details of their car; in many cases it is a mistake to overload them with superfluous information like incomprehensible math equations. That is why I haven’t posted any such formulas in the last few tutorial series I’ve posted here. End users should understand enough to interpret the results in light of their domain knowledge, just as the average rush hour commuter needs to know how to read a gas gauge and transmission fluid stick properly. Those who write the computer code that implement these stats obviously need to grasp the inner workings at a much deeper level, but not to the point that they’re designing their own formulas; data mining programmers essentially occupy the middle zone halfway between end users and mathematicians, in the same way that garage mechanics reside in the niche between drivers and automotive engineers. It is my goal to learn the skills necessary to serve at this midpoint, but as I usually point out, I haven’t reached it yet; I hope to use blog posts of this kind to familiarize myself with these topics better, not because I already know the material well. And that is why I cannot explain in great detail why D’Agostino’s K-Squared Test (a.k.a. the D’Agostino-Pearson Omnibus Test) works as it does. Like a typical mechanic, I was able to get it running sufficiently well that it returns the expected results in a potentially reliable way, but I don’t have sufficient skill to comment on why it was designed as it was. Nevertheless I did pick up a few things while reading sources like D’Agostino, et al.’s 1990 paper in The American Statistician[1] and as usual, the Wikipedia[2] article on the topic, which may not be a professional source but qualifies as the most convenient repository for every math formula under the sun.
…………As you can gather from the T-SQL code in Figure 1, the underlying equations I found in the former source are fairly complicated and involve the derivation of several intermediate statistics in between the sample skewness and kurtosis and the final metric. Although the latter source is only an introduction to the topic, it did provide some invaluable insights into the aim of these calculations, albeit without explaining why those particular calculations satisfied those aims. Apparently, the @Z1 and @Z2 measures are meant to bring the skewness and kurtosis in line with the standard normal distribution to solve their “frustratingly slow” approach to the distribution limit, which is a scaling issue of sorts.[3] The SELECT statement towards the end that assigns the final value to the @K2Test combines these internal calculations into a single result so that the skewness and kurtosis can be measured together, in what is technically known as an “omnibus test.”[4] After all these esoteric calculations, that final assignment is actually quite simple. I’m sure the nitty-gritty details are in the original academic articles published by statisticians Ralph D’Agostino and E.S. Pearson in the early ‘70s, but I couldn’t find any publicly accessible copies; judging from the difficulty I had in following the 1990 paper, much of it would still have been over my head anyway. The important thing to know is that I was able to follow the equations sufficiently well that the code below returns the correct results for the examples provided by D’Agostino, et al.
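…………For reference, the chain of calculations runs in rough notation from the sample skewness and kurtosis to the transformed scores Z1 and Z2, and the final omnibus statistic simply sums their squares; in hypothesis testing it would ordinarily be compared against a chi-squared distribution with two degrees of freedom, a step I omit for the reasons given in previous articles:

g_1 = \frac{\tfrac{1}{n}\sum_i (x_i - \bar{x})^3}{\left(\tfrac{1}{n}\sum_i (x_i - \bar{x})^2\right)^{3/2}}, \qquad g_2 = \frac{\tfrac{1}{n}\sum_i (x_i - \bar{x})^4}{\left(\tfrac{1}{n}\sum_i (x_i - \bar{x})^2\right)^{2}}, \qquad K^2 = Z_1(g_1)^2 + Z_2(g_2)^2 \approx \chi^2(2)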

Figure 1: T-SQL Code for the D’Agostino-Pearson K-Squared Test
CREATE PROCEDURE [Calculations].[NormalityTestDAgostinosKSquaredSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128),@ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400)
AS

DECLARE @SchemaAndTableName nvarchar(400),@SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName
SET @SQLString = 'DECLARE @Mean float, @StDev float, @Count as bigint, @Alpha decimal(38,37),
@One float = 1, @Two float = 2, @Three float = 3, @Four float = 4, @Five float = 5,
@Six float = 6, @Seven float = 7, @Eight float = 8, @Nine float = 9,
@TwentyFour float = 24, @TwentySeven float = 27, @Seventy float = 70,
@RecpiprocalOfNSampleSize float, @DifferenceFromSampleMeanSquared float, @DifferenceFromSampleMeanCubed float, @DifferenceFromSampleMeanFourthPower float,
@SampleSkewness float, @SampleKurtosis float,
@Y float, @B2 float, @WSquared float, @Z1 float, @Z2 float, @Sigma float, @E float, @VarianceKurtosis float,
@StandardizedKurtosis float, @ThirdStandardizedMomentOfKurtosis float, @A float, @K2Test float

SELECT @Count = Count(' + @ColumnName + '), @Mean = Avg(CAST(' + @ColumnName + ' AS float)), @StDev = StDev(' + @ColumnName + ')
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL

SELECT @RecpiprocalOfNSampleSize = @One / @Count

DECLARE @CountPlusOne float = @Count + @One, @CountPlusThree float = @Count + @Three, @CountPlusFive float = @Count + @Five,
@CountPlusSeven float = @Count + @Seven, @CountPlusNine float = @Count + @Nine, @CountMinusTwo float = @Count - @Two,
@CountMinusThree float = @Count - @Three

DECLARE @CountPlusOneTimesCountPlusThree float = (@Count + @One) * (@Count + @Three)

SELECT @DifferenceFromSampleMeanSquared = SUM(Power(DifferenceFromSampleMean, 2)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC),
       @DifferenceFromSampleMeanCubed = SUM(Power(DifferenceFromSampleMean, 3)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC),
       @DifferenceFromSampleMeanFourthPower = SUM(Power(DifferenceFromSampleMean, 4)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC)
FROM (SELECT ' + @PrimaryKeyName + ', CAST(' + @ColumnName + ' AS float) - @Mean as DifferenceFromSampleMean -- make a single pass across the table?
              FROM ' + @SchemaAndTableName + '
              WHERE ' + @ColumnName + ' IS NOT NULL) AS T1

SELECT @SampleSkewness = (@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanCubed) / (Power((@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanSquared), 1.5))

SELECT @SampleKurtosis = (@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanFourthPower) / (Power((@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanSquared), 2))

-- perform operations on the Skewness
SELECT @Y = @SampleSkewness * Power(((@CountPlusOneTimesCountPlusThree) / (@CountMinusTwo * @Six)), 0.5) -- do the brackets signify multiplication? ****
SELECT @B2 = (@CountPlusOneTimesCountPlusThree * (@Three * ((Power(@Count, 2) + (@TwentySeven * @Count)) - @Seventy))) / (@CountMinusTwo * @CountPlusFive * (@Count + @Seven) * (@Count + @Nine))
SELECT @WSquared = Power(@Two * (@B2 - @One), 0.5) - @One
SELECT @Alpha = Power(Abs(@Two / (@WSquared - @One)), 0.5)
SELECT @Sigma = @One / (Power(Abs((Log(Abs(Power(@WSquared, 0.5))))), 0.5))
-- I am not sure if this sigma is related to StDev or not

SELECT @Z1 = @Sigma * Log((@Y / @Alpha) + Power((Power((@Y / @Alpha), 2) + @One), 0.5)) '

SET @SQLString = @SQLString + '-- perform operations on the kurtosis
SELECT @E = (@Three * (@Count - @One)) / @CountPlusOne -- according to the paper, this is the mean for the kurtosis
SELECT @VarianceKurtosis = (@TwentyFour * @Count * @CountMinusTwo * @CountMinusThree) / (Power(@CountPlusOne, 2) * @CountPlusThree * @CountPlusFive)
SELECT @StandardizedKurtosis = (@SampleKurtosis - @E) / Power(@VarianceKurtosis, 0.5)
SELECT @ThirdStandardizedMomentOfKurtosis = ((@Six * ((Power(@Count, 2) - (@Five * @Count)) + @Two)) / (@CountPlusSeven * @CountPlusNine)) *
Power((@Six * @CountPlusThree * @CountPlusFive) / (@Count * @CountMinusTwo * @CountMinusThree), 0.5)

SELECT @A = @Six + ((@Eight / @ThirdStandardizedMomentOfKurtosis) * ((@Two / @ThirdStandardizedMomentOfKurtosis) + Power(@One + (@Four / @ThirdStandardizedMomentOfKurtosis), 0.5)))
SELECT @Z2 = ((@One - (@Two / (@Nine * @A))) - Power((@One - (@Two / @A)) / (@One + (@StandardizedKurtosis * Power((@Two / (@A - @Four)), 0.5))), (@One / @Three))) / Power((@Two / (@Nine * @A)), 0.5)

SELECT @K2Test = Power(@Z1, 2) + Power(@Z2, 2)

-- uncomment this to debug the internal calculations:
-- SELECT @Alpha, @Sigma, @Y AS T, @B2 AS B2, @WSquared AS WSquared, @E AS E, @VarianceKurtosis AS VarianceKurtosis, @StandardizedKurtosis AS StandardizedKurtosis, @ThirdStandardizedMomentOfKurtosis AS ThirdStandardizedMomentOfKurtosis, @A AS A, @DifferenceFromSampleMeanSquared, @RecpiprocalOfNSampleSize, @DifferenceFromSampleMeanCubed, @DifferenceFromSampleMeanFourthPower

SELECT @K2Test AS KSquaredTest, @SampleSkewness AS SampleSkewness, @SampleKurtosis AS SampleKurtosis, @Z1 as Z1, @Z2 as Z2, @Mean AS Mean, @StDev AS StDev'

--SELECT @SQLString -- uncomment this to debug dynamic SQL errors

EXEC (@SQLString)

…………A few explanations of why the code was written as it was are in order. The five parameters allow users to run the test on any table or view in any database they have sufficient access to, while the first declaration assists in implementing this. The dynamic SQL differs from some of the procedures I've posted in the past by the sheer number of reciprocals and constants that need to be precalculated early on, in order to avoid performing the same operations over again. The length of the dynamic SQL also necessitates the use of the second SET statement on the @SQLString, since such strings can't be assigned in one big gulp after a certain character limit, but can thankfully be added together easily; keep in mind that if this step is left out, the dynamic SQL may be unexpectedly truncated. This procedure also differs in the sense that I've chosen to use floats rather than the decimal data type, for the same reason I did in the article on Jarque-Bera: some of the internal calculations are performed on very small fractions, particularly the exponents and reciprocals, which SQL Server will sometimes convert to zeros in the case of the decimal data type. I also substituted named variables for many of the constants, such as @CountPlusOne, which are declared near the beginning of the dynamic SQL. This is due to the fact that SQL Server sometimes truncates the decimal points out of certain operations on integers; I haven't yet determined precisely what causes this, although using integers as dividends seems to trigger it most often. Consider this an experiment in discerning whether using named variables is more legible than using countless CAST operations, some of which would have to be buried deep within subqueries. By all means, feel free to copy and paste the constants back in if you know the answers to those questions. As with the Jarque-Bera Test, I'm not certain whether this K2 Test would retain its validity if we substituted the simpler calculations for the full population for the sample skewness and sample kurtosis, but those stats would be preferable if that were the case. As usual, I've provided a couple of lines of debugging code that can be uncommented if you need to adjust or verify the code, both near the end of the procedure. Be aware that due to my difficulty in reading the original equations, @StandardizedKurtosis may need to serve as the root instead of 0.5 (the square root) in my calculation for @Z2, and also in the calculation for the third standardized moment – but I doubt it, since this would throw off the calculations quite a bit. I also added several ABS function calls to avoid Invalid Floating Point Operation errors on T-SQL functions like POWER that can't handle imaginary roots, but this departure doesn't seem to affect the final results.
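…………The integer truncation mentioned above is easy to reproduce. The snippet below is only a minimal illustration of the behavior, assuming nothing beyond the database engine itself; it shows why declaring constants like @One and @Six as floats, or casting one operand, sidesteps the problem:

DECLARE @IntSix int = 6, @FloatSix float = 6
SELECT 1 / @IntSix AS IntegerDivision,              -- returns 0, because both operands are integers
       1 / @FloatSix AS FloatDivision,              -- returns 0.166666666666667
       CAST(1 AS float) / @IntSix AS CastWorkaround -- also returns 0.166666666666667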
…………The bottom line is that I tested this against the same stem-and-leaf plot of cholesterol data from the Framingham Heart Study that D'Agostino, et al. assessed their equations with and got pretty much the same results.[5] They got 1.02 and 4.58 for their sample skewness and kurtosis and 14.75 for the final K2 test statistic, which was derived from Z1 and Z2 values of 3.14 and 2.21 respectively; my results were 1.0235482596477, 4.57738778764656 and 14.7958406166879 for the sample skewness, kurtosis and test statistic respectively, which were derived from values of 3.13939241925962 and 2.22262364213628 for the intermediate Z1 and Z2 stats. It is possible that the slight differences are due to undiscovered errors in my code, but some departure is expected given that I used variables and constants of much higher precision, which would lead to rounding discrepancies. I then tested it against two datasets I've been using throughout the last two tutorial series, one on the Duchennes form of muscular dystrophy made publicly available by Vanderbilt University's Department of Biostatistics and another on the Higgs Boson that can be downloaded from the University of California at Irvine's Machine Learning Repository. I derived the first resultset in Figure 2 from the query above it and the following two from queries like it on the first two float columns in the Higgs Boson dataset. Note that the test statistic is much larger for the Higgs Boson results – mainly because that table has 11 million rows, compared to just 209 for the Duchennes table – but isn't quite as inflated as in some of the Jarque-Bera results. One of them has seven digits to the left of the decimal point, which I'd wager is near the limit of numerical legibility for most people; beyond that point, it is hard to compare numbers by eye without resorting to the ungainly strategy of counting digits and mentally interpolating commas between every set of three.

Figure 2: Sample Results from the Duchennes and Higgs Boson Datasets
EXEC   [Calculations].[NormalityTestDAgostinosKSquaredSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'DuchennesTable',
              @ColumnName = N'LactateDehydrogenase',
              @PrimaryKeyName = N'ID'
D'Agostino-Pearson Result
D'Agostino-Pearson Result 2

D'Agostino-Pearson Result 3

…………The good news is that the procedure performed unexpectedly well; in fact, the first trial run took 3:43 on the first float column in the Higgs Boson table, i.e.  exactly the same execution time as for the Jarque-Bera Test in the last tutorial. After all of those arcane calculations you’d expect to see a rather messy execution plan, but as Figure 3 shows, this procedure isn’t all that hard to follow. The main costs were incurred by two non-clustered index seeks and a sort. This is because almost all of the work occurs in retrieving the values and performing simple calculations for each row, not in the fancy math that occurs after they’ve been summarized, which turns out to have an inconsequential computation cost. The main burden of these calculations falls exactly where we want it: on the brains of the coders and testers, not on the end users, to whom the procedure will be a well-oiled black box after error-checking, validation and SQL injection protection code are added.

Figure 3: Execution Plan for the D’Agostino-Pearson Omnibus Test
D'Agostino-Pearson Execution Plan

…………There's more good news: since most of the performance cost occurs in the same seeks, sorts and initial calculations of skewness and kurtosis that the Jarque-Bera Test uses, there's no real penalty incurred by computing it together with the D'Agostino-Pearson Omnibus Test. If we had to sacrifice one of the two, however, it would be the Jarque-Bera, since I have heard anecdotes about statisticians preferring the D'Agostino-Pearson over it, but never the reverse. One of the reasons the K2 is favored is the existence of numerous studies (including some written by D'Agostino Sr.) demonstrating that it has better statistical power, which is a numerical measure of how often the actual effects of a variable are detected by a particular test.[6] This metric is applicable to large sample sizes, unlike the Shapiro-Wilk Test[7], and can be used for both one-sided and two-sided hypothesis tests.[8] As I learn more about the field I'm shying further and further away from hypothesis tests, on the grounds that the small sample sizes and narrow focus aren't suited to typical SQL Server user scenarios, like exploratory data mining on large datasets. Nevertheless, it doesn't hurt to know that the D'Agostino-Pearson Test is flexible enough to be used for these purposes. Moreover, it can apparently be applied to goodness-of-fit testing on datasets that don't follow the Gaussian or "normal" distribution, i.e. the bell curve, which isn't true of many of its competitors. In fact, the authors of that 1990 study go so far as to say that "The extensive power studies just mentioned have also demonstrated convincingly that the old warhorses, the chi-squared test and the Kolmogorov test (1933), have poor power properties and should not be used when testing for normality."[9] This is by no means the first time I've heard such sentiments expressed by statisticians about these two rival metrics, which still seem to be implemented far more frequently in practice despite such advice.
…………Later on in this series I'll explain how to implement both the Chi-Squared Test and Kolmogorov-Smirnov Test in T-SQL, but I'm going to skip over a couple of other measures related to skewness and kurtosis, at least for the time being. Among these are Mardia's multivariate versions of skewness and kurtosis, which I will save for some far-flung future when grappling with the complexity added by dealing with multiple columns isn't too overwhelming; perhaps someday I'll tack a segment onto the end of this series for multivariate goodness-of-fit tests, like the Cox-Small Test and Smith and Jain's Test.[10] I've organized this series in the order of how difficult the concepts and underlying code are, which brings us to the topic of regression-related methods of goodness-of-fit testing. As explained in the last article, skewness and kurtosis really aren't that hard to grasp intuitively, and as I dealt with in A Rickety Stairway to SQL Server Data Mining, Algorithm 2: Linear Regression, the core concepts behind regression aren't that difficult either. The variants of regression can get quite complicated, but drawing a line on a graph based on the relationship between two variables is something every college freshman has been exposed to. The stats based on these lines can also vary in their intricacy; there is apparently even a version of Jarque-Bera for multiple regression[11], which I'll skip over for now to avoid the added complexity of dealing with three or more variables. The code required to implement regression stats for purposes of normality testing can also require differing levels of sophistication, as we'll see shortly after New Year's.

[1] D’Agostino, Ralph B.; Belanger, Albert and D’Agostino Jr., Ralph B, 1990, “A Suggestion for Using Powerful and Informative Tests of Normality,” pp. 316–321 in The American Statistician. Vol. 44, No. 4. Available online at http://www.ohio.edu/plantbio/staff/mccarthy/quantmet/D’Agostino.pdf

[2] See the Wikipedia article “D’Agostino’s K-Squared Test” at  http://en.wikipedia.org/wiki/D’Agostino’s_K-squared_test

[3] IBID.

[4] “D’Agostino and Pearson (1973) presented a statistic that combines….to produce an omnibus test of normality. By omnibus, we mean it is able to detect deviations from normality due to either skewness or kurtosis.” See p. 318, D’Agostino, et al., 1990.

[5] IBID., p. 318.

[6] For a better explanation of the term than I can give, see Hopkins, Will G., 2001, “Generalizing to a Population: ESTIMATING SAMPLE SIZE continued,” published at the A New View of Statistics web address http://www.sportsci.org/resource/stats/ssdetermine.html. I highly recommend his website for those who are new to the field of stats, like me.

[7] p. 319, D’Agostino, et al., 1990.

[8] IBID., p. 318.

[9] IBID., p. 316.

[10] I don’t know anything about these tests, but I’ve seen them mentioned in sources like the Wikipedia article “Multivariate Normal Distribution” at http://en.wikipedia.org/wiki/Multivariate_normal_distribution

[11] See the Wikipedia page “Jarque-Bera Test” at http://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test

Goodness-of-Fit Testing with SQL Server, part 3.1: Skewness, Kurtosis and the Jarque-Bera Test

By Steve Bolton

…………In the last installment of this series of amateur self-tutorials on using SQL Server to identify probability distributions, we saw how devices like probability plots can provide simple visual confirmation of a dataset's shape. I considered doing a quick detour into Q-Q plots, but decided against it because of their simplicity; instead of putting values for the distribution being tested on the horizontal axis, Q-Q plots chop them up into partitions of equal size, a task that is obviously trivial to implement with NTILE. I'm more eager to discuss skewness and kurtosis, two of the oldest, tried-and-true measures of goodness-of-fit[1] – particularly for the normal or "Gaussian" distribution, i.e. the bell curve – precisely because they are often easy to spot with the naked eye. They are numerical measures rather than visualizations, but are often self-evident within graphics like histograms. For example, the third histogram in my recent post Outlier Detection with SQL Server Part 6.1 – Visual Outlier Detection with Reporting Services is a striking example of a highly skewed column, while the one below it obviously follows a bell curve more closely and has relatively low skewness and kurtosis; later in this article, I'll run some sample T-SQL code against the same data to derive hard numbers for both. I've seen several good explanations of the meanings of skewness and kurtosis in sources at various sites on the Internet, including one of my favorites, the National Institute for Standards and Technology's Engineering Statistics Handbook, which defines them thus: "Skewness is a measure of symmetry, or more precisely, the lack of symmetry. A distribution, or data set, is symmetric if it looks the same to the left and right of the center point…Kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution. That is, data sets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Data sets with low kurtosis tend to have a flat top near the mean rather than a sharp peak. A uniform distribution would be the extreme case."[2] Another succinct explanation is given by Tompkins Cortland Community College adjunct faculty member Stan Brown, who says that "The histogram can give you a general idea of the shape, but two numerical measures of shape give a more precise evaluation: skewness tells you the amount and direction of skew (departure from horizontal symmetry), and kurtosis tells you how tall and sharp the central peak is, relative to a standard bell curve." [3]
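…………To illustrate just how trivial that NTILE partitioning is, the sketch below carves a column into percentiles; it reuses the DataMiningProjects.Health.DuchennesTable and its LactateDehydrogenase column that appear later in this article, but any numeric column would do, and this is only a starting point rather than a full Q-Q Plot implementation:

SELECT LactateDehydrogenase,
       NTILE(100) OVER (ORDER BY LactateDehydrogenase) AS Percentile -- 100 equal-sized partitions
FROM DataMiningProjects.Health.DuchennesTable
WHERE LactateDehydrogenase IS NOT NULL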
…………I already had some experience with both measures way back in A Rickety Stairway to SQL Server Data Mining, Part 14.2: Writing a Bare Bones Plugin Algorithm and A Rickety Stairway to SQL Server Data Mining, Part 14.6: Custom Mining Functions, when I made crude attempts to implement skewness and kurtosis in SSDM in order to illustrate the capabilities of its custom algorithms and functions. That called for fairly simple stats which wouldn’t distract from the main mission; I didn’t really even make much of an effort to understand them, because it wasn’t germane to the lesson at hand. Since then I’ve discovered that it’s easier for me to grasp both stats by viewing them as numerical measures of lopsidedness on a histogram that is divided into imaginary stripes, in which skewness detects how uneven a distribution is from one vertical band to another, whereas kurtosis measures how squashed the distribution curve is on the horizontal axis. Either way you look at it, the measures are still simple enough to explain in layman’s terms, which is one of the strengths of the set of normality tests built from them.
…………The most well-known extension of these somewhat forgotten stats is the Jarque-Bera Test, which only dates back to the 1970s despite being one of the earliest examples of normality testing. All of these measures have fallen out of favor with statisticians to some extent, for reasons that will be apparent shortly, but one of the side effects of this is that it is a little more difficult to find variations on them that are more suited to the unique needs of the SQL Server community. One of the strengths of data mining on database servers like SQL Server is that you typically have such an enormous number of records to draw from that you can actually perform calculations on the full population, or a proportion close to it. In ordinary statistics, however, you're often limited to making inferences based on small samples of just a few dozen or a few hundred rows, out of a much larger population that is often of unknown size; the results can still be logically valid, but often only if other preconditions are met on the data (including normality tests, which are often not performed). For that reason, I usually prefer to leverage SQL Server's fast set-based retrieval methods to quickly calculate statistics on full populations whenever possible, especially when there are simpler versions of the mathematical formulas available for the full dataset. Skewness and kurtosis are among those measures that can be computed in a simpler way when using the whole population[4], but I've opted here to use the more intensive formulas for sample skewness and sample kurtosis for one reason only: it might be possible to substitute population skewness and kurtosis for their sampling counterparts in the formulas for the Jarque-Bera Test, but I can't find any online sources that mention such a swap. I suspect that it probably would be logically valid, but I took the more conservative approach in Figure 1 by employing the usual Jarque-Bera formula, which really isn't much more difficult to compute.[5]
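…………For reference, the statistic assigned near the end of Figure 1 is the standard Jarque-Bera formula, JB = (n / 6) * (S^2 + (1/4) * (K - 3)^2), where n is the row count, S is the sample skewness and K is the sample kurtosis; the (K - 3) term is the "excess kurtosis" returned in the final SELECT.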

Figure 1: Code for the Jarque-Bera Test Procedure
CREATE PROCEDURE [Calculations].[NormalityTestJarqueBeraSkewnessAndKurtosisSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400)
AS

DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @SchemaAndTableName = @DatabaseName + '.' + @SchemaName + '.' + @TableName
SET @SQLString = 'DECLARE
@Mean float,
@StDev float,
@Count as bigint,
@One float = 1,
@RecpiprocalOfNSampleSize float,
@DifferenceFromSampleMeanSquared float,
@DifferenceFromSampleMeanCubed float,
@DifferenceFromSampleMeanFourthPower float,
@SampleSkewness float,
@SampleKurtosis float,
@JarqueBeraTest float

SELECT @Count = Count(' + @ColumnName + '), @Mean = Avg(CAST(' + @ColumnName + ' AS float)), @StDev = StDev(' + @ColumnName + ')
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL

SELECT @RecpiprocalOfNSampleSize = @One / @Count

SELECT @DifferenceFromSampleMeanSquared = SUM(Power(DifferenceFromSampleMean, 2)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC),
       @DifferenceFromSampleMeanCubed = SUM(Power(DifferenceFromSampleMean, 3)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC),
       @DifferenceFromSampleMeanFourthPower = SUM(Power(DifferenceFromSampleMean, 4)) OVER (ORDER BY ' + @PrimaryKeyName + ' ASC)
FROM (SELECT ' + @PrimaryKeyName + ', CAST(' + @ColumnName + ' AS float) - @Mean as DifferenceFromSampleMean -- make a single pass across the table
              FROM ' + @SchemaAndTableName + '
              WHERE ' + @ColumnName + ' IS NOT NULL) AS T1

SELECT @SampleSkewness = (@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanCubed) / (Power((@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanSquared), 1.5))

SELECT @SampleKurtosis = (@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanFourthPower) / (Power((@RecpiprocalOfNSampleSize * @DifferenceFromSampleMeanSquared), 2))

SELECT @JarqueBeraTest = CAST((CAST(@Count AS float) / CAST(6 AS float)) AS Decimal(38,12)) * CAST((Power(@SampleSkewness, 2) + (0.25 * Power((@SampleKurtosis - 3), 2))) AS Decimal(38,12))

SELECT @JarqueBeraTest AS JarqueBeraTest, @SampleSkewness AS SampleSkewness, @SampleKurtosis AS SampleKurtosis, @SampleKurtosis - 3 AS ExcessKurtosis, @Mean AS Mean, @StDev AS StDev
-- to debug the internal calculations, uncomment the rest of this line: , @RecpiprocalOfNSampleSize, @DifferenceFromSampleMeanSquared, @DifferenceFromSampleMeanCubed, @DifferenceFromSampleMeanFourthPower'

--SELECT @SQLString -- uncomment this to debug dynamic SQL errors
EXEC (@SQLString)

…………The end of the final SELECT can be uncommented to debug the internal calculations. I've also reserved the second-to-last line for a SELECT that can be uncommented to debug the dynamic SQL string, as is standard in most of my procedures. Much of the initial code ought to be familiar to readers of this series and the one on outliers, since I use many of the same parameters and internal variables, and apply some of the usual preliminary SET operations on them. As usual, I calculate the values of some reusable internal stats and then cache them in dynamic SQL variables so that we don't have to recalculate them again, as in the case of the reciprocal of the count and the deviation computation in the lowest-level subquery. I'm experimenting with declaring constants like 1 as high-precision data types to prevent situations where SQL Server sometimes truncates the values during calculations, which can lead to erroneous results or, at best, messy code full of casts deep within subqueries to avoid such errors. One departure from the norm is the use of floats rather than the decimal data type in the dynamic SQL calculations. The square, cube and quartic operations can result in really high and low values, which may in turn cause divide-by-zero errors if they're rounded down to nothing or arithmetic overflows if they're rounded too high, so I resorted to using float data types for the first time in any of my mistutorial series. This may entail some loss of precision in the internal calculations, but shouldn't have much of an effect on the final test statistic. It is not uncommon for this result to seem outlandishly high when the underlying distribution is abnormal.[6]
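…………The zero-conversion issue mentioned above is easy to demonstrate with the POWER function alone. The snippet below is just a minimal illustration of the data type behavior, not part of the procedure:

SELECT POWER(10, -2) AS IntegerInput,              -- returns 0, because the result type matches the integer input
       POWER(10.0, -2) AS NumericInput,            -- returns 0.0, truncated to the single decimal place of the input
       POWER(CAST(10 AS float), -2) AS FloatInput  -- returns 0.01 as expected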

Figure 2: Sample Results from the Duchennes Table and HiggsBosonTable
EXEC   [Calculations].[NormalityTestJarqueBeraSkewnessAndKurtosisSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'DuchennesTable',
              @ColumnName = N'LactateDehydrogenase',
              @PrimaryKeyName = N'ID'

 

JarqueBeraResult1 JarqueBeraResult2 JarqueBeraResult3

…………The query in Figure 2 produced the results in the graphic immediately below it, in which I tested the procedure on the LactateDehydrogenase column of a dataset on the Duchennes form of muscular dystrophy, which is made publicly available by Vanderbilt University's Department of Biostatistics. The procedure performed surprisingly well when deriving the other two result sets, clocking in at 3:43 and 3:42 minutes on the first two float columns of the Higgs Boson Dataset, which I downloaded from the University of California at Irvine's Machine Learning Repository and converted into a SQL Server table. It has 11 million rows and takes up about 6 gigabytes of the DataMiningProjects database I created for these tutorial series, which makes it ideal for stress-testing. Keep in mind that my clunker of a development machine hardly qualifies as a professional database server, so your results will probably be spectacularly better – especially after the code has been subjected to query tuning by one of the countless DBAs who know the ins and outs of T-SQL a lot better than I do. As evinced in Figure 3, the execution plan turned out to be a lot easier to interpret than some of the more sophisticated code I posted in the last tutorial series, with two seeks and two sorts taking up the bulk of the computational effort.

Figure 3: Execution Plan for the Jarque-Bera Procedure
JarqueBeraExecutionPlan

…………The results in Figure 2 are a powerful illustration of one of the weaknesses of the Jarque-Bera Test, i.e. its lack of scaling. The more the values of a column accumulate in the internal calculations, the larger the test results can become; that is why the 209 rows of the LactateDehydrogenase column had much higher skewness and kurtosis scores than the results for Column1 and Column2 of the Higgs Boson table, yet had a Jarque-Bera score that was several orders of magnitude smaller. I'm sure that by now some statistician has developed a scaling mechanism to get around this problem, but I question whether it is worth it for our purposes, for "…it is not without weakness. It has low power for distributions with short tails, especially for bimodal distributions. Other authors have declined to include its data in their studies because of its poor overall performance."[7] The latter wasn't as much of an issue as expected in this example, but another problem frequently encountered in the last couple of tutorial series reared its head again: the lack of hard-and-fast cut-off points. I couldn't find a clear winner among the competing criteria for when the Jarque-Bera stat disqualifies a dataset from being Gaussian (although that doesn't mean one doesn't exist, given that I lack experience with this field). They all seem to boil down to "rules of thumb"; out of those, I'm most inclined to favor M.G. Bulmer's, namely that skewness values beyond -1 or +1 are highly skewed, those within 0.5 of zero are pretty much symmetric and the rest are moderately skewed.[8]
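…………Expressed in code, Bulmer's rule of thumb amounts to nothing more than a CASE label. The sketch below simply plugs a sample skewness value into that rule by hand; the figure used here is only an illustration, standing in for whatever the procedure above returns:

DECLARE @SampleSkewness float = 1.0235482596477 -- a stand-in value; substitute the SampleSkewness returned above
SELECT CASE WHEN ABS(@SampleSkewness) > 1 THEN 'highly skewed'
            WHEN ABS(@SampleSkewness) >= 0.5 THEN 'moderately skewed'
            ELSE 'approximately symmetric' END AS BulmerInterpretation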
…………It may be that we are better off without such hard limits though, given that they limit us to simplistic either-or choices. Confidence intervals are another common way of forcing the same kind of choice, when there might not be a real crying need for such a limit. If we use a continuous measure, we can ask questions about how close a dataset comes to a particular distribution, such as the Gaussian bell curve, but we lose all of that flexibility when we resort to arbitrary cut-off criteria. This is a problem we'll probably see again as we work our way through the whole menagerie of goodness-of-fit tests, some of which blindly affix labels like "normal" and "not normal" in an almost manic-depressive, all-or-nothing way. It's always good to keep in mind that when we assign labels and test results in this way on a simple pass/fail basis, or perform binning and banding on the values within them, we're sacrificing a lot of information. For our purposes, we'd probably be better off preserving the skewness and kurtosis values as measures of how skewed or kurtic a dataset is, as well as how normal it might be, rather than tossing out all the insights and details the full numbers provide. Skewness and kurtosis aren't as useful in resolving the usual chicken-and-egg dilemma that accompanies outlier detection and goodness-of-fit testing, because we can't determine whether or not a dataset follows a distribution closely but has too many outliers, or if those outliers signify that a different distribution is a better match. Yet they do occupy a substantial niche in the matrix of use cases I hope to develop for goodness-of-fit, as I did for outlier detection methods in my last mistutorial series. They're simple enough for a layman to understand and easy to visualize, plus they represent really effective measures of the shape of a dataset, aside from whether or not that shape is applicable to goodness-of-fit testing. This makes them useful in their own right as primitive forms of data mining, in a sense. I'm not as enthused about the Jarque-Bera Test though, because it requires extra computational effort in order to derive results that lack adequate scaling, interpretation criteria and statistical power, even when implemented flawlessly by better programmers than myself. It may very well have valid uses in ordinary statistical applications, but its range of usefulness may be more constrained in the realm of database servers and Big Data. Perhaps D'Agostino's K-Squared Test, an alternative goodness-of-fit measure also built upon skewness and kurtosis, will prove more useful than the Jarque-Bera Test in next week's article.

 

[1] See the Wikipedia page “Normality Test” at http://en.wikipedia.org/wiki/Normality_test

[2] See National Institute for Standards and Technology, 2014,  “1.3.5.11 Measures of Skewness and Kurtosis,” published in the online edition of the Engineering Statistics Handbook. Available online at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h1.htm

[3] Brown, Stan, 2012, “Measures of Shape: Skewness and Kurtosis,” published Dec. 27, 2012 at the Tompkins Cortland Community College website. Available online at http://www.tc3.edu/instruct/sbrown/stat/shape.htm .

[4] See Brown, Stan, 2012.

[5] I derived this code from the formulas at Brown’s webpage and the Wikipedia entry “Jarque-Bera Test” at http://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test

[6] For example,  see the thread posted by the user named ipadawan on Oct. 13, 2011 in the CrossValidated forums, titled “Appropriate Probability Threshold for Jarque-Bera Test,” which is available online at the web address http://stats.stackexchange.com/questions/16949/appropriate-probability-threshold-for-jarque-bera-test

[7] See the Wikipedia entry for “Normality Test” again, at http://en.wikipedia.org/wiki/Normality_test

[8] I’m paraphrasing Brown, 2012, who cites Bulmer, M. G., 1979, Principles of Statistics. Dover Publication: New York. I also agree with Brown when he says that “… GraphPad suggests a confidence interval for skewness….I would say, compute that confidence interval, but take it with several grains of salt — and the further the sample skewness is from zero, the more skeptical you should be.” I have no issue with GraphPad, which I’ve never used before, but am not inclined to much stock in hard confidence intervals anyways.

 

Goodness-of-Fit Testing with SQL Server, part 2.1: Implementing Probability Plots in Reporting Services

By Steve Bolton

…………In the first installment of this series of amateur self-tutorials, I explained how to implement the most basic goodness-of-fit tests in SQL Server. All of those produced simple numeric results that are trivial to calculate, but in terms of interpretability, you really can't beat the straightforwardness of visual tests like Probability-Probability (P-P) and Quantile-Quantile (Q-Q) Plots. Don't let the fancy names fool you, because the underlying concepts aren't that difficult to grasp once the big words are subtracted. It is true that misunderstandings may sometimes arise over the terminology, since both types of visual goodness-of-fit tests are often referred to by the generic term of "probability plots" – especially when we use the Q-Q Plot for the Gaussian or "normal" distribution, i.e. the bell curve, which is often called the "normal probability plot."[1] Nevertheless, the meaning of either one is easy to grasp at a glance, even to an untrained eye: basically, we just build a scatter plot of data points, then compare it to a line that represents the ideal distribution of points for a perfect match. If they look like they follow the same path – usually a straight line – then we can conclude that the distribution we want to assess fits well. Visual analysis of this kind of course does not provide the kind of detail or rigor that more sophisticated goodness-of-fit tests can, but it serves as an excellent starting point, especially since it is relatively straightforward to implement scatter plots of this kind in Reporting Services.
…………As I found out the hard way, the difficult part with implementing these visual aids is not in representing the data in Reporting Services, but in calculating the deceptively short formulas in T-SQL. For P-P Plots, we need to compare two cumulative distribution functions (CDFs). That may be a mouthful, but one that is not particularly difficult to swallow once we understand how to calculate probability distribution functions. PDFs[2] are easily depicted in histograms, where we can plot the probability of the occurrence of each particular value in a distribution from left to right to derive such familiar shapes as the bell curve. Since probabilities in stochastic theory always start at 0 and sum to 1, we can plot them a different way, by summing them in succession for each associated value until we reach that ceiling. Q-Q Plots are a tad more difficult because they involve comparing the inverse of the CDFs, using what is alternately known as quantile or percent point functions[3], but not terribly so. Apparently the raison d’etre for these operations is to distill distributions like the Gaussian down to the uniform distribution, i.e. a flat line in which all outcomes are equally likely, for easier comparison.[4]

Baptism By Fire: Why the Most Common CDF is Also the Most Trying

                Most probability distributions have their own CDF and Inverse CDF, which means it would be time-consuming to implement them all in order to encompass all of the known distributions within a single testing program. The equations involved are not always terribly difficult – except, however, when it comes to the most important distribution of all, the Gaussian. No exact solutions are available – nor even mathematically possible – for our most critical, must-have use case, so we must rely on various approximations developed by mathematicians over the years. One of my key goals in writing self-tutorials of this kind is to acquire the ability to translate equations into T-SQL, Visual Basic and Multidimensional Expressions (MDX) quickly, but I got a baptism by fire when trying to decipher one of the symbols used in the error functions the normal distribution's CDF depends upon. The assistance I received from the folks at CrossValidated (StackOverflow's machine learning and statistics forum) was also indispensable in helping me wrap my head around the formulas, which are apparently a common stumbling block for beginners like me.[5] For the Inverse CDFs I also had to learn the concept of order statistics, i.e. rankits, which I can probably explain a lot more succinctly than some of the unnecessarily prolix resources I waded through along the way. The mathematical operation is really no more difficult than writing down all of your values in order from lowest to highest, then folding the sheet of paper in half and adding the corresponding points together. The Wikipedia discussion page "Talk:Rankit" helped tremendously; in fact, I ended up using the approximation for the R statistical package that is cited there in my implementation of the Gaussian Inverse CDF.[6]
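…………For the curious, approximations of this kind generally boil down to evaluating the inverse normal CDF at an adjusted uniform quantile for each rank. The widely cited Blom-style variant – which, as far as I can tell, belongs to the same family as the R workaround mentioned above, though I can't vouch that it matches it exactly – computes that quantile as (i - 3/8) / (n + 1/4) for the i-th of n ordered values. A trivial sketch:

DECLARE @i float = 5, @n float = 209 -- e.g. the 5th order statistic out of the 209 Duchennes rows
SELECT (@i - 0.375) / (@n + 0.25) AS AdjustedUniformQuantile
-- this value would then be fed through an inverse normal CDF, such as the one implemented
-- in the NormalDistributionInverseCDFFunction.sql file listed later in this article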
…………While slogging through the material, it began to dawn on me that it might not be possible to implement even a crude solution in T-SQL, at least for tables of the size SQL Server users encounter daily. Indeed, if it weren't for a couple of workarounds like the aforementioned one for R I found scattered across the Internet, I wouldn't have finished this article at all. Resorting to the use of lookup tables for known values really doesn't help us in the SQL Server world, because they simply don't go high enough. I was reunited with one of the same stumbling blocks I often encountered when writing my last mistutorial series, namely the fact that the available lookup tables for known rankit values simply don't go anywhere near high enough for the size of the tables used in SQL Server databases and cubes. For example, one compendium of statistical tables I consulted could only accommodate up to 50 values.[7]

In the Absence of Lookup Tables, Plan on Writing Intricate SQL

                This is merely a subset of the much broader issue of scaling statistical tests that were designed generations ago for much smaller sample sizes, of a few dozen or a few hundred records, to the thousands or millions of rows routinely seen in modern databases. In this case, I was forced to calculate the missing rankit values myself, which opened up a whole new can of worms. Another critical problem with implementing the CDF and Inverse CDF in code is that many of the approximations involve factorials, but those can only be calculated up to values around 170 without reaching the limit of the T-SQL float data type; this is actually quite good compared to other languages and statistical packages, which can often handle values only up to around 20.[8] Thankfully, Peter John Acklam published a decent approximation algorithm online, which can calculate Inverse CDFs for the normal distribution without factorials. It's only good to a precision of 1.15 x 10^-9, which may not be sufficient for some Big Analysis use cases, but this code ought to be enough to get a novice data miner started.[9]
…………The complexity of implementing probability plots is further increased when we factor in the need to write separate code for each distribution; most of them aren't as difficult as the Gaussian, which has no closed-form solution, but providing code for each of them would require dozens more articles. For that reason, I'll stick to the bell curve for now; consequently, I also won't get into a discussion of the lesser-known Probability Plot Correlation Coefficient Plot (PPCC), which is only applicable to distributions like the Weibull that have shape parameters, unlike the bell curve.[10] Another complication we have to deal with when using CDFs, inverse CDFs and PDFs is that different versions may be required for each, depending on whether you want to return a single value or a whole range, or whether such inputs as the mean, standard deviation and counts are already known or have to be computed on the fly. Later in this series we will probably have to make use of some of these alternate versions for more advanced fitness tests, so I've uploaded all 14 versions I've coded to date in one fell swoop to one central repository on DropBox; they are listed below:

NormalDistributionCDFSP.sql
NormalDistributionCDFSupplyMeanAndStDevSP.sql
NormalDistributionCDFSupplyMeanStDevAndRangeSP.sql
NormalDistributionCDFSupplyTableParameterSP.sql
NormalDistributionInverseCDFFunction.sql
NormalDistributionPDFAndCDFSupplyMeanStDevAndRangeSP.sql
NormalDistributionPDFSP.sql
NormalDistributionPDFSupplyMeanAndStDevSP.sql
NormalDistributionPDFSupplyMeanStDevAndRangeSP.sql
NormalDistributionRankitApproximationSP.sql
NormalDistributionSingleCDFFunction.sql
RankitApproximationFunction.sql
RankitApproximationSP.sql
RankitApproximationSupplyCountSP.sql
SimpleFloatValueTableParameter.sql

…………Keep in mind that, as usual, I’ve only done very basic testing on these stored procedures and functions, so they’ll probably require some troubleshooting before putting them into a production environment; consider them an example of how a professional solution might be engineered, not as a finished product. I did some validation of the procedures against various CDF and Inverse CDF lookup tables and calculators I found on the Web, but only for a handful of values.[11] The .sql file names are pretty much self-explanatory: for example, NormalDistributionPDFSupplyMeanAndStDevSP returns the PDF function for the normal distribution if you supply the mean and standard deviation, whereas the NormalDistributionSingleCDFFunction does just what it says by returning one value out of a set of CDF results. A few take table variables as inputs, so I’ve included the SimpleFloatValueTableParameter I defined to supply them. I’ve followed my usual coding style by appending SP and Function to the ends of the names to denote what type of object they are. The NormalDistributionRankitApproximationSP, RankitApproximationSP and RankitApproximationSupplyCountSP procedures use the aforementioned approximation from R, while my implementation of Acklam’s approximation can be found in the NormalDistributionInverseCDFFunction.sql file. Some of the objects are dependent on the others, like the RankitApproximationFunction, which utilizes the NormalDistributionInverseCDFFunction.
…………Some of the other procedures will be of use later in this tutorial series, but in this week’s installment, we’ll be feeding the output from DataMiningProjects.Distributions.NormalDistributionSingleCDFFunction listed above into a couple of SSRS line charts. As I pointed out in three previous articles from the tail end of my last tutorial series, there are plenty of better explanations of how to write reports and do other basic tasks in RS, so I won’t clutter this post with those extraneous details. Basically, the sample procedure below derives the CDF values for the horizontal axis and another set of values for the vertical axis called the Empirical Distribution Function (EDF), which is just a fancy way of saying the values actually found in the dataset. Anyone familiar with the style of sample code I’ve posted on this blog can tell that we’re just using dynamic SQL to calculate distinct counts, with the difficult computations hidden inside the CDF function; I reuse most of the same parameters, intermediate variable declarations and other code seen in past articles, like the SELECT @SQLString for debugging the procedure.

Figure 1: Sample T-SQL to Build a Probability-Probability Plot
CREATE PROCEDURE [GoodnessOfFit].[PPPlot]
@Database1 as nvarchar(128) = NULL, @Schema1 as nvarchar(128), @Table1 as nvarchar(128), @Column1 AS nvarchar(128)
AS
DECLARE @SchemaAndTable1 nvarchar(400), @SQLString nvarchar(max)
SET @SchemaAndTable1 = @Database1 + '.' + @Schema1 + '.' + @Table1
SET @SQLString = 'DECLARE @Mean as float,
@StDev as float,
@Count bigint

SELECT @Count = Count(CAST(' + @Column1 + ' as float)), @Mean = Avg(CAST(' + @Column1 + ' as float)), @StDev = StDev(CAST(' + @Column1 + ' as float))
FROM ' + @SchemaAndTable1 + '
WHERE ' + @Column1 + ' IS NOT NULL

DECLARE @EDFTable table
(ID bigint IDENTITY (1,1),
Value float,
ValueCount bigint,
EDFValue float,
CDFValue decimal(38,37),
EDFCDFDifference decimal(38,37))

INSERT INTO @EDFTable
(Value, ValueCount, EDFValue)
SELECT Value, ValueCount, CAST(SUM(ValueCount) OVER (ORDER BY Value ASC) as float) / @Count AS EDFValue
       FROM (SELECT DISTINCT ' + @Column1 + ' AS Value, Count(' + @Column1 + ') OVER (PARTITION BY '
+ @Column1 + ' ORDER BY ' + @Column1 + ') AS ValueCount
              FROM ' + @SchemaAndTable1 + '
             WHERE ' + @Column1 + ' IS NOT NULL) AS T1

UPDATE T1
SET CDFValue = T3.CDFValue, EDFCDFDifference = EDFValue - T3.CDFValue
FROM @EDFTable AS T1
       INNER JOIN (SELECT DistinctValue, DataMiningProjects.Distributions.NormalDistributionSingleCDFFunction(DistinctValue, @Mean, @StDev) AS CDFValue
       FROM (SELECT DISTINCT Value AS DistinctValue
       FROM @EDFTable) AS T2) AS T3
       ON T1.Value = T3.DistinctValue

SELECT ID, ROW_NUMBER() OVER (ORDER BY ID) AS RN, Value, ValueCount, EDFValue, CDFValue, EDFCDFDifference
FROM @EDFTable'

--SELECT @SQLString -- uncomment this to debug dynamic SQL errors

DECLARE @ResultTable table
(PrimaryKey sql_variant,
RN bigint,
Value float,
ValueCount bigint,
EDF float,
CDF float,
EDFCDFDifference float
)

INSERT INTO @ResultTable
EXEC (@SQLString)

SELECT PrimaryKey, RN, Value, ValueCount, EDF, CDF, EDFCDFDifference
FROM @ResultTable
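…………One optional extra, which is only a sketch and not part of the procedure as posted: since the @ResultTable variable above already stores an EDFCDFDifference column, a single summary number can be derived from it with one more statement appended to the end of the procedure. The largest absolute divergence between the EDF and CDF is essentially the quantity that the Kolmogorov-Smirnov Test formalizes later in this series; keep in mind, though, the Reporting Services restriction on multiple resultsets discussed at the end of this article, so in practice it might be better returned as an extra windowed column on the existing SELECT rather than as a second resultset.

-- appended after the final SELECT of the procedure above, where @ResultTable is still in scope
SELECT MAX(ABS(EDFCDFDifference)) AS MaxAbsEDFCDFDifference
FROM @ResultTable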

…………If the distribution being tested by the CDF is a good match, then the coordinates ought to come as close as possible to an imaginary center line cutting across from (0,0) to (1,1), which are the boundaries of any EDF or CDF calculation. That's obviously not the case in the first plot in Figure 2, where the coordinates are shifted far to the left and top despite the fact that the horizontal axis is skewed, with most of the values lopped off. The other three all have standard 0.1 intervals, including the second plot, which seems to be a good match. This is not surprising, given that I've already performed much more sophisticated goodness-of-fit tests on this data, which represents the second float column in the Higgs Boson Dataset I downloaded from the University of California at Irvine's Machine Learning Repository ages ago for practice data on this blog. The abnormal plot above it comes from the first float column in the same dataset, which routinely fails tests for the Gaussian/normal distribution. Note how thick the lines are in both: this is because there are 11 million rows in the practice dataset, with 5,001 distinct values for the second column alone. Most of the tests I'll survey in this series perform well in the database engine, but trying to depict that many values in an SSRS report can obviously lead to congestion in the user interface. The first plot was particularly slow in loading on my development machine. The third plot loaded quickly because it came from the Duchennes muscular dystrophy dataset[12] I've also been using for demonstration purposes, which has a mere 209 rows. The Lactate Dehydrogenase enzyme data embodied in the column I plugged into my procedure is probably not normally distributed, given how erratic it is at the tails and bowed at the center. The fourth plot comes from a time dataset that may be Gaussian despite its jagged appearance, which is caused by the discrete 15-minute intervals it tracks. It is in situations like this that knowing your data is an immense help in successful interpretation, i.e. the end goal of any data mining endeavor. In many other contexts, serrated shapes are often an indicator of abnormality; in this instance, it is dictated by the fixed width of the data type intervals chosen.

Figure 2: Four Sample Probability-Probability Plots Derived from T-SQL
Higgs Column 1 PP Plot

 

Higgs Column 2 PP Plot

Jagged Plot

Lactate Dehyrogenase PP Plot

…………It should be fairly obvious just from glancing at the results that P-P Plots can serve as outlier detection methods in and of themselves; as the National Institute for Standards and Technology's Engineering Statistics Handbook (one of my favorite online statistical references) points out, "In addition to checking the normality assumption, the lower and upper tails of the normal probability plot can be a useful graphical technique for identifying potential outliers. In particular, the plot can help determine whether we need to check for a single outlier or whether we need to check for multiple outliers."[13]  Nevertheless, I omitted them from my last tutorial series because they're simply too crude to be effective in this capacity. If we were going to spot aberrant data points by eye in this manner, we might be better off comparing histograms like the ones I introduced in Outlier Detection with SQL Server Part 6.1: Visual Outlier Detection with Reporting Services with the PDFs of the distributions we want to compare. Even then, we still run into the same chicken-and-egg problem that we encountered throughout the series on outliers: without goodness-of-fit testing, we can't determine what the underlying distribution should be and therefore can't tell if any records are outliers. If we force these fitness tests to do double-duty, we end up sitting between two stools, as the old German proverb says, because then we can't be sure of either the distribution or the aberrance of the underlying data points. Moreover, like most other outlier methods, these plots don't provide any information whatsoever on why a record is aberrant. Furthermore, some of the approximations the underlying functions use also intrinsically discount outliers, as Acklam's does.[14] In the case of P-P Plots and Q-Q Plots, we're more often than not better off using them in their original capacity as fitness tests. No harm is done if we spot an aberrant data point in the scatter plots and flag it for further investigation, but scaling up this approach to full-fledged automatic outlier detection would become problematic once we get into the thousands or millions of data points.
…………This size issue also places a built-in limitation on the usefulness of these visual methods for fitness testing purposes. If all of the data points from a single table are crammed into one thick black line that obscures all of the underlying detail, then we can still draw a reasonable conclusion that it fits the distribution we're comparing it against. That approach is no longer tenable once we're talking about one thousand out of a million records being off that line, which forces us to make a thousand judgment calls. Once we try to scale up these visual methods, we run into many of the same problems we encountered with the visual outlier detection methods surveyed in the last series, such as the use of binning and banding – not to mention the annoying restriction in Reporting Services against consuming more than a single resultset from each stored procedure, which forces us to discard any summary data that really ought to be calculated in T-SQL, MDX or DAX rather than in RS. These methods also have some specific inherent limitations, such as the inapplicability of P-P plots when the two distributions don't have roughly similar center points (as measured by means, medians, modes, etc.).[15] At a much broader level, these tests don't provide much information on how well a dataset fits a particular distribution, because that would involve half-conscious visual assessments of how much each outlier counts for or against the final verdict. For example, how are we to weigh seven outliers that are two quantiles off the mark against three that are half a quantile away? These tests are conveniences that allow users to make spot assessments of the fitness of distributions at a glance, with the minimum of interpretation and computational costs, but they simply don't have much muscle. That is the unavoidable drawback of simplistic tests of this type. They amount to brute force, unconscious assessments that "if nothing looks out of place, the fitness of the distribution is not an issue we need to be worried about" – i.e. the flip side of visual outlier detection methods, which boil down to "if it looks out of place, we'll look at it more closely." Once the need arises for more definite confirmation of a dataset's fit to a particular distribution, we have to resort to tests of greater sophistication, which invariably churn out numeric results rather than eye candy. If I don't take a quick detour into Q-Q Plots next time around, then in the next installment we'll climb another rung up this ladder of sophistication as we discuss skewness and kurtosis, which can provide greater detail about how closely a dataset fits its target distribution.

 

[1] See the Wikipedia articles “P-P Plot” and “Normal Probability Plot” respectively at http://en.wikipedia.org/wiki/P%E2%80%93P_plot  and http://en.wikipedia.org/wiki/Normal_probability_plot for mention of these conundrums.

[2] As pointed out in the last article, for the sake of convenience I'll be using the term "probability distribution function" (PDF) to denote probability density functions and the equivalent concepts for distributions on discrete scales, probability mass functions (PMFs). This is sometimes done in the literature, but not often.

[3] See the Wikipedia article “Quantile Function” at http://en.wikipedia.org/wiki/Quantile_function for the terminology.

[4] See this comment at the Wikipedia page “Order Statistic” at http://en.wikipedia.org/wiki/Order_statistic :”When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.”

[5] See the CrossValidated thread “Cumulative Distribution Function: What Does t in \int\exp(-t^2)dt stand for?” at http://stats.stackexchange.com/questions/111868/cumulative-distribution-function-what-does-t-in-int-exp-t2dt-stand-for

[6] Another source I found useful was Holmes, Susan, 1998, “Order Statistics 10/30,” published Dec. 7, 1998 at the Stanford University web address http://statweb.stanford.edu/~susan/courses/s116/node79.html

[7] pp. 59-61, Rohlf, F. James and Sokal, Robert R., 1995, Statistical Tables. Freeman: New York. Retrieved from the Google Books web address http://books.google.com/books?id=1ImWLlMxEzoC&pg=PA59&lpg=PA59&dq=rankits+example&source=bl&ots=fWnT_Gfhvy&sig=bXSLnrtWqlbmT07FXVnVKd5wqbY&hl=en&sa=X&ei=gNJFVJCmNIf2OqKNgMgF&ved=0CDkQ6AEwAg#v=onepage&q=rankits%20example&f=false

[8] Some sources I used when trying to implement the factorial formula include p. 410, Teichroew, D., 1956, “Tables of Expected Values of Order Statistics and Products of Order Statistics for Samples of Size Twenty and Less from the Normal Distribution,” pp. 410-426 in The Annals of Mathematical Statistics, Vol. 27, No. 2. Available at the Project Euclid web address http://projecteuclid.org/euclid.aoms/1177728266 as well as Weisstein, Eric W., 2014, “Order Statistic.” published t the Wolfram MathWorld web address http://mathworld.wolfram.com/OrderStatistic.html

[9] See Acklam, Peter John, 2010, “An Algorithm for Computing the Inverse Normal Cumulative Distribution Function,” published Jan. 21, 2010, at the Peter’s Page website. Available online at http://home.online.no/~pjacklam/notes/invnorm/ I made some corrections to my original implementation after consulting John Herrero’s VB example at http://home.online.no/~pjacklam/notes/invnorm/impl/herrero/inversecdf.txt and discovering that I had left off several minus signs from the constants; these might have been clipped off when I imported them.

[10] See the Wikipedia article “Probability Plot Correlation Coefficient Plot” at http://en.wikipedia.org/wiki/Probability_plot_correlation_coefficient_plot

[11] I checked the inverse CDF values at p. 15, University of Glasgow School of Mathematics & Statistics, 2012, “Statistical Tables,” published June 21, 2012 at the University of Glasgow School of Mathematics & Statistics web address http://www.stats.gla.ac.uk/~levers/software/tables/

[12] I downloaded this long ago from Vanderbilt University’s Department of Biostatistics.

[13] See National Institute for Standards and Technology, 2014, “1.3.5.17 Detection of Outliers,” published in the online edition of the Engineering Statistics Handbook. Available online at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm . Also see “1.3.3.26.10. Scatter Plot: Outlier” at http://www.itl.nist.gov/div898/handbook/eda/section3/scattera.htm

[14] See Acklam, Peter John, 2010.

[15] See the aforementioned Wikipedia article “P-P Plot” at http://en.wikipedia.org/wiki/P%E2%80%93P_plot
