Outlier Detection with SQL Server, part 3.6: Chauvenet’s Criterion

By Steve Bolton

…………This is the last of six articles I’ve segregated in the middle of my mistutorial series on identifying outlying values with SQL Server, because the methods they cover turned out to be difficult to apply to the typical use cases DBAs encounter. After this detour we’ll get back on track with outlier detection methods like Interquartile Range that are likely to be as useful as the ones the series started with, such as Benford’s Law and Z-Scores, but I’ll first give a brief explanation of Chauvenet’s Criterion for the sake of completeness and the offhand chance that it might prove useful in the right circumstances. Those circumstances are normally the ones suitable for statistical hypothesis testing, in which researchers attempt to prove narrow, specific points of evidence using relatively small datasets – not exploratory data mining or data quality examinations on datasets of thousands or even billions of rows, as in a typical relational table. These tests were designed with different use cases in mind, so it is not surprising that they come with some common limitations that make them difficult to apply to big tables. Among these is the necessity of prior goodness-of-fit testing to ensure that the data follows a Gaussian or “normal” distribution, i.e. the bell curve, without which the outlier tests are invalid. Furthermore, the lookup tables that many of these tests require for comparisons are plentiful on the Internet and in old texts, but ones without gaps, or that extend beyond a few hundred records, are difficult to find; worse still, the formulas for calculating the missing values are often performance hogs, or require precisions and scales that choke T-SQL code with arithmetic overflow errors. Drawbacks like these also restrict the usefulness of Chauvenet’s Criterion, which was among the first outlier detection methods ever developed.
Naval Academy mathematician William Chauvenet (1820-1870) formulated it in the Civil War era, but recognized from the beginning that there were already more trustworthy means available, like Peirce’s Criterion. Perhaps its crudest limitation is that it calls for recursive reexamination of the data after automatic deletion of data points without further investigation, which as I have discussed in prior articles, is unwise and sometimes even unethical. Thankfully, we can apply to Chauvenet’s formula the same type of interpretation-hack that made last week’s Modified Thompson Tau test more useful and valid; the criterion is still apparently in common use today despite all of these well-known issues.
…………Repeatedly deleting records until no more outliers remain to test is probably not going to fly with either users or IT managers in a SQL Server environment, but as we saw last week, it is possible to simply flag records as potential outliers and still recursively recalculate the underlying aggregates that the formula is based on, as if they had been deleted. It is easier to have our cake and eat it too thanks to new T-SQL windowing clauses like ROWS UNBOUNDED PRECEDING, which make the code for this week’s stored procedure much shorter and easier to follow. The T-SQL in Figure 2 closely resembles that of the Modified Thompson Tau procedure for another reason: the Chauvenet Criterion is also a hybrid method that marries some of the logic of Z-Scores with that of hypothesis testing. The key difference is that we need to compare the absolute deviation against the standard normal distribution rather than Student’s T-distribution; we’re really just substituting one statistical part for another in order to address different use cases, just as we might swap out a computer component or a part in a car engine. That substitution requires a different lookup table than the ones we’ve used in recent articles, but we only need one of them, since we only have to input the absolute deviation rather than the degrees of freedom and an alpha value. That in turn means we can use a single join rather than a function call, further simplifying the procedure. The main problem I encountered when implementing this is that it is impossible to find complete lookup tables for the standard normal distribution, which typically accept just one decimal place of precision, even though the Z-Scores fed into them can be calculated to far higher precision.
Part of the problem is that they’re continuous values, but as I’ve found out the hard way, it is surprisingly difficult to calculate them to higher precision with the original cumulative distribution function (CDF). Until I can come up with a more precise approximation for high-precision values, the clumsy lookup table defined in Figure 1 will have to do. I designed it to host the table cited at the Wikipedia page “68–95–99.7 Rule,”[1] which includes the probabilities that values will occur within one to seven standard deviations, at intervals of 0.5. Once I overcome my difficulties with CDFs and can get more accurate measures, it will be possible to replace the clumsy BETWEEN clause and CASE in the procedure that crudely peg data points to these wide limits. Once the probability value has been retrieved, we only need to multiply it by the number of data points and flag the record as an outlier if the result is less than 0.5. The PopulationOutsideRange column that the procedure in Figure 2 joins to is a computed column (renamed Probability in the stored procedure), while the RN ROW_NUMBER value acts as a running count.
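Until a better answer to the CDF problem turns up, one possible stopgap – a sketch of mine, not part of this week’s procedure, with a hypothetical function name and schema – is the classic polynomial approximation to the standard normal CDF from Abramowitz and Stegun’s handbook (formula 26.2.17), which is accurate to within roughly 7.5 × 10-8, i.e. far finer than a lookup table pegged to intervals of 0.5 standard deviations:

```sql
-- A hedged sketch, not the article's actual code: the Abramowitz & Stegun 26.2.17
-- polynomial approximation to the standard normal CDF, good to about 7.5 * 10^-8.
-- The function name and schema are hypothetical.
CREATE FUNCTION Calculations.StandardNormalCDFFunction (@ZScore float)
RETURNS float
AS
BEGIN
       DECLARE @AbsZ float = Abs(@ZScore)
       DECLARE @T float = 1 / (1 + 0.2316419 * @AbsZ)
       -- the standard normal density at @AbsZ
       DECLARE @Density float = Exp(-0.5 * @AbsZ * @AbsZ) / Sqrt(2 * Pi())
       -- fifth-degree polynomial in @T, evaluated in Horner form
       DECLARE @CDF float = 1 - @Density * (((((1.330274429 * @T - 1.821255978) * @T
              + 1.781477937) * @T - 0.356563782) * @T + 0.319381530) * @T)
       RETURN CASE WHEN @ZScore >= 0 THEN @CDF ELSE 1 - @CDF END
END
```

The two-sided probability of falling outside k standard deviations would then be 2 * (1 - Calculations.StandardNormalCDFFunction(k)), which could stand in for the PopulationOutsideRange lookup; for k = 2 that works out to a value close to 0.0455, in keeping with the familiar 95 percent figure.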

Figure 1: Code for the Standard Normal Deviation Lookup Table
CREATE TABLE [Calculations].[StandardNormalDeviationTable](
       [ID] [bigint] IDENTITY(1,1) NOT NULL,
       [StandardDeviations] [decimal](2, 1) NULL,
       [PopulationInRange] [decimal](16, 15) NULL,
       [PopulationOutsideRange] AS ((1) - [PopulationInRange]),
       [ExpectedFrequency] [bigint] NULL,
       [ApproximateFrequencyForDailyEvent] [nvarchar](400) NULL,
 CONSTRAINT [PK_StandardNormalDeviationTable] PRIMARY KEY CLUSTERED ([ID] ASC)
)
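The table can then be populated from the figures cited in the Wikipedia article. The values below are my own transcription, rounded to nine decimal places rather than the full precision the article lists, and only the rows for one through four standard deviations are shown:

```sql
-- Populating the lookup table from the 68-95-99.7 rule figures; the probabilities
-- here are rounded to nine decimal places, and the rows for 4.5 through 7.0
-- standard deviations are omitted for brevity.
INSERT INTO [Calculations].[StandardNormalDeviationTable]
       (StandardDeviations, PopulationInRange)
VALUES (1.0, 0.682689492),
       (1.5, 0.866385597),
       (2.0, 0.954499736),
       (2.5, 0.987580669),
       (3.0, 0.997300204),
       (3.5, 0.999534742),
       (4.0, 0.999936658)
```

The PopulationOutsideRange computed column derives the complementary probabilities automatically, so only PopulationInRange needs to be supplied.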

Figure 2: Code for the Chauvenet Criterion Procedure
CREATE PROCEDURE [Calculations].[ChauvenetCriterionSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @DecimalPrecision AS nvarchar(50)
AS
DECLARE @SchemaAndTableName nvarchar(400),  @SQLString nvarchar(max)
SET @DatabaseName = @DatabaseName + '.'
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName

SET @SQLString = 'SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, AbsoluteDeviation, Probability, ''IsOutlier''
= CASE WHEN (RN * Probability) < 0.5 THEN 1 ELSE 0 END
       FROM (SELECT T1.' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, AbsoluteDeviation, ''Probability'' = CASE WHEN PopulationOutsideRange IS NOT NULL THEN PopulationOutsideRange
WHEN PopulationOutsideRange IS NULL AND AbsoluteDeviation < 1 THEN 1 ELSE 0 END
                     FROM   (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', CAST(ROW_NUMBER() OVER (ORDER BY ' + @ColumnName + ' ASC) AS bigint) AS RN,
                     Abs(' + @ColumnName + ' - Avg(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))) OVER (ORDER BY ' + @ColumnName + ' ASC ROWS UNBOUNDED PRECEDING)) /
                                 NullIf(StDev(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))) OVER (ORDER BY ' + @ColumnName + ' ASC ROWS UNBOUNDED PRECEDING), 0) AS AbsoluteDeviation
                           FROM ' + @SchemaAndTableName + '
                           WHERE ' + @ColumnName + ' IS NOT NULL) AS T1
                           LEFT JOIN Calculations.StandardNormalDeviationTable AS T2
                           ON T1.AbsoluteDeviation BETWEEN T2.StandardDeviations - 0.25 AND T2.StandardDeviations + 0.25
                     WHERE AbsoluteDeviation IS NOT NULL) AS T3
ORDER BY IsOutlier DESC, AbsoluteDeviation DESC, Probability DESC'

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………If you’ve been following this series, most of the rest is self-explanatory: the first few parameters allow users to apply the test to any column in any database for which they have permissions, while the @DecimalPrecision parameter allows them to adjust the precision and scale to avoid arithmetic overflows in the calculations. The rest is just the usual dynamic SQL, with a couple of nested subqueries to allow calculations like the Z-Score to be bubbled up and handled at higher levels. As usual, I’ve omitted any code to handle spaces in object names (which I never allow when I can get away with it) or SQL injection protections.

Figure 3: Results for the Chauvenet Criterion Procedure
EXEC [Calculations].[ChauvenetCriterionSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Health',
              @TableName = N'DuchennesTable',
              @ColumnName = N'Hemopexin',
              @PrimaryKeyName = N'ID',
              @DecimalPrecision = N'5,1'


…………Chauvenet’s Criterion turned out to be much more conservative than I expected it to be, at least in terms of flagging outliers in columns without a lot of repeating values. Only a single value was identified as an outlier in the Hemopexin column of a 209-row dataset on the Duchennes form of muscular dystrophy, which I downloaded from Vanderbilt University’s Department of Biostatistics and have been using for practice throughout this series. On another practice dataset with 1,600 rows it found only three, far fewer than any of the other procedures tested to date. I was startled at how well the procedure performed against the first float column in the Higgs Boson dataset, which I downloaded from the University of California at Irvine’s Machine Learning Repository and turned into a nearly 6-gigabyte table. Despite the fact that the logic of the two procedures is quite similar, Chauvenet’s test took only 3 minutes and 15 seconds to run on my poor beat-up six-core imitation of a workstation, compared to an hour and fifteen minutes for the Modified Thompson Tau test procedure. The execution plan in last week’s tutorial was too small to be worth posting, since it consisted mostly of a single Sort operation that sucked up 95 percent of the computational cost, whereas the Chauvenet execution plan was too wide to fit here and included several other operators like Compute Scalar, Nested Loops and Parallelism (Gather Streams). It also included a single Sort, but one that accounted for only 38 percent of the expense.
…………It may perform surprisingly well and have its uses on columns with few repeating values when a conservative identification threshold is called for, but this century-and-a-half-old test has many drawbacks that should not be understated. The requirement of a Gaussian distribution and the difficulty of getting accurate probability values for datasets of the size DBAs work with are only the tip of the iceberg. The Central Limit Theorem on which it is mathematically based is much more trustworthy than other probabilistic methods extant today, but it is a fallacy to believe that probabilities represent guarantees or have any effect on causation. I’ve barely touched on this issue yet, but the aforementioned Wikipedia article on the 68–95–99.7 Rule put it a lot better than I can: “…it is important to be aware of the fact that there is actually nothing in the process of drawing with replacement that specifies the order in which the unlikely events should occur, merely their relative frequency, and one must take care when reasoning from sequential draws. It is a corollary of the gambler’s fallacy to suggest that just because a rare event has been observed, that rare event was not rare. It is the observation of a multitude of purportedly rare events that undermines the hypothesis that they are actually rare.”[2] The remedy for this uncertainty is the same as for the more serious issue of deletion: further investigation, not knee-jerk deletion of records. As oceanographers David M. Glover, Scott Christopher Doney and William J. Jenkins put it in their 2011 book, Modeling Methods for Marine Science:

                “Now the truly clever researcher might be tempted to perform this rejection iteratively. That is, why not compute a mean and standard deviation, Z-score the data and reject the fliers, then compute an even better mean and standard deviation and do the same thing all over again, rejecting more data. The advice of all the statistical sages and texts is do Chauvenet rejection only once in a given distribution. If the data were normally distributed, and there weren’t many fliers, you’ll probably find that the second iteration will not yield any more rejectable points. If it does, then it suggests that your data may not be normally distributed. The philosophy is that filtering once is a valid thing to do, but iterative filtering may dramatically alter the data distribution in a fundamental way, invalidating the assumptions behind your statistical calculations, and leading to erroneous results. Moreover, you may [be] accused of being a Chauvenet Chauvinist.”[3]

This is professional confirmation of the Catch-22 I’ve always fretted about with the normal distribution: the more outliers that are found, the less likely it is that a Gaussian bell curve is active, in which case most of these hypothesis-testing based outlier detection methods are invalid. Another Catch-22 is operative when we’re recklessly deleting data in a recursive routine like Chauvenet’s Criterion and the Modified Thompson Tau test: the more we delete, the bigger the impact on the dataset will be. If we follow Glover et al.’s suggestion and limit the criterion to a single use, it’s hardly applicable to a SQL Server database where we may need to find tens of thousands of outliers, while looking for data quality issues or doing exploratory data mining. Such a wide scope also calls for degrees of precision that aren’t readily available in regular lookup tables and would probably be quite costly to compute. The criterion may have been better than nothing when Chauvenet wrote his paper back in the Civil War era, but it’s really hard to justify its use, even in many of the hypothesis testing scenarios it was designed for. Nevertheless, academia and research labs across the planet are apparently still staffed by many of those “Chauvenet Chauvinists” today. While researching this article (including reading parts of Chauvenet’s original paper in .pdf format, which I’ve since lost) I ran across many comments like this one from Stephen Ross, a professor of mechanical engineering at the University of New Haven:

                “Peirce’s criterion has been buried in the scientific literature for approximately 150 years. It is virtually unknown today in the scientific community. In its place, Chauvenet’s criterion is commonly used for rational elimination of “outlier” data by government laboratories, (e.g., Environmental Protection Agency, U.S. Army Corps of Engineers, Agency for Toxic Substances and Disease Registry, Institute for Telecommunication Sciences), industry (e.g., Boeing, Sikorsky), foreign laboratories (e.g., Laboratoire National Henri Becquerel, Joint Astronomy Centre), and universities (e.g., research and courses at University of Michigan, Texas A&M, University of California, Vanderbilt, University of Alberta, Ohio State). Methods of elimination of data “outliers” are useful for anyone working in industry or in an educational institution where statistical information concerning product runs or experimental data is of interest. In an engineering, technology or science program, laboratory courses in chemistry, physics and engineering can, and do, find use for rational spurious data elimination. In the BSME program at the University of New Haven, we have used Chauvenet’s criterion in our instrumentation and fluid/thermal laboratory courses for many years. Other universities have similarly used this criterion in their undergraduate laboratories. Typically, students take several measurements of a quantity, say pressure, at one setting (meaning the experimental conditions are maintained at the same level). Assuming the systematic errors are negligible, each measurement will vary slightly due to random errors (e.g., reading instrument values, flow rate may change slightly, etc.). Often, however, one or two datum points seem to fall “far” outside the range of the others obtained. These outliers greatly impact the mean and standard deviation of the measurements.
A data elimination method can be used to obtain a realistic average value of pressure and an “uncertainty” in the true value given by the standard deviation…Chauvenet’s criterion is in common use today for elimination of suspect data.”[4]

…………Ignorance is bliss. I started off this series with some dire warnings about how haphazardly statistics are handled today, especially in fields like medicine where they can do the most damage. The more I’ve learned while writing this series, the less reassured I’ve become. One of the clearest lessons I’ve learned from this exercise is that, if the SQL Server community and the rest of the database server field get in the habit of routinely doing outlier detection (as I suspect they will, in time), they really need to avoid simply copying the means used in other fields. Chauvenet’s Criterion and the other five hypothesis-testing based methods don’t seem to be well-suited to the Big Data buzzword at all, but it doesn’t stop there: in many cases, they’re not even applied correctly in industries where they’re used on a daily basis, such as medical research. So far in this series, only Benford’s Law and Z-Scores appear to fit our use cases well, although I have high hopes for upcoming topics like Interquartile Range, Cook’s distance and Mahalanobis distance, as well as the various visual means that can be implemented in Reporting Services. Next week’s article on Peirce’s Criterion is also likely to be more valuable to DBAs. As Ross points out in an article on that topic, even Chauvenet recommended it in place of his own test: “Chauvenet himself believed that Peirce’s work was more rigorous and could be applied more generally, and in Chauvenet’s words, ‘For the general case….. when there are several unknown quantities and several doubtful observations, the modifications which the rule (meaning his own criterion) requires renders it more troublesome than Peirce’s formula……What I have given may serve the purpose of giving the reader greater confidence in the correctness and value of Peirce’s Criterion.’”
…………What’s good enough for Chauvenet is good enough for me. Why his advice not to use his own test apparently isn’t heeded in academia and private sector research is beyond me. Perhaps it is only a matter of habit, like the completely arbitrary custom of using confidence levels like 95 percent in hypothesis testing. Hopefully it is a custom that DBAs won’t adopt without some thought; perhaps Chauvenet’s Criterion has a place in our tool belts for unusual use cases, but it ought to be a very small place, considering how many more fitting outlier detection methods we have available to us.

[1] See the Wikipedia pages “68–95–99.7 Rule” at http://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule and “Standard Deviation” at http://en.wikipedia.org/wiki/Standard_deviation respectively.

[2] Ibid.

[3] p. 29, Glover, David M.; Jenkins, William J. and Doney, Scott Christopher, 2011, Modeling Methods for Marine Science. Cambridge University Press: New York. I found this reference at the Google Books web address http://books.google.com/books?id=OYAkMs85848C&q=chauvenet#v=snippet&q=chauvenet&f=false

[4] pp. 3-4, Ross, Stephen M. “Peirce’s Criterion for the Elimination of Suspect Experimental Data,” pp. 1-12 in the Journal of Engineering Technology, Fall 2003. Vol. 2, No. 2. http://newton.newhaven.edu/sross/piercescriterion.pdf

Outlier Detection with SQL Server, part 3.5: The Modified Thompson Tau Test

By Steve Bolton

…………Based on what little experience I’ve gained from writing this series on finding outliers in SQL Server databases, I expected the Modified Thompson Tau test to be a clunker. It marries the math underpinning one of the most ubiquitous means of outlier detection, Z-Scores, with the methods taken from the field of statistical hypothesis testing, which I’ve grouped together in the middle of this segment of the series. I feared, however, that it would turn out to be a shotgun wedding that would combine the worst drawbacks of both, at least when applied to the kind of large datasets that DBAs encounter every day. As discussed in previous articles in more depth, hypothesis testing is often performed on datasets of just a few dozen or a few hundred rows, whereas SQL Server tables normally have thousands, if not millions or billions of rows; the former is meant to be applied in proving specific points of evidence, while our main use cases tend to be ferreting out data quality problems and exploratory data mining. I’m dispensing with this subset of outlier detection methods (which amount to a kind of DIY data mining) in this segment because they’re not designed for our main user scenarios – although they can be tweaked somewhat to better fit our needs, as Dixon’s Q-Test was in last week’s installment. The Modified Thompson Tau test shares many of the same limitations as the other methods in the same class, such as the requirement of prior goodness-of-fit testing to ensure that the data follows a Gaussian or “normal” distribution (i.e. the bell curve). In fact, I expected it to be worse than Dixon’s Q-Test, Grubbs’ Test, the GESD and the Tietjen-Moore Test because it involves a recursive elimination of outliers, one at a time, which requires constant recalculations of the underlying averages and standard deviations used in the formulas. 
Not only could this be a performance headache, but this iterative elimination smacks of the kind of logically and ethically dubious handling of outliers I’ve addressed at length in past articles. Any outlier detection workflow must include investigations of their underlying causes, before matching them to an appropriate response; some are the result of bad data collection, some may indicate a non-Gaussian distribution, others may be beneficial depending on the context, such as finding new customer bases in a marketing survey. The Modified Thompson Tau test apparently is misused routinely to simply jettison unwanted data points without further investigation, judging from the frightening number of sources I found on the Internet where professionals did exactly that. That’s not going to fly in a SQL Server database, where the users and IT directors tend to get miffed when a DBA whimsically decides to delete ten thousand records. We can, however, salvage some usefulness out of the test by simply using it to flag potential outliers, rather than deleting them. The underlying stats can be recomputed as if previously detected outliers were eliminated, without actually making any change to the record.
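A toy example may help illustrate the recomputation trick, with a table variable and values of my own invention rather than anything from the actual procedure: when the rows are sorted by the column being tested, aggregates framed with ROWS UNBOUNDED PRECEDING only “see” the values up to and including the current row, so each row’s mean and standard deviation behave as if every more extreme value beyond it had already been deleted.

```sql
-- A toy illustration of recomputing aggregates as if outliers were deleted: with
-- ROWS UNBOUNDED PRECEDING, each row's Avg and StDev are computed over the values
-- up to and including it in the sort order, excluding the more extreme rows after it.
DECLARE @SampleTable TABLE (Value decimal(9,2))
INSERT INTO @SampleTable VALUES (10), (11), (12), (13), (500)

SELECT Value,
       Avg(Value) OVER (ORDER BY Value ASC ROWS UNBOUNDED PRECEDING) AS RunningAvg,
       StDev(Value) OVER (ORDER BY Value ASC ROWS UNBOUNDED PRECEDING) AS RunningStDev
FROM @SampleTable
-- the extreme value 500 inflates only its own row's aggregates, not the earlier rows'
```

The row for 13 here gets an average and standard deviation computed only from 10, 11, 12 and 13, exactly as if the suspect value of 500 had been thrown out.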
…………Flagging outliers in this manner solves the problem of inappropriate responses that are often associated with the test, but it takes the magic of SQL Server windowing functions to address the recursive recomputation of aggregates like the standard deviation and mean. The declaration section of the dynamic SQL in Figure 1 is shorter than in any other procedure I’ve written in this series because we can’t simply take a one-time measurement of these basic stats; the Modified Thompson Tau is the first test we’ve encountered that uses a sliding window of this kind. The formula I retrieved from Wikipedia[1] really isn’t that hard to calculate, nor are the underlying concepts anything new to this series; all we have to do is compute the absolute deviation (which we’ve already seen in the articles on Z-Scores) and use the Calculations.FindCriticalRegionForTDistributionFunction coded for the Grubbs’ Test article to find the critical region. After that, it’s just a simple matter of making a quick mathematical comparison between the two. The difficulty consists in the recursion, which is computationally costly. Furthermore, it is difficult if not impossible to do these calculations in a recursive CTE, thanks to such fun messages as “TOP operator is not allowed in the recursive part of a recursive common table expression,” “Functions with side effects are not allowed in the recursive part of a recursive common table expression” and “GROUP BY, HAVING, or aggregate functions are not allowed in the recursive part of a recursive common table expression.” Prior to the introduction of new windowing function clauses in SQL Server 2012, it might have been necessary to code this with some messy recursive function calls. The code is much shorter and more legible, and the performance is probably much better than any alternative, thanks to the single ROWS UNBOUNDED PRECEDING clause, which really pulled my bacon out of the fire this time.
Whenever the subject comes up, I never miss an opportunity to plug Itzik Ben-Gan’s classic reference Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions,[2] which invariably turns out to be useful in unexpected binds like this. I’ve made it a point to learn as much as I can about the largely untapped potential of these new T-SQL clauses, but apparently I need to redouble my efforts, judging by how indispensable they were in this particular situation.
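The rejection region formula can be sanity-checked by hand for a small sample. The snippet below is a check of my own, not part of the procedure; the critical t-value of 2.306 is the standard two-tailed figure for 8 degrees of freedom at alpha = 0.05 (i.e. assuming n - 2 degrees of freedom, one of the possibilities discussed below), and published Tau tables list roughly 1.798 for a sample of 10.

```sql
-- Hand-checking the Modified Thompson Tau rejection region for a sample of 10 values,
-- using the same formula as the procedure in Figure 1. 2.306 is the two-tailed
-- critical t-value for 8 degrees of freedom at alpha = 0.05; published Tau tables
-- give approximately 1.798 for n = 10.
DECLARE @N float = 10, @CriticalT float = 2.306
SELECT (@CriticalT * (@N - 1)) /
       (Power(@N, 0.5) * Power(@N - 2 + Power(@CriticalT, 2), 0.5)) AS RejectionRegion
```

Matching the output against a published Tau table in this fashion is a cheap way to gain confidence that the nested arithmetic in the dynamic SQL is in the right order.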

Figure 1: Code for the Modified Thompson Tau Procedure

CREATE PROCEDURE [Calculations].[ModifiedThompsonTauTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @DecimalPrecision AS nvarchar(50), @Alpha decimal(38,35) = 0.05
AS
DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @DatabaseName = @DatabaseName + '.'
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName --I'll change this value one time, mainly for legibility purposes

SET @SQLString = 'DECLARE @Alpha decimal(5,4)
SET @Alpha = ' + CAST(@Alpha AS nvarchar(50)) + '
SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', AbsoluteDeviation, RejectionRegion, ''IsOutlier'' = CASE WHEN AbsoluteDeviation > RejectionRegion THEN 1 ELSE 0 END
FROM       (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, AbsoluteDeviation, (RejectionRegionInput * (RN - 1)) / ((Power(RN, 0.5) * Power(RN - 2 + Power(RejectionRegionInput, 2), 0.5))) AS RejectionRegion
       FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, AbsoluteDeviation, DataMiningProjects.Calculations.FindCriticalRegionForTDistributionFunction (RN, 1, @Alpha) AS RejectionRegionInput
                   FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', CAST(ROW_NUMBER() OVER (ORDER BY ' + @ColumnName + ' ASC) AS bigint) AS RN,
                    Abs(' + @ColumnName + ' - Avg(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))) OVER (ORDER BY ' + @ColumnName + ' ASC ROWS UNBOUNDED PRECEDING)) / NullIf(StDev(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))) OVER (ORDER BY ' + @ColumnName + ' ASC ROWS UNBOUNDED PRECEDING), 0) AS AbsoluteDeviation
                          FROM ' + @SchemaAndTableName + '
                          WHERE ' + @ColumnName + ' IS NOT NULL) AS T1
                     WHERE AbsoluteDeviation IS NOT NULL) AS T2) AS T3
ORDER BY IsOutlier DESC, AbsoluteDeviation DESC, RejectionRegion, ' + @ColumnName + ', ' + @PrimaryKeyName

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………To avoid divide-by-zero errors when the standard deviation is zero, I borrowed the NullIf trick from a post by Henrik Staun Poulsen at StackExchange (which probably saved me a lot of time trying to devise an answer of my own, since I was clueless about NullIf).[3] Most of the rest of the code follows the same format I’ve used in previous procedures, with the usual @DecimalPrecision parameter available to avoid arithmetic overflows, the usual set of parameters and implementation code allowing users to select a column in any database for which they have permissions, etc.  The messiest part is the use of subqueries to bubble up previous computations to higher levels, which I’m considering recoding as a series of sequential CTEs, if that would be easier for people to follow. As always, I’m using a Calculations schema that users can change and haven’t included code for accommodating spaces in object names or to prevent SQL injection. My usual disclaimer is still operative: I’m posting this in order to learn, not because I know what I’m talking about, so test my code before putting it into production. This is an introduction to the topic with suggestions on how to code it, not the last word. I’m not 100 percent sure I’ve got the order of operations correct in the second subquery or that I’m not supposed to use the @Alpha value squared instead of the rejection region; it’s likewise possible I ought to be feeding the count minus one to the degrees of freedom parameter of the T-distribution function, rather than the simple count. A T-SQL expert might also be able to suggest numerous ways of improving the performance, given that the query’s still a resource hog even with the ROWS UNBOUNDED clause.
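The NullIf trick is worth seeing in isolation; this is a minimal example of my own, not lifted from the procedure. NULLIF returns NULL when its two arguments are equal, so dividing by NULLIF(@Divisor, 0) yields a NULL quotient instead of raising SQL Server’s divide-by-zero error (number 8134):

```sql
-- NULLIF returns NULL when its two arguments are equal, so this division produces
-- NULL instead of raising error 8134, "Divide by zero error encountered."
DECLARE @Divisor decimal(5,2) = 0
SELECT 1.0 / NULLIF(@Divisor, 0) AS SafeQuotient -- NULL rather than an error
```

In the procedure the guarded divisor is the windowed StDev, which is always zero for the first row of each window and can be zero for longer runs of identical values.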

Figure 2: Results for the Modified Thompson Tau Procedure

EXEC   [Calculations].[ModifiedThompsonTauTestSP]
             @DatabaseName = N'DataMiningProjects',
             @SchemaName = N'Physics',
             @TableName = N'HiggsBosonTable',
             @ColumnName = N'Column1',
             @PrimaryKeyName = N'ID',
             @DecimalPrecision = N'33,29',
             @Alpha = 0.05


…………The results in Figure 2 depict outliers for the Creatine Kinase enzyme, which plays a role in the Duchennes form of muscular dystrophy, the subject of a tiny 209-row dataset I downloaded from Vanderbilt University’s Department of Biostatistics for use as practice data in this series. Whenever performance has been an issue with procedures I’ve posted previously in this series, I’ve stress-tested them against the first float column of the 11-million-row Higgs Boson dataset, which I downloaded from the University of California at Irvine’s Machine Learning Repository and converted to a nearly 6-gigabyte SQL Server table. That test took an hour and fifteen minutes on my poor beat-up six-core development machine, which is probably several orders of magnitude slower than a real database server, yet still sluggish enough to make me worry about the performance in a live environment. On the other hand, this kind of complete examination of an entire column really amounts to a crude data mining operation, of the kind that would normally be run occasionally during off-peak hours. It might thus be sufficient to get the job done as it is, although I’m sure a real T-SQL coder could advise on several ways of getting around the single Sort operation that gobbled up 95 percent of the cost of the execution plan (which I won’t bother to post, because there’s nothing more to the story except that Sort). I have the distinct impression, however, that the ROWS UNBOUNDED PRECEDING method is as close to an optimal means of recursively recalculating the aggregates as we’re going to get.
…………Either way, I feel like I made a unique contribution for the first time in this series, by adapting to SQL Server use cases a statistical test that is often misused, even when applied to its usual scenarios in hypothesis testing. The results in Figure 2 allow DBAs and data miners to make decisions based on a bird’s eye view of the potential outliers in a dataset, without wantonly deleting them in a knee-jerk manner. Like the other five outlier detection methods I’ve segregated in this part of the tutorial series, it is still based on hypothesis testing methods that retard its usability, like the requirement of a Gaussian distribution and the difficulties of finding or calculating T-distribution lookup tables for degrees of freedom far in excess of 200 rows. Before proceeding with outlier identification methods that are more likely to be profitable to SQL Server DBAs, like Interquartile Range, Peirce’s Criterion, Cook’s Distance, Mahalanobis Distance and various visual means that can be displayed in Reporting Services, I’ll finish out this segment with a discussion of one of the oldest means, Chauvenet’s Criterion. Like the others in this subset, its usefulness is severely curtailed by its dependence on a normal distribution and the small size of the available lookup tables. Furthermore, it is also implemented in an inherently recursive manner, which brings with it the same performance and logical validity issues that the Modified Thompson Tau test does. I’ll attempt to code it in the next installment of this series anyway, for the sake of completeness and the possibility that a SQL Server user out there might find a use for it – as well as to gain some further experience in translating stats and math equations into code, while passing on my misadventures in T-SQL to others in the hopes that neither I nor they will repeat them.


Outlier Detection with SQL Server, part 3.4: Dixon’s Q-Test

By Steve Bolton

…………In the last three installments of this amateur series of mistutorials on finding outliers using SQL Server, we delved into a subset of standard detection methods taken from the realm of statistical hypothesis testing. These are generally more difficult to apply to tables of thousands of rows, let alone the billions or even trillions commonly associated with the buzzword “Big Data,” for a number of reasons. First, many of them are invalid if the data doesn’t follow a normal distribution, which requires goodness-of-fit testing that can be expensive on large datasets. Secondly, many of them also depend on comparisons to Student’s T and other distributions in order to define a data point as an outlier, but the lookup tables widely available in texts and on the Internet generally stop after sample sizes of a couple of hundred at best. Calculating these for the much larger datasets that DBAs work with is likely to be computationally costly, especially in the case of last week’s topic, the Tietjen-Moore test. Typically, they are used to give a more rigorous numerical definition of an outlier in small datasets of a few dozen or a few hundred data points, which is at least an improvement over simply spotting them by eye in a scatter plot or some other form of data visualization. Hypothesis testing certainly has valid uses when applied to its proper use case, which is proving a single point of evidence, not ferreting out data quality problems or the kind of exploratory data mining DBAs are likely to do. Even then, there are many pitfalls to watch out for, including common misconceptions about probabilistic reasoning and terms like “confidence” and “statistical significance.” The manner in which alpha values are selected to define confidence intervals is also somewhat capricious.
I am more confident in hanging my hat on measures like Minimum Description Length and Kolmogorov Complexity which are more deeply rooted in reason, but I’ll have to defer discussion of these for a future series tentatively titled Information Measurement with SQL Server, since they’re not applicable to outlier detection. Despite these caveats, I’ll finish this detour into outlier detection methods dependent on hypothesis testing before getting back on track with topics like Interquartile Range and Cook’s Distance that will probably prove to be more useful to DBAs.
…………For the sake of completeness and finishing what I started, I’ll give a quick rundown of Dixon’s Q-Test, which suffers from many of the limitations listed above. It too is invalid when applied to a dataset that does not follow a Gaussian or “normal” distribution, i.e. a bell curve. The test statistic derived from it must also be compared to a particular distribution, which is much more difficult to find reference lookup tables for than the ubiquitous T-distribution. The DDL in Figure 1 was used to hold the critical values I inserted from the only source I could find during a short search, a webpage at the University of Göttingen’s Department of Sedimentology and Environmental Geology.[i] This particular lookup table only goes up to 25 degrees of freedom, so we can only apply it to datasets with that many rows. Yet the limitations do not end there. As discussed a few columns ago, Grubbs’ Test can only be applied to a single row at a time; Dixon’s Q-Test is even more restrictive, in that it can only be applied to a dataset once, to detect a single outlier. As its Wikipedia entry states, “This assumes normal distribution and per Dean and Dixon, and others, this test should be used sparingly and never more than once in a data set.”[ii] If the 25-record limit wasn’t a fatal blow to its usability, then the single-use criterion certainly delivers the coup de grace. Nevertheless, I’ll provide the stored procedure in Figure 2 for anyone who finds a need for it:
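Since readers may want to verify the arithmetic outside of SQL Server, here is a quick cross-check of the statistic in plain Python – a minimal sketch of my own, not part of the stored procedure, using a ten-point sample of the sort commonly used to illustrate this test:

```python
# Minimal sketch of Dixon's Q statistic: the gap between the suspect
# extreme value and its nearest neighbor, divided by the full range.
def dixon_q(values):
    s = sorted(values)
    data_range = s[-1] - s[0]
    gap_low = s[1] - s[0]        # gap at the low extreme
    gap_high = s[-1] - s[-2]     # gap at the high extreme
    return max(gap_low, gap_high) / data_range

# Ten-point illustrative sample; 0.167 is the suspect low value.
sample = [0.189, 0.167, 0.187, 0.183, 0.186, 0.182, 0.181, 0.184, 0.181, 0.177]
print(round(dixon_q(sample), 3))  # 0.455
```

Whether 0.455 counts as an outlier then depends entirely on the critical value looked up for the sample size and alpha, which is exactly the role the DixonsQTestTable plays in the procedure below.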

Figure 1: DDL for the Dixon’s Q-Test Critical Value Table
CREATE TABLE [Calculations].[DixonsQTestTable](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[N] [tinyint] NULL,
[Alpha10] [decimal](4, 3) NULL,
[Alpha05] [decimal](4, 3) NULL,
[Alpha02] [decimal](4, 3) NULL,
[Alpha01] [decimal](4, 3) NULL,
[Alpha005] [decimal](4, 3) NULL
)

Figure 2: Code for the Dixon’s Q-Test Procedure
ALTER PROCEDURE [Calculations].[DixonsQTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @OrderByCode as tinyint = 1, @DecimalPrecision AS nvarchar(50), @Alpha decimal(38,35) = 0.05
AS

SET @DatabaseName = @DatabaseName + '.'
DECLARE @SchemaAndTableName nvarchar(400)
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName

DECLARE @SQLString nvarchar(max)
SET @SQLString = 'DECLARE @Mean decimal(' + @DecimalPrecision + '), @Range decimal(' + @DecimalPrecision + '), @Count decimal(' + @DecimalPrecision + '), @CriticalValue decimal(' + @DecimalPrecision + '), @Alpha decimal(' + @DecimalPrecision + '), @OrderByCode tinyint

SET @OrderByCode = ' + CAST(@OrderByCode AS nvarchar(50)) + '
SET @Alpha = ' + CAST(@Alpha AS nvarchar(50)) + '

SELECT @Range = Max(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + '))) - Min(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + '))), @Count = Count(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + ')))
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL

SELECT @CriticalValue = CASE
      WHEN @Alpha = 0.1 THEN Alpha10
      WHEN @Alpha = 0.05 THEN Alpha05
      WHEN @Alpha = 0.02 THEN Alpha02
      WHEN @Alpha = 0.01 THEN Alpha01
      WHEN @Alpha = 0.005 THEN Alpha005
      END
FROM Calculations.DixonsQTestTable
WHERE N = @Count

SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', Gap, TestStatistic, @CriticalValue AS Critical' + @ColumnName + ', @Alpha AS Alpha, ''IsOutlier'' = CASE WHEN TestStatistic > @CriticalValue THEN 1 WHEN TestStatistic <= @CriticalValue THEN 0 ELSE NULL END
FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', Gap, Gap / @Range AS TestStatistic
       FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', Lead(' + @ColumnName + ', 1, 0) OVER (ORDER BY ' + @ColumnName + ') - ' + @ColumnName + ' AS Gap
      FROM ' + @SchemaAndTableName + '
      WHERE ' + @ColumnName + ' IS NOT NULL) AS T1) AS T2
ORDER BY CASE WHEN @OrderByCode = 1 THEN ' + @PrimaryKeyName + ' END ASC,
CASE WHEN @OrderByCode = 2 THEN ' + @PrimaryKeyName + ' END DESC,
CASE WHEN @OrderByCode = 3 THEN ' + @ColumnName + ' END ASC,
CASE WHEN @OrderByCode = 4 THEN ' + @ColumnName + ' END DESC,
CASE WHEN @OrderByCode = 5 THEN TestStatistic END ASC,
CASE WHEN @OrderByCode = 6 THEN TestStatistic END DESC'

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………There’s not much going on here in comparison to some of the more complex procedures I’ve posted recently. The first five parameters allow users to perform the test on any column in any database for which they have the requisite permissions. The @DecimalPrecision is a parameter I’ve added to many of my procedures to enable users to escape from arithmetic overflows, while the @OrderByCode takes the same values as in other tutorials: the value 1 orders the results by the primary key ascending, 2 by the same descending, 3 and 4 by the column name ascending or descending respectively, and 5 and 6 order them by the TestStatistic in either direction. Most of the procedure consists of implementations of the @OrderByCode, aggregate retrievals and column selection that I’ve merely cut and pasted from past procedures. The logic of the test statistic itself is quite simple: use the T-SQL Lead windowing function to find the next row in sorted order, calculate the gap and then sort by it. The test statistic is merely the gap divided by the range.[iii] Just like with the Tietjen-Moore test in last week’s article, this is probably of greater utility for our use case scenarios than a comparison of a single test statistic to the critical value; for that reason, I’ve returned all 25 rows of a view on the Hemopexin column of the 209-row Duchennes dataset, derived from research on a form of muscular dystrophy that Vanderbilt University’s Department of Biostatistics has made publicly available. These are ordered by the gap and test statistic, which tell us more about the data distribution of the view than a single outlier test would do.
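For anyone who wants to see the gap logic in isolation, the inner query can be mimicked in a few lines of Python; this sketch of my own (the function name and sample values are made up) reproduces the Gap and TestStatistic columns that the dynamic SQL returns, except that it simply omits the last row rather than defaulting the Lead value to 0 as the T-SQL does:

```python
# Rough Python analogue of the procedure's inner query:
# LEAD(col, 1, 0) OVER (ORDER BY col) - col yields the gap to the next
# value in sorted order; dividing by the range gives the test statistic.
def gap_and_test_statistic(values):
    s = sorted(values)
    data_range = s[-1] - s[0]
    rows = []
    for i in range(len(s) - 1):          # last row omitted for simplicity
        gap = s[i + 1] - s[i]
        rows.append((s[i], gap, gap / data_range))
    return rows

for value, gap, stat in gap_and_test_statistic([10, 12, 13, 30]):
    print(value, gap, round(stat, 3))
```

Sorting these rows by the test statistic, as @OrderByCode = 6 does, immediately surfaces the largest gaps, which is the whole point of the adaptation.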

Figure 3: Results for a View on the First 25 Values for the Hemopexin Column
EXEC [Calculations].[DixonsQTestSP] @DatabaseName = N'DataMiningProjects', @SchemaName = N'Practice', @TableName = N'Top25HemopexinView', @ColumnName = N'Hemopexin', @PrimaryKeyName = N'ID', @OrderByCode = 6, @DecimalPrecision = N'12,8', @Alpha = 0.1

…………There are no outliers in this particular view, according to the comparison made against the critical values cited in the University of Göttingen’s source. It did correctly identify an outlier when I tested it against the example data provided in the Wikipedia entry. The code also works quite quickly, as can be expected for a view of just 25 records; that is why I won’t bother to post execution plans or test it against the much larger Higgs Boson dataset as we have done in previous tutorials to look for performance bottlenecks. It may work as designed, but it is probably more useful when the gaps and test statistics for all of the rows are provided, as depicted in Figure 3. Even when adapted in this way, however, it is still of little practical utility for datasets with thousands of records, in large part because we don’t have a means of deriving critical values for that many rows. One strategy might be to define a view on a subset of data as I have done above, quite arbitrarily. On the other hand, taking tiny samples of large datasets, even when properly randomized, doesn’t do us much good if our most common purposes are finding and fixing all rows affected by data quality issues, or doing exploratory data mining. When we’re dealing with datasets of billions of records, our main use case is to devise procedures that will ferret out as many of them as we can find, as efficiently as we can – which means getting them all in one pass if possible, not looking for one per test as we do with Grubbs and Dixon. The latter is even more restrictive, because according to the developers of the test itself, it ought not be applied more than once to any dataset. We’re not just limited to testing one outlier in a single pass, but to a single pass, forever. That is obviously not as useful as familiar tests like Z-Scores, which can be applied as often as we like to an entire database.
In the next installment of this series we’ll discuss the Modified Thompson Tau test, which is more sophisticated in that it marries Z-Scores to some of the hypothesis testing logic underpinning the last few tutorials. I thought it would turn out to be a shotgun wedding, but it turns out that this test can be enhanced by returning all of the values involved, just as Dixon’s Q-Test can be made mildly more useful in the same way. Such adjustments might be called for in the cases of many of the outlier detection methods based on hypothesis testing, since we’re using them for quite different purposes than what they were designed for. The statistical tools introduced in this segment of the series might not be as useful on an everyday basis to DBAs as upcoming topics like Interquartile Range, Peirce’s Criterion, Cook’s Distance or Mahalanobis Distance, but there’s no reason to remove them from our tool belts if they can be adjusted to work with rare use cases that we might eventually encounter.

[i] See the webpage titled “Out?Lier” at the website of the Geowissenschaftliches Zentrum der Universität Göttingen’s Department of Sedimentology and Environmental Geology, which is available at the web address http://www.sediment.uni-goettingen.de/staff/dunkl/software/o_l-help.html

[ii] See the Wikipedia page “Dixon’s Q-Test,” available online at http://en.wikipedia.org/wiki/Dixon%27s_Q_test

[iii] IBID.

Outlier Detection with SQL Server, part 3.3: The Limitations of the Tietjen-Moore Test

By Steve Bolton

…………The Tietjen-Moore test may have the coolest-sounding name of any of the outlier detection methods I’ll be surveying haphazardly in this amateur series of mistutorials, yet it suffers from some debilitating limitations that may render it among the least useful for SQL Server DBAs. It is among a set of six methods that I’ll be dispensing with quickly in the middle of this series because they’re designed to perform hypothesis testing on small datasets of a few dozen or a few hundred records, not the thousands commonly found in the smallest SQL Server databases, let alone the billions or even trillions associated with the Big Data buzzword. By the end of the series, I may be capable of providing a matrix in which the various means of finding aberrant data points are divided by their use cases and the questions users want to ask of the data, followed by the number and types of inputs and outputs and their mathematical properties, as well as the means of calculation in between those two steps if it is relevant to performance or adequate access. Any associated workflow ought to include steps to define outliers with rigorous logic, then assess the underlying causes of any that are found to fit those criteria before proceeding to the final step, matching them to a proper response, whether it be deletion in the case of certain data quality issues or elation if outliers happen to be a positive outcome according to the context. As discussed at length in previous articles, a failure in any one of these areas can be worse than an incorrect calculation, in terms of misleading conclusions or even unethical responses. The set of hypothesis testing methods that Tietjen-Moore belongs to would occupy a quite narrow range of such a matrix, since they have numerous constraints that make them inapplicable in many situations, such as the requirement of a Gaussian or “normal” distribution (i.e. a bell curve) and the requisite goodness-of-fit testing to prove it.
Within this subset, the Tietjen-Moore test is actually more restrictive in its own way than many others – even Grubbs’ Test (which Tietjen-Moore is derived from) and Dixon’s Q-Test, which can only be used to identify a single outlier. It too returns a single result, but which is merely a Boolean yes/no answer to the question, “Does the dataset contain exactly n number of outliers?” The Generalized Extreme Studentized Deviate Test (GESD), the topic of the last article, can also be used to ascertain how many outliers a dataset has, but is more useful because it does not require the user to guess the exact number in advance.
…………The chances of guessing the correct number are much greater in the kinds of relatively small datasets used in hypothesis testing, but obviously quite slim even in a conventional SQL Server database with millions more rows. Furthermore, the performance penalties for repetitively retesting a dataset until the correct number is arrived at are obviously much higher with a larger database, even if we don’t factor in the obvious need to also test it many more times over. Moreover, as mentioned in the last couple of articles, many of the outlier detection methods drawn from the realm of hypothesis testing require looking up critical regions for common data patterns like Student’s T-distribution. Unfortunately, the lookup tables widely available on the Internet normally stop at a few hundred rows and are often riddled with gaps, and calculating the missing values needed for millions of records can be both computationally costly and intellectually draining. The description of the Tietjen-Moore Test in the National Institute for Standards and Technology’s Engineering Statistics Handbook (one of the most readable online sources of information on statistics) indicates that such lookup tables are not readily available in this case: “The critical region for the Tietjen-Moore test is determined by simulation. The simulation is performed by generating a standard normal random sample of size n and computing the Tietjen-Moore test statistic. Typically, 10,000 random samples are used.
The value of the Tietjen-Moore statistic obtained from the data is compared to this reference distribution.”[1] To be representative of a much larger dataset like those found in SQL Server tables, these random samples would have to be much larger than those normally taken in hypothesis testing; taking 10,000 of these larger samples would add to the performance costs, which already include ferreting out the exact number of outliers in advance, plus rigorous goodness-of-fit testing beforehand to discern whether or not the necessary bell curve is operative. That is why I enabled users to supply their own value through the @CriticalValueLowerTail parameter in Figure 1, which implements the test in a T-SQL stored procedure. In Figure 2 I set an arbitrary value of 0.3, and since the value of the test statistic in that case was higher, the hypothesis that the dataset contains exactly 20 outliers was rejected. The test statistic returned by the procedure may still be of use, however, since it can be interpreted thus even without the critical region: “The value of the test statistic is between zero and one. If there are no outliers in the data, the test statistic is close to 1. If there are outliers in the data, the test statistic will be closer to zero.”[2]
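That interpretation is easy to verify with a small cross-check in plain Python; this sketch of the two-tailed statistic is my own and the toy samples are made up, but it shows the statistic collapsing toward zero when the guessed number of outliers is really present and staying well away from zero when it is not:

```python
# Two-tailed Tietjen-Moore statistic E_k: sum of squared deviations after
# dropping the k points farthest from the mean, divided by the sum of
# squared deviations of the full sample.
def tietjen_moore_ek(values, k):
    n = len(values)
    full_mean = sum(values) / n
    # rank points by absolute distance from the full-sample mean
    by_distance = sorted(values, key=lambda v: abs(v - full_mean))
    kept = by_distance[:n - k]          # discard the k most extreme points
    kept_mean = sum(kept) / len(kept)
    numerator = sum((v - kept_mean) ** 2 for v in kept)
    denominator = sum((v - full_mean) ** 2 for v in values)
    return numerator / denominator

print(tietjen_moore_ek([1, 2, 3, 4, 100], k=1))   # near 0: one real outlier
print(tietjen_moore_ek(list(range(1, 11)), k=1))  # well above zero: none
```

The stored procedure below builds exactly this ratio in dynamic SQL, with the one-tailed variant swapping the ABS-based ordering for a plain sort on the column.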

Figure 1: Code for the Tietjen-Moore Test
CREATE PROCEDURE [Calculations].[TietjenMooreTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @DecimalPrecision AS nvarchar(50), @Alpha decimal(5,4) = 0.05, @K bigint, @CriticalValueLowerTail decimal(38,35), @DoTwoTailedTest bit = 1
AS

SET @DatabaseName = @DatabaseName + '.'
DECLARE @SchemaAndTableName nvarchar(400)
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName

DECLARE @SQLString nvarchar(max), @CommonTestCode nvarchar(max), @MeanConditionCode nvarchar(max), @SelectConditionCode nvarchar(max), @TempCTEOrderByCode nvarchar(max)
SET @CommonTestCode = 'DECLARE @Mean decimal(' + @DecimalPrecision + '), @K bigint
SET @K = ' + CAST(@K AS nvarchar(50)) + '
SELECT @Mean = Avg(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + ')))
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL'

SELECT @MeanConditionCode = CASE WHEN @DoTwoTailedTest = 1 THEN 'RN < @K' ELSE 'RN >= @K' END
SELECT @SelectConditionCode = CASE WHEN @DoTwoTailedTest = 1 THEN 'RN > MaxRN - @K' ELSE 'RN < @K' END
SELECT @TempCTEOrderByCode = CASE WHEN @DoTwoTailedTest = 1 THEN 'ORDER BY ABS(' + @ColumnName + ' - @Mean)' ELSE 'ORDER BY ' + @ColumnName END

SET @SQLString = @CommonTestCode + '
WITH TempCTE
(' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, MaxRN)
AS (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', RN, Max(RN) OVER (ORDER BY RN DESC) AS MaxRN
FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', ROW_NUMBER() OVER (' + @TempCTEOrderByCode + ') AS RN
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL) AS T1),
NewMeanCTE (NewMean) AS (SELECT Avg(CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + ')))
FROM TempCTE
WHERE ' + @MeanConditionCode + ' -- computes a different mean based on a subset of the dataset
)

SELECT TestStatistic, ' + CAST(@Alpha AS nvarchar(50)) + ' AS Alpha, ' + CAST(@CriticalValueLowerTail AS nvarchar(50)) + ' AS CriticalValueLowerTail, ''HasOutliers'' = CASE WHEN TestStatistic < ' + CAST(@CriticalValueLowerTail AS nvarchar(50)) + ' THEN 1 WHEN TestStatistic >= ' + CAST(@CriticalValueLowerTail AS nvarchar(50)) + ' THEN 0 ELSE NULL END
FROM (SELECT TOP 1 TopOperand / BottomOperand AS TestStatistic
       FROM (SELECT SUM(CASE WHEN ' + @SelectConditionCode + ' THEN 0 ELSE Power((CAST(' + @ColumnName + ' AS decimal(' + @DecimalPrecision + ')) - (SELECT NewMean FROM NewMeanCTE)), 2) END) OVER (ORDER BY RN) AS TopOperand, SUM(Power((' + @ColumnName + ' - @Mean), 2)) OVER (ORDER BY RN ASC) AS BottomOperand, RN, (SELECT NewMean FROM NewMeanCTE) AS NewMean, ' + @ColumnName + '
      FROM TempCTE AS T1) AS T2
ORDER BY RN DESC) AS T3' -- the last row in RN order holds the complete running sums

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………Readers of other installments in this series should easily spot some commonalities between the code in Figure 1 and that of other procedures I’ve posted in recent weeks. The procedure resides in a schema named Calculations, which you can change; to keep it short and simple, I left out SQL Injection and security code; I don’t allow spaces in my object names, so you’ll have to add such logic yourself; the first five parameters allow you to perform the test on any column in any database for which you have adequate access. The column must of course have a numerical data type, whose upper and lower ranges ought to be kept in mind when setting the usual @DecimalPrecision value to avoid arithmetic overflows. Most of the differences from other recent procedures revolve around the @DoTwoTailedTest parameter, which calls for separate greater-than and less-than comparisons that are implemented in five strings that are appended or prepended to the dynamic SQL. Usually I use just a single @SQLString variable for this purpose, which can be debugged by uncommenting the next to last line. The extra common table expression (CTE) is also needed to handle the differences between the two test variants. As usual, the procedure ends with a few nested subqueries that look more intimidating than they really are; they’re only needed to bubble up the results of computations so they can be operated on further, before returning the complete results to the user. Sometimes this is strictly necessary because the values can’t be calculated in one fell swoop, while in other cases it simply makes the code more legible and easier to debug, rather than cramming a dozen different arithmetic operations and confusing parentheses into one line.

Figure 2: Results for the Tietjen-Moore Procedure

EXEC   [Calculations].[TietjenMooreTestSP]
             @DatabaseName = N'DataMiningProjects',
             @SchemaName = N'Health',
             @TableName = N'DuchennesTable',
             @ColumnName = N'LactateDehydrogenase',
             @PrimaryKeyName = N'ID',
             @DecimalPrecision = N'38,21',
             @Alpha = 0.05,
             @K = 20,
             @CriticalValueLowerTail = 0.3,
             @DoTwoTailedTest = 1

Tietjen-Moore Results

…………After validating the two-tailed version of the procedure against the sample data at the NIST webpage, I ran it against the LactateDehydrogenase column of the DuchennesTable, which is derived from a 209-row dataset on the Duchennes form of muscular dystrophy that is published online by Vanderbilt University’s Department of Biostatistics. There was little incentive to stress test it on the other two datasets I’ll be using for the rest of this tutorial series, one of which has more than 11 million rows. I’ve omitted the client statistics and execution plans I’ve often included in previous articles because the performance hits for running the test are minimal – at least when we omit the 10,000 random samples to establish the critical region and can supply a reasonable guess for the number of outliers to test for through the @K parameter.
…………Omitting the goodness-of-fit tests needed to prove that the dataset follows a normal distribution is not a good idea, however, no matter what the computational expense is. Without that precursor, the Tietjen-Moore test can be misleading, just like many of the other outlier detection methods I’m dispensing with in this segment of the series. Even when we’re certain the data ought to follow a bell curve, the main use scenario for the Tietjen-Moore test for SQL Server DBAs and data miners is likely to be restricted to merely interpreting the test statistic. The critical regions are just too computationally expensive to calculate for such a limited use. The main problem, however, is that correctly divining the correct number of outliers in advance among tens of thousands of records is like looking for a needle in a haystack. In fact, it may be worse, because once we’ve found the needle, all the test tells us is that we have indeed found it. Like many of the other means of outlier identification I’m dispensing with in this segment of the series, Tietjen-Moore is more suited for hypothesis testing, in which researchers attempt to rigorously prove a specific point about a small data sample. As I wade through this series, learning as I go, I’m beginning to realize that the kind of much larger datasets that DBAs and Big Data miners work with these days may call for new methods of outlier identification. Some of the means we’ve already discussed can be used for our scenarios, like Z-Scores and Benford’s Law, but many of the others that statisticians and researchers routinely utilize for hypothesis testing have limited applicability for our use cases, which normally begin with ferreting out data quality problems and exploratory data mining.
…………Once I’ve gotten other methods in the same class, like the Modified Thompson Tau Test and Chauvenet’s Criterion, out of the way I’ll delve into others like Interquartile Range, Peirce’s Criterion, Cook’s Distance and Mahalanobis Distance that are more likely to be of use for our scenarios. The same holds true of the segment I’ll do later on Visual Outlier Detection with Reporting Services, in which I’ll demonstrate how to spot outliers with the naked eye. The hypothesis-test set of outlier detection methods are often quite useful in rigorously confirming that the out-of-place data points spotted using eye candy like histograms and scatter plots are indeed outliers. I have some reservations about hypothesis testing even when applied to its typical use cases, like the arbitrariness of commonly assigned alpha values, the frequency with which such concepts as “confidence” and “statistical significance” are abused and the infrequency with which many researchers apply goodness-of-fit tests. Misunderstandings about randomness and probabilistic methods also abound, even in hard sciences like physics where such math concepts are used every day. I’m hardly well-versed in these matters, but I have the impression that the proofs are not as rigorous and trustworthy as those established by other subdisciplines like computational complexity and measures like Minimum Description Length (MDL) and Kolmogorov Complexity, which I hope to one day be able to code in a future tutorial series, Information Measurement with SQL Server. That series will likely be of practical use to DBAs and data miners, just as the second half of this series on outlier detection is certain to be. For the sake of completeness, however, I’ll first finish off this detour into methods based on hypothesis testing by hurrying through Dixon’s Q-Test, Modified Thompson Tau Test and Chauvenet’s Criterion. After that, we’ll get back on track with outlier detection methods that are more suited to our use cases.

[1] See National Institute for Standards and Technology, 2014, “Tietjen-Moore Test for Outliers,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h2.htm

[2] IBID.

Outlier Detection with SQL Server, part 3.2: GESD

By Steve Bolton

…………In the last edition of this amateur series of self-tutorials on finding outlying values in SQL Server columns, I mentioned that Grubbs’ Test has a number of limitations that sharply constrain its usefulness to DBAs. The Generalized Extreme Studentized Deviate Test (GESD) suffers from some of the same restrictions – most notably the fact that it is only applicable to datasets that have a Gaussian (a.k.a. “normal”) distribution, better known as the bell curve. Nonetheless, the GESD applies to a wider set of use cases because it can be applied to find more than one outlier, unlike the Grubbs’ Test it is derived from. It is “essentially Grubbs test applied sequentially,” with some tweaks applied to its test statistics and critical values to avoid such problems as false negatives with weak stopping criteria.[1] Grubbs’ Test is sometimes performed recursively by deleting aberrant data points one at a time, which raises the whole morass of ethical issues I’ve harped on in previous articles, like the far too common tendencies among researchers to classify outliers arbitrarily, subjectively change those classifications (i.e. “moving the goalposts”) and fail to investigate the causes of aberrant values and match them to appropriate responses. As mentioned previously, outliers can be differentiated by their use cases, numbers and types of inputs, the numbers and types of outputs and their mathematical properties and the means of calculation applied in between the input and output stage, if that would affect performance. A failure in any one of these areas can be worse than an incorrect calculation. An outlier workflow also ought to include rigorous goodness-of-fit testing to make sure that only tests appropriate to the underlying distribution are applied, plus proper classification procedures and criteria, guarding against redefinition (i.e. “changing horses in midstream”) and matching the underlying causes of outliers with appropriate responses.
Not all outliers are the result of faulty data collection and storage, so simply deleting aberrant data points as recursive application of the Grubbs’ Test implies is often inappropriate, or even unethical. There is little reason to use it when a test like GESD is available to ameliorate some of these associated drawbacks and temptations. The GESD test is still quite limited in comparison to other outlier detection methods we’ll survey here, since it is normally only applied to look for small numbers of aberrant data points, but at least we’re not limited to finding a single one as we are with the Grubbs’ Test. The performance costs for both are quite trivial. Another benefit of the GESD is that we can reuse much of the code from last week’s mistutorial, thereby simplifying the development and troubleshooting processes.
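To make the sequential logic concrete, here is a bare-bones Python sketch of my own that computes only the R statistics – the matching critical values require Student’s T quantiles, which the Python standard library does not supply, so they are left out here:

```python
import math

# Sketch of the GESD test statistics: compute the Grubbs statistic
# R = max|x - mean| / stdev, remove the most extreme point, and repeat.
# The critical values lambda_i needed to label each R_i an outlier are
# omitted, since they depend on Student's t-distribution quantiles.
def gesd_statistics(values, number_of_tests):
    data = list(values)
    stats = []
    for _ in range(number_of_tests):
        n = len(data)
        mean = sum(data) / n
        stdev = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
        extreme = max(data, key=lambda v: abs(v - mean))
        stats.append(abs(extreme - mean) / stdev)
        data.remove(extreme)   # sequential removal, unlike a one-shot Grubbs' Test
    return stats

print([round(r, 2) for r in gesd_statistics([1, 2, 3, 100], 2)])  # [1.5, 1.0]
```

Note that the mean and standard deviation are recomputed after each removal, which is precisely what distinguishes the GESD from running Grubbs’ Test once.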
…………We can thank Harvard Biostatistics Prof. Bernard Rosner for this improvement to the Grubbs’ Test, which was published in an issue of the statistical journal Technometrics back in 1983.[2] I was unable to get ahold of this paper, unlike the original publications for Grubbs’ Test, but I was able to find the formulas at the National Institute for Standards and Technology’s Engineering Statistics Handbook, which is one of the most readable online sources of information on statistics for amateurs like myself. I’d wager that most readers will find a pint of Guinness a lot less boring than these underlying equations, so it is worth noting that we can thank a brewer of Irish beer for some of these formulas. The “Studentized” part of the name is derived from the “Student’s T-distribution” it is dependent on, which I always vaguely pictured as being derived from some sort of high school or college math test; the real story is more colorful, in that it was the product of a 1908 Biometrika article by William Sealy Gosset, who chose the pen name “Student” after his employer, the Guinness brewery in Dublin, required him to publish under a pseudonym.[3] Just something more colorful than degrees of freedom and critical values to meditate on the next time you’ve had one too many draughts of Guinness. I’m not sure why the GESD calculations, like those of Grubbs’ Test, are dependent on both the T-distribution and the bell curve, although one of my missions when starting this series was to grasp the underlying mechanics and logic of the formulas, which is something I was deficient in when writing my last series of tutorials, on SQL Server Data Mining (SSDM). I am trying to acquire the skill to translate equations into T-SQL, Multidimensional Expressions (MDX) and Visual Basic (VB) code as quickly as possible, but sometimes still struggle to get the formulas right.
Some caution is in order here, because the code in the figure below only matched the first four results for Rosner’s 54 sample data points in the example calculation at the NIST webpage. I had some issues following the order of operations in the NIST’s equations at first, but it is odd that the results would be off by only a hundredth of a decimal point for the first four records, then diverge after that. This ought to refocus attention on one of the main caveats associated with any of my tutorial series: always check my code before putting it into production, because I’m writing this in hopes of learning, not because I already know what I’m doing; I’m only publishing it in order to sharpen my thinking further and in the hopes that others might gain something from my misadventures.

Figure 1: Code for the GESD Test Procedure
CREATE PROCEDURE [Calculations].[GESDTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @Alpha decimal(38,35) = 0.05, @NumberOfTests bigint = 10
AS

SET @DatabaseName = @DatabaseName + '.'
DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName

SET @SQLString =
'DECLARE @Count bigint

SELECT @Count=Count(' + @ColumnName + ')
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL

SELECT RN, ' + @PrimaryKeyName + ', ' + @ColumnName + ', GESDStatistic, P, V, CriticalRegion, ''IsOutlier'' = CASE WHEN GESDStatistic > CriticalRegion THEN 1 ELSE 0 END
FROM (SELECT RN, ' + @PrimaryKeyName + ', ' + @ColumnName + ', GESDStatistic, P, V, (Calculations.FindCriticalRegionForTDistributionFunction (V, 1, 1-P) * (@Count - RN)) / Power(((@Count - (RN - (1 + (Power(Calculations.FindCriticalRegionForTDistributionFunction (V, 1, 1-P), 2))))) * (@Count - (RN + 1))), 0.5) AS CriticalRegion
       FROM (SELECT RN, ' + @PrimaryKeyName + ', ' + @ColumnName + ', GESDStatistic, 1 - (' + CAST(@Alpha AS nvarchar(50)) + ' / (2 * (@Count - (RN + 1)))) AS P, ((@Count - RN) - 1) AS V
      FROM (SELECT TOP ' + CAST(@NumberOfTests AS nvarchar(50)) + ' ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY ' + @ColumnName + ' DESC) AS RN, ' + @PrimaryKeyName + ', ' + @ColumnName + ',
              (' + @ColumnName + ' - Avg(' + @ColumnName + ') OVER (PARTITION BY 1 ORDER BY ' + @ColumnName + ' DESC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)) / StDev(' + @ColumnName + ') OVER (PARTITION BY 1 ORDER BY ' + @ColumnName + ' DESC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS GESDStatistic
              FROM ' + @SchemaAndTableName + '
              WHERE ' + @ColumnName + ' IS NOT NULL
              ORDER BY ' + @ColumnName + ' DESC) AS T1) AS T2) AS T3'

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………Anyone who has read previous articles in this series should be able to cut through all the gobbledygook in Figure 1 by noticing the commonalities between this T-SQL code and that of previous procedures. Once again, the first five parameters allow users to perform the test on any column in any database for which they have permissions, and the first three lines of the procedure make some adjustments to the string names of those parameters for legibility purposes. There is no @OrderByCode as in previous procedures, and @DecimalPrecision is likewise not needed. Instead, we must supply values for two new parameters: the @NumberOfTests to perform (which I’ve set to an arbitrary default of ten) and the @Alpha value, which is a core concept in hypothesis testing along with confidence levels, critical values, critical regions, statistical significance and the like. I strongly recommend Will G. Hopkins’ website A New View of Statistics for anyone interested in a readable introduction to these topics, which really aren’t as hard to grasp as they seem – as long as someone explains them in plain English. As usual, the procedure is created in a schema called Calculations, which you can change to your liking; uncommenting the next-to-last line allows you to debug the dynamic SQL; and you’ll have to add your own code to accommodate spaces in object names, which I don’t allow, or to handle SQL injection, which I haven’t included in order to keep my code short and to the point. This procedure is much shorter than last week’s because I’m reusing the code I already published for the Calculations.FindCriticalRegionForTDistributionFunction, which performs the comparisons against the T-distribution to see if a particular value is an outlier. The code is also much shorter because there’s only one version of the GESD, whereas Grubbs’ Test has two-tailed, lower one-tailed and upper one-tailed versions.
As is customary, I retrieve the global statistics just once at the beginning of the dynamic SQL, but in this case all we need is a simple record count. As usual, it takes several subqueries to perform all of the calculations required, and these form the bulk of the rest of the dynamic SQL. I’m really quite proud of myself for coding it with the DESC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING clauses rather than a common table expression (CTE), since one of my goals is to make my T-SQL more efficient by implementing the kind of windowing functions Itzik Ben-Gan discusses in his classic reference Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions.[4] It seemed to be the most efficient solution to the problem, although performance really isn’t much of an issue with either GESD or Grubbs’. Figure 3 shows only a minimal change in the execution time when a non-clustered index was added, while the execution plans weren’t noteworthy enough to publish.
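For anyone who wants to sanity-check the windowed T-SQL against the textbook recipe, the iterative GESD statistic can be sketched in a few lines of Python. This follows the NIST description of repeatedly removing the most extreme point and recomputing the mean and standard deviation; it is an illustration of the formula, not a translation of the dynamic SQL above:

```python
import statistics

def gesd_statistics(values, number_of_tests):
    """Return the GESD test statistics R1..Rr: at each step, find the
    point with the largest absolute deviation from the current mean,
    record that deviation in units of the sample standard deviation,
    then remove the point and repeat on the reduced dataset."""
    data = list(values)
    results = []
    for _ in range(number_of_tests):
        mean = statistics.mean(data)
        sd = statistics.stdev(data)  # sample standard deviation
        extreme = max(data, key=lambda x: abs(x - mean))
        results.append(abs(extreme - mean) / sd)
        data.remove(extreme)
    return results
```

Each statistic is then compared against its own critical value; the number of outliers declared is the largest test number for which the statistic exceeds its critical value, which is why the test can flag several records at once.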

Figure 2: Results for the GESD Test Procedure
EXEC   Calculations.GESDTestSP
             @DatabaseName = N'DataMiningProjects',
             @SchemaName = N'Health',
             @TableName = N'DuchennesTable',
             @ColumnName = N'PyruvateKinase',
             @PrimaryKeyName = N'ID',
             @Alpha = 0.05,
             @NumberOfTests = 15

GESD Results

Figure 3: Client Statistics for the GESD Test Procedure

…………If performance had been an issue I would have run the procedure against the first float column in the Higgs Boson Dataset made publicly available by the University of California at Irvine’s Machine Learning Repository, which occupies nearly 6 gigabytes of space in the DataMiningProjects database I’ve created to encompass all three of the practice datasets we’ll be using in this tutorial series. Instead, I ran it against the PyruvateKinase column of a tiny 9-kilobyte dataset on the Duchennes form of muscular dystrophy, published online by Vanderbilt University’s Department of Biostatistics. Since calculations on the same enzyme were also performed in last week’s article, we can easily contrast the results. The stark, obvious difference is that we only received one row of output for Grubbs’, whereas Figure 2 flags the first 13 rows as outliers, based on the fact that their GESD statistics exceed the values in the T-distribution lookup table for the associated critical region, which is in turn determined by the probability values (P) and the degrees of freedom (V, which is derived here from the record count).
…………The EXEC code above it specified a common @Alpha value of 0.05 and a test of 15 values, which is why we received 15 results. This is certainly more useful than the single output we get from Grubbs’ Test, but it is still of limited utility to DBAs who normally work with tables of thousands or even billions of rows. While writing this series, one of the weaknesses I’ve discovered with applying standard outlier detection methods to SQL Server databases is that many of them simply aren’t designed with the Big Data buzzword in mind. Many of them do a fantastic job when used in hypothesis testing, the main use case scenario they were designed for, which is a much narrower and more specific task than the two most common use cases DBAs face, exploratory data mining and checking for data quality problems. Studies of the first kind often involve just a few hundred or even just a few dozen cases, whereas in the latter we may be dealing with millions of records. It is almost impossible, however, to find lookup tables for many of the distributions and calculations associated with these hypothesis tests that extend beyond a few hundred values. For example, I had to hunt all over the Internet to find a table of T-distribution values that went up as far as 200 values, which is still below the 209 rows of the Duchennes table and minuscule in comparison to the 11 million rows of the Physics.HiggsBosonTable. I’ve also learned that attempting to fill the gap by calculating the missing values for these tables yourself can be quite computationally expensive and surprisingly difficult to code, as is the case with the cumulative distribution function (CDF) of the Gaussian bell curve. I now suspect that Grubbs’ and GESD belong to a subset of outlier detection methods that DBAs will rarely find use cases for, along with such related methods as the Tietjen-Moore Test, Dixon’s Q-Test, Chauvenet’s Criterion and the Modified Thompson Tau Test.
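Behind the CriticalRegion expression in Figure 1 is the NIST formula for the GESD critical value, lambda_i = t(n − i) / √((n − i − 1 + t²)(n − i + 1)), where t is the T-distribution critical value at the adjusted probability p = 1 − alpha / (2(n − i + 1)). A quick numeric sketch of that formula in Python, with illustrative inputs rather than values drawn from the Duchennes data:

```python
import math

def gesd_critical_value(n, i, t):
    """NIST formula for the GESD critical value lambda_i, where n is
    the record count, i the test number (1-based) and t the critical
    value of the T-distribution with n - i - 1 degrees of freedom at
    probability p = 1 - alpha / (2 * (n - i + 1))."""
    return (t * (n - i)) / math.sqrt((n - i - 1 + t * t) * (n - i + 1))
```

Note that the +1/−1 terms in the dynamic SQL above differ slightly from this rendering of the formula, which may account for part of the divergence from the NIST sample results mentioned earlier; treat both as candidates for double-checking.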
I’ll dispense with these in the middle of the series since I’ve already written most of the code for them and might as well not let it go to waste, just in case a reader out there discovers a need to include them in their toolbelt. After this quick stretch we’ll get into methods like Interquartile Range and Peirce’s Criterion, which I expect to be much more useful for our scenarios, although perhaps not as much as topics we’ve already covered like Benford’s Law and Z-Scores. I also have high hopes for Cook’s Distance and Mahalanobis Distance, which I’ll tackle after a recap of SSDM Clustering and an interlude into Visual Outlier Detection with Reporting Services, in which we can spot outliers with the naked eye. For now, I’ll finish by quickly getting out of the way the other means of outlier detection from the same class as GESD and Grubbs’. Many of these share some of the same severe limitations, such as dependence on a normal distribution. GESD may be the most flexible among them, since it allows you to specify the number of outliers you want to look for, whereas Dixon’s Q-Test and Grubbs’ limit you to just one. As we shall see next week, the Tietjen-Moore test appears at first glance to be more useful since it also includes a parameter like @NumberofTests. Its utility is crimped, however, by the subtle difference that it only tells you whether or not the dataset contains that number of outliers. GESD will likely be more useful, in that it can actually flag the specified number of records as aberrant data points.

[1] See National Institute for Standards and Technology, 2014, “Grubbs’ Test for Outliers,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h1.htm

[2] Rosner, Bernard, 1983, “Percentage Points for a Generalized ESD Many-Outlier Procedure,” pp. 165-172 in Technometrics, May, 1983. Vol. 25, No. 2. Original citation found at the NIST webpage “1.4.3. References For Chapter 1: Exploratory Data Analysis,” published in the online edition of the Engineering Statistics Handbook. Available at http://www.itl.nist.gov/div898/handbook/eda/section4/eda43.htm#Rosner

[3] See the Wikipedia page on the “Student’s T-distribution,” available at http://en.wikipedia.org/wiki/Student’s_t-distribution

[4] Ben-Gan, Itzik, 2012, Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions. O’Reilly Media, Inc.: Sebastopol, California.


Outlier Detection with SQL Server, part 3.1: Grubbs’ Test

By Steve Bolton

…………In the last two installments of this series of amateur self-tutorials, I mentioned that the various means of detecting outliers with SQL Server might best be explained as a function of their use cases: the context determined by the questions one chooses to ask of the data, the number and data types of the inputs and the desired mathematical properties of the outputs. The means of calculation in between the input and output stages may also be pertinent for performance reasons. Moreover, we can differentiate these aberrant data points we call “outliers” by their underlying causes, which must be matched with the correct response; it does us no good to find extreme values in our datasets if we can’t determine whether they were the product of faulty data collection, corruption during storage, natural random variation or some other root cause, then use that determination to handle them correctly. If we could build a matrix of outlier detection methods and their use cases, then Grubbs’ Test would occupy a very narrow range. The inputs and questions the test can answer are quite constrained, since the test can only determine whether the highest and lowest values in a sample are outliers. It outputs a single test statistic, which can be used to output a single Boolean answer, rejecting or accepting the null hypothesis that there are no outliers in the dataset. The National Institute for Standards and Technology’s Engineering Statistics Handbook, one of the best online resources for explaining statistics in plain English, warns that, “If you suspect more than one outlier may be present, it is recommended that you use either the Tietjen-Moore test or the generalized extreme Studentized deviate test instead of the Grubbs’ test.” In the kinds of billion-row databases that SQL Server DBAs work with on a day-to-day basis, we can expect far more than a single aberrant data point by random chance alone.
Grubbs’ Test is more applicable to hypothesis testing on small samples in a research environment, but I’ll provide some code anyway, on the chance that it might prove useful to someone in the SQL Server community working with small datasets.
…………The “maximum normed residual test” originated with a paper penned for the journal Technometrics by Frank E. Grubbs, a statistician for the U.S. Army’s Ballistics Research Laboratory (BRL), six years before his retirement in 1975. Apparently the Allies owe him some gratitude, given that “he was dispatched to England in 1944 as part of a team on a priority mission to sample and sort the artillery ammunition stockpiled for the invasion of France. After the team conducted thousands of test firings of the hundreds of different lots of artillery ammunition in the stockpiles, he analyzed the statistical variations in the data and was able to divide the ammunition into four large categories based on their ballistic characteristics. As a result, the firing batteries did not need to register each lot of ammunition before it was unpacked; they only needed to apply four sets of ballistic corrections to the firing tables to achieve their objectives.” After the war, he assisted the BRL in evaluating the reliability and ballistic characteristics of projectiles, rockets, and guided missiles; maybe he wasn’t a “rocket scientist,” as the saying goes, but close enough. The groundwork for the test that bears his name was laid in 1950, when he published a paper titled “Procedures for Detecting Outlying Observations in Samples” in the Annals of Mathematical Statistics, which I also had to consult for this article. The 1950 paper is made publicly available by the Project Euclid website, while the one establishing the test itself is hosted at the California Institute of Technology’s Infrared Processing and Analysis Center, for anyone wise enough to double-check my calculations and code or to get more background.
…………Calculating the test statistic from the formula at the NIST webpage is really trivial; the difficulty lies in finding proper tables of the T-distribution with which to interpret the statistic. The equation for the two-sided test is quite similar to the familiar Z-Score, except that we take the maximum value of the absolute deviation (i.e., the data point minus the mean) before dividing by the standard deviation. The one-sided tests for determining whether a minimum or maximum value in a dataset is an outlier are only slightly different; in the former we subtract the minimum value from the mean, while in the latter we subtract the mean from the maximum. Since the code is so easy that even a caveman can do it, I decided not to complicate it by adding logic to let the user select which of the three tests to use; I simply return all three in one row, along with the critical regions for each. The formulas for calculating the critical regions at the NIST’s webpage on Grubbs’ Test are more involved, requiring the use of the function in Figure 3. This in turn calls a rather sloppy but effective function to find the correct critical values for the T-distribution, from the lookup tables defined in Figure 1. I haven’t supplied any code to populate them, but this can be easily rectified by using one of the thousands of lookup tables available on the Internet for that distribution. The tricky part was finding a table that was reasonably complete, since many sources begin skipping over deciles around 40 or 50 degrees of freedom; I populated my own from the best source I could find, the “Tables of Critical values of t for Probabilities” at the StatsToDo website. In fact, you may need to tweak the DDL and retrieval code if you use a different source, since my ranges and stopping point of 200 degrees of freedom are derived from that particular source.
According to another lookup table (from a Blogspot post by a writer whom I’ve been unable to identify in order to give proper credit) that I didn’t use because it skips some deciles, the values for 200 and 250 degrees of freedom differ only by hundredths of a percentage point; the next value listed there is for infinity, which varies only a few hundredths of a percentage point from 250. Unlike researchers working with small samples drawn from an unknown population, SQL Server users can often instantly call up millions of records, so the smaller values of these lookup tables may be of limited utility for our use cases. I only recently learned how to do hypothesis testing, so my standard advice to check my code before putting it into a production environment definitely holds here. The Grubbs Statistic values match the NIST sample results, though, and could prove useful by providing a measurement of “the largest absolute deviation from the sample mean in units of the sample standard deviation.”
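To make the equations concrete, here is a minimal Python sketch of the two-sided Grubbs statistic and the NIST critical-region formula; the t value must still come from a lookup table or statistics package, and nothing here is a translation of the T-SQL below:

```python
import math
import statistics

def grubbs_statistic(values):
    """Two-sided Grubbs statistic: the largest absolute deviation from
    the sample mean, in units of the sample standard deviation."""
    mean = statistics.mean(values)
    return max(abs(x - mean) for x in values) / statistics.stdev(values)

def grubbs_critical_region(n, t):
    """Grubbs critical threshold from the NIST handbook, given
    t = the critical value of the T-distribution with n - 2 degrees
    of freedom at significance level alpha / (2 * n) (two-sided)."""
    return ((n - 1) / math.sqrt(n)) * math.sqrt(t * t / (n - 2 + t * t))
```

The null hypothesis of no outliers is rejected when the statistic exceeds the threshold, which is exactly the comparison the procedure’s output columns let you make by eye.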

Figure 1: DDL for the T-Distribution Lookup Tables
CREATE TABLE [Calculations].[CriticalValueRangeTable](
       [ID] [smallint] IDENTITY(1,1) NOT NULL,
       [OneTail] [decimal](5, 4) NULL,
       [TwoTail] [decimal](5, 4) NULL
)

CREATE TABLE [Calculations].[TDistributionTable](
[ID] [smallint] IDENTITY(1,1) NOT NULL,
[ProbabilityRange1] [decimal](7, 4) NULL,
[ProbabilityRange2] [decimal](7, 4) NULL,
[ProbabilityRange3] [decimal](7, 4) NULL,
[ProbabilityRange4] [decimal](7, 4) NULL,
[ProbabilityRange5] [decimal](7, 4) NULL,
[ProbabilityRange6] [decimal](7, 4) NULL,
[ProbabilityRange7] [decimal](7, 4) NULL
)

Figure 2: Function to Look Up Values in the T-Distribution Tables
CREATE FUNCTION [Calculations].[FindCriticalRegionForTDistributionFunction]
(@DegreesOfFreedom bigint, @SidedTestType bit, @ProbabilityValue decimal(5,4))
RETURNS decimal(7, 4)
AS
BEGIN
DECLARE @CriticalRegion decimal(7, 4)
-- this is a little more awkward than I'd like, but hey
      SELECT @CriticalRegion = CASE WHEN ProbabilityRangeColumnID = 1 THEN (SELECT ProbabilityRange1
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom
>= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
      WHEN ProbabilityRangeColumnID = 2 THEN (SELECT ProbabilityRange2
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom
>= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
       WHEN ProbabilityRangeColumnID = 3 THEN (SELECT ProbabilityRange3
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom
>= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
       WHEN ProbabilityRangeColumnID = 4 THEN (SELECT ProbabilityRange4
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom
>= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
       WHEN ProbabilityRangeColumnID = 5 THEN (SELECT ProbabilityRange5
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom
>= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
      WHEN ProbabilityRangeColumnID = 6 THEN (SELECT ProbabilityRange6
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom >= 200 THEN 200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
       WHEN ProbabilityRangeColumnID = 7 THEN (SELECT ProbabilityRange7
FROM Calculations.TDistributionTable WHERE ID = (CASE WHEN @DegreesOfFreedom >= 200 THEN
200 WHEN ID = @DegreesOfFreedom THEN @DegreesOfFreedom ELSE NULL END))
      ELSE NULL END
      FROM   (SELECT TOP 1 ID AS ProbabilityRangeColumnID
             FROM  (SELECT ID, ValueRange, ABS(ValueRange - @ProbabilityValue) AS RangeDifference, Lead(ValueRange, 1, 0) OVER (ORDER BY ValueRange) AS Lead
                     FROM (SELECT ID, 'ValueRange' = CASE WHEN @SidedTestType = 0 THEN OneTail
                           WHEN @SidedTestType = 1 THEN TwoTail
                           ELSE NULL END
                     FROM [Calculations].[CriticalValueRangeTable] ) AS T1) AS T2
              ORDER BY RangeDifference ASC) AS T3
      RETURN @CriticalRegion
END
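The lookup above simply snaps the requested probability to its nearest tabulated column and caps the degrees of freedom at 200. The same nearest-match idea can be sketched in Python using a hypothetical table fragment; the degrees-of-freedom cut points below are my own illustrative choice, while the critical values are the standard published one-tailed figures at the 0.05 level:

```python
import bisect

# Hypothetical fragment of a one-tailed t-table at the 0.05 level;
# the critical values are the standard published figures.
DEGREES = [1, 2, 5, 10, 30, 60, 120, 200]
CRITICAL = [6.314, 2.920, 2.015, 1.812, 1.697, 1.671, 1.658, 1.653]

def t_critical(degrees_of_freedom):
    """Return the critical value for the nearest tabulated degrees of
    freedom at or below the request, capped at 200 as in the T-SQL."""
    df = min(degrees_of_freedom, 200)
    index = bisect.bisect_right(DEGREES, df) - 1
    return CRITICAL[max(index, 0)]
```

As the article notes, the capping costs little in practice, since the tabulated values flatten out rapidly beyond 200 degrees of freedom.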

Figure 3: Grubbs Hypothesis Testing Function
CREATE FUNCTION [Calculations].[GrubbsHypothesisTestSP](
@DegreesofFreedom bigint, @TestType bit = 0, @SignificanceLevel decimal(38,35))
RETURNS decimal(38,32)
AS
BEGIN
       DECLARE @CriticalValue decimal(38,32), -- *** look this up in a table by the SignificanceLevel I've already recalculated according to the formulas, and also the Degrees of Freedom - 2
      @ReturnValue decimal(38,32)
       SELECT @CriticalValue = [Calculations].[FindCriticalRegionForTDistributionFunction] (@DegreesofFreedom, @TestType, @SignificanceLevel)
      SELECT @ReturnValue = Power(Power(@CriticalValue, 2) / (@DegreesOfFreedom - 2
+ Power(@CriticalValue, 2)), 0.5) * ((@DegreesOfFreedom - 1) / Power(@DegreesOfFreedom, 0.5))
       RETURN @ReturnValue
END

Figure 4: Code for the Grubbs Test Procedure
CREATE PROCEDURE [Calculations].[GrubbsTestSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @DecimalPrecision AS nvarchar(50), @Alpha decimal(38,35) = 0.05
AS

DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @DatabaseName = @DatabaseName + '.'
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName --I'll change this value one time, mainly for legibility purposes

SET @SQLString = 'DECLARE @Mean decimal(' + @DecimalPrecision + '),
@StDev decimal(' + @DecimalPrecision + '),
@Min decimal(' + @DecimalPrecision + '),
@Max decimal(' + @DecimalPrecision + '),
@GrubbsVersion1 decimal(' + @DecimalPrecision + '),
@GrubbsVersion2 decimal(' + @DecimalPrecision + '),
@GrubbsVersion3 decimal(' + @DecimalPrecision + '),
@DegreesofFreedom bigint,
@SignificanceLevel decimal(' + @DecimalPrecision + '),
@SignificanceLevelOneSided decimal(' + @DecimalPrecision + ')

SELECT @DegreesofFreedom=Count(' + @ColumnName + '), @Mean = Avg(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))), @StDev = StDev(CAST(' + @ColumnName + ' AS Decimal(' + @DecimalPrecision + '))), @Min = Min(' + @ColumnName + '), @Max = Max(' + @ColumnName + ') FROM ' + @SchemaAndTableName + ' WHERE ' + @ColumnName + ' IS NOT NULL

-- the sample exercise at the NIST webpage uses a straight value of @SignificanceLevel = @Alpha, rather than the two calculations for two- and one-sided tests that are recommended elsewhere in the directions; hence I''ve commented them out for now
--SET @SignificanceLevel = ' + CAST(@Alpha AS nvarchar(50)) + ' / (2 * @DegreesofFreedom)
--SET @SignificanceLevelOneSided = ' + CAST(@Alpha AS nvarchar(50)) + ' / (@DegreesofFreedom)
SET @SignificanceLevel = ' + CAST(@Alpha AS nvarchar(50)) + '
SET @SignificanceLevelOneSided = ' + CAST(@Alpha AS nvarchar(50)) + '

SELECT @GrubbsVersion1 = Max(' + @ColumnName + ' - @Mean) / @StDev, @GrubbsVersion2 = ((@Mean - @Min) / @StDev),
@GrubbsVersion3 = ((@Max - @Mean) / @StDev)
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL

SELECT @GrubbsVersion1 AS GrubbsTwoSided, CAST([Calculations].[GrubbsHypothesisTestSP] (@DegreesOfFreedom, 1, @SignificanceLevel) AS decimal(' + @DecimalPrecision + ')) AS CriticalRegionForGrubbsTwoSided,
@GrubbsVersion2 AS GrubbsLowerOneSided, @GrubbsVersion3 AS GrubbsUpperOneSided, CAST([Calculations].[GrubbsHypothesisTestSP] (@DegreesOfFreedom, 0, @SignificanceLevelOneSided) AS decimal(' + @DecimalPrecision + ')) AS CriticalRegionForGrubbsUpperOneSided'
--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………Thankfully, we will be able to reuse the mass of T-SQL in the first three figures in next week’s tutorial, which also requires looking up T-distribution values. The code in Figure 4 will look familiar if you’ve been following this mistutorial series. As always, you’ll have to add the brackets and program in the logic yourself if you allow spaces in your object names; you may also need to add SQL injection and other security code. Also, keep in mind that I’m still using a Calculations schema for these sample routines, so you may need to create one in your database or change the name as needed. The first three parameters allow you to run the procedure against any column in any database for which you have the requisite permissions and access. The @DecimalPrecision also allows you to manually set the precision and scale for the internal calculations of the procedure, in order to avoid arithmetic overflows. There are some slight differences between the parameters of this procedure and those discussed in the last few articles though, such as the fact that the usual @OrderByCode and @PrimaryKeyName are not needed here. The @Alpha parameter allows you to set the significance level of the test to any value you please (including incorrect ones, since I haven’t added any validation code) like the default of 0.05, which corresponds to a 95 percent confidence level. Oddly, Grubbs wrote in the 1969 paper that the confidence levels ought to be at least 99 percent for use with this test, but still used the standard 95 percent in his own example. Similarly, the NIST webpage says to use @Alpha divided by the degrees of freedom for a one-sided test and divided by twice the degrees of freedom for a two-sided test, yet uses a plain significance value of @Alpha = 0.05 in its sample data. Hence the commenting out of the code that would change the @SignificanceLevel to the alternate values. 
When run against the sample exercise on the NIST page, my code is accurate within about a hundredth of a percentage point, as long as this adjustment is made.

Figure 5: Results for the Grubbs Test Procedure
EXEC   Calculations.GrubbsTestSP
             @DatabaseName = N'DataMiningProjects',
             @SchemaName = N'Health',
             @TableName = N'DuchennesTable',
             @ColumnName = N'PyruvateKinase',
             @DecimalPrecision = N'12,7',
             @Alpha = 0.05



…………Executing a query like the one above against the Duchennes dataset we’ll be working with in this tutorial series produced the five columns above. The first, third and fourth columns represent the Grubbs Statistics for the two-sided, minimum and maximum tests respectively, while the Critical Regions are derived from the aforementioned code in Figures 1 through 3. The Grubbs Statistics are far beyond the critical regions, so yes, the maximum values in the dataset are beyond the thresholds and thus can be defined as “outliers.” Keep in mind that after testing the procedure against several different columns in different datasets, I’ve only seen slight differences between the two-sided result and the upper one-sided; due to lack of appropriate samples to work through, I cannot say whether or not that can be expected or is an error on my part. The PyruvateKinase column refers to an enzyme that is apparently involved in the devastating Duchennes form of muscular dystrophy, which is the subject of a tiny nine-kilobyte sample dataset made publicly available by Vanderbilt University’s Department of Biostatistics. In the last couple of blog posts I used the first float column of the Higgs Boson Dataset that the University of California at Irvine’s Machine Learning Repository has made available online, which occupies nearly 6 gigabytes of space in the DataMiningProjects database I’ve created to encompass all of the practice datasets I’ll be using in tutorial series to come. Traversing that much space in index scans and seeks turned out to be somewhat costly for the two versions of Z-Scores that I coded in the last two posts, but I haven’t bothered to post Client Statistics from SQL Server Management Studio (SSMS) because the Grubbs’ Test procedure takes only a matter of seconds, even for the massive float columns of the Physics.HiggsBosonTable.
…………The procedure may perform well compared to other outlier detection methods, but its usefulness is limited. I have yet to test it against a table that didn’t turn out to have outliers – which is likely to be the case for most of the large tables that DBAs might run the procedure on. Grubbs’ Test is more useful for the kind of small tables used in hypothesis testing, rather than exploratory data mining and data quality analyses, which are far more common use cases in the SQL Server user community. Statistical testing of this kind is also prone to several real dangers I’ve touched on in the last few articles and others that I have yet to address. The confidence levels commonly associated with them are pretty much drawn out of thin air; you’ll see 95 percent used most of the time, but only because it was an arbitrary de facto standard long before the UNIVAC. There really isn’t a good justification for it, except for the fact that it has been used for so long. Secondly, probabilities are not guarantees; there is a finite chance that random fluctuations alone could produce a dataset consisting of nothing but outliers, under any definition and detection method. Worst of all, Grubbs’ Test requires a Gaussian (i.e. “normal”) distribution, the bell curve. Without goodness-of-fit tests that clearly demonstrate that the data ought to fit the normal distribution, such tests are useless – or worse, deceptive.
As Grubbs himself puts it mildly, “Until such time as criteria not sensitive to the normality assumption are developed, the experimenter is cautioned against interpreting the probabilities too literally when normality of the data is not assured.” I don’t yet know how to apply some of the best goodness-of-fit tests (mainly because I’m still having trouble wrapping my head around some of the intricacies of cumulative distribution functions) but that won’t stop me from harping on this point repeatedly: the extent to which statistics are bandied about in many academic disciplines without proper testing today is simply alarming. The one place we can least afford to see them is in medical research, where they become a matter of life and death, but at least half of all studies published today contain at least one statistical error. The most common error appears to be the lack of goodness-of-fit testing; researchers in many fields seem to be in the habit of routinely applying tests that depend on a Gaussian distribution with reckless disregard for their validity. It’s not surprising that this occurs, given that there are so few statistical tests that can be used with the scores of other distributions that data might follow. If researchers everywhere were held to a proper standard of evidence, they might not be able to back up claims for a favorite project or experimental medicine with any statistics at all.
…………This leads straight back to the logical and moral issues that I’ve tried to reiterate throughout this series: there are an awful lot of shenanigans that go on with “lies, damned lies and statistics,” as the old saying goes. Grubbs’ Test is no more vulnerable to misuse than any other measure based on the normal distribution, all of which can be invalid, misleading or worse when the data is not naturally Gaussian. It is sometimes, however, applied recursively by simply deleting outliers until some stopping criterion is reached, which raises the grim possibility of improper handling of unfavorable data points. In some situations, an iterative Grubbs’ Test is vulnerable to false negatives, in which actual outliers are not detected, or false positives, in which haphazard definitions of stopping criteria lead to the deletion of good data. That brings us back full circle to the confluence of subjective definitions, “moving the goalposts” and inadequate analysis of the causes of aberrance which I discussed at length in the last article. Thankfully, the Generalized Extreme Studentized Deviate Test (GESD) ameliorates some of the standard problems associated with the sequential application of Grubbs’ Test by making a few adjustments; the NIST recommends that either the GESD or the Tietjen-Moore test be applied when looking for more than one outlier, which is almost always going to be the case in a SQL Server database. The math and code for both are also relatively simple. Unfortunately, they are both dependent on a Gaussian distribution, which means they also require goodness-of-fit tests that are often dispensed with in an irresponsible manner. The same limitation applies to Dixon’s Q-Test, which is simple to code, as well as to Chauvenet’s Criterion, which is not. It may also be true of Peirce’s Criterion, which will also be featured later in this series. 
Interquartile Range is a much more easily coded method of outlier detection which may also be less dependent on the normal distribution. Later in the series, I’ll give a quick recap of the Clustering algorithm in SQL Server Data Mining (SSDM) and supply some eye candy that is much easier on the brain than these fancy equations in Visual Outlier Detection with SQL Server Reporting Services. Towards the end I’ll delve into more difficult methods like Cook’s Distance and the Modified Thompson Tau Test, then Mahalanobis Distance. Many of these more sophisticated methods are of course more difficult to code than GESD, Tietjen-Moore and Dixon’s Q-Test, but they may also be more applicable to distributions besides the bell curve.

Outlier Detection with SQL Server, part 2.2: Modified Z-Scores

By Steve Bolton

…………There are apparently many subtle variations on Z-Scores, a ubiquitous measure that is practically a cornerstone in the foundation of statistics. The popularity and ease of implementation of Z-Scores are what made me decide to tackle them early on in this series of amateur self-tutorials on using various components of SQL Server to detect those aberrant data points we call “outliers.” As discussed in the last two posts, there are many different means of identifying outliers, which may be understood best by categorizing them by their use cases; the right choice of detection tools is essentially a function of the questions one wants to ask of the data, the number and types of inputs, the desired mathematical properties of the outputs and, in between, the performance and other requirements used in transforming the inputs into outputs. From my scant understanding of what little literature I’ve read on the topic, statisticians and other researchers commonly encounter use cases where the sensitivity of ordinary measurements to outliers has to be toned down, often in response to fat-tailed (i.e. highly skewed) distributions. The Modified Z-Scores developed by Temple University Prof. Boris Iglewicz and University of Massachusetts Prof. David C. Hoaglin are one means of adjusting Z-Scores for such cases, but hardly the only one. I’m highlighting it merely because I was introduced to it early on, while trying to learn the subject of stats from the National Institute of Standards and Technology’s Engineering Statistics Handbook, one of the best online resources for anyone trying to wade through this notoriously dry subject.[i]
…………Iglewicz and Hoaglin suggest specific cut-off criteria for their measure, which can of course be adjusted as needed by users – and that raises the whole question of researchers “moving the goalposts” or setting them haphazardly when using any means of outlier detection. Correct classification is a thorny issue with every method we’ll discuss in this series; I’m merely using the hard boundary associated with Modified Z-Scores as an introduction to the all-important topic of subjectivity in category definitions. As an amateur, I can’t give any guidance on whether or not to use a specific cut-off point for classifying outliers, although it would seem to be common sense that rough and fuzzy sets ought to be used in place of hard limits more often than they are. It is worth reiterating that outliers also vary significantly in their causes and the responses made to them, not just the means of differentiating them. The frequent mismatches between causes and responses and the lack of attention paid to discerning them both leave the door wide open to innumerable fallacies and sometimes outright fraud, which, as I discussed earlier in this series, is frighteningly common in certain fields. The definition of an outlier is subjective, depending on the kind of investigation a researcher chooses to perform, but whether or not a particular data point meets the chosen criteria is wholly objective. Fallacies and fraud arise when the distinction between the right uses and proper places of subjectivity and objectivity is blurred; the whole history of human philosophy demonstrates that when the former is loosed from such bonds, the result is always maniacal madness. 
For example, a person can choose to affix the name “pepperoni pizza” to anything they want; but once they’ve set tomato sauce, bread, cheese and the like as part of the criteria, then they can’t pretend that a pencil sharpener or a Planck length fits the definition, because whether or not something consists of the ingredients set forth in the criteria is an objective matter. That’s plain common sense, which suddenly becomes uncommon when the labeler has an incentive to fudge their definitions, or worse yet, a pedantic justification for it, like solipsism (i.e. one of the major symptoms of schizophrenia). Outlier detection presents a serious temptation to simply ignore the distinctions between causes and to put no effort into differentiating the correct responses to them, so that data miners and others who use these tools frequently just delete records that don’t fit their models and theories, or adjust their definitions of the term to achieve the same purpose. I’ll delve into outlier deletion in more depth a few posts from now, but the issue of subjective limits can serve as a transition into my usual dire disclaimer: math formulas, including those underpinning data mining algorithms and outlier detection, reside in a Pandora’s Box. The Modified Z-Scores under discussion today do not open the box any wider than any other formula; this is merely the context in which all statistical measures naturally reside, in which the slightest logical deviation in their use may lead to erroneous, misleading or even fraudulent conclusions. Data mining tools can be used quite easily by amateurs like myself for exploratory data analysis, but need to be handled like scalpels when attempting to prove a specific point. Nevertheless, they’re often employed carelessly like sledge hammers by professionals in many different fields, particularly health care. 
The specter of fallacious reasoning hems us in on all sides, and wielding these tools properly for this purpose requires more skill than the old board game of Operation. The difference with math and logic is that there is no buzzer to warn us when we’ve used them wrongly; there may be terrible consequences down the line in the form of falling bridges and adverse medical reactions, but the intelligentsia also has the intellectual power to explain those away using the same poor reasoning. What is called for here is not intelligence, but wisdom; without it, outlier detection methods merely prove the old adage, “There are three kinds of lies: lies, damned lies and statistics.”[ii] No data mining tool or math formula is going to provide a quick fix for this overarching problem, which hangs like a Sword of Damocles over everything that researchers, scientists, mathematicians and data miners do; the only fix is to apply the use of reason rigorously, which requires a deep understanding of logical fallacies and, in turn, painful self-examination. Neither I nor most of the DBAs who read this probably have that kind of training, so our use cases ought to be limited to exploratory analysis – which can be a highly productive exercise, even for the unqualified – rather than hypothesis testing and the like.
…………The point of using Modified Z-Scores is to address situations where it is desirable to reduce the sensitivity to outliers, so that there are for all intents and purposes fewer false positives when classifying them. Whether or not such reduced sensitivity is a wise choice for the problem at hand is one question; whether or not Modified Z-Scores succeed in doing so seems to be an open and shut case. In this series I’m trying to grasp the mechanisms that make these algorithms and formulas work as they do, which is something I didn’t delve into adequately in my series on SQL Server Data Mining (SSDM). The reason why Iglewicz and Hoaglin’s Z-Scores are less sensitive to outliers without being completely blind to them is that they use medians rather than means – an alternate measure of central tendency that is known to be less affected by unusual values. Both medians and means are basically primordial forms of clustering that identify a specific location near the center of a dataset, but the former is less affected by the most distant points. The formula given at the NIST website is not all that difficult to decipher or code in T-SQL; I was unable to get ahold of a copy of their original paper to see what the reasoning was behind the constant that appears in it, but it is child’s play to simply include it in the code and be done with it.[iii] This was my first introduction to median absolute deviation (MAD), which is “a variation of the average absolute deviation that is even less affected by extremes in the tail because the data in the tails have less influence on the calculation of the median than they do on the mean.”[iv] I initially confused it with a more common calculation, mean absolute deviation, because of the similar names. The idea is basically the same though: instead of averaging deviations from the mean, we compare each data point to the median of the whole dataset, then calculate a new median for the absolute values of those distances. 
Then we subtract the median from each data point again, multiply that result by the constant 0.6745, then divide the result by the MAD. The equations are actually quite easy to read; most of the T-SQL involved in implementing them is dedicated to calculating the two medians, using some subqueries and windowing functions. I’ve precalculated both in Common Table Expressions (CTEs) at the beginning of this week’s T-SQL stored procedure, because this reduces them to one-time operations (I think the technical term might be a “Θ(n) operation”) and makes the complicated dynamic SQL a little more legible. The T-SQL in Figure 1 could be streamlined further to suit your needs by removing the DENSE_RANK calculation, the OutlierCandidate column and the @OrderByCode logic, which are dispensable elements I’ve added as conveniences.
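Since the arithmetic can get lost inside a wall of dynamic T-SQL, here is a minimal cross-check of the same calculation in plain Python; the helper names and the sample values are my own inventions, but the 0.6745 constant and the ±3.5 cut-off are the ones Iglewicz and Hoaglin prescribe:

```python
# Modified Z-Score sketch: medians and MAD in place of the mean and standard deviation
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    # average the two middle points for even counts, mirroring the T-SQL CTEs
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2.0

def modified_z_scores(values):
    med = median(values)
    mad = median([abs(x - med) for x in values])  # median absolute deviation
    return [0.6745 * (x - med) / mad for x in values]

data = [4, 5, 5, 6, 5, 4, 100]  # 100 is an obvious outlier in this invented sample
scores = modified_z_scores(data)
outliers = [x for x, z in zip(data, scores) if abs(z) > 3.5]
print(outliers)  # only the 100 crosses the 3.5 threshold
```

Running it flags only the 100, which is exactly the damped sensitivity we want: the MAD stays at 1 no matter how far that single aberrant point wanders, whereas a mean-based deviation would be dragged along with it.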

Figure 1: Code for the Modified Z-Score Stored Procedure[v]
CREATE PROCEDURE [Calculations].[ModifiedZScoreSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @OrderByCode as tinyint = 1, @DecimalPrecision AS nvarchar(50)
AS -- for @OrderByCode, 1 is by PK ASC, 2 is by PK DESC, 3 is by ColumnName ASC, 4 is by ColumnName DESC, 5 is by ZScore ASC, 6 is by ZScore DESC

SET @DatabaseName = @DatabaseName + '.'
DECLARE @SchemaAndTableName nvarchar(400), @SQLString nvarchar(max)
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName --I'll change
this value one time, mainly for legibility purposes

SET @SQLString =
'DECLARE @OrderByCode as tinyint, -- pass the outer value like a parameter of sorts
@Median AS decimal(' + @DecimalPrecision + '),
@MedianAbsoluteDeviation AS decimal(' + @DecimalPrecision + ');

-- precalculating these stats not only makes the code more legible, but is more efficient because it is a one-time operation
-- first get the median, from the midpoint of two ROW_NUMBERs running in opposite directions
WITH MedianCTE
(' + @ColumnName + ', RN, DenseRank)
AS (SELECT ' + @ColumnName + ', RN, ROW_NUMBER() OVER (ORDER BY ' + @ColumnName + ' DESC) AS DenseRank
FROM (SELECT ' + @ColumnName + ', ROW_NUMBER() OVER (ORDER BY ' + @ColumnName + ') AS RN
     FROM ' + @SchemaAndTableName + '
     WHERE ' + @ColumnName + ' IS NOT NULL) AS T1)

SELECT @Median = Avg(' + @ColumnName + ') FROM MedianCTE WHERE RN BETWEEN DenseRank - 1 AND DenseRank + 1;

-- get the MedianAbsoluteDeviation the same way, from the absolute distances to the first median
WITH MedianAbsoluteDeviationCTE
(' + @ColumnName + ', RN, DenseRank)
AS (SELECT NewMedian, RN, ROW_NUMBER() OVER (ORDER BY NewMedian DESC) AS DenseRank
FROM        (SELECT NewMedian, ROW_NUMBER() OVER (ORDER BY NewMedian) AS RN
       FROM             (SELECT ABS(' + @ColumnName + ' - @Median) AS NewMedian
              FROM ' + @SchemaAndTableName + '
              WHERE ' + @ColumnName + ' IS NOT NULL) AS T1) AS T2)

SELECT @MedianAbsoluteDeviation = Avg(' + @ColumnName + ')
FROM MedianAbsoluteDeviationCTE
WHERE RN BETWEEN DenseRank - 1 AND DenseRank + 1;

--SELECT @Median -- uncomment these two to debug the intermediate values
--SELECT @MedianAbsoluteDeviation
SET @OrderByCode = ' + CAST(@OrderByCode AS nvarchar(50)) + '

-- now check each data point
SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', ModifiedZScore, DENSE_RANK () OVER (ORDER BY ModifiedZScore) AS GroupRank,
''OutlierCandidate'' = CASE WHEN Abs(ModifiedZScore) > 3.5 THEN 1
       ELSE 0 END
FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', ((0.6745 * (' + @ColumnName + ' - @Median)) / @MedianAbsoluteDeviation) AS ModifiedZScore
FROM ' + @SchemaAndTableName + ') AS T1
       ORDER BY
CASE WHEN @OrderByCode = 1 THEN ' + @PrimaryKeyName + ' END ASC,
CASE WHEN @OrderByCode = 2 THEN ' + @PrimaryKeyName + ' END DESC,
CASE WHEN @OrderByCode = 3 THEN ' + @ColumnName + ' END ASC,
CASE WHEN @OrderByCode = 4 THEN ' + @ColumnName + ' END DESC,
CASE WHEN @OrderByCode = 5 THEN ModifiedZScore END ASC,
CASE WHEN @OrderByCode = 6 THEN ModifiedZScore END DESC'
--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

…………Note that I’m using almost all of the same parameters and dynamic SQL format as in the article on regular Z-Scores. The combination of the first three parameters allows you to execute the procedure against any table in any database, assuming you have the requisite permissions. Just like in the last tutorial, the @OrderByCode parameter allows you to sort the results by 1) the primary key values, ascending; 2) the primary key values, descending; 3) the column values, ascending; 4) the column values, descending; 5) the Modified Z-Score, ascending and 6) the Modified Z-Score, descending. You’ll have to supply your own @DecimalPrecision values and tweak them to avoid arithmetic overflows, which are tricky to handle when multiple calculations can change the number of decimal places repeatedly. I usually try setting these values to the original precision and scale of decimal and numeric columns if they’re high enough, but when working with integers you’ll have to decide how many decimal places are appropriate for your output. You can debug the dynamic SQL by uncommenting the next-to-last line and the two others beginning with comment marks and SELECTs. I’ve also used the DENSE_RANK windowing function again to assign identical results to specific groups by their Modified Z-Score values, which comes in handy with columns that have few distinct values repeated many times. The OutlierCandidate column is merely a bit flag that reveals whether or not the ModifiedZScore falls outside the ±3.5 threshold set by Iglewicz and Hoaglin. Your requirements may be different, so feel free to change the threshold or eliminate it altogether; it wouldn’t be all that difficult to replace hard thresholds like this with more flexible fuzzy set criteria with graded memberships. If you use @OrderByCode 5 or 6, values where OutlierCandidate = 1 will be sorted to the top and bottom of the results. 
As usual, you’ll have to add your own brackets and logic to handle spaces if you allow them in your object names (I have a ruthless ban on them in my own code, for legibility purposes) and program in your own security to handle risks like SQL injection.
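To make the fuzzy-set alternative to the hard ±3.5 cut-off concrete, here is one way a graded membership function might look; the ramp boundaries of 3.0 and 4.0 are arbitrary values I picked purely for illustration, not anything taken from the statistical literature:

```python
def outlier_membership(modified_z, lower=3.0, upper=4.0):
    """Graded membership in the 'outlier' fuzzy set, replacing a hard yes-or-no cut-off.

    Below |lower| a point is definitely not an outlier (0.0), above |upper| it
    definitely is (1.0), and in between the membership ramps up linearly.
    """
    distance = abs(modified_z)
    if distance <= lower:
        return 0.0
    if distance >= upper:
        return 1.0
    return (distance - lower) / (upper - lower)

for z in (1.2, 3.5, 64.1):
    print(z, outlier_membership(z))
```

A Modified Z-Score of exactly 3.5 then earns a membership of 0.5 in the outlier set rather than a hard verdict, which communicates the ambiguity of borderline cases to end users instead of hiding it behind a bit column.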

Figure 2: Results for Column1 of the HiggsBosonTable
EXEC [Calculations].[ModifiedZScoreSP]
              @DatabaseName = N'DataMiningProjects',
              @SchemaName = N'Physics',
              @TableName = N'HiggsBosonTable',
              @ColumnName = N'Column1',
              @PrimaryKeyName = N'ID',
              @OrderByCode = 6,
              @DecimalPrecision = N'33,29'


Figure 3: Client Statistics for the Modified Z-Scores Procedure
[two Client Statistics screenshots, before and after the addition of a non-clustered index]

…………In last week’s tutorial, I tested my Z-Score stored procedure on the first float column of a nearly 6-gigabyte table from the Higgs Boson Dataset, which is made publicly available by the University of California at Irvine’s Machine Learning Repository. In future mistutorials I will use a dataset on the Duchennes form of muscular dystrophy provided by Vanderbilt University’s Department of Biostatistics, as well as transcriptions of the Voynich Manuscript, a creepy tome whose medieval author encrypted it so well that no one has been able to crack it since, including the National Security Agency (NSA). For the sake of consistency, I tested my Modified Z-Scores procedure against the same Higgs Boson column. Using the query at the top of Figure 2 returned the corresponding results, plus about 11 million more rows that I somehow couldn’t cram into the article. There were some records at the very bottom with Modified Z-Scores near -1, but none that qualified for Iglewicz and Hoaglin’s cut-off point for outliers.
…………I didn’t bother to post screenshots of the execution plans because they weren’t pretty, nor would they fit on an NFL scoreboard. The novel approach I took of comparing the middle point of two windowing functions moving in opposite directions added a lot of expensive sorts, which even the addition of a non-clustered index couldn’t fix. As depicted in Figure 3, the index improved the Client Processing Time and Total Execution Time significantly, but the procedure still consumed far too much memory on my poor beat-up development machine and took too long for my tastes. It will do just fine on columns in small tables, but expect it to take a while if you’re executing it on 11 million rows of a 6-gig database using an outdated workstation in place of a real server.
…………That drawback ought to refocus attention on one of the caveats I want to stress in this series: I’m posting these articles because I don’t know what I’m doing and want to learn, not because I have any real expertise. As with my series A Rickety Stairway to SQL Server Data Mining, I’m passing on my misadventures so that others don’t repeat them. Another error I rectified along the way was accidentally substituting a mode for the median while wool-gathering; that procedure might actually be useful in catching certain outliers and I will post it if anyone thinks they can benefit, but the bottom line is that I almost posted an article based on the wrong formula. Just keep in mind that my code samples in this series will always need further testing before going into a production environment. Consider these posts an introduction to the topic, not the last word. If all goes according to plan, I’ll be introducing both myself and my readers to Chauvenet’s Criterion, which is a means of outlier detection that is intrinsically dependent on a Gaussian distribution. I may follow these up by going on a tangent with some fairly easy means of outlier detection, like Grubbs’ Test and the Tietjen-Moore Test, the Generalized Extreme Studentized Deviate (GESD) Test, Interquartile Range and Dixon’s Q-Test. At some point I’ll also get into a discussion of Visual Outlier Detection with Reporting Services (featuring a lot of eye candy) and do a quick recap of Clustering with SSDM. Towards the end of the series I’ll tackle Cook’s Distance and the Modified Thompson Tau Test, then end with the daunting task of coding Mahalanobis Distance. I hope to use that last post as a springboard towards a much longer and more difficult series months down the line, Information Measurement with SQL Server.

[i] That’s more “dry humor,” but not as bad as the hydrology joke in the last column.

[ii] Although frequently attributed to Mark Twain and Benjamin Disraeli, the quip apparently originated with British politician Leonard H. Courtney in 1895. See the TwainQuotes webpage “Statistics” at http://www.twainquotes.com/Statistics.html.

[iii] See “Detection of Outliers,” an undated article published at the National Institute of Standards and Technology’s Engineering Statistics Handbook website. Available online at http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm. The page in turn cites Iglewicz, Boris and Hoaglin, David, 1993, “Volume 16: How to Detect and Handle Outliers,” The ASQC Basic References in Quality Control: Statistical Techniques, Edward F. Mykytka, Ph.D., Editor.

[iv] See “Measures of Scale,” an undated article published at the National Institute of Standards and Technology’s Engineering Statistics Handbook website. Available online at http://www.itl.nist.gov/div898/handbook/eda/section3/eda356.htm#MAD

[v] While coding this, I forgot how to use modulo properly and made use of Byers, Mark, 2009, response to the thread “How Can I Determine is Digit Even Number?” published Nov. 26, 2009 at StackOverflow.com. Available online at http://stackoverflow.com/questions/1805420/how-can-i-determine-is-digit-even-number. I also double-checked my median calculations against the MathsIsFun webpage “How to Find the Median Value” at http://www.mathsisfun.com/median.html.

Outlier Detection with SQL Server, part 2.1: Z-Scores

By Steve Bolton

…………Using SQL Server to ferret out those aberrant data points we call outliers may call for some complex T-SQL, Multidimensional Expressions (MDX) or Common Language Runtime (CLR) code. Yet thankfully, the logic and math that underpin the standard means of outlier detection I’ll delve into in this series are sometimes mercifully simple. That was not the case in the introductory article in this amateur series of self-tutorials, in which I tackled the fascinating mystery of Benford’s Law, a means of finding outliers that is often used to expose fraud. I also used the occasion to bring up the topic of potential misuses of outlier detection itself, including some duplicitous practices that are nonetheless frighteningly frequent in academic studies – particularly in the health care field, where a lot of money is at stake and where poorly handled stats can do the most damage to ordinary people. I cannot say enough about how critical it is to moor statistics and data mining solidly in reason, because all it takes is a single fallacy to render the conclusions drawn from them useless, misleading or downright false; this point is worth reiterating throughout this blog series, given that it dwarfs all of the technical issues that mathematicians, statisticians, data miners and the field of computing spend much more time on. Our algorithms may go on correctly generating numbers, but if they’re not firmly embedded in reason, they may be deceptively erroneous. Some of the saving graces of this week’s method of outlier detection are that it is well-understood and simple to implement, both of which mean there is less room for fallacies to worm their way into Z-Score calculations than with other methods.
…………Z-Scores may be more run-of-the-mill than cutting edge methods like Benford’s Law, but the technique is ubiquitous throughout the field of statistics precisely because it is a reliable workhorse. In fact, it is used as a building block in many other higher-order statistics, many of which are in turn used in the edifices of today’s sophisticated data mining algorithms. When I wrote my series of self-tutorials on SQL Server Data Mining (SSDM) I merely set out to prove that this neglected component could be used by inexperienced amateurs like myself for practical purposes, but I didn’t yet grasp the mechanics of why the constituent algorithms worked. In this series and the next several I hope to write, I plan to rectify that defect by looking more under the hood. I’m still hardly qualified to write about statistics, but I have found it is much easier to grasp the material by categorizing various stats by their use cases. It is easiest to sort through the confusing welter of numbers and algorithms by viewing the use cases as a function of the questions one wants to ask of the data, the number and type of inputs an algorithm requires, the number and type of outputs it returns, as well as the mathematical properties associated with the inputs and outputs. For example, if you choose to ask a particular question of a dataset but don’t have the correct data types, number of input parameters or sample sizes, your choices will be quickly narrowed down to a few stats and algorithms. If you require the outputs to have specific mathematical properties, such as positivity or homoscedasticity, your choices will be further constrained, because the formulas are also highly specific as to what kind of data they spit back out. Will G. Hopkins, the author of an excellent plain English guide to stats[i], likewise writes that he had a difficult time sorting out the various types of statistical models until he categorized them by the types of comparisons being made, such as numerical vs. nominal data, or numeric vs. numeric, etc. Those categories are basically equivalent to the column Content types I discussed in the series on SSDM, where they represent an important consideration in the design of mining models. It might be helpful in the future to develop a matrix of use cases for all permutations of Content types, data types, numbers of inputs, properties of outputs and perhaps even the performance requirements for the inner workings of the algorithms in between the input and output stages. For now, however, we’ll just use the concept to illustrate what we’re doing with Z-Scores.
…………This particular measure is merely a comparison of a data point to the mean and standard deviation of the dataset it belongs to. The formula for Z-Scores is fairly simple: subtract the mean from the data point, then divide by the standard deviation. What’s the purpose in doing this though? I was able to grope towards a better understanding by resorting to categorization again. Averages are merely the most popular instance of a broader classification of measures of central tendency or location, which identify the center of a dataset; the mean really represents the most primordial form of clustering we know of. If we want to tell how close a particular data point is to that center point, we also need a yardstick to measure the distance by. This is where standard deviation, the most basic metric of dispersion, comes in handy. Rather than pinpointing the single center of the dataset, the deviation is a single measure of how diffuse or spread out all the data points are. Like the mean, standard deviation is a fundamental building block of higher statistics, one which also gives us a convenient means of gauging how far a particular data point is from the center point identified by the mean. In plain English, a Z-Score essentially tells us how many standard deviations (i.e. units of dispersion) there are between a given data point and the center. Many other calculations we’ll encounter in the next few tutorial series are either derived directly from Z-Scores, or resemble them in their use of the mean and standard deviation. To someone lost in the swirl of numbers and equations surrounding statistics, it may seem that there is no rhyme or reason to any of them, but there is a method behind the madness. 
In each Z-Score calculation, we’re not plugging in just anything, like a whole probability distribution or a sequence or a set, but a single data point – which matches our question, “Is this an outlier?” More sophisticated calculations may require us to further limit our choices by such considerations as data types, Content types, the internal performance requirements of the algorithm, the number of inputs, the sample size and the desired mathematical properties of the output. In the case of Z-Scores, all we really have to make sure of is that we’re inputting one of SQL Server’s numeric data types. We obviously can’t plug text or dates into a Z-Score equation, although we could perform calculations on such fields and then plug in the results as needed. We also need to know the mean and standard deviation for the entire dataset, rather than a mere sample; in some situations it might be impractical to calculate them due to resource constraints, but DBAs usually have one over on those engaged in scientific research, in that they usually have populations of millions of records to draw from if they choose to. Sometimes researchers only have access to small samples taken from unknown populations, in which case it may not be possible to apply Z-Scores at all.
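Stripped of the dynamic SQL apparatus, the calculation itself is only a few lines. This Python sketch (the sample values are invented) mirrors the math of the stored procedure in Figure 1, including its guard against a zero standard deviation; note that it uses the population deviation, which is appropriate when the whole table is available – T-SQL’s StDev() aggregate computes the sample flavor, while StDevP() computes the population one:

```python
import statistics

def z_scores(values):
    """Distance of each point from the mean, in units of standard deviation."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    if stdev == 0:                     # same guard as the CASE WHEN @StDev = 0 branch
        return [0.0 for _ in values]
    return [(x - mean) / stdev for x in values]

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean 5, population deviation 2
print(z_scores(data))  # the 9.0 sits exactly two deviations above the mean
```

The zero-deviation guard matters more than it might appear: a column where every value is identical would otherwise divide by zero, and returning 0 for every point correctly says that nothing deviates from the center at all.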
…………Fortunately, coding a Z-Score is also a lot less taxing on the brain than the subject of the last post, Benford’s Law. That is why the stored procedure in Figure 1 is a lot easier to follow. The first three parameters allow you to specify a table in any database for which you have privileges, while the fourth identifies the column to be sampled and the fifth is the name of the table’s primary key. Keep in mind that I don’t allow spaces in my object names, so if you’re going to be operating on objects that have them, you’re going to have to add the requisite brackets to this code yourself. Enter the @DecimalPrecision parameter with care, since an incorrect setting will result in arithmetic overflows; leaving that setting up to the end user was a lot easier to code than a workaround that would fit all use cases. The most difficult part of the code to grasp may be the @OrderByCode, which allows you to sort the results by 1) the primary key values, ascending; 2) the primary key values, descending; 3) the column values, ascending; 4) the column values, descending; 5) the Z-Score, ascending and 6) the Z-Score, descending. I’m a stickler about giving credit where it is due, so I’ll point out that I’ve done ORDER BY CASE statements before, but double-checked the syntax at a thread by one of the greatest assets of the SQL Server community, Pinal Dave.[ii] Uncommenting the next-to-last line will allow you to debug the procedure as needed by checking the dynamic SQL. Also be aware that I haven’t taken any steps to proof this against SQL injection attacks, so be ready to program your own security requirements into it. In addition, the procedure is created in a schema called Calculations that I will be using frequently throughout this series, so be prepared to add it to your own database or change the code.

Figure 1: Code for the Z-Score Stored Procedure
CREATE PROCEDURE [Calculations].[ZScoreSP]
@DatabaseName as nvarchar(128) = NULL, @SchemaName as nvarchar(128), @TableName as nvarchar(128), @ColumnName AS nvarchar(128), @PrimaryKeyName as nvarchar(400), @OrderByCode as tinyint = 1, @DecimalPrecision AS nvarchar(50)
AS -- for @OrderByCode, 1 is by PK ASC, 2 is by PK DESC, 3 is by ColumnName ASC, 4 is by ColumnName DESC, 5 is by ZScore ASC, 6 is by ZScore DESC

SET @DatabaseName = @DatabaseName + '.'
DECLARE @SchemaAndTableName nvarchar(400)
SET @SchemaAndTableName = ISNull(@DatabaseName, '') + @SchemaName + '.' + @TableName --I'll change this value one time, mainly for legibility purposes

DECLARE @SQLString nvarchar(max)
SET @SQLString =
'DECLARE @OrderByCode as tinyint, -- pass the outer value like a parameter of sorts
@StDev AS decimal(' + @DecimalPrecision + '),
@Mean AS decimal(' + @DecimalPrecision + ')

-- precalculating these not only makes the code more legible, but is more efficient because it is a one-time operation
SELECT @StDev = StDev(' + @ColumnName + ') FROM ' + @SchemaAndTableName + '
SELECT @Mean = Avg(' + @ColumnName + ') FROM ' + @SchemaAndTableName + '

--SELECT @StDev -- uncomment these to debug value errors
--SELECT @Mean

SET @OrderByCode = ' + CAST(@OrderByCode AS nvarchar(50)) + '

SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', ZScore, DENSE_RANK () OVER (ORDER BY ZScore) AS GroupRank
FROM (SELECT ' + @PrimaryKeyName + ', ' + @ColumnName + ', ''ZScore'' = CASE WHEN @StDev = 0 THEN 0
     ELSE (' + @ColumnName + ' - @Mean) / @StDev END
     FROM ' + @SchemaAndTableName + '
     GROUP BY ' + @PrimaryKeyName + ', ' + @ColumnName + ') AS T1 -- the purpose of the inner query is to allow us to order by the ZScore
       ORDER BY
CASE WHEN @OrderByCode = 1 THEN ' + @PrimaryKeyName + ' END ASC,
CASE WHEN @OrderByCode = 2 THEN ' + @PrimaryKeyName + ' END DESC,
CASE WHEN @OrderByCode = 3 THEN ' + @ColumnName + ' END ASC,
CASE WHEN @OrderByCode = 4 THEN ' + @ColumnName + ' END DESC,
CASE WHEN @OrderByCode = 5 THEN ZScore END ASC,
CASE WHEN @OrderByCode = 6 THEN ZScore END DESC'
--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)

Figure 2: Sample Results from Column1 of the HiggsBosonTable

EXEC [Calculations].[ZScoreSP]
             @DatabaseName = N'DataMiningProjects',
             @SchemaName = N'Physics',
             @TableName = N'HiggsBosonTable',
             @ColumnName = N'Column1',
             @PrimaryKeyName = N'ID',
             @OrderByCode = 6,
             @DecimalPrecision = N'38,28'


…………As discussed in my last few posts, I’ll be using three publicly available practice datasets for my next three or four series of tutorials, beginning in last week’s post with data on the Duchennes form of muscular dystrophy provided by Vanderbilt University’s Department of Biostatistics. In time, I will probably also perform outlier detection and other data mining calculations on the Voynich Manuscript, an inscrutable medieval tome with an encryption scheme so obscure that no one has been able to crack it for more than five centuries, including the National Security Agency (NSA). The best of the three datasets for stress-testing this procedure is the data on the Higgs Boson made available by the University of California at Irvine’s Machine Learning Repository, given that its single table, at almost 6 gigabytes, dwarfs the tiny 9-kilobyte Duchennes table. It also consists entirely of numeric data, unlike the transcriptions of the Voynich Manuscript I’ve imported, which are mainly appropriate for text mining. To date, I have yet to find an explanation of what the 28 float columns actually measure, although there’s an outside chance I might be able to interpret an explanation if I find one, given that I understood particle physics too well for my own good back in fourth grade.[iii] Figure 2 depicts a sample query against the first float column in the HiggsBosonTable of my DataMiningProjects database, which includes all three datasets. Note that it’s in descending order by Z-Scores. The GroupRank column separates identical Z-Score values into distinct groups through the DENSE_RANK windowing function; feel free to eliminate it from the code if it drags down performance on your databases. I find it handy when running the procedure against tables with small ranges of distinct and frequently duplicated values.
This logic may also be enhanced by intrepid programmers to handle bucketing and banding of contiguous but not quite identical values, perhaps using fuzzy sets with graded memberships. The interpretation is not difficult at all: the further away Z-Scores are in either direction from zero, the more likely they are to represent outliers. It only becomes difficult once we compare the results to particular probability distributions, which expect a certain percentage of their values to occur in specific ranges and therefore make the definition of an outlier less arbitrary in that context; for example, the Gaussian or “normal” distribution, i.e. the bell curve, expects about 68 percent of the values to fall within the first standard deviation, 95 percent within the second and 99.7 percent within the third.
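Those percentages come straight out of the normal cumulative distribution function, so they are easy to verify with nothing but the Python standard library (a quick check of the figures cited above, not part of the procedure):

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

for k in (1, 2, 3):
    # share of a normal population expected within k standard deviations
    within = std_normal.cdf(k) - std_normal.cdf(-k)
    print(f"within {k} standard deviation(s): {within:.1%}")
    # prints 68.3%, 95.4% and 99.7% respectively
```

Against a confirmed normal distribution, then, only about three records in a thousand should land beyond three standard deviations, which is why a Z-Score out past 3 is such a strong hint of an outlier.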
…………The procedure took much less time to execute on the 11 million rows of the HiggsBosonTable (the seven at the tail end are accidental duplicates I’ve been procrastinating on removing, but they’re inconsequential for today’s article) than I expected, given that my poor, beat-up development machine is hardly a match for the servers most DBAs use every day. Unfortunately, I was unable to test it with a columnstore index because I haven’t been able to afford to upgrade from SQL Server 2012 Developer Edition to 2014, in which Microsoft lifted many of the restrictions that made it impossible to apply them to many user scenarios. The Column1 I was testing in the HiggsBosonTable has a precision of 33 and a scale of 29, so I naturally received this error when trying to create one: “CREATE INDEX statement failed. A columnstore index cannot include a decimal or numeric data type with a precision greater than 18. Reduce the precision of column ‘Column1’ to 18 or omit column ‘Column1’. (Microsoft SQL Server, Error: 35341).” I was, however, able to reduce the client processing time by adding a regular nonclustered index to Column1. The total execution time was higher, but only because of an increased Wait Time on Server Replies, which was probably due to pressure on the server from other unrelated tasks. If you click on the pictures of the execution plans to enlarge them, you’ll see that the index was used when calculating the standard deviation, the average and the Z-Scores alike.

Figure 3: Client Statistics and Execution Plans for the Z-Score Procedure on the HiggsBosonTable

…………In my last post, I cited many articles by professionals who gave a long laundry list of use cases, limitations and nuances of interpretation for Benford’s Law. I searched for similar materials for Z-Scores but essentially came up empty, perhaps because they’re so well-established, well-understood and trivial to calculate. The one clear exception I saw mentioned in the literature is that fat-tailed distributions, in which the data is significantly skewed in one or more directions, may require the use of less outlier-sensitive techniques. This particular outlier identification method is used most often in conjunction with the normal distribution but doesn’t have to be; out of all the techniques I’ll survey in this series, it is perhaps the most flexible and suited to the widest range of use cases. It is fairly easy to understand and interpret, performs well and isn’t dependent on any particular cut-off criteria for defining outliers, unlike many others that force unnecessary all-or-nothing choices. This means we can use it to ask how much of an outlier a record is, which is a much more informative question than simply asking if a record belongs in a single outlier bucket or outside it.
…………In the next installment of this series I’ll briefly touch on Iglewicz and Hoaglin’s Modified Z-Scores, which are one of many twists that can be applied to Z-Scores depending on the use cases at hand. That material should be fairly light and easy, as should some of the posts that follow on stats like Grubbs’ Test and the Tietjen-Moore Test that have deceptively scary names. Interquartile Range and Dixon’s Q-Test should also be trivial to handle. I’m not sure yet how difficult it will be to explain and code lesser-known measures like the Generalized Extreme Studentized Deviate (ESD) Test, Cook’s Distance, Peirce’s Criterion, Chauvenet’s Criterion and the Modified Thompson Tau Test. Eventually I’ll also be doing a recap of how to use SSDM Clustering for finding aberrant data and writing a post tentatively titled Visual Outlier Detection with Reporting Services, which will be full of eye candy rather than T-SQL and equations like the others. I’m not yet sure what order I’ll be tackling them all in, except for Mahalanobis Distance, which is apparently a quite sophisticated and useful method that unfortunately has math of commensurate difficulty. If all goes according to plan, climbing that final hurdle will propel me into a much more difficult but useful series, Information Measurement with SQL Server. We will see references to Z-Scores and formulas like them throughout that series, as well as the remainder of this one.

[i] See Hopkins, Will G., 2001, A New View of Statistics website. The home page for the site is http://www.sportsci.org/resource/stats/index.html.

[ii] Pinal Dave, 2007, “SQL SERVER – CASE Statement in ORDER BY Clause – ORDER BY Using Variable,” published July 17, 2007 at the Journey to SQL Authority with Pinal Dave website. Available online at


[iii] When my family moved out of the home we had in my elementary school days, we never did find the uranium and radium samples that came with my do-it-yourself cloud chamber kit. Oh well, I suppose the new owners either sold it to some North Koreans, or their kids all have seven toes. I suppose I shouldn’t lose hope of finding them again, given that their half-lives were several million years apiece.

Outlier Detection with SQL Server, part 1: Finding Fraud and Folly with Benford’s Law

By Steve Bolton

…………My last blog series, A Rickety Stairway to SQL Server Data Mining, often epitomized a quip by University of Connecticut statistician Daniel T. Larose, to the effect that “data mining is easy to do badly.”[1] It is clear that today’s sophisticated mining algorithms can still return useful results even when misused by a bumbling amateur like myself, but it is better to understand why they work, which is an area I’m seriously deficient in. In that series of self-tutorials, I was more interested in demonstrating that SSDM could be used by someone who understood it badly, while gleaning what I could about the mechanics and concepts that underpin data mining. I intended to rectify that with a new series tentatively titled Information Measurement with SQL Server, in which I would gradually teach myself how to code dozens of metrics used in information theory and statistics in Transact-SQL (T-SQL), Multidimensional Expressions (MDX) and Visual Basic (VB) Common Language Runtime (CLR) routines. In time, I still intend to publish my misadventures with such sundry metrics as the Kullback-Leibler Divergence, Minimum Description Length, Bayes Factors, the Akaike Information Criterion and Shannon’s Entropy. I will probably also stick to the plan of writing a parallel series on Using Other Data Mining Tools with SQL Server, in which I may pass on my inexperience with accessing such alternative mining platforms as WEKA and RapidMiner from Microsoft’s database server software. Prior to writing up both series, however, I decided to sneak in a short but potentially useful series on myriad means of detecting outliers using SQL Server, in part because the material is lighter and the equations involved less complicated by several orders of magnitude. That will help me bridge the gap in skills required for the Information Measurement series.
All three series could be of real practical use to database administrators (DBAs) and data miners, but outlier detection is simpler both to implement and discuss. In fact, I’ll dispense in the very first article with one of the more complex and intriguing methods, Benford’s Law, which I will implement with a simple lookup table and some dynamic SQL with windowing functions. In the next eight or nine articles I’ll tackle several other means that can be implemented in SQL Server using Reporting Services, SSDM, T-SQL and possibly some CLR stored procedures written in VB.

The Uses and Abuses of Outliers

…………Before discussing any of these detection methods, it is critical to delineate what an outlier is and more importantly, how to respond to them depending on their causes. One of the greatest pitfalls of statistics (and math in general) is that it is quite easy for the numbers to become detached from their underlying logic and meaning; in such cases, the statistical packages and data mining tools will continue to spit out figures, but they will be meaningless, misleading or otherwise downright false. When math goes wrong, it is often at this juncture with logic and reason, which are terribly neglected in every academic field these days; in fact, one of the most productive things anyone in the Western intelligentsia can do today is to learn about logical fallacies and acquire the skill of recognizing them.[2] As I pointed out in the SSDM series and will reiterate in the future, at a deeper level it is even more important to acquire the skills of recognizing one’s own fallacies and motivations for holding them. In a nutshell, sometimes people don’t like the truth, so they apply whatever form of intelligence they’re proficient in to find a way around it; this is one of the great banes and most common defects of all human history, one that requires painful self-examination and swallowing of pride. When math, statistics, data mining and kindred topics go wrong, it is often because they have come uncoupled from reason, not because the tools are spitting out bad numbers; the few sentences I’ve spent here are all out of proportion to the importance of keeping our calculations moored in reason and reality, given that the slightest mistake in this regard can render them useless or counterproductive. For these reasons, it is always wise to take a page from the philosophers of ages past and reduce the wiggle room available to fallacies by carefully defining our terms, without going to the opposite extreme of hair-splitting reductionism and getting lost in jargon.
With all that background out of the way, let’s define an outlier rigidly but plainly so that anyone can understand it: it is a data point that contrasts with the other data points it is grouped with. How we group them is dependent on the questions we want to ask, which amount to a subjective choice; whether or not a particular data point meets the criteria we’ve set down for membership in a set or its outliers is an objective thing beyond our control. This distinction between subjectivity and objectivity is frequently confused and sometimes abused by intellectuals these days, but particularly when it comes to the treatment of outliers. Once we’ve detected them, the potential responses range from doing nothing, to applying alternate outlier-resistant measures like medians, to throwing out the data altogether.[3]
There are two broad classes of hazards involved with outliers, one of which is changing the definition of what an outlier is, or altering the criteria for membership in the larger set they are being contrasted with. Either way, it may amount to moving the goalposts, when not done for a plainly justified reason. The second snare is making assumptions about what caused the aberration in these data points, which is sometimes combined with the first. Outliers can of course be useful in detecting data quality problems, systematic errors in data collection and in the case of Benford’s Law, fraud. It would be a mistake, however, to simply change the definition of outliers or assign those data points to a different population without ascertaining whether erroneous data of some kind is indeed the culprit. This presents researchers with a subtle temptation to simply throw out data they don’t like because it doesn’t fit their preconceived notions, by changing categories. Furthermore, because outlier detection is so useful in ferreting out data quality issues, aberrant data points of any kind are often assumed to be “bad.” Like gold nuggets in the pan of a prospector, they may actually be positive, depending on the context. To a businessman, they may represent an untapped market; to a health care researcher, they may represent a population that turns out to be immune to a particular disease; to a scientist, they may represent a subgroup of materials more resistant to a particular physical strain of some kind. If you find yourself in a disaster or horror film with a high body count, you don’t want to be among the “normal” population, because the norm is to end up dead; only the outliers survive in The Poseidon Adventure and Aliens.
Sometimes outliers have no significance at all, due to luck of the draw. Probability theory only tells us that over time we can normally expect a certain set of events to occur with a given frequency, but it does not guarantee it; I wouldn’t bet on your chances of rolling snake eyes a hundred times in a row or getting three straight loner hands in Euchre, but it is a fallacy to believe that it can’t happen. Theoretically, all of your data points could be outliers in a given trial of any size, simply by random chance, no matter how infinitesimal it might be. Moreover, a certain number of outliers can always be expected, but that number may vary significantly depending on what the underlying distribution of the data is. In these cases, the presence of many outliers indicates that the dataset is highly skewed and comes from a fat-tailed distribution, or from a mix of one or more distributions.[4] This presents researchers with an even more subtle temptation to continue using popular methods of statistical analysis that are meaningful only with the distribution they have chosen, which more often than not is the Gaussian or “normal” distribution. Researchers in many fields prefer this distribution because there are far fewer statistical tests available for the others, so it is not in their interest to recognize outliers as evidence of some other underlying distribution. In fact, it is frightening to see how little goodness-of-fit tests are applied – particularly in medical research, which I had some incomplete training in back around my grad school days. This is part of a larger pattern of unconscionable misuse of statistics that is standard fare in medical research – which is exactly the one field in which they can do the most damage to ordinary people.
At least half the research published today contains at least one statistical error, and just one is enough to render any study invalid; this figure does not take into account the logical errors that may accompany incorrect use of statistical methods and also render them useless or misleading.[5] When I brought up this figure to a health care professional I know, she told me that it is standard fare in her field to simply not publish negative findings, which is likewise dishonorable, because it amounts to suppressing all but the desired evidence. Outliers in particular are easy game for academics who are at best negligent in learning how to use their statistical tools, and at worst are hell-bent on reaching particular conclusions. This is particularly true when the presence of outliers indicates that the underlying theory is flawed, or that prohibitively expensive data collection methods are necessary. No scholar ever wants to admit that a theory they’ve invested years of thought and personal prestige into is wrong, especially when money is at stake and applying more rigorous statistical techniques would require a lot more brain work. Ethical issues of this kind are not uncommon with outlier detection, although SQL Server DBAs are less likely to encounter them if their main use case scenarios are merely looking for data quality problems or doing exploratory data mining, to ferret out possible relationships between variables for further testing.

The Enigma of Benford’s Law

Both types of DBAs, however, can make use of Benford’s Law, a strikingly different and somewhat mysterious form of outlier detection that can be used by the SQL Server community right off the bat. The gist of it is that the leading digits in many number series often follow a particular distribution, unless there is interference by some non-random process. When this occurs it is often regarded as evidence of intelligent interference, especially when there is something to be gained by it; for this reason, it is most often used to detect fraud, which really amounts to an intentional degradation of data quality. As mathematicians Arno Berger and Theodore P. Hill note, “For instance, the U.S. Internal Revenue Service uses BL to ferret out suspicious tax returns, political scientists use it to identify voter fraud, and engineers to detect altered digital images.”[6] One of my favorite uses to date has been to uncover the kind of academic fraud I discussed in the last section, particularly in health care research and the discipline of economics.[7] West Virginia University’s Mark Nigrini, a noted expert on Benford’s Law, was one of the pioneers in using it to identify financial fraud, which seems to be its most popular use case. Williams College math Prof. Steven J. Miller has published several anecdotes of embezzlement uncovered through Benford’s Law and mentions a few less colorful uses that might be of use to DBAs, like “analyzing round-off errors” and “determining the optimal way to store numbers.”[8] Another application that might be of interest to DBAs is “computer disk space allocation.”[9] It has also been applied to such dry subjects as hydrology.[10] Its more glamorous use in exposing swindlers is what earned Benford’s Law a starring role on primetime television in 2006, when it was used by the good guys to catch a thief in an episode of the crime series NUMB3RS.[11]
Demonstrating that Benford’s Law works is something altogether different than stating definitively why it does. Astronomer Simon Newcomb was the first to notice it when he was flipping through a reference on logarithm tables in 1881 and saw that pages for logs starting with 1 and 2 were far more common than those starting with 8 and 9.[12] Physicist Frank Benford got his name attached instead of Newcomb because he demonstrated in 1938 that it held for “over 20,000 numbers from sources as diverse as Readers’ Digest articles, street addresses of American Men of Science, atomic weights, population sizes, drainage rates of rivers, and physical constants.”[13] It is actually more of an observation, like Moore’s Law, but even more enigmatic in that explanations of the phenomenon to date have fallen short. Some observers link it to scaling laws like Zipf’s Law, which is apparently an instance of the scaling laws generalized by Benoit Mandelbrot (1924–2010)[14] and popularized in chaos theory.[15] Hill is credited with providing the most tenable theory to date, which statistician R.M. Fewster says is a “measure-theoretical proof that data drawn from a random mix of different distributions—rather like Frank Benford’s original 20,000—will ultimately converge to Benford’s law.”[16] Fewster attempts to “provide nothing rigorous, but provide a simple, intuitive explanation of why and when the law applies” that is helpful in finding use cases. Yet even Berger and Hill concurred with Fewster in a recent paper in which they take issue with recent explanations, then conclude with the statement that it “remains mysterious.”[17] Anyone interested in getting to the bottom of that mystery can begin at the Benford Online Bibliography at http://www.benfordonline.net/, which is a plentiful source for all things Benford-related.
Like other data mining techniques, we can still make use of it without understanding why it works – just as bumblebees don’t stop flying merely because scientists can’t explain how they apparently defy the laws of aerodynamics.[18] To put it to use, we’ll first build a lookup table, using the Data Definition Language (DDL) code below. Note that I’m using a bigint identity column for personal reasons and hosting both the table and its associated stored procedure in a Calculations schema, which may not fit your particular use cases. The identity column also starts with a zero base, to represent the digit zero. After the DDL statement, I populate it using constants provided in a couple of articles published by Nigrini.[19]

Figure 1: DDL Code for the BenfordTable

CREATE TABLE [Calculations].[BenfordTable](
       [ID] [bigint] IDENTITY(0,1) NOT NULL,
       [ExpectedPercentageDigit1] [decimal](5, 5) NULL,
       [ExpectedPercentageDigit2] [decimal](5, 5) NULL,
       [ExpectedPercentageDigit3] [decimal](5, 5) NULL,
       [ExpectedPercentageDigit4] [decimal](5, 5) NULL
)


INSERT INTO Calculations.BenfordTable
(ExpectedPercentageDigit1, ExpectedPercentageDigit2,
ExpectedPercentageDigit3, ExpectedPercentageDigit4)

VALUES (NULL, 0.11968, 0.10178, 0.10018),
(0.30103, 0.11389, 0.10138, 0.10014),
(0.17609, 0.10882, 0.10097, 0.10010),
(0.12494, 0.10433, 0.10057, 0.10006),
(0.09691, 0.10031, 0.10018, 0.10002),
(0.07918, 0.09668, 0.09979, 0.09998),
(0.06695, 0.09337, 0.09940, 0.09994),
(0.05799, 0.09035, 0.09902, 0.09990),
(0.05115, 0.08757, 0.09864, 0.09986),
(0.04576, 0.08500, 0.09827, 0.09982)
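The constants above aren’t arbitrary: the first-digit column follows Benford’s formula log10(1 + 1/d), and the later-digit columns sum that same formula over every possible leading prefix. A short Python sketch (illustrative only; the lookup table above is what the procedure actually reads) regenerates the first two columns:

```python
from math import log10

# first digit d (1-9): P(d) = log10(1 + 1/d)
first = {d: log10(1 + 1 / d) for d in range(1, 10)}

# second digit d (0-9): sum over first digits k of log10(1 + 1/(10k + d))
second = {d: sum(log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
          for d in range(10)}

print(round(first[1], 5))   # 0.30103, ExpectedPercentageDigit1 for ID 1
print(round(second[0], 5))  # 0.11968, ExpectedPercentageDigit2 for ID 0
```

Being able to regenerate the table on demand is also a handy integrity check on the hand-keyed constants, since a single transposed digit in a lookup table of this kind can silently skew every comparison downstream.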

Figure 2: Comparison Procedure with Dynamic SQL and Windowing Functions

CREATE PROCEDURE Calculations.BenfordComparisonSP
@SchemaAndTableName as nvarchar(128) = NULL, @ColumnName AS nvarchar(128) = NULL
AS
BEGIN
DECLARE @SQLString nvarchar(max)

SET @SQLString =
'WITH FirstTempCTE
(RecordCount, Digit1, Digit2, Digit3, Digit4)
AS (SELECT Count(*) OVER (PARTITION BY 1) AS RecordCount,
(' + @ColumnName + ' / CAST(''1'' + REPLICATE(''0'', LEN(CAST(' + @ColumnName + ' AS VARCHAR(255))) - 1) AS bigint)) % 10 AS Digit1,
(' + @ColumnName + ' / CAST(''1'' + REPLICATE(''0'', LEN(CAST(' + @ColumnName + ' AS VARCHAR(255))) - 2) AS bigint)) % 10 AS Digit2,
(' + @ColumnName + ' / CAST(''1'' + REPLICATE(''0'', LEN(CAST(' + @ColumnName + ' AS VARCHAR(255))) - 3) AS bigint)) % 10 AS Digit3,
(' + @ColumnName + ' / CAST(''1'' + REPLICATE(''0'', LEN(CAST(' + @ColumnName + ' AS VARCHAR(255))) - 4) AS bigint)) % 10 AS Digit4
FROM ' + @SchemaAndTableName + '
WHERE ' + @ColumnName + ' IS NOT NULL),

SecondTempCTE
(RecordCount, Digit, CountForDigit, DigitNumber)
AS (SELECT DISTINCT RecordCount, Digit1, Count(Digit1) OVER (PARTITION BY Digit1) AS CountForDigit1, ''CountForDigit1'' AS DigitNumber
FROM FirstTempCTE
UNION ALL
SELECT DISTINCT RecordCount, Digit2, Count(Digit2) OVER (PARTITION BY Digit2) AS CountForDigit2, ''CountForDigit2'' AS DigitNumber
FROM FirstTempCTE
UNION ALL
SELECT DISTINCT RecordCount, Digit3, Count(Digit3) OVER (PARTITION BY Digit3) AS CountForDigit3, ''CountForDigit3'' AS DigitNumber
FROM FirstTempCTE
UNION ALL
SELECT DISTINCT RecordCount, Digit4, Count(Digit4) OVER (PARTITION BY Digit4) AS CountForDigit4, ''CountForDigit4'' AS DigitNumber
FROM FirstTempCTE)

-- any zeros in the results occur because there are a handful of zero values
SELECT Digit, CountForDigit1, CountForDigit2, CountForDigit3, CountForDigit4, PercentageDigit1, PercentageDigit2, PercentageDigit3, PercentageDigit4,
PercentageDigit1 - (ExpectedPercentageDigit1 * 100) AS DeviationFromExpectedPercentageDigit1,
PercentageDigit2 - (ExpectedPercentageDigit2 * 100) AS DeviationFromExpectedPercentageDigit2,
PercentageDigit3 - (ExpectedPercentageDigit3 * 100) AS DeviationFromExpectedPercentageDigit3,
PercentageDigit4 - (ExpectedPercentageDigit4 * 100) AS DeviationFromExpectedPercentageDigit4,
(Abs((PercentageDigit1 / 100) - ExpectedPercentageDigit1) - (1 / (2.0 * RecordCount)))
/ Power((ExpectedPercentageDigit1 * (1 - ExpectedPercentageDigit1)) / RecordCount, 0.5) AS NigriniZStatisticDigit1,
(Abs((PercentageDigit2 / 100) - ExpectedPercentageDigit2) - (1 / (2.0 * RecordCount)))
/ Power((ExpectedPercentageDigit2 * (1 - ExpectedPercentageDigit2)) / RecordCount, 0.5) AS NigriniZStatisticDigit2,
(Abs((PercentageDigit3 / 100) - ExpectedPercentageDigit3) - (1 / (2.0 * RecordCount)))
/ Power((ExpectedPercentageDigit3 * (1 - ExpectedPercentageDigit3)) / RecordCount, 0.5) AS NigriniZStatisticDigit3,
(Abs((PercentageDigit4 / 100) - ExpectedPercentageDigit4) - (1 / (2.0 * RecordCount)))
/ Power((ExpectedPercentageDigit4 * (1 - ExpectedPercentageDigit4)) / RecordCount, 0.5) AS NigriniZStatisticDigit4
FROM (SELECT RecordCount, Digit, IsNull(CountForDigit1, 0) AS CountForDigit1, IsNull(CountForDigit2, 0) AS CountForDigit2, IsNull(CountForDigit3, 0) AS CountForDigit3, IsNull(CountForDigit4, 0) AS CountForDigit4,
IsNull(CountForDigit1, 0) * CAST(100 AS decimal(38,19)) / (CAST(RecordCount AS decimal(38,19))) AS PercentageDigit1, IsNull(CountForDigit2, 0) * CAST(100 AS decimal(38,19)) / (CAST(RecordCount AS decimal(38,19))) AS PercentageDigit2,
IsNull(CountForDigit3, 0) * CAST(100 AS decimal(38,19)) / (CAST(RecordCount AS decimal(38,19))) AS PercentageDigit3, IsNull(CountForDigit4, 0) * CAST(100 AS decimal(38,19)) / (CAST(RecordCount AS decimal(38,19))) AS PercentageDigit4
FROM SecondTempCTE AS T1
       PIVOT (MAX(CountForDigit) FOR DigitNumber IN (CountForDigit1, CountForDigit2, CountForDigit3, CountForDigit4)) AS pvt) AS T2
       LEFT JOIN Calculations.BenfordTable AS T3
ON Digit = T3.ID
      WHERE Digit IS NOT NULL'

--SELECT @SQLString -- uncomment this to debug string errors
EXEC (@SQLString)
END

…………There’s a lot going on in Figure 2, including dynamic SQL, a pivot, some windowing functions and a digit parsing trick I heavily adapted from Gordon Linoff’s example in the StackOverflow thread “Parsing an Integer Field in SQL.”[20] At present, it only works on tinyint, smallint, int and bigint fields, because handling data types like floats and numerics that allow decimal places will require some extra logic I haven’t implemented yet. I will if I get a request to do so or discover a need to; for now, my purpose is just to demonstrate how Benford’s Law might be applied in a SQL Server database. Some cautions are in order here. First, I don’t allow spaces in my T-SQL object names, so you’ll have to add brackets to the stored procedure manually if you want to run it against tables and columns that have them. Secondly, I also haven’t done any optimizations on the procedure at all, or even checked the execution plan yet, so test this out before running it against large tables and views. Third, there are probably a lot of undiscovered logical errors embedded here, since I haven’t tested it thoroughly. It’s also important to keep in mind that if you’re using it against a column that only has one digit, you will naturally get nulls returned for the calculations performed against the second through fourth columns, which I momentarily forgot (causing a moment of panic and the invention of a few brand new curse words). Also be aware that I may not be calculating Nigrini’s Z-Statistic correctly, since I wasn’t able to find any complete examples online to compare my results against. I derived the code from the equations on the Wikipedia page for Benford’s Law[21] and in Durtschi, et al.’s “The Effective Use of Benford’s Law to Assist in Detecting Fraud in Accounting Data.”[22] The latter has directions on how to interpret it though.
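The digit-parsing arithmetic is easier to see once it is stripped of the dynamic SQL: divide by a power of ten to shift the wanted digit into the ones place, then take the remainder modulo 10. A Python translation of that trick (a sketch of the logic only, not the procedure itself, with the same NULL-style behavior for numbers that are too short):

```python
def nth_digit(value, n):
    """Return the nth digit from the left of a positive integer, or None
    if the number has fewer than n digits -- the same divide-by-a-power-
    of-ten-then-mod-10 arithmetic the dynamic SQL builds with REPLICATE."""
    digits = len(str(value))
    if n > digits:
        return None  # mirrors the NULLs the procedure returns for short values
    return (value // 10 ** (digits - n)) % 10

print([nth_digit(4725, n) for n in (1, 2, 3, 4)])  # → [4, 7, 2, 5]
```

The T-SQL version builds the divisor as a string – a '1' followed by the right number of '0's – only because that was a convenient way to construct a power of ten inside dynamic SQL; the arithmetic is identical.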
…………If all goes according to plan, however, executing a query like the one in Figure 3 ought to produce a single table with the values in the three tables below, which provide the counts, percentages, deviations and Nigrini Z-stats for the first four digits of that column. Figure 4 does the same for the LactateDehydrogenase column in the same table of the Duchennes database, which is one of three datasets I will be using for practice purposes over the course of my next three or four tutorial series. This tiny nine-kilobyte table of data on a particularly debilitating form of muscular dystrophy is one of many health care research datasets made publicly available through Vanderbilt University’s Department of Biostatistics. In future articles, I will use the much larger Higgs Boson dataset made available through the University of California at Irvine’s Machine Learning Repository and transcriptions of the Voynich Manuscript, an inscrutable medieval tome with an encryption scheme so obscure that even the National Security Agency (NSA) can’t crack it. The problem with using the Voynich Manuscript for this week’s tutorial is that it is the result of an intelligent author rather than a random natural process, one who intentionally kept his words shrouded in mystery and has succeeded for nearly six centuries. This means that it is highly unlikely to follow Benford’s Law. Likewise, the meaning of the seven gigs of float values found in the Higgs Boson dataset is still unclear to me, since I haven’t yet found documentation on its blandly named columns. The significance of the Age column in the Duchennes dataset is self-explanatory though. We can expect significant deviation from Benford’s Law here, given the natural limitations on human longevity and the preponderance of female carriers of childbearing age in the survey population. 
I deleted the results for the third and fourth digits of this column, since there were obviously no carriers in the population over the age of 100, let alone 1,000. The LactateDehydrogenase column likewise had no values with four digits, so I deleted the corresponding results for that digit for the sake of brevity.

Figure 3: Results of the BenfordComparisonSP on the Age Column of the Duchennes Database
EXEC   [Calculations].[BenfordComparisonSP] @SchemaAndTableName = N'Health.DuchennesTable', @ColumnName = N'Age'


Figure 4: Results of the BenfordComparisonSP on the LactateDehydrogenase Column

…………Some of the stats for Digit 0 are of course null, given that it can’t be the first digit of a number. Note that there is an unexpectedly high number of LactateDehydrogenase values that start with 1 and 2, but that the deviations for the rest of the values and digits drop off sharply after that. I knew very little about this enzyme before converting the Duchennes dataset into a SQL Server database, so I figured that it might be a candidate for Benford’s Law, but the data shows that it deviates from it almost as much as the Age column does. After the fact, I looked up the normal value range at MedlinePlus and sure enough, it runs from 105 to 333 IU/L (international units per liter).[23] This is why we see such high deviations for 1 and 2 on the first digit, but far smaller deviations on the second and third digits, which are not affected much by the natural range limits. In this case, the unusually high number of values starting with 1 and 2 qualify as outliers if we’re analyzing them through the prism of a Benford distribution, but it is easy to pinpoint a highly skewed natural distribution as the cause.
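…………The effect of a bounded natural range like 105-333 IU/L on the first digit is easy to illustrate with a quick simulation. The sketch below draws uniform random values across the MedlinePlus range; the real enzyme readings are surely not uniformly distributed, so this is only a rough demonstration that the bounds alone are enough to pile nearly all of the first digits onto 1 and 2.

```python
import random
from collections import Counter

random.seed(42)

# Simulate 10,000 readings spread uniformly across the normal range cited
# by MedlinePlus (105-333 IU/L). The uniform assumption is mine; the point
# is only that the range limits dictate the first-digit distribution.
readings = [random.randint(105, 333) for _ in range(10000)]

first = Counter(str(r)[0] for r in readings)
for d in "123456789":
    print(d, first.get(d, 0) / len(readings))
```

Digits 1 and 2 end up with roughly 85 percent of the mass between them, digit 3 takes the remainder, and digits 4 through 9 never appear at all, which mirrors the pattern in the LactateDehydrogenase results above.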

Uses and Misuses of Benford’s Law

…………Like any other statistical tool, Benford’s Law has to be used as carefully as a scalpel, because the slightest slip beyond the proper context and mooring in logic can render it useless or misleading. The literature on Benford’s Law is chock full of conditions that can be used by DBAs and .Net programmers to develop use cases though. The article by Durtschi, et al. states for example that “if the mean of a particular set of numbers is larger than the median and the skewness value is positive, the data set likely follows a Benford distribution. It follows that the larger the ratio of the mean divided by the median, the more closely the set will follow Benford’s law.”[24] They also point out a few common sense examples where Benford’s Law doesn’t apply, such as numbers assigned by humans (like operational codes) and numbers with specific minimum and maximum bounds.[25] The Age column discussed above probably falls under the rubric of the latter. Furthermore, they also warn that in regards to detecting financial fraud, “if there are only a few fraudulent transactions, a significant difference will not be triggered even if the total dollar amount is large. Second, statistically, if the account being tested has a large number of transactions, it will take a smaller proportion of inconsistent numbers to trigger a significant difference from expected than it would take if the account had fewer observations.”[26] It is also apparently possible to jump to conclusions in the opposite direction, given that in a case from 2006, the falsified data used in a fraudulent cancer study was found to have nonetheless followed Benford’s Law.[27]  The Wikipedia page also lists some sequences that are likely to fit a Benford distribution, such as Fibonacci numbers, factorials, powers of 2 and “the powers of almost any other number”, and some that are unlikely, like “square roots and reciprocals.”[28]
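…………The Wikipedia claim about powers of 2 is easy to verify empirically. The Python sketch below compares the first-digit frequencies of the first 1,000 powers of 2 against the Benford proportions, and also applies the mean-greater-than-median rule of thumb that Durtschi, et al. cite; it is a quick demonstration rather than anything rigorous.

```python
import math
from collections import Counter

# First digits of 2^1 through 2^1000
powers = [2 ** k for k in range(1, 1001)]
counts = Counter(int(str(p)[0]) for p in powers)

# Observed vs. expected (Benford) first-digit proportions
for d in range(1, 10):
    observed = counts[d] / len(powers)
    expected = math.log10(1 + 1 / d)
    print(d, round(observed, 4), round(expected, 4))

# The heuristic: positively skewed sets with mean > median are better
# candidates for a Benford distribution. Exponential growth guarantees it here.
mean = sum(powers) / len(powers)
median = (powers[499] + powers[500]) / 2
print("mean > median:", mean > median)
```

About 301 of the 1,000 powers begin with a 1, which matches the expected log10(2) ≈ 0.301 almost exactly, and the mean dwarfs the median, just as the heuristic predicts for a Benford-friendly set.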
…………One of the most well-written resources on the topic is accountant Tommie W. Singleton’s “Understanding and Applying Benford’s Law,” which has succinct, convenient lists of use cases, including “credit card transactions, purchase orders, loan data, customer balances, journal entries, stock prices, accounts payable transactions, inventory prices and customer refunds.”[29] He also lists “examples of data sets that are not likely to be suitable for Benford’s Law include: airline passenger counts per plane; telephone numbers; data sets with 500 or fewer transactions; data generated by formulas (e.g., YYMM#### as an insurance policy number); and Data restricted by a maximum or minimum number (e.g., hourly wage rate).”[30] Large, randomly generated datasets of unassigned numbers without minimum and maximum limits are ideal, particularly those from exponential series of several orders of magnitude.[31] He also writes that “The theory does not hold true for data sets in which digits are predisposed to begin with a limited set of digits. For instance, Benford’s Law will not hold true for data sets of human heights, human weights and intellectual quotient (IQ) scores. Another example would be small insurance claims (e.g., between US $50 and US $100). The theory also does not hold true when a data set covers only one or two orders of magnitude.”
…………In addition, we need to keep in mind that when applying this to subjects like accounting that are human inventions rather than observations of purely physical phenomena, we can’t necessarily expect them to stay static for deep reasons involving mind-blowing topics like free will and determinism. I suspect that is one of the reasons behind the puzzling increase in zeroes over time in the accounting field, which Jialan Wang discusses in depth in an interesting article on Benford’s Law.[32] Keep in mind that fraud detection is based on the principle that deviation from expected random values is often a sign of purposeful interference by a non-random process, i.e. a human intelligence. In the case of the mysterious increase in zeroes in the accounting field, we may be seeing the accidental results of half-conscious interference by human intelligences, who are unwittingly affecting the outcome of Benford’s Law by such gradual methods as routine changes in accounting practices.
To make a long story short, use your head when applying Benford’s Law. That includes using the code provided here, given that I have no professional experience with this and am learning as I go. If you can grasp Benford’s Law, the rest of the series will be a breeze – at least until the final article on a metric called Mahalanobis Distance. That will probably be the most difficult installment and will help me transition into the Information Measurement with SQL Server series, which will include implementations of various other distances and divergences. Between now and then I’ll probably tackle such simpler means of outlier detection as Z-Scores, Grubbs’ Test, the Tietjen-Moore Test, the Generalized Extreme Studentized Deviate (GESD) Test and Cook’s Distance, perhaps along with Interquartile Range, Dixon’s Q-Test, Peirce’s Criterion, Chauvenet’s Criterion and the Modified Thompson Tau test. I’m not certain of the order I’ll tackle them in yet or whether or not some of the topics might be combined, given that I haven’t coded them yet. The calculations and logic for some of these are a lot easier on the brain than their dry-sounding names, while others like Z-Scores are commonly known statistics. Along the way I will probably also write an article on Visual Outlier Detection with Reporting Services using the various graphs and charts RS provides, as well as touch on the SSDM Clustering algorithm again. As with the Rickety Stairway series on SSDM, I’m bound to make mistakes, but will probably be able to provide some benefit to other DBAs and data miners as I learn through misadventure. Due to my inexperience, I expect many of my articles to qualify as outliers in and of themselves, but by the end of the series, I aim to acquire the skills to recognize them as such, quickly and accurately.



[1] p. xii, LaRose, Daniel T., 2005, Discovering Knowledge in Data: An Introduction to Data Mining. Wiley-Interscience: Hoboken, N.J.

[2] Which I have yet to do.

[3] ASTM International (formerly known as the American Society for Testing and Materials), which “publishes voluntary consensus technical standards for a wide range of materials, products, systems, and services,” sells a guide called Standard Practice for Dealing With Outlying Observations, if you’ve got $48 to spare for a .pdf copy (the hard cover’s the same price). That isn’t happening on my budget, but there’s the link if you want a more professional guide to the topic. I originally found out about the ASTM publication from the Wikipedia page “Outlier” at http://en.wikipedia.org/wiki/Outlier.

[4] IBID. To make sure I didn’t leave anything out, I consulted the Wikipedia page “Outlier” at http://en.wikipedia.org/wiki/Outlier, which is where I learned that mixed distributions can also be problematic.

[5] For a quick gloss on the topic of the misuse of stats in health care, see the Influential Points.com webpage “Statistical Errors in Research: Use and Misuse of Statistics in Biology,” at http://influentialpoints.com/Training/Statistical_Errors_in_Research_Use_and_misuse_of_statistics_in_biology.htm . Earlier this year another meta-analysis was released showing that half of the journal articles surveyed contained statistical errors, but I wasn’t immediately able to find a reference to that particular one. The webpage above is sufficient to help someone interested in further reading get started.

[6] Berger, Arno and Hill, Theodore P., 2011, “Benford’s Law Strikes Back: No Simple Explanation in Sight for Mathematical Gem,” published by Springer Science Business Media, LLC, Vol. 33, No. 1. Available online at http://people.math.gatech.edu/~hill/publications/PAPER%20PDFS/BenfordsLawStrikesBack2011.pdf.

[7] Tödter, Karl-Heinz, 2009, “Benford’s Law as an Indicator of Fraud in Economics,” pp. 39-351 in German Economic Review, August 2009. Vol. 10, No. 3. I think I merely read the abstract of this one, or references to it, but the important thing for my point is to know that the article exists.

[8] Miller, Steven J., 2008, “Benford’s Law and Fraud Detection, or Why the IRS Should Care About Number Theory!” Published as an outline at the Bronfman Science Lunch, Oct. 21, 2008, at Williams College. Available online at http://web.williams.edu/Mathematics/sjmiller/public_html/math/talks/Benford_Williams20min.pdf.

[9] See Fewster, R.M., 2009, “A Simple Explanation of Benford’s Law,” pp. 26-32 in The American Statistician, February 2009, Vol. 63, No. 1. 2009. Available online at https://www.stat.auckland.ac.nz/~fewster/RFewster_Benford.pdf.

[10] What good is a blog post on stats without a little “dry humor?”

[11] See the webpage “Benford’s Law” at the Wolfram Mathworld website, published by Eric W. Weisstein.

[12] See Bogomolny, Alexander, 2014, “Benford’s Law and Zipf’s Law,” published at the Interactive Mathematics Miscellany and Puzzles website. Available online at http://www.cut-the-knot.org/do_you_know/zipfLaw.shtml. The tale is related in more depth in Fewster, p. 26.

[13] p. 26, Fewster.

[14] See Bogomolny.

[15] For a readable introduction to that topic for non-mathematicians, I highly recommend, Gleick, James, 1988, Chaos: Making a New Science. Penguin: New York.

[16] p. 27, Fewster.

[17] p. 90, Berger and Hill.

[18] At least according to a famous quip by Igor Sikorsky (1889-1972), one of the inventors of the helicopter.

[19] I originally used Weisstein’s figures, until I found a more detailed table in Nigrini, Mark J. and Mittermaier, Linda J., 1997, “The Use of Benford’s Law as an Aid in Analytical Procedures,” pp. 52-67 in Auditing: A Journal of Practice and Theory, Fall 1997. Vol. 16, No. 2. Reprinted at the University of North Carolina at Charlotte’s Belk College of Business web address http://belkcollegeofbusiness.uncc.edu/cwiggins/acct6210/Lecture%20Notes/Nigrini%20article%20pdf1302507.pdf. This work in turn cites Nigrini, M. J., 1996, “Taxpayer Compliance Application of Benford’s Law,” pp. 72-92 in Journal of the American Taxation Association. Vol. 18, No. 1.

[20] See Linoff, Gordon, 2013, response to the thread “Parsing an Integer Field in SQL,” published Jan. 14, 2013 at the StackOverflow.com website. Available online at http://stackoverflow.com/questions/14319107/parsing-an-integer-field-in-sql.

[21] See the Wikipedia page “Benford’s Law,” available at http://en.wikipedia.org/wiki/Benford’s_law.

[22] pp. 25-26, Durtschi, Cindy; Hillison, William and Carl Pacini, 2004, “The Effective Use of Benford’s Law to Assist in Detecting Fraud in Accounting Data,” pp. 17-33 in Journal of Forensic Accounting, Vol. 5. Available online at the University of South Florida St. Petersburg webpage http://faculty.usfsp.edu/gkearns/Articles_Fraud/Benford%20Analysis%20Article.pdf.

[23] See the MedlinePlus webpage “Lactate Dehydrogenase Test,” available at http://www.nlm.nih.gov/medlineplus/ency/article/003471.htm.

[24] p. 24, Durtschi, et al. They cite “Wallace (2002).”

[25] IBID., pp. 23-24.

[26] IBID., p. 26.

[27] Diekmann, Andreas and Jann, Ben, 2010, “Benford’s Law and Fraud Detection: Facts and Legends,” pp. 397–401 in German Economic Review. Vol. 11, No. 3. Available online at http://www.socio.ethz.ch/socio/news/docs/Diekmann_Jann_GER_2010_1.pdf.

[28] See the Wikipedia page “Benford’s Law,” available at http://en.wikipedia.org/wiki/Benford’s_law.

[29] Singleton, Tommie W., “Understanding and Applying Benford’s Law,” undated article published at the ISACA website and retrieved Aug. 11, 2014. Available online at http://www.isaca.org/Journal/Past-Issues/2011/Volume-3/Pages/Understanding-and-Applying-Benfords-Law.aspx.

[30] IBID. I added the semi-colons myself in order to convert the list into a sentence, which I hope is permissible.

[31] IBID.

[32] Wang, Jialan, 2011, “Benford’s Law: A Revised Analysis,” published Oct. 24, 2011 at the Studies in Everyday Life: Investigations and Experiments on Food, Ethics, Economics, and the Environment website. Available online at http://econerdfood.blogspot.com/2011/10/benfords-law-revised-analysis.html.


Stay Tuned…for a SQL Server Tutorial Series Juggling Act

by Steve Bolton

…………If all goes according to plan, my blog will return in a few weeks with two brand new series, Using Other Data Mining Tools with SQL Server and Information Measurement with SQL Server. Yes, I will be attempting what amounts to a circus act among SQL Server bloggers, maintaining two separate tutorial series at the same time. Like my series A Rickety Stairway to SQL Server Data Mining, it might be more apt to call them mistutorials; once again I’ll be tackling subject matter I’m not yet qualified to write about, in the hopes of learning as I go. The first series will compare and contrast the capabilities of SSDM against other tools in the market, with the proviso that it is possible to run them on data stored in SQL Server. That eliminates a lot of applications that run only on Linux or otherwise cannot access SQL Server databases right off the bat. The software packages I survey may include such recognizable names as RapidMiner and WEKA, plus a few I promised others to review long ago, like Autobox and Predixion Software. The latter is the brainchild of Jamie MacLennan and Bogdan Crivat, two former members of Microsoft’s Data Mining Team who were instrumental in the development of SSDM.
…………The second series of amateur tutorials may be updated more frequently because those posts won’t require lengthy periods of time to evaluate unfamiliar software. This will expand my minuscule knowledge of data mining in a different direction, by figuring out how to code some of the building blocks used routinely in the field, like Shannon’s Entropy, Bayes factors and the Akaike Information Criterion. Not only can such metrics be used in the development of new mining algorithms, but they can be applied out-of-the-box to answer myriad basic questions about the type of information stored in our SQL Server databases – such as how random, complex, ordered and compressed it might be. No mining company would ever excavate a new quarry without first performing a geological survey of what precious metals might be beneath the surface; likewise, it may be helpful for data miners to have a surface impression of how much useful information might be stored in our tables and cubes, before digging into them with sophisticated algorithms that can have formidable performance costs. Scores of such measures are scattered throughout the mathematical literature that underpins data mining applications, so it may take quite a while to slog through them all; while haphazardly researching the topic, I ran across a couple of quite interesting measures of information that seem to have been forgotten. I will try to make these exercises useful to SQL Server users by providing T-SQL and MDX code, Common Language Runtime (CLR) functions in Visual Basic, and perhaps even short SSDM plug-in algorithms, as well as use cases for when they might be appropriate. To illustrate these uses, I may test them on freely available databases of interest to me, like the Higgs Boson dataset provided by the University of California at Irvine’s Machine Learning Repository.
I may also make use of a tiny 9-kilobyte dataset on Duchenne’s form of muscular dystrophy, which Vanderbilt University’s Department of Biostatistics has made publicly available, and transcriptions of the Voynich Manuscript, an enigmatic medieval tome with an encryption scheme so obscure that even the National Security Agency (NSA) can’t crack it. Both tutorial series will use the same datasets in order to cut down on overhead and make it easier for readers to follow both. When and if I manage to complete both series, the next distant item on this blog’s roadmap will be a tutorial series on how to use various types of neural nets with SQL Server, which is a topic I had some intriguing experiences with many years ago.

