
## Information Measurement with SQL Server, Part 4.1: The Kullback-Leibler Divergence

**By Steve Bolton**

…………This informal series of tutorials on how to implement a hodgepodge of information metrics in SQL Server is somewhat disorganized by necessity, given that I’m trying to cram every noteworthy type of measure into one series while simultaneously acquainting myself with the topics along the way. That’s going to require some skipping around, but whenever possible, I try to provide what little structure I can by starting off each segment with the simplest and/or most fundamental measures. The formula underlying the Kullback-Leibler Divergence is not quite as easy as certain intuitive distance measures like Euclidean Distance, which belongs to a group of measures that are more appropriate for calculating the separation between data points in clustering algorithms, all of which I’ll deal with later on in this series if all goes according to plan. Yet neither is the Kullback-Leibler Divergence (KL Divergence) much of a burden in terms of comprehension and calculation, as long as we already have concepts like information entropy under our belts.

…………I kicked off this series with a segment on entropic measures because they are such elementary parts of information theory, whose subject matter often overlaps that of this amateur series. I saved Information Measurement with SQL Server, Part 2.4: Conditional and Joint Entropy for the tail end of that segment because it involved comparisons between two entropies rather than one, which is also the case with the topic of this week’s mistutorial; the good news is that it actually takes a little less code and even performs better, with faster calculation times and fewer burdens on server resources. The KL Divergence and its relatives serve a different purpose though: instead of measuring the association between two probability distributions as Conditional and Joint Entropy do, they are used to assess how far apart an approximate distribution is from its target. To compute it, we merely need to divide each actual probability value by its approximate counterpart and plug the results into the LOG operation in Shannon’s Entropy, which I covered in a previous article. Various shades of meaning can be attached to the KL (as is always the case with the fundamental information metrics), but they all boil down to measuring some kind of information loss, in terms of how much we can learn from particular data values when we encounter them, i.e. of “newsworthiness.” The ultimate goal is often to select better approximations by minimizing the KL Divergence, which is useful in a wide range of applications.
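In other words, for each distinct value we divide the actual probability by its approximation, feed the ratio into the logarithm and sum the weighted results. A quick sanity-check sketch of that arithmetic in Python (not part of the T-SQL samples; the toy distributions are made up for illustration):

```python
import math

def kl_divergence(p, q, base=2):
    """Kullback-Leibler Divergence between two discrete distributions.
    Each actual probability is divided by its approximate counterpart and
    the ratio is fed into the LOG operation from Shannon's Entropy.
    Returns None when q has a zero where p does not, i.e. when the target
    is not absolutely continuous with respect to the approximation."""
    if any(pi > 0 and qi == 0 for pi, qi in zip(p, q)):
        return None
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

actual = [0.5, 0.25, 0.25]   # stand-in for the actual proportions
approx = [0.25, 0.25, 0.5]   # stand-in for the approximating distribution
print(kl_divergence(actual, approx))   # 0.25 bits of information loss
print(kl_divergence(actual, actual))   # 0.0: no loss when the distributions match
```

Note that the divergence collapses to zero only when the approximation matches the target exactly, which is why minimizing it selects better approximations.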

**The Cold War Origins of a Foundational Information Metric**

This indispensable information measure was the brainchild of Solomon Kullback (1907-1994) and Richard Leibler (1914-2003), two more of those pioneering American cryptanalysts who never received proper credit for breaking the Japanese and Soviet ciphers in World War II and the early Cold War.[1] While working at the National Security Agency (NSA) at a time when it supposedly did not exist (hence the nickname “No Such Agency”[2]), the two code-breakers outlined this simple yet powerful measure in a classic 1951 paper in *The Annals of Mathematical Statistics*.[3] At the time Claude Shannon’s newborn measure of information entropy hadn’t yet been repurposed for discriminating between two distributions, so their groundbreaking idea opened up a whole new world of use cases for these entropic measures, which were then confined more to coding theory. Its inventors noted in the original paper how the measure can serve as a bridge between Shannon’s Entropy and Fisher Information, an important metric based on variances and covariances that I’ll have to explain in detail later in this series.

…………Both the Fisher Information and KL Divergence are vital to the field of information geometry, but we needn’t look to the cutting edge to find use cases. For example, the KL Divergence has applications in the study of “large deviations,” where it acts as the rate function, and it can be used as an alternative measure of “surprise” in the kind of Bayesian statistics I introduced in the last article, if the approximation is treated as incorrect *a priori* knowledge.[4] The latter might be of more use to SQL Server DBAs and data miners than its original applications in coding theory, where it was interpreted either as a measure of compression or the count of bits needed to derive the incorrect code embodied in the approximation.[5] It can be adapted for statistical hypothesis testing[6], which as I noted in my Outlier Detection series is an uncommon use case in the SQL Server world; the same principles also make it attractive as a measure of the usefulness of mining models though, which users of SQL Server Data Mining (SSDM) might be able to leverage. Later in this long series I hope to cover the Akaike Information Criterion, which performs the same mining purposes by inferring the equivalent of the KL Divergence through calculations of statistical likelihood.[7] It has even served in recent years as an alternative to the distribution matching tasks I covered in the Goodness-of-Fit Testing with SQL Server series.

…………It has many deep relationships with other forms of entropy, hence the frequency of the nickname “Relative Entropy” in the information theory literature. In some situations it acts as the expected value of the Mutual Information[8] measure we covered a couple of articles ago, which is closely related to the type of information gain we’ll tackle in a future article on tree structures. Since the KL Divergence measures information loss, it makes sense that it would be useful in gauging information gain. Furthermore, the KL Divergence intersects in many complex ways with other measures of information distance; for example, when the main formula discussed in Information Measurement with SQL Server, Part 2.2: The Rényi Entropy and Its Kin is treated as a distance measure, it becomes the Rényi Divergence or α-Divergence, of which the KL Divergence is a special case[9]. It is also an F-Divergence derivable from a Bregman Divergence, all of which I’ll explain to the best of my inability later in this segment of the series. Given that it integrates with all of these diverse information measures in myriad ways, it is not surprising that the Kullback-Leibler is probably encountered more often in the data mining and information theory literature than any of these other information distances. These complex interrelationships work to our advantage, however, by enabling us to cast a web of metrics around whatever information we’re trying to capture in our tables and cubes. The KL Divergence is useful in large part because it integrates so well with many of its relatives, which makes it one of the strongest threads in the web.

**Coding the KL**

Since the KL Divergence can also be derived by subtracting the Shannon’s Entropy for the target distribution from another simple measure known as Cross Entropy[10], I included it in the sample T-SQL in Figure 1 as a check against my calculations. The code is structured like the six T-SQL samples in the segment on entropic measures, so it shouldn’t be too hard to follow despite the length of the text. In fact, I just cut and pasted a good part of it, particularly the derivation of probabilities from actual proportions found for the second float column in the same Higgs Boson dataset I’ve been using for practice material for ages[11] and their storage in an @EntropyTable table variable. The second set of probabilities is derived from a Calculations.NormalDistributionCDFFindProbabilityFunction, which is not depicted here but takes the mean and standard deviation for the column and plugs them into the Cumulative Distribution Function (CDF) for the Gaussian distribution (i.e. the bell curve). Note that I substituted an arbitrary value of 0.61 for the standard deviation in this case, simply because it generated non-zero probabilities for every ActualProportion; most other values did not, even when using the real standard deviation, which is a strong clue that Column2 would fail a normality test based on the Kullback-Leibler Divergence. In fact, the KL isn’t even defined in many cases like this. One of the restrictions on the Kullback-Leibler Divergence is that the target distribution must be “absolutely continuous” with respect to its approximation, which signifies that the counterpart of each non-zero probability must also be non-zero; this means that we can use the Value column as a sort of primary key for the @EntropyTable, but it’s also an irritation that disqualifies many distributions from comparison.
The first set of probabilities can never be zero because we’re only storing ones that have observed values in the dataset, but some nil values might sneak into the second set of probabilities if we populated it through a distribution that wasn’t absolutely continuous with the first. The @IsAbsolutelyContinuous variable acts as a check against this and sets the KL results to NULL when there are any zero values for the second probability. Both the ActualProportion and GaussianProbability indeed add up to 1 in this case, as can be verified by uncommenting the validation block in the middle.

…………The target probability (the ActualProportion in this case) is divided by the approximation (the GaussianProbability in this sample) and the result is bubbled up to what amounts to the Shannon’s Entropy formula, featuring the usual LOG, multiplication and summation operations, except with a different input. I also calculate the Cross Entropy and Shannon’s Entropy in the same pass, in order to compute the simple check in the last SELECT against my calculation of the KL Divergence. Figure 2 depicts only slight discrepancies for this check at really small precisions, which could be due to rounding errors alone. The code could use some sprucing up (the CASE checks might not even be necessary) but it gets the point across and can serve as a springboard for anyone who wants to write production code based on it. Note how in Figure 2 the KL Divergence is much higher for Column1 in the top graphic than for Column2 in the bottom one. After using these two columns as practice data for the last few tutorial series, I know that Column2 is somewhat Gaussian and Column1 definitely is not, so I thought they’d make an ideal contrast. Since Column1 has a different distribution, I had to substitute a different fake standard deviation value to simulate the property of absolute continuity. I also had to set an arbitrary lag of Value - 0.1 to cover cases when the first proportion in the @EntropyTable generated a GaussianProbability of zero.
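The identity behind that check is that the KL Divergence equals the Cross Entropy minus the Shannon’s Entropy of the target. A tiny Python sketch (with toy probabilities, offered only as a sanity check on the logic rather than a substitute for the T-SQL) confirms it:

```python
import math

def shannons_entropy(p, base=2):
    """Shannon's Entropy of a discrete distribution."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

def cross_entropy(p, q, base=2):
    """Cross Entropy of target p against approximation q."""
    return -sum(pi * math.log(qi, base) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q, base=2):
    """The direct KL Divergence calculation, for comparison."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # the target distribution
q = [0.25, 0.25, 0.5]   # the approximation

# the KL Divergence should equal Cross Entropy minus Shannon's Entropy,
# up to floating point rounding error
print(kl_divergence(p, q))                        # 0.25
print(cross_entropy(p, q) - shannons_entropy(p))  # 0.25
```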

** Figure 1: Sample T-SQL Code for the KL Divergence and Cross Entropy**

```sql
DECLARE @LogarithmBase decimal(38,36)
SET @LogarithmBase = 2 -- 2.7182818284590452353602874713526624977 -- 10

DECLARE @KullbackLeiblerDivergence float, @CrossEntropy float, @ShannonsEntropy float, @IsAbsolutelyContinuous bit
--@JeffreysDivergence float,
--@KDivergence float,
--@TopsoeDistance float,
--@JensenShannonDivergence float,
--@JensenDifference float,

DECLARE @Count bigint, @Mean float, @StDev float

SELECT @Count = Count(*), @Mean = Avg(Column2), @StDev = 0.61 --, @StDev = StDev(Column2)
FROM Physics.HiggsBosonTable
WHERE Column2 IS NOT NULL

DECLARE @EntropyTable table
(Value decimal(33,29),
ActualProportion decimal(38,37),
GaussianProbability decimal(38,37),
SummationInputForEntropy float,
SummationInputForKullbackLeibler float,
SummationInputForCrossEntropy float
)

INSERT INTO @EntropyTable
(Value, ActualProportion, GaussianProbability, SummationInputForEntropy, SummationInputForKullbackLeibler, SummationInputForCrossEntropy)
SELECT Value, ActualProportion, GaussianProbability,
ActualProportion * Log(ActualProportion, @LogarithmBase) AS SummationInputForEntropy,
CASE WHEN GaussianProbability <= 0 THEN 0 ELSE ActualProportion * Log(ActualProportion / GaussianProbability, @LogarithmBase) END AS SummationInputForKullbackLeibler,
CASE WHEN GaussianProbability <= 0 THEN 0 ELSE ActualProportion * Log(GaussianProbability, @LogarithmBase) END AS SummationInputForCrossEntropy
FROM (SELECT Value, ActualProportion, Calculations.NormalDistributionCDFFindProbabilityFunction(ActualValueLag, Value, @Mean, @StDev) AS GaussianProbability
	FROM (SELECT Value, ActualProportion, Lag(Value, 1, Value - 0.1) OVER (ORDER BY Value ASC) AS ActualValueLag
		FROM (SELECT Value, ValueCount / CAST(@Count as float) AS ActualProportion
			FROM (SELECT Column2 AS Value, Count(*) AS ValueCount
				FROM Physics.HiggsBosonTable
				GROUP BY Column2) AS T1) AS T2) AS T3) AS T4

-- VALIDATION: uncomment to debug
--SELECT SUM(CAST(ActualProportion AS float)), SUM(CAST(GaussianProbability AS float))
--FROM @EntropyTable

-- if this count is not 0, then the distributions are not absolutely continuous and the KL Divergence is not defined
SELECT @IsAbsolutelyContinuous = CASE WHEN Count(*) > 0 THEN 0 ELSE 1 END
FROM @EntropyTable
WHERE GaussianProbability = 0

SELECT @ShannonsEntropy = ABS(SUM(SummationInputForEntropy)),
@CrossEntropy = ABS(SUM(SummationInputForCrossEntropy)),
@KullbackLeiblerDivergence = SUM(SummationInputForKullbackLeibler)
FROM @EntropyTable

SELECT @IsAbsolutelyContinuous AS IsAbsolutelyContinuous, @ShannonsEntropy AS ShannonsEntropy,
@CrossEntropy AS CrossEntropy, ABS(@CrossEntropy - @ShannonsEntropy) AS CalculationCheck,
ABS(@CrossEntropy - @ShannonsEntropy) - @KullbackLeiblerDivergence AS DiscrepancyWithCrossEntropyCheck,
CASE WHEN @IsAbsolutelyContinuous = 1 THEN @KullbackLeiblerDivergence ELSE NULL END AS KullbackLeiblerDivergence
```

** Figure 2: Results from the First and Second Float Columns in the Higgs Boson Dataset**

…………The accuracy of the CrossEntropy check still needs some work, but the performance is stellar. It runs in just 3 seconds on my rusty abacus of a development machine, which hardly qualifies as a modern production server. The execution plans are quite similar to those in the segment on entropic measures, albeit even simpler. The INSERT accounts for 98 percent of the batch costs, and one Sort operator accounts for 49 percent of its costs, with 45 percent more attributable to the actual Table Insert. That means it probably can’t be optimized much further, unless of course you have precalculated probabilities that can be read directly from a table, in which case most of this code and the associated costs in the INSERT can be sliced in half. It does soak up more RAM and TempDB space than I’d like, but this is only a problem when calculating all of these probabilities on the fly on a little desktop machine.

…………You’ve probably noticed already how I commented out five other measures at the beginning of Figure 1. Since they’re all members of the same family of entropic distance measures, I’ll be tackling them in the next article, which will involve merely pasting the same code and altering some of the math operations in the INSERT. None of them are particularly difficult to translate into T-SQL, so I’ll dispose of them in one fell swoop. This segment will basically consist of diving into a maelstrom of distance measures, many of which are easy to code and also have minimal performance costs; many of them are used in clustering algorithms, which may be the hidden reason why the SSDM functionality I covered in A Rickety Stairway to SQL Server Data Mining, Algorithm 7: Clustering ran so efficiently compared to other built-in algorithms. The overarching reason why theoreticians have developed so many of these distance metrics is that they exhibit a welter of different properties, some of which are more conducive to certain data mining and statistical tasks than others. For example, divergences don’t have to be symmetric, in the sense that measuring in reverse from the destination to the source is not guaranteed to return the same values as the original formula. Nor are they required to be subadditive, meaning that the distances from the two distributions to a third reference point need not sum to a figure greater than or equal to the distance between the two distributions themselves. Technically speaking, “distances” exhibit symmetry but are not required to be subadditive (i.e. they do not have to obey the “triangle inequality”). Throughout this series I’ve been using the term “metric” in a casual way, but in mathematical parlance it also has the stricter meaning of a distance measure that exhibits both symmetry and subadditivity.
The KL does neither, hence the need for some of the compensatory tweaks embodied in the divergences we’ll discuss next time around; the KL is still remarkably useful without these desirable properties though[12], to the extent that it is still more commonly used than any of these variants, the most well-known of which may be the Jensen-Shannon Divergence. After that, we’ll tackle divergences, distances and metrics that aren’t limited to probabilistic inputs. In the meantime, I highly recommend Sung-Hyuk Cha’s well-organized and highly readable taxonomy of distance measures, which explains better than I can how they fit together.[13] There’s such an abundance of data mining measures available in the mathematical literature that I doubt anybody will be able to corner them all in the analytics marketplace. By the end of the series, however, I hope to demonstrate how several dozen of the leading metrics can be “weaponized” in T-SQL form and put into action in cornering the uncertainty in our SQL Server tables and cubes from every possible direction. The most numerous section of our arsenal will consist of these divergences and distances, which will allow us to perform DIY data mining with greater skill than before.
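The lack of symmetry is easy to demonstrate numerically. A brief Python sketch (using made-up toy distributions, not the Higgs Boson data) shows that reversing the direction of the same KL calculation returns a different figure:

```python
import math

def kl_divergence(p, q, base=2):
    """Kullback-Leibler Divergence of target p against approximation q."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]   # a lopsided two-value distribution
q = [0.5, 0.5]   # a uniform approximation

forward = kl_divergence(p, q)   # measuring from the target to the approximation
reverse = kl_divergence(q, p)   # measuring in the opposite direction
print(forward, reverse)         # roughly 0.531 vs. 0.737 bits: not symmetric
```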

[1] See the __Wikipedia__ pages “Solomon Kullback” and “Richard Leibler” at http://en.wikipedia.org/wiki/Solomon_Kullback and http://en.wikipedia.org/wiki/Richard_Leibler respectively.

[2] Which was still in use long after the Church Committee publicly exposed its illegal domestic surveillance operations in the mid-‘70s.

[3] Kullback, Solomon and Leibler, Richard, 1951, “On Information and Sufficiency,” pp. 79-86 in __The Annals of Mathematical Statistics__, March 1951. Vol. 22, No. 1. Available online at the West Virginia University Lane Department of Computer Science and Electrical Engineering web address http://www.csee.wvu.edu/~xinl/library/papers/math/statistics/Kullback_Leibler_1951.pdf

[4] See the __Wikipedia__ page “Kullback-Leibler Divergence” at http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

[5] See the __Wikipedia__ page “Information Theory” at http://en.wikipedia.org/wiki/Information_theory

[6] See the __Wikipedia__ page “Kullback-Leibler Divergence” at http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

[7] See the __Wikipedia__ page “Akaike Information Criterion” at http://en.wikipedia.org/wiki/Akaike_information_criterion

[8] See the __Wikipedia__ page “Information Gain in Decision Trees” at http://en.wikipedia.org/wiki/Information_gain_in_decision_trees

[9] See the __Wikipedia__ article “Rényi Entropy” at http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

[10] See the __Wikipedia__ page “Cross Entropy” at http://en.wikipedia.org/wiki/Cross_entropy

[11] This dataset is made publicly available by the University of California at Irvine’s Machine Learning Repository, which I converted to a SQL Server table of about 5 gigabytes at the start of my Outlier Detection series.

[12] “We can say that KL divergence has some of the properties of a metric on the space of probability distribution: it’s non-negative, with equality only when the two distributions are equal (a.e.). Unfortunately, however, it is not symmetric, and it does not obey the triangle inequality. (This is why it’s the KL divergence rather than the KL distance.) Nonetheless, it’s enough like a metric that it can be used to construct a kind of geometry on the space of probability distributions, and so of statistical models, which can be extremely useful.” p. 193, Shalizi, Cosma, 2006, “Advanced Probability II or Almost None of Stochastic Processes,” monograph published at the __Carnegie Mellon University Department of Statistics__ web address http://www.stat.cmu.edu/~cshalizi/754/2006/notes/all.pdf

[13] Sung-Hyuk Cha, 2007, “Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions,” pp. 300-307 in __International Journal of Mathematical Models and Methods in Applied Sciences__, Vol. 1, No. 4.

## Information Measurement with SQL Server, Part 3: Inverse Probability and Bayes Factors

**By Steve Bolton**

…………In the last segment of this series of amateur self-tutorials, we discussed how to code various ways of quantifying how much we *don’t* know about the data in SQL Server tables and cubes. The various probabilistic entropies I translated into T-SQL in those six articles can be viewed as measures of a particular sort of information, which might best be interpreted as “newsworthiness,” since they tell us how much we might learn from each data point. The rest of this wide-ranging series will have to be somewhat haphazard, due to the sheer number of information metrics and the fact that I’m learning about them as I go; writing on these topics helps me absorb the material a lot faster, while hopefully helping other amateurs avoid my inevitable mistakes. I thought it fitting, however, to complement the topic of information entropy by next discussing means of quantifying what we *do* know about our data. Bayes Factors are mercifully easy to calculate, since all we need to do is divide two probabilities by each other and plug them into a logarithm. In fact, we can reuse some of the code from Information Measurement with SQL Server, Part 2.4: Conditional and Joint Entropy and simply strip out the LOG operations that transformed the conditional probabilities into entropies. This step could also be dispensed with, if one of the chief uses of Bayes Factors didn’t involve multiplying them by one conditional probability in order to derive another. This information metric can be interpreted as a sort of crude gauge of existing knowledge, but is chiefly useful in adjusting probabilities in the light of new evidence, through a famous probability formula developed by Presbyterian minister Thomas Bayes (1701-1761)[1] and extended a few decades later to induction by Pierre-Simon Laplace (1749-1827).[2]

…………Bayes Factors are indeed simple to code, but making use of them requires a lot more context; this is basically the same paradox that made Information Measurement with SQL Server, Part 2.1: The Uses and Abuses of Shannon’s Entropy one of the longest articles in the last segment, although it was the simplest of the entropic metrics to code. This is due in part to the broad applications of Bayes’ Theorem, which essentially derives a conditional probability for one distinct event or value given another; this is arrived at by multiplying the probability of the first event by the conditional probability of the second given the first, then dividing by the overall probability of the second. The formula isn’t terribly difficult to follow even for an amateur like myself, but I don’t see any reason to post it and violate my longstanding ban on equations in this blog. Calculating a conditional probability in reverse in this way can be useful in some situations on its own, but it can be harnessed for new uses by infusing it with additional meaning as a “degree of belief.” Basically, the probability of X given Y derived from this inverse probability is interpreted by Bayesian statisticians as a posterior distribution, or a degree of belief updated in light of the conditional probability of Y given X, which is known as a prior. The Bayes Factor merely represents the ratio between the other two elements in Bayes’ Theorem, the two overall probabilities for X and Y.
If we know the Bayes Factor and the prior conditional probability, we can derive the posterior conditional probability, hence the common refrain “posterior is proportional to prior times likelihood.” Sometimes the term “likelihood ratio” is used as a synonym for Bayes Factors, but this can lead to confusion, given that it is also used for the likelihood-ratio tests, certain diagnostic tests and “the ratio of two likelihood functions.”[3] Moreover, the term “likelihood” carries specific connotations in statistical parlance, whereas in ordinary speech it is often used interchangeably with “probability.” For that reason, I’ll probably avoid it in discussions on topics like Bayes Factors that are directly related to likelihood.
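That relationship is easy to sketch in code. The following Python snippet (with made-up probabilities, offered purely as an illustration of the arithmetic rather than the article’s T-SQL) shows the posterior emerging either from the full theorem or from the prior conditional probability times the ratio of the two overall probabilities, which is how the Bayes Factor is treated here:

```python
def bayes_posterior(p_y_given_x, p_x, p_y):
    """Bayes' Theorem: P(X|Y) = P(Y|X) * P(X) / P(Y)."""
    return p_y_given_x * p_x / p_y

p_x = 0.2            # overall probability of X (hypothetical)
p_y = 0.1            # overall probability of Y (hypothetical)
p_y_given_x = 0.3    # the prior conditional probability (hypothetical)

# the ratio of the two overall probabilities, i.e. the Bayes Factor
bayes_factor = p_x / p_y

# multiplying the prior conditional probability by the Bayes Factor
# recovers the posterior, just as the full theorem does
print(bayes_posterior(p_y_given_x, p_x, p_y))   # 0.6
print(p_y_given_x * bayes_factor)               # 0.6
```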

**From Inverse Probability to Decibans**

I’ll also stick to the broader and older term “inverse probability” in place of the now-common “Bayesian probability,” in order to stay out of the knock-down, drag-out turf wars that occur among statisticians today over the topic; these methods don’t necessarily have to be used in ways often labeled “Bayesian” today, which as we shall see, sometimes implies endorsement of controversial interpretations. The process was originally known as inverse probability, with good reason, as a few intuitive illustrations of its usage will demonstrate. Many of the intuitive examples given for inverse probability in the statistical literature are similar to one in Fazlollah Reza’s *An Introduction to Information Theory*, where the goal is to gauge the probability of particular combinations of red and black balls being taken from three urns. The catch is that the probability is derived *after* they are drawn, not before, as is typically the case in ordinary “frequentist” probability.[4] This is summarized quite succinctly in Dan McNeill’s *Fuzzy Logic*, one of the most readable books on the topics covered in my Implementing Fuzzy Sets in SQL Server series.[5] Basically, in ordinary probability we’re asking what color the next ball we draw is likely to be, but with inverse probability we’re transposing the question into, “What is the ratio of the balls in the urn?” given foreknowledge of those we’ve drawn so far. Following Laplace’s “principle of indifference” in the absence of extra information, we’d set the probabilities for the outcomes even, as in a uniform distribution[6]; with each subsequent draw from the urns, Bayesians would update these probabilities until they gradually approached the true distribution.
A previous Wikipedia article on Bayes’ Theorem or Bayes’ Rule contained a well-written example, in which the goal was to guess the gender of a person with long hair, given an even distribution of men and women and probabilities of each sex having long hair set to 15 and 75 percent respectively (with the remainder being unknown). In that example, the Bayes Factor would be a simple ratio between 75 and 15, which equals odds of 5:1. All we need to do is divide them and apply a LOG operation, which is normally done in base 10 and then multiplied by a factor of 10. These units are known as decibans, a variant of an information measure pioneered by famed cryptanalyst Alan Turing and alternately known as the hartley, in honor of the developer of the metric discussed in Information Measurement with SQL Server, Part 1: A Quick Review of the Hartley Function. Statisticians find decibans convenient because common odds ratios can be translated into them seamlessly, such as 100 to 1, which equals 20 decibans, a nice round number. Figure 1 also includes the calculations in base 2, in order to provide a comparison point with bits, the most common unit associated with entropic information measures. About 95 percent of the performance costs are incurred in the INSERT, which is identical to the initial code used in Information Measurement with SQL Server, Part 2.4: Conditional and Joint Entropy, except with the entropic LOG calculations stripped out. This simplification may be the reason why it executes in just 1:14 on the same two float columns of the 11-million-row Higgs Boson dataset I’ve been using throughout this series for demonstration purposes[7], compared to the 1:46 required for the Joint Entropy article.
One of the conditional probabilities can be reconstructed from the other as long as its inverse and individual probabilities are available, or can at least be derived from statistical estimation techniques[8]; in the database server world, we’re more likely to have the luxury of using actual proportions derived from extensive counts, whereas in other fields these inputs would have to be derived from probability distribution formulas and likelihood estimation methods. I included the individual proportions and conditional probabilities in the final SELECT merely to convey how Bayes’ Theorem can be used to derive them from each other in this manner.
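The deciban arithmetic from the long-hair example is easy to verify. A short Python sketch (a cross-check only, separate from the T-SQL in Figure 1):

```python
import math

def decibans(numerator, denominator):
    """A Bayes Factor in decibans: ten times the base-10 LOG of the odds ratio."""
    return 10 * math.log10(numerator / denominator)

# the long-hair example: 75 percent vs. 15 percent yields odds of 5:1
print(decibans(75, 15))    # about 6.99 decibans
# round odds ratios translate into round figures, e.g. 100:1
print(decibans(100, 1))    # 20 decibans
```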

** Figure 1: Deriving Bayes Factors with T-SQL**

```sql
DECLARE @Count1 bigint, @Count2 bigint, @JointCount bigint

SELECT @Count1 = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL

SELECT @Count2 = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column2 IS NOT NULL

SELECT @JointCount = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL OR Column2 IS NOT NULL

DECLARE @EntropyTable table
(Value1 decimal(33,29),
Value2 decimal(33,29),
ValueCount bigint,
Proportion1 decimal(38,37),
Proportion2 decimal(38,37),
JointProportion decimal(38,37)
)

INSERT INTO @EntropyTable
(Value1, Value2, ValueCount, Proportion1, Proportion2, JointProportion)
SELECT Value1, Value2, ValueCount, Proportion1, Proportion2, JointProportion
FROM (SELECT Value1, Value2, ValueCount,
	CASE WHEN GroupingIDColumn1 = 0 AND GroupingIDColumn2 = 1 THEN ValueCount / CAST(@Count1 AS float) ELSE NULL END AS Proportion1,
	CASE WHEN GroupingIDColumn1 = 1 AND GroupingIDColumn2 = 0 THEN ValueCount / CAST(@Count2 AS float) ELSE NULL END AS Proportion2,
	CASE WHEN GroupingIDColumn1 = 0 AND GroupingIDColumn2 = 0 THEN ValueCount / CAST(@JointCount AS float) ELSE NULL END AS JointProportion
	FROM (SELECT Column1 AS Value1, Column2 AS Value2, Count(*) AS ValueCount, GROUPING_ID(Column1) AS GroupingIDColumn1, GROUPING_ID(Column2) AS GroupingIDColumn2
		FROM Physics.HiggsBosonTable
		WHERE Column1 IS NOT NULL AND Column2 IS NOT NULL
		GROUP BY CUBE (Column1, Column2)) AS T1) AS T2

SELECT 10 * Log(Proportion1 / Proportion2, 10) AS BayesFactorForHypothesisTesting,
10 * ABS(Log(Proportion1 / Proportion2, 10)) AS BayesFactorInDecibans,
ABS(Log(Proportion1 / Proportion2, 2)) AS BayesFactorInBits,
Proportion1 / Proportion2 AS OddsRatio,
Proportion1, Proportion2, ConditionalProbabilityOfColumn2GivenColumn1, ConditionalProbabilityOfColumn1GivenColumn2,
(CAST(ConditionalProbabilityOfColumn2GivenColumn1 as float) * CAST(Proportion1 as float)) / CAST(Proportion2 as float) AS RecalculationOfConditionalProbabilityOfColumn1GivenColumn2
FROM (SELECT Proportion1, Proportion2, JointProportion, JointProportion / Proportion1 AS ConditionalProbabilityOfColumn2GivenColumn1, JointProportion / Proportion2 AS ConditionalProbabilityOfColumn1GivenColumn2
	FROM (SELECT JointProportion
		FROM @EntropyTable
		WHERE Value1 = -0.61253148317337036130000000000 AND Value2 = 0.54839861392974853520000000000) AS JointProportion,
	(SELECT Proportion1 FROM @EntropyTable
		WHERE Value1 = -0.61253148317337036130000000000 AND Proportion1 IS NOT NULL) AS Proportion1,
	(SELECT Proportion2 FROM @EntropyTable
		WHERE Value2 = 0.54839861392974853520000000000 AND Proportion2 IS NOT NULL) AS Proportion2) AS T1
```

** Figure 2: Results from the Higgs Boson Dataset**

…………At 0 decibans or bits, the probability is evenly split. When the odds ratio is less than 1:1, the value, event, hypothesis or model embodied in the divisor is more likely, whereas if it is greater than 1:1, the evidence is weighted in favor of the dividend[9]; since an odds ratio in favor of the divisor results in a fraction less than 1, the LOG applied to translate this into a Bayes Factor will result in a negative value. This makes it difficult to compare them to entropic measures, most of which are non-negative by definition, hence the need for the two ABS functions in the last SELECT. On the other hand, the minus sign is useful when the ratio is interpreted as a ratio of the weights of evidence for the event or value in the dividend, vs. the evidence for the contrary hypothesis embodied in the divisor; hence the need for the BayesFactorForHypothesisTesting column, sans an ABS operation. Bayes Factors are used directly in the Bayesian version of hypothesis testing, on the grounds that the odds can be interpreted as evidence in favor of one hypothesis or the other.[10] The same sort of comparison can be made for the global figures, in which case they are useful in model selection, a use more closely associated with data mining than ordinary statistics. For this use case, “an advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure. It thus guards against overfitting.”[11] Furthermore, the ability to update probabilities in a sequential manner as new information is acquired can be leveraged to upgrade Design of Experiments (DOE). It is more common to use Bayes Factors for these tasks than as stand-alone, pure information measures, but they can be treated as a sort of crude gauge of *a priori* knowledge.
The mathematical properties that make Bayes Factors attractive for hypothesis testing and model selection were worked out in detail long ago, but some of them – such as additivity and eventual convergence to a Gaussian or “normal” distribution[12] – might be useful in terms of an information metric as well.
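To make the sign behavior concrete, here is a quick Python sketch of the arithmetic, using hypothetical probabilities rather than values from the Higgs Boson dataset; the function name is mine, chosen for illustration:

```python
import math

def bayes_factor_bits(p_dividend, p_divisor):
    """Bayes Factor expressed in bits: the LOG base 2 of the odds ratio."""
    return math.log2(p_dividend / p_divisor)

# Evidence favoring the divisor: the ratio is less than 1, so the LOG is negative
assert bayes_factor_bits(0.2, 0.4) < 0

# The absolute value equals the factor with dividend and divisor swapped,
# which is what the ABS functions in the last SELECT accomplish
assert abs(bayes_factor_bits(0.2, 0.4)) == bayes_factor_bits(0.4, 0.2)
```

This is only a sketch of the sign convention; the T-SQL above performs the same LOG on the ratio of the stored proportions.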

…………I’m sure that theoreticians have already worked out their relationships to measures of information entropy, but I haven’t gotten that far in the literature yet. I do know that the journal *Entropy* recently published a special edition on integrating Bayesian stats with important information theory axioms like the Principles of Maximum Entropy and Minimum Cross Entropy, so apparently it’s still a hot research topic.[13] I strongly suspect that Bayes Factors share much in common with the Self-Information metric I introduced in the last article, given that all we have to do to derive the ratio of the two Self-Information values is to apply the LOG operations first, then perform the division. I omitted those calculations in order to shrink the graphic in Figure 2, but calculating the ratio of the Self-Information of these two distinct values for Column1 and Column2 yields a figure of 1.06758789451323 for base 2, compared to 0.786263282058452 for the BayesFactorsInBits depicted above. It would actually be quite economical to calculate Bayes Factors and Self-Information together, just as it wouldn’t cost us much more to derive the Conditional and Joint Entropies in the same pass across the original table in the INSERT. If we’re going to derive the two probabilities and the conditional probability across two columns, it’s really simple in T-SQL to derive the remaining conditional probability alongside them, rather than calculating backwards from Bayes’ Theorem.
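The distinction between the two quantities can be sketched in a few lines of Python, using made-up proportions rather than the actual Higgs Boson values: the ratio of Self-Information applies the LOGs first and then divides, while the Bayes Factor in bits divides first and then applies the LOG, so the two yield different figures:

```python
import math

# Hypothetical proportions standing in for the two column values
p1, p2 = 0.3, 0.5

# Ratio of Self-Information: apply the LOG operations first, then divide
self_info_ratio = (-math.log2(p1)) / (-math.log2(p2))

# Bayes Factor in bits: divide first, then apply the LOG
bayes_factor_bits = math.log2(p1 / p2)

# The two quantities are related but not equal
assert abs(self_info_ratio - bayes_factor_bits) > 0.01
```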

**Uses and Interpretations of Bayes Factors**

The popular textbook examples of counting balls of different colors in separate urns would be easier to solve in SQL Server, given the extensive counts that are readily available in our tables and cubes. Bayesian probability would probably find wider use cases in terms of updating probabilities as new information comes in, which calls for interpreting them as “beliefs” in a manner similar to that introduced in Implementing Fuzzy Sets in SQL Server, Part 10.1: A Crude Introduction to Dempster-Shafer Evidence Theory. Bayesian stats can also be integrated with Decision Theory. In fact, it intersects with many other areas of statistics, which gives rise to a wide array of use cases; for example, when Maximum Likelihood Estimates (MLEs) are used in Bayes Factors, the result is basically an ordinary likelihood ratio test.[14] Bayes Factors are thus useful in a smorgasbord of applications beyond pure information measurement, which really doesn’t even qualify as the tip of the iceberg because it’s a somewhat obscure usage. Many of those applications occur in the field of data mining, including the Bayesian stats used in A Rickety Stairway to SQL Server Data Mining, Algorithm 1: Not-So-Naïve Bayes. Bayes Factors might be adapted for all kinds of DIY mining algorithms, but I wonder if Bayesian stats might be integrated in interesting ways with my pet topic, neural nets. The human brain cannot act on information it does not have, so it stands to reason that it must have a particularly efficient way of using priors to update beliefs, to help us quickly zero in on truth (or the fallacies we prefer out of pride, which has been one of mankind’s most tragic imperfections throughout history) out of an endless sea of wrong answers. It might daisy-chain priors together to update our internal knowledge bases with incoming sensory input every split second, perhaps by calibrating anticipation in the same manner theorized by artificial intelligence researcher Jeff Hawkins.[15]

…………The limits of human perception of differences in degrees of belief apparently occur around 1 deciban.[16] This corresponds to inserting a ratio of 1.258925411794168 into the LOG operation, or 0.332192809488737 bits using base 2. Various informal scales for gauging the strength of evidence have been developed over the years, including one by famed Bayesian Harold Jeffreys and a more recent one by statistics Profs. Robert E. Kass and Adrian E. Raftery.[17] They’re not terribly different from the one mentioned in a .pdf document published by Biostatistics Prof. Leonhard Held, who says that odds of 1:1 to 3:1 are “barely worth mentioning,” 3:1 to 20:1 are “substantial” and 20:1 through 150:1 are “strong,” with anything beyond that constituting “very strong” evidence in favor of the hypothesis embodied in the dividend.[18] The three fences of 3:1, 20:1 and 150:1 correspond to 4.77121254719662, 13.0102999566398 and 21.7609125905568 decibans respectively, or 1.58496250072116, 4.32192809488736 and 7.22881869049588 bits.
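These conversions are easy to verify with a short Python sketch of the deciban and bit arithmetic (the function names are mine, chosen for illustration):

```python
import math

def odds_to_decibans(odds):
    # 1 ban = the LOG base 10 of the odds; a deciban is a tenth of a ban
    return 10 * math.log10(odds)

def odds_to_bits(odds):
    return math.log2(odds)

# 1 deciban corresponds to an odds ratio of 10^(1/10)
assert abs(10 ** 0.1 - 1.258925411794168) < 1e-12
assert abs(odds_to_bits(10 ** 0.1) - 0.332192809488737) < 1e-12

# Held's three fences of 3:1, 20:1 and 150:1
for odds, decibans, bits in [(3, 4.77121254719662, 1.58496250072116),
                             (20, 13.0102999566398, 4.32192809488736),
                             (150, 21.7609125905568, 7.22881869049588)]:
    assert abs(odds_to_decibans(odds) - decibans) < 1e-10
    assert abs(odds_to_bits(odds) - bits) < 1e-10
```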

**Too Much of a Good Thing: The Danger of Fringe Interpretations of Bayesian Statistics**

Beyond these somewhat subjective criteria, the inherent meaning of Bayesian statistics is hotly debated. As is often the case in fields related to information theory, the difficulty rests in the interpretation, not the calculations; in fact, the interpretation of Bayesian stats has been the subject of incessant trench warfare among statisticians and researchers in related fields for the last few decades. Apparently the gist is that some Bayesians claim their techniques supersede all others, including those for describing uncertainty[19], which many other statisticians take issue with. Peter Cheeseman, a leading Bayesian, took note of the “nearly religious fervor” and “stand-up fights” that arose in the field as a result, as did many of his counterparts on the other side of the fence.[20] This rivalry apparently spilled over into funding wars and appointments; for example, McNeill laments in *Fuzzy Logic* that Bayesians allegedly torpedoed fuzzy research in the U.S. in the 1980s.[21] He points out in return, however, that at the time he wrote two decades ago, there were no Bayesian products in the statistical marketplace yet, but plenty that made use of fuzzy sets.[22] George J. Klir, one of the authors of my favorite mathematical resource on fuzzy sets, criticizes Bayesianism on the grounds that it assumes precise degrees of belief when the corresponding numbers are in fact hazy. “It forces us to be certain about our uncertainty,” Klir says. “By making vague expectations precise, the Bayesians fail to know their ignorance, and by disdaining to model vagueness appropriately, they ignore available knowledge.” He then suggests that they read the wisdom of the Tao Te Ching: “Knowing ignorance is strength. Ignoring knowledge is sickness.”[23]

…………Bayes Factors open up a whole world of Bayesian statistics that I have yet to set foot in, nor intend to incorporate much in this series, which is strictly concerned with coding information metrics in SQL Server for purposes of DIY data mining. Therefore, I won’t get into a holy war about something I don’t know enough about by joining one camp or another at this point. I’ll stick with my usual approach of finding ways of pragmatically incorporating these techniques into my toolbelt wherever I can and matching them up with appropriate use cases; I agree with some of the Bayesian criticisms of standard statistical tests involving P-values, confidence levels and the like, but do not think they warrant jettisoning the whole “frequentist” gamut of techniques. If the characterizations of some Bayesians as extremists who insist on throwing out many of the competing tools are accurate, then I would definitely have a problem with that kind of obsessiveness. I would also point out that the common Bayesian refrain that “certainty is simply total belief”[24] does not necessarily wash, since a person can be totally certain of falsehoods, even to the point of willingly embracing bad evidence; as Bill W. and his disciples are wont to point out, “Denial just ain’t a river in Egypt.” Other misgivings about Bayesian inference would also have to be cleared up, such as the ramifications of using it to reason “backwards from the evidence to the hypothesis.”[25] Counter-intuitive results are also known to arise in Bayesian interpretations, just as they do with the Evidence Theory belief and plausibility measures I covered in the fuzzy sets tutorial series.
For example, in the well-known “Oliver’s Blood” problem, if two blood samples with Type O and Type AB blood are left at a crime scene, the evidence is slightly against a person with Type O blood accounting for one of them, even though the former is more common than the latter among the general population by odds of about 60:1.[26] I’m not sure how Bayesians compensate for the fallacious reasoning that often arises from ordinary probability problems, such as the common belief that priors have a direct, causative relationship in determining posteriors. This is an accurate statement in certain card games where draws without replacement are the rule (or against the rules, in the case of Ben Affleck), but not in coin tosses, where getting ten heads in a row has no effect whatsoever on the odds of getting an eleventh. This is one of the most dangerous aspects of probability theory, a trap I’d argue that some physicists have fallen into by overinterpreting quantum theory as *arising* from probabilities, rather than interpreting them as field effects.

…………The ultimate culprit there is the same as in extreme interpretations of Bayesianism: the idea that reality is subjective. Both Bayesian and frequentist models of probability are ultimately derived from combinatorics, as are Bayes Factors and entropic measures; this in turn gives them a rigid foundation in logic, but leavening it with the wrong kind of subjectivity might be tantamount to adding a fly to the ointment. It is critical to always keep in mind that subjectivity is fine when it is strictly a matter of gauging personal aesthetic preferences, including the questions we prefer to ask while data mining. “Is my data mine played out?” is an example of a subjectively selected question, which might be answered through such objective means as declining information gain and a progressive lack of informative priors. Given that probabilities do not determine outcomes, we might have to check every last square inch of ground to be certain that a particular mine does not hide the nuggets of gold we’re looking for, but these measures can serve as a reasonable guide on when to stop and look elsewhere. As I’ve pointed out many times in previous posts, subjectivity becomes problematic when confused with the answers to those questions, which are always a matter of hard, cold, objective reality. I’ve seen noted authors in the hard sciences and mathematics unconsciously let their definitions of these things drift across the course of their books, in ways which obscure this dividing line without justification. To put it bluntly, subjectivity in the wrong place leads to madness; when men have accepted the idea that their personal thoughts affect reality, they end up in institutions, and when the same ideas infect entire civilizations, they come crashing down.
It is a permanent temptation of mankind, a dangerous undercurrent that has reared its head time and again throughout human intellectual history in guises like phenomenalism, nominalism, solipsism (which is considered a marker of schizophrenia, unless it is championed by professors) and certain misuses of David Hume’s philosophy. It is dangerous enough when a civilization tampers with the definition of purely abstract ideas like paper money (which always leads to hyperinflation), but doubly so when subjectivity degrades its capacity for wise decision-making. As I have pointed out from time to time in the fuzzy set tutorial series and A Rickety Stairway to SQL Server Data Mining, many of these mind-blowing concepts are apparently fraught with danger; perhaps that is why so many of the “rock star” mathematicians and physicists have gone mad or become degenerates, although I’d wager that overweening pride is more to blame. In the next installment I’ll pilot this series through less dangerous waters that I nevertheless really don’t have the expertise to navigate[27], in search of answers about the Kullback-Leibler Divergence, a tried-and-true distance measure used often in data mining.

[1] Pierre-Simon Laplace (1749- 1827). See the __Wikipedia__ page “Pierre Simon Laplace” at http://en.wikipedia.org/wiki/Pierre-Simon_Laplace for his birth and death dates.

[2] See Statistical, LLC, 2015, “Likelihood,” available at the __Bayesian-Inference__ web address http://www.bayesian-inference.com/likelihood and the __Wikipedia__ pages “Bayes’ Theorem” and “Bayes’ Rule” at http://en.wikipedia.org/wiki/Bayes%27_theorem and http://en.wikipedia.org/wiki/Bayes%27_rule

[3] See the __Wikipedia__ page “Likelihood Ratio” at http://en.wikipedia.org/wiki/Likelihood_ratio

[4] p. 48, Reza, Fazlollah M., 1961, __An Introduction to Information Theory__. McGraw-Hill: New York.

[5] p. 178, McNeill, Dan, 1993, __Fuzzy Logic__. Simon & Schuster: New York.

[6] *IBID*.

[7] This dataset is made publicly available by the University of California at Irvine’s Machine Learning Repository. I downloaded it a few tutorial series ago and converted it to a SQL Server table, which now takes up about 5 gigabytes of space in a sham DataMiningProjects database I’ve been using ever since for practice data.

[8] “In order to complete the definition of a Bayesian model, both the prior distributions and the likelihood must be approximated or fully specified.” See Statistical, LLC, 2015, “Likelihood,” available at the __Bayesian-Inference__ web address http://www.bayesian-inference.com/likelihood

[9] pp. 45-46, Downey, Allen B., 2012, __Think Bayes: Bayesian Statistics Made Simple, Version 1.03__. Green Tea Press: Needham, Mass. Available online at the Green Tea Press web address http://www.greenteapress.com/thinkbayes/thinkbayes.pdf

[10] Held, Leonhard, 2011, “Introducing Bayes Factors,” monograph published Nov. 25, 2011 at the __Vienna University for Economics and Business Institute for Statistics and Mathematics__ web address http://statmath.wu.ac.at/research/talks/resources/talkheld.pdf

[11] See the __Wikipedia__ article “Bayes Factor” at http://en.wikipedia.org/wiki/Bayes_factor

[12] Good, I. J., 2011, “A List of Properties of Bayes-Turing Factors,” undated monograph released March 9, 2011 by the __National Security Agency__ as a result of Freedom of Information Act request #58820. Available online at the NSA web address https://www.nsa.gov/public_info/_files/tech_journals/list_of_properties.pdf

[13] Giffin, Adom, 2013, “Special Issue ‘Maximum Entropy and Bayes Theorem’,” commentary published Jan. 7, 2013 at the __Multidisciplinary Digital Publishing Institute__ web address http://www.mdpi.com/journal/entropy/special_issues/bayes-theorem

[14] *IBID*.

[15] pp. 147, 238, Hawkins, Jeff, 2004, __On Intelligence__. Times Books: New York.

[16] Good, I.J., 1985, “Weight of Evidence: A Brief Survey,” pp. 249-270 in __Bayesian Statistics 2__, Bernardo, J.M.; DeGroot, M.H.; Lindley, D.V. and Smith, A.F.M., eds. Elsevier Science Publishers: North Holland. Available online at the California Water Boards web address http://www.waterboards.ca.gov/water_issues/programs/tmdl/docs/303d_policydocs/207.pdf . Originally cited at the __Wikipedia__ webpage “Ban (Unit)” at http://en.wikipedia.org/wiki/Ban_(unit)

[17] Kass, Robert E. and Raftery, Adrian E., 1995, “Bayes Factors,” pp. 773-795 in __Journal of the American Statistical Association__, June 1995. Vol. 90, No. 430. Available online at the __Carnegie Mellon__ web address http://www.andrew.cmu.edu/user/kk3n/simplicity/KassRaftery1995.pdf. I learned of it through the __Wikipedia__ article “Bayes Factor” at http://en.wikipedia.org/wiki/Bayes_factor

[18] p. 2, Held.

[19] p. 181, McNeill.

[20] *IBID*., p. 181.

[21] *IBID*., p. 176.

[22] *IBID*., p. 190.

[23] *IBID*., pp. 188-189.

[24] *IBID*., pp. 177-178.

[25] p. 179, McNeill.

[26] pp. 45-46, Downey. The original source was MacKay, David J. C., 2003, __Information Theory, Inference, and Learning Algorithms__. Cambridge University Press: New York.

[27] You might be in better hands with Gene Wilder.

## Information Measurement with SQL Server, Part 2.5: Mutual, Lautum and Shared Information

**By Steve Bolton**

…………The sample T-SQL I posted in the last article wasn’t as difficult as it looked, considering that it merely implemented the same code on the same data we used in Information Measurement with SQL Server, Part 2.1: The Uses and Abuses of Shannon’s Entropy, except for two columns rather than one. A little more logic was required to calculate Joint Entropy from them, but a GROUP BY CUBE and some GROUPING_IDs took care of that. That sample code was basically a two-for-one deal, because we merely need to tack the practically effortless code from this article onto it. Once we’ve derived measures like Conditional and Joint Entropy, it is child’s play to derive Mutual Information, which is one of the most widely used metrics in information theory. There are many equivalent ways of deriving it, all of which involve simple subtraction and addition operations on various combinations of the Joint Entropy, @ConditionalEntropyOfBGivenA, @ConditionalEntropyOfAGivenB and the Shannon’s Entropies for Column1 and Column2.[1] All of these were already calculated in last week’s code (which I’ll omit here for the sake of brevity) so this week’s installment should be a breeze. As is often the case in information theory, the difficulties consist chiefly in interpreting Mutual Information and some of the kindred metrics depicted in Figures 1 and 2.

…………Mutual Information functions as a kind of two-way version of Conditional Entropy, in that it gauges “how much knowing one of these variables reduces uncertainty about the other,”[2] rather than just about one variable given a value for the other. The interpretation is much the same though: if the two variables are completely independent of one another the value will be zero, but as their degree of dependence rises, so does their Mutual Information. In this way, Mutual Information can be harnessed to complement standard measures of association like covariance and correlation. In coding theory it can be interpreted as the information transferred before and after receiving a signal, which can be adapted to our purposes by interpreting it as the information transferred before and after a particular event is observed and added to our records.[3] The information-carrying capacity and other properties of communication channels also play a part in determining whether or not particular messages affect the Mutual Information of a code[4], which can be interpreted in terms of the frequency of values in database tables and cubes. Daniel P. Palomar and Sergio Verdú, the reinventors of an up-and-coming alternative known as Lautum Information[5], sum up the many uses for Mutual Information quite succinctly: “Mutual information has also proven a popular measure of statistical dependence in many experimental applications such as neurobiology, genetics, machine learning, medical imaging, linguistics, artificial intelligence, authentication, and signal processing.”[6]

…………Palomar and Verdú’s measure has been lauded as a replacement for Mutual Information in certain contexts based on various mathematical properties, but I had a difficult time wading through the notation in their original paper from 2008, so I had to rely on the formula given by chemist Gavin E. Crooks in a handy .pdf guide to measures of entropy.[7] As I mentioned in the last article, there are several equivalent methods of calculating these entropic measures, so I selected the one I wagered would be easiest to incorporate in the T-SQL I had already written for Shannon’s Entropy. Among these is a formula built around a division operation, which we simply need to invert to derive Lautum Information; the name is apparently an anadrome of “mutual,” chosen because it means “elegant” in Latin, so it too is an inversion of sorts.[8] It may be possible to simplify Figure 1 to calculate Lautum Information inside the same INSERT that Joint Entropy is derived in (which is not included here, for the sake of brevity), or at least derive it from Joint Entropy after the fact through simpler means than a complete pass over the whole @EntropyTable table variable.
Surprisingly, the convoluted solution in the fourth SELECT adds next to nothing to the performance costs, despite the fact that the @EntropyTable contains more than 9,119,674 distinct combinations of Column1 and Column2, out of the 11 million rows in the Higgs Boson dataset I’ve been using for practice data for the last few self-tutorial series.[9] It was simpler to code the Shared Information Distance, another metric that has apparently gained in popularity in recent years and has been used in cutting-edge fields like information geometry and applications like plagiarism detection.[10] Its advantages over the better-established Mutual Information metric include the fact that it obeys the triangle inequality (a transitive property related to subadditivity that separates true “metrics” from ordinary distance measures) and that it may qualify as a “universal metric,” such “that if any other distance measure places two items close-by, then the variation of information will also judge them close.”[11] The Shared Information Distance is really easy to compute[12] once we have Joint Entropy and a Conditional Entropy, which in Figure 1 is calculated via the Shannon’s Entropy for Column 1.
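Before diving into the T-SQL in Figure 1, it may help to see the same entropy identities cross-checked in a few lines of Python on a made-up 2×2 joint distribution; none of these numbers come from the Higgs Boson dataset, and the variable names are mine:

```python
import math

# Hypothetical 2x2 joint distribution; rows = X values, columns = Y values
joint = [[0.4, 0.1],
         [0.2, 0.3]]
px = [sum(row) for row in joint]            # marginal distribution of X
py = [sum(col) for col in zip(*joint)]      # marginal distribution of Y

h = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)   # Shannon's Entropy
h_x, h_y = h(px), h(py)
h_joint = h([p for row in joint for p in row])               # Joint Entropy

# Mutual Information via the entropy identity used in Figure 1
mi = h_x + h_y - h_joint

# Mutual Information from its defining sum, for comparison
mi_direct = sum(joint[i][j] * math.log2(joint[i][j] / (px[i] * py[j]))
                for i in range(2) for j in range(2))
assert abs(mi - mi_direct) < 1e-12

# Lautum Information: the ratio inverted, weighted by the product of marginals
lautum = sum(px[i] * py[j] * math.log2(px[i] * py[j] / joint[i][j])
             for i in range(2) for j in range(2))
assert lautum > 0

# Normalized Shared (Variation of) Information Distance, as in Figure 1
vi = 1 - mi / h_joint
assert 0 <= vi <= 1
```

This is only a sanity check of the formulas under those toy numbers, not a substitute for the set-based T-SQL that follows.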

**Figure 1: Sample T-SQL for Four Entropic Information Measures**

```sql
DECLARE @MutualInformation float,
@LautumInformation float,
@SharedInformationDistance float,
@ConditionalEntropyOfBGivenA float,
@ConditionalEntropyOfAGivenB float

SELECT @ConditionalEntropyOfBGivenA = @JointEntropy - @ShannonsEntropy1,
@ConditionalEntropyOfAGivenB = @JointEntropy - @ShannonsEntropy2

SELECT @MutualInformation = @ShannonsEntropy1 + @ShannonsEntropy2 - @JointEntropy

SELECT @SharedInformationDistance = 1 - (@ShannonsEntropy1 - @ConditionalEntropyOfAGivenB) / @JointEntropy

SELECT @LautumInformation = SUM(JointProportion * Log(@JointEntropy / JointProportion))
FROM @EntropyTable

SELECT @ShannonsEntropy1 AS ShannonsEntropyForX, @ShannonsEntropy2 AS ShannonsEntropyForY, @JointEntropy AS JointEntropy,
@ShannonsEntropy1 + @ShannonsEntropy2 AS SumOfIndividualShannonEntropies,
@ConditionalEntropyOfBGivenA AS ConditionalEntropyOfBGivenA, @ConditionalEntropyOfAGivenB AS ConditionalEntropyOfAGivenB,
@MutualInformation AS MutualInformation, @LautumInformation AS LautumInformation, @SharedInformationDistance AS SharedInformationDistance
```

**Figure 2: Self-Information and Rate Calculations**

```sql
-- Information and Entropy Rates
SELECT @ShannonsEntropy1 / CAST(@Count1 AS float) AS EntropyRateForColumn1,
@ShannonsEntropy2 / CAST(@Count2 AS float) AS EntropyRateForColumn2,
@JointEntropy / CAST(@JointCount AS float) AS JointEntropyRate,
@MutualInformation / @JointCount AS InformationRate

-- Self-Information
SELECT DISTINCT Value1, -1 * Log(Proportion1, @LogarithmBase) AS SelfInformation
FROM @EntropyTable
WHERE Proportion1 IS NOT NULL
ORDER BY Value1
OFFSET 0 ROWS FETCH FIRST 100 ROWS ONLY
```

**Figure 3: Results from the First Two Float Columns of the Higgs Boson Dataset**

…………The routine clocked in at 1:46, four seconds faster than the sample T-SQL from the last article, despite the fact that it’s identical except for the extra routines I appended. I expected the Table Scan required for my brute-force calculation of Lautum Information to add significantly to the performance costs, but it accounted for less than 1 percent of the query batch; otherwise, the execution plan was pretty much indistinguishable from the one in the last post. Note that the results also contain a sample of the first 100 Self-Information values for the first float column, as defined in Figure 2. In a nutshell, this trivial calculation[13] tells us the entropy for each individual record. This can be used to identify, partition and cluster records that offer greater potential for information gain, among other things. Like many other information metrics, it has multiple layers of meaning. At the most fundamental level, Self-Information quantifies the question, “How much can we learn when this particular event occurs?” Another subtle implication is that it may constitute another means of quantifying “surprise,” in addition to the possibilistic one I mentioned in Implementing Fuzzy Sets in SQL Server, Part 8: Possibility Theory and Alpha Cuts. Rare events are surprising by definition, but also more informative in the same specific sense of all information theory entropies, which tell us how “newsworthy” specific values can be. For example, it is really rare to see spectacular lenticular, anvil or Undulatus Asperatus clouds. Their presence can tell us a lot more about the weather than ordinary patterns though, since they require specific atmospheric conditions to produce them; I don’t know what their specific entropy values would be in a weather model, but I’d wager they’d be astronomically high. In the same way, the observation of rare events with high Self-Information values can be a clue that some extraordinary and often highly specific underlying process is at work.
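The “surprise” interpretation can be sketched in a couple of lines of Python, using hypothetical event probabilities rather than anything from the Higgs Boson results:

```python
import math

def self_information_bits(p):
    """Surprisal of a single event in bits: rarer events are more 'newsworthy'."""
    return -math.log2(p)

common_event, rare_event = 0.5, 0.001   # hypothetical probabilities

# A 50/50 event carries exactly 1 bit of information
assert self_information_bits(common_event) == 1.0

# The rare event carries far more information when it actually occurs
assert self_information_bits(rare_event) > self_information_bits(common_event)
```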

…………Figure 2 also includes code for the Information Rate, which is even more trivial than Self-Information to compute once we have Mutual Information out of the way. The term is sometimes applied to the ordinary Entropy Rate, so I differentiated the two in the code by deriving the InformationRate from the Mutual Information. Both are merely averages over different quantities, the Mutual Information and Shannon’s Entropy; I should note though that there is a supremum symbol in the equations I found for the Information Rate[14], which may mean that I need to incorporate a MAX operation somewhere in there. My usual disclaimer is always lurking in the background: I’m posting code in these tutorial series in order to absorb the material faster, not because I know what I’m doing, so check my code over before putting it into production if accuracy is of paramount concern. Also keep in mind that the interpretation and calculation of both may be affected by the properties of the underlying data, such as whether or not they can be modeled as irreducible or aperiodic Markov chains.[15] Further nuances can be added for calculating the entropy per message in cases where the input is unknown, plus the Conditional Entropy “per message if the input is known,” which indicates the presence of noise if the value is non-zero.[16] Erroneous data leaves the former rate unchanged but increases the second at a steady rate dependent on the maximum probability.[17]

…………Just as with simple statistical building blocks like averages and standard deviations, the sky’s the limit when it comes to the combinations and variations we can build out of Shannon’s Entropy and its relatives. Information theory is a vast field, which was birthed in the ‘40s out of a desire to send clearer communication signals, but now extends its tendrils into fields as diverse as cryptography, data compression, Internet search algorithms and quantum physics. Modern Man is perpetually surrounded by technology that relies on the principles of information theory, regardless of whether he was introduced to the term as late in life as I was. My only concern in this series is to focus on the actual metrics used in information theory, particularly those that might be of benefit to SQL Server users in DIY data mining. In the future I might revisit these entropic measures, once I have a better understanding of which variants might be beneficial in such applications; for instance, I’m allured by the tantalizing possibility of calculating the channel capacity of neural nets, although I’m not yet sure how to go about it. I’ll cross those bridges when I come to them, but for now it would be more beneficial to shift our focus to other classes of information metrics, which calls for a whole lengthy segment on various distance and divergence measures. Shared Information Distance implies some share in that group by its very name, but it is also intimately related to Kolmogorov Complexity, a fascinating topic I hope to take up in a later segment on minimum information length metrics. Lautum Information has also apparently been referred to as the “reverse Kullback-Leibler Divergence,”[18] which links it to one of the most important distance measures used in data mining and knowledge discovery. The KL Divergence is also known by the alias “Relative Entropy,” which has its own associated Relative Entropy Rate.[19] Mutual Information is likewise an important building block in many other information metrics, including Pointwise Mutual Information (PMI), which is used in Internet search engines; I’ll have to save PMI and related topics for a much later segment on semantic information metrics, which are rather advanced and often difficult to calculate and interpret. New information measures are being churned out by theoreticians much faster than the analytics marketplace can keep up with them; after skimming the literature over the last few years, I doubt that the gap will be closed anytime soon, because it yawns as wide as the Valles Marineris. No single software package is going to be capable of implementing all of the worthwhile algorithms and metrics for a long time to come, which makes DIY data mining skills worthwhile to acquire. To that end, it would make sense to become familiar with certain rudimentary measures of existing knowledge, to complement the entropic measures that tell us how much we *don’t* know about our data. With that in mind, next time around I’ll kick off another segment of this meandering mistutorial series with an introduction to Bayes Factors. These are fairly easy to calculate and may serve as bridges to Bayesian probability – a topic I know little about, but which makes use of Conditional Entropy – and Fisher Information, which is of vital importance in fields like data mining and information geometry. As the series progresses, metrics like these will allow us to box in the remaining uncertainty in our datasets little by little, as each new piece of the information puzzle sheds new light on our data from fresh directions.
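As a preview of that distance-measure segment, the Relative Entropy alias hints at how the KL Divergence is computed; here is a minimal Python sketch on two hypothetical discrete distributions (the function name and numbers are mine, for illustration only):

```python
import math

def kl_divergence_bits(p, q):
    """Relative Entropy D(P||Q) in bits: the expected LOG-ratio of the actual
    probabilities to their approximating counterparts."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

actual = [0.5, 0.3, 0.2]    # hypothetical target distribution
approx = [0.4, 0.4, 0.2]    # hypothetical approximation of it

assert kl_divergence_bits(actual, approx) > 0    # positive when the distributions differ
assert kl_divergence_bits(actual, actual) == 0   # zero when they match exactly
```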

[1] The __Wikipedia__ page “Mutual Information” at http://en.wikipedia.org/wiki/Mutual_information is a handy reference for these formulas, but they’re readily available in many information theory texts, such as p. 16, Jones, D.S., 1979, __Elementary Information Theory__. Oxford University Press: New York and pp. 49-62, Mansuripur, Masud, 1987, __Introduction to Information Theory.__ Prentice-Hall: Englewood Cliffs, N.J.

[2] See the __Wikipedia__ page “Mutual Information” at http://en.wikipedia.org/wiki/Mutual_information. It may be a free resource and quoting from it may be frowned upon, but sometimes the contributing writers word things particularly well or provide really succinct explanations that you can’t find in professional texts in various fields.

[3] pp. 126-127, Moser, Stefan M. and Po-Ning, Chen, 2012, __A Student’s Guide to Coding and Information Theory__. Cambridge University Press: New York.

[4] p. 64, Mansuripur.

[5] Which was originally mentioned in a 1945 paper by renowned Hungarian statistician Abraham Wald, the inventor of the well-known Wald Test. p. 964, Palomar, Daniel P. and Verdú, Sergio, 2008, “Lautum Information,” pp. 964-975 in __IEEE Transactions on Information Theory__, March 2008. Vol. 54, No. 3.

[6] *IBID*.

[7] p. 3, Crooks, Gavin E., 2015, “On Measures of Entropy and Information,” monograph published Jan. 22, 2015 at the __ThreePlusOne.com__ web address http://threeplusone.com/info

[8] p. 964, Palomar and Verdú.

[9] The original source was the University of California at Irvine’s Machine Learning Repository. I converted it ages ago to a SQL Server table, which now takes up about 5 gigabytes of space in a sham DataMiningProjects database.

[10] For the latter, see Kleiman, Alan Bustos and Kowaltowski, Tomasz, 2009, __Qualitative Analysis and Comparison of Plagiarism-Detection Systems in Student Programs__. Technical Report available in pdf format at the Universidade Estadual de Campinas web address http://www.ic.unicamp.br/~reltech/2009/09-08.pdf

[11] See the __Wikipedia__ page “Variation of Information” at http://en.wikipedia.org/wiki/Variation_of_information

[12] p. 3, Li, Ming, 2006, “Information Distance and Its Applications,” pp. 1-9 in __Implementation and Application of Automata: 11th International Conference, CIAA 2006. Taipei, Taiwan, August 2006 Proceedings__. Ibarra, Oscar H. ed. Springer-Verlag: Berlin.

[13] See p. 16, Jones and the __Wikipedia__ article “Self-Information” at http://en.wikipedia.org/wiki/Self-information

[14] p. 219, Gray, Robert M., 2011, __Entropy and Information Theory__. Springer: New York.

[15] See the __Wikipedia__ page “Entropy Rate” at http://en.wikipedia.org/wiki/Entropy_rate

[16] pp. 47-48, Goldman, Stanford, 1953, __Information Theory__. Prentice-Hall: New York.

[17] *IBID*.

[18] See Sarwate, Anand, 2013, “C.R. Rao and Information Geometry,” posted on April 13, 2013 at __The Ergodic Walk__ web address http://ergodicity.net/2013/04/13/c-r-rao-and-information-geometry/

[19] p. 45, Goldman, Stanford, 1953, __Information Theory__. Prentice-Hall: New York.

## Information Measurement with SQL Server, Part 2.4: Conditional and Joint Entropy

**By Steve Bolton**

…………Since this series on using SQL Server to implement the whole gamut of information metrics is wide-ranging in nature, it will also be somewhat disorganized by necessity; this is doubly true, given that I’m writing it in order to learn the material faster, not because I’m already intimately familiar with these topics. Nonetheless, I’m trying to provide what structure I can along the way, including segregating the most important entropy measures of information theory in this early segment. I’m slowly ratcheting up the level of complexity within this section by introducing simpler concepts first and using them as stepping stones to more difficult ones, yet this week’s topic differs in the manner of its complexity from that of the previous article. The foray we took into thermodynamic entropies in the last post was difficult due to the depth of the subject matter whenever definitions of qualities like “randomness” and “order” are in play. I put off discussing Conditional and Joint Entropy until this post because their complexity is of a different type; basically, they just involve binary comparisons of entropy rather than the simple unary metrics I introduced in Information Measurement with SQL Server, Part 2.1: The Uses and Abuses of Shannon’s Entropy and Information Measurement with SQL Server, Part 2.2: The Rényi Entropy and Its Kin. They’re among the first topics discussed in texts on information theory and coding theory and shouldn’t be too difficult to fathom, if readers could swallow the last two installments in this series. Coding them is not terribly difficult, although they do present some challenges in terms of performance and interpretation.

…………One of the benefits of reading my amateur self-tutorials is that you get sample T-SQL code and a few paragraphs of commentary, which basically saves the hassle of having to consult gigantic math tomes chock full of arcane equations. Data miners shouldn’t have to give a dissertation on the theorems and lemmas that justify the underlying algorithms, any more than a commuter should be required to give a dissertation on automotive engineering in order to get a license. To that end, I usually remove all of the associated equations to suit my target audience of SQL Server DBAs and data miners, who can probably grasp them much more easily in T-SQL form. I broke that rule in the article on Shannon’s Entropy for a sound reason and this time around, I’ll bend it a little in order to clear up some possible sources of confusion about probability notation. Symbols like P(A|B) denote a conditional probability, which signifies, “Given a value for B, what is the probability of A?” Joint probability is denoted by P(A,B) or P(AB), which can be read as, “What is the probability of two specific values for A and B occurring together?” Both concepts have counterparts in information theory, where we simply replace the P with the H symbol to denote entropy rather than probability. This may come in handy if readers want to try one of several alternative formulas for arriving at the same figures with these entropies, or want to double-check my sample T-SQL in Figure 1 (which is always a wise idea, since I don’t yet have any real expertise in these subjects).
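To make the notation concrete, here is a minimal Python sketch showing how P(A|B) falls out of P(A,B) and the marginal P(B); the toy joint distribution and helper names are my own inventions for illustration:

```python
# A made-up joint distribution P(A,B) over two binary variables
joint = {('a1', 'b1'): 0.3, ('a1', 'b2'): 0.2,
         ('a2', 'b1'): 0.1, ('a2', 'b2'): 0.4}

def marginal_b(b):
    # P(B): sum the joint probabilities over all values of A
    return sum(p for (a, bb), p in joint.items() if bb == b)

def conditional_a_given_b(a, b):
    # P(A|B) = P(A,B) / P(B)
    return joint[(a, b)] / marginal_b(b)
```

For instance, P(b1) works out to 0.4 here, so P(a1|b1) = 0.3 / 0.4 = 0.75; the entropic counterparts H(A|B) and H(A,B) follow the same pattern of conditioning and summation.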

**Unit and Scaling Issues in the Sample T-SQL**

I chose the methods for calculating both that I believed would mesh well with the code I posted a few articles ago for Shannon’s Entropy. Although Figure 1 is somewhat lengthier than the sample T-SQL in that post, the same basic format is at work. Once again, I include a @LogarithmBase parameter so that users can easily switch between base 2, base 10 and Euler’s Number to derive the three main units used for entropy: bits (i.e. shannons), bans (i.e. hartleys) and nats respectively. The INSERT INTO populates a table variable by selecting from the same Higgs Boson dataset, which is ideal for stress-testing the kind of routines I’ve been posting for the last few tutorial series, since it has 11 million rows of mainly float columns.[1] Shannon’s Entropy is derived in precisely the same way as before by using intermediate table variable columns like SummationInput1, except that it must be done for two of the dataset’s float columns this time around. The tricky part is the use of the GROUP BY CUBE statement, which allows us to simultaneously calculate the joint proportions of both columns without traversing the table repeatedly. The GROUPING_ID conditions derived from it are used to distinguish the aggregates of the two columns and their joint proportions in the CASE statements, which were not necessary in the original sample T-SQL for Shannon’s Entropy. We likewise need to take two additional counts in this routine, since we need to perform calculations on an additional column and its joint distribution with the other; in this particular instance Count1 and Count2 are equal to the count of the whole dataset, i.e. the JointCount, simply because the Higgs Boson dataset has no null values for these columns. This is not going to be the case with many other datasets.
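For readers unfamiliar with GROUP BY CUBE, the following rough Python analog (using toy rows I made up for illustration) may clarify what the single pass produces; each Counter below corresponds to one grouping set, distinguished in the T-SQL by its pair of GROUPING_ID values:

```python
from collections import Counter

# Toy stand-ins for the two columns being grouped
rows = [(1, 'x'), (1, 'y'), (2, 'x'), (1, 'x')]

# GROUP BY CUBE (Column1, Column2) returns all four grouping sets in one pass;
# the GROUPING_ID values tell the CASE statements which rows belong to which set
joint_counts = Counter(rows)                 # GROUPING_IDs (0, 0): joint proportions
col1_counts  = Counter(r[0] for r in rows)   # GROUPING_IDs (0, 1): Column1 aggregates
col2_counts  = Counter(r[1] for r in rows)   # GROUPING_IDs (1, 0): Column2 aggregates
grand_total  = len(rows)                     # GROUPING_IDs (1, 1): the grand total row
```

The point of the CUBE trick is that the database engine produces all of these aggregates from one traversal of the table, rather than the three separate scans that naïve code would require.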

…………It is also good to keep in mind that I’m once again cheating a little by using known proportions as probability values, which could also be derived from alternative sources like sampling, probability distribution formulas and deductive reasoning about the underlying processes, as is possible with simple examples like dice and card games. Most of the texts I’ve read on information theory and coding theory begin with the same dice and card examples, but in most real-world applications, the underlying processes are far too complex to reason them out in advance. As I mentioned in the articles on the Hartley Function and thermodynamic entropies, in some cases it may be necessary to calculate probabilities across all possible values of a particular column, but that can get messy since it involves taking permutations and combinations over large data types. There’s simply no way we’re going to cram all 10^{38} + 1 values that are permissible in a decimal(38,0) column into standard combinatorics formulas, given that the highest value we can use for factorials is about 170. This is an inherent limitation of SQL Server’s data types, which actually do a much better job at this than those of many of its competitors in the data mining market, like Minitab and WEKA. By using existing proportions, we can eliminate millions of values that are not found in the dataset, but for which we might have to assign nonzero values if we were using a uniform distribution or the like. It is worthwhile to note though that this kind of subtle fudging is more appropriate to the sizes of the tables used in relational databases and cubes than in ordinary scientific analysis, since we have extensive counts that can be leveraged in this manner. Our roles are almost reversed: it is far easier for DBAs and Analysis Services users to leverage actual counts of this kind, whereas it is much easier for scientists to perform calculations involving permutations, factorials and the like.

…………I also took the easy way out in setting the table variable data types, since Figure 1 is merely intended to convey the underlying concepts rather than to serve as production code. Value1 and Value2 in the table variable are set to decimal(33,29) because that’s the precision and scale of the original columns, whereas the proportions are set to decimal(38,37), since they should never exceed 1 and therefore require just 1 digit to the left of the decimal place. The SummationInput columns are set to (38,21) arbitrarily, since I’m familiar with these two columns and was certain that this was enough space to accommodate them, while retaining enough precision to get the point across; I could have used floats, but kept getting annoying floating point rounding errors at lower precisions than I expected. The validation code in the middle of the routine can be uncommented if users want to inspect the table variable’s individual values or make sure that the proportions sum to 1; these checks may reveal a few rounding errors that I haven’t yet been able to track down and weed out, but they’re occurring at trivial precision values and don’t really detract from the lessons. The good news is that once we calculate either the Joint Entropy or Conditional Entropy, it is trivial to derive the other using the Chain Rule[2], which involves some simple subtraction or addition operations. We therefore get two metrics for the price of one. I wagered that it would be less costly to derive the Joint Entropy first, since it can be done alongside the individual Shannon’s Entropy values less awkwardly than the Conditional Entropy can through alternative formulas.

**Figure 1: T-SQL for Joint and Conditional Entropy**

```sql
DECLARE @LogarithmBase decimal(38,36)
SET @LogarithmBase = 2 -- 2.7182818284590452353602874713526624977 -- 10

DECLARE @Count1 bigint, @Count2 bigint, @JointCount bigint, @ShannonsEntropy1 float, @ShannonsEntropy2 float, @JointEntropy float

SELECT @Count1 = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL

SELECT @Count2 = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column2 IS NOT NULL

SELECT @JointCount = Count(*)
FROM DataMiningProjects.Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL OR Column2 IS NOT NULL

DECLARE @EntropyTable table
(Value1 decimal(33,29),
Value2 decimal(33,29),
ValueCount bigint,
Proportion1 decimal(38,37),
Proportion2 decimal(38,37),
JointProportion decimal(38,21),
SummationInput1 decimal(38,21),
SummationInput2 decimal(38,21),
JointSummationInput decimal(38,21)
)

INSERT INTO @EntropyTable
(Value1, Value2, ValueCount, Proportion1, Proportion2, JointProportion, SummationInput1, SummationInput2, JointSummationInput)
SELECT Value1, Value2, ValueCount, Proportion1, Proportion2, JointProportion,
Proportion1 * Log(Proportion1, @LogarithmBase) AS SummationInput1,
Proportion2 * Log(Proportion2, @LogarithmBase) AS SummationInput2,
JointProportion * Log(JointProportion, @LogarithmBase) AS JointSummationInput
FROM (SELECT Value1, Value2, ValueCount,
      CASE WHEN GroupingIDColumn1 = 0 AND GroupingIDColumn2 = 1 THEN ValueCount / CAST(@Count1 AS float) ELSE NULL END AS Proportion1,
      CASE WHEN GroupingIDColumn1 = 1 AND GroupingIDColumn2 = 0 THEN ValueCount / CAST(@Count2 AS float) ELSE NULL END AS Proportion2,
      CASE WHEN GroupingIDColumn1 = 0 AND GroupingIDColumn2 = 0 THEN ValueCount / CAST(@JointCount AS float) ELSE NULL END AS JointProportion,
      GroupingIDColumn1, GroupingIDColumn2
      FROM (SELECT Column1 AS Value1, Column2 AS Value2, Count(*) AS ValueCount, GROUPING_ID(Column1) AS GroupingIDColumn1, GROUPING_ID(Column2) AS GroupingIDColumn2
            FROM DataMiningProjects.Physics.HiggsBosonTable
            WHERE Column1 IS NOT NULL AND Column2 IS NOT NULL
            GROUP BY CUBE (Column1, Column2)) AS T1) AS T2

-- for validation
--SELECT * FROM @EntropyTable
--SELECT SUM(Proportion1), SUM(Proportion2), SUM(JointProportion)
--FROM @EntropyTable

SELECT @ShannonsEntropy1 = SUM(SummationInput1) * -1,
@ShannonsEntropy2 = SUM(SummationInput2) * -1,
@JointEntropy = SUM(JointSummationInput) * -1
FROM @EntropyTable

SELECT @ShannonsEntropy1 AS ShannonsEntropyForX, @ShannonsEntropy2 AS ShannonsEntropyForY, @JointEntropy AS JointEntropy,
@ShannonsEntropy1 + @ShannonsEntropy2 AS SumOfIndividualShannonEntropies,
@JointEntropy - @ShannonsEntropy1 AS ConditionalEntropyOfBGivenA,
@JointEntropy - @ShannonsEntropy2 AS ConditionalEntropyOfAGivenB
```
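As a sanity check on the logic, here is a small Python mirror of the routine’s final SELECT, run against a toy list of pairs standing in for the two float columns; the sample data and helper names are mine, for illustration only:

```python
import math
from collections import Counter

def entropy(counts, total, base=2.0):
    # -SUM(proportion * LOG(proportion, base)), as in the SummationInput columns
    return -sum((c / total) * math.log(c / total, base) for c in counts.values())

# Toy stand-ins for Column1 and Column2; no NULLs, so all three counts coincide
pairs = [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'a'), (3, 'b'), (1, 'a')]
n = len(pairs)

h_x     = entropy(Counter(p[0] for p in pairs), n)  # Shannon's Entropy of X
h_y     = entropy(Counter(p[1] for p in pairs), n)  # Shannon's Entropy of Y
h_joint = entropy(Counter(pairs), n)                # Joint Entropy H(X,Y)

# Chain Rule, as in the final SELECT: H(Y|X) = H(X,Y) - H(X) and H(X|Y) = H(X,Y) - H(Y)
h_y_given_x = h_joint - h_x
h_x_given_y = h_joint - h_y
```

On any input, the results should satisfy the validity tests discussed for Figure 2: the Joint Entropy is at least as large as either individual entropy but no larger than their sum, and each Conditional Entropy never exceeds the corresponding individual entropy.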

**Figure 2: Results from the Higgs Boson Dataset**

…………Both Joint and Conditional Entropy must meet a bevy of validity tests, which the results in Figure 2 pass with flying colors. First, the Shannon’s Entropy for the first float column is identical to the results we received in previous articles. Likewise, the Joint Entropy is greater than the individual entropies of both columns, but less than their sum, as indicated by SumOfIndividualShannonEntropies.[3] As expected, the ConditionalEntropyOfBGivenA is below the Shannon’s Entropy of the second column and the ConditionalEntropyOfAGivenB is below that of the first. One unexpected result was that the JointEntropy is close to the output of the multiset versions of the Hartley function I coded in Information Measurement with SQL Server, Part 1: A Quick Review of the Hartley Function, which returned 23.2534966642115 and 23.3910011060398 respectively. This may merely be a fluke, since Column2 has a very different distribution and the Hartley function only took Column1 into account in that particular article. Moreover, I tested this procedure on other combinations of float columns in the same dataset and received results between 22 and 23 most of the time, even when Column1 wasn’t being tested at all. I’m not yet well-versed enough in these matters to say whether this indicates that we could derive additional insights from our data by comparing each column’s Hartley function against the Joint Entropy.

…………The performance implications are of more immediate concern, of course. These two columns have 9,119,674 distinct combinations between them, which required a few gigabytes of TempDB space for spooling; SQL Server also gobbled up a few more gigs of RAM during these calculations, so it is probably wise to keep ample memory at hand. The good news is that this code ran in just 1:46, which is much better than I expected for multiple calculations on two float columns across 11 million rows. I have to perform these kinds of calculations on a beat-up clunker of a development machine that routinely falls apart like the Blues Brothers’ car, so the results on professional hardware are likely to be several orders of magnitude better. They could also probably benefit from a tune-up in the hands of a qualified T-SQL expert, which I am not. I’ve always had a little trouble with grouping statements, so there may be more efficient ways to write this; I doubt that would involve calculating the counts in the INSERT though, even though this could be done, albeit more awkwardly than taking them in advance. The INSERT statement accounted for 96 percent of the batch and 46 percent of those costs were incurred in a single Sort, so that operator might be a good target for performance tweaks. Another 49 percent was locked up in a single Table Insert operator, which probably can’t be dispensed with through optimization. The execution plan was short and otherwise uneventful though, since seeks and scans on the nonclustered indexes of the two columns did the bulk of the heavy lifting.

**Interpretations and Use Cases for Joint and Conditional Entropies**

…………So why bother going to the trouble of calculating these metrics at all? There’s no point in expending any server resources without first answering Aristotle’s *causa efficiens*. There are several different categories of use cases for these measures, some of which are really straightforward: whenever any question arises in any data model concerning the “news value” of one column given a value for another, or of a state description with two specific values for those columns, these are usually the measures we’d turn to. Conditional Entropy answers the fundamental question “How much can knowing a value of Y tell us about X?” whereas Joint Entropy measures the same thing as Shannon’s Entropy, except for two or more columns. As I’ve emphasized throughout this series, interpretation is a critical stage in any workflow involving information theory metrics, and in the case of these binary entropies, their subtle implications give rise to a whole additional class of use cases. For example, knowing Y can tell us everything about X if the value of Conditional Entropy is zero, but progressively less as the value rises; in this roundabout way, it becomes a sort of measure of association by proxy, which might be integrated with more familiar measures like correlation, covariance and regression. Conditional Entropy is also sometimes referred to as “equivocation,” especially when it is interpreted as a conditional uncertainty. If we were speaking strictly of the original application of information theory to communication channels, then we could view a change in equivocation as a “decrease of uncertainty as to what message might have been enciphered.”[4] It is also used to “justify the definition for channel capacity.”[5] Wendell R. Garner’s excellent but largely unnoticed work from 1962, *Uncertainty and Structure as Psychological Concepts*, is chock full of formulas in which Conditional Entropy can be transmuted into various measures of uncertainty interactions[6], which are akin to the interaction effects in analysis of variance (ANOVA). These might be useful in the kind of “uncertainty management” programs I delved into in the Implementing Fuzzy Sets with SQL Server series. Some of these related measures, like conditional and contingent uncertainty, can be used to quantify such qualities as redundancy, the relationships between variables and the amount of information transmitted between them.[7] The math is rather thick, but Garner’s arguments are sufficiently rigorous to demonstrate connections between Conditional Entropy and both “irrelevant information”[8] and patterns; as he puts it, “structure is related uncertainty…structure is still the important kind of uncertainty, and unstructured uncertainty is equivocation, or noise, or error, whichever you prefer. It is uncertainty which is unrelated to another uncertainty.”[9] It also has subtle relationships with measures of “irony”[10] and a priori knowledge.[11]

…………The third class of use cases for these entropies involves using them to calculate other information metrics, just as we derived Conditional Entropy from Joint Entropy in this article. In like manner, Joint Entropy can be useful as a stepping stone to Mutual Information, one of the most important measures in information theory. It would thus be wise to have it in our toolbelts, if we want to go beyond what the existing software can do and do some wildcat data mining, using DIY code for metrics and algorithms that haven’t been implemented yet. One of the surprises I’ve encountered while trying to learn these fields in recent years has been the sheer size of the yawning gap between the research and theory that undergirds data mining and the available software, which is in some respects decades behind the theoreticians. Only a fraction of the available measures and techniques are available anywhere in the analytics marketplace. Since this situation is likely to last for a long time to come, it may be helpful to acquire the skills to develop our own DIY solutions, which is where posts like this might prove useful (perhaps as cautionary tales against making the same amateur mistakes). To that end, I’ll also address a couple of up-and-coming metrics known as Shared Information Distance and Lautum Information, both of which are related to the more established topic of Mutual Information. It would be feasible to take this segment of the series on a whole array of tangents, including coding all of the information theory counterparts of probabilistic concepts like multiple outcomes, independent events, mutually exclusive events, non-exclusive, unordered pairs and compound events. All of these have equivalent implementations and ramifications in terms of entropy, but I’ll restrict my scope to actual information metrics in keeping with the title of the series, rather than all of the calculations and principles that can be derived from them. 
In the next article I’ll also dispense with Self Information, which is the entropic counterpart to probabilities for a single record. I’ll complete my wrap-up of this segment of the Information Measurement series with brief discussions of Entropy Rate and Information Rate, which are fairly simple to code.

[1] I downloaded this ages ago from the University of California at Irvine’s Machine Learning Repository and converted it to a SQL Server table, which now takes up about 5 gigabytes of space in a sham DataMiningProjects database.

[2] See the __Wikipedia__ pages “Conditional Entropy” and “Entropy (Information Theory)” at http://en.wikipedia.org/wiki/Conditional_entropy and http://en.wikipedia.org/wiki/Entropy_(information_theory) respectively.

[3] See the __Wikipedia__ page “Joint Entropy” at http://en.wikipedia.org/wiki/Joint_entropy

[4] p. 272, Pierce, John Robinson, 1980, __An Introduction to Information Theory: Symbols, Signals & Noise__. Dover Publications: New York. Also see Pierce, John Robinson, 1961, __Symbols, Signals and Noise: The Nature and Process of Communication__. Harper: New York.

[5] pp. 49-62, Mansuripur, Masud, 1987, __Introduction to Information Theory.__ Prentice-Hall: Englewood Cliffs, N.J.

[6] p. 106, Garner, Wendell R., 1962, __Uncertainty and Structure as Psychological Concepts__. Wiley: New York.

[7] *IBID*., p. 96, 136.

[8] *IBID*., p. 316.

[9] *IBID*., p. 339.

[10] p. 46, Ritchie, L. David., 1991, __Information__. Sage Publications: Newbury Park, Calif.

[11] p. 19, Brillouin, Léon, 1962, __Science and Information Theory__. Academic Press: New York.

## Information Measurement with SQL Server, Part 2.3: Thermodynamic and Quantum Entropies

**By Steve Bolton**

…………When I was about 12 years old, I suddenly discovered football. Many lessons still awaited far in the future – such as the risks of being a Buffalo Bills fan, the explosive sound footballs make when they hit a Saguaro cactus, or the solid reasons for not playing tackle on city streets – but I did make one important finding right away. During the following summer vacation, I fought off boredom by looking up old rosters (in a sort of precursor to fantasy football) and running the previous year’s standings through the NFL’s comprehensive formula for creating schedules. The rules were simple: teams with higher won-loss percentages were the victors whenever they were scheduled against opponents with worse records the previous year, then at the end of each fake season, I ran the new standings through the scheduling formula again. Based on the initial patterns I expected my eccentric amusement to last for a while, but what I ended up getting was a hard lesson in information entropy. No matter what standings I plugged into the formula, they eventually stopped changing, given enough iterations; the best I could get was permanent oscillations of two teams who swapped positions in their division each year, usually with records of 9-7 and 7-9. It occurred to me that I could increase the range of possible patterns by changing the scheduling rules, but that I was merely putting off the inevitable, for in due time they would freeze in some configuration or another. The information system was purely self-contained and was not dependent on any input of physical energy whatsoever; some of my favorite underrated players of the day, like Neil Lomax and Danny White, might have been able to change the course of a real NFL season, but they were powerless to alter this one. This experience helped me to quickly grasp the Law of Conservation of Information[1] when I heard of it as an adult.
It is analogous to the Second Law of Thermodynamics, but for a different quantity that isn’t really interchangeable. All of the energy of the sun itself could not affect the outcome of that closed information system, which was doomed to run down sooner or later, once set in motion. The only way to change that fate was to input more information into the system, by adding new scheduling rules, random standing changes and new teams. This is almost precisely what occurs in the Third Law of Thermodynamics[2], in which a closed physical system slowly loses its ability to change as it approaches equilibrium of some kind.

…………Both sets of principles are intimately related to entropy, which is the topic of this segment of my amateur self-tutorial series on coding various information metrics in SQL Server. Nevertheless, both are subject to misunderstandings, some of which are innocuous and others of which can be treacherously fallacious, particularly when they are used interchangeably with broad terms like “order.” The most common errors with information theory occur in its interpretations rather than its calculations, since it intersects in complex ways with deep concepts like order, as well as many of the other fundamental quantities I hope to measure later in this series, like redundancy, complexity and the like. As I touched on in my lengthy introduction to Shannon’s Entropy, order is in the eye of the beholder. The choice to search for a particular order is thus entirely subjective, although whether or not a particular empirical observation meets the criteria inherent in the question is entirely objective. Say, for example, that the arrangement of stars was considered “random,” another broad term which intersects with the meaning of “order” but is not synonymous. If we were to discover a tattoo on the arm of some celestial being corresponding to the arrangement of stars in our particular universe, we would have to assume that it was either put there through conscious effort or as the output of some unfathomable natural process. Either way, it would require an enormous amount of energy to derive that highly complex order. It might also require a lot of energy to derive an unnaturally simple order through the destruction of a fault-tolerant, complex one; furthermore, in cryptography, it sometimes requires a greater expenditure of resources to derive deceptively simple information, which is actually generated from a more complex process than the information it is designed to conceal.
A perfectly uniform distribution is often difficult to derive either by natural processes or intelligent intervention; the first examples that spring to mind are the many machinists I know, who do highly skilled labor day-in, day-out to create metal parts that have the smoothest possible tolerances. If kids build a sand castle, that’s one particular form of order; perhaps a real estate developer would prefer a different order, in which case he might bulldoze the sand castle, grade the beach and put up a hotel. Both require inputs of energy to derive their particular orders, which exhibit entirely different levels of complexity. Neither “disorder” nor “order” is synonymous with entropy, which only measures the capacity of an information, thermodynamic or quantum system to be reordered under its own impetus, without some new input of information or energy from some external source.

**Adjusting Hartley and Shannon Entropy by Boltzmann’s Constant**

As we shall see, coding the formulas for the original thermodynamic measures isn’t terribly difficult, provided that we’ve been introduced to the corresponding information theory concepts covered in the first four blog posts of this series. Despite the obvious similarities in the underlying equations, measures of entropy are the subject of subtle differences in the way these fields handle them. In the physical sciences, entropic states are often treated as something to be avoided, particularly when cosmic topics like the Heat Death of the Universe are brought up.[3] Entropy is often embraced in data mining and related fields, because it represents complete knowledge; paradoxically, the higher the entropy, the higher the potential information gain from adding to our existing knowledge. The more incomplete our understanding is, the more benefit we can derive from “news,” so to speak.

…………It’s not surprising that the measures of entropy in thermodynamics and information theory have similar formulas, given that the former inspired the latter. Information theory repaid this debt to physics by giving birth to quantum information theory and by spotlighting the deep principles which give rise to the laws of thermodynamics – which were originally empirical observations rather than inevitable consequences of logic. Both ultimately stem from the same principles of math and logic, like the Law of Large Numbers, but as usual, I’ll skip over the related theorems, proofs and lemmas to get to the meat and potatoes. Nor am I going to explain the principles of thermodynamics any more than I have to; this blog is not intended to be a resource for basic science, especially since that topic is much older and better-known than information theory, which means there are many more resources available for readers who want to learn more. This detour in my Information Measurement series is posted mainly for the sake of completeness, as well as the off-chance that some readers may encounter use cases where they have to use T-SQL versions of some of these common thermodynamic formulas; I also hope to illustrate the differences between physical and informational entropies, as well as demonstrate how more concepts of thermodynamics can be assimilated into information theory and put to good use in our databases.

…………On the other hand, I’m going to make another exception to my unwritten rule against posting equations on this blog, in order to point out the obvious similarities between Shannon’s Entropy and its cousin in thermodynamics, the Gibbs Entropy (the names Shannon-Gibbs, Boltzmann-Gibbs, Boltzmann–Gibbs–Shannon, BGS and BG Entropy are all used interchangeably).[4] As I noted in Information Measurement with SQL Server, Part 2.1: The Uses and Abuses of Shannon’s Entropy, H = -Σ p_{i} log_{b} p_{i} is one of the most famous equations in the history of mathematics. Except for the fact that it multiplies the result by the infamous Boltzmann’s Constant, k_{b},[5] the Gibbs Entropy is practically identical: S = -k_{b} Σ p_{i} log_{b} p_{i}. The thermodynamic formulas are distinguishable from their information theory kin in large part by the presence of this constant, which is measured in different branches of science via 15 different units[6]; in the code in Figure 1, I used the formulas corresponding to the three main units used in information theory, which I introduced in the article on Hartley’s Function. Its discoverer, Austrian physicist Ludwig Boltzmann (1844-1906), was another one of those unbalanced math whizzes who bequeathed us many other crucial advances in the hard sciences, but who was eccentric to the point of self-destruction; he was in all likelihood a manic depressive, which may have led to his suicide attempts.[7] He also feuded with Ernst Mach (1838-1916), another influential Austrian physicist – yet that may have been to his credit, given that Mach was one of the last holdouts in the field who opposed the existence of atoms.[8]
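To make the relationship concrete, here is a hedged Python sketch of the two formulas side by side; the toy distribution is my own, and the point is simply that the Gibbs Entropy is the Shannon's Entropy in nats scaled by Boltzmann's Constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann's Constant in joules per kelvin (exact since the 2019 SI redefinition)

def shannon_entropy_nats(probs):
    # H = -sum(p_i * ln p_i), i.e. Shannon's Entropy measured in nats
    return -sum(p * math.log(p) for p in probs if p > 0)

def gibbs_entropy(probs):
    # S = -k_B * sum(p_i * ln p_i): the same summation, scaled by k_B
    return K_B * shannon_entropy_nats(probs)

probs = [0.5, 0.25, 0.25]  # a made-up distribution for illustration
```

The scale of the constant explains why thermodynamic entropies come out as minuscule physical quantities while informational entropies are convenient dimensionless numbers.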

…………Boltzmann also lent his name to the Boltzmann Entropy, whose formula is obviously similar to the Hartley function. As I pointed out in Information Measurement with SQL Server, Part 1: A Quick Review of the Hartley Function, deciding which count of records to plug into the Hartley measure is not as straightforward as it seems; as expected, the DISTINCT count version returned the same results as a uniform distribution plugged into Shannon’s equation, but counting repeated records (as we would in a multiset) can also tell us useful things about our data. With Boltzmann’s Entropy, we’re more likely to use a really expansive definition of cardinality that incorporates all permissible arrangements in a particular space. This would be equivalent to using the Hartley measure across all permissible states, regardless of their probabilities, which would in turn equal Shannon’s Entropy on a uniform distribution where all permissible states (not just the ones with nonzero values) are included. Counting all permissible states is usually a lot easier in physics than determining their probabilities, which is in turn a far cry from determining the number of unique particle types, let alone the exact counts of particles. In modern databases, this relationship is almost completely reversed; in SQL Server, table counts are preaggregated and thus instantly available, while DISTINCT clauses are costly in T-SQL and even more so in Analysis Services. To make matters worse, calculating a factorial on values higher than about 170 is impossible, even using the float data type; this precludes counting all 10^{38} + 1 permissible values in a decimal(38,0) column and plugging them into the Boltzmann Entropy, which uses factorials to precalculate the counts plugged into it. In other words, we simply can’t use the Boltzmann Entropy on all permissible values if the universal set would contain more than 170 members.
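The factorial ceiling is easy to demonstrate outside of T-SQL. This hypothetical Python snippet (my own illustration, not the article’s code) shows the overflow just past 170, along with a log-gamma workaround that computes ln(n!) without ever materializing the factorial:

```python
import math

def boltzmann_entropy(state_count, base=2.0):
    """S = k * log_b(W) for W equally probable microstates, with k set to 1
    so the result comes out in information units rather than joules per kelvin."""
    return math.log(state_count, base)

# A float can hold factorial(170) (~7.26e306), but factorial(171) overflows:
float(math.factorial(170))
try:
    float(math.factorial(171))
except OverflowError:
    print("factorial(171) exceeds the float range")

# math.lgamma(n + 1) returns ln(n!) directly, sidestepping the overflow:
log2_of_171_factorial = math.lgamma(172) / math.log(2)
```

No such built-in log-gamma exists in T-SQL, where it would have to be approximated (e.g. via Stirling’s formula), which is one more reason to settle for plain row counts in the sample code.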
For that reason, I used the old-fashioned SQL Server count, in this case on the first float column of the same Higgs Boson dataset I’ve been using for practice purposes for several tutorial series.[9] Despite the fact that the table takes up 5 gigabytes and consists of 11 million rows, my sample code ran in just 1 second on my clunker of a development machine. The code is actually quite trivial and can be condensed from the deliberately verbose version below:

**Figure 1: Various Measures of Thermodynamic Entropy**

```sql
DECLARE @LogarithmBase decimal(38,36)
SET @LogarithmBase = 2 -- 2.7182818284590452353602874713526624977 -- 10

DECLARE @EntropicIndexParameter float = 0.99
DECLARE @Count bigint, @Mean decimal(38,32), @StDev decimal(38,32), @GibbsEntropy float,
@BoltzmannEntropy float, @TsallisEntropy float, @ConfigurationEntropy float, @BoltzmannConstant float

SELECT @BoltzmannConstant = CASE WHEN @LogarithmBase = 2 THEN 1 / CAST(Log(2) AS float)
       WHEN @LogarithmBase = 10 THEN 1 / CAST(Log(10) AS float)
       WHEN @LogarithmBase = 2.7182818284590452353602874713526624977 THEN 1
       ELSE NULL END

SELECT @Count = Count(*)
FROM Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL

DECLARE @EntropyTable table
(Value decimal(33,29),
ValueCount bigint,
Proportion float
)

INSERT INTO @EntropyTable
(Value, ValueCount, Proportion)
SELECT Value, ValueCount, ValueCount / CAST(@Count AS float) AS Proportion
FROM (SELECT Column1 AS Value, Count(*) AS ValueCount
      FROM Physics.HiggsBosonTable
      WHERE Column1 IS NOT NULL
      GROUP BY Column1) AS T1

SELECT @GibbsEntropy = -1 * @BoltzmannConstant * SUM(CASE WHEN Proportion = 0 THEN 0 ELSE Proportion * CAST(Log(Proportion, @LogarithmBase) AS float) END),
@TsallisEntropy = (1 / CAST(@EntropicIndexParameter - 1 AS float)) * (1 - SUM(CAST(Power(Proportion, @EntropicIndexParameter) AS float))),
@ConfigurationEntropy = -1 * @BoltzmannConstant * SUM(Proportion * Log(Proportion, @LogarithmBase))
FROM @EntropyTable

SELECT @BoltzmannEntropy = @BoltzmannConstant * Log(@Count, @LogarithmBase)

SELECT @BoltzmannConstant AS BoltzmannConstant, @BoltzmannEntropy AS BoltzmannEntropy, @GibbsEntropy AS GibbsEntropy, @ConfigurationEntropy AS ConfigurationEntropy, @TsallisEntropy AS TsallisEntropy
```

**Figure 2: Results from the Higgs Boson Dataset**

…………The formula used here for Boltzmann’s Constant is based on a common approximation of 1 / Log(2), which introduces inaccuracy as early as the third decimal place; greater precision can be achieved for the common Base-2 log using more accurate approximations, such as a hard-coded constant of 1.3806485097962231207904142850573. Of course, the results in Figure 2 would only make sense if the particular columns measured thermodynamic quantities, and I’m not familiar enough with the semantic meaning of the Higgs Boson data to say whether that’s the case. The same is true of the Tsallis Entropy (which may be identical to the Havrda-Charvát Entropy),[10] derived in the first SELECT after the INSERT, which is the thermodynamic counterpart of the Rényi Entropy we covered a few articles ago. The two formulas are almost identical, except that the Tsallis takes an @EntropicIndexParameter similar to the @AlphaParameter used in the Rényi, which likewise cannot equal 1 because that would lead to a divide-by-zero error. As it approaches this forbidden value, however, it nears the Gibbs Entropy, just as the Rényi Entropy approaches Shannon’s measure. At 0 it is equivalent to the DISTINCT COUNT minus 1. A wide range of other values have proven useful in multifarious physics problems, as well as in quantifying “sensitivity to initial conditions and entropy production at the edge of chaos,”[11] which could prove useful when I tackle chaos theory at the tail end of this series. The formula for the Configuration Entropy[12] is exactly the same as that for the Gibbs Entropy, except that it measures possible configurations rather than the energy of a system; for our immediate purposes in translating this for DIY data mining purposes, the number of configurations and the energy are both equivalent to the information content.
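To make the convergence claim concrete, here is a small Python sanity check of the Tsallis formula (my own illustration, with Boltzmann’s Constant set to 1 and a toy distribution): as the entropic index q approaches the forbidden value of 1, the result approaches the Gibbs value in nats, and at q = 0 it returns the distinct count minus 1.

```python
import math

def tsallis_entropy(probs, q):
    """S_q = (1 / (q - 1)) * (1 - SUM(p_i ^ q)); q = 1 would divide by zero."""
    return (1.0 / (q - 1.0)) * (1.0 - sum(p ** q for p in probs))

def gibbs_entropy(probs):
    """Gibbs/Shannon entropy in nats, with Boltzmann's Constant set to 1."""
    return -sum(p * math.log(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.25]
print(gibbs_entropy(probs))            # ~1.0397
print(tsallis_entropy(probs, 0.99))    # ~1.043, already close to the Gibbs value
print(tsallis_entropy(probs, 0.0))     # 2.0 = the DISTINCT count of 3, minus 1
```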

…………Although all of the above measures are derived from thermodynamics, they have obvious similarities to and uses in information theory and, by extension, related fields like data mining. Some additional concepts from thermodynamics might prove good matches for certain SQL Server use cases in the future as well, although I have yet to see any formulas posted that could be translated into T-SQL at this point. The concept of Entropy of Mixing[13] involves measuring the change in entropy from the merger of two closed thermodynamic systems, separated by some sort of impermeable barrier. This might prove incredibly useful if it could be ported to information theory (assuming it hasn’t been already), since it could be used to gauge the change in entropy that occurs from the merger of two datasets or partitions. This might include appending new data in temporal order, partitioning sets by value or topic and incorporating new samples, all of which could be useful in ascertaining whether the possible information gain is worth the performance costs, in advance of the merge operation. In the same vein, a “perfect crystal, at absolute zero” has zero entropy and therefore no more potential for internally-generated change, but it may be in one of several microstates at the point it is frozen in stasis; this is what is measured by Residual Entropy,[14] which might be transferable to information theory by quantifying the states an information system can exhibit once its potential for internally-generated change is gone.

**Adapting Other Thermodynamic Entropies**

One of the most promising new information metrics is information enthalpy, which is an analogue of an older thermodynamic concept. The original version performed calculations on measures of energy, pressure and volume, in which the first term “can be interpreted as the energy required to create the system” and the second as the energy that would be required to “make room” for the system if the pressure of the environment remained constant.[15] It measures “the amount of heat content used or released in a system at constant pressure,” but is usually expressed as the change in enthalpy, as measured in joules.[16] This could be adapted to quantify the information needed to give rise to a particular data structure, or to assess changes in it as information is added or removed. The thermodynamic version can be further partitioned into the enthalpy due to such specific processes as hydrogenation, combustion, atomization, hydration, vaporization and the like; perhaps information enthalpy can be partitioned as well, except by processes specific to information and data science. Information enthalpy is the subject of at least one patent, for a cybersecurity algorithm that prevents data leaks based on security ratings.[17] At least two other research papers use information enthalpy for data modeling with neural nets, which is a subject nearer to my heart.[18] A more recent journal article uses it in measuring artificial intelligence,[19] which is also directly relevant to data mining and information theory.

…………Loop Entropy is not just specific to thermodynamics, but to particular materials within it, since it represents “the entropy lost upon bringing together two residues of a polymer within a prescribed distance.”[20] Nevertheless, it might be possible to develop a similar measure in information theory to quantify the entropy lost in the mixing between two probability spaces. Conformational Entropy is even more specific to chemistry, but it might be helpful to develop similar measures to quantify the structures an information system can take on, as this entropy does with molecular arrangements. Incidentally, the formula is identical to that of the Gibbs Entropy, except that the Gas Constant version of the Boltzmann Constant is used.[21] Likewise, Entropic Force is a concept mainly used to quantify phenomena related to Brownian motion, crystallization, gravity and “hydrophobic force.”[22] In recent years, however, it has been linked together with “entropy-like measures of complexity,” intelligence and the knowledge discovery principle of Occam’s Razor, which brings it within the purview of information theory.[23] I surmise that a similar concept could be put to good use in measuring entropic forces in data science, for such constructive purposes as estimating data loss and contamination, or even ascertaining tendencies of data to form clusters of a particular type. It is also possible that a more thorough relationship between thermodynamic free energy, Free Entropy and “free probability” can be fleshed out.[24] These are related to the “internal energy of a system minus the amount of energy that cannot be used to perform work,” but they might be useful outside of the thermodynamic context, if we can further partition measures like Shannon’s Entropy to strain out information that is likewise unavailable for our purposes.
I cannot think of possible adaptations for more distant thermodynamic concepts like Entropic Explosion off the top of my head, but that does not mean they are not possible or would not be useful in data science.

**Esoteric Quantum Entropies**

These measures of physical entropy are of course prerequisites for bleeding-edge topics like quantum mechanics, where the strange properties of matter below the atomic level of ordinary particle physics introduce mind-bending complications like entanglement. This gives rise to a whole host of quantum-specific entropies, which I won’t provide code for because they’re too far afield for my intended audience, the SQL Server community, where making Schrödinger’s Cat reappear like a rabbit out of a hat isn’t usually required in C.V.s. These metrics will only prove useful if we can find objects and data that can be modeled like quantum states, which might actually be feasible in the future. Thermodynamics served as a precursor to information theory, which in turn provided a foundation for the newborn field of quantum information theory, so further cross-pollination between these subject areas can be expected. This could arise out of information geometry, another bleeding-edge field that borrows concepts like Riemannian manifolds and multidimensional hyperspace and applies them to information theory; I hope that by the end of this wide-ranging series I’ll have acquired the skills to at least explore the topic, but the day when I can write tutorials on it is far off. Another interesting instance of cross-pollination is occurring as we speak between spin glasses, i.e. disordered magnets which are deeply interesting to physicists for their phase transitions, and neural nets, which apparently share analogous properties with them. It is nonetheless far more likely that some of the simpler thermodynamic concepts like enthalpy will be adapted for use in information theory (and by extension, data mining) before any of the quantum information measures I’ll quickly dispense with here.

…………The Von Neumann Entropy is one of the brands of entropy most frequently mentioned in books on quantum mechanics, like Ingemar Bengtsson’s *Geometry of Quantum States: An Introduction to Quantum Entanglement*[25] and Vlatko Vedral’s *Decoding Reality: The Universe as Quantum Information*,[26] but it merely extends the information theory and thermodynamic concepts we’ve already discussed, using the kind of high-level math employed in quantum physics. For example, “the Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy,”[27] except that we plug in a stochastic density matrix for the probabilities and use a Trace operation on it instead of a summation operator. Moreover, when the Von Neumann Entropy is calculated via its eigenvectors, it reduces to Shannon’s version.[28] Linear Entropy (or “Impurity”) is likewise in some respects an extension of the concept of entropic mixing to quantum information theory, the field birthed from the union of these two cutting-edge disciplines.[29]
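For readers who want to see that reduction concretely, here is a minimal pure-Python sketch of my own devising. It assumes a 2x2 real symmetric density matrix [[a, b], [b, d]], whose eigenvalues have a simple closed form; once the eigenvalues are in hand, the Von Neumann measure is just Shannon’s formula applied to them as if they were probabilities:

```python
import math

def von_neumann_entropy_2x2(a, b, d, base=2.0):
    """S = -Tr(rho * log rho) for the 2x2 real symmetric density matrix
    [[a, b], [b, d]], computed via its closed-form eigenvalues."""
    mean = (a + d) / 2.0
    radius = math.sqrt(((a - d) / 2.0) ** 2 + b ** 2)
    eigenvalues = (mean + radius, mean - radius)
    # Shannon's formula over the eigenvalues, skipping (near-)zero ones:
    return -sum(v * math.log(v, base) for v in eigenvalues if v > 1e-15)

# A pure state (a rank-1 projector) carries zero entropy:
print(von_neumann_entropy_2x2(1.0, 0.0, 0.0))
# The maximally mixed state I/2 carries exactly one bit:
print(von_neumann_entropy_2x2(0.5, 0.0, 0.5))   # 1.0
```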

…………Certain quantum-specific measures, such as the Belavkin-Staszewski Entropy, are obscure enough that it is difficult to find references on them anywhere, even in professional quantum theory and mathematical texts. I became acquainted with these quantum measures a while back while skimming works like Bengtsson’s and Vedral’s, but as you can tell from my citations, I had to rely heavily on Wikipedia to fill in a lot of the blanks (which were quite sizeable, given that I’ve evidently forgotten most of what I learned on the topic as a kid, when I imbibed a lot from my father’s moonlighting as a college physics teacher). Fortunately, I was able to find a Wikipedia article on the Sackur-Tetrode Entropy[30] that was just comprehensible enough to allow me to decipher its purpose. Evidently, it’s used to partition quantum entropies by the types of missing information they quantify, similar to how I used various measures of imprecision, nonspecificity, strife, discord, conflict and the like at the tail end of my Implementing Fuzzy Sets in SQL Server series to partition “uncertainty.” In those tutorials I in turn likened uncertainty partitioning to the manner in which variance is partitioned in analysis of variance (ANOVA).

…………It is much more common to find references to measures like Quantum Relative Entropy and Generalized Relative Entropy[31] in works on quantum mechanics; these are just highly specialized, souped-up versions of the Kullback-Leibler Divergence we’ll be tackling later in this series. They’re used to quantify the dissimilarity or indistinguishability of quantum states across Hilbert Spaces, which aren’t exactly everyday use cases for SQL Server DBAs and data miners (yet not entirely irrelevant, given that they’re constructed from inner products). On the other hand, the KL-Divergence they’re derived from is almost as important in information theory as Shannon’s Entropy, so I’ll be writing at length on it once I can set aside room in this series for a really long segment on distance and divergence metrics. It can certainly be put to a wide range of productive uses by SQL Server end users. The same can be said of the conditional and joint information entropies we’ll be tackling in the next article, alongside such ubiquitous measures as information rates. In one sense, they’re a little more complex than the topics we’ve covered so far in this series, since they’re binary relations between probability figures rather than simple unary measures. On the other hand, they’re far simpler and more useful than counterparts like Quantum Mutual Information and Conditional Quantum Entropy. I’m only mentioning these metrics and their quantum kin for the benefit of readers who want to learn more about information theory and don’t want to wade into the literature blind, without any inkling as to what these more esoteric entropies do or how compartmentalized their use cases are.
When applying information theory outside of quantum mechanics, we don’t need to concern ourselves with such exotic properties as non-separability and oddities like the fact that quantum conditional entropy can be negative, which is equivalent to a measure known as “coherent information.”[32] In contrast, I’ll provide code at the end of this segment of the series for measures like Mutual, Lautum and Shared Information, which indeed have more general-purpose use cases in data mining and knowledge discovery. The same can be said of next week’s article on Conditional and Joint Entropy, which are among the tried-and-true principles of information theory.

[1] This is also justified by such principles as the Data Processing Theorem, which demonstrate that a loss of information occurs at each step in data processing, since no new information can be added. It is actually a more solid proof than that its thermodynamic counterpart, since it stems directly from logical and mathematical consistency rather than empirical observations with unknown causes. For a discussion of the Data Processing Theorem, see p. 30, Jones, D.S., 1979, __Elementary Information Theory__. Oxford University Press: New York.

[2] Boundless Chemistry, 2014, “The Third Law of Thermodynamics and Absolute Energy,” published Nov. 19, 2014 at the __Boundless.com__ web address https://www.boundless.com/chemistry/textbooks/boundless-chemistry-textbook/thermodynamics-17/entropy-124/the-third-law-of-thermodynamics-and-absolute-energy-502-3636/

[3] See the Wikipedia webpage “Heat Death of the Universe” at http://en.wikipedia.org/wiki/Heat_death_of_the_universe

[4] See the __Wikipedia__ page “History of Entropy” at http://en.wikipedia.org/wiki/History_of_entropy

[5] See the __Wikipedia__ article “Boltzmann’s Entropy Formula” at http://en.wikipedia.org/wiki/Boltzmann%27s_entropy_formula

[6] I retrieved this value from the Wikipedia page “Boltzmann’s Constant” at http://en.wikipedia.org/wiki/Boltzmann_constant

[7] See the __Wikipedia__ article “Ludwig Boltzmann” at http://en.wikipedia.org/wiki/Ludwig_Boltzmann

[8] See the __Wikipedia__ page “Ernst Mach” at http://en.wikipedia.org/wiki/Ernst_Mach

[9] I originally downloaded this from the University of California at Irvine’s Machine Learning Repository and converted it into a single table in a sham SQL Server database called DataMiningProjects.

[10] See the __Wikipedia__ article “Talk:Entropy/Archive11” at http://en.wikipedia.org/wiki/Talk%3AEntropy/Archive11

[11] See the __Wikipedia__ webpage “Tsallis Entropy” at http://en.wikipedia.org/wiki/Tsallis_entropy

[12] See the __Wikipedia__ article “Configuration Entropy” at http://en.wikipedia.org/wiki/Configuration_entropy

[13] See the __Wikipedia__ page “Entropy of Mixing” at http://en.wikipedia.org/wiki/Entropy_of_mixing

[14] See the __Wikipedia__ article “Residual Entropy” at http://en.wikipedia.org/wiki/Residual_entropy

[15] See the __Wikipedia__ page “Enthalpy” at http://en.wikipedia.org/wiki/Enthalpy

[16] See the __ChemWiki__ webpage “Enthalpy” at http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/State_Functions/Enthalpy

[17] See the __Google.com__ webpage “High Granularity Reactive Measures for Selective Pruning of Information” at http://www.google.com/patents/US8141127

[18] Lin, Jun-Shien and Jang, Shi-Shang, 1998, “Nonlinear Dynamic Artificial Neural Network Modeling Using an Information Theory Based Experimental Design Approach,” pp. 3640-3651 in __Industrial and Engineering Chemistry Research__, Vol. 37, No. 9. Also see Lin, Jun-Shien; Jang, Shi-Shang; Shieh, Shyan-Shu and M. Subramaniam, M., 1999, “Generalized Multivariable Dynamic Artificial Neural Network Modeling for Chemical Processes,” pp. 4700-4711 in __Industrial and Engineering Chemistry Research__, Vol. 38, No. 12.

[19] Benjun, Guo; Peng, Wang; Dongdong, Chen; and Gaoyun, Chen, 2009, “Decide by Information Enthalpy Based on Intelligent Algorithm,” pp. 719-722 in__ Information Technology and Applications.__ Vol. 1.

[20] See the __Wikipedia__ article “Loop Entropy” at http://en.wikipedia.org/wiki/Loop_entropy

[21] See the __Wikipedia__ article “Conformational Entropy” at http://en.wikipedia.org/wiki/Conformational_entropy

[22] See the __Wikipedia__ article “Entropic Force” at http://en.wikipedia.org/wiki/Entropic_force

[23] *IBID*.

[24] See the __Wikipedia__ pages “Free Entropy,” “Thermodynamic Free Energy” and “Free Probability” at http://en.wikipedia.org/wiki/Free_entropy, http://en.wikipedia.org/wiki/Thermodynamic_free_energy and http://en.wikipedia.org/wiki/Free_probability respectively.

[25] Bengtsson, Ingemar, 2008, __Geometry of Quantum States: An Introduction to Quantum Entanglement__. Cambridge University Press: New York.

[26] Vedral, Vlatko, 2010, __Decoding Reality: The Universe as Quantum Information__. Oxford University Press: New York

[27] See the __Wikipedia__ article “Entropy (Information Theory)” at http://en.wikipedia.org/wiki/Entropy_(information_theory)

[28] See the __Wikipedia__ article “Von Neumann Entropy” at http://en.wikipedia.org/wiki/Von_Neumann_entropy

[29] See the Wikipedia article “Linear Entropy” at http://en.wikipedia.org/wiki/Linear_entropy

[30] See the __Wikipedia__ webpage “Sackur-Tetrode Equation” at http://en.wikipedia.org/wiki/Sackur%E2%80%93Tetrode_equation

[31] See the Wikipedia articles “Quantum Relative Entropy” and “Generalized Relative Entropy” at http://en.wikipedia.org/wiki/Quantum_relative_entropy and http://en.wikipedia.org/wiki/Generalized_relative_entropy respectively

[32] See the __Wikipedia__ articles “Quantum Mutual Information” and “Conditional Quantum Entropy” at http://en.wikipedia.org/wiki/Quantum_mutual_information and http://en.wikipedia.org/wiki/Conditional_quantum_entropy respectively.

## Information Measurement with SQL Server, Part 2.2: The Rényi Entropy and Its Kin

**By Steve Bolton**

…………I kicked off this far-ranging series on using SQL Server to quantify information by discussing two of the earliest and most important measures, the Hartley function and Shannon’s Entropy. These foundations of information theory are intimately related to a more general measure, the Rényi Entropy, which is a bit more complex but nonetheless worthwhile to discuss, since it unites many different information measures under one umbrella. The underlying math formula isn’t much more difficult than the one for Shannon’s Entropy I posted last time around, but its alpha parameter (α-parameter) enables it to give rise to a wider range of results. This assortment of entropy types can be adapted to solving a wider range of problems.

…………A general-purpose means of parameterizing the concept of entropy was the explicit goal of Alfréd Rényi (1920-1971), a mathematician who overcame anti-Semitic persecution at the hands of Hungary’s World War II regime, which was allied with the Nazis and passed stringent laws against its own Jewish minority. Incidentally, his coffee addiction was the inspiration for the colorful saying, “A mathematician is a device for turning coffee into theorems,” which apparently predates the saying, “A programmer is a machine that turns coffee into code.”[1] The formula that bears his name has proven useful in many diverse fields and industries, from quantum mechanics to fractals to ecology and statistics, where it is useful in deriving indices of diversity.[2] When the α-parameter is set to 2, it becomes Collision or Quadratic Entropy (or sometimes just “the Rényi Entropy”), which “has been used in physics, in signal processing and in economics” and is attractive to statisticians because they “have found an easy way to estimate it directly from samples.”[3] Given that it has multifarious uses and is both easy to explain and to compute in SQL Server, it makes sense to add it to our toolbelt, in order to derive our own DIY data mining methods. One thing I’ve come to realize since writing my initial A Rickety Stairway to SQL Server Data Mining tutorial series a while back is that the data mining marketplace is decades behind the research in some respects, to the point where all of the available software taken together is probably several orders of magnitude behind the formulas available in the math books. If we encounter situations where the particular algorithms that would work best for our use cases aren’t yet implemented, waiting might not be an option, and if we’re going to learn to build our own, the Rényi Entropy is bound to come in handy at some point.

**Coding the Range of α-Parameter Values**

The key to understanding Rényi Entropy is grasping how it intersects with its cousins. The α-parameter can be any non-negative value except for 1, which would result in a divide-by-zero error since one of the terms is 1 / (1 – α). This result is multiplied by a logarithm taken on the sum of the probabilities taken to the power of α; this differs from Shannon’s famous equation in that his log operation is performed on each probability and then multiplied by that probability, then summed afterwards and multiplied by -1. This logic is all implemented in the INSERT operation in Figure 1, in which most of the code is devoted to deriving actual probability values from the proportions of a single float column, taken from the same Higgs Boson dataset[4] I’ve been using for practice purposes for the past few tutorial series. I omitted any display of the intermediate proportions stored in the @EntropyTable variable, since they’re identical to the sample results posted in last week’s article. Evidently, it’s economical to compute these entropies, given that this code takes less than 2 seconds to run on my ancient abacus of a development machine (I might as well be using quipus) and most of that was devoted to deriving the probabilities for all 11 million rows; if we already had probability values derived from some other source like estimates, sampling or deductive methods, it would run practically instantaneously. I also hard-coded the results we already calculated over the last two articles for the Hartley and Shannon Entropies, given that they are also unchanged. The final SELECT in Figure 2 provides the same information from another vantage point, by plugging a range of α-values rather than a single parameter into the Rényi formula.
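The formula itself can also be sanity-checked in a few lines of Python before wading into the T-SQL; the toy distribution and function names below are my own illustration, not the article’s code. Note how the results close in on Shannon’s measure as the α-parameter approaches the forbidden value of 1 from either side:

```python
import math

def renyi_entropy(probs, alpha, base=2.0):
    """H_a = (1 / (1 - a)) * log_b(SUM(p_i ^ a)); a = 1 would divide by zero."""
    return (1.0 / (1.0 - alpha)) * math.log(sum(p ** alpha for p in probs), base)

def shannon_entropy(probs, base=2.0):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]
for alpha in (0.5, 0.99, 1.01, 2.0, 10.0):
    print(alpha, renyi_entropy(probs, alpha))
print("Shannon:", shannon_entropy(probs))   # 1.75 bits, straddled by the values near alpha = 1
```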

…………Note that as the values approach 1, they converge towards the Shannon Entropy, although they can never quite reach it because of the divide-by-zero error that would occur if we plugged in the forbidden parameter value of 1. Likewise, the return values approach the Hartley measure as the parameter values approach 0; this acts as a cap on the permissible values, hence the frequent use of the synonym Max Entropy. Because SQL Server’s calculation powers are impressive yet finite, we can’t plug in values approaching infinity (if such a thing were possible). In fact, we can’t even go much higher than α-values of about 50 (which are rarely used in practice anyways) on this particular dataset without getting floating point errors. Figure 3 is nevertheless sufficient to illustrate how the Rényi results approach an information measure known as the Min Entropy, which acts as a cap on the other end. References to it as the Chebyshev Entropy[5] seem to be few and far between outside the realm of quantum physics, but I’ve run into a few here and there. It is nonetheless easy to calculate, given that all we have to do is find the maximum probability value and take the negative log.
As Wikipedia points out quite succinctly, “The min entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max entropy, defined as the logarithm of the number of outcomes.”[6] It is also useful as the “most conservative way of measuring the unpredictability of a set of outcomes,” a property that makes it useful in determining randomness and quantum cryptography, in a way that the Shannon Entropy can’t handle.[7] Each Rényi value in between the fences set by the Min, Hartley and Shannon Entropies has its own distinct uses in ascertaining “the unpredictability of a nonuniform distribution in different ways.”[8] We can thus choose whatever α-parameter is ideal for dealing with the particular distribution of our data. As an amateur, I’m lacking in the experience needed to determine which α-values are best-suited to specific problems, but this code at least provides a launching pad for exploring such questions.
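The inequality chain quoted above is easy to check numerically. In this short Python sketch (a toy distribution of my own, not the Higgs Boson data), the Min Entropy is just the negative log of the largest probability and the Hartley/Max Entropy is the log of the outcome count, fencing Shannon’s measure from below and above:

```python
import math

def min_entropy(probs, base=2.0):
    """H_min = -log_b(max(p_i)): the most conservative unpredictability measure."""
    return -math.log(max(probs), base)

def hartley_entropy(probs, base=2.0):
    """H_max = log_b(N) for N outcomes with nonzero probability."""
    return math.log(sum(1 for p in probs if p > 0), base)

def shannon_entropy(probs, base=2.0):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]
assert min_entropy(probs) <= shannon_entropy(probs) <= hartley_entropy(probs)
print(min_entropy(probs), shannon_entropy(probs), hartley_entropy(probs))   # 1.0 1.75 2.0
```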

**Figure 1: The Rényi Entropy in T-SQL**

```sql
DECLARE @LogarithmBase decimal(38,36) = 2 -- 2.7182818284590452353602874713526624977 -- 10
DECLARE @AlphaParameter decimal(38,35) = 2
DECLARE @Count bigint, @DistinctValueCount bigint, @RenyiEntropy float, @MinOrChebyshevEntropy float

SELECT @Count = Count(*)
FROM Physics.HiggsBosonTable
WHERE Column1 IS NOT NULL

DECLARE @EntropyTable table
(Value decimal(33,29),
ValueCount bigint,
Proportion float,
SummationInput float
)

INSERT INTO @EntropyTable
(Value, ValueCount, Proportion, SummationInput)
SELECT Value, ValueCount, Proportion, Power(Proportion, @AlphaParameter) AS SummationInput
FROM (SELECT Value, ValueCount, ValueCount / CAST(@Count AS float) AS Proportion
      FROM (SELECT Column1 AS Value, Count(*) AS ValueCount
            FROM Physics.HiggsBosonTable
            WHERE Column1 IS NOT NULL
            GROUP BY Column1) AS T1) AS T2

SELECT @RenyiEntropy = 1 / (1 - @AlphaParameter) * Log(SUM(SummationInput), @LogarithmBase),
@MinOrChebyshevEntropy = -1 * Log(Max(Proportion), @LogarithmBase)
FROM @EntropyTable

SELECT @RenyiEntropy AS RenyiEntropy, @MinOrChebyshevEntropy AS MinEntropy, 13.2879304339032 AS ShannonsEntropy, 14.772263018717 AS HartleyOrMaxEntropy
```

**Figure 2: A Range of Sample T-SQL**

```sql
-- WHOLE RANGE OF RENYI ENTROPY
SELECT AlphaParameter, 1 / (1 - AlphaParameter) * Log(SUM(Power(Proportion, AlphaParameter)), @LogarithmBase) AS ResultingRenyiEntropy
FROM (VALUES (0.01), (0.5), (1.01), (2), (3), (4), (5), (10), (50)) AS T1(AlphaParameter)
LEFT JOIN @EntropyTable AS T2
ON 1 = 1
GROUP BY AlphaParameter
ORDER BY AlphaParameter
```

**Figure 3: Results from the Higgs Boson Dataset**

…………Jose C. Principe, a neural net expert who teaches at the University of Florida, has written a handy 35-page guide to using the Rényi Entropy for tasks even further beyond my level of inexpertise, such as deriving measures like Information Potential based on probability distribution function (PDF) estimates. This quickly leads into the bleeding-edge topic of information geometry, which I won’t touch with a ten-foot pole for a few more years, at least until I have a better understanding of such mind-blowing concepts as statistical Riemannian manifolds and multidimensional spaces. The introductory pages are nevertheless useful in illustrating how the Rényi Entropy works on simpler expressions of information. This is particularly true of the excellent diagram on page 4, which shows how the various α-values measure the distance of a PDF to the origin point, thereby filling in the available space for information content in different proportions. As we shall see later in this series of self-tutorials, plugging different α-values into a related concept known as the Rényi Divergence also allows us to measure the distances between two PDFs in various ways. This is intimately related to the Kullback-Leibler Divergence in much the same way as the Rényi Entropy is related to the Shannon Entropy. The KL-Divergence is among the most important measures in information theory and related fields, so I’ll be writing quite a bit on the subject later on in the series, as I did with the Shannon Entropy in the last article. It also goes by the name of Relative Entropy, but I’ll have to put off that discussion for a later time, when I can set aside a separate segment of the series for distance and divergence measures between probability distributions.
Over the next four articles I’ll stick to the topic of entropy measures on single distributions, including Leaf and Root Entropy, which are inextricably related to the SSDM algorithm I covered in A Rickety Stairway to SQL Server Data Mining, Algorithm 3: Decision Trees.
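…………For anyone who wants to sanity-check the T-SQL above outside of SQL Server, here is a minimal Python sketch of the same formula, with a made-up probability distribution standing in for the Proportion column of @EntropyTable. It illustrates the family the article describes: as α approaches 0 the result tends towards the Hartley (Max) Entropy, at α = 1 it reduces to the Shannon Entropy, and as α grows large it slides down towards the Min Entropy.

```python
import math

def renyi_entropy(probs, alpha, base=2):
    """H_a = 1 / (1 - a) * log_base(SUM(p^a)); a = 1 is handled as the Shannon limit."""
    if abs(alpha - 1) < 1e-9:
        # The Renyi formula is undefined at exactly 1, but its limit is Shannon's Entropy
        return -sum(p * math.log(p, base) for p in probs if p > 0)
    return (1 / (1 - alpha)) * math.log(sum(p ** alpha for p in probs), base)

# Made-up stand-in for the Higgs Boson proportions
probs = [0.5, 0.25, 0.125, 0.125]

for alpha in (0.01, 0.5, 1, 1.01, 2, 5, 50):
    print(alpha, round(renyi_entropy(probs, alpha), 4))
```

As expected, the results decline monotonically as α rises, from near log2(4) = 2 bits at the low end towards -log2(0.5) = 1 bit at the high end, passing through the Shannon value of 1.75 bits at α = 1.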

[1] See the __Wikipedia__ page “Alfréd Rényi” at http://en.wikipedia.org/wiki/Alfr%C3%A9d_R%C3%A9nyi

[2] See the __Wikipedia__ article “Rényi Entropy” at http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

[3] pp. 6-7, Principe, Jose C., 2009, “Rényi Entropy,” course notes posted at the __University of Florida__ webpage http://www.cnel.ufl.edu/courses/EEL6814/renyis_entropy.pdf

[4] I downloaded this publicly available dataset from the University of California at Irvine’s __Machine Learning Repository__ a while back and converted it to a SQL Server table, which now takes up about 5 gigabytes in a sham DataMiningProjects database.

[5] One source I found it mentioned in was Bengtsson, Ingemar, 2008, __Geometry of Quantum States: An Introduction to Quantum Entanglement__. Cambridge University Press: New York.

[6] See the __Wikipedia__ article “Min Entropy” at http://en.wikipedia.org/wiki/Min_entropy

[7] *IBID*.

[8] *IBID*.

## Information Measurement with SQL Server, Part 2.1: The Uses and Abuses of Shannon’s Entropy

**By Steve Bolton**

…………In the first installment of this wide-ranging series of amateur tutorials, I noted that the Hartley function indeed returns “information,” but of a specific kind that could be described as “newsworthiness.” This week’s measure also quantifies how much we add to our existing knowledge from each new fact, but through explicitly stochastic methods, whereas the Hartley function is more closely related to counts of state descriptions than probabilities.[1] Shannon’s Entropy is of greater renown than its predecessor, but isn’t much more difficult to code in T-SQL. In fact, it should be a breeze given that I already posted a more advanced version of it tailor-made for Implementing Fuzzy Sets in SQL Server, Part 10.2: Measuring Uncertainty in Evidence Theory, just as I had already coded a modified version of Hartley’s measure for Implementing Fuzzy Sets in SQL Server, Part 9: Measuring Nonspecificity with the Hartley Function.

…………Historically, the real difficulty with the metrics of information theory is with their interpretation, largely due to the broad and fuzzy meaning of the imprecise term “information.” As we shall see, many brilliant theorists far smarter than us have made the mistake of overextending it beyond its original domain, just as some thinkers have gotten carried away with the hype surrounding fuzzy sets and chaos theory. Like those cutting-edge topics, Shannon’s Entropy and its relatives within information theory are indeed powerful, but can go badly wrong when misapplied outside their specific use cases. It has really subtle yet mind-blowing implications for all of the different classes of information measurement I hope to cover in this wide-ranging and open-ended series, not all of which are fully understood. My purpose in this series is to illuminate datasets from every possible direction, using a whole smorgasbord of measures of meaning (semantic information), randomness, complexity, order, redundancy and sensitivity to initial conditions, among others; since many of these metrics serve as the foundation of many mining algorithms, we can use them in a SQL Server environment to devise DIY data mining algorithms. Shannon’s Entropy intersects many of these disparate classes of information in multifarious ways, such as the fact that it is equal to the Hartley measure in the case of uniform distributions (i.e. when all values are equally likely). It is not a measure of what is known, but of how much can be learned by adding to it; entropy measures rise in tandem with uncertainty precisely because we can learn more from new facts when we don’t already know everything. It can thus be considered complementary to measures of existing knowledge, such as Bayes Factors.
I’ve read more on information theory than many other fields I’ve commented on over the past couple of years, when I started writing self-tutorials on data mining, but that doesn’t mean I understand all of these complex interrelationships well. In fact, I’m writing this series in part because it helps me absorb the material a lot faster, and posting it publicly in the hopes that it can at least help readers avoid my inevitable mistakes. I lamented often in A Rickety Stairway to SQL Server Data Mining that SSDM was woefully under-utilized in comparison to its potential benefits, but the same can also be said of the algorithms that underpin it.
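…………The equivalence with the Hartley measure in the uniform case is easy to verify with a few lines of code. This quick Python sketch (purely an illustration, separate from the T-SQL later in the article) computes Shannon’s Entropy over eight equally likely values and compares it to log2(8):

```python
import math

# Eight equally likely values, as in a fair eight-sided die
n = 8
probs = [1 / n] * n

shannon = -sum(p * math.log2(p) for p in probs)  # Shannon's H in bits
hartley = math.log2(n)                           # the Hartley measure

print(shannon, hartley)  # both come out to 3.0 bits
```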

**Channel Capacity as a Godsend**

Few of these information metrics are as well-known as the measure that Claude E. Shannon (1916-2001), a key American cryptographic specialist during the Second World War, introduced in 1948 in a two-part journal article titled “A Mathematical Theory of Communication.”[2] Most of it consists of math theorems and formulas that only got thicker as researchers built on his theory over the years, but it is noteworthy to mention that he credits earlier work by electrical engineers Ralph Hartley[3] (1888-1970) and Harry Nyquist (1889-1976) in the opening paragraphs.[4] I’ll skip over most of the proofs and equations involved – even the ones I understand – for the usual reasons: as I’ve pointed out in past articles, users of mining tools shouldn’t be burdened with these details, for the same reason that commuters don’t need a degree in automotive engineering in order to drive their cars. Suffice it to say that the original purpose was to determine the shortest possible codes in the case of noisy transmission lines. In this effort to extend coding theory, Shannon gave birth to the whole wider field of information theory. Masud Mansuripur, the chair of the Optical Data Storage Department at the University of Arizona, sums up Shannon’s surprising mathematical discovery best: “In the past, communications engineers believed that the rate of information transmission over a noisy channel had to decline to zero, if we require the error probability to approach zero. Shannon was the first to show that the information-transmission rate can be kept constant for an arbitrarily small probability of error.”[5] It is also possible to determine the information-carrying capacity of the channel at the same time.[6]

…………The idea is still somewhat startling to this day, as many writers in the field of information theory have pointed out ever since; it almost seems too good to be true. Basically, the idea is to engineer a “decrease of uncertainty as to what message might have been enciphered,”[7] in part by leveraging combinatorics to derive probabilities. For example, simple combinatorics shows that rolling a pair of dice is far more likely to result in a number like eight, since there are multiple ways of adding up both dice to get that result, whereas there’s only one combination apiece for snake eyes or boxcars. The edges of the resulting dataset are thus far less likely than the values in the middle, just as we’d find in a Gaussian or “normal” distribution, i.e. the ubiquitous bell curve. This gives rise to a dizzying array of axioms and lemmas and a whole set of procedures for determining codes, all of which I’ll skip over. For perhaps the first time in the history of this blog, however, I have a sound reason for posting the underlying equation: *H* = -Σ p_{i} log_{b} p_{i}. It’s not a household name like E = mc^{2} or 2 + 2 = 4, but it’s still one of the most famous formulas of all time. Its relatives can be spotted in the literature by the telltale combination of a negative summation operator with a logarithm operation. Aside from the fact that it possesses many ideal mathematical properties[8], “it was proven in numerous ways, from several well-justified axiomatic characterizations, that this function is the only sensible measure of uncertainty in probability theory.”[9]
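…………The dice example can be worked out explicitly in a short script. The Python sketch below (just an illustration, not part of the T-SQL to come) tallies the 36 equally likely outcomes of a pair of dice by their sums and plugs the resulting probabilities into *H* = -Σ p_{i} log_{2} p_{i}; because the middle values crowd out the edges, the entropy comes in below the log2(11) ≈ 3.46 bits that a uniform distribution over the eleven possible sums would yield:

```python
import math
from collections import Counter
from itertools import product

# Tally the 36 equally likely outcomes of a pair of dice by their sum
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
total = sum(counts.values())  # 36
probs = [c / total for c in counts.values()]

# H = -SUM(p_i * log2(p_i)), Shannon's formula from the text
entropy = -sum(p * math.log2(p) for p in probs)

print(entropy)           # roughly 3.27 bits
print(math.log2(11))     # roughly 3.46 bits, the uniform maximum over 11 sums
```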

**Translating H into Code**

As with Hartley’s measure, a base 10 logarithm results in units known as hartleys or bans (as famed cryptographer Alan Turing called them). With Euler’s Number (as in the natural logarithm) the units are known as nats and with base 2, they’re referred to as shannons or more commonly, bits. The code in Figure 1 uses bits, but users can uncomment the numbers that follow to use one of the other units. It is also worth noting that physicist Léon Brillouin used a “negentropy” measure that is basically the inverse of *H*, to compensate for the fact that Shannon’s Entropy measures information in terms of uncertainty, which is a bit counter-intuitive; it never really caught on though.[10]
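…………Switching between these units is nothing more than a change of logarithm base, which amounts to multiplying by a constant. A tiny Python illustration, using an arbitrary made-up distribution:

```python
import math

probs = [0.5, 0.3, 0.2]  # any made-up distribution will do

def entropy(probs, base):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

bits = entropy(probs, 2)        # shannons, a.k.a. bits
nats = entropy(probs, math.e)   # nats
bans = entropy(probs, 10)       # hartleys, a.k.a. bans

# The units differ only by a constant conversion factor
print(nats / bits)  # ln(2), roughly 0.6931
```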

…………My sample code derives some fake probabilities by taking the known proportions of values for a column in the same Higgs Boson dataset[11] I’ve been using for practice purposes for several tutorial series. The probabilities are then multiplied by the logs for those probabilities, then summed across the dataset and multiplied by -1. It’s actually simpler than it looks; the table variable and INSERT statement could be done away with altogether if we already had probability figures calculated through some other means, such as some sort of statistical sampling method or even reasoning out the probabilities from the underlying process, as writers on information theory often do with dice and cards to illustrate the concepts. In fact, if we know the underlying probability distribution in advance, we can sometimes calculate the entropy in reverse by using special formulas specific to certain distributions, such as the normal. Moreover, all of the code pertaining to the @UniformProportion is included solely for the purpose of validating the results, which equal the results we received in the last article for the DISTINCT version of the Hartley function, as expected. If need be, users can also validate their results using Lukasz Kozlowski’s Shannon Entropy Calculator. All of that code can also be removed at will. Despite all of this extraneous baggage, the code ran in just 1.9 seconds according to the Client Statistics, on 11 million rows of float values on my beat-up, wheezing semblance of a development machine. The execution plan was uneventful and consisted mainly of seeks on the nonclustered index I put on the column a couple of tutorial series ago.

**Figure 1: Shannon’s Entropy in T-SQL**

DECLARE @LogarithmBase decimal(38,36) = 2 -- 2.7182818284590452353602874713526624977 -- 10

DECLARE @Count bigint, @DistinctValueCount bigint, @ShannonsEntropy float, @EntropyForUniformDistribution float,

@MaxProbability float, @ChebyshevEntropy float, @MetricEntropy float, @UniformProportion float

SELECT @Count = Count(*), @DistinctValueCount = Count(DISTINCT Column1)

FROM Physics.HiggsBosonTable

WHERE Column1 IS NOT NULL

SELECT @UniformProportion = 1 / CAST(@DistinctValueCount as float)

FROM Physics.HiggsBosonTable

WHERE Column1 IS NOT NULL

DECLARE @EntropyTable table

(Value decimal(33,29),

ValueCount bigint,

Proportion float,

SelfInformation float

)

INSERT INTO @EntropyTable

(Value, ValueCount, Proportion, SelfInformation)

SELECT Value, ValueCount, Proportion, -1 * Proportion * Log(Proportion, @LogarithmBase) AS SelfInformation

FROM (SELECT Value, ValueCount, ValueCount / CAST(@Count AS float) AS Proportion

FROM (SELECT Column1 AS Value, Count(*) AS ValueCount

FROM Physics.HiggsBosonTable

WHERE Column1 IS NOT NULL

GROUP BY Column1) AS T1) AS T2

SELECT * FROM @EntropyTable

SELECT @ShannonsEntropy = SUM(SelfInformation), @EntropyForUniformDistribution = SUM(@UniformProportion * Log(@UniformProportion, @LogarithmBase)) * -1, @MaxProbability = Max(Proportion)

FROM @EntropyTable

-- ====================================

-- SOME RELATED ENTROPIES

-- ====================================

SELECT @ChebyshevEntropy = -1 * Log(@MaxProbability, @LogarithmBase)

SELECT @MetricEntropy = @ShannonsEntropy / CAST(@Count as float)

SELECT @ShannonsEntropy AS ShannonsEntropy, @MetricEntropy AS MetricEntropy, @ChebyshevEntropy AS ChebyshevEntropy, @EntropyForUniformDistribution as EntropyForUniformDistribution

**Figure 2: Calculating Shannon’s Entropy on the Higgs Boson Dataset**

…………Note that in the last section of the code, I tacked on the Metric and Chebyshev Entropies, which are trivial to calculate once we have building blocks like Shannon’s Entropy. References to the Chebyshev Entropy[12], a.k.a. the Min Entropy because it represents the minimum amount of entropy a variable can exhibit, seem to be few and far between outside the realm of quantum physics. Metric Entropy, on the other hand, can serve as a simple measure of randomness[13], which is more likely to be useful to SQL Server users. These are among a couple dozen extensions and relatives of Shannon’s Entropy, a few of which I’ve already dealt with, like the Hartley Entropy (i.e. the Max Entropy). I won’t discuss Differential Entropy, the extension to continuous variables using various methods of calculus, because SQL Server data types like float, decimal and numeric are actually discrete in the strict sense; they can’t actually represent infinitesimal grades in between values, any more than the data types of any other software can on finite computers. Nor will I delve into the myriad applications that have been developed from Shannon’s Entropy for signal transmission and coding theory, since these are off-topic.
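…………For readers who’d like to trace the arithmetic of Figure 1 without standing up the 11-million-row table, here is a rough Python mirror of the same pipeline, using a small made-up list in place of Column1 of Physics.HiggsBosonTable. It groups the values, converts the counts to proportions and then derives the four figures the T-SQL returns at the end; note how the Min (Chebyshev) Entropy and the Max (uniform-distribution) Entropy bracket the Shannon value, as they should:

```python
import math
from collections import Counter

# Made-up stand-in for Column1 of Physics.HiggsBosonTable
values = [0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.4, 0.4, 0.4, 0.4]

counts = Counter(values)                  # the GROUP BY Column1 step
n = len(values)                           # @Count
probs = [c / n for c in counts.values()]  # the Proportion column

shannon = -sum(p * math.log2(p) for p in probs)  # @ShannonsEntropy
metric = shannon / n                             # @MetricEntropy: H over the row count
chebyshev = -math.log2(max(probs))               # @ChebyshevEntropy: the Min Entropy
uniform = math.log2(len(counts))                 # @EntropyForUniformDistribution: the Max Entropy

print(chebyshev, shannon, uniform)  # Min <= Shannon <= Max
```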

**Detection of “Randomness” and Other Uses**

Its relationship to cryptography might be a more appropriate subject, but for the sake of brevity I’ll limit my discussion to pointing out how it sheds light on the nature of “randomness.” Like “information,” it’s a broad term that raises the danger of “definition drift,” even within one’s own mind; I’ve seen big names in certain scientific fields unconsciously shift their usage of the term from “unintentioned” to “uncaused” to “indeterminate” and back again, without discerning the subtle differences between them all. In data mining, we’re basically trying to uncover information that is badly concealed, whereas in cryptography it’s deliberately concealed by mixing it in with apparently random information; the patterns generated by cryptographers are the opposite of random in one specific sense, that it takes great intelligence and hard work to deliberately create these false patterns. Cryptanalysis involves detecting these deliberate patterns, in order to remove the noise obscuring the original message. Of course, “noise” is a subjective matter determined entirely by the questions we choose to ask of the data (although the answers to those questions are entirely objective). Randomness can thus be viewed as excess information of the wrong kind, not an absence of pattern or lack of causation. For example, if we’re looking for evidence of solar flares, then the static on shortwave radios might tell us a lot, but if we’re trying to listen to a broadcast from some foreign land (like the mysterious “numbers stations,” which I picked up when I was younger) then it degrades our information. Consider the case of paleontologists trying to follow dinosaur tracks: each rain drop that has fallen upon them over the eons has interfered with that pattern, but the rain itself follows a pattern, albeit the wrong kind. 
It is only random for our chosen purposes and pattern recognition goals, but perhaps not to some prehistoric weathermen to whom the pattern of rain might’ve been of keen interest. The relationship between randomness and information is thus quite deep; this brief introduction is merely a wade into the kiddie pool. Later on in this series we may have to dive in, if the measures of randomness I hope to discuss call for it.

…………Shannon’s Entropy and related concepts from information theory have all sorts of far-flung implications, some of which are understood in great detail (particularly when they relate directly to designing codes with minimum information loss) and others which are really intellectually challenging. One of the simplest yet trickiest rules is that the greater the uncertainty, the greater the amount of probabilistic information that can be conveyed with each new record.[14] Some of the corollaries of information theory can be put to good use in reasoning from the data we’ve mined, which is, after all, its raison d’être. The principle of maximum entropy, for example, states that the ideal probability density functions (PDFs) maximize Shannon’s Entropy when producing the expected values.[15] I’d imagine that this could be leveraged for such good purposes as mining model selection. There are all kinds of hidden relationships to other forms of information I hope to cover eventually, such as redundancy, which is a big topic within coding theory. Someday I hope to tackle measures of semantic information, which can be extremely difficult to grasp because they quantify the actual meaning of data, which can be quite slippery. Sometimes the absence of any data at all can tell us everything we need to know, which many mystery writers and crime scene investigation shows have put to good use. More often, people differ not only between but within themselves as to the meaning they choose to assign to terms and numbers, without thinking it through clearly. Perhaps this would make it an ideal field to apply fuzzy sets to, since their chief use cases include modeling imprecision in natural language, as I discussed ad nauseum in the last tutorial series.
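…………The principle of maximum entropy is easy to demonstrate on a toy scale, at least in the unconstrained case: among candidate distributions over the same support, the uniform one always scores highest on *H*. A quick Python sketch with made-up distributions:

```python
import math

def H(probs):
    """Shannon's Entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Three made-up candidate distributions over the same four outcomes
candidates = {
    "uniform": [0.25, 0.25, 0.25, 0.25],
    "skewed":  [0.7, 0.1, 0.1, 0.1],
    "peaked":  [0.97, 0.01, 0.01, 0.01],
}

best = max(candidates, key=lambda name: H(candidates[name]))
print(best)  # the uniform distribution wins
```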

**Semantic Misinterpretation, Cybernetics, “Disorder” and Other Subtle Stumbling Blocks**

Shannon’s Entropy is probably further away from semantic information than fuzzy sets, but that hasn’t stopped many theorists from mistakenly conflating stochastic information with meaning. Shannon himself warned against this kind of bandwagon-jumping.[16] In fact, that’s probably the most common stumbling block of all. Perhaps the clearest cautionary tale is the whole philosophy developed out of information theory by bombastic mathematician Norbert Wiener (1894-1964), who “asserted explicitly and deliberately” the very same connection to semantic meaning that Shannon and others cautioned against.[17] Another great mathematician who took Shannon’s Entropy a little too far into the realm of semantic meaning was Shannon’s colleague, Warren Weaver (1894-1978). Wiener’s philosophy is known by the cool name of “cybernetics,” which has a certain intriguing flair to it – just like the loaded terms “fuzzy sets” and “chaos theory”, which are often used in a manner precisely opposite to their meaning and purpose. Nobody is really quite certain what “cybernetics” is, but that hasn’t stopped academics from establishing a journal by that name; some of its contributors are among the leading names in analytics I most respect, but I find other commentators on the field as disturbing as some of the frenetic theologians who followed Pierre Teilhard de Chardin. To put it simply, information theory has been put to even more bad uses in the last seven decades than other hot mathematical ideas, like chaos theory and fuzzy sets – some of which border on madness.

…………I’m not convinced that even a lot of the authors who specialize in this sort of thing really grasp all of these intricacies yet, although most at least have the common sense not to idolize these ideas and blow them up into full-blown crackpot philosophies. For example, there seems to be a lot of misunderstanding about its relationship to measures of complexity, structure and order, which are alike but not precisely the same. To complicate matters further, they all overlap with thermodynamic entropy, which is not the same thing as probabilistic entropy. They both ultimately proceed from mathematical relationships like the Law of Large Numbers, but measure different things which are not always connected; Shannon noticed the resemblance between the two off the bat, as did Leo Szilard (1898-1964), hence the name “entropy.” This kind of entropy is not really “disorder”; it merely removes the energy a system would need to move from a disordered to an ordered state and vice-versa. Essentially, it freezes a system into its current state, no matter what order it exhibits. Likewise, zero probabilistic entropy always signifies complete certainty, but not necessarily lack of structure; we could, for example, be quite certain that our data is disorganized. It is perhaps true that “…*H* can also be considered as a measure of disorganization…The more organized a system, the lower the value of *H*”[18] but only in a really broad sense. Furthermore, entropy prevents new complexity from arising in a system; the range of possible states that are reachable from a system’s current state is determined by its energy, so that the greater the entropy, the greater the number of unreachable states. Without an input of fresh energy, this set of reachable states cannot increase. This means that entropy *tends* towards simplicity, which can nevertheless exhibit order, while still not ruling out complexity.
The same is true of probabilistic entropy, which intersects with algorithmic complexity at certain points.

…………The latter measures the shortest possible description of a program, which Shannon also investigated in terms of the minimum length of codes. He “further proclaimed that random sources – such as speech, music, or image signals – possess an irreducible complexity beyond which they cannot be compressed distortion-free. He called this complexity the source entropy (see the discussion in Chapter 5). He went on to assert that if a source has an entropy that is less than the capacity of a communication channel, the asymptotically error-free transmission of the source over the channel can be achieved.”[19] It is not wise to conflate these two powerful techniques, but when probabilistic and algorithmic information intersect, we can leverage the properties of both to shed further light on our data from both angles at once. In essence, we can borrow the axioms of both at the same time to discover more new knowledge, with greater certainty. One of the axioms of algorithmic complexity is that a program cannot contain a program more sophisticated than itself, for essentially the same reasons that a smaller box cannot contain a larger one in the physical realm. This is related to a principle called the Conservation of Information, which operates like the Second Law of Thermodynamics, in that absent or lost information cannot be added to a system from itself; since its violation would be a logical contradiction, it is actually more solid than its thermodynamic counterpart, which is based merely on empirical observation. Violating it would also amount to a violation of various No Free Lunch theorems. This has profound implications for fields like artificial intelligence and concepts like “self-organization” that are deeply intertwined with data mining. Since the thermodynamic entropies aren’t as closely related to data mining and information theory, I’ll only spend a little bit of time on them a couple of articles from now, where I’ll also dispense with extraneous topics like quantum entropies.
There are many other directions we could take this discussion in by factoring in things like applications without replacement, compound events, multiple possible outcomes, mutually exclusive events, unordered pairs and sources with memory (non-Markov models). Instead, I’ll concentrate on using this article as a springboard to more complex forms of probabilistic entropy that might be of more use to SQL Server data miners, like leaf and root entropies, binary entropies like the conditional and joint, and the information and entropy rates. I’ll gradually build up towards more complex cases like Mutual, Lautum and Shared Information at the tail end of this segment of the tutorial series, whereas the Cross Entropy will be saved for a future segment on an important distance measure called the Kullback-Leibler Divergence. The next logical step is to discuss the Rényi Entropy, which subsumes the Shannon and Hartley Entropies with other relatives in a single formula.

[1] I lost my original citation for this, but believe it is buried somewhere in Klir, George J., 2006, __Uncertainty and Information: Foundations of Generalized Information Theory__, Wiley-Interscience: Hoboken, N.J.

[2] See the __Wikipedia__ page “Claude Shannon” at http://en.wikipedia.org/wiki/Claude_Shannon

[3] See the __Wikipedia__ articles “Hartley Function” and “Ralph Hartley” at http://en.wikipedia.org/wiki/Hartley_function and http://en.wikipedia.org/wiki/Ralph_Hartley respectively.

[4] pp. 5-6, Shannon, C.E., 1974, “A Mathematical Theory of Communication,” pp. 5-18 in __Key Papers in the Development of Information Theory__, David Slepian ed. IEEE Press: New York.

[5] p. xv, Mansuripur, Masud, 1987, __Introduction to Information Theory.__ Prentice-Hall: Englewood Cliffs, N.J.

[6] “It is nevertheless quite remarkable that, as originally shown by Shannon, one can show that, by proper encoding into long signals, one can attain the maximum possible language transmission capacity of a system while at the same time obtaining a vanishingly small percentage of errors.” p. 172, Goldman, Stanford, 1953, __Information Theory__. Prentice-Hall: New York.

[7] p. 272, Pierce, John Robinson, 1980, __An Introduction to Information Theory: Symbols, Signals & Noise__. Dover Publications: New York. Also see Pierce, John Robinson, 1961, __Symbols, Signals and Noise: The Nature and Process of Communication__. Harper: New York

[8] p. 31, Ritchie, L. David, 1991, __Information__. Sage Publications: Newbury Park, Calif.

[9] p. 259, Klir, George J. and Yuan, Bo, 1995, __Fuzzy Sets and Fuzzy Logic: Theory and Applications__. Prentice Hall: Upper Saddle River, N.J.

[10] p. 12, Brillouin, Léon, 1964, __Science, Uncertainty and Information__. Academic Press: New York.

[11] Which I originally downloaded from the University of California at Irvine’s __Machine Learning Repository__.

[12] One source I found it mentioned in was Bengtsson, Ingemar, 2008, __Geometry of Quantum States: An Introduction to Quantum Entanglement__. Cambridge University Press: New York.

[13] See the Wikipedia page “Entropy (Information Theory)” at http://en.wikipedia.org/wiki/Entropy_(information_theory)

[14] p. 87, Wright, Robert, 1988, __Three Scientists and Their Gods: Looking For Meaning in an Age of Information__. Times Books: New York.

[15] See the Wikipedia page “Minimum Fisher Information” at http://en.wikipedia.org/wiki/Minimum_Fisher_information

[16] p. xvi, Mansuripur and p. 61, Ritchie, 1991.

[17] p. 289, Bar-Hillel, Yehoshua, 1964, __Language and Information: Selected Essays On Their Theory and Application__. Addison-Wesley Pub. Co.: Reading, Mass.

[18] p. 5, Ritchie.

[19] p. 143, Moser, Stefan M. and Po-Ning, Chen, 2012, __A Student’s Guide to Coding and Information Theory__. Cambridge University Press: New York

## Information Measurement with SQL Server, Part 1: A Quick Review of the Hartley Function

**By Steve Bolton**

…………This long-delayed series of amateur self-tutorials has been in the works ever since I began writing my A Rickety Stairway to SQL Server Data Mining series, which made it clear to me that I didn’t know enough about what was going on under the hood in SSDM. I still don’t know enough about the reasoning behind the various data mining algorithms implemented by SQL Server and other tools, but I am certain of one thing: I never will know enough, even if I actually became competent in these topics. These fields are just too detailed, broad and rooted in poorly understood corners of pure reason for anyone to master, let alone myself. Like my series on SSDM, this foray into coding all of the basic measures of information theory and related fields may well exemplify University of Connecticut statistician Daniel T. Larose’s witticism that “data mining is easy to do badly.”[1] My purpose in the Rickety series was merely to demonstrate that useful results can still be derived from SSDM, even when it is badly mishandled. In this series, I will try to explain how the metrics used in many data mining algorithms can be used to answer a whole cornucopia of questions about our datasets, such as: How much information might there be, as measured in terms of possible state descriptions and probabilities? How much meaning (i.e. semantic information) might it have? How many state descriptions does it rule out? How much is already known? How random, aperiodic, redundant, complex or ordered is it? Another interesting challenge is determining the shortest possible specifications of a structure.

…………There are literally dozens upon dozens of measures available to answer all of these questions, taken from such diverse fields as information theory, chaos theory, algorithmic complexity and many others which provide the basic building blocks of most data mining algorithms. The techniques provided by these fields are powerful, yet contain many logical stumbling blocks that far smarter people than ourselves have tripped over, often without even knowing it; these range from instances of subtle “definition drift” in the meaning of terms like “randomness” over the course of textbooks, to the development of full-blown crackpot theories by scholars overexcited by the potential of systems like chaos and information theory. I am bound to make mistakes along the way, given that I’m an amateur at this, so by all means take care when implementing my code and trusting my analysis, which is sometimes just plain wrong. On the other hand, these techniques are so powerful and so under-utilized that there is a crying need for some explanation of how they can be applied in a SQL Server environment, even a poor one. I know a little bit more of certain areas of philosophy – especially historical instances of when it has gone very wrong – so I can occasionally make a contribution by commenting on how to avoid fallacious reasoning, which is even more of a problem once these sophisticated topics are clouded over by excess jargon and complex math formulas.

**DIY Data Mining and the Scope of the Series**

One thing I’ve learned while trying to force-feed myself the underlying math is that the analysis marketplace is decades behind the research in some ways; there is no way that any single company is ever going to be able to code all of the extant mining algorithms, assuming it is even possible to tally them all up. This means that it may be beneficial in the years to come to have the skills to build DIY solutions. Throughout this series I will provide T-SQL code so that SQL Server DBAs and data miners can implement some of these techniques on their own, without waiting for developers of mining software to code the particular algorithms that fit their use cases. Why T-SQL? I could make a strong case that the gradual accretion of features like windowing functions is slowly making set-based languages ideal for this purpose, although they are rarely thought of in that way; it boils down to the fact that most of the problems covered in these interrelated fields are much easier to express in terms of sets. Furthermore, the sheer size of “Big Data” (which has steadily gotten “bigger” ever since the first records were kept; like “globalization”, it has only accelerated in recent years) requires taking computing to a new level of abstraction in order to simplify things, which is a purpose that set-based languages can fulfill. This series will be a learning experience for me as well, in which I hope to at least help others avoid my mistakes, by teaching through misadventure; writing a series like this aids me in absorbing the material a lot faster, while also getting valuable practice in translating the difficult underlying math formulas into code. Since I don’t know what I’ll discover along the way, I suspect that at some point I may resort to using some of my other favorite languages, like Multidimensional Expressions (MDX) and Visual Basic .Net, possibly in conjunction with Common Language Runtime (CLR) stored procedures. 
One of the benefits I hope to provide is to take most of the math out of the picture, so that readers don’t get bogged down in it; the jargon and formulas are absolutely necessary for the researchers to communicate with each other, but there is no need for data miners to be writing formal mathematical proofs, just as commuters shouldn’t have to give a dissertation on automotive engineering in order to drive their cars. I’ve sometimes received comments to the effect that there’s too much text in these articles, but that’s because they don’t see the hundreds of pages of math formulas that gave rise to them; rather than stringing together some meaningless screenshots or rehashing MSDN tutorials from AdventureWorks, I aim to show how these techniques might be coded and how they can be used appropriately.

…………The goal in this series is to corner the uncertainty in our datasets by shining lights on it from every possible direction, so that organizations can make better decisions that result in more effective actions. To that end, everything from measures used in Bayesian inference to Solomonoff Algorithmic Probability to the calculation of periodicities to the Lyapunov Exponent will be fair game. These metrics vary quite widely in terms of sophistication, so at points we’ll cross the imprecise boundaries separating them from full-blown data mining algorithms; the dividing lines between statistics, data mining, machine learning, “soft computing,” predictive analytics and the like seem to boil down to degrees of intricacy, rather than differences in kind, especially since their algorithms are derived from the same metrics and building blocks. My last mistutorial series was designed from the ground up to build on existing concepts, one article at a time. This one will be inherently disorganized, since the scope is so wide and I don’t know what I will find along the way. I will have to skip around quite a bit across topics that may only be distantly related, or across levels of sophistication exhibited by the measures of information. It will also be an open-ended series, whereas the Rickety series only had to cover a specific, easily enumerable set of features. We could delve into dozens of obscure metrics if the need arises, or even concoct our own special-purpose metrics, if a use case calls for it.

…………The series may explore wide-ranging topics along paths that are still somewhat unknown, but I can at least kick it off by introducing some of the primordial foundations of information theory. Perhaps the simplest is a function developed in 1928 by electronics pioneer Ralph Hartley[2], who applied it to signal transmission[3] a few decades before Claude Shannon did the same with his own renowned entropy measure. Although Hartley considered it a measure of information, his function did not lead to the use of the term “information theory,”[4] which was coined when Shannon’s famous equation gave birth to the field. This introduction will be made even easier by the fact that I already discussed a more advanced version of Hartley’s measure in Implementing Fuzzy Sets in SQL Server, Part 9: Measuring Nonspecificity with the Hartley Function. The version for the ordinary “crisp” sets DBAs and data miners are accustomed to working with is actually quite a bit easier to code and interpret: all we have to do is count the records in a set and take the logarithm. The code in Figure 1 is actually longer than it has to be, given that I used three different logarithm bases to measure the same quantity, for the sake of completeness. When base 2 is used, the units are known as bits or “shannons”; with Euler’s Number as the base (the default for LOG), they’re known as “nats”; with base 10, we’re measuring in hartleys or “bans” (a term coined by famed cryptographer Alan Turing). It would be trivial to turn this into a stored procedure with an option to select the desired units. It only took about a second to calculate the results in Figure 2 on the first float column of the Higgs Boson dataset I downloaded from the University of California at Irvine’s Machine Learning Repository a few tutorial series ago, which I converted to a 5-gigabyte SQL Server table.
The calculation ran effortlessly across all 11 million rows because the only work involved is a pair of simple aggregate counts, rather than any per-row computation.

**Figure 1: Code for the Ordinary “Crisp” Version of the Hartley Function**

DECLARE @HartleyEntropy float, @DistinctCount bigint, @Count bigint

SELECT @DistinctCount = Count(DISTINCT Column1), @Count = Count(*)
FROM Physics.HiggsBosonTable

-- single-argument LOG defaults to base e, which yields nats
SELECT Log(@DistinctCount, 2) AS BitsOrShannons, Log(@DistinctCount) AS Nats, Log(@DistinctCount, 10) AS Hartleys,
Log(@Count, 2) AS MultisetBitsOrShannons, Log(@Count) AS MultisetNats, Log(@Count, 10) AS MultisetHartleys

**Figure 2: Results from the Higgs Boson Dataset**
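The arithmetic behind Figures 1 and 2 is nothing more than a logarithm taken in three bases, so it can be sanity-checked in any language. Here is a quick Python sketch; the distinct count is a made-up stand-in, not the actual Higgs Boson result:

```python
import math

def hartley(cardinality, base=2):
    """Hartley measure of a set: the log of its cardinality.
    base=2 gives bits/shannons, math.e gives nats, 10 gives hartleys/bans."""
    return math.log(cardinality, base)

# Hypothetical distinct count standing in for the Higgs Boson column
distinct_count = 8500
bits = hartley(distinct_count, 2)
nats = hartley(distinct_count, math.e)
bans = hartley(distinct_count, 10)

# The three units differ only by constant factors
assert abs(bits * math.log(2) - nats) < 1e-9
assert abs(bans * math.log(10) - nats) < 1e-9
```

Since the units differ only by constant conversion factors, the stored procedure with a unit-selection parameter mentioned above really would be a trivial extension.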

…………The main problem I ran into was a fundamental one: the formula calls for plugging in the cardinality of the set, but in ordinary set theory parlance, duplicate values are only counted in multisets. Set notation writes this as |A|, but the bars denote a cardinality measure, not the ABS function. The question is, which cardinality? I’ve included both versions in Figure 1, which differ solely by the fact that one uses a DISTINCT operator and the other takes a standard COUNT. This discrepancy could have its uses, though. I have yet to see this issue raised in the information literature, where Hartley’s metric is often skipped over as a topic of mainly historical interest in comparison to Shannon’s, but it may be possible to derive a third metric based on the difference between the two. A simple subtraction might help us quantify the contribution of the repeated values to the uncertainty, which could have its uses in the kinds of uncertainty management programs I spoke of in the fuzzy set series. In essence, this difference could act as a crude measure of redundancy. If it reflects the information gained by using the DISTINCT operator, we could use this to assess its costs. We know for a fact that the DISTINCT version can’t exceed the multiset version, which acts as a cap on its range. At the other end of the scale, the DISTINCT measure reaches the limit of 0 when all records have the same value, giving us perfect certainty. Another important issue is whether or not the DISTINCT clause adds information, by reducing the amount of uncertainty about how many different values a variable can take on.
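In log space, the subtraction proposed above collapses to the logarithm of the ratio of the two counts, i.e. the log of the average number of repetitions per distinct value. A minimal Python illustration, again with hypothetical counts:

```python
import math

# Hypothetical counts: 11 million rows, 8,500 distinct values
total_count, distinct_count = 11_000_000, 8500

multiset_bits = math.log2(total_count)
distinct_bits = math.log2(distinct_count)

# Crude redundancy measure: uncertainty contributed by repeated values
redundancy_bits = multiset_bits - distinct_bits

# Equivalent to the log of the average multiplicity per distinct value
assert abs(redundancy_bits - math.log2(total_count / distinct_count)) < 1e-9

# The DISTINCT version can never exceed the multiset version
assert distinct_bits <= multiset_bits
```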

…………Of course, the clause might not be necessary at all if we knew in advance precisely which values were permissible for a column, such as the range of a decimal type or a varchar column limited to a handful of known category values. On the other hand, this raises a subtle distinction between values that are permissible for a column, which can be determined by the data type, and the values actually found, which can only be counted through a DISTINCT operator. The issue becomes more sophisticated if we are able to determine the counts of each individual value; these measures of “multiplicity,” as they are known in multiset theory, further reduce the uncertainty associated with the dataset. It is easy enough to implement these internal counts using windowing functions and GROUP BY statements, but the issue of how to factor them in quickly complicates the discussion of the otherwise simple Hartley function. Thankfully, the order of the records is only an issue with tuples, not the kinds of sets or multisets we plug into it – except when we get to the end of the dataset and can determine the last records from the remaining counts, which is something I’ve not seen addressed in the literature. This brings applications without replacement (i.e., the kind of probabilities associated with decks of cards where no reshuffling takes place) into play, at least briefly.

**“Newsworthiness”: The Narrow Definition of “Information”**

Aside from all of these hidden subtleties, the information provided by both the DISTINCT and multiset versions can be summed up thus: how much am I learning each time I inspect a row and verify the actual value found there? This is equivalent to asking how much we learn from each slip of paper in a drawing, if we know the count of the jar in advance. In more advanced terms, we can think of this as increasing information by reducing the number of possible state descriptions the next record can take on; in such a context, whether or not repeated values are allowed makes a big difference, as do their individual counts if the answer is yes. Keep in mind that this type of “information” is practically the polar opposite of existing knowledge: the higher the Hartley measure, the less we already know, so the more the next record can tell us. It is a highly specific type of information, which former journalists like myself might equate with “newsworthiness.” This dividing line between existing and new knowledge is precisely where measures of entropy (like the Hartley function) intersect with topics like Bayesian probability, which I will also address at some point in this series; as we shall see, many of these information measures are interrelated in complex ways. This narrow definition of information pertains to all of the other measures of entropy we’ll discuss in this series; interpretation is a critical stage in every field associated with data mining, particularly information theory, and should never be shortchanged in any workflow.

…………The Hartley function can be leveraged in a data mining workflow in various ways, such as calculating the reduction in uncertainty between two separate measures; this could be useful, for example, in specifying a numerical cut-off point in bits or bans, after which it is no longer worthwhile to go on inspecting rows for whatever end purpose is at hand, such as sampling. The sample code in Figure 3 takes the Hartley measure after the 10 millionth row, with about 9 percent of the dataset left to go, which is why the remaining uncertainty in Figure 4 is so low; as we approach the last uninspected record, the remaining uncertainty approaches zero. The formula is equivalent to counting the remaining records and plugging the results into the Hartley function. Another interesting question is whether we could pair this with cardinality estimation, to get a ballpark figure of how much we can learn from each record we inspect before we’ve even traversed a dataset. I don’t know much about cardinality estimation yet, but the possibility is tantalizing.

**Figure 3: Calculating the Remaining Uncertainty with the Hartley Function**

-- assumes @DistinctCount and @Count still hold the full-table counts from Figure 1
DECLARE @DistinctCountOfKnownValues bigint, @CountOfKnownValues bigint

SELECT @DistinctCountOfKnownValues = Count(DISTINCT Column1), @CountOfKnownValues = Count(*)
FROM Physics.HiggsBosonTable
WHERE ID BETWEEN 1 AND 10000000

SELECT Bits, KnownBits, Bits - KnownBits AS RemainingUncertaintyInBits, MultisetBits, KnownMultisetBits, MultisetBits - KnownMultisetBits AS RemainingMultisetUncertaintyInBits
FROM (SELECT Log(@DistinctCount, 2) AS Bits, Log(@DistinctCountOfKnownValues, 2) AS KnownBits, Log(@Count, 2) AS MultisetBits, Log(@CountOfKnownValues, 2) AS KnownMultisetBits) AS T1

**Figure 4: Uncertainty Reduction Results**
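The subtraction in Figure 3 can be mimicked in a few lines of Python with made-up distinct counts, just to verify that the remaining uncertainty shrinks toward zero as the inspected portion approaches the whole dataset:

```python
import math

def remaining_uncertainty_bits(total_distinct, known_distinct):
    """Hartley uncertainty left over after a partial inspection:
    total bits minus the bits accounted for by the values already seen."""
    return math.log2(total_distinct) - math.log2(known_distinct)

# Hypothetical distinct counts: full table vs. the first 10 million rows
nearly_done = remaining_uncertainty_bits(8500, 8200)  # small residual
finished = remaining_uncertainty_bits(8500, 8500)     # nothing left to learn
```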

…………The kinship between the Hartley function and the rest of information theory is evident in some of its alternative names, like the Hartley Entropy or Max Entropy. It is equivalent to the Rényi Entropy[5] with its alpha parameter (α) set to 0, as I’ll explain a few articles from now. It’s also identical to the Shannon Entropy in the case of the uniform distribution, i.e. when all values are equally likely.[6] I’ll be spending a couple of articles on various aspects of entropy early on in this series, since it’s such an important concept in information theory. The math and logic can get thick pretty quickly in this field, so it is best to start off with the measure that started it all, Shannon’s famous “H.” I recognized its signature combination of a negative summation operator and a log operation when translating some of the equations used in Implementing Fuzzy Sets in SQL Server, Part 10.2: Measuring Uncertainty in Evidence Theory into T-SQL. As with the Hartley function, this previous exposure to more advanced fuzzy derivatives ought to make the material a little easier to swallow. The difficulty with Shannon’s Entropy, however, is not in its calculation, but in its proper interpretation. Yet as long as we have the rigor to avoid assigning unwarranted shades of meaning to the term “information,” it can be a powerful addition to our data mining toolbelts.
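For readers who want to see the uniform-distribution equivalence for themselves before the next article, here is a minimal Python version of Shannon’s H, used purely as a numeric check:

```python
import math

def shannon_entropy(probabilities):
    """Shannon's H: the negative sum of p * log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# For n equally likely values, H collapses to the Hartley measure log2(n)
n = 1024
uniform = [1.0 / n] * n
assert abs(shannon_entropy(uniform) - math.log2(n)) < 1e-9

# Any skew pulls the entropy below the Hartley ceiling
skewed = [0.5] + [0.5 / (n - 1)] * (n - 1)
assert shannon_entropy(skewed) < math.log2(n)
```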

[1] p. xii, LaRose, Daniel T., 2005, __Discovering Knowledge in Data: An Introduction to Data Mining__. Wiley-Interscience: Hoboken, N.J.

[2] See the __Wikipedia__ articles “Hartley Function” and “Ralph Hartley” at http://en.wikipedia.org/wiki/Hartley_function and http://en.wikipedia.org/wiki/Ralph_Hartley respectively.

[3] p. 5 Ritchie, L. David., 1991, __Information__. Sage Publications: Newbury Park, Calif.

[4] p. 288, Bar-Hillel, Yehoshua, 1964, __Language and Information: Selected Essays On Their Theory and Application__. Addison-Wesley Pub. Co.: Reading, Mass.

[5] See the Wikipedia article “Rényi Entropy” at http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

[6] See the __Wikipedia__ article “Hartley Function” at http://en.wikipedia.org/wiki/Hartley_function

## Implementing Fuzzy Sets in SQL Server, Part 11: Fuzzy Addenda

**By Steve Bolton**

…………One of the key reasons I looked into the topic of fuzzy sets in the first place was my suspicion that T-SQL, as a set-based language, would be ideal for modeling them. That turned out to be an understatement of sorts: I definitely was not prepared to discover just how useful they can be for translating imprecise linguistic modifiers in Behavior-Driven Development (BDD) environments and user stories, nor did I realize how little information has percolated down from the mammoth amount of theoretical research done on fuzzy topics over the last 40 years. Hopefully this series of amateur mistutorials helped rectify that gap by giving fuzzy sets some badly needed free press, of the kind I tried to bring to SSDM in my older A Rickety Stairway to SQL Server Data Mining series a while back. I originally set aside this final article as a kitchen drawer of sorts, to dispense with some postscripts that would’ve interfered with the flow of the rest of the series, in which each concept was used as a building block for the next. One leftover concept I thought might be worthy of significant attention was fuzzy orders, which sound as if they would be right up SQL Server’s alley; after all, DBAs use the ORDER BY statement every day. The problem is that T-SQL, like most other set-based languages, turns out not to be ideal for modeling this kind of fuzzy object.

**Fuzzy Orders and the Limitations of Hierarchies in SQL**

In the literature, fuzzy set orders are created by applying continuous membership grades to a record’s position in a particular fuzzy set. Devices like Hasse diagrams and properties like “dominated” and “undominated” are useful in implementing them[1], but I won’t bother, for the simple reason that SQL Server lacks robust graph database capabilities. Modeling relationships of this kind is still notoriously difficult in the relational realm, even though it has been augmented by such useful tools as the hierarchyid data type in recent years. I am rather fond of hierarchyid, but it is unable to model multiparent trees in an efficient way, let alone multidimensional directed graphs; just try modeling a simple genealogical tree with it. Trees are instances of what are known in mathematical parlance as partial orders; when you really stop and think about it, they represent a form of order, except in more than one dimension, such as “my grandparents and I have a descendant-ancestor relationship, but my cousins and I do not.”[2] As far as I can tell, directed graphs open up more possibilities by relaxing the rules of composition, in the same way that Riemann manifolds give us access to curved hyperspace. I for one would cast my vote for adding graph database capabilities similar to those found in Neo4j[3] to SQL Server, which would add a whole new dimension to the product in the same way that Analysis Services and Reporting Services do, without being a separate service.

…………Alas, until such capabilities are added to SQL Server, it wouldn’t be useful to model most forms of fuzzy orders in T-SQL, let alone Multidimensional Expressions (MDX) in SQL Server Analysis Services (SSAS) cubes, because they immediately require the flexibility of multiparent trees and directed graphs. These tasks could be accomplished in SQL Server 2014 as it stands, but in contrast to the other fuzzy objects I’ve introduced throughout this series, I doubt they can be done in an efficient way. It also doesn’t help matters that the Windows Presentation Foundation (WPF) tree control is a walking disaster – for years now, its shortcomings have been a thorn in the side of .Net developers of all skill levels. Microsoft simply didn’t build in such basic functionality as searching for specific members in a collapsed tree, and in fact made it virtually impossible for third-party developers to do it themselves. Needless to say, neither the WPF TreeView nor hierarchyid is well-suited to modeling directed graphs, which are simply more flexible generalizations of trees. The kissing cousins of fuzzy orders, like fuzzy rankings[4] and fuzzy morphisms[5], aren’t really feasible either. George J. Klir and Bo Yuan’s *Fuzzy Sets and Fuzzy Logic: Theory and Applications*, my favorite go-to resource for fuzzy math formulas, provides a decent starting point for all three[6], but from my limited experience, I wouldn’t even try to implement them unless I had access to a good third-party product like GoXAM’s directed graph control (which may be expensive, but would probably recoup its costs by saving weeks of wasted labor on the unworkable WPF TreeView). If it does one day become worthwhile to model fuzzy orders and ranks in some future edition of SQL Server (or I turn out to be wrong), they’ll probably require a lot of CASE statements in ORDER BY clauses and windowing functions, respectively.
Given that there’s a mountain of currently unsolved problems out there that other aspects of fuzzy sets could tackle right away, we’ll save this topic for a later date. It’ll be a long time before all the low-hanging fruit is used up and we’re to the point where struggling to model them will become worthwhile.

**Some Simple T-SQL for Fuzzy Medians**

Because I realized early on that fuzzy orders were an afterthought – at least given the present capabilities of SQL Server and other relational databases – I left the subject of fuzzy medians for this junk drawer of an article. After all, medians are inherently dependent on the order of data, given that they pick the one or two values that occur precisely in the middle of a set. Furthermore, I noticed that the formulas involved calculations on two sets rather than one, which would have cluttered Implementing Fuzzy Sets in SQL Server, Part 7: The Significance of Fuzzy Stats, where the sample code was all done on a single table. That should have been a clue, however, that the fuzzy medians in the literature are a separate subject, not just a fuzzified version of ordinary medians. The latter would be easy enough to implement, given the principles of fuzzy sets introduced throughout this series; for example, instead of selecting the one or two records at the dead center of the dataset, we could select a fuzzy range. The trapezoidal numbers discussed in Implementing Fuzzy Sets in SQL Server, Part 6: Fuzzy Numbers and Linguistic Modifiers might be ideal for this purpose. The type of fuzzy median under discussion here instead belongs in the taxonomic hierarchy of fuzzy objects I mentioned in the fuzzy stats article, like Ordered Weighted Averages (OWAs), Lambda Averages (λ-Averages), T-norms, T-conorms and the like. Compared to some of those operations, the logic of fuzzy medians is fairly simple: we take the maximum of the two sets’ values at each corresponding row when both membership scores fall between 0 and the @LambdaParameter, the minimum when both fall between the @LambdaParameter and 1, and just the @LambdaParameter (which must be set between 0 and 1) in all other cases.[7] Assuming I read the formulas correctly – which is not a given, since I’m a novice at this – this should all be implemented in Figure 1.
As usual, it looks a lot longer than it really is; everything through the second UPDATE statement is just the same sample code I’ve used throughout this series to populate the membership functions for binary set relations. Keep in mind that we don’t need to use Z-Scores to assign membership values here; I’m just using them to illustrate how to assign memberships in a fuzzy set, using familiar code from older tutorials. The sky’s the limit as far as the number of functions you can use to assign such values; the key is to find the right match for the problem you’re trying to solve. This would be a good match if we were trying to rate outliers by two different forms of Z-Scores, for example. The only novel part is the last SELECT, which isn’t difficult at all. As always, the results in Figure 2 are derived from the Duchennes muscular dystrophy dataset I downloaded a few tutorial series ago from Vanderbilt University’s Department of Biostatistics and have been using for practice data ever since.

**Figure 1: Sample Code for a Simple Fuzzy Median**

DECLARE @RescalingMax decimal(38,6), @RescalingMin decimal(38,6), @RescalingRange decimal(38,6)

DECLARE @ZScoreTable table
(PrimaryKey sql_variant,
Value decimal(38,6),
ZScore decimal(38,6),
ReversedZScore AS CAST(1 AS decimal(38,6)) - ABS(ZScore),
MembershipScore decimal(38,6),
GroupRank bigint
)

DECLARE @ModifiedZScoreTable table
(PrimaryKey sql_variant,
Value decimal(38,6),
ZScore decimal(38,6),
ReversedZScore AS CAST(1 AS decimal(38,6)) - ABS(ZScore),
MembershipScore decimal(38,6),
GroupRank bigint,
OutlierCandidate bit
)

INSERT INTO @ZScoreTable
(PrimaryKey, Value, ZScore, GroupRank)
EXEC Calculations.ZScoreSP
@DatabaseName = N'DataMiningProjects',
@SchemaName = N'Health',
@TableName = N'DuchennesTable',
@ColumnName = N'LactateDehydrogenase',
@PrimaryKeyName = N'ID',
@DecimalPrecision = '38,32',
@OrderByCode = 8

-- RESCALING
SELECT @RescalingMax = Max(ReversedZScore), @RescalingMin = Min(ReversedZScore)
FROM @ZScoreTable

SELECT @RescalingRange = @RescalingMax - @RescalingMin

UPDATE @ZScoreTable
SET MembershipScore = (ReversedZScore - @RescalingMin) / @RescalingRange

INSERT INTO @ModifiedZScoreTable
(PrimaryKey, Value, ZScore, GroupRank, OutlierCandidate)
EXEC Calculations.ModifiedZScoreSP
@DatabaseName = N'DataMiningProjects',
@SchemaName = N'Health',
@TableName = N'DuchennesTable',
@ColumnName = N'LactateDehydrogenase',
@PrimaryKeyName = N'ID',
@OrderByCode = 8,
@DecimalPrecision = '38,32'

-- RESCALING
SELECT @RescalingMax = Max(ReversedZScore), @RescalingMin = Min(ReversedZScore)
FROM @ModifiedZScoreTable

SELECT @RescalingRange = @RescalingMax - @RescalingMin

UPDATE @ModifiedZScoreTable
SET MembershipScore = (ReversedZScore - @RescalingMin) / @RescalingRange

DECLARE @LambdaParameter float = 0.43

SELECT T1.PrimaryKey, T1.Value, T1.MembershipScore, T2.MembershipScore,
CASE WHEN (T1.MembershipScore BETWEEN 0 AND @LambdaParameter) AND (T2.MembershipScore BETWEEN 0 AND @LambdaParameter) THEN (SELECT MAX(Value) FROM (VALUES (T1.MembershipScore), (T2.MembershipScore)) AS T1(Value))
WHEN (T1.MembershipScore BETWEEN @LambdaParameter AND 1) AND (T2.MembershipScore BETWEEN @LambdaParameter AND 1) THEN (SELECT MIN(Value) FROM (VALUES (T1.MembershipScore), (T2.MembershipScore)) AS T1(Value))
ELSE @LambdaParameter END AS FuzzyMedian
FROM @ZScoreTable AS T1
INNER JOIN @ModifiedZScoreTable AS T2
ON T1.PrimaryKey = T2.PrimaryKey AND T1.Value IS NOT NULL AND T2.Value IS NOT NULL

**Figure 2: Results from the Duchennes Dataset**
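Since I can’t vouch for my reading of the formulas, it may help to see the same three-branch λ-median rule stripped down to a few lines of Python, where it’s easy to test in isolation; the membership scores below are hypothetical, not taken from the Duchennes results:

```python
def fuzzy_median(a, b, lam):
    """Lambda-median of two membership scores a, b in [0, 1]: the max when
    both fall at or below lambda, the min when both fall at or above lambda,
    and lambda itself in all other cases (per Klir and Yuan's definition)."""
    if a <= lam and b <= lam:
        return max(a, b)
    if a >= lam and b >= lam:
        return min(a, b)
    return lam

lam = 0.43  # same value as the T-SQL's @LambdaParameter
assert fuzzy_median(0.10, 0.30, lam) == 0.30  # both below lambda: take the max
assert fuzzy_median(0.60, 0.90, lam) == 0.60  # both above lambda: take the min
assert fuzzy_median(0.10, 0.90, lam) == lam   # straddling cases: lambda wins
```

The three branches map one-to-one onto the CASE expression in the final SELECT of Figure 1, which is why that statement is less intimidating than it looks.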

…………I barely began to scratch the surface of fuzzy objects like fuzzy medians, λ-Averages, T-norms, T-conorms and OWAs in this series. In fact, there’s an entire sea of ripe research out there on all topics fuzzy that could be quite useful to relational DBAs and decision support specialists, but which has gone unpicked. There are many different directions this topic can be taken in, so I may revisit this series and tack some additional articles onto it in the future. I didn’t get a chance to mention the extension principle[8] at all, and glossed over important applications of fuzzy techniques in Decision Theory way back in Implementing Fuzzy Sets in SQL Server, Part 4: From Fuzzy Unions to Fuzzy Logic. I might provide more detail on the use cases for particular T-norms and T-conorms (if I can ever get my hands on the relevant academic journal articles, which are expensive), model more linguistic states, get into indexing considerations, and cover other brands of fuzzy aggregates and other types of fuzzy partitions besides alpha cuts (α-cuts), among other things. Yet I’d rather branch off into “soft computing,” a grab-bag of cutting-edge fields that are actually quite hard, which makes its name something of an oxymoron. Fuzzy logic is merely one of the buzzwords associated with it, like chaos theory, neural nets, support vector machines (SVMs) and genetic algorithms. What they all have in common is that they’re useful in situations where inexact solutions are acceptable, including NP-Complete problems.[9] The same hype and intellectual intoxication I spoke of in Implementing Fuzzy Sets in SQL Server, Part 1: Membership Functions and the Fuzzy Taxonomy also surrounds certain aspects of soft computing, which seems to make some theoreticians go soft in the head; I guarantee there will still be useful innovations occurring in these fields a century from now, assuming the human race lasts that long, but these incredible tools aren’t cure-alls.
There are some things they just can’t do and I’d wager that certain brands of artificial intelligence and machine learning are among them; I love science fiction but it’s not wise to confuse it with cold, hard reality.

…………That’s a discussion I’ll take up by dribs and drabs in my next, long-delayed mistutorial series, Information Measurement with SQL Server, which may serve as a stepping stone to my favorite topic, neural nets. Both topics dovetail nicely with fuzzy sets and many of the tangential topics we’ve covered in this series, like Shannon’s Entropy and the Hartley function. These are among dozens of metrics which can be coded in T-SQL and Multidimensional Expressions (MDX) and put to good use for data mining purposes, as I will demonstrate over the course of that long and possibly nomadic series. I aim to familiarize myself with semantic information, measures of order, measures of sensitivity to initial conditions (like the Lyapunov Exponent used in chaos theory) and various means of quantifying algorithmic complexity – anything that will reduce uncertainty and glean whatever unused information is left in our datasets, by quantifying it in some way. Some of these metrics, such as the Kullback-Leibler Divergence and Bhattacharyya Distance, can be plugged into the formulas I introduced in this series for measuring fuzziness in terms of set complements. We’ve already gotten our toes wet by introducing fuzzy stats and metrics for quantifying nonspecificity and fuzziness; now it’s time to jump in. Some of the topics will be quite shallow and easy to follow, while others may be incredibly deep. It’s largely unexplored territory for me as well, so I may have to skip around from topic to topic in an unsystematic way, instead of deliberately building up to more complex concepts as I did towards Dempster-Shafer Evidence Theory in this series. At a minimum, readers should at least benefit from learning from my mistakes, which don’t require a fancy fuzzy expert system to tell us they’re inevitable; like death and taxes, they’re one of the few pieces of information that come with any certainty in predictive analytics and data mining.

[1] pp . 137-141, Klir, George J. and Yuan, Bo, 1995, __Fuzzy Sets and Fuzzy Logic: Theory and Applications__. Prentice Hall: Upper Saddle River, N.J. On this particular page, they’re extending the meaning of the term even further, to complex network topologies.

[2] For more information, see the article “Partially Ordered Set” at the __Wikipedia__ web address http://en.wikipedia.org/wiki/Partially_ordered_set

[3] Which I have yet to try; I’m only speaking here of what I’ve read about Neo4j casually.

[4] pp. 405-408, Klir and Yuan.

[5] *IBID*., pp. 141-144.

[6] *IBID*., pp. 137-144.

[7] *IBID*., p. 94.

[8] *IBID*., pp. 44-45.

[9] See the __Wikipedia__ article “Soft Computing” at http://en.wikipedia.org/wiki/Soft_computing

## Implementing Fuzzy Sets in SQL Server, Part 10.2: Measuring Uncertainty in Evidence Theory

**By Steve Bolton **

…………To avoid overloading readers with too many concepts at once, I split my discussion of Dempster-Shafer Evidence Theory into two parts, with the bulk of the data modeling aspects and theory occurring in the last article. This time around, I’ll cover how fuzzy measures can be applied to it to quantify such forms of uncertainty as nonspecificity and imprecision (i.e., “fuzziness”) that were introduced in prior articles. Since the Plausibility, Belief and probability mass assignment figures work together to assign degrees of truth, they also introduce the potential for contradictory evidence, which leads to a few other measures of uncertainty: Strife, Discord and Conflict, which aren’t as relevant to possibility distributions and ordinary fuzzy sets. In addition, the probability mass for a universal hypothesis can be interpreted as a form of uncertainty left over after all of the probabilities for the subsets have been partitioned out. For example, in Figure 1, this crude type of uncertainty would be associated with the 0.0334928229665072 value for row 6. For the sake of brevity, I won’t rehash how I derived the ordinal LactateDehydrogenaseState category and the first three fuzzy measures associated with it, since the numbers are identical to those in the last tutorial. For the sake of convenience I added three columns with nearly identical names and calculated some sham data for them (based on the frequencies of some CreatineKinase data in the original table) so that we have some Conflicting data to work with. Ordinarily, such comparisons would be made using joins against an external view or table with its own separate ProbabilityMassAssignment, BeliefScore and PlausibilityScore columns, or a query that calculated them on the fly.

**Figure 1: Some Sample Evidence Theory Data from the Last Tutorial**

…………In Figure 2, I translated some of the most popular formulas for evidence theory measures into T-SQL, such as Strife, Discord and Conflict.[1] For these, I used a simpler version of the equations that performs calculations on differences in set values rather than fuzzy intersections and unions.[2] Despite the fact that the two measures differ only in the divisor and the order of the difference operation, Discord is apparently not used as often as Strife, on the grounds that it does not capture as much information. These subtle differences occur only in the alternate measures of Conflict they’re based on; since the one related to Strife is more important, I only included that one in Figure 3, where it’s represented by a score of 0.286225667126791. Versions of Strife and Discord are available for possibility distributions, but I omitted them because the fact that possibility theory is “almost conflict-free” signifies that they’re of “negligible” benefit.[3] I also coded the evidence theory version of nonspecificity and essentially rehashed the crude fuzziness measure I used in Implementing Fuzzy Sets in SQL Server, Part 2: Measuring Imprecision with Fuzzy Complements, except with the YagerComplement parameter arbitrarily set to 0.55 and the probability mass used in place of the membership function results. Both of these are unary fuzzy measures that apply only to the set defined by the first three float columns, whereas Strife, Discord and Conflict are binary measures calculated on the differences between the two sets encoded in the Health.DuchennesEvidenceTheoryTable. We can also add the Strife and fuzziness figures together to derive a measure of total uncertainty, plus interpret the height of a fuzzy set – i.e., the count of records with the maximum MembershipScore of 1 – as a sort of credibility measure.
Keep in mind that I’m not only a novice at this, but am consulting mathematical resources that generally don’t have the kind of step-by-step examples with sample data found in the literature on statistics. This means I wasn’t able to validate my implementation of these formulas well at all, so it would be wise to recheck them before putting them to use in production environments where accuracy is an issue. I’m most concerned by the possibility that I may be incorrectly aggregating the individual focal elements for evidentiary fuzziness and nonspecificity, each of which should be weighted by its corresponding probability mass.
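To state that worry concretely: in the textbook formulation, evidentiary nonspecificity weights the Hartley measure of each focal element (the base-2 log of that subset’s cardinality) by its own probability mass, rather than applying a single overall record count. Here is a minimal Python sketch of what I believe the correct aggregation looks like, using hypothetical focal elements rather than the Duchennes data:

```python
import math

# Hypothetical focal elements: each maps a subset of the frame of
# discernment to its probability mass (sham values summing to 1)
focal_elements = {
    frozenset(['Low']): 0.30,
    frozenset(['High']): 0.25,
    frozenset(['Low', 'Normal']): 0.25,
    frozenset(['Low', 'Normal', 'High']): 0.20,  # the universal hypothesis
}

# Evidentiary nonspecificity: the Hartley measure log2|A| of each focal
# element A, weighted by its probability mass m(A)
nonspecificity = sum(m * math.log2(len(subset))
                     for subset, m in focal_elements.items())
```

Note that singleton focal elements contribute nothing, since log2(1) = 0; only the broader, less specific hypotheses add to the total.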

**Figure 2: Several Evidence Theory Measures Implemented in T-SQL**

DECLARE @Conflict float, @ConflictForDiscord float

-- the zero guards protect the actual divisors against division by zero
SELECT @Conflict = SUM(CASE WHEN BeliefScore = 0 THEN ProbabilityMassAssignment2 * ABS(BeliefScore - BeliefScore2)
       ELSE ProbabilityMassAssignment2 * ABS(BeliefScore - BeliefScore2) / ABS(CAST(BeliefScore AS float)) END),
       @ConflictForDiscord = SUM(CASE WHEN BeliefScore2 = 0 THEN ProbabilityMassAssignment2 * ABS(BeliefScore2 - BeliefScore)
       ELSE ProbabilityMassAssignment2 * ABS(BeliefScore2 - BeliefScore) / ABS(CAST(BeliefScore2 AS float)) END)
FROM Health.DuchennesEvidenceTheoryTable

-- FUZZINESS
DECLARE @Count bigint, @SimpleMeasureOfFuzziness float
DECLARE @OmegaParameter float = 0.55 -- ω

SELECT @Count = Count(*)
FROM Health.DuchennesEvidenceTheoryTable

SELECT @SimpleMeasureOfFuzziness = SUM(ABS(ProbabilityMassAssignment - YagerComplement)) / @Count
FROM (SELECT ProbabilityMassAssignment,
       Power(1 - Power(ProbabilityMassAssignment, @OmegaParameter), 1 / CAST(@OmegaParameter AS float)) AS YagerComplement
       FROM Health.DuchennesEvidenceTheoryTable) AS T1

-- NONSPECIFICITY
DECLARE @EvidenceTheoryNonspecificityInBits float

SELECT @EvidenceTheoryNonspecificityInBits = SUM(ProbabilityMassAssignment * Log(@Count, 2))
FROM Health.DuchennesEvidenceTheoryTable

SELECT Strife, Discord, Conflict, EvidenceTheoryNonspecificityInBits, SimpleMeasureOfFuzziness,
       Strife + EvidenceTheoryNonspecificityInBits AS TotalUncertainty,
       (SELECT ProbabilityMassAssignment
       FROM Health.DuchennesEvidenceTheoryTable
       WHERE LactateDehydrogenaseState = 'Any') AS ProbabilityMassRemainderUncertainty
FROM (SELECT -1 * SUM(ProbabilityMassAssignment * Log((1 - @Conflict), 2)) AS Strife,
       -1 * SUM(ProbabilityMassAssignment * Log((1 - @ConflictForDiscord), 2)) AS Discord,
       @Conflict AS Conflict,
       @EvidenceTheoryNonspecificityInBits AS EvidenceTheoryNonspecificityInBits,
       @SimpleMeasureOfFuzziness AS SimpleMeasureOfFuzziness
       FROM Health.DuchennesEvidenceTheoryTable) AS T1

__Figure 3: Sample Results from the Duchennes Evidence Theory Table__

…………The nonspecificity measure in evidence theory is merely the Hartley function weighted by the probability mass assignments. On paper, the equation for Strife ought to look awfully familiar to data miners who have worked with Shannon’s Entropy before. The evidence theory version incorporates some additional terms so that a comparison can be performed over two sets, but the negative summation operator and logarithm operation are immediately reminiscent of its more famous forerunner, which measures probabilistic uncertainty due to a lack of stochastic information. Evidentiary nonspecificity trumps entropy in many situations because it is measured linearly, thereby avoiding computationally difficult nonlinear math (my paraphrase), but it sometimes doesn’t produce unique solutions, in which case Klir and Yuan recommend using measures of Strife to quantify uncertainty.[4] Nevertheless, when interpreted correctly and used judiciously, these measures can be used in conjunction with axioms like the principles of minimum uncertainty, maximum uncertainty[5] and uncertainty invariance[6] to perform ampliative reasoning[7] and draw useful inferences about datasets:

“Once uncertainty (and information) measures become well justified, they can very effectively be utilized for managing uncertainty and the associated information. For example, they can be utilized for extrapolating evidence, assessing the strength of relationship between given groups of variables, assessing the influence of given input variables on given output variables, measuring the loss of information when a system is simplified, and the like. In many problem situations, the relevant measures of uncertainty are applicable only in their conditional or relative terms.”[8]

…………That often requires some really deep thinking in order to avoid various pitfalls in analysis; in essence, they all involve honing the use of pure reason, which I now see the benefits of, but could definitely use a lot more practice in. For example, Dempster-Shafer Theory has well-known issues with counter-intuitive results at the highest and lowest Conflict values, which may require mental discipline to ferret out; perhaps high values of Strife can act as a safeguard against this, by alerting analysts that inspection for these logical conundrums is warranted.[9] Critics like Judea Pearl have apparently elaborated at length on various other fallacies that can arise from “confusing probabilities of truth with probabilities of provability,” all of which need to be taken into account when modeling evidentiary uncertainty.[10] Keep in mind as well that Belief or Plausibility scores of 1 do not necessarily signify total certainty; as we saw a few articles ago, Possibility values of 1 only signify a state of complete surprise when an event does not occur rather than assurance that it will happen.

…………The issue with evidence theory runs even deeper in a certain sense, especially if those figures are derived from subjective ratings. Nevertheless, even perfectly objective and accurate observations can be quibbled with, for reasons that basically boil down to Bill W.’s adage “Denial ain’t just a river in Egypt.” One of the banes of the human condition is our propensity to squeeze our eyes shut to evidence we don’t like, which can only be overcome by honesty, not education; more schooling may even make things worse, by enabling people to lie to themselves with bigger words than before. In that case, they may end up getting tenure for developing entirely preposterous philosophies, like solipsism, or doubting their own existence. As G.K. Chesterton warned more than a century ago, nothing can stop a man from piling doubt on top of doubt, perhaps by reaching for such desperate excuses as “perhaps all we know is just a dream.” He provided a litmus test for recognizing bad chains of logic, which can indeed go on forever, but can be judged on whether or not they tend to drive men into lunatic asylums. Cutting-edge topics like fuzzy sets, chaos theory and information theory inevitably give birth to extravagant half-baked philosophies, born of precisely the kind of obsession and intellectual intoxication that Chesterton speaks of in his chapter on The Suicide of Thought[11] and that his colleague Arnold Lunn addresses in *The Flight from Reason*.[12] These are powerful techniques, but only when kept within the proper bounds; problems like “definition drift” and subtle, unwitting changes in the meanings assigned to fuzzy measures can easily lead to unwarranted, fallacious or even self-deceptive conclusions. As we shall see in the next series, information theory overlays some of its own interpretability issues on top of this, which means we must tread even more carefully when integrating it with evidence theory.

…………Fuzzy measures and information theory mesh so well together that George J. Klir and Bo Yuan included an entire chapter on the topic of “Uncertainty-Based Information” in my favorite resource for fuzzy formulas, *Fuzzy Sets and Fuzzy Logic: Theory and Applications*.[13] The field of uncertainty management is still in its infancy, but scholars now recognize that uncertainty is often “the result of some information deficiency. Information…may be incomplete, imprecise, fragmentary, not fully reliable, vague, contradictory, or deficient in some other way. In general, these various information deficiencies may result in different types of uncertainty.”[14] Information in this context is interpreted as uncertainty reduction[15]; the more information we have, the more certain we become. Methods to ascertain how the reduction of fuzziness (i.e., how imprecise the boundaries of fuzzy sets are) contributes to information gain were not fully worked out two decades ago, when most of the literature I consulted for this series was written, and I have the impression that still holds today. When we adapt the Hartley function to measure the nonspecificity of evidence, possibility distributions and fuzzy sets, all we’re doing is taking a count of how many states a dataset might take on. With Shannon’s Entropy, we’re performing a related calculation that incorporates the probabilities associated with those states. Given their status as the foundations of information theory, I’ll kick off my long-delayed tutorial series Information Measurement with SQL Server by discussing both from different vantage points. I hope to tackle a whole smorgasbord of ways in which the amount of information associated with a dataset can be quantified, thereby helping to cut down further on uncertainty. Algorithmic complexity, the Lyapunov exponent, various measures of order and semantic information metrics can all be used to partition uncertainty and preserve the information content of our data, so that organizations can make more accurate decisions in the tangible world of the here and now.
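To make that contrast concrete, here is a minimal Python illustration of the two foundational measures just described, using hypothetical states and probabilities of my own invention: the Hartley function depends only on the count of possible states, while Shannon’s Entropy weights each state by its probability.

```python
import math

states = ['Low', 'Normal', 'High', 'Any']  # hypothetical category states
probabilities = [0.20, 0.50, 0.25, 0.05]   # sham figures summing to 1

# Hartley measure: log2 of the number of possible states
hartley_bits = math.log2(len(states))

# Shannon's Entropy: the probability-weighted negative log of each state
shannon_bits = -sum(p * math.log2(p) for p in probabilities)

# Entropy never exceeds the Hartley measure for the same state count;
# the two coincide only when the distribution is uniform
```

For four states the Hartley measure is exactly 2 bits, while the skewed probabilities above yield a lower entropy, reflecting the extra certainty a lopsided distribution provides.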

[1] pp. 259, 262-263, 267, 269, Klir, George J. and Yuan, Bo, 1995, __Fuzzy Sets and Fuzzy Logic: Theory and Applications__. Prentice Hall: Upper Saddle River, N.J. The formulas are widely available, but I adopted this as my go-to resource whenever the math got thick.

[2] *IBID*., p. 263.

[3] *IBID*., pp. 262-265.

[4] *IBID*., p. 274

[5] *IBID*., pp. 271-272. Klir and Yuan’s explanation of how to use maximum uncertainty for ampliative reasoning almost sounds like a sort of reverse parsimony: “use all information available, but make sure that no additional information is unwittingly added…the principle requires that conclusions resulting from any ampliative inference maximize the relevant uncertainty within the constraints representing the premises. The principle guarantees that our ignorance be fully recognized when we try to enlarge our claims beyond the given premises and, at the same time, that all information contained in the premises be fully utilized. In other words, the principle guarantees that our conclusions are maximally noncommittal with regard to information not contained in the premises.”

[6] *IBID*., p. 275.

[7] *IBID*., p. 271.

[8] *IBID*., p. 269.

[9] See the __Wikipedia__ webpage “Dempster Shafer Theory” at http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory

[10] *IBID*.

[11] See Chesterton, G.K., 2001, __Orthodoxy.__ Image Books: London. Available online at the __G. K. Chesterton’s Works on the Web__ address http://www.cse.dmu.ac.uk/~mward/gkc/books/

[12] Lunn, Arnold, 1931, __The Flight from Reason__. Longmans, Green and Co.: New York.

[13] pp. 245-276, Klir and Yuan.

[14] *IBID*.

[15] *IBID*., p. 245.