This is the first of two posts on information theory in the context of unsupervised machine learning. The premise of this first post is to introduce some basic concepts and simple examples to get the thinking started.


Pretend that you are trapped in the now-Russian wilderness near Vyborg in the year 1939. You're hiding in the snow and all you hear around you are faint screams. Naturally, as a last wish, you want to covertly send a message to your comrades, perhaps the instructions for your favourite poundcake recipe, but your only means of communication is a flashlight. This allows you to send a whole 2 bits of information per second. Faced with these limitations, you therefore want to devise a clever plan to encode the message, but how could you go about this? With only 2 bits (on and off), you can send the following symbols.

In principle, this tiny alphabet will allow you to encode any message. True, we could extend the whole alphabet by repetition, with 0000 = D, 0001 = E and so forth, to make every letter expressible, but as our alphabet grows bigger, so does the time it takes to send a message. While this scheme allows us to transmit messages with zero distortion, i.e. perfectly, it is still limited by the same bandwidth constraints, and the Russians are coming quickly. So what if you added some randomness? For example, set 0000 = A = H, so that the same bitstring can represent two different letters. This can introduce errors in interpretation, but if the receiver knows the language well enough, they will probably be able to disambiguate, and you can now 'type' faster.
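To make this concrete, here is a minimal sketch of the two schemes in Python. The codewords are illustrative stand-ins, not the ones from the story.

# Lossless: one letter per bitstring, but codes grow longer as the
# alphabet grows.
lossless = {"0000": ["A"], "0001": ["B"], "0010": ["C"]}
# Lossy: '0000' is overloaded with two letters, so we can 'type' faster
# at the cost of occasional misinterpretation.
lossy = {"0000": ["A", "H"], "0001": ["B"]}

def decode(bits, codebook):
  # With more than one candidate letter, a real receiver would
  # disambiguate from language context; here we naively pick the
  # first option, which is sometimes wrong.
  return codebook[bits][0]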

Intuitively, we might expect a tradeoff between the amount of error we incur in interpreting the original message (distortion) and the amount of information we can send per second (rate). In fact there is, and we have plotted the relationship for the case where the source sequence is i.i.d. Gaussian and message reconstruction is measured in terms of mean-squared error. This is otherwise known as the rate-distortion curve for a channel¹.

Rate-distortion curve for Gaussian channel with fixed variance

Reading the graph: in order to send messages with virtually no loss of information, we would need at least 5 bits of information per symbol. If we can accept an error of 25%, meaning that on average every 4th symbol is wrong, we can get away with 1 bit. Note that this curve is specific to the case where the source variable is Gaussian and the channel is memoryless. We can draw many curves like this for different variances of the Gaussian, and what we will find is that the hardness of the problem is ultimately governed by the entropy of the source data² - more random data is harder to compress.
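For an i.i.d. Gaussian source under mean-squared error, the rate-distortion function has a well-known closed form, $R(D) = \frac{1}{2}\log_2(\sigma^2/D)$ for $D < \sigma^2$ and $0$ otherwise. A quick sketch, assuming unit variance as in the plot:

import math

def gaussian_rate(distortion, variance=1.0):
  # Rate-distortion function of an i.i.d. Gaussian source under
  # mean-squared error. Beyond D = variance we can send nothing and
  # simply guess the mean, so the required rate is zero.
  if distortion >= variance:
    return 0.0
  return 0.5 * math.log2(variance / distortion)

print(gaussian_rate(0.25))  # 1.0 bit, matching the 25% example above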

The setup

To get a better sense of this tradeoff, let us allow ourselves to become more formal.

Define a code alphabet $\mathcal{A} = \{a_1, \dots, a_c\}$ with size $|\mathcal{A}| = c$, along with our source random variable $X$, which is the "data" that we are interested in. The choice of alphabet and distribution are arbitrary and serve only as examples. Given the limited capacity of the alphabet, we have many possible configurations to choose from when segmenting the data. Limiting ourselves to a metric-space viewpoint, we consider three such configurations,

  1. Midpoints placed uniformly over the range of $X$.
  2. Midpoints placed uniformly over the range of $X$, but transformed through the inverse density³, which we call a probability-weighted segmentation of the input space.
  3. The closest centroid attained from $k$-means trained on samples from the distribution.

These three options are shown below, with arrows indicating their midpoints; a code sketch follows the figure.

Segmentations
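A sketch of how these three segmentations could be computed, assuming a Beta-distributed source as a stand-in (the exact parameters behind the figures are not fixed here) and $c = 8$ codes:

import numpy as np
from scipy.stats import beta
from sklearn.cluster import KMeans

c = 8
source = beta(2, 5)  # assumed stand-in for the source distribution

# 1. Midpoints placed uniformly over the range of the source.
uniform_mids = (np.arange(c) + 0.5) / c

# 2. Uniform midpoints pushed through the quantile function (inverse CDF),
#    yielding a probability-weighted segmentation.
weighted_mids = source.ppf(uniform_mids)

# 3. Centroids from k-means trained on samples of the source.
samples = source.rvs(10_000, random_state=0).reshape(-1, 1)
kmeans = KMeans(n_clusters=c, n_init=10, random_state=0).fit(samples)
kmeans_mids = np.sort(kmeans.cluster_centers_.ravel())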

We can define the encoder as a function that picks out the nearest midpoint/centroid under the corresponding segmentation technique, $\mathrm{enc}(x) = \arg\min_i |x - m_i|$, along with a simple decoder, which picks out the actual value of the centroid at this index, $\mathrm{dec}(i) = m_i$. For convenience, each letter in the alphabet is just its alphabetical index, $a_i = i$. This problem in itself is not very interesting, as what we have is just a deterministic approximation of the source variable, with a quantisation error proportional to the segment width. The deterministic problem is well studied for both static and adaptive approaches, the latter of which is known as vector quantisation (Friedman, Hastie, & Tibshirani, 2001).
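Continuing the sketch above, the encoder/decoder pair over any fixed array of midpoints is only a couple of lines:

def enc(x, mids):
  # Index of the nearest midpoint: the 'letter' we transmit.
  return int(np.argmin(np.abs(mids - x)))

def dec(i, mids):
  # The decoder simply looks the centroid value back up.
  return mids[i]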

To make the problem more interesting, let us introduce a stochastic variable $\varepsilon$ that is added during the encoding process, so that we now have $\widetilde{\mathrm{enc}}(x) = \mathrm{enc}(x) + \varepsilon$. Here, $\varepsilon$ is independent of $X$ and can be considered noise in the channel - this encoder might randomly shift the code one index to the right, akin to a Caesar cipher. The resulting distribution is actually very simply expressed as a mixture of two distributions (proof below). We will analyse this simple construction in depth to understand how noisy channels behave.
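A minimal sketch of this noisy encoder, with an assumed shift probability of p = 0.1:

rng = np.random.default_rng(0)

def noisy_enc(x, mids, p=0.1):
  # With probability p, shift the code one index to the right
  # (a one-step Caesar cipher), wrapping around the code space.
  return (enc(x, mids) + rng.binomial(1, p)) % len(mids)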

First, we can get the trivial behaviours out of the way. If we take this encoder to its natural limit of segmentations by letting $c \to \infty$, we end up with a surjection w.r.t. the source signal. The quantisation error vanishes and $\mathrm{dec}(\widetilde{\mathrm{enc}}(X))$ converges in distribution to $X$. However, due to the stochasticity induced by the noise process, we do not have a bijection and we will not have almost-sure convergence. If we instead let the code wrap around the code space (taking the sum modulo $c$), the one-index shift perturbs the decoded value by only a vanishing segment width, and we can guarantee almost-sure convergence. This also has the nice side-effect that we can interpret the signal as living on the unit sphere $S^1$, shown below.

Wrapped beta distribution

As mentioned, these cases are trivial and generally not practically feasible. Therefore, to investigate further, we pose the following three questions.

  • Under which conditions does the reconstruction $\hat{X} = \mathrm{dec}(\widetilde{\mathrm{enc}}(X))$ satisfy $\hat{X} \xrightarrow{d} X$ (convergence in distribution)?
  • Under which conditions (if any) does $\hat{X} \xrightarrow{P} X$ (convergence in probability)?
  • Which effect does the topology of the centroids have on these types of convergence?

An experiment

We can create a small experiment based on samples drawn from the source distribution. We again have the three segmentation strategies highlighted earlier, but we vary the number of segmentations of the code space. We then measure two quantities of the reconstructed data. First, the CDF error, which measures how much the empirical CDF of the reconstruction differs from the ground-truth CDF on average over the whole data. Second, the packet error ratio, which measures the proportion of input signals that fail to be reconstructed to within some tolerance $\epsilon$.

Mathematically, we can define the reconstructed signal as $\hat{x}_i = \mathrm{dec}(\widetilde{\mathrm{enc}}(x_i))$ and fix a tolerance $\epsilon > 0$. We call $\hat{F}_n$ the sample-estimated empirical CDF of $\hat{X}$. Then we define the distribution error as $\mathbb{E}\,|F(x) - \hat{F}_n(x)|$ and the packet error ratio as $\frac{1}{n}\sum_{i=1}^{n} \mathbb{1}[\,|x_i - \hat{x}_i| > \epsilon\,]$.
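One way to estimate these two quantities from samples, continuing the earlier sketch and using the empirical CDF of the source as a stand-in for the ground-truth $F$:

def distribution_error(x, x_hat):
  # Mean absolute gap between the two empirical CDFs, evaluated on
  # the pooled sample points.
  grid = np.sort(np.concatenate([x, x_hat]))
  F = np.searchsorted(np.sort(x), grid, side="right") / len(x)
  F_hat = np.searchsorted(np.sort(x_hat), grid, side="right") / len(x_hat)
  return np.mean(np.abs(F - F_hat))

def packet_error_ratio(x, x_hat, eps):
  # Fraction of signals not reconstructed to within tolerance eps.
  return np.mean(np.abs(x - x_hat) > eps)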

You will notice that the first error measures convergence in distribution as a function of $c$, and the second is a measure of convergence in probability. Intuitively, we expect higher information content (fidelity/rate) when we have more segmentations.

Distribution error and packet error

From the figure on the left, we can see that the error asymptotically approaches zero as we increase the number of segmentations. Also notice that the curves in both plots bear a similarity to the rate-distortion curves discussed earlier, only here they are more complex because our source is different. We can show that, for any noise level, convergence in distribution holds exactly when $c \to \infty$ (proof below). The speed of convergence towards this asymptote varies with the choice of topology. Especially when the number of segmentations is small, we see a large gap in performance between the uniform approach and the other two. This tells us that the topology is critical in the low-fidelity regime.

The packet error (computed with a fixed, arbitrary $\epsilon$)⁴ similarly shows that the choice of topology is important. But the capacity of the channel, here determined by the number of segmentations, must increase much more before we can reliably send and receive messages. Even with a large number of segmentations, i.e. many bits per symbol, we still receive the wrong message 20% of the time!

How could we improve on this estimate? Well, we can try to integrate out the noise by sampling cleverly from the channel. Since the source of noise is inherent to the channel, we can probe the noise process by sending multiple instances of the same message through the channel. This works as long as we know what the ground-truth signal is, and only when the noise process is stationary. We can devise an algorithm based on Monte Carlo sampling, sketched below.

def encode(value):
  # Noisy channel encode (implementation elided)
  ...

def mean_statistic(samples):
  # Average the sampled codes to estimate the true code.
  acc = 0
  for code in samples:
    acc += code
  acc /= len(samples)
  return acc

def monte_carlo(message, num_samples, sample_statistic=mean_statistic):
  # Send the same message through the noisy channel several times and
  # aggregate the responses with a sample statistic.
  samples = [encode(message) for _ in range(num_samples)]
  return sample_statistic(samples)

Monte Carlo error estimates for packet error ratio

In the above example, the sample statistic is based on a median and floor filter over a window of 5 observations, or random draws. By picking the right type of filter, which in this case we know should include a flooring operation, we can retain a consistently lower error rate, a significant improvement over interpreting the raw channel output.
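The post does not spell the filter out exactly, but one plausible reading, to be passed as the sample_statistic above, is:

def median_floor_statistic(samples):
  # Hypothetical reading of the filter: take the median over the window
  # of repeated draws, then floor back onto an integer code, which
  # counteracts the rightward Bernoulli shift.
  return int(np.floor(np.median(samples)))

# e.g. monte_carlo(message, num_samples=5,
#                  sample_statistic=median_floor_statistic)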

The intuition here is that convergence in probability is quite a bit more difficult than convergence in distribution. We require orders of magnitude more channel capacity to transmit the message correctly content-wise rather than just distributionally. When the channel is noisy this is further complicated, as certain packets are transmitted wrongly. Noise has little effect on the overall distribution, but makes pointwise reconstruction difficult. However, as we have seen, we can perform tricks to improve message interpretation if we know something about the noise.

Side note: When can noise actually help?

While we have vilified noise by showing that it makes interpreting signals more difficult, it can actually be helpful in some instances. It is especially useful in cases where we are more interested in the shape or distribution of the data than in bitwise reconstruction.

If we go back to the previous definitions of the encoder/decoder, we can consider instead a clean encoder with a noisy decoder, $\widetilde{\mathrm{dec}}(i) = \mathrm{dec}(i) + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2)$ is independent of $X$. We then fix the number of segmentations at $c = 3$, vary the standard deviation $\sigma$ of the additive noise, and measure the distribution error as before. What we find is that there is a certain sweet spot of noise magnitude.

Gain
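A sketch of this experiment, reusing the encoder, source, and error measure from the earlier snippets and assuming Gaussian decoder noise (consistent with the mixture-model view below):

def noisy_dec(i, mids, sigma):
  # Clean encode, noisy decode: additive Gaussian noise of scale sigma
  # is injected at reconstruction time.
  return mids[i] + rng.normal(0.0, sigma)

# Sweep the noise scale at a fixed c = 3 and track the distribution error.
mids3 = (np.arange(3) + 0.5) / 3
x = source.rvs(10_000, random_state=1)
for sigma in (0.0, 0.05, 0.1, 0.2):
  x_hat = np.array([noisy_dec(enc(v, mids3), mids3, sigma) for v in x])
  print(sigma, distribution_error(x, x_hat))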

We obtain an optimal (global) noise value at an intermediate $\sigma$ and notice that the $k$-means approach works best. Somehow, the addition of noise at the right magnitude performs better than no noise at all. Why does this work?

Because we are measuring the discrepancy in distributional rather than pointwise terms, the noise helps fill the "holes" that are left behind by not having high enough fidelity (small $c$). What we are actually doing here is modelling the source variable as a Gaussian mixture model with three components and shared variance, but rather than using the common EM approach to find the centers, we have chosen them more arbitrarily. We see again that the choice of topology is important for getting correct coverage of the code space. These centers must be distributed non-overlappingly according to some maximum-likelihood law, rather than an a-priori maximally covering law.

If our topology is a minimally covering set and the noise process is applied locally to clusters, it allows us to build bridges between them.

The UMAP (McInnes, Healy, & Melville, 2018) method is a beautiful application of this principle, in which simplicial complexes are used to construct this covering set.

Gaussian noise on surface points

When can we perfectly capture the inputs?

We have seen that convergence in distribution is more easily attainable than pointwise convergence. In fact, noise injection appears to help with convergence in distribution, as it provides a topological covering in the absence of sufficient rate (if our channel is well calibrated). Still, the question remains to what degree we could reconstruct the inputs exactly, with or without noise.

The trivial results were already proven in the 1940s and 1950s by Claude Shannon (MacKay, 2003), and they tell us that data can be compressed losslessly down to the entropy of the signal, and that it can be transmitted essentially error-free as long as the rate stays below the channel capacity. But Shannon (and lossless compression) is not all there is to say about compression and error-free transmission. The assumptions that Shannon and his peers built upon were very general channels, while the data passed through them were assumed to have a certain structure, e.g. i.i.d. samples. Signal regularity has been widely exploited to build adaptive and domain-specific codes, such as for video coding and speech coding, and machine learning is the current avenue for adaptive, data-driven compression.

As we will see in the next post, posing learning as a noisy communication channel problem gives us the ability to do well-posed⁵ (approximate) maximum likelihood estimation in an unsupervised way. It provides a trivial way to write down a likelihood function, given by the reconstruction error. Additionally, due to its information-theoretic properties, it is amenable to a Bayesian interpretation, for which it becomes simple to pose the problem of posterior inference - though this problem is not always easily solvable.


Bibliography

  1. Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements of statistical learning (Vol. 1). New York: Springer.
  2. McInnes, L., Healy, J., & Melville, J. (2018). UMAP: Uniform manifold approximation and projection for dimension reduction. ArXiv preprint arXiv:1802.03426.
  3. MacKay, D. J. C. (2003). Information theory, inference, and learning algorithms. Cambridge University Press.

Bernoulli noise is equivalent to a mixture of two distributions

Let $X$ be a categorical variable over $c$ categories with probability of class $i$ being $p_i$, and let $\varepsilon \sim \mathrm{Bernoulli}(q)$. Define their sum as $Z = X + \varepsilon$. The probability of $Z$ taking value $i$ is given by

$$P(Z = i) = (1 - q)\,p_i + q\,p_{i-1}.$$

Now consider the same problem, but where $Z$ is instead wrapped to the image of $X$, such that $Z = (X + \varepsilon) \bmod c$. Then

$$P(Z = i) = (1 - q)\,p_i + q\,p_{(i-1) \bmod c}.$$

This is equivalent to $Z \sim (1 - q)\,P_X + q\,P_{(X+1) \bmod c}$, a mixture of $X$ with probability $1 - q$ and a shifted version of $X$ with probability $q$. So adding Bernoulli noise to a categorical variable leads to a convex mixture of categorical distributions.
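A quick numerical check of this identity, with assumed values q = 0.3 and c = 3:

import numpy as np

rng = np.random.default_rng(0)
q, p = 0.3, np.array([0.5, 0.3, 0.2])
# The stated convex mixture: original mass with weight 1 - q plus the
# one-step-shifted mass with weight q.
mixture = (1 - q) * p + q * np.roll(p, 1)

draws = rng.choice(3, size=100_000, p=p)
z = (draws + rng.binomial(1, q, size=draws.size)) % 3
print(mixture)                  # analytic mixture
print(np.bincount(z) / z.size)  # empirical frequencies, approximately equal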

Weak convergence of encoder-decoder

Let $g$ be a function which acts pointwise on $X$; then the expected reconstruction under $\hat{X} = \mathrm{dec}(\mathrm{enc}(X))$ is given by

$$\mathbb{E}[g(\hat{X})] = \sum_{i=1}^{c} g(m_i)\, P(X \in S_i),$$

where the $S_i$ are segmentations of the real line based on the midpoints between subsequent cluster centers. By seeing that the width of each $S_i$ shrinks to zero as $c \to \infty$, the sum above converges to a Riemann integral,

$$\mathbb{E}[g(\hat{X})] \to \int g(x)\,p(x)\,\mathrm{d}x = \mathbb{E}[g(X)].$$

This shows that $\hat{X} \xrightarrow{d} X$ as $c \to \infty$. Since the noise process defined earlier only shifts mass by a single, vanishing segment, $\mathrm{dec}(\widetilde{\mathrm{enc}}(X)) \xrightarrow{d} X$ holds similarly. Note that this proof only holds because the support of $X$ has a total order and $g$ is Riemann integrable.

Footnotes

  1. For an introduction to this concept, the Wikipedia article on rate-distortion theory is quite good.

  2. Shannon's source coding theorem tells us that a sequence of $N$ i.i.d. random variables, each with entropy $H(X)$, cannot be perfectly compressed into fewer than $N H(X)$ bits.

  3. Consider the CDF of a distribution, $F(x)$, with its inverse $F^{-1}$ being the quantile function. A uniform index set can then be pushed through the distribution via $F^{-1}$ to obtain probability-weighted midpoints.

  4. This plot is actually better conveyed in three dimensions, where the choice of $\epsilon$ varies over the new direction. Here we have picked an arbitrary threshold to simplify the plots.

  5. Actually, the degree to which this is all that well-posed is one of the points of contention in the following post.