On selection bias and sample calibration
When using data from a sample of a population of interest to make inferences about that population, it’s important to be aware of the sampling process, i.e. how observed units in the sample were chosen.
The goals of this post are:
- To briefly introduce probability sampling;
- To talk about systematic selection biases and how we can reason about them;
- To expose the risk of being misled by selection bias when conclusions are driven by common statistical tools;
- Finally, to demonstrate what you can do about them and help you to understand the assumptions and consequent limitations of such tools.
This post is for anyone who is interested in these issues and has a basic understanding of probability and statistics.
I’ll discuss these topics in a simple yet realistic setup. I’ll take advantage of the simplicity to make a deeper analysis that’s hopefully useful in understanding the technique as applied in the wild.
The problem
We are interested in a finite population, or universe, $\mathcal{U}$, of elements, or units, and only a sample $\mathcal{S}$ of elements taken from $\mathcal{U}$ is available. We want to use $\mathcal{S}$ to provide a guess, or estimate, for the populational quantity, or parameter,

$$P(A) = \frac{N_A}{N},$$

the proportion of elements of $\mathcal{U}$ that belong to a set $A \subseteq \mathcal{U}$. We write $N_A$ and $N$ for the number of elements of $\mathcal{U}$ in $A$ and in $\mathcal{U}$ itself, respectively. You can also think of $P(A)$ as the probability of sampling a random element from $\mathcal{U}$ and then finding out that it belongs to $A$. These two interpretations (the “share of something” in the population and the “probability of something” being sampled) can be invoked interchangeably throughout the text whenever you see $P(\cdot)$. Also, recall the definition of conditional distributions:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)},$$

where $A$ and $B$ are subsets of $\mathcal{U}$.
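For example, with made-up counts: if $N = 1000$, $N_B = 400$ and $N_{A \cap B} = 100$, then $P(B) = 0.4$, $P(A \cap B) = 0.1$ and

$$P(A \mid B) = \frac{0.1}{0.4} = 0.25,$$

i.e. a quarter of the units in $B$ belong to $A$.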
The population in one figure
That situation can be visualized below, where the whole rectangle is $\mathcal{U}$, and we also add that it can be partitioned into two disjoint sets $B$ and $B^c$ (just the complement of $B$ in $\mathcal{U}$), which are going to be useful in a minute.
You can think of $B$ as being any other characteristic of the elements of $\mathcal{U}$ that may be important to understand those that belong to $A$. Both $A$ and $B$ could be some thresholding on age, years of education or gender, in case we are talking about humans.
Probability sampling
We say that $\mathcal{S}$ is a probability sample if it was obtained by a controlled (i.e. known) randomization procedure that attaches to each element $u \in \mathcal{U}$ an inclusion probability $\pi_u$ of it being chosen to be part of $\mathcal{S}$.
For a set $C \subseteq \mathcal{U}$, we define the binary variable $\mathbb{1}_C(u)$ as being 1 when $u \in C$ and 0 otherwise.
Note that $\mathbb{1}_A(u)$ is just a constant that can be perfectly known as long as we have access to the unit $u$. On the other hand, we only get to know the values $\mathbb{1}_{\mathcal{S}}(u)$ once we get $\mathcal{S}$, and repeating the same randomized sampling process may yield different values for the same $u$. If the sampling procedure itself is known, then what we know a priori is the inclusion probability $\pi_u = P(u \in \mathcal{S})$. That is, prior to sampling, $\mathbb{1}_{\mathcal{S}}(u)$ is a $\mathrm{Bernoulli}(\pi_u)$ random variable.
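As a quick sketch of this constant-versus-random distinction, here’s a small simulation (Python with NumPy; the sizes are made up): redrawing an SRS of size $n$ from $N$ units many times, the inclusion indicator of a fixed unit behaves like a $\mathrm{Bernoulli}(n/N)$ variable.

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, u = 1_000, 100, 0  # made-up population size, sample size, and a fixed unit u

# Redraw the sample many times and record 1_S(u) for the same unit u.
draws = [(rng.choice(N, size=n, replace=False) == u).any() for _ in range(5_000)]

print(f"empirical P(u in S) = {np.mean(draws):.3f}  (theoretical pi_u = n/N = {n/N})")
```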
Now we may write

$$P(A) = \frac{1}{N} \sum_{u \in \mathcal{U}} \mathbb{1}_A(u).$$
Let’s try to employ the sample analogue of the above equation to estimate $P(A)$ from $\mathcal{S}$ (just because it is pretty intuitive):

$$\hat{P}(A) = \frac{1}{n} \sum_{u \in \mathcal{S}} \mathbb{1}_A(u),$$

where $n$ is the number of elements in $\mathcal{S}$.
Note that $\hat{P}(A)$ is a random variable because it depends on $\mathcal{S}$, which prior to sampling we do not know. Let’s make the dependence on the random process explicit in terms of the variables $\mathbb{1}_{\mathcal{S}}(u)$ (note the change from $\mathcal{S}$ to $\mathcal{U}$ in the second summation):

$$\hat{P}(A) = \frac{1}{n} \sum_{u \in \mathcal{S}} \mathbb{1}_A(u) = \frac{1}{n} \sum_{u \in \mathcal{U}} \mathbb{1}_A(u)\, \mathbb{1}_{\mathcal{S}}(u).$$
Is this an accurate estimator of $P(A)$? Since the $\mathbb{1}_A(u)$s are constants, it’s easy to compute the expectation of this random variable:

$$E[\hat{P}(A)] = \frac{1}{n} \sum_{u \in \mathcal{U}} \mathbb{1}_A(u)\, E[\mathbb{1}_{\mathcal{S}}(u)] = \frac{1}{n} \sum_{u \in \mathcal{U}} \mathbb{1}_A(u)\, \pi_u.$$
When all units are equally likely to be sampled, we have that $\pi_u = n/N$ for all $u \in \mathcal{U}$. In this case, $\mathcal{S}$ is called a simple random sample (SRS) and the equation collapses into

$$E[\hat{P}(A)] = \frac{1}{n} \sum_{u \in \mathcal{U}} \mathbb{1}_A(u)\, \frac{n}{N} = \frac{N_A}{N} = P(A).$$
That is, under a SRS, the sample proportion of cases in $A$ is a nice guess of its populational counterpart. When an equation like this holds true, we say that the estimator is centered (i.e. unbiased) for the parameter.
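To see this centering in action, here’s a minimal simulation sketch (Python with NumPy; the population, the set $A$ and all sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

N, n = 10_000, 500                 # made-up population and sample sizes
in_A = rng.random(N) < 0.3         # indicator 1_A(u): ~30% of units belong to A
P_A = in_A.mean()                  # the true parameter P(A) = N_A / N

# Repeat the SRS draw many times and average the sample proportions.
estimates = [
    in_A[rng.choice(N, size=n, replace=False)].mean()
    for _ in range(2_000)
]

print(f"P(A)              = {P_A:.4f}")
print(f"mean of estimates = {np.mean(estimates):.4f}")  # close to P(A): centered
```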
The Horvitz-Thompson estimator
But what if $\mathcal{S}$ is not a SRS? What if the elements in $\mathcal{U}$ are not equally likely to be in $\mathcal{S}$?
An important observation is that both $P(A)$ and $\hat{P}(A)$ are linear combinations of the $\mathbb{1}_A(u)$s. Let’s write it down:

$$\hat{P}_w(A) = \sum_{u \in \mathcal{U}} w_u\, \mathbb{1}_A(u)\, \mathbb{1}_{\mathcal{S}}(u),$$

where the $w_u$s are constants. In the SRS case, we just saw that $w_u = 1/n$ is a good choice because then $w_u \pi_u = 1/N$. In general, we observe that the expectation of this linear estimator is

$$E[\hat{P}_w(A)] = \sum_{u \in \mathcal{U}} w_u\, \mathbb{1}_A(u)\, \pi_u.$$

If we set $w_u = \frac{1}{N \pi_u}$, then $E[\hat{P}_w(A)] = P(A)$ and $\hat{P}_w(A)$ is centered for $P(A)$. This is known as the Horvitz-Thompson estimator (HTE). It assumes that we know the randomization process that yields $\mathcal{S}$ and therefore know the inclusion probabilities $\pi_u$ up to reasonable accuracy for all $u \in \mathcal{U}$.
Notice that probability sampling designs may have different inclusion probabilities for different units in the population (SRS is not the only probability sampling design). That is simply a design choice that is commonly employed as a tool to improve the precision of the estimates under certain circumstances.
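To make the HTE concrete, here’s a sketch under Poisson sampling, a design where each unit is included independently with its own known probability $\pi_u$ (all numbers are made up; note that here the sample size is itself random):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000
in_A = rng.random(N) < 0.3                        # indicator 1_A(u)

# Known but unequal inclusion probabilities: units in A are more likely sampled.
pi = np.where(in_A, 0.10, 0.02)

naive, ht = [], []
for _ in range(2_000):
    in_S = rng.random(N) < pi                     # 1_S(u) ~ Bernoulli(pi_u)
    naive.append(in_A[in_S].mean())               # plain sample proportion
    ht.append(np.sum(in_A[in_S] / pi[in_S]) / N)  # HTE: weights w_u = 1/(N pi_u)

print(f"P(A)             = {in_A.mean():.4f}")
print(f"naive proportion = {np.mean(naive):.4f}  (biased upward)")
print(f"Horvitz-Thompson = {np.mean(ht):.4f}  (centered)")
```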
But what if we do not know the $\pi_u$s? Equivalently, what if we do not know the process that yields $\mathcal{S}$? Well, welcome to most real-world data analysis problems.
Calibration
In practice, we rarely know the structure of the sampling process. However, when we do know it, we saw that the HTE proposes to weight the data from unit $u$ with $w_u = \frac{1}{N \pi_u}$, and this choice produces a centered estimator. That’s nicely intuitive, since we may interpret $1/\pi_u$ as the number of units in $\mathcal{U}$ that the unit $u$ will be responsible for representing in case it’s sampled.
The baseline approach is usually the sample proportion $P(A \mid \mathcal{S})$, which we know to be our best guess of $P(A)$ under SRS. How can we improve it?
Let’s suppose that knowing that a unit belongs to $B$ is a relevant piece of information to guess whether or not it is also in $A$. In other words, $A$ and $B$ are correlated, and then the distribution $P(A \mid B)$ is different from $P(A \mid B^c)$. We’ll denote this hypothesis by $H$.
From the law of total probability, we have

$$P(A) = P(A \mid B)\, P(B) + P(A \mid B^c)\, P(B^c).$$
A similar equation is also true when we condition on the sample as well:

$$P(A \mid \mathcal{S}) = P(A \mid B \cap \mathcal{S})\, P(B \mid \mathcal{S}) + P(A \mid B^c \cap \mathcal{S})\, P(B^c \mid \mathcal{S}).$$
It may be useful to visualize these equations in the figure at the beginning. Note that $P(A \mid \mathcal{S})$ is just the baseline estimate, the fraction of $A$ in $\mathcal{S}$ once $\mathcal{S}$ is realized. Also, it’s informative to evaluate this last equation for the SRS case, just to convince ourselves that it makes sense. Under SRS, each sample fraction is centered for its populational counterpart, so

$$E[P(A \mid \mathcal{S})] \approx P(A \mid B)\, P(B) + P(A \mid B^c)\, P(B^c) = P(A),$$

which is right! We’ve used the fact that $B$ and $B^c$ form a partition of $\mathcal{U}$, and we removed the conditioning on $\mathcal{S}$ by taking expectations, since prior to sampling we have a random variable.
What I want to point out now is how we may improve this baseline estimator $P(A \mid \mathcal{S})$ given that we do not know the structure of the sampling process, but $H$ is true and we have access to auxiliary information on $B$ in the form of an alternative estimate $\tilde{P}(B)$ of $P(B)$.
The (observed) bias of our baseline estimator is $P(A \mid \mathcal{S}) - P(A)$. Hopefully, it’s clear that it has two sources:
- $P(B \mid \mathcal{S}) \neq P(B)$: under $H$, over- or under-representation of $B$ in $\mathcal{S}$ can be a problem, since we want to estimate $P(A)$;
- $P(A \mid B \cap \mathcal{S}) \neq P(A \mid B)$ (similarly for $B^c$): even when the previous point is OK, we can still suffer if there is selection bias within $B$ and $B^c$ with respect to $A$.
It’s a good exercise to verify that, in expectation, both inequalities above turn into equalities under SRS. It’s also important to notice that these conditions only cause problems when we use the baseline fraction-of-$A$-in-$\mathcal{S}$ estimator while they are present. In general, they can be met by probability sampling designs, and that would not be a problem, because we would be aware of it and could employ, say, the HTE. What we want to analyze here are the disparities between a naive guess and possible underlying populational realities.
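For concreteness, here’s a tiny illustration with made-up numbers where only the first source of bias is active. Suppose $B$ is over-represented in the sample, with $P(B \mid \mathcal{S}) = 0.8$ while $P(B) = 0.5$, but the within-group fractions are undistorted: $P(A \mid B \cap \mathcal{S}) = P(A \mid B) = 0.6$ and $P(A \mid B^c \cap \mathcal{S}) = P(A \mid B^c) = 0.2$. Then

$$P(A) = 0.6 \cdot 0.5 + 0.2 \cdot 0.5 = 0.4, \qquad P(A \mid \mathcal{S}) = 0.6 \cdot 0.8 + 0.2 \cdot 0.2 = 0.52,$$

so the baseline estimate overshoots by $0.12$. Plugging the correct $\tilde{P}(B) = 0.5$ into the sample decomposition recovers $P(A) = 0.4$ exactly, which is what the substitution below does.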
The idea is simply to substitute $P(B \mid \mathcal{S})$ by the auxiliary estimate $\tilde{P}(B)$, with $\tilde{P}(B^c) = 1 - \tilde{P}(B)$, to obtain

$$\hat{P}_{cal}(A) = P(A \mid B \cap \mathcal{S})\, \tilde{P}(B) + P(A \mid B^c \cap \mathcal{S})\, \tilde{P}(B^c).$$
This technique is known as calibration. As discussed, it’s expected to work only as well as the sample fractions $P(A \mid B \cap \mathcal{S})$ and $P(A \mid B^c \cap \mathcal{S})$ approximate the populational conditional distributions $P(A \mid B)$ and $P(A \mid B^c)$.
This estimator is also linear in the $\mathbb{1}_A(u)\, \mathbb{1}_{\mathcal{S}}(u)$s, and the HTE coefficient $\frac{1}{N \pi_u}$ is being estimated as $\frac{\tilde{P}(B)}{n_B}$ if $u \in B$, and as $\frac{\tilde{P}(B^c)}{n_{B^c}}$ otherwise, where $n_B$ and $n_{B^c}$ are the numbers of sampled units in $B$ and $B^c$. Also, whenever $\tilde{P}(B)$ is a good approximation, i.e. $\tilde{P}(B) \approx N_B / N$, we get

$$\hat{\pi}_u = \frac{n_B}{N\, \tilde{P}(B)} \approx \frac{n_B}{N_B},$$

and thus we are estimating the inclusion probability of a unit $u \in B$ as the probability that a random unit from $B$ will be found to be in $\mathcal{S}$ (take a second to think about it).
That’s how our assumptions plus this calibration thing are effectively modeling the sampling process.
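Here’s a minimal end-to-end sketch of the whole story (Python with NumPy, made-up numbers): units in $B$ are over-sampled, we pretend not to know the inclusion probabilities, and we only use the auxiliary knowledge $\tilde{P}(B)$:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100_000
in_B = rng.random(N) < 0.5                    # 1_B(u): B covers ~half the population
# A correlates with B (hypothesis H): P(A|B) = 0.6, P(A|B^c) = 0.2.
in_A = rng.random(N) < np.where(in_B, 0.6, 0.2)

# Sampling process, unknown to the analyst: B is heavily over-represented in S.
pi = np.where(in_B, 0.08, 0.01)
in_S = rng.random(N) < pi

S_A, S_B = in_A[in_S], in_B[in_S]

naive = S_A.mean()                            # baseline: P(A | S)

# Calibration: replace P(B | S) with the auxiliary estimate P~(B).
P_B_tilde = 0.5                               # assumed known from an external source
calibrated = (S_A[S_B].mean() * P_B_tilde                # P(A | B ∩ S) P~(B)
              + S_A[~S_B].mean() * (1 - P_B_tilde))      # P(A | B^c ∩ S) P~(B^c)

print(f"P(A)       = {in_A.mean():.4f}")
print(f"naive      = {naive:.4f}  (pulled toward P(A | B))")
print(f"calibrated = {calibrated:.4f}")
```

Note that within each of $B$ and $B^c$ the sampling above is uniform, so the second source of bias from the list above is absent; if it weren’t, calibrating on $B$ alone could not fix it.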
It seems reasonable to argue that, if we have no prior information on the $\pi_u$s, the expected statistical performance of $\hat{P}_{cal}(A)$ is equal to or better than that of $P(A \mid \mathcal{S})$ whenever $\tilde{P}(B)$ is a more accurate estimate of $P(B)$ than $P(B \mid \mathcal{S})$.
Summing up
We’ve learned that
- Under SRS, the commonly used sample proportion $P(A \mid \mathcal{S})$ is a good guess for the populational proportion $P(A)$;
- When that is not the case but we know the sampling process, the general form of the proportion estimator is given by the Horvitz-Thompson estimator, in which each unit $u$ is assigned a weight proportional to $1/\pi_u$;
- When we do not know the sampling process but we have access to information on auxiliary variables that correlate with the variable we want to understand, we can use this extra info to calibrate our naive sample proportion and expect its performance to improve a bit, depending on how strong that correlation is.
In a future post, I hope to show the technique in action with real data.
Thanks for reading.