Bootstrapping vs. jackknife

Ly Nguyenova
4 min read · Feb 13, 2020


“One of the commonest problems in statistics is, given a series of observations x1, x2, …, xn, to find a function of these, tn(x1, x2, …, xn), which should provide an estimate of an unknown parameter θ.” — M. H. Quenouille (1956)

A problem with estimating these unknown parameters is that we can never be certain that our estimates are in fact the true parameters of the population. How can we be sure that they are not biased? How can we know how far our statistics are from the truth?

This is where the bootstrap and the jackknife come in. Both are statistical tools used to investigate the bias and standard errors of estimators. Both are resampling techniques: they generate new samples from the original data, which is assumed to be representative of the population.

Resampling methods are used:

  • as a substitute for traditional methods
  • when the distribution of the underlying population is unknown
  • when traditional methods are hard or impossible to apply

What is Bootstrap?

The bootstrap is a method introduced by B. Efron in 1979. It uses sampling with replacement to estimate the sampling distribution of the desired statistic.

Source: Ming Cui et al (2017)

The main purpose of the bootstrap is to evaluate the variance of an estimator. Other applications might be:

  • to estimate confidence intervals and standard errors for an estimator (see the sketch below)
  • to estimate the precision of an estimator θ
  • to deal with non-normally distributed data
  • to determine sample sizes for experiments
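As a rough illustration of the first two points, here is a minimal sketch in plain NumPy. It assumes the statistic of interest is the sample mean and uses synthetic exponential data and 10,000 resamples, all of which are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative data; in practice this would be your observed sample.
data = rng.exponential(scale=2.0, size=50)

n_boot = 10_000
boot_stats = np.empty(n_boot)

for b in range(n_boot):
    # Resample with replacement, same size as the original sample.
    resample = rng.choice(data, size=data.size, replace=True)
    boot_stats[b] = resample.mean()

std_error = boot_stats.std(ddof=1)                        # bootstrap standard error
ci_low, ci_high = np.percentile(boot_stats, [2.5, 97.5])  # 95% percentile CI

print(f"estimate:  {data.mean():.3f}")
print(f"std error: {std_error:.3f}")
print(f"95% CI:    ({ci_low:.3f}, {ci_high:.3f})")
```

The percentile interval shown is the simplest bootstrap confidence interval; refinements such as bias-corrected intervals exist but are beyond the scope of this post.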

Pros and cons of Bootstrap

Pros: an excellent method for estimating the distribution of a statistic, it often gives better results than the traditional normal approximation and works well with small samples.

Cons: it does not perform well if the statistic is not smooth, and it is not well suited to dependent data, missing data, censoring, or data with outliers.

Jackknife

The jackknife was first introduced by Quenouille to estimate the bias of an estimator. It was later expanded by John Tukey to include variance estimation. The jackknife works by sequentially deleting one observation from the data set and recomputing the desired statistic each time.

How does the jackknife work?

Unlike the bootstrap, the jackknife is a deterministic, leave-one-out procedure. A parameter is first estimated from the whole dataset; it is then repeatedly re-estimated, each time leaving out one observation. Each estimate derived from such a smaller sample is called a partial estimate. A pseudo-value is then computed for each left-out observation as n times the whole-sample estimate minus (n - 1) times the partial estimate.

These pseudo-values reduce the (linear) bias of the partial estimates, because the bias is eliminated by the subtraction of the two estimates. The pseudo-values are then used in place of the original values to estimate the parameter of interest, and their standard deviation is used to estimate its standard error, which can in turn be used for null hypothesis testing and for computing confidence intervals. The jackknife is strongly related to the bootstrap; in fact, the jackknife is often a linear approximation of the bootstrap.
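A minimal sketch of this procedure (plain NumPy, with the sample mean standing in for the statistic of interest and synthetic data used purely for illustration) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=30)   # illustrative sample
n = data.size

theta_full = data.mean()                     # whole-sample estimate

# Partial (leave-one-out) estimates: drop observation i, recompute the statistic.
partials = np.array([np.delete(data, i).mean() for i in range(n)])

# Pseudo-values: n * whole-sample estimate - (n - 1) * partial estimate.
pseudo = n * theta_full - (n - 1) * partials

jack_estimate = pseudo.mean()                 # bias-reduced estimate
jack_se = pseudo.std(ddof=1) / np.sqrt(n)     # jackknife standard error

print(f"full-sample estimate: {theta_full:.3f}")
print(f"jackknife estimate:   {jack_estimate:.3f}")
print(f"jackknife std error:  {jack_se:.3f}")
```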

Models such as neural networks, other machine learning algorithms, and multivariate analysis techniques usually have a large number of parameters and are therefore highly prone to over-fitting. The jackknife can estimate the actual predictive power of such models by predicting the value of the dependent variable for each observation as if that observation were new, i.e., using a model fitted without it. This is why it is described as a procedure for obtaining predictions that are not flattered by over-fitting, and for minimising the risk of over-fitting.
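To make that concrete, here is a small sketch in which an ordinary least-squares fit stands in for whatever model is being assessed; each observation is predicted from a fit that excludes it, and the resulting error is compared with the in-sample error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative regression data: y depends linearly on x plus noise.
x = rng.uniform(0, 10, size=40)
y = 3.0 + 1.5 * x + rng.normal(scale=2.0, size=40)

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
n = len(y)
loo_pred = np.empty(n)

for i in range(n):
    mask = np.arange(n) != i                  # leave observation i out
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    loo_pred[i] = X[i] @ coef                 # predict the held-out observation

in_sample_mse = np.mean((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
loo_mse = np.mean((y - loo_pred) ** 2)

print(f"in-sample MSE:     {in_sample_mse:.3f}")
print(f"leave-one-out MSE: {loo_mse:.3f}")    # a more honest measure of predictive power
```

The leave-one-out error is typically larger than the in-sample error, and it is the more realistic estimate of how the model would perform on new observations.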

The main application of jackknife is to reduce bias and evaluate variance for an estimator. Other applications are:

  • to find the standard errors of a statistic
  • to estimate precision for an estimator θ

Pros and cons of Jackknife

Pros: computationally simpler than bootstrapping, and more orderly, since it always works through the same n leave-one-out samples.

Cons: still fairly computationally intensive; does not perform well for non-smooth statistics (such as the median) or nonlinear statistics; requires observations to be independent of each other, which means it is not suitable for time series analysis.

Differences between Bootstrapping and Jackknife

The main difference is that the jackknife is an older method and is less computationally expensive, while the bootstrap is more computationally expensive but more popular and generally gives more precision.

  • Bootstrapping is roughly ten times more computationally intensive than the jackknife
  • Bootstrap is conceptually simpler than Jackknife
  • Jackknife does not perform as well as Bootstrap
  • Bootstrapping introduces a “cushion error”
  • Jackknife is more conservative, producing larger standard errors
  • Jackknife produces the same results every time, while Bootstrapping gives slightly different results on each run because the resampling is random

When to use which

  • Jackknife performs better for confidence intervals of pairwise agreement measures
  • Bootstrap performs better for skewed distributions
  • Jackknife is more suitable for small samples of original data

References

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1015.9344&rep=rep1&type=pdf

https://projecteuclid.org/download/pdf_1/euclid.aos/1176344552

https://towardsdatascience.com/an-introduction-to-the-bootstrap-method-58bcb51b4d60
