
XAI Methods - Integrated Gradients

Published 18 Apr 2022 · 20 min read

What is the Integrated Gradients method?

Integrated Gradients (IG) [1] is a method proposed by Sundararajan et al. that is based on two axioms: Sensitivity and Implementation Invariance. The authors argue that these two axioms should be satisfied by all attribution methods. The two axioms are defined as follows:

Definition 1 (Axiom: Sensitivity) An attribution method satisfies Sensitivity if for every input and baseline that differ in one feature but have different predictions, then the differing feature should be given a non-zero attribution. If the function implemented by the deep network does not depend (mathematically) on some variable, then the attribution to that variable is always zero.

Definition 2 (Axiom: Implementation Invariance) Two networks are functionally equivalent if their outputs are equal for all inputs, despite having very different implementations. Attribution methods should satisfy Implementation Invariance, i.e., the attributions are always identical for two functionally equivalent networks.

The Sensitivity axiom introduces the baseline, which is an important part of the IG method. A baseline is defined as an absence of a feature in the input. This definition is confusing, especially when dealing with complex models, but the baseline can be interpreted as “an input from the input space that produces a neutral prediction”. A baseline can also be treated as an input for producing a counterfactual explanation by checking how the model behaves when moving from the baseline to the original image.

The authors give a black image as an example of a baseline for an object recognition network. I personally think that a black image doesn’t represent an “absence of features”, because this absence should be defined based on the manifold that represents the data. A black image could work as an absence of features for one network but not for a network trained on a different dataset, where the network might actually use black pixels in its predictions.

Figure 1: f(x) = 1 − ReLU(1 − x), where x ∈ [0, 2]

The authors argue that gradient-based methods violate Sensitivity (Def. 1). As an example, we are presented with the case of a simple function, f(x) = 1 − ReLU(1 − x) (see Fig. 1), and the baseline x = 0. When trying to generate an attribution for x = 2, the function’s output changes from 0 to 1, but after x = 1 the function becomes flat, which causes the gradient to equal zero. Obviously, x contributes to the result, but because the function is flat at the input we are testing, the attribution is zero, and Sensitivity is broken. Sundararajan et al. argue that breaking Sensitivity causes gradients to focus on irrelevant features.
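To see this saturation numerically, here is a minimal sketch (PyTorch is my choice here; the variable names are just for illustration) that evaluates the toy function and its gradient at the input x = 2:

```python
import torch

# Toy function from Figure 1: f(x) = 1 - ReLU(1 - x)
def f(x):
    return 1 - torch.relu(1 - x)

baseline = torch.tensor(0.0)
x = torch.tensor(2.0, requires_grad=True)

out = f(x)
out.backward()

print(f(baseline).item())  # 0.0 -> prediction at the baseline
print(out.item())          # 1.0 -> prediction at the input
print(x.grad.item())       # 0.0 -> the plain gradient assigns zero attribution
```

The output changes from 0 to 1 between the baseline and the input, yet the gradient at the input is zero, which is exactly the Sensitivity violation described above.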

How is IG calculated?

In the IG definition we have a function F representing our model, an input x ∈ ℝ^n (x is in ℝ^n because this is a general definition of IG, not a CNN-specific one), and a baseline x' ∈ ℝ^n. We assume a straight-line path between x' and x and compute gradients along that path. The integrated gradient along the i-th dimension is defined as:

Equation 1
IntegratedGrads_{i}(x) ::= (x_{i} - x'_{i}) \times \int_{\alpha=0}^{1} \frac{\partial F(x' + \alpha \times (x - x'))}{\partial x_{i}} \, d\alpha

The original definition of Integrated Gradients cannot be computed exactly (because of the integral). Therefore, the implementation of the method approximates the value by replacing the integral with a summation:

Equation 2
IntegratedGrads^{approx}_{i}(x) ::= (x_{i} - x'_{i}) \times \sum_{k=1}^{m} \frac{\partial F(x' + \frac{k}{m} \times (x - x'))}{\partial x_{i}} \times \frac{1}{m}

In the approximated calculation (Eq. 2), m defines the number of interpolation steps. As an example, we can visualize the interpolations with m equal to five (see Fig. 2). In practice, the number of interpolation steps is usually between 20 and 300, with 50 being the most common value. The results of applying IG can be seen in Figure 3.

Figure 2: Five-step interpolation between the baseline x' and the input image x. The first image on the left (α = 0.0) is not a part of the interpolation process. Image source: Stanford Dogs [5]
Figure 3: Visualization of the saliency map generated by IG for the class saint_bernard. The result is averaged over 50 interpolation steps. Image source: Stanford Dogs [5]
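The approximation in Eq. 2 is straightforward to implement. Below is a minimal sketch (again in PyTorch; the function and parameter names are mine, not from the original paper) that applies it to the toy function from Figure 1, where the exact attribution should equal f(x) − f(x') = 1:

```python
import torch

def f(x):
    # Toy function from Figure 1: f(x) = 1 - ReLU(1 - x)
    return 1 - torch.relu(1 - x)

def integrated_gradients(func, x, baseline, m=50):
    """Approximate IG (Eq. 2): average the gradients taken along the
    straight-line path from the baseline x' to the input x and scale
    the result by (x - x')."""
    total_grads = torch.zeros_like(x)
    for k in range(1, m + 1):
        # Interpolated point: x' + (k/m) * (x - x')
        point = baseline + (k / m) * (x - baseline)
        point.requires_grad_(True)
        func(point).sum().backward()
        total_grads += point.grad
    return (x - baseline) * total_grads / m

x = torch.tensor([2.0])
baseline = torch.tensor([0.0])
print(integrated_gradients(f, x, baseline, m=50))  # close to 1.0 = f(x) - f(baseline)
```

For an image model, func would be the scalar output for the target class (e.g. the saint_bernard score), x the input image tensor, and baseline the black image.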

Baselines

In recent years there has been a discussion about replacing the constant-color baseline with an alternative. One of the first propositions was to add Gaussian noise to the original image (see Fig. 4a). The Gaussian baseline was introduced by Smilkov et al. [2] and uses a Gaussian distribution centered on the current image with a variance σ. This variance is the only parameter to tune.

Another baseline is called the Blur baseline and uses a multi-dimensional Gaussian filter (see Fig. 4b). The idea, presented by Fong and Vedaldi [3], is that a blurred version of the image is a domain-specific way to represent missing information and is therefore a valid baseline according to the original definition.

Inspired by the work of Fong and Vedaldi, Sturmfels et al. [4] introduced another version of the baseline that is based on the original image. This baseline is called the Maximum Distance baseline and is constructed as the image with the largest L1 distance from the original image (see Fig. 4c). The problem with the maximum distance is that it doesn’t represent the “absence of a feature”: it still contains information about the original image, just in a different form. In the same work, Sturmfels et al. created another baseline called the Uniform baseline. This time, the baseline doesn’t require the input image at all and simply draws pixel values from a uniform distribution (see Fig. 4d).

The problem of selecting a baseline is not solved, and for any further experiments, the “black image” baseline is going to be used. The sketch after Figure 4 shows roughly how these alternative baselines could be constructed.

Figure 4: Alternative baselines for IG. The Gaussian baseline uses σ = 0.5 to generate noise. The Blur baseline uses σ = 5 in a Gaussian filter. All values are clipped to [0, 1] to stay within the range of the scaled colors. Image source: Stanford Dogs [5]

Further reading

I’ve decided to create a series of articles explaining the most important XAI methods currently used in practice. Here is the main article: XAI Methods - The Introduction

References:

  1. M. Sundararajan, A. Taly, Q. Yan. Axiomatic attribution for deep networks. International Conference on Machine Learning, pages 3319–3328. PMLR, 2017.
  2. D. Smilkov, N. Thorat, B. Kim, F. Viégas, M. Wattenberg. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
  3. R. C. Fong, A. Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. Proceedings of the IEEE International Conference on Computer Vision, pages 3429–3437, 2017.
  4. P. Sturmfels, S. Lundberg, S.-I. Lee. Visualizing the impact of feature attribution baselines. Distill, 5(1):e22, 2020.
  5. A. Khosla, N. Jayadevaprakash, B. Yao, L. Fei-Fei. Stanford dogs dataset. https://www.kaggle.com/jessicali9530/stanford-dogs-dataset, 2019. Accessed: 2021-10-01.

Citation

Kemal Erdem, (Apr 2022). "XAI Methods - Integrated Gradients". https://erdem.pl/2022/04/xai-methods-integrated-gradients
or
@article{erdem2022xaiMethodsIntegratedGradients,
    title   = "XAI Methods - Integrated Gradients",
    author  = "Kemal Erdem",
    journal = "https://erdem.pl",
    year    = "2022",
    month   = "Apr",
    url     = "https://erdem.pl/2022/04/xai-methods-integrated-gradients"
}