# Confidence Graphs: Representing Model Uncertainty in Deep Learning

**Hendrik Jacob van Veen** hendrik.vanveen@nubank.com.br •
https://mlwave.com

**Matheus Facure** matheus.facure@nubank.com.br •
https://matheusfacure.github.io/

## Introduction

Variational inference (MacKay, 2003) gives a computationally tractable measure of uncertainty/confidence/variance for machine learning models, including complex black-box models, like those used in the fields of gradient boosting (Chen et al, 2016) and deep learning (Schmidhuber, 2014).

The \(MAPPER\) algorithm (Singh et al, 2007) from Topological Data Analysis (Carlsson, 2009) turns data or function output into a graph (or simplicial complex), which is used for data exploration (Lum et al, 2009), error analysis (Carlsson et al, 2018), serving as input for higher-level machine learning algorithms (Hofer et al, 2017), and more.
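To make the construction concrete, here is a toy 1-D \(MAPPER\) sketch (illustrative only; it skips the within-interval clustering step and is not the KeplerMapper implementation used below): cover the lens values with overlapping intervals, form a node from the points falling in each interval, and connect nodes that share points.

```python
import numpy as np
from itertools import combinations

def mapper_sketch(lens, n_intervals=10, overlap=0.1):
    """Toy 1-D MAPPER: each overlapping interval of the lens becomes one
    node (within-interval clustering is omitted for brevity); two nodes
    are linked when they share at least one data point."""
    lo, hi = lens.min(), lens.max()
    w = (hi - lo) / n_intervals                 # base interval width
    nodes = []
    for i in range(n_intervals):
        a = lo + i * w - overlap * w            # stretch each end by the overlap fraction
        b = lo + (i + 1) * w + overlap * w
        members = set(np.where((lens >= a) & (lens <= b))[0])
        if members:
            nodes.append(members)
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]            # shared points -> edge
    return nodes, edges

nodes, edges = mapper_sketch(np.linspace(0, 1, 100), n_intervals=5, overlap=0.2)
```

On a linear lens this recovers a path graph, since only consecutive intervals share points.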

Dropout (Srivastava et al, 2014) can be viewed as an ensemble of many different sub-networks inside a single neural network, which, much like bootstrap aggregation of decision trees (Breiman, 1996), aims to combat overfitting. Viewed as such, dropout is applicable as a Bayesian approximation (Rubin, 1984) in the variational inference framework (Gal, 2016).
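The test-time procedure this implies is simple: keep dropout active and average many stochastic forward passes. A minimal NumPy sketch for a toy one-hidden-layer network (names and weights are illustrative, not the Keras model trained below):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.5, T=1000):
    """Monte-Carlo dropout sketch: dropout stays *on* at prediction
    time, and the spread of T stochastic forward passes estimates
    the model's uncertainty."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0)                   # ReLU hidden layer
        mask = rng.random(h.shape) > p              # fresh dropout mask per pass
        h = h * mask / (1 - p)                      # inverted dropout scaling
        preds.append(1 / (1 + np.exp(-(h @ W2))))   # sigmoid output
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)    # predictive mean, uncertainty

W1 = rng.normal(size=(4, 16))                       # toy random weights
W2 = rng.normal(size=(16, 1))
mean, std = mc_dropout_predict(rng.normal(size=(1, 4)), W1, W2)
```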

Interpretability is useful for detecting bias in, and debugging errors of, machine learning models. Many methods exist, such as tree paths (Saabas, 2014), saliency maps, permutation feature importance (Altmann et al, 2010), and locally fitted white-box models (van Veen, 2015) (Ribeiro et al, 2016). More recent efforts aim to combine a variety of methods (Korobov et al, 2016) (Olah et al, 2018).

## Motivation

Error analysis surfaces different subsets/types of the data where a model makes fundamental errors. When building policies and making financial decisions based on the output of a model, it is useful to study not only the errors of a model but also its confidence:

- Correct, but low-confidence, predictions for a cluster of data tell us where to focus our active learning (Dasgupta et al, 2009) and data collection efforts, so as to make the model more certain.
- Incorrect, but high-confidence, predictions surface fundamental error types that can more readily be fixed by a correction layer (Schapire, 1999) or by redoing feature engineering (Guyon et al, 2006).
- Every profit-maximizing model has a prediction threshold where a decision is made (Hardt et al, 2016). However, given two equal predictions, the more confident prediction is preferred.
- Interpretability methods have focused either on explaining the model in general or on explaining a single sample. To our knowledge, not much focus has gone into a holistic view of the modeled data, including explanations for subsets of similar samples (for whatever pragmatic definition of “similar”, such as “similar age”, “similar spend”, or “similar transaction behavior”). The combination of interpretability and unsupervised exploratory analysis is attractive because it catches unexpected behavior early on, as opposed to acting on faulty model output and then digging down to find a cause.

## Experimental setup

We will use the MNIST dataset (LeCun et al, 1999), Keras (Chollet et al, 2015) with TensorFlow (Abadi et al, 2016), NumPy (van der Walt et al., 2011), Pandas (McKinney, 2010), Scikit-Learn (Pedregosa et al, 2011), Matplotlib (Hunter, 2007), and KeplerMapper (Saul et al, 2017).

- To classify between the digits 3 and 5, we train a Multi-Layer Perceptron (Ivakhnenko et al, 1965) with 2 hidden layers, backpropagation (LeCun et al, 1998), ReLU activation (Nair et al, 2010), the Adam optimizer (Kingma et al, 2014), dropout of 0.5, and a sigmoid output.
- We perform 1000 forward passes to get the standard deviation and variance ratio of our predictions, as per (Gal, 2016, page 51).
- Closely following the \(FiFa\) method from (Carlsson et al, 2018, page 4), we then apply \(MAPPER\) with the 2D filter function `[predicted probability(x), confidence(x)]` to project the data. We cover this projection with 10 intervals per dimension, with 80% overlap. We cluster with Ward-linkage agglomerative clustering (`n_clusters=2`) (Ward, 1963) and use the penultimate layer as the inverse \(X\). To guide exploration, we color the graph nodes by `mean absolute error(x)`.
- We also ask for predictions on the digit 4, which was never seen during training (Larochelle et al, 2008), to see how this influences the confidence of the network, and to compare the graphs output by KeplerMapper.
- For every graph node we show the original images. Binary classification on MNIST digits is easy enough that a simple interpretability method suffices to show what distinguishes a cluster from the rest of the data: we order each feature by z-score and highlight the top 10% of features (Singh, 2016).
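For intuition on the two uncertainty measures, here is a toy example (fabricated numbers, three samples by eight forward passes) computed with the same formulas used in the notebook cell below: the standard deviation measures the spread of the stochastic predictions, while the variance ratio measures how often a pass disagrees with the majority vote.

```python
import numpy as np

# Toy stochastic predictions: rows are samples, columns are forward passes.
y_stoch = np.array([
    [0.90, 0.80, 0.95, 0.85, 0.90, 0.92, 0.88, 0.91],  # confidently positive
    [0.60, 0.40, 0.70, 0.30, 0.55, 0.45, 0.65, 0.35],  # uncertain
    [0.10, 0.05, 0.08, 0.12, 0.09, 0.11, 0.07, 0.10],  # confidently negative
])

std = y_stoch.std(axis=1)                                # spread of the passes
mode = (np.mean(y_stoch > .5, axis=1) > .5).astype(int).reshape(-1, 1)
var_ratio = 1 - np.mean((y_stoch > .5) == mode, axis=1)  # disagreement with the mode
```

The confident rows get a variance ratio of 0 and a small standard deviation; the uncertain middle row gets the maximal variance ratio of 0.5.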

```
In [1]:
```

```
%matplotlib inline
import keras
from keras import backend as K
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam
import kmapper as km
import numpy as np
import pandas as pd
from sklearn import metrics, cluster, preprocessing
from matplotlib import pyplot as plt
plt.style.use("ggplot")
```

```
Using TensorFlow backend.
```

## Preparing Data

We create train and test data sets for the digits 3, 4, and 5.

```
In [2]:
```

```
# get the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_strange = X_train[y_train == 4]
y_strange = y_train[y_train == 4]
X_train = X_train[np.logical_or(y_train == 3, y_train == 5)]
y_train = y_train[np.logical_or(y_train == 3, y_train == 5)]
X_test = X_test[np.logical_or(y_test == 3, y_test == 5)]
y_test = y_test[np.logical_or(y_test == 3, y_test == 5)]
X_strange = X_strange[:X_test.shape[0]]
y_strange = y_strange[:X_test.shape[0]]
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)
X_strange = X_strange.reshape(-1, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_strange = X_strange.astype('float32')
X_train /= 255
X_test /= 255
X_strange /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(X_strange.shape[0], 'strange samples')
# convert class vectors to binary class matrices
y_train = (y_train == 3).astype(int)
y_test = (y_test == 3).astype(int)
y_mean_test = y_test.mean()
print(y_mean_test, 'y test mean')
```

```
11552 train samples
1902 test samples
1902 strange samples
0.5310199789695058 y test mean
```

## Model

The model is a basic 2-hidden-layer MLP with ReLU activations, the Adam optimizer, and a sigmoid output. Dropout is applied to every layer but the final one.

```
In [3]:
```

```
batch_size = 128
num_classes = 1
epochs = 10

model = Sequential()
model.add(Dropout(0.5, input_shape=(784,)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='sigmoid'))
model.summary()

model.compile(loss='binary_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])
```

```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dropout_1 (Dropout) (None, 784) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 262656
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 513
=================================================================
Total params: 665,089
Trainable params: 665,089
Non-trainable params: 0
_________________________________________________________________
```

## Fitting and evaluation

```
In [4]:
```

```
history = model.fit(X_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(X_test, y_test))

score = model.evaluate(X_test, y_test, verbose=0)
score
```

```
Train on 11552 samples, validate on 1902 samples
Epoch 1/10
11552/11552 [==============================] - 3s 217us/step - loss: 0.2627 - acc: 0.8872 - val_loss: 0.0800 - val_acc: 0.9679
Epoch 2/10
11552/11552 [==============================] - 2s 161us/step - loss: 0.1389 - acc: 0.9459 - val_loss: 0.0650 - val_acc: 0.9742
Epoch 3/10
11552/11552 [==============================] - 1s 128us/step - loss: 0.1092 - acc: 0.9592 - val_loss: 0.0417 - val_acc: 0.9879
Epoch 4/10
11552/11552 [==============================] - 1s 120us/step - loss: 0.0936 - acc: 0.9657 - val_loss: 0.0388 - val_acc: 0.9869
Epoch 5/10
11552/11552 [==============================] - 1s 117us/step - loss: 0.0800 - acc: 0.9681 - val_loss: 0.0325 - val_acc: 0.9921
Epoch 6/10
11552/11552 [==============================] - 1s 115us/step - loss: 0.0730 - acc: 0.9735 - val_loss: 0.0291 - val_acc: 0.9916
Epoch 7/10
11552/11552 [==============================] - 1s 113us/step - loss: 0.0685 - acc: 0.9760 - val_loss: 0.0284 - val_acc: 0.9926
Epoch 8/10
11552/11552 [==============================] - 1s 114us/step - loss: 0.0662 - acc: 0.9761 - val_loss: 0.0269 - val_acc: 0.9953
Epoch 9/10
11552/11552 [==============================] - 1s 116us/step - loss: 0.0643 - acc: 0.9753 - val_loss: 0.0265 - val_acc: 0.9932
Epoch 10/10
11552/11552 [==============================] - 1s 114us/step - loss: 0.0606 - acc: 0.9786 - val_loss: 0.0279 - val_acc: 0.9932
```

```
Out[4]:
```

```
[0.027948900684616387, 0.9931650893796005]
```

## Perform 1000 forward passes on the test set and calculate Variance Ratio and Standard Deviation

```
In [5]:
```

```
FP = 1000

# Keras function that runs the network with learning_phase=1, i.e. dropout ON.
predict_stochastic = K.function([model.layers[0].input, K.learning_phase()],
                                [model.layers[-1].output])

y_pred_test = np.array([predict_stochastic([X_test, 1]) for _ in range(FP)])
y_pred_stochastic_test = y_pred_test.reshape(-1, y_test.shape[0]).T

y_pred_std_test = np.std(y_pred_stochastic_test, axis=1)
y_pred_mean_test = np.mean(y_pred_stochastic_test, axis=1)
y_pred_mode_test = (np.mean(y_pred_stochastic_test > .5, axis=1) > .5).astype(int).reshape(-1, 1)
y_pred_var_ratio_test = 1 - np.mean((y_pred_stochastic_test > .5) == y_pred_mode_test, axis=1)

test_analysis = pd.DataFrame({
    "y_true": y_test,
    "y_pred": y_pred_mean_test,
    "VR": y_pred_var_ratio_test,
    "STD": y_pred_std_test
})

print(metrics.accuracy_score(y_true=y_test, y_pred=y_pred_mean_test > .5))
print(test_analysis.describe())
```

```
0.9905362776025236
y_true y_pred VR STD
count 1902.000000 1902.000000 1902.000000 1902.000000
mean 0.531020 0.544015 0.021121 0.053761
std 0.499168 0.471939 0.067162 0.077280
min 0.000000 0.000001 0.000000 0.000010
25% 0.000000 0.006608 0.000000 0.004060
50% 1.000000 0.911458 0.000000 0.020484
75% 1.000000 0.998069 0.005000 0.064760
max 1.000000 0.999999 0.500000 0.367364
```

## Plot test set confidence

```
In [6]:
```

```
prediction_cut_off = (test_analysis.y_pred < .96) & (test_analysis.y_pred > .94)
std_diff = test_analysis.STD[prediction_cut_off].max() - test_analysis.STD[prediction_cut_off].min()
vr_diff = test_analysis.VR[prediction_cut_off].max() - test_analysis.VR[prediction_cut_off].min()
num_preds = test_analysis.STD[prediction_cut_off].shape[0]

# STD plot
plt.figure(figsize=(16, 8))
plt.suptitle("Standard Deviation of Test Predictions", fontsize=18, weight="bold")
plt.title("For the %d predictions between 0.94 and 0.96 the STD varies with %f" % (num_preds, std_diff),
          style="italic")
plt.xlabel("Standard Deviation")
plt.ylabel("Predicted Probability")
plt.scatter(test_analysis.STD, test_analysis.y_pred, alpha=.3)
plt.scatter(test_analysis.STD[prediction_cut_off],
            test_analysis.y_pred[prediction_cut_off])
plt.show()

# VR plot
plt.figure(figsize=(16, 8))
plt.suptitle("Variance Ratio of Test Predictions", fontsize=18, weight="bold")
plt.title("For the %d predictions between 0.94 and 0.96 the Variance Ratio varies with %f" % (num_preds, vr_diff),
          style="italic")
plt.xlabel("Variance Ratio")
plt.ylabel("Predicted Probability")
plt.scatter(test_analysis.VR, test_analysis.y_pred, alpha=.3)
plt.scatter(test_analysis.VR[prediction_cut_off],
            test_analysis.y_pred[prediction_cut_off])
plt.show()
```

## Apply \(MAPPER\)

### Take penultimate layer activations from the test set for the inverse \(X\)

```
In [7]:
```

```
predict_penultimate_layer = K.function([model.layers[0].input, K.learning_phase()],
                                       [model.layers[-2].output])
X_inverse_test = np.array(predict_penultimate_layer([X_test, 1]))[0]
print((X_inverse_test.shape, "X_inverse_test shape"))
```

```
((1902, 512), 'X_inverse_test shape')
```

### Take STD and error as the projected \(X\)

```
In [8]:
```

```
X_projected_test = np.c_[test_analysis.STD, test_analysis.y_true - test_analysis.y_pred]
print((X_projected_test.shape, "X_projected_test shape"))
```

```
((1902, 2), 'X_projected_test shape')
```

### Create the confidence graph \(G\)

```
In [9]:
```

```
mapper = km.KeplerMapper(verbose=2)
G = mapper.map(X_projected_test,
               X_inverse_test,
               clusterer=cluster.AgglomerativeClustering(n_clusters=2),
               overlap_perc=0.8,
               nr_cubes=10)
```

```
KeplerMapper()
Mapping on data shaped (1902, 512) using lens shaped (1902, 2)
Minimal points in hypercube before clustering: 2
Creating 100 hypercubes.
There are 0 points in cube_0 / 100
Cube_0 is empty.
There are 0 points in cube_1 / 100
Cube_1 is empty.
There are 0 points in cube_2 / 100
Cube_2 is empty.
There are 0 points in cube_3 / 100
Cube_3 is empty.
There are 0 points in cube_4 / 100
Cube_4 is empty.
There are 1437 points in cube_5 / 100
Found 2 clusters in cube_5
There are 1437 points in cube_6 / 100
Found 2 clusters in cube_6
There are 0 points in cube_7 / 100
Cube_7 is empty.
There are 0 points in cube_8 / 100
Cube_8 is empty.
There are 0 points in cube_9 / 100
Cube_9 is empty.
There are 1 points in cube_10 / 100
Cube_10 is empty.
There are 0 points in cube_11 / 100
Cube_11 is empty.
There are 0 points in cube_12 / 100
Cube_12 is empty.
There are 0 points in cube_13 / 100
Cube_13 is empty.
There are 0 points in cube_14 / 100
Cube_14 is empty.
There are 382 points in cube_15 / 100
Found 2 clusters in cube_15
There are 364 points in cube_16 / 100
Found 2 clusters in cube_16
There are 0 points in cube_17 / 100
Cube_17 is empty.
There are 0 points in cube_18 / 100
Cube_18 is empty.
There are 0 points in cube_19 / 100
Cube_19 is empty.
There are 1 points in cube_20 / 100
Cube_20 is empty.
There are 0 points in cube_21 / 100
Cube_21 is empty.
There are 0 points in cube_22 / 100
Cube_22 is empty.
There are 0 points in cube_23 / 100
Cube_23 is empty.
There are 14 points in cube_24 / 100
Found 2 clusters in cube_24
There are 203 points in cube_25 / 100
Found 2 clusters in cube_25
There are 144 points in cube_26 / 100
Found 2 clusters in cube_26
There are 0 points in cube_27 / 100
Cube_27 is empty.
There are 0 points in cube_28 / 100
Cube_28 is empty.
There are 0 points in cube_29 / 100
Cube_29 is empty.
There are 0 points in cube_30 / 100
Cube_30 is empty.
There are 0 points in cube_31 / 100
Cube_31 is empty.
There are 0 points in cube_32 / 100
Cube_32 is empty.
There are 0 points in cube_33 / 100
Cube_33 is empty.
There are 42 points in cube_34 / 100
Found 2 clusters in cube_34
There are 124 points in cube_35 / 100
Found 2 clusters in cube_35
There are 65 points in cube_36 / 100
Found 2 clusters in cube_36
There are 2 points in cube_37 / 100
Found 2 clusters in cube_37
There are 0 points in cube_38 / 100
Cube_38 is empty.
There are 0 points in cube_39 / 100
Cube_39 is empty.
There are 0 points in cube_40 / 100
Cube_40 is empty.
There are 0 points in cube_41 / 100
Cube_41 is empty.
There are 0 points in cube_42 / 100
Cube_42 is empty.
There are 1 points in cube_43 / 100
Cube_43 is empty.
There are 52 points in cube_44 / 100
Found 2 clusters in cube_44
There are 69 points in cube_45 / 100
Found 2 clusters in cube_45
There are 37 points in cube_46 / 100
Found 2 clusters in cube_46
There are 8 points in cube_47 / 100
Found 2 clusters in cube_47
There are 0 points in cube_48 / 100
Cube_48 is empty.
There are 0 points in cube_49 / 100
Cube_49 is empty.
There are 0 points in cube_50 / 100
Cube_50 is empty.
There are 0 points in cube_51 / 100
Cube_51 is empty.
There are 0 points in cube_52 / 100
Cube_52 is empty.
There are 6 points in cube_53 / 100
Found 2 clusters in cube_53
There are 50 points in cube_54 / 100
Found 2 clusters in cube_54
There are 42 points in cube_55 / 100
Found 2 clusters in cube_55
There are 23 points in cube_56 / 100
Found 2 clusters in cube_56
There are 15 points in cube_57 / 100
Found 2 clusters in cube_57
There are 1 points in cube_58 / 100
Cube_58 is empty.
There are 0 points in cube_59 / 100
Cube_59 is empty.
There are 4 points in cube_60 / 100
Found 2 clusters in cube_60
There are 6 points in cube_61 / 100
Found 2 clusters in cube_61
There are 2 points in cube_62 / 100
Found 2 clusters in cube_62
There are 17 points in cube_63 / 100
Found 2 clusters in cube_63
There are 42 points in cube_64 / 100
Found 2 clusters in cube_64
There are 18 points in cube_65 / 100
Found 2 clusters in cube_65
There are 15 points in cube_66 / 100
Found 2 clusters in cube_66
There are 22 points in cube_67 / 100
Found 2 clusters in cube_67
There are 5 points in cube_68 / 100
Found 2 clusters in cube_68
There are 0 points in cube_69 / 100
Cube_69 is empty.
There are 4 points in cube_70 / 100
Found 2 clusters in cube_70
There are 7 points in cube_71 / 100
Found 2 clusters in cube_71
There are 9 points in cube_72 / 100
Found 2 clusters in cube_72
There are 28 points in cube_73 / 100
Found 2 clusters in cube_73
There are 28 points in cube_74 / 100
Found 2 clusters in cube_74
There are 3 points in cube_75 / 100
Found 2 clusters in cube_75
There are 8 points in cube_76 / 100
Found 2 clusters in cube_76
There are 20 points in cube_77 / 100
Found 2 clusters in cube_77
There are 8 points in cube_78 / 100
Found 2 clusters in cube_78
There are 0 points in cube_79 / 100
Cube_79 is empty.
There are 0 points in cube_80 / 100
Cube_80 is empty.
There are 7 points in cube_81 / 100
Found 2 clusters in cube_81
There are 23 points in cube_82 / 100
Found 2 clusters in cube_82
There are 29 points in cube_83 / 100
Found 2 clusters in cube_83
There are 14 points in cube_84 / 100
Found 2 clusters in cube_84
There are 0 points in cube_85 / 100
Cube_85 is empty.
There are 1 points in cube_86 / 100
Cube_86 is empty.
There are 8 points in cube_87 / 100
Found 2 clusters in cube_87
There are 6 points in cube_88 / 100
Found 2 clusters in cube_88
There are 1 points in cube_89 / 100
Cube_89 is empty.
There are 0 points in cube_90 / 100
Cube_90 is empty.
There are 4 points in cube_91 / 100
Found 2 clusters in cube_91
There are 14 points in cube_92 / 100
Found 2 clusters in cube_92
There are 14 points in cube_93 / 100
Found 2 clusters in cube_93
There are 4 points in cube_94 / 100
Found 2 clusters in cube_94
There are 0 points in cube_95 / 100
Cube_95 is empty.
There are 0 points in cube_96 / 100
Cube_96 is empty.
There are 1 points in cube_97 / 100
Cube_97 is empty.
There are 2 points in cube_98 / 100
Found 2 clusters in cube_98
There are 2 points in cube_99 / 100
Found 2 clusters in cube_99
Created 267 edges and 100 nodes in 0:00:02.503639.
```

### Create color function output (absolute error)

```
In [10]:
```

```
color_function_output = np.abs(y_test - test_analysis.y_pred)  # absolute error per sample
```

### Create image tooltips for samples that are interpretable for humans

```
In [11]:
```

```
import io
import base64
from scipy.misc import toimage, imresize

# Create z-scores
hard_predictions = (test_analysis.y_pred > 0.5).astype(int)
o = np.std(X_test, axis=0)
u = np.mean(X_test[hard_predictions == 0], axis=0)
v = np.mean(X_test[hard_predictions == 1], axis=0)
z_scores = (u - v) / o

scores_0 = sorted([(score, i) for i, score in enumerate(z_scores) if str(score) != "nan"],
                  reverse=False)
scores_1 = sorted([(score, i) for i, score in enumerate(z_scores) if str(score) != "nan"],
                  reverse=True)

# Fill RGBA image array with top 200 scores for positive and negative
img_array_0 = np.zeros((28, 28, 4))
img_array_1 = np.zeros((28, 28, 4))
for e, (score, i) in enumerate(scores_0[:200]):
    y = i % 28
    x = int((i - (i % 28)) / 28)
    img_array_0[x][y] = [255, 255, 0, 205 - e]
for e, (score, i) in enumerate(scores_1[:200]):
    y = i % 28
    x = int((i - (i % 28)) / 28)
    img_array_1[x][y] = [255, 0, 0, 205 - e]
img_array = (img_array_0 + img_array_1) / 2

# Get base64 encoded version of this
output = io.BytesIO()
img = imresize(img_array, (64, 64))
img = toimage(img)
img.save(output, format="PNG")
contents = output.getvalue()
explanation_img_encoded = base64.b64encode(contents)
output.close()

# Create tooltips for each digit
tooltip_s = []
for ys, image_data in zip(y_test, X_test):
    output = io.BytesIO()
    img = toimage(imresize(image_data.reshape((28, 28)), (64, 64)))  # Data was a flat row of "pixels".
    img.save(output, format="PNG")
    contents = output.getvalue()
    img_encoded = base64.b64encode(contents)
    img_tag = """<div style="width:71px;
        height:71px;
        overflow:hidden;
        float:left;
        position: relative;">
        <img src="data:image/png;base64,%s" style="position:absolute; top:0; right:0" />
        <img src="data:image/png;base64,%s" style="position:absolute; top:0; right:0;
             opacity:0.5; width: 64px; height: 64px;" />
        <div style="position: relative; top: 0; left: 1px; font-size:9px">%s</div>
        </div>""" % (img_encoded.decode('utf-8'),
                     explanation_img_encoded.decode('utf-8'),
                     ys)
    tooltip_s.append(img_tag)
    output.close()
tooltip_s = np.array(tooltip_s)
```

```
/Users/sauln/libraries/kepler-mapper/venv/lib/python3.6/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in true_divide
# This is added back by InteractiveShellApp.init_path()
/Users/sauln/libraries/kepler-mapper/venv/lib/python3.6/site-packages/ipykernel_launcher.py:36: DeprecationWarning: `imresize` is deprecated!
`imresize` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use ``skimage.transform.resize`` instead.
/Users/sauln/libraries/kepler-mapper/venv/lib/python3.6/site-packages/ipykernel_launcher.py:37: DeprecationWarning: `toimage` is deprecated!
`toimage` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use Pillow's ``Image.fromarray`` directly instead.
/Users/sauln/libraries/kepler-mapper/venv/lib/python3.6/site-packages/ipykernel_launcher.py:48: DeprecationWarning: `imresize` is deprecated!
`imresize` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use ``skimage.transform.resize`` instead.
/Users/sauln/libraries/kepler-mapper/venv/lib/python3.6/site-packages/ipykernel_launcher.py:48: DeprecationWarning: `toimage` is deprecated!
`toimage` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use Pillow's ``Image.fromarray`` directly instead.
```

### Visualize

```
In [12]:
```

```
_ = mapper.visualize(G,
                     lens=X_projected_test,
                     lens_names=["Uncertainty", "Error"],
                     custom_tooltips=tooltip_s,
                     color_function=color_function_output.values,
                     title="Confidence Graph for a MLP trained on MNIST",
                     path_html="confidence_graph_output.html")
```

```
Wrote visualization to: confidence_graph_output.html
```

```
In [13]:
```

```
from kmapper import jupyter
jupyter.display("confidence_graph_output.html")
```