Model Uncertainty Prediction¶
Note:
This notebook extends the "Custom DataLoader for Imbalanced dataset" notebook.
- In this notebook we will use the highly imbalanced Protein Homology Dataset from KDD Cup 2004:
  * The first element of each line is a BLOCK ID that denotes to which native sequence this example belongs. There is a unique BLOCK ID for each native sequence. BLOCK IDs are integers running from 1 to 303 (one for each native sequence, i.e. for each query). BLOCK IDs were assigned before the blocks were split into the train and test sets, so they do not run consecutively in either file.
  * The second element of each line is an EXAMPLE ID that uniquely identifies the example. You will need this EXAMPLE ID and the BLOCK ID when you submit results.
  * The third element is the class of the example. Proteins that are homologous to the native sequence are denoted by 1, non-homologous proteins (i.e. decoys) by 0. Test examples have a "?" in this position.
  * All following elements are feature values. There are 74 feature values in each line. The features describe the match (e.g. the score of a sequence alignment) between the native protein sequence and the sequence that is tested for homology.
Initial imports¶
In [1]:
import pandas as pd
import numpy as np
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import TabPreprocessor
from pytorch_widedeep.models import TabMlp, WideDeep
from pytorch_widedeep.dataloaders import DataLoaderImbalanced
from pytorch_widedeep.metrics import Accuracy, Recall, Precision, F1Score
from pytorch_widedeep.initializers import XavierNormal
from pytorch_widedeep.datasets import load_bio_kdd04
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import time
import datetime
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
In [2]:
df = load_bio_kdd04(as_frame=True)
df.head()
Out[2]:
|   | EXAMPLE_ID | BLOCK_ID | target | 4 | 5 | 6 | 7 | 8 | 9 | 10 | ... | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 279 | 261532 | 0 | 52.0 | 32.69 | 0.30 | 2.5 | 20.0 | 1256.8 | -0.89 | ... | -8.0 | 1595.1 | -1.64 | 2.83 | -2.0 | -50.0 | 445.2 | -0.35 | 0.26 | 0.76 |
| 1 | 279 | 261533 | 0 | 58.0 | 33.33 | 0.00 | 16.5 | 9.5 | 608.1 | 0.50 | ... | -6.0 | 762.9 | 0.29 | 0.82 | -3.0 | -35.0 | 140.3 | 1.16 | 0.39 | 0.73 |
| 2 | 279 | 261534 | 0 | 77.0 | 27.27 | -0.91 | 6.0 | 58.5 | 1623.6 | -1.40 | ... | 7.0 | 1491.8 | 0.32 | -1.29 | 0.0 | -34.0 | 658.2 | -0.76 | 0.26 | 0.24 |
| 3 | 279 | 261535 | 0 | 41.0 | 27.91 | -0.35 | 3.0 | 46.0 | 1921.6 | -1.36 | ... | 6.0 | 2047.7 | -0.98 | 1.53 | 0.0 | -49.0 | 554.2 | -0.83 | 0.39 | 0.73 |
| 4 | 279 | 261536 | 0 | 50.0 | 28.00 | -1.32 | -9.0 | 12.0 | 464.8 | 0.88 | ... | -14.0 | 479.5 | 0.68 | -0.59 | 2.0 | -36.0 | -6.9 | 2.02 | 0.14 | -0.23 |

5 rows × 77 columns
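The dataset is heavily imbalanced: positives make up well under 1% of the examples, which is precisely why the companion notebook resorts to DataLoaderImbalanced. A quick check with plain pandas (nothing specific to pytorch-widedeep):

# fraction of each class in the full dataset
df["target"].value_counts(normalize=True)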
In [3]:
# drop columns we won't need in this example
df.drop(columns=["EXAMPLE_ID", "BLOCK_ID"], inplace=True)
In [4]:
df_train, df_valid = train_test_split(
df, test_size=0.2, stratify=df["target"], random_state=1
)
df_valid, df_test = train_test_split(
df_valid, test_size=0.5, stratify=df_valid["target"], random_state=1
)
Preparing the data¶
In [5]:
continuous_cols = df.drop(columns=["target"]).columns.values.tolist()
In [6]:
# deeptabular
tab_preprocessor = TabPreprocessor(continuous_cols=continuous_cols, scale=True)
X_tab_train = tab_preprocessor.fit_transform(df_train)
X_tab_valid = tab_preprocessor.transform(df_valid)
X_tab_test = tab_preprocessor.transform(df_test)
# target
y_train = df_train["target"].values
y_valid = df_valid["target"].values
y_test = df_test["target"].values
Define the model¶
In [7]:
deeptabular = TabMlp(
column_idx=tab_preprocessor.column_idx,
continuous_cols=tab_preprocessor.continuous_cols,
mlp_hidden_dims=[64, 32],
)
model = WideDeep(deeptabular=deeptabular, pred_dim=1)
model
Out[7]:
WideDeep(
  (deeptabular): Sequential(
    (0): TabMlp(
      (cont_norm): Identity()
      (encoder): MLP(
        (mlp): Sequential(
          (dense_layer_0): Sequential(
            (0): Linear(in_features=74, out_features=64, bias=True)
            (1): ReLU(inplace=True)
            (2): Dropout(p=0.1, inplace=False)
          )
          (dense_layer_1): Sequential(
            (0): Linear(in_features=64, out_features=32, bias=True)
            (1): ReLU(inplace=True)
            (2): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (1): Linear(in_features=32, out_features=1, bias=True)
  )
)
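Note the Dropout layers inside the MLP: keeping them active at inference time is what will allow us to estimate prediction uncertainty later in this notebook.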
In [8]:
trainer = Trainer(
model,
objective="binary",
metrics=[Accuracy(), Precision(), F1Score(), Recall()],
verbose=1,
)
In [9]:
start = time.time()
trainer.fit(
X_train={"X_tab": X_tab_train, "target": y_train},
X_val={"X_tab": X_tab_valid, "target": y_valid},
n_epochs=3,
batch_size=32,
)
epoch 1: 100%|██████████| 3644/3644 [00:20<00:00, 175.39it/s, loss=0.0222, metrics={'acc': 0.9945, 'prec': 0.7565, 'f1': 0.6419, 'rec': 0.5574}]
valid: 100%|██████████| 456/456 [00:01<00:00, 252.36it/s, loss=0.0125, metrics={'acc': 0.9969, 'prec': 0.92, 'f1': 0.8035, 'rec': 0.7132}]
epoch 2: 100%|██████████| 3644/3644 [00:20<00:00, 177.43it/s, loss=0.0119, metrics={'acc': 0.9968, 'prec': 0.9209, 'f1': 0.793, 'rec': 0.6962}]
valid: 100%|██████████| 456/456 [00:01<00:00, 255.61it/s, loss=0.0121, metrics={'acc': 0.997, 'prec': 0.8972, 'f1': 0.8136, 'rec': 0.7442}]
epoch 3: 100%|██████████| 3644/3644 [00:20<00:00, 176.07it/s, loss=0.0103, metrics={'acc': 0.9973, 'prec': 0.9312, 'f1': 0.8351, 'rec': 0.757}]
valid: 100%|██████████| 456/456 [00:01<00:00, 259.70it/s, loss=0.0119, metrics={'acc': 0.997, 'prec': 0.8909, 'f1': 0.8201, 'rec': 0.7597}]
In [10]:
pd.DataFrame(trainer.history)
Out[10]:
|   | train_loss | train_acc | train_prec | train_f1 | train_rec | val_loss | val_acc | val_prec | val_f1 | val_rec |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.022229 | 0.994468 | 0.756545 | 0.641866 | 0.557377 | 0.012473 | 0.996913 | 0.920000 | 0.803493 | 0.713178 |
| 1 | 0.011912 | 0.996767 | 0.920918 | 0.792971 | 0.696239 | 0.012088 | 0.996981 | 0.897196 | 0.813559 | 0.744186 |
| 2 | 0.010341 | 0.997341 | 0.931198 | 0.835106 | 0.756991 | 0.011884 | 0.997050 | 0.890909 | 0.820084 | 0.759690 |
"Normal" prediction¶
In [11]:
df_pred = trainer.predict(X_tab=X_tab_test)
print(classification_report(df_test["target"].to_list(), df_pred))
print("Actual predicted values:\n{}".format(np.unique(df_pred, return_counts=True)))
predict: 100%|██████████| 456/456 [00:00<00:00, 689.36it/s]
              precision    recall  f1-score   support

           0       1.00      1.00      1.00     14446
           1       0.91      0.78      0.84       130

    accuracy                           1.00     14576
   macro avg       0.95      0.89      0.92     14576
weighted avg       1.00      1.00      1.00     14576

Actual predicted values:
(array([0, 1]), array([14465,   111]))
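trainer.predict returns hard class labels. If you prefer to work with the underlying probabilities, for instance to tune the decision threshold on such an imbalanced problem, the Trainer also exposes predict_proba. A minimal sketch (the 0.3 threshold is purely illustrative):

# array of shape (n_samples, 2) with the probabilities of class 0 and class 1
df_pred_proba = trainer.predict_proba(X_tab=X_tab_test)
# a lower threshold on the positive class trades precision for recall
custom_pred = (df_pred_proba[:, 1] > 0.3).astype(int)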
Prediction using uncertainty¶
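Under the hood, predict_uncertainty keeps the dropout layers active at prediction time and runs a number of stochastic forward passes (Monte Carlo dropout, in the spirit of Gal and Ghahramani's "Dropout as a Bayesian Approximation"). The uncertainty_granularity parameter sets the number of passes, and the returned probabilities are averaged across them.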
In [12]:
df_pred_unc = trainer.predict_uncertainty(X_tab=X_tab_test, uncertainty_granularity=10)
# note: the classification report below is computed on the point predictions
# (df_pred) for reference; the last column of df_pred_unc holds the classes
# predicted from the Monte Carlo averaged probabilities
print(classification_report(df_test["target"].to_list(), df_pred))
print(
    "Actual predicted values:\n{}".format(
        np.unique(df_pred_unc[:, -1], return_counts=True)
    )
)
predict_UncertaintyIter: 100%|██████████| 10/10 [00:05<00:00, 1.86it/s]
              precision    recall  f1-score   support

           0       1.00      1.00      1.00     14446
           1       0.91      0.78      0.84       130

    accuracy                           1.00     14576
   macro avg       0.95      0.89      0.92     14576
weighted avg       1.00      1.00      1.00     14576

Actual predicted values:
(array([0.]), array([14576]))
In [13]:
df_pred_unc
Out[13]:
array([[9.99999821e-01, 1.77245539e-07, 0.00000000e+00],
       [1.00000000e+00, 8.29310925e-11, 0.00000000e+00],
       [9.99995947e-01, 4.06420531e-06, 0.00000000e+00],
       ...,
       [9.99999940e-01, 3.85314713e-08, 0.00000000e+00],
       [1.00000000e+00, 2.98146707e-09, 0.00000000e+00],
       [1.00000000e+00, 1.21332046e-12, 0.00000000e+00]])
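For a binary problem the three columns are the Monte Carlo averaged probability of class 0, the probability of class 1 and the resulting predicted class. One practical use is to route ambiguous examples to manual review; a minimal sketch, where the 0.2/0.8 band is an illustrative choice rather than anything prescribed by the library:

# positive-class probability averaged over the MC dropout passes
probs_pos = df_pred_unc[:, 1]
# flag examples the model is genuinely unsure about (illustrative thresholds)
uncertain = (probs_pos > 0.2) & (probs_pos < 0.8)
print(f"{uncertain.sum()} of {len(probs_pos)} test examples flagged for review")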