Yue Zhao / pyod / Issues / #460

Closed

Issue created Dec 09, 2022 by Administrator @root (Contributor)

PyOD implementations show lower results compared to the paper, and to the corresponding sklearn scores

Created by: kordc

While exploring PyOD, I noticed that the default results are much lower than those reported in the ADBench paper. Moreover, when I compare Isolation Forest directly, the sklearn implementation simply performs better. I used a fixed random_state so you can reproduce it.

Isolation Forest

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from pyod.models.iforest import IForest

data = np.load('../data/numerical/01_breastw.npz',
               allow_pickle=True)  # very simple dataset
X, y = data['X'], data['y']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=23)

isf = IForest(random_state=23).fit(X_train)
isf_pred = isf.predict(X_train)
roc_auc_score(y_train, isf_pred)  # => 0.6443854458530086
```

The result is 0.6444. Compared to sklearn:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import IsolationForest

data = np.load('../data/numerical/01_breastw.npz',
               allow_pickle=True)  # very simple dataset
X, y = data['X'], data['y']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=23)

isf = IsolationForest(random_state=23).fit(X_train)
isf_pred = isf.predict(X_train)
# sklearn labels inliers 1 and outliers -1; map to 0/1 for scoring
isf_pred_shifted = isf_pred.copy()
isf_pred_shifted[isf_pred_shifted == 1] = 0
isf_pred_shifted[isf_pred_shifted == -1] = 1
roc_auc_score(y_train, isf_pred_shifted)  # => 0.956080882497012
```
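As a side note, the three-line shift above can equivalently be written as a single vectorized comparison (same result, just more compact):

```python
import numpy as np

# sklearn's IsolationForest.predict returns 1 for inliers, -1 for outliers;
# map that to the 0/1 outlier convention used for scoring above
isf_pred = np.array([1, -1, -1, 1])
isf_pred_shifted = (isf_pred == -1).astype(int)
print(isf_pred_shifted)  # [0 1 1 0]
```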

The above result of 0.956 is much closer to the paper, where the reported score is 0.9832.
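For what it's worth, `roc_auc_score` also accepts continuous anomaly scores, which sidesteps the thresholding step entirely; sklearn's `score_samples` is higher for inliers, so it has to be negated to rank outliers first. A minimal sketch on synthetic data (the breastw file isn't reproduced here, so this is a stand-in):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(23)
# Synthetic stand-in: 450 inliers around 0, 50 outliers shifted to 5
X = np.vstack([rng.normal(0, 1, size=(450, 9)),
               rng.normal(5, 1, size=(50, 9))])
y = np.array([0] * 450 + [1] * 50)

isf = IsolationForest(random_state=23).fit(X)

# AUC from continuous scores: uses the full ranking
auc_scores = roc_auc_score(y, -isf.score_samples(X))

# AUC from hard predictions: collapses the ranking to one threshold
auc_labels = roc_auc_score(y, (isf.predict(X) == -1).astype(int))

print(auc_scores, auc_labels)
```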

ECOD

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from pyod.models.ecod import ECOD

data = np.load('../data/numerical/01_breastw.npz',
               allow_pickle=True)  # very simple dataset
X, y = data['X'], data['y']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=23)

ecod = ECOD().fit(X_train)
ecod_pred = ecod.predict(X_train)
roc_auc_score(y_train, ecod_pred)  # => 0.6490683229813665
```

The above 0.649 doesn't correspond to the paper's 0.9917.

HBOS

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from pyod.models.hbos import HBOS

data = np.load('../data/numerical/01_breastw.npz',
               allow_pickle=True)  # very simple dataset
X, y = data['X'], data['y']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=23)

hbos = HBOS().fit(X_train)
hbos_pred = hbos.predict(X_train)
roc_auc_score(y_train, hbos_pred)  # => 0.6443854458530086
```

The above 0.644 doesn't correspond to the paper's 0.9894.


I'm using PyOD in the simplest possible way, and I verified the dataset with sklearn's Isolation Forest, which is why I suspect something is wrong.
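One closing observation of my own (not from the paper): feeding hard 0/1 predictions to `roc_auc_score`, as all the snippets above do, produces a two-point ROC curve, so the AUC collapses to (TPR + TNR) / 2 instead of a full ranking-based AUC. A tiny sketch on hand-made labels:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# With hard 0/1 predictions the ROC curve has a single interior point,
# so the AUC reduces to (TPR + TNR) / 2 (balanced accuracy)
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 0])

tpr = 1 / 2  # one of the two outliers is caught
tnr = 3 / 4  # three of the four inliers are kept
auc = roc_auc_score(y_true, y_pred)
print(auc)  # 0.625, equal to (tpr + tnr) / 2
```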

The dataset (breastw) is available in the ADBench repository.
