pyod · Issue #461
Closed
Issue created Dec 12, 2022 by @rupesh15203

Can we use a PySpark DataFrame as input?

I am exploring this library with a PySpark DataFrame. The following is the code I use in my experiment; I used the joblibspark library to register Spark as the joblib backend for processing:

from sklearn.utils import parallel_backend
from joblibspark import register_spark
from pyod.models import mcd

# Register Spark as a joblib backend so joblib can dispatch work to executors
register_spark()

mcd_model = mcd.MCD(random_state=42)
with parallel_backend('spark', n_jobs=-1):
    mcd_model.fit(df)

I tried to run this code on a dummy dataset and ran into the following problem:

ValueError: Expected 2D array, got scalar array instead:
array=DataFrame[length: double, width: double, height: double, engine-size: int, price: int].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.

It seems that only the schema information of the DataFrame is being considered.
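
To see why, here is a minimal sketch (the SparkSession setup and toy rows are hypothetical; the column names are taken from the error message above): a PySpark DataFrame exposes no NumPy array interface, so NumPy wraps the whole object in a zero-dimensional "scalar" array, which is exactly what scikit-learn's input validation then rejects.

import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Toy stand-in for the real dataset; column names come from the error above
df = spark.createDataFrame(
    [(1.0, 2.0, 3.0, 100, 15000)],
    ["length", "width", "height", "engine-size", "price"],
)

# NumPy cannot interpret the DataFrame as rows, so it stores the object
# itself in a 0-dimensional array -- hence "got scalar array instead"
arr = np.asarray(df)
print(arr.ndim)  # 0

This is why the error prints the DataFrame's schema string as the "array": the whole DataFrame object ended up inside a single scalar cell.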

If I use the df.collect() method when fitting the model, I am able to run the code successfully, but that will cause errors for large datasets. Can somebody guide me on how I can use a PySpark DataFrame directly to train the model?
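
Not an authoritative answer, but a minimal sketch of the collect()-based workaround made explicit (column names again taken from the error message; collect() pulls all rows to the driver, so this only works while the data fits in driver memory):

import numpy as np
from pyod.models.mcd import MCD

# collect() returns a list of Row objects (tuples), which NumPy turns
# into a proper 2-D (n_samples, n_features) array that PyOD accepts
X = np.array(
    df.select("length", "width", "height", "engine-size", "price").collect()
)

mcd_model = MCD(random_state=42)
mcd_model.fit(X)

labels = mcd_model.labels_            # 0 = inlier, 1 = outlier
scores = mcd_model.decision_scores_   # raw outlier scores

df.toPandas() would work the same way; either path is inherently single-machine, since PyOD's estimators operate on in-memory arrays.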
