Erik Linder-Norén / PyTorch-GAN · Issue #170 (Closed)
Issue created Mar 04, 2022 by edwardcho (@edwardcho)

Can I train grayscale images on MUNIT?

Hello Sir,

I am interested in image-to-image translation, so I tried your code with my own dataset.

My dataset is as follows:

  1. grayscale (1 channel)
  2. 256 x 256

When I started training, I got the following error:

Namespace(b1=0.5, b2=0.999, batch_size=4, channels=1, checkpoint_interval=-1, dataset_name='noise2clip', decay_epoch=2, dim=64, epoch=0, img_height=256, img_width=256, lr=0.0001, n_cpu=8, n_downsample=2, n_epochs=4, n_residual=3, sample_interval=400, style_dim=8)
/home/itsme/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  "Argument interpolation should be of type InterpolationMode instead of int. "
../../data/noise2clip/trainA
../../data/noise2clip/valA
Traceback (most recent call last):
  File "munit.py", line 171, in <module>
    for i, batch in enumerate(dataloader):
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data/TESTBOARD/additional_networks/generation/PyTorch-GAN_eriklindernoren/implementations/munit/datasets.py", line 40, in __getitem__
    img_A = self.transform(img_A)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 61, in __call__
    img = t(img)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 226, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/itsme/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 351, in normalize
    tensor.sub_(mean).div_(std)
RuntimeError: output with shape [1, 256, 256] doesn't match the broadcast shape [3, 256, 256]

How can I train in my case (grayscale data with MUNIT)?
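
It looks like Normalize is being given three-channel mean/std while my tensors only have one channel. Would one of the following directions be correct? This is just a minimal sketch of what I have in mind; the exact transform list in munit.py may differ, and the names here are only illustrative.

```python
import torchvision.transforms as transforms
from PIL import Image

# Option A: keep the data grayscale and use single-channel normalization
# (presumably the model would also need --channels 1 so it expects 1-channel input).
transforms_gray = transforms.Compose([
    transforms.Resize((256, 256), transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),                 # grayscale PIL image -> tensor [1, 256, 256]
    transforms.Normalize((0.5,), (0.5,)),  # one mean/std value per channel
])

# Option B: replicate the gray channel to 3 channels and leave the model as-is.
transforms_rgb = transforms.Compose([
    transforms.Lambda(lambda img: img.convert("RGB")),  # 1 channel -> 3 identical channels
    transforms.Resize((256, 256), transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),                               # -> tensor [3, 256, 256]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

if __name__ == "__main__":
    img = Image.new("L", (512, 512))   # dummy grayscale image
    print(transforms_gray(img).shape)  # torch.Size([1, 256, 256])
    print(transforms_rgb(img).shape)   # torch.Size([3, 256, 256])
```

Which of these is the intended way to use grayscale data with this implementation?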

Thanks. Edward Cho.
