dataset

This module holds the dataset classes that create the method-specific images, patches, and targets used for self-supervised learning.

blendowski

selfsupervised3d.dataset.blendowski

datasets to support blendowski-style self-supervised learning methods

References

[1] M. Blendowski et al. “How to Learn from Unlabeled Volume Data:
Self-supervised 3D Context Feature Learning.” MICCAI. 2019.

[2] https://github.com/multimodallearning/miccai19_self_supervision

Author: Jacob Reinhold (jacob.reinhold@jhu.edu)

Created on: April 28, 2020

class selfsupervised3d.dataset.blendowski.BlendowskiDataset(img_dir: List[str], patch_size: float = 0.4, patch_dim: int = 42, offset: float = 0.3, stack_size: float = 0.05, stack_dim: int = 3, min_off_inplane: float = 0.25, max_off_inplane: float = 0.3, min_off_throughplane: float = 0.125, max_off_throughplane: float = 0.25, heatmap_dim: int = 19, scale: float = 10.0, precision: float = 15.0, throughplane_axis: int = 0)
selfsupervised3d.dataset.blendowski.blendowski_collate(lst)

collate function to integrate BlendowskiDataset with PyTorch DataLoader

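A minimal usage sketch follows; the file paths and batch size are placeholders, and the exact contents of each batch (patch stacks and heatmap targets) are an assumption based on the entries above and on [1]:

    from glob import glob

    from torch.utils.data import DataLoader

    from selfsupervised3d.dataset.blendowski import BlendowskiDataset, blendowski_collate

    img_fns = glob('data/train/*.nii.gz')  # placeholder list of 3D volume paths
    dataset = BlendowskiDataset(img_fns)   # remaining arguments use the defaults above
    loader = DataLoader(dataset, batch_size=4, collate_fn=blendowski_collate)

    for batch in loader:
        ...  # feed the collated patch stacks and heatmap targets to the networks
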
selfsupervised3d.dataset.blendowski.blendowski_patches(img: torch.Tensor, patch_size: float = 0.4, patch_dim: int = 42, offset: float = 0.3, stack_size: float = 0.05, stack_dim: int = 3, min_off_inplane: float = 0.25, max_off_inplane: float = 0.3, min_off_throughplane: float = 0.125, max_off_throughplane: float = 0.25, heatmap_dim: int = 19, scale: float = 10.0, precision: float = 15.0, throughplane_axis: int = 0)

Creates patches and targets for self-supervised learning as described in [1]

Parameters:
  • img (torch.Tensor) – image from which to create patches
  • patch_size (float) – size of patch as a proportion of the image
  • patch_dim (int) – side length in voxels of the cubic patch to be extracted
  • offset (float) – allowed offset from the center patch as a proportion of the image
  • stack_size (float) – proportion of image that comprises the throughplane stack
  • stack_dim (int) – size in voxels of the throughplane stack
  • min_off_inplane (float) – minimum offset in the inplane direction
  • max_off_inplane (float) – maximum offset in the inplane direction
  • min_off_throughplane (float) – minimum offset in the throughplane direction
  • max_off_throughplane (float) – maximum offset in the throughplane direction
  • heatmap_dim (int) – dimension in pixels of the target heatmap
  • scale (float) – constant scale value multiplying the Gaussian term (see the eq. in Details on Heatmap Network Training in [1])
  • precision (float) – value of the precision (1/variance) in the Gaussian term (see the eq. in Details on Heatmap Network Training in [1] and the sketch after the References below)
  • throughplane_axis (int) – axis selected as throughplane (0, 1, or 2)

References

[1] M. Blendowski et al. “How to Learn from Unlabeled Volume Data:
Self-supervised 3D Context Feature Learning.” MICCAI. 2019.
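
The exact heatmap equation is given in the Details on Heatmap Network Training of [1]; the sketch below only illustrates how scale and precision might enter such a Gaussian target, and the coordinate normalization, the 1/2 factor, and the function name are assumptions:

    import torch

    def gaussian_heatmap_target(offset_yx, heatmap_dim: int = 19,
                                scale: float = 10.0, precision: float = 15.0):
        """Illustrative 2D Gaussian heatmap centered at a normalized in-plane offset."""
        coords = torch.linspace(-1.0, 1.0, heatmap_dim)
        dy = (coords - offset_yx[0]).view(-1, 1)  # column of y-distances
        dx = (coords - offset_yx[1]).view(1, -1)  # row of x-distances
        # scale multiplies the Gaussian; precision plays the role of 1/variance
        return scale * torch.exp(-0.5 * precision * (dy ** 2 + dx ** 2))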

context

selfsupervised3d.dataset.context

datasets to support context encoder-style self-supervised learning methods

References

[1] D. Pathak et al. “Context encoders: Feature learning by inpainting.”
CVPR. 2016.

Author: Jacob Reinhold (jacob.reinhold@jhu.edu)

Created on: May 06, 2020

class selfsupervised3d.dataset.context.ContextDataset(img_dir: List[str], mask_dir: Optional[str] = None, n_blocks: int = 5, size: int = 10, n_erode: Optional[int] = 4, patch_size: Optional[int] = None)
selfsupervised3d.dataset.context.context_collate(lst)

collate function to integrate ContextDataset with PyTorch DataLoader

selfsupervised3d.dataset.context.create_block_mask(idx_mask: torch.Tensor, size: int, n_erode: Optional[int] = None, fill_val: float = 1.0)

creates a mask containing a block inside a given mask

selfsupervised3d.dataset.context.create_multiblock_mask(idx_mask: torch.Tensor, n_blocks: int, size: int, n_erode: Optional[int] = None, fill_val: float = 1.0)

creates a mask containing multiple (potentially overlapping) blocks inside a given mask
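
A sketch of how the masking helpers might be used to corrupt a volume for context-encoder training; the shapes, the index mask, and the assumption that the helper returns a tensor of the same shape with fill_val inside the blocks are illustrative only. ContextDataset and context_collate plug into a DataLoader in the same way as the blendowski example above:

    import torch

    from selfsupervised3d.dataset.context import create_multiblock_mask

    idx_mask = torch.zeros(64, 64, 64)
    idx_mask[16:48, 16:48, 16:48] = 1.0   # region inside which blocks may be placed
    block_mask = create_multiblock_mask(idx_mask, n_blocks=5, size=10)

    img = torch.randn(64, 64, 64)          # placeholder 3D volume
    corrupted = img * (1.0 - block_mask)   # zero out the blocks for inpainting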

doersch

selfsupervised3d.dataset.doersch

datasets to support doersch-style self-supervised learning methods

References

[1] M. Blendowski et al. “How to Learn from Unlabeled Volume Data:
Self-supervised 3D Context Feature Learning.” MICCAI. 2019.

[2] https://github.com/multimodallearning/miccai19_self_supervision

[3] C. Doersch et al. “Unsupervised visual representation learning
by context prediction.” ICCV. 2015.

Author: Jacob Reinhold (jacob.reinhold@jhu.edu)

Created on: April 28, 2020

class selfsupervised3d.dataset.doersch.DoerschDataset(img_dir: List[str], patch_size: float = 0.4, patch_dim: int = 25, offset: float = 0.5)
selfsupervised3d.dataset.doersch.doersch_collate(lst)

collate function to integrate DoerschDataset with PyTorch DataLoader

selfsupervised3d.dataset.doersch.doersch_patches(img: torch.Tensor, patch_size: float = 0.2, patch_dim: int = 25, offset: float = 0.5)

Creates Doersch-style patches [1] as described in [2]; a usage sketch follows the References below

Parameters:
  • img (torch.Tensor) – image from which to create patches
  • patch_size (float) – size of patch as a proportion of the image
  • patch_dim (int) – side length in voxels of the cubic patch to be extracted
  • offset (float) – allowed offset from the center patch as a proportion of the image

References

[1] C. Doersch et al. “Unsupervised visual representation learning
by context prediction.” ICCV. 2015.
[2] M. Blendowski et al. “How to Learn from Unlabeled Volume Data:
Self-supervised 3D Context Feature Learning.” MICCAI. 2019.
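
A minimal sketch of calling doersch_patches directly; the volume shape is a placeholder, and the structure of the return value (patches plus a positional target, per [1] and [2]) is an assumption, so it is left unpacked here:

    import torch

    from selfsupervised3d.dataset.doersch import doersch_patches

    img = torch.randn(96, 96, 96)  # placeholder 3D volume
    out = doersch_patches(img, patch_size=0.2, patch_dim=25, offset=0.5)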