Transform Image Dataset (Experimental)¶
Transforms the images within an image dataset.
Documentation¶
Algorithms¶
- Center Crop
Crops the given image at the center. If the image is a torch Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions. If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
- Width:
Desired output width of the crop.
- Height:
Desired output height of the crop.
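For illustration, a minimal sketch of the equivalent torchvision call, assuming the Width and Height options feed a standard torchvision.transforms.CenterCrop (the example sizes are arbitrary):

    import torch
    from torchvision import transforms

    img = torch.rand(3, 300, 400)             # [..., H, W] tensor image
    crop = transforms.CenterCrop((224, 224))  # (Height, Width) of the desired output
    out = crop(img)
    print(out.shape)                          # torch.Size([3, 224, 224])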
- Grayscale
Convert the image to grayscale. If the image is a torch Tensor, it is expected to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions.
- Number of output channels:
Number of channels desired for the output image (1 or 3).
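For illustration, a minimal sketch assuming the Number of output channels option feeds torchvision.transforms.Grayscale:

    import torch
    from torchvision import transforms

    img = torch.rand(3, 224, 224)                           # [..., 3, H, W] tensor image
    to_gray = transforms.Grayscale(num_output_channels=1)   # 1 or 3 output channels
    out = to_gray(img)
    print(out.shape)                                        # torch.Size([1, 224, 224])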
- Normalize
Normalize a tensor image with mean and standard deviation. This transform does not support PIL Images. Given mean (mean[1], …, mean[n]) and std (std[1], …, std[n]) for n channels, this transform normalizes each channel of the input torch.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel].
- Standard deviation:
Sequence of standard deviations for each channel.
- Mean:
Sequence of means for each channel.
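For illustration, a minimal sketch assuming the Mean and Standard deviation options feed torchvision.transforms.Normalize; the values below are the common ImageNet statistics, used here only as an example:

    import torch
    from torchvision import transforms

    img = torch.rand(3, 224, 224)        # tensor image, values in [0, 1]
    normalize = transforms.Normalize(
        mean=[0.485, 0.456, 0.406],      # Mean: one value per channel
        std=[0.229, 0.224, 0.225],       # Standard deviation: one value per channel
    )
    out = normalize(img)                 # out[c] = (img[c] - mean[c]) / std[c]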
- Pad
Pad the given image on all sides with the given “pad” value. If the image is a torch Tensor, it is expected to have […, H, W] shape, where … means at most 2 leading dimensions for modes reflect and symmetric, at most 3 leading dimensions for mode edge, and an arbitrary number of leading dimensions for mode constant.
- Fill:
Pixel fill value for constant fill. Default is 0. If a tuple of length 3, it is used to fill the R, G, B channels respectively. This value is only used when the padding_mode is constant. Only a number is supported for torch Tensors; only an int, str, or tuple value is supported for PIL Images.
- Padding mode:
Type of padding. Should be constant, edge, reflect, or symmetric. Default is constant.
- constant: pads with a constant value; this value is specified with fill.
- edge: pads with the last value at the edge of the image. If the input is a 5D torch Tensor, the last 3 dimensions are padded instead of the last 2.
- reflect: pads with a reflection of the image without repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode results in [3, 2, 1, 2, 3, 4, 3, 2].
- symmetric: pads with a reflection of the image repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode results in [2, 1, 1, 2, 3, 4, 4, 3].
- Padding size:
Padding on each border. If a single int is provided, it is used to pad all borders. If a sequence of length 2 is provided, it is the padding on left/right and top/bottom respectively. If a sequence of length 4 is provided, it is the padding for the left, top, right and bottom borders respectively.
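For illustration, a minimal sketch assuming the Padding size, Fill and Padding mode options feed torchvision.transforms.Pad:

    import torch
    from torchvision import transforms

    img = torch.rand(3, 224, 224)
    pad = transforms.Pad(padding=(4, 8, 4, 8),     # Padding size: left, top, right, bottom
                         fill=0,                   # Fill: only used with constant mode
                         padding_mode="constant")  # Padding mode
    out = pad(img)
    print(out.shape)                               # torch.Size([3, 240, 232])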
- Resize
Resize the input image to the given size. If the image is a torch Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.
- Width:
Desired output width.
- Interpolation:
Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. If the input is a Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. For backward compatibility, integer values (e.g. PIL.Image.NEAREST) are still accepted.
- Height:
Desired output height.
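For illustration, a minimal sketch assuming the Width, Height and Interpolation options feed torchvision.transforms.Resize:

    import torch
    from torchvision import transforms
    from torchvision.transforms import InterpolationMode

    img = torch.rand(3, 300, 400)
    resize = transforms.Resize((224, 224),                         # (Height, Width)
                               interpolation=InterpolationMode.BILINEAR)
    out = resize(img)
    print(out.shape)                                               # torch.Size([3, 224, 224])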
- To PIL Image
Convert a tensor or an ndarray to a PIL Image. This transform does not support torchscript.
- To Tensor
Convert a PIL Image or numpy.ndarray to a tensor. This transform does not support torchscript.
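For illustration, a minimal sketch covering both of the conversions above, round-tripping a tensor through a PIL Image:

    import torch
    from torchvision import transforms

    tensor_img = torch.rand(3, 224, 224)           # float tensor in [0, 1]
    pil_img = transforms.ToPILImage()(tensor_img)  # To PIL Image
    back = transforms.ToTensor()(pil_img)          # To Tensor: float tensor in [0, 1]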
Definition¶
Input ports¶
- dataset dataset
The image dataset whose images are transformed.
Output ports¶
- dataset dataset
The dataset containing the transformed images.
Configuration¶
- Fill (Fill)
Pixel fill value used by the Pad algorithm when the padding mode is constant.
- Height (Height)
Desired output height, used by the Center Crop and Resize algorithms.
- Interpolation (Interpolation)
Interpolation mode used by the Resize algorithm.
- Mean (Mean)
Sequence of per-channel means used by the Normalize algorithm.
- Number of output channels (Number of output channels)
Number of channels desired for the output image (1 or 3), used by the Grayscale algorithm.
- Padding mode (Padding mode)
Type of padding (constant, edge, reflect or symmetric) used by the Pad algorithm.
- Padding size (Padding size)
Padding on each border, used by the Pad algorithm.
- Standard deviation (Standard deviation)
Sequence of per-channel standard deviations used by the Normalize algorithm.
- Width (Width)
Desired output width, used by the Center Crop and Resize algorithms.
- Algorithm (algorithm)
The transform to apply; one of the algorithms described above.
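The mapping from the Algorithm setting and the remaining options to a concrete transform is not documented here. The sketch below shows one plausible dispatch onto torchvision; the build_transform helper, the configuration-dictionary layout, and the key names are assumptions for illustration only, not the node's actual API:

    from torchvision import transforms

    def build_transform(config):
        """Hypothetical dispatch from this node's configuration to a torchvision transform."""
        algorithm = config["algorithm"]
        if algorithm == "Center Crop":
            return transforms.CenterCrop((config["Height"], config["Width"]))
        if algorithm == "Grayscale":
            return transforms.Grayscale(config["Number of output channels"])
        if algorithm == "Normalize":
            return transforms.Normalize(config["Mean"], config["Standard deviation"])
        if algorithm == "Pad":
            return transforms.Pad(config["Padding size"], config["Fill"], config["Padding mode"])
        if algorithm == "Resize":
            return transforms.Resize((config["Height"], config["Width"]), config["Interpolation"])
        if algorithm == "To PIL Image":
            return transforms.ToPILImage()
        if algorithm == "To Tensor":
            return transforms.ToTensor()
        raise ValueError(f"Unknown algorithm: {algorithm}")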
Implementation¶
- class node_transformdataset.TransformImageDataset[source]