LuptonRGB#

class mergernet.data.image.LuptonRGB[source]#

Bases: ImageTransform

batch_transform(images: List[Path], save_paths: List[Path] | None = None, silent: bool = False)#
on_batch_end()#
transform(imgs: ndarray) ndarray[source]#

Create a human-interpretable RGB image from multi-band pixel data, following Lupton et al. (2004) to preserve colour during rescaling:

  1) Linearly scale each band to give a good colour spread (subjective choice)
  2) Nonlinearly rescale the total intensity using arcsinh
  3) Linearly scale all pixel values to lie between mn and mx
  4) Clip all pixel values to lie between 0 and 1

Optionally, desaturate pixels with low signal-to-noise to avoid a speckled sky (partially implemented).

Parameters:
  • imgs (numpy.ndarray) – an array of shape (h, w, c) representing the image, with one band's pixel data per channel

  • bands (str, optional) – ordered characters of bands of the 2-dim pixel arrays in imgs

  • arcsinh (float, optional) – softening factor for arcsinh rescaling

  • mn (float, optional) – min pixel value to set before (0, 1) clipping

  • mx (float, optional) – max pixel value to set before (0, 1) clipping

  • desaturate (bool, optional) – If True, reduce saturation on low S/N pixels to avoid speckled sky

  • desaturate_factor (float, optional) – parameter controlling the desaturation strength; proportional to saturation

Returns:

array of shape (H, W, 3) of pixel values for the colour image

Return type:

numpy.ndarray
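The colour-preserving rescaling described above can be sketched in plain NumPy. This is a minimal illustration of the technique, not the actual implementation of `LuptonRGB.transform`; the function name and defaults here are hypothetical, and the per-band linear stretch of step 1 is assumed to have been applied already:

```python
import numpy as np

def lupton_rgb_sketch(imgs: np.ndarray, arcsinh: float = 0.1,
                      mn: float = 0.0, mx: float = 1.0) -> np.ndarray:
    """Illustrative sketch of Lupton-style RGB scaling (not the library code).

    imgs: (h, w, 3) array with one band's pixel data per channel,
    assumed already linearly scaled per band (step 1).
    """
    # Step 2: nonlinear arcsinh rescale of the *total* intensity.
    # Multiplying every band by the same factor f(I)/I preserves colour ratios.
    total = imgs.sum(axis=2) + 1e-9          # avoid division by zero
    factor = np.arcsinh(total / arcsinh) / total
    rgb = imgs * factor[..., np.newaxis]

    # Step 3: linearly scale pixel values to lie between mn and mx ...
    rgb = (rgb - mn) / (mx - mn)

    # Step 4: ... then clip to [0, 1].
    return np.clip(rgb, 0.0, 1.0)
```

For production use, `astropy.visualization.make_lupton_rgb` provides a maintained implementation of the same Lupton et al. (2004) scheme.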
