delta.data.train_generator_track#
- delta.data.train_generator_track(batch_size, img_path, seg_path, previmg_path, segall_path, track_path, weights_path=None, *, augment_params=None, crop_windows=False, target_size=(256, 32), shift=0, seed=1)[source]#
Create a generator to train the tracking U-Net.
- Parameters
- batch_size : int
Batch size, number of training samples to concatenate together.
- img_path : string
Path to folder containing training input images (current timepoint).
- seg_path : string
Path to folder containing training ‘seed’ images, i.e. the mask of a single cell in the previous image to track in the current image.
- previmg_path : string
Path to folder containing training input images (previous timepoint).
- segall_path : string
Path to folder containing training ‘segall’ images, i.e. the mask of all cells in the current image.
- track_path : string
Path to folder containing tracking groundtruth, i.e. the mask of the seed cell and its potential daughter tracked in the current frame.
- weights_path : string or None, optional
Path to folder containing pixel-wise weights to apply to the tracking groundtruth. If None, the same weight is applied to all pixels. If the string is ‘online’, weights will be generated on the fly (not recommended, much slower). The default is None.
- augment_params : dict, optional
Data augmentation parameters. See the data_augmentation() doc for more info. The default is None.
- target_size : tuple of 2 ints, optional
Input and output image size. The default is (256, 32).
- crop_windows : bool, optional
Whether to crop out a window of size target_size around the seed/seg cell to track, for all input images, instead of resizing. The default is False.
- shift : int, optional
If crop_windows is True, a shift in [-shift, +shift] is uniformly sampled, independently for the X and Y axes. This shift, in pixels, is applied only to the cropbox for the current-timepoint input frames (img, segall, mot_dau, wei), to simulate image drift over time. The default is 0.
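The shift mechanism can be sketched as follows. This is an illustrative helper, not DeLTA's internal implementation: the cropbox layout `(ymin, ymax, xmin, xmax)` and the function name `shifted_cropbox` are assumptions for the example.

```python
import numpy as np

def shifted_cropbox(cropbox, shift, rng):
    # Sample an integer offset in [-shift, +shift], independently for
    # the Y and X axes, and translate the whole cropbox by it.
    # (Illustrative sketch; not DeLTA's internal code.)
    dy = rng.integers(-shift, shift + 1)
    dx = rng.integers(-shift, shift + 1)
    ymin, ymax, xmin, xmax = cropbox
    return (ymin + dy, ymax + dy, xmin + dx, xmax + dx)

rng = np.random.default_rng(1)  # mirrors the generator's seeded RNG
box = shifted_cropbox((0, 256, 0, 32), shift=5, rng=rng)
# The window size is preserved; only its position moves.
assert box[1] - box[0] == 256 and box[3] - box[2] == 32
```

Applying the shifted box only to the current-timepoint crops (while the previous-timepoint crops keep the unshifted box) is what simulates frame-to-frame drift.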
- seed : int, optional
Seed for numpy’s random generator. The default is 1.
- Yields
- inputs_arr : 4D numpy array of floats
Input images and masks for the U-Net training routine. Dimensions of the tensor are (batch_size, target_size[0], target_size[1], 4).
- outputs_arr : 4D numpy array of floats
Output masks for the U-Net training routine. Dimensions of the tensor are (batch_size, target_size[0], target_size[1], 3). The third index of axis=3 contains ‘background’ masks, i.e. the part of the tracking output groundtruth that is not part of the mother or daughter masks.