Installation#
You can try DeLTA out without installing anything on Google Colab.
You can also install DeLTA on your computer or your cluster easily with conda or pip, as described on this page.
If you don’t know anything about conda, you might find the following dropdown helpful:
conda crash course
conda is a program that creates isolated environments and installs packages into them, allowing different, potentially conflicting versions to coexist in separate environments for the needs of different programs. It finds packages on the internet in different repositories, Anaconda and conda-forge being the two main ones. Anaconda is the official repository, but it has a limited number of packages that are often very outdated; conda-forge is community-led and has many more packages and more recent versions. DeLTA is on conda-forge.
mamba is a reimplementation of conda which is much faster. We recommend using it. You can replace conda with mamba in all of the commands that we provide.
In some of the commands in this documentation, to be typed in a terminal, we show a word in parentheses before the prompt, in general either (base) or (delta_env). These refer to the currently activated conda environment: base is the default (when no environment has been manually activated), while delta_env is the environment where DeLTA is available to Python.
You can activate the delta_env environment with conda activate delta_env and deactivate it with conda deactivate. When an environment is activated, every package that was installed in it is available to Python.
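If you are ever unsure which environment the running interpreter belongs to, a quick sketch from within Python (the paths printed will of course depend on your own installation):

```python
import sys

# The prefix is the root directory of the active environment
# (e.g. .../mambaforge/envs/delta_env when delta_env is activated).
print(sys.prefix)
# The interpreter binary itself also lives inside that environment:
print(sys.executable)
```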
Hardware requirements#
DeLTA runs much faster on a GPU. To use one, you will need an NVIDIA GPU with a CUDA compute capability of 3.5 or higher and an up-to-date GPU driver. If no compatible GPU/driver is found on the system, tensorflow will run on the CPU.
For more information, see the Anaconda tensorflow guide and the tensorflow guide.
If you do not have access to a compatible GPU, DeLTA will run fine on CPU but will be slower. Computing clusters and cloud computing platforms are also an alternative that has been working well for us.
Prerequisites#
On Windows, you will need C++ and .NET support for Visual Studio.
Finally, on Windows, to be able to use the ffmpeg-python module and save mp4 output files, you will need to download ffmpeg.exe and place it in your environment’s binaries directory. Otherwise, ffmpeg-python will issue a cryptic message about not finding the specified file when trying to save.
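To locate that binaries directory, here is a quick sketch, assuming a standard conda layout (binaries live in Scripts on Windows and bin on Linux/macOS):

```python
import os
import sys

# Directory where ffmpeg.exe should be placed on Windows:
# "Scripts" in a conda environment on Windows, "bin" on Linux/macOS.
bin_dir = os.path.join(sys.prefix, "Scripts" if os.name == "nt" else "bin")
print(bin_dir)
```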
If you want to use DeLTA on Windows, it might be easier to install it inside the WSL (Windows Subsystem for Linux) than natively.
The WSL can be seen as a Linux emulator running inside Windows. If you already have it installed, you can continue directly with the Linux instructions. Otherwise, install it as described below, and then follow the Linux instructions.
Installing the WSL
To activate the WSL on Windows, first ensure that Windows Update is up to date, and if you have a GPU make sure you are using the latest drivers.
Then, using the “Turn Windows features on or off” tool, turn on the Virtual Machine Platform and the Windows Subsystem for Linux features. You may now check again for Windows updates.
Using the Microsoft app store, download the Windows Terminal to make the terminal use more agreeable.
You next need to install a Linux distribution. Type wsl --list --online in the Windows terminal to see a list of options. Debian is a good choice; to install it, type wsl --install -d Debian.
Set up a username and a password, and you now have a fully functional Linux distribution running inside Windows. Type wsl --list --verbose in any Windows terminal and make sure that the version number is 2; otherwise, you will have to update WSL, either with wsl --update or by following these steps.
You can then bring Debian up to date by typing the following commands in the Linux terminal:
$ sudo apt update
$ sudo apt upgrade
Note
The dollar sign is not a part of the command; it just represents the shell prompt.
You are now ready to continue with the Linux section of this guide.
From now on, you can follow the Linux instructions, starting with this tab.
We recommend installing DeLTA with conda; it makes installation much easier than with pip. If you already have conda, you are all set; if you don’t, the easiest way to install it is through the Mambaforge distribution. Just type these three lines in the terminal:
$ sudo apt install wget
$ wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
$ bash Mambaforge-Linux-x86_64.sh
Installation#
If you don’t think that you will ever need to modify DeLTA’s own code, choose the “for users” option. Otherwise, choose the “for devs” option.
Overall, conda is recommended in both cases because even though DeLTA is pure Python, its dependencies are not (typically, tensorflow and CUDA), and it is difficult to install them properly with pip.
DeLTA is available as a conda package on conda-forge. To install it, simply run in your system’s shell:
(base)$ conda create -n delta_env -c conda-forge delta2
This command creates a new environment named delta_env and installs DeLTA inside it. To use DeLTA, you just need to activate the environment with conda activate delta_env, and you will be able to import delta.
Attention
Note that the conda-forge package name is delta2 but the Python package to import is just delta.
You can also complete the environment with various packages, such as the elastic-deform library to do some advanced data augmentation for training:
(base)$ conda activate delta_env
(delta_env)$ pip install elastic-deform
Note
For now, DeLTA is only available on conda-forge for Linux and macOS, because the conda-forge Windows build of tensorflow is not up to date. You can, however, install from the cloned git repository on Windows.
DeLTA is available as a pip package on PyPI.
If you want to be able to use your GPU, you need to install the CUDA toolkit and cuDNN before installing DeLTA with pip, as explained on tensorflow’s website.
Tip
You might want to create a virtual environment first. You can use your favorite environment manager to do so.
Then, just run the following command to install DeLTA:
$ pip install delta2
You can now import delta.
Attention
Note that the pip package name is delta2 but the Python package to import is just delta.
You can clone our git repository and then use the delta_dev.yml file to install the environment:
(base)$ git clone https://gitlab.com/delta-microscopy/DeLTA
(base)$ cd DeLTA
(base)$ conda env create -f delta_dev.yml
After activating the environment, you can complete it with various packages, such as the elastic-deform library to do some advanced data augmentation for training:
(base)$ conda activate delta_env
(delta_env)$ pip install elastic-deform
If you want to be able to use your GPU, you need to install the CUDA toolkit and cuDNN before installing DeLTA with pip, as explained on tensorflow’s website.
Tip
You might want to create a virtual environment first. You can use your favorite environment manager to do so.
Then you should be able to install the rest of the dependencies and DeLTA with the following command, run inside DeLTA’s directory:
$ pip install -e .
Note
The -e flag installs DeLTA in “editable” mode, meaning that any change you make to DeLTA’s source code will be taken into account the next time you import delta.
Check installation#
You can check what libraries have been installed with:
(base)$ conda activate delta_env
(delta_env)$ conda list
or with pip:
$ pip list
To check that tensorflow is able to detect the GPU, run the following in the Python interpreter:
>>> import tensorflow as tf
>>> tf.config.list_physical_devices()
Your GPU should appear in the list.
Import DeLTA#
You should be all set. The following line in a Python interpreter should work from anywhere on your system (it will issue a warning about not finding elastic-deform if you didn’t install it):
>>> import delta
Tip
If Python can’t find DeLTA even though you are inside the DeLTA environment, it might be because you installed DeLTA’s dependencies but not DeLTA itself. You can install it either with pip install -e . or with conda develop, as explained above in the “(for devs)” tabs.
Troubleshooting#
Problems with tensorflow-estimator or h5py
We have sometimes run into issues where conda would install versions of tensorflow-estimator that did not match the version of the base tensorflow library. If you run into issues with tensorflow-estimator, check which versions got installed by running the following:
(delta_env)$ conda list | grep tensorflow
If the versions of the estimator and the base library are too different, this will cause problems. You can run the following to install the correct version:
(delta_env)$ conda install tensorflow-estimator==2.X
with X replaced by the minor version of your base tensorflow.
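As a rough sketch of what “too different” means here, the rule of thumb is that the major.minor components should agree (the helper below is illustrative, not part of DeLTA):

```python
def versions_match(tf_version: str, estimator_version: str) -> bool:
    # Compare only the major.minor components, e.g. "2.9.1" vs "2.9.0".
    return tf_version.split(".")[:2] == estimator_version.split(".")[:2]

print(versions_match("2.9.1", "2.9.0"))  # True: both in the 2.9 series
print(versions_match("2.9.1", "2.6.0"))  # False: estimator lags behind
```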
Similarly for h5py, sometimes a version that is too recent or too old gets installed. Depending on which version was installed, try:
(delta_env)$ conda install h5py==2.*
or:
(delta_env)$ conda install h5py==3.*
cuDNN (or other libraries) not loading
We have run into OOM errors or some GPU-related libraries failing to load
or initialize on laptops. See the “Limiting GPU memory growth” section on
this tensorflow help page.
Setting the memory_growth_limit parameter in the JSON config file to a fixed value in MB (e.g. 1024, 2048…) should solve the issue.
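For example, the parameter can be written to a JSON config like this (a minimal sketch; the file name my_config.json is a placeholder, use the config file your pipeline actually loads):

```python
import json

# Write a minimal JSON config fragment capping TensorFlow GPU memory at 2 GB.
# "my_config.json" is a placeholder name for illustration only.
config = {"memory_growth_limit": 2048}
with open("my_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Read it back to confirm the value:
with open("my_config.json") as f:
    print(json.load(f)["memory_growth_limit"])  # 2048
```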
OOM - Out of memory (GPU)
On GPU, you might run into memory problems. This is linked to both the batch size and the size of the images. The batch size is straightforward to change: lower the value at the beginning of the training scripts. Note that lower batch sizes may mean slower training convergence or lower performance overall.
The other solution would be to use a smaller image target size. However, if the original training images and masks are, for example, 512×512, downsizing them to 256×256 will reduce the memory footprint but might cause some borders between cells in the binary masks to disappear. Instead, training images should be resized upstream of DeLTA to make sure that your training set still features cell-to-cell borders in the segmentation masks.
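To illustrate why naive downsizing can erase borders, here is a toy sketch (pure Python, not DeLTA code): stride-2 sampling of a mask where two cells are separated by a one-pixel-wide border.

```python
def downsample(mask, factor):
    # Naive stride sampling, which is roughly what nearest-neighbour
    # resizing amounts to: keep every `factor`-th row and column.
    return [row[::factor] for row in mask[::factor]]

# Two cells (1s) separated by a one-pixel-wide border (0s) at column 1:
mask = [
    [1, 0, 1, 1],
    [1, 0, 1, 1],
    [1, 0, 1, 1],
    [1, 0, 1, 1],
]
print(downsample(mask, 2))  # [[1, 1], [1, 1]] -- border gone, cells merged
```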
Another reason why this may happen is that the pipeline is trying to process too many samples at once. Try lowering pipeline_seg_batch, pipeline_track_batch, and pipeline_chunk_size in your config.