
Description

ZeroCostDL4Mic: exploiting Google Colab to develop a free and open-source toolbox for Deep-Learning in microscopy

ZeroCostDL4Mic is a collection of self-explanatory Jupyter Notebooks for Google Colab that feature an easy-to-use graphical user interface. They are meant to quickly get you started with using deep learning for microscopy.
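
As a small, hedged illustration of the Colab workflow these notebooks typically rely on (each notebook ships its own ready-made cells for this), the first step is usually to mount your Google Drive so that training data and trained models can be read and saved:

    # Illustrative only: mount Google Drive inside a Colab session.
    from google.colab import drive

    drive.mount('/content/drive')  # prompts for Google authentication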

Description

btrack is a Python library for multi-object tracking, used to reconstruct trajectories in crowded fields. It combines a residual U-Net model with a classification CNN to allow accurate instance segmentation of cell nuclei. To track cells over time and through cell divisions, btrack uses a Bayesian cell tracking methodology that takes input features from the images and enables the retrieval of multi-generational lineage information from thousands of hours of live-cell imaging data.
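
A minimal sketch of tracking pre-segmented nuclei with btrack is shown below; the file names and volume bounds are placeholders, and exact method names may differ slightly between btrack versions:

    import btrack
    from skimage import io

    # Labelled segmentation stack (T, Y, X); the file name is a placeholder.
    segmentation = io.imread("nuclei_labels.tif")
    objects = btrack.utils.segmentation_to_objects(segmentation)

    with btrack.BayesianTracker() as tracker:
        tracker.configure("cell_config.json")    # motion/hypothesis model parameters
        tracker.append(objects)                  # add the detected objects
        tracker.volume = ((0, 1600), (0, 1200))  # imaging volume in pixels (example values)
        tracker.track(step_size=100)             # run the Bayesian tracker
        tracker.optimize()                       # global optimisation; resolves divisions
        tracks = tracker.tracks                  # trajectories with lineage information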

Description

Open-source, deep-learning-based framework for multi-animal pose tracking. It can track any number of animals and has a labeling/training GUI for learning and proofreading.
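
A minimal sketch of running already-trained SLEAP models on a video from Python; the model folder names and video path are placeholders, and the high-level API may differ between SLEAP versions (see the SLEAP documentation for your installed release):

    import sleap

    video = sleap.load_video("animals.mp4")  # placeholder path
    predictor = sleap.load_model(["models/centroid", "models/centered_instance"])
    labels = predictor.predict(video)        # multi-animal pose predictions
    labels.save("predictions.slp")           # open later in the GUI for proofreading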

Description

Algorithm and software for extracting animal trajectories from videos of collectives of up to 100 individuals. idtracker.ai uses two convolutional networks: one for animal identification and another for detecting when animals touch or cross each other.
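
The two-network idea can be sketched schematically as follows; this is a hypothetical illustration of the concept, not idtracker.ai's actual architecture or code, and all sizes and class conventions below are assumptions:

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Generic small convolutional classifier, used purely for illustration."""
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 12 * 12, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    crossing_detector = SmallCNN(n_classes=2)   # single animal vs. crossing (assumed)
    identifier = SmallCNN(n_classes=100)        # one class per individual, up to 100

    crop = torch.randn(1, 1, 48, 48)            # a hypothetical 48x48 grayscale blob crop
    if crossing_detector(crop).argmax(1).item() == 0:   # assume class 0 = single animal
        identity = identifier(crop).argmax(1).item()    # assign the crop to an individual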

Description

The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy images. This end-to-end framework harnesses multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, while simultaneously leveraging deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.
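
One common way to combine these two ingredients is a shared encoder with task-specific heads plus an adversarial domain classifier trained through a gradient-reversal layer. The sketch below is a generic illustration of that pattern, not the authors' implementation; all names, shapes, and loss weights are assumptions:

    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity in the forward pass, sign-flipped gradients in the backward pass,
        so the domain classifier pushes the encoder towards domain-invariant features."""
        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x

        @staticmethod
        def backward(ctx, grad):
            return -ctx.alpha * grad, None

    def training_step(encoder, seg_head, aux_head, domain_head,
                      src_img, src_mask, src_aux_target, tgt_img, alpha=1.0):
        # Supervised multi-task losses on the annotated source domain (brightfield).
        feats_src = encoder(src_img)
        loss_seg = nn.functional.binary_cross_entropy_with_logits(seg_head(feats_src), src_mask)
        loss_aux = nn.functional.mse_loss(aux_head(feats_src), src_aux_target)

        # Unsupervised domain loss: classify source vs. target (fluorescence) features
        # through the gradient-reversal layer, so no target-domain masks are needed.
        feats_tgt = encoder(tgt_img)
        feats = torch.cat([feats_src, feats_tgt], dim=0)
        dom_labels = torch.cat([torch.zeros(len(src_img)), torch.ones(len(tgt_img))]).long()
        dom_logits = domain_head(GradientReversal.apply(feats, alpha))
        loss_dom = nn.functional.cross_entropy(dom_logits, dom_labels)

        return loss_seg + loss_aux + loss_dom   # equal weighting, purely as an example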

The entry point to the codebase is the main.py file. The user has the option to:

  • Train the network on their own dataset
  • Load a pre-trained model and use it for inference on their own data

Note: the provided pretrained model was trained on 256x256 images; results at other resolutions could require fine-tuning. The model was trained (supervised) on brightfield data and domain-adapted to fluorescence data. The results are saved as 'inference.png'.
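
A small, hedged sketch related to the note above: since the pretrained model expects 256x256 inputs, images can be resized before inference, and the saved 'inference.png' can be inspected like any other image. The input file names here are placeholders, not part of the repository's documented interface:

    import numpy as np
    from PIL import Image

    img = Image.open("my_brightfield_image.png").convert("L")  # placeholder input image
    img.resize((256, 256)).save("my_brightfield_image_256.png")

    # After running main.py in inference mode:
    result = np.array(Image.open("inference.png"))
    print(result.shape, result.dtype)
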
daman