High content screening

Synonyms
HCS

Image Analysis of Biological Data using CellProfiler

After the session you will be able to build your own CellProfiler pipeline, including the following steps (also sketched in code after this list):

  • Image data import
  • Object segmentation (e.g. detecting nuclei in an image) using the modules "IdentifyPrimaryObjects" and "IdentifySecondaryObjects"
  • Object feature measurements (e.g. measuring the size, shape and intensity of cells)
  • Export of measurements to a spreadsheet
  • Creating and saving quality control images
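As a rough illustration of what these modules do, here is a minimal Python sketch using scikit-image and pandas rather than CellProfiler itself; the file names, threshold choices and feature list are placeholders. Nuclei are segmented as primary objects, cells are grown from them as secondary objects, per-object features are measured, and the results are written to a spreadsheet together with a quality control overlay.

```python
import numpy as np
import pandas as pd
from skimage import filters, io, measure, morphology, segmentation

# Load a two-channel field of view (placeholder file names).
nuclei_img = io.imread("dapi.tif")       # nuclear stain
cell_img = io.imread("cell_stain.tif")   # cytoplasmic stain

# Rough equivalent of "IdentifyPrimaryObjects": threshold and label nuclei.
nuclei_mask = nuclei_img > filters.threshold_otsu(nuclei_img)
nuclei_mask = morphology.remove_small_objects(nuclei_mask, min_size=50)
nuclei = measure.label(nuclei_mask)

# Rough equivalent of "IdentifySecondaryObjects": grow cells from the
# nuclei seeds by watershed on the cell stain.
cell_mask = cell_img > filters.threshold_otsu(cell_img)
cells = segmentation.watershed(-cell_img.astype(float), markers=nuclei, mask=cell_mask)

# Object feature measurements: size, shape and intensity per cell.
props = measure.regionprops_table(
    cells,
    intensity_image=cell_img,
    properties=("label", "area", "eccentricity", "mean_intensity"),
)

# Export the measurements to a spreadsheet.
pd.DataFrame(props).to_csv("cell_measurements.csv", index=False)

# Quality control image: object boundaries overlaid on the raw image.
qc = segmentation.mark_boundaries(cell_img, cells)
io.imsave("qc_overlay.png", (qc * 255).astype(np.uint8))
```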

Fractal: A framework for processing OME-Zarr high content imaging data

Fractal is a framework to process high-content imaging data at scale and prepare it for interactive visualization. Fractal provides distributed workflows that convert TBs of image data into OME-Zarr files. The platform then processes the 3D image data by applying tasks like illumination correction, maximum intensity projection, 3D segmentation using cellpose and measurements using napari workflows. The pyramidal OME-Zarr files enable interactive visualization in the napari viewer.
These slides are from an early demo of Fractal in November 2022.
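
The snippet below is not part of Fractal; it is a minimal sketch, assuming the ome-zarr and napari-ome-zarr Python packages and a placeholder file name, of how a pyramidal OME-Zarr image of the kind Fractal produces can be written and then browsed interactively in napari.

```python
import numpy as np
import zarr
import napari
from ome_zarr.io import parse_url
from ome_zarr.writer import write_image

# Synthetic 3D stack (z, y, x) standing in for one well of a plate.
volume = np.random.randint(0, 2**16, size=(16, 512, 512), dtype=np.uint16)

# Write a multiscale (pyramidal) OME-Zarr image to disk.
store = parse_url("example_well.zarr", mode="w").store
root = zarr.group(store=store)
write_image(image=volume, group=root, axes="zyx")

# Open it in napari; the napari-ome-zarr plugin reads the pyramid lazily,
# which is what enables interactive viewing of very large datasets.
viewer = napari.Viewer()
viewer.open("example_well.zarr", plugin="napari-ome-zarr")
napari.run()
```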

Description

Phindr3D is a comprehensive shallow-learning framework for automated quantitative phenotyping of three-dimensional (3D) high content screening image data using unsupervised data-driven voxel-based feature learning, which enables computationally facile classification, clustering and data visualization.

Please see our GitHub page and the original publication for details.
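
Phindr3D implements its voxel-based feature learning internally; purely as a hypothetical illustration of the downstream clustering and visualization it enables, the sketch below clusters per-image feature profiles (rows = images, columns = learned feature categories, here random placeholder data) with k-means after standardization.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for per-image feature profiles exported
# from Phindr3D (rows = images/wells, columns = voxel-category frequencies).
rng = np.random.default_rng(0)
profiles = rng.random((200, 40))

# Standardize, cluster, and embed into 2D for plotting.
X = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
embedding = PCA(n_components=2).fit_transform(X)

# Report cluster sizes; `embedding` can be scatter-plotted, coloured by label.
for k in np.unique(labels):
    print(f"cluster {k}: {int((labels == k).sum())} images")
```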

Description

KNIME workflow to visualize a dataset described by multiple quantitative features (e.g. a list of samples or cells, each described by multiple morphological features) as a 3D cloud of points (each point corresponding to one sample/cell) and as a line plot (one line per sample/cell).

For the 3D plot, the workflow uses Principal Component Analysis (PCA) for dimensionality reduction, i.e. it reduces the information for each sample from n features to 3 pseudo-features, which are used as the x, y, z coordinates of that sample. The original features should cover similar value ranges so that the PCA is not biased towards features with large values. One option is to normalize the values (min/max or Z-score normalization).

Also make sure that the resulting principal components capture a reasonable percentage of the original data variance (at least 70%); otherwise the PCA plot will not be representative of the original data distribution. This percentage is shown in the title of the PCA plot.
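
The KNIME nodes handle the normalization and PCA internally; the sketch below reproduces the same computation in Python (scikit-learn and matplotlib assumed, the feature table file name is a placeholder): features are Z-score normalized, reduced to three principal components, checked against the 70% variance guideline described above, and plotted as a 3D cloud of points.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One row per sample/cell, one column per quantitative feature (placeholder file).
features = pd.read_csv("morphological_features.csv").select_dtypes("number")

# Z-score normalization so that large-valued features do not dominate the PCA.
X = StandardScaler().fit_transform(features)

# Reduce from n features to 3 pseudo-features (principal components).
pca = PCA(n_components=3)
coords = pca.fit_transform(X)

explained = pca.explained_variance_ratio_.sum() * 100
if explained < 70:
    print(f"Warning: only {explained:.1f}% of the variance is captured; "
          "the 3D plot may not represent the data distribution well.")

# 3D cloud of points, one point per sample/cell.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2], s=10)
ax.set_title(f"PCA ({explained:.1f}% of variance explained)")
plt.show()
```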

The workflow is interactive: selecting data points in one panel of the figure highlights the corresponding points in the other panel.

It was originally published for the visualization of phenotypic kidney features in zebrafish, but the workflow is generic by design and can be reused for any quantitative feature set. 

KNIME-Workflow