Seminar
We organized an image analysis seminar and a follow-up workshop on April 6th, 2022.
Download Seminar Slides
Workshop
In this workshop, we will work through the following exercises:
- Segmentation using global thresholding, local thresholding and Deep Learning (StarDist)
- Cell tracking using StarDist and TrackMate
- Denoising using Noise2Void
- Bonus – 3D segmentation using StarDist and TrackMate
Installing required plugins in Fiji
We will add the following three update sites in Fiji to install all the plugins required for this workshop:
Step 1: Start Fiji.
Step 2: Select Help > Update… from the menu bar.
Step 3: Click on the button “Manage update sites”.
Step 4: Scroll down the list and tick the checkboxes for update sites: CSBDeep (shown below), StarDist and TrackMate-StarDist, then click the Close button.
Step 5: Click on “Apply changes” to install the plugins.
Step 6: Restart Fiji. The StarDist plugin should now be available under Plugins › StarDist › StarDist 2D, and the Noise2Void plugin under Plugins › CSBDeep › N2V.
Workshop Exercise 1: Segmentation
Download TIF file
Human HT29 colon cancer cells, Image from Broad Bioimage Benchmark Collection
Global segmentation
Open the above image by dragging it into the Fiji window and run the Threshold command: Image > Adjust > Threshold...
Choose different thresholding methods (such as Default, Huang, Otsu, etc.) from the drop-down list and check how well they perform on your image.
Once you are satisfied with a particular method, or after manually selecting the lower and upper threshold values with the sliders, click the Apply button to generate a thresholded image.
Note: all thresholding methods (such as Default, Huang, Otsu, etc.) can be tested at once by using Image > Adjust > Auto Threshold.
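If you prefer scripting, the same global thresholding can be run from Fiji's Script Editor with the language set to Python (Jython). A minimal sketch; the method name is any of the entries from the Threshold dialog:

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()  # the currently open image

# Apply a global auto-threshold (here: Otsu, assuming a dark background)
# and convert the result to a binary mask.
IJ.setAutoThreshold(imp, "Otsu dark")
IJ.run(imp, "Convert to Mask", "")
```

Swap "Otsu" for "Default", "Huang", etc. to compare methods.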
Local segmentation
Select your original image and run the command: Image > Adjust > Auto Local Threshold.
Run with the "Try all" option to check which method gives the best result. For this image, the best segmentation is achieved with the Phansalkar method.
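The scripted equivalent, following the options string the macro recorder produces for this plugin. A sketch: the radius and the two method-specific parameters below are assumed defaults that you should tune for your image:

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()

# Auto Local Threshold with the Phansalkar method; the radius and the two
# method-specific parameters are assumptions -- adjust for your data.
IJ.run(imp, "Auto Local Threshold",
       "method=Phansalkar radius=15 parameter_1=0 parameter_2=0 white")
```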
Deep Learning based segmentation using StarDist
Select your original image and run the command: Plugins › StarDist › StarDist 2D
In the follow-up dialog, choose Model: Versatile (fluorescent nuclei) and click the Set optimized thresholds button at the bottom. Keep the other settings at their defaults and click OK.
A segmentation label image will be generated with the nuclei ROIs added to the ROI Manager.
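StarDist 2D can also be driven from a script through the generic Command From Macro mechanism, following the macro-recordable pattern described in the StarDist documentation. A sketch: 'input' must match your image window's title, and the probability/NMS thresholds shown are the plugin defaults rather than the optimized ones:

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()

# Run StarDist 2D on the active image; 'input' is the window title, and the
# threshold values are plugin defaults (assumptions -- adjust as needed).
args = ("command=[de.csbdresden.stardist.StarDist2D], "
        "args=['input':'%s', 'modelChoice':'Versatile (fluorescent nuclei)', "
        "'normalizeInput':'true', 'percentileBottom':'1.0', 'percentileTop':'99.8', "
        "'probThresh':'0.5', 'nmsThresh':'0.4', 'outputType':'Both', "
        "'nTiles':'1', 'excludeBoundary':'2', 'roiPosition':'Automatic', "
        "'verbose':'false', 'showCsbdeepProgress':'false', "
        "'showProbAndDist':'false'], process=[false]") % imp.getTitle()
IJ.run("Command From Macro", args)
```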
Workshop Exercise 2: Tracking cancer cell migration using TrackMate plugin
Cancer cell migration, image from Zenodo.
Download TIF file
- Open the above time-lapse sequence in Fiji and run the command: Plugins › Tracking › TrackMate.
- You will be presented with the TrackMate window. Click Next.
- Select StarDist detector from the drop-down menu (image A below). Click Next.
- You can now click on Preview to check how well StarDist is detecting the nuclei on the current slice. If the detections look fine, click Next to detect nuclei in the whole time-lapse sequence.
- Keep clicking the Next button until you reach the Select a tracker window. Select Simple LAP tracker and click Next.
In the settings (image B below), choose Linking max distance=30 pixels, Gap-closing max distance=10 pixels and Gap-closing max frame gap=10. Click Next.
Note: if you want to track splitting and/or merging events, choose the LAP tracker instead and tick the splitting/merging checkboxes under its settings.
- Keep clicking Next until you reach the Display options window (image C below). Here, various display options for the tracks can be selected. Choose Show tracks backward in time. Play with the different settings and scroll through the stack to see the effect of these changes.
Please explore the three tabs at the bottom, TrackScheme, Tracks and Spots. A wealth of statistics is available there, such as the raw values for each nucleus in each frame of the time-lapse. All the statistics can be exported to a CSV file.
- Keep clicking Next until you reach the last window, called Select an action. Select Capture overlay and click Execute to generate a time-lapse movie with the spots and tracks overlaid on top of the original data.
- A label time-lapse movie can also be generated by selecting Export label image from the drop-down list and clicking Execute. A scripted version of this whole exercise is sketched below.
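For batch processing, the same detect-and-track pipeline can be written with TrackMate's Jython scripting API, which is documented on the ImageJ wiki. A minimal sketch, assuming a TrackMate version from around 2022, where Settings takes the image in its constructor and the LAP tracker factory lives in the sparselap package (newer releases moved it to fiji.plugin.trackmate.tracking.jaqaman):

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ
from fiji.plugin.trackmate import Model, Settings, TrackMate
from fiji.plugin.trackmate.stardist import StarDistDetectorFactory
from fiji.plugin.trackmate.tracking.sparselap import SparseLAPTrackerFactory

imp = IJ.getImage()  # the open time-lapse sequence

model = Model()
settings = Settings(imp)  # older versions: Settings() then settings.setFrom(imp)

# Detector: StarDist (requires the TrackMate-StarDist update site).
settings.detectorFactory = StarDistDetectorFactory()
settings.detectorSettings = settings.detectorFactory.getDefaultSettings()

# Tracker: LAP tracker with splitting/merging left at their defaults (off),
# which matches the Simple LAP tracker used in this exercise.
settings.trackerFactory = SparseLAPTrackerFactory()
settings.trackerSettings = settings.trackerFactory.getDefaultSettings()
settings.trackerSettings['LINKING_MAX_DISTANCE'] = 30.0
settings.trackerSettings['GAP_CLOSING_MAX_DISTANCE'] = 10.0
settings.trackerSettings['MAX_FRAME_GAP'] = 10

trackmate = TrackMate(model, settings)
if not (trackmate.checkInput() and trackmate.process()):
    print(trackmate.getErrorMessage())
else:
    print('Found %d tracks' % model.getTrackModel().nTracks(True))
```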
Workshop Exercise 3: Denoising using Noise2Void plugin
Original | Noise2Void
FISH in C. elegans, Spinning disk confocal, image courtesy of ABRF/LMRG Image Analysis Study.
Download TIF file
- Open the above Z-stack in Fiji and run the command: Plugins › CSBDeep › N2V › N2V train + predict.
- You will be presented with the N2V train + predict window. Choose the following options:
- Axes of prediction input: XYZ
- Number of epochs: 10
- Number of steps per epoch: 10
- Batch size per step: 64
- Patch shape: 64
- Neighborhood radius: 5
- Click OK. A window showing the progress of the different steps (Preparation, Training and Prediction) will open. As the training progresses, training loss (red) and validation loss (blue) curves are displayed in the window (see below). If training goes well, both curves will decrease over successive training cycles (epochs) and stabilize around a minimum loss value (~1.0 in the image below). The training loss decreases from the beginning, whereas the validation loss usually rises at first, then comes down and approaches the red curve.
If the red and blue curves have not stabilized at a minimum loss value by the end of training, increase the number of epochs to 20 or 30 and run the N2V train + predict command again.
- After the program finishes, it generates a denoised Z-stack from the trained model. You might need to run Image › Adjust › Brightness/Contrast... and hit Reset to adjust the display of the denoised image (see the script sketch at the end of this exercise).
- The Deep Learning model you just trained can be saved as a .zip file (to be used for prediction in the future) by clicking File actions > Save to... (see below).
- The trained model can also be applied immediately to a single noisy image or a folder of noisy images by using Predict > Single image or stack or Predict > Folder of images or stacks, respectively.
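The display-range reset mentioned above can also be scripted; a tiny sketch, assuming the denoised stack is the active image:

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()  # the denoised Z-stack

# Equivalent of Image > Adjust > Brightness/Contrast... followed by Reset:
# recompute the display range from the image's actual pixel min/max.
imp.resetDisplayRange()
imp.updateAndDraw()
```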
Bonus Workshop Exercise 4: 3D segmentation using TrackMate (StarDist)
Spheroid, Z-stack | Z-stack segmentation | 3D rendering
3D stack of cells in a spheroid from Zenodo.
Download TIF file
- Open the above Z-stack in Fiji and run the TrackMate plugin, just as in exercise 2.
- Since this is a Z-stack and TrackMate works on time-lapse sequences, the Z and T dimensions need to be swapped. TrackMate detects this automatically and asks whether to swap the dimensions. Click Yes.
- For the detector, choose StarDist detector from the drop-down menu.
- For the tracker, choose Simple LAP tracker with the following tracking settings:
Linking max distance=5 pixels (a lower value than in exercise 2, since the cells are not moving)
Gap-closing max distance=10 pixels
Gap-closing max frame gap=0 (since cells do not disappear from frame to frame)
- Keep clicking the Next button until you reach the Display options window (image below). Choose:
Color spots by: Track index
Uncheck Display tracks
Every cell will now be outlined in a different color. Scroll through the stack to check the accuracy of the results. If the results are not optimal, go back to the detection and/or tracking steps by clicking the Previous button (green left arrow) and changing the settings under the detector and tracker.
- On the last TrackMate window, called Select an action, generate a label image by selecting Export label image from the drop-down list and clicking Execute. It will generate a grayscale Z-stack.
- Apply colors to cells with a LUT: Image › Lookup Tables › glasbey_inverted.
- To create a 3D rendering, swap the Z and T dimensions back to their original values by selecting Image › Properties... and entering Z=64 and T=1. Click OK. (This and the previous LUT step are also covered by the script sketch at the end of this exercise.)
- Generate the 3D rendering using the 3D Viewer plugin: Plugins › 3D Viewer. Select Resampling factor = 1. If a window pops up asking to convert the Z-stack to an 8-bit or RGB image, click Yes. In the ImageJ 3D Viewer window, left-click and drag to rotate and inspect the volume.
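A minimal sketch of the LUT and dimension-swap steps as a script, assuming the exported label image is the active window and has 64 slices as in this example:

```python
# Jython script (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()  # the exported grayscale label Z-stack

# Apply the glasbey_inverted LUT so each label gets a distinct color.
IJ.run(imp, "glasbey_inverted", "")

# Swap the T dimension back to Z (64 slices, 1 frame in this example)
# before opening Plugins > 3D Viewer interactively.
IJ.run(imp, "Properties...", "channels=1 slices=64 frames=1")
```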
Well done if you made it this far!
For any questions, please contact Ved Sharma.