Ganglia Models

Ganglia model benchmark v3 vs v2

Ganglia Model v3

We recently updated our enteric ganglia model to v3. The v3 architecture is more powerful than v2, and we have added more training data, specifically from .

The median IoU for v3 (0.83 ± 0.08) is higher than for v2 (0.79 ± 0.09) when evaluated on the test dataset (mean ± std, 20 images).

Ganglia model v2 vs v3
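
For reference, IoU (intersection over union) measures the overlap between a predicted ganglia mask and the ground-truth mask. A minimal sketch of how it can be computed for a pair of binary masks (the function name and the eps smoothing term are illustrative, not part of our evaluation code):

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """IoU between two binary masks (arrays of 0/1 or booleans)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    # eps avoids division by zero when both masks are empty
    return float((intersection + eps) / (union + eps))
```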

Although this may not look like a large improvement, in v3 we replaced the original UNet (v2) with a Feature Pyramid Network (FPN) and a ResNet101 backbone. This architecture offers several improvements (a construction sketch follows the list):

  • Stronger feature extraction: ResNet101 captures complex patterns better, so it may work well for challenging datasets, such as those with high noise or uneven illumination.

  • Multi-scale fusion: FPN integrates features at multiple resolutions, improving segmentation of enteric ganglia of different sizes.

  • Improved generalization: The model should perform better than v2 across varying imaging conditions and datasets.
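
As a minimal sketch, assuming the segmentation-models-pytorch library described in the next section, an FPN with a ResNet101 backbone can be instantiated as below; the encoder weights, input channels, and number of classes shown are illustrative assumptions rather than the exact configuration of the released v3 model.

```python
import segmentation_models_pytorch as smp

# FPN decoder on top of a ResNet101 encoder.
model = smp.FPN(
    encoder_name="resnet101",    # the v3 backbone
    encoder_weights="imagenet",  # start from pretrained features (assumption)
    in_channels=1,               # single-channel images (assumption)
    classes=1,                   # one binary "ganglia" mask
    activation=None,             # raw logits; pair with a with-logits loss
)
```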


Testing different architectures

We used PyTorch instead of TensorFlow and adapted code from this repository for training our new ganglia model. We wanted the flexibility to test multiple architectures, so the segmentation-models-pytorch library was incorporated into the code. The two architectures that worked well were:

  • DeepLabv3Plus

  • Feature Pyramid Network

A combined Dice and BCE loss seemed to work best, possibly because BCE helps with pixel-wise classification while Dice aids overall segmentation quality. However, we had to test different weight combinations.
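
As a minimal sketch of such a combined loss, assuming segmentation-models-pytorch's DiceLoss and PyTorch's BCEWithLogitsLoss; the class name and the way the two terms are weighted below are illustrative rather than our exact training code.

```python
import torch.nn as nn
from segmentation_models_pytorch.losses import DiceLoss

class CombinedLoss(nn.Module):
    """Weighted sum of Dice loss and BCE-with-logits loss for binary masks."""

    def __init__(self, dice_weight: float = 0.6, bce_weight: float = 0.4):
        super().__init__()
        self.dice = DiceLoss(mode="binary", from_logits=True)
        self.bce = nn.BCEWithLogitsLoss()
        self.dice_weight = dice_weight
        self.bce_weight = bce_weight

    def forward(self, logits, targets):
        # logits: raw model outputs; targets: float masks in {0, 1}, same shape
        return (self.dice_weight * self.dice(logits, targets)
                + self.bce_weight * self.bce(logits, targets))
```

For example, CombinedLoss(0.1, 0.9) would correspond to the _0.1_0.9 weight combinations in the graphs below.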

The graphs below show the IoU and Dice scores for each network and weight combination. The x-axis labels follow the syntax NETWORK_dice_weight_BCE_weight. So, FPN_0.1_0.9 means the FPN network with a 0.1 weighting for Dice and a 0.9 weighting for BCE.

IoU for each network and weight combination
Dice score for each network and weight combination

We chose FPN_0.6_0.4 even though other combinations had higher scores, as it did a better job of separating ganglia in some challenging images. All combinations perform fairly well, so this choice will be re-evaluated with new data at a future stage.

Example segmentation, showing how FPN_0.6_0.4 separates ganglia accurately (yellow arrow). However, both models separate areas that were joined in the ground truth data (light green).

Comparing ganglia segmentation

However, there is a lot of subjectivity in defining ganglia, and it can get very tricky, especially in the proximal colon. We recommend combining the number of cells per ganglion with spatial analysis in GAT.
