Nguyen Do Trung Chanh
Model Development Manager
College of Engineering and Computer Science
Recent Submissions
- Item: Benchmarking saliency methods for chest X-ray interpretation (2022-10-10)
  Authors: Adriel Saporta; Xiaotong Gui; Ashwin Agrawal; Anuj Pareek; Steven Truong; Chanh Nguyen; Doan Ngo; Jayne Seekins; Francis G. Blankenberg; Andrew Y. Ng; Matthew P. Lungren; Pranav Rajpurkar
  Abstract: Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was the largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
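  The item above evaluates saliency methods such as Grad-CAM against a human benchmark. As a point of reference only, here is a minimal Grad-CAM sketch in PyTorch; it is not the paper's evaluation pipeline, and the DenseNet-121 backbone, the manually re-run classifier head, and the single-label target index are illustrative assumptions (the paper studies a multilabel chest X-ray setting).

```python
# Minimal Grad-CAM sketch: weight the last feature maps by the spatially averaged
# gradient of one class logit, ReLU the weighted sum, and upsample to image size.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam_densenet(model, image, target_index):
    """Grad-CAM for a torchvision DenseNet; the classifier head is re-run manually
    so the feature-map gradient can be obtained with autograd.grad (no hooks)."""
    feats = model.features(image.unsqueeze(0))                 # (1, C, h, w)
    logits = model.classifier(
        F.adaptive_avg_pool2d(F.relu(feats), 1).flatten(1))    # (1, num_classes)
    grads = torch.autograd.grad(logits[0, target_index], feats)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)             # per-channel importance
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))   # weighted feature sum
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

# Usage with an untrained backbone, purely to show the call shape.
model = models.densenet121(weights=None).eval()
heat_map = grad_cam_densenet(model, torch.randn(3, 224, 224), target_index=0)
```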
- Item: SegTransVAE: Hybrid CNN-Transformer with regularization for medical image segmentation (2022)
  Authors: Pham, Quan Dung; Nguyen, Truong Hai; Nguyen, Phuong Nam; Nguyen, N. A. Khoa; Nguyen, D. T. Chanh; Bui, Trung; Truong, Q. H. Steven
  Abstract: Current research on deep learning for medical image segmentation exposes limitations in learning either global semantic information or local contextual information. To tackle these issues, a novel network named SegTransVAE is proposed in this paper. SegTransVAE is built upon an encoder-decoder architecture, exploiting a transformer together with a variational autoencoder (VAE) branch that reconstructs the input images jointly with segmentation. To the best of our knowledge, this is the first method combining the successes of CNNs, transformers, and VAEs. Evaluation on various recently introduced datasets shows that SegTransVAE outperforms previous methods in Dice score and 95% Hausdorff distance while having inference time comparable to a simple CNN-based network. The source code is available at: https://github.com/itruonghai/SegTransVAE.
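  The abstract above describes training segmentation jointly with a VAE branch that reconstructs the input. Below is a hedged sketch of what such a joint objective can look like; the soft-Dice formulation, the loss weights w_recon and w_kl, and the 3D tensor layout are assumptions on our part, not the authors' exact settings (their code is at the linked repository).

```python
# Joint objective sketch: segmentation loss plus a VAE regularizer
# (image reconstruction + KL divergence to a standard normal prior).
import torch
import torch.nn.functional as F

def dice_loss(seg_logits, target_onehot, eps=1e-6):
    """Soft Dice loss; target_onehot has the same (N, C, D, H, W) shape as the logits."""
    probs = torch.softmax(seg_logits, dim=1)
    inter = (probs * target_onehot).sum(dim=(2, 3, 4))
    union = probs.sum(dim=(2, 3, 4)) + target_onehot.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def segtransvae_style_loss(seg_logits, target_onehot, recon, image, mu, logvar,
                           w_recon=0.1, w_kl=0.1):
    seg = dice_loss(seg_logits, target_onehot)                      # segmentation term
    recon_l2 = F.mse_loss(recon, image)                             # VAE reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL to N(0, I)
    return seg + w_recon * recon_l2 + w_kl * kl
```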
- Item: CapNeXt: Unifying capsule and ResNeXt for medical image segmentation (2022)
  Authors: Thanh Huynh; Chanh Nguyen; Khoa Nguyen; Trung Bui; Steven Truong
  Abstract: Capsule Network is a contemporary approach to image analysis that emphasizes part-whole relationships. However, its application to segmentation tasks is limited by training difficulties such as initialization and convergence. In this study, we propose a novel Capsule Network, called CapNeXt, that unifies the Capsule and ResNeXt architectures for medical image segmentation. CapNeXt advances existing capsule-based segmentation models by integrating optimization techniques from Convolutional Neural Networks (CNNs), making training much easier than in other contemporary Capsule-based segmentation methods. Experimental results on two public datasets show that CapNeXt outperforms CNNs and other Capsule architectures in 2D and 3D segmentation tasks by 1% in Dice score. The code will be released on GitHub after acceptance.
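  Since the item above unifies Capsule and ResNeXt ideas, the sketch below shows the two ingredients in isolation: a ResNeXt-style grouped-convolution bottleneck block and the capsule squash nonlinearity. How CapNeXt actually wires these together (routing, channel sizes, 3D variants) is not shown, and the hyperparameters here are placeholders.

```python
# Two building blocks referenced by the abstract, shown separately for illustration.
import torch
import torch.nn as nn

class GroupedResBlock(nn.Module):
    """ResNeXt-style bottleneck: 1x1 reduce -> grouped 3x3 -> 1x1 expand, plus skip."""
    def __init__(self, channels, cardinality=32, width=4):
        super().__init__()
        inner = cardinality * width
        self.block = nn.Sequential(
            nn.Conv2d(channels, inner, 1, bias=False), nn.BatchNorm2d(inner), nn.ReLU(),
            nn.Conv2d(inner, inner, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(),
            nn.Conv2d(inner, channels, 1, bias=False), nn.BatchNorm2d(channels))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.block(x))

def squash(capsules, dim=-1, eps=1e-8):
    """Capsule squash: shrink each vector's length into (0, 1), keep its direction."""
    norm_sq = (capsules ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * capsules / (norm_sq.sqrt() + eps)

# Shape check only.
y = GroupedResBlock(64)(torch.randn(2, 64, 32, 32))   # same shape as the input
caps = squash(torch.randn(2, 16, 8))                  # 16 capsules of dimension 8
```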
- Item: Adaptive Proxy Anchor loss for Deep Metric Learning (2022)
  Authors: Nguyen Phan; Sen Tran; Huy Ta; Soan Duong; Chanh Nguyen; Trung Bui; Steven Truong
  Abstract: Deep metric learning (often simply called metric learning) uses deep neural networks to learn image representations, and is widely used in many applications, e.g., image retrieval and face recognition. Among metric learning approaches, Proxy Anchor combines the advantages of proxy-based and pair-based methods, enabling fast convergence and robustness to noisy labels. However, when training with Proxy Anchor, selecting the margin hyperparameter is important for achieving good performance, and this selection requires expertise and is time-consuming. This paper proposes a novel method that adaptively learns the margin while training with the Proxy Anchor approach. The proposed adaptive Proxy Anchor simplifies hyperparameter tuning while advancing Proxy Anchor. We achieve state-of-the-art results on three public datasets with a noticeably faster convergence time. Our code is available at https://github.com/tks1998/Adaptive-Proxy-Anchor
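  The abstract above proposes learning the Proxy Anchor margin during training instead of tuning it by hand. Below is a hedged PyTorch sketch of a proxy-anchor-style loss in which the margin is an nn.Parameter; the scaling factor alpha, the initialization, and the exact adaptive scheme are assumptions on our part, and the authors' formulation lives in the linked repository.

```python
# Proxy-Anchor-style loss with a learnable margin (delta) instead of a fixed one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveProxyAnchorLoss(nn.Module):
    def __init__(self, num_classes, embed_dim, alpha=32.0, init_margin=0.1):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim) * 0.01)
        self.margin = nn.Parameter(torch.tensor(init_margin))  # learned with the model
        self.alpha = alpha

    def forward(self, embeddings, labels):
        # Cosine similarity between every embedding and every class proxy: (B, C).
        sim = F.normalize(embeddings) @ F.normalize(self.proxies).t()
        pos_mask = F.one_hot(labels, self.proxies.size(0)).bool()

        # Pull each proxy toward its positive embeddings, push it from negatives.
        pos_term = torch.log1p(
            (torch.exp(-self.alpha * (sim - self.margin)) * pos_mask).sum(dim=0))
        neg_term = torch.log1p(
            (torch.exp(self.alpha * (sim + self.margin)) * ~pos_mask).sum(dim=0))

        proxies_with_pos = pos_mask.any(dim=0)
        return pos_term[proxies_with_pos].sum() / proxies_with_pos.sum().clamp(min=1) \
            + neg_term.sum() / self.proxies.size(0)

# Call shape only; the margin and proxies must be passed to the optimizer
# alongside the backbone parameters so the margin is actually learned.
loss_fn = AdaptiveProxyAnchorLoss(num_classes=100, embed_dim=512)
loss = loss_fn(torch.randn(32, 512), torch.randint(0, 100, (32,)))
```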