Hua Binh Son
Recent Submissions
- Global Context Aware Convolutions for 3D Point Cloud Understanding (2020). Zhiyuan Zhang; Son Hua; Wei Chen; Yibin Tian; Sai-Kit Yeung. Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks, thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially in data acquired from 3D scanning. Recent works show that it is possible to design rotation-invariant point cloud convolutions, but such methods generally do not perform as well as convolutions that are only translation invariant. We found that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by point cloud convolutions are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood, in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution then transforms the points and anchor features into final rotation-invariant features. We conduct experiments on point cloud classification, part segmentation, shape retrieval, and normal estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations. (The local-reference-frame idea is illustrated in the first sketch after this list.)
- Minimal Adversarial Examples for Deep Learning on 3D Point Clouds (2021). Jaeyeon Kim; Son Hua; Duc Nguyen; Sai-Kit Yeung. With recent developments in convolutional neural networks, deep learning for 3D point clouds has shown significant progress in various 3D scene understanding tasks, e.g., object recognition and semantic segmentation. In a safety-critical environment, however, it is not well understood how vulnerable such deep learning models are to adversarial examples. In this work, we explore adversarial attacks on point cloud-based neural networks. We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies. Our method generates adversarial examples by attacking the classification ability of point cloud-based networks while considering the perceptibility of the examples and ensuring a minimal level of point manipulation. Experimental results show that our method achieves state-of-the-art performance, with attack success rates above 89% and 90% on synthetic and real-world data respectively, while manipulating only about 4% of the total points. (The attack objective is illustrated in the second sketch after this list.)
- POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples (2021). Duong H. Le; Khoi D. Nguyen; Khoi Nguyen; Quoc-Huy Tran; Rang Nguyen; Binh-Son Hua. In this work, we propose to leverage out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples (e.g., from base classes) to drive the classifier to avoid irrelevant features, by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures. Our code is available at https://github.com/VinAIResearch/poodle. (The push-pull objective is illustrated in the third sketch after this list.)
- Neural Sequence Transformation (2021). Sabyasachi Mukherjee; Sayan Mukherjee; Son Hua; Nobuyuki Umetani; Daniel Meister. Monte Carlo integration is a technique for numerically estimating a definite integral by stochastically sampling its integrand. These samples can be averaged to make an improved estimate, and the progressive estimates form a sequence that converges to the integral value in the limit. Unfortunately, the sequence of Monte Carlo estimates converges at a rate of O(1/√n), where n denotes the sample count, effectively slowing down as more samples are drawn. To overcome this, we can apply sequence transformation, which transforms one converging sequence into another with the goal of accelerating the rate of convergence. However, analytically finding such a transformation for Monte Carlo estimates can be challenging, due to both the stochastic nature of the sequence and the complexity of the integrand. In this paper, we propose to leverage neural networks to learn sequence transformations that improve the convergence of the progressive estimates of Monte Carlo integration. We demonstrate the effectiveness of our method on several canonical 1D integration problems as well as applications in light transport simulation. (A classical sequence-transformation example appears in the fourth sketch after this list.)
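First sketch (global context aware convolutions): the core step is expressing each neighborhood in a reference frame built from the points themselves, so that coordinates in that frame are unchanged when the whole cloud rotates. Below is a minimal illustration using a distance-weighted PCA frame; the weighting scheme, the sign disambiguation rule, and all names are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def local_reference_frame(nbrs, center, weights):
    # Eigen-decompose a weighted covariance of the neighborhood; the
    # eigenvectors form a frame that co-rotates with the input cloud.
    diff = nbrs - center                          # (k, 3) offsets
    cov = (weights[:, None] * diff).T @ diff      # weighted 3x3 covariance
    _, vecs = np.linalg.eigh(cov)                 # eigenvalues ascending
    frame = vecs[:, ::-1]                         # principal axis first
    mean_dir = (weights[:, None] * diff).sum(axis=0)
    for i in range(3):                            # resolve sign ambiguity
        if frame[:, i] @ mean_dir < 0:
            frame[:, i] *= -1
    return frame

# Toy usage: coordinates expressed in the frame are rotation invariant.
rng = np.random.default_rng(0)
nbrs = rng.normal(size=(16, 3))
center = nbrs.mean(axis=0)
weights = np.exp(-np.linalg.norm(nbrs - center, axis=1))  # assumed weighting
frame = local_reference_frame(nbrs, center, weights)
local_coords = (nbrs - center) @ frame            # input to the convolution
```

The paper additionally weights the frame with global context and decomposes the neighborhood into bins with anchor points, which this sketch does not cover.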
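Second sketch (minimal adversarial examples): the abstract describes trading off misclassification against perceptibility. Here is a hedged PyTorch sketch of that kind of objective, using a Carlini-Wagner-style margin rather than the paper's unified formulation; the model interface, hyperparameters, and the omission of a sparsity constraint are all simplifications.

```python
import torch

def attack_point_cloud(model, pts, label, steps=200, lr=1e-2, lam=1.0):
    # Optimize a per-point perturbation so the classifier stops predicting
    # `label`, while keeping the perturbation small (perceptibility proxy).
    delta = torch.zeros_like(pts, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((pts + delta).unsqueeze(0))[0]         # (num_classes,)
        others = torch.cat([logits[:label], logits[label + 1:]])
        margin = (logits[label] - others.max()).clamp(min=0)  # > 0 while correct
        size = delta.norm(dim=1).mean()                       # mean point shift
        loss = size + lam * margin                            # perceptibility + attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (pts + delta).detach()
```

Restricting the manipulation to a small fraction of points, as the paper does (about 4%), would additionally require a sparsity constraint or a hard selection of the most influential points.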
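Third sketch (POODLE): the objective is a push-pull over class prototypes, pulling in-distribution features toward their prototype and pushing out-of-distribution features away. A minimal sketch assuming squared Euclidean distances and a balancing weight `alpha`; the released code at the repository linked above is authoritative.

```python
import torch

def poodle_style_loss(prototypes, in_feats, in_labels, ood_feats, alpha=0.5):
    # Pull support/query features toward their class prototype ...
    pull = ((in_feats - prototypes[in_labels]) ** 2).sum(dim=1).mean()
    # ... and push out-of-distribution features away from every prototype.
    push = torch.cdist(ood_feats, prototypes).pow(2).mean()
    return pull - alpha * push

# Toy 5-way, 5-shot episode with base-class samples as the OOD set.
protos = torch.randn(5, 64)
support = torch.randn(25, 64)
labels = torch.arange(5).repeat_interleave(5)
ood = torch.randn(40, 64)
loss = poodle_style_loss(protos, support, labels, ood)
```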
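Fourth sketch (neural sequence transformation): to make the setup concrete, the code below forms the progressive Monte Carlo estimates of ∫₀¹ x² dx = 1/3 and applies Aitken's delta-squared process, a classical (non-neural) sequence transformation, in place of the learned transformation the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Progressive Monte Carlo estimates: the n-th entry averages n samples,
# and its error shrinks like O(1/sqrt(n)).
samples = rng.random(10_000) ** 2
estimates = np.cumsum(samples) / np.arange(1, samples.size + 1)

def aitken(s):
    # Aitken's delta-squared process: accelerates many smoothly
    # converging sequences by extrapolating from three consecutive terms.
    s0, s1, s2 = s[:-2], s[1:-1], s[2:]
    denom = s2 - 2.0 * s1 + s0
    return s2 - (s2 - s1) ** 2 / np.where(denom == 0.0, np.inf, denom)

accelerated = aitken(estimates)
print(abs(estimates[-1] - 1 / 3), abs(accelerated[-1] - 1 / 3))
```

On a noisy sequence like this one, the classical transform is unstable, which is precisely the difficulty the paper addresses by learning the transformation instead.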