Surface Normal Estimation in Point Clouds

A survey on surface normal estimation in 3D point clouds

November 14, 2020 • Pratulya Bubna

Surface Normals

Have you ever encountered a semi-rendered object in a game? Perhaps you were able to see right through the inside of a box, even though from the outside it seemed solid? That happens sometimes because of rendering tricks that game engines use.

This is usually because, intuitively enough, polygon surfaces that aren't facing the camera are not rendered, which speeds up frame generation. Moreover, the saved computational resources can be spent optimizing the visible surfaces.

What determines a surface's orientation is the normal to it at a point. In 3D computer graphics, it is of great utility to determine a surface's orientation toward a light source. This piece of information drives lighting computations, e.g., shading and other visual effects.
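As a concrete illustration of such a lighting computation, here is a minimal sketch of diffuse (Lambertian) shading, the simplest model in which a surface normal drives brightness. The function name and setup are illustrative, not from the original text.

```python
import numpy as np

def lambert_intensity(normal, light_dir):
    """Diffuse (Lambertian) intensity: the cosine of the angle between
    the surface normal and the direction toward the light, clamped to
    zero for surfaces facing away from the light."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return max(0.0, float(np.dot(n, l)))

# A surface facing the light is fully lit; one facing away is dark.
print(lambert_intensity([0, 0, 1], [0, 0, 1]))   # 1.0
print(lambert_intensity([0, 0, 1], [0, 0, -1]))  # 0.0
```

This is why the normal matters: flip it, and the same surface point goes from fully lit to black.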

A normal to a surface at a point P is a vector perpendicular to the tangent plane of the surface at P (see the below-left image).
Fig 1: (left) The normal to a surface at a point coincides with the normal to the tangent plane of the surface at that point (right) Normals to a curved surface (credits: Wikipedia https://en.wikipedia.org/wiki/Normal_(geometry))

Normal Estimation in Point Clouds

Introduction

A point cloud from a dataset (say, the PCPNet dataset) represents a set of points sampled from an underlying surface. Point cloud data, however, comes with inherent imperfections such as varying sampling density, noise, and missing data.

A point cloud is a set of data points in a space that represents a 3D shape or object. Each point has its set of X,Y,Z coordinates. https://en.wikipedia.org/wiki/Point_cloud
point cloud (left) sampled from the original surface (right)

Problem Statement: Given a point cloud, process it directly to infer the surface normals.
Note that this problem is different from processing a point cloud indirectly, where it is first converted to an intermediate representation, e.g., a mesh, and then processed.

More precisely, the problem requires us to infer the underlying surface from the point cloud representation, from which the normals can then be estimated at the given points.

Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface at that point.
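To make "trivial" concrete: for a surface given analytically, the normal follows directly from the geometry. A hedged sketch for the simplest case, a sphere, where the normal at a surface point is just the normalized vector from the center to that point (the function name is illustrative):

```python
import numpy as np

def sphere_normal(p, center=(0.0, 0.0, 0.0)):
    """Unit outward normal of a sphere at a surface point p: the gradient
    of F(x) = ||x - c||^2 - r^2 points radially outward, so the normal is
    the normalized vector from the center c to p."""
    v = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    return v / np.linalg.norm(v)

# At the "north pole" of a sphere centered at the origin, the normal is +z.
print(sphere_normal([0.0, 0.0, 2.0]))
```

With only a discrete point sample and no analytic surface, this shortcut is unavailable, which is exactly what makes the point-cloud setting hard.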

Making use of local surface properties such as normals and curvature as additional features for point clouds has been shown to improve downstream tasks such as reconstruction, segmentation, and denoising.

issues inherent to point cloud data while estimating normals (credits: Boulch, Alexandre et al. "Deep Learning for Robust Normal Estimation in Unstructured Point Clouds" )


Traditional Approaches

The traditional approaches to estimating the normal at a point P involve fitting a surface to a patch of local neighborhood points around P and then calculating the normal analytically. For instance, one classical method utilizes PCA to fit a low-order surface on the r-ball neighborhood surrounding P, and outputs the eigenvector with the minimum eigenvalue as the normal at P. In other words, the normal at P is in the direction of least variance. (PCA finds an orthogonal basis that best represents the data, with eigenvectors pointing in the directions of maximum variance.)

Fitting a surface to a patch (credits: Ben-Shabat et al. )
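The PCA recipe above can be sketched in a few lines of numpy: gather the r-ball neighborhood, diagonalize its covariance, and return the eigenvector of the smallest eigenvalue. This is a minimal illustration, not a production implementation (no handling of empty neighborhoods or orientation).

```python
import numpy as np

def estimate_normal_pca(points, query, radius):
    """Estimate the normal at `query` via PCA over its r-ball neighborhood:
    the eigenvector of the neighborhood covariance with the smallest
    eigenvalue is the direction of least variance, taken as the normal."""
    pts = np.asarray(points, dtype=float)
    nbrs = pts[np.linalg.norm(pts - query, axis=1) <= radius]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered / len(nbrs)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-eigenvalue eigenvector

# Noisy samples from the plane z = 0: the estimated normal should be ~±z.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       rng.normal(0, 0.01, 200)])
n = estimate_normal_pca(pts, np.array([0.0, 0.0, 0.0]), radius=0.5)
print(np.abs(n))  # close to [0, 0, 1]
```

Note the sign ambiguity: PCA gives a line, not an arrow, which is exactly the unoriented-normal issue discussed below.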

Other approaches try fitting higher-order surfaces such as spheres or jets. There is also work drawing on ideas from Voronoi diagrams to achieve sharper results.

Given the orientation of a normal—a global property—techniques such as Poisson Surface Reconstruction can be used to reconstruct the mesh. However, most approaches tend to regress only the direction of the normal—a local property—and not the orientation. Orientation of the normals can still be determined as a post-processing step. Minimum Spanning Tree (MST) based approaches work on the intuition that tangent planes in a neighborhood are nearly parallel, so nearby points should have similar normals, and they propagate orientations along the MST.
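The MST propagation idea can be sketched as follows: grow a spanning tree over point distances and, as each point joins, flip its normal if it disagrees with its tree parent. This is a minimal sketch using Prim's algorithm on a dense distance matrix (fine for small clouds, O(n²)); real pipelines use k-d trees and Riemannian edge weights.

```python
import numpy as np

def orient_normals_mst(points, normals):
    """Consistently orient unoriented normals: grow a minimum spanning
    tree over Euclidean point distances (Prim's algorithm) and flip each
    newly attached normal if it disagrees with its tree parent's."""
    pts = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float).copy()
    n = len(pts)
    dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True                 # node 0 fixes the global orientation
    best = dists[0].copy()            # cheapest edge from tree to each node
    parent = np.zeros(n, dtype=int)
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[j] = True
        if np.dot(normals[j], normals[parent[j]]) < 0:
            normals[j] = -normals[j]  # propagate the parent's orientation
        closer = dists[j] < best      # relax edges through the new node
        best[closer] = dists[j][closer]
        parent[closer] = j
    return normals

# Normals on a planar patch with random ±z signs become consistent.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
raw = np.outer(rng.choice([-1.0, 1.0], 50), [0.0, 0.0, 1.0])
oriented = orient_normals_mst(pts, raw)
print(np.all(oriented @ oriented[0] > 0))  # True
```

Using the shortest edges keeps the propagation between genuinely nearby points, where the "similar normals" assumption is most reliable.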

Data-Driven Approaches

The fundamental problem with the traditional approaches is their sensitivity to the choice of hyperparameters—the radius of the neighborhood, for one—which leads to problematic results at sharp features (corners, edges). A small radius might be preferred to appropriately constrain the neighborhood and output fine normal estimates, but it is sensitive to outliers, leading to inaccurate estimates at these features. On the other hand, a large radius oversmooths (averages) the estimates at sharp features.

To overcome this tradeoff between robustness to noise and accuracy of fine details, state-of-the-art (SOTA) approaches utilize data-driven methods.

To put it into perspective, normal estimation in point clouds is a regression problem, and an L2 (RMSE) loss can be utilized for training the deep learning architecture.
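Since most methods regress only the normal's direction (not its orientation), the training loss is usually made sign-invariant. A minimal numpy sketch of such an unoriented L2 loss (the function name is illustrative; frameworks would implement the same idea in their tensor API):

```python
import numpy as np

def unoriented_normal_loss(pred, gt):
    """Sign-invariant L2 loss for normal regression: for each point, take
    the smaller of the squared errors against +gt and -gt, then average,
    so a prediction that is merely flipped incurs no penalty."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    err_pos = np.sum((pred - gt) ** 2, axis=-1)
    err_neg = np.sum((pred + gt) ** 2, axis=-1)
    return float(np.mean(np.minimum(err_pos, err_neg)))

# A flipped but otherwise perfect prediction incurs zero loss.
gt = np.array([[0.0, 0.0, 1.0]])
print(unoriented_normal_loss(-gt, gt))  # 0.0
```

A plain L2 loss would punish the flipped prediction as badly as a completely wrong one, which is undesirable when orientation is left to post-processing.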

Some common datasets include PCPNet Dataset, NYU Depth V2, ScanNet.

Now, let’s look at some of the deep learning based approaches for estimating local 3D shape properties in point clouds:


PCPNet

Single-scale and multi-scale PCPNet architecture.

Nesti-Net

Nesti-Net Architecture.

Local Plane Constraint and Multi-Scale Selection

Architecture

GCN-based (Graph Convolutional Networks)

Proposed Architecture

:bulb: TODO: complete citations @me

:loudspeaker: know of any other interesting architectures? discussion section is below :arrow_down: