Introduction¶
Suppose we have a curve representation method, for example a parameterization scheme such as a spline. We also have a particular target shape (say, the Stanford Bunny) that we want our parameterization to represent as closely as possible. One way to fit the given shape is to tune the parameters of the representation scheme iteratively using gradient descent. This requires us to define an objective function that can then be optimized. An appropriate objective for this task is some measure of similarity (or dissimilarity) between the target curve and the curve traced out by our parameterization. The objective function can then be maximized (or minimized) to fit the parameters.
The goal of this tutorial is to study geometric similarity measures. We discuss different kinds of measures for both curve and surface similarity, and we also look at their implementations.
Figure 1: Parameterized shape, using a spline.
Figure 2: Target shape, the Stanford Bunny.
Concrete Problem Setup¶
First we define the different objects that we deal with:
- Shape Parameterization
- We use some parameterization scheme, e.g. splines, to represent our shapes. A set of parameters ϕ specifies our shape; by changing ϕ we trace out different curves in the plane. We will think of ϕ as a column vector.
- Parameterized/Candidate Curve
- This is the curve that is traced out by the parameterization scheme, obtained by sampling the scheme at different points along the curve. It is specified as a length-$n$ sequence of points, ordered along the curve. We collect these points as the rows of a matrix in $\mathbb{R}^{n \times 2}$, where each row corresponds to one point. We denote this matrix as $X_c$.
- Target Curve
- This is the curve that we want our parameterization scheme to represent. It is specified as a length-$n$ sequence of points, ordered along the curve. We collect these points as the rows of a matrix in $\mathbb{R}^{n \times 2}$, as we did for the parameterized curve. We denote this matrix as $X_t$.
- Loss Function
- A function, denoted $L(X_c, X_t)$, that measures the degree of dissimilarity between the target curve and the candidate curve. It should be differentiable, so that we can compute gradients and run gradient descent.
Goal: To tune ϕ such that our representation scheme traces out the target curve.
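To make the setup concrete, the two point matrices can be built directly as PyTorch tensors. The sketch below uses a sampled circle as a stand-in for a real spline parameterization and an ellipse as the target; the shapes, names, and sample count are illustrative, not part of the original setup.

```python
import math
import torch

n = 100  # number of samples along each curve
t = torch.linspace(0, 2 * math.pi, n)

# Candidate curve X_c: a unit circle, standing in for a spline
# evaluated at n ordered sample points (illustrative choice).
Xc = torch.stack([torch.cos(t), torch.sin(t)], dim=1)  # shape (n, 2)

# Target curve X_t: an ellipse sampled at the same n parameter values.
Xt = torch.stack([2 * torch.cos(t), torch.sin(t)], dim=1)  # shape (n, 2)
```

Each row of `Xc` and `Xt` is one 2D point, and corresponding rows are assumed to correspond along the curves.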
Similarity Measures¶
We now discuss the different curve and surface similarity measures. For each measure we describe the exact mathematical definition, practical considerations, modifications to make them differentiable and implementations in PyTorch.
Mean Squared Error (MSE)¶
Description¶
The mean squared error loss function computes the average of the squared distance between corresponding points on the two curves. Mathematically,

$$\mathrm{MSE}(X_c, X_t) = \frac{1}{n} \sum_{i=1}^{n} d\big(X_c[i], X_t[i]\big)^2,$$

where $X[i]$ denotes the $i$-th row of $X$, i.e. the $i$-th point on the curve, and $d$ is a distance function (e.g. the Euclidean distance).
Though the measure is quite naive and not very robust (it assumes the two curves are sampled with the same number of points, in corresponding order), it is very simple and quick to implement, and it is differentiable without any modifications.
Mean Squared Error (MSE) between two curves visualized.
Implementation¶
import torch

def mse_loss(Xc, Xt):
    # Xc, Xt: (n, 2) tensors of candidate and target curve points.
    # Mean of squared coordinate differences; this matches the MSE
    # formula above up to a constant factor, which does not affect
    # the minimizer.
    return torch.mean((Xc - Xt) ** 2)
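To connect the loss back to the fitting problem from the introduction, here is a minimal sketch of tuning parameters by gradient descent on `mse_loss`. For simplicity the parameters ϕ are taken to be the candidate points themselves rather than spline coefficients; that choice, the learning rate, and the step count are illustrative assumptions.

```python
import math
import torch

def mse_loss(Xc, Xt):
    # Mean of squared coordinate differences between the two curves.
    return torch.mean((Xc - Xt) ** 2)

# Target curve: n points on an ellipse (illustrative stand-in).
n = 100
t = torch.linspace(0, 2 * math.pi, n)
Xt = torch.stack([2 * torch.cos(t), torch.sin(t)], dim=1)

# Parameters phi: here simply the candidate points themselves,
# standing in for the parameters of a real spline.
phi = torch.zeros(n, 2, requires_grad=True)

opt = torch.optim.SGD([phi], lr=10.0)
for step in range(200):
    loss = mse_loss(phi, Xt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss is quadratic in ϕ here, plain SGD drives the candidate points onto the target; with a real spline parameterization the same loop applies, with `phi` feeding a spline-evaluation step that produces `Xc`.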