Motivation for Non-Self-Intersecting Geometry
An Introduction to Simple Closed Curves
Introduction¶
Shape optimization is the study of designing shapes that minimize or maximize some quantity of interest. An example would be designing an airplane wing that maximizes the lift-to-drag ratio. A typical shape optimization loop consists of three steps (a code sketch of the full loop follows the list):
1. Shape Representation
Using a parameterization method to represent the shape. The parameters ϕ are the design variables. For example, splines with their control points as parameters.
2. Shape Evaluation
Evaluating some characteristic of the shape that is to be minimized or maximized. For example, the lift-to-drag ratio of a wing.
3. Shape Improvement
Changing the parameters ϕ to find shapes with better characteristics. For example, gradient descent on ϕ.
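To make the loop concrete, here is a minimal, self-contained sketch in PyTorch. Everything in it is illustrative: the "spline" is reduced to a closed control polygon, and the evaluator is a dummy differentiable score standing in for a real flow solver or surrogate network.

```python
import torch

# Sketch of the three-step loop (illustrative assumptions: the "spline" is
# just the closed control polygon, and the evaluator is a dummy score).

phi = torch.randn(12, 2, requires_grad=True)  # 1. representation: 12 control points

def evaluate(pts: torch.Tensor) -> torch.Tensor:
    # 2. evaluation: placeholder score (area / perimeter of the closed
    #    polygon); a real pipeline would compute lift-to-drag here.
    nxt = pts.roll(-1, dims=0)
    area = 0.5 * (pts[:, 0] * nxt[:, 1] - nxt[:, 0] * pts[:, 1]).sum()
    perimeter = (nxt - pts).norm(dim=1).sum()
    return area.abs() / perimeter

opt = torch.optim.SGD([phi], lr=1e-2)
for step in range(100):              # 3. improvement: gradient ascent on phi
    opt.zero_grad()
    loss = -evaluate(phi)            # maximize the score by minimizing its negative
    loss.backward()
    opt.step()
```

The three comments mark where the three steps live; swapping in a real spline sampler and a real evaluator does not change the structure of the loop.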
We will focus on shape representation.
In a large class of shape optimization problems the shapes of interest are simple closed curves: curves that are closed (they form loops) and simple (they do not self-intersect). They are also called Jordan curves. Figure 1 below compares simple closed curves with other kinds of curves.
Figure 1: We will be concerned with simple closed curves, i.e. curves that loop back on themselves and do not self-intersect.
Simple closed curves have applications in a diverse set of fields: aerospace design, biomedical modeling, computer graphics, scientific modeling and simulation, gaming, and object recognition.
To understand why an optimization problem might be concerned only with simple closed curves, consider the following: imagine a flow moving from left to right around the body shown in Figure 2, and consider the lift-to-drag ratio of this shape. The flow only sees the boundary of the shape; the complicated spaghetti inside has no effect whatsoever on the properties of the body. All the representational capacity spent describing the inner curves is wasted.
Figure 2: An incoming flow would only interact with the boundary. The complicated internal mess is wasteful representation.
This is especially crucial when doing optimization. Shape representation methods that can represent self-intersecting shapes cause the optimization algorithm to search a bigger design space than needed. Self-intersecting shapes may also be physically undesirable or problematic in downstream tasks such as shape evaluation. Thus, such a shape parameterization requires manual tuning during optimization. As an example, consider the optimization shown in Figure 3. We represent the shape using 12 spline control points, which are fed into a neural network that predicts the shape’s lift-to-drag ratio. The network was trained on non-self-intersecting shapes, as they are the ones of interest; but during optimization, if the spline starts self-intersecting after a gradient step, the network struggles to predict its lift-to-drag ratio and steers the optimization in an even worse direction. This is the classical distributional shift problem.

Figure 3: Optimization starts from the dashed red initial airfoil shape, which is iteratively modified. We see that the optimization process suffers from distributional shift: once a self-intersecting shape is reached, it is iteratively made even worse.
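A minimal sketch of that surrogate-driven loop is below. The small untrained MLP here is a hypothetical stand-in for the trained lift-to-drag predictor; the point is only the structure of the loop, and the final comment marks where distributional shift enters.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a surrogate trained on (flattened) control
# points of non-self-intersecting airfoils only.
surrogate = nn.Sequential(
    nn.Linear(24, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
for p in surrogate.parameters():
    p.requires_grad_(False)          # surrogate is frozen during shape optimization

phi = torch.randn(12, 2, requires_grad=True)   # 12 spline control points
opt = torch.optim.Adam([phi], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    pred = surrogate(phi.flatten())  # predicted lift-to-drag ratio
    loss = -pred.squeeze()           # gradient ascent on the prediction
    loss.backward()
    opt.step()
    # Nothing here constrains phi to the training distribution: once the
    # spline self-intersects, the surrogate's predictions and gradients are
    # meaningless, and the loop can drift further out of distribution.
```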
When using a shape parameterization that can represent self-intersecting shapes, the optimization algorithm’s search has to be restricted, or someone has to sit and manually tune the run and check for self-intersections. Sometimes additional penalty terms are added to the objective to discourage self-intersection. This is a hassle, and in effect it prevents automated, aggressive exploration of the shape space. In an ideal setting, one would like to leave gradient descent running, go to sleep, and wake up to find the optimal shape. Thus we need a shape parameterization with non-self-intersection built in: one that produces only non-self-intersecting shapes for any setting of the parameters.
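For concreteness, the manual guard usually looks something like the following: a brute-force O(n²) check, assuming the curve has been sampled as a closed polygon, that tests every pair of non-adjacent edges for a proper crossing (degenerate collinear cases are ignored for brevity).

```python
import numpy as np

def _orient(a, b, c):
    """Sign of the 2D cross product (b - a) x (c - a)."""
    return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def _segments_cross(p, q, r, s):
    """True if segment pq properly crosses segment rs."""
    return (_orient(p, q, r) != _orient(p, q, s)
            and _orient(r, s, p) != _orient(r, s, q))

def is_simple(points):
    """Brute-force O(n^2) check that the closed polygon `points` is simple."""
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            # skip pairs of edges that share an endpoint
            if j == i + 1 or (i == 0 and j == n - 1):
                continue
            if _segments_cross(points[i], points[(i + 1) % n],
                               points[j], points[(j + 1) % n]):
                return False
    return True

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
bowtie = np.array([[0, 0], [1, 1], [1, 0], [0, 1]], dtype=float)
print(is_simple(square), is_simple(bowtie))  # True False
```

Running a check like this after every gradient step, and deciding what to do when it fails, is exactly the babysitting we want to avoid.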
Naive Usage of Polar Coordinates: The Representation Power Problem¶
One simple way of representing non-self-intersecting shapes would be through polar coordinates in the form:

$$
x(\theta) = r(\theta)\cos\theta, \qquad y(\theta) = r(\theta)\sin\theta, \qquad \theta \in [0, 2\pi),
$$

where we would represent r(θ) using some parameterized form, e.g. a neural network.
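As a concrete (hypothetical) instance, r(θ) below is a truncated Fourier series passed through a softplus so that the radius stays positive; a small neural network mapping θ to r would play the same role.

```python
import numpy as np

def polar_curve(r0, a, b, n_points=256):
    """Sample (r(theta) cos(theta), r(theta) sin(theta)) over [0, 2*pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    k = np.arange(1, len(a) + 1)[:, None]             # harmonic indices
    r = r0 + (a[:, None] * np.cos(k * theta)).sum(0) \
           + (b[:, None] * np.sin(k * theta)).sum(0)
    r = np.log1p(np.exp(r))                           # softplus keeps r > 0
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# The design parameters phi are (r0, a, b).
points = polar_curve(r0=1.0, a=np.array([0.3, 0.1]), b=np.array([0.2, 0.0]))
```

Since r > 0 and each angle θ gets exactly one radius, every curve produced this way is automatically simple. The flip side is the representation-power problem discussed next.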
But this representation cannot describe general simple closed curves. Because r is parameterized as a function of θ, a given θ can map to only one value of r(θ). This means we cannot represent shapes like the one shown in Figure 4.
Figure 4: Since the curve takes multiple values of r at a single θ, no function r(θ) can represent it.
Therefore we would not be able to analyze a fish swimming in water, as shown in Figure 5. Or, if for whatever reason the optimal design were the Stanford Bunny, we would not be able to design it! Jokes apart, we will use the Stanford Bunny as a target shape to test our shape representation techniques.
Figure 5: A swimming fish with its body curved cannot be represented by a polar parameterization.
Figure 6: The Stanford Bunny. We will use it as a target shape against which we test our parameterization techniques.
We now discuss a new method for shape parameterization, Neural Injective Geometry, that can describe general simple closed curves. This method ensures that only non-self-intersecting curves are generated for any combination of the parameters.
Neural Injective Geometry has several components that give it a lot of representation power:
- Injective Networks (The core)
- Monotonic Networks
- Pre- and Post-Auxiliary Networks
- And a bunch of other techniques
We describe each of these in detail in the tutorials that follow.
The central idea behind this geometry parameterization is the concept of injective networks which we discuss next.