## Archive for June, 2010

### A different way to find a point on a hyperplane closest to a given point

Saturday, June 12th, 2010

As a graphics programmer, I often need to find the point on a hyperplane (a line in 2D, a plane in 3D) closest to a given point. So far my understanding of the problem has been based on trigonometry, which is probably the simplest way of approaching it. I’m not going to explain the boring trigonometric approach here; you can find it on plenty of other websites using Google.

A few weeks back I came across a new concept in maths called Lagrange multipliers. Besides the problem I was addressing at the time, I was curious whether they could also be used to find the point on a hyperplane closest to a given point.

If you are not familiar with Lagrange multipliers then just go to the Wikipedia page that explains it all. I think they give a better explanation of the problem than I ever will. Ok, so if you haven’t already figured out how we are going to approach this problem, read on.

Let’s work in two dimensions, with a line of the form
$\alpha x+\beta y+\gamma = 0$
Let’s try to visualize this…
The vector $(\alpha,\beta)^T$ is the normal to the line.
The constant $\gamma$ is related to the line’s distance from the origin: when the normal $(\alpha,\beta)$ has unit length, $|\gamma|$ is exactly the distance from the origin to the closest point on the line.
If you are used to seeing the line only in slope-intercept form, $y=mx + b$,
then we can simply set
$\alpha = m$, $\beta = -1$ and $\gamma = b$.
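As a quick sketch of this conversion (the helper name `to_general_form` is my own, not from any library), here is how the slope-intercept form maps onto the general form, including the normalization step needed before $|\gamma|$ can be read as the distance to the origin:

```python
import math

def to_general_form(m, b):
    """Convert a line y = m*x + b into general form alpha*x + beta*y + gamma = 0.

    The result is scaled so that (alpha, beta) is a unit normal, which makes
    abs(gamma) the distance from the origin to the line."""
    alpha, beta, gamma = m, -1.0, b
    n = math.hypot(alpha, beta)  # length of the normal (alpha, beta)
    return alpha / n, beta / n, gamma / n

# The example line used later in the post: y = 0.5x + 1
a, b_, c = to_general_form(0.5, 1.0)
```

A point known to lie on the line, such as $(0, 1)$, should satisfy the resulting equation exactly.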

Now let’s take a point $P=(x_p, y_p)$ in the plane. What we would like to do is minimise the Euclidean distance to this point. The Euclidean distance is given by
$f(x, y) = \sqrt{(x-x_{p})^2+(y-y_{p})^2}$
Minimising this on its own is dead simple: the closest point to a point is the point itself. But now we add a constraint that the closest point must lie on the line, whose equation is
$g(x,y) = \alpha x + \beta y + \gamma = 0$
We approach this using the simplest form of the Lagrange multiplier method: we minimise the function $f(x,y)$ subject to the constraint $g(x,y)=0$. The best way to show that this works is with an example, so…

### Let’s try this with an example

Let’s assume we have the line
$y=0.5 x + 1$ and we would like to find the point on this line that is closest to the given point $P = (3,2)$.

We know the general condition for a constrained extremum, which states that
$\nabla f(x,y) = -\lambda\: \nabla g(x,y)$
In English, this means we need to find a point $X = (x, y)$ where the gradients of $f(x,y)$ and $g(x,y)$ point along the same direction. By multiplying one of the gradients by a constant $-\lambda$ we make both vectors equal in length and direction. Equivalently, we can define the Lagrangian
$\Lambda(x,y,\lambda) = f(x,y)+\lambda\: g(x,y)$
and look for its stationary points.

Before we start, we swap the distance function for a squared distance function. In our example $f(x,y)$ becomes
$f(x,y) = (x-3)^2+(y-2)^2$
We can do this because $f(x,y)$ is just a function to be minimised, and the squared distance is minimised at exactly the same point as the distance itself. We make this change because differentiating the squared distance is much simpler and results in a system of linear equations. The function $g(x,y)$ remains the same.

So now we need to find
$\nabla_{x,y,\lambda}\Lambda(x,y,\lambda) = 0$
In this case
$\Lambda(x,y,\lambda) = (x-3)^2 + (y-2)^2 + \lambda\:(0.5x-y+1)$
We now need to find
$\frac{\displaystyle\partial}{\displaystyle\partial x}\Lambda(x,y,\lambda) = 0$
$\frac{\displaystyle\partial}{\displaystyle\partial y}\Lambda(x,y,\lambda) = 0$
$\frac{\displaystyle\partial}{\displaystyle \partial \lambda}\Lambda(x,y,\lambda) = 0$
After doing all the hairy math, we get
$\frac{\displaystyle \partial}{\displaystyle \partial x}\Lambda(x,y,\lambda) = 0.5\lambda+2x-6=0$
$\frac{\displaystyle\partial}{\displaystyle\partial y}\Lambda(x,y,\lambda) = -\lambda + 2y -4=0$
$\frac{\displaystyle\partial}{\displaystyle\partial \lambda } \Lambda(x,y,\lambda) =0.5x-y+1= 0$
We now have a system of three linear equations in three unknowns, which is fairly easy to solve by any number of methods. If we had used the distance function instead of the squared distance, we would have ended up with a system of non-linear equations, which would not have been so easy to solve.
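To make this concrete, here is a minimal sketch (variable names are my own) that writes the three equations above in matrix form, with the unknowns ordered $(x, y, \lambda)$, and solves them with NumPy:

```python
import numpy as np

# One row per partial derivative of the Lagrangian:
#   d/dx:   2x       + 0.5*lam = 6
#   d/dy:        2y  -     lam = 4
#   d/dlam: 0.5x - y           = -1
A = np.array([[2.0, 0.0,  0.5],
              [0.0, 2.0, -1.0],
              [0.5, -1.0, 0.0]])
rhs = np.array([6.0, 4.0, -1.0])

x, y, lam = np.linalg.solve(A, rhs)
```

Solving this reproduces the values derived below.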

Ok so after solving for $x,y,\lambda$ we get
$x=\frac{14}{5}$
$y=\frac{12}{5}$
We are not really interested in $\lambda$ anymore, so we shall leave that out. So the point on the line is
$X=\left (\frac{14}{5}, \frac{12}{5} \right )$

We can verify this answer in several different ways: we can solve for the same point using the trigonometric method, or we can check that the vectors $\overrightarrow{PX}$ and $\overrightarrow{AX}$ are orthogonal to each other, where $A$ is just another point on our line. We shall use the latter method. We already have
$P=(3, 2)$, $X=\left(\frac{14}{5}, \frac{12}{5}\right )$. We can find $A$ by simply setting $x=0$ in the line equation, which gives
$A=(0, 1)$
So we get
$\overrightarrow{AX} = X - A = \left ( \frac{14}{5}, \frac{7}{5} \right )$
$\overrightarrow{PX} = X - P = \left ( -\frac{1}{5}, \frac{2}{5} \right )$
and…
$\langle\overrightarrow{AX},\overrightarrow{PX}\rangle= 0$

…or in plain English: IT WORKS!!
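For completeness, the whole recipe generalises to any line $\alpha x + \beta y + \gamma = 0$ and any point $P$. Here is a hedged sketch (the function name `closest_point_on_line` is my own) that builds and solves the same 3×3 system, followed by the orthogonality check used above:

```python
import numpy as np

def closest_point_on_line(alpha, beta, gamma, px, py):
    """Closest point on alpha*x + beta*y + gamma = 0 to (px, py).

    Found by setting the gradient of the Lagrangian
    (x-px)^2 + (y-py)^2 + lam*(alpha*x + beta*y + gamma) to zero:
        d/dx:   2x + alpha*lam = 2*px
        d/dy:   2y + beta*lam  = 2*py
        d/dlam: alpha*x + beta*y = -gamma
    """
    A = np.array([[2.0,   0.0,  alpha],
                  [0.0,   2.0,  beta],
                  [alpha, beta, 0.0]])
    rhs = np.array([2.0 * px, 2.0 * py, -gamma])
    x, y, _ = np.linalg.solve(A, rhs)
    return x, y

# The example from the post: y = 0.5x + 1, i.e. 0.5x - y + 1 = 0, P = (3, 2).
x, y = closest_point_on_line(0.5, -1.0, 1.0, 3.0, 2.0)

# Orthogonality check: PX should be perpendicular to the line's direction.
px_vec = np.array([x - 3.0, y - 2.0])
direction = np.array([1.0, 0.5])  # a direction vector of y = 0.5x + 1
dot = float(px_vec @ direction)
```

The same system works unchanged in higher dimensions if you add one row and column per extra coordinate.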