Implementing Lagrangian Mechanics Three Ways
This post describes how to implement the dynamic equations of motion for a simple mobile manipulator robot. The equations of motion tell us how the position and velocity of the system evolve over time, which is useful for planning and control. As we'll see, finding a closed-form expression for the equations of motion requires calculating a number of derivatives. We're going to do this three different ways:
- with manual differentiation,
- with automatic differentiation,
- with symbolic differentiation.
The main point of this post is to show how automatic and symbolic differentiation can be used to avoid tedious and error-prone manual calculations. In particular, we'll see this in the context of robot dynamics and implemented in Python.
Aside: I'll put extra information that I think is interesting but not necessary for understanding in boxes like this. Feel free to read or skip as you please.
Resources
There are a number of textbooks that I draw on, both for this post and for robotics more generally:
- Modern Robotics by Kevin Lynch and Frank Park;
- Robotics: Modelling, Planning, and Control by Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo;
- Robot Dynamics and Control by Mark Spong and Mathukumalli Vidyasagar.
Modern Robotics is actually freely available at modernrobotics.org, so it is likely a good place to start if you want more information on the math and physics in this post.
Now, let's have a look at our robot.
Our robot
The robot I'll use for this post is a planar (i.e., two-dimensional) mobile manipulator consisting of a mobile base and a two-link arm, pictured below.
The base moves only in the $x$-direction. Each of the two arm links is attached by a revolute (rotating) joint. Thus the system has three degrees of freedom (DOFs), which make up its generalized coordinates:
$$\mathbf{q} = (x_1, \theta_2, \theta_3),$$
where $x_1$ is the position of the base and $\theta_2$, $\theta_3$ are the joint angles of the two arm links.
The generalized coordinates can be any convenient set of values that fully capture the configuration of the system. For robot arms, it is usually more convenient to use the joint angles rather than, say, Cartesian coordinates.
The centre of mass (CoM) of each link is labeled $C_i$, where $i$ is the link number.
Kinematics
In this post, I use the terms "dynamics" and "equations of motion" interchangeably to refer to the same thing: the equations that tell us why the system moves. However, before we get to the dynamics, we need to understand some kinematics. Kinematics tells us how the system moves: unlike dynamics, it is not concerned with dynamical quantities like mass, energy, or force (the why); rather, it deals only with motion: position, velocity, acceleration, etc.
The first thing we'll need is the pose (position and orientation) of the CoM of each link in the robot, expressed in terms of the generalized coordinates $\mathbf{q}$. In 2D, the pose consists of the position $(x, y)$ and the orientation angle $\theta$, which we'll store in a 3-dimensional vector $P = (x, y, \theta)$.
Aside: Computing the pose as a function of $\mathbf{q}$ is called forward kinematics; the opposite problem, computing the configuration $\mathbf{q}$ that corresponds to a particular pose (typically of the end effector), is known as inverse kinematics, and is a much harder problem. Luckily, we don't need it here.
The poses of the link CoMs expressed in terms of $\mathbf{q}$ are
$$P_1 = \begin{bmatrix} x_1 \\ 0 \\ 0 \end{bmatrix}, \quad
P_2 = \begin{bmatrix} x_1 + l_x + \tfrac{1}{2}\ell_2\cos\theta_2 \\ l_y + \tfrac{1}{2}\ell_2\sin\theta_2 \\ \theta_2 \end{bmatrix}, \quad
P_3 = \begin{bmatrix} x_1 + l_x + \ell_2\cos\theta_2 + \tfrac{1}{2}\ell_3\cos(\theta_2 + \theta_3) \\ l_y + \ell_2\sin\theta_2 + \tfrac{1}{2}\ell_3\sin(\theta_2 + \theta_3) \\ \theta_2 + \theta_3 \end{bmatrix}.$$
(Note that the offsets $l_x$ and $l_y$ from the base to the first arm joint are known and constant.)
Analogous to a pose, we can express the linear and angular velocity together as the twist $\mathcal{V} = (\dot{x}, \dot{y}, \dot{\theta})$. In terms of the generalized coordinates, the twist of the $i$th link is
$$\mathcal{V}_i = J_i(\mathbf{q})\dot{\mathbf{q}},$$
where
$$J_i(\mathbf{q}) = \frac{\partial P_i}{\partial \mathbf{q}}$$
is the Jacobian of the link's CoM. I use the "dot" notation $\dot{\mathbf{q}}$ to indicate the time derivative of $\mathbf{q}$.
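As a quick worked example, differentiating the pose $P_2$ above with respect to $\mathbf{q} = (x_1, \theta_2, \theta_3)$ gives the Jacobian of the second link's CoM:
$$J_2(\mathbf{q}) = \frac{\partial P_2}{\partial \mathbf{q}} = \begin{bmatrix} 1 & -\tfrac{1}{2}\ell_2\sin\theta_2 & 0 \\ 0 & \tfrac{1}{2}\ell_2\cos\theta_2 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$
The first column reflects that translating the base carries the whole arm along with it, while the last column is zero because $\theta_3$ does not affect link 2.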
With these kinematic relationships out of the way, we are ready to discuss the dynamics.
Aside: In three-dimensional space, there is a distinction between the geometric Jacobian and the analytic Jacobian. However, in a two-dimensional setting like ours they are the same.
Dynamics
The central object of Lagrangian mechanics is the Lagrangian, which is a function defined as
$$\mathcal{L}(\mathbf{q}, \dot{\mathbf{q}}) = T(\mathbf{q}, \dot{\mathbf{q}}) - V(\mathbf{q}),$$
where $T$ is the kinetic energy of the system and $V$ is the potential energy. The trajectory that the system follows is the one that minimizes the time integral of the Lagrangian (known as the action) over the time interval of interest.
Aside: In other words, Lagrangian mechanics is based on the observation that bodies tend to follow trajectories that minimize the time integral of the difference between kinetic and potential energy. This is known as Hamilton's principle of least action. Lagrangian mechanics is equivalent to Newtonian mechanics and all other formulations of classical mechanics (i.e., the resulting equations of motion are the same).
Using variational calculus (which we'll skip here), it can be shown that the minimizing trajectory evolves according to
$$\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} - \frac{\partial \mathcal{L}}{\partial \mathbf{q}} = \boldsymbol{\tau},$$
which are known as the equations of motion, where $\boldsymbol{\tau} \in \mathbb{R}^n$ is the generalized force vector ("generalized" means it can contain both forces and torques) and $n$ is the number of DOFs of the system. As we saw above, for our example $n = 3$. We will assume that there is no friction at the joints.
Once we compute the Lagrangian and take the required derivatives, we find that $\boldsymbol{\tau}$ has the form
$$\boldsymbol{\tau} = M(\mathbf{q})\ddot{\mathbf{q}} + \mathbf{c}(\mathbf{q}, \dot{\mathbf{q}}) + \mathbf{g}(\mathbf{q}),$$
where $M(\mathbf{q})$ is the mass matrix, $\mathbf{c}(\mathbf{q}, \dot{\mathbf{q}})$ is the vector of centrifugal and Coriolis effects, and $\mathbf{g}(\mathbf{q})$ is the gravity vector. I've left out the math that takes us from the Euler-Lagrange equations to this form, since this post is already quite long; check out Modern Robotics or another reference for more information.
Aside: The form of $\boldsymbol{\tau}$ above shows that the equations of motion are a set of coupled second-order differential equations. Typically, we integrate the equations forward through time from some initial state $(\mathbf{q}_0, \dot{\mathbf{q}}_0)$ to find the state at some future time $t$, given the generalized forces $\boldsymbol{\tau}$. This is known as forward dynamics.
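To make the forward dynamics idea concrete, here is a minimal sketch of a fixed-step integrator. The function name, the `dynamics` and `tau_func` arguments, and the step size are my own illustrative choices; `dynamics` can be any object providing the `mass_matrix`, `coriolis_matrix`, and `gravity_vector` functions we implement later in this post.

import numpy as np

def simulate(dynamics, q0, dq0, tau_func, dt=1e-3, steps=1000):
    """Integrate M(q) ddq + C(q, dq) dq + g(q) = tau forward in time.

    tau_func(t, q, dq) returns the applied generalized forces at time t.
    """
    q, dq = np.copy(q0), np.copy(dq0)
    for k in range(steps):
        t = k * dt
        M = dynamics.mass_matrix(q)
        C = dynamics.coriolis_matrix(q, dq)
        g = dynamics.gravity_vector(q)
        # solve M ddq = tau - C dq - g for the accelerations
        ddq = np.linalg.solve(M, tau_func(t, q, dq) - C @ dq - g)
        dq = dq + dt * ddq  # semi-implicit Euler step
        q = q + dt * dq
    return q, dq

# e.g., let the arm fall under gravity with no applied forces:
# q_final, dq_final = simulate(ManualDynamics, np.zeros(3), np.zeros(3),
#                              lambda t, q, dq: np.zeros(3))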
It is common to see $\mathbf{c}(\mathbf{q}, \dot{\mathbf{q}})$ expressed in a few different but equivalent ways:
$$\mathbf{c}(\mathbf{q}, \dot{\mathbf{q}}) = C(\mathbf{q}, \dot{\mathbf{q}})\dot{\mathbf{q}} = \dot{\mathbf{q}}^T\Gamma(\mathbf{q})\dot{\mathbf{q}},$$
where $C(\mathbf{q}, \dot{\mathbf{q}})$ is the Coriolis matrix and $\Gamma(\mathbf{q})$ is the 3D array of Christoffel symbols.
Next, we will look at the mathematical expressions for $M(\mathbf{q})$, $C(\mathbf{q}, \dot{\mathbf{q}})$, and $\mathbf{g}(\mathbf{q})$, which we need in order to implement them in code.
Aside: It is very common for the dynamics to be computed using the Newton-Euler equations rather than the Lagrangian formulation used in this post. The Newton-Euler method is more computationally efficient, but it does not provide closed-form expressions for $M(\mathbf{q})$, $C(\mathbf{q}, \dot{\mathbf{q}})$, and $\mathbf{g}(\mathbf{q})$, which are useful for control design.
The mass matrix
We can derive the mass matrix from the kinetic energy of the system. Recall that a particle with mass $m$ moving with velocity $v$ has kinetic energy $T = \frac{1}{2}mv^2$. It turns out that the kinetic energy of our robotic system can be expressed analogously as
$$T = \frac{1}{2}\dot{\mathbf{q}}^T M(\mathbf{q}) \dot{\mathbf{q}},$$
where we've used the generalized velocity $\dot{\mathbf{q}}$ and mass matrix $M(\mathbf{q})$ of the robot instead of the linear velocity and scalar mass of the particle. But we still don't know how to calculate $M(\mathbf{q})$: to do so, we need to find an expression for the kinetic energy with the same form as the equation above.
We can find the total kinetic energy of the robotic system by determining the kinetic energy of each link and summing them up. Each link of our robot is a rigid body (rather than just a particle) in the 2D plane, so we need to account for both the linear and angular velocity. For the $i$th link, the kinetic energy is given by
$$T_i = \frac{1}{2}m_i(\dot{x}_i^2 + \dot{y}_i^2) + \frac{1}{2}I_i\dot{\theta}_i^2,$$
where $m_i$ is the link's mass and $I_i$ is the link's moment of inertia. We can express this in the convenient matrix form
$$T_i = \frac{1}{2}\mathcal{V}_i^T G_i \mathcal{V}_i,$$
where $\mathcal{V}_i$ is the link's twist and
$$G_i = \begin{bmatrix} m_i & 0 & 0 \\ 0 & m_i & 0 \\ 0 & 0 & I_i \end{bmatrix}.$$
However, we need to express the kinetic energy in terms of the generalized velocity $\dot{\mathbf{q}}$, rather than the twist of each link. Substituting the differential kinematic relationship $\mathcal{V}_i = J_i(\mathbf{q})\dot{\mathbf{q}}$ into the expression above, we get
$$T_i = \frac{1}{2}\dot{\mathbf{q}}^T J_i^T(\mathbf{q}) G_i J_i(\mathbf{q}) \dot{\mathbf{q}}.$$
Thus the total kinetic energy is just the sum
$$T = \sum_{i=1}^3 T_i = \frac{1}{2}\dot{\mathbf{q}}^T \left(\sum_{i=1}^3 J_i^T(\mathbf{q}) G_i J_i(\mathbf{q})\right) \dot{\mathbf{q}},$$
which reveals that the mass matrix from the quadratic form above can be expressed as
$$M(\mathbf{q}) = \sum_{i=1}^3 J_i^T(\mathbf{q}) G_i J_i(\mathbf{q}).$$
The Coriolis matrix
Next, we'll move on to the Coriolis matrix. To find it, we first need the 3D array of Christoffel symbols $\Gamma(\mathbf{q})$. It turns out that it can be calculated directly from the mass matrix: the entry at index $(i, j, k)$ is given by
$$\Gamma_{ijk} = \frac{\partial M_{ji}}{\partial q_k} - \frac{1}{2}\frac{\partial M_{jk}}{\partial q_i}.$$
(This is the result of some of that math we skipped between the Euler-Lagrange equations and the standard form of the dynamics above.)
The Coriolis matrix can then be found by computing
$$C(\mathbf{q}, \dot{\mathbf{q}}) = \dot{\mathbf{q}}^T \Gamma(\mathbf{q}),$$
where $\dot{\mathbf{q}}^T \Gamma(\mathbf{q})$ denotes the contraction of $\dot{\mathbf{q}}$ with the middle index of $\Gamma(\mathbf{q})$; that is, the entry of $C$ at index $(i, k)$ is
$$C_{ik} = \sum_j \dot{q}_j \Gamma_{ijk}.$$
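If the index bookkeeping feels opaque, a useful sanity check is to spell the contraction out with np.einsum. This is only a sketch: Γ here stands for any 3x3x3 numpy array of Christoffel symbols, like the ones computed in the implementations below.

import numpy as np

def coriolis_from_christoffel(Γ, dq):
    """Contract dq with the middle index of Γ: C[i, k] = sum_j dq[j] * Γ[i, j, k]."""
    return np.einsum("j,ijk->ik", dq, Γ)

# np.dot(dq, Γ) computes the same thing, because np.dot sums over the
# second-to-last axis of its second argument when that argument is N-dimensional.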
The gravity vector
Finally, we turn to the gravity vector, for which we will need to consider the potential energy of the system. Recall that the potential energy of a particle with mass $m$ at height $h$ is $V = mgh$, where $g$ is the gravitational acceleration. The potential energy of each link of the robot follows the same formula, and we just need to sum them to get the total potential energy:
$$V(\mathbf{q}) = g\sum_{i=1}^3 m_i y_i(\mathbf{q}),$$
where $y_i(\mathbf{q})$ is the height of the $i$th link's CoM. Once we have that, it turns out that the gravity vector is just the derivative of the potential energy,
$$\mathbf{g}(\mathbf{q}) = \frac{\partial V}{\partial \mathbf{q}}.$$
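For our robot, carrying out this derivative with the CoM heights from the pose expressions above gives
$$\mathbf{g}(\mathbf{q}) = \begin{bmatrix} 0 \\ \left(\tfrac{1}{2}m_2 + m_3\right) g \ell_2 \cos\theta_2 + \tfrac{1}{2} m_3 g \ell_3 \cos(\theta_2 + \theta_3) \\ \tfrac{1}{2} m_3 g \ell_3 \cos(\theta_2 + \theta_3) \end{bmatrix},$$
which matches the gravity_vector expression in the manual implementation below. The first entry is zero because moving the base horizontally does not change any of the heights.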
Implementation
In the previous sections we found mathematical expressions for $M(\mathbf{q})$, $C(\mathbf{q}, \dot{\mathbf{q}})$, and $\mathbf{g}(\mathbf{q})$. We can easily write a program to compute the required matrix multiplications and trigonometric functions, but what about the expressions containing derivatives? In particular, we need to evaluate the link Jacobians $J_i(\mathbf{q})$ (to compute the mass matrix), the Christoffel symbols $\Gamma(\mathbf{q})$, and the gravity vector $\mathbf{g}(\mathbf{q})$. Again, we will do this three different ways:
- with manual differentiation,
- with automatic differentiation,
- with symbolic differentiation.
I provide most of the code for reference below, but you can also play with it in a Jupyter notebook in the browser here or download the repository here.
We'll start by specifying the parameters of the model:
import numpy as np

# link masses
M1 = 3
M2 = 2
M3 = 1
# link lengths
LX = 0.1
LY = 0.1
L2 = 1
L3 = 0.5
# moments of inertia (base link doesn't rotate so we don't need its inertia)
I2 = M2 * L2 ** 2 / 12
I3 = M3 * L3 ** 2 / 12
# inertia matrices
G1 = np.diag(np.array([M1, M1, 0]))
G2 = np.diag(np.array([M2, M2, I2]))
G3 = np.diag(np.array([M3, M3, I3]))
# gravity
G = 9.8
Method 1: Manual differentiation
For our first method, I've manually calculated and implemented expressions for all of the derivatives. The derivatives are tedious but fairly straightforward, so I'll skip the math here and get right to the code:
class ManualDynamics:
@classmethod
def mass_matrix(cls, q):
x1, θ2, θ3 = q
θ23 = θ2 + θ3
m11 = M1 + M2 + M3
m12 = -(0.5 * M2 + M3) * L2 * np.sin(θ2) - 0.5 * M3 * L3 * np.sin(θ23)
m13 = -0.5 * M3 * L3 * np.sin(θ23)
m22 = (
(0.25 * M2 + M3) * L2 ** 2
+ 0.25 * M3 * L3 ** 2
+ M3 * L2 * L3 * np.cos(θ3)
+ I2
+ I3
)
m23 = 0.5 * M3 * L3 * (0.5 * L3 + L2 * np.cos(θ3)) + I3
m33 = 0.25 * M3 * L3 ** 2 + I3
M = np.array([[m11, m12, m13], [m12, m22, m23], [m13, m23, m33]])
return M
@classmethod
def christoffel_matrix(cls, q):
x1, θ2, θ3 = q
θ23 = θ2 + θ3
# Partial derivatives of mass matrix
dMdx1 = np.zeros((3, 3))
dMdθ2_12 = (
-0.5 * M2 * L2 * np.cos(θ2) - M3 * L2 * np.cos(θ2) - 0.5 * M3 * L3 * np.cos(θ23)
)
dMdθ2_13 = -0.5 * M3 * L3 * np.cos(θ23)
dMdθ2 = np.array([[0, dMdθ2_12, dMdθ2_13], [dMdθ2_12, 0, 0], [dMdθ2_13, 0, 0]])
dMdθ3_12 = -0.5 * M3 * L3 * np.cos(θ23)
dMdθ3_13 = -0.5 * M3 * L3 * np.cos(θ23)
dMdθ3_22 = -M3 * L2 * L3 * np.sin(θ3)
dMdθ3_23 = -0.5 * M3 * L2 * L3 * np.sin(θ3)
dMdθ3 = np.array([
[0, dMdθ3_12, dMdθ3_13],
[dMdθ3_12, dMdθ3_22, dMdθ3_23],
[dMdθ3_13, dMdθ3_23, 0],
])
dMdq = np.zeros((3, 3, 3))
dMdq[0, :, :] = dMdx1
dMdq[1, :, :] = dMdθ2
dMdq[2, :, :] = dMdθ3
# This is equivalent to a set of for loops that fill out Γ such that
# Γ[i, j, k] = dMdq[k, j, i] - 0.5*dMdq[i, j, k]
Γ = dMdq.T - 0.5 * dMdq
return Γ
@classmethod
def coriolis_matrix(cls, q, dq):
Γ = cls.christoffel_matrix(q)
# note numpy dot broadcasting rules: this is a sum product of dq with
# the second last dimension of Γ
return np.dot(dq, Γ)
@classmethod
def gravity_vector(cls, q):
x1, θ2, θ3 = q
θ23 = θ2 + θ3
return np.array([
0,
(0.5 * M2 + M3) * G * L2 * np.cos(θ2) + 0.5 * M3 * L3 * G * np.cos(θ23),
0.5 * M3 * L3 * G * np.cos(θ23),
])
@classmethod
def force(cls, q, dq, ddq):
M = cls.mass_matrix(q)
C = cls.coriolis_matrix(q, dq)
g = cls.gravity_vector(q)
return M @ ddq + C @ dq + g
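As a quick usage example (the state values here are made up), computing the generalized forces needed to hold a particular state with a desired acceleration looks like this:

q = np.array([0.5, 0.25 * np.pi, -0.25 * np.pi])  # example configuration
dq = np.array([0.1, 0.2, 0.3])                    # example joint velocities
ddq = np.zeros(3)                                 # desired accelerations
τ = ManualDynamics.force(q, dq, ddq)              # generalized forces (N and N·m)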
This method produces code that is challenging to read and debug. It is easy to make a mistake taking a derivative or writing its expression in code, and difficult to find it. We can avoid these bugs entirely by taking derivatives automatically, which is our next approach.
Method 2: Automatic differentiation
We will use jax to automatically compute the required derivatives. This results in more succinct and readable code than the manual version:
from functools import partial

import jax
import jax.numpy as jnp

class AutoDiffDynamics:
@classmethod
@partial(jax.jit, static_argnums=(0,))
def link1_pose(cls, q):
return jnp.array([q[0], 0, 0])
@classmethod
@partial(jax.jit, static_argnums=(0,))
def link2_pose(cls, q):
P1 = cls.link1_pose(q)
return P1 + jnp.array([
LX + 0.5 * L2 * jnp.cos(q[1]),
LY + 0.5 * L2 * jnp.sin(q[1]), q[1]]
)
@classmethod
@partial(jax.jit, static_argnums=(0,))
def link3_pose(cls, q):
P2 = cls.link2_pose(q)
return P2 + jnp.array([
0.5 * L2 * jnp.cos(q[1]) + 0.5 * L3 * jnp.cos(q[1] + q[2]),
0.5 * L2 * jnp.sin(q[1]) + 0.5 * L3 * jnp.sin(q[1] + q[2]),
q[2],
])
@classmethod
@partial(jax.jit, static_argnums=(0,))
def mass_matrix(cls, q):
# Jacobians
J1 = jax.jacfwd(cls.link1_pose)(q)
J2 = jax.jacfwd(cls.link2_pose)(q)
J3 = jax.jacfwd(cls.link3_pose)(q)
return J1.T @ G1 @ J1 + J2.T @ G2 @ J2 + J3.T @ G3 @ J3
@classmethod
@partial(jax.jit, static_argnums=(0,))
def christoffel_matrix(cls, q):
# Here dMdq is transposed with respect to the dMdq's in the manual and
# symbolic implementations. Thus, the expression for Γ is also
# transposed, giving the same end result.
dMdq = jax.jacfwd(cls.mass_matrix)(q)
Γ = dMdq - 0.5 * dMdq.T
return Γ
@classmethod
@partial(jax.jit, static_argnums=(0,))
def coriolis_matrix(cls, q, dq):
Γ = cls.christoffel_matrix(q)
return dq.T @ Γ
@classmethod
@partial(jax.jit, static_argnums=(0,))
def potential_energy(cls, q):
P1 = cls.link1_pose(q)
P2 = cls.link2_pose(q)
P3 = cls.link3_pose(q)
return G * (M1 * P1[1] + M2 * P2[1] + M3 * P3[1])
@classmethod
@partial(jax.jit, static_argnums=(0,))
def gravity_vector(cls, q):
return jax.jacfwd(cls.potential_energy)(q)
@classmethod
@partial(jax.jit, static_argnums=(0,))
def force(cls, q, dq, ddq):
M = cls.mass_matrix(q)
C = cls.coriolis_matrix(q, dq)
g = cls.gravity_vector(q)
return M @ ddq + C @ dq + g
Here we have been able to avoid the complicated derivative expressions by computing them automatically using jax.jacfwd.
I like jax because it is easy to use (it is nearly a drop-in replacement for numpy), but I do wish there were some way to save a compiled function between runs. jax does have the option to just-in-time (JIT) compile functions to XLA for extra speed (which we've done via the @partial(jax.jit, static_argnums=(0,)) lines), though startup times can still be long when automatically differentiating complex, high-dimensional functions.
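One nice side effect of having multiple implementations is that they can be cross-checked against each other. Here is a minimal sketch of such a check; the random test point and the tolerance (loose because jax uses 32-bit floats by default) are my own choices:

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=3)  # random test configuration
dq = rng.uniform(-1, 1, size=3)

# the mass matrices (and likewise the other terms) should agree
M_manual = ManualDynamics.mass_matrix(q)
M_auto = AutoDiffDynamics.mass_matrix(jnp.array(q))
assert np.allclose(M_manual, M_auto, atol=1e-5)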
Method 3: Symbolic differentiation
For our final method, we will perform symbolic differentiation using sympy. The code for this one is a little oddly structured, since we need to do a bunch of math before we can actually generate the class functions we want. However, I think the code is quite short and easy to follow overall.
import sympy as sym

class SymbolicDynamics:
# joint configuration and velocity are functions of time
t = sym.symbols("t")
q = sym.Array(
[sym.Function("x1")(t), sym.Function("θ2")(t), sym.Function("θ3")(t)]
)
dq = q.diff(t)
# link poses
P1 = sym.Matrix([q[0], 0, 0])
P2 = P1 + sym.Matrix([
LX + 0.5 * L2 * sym.cos(q[1]),
LY + 0.5 * L2 * sym.sin(q[1]),
q[1],
])
P3 = P2 + sym.Matrix([
0.5 * L2 * sym.cos(q[1]) + 0.5 * L3 * sym.cos(q[1] + q[2]),
0.5 * L2 * sym.sin(q[1]) + 0.5 * L3 * sym.sin(q[1] + q[2]),
q[2],
])
# link Jacobians
J1 = P1.jacobian(q)
J2 = P2.jacobian(q)
J3 = P3.jacobian(q)
# mass matrix
M = J1.transpose() * G1 * J1 + J2.transpose() * G2 * J2 + J3.transpose() * G3 * J3
# Christoffel symbols and Coriolis matrix
dMdq = M.diff(q)
Γ = sym.permutedims(dMdq, (2, 1, 0)) - 0.5 * dMdq
C = sym.tensorcontraction(sym.tensorproduct(dq, Γ), (0, 2)).tomatrix()
# gravity vector
V = G * (M1 * P1[1] + M2 * P2[1] + M3 * P3[1])
g = V.diff(q)
# compile functions to numerical code
mass_matrix = sym.lambdify([q], M)
coriolis_matrix = sym.lambdify([q, dq], C)
gravity_vector = sym.lambdify([q], g)
@classmethod
def force(cls, q, dq, ddq):
M = cls.mass_matrix(q)
C = cls.coriolis_matrix(q, dq)
g = cls.gravity_vector(q)
return M @ ddq + C @ dq + g
This is probably my favourite method, for two reasons. First, the code has a very similar structure to the math, and thus is easy to follow. Second, once the symbolic computations have been done, the result can be compiled into a normal numerical Python function for (reasonably) fast evaluation. It is easy to serialize and save these functions to disk for future use with something like dill. You can even generate code in another language, like C++, and then compile that. This is the approach I take for computing some of the kinematic functions for a 9-DOF robot I use for research.
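For example, here is a minimal sketch of saving the lambdified functions with dill; the recurse setting is what allows dill to pickle the generated code, and the filename is arbitrary:

import dill

dill.settings["recurse"] = True
with open("dynamics_functions.pkl", "wb") as f:
    dill.dump(
        {
            "mass_matrix": SymbolicDynamics.mass_matrix,
            "coriolis_matrix": SymbolicDynamics.coriolis_matrix,
            "gravity_vector": SymbolicDynamics.gravity_vector,
        },
        f,
    )

# later, possibly in a different session:
with open("dynamics_functions.pkl", "rb") as f:
    funcs = dill.load(f)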
Performance
The last part of the Jupyter notebook provides a performance comparison between the three methods. I test the speed of each method by calculating the generalized forces $\boldsymbol{\tau}$ given random inputs $(\mathbf{q}, \dot{\mathbf{q}}, \ddot{\mathbf{q}})$. I use the Jupyter %%timeit cell magic, which runs each function 10,000 times, repeated with seven different inputs. The approximate average results when run on my laptop are shown in the table below (approximate because each run gives slightly different numbers, but they are typically pretty close to the ones in the table).
Method | Approximate time per call (us) |
---|---|
Manual | 60 |
Automatic | 30 |
Symbolic | 160 |
For this simple example, all of the methods are pretty fast (note the times are in microseconds). Automatic differentiation with jax is the fastest, probably because of its JIT compilation to efficient XLA code. The results in the table exclude the initial run when jax performs the compilation.
The symbolic approach is quite a bit slower than the others, even after being compiled to numerical code. I still really like it for its elegance, but it may be too slow for applications with tight performance requirements. In that case, however, you'd likely translate the symbolic expressions to C or C++ first (which sympy can do!) and then compile that, which should be much faster.
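Here is a rough sketch of what that could look like with sympy's codegen utility. The substitution of plain symbols for the time-dependent coordinates is my own workaround (the generated C function can't take sympy Functions, or Greek identifiers, as arguments), so treat the details as illustrative:

from sympy.utilities.codegen import codegen

# replace the time-dependent generalized coordinates with plain, C-friendly symbols
x1, theta2, theta3 = sym.symbols("x1 theta2 theta3")
subs = dict(zip(SymbolicDynamics.q, (x1, theta2, theta3)))
M_plain = SymbolicDynamics.M.subs(subs)

# generate C99 source and header for a mass_matrix() function
(c_name, c_code), (h_name, h_code) = codegen(
    ("mass_matrix", M_plain), language="C99", prefix="robot_dynamics"
)
print(c_code)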
When the benchmarking code is run in Binder, I actually see automatic differentiation's time per call increase to over 80 us. The manual approach stays about the same and the symbolic approach increases slightly. Since the performance varies depending on the particular environment and application, it is hard to claim that a single approach should always be preferred; testing is required for each case.
Conclusion
This brings us to the end of this rather long post, in which we saw some background on Lagrangian dynamics as applied to robotics, and how we could actually implement the dynamics in a few different ways. If you have any questions or other feedback, feel free to send me an email. In particular, please let me know of any mistakes—there is a lot of math and code in this post, and thus a lot of opportunity for error!