If my shell's current working directory (CWD) is `/a/b/c/` and I want to copy a file called `foo.txt` to the directory `/d/e/f/`, then I have to write

`cp foo.txt ../../../d/e/f`

Analogously, to move the file without copying (the "cut" verb of GUI file managers), I'd write

`mv foo.txt ../../../d/e/f`

Writing out the whole relative path `../../../d/e/f` is pretty annoying (*did I get the number of `..` right?*), especially if I already have another terminal open with CWD `/d/e/f/`. I would prefer to be able to somehow cut or copy the file in one terminal and then paste it in another, similar to how GUI file managers typically work.

A while ago I wrote a Python tool to provide this functionality by copying files to and from an intermediate directory. However, I no longer use this tool because there are simpler methods, two of which I'll discuss here.

Probably the simplest method of all is to just use xclip, which comes with the commands

```
xclip-copyfile
xclip-cutfile
xclip-pastefile
```

which do exactly what we want. For convenience, I'd probably alias these to something shorter like

```
alias xcp=xclip-copyfile
alias xmv=xclip-cutfile
alias pst=xclip-pastefile
```

The names `xcp` and `xmv` are inspired both by the `x` in `xclip` as well as the great renameutils package, which provides quick (`qcp`, `qmv`) and interactive (`icp`, `imv`) variants of `cp` and `mv`. These commands can be quite handy, so I'd recommend checking out renameutils if you haven't before.

One thing I don't like about this solution is that once a file has been cut using `xclip-cutfile`, it's stuck in the mysterious xclip clipboard. If I accidentally copy or cut something else before I paste it, then it's gone. Another (smaller) downside is that the file in the clipboard can only be pasted once, when I might like to copy the same file to multiple destinations.

In an attempt to remedy the drawbacks of the xclip solution, I wrote a simple alternative shell script. Instead of the traditional order of cut or copy, then paste, I actually felt it to be more elegant to "mark" a set of files and then choose whether they should be copied or moved to the new location. Before I explain further, here is the script itself (you can also find it here):

```
CLIPBOARD_FILE=~/.xcp-clipboard

# mark the file(s) for later use with cp or mv
xmark() {
    realpath "$@" > "$CLIPBOARD_FILE"
}

# copy the marked files
xcp() {
    local files=()
    while read -r line; do
        files+=("$line")
    done < "$CLIPBOARD_FILE"
    cp "${files[@]}" "$@"
}

# move the marked files
xmv() {
    local files=()
    while read -r line; do
        files+=("$line")
    done < "$CLIPBOARD_FILE"
    mv "${files[@]}" "$@"
}
```

To access these commands, you just need to copy the above code to your shell's rc file (like `.bashrc`) or otherwise source it.

Let's return to our example from the beginning of the post to see how these commands work. We have two terminal windows: Terminal #1 has CWD `/a/b/c/` and Terminal #2 has CWD `/d/e/f/`. The goal is to copy `foo.txt` from `/a/b/c/` to `/d/e/f/` without writing tedious relative paths. Making use of the functions in the above script, in Terminal #1 I can run:

`xmark foo.txt`

which just "marks" the file by writing its full path `/a/b/c/foo.txt` to the clipboard file. Now, in Terminal #2, I can just run:

`xcp .`

and the file will be copied from its original location to the CWD `/d/e/f/`. Alternatively, I could use `xmv` to move the file (i.e., deleting the original).

Looking at the definitions of the `xcp` and `xmv` functions above, one can see that they actually just call `cp` and `mv` respectively, except that the paths listed in the clipboard file are added as the first arguments. Any arguments supplied to `xmv` or `xcp` get passed along directly to `mv` or `cp`, respectively, which means we get to leverage all of the power of `mv` and `cp` for free. Want the command to print out what is being done? Pass `-v` (verbose). Want to avoid overwriting an existing file? Use `-n` (no-clobber). I was surprised by how many options there were when I read the man pages for `cp` and `mv`: there is a lot going on there!

And that's it: as you can see, it is actually quite simple to cut and copy files from the command line. As usual, I'm interested in hearing about alternative solutions to the problem, as well as comments on the ones proposed above. The best way to reach me is by email.

- with manual differentiation,
- with automatic differentiation,
- with symbolic differentiation.

The main point of this post is to show how automatic and symbolic differentiation can be used to avoid tedious and error-prone manual calculations. In particular, we'll see this in the context of robot dynamics and implemented in Python.

*Aside:* I'll put extra information that I think is interesting but not
necessary for understanding in boxes like this. Feel free to read or skip as
you please.

There are a number of textbooks that I draw on for information in this post and robotics more generally:

- *Modern Robotics* by Kevin Lynch and Frank Park;
- *Robotics: Modelling, Planning, and Control* by Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo;
- *Robot Dynamics and Control* by Mark Spong and Mathukumalli Vidyasagar.

*Modern Robotics* is actually freely available at
modernrobotics.org, so it is likely a good place to start
if you want more information on the math and physics in this post.

Now, let's have a look at our robot.

The robot I'll use for this post is a planar (i.e., two-dimensional) mobile manipulator consisting of a mobile base and a two-link arm, pictured below.

The base moves only in the \(x\)-direction. Each of the two arm links is
attached by a revolute (rotating) joint. Thus the system has three degrees of
freedom (DOFs), which make up its *generalized coordinates*:
\begin{equation*}
\bm{q}=[x_1,\theta_2,\theta_3]^T.
\end{equation*}
The generalized coordinates can be any convenient set of values that fully
capture the configuration of the system. For robot arms, it is usually more
convenient to use the joint angles rather than, say, Cartesian coordinates.

The centre of mass (CoM) of each link is labeled \((x_i,y_i)\), where \(i\) is the link number.

In this post, I use the terms "dynamics" and "equations of motion"
interchangeably to refer to the same thing: the equations that tell us *why*
the system moves. However, before we get to the dynamics, we need to understand
some kinematics. Kinematics tells us *how* the system moves: unlike dynamics,
it is not concerned with dynamical quantities like mass, energy, or force (the
*why*); rather, it deals only with motion: position, velocity, acceleration,
etc.

The first thing we'll need is the pose (position and orientation) of the CoM of each link in the robot, expressed in terms of the generalized coordinates \(\bm{q}\). In 2D, the pose consists of the position \((x,y)\) and the orientation angle \(\theta\), which we'll store in a 3-dimensional vector \(\bm{P} = [x,y,\theta]^T\).

*Aside:* Computing the pose as a function of
\(\bm{q}\) is called *forward kinematics*; the opposite problem,
computing the configuration \(\bm{q}\) that corresponds to a particular pose
(typically of the end effector) is known as *inverse kinematics*, and is
a much harder problem. Luckily, we don't need it here.

The poses of the link CoMs expressed in terms of \(\bm{q}\) are \begin{align*} \bm{P}_1 &= \begin{bmatrix} x_1 \\ y_1 \\ \theta_1 \end{bmatrix}, \\ \bm{P}_2 &= \begin{bmatrix} x_2 \\ y_2 \\ \theta_2 \end{bmatrix} = \bm{P}_1 + \begin{bmatrix} l_x + \tfrac{1}{2}l_1\cos\theta_2 \\ l_y + \tfrac{1}{2}l_1\sin\theta_2 \\ \theta_2 \end{bmatrix}, \\ \bm{P}_3 &= \begin{bmatrix} x_3 \\ y_3 \\ \theta_3 \end{bmatrix} = \bm{P}_2 + \begin{bmatrix}\tfrac{1}{2}l_1\cos\theta_2 + \tfrac{1}{2}l_2\cos(\theta_2+\theta_3) \\ \tfrac{1}{2}l_1\sin\theta_2 + \tfrac{1}{2}l_2\sin(\theta_2+\theta_3) \\ \theta_3 \end{bmatrix}. \end{align*} (Note that \(y_1\) and \(\theta_1\) are known and constant.)
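To make these expressions concrete, here is a minimal numpy sketch of the forward kinematics. The parameter values are hypothetical, and the names `lx`, `ly`, `l1`, and `l2` mirror the symbols \(l_x\), \(l_y\), \(l_1\), \(l_2\) above:

```python
import numpy as np

# hypothetical parameter values for illustration
lx, ly = 0.1, 0.1   # offset from the base CoM to the arm's shoulder joint
l1, l2 = 1.0, 0.5   # arm link lengths

def link_poses(q):
    """CoM poses P1, P2, P3 for generalized coordinates q = [x1, θ2, θ3]."""
    x1, θ2, θ3 = q
    P1 = np.array([x1, 0.0, 0.0])  # y1 and θ1 are constant (taken as zero here)
    P2 = P1 + np.array([
        lx + 0.5 * l1 * np.cos(θ2),
        ly + 0.5 * l1 * np.sin(θ2),
        θ2,
    ])
    P3 = P2 + np.array([
        0.5 * l1 * np.cos(θ2) + 0.5 * l2 * np.cos(θ2 + θ3),
        0.5 * l1 * np.sin(θ2) + 0.5 * l2 * np.sin(θ2 + θ3),
        θ3,
    ])
    return P1, P2, P3
```

For example, with the arm stretched out horizontally (\(\theta_2=\theta_3=0\)), the CoM of the last link sits at \(x_1 + l_x + l_1 + \tfrac{1}{2}l_2\), as expected.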

Analogous to a pose, we can express the linear and angular velocity together as
the *twist* \(\bm{V}=[\dot{x},\dot{y},\dot{\theta}]^T\). In terms of the
generalized coordinates, the twist of the \(i\)th link is
\begin{equation}
\bm{V}_i = \dot{\bm{P}}_i = \frac{\partial\bm{P}_i}{\partial\bm{q}}\frac{d\bm{q}}{dt} = \bm{J}_i(\bm{q})\dot{\bm{q}},
\label{eq:diff_kin}
\end{equation}
where
\begin{equation}
\bm{J}_i(\bm{q})=\frac{\partial\bm{P}_i}{\partial\bm{q}}
\label{eq:jacobian}
\end{equation}
is the Jacobian of the link's CoM. I use the "dot" notation \(\dot{\bm{q}}\)
to indicate the time derivative of \(\bm{q}\).
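Equation \eqref{eq:jacobian} is also easy to check numerically. The sketch below approximates a link Jacobian with central differences and uses it to compute a twist via \eqref{eq:diff_kin}; the pose function and parameter values here are hypothetical stand-ins for link 3 of our robot:

```python
import numpy as np

lx, ly, l1, l2 = 0.1, 0.1, 1.0, 0.5  # hypothetical parameters

def link3_pose(q):
    """CoM pose of link 3 for q = [x1, θ2, θ3] (expanded from P3 above)."""
    x1, θ2, θ3 = q
    return np.array([
        x1 + lx + l1 * np.cos(θ2) + 0.5 * l2 * np.cos(θ2 + θ3),
        ly + l1 * np.sin(θ2) + 0.5 * l2 * np.sin(θ2 + θ3),
        θ2 + θ3,
    ])

def jacobian_fd(f, q, h=1e-6):
    """Central-difference approximation of ∂f/∂q (one column per coordinate)."""
    cols = []
    for i in range(len(q)):
        e = np.zeros_like(q)
        e[i] = h
        cols.append((f(q + e) - f(q - e)) / (2 * h))
    return np.stack(cols, axis=1)

q = np.array([0.3, 0.2, -0.4])
dq = np.array([0.1, -0.2, 0.5])
V3 = jacobian_fd(link3_pose, q) @ dq  # twist of link 3, V = J(q)q̇
```

Two quick sanity checks on the result: the first column of the Jacobian is \([1,0,0]^T\) (sliding the base translates the link in \(x\)), and the last row is \([0,1,1]\) (the link's orientation is \(\theta_2+\theta_3\)).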

With these kinematic relationships out of the way, we are ready to discuss the dynamics.

*Aside:* In three-dimensional space, there is a distinction between the
*geometric* Jacobian and the *analytic* Jacobian. However, in a
two-dimensional setting like ours they are the same.

The central object of Lagrangian mechanics is the *Lagrangian*, which is a
function defined as
\begin{equation*}
\mathcal{L} = \mathcal{K} - \mathcal{P},
\end{equation*}
where \(\mathcal{K}\) is the kinetic energy of the system and
\(\mathcal{P}\) is the potential energy. The trajectory \(\bm{q}(t)\) that
the system follows is the one that minimizes the Lagrangian over the time
interval of interest.

*Aside:* In other words, Lagrangian mechanics is based on the
observation that bodies tend to follow trajectories that minimize the
difference between kinetic and potential energy. This is known as Hamilton's
principle of least action. Lagrangian mechanics is equivalent to Newtonian
mechanics and all other formulations of classical mechanics (i.e., the
resulting equations of motion are the same).

Using variational calculus (which we'll skip here), it can be shown that the
minimizing trajectory evolves according to
\begin{equation}
\bm{\tau} = \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\bm{q}}} - \frac{\partial\mathcal{L}}{\partial\bm{q}},
\label{eq:motion_lagrange}
\end{equation}
which are known as the *equations of motion*, where \(\bm{\tau}\in\R^n\) is
the generalized force vector ("generalized" means it can contain both forces
and torques) and \(n\) is the number of DOFs of the system. As we saw above,
for our example \(n=3\). We will assume that there is no friction at the
joints.

Once we compute the Lagrangian and take the required derivatives, we find
that \eqref{eq:motion_lagrange} has the form
\begin{equation}
\bm{\tau} = \bm{M}(\bm{q})\ddot{\bm{q}} + \bm{c}(\bm{q},\dot{\bm{q}}) + \bm{g}(\bm{q}),
\label{eq:motion}
\end{equation}
where \(\bm{M}(\bm{q})\in\R^{n\times
n}\) is the mass matrix, \(\bm{c}(\bm{q},\dot{\bm{q}})\in\R^n\) is the
vector of centrifugal and Coriolis effects, and \(\bm{g}(\bm{q})\in\R^n\) is
the gravity vector. I've left out the math that takes us
from \eqref{eq:motion_lagrange} to \eqref{eq:motion}, since this post
is already quite long—check out *Modern Robotics* or another reference
for more information.

*Aside:* The form of \eqref{eq:motion} shows that the equations of
motions are a set of coupled second-order differential equations. Typically, we
integrate the equations forward through time from some initial state
\((\bm{q}_0,\dot{\bm{q}}_0)\) to find the state \((\bm{q}(t),\dot{\bm{q}}(t))\)
at some future time \(t\), given the generalized forces \(\bm{\tau}\). This is
known as *forward dynamics*.

It is common to see \(\bm{c}(\bm{q},\dot{\bm{q}})\) expressed in a few different but equivalent ways: \begin{equation*} \bm{c}(\bm{q},\dot{\bm{q}}) = \bm{C}(\bm{q},\dot{\bm{q}})\dot{\bm{q}} = \dot{\bm{q}}^T\bm{\Gamma}(\bm{q})\dot{\bm{q}}, \end{equation*} where \(\bm{C}(\bm{q},\dot{\bm{q}})\in\R^{n\times n} \) is the Coriolis matrix and \(\bm{\Gamma}(\bm{q})\in\R^{n\times n\times n}\) is the 3D array of Christoffel symbols.

Next, we will look at the mathematical expressions for \(\bm{M}(\bm{q}),\) \(\bm{C}(\bm{q},\dot{\bm{q}}),\) and \(\bm{g}(\bm{q})\), which we need in order to implement them in code.

*Aside:* It is very common for the dynamics to be computed using the
Newton-Euler equations rather than the Lagrangian formulation used in this
post. Using the Newton-Euler method is more computationally efficient, but it
does not provide closed-form expressions for \(\bm{M}(\bm{q}),\)
\(\bm{C}(\bm{q},\dot{\bm{q}}),\) and \(\bm{g}(\bm{q})\), which are useful for
control design.

We can derive the mass matrix from the kinetic energy of the system. Recall that a particle with mass \(m\) moving with velocity \(\bm{v}\) has kinetic energy \(\mathcal{K}=\frac{1}{2}m\|\bm{v}\|^2\). It turns out that the kinetic energy of our robotic system can be expressed analogously as \begin{equation} \mathcal{K} = \frac{1}{2}\dot{\bm{q}}^T\bm{M}(\bm{q})\dot{\bm{q}}, \label{eq:kinetic_energy} \end{equation} where we've used the generalized velocities and mass matrix of the robot instead of the linear velocity and scalar mass of the particle. But we still don't know how to calculate \(\bm{M}\): to do so, we need to find an expression for \(\mathcal{K}\) with the same form as \eqref{eq:kinetic_energy}.

We can find the total kinetic energy of the robotic system by determining the kinetic energy of each link and summing them up. Each link of our robot is a rigid body (rather than just a particle) in the 2D plane, so we need to account for both the linear and angular velocity. For the \(i\)th link, the kinetic energy is given by \begin{equation*} \mathcal{K}_i = \frac{1}{2}m_i\|\bm{v}_i\|^2 + \frac{1}{2}I_i\dot{\theta}_i^2, \end{equation*} where \(m_i\) is the link's mass and \(I_i\) is the link's moment of inertia. We can express this in the convenient matrix form \begin{equation} \mathcal{K}_i = \frac{1}{2}\bm{V}_i^T\bm{G}_i\bm{V}_i, \label{eq:T_i} \end{equation} where \(\bm{V}_i = [\dot{x}_i,\dot{y}_i,\dot{\theta}_i]^T\) is the link's twist and \begin{equation*} \bm{G}_i = \begin{bmatrix} m_i & 0 & 0 \\ 0 & m_i & 0 \\ 0 & 0 & I_i \end{bmatrix}. \end{equation*}

However, we need to express the kinetic energy in terms of the generalized velocity \(\dot{\bm{q}}\), rather than the twist of each link. Substituting the differential kinematic relationship \eqref{eq:diff_kin} into \eqref{eq:T_i}, we get \begin{equation*} \mathcal{K}_i = \frac{1}{2}\dot{\bm{q}}^T\bm{J}_i^T\bm{G}_i\bm{J}_i\dot{\bm{q}}. \end{equation*}

Thus the total kinetic energy is just the sum \begin{equation*} \mathcal{K} = \sum_i\mathcal{K}_i = \frac{1}{2}\dot{\bm{q}}^T\left(\sum_i\bm{J}_i^T\bm{G}_i\bm{J}_i\right)\dot{\bm{q}}, \end{equation*} which reveals that the mass matrix, as seen in \eqref{eq:kinetic_energy}, can be expressed as \begin{equation*} \bm{M} = \sum_i\bm{J}_i^T\bm{G}_i\bm{J}_i. \end{equation*}
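As a sanity check on this formula, the sketch below assembles \(\bm{M}(\bm{q})\) from finite-difference Jacobians of the link poses, using hypothetical parameter values; the result should be symmetric (and, for this robot, positive definite):

```python
import numpy as np

# hypothetical parameters: offsets, link lengths, masses, moments of inertia
lx, ly, l1, l2 = 0.1, 0.1, 1.0, 0.5
m1, m2, m3 = 3.0, 2.0, 1.0
I2, I3 = m2 * l1**2 / 12, m3 * l2**2 / 12

def poses(q):
    """CoM poses of the three links (from the kinematics section)."""
    x1, θ2, θ3 = q
    P1 = np.array([x1, 0.0, 0.0])
    P2 = P1 + np.array([lx + 0.5*l1*np.cos(θ2), ly + 0.5*l1*np.sin(θ2), θ2])
    P3 = P2 + np.array([
        0.5*l1*np.cos(θ2) + 0.5*l2*np.cos(θ2 + θ3),
        0.5*l1*np.sin(θ2) + 0.5*l2*np.sin(θ2 + θ3),
        θ3,
    ])
    return [P1, P2, P3]

def jacobians(q, h=1e-6):
    """Finite-difference Jacobians J_k = ∂P_k/∂q, one per link."""
    Js = [np.zeros((3, 3)) for _ in range(3)]
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        plus, minus = poses(q + e), poses(q - e)
        for k in range(3):
            Js[k][:, i] = (plus[k] - minus[k]) / (2 * h)
    return Js

def mass_matrix(q):
    """M(q) = Σ_i J_iᵀ G_i J_i."""
    Gs = [np.diag([m1, m1, 0.0]), np.diag([m2, m2, I2]), np.diag([m3, m3, I3])]
    return sum(J.T @ G @ J for J, G in zip(jacobians(q), Gs))
```

A nice feature to verify: the \((1,1)\) entry is simply the total mass \(m_1+m_2+m_3\), since sliding the base translates every link.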

Next, we'll move on to the Coriolis matrix. To find it, we first need the 3D array of Christoffel symbols \(\bm{\Gamma}\). It turns out that \(\bm{\Gamma}\) can be calculated directly from the mass matrix: the entry at index \((i,j,k)\) is given by \begin{equation} \Gamma_{ijk} = \frac{\partial M_{ij}}{\partial q_k} - \frac{1}{2}\frac{\partial M_{jk}}{\partial q_i}. \label{eq:christoffel_entry} \end{equation} (This is the result of some of that math we skipped between \eqref{eq:motion_lagrange} and \eqref{eq:motion}.)

The Coriolis matrix can then be found by computing \begin{equation*} \bm{C} = \begin{bmatrix} \dot{\bm{q}}^T\bm{\Gamma}_{1**} \\ \dot{\bm{q}}^T\bm{\Gamma}_{2**} \\ \dot{\bm{q}}^T\bm{\Gamma}_{3**} \end{bmatrix} = \sum_j\bm{\Gamma}_{*j*}\dot{q}_j, \end{equation*}

where \( \bm{\Gamma}_{i**}\in\R^{n\times n} \) is a matrix where the entry at \((j,k)\) is \(\Gamma_{ijk}\) and, likewise, \( \bm{\Gamma}_{*j*}\in\R^{n\times n} \) is a matrix where the entry at \( (i,k) \) is \(\Gamma_{ijk}\).
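To illustrate this construction without dragging in the full three-DOF model, here is a self-contained sketch that builds \(\bm{\Gamma}\) and \(\bm{C}\) numerically for a hypothetical two-DOF mass matrix (the familiar two-link-arm form, with made-up constants α, β, δ), using the index convention \(\Gamma_{ijk} = \partial M_{ij}/\partial q_k - \tfrac{1}{2}\partial M_{jk}/\partial q_i\):

```python
import numpy as np

# made-up constants for a hypothetical 2-DOF mass matrix
α, β, δ = 2.0, 0.5, 0.3

def mass_matrix(q):
    return np.array([
        [α + 2 * β * np.cos(q[1]), δ + β * np.cos(q[1])],
        [δ + β * np.cos(q[1]), δ],
    ])

def dMdq_fd(q, h=1e-6):
    """dMdq[k] ≈ ∂M/∂q_k via central differences."""
    out = np.zeros((2, 2, 2))
    for k in range(2):
        e = np.zeros(2)
        e[k] = h
        out[k] = (mass_matrix(q + e) - mass_matrix(q - e)) / (2 * h)
    return out

def coriolis_matrix(q, dq):
    dMdq = dMdq_fd(q)
    # Γ[i, j, k] = ∂M_ij/∂q_k - 0.5 * ∂M_jk/∂q_i
    Γ = np.zeros((2, 2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                Γ[i, j, k] = dMdq[k, i, j] - 0.5 * dMdq[i, j, k]
    # C_ik = Σ_j Γ[i, j, k] * q̇_j, i.e. row i of C is q̇ᵀ Γ_{i**}
    return np.einsum("ijk,j->ik", Γ, dq)
```

For this mass matrix, the resulting Coriolis vector \(\bm{C}\dot{\bm{q}}\) reduces to the familiar \([-\beta\sin q_2\,(2\dot{q}_1\dot{q}_2 + \dot{q}_2^2),\ \beta\sin q_2\,\dot{q}_1^2]^T\), which makes for an easy check.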

Finally, we turn to the gravity vector, for which we will need to consider the potential energy of the system. Recall that the potential energy of a particle with mass \(m\) at height \(y\) is \(\mathcal{P} = mgy\), where \(g\) is the gravitational acceleration. The potential energy of each link of the robot follows the same formula, and we just need to sum them to get the total potential energy: \begin{equation*} \mathcal{P} = g\sum_i m_iy_i. \end{equation*}

Once we have that, it turns out that the gravity vector is just the derivative of the potential energy, \begin{equation} \bm{g}(\bm{q}) = \frac{\partial\mathcal{P}}{\partial\bm{q}}. \label{eq:gravity_vector} \end{equation}
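The same finite-difference trick gives a quick numerical check of \eqref{eq:gravity_vector}. In this sketch the potential energy is written out with hypothetical parameter values, mirroring the link heights \(y_i\) from the kinematics section (with \(g_0 = 9.8\) standing in for the gravitational acceleration \(g\)):

```python
import numpy as np

# hypothetical parameters matching the earlier sketches
lx, ly, l1, l2 = 0.1, 0.1, 1.0, 0.5
m1, m2, m3 = 3.0, 2.0, 1.0
g0 = 9.8  # gravitational acceleration

def potential_energy(q):
    """P = g Σ_i m_i y_i, with link heights from the forward kinematics."""
    x1, θ2, θ3 = q
    y1 = 0.0  # the base height is constant
    y2 = ly + 0.5 * l1 * np.sin(θ2)
    y3 = ly + l1 * np.sin(θ2) + 0.5 * l2 * np.sin(θ2 + θ3)
    return g0 * (m1 * y1 + m2 * y2 + m3 * y3)

def gravity_vector(q, h=1e-6):
    """g(q) = ∂P/∂q, approximated with central differences."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (potential_energy(q + e) - potential_energy(q - e)) / (2 * h)
    return g
```

At \(\bm{q}=\bm{0}\) (arm horizontal) this gives \(\bm{g} = [0,\ g_0(\tfrac{1}{2}m_2 l_1 + m_3(l_1+\tfrac{1}{2}l_2)),\ \tfrac{1}{2}g_0 m_3 l_2]^T\): gravity exerts no generalized force on the prismatic base coordinate.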

In the previous sections we found mathematical expressions for \(\bm{M}(\bm{q})\), \(\bm{C}(\bm{q},\dot{\bm{q}})\), and \(\bm{g}(\bm{q}).\) We can easily write a program to compute the required matrix multiplications and trigonometric functions, but what about the expressions containing derivatives? In particular, we need to evaluate the link Jacobians \eqref{eq:jacobian} (to compute the mass matrix), the Christoffel symbols \eqref{eq:christoffel_entry}, and the gravity vector \eqref{eq:gravity_vector}. Again, we will do this three different ways:

- with manual differentiation,
- with automatic differentiation,
- with symbolic differentiation.

I provide most of the code for reference below, but you can also play with it in a Jupyter notebook in the browser here or download the repository here.

We'll start by specifying the parameters of the model:

```
import numpy as np

# link masses
M1 = 3
M2 = 2
M3 = 1

# base-to-arm offsets and link lengths
LX = 0.1
LY = 0.1
L2 = 1
L3 = 0.5

# moments of inertia (the base link doesn't rotate, so we don't need its inertia)
I2 = M2 * L2 ** 2 / 12
I3 = M3 * L3 ** 2 / 12

# inertia matrices
G1 = np.diag(np.array([M1, M1, 0]))
G2 = np.diag(np.array([M2, M2, I2]))
G3 = np.diag(np.array([M3, M3, I3]))

# gravitational acceleration
G = 9.8
```

For our first method, I've manually calculated and implemented expressions for all of the derivatives. The derivatives are tedious but fairly straightforward, so I'll skip the math here and get right to the code:

```
class ManualDynamics:
    @classmethod
    def mass_matrix(cls, q):
        x1, θ2, θ3 = q
        θ23 = θ2 + θ3

        m11 = M1 + M2 + M3
        m12 = -(0.5 * M2 + M3) * L2 * np.sin(θ2) - 0.5 * M3 * L3 * np.sin(θ23)
        m13 = -0.5 * M3 * L3 * np.sin(θ23)
        m22 = (
            (0.25 * M2 + M3) * L2 ** 2
            + 0.25 * M3 * L3 ** 2
            + M3 * L2 * L3 * np.cos(θ3)
            + I2
            + I3
        )
        m23 = 0.5 * M3 * L3 * (0.5 * L3 + L2 * np.cos(θ3)) + I3
        m33 = 0.25 * M3 * L3 ** 2 + I3

        M = np.array([[m11, m12, m13], [m12, m22, m23], [m13, m23, m33]])
        return M

    @classmethod
    def christoffel_matrix(cls, q):
        x1, θ2, θ3 = q
        θ23 = θ2 + θ3

        # partial derivatives of the mass matrix
        dMdx1 = np.zeros((3, 3))

        dMdθ2_12 = (
            -0.5 * M2 * L2 * np.cos(θ2)
            - M3 * L2 * np.cos(θ2)
            - 0.5 * M3 * L3 * np.cos(θ23)
        )
        dMdθ2_13 = -0.5 * M3 * L3 * np.cos(θ23)
        dMdθ2 = np.array([[0, dMdθ2_12, dMdθ2_13], [dMdθ2_12, 0, 0], [dMdθ2_13, 0, 0]])

        dMdθ3_12 = -0.5 * M3 * L3 * np.cos(θ23)
        dMdθ3_13 = -0.5 * M3 * L3 * np.cos(θ23)
        dMdθ3_22 = -M3 * L2 * L3 * np.sin(θ3)
        dMdθ3_23 = -0.5 * M3 * L2 * L3 * np.sin(θ3)
        dMdθ3 = np.array([
            [0, dMdθ3_12, dMdθ3_13],
            [dMdθ3_12, dMdθ3_22, dMdθ3_23],
            [dMdθ3_13, dMdθ3_23, 0],
        ])

        dMdq = np.zeros((3, 3, 3))
        dMdq[0, :, :] = dMdx1
        dMdq[1, :, :] = dMdθ2
        dMdq[2, :, :] = dMdθ3

        # This is equivalent to a set of for loops that fill out Γ such that
        #   Γ[i, j, k] = dMdq[k, j, i] - 0.5 * dMdq[i, j, k]
        Γ = dMdq.T - 0.5 * dMdq
        return Γ

    @classmethod
    def coriolis_matrix(cls, q, dq):
        Γ = cls.christoffel_matrix(q)
        # note numpy dot broadcasting rules: this is a sum product of dq with
        # the second-last dimension of Γ
        return np.dot(dq, Γ)

    @classmethod
    def gravity_vector(cls, q):
        x1, θ2, θ3 = q
        θ23 = θ2 + θ3
        return np.array([
            0,
            (0.5 * M2 + M3) * G * L2 * np.cos(θ2) + 0.5 * M3 * L3 * G * np.cos(θ23),
            0.5 * M3 * L3 * G * np.cos(θ23),
        ])

    @classmethod
    def force(cls, q, dq, ddq):
        M = cls.mass_matrix(q)
        C = cls.coriolis_matrix(q, dq)
        g = cls.gravity_vector(q)
        return M @ ddq + C @ dq + g
```

This method produces code that is challenging to read and debug. It is easy to make a mistake taking a derivative or writing its expression in code, and difficult to find it. We can avoid these bugs entirely by taking derivatives automatically, which is our next approach.

We will use jax to automatically compute the required derivatives. This results in more succinct and readable code than the manual version:

```
from functools import partial

import jax
import jax.numpy as jnp


class AutoDiffDynamics:
    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def link1_pose(cls, q):
        return jnp.array([q[0], 0, 0])

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def link2_pose(cls, q):
        P1 = cls.link1_pose(q)
        return P1 + jnp.array([
            LX + 0.5 * L2 * jnp.cos(q[1]),
            LY + 0.5 * L2 * jnp.sin(q[1]),
            q[1],
        ])

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def link3_pose(cls, q):
        P2 = cls.link2_pose(q)
        return P2 + jnp.array([
            0.5 * L2 * jnp.cos(q[1]) + 0.5 * L3 * jnp.cos(q[1] + q[2]),
            0.5 * L2 * jnp.sin(q[1]) + 0.5 * L3 * jnp.sin(q[1] + q[2]),
            q[2],
        ])

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def mass_matrix(cls, q):
        # link Jacobians
        J1 = jax.jacfwd(cls.link1_pose)(q)
        J2 = jax.jacfwd(cls.link2_pose)(q)
        J3 = jax.jacfwd(cls.link3_pose)(q)
        return J1.T @ G1 @ J1 + J2.T @ G2 @ J2 + J3.T @ G3 @ J3

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def christoffel_matrix(cls, q):
        # Here dMdq is transposed with respect to the dMdq's in the manual and
        # symbolic implementations. Thus, the expression for Γ is also
        # transposed, giving the same end result.
        dMdq = jax.jacfwd(cls.mass_matrix)(q)
        Γ = dMdq - 0.5 * dMdq.T
        return Γ

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def coriolis_matrix(cls, q, dq):
        Γ = cls.christoffel_matrix(q)
        return dq.T @ Γ

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def potential_energy(cls, q):
        P1 = cls.link1_pose(q)
        P2 = cls.link2_pose(q)
        P3 = cls.link3_pose(q)
        return G * (M1 * P1[1] + M2 * P2[1] + M3 * P3[1])

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def gravity_vector(cls, q):
        return jax.jacfwd(cls.potential_energy)(q)

    @classmethod
    @partial(jax.jit, static_argnums=(0,))
    def force(cls, q, dq, ddq):
        M = cls.mass_matrix(q)
        C = cls.coriolis_matrix(q, dq)
        g = cls.gravity_vector(q)
        return M @ ddq + C @ dq + g
```

Here we have been able to avoid the complicated derivative expressions by automatically computing them using `jax.jacfwd`.

I like jax because it is easy to use—it is nearly a drop-in replacement for numpy—but I do wish there was some way to save a compiled function between runs. jax does have the option to just-in-time (JIT) compile functions to XLA for extra speed (which we've done via the `@partial(jax.jit, static_argnums=(0,))` lines), though startup times can still be long when automatically differentiating complex, high-dimensional functions.

For our final method, we will perform symbolic differentiation using sympy. The code for this one is a little oddly structured, since we need to do a bunch of math before we can actually generate the class functions we want. However, I think the code is quite short and easy to follow overall.

```
import sympy as sym


class SymbolicDynamics:
    # joint configuration and velocity are functions of time
    t = sym.symbols("t")
    q = sym.Array(
        [sym.Function("x1")(t), sym.Function("θ2")(t), sym.Function("θ3")(t)]
    )
    dq = q.diff(t)

    # link poses
    P1 = sym.Matrix([q[0], 0, 0])
    P2 = P1 + sym.Matrix([
        LX + 0.5 * L2 * sym.cos(q[1]),
        LY + 0.5 * L2 * sym.sin(q[1]),
        q[1],
    ])
    P3 = P2 + sym.Matrix([
        0.5 * L2 * sym.cos(q[1]) + 0.5 * L3 * sym.cos(q[1] + q[2]),
        0.5 * L2 * sym.sin(q[1]) + 0.5 * L3 * sym.sin(q[1] + q[2]),
        q[2],
    ])

    # link Jacobians
    J1 = P1.jacobian(q)
    J2 = P2.jacobian(q)
    J3 = P3.jacobian(q)

    # mass matrix
    M = J1.transpose() * G1 * J1 + J2.transpose() * G2 * J2 + J3.transpose() * G3 * J3

    # Christoffel symbols and Coriolis matrix
    dMdq = M.diff(q)
    Γ = sym.permutedims(dMdq, (2, 1, 0)) - 0.5 * dMdq
    C = sym.tensorcontraction(sym.tensorproduct(dq, Γ), (0, 2)).tomatrix()

    # gravity vector
    V = G * (M1 * P1[1] + M2 * P2[1] + M3 * P3[1])
    g = V.diff(q)

    # compile functions to numerical code
    mass_matrix = sym.lambdify([q], M)
    coriolis_matrix = sym.lambdify([q, dq], C)
    gravity_vector = sym.lambdify([q], g)

    @classmethod
    def force(cls, q, dq, ddq):
        M = cls.mass_matrix(q)
        C = cls.coriolis_matrix(q, dq)
        g = cls.gravity_vector(q)
        return M @ ddq + C @ dq + g
```

This is probably my favourite method, for two reasons. First, the code has a very similar structure to the math, and thus is easy to follow. Second, once the symbolic computations have been done, the result can be compiled into a normal numerical Python function for (reasonably) fast evaluation. It is easy to serialize and save these functions to disk for future use with something like dill. You can even generate code in another language, like C++, and then compile that. This is the approach I take for computing some of the kinematic functions for a 9-DOF robot I use for research.
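For a taste of what that code-generation route looks like, sympy's `codegen` utility can emit compilable C source for a symbolic expression; the function name and expression below are made up for illustration rather than taken from the robot model:

```python
import sympy as sym
from sympy.utilities.codegen import codegen

q1, q2 = sym.symbols("q1 q2")
expr = sym.cos(q1 + q2)  # a made-up stand-in for one term of the dynamics

# codegen returns [(c_filename, c_source), (h_filename, h_source)]
[(c_name, c_code), (h_name, h_code)] = codegen(("example_term", expr), "C99")
```

The generated `.c`/`.h` pair can then be compiled with any C compiler, or wrapped for use from Python.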

The last part of the Jupyter notebook provides a performance comparison between the three methods. I test the speed of each method by calculating the generalized forces \(\bm{\tau}\) given a random input \((\bm{q},\dot{\bm{q}},\ddot{\bm{q}})\). I use the Jupyter `%%timeit` cell magic, which runs each function 10,000 times, repeated with seven different inputs. The approximate average results when run on my laptop are shown in the table below (approximate because each run gives slightly different numbers, but they are typically pretty close to the ones in the table).

| Method | Approximate time per call (us) |
|---|---|
| Manual | 60 |
| Automatic | 30 |
| Symbolic | 160 |

For this simple example, all of the methods are pretty fast (note the times are in microseconds). Automatic differentiation with jax is the fastest, probably because of its JIT compilation to efficient XLA code. The results in the table exclude the initial run when jax performs the compilation.

The symbolic approach is quite a bit slower than the others, even after being compiled to numerical code. I still really like the symbolic approach for its elegance, but it may be too slow for use in some cases where there are tight performance requirements. In that case, however, you'd likely translate it to C or C++ first (which sympy can do!) and then compile that, which should be much faster.

When the benchmarking code is run in Binder, I actually see automatic differentiation's time per call increase to over 80 us. The manual approach stays about the same and the symbolic approach increases slightly. Since the performance varies depending on the particular environment and application, it is hard to claim that a single approach should always be preferred; testing is required for each case.

This brings us to the end of this rather long post, in which we saw some background on Lagrangian dynamics as applied to robotics, and how we could actually implement the dynamics in a few different ways. If you have any questions or other feedback, feel free to send me an email. In particular, please let me know of any mistakes—there is a lot of math and code in this post, and thus a lot of opportunity for error!

`rm`. However, `rm` isn't fussy about what it deletes and doesn't believe in second chances. For interactive use, I think we can do a bit better. In this article, I'll first discuss some safer alternatives to `rm` and then offer my current preferred solution.

One of the older (though still maintained!) efforts to improve the safety of `rm` is safe-rm, which is a simple wrapper around `rm` that refuses to delete a list of protected directories. By default, `rm` will refuse to remove the root of the filesystem, `/`, but that's it. In contrast, `safe-rm` refuses to remove a list of important system directories, such as `/bin` and `/usr`. Additional directories can be protected by listing them in a configuration file. `safe-rm` was originally a Perl script but has recently been rewritten in Rust, and is meant to be a drop-in replacement for `rm`.

The key feature of the subsequent alternatives is that files are moved to some trash directory instead of being immediately deleted, much like typical desktop GUI trash programs. This allows files deleted by mistake to be restored until the trash is permanently deleted later.

One example is shell-safe-rm, which is a bash script that aims to be an almost-complete drop-in replacement for `rm`, except that it moves files to the directory `~/.Trash`. The tool does not provide a built-in mechanism for restoring or emptying the trash, leaving this for the user to do manually.

Another example is rip, which is written in Rust. `rip` is intended to be an intuitive and ergonomic alternative to `rm` rather than a drop-in replacement; in particular, `rip` doesn't bother with `-r` or other flags for removing directories. Like `shell-safe-rm`, it moves deleted files to a particular directory. By default, `rip` uses a directory in `/tmp`, so that the deleted files are automatically cleaned up when the computer reboots. I'm a little hesitant about this, in case my computer crashes or otherwise reboots before I realize a file was removed by mistake; I want a more predictable process for permanent deletion. Luckily, `rip` allows the destination for deleted files to be easily changed through environment variables or a command line argument.

Finally, a rather attractive prospect is to follow the same specification as common Unix desktop GUI trash programs, the XDG trash spec. A popular CLI tool that implements this spec is the Python tool trash-cli. `trash-cli` provides scripts for all of the utilities expected from a trash program: move files to trash, inspect them, restore them, or delete them permanently.

Ultimately, I want a tool with the following features:

1. Move files to a trash directory rather than immediately deleting them permanently. This is typical of GUI trash programs and a main feature of `shell-safe-rm`, `rip`, and `trash-cli`.
2. Confirmation before deleting multiple items (to avoid, say, an incorrect glob). This is actually present in plain `rm`: the `-i` flag prompts before removing each and every file; `-I` prompts once before removing directories or more than three files.
3. Refusal to delete a protected list of directories, like `safe-rm`.
4. Automatic permanent deletion of files after they have been in the trash for `N` days.

I played around with my own implementation of (1) for a while, but eventually decided to just wrap `trash-cli` to get (1) and the rest of the XDG trash spec for free. My wrapper just adds features (2) and (3), both of which are quite simple to implement, on top of the `trash-put` script from `trash-cli`.

For feature (4), I initially had each call to my wrapper check for files older than 7 days and delete them, which can cause a noticeable delay if there is a lot to delete. A better solution is to set up a cron job to regularly empty the trash automatically. One way to do this is to use the `trash-empty` script from `trash-cli` and a crontab entry like

`@daily /path/to/trash-empty 30`

which runs a daily job to remove files that have been in the trash for at least 30 days.

I'll also mention autotrash, which is a dedicated tool for automatically emptying the trash can. It can set up a systemd service to do so, or can be added as a cron job similar to the one above.

I discussed four safer versions of `rm`: `safe-rm`, `shell-safe-rm`, `rip`, and `trash-cli`. After hacking on my own toy version for a little while, I ended up wrapping `trash-cli` and using a cron job to automatically remove old trash. If you have an alternative (better?) solution to removing files in the CLI, I'd be happy to hear about it by email or Twitter.