This lecture provides a very brief introduction to the dynamics and control of robots. Dynamics deals with the motions of systems acted upon by forces and torques. This is an important area of study for roboticists because robots move by creating forces and torques on their environment. So a robot's dynamics explains what forces and torques are needed to generate motion. Even when we know the dynamics, choosing the right forces and torques to control a robot can be a very tricky problem, and it's part of a whole field of engineering and mathematics called control theory. Today we are going to go over a very useful tool used by engineers to explicitly write out the dynamics of some simple systems, and then go over a popular method of controlling those systems. Just a word of warning: the math in this lecture is a little more involved than in many of the previous lectures, but our intention is to give you practical engineering tools that you can apply to real problems.

You likely learned about Newton's laws in your introductory physics course. They tell us that mechanical systems obey second-order differential equations, that is, differential equations with no more than two derivatives of any variable. Newton's laws let us model the accelerations of robots as a function of their positions, velocities, and external forces. In the kinematics section of this course, you learned how to generate the equations of motion for a system using free body diagrams. Free body diagrams work well for simple systems, but robotic models often use complicated geometry and coordinate systems, which can make free body diagrams cumbersome. In these cases, writing out the components of the internal forces can be tedious, especially when using rotating coordinate systems, which is pretty common when modeling robot joints. Oftentimes, writing out the energy of such systems is much simpler than writing out all the forces.
It turns out that there's another way of generating the equations of motion for a system from the mechanical energy alone, so you don't need to write out all the forces. This formulation is known as Lagrangian mechanics and is another, equivalent way of expressing Newton's laws. For systems that aren't acted upon by outside forces, Lagrangian mechanics says that the equations of motion are given by taking the kinetic energy, subtracting the potential energy, applying a special expression called the Euler-Lagrange operator, and setting the result to zero. One big advantage of Lagrangian mechanics is that, since this process is so simple, it is easy to automate in software once you can write down the system's energy.

Let's go over this in more detail before we do a simple example. The mechanical energy of a system is the sum of its kinetic energy, the energy from its motion, and its potential energy, for example the energy due to gravity or springs. Even in complicated systems, the kinetic and potential energies are often much easier to write out than the forces are. The difference between the kinetic and the potential energy has a name: it's called the Lagrangian. By applying the Euler-Lagrange operator to the Lagrangian and setting the result to zero, you get the equations of motion. The Euler-Lagrange operator simply takes the partial derivative of the Lagrangian with respect to the velocities, differentiates that with respect to time, and subtracts away the partial derivative with respect to the positions. The form of the Euler-Lagrange operator, and the reason all of this works, is a consequence of a principle in physics called the principle of least action. We don't have time to go into the details today, but feel free to look it up online if you're interested. Finally, all of this accounts for internal forces, but if external forces are acting on the system, all you need to do is replace the zero at the end with those forces.
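In symbols, with generalized coordinates q, the recipe just described is the standard statement of the Euler-Lagrange equations (tau denotes the external generalized forces):

```latex
L(q,\dot{q}) = T(q,\dot{q}) - V(q), \qquad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = \tau
```

For a system with no external forces, the right-hand side is zero.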
As a quick example of how to use Lagrangian mechanics, let's derive the equations of motion for a simple pendulum. A simple pendulum consists of a mass m attached to a rod of length l that is free to rotate around some stationary pivot point. Gravity pulls the mass downwards. Let's say that when the rod points vertically downwards it has an angle of zero radians, so the state of the system is given by the rod angle and the rod angular velocity.

The first step is to write out the kinetic and potential energy for the system. The kinetic energy is given by one half the moment of inertia times the angular velocity squared, which simplifies to one half the mass times the rod length squared times the angular velocity squared. The potential energy is simply the energy due to gravity, which is equal to the negative of the mass times the rod length times the acceleration due to gravity, times the cosine of the rod angle. The Lagrangian is then the difference between the kinetic and the potential energy.

Let's apply the Euler-Lagrange operator to the system term by term. The partial derivative of the Lagrangian with respect to position is equal to negative mgl sine theta. In this case we get a scalar, but if the system had multiple degrees of freedom we would instead get a vector whenever we take a partial derivative. The partial derivative of the Lagrangian with respect to velocity is equal to ml squared times the angular velocity, and differentiating with respect to time gives ml squared times the angular acceleration. We get the equations of motion when we combine these two terms and set the result to zero. Notably, when we simplify the equations, we see that the mass cancels out, leading to the interesting result that the frequency of a simple pendulum is unaffected by its mass. The equations of motion resulting from the Euler-Lagrange operator have a nice structure that can be useful to understand.
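Since this process is easy to automate in software, here's a minimal sketch using the SymPy symbolic library to derive the pendulum's equation of motion from its energy alone (the variable names are our own, not from the lecture):

```python
import sympy as sp

# Symbols: mass m, rod length l, gravity g, time t, and rod angle theta(t).
m, l, g, t = sp.symbols('m l g t', positive=True)
theta = sp.Function('theta')(t)
thetadot = sp.diff(theta, t)

# Kinetic and potential energy, as written out in the text.
T = sp.Rational(1, 2) * m * l**2 * thetadot**2
V = -m * g * l * sp.cos(theta)
L = T - V  # the Lagrangian

# Euler-Lagrange operator: d/dt (dL/d thetadot) - dL/d theta.
eom = sp.diff(sp.diff(L, thetadot), t) - sp.diff(L, theta)
print(sp.expand(eom))  # m*l**2 * theta'' + m*g*l*sin(theta)
```

Setting the printed expression to zero gives exactly the equation of motion derived by hand above.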
To demonstrate this structure, we've gone ahead and derived the equations of motion for the spring-loaded inverted pendulum, or SLIP, template. By rearranging terms, we can put the equations into a generalized f = ma form, where the accelerations are multiplied by an inertia matrix containing the mass terms, and all of this equals the sum of the generalized forces. These forces include the Coriolis and centrifugal forces caused by rotating reference frames, represented by the matrix C, and the forces from potential fields such as gravity, represented by the vector N. External forces, represented by the vector tau, are added to the right side of the equation as well. This representation of the general dynamics, regardless of the geometry or coordinate system, shows that forces always need to act through the mass to accelerate the system. Thus, as robots get heavier, they lose the ability to accelerate as quickly. Putting big, heavy actuators on a robot isn't always a good idea if those actuators can't generate a lot of torque or force for their weight. Sometimes a light motor that produces a lot of torque for its weight can accelerate a robot more than a heavy one that produces more torque overall but less for its weight.

This concludes our discussion of using Lagrangian mechanics to write out the dynamics when modeling a robot. Now that we know how to write out the dynamics for a robot, telling us how the forces and torques generated by the actuators cause the system to move, what do we do with it? Well, we would like our robot to choose the right actuator commands so that it executes some kind of desired behavior. Often this takes the form of going to a certain state, for example setting some desired joint angles or moving forward at, say, one meter per second. Today we'll focus on these sorts of behaviors, where a robotic task can be encoded as reaching some desired position or velocity.
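Before turning to control, the generalized f = ma structure described above, M(q)·q̈ + C(q, q̇)·q̇ + N(q) = τ, can be made concrete for the simple pendulum from earlier, where every term reduces to a scalar (parameter values here are hypothetical, chosen for illustration):

```python
import math

# Simple pendulum in generalized form: M(q)*qdd + C(q, qd)*qd + N(q) = tau.
m, l, g = 1.0, 0.5, 9.81  # hypothetical mass, length, gravity

def M(q):          # inertia "matrix" (here 1x1): m*l^2
    return m * l**2

def C(q, qd):      # Coriolis/centrifugal term: zero for a single pendulum
    return 0.0

def N(q):          # potential (gravity) term: m*g*l*sin(q)
    return m * g * l * math.sin(q)

def forward_dynamics(q, qd, tau):
    # Solve M*qdd = tau - C*qd - N for the acceleration: forces must act
    # through the mass, so heavier systems accelerate less for the same tau.
    return (tau - C(q, qd) * qd - N(q)) / M(q)

# With no torque and the rod horizontal, gravity alone accelerates the rod:
print(forward_dynamics(math.pi / 2, 0.0, 0.0))  # -g/l = -19.62
```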
This idea of using the inputs to a system, which in our case are motor commands, to achieve some desired behavior is ubiquitous enough in engineering and mathematics that it is itself a field, called control theory. We do not have time today to do any justice to a proper introduction to control theory, but we're going to briefly introduce two techniques that are relevant to getting robots to achieve some desired states. These techniques are called inverse dynamics control and proportional derivative control.

Our first attempt to control the equations of motion of a robot will be something called inverse dynamics control. We are introducing it first because, in some sense, it's the most obvious thing to try, but we'll see that it often doesn't work well in practice. The idea here is that the motors can generate arbitrary accelerations on the system if they just cancel out the natural dynamics. By choosing torques and forces that cancel everything out but the acceleration, the actuators can, in theory, make the system accelerate at will. Using this control technique, we can get a system to go wherever we want, arbitrarily fast.

Let's give an example using pendulum-like dynamics, as they show up so much in robotics. Imagine that you're balancing a pendulum on your hand and you want the pendulum to point straight up, but you can only move your hand side to side to balance it. You may have tried this for fun with a meter stick or a ruler. This system can be modeled as a pendulum attached to a cart that can roll left or right, where only lateral forces can be applied to the cart. Here's a simulation of inverse dynamics control applied to a pendulum on a cart. The goal is to get the pendulum to point straight up, or be at an angle of theta equals zero radians. Remember, the only available input we have to control the pendulum angle is to push the cart left or right.
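The cart-pole simulation itself isn't reproduced here, but the idea of inverse dynamics control is easy to sketch on a simpler, directly torque-driven pendulum (hypothetical parameters and gains; the plant is ml²·q̈ + mgl·sin q = τ): choose τ to cancel the modeled dynamics and impose any desired acceleration.

```python
import math

# Plant: a torque-driven simple pendulum, m*l^2*qdd + m*g*l*sin(q) = tau.
m, l, g = 1.0, 0.5, 9.81  # hypothetical true parameters

def simulate(m_hat, l_hat, q_des=1.0, dt=1e-3, steps=5000):
    """Inverse dynamics control using the controller's model (m_hat, l_hat)."""
    q, qd = 0.0, 0.0
    for _ in range(steps):
        # Desired acceleration: drive the angle to q_des (a PD-shaped target law).
        a_des = -25.0 * (q - q_des) - 10.0 * qd
        # Cancel the modeled gravity term and impose a_des through the model inertia.
        tau = m_hat * l_hat**2 * a_des + m_hat * g * l_hat * math.sin(q)
        # Integrate the *true* dynamics (semi-implicit Euler).
        qdd = (tau - m * g * l * math.sin(q)) / (m * l**2)
        qd += qdd * dt
        q += qd * dt
    return q

print(simulate(m, l))              # perfect model: settles very near q_des = 1.0
print(simulate(1.3 * m, 0.8 * l))  # mismatched model: a residual error remains
```

With a perfect model the closed loop accelerates exactly as commanded; with mismatched parameter estimates (the tilde quantities in the lecture) the achieved acceleration differs from the desired one, and the angle settles away from the goal.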
In this simulation things work out pretty well: we can cancel out the dynamics and accelerate the pendulum as fast as we want to vertical. But in robotics we often don't know the dynamics precisely, or the actual dynamics might not exactly match the simplified model. For example, a walking robot might look approximately like a rimless wheel, but in actuality, models never exactly match the physical world. If the control scheme doesn't account for model uncertainty, problems can occur. Here we wrote out the acceleration using inverse dynamics control when the controller has an imperfect estimate of the parameters, represented here by matrices with tildes. As you can see, the actual acceleration is no longer the desired acceleration f.

Another possible issue with inverting the dynamics for control is that the forces needed to cancel out the dynamics can be very high. Notice how much the cart gets pushed around. In real life, actuators can be very torque- or power-limited, and often not enough actuators exist to cancel out the natural dynamics. As we've seen previously in this course, a better strategy in legged robotics is often to utilize the natural dynamics as much as possible instead of canceling them out, so as to minimize the needed motor torque. Here's an example of the cart when it has an imperfect internal model of the parameters. As you can see, the pendulum falls over.

An extremely popular method of control that is more robust to parameter variations is called proportional derivative control, or PD control. A PD controller has two parts. The first is the P part, or proportional term. The proportional controller tries to push the system to a desired value in proportion to the difference between the actual and the desired value. We call this difference the error. So when the system is close to where it should be, that is, when the error is low, the controller doesn't push very hard, but when it is very far away the controller pushes much harder.
If you're a mechanical thinker, you might imagine proportional control as a spring pulling the system to a desired value, with that value encoded by the rest length of the spring. A problem with using only proportional control is that it can drive the system toward a desired point, but it doesn't do anything explicitly to make sure the system stops there. In the case of our spring analogy, without damping a spring constantly oscillates and overshoots its rest length. Here's the pendulum on the cart using just proportional control. The error does repeatedly go to zero, but there's no natural damping in this system, so the goal is continually overshot.

The D, or derivative, term adds damping to the system to prevent this from happening. The derivative term is proportional to the rate of change of the error: it tries to prevent the error from changing too fast, which prevents the perpetual overshoot problem we just mentioned. This derivative term is added to the proportional term, and the sum of these terms is used as the input to the system.

Another way of thinking about PD control is from an energy perspective, where we imagine the error as having a virtual energy associated with it. The proportional part of the PD controller acts like a force from a potential field, which can drive the error toward the position where its potential energy is minimized. But it can't change the total energy, so the system will still have kinetic energy at that point. The derivative portion acts to dissipate this energy, reducing the total energy so that eventually the error goes to zero with no kinetic energy left. Because of this, PD control can be considered part of a class of controllers called potential dissipative control, which explicitly considers the virtual energy landscape we've used as an analogy. Here's the cart with the derivative term added in to use full PD control. Notice how the derivative term damps out the oscillations that we saw previously.
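As a small sketch of the difference the derivative term makes, here's P-only versus PD control on a directly torque-driven simple pendulum (hypothetical parameters and gains, not from the lecture's cart-pole simulation), measuring how fast the pendulum is still moving near the end of the run:

```python
import math

# Plant: torque-driven simple pendulum, m*l^2*qdd + m*g*l*sin(q) = tau.
m, l, g = 1.0, 0.5, 9.81   # hypothetical parameters
kp, kd = 20.0, 4.0         # hypothetical proportional and derivative gains

def late_peak_speed(use_derivative, q_des=0.5, dt=1e-3, steps=8000):
    q, qd, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        tau = kp * (q_des - q)       # proportional term: push toward the goal
        if use_derivative:
            tau -= kd * qd           # derivative term: oppose the rate of change
        qdd = (tau - m * g * l * math.sin(q)) / (m * l**2)
        qd += qdd * dt               # semi-implicit Euler integration
        q += qd * dt
        if i >= steps - 2000:        # peak speed over the last quarter of the run
            peak = max(peak, abs(qd))
    return peak

print(late_peak_speed(False))  # P-only: still oscillating, speed stays large
print(late_peak_speed(True))   # PD: oscillations damped out, speed near zero
```

With gravity left uncompensated, even the PD case settles with a small constant offset from q_des; that residual error is one motivation for the integral term in PID control.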
Since PD controllers use the state of the system to decide what to do, they're a classic example of feedback controllers. PD control can work very well in linear systems; however, most real-life systems are nonlinear. Nonlinear systems act approximately linearly close to their equilibrium states, so in cases where a system's range of operation is close to an equilibrium, PD control can often work well even if the system is nonlinear, just as in the pendulum-on-a-cart example. Also, frequently another term is added that integrates the error, in what is called PID control.

Before we end, we are going to review the Raibert controller, which we introduced at the end of the template section, to show that PD control can be used on real robots to great effect. Recall that Raibert's hoppers were approximately SLIP-like machines that had actuators capable of producing leg torques to vary the leg angle, as well as radial leg forces. Raibert's control idea was to provide a constant radial force in stance to get a vertical hop, to use angular torque to control the body pitch, and to use the leg touchdown angle to control the forward speed through the natural body dynamics. The forward speed controller is a little more complicated, but essentially uses proportional control. The first term estimates how far ahead of the body the foot should touch down to avoid causing a net forward acceleration in stance, and the second term adds an offset in proportion to the current speed error, so that the robot accelerates to the desired speed over time. Oftentimes in legged robotics, due to limited actuation, the natural dynamics must be taken into account, and Raibert's control scheme demonstrates that PD control can leverage the natural dynamics, through the use of the forward speed controller, to achieve impressive performance.

We've covered a lot in this lecture.
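The two terms of that forward-speed controller can be sketched in a few lines (the gain and timing values here are hypothetical, not Raibert's actual numbers):

```python
def touchdown_offset(speed, desired_speed, stance_time, k_v=0.05):
    """Forward distance from hip to foot at touchdown, Raibert-style."""
    # Neutral point: place the foot half the stance travel ahead of the hip,
    # so that stance produces no net forward acceleration.
    neutral = speed * stance_time / 2.0
    # Proportional correction on the speed error: land further forward to
    # slow down, further back to speed up.
    return neutral + k_v * (speed - desired_speed)

# Moving at 1.5 m/s but wanting 1.0 m/s: the foot lands ahead of the
# neutral point, so stance decelerates the body.
print(touchdown_offset(1.5, 1.0, stance_time=0.2))  # 0.15 + 0.025 = 0.175
```

At the desired speed the correction vanishes and the foot lands at the neutral point, letting the natural stance dynamics do the work.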
But I hope it's given you a practical introduction to some engineering tools that will let you model and control legged robots for yourself. We've gone over the necessity of understanding the dynamics of legged robots, and shown that you can write down the equations of motion directly from the energy using Lagrangian mechanics. This process is often simpler than using free body diagrams for more complicated systems, and it can be easily automated in software. Finally, we introduced proportional derivative control as a way of controlling these dynamics to get the robot to do what you want.