<!DOCTYPE html>
<html>
<head>
<title>Ch. 1 - Introduction</title>
<meta name="Ch. 1 - Introduction" content="text/html; charset=utf-8;" />
<link rel="canonical" href="http://manipulation.csail.mit.edu/intro.html" />
<script src="https://hypothes.is/embed.js" async></script>
<script type="text/javascript" src="chapters.js"></script>
<script type="text/javascript" src="htmlbook/book.js"></script>
<script src="htmlbook/mathjax-config.js" defer></script>
<script type="text/javascript" id="MathJax-script" defer
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<script>window.MathJax || document.write('<script type="text/javascript" src="htmlbook/MathJax/es5/tex-chtml.js" defer><\/script>')</script>
<link rel="stylesheet" href="htmlbook/highlight/styles/default.css">
<script src="htmlbook/highlight/highlight.pack.js"></script> <!-- http://highlightjs.readthedocs.io/en/latest/css-classes-reference.html#language-names-and-aliases -->
<script>hljs.initHighlightingOnLoad();</script>
<link rel="stylesheet" type="text/css" href="htmlbook/book.css" />
</head>
<body onload="loadChapter('manipulation');">
<div data-type="titlepage" pdf="no">
<header>
<h1><a href="index.html" style="text-decoration:none;">Robotic Manipulation</a></h1>
<p data-type="subtitle">Perception, Planning, and Control</p>
<p style="font-size: 18px;"><a href="http://people.csail.mit.edu/russt/">Russ Tedrake</a></p>
<p style="font-size: 14px; text-align: right;">
© Russ Tedrake, 2020-2022<br/>
Last modified <span id="last_modified"></span>.<br/>
<script>
var d = new Date(document.lastModified);
document.getElementById("last_modified").innerHTML = d.getFullYear() + "-" + (d.getMonth()+1) + "-" + d.getDate();</script>
<a href="misc.html">How to cite these notes, use annotations, and give feedback.</a><br/>
</p>
</header>
</div>
<p pdf="no"><b>Note:</b> These are working notes used for <a
href="http://manipulation.csail.mit.edu/Fall2022/">a course being taught
at MIT</a>. They will be updated throughout the Fall 2022 semester. <!-- <a
href="https://www.youtube.com/channel/UChfUOAhz7ynELF-s_1LPpWg">Lecture videos are available on YouTube</a>.--></p>
<table style="width:100%;" pdf="no"><tr style="width:100%">
<td style="width:33%;text-align:left;"><a class="previous_chapter"></a></td>
<td style="width:33%;text-align:center;"><a href=index.html>Table of contents</a></td>
<td style="width:33%;text-align:right;"><a class="next_chapter" href=robot.html>Next Chapter</a></td>
</tr></table>
<script type="text/javascript">document.write(notebook_header('intro'))
</script>
<!-- EVERYTHING ABOVE THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<chapter style="counter-reset: chapter 0"><h1>Introduction</h1>
<p>It's worth taking time to appreciate just how amazingly well we are able to
perform tasks with our hands. Tasks that often feel mundane to us -- loading
the dishwasher, chopping vegetables, folding laundry -- remain incredibly
challenging for robots and are at the very forefront of robotics
research.</p>
<figure><span title="Artwork by Reed Alspach"><img style="width:24%" src="data/plate_pick_final_3.JPG"/> <img style="width:24%" src="data/plate_pick_final_4.JPG"/> <img style="width:24%" src="data/plate_pick_final_5.JPG"/> <img style="width:24%" src="data/plate_pick_final_6.JPG"/> </span>
</figure>
<p>Consider the problem of picking up a single plate from a stack of plates in
the sink and placing it into the dishwasher. Clearly you first have to
perceive that there is a plate in the sink and that it is accessible. Getting
your hand to the plate likely requires navigating your hand around the
geometry of the sink and other dishes. The act of actually picking it up
might require a fairly subtle maneuver where you have to tip up the plate,
sliding it along your fingers and then along the sink/dishes in order to get a
reasonable grasp on it. Presumably as you lift it out of the sink, you'd like
to mostly avoid collisions between the plate and the sink, which suggests a
reasonable understanding of the size/extent of the plate (though I actually
think robots today are too afraid of collisions). Even placing the plate into
the dishwasher is pretty subtle. You might think that you would align the
plate with the slats and then slide it in, but I think humans are more clever
than that. A seemingly better strategy is to loosen your grip on the plate,
come in at an angle and intentionally contact one side of the slat, letting
the plate effectively rotate itself into position as you set it down. But
the justification for this strategy is subtle -- it is a way to achieve an
accurate final placement without requiring accurate knowledge of the
position/orientation of the plate.</p>
<figure>
<todo>Move this to the manip repo (or not?) and definitely implement the deferred loading.</todo>
<video style="width:114%; margin-left:-7%" controls poster="https://s3.amazonaws.com/media-p.slid.es/videos/1350152/TpzT02Cy/plate_pickup_thumb_00001.jpg" muted="">
<source src="https://s3.amazonaws.com/media-p.slid.es/videos/1350152/TpzT02Cy/plate_pickup.mp4"/>
</video>
<figcaption>
A robot picking up a plate from a potentially cluttered sink (left: in
simulation, right: in reality). This example is from the <a
href="https://www.tri.global/news/tri-taking-on-the-hard-problems-in-manipulation-re-2019-6-27">manipulation
team at the Toyota Research Institute</a>.
</figcaption>
</figure>
<p>Perhaps one of the reasons that these problems remain so hard is that they
require strong capabilities in numerous technical areas that have
traditionally been somewhat disparate; it's challenging to be an expert in all
of them. More so than robotic mapping and navigation, or legged locomotion,
or other great areas in robotics, the most interesting problems in
manipulation require significant interactions between perception, planning,
and control. This includes both geometric perception, to understand the local
geometry of the objects and environment, and semantic perception, to
understand what opportunities for manipulation are available in the scene. Planning
typically includes reasoning about the kinematic and dynamic constraints of
the task (how do I command my rigid seven degree-of-freedom arm to reach into
the drawer?). But it also includes higher-level task planning (to get milk
into my cup, I need to open the fridge, then grab the bottle, then unscrew the
lid, then pour... and perhaps even put it all back when I'm done). The
low level begs for representations using real numbers, but the higher levels
feel more logical and discrete. At perhaps the lowest level, our hands are
making and breaking contact with the world either directly or through tools,
exerting forces, rapidly and easily transitioning between sticking and sliding
frictional regimes -- these alone are incredibly rich and difficult problems
from the perspective of dynamics and control.</p>
<p>There is a lot for us to discuss!</p>
<!-- two core research problems (that I like to focus on): 1) there are many
tasks that we don't know how to program a robot to do robustly even in a
single instance in lab (reach into your pocket and pull out the keys); 2)
achieving robustness of a complete manipulation stack in the open-world. -->
<section><h1>Manipulation is more than pick-and-place</h1>
<p>There are a large number of applications for manipulation. Picking up an
object from one bin and placing it into another bin -- one version of the
famous "pick and place" problem -- is a great application for robotic
manipulation. Robots have done this for decades in factories with carefully
curated parts. In the last few years, we've seen a new generation of
pick-and-place systems that use deep learning for perception, and can work
well with much more diverse sets of objects, especially if the
location/orientation of the placement need not be very accurate. This can
be done with conventional robot hands or more special-purpose end-effectors
that, for instance, use suction. It can often be done without having a very
accurate understanding of the shape, pose, mass, or friction of the
object(s) to be picked.</p>
<p>The goal for these notes, however, is to examine a much broader view of
manipulation than is captured by the pick-and-place problem. Even our
thought experiment of loading the dishwasher -- arguably a more advanced
type of pick and place -- requires much more from the perception, planning,
and control systems. But the diversity of tasks that humans (and hopefully
soon robots) can do with their hands is truly remarkable. To see one small
catalog of examples that I like, take a look at the
<a href="http://www.shap.ecs.soton.ac.uk/about-usage.php?task=buttons/">
Southampton Hand Assessment Procedure (SHAP)</a>, which was designed as a
way to empirically evaluate prosthetic hands. Matt Mason also gave a broad
and thoughtful definition of manipulation in the opening of his 2018 review
paper<elib>Mason18</elib>.</p>
<p>It's also important to recognize that manipulation research today looks
very different from manipulation research in the 1980s and
1990s. During that time, there was a strong and beautiful focus on
"manipulation as grasping," with seminal work on, for instance, the
kinematics/dynamics of multi-fingered hands assuming a stable grasp on an
object. Sometimes still, I hear people use the term "grasping" as almost
synonymous with manipulation. But please realize that the goals of
manipulation research today, and of these notes, are much broader than that.
Is grasping a sufficient description of what your hand is doing when you are
buttoning your shirt? Making a salad? Spreading peanut butter on
toast?</p>
</section>
<section><h1>Open-world manipulation</h1>
<p>Perhaps because humans are so good at manipulation, our expectations in
terms of performance and robustness for these tasks are extremely high.
It's not enough to be able to load one set of plates in a laboratory
environment into a dishwasher reliably. We'd like to be able to manipulate
basically any plate that someone might put in the sink. And we'd like the
system to be able to work in any kitchen, despite various geometric
configurations, lighting conditions, etc. The challenge of achieving and
verifying robustness of a complex manipulation stack with physics,
perception, planning, and control in the loop is already daunting. But how
do we provide test coverage for every kitchen in the world?</p>
<p>The idea that the world has infinite variability (there will never be a
point at which you have seen every possible kitchen) is often referred to as
the "open-world" or "open-domain" problem -- a term popularized first in the
context of <a href="https://en.wikipedia.org/wiki/Open_world">video
games</a>. It can be tough to strike a balance between rigorous thinking
about aspects of the manipulation problem, and trying to embrace the
diversity and complexity of the entire world. But it's important to walk
that line.</p>
<p>There is a chance that the diversity of manipulation in the open world
might actually make the problem easier. We are just at the dawn of the era
of big data in robotics; the impact this will have cannot be overstated.
But I think it is deeper than that. As an example, some of our
optimization formulations for planning and control might get stuck in local
minima now because narrow problem formulations can have many quirky
solutions; but once we ask a controller to work in a huge diversity of
scenarios, the quirky solutions can be discarded and the optimization
landscape may become much simpler. To the extent that is true, we
should aim to understand it rigorously!</p>
</section>
<section><h1>Simulation</h1>
<p>There is another reason that we are now entering a golden age for
manipulation research. Our software tools have (very recently) gotten much
better!</p>
<p>I remember just a few years ago (~2015) talking to my PhD students, who
were all quite adept at using simulation for developing control systems for
walking robots, about using simulation for manipulation. "You can't work
on manipulation in simulation" was their refrain, and for good reason. The
complexity of the contact mechanics in manipulation has traditionally been
much harder to simulate than a walking robot that interacts with the
ground only through a minimal set of contact points. Looming even larger,
though, was the centrality of perception for manipulation; it was generally
accepted that one could not simulate a camera well enough to be
meaningful.</p>
<p>How quickly things can change! The last few years have seen a rapid
adoption of video-game quality rendering by the robotics and computer vision
communities. The growing consensus now is that game-engine renderers
<i>can</i> model cameras well enough not only to <i>test</i> a perception
system in simulation, but even to <i>train</i> perception systems in
simulation and expect them to work in the real world! This is fairly
amazing; many of us had been concerned that training a deep-learning
perception system in simulation would simply allow it to exploit quirks of
the simulated images that make the problem artificially easier.</p>
<p>We have also seen dramatic improvements in the quality and performance
of contact simulation. Making robust and performant simulations of
multi-body contact involves dealing with complex geometry queries and stiff
(measure-) differential equations. There is still room for fundamental
improvements in the mathematical formulations of and numerical solutions
for the governing equations, but today's solvers are good enough to be
extremely useful.</p>
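<p>To give a flavor of what this looks like in practice, here is a minimal
sketch (my own illustration, not one of the chapter notebooks) of standing up
a contact simulation with Drake's Python bindings. The
<code>time_step</code> value is an assumption chosen for illustration, and a
real example would load robot and object models before finalizing the
plant:</p>
<pre><code class="language-python">from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder

# Adds a MultibodyPlant (dynamics + contact solver) and a SceneGraph
# (geometry queries) to the diagram, wired together automatically.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.001)
# (A real example would load models with a Parser here.)
plant.Finalize()
diagram = builder.Build()

# Simulate the (currently empty) world forward by one second.
simulator = Simulator(diagram)
simulator.AdvanceTo(1.0)
</code></pre>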
</section>
<section><h1>These notes are interactive</h1>
<p>By leveraging the power of simulation, and the new availability of free
online interactive compute, I am trying to make these notes more than what
you would find in a traditional text. Each chapter will have working code
examples, which you can run immediately (no installation required) on <a
href="https://deepnote.com">Deepnote</a>, or download/install and run on
your local machine (see the <a href="drake.html">appendix</a> for more
details). It uses an open-source library called <drake></drake>, which has
been a labor of love for me since around 2013, when I started taking our
research code and making it more broadly available. Because all of the code
is open source, it is entirely up to you how deep into the rabbit hole you
choose to go.</p>
<p>Mortimer Adler famously said "Reading a [great] book should be a
conversation between you and the author." In addition to the interactive
graphics/code, I've added the ability to highlight/comment/ask questions
directly on these notes (go ahead and try it; but please read my note on <a
href="misc.html#annotation">annotation etiquette</a>). Adler's suggestion
was that <i>great</i> writing can turn static text into a dialogue,
transcending distance and time; perhaps I'm cheating, but technology can
help me communicate with you even if my writing isn't as strong as Adler
would have liked!</p>
<p>I have organized the software examples into notebooks by chapter. There
is a "Launch in Deepnote" button at the top of each chapter; I'd encourage
you to open it immediately when you are reading the chapter. Go ahead and
"duplicate" the chapter project and run the first cell to start up the cloud
machine. Then, as you read the text, you will find examples with
corresponding sections in the notebook.</p>
<example id="teleop2d"><h1>Teleoperation in 2D.</h1>
<p>Before we get into any autonomous manipulation, let's start by just
getting a feel for what it will be like to work on manipulation in an
online Jupyter notebook. This example will open up a new window with our
3D visualizer, and in the "Controls" menu in the visualizer, you will
find sliders that you can use to drive the end-effector of the robot
around. Give it a spin!</p>
<script>document.write(notebook_link('intro'))</script>
</example>
<example><h1>Teleoperation in 3D.</h1>
<p>I offered the 2D visualization/control first because everything is
simpler in 2D, but the simulation is actually running fully in 3D. Run the
second example to see it.</p>
<script>document.write(notebook_link('intro'))</script>
</example>
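<p>If you are curious how interfaces like the sliders above can be created,
here is a minimal sketch using Drake's Meshcat API. The slider name and range
are assumptions for illustration; the actual notebook wires the slider
values into the robot's controller:</p>
<pre><code class="language-python">from pydrake.geometry import StartMeshcat

# Start the visualizer; it prints a URL to open in your browser.
meshcat = StartMeshcat()

# Add a slider to the visualizer's "Controls" menu.
meshcat.AddSlider(name="x", min=-0.5, max=0.5, step=0.01, value=0.0)

# Poll the slider; a teleop loop would feed this value to the robot.
x_command = meshcat.GetSliderValue("x")
</code></pre>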
</section>
<section><h1>Model-based design and analysis</h1>
<p>Thanks to progress in simulation, it is now possible to pursue a
meaningful study of manipulation without needing a physical robot. But the
software advances in simulation (of our robot's dynamics, sensors, and
actuators, and of its environment) are not enough to support all of the
topics in these notes. Manipulation research today leverages a myriad of advanced
algorithms in perception, planning, and control. In addition to providing
those individual algorithms, a major goal of these notes is to attempt to
bridle the complexity of using them together in a systematic way.</p>
<p>Many of you will be familiar with <a href="http://ros.org">ROS (the
Robot Operating System)</a>. I believe that ROS was one of the best things
to happen to robotics in the last few decades. It meant that experts from
different subdisciplines could easily share their expertise in the form of
modular components. Components (as ROS packages) simply agree on the
messages that they will send and receive on the network; packages can
interoperate even if they are written in different programming languages
or run on different operating systems.</p>
<p>Although ROS makes it relatively easy to get started with manipulation,
it doesn't serve my pedagogical goal of thinking clearly about
manipulation. The modular approach to authoring the components is extremely
good, and we will adopt it here. But in <drake></drake> we ask for a little
bit more from each of the components -- essentially that they declare their
states, parameters, and timing semantics in a consistent way -- so that we
have a much better chance of understanding the complex relationships between
systems. This has great practical value as well; the ability to debug a
full manipulation stack with repeatable deterministic simulations (even if
they include randomness) is surprisingly rare in the field but hugely
valuable.</p>
<p>The key building block in our work will be <drake></drake>
<b>Systems</b>, and systems can be composed into
<b>Diagrams</b>. System diagrams have long been the modeling paradigm used
in controls, and the software side of this will feel very familiar if
you've used tools like Simulink, LabVIEW, or Modelica. These software tools
refer to the block-diagram design paradigm as "model-based design".</p>
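<p>To make this concrete, here is a minimal sketch (an illustration of the
framework, not code from the notes) that wires two primitive Systems into a
Diagram and evaluates an output:</p>
<pre><code class="language-python">import numpy as np

from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import Adder, ConstantVectorSource

# Wire two constant sources into a two-input Adder.
builder = DiagramBuilder()
source_a = builder.AddSystem(ConstantVectorSource(np.array([1.0])))
source_b = builder.AddSystem(ConstantVectorSource(np.array([2.0])))
adder = builder.AddSystem(Adder(2, 1))  # 2 inputs, each of size 1.
builder.Connect(source_a.get_output_port(0), adder.get_input_port(0))
builder.Connect(source_b.get_output_port(0), adder.get_input_port(1))
diagram = builder.Build()

# All of the state and parameters live in a Context, declared consistently.
context = diagram.CreateDefaultContext()
adder_context = adder.GetMyContextFromRoot(context)
print(adder.get_output_port(0).Eval(adder_context))  # => [3.]
</code></pre>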
<example><h1>System Diagrams</h1>
<p>Even the examples above, which relied on you to perform teleoperation
instead of having an autonomy stack, were still the result of combining
numerous parts. For any system (a system diagram is still a system)
that you have in <drake></drake>, you can visualize that diagram
directly in the notebook.</p>
<iframe width="100%" height="300px" src="https://people.csail.mit.edu/russt/uploads/manip_station.html"></iframe>
<todo>Update this to the newer rendering, and host the file in
manipulation/data.</todo>
<p>This graphic is interactive. Make sure you zoom in and click around
to get a sense for the amount of complexity that we can abstract away in
this framework. For instance, try expanding the
<code>iiwa_controller</code> block.</p>
<p>Whenever you are ready to learn more about the Systems Framework in
<drake></drake>, I recommend starting with the "Modeling Dynamical
Systems" tutorial linked from the <a href="http://drake.mit.edu">main
Drake website</a>.</p>
</example>
<p>Let me be transparent: not everybody likes this systems framework for
manipulation. Some people are just trying to write code as fast as
possible, and don't see the benefits of slowing down to declare their state
variables, etc. It will likely feel like a burden to you the first time
you go to write a new system. But it's precisely <i>because</i> we're
trying to build complex systems quickly that I advocate this more rigorous
approach. I believe that getting to the next level of maturity in our
open-world manipulation systems requires more maturity from our building
blocks.</p>
</section>
<section><h1>Organization of these notes</h1>
<p>The remaining chapters of these notes are organized around the
component-level building blocks of manipulation. Each of these components
builds on a wealth of literature (e.g. from computer
vision, or dynamics and control). Rather than be overwhelmed, I've chosen
to focus on delivering a consistent, coherent presentation of the most
relevant ideas from each field <i>as they relate to manipulation</i>, along
with pointers to the broader literature. Even finding a single notation across all of
the fields can be a challenge!</p>
<p>The next few chapters will give you a minimal background on the relevant
robot hardware that we are simulating, on (some of) the details about
simulating them, and on the geometry and kinematics basics that we will use
heavily throughout the notes.</p>
<p>For the remainder of the notes, rather than organize the chapters into
"Perception", "Planning", and "Control", I've decided to spiral through
these topics. In the first part, we'll do just enough perception, planning,
and control to build up a basic manipulation system that can perform
pick-and-place manipulation of known isolated objects. Then I'll try to
motivate each new chapter by describing what our previous system cannot do,
and what we hope it will do by the end of the chapter.</p>
<p>I welcome any feedback. And don't forget to interact!</p>
<!--
<p>One thing that we’ll leave out is humans. It’s not that I don’t like humans, but we have enough to do here without trying to model them. Let’s call it future work. The one exception is when we talk about control of the arm — a few simple heuristics at this level can make the robots much safer if operating around humans. The robot won’t understand them, but we want to make sure we still don’t hurt them. Or other robots/agents in the scene.</p>
-->
</section>
<section><h1>Exercises</h1>
<exercise id="drake_exercise"><h1>Familiarize yourself with Drake</h1>
<p><drake></drake> is a powerful and mature software library that can
support many advanced robotics applications. The motivation behind it is
described <a
href="https://medium.com/toyotaresearch/drake-model-based-design-in-the-age-of-robotics-and-machine-learning-59938c985515">in
this blog post</a>. It is extensively documented, but you have to know
where to look for it.</p>
<ol>
<li>Check out the growing list of <drake></drake> <a
href="https://github.com/RobotLocomotion/drake/tree/master/tutorials"
target="_tutorials">tutorials</a> (linked from the main Drake page). In
the <code>dynamical_systems</code> tutorial, to what value is the
initial condition, $x(0)$, set when we simulate the
<code>SimpleContinuousTimeSystem</code>? (For a flavor of what such a
system looks like in code, see the sketch after this list.)</li>
<li>The class-/function-level documentation is the most extensive
documentation in Drake. When I'm working on Drake (in either C++ or
Python), I most often have the <a
href="https://drake.mit.edu/doxygen_cxx/index.html">C++ doxygen</a>
open. The <a href="https://drake.mit.edu/pydrake/index.html">Python
documentation</a> is (mostly) auto-generated from this and isn't
curated as carefully; I tend to look there only in the rare cases that
the Python interface differs from C++.
<div>In the C++ doxygen, search for "Spatial Vectors". What ASCII
characters do we use to denote an angular acceleration in
code?</div></li>
<li>Drake is open-source. There are no black-box algorithms here. If
you ever want to see how a particular algorithm is implemented, or find
examples of how to use a function, you can always look at the source
code. These days you can <a
href="https://github1s.com/RobotLocomotion/drake" target="_vscode">use
VS Code</a> to explore the code right in your browser. What value of
<code>convergence_tol</code> do I use in the <a
href="https://github1s.com/RobotLocomotion/drake/blob/HEAD/bindings/pydrake/systems/test/controllers_test.py"
target="_vscode">unit test for "fitted value iteration"?</a> </li>
</ol>
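<p>As promised, here is a sketch of how a simple continuous-time system can
be authored as a Drake <code>LeafSystem</code>. The class name and the
dynamics ($\dot{x} = -x$) are illustrative assumptions, not the tutorial's
exact code:</p>
<pre><code class="language-python">from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder, LeafSystem

class MyContinuousTimeSystem(LeafSystem):
    """A hypothetical scalar system: xdot = -x, with output y = x."""

    def __init__(self):
        LeafSystem.__init__(self)
        state_index = self.DeclareContinuousState(1)  # One state variable.
        self.DeclareStateOutputPort("y", state_index)  # Output y = x.

    def DoCalcTimeDerivatives(self, context, derivatives):
        x = context.get_continuous_state_vector().GetAtIndex(0)
        derivatives.get_mutable_vector().SetAtIndex(0, -x)

# Simulate from an initial condition of our choosing.
builder = DiagramBuilder()
builder.AddSystem(MyContinuousTimeSystem())
diagram = builder.Build()
simulator = Simulator(diagram)
simulator.get_mutable_context().SetContinuousState([1.0])
simulator.AdvanceTo(5.0)
</code></pre>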
<p>As you use Drake, if you have any questions, please consider <a
href="https://stackoverflow.com/questions/tagged/drake">asking them on
stack<b>overflow</b></a> using the "drake" tag. The broader Drake
developer community will often be able to answer faster (and/or better)
than the course staff! And asking there helps to build a searchable
knowledge base that will make Drake more useful and accessible for
everyone.</p>
</exercise>
<!--
<exercise id="state_space_practice">
<h1>State-space practice</h1>
<p>We are going to model manipulation components using state-space models; here is a simple practice problem to get you thinking. Consider the system: $$y[n] = u[n-3],$$ where $y$ is the output of the system that you're interested in and $u$ is the input which you control, both of which are scalars. In this problem, you'll consider how to choose a state vector $x$ which allows you to model the output trajectory $y$ as $u$ changes.</p>
<p>Specifically: you will choose a state vector $x$ such that $x[n+1]=Ax[n]+Bu[n]$ and $y[n]=Cx[n],$ where $A$, $B$, and $C$ are all matrices of appropriate size.</p>
<ol type="a">
<li>
<p>What size is (the smallest vector that can be used for) $x[n]$ to represent this system? Explain your answer. (Hint: what happens when you try to simulate the system, either by hand or computationally, with a state that's the wrong size?)</p>
</li>
<li>
<p>For the state size you chose for part (a), what are corresponding $A$, $B$, and $C$ matrices to represent the dynamics of the system?</p>
</li>
<li>
Say we increase the delay between the input and the output to 5: $$y[n] = u[n-5]$$
Does the size of the state change compared with (a)? Explain why or why not.
</li>
<li>
Consider a system with slightly different dynamics:
\begin{align*}
x[n] &= x[n-2] + u[n-1] \\
y[n] &= x[n] + 1
\end{align*}
Given the following library of blocks, draw a block diagram representing the difference equation above.
<figure>
<img style="width:20%", src="figures/exercises/sum.png"/>
<img style="margin-left:10%; width:24%", src="figures/exercises/delay.png"/>
</figure>
For example, the system $y[n] = u[n-1]$ would be represented as follows:
<figure>
<img style="width:30%", src="figures/exercises/block_diagram_example.png"/>
</figure>
</li>
</ol>
</exercise>
-->
</section>
</chapter>
<!-- EVERYTHING BELOW THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<div id="references"><section><h1>References</h1>
<ol>
<li id=Mason18>
<span class="author">Matthew T Mason</span>,
<span class="title">"Toward robotic manipulation"</span>,
<span class="publisher">Annual Review of Control, Robotics, and Autonomous Systems</span>, vol. 1, pp. 1--28, <span class="year">2018</span>.
</li><br>
</ol>
</section><p/>
</div>
<table style="width:100%;" pdf="no"><tr style="width:100%">
<td style="width:33%;text-align:left;"><a class="previous_chapter"></a></td>
<td style="width:33%;text-align:center;"><a href=index.html>Table of contents</a></td>
<td style="width:33%;text-align:right;"><a class="next_chapter" href=robot.html>Next Chapter</a></td>
</tr></table>
<div id="footer" pdf="no">
<hr>
<table style="width:100%;">
<tr><td><a href="https://accessibility.mit.edu/">Accessibility</a></td><td style="text-align:right">© Russ
Tedrake, 2022</td></tr>
</table>
</div>
</body>
</html>