> NOTE: This chapter is basically just an adaptation of F1/10th's chapter on the same subject (thanks, Creative Commons!).
> Please give their content a look for more info.
As mentioned before, the first part of our autonomous stack will be the planner, which determines the point we want to send the kart to.
As discussed prior, what a planner actually does depends on the goal of the robot. In the case of TinyKart, that would be:

- Make it around the track as quickly as possible
- Don't hit anything along the way
Thus, a planner for TinyKart should aim to meet these goals. The planners introduced in this chapter will not be the best for this task, but they show a possible approach.

Considering these goals is important, as many existing approaches to planning (such as A*) don't make a ton of sense in the context of TinyKart, and will lead to suboptimal results, even if they are completely acceptable in other contexts.
First off, we will introduce planners using F1/10th's follow the gap. This is an incredibly basic algorithm that decides the next point to head to by simply finding the center point of the largest gap in each scan.
Based on this description, the algorithm would look something like:

1. Take in a LiDAR scan
2. Find the largest gap in the scan, noting its start and end indices
3. Return the center point of that gap: `scan[(start_idx+end_idx)/2]`

Of course, this is rather hand waving away item 2. What even is a gap? We can model one as a set of points from the scan that fulfill two conditions:

1. Every point in the gap is at least some minimum distance away from the kart
2. The gap contains at least some minimum number of consecutive points
Thus, the algorithm now looks something like this:

1. Take in a LiDAR scan
2. Sweep over the scan, tracking runs of consecutive points that are at least the minimum distance away
3. Of the runs containing at least the minimum number of points, keep the largest
4. Return the center point of that largest gap
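To make that concrete, here is a minimal sketch of such a search, assuming the `ScanPoint` type from the starter code (with its `dist()` method and `ScanPoint::zero()`, as used in the loop later in this chapter). Treat it as one possible shape rather than the reference implementation:

```c++
#include <cstdint>
#include <optional>
#include <vector>

// Sketch only: ScanPoint comes from the TinyKart libs, and zeroed points
// (distance 0 from the origin) are treated as bad readings.
std::optional<ScanPoint> find_gap_naive(const std::vector<ScanPoint> &scan, uint8_t min_gap_size, float min_dist) {
    size_t best_start = 0, best_len = 0;
    size_t run_start = 0, run_len = 0;

    for (size_t i = 0; i < scan.size(); i++) {
        float dist = scan[i].dist(ScanPoint::zero());

        // A point only counts towards a gap if it is valid and far enough away
        if (dist != 0.0f && dist >= min_dist) {
            if (run_len == 0) {
                run_start = i;
            }
            run_len++;

            // Keep the largest run that clears the minimum size
            if (run_len >= min_gap_size && run_len > best_len) {
                best_start = run_start;
                best_len = run_len;
            }
        } else {
            run_len = 0;
        }
    }

    if (best_len == 0) {
        return std::nullopt;
    }

    // Head for the center of the largest gap
    return scan[best_start + best_len / 2];
}
```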
And as it turns out, that's really the best you can make the naive approach. The downfall of this approach is that it has a tendency to cut corners, as the kart's limited turning radius means it needs to approach the corner from the far wall in order to arc properly. The next approach we will discuss will aim to solve this.
> NOTE: This section makes use of C++'s std::optional. If you haven't worked with it before, please check out this article here.
While things are still simple, it's time for you to get your hands dirty and get this kart moving autonomously! In this section, you will be implementing the above algorithm yourself. To begin, replace your main loop with the following:
```c++
/// Finds a target point to drive to by finding the largest gap in the scan.
///
/// \param scan Lidar scan
/// \param min_gap_size Minimum number of points in a gap required for it to be considered a gap
/// \param min_dist Minimum distance for a point to be considered part of a gap, in m
/// \return Target point to drive to, if a gap is found
std::optional<ScanPoint> find_gap_naive(const std::vector<ScanPoint> &scan, uint8_t min_gap_size, float min_dist) {
    // TODO
}

void loop() {
    noInterrupts();
    auto res = ld06.get_scan();
    interrupts();

    // Check if we have a scan frame
    if (res) {
        auto scan_res = *res;

        // Check that the frame parsed without error
        if (scan_res) {
            auto maybe_scan = scan_builder.add_frame(scan_res.scan);

            // Check if we have a 180 degree scan built
            if (maybe_scan) {
                auto scan = *maybe_scan;

                auto front_obj_dist = scan[scan.size() / 2].dist(ScanPoint::zero());

                // If object is 45cm in front of kart, stop (0.0 means bad point)
                if (front_obj_dist != 0.0 && front_obj_dist < 0.45 + 0.1524) {
                    tinyKart->pause();
                    digitalWrite(LED_YELLOW, HIGH);
                }

                // Find target point TODO tune your params
                auto maybe_target_pt = find_gap_naive(scan, 10, 2);

                if (maybe_target_pt) {
                    auto target_pt = *maybe_target_pt;

                    logger.printf("Target point: (%hi,%hi)\n", (int16_t) (target_pt.x * 1000),
                                  (int16_t) (target_pt.y * 1000));

                    // Find command to drive to point
                    auto command = pure_pursuit::calculate_command_to_point(tinyKart, target_pt, 1.0);

                    // Set throttle proportional to distance to point in front of kart
                    command.throttle_percent = mapfloat(front_obj_dist, 0.1, 10.0, 0.15, tinyKart->get_speed_cap());

                    logger.printf("Command: throttle %hu, angle %hi\n", (uint16_t) (command.throttle_percent * 100),
                                  (int16_t) (command.steering_angle));

                    // Actuate kart
                    tinyKart->set_forward(command.throttle_percent);
                    tinyKart->set_steering(command.steering_angle);
                }
            }
        } else {
            switch (scan_res.error) {
                case ScanResult::Error::CRCFail:
                    logger.printf("CRC error!\n");
                    break;

                case ScanResult::Error::HeaderByteWrong:
                    logger.printf("Header byte wrong!\n");
                    break;
            }
        }
    }
}
```

Don't worry about the loop much for now, as it contains the path tracker you will be making next chapter. A reference implementation is included here to ensure that you can test your planner. All you need to work on for now is `find_gap_naive`, which will contain the algorithm detailed above.
Once you have a complete implementation, test it with the TinyKart in a relatively constrained space, so the lidar can see things. Don't stress too much if it isn't perfect, but try to hack on the algorithm until you're confident you can't make it better.

This will likely take quite a long time, possibly days depending on how much time you have to work on TinyKart. Feel free to take your time with this, and ensure you understand what is happening. This process will be very similar to when you're working on it yourself, so it's important to get this down.
If you're ever completely stuck, there is a reference implementation in `libs/gap_follow/naive_gap_follow.cpp`. I would still try to finish this yourself though, because there are many ways to approach this implementation and things only get more complex from here.
With that experience under your belt, let's introduce another approach to the problem. This approach can be summarized as "Throw yourself at the largest wall".

Essentially, we redefine the largest gap to be the largest continuous non-zero span of points. This means that our target point will fling us directly at a wall. That's insane! Why would we want to do that? The idea is that doing so will cause us to be on the outside of corners, which means that when we corner our kart will have a wider angle to perform its turn. Done well, this almost looks like a racing line. Of course, with the current implementation, we will just plow into a wall. So what stops us from doing so?
To solve this issue, we add a "bubble" that zeroes out the points directly in front of the kart before we search for gaps.

Because our gaps cannot contain zeroed points, the kart cannot continue to head towards a wall: once it gets too close, those points are zeroed. This has the effect of the kart bouncing from one largest wall to the next, which gives the desired cornering properties mentioned above.

As it turns out, this bubble approach is actually very similar in implementation to the algorithm you just implemented. Its main differences are the addition of the bubble and the changed definition of a gap. Otherwise, you are still searching for the largest gap, and still need to sweep the scan for gaps.
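One common way to build the bubble, borrowed from F1/10th's version of the algorithm (the reference implementation in `libs` may do this differently), is to find the closest point in the scan and zero out everything within some radius of it:

```c++
#include <limits>
#include <vector>

// Sketch only: zero out every point within bubble_radius (in m) of the closest
// obstacle, so no gap can form right in front of the kart.
void apply_bubble(std::vector<ScanPoint> &scan, float bubble_radius) {
    if (scan.empty()) return;

    // Find the closest valid point to the kart
    size_t closest_idx = 0;
    float closest_dist = std::numeric_limits<float>::max();
    for (size_t i = 0; i < scan.size(); i++) {
        float dist = scan[i].dist(ScanPoint::zero());
        if (dist != 0.0f && dist < closest_dist) {
            closest_dist = dist;
            closest_idx = i;
        }
    }

    // Zero everything inside the bubble (assumes ScanPoint can be copy-assigned)
    ScanPoint center = scan[closest_idx];
    for (auto &point : scan) {
        if (point.dist(center) < bubble_radius) {
            point = ScanPoint::zero();
        }
    }
}
```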
Take your prior implementation, and adapt it to this new approach. You will need to modify both the function arguments and body, but the rest of the code should still be fine.

Once you've got this working, you're at feature parity with the reference kart! As before, a reference implementation of this algorithm is in `libs` if you get stuck. Try to avoid using it though, as this will be great practice for working on larger projects where I won't just hand you a template.

Good luck!
Ladies, gentlemen, baby lidar - it's finally time to make TinyKart autonomous!

This is going to be a multipart process, and quite a bit more involved than the prior sections. This is why we're doing this project, after all.

Before we look into algorithms, I want to give a very brief look at the way mobile robotics is 'normally' done. This will be very high level, but should give you a decent mental model of what we've been doing this whole time.

At a high level, autonomous stacks can be described using 'sense-think-act', a pretty ancient paradigm but one that works for a simple system like TinyKart.
For TinyKart, sense-think-act looks like the following:

- Sense: read a scan from the LiDAR
- Think: pick a point to drive to, and calculate the steering and throttle needed to reach it
- Act: command the steering servo and the motor
The idea of sense-think-act is that most robotics solutions form a pipeline where you read from sensors, plan based off that new data, execute those plans, and finally repeat this over and over as the sensors get new data and you progress towards your goals.

As you can see, you've actually already completed sense and act. All you need to do now is think, and wire it all together!
For mobile robotics, the think step above generally encompasses two main processes:

- Path planning
- Path tracking
Path planning is the act of taking the state of the world as input, and outputting a path for the robot to follow. How this is done is entirely dependent on your sensors and your goal for the robot.

Generally speaking, a path is represented as a sequence of points to follow, rather than a continuous line. For TinyKart, we will actually only plan to a single point, as we lack any sort of feedback on our speed, which is required to use multipoint paths.

Path planning algorithms span from general algorithms like A* or RRT to bespoke algorithms such as the gap algorithms you will be writing.
Path tracking algorithms take paths from a path planner and actually calculate the command the robot needs to perform to follow the path.

By command, we mean the values all actuators should be set to in order to continue following the path. Because of this, path trackers are independent of the path planner, but do depend on the actuators and geometry of your robot, known as its kinematics.
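For TinyKart specifically, a command boils down to a small struct like the `AckermannCommand` the starter code passes around; its shape is sketched here from how it is used in the loop (the real definition lives in the TinyKart libs):

```c++
// Sketch of the command's shape, inferred from its use in main.cpp.
struct AckermannCommand {
    float throttle_percent; // Fraction of max throttle, 0.0 to 1.0
    float steering_angle;   // Steering angle for the front wheels (assumed degrees)
};
```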
For multipoint paths, this generally looks like:

1. Find the next point on the path to track, for example where the path crosses some lookahead distance
2. Calculate the command needed to reach that point
For our single point path, we only need to do step 2, which is considerably easier.

For examples of path tracking algorithms, see Pure Pursuit, DWB, and MPC.

The following chapters will go more in depth on these two topics.
> NOTE: This chapter makes significant use of resources from this article, adapted for use in TinyKart. Please take a look there for more details on the math side of things.
Now that we have a target point, all you need to do is find out how to actually get the kart there. This is the responsibility of path trackers.

Much like path planners, path trackers come in many varieties, depending on the robot and its requirements. For example, some trackers like Pure Pursuit simply head directly to the target, some like ROS's DWB attempt to avoid obstacles, and some use advanced control algorithms such as Model Predictive Control to also handle vehicle dynamics (such as wheel slip). These range in difficulty of implementation from elementary to research.

In the last chapter, you were using a reference implementation of pure pursuit. In this chapter, you will learn how to reimplement it yourself, and get a better idea of how to tune it.
Before we can talk about pure pursuit, we first need to introduce vehicle kinematics. Kinematics is the study of how things move without respect to forces (dynamics). For example, the kinematics of a car define where the car will go given some steering angle input alone, without considering things like wheel slip that depend on surface friction. While far less accurate than using dynamics, robot kinematics gives us a good estimate of how our robot should move given some command.

Because kinematics doesn't depend on forces, it could be said that it instead relies on geometry. This means that the physical layout of the robot defines your robot's kinematics. While the possibilities are theoretically infinite, they tend to come in only a few varieties:
- Differential Drive: two independently driven wheels, with the robot turning by spinning them at different speeds
- Skid Steer: like differential drive, but with four or more wheels that skid sideways through turns (think tanks)
- Omnidirectional: special wheels (such as mecanum or omni wheels) that let the robot move in any direction without turning first

And of course...

- Ackermann: the front wheels pivot to steer while the rear wheels drive, just like a car
Because cars use Ackermann steering (as any other mechanism would fail at speed), we will only discuss Ackermann kinematics in detail. Conveniently, the kart also uses Ackermann steering, so all of these equations apply to your real work.

While one could model vehicle kinematics using all four wheels, it's common to simplify them down even further to just two - the bicycle model:
At this level of simplification, things should be pretty clear. To break it down:

- The two wheels stand in for the front and rear axles of the kart
- \( \delta \) is the steering angle of the front wheel
- \( L \) is the wheelbase, the distance between the front and rear axles
- \( R \) is the radius of the arc the kart sweeps for a given steering angle
Some things to note:

- The model assumes the wheels roll without slipping
- Holding a steering angle \( \delta \) fixed makes the kart trace a circular arc of radius \( R \)
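The one relationship we will actually need from this model ties the steering angle to the radius of that arc (the same equation is rearranged later in this chapter):

\[ \tan(\delta) = {L \over R} \]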
With that model in mind, we can now introduce pure pursuit. Pure pursuit is a path tracker that computes the command to reach some target point by simply calculating the arc to that target point, and heading directly towards it. With this in mind, it's clear why Ackermann kinematics are useful, as they give us a means to find a steering angle given an arc.

Pure pursuit is a tad more than that, however. Its main trinket is the lookahead distance. Basically, PP will only calculate arcs to points up to a set distance away from itself, in order to strike a balance between making sudden turns and slow turns. You can think of the lookahead distance as a tunable parameter that sets the "aggressiveness" of the kart's steering.

Let's go through a pure pursuit iteration step by step.
When using a path with multiple points, the first step would be to find the intersection of the lookahead distance and the path, in order to find the target point. For TinyKart we already have the target point, so this step isn't needed. However, we still need to include the lookahead distance somehow, else pure pursuit will turn far too slowly.

To do this, we simply do the following: if the target point is farther away than the lookahead distance, drag it in towards the kart until it sits at the lookahead distance.
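In code, that dragging step could look something like this (a sketch, assuming `ScanPoint` exposes its `x`/`y` fields in meters, as used elsewhere in `main.cpp`):

```c++
// Sketch only: pull the target point in so it never sits beyond the lookahead distance.
ScanPoint clamp_to_lookahead(ScanPoint target, float max_lookahead) {
    float dist = target.dist(ScanPoint::zero());

    if (dist > max_lookahead) {
        // Scale the point down along the same bearing from the kart
        float scale = max_lookahead / dist;
        target.x *= scale;
        target.y *= scale;
    }

    return target;
}
```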
To calculate the arc we need to follow to reach the target point (which, remember, is given by R), we can exploit the geometry of the problem.

To find R, and thus the arc, we can use the law of sines in the geometry above to derive:

\[ R = {distanceToTarget \over 2\sin(\alpha)} \]

where \( \alpha \) is the angle between the kart's heading and the target point.
Finally, we need to calculate the steering angle required to take the arc described by R.

By the bike model discussed prior, you can see that the steering angle relates to R by:

\[ \delta = \arctan({L \over R}) \]

By substituting in the R found in the last step, we can solve this equation for the required steering angle \( \delta \).

Now that we have our steering angle, which is our command, pure pursuit is complete.
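As a quick sanity check, here is the math worked through with made-up numbers: a target 2 m away at \( \alpha = 30^\circ \), with an assumed wheelbase of \( L = 0.3 \) m.

\[ R = {2 \over 2\sin(30^\circ)} = 2\,m \qquad \delta = \arctan({0.3 \over 2}) \approx 8.5^\circ \]

Note how a larger \( \alpha \) (a target further off to the side) shrinks R and demands a sharper steering angle.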
For a visual representation of pure pursuit in action, take a look at our pure pursuit implementation for Phoenix:
Before I hand things off to you, I want to give a brief overview of tuning pure pursuit, since there isn't much media on it.

As mentioned before, lookahead is the main parameter for pure pursuit. For TinyKart, it directly controls the aggressiveness of a turn, since we drag target points closer to the kart, rather than sampling a closer point on a path. Because of this, a closer lookahead distance will always lead to a more aggressive turning angle, so long as the target point is farther than the lookahead distance.
Because of this, tuning lookahead on TinyKart is rather simple, if tedious:

1. Pick a starting lookahead distance
2. Run the kart on the track
3. If it turns too late or clips corners, decrease the lookahead; if its turns are jerky or it oscillates, increase it
4. Repeat until you're happy with how it corners
Generally speaking, a larger lookahead distance should be faster, since turning causes the kart to lose speed from friction.
It's finally time for your last assignment! Of course, this will have you writing and tuning your own pure pursuit. Build this off of your code from the last chapter.

Find this line in your loop:

```c++
auto command = pure_pursuit::calculate_command_to_point(tinyKart, target_pt, 1.0);
```

And replace it with:

```c++
auto command = calculate_command_to_point(tinyKart, target_pt, 1.0);
```

Then somewhere in your main.cpp, add:

```c++
/// Calculates the command to move the kart to some target point.
AckermannCommand calculate_command_to_point(const TinyKart *tinyKart, ScanPoint target_point,
                                            float max_lookahead) {
    // TODO
}
```

Implement pure pursuit in this function, and test it out with your existing code.

As a bit of an extra, consider using your past work from chapter 4 to set the throttle component of the command proportionally to the distance to the objects in front of the kart.

Good Luck!
TODO cover what hardware is required, and explain its role
Finally, let's set up the TinyKart codebase now that everything else is installed.

First, let's clone the TinyKart codebase using Git. This will effectively copy the code from GitHub onto your local machine.

To do so:

1. Use the `cd` command to navigate to the directory you want to keep your code in. For example: `cd C:\users\andy\documents\code\` on Windows or `cd ~/Documents/code` on *nix.
2. Run `git clone https://github.com/andyblarblar/tinykart-academy`

Next, let's flash the code onto the MCU, and actually see it running. To do this, we need to:

1. Open the TinyKart project in VSCode
2. Connect the board's ST-Link debugger over USB
3. Build and flash the firmware with PIO
First, open VSCode. This should open to the example project from earlier. To change to TinyKart, go to the file tab, and select the "Open Folder" option, then open the tinykart folder:

Make sure that you never connect power to both of the USB ports at the same time, as this will kill the board. I have no idea why you would do this, but be warned.

The STM board we use has an integrated ST-Link debugger, which we can connect to over USB to flash the controller, debug, and more. To use this, connect a USB cable to the port on the side opposite the Ethernet jack. If the port on the other side is used, then the debugger will not be attached.

You'll know the debugger is correctly connected when the MCU pops up like a USB drive. Make sure to mount this disk. This is how PIO will know how to flash the controller.

Finally, we can flash the code to the controller! This will involve compiling the code, creating the firmware file to flash to the controller, and actually programming the controller with that firmware. Thankfully, PIO makes this process trivial.
To flash:

1. Open the PlatformIO tab in VSCode (the alien icon in the sidebar)
2. Under "Project Tasks", run "Upload" (or click the arrow icon in the bottom toolbar)
If this process has succeeded, a few things will occur:

1. The terminal will report a successful upload
2. The board will reset and start running the new firmware
To show that it's working, click the blue user button on the board. This will toggle the yellow LED:

Congrats! You now have the TinyKart software development environment set up. Before we dive in any further, I would recommend poking around the codebase and messing with the code.
TODO introduce the idea of the project, its goals, and audience