
potential & concerns #64

Open
coffear opened this issue Aug 26, 2022 · 0 comments

Comments

@coffear

coffear commented Aug 26, 2022

This end-to-end motion planning strategy was realized by taking advantage of advances in AI and NPU hardware, which makes it pioneering work. However, I have serious doubts about its practical applicability.

What I have learned from this work is the following:

  1. The actual inputs to the network are the processed depth image, the corresponding state of the flying robot, and the goal direction. Therefore, one could train the network with synthetic ("fake") depth.
  2. The depth range matters for training: even depth information far away from the robot can affect the network's output.
  3. If the raw network output is exactly what was trained for, why does it need to be fitted again before being sent to the controller? Does this mean the network is not trained well enough? (See the sketch after this list for the data flow as I understand it.)
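
To check my understanding of points 1 and 3, here is a minimal sketch of the data flow as I read it. All names here (`plan_step`, `fit_trajectory`, `network`, `controller`) are my own placeholders, not identifiers from this repository:

```python
import numpy as np

def fit_trajectory(times, raw_waypoints, degree=3):
    """Fit each axis of the raw network output with a polynomial (point 3)."""
    coeffs = [np.polyfit(times, raw_waypoints[:, axis], degree) for axis in range(3)]
    return lambda t: np.stack([np.polyval(c, t) for c in coeffs], axis=-1)

def plan_step(network, depth_image, robot_state, goal_direction, controller):
    # Point 1: the network only sees the processed depth image, the robot
    # state, and the goal direction, so synthetic ("fake") depth rendered in
    # simulation could in principle be used for training.
    raw_waypoints = network(depth_image, robot_state, goal_direction)  # shape (N, 3)
    times = np.linspace(0.0, 1.0, raw_waypoints.shape[0])
    # Point 3: the raw prediction is fitted again before reaching the controller.
    trajectory = fit_trajectory(times, raw_waypoints)
    controller.track(trajectory)  # the fitted trajectory is tracked, not the raw one
```

If the fitting step only smooths the raw waypoints, that would partly answer my question in point 3, but I would like to confirm this.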

What I am concerned about is how far this is from practical application. Training is the key to the whole work: the space of possible combinations of depth and odometry information is effectively unbounded, and the coverage of these different scenarios determines the quality of the network model. Getting a model that works for most application scenarios would therefore require an extremely demanding training effort.
