Thank you for making this work open source and letting us glimpse some of Google's achievements on the NAS platform. After going through the documentation and the existing issues, I have successfully run network search on feature data as well as on binary and multi-class image classification tasks.
The current problem: model_search only trains on a single GPU, and the computational efficiency is not satisfactory. Is there a way to modify model_search (via a distribution config?) to support synchronous or asynchronous multi-GPU training, for example something like the sketch below?
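To make the idea concrete, here is a minimal sketch of what I had in mind, assuming the tf.estimator.Estimator that model_search constructs internally could be given a RunConfig carrying a distribution strategy. I don't know where (or whether) such a config can currently be passed through, so the names below are only illustrative.

```python
import tensorflow as tf

# Hypothetical sketch: synchronous data-parallel training with MirroredStrategy.
# Assumption: model_search builds a tf.estimator.Estimator somewhere and a
# RunConfig could be threaded through to it; the exact hook is unknown to me.
strategy = tf.distribute.MirroredStrategy()  # replicate the model on all local GPUs

run_config = tf.estimator.RunConfig(
    train_distribute=strategy,  # average gradients across GPUs each step
    eval_distribute=strategy,   # optionally distribute evaluation as well
)

# If the Estimator construction inside model_search accepted this config,
# e.g. tf.estimator.Estimator(model_fn=..., config=run_config),
# each candidate architecture would then train on all visible GPUs.
```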