
ClientPSMoveAPI Programmer notes


ClientPSMoveAPI

Startup

The client application starts by calling ClientPSMoveAPI::startup(), passing a host address, a port, a callback, and callback userdata. ClientPSMoveAPI uses the "PImpl Idiom": its m_implementation_ptr is instantiated during ::startup(), and the arguments are passed along to it.
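Here's a minimal sketch of the PImpl arrangement, with names simplified (this is the shape of the idiom, not the project's exact API): the public class holds only a pointer to a hidden implementation class, and startup() instantiates that class, forwarding its arguments.

```cpp
#include <memory>
#include <string>

class ClientAPIImpl; // defined in the .cpp file, hidden from client code

class ClientAPI
{
public:
    static bool startup(const std::string &host, const std::string &port);
private:
    static std::unique_ptr<ClientAPIImpl> m_implementation_ptr;
};

// --- in the .cpp file ---
class ClientAPIImpl
{
public:
    ClientAPIImpl(const std::string &host, const std::string &port)
        : m_host(host), m_port(port) {}
    bool startup() { /* open the connection here */ return true; }
private:
    std::string m_host, m_port;
};

std::unique_ptr<ClientAPIImpl> ClientAPI::m_implementation_ptr;

bool ClientAPI::startup(const std::string &host, const std::string &port)
{
    // The implementation is instantiated during ::startup(),
    // with the arguments passed along.
    m_implementation_ptr.reset(new ClientAPIImpl(host, port));
    return m_implementation_ptr->startup();
}
```

The payoff of the idiom is that client code only ever sees the small public header; all the networking machinery can change without touching it.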

Importantly, ClientPSMoveAPIImpl inherits from IDataFrameListener, INotificationListener, and IClientNetworkEventListener. During instantiation, its constructor initializer list sets up m_request_manager, m_network_manager (see below), m_event_callback (from the callback argument), m_event_callback_userdata (from callback_userdata), and m_controller_view_map.

During initialization of m_network_manager (an instance of ClientNetworkManager), ClientPSMoveAPIImpl passes in itself as the IDataFrameListener, INotificationListener, and IClientNetworkEventListener. m_request_manager is passed in as the IResponseListener.
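A compressed sketch of this wiring, with each listener interface reduced to a single method and simplified signatures (the real declarations live in the client headers):

```cpp
#include <string>

// Sketch only: the real interfaces have more methods and protobuf types.
struct IDataFrameListener {
    virtual ~IDataFrameListener() {}
    virtual void handle_data_frame(const std::string &frame) = 0;
};
struct INotificationListener {
    virtual ~INotificationListener() {}
    virtual void handle_notification(const std::string &note) = 0;
};
struct IClientNetworkEventListener {
    virtual ~IClientNetworkEventListener() {}
    virtual void handle_server_connection_opened() = 0;
};
struct IResponseListener {
    virtual ~IResponseListener() {}
    virtual void handle_response(const std::string &response) = 0;
};

class ClientRequestManager : public IResponseListener {
public:
    void handle_response(const std::string &) override { /* match pending request */ }
};

class ClientNetworkManager {
public:
    ClientNetworkManager(IDataFrameListener *df, INotificationListener *nl,
                         IResponseListener *rl, IClientNetworkEventListener *el)
        : m_data_frame_listener(df), m_notification_listener(nl),
          m_response_listener(rl), m_netEventListener(el) {}
private:
    IDataFrameListener *m_data_frame_listener;
    INotificationListener *m_notification_listener;
    IResponseListener *m_response_listener;
    IClientNetworkEventListener *m_netEventListener;
};

class ClientPSMoveAPIImpl
    : public IDataFrameListener,
      public INotificationListener,
      public IClientNetworkEventListener {
public:
    ClientPSMoveAPIImpl()
        // The impl registers itself for data frames, notifications, and
        // network events; m_request_manager is the response listener.
        : m_request_manager(),
          m_network_manager(this, this, &m_request_manager, this) {}

    void handle_data_frame(const std::string &) override {}
    void handle_notification(const std::string &) override {}
    void handle_server_connection_opened() override {}

private:
    ClientRequestManager m_request_manager;
    ClientNetworkManager m_network_manager;
};
```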

ClientNetworkManager also uses the PImpl Idiom. During instantiation of ClientNetworkManagerImpl, we copy pointers to m_data_frame_listener, m_notification_listener, m_response_listener, and m_netEventListener, among many others. TODO: Need more device types for m_packed_data_frame.

Then ClientPSMoveAPIImpl::startup() is called, which in turn calls ClientNetworkManager::startup(), which calls ClientNetworkManagerImpl::start(). This resolves the host and port, then calls start_tcp_connect on the resulting endpoint. start_tcp_connect creates a TCP connection and passes in ClientNetworkManagerImpl::handle_tcp_connect as the connection handler, which is called when the connect operation completes. Assuming everything goes smoothly, we get the corresponding m_udp_server_endpoint and then call start_tcp_read_response_header().
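A sketch of that connect flow using boost::asio (which the real client uses), with method names borrowed from the description above but everything else simplified:

```cpp
#include <boost/asio.hpp>
#include <string>

using boost::asio::ip::tcp;

// Sketch: resolve host/port, then start an async connect whose
// completion handler continues the startup chain.
class ClientNetworkManagerSketch {
public:
    explicit ClientNetworkManagerSketch(boost::asio::io_service &io)
        : m_io_service(io), m_resolver(io), m_tcp_socket(io) {}

    void start(const std::string &host, const std::string &port) {
        // Resolve the host and port into a list of candidate endpoints.
        tcp::resolver::query query(host, port);
        tcp::resolver::iterator endpoints = m_resolver.resolve(query);
        start_tcp_connect(endpoints);
    }

private:
    void start_tcp_connect(tcp::resolver::iterator endpoints) {
        // handle_tcp_connect fires when the connect operation completes.
        boost::asio::async_connect(
            m_tcp_socket, endpoints,
            [this](const boost::system::error_code &ec, tcp::resolver::iterator) {
                handle_tcp_connect(ec);
            });
    }

    void handle_tcp_connect(const boost::system::error_code &ec) {
        if (!ec) {
            // Connected: the real code derives m_udp_server_endpoint here,
            // then enters the response-read loop.
            start_tcp_read_response_header();
        }
    }

    void start_tcp_read_response_header() { /* see the read loop below */ }

    boost::asio::io_service &m_io_service;
    tcp::resolver m_resolver;
    tcp::socket m_tcp_socket;
};
```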

start_tcp_read_response_header() is the beginning of a read loop: the header is read into m_response_read_buffer, handle_tcp_read_response_header fires and calls start_tcp_read_response_body(), and when handle_tcp_read_response_body completes we go back to start_tcp_read_response_header() to wait for the next response.
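A self-contained sketch of such a header/body read loop, assuming a plain length-prefixed header (the real wire format is protobuf-based, so this is illustrative only):

```cpp
#include <boost/asio.hpp>
#include <cstdint>
#include <vector>

class ResponseReadLoopSketch {
public:
    explicit ResponseReadLoopSketch(boost::asio::ip::tcp::socket &socket)
        : m_tcp_socket(socket) {}

    void start_tcp_read_response_header() {
        // Step 1: read the fixed-size header into the shared buffer.
        boost::asio::async_read(
            m_tcp_socket,
            boost::asio::buffer(&m_header_size, sizeof(m_header_size)),
            [this](const boost::system::error_code &ec, std::size_t) {
                if (!ec) handle_tcp_read_response_header();
            });
    }

private:
    void handle_tcp_read_response_header() {
        // Step 2: the header tells us how large the body is.
        m_response_read_buffer.resize(m_header_size);
        start_tcp_read_response_body();
    }

    void start_tcp_read_response_body() {
        // Step 3: read exactly the body, then hand it off.
        boost::asio::async_read(
            m_tcp_socket,
            boost::asio::buffer(m_response_read_buffer),
            [this](const boost::system::error_code &ec, std::size_t) {
                if (!ec) handle_tcp_read_response_body();
            });
    }

    void handle_tcp_read_response_body() {
        // Step 4: dispatch the response (handle_tcp_response_received in
        // the real code), then loop back and wait for the next one.
        start_tcp_read_response_header();
    }

    boost::asio::ip::tcp::socket &m_tcp_socket;
    uint32_t m_header_size = 0;
    std::vector<uint8_t> m_response_read_buffer;
};
```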

During handle_tcp_read_response_body, we call handle_tcp_response_received(). This is a branch point. If the response has a request id, we call m_response_listener->handle_response(response). Otherwise it is a notification: if its type is Response_ResponseType_CONNECTION_INFO we call handle_tcp_connection_info_notification, and for anything else we call m_notification_listener->handle_notification(response).
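In sketch form, with ResponseSketch standing in for the generated protobuf Response message and the dispatch targets reduced to prints:

```cpp
#include <iostream>

// ResponseSketch is a stand-in for the generated protobuf Response;
// request_id == -1 marks "no request id" here for illustration.
struct ResponseSketch {
    enum Type { GENERIC, CONNECTION_INFO };
    int request_id = -1;
    Type type = GENERIC;
};

void handle_tcp_response_received(const ResponseSketch &response)
{
    if (response.request_id != -1) {
        // A reply to a request we sent: route it to the request manager
        // (m_response_listener->handle_response(response) in the real code).
        std::cout << "response to request " << response.request_id << "\n";
    } else if (response.type == ResponseSketch::CONNECTION_INFO) {
        // Special-cased notification carrying the server's connection info.
        std::cout << "connection info notification\n";
    } else {
        // Any other unsolicited message goes to the notification listener.
        std::cout << "generic notification\n";
    }
}
```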

Remember that the m_response_listener is in fact ClientRequestManager (PImpl). ClientRequestManagerImpl::handle_response first checks that there's a corresponding entry in m_pending_requests. If it finds the request, it retrieves the RequestContext and invokes context.callback. There would only be an entry in m_pending_requests if it was put there by the App (e.g., ClientPSMoveAPI::start_controller_data_stream during AppStage_MagnetometerCalibration::enter()). Similarly, the callback would have been specified by the app.
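A sketch of that pending-request bookkeeping, with the request and response types reduced to ints:

```cpp
#include <functional>
#include <map>

// Sketch: a RequestContext stores the app-supplied callback; entries are
// keyed by request id and only exist for requests the app actually sent.
struct RequestContext {
    std::function<void(int /*response_code*/)> callback;
};

class ClientRequestManagerSketch {
public:
    void register_request(int request_id, std::function<void(int)> app_callback) {
        m_pending_requests[request_id] = RequestContext{app_callback};
    }

    void handle_response(int request_id, int response_code) {
        auto it = m_pending_requests.find(request_id);
        if (it != m_pending_requests.end()) {
            // Found the originating request: fire its callback, then forget it.
            RequestContext context = it->second;
            m_pending_requests.erase(it);
            context.callback(response_code);
        }
    }

private:
    std::map<int, RequestContext> m_pending_requests;
};
```

Note the erase-before-invoke ordering: the callback may itself issue a new request, so the map should be back in a consistent state first.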

Remember that m_notification_listener is ClientPSMoveAPIImpl. ClientPSMoveAPIImpl::handle_notification calls m_event_callback, which is defined by the App (e.g., App::onClientPSMoveEvent). The first argument to m_event_callback is the ClientPSMoveAPI::eClientPSMoveAPIEvent eventType. Currently the only specific event type handled is Response_ResponseType_CONTROLLER_LIST_UPDATED. TODO: Different event types for different devices. The app-defined callback then handles the notification, typically by checking the event type (e.g., a controller-list-updated event might trigger a request_controller_list()).
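A sketch of what such an app-defined callback might look like; the enum values here are illustrative stand-ins for ClientPSMoveAPI's real event enum:

```cpp
// Illustrative stand-in for ClientPSMoveAPI::eClientPSMoveAPIEvent.
enum eClientPSMoveAPIEventSketch {
    connectedToService,
    controllerListUpdated
};

static void onClientPSMoveEvent(eClientPSMoveAPIEventSketch eventType,
                                void *userdata)
{
    (void)userdata; // app-specific state would be recovered from this

    switch (eventType) {
    case controllerListUpdated:
        // The controller set changed on the server: re-request the list
        // (request_controller_list() in the real API).
        break;
    case connectedToService:
        // Kick off initial requests now that the connection is up.
        break;
    }
}
```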

ClientNetworkManagerImpl::handle_tcp_connection_info_notification starts a chain -> ClientNetworkManagerImpl::send_udp_connection_id() -> ClientNetworkManagerImpl::handle_udp_write_connection_id -> ClientNetworkManagerImpl::handle_udp_read_connection_result. If there are no errors, this calls three methods: start_udp_read_data_frame(), start_tcp_write_request(), and m_netEventListener->handle_server_connection_opened().
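A sketch of that handshake chain, assuming a raw 4-byte connection id and a 1-byte result on the wire (the actual packet format may differ):

```cpp
#include <boost/asio.hpp>
#include <cstdint>

using boost::asio::ip::udp;

// Sketch: write the connection id we received over TCP, wait for the
// server's acknowledgement, then open the floodgates.
class UdpHandshakeSketch {
public:
    UdpHandshakeSketch(udp::socket &socket, udp::endpoint server_endpoint)
        : m_udp_socket(socket), m_server_endpoint(server_endpoint) {}

    void send_udp_connection_id(uint32_t connection_id) {
        m_connection_id = connection_id;
        m_udp_socket.async_send_to(
            boost::asio::buffer(&m_connection_id, sizeof(m_connection_id)),
            m_server_endpoint,
            [this](const boost::system::error_code &ec, std::size_t) {
                if (!ec) handle_udp_write_connection_id();
            });
    }

private:
    void handle_udp_write_connection_id() {
        // Wait for the server's success/failure result byte.
        m_udp_socket.async_receive_from(
            boost::asio::buffer(&m_connection_result, sizeof(m_connection_result)),
            m_server_endpoint,
            [this](const boost::system::error_code &ec, std::size_t) {
                if (!ec) handle_udp_read_connection_result();
            });
    }

    void handle_udp_read_connection_result() {
        // In the real code this is where start_udp_read_data_frame(),
        // start_tcp_write_request(), and handle_server_connection_opened()
        // are all kicked off.
    }

    udp::socket &m_udp_socket;
    udp::endpoint m_server_endpoint;
    uint32_t m_connection_id = 0;
    uint8_t m_connection_result = 0;
};
```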

ClientNetworkManagerImpl::start_udp_read_data_frame() does an async receive from UDP with handler ClientNetworkManagerImpl::handle_udp_read_data_frame. This handler calls handle_udp_data_frame_received(), then goes back to start_udp_read_data_frame() to read the next frame. handle_udp_data_frame_received() gets the data frame and passes it off to m_data_frame_listener->handle_data_frame (ClientPSMoveAPIImpl). Currently this assumes the message returned by .get_msg() is a ControllerDataFramePtr. TODO: Check for the data frame type, get the correct type, then call the overloaded handle_data_frame.
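A sketch of the UDP read loop; note how the completion handler always re-arms the receive:

```cpp
#include <boost/asio.hpp>
#include <array>

using boost::asio::ip::udp;

// Sketch: each completed receive is handed to the data-frame listener,
// then another receive is queued immediately.
class UdpDataFrameLoopSketch {
public:
    UdpDataFrameLoopSketch(udp::socket &socket, udp::endpoint server_endpoint)
        : m_udp_socket(socket), m_server_endpoint(server_endpoint) {}

    void start_udp_read_data_frame() {
        m_udp_socket.async_receive_from(
            boost::asio::buffer(m_data_frame_buffer), m_server_endpoint,
            [this](const boost::system::error_code &ec, std::size_t bytes) {
                handle_udp_read_data_frame(ec, bytes);
            });
    }

private:
    void handle_udp_read_data_frame(const boost::system::error_code &ec,
                                    std::size_t bytes_read) {
        if (!ec) {
            // Parse and dispatch the frame (handle_udp_data_frame_received /
            // m_data_frame_listener->handle_data_frame in the real code) ...
            (void)bytes_read;
            // ... then immediately queue the next read.
            start_udp_read_data_frame();
        }
    }

    udp::socket &m_udp_socket;
    udp::endpoint m_server_endpoint;
    std::array<char, 1024> m_data_frame_buffer;
};
```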

ClientPSMoveAPIImpl::handle_data_frame is device-type specific. TODO: create more handle_data_frame functions for different data_frame types. This looks up the ClientControllerView and calls ApplyControllerDataFrame, which copies information from the data_frame into the instance of the Client<Device>View.
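A sketch of the Apply*DataFrame pattern, with made-up fields standing in for the real protobuf data frame contents:

```cpp
// Illustrative fields only; the real frame is a protobuf message with
// pose, buttons, sequence numbers, and so on.
struct ControllerDataFrameSketch {
    int controller_id = 0;
    float pos_x = 0, pos_y = 0, pos_z = 0;
    unsigned int sequence_num = 0;
};

struct ClientControllerViewSketch {
    float pos_x = 0, pos_y = 0, pos_z = 0;
    unsigned int last_sequence_num = 0;

    void ApplyControllerDataFrame(const ControllerDataFrameSketch &frame) {
        // Copy the latest server state into the cached view; the app
        // reads these values on its own schedule.
        pos_x = frame.pos_x;
        pos_y = frame.pos_y;
        pos_z = frame.pos_z;
        last_sequence_num = frame.sequence_num;
    }
};
```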

Remember that m_netEventListener is ClientPSMoveAPIImpl. ClientPSMoveAPIImpl::handle_server_connection_opened() calls the application event callback with event type ClientPSMoveAPI::connectedToService.

Update

On each iteration of its loop, the client application calls ClientPSMoveAPI::update() and maybe controller_view->GetDataFrameFPS(). ClientPSMoveAPI::update() calls m_network_manager.update(), and that's it.

m_network_manager is an instance of ClientNetworkManager. Its .update() calls m_io_service.poll(), where m_io_service is an instance of boost::asio::io_service; poll() runs whatever handlers are ready to run (completed socket operations, for example) and returns immediately without blocking.
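A sketch of why this is enough to drive the whole client: poll() runs any ready handlers and returns immediately, so it can sit inside the app's render loop:

```cpp
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;

    bool app_running = true;
    int frames = 0;
    while (app_running) {
        // Equivalent of ClientPSMoveAPI::update() ->
        // ClientNetworkManager::update() -> m_io_service.poll():
        // runs completed read/write handlers, then returns immediately.
        io_service.poll();

        // ... render, read controller views, etc. ...
        if (++frames == 3) app_running = false; // placeholder exit condition
    }
    return 0;
}
```

If run() were used instead of poll(), update() would block until the io_service ran out of work, stalling the application's loop.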