Prototype Visualization Frontend #24
@subwaystation What do you need to get started on the front end?
@josiahseaman This is tightly connected with #20. The general aim is to untangle the current IVG implementation into a back end and a front end. IVG's front end already makes use of React, so the back end will have to be adjusted in order to visualize a calculated haplotype block.
The good news is that everything follows the same Node -> NodeTraversal -> Path model, which matches the GFA conception and should match the existing IVG conceptual model. You don't need to worry about haplotype blocks or summarization or any of that. Just get a GFA file loaded into the database (using GFA.load_from_gfa(); GFA.to_graph()) and send it to the front end for display.
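The load-and-convert step above can be sketched in plain Python. The real `GFA.load_from_gfa()` / `GFA.to_graph()` methods aren't reproduced in this thread, so the parser below is a hypothetical stand-in that hand-rolls what they would do for a tiny GFA 1.0 string: `S` records become nodes, `L` records become edges, and `P` records become paths.

```python
# Minimal, hypothetical sketch of the GFA load step. Field handling is an
# assumption based on the GFA 1.0 record layout, not on the project's
# actual GFA class.

gfa_text = """\
S\t1\tACGT
S\t2\tTTGG
L\t1\t+\t2\t+\t0M
P\tx\t1+,2+\t*
"""

def parse_gfa(text):
    """Parse S/L/P records into a dict ready for JSON serialization."""
    graph = {"node": [], "edge": [], "path": []}
    for line in text.splitlines():
        fields = line.split("\t")
        if fields[0] == "S":        # segment -> node
            graph["node"].append({"id": fields[1], "sequence": fields[2]})
        elif fields[0] == "L":      # link -> edge
            graph["edge"].append({"from": fields[1], "to": fields[3]})
        elif fields[0] == "P":      # path -> ordered node traversals
            steps = [s.rstrip("+-") for s in fields[2].split(",")]
            graph["path"].append({"name": fields[1], "nodes": steps})
    return graph

graph = parse_gfa(gfa_text)
print(graph["path"][0]["nodes"])  # ['1', '2']
```

A real implementation would also keep strand information (`+`/`-`) on edges and path steps; it is dropped here to keep the sketch short.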
That sounds good. Here is an example of a graph in JSON, formatted to be put into IVG:
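As an illustration of what such a graph JSON could look like, the sketch below uses vg-style field names (`node`, `edge`, `path`, `mapping`); these names are an assumption, not the confirmed IVG schema.

```python
import json

# Hypothetical IVG-style graph JSON. The field names follow the vg
# convention (node/edge/path with ranked mappings) and are assumptions.
graph_json = {
    "node": [
        {"id": 1, "sequence": "ACGT"},
        {"id": 2, "sequence": "TT"},
    ],
    "edge": [
        {"from": 1, "to": 2},
    ],
    "path": [
        {"name": "ref", "mapping": [
            {"position": {"node_id": 1}, "rank": 1},
            {"position": {"node_id": 2}, "rank": 2},
        ]},
    ],
}

# Serialize exactly as it would be sent to the front end.
payload = json.dumps(graph_json, indent=2)
```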
So your plan is to read from GFA (a whole graph, or just a subgraph?), extract the nodes, edges (I think IVG does not even use them), and the paths, and format them as above, so that we can directly visualize the GFA with the current IVG implementation?
Gdoc Documentation
We will need a visualization capability in order to browse our test data sets. It would be great to start with the functionality of IVG or MoMI-G. One way of doing this would be to figure out the minimum input data required for SequenceTubemap, treated as a service we can call to render objects.
Ultimately, we'd like a responsive React or Angular website that talks to the Django server through stateful view windows. The first prototype could simply contact /graph/, which returns a JSON of the whole graph, and feed it into SequenceTubemap for rendering. In the end, we will likely rewrite SequenceTubemap from scratch in order to have control of the rendering process.
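The prototype contract described above (GET /graph/ returns the whole graph as JSON) can be sketched end to end. The real server would be Django and the consumer would be SequenceTubemap; both are replaced here with stdlib stand-ins so the round trip is self-contained, and the URL path and payload shape are assumptions.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder graph payload; the real one would come from the database.
GRAPH = {"node": [{"id": 1, "sequence": "ACGT"}], "edge": [], "path": []}

class GraphHandler(BaseHTTPRequestHandler):
    """Serves GET /graph/ with the whole graph as JSON."""

    def do_GET(self):
        if self.path == "/graph/":
            body = json.dumps(GRAPH).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port; the server runs on a daemon thread.
server = HTTPServer(("127.0.0.1", 0), GraphHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the front end fetching the graph before rendering it.
url = f"http://127.0.0.1:{server.server_port}/graph/"
with urllib.request.urlopen(url) as resp:
    fetched = json.loads(resp.read())
server.shutdown()

print(fetched["node"][0]["sequence"])  # ACGT
```

In the Django version this handler collapses to a single view returning `JsonResponse(graph)`; the interesting design question is whether /graph/ ever needs to return a subgraph window rather than the whole graph.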