
Implement rendering of simulation results on neuron geometries in Geppetto #168

Closed

slarson opened this issue Jan 3, 2014 · 21 comments

@slarson
Member

slarson commented Jan 3, 2014

As part of the Geppetto simulation engine, it would be extremely helpful to be able to watch the change of voltage in the C. elegans neural network overlaid on top of the shapes of the neurons. This would rapidly give users an intuition about what is happening in the neuronal network.

An example of what this looks like can be seen here, implemented in neuronvisio. Here's another example, implemented in the Whole Brain Catalog.

Unfortunately, both of these tools, neuronvisio and the Whole Brain Catalog, can be very challenging to install. The other option is to use neuroConstruct to visualize these changes, but it provides no good way to control the color range, so it is difficult to see changes happening. These software limitations pose a serious barrier to entry for the general programming community getting involved with exploring the dynamics of the neuronal network model.

Geppetto currently has the means to render neuronal shapes in the web browser using WebGL, which provides massive benefits to the end user: when Geppetto is run as a service, they don't have to install anything. The JavaScript code responsible for this is over here. The basic idea behind this code is what enables the OSB to render the C. elegans NeuroML. What Geppetto can't do is render time series on the morphologies. This has never been done in WebGL before.

Both neuronvisio and neuroconstruct have the kind of animation code we would need in Geppetto. Another example is from the Whole Brain Catalog.

The simulation results driving the visualization would need to be encoded in some form that lends itself to streaming from the back end to the front end. neuroConstruct, neuronvisio, and the Whole Brain Catalog all used different formats to accomplish this.
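To make the streaming idea concrete, here is a minimal sketch of what a per-timestep frame format could look like. This is purely illustrative: it is not the format used by neuroConstruct, neuronvisio, or the Whole Brain Catalog, and the field names (`t`, `v`) and compartment ids are invented for the example. The point is that each frame carries one value per watched compartment, keyed by id, so the front end can apply it without knowing the full morphology.

```javascript
// Hypothetical frame format for streaming simulation results
// (illustrative only; not any of the three tools' actual formats).
// `values` maps a compartment/entity id to a scalar, e.g. membrane
// potential in mV, at simulation time `time`.
function encodeFrame(time, values) {
  return JSON.stringify({ t: time, v: values });
}

function decodeFrame(json) {
  const frame = JSON.parse(json);
  return { time: frame.t, values: frame.v };
}

// Example round trip with made-up C. elegans neuron ids:
const msg = encodeFrame(0.025, { AVAL: -64.3, AVAR: -65.1 });
const frame = decodeFrame(msg);
// frame.time is 0.025; frame.values.AVAL is -64.3
```

A text-based format like this is easy to debug but verbose; a binary framing (as discussed later in the thread) would trade readability for throughput.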

@tarelli
Member

tarelli commented Jan 3, 2014

@slarson To do this with the C. elegans nervous system, a prerequisite is to expand Geppetto's neuronal simulator (currently based on jLEMS) to support multi-compartment neurons to begin with. I will be attending a meeting on the 24th of January with @robertcannon, @pgleeson and @borismarin to see what the current status of this is in jLEMS, and then we can evaluate the best way to move forward. I thought we had already captured the need to add multi-compartment support in another issue, but I can't seem to find it now; is it still there?

@vellamike
Contributor

This would certainly be a fantastic addition! Neuronvisio is really useful, but as has been pointed out, it's extremely difficult to get it to work.


@slarson
Member Author

slarson commented Jan 3, 2014

@tarelli Note that neither neuronvisio, neuroConstruct, nor the Whole Brain Catalog implemented its own neuronal simulator; they rely on the simulation happening elsewhere and render the results. While I'm a big fan of expanding the Geppetto neuronal simulator, I don't view it as a prerequisite to this visualization capability. If someone could implement the visualization to run as a pre-recorded movie, it would still be a huge advantage over what is there now.

@tarelli
Member

tarelli commented Jan 4, 2014

@slarson I know; none of them is a simulator. But whether we are talking about a) improving the simulator, b) wrapping a preexisting simulator, or c) adding the capability to replay the recording of a simulation (what standard? what format?), visualisation always comes last. Don't get me wrong: having flashing morphologies is definitely a goal, but there are some other things to do before we can get there, regardless of which scenario we pick. The first practical step as I see it would be to parametrize the material for a single-compartment cell.

@vellamike
Contributor

The nice thing about option c is that it would solve a problem that exists right now, so Geppetto would be a useful tool to a lot of people. It would also help us a lot in the next stage of investigating neural activity.

There is no standard for this kind of data, as it is rarely stored.

@tarelli what do you mean by parameterizing the material for a single-compartment cell?

@tarelli
Member

tarelli commented Jan 4, 2014

@vellamike Which problem exactly are you referring to? Do we have meaningful multi-compartment simulations we want to "replay" at the moment? Am I missing something? Just trying to understand...
What I mean is having a single-compartment cell change material, e.g. color or luminosity, as a consequence of the variation of a given variable, e.g. membrane potential.
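The "parametrized material" idea described above can be sketched as a small mapping from a variable's value to a colour that a renderer could assign to the cell's material on each frame. Everything below is an editor's illustration, not Geppetto code: the voltage range and the blue-to-red ramp are assumptions.

```javascript
// A minimal sketch of parametrizing a material by membrane potential.
// Maps a voltage (mV) to an RGB colour; a renderer such as three.js
// could feed this into material.color each frame. The range [-80, 40]
// and the blue-to-red ramp are assumed, not taken from Geppetto.
function voltageToColor(v, vMin = -80, vMax = 40) {
  // Normalise the potential into [0, 1], clamping out-of-range values.
  const t = Math.min(1, Math.max(0, (v - vMin) / (vMax - vMin)));
  // Linear ramp from blue (resting) to red (depolarised).
  const r = Math.round(255 * t);
  const b = Math.round(255 * (1 - t));
  return { r, g: 0, b };
}

voltageToColor(-80); // resting potential: pure blue {r: 0, g: 0, b: 255}
voltageToColor(40);  // peak of a spike: pure red {r: 255, g: 0, b: 0}
```

The same mapping generalises to luminosity or any other material property; the renderer just re-applies it whenever a new value of the watched variable arrives.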

@vellamike
Contributor

I mean the general problem in the community that there is no good way to visualise multi-compartmental simulations. If Geppetto provided this it would be really nice. The morphology visualisations in Geppetto are already the best out there IMO, so it would be a great feature.

I agree, of course, that the problem needs to be solved at the single-compartment level first.

@borismarin
Member

@tarelli @slarson I guess that it boils down to drawing a centerline, which defines the middle in Geppetto as _middle_ware. Your current architecture has been built on the premises of modularity/reusability, and given the bleeding-edginess of the stuff you are aiming for, starting to add hooks for domain-specific simulators/engines will be fundamental. I don't expect you to wrap solvers for all the kinds of formalisms employed by the modelers; isn't it all about interfacing with bullet-proof/industry-standard/time-proven tools? Wrapping a crude ODE stepper to perform quick and dirty test runs is one thing; multi-agent CFD interacting in real time with agents described by DAEs is another beast entirely. In other words, I think that you should focus on making it easy for Geppetto to speak (fast!) to the heavy-lifters.

In any case, you will need to come up with a standard data interchange format. I find the unix-ish idea of using text streams as a universal interface very appealing, but since you are already having trouble with high-speed streaming, I suggest having a look at HDF5.

@gidili
Member

gidili commented Jan 5, 2014

Marry me Boris.

@gidili
Member

gidili commented Jan 5, 2014

It can be arranged.

@vellamike
Contributor

So what are everyone's thoughts on the priority of this? And if it is a high priority, which way should it be achieved?

My view is that 1. it is a high priority (because it is something we will want in the future anyway, and it fills a gap that people need filled) and 2. it should initially be accomplished by adding the capability to replay the recording of a simulation. The format is something we could have a separate discussion about.

@richstoner
Member

@tarelli @vellamike Loop me in on the messaging format. HDF5 isn't practical in a browser world; you're better off with something like http://binaryjs.com/.

@tarelli
Member

tarelli commented Jan 8, 2014

@richstoner thanks Rich! This is the issue related to that discussion; I will mention you from there.

@slarson
Member Author

slarson commented Jan 8, 2014

@vellamike It sounds like folks are seeing the value of this. While it of course depends on the bandwidth of the folks who decide to implement it, I think the value to the community of being able to visualize this would make it a high priority. I defer to @tarelli, though, for formulating this into a concrete roadmap that others could participate in.

@tarelli
Member

tarelli commented Jan 9, 2014

@slarson @vellamike I created #172 to capture the first required action (following the "replay a recorded simulation" trail) before we can move forward with having multi-compartment neurons lighting up in Geppetto. Note we can already start doing this for single-compartment neurons using the jLEMS module to run the simulation.

@slarson
Member Author

slarson commented Jan 9, 2014

Great!


@tarelli
Member

tarelli commented Jan 17, 2014

@mlolson would you be interested in looking at this? As I wrote in my last comment, we can already light up the morphologies of single-compartment neurons (the first sample of the list, for instance). If you are interested we can set up a meeting and look at what needs to be done together; any other folks are also welcome to join (cc @jmartinez, @gidili, @vellamike, etc.)

@mlolson

mlolson commented Jan 17, 2014

@tarelli yeah sounds great! let's set up a meeting, next week perhaps?

@tarelli
Member

tarelli commented Jan 17, 2014

@mlolson great, I will send an invite.

@mlolson

mlolson commented Feb 25, 2014

Hello everyone,

I had an email thread with Matteo about this issue, and I wanted to post a summary here in case anyone has input (I'm copy-pasta-ing snippets from a few emails here). I'm not sure if this thread is the right place for this discussion; feel free to direct me elsewhere if it's not.

Matteo created a branch "lighting" for the frontend.
Inside GEPPETTO.js there is a stub:

GEPPETTO.lightUpEntity = function(jsonEntity, intensity)

This method is what will get called from the Geppetto API to light up an entity; at the moment the function takes an id and sets the color of the corresponding entity to white (ignoring the intensity parameter).

To try it, load the first example from the list and, in the JavaScript console (the browser one, not the Geppetto one), type GEPPETTO.lightUpEntity("hhcell");
If you load the Purkinje cell (the second-to-last sample) you can use GEPPETTO.lightUpEntity("purk2"); to make the whole cell go white.

Originally we wanted to do something similar to the effect that is happening in this demo:

http://stemkoski.github.io/Three.js/Shader-Halo.html

However, upon investigation we found that the effect is achieved by creating a semi-transparent "halo" object with dimensions larger than the target object's geometry, which is easy to calculate when the object is a cube or sphere but much harder when it is an arbitrary set of vectors. We found this three.js extension that handles geometry dilation, which sort of works. However, introducing another duplicate object seems like a computationally intensive solution for large geometries, so we would like to investigate using a different material texture or some other means to light up the cell that does not require duplicating the geometry. I'm not much of a three.js expert, so I'd be interested if anyone has an idea of how to achieve such an effect.
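One duplicate-free alternative to the halo approach would be to scale the material's emissive colour with the intensity passed to GEPPETTO.lightUpEntity. The sketch below is an editor's illustration under that assumption, not Geppetto's actual implementation: the helper only computes the packed colour value, and the three.js call it would feed (material.emissive.setHex) is shown as a comment. The white-hot grey ramp is also an assumption.

```javascript
// Sketch: light up an entity by modulating its material's emissive
// colour instead of adding a duplicate "halo" geometry.
// Computes a 0xRRGGBB value from an intensity in [0, 1].
function intensityToEmissiveHex(intensity) {
  // Clamp to [0, 1] so callers cannot overflow the channel values.
  const t = Math.min(1, Math.max(0, intensity));
  const channel = Math.round(255 * t);
  // Equal R/G/B gives a grey-to-white glow; pack into 0xRRGGBB.
  return (channel << 16) | (channel << 8) | channel;
}

// Inside a lightUpEntity implementation, something like (hypothetical):
//   entity.traverse(function (child) {
//     if (child.material) {
//       child.material.emissive.setHex(intensityToEmissiveHex(intensity));
//     }
//   });

intensityToEmissiveHex(1); // 0xffffff: fully lit
intensityToEmissiveHex(0); // 0x000000: off
```

Because this touches only existing materials, it avoids the extra geometry and the dilation problem entirely, at the cost of a glow that stays inside the object's silhouette rather than haloing around it.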

@slarson
Member Author

slarson commented Jan 10, 2015

The first step of this is done, and can be seen in the changing color of the sphere in the Hodgkin-Huxley example on live.geppetto.org. Closing for now.

@slarson slarson closed this as completed Jan 10, 2015