<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title><![CDATA[]]></title>
<link href="http://wholeslide.com/atom.xml" rel="self"/>
<link href="http://wholeslide.com/"/>
<updated>2014-08-24T20:34:42-07:00</updated>
<id>http://wholeslide.com/</id>
<author>
<name><![CDATA[Rich Stoner]]></name>
</author>
<generator uri="http://octopress.org/">Octopress</generator>
<entry>
<title type="html"><![CDATA[Building the index]]></title>
<link href="http://wholeslide.com/blog/2014/02/18/building-the-index/"/>
<updated>2014-02-18T12:48:00-08:00</updated>
<id>http://wholeslide.com/blog/2014/02/18/building-the-index</id>
<content type="html"><![CDATA[<p>This post will provide a high-level view of the indexing approach used by the DURA remote data source.</p>
<p><em>The DURA remote data source is simply a standalone elasticsearch server running on DigitalOcean.</em></p>
<h4>General data model</h4>
<p>A dura object has a few key properties:</p>
<ol>
<li>title</li>
<li>short description</li>
<li>type → the corresponding class in the dura data model</li>
<li>json description</li>
</ol>
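<p>As a rough illustration, indexing one such object with the Python elasticsearch client might look like the sketch below. The index, type, and field values here are assumptions for the example, not the actual schema.</p>
<pre><code>from elasticsearch import Elasticsearch

# Assumes a reachable elasticsearch node (the real one runs on DigitalOcean).
es = Elasticsearch()

# A single dura object with the four core properties listed above.
doc = {
    "title": "Example EM volume",
    "short_description": "A downsampled electron microscopy stack",
    "type": "EMVolume",  # the corresponding class in the dura data model
    "json_description": "{\"levels\": 8, \"tile_size\": 256}",
}

es.index(index="dura", doc_type="object", body=doc)
</code></pre>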
<p>To get search to work, I should add additional tags that enable me to slice across different facets.</p>
<p>For example, I would like to predefine queries for:</p>
<ul>
<li>Species</li>
</ul>
<p>For species, the query would return any content, of whatever type, whose source organism matches the species string.</p>
<ul>
<li>General content type</li>
</ul>
<p>For general content type, this would allow me to search for ‘all electron microscopy volumes’ or ‘all zoomify images’ or ‘all course lists’.</p>
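<p>Each facet could then map to a pre-canned query. A minimal sketch, assuming the documents carry a <code>species</code> tag and using the elasticsearch 1.x filtered-query syntax:</p>
<pre><code>from elasticsearch import Elasticsearch

es = Elasticsearch()

# Hypothetical pre-canned query: all content from a given organism.
species_query = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {"term": {"species": "mus musculus"}},
        }
    }
}

hits = es.search(index="dura", body=species_query)["hits"]["hits"]
</code></pre>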
<h4>Thinking aloud</h4>
<p>Adding relational search via graph-based mappings, while fun, is beyond what I’m aiming for here. It would, however, open up the opportunity to do some great mappings, and possibly give me some overlap between the INCF NI-DM and DURA.</p>
<p>Everything on the collections page will be a search. Everything in the defaults.json generated by ipy will be a search query, description, and icon.</p>
<h4>Steps to replace the current defaults</h4>
<ol>
<li>Index the defaults in elasticsearch</li>
<li>Write pre-canned query</li>
<li>Write pre-canned query generator</li>
<li>Rebuild defaults.json</li>
</ol>
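<p>A minimal sketch of steps 1 and 4, assuming the current defaults.json is a flat list of dura objects and reusing the hypothetical query shape from above:</p>
<pre><code>import json

from elasticsearch import Elasticsearch

es = Elasticsearch()

# 1. Index the existing defaults.
with open("defaults.json") as f:
    for i, obj in enumerate(json.load(f)):
        es.index(index="dura", doc_type="object", id=i, body=obj)

# 2-3. The query generator would emit one entry like this per facet value.
canned = {
    "mouse": {"term": {"species": "mus musculus"}},
    "em-volumes": {"term": {"general_type": "em_volume"}},
}

# 4. Rebuild defaults.json: one named query (plus description and icon,
# omitted here) per entry on the collections page.
rebuilt = {
    name: {"query": {"filtered": {"query": {"match_all": {}}, "filter": f}}}
    for name, f in canned.items()
}
with open("defaults.json", "w") as f:
    json.dump(rebuilt, f, indent=2)
</code></pre>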
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[EM Navigation]]></title>
<link href="http://wholeslide.com/blog/2014/02/18/em-navigation/"/>
<updated>2014-02-18T12:43:00-08:00</updated>
<id>http://wholeslide.com/blog/2014/02/18/em-navigation</id>
<content type="html"><![CDATA[<p>A quick demo of Dura, the core application behind WholeSlide Open. We’re quickly working towards a v1.0.</p>
<iframe src="http://player.vimeo.com/video/80876206" width="500" height="652" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
<p><a href="http://vimeo.com/80876206">Open Viewer With EM annotation alpha</a> from <a href="http://vimeo.com/wholeslide">Rich Stoner</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<h4>What can it do now? (build ~#4000)</h4>
<ul>
<li>View high resolution image sets from BrainMaps.org (uses boring 2D viewer, 3D in progress)</li>
<li>Link with Dropbox (sandboxed currently, needs appstore submission before this will work)</li>
<li>Connect to Knossos (EM) image services (experimental, issues with VRAM and OpenGL ES 2)</li>
<li>Connect to OpenConnectome (EM) servers (experimental, issues with VRAM and OpenGL ES 2)</li>
<li>Open converted EyeWire neuron meshes (experimental, issues with VRAM and OpenGL ES 2)</li>
</ul>
<h4>What will it do in the future?</h4>
<ul>
<li>Connect to Aperio image servers</li>
<li>Contain most of the image resources available in the original WholeSlide app</li>
<li>Connect to MicroBrightField’s Biolucida Server</li>
<li>Connect to data stored on remote hard drives</li>
<li>View and create annotations on high resolution images</li>
<li>View stacks of high resolution images in 3D</li>
<li>Visualize skeletonized annotations for EM</li>
<li>Visualize volume annotations for EM</li>
<li>Include data sources beyond neuroscience (dermatology, radiology)</li>
<li>Integrate a searchable backend</li>
<li>Create courses and link content together</li>
</ul>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[History: Nuance Speech Annotation]]></title>
<link href="http://wholeslide.com/blog/2013/07/25/history-nuance-speech-annotation/"/>
<updated>2013-07-25T13:58:00-07:00</updated>
<id>http://wholeslide.com/blog/2013/07/25/history-nuance-speech-annotation</id>
<content type="html"><![CDATA[<p>A brief demo of the speech recognition annotation feature in the WholeSlide pathology engine. Speech recognition, powered by the Nuance HealthCare SDK, is used to rapidly annotate regions of interest so they can be quickly shared with other clinicians and researchers.</p>
<iframe src="http://player.vimeo.com/video/36066248?title=0&byline=0&portrait=0&color=ffffff" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>This submission would eventually place 2nd in Nuance’s Speech SDK Hackathon.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[History: Digital Library]]></title>
<link href="http://wholeslide.com/blog/2013/07/25/history-digital-library/"/>
<updated>2013-07-25T13:56:00-07:00</updated>
<id>http://wholeslide.com/blog/2013/07/25/history-digital-library</id>
<content type="html"><![CDATA[<p>When Cambridge released high resolution scans of Newton’s notebooks, I quickly modified the WholeSlide source code to create a demo of how this technology could be used.</p>
<iframe src="http://player.vimeo.com/video/33621622?title=0&byline=0&portrait=0&color=ffffff" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>Due to the closed nature of the code, however, I was unable to continue work on the Newton project. It’s an ember I hope to rekindle with the release of an open source WholeSlide v2.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[History: WholeSlide v1]]></title>
<link href="http://wholeslide.com/blog/2013/07/25/history-wholeslide-v1/"/>
<updated>2013-07-25T13:53:00-07:00</updated>
<id>http://wholeslide.com/blog/2013/07/25/history-wholeslide-v1</id>
<content type="html"><![CDATA[<iframe src="http://player.vimeo.com/video/25954032?title=0&byline=0&portrait=0&color=ffffff" width="600" height="450" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>The original WholeSlide was written as an exercise to see how well the native UIKit libraries had evolved on the iOS platform. Once I was able to get comfortable with the Objective-C syntax and documentation, the rest fell into place. As a code base, WholeSlide was much more efficient and compact than anything I had previously written in C++ for iOS.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[History: WholeBrainCatalog Mobile Engine]]></title>
<link href="http://wholeslide.com/blog/2013/07/25/history-wbcme/"/>
<updated>2013-07-25T13:46:00-07:00</updated>
<id>http://wholeslide.com/blog/2013/07/25/history-wbcme</id>
<content type="html"><![CDATA[<iframe src="http://player.vimeo.com/video/16454765?title=0&byline=0&portrait=0&color=ffffff" width="600" height="465" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>Some of the first ideas for WholeSlide came from work I did with the WholeBrainCatalog group at UCSD. Starting from nothing, we were able to piece together an app for viewing high resolution image data in an OpenGL context. The entire application was written in C++ using the openFrameworks toolchain.</p>
<iframe src="http://player.vimeo.com/video/17786249?title=0&byline=0&portrait=0&color=ffffff" width="600" height="379" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>At various points it compiled for iPhone, iPad, and OS X 10.5, and could even be controlled with the Kinect.</p>
<iframe src="http://player.vimeo.com/video/19175692?title=0&byline=0&portrait=0&color=ffffff" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
<p>The source code is still online at <a href="http://code.google.com/p/wbcmobileengine/">http://code.google.com/p/wbcmobileengine/</a></p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[The blog is alive]]></title>
<link href="http://wholeslide.com/blog/2013/07/20/the-blog-is-alive/"/>
<updated>2013-07-20T13:45:00-07:00</updated>
<id>http://wholeslide.com/blog/2013/07/20/the-blog-is-alive</id>
<content type="html"><![CDATA[<p>This blog will house the updates for the open source project along with any collaborations.</p>
]]></content>
</entry>
</feed>