<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CCD Workshop 2024</title>
<link rel="stylesheet" href="./style.css">
<style>
.slider {
position: relative;
width: 100%;
height: auto;
}
.slider img {
width: 100%;
height: auto;
display: block;
}
.slider .title-container {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
text-align: center;
}
.banner {
background-color: rgb(117, 50, 29); /* Dark brown background */
color: white; /* White text */
padding: 5px; /* Padding around text */
text-align: center; /* Center align text */
font-size: 24px; /* Font size */
animation: moveText 1s infinite alternate; /* Add animation */
}
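/* The .banner rule above references a "moveText" animation that is not
   defined anywhere in this file. A minimal sketch of the missing
   keyframes (the actual motion is an assumption): */
@keyframes moveText {
from { transform: translateY(0); }
to { transform: translateY(-6px); }
}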
</style>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="read_more.js"></script>
</head>
<body>
<!-- <div class="banner">
SUBMISSIONS FOR POSTERS OPEN
</div> -->
<div class="nav">
<div class="nav-container">
<a href="index.html">Home</a>
<!-- <a href="call_for_posters.html" class="eye-catching-link">Call for Posters</a> -->
<a href="#introduction">Introduction</a>
<a href="#speakers">Speakers</a>
<a href="#schedule">Schedule</a>
<a href="#posters">Posters & Spotlights</a>
<a href="#organizers">Organizers</a>
</div>
</div>
<div class="slider" id="home">
<img src="./Lens_LargerCrop.png" alt="Lens image">
<div class="title-container">
<h2>13th IEEE International Workshop on</h2>
<h1>Computational Cameras and Displays</h1>
<!-- <h1>CCD 2023</h1>
<h2>CVPR workshop on cutting edge research on cameras and displays</h2> -->
<h2>Seattle Convention Center</h2>
<h2>June 18, CVPR 2024</h2>
</div>
</div>
<!-- <div class="title-container" id="home">
<h1>Computational Cameras and Displays Workshop</h1>
<h2>June 18, 2023</h2>
</div> -->
<div class="container">
<!-- <div class="announcement-banner">
Poster & Spotlight Submissions are now open! Submit <a href="https://forms.gle/2nth2wfJhGLgjVbz6">via this form</a> by May 21st, 2023. <br>
Top-tier posters/spotlights will be invited to deliver short research talks at CCD this year.
</div> -->
<div class="section" id="introduction">
<h2>Introduction</h2>
<p>
<strong style="font-size: 16px;">Computational photography</strong><span style="font-size: 16px; font-weight: 400;"> has become an increasingly active area of research within the computer vision community. Within the few last years, the amount of research has grown tremendously with dozens of published papers per year in a variety of vision, optics, and graphics venues. A similar trend can be seen in the emerging field of computational displays – spurred by the widespread availability of precise optical and material fabrication technologies, the research community has begun to investigate the joint design of display optics and computational processing. Such displays are not only designed for human observers but also for computer vision applications, providing high-dimensional structured illumination that varies in space, time, angle, and the color spectrum. This workshop is designed to unite the computational camera and display communities in that it considers to what degree concepts from computational cameras can inform the design of emerging computational displays and vice versa, both focused on applications in computer vision.</span>
</p>
<p>
<span style="font-size: 16px; font-weight: 400;">The Computational Cameras and Displays (CCD) workshop series serves as an annual gathering place for researchers and practitioners who design, build, and use computational cameras, displays, and imaging systems for a wide variety of uses. The workshop solicits posters and demo submissions on all topics relating to computational imaging systems.</span>
</p>
<p>
<strong style="font-size: 16px;">Previous CCD Workshops:</strong>
<a href="https://ccd2023.github.io/">CCD2023</a>,
<a href="https://sites.northwestern.edu/ccd2022/">CCD2022</a>,
<a href="https://visual.ee.ucla.edu/ccd2021.htm/">CCD2021</a>,
<a href="http://ccd2020.cms.caltech.edu/">CCD2020</a>,
<a href="http://focus.ece.ufl.edu/ccd2019/">CCD2019</a>,
<a href="http://wisionlab.cs.wisc.edu/ccd2018/">CCD2018</a>,
<a href="http://www.computationalimaging.org/ccd2017/">CCD2017</a>,
<a href="http://imagesci.ece.cmu.edu/CCD2016/">CCD2016</a>,
<a href="http://ollie-imac.cs.northwestern.edu/~ollie/CCD2015/">CCD2015</a>,
<a href="http://www.ece.rice.edu/~vb10/CVPR2014/CCD2014/index.php">CCD2014</a>,
<a href="http://computationalcamerasanddisplays.media.mit.edu/">CCD2013</a>,
<a href="http://computationalcamerasanddisplays.media.mit.edu/2012/">CCD2012</a>
</p>
</div>
<section class="location-section" id="location">
<h3>Location: <strong>Arch 204</strong></h3>
<!-- <p>
<a href="https://cvpr2023.thecvf.com/virtual/2024/workshop/23625" target="_blank" rel="noopener noreferrer">
<strong>Virtual Workshop Link</strong>
</a>
</p> -->
</section>
<div class="section" id="speakers">
<h2>Keynote Talks</h2>
<div class="people">
<div>
<a href="https://graphics.stanford.edu/~levoy/">
<img src="https://graphics.stanford.edu/~levoy/images/marc-googlex-xmas-party14-c.jpg" alt="Marc Levoy">
<div class="name">Marc Levoy</div>
<div class="aff">Adobe</div>
</a>
<div id="c1" class="content_bio">
Marc Levoy is the VMware Founders Professor of Computer Science (Emeritus) at
Stanford University and a Vice President and Fellow at Adobe. In previous lives
he worked on computer-assisted cartoon animation (1970s), volume rendering
(1980s), 3D scanning (1990s), light field imaging (2000s), and computational
photography (2010s). At Stanford he taught computer graphics, digital
photography, and the science of art. At Google he launched Street View,
co-designed the library book scanner, and led the team that created HDR+,
Portrait Mode, and Night Sight for Pixel smartphones. Levoy's awards include
Cornell University's Charles Goodwin Sands Medal for best undergraduate thesis
(1976) and the ACM SIGGRAPH Computer Graphics Achievement Award (1996). He is an
ACM Fellow (2007) and member of the National Academy of Engineering (2022).
</div>
</div>
<div>
<a href="http://research.nii.ac.jp/pbv/research_en.html">
<img src="https://research.nii.ac.jp/~imarik/files/5b9f9a135ba4.jpg" alt="Imari Sato">
<div class="name">Imari Sato</div>
<div class="aff">National Institute of Informatics</div>
</a>
<div id="c2" class="content_bio">
Imari Sato received the BS degree in policy management from Keio University in 1994. After studying at the Robotics Institute of Carnegie Mellon University as a visiting scholar, she received the MS and Ph.D. degrees in Interdisciplinary Information Studies from the University of Tokyo in 2002 and 2005, respectively. In 2005, she joined the National Institute of Informatics, where she is currently a professor. Concurrently, she serves as a visiting professor at Tokyo Institute of Technology and a professor at the University of Tokyo. Her primary research interests are in the field of computer vision (physics-based vision, spectral analysis, image-based modeling). She has received various research awards, including the Young Scientists' Prize of the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology (2009) and the Microsoft Research Japan New Faculty Award (2011).
</div>
</div>
<div>
<a href="https://www.ece.cmu.edu/directory/bios/sankaranarayanan-aswin.html">
<img src="https://www.ece.cmu.edu/directory/images/faculty/S/aswin-sankaranarayanan-800x8001.png" alt="Aswin Sankaranarayanan">
<div class="name">Aswin Sankaranarayanan</div>
<div class="aff">CMU</div>
</a>
<div id="c3" class="content_bio">
Aswin C. Sankaranarayanan is a professor in the ECE department at CMU, where he leads the Image Science Lab. His research interests are broadly in computational photography, signal processing, and vision. He received his doctorate from the University of Maryland, where his dissertation won the distinguished dissertation award from the ECE department in 2009. Aswin is the recipient of best paper awards at SIGGRAPH 2023 and CVPR 2019, the NSF CAREER award in 2017, as well as the Eta Kappa Nu Excellence in Teaching award.
</div>
</div>
<div>
<a href="http://users.cms.caltech.edu/~klbouman/">
<img src="profiles/bouman2.jpg" alt="Katie Bouman">
<div class="name">Katie Bouman</div>
<div class="aff">Caltech</div>
</a>
<div id="c4" class="content_bio">
Katherine L. (Katie) Bouman is an associate professor in the Computing and Mathematical Sciences, Electrical Engineering, and Astronomy Departments at the California Institute of Technology. Her work combines ideas from signal processing, computer vision, machine learning, and physics to find and exploit hidden signals for scientific discovery. Before joining Caltech, she was a postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics. She received her Ph.D. in EECS from MIT, where she worked in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and her bachelor's degree in Electrical Engineering from the University of Michigan. She is a Rosenberg Scholar, a Heritage Medical Research Institute Investigator, the recipient of the Royal Photographic Society Progress Medal, the Electronic Imaging Scientist of the Year Award, a Sloan Fellowship, and the University of Michigan Outstanding Recent Alumni Award, and a co-recipient of the Breakthrough Prize in Fundamental Physics. As part of the Event Horizon Telescope Collaboration, she co-led the Imaging Working Group and acted as coordinator for papers concerning the first imaging of the M87* and Sagittarius A* black holes.
</div>
</div>
</div>
<h2>Invited Talks</h2>
<div class="people">
<div>
<a href="https://www.atulingle.com/ ">
<img src=https://static.wixstatic.com/media/bcd6ad_9da527019aa04130aa435a7f9537d014~mv2.jpg/v1/crop/x_30,y_0,w_1915,h_1915/fill/w_412,h_412,al_c,q_80,usm_0.66_1.00_0.01,enc_auto/atul.jpg alt="Atul Ingle">
<div class="name">Atul Ingle</div>
<div class="aff">Portland State University</div>
</a>
<div id="c5" class="content_bio">
Atul Ingle received his Ph.D. in Electrical Engineering from the University of Wisconsin-Madison in 2015. He is an Assistant Professor in the Department of Computer Science at Portland State University and an IEEE Member. He directs the Portland State Computational Imaging Laboratory, which designs next-generation computational cameras and computer vision algorithms for resource-constrained applications. His work on single-photon 3D cameras received the Marr Prize honorable mention award at ICCV 2019 and the ICCP Best Paper award in 2023.
</div>
</div>
<div>
<a href="https://cs.gmu.edu/~jinweiye/ ">
<img src=https://cs.gmu.edu/~jinweiye/assets/images/jinwei_ye_s.jpg alt="Jinwei Ye">
<div class="name">Jinwei Ye</div>
<div class="aff">George Mason University</div>
</a>
<div id="c6" class="content_bio">
Jinwei Ye is an Assistant Professor of Computer Science at George Mason University. Before that, she was an Assistant Professor at Louisiana State University (2017–2021). She received her Ph.D. in Computer Science from the University of Delaware in 2014. She was a postdoctoral fellow at the US Army Research Lab (2014–2015) and a senior research scientist at Canon U.S.A. (2015–2017). Her research interests are at the intersection of computer vision, computational imaging, and computer graphics, with a focus on geometry and material reconstruction. Her work is mainly supported by NSF and ARL. She received the NSF CRII award in 2020 and the NSF CAREER award in 2023. She has served on the senior program committees (as area chair) and organizing committees of many computer vision and AI conferences, including CVPR, ICCV, ICCP, AAAI, and IJCAI. She is a Senior Member of IEEE.
</div>
</div>
<div>
<a href="https://www.eee.hku.hk/~evanpeng/">
<img src="https://www.eee.hku.hk/~evanpeng/images/WebImage_EvanPeng.jpg" alt="Evan Peng">
<div class="name">Evan Peng</div>
<div class="aff">University of Hong Kong</div>
</a>
<div id="c7" class="content_bio">
Yifan "Evan" Peng is currently an Assistant Professor at the University of Hong Kong (HKU), affiliated with both EEE and CS departments. Before joining HKU, he was a Postdoctoral Research Scholar at Stanford University. Dr. Peng received his PhD in Computer Science, the University of British Columbia, both his MSc and BS in Optical Science and Engineering from State Key Lab of Modern Optical Instrumentation, Zhejiang University. Dr. Peng has been working on a family of Neural + X projects for cameras, displays, microscopes, and rendering. Dr. Peng was the recipient of the AsiaGraphics Young Research Award (2022), ICBS Frontiers of Science Award (2023), as well as the IEEE VR Tech Significant New Researcher Award (2023).
</div>
</div>
<div>
<a href="https://akshatdave.github.io/">
<img src="https://akshatdave.github.io/images/AkshatDaveDP.jpg" alt="Akshat Dave">
<div class="name">Akshat Dave</div>
<div class="aff">MIT</div>
</a>
<div id="c8" class="content_bio">
Akshat Dave is a postdoctoral associate in the Camera Culture group at the MIT Media Lab with Prof. Ramesh Raskar. His research lies at the intersection of computer vision, graphics, and imaging. He received his PhD in 2023 from Rice University, advised by Prof. Ashok Veeraraghavan, where his dissertation won the Ralph Budd Thesis Award. He is also a recipient of the Lodieska Stockbridge Vaughn Fellowship and the Texas Instruments Fellowship.
</div>
</div>
<div>
<a href="http://abedavis.com/">
<img src="https://www.cs.cornell.edu/abe/group/content/groupbios/abe/abe.jpeg" alt="Abe Davis">
<div class="name">Abe Davis</div>
<div class="aff">Cornell</div>
</a>
<div id="c9" class="content_bio">
Abe Davis is an assistant professor in the Computer Science Department at Cornell University, where his research group works at the intersections of computer graphics, vision, and human-computer interaction. Abe earned his Ph.D. in EECS from MIT CSAIL, and his thesis won the MIT Sprowls Award for Outstanding PhD Dissertation in Computer Science as well as an honorable mention for the ACM SIGGRAPH Outstanding Doctoral Dissertation Award. Abe was also named one of Forbes Magazine's "30 under 30", Business Insider's "50 Scientists Who are Changing the World", and "8 Innovative Scientists in Tech and Engineering". He won the "Most Practical SHM Solution for Civil Infrastructures" Award at IWSHM and received the NSF CAREER award in 2024.
</div>
</div>
</div>
</div>
<div class="section" id="schedule">
<h2>Schedule</h2>
<table>
<tr>
<td>Time (Seattle local)</td>
<td>Title</td>
<td>Speaker</td>
</tr>
<tr>
<td>8:45 - 9:00</td>
<td>Welcome / Opening Remarks</td>
<td>Organizers</td>
</tr>
<tr>
<td>9:00 - 9:30 </td>
<td>Keynote 1: Advanced Optical Imaging: Scattering and Absorption-Based Internal Structure Analysis with Photoacoustic Technology</td>
<td>Imari Sato</td>
</tr>
<tr>
<td>9:30 - 9:50</td>
<td>Invited Talk 1: Invisible Fluorescent Markers for Deformable Tracking</td>
<td>Jinwei Ye</td>
</tr>
<tr>
<td>9:50 - 10:10 </td>
<td>Invited Talk 2: Resource-Aware Single-Photon Imaging</td>
<td>Atul Ingle</td>
</tr>
<tr>
<td>10:10 - 10:30 </td>
<td>Morning Break</td>
<td></td>
</tr>
<tr>
<td>10:30 - 11:00</td>
<td>Keynote 2: Spatially-Selective Lensing</td>
<td>Aswin C. Sankaranarayanan</td>
</tr>
<tr>
<td>11:00 - 11:15 </td>
<td>Spotlight presentations</td>
<td></td>
</tr>
<tr>
<td>11:15 - 12:30 </td>
<td>Poster Session (Boards #315-344)</td>
<td></td>
</tr>
<tr>
<td>12:30 - 13:30 </td>
<td>Lunch break</td>
<td></td>
</tr>
<tr>
<td>13:30 - 14:00 </td>
<td>Keynote 3: Computational photography at the point of capture on mobile cameras</td>
<td>Marc Levoy </td>
</tr>
<tr>
<td>14:00 - 14:20 </td>
<td>Invited Talk 3: Mobile Time-Lapse</td>
<td>Abe Davis</td>
</tr>
<tr>
<td>14:20 - 14:40 </td>
<td>Invited Talk 4: From Cameras to Displays, End-to-End Optimization Empowers Imaging Fidelity</td>
<td>Evan Peng</td>
</tr>
<tr>
<td>14:40 - 15:00 </td>
<td>Invited Talk 5: Revealing the Invisible with Neural Inverse Light Transport</td>
<td>Akshat Dave</td>
</tr>
<tr>
<td>15:00 - 15:30 </td>
<td>Afternoon Break</td>
<td></td>
</tr>
<tr>
<td>15:30 - 16:00 </td>
<td>Keynote 4: Seeing Beyond the Blur: Imaging Black Holes with Increasingly Strong Assumptions</td>
<td>Katie Bouman</td>
</tr>
<tr>
<td>16:00 - 16:45 </td>
<td>Panel discussion</td>
<td></td>
</tr>
<tr>
<td>16:45 - 16:55 </td>
<td>Closing Remarks</td>
<td></td>
</tr>
</table>
</div>
<div class="section" id="posters">
<h2>Posters & Spotlights</h2>
<!-- <div align="middle">
<iframe width="997" height="561" src="https://www.youtube.com/embed/FbnoEK_HXQY" title="CVPR CCD 2023 Poster spotlight videos" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div> -->
<table>
<tr>
<td>ID</td>
<td>Board Number</td>
<td>Title</td>
<td>Presenter</td>
</tr>
<tr>
<td>1</td>
<td> #315 </td>
<td>Learning Constrained Binary Color Filter Arrays For Enhanced Demosaicking with Trainable Hard Thresholding</td>
<td>Ali Cafer Gurbuz</td>
</tr>
<tr>
<td>2</td>
<td> #316 </td>
<td>Physics constrained neural tomography of a black hole</td>
<td>Aviad Levis</td>
</tr>
<tr>
<td>3</td>
<td> #317 </td>
<td>Single View Refractive Index Tomography with Neural Fields</td>
<td>Brandon Zhao</td>
</tr>
<tr>
<td>4</td>
<td> #318 </td>
<td>Towards 3D Vision with Low-Cost Single-Photon Cameras</td>
<td>Carter Sifferman</td>
</tr>
<tr>
<td>5</td>
<td> #319 </td>
<td>Optimized nano optics for 360 Structured light</td>
<td>Eunsue Choi</td>
</tr>
<tr>
<td>6</td>
<td> #320 </td>
<td>PixRO: Pixel-Distributed Rotational Odometry with Gaussian Belief Propagation</td>
<td>Ignacio Alzugaray</td>
</tr>
<tr>
<td>7</td>
<td> #321 </td>
<td>Behind the Blurry Background: Practical Synthetic Features To Enable Robust Imaging Through Scattering</td>
<td>Jeffrey Alido</td>
</tr>
<tr>
<td>8</td>
<td> #322 </td>
<td>Doppler Time-of-Flight Rendering</td>
<td>Juhyeon Kim</td>
</tr>
<tr>
<td>9</td>
<td> #323 </td>
<td>3D sensing with single-photon cameras for resource-constrained applications</td>
<td>Kaustubh Sadekar</td>
</tr>
<tr>
<td>10</td>
<td> #324 </td>
<td>Seeing the World Through Your Eyes</td>
<td>Kevin Zhang</td>
</tr>
<tr>
<td>11</td>
<td> #325 </td>
<td>WaveMo: Learning Wavefront Modulations to See Through Scattering</td>
<td>Mingyang Xie</td>
</tr>
<tr>
<td>12</td>
<td> #326 </td>
<td>Domain Expansion via Network Adaptation for Solving Inverse Problems</td>
<td>Nebiyou Tenager Yismaw</td>
</tr>
<tr>
<td>13</td>
<td> #327 </td>
<td>TurboSL: Dense, Accurate and Fast 3D by Neural Inverse Structured Light</td>
<td>Parsa Mirdehghan</td>
</tr>
<tr>
<td>14</td>
<td> #328 </td>
<td>Computational multi-aperture camera for wide-field high-resolution imaging</td>
<td>Qianwan Yang</td>
</tr>
<tr>
<td>15</td>
<td> #329 </td>
<td>Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence</td>
<td>Ripon Kumar Saha</td>
</tr>
<tr>
<td>16</td>
<td> #330 </td>
<td>CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras</td>
<td>Sachin Shah</td>
</tr>
<tr>
<td>17</td>
<td> #331 </td>
<td>Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction</td>
<td>Sarah Friday</td>
</tr>
<tr>
<td>18</td>
<td> #332 </td>
<td>Differentiable Display Photometric Stereo</td>
<td>Seokjun Choi</td>
</tr>
<tr>
<td>19</td>
<td> #333 </td>
<td>Dispersed Structured Light for Hyperspectral 3D imaging</td>
<td>Suhyun Shin</td>
</tr>
<tr>
<td>20</td>
<td> #334 </td>
<td>Generalized Event Cameras</td>
<td>Varun Sundar</td>
</tr>
<tr>
<td>21</td>
<td> #335 </td>
<td>ƒNeRF: High Quality Radiance Fields from Practical Cameras</td>
<td>Yi Hua</td>
</tr>
<tr>
<td>22</td>
<td> #336 </td>
<td>Spectral and Polarization Vision: Spectro-polarimetric Real-world Dataset</td>
<td>Yujin Jeon</td>
</tr>
<tr>
<td>23</td>
<td> #337 </td>
<td>Projecting Trackable Thermal Patterns for Dynamic Computer Vision</td>
<td>Mark Sheinin</td>
</tr>
<tr>
<td>24</td>
<td> #338 </td>
<td>Explicit Neural Fields for 3D Refractive Index Reconstruction using Two-photon Fluorescence Illuminations</td>
<td>Yi Xue</td>
</tr>
<tr>
<td>25</td>
<td> #339 </td>
<td>Streaming quanta sensors</td>
<td>Tianyi Zhang</td>
</tr>
<tr>
<td>26</td>
<td> #340 </td>
<td>Textureless Deformable Object Tracking with Invisible Markers</td>
<td>Yubei Tu</td>
</tr>
</table>
</div>
<div class="section" id="organizers">
<h2>Workshop Chairs</h2>
<div class="people">
<a href="https://intra.ece.ucr.edu/~sasif/">
<img src="https://intra.ece.ucr.edu/~sasif/images/salman.jpg" alt="Salman Asif">
<div>Salman Asif</div>
<div class="aff">UC Riverside</div>
</a>
<a href="https://bme.ucdavis.edu/people/yi-xue">
<img src="https://bme.ucdavis.edu/sites/g/files/dgvnsk5766/files/styles/sf_profile/public/media/images/headshot_YiXue_2021.JPG?h=5f7320ae&itok=nFr9q9ca" alt="Yi Xue">
<div>Yi Xue</div>
<div class="aff">UC Davis</div>
</a>
<a href="https://www.marksheinin.com">
<img src="https://static.wixstatic.com/media/a41a28_00e0c880c1574c93b65620b4721e2e49~mv2.jpg/v1/fill/w_330,h_360,al_c,q_80,usm_0.66_1.00_0.01,enc_auto/a41a28_00e0c880c1574c93b65620b4721e2e49~mv2.jpg" alt="Mark Sheinin">
<div>Mark Sheinin</div>
<div class="aff">Weizmann Institute of Science</div>
</a>
<a href="https://kristinamonakhova.com/kristina/">
<img src="https://kristinamonakhova.com/assets/img/Kristina_Monakhova.jpg" alt="Kristina Monakhova">
<div>Kristina Monakhova</div>
<div class="aff">Cornell</div>
</a>
</div>
<div class="section" id="organizers">
<h2>Website Chairs</h2>
<div class="people">
<a href="https://scholar.google.com/citations?user=IC3XeAwAAAAJ&hl=en">
<img src="profiles/nyismaw.jpg" alt="Nebiyou Yismaw">
<div>Nebiyou Yismaw</div>
<div class="aff">UC Riverside</div>
</a>
</div>
</div>
</div>
<div class="foot">
<p>Computational Cameras and Displays Workshop - June 18, 2024</p>
</div>