<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Zilin Xu's Personal Page</title>
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<link href="https://fonts.googleapis.com/css?family=Roboto:100,300,400,500,700,900" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/@mdi/[email protected]/css/materialdesignicons.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/vuetify.min.css" rel="stylesheet">
<link rel="icon" href='images/icon.png'>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no, minimal-ui">
</head>
<body>
<div id="app">
<v-app>
<div class="header-img">
</div>
<v-container >
<v-row justify="center">
<!-- Left column -->
<v-col class="py-4 text-center" :xl="4" :lg="4" :md="4" :sm="12" :xs="12">
<!-- Self introduction -->
<v-row justify="center" style="margin-top: -3em">
<v-avatar size="192" >
<img src="images/avatar3.jpg">
</v-avatar>
</v-row>
<br><br>
<v-row justify="center" >
<h2>徐 子 林</h2>
</v-row>
<v-row justify="center" >
<h2>Zilin Xu</h2>
</v-row>
<v-row justify="center" >
<h4>Pronounced as Tzu Lim Hsu</h4>
</v-row>
<v-row justify="center" >
<span>University of California, Santa Barbara</span>
</v-row>
<v-row justify="center" >
<span>[email protected]</span>
</v-row>
<!-- Self introduction end -->
</v-col>
<!-- Left column end -->
<!-- Right column -->
<v-col class="py-4" :xl="8" :lg="8" :md="8" :sm="12" :xs="12">
<v-row justify="left" >
<v-col class="pr-3" :xl="6" :lg="6" :md="6" :sm="12" :xs="12" >
<!-- About me -->
<v-row justify="left" >
<div class="text-h4 mt-4">About Me</div>
</v-row>
<v-row justify="left" >
<v-col xl="12" :lg="12" :md="12" :sm="12" :xs="12">
<p v-html="selfIntroduction"></p>
<a href="cv/cv_0924.pdf" target="_blank">Résumé <v-icon color="primary" size="x-large">mdi-file-account</v-icon></a>
</v-col>
</v-row>
<!-- About me end -->
</v-col>
<v-divider
vertical
></v-divider>
<v-col class="pl-6" :xl="6" :lg="6" :md="6" :sm="12" :xs="12" >
<!-- News -->
<v-row justify="left" >
<div class="text-h4 mt-4">News</div>
</v-row>
<div v-for="item in news">
<v-row justify="left" v-if="item.img == null">
<v-col class="col-12">
<li><b>[{{item.date}}] </b> <span v-html="item.context"></span></li>
</v-col>
</v-row>
<v-row justify="left" v-if="item.img != null">
<v-col class="col-8">
<li><b>[{{item.date}}] </b> <span v-html="item.context"></span></li>
</v-col>
<v-col class="col-1"></v-col>
<v-col class="col-3">
<v-img width="100%" class="mt-2"
:src="item.img">
</v-img>
</v-col>
</v-row>
</div>
<!-- News end -->
</v-col>
</v-row>
<!-- Publications -->
<v-row justify="left" >
<div class="text-h4 mt-4">Selected Publications</div>
</v-row>
<v-row justify="left" v-for="item in publications" >
<v-col xl="4" :lg="4" :md="4" :sm="4" :xs="4">
<v-img width="100%" class="mt-2"
:src="item.teaser">
</v-img>
</v-col>
<v-col class="py-4 text-left" :xl="8" :lg="8" :md="8" :sm="8" :xs="8">
<b>{{item.title}}</b>
<br>
<span v-html="item.authors"></span>
<br>
<i>{{item.publication}}</i>
<br>
<span class="mr-2" v-for="linksItem in item.links">
<a :href="linksItem.link" target="_blank">{{linksItem.text}}</a>
</span>
<br>
<p>
{{item.abstract}}
</p>
<v-divider></v-divider>
</v-col>
</v-row>
<!-- Publications end -->
<!-- Projects -->
<v-row justify="left" >
<div class="text-h4 mt-4">Selected Projects</div>
</v-row>
<v-row justify="left" v-for="item in projects" >
<v-col xl="4" :lg="4" :md="4" :sm="4" :xs="4">
<v-img width="100%" class="mt-2"
:src="item.teaser">
</v-img>
</v-col>
<v-col class="py-4 text-left" :xl="8" :lg="8" :md="8" :sm="8" :xs="8">
<b>{{item.title}}</b>
<br>
{{item.date}}
<br>
<span class="mr-2" v-for="linksItem in item.links">
<a :href="linksItem.link" target="_blank">{{linksItem.text}}</a>
</span>
<br>
<p>
{{item.description}}
</p>
<v-divider></v-divider>
</v-col>
</v-row>
<!-- Projects end -->
<!-- Others -->
<v-row justify="left" >
<div class="text-h4 mt-4">Other Publications</div>
</v-row>
<v-row justify="left" v-for="item in others" >
<v-col xl="12" lg="12" md="12" sm="12" xs="12">
<b>{{item.title}}</b>
<br>
<span v-html="item.authors"></span>
<br>
<i>{{item.publication}}</i>
<br><br>
</v-col>
</v-row>
<!-- <v-divider></v-divider> -->
<!-- Others end -->
<v-row justify="left">
<div class="text-h4 mt-4">Misc.</div>
</v-row>
<v-row justify="left">
<v-col class="py-4 text-left" :xl="8" :lg="8" :md="8" :sm="12" :xs="12">
<p>
I'm a passionate hardcore gamer who loves playing almost every type of video game (except horror games like <i>Phasmophobia</i>). I started playing video games at the age of 4.
<i>Warcraft 3</i> and <i>Command &amp; Conquer: Red Alert 2</i> were my first games. When I was in middle school, I was stunned by the incredibly realistic (for the time) graphics of <i>Crysis 2</i>,
which sparked my desire to pursue a career in computer graphics.
</p>
<p>
I'm good at FPS esports games. In <i>Apex Legends</i>, I reached the highest rank, Apex Predator (peaking at #74 on the leaderboard); in <i>Valorant</i>, I've solo-queued to Diamond; and in <i>Overwatch</i>, I'm a multi-season Master.
</p>
<p>
Besides esports games, I enjoy narrative-driven games with good storylines, such as <i>Red Dead Redemption 2</i> and <i>Baldur's Gate 3</i>, and ARPG games like <i>Bloodborne</i> and <i>Elden Ring</i>. I also enjoy some lightweight rogue-like (or rogue-lite) games.
My favorite game this year is <i>Black Myth: Wukong</i>, and I believe this game will win the TGA's Game of the Year award.
</p>
<p>
This love for gaming is what inspired me to choose rendering as my research area. I hope that one day, my research outcomes can be widely applied in the video game industry to create the next generation of game graphics.
</p>
</v-col>
<v-col xl="4" lg="4" md="4" sm="12" xs="12">
<v-img width="80%" class="mt-2" src="images/apex.png"/>
</v-col>
</v-row>
</v-col>
<!-- Right column end -->
</v-row>
</v-container>
<!-- footer -->
<v-bottom-navigation
v-model="value"
:background-color="color"
dark
shift
>
<!-- <v-btn @click="gameZone = true">-->
<!-- <span>Game Zone</span>-->
<!-- <v-icon>mdi-television-play</v-icon>-->
<!-- </v-btn>-->
<!-- <v-btn>
<span></span>
<v-icon>mdi-book</v-icon>
</v-btn> -->
<!-- <v-btn>-->
<!-- <span>CV</span>-->
<!-- <v-icon>mdi-image</v-icon>-->
<!-- </v-btn>-->
</v-bottom-navigation>
</v-app>
</div>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/vue.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/vuetify.js"></script>
<script>
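// Single Vue instance (with Vuetify) that drives the template above; all page content is defined in the data object below.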
var app = new Vue({
el: '#app',
vuetify: new Vuetify(),
data: {
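// 'value' is the index of the selected bottom-navigation button; the computed color() below maps it to the bar's background color.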
value: 1,
news:[{date:'09/2024', context:'I finished my summer internship at Autodesk.'},
{date:'08/2024', context:'I got one more SIGGRAPH paper accepted!'},
{date:'07/2024', context:'I reached Apex Predator in Apex Legends!'},],
selfIntroduction:
'Hello! I\'m Zilin. I received my Bachelor\'s and Master\'s degrees from Shandong University. \n' +
'Currently, I\'m a second-year PhD student at the University of California, Santa Barbara, supervised by Prof. <a href="https://sites.cs.ucsb.edu/~lingqi/" target="_blank">Ling-Qi Yan</a>.\n',
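// Each publication entry: title, abstract, authors (an HTML string rendered with v-html), publication venue, teaser image path, and a list of {text, link} pairs.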
publications:[
{
title:'A Dynamic By-example BTF Synthesis Scheme',
abstract:'Measured Bidirectional Texture Function (BTF) can faithfully reproduce a realistic appearance but is costly to acquire and store due to its 6D nature (2D spatial and 4D angular). Therefore, it is practical and necessary for rendering to synthesize BTFs from a small example patch. While previous methods managed to produce plausible results, we find that they seldom take into consideration the property of being dynamic, so a BTF must be synthesized before the rendering process, resulting in limited size, costly pre-generation and storage issues. In this paper, we propose a dynamic BTF synthesis scheme, where a BTF at any position only needs to be synthesized when being queried. Our insight is that, with the recent advances in neural dimension reduction methods, a BTF can be decomposed into disjoint low-dimensional components. We can perform dynamic synthesis only on the positional dimensions, and during rendering, recover the BTF by querying and combining these low-dimensional functions with the help of a lightweight Multilayer Perceptron (MLP). Consequently, we obtain a fully dynamic 6D BTF synthesis scheme that does not require any pre-generation, which enables efficient rendering of our infinitely large and non-repetitive BTFs on the fly. We demonstrate the effectiveness of our method through various types of BTFs taken from UBO2014.\n',
authors:
' <b>Zilin Xu</b>,' +
' <a href="https://research.manchester.ac.uk/en/persons/zahra.montazeri" target="_blank">Zahra Montazeri</a>,' +
' <a href="https://wangningbei.github.io/" target="_blank">Beibei Wang</a>,'+
' <a href="https://sites.cs.ucsb.edu/~lingqi/" target="_blank">Ling-Qi Yan</a>',
publication:'SIGGRAPH ASIA 2024',
teaser:'images/teaser_siga24.png',
// teaser:'sigas2024/img/representative.jpg',
links:[
{text: 'Paper (arxiv)', link:'https://arxiv.org/abs/2405.14025'},
{text: 'Project Page (WIP)', link:'sigas2024/index.html'}
// {text: 'Paper', link:'https://zheng95z.github.io/assets/files/egsr2023-roma.pdf'},
// {text: 'Project Page', link:'https://zheng95z.github.io/publications/roma23'}
]
},
{
title:'Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing',
abstract:'We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: ray-aligned occupancy map array (ROMA) that is generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method computing visibilities in constant time, without constructing and traversing the traditional intersection acceleration data structures such as BVH. To further improve accuracy and reduce aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5x–10x faster than generating and tracing DFs at the same resolution and equal storage.\n',
authors:'<a href="https://zheng95z.github.io/" target="_blank">Zheng Zeng</a>,' +
' <b>Zilin Xu</b>,' +
' <a href="https://wanglusdu.github.io/" target="_blank">Lu Wang</a>,' +
' <a href="https://winmad.github.io/">Lifan Wu</a>,' +
' <a href="https://sites.cs.ucsb.edu/~lingqi/" target="_blank">Ling-Qi Yan</a>',
publication:'Eurographics Symposium on Rendering 2023 (CGF track)',
teaser:'images/roma23_teaser.png',
links:[
{text: 'Paper', link:'https://zheng95z.github.io/assets/files/egsr2023-roma.pdf'},
{text: 'Project Page', link:'https://zheng95z.github.io/publications/roma23'}
]
},
{
title:'Lightweight Neural Basis Functions for All-Frequency Shading',
abstract:'Basis functions provide both the abilities for compact representation and the properties for efficient computation.\n' +
'Therefore, they are pervasively used in rendering to perform all-frequency shading.\n' +
'However, common basis functions, including spherical harmonics (SH), wavelets, and spherical Gaussians (SG) all have their own limitations, such as low-frequency for SH, not rotationally invariant for wavelets, and no multiple product support for SG. \n' +
'In this paper, we present neural basis functions, an implicit and data-driven set of basis functions that circumvents the limitations with all desired properties. \n' +
'We first introduce a representation neural network that takes any general 2D spherical function (e.g. environment lighting, BRDF, and visibility) as input and projects it onto the latent space as coefficients of our neural basis functions.\n' +
'Then, we design several lightweight neural networks that perform different types of computation, giving our basis functions different computational properties such as double/triple product integrals and rotations. \n' +
'We demonstrate the practicality of our neural basis functions by integrating them into all-frequency shading applications, showing that our method not only achieves a compression rate of 0.39% and 10×-40× better performance than wavelets at equal quality, but also renders all-frequency lighting effects in real-time without the aforementioned limitations from classic basis functions.',
authors:'<b>Zilin Xu</b>,' +
' <a href="https://zheng95z.github.io/" target="_blank">Zheng Zeng</a>,' +
' <a href="https://winmad.github.io/">Lifan Wu</a>,' +
' <a href="https://wanglusdu.github.io/" target="_blank">Lu Wang</a>,' +
' <a href="https://sites.cs.ucsb.edu/~lingqi/" target="_blank">Ling-Qi Yan</a>',
publication:'SIGGRAPH ASIA 2022',
teaser:'sigas2022/img/representative.jpg',
links:[
{text: 'Paper', link:'http://sites.cs.ucsb.edu/~lingqi/publications/paper_neural_basis.pdf'},
{text: 'Project Page', link:'sigas2022/index.html'}
]
},
{
title:'Neural Complex Luminaires: Representation and Rendering',
abstract:'Complex luminaires, such as grand chandeliers, can be extremely costly to render because ' +
'the light-emitting sources are typically encased in complex refractive geometry, creating difficult' +
' light paths that require many samples to evaluate with Monte Carlo approaches. Previous work has' +
' attempted to speed up this process, but the methods are either inaccurate, require the storage ' +
'of very large lightfields, and/or do not fit well into modern path-tracing frameworks. Inspired' +
' by the success of deep networks, which can model complex relationships robustly and be evaluated' +
' efficiently, we propose to use a machine learning framework to compress a complex luminaire’s' +
' lightfield into an implicit neural representation. Our approach can easily plug into conventional' +
' renderers, as it works with the standard techniques of path tracing and multiple importance sampling (MIS).' +
' Our solution is to train three networks to perform the essential operations for evaluating the complex luminaire at' +
' a specific point and view direction, importance sampling a point on the luminaire given a shading location,' +
' and blending to determine the transparency of luminaire queries to properly composite them with other scene' +
' elements. We perform favorably relative to state-of-the-art approaches and render final images that are close' +
' to the high-sample-count reference with only a fraction of the computation and storage costs, with no need' +
' to store the original luminaire geometry and materials.',
authors:'<a href="http://junqiuzhu.com/" target="_blank">Junqiu Zhu</a>,' +
' <a href="https://velysianp.wixsite.com/elysonbaipersonal" target="_blank">Yaoyi Bai</a>,' +
' <b>Zilin Xu</b>,' +
' <a href="https://web.ece.ucsb.edu/~sbako/" target="_blank">Steve Bako</a>,' +
' <a href="https://www.edgarphd.com/" target="_blank">Edgar Velázquez-Armendáriz</a>,' +
' <a href="https://wanglusdu.github.io/" target="_blank">Lu Wang</a>,' +
' <a href="https://web.ece.ucsb.edu/~psen/" target="_blank">Pradeep Sen</a>,' +
' <a href="http://miloshasan.net/" target="_blank">Miloš Hašan</a>,' +
' <a href="https://sites.cs.ucsb.edu/~lingqi/" target="_blank">Ling-Qi Yan</a>',
publication:'ACM Transactions on Graphics (Proceedings of SIGGRAPH 2021)',
teaser:'images/complexluminaires.jpg',
links:[
{text: 'Paper', link:'http://sites.cs.ucsb.edu/~lingqi/publications/paper_complum.pdf'},
{text: 'Video', link:'http://sites.cs.ucsb.edu/~lingqi/publications/video_complum.mp4'}
]
},
{
title:'Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering',
abstract:'Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting the smoothness in image space.' +
' These methods generate image gradients and solve an image reconstruction problem with the rendered image and the gradient images. Recently,' +
' a previous work proposed a gradient-domain volumetric photon density estimation method for homogeneous participating media. However, ' +
'the image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed.' +
' Deep learning based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. ' +
'In this paper, we propose an unsupervised neural network for image reconstruction of gradient-domain volumetric photon density estimation, ' +
'more specifically for volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separated auxiliary feature branch, ' +
'which includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the images on a global scale and preserves the high-' +
'frequency details on a small scale. We demonstrate that our network produces a higher-quality result compared to previous work.' +
' Although we only considered volumetric photon mapping, it’s straightforward to extend our method for other forms, like beam radiance estimation.\n',
authors:'<b>Zilin Xu</b>,' +
' Qiang Sun,' +
' <a href="https://wanglusdu.github.io/" target="_blank">Lu Wang</a>,' +
' <a href="http://vr.sdu.edu.cn/info/1010/1062.htm">Yanning Xu</a>,' +
' <a href="https://wangningbei.github.io/" target="_blank">Beibei Wang</a>',
publication:'Computer Graphics Forum (Proceedings of Pacific Graphics 2020)',
teaser:'images/volgrad.jpg',
links:[
{text: 'Paper', link:'https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14137'},
{text: 'Project Page', link:'pg2020/index.html'}
]
}
],
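// Project entries mirror publications, but use a free-form 'date' line and a short 'description' instead of an abstract.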
projects:[
{
title:'Advanced 3D Wood Material in MaterialX',
description:'Implemented a procedural 3D wood material as a general compound node graph in MaterialX (an open-source project). ' +
'It can simulate many types of wood visual effects, including growth rings, pores, wood rays, etc.\n',
date:'Intern project at Autodesk, Summer 2024',
teaser:'images/wood.png',
links:[
{text: 'Project Link', link:'https://github.com/autodesk-forks/MaterialX/tree/zilin/3dwood'},
]
},
{
title:'By-example Texture Synthesis in MaterialX',
description:'Implemented a by-example texture synthesis node graph in MaterialX.\n',
date:'Intern project at Autodesk, Summer 2024',
teaser:'images/texture_syn.png',
links:[
{text: 'Project Link', link:'https://github.com/autodesk-forks/MaterialX/tree/zilin/3dwood'},
]
},
],
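// Other publications, rendered as text-only entries (no teaser image or links).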
others:[
{
title:'State-of-the-Art Survey of Photorealistic Rendering Based on Machine Learning',
abstract:'Nowadays, the demand for photorealistic rendering in the movie, anime, game, and other industries is increasing, ' +
'and highly realistic rendering of 3D scenes usually requires a great deal of computation time and storage to compute global ' +
'illumination. How to preserve rendering quality while improving rendering speed remains one of the core and ' +
'hot issues in the field of graphics. Data-driven machine learning methods have opened up a new direction. In recent years,' +
' researchers have mapped a variety of highly realistic rendering methods to machine learning problems, thereby greatly reducing' +
' the computational cost. This article summarizes and analyzes recent research progress in machine-learning-based highly realistic rendering, ' +
'including: global illumination computation methods based on machine learning,' +
' physical material modeling methods based on deep learning, participating media rendering optimization based on deep ' +
'learning, Monte Carlo denoising methods based on machine learning, etc. This article discusses in detail how these rendering ' +
'methods are mapped to machine learning problems, summarizes the construction of network models and training datasets,' +
' and compares them in terms of rendering quality, rendering time, network capabilities, and other aspects. ' +
'Finally, this paper proposes possible ideas and future prospects for combining machine learning and realistic rendering.',
authors:
'ZHAO Ye-Zi,'+
' <a href="https://wanglusdu.github.io/" target="_blank">WANG Lu</a>,' +
' <a href="http://vr.sdu.edu.cn/info/1010/1062.htm">XU Yan-Ning</a>,' +
' <a href="https://zheng95z.github.io/">ZENG Zheng</a>,' +
' GE Liang-Sheng,' +
' <a href="http://junqiuzhu.com/" target="_blank">ZHU Jun-Qiu</a>,' +
' <b>XU Zi-Lin</b>,' +
' <a href="http://vr.sdu.edu.cn/info/1010/1073.htm" target="_blank">Xiang-Xu Meng</a>',
publication:'Journal of Software',
}
]
},
methods:{
},
computed: {
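// Background color of the bottom navigation, keyed on the currently selected button index.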
color () {
switch (this.value) {
case 0: return 'indigo'
case 1: return 'blue-grey'
case 2: return 'brown'
case 3: return 'teal'
default: return 'blue-grey'
}
},
},
})
</script>
<style>
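/* Full-width header banner: the header photo fades into white via a gradient overlay so the page content blends in below it. */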
.header-img{
width: 100%;
height: 100%;
min-height: 10%;
background-image: linear-gradient(to bottom, rgba(255,255,255,0), rgba(255,255,255, 1)), url(images/header.jpg);
background-repeat: no-repeat;
background-size: cover;
}
</style>
</body>
</html>