<!DOCTYPE html>
<html lang="en">
<head>
<link rel="icon" href="assets/icons/icon.png">
<!-- Basic Page Needs
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<meta charset="utf-8">
<title>Maks Sorokin</title>
<meta name="description" content="">
<meta name="author" content="">
<!-- Mobile Specific Metas
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- FONT
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<link href="https://fonts.googleapis.com/css?family=Raleway:400,300,600" rel="stylesheet" type="text/css">
<!-- CSS
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<link rel="stylesheet" href="css/normalize.css">
<link rel="stylesheet" href="css/skeleton.css">
<link rel="stylesheet" href="css/custom.css">
<!-- Favicon
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<!-- <link rel="icon" type="image/png" href="assets/"> -->
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-1J19VDQYCK"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }
gtag('js', new Date());
gtag('config', 'G-1J19VDQYCK');
</script>
<!-- Video playback speed and no controls -->
<script defer src="js/video.js"></script>
</head>
<body>
<div class="container">
<!-- NAVBAR --------------------------------- -->
<!-- <div class="navbar-spacer"></div>
<nav class="navbar">
<div class="container">
<ul class="navbar-list">
<li class="navbar-item"><a class="navbar-link" href="index.html">About</a></li>
<li class="navbar-item"><a class="navbar-link" href="/blog.html">Blog</a></li>
</ul>
</div>
</nav> -->
<!-- Personal Info –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row profile-row" style="margin-top: 4%">
<div class="one columns"></div>
<div class="three columns">
<!-- Profile photo –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div style="text-align: center;">
<img src="assets/img/profile.jpg" class="profile-photo" alt="profile photo"><br>
</div>
<!-- Name
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<h5 style="text-align: center;">Maks Sorokin</h5>
<!-- Social links
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div id="social">
<a href="https://twitter.com/initmaks"><img src="assets/icons/twitter.png" class="iico" /></a>
<a rel="me" href="https://sigmoid.social/@maks"><img src="assets/icons/mastodon.png" class="iico" /></a>
<a href="https://github.com/initmaks"><img src="assets/icons/github.png" class="iico" /></a>
<a href="assets/pdfs/CV.pdf"><img src="assets/icons/file.png" class="iico" /></a>
<img src="assets/icons/mail.png" style="cursor: pointer;" class="iico" id="iemail" title="click to reveal" />
</div>
<div id="demail"></div> <!-- will reveal email -->
</div>
<div class="eight columns" style="margin-top: 5%;">
<p style="text-align:justify;">
<!-- Self-introduction
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
I'm a fourth-year Robotics Ph.D. student at Georgia Tech, advised by
<a href="https://www.cc.gatech.edu/~sha9/" style="text-decoration: none;">Dr. Sehoon Ha</a>
and
<a href="https://ckllab.stanford.edu/" style="text-decoration: none;">Dr. C. Karen Liu</a>.
I am interested in applications of vision-based robot learning to real-world robotics.
Currently, I am working on outdoor navigation and environment interaction problems.
</p>
<!-- <strong>Competences:</strong>
<a>python</a>,
<a>pytorch</a>,
<a>pybullet</a>,
<a>iGibson</a>,
<a>OpenCV</a>,
<a>numpy</a>,
<a>Tensorflow</a>,
<a>C/C++</a>,
<a>ROS</a>,
<a>docker</a> -->
<h5><u>News</u></h5>
<ul>
<li> <b>JUL'23</b> - Our work on Learning-oriented Robot Design was accepted at IROS 2023 <a
href="https://learning-robot.github.io/">[project page]</a></li>
<li> <b>MAR'23</b> - Check out our latest work on Designing a Learning Robot <a
href="https://learning-robot.github.io/">[project page]</a></li>
<li> <b>APR'22</b> - Human Motion Control of Quadrupedal Robots accepted at RSS <a
href="https://sites.google.com/view/humanconquad/">[project page]</a></li>
<li> <b>SEP'21</b> - Excited to be joining <a href="https://x.company">X - the moonshot factory</a> (<a
href="https://everydayrobots.com">Everyday Robots</a>) for a PhD
Residency!</li>
<li> <b>SEP'21</b> - Check out our work on Learning Sidewalk Navigation <a
href="./navigation.html">[project page]</a></li>
<!-- <li> <b>MAY'21</b> - Awarded the fellowship by the Machine Learning Center at Georgia Tech. <a href="https://mlatgt.blog/2021/05/10/the-machine-learning-center-awards-inaugural-mlgt-fellows/">[link]</a> </li> -->
<!-- <li> <b>FEB'21</b> Paper on Learning Human Search Behavior accepted at EUROGRAPHICS'2021! <a href="https://arxiv.org/pdf/2011.03618.pdf">[pdf]</a><a href="https://arxiv.org/abs/2011.03618">[arXiv]</a></li> -->
<!-- <li> <b>DEC'20</b> Paper on Few-shot visual sensor meta-adaptation accepted at ICRA'2021! <a href="https://arxiv.org/pdf/2011.03609.pdf">[pdf]</a><a href="http://arxiv.org/abs/2011.03609">[arXiv]</a></li> -->
</ul>
</div>
</div>
</div>
<!-- Companies
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="container">
<div class="div_line"></div><br>
<div class="row">
<div class="four columns center">
<a href="https://theaiinstitute.com/">
<img src="assets/icons/org/BDAII-flowers.png" alt="BDAII Logo" class="logo">
</a>
<p class="company-name">
Research Intern @ The AI Institute<br>2024 - Present
</p>
</div>
<div class="four columns center">
<a href="https://www.gatech.edu/">
<img src="assets/icons/org/GeorgiaTechLogo.png" alt="Georgia Tech Logo" class="logo">
</a>
<p class="company-name">
PhD in Robotics @ Georgia Tech<br>2020 - Present
</p>
</div>
<div class="four columns center">
<div class="row">
<a href="https://everydayrobots.com">
<img src="assets/icons/org/EverydayRobotsLogo.gif" alt="Everyday Robots Logo" class="logo">
</a>
</div>
<p class="company-name">
AI Residency @ Google X<br>2021 - 2022
</p>
</div>
</div>
</div>
<!-- Latest
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="container">
<div class="div_line"></div><br>
<h5><u>Latest Work</u></h5>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video id="learning-robot" class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/learning_robot_morph.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">On Designing a Learning Robot: Improving Morphology for Enhanced Task Performance and
Learning</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em><strong>Maks Sorokin</strong>, Chuyuan Fu, Jie Tan, C. Karen Liu, Yunfei Bai, Wenlong Lu, Sehoon Ha,
Mohi Khansari</em>
<br><em><span class="venue">International Conference on Intelligent Robots and Systems (IROS) 2023</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We present a learning-oriented morphology optimization framework that accounts for the interplay between the
robot's morphology, its onboard perception abilities, and their interaction in different tasks.
We find that holistically optimized morphologies improve the robot's performance by 15-20% on
various manipulation tasks and require 25x less data to match the performance of a human-expert-designed morphology.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://learning-robot.github.io/">[project page]</a>
<a href="https://www.youtube.com/watch?v=w9B0COjGvfo">[video overview]</a>
<a href="https://arxiv.org/pdf/2303.13390.pdf">[pdf]</a>
<a href="https://arxiv.org/abs/2303.13390">[arXiv]</a>
</div>
</div>
</div>
<!-- Publications
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="container">
<br>
<div class="div_line"></div><br>
<h5><u>Publications</u></h5>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video id="humanconquad" class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/humanconquad.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Human Motion Control of Quadrupedal Robots using Deep Reinforcement Learning</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em>Sunwoo Kim, <strong>Maks Sorokin</strong>, Jehee Lee, Sehoon Ha</em>
<br><em><span class="venue">Proceedings of Robotics: Science and Systems (RSS) 2022</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We propose a novel motion control system that allows a human user to perform various motor tasks seamlessly
on a quadrupedal robot.
Using our system, a user can execute a variety of motor tasks, including standing, sitting, tilting,
manipulating, walking, and turning, on simulated and real quadrupeds.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://sites.google.com/view/humanconquad">[project page]</a>
<a href="https://arxiv.org/pdf/2204.13336.pdf">[pdf]</a>
<a href="https://arxiv.org/abs/2204.13336">[arXiv]</a>
<a href="https://www.youtube.com/watch?v=kz8hBG1CKMY">[video]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/behavior_representations.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Relax, it doesn't matter how you get there!</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em>Mehdi Azabou, Michael Mendelson, <strong>Maks Sorokin</strong>, Shantanu Thakoor, Nauman Ahad, Carolina
Urzay, Eva L Dyer</em>
<br><em><span class="venue">Neural Information Processing Systems (NeurIPS) 2023 - Spotlight</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale self-supervised representation learning
model for behavior analysis. We combine a pooling module that aggregates features extracted by encoders with
different temporal receptive fields, and design latent objectives to bootstrap the representations in each
respective space to encourage disentanglement across different timescales.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://multiscale-behavior.github.io/">[project page]</a>
<a href="https://arxiv.org/pdf/2303.08811.pdf">[pdf]</a>
<a href="https://arxiv.org/abs/2303.08811">[arXiv]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/urban_navigation.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Learning to Navigate Sidewalks in Outdoor Environments</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em><strong>Maks Sorokin</strong>, Jie Tan, C. Karen Liu, Sehoon Ha</em>
<br><em><span class="venue">IEEE Robotics and Automation Letters (RA-L) 2022</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We design a system that enables zero-shot transfer of a vision-based policy to real-world outdoor environments
for the sidewalk navigation task.
Our approach is evaluated on a quadrupedal robot navigating real-world sidewalks, walking 3.2 kilometers
with a limited number of human interventions.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="./navigation.html">[project page]</a>
<a href="https://arxiv.org/pdf/2109.05603.pdf">[pdf]</a>
<a href="https://arxiv.org/abs/2109.05603">[arXiv]</a>
<a href="https://www.youtube.com/watch?v=JsAZy3YETwQ">[video]</a>
<a href="https://techxplore.com/news/2021-09-robot-efficiently-sidewalks-urban-environments.html">[TechXplore
article]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/object_search_animation.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Learning Human Search Behavior from Egocentric View</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em><strong>Maks Sorokin</strong>, Wenhao Yu, Sehoon Ha, C. Karen Liu </em>
<br><em><span class="venue">EUROGRAPHICS 2021</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We train a vision-based agent to perform object search in a photorealistic 3D scene,
and propose a motion synthesis mechanism for head motion re-targeting,
which enables object-searching behaviour with an animated human character (PFNN/NSM).
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://arxiv.org/pdf/2011.03618.pdf">[pdf]</a>
<a href="https://arxiv.org/abs/2011.03618">[arXiv]</a>
<a href="https://www.youtube.com/watch?v=LvSHpmjt8pU">[video]</a>
<a href="https://www.youtube.com/watch?v=NzsCT3a7rpY">[talk(20 min)]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/sensor_height_adaptation.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">A Few Shot Adaptation of Visual Navigation Skills to New Observations using
Meta-Learning</div>
<!-- AUTHORS + VENUE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<br><em>Qian Luo, <strong>Maks Sorokin</strong>, Sehoon Ha</em>
<br><em><span class="venue">The IEEE International Conference on Robotics and Automation (ICRA) 2021</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
We show how vision-based navigation agents can be trained to adapt to new sensor configurations with only
three shots of experience.
Rapid adaptation is achieved by introducing a bottleneck between the perception and control networks, and by
meta-adapting the perception component.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://arxiv.org/pdf/2011.03609.pdf">[pdf]</a>
<a href="http://arxiv.org/abs/2011.03609">[arXiv]</a>
</div>
</div>
</div>
<!-- Projects
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="container">
<br>
<div class="div_line"></div><br>
<h5><u>Projects</u></h5>
<!-- PROJECT ENTRY ###
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<img src="assets/img/sim2sim.png">
</div>
</div>
<div class="eight columns">
<!-- TITLE
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Real2Sim Image adaptation</div>
<br><em><span class="venue">2019</span></em>
<!-- DESCRIPTION
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
Image domain adaptation through the conversion of images with randomized textures (or real images) to a
canonical image representation.
A replication of the <a href="https://arxiv.org/abs/1812.07252">RCAN paper</a> with a different loss formulation (<a
href="https://arxiv.org/abs/1603.08155">Perceptual/Feature Loss</a> instead of a GAN loss).
</p>
<!-- LINKS
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://github.com/initmaks/ran2can">[github]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video id="learning_to_swing" class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/learning_to_swing.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Learning to swing</div>
<br><em><span class="venue">2018</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
Computer Animation class project that uses an off-the-shelf Soft Actor-Critic reinforcement learning method
to learn to build up momentum and swing an animated character on a pull-up bar.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="pages/charanim.html">[short-summary]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video id="fetchit" class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/fetchit.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">FetchIt</div>
<br><em><span class="venue">2018</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
Mobile manipulation project that uses MoveIt! & GQ-CNN to grasp an object from a table using a Fetch
robot in the Gazebo simulator.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="pages/fetchit.html">[short-summary]</a>
</div>
</div>
<!-- PUBLICATION ENTRY –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="row video-row">
<!-- MEDIA SECTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="four columns">
<div class="video-container">
<video id="car_bc" class="no-controls-video" autoplay loop muted preload="auto" playsinline>
<source src="assets/videos/drive.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="eight columns text-column">
<!-- TITLE –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="project_title">Behavioral Clonning for Autonomous Driving</div>
<br><em><span class="venue">2017</span></em>
<!-- DESCRIPTION –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<p class="project_info">
End-to-end (image-to-steering-wheel) control policy learning from data collected over multiple laps, with
off-the-track recoveries demonstrated by a human driver.
</p>
<!-- LINKS –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<a href="https://github.com/initmaks/Self-driving_car_ND/tree/master/Behavioral-Cloning">[github]</a>
</div>
</div>
</div>
<br>
<br>
<!-- Teaching –––––––––––––––––––––––––––––––––––––––––––––––––– -->
<div class="container">
<div class="div_line"></div><br><br>
<div class="row profile-row">
<div class="one columns hide-on-small"> </div>
<div class="nine columns ">
<h5><u>Mentoring Experience</u></h5>
I've had the great pleasure of working with a number of exceptional students at Georgia Tech.<br><br>
<ul>
<li><strong>PRESENT</strong>: Master's Student - <a href="https://jxu443.github.io/portfolio/">Jiaxi Xu</a></li>
<li><strong>FALL 2021</strong>: Master's Student - <a href="https://arjun-krishna.github.io/">Arjun Krishna</a> -> PhD student at UPenn’s GRASP lab </li>
<li><strong>FALL 2020</strong>: Master's Student - <a href="https://qianluo.netlify.app">Qian Luo</a> ->
NLP Algorithm Engineer at Alibaba Group</li>
</ul>
<br>
<h5><u>Teaching Experience</u></h5>
I had an amazing experience helping teach one of the largest classes (1000+ students) at Georgia Tech.
<br>CS6601 - Artificial Intelligence class by
<a href="https://www.cc.gatech.edu/people/thomas-ploetz"> Dr. Thomas Ploetz</a> & <a
href="https://www.cc.gatech.edu/home/thad/">Dr. Thad Starner</a>.
<br><br>
<ul>
<li><strong>FALL 2019 & SPRING 2020</strong>: Head Teaching Assistant </li>
<li><strong>FALL 2018 & SPRING 2019</strong>: Teaching Assistant </li>
</ul>
<br>
<h5><u>Scholarly Activities</u></h5>
<ul>
<li><strong>IROS 2023</strong> - Session co-chair, Mechanism Design </li>
<li><strong>RA-L 2023</strong> - Reviewer at IEEE Robotics and Automation Letters </li>
<li><strong>RSS 2023</strong> - Reviewer at Proceedings of Robotics: Science and Systems </li>
<li><strong>RA-L 2022</strong> - Reviewer at IEEE Robotics and Automation Letters</li>
<li><strong>ICRA 2021</strong> - Reviewer at IEEE International Conference on Robotics and Automation</li>
</ul>
</div>
</div>
</div>
<!-- End Document
–––––––––––––––––––––––––––––––––––––––––––––––––– -->
<!-- attempt hiding from spambots :p - snippet credits Andrej Karpathy - karpathy.ai source code -->
<script type="text/javascript">
var e_is_shown = false;
document.getElementById('iemail').addEventListener("click", function () {
let demail = document.getElementById('demail');
demail.innerHTML = 'm' + 'aks' + ' _a' + 't_ ' + 'gatech' + '.' + 'e' + 'du';
demail.style.opacity = e_is_shown ? 0 : 1;
e_is_shown = !e_is_shown;
})
</script>
<!-- ### FOOTER ### -->
<br>
<br>
<br>
<br>
<div class="div_line"></div><br>
<p style="color:gray; text-align:center; font-size: small;">
<span style="text-decoration: none;">consider checking out: </span><br><br>
<a href="https://www.givewell.org/"><img src="assets/icons/gw.jpg" style="width:100px;"></a>
<a href="https://www.givingwhatwecan.org/"><img src="assets/icons/gwwc.png" style="width:100px;"></a>
<a href="https://www.effectivealtruism.org/"><img src="assets/icons/ea.png" style="width:80px; margin:10px"></a>
</p>
<div class="div_line"></div><br>
<p style="color:gray; text-align:center; font-size: small;">
© 2023 Maks Sorokin<br>
built using <a href="http://getskeleton.com/">Skeleton</a>,
icon credits <a href="https://www.flaticon.com/"> flaticon</a>,
hosted by <a href="https://pages.github.com/"> GitHub Pages</a>❤️
<br>
<br>
feel free to copy: <a href="https://github.com/initmaks/initmaks.github.io">this page</a>
</p>
</body>
</html>