diff --git a/404.html b/404.html index ec78ed7..6e303ab 100755 --- a/404.html +++ b/404.html @@ -1 +1 @@ - Scheme MkDocs

404 - Not found

\ No newline at end of file diff --git a/Computer Network/CS144/index.html b/Computer Network/CS144/index.html index cefe34b..f7578a5 100755 --- a/Computer Network/CS144/index.html +++ b/Computer Network/CS144/index.html @@ -1,4 +1,4 @@ - Stanford CS144: Computer Network - Scheme MkDocs

CS144: Computer Network

Course Overview

  • University: Stanford
  • Prerequisites: some computer systems background; CS106L
  • Programming language: C++
  • Difficulty: 🌟🌟🌟🌟🌟
  • Estimated time: 100 hours

One of the lead instructors of this course is Professor Nick McKeown, a giant in the networking field. A heavyweight in both academia and industry who has founded his own startups, he interviews an industry executive or a leading researcher at the end of every chapter of his MOOC, which is wonderfully eye-opening.

In the course projects you will incrementally build the entire TCP/IP protocol stack in C++, implement IP routing and the ARP protocol, and finally use your own protocol stack in place of the Linux kernel's network stack to communicate with other students' machines, which is pretty amazing!

Course Resources

Resource Summary

Computer Networking: A Top-Down Approach

Course Overview

  • University: University of Massachusetts Amherst
  • Prerequisites: some computer systems background
  • Programming language: none
  • Difficulty: 🌟🌟🌟
  • Estimated time: 40 hours

Computer Networking: A Top-Down Approach is a classic textbook in the networking field. Its two authors, Jim Kurose and Keith Ross, have carefully built a companion course website, publishing their own recorded lecture videos, interactive online chapter quizzes, and labs that use Wireshark for packet capture and analysis. The only regret is that the course has no hardcore programming assignments, but Stanford's CS144 fills that gap nicely.

Course Resources

Resource Summary

@PKUFlyingPig has collected all the resources and assignment implementations used while taking this course in PKUFlyingPig/Computer-Network-A-Top-Down-Approach - GitHub.

\ No newline at end of file diff --git a/Computer Vision/EECS-498/index.html b/Computer Vision/EECS-498/index.html index 973ccbe..b0c55e9 100755 --- a/Computer Vision/EECS-498/index.html +++ b/Computer Vision/EECS-498/index.html @@ -1,4 +1,4 @@ - UMich EECS 498-007 / 598-005: Deep Learning for Computer Vision - Scheme MkDocs

UMich EECS 498-007 / 598-005: Deep Learning for Computer Vision

Course Overview

  • University: UMich
  • Prerequisites: Python basics, matrix theory (familiarity with matrix derivatives is enough), calculus
  • Programming language: Python
  • Difficulty: 🌟🌟🌟🌟
  • Estimated time: 60~80 hours

UMich's computer vision course. The lecture videos and assignments are of extremely high quality and cover a very complete set of topics, and the assignments ramp up from easy to hard while spanning the whole evolution of mainstream CV models, which makes this an excellent introductory computer vision course.

In each assignment you follow the handouts to build and train the models/frameworks introduced in the lectures.

You do not need any prior experience with deep learning frameworks: in the first assignment the course teaches every student how to use PyTorch from scratch, and you can treat it as a reference manual to look things up in later.

Because each assignment covers a different topic, the progressive assignments let you experience first-hand how mainstream CV models developed, see how different models and training methods affect the final results and accuracy, and implement them hands-on.

In A1 you learn how to use PyTorch and Google Colab.

In A2 you build a linear classifier and a two-layer neural network yourself, and finally get to work with the MNIST dataset, training and evaluating the network you built on it.

In A3 you meet the most classic model of all, the Convolutional Neural Network (a.k.a. CNN), and experience the appeal of convolutional networks for yourself.

In A4 you go through the full workflow of building object detection models, following the handout to implement the one-stage detector and two-stage detector from two papers.

A5 is the moment to go from CNNs to RNNs: you get to build two different kinds of sequence models, RNNs (vanilla RNN & LSTM) and the famous attention-based Transformer.

In the final assignment (A6) you implement two fancier models, VAE and GAN, and apply them to the MNIST dataset. Finally, you implement two very cool features: network visualization and style transfer.

Beyond the assignments you can also build your own mini-project, putting together a complete deep learning pipeline; see the course homepage for details.

The course materials (lectures, notes, and assignments) are all open. The only blemish is that the autograder is available only to enrolled UMich students, but since the provided *.ipynb handouts already let you verify the correctness of your implementation and the expected results, I personally think the missing autograder has no real impact.

It is worth mentioning that the lead instructor, Justin Johnson, earned his PhD under Fei-Fei Li and is now an assistant professor at UMich.

The openly available 2017 version of Stanford CS231n was in fact taught by Justin Johnson.

And since CS231n was largely built up by Justin Johnson and Andrej Karpathy, this course reuses some CS231n material, so students who have taken CS231n may find parts of it familiar.

Finally, I recommend that everyone who enrolls in this course watch the lectures on YouTube: Justin Johnson's teaching style and content are both clear and easy to follow, and they make an excellent reference.

Course Resources

OpenCV

OpenCV (Open Source Computer Vision) is an open-source library of programming functions mainly aimed at real-time computer vision. It provides a wide range of tools for image processing, video capture and analysis, 3D reconstruction, object detection, and many other applications.

OpenCV is written in C/C++ and has bindings for Python, Java, and MATLAB. It is cross-platform and can run on Linux, Windows, and macOS.

OpenCV is widely used in academic and industrial research, including in fields such as computer vision, image processing, robotics, and artificial intelligence. It is also used in mobile and embedded devices, including in self-driving cars, drones, and security systems.

The OpenCV library is free to use and open-source, and it is available under an open-source license.
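
To make the Python bindings concrete, here is a minimal sketch using the cv2 module; the input file name is an assumption for illustration.

import cv2

img = cv2.imread("input.jpg")                  # load an image from disk (path assumed)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # OpenCV loads images in BGR order
edges = cv2.Canny(gray, 100, 200)              # Canny edge detection with two thresholds
cv2.imwrite("edges.jpg", edges)                # write the result back to disk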

CMU 15-445: Database Systems

Course Overview

  • University: CMU
  • Prerequisites: C++, data structures and algorithms, CMU 15-213 (a.k.a. CS:APP; this is also the prerequisite CMU itself requires of students who enroll each year)
  • Programming language: C++
  • Difficulty: 🌟🌟🌟🌟
  • Estimated time: 100 hours

As CMU's introductory database course, this class is taught by Andy Pavlo, a big name in the database field ("There are only two things I care about in this world: one is my wife, and the other is databases").

It is an extremely high-quality database introduction with extremely complete resources. The course faculty and the CMU Database Group behind it have fully opened up the course infrastructure (autograder, Discord) and the course materials (lectures, notes, homework), so that everyone who wants to learn databases can enjoy a course experience almost identical to that of CMU's own students.

The highlight of the course is that the CMU Database Group built a teaching relational database, bustub, specifically for it, and asks you to modify parts of this database and implement the functionality of the components listed below.

Concretely, over the course of four projects in 15-445 you implement several key components of Bustub, a traditional disk-oriented relational database.

These include the Buffer Pool Manager (memory management), a B Plus Tree (storage engine), Query Executors & Query Optimizer (operators and optimizer), and Concurrency Control, corresponding to Project #1 through Project #4 respectively.

It is worth mentioning that while implementing the components you can compile bustub-shell from shell.cpp and observe in real time whether your components are correct; the positive feedback is very rewarding.

In addition, bustub as a small-to-medium C++ project covers program building, coding conventions, unit testing, and many other requirements, so it is also worth studying as a good open-source project.

Course Resources

In Fall 2019, Project #2 was a hash index and Project #4 was logging & recovery.

In Fall 2020, Project #2 was a B+ tree and Project #4 was concurrency control.

In Fall 2021, Project #1 was the buffer pool manager, Project #2 was a hash index, and Project #4 was concurrency control.

Fall 2022 was the same as Fall 2021 except that the hash index was replaced by a B+ tree index.

Spring 2023 was largely the same as Fall 2022 (buffer pool, B+ tree index, operators, concurrency control), except that Project #0 became a Copy-On-Write Trie and a fun task of registering upper/lowercase string functions was added; you can see the effect of the functions you wrote directly in the compiled bustub-shell, which is very satisfying.

Note that bustub versions from before 2020 are no longer maintained.

The final Logging & Recovery project of Fall 2019 is now broken (it might still run on the 2019 git head, but even then Gradescope no longer seems to provide a public version, so it is not recommended; just read the code and the handout).

Perhaps the recovery-related functionality will be fixed in the Fall 2023 version, and there may even be a brand-new Recovery project; let's wait and see 🤪

If you have the energy you can try all of them, or, when you feel your understanding of something in the book is shaky, doing the corresponding project will deepen your understanding (personally I suggest finishing all of them; it will definitely help you).

In addition, the CMU database team also runs a public lecture series on DB with ML: the ML⇄DB Seminar Series.

Resource Summary

The unofficial Discord is a great place to exchange ideas: the chat history records virtually every pitfall other students have run into, and you can ask your own questions or help answer other people's; it makes a great reference.

For a Spring 2023 walkthrough, see @xzhseh's guide "CMU 15-445/645 (Spring 2023) Database Systems 通关指北", which covers all the tools you need to get through the course, suggestions on how to approach it, and, most importantly, the pitfalls the author encountered, saw, and personally stepped into while doing the projects.

@ysj1173886760 has collected all the resources and assignment implementations used while taking this course in ysj1173886760/Learning: db - GitHub.

At Andy's request, the repository contains no project implementations, only the homework solutions. In particular, for Homework 1, @ysj1173886760 also wrote a shell script to do the grading for you automatically.

After finishing the course it is also recommended to read the paper Architecture of a Database System; a Chinese translation is available in the repository above. The paper surveys the overall architecture of database systems and gives you a more complete view of databases.

Follow-up Courses

CMU 15-721 mainly covers topics related to in-memory databases, with a paper to read for every lecture. It is recommended for those who want to go further with databases. @ysj1173886760 is currently working through this course and will open a PR here to provide advanced guidance once it is finished.

CS224n: Natural Language Processing

Course Overview

  • University: Stanford
  • Prerequisites: deep learning basics + Python
  • Programming language: Python
  • Difficulty: 🌟🌟🌟🌟
  • Estimated time: 80 hours

Stanford's introductory NLP course, led by Chris Manning, a giant of natural language processing (and co-creator of the GloVe word-vector algorithm). The content covers the core topics of NLP: word vectors, RNNs, LSTMs, Seq2Seq models, machine translation, attention, Transformers, and more.

The five programming assignments ramp up in difficulty step by step: word vectors, the word2vec algorithm, dependency parsing, machine translation, and fine-tuning a Transformer.

The final project is to train a QA model on Stanford's famous SQuAD dataset; some students' final projects have even been published directly as top-conference papers.

Course Resources

Resource Summary

@PKUFlyingPig has collected all the resources and assignment implementations used while taking this course in PKUFlyingPig/CS224n - GitHub.

Haskell

Haskell is a purely functional programming language. It is known for its speed and reliability, and it is often used in industry for building large-scale software systems.

Resources

Lean4

Lean4 is a programming language developed by Microsoft Research. It is a functional programming language that is based on theorem proving and dependent type theory. It is designed to be easy to use and easy to understand. It is also designed to be efficient and scalable.

Resources

LangChain

LangChain is a project that aims to create a language-agnostic, open-source, and community-driven framework for language learning.

The framework will be designed to be modular and extensible, allowing for easy integration of new languages and features. The framework will also be designed to be user-friendly and accessible, with clear documentation and tutorials.

The LangChain framework will be open-source and available for anyone to use and contribute to. The project will be developed in the open, with all code and documentation available for anyone to view and use.

The LangChain framework will be designed to be language-agnostic, meaning that it will be able to support any language that has a written alphabet. This will allow for easy integration of new languages and features, as well as the ability to create language-specific tools and resources.

The LangChain framework will be community-driven, meaning that it will be open to anyone who wants to contribute to the project. Anyone can submit new languages, features, or tools, and the LangChain team will review and approve them. This will allow for a collaborative and diverse community to develop and improve the framework.

The LangChain framework will be designed to be scalable, meaning that it will be able to handle large amounts of data and users. The framework will be designed to be efficient and scalable, with features such as caching and optimization in mind. The LangChain team will work to ensure that the framework is optimized for performance and scalability, and that it can handle large amounts of data and users.

The LangChain framework will be designed to be accessible, meaning that it will be designed to be easy to use and understand. The framework will be designed to be user-friendly and intuitive, with clear documentation and tutorials. The LangChain team will work to ensure that the framework is easy to use and understand, and that it is accessible to all users.

The LangChain framework will be designed to be secure, meaning that it will be designed to protect user data and prevent unauthorized access. The framework will be designed to be secure and safe, with features such as encryption and authentication in mind. The LangChain team will work to ensure that the framework is secure and safe, and that it protects user data and prevents unauthorized access.

The LangChain framework will be designed to be inclusive, meaning that it will be designed to be accessible to people with disabilities. The framework will be designed to be accessible and inclusive, with features such as high contrast and easy-to-read fonts in mind. The LangChain team will work to ensure that the framework is accessible and inclusive, and that it is designed to be used by people with disabilities.

Llama2

Localizing Llama2

You can refer to the following resources to deploy Llama2:

Introduction

Llama2 is a new generation of Llama, a high-performance, low-latency, and scalable messaging system. Llama2 is designed to be a drop-in replacement for Llama, and provides a better performance, scalability, and reliability. Llama2 is also designed to be more flexible and extensible, allowing for new features and functionality to be added as needed.

Llama2 is built on top of the Apache Kafka messaging system, which is widely used in the industry for high-throughput, low-latency messaging. Llama2 is designed to be compatible with Kafka, and can be used as a drop-in replacement for Llama. Llama2 also provides a rich set of features and functionality that are not available in Llama, such as message routing, message filtering, and message transformation.

Llama2 is designed to be easy to use and deploy, and can be deployed on-premises or in the cloud. Llama2 is also designed to be highly available and fault-tolerant, and can handle a wide range of workloads and use cases.

Deploy

To deploy Llama2, follow the steps below:

  1. Install Llama2 on your server or cluster.
  2. Configure Llama2 to connect to your Kafka cluster.
  3. Start sending and receiving messages using Llama2.

Features

Llama2 provides a rich set of features and functionality that are not available in Llama. Some of the key features of Llama2 are:

  1. Message routing: Llama2 allows you to route messages to different topics based on certain criteria, such as message content or metadata.
  2. Message filtering: Llama2 allows you to filter messages based on certain criteria, such as message content or metadata.
  3. Message transformation: Llama2 allows you to transform messages into a different format, such as JSON or XML.
  4. Message delivery guarantee: Llama2 provides a delivery guarantee that ensures that messages are delivered at least once, exactly once, or at most once.
  5. Message replay: Llama2 allows you to replay messages that have been consumed before.
  6. Message retention: Llama2 allows you to set a retention policy for messages, which determines how long messages are kept in the system.
  7. Message compression: Llama2 allows you to compress messages to reduce the amount of data that needs to be stored and transmitted.
  8. Message ordering: Llama2 ensures that messages are delivered in the order they are sent.
  9. Message batching: Llama2 allows you to batch messages together and send them in a single request.
  10. Message re-partitioning: Llama2 allows you to re-partition messages to different topics based on certain criteria, such as message content or metadata.
  11. Message re-ordering: Llama2 allows you to re-order messages based on certain criteria, such as message content or metadata.
  12. Message de-duplication: Llama2 allows you to de-duplicate messages based on certain criteria, such as message content or metadata.
  13. Message encryption: Llama2 allows you to encrypt messages using various encryption algorithms, such as AES, RSA, and HMAC.
  14. Message authentication: Llama2 allows you to authenticate messages using various authentication mechanisms, such as SSL, SASL, and OAuth.
  15. Message indexing: Llama2 allows you to index messages using various indexing techniques, such as Apache Solr, Elasticsearch, and Apache Lucene.
  16. Message monitoring: Llama2 provides monitoring capabilities that allow you to track the performance and health of your Llama2 cluster.
  17. Message security: Llama2 provides security features that allow you to secure your Llama2 cluster.

Conclusion

Llama2 is a new generation of Llama, a high-performance, low-latency, and scalable messaging system. Llama2 is designed to be a drop-in replacement for Llama, and provides a better performance, scalability, and reliability. Llama2 is also designed to be more flexible and extensible, allowing for new features and functionality to be added as needed.

Llama2 is built on top of the Apache Kafka messaging system, which is widely used in the industry for high-throughput, low-latency messaging. Llama2 is designed to be compatible with Kafka, and can be used as a drop-in replacement for Llama. Llama2 also provides a rich set of features and functionality that are not available in Llama, such as message routing, message filtering, and message transformation.

Llama2 is designed to be easy to use and deploy, and can be deployed on-premises or in the cloud. Llama2 is also designed to be highly available and fault-tolerant, and can handle a wide range of workloads and use cases.

CS189: Introduction to Machine Learning

Course Overview

  • University: UC Berkeley
  • Prerequisites: CS188, CS70
  • Programming language: Python
  • Difficulty: 🌟🌟🌟🌟
  • Estimated time: 100 hours

I have not taken this course systematically; I only use its course notes as a reference book. Judging from the course website, its advantage over CS229 is that it open-sources all of the homework code as well as the Gradescope autograder. Like CS229, it is taught in a fairly theoretical and in-depth way.

Course Resources

C++

This section mainly introduces some of the newer features of C++.

Move Semantics

For move semantics, you can refer to the following resources:

Modern C++ language features

Java

Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

CUDA

CUDA is a parallel computing platform and programming model created by Nvidia. It allows developers to write high-performance parallel applications using a combination of C/C++, CUDA C/C++, and Fortran. CUDA provides a rich set of APIs for parallel programming, including parallel thread execution, memory management, and device management. CUDA also includes a compiler toolchain (nvcc) that compiles device code for NVIDIA GPUs and host code for various CPU architectures, including x86, x86-64, ARM, and PowerPC. CUDA is widely used in scientific computing, graphics processing, and machine learning applications.

CUDA is available for free download and installation on Windows, Linux, and macOS platforms. It is also available as a part of popular cloud computing platforms such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

CUDA is a powerful tool for parallel computing and is widely used in a wide range of applications. It is a good choice for developers who are interested in developing high-performance parallel applications using CUDA.

CUDA Programming Model

The CUDA programming model is based on a combination of C/C++ and CUDA C/C++. CUDA C/C++ is a high-level language that is designed to work with CUDA. It provides a set of built-in functions and operators that can be used to write parallel code. CUDA C/C++ code is compiled into a CUDA executable that can be run on a GPU.

The CUDA programming model consists of several components:

  • Host code: This is the code that is executed on the CPU. It interacts with the device code to perform parallel computations.
  • Device code: This is the code that is executed on the GPU. It is written in CUDA C/C++ as kernels, and each kernel is executed by many GPU threads in parallel.
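
To make the host/device split concrete, here is a hedged sketch in Python using Numba's CUDA support (a different, commonly used package, not the CUDA C/C++ toolchain itself); it assumes the numba package and an NVIDIA GPU are available. The decorated function is device code, everything else is host code.

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    # Device code: each GPU thread handles one element of the arrays.
    i = cuda.grid(1)
    if i >= x.size:
        return
    out[i] = x[i] + y[i]

# Host code: prepare data, configure the launch, and invoke the kernel.
n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)
print(out[:4])   # [2. 2. 2. 2.]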

C++ and CUDA C/C++

C++ and CUDA C/C++ are closely related but not identical. C++ is a general-purpose programming language that is widely used in software development. CUDA C/C++ is an extension of C++ designed specifically for parallel computing: it adds keywords for marking kernels, syntax for launching them, and APIs for managing devices and memory. CUDA C/C++ code is compiled with nvcc into executables whose parallel portions run on a GPU, while ordinary C++ code can run on any platform with a suitable compiler.

CUDA Libraries

CUDA provides a rich set of libraries and APIs that can be used to develop parallel applications. These include the CUDA Runtime API, CUDA Driver API, CUDA Math API, CUDA Graph API, the profiling interface (CUPTI), Cooperative Groups, the Texture Memory and Surface Memory APIs, Dynamic Parallelism, interoperability APIs for Direct3D (D3D10/D3D11/D3D12), OpenGL, and VDPAU, the Thrust parallel algorithms library, and CUDA Python.

CUDA libraries are designed to work with CUDA C/C++ and provide a set of APIs for parallel programming. These libraries can be used to develop parallel applications that can run on a GPU. CUDA libraries can be used to optimize performance, reduce memory usage, and improve application performance.

CUDA Programming Tools

CUDA provides a set of tools that can be used to develop and debug CUDA applications. These tools include the CUDA compiler (nvcc), the CUDA debugger (cuda-gdb), the Nsight profilers, CUDA Memcheck (now compute-sanitizer), the Thrust library, and CUDA Python.

nvcc compiles CUDA C/C++ code into an executable. cuda-gdb is used to debug CUDA applications. The Nsight tools are used to profile CUDA applications. CUDA Memcheck / compute-sanitizer detects memory errors in CUDA applications. Thrust is a C++ template library for writing parallel algorithms in CUDA C/C++. CUDA Python is a library that is used to write CUDA applications in Python.

CUDA tools can be used to develop and debug CUDA applications. They can help to identify and fix errors in CUDA applications, optimize performance, and reduce memory usage.

Huggingface

Huggingface is a popular NLP library that provides a lot of pre-trained models for various tasks such as text classification, named entity recognition, and question answering. It also provides a simple interface for training and fine-tuning these models on custom datasets.

Here is an example of how to use Huggingface to run a pre-trained sentiment-analysis model through the pipeline API (fine-tuning on a custom dataset is sketched further below):

from transformers import pipeline

classifier = pipeline('sentiment-analysis')
print(classifier("I really enjoyed this course!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
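
If you do want to fine-tune on a custom dataset, the library's Trainer API is the usual route. Below is a hedged sketch rather than a tuned recipe: the checkpoint distilbert-base-uncased and the imdb dataset are arbitrary illustrative choices, and the datasets package is assumed to be installed.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    # Tokenize the raw text; padding is handled dynamically by the default collator.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    tokenizer=tokenizer,
)
trainer.train()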
 
diff --git a/Potpourri/Manim/index.html b/Potpourri/Manim/index.html
index af007dc..644adb2 100755
--- a/Potpourri/Manim/index.html
+++ b/Potpourri/Manim/index.html
@@ -1,4 +1,4 @@
Manim

Introduction

Manim is a Python library for creating mathematical animations, which is based on the idea of creating mathematical objects and transforming them over time. It is an open-source project and is maintained by the community. It is used to create visualizations, simulations, and animations for a wide range of applications, including computer science, mathematics, physics, and more.

Installation

To install Manim, you need to have Python installed on your system. You can download and install Python from the official website. Once you have Python installed, you can install Manim using the following command:

pip install manim
 

This will install Manim and all its dependencies.

Creating Animations

To create an animation, you need to create a Python file and use the Scene class from Manim. Here is an example:

from manim import *

class SquareToCircle(Scene):
    def construct(self):
        square = Square()                       # start from a square
        self.play(Transform(square, Circle()))  # morph it into a circle
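
To render the scene you would run something like manim -pql scene.py SquareToCircle from the command line (scene.py being whatever file you saved the class in), where -p previews the result when rendering finishes and -ql uses low quality for quick iteration, as in current Manim Community releases.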
diff --git a/Potpourri/SocketIO/index.html b/Potpourri/SocketIO/index.html
index 094b431..ad78341 100755
--- a/Potpourri/SocketIO/index.html
+++ b/Potpourri/SocketIO/index.html
@@ -1,4 +1,4 @@
Socket.IO

Socket.IO is a real-time communication framework that enables bidirectional, event-based communication between the client and the server. It uses WebSockets as its primary transport, falling back to HTTP long-polling when WebSockets are unavailable, and provides a simple API on top. The reference implementation is a JavaScript library with a browser client and a Node.js server, and client and server implementations also exist for other languages.

Resources

Socket.IO in Python

Socket.IO can be used in Python using the python-socketio library. The library provides a client-side and a server-side implementation. The client-side implementation is used to connect to the server and send and receive messages. The server-side implementation is used to handle incoming connections and send messages to the clients.

Here's an example of how to use Socket.IO in Python:

import socketio

sio = socketio.Client()
sio.connect('http://localhost:5000')     # address of a running Socket.IO server (assumed)
sio.emit('message', {'hello': 'world'})
sio.disconnect()
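
For completeness, here is a minimal sketch of the server side using the same python-socketio library together with eventlet; the package choice and the port number are assumptions for illustration.

import eventlet
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)

@sio.event
def message(sid, data):
    # Echo a reply back to the client that sent the event.
    print('received from', sid, ':', data)
    sio.emit('message', {'reply': 'got it'}, to=sid)

if __name__ == '__main__':
    eventlet.wsgi.server(eventlet.listen(('', 5000)), app)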
 
diff --git a/Useful Tools/CMake/index.html b/Useful Tools/CMake/index.html
index e7d79ce..7aecd8a 100755
--- a/Useful Tools/CMake/index.html	
+++ b/Useful Tools/CMake/index.html	
@@ -1,4 +1,4 @@
CMake

CMake is a cross-platform build system generator. It is used to build, test, and package software. It is widely used in the open-source community and is used in many popular projects such as OpenCV, VTK, and ITK.

Installing CMake

To install CMake, you can download the installer from the official website: https://cmake.org/download/.

Using CMake

To use CMake, you need to create a CMakeLists.txt file in the root directory of your project. This file contains all the instructions for building your project.

Here is a basic example of a CMakeLists.txt file:

cmake_minimum_required(VERSION 3.10)

project(MyProject)

add_executable(MyProject main.cpp)   # build an executable from main.cpp (file name assumed)
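
With this CMakeLists.txt in place, a typical out-of-source build is cmake -S . -B build followed by cmake --build build (the -S/-B options need CMake 3.13 or newer; with older versions, create a build directory, cd into it, and run cmake .. and then cmake --build .).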
 
diff --git a/Useful Tools/Docker/index.html b/Useful Tools/Docker/index.html
index c376ec1..859e78c 100755
--- a/Useful Tools/Docker/index.html	
+++ b/Useful Tools/Docker/index.html	
@@ -1,4 +1,4 @@
Docker

Docker Learning Resources

The official Docker documentation is of course the best material for beginners, but the best teacher is yourself: only by actually using Docker can you enjoy the convenience it brings. Docker has grown rapidly in industry and is already very mature; you can download its desktop client and use the graphical interface.

Of course, if you are a die-hard fan of reinventing the wheel like me, you might as well write a mini Docker yourself to deepen your understanding.

KodeKloud Docker for the Absolute Beginner gives a comprehensive introduction to Docker's basic features, comes with plenty of hands-on exercises, and provides a free cloud environment in which to do them. The other cloud-related courses such as Kubernetes are paid, but I strongly recommend them: the explanations are very thorough and well suited to complete beginners, and there is a ready-made Kubernetes lab environment, so you won't be scared off by environment setup.

GNU Debugger

1. Introduction

The GNU Debugger (GDB) is a powerful command-line debugger that is used to debug and analyze programs. It is a valuable tool for developers and system administrators, providing a rich set of commands and features that let you inspect and control a running program in a variety of ways.

In this article, we will learn how to install GDB and how to use its commands to debug and analyze our code.

2. Installing GDB

GDB is included in most Linux distributions and can be installed using the package manager. For example, on Ubuntu, you can install GDB using the following command:

sudo apt-get install gdb
 

On Windows, GDB usually comes as part of a toolchain such as MinGW-w64 or MSYS2; after installing one of these, make sure the directory containing the gdb executable is on your PATH environment variable.

Once GDB is installed, you can start it by typing gdb in the terminal. You should see the GDB prompt:

(gdb)
 

This is the GDB command prompt. You can type GDB commands and execute them to debug and optimize your code.

3. Debugging a Program

To debug a program using GDB, you need to first compile the program with debugging symbols. You can do this by adding the -g flag to the compiler command. For example, if you are using the g++ compiler, you can compile your program with the following command:

g++ -g myprogram.cpp -o myprogram
 

Once the program is compiled, you can run it using GDB by typing the following command:

gdb myprogram
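
Once the program is loaded in GDB, a handful of commands cover most sessions: break main sets a breakpoint, run starts the program, next and step advance line by line, print x inspects a variable or expression, backtrace shows the call stack, and quit exits GDB.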
diff --git a/Useful Tools/MIT-Missing-Semester/index.html b/Useful Tools/MIT-Missing-Semester/index.html
index e190a52..cc7b184 100755
--- a/Useful Tools/MIT-Missing-Semester/index.html	
+++ b/Useful Tools/MIT-Missing-Semester/index.html	
@@ -1,4 +1,4 @@
MIT: The Missing Semester of Your CS Education

Course Overview

  • Prerequisites: none
  • Programming language: shell
  • Difficulty: 🌟🌟
  • Estimated time: 10 hours

As the course name ("the missing semester of your CS education") suggests, this course teaches many tools and skills that university lectures never cover but that are incredibly important for every CSer, such as shell programming, command-line configuration, Git, Vim, tmux, ssh, and more. If you are completely new to computing, I strongly recommend taking this course, since it covers most of the must-learn tools in this book.

Besides MIT's official materials, the Turing Class at Peking University also offers a related course as part of its frontier computing practice curriculum; the materials are available on that website for reference.

Course Resources

GNU Make

1. Introduction

GNU Make is a tool for automating the build process of software projects. It is a command-line tool that reads build rules from a file called a Makefile and rebuilds only the targets whose prerequisites have changed. GNU Make is a cross-platform tool that can be used on Windows, Linux, and macOS.

2. Installation

GNU Make can be installed on Windows, Linux, and macOS using the following steps:

  1. Install GNU Make from your system's package manager (for example sudo apt-get install make on Ubuntu, via the Xcode command line tools or Homebrew on macOS, or via MSYS2/Chocolatey on Windows). The sources are published on the official website: https://www.gnu.org/software/make/

  2. Make sure the make executable is in a directory that is in the system PATH.

  3. Verify that GNU Make is installed by running make --version in the terminal or command prompt.

3. Usage

GNU Make builds, tests, and packages software projects when you run the make command in the directory that contains the project's Makefile. The make command takes the name of a target defined in the Makefile as an argument (run with no argument, it builds the first target). By convention, many projects define targets such as:

  • all: builds the software project (usually the default target).
  • test (or check): runs the test suite of the software project.
  • dist: packages the software project for distribution.

For example, to build a software project, run the following command:

make
 

To run the test suite of a software project, run the following command:

make test
 

To package a software project for distribution, run the following command:

make dist
 

4. Configuration

GNU Make is configured by writing a Makefile in the root directory of the software project. The Makefile contains the build rules: each rule names a target, the prerequisites it depends on, and a tab-indented recipe of shell commands that rebuild the target whenever a prerequisite changes.

Here is a minimal example Makefile with a single rule (the recipe line must start with a tab):

hello: hello.c
	cc -o hello hello.c
diff --git a/Useful Tools/Regex/index.html b/Useful Tools/Regex/index.html
index f65b84e..ab9afee 100755
--- a/Useful Tools/Regex/index.html	
+++ b/Useful Tools/Regex/index.html	
@@ -1,4 +1,4 @@
Regex

Regular Expressions

Regular expressions are a sequence of characters that define a search pattern. They are used to match, locate, and manipulate text. In Python, regular expressions are implemented using the re module.

Here are some examples of regular expressions:

  • r"hello\s+world": Matches the string "hello world" with any number of spaces between "hello" and "world".
  • r"\d+": Matches one or more digits.
  • r"\w+": Matches one or more word characters (letters, digits, and underscores).
  • r"[\w\s]+": Matches one or more word characters or spaces.
  • r"[\w.]+@[\w.]+\.\w{2,3}": Matches a simple email address with a username, domain name, and top-level domain.

Using Regular Expressions in Python

  1. Import the re module:
import re
 
  2. Use the re.search() function to search for a pattern in a string:
string = "The quick brown fox jumps over the lazy dog"
pattern = r"fox"
match = re.search(pattern, string)
if match:
    print(match.group())  # -> fox
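
Besides re.search, two other frequently used functions are re.findall and re.sub; a small sketch:

import re

text = "cat, hat, bat"
print(re.findall(r"\w+at", text))       # ['cat', 'hat', 'bat']
print(re.sub(r"\d+", "#", "room 101"))  # 'room #'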
diff --git a/Web Framework/Django/index.html b/Web Framework/Django/index.html
index 7b09273..3bb9b5c 100755
--- a/Web Framework/Django/index.html	
+++ b/Web Framework/Django/index.html	
@@ -1,4 +1,4 @@
Django

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, such as database abstraction, URL routing, and templating. Django's documentation is excellent and provides a comprehensive guide to its features and functionality.

Django is known for its ease of use, high-level abstractions, and reusability. It's also fast, scalable, and secure. With Django, you can quickly build complex web applications with reusable, well-documented code.

Django is open-source, which means that the community is constantly adding new features and functionality. This makes it a popular choice for building large-scale web applications.

Django is known for its ability to handle complex database relationships and queries, which makes it a popular choice for building complex web applications. It also has a strong community of developers who contribute to its development and documentation.

Django is also known for its support for internationalization and localization, which makes it a popular choice for building multilingual web applications. It also has a large and active developer community, which makes it a popular choice for building enterprise-level web applications.

Flask

Flask is a web framework written in Python. It is lightweight, easy to use, and provides a lot of features out of the box. It is also easy to learn and has a large community of developers who contribute to its development.
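
As a taste of how lightweight it is, here is a minimal sketch of a Flask app; the route, greeting, and port are arbitrary, and the flask package is assumed to be installed.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Respond to GET / with a plain-text greeting.
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(port=5000, debug=True)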

Flask

Flask is a lightweight web framework written in Python. Its core is intentionally minimal, with many features provided through extensions rather than out of the box. It is easy to learn and has a large community of developers who contribute to its development.
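As a quick illustration, a minimal Flask application looks like the following (a sketch; save it as app.py, run python app.py, then open http://127.0.0.1:5000/):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # starts a local development server on port 5000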

MkDocs

MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

Getting Started

To get started with MkDocs, follow these steps:

  1. Install MkDocs:
pip install mkdocs
+ MkDocs - Scheme MkDocs      

MkDocs

MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

Getting Started

To get started with MkDocs, follow these steps:

  1. Install MkDocs:
pip install mkdocs
 
  2. Create a new directory for your project:
mkdir myproject
 
  3. Initialize a new MkDocs project:
cd myproject
 mkdocs new .
diff --git a/Web Framework/React/index.html b/Web Framework/React/index.html
index b493902..482513c 100755
--- a/Web Framework/React/index.html	
+++ b/Web Framework/React/index.html	
@@ -1,4 +1,4 @@
- React - Scheme MkDocs      

React

React is a JavaScript library for building user interfaces. It is used for building complex and interactive web applications. React is a declarative, component-based library that simplifies the process of building user interfaces. It is easy to learn and use, and it has a large and active community of developers.

Getting Started

To get started with React, you can follow the steps below:

  1. Install Node.js and npm on your system.
  2. Create a new React project using the create-react-app command.
  3. Start the development server using the npm start command.
  4. Open the project in your preferred code editor.

React Components

React components are the building blocks of React applications. They are small, reusable pieces of code that can be used to create complex user interfaces. React components can be created using JavaScript or JSX. JSX is a syntax extension of JavaScript that allows you to write HTML-like code within JavaScript.

React Props

React props are the input data passed to a component. They are used to customize the behavior and appearance of a component. Props are passed to a component as JSX attributes and are read inside the component from the props object; they are read-only from the component's point of view.

React State

React state is data that is managed by a component and changes over time, such as user input or UI status. When state changes, React re-renders the component. In class components state is updated with the setState method; in function components it is managed with the useState hook.

React Life Cycle Methods

React life cycle methods are functions that are called at different stages of the component lifecycle. They are used to perform certain actions when a component is mounted, updated, or unmounted.

React Router

React Router is a library for handling client-side routing in React applications. It allows you to create dynamic and responsive web applications with ease. It provides a simple and declarative API for handling navigation in your application.

React Hooks

React hooks let function components use state and other React features without writing a class. Built-in hooks such as useState and useEffect cover the most common needs, and they can be combined into reusable custom hooks.

React

React is a JavaScript library for building user interfaces. It is used for building complex and interactive web applications. React is a declarative, component-based library that simplifies the process of building user interfaces. It is easy to learn and use, and it has a large and active community of developers.

Getting Started

To get started with React, you can follow the steps below:

  1. Install Node.js and npm on your system.
  2. Create a new React project using the create-react-app command.
  3. Start the development server using the npm start command.
  4. Open the project in your preferred code editor.

React Components

React components are the building blocks of React applications. They are small, reusable pieces of code that can be used to create complex user interfaces. React components can be created using JavaScript or JSX. JSX is a syntax extension of JavaScript that allows you to write HTML-like code within JavaScript.

React Props

React props are the input data passed to a component. They are used to customize the behavior and appearance of a component. Props are passed to a component as JSX attributes and are read inside the component from the props object; they are read-only from the component's point of view.

React State

React state is data that is managed by a component and changes over time, such as user input or UI status. When state changes, React re-renders the component. In class components state is updated with the setState method; in function components it is managed with the useState hook.

React Life Cycle Methods

React life cycle methods are functions that are called at different stages of the component lifecycle. They are used to perform certain actions when a component is mounted, updated, or unmounted.

React Router

React Router is a library for handling client-side routing in React applications. It allows you to create dynamic and responsive web applications with ease. It provides a simple and declarative API for handling navigation in your application.

React Hooks

React hooks let function components use state and other React features without writing a class. Built-in hooks such as useState and useEffect cover the most common needs, and they can be combined into reusable custom hooks.

Streamlit

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In this section, we will learn how to use Streamlit to create a simple web app that displays a list of movies and their ratings.

Prerequisites

  1. Python 3.6 or later
  2. Streamlit library

Step 1: Install Streamlit

To install Streamlit, run the following command in your terminal:

pip install streamlit
+ Streamlit - Scheme MkDocs      

Streamlit

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In this section, we will learn how to use Streamlit to create a simple web app that displays a list of movies and their ratings.

Prerequisites

  1. Python 3.6 or later
  2. Streamlit library

Step 1: Install Streamlit

To install Streamlit, run the following command in your terminal:

pip install streamlit
 

Step 2: Create a new Streamlit app

To create a new Streamlit app, run the following command in your terminal:

streamlit hello
 

This launches Streamlit's built-in demo app in your browser so you can check that the installation works. To build your own app, create a new file called app.py in your project directory and open it in your text editor.

Step 3: Add a list of movies and their ratings

To add a list of movies and their ratings, replace the code in app.py with the following:

import streamlit as st
 
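The app.py listing is cut off above. A minimal sketch of the movies-and-ratings app it describes might look like this (the movie titles and ratings are made up for illustration):

import pandas as pd
import streamlit as st

# Hypothetical sample data; replace it with your own movies and ratings.
movies = pd.DataFrame(
    {
        "Title": ["The Shawshank Redemption", "The Godfather", "Inception"],
        "Rating": [9.3, 9.2, 8.8],
    }
)

st.title("Movie Ratings")
st.write("A simple list of movies and their ratings:")
st.table(movies)

# Let the user filter by minimum rating.
min_rating = st.slider("Minimum rating", 0.0, 10.0, 8.0, step=0.1)
st.dataframe(movies[movies["Rating"] >= min_rating])

Save the file and run streamlit run app.py to view it in the browser.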
diff --git a/Web Framework/Vue/index.html b/Web Framework/Vue/index.html
index 460e0db..e1d4886 100755
--- a/Web Framework/Vue/index.html	
+++ b/Web Framework/Vue/index.html	
@@ -1,4 +1,4 @@
- Vue.js - Scheme MkDocs      

Vue.js

Vue.js is a progressive framework for building user interfaces. It is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or frameworks.

Getting Started

To get started with Vue.js, you can follow the official guide on their website:

Examples

Here are some examples of Vue.js applications:

Vue.js

Vue.js is a progressive framework for building user interfaces. It is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or frameworks.

Getting Started

To get started with Vue.js, you can follow the official guide on their website:

Examples

Here are some examples of Vue.js applications:

Welcome to MkDocs

For full documentation visit mkdocs.org.

Commands

  • mkdocs new [dir-name] - Create a new project.
  • mkdocs serve - Start the live-reloading docs server.
  • mkdocs build - Build the documentation site.
  • mkdocs -h - Print help message and exit.

Project layout

mkdocs.yml    # The configuration file.
+ Scheme MkDocs      

Welcome to MkDocs

For full documentation visit mkdocs.org.

Commands

  • mkdocs new [dir-name] - Create a new project.
  • mkdocs serve - Start the live-reloading docs server.
  • mkdocs build - Build the documentation site.
  • mkdocs -h - Print help message and exit.

Project layout

mkdocs.yml    # The configuration file.
 docs/
     index.md  # The documentation homepage.
     ...       # Other markdown pages, images and other files.
diff --git a/search/search_index.json b/search/search_index.json
index ee75119..4b81c57 100755
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\u200b\\u3000\\-\u3001\u3002\uff0c\uff0e\uff1f\uff01\uff1b]+","pipeline":["stemmer"]},"docs":[{"location":"","title":"Welcome to MkDocs","text":"

For full documentation visit mkdocs.org.

"},{"location":"#commands","title":"Commands","text":"
  • mkdocs new [dir-name] - Create a new project.
  • mkdocs serve - Start the live-reloading docs server.
  • mkdocs build - Build the documentation site.
  • mkdocs -h - Print help message and exit.
"},{"location":"#project-layout","title":"Project layout","text":"
mkdocs.yml    # The configuration file.\ndocs/\n    index.md  # The documentation homepage.\n    ...       # Other markdown pages, images and other files.\n
"},{"location":"Computer%20Network/CS144/","title":"CS144: Computer Network","text":""},{"location":"Computer%20Network/CS144/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aStanford
  • \u5148\u4fee\u8981\u6c42\uff1a\u4e00\u5b9a\u7684\u8ba1\u7b97\u673a\u7cfb\u7edf\u57fa\u7840\uff0cCS106L
  • \u7f16\u7a0b\u8bed\u8a00\uff1aC++
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a100 \u5c0f\u65f6

\u8fd9\u95e8\u8bfe\u7684\u4e3b\u8bb2\u4eba\u4e4b\u4e00\u662f\u7f51\u7edc\u9886\u57df\u7684\u5de8\u64d8 Nick McKeown \u6559\u6388\u3002\u8fd9\u4f4d\u62e5\u6709\u81ea\u5df1\u521b\u4e1a\u516c\u53f8\u7684\u5b66\u754c\u4e1a\u754c\u53cc\u5de8\u4f6c\u4f1a\u5728\u4ed6\u6155\u8bfe\u6bcf\u4e00\u7ae0\u8282\u7684\u6700\u540e\u91c7\u8bbf\u4e00\u4f4d\u4e1a\u754c\u7684\u9ad8\u7ba1\u6216\u8005\u5b66\u754c\u7684\u9ad8\u4eba\uff0c\u975e\u5e38\u5f00\u9614\u773c\u754c\u3002

\u5728\u8fd9\u95e8\u8bfe\u7684 Project \u4e2d\uff0c\u4f60\u5c06\u7528 C++ \u5faa\u5e8f\u6e10\u8fdb\u5730\u642d\u5efa\u51fa\u6574\u4e2a TCP/IP \u534f\u8bae\u6808\uff0c\u5b9e\u73b0 IP \u8def\u7531\u4ee5\u53ca ARP \u534f\u8bae\uff0c\u6700\u540e\u5229\u7528\u4f60\u81ea\u5df1\u7684\u534f\u8bae\u6808\u4ee3\u66ff Linux Kernel \u7684\u7f51\u7edc\u534f\u8bae\u6808\u548c\u5176\u4ed6\u5b66\u751f\u7684\u8ba1\u7b97\u673a\u8fdb\u884c\u901a\u4fe1\uff0c\u975e\u5e38 amazing\uff01

"},{"location":"Computer%20Network/CS144/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://cs144.github.io/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/watch?v=r2WZNaFyrbQ&list=PL6RdenZrxrw9inR-IJv-erlOKRHjymxMN
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u65e0
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttps://cs144.github.io/\uff0c8 \u4e2a Project \u5e26\u4f60\u5b9e\u73b0\u6574\u4e2a TCP/IP \u534f\u8bae\u6808
"},{"location":"Computer%20Network/CS144/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"
  • PKUFlyingPig
  • Lexssama's Blogs
  • huangrt01
  • kiprey
  • 康宇PL's Blog
  • doraemonzzz
  • ViXbob's libsponge
  • 吃着土豆坐地铁的博客
  • Smith
  • 星遥见
  • EIMadrigal
  • Joey
"},{"location":"Computer%20Vision/EECS-498/","title":"UMich EECS 498-007 / 598-005: Deep Learning for Computer Vision","text":""},{"location":"Computer%20Vision/EECS-498/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aUMich
  • \u5148\u4fee\u8981\u6c42\uff1aPython\u57fa\u7840\uff0c\u77e9\u9635\u8bba(\u719f\u6089\u77e9\u9635\u6c42\u5bfc\u5373\u53ef)\uff0c\u5fae\u79ef\u5206
  • \u7f16\u7a0b\u8bed\u8a00\uff1aPython
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a60\uff5e80 \u5c0f\u65f6

UMich's Computer Vision course. The lecture videos and assignments are of very high quality and cover a comprehensive set of topics; the assignments progress from easy to hard and span every stage of the development of mainstream CV models, making this an excellent introductory Computer Vision course.

In each assignment you follow the handouts to build and train the models/frameworks covered in the lectures.

You do not need any prior experience with deep learning frameworks: in the first assignment the course teaches every student how to use PyTorch from scratch, and the handouts can later serve as a reference you can consult at any time.

Since each assignment covers a different topic, the progressive assignments let you experience first-hand how mainstream CV models evolved and how different models and training methods affect the final results/accuracy, while also implementing them hands-on.

In A1 you will learn how to use PyTorch and Google Colab.

In A2 you will build a linear classifier and a two-layer neural network yourself, and finally get to work with the MNIST dataset, training and evaluating the network you built on it.

In A3 you will meet the most classic Convolutional Neural Network (a.k.a. CNN) and experience the appeal of convolutional networks first-hand.

In A4 you will go through the full workflow of building an object detection model, following the handout to implement the one-stage and two-stage detectors from two papers.

A5 is the moment to move from CNNs to RNNs: you will build two kinds of sequence models yourself, RNNs (vanilla RNN & LSTM) and the famous attention-based Transformer.

In the final assignment (A6) you will implement two fancier models, VAE and GAN, and apply them to the MNIST dataset. Finally, you will implement network visualization and style transfer, two very cool features.

Beyond the assignments, you can also do a mini-project of your own, building a complete deep learning pipeline from scratch; see the course homepage for details.

The course materials, such as the lectures/notes/assignments, are all open source. The only drawback is that the autograder is only available to enrolled UMich students, but since the provided *.ipynb files (i.e., the handouts) already let you verify the correctness of your implementation and the expected results, I personally think the missing autograder has no real impact.

It is worth mentioning that the lead instructor, Justin Johnson, is a PhD graduate of Fei-Fei Li and is now an Assistant Professor at UMich.

The open-sourced 2017 version of Stanford CS231n was also taught by Justin Johnson.

Since CS231n was largely built by Justin Johnson and Andrej Karpathy, this course reuses some CS231n materials, so students who have taken CS231n may find parts of it familiar.

Finally, I recommend that everyone who enrolls in this course watch the lectures on YouTube: Justin Johnson's teaching style and content are very clear and easy to follow, and they make an excellent reference.

"},{"location":"Computer%20Vision/EECS-498/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://web.eecs.umich.edu/~justincj/teaching/eecs498/WI2022/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/playlist?list=PL5-TkQAfAZFbzxjBHtzdVCWE0Zbhomg7r
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u4ec5\u6709\u63a8\u8350\u6559\u6750\uff0c\u94fe\u63a5\uff1ahttps://www.deeplearningbook.org/
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1a\u89c1\u8bfe\u7a0b\u4e3b\u9875\uff0c6 \u4e2a Assignment \u548c\u4e00\u4e2a Mini-Project
"},{"location":"Computer%20Vision/OpenCV/","title":"OpenCV","text":"

OpenCV (Open Source Computer Vision) is an open-source library of programming functions mainly aimed at real-time computer vision. It provides a wide range of tools for image processing, video capture and analysis, 3D reconstruction, object detection, and many other applications.

OpenCV is written in C/C++ and has bindings for Python, Java, and MATLAB. It is cross-platform and can run on Linux, Windows, and macOS.

OpenCV is widely used in academic and industrial research, including in fields such as computer vision, image processing, robotics, and artificial intelligence. It is also used in mobile and embedded devices, including in self-driving cars, drones, and security systems.

The library is free to use and is distributed under a permissive open-source license.

"},{"location":"Datebase%20Systems/CMU15-445/","title":"CMU 15-445: Database Systems","text":""},{"location":"Datebase%20Systems/CMU15-445/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aCMU
  • \u5148\u4fee\u8981\u6c42\uff1aC++\uff0c\u6570\u636e\u7ed3\u6784\u4e0e\u7b97\u6cd5\uff0cCMU 15-213 (A.K.A. CS:APP\uff0c\u8fd9\u4e5f\u662f CMU \u5185\u90e8\u5bf9\u6bcf\u5e74 Enroll \u540c\u5b66\u7684\u5148\u4fee\u8981\u6c42)
  • \u7f16\u7a0b\u8bed\u8a00\uff1aC++
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a100 \u5c0f\u65f6

\u4f5c\u4e3a CMU \u6570\u636e\u5e93\u7684\u5165\u95e8\u8bfe\uff0c\u8fd9\u95e8\u8bfe\u7531\u6570\u636e\u5e93\u9886\u57df\u7684\u5927\u725b Andy Pavlo \u8bb2\u6388\uff08\u201c\u8fd9\u4e2a\u4e16\u754c\u4e0a\u6211\u53ea\u5728\u4e4e\u4e24\u4ef6\u4e8b\uff0c\u4e00\u662f\u6211\u7684\u8001\u5a46\uff0c\u4e8c\u5c31\u662f\u6570\u636e\u5e93\u201d\uff09\u3002

\u8fd9\u662f\u4e00\u95e8\u8d28\u91cf\u6781\u9ad8\uff0c\u8d44\u6e90\u6781\u9f50\u5168\u7684 Database \u5165\u95e8\u8bfe\uff0c\u8fd9\u95e8\u8bfe\u7684 Faculty \u548c\u80cc\u540e\u7684 CMU Database Group \u5c06\u8bfe\u7a0b\u5bf9\u5e94\u7684\u57fa\u7840\u8bbe\u65bd (Autograder, Discord) \u548c\u8bfe\u7a0b\u8d44\u6599 (Lectures, Notes, Homework) \u5b8c\u5168\u5f00\u6e90\uff0c\u8ba9\u6bcf\u4e00\u4e2a\u613f\u610f\u5b66\u4e60\u6570\u636e\u5e93\u7684\u540c\u5b66\u90fd\u53ef\u4ee5\u4eab\u53d7\u5230\u51e0\u4e4e\u7b49\u540c\u4e8e CMU \u672c\u6821\u5b66\u751f\u7684\u8bfe\u7a0b\u4f53\u9a8c\u3002

\u8fd9\u95e8\u8bfe\u7684\u4eae\u70b9\u5728\u4e8e CMU Database Group \u4e13\u95e8\u4e3a\u6b64\u8bfe\u5f00\u53d1\u4e86\u4e00\u4e2a\u6559\u5b66\u7528\u7684\u5173\u7cfb\u578b\u6570\u636e\u5e93 bustub\uff0c\u5e76\u8981\u6c42\u4f60\u5bf9\u8fd9\u4e2a\u6570\u636e\u5e93\u7684\u7ec4\u6210\u90e8\u5206\u8fdb\u884c\u4fee\u6539\uff0c\u5b9e\u73b0\u4e0a\u8ff0\u90e8\u4ef6\u7684\u529f\u80fd\u3002

\u5177\u4f53\u6765\u8bf4\uff0c\u5728 15-445 \u4e2d\u4f60\u9700\u8981\u5728\u56db\u4e2a Project \u7684\u63a8\u8fdb\u4e2d\uff0c\u5b9e\u73b0\u4e00\u4e2a\u9762\u5411\u78c1\u76d8\u7684\u4f20\u7edf\u5173\u7cfb\u578b\u6570\u636e\u5e93 Bustub \u4e2d\u7684\u90e8\u5206\u5173\u952e\u7ec4\u4ef6\u3002

\u5305\u62ec Buffer Pool Manager (\u5185\u5b58\u7ba1\u7406), B Plus Tree (\u5b58\u50a8\u5f15\u64ce), Query Executors & Query Optimizer (\u7b97\u5b50\u4eec & \u4f18\u5316\u5668), Concurrency Control (\u5e76\u53d1\u63a7\u5236)\uff0c\u5206\u522b\u5bf9\u5e94 Project #1 \u5230 Project #4\u3002

\u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c\u540c\u5b66\u4eec\u5728\u5b9e\u73b0\u7684\u8fc7\u7a0b\u4e2d\u53ef\u4ee5\u901a\u8fc7 shell.cpp \u7f16\u8bd1\u51fa bustub-shell \u6765\u5b9e\u65f6\u5730\u89c2\u6d4b\u81ea\u5df1\u5b9e\u73b0\u90e8\u4ef6\u7684\u6b63\u786e\u4e0e\u5426\uff0c\u6b63\u53cd\u9988\u975e\u5e38\u8db3\u3002

\u6b64\u5916 bustub \u4f5c\u4e3a\u4e00\u4e2a C++ \u7f16\u5199\u7684\u4e2d\u5c0f\u578b\u9879\u76ee\u6db5\u76d6\u4e86\u7a0b\u5e8f\u6784\u5efa\u3001\u4ee3\u7801\u89c4\u8303\u3001\u5355\u5143\u6d4b\u8bd5\u7b49\u4f17\u591a\u8981\u6c42\uff0c\u53ef\u4ee5\u4f5c\u4e3a\u4e00\u4e2a\u4f18\u79c0\u7684\u5f00\u6e90\u9879\u76ee\u5b66\u4e60\u3002

"},{"location":"Datebase%20Systems/CMU15-445/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1aFall 2019, Fall 2020, Fall 2021, Fall 2022, Spring 2023, Fall 2023, Spring 2024
  • \u8bfe\u7a0b\u89c6\u9891\uff1a\u8bfe\u7a0b\u7f51\u7ad9\u514d\u8d39\u89c2\u770b, Fall 2023 \u7684 Youtube \u5168\u5f00\u6e90 Lectures
  • \u8bfe\u7a0b\u6559\u6750\uff1aDatabase System Concepts
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1a5 \u4e2a Project \u548c 5 \u4e2a Homework

\u5728 Fall 2019 \u4e2d\uff0cProject #2 \u662f\u505a\u54c8\u5e0c\u7d22\u5f15\uff0cProject #4 \u662f\u505a\u65e5\u5fd7\u4e0e\u6062\u590d\u3002

\u5728 Fall 2020 \u4e2d\uff0cProject #2 \u662f\u505a B \u6811\uff0cProject #4 \u662f\u505a\u5e76\u53d1\u63a7\u5236\u3002

\u5728 Fall 2021 \u4e2d\uff0cProject #1 \u662f\u505a\u7f13\u5b58\u6c60\u7ba1\u7406\uff0cProject #2 \u662f\u505a\u54c8\u5e0c\u7d22\u5f15\uff0cProject #4 \u662f\u505a\u5e76\u53d1\u63a7\u5236\u3002

\u5728 Fall 2022 \u4e2d\uff0c\u4e0e Fall 2021 \u76f8\u6bd4\u53ea\u6709\u54c8\u5e0c\u7d22\u5f15\u6362\u6210\u4e86 B+ \u6811\u7d22\u5f15\uff0c\u5176\u4f59\u90fd\u4e00\u6837\u3002

\u5728 Spring 2023 \u4e2d\uff0c\u5927\u4f53\u5185\u5bb9\u548c Fall 2022 \u4e00\u6837\uff08\u7f13\u5b58\u6c60\uff0cB+ \u6811\u7d22\u5f15\uff0c\u7b97\u5b50\uff0c\u5e76\u53d1\u63a7\u5236\uff09\uff0c\u53ea\u4e0d\u8fc7 Project #0 \u6362\u6210\u4e86 Copy-On-Write Trie\uff0c\u540c\u65f6\u589e\u52a0\u4e86\u5f88\u597d\u73a9\u7684\u6ce8\u518c\u5927\u5c0f\u5199\u51fd\u6570\u7684 Task\uff0c\u53ef\u4ee5\u76f4\u63a5\u5728\u7f16\u8bd1\u51fa\u7684 bustub-shell \u4e2d\u770b\u5230\u81ea\u5df1\u5199\u7684\u51fd\u6570\u7684\u5b9e\u9645\u6548\u679c\uff0c\u975e\u5e38\u6709\u6210\u5c31\u611f\u3002

\u503c\u5f97\u6ce8\u610f\u7684\u662f\uff0c\u73b0\u5728 bustub \u5728 2020 \u5e74\u4ee5\u524d\u7684 version \u90fd\u5df2\u7ecf\u505c\u6b62\u7ef4\u62a4\u3002

Fall 2019 \u7684\u6700\u540e\u4e00\u4e2a Logging & Recovery \u7684 Project \u5df2\u7ecf broken \u4e86\uff08\u572819\u5e74\u7684 git head \u4e0a\u4e5f\u8bb8\u8fd8\u53ef\u4ee5\u8dd1\uff0c\u4f46\u5c3d\u7ba1\u5982\u6b64 Gradescope \u5e94\u8be5\u4e5f\u6ca1\u6709\u63d0\u4f9b\u516c\u5171\u7684\u7248\u672c\uff0c\u6240\u4ee5\u5e76\u4e0d\u63a8\u8350\u5927\u5bb6\u53bb\u505a\uff0c\u53ea\u770b\u770b\u4ee3\u7801\u548c Handout \u5c31\u53ef\u4ee5\u4e86\uff09\u3002

\u6216\u8bb8\u5728 Fall 2023 \u7684\u7248\u672c Recovery \u76f8\u5173\u7684\u529f\u80fd\u4f1a\u88ab\u4fee\u590d\uff0c\u5c4a\u65f6\u4e5f\u53ef\u80fd\u6709\u5168\u65b0\u7684 Recovery Project\uff0c\u8ba9\u6211\u4eec\u8bd5\u76ee\u4ee5\u5f85\u5427\ud83e\udd2a

\u5982\u679c\u5927\u5bb6\u6709\u7cbe\u529b\u7684\u8bdd\u53ef\u4ee5\u90fd\u53bb\u5c1d\u8bd5\u4e00\u4e0b\uff0c\u6216\u8005\u5728\u5bf9\u4e66\u4e2d\u5185\u5bb9\u7406\u89e3\u4e0d\u662f\u5f88\u900f\u5f7b\u7684\u65f6\u5019\uff0c\u5c1d\u8bd5\u505a\u4e00\u505a\u5bf9\u5e94\u7684 Project \u4f1a\u52a0\u6df1\u4f60\u7684\u7406\u89e3\uff08\u4e2a\u4eba\u5efa\u8bae\u8fd8\u662f\u8981\u5168\u90e8\u505a\u5b8c\uff0c\u76f8\u4fe1\u4e00\u5b9a\u5bf9\u4f60\u6709\u5e2e\u52a9\uff09\u3002

\u6b64\u5916\uff0cCMU\u6570\u636e\u5e93\u56e2\u961f\u8fd8\u6709\u4e00\u4e2aDB with ML\u7684\u516c\u5f00\u8bb2\u5ea7\u7cfb\u5217\uff1aML\u21c4DB Seminar Series

"},{"location":"Datebase%20Systems/CMU15-445/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"

\u975e\u5b98\u65b9\u7684 Discord \u662f\u4e00\u4e2a\u5f88\u597d\u7684\u4ea4\u6d41\u5e73\u53f0\uff0c\u8fc7\u5f80\u7684\u804a\u5929\u8bb0\u5f55\u51e0\u4e4e\u8bb0\u8f7d\u4e86\u5176\u4ed6\u540c\u5b66\u8e29\u8fc7\u7684\u5751\uff0c\u4f60\u4e5f\u53ef\u4ee5\u63d0\u51fa\u4f60\u7684\u95ee\u9898\uff0c\u6216\u8005\u5e2e\u5fd9\u89e3\u7b54\u522b\u4eba\u7684\u95ee\u9898\uff0c\u76f8\u4fe1\u8fd9\u662f\u4e00\u4efd\u5f88\u597d\u7684\u53c2\u8003\u3002

\u5173\u4e8e Spring 2023 \u7684\u901a\u5173\u6307\u5357\uff0c\u53ef\u4ee5\u53c2\u8003 @xzhseh \u7684\u8fd9\u7bc7CMU 15-445/645 (Spring 2023) Database Systems \u901a\u5173\u6307\u5317\uff0c\u91cc\u9762\u6db5\u76d6\u4e86\u5168\u90e8\u4f60\u9700\u8981\u7684\u901a\u5173\u9053\u5177\uff0c\u548c\u901a\u5173\u65b9\u5f0f\u5efa\u8bae\uff0c\u4ee5\u53ca\u6700\u91cd\u8981\u7684\uff0c\u6211\u81ea\u5df1\u5728\u505a Project \u7684\u8fc7\u7a0b\u4e2d\u9047\u5230\u7684\uff0c\u770b\u5230\u7684\uff0c\u548c\u81ea\u5df1\u4eb2\u81ea\u8e29\u8fc7\u7684\u5751\u3002

@ysj1173886760 \u5728\u5b66\u4e60\u8fd9\u95e8\u8bfe\u4e2d\u7528\u5230\u7684\u6240\u6709\u8d44\u6e90\u548c\u4f5c\u4e1a\u5b9e\u73b0\u90fd\u6c47\u603b\u5728 ysj1173886760/Learning: db - GitHub \u4e2d\u3002

\u7531\u4e8e Andy \u7684\u8981\u6c42\uff0c\u4ed3\u5e93\u4e2d\u6ca1\u6709 Project \u7684\u5b9e\u73b0\uff0c\u53ea\u6709 Homework \u7684 Solution\u3002\u7279\u522b\u7684\uff0c\u5bf9\u4e8e Homework1\uff0c@ysj1173886760 \u8fd8\u5199\u4e86\u4e00\u4e2a Shell \u811a\u672c\u6765\u5e2e\u5927\u5bb6\u6267\u884c\u81ea\u52a8\u5224\u5206\u3002

\u53e6\u5916\u5728\u8bfe\u7a0b\u7ed3\u675f\u540e\uff0c\u63a8\u8350\u9605\u8bfb\u4e00\u7bc7\u8bba\u6587 Architecture Of a Database System\uff0c\u5bf9\u5e94\u7684\u4e2d\u6587\u7248\u4e5f\u5728\u4e0a\u8ff0\u4ed3\u5e93\u4e2d\u3002\u8bba\u6587\u91cc\u7efc\u8ff0\u4e86\u6570\u636e\u5e93\u7cfb\u7edf\u7684\u6574\u4f53\u67b6\u6784\uff0c\u8ba9\u5927\u5bb6\u53ef\u4ee5\u5bf9\u6570\u636e\u5e93\u6709\u4e00\u4e2a\u66f4\u52a0\u5168\u9762\u7684\u89c6\u91ce\u3002

"},{"location":"Datebase%20Systems/CMU15-445/#_4","title":"\u540e\u7eed\u8bfe\u7a0b","text":"

CMU15-721 \u4e3b\u8981\u8bb2\u4e3b\u5b58\u6570\u636e\u5e93\u6709\u5173\u7684\u5185\u5bb9\uff0c\u6bcf\u8282\u8bfe\u90fd\u6709\u5bf9\u5e94\u7684 paper \u8981\u8bfb\uff0c\u63a8\u8350\u7ed9\u5e0c\u671b\u8fdb\u9636\u6570\u636e\u5e93\u7684\u5c0f\u4f19\u4f34\u3002@ysj1173886760 \u76ee\u524d\u4e5f\u5728\u8ddf\u8fdb\u8fd9\u95e8\u8bfe\uff0c\u5b8c\u6210\u540e\u4f1a\u5728\u8fd9\u91cc\u63d0 PR \u4ee5\u63d0\u4f9b\u8fdb\u9636\u7684\u6307\u5bfc\u3002

"},{"location":"Deep%20Learning/CS224n/","title":"CS224n: Natural Language Processing","text":""},{"location":"Deep%20Learning/CS224n/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aStanford
  • \u5148\u4fee\u8981\u6c42\uff1a\u6df1\u5ea6\u5b66\u4e60\u57fa\u7840 + Python
  • \u7f16\u7a0b\u8bed\u8a00\uff1aPython
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a80 \u5c0f\u65f6

Stanford \u7684 NLP \u5165\u95e8\u8bfe\u7a0b\uff0c\u7531\u81ea\u7136\u8bed\u8a00\u5904\u7406\u9886\u57df\u7684\u5de8\u4f6c Chris Manning \u9886\u8854\u6559\u6388\uff08word2vec \u7b97\u6cd5\u7684\u5f00\u521b\u8005\uff09\u3002\u5185\u5bb9\u8986\u76d6\u4e86\u8bcd\u5411\u91cf\u3001RNN\u3001LSTM\u3001Seq2Seq \u6a21\u578b\u3001\u673a\u5668\u7ffb\u8bd1\u3001\u6ce8\u610f\u529b\u673a\u5236\u3001Transformer \u7b49\u7b49 NLP \u9886\u57df\u7684\u6838\u5fc3\u77e5\u8bc6\u70b9\u3002

5 \u4e2a\u7f16\u7a0b\u4f5c\u4e1a\u96be\u5ea6\u5faa\u5e8f\u6e10\u8fdb\uff0c\u5206\u522b\u662f\u8bcd\u5411\u91cf\u3001word2vec \u7b97\u6cd5\u3001Dependency parsing\u3001\u673a\u5668\u7ffb\u8bd1\u4ee5\u53ca Transformer \u7684 fine-tune\u3002

\u6700\u7ec8\u7684\u5927\u4f5c\u4e1a\u662f\u5728 Stanford \u8457\u540d\u7684 SQuAD \u6570\u636e\u96c6\u4e0a\u8bad\u7ec3 QA \u6a21\u578b\uff0c\u6709\u5b66\u751f\u7684\u5927\u4f5c\u4e1a\u751a\u81f3\u76f4\u63a5\u53d1\u8868\u4e86\u9876\u4f1a\u8bba\u6587\u3002

"},{"location":"Deep%20Learning/CS224n/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttp://web.stanford.edu/class/cs224n/index.html
  • \u8bfe\u7a0b\u89c6\u9891\uff1aB \u7ad9\u641c\u7d22 CS224n
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u65e0
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttp://web.stanford.edu/class/cs224n/index.html\uff0c5 \u4e2a\u7f16\u7a0b\u4f5c\u4e1a + 1 \u4e2a Final Project
"},{"location":"Deep%20Learning/CS224n/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"

@PKUFlyingPig \u5728\u5b66\u4e60\u8fd9\u95e8\u8bfe\u4e2d\u7528\u5230\u7684\u6240\u6709\u8d44\u6e90\u548c\u4f5c\u4e1a\u5b9e\u73b0\u90fd\u6c47\u603b\u5728 PKUFlyingPig/CS224n - GitHub \u4e2d\u3002

"},{"location":"Functional%20Programming/Haskell/","title":"Haskell","text":"

Haskell is a purely functional programming language known for its strong static type system and lazy evaluation. It is used in industry for building reliable, large-scale software systems.

"},{"location":"Functional%20Programming/Haskell/#resources","title":"Resources","text":"
  • Course website: https://haskell.mooc.fi/
  • Assignments: https://github.com/moocfi/haskell-mooc
  • Community: https://t.me/haskell_mooc_fi
"},{"location":"Functional%20Programming/Lean4/","title":"Lean4","text":"

Lean 4 is a functional programming language and interactive theorem prover based on dependent type theory, originally developed at Microsoft Research. It is designed to be usable both as an efficient general-purpose programming language and as a proof assistant, with an emphasis on performance and extensibility.

"},{"location":"Functional%20Programming/Lean4/#resources","title":"Resources","text":"
  • Lean4 Documentation
  • Lean4 Tutorial
  • Lean4 Book
  • Lean4 Docs
"},{"location":"LLM%20Development/LangChain/","title":"LangChain","text":"

LangChain is an open-source framework for developing applications powered by large language models (LLMs).

It provides modular building blocks such as prompt templates, chains, agents, memory, and output parsers, together with integrations for model providers, vector stores, and document loaders. The framework is designed to be extensible, so new models, tools, and data sources can be plugged in without changing application code.

LangChain is community-driven: the code and documentation are developed in the open, and anyone can contribute new integrations, tools, or examples.

Official libraries are available for Python and JavaScript/TypeScript, and the documentation includes tutorials and how-to guides for common patterns such as retrieval-augmented generation (RAG), chatbots, and tool-using agents.

"},{"location":"LLM%20Development/Llama2/","title":"Llama2","text":""},{"location":"LLM%20Development/Llama2/#localizing-llama2","title":"Localizing Llama2","text":"

You can refer to the following materials to deploy Llama 2 locally:

  • LLM探索：环境搭建与模型本地部署
  • 本地部署开源大模型的完整教程：LangChain + Streamlit + Llama
"},{"location":"LLM%20Development/Llama2/#introduction","title":"Introduction","text":"

Llama 2 is the second generation of Llama, Meta's family of open-weight large language models. It is released in three sizes (7B, 13B, and 70B parameters), each available both as a base pretrained model and as a chat-tuned variant (Llama 2-Chat) fine-tuned with supervised learning and RLHF for dialogue use cases.

Compared with the first Llama release, Llama 2 is trained on more data (about 2 trillion tokens) and supports a longer context window of 4096 tokens. Its license permits both research and commercial use, subject to Meta's license conditions and acceptable-use policy.

Because the weights are openly available, Llama 2 can be run locally or in the cloud with runtimes such as Hugging Face Transformers and llama.cpp, which makes it a popular base model for building and self-hosting LLM applications.

"},{"location":"LLM%20Development/Llama2/#deploy","title":"Deploy","text":"

To deploy Llama 2 locally, the typical steps are (a minimal Python sketch follows the list):

  1. Request access to the Llama 2 weights (from Meta or via the gated Hugging Face repositories) and download them.
  2. Load the model with an inference runtime such as Hugging Face Transformers or llama.cpp, choosing a model size and quantization that fit your hardware.
  3. Send prompts to the model and generate text, either from a script or behind an API or web UI.
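A minimal sketch of steps 2 and 3 using Hugging Face Transformers (assumptions: the transformers, torch, and accelerate packages are installed, you have been granted access to the gated meta-llama/Llama-2-7b-chat-hf repository, and a GPU with enough memory is available):

import torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nmodel_id = \"meta-llama/Llama-2-7b-chat-hf\"  # gated repo; requires approved access\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    torch_dtype=torch.float16,  # half precision to fit on smaller GPUs\n    device_map=\"auto\",          # let accelerate place weights on available devices\n)\n\nprompt = \"Explain what a B+ tree is in two sentences.\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\noutputs = model.generate(**inputs, max_new_tokens=128)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\n

For CPU-only or low-memory machines, a quantized GGUF build of the same model run through llama.cpp is the usual alternative.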
"},{"location":"LLM%20Development/Llama2/#features","title":"Features","text":"

Compared with the original Llama release, the key characteristics of Llama 2 are:

  1. Three model sizes: 7B, 13B, and 70B parameters.
  2. Chat-tuned variants (Llama 2-Chat) aligned with supervised fine-tuning and RLHF for dialogue use cases.
  3. Pretraining on roughly 2 trillion tokens of publicly available data.
  4. A 4096-token context window, double that of the original Llama.
  5. Grouped-query attention in the 70B model for faster inference.
  6. Openly downloadable weights that can be further fine-tuned on custom data (for example with LoRA/QLoRA).
  7. Broad ecosystem support, including Hugging Face Transformers, llama.cpp (quantized GGUF inference on CPUs), and common inference servers.
  8. A license that allows both research and commercial use, subject to Meta's license terms and acceptable-use policy.
"},{"location":"LLM%20Development/Llama2/#conclusion","title":"Conclusion","text":"

Llama 2 brought open-weight models much closer to the quality of proprietary chat models, and its license makes self-hosted LLM applications practical for many teams.

If you want to build on it, start with the chat variant that fits your hardware, evaluate it on your own task, and fine-tune or quantize it as needed; the localization guides linked at the top of this page walk through a LangChain + Streamlit setup in more detail.

"},{"location":"Machine%20Learning/CS189/","title":"CS189: Introduction to Machine Learning","text":""},{"location":"Machine%20Learning/CS189/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aUC Berkeley
  • \u5148\u4fee\u8981\u6c42\uff1aCS188, CS70
  • \u7f16\u7a0b\u8bed\u8a00\uff1aPython
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a100 \u5c0f\u65f6

\u8fd9\u95e8\u8bfe\u6211\u6ca1\u6709\u7cfb\u7edf\u4e0a\u8fc7\uff0c\u53ea\u662f\u628a\u5b83\u7684\u8bfe\u7a0b notes \u4f5c\u4e3a\u5de5\u5177\u4e66\u67e5\u9605\u3002\u4e0d\u8fc7\u4ece\u8bfe\u7a0b\u7f51\u7ad9\u4e0a\u6765\u770b\uff0c\u5b83\u6bd4 CS229 \u597d\u7684\u662f\u5f00\u6e90\u4e86\u6240\u6709 homework \u7684\u4ee3\u7801\u4ee5\u53ca gradescope \u7684 autograder\u3002\u540c\u6837\uff0c\u8fd9\u95e8\u8bfe\u8bb2\u5f97\u76f8\u5f53\u7406\u8bba\u4e14\u6df1\u5165\u3002

"},{"location":"Machine%20Learning/CS189/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://www.eecs189.org/
  • \u8bfe\u7a0b\u5b89\u6392\uff1ahttps://people.eecs.berkeley.edu/~jrs/189/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/playlist?list=PLCuQm2FL98HTlRmlwMk2AuFEM9n1c06HE
  • \u8bfe\u7a0b\u6559\u6750\uff1ahttps://www.eecs189.org/
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttps://www.eecs189.org/
"},{"location":"OOP/C%2B%2B/","title":"C++","text":"

This section mainly introduces some of the newer features of C++.

"},{"location":"OOP/C%2B%2B/#_1","title":"Move Semantics","text":"

For move semantics, you can refer to the following material:

  • 移动语义一文入魂
"},{"location":"OOP/C%2B%2B/#modern-c","title":"Modern C++ Language Features","text":"
  • CppCoreGuidelines
  • Modern C++
"},{"location":"OOP/Java/","title":"Java","text":"

Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

"},{"location":"Potpourri/CUDA/","title":"CUDA","text":"

CUDA is a parallel computing platform and programming model created by Nvidia. It allows developers to write high-performance parallel applications in C/C++ (via CUDA C/C++) and Fortran (via CUDA Fortran), and provides a rich set of APIs for parallel thread execution, memory management, and device management. The toolkit includes the nvcc compiler toolchain, which compiles device code for NVIDIA GPUs and host code for x86-64, Arm, and PowerPC platforms. CUDA is widely used in scientific computing, graphics processing, and machine learning applications.

The CUDA Toolkit is a free download for Windows and Linux (macOS support was discontinued after CUDA 10.2), and GPU instances with CUDA preinstalled are available on cloud platforms such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

CUDA is a powerful tool for parallel computing and a good choice for developers who need to accelerate compute-intensive workloads on NVIDIA GPUs.

"},{"location":"Potpourri/CUDA/#cuda-programming-model","title":"CUDA Programming Model","text":"

The CUDA programming model is based on C/C++ with a small set of extensions, known as CUDA C/C++. Functions intended for the GPU, called kernels, are written in CUDA C/C++, compiled by nvcc, and launched from ordinary host code; the resulting executable contains both the CPU and the GPU parts of the program.

The CUDA programming model consists of two kinds of code:

  • Host code: runs on the CPU. It allocates device memory, copies data between host and device, and launches kernels.
  • Device code: runs on the GPU. A kernel is executed by many threads in parallel, organized into blocks and grids (a Python illustration follows this list).
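Although the section above is about CUDA C/C++, the same host/device split can be illustrated from Python using Numba's CUDA bindings (an illustrative sketch; it assumes numba, numpy, and a CUDA-capable GPU with the driver installed):

import numpy as np\nfrom numba import cuda\n\n@cuda.jit\ndef vector_add(a, b, out):\n    i = cuda.grid(1)  # global thread index, computed on the device\n    if i < out.size:  # guard against threads past the end of the array\n        out[i] = a[i] + b[i]\n\nn = 1_000_000\na = np.random.rand(n).astype(np.float32)\nb = np.random.rand(n).astype(np.float32)\nout = np.zeros_like(a)\n\nthreads_per_block = 256\nblocks = (n + threads_per_block - 1) // threads_per_block\n\n# Host code: copy data to the GPU, launch the kernel, copy the result back\nd_a, d_b, d_out = cuda.to_device(a), cuda.to_device(b), cuda.to_device(out)\nvector_add[blocks, threads_per_block](d_a, d_b, d_out)\nprint(d_out.copy_to_host()[:5])\n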
"},{"location":"Potpourri/CUDA/#c-and-cuda-cc","title":"C++ and CUDA C/C++","text":"

C++ and CUDA C/C++ are closely related: CUDA C/C++ is essentially C++ extended with a few constructs for parallel computing, such as the kernel launch syntax and function and memory-space qualifiers.

Host code is ordinary C++ that can be built with a regular compiler, while device code is written in the CUDA C/C++ subset and compiled by nvcc into GPU code that is embedded in the final executable.

"},{"location":"Potpourri/CUDA/#cuda-libraries","title":"CUDA Libraries","text":"

CUDA ships with a rich set of libraries that can be used to develop parallel applications. These include the CUDA Runtime and Driver APIs, math libraries such as cuBLAS (dense linear algebra), cuFFT (fast Fourier transforms), cuRAND (random number generation), and cuSPARSE (sparse linear algebra), the Thrust C++ template library for parallel algorithms, NPP for image and signal processing, and interoperability interfaces for graphics APIs such as OpenGL, Direct3D, and Vulkan. Separate downloads such as cuDNN build on CUDA to provide deep-learning primitives.

CUDA libraries are designed to work with CUDA C/C++ and provide a set of APIs for parallel programming. These libraries can be used to develop parallel applications that can run on a GPU. CUDA libraries can be used to optimize performance, reduce memory usage, and improve application performance.

"},{"location":"Potpourri/CUDA/#cuda-programming-tools","title":"CUDA Programming Tools","text":"

CUDA provides a set of tools for developing and debugging CUDA applications, including the nvcc compiler, the cuda-gdb debugger, the Nsight Systems and Nsight Compute profilers, and Compute Sanitizer (the successor to cuda-memcheck) for detecting memory and race errors.

nvcc compiles CUDA C/C++ code into executables containing both host and device code. cuda-gdb debugs device code much like gdb debugs host code. Nsight Systems profiles whole-application timelines, while Nsight Compute analyzes individual kernels. Compute Sanitizer detects memory errors and data races in device code. On the library side, Thrust provides parallel algorithms in C++, and projects such as Numba and CUDA Python expose GPU programming from Python.

CUDA tools can be used to develop and debug CUDA applications. They can help to identify and fix errors in CUDA applications, optimize performance, and reduce memory usage.

"},{"location":"Potpourri/Huggingface/","title":"Huggingface","text":"

Hugging Face's transformers library is a popular NLP library that provides many pre-trained models for tasks such as text classification, named entity recognition, and question answering. It offers pipelines for quick inference and a Trainer API for fine-tuning models on custom datasets.

Here is an example of using a pre-trained sentiment-analysis pipeline on a few custom sentences:

from transformers import pipeline\n\n# Load a pre-trained sentiment-analysis pipeline (downloads a default model on first use)\nclassifier = pipeline(\"sentiment-analysis\")\n\n# A few custom example sentences\nsentences = [\n    \"I love this movie\",\n    \"This is a terrible movie\",\n    \"I don't care\",\n    \"I'm so happy today\",\n]\n\n# Run inference on all the examples at once\nresults = classifier(sentences)\nfor sentence, result in zip(sentences, results):\n    print(sentence, \"->\", result[\"label\"], round(result[\"score\"], 3))\n

This code loads a pre-trained sentiment-analysis model and runs it on the example sentences. For each sentence the pipeline returns a dictionary with the predicted label (POSITIVE or NEGATIVE for the default model) and its score.
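Actual fine-tuning on a custom dataset goes through the Trainer API rather than the pipeline. A minimal sketch (the model name and the tiny in-memory dataset below are illustrative assumptions; install transformers, datasets, and accelerate first):

from datasets import Dataset\nfrom transformers import (\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    Trainer,\n    TrainingArguments,\n)\n\n# A tiny labeled dataset, just to show the API (1 = positive, 0 = negative)\ndata = {\n    \"text\": [\"I love this movie\", \"This is a terrible movie\", \"I hate it\", \"I'm so happy today\"],\n    \"label\": [1, 0, 0, 1],\n}\n\nmodel_name = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)\n\nds = Dataset.from_dict(data)\nds = ds.map(lambda ex: tokenizer(ex[\"text\"], truncation=True, padding=\"max_length\", max_length=32))\n\nargs = TrainingArguments(output_dir=\"out\", num_train_epochs=1, per_device_train_batch_size=2)\ntrainer = Trainer(model=model, args=args, train_dataset=ds)\ntrainer.train()\n

After training, the fine-tuned model can be saved with trainer.save_model() and wrapped in a pipeline for inference.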

You can also use Huggingface to perform other NLP tasks such as text classification, named entity recognition, and question answering.

"},{"location":"Potpourri/Manim/","title":"Manim","text":""},{"location":"Potpourri/Manim/#introduction","title":"Introduction","text":"

Manim is a Python library for creating mathematical animations, which is based on the idea of creating mathematical objects and transforming them over time. It is an open-source project and is maintained by the community. It is used to create visualizations, simulations, and animations for a wide range of applications, including computer science, mathematics, physics, and more.

"},{"location":"Potpourri/Manim/#installation","title":"Installation","text":"

To install Manim, you need to have Python installed on your system. You can download and install Python from the official website. Once you have Python installed, you can install Manim using the following command:

pip install manim\n

This will install Manim and all its dependencies.

"},{"location":"Potpourri/Manim/#creating-animations","title":"Creating Animations","text":"

To create an animation, you need to create a Python file and use the Scene class from Manim. Here is an example:

from manim import *\n\nclass SquareToCircle(Scene):\n    def construct(self):\n        square = Square()\n        circle = Circle()\n        self.play(Transform(square, circle))\n
  1. The Scene class is imported from Manim.
  2. A new class called SquareToCircle is created which inherits from the Scene class.
  3. The construct method is defined which is the entry point for the animation.
  4. Two objects, a square and a circle, are created.
  5. The Transform animation is played, which transforms the square into the circle.
"},{"location":"Potpourri/Manim/#running-animations","title":"Running Animations","text":"

To run the animation, you need to save the Python file and run it using the following command:

manim example.py SquareToCircle\n

This will run the animation and save the output as a video file. You can specify the resolution, frame rate, and other options using the command line arguments.

"},{"location":"Potpourri/Manim/#resources","title":"Resources","text":"
  • Manim Website
  • Manim Documentation
  • Manim GitHub Repository
"},{"location":"Potpourri/SocketIO/","title":"Socket.IO","text":"

Socket.IO is a real-time communication framework that enables bidirectional, event-based communication between the client and the server. It uses WebSocket as its main transport, falling back to HTTP long-polling when WebSocket is unavailable, and exposes a simple API for real-time messaging. Socket.IO originated as a JavaScript library with a browser client and a Node.js server, and compatible implementations exist for other languages, including Python.

"},{"location":"Potpourri/SocketIO/#resources","title":"Resources","text":"
  • Socket.IO Documentation
  • Socket.IO Client-Side Library
  • Socket.IO Server-Side Library
  • Flask-SocketIO Documentation
"},{"location":"Potpourri/SocketIO/#socketio-in-python","title":"Socket.IO in Python","text":"

Socket.IO can be used in Python using the python-socketio library. The library provides a client-side and a server-side implementation. The client-side implementation is used to connect to the server and send and receive messages. The server-side implementation is used to handle incoming connections and send messages to the clients.

Here's an example of how to use Socket.IO in Python:

import socketio\n\nsio = socketio.Client()\n\n@sio.event\ndef connect():\n    print('connection established')\n\n\n@sio.event\ndef message(data):\n    print('message received with ', data)\n    sio.emit('response', {'response': 'my response'})\n\n\n@sio.event\ndef disconnect():\n    print('disconnected from server')\n\n\nsio.connect('http://localhost:5000')\n
  1. First, we import the socketio library.
  2. We create a socketio.Client object.
  3. We define three event handlers: connect, message, and disconnect.
  4. In the connect event handler, we print a message to indicate that the connection has been established.
  5. In the message event handler, we print the received message and send a response using the emit method.
  6. In the disconnect event handler, we print a message to indicate that the connection has been lost.
  7. We connect to the server using the connect method and pass the URL of the server as an argument.

Note that the emit method is used to send a message to the server. The first argument is the event name, and the second argument is the data to be sent. In this example, the client emits a response event back to the server.

The server-side implementation is a bit more complex, but it can be done using the flask-socketio library. Here's an example of how to use Socket.IO in Python with Flask:

from flask import Flask, render_template\nfrom flask_socketio import SocketIO, emit\n\napp = Flask(__name__)\napp.config['SECRET_KEY'] ='secret!'\nsocketio = SocketIO(app)\n\n@app.route('/')\ndef index():\n    return render_template('index.html')\n\n@socketio.on('connect')\ndef connect():\n    print('connected')\n\n@socketio.on('message')\ndef message(data):\n    print('message received with ', data)\n    emit('response', {'response': 'my response'})\n\n@socketio.on('disconnect')\ndef disconnect():\n    print('disconnected')\n\n\nif __name__ == '__main__':\n    socketio.run(app, debug=True)\n
  1. First, we import Flask and render_template from the flask library, and SocketIO and emit from the flask_socketio library.
  2. We create a Flask app object and set the SECRET_KEY configuration variable.
  3. We create a SocketIO object and pass the app object as an argument.
  4. We define three event handlers: connect, message, and disconnect.
  5. In the connect event handler, we print a message to indicate that a client has connected.
  6. In the message event handler, we print the received message and send a response using the emit function.
  7. In the disconnect event handler, we print a message to indicate that a client has disconnected.
  8. We run the app using the run method and pass the app object as an argument.

The emit function sends a message back to the connected client: the first argument is the event name and the second is the data to be sent. Here, the server responds to each incoming message with a response event.

On the browser side, the socket.io-client JavaScript library is used. Here's an example of a simple HTML page that connects to the server above:

<!DOCTYPE html>\n<html>\n<head>\n  <meta charset=\"UTF-8\">\n  <title>Socket.IO Example</title>\n  <script src=\"https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.3.0/socket.io.js\"></script>\n</head>\n<body>\n  <h1>Socket.IO Example</h1>\n  <p id=\"message\"></p>\n  <script>\n    const socket = io();\n\n    socket.on('connect', () => {\n      console.log('connected');\n      socket.emit('message', 'Hello, server!');\n    });\n\n    socket.on('response', (data) => {\n      console.log('response received with ', data);\n      document.getElementById('message').innerHTML = data.response;\n    });\n\n    socket.on('disconnect', () => {\n      console.log('disconnected');\n    });\n  </script>\n</body>\n</html>\n
  1. First, we load the socket.io-client library from a CDN with a script tag.
  2. We create a socket object by calling io(), which by default connects back to the host that served the page.
  3. We define three event handlers: connect, response, and disconnect.
  4. In the connect event handler, we print a message to indicate that the connection has been established.
  5. In the response event handler, we print the received message and update the message element with the response.
  6. In the disconnect event handler, we print a message to indicate that the connection has been lost.

Note that the emit method is used to send a message to the server. The first argument is the event name, and the second argument is the data to be sent. In this example, we send a message to the server with the message event name.

The corresponding server can also be written in JavaScript using the socket.io library on Node.js. Here's an example of an Express-based Socket.IO server:

const express = require('express');\nconst app = express();\nconst http = require('http');\nconst server = http.createServer(app);\nconst { Server } = require('socket.io');\n\nconst io = new Server(server, {\n  cors: {\n    origin: '*',\n  },\n});\n\nio.on('connection', (socket) => {\n  console.log('a user connected');\n\n  socket.on('message', (data) => {\n    console.log('message received with ', data);\n    socket.emit('response', { response: 'my response' });\n  });\n\n  socket.on('disconnect', () => {\n    console.log('user disconnected');\n  });\n});\n\nserver.listen(3000, () => {\n  console.log('listening on *:3000');\n});\n
  1. First, we import the express, http, and socket.io libraries.
  2. We create an app object and a server object using the http library.
  3. We create a socket.io object and pass the server object as an argument.
  4. We define an event handler for the connection event.
  5. In the connection event handler, we print a message to indicate that a client has connected.
  6. We define an event handler for the message event.
  7. In the message event handler, we print the received message and send a response using the emit method.
  8. We define an event handler for the disconnect event.
  9. In the disconnect event handler, we print a message to indicate that a client has disconnected.
  10. We start the server using the listen method and pass the port number as an argument.
  11. We print a message to indicate that the server is listening on the specified port.

Again, the emit method sends the response event back to the client: the first argument is the event name and the second argument is the data payload.

Note that the cors option is used to allow cross-origin requests. This is necessary for Socket.IO to work between the client and the server.

Overall, Socket.IO is a powerful tool for real-time communication between the client and the server. It provides a simple API for real-time communication and is easy to use in Python and JavaScript.
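
A Python client is possible too. Below is a minimal sketch using the python-socketio package (an addition of mine, not shown on the original page), assuming the Flask-SocketIO server above is running on localhost:5000 and that the client and server libraries speak the same Socket.IO protocol version:

import socketio\n\nsio = socketio.Client()\n\n@sio.event\ndef connect():\n    print(\"connected\")\n    sio.emit(\"message\", \"Hello, server!\")\n\n@sio.on(\"response\")\ndef on_response(data):\n    print(\"response received with\", data)\n\n@sio.event\ndef disconnect():\n    print(\"disconnected\")\n\nsio.connect(\"http://localhost:5000\")  # assumes the Flask-SocketIO server above is running here\nsio.wait()\n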

"},{"location":"Useful%20Tools/CMake/","title":"CMake","text":"

CMake is a cross-platform build system generator. It is used to build, test, and package software. It is widely used in the open-source community and is used in many popular projects such as OpenCV, VTK, and ITK.

"},{"location":"Useful%20Tools/CMake/#installing-cmake","title":"Installing CMake","text":"

To install CMake, you can download the installer from the official website: https://cmake.org/download/.

"},{"location":"Useful%20Tools/CMake/#using-cmake","title":"Using CMake","text":"

To use CMake, you need to create a CMakeLists.txt file in the root directory of your project. This file contains all the instructions for building your project.

Here is a basic example of a CMakeLists.txt file:

cmake_minimum_required(VERSION 3.10)\n\nproject(MyProject)\n\nset(CMAKE_CXX_STANDARD 11)\n\nadd_executable(MyProject main.cpp)\n

In this example, we set the minimum required version of CMake to 3.10, set the project name to \"MyProject\", set the C++ standard to 11, and add an executable target called \"MyProject\" that compiles the \"main.cpp\" file.

To configure the project and generate the build files, run the following command from the project root:

cmake .\n

This will generate the native build files (for example, Makefiles) in the current directory. You can then compile the project by running:

cmake --build .\n

This will build the project and place the MyProject executable in the same directory, which you can then run. In practice an out-of-source build is preferred: cmake -S . -B build followed by cmake --build build keeps all generated files in a separate build directory.

"},{"location":"Useful%20Tools/CMake/#resources","title":"Resources","text":"
  • CMake Homepage
  • CMake \u5b98\u65b9 Tutorial
  • CMake Documentation
  • CMake Tutorial
  • CMake Examples
  • CMake Cheat Sheet
  • CMake FAQ
  • CMake Best Practices
  • \u4e0a\u6d77\u4ea4\u901a\u5927\u5b66 IPADS \u7ec4\u65b0\u4eba\u57f9\u8bad
"},{"location":"Useful%20Tools/Docker/","title":"Docker","text":""},{"location":"Useful%20Tools/Docker/#docker_1","title":"Docker\u5b66\u4e60\u8d44\u6599","text":"

Docker \u5b98\u65b9\u6587\u6863\u5f53\u7136\u662f\u6700\u597d\u7684\u521d\u5b66\u6559\u6750\uff0c\u4f46\u6700\u597d\u7684\u5bfc\u5e08\u4e00\u5b9a\u662f\u4f60\u81ea\u5df1\u2014\u2014\u5c1d\u8bd5\u53bb\u4f7f\u7528 Docker \u624d\u80fd\u4eab\u53d7\u5b83\u5e26\u6765\u7684\u4fbf\u5229\u3002Docker \u5728\u5de5\u4e1a\u754c\u53d1\u5c55\u8fc5\u731b\u5e76\u5df2\u7ecf\u975e\u5e38\u6210\u719f\uff0c\u4f60\u53ef\u4ee5\u4e0b\u8f7d\u5b83\u7684\u684c\u9762\u7aef\u5e76\u4f7f\u7528\u56fe\u5f62\u754c\u9762\u3002

\u5f53\u7136\uff0c\u5982\u679c\u4f60\u50cf\u6211\u4e00\u6837\uff0c\u662f\u4e00\u4e2a\u75af\u72c2\u7684\u9020\u8f6e\u5b50\u7231\u597d\u8005\uff0c\u90a3\u4e0d\u59a8\u81ea\u5df1\u4eb2\u624b\u5199\u4e00\u4e2a\u8ff7\u4f60 Docker \u6765\u52a0\u6df1\u7406\u89e3\u3002

KodeKloud Docker for the Absolute Beginner \u5168\u9762\u7684\u4ecb\u7ecd\u4e86 Docker \u7684\u57fa\u7840\u529f\u80fd\uff0c\u5e76\u4e14\u6709\u5927\u91cf\u7684\u914d\u5957\u7ec3\u4e60\uff0c\u540c\u65f6\u63d0\u4f9b\u514d\u8d39\u7684\u4e91\u73af\u5883\u6765\u5b8c\u6210\u7ec3\u4e60\u3002\u5176\u4f59\u7684\u4e91\u76f8\u5173\u7684\u8bfe\u7a0b\u5982 Kubernetes \u9700\u8981\u4ed8\u8d39\uff0c\u4f46\u4e2a\u4eba\u5f3a\u70c8\u63a8\u8350\uff1a\u8bb2\u89e3\u975e\u5e38\u4ed4\u7ec6\uff0c\u9002\u5408\u4ece 0 \u5f00\u59cb\u7684\u65b0\u624b\uff1b\u6709\u914d\u5957\u7684 Kubernetes \u7684\u5b9e\u9a8c\u73af\u5883\uff0c\u4e0d\u7528\u88ab\u642d\u5efa\u73af\u5883\u529d\u9000\u3002

"},{"location":"Useful%20Tools/GDB/","title":"GNU Debuger","text":""},{"location":"Useful%20Tools/GDB/#1-introduction","title":"1. Introduction","text":"

The GNU Debugger (GDB) is a command-line debugger used to debug and analyze programs. It gives developers and system administrators a rich set of commands for setting breakpoints, stepping through code, and examining the state of a running process in a variety of ways.

In this article, we will learn how to install GDB, debug a program with it, and use its commands to inspect and modify a running program.

"},{"location":"Useful%20Tools/GDB/#2-installing-gdb","title":"2. Installing GDB","text":"

GDB is included in most Linux distributions and can be installed using the package manager. For example, on Ubuntu, you can install GDB using the following command:

sudo apt-get install gdb\n

On Windows, GDB is usually installed as part of a toolchain such as MinGW-w64, MSYS2, or Cygwin; make sure the directory containing gdb.exe is on your PATH environment variable.

Once GDB is installed, you can start it by typing gdb in the terminal. You should see the GDB prompt:

(gdb)\n

This is the GDB command prompt. You can type GDB commands and execute them to debug and optimize your code.

"},{"location":"Useful%20Tools/GDB/#3-debugging-a-program","title":"3. Debugging a Program","text":"

To debug a program using GDB, you need to first compile the program with debugging symbols. You can do this by adding the -g flag to the compiler command. For example, if you are using the g++ compiler, you can compile your program with the following command:

g++ -g myprogram.cpp -o myprogram\n

Once the program is compiled, you can run it using GDB by typing the following command:

gdb myprogram\n

This will start GDB and load the program. You can then set breakpoints in your code using the break command. For example, to set a breakpoint at line 10 of your program, you can type:

break 10\n

You can then run the program using the run command:

run\n

This will start the program and stop at the breakpoint. You can then use GDB commands to analyze the program state and debug the program.

For example, you can use the print command to print the value of a variable:

print myVariable\n

You can also use the step command to execute the next line of code:

step\n

This will execute the next source line, stepping into any function calls (use the next command to step over them instead), and then stop again. You can use the continue command to continue running the program until the next breakpoint:

continue\n

You can use the backtrace command to view the call stack:

backtrace\n

This will show you the current function call stack. You can use the info command to view information about variables, threads, and breakpoints.

Once you are done debugging, you can exit GDB using the quit command.

"},{"location":"Useful%20Tools/GDB/#4-optimizing-a-program","title":"4. Optimizing a Program","text":"

GDB is a debugger rather than a profiler: it has no built-in time or profile command, so to find out where a program spends its time you would normally turn to tools such as gprof, perf, or Valgrind's callgrind. What GDB is good at is letting you inspect and change the state of a running program while you investigate a suspected problem area, which is often the first step before optimizing anything.

For example, you can use the set command to change the value of a variable:

set myVariable = 10\n

This will set the value of myVariable to 10. You can also use the watch command to monitor a variable and automatically break when it changes:

watch myVariable\n

This will break the program when myVariable changes. You can then use the finish command to execute the rest of the function:

finish\n

This will execute the rest of the function and continue running the program. You can use the return command to return from a function:

return\n

This pops the current frame and returns to the caller immediately, without executing the rest of the function; you can then resume execution with continue.

"},{"location":"Useful%20Tools/GDB/#reference","title":"Reference","text":"
  • \u4e00\u6587\u5feb\u901f\u4e0a\u624bGDB
"},{"location":"Useful%20Tools/MIT-Missing-Semester/","title":"MIT: The Missing Semester of Your CS Education","text":""},{"location":"Useful%20Tools/MIT-Missing-Semester/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u5148\u4fee\u8981\u6c42\uff1a\u65e0
  • \u7f16\u7a0b\u8bed\u8a00\uff1ashell
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a10 \u5c0f\u65f6

\u6b63\u5982\u8bfe\u7a0b\u540d\u5b57\u6240\u8a00\uff1a\u201c\u8ba1\u7b97\u673a\u6559\u5b66\u4e2d\u6d88\u5931\u7684\u4e00\u4e2a\u5b66\u671f\u201d\uff0c\u8fd9\u95e8\u8bfe\u5c06\u4f1a\u6559\u4f1a\u4f60\u8bb8\u591a\u5927\u5b66\u7684\u8bfe\u5802\u4e0a\u4e0d\u4f1a\u6d89\u53ca\u4f46\u5374\u5bf9\u6bcf\u4e2a CSer \u65e0\u6bd4\u91cd\u8981\u7684\u5de5\u5177\u6216\u8005\u77e5\u8bc6\u70b9\u3002\u4f8b\u5982 Shell \u7f16\u7a0b\u3001\u547d\u4ee4\u884c\u914d\u7f6e\u3001Git\u3001Vim\u3001tmux\u3001ssh \u7b49\u7b49\u3002\u5982\u679c\u4f60\u662f\u4e00\u4e2a\u8ba1\u7b97\u673a\u5c0f\u767d\uff0c\u90a3\u4e48\u6211\u975e\u5e38\u5efa\u8bae\u4f60\u5b66\u4e60\u4e00\u4e0b\u8fd9\u95e8\u8bfe\uff0c\u56e0\u4e3a\u5b83\u57fa\u672c\u6d89\u53ca\u4e86\u672c\u4e66\u5fc5\u5b66\u5de5\u5177\u4e2d\u7684\u7edd\u5927\u90e8\u5206\u5185\u5bb9\u3002

\u9664\u4e86 MIT \u5b98\u65b9\u7684\u5b66\u4e60\u8d44\u6599\u5916\uff0c\u5317\u4eac\u5927\u5b66\u56fe\u7075\u73ed\u5f00\u8bbe\u7684\u524d\u6cbf\u8ba1\u7b97\u5b9e\u8df5\u4e2d\u4e5f\u5f00\u8bbe\u4e86\u76f8\u5173\u8bfe\u7a0b\uff0c\u8d44\u6599\u4f4d\u4e8e\u8fd9\u4e2a\u7f51\u7ad9\u4e0b\uff0c\u4f9b\u5927\u5bb6\u53c2\u8003\u3002

"},{"location":"Useful%20Tools/MIT-Missing-Semester/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://missing.csail.mit.edu/2020/
  • \u8bfe\u7a0b\u4e2d\u6587\u7f51\u7ad9: https://missing-semester-cn.github.io/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/playlist?list=PLyzOVJj3bHQuloKGG59rS43e29ro7I57J
  • \u8bfe\u7a0b\u4e2d\u6587\u5b57\u5e55\u89c6\u9891\uff1a
    • Missing_Semi_\u4e2d\u8bd1\u7ec4\uff08\u672a\u5b8c\u7ed3\uff09\uff1ahttps://space.bilibili.com/1010983811?spm_id_from=333.337.search-card.all.click
    • \u5218\u9ed1\u9ed1a\uff08\u5df2\u5b8c\u7ed3\uff09\uff1ahttps://space.bilibili.com/518734451?spm_id_from=333.337.search-card.all.click
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1a\u4e00\u4e9b\u968f\u5802\u5c0f\u7ec3\u4e60\uff0c\u5177\u4f53\u89c1\u8bfe\u7a0b\u7f51\u7ad9\u3002
"},{"location":"Useful%20Tools/Makefile/","title":"GNU Make","text":""},{"location":"Useful%20Tools/Makefile/#1-introduction","title":"1. Introduction","text":"

GNU Make is a tool for automating the build process of software projects. It is a command-line tool that can be used to build, test, and package software projects. GNU Make is a cross-platform tool that can be used on Windows, Linux, and macOS.

"},{"location":"Useful%20Tools/Makefile/#2-installation","title":"2. Installation","text":"

GNU Make comes preinstalled on most Linux distributions and on macOS (via the Xcode Command Line Tools). If it is missing, it can be installed using the following steps:

  1. On Linux, install it from the package manager, for example sudo apt-get install make on Ubuntu or Debian.

  2. On Windows, install it as part of a toolchain environment such as MSYS2, MinGW-w64, or Cygwin.

  3. Verify that GNU Make is installed by running make --version in the terminal or command prompt. The official homepage is https://www.gnu.org/software/make/.

"},{"location":"Useful%20Tools/Makefile/#3-usage","title":"3. Usage","text":"

GNU Make is driven by the make command, which reads a file named Makefile in the current directory. The make command takes a target name as an argument; build, test, and package below are conventional target names that the project's Makefile is expected to define:

  • build: builds the software project.
  • test: runs the test suite of the software project.
  • package: packages the software project for distribution.

For example, to build a software project, run the following command:

make build\n

To run the test suite of a software project, run the following command:

make test\n

To package a software project for distribution, run the following command:

make package\n
"},{"location":"Useful%20Tools/Makefile/#4-configuration","title":"4. Configuration","text":"

GNU Make is configured through a Makefile in the root directory of the software project. The Makefile defines the targets and the commands used to build, test, and package the project.

Here is an example Makefile:

.PHONY: build test package\n\nbuild:\n\tpython setup.py build\n\ntest:\n\tpython setup.py test\n\npackage:\n\tpython setup.py sdist bdist_wheel\n

In this example, the build target runs python setup.py build, the test target runs the project's test suite with python setup.py test, and the package target builds source and wheel distributions with python setup.py sdist bdist_wheel. Note that the command lines under each target must be indented with a tab character, and .PHONY marks targets that do not produce a file of the same name.

"},{"location":"Useful%20Tools/Makefile/#5-conclusion","title":"5. Conclusion","text":"

GNU Make is a powerful tool for automating the build process of software projects. It can be used to build, test, and package software on Windows, Linux, and macOS, and the project's Makefile defines exactly how each of those steps is carried out.

"},{"location":"Useful%20Tools/Makefile/#6resources","title":"6.Resources","text":"
  • How to Write a Makefile

  • GNU Make Manual

"},{"location":"Useful%20Tools/Regex/","title":"Regex","text":""},{"location":"Useful%20Tools/Regex/#regular-expressions","title":"Regular Expressions","text":"

A regular expression is a sequence of characters that defines a search pattern. Regular expressions are used to match, locate, and manipulate text; in Python, they are provided by the re module.

Here are some examples of regular expressions:

  • r\"hello\\s+world\": Matches the string \"hello world\" with any number of spaces between \"hello\" and \"world\".
  • r\"\\d+\": Matches one or more digits.
  • r\"\\w+\": Matches one or more word characters (letters, digits, and underscores).
  • r\"[\\w\\s]+\": Matches one or more word characters or spaces.
  • r\"[\\w\\s]+@[\\w\\s]+\\.[\\w]{2,3}\": Matches an email address with a username, domain name, and top-level domain.
"},{"location":"Useful%20Tools/Regex/#using-regular-expressions-in-python","title":"Using Regular Expressions in Python","text":"
  1. Import the re module:
import re\n
  2. Use the re.search() function to search for a pattern in a string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"fox\"\nmatch = re.search(pattern, string)\nif match:\n    print(\"Match found:\", match.group())\nelse:\n    print(\"Match not found\")\n
  3. Use the re.findall() function to find all occurrences of a pattern in a string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"\\b\\w{3}\\b\"\nmatches = re.findall(pattern, string)\nprint(\"Matches:\", matches)\n
  4. Use the re.sub() function to replace all occurrences of a pattern in a string with a new string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"fox\"\nnew_string = re.sub(pattern, \"cat\", string)\nprint(\"New string:\", new_string)\n
"},{"location":"Useful%20Tools/Regex/#resources","title":"Resources","text":"
  • Email Regex
  • Regex 101
  • Python Regular Expressions
  • Regular Expression HOWTO
  • Regular Expressions in Python
"},{"location":"Web%20Framework/Django/","title":"Django","text":"

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, such as database abstraction, URL routing, and templating. Django's documentation is excellent and provides a comprehensive guide to its features and functionality.

Django is known for its ease of use, high-level abstractions, and reusability. It is also fast, scalable, and secure, so you can quickly build complex web applications out of reusable, well-documented code.

Django handles complex database relationships and queries well through its ORM, and it ships with built-in support for internationalization and localization, which makes it a common choice for multilingual and enterprise-level applications.

Django is open-source, with a large and active community that continually contributes new features, packages, and documentation.
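
As a small sketch of those pieces (the Movie model, field names, and view below are illustrative, not taken from any particular project), a model and a view look like this:

# models.py -- the ORM provides the database abstraction\nfrom django.db import models\n\nclass Movie(models.Model):\n    title = models.CharField(max_length=200)\n    rating = models.FloatField(default=0.0)\n\n# views.py -- URL routing would map a path to this view in urls.py\nfrom django.http import HttpResponse\n\ndef movie_count(request):\n    return HttpResponse(f\"There are {Movie.objects.count()} movies.\")\n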

"},{"location":"Web%20Framework/Flask/","title":"Flask","text":"

Flask is a web framework written in Python. It is lightweight, easy to use, and provides a lot of features out of the box. It is also easy to learn and has a large community of developers who contribute to its development.
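
A minimal Flask application, as a sketch assuming Flask has been installed with pip install flask, fits in a few lines:

from flask import Flask\n\napp = Flask(__name__)\n\n@app.route(\"/\")\ndef index():\n    return \"Hello, Flask!\"\n\nif __name__ == \"__main__\":\n    app.run(debug=True)  # serves on http://127.0.0.1:5000 by default\n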

"},{"location":"Web%20Framework/MkDocs/","title":"MkDocs","text":"

MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

"},{"location":"Web%20Framework/MkDocs/#getting-started","title":"Getting Started","text":"

To get started with MkDocs, follow these steps:

  1. Install MkDocs:
pip install mkdocs\n
  2. Create a new directory for your project:
mkdir myproject\n
  3. Initialize a new MkDocs project:
cd myproject\nmkdocs new .\n
  4. Start the development server:
mkdocs serve\n
  5. Open your browser and go to http://127.0.0.1:8000/ to view your new documentation site.
"},{"location":"Web%20Framework/MkDocs/#writing-your-first-page","title":"Writing Your First Page","text":"

To create a new page, create a new file in the docs directory with a .md extension. For example, create a new file called about.md in the docs directory:

docs/\n    index.md   # created by mkdocs new\n    about.md   # your new page\n

MkDocs will pick the new page up automatically; to control where it appears in the site navigation, list it under the nav key in mkdocs.yml.

"},{"location":"Web%20Framework/React/","title":"React","text":"

React is a declarative, component-based JavaScript library for building user interfaces, and it is widely used for complex and interactive web applications. It is easy to learn and use, and it has a large and active community of developers.

"},{"location":"Web%20Framework/React/#getting-started","title":"Getting Started","text":"

To get started with React, you can follow the steps below:

  1. Install Node.js and npm on your system.
  2. Create a new React project using the create-react-app command.
  3. Start the development server using the npm start command.
  4. Open the project in your preferred code editor.
"},{"location":"Web%20Framework/React/#react-components","title":"React Components","text":"

React components are the building blocks of React applications. They are small, reusable pieces of code that can be used to create complex user interfaces. React components can be created using JavaScript or JSX. JSX is a syntax extension of JavaScript that allows you to write HTML-like code within JavaScript.

"},{"location":"Web%20Framework/React/#react-props","title":"React Props","text":"

React props are the input data that are passed to a component. They are used to customize the behavior and appearance of a component. Props can be passed to a component using attributes or properties.

"},{"location":"Web%20Framework/React/#react-state","title":"React State","text":"

React state is the data managed by a component; it tracks the state of the user interface and is updated by the component itself. In class components state is updated with the setState method, while function components use the setter returned by the useState hook.

"},{"location":"Web%20Framework/React/#react-life-cycle-methods","title":"React Life Cycle Methods","text":"

React life cycle methods are functions that are called at different stages of the component lifecycle. They are used to perform certain actions when a component is mounted, updated, or unmounted.

"},{"location":"Web%20Framework/React/#react-router","title":"React Router","text":"

React Router is a library for handling client-side routing in React applications. It allows you to create dynamic and responsive web applications with ease. It provides a simple and declarative API for handling navigation in your application.

"},{"location":"Web%20Framework/React/#react-hooks","title":"React Hooks","text":"

React hooks let function components use state and other React features that previously required class components. Built-in hooks such as useState and useEffect cover local state and side effects, and custom hooks let you extract stateful logic into reusable functions.

"},{"location":"Web%20Framework/Streamlit/","title":"Streamlit","text":"

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In this section, we will learn how to use Streamlit to create a simple web app that displays a list of movies and their ratings.

"},{"location":"Web%20Framework/Streamlit/#prerequisites","title":"Prerequisites","text":"
  1. Python 3.6 or later
  2. Streamlit library
"},{"location":"Web%20Framework/Streamlit/#step-1-install-streamlit","title":"Step 1: Install Streamlit","text":"

To install Streamlit, run the following command in your terminal:

pip install streamlit\n
"},{"location":"Web%20Framework/Streamlit/#step-2-create-a-new-streamlit-app","title":"Step 2: Create a new Streamlit app","text":"

To check that Streamlit is working, run the following command in your terminal:

streamlit hello\n

This launches Streamlit's built-in demo app in your browser, which is a quick way to confirm the installation; it does not create any project files. To start your own app, create a file called app.py in a new directory and run it with streamlit run app.py.

"},{"location":"Web%20Framework/Streamlit/#step-3-add-a-list-of-movies-and-their-ratings","title":"Step 3: Add a list of movies and their ratings","text":"

To add a list of movies and their ratings, replace the code in app.py with the following:

import streamlit as st\n\n# Create a list of movies and their ratings\n
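
The snippet above stops after the comment. A minimal way to finish it, with made-up movie titles and ratings and st.table as one display choice among several, is:

import streamlit as st\n\n# Create a list of movies and their ratings (illustrative data)\nmovies = [\n    {\"title\": \"The Matrix\", \"rating\": 8.7},\n    {\"title\": \"Inception\", \"rating\": 8.8},\n    {\"title\": \"Spirited Away\", \"rating\": 8.6},\n]\n\n# Render a title and a static table\nst.title(\"Movie Ratings\")\nst.table(movies)\n

Run it with streamlit run app.py and the table appears in the browser.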
"},{"location":"Web%20Framework/Streamlit/#pydeck","title":"Pydeck","text":"

Pydeck is a Python library for creating data visualizations using deck.gl, an open-source WebGL-based visualization framework. We can use it to create a map of the movies and their ratings.

  • https://deckgl.readthedocs.io/
  • heatmap
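
As a hedged sketch of that heatmap idea (the coordinates, column names, and weights below are invented example data), a pydeck layer can be handed to Streamlit via st.pydeck_chart:

import pandas as pd\nimport pydeck as pdk\nimport streamlit as st\n\n# Made-up example points (say, filming locations) with a weight column\ndf = pd.DataFrame({\n    \"lat\": [37.77, 37.78, 37.76],\n    \"lon\": [-122.41, -122.42, -122.40],\n    \"weight\": [8.7, 8.8, 7.9],\n})\n\nlayer = pdk.Layer(\"HeatmapLayer\", data=df, get_position=\"[lon, lat]\", get_weight=\"weight\")\nview = pdk.ViewState(latitude=37.77, longitude=-122.41, zoom=11)\nst.pydeck_chart(pdk.Deck(layers=[layer], initial_view_state=view))\n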
"},{"location":"Web%20Framework/Vue/","title":"Vue.js","text":"

Vue.js is a progressive framework for building user interfaces. It is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or frameworks.

"},{"location":"Web%20Framework/Vue/#getting-started","title":"Getting Started","text":"

To get started with Vue.js, you can follow the official guide on their website:

  • Getting Started
  • Vue CLI
  • Vue Router
  • Vuex
  • Vue Loader
"},{"location":"Web%20Framework/Vue/#examples","title":"Examples","text":"

Here are some examples of Vue.js applications:

  • TodoMVC
  • Vue.js Examples
  • Vue.js News
  • Vue.js Jobs
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\u200b\\u3000\\-\u3001\u3002\uff0c\uff0e\uff1f\uff01\uff1b]+","pipeline":["stemmer"]},"docs":[{"location":"","title":"Welcome to MkDocs","text":"

For full documentation visit mkdocs.org.

"},{"location":"#commands","title":"Commands","text":"
  • mkdocs new [dir-name] - Create a new project.
  • mkdocs serve - Start the live-reloading docs server.
  • mkdocs build - Build the documentation site.
  • mkdocs -h - Print help message and exit.
"},{"location":"#project-layout","title":"Project layout","text":"
mkdocs.yml    # The configuration file.\ndocs/\n    index.md  # The documentation homepage.\n    ...       # Other markdown pages, images and other files.\n
"},{"location":"Computer%20Network/CS144/","title":"CS144: Computer Network","text":""},{"location":"Computer%20Network/CS144/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aStanford
  • \u5148\u4fee\u8981\u6c42\uff1a\u4e00\u5b9a\u7684\u8ba1\u7b97\u673a\u7cfb\u7edf\u57fa\u7840\uff0cCS106L
  • \u7f16\u7a0b\u8bed\u8a00\uff1aC++
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a100 \u5c0f\u65f6

\u8fd9\u95e8\u8bfe\u7684\u4e3b\u8bb2\u4eba\u4e4b\u4e00\u662f\u7f51\u7edc\u9886\u57df\u7684\u5de8\u64d8 Nick McKeown \u6559\u6388\u3002\u8fd9\u4f4d\u62e5\u6709\u81ea\u5df1\u521b\u4e1a\u516c\u53f8\u7684\u5b66\u754c\u4e1a\u754c\u53cc\u5de8\u4f6c\u4f1a\u5728\u4ed6\u6155\u8bfe\u6bcf\u4e00\u7ae0\u8282\u7684\u6700\u540e\u91c7\u8bbf\u4e00\u4f4d\u4e1a\u754c\u7684\u9ad8\u7ba1\u6216\u8005\u5b66\u754c\u7684\u9ad8\u4eba\uff0c\u975e\u5e38\u5f00\u9614\u773c\u754c\u3002

\u5728\u8fd9\u95e8\u8bfe\u7684 Project \u4e2d\uff0c\u4f60\u5c06\u7528 C++ \u5faa\u5e8f\u6e10\u8fdb\u5730\u642d\u5efa\u51fa\u6574\u4e2a TCP/IP \u534f\u8bae\u6808\uff0c\u5b9e\u73b0 IP \u8def\u7531\u4ee5\u53ca ARP \u534f\u8bae\uff0c\u6700\u540e\u5229\u7528\u4f60\u81ea\u5df1\u7684\u534f\u8bae\u6808\u4ee3\u66ff Linux Kernel \u7684\u7f51\u7edc\u534f\u8bae\u6808\u548c\u5176\u4ed6\u5b66\u751f\u7684\u8ba1\u7b97\u673a\u8fdb\u884c\u901a\u4fe1\uff0c\u975e\u5e38 amazing\uff01

"},{"location":"Computer%20Network/CS144/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://cs144.github.io/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/watch?v=r2WZNaFyrbQ&list=PL6RdenZrxrw9inR-IJv-erlOKRHjymxMN
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u65e0
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttps://cs144.github.io/\uff0c8 \u4e2a Project \u5e26\u4f60\u5b9e\u73b0\u6574\u4e2a TCP/IP \u534f\u8bae\u6808
"},{"location":"Computer%20Network/CS144/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"
  • PKUFlyingPig
  • Lexssama's Blogs
  • huangrt01
  • kiprey
  • \u5eb7\u5b87PL's Blog
  • doraemonzzz
  • ViXbob's libsponge
  • \u5403\u7740\u571f\u8c46\u5750\u5730\u94c1\u7684\u535a\u5ba2
  • Smith
  • \u661f\u9065\u89c1
  • EIMadrigal
  • Joey
"},{"location":"Computer%20Network/topdown/","title":"Computer Networking: A Top-Down Approach","text":""},{"location":"Computer%20Network/topdown/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1a\u9a6c\u8428\u8bf8\u585e\u5927\u5b66
  • \u5148\u4fee\u8981\u6c42\uff1a\u6709\u4e00\u5b9a\u7684\u8ba1\u7b97\u673a\u7cfb\u7edf\u57fa\u7840
  • \u7f16\u7a0b\u8bed\u8a00\uff1a\u65e0
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a40 \u5c0f\u65f6

\u300a\u81ea\u9876\u5411\u4e0b\u65b9\u6cd5\u300b\u662f\u8ba1\u7b97\u673a\u7f51\u7edc\u9886\u57df\u7684\u4e00\u672c\u7ecf\u5178\u6559\u6750\uff0c\u4e24\u4f4d\u4f5c\u8005 Jim Kurose \u548c Keith Ross \u7cbe\u5fc3\u5236\u4f5c\u4e86\u6559\u6750\u914d\u5957\u7684\u8bfe\u7a0b\u7f51\u7ad9\uff0c\u5e76\u4e14\u516c\u5f00\u4e86\u81ea\u5df1\u5f55\u5236\u7684\u7f51\u8bfe\u89c6\u9891\uff0c\u4ea4\u4e92\u5f0f\u7684\u5728\u7ebf\u7ae0\u8282\u6d4b\u8bd5\uff0c\u4ee5\u53ca\u5229\u7528 WireShark \u8fdb\u884c\u6293\u5305\u5206\u6790\u7684 lab\u3002\u552f\u4e00\u9057\u61be\u7684\u662f\u8fd9\u95e8\u8bfe\u5e76\u6ca1\u6709\u786c\u6838\u7684\u7f16\u7a0b\u4f5c\u4e1a\uff0c\u800c Stanford \u7684 CS144 \u80fd\u5f88\u597d\u5730\u5f25\u8865\u8fd9\u4e00\u70b9\u3002

"},{"location":"Computer%20Network/topdown/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://gaia.cs.umass.edu/kurose_ross/index.php
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://gaia.cs.umass.edu/kurose_ross/lectures.php
  • \u8bfe\u7a0b\u6559\u6750\uff1aComputer Networking: A Top-Down Approach
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttps://gaia.cs.umass.edu/kurose_ross/wireshark.php
"},{"location":"Computer%20Network/topdown/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"

@PKUFlyingPig \u5728\u5b66\u4e60\u8fd9\u95e8\u8bfe\u4e2d\u7528\u5230\u7684\u6240\u6709\u8d44\u6e90\u548c\u4f5c\u4e1a\u5b9e\u73b0\u90fd\u6c47\u603b\u5728 PKUFlyingPig/Computer-Network-A-Top-Down-Approach - GitHub \u4e2d\u3002

"},{"location":"Computer%20Vision/EECS-498/","title":"UMich EECS 498-007 / 598-005: Deep Learning for Computer Vision","text":""},{"location":"Computer%20Vision/EECS-498/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aUMich
  • \u5148\u4fee\u8981\u6c42\uff1aPython\u57fa\u7840\uff0c\u77e9\u9635\u8bba(\u719f\u6089\u77e9\u9635\u6c42\u5bfc\u5373\u53ef)\uff0c\u5fae\u79ef\u5206
  • \u7f16\u7a0b\u8bed\u8a00\uff1aPython
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a60\uff5e80 \u5c0f\u65f6

UMich \u7684 Computer Vision \u8bfe\uff0c\u8bfe\u7a0b\u89c6\u9891\u548c\u4f5c\u4e1a\u8d28\u91cf\u6781\u9ad8\uff0c\u6db5\u76d6\u7684\u4e3b\u9898\u975e\u5e38\u5168\uff0c\u540c\u65f6 Assignments \u7684\u96be\u5ea6\u7531\u6d45\u53ca\u6df1\uff0c\u8986\u76d6\u4e86 CV \u4e3b\u6d41\u6a21\u578b\u53d1\u5c55\u7684\u5168\u9636\u6bb5\uff0c\u662f\u4e00\u95e8\u975e\u5e38\u597d\u7684 Computer Vision \u5165\u95e8\u8bfe\u3002

\u4f60\u5728\u6bcf\u4e2a Assignment \u91cc\u4f1a\u8ddf\u968f Handouts \u642d\u5efa\u4e0e\u8bad\u7ec3 Lectures \u4e2d\u63d0\u5230\u7684\u6a21\u578b/\u6846\u67b6\u3002

\u4f60\u4e0d\u9700\u8981\u6709\u4efb\u4f55\u7684\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\u7684\u4f7f\u7528\u7ecf\u9a8c\uff0c\u5728\u5f00\u59cb\u7684 Assignment \u91cc\uff0c\u8fd9\u95e8\u8bfe\u4f1a\u4ece\u96f6\u5f00\u59cb\u6559\u5bfc\u6bcf\u4e2a\u5b66\u751f\u5982\u4f55\u4f7f\u7528 Pytorch\uff0c\u540e\u7eed\u4e5f\u53ef\u4ee5\u5f53\u6210\u5de5\u5177\u4e66\uff0c\u968f\u65f6\u7ffb\u9605\u3002

\u540c\u65f6\u7531\u4e8e\u6bcf\u4e2a Assignment \u4e4b\u95f4\u6d89\u53ca\u5230\u7684\u4e3b\u9898\u90fd\u4e0d\u540c\uff0c\u4f60\u5728\u9012\u8fdb\u5f0f\u7684 Assignment \u4e2d\u4e0d\u4ec5\u53ef\u4ee5\u4eb2\u8eab\u4f53\u4f1a\u5230 CV \u4e3b\u6d41\u6a21\u578b\u7684\u53d1\u5c55\u5386\u7a0b\uff0c\u9886\u7565\u5230\u4e0d\u540c\u7684\u6a21\u578b\u548c\u8bad\u7ec3\u7684\u65b9\u6cd5\u5bf9\u6700\u7ec8\u6548\u679c/\u51c6\u786e\u7387\u7684\u5f71\u54cd\uff0c\u540c\u65f6\u4e5f\u80fd Hands On \u5730\u5b9e\u73b0\u5b83\u4eec\u3002

\u5728 A1 \u4e2d\uff0c\u4f60\u4f1a\u5b66\u4e60 Pytorch \u548c Google Colab \u7684\u4f7f\u7528\u3002

\u5728 A2 \u4e2d\u4f60\u4f1a\u4eb2\u81ea\u642d\u5efa Linear Classifier \u4ee5\u53ca\u4e00\u4e2a\u4e24\u5c42\u7684\u795e\u7ecf\u7f51\u7edc\uff0c\u6700\u540e\u4f60\u6709\u673a\u4f1a\u4eb2\u81ea\u63a5\u89e6 MNIST \u6570\u636e\u96c6\u5e76\u5728\u6b64\u57fa\u7840\u4e0a\u8bad\u7ec3\u5e76\u8bc4\u4f30\u4f60\u642d\u5efa\u8d77\u7684\u795e\u7ecf\u7f51\u7edc\u3002

\u5728 A3 \u4e2d\uff0c\u4f60\u4f1a\u63a5\u89e6\u5230\u6700\u4e3a\u7ecf\u5178\u7684 Convolutional Neural Network (A.K.A. CNN)\uff0c\u4eb2\u81ea\u611f\u53d7\u5377\u79ef\u795e\u7ecf\u7f51\u7edc\u7684\u9b45\u529b\u3002

\u800c\u5728 A4 \u4e2d\uff0c\u4f60\u5c06\u5b9e\u9645\u89e6\u53ca\u642d\u5efa\u7269\u4f53\u68c0\u6d4b\u6a21\u578b\u7684\u5168\u6d41\u7a0b\uff0c\u540c\u65f6\u8ddf\u968f Handout \u5b9e\u73b0\u4e24\u7bc7\u8bba\u6587\u4e2d\u7684 One-Stage Detector \u548c Two-Stage Detector\u3002

\u5230\u4e86 A5\uff0c\u5c31\u662f\u4ece CNN \u5230 RNN \u7684\u65f6\u523b\u4e86\uff0c\u4f60\u5c06\u6709\u673a\u4f1a\u4eb2\u81ea\u642d\u5efa\u8d77\u4e24\u79cd\u4e0d\u540c\u7684\u57fa\u4e8e\u6ce8\u610f\u529b\u7684\u6a21\u578b\uff0cRNNs (Vanilla RNN & LSTM) \u548c\u5927\u540d\u9f0e\u9f0e\u7684 Transfomer\u3002

\u5728\u6700\u540e\u4e00\u4e2a Assignment\uff08A6\uff09\u4e2d\uff0c\u4f60\u5c06\u6709\u673a\u4f1a\u5b9e\u73b0\u4e24\u79cd\u66f4\u4e3a Fancy \u7684\u6a21\u578b\uff0cVAE \u548c GAN\uff0c\u5e76\u5e94\u7528\u5728 MINST \u6570\u636e\u96c6\u4e0a\u3002\u6700\u540e\uff0c\u4f60\u4f1a\u5b9e\u73b0\u7f51\u7edc\u53ef\u89c6\u5316\u548c\u98ce\u683c\u8fc1\u79fb\u8fd9\u4e24\u4e2a\u975e\u5e38\u9177\u70ab\u7684\u529f\u80fd\u3002

\u5728 Assignments \u4e4b\u5916\uff0c\u4f60\u8fd8\u53ef\u4ee5\u81ea\u5df1\u5b9e\u73b0\u4e00\u4e2a Mini-Project\uff0c\u4eb2\u81ea\u642d\u5efa\u8d77\u4e00\u4e2a\u5b8c\u6574\u7684\u6df1\u5ea6\u5b66\u4e60 Pipeline\uff0c\u5177\u4f53\u53ef\u4ee5\u53c2\u8003\u8bfe\u7a0b\u4e3b\u9875\u3002

\u8bfe\u7a0b\u6240\u6d89\u53ca\u7684\u8d44\u6e90\uff0c\u5982 Lectures/Notes/Assignments \u90fd\u662f\u5f00\u6e90\u7684\uff0c\u7f8e\u4e2d\u4e0d\u8db3\u7684\u662f Autograder \u53ea\u5bf9\u672c\u6821 Enrolled \u7684\u5b66\u751f\u5f00\u653e\uff0c\u4f46\u56e0\u4e3a\u5728\u63d0\u4f9b\u7684 *.ipynb\uff08\u4e5f\u5c31\u662f Handout\uff09 \u4e2d\u5df2\u7ecf\u53ef\u4ee5\u786e\u5b9a\u5b9e\u73b0\u7684\u6b63\u786e\u6027\uff0c\u4ee5\u53ca\u9884\u671f\u7684\u7ed3\u679c\uff0c\u6240\u4ee5\u6211\u4e2a\u4eba\u89c9\u5f97 Autograder \u7684\u7f3a\u5931\u6ca1\u6709\u4efb\u4f55\u5f71\u54cd\u3002

\u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c\u8fd9\u95e8\u8bfe\u7684\u4e3b\u8bb2\u6559\u6388 Justin Johnson \u6b63\u662f Fei-Fei Li \u7684\u535a\u58eb\u6bd5\u4e1a\u751f\uff0c\u73b0\u5728\u5728 UMich \u5f53 Assistant Professor\u3002

\u800c\u73b0\u5728\u5f00\u6e90\u7684 2017 \u5e74\u7248\u672c\u7684 Stanford CS231N \u7684\u4e3b\u8bb2\u4eba\u5c31\u662f Justin Johnson\u3002

\u540c\u65f6\u56e0\u4e3a CS231N \u4e3b\u8981\u662f\u7531 Justin Johnson \u548c Andrej Karpathy \u5efa\u8bbe\u8d77\u6765\u7684\uff0c\u8fd9\u95e8\u8bfe\u4e5f\u6cbf\u7528\u4e86 CS231N \u7684\u4e00\u4e9b\u6750\u6599\uff0c\u6240\u4ee5\u5b66\u8fc7 CS231N \u7684\u540c\u5b66\u53ef\u80fd\u4f1a\u89c9\u5f97\u8fd9\u95e8\u8bfe\u7684\u67d0\u4e9b\u6750\u6599\u6bd4\u8f83\u719f\u6089\u3002

\u6700\u540e\uff0c\u6211\u63a8\u8350\u6bcf\u4e00\u4e2a Enroll \u8fd9\u95e8\u8bfe\u7684\u540c\u5b66\u90fd\u53bb\u770b\u4e00\u770b Youtube \u4e0a\u9762\u7684 Lectures\uff0cJustin Johnson \u7684\u8bb2\u8bfe\u65b9\u5f0f\u548c\u5185\u5bb9\u90fd\u975e\u5e38\u6e05\u6670\u548c\u6613\u61c2\uff0c\u662f\u975e\u5e38\u68d2\u7684\u53c2\u8003\u3002

"},{"location":"Computer%20Vision/EECS-498/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://web.eecs.umich.edu/~justincj/teaching/eecs498/WI2022/
  • \u8bfe\u7a0b\u89c6\u9891\uff1ahttps://www.youtube.com/playlist?list=PL5-TkQAfAZFbzxjBHtzdVCWE0Zbhomg7r
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u4ec5\u6709\u63a8\u8350\u6559\u6750\uff0c\u94fe\u63a5\uff1ahttps://www.deeplearningbook.org/
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1a\u89c1\u8bfe\u7a0b\u4e3b\u9875\uff0c6 \u4e2a Assignment \u548c\u4e00\u4e2a Mini-Project
"},{"location":"Computer%20Vision/OpenCV/","title":"OpenCV","text":"

OpenCV (Open Source Computer Vision) is an open-source library of programming functions mainly aimed at real-time computer vision. It provides a wide range of tools for image processing, video capture and analysis, 3D reconstruction, object detection, and many other applications.

OpenCV is written in C/C++ and has bindings for Python, Java, and MATLAB. It is cross-platform and can run on Linux, Windows, and macOS.

OpenCV is widely used in academic and industrial research, including in fields such as computer vision, image processing, robotics, and artificial intelligence. It is also used in mobile and embedded devices, including in self-driving cars, drones, and security systems.

The OpenCV library is free to use and open-source; it is distributed under the Apache 2.0 license (versions before 4.5 used a BSD license).
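
As a quick sketch of the Python bindings (the file names are placeholders), reading an image, converting it to grayscale, and running edge detection looks like this:

import cv2\n\nimg = cv2.imread(\"input.jpg\")                 # read an image from disk (placeholder file name)\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OpenCV loads images in BGR order\nedges = cv2.Canny(gray, 100, 200)             # Canny edge detection with two thresholds\ncv2.imwrite(\"edges.jpg\", edges)               # write the result back to disk\n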

"},{"location":"Datebase%20Systems/CMU15-445/","title":"CMU 15-445: Database Systems","text":""},{"location":"Datebase%20Systems/CMU15-445/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aCMU
  • \u5148\u4fee\u8981\u6c42\uff1aC++\uff0c\u6570\u636e\u7ed3\u6784\u4e0e\u7b97\u6cd5\uff0cCMU 15-213 (A.K.A. CS:APP\uff0c\u8fd9\u4e5f\u662f CMU \u5185\u90e8\u5bf9\u6bcf\u5e74 Enroll \u540c\u5b66\u7684\u5148\u4fee\u8981\u6c42)
  • \u7f16\u7a0b\u8bed\u8a00\uff1aC++
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a100 \u5c0f\u65f6

\u4f5c\u4e3a CMU \u6570\u636e\u5e93\u7684\u5165\u95e8\u8bfe\uff0c\u8fd9\u95e8\u8bfe\u7531\u6570\u636e\u5e93\u9886\u57df\u7684\u5927\u725b Andy Pavlo \u8bb2\u6388\uff08\u201c\u8fd9\u4e2a\u4e16\u754c\u4e0a\u6211\u53ea\u5728\u4e4e\u4e24\u4ef6\u4e8b\uff0c\u4e00\u662f\u6211\u7684\u8001\u5a46\uff0c\u4e8c\u5c31\u662f\u6570\u636e\u5e93\u201d\uff09\u3002

\u8fd9\u662f\u4e00\u95e8\u8d28\u91cf\u6781\u9ad8\uff0c\u8d44\u6e90\u6781\u9f50\u5168\u7684 Database \u5165\u95e8\u8bfe\uff0c\u8fd9\u95e8\u8bfe\u7684 Faculty \u548c\u80cc\u540e\u7684 CMU Database Group \u5c06\u8bfe\u7a0b\u5bf9\u5e94\u7684\u57fa\u7840\u8bbe\u65bd (Autograder, Discord) \u548c\u8bfe\u7a0b\u8d44\u6599 (Lectures, Notes, Homework) \u5b8c\u5168\u5f00\u6e90\uff0c\u8ba9\u6bcf\u4e00\u4e2a\u613f\u610f\u5b66\u4e60\u6570\u636e\u5e93\u7684\u540c\u5b66\u90fd\u53ef\u4ee5\u4eab\u53d7\u5230\u51e0\u4e4e\u7b49\u540c\u4e8e CMU \u672c\u6821\u5b66\u751f\u7684\u8bfe\u7a0b\u4f53\u9a8c\u3002

\u8fd9\u95e8\u8bfe\u7684\u4eae\u70b9\u5728\u4e8e CMU Database Group \u4e13\u95e8\u4e3a\u6b64\u8bfe\u5f00\u53d1\u4e86\u4e00\u4e2a\u6559\u5b66\u7528\u7684\u5173\u7cfb\u578b\u6570\u636e\u5e93 bustub\uff0c\u5e76\u8981\u6c42\u4f60\u5bf9\u8fd9\u4e2a\u6570\u636e\u5e93\u7684\u7ec4\u6210\u90e8\u5206\u8fdb\u884c\u4fee\u6539\uff0c\u5b9e\u73b0\u4e0a\u8ff0\u90e8\u4ef6\u7684\u529f\u80fd\u3002

\u5177\u4f53\u6765\u8bf4\uff0c\u5728 15-445 \u4e2d\u4f60\u9700\u8981\u5728\u56db\u4e2a Project \u7684\u63a8\u8fdb\u4e2d\uff0c\u5b9e\u73b0\u4e00\u4e2a\u9762\u5411\u78c1\u76d8\u7684\u4f20\u7edf\u5173\u7cfb\u578b\u6570\u636e\u5e93 Bustub \u4e2d\u7684\u90e8\u5206\u5173\u952e\u7ec4\u4ef6\u3002

\u5305\u62ec Buffer Pool Manager (\u5185\u5b58\u7ba1\u7406), B Plus Tree (\u5b58\u50a8\u5f15\u64ce), Query Executors & Query Optimizer (\u7b97\u5b50\u4eec & \u4f18\u5316\u5668), Concurrency Control (\u5e76\u53d1\u63a7\u5236)\uff0c\u5206\u522b\u5bf9\u5e94 Project #1 \u5230 Project #4\u3002

\u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c\u540c\u5b66\u4eec\u5728\u5b9e\u73b0\u7684\u8fc7\u7a0b\u4e2d\u53ef\u4ee5\u901a\u8fc7 shell.cpp \u7f16\u8bd1\u51fa bustub-shell \u6765\u5b9e\u65f6\u5730\u89c2\u6d4b\u81ea\u5df1\u5b9e\u73b0\u90e8\u4ef6\u7684\u6b63\u786e\u4e0e\u5426\uff0c\u6b63\u53cd\u9988\u975e\u5e38\u8db3\u3002

\u6b64\u5916 bustub \u4f5c\u4e3a\u4e00\u4e2a C++ \u7f16\u5199\u7684\u4e2d\u5c0f\u578b\u9879\u76ee\u6db5\u76d6\u4e86\u7a0b\u5e8f\u6784\u5efa\u3001\u4ee3\u7801\u89c4\u8303\u3001\u5355\u5143\u6d4b\u8bd5\u7b49\u4f17\u591a\u8981\u6c42\uff0c\u53ef\u4ee5\u4f5c\u4e3a\u4e00\u4e2a\u4f18\u79c0\u7684\u5f00\u6e90\u9879\u76ee\u5b66\u4e60\u3002

"},{"location":"Datebase%20Systems/CMU15-445/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1aFall 2019, Fall 2020, Fall 2021, Fall 2022, Spring 2023, Fall 2023, Spring 2024
  • \u8bfe\u7a0b\u89c6\u9891\uff1a\u8bfe\u7a0b\u7f51\u7ad9\u514d\u8d39\u89c2\u770b, Fall 2023 \u7684 Youtube \u5168\u5f00\u6e90 Lectures
  • \u8bfe\u7a0b\u6559\u6750\uff1aDatabase System Concepts
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1a5 \u4e2a Project \u548c 5 \u4e2a Homework

\u5728 Fall 2019 \u4e2d\uff0cProject #2 \u662f\u505a\u54c8\u5e0c\u7d22\u5f15\uff0cProject #4 \u662f\u505a\u65e5\u5fd7\u4e0e\u6062\u590d\u3002

\u5728 Fall 2020 \u4e2d\uff0cProject #2 \u662f\u505a B \u6811\uff0cProject #4 \u662f\u505a\u5e76\u53d1\u63a7\u5236\u3002

\u5728 Fall 2021 \u4e2d\uff0cProject #1 \u662f\u505a\u7f13\u5b58\u6c60\u7ba1\u7406\uff0cProject #2 \u662f\u505a\u54c8\u5e0c\u7d22\u5f15\uff0cProject #4 \u662f\u505a\u5e76\u53d1\u63a7\u5236\u3002

\u5728 Fall 2022 \u4e2d\uff0c\u4e0e Fall 2021 \u76f8\u6bd4\u53ea\u6709\u54c8\u5e0c\u7d22\u5f15\u6362\u6210\u4e86 B+ \u6811\u7d22\u5f15\uff0c\u5176\u4f59\u90fd\u4e00\u6837\u3002

\u5728 Spring 2023 \u4e2d\uff0c\u5927\u4f53\u5185\u5bb9\u548c Fall 2022 \u4e00\u6837\uff08\u7f13\u5b58\u6c60\uff0cB+ \u6811\u7d22\u5f15\uff0c\u7b97\u5b50\uff0c\u5e76\u53d1\u63a7\u5236\uff09\uff0c\u53ea\u4e0d\u8fc7 Project #0 \u6362\u6210\u4e86 Copy-On-Write Trie\uff0c\u540c\u65f6\u589e\u52a0\u4e86\u5f88\u597d\u73a9\u7684\u6ce8\u518c\u5927\u5c0f\u5199\u51fd\u6570\u7684 Task\uff0c\u53ef\u4ee5\u76f4\u63a5\u5728\u7f16\u8bd1\u51fa\u7684 bustub-shell \u4e2d\u770b\u5230\u81ea\u5df1\u5199\u7684\u51fd\u6570\u7684\u5b9e\u9645\u6548\u679c\uff0c\u975e\u5e38\u6709\u6210\u5c31\u611f\u3002

\u503c\u5f97\u6ce8\u610f\u7684\u662f\uff0c\u73b0\u5728 bustub \u5728 2020 \u5e74\u4ee5\u524d\u7684 version \u90fd\u5df2\u7ecf\u505c\u6b62\u7ef4\u62a4\u3002

Fall 2019 \u7684\u6700\u540e\u4e00\u4e2a Logging & Recovery \u7684 Project \u5df2\u7ecf broken \u4e86\uff08\u572819\u5e74\u7684 git head \u4e0a\u4e5f\u8bb8\u8fd8\u53ef\u4ee5\u8dd1\uff0c\u4f46\u5c3d\u7ba1\u5982\u6b64 Gradescope \u5e94\u8be5\u4e5f\u6ca1\u6709\u63d0\u4f9b\u516c\u5171\u7684\u7248\u672c\uff0c\u6240\u4ee5\u5e76\u4e0d\u63a8\u8350\u5927\u5bb6\u53bb\u505a\uff0c\u53ea\u770b\u770b\u4ee3\u7801\u548c Handout \u5c31\u53ef\u4ee5\u4e86\uff09\u3002

\u6216\u8bb8\u5728 Fall 2023 \u7684\u7248\u672c Recovery \u76f8\u5173\u7684\u529f\u80fd\u4f1a\u88ab\u4fee\u590d\uff0c\u5c4a\u65f6\u4e5f\u53ef\u80fd\u6709\u5168\u65b0\u7684 Recovery Project\uff0c\u8ba9\u6211\u4eec\u8bd5\u76ee\u4ee5\u5f85\u5427\ud83e\udd2a

\u5982\u679c\u5927\u5bb6\u6709\u7cbe\u529b\u7684\u8bdd\u53ef\u4ee5\u90fd\u53bb\u5c1d\u8bd5\u4e00\u4e0b\uff0c\u6216\u8005\u5728\u5bf9\u4e66\u4e2d\u5185\u5bb9\u7406\u89e3\u4e0d\u662f\u5f88\u900f\u5f7b\u7684\u65f6\u5019\uff0c\u5c1d\u8bd5\u505a\u4e00\u505a\u5bf9\u5e94\u7684 Project \u4f1a\u52a0\u6df1\u4f60\u7684\u7406\u89e3\uff08\u4e2a\u4eba\u5efa\u8bae\u8fd8\u662f\u8981\u5168\u90e8\u505a\u5b8c\uff0c\u76f8\u4fe1\u4e00\u5b9a\u5bf9\u4f60\u6709\u5e2e\u52a9\uff09\u3002

\u6b64\u5916\uff0cCMU\u6570\u636e\u5e93\u56e2\u961f\u8fd8\u6709\u4e00\u4e2aDB with ML\u7684\u516c\u5f00\u8bb2\u5ea7\u7cfb\u5217\uff1aML\u21c4DB Seminar Series

"},{"location":"Datebase%20Systems/CMU15-445/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"

\u975e\u5b98\u65b9\u7684 Discord \u662f\u4e00\u4e2a\u5f88\u597d\u7684\u4ea4\u6d41\u5e73\u53f0\uff0c\u8fc7\u5f80\u7684\u804a\u5929\u8bb0\u5f55\u51e0\u4e4e\u8bb0\u8f7d\u4e86\u5176\u4ed6\u540c\u5b66\u8e29\u8fc7\u7684\u5751\uff0c\u4f60\u4e5f\u53ef\u4ee5\u63d0\u51fa\u4f60\u7684\u95ee\u9898\uff0c\u6216\u8005\u5e2e\u5fd9\u89e3\u7b54\u522b\u4eba\u7684\u95ee\u9898\uff0c\u76f8\u4fe1\u8fd9\u662f\u4e00\u4efd\u5f88\u597d\u7684\u53c2\u8003\u3002

\u5173\u4e8e Spring 2023 \u7684\u901a\u5173\u6307\u5357\uff0c\u53ef\u4ee5\u53c2\u8003 @xzhseh \u7684\u8fd9\u7bc7CMU 15-445/645 (Spring 2023) Database Systems \u901a\u5173\u6307\u5317\uff0c\u91cc\u9762\u6db5\u76d6\u4e86\u5168\u90e8\u4f60\u9700\u8981\u7684\u901a\u5173\u9053\u5177\uff0c\u548c\u901a\u5173\u65b9\u5f0f\u5efa\u8bae\uff0c\u4ee5\u53ca\u6700\u91cd\u8981\u7684\uff0c\u6211\u81ea\u5df1\u5728\u505a Project \u7684\u8fc7\u7a0b\u4e2d\u9047\u5230\u7684\uff0c\u770b\u5230\u7684\uff0c\u548c\u81ea\u5df1\u4eb2\u81ea\u8e29\u8fc7\u7684\u5751\u3002

@ysj1173886760 \u5728\u5b66\u4e60\u8fd9\u95e8\u8bfe\u4e2d\u7528\u5230\u7684\u6240\u6709\u8d44\u6e90\u548c\u4f5c\u4e1a\u5b9e\u73b0\u90fd\u6c47\u603b\u5728 ysj1173886760/Learning: db - GitHub \u4e2d\u3002

\u7531\u4e8e Andy \u7684\u8981\u6c42\uff0c\u4ed3\u5e93\u4e2d\u6ca1\u6709 Project \u7684\u5b9e\u73b0\uff0c\u53ea\u6709 Homework \u7684 Solution\u3002\u7279\u522b\u7684\uff0c\u5bf9\u4e8e Homework1\uff0c@ysj1173886760 \u8fd8\u5199\u4e86\u4e00\u4e2a Shell \u811a\u672c\u6765\u5e2e\u5927\u5bb6\u6267\u884c\u81ea\u52a8\u5224\u5206\u3002

\u53e6\u5916\u5728\u8bfe\u7a0b\u7ed3\u675f\u540e\uff0c\u63a8\u8350\u9605\u8bfb\u4e00\u7bc7\u8bba\u6587 Architecture Of a Database System\uff0c\u5bf9\u5e94\u7684\u4e2d\u6587\u7248\u4e5f\u5728\u4e0a\u8ff0\u4ed3\u5e93\u4e2d\u3002\u8bba\u6587\u91cc\u7efc\u8ff0\u4e86\u6570\u636e\u5e93\u7cfb\u7edf\u7684\u6574\u4f53\u67b6\u6784\uff0c\u8ba9\u5927\u5bb6\u53ef\u4ee5\u5bf9\u6570\u636e\u5e93\u6709\u4e00\u4e2a\u66f4\u52a0\u5168\u9762\u7684\u89c6\u91ce\u3002

"},{"location":"Datebase%20Systems/CMU15-445/#_4","title":"\u540e\u7eed\u8bfe\u7a0b","text":"

CMU15-721 \u4e3b\u8981\u8bb2\u4e3b\u5b58\u6570\u636e\u5e93\u6709\u5173\u7684\u5185\u5bb9\uff0c\u6bcf\u8282\u8bfe\u90fd\u6709\u5bf9\u5e94\u7684 paper \u8981\u8bfb\uff0c\u63a8\u8350\u7ed9\u5e0c\u671b\u8fdb\u9636\u6570\u636e\u5e93\u7684\u5c0f\u4f19\u4f34\u3002@ysj1173886760 \u76ee\u524d\u4e5f\u5728\u8ddf\u8fdb\u8fd9\u95e8\u8bfe\uff0c\u5b8c\u6210\u540e\u4f1a\u5728\u8fd9\u91cc\u63d0 PR \u4ee5\u63d0\u4f9b\u8fdb\u9636\u7684\u6307\u5bfc\u3002

"},{"location":"Deep%20Learning/CS224n/","title":"CS224n: Natural Language Processing","text":""},{"location":"Deep%20Learning/CS224n/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • \u6240\u5c5e\u5927\u5b66\uff1aStanford
  • \u5148\u4fee\u8981\u6c42\uff1a\u6df1\u5ea6\u5b66\u4e60\u57fa\u7840 + Python
  • \u7f16\u7a0b\u8bed\u8a00\uff1aPython
  • \u8bfe\u7a0b\u96be\u5ea6\uff1a\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f\ud83c\udf1f
  • \u9884\u8ba1\u5b66\u65f6\uff1a80 \u5c0f\u65f6

Stanford \u7684 NLP \u5165\u95e8\u8bfe\u7a0b\uff0c\u7531\u81ea\u7136\u8bed\u8a00\u5904\u7406\u9886\u57df\u7684\u5de8\u4f6c Chris Manning \u9886\u8854\u6559\u6388\uff08word2vec \u7b97\u6cd5\u7684\u5f00\u521b\u8005\uff09\u3002\u5185\u5bb9\u8986\u76d6\u4e86\u8bcd\u5411\u91cf\u3001RNN\u3001LSTM\u3001Seq2Seq \u6a21\u578b\u3001\u673a\u5668\u7ffb\u8bd1\u3001\u6ce8\u610f\u529b\u673a\u5236\u3001Transformer \u7b49\u7b49 NLP \u9886\u57df\u7684\u6838\u5fc3\u77e5\u8bc6\u70b9\u3002

5 \u4e2a\u7f16\u7a0b\u4f5c\u4e1a\u96be\u5ea6\u5faa\u5e8f\u6e10\u8fdb\uff0c\u5206\u522b\u662f\u8bcd\u5411\u91cf\u3001word2vec \u7b97\u6cd5\u3001Dependency parsing\u3001\u673a\u5668\u7ffb\u8bd1\u4ee5\u53ca Transformer \u7684 fine-tune\u3002

\u6700\u7ec8\u7684\u5927\u4f5c\u4e1a\u662f\u5728 Stanford \u8457\u540d\u7684 SQuAD \u6570\u636e\u96c6\u4e0a\u8bad\u7ec3 QA \u6a21\u578b\uff0c\u6709\u5b66\u751f\u7684\u5927\u4f5c\u4e1a\u751a\u81f3\u76f4\u63a5\u53d1\u8868\u4e86\u9876\u4f1a\u8bba\u6587\u3002

"},{"location":"Deep%20Learning/CS224n/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttp://web.stanford.edu/class/cs224n/index.html
  • \u8bfe\u7a0b\u89c6\u9891\uff1aB \u7ad9\u641c\u7d22 CS224n
  • \u8bfe\u7a0b\u6559\u6750\uff1a\u65e0
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttp://web.stanford.edu/class/cs224n/index.html\uff0c5 \u4e2a\u7f16\u7a0b\u4f5c\u4e1a + 1 \u4e2a Final Project
"},{"location":"Deep%20Learning/CS224n/#_3","title":"\u8d44\u6e90\u6c47\u603b","text":"

@PKUFlyingPig \u5728\u5b66\u4e60\u8fd9\u95e8\u8bfe\u4e2d\u7528\u5230\u7684\u6240\u6709\u8d44\u6e90\u548c\u4f5c\u4e1a\u5b9e\u73b0\u90fd\u6c47\u603b\u5728 PKUFlyingPig/CS224n - GitHub \u4e2d\u3002

"},{"location":"Functional%20Programming/Haskell/","title":"Haskell","text":"

Haskell is a purely functional programming language. It is known for its expressive static type system, lazy evaluation, and reliability, and it is used in industry for building large-scale software systems.

"},{"location":"Functional%20Programming/Haskell/#resources","title":"Resources","text":"
  • \u8bfe\u7a0b\u7f51\u7ad9\uff1ahttps://haskell.mooc.fi/
  • \u8bfe\u7a0b\u4f5c\u4e1a\uff1ahttps://github.com/moocfi/haskell-mooc
  • \u793e\u533a\uff1ahttps://t.me/haskell_mooc_fi
"},{"location":"Functional%20Programming/Lean4/","title":"Lean4","text":"

Lean 4 is both a functional programming language and an interactive theorem prover, originally developed at Microsoft Research. It is based on dependent type theory, compiles to efficient native code, and is designed to be extensible, with much of the system, including its tactic framework, written in Lean itself.
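
A tiny taste of Lean 4 as both a language and a proof assistant, as a sketch that relies only on the core library:

-- Evaluate an expression\n#eval 2 + 2            -- 4\n\n-- Prove a simple arithmetic fact with a core-library lemma\nexample (a b : Nat) : a + b = b + a :=\n  Nat.add_comm a b\n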

"},{"location":"Functional%20Programming/Lean4/#resources","title":"Resources","text":"
  • Lean4 Documentation
  • Lean4 Tutorial
  • Lean4 Book
  • Lean4 Docs
"},{"location":"LLM%20Development/LangChain/","title":"LangChain","text":"

LangChain is an open-source framework for building applications powered by large language models (LLMs). It provides composable building blocks, prompt templates, chains, agents, memory, and output parsers, together with integrations for model providers, vector stores, and document loaders.

The framework is modular and extensible: you can start with a single prompt-plus-model call and grow the application into retrieval-augmented generation (RAG) pipelines, chatbots with conversation memory, or agents that call external tools. Clear documentation and tutorials make it reasonably approachable for newcomers.

LangChain is community-driven, developed in the open, and available in both Python and JavaScript/TypeScript, with all code and documentation freely available for anyone to use and contribute to.
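
As a very small sketch (import paths move around between LangChain versions, and no model call is made here), a prompt template can be defined and filled in like this:

from langchain.prompts import PromptTemplate\n\n# Build a reusable prompt with a named input variable\nprompt = PromptTemplate.from_template(\"Summarize the following text in one sentence: {text}\")\n\nprint(prompt.format(text=\"LangChain provides building blocks for LLM applications.\"))\n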

"},{"location":"LLM%20Development/Llama2/","title":"Llama2","text":""},{"location":"LLM%20Development/Llama2/#localizing-llama2","title":"Localizing Llama2","text":"

\u53ef\u4ee5\u53c2\u8003\u4ee5\u4e0b\u8d44\u6599\u4ee5\u90e8\u7f72Llama2\uff1a

  • LLM\u63a2\u7d22\uff1a\u73af\u5883\u642d\u5efa\u4e0e\u6a21\u578b\u672c\u5730\u90e8\u7f72
  • \u672c\u5730\u90e8\u7f72\u5f00\u6e90\u5927\u6a21\u578b\u7684\u5b8c\u6574\u6559\u7a0b\uff1aLangChain + Streamlit+ Llama
"},{"location":"LLM%20Development/Llama2/#introduction","title":"Introduction","text":"

Llama2 is the second generation of Meta AI's Llama family of open large language models, released in July 2023. It comes in 7B, 13B, and 70B parameter sizes, each available both as a pretrained base model and as a chat-tuned variant (Llama-2-Chat) aligned with supervised fine-tuning and RLHF.

Compared with the original LLaMA, Llama2 was trained on more data (about 2 trillion tokens), doubles the context window to 4096 tokens, and is released under a community license that permits research and most commercial use.

Because the weights are openly available, Llama2 can be run locally or self-hosted. Common options include Hugging Face transformers, llama.cpp with quantized weights, and higher-level wrappers such as Ollama; the Chinese guides linked above walk through a local deployment with LangChain and Streamlit.

"},{"location":"LLM%20Development/Llama2/#deploy","title":"Deploy","text":"

To deploy Llama2 locally, the usual steps are the following (a minimal inference sketch follows the list):

  1. Accept the license and download the model weights, either through Meta's official request form or from Hugging Face.
  2. Choose a runtime: Hugging Face transformers for GPU inference, or llama.cpp / Ollama for quantized inference on more modest hardware.
  3. Load the model and expose it to your application, for example behind the LangChain + Streamlit front end described in the guides linked above.
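
A minimal inference sketch with Hugging Face transformers, assuming access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint has been granted to your account and that a GPU with enough memory (or considerable patience on CPU) is available:

from transformers import AutoModelForCausalLM, AutoTokenizer\n\n# Gated repo: requires accepting Meta's license on Hugging Face first\nmodel_id = \"meta-llama/Llama-2-7b-chat-hf\"\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"auto\")  # device_map needs the accelerate package\n\ninputs = tokenizer(\"What is the capital of France?\", return_tensors=\"pt\").to(model.device)\noutputs = model.generate(**inputs, max_new_tokens=50)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\n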
"},{"location":"LLM%20Development/Llama2/#features","title":"Features","text":"

Compared with the first-generation LLaMA models, the key characteristics of Llama2 are:

  1. Three released model sizes: 7B, 13B, and 70B parameters.
  2. Both pretrained base models and chat-tuned variants (Llama-2-Chat) aligned with supervised fine-tuning and RLHF.
  3. A 4096-token context window, double that of the original LLaMA.
  4. Pretraining on roughly 2 trillion tokens of publicly available data.
  5. Grouped-query attention in the 70B model for faster inference.
  6. A community license that allows research use and most commercial use.
  7. Openly downloadable weights, which make quantization (for example 4-bit via llama.cpp) and fully local deployment practical.
"},{"location":"LLM%20Development/Llama2/#conclusion","title":"Conclusion","text":"

In short, Llama2 is an openly licensed family of large language models that can be fine-tuned and deployed on your own hardware. That openness is what makes local setups such as the LangChain + Streamlit deployment referenced at the top of this page possible.

"},{"location":"Machine%20Learning/CS189/","title":"CS189: Introduction to Machine Learning","text":""},{"location":"Machine%20Learning/CS189/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • University: UC Berkeley
  • Prerequisites: CS188, CS70
  • Programming language: Python
  • Difficulty: 🌟🌟🌟🌟
  • Estimated study time: 100 hours

I have not taken this course systematically; I only consult its course notes as a reference. Judging from the course website, though, its advantage over CS229 is that all of the homework code and the Gradescope autograder are open-sourced. Like CS229, the course is taught in a fairly theoretical and in-depth way.

"},{"location":"Machine%20Learning/CS189/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • Course website: https://www.eecs189.org/
  • Course schedule: https://people.eecs.berkeley.edu/~jrs/189/
  • Course videos: https://www.youtube.com/playlist?list=PLCuQm2FL98HTlRmlwMk2AuFEM9n1c06HE
  • Course textbook: https://www.eecs189.org/
  • Course assignments: https://www.eecs189.org/
"},{"location":"OOP/C%2B%2B/","title":"C++","text":"

This part mainly introduces some of C++'s newer features.

"},{"location":"OOP/C%2B%2B/#_1","title":"\u79fb\u52a8\u8bed\u4e49","text":"

For move semantics, the following material is a good reference:

  • 移动语义一文入魂 (an in-depth Chinese article on move semantics)
"},{"location":"OOP/C%2B%2B/#modern-c","title":"Modern C++\u8bed\u6cd5\u7279\u6027","text":"
  • CppCoreGuidelines
  • Modern C++
"},{"location":"OOP/Java/","title":"Java","text":"

Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

"},{"location":"Potpourri/CUDA/","title":"CUDA","text":"

CUDA is a parallel computing platform and programming model created by Nvidia. It allows developers to write high-performance parallel applications in C, C++, and Fortran, with bindings available for many other languages. CUDA provides a rich set of APIs for parallel programming, including parallel thread execution, memory management, and device management. It also includes a compiler toolchain (nvcc) that generates optimized code for NVIDIA GPUs, with host-side support for x86-64, Arm, and PowerPC platforms. CUDA is widely used in scientific computing, graphics processing, and machine learning applications.

CUDA is available as a free download for Windows and Linux (macOS support was discontinued after CUDA 10.2). It is also available on popular cloud computing platforms such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

CUDA is widely used across a broad range of applications and is a good choice for developers who need to write high-performance parallel code for NVIDIA GPUs.

"},{"location":"Potpourri/CUDA/#cuda-programming-model","title":"CUDA Programming Model","text":"

The CUDA programming model is based on C/C++ extended with CUDA-specific constructs (CUDA C/C++). CUDA C/C++ adds built-in functions, qualifiers, and kernel-launch syntax for writing parallel code, and it is compiled by nvcc into an executable whose kernels run on the GPU.

The CUDA programming model consists of several components:

  • Host code: the code that executes on the CPU. It allocates memory, copies data between host and device, and launches kernels on the GPU.
  • Device code: the code that executes on the GPU. It is written in CUDA C/C++ as kernels, and each kernel is executed in parallel by many GPU threads.
"},{"location":"Potpourri/CUDA/#c-and-cuda-cc","title":"C++ and CUDA C/C++","text":"

C++ is a general-purpose programming language that is widely used in software development. CUDA C/C++ is not so much a separate language as an extension of C++: it adds keywords, qualifiers, and built-in functions that are specifically designed for parallel computing on the GPU.

Both are high-level languages used to write demanding applications, but ordinary C++ targets the CPU and can run on any platform, while CUDA C/C++ code is compiled by nvcc into executables whose kernels run on NVIDIA GPUs.

"},{"location":"Potpourri/CUDA/#cuda-libraries","title":"CUDA Libraries","text":"

CUDA provides a rich set of libraries and APIs that can be used to develop parallel applications. These include the CUDA Runtime and Driver APIs, math libraries such as cuBLAS, cuFFT, and cuRAND, the Thrust C++ template library for parallel algorithms, Cooperative Groups, texture and surface memory APIs, dynamic parallelism, and graphics interoperability with Direct3D, OpenGL, and Vulkan.

These libraries are designed to work with CUDA C/C++ and expose well-tuned GPU implementations of common operations, so applications can get good performance and lower memory overhead without hand-writing every kernel.

"},{"location":"Potpourri/CUDA/#cuda-programming-tools","title":"CUDA Programming Tools","text":"

CUDA provides a set of tools for developing and debugging CUDA applications. These include the nvcc compiler, the cuda-gdb debugger, the Nsight Systems and Nsight Compute profilers, and compute-sanitizer (formerly cuda-memcheck) for detecting memory errors, alongside libraries such as Thrust and Python bindings such as CUDA Python.

nvcc compiles CUDA C/C++ code into executables. cuda-gdb is used to debug host and device code. Nsight Systems and Nsight Compute profile whole applications and individual kernels, respectively. compute-sanitizer detects memory errors and race conditions. Thrust is a C++ template library for writing parallel algorithms, and CUDA Python provides Python bindings to the CUDA driver and runtime APIs.
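
As a rough illustration of what GPU programming from Python can look like, here is a minimal sketch using the third-party Numba compiler (one of several Python options, alongside the CUDA Python bindings mentioned above); it assumes an NVIDIA GPU and a working CUDA installation:

import numpy as np\nfrom numba import cuda\n\n@cuda.jit\ndef scale(arr, factor):\n    # Each GPU thread handles one element of the array\n    i = cuda.grid(1)\n    if i < arr.size:\n        arr[i] *= factor\n\ndata = np.arange(16, dtype=np.float32)\nd_data = cuda.to_device(data)                    # copy host data to the GPU\nthreads_per_block = 32\nblocks = (data.size + threads_per_block - 1) // threads_per_block\nscale[blocks, threads_per_block](d_data, 2.0)    # launch the kernel\nprint(d_data.copy_to_host())                     # copy the result back\n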

CUDA tools can be used to develop and debug CUDA applications. They can help to identify and fix errors in CUDA applications, optimize performance, and reduce memory usage.

"},{"location":"Potpourri/Huggingface/","title":"Huggingface","text":"

Hugging Face's transformers is a popular NLP library that provides many pre-trained models for tasks such as text classification, named entity recognition, and question answering. It also provides a simple interface for running these models and for fine-tuning them on custom datasets.

Here is an example of using a pre-trained pipeline for sentiment analysis:

from transformers import pipeline\n\n# Load a pre-trained sentiment-analysis pipeline\nclassifier = pipeline(\"sentiment-analysis\")\n\n# Sample sentences to classify\nsentences = [\n    \"I love this movie\",\n    \"This is a terrible movie\",\n    \"I hate it\",\n    \"I'm so happy today\",\n]\n\n# Run the pipeline; each result has a label and a score\nfor sentence, result in zip(sentences, classifier(sentences)):\n    print(sentence, \"->\", result[\"label\"], result[\"score\"])\n

This code loads a pre-trained sentiment-analysis pipeline and runs it on a few sample sentences. For each sentence, the output is a dictionary containing the predicted label and its score.
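
Fine-tuning on a custom dataset goes through the Trainer API rather than the pipeline object itself. Below is a minimal, hedged sketch: the base model name, the tiny inline dataset, and the hyperparameters are purely illustrative, and it additionally requires the datasets package:

from datasets import Dataset\nfrom transformers import (AutoModelForSequenceClassification, AutoTokenizer,\n                          Trainer, TrainingArguments)\n\nmodel_name = \"distilbert-base-uncased\"  # illustrative choice of base model\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)\n\n# Tiny illustrative dataset: label 1 = positive, 0 = negative\ndataset = Dataset.from_dict({\n    \"text\": [\"I love this movie\", \"This is a terrible movie\"],\n    \"label\": [1, 0],\n})\n\ndef tokenize(batch):\n    return tokenizer(batch[\"text\"], truncation=True, padding=\"max_length\", max_length=64)\n\ndataset = dataset.map(tokenize, batched=True)\n\nargs = TrainingArguments(output_dir=\"out\", num_train_epochs=1, per_device_train_batch_size=2)\ntrainer = Trainer(model=model, args=args, train_dataset=dataset)\ntrainer.train()\n

After training, the fine-tuned model and tokenizer can be wrapped back into a pipeline with pipeline(\"sentiment-analysis\", model=model, tokenizer=tokenizer).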

You can also use Huggingface to perform other NLP tasks such as text classification, named entity recognition, and question answering.
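
For instance, a short example of named entity recognition with the same pipeline interface (aggregation_strategy=\"simple\" groups sub-word tokens back into whole entities):

from transformers import pipeline\n\nner = pipeline(\"ner\", aggregation_strategy=\"simple\")\n\n# Each entity comes back with its text, its type (PER, ORG, LOC, ...) and a score\nfor entity in ner(\"Hugging Face is based in New York City\"):\n    print(entity[\"word\"], entity[\"entity_group\"], entity[\"score\"])\n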

"},{"location":"Potpourri/Manim/","title":"Manim","text":""},{"location":"Potpourri/Manim/#introduction","title":"Introduction","text":"

Manim is a Python library for creating mathematical animations, which is based on the idea of creating mathematical objects and transforming them over time. It is an open-source project and is maintained by the community. It is used to create visualizations, simulations, and animations for a wide range of applications, including computer science, mathematics, physics, and more.

"},{"location":"Potpourri/Manim/#installation","title":"Installation","text":"

To install Manim, you need to have Python installed on your system. You can download and install Python from the official website. Once you have Python installed, you can install Manim using the following command:

pip install manim\n

This will install Manim and all its dependencies.

"},{"location":"Potpourri/Manim/#creating-animations","title":"Creating Animations","text":"

To create an animation, you need to create a Python file and use the Scene class from Manim. Here is an example:

from manim import *\n\nclass SquareToCircle(Scene):\n    def construct(self):\n        square = Square()\n        circle = Circle()\n        self.play(Transform(square, circle))\n
  1. The Scene class is imported from Manim.
  2. A new class called SquareToCircle is created which inherits from the Scene class.
  3. The construct method is defined which is the entry point for the animation.
  4. Two objects, a square and a circle, are created.
  5. The Transform animation is played, which transforms the square into the circle.
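
A slightly richer scene, as a sketch of a few other commonly used pieces of the Manim Community API (Text, Write, Create, the .animate syntax, and wait):

from manim import *\n\nclass HelloManim(Scene):\n    def construct(self):\n        title = Text(\"Hello, Manim!\")\n        circle = Circle(color=BLUE)\n        self.play(Write(title))                 # animate the text being written\n        self.play(title.animate.to_edge(UP))    # slide the text to the top edge\n        self.play(Create(circle))               # draw the circle\n        self.wait(1)                            # hold the final frame for a second\n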
"},{"location":"Potpourri/Manim/#running-animations","title":"Running Animations","text":"

To run the animation, you need to save the Python file and run it using the following command:

manim example.py SquareToCircle\n

This will run the animation and save the output as a video file. You can specify the resolution, frame rate, and other options using the command line arguments.

"},{"location":"Potpourri/Manim/#resources","title":"Resources","text":"
  • Manim Website
  • Manim Documentation
  • Manim GitHub Repository
"},{"location":"Potpourri/SocketIO/","title":"Socket.IO","text":"

Socket.IO is a real-time communication framework that enables bidirectional communication between the client and the server. It uses WebSocket as its primary transport, falling back to HTTP long-polling when necessary, and provides a simple event-based API. The reference implementation is a JavaScript client that runs in the browser together with a Node.js server, and compatible implementations exist for other languages, including Python.

"},{"location":"Potpourri/SocketIO/#resources","title":"Resources","text":"
  • Socket.IO Documentation
  • Socket.IO Client-Side Library
  • Socket.IO Server-Side Library
  • Flask-SocketIO Documentation
"},{"location":"Potpourri/SocketIO/#socketio-in-python","title":"Socket.IO in Python","text":"

Socket.IO can be used in Python using the python-socketio library. The library provides a client-side and a server-side implementation. The client-side implementation is used to connect to the server and send and receive messages. The server-side implementation is used to handle incoming connections and send messages to the clients.

Here's an example of how to use Socket.IO in Python:

import socketio\n\nsio = socketio.Client()\n\n@sio.event\ndef connect():\n    print('connection established')\n\n\n@sio.event\ndef message(data):\n    print('message received with ', data)\n    sio.emit('response', {'response': 'my response'})\n\n\n@sio.event\ndef disconnect():\n    print('disconnected from server')\n\n\nsio.connect('http://localhost:5000')\n
  1. First, we import the socketio library.
  2. We create a socketio.Client object.
  3. We define three event handlers: connect, message, and disconnect.
  4. In the connect event handler, we print a message to indicate that the connection has been established.
  5. In the message event handler, we print the received message and send a response using the emit method.
  6. In the disconnect event handler, we print a message to indicate that the connection has been lost.
  7. We connect to the server using the connect method and pass the URL of the server as an argument.

Note that the emit method is used to send a message to the server. The first argument is the event name, and the second argument is the data to be sent. In this example, the client replies to the server using the response event name.

The server-side implementation is a bit more complex, but it can be done using the flask-socketio library. Here's an example of how to use Socket.IO in Python with Flask:

from flask import Flask, render_template\nfrom flask_socketio import SocketIO, emit\n\napp = Flask(__name__)\napp.config['SECRET_KEY'] ='secret!'\nsocketio = SocketIO(app)\n\n@app.route('/')\ndef index():\n    return render_template('index.html')\n\n@socketio.on('connect')\ndef connect():\n    print('connected')\n\n@socketio.on('message')\ndef message(data):\n    print('message received with ', data)\n    emit('response', {'response': 'my response'})\n\n@socketio.on('disconnect')\ndef disconnect():\n    print('disconnected')\n\n\nif __name__ == '__main__':\n    socketio.run(app, debug=True)\n
  1. First, we import Flask and render_template from the flask library, and SocketIO and emit from the flask_socketio library.
  2. We create a Flask app object and set the SECRET_KEY configuration variable.
  3. We create a SocketIO object and pass the app object as an argument.
  4. We define three event handlers: connect, message, and disconnect.
  5. In the connect event handler, we print a message to indicate that a client has connected.
  6. In the message event handler, we print the received message and send a response using the emit function.
  7. In the disconnect event handler, we print a message to indicate that a client has disconnected.
  8. We run the app using socketio.run, passing the Flask app object as an argument (with debug=True for development).

In this example, the emit function sends a message to the client under the response event name: the first argument is the event name, and the second argument is the data to be sent.

The browser side can be implemented with the socket.io-client library. Here's an example of an HTML/JavaScript client that talks to the Flask server above:

<!DOCTYPE html>\n<html>\n<head>\n  <meta charset=\"UTF-8\">\n  <title>Socket.IO Example</title>\n  <script src=\"https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.3.0/socket.io.js\"></script>\n</head>\n<body>\n  <h1>Socket.IO Example</h1>\n  <p id=\"message\"></p>\n  <script>\n    const socket = io();\n\n    socket.on('connect', () => {\n      console.log('connected');\n      socket.emit('message', 'Hello, server!');\n    });\n\n    socket.on('response', (data) => {\n      console.log('response received with ', data);\n      document.getElementById('message').innerHTML = data.response;\n    });\n\n    socket.on('disconnect', () => {\n      console.log('disconnected');\n    });\n  </script>\n</body>\n</html>\n
  1. First, we load the Socket.IO client library from a CDN with a script tag.
  2. We create a socket object by calling io(); with no arguments, it connects to the host that served the page.
  3. We define three event handlers: connect, response, and disconnect.
  4. In the connect event handler, we print a message to indicate that the connection has been established.
  5. In the response event handler, we print the received message and update the message element with the response.
  6. In the disconnect event handler, we print a message to indicate that the connection has been lost.

Note that the emit method is used to send a message to the server. The first argument is the event name, and the second argument is the data to be sent. In this example, we send a message to the server with the message event name.

A standalone server can also be written in Node.js using the socket.io library. Here's an example of a Socket.IO server built on Express:

const express = require('express');\nconst app = express();\nconst http = require('http');\nconst server = http.createServer(app);\nconst { Server } = require('socket.io');\n\nconst io = new Server(server, {\n  cors: {\n    origin: '*',\n  },\n});\n\nio.on('connection', (socket) => {\n  console.log('a user connected');\n\n  socket.on('message', (data) => {\n    console.log('message received with ', data);\n    socket.emit('response', { response: 'my response' });\n  });\n\n  socket.on('disconnect', () => {\n    console.log('user disconnected');\n  });\n});\n\nserver.listen(3000, () => {\n  console.log('listening on *:3000');\n});\n
  1. First, we import the express, http, and socket.io libraries.
  2. We create an app object and a server object using the http library.
  3. We create a socket.io object and pass the server object as an argument.
  4. We define an event handler for the connection event.
  5. In the connection event handler, we print a message to indicate that a client has connected.
  6. We define an event handler for the message event.
  7. In the message event handler, we print the received message and send a response using the emit method.
  8. We define an event handler for the disconnect event.
  9. In the disconnect event handler, we print a message to indicate that a client has disconnected.
  10. We start the server using the listen method and pass the port number as an argument.
  11. We print a message to indicate that the server is listening on the specified port.

In this example, the emit method sends a message back to the client under the response event name; the first argument is the event name and the second argument is the data to be sent.

Note that the cors option is used to allow cross-origin requests. This is necessary when the client page is served from a different origin than the Socket.IO server.

Overall, Socket.IO is a powerful tool for real-time communication between the client and the server. It provides a simple API for real-time communication and is easy to use in Python and JavaScript.

"},{"location":"Useful%20Tools/CMake/","title":"CMake","text":"

CMake is a cross-platform build system generator. It is used to build, test, and package software. It is widely used in the open-source community and is used in many popular projects such as OpenCV, VTK, and ITK.

"},{"location":"Useful%20Tools/CMake/#installing-cmake","title":"Installing CMake","text":"

To install CMake, you can download the installer from the official website: https://cmake.org/download/.

"},{"location":"Useful%20Tools/CMake/#using-cmake","title":"Using CMake","text":"

To use CMake, you need to create a CMakeLists.txt file in the root directory of your project. This file contains all the instructions for building your project.

Here is a basic example of a CMakeLists.txt file:

cmake_minimum_required(VERSION 3.10)\n\nproject(MyProject)\n\nset(CMAKE_CXX_STANDARD 11)\n\nadd_executable(MyProject main.cpp)\n

In this example, we set the minimum required version of CMake to 3.10, set the project name to \"MyProject\", set the C++ standard to 11, and add an executable target called \"MyProject\" that compiles the \"main.cpp\" file.

To configure the project, create a separate build directory and run CMake from inside it:

mkdir build\ncd build\ncmake ..\n

This generates the native build files (for example, Makefiles) in the build directory, keeping generated files out of your source tree. You can then build the project by running:

cmake --build .\n

This builds the project and creates the executable in the build directory. You can then run the executable to run your program.

"},{"location":"Useful%20Tools/CMake/#resources","title":"Resources","text":"
  • CMake Homepage
  • Official CMake Tutorial
  • CMake Documentation
  • CMake Tutorial
  • CMake Examples
  • CMake Cheat Sheet
  • CMake FAQ
  • CMake Best Practices
  • Shanghai Jiao Tong University IPADS group onboarding tutorial
"},{"location":"Useful%20Tools/Docker/","title":"Docker","text":""},{"location":"Useful%20Tools/Docker/#docker_1","title":"Docker\u5b66\u4e60\u8d44\u6599","text":"

The official Docker documentation is of course the best beginner material, but the best teacher is always yourself: you have to actually use Docker to appreciate the convenience it brings. Docker has grown rapidly in industry and is already very mature; you can download Docker Desktop and work through its graphical interface.

Of course, if, like me, you are a fanatical fan of reinventing wheels, you might as well write a mini Docker by hand to deepen your understanding.

KodeKloud's Docker for the Absolute Beginner covers Docker's basic features comprehensively, comes with plenty of hands-on exercises, and provides a free cloud environment for completing them. The other cloud-related courses such as Kubernetes are paid, but I strongly recommend them personally: the explanations are very thorough and well suited to complete beginners, and they come with a ready-made Kubernetes lab environment, so you won't be scared away by environment setup.

"},{"location":"Useful%20Tools/GDB/","title":"GNU Debuger","text":""},{"location":"Useful%20Tools/GDB/#1-introduction","title":"1. Introduction","text":"

GNU Debugger (GDB) is a powerful command-line debugger used to inspect and analyze programs. It gives developers and system administrators a rich set of commands for controlling a program's execution, examining its state, and tracking down bugs.

In this article, we will learn how to install GDB and how to use its commands to debug a program and analyze its behavior.

"},{"location":"Useful%20Tools/GDB/#2-installing-gdb","title":"2. Installing GDB","text":"

GDB is included in most Linux distributions and can be installed using the package manager. For example, on Ubuntu, you can install GDB using the following command:

sudo apt-get install gdb\n

On Windows, GDB is most easily obtained as part of a toolchain such as MinGW-w64 or MSYS2; make sure its bin directory is added to your PATH environment variable.

Once GDB is installed, you can start it by typing gdb in the terminal. You should see the GDB prompt:

(gdb)\n

This is the GDB command prompt. You can type GDB commands and execute them to debug and optimize your code.

"},{"location":"Useful%20Tools/GDB/#3-debugging-a-program","title":"3. Debugging a Program","text":"

To debug a program using GDB, you need to first compile the program with debugging symbols. You can do this by adding the -g flag to the compiler command. For example, if you are using the g++ compiler, you can compile your program with the following command:

g++ -g myprogram.cpp -o myprogram\n

Once the program is compiled, you can run it using GDB by typing the following command:

gdb myprogram\n

This will start GDB and load the program. You can then set breakpoints in your code using the break command. For example, to set a breakpoint at line 10 of your program, you can type:

break 10\n

You can then run the program using the run command:

run\n

This will start the program and stop at the breakpoint. You can then use GDB commands to analyze the program state and debug the program.

For example, you can use the print command to print the value of a variable:

print myVariable\n

You can also use the step command to execute the next line of code:

step\n

This executes the next source line, stepping into function calls (use next to step over them), and stops again. You can use the continue command to continue running the program until the next breakpoint:

continue\n

You can use the backtrace command to view the call stack:

backtrace\n

This will show you the current function call stack. You can use the info command to view information about variables, threads, and breakpoints.

Once you are done debugging, you can exit GDB using the quit command.

"},{"location":"Useful%20Tools/GDB/#4-optimizing-a-program","title":"4. Optimizing a Program","text":"

GDB is a debugger rather than a profiler: it has no built-in command for timing functions, and for systematic performance measurements you would normally use a dedicated profiling tool alongside it. What GDB does let you do while investigating performance or correctness problems is inspect and change the state of the running program. For example, you can use the set var command to change the value of a variable:

set var myVariable = 10\n

This sets the value of myVariable to 10 in the running process. You can also use the watch command to set a watchpoint, which automatically stops the program when the variable changes:

watch myVariable\n

This breaks the program whenever myVariable is modified. You can then use the finish command to run until the current function returns:

finish\n

This executes the rest of the current function, prints its return value, and stops in the caller. You can use the return command to force the current function to return immediately, optionally with a value you supply:

return\n

This pops the current frame without executing the rest of the function, after which you can continue debugging from the caller.

"},{"location":"Useful%20Tools/GDB/#reference","title":"Reference","text":"
  • 一文快速上手GDB (a Chinese quick-start guide to GDB)
"},{"location":"Useful%20Tools/MIT-Missing-Semester/","title":"MIT: The Missing Semester of Your CS Education","text":""},{"location":"Useful%20Tools/MIT-Missing-Semester/#_1","title":"\u8bfe\u7a0b\u7b80\u4ecb","text":"
  • Prerequisites: none
  • Programming language: shell
  • Difficulty: 🌟🌟
  • Estimated study time: 10 hours

As the course name says, "the missing semester of your CS education", this course teaches many tools and topics that are rarely covered in university lectures yet are extremely important for every CSer, such as shell programming, command-line configuration, Git, Vim, tmux, ssh, and so on. If you are a complete beginner, I highly recommend this course, since it covers the vast majority of the must-learn tools in this book.

Besides MIT's official materials, the Frontier Computing Practice course offered by Peking University's Turing Class also includes related lessons; the materials are available on this website for reference.

"},{"location":"Useful%20Tools/MIT-Missing-Semester/#_2","title":"\u8bfe\u7a0b\u8d44\u6e90","text":"
  • Course website: https://missing.csail.mit.edu/2020/
  • Course website (Chinese): https://missing-semester-cn.github.io/
  • Course videos: https://www.youtube.com/playlist?list=PLyzOVJj3bHQuloKGG59rS43e29ro7I57J
  • Videos with Chinese subtitles:
    • Missing_Semi_中译组 (incomplete): https://space.bilibili.com/1010983811?spm_id_from=333.337.search-card.all.click
    • 刘黑黑a (complete): https://space.bilibili.com/518734451?spm_id_from=333.337.search-card.all.click
  • Course assignments: short in-class exercises; see the course website for details.
"},{"location":"Useful%20Tools/Makefile/","title":"GNU Make","text":""},{"location":"Useful%20Tools/Makefile/#1-introduction","title":"1. Introduction","text":"

GNU Make is a tool for automating the build process of software projects. It is a command-line tool that reads a description of how to build a project from a Makefile and runs only the commands needed to bring its targets up to date. GNU Make is cross-platform and can be used on Windows, Linux, and macOS.

"},{"location":"Useful%20Tools/Makefile/#2-installation","title":"2. Installation","text":"

GNU Make can be installed on Windows, Linux, and macOS using the following steps:

  1. On Linux, install it with your package manager (for example, sudo apt-get install make on Ubuntu). On macOS, it ships with the Xcode Command Line Tools. On Windows, it is available through MSYS2, MinGW-w64, or Chocolatey; the official source releases live at https://www.gnu.org/software/make/.

  2. Make sure the make executable ends up in a directory that is on the system PATH.

  3. Verify that GNU Make is installed by running make --version in the terminal or command prompt.

"},{"location":"Useful%20Tools/Makefile/#3-usage","title":"3. Usage","text":"

GNU Make is driven by the make command, which is run from the directory containing the Makefile. The command takes the name of a target as an argument; the available targets are whatever the Makefile defines, but by convention they often include:

  • all: builds the software project (usually the default target, run when no target is given).
  • test (or check): runs the test suite of the software project.
  • dist: packages the software project for distribution.

For example, to build a software project, run the following command:

make\n

To run the test suite of a software project, run the following command:

make test\n

To package a software project for distribution, run the following command:

make dist\n
"},{"location":"Useful%20Tools/Makefile/#4-configuration","title":"4. Configuration","text":"

GNU Make is configured by creating a file named Makefile in the root directory of the software project. The Makefile contains rules of the form target: prerequisites, each followed by the commands (the recipe) that build the target; recipe lines must begin with a tab character.

Here is an example Makefile:

.PHONY: all test dist\n\nall:\n\tpython setup.py build\n\ntest:\n\tpython setup.py test\n\ndist:\n\tpython setup.py sdist bdist_wheel\n

In this example, the all target builds the software project with python setup.py build, the test target runs the test suite with python setup.py test, and the dist target packages the project for distribution with python setup.py sdist bdist_wheel. The .PHONY line tells Make that these target names do not correspond to files.

"},{"location":"Useful%20Tools/Makefile/#5-conclusion","title":"5. Conclusion","text":"

GNU Make is a powerful tool for automating the build process of software projects. It can be used to build, test, and package software on Windows, Linux, and macOS, and a project's Makefile describes exactly how those steps are carried out for that project.

"},{"location":"Useful%20Tools/Makefile/#6resources","title":"6.Resources","text":"
  • How to Write a Makefile

  • GNU Make Manual

"},{"location":"Useful%20Tools/Regex/","title":"Regex","text":""},{"location":"Useful%20Tools/Regex/#regular-expressions","title":"Regular Expressions","text":"

Regular expressions are a sequence of characters that define a search pattern. They are used to match, locate, and manipulate text. In Python, regular expressions are implemented using the re module.

Here are some examples of regular expressions:

  • r\"hello\\s+world\": Matches the string \"hello world\" with any number of spaces between \"hello\" and \"world\".
  • r\"\\d+\": Matches one or more digits.
  • r\"\\w+\": Matches one or more word characters (letters, digits, and underscores).
  • r\"[\\w\\s]+\": Matches one or more word characters or spaces.
  • r\"[\\w\\s]+@[\\w\\s]+\\.[\\w]{2,3}\": Matches an email address with a username, domain name, and top-level domain.
"},{"location":"Useful%20Tools/Regex/#using-regular-expressions-in-python","title":"Using Regular Expressions in Python","text":"
  1. Import the re module:
import re\n
  2. Use the re.search() function to search for a pattern in a string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"fox\"\nmatch = re.search(pattern, string)\nif match:\n    print(\"Match found:\", match.group())\nelse:\n    print(\"Match not found\")\n
  3. Use the re.findall() function to find all occurrences of a pattern in a string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"\\b\\w{3}\\b\"\nmatches = re.findall(pattern, string)\nprint(\"Matches:\", matches)\n
  4. Use the re.sub() function to replace all occurrences of a pattern in a string with a new string:
string = \"The quick brown fox jumps over the lazy dog\"\npattern = r\"fox\"\nnew_string = re.sub(pattern, \"cat\", string)\nprint(\"New string:\", new_string)\n
"},{"location":"Useful%20Tools/Regex/#resources","title":"Resources","text":"
  • Email Regex
  • Regex 101
  • Python Regular Expressions
  • Regular Expression HOWTO
  • Regular Expressions in Python
"},{"location":"Web%20Framework/Django/","title":"Django","text":"

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, such as database abstraction (the ORM), URL routing, and templating. Django's documentation is excellent and provides a comprehensive guide to its features and functionality.

Django is known for its ease of use, high-level abstractions, and reusability; it is also fast, scalable, and secure. With Django, you can quickly build complex web applications out of reusable, well-documented components.

Django is open-source, and its large, active community constantly adds new features and improves the documentation, which makes it a popular choice for large-scale and enterprise-level web applications.

Its ORM handles complex database relationships and queries well, which is one reason it is so often chosen for data-heavy applications.

Django also ships with strong support for internationalization and localization, which makes it a good fit for multilingual sites.
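
To give a feel for what this looks like in code, here is a minimal, hedged sketch of a Django view and URL configuration (the module names myapp and myproject stand in for whatever startapp and startproject generate in a real project):

# myapp/views.py\nfrom django.http import HttpResponse\n\ndef index(request):\n    # A view takes an HttpRequest and returns an HttpResponse\n    return HttpResponse(\"Hello, Django!\")\n\n\n# myproject/urls.py\nfrom django.urls import path\n\nfrom myapp import views\n\nurlpatterns = [\n    path(\"\", views.index, name=\"index\"),  # route the site root to the view\n]\n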

"},{"location":"Web%20Framework/Flask/","title":"Flask","text":"

Flask is a web framework written in Python. It is lightweight, easy to use, and provides a lot of features out of the box. It is also easy to learn and has a large community of developers who contribute to its development.
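
A minimal sketch of a Flask application, to show how little code a working app needs (the route and the message are of course placeholders):

from flask import Flask\n\napp = Flask(__name__)\n\n@app.route(\"/\")\ndef index():\n    # Return a plain-text response for the site root\n    return \"Hello, Flask!\"\n\nif __name__ == \"__main__\":\n    app.run(debug=True)  # start the development server\n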

"},{"location":"Web%20Framework/MkDocs/","title":"MkDocs","text":"

MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

"},{"location":"Web%20Framework/MkDocs/#getting-started","title":"Getting Started","text":"

To get started with MkDocs, follow these steps:

  1. Install MkDocs:
pip install mkdocs\n
  2. Create a new directory for your project:
mkdir myproject\n
  3. Initialize a new MkDocs project:
cd myproject\nmkdocs new .\n
  4. Start the development server:
mkdocs serve\n
  5. Open your browser and go to http://127.0.0.1:8000/ to view your new documentation site.
"},{"location":"Web%20Framework/MkDocs/#writing-your-first-page","title":"Writing Your First Page","text":"

To create a new page, create a new file in the docs directory with a .md extension. For example, create a new file called about.md in the docs directory:

docs/\n\u251c\u2500\u2500 index.md\n\u2514\u2500\u2500 about.md\n

MkDocs will pick up the new page automatically; to control where it appears in the navigation, list it under the nav key in mkdocs.yml.

"},{"location":"Web%20Framework/React/","title":"React","text":"

React is a JavaScript library for building user interfaces. It is used for building complex and interactive web applications. React is a declarative, component-based library that simplifies the process of building user interfaces. It is easy to learn and use, and it has a large and active community of developers.

"},{"location":"Web%20Framework/React/#getting-started","title":"Getting Started","text":"

To get started with React, you can follow the steps below:

  1. Install Node.js and npm on your system.
  2. Create a new React project using the create-react-app command.
  3. Start the development server using the npm start command.
  4. Open the project in your preferred code editor.
"},{"location":"Web%20Framework/React/#react-components","title":"React Components","text":"

React components are the building blocks of React applications. They are small, reusable pieces of code that can be used to create complex user interfaces. React components can be created using JavaScript or JSX. JSX is a syntax extension of JavaScript that allows you to write HTML-like code within JavaScript.

"},{"location":"Web%20Framework/React/#react-props","title":"React Props","text":"

React props are the input data that are passed to a component. They are used to customize the behavior and appearance of a component. Props can be passed to a component using attributes or properties.

"},{"location":"Web%20Framework/React/#react-state","title":"React State","text":"

React state is the data that is managed by a component. It is used to keep track of the user interface state and is updated by the component. State can be updated using the setState method.

"},{"location":"Web%20Framework/React/#react-life-cycle-methods","title":"React Life Cycle Methods","text":"

React life cycle methods are functions that are called at different stages of the component lifecycle. They are used to perform certain actions when a component is mounted, updated, or unmounted.

"},{"location":"Web%20Framework/React/#react-router","title":"React Router","text":"

React Router is a library for handling client-side routing in React applications. It allows you to create dynamic and responsive web applications with ease. It provides a simple and declarative API for handling navigation in your application.

"},{"location":"Web%20Framework/React/#react-hooks","title":"React Hooks","text":"

React hooks are a feature that lets you use state and other React features from function components, without writing a class. The most commonly used hooks are useState for local state and useEffect for side effects.

"},{"location":"Web%20Framework/Streamlit/","title":"Streamlit","text":"

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In this section, we will learn how to use Streamlit to create a simple web app that displays a list of movies and their ratings.

"},{"location":"Web%20Framework/Streamlit/#prerequisites","title":"Prerequisites","text":"
  1. Python 3.6 or later
  2. Streamlit library
"},{"location":"Web%20Framework/Streamlit/#step-1-install-streamlit","title":"Step 1: Install Streamlit","text":"

To install Streamlit, run the following command in your terminal:

pip install streamlit\n
"},{"location":"Web%20Framework/Streamlit/#step-2-create-a-new-streamlit-app","title":"Step 2: Create a new Streamlit app","text":"

To create a new Streamlit app, run the following command in your terminal:

streamlit hello\n

This launches Streamlit's built-in demo app in your browser, which is a good way to confirm that the installation works. To start your own app, create a file called app.py and run it with streamlit run app.py.

"},{"location":"Web%20Framework/Streamlit/#step-3-add-a-list-of-movies-and-their-ratings","title":"Step 3: Add a list of movies and their ratings","text":"

To add a list of movies and their ratings, replace the code in app.py with the following:

import streamlit as st\nimport pandas as pd\n\n# Create a list of movies and their ratings\nmovies = pd.DataFrame({\"title\": [\"Movie A\", \"Movie B\", \"Movie C\"], \"rating\": [8.5, 7.2, 9.0]})\nst.table(movies)\n
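
Streamlit becomes interesting once you add interactive widgets. As a small, hedged extension of the example above (self-contained, with illustrative data), a slider can filter the table:

import pandas as pd\nimport streamlit as st\n\nmovies = pd.DataFrame({\"title\": [\"Movie A\", \"Movie B\", \"Movie C\"], \"rating\": [8.5, 7.2, 9.0]})\n\n# A slider widget; the third argument is the default value\nmin_rating = st.slider(\"Minimum rating\", 0.0, 10.0, 8.0)\n\n# Show only the movies at or above the selected rating\nst.dataframe(movies[movies[\"rating\"] >= min_rating])\n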
"},{"location":"Web%20Framework/Streamlit/#pydeck","title":"Pydeck","text":"

Pydeck is a Python library for creating data visualizations using deck.gl, an open-source WebGL-based visualization framework. We can use it to create a map of the movies and their ratings.

  • https://deckgl.readthedocs.io/
  • heatmap
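
As a hedged sketch of what a pydeck map inside a Streamlit app can look like (the coordinates are arbitrary example points, and pydeck must be installed separately):

import pandas as pd\nimport pydeck as pdk\nimport streamlit as st\n\n# Arbitrary example points (longitude, latitude)\npoints = pd.DataFrame({\"lon\": [-122.41, -122.40], \"lat\": [37.77, 37.78]})\n\nlayer = pdk.Layer(\n    \"ScatterplotLayer\",\n    data=points,\n    get_position=\"[lon, lat]\",\n    get_radius=200,\n)\nview = pdk.ViewState(longitude=-122.405, latitude=37.775, zoom=11)\n\nst.pydeck_chart(pdk.Deck(layers=[layer], initial_view_state=view))\n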
"},{"location":"Web%20Framework/Vue/","title":"Vue.js","text":"

Vue.js is a progressive framework for building user interfaces. It is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or frameworks.

"},{"location":"Web%20Framework/Vue/#getting-started","title":"Getting Started","text":"

To get started with Vue.js, you can follow the official guide on their website:

  • Getting Started
  • Vue CLI
  • Vue Router
  • Vuex
  • Vue Loader
"},{"location":"Web%20Framework/Vue/#examples","title":"Examples","text":"

Here are some examples of Vue.js applications:

  • TodoMVC
  • Vue.js Examples
  • Vue.js News
  • Vue.js Jobs
"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 351e82f..48ade91 100755 --- a/sitemap.xml +++ b/sitemap.xml @@ -10,6 +10,11 @@ 2024-02-26 daily + + https://Kian-Chen.github.io/Scheme2024Winter/Computer%20Network/topdown/ + 2024-02-26 + daily + https://Kian-Chen.github.io/Scheme2024Winter/Computer%20Vision/EECS-498/ 2024-02-26 diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 4556090..0028c21 100755 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ