index.json
[{"categories":["client-go","源码"],"content":"ListerWatcher接口 ListerWatcher是Lister和Watcher接口的结合体:前者负责与APIServer通信,列出全量对象;后者负责监听这些对象的增量变化。 List-Watch机制存在的原因: 一句话概括:为了提高访问效率。因为k8s资源信息都保存在etcd中,每一次访问资源都需要客户端通过APIServer进行访问,如果很多客户端频繁地列举全量对象(比如列举所有的pod),会使APIServer进程(或者称为服务)不堪重负。 因此List-Watch就是为了在本地进行缓存(Indexer):只需要访问一次APIServer列举出全量对象,并同步到本地缓存(Indexer);后续通过Watch机制监听这类对象的变化,当监听到变化的时候,也只需要同步本地缓存(Indexer)即可。这样就会大大地提高效率,与APIServer的通信也只是资源的增量变化。 ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/:1:0","tags":["client-go","源码"],"title":"Client Go源码分析 ListWatch实现","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/"},{"categories":["client-go","源码"],"content":"ListerWatcher接口定义 // Lister is any object that knows how to perform an initial list. type Lister interface { // List should return a list type object; the Items field will be extracted, and the // ResourceVersion field will be used to start the watch in the right place. List(options metav1.ListOptions) (runtime.Object, error) } // Watcher is any object that knows how to start a watch on a resource. type Watcher interface { // Watch should begin a watch at the specified version. Watch(options metav1.ListOptions) (watch.Interface, error) } // ListerWatcher is any object that knows how to perform an initial list and start a watch on a resource. 
type ListerWatcher interface { Lister Watcher } 从注释中可以看出对上述接口的描述: Lister接口的函数List主要实现是返回所有的资源列表; Watcher接口的函数Watch开始监听上述资源(需要指定特定的版本resourceVersion是一个全局的ID); ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/:1:1","tags":["client-go","源码"],"title":"Client Go源码分析 ListWatch实现","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/"},{"categories":["client-go","源码"],"content":"ListWatch结构实现接口 // ListFunc knows how to list resources type ListFunc func(options metav1.ListOptions) (runtime.Object, error) // WatchFunc knows how to watch resources type WatchFunc func(options metav1.ListOptions) (watch.Interface, error) // ListWatch knows how to list and watch a set of apiserver resources. It satisfies the ListerWatcher interface. // It is a convenience function for users of NewReflector, etc. // ListFunc and WatchFunc must not be nil type ListWatch struct { ListFunc ListFunc WatchFunc WatchFunc // DisableChunking requests no chunking for this list watcher. DisableChunking bool } 具体的实现函数如下: // List实现了ListerWatcher.List() // List a set of apiserver resources func (lw *ListWatch) List(options metav1.ListOptions) (runtime.Object, error) { // ListWatch is used in Reflector, which already supports pagination. // Don't paginate here to avoid duplication. 
return lw.ListFunc(options) } // Watch实现了ListerWatcher.Watch() // Watch a set of apiserver resources func (lw *ListWatch) Watch(options metav1.ListOptions) (watch.Interface, error) { return lw.WatchFunc(options) } 从上面可以看到,因为ListerWatcher接口只包含List和Watch两个函数,而这里的ListWatch struct也正好只有这两个成员函数,因此可以认为ListWatch struct实现了ListerWatcher接口。 非常值得注意的是,这里的List和Watch成员函数分别调用了ListWatch注册的两个函数:ListFunc和WatchFunc。 后续所有资源类型的Informer都会注册自己的ListWatch结构,比如创建下面的Deployment的Informer时就会注册自己的ListWatch结构(也就是List\u0026Watch函数) ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/:1:2","tags":["client-go","源码"],"title":"Client Go源码分析 ListWatch实现","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/"},{"categories":["client-go","源码"],"content":"使用 ListWatch 的 Informer 后文会介绍,各资源类型都有自己特定的 Informer(codegen 工具自动生成),如 Deployment Informer,它们使用自己资源类型的 ClientSet 来初始化 ListWatch,只返回对应类型的对象: // 来源于 k8s.io/client-go/informers/extensions/v1beta1/deployment.go func NewFilteredDeploymentInformer(client kubernetes.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { return cache.NewSharedIndexInformer( // 使用特定资源类型的 RESTClient 创建 ListWatch \u0026cache.ListWatch{ ListFunc: func(options v1.ListOptions) (runtime.Object, error) { if tweakListOptions != nil { tweakListOptions(\u0026options) } return client.ExtensionsV1beta1().Deployments(namespace).List(options) }, WatchFunc: func(options v1.ListOptions) (watch.Interface, error) { if tweakListOptions != nil { tweakListOptions(\u0026options) } return client.ExtensionsV1beta1().Deployments(namespace).Watch(options) }, }, \u0026extensionsv1beta1.Deployment{}, resyncPeriod, indexers, ) } ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/:1:3","tags":["client-go","源码"],"title":"Client Go源码分析 
ListWatch实现","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-listwatch%E5%AE%9E%E7%8E%B0/"},{"categories":["client-go","源码"],"content":"Client-go的整体流程如下图所示: 首先,从上面的图中可以看到,有几个比较重要的组件:Reflector、Informer、Indexer。 Informer在初始化的时候,会先调用Kubernetes List API获得某种resource类型的全部对象,并且获取对象的resourceVersion缓存在内存中。然后调用watch API去watch这种resource,根据resourceVersion号去进行watch,并维护这份缓存。然后将watch到的Event加入到DeltaFIFO中,Reflector的主要工作就是不断地进行watch操作。 ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:0:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"万恶之源-ListerWatcher Interface 一提到client-go不得不说的就是ListWatch机制,该机制的主要目的是减少APIServer的压力,也就是缓存的概念。List就是客户端第一次访问APIServer的时候进行全量访问,也就是List出etcd中该类的所有资源,比如Pod。而Watch就是监听这些已缓存的资源是否发生了更改(ResourceVersion),如果发生变化则调用具体的handler(Add、Update和Delete)进行处理。 ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:1:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"首先看ListerWatcher接口定义 // tools/cache/listwatch.go // Lister is any object that knows how to perform an initial list. type Lister interface { // List should return a list type object; the Items field will be extracted, and the // ResourceVersion field will be used to start the watch in the right place. List(options metav1.ListOptions) (runtime.Object, error) } // Watcher is any object that knows how to start a watch on a resource. type Watcher interface { // Watch should begin a watch at the specified version. 
Watch(options metav1.ListOptions) (watch.Interface, error) } // ListerWatcher is any object that knows how to perform an initial list and start a watch on a resource. type ListerWatcher interface { Lister Watcher } ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:1:1","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"List是怎样做的? 在Reflector的List方法中,会单独启用一个协程去进行List操作: pager := pager.New(pager.SimplePageFunc(func(opts metav1.ListOptions) (runtime.Object, error) { return r.listerWatcher.List(opts) })) pager是进行分页,其中是调用了Reflector成员的listerWatcher的List方法列举出对应opts的所有对象。下面再来看一下这个函数的到底是如何调用API Server的: // 这里是需要用户自定义Controller type ListWatch struct { ListFunc ListFunc WatchFunc WatchFunc // DisableChunking requests no chunking for this list watcher. DisableChunking bool } // List a set of apiserver resources func (lw *ListWatch) List(options metav1.ListOptions) (runtime.Object, error) { // ListWatch is used in Reflector, which already supports pagination. // Don't paginate here to avoid duplication. 
return lw.ListFunc(options) } 从上面可以看出,这个ListWatch是需要自定义,其实在每一种资源中都实现了对ListWatch的注册,比如Pod的Informer函数: // informers/core/v1/pod.go func NewFilteredPodInformer(client kubernetes.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { return cache.NewSharedIndexInformer( \u0026cache.ListWatch{ ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { if tweakListOptions != nil { tweakListOptions(\u0026options) } return client.CoreV1().Pods(namespace).List(context.TODO(), options) }, WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { if tweakListOptions != nil { tweakListOptions(\u0026options) } return client.CoreV1().Pods(namespace).Watch(context.TODO(), options) }, }, \u0026corev1.Pod{}, resyncPeriod, indexers, ) } 上面的注册的函数是client.CoreV1().Pods(namespace).List(context.TODO(), options)。client就是负责与API Server进行通信的客户端:kubernetes.Interface。 ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:2:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"Watch是怎样做的? 
同样是在Reflector的ListAndWatch方法中单独启用一个协程去调用watch函数: w, err := r.listerWatcher.Watch(options) 同样,ListWatch的Watch方法调用的是自己注册的WatchFunc函数: // Watch a set of apiserver resources func (lw *ListWatch) Watch(options metav1.ListOptions) (watch.Interface, error) { return lw.WatchFunc(options) } 和上面的ListFunc一样,WatchFunc也是自己注册的。以Pod为例,实际调用的就是在PodInformer注册的WatchFunc: return client.CoreV1().Pods(namespace).Watch(context.TODO(), options) 其中Watch方法最终调用的是kubernetes/typed/core/v1下的Watch函数: func (c *pods) Watch(ctx context.Context, opts metav1.ListOptions) (watch.Interface, error) { var timeout time.Duration if opts.TimeoutSeconds != nil { timeout = time.Duration(*opts.TimeoutSeconds) * time.Second } opts.Watch = true return c.client.Get(). Namespace(c.ns). Resource(\"pods\"). VersionedParams(\u0026opts, scheme.ParameterCodec). Timeout(timeout). Watch(ctx) } 其实最终调用的是Request下的Watch函数,使用Request结构中的RESTClient(基于http.Client)进行通信。 下面是client-go对请求(Request)的封装: type Request struct { c *RESTClient warningHandler WarningHandler rateLimiter flowcontrol.RateLimiter backoff BackoffManager timeout time.Duration // 这里应该是最大重试次数 maxRetries int // generic components accessible via method setters verb string pathPrefix string subpath string params url.Values headers http.Header // structural elements of the request that are part of the Kubernetes API conventions namespace string namespaceSet bool resource string resourceName string subresource string // output err error body io.Reader retryFn requestRetryFunc } ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:3:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"Reflector是如何保存该类资源事件到DeltaFIFO中的? 
同样是在Reflector的ListAndWatch函数中,在 Watch 之后对其进行操作,使用的函数是watchHandler,具体的代码实现如下所示: err = watchHandler(start, w, r.store, r.expectedType, r.expectedGVK, r.name, r.expectedTypeName, r.setLastSyncResourceVersion, r.clock, resyncerrc, stopCh) 这里会根据watch的事件类型对其进行处理,相应的调用的是store.Add/store.Update/store.Delete,但是最终对于实现了Store接口的DeltaFIFO来说,都是Add操作。 switch event.Type { case watch.Added: err := store.Add(event.Object) if err != nil { utilruntime.HandleError(fmt.Errorf(\"%s: unable to add watch event object (%#v) to store: %v\", name, event.Object, err)) } case watch.Modified: err := store.Update(event.Object) if err != nil { utilruntime.HandleError(fmt.Errorf(\"%s: unable to update watch event object (%#v) to store: %v\", name, event.Object, err)) } case watch.Deleted: // TODO: Will any consumers need access to the \"last known // state\", which is passed in event.Object? If so, may need // to change this. err := store.Delete(event.Object) if err != nil { utilruntime.HandleError(fmt.Errorf(\"%s: unable to delete watch event object (%#v) from store: %v\", name, event.Object, err)) } 因为上面的storage的实现是DeltaFIFO,所以这里调用的store.Add/store.Update/store.Delete都是DeltaFIFO的成员函数,具体实现在tools/cache/delta_fifo.go中,下面是对增删改的具体实现: // Add inserts an item, and puts it in the queue. The item is only enqueued // if it doesn't already exist in the set. func (f *DeltaFIFO) Add(obj interface{}) error { f.lock.Lock() defer f.lock.Unlock() f.populated = true return f.queueActionLocked(Added, obj) } // Update is just like Add, but makes an Updated Delta. func (f *DeltaFIFO) Update(obj interface{}) error { f.lock.Lock() defer f.lock.Unlock() f.populated = true return f.queueActionLocked(Updated, obj) } // Delete is just like Add, but makes a Deleted Delta. If the given // object does not already exist, it will be ignored. (It may have // already been deleted by a Replace (re-list), for example.) 
In this // method `f.knownObjects`, if not nil, provides (via GetByKey) // _additional_ objects that are considered to already exist. func (f *DeltaFIFO) Delete(obj interface{}) error { id, err := f.KeyOf(obj) if err != nil { return KeyError{obj, err} } f.lock.Lock() defer f.lock.Unlock() f.populated = true if f.knownObjects == nil { if _, exists := f.items[id]; !exists { // Presumably, this was deleted when a relist happened. // Don't provide a second report of the same deletion. return nil } } else { // We only want to skip the \"deletion\" action if the object doesn't // exist in knownObjects and it doesn't have corresponding item in items. // Note that even if there is a \"deletion\" action in items, we can ignore it, // because it will be deduped automatically in \"queueActionLocked\" _, exists, err := f.knownObjects.GetByKey(id) _, itemsExist := f.items[id] if err == nil \u0026\u0026 !exists \u0026\u0026 !itemsExist { // Presumably, this was deleted when a relist happened. // Don't provide a second report of the same deletion. return nil } } // exist in items and/or KnownObjects return f.queueActionLocked(Deleted, obj) } 从上面的代码中可以看到,所有的事件在最后都调用了queueActionLocked,也就是将该Delta放入了一个map[string]Deltas(f.items-\u003eDeltaFIFO成员)。其中map的key就是该对象的key,同时会将该key放入到一个queue(f.queue-\u003eDeltaFIFO的成员)中等待进行处理,这样就会对每一个资源(对象)可以保证被顺序处理。 // tools/cache/delta_fifo.go func (f *DeltaFIFO) queueActionLocked(actionType DeltaType, obj interface{}) error { id, err := f.KeyOf(obj) if err != nil { return KeyError{obj, err} } oldDeltas := f.items[id] newDeltas := append(oldDeltas, Delta{actionType, obj}) newDeltas = dedupDeltas(newDeltas) if len(newDeltas) \u003e 0 { if _, exists := f.items[id]; !exists { f.queue = append(f.queue, id) } f.items[id] = newDeltas f.cond.Broadcast() } else { ... 
从上面的代码中可以看到,其实增删改所有的操作都是append,即将这些对象以及对象事件(Delta)添加至DeltaFIFO中。同时还会将该对象的Key添加到queue中,这里的queue的作用就是保证在Pop DeltaFIFO的时候可以按顺序弹出。其中Key的计算是由成员keyFunc决定的,每一个资源所对应的key应该都是不一样的。成员queue是用来保存消费顺序的(Pop)。 DeltaFIFO中保存的内容应该是这样的: 从上面的图中也可以看到,DeltaFIFO对象的queue成员中保存的是resource对应的key,而items成员其实是一个map结构: map[string]Deltas, key对应的就是queue中的对象的key","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:4:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"DeltaFIFO中保存的内容? DeltaFIFO中保存的是一个个的Deltas列表,Deltas就是某一种对象的Delta列表。 Delta对象由事件类型和对象组成,Deltas就是这些对象组成的列表,所以DeltaFIFO中保存的就是这些一个个对象的列表,比如Pod和Node是由两个不同的列表维护的。 type Delta struct { Type DeltaType Object interface{} } // Deltas is a list of one or more 'Delta's to an individual object. // The oldest delta is at index 0, the newest delta is the last one. type Deltas []Delta DeltaFIFO结构中比较重要的几个成员如下所示,几个成员的作用已经在下面的注释中说明: type DeltaFIFO struct { // `items` maps a key to a Deltas. // Each such Deltas has at least one Delta. items map[string]Deltas // `queue` maintains FIFO order of keys for consumption in Pop(). // There are no duplicates in `queue`. // A key is in `queue` if and only if it is in `items`. queue []string ... // keyFunc is used to make the key used for queued item // insertion and retrieval, and should be deterministic. keyFunc KeyFunc } ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:5:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"Event事件是如何消费的? 
对保存在DeltaFIFO中的事件进行消费是client-go的另一条重要主线。 shared_informer.go的Run函数主要启动了两个部分:Controller和sharedProcessor。具体代码如下: // tools/cache/shared_informer.go func (s *sharedIndexInformer) Run(stopCh \u003c-chan struct{}) { defer utilruntime.HandleCrash() if s.HasStarted() { klog.Warningf(\"The sharedIndexInformer has started, run more than once is not allowed\") return } // 创建带有indexer的DeltaFIFO // 所以后面使用的FIFO其实就是DeltaFIFO fifo := NewDeltaFIFOWithOptions(DeltaFIFOOptions{ KnownObjects: s.indexer, EmitDeltaTypeReplaced: true, }) // 在controller定义的Config,用于创建Controller // Config中包含了DeltaFIFO/ListerWatcher两个重要组件,同时还有用于处理Delta的Process cfg := \u0026Config{ Queue: fifo, ListerWatcher: s.listerWatcher, ObjectType: s.objectType, FullResyncPeriod: s.resyncCheckPeriod, RetryOnError: false, ShouldResync: s.processor.shouldResync, // 这里注册的就是处理Delta的函数(ProcessFunc) // 这个函数在Delta从FIFO中被弹出来之前被调用,调用顺序是: // 这个也是WatchEvent消费过程:Controller.Run()-\u003eController.ProcessLoop()-\u003equeue.Pop()-\u003esharedIndexInformer.HandleDeltas() Process: s.HandleDeltas, WatchErrorHandler: s.watchErrorHandler, } // 根据Config对象创建一个Controller。这里会创建一个函数块,函数块的目的是加锁 func() { s.startedLock.Lock() defer s.startedLock.Unlock() s.controller = New(cfg) s.controller.(*controller).clock = s.clock s.started = true }() // 调用事件处理函数,处理DeltaFIFO // 启用Process // Separate stop channel because Processor should be stopped strictly after controller processorStopCh := make(chan struct{}) var wg wait.Group defer wg.Wait() // Wait for Processor to stop defer close(processorStopCh) // Tell Processor to stop wg.StartWithChannel(processorStopCh, s.cacheMutationDetector.Run) // 这里调用了sharedProcessor.run方法 wg.StartWithChannel(processorStopCh, s.processor.run) defer func() { s.startedLock.Lock() defer s.startedLock.Unlock() s.stopped = true // Don't want any new listeners }() // 启动controller--启动reflector--ListerWatcher同步apiServer数据,并且watch apiServer将event加入到DeltaFIFO中 // 同时调用controller.processLoop函数进行处理 s.controller.Run(stopCh) } 
","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:6:0","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"Controller做了什么 Controller的Run方法做了三件事: 创建并启动新的Reflector,调用Reflector的Run函数进行ListAndWatch,ListWatch会将监听到的对象事件保存到DeltaFIFO中; 并且会调用controller的processLoop函数完成对DeltaFIFO的消费,即从DeltaFIFO的queue成员中依次弹出要处理的对象,并且调用PopProcessFunc函数处理; processFunc其实就是HandleDeltas函数,随后HandleDeltas调用了processDeltas,该函数的核心工作内容是:(1)添加(更新/删除)Delta的obj成员到Indexer(也就是本地缓存)中;(2)添加(更新/删除)Delta的obj成员到addChan(workqueue)中,等待sharedProcessor从中取出进行处理。 // tools/cache/controller.go func (c *controller) processLoop() { for { obj, err := c.config.Queue.Pop(PopProcessFunc(c.config.Process)) if err != nil { if err == ErrFIFOClosed { return } if c.config.RetryOnError { // This is the safe way to re-enqueue. 
c.config.Queue.AddIfNotPresent(obj) } } } } 从上面可以看到,这里的处理函数实际是config配置的Process成员,在shared_informer.go中,这里的Process函数是HandleDeltas函数。 // tools/cache/shared_informer.go Run() cfg := \u0026Config{ Queue: fifo, ListerWatcher: s.listerWatcher, ObjectType: s.objectType, FullResyncPeriod: s.resyncCheckPeriod, RetryOnError: false, ShouldResync: s.processor.shouldResync, // 这里注册的就是处理Delta的函数(ProcessFunc) // 这个函数在Delta从FIFO中被弹出来之前被调用,调用顺序是: // 这个也是WatchEvent消费过程:Controller.Run()-\u003eController.ProcessLoop()-\u003equeue.Pop()-\u003esharedIndexInformer.HandleDeltas() Process: s.HandleDeltas, WatchErrorHandler: s.watchErrorHandler, } HandleDeltas函数调用了processDeltas函数,该函数的具体实现如下: // tools/cache/controller.go processDeltas() func processDeltas( // Object which receives event notifications from the given deltas handler ResourceEventHandler, clientState Store, transformer TransformFunc, deltas Deltas, ) error { // from oldest to newest for _, d := range deltas { obj := d.Object if transformer != nil { var err error obj, err = transformer(obj) if err != nil { return err } } // 这里会完成两个工作: // 1. 更新Local Store(Indexer); // 2. 
完成事件分发(这里实际上还没有真正的处理事件,而是调用了sharedIndexInformer.OnAdd/OnUpdate/OnDelete) // 在sharedIndexInformer的这些函数中,完成了事件分发。分发的依据就是根据事件的类型(add/update/delete), // 具体的看sharedIndexInformer.OnAdd/OnUpdate/OnDelete下面的distribute函数 switch d.Type { case Sync, Replaced, Added, Updated: if old, exists, err := clientState.Get(obj); err == nil \u0026\u0026 exists { if err := clientState.Update(obj); err != nil { return err } handler.OnUpdate(old, obj) } else { if err := clientState.Add(obj); err != nil { return err } handler.OnAdd(obj) } case Deleted: if err := clientState.Delete(obj); err != nil { return err } handler.OnDelete(obj) } } return nil } 从代码可以看出,该函数主要使用for遍历弹出的一个Deltas(某一个obj的列表)。根据Delta的事件类型进行区分,主要做了两部分工作: 更新Local Store(Indexer),就是将这些Delta添加/更新/删除(Delta有Type和Object两个成员,这里只把Object添加/更新/删除到本地对象缓存Indexer中),所以这里也是保证Indexer中和etcd数据库中的数据一致的实现,实时更新。 完成事件分发(这里实际上还没有真正的处理事件,而是调用了sharedIndexInformer.OnAdd/OnUpdate/OnDelete)。分发的依据就是根据事件的类型(add/update/delete)。 所谓的分发就是添加到workqueue(在开头第一张图中介绍)中等待处理。 这里以Add事件进行说明,追踪的是handler.OnAdd(obj)这个方法。 上面的processDeltas函数传入的第一个参数是sharedIndexInformer对象,又因为sharedIndexInformer实现了ResourceEventHandler接口,所以上面的handler.OnAdd(obj)最终会调用的是sharedIndexInformer的OnAdd方法,该方法如下: // tools/cache/shared_informer.go OnAdd() func (s *sharedIndexInformer) OnAdd(obj interface{}) { // Invocation of this function is locked under s.blockDeltas, so it is // safe to distribute the notification s.cacheMutationDetector.AddObject(obj) // 这里传入的对象还是从Deltas弹出的Delta对象。调用历史: // for:controller.processLoop()-\u003eProcessFunc-\u003esharedIndexInformer.HandleDeltas() // -\u003eController.processDeltas(这里传入的参数应该是ResourceEventHandler,因为sharedIndexInformer实现了该接口,所以传入的是sharedIndexInformer对象) // -\u003ehandler.OnAdd/OnUpdate/OnDelete = sharedIndexInformer.OnAdd/OnUpdate/OnDelete // -\u003edistribute s.processor.distribute(addNotification{newObj: obj}, false) } 上面的distribute()函数会完成事件的分发。如前所述,distribute其实就是将处理的DeltaFIFO的Obj添加到addChan中,等待处理。 这里的obj类型就是DeltaFIFO item(Delta)中的对象,类型如下所示: [{Add, 
obj1},{Update, obj1},{Delete, obj1}] func (p *sharedProcessor) distribute(obj interface{}, sync bool) { p.listenersLock.RLock() defer p.listenersLock.RUnlock() for listener, isSyncing := range p.listeners { switch { case !sync: // non-sync messages are delivered to every","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:6:1","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["client-go","源码"],"content":"sharedProcessor是如何处理addChan中的对象的 然后是交由sharedProcessor的run函数进行处理。run函数的调用也是在sharedIndexInformer中单独的协程中进行处理: wg.StartWithChannel(processorStopCh, s.processor.run) 这里会启动两个协程同时去处理: func (p *sharedProcessor) run(stopCh \u003c-chan struct{}) { func() { p.listenersLock.RLock() defer p.listenersLock.RUnlock() // 同时启动两个协程去进行run和pop for listener := range p.listeners { p.wg.Start(listener.run) // run p.wg.Start(listener.pop) // pop } p.listenersStarted = true }() \u003c-stopCh p.listenersLock.Lock() defer p.listenersLock.Unlock() for listener := range p.listeners { close(listener.addCh) // Tell .pop() to stop. .pop() will tell .run() to stop } // Wipe out list of listeners since they are now closed // (processorListener cannot be re-used) p.listeners = nil // Reset to false since no listeners are running p.listenersStarted = false p.wg.Wait() // Wait for all .pop() and .run() to stop } 在pop的时候借助了一个无限大的循环队列(buffer.RingGrowing),原因是:pop作为addCh 的消费逻辑 必须非常快,而下游nextCh 的消费函数run 执行的速度看业务而定,中间要通过pendingNotifications 缓冲。 最后run函数也非常简单,就是调用了ResourceEventHandler的方法: // tools/cache/shared_informer run() func (p *processorListener) run() { // this call blocks until the channel is closed. When a panic happens during the notification // we will catch it, **the offending item will be skipped!**, and after a short delay (one second) // the next notification will be attempted. 
This is usually better than the alternative of never // delivering again. stopCh := make(chan struct{}) wait.Until(func() { for next := range p.nextCh { // type updateNotification struct switch notification := next.(type) { case updateNotification: p.handler.OnUpdate(notification.oldObj, notification.newObj) // type addNotification struct case addNotification: p.handler.OnAdd(notification.newObj) // type deleteNotification struct case deleteNotification: p.handler.OnDelete(notification.oldObj) default: utilruntime.HandleError(fmt.Errorf(\"unrecognized notification: %T\", next)) } } // the only way to get here is if the p.nextCh is empty and closed close(stopCh) }, 1*time.Second, stopCh) } 下面的一个图很好的表示了处理流程: 如果 event 处理较慢,则会导致pendingNotifications 积压,event 处理的延迟增大. ","date":"2023-02-26","objectID":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/:6:2","tags":["client-go"],"title":"Client Go整体流程梳理","uri":"/client-go%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-%E6%95%B4%E4%BD%93%E6%B5%81%E7%A8%8B%E6%A2%B3%E7%90%86/"},{"categories":["部署"],"content":" 虽然Ubuntu和Centos都是Linux系统,但是安装的命令还是稍有区别; 这里给予的Ubuntu版本是20.04,对于更早的版本没有尝试,但是应该大差不差。 我们使用KubeAdm作为安装工具,这里没有过多的解释,目的就是方便快速搭建一个集群。 ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:0","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"1. 禁用Swap分区 # 注释掉swap一行 sudo vi /etc/fstab ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:1","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"2. 
iptables设置 确保 br_netfilter 模块被加载。这一操作可以通过运行 lsmod | grep br_netfilter 来完成。若要显式加载该模块,可执行 sudo modprobe br_netfilter。 为了让你的 Linux 节点上的 iptables 能够正确地查看桥接流量,你需要确保在你的 sysctl 配置中将 net.bridge.bridge-nf-call-iptables 设置为 1。例如: cat \u003c\u003cEOF | sudo tee /etc/modules-load.d/k8s.conf br_netfilter EOF cat \u003c\u003cEOF | sudo tee /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF sudo sysctl --system ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:2","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"3. 安装Docker 安装Docker(在安装的时候可以指定版本进行安装,和想要安装的Kubernetes版本保持一致) sudo apt update sudo apt install -y docker.io sudo systemctl start docker \u0026\u0026 sudo systemctl enable docker ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:3","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"4. 
安装Kubeadm、kubelet和kubectl 4.1 首先安装依赖包 sudo apt-get update \u0026\u0026 sudo apt -y upgrade sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - # 如果上述命令提示失败的话,使用下面的命令代替 # curl -s https://gitee.com/thepoy/k8s/raw/master/apt-key.gpg | sudo apt-key add - sudo cat \u003e\u003e/etc/apt/sources.list.d/kubernetes.list \u003c\u003cEOF deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main EOF # 更新apt包索引,用于安装kubelet、kubeadm和kubectl sudo apt-get update 4.2 开始安装Kubeadm、kubelet、kubectl # 查看可以安装的指定版本 apt list kubeadm -a # 安装指定版本的Kubeadm、kubelet、kubectl sudo apt-get install -y kubelet=1.20.15-00 kubeadm=1.20.15-00 kubectl=1.20.15-00 # 如果上述命令报错的话,添加 `--allow-unauthenticated` 选项 systemctl enable kubelet systemctl enable docker ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:4","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"5. 
预下载k8s集群组件镜像 5.1 查看 kubeadm init 时所需要的组件镜像列表 kubeadm config images list # 输出类似如下信息,这些代表是kubeadm要下载安装的组件; I1025 15:01:13.041337 340088 version.go:254] remote version is much newer: v1.25.3; falling back to: stable-1.20 k8s.gcr.io/kube-apiserver:v1.20.15 k8s.gcr.io/kube-controller-manager:v1.20.15 k8s.gcr.io/kube-scheduler:v1.20.15 k8s.gcr.io/kube-proxy:v1.20.15 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 5.2 使用脚本下载并修改tag cat \u003c\u003cEOF \u003e pull-k8s-images.sh for i in `kubeadm config images list`; do imageName=${i#k8s.gcr.io/} docker pull registry.aliyuncs.com/google_containers/$imageName docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName docker rmi registry.aliyuncs.com/google_containers/$imageName done; EOF # 执行脚本 chmod +x pull-k8s-images.sh ./pull-k8s-images.sh 上述步骤1-5需要在所有的节点执行,下面的步骤只需要在Master节点进行执行。 ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:5","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"6. 安装k8s集群(kubeadm init) kubeadm init --apiserver-advertise-address=\u003c使用自己的ip地址\u003e --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1(修改为自己的版本) --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:6","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"7. 
安装flannel插件 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:7","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":["部署"],"content":"8. 测试集群:安装Nginx测试集群 kubectl create deployment nginx --image=nginx kubectl expose deployment nginx --port=80 --type=NodePort kubectl get pod,svc ","date":"2023-02-12","objectID":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/:0:8","tags":["安装K8s集群"],"title":"K8s第一课-在Ubuntu上安装K8s集群","uri":"/k8s%E7%AC%AC%E4%B8%80%E8%AF%BE-%E5%9C%A8ubuntu%E4%B8%8A%E5%AE%89%E8%A3%85k8s%E9%9B%86%E7%BE%A4/"},{"categories":null,"content":"关于 LoveIt","date":"2019-08-02","objectID":"/about/","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":" LoveIt 是一个由 Dillon 开发的简洁、优雅且高效的 Hugo 博客主题。 它的原型基于 LeaveIt 主题 和 KeepIt 主题。 Hugo 主题 LoveIt ","date":"2019-08-02","objectID":"/about/:0:0","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"特性 ","date":"2019-08-02","objectID":"/about/:1:0","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"性能和 SEO 性能优化:在 Google PageSpeed Insights 中, 99/100 的移动设备得分和 100/100 的桌面设备得分 使用基于 JSON-LD 格式 的 SEO SCHEMA 文件进行 SEO 优化 支持 Google Analytics 支持 Fathom Analytics 支持 Plausible Analytics 支持 Yandex Metrica 支持搜索引擎的网站验证 (Google, Bind, Yandex and Baidu) 支持所有第三方库的 CDN 基于 lazysizes 自动转换图片为懒加载 ","date":"2019-08-02","objectID":"/about/:1:1","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"外观和布局 桌面端/移动端 响应式布局 浅色/深色 主题模式 全局一致的设计语言 支持分页 易用和自动展开的文章目录 支持多语言和国际化 美观的 CSS 动画 社交和评论系统 支持 Gravatar 头像 支持本地头像 支持多达 73 种社交链接 支持多达 24 种网站分享 支持 Disqus 评论系统 支持 Gitalk 评论系统 支持 Valine 评论系统 支持 Facebook comments 
评论系统 支持 Telegram comments 评论系统 支持 Commento 评论系统 支持 utterances 评论系统 支持 giscus 评论系统 ","date":"2019-08-02","objectID":"/about/:1:2","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"扩展功能 支持基于 Lunr.js 或 algolia 的搜索 支持 Twemoji 支持代码高亮 一键复制代码到剪贴板 支持基于 lightGallery 的图片画廊 支持 Font Awesome 图标的扩展 Markdown 语法 支持上标注释的扩展 Markdown 语法 支持分数的扩展 Markdown 语法 支持基于 $\\KaTeX$ 的数学公式 支持基于 mermaid 的图表 shortcode 支持基于 ECharts 的交互式数据可视化 shortcode 支持基于 Mapbox GL JS 的 Mapbox shortcode 支持基于 APlayer 和 MetingJS 的音乐播放器 shortcode 支持 Bilibili 视频 shortcode 支持多种注释的 shortcode 支持自定义样式的 shortcode 支持自定义脚本的 shortcode 支持基于 TypeIt 的打字动画 shortcode 支持基于 cookieconsent 的 Cookie 许可横幅 支持人物标签的 shortcode … ","date":"2019-08-02","objectID":"/about/:1:3","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"许可协议 LoveIt 根据 MIT 许可协议授权。 更多信息请查看 LICENSE 文件。 ","date":"2019-08-02","objectID":"/about/:2:0","tags":null,"title":"关于 LoveIt","uri":"/about/"},{"categories":null,"content":"特别感谢 LoveIt 主题中用到了以下项目,感谢它们的作者: normalize.css Font Awesome Simple Icons Animate.css autocomplete Lunr.js algoliasearch lazysizes object-fit-images Twemoji emoji-data lightGallery clipboard.js Sharer.js TypeIt $\\KaTeX$ mermaid ECharts Mapbox GL JS APlayer MetingJS Gitalk Valine cookieconsent ","date":"2019-08-02","objectID":"/about/:3:0","tags":null,"title":"关于 LoveIt","uri":"/about/"}]