ApplicationProfile is not up to date after removal of deployment #411
Comments
@mgalesloot the ApplicationProfile should have been deleted by the periodic cleanup; however, it runs every 24h by default (the interval can be overridden: https://github.com/kubescape/storage/blob/main/main.go#L78-L81), so the stale profile was still present when the deployment was recreated.
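Until that cleanup runs, stale profiles can be spotted by hand. A minimal sketch, assuming only the replicaset-<name> naming convention visible in this issue (the loop itself is illustrative, not Kubescape tooling):
for p in $(kubectl get applicationprofiles.spdx.softwarecomposition.kubescape.io -n default -o name); do
  rs=${p##*/}; rs=${rs#replicaset-}  # strip the resource prefix and the replicaset- naming prefix
  kubectl get rs "$rs" -n default >/dev/null 2>&1 || echo "stale profile: $p"
done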
The scenario I have described is when a user restarts a deployment and the node-agent starts its learning period. During the learning period there is no alerting.
@amitschendel WDYT?
@mgalesloot after checking your use case, a few things to consider:
Description
ApplicationProfile is not up to date after removal of deployment
Environment
K8s: Kind on Docker desktop
Version: quay.io/kubescape/node-agent:v0.2.178
Installation with helm chart:
helm upgrade --install kubescape kubescape/kubescape-operator -n kubescape --create-namespace --set capabilities.runtimeDetection=enable --set alertCRD.installDefault=true --set nodeAgent.config.maxLearningPeriod=10m --set capabilities.continuousScan=enable
Steps To Reproduce
Install Kubescape operator, install nginx deployment. Wait for learning period.
Nginx pods are running, and runtime detection works.
k get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-d556bf558-p6srg 1/1 Running 0 39h
nginx-deployment-d556bf558-pd6x2 1/1 Running 0 39h
Application profile exists
k get applicationprofiles.spdx.softwarecomposition.kubescape.io
NAME CREATED AT
replicaset-nginx-deployment-d556bf558 2024-11-22T17:10:52Z
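While waiting for the learning period, the status annotation can be watched directly. A sketch (the custom-columns expression is mine; the kubescape.io/status annotation is the one queried later in this report):
kubectl get applicationprofiles.spdx.softwarecomposition.kubescape.io -n default -w \
  -o custom-columns='NAME:.metadata.name,STATUS:.metadata.annotations.kubescape\.io/status'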
Now we delete the deployment
k delete deployment nginx-deployment
deployment.apps "nginx-deployment" deleted
The application profile still exists...
k get applicationprofiles.spdx.softwarecomposition.kubescape.io
NAME CREATED AT
replicaset-nginx-deployment-d556bf558 2024-11-22T17:10:52Z
The node agent stopped monitoring
stern node -n kubescape --since 1m | grep nginx
node-agent-lnf6z › node-agent
node-agent-lnf6z node-agent {"level":"info","ts":"2024-11-24T08:16:54Z","msg":"stop monitor on container - container has terminated","container ID":"e7e07c2a57112abc74281299dd94506e35394269dfa3334c4eebd52fe365144a","k8s workload":"default/nginx-deployment-d556bf558-pd6x2/nginx"}
node-agent-lnf6z node-agent {"level":"info","ts":"2024-11-24T08:16:54Z","msg":"stop monitor on container - container has terminated","container ID":"c607b3b617284d3b8fd8e8670e1fd7d157c097cb79ff6db512db48a68e4442f3","k8s workload":"default/nginx-deployment-d556bf558-p6srg/nginx"}
Now create the deployment again. Because the manifest is identical, the new ReplicaSet gets the same pod-template hash (d556bf558) and therefore maps to the existing ApplicationProfile.
kubectl apply -f https://k8s.io/examples/application/deployment.yaml -n default
deployment.apps/nginx-deployment created
The node agent starts monitoring
stern node -n kubescape --since 1m | grep nginx
node-agent-lnf6z › node-agent
node-agent-lnf6z node-agent {"level":"info","ts":"2024-11-24T08:18:16Z","msg":"start monitor on container","container ID":"1670f24d8cfba5b74cfa560f15749292ebc14a5b85544d947d733e8ff0866576","k8s workload":"default/nginx-deployment-d556bf558-h8pvb/nginx"}
node-agent-lnf6z node-agent {"level":"info","ts":"2024-11-24T08:18:16Z","msg":"start monitor on container","container ID":"ae08f838919feabe41ec98a77708f05a19cb7290ab92b7105765a7463dcf6b39","k8s workload":"default/nginx-deployment-d556bf558-hxp85/nginx"}
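The recreated ReplicaSet carrying the same pod-template hash can be confirmed on the cluster (illustrative check; -L prints the label as an extra column):
kubectl get rs -n default -L pod-template-hash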
However, the ApplicationProfile status still shows completed:
k get applicationprofiles.spdx.softwarecomposition.kubescape.io -o yaml | grep 'kubescape.io/status'
kubescape.io/status: completed
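The same annotation can also be read per profile without grepping the YAML. A sketch using standard kubectl jsonpath escaping for the kubescape.io/status key:
kubectl get applicationprofiles.spdx.softwarecomposition.kubescape.io -n default \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations.kubescape\.io/status}{"\n"}{end}'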
Expected behavior
Expected the status in the ApplicationProfile to indicate that monitoring is not yet finalized and that runtime detection is not yet active.
Actual Behavior
ApplicationProfile incorrectly shows the status as 'completed' while the monitoring is still in progress.