Add cmd for creating user kubeconfigs (cont. from #545) #598

Closed

Conversation

josibake
Collaborator

Continues #545. This command is used to create user-specific authentication files after running warnet deploy namespaces/<your custom config>.

The custom config here is for creating a namespace for each team, creating user service accounts in each namespace, and applying to those users whatever roles are defined in the namespaces.yaml and namespace-default.yaml files. If no roles are specified in the config files, whatever is in values.yaml for the namespaces chart is used. Currently, the defaults in the chart are to create pod viewer and pod manager roles for each user.

Testing

  1. warnet admin init - this copies over the example namespaces directory
  2. warnet deploy namespaces/two_namespaces_two_users/
  3. warnet admin create-kubeconfigs wargames- - issues a token for each user in each namespace with the prefix "wargames-"

To test, run warnet auth <user_kubeconfig>. This will switch your current-context in kubectl to this user context. You should be able to deploy and run scenarios, and do everything needed to participate in a war game. If you see a permissions error, you can easily update the namespaces.yaml file to fix the permissions for that user, redeploy the namespaces, and reissue the tokens.
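Roughly, the create-kubeconfigs command boils down to: for each namespace matching the prefix, issue a token for every service account in it and write a kubeconfig for that token. A minimal sketch (illustrative only, not the exact implementation):

import subprocess

def run(cmd: str) -> str:
    # string-ified kubectl, matching the style of the rest of the module
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()

prefix = "wargames-"
namespaces = [
    ns.removeprefix("namespace/")
    for ns in run("kubectl get namespaces -o name").splitlines()
    if ns.removeprefix("namespace/").startswith(prefix)
]
for namespace in namespaces:
    for sa in run(f"kubectl get serviceaccounts -n {namespace} -o name").splitlines():
        sa = sa.removeprefix("serviceaccount/")
        token = run(f"kubectl create token {sa} -n {namespace}")  # requires kubectl >= 1.24
        print(f"issued token for {sa} in {namespace}, writing {sa}-{namespace}-kubeconfig")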

Todo

  • test all permissions are correct - if not, we should update either the defaults in values.yaml or at least update the example
  • write a doc for an admin - the person expected to use these commands
  • write an e2e test

@bdp-DrahtBot
Collaborator

bdp-DrahtBot commented Sep 13, 2024

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #621 (Make test_framework available for users when creating scenarios by pinheadmz)
  • #616 (Kubeconfig Step Thru by mplsgrant)
  • #614 (swap out kubectl by mplsgrant)
  • #612 (Add --debug option to run scenarios for faster development by pinheadmz)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@pinheadmz left a comment
Contributor

Tested out creating and auth-ing from the config files in this branch, everything seemed to work. k9s even refused to list namespaces after switching context to alice-redteam...

I know you are still planning to write docs which is good. I don't understand how the actual network of tanks relates... do you just add namespace: redteam to the individual tanks in network.yaml? and then is there still a "master" namespace where an admin can monitor all tanks at once?

I left a few comments and suggestions below

warnet admin init also copies in the default 6 node network without asking if I want a custom network, and it does not copy in the scenarios. One suggestion: instead of "initializing", the warnet admin command could be more of an "upgrade" to a project directory that adds namespace stuff without disturbing networks or scenarios.

OR it would be even cooler if warnet admin init did create a network as well, with inquirer, and also distributed the namespaces according to how many teams the user wants, etc.

os.makedirs(kubeconfig_dir, exist_ok=True)

# Get all namespaces that start with prefix
# This assumes when deploying multiple namespacs for the purpose of team games, all namespaces start with a prefix,
Contributor

s "namespaces"


# Get all namespaces that start with prefix
# This assumes when deploying multiple namespacs for the purpose of team games, all namespaces start with a prefix,
# e.g., tabconf-wargames-*. Currently, this is a bit brittle, but we can improve on this in the future
Contributor

Yeah, it feels like instead of using naming conventions, k8s probably wants us to use metadata labels or something.
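e.g., something like this would make the lookup independent of the naming convention (the label key/value are just an example, nothing sets them yet):

import subprocess

cmd = "kubectl get namespaces -l app.kubernetes.io/part-of=wargames -o jsonpath='{.items[*].metadata.name}'"
namespaces = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.split()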

Collaborator Author

Yeah, it would be great to have a "cluster level" configmap. We could have a wargames-admin namespace, where we create a configmap for all of the war-games related metadata (e.g., all of the namespaces, players), and then any administration commands can reference that to get all of the correct values. This admin namespace could also be used for deploying the signet miner, etc.
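Something like this could be a starting point (everything here is hypothetical, none of these names exist in the codebase yet):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
# an admin-only namespace to hold cluster-level game metadata (and later the signet miner, etc.)
v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="wargames-admin")))
# a configmap the admin commands could read instead of relying on namespace naming conventions
v1.create_namespaced_config_map(
    namespace="wargames-admin",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="wargames-metadata"),
        data={
            "namespaces": "wargames-red-team,wargames-blue-team",
            "players": "alice,bob,carol",
        },
    ),
)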

# Create a kubeconfig file for the user
kubeconfig_file = os.path.join(kubeconfig_dir, f"{sa}-{namespace}-kubeconfig")

# TODO: move yaml out of python code to resources/manifests/
Contributor

You could use yaml.dump() like in deploy_namespaces() and deploy_network().
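Rough sketch of what that could look like with the values the function already has in scope (the placeholder values below are just for illustration):

import yaml

sa, namespace, token = "alice", "wargames-blue-team", "<service-account-token>"
cluster_name, cluster_server, cluster_ca = "warnet", "https://127.0.0.1:6443", "<base64-ca-data>"
kubeconfig_file = f"{sa}-{namespace}-kubeconfig"

# build the kubeconfig as a dict and let yaml.dump() handle the serialization
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{"name": cluster_name, "cluster": {"server": cluster_server, "certificate-authority-data": cluster_ca}}],
    "users": [{"name": sa, "user": {"token": token}}],
    "contexts": [{"name": f"{sa}-{namespace}", "context": {"cluster": cluster_name, "namespace": namespace, "user": sa}}],
    "current-context": f"{sa}-{namespace}",
}
with open(kubeconfig_file, "w") as f:
    yaml.dump(kubeconfig, f)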



def get_kubeconfig_value(jsonpath):
command = f"kubectl config view --minify -o jsonpath={jsonpath}"
Contributor

Seems a little silly to me to import the kubernetes python client library into this module and then just use shell commands anyway, but the whole file is like that, so it should just be a cleanup PR later.

Collaborator Author

Heh, I had the same thought. I am somewhat in favour of using the Kubernetes python client for everything, but noticed everyone was defaulting to string-ifying kubectl commands. So long as we keep all the k8s logic in this file, it should be super easy to refactor this at some point in the future.
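For what it's worth, a python-client version of get_kubeconfig_value() could look roughly like this (a sketch for that future cleanup, not part of this PR):

from kubernetes import client, config

config.load_kube_config()
# active context from the local kubeconfig, equivalent to `kubectl config view --minify`
_, active = config.list_kube_config_contexts()
cluster_name = active["context"]["cluster"]
default_namespace = active["context"].get("namespace", "default")
api_server = client.Configuration.get_default_copy().host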

@josibake
Collaborator Author

josibake commented Sep 13, 2024

Thanks for the review @pinheadmz, good questions!

I don't understand how the actual network of tanks relates... do you just add namespace: redteam to the individual tanks in network.yaml?

A network of tanks is always meant to be deployed into a single namespace. The nice thing is, since we are using Bitcoin Core's addnode to define the connections, any single network.yaml file can connect to nodes outside of the current network.yaml. So the way I'm picturing this is something like:

  1. Admin chooses the number of teams and users and creates a namespaces.yaml for configuring this particular "war-game" setup.
  2. Admin chooses the "network" each team will start with and creates a <team>-network.yaml file for each team, where in each <team>-node-defaults.yaml file a team-specific bitcoin rpcuser/password is set for the nodes.
    • One option here is to then give the teams their own network file and have them deploy it, e.g., warnet deploy networks/blue_team/ (this works today since teams have permissions to create pods). This means the network would get deployed into whatever kubectl thinks is their default namespace, which will have already been set automatically in kubectl when they import the kubeconfig file. This option "just works" today. The downside is teams can just tear down and redeploy their network if they are losing.
    • The other option would be to add namespace as an argument to the network.yaml file, but this goes against the "helm philosophy".
    • The third option, which I prefer, is to add --namespace as an argument to the warnet deploy command as a way of overriding whatever kubectl thinks is the default namespace (see the sketch after this list). That also means we could build some sort of admin quickstart on top of this that would guide an admin through deploying multiple networks into multiple namespaces, etc. This would enable the admin to deploy the networks into each team namespace. This will be necessary if/when we decide to limit team permissions so they can only create "commander" pods, and not just generally create pods.
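As a sketch of what option 3 could look like (the helm release name here is illustrative; the real deploy command builds its own helm invocation and streams it through stream_command()):

import click

@click.command()
@click.argument("directory")
@click.option("--namespace", default=None, help="override the kubectl default namespace")
def deploy(directory, namespace):
    # only add the flag when the admin explicitly overrides the namespace
    namespace_flag = f" --namespace {namespace} --create-namespace" if namespace else ""
    cmd = f"helm upgrade --install warnet-network {directory}{namespace_flag}"
    print(cmd)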

and then is there still a "master" namespace where an admin can monitor all tanks at once?

In my mind, there really is no such thing as a "master" namespace. Namespaces are for grouping resources and limiting permissions, so if you are the cluster admin, you already have access to all of the namespaces. Namespaces can also be given permissions to look into other namespaces, e.g., warnet-logging MUST have permissions to get stuff from all the other namespaces (iirc it already works like this today). In the two-team example, we could also deploy a third namespace for the admins where we run the signet miner. We would just define the roles and permissions such that whatever is deployed in the admin namespace has access to all the other namespaces.

OR it would be even cooler if warnet admin init did create a network as well, with inquirer, and also distributed the namespaces according to how many teams the user wants, etc.

I really like this idea, I'll try to implement this. I do agree it's a bit clunky how warnet admin init works today, it's mostly unchanged from what I hacked together in Italy, so I think doing something that only creates a namespaces dir and goes through a quickstart for deploying multiple teams is a really good idea.

@pinheadmz
Contributor

In my mind, there really is no such thing as a "master" namespace

Would be nice for the big tabconf scoreboard if we could view all the pod statuses like with k9s. If the logging stack can aggregate all pod info, that might be enough...

@m3dwards
Collaborator

Do the users then just run scenarios? These permissions do not allow deploying the logging stack, but I'm not sure that's a problem?

(.venv) ➜  adminin warnet deploy networks/6_node_bitcoin
Found collectLogs or metricsExport in network definition, Deploying logging stack
"grafana" already exists with the same configuration, skipping
"prometheus-community" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "scalinglightning" chart repository
Update Complete. ⎈Happy Helming!⎈
Error: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:wargames-blue-team:carol" cannot list resource "secrets" in API group "" in the namespace "warnet-logging"
Traceback (most recent call last):
  File "/Users/maxedwards/source/warnet/.venv/bin/warnet", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/deploy.py", line 50, in deploy
    dl = deploy_logging_stack(directory, debug)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/deploy.py", line 94, in deploy_logging_stack
    if not stream_command(command):
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/process.py", line 28, in stream_command
    raise Exception(process.stderr)
Exception: None

@josibake
Collaborator Author

Do the users then just run scenarios

The expectation is the logging stack would be deployed into warnet-logging or a similar namespace. Users would then interface with the dashboards via the "big dashboard", or via an ingress endpoint locally.

I don't think it makes sense to have logging running in each namespace, especially for something like fork-observer. I've got a local branch where I'm reworking some of the flow and permissions to make things more seamless, but I had to pivot a bit this week. Will try to finish today/tomorrow.

@josibake
Collaborator Author

If the logging stack can aggregate all pod info that might be enough

This is ideally how it should work. We know it already works across namespaces because warnet-logging aggregates information from the "default" (warnet) namespace. The only thing that needs to change is the assumption that there is only ever one namespace. For this, I added some code to annotate the team namespaces with a label so that whenever we want to get all the pods in all of the team namespaces, we query the namespaces by label and then use that.
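A minimal sketch of that lookup with the python client (the label key/value are illustrative, not necessarily what the annotation code uses):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
# find all team namespaces by label, then collect their pods
team_namespaces = v1.list_namespace(label_selector="app.kubernetes.io/part-of=wargames")
for ns in team_namespaces.items:
    for pod in v1.list_namespaced_pod(ns.metadata.name).items:
        print(ns.metadata.name, pod.metadata.name, pod.status.phase)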

josibake and others added 3 commits September 26, 2024 15:10
namespaces.yaml is meant for describing the overall structure of what you want
with specific overrides for specific users as needed. the "default" roles should
be defined in namespace-defaults.yaml so that they are automatically applied by default
for each user in each namespace.

at a lower level, defaults that should be applied by default for *any* namespaces deployment
should be defined in values.yaml.

namespace-defaults.yaml is meant to override values.yaml in the event that,
for a particular namespaces deployment, the admin wants to create
tailor-made roles and permissions. otherwise, this can stay empty
and whatever is in values.yaml will be applied.

update example prefix to wargames, to illustrate this is not relying on
a default namespace of warnet. this probably needs some more thought,
but I think it's best to address how to pipe through the name in a
followup rather than slow this PR down.
Replace the setup_contexts.sh script with a proper warnet command.

Co-authored-by: mplsgrant <[email protected]>
allow the user to specify a namespace to deploy into (overrides kubectl default namespace)
@m3dwards
Collaborator

m3dwards commented Oct 2, 2024

Closing in favour of the replacement #616

@m3dwards closed this Oct 2, 2024