
dpdk driver with containers #36

Open
RahulG115 opened this issue Apr 25, 2018 · 4 comments

Comments

@RahulG115

Hi,
I have deployed two containers with the DPDK driver, but I am unable to find out which device is used by which container.
Can you help me work out which device each container is bound to? In the DPDK tools I see both devices bound to the DPDK driver.
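For context, a minimal sketch of why this view is ambiguous (the script path below is an assumption; it varies between DPDK versions): the devbind tool reports bindings host-wide, so both devices show up as bound to vfio-pci regardless of which container they were handed to.

# On the host: list every PCI NIC and the driver it is bound to.
# This view is host-wide and says nothing about container assignment.
./usertools/dpdk-devbind.py --status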

@rhlyadav

I am facing the same issue.

@uabfra

uabfra commented Jun 26, 2018

Me too; I am having some difficulty sorting this out. Normally you can rely on the order of plugins being retained as net0, net1, ..., but here we don't have any visible kernel devices. Under /dev/vfio I can find some group numbers, and if I run in privileged mode I see all the vfio SR-IOV devices on the blade. I need at least a mapping between netX and the PCI address. I'm thinking of using the network annotation on the pod to put the data there, or perhaps the CNI plugin could somehow create a file in the container?
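A hedged sketch of one such mapping, assuming /sys is visible inside the container: each number under /dev/vfio is an IOMMU group, and sysfs lists the PCI devices that belong to each group.

# Inside the container: the vfio group numbers are the only handle
# we have on "our" devices; resolve each one to its PCI address(es).
for g in /dev/vfio/[0-9]*; do
    grp=$(basename "$g")
    echo "group $grp -> $(ls /sys/kernel/iommu_groups/$grp/devices)"
done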

@rkamudhan
Collaborator

We are working on the SR-IOV network device plugin, which uses the SR-IOV CNI. The plan is to provide strong isolation of the devices. An intermediate solution is available in the DPDK CNI.

@uabfra

uabfra commented Jul 3, 2018

The problem for me is that with more than one network (via Multus) there is no way of finding out which device belongs to which network (according to the pod spec, i.e. net0, net1, ...), because all I can see is a bunch of PCI devices under vfio-pci. What I did was add the creation of a dummy eth device to carry the MAC address and PCI address against the assigned device name:

/ # ip -d link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
2: tunl0@NONE: mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0
    ipip remote any local any ttl inherit nopmtudisc numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
4: eth0@if565: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 52:73:5d:96:ac:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
5: net0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 72:36:25:32:7f:0c brd ff:ff:ff:ff:ff:ff promiscuity 0
    dummy addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    alias vfio@0000:04:10.5
6: net1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 62:76:8f:d6:85:b0 brd ff:ff:ff:ff:ff:ff promiscuity 0
    dummy addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    alias vfio@0000:04:10.7
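A minimal sketch of the commands that would produce such a device (the name, MAC, and PCI address are taken from the output above; in the plugin this would be done once per interface):

# Create a dummy link named after the pod interface, carrying the VF's
# MAC address and, via the link alias, its vfio PCI address.
ip link add net0 type dummy
ip link set dev net0 address 72:36:25:32:7f:0c
ip link set dev net0 alias "vfio@0000:04:10.5"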
