# $Id$
Adding netmap support to network device drivers
------------------------------------------------
Netmap requires some small modifications to device drivers
to support the new API. You will need to add small patches
in 3-4 places in the original source, and typically implement
5 new functions.
Device driver patches
------------------------
+ in the initial part of the source, after the device-specific
headers and prototypes have been declared, add the following
<pre>
+#if defined(DEV_NETMAP) || defined(CONFIG_NETMAP) || defined(CONFIG_NETMAP_MODULE)
+#include <dev/netmap/if_re_netmap.h>
+#endif /* DEV_NETMAP */
</pre>
The place is typically ... in FreeBSD, and
... on Linux.
The header actually contains the new functions that implement
the netmap API. Including them inline simplifies the build,
as it does not require adding extra dependencies to the
build system.
On FreeBSD, DEV_NETMAP is sufficient to detect whether netmap
extensions should be compiled in; CONFIG_NETMAP and
CONFIG_NETMAP_MODULE are the Linux equivalents.
If a driver is made of multiple source files, you will need to include
the additional header in all the (few) patched files, preferably using
a macro such as NETMAP_FOO_MAIN to indicate the file where the
new functions should be compiled in.
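As an illustration, the multi-file pattern could look like the sketch
below; the foo names and file layout are hypothetical, only the guard
macros are the real ones:
<pre>
/* foo_main.c -- the file that should compile the netmap functions */
#if defined(DEV_NETMAP) || defined(CONFIG_NETMAP) || defined(CONFIG_NETMAP_MODULE)
#define NETMAP_FOO_MAIN		/* ask the header for the function bodies */
#include <dev/netmap/if_foo_netmap.h>
#endif /* DEV_NETMAP */

/* any other patched file of the same driver */
#if defined(DEV_NETMAP) || defined(CONFIG_NETMAP) || defined(CONFIG_NETMAP_MODULE)
#include <dev/netmap/if_foo_netmap.h>	/* declarations only */
#endif /* DEV_NETMAP */
</pre>
In this scheme if_foo_netmap.h wraps the function bodies in
#ifdef NETMAP_FOO_MAIN ... #endif so they are compiled only once.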
+ near the end of the attach routine, once the ifnet/net_device structure
has been filled and initialized, add
<pre>
+#ifdef DEV_NETMAP
+ foo_netmap_attach(adapter);
+#endif /* DEV_NETMAP */
</pre>
The argument is either the ifnet or the private device descriptor.
This is in foo_attach() on FreeBSD, and somewhere in the path of
foo_open() on Linux (XXX).
+ near the code called on device removal, add
<pre>
+#ifdef DEV_NETMAP
+ netmap_detach(ifp);
+#endif /* DEV_NETMAP */
</pre>
+ after the tx/rx rings have been initialized, add a patch like this:
<pre>
+#ifdef DEV_NETMAP
+ foo_netmap_config(priv);
+#endif /* DEV_NETMAP */
</pre>
The argument is typically the private device descriptor, or even
the struct ifnet/net_device.
+ in the interrupt dispatch routines, add something like
<pre>
+#ifdef DEV_NETMAP
+ int dummy;
+ if (netmap_rx_irq(adapter->netdev, rx_ring->queue_index, &dummy))
+ return true;
+#endif /* DEV_NETMAP */
...
+#ifdef DEV_NETMAP
+ if (netmap_tx_irq(adapter->netdev, tx_ring->queue_index))
+ return true; /* seems to be ignored */
+#endif /* DEV_NETMAP */
</pre>
to skip the normal processing and instead wake up the process
in charge of doing the I/O.
New functions
----------------
The new functions serve to register the netmap-enabled device driver,
support the enable/disable of netmap mode, attach netmap buffers to the
NIC rings, and finally implement the handlers (*_txsync(), *_rxsync())
called by the system calls.
* foo_netmap_attach()
This is a relatively mechanical function. Its purpose is to fetch,
from the device descriptor, information on the number of rings and
buffers and on the way locks are used, and then invoke netmap_attach().
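As a sketch (assuming a hypothetical foo driver and its softc fields,
and the netmap_adapter field names of recent netmap versions, which
may differ slightly in older trees):
<pre>
static void
foo_netmap_attach(struct foo_adapter *adapter)
{
	struct netmap_adapter na;

	bzero(&na, sizeof(na));
	na.ifp = adapter->ifp;			/* the driver's ifnet/net_device */
	na.num_tx_desc = adapter->num_tx_desc;	/* ring sizes from the softc */
	na.num_rx_desc = adapter->num_rx_desc;
	na.num_tx_rings = na.num_rx_rings = adapter->num_queues;
	na.nm_register = foo_netmap_reg;	/* the new handlers */
	na.nm_txsync = foo_netmap_txsync;
	na.nm_rxsync = foo_netmap_rxsync;
	netmap_attach(&na);	/* netmap copies the descriptor internally */
}
</pre>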
* foo_netmap_config()
This function is in charge of (over)writing the NIC rings with
pointers to the netmap buffers. Although this is device dependent,
we can often ignore the locking issue and expect that the locking is
already taken care of by the caller.
foo_netmap_config() only needs to run if the card is in netmap mode.
A quick way to check is to call netmap_reset() on one of the rings:
if the function returns NULL, we can exit immediately.
Otherwise, we should run a couple of nested loops (on the rings,
and then on the buffers) to fill the NIC descriptors with the
addresses of the (preallocated) netmap buffers.
For the TX rings this can even be a no-op because these rings are
typically uninitialized, and the pointers can be overridden in the
txsync() routine.
For the receive ring, the operation is more critical because the
buffers should be available by the time the NIC is enabled.
Note that the device driver typically maintains head and tail pointers
to indicate which buffers are used. It might be convenient to retain
these indexes, because many of the support routines, watchdogs, etc.
depend on their values.
We should note that, especially on the receive ring, there might be
an offset between the indexes used in the netmap ring and those used
in the NIC ring (which might even be non-contiguous).
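A sketch of the receive side for a hypothetical foo driver follows;
the helpers (netmap_reset(), netmap_idx_n2k(), PNMB()) are the ones
used by the in-tree drivers, but their exact signatures vary across
netmap versions, and the NIC descriptor layout below is made up:
<pre>
static void
foo_netmap_config_rx(struct foo_adapter *adapter, u_int ring_nr)
{
	struct netmap_adapter *na = NA(adapter->ifp);
	struct netmap_slot *slot;
	uint64_t paddr;
	u_int i;
	int si;

	/* netmap_reset() returns NULL if the card is not in netmap mode */
	slot = netmap_reset(na, NR_RX, ring_nr, 0);
	if (slot == NULL)
		return;
	for (i = 0; i < adapter->num_rx_desc; i++) {
		/* NIC index i may be offset wrt. the netmap ring index */
		si = netmap_idx_n2k(na->rx_rings[ring_nr], i);
		PNMB(na, slot + si, &paddr);	/* netmap buffer phys. address */
		/* program the NIC descriptor (device dependent) */
		adapter->rx_desc[i].buffer_addr = htole64(paddr);
		adapter->rx_desc[i].status = 0;
	}
}
</pre>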
* foo_netmap_reg()
Supports entering/exiting netmap mode. Typically: lock, stop the
device, set/clear the netmap flag, and restart the device.
An unfortunate side effect of stopping and restarting the device is
that in many drivers the link is reinitialized, causing long delays
for speed negotiation and spanning tree setup.
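A FreeBSD-flavored sketch, using the nm_set_native_flags()/
nm_clear_native_flags() helpers and the handler signature of recent
netmap versions; foo_stop(), foo_init_locked() and FOO_LOCK()/
FOO_UNLOCK() stand for whatever the driver already uses to stop,
restart and lock the device:
<pre>
static int
foo_netmap_reg(struct netmap_adapter *na, int onoff)
{
	struct foo_adapter *adapter = na->ifp->if_softc;

	FOO_LOCK(adapter);
	foo_stop(adapter);		/* stop the device, disable interrupts */
	if (onoff)
		nm_set_native_flags(na);	/* enter netmap mode */
	else
		nm_clear_native_flags(na);	/* return to normal operation */
	/* restart the device; foo_netmap_config() reloads the rings */
	foo_init_locked(adapter);
	FOO_UNLOCK(adapter);
	return (0);
}
</pre>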
* foo_netmap_txsync()
* foo_netmap_rxsync()
These implement the handlers invoked by the system calls: txsync()
moves new packets from the netmap TX ring to the NIC ring and reclaims
completed buffers, while rxsync() makes newly received buffers visible
to userspace and returns consumed ones to the NIC.
---------------
Porting to different platforms
--------------------------------
Netmap runs on various operating systems, using a common core and
OS-specific functions where needed. The core was originally developed
under FreeBSD, so we tend to follow FreeBSD kernel coding style and APIs.
Among the things that change between platforms are:
- locking
- native packet representation
- network interface APIs
Datapath from netmap host ring to host stack
--------------------------------------------
Packets injected into the host ring (e.g. netmap:eth0^) go through
the following path:
- application puts packets in the host netmap ring
- application calls poll() or NIOCTXSYNC
- kernel runs netmap.c :: netmap_txsync_to_host() which:
- calls netmap.c :: netmap_grab_packets() to extract packets from
the ring, encapsulate them into mbufs through m_devget(), and store
them in a queue. There is a copy involved here, so the netmap buffer
can be freed immediately.
- calls netmap.c :: netmap_send_up(), which dequeues each packet and
sends it up through NM_SEND_UP(), which is redefined on each architecture
XXX netmap_send_up() could be made OS-specific to handle batches
On Windows:
m_devget() is win_make_mbuf(); we could probably create a new NDIS
packet instead and save the second copy later.
NM_SEND_UP is netmap_windows.c :: send_up_to_stack(), which calls
injectPacket; this only uses the payload and frees the mbuf
(the filter is not involved, it only provides injectPacket).
Datapath from host stack to netmap host ring
---------------------------------------------
XXX check documentation
OS-dependent intercept:
FreeBSD: ifp->if_transmit = netmap_transmit
Linux: ndo_start_xmit = generic_ndo_start_xmit (?)
Windows: filter.c :: FilterReceiveNetBufferLists() calls netmap_windows.c :: windows_generic_rx_handler
XXX on Windows this must be cleaned up:
we can use generic_rx_handler() if the mbuf has the right queue number.
It probably does not, unless we have a special case for host rings.
The wrappers create an mbuf if needed, then do an mbq_enqueue(kring->rx_queue)
and a notify to eventually clean up the data. This is done under mbq_lock.
XXX it would be more efficient to copy packets directly into the
netmap ring, but we are not sure whether someone may be manipulating
the ring concurrently. This can be fixed, though.
Later, netmap_rxsync_from_host() copies from the mbufs to the ring
and stores the mbufs in a list for later purging.