/*-
 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
 *
 * Copyright (C) 2011-2014 Matteo Landi
 * Copyright (C) 2011-2016 Luigi Rizzo
 * Copyright (C) 2011-2016 Giuseppe Lettieri
 * Copyright (C) 2011-2016 Vincenzo Maffione
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */


/*
 * $FreeBSD$
 *
 * This module supports memory-mapped access to network devices,
 * see netmap(4).
 *
 * The module uses a large memory pool allocated by the kernel
 * and accessible as mmapped memory by multiple userspace threads/processes.
 * The memory pool contains packet buffers and "netmap rings",
 * i.e. user-accessible copies of the interface's queues.
 *
 * Access to the network card works like this:
 * 1. a process/thread issues one or more open() on /dev/netmap, to create
 *    a select()able file descriptor on which events are reported.
 * 2. on each descriptor, the process issues an ioctl() to identify
 *    the interface that should report events to the file descriptor.
 * 3. on each descriptor, the process issues an mmap() request to
 *    map the shared memory region within the process' address space.
 *    The list of interesting queues is indicated by a location in
 *    the shared memory region.
 * 4. using the functions in the netmap(4) userspace API, a process
 *    can look up the occupation state of a queue, access memory buffers,
 *    and retrieve received packets or enqueue packets to transmit.
 * 5. using some ioctl()s the process can synchronize the userspace view
 *    of the queue with the actual status in the kernel. This includes both
 *    receiving the notification of new packets, and transmitting new
 *    packets on the output interface.
 * 6. select() or poll() can be used to wait for events on individual
 *    transmit or receive queues (or all queues for a given interface).
 *
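 *	As an illustration (not part of this module), a minimal userspace
 *	receive loop following steps 1-6 above, written against the helpers
 *	in net/netmap_user.h, could look roughly like this; error handling
 *	and cleanup are omitted, and "em0" is just a placeholder name:
 *
 *		int fd = open("/dev/netmap", O_RDWR);		// step 1
 *		struct nmreq nmr = { .nr_version = NETMAP_API };
 *		strncpy(nmr.nr_name, "em0", sizeof(nmr.nr_name));
 *		ioctl(fd, NIOCREGIF, &nmr);			// step 2
 *		void *mem = mmap(NULL, nmr.nr_memsize,		// step 3
 *		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 *		struct netmap_if *nifp = NETMAP_IF(mem, nmr.nr_offset);
 *		struct netmap_ring *ring = NETMAP_RXRING(nifp, 0);
 *		for (;;) {
 *			struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *			poll(&pfd, 1, -1);			// steps 5-6
 *			while (!nm_ring_empty(ring)) {		// step 4
 *				struct netmap_slot *slot = &ring->slot[ring->cur];
 *				char *buf = NETMAP_BUF(ring, slot->buf_idx);
 *				// ... process slot->len bytes at buf ...
 *				ring->head = ring->cur = nm_ring_next(ring, ring->cur);
 *			}
 *		}
 *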

		SYNCHRONIZATION (USER)

The netmap rings and data structures may be shared among multiple
user threads or even independent processes.
Any synchronization among those threads/processes is delegated
to the threads themselves. Only one thread at a time can be in
a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.

		LOCKING (INTERNAL)

Within the kernel, access to the netmap rings is protected as follows:

- a spinlock on each ring, to handle producer/consumer races on
  RX rings attached to the host stack (against multiple host
  threads writing from the host stack to the same ring),
  and on 'destination' rings attached to a VALE switch
  (i.e. RX rings in VALE ports, and TX rings in NIC/host ports),
  protecting multiple active senders for the same destination.

- an atomic variable to guarantee that there is at most one
  instance of *_*xsync() on the ring at any time.
  For rings connected to user file
  descriptors, an atomic_test_and_set() protects this, and the
  lock on the ring is not actually used.
  For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
  is also used to prevent multiple executions (the driver might indeed
  already guarantee this).
  For NIC TX rings connected to a VALE switch, the lock arbitrates
  access to the queue (both when allocating buffers and when pushing
  them out).

- *xsync() should be protected against initializations of the card.
  On FreeBSD most devices have the reset routine protected by
  a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
  the RING protection on rx_reset(); this should be added.

  On linux there is an external lock on the tx path, which probably
  also arbitrates access to the reset routine. XXX to be revised

- a per-interface core_lock protecting access from the host stack
  while interfaces may be detached from netmap mode.
  XXX there should be no need for this lock if we detach the interfaces
  only while they are down.


	--- VALE SWITCH ---

NMG_LOCK() serializes all modifications to switches and ports.
A switch cannot be deleted until all ports are gone.

For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring or deleting a port, the
lock is acquired in exclusive mode (after holding NMG_LOCK).
When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
The lock is held throughout the entire forwarding cycle,
during which the thread may incur a page fault.
Hence it is important that sleepable shared locks are used.

On the rx ring, the per-port lock is grabbed initially to reserve
a number of slots in the ring, then the lock is released,
packets are copied from source to destination, and then
the lock is acquired again and the receive ring is updated.
(A similar thing is done on the tx ring for NIC and host stack
ports attached to the switch.)
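
In pseudo-code, the receive side of that two-phase scheme is (an
illustrative sketch only; the real code lives in nm_bdg_flush() and
uses lease counters in the kring tailroom rather than these
hypothetical helpers):

	lock(dst_ring);
	first = reserve_slots(dst_ring, n);	// just advance a reservation counter
	unlock(dst_ring);

	copy_packets(src_ring, dst_ring, first, n);	// no lock held here

	lock(dst_ring);
	complete_slots(dst_ring, first, n);	// make the slots visible to the consumer
	unlock(dst_ring);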
 */


/* --- internals ----
 *
 * Roadmap to the code that implements the above.
 *
 * > 1. a process/thread issues one or more open() on /dev/netmap, to create
 * >    a select()able file descriptor on which events are reported.
 *
 * 	Internally, we allocate a netmap_priv_d structure, that will be
 * 	initialized on ioctl(NIOCREGIF). There is one netmap_priv_d
 * 	structure for each open().
 *
 * 	os-specific:
 * 	    FreeBSD: see netmap_open() (netmap_freebsd.c)
 * 	    linux:   see linux_netmap_open() (netmap_linux.c)
 *
 * > 2. on each descriptor, the process issues an ioctl() to identify
 * >    the interface that should report events to the file descriptor.
 *
 * 	Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
 * 	Most important things happen in netmap_get_na() and
 * 	netmap_do_regif(), called from there. Additional details can be
 * 	found in the comments above those functions.
 *
 * 	In all cases, this action creates/takes-a-reference-to a
 * 	netmap_*_adapter describing the port, and allocates a netmap_if
 * 	and all necessary netmap rings, filling them with netmap buffers.
 *
 * 	In this phase, the sync callbacks for each ring are set (these are used
 * 	in steps 5 and 6 below).  The callbacks depend on the type of adapter.
 * 	The adapter creation/initialization code puts them in the
 * 	netmap_adapter (fields na->nm_txsync and na->nm_rxsync).  Then, they
 * 	are copied from there to the netmap_kring's during netmap_do_regif(), by
 * 	the nm_krings_create() callback.  All the nm_krings_create callbacks
 * 	actually call netmap_krings_create() to perform this and the other
 * 	common stuff.  netmap_krings_create() also takes care of the host rings,
 * 	if needed, by setting their sync callbacks appropriately.
 *
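 * 	For a native adapter the wiring established here is, conceptually
 * 	(illustration only; DEVICE_ stands for the driver-specific routine,
 * 	as in the DATAPATHS section below):
 *
 * 		na->nm_txsync  = DEVICE_netmap_txsync;	// set by the driver attach code
 * 		na->nm_rxsync  = DEVICE_netmap_rxsync;
 * 		kring->nm_sync = na->nm_txsync;		// copied by netmap_krings_create();
 * 							// host krings get netmap_txsync_to_host
 * 							// or netmap_rxsync_from_host instead
 *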
 * 	Additional actions depend on the kind of netmap_adapter that has been
 * 	registered:
 *
 * 	- netmap_hw_adapter:		[netmap.c]
 * 	      This is a system netdev/ifp with native netmap support.
 * 	      The ifp is detached from the host stack by redirecting:
 * 	        - transmissions (from the network stack) to netmap_transmit()
 * 	        - receive notifications to the nm_notify() callback for
 * 	          this adapter. The callback is normally netmap_notify(), unless
 * 	          the ifp is attached to a bridge using bwrap, in which case it
 * 	          is netmap_bwrap_intr_notify().
 *
 * 	- netmap_generic_adapter:	[netmap_generic.c]
 * 	      A system netdev/ifp without native netmap support.
 *
 * 	(the decision about native/non-native support is taken in
 * 	 netmap_get_hw_na(), called by netmap_get_na())
 *
 * 	- netmap_vp_adapter		[netmap_vale.c]
 * 	      Returned by netmap_get_bdg_na().
 * 	      This is a persistent or ephemeral VALE port. Ephemeral ports
 * 	      are created on the fly if they don't already exist, and are
 * 	      always attached to a bridge.
 * 	      Persistent VALE ports must be created separately, and then
 * 	      attached like normal NICs. The NIOCREGIF we are examining
 * 	      will find them only if they had previously been created and
 * 	      attached (see VALE_CTL below).
 *
 * 	- netmap_pipe_adapter		[netmap_pipe.c]
 * 	      Returned by netmap_get_pipe_na().
 * 	      Both pipe ends are created, if they didn't already exist.
 *
 * 	- netmap_monitor_adapter	[netmap_monitor.c]
 * 	      Returned by netmap_get_monitor_na().
 * 	      If successful, the nm_sync callbacks of the monitored adapter
 * 	      will be intercepted by the returned monitor.
 *
 * 	- netmap_bwrap_adapter		[netmap_vale.c]
 * 	      Cannot be obtained in this way, see VALE_CTL below
 *
 *
 * 	os-specific:
 * 	    linux: we first go through linux_netmap_ioctl() to
 * 	           adapt the FreeBSD interface to the linux one.
 *
 *
 * > 3. on each descriptor, the process issues an mmap() request to
 * >    map the shared memory region within the process' address space.
 * >    The list of interesting queues is indicated by a location in
 * >    the shared memory region.
 *
 * 	os-specific:
 * 	    FreeBSD: netmap_mmap_single (netmap_freebsd.c).
 * 	    linux:   linux_netmap_mmap (netmap_linux.c).
 *
 * > 4. using the functions in the netmap(4) userspace API, a process
 * >    can look up the occupation state of a queue, access memory buffers,
 * >    and retrieve received packets or enqueue packets to transmit.
 *
 * 	These actions do not involve the kernel.
 *
 * > 5. using some ioctl()s the process can synchronize the userspace view
 * >    of the queue with the actual status in the kernel. This includes both
 * >    receiving the notification of new packets, and transmitting new
 * >    packets on the output interface.
 *
 * 	These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
 * 	cases. They invoke the nm_sync callbacks on the netmap_kring
 * 	structures, as initialized in step 2 and maybe later modified
 * 	by a monitor. Monitors, however, will always call the original
 * 	callback before doing anything else.
 *
 *
 * > 6. select() or poll() can be used to wait for events on individual
 * >    transmit or receive queues (or all queues for a given interface).
 *
 * 	Implemented in netmap_poll(). This will call the same nm_sync()
 * 	callbacks as in step 5 above.
 *
 * 	os-specific:
 * 	    linux: we first go through linux_netmap_poll() to adapt
 * 	           the FreeBSD interface to the linux one.
 *
 *
 *  ---- VALE_CTL -----
 *
 * 	VALE switches are controlled by issuing a NIOCREGIF with a non-null
 * 	nr_cmd in the nmreq structure. These subcommands are handled by
 * 	netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
 * 	and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
 * 	subcommands, respectively.
 *
 * 	Any network interface known to the system (including a persistent VALE
 * 	port) can be attached to a VALE switch by issuing the
 * 	NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE
 * 	ports look exactly like ephemeral VALE ports (as created in step 2
 * 	above).  The attachment of other interfaces, instead, requires the
 * 	creation of a netmap_bwrap_adapter.  Moreover, the attached interface
 * 	must be put in netmap mode. This may require the creation of a
 * 	netmap_generic_adapter if we have no native support for the interface,
 * 	or if generic adapters have been forced by sysctl.
 *
 * 	Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
 * 	called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
 * 	callback.  In the case of the bwrap, the callback creates the
 * 	netmap_bwrap_adapter.  The initialization of the bwrap is then
 * 	completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
 * 	callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
 * 	A generic adapter for the wrapped ifp will be created if needed, when
 * 	netmap_get_bdg_na() calls netmap_get_hw_na().
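 *
 * 	For reference, attaching an existing interface to a VALE switch from
 * 	userspace goes through the control API roughly as follows (a sketch
 * 	only; "vale0" and "em0" are placeholder names, error handling omitted,
 * 	field names taken from net/netmap.h):
 *
 * 		struct nmreq_header hdr;
 * 		struct nmreq_vale_attach req;
 *
 * 		memset(&hdr, 0, sizeof(hdr));
 * 		memset(&req, 0, sizeof(req));
 * 		hdr.nr_version = NETMAP_API;
 * 		hdr.nr_reqtype = NETMAP_REQ_VALE_ATTACH;
 * 		strncpy(hdr.nr_name, "vale0:em0", sizeof(hdr.nr_name) - 1);
 * 		hdr.nr_body = (uintptr_t)&req;
 * 		req.reg.nr_mode = NR_REG_ALL_NIC;
 * 		ioctl(fd, NIOCCTRL, &hdr);	// fd is an open /dev/netmap descriptor
 *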
281 * 282 * 283 * ---- DATAPATHS ----- 284 * 285 * -= SYSTEM DEVICE WITH NATIVE SUPPORT =- 286 * 287 * na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach() 288 * 289 * - tx from netmap userspace: 290 * concurrently: 291 * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context 292 * kring->nm_sync() == DEVICE_netmap_txsync() 293 * 2) device interrupt handler 294 * na->nm_notify() == netmap_notify() 295 * - rx from netmap userspace: 296 * concurrently: 297 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context 298 * kring->nm_sync() == DEVICE_netmap_rxsync() 299 * 2) device interrupt handler 300 * na->nm_notify() == netmap_notify() 301 * - rx from host stack 302 * concurrently: 303 * 1) host stack 304 * netmap_transmit() 305 * na->nm_notify == netmap_notify() 306 * 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context 307 * kring->nm_sync() == netmap_rxsync_from_host 308 * netmap_rxsync_from_host(na, NULL, NULL) 309 * - tx to host stack 310 * ioctl(NIOCTXSYNC)/netmap_poll() in process context 311 * kring->nm_sync() == netmap_txsync_to_host 312 * netmap_txsync_to_host(na) 313 * nm_os_send_up() 314 * FreeBSD: na->if_input() == ether_input() 315 * linux: netif_rx() with NM_MAGIC_PRIORITY_RX 316 * 317 * 318 * -= SYSTEM DEVICE WITH GENERIC SUPPORT =- 319 * 320 * na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach() 321 * 322 * - tx from netmap userspace: 323 * concurrently: 324 * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context 325 * kring->nm_sync() == generic_netmap_txsync() 326 * nm_os_generic_xmit_frame() 327 * linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX 328 * ifp->ndo_start_xmit == generic_ndo_start_xmit() 329 * gna->save_start_xmit == orig. dev. start_xmit 330 * FreeBSD: na->if_transmit() == orig. dev if_transmit 331 * 2) generic_mbuf_destructor() 332 * na->nm_notify() == netmap_notify() 333 * - rx from netmap userspace: 334 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context 335 * kring->nm_sync() == generic_netmap_rxsync() 336 * mbq_safe_dequeue() 337 * 2) device driver 338 * generic_rx_handler() 339 * mbq_safe_enqueue() 340 * na->nm_notify() == netmap_notify() 341 * - rx from host stack 342 * FreeBSD: same as native 343 * Linux: same as native except: 344 * 1) host stack 345 * dev_queue_xmit() without NM_MAGIC_PRIORITY_TX 346 * ifp->ndo_start_xmit == generic_ndo_start_xmit() 347 * netmap_transmit() 348 * na->nm_notify() == netmap_notify() 349 * - tx to host stack (same as native): 350 * 351 * 352 * -= VALE =- 353 * 354 * INCOMING: 355 * 356 * - VALE ports: 357 * ioctl(NIOCTXSYNC)/netmap_poll() in process context 358 * kring->nm_sync() == netmap_vp_txsync() 359 * 360 * - system device with native support: 361 * from cable: 362 * interrupt 363 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) 364 * kring->nm_sync() == DEVICE_netmap_rxsync() 365 * netmap_vp_txsync() 366 * kring->nm_sync() == DEVICE_netmap_rxsync() 367 * from host stack: 368 * netmap_transmit() 369 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) 370 * kring->nm_sync() == netmap_rxsync_from_host() 371 * netmap_vp_txsync() 372 * 373 * - system device with generic support: 374 * from device driver: 375 * generic_rx_handler() 376 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) 377 * kring->nm_sync() == generic_netmap_rxsync() 378 * netmap_vp_txsync() 379 * kring->nm_sync() == generic_netmap_rxsync() 380 * from host stack: 381 * netmap_transmit() 382 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) 383 
* kring->nm_sync() == netmap_rxsync_from_host() 384 * netmap_vp_txsync() 385 * 386 * (all cases) --> nm_bdg_flush() 387 * dest_na->nm_notify() == (see below) 388 * 389 * OUTGOING: 390 * 391 * - VALE ports: 392 * concurrently: 393 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context 394 * kring->nm_sync() == netmap_vp_rxsync() 395 * 2) from nm_bdg_flush() 396 * na->nm_notify() == netmap_notify() 397 * 398 * - system device with native support: 399 * to cable: 400 * na->nm_notify() == netmap_bwrap_notify() 401 * netmap_vp_rxsync() 402 * kring->nm_sync() == DEVICE_netmap_txsync() 403 * netmap_vp_rxsync() 404 * to host stack: 405 * netmap_vp_rxsync() 406 * kring->nm_sync() == netmap_txsync_to_host 407 * netmap_vp_rxsync_locked() 408 * 409 * - system device with generic adapter: 410 * to device driver: 411 * na->nm_notify() == netmap_bwrap_notify() 412 * netmap_vp_rxsync() 413 * kring->nm_sync() == generic_netmap_txsync() 414 * netmap_vp_rxsync() 415 * to host stack: 416 * netmap_vp_rxsync() 417 * kring->nm_sync() == netmap_txsync_to_host 418 * netmap_vp_rxsync() 419 * 420 */ 421 422 /* 423 * OS-specific code that is used only within this file. 424 * Other OS-specific code that must be accessed by drivers 425 * is present in netmap_kern.h 426 */ 427 428 #if defined(__FreeBSD__) 429 #include <sys/cdefs.h> /* prerequisite */ 430 #include <sys/types.h> 431 #include <sys/errno.h> 432 #include <sys/param.h> /* defines used in kernel.h */ 433 #include <sys/kernel.h> /* types used in module initialization */ 434 #include <sys/conf.h> /* cdevsw struct, UID, GID */ 435 #include <sys/filio.h> /* FIONBIO */ 436 #include <sys/sockio.h> 437 #include <sys/socketvar.h> /* struct socket */ 438 #include <sys/malloc.h> 439 #include <sys/poll.h> 440 #include <sys/rwlock.h> 441 #include <sys/socket.h> /* sockaddrs */ 442 #include <sys/selinfo.h> 443 #include <sys/sysctl.h> 444 #include <sys/jail.h> 445 #include <net/vnet.h> 446 #include <net/if.h> 447 #include <net/if_var.h> 448 #include <net/bpf.h> /* BIOCIMMEDIATE */ 449 #include <machine/bus.h> /* bus_dmamap_* */ 450 #include <sys/endian.h> 451 #include <sys/refcount.h> 452 #include <net/ethernet.h> /* ETHER_BPF_MTAP */ 453 454 455 #elif defined(linux) 456 457 #include "bsd_glue.h" 458 459 #elif defined(__APPLE__) 460 461 #warning OSX support is only partial 462 #include "osx_glue.h" 463 464 #elif defined (_WIN32) 465 466 #include "win_glue.h" 467 468 #else 469 470 #error Unsupported platform 471 472 #endif /* unsupported */ 473 474 /* 475 * common headers 476 */ 477 #include <net/netmap.h> 478 #include <dev/netmap/netmap_kern.h> 479 #include <dev/netmap/netmap_mem2.h> 480 481 482 /* user-controlled variables */ 483 int netmap_verbose; 484 #ifdef CONFIG_NETMAP_DEBUG 485 int netmap_debug; 486 #endif /* CONFIG_NETMAP_DEBUG */ 487 488 static int netmap_no_timestamp; /* don't timestamp on rxsync */ 489 int netmap_no_pendintr = 1; 490 int netmap_txsync_retry = 2; 491 static int netmap_fwd = 0; /* force transparent forwarding */ 492 493 /* 494 * netmap_admode selects the netmap mode to use. 495 * Invalid values are reset to NETMAP_ADMODE_BEST 496 */ 497 enum { NETMAP_ADMODE_BEST = 0, /* use native, fallback to generic */ 498 NETMAP_ADMODE_NATIVE, /* either native or none */ 499 NETMAP_ADMODE_GENERIC, /* force generic */ 500 NETMAP_ADMODE_LAST }; 501 static int netmap_admode = NETMAP_ADMODE_BEST; 502 503 /* netmap_generic_mit controls mitigation of RX notifications for 504 * the generic netmap adapter. The value is a time interval in 505 * nanoseconds. 
 */
int netmap_generic_mit = 100*1000;

/* We use by default netmap-aware qdiscs with generic netmap adapters,
 * even if there can be a little performance hit with hardware NICs.
 * However, using the qdisc is the safer approach, for two reasons:
 * 1) it prevents non-fifo qdiscs from breaking the TX notification
 *    scheme, which is based on mbuf destructors when txqdisc is
 *    not used.
 * 2) it makes it possible to transmit over software devices that
 *    change skb->dev, like bridge, veth, ...
 *
 * Anyway, users looking for the best performance should
 * use native adapters.
 */
#ifdef linux
int netmap_generic_txqdisc = 1;
#endif

/* Default number of slots and queues for generic adapters. */
int netmap_generic_ringsize = 1024;
int netmap_generic_rings = 1;

/* Non-zero to enable checksum offloading in NIC drivers */
int netmap_generic_hwcsum = 0;

/* Non-zero if ptnet devices are allowed to use virtio-net headers. */
int ptnet_vnet_hdr = 1;

/*
 * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
 * in some other operating systems
 */
SYSBEGIN(main_init);

SYSCTL_DECL(_dev_netmap);
SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
		CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
#ifdef CONFIG_NETMAP_DEBUG
SYSCTL_INT(_dev_netmap, OID_AUTO, debug,
		CTLFLAG_RW, &netmap_debug, 0, "Debug messages");
#endif /* CONFIG_NETMAP_DEBUG */
SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
		CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr,
		0, "Always look for new received packets.");
SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
		&netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush.");

SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0,
		"Force NR_FORWARD mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0,
		"Adapter mode. 0 selects the best option available, "
		"1 forces native adapter, 2 forces emulated adapter");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_hwcsum, CTLFLAG_RW, &netmap_generic_hwcsum,
		0, "Hardware checksums. 0 to disable checksum generation by the NIC (default), "
		"1 to enable checksum generation by the NIC");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit,
		0, "RX notification interval in nanoseconds");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW,
		&netmap_generic_ringsize, 0,
		"Number of per-ring slots for emulated netmap mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW,
		&netmap_generic_rings, 0,
		"Number of TX/RX queues for emulated netmap adapters");
#ifdef linux
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW,
		&netmap_generic_txqdisc, 0, "Use qdisc for generic adapters");
#endif
SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr,
		0, "Allow ptnet devices to use virtio-net headers");

SYSEND;

NMG_LOCK_T	netmap_global_lock;

/*
 * mark the ring as stopped, and run through the locks
 * to make sure other users get to see it.
 * stopped must be either NM_KR_STOPPED (for unbounded stop)
 * or NM_KR_LOCKED (brief stop for mutual exclusion purposes)
 */
static void
netmap_disable_ring(struct netmap_kring *kr, int stopped)
{
	nm_kr_stop(kr, stopped);
	// XXX check if nm_kr_stop is sufficient
	mtx_lock(&kr->q_lock);
	mtx_unlock(&kr->q_lock);
	nm_kr_put(kr);
}

/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
	if (stopped)
		netmap_disable_ring(NMR(na, t)[ring_id], stopped);
	else
		NMR(na, t)[ring_id]->nkr_stopped = 0;
}


/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
	int i;
	enum txrx t;

	if (!nm_netmap_on(na))
		return;

	for_rx_tx(t) {
		for (i = 0; i < netmap_real_rings(na, t); i++) {
			netmap_set_ring(na, i, t, stopped);
		}
	}
}

/*
 * Convenience function used in drivers.  Waits for current txsync()s/rxsync()s
 * to finish and prevents any new one from starting.  Call this before turning
 * netmap mode off, or before removing the hardware rings (e.g., on module
 * unload).
 */
void
netmap_disable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), NM_KR_STOPPED);
	}
}

/*
 * Convenience function used in drivers.  Re-enables rxsync and txsync on the
 * adapter's rings.  In linux drivers, this should be placed near each
 * napi_enable().
 */
void
netmap_enable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), 0 /* enabled */);
	}
}

void
netmap_make_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		netmap_set_all_rings(na, NM_KR_LOCKED);
		na->na_flags |= NAF_ZOMBIE;
		netmap_set_all_rings(na, 0);
	}
}

void
netmap_undo_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		if (na->na_flags & NAF_ZOMBIE) {
			netmap_set_all_rings(na, NM_KR_LOCKED);
			na->na_flags &= ~NAF_ZOMBIE;
			netmap_set_all_rings(na, 0);
		}
	}
}

/*
 * generic bound-checking function
 */
u_int
nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
{
	u_int oldv = *v;
	const char *op = NULL;

	if (dflt < lo)
		dflt = lo;
	if (dflt > hi)
		dflt = hi;
	if (oldv < lo) {
		*v = dflt;
		op = "Bump";
	} else if (oldv > hi) {
		*v = hi;
		op = "Clamp";
	}
	if (op && msg)
		nm_prinf("%s %s to %d (was %d)", op, msg, *v, oldv);
	return *v;
}


/*
 * packet-dump function, user-supplied or static buffer.
 * The destination buffer must be at least 30+4*len
 */
const char *
nm_dump_buf(char *p, int len, int lim, char *dst)
{
	static char _dst[8192];
	int i, j, i0;
	static char hex[] ="0123456789abcdef";
	char *o;	/* output position */

#define P_HI(x)	hex[((x) & 0xf0)>>4]
#define P_LO(x)	hex[((x) & 0xf)]
#define P_C(x)	((x) >= 0x20 && (x) <= 0x7e ?
(x) : '.') 718 if (!dst) 719 dst = _dst; 720 if (lim <= 0 || lim > len) 721 lim = len; 722 o = dst; 723 sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim); 724 o += strlen(o); 725 /* hexdump routine */ 726 for (i = 0; i < lim; ) { 727 sprintf(o, "%5d: ", i); 728 o += strlen(o); 729 memset(o, ' ', 48); 730 i0 = i; 731 for (j=0; j < 16 && i < lim; i++, j++) { 732 o[j*3] = P_HI(p[i]); 733 o[j*3+1] = P_LO(p[i]); 734 } 735 i = i0; 736 for (j=0; j < 16 && i < lim; i++, j++) 737 o[j + 48] = P_C(p[i]); 738 o[j+48] = '\n'; 739 o += j+49; 740 } 741 *o = '\0'; 742 #undef P_HI 743 #undef P_LO 744 #undef P_C 745 return dst; 746 } 747 748 749 /* 750 * Fetch configuration from the device, to cope with dynamic 751 * reconfigurations after loading the module. 752 */ 753 /* call with NMG_LOCK held */ 754 int 755 netmap_update_config(struct netmap_adapter *na) 756 { 757 struct nm_config_info info; 758 759 bzero(&info, sizeof(info)); 760 if (na->nm_config == NULL || 761 na->nm_config(na, &info)) { 762 /* take whatever we had at init time */ 763 info.num_tx_rings = na->num_tx_rings; 764 info.num_tx_descs = na->num_tx_desc; 765 info.num_rx_rings = na->num_rx_rings; 766 info.num_rx_descs = na->num_rx_desc; 767 info.rx_buf_maxsize = na->rx_buf_maxsize; 768 } 769 770 if (na->num_tx_rings == info.num_tx_rings && 771 na->num_tx_desc == info.num_tx_descs && 772 na->num_rx_rings == info.num_rx_rings && 773 na->num_rx_desc == info.num_rx_descs && 774 na->rx_buf_maxsize == info.rx_buf_maxsize) 775 return 0; /* nothing changed */ 776 if (na->active_fds == 0) { 777 na->num_tx_rings = info.num_tx_rings; 778 na->num_tx_desc = info.num_tx_descs; 779 na->num_rx_rings = info.num_rx_rings; 780 na->num_rx_desc = info.num_rx_descs; 781 na->rx_buf_maxsize = info.rx_buf_maxsize; 782 if (netmap_verbose) 783 nm_prinf("configuration changed for %s: txring %d x %d, " 784 "rxring %d x %d, rxbufsz %d", 785 na->name, na->num_tx_rings, na->num_tx_desc, 786 na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize); 787 return 0; 788 } 789 nm_prerr("WARNING: configuration changed for %s while active: " 790 "txring %d x %d, rxring %d x %d, rxbufsz %d", 791 na->name, info.num_tx_rings, info.num_tx_descs, 792 info.num_rx_rings, info.num_rx_descs, 793 info.rx_buf_maxsize); 794 return 1; 795 } 796 797 /* nm_sync callbacks for the host rings */ 798 static int netmap_txsync_to_host(struct netmap_kring *kring, int flags); 799 static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags); 800 801 /* create the krings array and initialize the fields common to all adapters. 802 * The array layout is this: 803 * 804 * +----------+ 805 * na->tx_rings ----->| | \ 806 * | | } na->num_tx_ring 807 * | | / 808 * +----------+ 809 * | | host tx kring 810 * na->rx_rings ----> +----------+ 811 * | | \ 812 * | | } na->num_rx_rings 813 * | | / 814 * +----------+ 815 * | | host rx kring 816 * +----------+ 817 * na->tailroom ----->| | \ 818 * | | } tailroom bytes 819 * | | / 820 * +----------+ 821 * 822 * Note: for compatibility, host krings are created even when not needed. 823 * The tailroom space is currently used by vale ports for allocating leases. 
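 *
 * For example (illustration only), with this layout:
 *
 *      na->tx_rings[i]                          i-th hardware TX kring
 *      na->tx_rings[nma_get_nrings(na, NR_TX)]  host TX kring (if any)
 *      NMR(na, t)[i]                            generic accessor, resolving to
 *                                               na->tx_rings[i] or na->rx_rings[i]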
824 */ 825 /* call with NMG_LOCK held */ 826 int 827 netmap_krings_create(struct netmap_adapter *na, u_int tailroom) 828 { 829 u_int i, len, ndesc; 830 struct netmap_kring *kring; 831 u_int n[NR_TXRX]; 832 enum txrx t; 833 int err = 0; 834 835 if (na->tx_rings != NULL) { 836 if (netmap_debug & NM_DEBUG_ON) 837 nm_prerr("warning: krings were already created"); 838 return 0; 839 } 840 841 /* account for the (possibly fake) host rings */ 842 n[NR_TX] = netmap_all_rings(na, NR_TX); 843 n[NR_RX] = netmap_all_rings(na, NR_RX); 844 845 len = (n[NR_TX] + n[NR_RX]) * 846 (sizeof(struct netmap_kring) + sizeof(struct netmap_kring *)) 847 + tailroom; 848 849 na->tx_rings = nm_os_malloc((size_t)len); 850 if (na->tx_rings == NULL) { 851 nm_prerr("Cannot allocate krings"); 852 return ENOMEM; 853 } 854 na->rx_rings = na->tx_rings + n[NR_TX]; 855 na->tailroom = na->rx_rings + n[NR_RX]; 856 857 /* link the krings in the krings array */ 858 kring = (struct netmap_kring *)((char *)na->tailroom + tailroom); 859 for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) { 860 na->tx_rings[i] = kring; 861 kring++; 862 } 863 864 /* 865 * All fields in krings are 0 except the one initialized below. 866 * but better be explicit on important kring fields. 867 */ 868 for_rx_tx(t) { 869 ndesc = nma_get_ndesc(na, t); 870 for (i = 0; i < n[t]; i++) { 871 kring = NMR(na, t)[i]; 872 bzero(kring, sizeof(*kring)); 873 kring->notify_na = na; 874 kring->ring_id = i; 875 kring->tx = t; 876 kring->nkr_num_slots = ndesc; 877 kring->nr_mode = NKR_NETMAP_OFF; 878 kring->nr_pending_mode = NKR_NETMAP_OFF; 879 if (i < nma_get_nrings(na, t)) { 880 kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync); 881 } else { 882 if (!(na->na_flags & NAF_HOST_RINGS)) 883 kring->nr_kflags |= NKR_FAKERING; 884 kring->nm_sync = (t == NR_TX ? 885 netmap_txsync_to_host: 886 netmap_rxsync_from_host); 887 } 888 kring->nm_notify = na->nm_notify; 889 kring->rhead = kring->rcur = kring->nr_hwcur = 0; 890 /* 891 * IMPORTANT: Always keep one slot empty. 892 */ 893 kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0); 894 snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name, 895 nm_txrx2str(t), i); 896 nm_prdis("ktx %s h %d c %d t %d", 897 kring->name, kring->rhead, kring->rcur, kring->rtail); 898 err = nm_os_selinfo_init(&kring->si, kring->name); 899 if (err) { 900 netmap_krings_delete(na); 901 return err; 902 } 903 mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF); 904 kring->na = na; /* setting this field marks the mutex as initialized */ 905 } 906 err = nm_os_selinfo_init(&na->si[t], na->name); 907 if (err) { 908 netmap_krings_delete(na); 909 return err; 910 } 911 } 912 913 return 0; 914 } 915 916 917 /* undo the actions performed by netmap_krings_create */ 918 /* call with NMG_LOCK held */ 919 void 920 netmap_krings_delete(struct netmap_adapter *na) 921 { 922 struct netmap_kring **kring = na->tx_rings; 923 enum txrx t; 924 925 if (na->tx_rings == NULL) { 926 if (netmap_debug & NM_DEBUG_ON) 927 nm_prerr("warning: krings were already deleted"); 928 return; 929 } 930 931 for_rx_tx(t) 932 nm_os_selinfo_uninit(&na->si[t]); 933 934 /* we rely on the krings layout described above */ 935 for ( ; kring != na->tailroom; kring++) { 936 if ((*kring)->na != NULL) 937 mtx_destroy(&(*kring)->q_lock); 938 nm_os_selinfo_uninit(&(*kring)->si); 939 } 940 nm_os_free(na->tx_rings); 941 na->tx_rings = na->rx_rings = na->tailroom = NULL; 942 } 943 944 945 /* 946 * Destructor for NIC ports. 
They also have an mbuf queue 947 * on the rings connected to the host so we need to purge 948 * them first. 949 */ 950 /* call with NMG_LOCK held */ 951 void 952 netmap_hw_krings_delete(struct netmap_adapter *na) 953 { 954 u_int lim = netmap_real_rings(na, NR_RX), i; 955 956 for (i = nma_get_nrings(na, NR_RX); i < lim; i++) { 957 struct mbq *q = &NMR(na, NR_RX)[i]->rx_queue; 958 nm_prdis("destroy sw mbq with len %d", mbq_len(q)); 959 mbq_purge(q); 960 mbq_safe_fini(q); 961 } 962 netmap_krings_delete(na); 963 } 964 965 static void 966 netmap_mem_drop(struct netmap_adapter *na) 967 { 968 int last = netmap_mem_deref(na->nm_mem, na); 969 /* if the native allocator had been overrided on regif, 970 * restore it now and drop the temporary one 971 */ 972 if (last && na->nm_mem_prev) { 973 netmap_mem_put(na->nm_mem); 974 na->nm_mem = na->nm_mem_prev; 975 na->nm_mem_prev = NULL; 976 } 977 } 978 979 /* 980 * Undo everything that was done in netmap_do_regif(). In particular, 981 * call nm_register(ifp,0) to stop netmap mode on the interface and 982 * revert to normal operation. 983 */ 984 /* call with NMG_LOCK held */ 985 static void netmap_unset_ringid(struct netmap_priv_d *); 986 static void netmap_krings_put(struct netmap_priv_d *); 987 void 988 netmap_do_unregif(struct netmap_priv_d *priv) 989 { 990 struct netmap_adapter *na = priv->np_na; 991 992 NMG_LOCK_ASSERT(); 993 na->active_fds--; 994 /* unset nr_pending_mode and possibly release exclusive mode */ 995 netmap_krings_put(priv); 996 997 #ifdef WITH_MONITOR 998 /* XXX check whether we have to do something with monitor 999 * when rings change nr_mode. */ 1000 if (na->active_fds <= 0) { 1001 /* walk through all the rings and tell any monitor 1002 * that the port is going to exit netmap mode 1003 */ 1004 netmap_monitor_stop(na); 1005 } 1006 #endif 1007 1008 if (na->active_fds <= 0 || nm_kring_pending(priv)) { 1009 na->nm_register(na, 0); 1010 } 1011 1012 /* delete rings and buffers that are no longer needed */ 1013 netmap_mem_rings_delete(na); 1014 1015 if (na->active_fds <= 0) { /* last instance */ 1016 /* 1017 * (TO CHECK) We enter here 1018 * when the last reference to this file descriptor goes 1019 * away. This means we cannot have any pending poll() 1020 * or interrupt routine operating on the structure. 1021 * XXX The file may be closed in a thread while 1022 * another thread is using it. 1023 * Linux keeps the file opened until the last reference 1024 * by any outstanding ioctl/poll or mmap is gone. 1025 * FreeBSD does not track mmap()s (but we do) and 1026 * wakes up any sleeping poll(). Need to check what 1027 * happens if the close() occurs while a concurrent 1028 * syscall is running. 
		 */
		if (netmap_debug & NM_DEBUG_ON)
			nm_prinf("deleting last instance for %s", na->name);

		if (nm_netmap_on(na)) {
			nm_prerr("BUG: netmap on while going to delete the krings");
		}

		na->nm_krings_delete(na);

		/* restore the default number of host tx and rx rings */
		na->num_host_tx_rings = 1;
		na->num_host_rx_rings = 1;
	}

	/* possibly decrement counter of tx_si/rx_si users */
	netmap_unset_ringid(priv);
	/* delete the nifp */
	netmap_mem_if_delete(na, priv->np_nifp);
	/* drop the allocator */
	netmap_mem_drop(na);
	/* mark the priv as unregistered */
	priv->np_na = NULL;
	priv->np_nifp = NULL;
}

struct netmap_priv_d*
netmap_priv_new(void)
{
	struct netmap_priv_d *priv;

	priv = nm_os_malloc(sizeof(struct netmap_priv_d));
	if (priv == NULL)
		return NULL;
	priv->np_refs = 1;
	nm_os_get_module();
	return priv;
}

/*
 * Destructor of the netmap_priv_d, called when the fd is closed.
 * Action: undo all the things done by NIOCREGIF.
 * On FreeBSD we need to track whether there are active mmap()s,
 * and we use np_active_mmaps for that. On linux, the field is always 0.
 * Return: 1 if we can free priv, 0 otherwise.
 *
 */
/* call with NMG_LOCK held */
void
netmap_priv_delete(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	/* number of active references to this fd */
	if (--priv->np_refs > 0) {
		return;
	}
	nm_os_put_module();
	if (na) {
		netmap_do_unregif(priv);
	}
	netmap_unget_na(na, priv->np_ifp);
	bzero(priv, sizeof(*priv));	/* for safety */
	nm_os_free(priv);
}


/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
	struct netmap_priv_d *priv = data;

	NMG_LOCK();
	netmap_priv_delete(priv);
	NMG_UNLOCK();
}


/*
 * Handlers for synchronization of the rings from/to the host stack.
 * These are associated to a network interface and are just another
 * ring pair managed by userspace.
 *
 * Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
 * flags):
 *
 * - Before releasing buffers on hw RX rings, the application can mark
 *   them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
 *   will be forwarded to the host stack, similarly to what would happen if
 *   the application moved them to the host TX ring.
 *
 * - Before releasing buffers on the host RX ring, the application can
 *   mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
 *   they will be forwarded to the hw TX rings, saving the application
 *   from doing the same task in user-space.
 *
 * Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
 * flag, or globally with the netmap_fwd sysctl.
 *
 * The transfer NIC --> host is relatively easy, just encapsulate
 * into mbufs and we are done. The host --> NIC side is slightly
 * harder because there might not be room in the tx ring so it
 * might take a while before releasing the buffer.
 */


/*
 * Pass a whole queue of mbufs to the host stack as coming from 'dst'.
 * We do not need to lock because the queue is private.
 * After this call the queue is empty.
1140 */ 1141 static void 1142 netmap_send_up(struct ifnet *dst, struct mbq *q) 1143 { 1144 struct mbuf *m; 1145 struct mbuf *head = NULL, *prev = NULL; 1146 1147 /* Send packets up, outside the lock; head/prev machinery 1148 * is only useful for Windows. */ 1149 while ((m = mbq_dequeue(q)) != NULL) { 1150 if (netmap_debug & NM_DEBUG_HOST) 1151 nm_prinf("sending up pkt %p size %d", m, MBUF_LEN(m)); 1152 prev = nm_os_send_up(dst, m, prev); 1153 if (head == NULL) 1154 head = prev; 1155 } 1156 if (head) 1157 nm_os_send_up(dst, NULL, head); 1158 mbq_fini(q); 1159 } 1160 1161 1162 /* 1163 * Scan the buffers from hwcur to ring->head, and put a copy of those 1164 * marked NS_FORWARD (or all of them if forced) into a queue of mbufs. 1165 * Drop remaining packets in the unlikely event 1166 * of an mbuf shortage. 1167 */ 1168 static void 1169 netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force) 1170 { 1171 u_int const lim = kring->nkr_num_slots - 1; 1172 u_int const head = kring->rhead; 1173 u_int n; 1174 struct netmap_adapter *na = kring->na; 1175 1176 for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) { 1177 struct mbuf *m; 1178 struct netmap_slot *slot = &kring->ring->slot[n]; 1179 1180 if ((slot->flags & NS_FORWARD) == 0 && !force) 1181 continue; 1182 if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) { 1183 nm_prlim(5, "bad pkt at %d len %d", n, slot->len); 1184 continue; 1185 } 1186 slot->flags &= ~NS_FORWARD; // XXX needed ? 1187 /* XXX TODO: adapt to the case of a multisegment packet */ 1188 m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL); 1189 1190 if (m == NULL) 1191 break; 1192 mbq_enqueue(q, m); 1193 } 1194 } 1195 1196 static inline int 1197 _nm_may_forward(struct netmap_kring *kring) 1198 { 1199 return ((netmap_fwd || kring->ring->flags & NR_FORWARD) && 1200 kring->na->na_flags & NAF_HOST_RINGS && 1201 kring->tx == NR_RX); 1202 } 1203 1204 static inline int 1205 nm_may_forward_up(struct netmap_kring *kring) 1206 { 1207 return _nm_may_forward(kring) && 1208 kring->ring_id != kring->na->num_rx_rings; 1209 } 1210 1211 static inline int 1212 nm_may_forward_down(struct netmap_kring *kring, int sync_flags) 1213 { 1214 return _nm_may_forward(kring) && 1215 (sync_flags & NAF_CAN_FORWARD_DOWN) && 1216 kring->ring_id == kring->na->num_rx_rings; 1217 } 1218 1219 /* 1220 * Send to the NIC rings packets marked NS_FORWARD between 1221 * kring->nr_hwcur and kring->rhead. 1222 * Called under kring->rx_queue.lock on the sw rx ring. 1223 * 1224 * It can only be called if the user opened all the TX hw rings, 1225 * see NAF_CAN_FORWARD_DOWN flag. 1226 * We can touch the TX netmap rings (slots, head and cur) since 1227 * we are in poll/ioctl system call context, and the application 1228 * is not supposed to touch the ring (using a different thread) 1229 * during the execution of the system call. 1230 */ 1231 static u_int 1232 netmap_sw_to_nic(struct netmap_adapter *na) 1233 { 1234 struct netmap_kring *kring = na->rx_rings[na->num_rx_rings]; 1235 struct netmap_slot *rxslot = kring->ring->slot; 1236 u_int i, rxcur = kring->nr_hwcur; 1237 u_int const head = kring->rhead; 1238 u_int const src_lim = kring->nkr_num_slots - 1; 1239 u_int sent = 0; 1240 1241 /* scan rings to find space, then fill as much as possible */ 1242 for (i = 0; i < na->num_tx_rings; i++) { 1243 struct netmap_kring *kdst = na->tx_rings[i]; 1244 struct netmap_ring *rdst = kdst->ring; 1245 u_int const dst_lim = kdst->nkr_num_slots - 1; 1246 1247 /* XXX do we trust ring or kring->rcur,rtail ? 
*/ 1248 for (; rxcur != head && !nm_ring_empty(rdst); 1249 rxcur = nm_next(rxcur, src_lim) ) { 1250 struct netmap_slot *src, *dst, tmp; 1251 u_int dst_head = rdst->head; 1252 1253 src = &rxslot[rxcur]; 1254 if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd) 1255 continue; 1256 1257 sent++; 1258 1259 dst = &rdst->slot[dst_head]; 1260 1261 tmp = *src; 1262 1263 src->buf_idx = dst->buf_idx; 1264 src->flags = NS_BUF_CHANGED; 1265 1266 dst->buf_idx = tmp.buf_idx; 1267 dst->len = tmp.len; 1268 dst->flags = NS_BUF_CHANGED; 1269 1270 rdst->head = rdst->cur = nm_next(dst_head, dst_lim); 1271 } 1272 /* if (sent) XXX txsync ? it would be just an optimization */ 1273 } 1274 return sent; 1275 } 1276 1277 1278 /* 1279 * netmap_txsync_to_host() passes packets up. We are called from a 1280 * system call in user process context, and the only contention 1281 * can be among multiple user threads erroneously calling 1282 * this routine concurrently. 1283 */ 1284 static int 1285 netmap_txsync_to_host(struct netmap_kring *kring, int flags) 1286 { 1287 struct netmap_adapter *na = kring->na; 1288 u_int const lim = kring->nkr_num_slots - 1; 1289 u_int const head = kring->rhead; 1290 struct mbq q; 1291 1292 /* Take packets from hwcur to head and pass them up. 1293 * Force hwcur = head since netmap_grab_packets() stops at head 1294 */ 1295 mbq_init(&q); 1296 netmap_grab_packets(kring, &q, 1 /* force */); 1297 nm_prdis("have %d pkts in queue", mbq_len(&q)); 1298 kring->nr_hwcur = head; 1299 kring->nr_hwtail = head + lim; 1300 if (kring->nr_hwtail > lim) 1301 kring->nr_hwtail -= lim + 1; 1302 1303 netmap_send_up(na->ifp, &q); 1304 return 0; 1305 } 1306 1307 1308 /* 1309 * rxsync backend for packets coming from the host stack. 1310 * They have been put in kring->rx_queue by netmap_transmit(). 1311 * We protect access to the kring using kring->rx_queue.lock 1312 * 1313 * also moves to the nic hw rings any packet the user has marked 1314 * for transparent-mode forwarding, then sets the NR_FORWARD 1315 * flag in the kring to let the caller push them out 1316 */ 1317 static int 1318 netmap_rxsync_from_host(struct netmap_kring *kring, int flags) 1319 { 1320 struct netmap_adapter *na = kring->na; 1321 struct netmap_ring *ring = kring->ring; 1322 u_int nm_i, n; 1323 u_int const lim = kring->nkr_num_slots - 1; 1324 u_int const head = kring->rhead; 1325 int ret = 0; 1326 struct mbq *q = &kring->rx_queue, fq; 1327 1328 mbq_init(&fq); /* fq holds packets to be freed */ 1329 1330 mbq_lock(q); 1331 1332 /* First part: import newly received packets */ 1333 n = mbq_len(q); 1334 if (n) { /* grab packets from the queue */ 1335 struct mbuf *m; 1336 uint32_t stop_i; 1337 1338 nm_i = kring->nr_hwtail; 1339 stop_i = nm_prev(kring->nr_hwcur, lim); 1340 while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) { 1341 int len = MBUF_LEN(m); 1342 struct netmap_slot *slot = &ring->slot[nm_i]; 1343 1344 m_copydata(m, 0, len, NMB(na, slot)); 1345 nm_prdis("nm %d len %d", nm_i, len); 1346 if (netmap_debug & NM_DEBUG_HOST) 1347 nm_prinf("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL)); 1348 1349 slot->len = len; 1350 slot->flags = 0; 1351 nm_i = nm_next(nm_i, lim); 1352 mbq_enqueue(&fq, m); 1353 } 1354 kring->nr_hwtail = nm_i; 1355 } 1356 1357 /* 1358 * Second part: skip past packets that userspace has released. 
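	 * Slots between nr_hwcur and rhead have been returned by the
	 * application; if transparent forwarding is enabled for this ring
	 * (see nm_may_forward_down()), those marked NS_FORWARD are pushed
	 * to the NIC TX rings by netmap_sw_to_nic() below, and NR_FORWARD
	 * is set in the kring so that the caller can flush them out.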
1359 */ 1360 nm_i = kring->nr_hwcur; 1361 if (nm_i != head) { /* something was released */ 1362 if (nm_may_forward_down(kring, flags)) { 1363 ret = netmap_sw_to_nic(na); 1364 if (ret > 0) { 1365 kring->nr_kflags |= NR_FORWARD; 1366 ret = 0; 1367 } 1368 } 1369 kring->nr_hwcur = head; 1370 } 1371 1372 mbq_unlock(q); 1373 1374 mbq_purge(&fq); 1375 mbq_fini(&fq); 1376 1377 return ret; 1378 } 1379 1380 1381 /* Get a netmap adapter for the port. 1382 * 1383 * If it is possible to satisfy the request, return 0 1384 * with *na containing the netmap adapter found. 1385 * Otherwise return an error code, with *na containing NULL. 1386 * 1387 * When the port is attached to a bridge, we always return 1388 * EBUSY. 1389 * Otherwise, if the port is already bound to a file descriptor, 1390 * then we unconditionally return the existing adapter into *na. 1391 * In all the other cases, we return (into *na) either native, 1392 * generic or NULL, according to the following table: 1393 * 1394 * native_support 1395 * active_fds dev.netmap.admode YES NO 1396 * ------------------------------------------------------- 1397 * >0 * NA(ifp) NA(ifp) 1398 * 1399 * 0 NETMAP_ADMODE_BEST NATIVE GENERIC 1400 * 0 NETMAP_ADMODE_NATIVE NATIVE NULL 1401 * 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC 1402 * 1403 */ 1404 static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */ 1405 int 1406 netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na) 1407 { 1408 /* generic support */ 1409 int i = netmap_admode; /* Take a snapshot. */ 1410 struct netmap_adapter *prev_na; 1411 int error = 0; 1412 1413 *na = NULL; /* default */ 1414 1415 /* reset in case of invalid value */ 1416 if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST) 1417 i = netmap_admode = NETMAP_ADMODE_BEST; 1418 1419 if (NM_NA_VALID(ifp)) { 1420 prev_na = NA(ifp); 1421 /* If an adapter already exists, return it if 1422 * there are active file descriptors or if 1423 * netmap is not forced to use generic 1424 * adapters. 1425 */ 1426 if (NETMAP_OWNED_BY_ANY(prev_na) 1427 || i != NETMAP_ADMODE_GENERIC 1428 || prev_na->na_flags & NAF_FORCE_NATIVE 1429 #ifdef WITH_PIPES 1430 /* ugly, but we cannot allow an adapter switch 1431 * if some pipe is referring to this one 1432 */ 1433 || prev_na->na_next_pipe > 0 1434 #endif 1435 ) { 1436 *na = prev_na; 1437 goto assign_mem; 1438 } 1439 } 1440 1441 /* If there isn't native support and netmap is not allowed 1442 * to use generic adapters, we cannot satisfy the request. 1443 */ 1444 if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE) 1445 return EOPNOTSUPP; 1446 1447 /* Otherwise, create a generic adapter and return it, 1448 * saving the previously used netmap adapter, if any. 1449 * 1450 * Note that here 'prev_na', if not NULL, MUST be a 1451 * native adapter, and CANNOT be a generic one. This is 1452 * true because generic adapters are created on demand, and 1453 * destroyed when not used anymore. Therefore, if the adapter 1454 * currently attached to an interface 'ifp' is generic, it 1455 * must be that 1456 * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))). 1457 * Consequently, if NA(ifp) is generic, we will enter one of 1458 * the branches above. This ensures that we never override 1459 * a generic adapter with another generic adapter. 
1460 */ 1461 error = generic_netmap_attach(ifp); 1462 if (error) 1463 return error; 1464 1465 *na = NA(ifp); 1466 1467 assign_mem: 1468 if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) && 1469 (*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) { 1470 (*na)->nm_mem_prev = (*na)->nm_mem; 1471 (*na)->nm_mem = netmap_mem_get(nmd); 1472 } 1473 1474 return 0; 1475 } 1476 1477 /* 1478 * MUST BE CALLED UNDER NMG_LOCK() 1479 * 1480 * Get a refcounted reference to a netmap adapter attached 1481 * to the interface specified by req. 1482 * This is always called in the execution of an ioctl(). 1483 * 1484 * Return ENXIO if the interface specified by the request does 1485 * not exist, ENOTSUP if netmap is not supported by the interface, 1486 * EBUSY if the interface is already attached to a bridge, 1487 * EINVAL if parameters are invalid, ENOMEM if needed resources 1488 * could not be allocated. 1489 * If successful, hold a reference to the netmap adapter. 1490 * 1491 * If the interface specified by req is a system one, also keep 1492 * a reference to it and return a valid *ifp. 1493 */ 1494 int 1495 netmap_get_na(struct nmreq_header *hdr, 1496 struct netmap_adapter **na, struct ifnet **ifp, 1497 struct netmap_mem_d *nmd, int create) 1498 { 1499 struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body; 1500 int error = 0; 1501 struct netmap_adapter *ret = NULL; 1502 int nmd_ref = 0; 1503 1504 *na = NULL; /* default return value */ 1505 *ifp = NULL; 1506 1507 if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) { 1508 return EINVAL; 1509 } 1510 1511 if (req->nr_mode == NR_REG_PIPE_MASTER || 1512 req->nr_mode == NR_REG_PIPE_SLAVE) { 1513 /* Do not accept deprecated pipe modes. */ 1514 nm_prerr("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax"); 1515 return EINVAL; 1516 } 1517 1518 NMG_LOCK_ASSERT(); 1519 1520 /* if the request contain a memid, try to find the 1521 * corresponding memory region 1522 */ 1523 if (nmd == NULL && req->nr_mem_id) { 1524 nmd = netmap_mem_find(req->nr_mem_id); 1525 if (nmd == NULL) 1526 return EINVAL; 1527 /* keep the rereference */ 1528 nmd_ref = 1; 1529 } 1530 1531 /* We cascade through all possible types of netmap adapter. 1532 * All netmap_get_*_na() functions return an error and an na, 1533 * with the following combinations: 1534 * 1535 * error na 1536 * 0 NULL type doesn't match 1537 * !0 NULL type matches, but na creation/lookup failed 1538 * 0 !NULL type matches and na created/found 1539 * !0 !NULL impossible 1540 */ 1541 error = netmap_get_null_na(hdr, na, nmd, create); 1542 if (error || *na != NULL) 1543 goto out; 1544 1545 /* try to see if this is a monitor port */ 1546 error = netmap_get_monitor_na(hdr, na, nmd, create); 1547 if (error || *na != NULL) 1548 goto out; 1549 1550 /* try to see if this is a pipe port */ 1551 error = netmap_get_pipe_na(hdr, na, nmd, create); 1552 if (error || *na != NULL) 1553 goto out; 1554 1555 /* try to see if this is a bridge port */ 1556 error = netmap_get_vale_na(hdr, na, nmd, create); 1557 if (error) 1558 goto out; 1559 1560 if (*na != NULL) /* valid match in netmap_get_bdg_na() */ 1561 goto out; 1562 1563 /* 1564 * This must be a hardware na, lookup the name in the system. 1565 * Note that by hardware we actually mean "it shows up in ifconfig". 1566 * This may still be a tap, a veth/epair, or even a 1567 * persistent VALE port. 
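	 * The ifnet reference taken below by ifunit_ref() is paired with
	 * the if_rele() performed by netmap_unget_na() (or by the error
	 * path at the end of this function).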
1568 */ 1569 *ifp = ifunit_ref(hdr->nr_name); 1570 if (*ifp == NULL) { 1571 error = ENXIO; 1572 goto out; 1573 } 1574 1575 error = netmap_get_hw_na(*ifp, nmd, &ret); 1576 if (error) 1577 goto out; 1578 1579 *na = ret; 1580 netmap_adapter_get(ret); 1581 1582 /* 1583 * if the adapter supports the host rings and it is not alread open, 1584 * try to set the number of host rings as requested by the user 1585 */ 1586 if (((*na)->na_flags & NAF_HOST_RINGS) && (*na)->active_fds == 0) { 1587 if (req->nr_host_tx_rings) 1588 (*na)->num_host_tx_rings = req->nr_host_tx_rings; 1589 if (req->nr_host_rx_rings) 1590 (*na)->num_host_rx_rings = req->nr_host_rx_rings; 1591 } 1592 nm_prdis("%s: host tx %d rx %u", (*na)->name, (*na)->num_host_tx_rings, 1593 (*na)->num_host_rx_rings); 1594 1595 out: 1596 if (error) { 1597 if (ret) 1598 netmap_adapter_put(ret); 1599 if (*ifp) { 1600 if_rele(*ifp); 1601 *ifp = NULL; 1602 } 1603 } 1604 if (nmd_ref) 1605 netmap_mem_put(nmd); 1606 1607 return error; 1608 } 1609 1610 /* undo netmap_get_na() */ 1611 void 1612 netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp) 1613 { 1614 if (ifp) 1615 if_rele(ifp); 1616 if (na) 1617 netmap_adapter_put(na); 1618 } 1619 1620 1621 #define NM_FAIL_ON(t) do { \ 1622 if (unlikely(t)) { \ 1623 nm_prlim(5, "%s: fail '" #t "' " \ 1624 "h %d c %d t %d " \ 1625 "rh %d rc %d rt %d " \ 1626 "hc %d ht %d", \ 1627 kring->name, \ 1628 head, cur, ring->tail, \ 1629 kring->rhead, kring->rcur, kring->rtail, \ 1630 kring->nr_hwcur, kring->nr_hwtail); \ 1631 return kring->nkr_num_slots; \ 1632 } \ 1633 } while (0) 1634 1635 /* 1636 * validate parameters on entry for *_txsync() 1637 * Returns ring->cur if ok, or something >= kring->nkr_num_slots 1638 * in case of error. 1639 * 1640 * rhead, rcur and rtail=hwtail are stored from previous round. 1641 * hwcur is the next packet to send to the ring. 1642 * 1643 * We want 1644 * hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail 1645 * 1646 * hwcur, rhead, rtail and hwtail are reliable 1647 */ 1648 u_int 1649 nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) 1650 { 1651 u_int head = ring->head; /* read only once */ 1652 u_int cur = ring->cur; /* read only once */ 1653 u_int n = kring->nkr_num_slots; 1654 1655 nm_prdis(5, "%s kcur %d ktail %d head %d cur %d tail %d", 1656 kring->name, 1657 kring->nr_hwcur, kring->nr_hwtail, 1658 ring->head, ring->cur, ring->tail); 1659 #if 1 /* kernel sanity checks; but we can trust the kring. */ 1660 NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n || 1661 kring->rtail >= n || kring->nr_hwtail >= n); 1662 #endif /* kernel sanity checks */ 1663 /* 1664 * user sanity checks. We only use head, 1665 * A, B, ... are possible positions for head: 1666 * 1667 * 0 A rhead B rtail C n-1 1668 * 0 D rtail E rhead F n-1 1669 * 1670 * B, F, D are valid. A, C, E are wrong 1671 */ 1672 if (kring->rtail >= kring->rhead) { 1673 /* want rhead <= head <= rtail */ 1674 NM_FAIL_ON(head < kring->rhead || head > kring->rtail); 1675 /* and also head <= cur <= rtail */ 1676 NM_FAIL_ON(cur < head || cur > kring->rtail); 1677 } else { /* here rtail < rhead */ 1678 /* we need head outside rtail .. 
rhead */ 1679 NM_FAIL_ON(head > kring->rtail && head < kring->rhead); 1680 1681 /* two cases now: head <= rtail or head >= rhead */ 1682 if (head <= kring->rtail) { 1683 /* want head <= cur <= rtail */ 1684 NM_FAIL_ON(cur < head || cur > kring->rtail); 1685 } else { /* head >= rhead */ 1686 /* cur must be outside rtail..head */ 1687 NM_FAIL_ON(cur > kring->rtail && cur < head); 1688 } 1689 } 1690 if (ring->tail != kring->rtail) { 1691 nm_prlim(5, "%s tail overwritten was %d need %d", kring->name, 1692 ring->tail, kring->rtail); 1693 ring->tail = kring->rtail; 1694 } 1695 kring->rhead = head; 1696 kring->rcur = cur; 1697 return head; 1698 } 1699 1700 1701 /* 1702 * validate parameters on entry for *_rxsync() 1703 * Returns ring->head if ok, kring->nkr_num_slots on error. 1704 * 1705 * For a valid configuration, 1706 * hwcur <= head <= cur <= tail <= hwtail 1707 * 1708 * We only consider head and cur. 1709 * hwcur and hwtail are reliable. 1710 * 1711 */ 1712 u_int 1713 nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) 1714 { 1715 uint32_t const n = kring->nkr_num_slots; 1716 uint32_t head, cur; 1717 1718 nm_prdis(5,"%s kc %d kt %d h %d c %d t %d", 1719 kring->name, 1720 kring->nr_hwcur, kring->nr_hwtail, 1721 ring->head, ring->cur, ring->tail); 1722 /* 1723 * Before storing the new values, we should check they do not 1724 * move backwards. However: 1725 * - head is not an issue because the previous value is hwcur; 1726 * - cur could in principle go back, however it does not matter 1727 * because we are processing a brand new rxsync() 1728 */ 1729 cur = kring->rcur = ring->cur; /* read only once */ 1730 head = kring->rhead = ring->head; /* read only once */ 1731 #if 1 /* kernel sanity checks */ 1732 NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n); 1733 #endif /* kernel sanity checks */ 1734 /* user sanity checks */ 1735 if (kring->nr_hwtail >= kring->nr_hwcur) { 1736 /* want hwcur <= rhead <= hwtail */ 1737 NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail); 1738 /* and also rhead <= rcur <= hwtail */ 1739 NM_FAIL_ON(cur < head || cur > kring->nr_hwtail); 1740 } else { 1741 /* we need rhead outside hwtail..hwcur */ 1742 NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail); 1743 /* two cases now: head <= hwtail or head >= hwcur */ 1744 if (head <= kring->nr_hwtail) { 1745 /* want head <= cur <= hwtail */ 1746 NM_FAIL_ON(cur < head || cur > kring->nr_hwtail); 1747 } else { 1748 /* cur must be outside hwtail..head */ 1749 NM_FAIL_ON(cur < head && cur > kring->nr_hwtail); 1750 } 1751 } 1752 if (ring->tail != kring->rtail) { 1753 nm_prlim(5, "%s tail overwritten was %d need %d", 1754 kring->name, 1755 ring->tail, kring->rtail); 1756 ring->tail = kring->rtail; 1757 } 1758 return head; 1759 } 1760 1761 1762 /* 1763 * Error routine called when txsync/rxsync detects an error. 1764 * Can't do much more than resetting head = cur = hwcur, tail = hwtail 1765 * Return 1 on reinit. 1766 * 1767 * This routine is only called by the upper half of the kernel. 1768 * It only reads hwcur (which is changed only by the upper half, too) 1769 * and hwtail (which may be changed by the lower half, but only on 1770 * a tx ring and only to increase it, so any error will be recovered 1771 * on the next call). For the above, we don't strictly need to call 1772 * it under lock. 
1773 */ 1774 int 1775 netmap_ring_reinit(struct netmap_kring *kring) 1776 { 1777 struct netmap_ring *ring = kring->ring; 1778 u_int i, lim = kring->nkr_num_slots - 1; 1779 int errors = 0; 1780 1781 // XXX KASSERT nm_kr_tryget 1782 nm_prlim(10, "called for %s", kring->name); 1783 // XXX probably wrong to trust userspace 1784 kring->rhead = ring->head; 1785 kring->rcur = ring->cur; 1786 kring->rtail = ring->tail; 1787 1788 if (ring->cur > lim) 1789 errors++; 1790 if (ring->head > lim) 1791 errors++; 1792 if (ring->tail > lim) 1793 errors++; 1794 for (i = 0; i <= lim; i++) { 1795 u_int idx = ring->slot[i].buf_idx; 1796 u_int len = ring->slot[i].len; 1797 if (idx < 2 || idx >= kring->na->na_lut.objtotal) { 1798 nm_prlim(5, "bad index at slot %d idx %d len %d ", i, idx, len); 1799 ring->slot[i].buf_idx = 0; 1800 ring->slot[i].len = 0; 1801 } else if (len > NETMAP_BUF_SIZE(kring->na)) { 1802 ring->slot[i].len = 0; 1803 nm_prlim(5, "bad len at slot %d idx %d len %d", i, idx, len); 1804 } 1805 } 1806 if (errors) { 1807 nm_prlim(10, "total %d errors", errors); 1808 nm_prlim(10, "%s reinit, cur %d -> %d tail %d -> %d", 1809 kring->name, 1810 ring->cur, kring->nr_hwcur, 1811 ring->tail, kring->nr_hwtail); 1812 ring->head = kring->rhead = kring->nr_hwcur; 1813 ring->cur = kring->rcur = kring->nr_hwcur; 1814 ring->tail = kring->rtail = kring->nr_hwtail; 1815 } 1816 return (errors ? 1 : 0); 1817 } 1818 1819 /* interpret the ringid and flags fields of an nmreq, by translating them 1820 * into a pair of intervals of ring indices: 1821 * 1822 * [priv->np_txqfirst, priv->np_txqlast) and 1823 * [priv->np_rxqfirst, priv->np_rxqlast) 1824 * 1825 */ 1826 int 1827 netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode, 1828 uint16_t nr_ringid, uint64_t nr_flags) 1829 { 1830 struct netmap_adapter *na = priv->np_na; 1831 int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY }; 1832 enum txrx t; 1833 u_int j; 1834 1835 for_rx_tx(t) { 1836 if (nr_flags & excluded_direction[t]) { 1837 priv->np_qfirst[t] = priv->np_qlast[t] = 0; 1838 continue; 1839 } 1840 switch (nr_mode) { 1841 case NR_REG_ALL_NIC: 1842 case NR_REG_NULL: 1843 priv->np_qfirst[t] = 0; 1844 priv->np_qlast[t] = nma_get_nrings(na, t); 1845 nm_prdis("ALL/PIPE: %s %d %d", nm_txrx2str(t), 1846 priv->np_qfirst[t], priv->np_qlast[t]); 1847 break; 1848 case NR_REG_SW: 1849 case NR_REG_NIC_SW: 1850 if (!(na->na_flags & NAF_HOST_RINGS)) { 1851 nm_prerr("host rings not supported"); 1852 return EINVAL; 1853 } 1854 priv->np_qfirst[t] = (nr_mode == NR_REG_SW ? 1855 nma_get_nrings(na, t) : 0); 1856 priv->np_qlast[t] = netmap_all_rings(na, t); 1857 nm_prdis("%s: %s %d %d", nr_mode == NR_REG_SW ? 
"SW" : "NIC+SW", 1858 nm_txrx2str(t), 1859 priv->np_qfirst[t], priv->np_qlast[t]); 1860 break; 1861 case NR_REG_ONE_NIC: 1862 if (nr_ringid >= na->num_tx_rings && 1863 nr_ringid >= na->num_rx_rings) { 1864 nm_prerr("invalid ring id %d", nr_ringid); 1865 return EINVAL; 1866 } 1867 /* if not enough rings, use the first one */ 1868 j = nr_ringid; 1869 if (j >= nma_get_nrings(na, t)) 1870 j = 0; 1871 priv->np_qfirst[t] = j; 1872 priv->np_qlast[t] = j + 1; 1873 nm_prdis("ONE_NIC: %s %d %d", nm_txrx2str(t), 1874 priv->np_qfirst[t], priv->np_qlast[t]); 1875 break; 1876 case NR_REG_ONE_SW: 1877 if (!(na->na_flags & NAF_HOST_RINGS)) { 1878 nm_prerr("host rings not supported"); 1879 return EINVAL; 1880 } 1881 if (nr_ringid >= na->num_host_tx_rings && 1882 nr_ringid >= na->num_host_rx_rings) { 1883 nm_prerr("invalid ring id %d", nr_ringid); 1884 return EINVAL; 1885 } 1886 /* if not enough rings, use the first one */ 1887 j = nr_ringid; 1888 if (j >= nma_get_host_nrings(na, t)) 1889 j = 0; 1890 priv->np_qfirst[t] = nma_get_nrings(na, t) + j; 1891 priv->np_qlast[t] = nma_get_nrings(na, t) + j + 1; 1892 nm_prdis("ONE_SW: %s %d %d", nm_txrx2str(t), 1893 priv->np_qfirst[t], priv->np_qlast[t]); 1894 break; 1895 default: 1896 nm_prerr("invalid regif type %d", nr_mode); 1897 return EINVAL; 1898 } 1899 } 1900 priv->np_flags = nr_flags; 1901 1902 /* Allow transparent forwarding mode in the host --> nic 1903 * direction only if all the TX hw rings have been opened. */ 1904 if (priv->np_qfirst[NR_TX] == 0 && 1905 priv->np_qlast[NR_TX] >= na->num_tx_rings) { 1906 priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN; 1907 } 1908 1909 if (netmap_verbose) { 1910 nm_prinf("%s: tx [%d,%d) rx [%d,%d) id %d", 1911 na->name, 1912 priv->np_qfirst[NR_TX], 1913 priv->np_qlast[NR_TX], 1914 priv->np_qfirst[NR_RX], 1915 priv->np_qlast[NR_RX], 1916 nr_ringid); 1917 } 1918 return 0; 1919 } 1920 1921 1922 /* 1923 * Set the ring ID. For devices with a single queue, a request 1924 * for all rings is the same as a single ring. 1925 */ 1926 static int 1927 netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode, 1928 uint16_t nr_ringid, uint64_t nr_flags) 1929 { 1930 struct netmap_adapter *na = priv->np_na; 1931 int error; 1932 enum txrx t; 1933 1934 error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags); 1935 if (error) { 1936 return error; 1937 } 1938 1939 priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1; 1940 1941 /* optimization: count the users registered for more than 1942 * one ring, which are the ones sleeping on the global queue. 1943 * The default netmap_notify() callback will then 1944 * avoid signaling the global queue if nobody is using it 1945 */ 1946 for_rx_tx(t) { 1947 if (nm_si_user(priv, t)) 1948 na->si_users[t]++; 1949 } 1950 return 0; 1951 } 1952 1953 static void 1954 netmap_unset_ringid(struct netmap_priv_d *priv) 1955 { 1956 struct netmap_adapter *na = priv->np_na; 1957 enum txrx t; 1958 1959 for_rx_tx(t) { 1960 if (nm_si_user(priv, t)) 1961 na->si_users[t]--; 1962 priv->np_qfirst[t] = priv->np_qlast[t] = 0; 1963 } 1964 priv->np_flags = 0; 1965 priv->np_txpoll = 0; 1966 priv->np_kloop_state = 0; 1967 } 1968 1969 1970 /* Set the nr_pending_mode for the requested rings. 1971 * If requested, also try to get exclusive access to the rings, provided 1972 * the rings we want to bind are not exclusively owned by a previous bind. 
1973 */
1974 static int
1975 netmap_krings_get(struct netmap_priv_d *priv)
1976 {
1977 struct netmap_adapter *na = priv->np_na;
1978 u_int i;
1979 struct netmap_kring *kring;
1980 int excl = (priv->np_flags & NR_EXCLUSIVE);
1981 enum txrx t;
1982
1983 if (netmap_debug & NM_DEBUG_ON)
1984 nm_prinf("%s: grabbing tx [%d, %d) rx [%d, %d)",
1985 na->name,
1986 priv->np_qfirst[NR_TX],
1987 priv->np_qlast[NR_TX],
1988 priv->np_qfirst[NR_RX],
1989 priv->np_qlast[NR_RX]);
1990
1991 /* first round: check that all the requested rings
1992 * are neither already exclusively owned, nor we
1993 * want exclusive ownership when they are already in use
1994 */
1995 for_rx_tx(t) {
1996 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
1997 kring = NMR(na, t)[i];
1998 if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
1999 (kring->users && excl))
2000 {
2001 nm_prdis("ring %s busy", kring->name);
2002 return EBUSY;
2003 }
2004 }
2005 }
2006
2007 /* second round: increment usage count (possibly marking them
2008 * as exclusive) and set the nr_pending_mode
2009 */
2010 for_rx_tx(t) {
2011 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
2012 kring = NMR(na, t)[i];
2013 kring->users++;
2014 if (excl)
2015 kring->nr_kflags |= NKR_EXCLUSIVE;
2016 kring->nr_pending_mode = NKR_NETMAP_ON;
2017 }
2018 }
2019
2020 return 0;
2021
2022 }
2023
2024 /* Undo netmap_krings_get(). This is done by clearing the exclusive mode
2025 * if it was asked on regif, and unsetting the nr_pending_mode if we are the
2026 * last users of the involved rings. */
2027 static void
2028 netmap_krings_put(struct netmap_priv_d *priv)
2029 {
2030 struct netmap_adapter *na = priv->np_na;
2031 u_int i;
2032 struct netmap_kring *kring;
2033 int excl = (priv->np_flags & NR_EXCLUSIVE);
2034 enum txrx t;
2035
2036 nm_prdis("%s: releasing tx [%d, %d) rx [%d, %d)",
2037 na->name,
2038 priv->np_qfirst[NR_TX],
2039 priv->np_qlast[NR_TX],
2040 priv->np_qfirst[NR_RX],
2041 priv->np_qlast[NR_RX]);
2042
2043 for_rx_tx(t) {
2044 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
2045 kring = NMR(na, t)[i];
2046 if (excl)
2047 kring->nr_kflags &= ~NKR_EXCLUSIVE;
2048 kring->users--;
2049 if (kring->users == 0)
2050 kring->nr_pending_mode = NKR_NETMAP_OFF;
2051 }
2052 }
2053 }
2054
2055 static int
2056 nm_priv_rx_enabled(struct netmap_priv_d *priv)
2057 {
2058 return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
2059 }
2060
2061 /* Validate the CSB entries for both directions (atok and ktoa).
2062 * To be called under NMG_LOCK().
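 *
 * The option carries two userspace arrays with one entry per bound ring,
 * all TX rings first and the RX rings after them. A minimal sketch of how
 * an application could set the option up (illustrative only; ntx and nrx
 * stand for the number of bound TX and RX rings, the arrays must be
 * aligned to their entry size, and the registration must have requested
 * NR_EXCLUSIVE, as checked below):
 *
 *      struct nm_csb_atok atok[ntx + nrx];
 *      struct nm_csb_ktoa ktoa[ntx + nrx];
 *      struct nmreq_opt_csb opt = { 0 };
 *
 *      opt.nro_opt.nro_reqtype = NETMAP_REQ_OPT_CSB;
 *      opt.csb_atok = (uintptr_t)atok;
 *      opt.csb_ktoa = (uintptr_t)ktoa;
 *      (then link opt.nro_opt into hdr->nr_options before the ioctl)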
*/ 2063 static int 2064 netmap_csb_validate(struct netmap_priv_d *priv, struct nmreq_opt_csb *csbo) 2065 { 2066 struct nm_csb_atok *csb_atok_base = 2067 (struct nm_csb_atok *)(uintptr_t)csbo->csb_atok; 2068 struct nm_csb_ktoa *csb_ktoa_base = 2069 (struct nm_csb_ktoa *)(uintptr_t)csbo->csb_ktoa; 2070 enum txrx t; 2071 int num_rings[NR_TXRX], tot_rings; 2072 size_t entry_size[2]; 2073 void *csb_start[2]; 2074 int i; 2075 2076 if (priv->np_kloop_state & NM_SYNC_KLOOP_RUNNING) { 2077 nm_prerr("Cannot update CSB while kloop is running"); 2078 return EBUSY; 2079 } 2080 2081 tot_rings = 0; 2082 for_rx_tx(t) { 2083 num_rings[t] = priv->np_qlast[t] - priv->np_qfirst[t]; 2084 tot_rings += num_rings[t]; 2085 } 2086 if (tot_rings <= 0) 2087 return 0; 2088 2089 if (!(priv->np_flags & NR_EXCLUSIVE)) { 2090 nm_prerr("CSB mode requires NR_EXCLUSIVE"); 2091 return EINVAL; 2092 } 2093 2094 entry_size[0] = sizeof(*csb_atok_base); 2095 entry_size[1] = sizeof(*csb_ktoa_base); 2096 csb_start[0] = (void *)csb_atok_base; 2097 csb_start[1] = (void *)csb_ktoa_base; 2098 2099 for (i = 0; i < 2; i++) { 2100 /* On Linux we could use access_ok() to simplify 2101 * the validation. However, the advantage of 2102 * this approach is that it works also on 2103 * FreeBSD. */ 2104 size_t csb_size = tot_rings * entry_size[i]; 2105 void *tmp; 2106 int err; 2107 2108 if ((uintptr_t)csb_start[i] & (entry_size[i]-1)) { 2109 nm_prerr("Unaligned CSB address"); 2110 return EINVAL; 2111 } 2112 2113 tmp = nm_os_malloc(csb_size); 2114 if (!tmp) 2115 return ENOMEM; 2116 if (i == 0) { 2117 /* Application --> kernel direction. */ 2118 err = copyin(csb_start[i], tmp, csb_size); 2119 } else { 2120 /* Kernel --> application direction. */ 2121 memset(tmp, 0, csb_size); 2122 err = copyout(tmp, csb_start[i], csb_size); 2123 } 2124 nm_os_free(tmp); 2125 if (err) { 2126 nm_prerr("Invalid CSB address"); 2127 return err; 2128 } 2129 } 2130 2131 priv->np_csb_atok_base = csb_atok_base; 2132 priv->np_csb_ktoa_base = csb_ktoa_base; 2133 2134 /* Initialize the CSB. */ 2135 for_rx_tx(t) { 2136 for (i = 0; i < num_rings[t]; i++) { 2137 struct netmap_kring *kring = 2138 NMR(priv->np_na, t)[i + priv->np_qfirst[t]]; 2139 struct nm_csb_atok *csb_atok = csb_atok_base + i; 2140 struct nm_csb_ktoa *csb_ktoa = csb_ktoa_base + i; 2141 2142 if (t == NR_RX) { 2143 csb_atok += num_rings[NR_TX]; 2144 csb_ktoa += num_rings[NR_TX]; 2145 } 2146 2147 CSB_WRITE(csb_atok, head, kring->rhead); 2148 CSB_WRITE(csb_atok, cur, kring->rcur); 2149 CSB_WRITE(csb_atok, appl_need_kick, 1); 2150 CSB_WRITE(csb_atok, sync_flags, 1); 2151 CSB_WRITE(csb_ktoa, hwcur, kring->nr_hwcur); 2152 CSB_WRITE(csb_ktoa, hwtail, kring->nr_hwtail); 2153 CSB_WRITE(csb_ktoa, kern_need_kick, 1); 2154 2155 nm_prinf("csb_init for kring %s: head %u, cur %u, " 2156 "hwcur %u, hwtail %u", kring->name, 2157 kring->rhead, kring->rcur, kring->nr_hwcur, 2158 kring->nr_hwtail); 2159 } 2160 } 2161 2162 return 0; 2163 } 2164 2165 /* Ensure that the netmap adapter can support the given MTU. 2166 * @return EINVAL if the na cannot be set to mtu, 0 otherwise. 2167 */ 2168 int 2169 netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu) { 2170 unsigned nbs = NETMAP_BUF_SIZE(na); 2171 2172 if (mtu <= na->rx_buf_maxsize) { 2173 /* The MTU fits a single NIC slot. We only 2174 * Need to check that netmap buffers are 2175 * large enough to hold an MTU. NS_MOREFRAG 2176 * cannot be used in this case. 
*/ 2177 if (nbs < mtu) { 2178 nm_prerr("error: netmap buf size (%u) " 2179 "< device MTU (%u)", nbs, mtu); 2180 return EINVAL; 2181 } 2182 } else { 2183 /* More NIC slots may be needed to receive 2184 * or transmit a single packet. Check that 2185 * the adapter supports NS_MOREFRAG and that 2186 * netmap buffers are large enough to hold 2187 * the maximum per-slot size. */ 2188 if (!(na->na_flags & NAF_MOREFRAG)) { 2189 nm_prerr("error: large MTU (%d) needed " 2190 "but %s does not support " 2191 "NS_MOREFRAG", mtu, 2192 na->ifp->if_xname); 2193 return EINVAL; 2194 } else if (nbs < na->rx_buf_maxsize) { 2195 nm_prerr("error: using NS_MOREFRAG on " 2196 "%s requires netmap buf size " 2197 ">= %u", na->ifp->if_xname, 2198 na->rx_buf_maxsize); 2199 return EINVAL; 2200 } else { 2201 nm_prinf("info: netmap application on " 2202 "%s needs to support " 2203 "NS_MOREFRAG " 2204 "(MTU=%u,netmap_buf_size=%u)", 2205 na->ifp->if_xname, mtu, nbs); 2206 } 2207 } 2208 return 0; 2209 } 2210 2211 2212 /* 2213 * possibly move the interface to netmap-mode. 2214 * If success it returns a pointer to netmap_if, otherwise NULL. 2215 * This must be called with NMG_LOCK held. 2216 * 2217 * The following na callbacks are called in the process: 2218 * 2219 * na->nm_config() [by netmap_update_config] 2220 * (get current number and size of rings) 2221 * 2222 * We have a generic one for linux (netmap_linux_config). 2223 * The bwrap has to override this, since it has to forward 2224 * the request to the wrapped adapter (netmap_bwrap_config). 2225 * 2226 * 2227 * na->nm_krings_create() 2228 * (create and init the krings array) 2229 * 2230 * One of the following: 2231 * 2232 * * netmap_hw_krings_create, (hw ports) 2233 * creates the standard layout for the krings 2234 * and adds the mbq (used for the host rings). 2235 * 2236 * * netmap_vp_krings_create (VALE ports) 2237 * add leases and scratchpads 2238 * 2239 * * netmap_pipe_krings_create (pipes) 2240 * create the krings and rings of both ends and 2241 * cross-link them 2242 * 2243 * * netmap_monitor_krings_create (monitors) 2244 * avoid allocating the mbq 2245 * 2246 * * netmap_bwrap_krings_create (bwraps) 2247 * create both the brap krings array, 2248 * the krings array of the wrapped adapter, and 2249 * (if needed) the fake array for the host adapter 2250 * 2251 * na->nm_register(, 1) 2252 * (put the adapter in netmap mode) 2253 * 2254 * This may be one of the following: 2255 * 2256 * * netmap_hw_reg (hw ports) 2257 * checks that the ifp is still there, then calls 2258 * the hardware specific callback; 2259 * 2260 * * netmap_vp_reg (VALE ports) 2261 * If the port is connected to a bridge, 2262 * set the NAF_NETMAP_ON flag under the 2263 * bridge write lock. 2264 * 2265 * * netmap_pipe_reg (pipes) 2266 * inform the other pipe end that it is no 2267 * longer responsible for the lifetime of this 2268 * pipe end 2269 * 2270 * * netmap_monitor_reg (monitors) 2271 * intercept the sync callbacks of the monitored 2272 * rings 2273 * 2274 * * netmap_bwrap_reg (bwraps) 2275 * cross-link the bwrap and hwna rings, 2276 * forward the request to the hwna, override 2277 * the hwna notify callback (to get the frames 2278 * coming from outside go through the bridge). 
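 *
 * Putting it together, a successful netmap_do_regif() performs, in order
 * (some steps only on the first registration of the adapter; see the
 * function body below):
 *
 *	netmap_mem_finalize()       finalize the memory allocator
 *	netmap_mem_get_lut()        cache the buffer lookup table
 *	netmap_update_config()      refresh ring counts and sizes
 *	netmap_set_ringid()         compute the tx/rx ring intervals
 *	na->nm_krings_create()      build the krings (see above)
 *	netmap_krings_get()         usage counts / exclusive access
 *	netmap_mem_rings_create()   create the user-visible rings
 *	netmap_mem_if_new()         create the netmap_if
 *	na->nm_register(na, 1)      switch the adapter to netmap mode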
2279 * 2280 * 2281 */ 2282 int 2283 netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na, 2284 uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags) 2285 { 2286 struct netmap_if *nifp = NULL; 2287 int error; 2288 2289 NMG_LOCK_ASSERT(); 2290 priv->np_na = na; /* store the reference */ 2291 error = netmap_mem_finalize(na->nm_mem, na); 2292 if (error) 2293 goto err; 2294 2295 if (na->active_fds == 0) { 2296 2297 /* cache the allocator info in the na */ 2298 error = netmap_mem_get_lut(na->nm_mem, &na->na_lut); 2299 if (error) 2300 goto err_drop_mem; 2301 nm_prdis("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal, 2302 na->na_lut.objsize); 2303 2304 /* ring configuration may have changed, fetch from the card */ 2305 netmap_update_config(na); 2306 } 2307 2308 /* compute the range of tx and rx rings to monitor */ 2309 error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags); 2310 if (error) 2311 goto err_put_lut; 2312 2313 if (na->active_fds == 0) { 2314 /* 2315 * If this is the first registration of the adapter, 2316 * perform sanity checks and create the in-kernel view 2317 * of the netmap rings (the netmap krings). 2318 */ 2319 if (na->ifp && nm_priv_rx_enabled(priv)) { 2320 /* This netmap adapter is attached to an ifnet. */ 2321 unsigned mtu = nm_os_ifnet_mtu(na->ifp); 2322 2323 nm_prdis("%s: mtu %d rx_buf_maxsize %d netmap_buf_size %d", 2324 na->name, mtu, na->rx_buf_maxsize, NETMAP_BUF_SIZE(na)); 2325 2326 if (na->rx_buf_maxsize == 0) { 2327 nm_prerr("%s: error: rx_buf_maxsize == 0", na->name); 2328 error = EIO; 2329 goto err_drop_mem; 2330 } 2331 2332 error = netmap_buf_size_validate(na, mtu); 2333 if (error) 2334 goto err_drop_mem; 2335 } 2336 2337 /* 2338 * Depending on the adapter, this may also create 2339 * the netmap rings themselves 2340 */ 2341 error = na->nm_krings_create(na); 2342 if (error) 2343 goto err_put_lut; 2344 2345 } 2346 2347 /* now the krings must exist and we can check whether some 2348 * previous bind has exclusive ownership on them, and set 2349 * nr_pending_mode 2350 */ 2351 error = netmap_krings_get(priv); 2352 if (error) 2353 goto err_del_krings; 2354 2355 /* create all needed missing netmap rings */ 2356 error = netmap_mem_rings_create(na); 2357 if (error) 2358 goto err_rel_excl; 2359 2360 /* in all cases, create a new netmap if */ 2361 nifp = netmap_mem_if_new(na, priv); 2362 if (nifp == NULL) { 2363 error = ENOMEM; 2364 goto err_rel_excl; 2365 } 2366 2367 if (nm_kring_pending(priv)) { 2368 /* Some kring is switching mode, tell the adapter to 2369 * react on this. */ 2370 error = na->nm_register(na, 1); 2371 if (error) 2372 goto err_del_if; 2373 } 2374 2375 /* Commit the reference. */ 2376 na->active_fds++; 2377 2378 /* 2379 * advertise that the interface is ready by setting np_nifp. 2380 * The barrier is needed because readers (poll, *SYNC and mmap) 2381 * check for priv->np_nifp != NULL without locking 2382 */ 2383 mb(); /* make sure previous writes are visible to all CPUs */ 2384 priv->np_nifp = nifp; 2385 2386 return 0; 2387 2388 err_del_if: 2389 netmap_mem_if_delete(na, nifp); 2390 err_rel_excl: 2391 netmap_krings_put(priv); 2392 netmap_mem_rings_delete(na); 2393 err_del_krings: 2394 if (na->active_fds == 0) 2395 na->nm_krings_delete(na); 2396 err_put_lut: 2397 if (na->active_fds == 0) 2398 memset(&na->na_lut, 0, sizeof(na->na_lut)); 2399 err_drop_mem: 2400 netmap_mem_drop(na); 2401 err: 2402 priv->np_na = NULL; 2403 return error; 2404 } 2405 2406 2407 /* 2408 * update kring and ring at the end of rxsync/txsync. 
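 * After a successful txsync, for example, nr_hwtail has advanced over the
 * slots reclaimed from the NIC; copying it into ring->tail here is what
 * makes the freed space visible to the application.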
2409 */ 2410 static inline void 2411 nm_sync_finalize(struct netmap_kring *kring) 2412 { 2413 /* 2414 * Update ring tail to what the kernel knows 2415 * After txsync: head/rhead/hwcur might be behind cur/rcur 2416 * if no carrier. 2417 */ 2418 kring->ring->tail = kring->rtail = kring->nr_hwtail; 2419 2420 nm_prdis(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d", 2421 kring->name, kring->nr_hwcur, kring->nr_hwtail, 2422 kring->rhead, kring->rcur, kring->rtail); 2423 } 2424 2425 /* set ring timestamp */ 2426 static inline void 2427 ring_timestamp_set(struct netmap_ring *ring) 2428 { 2429 if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) { 2430 microtime(&ring->ts); 2431 } 2432 } 2433 2434 static int nmreq_copyin(struct nmreq_header *, int); 2435 static int nmreq_copyout(struct nmreq_header *, int); 2436 static int nmreq_checkoptions(struct nmreq_header *); 2437 2438 /* 2439 * ioctl(2) support for the "netmap" device. 2440 * 2441 * Following a list of accepted commands: 2442 * - NIOCCTRL device control API 2443 * - NIOCTXSYNC sync TX rings 2444 * - NIOCRXSYNC sync RX rings 2445 * - SIOCGIFADDR just for convenience 2446 * - NIOCGINFO deprecated (legacy API) 2447 * - NIOCREGIF deprecated (legacy API) 2448 * 2449 * Return 0 on success, errno otherwise. 2450 */ 2451 int 2452 netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data, 2453 struct thread *td, int nr_body_is_user) 2454 { 2455 struct mbq q; /* packets from RX hw queues to host stack */ 2456 struct netmap_adapter *na = NULL; 2457 struct netmap_mem_d *nmd = NULL; 2458 struct ifnet *ifp = NULL; 2459 int error = 0; 2460 u_int i, qfirst, qlast; 2461 struct netmap_kring **krings; 2462 int sync_flags; 2463 enum txrx t; 2464 2465 switch (cmd) { 2466 case NIOCCTRL: { 2467 struct nmreq_header *hdr = (struct nmreq_header *)data; 2468 2469 if (hdr->nr_version < NETMAP_MIN_API || 2470 hdr->nr_version > NETMAP_MAX_API) { 2471 nm_prerr("API mismatch: got %d need %d", 2472 hdr->nr_version, NETMAP_API); 2473 return EINVAL; 2474 } 2475 2476 /* Make a kernel-space copy of the user-space nr_body. 2477 * For convenince, the nr_body pointer and the pointers 2478 * in the options list will be replaced with their 2479 * kernel-space counterparts. The original pointers are 2480 * saved internally and later restored by nmreq_copyout 2481 */ 2482 error = nmreq_copyin(hdr, nr_body_is_user); 2483 if (error) { 2484 return error; 2485 } 2486 2487 /* Sanitize hdr->nr_name. */ 2488 hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0'; 2489 2490 switch (hdr->nr_reqtype) { 2491 case NETMAP_REQ_REGISTER: { 2492 struct nmreq_register *req = 2493 (struct nmreq_register *)(uintptr_t)hdr->nr_body; 2494 struct netmap_if *nifp; 2495 2496 /* Protect access to priv from concurrent requests. 
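		 * The request that lands here is typically built by userspace
		 * along these lines (an illustrative sketch only, with error
		 * handling omitted; "em0" is just an example port name and fd
		 * is a descriptor obtained from open("/dev/netmap", O_RDWR)):
		 *
		 *	struct nmreq_register req = { .nr_mode = NR_REG_ALL_NIC };
		 *	struct nmreq_header hdr = { .nr_version = NETMAP_API };
		 *
		 *	hdr.nr_reqtype = NETMAP_REQ_REGISTER;
		 *	strlcpy(hdr.nr_name, "em0", sizeof(hdr.nr_name));
		 *	hdr.nr_body = (uintptr_t)&req;
		 *	ioctl(fd, NIOCCTRL, &hdr);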
*/ 2497 NMG_LOCK(); 2498 do { 2499 struct nmreq_option *opt; 2500 u_int memflags; 2501 2502 if (priv->np_nifp != NULL) { /* thread already registered */ 2503 error = EBUSY; 2504 break; 2505 } 2506 2507 #ifdef WITH_EXTMEM 2508 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, 2509 NETMAP_REQ_OPT_EXTMEM); 2510 if (opt != NULL) { 2511 struct nmreq_opt_extmem *e = 2512 (struct nmreq_opt_extmem *)opt; 2513 2514 error = nmreq_checkduplicate(opt); 2515 if (error) { 2516 opt->nro_status = error; 2517 break; 2518 } 2519 nmd = netmap_mem_ext_create(e->nro_usrptr, 2520 &e->nro_info, &error); 2521 opt->nro_status = error; 2522 if (nmd == NULL) 2523 break; 2524 } 2525 #endif /* WITH_EXTMEM */ 2526 2527 if (nmd == NULL && req->nr_mem_id) { 2528 /* find the allocator and get a reference */ 2529 nmd = netmap_mem_find(req->nr_mem_id); 2530 if (nmd == NULL) { 2531 if (netmap_verbose) { 2532 nm_prerr("%s: failed to find mem_id %u", 2533 hdr->nr_name, req->nr_mem_id); 2534 } 2535 error = EINVAL; 2536 break; 2537 } 2538 } 2539 /* find the interface and a reference */ 2540 error = netmap_get_na(hdr, &na, &ifp, nmd, 2541 1 /* create */); /* keep reference */ 2542 if (error) 2543 break; 2544 if (NETMAP_OWNED_BY_KERN(na)) { 2545 error = EBUSY; 2546 break; 2547 } 2548 2549 if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) { 2550 nm_prerr("virt_hdr_len=%d, but application does " 2551 "not accept it", na->virt_hdr_len); 2552 error = EIO; 2553 break; 2554 } 2555 2556 error = netmap_do_regif(priv, na, req->nr_mode, 2557 req->nr_ringid, req->nr_flags); 2558 if (error) { /* reg. failed, release priv and ref */ 2559 break; 2560 } 2561 2562 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, 2563 NETMAP_REQ_OPT_CSB); 2564 if (opt != NULL) { 2565 struct nmreq_opt_csb *csbo = 2566 (struct nmreq_opt_csb *)opt; 2567 error = nmreq_checkduplicate(opt); 2568 if (!error) { 2569 error = netmap_csb_validate(priv, csbo); 2570 } 2571 opt->nro_status = error; 2572 if (error) { 2573 netmap_do_unregif(priv); 2574 break; 2575 } 2576 } 2577 2578 nifp = priv->np_nifp; 2579 2580 /* return the offset of the netmap_if object */ 2581 req->nr_rx_rings = na->num_rx_rings; 2582 req->nr_tx_rings = na->num_tx_rings; 2583 req->nr_rx_slots = na->num_rx_desc; 2584 req->nr_tx_slots = na->num_tx_desc; 2585 req->nr_host_tx_rings = na->num_host_tx_rings; 2586 req->nr_host_rx_rings = na->num_host_rx_rings; 2587 error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags, 2588 &req->nr_mem_id); 2589 if (error) { 2590 netmap_do_unregif(priv); 2591 break; 2592 } 2593 if (memflags & NETMAP_MEM_PRIVATE) { 2594 *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM; 2595 } 2596 for_rx_tx(t) { 2597 priv->np_si[t] = nm_si_user(priv, t) ? 
2598 &na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si; 2599 } 2600 2601 if (req->nr_extra_bufs) { 2602 if (netmap_verbose) 2603 nm_prinf("requested %d extra buffers", 2604 req->nr_extra_bufs); 2605 req->nr_extra_bufs = netmap_extra_alloc(na, 2606 &nifp->ni_bufs_head, req->nr_extra_bufs); 2607 if (netmap_verbose) 2608 nm_prinf("got %d extra buffers", req->nr_extra_bufs); 2609 } 2610 req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp); 2611 2612 error = nmreq_checkoptions(hdr); 2613 if (error) { 2614 netmap_do_unregif(priv); 2615 break; 2616 } 2617 2618 /* store ifp reference so that priv destructor may release it */ 2619 priv->np_ifp = ifp; 2620 } while (0); 2621 if (error) { 2622 netmap_unget_na(na, ifp); 2623 } 2624 /* release the reference from netmap_mem_find() or 2625 * netmap_mem_ext_create() 2626 */ 2627 if (nmd) 2628 netmap_mem_put(nmd); 2629 NMG_UNLOCK(); 2630 break; 2631 } 2632 2633 case NETMAP_REQ_PORT_INFO_GET: { 2634 struct nmreq_port_info_get *req = 2635 (struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body; 2636 2637 NMG_LOCK(); 2638 do { 2639 u_int memflags; 2640 2641 if (hdr->nr_name[0] != '\0') { 2642 /* Build a nmreq_register out of the nmreq_port_info_get, 2643 * so that we can call netmap_get_na(). */ 2644 struct nmreq_register regreq; 2645 bzero(®req, sizeof(regreq)); 2646 regreq.nr_mode = NR_REG_ALL_NIC; 2647 regreq.nr_tx_slots = req->nr_tx_slots; 2648 regreq.nr_rx_slots = req->nr_rx_slots; 2649 regreq.nr_tx_rings = req->nr_tx_rings; 2650 regreq.nr_rx_rings = req->nr_rx_rings; 2651 regreq.nr_host_tx_rings = req->nr_host_tx_rings; 2652 regreq.nr_host_rx_rings = req->nr_host_rx_rings; 2653 regreq.nr_mem_id = req->nr_mem_id; 2654 2655 /* get a refcount */ 2656 hdr->nr_reqtype = NETMAP_REQ_REGISTER; 2657 hdr->nr_body = (uintptr_t)®req; 2658 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */); 2659 hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */ 2660 hdr->nr_body = (uintptr_t)req; /* reset nr_body */ 2661 if (error) { 2662 na = NULL; 2663 ifp = NULL; 2664 break; 2665 } 2666 nmd = na->nm_mem; /* get memory allocator */ 2667 } else { 2668 nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1); 2669 if (nmd == NULL) { 2670 if (netmap_verbose) 2671 nm_prerr("%s: failed to find mem_id %u", 2672 hdr->nr_name, 2673 req->nr_mem_id ? req->nr_mem_id : 1); 2674 error = EINVAL; 2675 break; 2676 } 2677 } 2678 2679 error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags, 2680 &req->nr_mem_id); 2681 if (error) 2682 break; 2683 if (na == NULL) /* only memory info */ 2684 break; 2685 netmap_update_config(na); 2686 req->nr_rx_rings = na->num_rx_rings; 2687 req->nr_tx_rings = na->num_tx_rings; 2688 req->nr_rx_slots = na->num_rx_desc; 2689 req->nr_tx_slots = na->num_tx_desc; 2690 req->nr_host_tx_rings = na->num_host_tx_rings; 2691 req->nr_host_rx_rings = na->num_host_rx_rings; 2692 } while (0); 2693 netmap_unget_na(na, ifp); 2694 NMG_UNLOCK(); 2695 break; 2696 } 2697 #ifdef WITH_VALE 2698 case NETMAP_REQ_VALE_ATTACH: { 2699 error = netmap_vale_attach(hdr, NULL /* userspace request */); 2700 break; 2701 } 2702 2703 case NETMAP_REQ_VALE_DETACH: { 2704 error = netmap_vale_detach(hdr, NULL /* userspace request */); 2705 break; 2706 } 2707 2708 case NETMAP_REQ_VALE_LIST: { 2709 error = netmap_vale_list(hdr); 2710 break; 2711 } 2712 2713 case NETMAP_REQ_PORT_HDR_SET: { 2714 struct nmreq_port_hdr *req = 2715 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body; 2716 /* Build a nmreq_register out of the nmreq_port_hdr, 2717 * so that we can call netmap_get_bdg_na(). 
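		 * The header is rewritten to look like a NETMAP_REQ_REGISTER
		 * only for the duration of the lookup and is restored right
		 * after, so that the generic netmap_get_*_na() helpers can be
		 * reused; the PORT_INFO_GET, PORT_HDR_GET and POOLS_INFO_GET
		 * cases use the same trick.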
*/ 2718 struct nmreq_register regreq; 2719 bzero(®req, sizeof(regreq)); 2720 regreq.nr_mode = NR_REG_ALL_NIC; 2721 2722 /* For now we only support virtio-net headers, and only for 2723 * VALE ports, but this may change in future. Valid lengths 2724 * for the virtio-net header are 0 (no header), 10 and 12. */ 2725 if (req->nr_hdr_len != 0 && 2726 req->nr_hdr_len != sizeof(struct nm_vnet_hdr) && 2727 req->nr_hdr_len != 12) { 2728 if (netmap_verbose) 2729 nm_prerr("invalid hdr_len %u", req->nr_hdr_len); 2730 error = EINVAL; 2731 break; 2732 } 2733 NMG_LOCK(); 2734 hdr->nr_reqtype = NETMAP_REQ_REGISTER; 2735 hdr->nr_body = (uintptr_t)®req; 2736 error = netmap_get_vale_na(hdr, &na, NULL, 0); 2737 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET; 2738 hdr->nr_body = (uintptr_t)req; 2739 if (na && !error) { 2740 struct netmap_vp_adapter *vpna = 2741 (struct netmap_vp_adapter *)na; 2742 na->virt_hdr_len = req->nr_hdr_len; 2743 if (na->virt_hdr_len) { 2744 vpna->mfs = NETMAP_BUF_SIZE(na); 2745 } 2746 if (netmap_verbose) 2747 nm_prinf("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na); 2748 netmap_adapter_put(na); 2749 } else if (!na) { 2750 error = ENXIO; 2751 } 2752 NMG_UNLOCK(); 2753 break; 2754 } 2755 2756 case NETMAP_REQ_PORT_HDR_GET: { 2757 /* Get vnet-header length for this netmap port */ 2758 struct nmreq_port_hdr *req = 2759 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body; 2760 /* Build a nmreq_register out of the nmreq_port_hdr, 2761 * so that we can call netmap_get_bdg_na(). */ 2762 struct nmreq_register regreq; 2763 struct ifnet *ifp; 2764 2765 bzero(®req, sizeof(regreq)); 2766 regreq.nr_mode = NR_REG_ALL_NIC; 2767 NMG_LOCK(); 2768 hdr->nr_reqtype = NETMAP_REQ_REGISTER; 2769 hdr->nr_body = (uintptr_t)®req; 2770 error = netmap_get_na(hdr, &na, &ifp, NULL, 0); 2771 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET; 2772 hdr->nr_body = (uintptr_t)req; 2773 if (na && !error) { 2774 req->nr_hdr_len = na->virt_hdr_len; 2775 } 2776 netmap_unget_na(na, ifp); 2777 NMG_UNLOCK(); 2778 break; 2779 } 2780 2781 case NETMAP_REQ_VALE_NEWIF: { 2782 error = nm_vi_create(hdr); 2783 break; 2784 } 2785 2786 case NETMAP_REQ_VALE_DELIF: { 2787 error = nm_vi_destroy(hdr->nr_name); 2788 break; 2789 } 2790 2791 case NETMAP_REQ_VALE_POLLING_ENABLE: 2792 case NETMAP_REQ_VALE_POLLING_DISABLE: { 2793 error = nm_bdg_polling(hdr); 2794 break; 2795 } 2796 #endif /* WITH_VALE */ 2797 case NETMAP_REQ_POOLS_INFO_GET: { 2798 /* Get information from the memory allocator used for 2799 * hdr->nr_name. */ 2800 struct nmreq_pools_info *req = 2801 (struct nmreq_pools_info *)(uintptr_t)hdr->nr_body; 2802 NMG_LOCK(); 2803 do { 2804 /* Build a nmreq_register out of the nmreq_pools_info, 2805 * so that we can call netmap_get_na(). */ 2806 struct nmreq_register regreq; 2807 bzero(®req, sizeof(regreq)); 2808 regreq.nr_mem_id = req->nr_mem_id; 2809 regreq.nr_mode = NR_REG_ALL_NIC; 2810 2811 hdr->nr_reqtype = NETMAP_REQ_REGISTER; 2812 hdr->nr_body = (uintptr_t)®req; 2813 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */); 2814 hdr->nr_reqtype = NETMAP_REQ_POOLS_INFO_GET; /* reset type */ 2815 hdr->nr_body = (uintptr_t)req; /* reset nr_body */ 2816 if (error) { 2817 na = NULL; 2818 ifp = NULL; 2819 break; 2820 } 2821 nmd = na->nm_mem; /* grab the memory allocator */ 2822 if (nmd == NULL) { 2823 error = EINVAL; 2824 break; 2825 } 2826 2827 /* Finalize the memory allocator, get the pools 2828 * information and release the allocator. 
*/ 2829 error = netmap_mem_finalize(nmd, na); 2830 if (error) { 2831 break; 2832 } 2833 error = netmap_mem_pools_info_get(req, nmd); 2834 netmap_mem_drop(na); 2835 } while (0); 2836 netmap_unget_na(na, ifp); 2837 NMG_UNLOCK(); 2838 break; 2839 } 2840 2841 case NETMAP_REQ_CSB_ENABLE: { 2842 struct nmreq_option *opt; 2843 2844 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, 2845 NETMAP_REQ_OPT_CSB); 2846 if (opt == NULL) { 2847 error = EINVAL; 2848 } else { 2849 struct nmreq_opt_csb *csbo = 2850 (struct nmreq_opt_csb *)opt; 2851 error = nmreq_checkduplicate(opt); 2852 if (!error) { 2853 NMG_LOCK(); 2854 error = netmap_csb_validate(priv, csbo); 2855 NMG_UNLOCK(); 2856 } 2857 opt->nro_status = error; 2858 } 2859 break; 2860 } 2861 2862 case NETMAP_REQ_SYNC_KLOOP_START: { 2863 error = netmap_sync_kloop(priv, hdr); 2864 break; 2865 } 2866 2867 case NETMAP_REQ_SYNC_KLOOP_STOP: { 2868 error = netmap_sync_kloop_stop(priv); 2869 break; 2870 } 2871 2872 default: { 2873 error = EINVAL; 2874 break; 2875 } 2876 } 2877 /* Write back request body to userspace and reset the 2878 * user-space pointer. */ 2879 error = nmreq_copyout(hdr, error); 2880 break; 2881 } 2882 2883 case NIOCTXSYNC: 2884 case NIOCRXSYNC: { 2885 if (unlikely(priv->np_nifp == NULL)) { 2886 error = ENXIO; 2887 break; 2888 } 2889 mb(); /* make sure following reads are not from cache */ 2890 2891 if (unlikely(priv->np_csb_atok_base)) { 2892 nm_prerr("Invalid sync in CSB mode"); 2893 error = EBUSY; 2894 break; 2895 } 2896 2897 na = priv->np_na; /* we have a reference */ 2898 2899 mbq_init(&q); 2900 t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX); 2901 krings = NMR(na, t); 2902 qfirst = priv->np_qfirst[t]; 2903 qlast = priv->np_qlast[t]; 2904 sync_flags = priv->np_sync_flags; 2905 2906 for (i = qfirst; i < qlast; i++) { 2907 struct netmap_kring *kring = krings[i]; 2908 struct netmap_ring *ring = kring->ring; 2909 2910 if (unlikely(nm_kr_tryget(kring, 1, &error))) { 2911 error = (error ? 
EIO : 0); 2912 continue; 2913 } 2914 2915 if (cmd == NIOCTXSYNC) { 2916 if (netmap_debug & NM_DEBUG_TXSYNC) 2917 nm_prinf("pre txsync ring %d cur %d hwcur %d", 2918 i, ring->cur, 2919 kring->nr_hwcur); 2920 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { 2921 netmap_ring_reinit(kring); 2922 } else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) { 2923 nm_sync_finalize(kring); 2924 } 2925 if (netmap_debug & NM_DEBUG_TXSYNC) 2926 nm_prinf("post txsync ring %d cur %d hwcur %d", 2927 i, ring->cur, 2928 kring->nr_hwcur); 2929 } else { 2930 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { 2931 netmap_ring_reinit(kring); 2932 } 2933 if (nm_may_forward_up(kring)) { 2934 /* transparent forwarding, see netmap_poll() */ 2935 netmap_grab_packets(kring, &q, netmap_fwd); 2936 } 2937 if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) { 2938 nm_sync_finalize(kring); 2939 } 2940 ring_timestamp_set(ring); 2941 } 2942 nm_kr_put(kring); 2943 } 2944 2945 if (mbq_peek(&q)) { 2946 netmap_send_up(na->ifp, &q); 2947 } 2948 2949 break; 2950 } 2951 2952 default: { 2953 return netmap_ioctl_legacy(priv, cmd, data, td); 2954 break; 2955 } 2956 } 2957 2958 return (error); 2959 } 2960 2961 size_t 2962 nmreq_size_by_type(uint16_t nr_reqtype) 2963 { 2964 switch (nr_reqtype) { 2965 case NETMAP_REQ_REGISTER: 2966 return sizeof(struct nmreq_register); 2967 case NETMAP_REQ_PORT_INFO_GET: 2968 return sizeof(struct nmreq_port_info_get); 2969 case NETMAP_REQ_VALE_ATTACH: 2970 return sizeof(struct nmreq_vale_attach); 2971 case NETMAP_REQ_VALE_DETACH: 2972 return sizeof(struct nmreq_vale_detach); 2973 case NETMAP_REQ_VALE_LIST: 2974 return sizeof(struct nmreq_vale_list); 2975 case NETMAP_REQ_PORT_HDR_SET: 2976 case NETMAP_REQ_PORT_HDR_GET: 2977 return sizeof(struct nmreq_port_hdr); 2978 case NETMAP_REQ_VALE_NEWIF: 2979 return sizeof(struct nmreq_vale_newif); 2980 case NETMAP_REQ_VALE_DELIF: 2981 case NETMAP_REQ_SYNC_KLOOP_STOP: 2982 case NETMAP_REQ_CSB_ENABLE: 2983 return 0; 2984 case NETMAP_REQ_VALE_POLLING_ENABLE: 2985 case NETMAP_REQ_VALE_POLLING_DISABLE: 2986 return sizeof(struct nmreq_vale_polling); 2987 case NETMAP_REQ_POOLS_INFO_GET: 2988 return sizeof(struct nmreq_pools_info); 2989 case NETMAP_REQ_SYNC_KLOOP_START: 2990 return sizeof(struct nmreq_sync_kloop_start); 2991 } 2992 return 0; 2993 } 2994 2995 static size_t 2996 nmreq_opt_size_by_type(uint32_t nro_reqtype, uint64_t nro_size) 2997 { 2998 size_t rv = sizeof(struct nmreq_option); 2999 #ifdef NETMAP_REQ_OPT_DEBUG 3000 if (nro_reqtype & NETMAP_REQ_OPT_DEBUG) 3001 return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG); 3002 #endif /* NETMAP_REQ_OPT_DEBUG */ 3003 switch (nro_reqtype) { 3004 #ifdef WITH_EXTMEM 3005 case NETMAP_REQ_OPT_EXTMEM: 3006 rv = sizeof(struct nmreq_opt_extmem); 3007 break; 3008 #endif /* WITH_EXTMEM */ 3009 case NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS: 3010 if (nro_size >= rv) 3011 rv = nro_size; 3012 break; 3013 case NETMAP_REQ_OPT_CSB: 3014 rv = sizeof(struct nmreq_opt_csb); 3015 break; 3016 case NETMAP_REQ_OPT_SYNC_KLOOP_MODE: 3017 rv = sizeof(struct nmreq_opt_sync_kloop_mode); 3018 break; 3019 } 3020 /* subtract the common header */ 3021 return rv - sizeof(struct nmreq_option); 3022 } 3023 3024 int 3025 nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user) 3026 { 3027 size_t rqsz, optsz, bufsz; 3028 int error; 3029 char *ker = NULL, *p; 3030 struct nmreq_option **next, *src; 3031 struct nmreq_option buf; 3032 uint64_t *ptrs; 3033 3034 if (hdr->nr_reserved) { 3035 if (netmap_verbose) 3036 
nm_prerr("nr_reserved must be zero"); 3037 return EINVAL; 3038 } 3039 3040 if (!nr_body_is_user) 3041 return 0; 3042 3043 hdr->nr_reserved = nr_body_is_user; 3044 3045 /* compute the total size of the buffer */ 3046 rqsz = nmreq_size_by_type(hdr->nr_reqtype); 3047 if (rqsz > NETMAP_REQ_MAXSIZE) { 3048 error = EMSGSIZE; 3049 goto out_err; 3050 } 3051 if ((rqsz && hdr->nr_body == (uintptr_t)NULL) || 3052 (!rqsz && hdr->nr_body != (uintptr_t)NULL)) { 3053 /* Request body expected, but not found; or 3054 * request body found but unexpected. */ 3055 if (netmap_verbose) 3056 nm_prerr("nr_body expected but not found, or vice versa"); 3057 error = EINVAL; 3058 goto out_err; 3059 } 3060 3061 bufsz = 2 * sizeof(void *) + rqsz; 3062 optsz = 0; 3063 for (src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; src; 3064 src = (struct nmreq_option *)(uintptr_t)buf.nro_next) 3065 { 3066 error = copyin(src, &buf, sizeof(*src)); 3067 if (error) 3068 goto out_err; 3069 optsz += sizeof(*src); 3070 optsz += nmreq_opt_size_by_type(buf.nro_reqtype, buf.nro_size); 3071 if (rqsz + optsz > NETMAP_REQ_MAXSIZE) { 3072 error = EMSGSIZE; 3073 goto out_err; 3074 } 3075 bufsz += optsz + sizeof(void *); 3076 } 3077 3078 ker = nm_os_malloc(bufsz); 3079 if (ker == NULL) { 3080 error = ENOMEM; 3081 goto out_err; 3082 } 3083 p = ker; 3084 3085 /* make a copy of the user pointers */ 3086 ptrs = (uint64_t*)p; 3087 *ptrs++ = hdr->nr_body; 3088 *ptrs++ = hdr->nr_options; 3089 p = (char *)ptrs; 3090 3091 /* copy the body */ 3092 error = copyin((void *)(uintptr_t)hdr->nr_body, p, rqsz); 3093 if (error) 3094 goto out_restore; 3095 /* overwrite the user pointer with the in-kernel one */ 3096 hdr->nr_body = (uintptr_t)p; 3097 p += rqsz; 3098 3099 /* copy the options */ 3100 next = (struct nmreq_option **)&hdr->nr_options; 3101 src = *next; 3102 while (src) { 3103 struct nmreq_option *opt; 3104 3105 /* copy the option header */ 3106 ptrs = (uint64_t *)p; 3107 opt = (struct nmreq_option *)(ptrs + 1); 3108 error = copyin(src, opt, sizeof(*src)); 3109 if (error) 3110 goto out_restore; 3111 /* make a copy of the user next pointer */ 3112 *ptrs = opt->nro_next; 3113 /* overwrite the user pointer with the in-kernel one */ 3114 *next = opt; 3115 3116 /* initialize the option as not supported. 3117 * Recognized options will update this field. 
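		 * For reference, the kernel buffer built by this function is
		 * laid out as follows (the saved user pointers are what
		 * nmreq_copyout() later uses to restore the header and the
		 * option list):
		 *
		 *	+-----------------------+  <- ker
		 *	| saved user nr_body    |
		 *	| saved user nr_options |
		 *	+-----------------------+  <- new hdr->nr_body
		 *	| request body          |
		 *	+-----------------------+
		 *	| saved user nro_next   |  (one group per option)
		 *	| option header         |  <- new nr_options / nro_next
		 *	| option body           |
		 *	+-----------------------+
		 *	| ...                   |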
3118 */ 3119 opt->nro_status = EOPNOTSUPP; 3120 3121 p = (char *)(opt + 1); 3122 3123 /* copy the option body */ 3124 optsz = nmreq_opt_size_by_type(opt->nro_reqtype, 3125 opt->nro_size); 3126 if (optsz) { 3127 /* the option body follows the option header */ 3128 error = copyin(src + 1, p, optsz); 3129 if (error) 3130 goto out_restore; 3131 p += optsz; 3132 } 3133 3134 /* move to next option */ 3135 next = (struct nmreq_option **)&opt->nro_next; 3136 src = *next; 3137 } 3138 return 0; 3139 3140 out_restore: 3141 ptrs = (uint64_t *)ker; 3142 hdr->nr_body = *ptrs++; 3143 hdr->nr_options = *ptrs++; 3144 hdr->nr_reserved = 0; 3145 nm_os_free(ker); 3146 out_err: 3147 return error; 3148 } 3149 3150 static int 3151 nmreq_copyout(struct nmreq_header *hdr, int rerror) 3152 { 3153 struct nmreq_option *src, *dst; 3154 void *ker = (void *)(uintptr_t)hdr->nr_body, *bufstart; 3155 uint64_t *ptrs; 3156 size_t bodysz; 3157 int error; 3158 3159 if (!hdr->nr_reserved) 3160 return rerror; 3161 3162 /* restore the user pointers in the header */ 3163 ptrs = (uint64_t *)ker - 2; 3164 bufstart = ptrs; 3165 hdr->nr_body = *ptrs++; 3166 src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; 3167 hdr->nr_options = *ptrs; 3168 3169 if (!rerror) { 3170 /* copy the body */ 3171 bodysz = nmreq_size_by_type(hdr->nr_reqtype); 3172 error = copyout(ker, (void *)(uintptr_t)hdr->nr_body, bodysz); 3173 if (error) { 3174 rerror = error; 3175 goto out; 3176 } 3177 } 3178 3179 /* copy the options */ 3180 dst = (struct nmreq_option *)(uintptr_t)hdr->nr_options; 3181 while (src) { 3182 size_t optsz; 3183 uint64_t next; 3184 3185 /* restore the user pointer */ 3186 next = src->nro_next; 3187 ptrs = (uint64_t *)src - 1; 3188 src->nro_next = *ptrs; 3189 3190 /* always copy the option header */ 3191 error = copyout(src, dst, sizeof(*src)); 3192 if (error) { 3193 rerror = error; 3194 goto out; 3195 } 3196 3197 /* copy the option body only if there was no error */ 3198 if (!rerror && !src->nro_status) { 3199 optsz = nmreq_opt_size_by_type(src->nro_reqtype, 3200 src->nro_size); 3201 if (optsz) { 3202 error = copyout(src + 1, dst + 1, optsz); 3203 if (error) { 3204 rerror = error; 3205 goto out; 3206 } 3207 } 3208 } 3209 src = (struct nmreq_option *)(uintptr_t)next; 3210 dst = (struct nmreq_option *)(uintptr_t)*ptrs; 3211 } 3212 3213 3214 out: 3215 hdr->nr_reserved = 0; 3216 nm_os_free(bufstart); 3217 return rerror; 3218 } 3219 3220 struct nmreq_option * 3221 nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype) 3222 { 3223 for ( ; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next) 3224 if (opt->nro_reqtype == reqtype) 3225 return opt; 3226 return NULL; 3227 } 3228 3229 int 3230 nmreq_checkduplicate(struct nmreq_option *opt) { 3231 uint16_t type = opt->nro_reqtype; 3232 int dup = 0; 3233 3234 while ((opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)opt->nro_next, 3235 type))) { 3236 dup++; 3237 opt->nro_status = EINVAL; 3238 } 3239 return (dup ? EINVAL : 0); 3240 } 3241 3242 static int 3243 nmreq_checkoptions(struct nmreq_header *hdr) 3244 { 3245 struct nmreq_option *opt; 3246 /* return error if there is still any option 3247 * marked as not supported 3248 */ 3249 3250 for (opt = (struct nmreq_option *)(uintptr_t)hdr->nr_options; opt; 3251 opt = (struct nmreq_option *)(uintptr_t)opt->nro_next) 3252 if (opt->nro_status == EOPNOTSUPP) 3253 return EOPNOTSUPP; 3254 3255 return 0; 3256 } 3257 3258 /* 3259 * select(2) and poll(2) handlers for the "netmap" device. 3260 * 3261 * Can be called for one or more queues. 
3262 * Return true the event mask corresponding to ready events. 3263 * If there are no ready events (and 'sr' is not NULL), do a 3264 * selrecord on either individual selinfo or on the global one. 3265 * Device-dependent parts (locking and sync of tx/rx rings) 3266 * are done through callbacks. 3267 * 3268 * On linux, arguments are really pwait, the poll table, and 'td' is struct file * 3269 * The first one is remapped to pwait as selrecord() uses the name as an 3270 * hidden argument. 3271 */ 3272 int 3273 netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr) 3274 { 3275 struct netmap_adapter *na; 3276 struct netmap_kring *kring; 3277 struct netmap_ring *ring; 3278 u_int i, want[NR_TXRX], revents = 0; 3279 NM_SELINFO_T *si[NR_TXRX]; 3280 #define want_tx want[NR_TX] 3281 #define want_rx want[NR_RX] 3282 struct mbq q; /* packets from RX hw queues to host stack */ 3283 3284 /* 3285 * In order to avoid nested locks, we need to "double check" 3286 * txsync and rxsync if we decide to do a selrecord(). 3287 * retry_tx (and retry_rx, later) prevent looping forever. 3288 */ 3289 int retry_tx = 1, retry_rx = 1; 3290 3291 /* Transparent mode: send_down is 1 if we have found some 3292 * packets to forward (host RX ring --> NIC) during the rx 3293 * scan and we have not sent them down to the NIC yet. 3294 * Transparent mode requires to bind all rings to a single 3295 * file descriptor. 3296 */ 3297 int send_down = 0; 3298 int sync_flags = priv->np_sync_flags; 3299 3300 mbq_init(&q); 3301 3302 if (unlikely(priv->np_nifp == NULL)) { 3303 return POLLERR; 3304 } 3305 mb(); /* make sure following reads are not from cache */ 3306 3307 na = priv->np_na; 3308 3309 if (unlikely(!nm_netmap_on(na))) 3310 return POLLERR; 3311 3312 if (unlikely(priv->np_csb_atok_base)) { 3313 nm_prerr("Invalid poll in CSB mode"); 3314 return POLLERR; 3315 } 3316 3317 if (netmap_debug & NM_DEBUG_ON) 3318 nm_prinf("device %s events 0x%x", na->name, events); 3319 want_tx = events & (POLLOUT | POLLWRNORM); 3320 want_rx = events & (POLLIN | POLLRDNORM); 3321 3322 /* 3323 * If the card has more than one queue AND the file descriptor is 3324 * bound to all of them, we sleep on the "global" selinfo, otherwise 3325 * we sleep on individual selinfo (FreeBSD only allows two selinfo's 3326 * per file descriptor). 3327 * The interrupt routine in the driver wake one or the other 3328 * (or both) depending on which clients are active. 3329 * 3330 * rxsync() is only called if we run out of buffers on a POLLIN. 3331 * txsync() is called if we run out of buffers on POLLOUT, or 3332 * there are pending packets to send. The latter can be disabled 3333 * passing NETMAP_NO_TX_POLL in the NIOCREG call. 3334 */ 3335 si[NR_RX] = priv->np_si[NR_RX]; 3336 si[NR_TX] = priv->np_si[NR_TX]; 3337 3338 #ifdef __FreeBSD__ 3339 /* 3340 * We start with a lock free round which is cheap if we have 3341 * slots available. If this fails, then lock and call the sync 3342 * routines. We can't do this on Linux, as the contract says 3343 * that we must call nm_os_selrecord() unconditionally. 3344 */ 3345 if (want_tx) { 3346 const enum txrx t = NR_TX; 3347 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { 3348 kring = NMR(na, t)[i]; 3349 if (kring->ring->cur != kring->ring->tail) { 3350 /* Some unseen TX space is available, so what 3351 * we don't need to run txsync. 
*/ 3352 revents |= want[t]; 3353 want[t] = 0; 3354 break; 3355 } 3356 } 3357 } 3358 if (want_rx) { 3359 const enum txrx t = NR_RX; 3360 int rxsync_needed = 0; 3361 3362 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { 3363 kring = NMR(na, t)[i]; 3364 if (kring->ring->cur == kring->ring->tail 3365 || kring->rhead != kring->ring->head) { 3366 /* There are no unseen packets on this ring, 3367 * or there are some buffers to be returned 3368 * to the netmap port. We therefore go ahead 3369 * and run rxsync. */ 3370 rxsync_needed = 1; 3371 break; 3372 } 3373 } 3374 if (!rxsync_needed) { 3375 revents |= want_rx; 3376 want_rx = 0; 3377 } 3378 } 3379 #endif 3380 3381 #ifdef linux 3382 /* The selrecord must be unconditional on linux. */ 3383 nm_os_selrecord(sr, si[NR_RX]); 3384 nm_os_selrecord(sr, si[NR_TX]); 3385 #endif /* linux */ 3386 3387 /* 3388 * If we want to push packets out (priv->np_txpoll) or 3389 * want_tx is still set, we must issue txsync calls 3390 * (on all rings, to avoid that the tx rings stall). 3391 * Fortunately, normal tx mode has np_txpoll set. 3392 */ 3393 if (priv->np_txpoll || want_tx) { 3394 /* 3395 * The first round checks if anyone is ready, if not 3396 * do a selrecord and another round to handle races. 3397 * want_tx goes to 0 if any space is found, and is 3398 * used to skip rings with no pending transmissions. 3399 */ 3400 flush_tx: 3401 for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) { 3402 int found = 0; 3403 3404 kring = na->tx_rings[i]; 3405 ring = kring->ring; 3406 3407 /* 3408 * Don't try to txsync this TX ring if we already found some 3409 * space in some of the TX rings (want_tx == 0) and there are no 3410 * TX slots in this ring that need to be flushed to the NIC 3411 * (head == hwcur). 3412 */ 3413 if (!send_down && !want_tx && ring->head == kring->nr_hwcur) 3414 continue; 3415 3416 if (nm_kr_tryget(kring, 1, &revents)) 3417 continue; 3418 3419 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { 3420 netmap_ring_reinit(kring); 3421 revents |= POLLERR; 3422 } else { 3423 if (kring->nm_sync(kring, sync_flags)) 3424 revents |= POLLERR; 3425 else 3426 nm_sync_finalize(kring); 3427 } 3428 3429 /* 3430 * If we found new slots, notify potential 3431 * listeners on the same ring. 3432 * Since we just did a txsync, look at the copies 3433 * of cur,tail in the kring. 3434 */ 3435 found = kring->rcur != kring->rtail; 3436 nm_kr_put(kring); 3437 if (found) { /* notify other listeners */ 3438 revents |= want_tx; 3439 want_tx = 0; 3440 #ifndef linux 3441 kring->nm_notify(kring, 0); 3442 #endif /* linux */ 3443 } 3444 } 3445 /* if there were any packet to forward we must have handled them by now */ 3446 send_down = 0; 3447 if (want_tx && retry_tx && sr) { 3448 #ifndef linux 3449 nm_os_selrecord(sr, si[NR_TX]); 3450 #endif /* !linux */ 3451 retry_tx = 0; 3452 goto flush_tx; 3453 } 3454 } 3455 3456 /* 3457 * If want_rx is still set scan receive rings. 3458 * Do it on all rings because otherwise we starve. 
3459 */ 3460 if (want_rx) { 3461 /* two rounds here for race avoidance */ 3462 do_retry_rx: 3463 for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) { 3464 int found = 0; 3465 3466 kring = na->rx_rings[i]; 3467 ring = kring->ring; 3468 3469 if (unlikely(nm_kr_tryget(kring, 1, &revents))) 3470 continue; 3471 3472 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { 3473 netmap_ring_reinit(kring); 3474 revents |= POLLERR; 3475 } 3476 /* now we can use kring->rcur, rtail */ 3477 3478 /* 3479 * transparent mode support: collect packets from 3480 * hw rxring(s) that have been released by the user 3481 */ 3482 if (nm_may_forward_up(kring)) { 3483 netmap_grab_packets(kring, &q, netmap_fwd); 3484 } 3485 3486 /* Clear the NR_FORWARD flag anyway, it may be set by 3487 * the nm_sync() below only on for the host RX ring (see 3488 * netmap_rxsync_from_host()). */ 3489 kring->nr_kflags &= ~NR_FORWARD; 3490 if (kring->nm_sync(kring, sync_flags)) 3491 revents |= POLLERR; 3492 else 3493 nm_sync_finalize(kring); 3494 send_down |= (kring->nr_kflags & NR_FORWARD); 3495 ring_timestamp_set(ring); 3496 found = kring->rcur != kring->rtail; 3497 nm_kr_put(kring); 3498 if (found) { 3499 revents |= want_rx; 3500 retry_rx = 0; 3501 #ifndef linux 3502 kring->nm_notify(kring, 0); 3503 #endif /* linux */ 3504 } 3505 } 3506 3507 #ifndef linux 3508 if (retry_rx && sr) { 3509 nm_os_selrecord(sr, si[NR_RX]); 3510 } 3511 #endif /* !linux */ 3512 if (send_down || retry_rx) { 3513 retry_rx = 0; 3514 if (send_down) 3515 goto flush_tx; /* and retry_rx */ 3516 else 3517 goto do_retry_rx; 3518 } 3519 } 3520 3521 /* 3522 * Transparent mode: released bufs (i.e. between kring->nr_hwcur and 3523 * ring->head) marked with NS_FORWARD on hw rx rings are passed up 3524 * to the host stack. 3525 */ 3526 3527 if (mbq_peek(&q)) { 3528 netmap_send_up(na->ifp, &q); 3529 } 3530 3531 return (revents); 3532 #undef want_tx 3533 #undef want_rx 3534 } 3535 3536 int 3537 nma_intr_enable(struct netmap_adapter *na, int onoff) 3538 { 3539 bool changed = false; 3540 enum txrx t; 3541 int i; 3542 3543 for_rx_tx(t) { 3544 for (i = 0; i < nma_get_nrings(na, t); i++) { 3545 struct netmap_kring *kring = NMR(na, t)[i]; 3546 int on = !(kring->nr_kflags & NKR_NOINTR); 3547 3548 if (!!onoff != !!on) { 3549 changed = true; 3550 } 3551 if (onoff) { 3552 kring->nr_kflags &= ~NKR_NOINTR; 3553 } else { 3554 kring->nr_kflags |= NKR_NOINTR; 3555 } 3556 } 3557 } 3558 3559 if (!changed) { 3560 return 0; /* nothing to do */ 3561 } 3562 3563 if (!na->nm_intr) { 3564 nm_prerr("Cannot %s interrupts for %s", onoff ? "enable" : "disable", 3565 na->name); 3566 return -1; 3567 } 3568 3569 na->nm_intr(na, onoff); 3570 3571 return 0; 3572 } 3573 3574 3575 /*-------------------- driver support routines -------------------*/ 3576 3577 /* default notify callback */ 3578 static int 3579 netmap_notify(struct netmap_kring *kring, int flags) 3580 { 3581 struct netmap_adapter *na = kring->notify_na; 3582 enum txrx t = kring->tx; 3583 3584 nm_os_selwakeup(&kring->si); 3585 /* optimization: avoid a wake up on the global 3586 * queue if nobody has registered for more 3587 * than one ring 3588 */ 3589 if (na->si_users[t] > 0) 3590 nm_os_selwakeup(&na->si[t]); 3591 3592 return NM_IRQ_COMPLETED; 3593 } 3594 3595 /* called by all routines that create netmap_adapters. 
3596 * provide some defaults and get a reference to the 3597 * memory allocator 3598 */ 3599 int 3600 netmap_attach_common(struct netmap_adapter *na) 3601 { 3602 if (!na->rx_buf_maxsize) { 3603 /* Set a conservative default (larger is safer). */ 3604 na->rx_buf_maxsize = PAGE_SIZE; 3605 } 3606 3607 #ifdef __FreeBSD__ 3608 if (na->na_flags & NAF_HOST_RINGS && na->ifp) { 3609 na->if_input = na->ifp->if_input; /* for netmap_send_up */ 3610 } 3611 na->pdev = na; /* make sure netmap_mem_map() is called */ 3612 #endif /* __FreeBSD__ */ 3613 if (na->na_flags & NAF_HOST_RINGS) { 3614 if (na->num_host_rx_rings == 0) 3615 na->num_host_rx_rings = 1; 3616 if (na->num_host_tx_rings == 0) 3617 na->num_host_tx_rings = 1; 3618 } 3619 if (na->nm_krings_create == NULL) { 3620 /* we assume that we have been called by a driver, 3621 * since other port types all provide their own 3622 * nm_krings_create 3623 */ 3624 na->nm_krings_create = netmap_hw_krings_create; 3625 na->nm_krings_delete = netmap_hw_krings_delete; 3626 } 3627 if (na->nm_notify == NULL) 3628 na->nm_notify = netmap_notify; 3629 na->active_fds = 0; 3630 3631 if (na->nm_mem == NULL) { 3632 /* use the global allocator */ 3633 na->nm_mem = netmap_mem_get(&nm_mem); 3634 } 3635 #ifdef WITH_VALE 3636 if (na->nm_bdg_attach == NULL) 3637 /* no special nm_bdg_attach callback. On VALE 3638 * attach, we need to interpose a bwrap 3639 */ 3640 na->nm_bdg_attach = netmap_default_bdg_attach; 3641 #endif 3642 3643 return 0; 3644 } 3645 3646 /* Wrapper for the register callback provided netmap-enabled 3647 * hardware drivers. 3648 * nm_iszombie(na) means that the driver module has been 3649 * unloaded, so we cannot call into it. 3650 * nm_os_ifnet_lock() must guarantee mutual exclusion with 3651 * module unloading. 3652 */ 3653 static int 3654 netmap_hw_reg(struct netmap_adapter *na, int onoff) 3655 { 3656 struct netmap_hw_adapter *hwna = 3657 (struct netmap_hw_adapter*)na; 3658 int error = 0; 3659 3660 nm_os_ifnet_lock(); 3661 3662 if (nm_iszombie(na)) { 3663 if (onoff) { 3664 error = ENXIO; 3665 } else if (na != NULL) { 3666 na->na_flags &= ~NAF_NETMAP_ON; 3667 } 3668 goto out; 3669 } 3670 3671 error = hwna->nm_hw_register(na, onoff); 3672 3673 out: 3674 nm_os_ifnet_unlock(); 3675 3676 return error; 3677 } 3678 3679 static void 3680 netmap_hw_dtor(struct netmap_adapter *na) 3681 { 3682 if (na->ifp == NULL) 3683 return; 3684 3685 NM_DETACH_NA(na->ifp); 3686 } 3687 3688 3689 /* 3690 * Allocate a netmap_adapter object, and initialize it from the 3691 * 'arg' passed by the driver on attach. 3692 * We allocate a block of memory of 'size' bytes, which has room 3693 * for struct netmap_adapter plus additional room private to 3694 * the caller. 3695 * Return 0 on success, ENOMEM otherwise. 
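 *
 * A typical native driver attach looks like the sketch below (illustrative
 * only: the foo_* callbacks and the 'sc' softc fields are the driver's own,
 * not part of this file):
 *
 *      struct netmap_adapter na = { 0 };
 *
 *      na.ifp = sc->ifp;
 *      na.num_tx_desc = sc->num_tx_desc;
 *      na.num_rx_desc = sc->num_rx_desc;
 *      na.num_tx_rings = na.num_rx_rings = sc->num_queues;
 *      na.nm_txsync = foo_netmap_txsync;
 *      na.nm_rxsync = foo_netmap_rxsync;
 *      na.nm_register = foo_netmap_reg;
 *      netmap_attach(&na);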
3696 */ 3697 int 3698 netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg) 3699 { 3700 struct netmap_hw_adapter *hwna = NULL; 3701 struct ifnet *ifp = NULL; 3702 3703 if (size < sizeof(struct netmap_hw_adapter)) { 3704 if (netmap_debug & NM_DEBUG_ON) 3705 nm_prerr("Invalid netmap adapter size %d", (int)size); 3706 return EINVAL; 3707 } 3708 3709 if (arg == NULL || arg->ifp == NULL) { 3710 if (netmap_debug & NM_DEBUG_ON) 3711 nm_prerr("either arg or arg->ifp is NULL"); 3712 return EINVAL; 3713 } 3714 3715 if (arg->num_tx_rings == 0 || arg->num_rx_rings == 0) { 3716 if (netmap_debug & NM_DEBUG_ON) 3717 nm_prerr("%s: invalid rings tx %d rx %d", 3718 arg->name, arg->num_tx_rings, arg->num_rx_rings); 3719 return EINVAL; 3720 } 3721 3722 ifp = arg->ifp; 3723 if (NM_NA_CLASH(ifp)) { 3724 /* If NA(ifp) is not null but there is no valid netmap 3725 * adapter it means that someone else is using the same 3726 * pointer (e.g. ax25_ptr on linux). This happens for 3727 * instance when also PF_RING is in use. */ 3728 nm_prerr("Error: netmap adapter hook is busy"); 3729 return EBUSY; 3730 } 3731 3732 hwna = nm_os_malloc(size); 3733 if (hwna == NULL) 3734 goto fail; 3735 hwna->up = *arg; 3736 hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE; 3737 strlcpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name)); 3738 if (override_reg) { 3739 hwna->nm_hw_register = hwna->up.nm_register; 3740 hwna->up.nm_register = netmap_hw_reg; 3741 } 3742 if (netmap_attach_common(&hwna->up)) { 3743 nm_os_free(hwna); 3744 goto fail; 3745 } 3746 netmap_adapter_get(&hwna->up); 3747 3748 NM_ATTACH_NA(ifp, &hwna->up); 3749 3750 nm_os_onattach(ifp); 3751 3752 if (arg->nm_dtor == NULL) { 3753 hwna->up.nm_dtor = netmap_hw_dtor; 3754 } 3755 3756 if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n", 3757 hwna->up.num_tx_rings, hwna->up.num_tx_desc, 3758 hwna->up.num_rx_rings, hwna->up.num_rx_desc); 3759 return 0; 3760 3761 fail: 3762 nm_prerr("fail, arg %p ifp %p na %p", arg, ifp, hwna); 3763 return (hwna ? 

void
NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
{
	if (!na) {
		return;
	}

	refcount_acquire(&na->na_refcount);
}


/* returns 1 iff the netmap_adapter is destroyed */
int
NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
{
	if (!na)
		return 1;

	if (!refcount_release(&na->na_refcount))
		return 0;

	if (na->nm_dtor)
		na->nm_dtor(na);

	if (na->tx_rings) { /* XXX should not happen */
		if (netmap_debug & NM_DEBUG_ON)
			nm_prerr("freeing leftover tx_rings");
		na->nm_krings_delete(na);
	}
	netmap_pipe_dealloc(na);
	if (na->nm_mem)
		netmap_mem_put(na->nm_mem);
	bzero(na, sizeof(*na));
	nm_os_free(na);

	return 1;
}

/* nm_krings_create callback for all hardware native adapters */
int
netmap_hw_krings_create(struct netmap_adapter *na)
{
	int ret = netmap_krings_create(na, 0);
	if (ret == 0) {
		/* initialize the mbq for the sw rx ring */
		u_int lim = netmap_real_rings(na, NR_RX), i;
		for (i = na->num_rx_rings; i < lim; i++) {
			mbq_safe_init(&NMR(na, NR_RX)[i]->rx_queue);
		}
		nm_prdis("initialized sw rx queue %d", na->num_rx_rings);
	}
	return ret;
}


/*
 * Called on module unload by the netmap-enabled drivers
 */
void
netmap_detach(struct ifnet *ifp)
{
	struct netmap_adapter *na = NA(ifp);

	if (!na)
		return;

	NMG_LOCK();
	netmap_set_all_rings(na, NM_KR_LOCKED);
	/*
	 * if the netmap adapter is not native, somebody
	 * changed it, so we can not release it here.
	 * The NAF_ZOMBIE flag will notify the new owner that
	 * the driver is gone.
	 */
	if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) {
		na->na_flags |= NAF_ZOMBIE;
	}
	/* give active users a chance to notice that NAF_ZOMBIE has been
	 * turned on, so that they can stop and return an error to userspace.
	 * Note that this becomes a NOP if there are no active users and,
	 * therefore, the put() above has deleted the na, since now NA(ifp) is
	 * NULL.
	 */
	netmap_enable_all_rings(ifp);
	NMG_UNLOCK();
}
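
/*
 * Illustrative sketch (hypothetical "foo" driver): netmap_detach() is
 * called from the driver's detach path, before the ifnet disappears.
 * device_get_softc()/ether_ifdetach() are the usual FreeBSD calls; the
 * foo_* names are made up.
 *
 *	static int
 *	foo_detach(device_t dev)
 *	{
 *		struct foo_softc *sc = device_get_softc(dev);
 *
 *		netmap_detach(sc->ifp);
 *		ether_ifdetach(sc->ifp);
 *		foo_release_resources(sc);	// driver-private teardown
 *		return (0);
 *	}
 */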

/*
 * Intercept packets from the network stack and pass them
 * to netmap as incoming packets on the 'software' ring.
 *
 * We only store packets in a bounded mbq and then copy them
 * in the relevant rxsync routine.
 *
 * We rely on the OS to make sure that the ifp and na do not go
 * away (typically the caller checks for IFF_DRV_RUNNING or the like).
 * In nm_register() or whenever there is a reinitialization,
 * we make sure to make the mode change visible here.
 */
int
netmap_transmit(struct ifnet *ifp, struct mbuf *m)
{
	struct netmap_adapter *na = NA(ifp);
	struct netmap_kring *kring, *tx_kring;
	u_int len = MBUF_LEN(m);
	u_int error = ENOBUFS;
	unsigned int txr;
	struct mbq *q;
	int busy;
	u_int i;

	i = MBUF_TXQ(m);
	if (i >= na->num_host_rx_rings) {
		i = i % na->num_host_rx_rings;
	}
	kring = NMR(na, NR_RX)[nma_get_nrings(na, NR_RX) + i];

	// XXX [Linux] we do not need this lock
	// if we follow the down/configure/up protocol -gl
	// mtx_lock(&na->core_lock);

	if (!nm_netmap_on(na)) {
		nm_prerr("%s not in netmap mode anymore", na->name);
		error = ENXIO;
		goto done;
	}

	txr = MBUF_TXQ(m);
	if (txr >= na->num_tx_rings) {
		txr %= na->num_tx_rings;
	}
	tx_kring = NMR(na, NR_TX)[txr];

	if (tx_kring->nr_mode == NKR_NETMAP_OFF) {
		return MBUF_TRANSMIT(na, ifp, m);
	}

	q = &kring->rx_queue;

	// XXX reconsider long packets if we handle fragments
	if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
		nm_prerr("%s from_host, drop packet size %d > %d", na->name,
			len, NETMAP_BUF_SIZE(na));
		goto done;
	}

	if (!netmap_generic_hwcsum) {
		if (nm_os_mbuf_has_csum_offld(m)) {
			nm_prlim(1, "%s drop mbuf that needs checksum offload", na->name);
			goto done;
		}
	}

	if (nm_os_mbuf_has_seg_offld(m)) {
		nm_prlim(1, "%s drop mbuf that needs generic segmentation offload", na->name);
		goto done;
	}

#ifdef __FreeBSD__
	ETHER_BPF_MTAP(ifp, m);
#endif /* __FreeBSD__ */

	/* protect against netmap_rxsync_from_host(), netmap_sw_to_nic()
	 * and maybe other instances of netmap_transmit (the latter
	 * not possible on Linux).
	 * We enqueue the mbuf only if we are sure there is going to be
	 * enough room in the host RX ring, otherwise we drop it.
	 */
	mbq_lock(q);

	busy = kring->nr_hwtail - kring->nr_hwcur;
	if (busy < 0)
		busy += kring->nkr_num_slots;
	if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) {
		nm_prlim(2, "%s full hwcur %d hwtail %d qlen %d", na->name,
			kring->nr_hwcur, kring->nr_hwtail, mbq_len(q));
	} else {
		mbq_enqueue(q, m);
		nm_prdis(2, "%s %d bufs in queue", na->name, mbq_len(q));
		/* notify outside the lock */
		m = NULL;
		error = 0;
	}
	mbq_unlock(q);

done:
	if (m)
		m_freem(m);
	/* unconditionally wake up listeners */
	kring->nm_notify(kring, 0);
	/* this is normally netmap_notify(), but for nics
	 * connected to a bridge it is netmap_bwrap_intr_notify(),
	 * that possibly forwards the frames through the switch
	 */

	return (error);
}
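
/*
 * Worked example of the occupancy check above (numbers are illustrative):
 * with nkr_num_slots = 1024, nr_hwcur = 1000 and nr_hwtail = 10, the host
 * RX kring already accounts for
 *
 *	busy = 10 - 1000 = -990  -->  -990 + 1024 = 34 slots,
 *
 * so a new mbuf is enqueued only while 34 + mbq_len(q) < 1023; once the
 * intermediate queue reaches 989 mbufs, further packets are dropped until
 * netmap_rxsync_from_host() drains the queue into the ring.
 */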

/*
 * netmap_reset() is called by the driver routines when reinitializing
 * a ring. The driver is in charge of locking to protect the kring.
 * If native netmap mode is not set, just return NULL.
 * If native netmap mode is set, we must also mark the ring as active
 * by setting nr_mode to NKR_NETMAP_ON.
 */
struct netmap_slot *
netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
	u_int new_cur)
{
	struct netmap_kring *kring;
	int new_hwofs, lim;

	if (!nm_native_on(na)) {
		nm_prdis("interface not in native netmap mode");
		return NULL;	/* nothing to reinitialize */
	}

	/* XXX note- in the new scheme, we are not guaranteed to be
	 * under lock (e.g. when called on a device reset).
	 * In this case, we should set a flag and do not trust too
	 * much the values. In practice: TODO
	 * - set a RESET flag somewhere in the kring
	 * - do the processing in a conservative way
	 * - let the *sync() fixup at the end.
	 */
	if (tx == NR_TX) {
		if (n >= na->num_tx_rings)
			return NULL;

		kring = na->tx_rings[n];

		if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
			kring->nr_mode = NKR_NETMAP_OFF;
			return NULL;
		}

		// XXX check whether we should use hwcur or rcur
		new_hwofs = kring->nr_hwcur - new_cur;
	} else {
		if (n >= na->num_rx_rings)
			return NULL;
		kring = na->rx_rings[n];

		if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
			kring->nr_mode = NKR_NETMAP_OFF;
			return NULL;
		}

		new_hwofs = kring->nr_hwtail - new_cur;
	}
	lim = kring->nkr_num_slots - 1;
	if (new_hwofs > lim)
		new_hwofs -= lim + 1;

	/* Always set the new offset value and realign the ring. */
	if (netmap_debug & NM_DEBUG_ON)
		nm_prinf("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
			na->name,
			tx == NR_TX ? "TX" : "RX", n,
			kring->nkr_hwofs, new_hwofs,
			kring->nr_hwtail,
			tx == NR_TX ? lim : kring->nr_hwtail);
	kring->nkr_hwofs = new_hwofs;
	if (tx == NR_TX) {
		kring->nr_hwtail = kring->nr_hwcur + lim;
		if (kring->nr_hwtail > lim)
			kring->nr_hwtail -= lim + 1;
	}

	/*
	 * Wakeup on the individual and global selwait
	 * We do the wakeup here, but the ring is not yet reconfigured.
	 * However, we are under lock so there are no races.
	 */
	kring->nr_mode = NKR_NETMAP_ON;
	kring->nm_notify(kring, 0);
	return kring->ring->slot;
}
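
/*
 * Illustrative sketch (hypothetical "foo" driver): how an RX ring
 * initialization routine uses netmap_reset() to detect native netmap
 * mode and reload the NIC descriptors from the netmap buffers.
 * netmap_idx_n2k() and PNMB() translate slot indexes and locate buffer
 * addresses; the foo_* names are assumptions.
 *
 *	static void
 *	foo_netmap_init_rx_ring(struct foo_softc *sc, u_int ring_nr)
 *	{
 *		struct netmap_adapter *na = NA(sc->ifp);
 *		struct netmap_slot *slot = netmap_reset(na, NR_RX, ring_nr, 0);
 *		u_int j;
 *
 *		if (slot == NULL)	// ring not in native netmap mode
 *			return;
 *		for (j = 0; j < na->num_rx_desc; j++) {
 *			u_int sj = netmap_idx_n2k(na->rx_rings[ring_nr], j);
 *			uint64_t paddr;
 *			void *addr = PNMB(na, slot + sj, &paddr);
 *
 *			// program RX descriptor j with paddr
 *			// (addr is used e.g. to (re)load the DMA map)
 *		}
 *	}
 */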

/*
 * Dispatch rx/tx interrupts to the netmap rings.
 *
 * "work_done" is non-null on the RX path, NULL for the TX path.
 * We rely on the OS to make sure that there is only one active
 * instance per queue, and that there is appropriate locking.
 *
 * The 'notify' routine depends on what the ring is attached to.
 * - for a netmap file descriptor, do a selwakeup on the individual
 *   waitqueue, plus one on the global one if needed
 *   (see netmap_notify)
 * - for a nic connected to a switch, call the proper forwarding routine
 *   (see netmap_bwrap_intr_notify)
 */
int
netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done)
{
	struct netmap_kring *kring;
	enum txrx t = (work_done ? NR_RX : NR_TX);

	q &= NETMAP_RING_MASK;

	if (netmap_debug & (NM_DEBUG_RXINTR|NM_DEBUG_TXINTR)) {
		nm_prlim(5, "received %s queue %d", work_done ? "RX" : "TX", q);
	}

	if (q >= nma_get_nrings(na, t))
		return NM_IRQ_PASS; // not a physical queue

	kring = NMR(na, t)[q];

	if (kring->nr_mode == NKR_NETMAP_OFF) {
		return NM_IRQ_PASS;
	}

	if (t == NR_RX) {
		kring->nr_kflags |= NKR_PENDINTR;	// XXX atomic ?
		*work_done = 1; /* do not fire napi again */
	}

	return kring->nm_notify(kring, 0);
}


/*
 * Default functions to handle rx/tx interrupts from a physical device.
 * "work_done" is non-null on the RX path, NULL for the TX path.
 *
 * If the card is not in netmap mode, simply return NM_IRQ_PASS,
 * so that the caller proceeds with regular processing.
 * Otherwise call netmap_common_irq().
 *
 * If the card is connected to a netmap file descriptor,
 * do a selwakeup on the individual queue, plus one on the global one
 * if needed (multiqueue card _and_ there are multiqueue listeners),
 * and return NM_IRQ_COMPLETED.
 *
 * Finally, if called on rx from an interface connected to a switch,
 * calls the proper forwarding routine.
 */
int
netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
{
	struct netmap_adapter *na = NA(ifp);

	/*
	 * XXX emulated netmap mode sets NAF_SKIP_INTR so
	 * we still use the regular driver even though the previous
	 * check fails. It is unclear whether we should use
	 * nm_native_on() here.
	 */
	if (!nm_netmap_on(na))
		return NM_IRQ_PASS;

	if (na->na_flags & NAF_SKIP_INTR) {
		nm_prdis("use regular interrupt");
		return NM_IRQ_PASS;
	}

	return netmap_common_irq(na, q, work_done);
}
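
/*
 * Illustrative sketch (hypothetical "foo" driver): the RX interrupt (or
 * NAPI poll) handler gives netmap the first chance to handle the event
 * and falls back to the regular datapath only if NM_IRQ_PASS is returned.
 *
 *	static void
 *	foo_rxeof(struct foo_rx_ring *rxr)
 *	{
 *		u_int work_done = 0;
 *
 *		if (netmap_rx_irq(rxr->sc->ifp, rxr->me, &work_done) !=
 *		    NM_IRQ_PASS)
 *			return;		// netmap consumed the interrupt
 *
 *		// regular mbuf-based receive processing follows
 *	}
 */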

/* set/clear native flags and if_transmit/netdev_ops */
void
nm_set_native_flags(struct netmap_adapter *na)
{
	struct ifnet *ifp = na->ifp;

	/* We do the setup for intercepting packets only if we are the
	 * first user of this adapter. */
	if (na->active_fds > 0) {
		return;
	}

	na->na_flags |= NAF_NETMAP_ON;
	nm_os_onenter(ifp);
	nm_update_hostrings_mode(na);
}

void
nm_clear_native_flags(struct netmap_adapter *na)
{
	struct ifnet *ifp = na->ifp;

	/* We undo the setup for intercepting packets only if we are the
	 * last user of this adapter. */
	if (na->active_fds > 0) {
		return;
	}

	nm_update_hostrings_mode(na);
	nm_os_onexit(ifp);

	na->na_flags &= ~NAF_NETMAP_ON;
}

void
netmap_krings_mode_commit(struct netmap_adapter *na, int onoff)
{
	enum txrx t;

	for_rx_tx(t) {
		int i;

		for (i = 0; i < netmap_real_rings(na, t); i++) {
			struct netmap_kring *kring = NMR(na, t)[i];

			if (onoff && nm_kring_pending_on(kring))
				kring->nr_mode = NKR_NETMAP_ON;
			else if (!onoff && nm_kring_pending_off(kring))
				kring->nr_mode = NKR_NETMAP_OFF;
		}
	}
}

/*
 * Module loader and unloader
 *
 * netmap_init() creates the /dev/netmap device and initializes
 * all global variables. Returns 0 on success, errno on failure
 * (in practice this should never fail).
 *
 * netmap_fini() destroys everything.
 */

static struct cdev *netmap_dev; /* /dev/netmap character device. */
extern struct cdevsw netmap_cdevsw;


void
netmap_fini(void)
{
	if (netmap_dev)
		destroy_dev(netmap_dev);
	/* we assume that there are no longer netmap users */
	nm_os_ifnet_fini();
	netmap_uninit_bridges();
	netmap_mem_fini();
	NMG_LOCK_DESTROY();
	nm_prinf("netmap: unloaded module.");
}


int
netmap_init(void)
{
	int error;

	NMG_LOCK_INIT();

	error = netmap_mem_init();
	if (error != 0)
		goto fail;
	/*
	 * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
	 * when the module is compiled in.
	 * XXX could use make_dev_credv() to get error number
	 */
	netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
		&netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
		"netmap");
	if (!netmap_dev)
		goto fail;

	error = netmap_init_bridges();
	if (error)
		goto fail;

#ifdef __FreeBSD__
	nm_os_vi_init_index();
#endif

	error = nm_os_ifnet_init();
	if (error)
		goto fail;

	nm_prinf("netmap: loaded module");
	return (0);
fail:
	netmap_fini();
	return (EINVAL); /* may be incorrect */
}
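
/*
 * Illustrative sketch of the OS-specific module glue that would call the
 * two functions above (on FreeBSD this lives in the OS support file; the
 * handler name below is only an example):
 *
 *	static int
 *	netmap_loader(struct module *m, int event, void *arg)
 *	{
 *		int error = 0;
 *
 *		switch (event) {
 *		case MOD_LOAD:
 *			error = netmap_init();
 *			break;
 *		case MOD_UNLOAD:
 *			netmap_fini();
 *			break;
 *		default:
 *			error = EOPNOTSUPP;
 *			break;
 *		}
 *		return (error);
 *	}
 *
 *	DEV_MODULE(netmap, netmap_loader, NULL);
 */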