# 765ad4f0 | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpcsec_tls: cleanup the rpctls_syscall()
With all the recent changes, we no longer need the extra argument that specifies what exactly the syscall does, nor do we need a copyout-able pointer; a pointer-sized integer is enough.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48649
# 8e5f80da | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpc.tlsservd: provide parallelism with help of pthread(3)
At normal NFS server runtime there is not much RPC traffic from the kernel to rpc.tlsservd. But as Rick rmacklem@ explained, the notion of multiple workers exists to handle the situation when a server reboots while it has several hundred or thousands of TLS/TCP connections from clients. Once it comes back up, all the clients make TCP connections and do TLS handshakes.
So clean up the remnants of the workers left over after the conversion of RPC to the netlink(4) transport, and restore the desired parallelism with the help of pthread(3).
We process the TLS handshakes in separate threads, one per handshake. The number of concurrent threads is capped at hw.ncpu / 2, but this can be overridden with -N.
Differential Revision: https://reviews.freebsd.org/D48570
# af805255 | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpcsec_tls/server: API refactoring between kernel and rpc.tlsservd(8)
Now that the conversion of rpcsec_tls/client + rpc.tlsclntd(8) to the netlink(4) socket as RPC transport has started using the kernel socket pointer as a reliable cookie, we can shave off quite a lot of complexity. We will utilize the same kernel-generated cookie in all RPCs, and the need for the daemon-generated cookie in the form of timestamp+sequence vanishes.
We also stop passing the notion of 'process position' from userland to the kernel. The TLS handshake parallelism is to be reimplemented in the daemon without any awareness of it in the kernel.
This time, bump the RPC version.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48566
# 56a96c51 | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpcsec_tls/client: API refactoring between kernel and rpc.tlsclntd(8)
Now that the conversion of rpcsec_tls/client + rpc.tlsclntd(8) to the netlink(4) socket as RPC transport has started using the kernel socket pointer as a reliable cookie, we can shave off quite a lot of complexity. We will utilize the same kernel-generated cookie in all RPCs, and the need for the daemon-generated cookie in the form of timestamp+sequence vanishes.
In clnt_vc.c we no longer need to store the userland cookie, but we still need to observe the TLS life cycle of the client. We observe the RPCTLS_INHANDSHAKE state, which lives for a short time when the socket has already been fetched by the daemon with the syscall, but the RPC call is still waiting for the reply from the daemon.
This time, bump the RPC version.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48564
# 42eec520 | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpcsec_tls/server: use netlink RPC client to talk to rpc.tlsservd(8)
The server part just repeats what had been done for the client. We trust the parallelism of clnt_nl, and we pass the socket cookie to the daemon, which we then expect to see in rpctls_syscall(RPCTLS_SYSC_SRVSOCKET) to find the corresponding socket+xprt. We reuse the same database that is used for clients.
Note 1: this will be optimized further in a separate commit. This one is made intentionally minimal, to ease the review process.
Note 2: this change intentionally ignores the aspect of multiple workers of rpc.tlsservd(8). This will also be addressed in a future commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48561
# a3a6dc24 | 01-Feb-2025 | Gleb Smirnoff <glebius@FreeBSD.org>
rpcsec_tls/client: use netlink RPC client to talk to rpc.tlsclntd(8)
In addition to using a netlink(4) socket instead of unix(4) to pass rpctlscd_* RPC commands to rpc.tlsclntd(8), the logic of passing the file descriptor is also changed. Since clnt_nl provides all the needed parallelism and waits on individual RPC xids, we don't need to store the socket in a global variable and serialize all communication to the daemon. Instead, we augment the rpctlscd_connect arguments with a cookie that is basically a pointer to the socket, which we keep for the daemon. While sleeping on the request, we store a database of all sockets associated with rpctlscd_connect RPCs that we have sent to userland. The daemon then sends the cookie back in the rpctls_syscall(RPCTLS_SYSC_CLSOCKET) argument, and we find and return the socket for this upcall.
This will be optimized further in a separate commit that will also touch clnt_vc.c and other krpc files. This commit is intentionally made minimal, so that it is easier to understand what changes with the netlink(4) transport.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48559
Revision tags: release/14.1.0-p7, release/14.2.0-p1, release/13.4.0-p3, release/14.2.0, release/13.4.0, release/14.1.0, release/13.3.0, release/14.0.0
# 1a878807 | 02-Nov-2023 | Rick Macklem <rmacklem@FreeBSD.org>
krpc: Display stats of TLS usage
This patch adds some sysctls:
kern.rpc.unenc.tx_msgcnt
kern.rpc.unenc.tx_msgbytes
kern.rpc.unenc.rx_msgcnt
kern.rpc.unenc.rx_msgbytes
kern.rpc.tls.tx_msgcnt
kern.rpc.tls.tx_msgbytes
kern.rpc.tls.rx_msgcnt
kern.rpc.tls.rx_msgbytes
kern.rpc.tls.handshake_success
kern.rpc.tls.handshake_failed
kern.rpc.tls.alerts
which allow an NFS server sysadmin to determine how much NFS-over-TLS is being used. A large number of failed handshakes might also indicate an NFS configuration problem.
This patch moves the definition of "kern.rpc" from the kgssapi module to the krpc module. As such, both modules need to be rebuilt from sources. Since __FreeBSD_version was bumped yesterday, I will not bump it again.
Suggested by: gwollman
Discussed on: freebsd-current
MFC after: 1 month
# 95ee2897 | 16-Aug-2023 | Warner Losh <imp@FreeBSD.org>
sys: Remove $FreeBSD$: two-line .h pattern
Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/
# 4d846d26 | 10-May-2023 | Warner Losh <imp@FreeBSD.org>
spdx: The BSD-2-Clause-FreeBSD identifier is obsolete, drop -FreeBSD
The SPDX folks have obsoleted the BSD-2-Clause-FreeBSD identifier. Catch up to that fact and revert to their recommended match of BSD-2-Clause.
Discussed with: pfg
MFC After: 3 days
Sponsored by: Netflix
Revision tags: release/13.2.0
# ef6fcc5e | 20-Feb-2023 | Rick Macklem <rmacklem@FreeBSD.org>
nfsd: Add VNET_SYSUNINIT() macros for vnet cleanup
Commit ed03776ca7f4 enabled the vnet front end macros. As such, kernels built with the VIMAGE option will malloc data and initialize locks on a per-vnet basis, typically via a VNET_SYSINIT().
This patch adds VNET_SYSUNINIT() macros to free the per-vnet malloc'd data and destroy the per-vnet locks. It also removes the mtx_lock/mtx_unlock calls from nfsrvd_cleancache(), since they are not needed.
Discussed with: bz, jamie
MFC after: 3 months
# ed03776c | 18-Feb-2023 | Rick Macklem <rmacklem@FreeBSD.org>
nfsd: Enable the NFSD_VNET vnet front end macros
Several commits have added front end macros for the vnet macros to the NFS server, krpc and kgssapi. These macros are now null, but this patch changes them to front end the vnet macros.
With this commit, many global variables in the code become vnet'd, so that nfsd(8), nfsuserd(8), rpc.tlsservd(8) and gssd(8) can run in a vnet prison, once enabled. To run the NFS server in a vnet prison still requires a couple of patches (in D37741 and D38371) that allow mountd(8) to export file systems from within a vnet prison. Once these are committed to main, a small patch to kern_jail.c allowing "allow.nfsd" without VNET_NFSD defined will allow the NFS server to run in a vnet prison.
One area that still needs to be settled is cleanup when a prison is removed. Without this, everything should work except there will be a leak of malloc'd data and mutex locks when a vnet prison is removed.
MFC after: 3 months
# 6444662a | 15-Feb-2023 | Rick Macklem <rmacklem@FreeBSD.org>
krpc: Add macros so that rpc.tlsservd can run in vnet prison
Commit 7344856e3a6d added a lot of macros that front end the vnet macros so that nfsd(8) can run in a vnet prison. This patch adds similar macros named KRPC_VNETxxx so that the rpc.tlsservd(8) daemon can run in a vnet prison, once the macros front end the vnet ones. For now, they are null macros.
MFC after: 3 months
Revision tags: release/12.4.0
# 564ed8e8 | 22-Aug-2022 | Rick Macklem <rmacklem@FreeBSD.org>
nfsd: Allow multiple instances of rpc.tlsservd
During a discussion with someone working on NFS-over-TLS for a non-FreeBSD platform, we agreed that a single server daemon for TLS handshakes could become a bottleneck when an NFS server first boots, if many concurrent NFS-over-TLS connections are attempted.
This patch modifies the kernel RPC code so that it can handle multiple rpc.tlsservd daemons. A separate commit for the rpc.tlsservd daemon is currently under review as D35886.
Revision tags: release/13.1.0, release/12.3.0, release/13.0.0
# 665b1365 | 22-Dec-2020 | Rick Macklem <rmacklem@FreeBSD.org>
Add a new "tlscertname" NFS mount option.
When using NFS-over-TLS, an NFS client can optionally provide an X.509 certificate to the server during the TLS handshake. For some situations, such as different NFS servers or different certificates being mapped to different user credentials on the NFS server, there may be a need for different mounts to provide different certificates.
This new mount option, "tlscertname", may be used to specify that a non-default certificate be provided. This alternate certificate will be stored in /etc/rpc.tlsclntd in a file with a name based on what is provided by this mount option.
Revision tags: release/12.2.0
# e2515283 | 27-Aug-2020 | Glen Barber <gjb@FreeBSD.org>
MFH
Sponsored by: Rubicon Communications, LLC (netgate.com)
# ab0c29af | 22-Aug-2020 | Rick Macklem <rmacklem@FreeBSD.org>
Add TLS support to the kernel RPC.
An internet draft titled "Towards Remote Procedure Call Encryption By Default" describes how TLS is to be used for Sun RPC, with NFS as an intended use case. This patch adds client and server support for this to the kernel RPC, using KERN_TLS and upcalls to daemons for the handshake, peer reset and other non-application data record cases.
The upcalls to the daemons use three fields to uniquely identify the TCP connection: the time.tv_sec and time.tv_usec of the connection establishment, plus a 64-bit sequence number. The time fields avoid problems with re-use of the sequence number after a daemon restart.
For the server side, once a Null RPC with AUTH_TLS is received, kernel reception on the socket is blocked and an upcall to the rpctlssd(8) daemon is done to perform the TLS handshake. Upon completion, the status of the handshake is stored in xp_tls as flag bits and the reply to the Null RPC is sent.
For the client, if CLSET_TLS has been set, a new TCP connection will send the Null RPC with AUTH_TLS to initiate the handshake. The client kernel RPC code will then block kernel I/O on the socket and do an upcall to the rpctlscd(8) daemon to perform the handshake. If the upcall is successful, ct_rcvstate will be maintained to indicate if/when an upcall is being done.
If non-application data records are received, the code does an upcall to the appropriate daemon, which will do a SSL_read() of 0 length to handle the record(s).
When the socket is being shut down, upcalls are done to the daemons, so that they can perform SSL_shutdown() calls to perform the "peer reset".
The rpctlssd(8) and rpctlscd(8) daemons require a patched version of the openssl library and, as such, will not be committed to head at this time.
Although the changes done by this patch are fairly numerous, there should be no semantics change to the kernel RPC at this time. A future commit to the NFS code will optionally enable use of TLS for NFS.
Revision tags: release/11.4.0
# c19cba61 | 31-May-2020 | Rick Macklem <rmacklem@FreeBSD.org>
Add the .h file that describes the operations for the rpctls_syscall.
This .h file will be used by the nfs-over-tls daemons to do the system call that was added by r361599.