1.\" 2.\" Copyright (C) 2018 Matthew Macy <mmacy@FreeBSD.org>. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice(s), this list of conditions and the following disclaimer as 9.\" the first lines of this file unmodified other than the possible 10.\" addition of one or more copyright notices. 11.\" 2. Redistributions in binary form must reproduce the above copyright 12.\" notice(s), this list of conditions and the following disclaimer in the 13.\" documentation and/or other materials provided with the distribution. 14.\" 15.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY 16.\" EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 17.\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 18.\" DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY 19.\" DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 20.\" (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 21.\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 22.\" CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 23.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 24.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH 25.\" DAMAGE. 26.\" 27.\" $FreeBSD$ 28.\" 29.Dd June 25, 2018 30.Dt EPOCH 9 31.Os 32.Sh NAME 33.Nm epoch , 34.Nm epoch_context , 35.Nm epoch_alloc , 36.Nm epoch_free , 37.Nm epoch_enter , 38.Nm epoch_exit , 39.Nm epoch_wait , 40.Nm epoch_call , 41.Nm in_epoch , 42.Nd kernel epoch based reclamation 43.Sh SYNOPSIS 44.In sys/param.h 45.In sys/proc.h 46.In sys/epoch.h 47.Ft epoch_t 48.Fn epoch_alloc "int flags" 49.Ft void 50.Fn epoch_enter "epoch_t epoch" 51.Ft void 52.Fn epoch_enter_preempt "epoch_t epoch" "epoch_tracker_t et" 53.Ft void 54.Fn epoch_exit "epoch_t epoch" 55.Ft void 56.Fn epoch_exit_preempt "epoch_t epoch" "epoch_tracker_t et" 57.Ft void 58.Fn epoch_wait "epoch_t epoch" 59.Ft void 60.Fn epoch_wait_preempt "epoch_t epoch" 61.Ft void 62.Fn epoch_call "epoch_t epoch" "epoch_context_t ctx" "void (*callback) (epoch_context_t)" 63.Ft int 64.Fn in_epoch "epoch_t epoch" 65.Sh DESCRIPTION 66Epochs are used to guarantee liveness and immutability of data by 67deferring reclamation and mutation until a grace period has elapsed. 68Epochs do not have any lock ordering issues. 69Entering and leaving an epoch section will never block. 70.Pp 71Epochs are allocated with 72.Fn epoch_alloc 73and freed with 74.Fn epoch_free . 75The flags passed to epoch_alloc determine whether preemption is 76allowed during a section or not (the default), as specified by 77EPOCH_PREEMPT. 78Threads indicate the start of an epoch critical section by calling 79.Fn epoch_enter . 80The end of a critical section is indicated by calling 81.Fn epoch_exit . 82The _preempt variants can be used around code which requires preemption. 83A thread can wait until a grace period has elapsed 84since any threads have entered 85the epoch by calling 86.Fn epoch_wait 87or 88.Fn epoch_wait_preempt , 89depending on the epoch_type. 
The use of the default epoch type allows one to use
.Fn epoch_wait ,
which is guaranteed to have much shorter completion times, since none of the
threads in an epoch section can be preempted before completing the section.
If the thread cannot sleep, or is otherwise in a performance-sensitive path,
it can instead defer work until a grace period has elapsed by calling
.Fn epoch_call
with a callback that performs any work that must wait for the grace period.
Only non-sleepable locks can be acquired during a section protected by
.Fn epoch_enter_preempt
and
.Fn epoch_exit_preempt .
Under INVARIANTS, code can assert that a thread is in an epoch by using
.Fn in_epoch .
.Pp
The epoch API currently does not support sleeping in epoch_preempt sections.
A caller should never call
.Fn epoch_wait
in the middle of an epoch section for the same epoch, as this will lead to a
deadlock.
.Pp
By default, mutexes cannot be held across
.Fn epoch_wait_preempt .
To permit this, the epoch must be allocated with
.Dv EPOCH_LOCKED .
When doing so, one must be careful not to create a situation in which a
deadlock is possible.
.Pp
Note that epochs are not a straight replacement for read locks.
Callers must use safe list and tailq traversal routines in an epoch (see
ck_queue).
When modifying a list referenced from an epoch section, safe removal routines
must be used and the caller can no longer modify a list entry in place.
An item to be modified must be handled with copy on write,
and frees must be deferred until after a grace period has elapsed.
.Sh RETURN VALUES
.Fn in_epoch
returns 1 if the current thread is in the given epoch, and 0 otherwise.
.Sh CAVEATS
One must be cautious when using
.Fn epoch_wait_preempt .
Threads are pinned during epoch sections, so if a thread in a section is
preempted by a higher priority compute-bound thread on that CPU, it can be
prevented from leaving the section.
Thus the wait time for the waiter is potentially unbounded.
.Sh EXAMPLES
Async free example:
.Pp
Thread 1:
.Bd -literal
int
in_pcbladdr(struct inpcb *inp, struct in_addr *faddr, struct in_addr *laddr,
    struct ucred *cred)
{
	/* ... */
	epoch_enter(net_epoch);
	CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
		sa = ifa->ifa_addr;
		if (sa->sa_family != AF_INET)
			continue;
		sin = (struct sockaddr_in *)sa;
		if (prison_check_ip4(cred, &sin->sin_addr) == 0) {
			ia = (struct in_ifaddr *)ifa;
			break;
		}
	}
	epoch_exit(net_epoch);
	/* ... */
}
.Ed
.Pp
Thread 2:
.Bd -literal
void
ifa_free(struct ifaddr *ifa)
{

	if (refcount_release(&ifa->ifa_refcnt))
		epoch_call(net_epoch, &ifa->ifa_epoch_ctx, ifa_destroy);
}

void
if_purgeaddrs(struct ifnet *ifp)
{

	/* ... */
	IF_ADDR_WLOCK(ifp);
	CK_STAILQ_REMOVE(&ifp->if_addrhead, ifa, ifaddr, ifa_link);
	IF_ADDR_WUNLOCK(ifp);
	ifa_free(ifa);
}
.Ed
.Pp
Thread 1 traverses the ifaddr list in an epoch.
Thread 2 unlinks with the corresponding epoch safe macro, marks the entry as
logically free, and then defers deletion.
More general mutation or a synchronous free would have to follow a call to
.Fn epoch_wait .
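.Pp
The callback passed to
.Fn epoch_call
receives the queued epoch_context_t and is responsible for recovering and
freeing the enclosing object once the grace period has expired.
A sketch of what such a callback might look like for the example above is
shown below; the actual
.Fn ifa_destroy
in the tree may perform additional cleanup.
.Bd -literal
static void
ifa_destroy(epoch_context_t ctx)
{
	struct ifaddr *ifa;

	/* Recover the ifaddr that embeds this epoch context. */
	ifa = __containerof(ctx, struct ifaddr, ifa_epoch_ctx);
	free(ifa, M_IFADDR);
}
.Ed
.Pp
The object must not be freed directly at unlink time, because readers still
inside concurrent epoch sections may continue to reference it until the grace
period has elapsed.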
.Sh ERRORS
None.
.Sh NOTES
The
.Nm
kernel programming interface is under development and is subject to change.
.Sh SEE ALSO
.Xr locking 9 ,
.Xr mtx_pool 9 ,
.Xr mutex 9 ,
.Xr rwlock 9 ,
.Xr sema 9 ,
.Xr sleep 9 ,
.Xr sx 9 ,
.Xr timeout 9