#
764a768e |
| 09-Aug-2015 |
Baptiste Daroussin <bapt@FreeBSD.org> |
Merge from HEAD
|
#
e362cf0e |
| 08-Aug-2015 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
MFP r274553:
* Move lle creation/deletion from lla_lookup to separate functions:
  lla_lookup(LLE_CREATE) -> lla_create
  lla_lookup(LLE_DELETE) -> lla_delete
  lla_create now returns with an LLE_EXCLUSIVE lock held on the lle.
* Provide typedefs for new/existing lltable callbacks.
Reviewed by: ae
|
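The lla_create/lla_delete split described in the commit above can be sketched as a small userspace toy model. This is not the kernel API: the single-bucket list, the `exclusively_locked` flag standing in for the LLE_EXCLUSIVE lock, and all names here are illustrative only.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of an lle (link-layer entry). The real struct llentry and
 * its rwlocks are far richer; a flag stands in for LLE_EXCLUSIVE. */
struct lle {
    unsigned addr;          /* lookup key */
    int exclusively_locked; /* models "returned with LLE_EXCLUSIVE held" */
    struct lle *next;
};

static struct lle *table;   /* single-bucket "hash table" for the sketch */

/* lla_lookup() now only looks up: no LLE_CREATE/LLE_DELETE flags. */
static struct lle *lla_lookup(unsigned addr) {
    for (struct lle *p = table; p != NULL; p = p->next)
        if (p->addr == addr)
            return p;
    return NULL;
}

/* lla_create(): the former lla_lookup(LLE_CREATE) path. Returns the
 * new entry "exclusively locked", as the commit message describes. */
static struct lle *lla_create(unsigned addr) {
    struct lle *p = calloc(1, sizeof(*p));
    if (p == NULL)
        return NULL;
    p->addr = addr;
    p->exclusively_locked = 1;  /* caller must unlock when done */
    p->next = table;
    table = p;
    return p;
}

/* lla_delete(): the former lla_lookup(LLE_DELETE) path. */
static int lla_delete(unsigned addr) {
    for (struct lle **pp = &table; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->addr == addr) {
            struct lle *dead = *pp;
            *pp = dead->next;
            free(dead);
            return 0;
        }
    }
    return -1; /* not found */
}
```

Splitting the flag-driven lla_lookup() into three single-purpose functions lets each caller state its intent and lets create/delete take stronger locks without penalizing plain lookups.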
#
3a749863 |
| 05-Jan-2015 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Allocate hash tables separately.
* Make the llt_hash() callback more flexible.
* Default hash size and hashing method are now per-AF.
* Move lltable allocation to a separate function.
|
#
b44a7d5d |
| 03-Jan-2015 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Use unified code for deleting an entry by sockaddr instead of per-AF code.
* Remove the now-unused llt_delete_addr callback.
|
#
20dd8995 |
| 03-Jan-2015 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Hide lltable implementation details in if_llatbl_var.h.
* Make most of the lltable_* methods 'normal' functions instead of inlines.
* Add lltable_get_<af|ifp>() functions to access the given lltable fields.
* Temporarily resurrect the nd6_lookup() function.
|
#
d82ed505 |
| 09-Dec-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Simplify lle lookup/create api by using addresses instead of sockaddrs.
|
#
73b52ad8 |
| 08-Dec-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Use llt_prepare_static_entry method to prepare valid per-af static entry.
|
#
0368226e |
| 08-Dec-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Retire the abstract llentry_free() in favor of lltable_drop_entry_queue() and explicit calls to RTENTRY_FREE_LOCKED().
* Use lltable_prefix_free() in arp_ifscrub to be consistent with nd6.
* Rename the <lltable_|llt>_delete function to _delete_addr() to note that this function is used by external callers. Make this function maintain its own locking.
* Use the lookup/unlink/clear call chain from internal callers instead of delete_addr.
* Fix LLE_DELETED flag handling.
|
#
721cd2e0 |
| 07-Dec-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Do not enforce a particular lle storage scheme:
* move lltable allocation to per-domain callbacks.
* make the llentry_link/unlink functions overridable llt methods.
* make hash table traversal another overridable llt method.
|
#
a743ccd4 |
| 07-Dec-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Add llt_clear_entry() callback which is able to do all lle cleanup, including unlinking/freeing.
* Relax locking in lltable_prefix_free_af/lltable_free.
* Do not pass @llt to the lle free callback: it is always NULL now.
* Unify arptimer/nd6_llinfo_timer: explicitly unlock the lle, avoiding unlock/lock sequences.
* Do not pass an unlocked lle to nd6_ns_output(): add nd6_llinfo_get_holdsrc() to retrieve the preferred source address from the lle hold queue and pass it instead of the lle.
* Finally, make nd6_create() create and return an unlocked lle.
* Separate defrtr handling code from nd6_free(): use nd6_check_del_defrtr() to check if we need to keep the entry instead of performing GC, and use nd6_check_recalc_defrtr() to perform the actual recalc on lle removal.
* Move isRouter handling from nd6_cache_lladdr() to a separate nd6_check_router().
* Add initial code to keep lle runtime flags in sync.
|
#
ce313fdd |
| 30-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Unify lle table dump/prefix removal code.
* Rename lla_XXX -> lltable_XXX_lle to reduce the number of name prefixes used by lltable code.
|
#
73d77028 |
| 23-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Do more fine-grained lltable locking: take the table runtime lock as rarely as we can.
|
#
9479029b |
| 23-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Add the lltable llt_hash callback.
* Move lltable item insertions/deletions to generic llt code.
|
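The pattern from this commit — generic insertion/lookup code in the common llt layer calling a per-table llt_hash callback — can be sketched as a standalone toy. The names toy_lltable/toy_lle and the multiplicative hash are invented for illustration; the real lltable carries locking and per-AF state this omits.

```c
#include <assert.h>
#include <stddef.h>

#define LLT_HASHSIZE 32

struct toy_lle {
    unsigned key;
    struct toy_lle *next;
};

/* Each table supplies its own hash callback, as in the commit. */
struct toy_lltable {
    struct toy_lle *buckets[LLT_HASHSIZE];
    unsigned (*llt_hash)(unsigned key, unsigned hsize);
};

/* Generic insertion lives in common llt code and only calls the callback. */
static void llt_link_entry(struct toy_lltable *llt, struct toy_lle *lle) {
    unsigned h = llt->llt_hash(lle->key, LLT_HASHSIZE);
    lle->next = llt->buckets[h];
    llt->buckets[h] = lle;
}

/* Generic lookup: also AF-agnostic thanks to the callback. */
static struct toy_lle *llt_find_entry(struct toy_lltable *llt, unsigned key) {
    unsigned h = llt->llt_hash(key, LLT_HASHSIZE);
    for (struct toy_lle *p = llt->buckets[h]; p != NULL; p = p->next)
        if (p->key == key)
            return p;
    return NULL;
}

/* An "IPv4-style" hash: any per-AF scheme can be plugged in. */
static unsigned toy_in_hash(unsigned key, unsigned hsize) {
    return (key * 2654435761u) % hsize;
}
```

Moving insert/delete into generic code means each address family only has to provide the hash function, not reimplement the bucket bookkeeping.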
#
27688dfe |
| 22-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Temporarily revert r274774.
|
#
2e47d2f9 |
| 22-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Mark ifaddr/rtsock static entries RLLE_VALID.
|
#
9883e41b |
| 21-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Switch IF_AFDATA lock to rmlock
|
#
df629abf |
| 16-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
Rework LLE code locking:
* struct llentry is now basically split into 2 pieces: all fields within the first 64 bytes (amd64) are protected by both the ifdata lock AND the lle lock, i.e. both locks must be held exclusively for modification. All data necessary for fast-path operations is kept here. Some fields were added:
  - r_l3addr - makes the lookup key lie within the first 64 bytes.
  - r_flags - flags containing a pre-compiled decision whether the given lle contains usable data or not. Currently the only flag is RLLE_VALID.
  - r_len - prepend data length, currently unused.
  - r_kick - used to provide feedback to the control plane (see below).
  All other fields are protected by the lle lock.
* Add a simple state machine for ARP to handle the "about to expire" case.
  The current model (for the fast path) is the following:
  - rlock afdata
  - find / rlock rte
  - runlock afdata
  - see if the "expire time" is approaching (time_uptime + la->la_preempt > la->la_expire)
  - if true, call arprequest() and decrease la_preempt
  - store MAC and runlock rte
  New model (data plane):
  - rlock afdata
  - find rte
  - check if it can be used using the r_* fields only
  - if true, store MAC
  - if the r_kick field != 0, set it to 0
  - runlock afdata
  New model (control plane):
  - schedule arptimer to be called in (V_arpt_keep - V_arp_maxtries) seconds instead of V_arpt_keep.
  - on the first timer invocation, change state from ARP_LLINFO_REACHABLE to ARP_LLINFO_VERIFY, set r_kick to 1 and schedule the next call in V_arpt_rexmit (defaults to 1 sec).
  - on subsequent timer invocations in the ARP_LLINFO_VERIFY state, check the r_kick value: reschedule if not changed, and send arprequest() if set to zero (e.g. the entry was used).
* Convert the IPv4 path to use the new single-lock approach. IPv6 bits to follow.
* Slow down in_arpinput(): a valid reply will now (in most cases) require acquiring the afdata WLOCK twice. This is a requirement for storing changed lle data. This change will be slightly optimized in the future.
* Provide explicit hash link/unlink functions for both the IPv4 and IPv6 code. This will probably be moved to generic lle code once we have a per-AF hashing callback inside lltable.
* Perform lle unlink on deletion immediately instead of delaying it to the timer routine.
* Make r244183 more explicit: use the new LLE_CALLOUTREF flag to indicate the presence of the lle reference used for safe callout calls.
|
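The r_flags/r_kick interplay described in the commit above can be sketched as a toy model. All names here (toy_rlle, toy_lle_try_use, toy_verify_timer) are invented; locking, the real timer machinery, and the actual arprequest() are omitted, so this only models the feedback bit between data plane and control plane.

```c
#include <assert.h>

/* Toy model of the fast-path fields: r_flags carries the pre-compiled
 * "usable" decision (RLLE_VALID) and r_kick is the feedback bit that the
 * control-plane timer sets and the data plane clears on use. */
#define RLLE_VALID 0x1

struct toy_rlle {
    int r_flags;              /* RLLE_VALID when the cached lladdr is usable */
    int r_kick;               /* set by timer in VERIFY state; cleared on use */
    unsigned char r_lladdr[6];
};

/* Data-plane step: decide from the r_* fields only, copy the MAC, and
 * clear r_kick to signal the timer that the entry is in active use.
 * Returns 1 on success, 0 if the entry is unusable. */
static int toy_lle_try_use(struct toy_rlle *lle, unsigned char mac[6]) {
    if ((lle->r_flags & RLLE_VALID) == 0)
        return 0;
    for (int i = 0; i < 6; i++)
        mac[i] = lle->r_lladdr[i];
    if (lle->r_kick != 0)
        lle->r_kick = 0;
    return 1;
}

/* Control-plane step in the VERIFY state: if the data plane cleared
 * r_kick, the entry was used, so refresh it (the caller would send an
 * arprequest() here) and re-arm the bit; if r_kick is unchanged, the
 * entry went unused and the timer is simply rescheduled.
 * Returns 1 when a probe would be needed. */
static int toy_verify_timer(struct toy_rlle *lle) {
    if (lle->r_kick == 0) {
        lle->r_kick = 1;  /* re-arm for the next interval */
        return 1;         /* used -> probe to keep it fresh */
    }
    return 0;             /* unused -> just reschedule */
}
```

The point of the scheme is that the fast path never takes the per-lle lock for a usable entry: one read of r_flags and one conditional store to r_kick are enough to both use the entry and report the use.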
#
b4b1367a |
| 15-Nov-2014 |
Alexander V. Chernikov <melifaro@FreeBSD.org> |
* Move lle creation/deletion from lla_lookup to separate functions:
  lla_lookup(LLE_CREATE) -> lla_create
  lla_lookup(LLE_DELETE) -> lla_delete
  Assume lla_create returns with an LLE_EXCLUSIVE lock held on the lle.
* Rework lla_rt_output to perform all lle changes under the afdata WLOCK.
* Change arp_ifscrub() to acquire the afdata WLOCK, the same as arp_ifinit().
|
Revision tags: release/10.1.0, release/9.3.0, release/10.0.0, release/9.2.0 |
|
#
d1d01586 |
| 05-Sep-2013 |
Simon J. Gerraty <sjg@FreeBSD.org> |
Merge from head
|
#
cfe30d02 |
| 19-Jun-2013 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Merge fresh head.
|
Revision tags: release/8.4.0 |
|
#
f89d4c3a |
| 06-May-2013 |
Andre Oppermann <andre@FreeBSD.org> |
Back out r249318, r249320 and r249327 due to a heisenbug most likely related to a race condition in the ipi_hash_lock with the exact cause currently unknown but under investigation.
|
#
69e6d7b7 |
| 12-Apr-2013 |
Simon J. Gerraty <sjg@FreeBSD.org> |
sync from head
|
#
e8b3186b |
| 09-Apr-2013 |
Andre Oppermann <andre@FreeBSD.org> |
Change certain heavily used network-related mutexes and rwlocks to reside on their own cache line to prevent false sharing with other nearby structures, especially for those in the .bss segment.

NB: Those mutexes and rwlocks with variables next to them that get changed on every invocation do not benefit from their own cache line. Actually it may be a net negative, because two cache misses would be incurred in those cases.
|
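The cache-line isolation described in that commit can be sketched with C11 alignment. This is a generic illustration, not the kernel's macro; it assumes a 64-byte cache line (common on x86) and uses a plain int as a stand-in for a mutex.

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

#define CACHE_LINE_SIZE 64  /* assumption: typical x86 line size */

/* A stand-in for a heavily used lock: aligning and padding it to a full
 * cache line keeps unrelated hot .bss variables from sharing (and thus
 * bouncing) that line between CPUs. Per the commit's NB, this helps only
 * when the neighbors are NOT written on every invocation anyway. */
struct padded_lock {
    alignas(CACHE_LINE_SIZE) int lock_word;
    char pad[CACHE_LINE_SIZE - sizeof(int)];
};

/* Two adjacent padded locks land on distinct cache lines. */
static struct padded_lock locks[2];
```

With the padding, sizeof(struct padded_lock) is a whole multiple of the line size, so array elements and adjacent globals can never straddle a lock's line.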
#
d241a0e6 |
| 26-Feb-2013 |
Xin LI <delphij@FreeBSD.org> |
IFC @247348.
|
#
d9a44755 |
| 08-Feb-2013 |
David E. O'Brien <obrien@FreeBSD.org> |
Sync with HEAD.
|