/linux/include/net/ |
H A D | inet_timewait_sock.h | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
RCU was added to UDP lookups using a fast infrastructure:
- the sockets kmem_cache uses SLAB_DESTROY_BY_RCU, so we don't pay the price of call_rcu() at freeing time;
- hlist_nulls lets lookups get by with few memory barriers.
This patch uses the same infrastructure for TCP/DCCP established and timewait sockets.
Thanks to SLAB_DESTROY_BY_RCU, there is no slowdown for applications using short-lived TCP connections. A follow-up patch, converting rwlocks to spinlocks, will speed this case up even further.
__inet_lookup_established() is pretty fast now that we don't have to dirty a contended cache line (read_lock/read_unlock).
Only the established and timewait hash tables are converted to RCU; the bind and listen tables still use traditional locking.
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
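The hlist_nulls idea the commit relies on can be sketched in plain userspace C. This is a simplified model, not the kernel's <linux/list_nulls.h> API: each chain ends in a distinguished odd "nulls" value encoding the bucket index instead of NULL. Because SLAB_DESTROY_BY_RCU lets a freed object be reused in place, a lockless reader can be walked onto another chain mid-traversal; it detects this when the terminating nulls value does not match the bucket it started in, and restarts. All names below are illustrative.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

struct node { uintptr_t next; int key; };

/* Encode bucket index b as a "nulls" marker: an odd value that can
 * never be a real (aligned) pointer. */
#define NULLS(b)      ((uintptr_t)(((b) << 1) | 1))
#define IS_NULLS(p)   ((p) & 1)
#define NULLS_VAL(p)  ((p) >> 1)

#define NBUCKETS 4
static uintptr_t table[NBUCKETS];

static void table_init(void) {
    for (unsigned b = 0; b < NBUCKETS; b++)
        table[b] = NULLS(b);        /* empty chain ends in its own marker */
}

static void insert(unsigned b, struct node *n) {
    n->next = table[b];             /* push at head */
    table[b] = (uintptr_t)n;
}

/* Lockless-style lookup: walk bucket b.  If the walk ends at a nulls
 * marker for a *different* bucket, the reader was moved onto another
 * chain by concurrent reuse of a node, so it must restart. */
static struct node *lookup(unsigned b, int key) {
restart:
    for (uintptr_t p = table[b]; ; ) {
        if (IS_NULLS(p)) {
            if (NULLS_VAL(p) != b)
                goto restart;       /* walked off into another bucket */
            return NULL;            /* genuine end of the right chain */
        }
        struct node *n = (struct node *)p;
        if (n->key == key)
            return n;
        p = n->next;
    }
}
```

The restart check is what makes the scheme safe without call_rcu(): a single end-of-list comparison replaces per-node memory barriers, which is why the commit message can claim few barriers and no freeing-time cost.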
H A D | inet_hashtables.h | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
/linux/net/ipv4/ |
H A D | inet_timewait_sock.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | inet_diag.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | inet_hashtables.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | tcp_ipv4.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | tcp.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
/linux/net/ipv6/ |
H A D | inet6_hashtables.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | tcp_ipv6.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
/linux/net/dccp/ |
H A D | proto.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | ipv6.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
H A D | ipv4.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|
/linux/net/core/ |
H A D | sock.c | diff 3ab5aee7fe840b5b1b35a8d1ac11c3de5281e611 Mon Nov 17 04:40:17 CET 2008 Eric Dumazet <dada1@cosmosbay.com> net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls
|