Searched hist:ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 (Results 1 – 8 of 8) sorted by relevance
/linux/include/trace/events/ |
H A D | sock.h | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
For the sake of tcp_poll(), there are a few places where we fetch sk->sk_wmem_queued while this field can change from IRQ context or from another CPU.
We need to add READ_ONCE() annotations, and also make sure write sides use corresponding WRITE_ONCE() to avoid store-tearing.
sk_wmem_queued_add() helper is added so that we can in the future convert to ADD_ONCE() or equivalent if/when available.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
/linux/net/sched/ |
H A D | em_meta.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
/linux/net/core/ |
H A D | datagram.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
H A D | sock.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
/linux/net/ipv4/ |
H A D | inet_diag.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
H A D | tcp_output.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
H A D | tcp.c | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads
/linux/include/net/ |
H A D | sock.h | diff ab4e846a82d0ae00176de19f2db3c5c64f8eb5f2 Fri Oct 11 05:17:46 CEST 2019 Eric Dumazet <edumazet@google.com> tcp: annotate sk->sk_wmem_queued lockless reads