11da177e4SLinus Torvalds /* 21da177e4SLinus Torvalds * INET An implementation of the TCP/IP protocol suite for the LINUX 31da177e4SLinus Torvalds * operating system. INET is implemented using the BSD Socket 41da177e4SLinus Torvalds * interface as the means of communication with the user level. 51da177e4SLinus Torvalds * 61da177e4SLinus Torvalds * Implementation of the Transmission Control Protocol(TCP). 71da177e4SLinus Torvalds * 802c30a84SJesper Juhl * Authors: Ross Biro 91da177e4SLinus Torvalds * Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG> 101da177e4SLinus Torvalds * Mark Evans, <evansmp@uhura.aston.ac.uk> 111da177e4SLinus Torvalds * Corey Minyard <wf-rch!minyard@relay.EU.net> 121da177e4SLinus Torvalds * Florian La Roche, <flla@stud.uni-sb.de> 131da177e4SLinus Torvalds * Charles Hedrick, <hedrick@klinzhai.rutgers.edu> 141da177e4SLinus Torvalds * Linus Torvalds, <torvalds@cs.helsinki.fi> 151da177e4SLinus Torvalds * Alan Cox, <gw4pts@gw4pts.ampr.org> 161da177e4SLinus Torvalds * Matthew Dillon, <dillon@apollo.west.oic.com> 171da177e4SLinus Torvalds * Arnt Gulbrandsen, <agulbra@nvg.unit.no> 181da177e4SLinus Torvalds * Jorge Cwik, <jorge@laser.satlink.net> 191da177e4SLinus Torvalds * 201da177e4SLinus Torvalds * Fixes: 211da177e4SLinus Torvalds * Alan Cox : Numerous verify_area() calls 221da177e4SLinus Torvalds * Alan Cox : Set the ACK bit on a reset 231da177e4SLinus Torvalds * Alan Cox : Stopped it crashing if it closed while 241da177e4SLinus Torvalds * sk->inuse=1 and was trying to connect 251da177e4SLinus Torvalds * (tcp_err()). 261da177e4SLinus Torvalds * Alan Cox : All icmp error handling was broken 271da177e4SLinus Torvalds * pointers passed where wrong and the 281da177e4SLinus Torvalds * socket was looked up backwards. Nobody 291da177e4SLinus Torvalds * tested any icmp error code obviously. 301da177e4SLinus Torvalds * Alan Cox : tcp_err() now handled properly. It 311da177e4SLinus Torvalds * wakes people on errors. poll 321da177e4SLinus Torvalds * behaves and the icmp error race 331da177e4SLinus Torvalds * has gone by moving it into sock.c 341da177e4SLinus Torvalds * Alan Cox : tcp_send_reset() fixed to work for 351da177e4SLinus Torvalds * everything not just packets for 361da177e4SLinus Torvalds * unknown sockets. 371da177e4SLinus Torvalds * Alan Cox : tcp option processing. 381da177e4SLinus Torvalds * Alan Cox : Reset tweaked (still not 100%) [Had 391da177e4SLinus Torvalds * syn rule wrong] 401da177e4SLinus Torvalds * Herp Rosmanith : More reset fixes 411da177e4SLinus Torvalds * Alan Cox : No longer acks invalid rst frames. 421da177e4SLinus Torvalds * Acking any kind of RST is right out. 431da177e4SLinus Torvalds * Alan Cox : Sets an ignore me flag on an rst 441da177e4SLinus Torvalds * receive otherwise odd bits of prattle 451da177e4SLinus Torvalds * escape still 461da177e4SLinus Torvalds * Alan Cox : Fixed another acking RST frame bug. 471da177e4SLinus Torvalds * Should stop LAN workplace lockups. 481da177e4SLinus Torvalds * Alan Cox : Some tidyups using the new skb list 491da177e4SLinus Torvalds * facilities 501da177e4SLinus Torvalds * Alan Cox : sk->keepopen now seems to work 511da177e4SLinus Torvalds * Alan Cox : Pulls options out correctly on accepts 521da177e4SLinus Torvalds * Alan Cox : Fixed assorted sk->rqueue->next errors 531da177e4SLinus Torvalds * Alan Cox : PSH doesn't end a TCP read. Switched a 541da177e4SLinus Torvalds * bit to skb ops. 551da177e4SLinus Torvalds * Alan Cox : Tidied tcp_data to avoid a potential 561da177e4SLinus Torvalds * nasty. 571da177e4SLinus Torvalds * Alan Cox : Added some better commenting, as the 581da177e4SLinus Torvalds * tcp is hard to follow 591da177e4SLinus Torvalds * Alan Cox : Removed incorrect check for 20 * psh 601da177e4SLinus Torvalds * Michael O'Reilly : ack < copied bug fix. 611da177e4SLinus Torvalds * Johannes Stille : Misc tcp fixes (not all in yet). 621da177e4SLinus Torvalds * Alan Cox : FIN with no memory -> CRASH 631da177e4SLinus Torvalds * Alan Cox : Added socket option proto entries. 641da177e4SLinus Torvalds * Also added awareness of them to accept. 651da177e4SLinus Torvalds * Alan Cox : Added TCP options (SOL_TCP) 661da177e4SLinus Torvalds * Alan Cox : Switched wakeup calls to callbacks, 671da177e4SLinus Torvalds * so the kernel can layer network 681da177e4SLinus Torvalds * sockets. 691da177e4SLinus Torvalds * Alan Cox : Use ip_tos/ip_ttl settings. 701da177e4SLinus Torvalds * Alan Cox : Handle FIN (more) properly (we hope). 711da177e4SLinus Torvalds * Alan Cox : RST frames sent on unsynchronised 721da177e4SLinus Torvalds * state ack error. 731da177e4SLinus Torvalds * Alan Cox : Put in missing check for SYN bit. 741da177e4SLinus Torvalds * Alan Cox : Added tcp_select_window() aka NET2E 751da177e4SLinus Torvalds * window non shrink trick. 761da177e4SLinus Torvalds * Alan Cox : Added a couple of small NET2E timer 771da177e4SLinus Torvalds * fixes 781da177e4SLinus Torvalds * Charles Hedrick : TCP fixes 791da177e4SLinus Torvalds * Toomas Tamm : TCP window fixes 801da177e4SLinus Torvalds * Alan Cox : Small URG fix to rlogin ^C ack fight 811da177e4SLinus Torvalds * Charles Hedrick : Rewrote most of it to actually work 821da177e4SLinus Torvalds * Linus : Rewrote tcp_read() and URG handling 831da177e4SLinus Torvalds * completely 841da177e4SLinus Torvalds * Gerhard Koerting: Fixed some missing timer handling 851da177e4SLinus Torvalds * Matthew Dillon : Reworked TCP machine states as per RFC 861da177e4SLinus Torvalds * Gerhard Koerting: PC/TCP workarounds 871da177e4SLinus Torvalds * Adam Caldwell : Assorted timer/timing errors 881da177e4SLinus Torvalds * Matthew Dillon : Fixed another RST bug 891da177e4SLinus Torvalds * Alan Cox : Move to kernel side addressing changes. 901da177e4SLinus Torvalds * Alan Cox : Beginning work on TCP fastpathing 911da177e4SLinus Torvalds * (not yet usable) 921da177e4SLinus Torvalds * Arnt Gulbrandsen: Turbocharged tcp_check() routine. 931da177e4SLinus Torvalds * Alan Cox : TCP fast path debugging 941da177e4SLinus Torvalds * Alan Cox : Window clamping 951da177e4SLinus Torvalds * Michael Riepe : Bug in tcp_check() 961da177e4SLinus Torvalds * Matt Dillon : More TCP improvements and RST bug fixes 971da177e4SLinus Torvalds * Matt Dillon : Yet more small nasties remove from the 981da177e4SLinus Torvalds * TCP code (Be very nice to this man if 991da177e4SLinus Torvalds * tcp finally works 100%) 8) 1001da177e4SLinus Torvalds * Alan Cox : BSD accept semantics. 1011da177e4SLinus Torvalds * Alan Cox : Reset on closedown bug. 1021da177e4SLinus Torvalds * Peter De Schrijver : ENOTCONN check missing in tcp_sendto(). 1031da177e4SLinus Torvalds * Michael Pall : Handle poll() after URG properly in 1041da177e4SLinus Torvalds * all cases. 1051da177e4SLinus Torvalds * Michael Pall : Undo the last fix in tcp_read_urg() 1061da177e4SLinus Torvalds * (multi URG PUSH broke rlogin). 1071da177e4SLinus Torvalds * Michael Pall : Fix the multi URG PUSH problem in 1081da177e4SLinus Torvalds * tcp_readable(), poll() after URG 1091da177e4SLinus Torvalds * works now. 1101da177e4SLinus Torvalds * Michael Pall : recv(...,MSG_OOB) never blocks in the 1111da177e4SLinus Torvalds * BSD api. 1121da177e4SLinus Torvalds * Alan Cox : Changed the semantics of sk->socket to 1131da177e4SLinus Torvalds * fix a race and a signal problem with 1141da177e4SLinus Torvalds * accept() and async I/O. 1151da177e4SLinus Torvalds * Alan Cox : Relaxed the rules on tcp_sendto(). 1161da177e4SLinus Torvalds * Yury Shevchuk : Really fixed accept() blocking problem. 1171da177e4SLinus Torvalds * Craig I. Hagan : Allow for BSD compatible TIME_WAIT for 1181da177e4SLinus Torvalds * clients/servers which listen in on 1191da177e4SLinus Torvalds * fixed ports. 1201da177e4SLinus Torvalds * Alan Cox : Cleaned the above up and shrank it to 1211da177e4SLinus Torvalds * a sensible code size. 1221da177e4SLinus Torvalds * Alan Cox : Self connect lockup fix. 1231da177e4SLinus Torvalds * Alan Cox : No connect to multicast. 1241da177e4SLinus Torvalds * Ross Biro : Close unaccepted children on master 1251da177e4SLinus Torvalds * socket close. 1261da177e4SLinus Torvalds * Alan Cox : Reset tracing code. 1271da177e4SLinus Torvalds * Alan Cox : Spurious resets on shutdown. 1281da177e4SLinus Torvalds * Alan Cox : Giant 15 minute/60 second timer error 1291da177e4SLinus Torvalds * Alan Cox : Small whoops in polling before an 1301da177e4SLinus Torvalds * accept. 1311da177e4SLinus Torvalds * Alan Cox : Kept the state trace facility since 1321da177e4SLinus Torvalds * it's handy for debugging. 1331da177e4SLinus Torvalds * Alan Cox : More reset handler fixes. 1341da177e4SLinus Torvalds * Alan Cox : Started rewriting the code based on 1351da177e4SLinus Torvalds * the RFC's for other useful protocol 1361da177e4SLinus Torvalds * references see: Comer, KA9Q NOS, and 1371da177e4SLinus Torvalds * for a reference on the difference 1381da177e4SLinus Torvalds * between specifications and how BSD 1391da177e4SLinus Torvalds * works see the 4.4lite source. 1401da177e4SLinus Torvalds * A.N.Kuznetsov : Don't time wait on completion of tidy 1411da177e4SLinus Torvalds * close. 1421da177e4SLinus Torvalds * Linus Torvalds : Fin/Shutdown & copied_seq changes. 1431da177e4SLinus Torvalds * Linus Torvalds : Fixed BSD port reuse to work first syn 1441da177e4SLinus Torvalds * Alan Cox : Reimplemented timers as per the RFC 1451da177e4SLinus Torvalds * and using multiple timers for sanity. 1461da177e4SLinus Torvalds * Alan Cox : Small bug fixes, and a lot of new 1471da177e4SLinus Torvalds * comments. 1481da177e4SLinus Torvalds * Alan Cox : Fixed dual reader crash by locking 1491da177e4SLinus Torvalds * the buffers (much like datagram.c) 1501da177e4SLinus Torvalds * Alan Cox : Fixed stuck sockets in probe. A probe 1511da177e4SLinus Torvalds * now gets fed up of retrying without 1521da177e4SLinus Torvalds * (even a no space) answer. 1531da177e4SLinus Torvalds * Alan Cox : Extracted closing code better 1541da177e4SLinus Torvalds * Alan Cox : Fixed the closing state machine to 1551da177e4SLinus Torvalds * resemble the RFC. 1561da177e4SLinus Torvalds * Alan Cox : More 'per spec' fixes. 1571da177e4SLinus Torvalds * Jorge Cwik : Even faster checksumming. 1581da177e4SLinus Torvalds * Alan Cox : tcp_data() doesn't ack illegal PSH 1591da177e4SLinus Torvalds * only frames. At least one pc tcp stack 1601da177e4SLinus Torvalds * generates them. 1611da177e4SLinus Torvalds * Alan Cox : Cache last socket. 1621da177e4SLinus Torvalds * Alan Cox : Per route irtt. 1631da177e4SLinus Torvalds * Matt Day : poll()->select() match BSD precisely on error 1641da177e4SLinus Torvalds * Alan Cox : New buffers 1651da177e4SLinus Torvalds * Marc Tamsky : Various sk->prot->retransmits and 1661da177e4SLinus Torvalds * sk->retransmits misupdating fixed. 1671da177e4SLinus Torvalds * Fixed tcp_write_timeout: stuck close, 1681da177e4SLinus Torvalds * and TCP syn retries gets used now. 1691da177e4SLinus Torvalds * Mark Yarvis : In tcp_read_wakeup(), don't send an 1701da177e4SLinus Torvalds * ack if state is TCP_CLOSED. 1711da177e4SLinus Torvalds * Alan Cox : Look up device on a retransmit - routes may 1721da177e4SLinus Torvalds * change. Doesn't yet cope with MSS shrink right 1731da177e4SLinus Torvalds * but it's a start! 1741da177e4SLinus Torvalds * Marc Tamsky : Closing in closing fixes. 1751da177e4SLinus Torvalds * Mike Shaver : RFC1122 verifications. 1761da177e4SLinus Torvalds * Alan Cox : rcv_saddr errors. 1771da177e4SLinus Torvalds * Alan Cox : Block double connect(). 1781da177e4SLinus Torvalds * Alan Cox : Small hooks for enSKIP. 1791da177e4SLinus Torvalds * Alexey Kuznetsov: Path MTU discovery. 1801da177e4SLinus Torvalds * Alan Cox : Support soft errors. 1811da177e4SLinus Torvalds * Alan Cox : Fix MTU discovery pathological case 1821da177e4SLinus Torvalds * when the remote claims no mtu! 1831da177e4SLinus Torvalds * Marc Tamsky : TCP_CLOSE fix. 1841da177e4SLinus Torvalds * Colin (G3TNE) : Send a reset on syn ack replies in 1851da177e4SLinus Torvalds * window but wrong (fixes NT lpd problems) 1861da177e4SLinus Torvalds * Pedro Roque : Better TCP window handling, delayed ack. 1871da177e4SLinus Torvalds * Joerg Reuter : No modification of locked buffers in 1881da177e4SLinus Torvalds * tcp_do_retransmit() 1891da177e4SLinus Torvalds * Eric Schenk : Changed receiver side silly window 1901da177e4SLinus Torvalds * avoidance algorithm to BSD style 1911da177e4SLinus Torvalds * algorithm. This doubles throughput 1921da177e4SLinus Torvalds * against machines running Solaris, 1931da177e4SLinus Torvalds * and seems to result in general 1941da177e4SLinus Torvalds * improvement. 1951da177e4SLinus Torvalds * Stefan Magdalinski : adjusted tcp_readable() to fix FIONREAD 1961da177e4SLinus Torvalds * Willy Konynenberg : Transparent proxying support. 1971da177e4SLinus Torvalds * Mike McLagan : Routing by source 1981da177e4SLinus Torvalds * Keith Owens : Do proper merging with partial SKB's in 1991da177e4SLinus Torvalds * tcp_do_sendmsg to avoid burstiness. 2001da177e4SLinus Torvalds * Eric Schenk : Fix fast close down bug with 2011da177e4SLinus Torvalds * shutdown() followed by close(). 2021da177e4SLinus Torvalds * Andi Kleen : Make poll agree with SIGIO 2031da177e4SLinus Torvalds * Salvatore Sanfilippo : Support SO_LINGER with linger == 1 and 2041da177e4SLinus Torvalds * lingertime == 0 (RFC 793 ABORT Call) 2051da177e4SLinus Torvalds * Hirokazu Takahashi : Use copy_from_user() instead of 2061da177e4SLinus Torvalds * csum_and_copy_from_user() if possible. 2071da177e4SLinus Torvalds * 2081da177e4SLinus Torvalds * This program is free software; you can redistribute it and/or 2091da177e4SLinus Torvalds * modify it under the terms of the GNU General Public License 2101da177e4SLinus Torvalds * as published by the Free Software Foundation; either version 2111da177e4SLinus Torvalds * 2 of the License, or(at your option) any later version. 2121da177e4SLinus Torvalds * 2131da177e4SLinus Torvalds * Description of States: 2141da177e4SLinus Torvalds * 2151da177e4SLinus Torvalds * TCP_SYN_SENT sent a connection request, waiting for ack 2161da177e4SLinus Torvalds * 2171da177e4SLinus Torvalds * TCP_SYN_RECV received a connection request, sent ack, 2181da177e4SLinus Torvalds * waiting for final ack in three-way handshake. 2191da177e4SLinus Torvalds * 2201da177e4SLinus Torvalds * TCP_ESTABLISHED connection established 2211da177e4SLinus Torvalds * 2221da177e4SLinus Torvalds * TCP_FIN_WAIT1 our side has shutdown, waiting to complete 2231da177e4SLinus Torvalds * transmission of remaining buffered data 2241da177e4SLinus Torvalds * 2251da177e4SLinus Torvalds * TCP_FIN_WAIT2 all buffered data sent, waiting for remote 2261da177e4SLinus Torvalds * to shutdown 2271da177e4SLinus Torvalds * 2281da177e4SLinus Torvalds * TCP_CLOSING both sides have shutdown but we still have 2291da177e4SLinus Torvalds * data we have to finish sending 2301da177e4SLinus Torvalds * 2311da177e4SLinus Torvalds * TCP_TIME_WAIT timeout to catch resent junk before entering 2321da177e4SLinus Torvalds * closed, can only be entered from FIN_WAIT2 2331da177e4SLinus Torvalds * or CLOSING. Required because the other end 2341da177e4SLinus Torvalds * may not have gotten our last ACK causing it 2351da177e4SLinus Torvalds * to retransmit the data packet (which we ignore) 2361da177e4SLinus Torvalds * 2371da177e4SLinus Torvalds * TCP_CLOSE_WAIT remote side has shutdown and is waiting for 2381da177e4SLinus Torvalds * us to finish writing our data and to shutdown 2391da177e4SLinus Torvalds * (we have to close() to move on to LAST_ACK) 2401da177e4SLinus Torvalds * 2411da177e4SLinus Torvalds * TCP_LAST_ACK out side has shutdown after remote has 2421da177e4SLinus Torvalds * shutdown. There may still be data in our 2431da177e4SLinus Torvalds * buffer that we have to finish sending 2441da177e4SLinus Torvalds * 2451da177e4SLinus Torvalds * TCP_CLOSE socket is finished 2461da177e4SLinus Torvalds */ 2471da177e4SLinus Torvalds 248afd46503SJoe Perches #define pr_fmt(fmt) "TCP: " fmt 249afd46503SJoe Perches 250172589ccSIlpo Järvinen #include <linux/kernel.h> 2511da177e4SLinus Torvalds #include <linux/module.h> 2521da177e4SLinus Torvalds #include <linux/types.h> 2531da177e4SLinus Torvalds #include <linux/fcntl.h> 2541da177e4SLinus Torvalds #include <linux/poll.h> 2551da177e4SLinus Torvalds #include <linux/init.h> 2561da177e4SLinus Torvalds #include <linux/fs.h> 2579c55e01cSJens Axboe #include <linux/skbuff.h> 25881b23b4aSAndrew Morton #include <linux/scatterlist.h> 2599c55e01cSJens Axboe #include <linux/splice.h> 2609c55e01cSJens Axboe #include <linux/net.h> 2619c55e01cSJens Axboe #include <linux/socket.h> 2621da177e4SLinus Torvalds #include <linux/random.h> 2631da177e4SLinus Torvalds #include <linux/bootmem.h> 26457413ebcSMiquel van Smoorenburg #include <linux/highmem.h> 26557413ebcSMiquel van Smoorenburg #include <linux/swap.h> 266b8059eadSDavid S. Miller #include <linux/cache.h> 267f4c50d99SHerbert Xu #include <linux/err.h> 268cfb6eeb4SYOSHIFUJI Hideaki #include <linux/crypto.h> 269da5c78c8SWilliam Allen Simpson #include <linux/time.h> 2705a0e3ad6STejun Heo #include <linux/slab.h> 2711da177e4SLinus Torvalds 2721da177e4SLinus Torvalds #include <net/icmp.h> 273cf60af03SYuchung Cheng #include <net/inet_common.h> 2741da177e4SLinus Torvalds #include <net/tcp.h> 2751da177e4SLinus Torvalds #include <net/xfrm.h> 2761da177e4SLinus Torvalds #include <net/ip.h> 2771a2449a8SChris Leech #include <net/netdma.h> 2789c55e01cSJens Axboe #include <net/sock.h> 2791da177e4SLinus Torvalds 2801da177e4SLinus Torvalds #include <asm/uaccess.h> 2811da177e4SLinus Torvalds #include <asm/ioctls.h> 282076bb0c8SEliezer Tamir #include <net/busy_poll.h> 2831da177e4SLinus Torvalds 284ab32ea5dSBrian Haley int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT; 2851da177e4SLinus Torvalds 28695bd09ebSEric Dumazet int sysctl_tcp_min_tso_segs __read_mostly = 2; 28795bd09ebSEric Dumazet 288f54b3111SEric Dumazet int sysctl_tcp_autocorking __read_mostly = 1; 289f54b3111SEric Dumazet 290dd24c001SEric Dumazet struct percpu_counter tcp_orphan_count; 2910a5578cfSArnaldo Carvalho de Melo EXPORT_SYMBOL_GPL(tcp_orphan_count); 2920a5578cfSArnaldo Carvalho de Melo 293a4fe34bfSEric W. Biederman long sysctl_tcp_mem[3] __read_mostly; 294b8059eadSDavid S. Miller int sysctl_tcp_wmem[3] __read_mostly; 295b8059eadSDavid S. Miller int sysctl_tcp_rmem[3] __read_mostly; 2961da177e4SLinus Torvalds 297a4fe34bfSEric W. Biederman EXPORT_SYMBOL(sysctl_tcp_mem); 2981da177e4SLinus Torvalds EXPORT_SYMBOL(sysctl_tcp_rmem); 2991da177e4SLinus Torvalds EXPORT_SYMBOL(sysctl_tcp_wmem); 3001da177e4SLinus Torvalds 3018d987e5cSEric Dumazet atomic_long_t tcp_memory_allocated; /* Current allocated memory. */ 3021da177e4SLinus Torvalds EXPORT_SYMBOL(tcp_memory_allocated); 3031748376bSEric Dumazet 3041748376bSEric Dumazet /* 3051748376bSEric Dumazet * Current number of TCP sockets. 3061748376bSEric Dumazet */ 3071748376bSEric Dumazet struct percpu_counter tcp_sockets_allocated; 3081da177e4SLinus Torvalds EXPORT_SYMBOL(tcp_sockets_allocated); 3091da177e4SLinus Torvalds 3101da177e4SLinus Torvalds /* 3119c55e01cSJens Axboe * TCP splice context 3129c55e01cSJens Axboe */ 3139c55e01cSJens Axboe struct tcp_splice_state { 3149c55e01cSJens Axboe struct pipe_inode_info *pipe; 3159c55e01cSJens Axboe size_t len; 3169c55e01cSJens Axboe unsigned int flags; 3179c55e01cSJens Axboe }; 3189c55e01cSJens Axboe 3199c55e01cSJens Axboe /* 3201da177e4SLinus Torvalds * Pressure flag: try to collapse. 3211da177e4SLinus Torvalds * Technical note: it is used by multiple contexts non atomically. 3223ab224beSHideo Aoki * All the __sk_mem_schedule() is of this nature: accounting 3231da177e4SLinus Torvalds * is strict, actions are advisory and have some latency. 3241da177e4SLinus Torvalds */ 3254103f8cdSEric Dumazet int tcp_memory_pressure __read_mostly; 3261da177e4SLinus Torvalds EXPORT_SYMBOL(tcp_memory_pressure); 3271da177e4SLinus Torvalds 3285c52ba17SPavel Emelyanov void tcp_enter_memory_pressure(struct sock *sk) 3291da177e4SLinus Torvalds { 3301da177e4SLinus Torvalds if (!tcp_memory_pressure) { 3314e673444SPavel Emelyanov NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMEMORYPRESSURES); 3321da177e4SLinus Torvalds tcp_memory_pressure = 1; 3331da177e4SLinus Torvalds } 3341da177e4SLinus Torvalds } 3351da177e4SLinus Torvalds EXPORT_SYMBOL(tcp_enter_memory_pressure); 3361da177e4SLinus Torvalds 337b103cf34SJulian Anastasov /* Convert seconds to retransmits based on initial and max timeout */ 338b103cf34SJulian Anastasov static u8 secs_to_retrans(int seconds, int timeout, int rto_max) 339b103cf34SJulian Anastasov { 340b103cf34SJulian Anastasov u8 res = 0; 341b103cf34SJulian Anastasov 342b103cf34SJulian Anastasov if (seconds > 0) { 343b103cf34SJulian Anastasov int period = timeout; 344b103cf34SJulian Anastasov 345b103cf34SJulian Anastasov res = 1; 346b103cf34SJulian Anastasov while (seconds > period && res < 255) { 347b103cf34SJulian Anastasov res++; 348b103cf34SJulian Anastasov timeout <<= 1; 349b103cf34SJulian Anastasov if (timeout > rto_max) 350b103cf34SJulian Anastasov timeout = rto_max; 351b103cf34SJulian Anastasov period += timeout; 352b103cf34SJulian Anastasov } 353b103cf34SJulian Anastasov } 354b103cf34SJulian Anastasov return res; 355b103cf34SJulian Anastasov } 356b103cf34SJulian Anastasov 357b103cf34SJulian Anastasov /* Convert retransmits to seconds based on initial and max timeout */ 358b103cf34SJulian Anastasov static int retrans_to_secs(u8 retrans, int timeout, int rto_max) 359b103cf34SJulian Anastasov { 360b103cf34SJulian Anastasov int period = 0; 361b103cf34SJulian Anastasov 362b103cf34SJulian Anastasov if (retrans > 0) { 363b103cf34SJulian Anastasov period = timeout; 364b103cf34SJulian Anastasov while (--retrans) { 365b103cf34SJulian Anastasov timeout <<= 1; 366b103cf34SJulian Anastasov if (timeout > rto_max) 367b103cf34SJulian Anastasov timeout = rto_max; 368b103cf34SJulian Anastasov period += timeout; 369b103cf34SJulian Anastasov } 370b103cf34SJulian Anastasov } 371b103cf34SJulian Anastasov return period; 372b103cf34SJulian Anastasov } 373b103cf34SJulian Anastasov 374900f65d3SNeal Cardwell /* Address-family independent initialization for a tcp_sock. 375900f65d3SNeal Cardwell * 376900f65d3SNeal Cardwell * NOTE: A lot of things set to zero explicitly by call to 377900f65d3SNeal Cardwell * sk_alloc() so need not be done here. 378900f65d3SNeal Cardwell */ 379900f65d3SNeal Cardwell void tcp_init_sock(struct sock *sk) 380900f65d3SNeal Cardwell { 381900f65d3SNeal Cardwell struct inet_connection_sock *icsk = inet_csk(sk); 382900f65d3SNeal Cardwell struct tcp_sock *tp = tcp_sk(sk); 383900f65d3SNeal Cardwell 384996b175eSEric Dumazet __skb_queue_head_init(&tp->out_of_order_queue); 385900f65d3SNeal Cardwell tcp_init_xmit_timers(sk); 386900f65d3SNeal Cardwell tcp_prequeue_init(tp); 38746d3ceabSEric Dumazet INIT_LIST_HEAD(&tp->tsq_node); 388900f65d3SNeal Cardwell 389900f65d3SNeal Cardwell icsk->icsk_rto = TCP_TIMEOUT_INIT; 390*740b0f18SEric Dumazet tp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT); 391900f65d3SNeal Cardwell 392900f65d3SNeal Cardwell /* So many TCP implementations out there (incorrectly) count the 393900f65d3SNeal Cardwell * initial SYN frame in their delayed-ACK and congestion control 394900f65d3SNeal Cardwell * algorithms that we must have the following bandaid to talk 395900f65d3SNeal Cardwell * efficiently to them. -DaveM 396900f65d3SNeal Cardwell */ 397900f65d3SNeal Cardwell tp->snd_cwnd = TCP_INIT_CWND; 398900f65d3SNeal Cardwell 399900f65d3SNeal Cardwell /* See draft-stevens-tcpca-spec-01 for discussion of the 400900f65d3SNeal Cardwell * initialization of these values. 401900f65d3SNeal Cardwell */ 402900f65d3SNeal Cardwell tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; 403900f65d3SNeal Cardwell tp->snd_cwnd_clamp = ~0; 404900f65d3SNeal Cardwell tp->mss_cache = TCP_MSS_DEFAULT; 405900f65d3SNeal Cardwell 406900f65d3SNeal Cardwell tp->reordering = sysctl_tcp_reordering; 407eed530b6SYuchung Cheng tcp_enable_early_retrans(tp); 408900f65d3SNeal Cardwell icsk->icsk_ca_ops = &tcp_init_congestion_ops; 409900f65d3SNeal Cardwell 410ceaa1fefSAndrey Vagin tp->tsoffset = 0; 411ceaa1fefSAndrey Vagin 412900f65d3SNeal Cardwell sk->sk_state = TCP_CLOSE; 413900f65d3SNeal Cardwell 414900f65d3SNeal Cardwell sk->sk_write_space = sk_stream_write_space; 415900f65d3SNeal Cardwell sock_set_flag(sk, SOCK_USE_WRITE_QUEUE); 416900f65d3SNeal Cardwell 417900f65d3SNeal Cardwell icsk->icsk_sync_mss = tcp_sync_mss; 418900f65d3SNeal Cardwell 419900f65d3SNeal Cardwell sk->sk_sndbuf = sysctl_tcp_wmem[1]; 420900f65d3SNeal Cardwell sk->sk_rcvbuf = sysctl_tcp_rmem[1]; 421900f65d3SNeal Cardwell 422900f65d3SNeal Cardwell local_bh_disable(); 423900f65d3SNeal Cardwell sock_update_memcg(sk); 424900f65d3SNeal Cardwell sk_sockets_allocated_inc(sk); 425900f65d3SNeal Cardwell local_bh_enable(); 426900f65d3SNeal Cardwell } 427900f65d3SNeal Cardwell EXPORT_SYMBOL(tcp_init_sock); 428900f65d3SNeal Cardwell 4291da177e4SLinus Torvalds /* 4301da177e4SLinus Torvalds * Wait for a TCP event. 4311da177e4SLinus Torvalds * 4321da177e4SLinus Torvalds * Note that we don't need to lock the socket, as the upper poll layers 4331da177e4SLinus Torvalds * take care of normal races (between the test and the event) and we don't 4341da177e4SLinus Torvalds * go look at any of the socket buffers directly. 4351da177e4SLinus Torvalds */ 4361da177e4SLinus Torvalds unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait) 4371da177e4SLinus Torvalds { 4381da177e4SLinus Torvalds unsigned int mask; 4391da177e4SLinus Torvalds struct sock *sk = sock->sk; 440cf533ea5SEric Dumazet const struct tcp_sock *tp = tcp_sk(sk); 4411da177e4SLinus Torvalds 442c3f1dbafSDavid Majnemer sock_rps_record_flow(sk); 443c3f1dbafSDavid Majnemer 444aa395145SEric Dumazet sock_poll_wait(file, sk_sleep(sk), wait); 4451da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 446dc40c7bcSArnaldo Carvalho de Melo return inet_csk_listen_poll(sk); 4471da177e4SLinus Torvalds 4481da177e4SLinus Torvalds /* Socket is not locked. We are protected from async events 44970efce27SWill Newton * by poll logic and correct handling of state changes 45070efce27SWill Newton * made by other threads is impossible in any case. 4511da177e4SLinus Torvalds */ 4521da177e4SLinus Torvalds 4531da177e4SLinus Torvalds mask = 0; 4541da177e4SLinus Torvalds 4551da177e4SLinus Torvalds /* 4561da177e4SLinus Torvalds * POLLHUP is certainly not done right. But poll() doesn't 4571da177e4SLinus Torvalds * have a notion of HUP in just one direction, and for a 4581da177e4SLinus Torvalds * socket the read side is more interesting. 4591da177e4SLinus Torvalds * 4601da177e4SLinus Torvalds * Some poll() documentation says that POLLHUP is incompatible 4611da177e4SLinus Torvalds * with the POLLOUT/POLLWR flags, so somebody should check this 4621da177e4SLinus Torvalds * all. But careful, it tends to be safer to return too many 4631da177e4SLinus Torvalds * bits than too few, and you can easily break real applications 4641da177e4SLinus Torvalds * if you don't tell them that something has hung up! 4651da177e4SLinus Torvalds * 4661da177e4SLinus Torvalds * Check-me. 4671da177e4SLinus Torvalds * 4681da177e4SLinus Torvalds * Check number 1. POLLHUP is _UNMASKABLE_ event (see UNIX98 and 4691da177e4SLinus Torvalds * our fs/select.c). It means that after we received EOF, 4701da177e4SLinus Torvalds * poll always returns immediately, making impossible poll() on write() 4711da177e4SLinus Torvalds * in state CLOSE_WAIT. One solution is evident --- to set POLLHUP 4721da177e4SLinus Torvalds * if and only if shutdown has been made in both directions. 4731da177e4SLinus Torvalds * Actually, it is interesting to look how Solaris and DUX 47470efce27SWill Newton * solve this dilemma. I would prefer, if POLLHUP were maskable, 4751da177e4SLinus Torvalds * then we could set it on SND_SHUTDOWN. BTW examples given 4761da177e4SLinus Torvalds * in Stevens' books assume exactly this behaviour, it explains 47770efce27SWill Newton * why POLLHUP is incompatible with POLLOUT. --ANK 4781da177e4SLinus Torvalds * 4791da177e4SLinus Torvalds * NOTE. Check for TCP_CLOSE is added. The goal is to prevent 4801da177e4SLinus Torvalds * blocking on fresh not-connected or disconnected socket. --ANK 4811da177e4SLinus Torvalds */ 4821da177e4SLinus Torvalds if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == TCP_CLOSE) 4831da177e4SLinus Torvalds mask |= POLLHUP; 4841da177e4SLinus Torvalds if (sk->sk_shutdown & RCV_SHUTDOWN) 485f348d70aSDavide Libenzi mask |= POLLIN | POLLRDNORM | POLLRDHUP; 4861da177e4SLinus Torvalds 4878336886fSJerry Chu /* Connected or passive Fast Open socket? */ 4888336886fSJerry Chu if (sk->sk_state != TCP_SYN_SENT && 4898336886fSJerry Chu (sk->sk_state != TCP_SYN_RECV || tp->fastopen_rsk != NULL)) { 490c7004482SDavid S. Miller int target = sock_rcvlowat(sk, 0, INT_MAX); 491c7004482SDavid S. Miller 492c7004482SDavid S. Miller if (tp->urg_seq == tp->copied_seq && 493c7004482SDavid S. Miller !sock_flag(sk, SOCK_URGINLINE) && 494c7004482SDavid S. Miller tp->urg_data) 495b634f875SAlexandra Kossovsky target++; 496c7004482SDavid S. Miller 4971da177e4SLinus Torvalds /* Potential race condition. If read of tp below will 4981da177e4SLinus Torvalds * escape above sk->sk_state, we can be illegally awaken 4991da177e4SLinus Torvalds * in SYN_* states. */ 500c7004482SDavid S. Miller if (tp->rcv_nxt - tp->copied_seq >= target) 5011da177e4SLinus Torvalds mask |= POLLIN | POLLRDNORM; 5021da177e4SLinus Torvalds 5031da177e4SLinus Torvalds if (!(sk->sk_shutdown & SEND_SHUTDOWN)) { 50464dc6130SEric Dumazet if (sk_stream_is_writeable(sk)) { 5051da177e4SLinus Torvalds mask |= POLLOUT | POLLWRNORM; 5061da177e4SLinus Torvalds } else { /* send SIGIO later */ 5071da177e4SLinus Torvalds set_bit(SOCK_ASYNC_NOSPACE, 5081da177e4SLinus Torvalds &sk->sk_socket->flags); 5091da177e4SLinus Torvalds set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 5101da177e4SLinus Torvalds 5111da177e4SLinus Torvalds /* Race breaker. If space is freed after 5121da177e4SLinus Torvalds * wspace test but before the flags are set, 5131da177e4SLinus Torvalds * IO signal will be lost. 5141da177e4SLinus Torvalds */ 51564dc6130SEric Dumazet if (sk_stream_is_writeable(sk)) 5161da177e4SLinus Torvalds mask |= POLLOUT | POLLWRNORM; 5171da177e4SLinus Torvalds } 518d84ba638SKOSAKI Motohiro } else 519d84ba638SKOSAKI Motohiro mask |= POLLOUT | POLLWRNORM; 5201da177e4SLinus Torvalds 5211da177e4SLinus Torvalds if (tp->urg_data & TCP_URG_VALID) 5221da177e4SLinus Torvalds mask |= POLLPRI; 5231da177e4SLinus Torvalds } 524a4d25803STom Marshall /* This barrier is coupled with smp_wmb() in tcp_reset() */ 525a4d25803STom Marshall smp_rmb(); 526a4d25803STom Marshall if (sk->sk_err) 527a4d25803STom Marshall mask |= POLLERR; 528a4d25803STom Marshall 5291da177e4SLinus Torvalds return mask; 5301da177e4SLinus Torvalds } 5314bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_poll); 5321da177e4SLinus Torvalds 5331da177e4SLinus Torvalds int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg) 5341da177e4SLinus Torvalds { 5351da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 5361da177e4SLinus Torvalds int answ; 5370e71c55cSEric Dumazet bool slow; 5381da177e4SLinus Torvalds 5391da177e4SLinus Torvalds switch (cmd) { 5401da177e4SLinus Torvalds case SIOCINQ: 5411da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 5421da177e4SLinus Torvalds return -EINVAL; 5431da177e4SLinus Torvalds 5440e71c55cSEric Dumazet slow = lock_sock_fast(sk); 5451da177e4SLinus Torvalds if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) 5461da177e4SLinus Torvalds answ = 0; 5471da177e4SLinus Torvalds else if (sock_flag(sk, SOCK_URGINLINE) || 5481da177e4SLinus Torvalds !tp->urg_data || 5491da177e4SLinus Torvalds before(tp->urg_seq, tp->copied_seq) || 5501da177e4SLinus Torvalds !before(tp->urg_seq, tp->rcv_nxt)) { 55191521944SDavid S. Miller 5521da177e4SLinus Torvalds answ = tp->rcv_nxt - tp->copied_seq; 5531da177e4SLinus Torvalds 554a3374c42SEric Dumazet /* Subtract 1, if FIN was received */ 555a3374c42SEric Dumazet if (answ && sock_flag(sk, SOCK_DONE)) 556a3374c42SEric Dumazet answ--; 5571da177e4SLinus Torvalds } else 5581da177e4SLinus Torvalds answ = tp->urg_seq - tp->copied_seq; 5590e71c55cSEric Dumazet unlock_sock_fast(sk, slow); 5601da177e4SLinus Torvalds break; 5611da177e4SLinus Torvalds case SIOCATMARK: 5621da177e4SLinus Torvalds answ = tp->urg_data && tp->urg_seq == tp->copied_seq; 5631da177e4SLinus Torvalds break; 5641da177e4SLinus Torvalds case SIOCOUTQ: 5651da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 5661da177e4SLinus Torvalds return -EINVAL; 5671da177e4SLinus Torvalds 5681da177e4SLinus Torvalds if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) 5691da177e4SLinus Torvalds answ = 0; 5701da177e4SLinus Torvalds else 5711da177e4SLinus Torvalds answ = tp->write_seq - tp->snd_una; 5721da177e4SLinus Torvalds break; 5732f4e1b39SMario Schuknecht case SIOCOUTQNSD: 5742f4e1b39SMario Schuknecht if (sk->sk_state == TCP_LISTEN) 5752f4e1b39SMario Schuknecht return -EINVAL; 5762f4e1b39SMario Schuknecht 5772f4e1b39SMario Schuknecht if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) 5782f4e1b39SMario Schuknecht answ = 0; 5792f4e1b39SMario Schuknecht else 5802f4e1b39SMario Schuknecht answ = tp->write_seq - tp->snd_nxt; 5812f4e1b39SMario Schuknecht break; 5821da177e4SLinus Torvalds default: 5831da177e4SLinus Torvalds return -ENOIOCTLCMD; 5843ff50b79SStephen Hemminger } 5851da177e4SLinus Torvalds 5861da177e4SLinus Torvalds return put_user(answ, (int __user *)arg); 5871da177e4SLinus Torvalds } 5884bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_ioctl); 5891da177e4SLinus Torvalds 5901da177e4SLinus Torvalds static inline void tcp_mark_push(struct tcp_sock *tp, struct sk_buff *skb) 5911da177e4SLinus Torvalds { 5924de075e0SEric Dumazet TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_PSH; 5931da177e4SLinus Torvalds tp->pushed_seq = tp->write_seq; 5941da177e4SLinus Torvalds } 5951da177e4SLinus Torvalds 596a2a385d6SEric Dumazet static inline bool forced_push(const struct tcp_sock *tp) 5971da177e4SLinus Torvalds { 5981da177e4SLinus Torvalds return after(tp->write_seq, tp->pushed_seq + (tp->max_window >> 1)); 5991da177e4SLinus Torvalds } 6001da177e4SLinus Torvalds 6019e412ba7SIlpo Järvinen static inline void skb_entail(struct sock *sk, struct sk_buff *skb) 6021da177e4SLinus Torvalds { 6039e412ba7SIlpo Järvinen struct tcp_sock *tp = tcp_sk(sk); 604352d4800SArnaldo Carvalho de Melo struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); 605352d4800SArnaldo Carvalho de Melo 6061da177e4SLinus Torvalds skb->csum = 0; 607352d4800SArnaldo Carvalho de Melo tcb->seq = tcb->end_seq = tp->write_seq; 6084de075e0SEric Dumazet tcb->tcp_flags = TCPHDR_ACK; 609352d4800SArnaldo Carvalho de Melo tcb->sacked = 0; 6101da177e4SLinus Torvalds skb_header_release(skb); 611fe067e8aSDavid S. Miller tcp_add_write_queue_tail(sk, skb); 6123ab224beSHideo Aoki sk->sk_wmem_queued += skb->truesize; 6133ab224beSHideo Aoki sk_mem_charge(sk, skb->truesize); 61489ebd197SDavid S. Miller if (tp->nonagle & TCP_NAGLE_PUSH) 6151da177e4SLinus Torvalds tp->nonagle &= ~TCP_NAGLE_PUSH; 6161da177e4SLinus Torvalds } 6171da177e4SLinus Torvalds 618afeca340SKrishna Kumar static inline void tcp_mark_urg(struct tcp_sock *tp, int flags) 6191da177e4SLinus Torvalds { 62033f5f57eSIlpo Järvinen if (flags & MSG_OOB) 6211da177e4SLinus Torvalds tp->snd_up = tp->write_seq; 6221da177e4SLinus Torvalds } 6231da177e4SLinus Torvalds 624f54b3111SEric Dumazet /* If a not yet filled skb is pushed, do not send it if 625a181ceb5SEric Dumazet * we have data packets in Qdisc or NIC queues : 626f54b3111SEric Dumazet * Because TX completion will happen shortly, it gives a chance 627f54b3111SEric Dumazet * to coalesce future sendmsg() payload into this skb, without 628f54b3111SEric Dumazet * need for a timer, and with no latency trade off. 629f54b3111SEric Dumazet * As packets containing data payload have a bigger truesize 630a181ceb5SEric Dumazet * than pure acks (dataless) packets, the last checks prevent 631a181ceb5SEric Dumazet * autocorking if we only have an ACK in Qdisc/NIC queues, 632a181ceb5SEric Dumazet * or if TX completion was delayed after we processed ACK packet. 633f54b3111SEric Dumazet */ 634f54b3111SEric Dumazet static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb, 635f54b3111SEric Dumazet int size_goal) 6361da177e4SLinus Torvalds { 637f54b3111SEric Dumazet return skb->len < size_goal && 638f54b3111SEric Dumazet sysctl_tcp_autocorking && 639a181ceb5SEric Dumazet skb != tcp_write_queue_head(sk) && 640f54b3111SEric Dumazet atomic_read(&sk->sk_wmem_alloc) > skb->truesize; 641f54b3111SEric Dumazet } 6429e412ba7SIlpo Järvinen 643f54b3111SEric Dumazet static void tcp_push(struct sock *sk, int flags, int mss_now, 644f54b3111SEric Dumazet int nonagle, int size_goal) 645f54b3111SEric Dumazet { 646f54b3111SEric Dumazet struct tcp_sock *tp = tcp_sk(sk); 647f54b3111SEric Dumazet struct sk_buff *skb; 648f54b3111SEric Dumazet 649f54b3111SEric Dumazet if (!tcp_send_head(sk)) 650f54b3111SEric Dumazet return; 651f54b3111SEric Dumazet 652f54b3111SEric Dumazet skb = tcp_write_queue_tail(sk); 6531da177e4SLinus Torvalds if (!(flags & MSG_MORE) || forced_push(tp)) 654f54b3111SEric Dumazet tcp_mark_push(tp, skb); 655afeca340SKrishna Kumar 656afeca340SKrishna Kumar tcp_mark_urg(tp, flags); 657f54b3111SEric Dumazet 658f54b3111SEric Dumazet if (tcp_should_autocork(sk, skb, size_goal)) { 659f54b3111SEric Dumazet 660f54b3111SEric Dumazet /* avoid atomic op if TSQ_THROTTLED bit is already set */ 661f54b3111SEric Dumazet if (!test_bit(TSQ_THROTTLED, &tp->tsq_flags)) { 662f54b3111SEric Dumazet NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAUTOCORKING); 663f54b3111SEric Dumazet set_bit(TSQ_THROTTLED, &tp->tsq_flags); 6641da177e4SLinus Torvalds } 665a181ceb5SEric Dumazet /* It is possible TX completion already happened 666a181ceb5SEric Dumazet * before we set TSQ_THROTTLED. 667a181ceb5SEric Dumazet */ 668a181ceb5SEric Dumazet if (atomic_read(&sk->sk_wmem_alloc) > skb->truesize) 669f54b3111SEric Dumazet return; 670f54b3111SEric Dumazet } 671f54b3111SEric Dumazet 672f54b3111SEric Dumazet if (flags & MSG_MORE) 673f54b3111SEric Dumazet nonagle = TCP_NAGLE_CORK; 674f54b3111SEric Dumazet 675f54b3111SEric Dumazet __tcp_push_pending_frames(sk, mss_now, nonagle); 6761da177e4SLinus Torvalds } 6771da177e4SLinus Torvalds 6786ff7751dSAdrian Bunk static int tcp_splice_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb, 6799c55e01cSJens Axboe unsigned int offset, size_t len) 6809c55e01cSJens Axboe { 6819c55e01cSJens Axboe struct tcp_splice_state *tss = rd_desc->arg.data; 68233966dd0SWilly Tarreau int ret; 6839c55e01cSJens Axboe 6849fa5fdf2SDimitris Michailidis ret = skb_splice_bits(skb, offset, tss->pipe, min(rd_desc->count, len), 6859fa5fdf2SDimitris Michailidis tss->flags); 68633966dd0SWilly Tarreau if (ret > 0) 68733966dd0SWilly Tarreau rd_desc->count -= ret; 68833966dd0SWilly Tarreau return ret; 6899c55e01cSJens Axboe } 6909c55e01cSJens Axboe 6919c55e01cSJens Axboe static int __tcp_splice_read(struct sock *sk, struct tcp_splice_state *tss) 6929c55e01cSJens Axboe { 6939c55e01cSJens Axboe /* Store TCP splice context information in read_descriptor_t. */ 6949c55e01cSJens Axboe read_descriptor_t rd_desc = { 6959c55e01cSJens Axboe .arg.data = tss, 69633966dd0SWilly Tarreau .count = tss->len, 6979c55e01cSJens Axboe }; 6989c55e01cSJens Axboe 6999c55e01cSJens Axboe return tcp_read_sock(sk, &rd_desc, tcp_splice_data_recv); 7009c55e01cSJens Axboe } 7019c55e01cSJens Axboe 7029c55e01cSJens Axboe /** 7039c55e01cSJens Axboe * tcp_splice_read - splice data from TCP socket to a pipe 7049c55e01cSJens Axboe * @sock: socket to splice from 7059c55e01cSJens Axboe * @ppos: position (not valid) 7069c55e01cSJens Axboe * @pipe: pipe to splice to 7079c55e01cSJens Axboe * @len: number of bytes to splice 7089c55e01cSJens Axboe * @flags: splice modifier flags 7099c55e01cSJens Axboe * 7109c55e01cSJens Axboe * Description: 7119c55e01cSJens Axboe * Will read pages from given socket and fill them into a pipe. 7129c55e01cSJens Axboe * 7139c55e01cSJens Axboe **/ 7149c55e01cSJens Axboe ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos, 7159c55e01cSJens Axboe struct pipe_inode_info *pipe, size_t len, 7169c55e01cSJens Axboe unsigned int flags) 7179c55e01cSJens Axboe { 7189c55e01cSJens Axboe struct sock *sk = sock->sk; 7199c55e01cSJens Axboe struct tcp_splice_state tss = { 7209c55e01cSJens Axboe .pipe = pipe, 7219c55e01cSJens Axboe .len = len, 7229c55e01cSJens Axboe .flags = flags, 7239c55e01cSJens Axboe }; 7249c55e01cSJens Axboe long timeo; 7259c55e01cSJens Axboe ssize_t spliced; 7269c55e01cSJens Axboe int ret; 7279c55e01cSJens Axboe 7283a047bf8SChangli Gao sock_rps_record_flow(sk); 7299c55e01cSJens Axboe /* 7309c55e01cSJens Axboe * We can't seek on a socket input 7319c55e01cSJens Axboe */ 7329c55e01cSJens Axboe if (unlikely(*ppos)) 7339c55e01cSJens Axboe return -ESPIPE; 7349c55e01cSJens Axboe 7359c55e01cSJens Axboe ret = spliced = 0; 7369c55e01cSJens Axboe 7379c55e01cSJens Axboe lock_sock(sk); 7389c55e01cSJens Axboe 73942324c62SEric Dumazet timeo = sock_rcvtimeo(sk, sock->file->f_flags & O_NONBLOCK); 7409c55e01cSJens Axboe while (tss.len) { 7419c55e01cSJens Axboe ret = __tcp_splice_read(sk, &tss); 7429c55e01cSJens Axboe if (ret < 0) 7439c55e01cSJens Axboe break; 7449c55e01cSJens Axboe else if (!ret) { 7459c55e01cSJens Axboe if (spliced) 7469c55e01cSJens Axboe break; 7479c55e01cSJens Axboe if (sock_flag(sk, SOCK_DONE)) 7489c55e01cSJens Axboe break; 7499c55e01cSJens Axboe if (sk->sk_err) { 7509c55e01cSJens Axboe ret = sock_error(sk); 7519c55e01cSJens Axboe break; 7529c55e01cSJens Axboe } 7539c55e01cSJens Axboe if (sk->sk_shutdown & RCV_SHUTDOWN) 7549c55e01cSJens Axboe break; 7559c55e01cSJens Axboe if (sk->sk_state == TCP_CLOSE) { 7569c55e01cSJens Axboe /* 7579c55e01cSJens Axboe * This occurs when user tries to read 7589c55e01cSJens Axboe * from never connected socket. 7599c55e01cSJens Axboe */ 7609c55e01cSJens Axboe if (!sock_flag(sk, SOCK_DONE)) 7619c55e01cSJens Axboe ret = -ENOTCONN; 7629c55e01cSJens Axboe break; 7639c55e01cSJens Axboe } 7649c55e01cSJens Axboe if (!timeo) { 7659c55e01cSJens Axboe ret = -EAGAIN; 7669c55e01cSJens Axboe break; 7679c55e01cSJens Axboe } 7689c55e01cSJens Axboe sk_wait_data(sk, &timeo); 7699c55e01cSJens Axboe if (signal_pending(current)) { 7709c55e01cSJens Axboe ret = sock_intr_errno(timeo); 7719c55e01cSJens Axboe break; 7729c55e01cSJens Axboe } 7739c55e01cSJens Axboe continue; 7749c55e01cSJens Axboe } 7759c55e01cSJens Axboe tss.len -= ret; 7769c55e01cSJens Axboe spliced += ret; 7779c55e01cSJens Axboe 77833966dd0SWilly Tarreau if (!timeo) 77933966dd0SWilly Tarreau break; 7809c55e01cSJens Axboe release_sock(sk); 7819c55e01cSJens Axboe lock_sock(sk); 7829c55e01cSJens Axboe 7839c55e01cSJens Axboe if (sk->sk_err || sk->sk_state == TCP_CLOSE || 78433966dd0SWilly Tarreau (sk->sk_shutdown & RCV_SHUTDOWN) || 7859c55e01cSJens Axboe signal_pending(current)) 7869c55e01cSJens Axboe break; 7879c55e01cSJens Axboe } 7889c55e01cSJens Axboe 7899c55e01cSJens Axboe release_sock(sk); 7909c55e01cSJens Axboe 7919c55e01cSJens Axboe if (spliced) 7929c55e01cSJens Axboe return spliced; 7939c55e01cSJens Axboe 7949c55e01cSJens Axboe return ret; 7959c55e01cSJens Axboe } 7964bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_splice_read); 7979c55e01cSJens Axboe 798df97c708SPavel Emelyanov struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp) 799f561d0f2SPavel Emelyanov { 800f561d0f2SPavel Emelyanov struct sk_buff *skb; 801f561d0f2SPavel Emelyanov 802f561d0f2SPavel Emelyanov /* The TCP header must be at least 32-bit aligned. */ 803f561d0f2SPavel Emelyanov size = ALIGN(size, 4); 804f561d0f2SPavel Emelyanov 805f561d0f2SPavel Emelyanov skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp); 806f561d0f2SPavel Emelyanov if (skb) { 8073ab224beSHideo Aoki if (sk_wmem_schedule(sk, skb->truesize)) { 808a21d4572SEric Dumazet skb_reserve(skb, sk->sk_prot->max_header); 809f561d0f2SPavel Emelyanov /* 810f561d0f2SPavel Emelyanov * Make sure that we have exactly size bytes 811f561d0f2SPavel Emelyanov * available to the caller, no more, no less. 812f561d0f2SPavel Emelyanov */ 81316fad69cSEric Dumazet skb->reserved_tailroom = skb->end - skb->tail - size; 814f561d0f2SPavel Emelyanov return skb; 815f561d0f2SPavel Emelyanov } 816f561d0f2SPavel Emelyanov __kfree_skb(skb); 817f561d0f2SPavel Emelyanov } else { 8185c52ba17SPavel Emelyanov sk->sk_prot->enter_memory_pressure(sk); 819f561d0f2SPavel Emelyanov sk_stream_moderate_sndbuf(sk); 820f561d0f2SPavel Emelyanov } 821f561d0f2SPavel Emelyanov return NULL; 822f561d0f2SPavel Emelyanov } 823f561d0f2SPavel Emelyanov 8240c54b85fSIlpo Järvinen static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now, 8250c54b85fSIlpo Järvinen int large_allowed) 8260c54b85fSIlpo Järvinen { 8270c54b85fSIlpo Järvinen struct tcp_sock *tp = tcp_sk(sk); 8282a3a041cSIlpo Järvinen u32 xmit_size_goal, old_size_goal; 8290c54b85fSIlpo Järvinen 8300c54b85fSIlpo Järvinen xmit_size_goal = mss_now; 8310c54b85fSIlpo Järvinen 8320c54b85fSIlpo Järvinen if (large_allowed && sk_can_gso(sk)) { 83395bd09ebSEric Dumazet u32 gso_size, hlen; 8340c54b85fSIlpo Järvinen 83595bd09ebSEric Dumazet /* Maybe we should/could use sk->sk_prot->max_header here ? */ 83695bd09ebSEric Dumazet hlen = inet_csk(sk)->icsk_af_ops->net_header_len + 83795bd09ebSEric Dumazet inet_csk(sk)->icsk_ext_hdr_len + 83895bd09ebSEric Dumazet tp->tcp_header_len; 83995bd09ebSEric Dumazet 84095bd09ebSEric Dumazet /* Goal is to send at least one packet per ms, 84195bd09ebSEric Dumazet * not one big TSO packet every 100 ms. 84295bd09ebSEric Dumazet * This preserves ACK clocking and is consistent 84395bd09ebSEric Dumazet * with tcp_tso_should_defer() heuristic. 84495bd09ebSEric Dumazet */ 84595bd09ebSEric Dumazet gso_size = sk->sk_pacing_rate / (2 * MSEC_PER_SEC); 84695bd09ebSEric Dumazet gso_size = max_t(u32, gso_size, 84795bd09ebSEric Dumazet sysctl_tcp_min_tso_segs * mss_now); 84895bd09ebSEric Dumazet 84995bd09ebSEric Dumazet xmit_size_goal = min_t(u32, gso_size, 85095bd09ebSEric Dumazet sk->sk_gso_max_size - 1 - hlen); 85195bd09ebSEric Dumazet 8520c54b85fSIlpo Järvinen xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal); 8532a3a041cSIlpo Järvinen 8542a3a041cSIlpo Järvinen /* We try hard to avoid divides here */ 8552a3a041cSIlpo Järvinen old_size_goal = tp->xmit_size_goal_segs * mss_now; 8562a3a041cSIlpo Järvinen 8572a3a041cSIlpo Järvinen if (likely(old_size_goal <= xmit_size_goal && 8582a3a041cSIlpo Järvinen old_size_goal + mss_now > xmit_size_goal)) { 8592a3a041cSIlpo Järvinen xmit_size_goal = old_size_goal; 8602a3a041cSIlpo Järvinen } else { 8611485348dSBen Hutchings tp->xmit_size_goal_segs = 8621485348dSBen Hutchings min_t(u16, xmit_size_goal / mss_now, 8631485348dSBen Hutchings sk->sk_gso_max_segs); 8642a3a041cSIlpo Järvinen xmit_size_goal = tp->xmit_size_goal_segs * mss_now; 8652a3a041cSIlpo Järvinen } 8660c54b85fSIlpo Järvinen } 8670c54b85fSIlpo Järvinen 868afece1c6SIlpo Järvinen return max(xmit_size_goal, mss_now); 8690c54b85fSIlpo Järvinen } 8700c54b85fSIlpo Järvinen 8710c54b85fSIlpo Järvinen static int tcp_send_mss(struct sock *sk, int *size_goal, int flags) 8720c54b85fSIlpo Järvinen { 8730c54b85fSIlpo Järvinen int mss_now; 8740c54b85fSIlpo Järvinen 8750c54b85fSIlpo Järvinen mss_now = tcp_current_mss(sk); 8760c54b85fSIlpo Järvinen *size_goal = tcp_xmit_size_goal(sk, mss_now, !(flags & MSG_OOB)); 8770c54b85fSIlpo Järvinen 8780c54b85fSIlpo Järvinen return mss_now; 8790c54b85fSIlpo Järvinen } 8800c54b85fSIlpo Järvinen 88164022d0bSEric Dumazet static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset, 88264022d0bSEric Dumazet size_t size, int flags) 8831da177e4SLinus Torvalds { 8841da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 885c1b4a7e6SDavid S. Miller int mss_now, size_goal; 8861da177e4SLinus Torvalds int err; 8871da177e4SLinus Torvalds ssize_t copied; 8881da177e4SLinus Torvalds long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); 8891da177e4SLinus Torvalds 8908336886fSJerry Chu /* Wait for a connection to finish. One exception is TCP Fast Open 8918336886fSJerry Chu * (passive side) where data is allowed to be sent before a connection 8928336886fSJerry Chu * is fully established. 8938336886fSJerry Chu */ 8948336886fSJerry Chu if (((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) && 8958336886fSJerry Chu !tcp_passive_fastopen(sk)) { 8961da177e4SLinus Torvalds if ((err = sk_stream_wait_connect(sk, &timeo)) != 0) 8971da177e4SLinus Torvalds goto out_err; 8988336886fSJerry Chu } 8991da177e4SLinus Torvalds 9001da177e4SLinus Torvalds clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 9011da177e4SLinus Torvalds 9020c54b85fSIlpo Järvinen mss_now = tcp_send_mss(sk, &size_goal, flags); 9031da177e4SLinus Torvalds copied = 0; 9041da177e4SLinus Torvalds 9051da177e4SLinus Torvalds err = -EPIPE; 9061da177e4SLinus Torvalds if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) 9070d6a775eSIlpo Järvinen goto out_err; 9081da177e4SLinus Torvalds 90964022d0bSEric Dumazet while (size > 0) { 910fe067e8aSDavid S. Miller struct sk_buff *skb = tcp_write_queue_tail(sk); 91138ba0a65SEric Dumazet int copy, i; 91238ba0a65SEric Dumazet bool can_coalesce; 9131da177e4SLinus Torvalds 914fe067e8aSDavid S. Miller if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) { 9151da177e4SLinus Torvalds new_segment: 9161da177e4SLinus Torvalds if (!sk_stream_memory_free(sk)) 9171da177e4SLinus Torvalds goto wait_for_sndbuf; 9181da177e4SLinus Torvalds 919df97c708SPavel Emelyanov skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); 9201da177e4SLinus Torvalds if (!skb) 9211da177e4SLinus Torvalds goto wait_for_memory; 9221da177e4SLinus Torvalds 9239e412ba7SIlpo Järvinen skb_entail(sk, skb); 924c1b4a7e6SDavid S. Miller copy = size_goal; 9251da177e4SLinus Torvalds } 9261da177e4SLinus Torvalds 9271da177e4SLinus Torvalds if (copy > size) 9281da177e4SLinus Torvalds copy = size; 9291da177e4SLinus Torvalds 9301da177e4SLinus Torvalds i = skb_shinfo(skb)->nr_frags; 9311da177e4SLinus Torvalds can_coalesce = skb_can_coalesce(skb, i, page, offset); 9321da177e4SLinus Torvalds if (!can_coalesce && i >= MAX_SKB_FRAGS) { 9331da177e4SLinus Torvalds tcp_mark_push(tp, skb); 9341da177e4SLinus Torvalds goto new_segment; 9351da177e4SLinus Torvalds } 9363ab224beSHideo Aoki if (!sk_wmem_schedule(sk, copy)) 9371da177e4SLinus Torvalds goto wait_for_memory; 9381da177e4SLinus Torvalds 9391da177e4SLinus Torvalds if (can_coalesce) { 9409e903e08SEric Dumazet skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); 9411da177e4SLinus Torvalds } else { 9421da177e4SLinus Torvalds get_page(page); 9431da177e4SLinus Torvalds skb_fill_page_desc(skb, i, page, offset, copy); 9441da177e4SLinus Torvalds } 945c9af6db4SPravin B Shelar skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG; 946cef401deSEric Dumazet 9471da177e4SLinus Torvalds skb->len += copy; 9481da177e4SLinus Torvalds skb->data_len += copy; 9491da177e4SLinus Torvalds skb->truesize += copy; 9501da177e4SLinus Torvalds sk->sk_wmem_queued += copy; 9513ab224beSHideo Aoki sk_mem_charge(sk, copy); 95284fa7933SPatrick McHardy skb->ip_summed = CHECKSUM_PARTIAL; 9531da177e4SLinus Torvalds tp->write_seq += copy; 9541da177e4SLinus Torvalds TCP_SKB_CB(skb)->end_seq += copy; 9557967168cSHerbert Xu skb_shinfo(skb)->gso_segs = 0; 9561da177e4SLinus Torvalds 9571da177e4SLinus Torvalds if (!copied) 9584de075e0SEric Dumazet TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH; 9591da177e4SLinus Torvalds 9601da177e4SLinus Torvalds copied += copy; 96164022d0bSEric Dumazet offset += copy; 96264022d0bSEric Dumazet if (!(size -= copy)) 9631da177e4SLinus Torvalds goto out; 9641da177e4SLinus Torvalds 96569d15067SHerbert Xu if (skb->len < size_goal || (flags & MSG_OOB)) 9661da177e4SLinus Torvalds continue; 9671da177e4SLinus Torvalds 9681da177e4SLinus Torvalds if (forced_push(tp)) { 9691da177e4SLinus Torvalds tcp_mark_push(tp, skb); 9709e412ba7SIlpo Järvinen __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH); 971fe067e8aSDavid S. Miller } else if (skb == tcp_send_head(sk)) 9721da177e4SLinus Torvalds tcp_push_one(sk, mss_now); 9731da177e4SLinus Torvalds continue; 9741da177e4SLinus Torvalds 9751da177e4SLinus Torvalds wait_for_sndbuf: 9761da177e4SLinus Torvalds set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 9771da177e4SLinus Torvalds wait_for_memory: 978f54b3111SEric Dumazet tcp_push(sk, flags & ~MSG_MORE, mss_now, 979f54b3111SEric Dumazet TCP_NAGLE_PUSH, size_goal); 9801da177e4SLinus Torvalds 9811da177e4SLinus Torvalds if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) 9821da177e4SLinus Torvalds goto do_error; 9831da177e4SLinus Torvalds 9840c54b85fSIlpo Järvinen mss_now = tcp_send_mss(sk, &size_goal, flags); 9851da177e4SLinus Torvalds } 9861da177e4SLinus Torvalds 9871da177e4SLinus Torvalds out: 98835f9c09fSEric Dumazet if (copied && !(flags & MSG_SENDPAGE_NOTLAST)) 989f54b3111SEric Dumazet tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); 9901da177e4SLinus Torvalds return copied; 9911da177e4SLinus Torvalds 9921da177e4SLinus Torvalds do_error: 9931da177e4SLinus Torvalds if (copied) 9941da177e4SLinus Torvalds goto out; 9951da177e4SLinus Torvalds out_err: 9961da177e4SLinus Torvalds return sk_stream_error(sk, flags, err); 9971da177e4SLinus Torvalds } 9981da177e4SLinus Torvalds 9997ba42910SChangli Gao int tcp_sendpage(struct sock *sk, struct page *page, int offset, 10001da177e4SLinus Torvalds size_t size, int flags) 10011da177e4SLinus Torvalds { 10021da177e4SLinus Torvalds ssize_t res; 10031da177e4SLinus Torvalds 10041da177e4SLinus Torvalds if (!(sk->sk_route_caps & NETIF_F_SG) || 10058648b305SHerbert Xu !(sk->sk_route_caps & NETIF_F_ALL_CSUM)) 10067ba42910SChangli Gao return sock_no_sendpage(sk->sk_socket, page, offset, size, 10077ba42910SChangli Gao flags); 10081da177e4SLinus Torvalds 10091da177e4SLinus Torvalds lock_sock(sk); 101064022d0bSEric Dumazet res = do_tcp_sendpages(sk, page, offset, size, flags); 10111da177e4SLinus Torvalds release_sock(sk); 10121da177e4SLinus Torvalds return res; 10131da177e4SLinus Torvalds } 10144bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_sendpage); 10151da177e4SLinus Torvalds 1016690e99c4SEric Dumazet static inline int select_size(const struct sock *sk, bool sg) 10171da177e4SLinus Torvalds { 1018cf533ea5SEric Dumazet const struct tcp_sock *tp = tcp_sk(sk); 1019c1b4a7e6SDavid S. Miller int tmp = tp->mss_cache; 10201da177e4SLinus Torvalds 1021def87cf4SKrishna Kumar if (sg) { 1022f07d960dSEric Dumazet if (sk_can_gso(sk)) { 1023f07d960dSEric Dumazet /* Small frames wont use a full page: 1024f07d960dSEric Dumazet * Payload will immediately follow tcp header. 1025f07d960dSEric Dumazet */ 1026f07d960dSEric Dumazet tmp = SKB_WITH_OVERHEAD(2048 - MAX_TCP_HEADER); 1027f07d960dSEric Dumazet } else { 1028b4e26f5eSDavid S. Miller int pgbreak = SKB_MAX_HEAD(MAX_TCP_HEADER); 1029b4e26f5eSDavid S. Miller 1030b4e26f5eSDavid S. Miller if (tmp >= pgbreak && 1031b4e26f5eSDavid S. Miller tmp <= pgbreak + (MAX_SKB_FRAGS - 1) * PAGE_SIZE) 1032b4e26f5eSDavid S. Miller tmp = pgbreak; 1033b4e26f5eSDavid S. Miller } 1034b4e26f5eSDavid S. Miller } 10351da177e4SLinus Torvalds 10361da177e4SLinus Torvalds return tmp; 10371da177e4SLinus Torvalds } 10381da177e4SLinus Torvalds 1039cf60af03SYuchung Cheng void tcp_free_fastopen_req(struct tcp_sock *tp) 1040cf60af03SYuchung Cheng { 1041cf60af03SYuchung Cheng if (tp->fastopen_req != NULL) { 1042cf60af03SYuchung Cheng kfree(tp->fastopen_req); 1043cf60af03SYuchung Cheng tp->fastopen_req = NULL; 1044cf60af03SYuchung Cheng } 1045cf60af03SYuchung Cheng } 1046cf60af03SYuchung Cheng 1047cf60af03SYuchung Cheng static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *size) 1048cf60af03SYuchung Cheng { 1049cf60af03SYuchung Cheng struct tcp_sock *tp = tcp_sk(sk); 1050cf60af03SYuchung Cheng int err, flags; 1051cf60af03SYuchung Cheng 1052cf60af03SYuchung Cheng if (!(sysctl_tcp_fastopen & TFO_CLIENT_ENABLE)) 1053cf60af03SYuchung Cheng return -EOPNOTSUPP; 1054cf60af03SYuchung Cheng if (tp->fastopen_req != NULL) 1055cf60af03SYuchung Cheng return -EALREADY; /* Another Fast Open is in progress */ 1056cf60af03SYuchung Cheng 1057cf60af03SYuchung Cheng tp->fastopen_req = kzalloc(sizeof(struct tcp_fastopen_request), 1058cf60af03SYuchung Cheng sk->sk_allocation); 1059cf60af03SYuchung Cheng if (unlikely(tp->fastopen_req == NULL)) 1060cf60af03SYuchung Cheng return -ENOBUFS; 1061cf60af03SYuchung Cheng tp->fastopen_req->data = msg; 1062cf60af03SYuchung Cheng 1063cf60af03SYuchung Cheng flags = (msg->msg_flags & MSG_DONTWAIT) ? O_NONBLOCK : 0; 1064cf60af03SYuchung Cheng err = __inet_stream_connect(sk->sk_socket, msg->msg_name, 1065cf60af03SYuchung Cheng msg->msg_namelen, flags); 1066cf60af03SYuchung Cheng *size = tp->fastopen_req->copied; 1067cf60af03SYuchung Cheng tcp_free_fastopen_req(tp); 1068cf60af03SYuchung Cheng return err; 1069cf60af03SYuchung Cheng } 1070cf60af03SYuchung Cheng 10717ba42910SChangli Gao int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 10721da177e4SLinus Torvalds size_t size) 10731da177e4SLinus Torvalds { 10741da177e4SLinus Torvalds struct iovec *iov; 10751da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 10761da177e4SLinus Torvalds struct sk_buff *skb; 1077cf60af03SYuchung Cheng int iovlen, flags, err, copied = 0; 1078cf60af03SYuchung Cheng int mss_now = 0, size_goal, copied_syn = 0, offset = 0; 1079690e99c4SEric Dumazet bool sg; 10801da177e4SLinus Torvalds long timeo; 10811da177e4SLinus Torvalds 10821da177e4SLinus Torvalds lock_sock(sk); 10831da177e4SLinus Torvalds 10841da177e4SLinus Torvalds flags = msg->msg_flags; 1085cf60af03SYuchung Cheng if (flags & MSG_FASTOPEN) { 1086cf60af03SYuchung Cheng err = tcp_sendmsg_fastopen(sk, msg, &copied_syn); 1087cf60af03SYuchung Cheng if (err == -EINPROGRESS && copied_syn > 0) 1088cf60af03SYuchung Cheng goto out; 1089cf60af03SYuchung Cheng else if (err) 1090cf60af03SYuchung Cheng goto out_err; 1091cf60af03SYuchung Cheng offset = copied_syn; 1092cf60af03SYuchung Cheng } 1093cf60af03SYuchung Cheng 10941da177e4SLinus Torvalds timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); 10951da177e4SLinus Torvalds 10968336886fSJerry Chu /* Wait for a connection to finish. One exception is TCP Fast Open 10978336886fSJerry Chu * (passive side) where data is allowed to be sent before a connection 10988336886fSJerry Chu * is fully established. 10998336886fSJerry Chu */ 11008336886fSJerry Chu if (((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) && 11018336886fSJerry Chu !tcp_passive_fastopen(sk)) { 11021da177e4SLinus Torvalds if ((err = sk_stream_wait_connect(sk, &timeo)) != 0) 1103cf60af03SYuchung Cheng goto do_error; 11048336886fSJerry Chu } 11051da177e4SLinus Torvalds 1106c0e88ff0SPavel Emelyanov if (unlikely(tp->repair)) { 1107c0e88ff0SPavel Emelyanov if (tp->repair_queue == TCP_RECV_QUEUE) { 1108c0e88ff0SPavel Emelyanov copied = tcp_send_rcvq(sk, msg, size); 1109c0e88ff0SPavel Emelyanov goto out; 1110c0e88ff0SPavel Emelyanov } 1111c0e88ff0SPavel Emelyanov 1112c0e88ff0SPavel Emelyanov err = -EINVAL; 1113c0e88ff0SPavel Emelyanov if (tp->repair_queue == TCP_NO_QUEUE) 1114c0e88ff0SPavel Emelyanov goto out_err; 1115c0e88ff0SPavel Emelyanov 1116c0e88ff0SPavel Emelyanov /* 'common' sending to sendq */ 1117c0e88ff0SPavel Emelyanov } 1118c0e88ff0SPavel Emelyanov 11191da177e4SLinus Torvalds /* This should be in poll */ 11201da177e4SLinus Torvalds clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 11211da177e4SLinus Torvalds 11220c54b85fSIlpo Järvinen mss_now = tcp_send_mss(sk, &size_goal, flags); 11231da177e4SLinus Torvalds 11241da177e4SLinus Torvalds /* Ok commence sending. */ 11251da177e4SLinus Torvalds iovlen = msg->msg_iovlen; 11261da177e4SLinus Torvalds iov = msg->msg_iov; 11271da177e4SLinus Torvalds copied = 0; 11281da177e4SLinus Torvalds 11291da177e4SLinus Torvalds err = -EPIPE; 11301da177e4SLinus Torvalds if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) 11310d6a775eSIlpo Järvinen goto out_err; 11321da177e4SLinus Torvalds 1133690e99c4SEric Dumazet sg = !!(sk->sk_route_caps & NETIF_F_SG); 1134def87cf4SKrishna Kumar 11351da177e4SLinus Torvalds while (--iovlen >= 0) { 113601db403cSDavid S. Miller size_t seglen = iov->iov_len; 11371da177e4SLinus Torvalds unsigned char __user *from = iov->iov_base; 11381da177e4SLinus Torvalds 11391da177e4SLinus Torvalds iov++; 1140cf60af03SYuchung Cheng if (unlikely(offset > 0)) { /* Skip bytes copied in SYN */ 1141cf60af03SYuchung Cheng if (offset >= seglen) { 1142cf60af03SYuchung Cheng offset -= seglen; 1143cf60af03SYuchung Cheng continue; 1144cf60af03SYuchung Cheng } 1145cf60af03SYuchung Cheng seglen -= offset; 1146cf60af03SYuchung Cheng from += offset; 1147cf60af03SYuchung Cheng offset = 0; 1148cf60af03SYuchung Cheng } 11491da177e4SLinus Torvalds 11501da177e4SLinus Torvalds while (seglen > 0) { 11516828b92bSHerbert Xu int copy = 0; 11526828b92bSHerbert Xu int max = size_goal; 11531da177e4SLinus Torvalds 1154fe067e8aSDavid S. Miller skb = tcp_write_queue_tail(sk); 11556828b92bSHerbert Xu if (tcp_send_head(sk)) { 11566828b92bSHerbert Xu if (skb->ip_summed == CHECKSUM_NONE) 11576828b92bSHerbert Xu max = mss_now; 11586828b92bSHerbert Xu copy = max - skb->len; 11596828b92bSHerbert Xu } 11601da177e4SLinus Torvalds 11616828b92bSHerbert Xu if (copy <= 0) { 11621da177e4SLinus Torvalds new_segment: 11631da177e4SLinus Torvalds /* Allocate new segment. If the interface is SG, 11641da177e4SLinus Torvalds * allocate skb fitting to single page. 11651da177e4SLinus Torvalds */ 11661da177e4SLinus Torvalds if (!sk_stream_memory_free(sk)) 11671da177e4SLinus Torvalds goto wait_for_sndbuf; 11681da177e4SLinus Torvalds 1169def87cf4SKrishna Kumar skb = sk_stream_alloc_skb(sk, 1170def87cf4SKrishna Kumar select_size(sk, sg), 1171df97c708SPavel Emelyanov sk->sk_allocation); 11721da177e4SLinus Torvalds if (!skb) 11731da177e4SLinus Torvalds goto wait_for_memory; 11741da177e4SLinus Torvalds 11751da177e4SLinus Torvalds /* 11767ed5c5aeSAndrey Vagin * All packets are restored as if they have 11777ed5c5aeSAndrey Vagin * already been sent. 11787ed5c5aeSAndrey Vagin */ 11797ed5c5aeSAndrey Vagin if (tp->repair) 11807ed5c5aeSAndrey Vagin TCP_SKB_CB(skb)->when = tcp_time_stamp; 11817ed5c5aeSAndrey Vagin 11827ed5c5aeSAndrey Vagin /* 11831da177e4SLinus Torvalds * Check whether we can use HW checksum. 11841da177e4SLinus Torvalds */ 11858648b305SHerbert Xu if (sk->sk_route_caps & NETIF_F_ALL_CSUM) 118684fa7933SPatrick McHardy skb->ip_summed = CHECKSUM_PARTIAL; 11871da177e4SLinus Torvalds 11889e412ba7SIlpo Järvinen skb_entail(sk, skb); 1189c1b4a7e6SDavid S. Miller copy = size_goal; 11906828b92bSHerbert Xu max = size_goal; 11911da177e4SLinus Torvalds } 11921da177e4SLinus Torvalds 11931da177e4SLinus Torvalds /* Try to append data to the end of skb. */ 11941da177e4SLinus Torvalds if (copy > seglen) 11951da177e4SLinus Torvalds copy = seglen; 11961da177e4SLinus Torvalds 11971da177e4SLinus Torvalds /* Where to copy to? */ 1198a21d4572SEric Dumazet if (skb_availroom(skb) > 0) { 11991da177e4SLinus Torvalds /* We have some space in skb head. Superb! */ 1200a21d4572SEric Dumazet copy = min_t(int, copy, skb_availroom(skb)); 1201c6e1a0d1STom Herbert err = skb_add_data_nocache(sk, skb, from, copy); 1202c6e1a0d1STom Herbert if (err) 12031da177e4SLinus Torvalds goto do_fault; 12041da177e4SLinus Torvalds } else { 12055640f768SEric Dumazet bool merge = true; 12061da177e4SLinus Torvalds int i = skb_shinfo(skb)->nr_frags; 12075640f768SEric Dumazet struct page_frag *pfrag = sk_page_frag(sk); 1208761965eaSEric Dumazet 12095640f768SEric Dumazet if (!sk_page_frag_refill(sk, pfrag)) 12105640f768SEric Dumazet goto wait_for_memory; 1211761965eaSEric Dumazet 12125640f768SEric Dumazet if (!skb_can_coalesce(skb, i, pfrag->page, 12135640f768SEric Dumazet pfrag->offset)) { 12145640f768SEric Dumazet if (i == MAX_SKB_FRAGS || !sg) { 12151da177e4SLinus Torvalds tcp_mark_push(tp, skb); 12161da177e4SLinus Torvalds goto new_segment; 12171da177e4SLinus Torvalds } 12185640f768SEric Dumazet merge = false; 12195640f768SEric Dumazet } 1220ef015786SHerbert Xu 12215640f768SEric Dumazet copy = min_t(int, copy, pfrag->size - pfrag->offset); 1222ef015786SHerbert Xu 12233ab224beSHideo Aoki if (!sk_wmem_schedule(sk, copy)) 1224ef015786SHerbert Xu goto wait_for_memory; 12251da177e4SLinus Torvalds 1226c6e1a0d1STom Herbert err = skb_copy_to_page_nocache(sk, from, skb, 12275640f768SEric Dumazet pfrag->page, 12285640f768SEric Dumazet pfrag->offset, 12295640f768SEric Dumazet copy); 12305640f768SEric Dumazet if (err) 12311da177e4SLinus Torvalds goto do_error; 12321da177e4SLinus Torvalds 12331da177e4SLinus Torvalds /* Update the skb. */ 12341da177e4SLinus Torvalds if (merge) { 12359e903e08SEric Dumazet skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); 12361da177e4SLinus Torvalds } else { 12375640f768SEric Dumazet skb_fill_page_desc(skb, i, pfrag->page, 12385640f768SEric Dumazet pfrag->offset, copy); 12395640f768SEric Dumazet get_page(pfrag->page); 12401da177e4SLinus Torvalds } 12415640f768SEric Dumazet pfrag->offset += copy; 12421da177e4SLinus Torvalds } 12431da177e4SLinus Torvalds 12441da177e4SLinus Torvalds if (!copied) 12454de075e0SEric Dumazet TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH; 12461da177e4SLinus Torvalds 12471da177e4SLinus Torvalds tp->write_seq += copy; 12481da177e4SLinus Torvalds TCP_SKB_CB(skb)->end_seq += copy; 12497967168cSHerbert Xu skb_shinfo(skb)->gso_segs = 0; 12501da177e4SLinus Torvalds 12511da177e4SLinus Torvalds from += copy; 12521da177e4SLinus Torvalds copied += copy; 12531da177e4SLinus Torvalds if ((seglen -= copy) == 0 && iovlen == 0) 12541da177e4SLinus Torvalds goto out; 12551da177e4SLinus Torvalds 1256c0e88ff0SPavel Emelyanov if (skb->len < max || (flags & MSG_OOB) || unlikely(tp->repair)) 12571da177e4SLinus Torvalds continue; 12581da177e4SLinus Torvalds 12591da177e4SLinus Torvalds if (forced_push(tp)) { 12601da177e4SLinus Torvalds tcp_mark_push(tp, skb); 12619e412ba7SIlpo Järvinen __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH); 1262fe067e8aSDavid S. Miller } else if (skb == tcp_send_head(sk)) 12631da177e4SLinus Torvalds tcp_push_one(sk, mss_now); 12641da177e4SLinus Torvalds continue; 12651da177e4SLinus Torvalds 12661da177e4SLinus Torvalds wait_for_sndbuf: 12671da177e4SLinus Torvalds set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 12681da177e4SLinus Torvalds wait_for_memory: 1269ec342325SAndrew Vagin if (copied) 1270f54b3111SEric Dumazet tcp_push(sk, flags & ~MSG_MORE, mss_now, 1271f54b3111SEric Dumazet TCP_NAGLE_PUSH, size_goal); 12721da177e4SLinus Torvalds 12731da177e4SLinus Torvalds if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) 12741da177e4SLinus Torvalds goto do_error; 12751da177e4SLinus Torvalds 12760c54b85fSIlpo Järvinen mss_now = tcp_send_mss(sk, &size_goal, flags); 12771da177e4SLinus Torvalds } 12781da177e4SLinus Torvalds } 12791da177e4SLinus Torvalds 12801da177e4SLinus Torvalds out: 1281ec342325SAndrew Vagin if (copied) 1282f54b3111SEric Dumazet tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); 12831da177e4SLinus Torvalds release_sock(sk); 1284cf60af03SYuchung Cheng return copied + copied_syn; 12851da177e4SLinus Torvalds 12861da177e4SLinus Torvalds do_fault: 12871da177e4SLinus Torvalds if (!skb->len) { 1288fe067e8aSDavid S. Miller tcp_unlink_write_queue(skb, sk); 1289fe067e8aSDavid S. Miller /* It is the one place in all of TCP, except connection 1290fe067e8aSDavid S. Miller * reset, where we can be unlinking the send_head. 1291fe067e8aSDavid S. Miller */ 1292fe067e8aSDavid S. Miller tcp_check_send_head(sk, skb); 12933ab224beSHideo Aoki sk_wmem_free_skb(sk, skb); 12941da177e4SLinus Torvalds } 12951da177e4SLinus Torvalds 12961da177e4SLinus Torvalds do_error: 1297cf60af03SYuchung Cheng if (copied + copied_syn) 12981da177e4SLinus Torvalds goto out; 12991da177e4SLinus Torvalds out_err: 13001da177e4SLinus Torvalds err = sk_stream_error(sk, flags, err); 13011da177e4SLinus Torvalds release_sock(sk); 13021da177e4SLinus Torvalds return err; 13031da177e4SLinus Torvalds } 13044bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_sendmsg); 13051da177e4SLinus Torvalds 13061da177e4SLinus Torvalds /* 13071da177e4SLinus Torvalds * Handle reading urgent data. BSD has very simple semantics for 13081da177e4SLinus Torvalds * this, no blocking and very strange errors 8) 13091da177e4SLinus Torvalds */ 13101da177e4SLinus Torvalds 1311377f0a08SRami Rosen static int tcp_recv_urg(struct sock *sk, struct msghdr *msg, int len, int flags) 13121da177e4SLinus Torvalds { 13131da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 13141da177e4SLinus Torvalds 13151da177e4SLinus Torvalds /* No URG data to read. */ 13161da177e4SLinus Torvalds if (sock_flag(sk, SOCK_URGINLINE) || !tp->urg_data || 13171da177e4SLinus Torvalds tp->urg_data == TCP_URG_READ) 13181da177e4SLinus Torvalds return -EINVAL; /* Yes this is right ! */ 13191da177e4SLinus Torvalds 13201da177e4SLinus Torvalds if (sk->sk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DONE)) 13211da177e4SLinus Torvalds return -ENOTCONN; 13221da177e4SLinus Torvalds 13231da177e4SLinus Torvalds if (tp->urg_data & TCP_URG_VALID) { 13241da177e4SLinus Torvalds int err = 0; 13251da177e4SLinus Torvalds char c = tp->urg_data; 13261da177e4SLinus Torvalds 13271da177e4SLinus Torvalds if (!(flags & MSG_PEEK)) 13281da177e4SLinus Torvalds tp->urg_data = TCP_URG_READ; 13291da177e4SLinus Torvalds 13301da177e4SLinus Torvalds /* Read urgent data. */ 13311da177e4SLinus Torvalds msg->msg_flags |= MSG_OOB; 13321da177e4SLinus Torvalds 13331da177e4SLinus Torvalds if (len > 0) { 13341da177e4SLinus Torvalds if (!(flags & MSG_TRUNC)) 13351da177e4SLinus Torvalds err = memcpy_toiovec(msg->msg_iov, &c, 1); 13361da177e4SLinus Torvalds len = 1; 13371da177e4SLinus Torvalds } else 13381da177e4SLinus Torvalds msg->msg_flags |= MSG_TRUNC; 13391da177e4SLinus Torvalds 13401da177e4SLinus Torvalds return err ? -EFAULT : len; 13411da177e4SLinus Torvalds } 13421da177e4SLinus Torvalds 13431da177e4SLinus Torvalds if (sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN)) 13441da177e4SLinus Torvalds return 0; 13451da177e4SLinus Torvalds 13461da177e4SLinus Torvalds /* Fixed the recv(..., MSG_OOB) behaviour. BSD docs and 13471da177e4SLinus Torvalds * the available implementations agree in this case: 13481da177e4SLinus Torvalds * this call should never block, independent of the 13491da177e4SLinus Torvalds * blocking state of the socket. 13501da177e4SLinus Torvalds * Mike <pall@rz.uni-karlsruhe.de> 13511da177e4SLinus Torvalds */ 13521da177e4SLinus Torvalds return -EAGAIN; 13531da177e4SLinus Torvalds } 13541da177e4SLinus Torvalds 1355c0e88ff0SPavel Emelyanov static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len) 1356c0e88ff0SPavel Emelyanov { 1357c0e88ff0SPavel Emelyanov struct sk_buff *skb; 1358c0e88ff0SPavel Emelyanov int copied = 0, err = 0; 1359c0e88ff0SPavel Emelyanov 1360c0e88ff0SPavel Emelyanov /* XXX -- need to support SO_PEEK_OFF */ 1361c0e88ff0SPavel Emelyanov 1362c0e88ff0SPavel Emelyanov skb_queue_walk(&sk->sk_write_queue, skb) { 1363c0e88ff0SPavel Emelyanov err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, skb->len); 1364c0e88ff0SPavel Emelyanov if (err) 1365c0e88ff0SPavel Emelyanov break; 1366c0e88ff0SPavel Emelyanov 1367c0e88ff0SPavel Emelyanov copied += skb->len; 1368c0e88ff0SPavel Emelyanov } 1369c0e88ff0SPavel Emelyanov 1370c0e88ff0SPavel Emelyanov return err ?: copied; 1371c0e88ff0SPavel Emelyanov } 1372c0e88ff0SPavel Emelyanov 13731da177e4SLinus Torvalds /* Clean up the receive buffer for full frames taken by the user, 13741da177e4SLinus Torvalds * then send an ACK if necessary. COPIED is the number of bytes 13751da177e4SLinus Torvalds * tcp_recvmsg has given to the user so far, it speeds up the 13761da177e4SLinus Torvalds * calculation of whether or not we must ACK for the sake of 13771da177e4SLinus Torvalds * a window update. 13781da177e4SLinus Torvalds */ 13790e4b4992SChris Leech void tcp_cleanup_rbuf(struct sock *sk, int copied) 13801da177e4SLinus Torvalds { 13811da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 1382a2a385d6SEric Dumazet bool time_to_ack = false; 13831da177e4SLinus Torvalds 13841da177e4SLinus Torvalds struct sk_buff *skb = skb_peek(&sk->sk_receive_queue); 13851da177e4SLinus Torvalds 1386d792c100SIlpo Järvinen WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq), 13872af6fd8bSJoe Perches "cleanup rbuf bug: copied %X seq %X rcvnxt %X\n", 1388d792c100SIlpo Järvinen tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt); 13891da177e4SLinus Torvalds 1390463c84b9SArnaldo Carvalho de Melo if (inet_csk_ack_scheduled(sk)) { 1391463c84b9SArnaldo Carvalho de Melo const struct inet_connection_sock *icsk = inet_csk(sk); 13921da177e4SLinus Torvalds /* Delayed ACKs frequently hit locked sockets during bulk 13931da177e4SLinus Torvalds * receive. */ 1394463c84b9SArnaldo Carvalho de Melo if (icsk->icsk_ack.blocked || 13951da177e4SLinus Torvalds /* Once-per-two-segments ACK was not sent by tcp_input.c */ 1396463c84b9SArnaldo Carvalho de Melo tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss || 13971da177e4SLinus Torvalds /* 13981da177e4SLinus Torvalds * If this read emptied read buffer, we send ACK, if 13991da177e4SLinus Torvalds * connection is not bidirectional, user drained 14001da177e4SLinus Torvalds * receive buffer and there was a small segment 14011da177e4SLinus Torvalds * in queue. 14021da177e4SLinus Torvalds */ 14031ef9696cSAlexey Kuznetsov (copied > 0 && 14041ef9696cSAlexey Kuznetsov ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED2) || 14051ef9696cSAlexey Kuznetsov ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) && 14061ef9696cSAlexey Kuznetsov !icsk->icsk_ack.pingpong)) && 14071ef9696cSAlexey Kuznetsov !atomic_read(&sk->sk_rmem_alloc))) 1408a2a385d6SEric Dumazet time_to_ack = true; 14091da177e4SLinus Torvalds } 14101da177e4SLinus Torvalds 14111da177e4SLinus Torvalds /* We send an ACK if we can now advertise a non-zero window 14121da177e4SLinus Torvalds * which has been raised "significantly". 14131da177e4SLinus Torvalds * 14141da177e4SLinus Torvalds * Even if window raised up to infinity, do not send window open ACK 14151da177e4SLinus Torvalds * in states, where we will not receive more. It is useless. 14161da177e4SLinus Torvalds */ 14171da177e4SLinus Torvalds if (copied > 0 && !time_to_ack && !(sk->sk_shutdown & RCV_SHUTDOWN)) { 14181da177e4SLinus Torvalds __u32 rcv_window_now = tcp_receive_window(tp); 14191da177e4SLinus Torvalds 14201da177e4SLinus Torvalds /* Optimize, __tcp_select_window() is not cheap. */ 14211da177e4SLinus Torvalds if (2*rcv_window_now <= tp->window_clamp) { 14221da177e4SLinus Torvalds __u32 new_window = __tcp_select_window(sk); 14231da177e4SLinus Torvalds 14241da177e4SLinus Torvalds /* Send ACK now, if this read freed lots of space 14251da177e4SLinus Torvalds * in our buffer. Certainly, new_window is new window. 14261da177e4SLinus Torvalds * We can advertise it now, if it is not less than current one. 14271da177e4SLinus Torvalds * "Lots" means "at least twice" here. 14281da177e4SLinus Torvalds */ 14291da177e4SLinus Torvalds if (new_window && new_window >= 2 * rcv_window_now) 1430a2a385d6SEric Dumazet time_to_ack = true; 14311da177e4SLinus Torvalds } 14321da177e4SLinus Torvalds } 14331da177e4SLinus Torvalds if (time_to_ack) 14341da177e4SLinus Torvalds tcp_send_ack(sk); 14351da177e4SLinus Torvalds } 14361da177e4SLinus Torvalds 14371da177e4SLinus Torvalds static void tcp_prequeue_process(struct sock *sk) 14381da177e4SLinus Torvalds { 14391da177e4SLinus Torvalds struct sk_buff *skb; 14401da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 14411da177e4SLinus Torvalds 14426f67c817SPavel Emelyanov NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPPREQUEUED); 14431da177e4SLinus Torvalds 14441da177e4SLinus Torvalds /* RX process wants to run with disabled BHs, though it is not 14451da177e4SLinus Torvalds * necessary */ 14461da177e4SLinus Torvalds local_bh_disable(); 14471da177e4SLinus Torvalds while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) 1448c57943a1SPeter Zijlstra sk_backlog_rcv(sk, skb); 14491da177e4SLinus Torvalds local_bh_enable(); 14501da177e4SLinus Torvalds 14511da177e4SLinus Torvalds /* Clear memory counter. */ 14521da177e4SLinus Torvalds tp->ucopy.memory = 0; 14531da177e4SLinus Torvalds } 14541da177e4SLinus Torvalds 145573852e81SSteven J. Magnani #ifdef CONFIG_NET_DMA 145673852e81SSteven J. Magnani static void tcp_service_net_dma(struct sock *sk, bool wait) 145773852e81SSteven J. Magnani { 145873852e81SSteven J. Magnani dma_cookie_t done, used; 145973852e81SSteven J. Magnani dma_cookie_t last_issued; 146073852e81SSteven J. Magnani struct tcp_sock *tp = tcp_sk(sk); 146173852e81SSteven J. Magnani 146273852e81SSteven J. Magnani if (!tp->ucopy.dma_chan) 146373852e81SSteven J. Magnani return; 146473852e81SSteven J. Magnani 146573852e81SSteven J. Magnani last_issued = tp->ucopy.dma_cookie; 1466b9ee8683SBartlomiej Zolnierkiewicz dma_async_issue_pending(tp->ucopy.dma_chan); 146773852e81SSteven J. Magnani 146873852e81SSteven J. Magnani do { 1469e239345fSBartlomiej Zolnierkiewicz if (dma_async_is_tx_complete(tp->ucopy.dma_chan, 147073852e81SSteven J. Magnani last_issued, &done, 147127bf6970SVinod Koul &used) == DMA_COMPLETE) { 147273852e81SSteven J. Magnani /* Safe to free early-copied skbs now */ 147373852e81SSteven J. Magnani __skb_queue_purge(&sk->sk_async_wait_queue); 147473852e81SSteven J. Magnani break; 147573852e81SSteven J. Magnani } else { 147673852e81SSteven J. Magnani struct sk_buff *skb; 147773852e81SSteven J. Magnani while ((skb = skb_peek(&sk->sk_async_wait_queue)) && 147873852e81SSteven J. Magnani (dma_async_is_complete(skb->dma_cookie, done, 147927bf6970SVinod Koul used) == DMA_COMPLETE)) { 148073852e81SSteven J. Magnani __skb_dequeue(&sk->sk_async_wait_queue); 148173852e81SSteven J. Magnani kfree_skb(skb); 148273852e81SSteven J. Magnani } 148373852e81SSteven J. Magnani } 148473852e81SSteven J. Magnani } while (wait); 148573852e81SSteven J. Magnani } 148673852e81SSteven J. Magnani #endif 148773852e81SSteven J. Magnani 1488f26845b4SEric Dumazet static struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off) 14891da177e4SLinus Torvalds { 14901da177e4SLinus Torvalds struct sk_buff *skb; 14911da177e4SLinus Torvalds u32 offset; 14921da177e4SLinus Torvalds 1493f26845b4SEric Dumazet while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) { 14941da177e4SLinus Torvalds offset = seq - TCP_SKB_CB(skb)->seq; 1495aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->syn) 14961da177e4SLinus Torvalds offset--; 1497aa8223c7SArnaldo Carvalho de Melo if (offset < skb->len || tcp_hdr(skb)->fin) { 14981da177e4SLinus Torvalds *off = offset; 14991da177e4SLinus Torvalds return skb; 15001da177e4SLinus Torvalds } 1501f26845b4SEric Dumazet /* This looks weird, but this can happen if TCP collapsing 1502f26845b4SEric Dumazet * splitted a fat GRO packet, while we released socket lock 1503f26845b4SEric Dumazet * in skb_splice_bits() 1504f26845b4SEric Dumazet */ 1505f26845b4SEric Dumazet sk_eat_skb(sk, skb, false); 15061da177e4SLinus Torvalds } 15071da177e4SLinus Torvalds return NULL; 15081da177e4SLinus Torvalds } 15091da177e4SLinus Torvalds 15101da177e4SLinus Torvalds /* 15111da177e4SLinus Torvalds * This routine provides an alternative to tcp_recvmsg() for routines 15121da177e4SLinus Torvalds * that would like to handle copying from skbuffs directly in 'sendfile' 15131da177e4SLinus Torvalds * fashion. 15141da177e4SLinus Torvalds * Note: 15151da177e4SLinus Torvalds * - It is assumed that the socket was locked by the caller. 15161da177e4SLinus Torvalds * - The routine does not block. 15171da177e4SLinus Torvalds * - At present, there is no support for reading OOB data 15181da177e4SLinus Torvalds * or for 'peeking' the socket using this routine 15191da177e4SLinus Torvalds * (although both would be easy to implement). 15201da177e4SLinus Torvalds */ 15211da177e4SLinus Torvalds int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 15221da177e4SLinus Torvalds sk_read_actor_t recv_actor) 15231da177e4SLinus Torvalds { 15241da177e4SLinus Torvalds struct sk_buff *skb; 15251da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 15261da177e4SLinus Torvalds u32 seq = tp->copied_seq; 15271da177e4SLinus Torvalds u32 offset; 15281da177e4SLinus Torvalds int copied = 0; 15291da177e4SLinus Torvalds 15301da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 15311da177e4SLinus Torvalds return -ENOTCONN; 15321da177e4SLinus Torvalds while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) { 15331da177e4SLinus Torvalds if (offset < skb->len) { 1534374e7b59SOctavian Purdila int used; 1535374e7b59SOctavian Purdila size_t len; 15361da177e4SLinus Torvalds 15371da177e4SLinus Torvalds len = skb->len - offset; 15381da177e4SLinus Torvalds /* Stop reading if we hit a patch of urgent data */ 15391da177e4SLinus Torvalds if (tp->urg_data) { 15401da177e4SLinus Torvalds u32 urg_offset = tp->urg_seq - seq; 15411da177e4SLinus Torvalds if (urg_offset < len) 15421da177e4SLinus Torvalds len = urg_offset; 15431da177e4SLinus Torvalds if (!len) 15441da177e4SLinus Torvalds break; 15451da177e4SLinus Torvalds } 15461da177e4SLinus Torvalds used = recv_actor(desc, skb, offset, len); 1547ff905b1eSEric Dumazet if (used <= 0) { 1548ddb61a57SJens Axboe if (!copied) 1549ddb61a57SJens Axboe copied = used; 1550ddb61a57SJens Axboe break; 1551ddb61a57SJens Axboe } else if (used <= len) { 15521da177e4SLinus Torvalds seq += used; 15531da177e4SLinus Torvalds copied += used; 15541da177e4SLinus Torvalds offset += used; 15551da177e4SLinus Torvalds } 155602275a2eSWilly Tarreau /* If recv_actor drops the lock (e.g. TCP splice 1557293ad604SOctavian Purdila * receive) the skb pointer might be invalid when 1558293ad604SOctavian Purdila * getting here: tcp_collapse might have deleted it 1559293ad604SOctavian Purdila * while aggregating skbs from the socket queue. 1560293ad604SOctavian Purdila */ 1561293ad604SOctavian Purdila skb = tcp_recv_skb(sk, seq - 1, &offset); 156202275a2eSWilly Tarreau if (!skb) 15631da177e4SLinus Torvalds break; 156402275a2eSWilly Tarreau /* TCP coalescing might have appended data to the skb. 156502275a2eSWilly Tarreau * Try to splice more frags 156602275a2eSWilly Tarreau */ 156702275a2eSWilly Tarreau if (offset + 1 != skb->len) 156802275a2eSWilly Tarreau continue; 15691da177e4SLinus Torvalds } 1570aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->fin) { 1571dc6b9b78SEric Dumazet sk_eat_skb(sk, skb, false); 15721da177e4SLinus Torvalds ++seq; 15731da177e4SLinus Torvalds break; 15741da177e4SLinus Torvalds } 1575dc6b9b78SEric Dumazet sk_eat_skb(sk, skb, false); 15761da177e4SLinus Torvalds if (!desc->count) 15771da177e4SLinus Torvalds break; 1578baff42abSSteven J. Magnani tp->copied_seq = seq; 15791da177e4SLinus Torvalds } 15801da177e4SLinus Torvalds tp->copied_seq = seq; 15811da177e4SLinus Torvalds 15821da177e4SLinus Torvalds tcp_rcv_space_adjust(sk); 15831da177e4SLinus Torvalds 15841da177e4SLinus Torvalds /* Clean up data we have read: This will do ACK frames. */ 1585f26845b4SEric Dumazet if (copied > 0) { 1586f26845b4SEric Dumazet tcp_recv_skb(sk, seq, &offset); 15870e4b4992SChris Leech tcp_cleanup_rbuf(sk, copied); 1588f26845b4SEric Dumazet } 15891da177e4SLinus Torvalds return copied; 15901da177e4SLinus Torvalds } 15914bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_read_sock); 15921da177e4SLinus Torvalds 15931da177e4SLinus Torvalds /* 15941da177e4SLinus Torvalds * This routine copies from a sock struct into the user buffer. 15951da177e4SLinus Torvalds * 15961da177e4SLinus Torvalds * Technical note: in 2.3 we work on _locked_ socket, so that 15971da177e4SLinus Torvalds * tricks with *seq access order and skb->users are not required. 15981da177e4SLinus Torvalds * Probably, code can be easily improved even more. 15991da177e4SLinus Torvalds */ 16001da177e4SLinus Torvalds 16011da177e4SLinus Torvalds int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 16021da177e4SLinus Torvalds size_t len, int nonblock, int flags, int *addr_len) 16031da177e4SLinus Torvalds { 16041da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 16051da177e4SLinus Torvalds int copied = 0; 16061da177e4SLinus Torvalds u32 peek_seq; 16071da177e4SLinus Torvalds u32 *seq; 16081da177e4SLinus Torvalds unsigned long used; 16091da177e4SLinus Torvalds int err; 16101da177e4SLinus Torvalds int target; /* Read at least this many bytes */ 16111da177e4SLinus Torvalds long timeo; 16121da177e4SLinus Torvalds struct task_struct *user_recv = NULL; 1613dc6b9b78SEric Dumazet bool copied_early = false; 16142b1244a4SChris Leech struct sk_buff *skb; 161577527313SIlpo Järvinen u32 urg_hole = 0; 16161da177e4SLinus Torvalds 1617cbf55001SEliezer Tamir if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) && 1618cbf55001SEliezer Tamir (sk->sk_state == TCP_ESTABLISHED)) 1619cbf55001SEliezer Tamir sk_busy_loop(sk, nonblock); 1620d30e383bSEliezer Tamir 16211da177e4SLinus Torvalds lock_sock(sk); 16221da177e4SLinus Torvalds 16231da177e4SLinus Torvalds err = -ENOTCONN; 16241da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 16251da177e4SLinus Torvalds goto out; 16261da177e4SLinus Torvalds 16271da177e4SLinus Torvalds timeo = sock_rcvtimeo(sk, nonblock); 16281da177e4SLinus Torvalds 16291da177e4SLinus Torvalds /* Urgent data needs to be handled specially. */ 16301da177e4SLinus Torvalds if (flags & MSG_OOB) 16311da177e4SLinus Torvalds goto recv_urg; 16321da177e4SLinus Torvalds 1633c0e88ff0SPavel Emelyanov if (unlikely(tp->repair)) { 1634c0e88ff0SPavel Emelyanov err = -EPERM; 1635c0e88ff0SPavel Emelyanov if (!(flags & MSG_PEEK)) 1636c0e88ff0SPavel Emelyanov goto out; 1637c0e88ff0SPavel Emelyanov 1638c0e88ff0SPavel Emelyanov if (tp->repair_queue == TCP_SEND_QUEUE) 1639c0e88ff0SPavel Emelyanov goto recv_sndq; 1640c0e88ff0SPavel Emelyanov 1641c0e88ff0SPavel Emelyanov err = -EINVAL; 1642c0e88ff0SPavel Emelyanov if (tp->repair_queue == TCP_NO_QUEUE) 1643c0e88ff0SPavel Emelyanov goto out; 1644c0e88ff0SPavel Emelyanov 1645c0e88ff0SPavel Emelyanov /* 'common' recv queue MSG_PEEK-ing */ 1646c0e88ff0SPavel Emelyanov } 1647c0e88ff0SPavel Emelyanov 16481da177e4SLinus Torvalds seq = &tp->copied_seq; 16491da177e4SLinus Torvalds if (flags & MSG_PEEK) { 16501da177e4SLinus Torvalds peek_seq = tp->copied_seq; 16511da177e4SLinus Torvalds seq = &peek_seq; 16521da177e4SLinus Torvalds } 16531da177e4SLinus Torvalds 16541da177e4SLinus Torvalds target = sock_rcvlowat(sk, flags & MSG_WAITALL, len); 16551da177e4SLinus Torvalds 16561a2449a8SChris Leech #ifdef CONFIG_NET_DMA 16571a2449a8SChris Leech tp->ucopy.dma_chan = NULL; 16581a2449a8SChris Leech preempt_disable(); 16592b1244a4SChris Leech skb = skb_peek_tail(&sk->sk_receive_queue); 1660e00c5d8bSAndrew Morton { 1661e00c5d8bSAndrew Morton int available = 0; 1662e00c5d8bSAndrew Morton 16632b1244a4SChris Leech if (skb) 16642b1244a4SChris Leech available = TCP_SKB_CB(skb)->seq + skb->len - (*seq); 16652b1244a4SChris Leech if ((available < target) && 16662b1244a4SChris Leech (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) && 1667e00c5d8bSAndrew Morton !sysctl_tcp_low_latency && 1668a2bd1140SDave Jiang net_dma_find_channel()) { 16691774e9f3SPeter Zijlstra preempt_enable(); 1670e00c5d8bSAndrew Morton tp->ucopy.pinned_list = 1671e00c5d8bSAndrew Morton dma_pin_iovec_pages(msg->msg_iov, len); 1672e00c5d8bSAndrew Morton } else { 16731774e9f3SPeter Zijlstra preempt_enable(); 1674e00c5d8bSAndrew Morton } 1675e00c5d8bSAndrew Morton } 16761a2449a8SChris Leech #endif 16771a2449a8SChris Leech 16781da177e4SLinus Torvalds do { 16791da177e4SLinus Torvalds u32 offset; 16801da177e4SLinus Torvalds 16811da177e4SLinus Torvalds /* Are we at urgent data? Stop if we have read anything or have SIGURG pending. */ 16821da177e4SLinus Torvalds if (tp->urg_data && tp->urg_seq == *seq) { 16831da177e4SLinus Torvalds if (copied) 16841da177e4SLinus Torvalds break; 16851da177e4SLinus Torvalds if (signal_pending(current)) { 16861da177e4SLinus Torvalds copied = timeo ? sock_intr_errno(timeo) : -EAGAIN; 16871da177e4SLinus Torvalds break; 16881da177e4SLinus Torvalds } 16891da177e4SLinus Torvalds } 16901da177e4SLinus Torvalds 16911da177e4SLinus Torvalds /* Next get a buffer. */ 16921da177e4SLinus Torvalds 169391521944SDavid S. Miller skb_queue_walk(&sk->sk_receive_queue, skb) { 16941da177e4SLinus Torvalds /* Now that we have two receive queues this 16951da177e4SLinus Torvalds * shouldn't happen. 16961da177e4SLinus Torvalds */ 1697d792c100SIlpo Järvinen if (WARN(before(*seq, TCP_SKB_CB(skb)->seq), 16982af6fd8bSJoe Perches "recvmsg bug: copied %X seq %X rcvnxt %X fl %X\n", 16992af6fd8bSJoe Perches *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt, 1700d792c100SIlpo Järvinen flags)) 17011da177e4SLinus Torvalds break; 1702d792c100SIlpo Järvinen 17031da177e4SLinus Torvalds offset = *seq - TCP_SKB_CB(skb)->seq; 1704aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->syn) 17051da177e4SLinus Torvalds offset--; 17061da177e4SLinus Torvalds if (offset < skb->len) 17071da177e4SLinus Torvalds goto found_ok_skb; 1708aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->fin) 17091da177e4SLinus Torvalds goto found_fin_ok; 17102af6fd8bSJoe Perches WARN(!(flags & MSG_PEEK), 17112af6fd8bSJoe Perches "recvmsg bug 2: copied %X seq %X rcvnxt %X fl %X\n", 17122af6fd8bSJoe Perches *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt, flags); 171391521944SDavid S. Miller } 17141da177e4SLinus Torvalds 17151da177e4SLinus Torvalds /* Well, if we have backlog, try to process it now yet. */ 17161da177e4SLinus Torvalds 17171da177e4SLinus Torvalds if (copied >= target && !sk->sk_backlog.tail) 17181da177e4SLinus Torvalds break; 17191da177e4SLinus Torvalds 17201da177e4SLinus Torvalds if (copied) { 17211da177e4SLinus Torvalds if (sk->sk_err || 17221da177e4SLinus Torvalds sk->sk_state == TCP_CLOSE || 17231da177e4SLinus Torvalds (sk->sk_shutdown & RCV_SHUTDOWN) || 17241da177e4SLinus Torvalds !timeo || 1725518a09efSDavid S. Miller signal_pending(current)) 17261da177e4SLinus Torvalds break; 17271da177e4SLinus Torvalds } else { 17281da177e4SLinus Torvalds if (sock_flag(sk, SOCK_DONE)) 17291da177e4SLinus Torvalds break; 17301da177e4SLinus Torvalds 17311da177e4SLinus Torvalds if (sk->sk_err) { 17321da177e4SLinus Torvalds copied = sock_error(sk); 17331da177e4SLinus Torvalds break; 17341da177e4SLinus Torvalds } 17351da177e4SLinus Torvalds 17361da177e4SLinus Torvalds if (sk->sk_shutdown & RCV_SHUTDOWN) 17371da177e4SLinus Torvalds break; 17381da177e4SLinus Torvalds 17391da177e4SLinus Torvalds if (sk->sk_state == TCP_CLOSE) { 17401da177e4SLinus Torvalds if (!sock_flag(sk, SOCK_DONE)) { 17411da177e4SLinus Torvalds /* This occurs when user tries to read 17421da177e4SLinus Torvalds * from never connected socket. 17431da177e4SLinus Torvalds */ 17441da177e4SLinus Torvalds copied = -ENOTCONN; 17451da177e4SLinus Torvalds break; 17461da177e4SLinus Torvalds } 17471da177e4SLinus Torvalds break; 17481da177e4SLinus Torvalds } 17491da177e4SLinus Torvalds 17501da177e4SLinus Torvalds if (!timeo) { 17511da177e4SLinus Torvalds copied = -EAGAIN; 17521da177e4SLinus Torvalds break; 17531da177e4SLinus Torvalds } 17541da177e4SLinus Torvalds 17551da177e4SLinus Torvalds if (signal_pending(current)) { 17561da177e4SLinus Torvalds copied = sock_intr_errno(timeo); 17571da177e4SLinus Torvalds break; 17581da177e4SLinus Torvalds } 17591da177e4SLinus Torvalds } 17601da177e4SLinus Torvalds 17610e4b4992SChris Leech tcp_cleanup_rbuf(sk, copied); 17621da177e4SLinus Torvalds 17637df55125SDavid S. Miller if (!sysctl_tcp_low_latency && tp->ucopy.task == user_recv) { 17641da177e4SLinus Torvalds /* Install new reader */ 17651da177e4SLinus Torvalds if (!user_recv && !(flags & (MSG_TRUNC | MSG_PEEK))) { 17661da177e4SLinus Torvalds user_recv = current; 17671da177e4SLinus Torvalds tp->ucopy.task = user_recv; 17681da177e4SLinus Torvalds tp->ucopy.iov = msg->msg_iov; 17691da177e4SLinus Torvalds } 17701da177e4SLinus Torvalds 17711da177e4SLinus Torvalds tp->ucopy.len = len; 17721da177e4SLinus Torvalds 1773547b792cSIlpo Järvinen WARN_ON(tp->copied_seq != tp->rcv_nxt && 1774547b792cSIlpo Järvinen !(flags & (MSG_PEEK | MSG_TRUNC))); 17751da177e4SLinus Torvalds 17761da177e4SLinus Torvalds /* Ugly... If prequeue is not empty, we have to 17771da177e4SLinus Torvalds * process it before releasing socket, otherwise 17781da177e4SLinus Torvalds * order will be broken at second iteration. 17791da177e4SLinus Torvalds * More elegant solution is required!!! 17801da177e4SLinus Torvalds * 17811da177e4SLinus Torvalds * Look: we have the following (pseudo)queues: 17821da177e4SLinus Torvalds * 17831da177e4SLinus Torvalds * 1. packets in flight 17841da177e4SLinus Torvalds * 2. backlog 17851da177e4SLinus Torvalds * 3. prequeue 17861da177e4SLinus Torvalds * 4. receive_queue 17871da177e4SLinus Torvalds * 17881da177e4SLinus Torvalds * Each queue can be processed only if the next ones 17891da177e4SLinus Torvalds * are empty. At this point we have empty receive_queue. 17901da177e4SLinus Torvalds * But prequeue _can_ be not empty after 2nd iteration, 17911da177e4SLinus Torvalds * when we jumped to start of loop because backlog 17921da177e4SLinus Torvalds * processing added something to receive_queue. 17931da177e4SLinus Torvalds * We cannot release_sock(), because backlog contains 17941da177e4SLinus Torvalds * packets arrived _after_ prequeued ones. 17951da177e4SLinus Torvalds * 17961da177e4SLinus Torvalds * Shortly, algorithm is clear --- to process all 17971da177e4SLinus Torvalds * the queues in order. We could make it more directly, 17981da177e4SLinus Torvalds * requeueing packets from backlog to prequeue, if 17991da177e4SLinus Torvalds * is not empty. It is more elegant, but eats cycles, 18001da177e4SLinus Torvalds * unfortunately. 18011da177e4SLinus Torvalds */ 1802b03efcfbSDavid S. Miller if (!skb_queue_empty(&tp->ucopy.prequeue)) 18031da177e4SLinus Torvalds goto do_prequeue; 18041da177e4SLinus Torvalds 18051da177e4SLinus Torvalds /* __ Set realtime policy in scheduler __ */ 18061da177e4SLinus Torvalds } 18071da177e4SLinus Torvalds 180873852e81SSteven J. Magnani #ifdef CONFIG_NET_DMA 180915c04175SMichal Kubeček if (tp->ucopy.dma_chan) { 181015c04175SMichal Kubeček if (tp->rcv_wnd == 0 && 181115c04175SMichal Kubeček !skb_queue_empty(&sk->sk_async_wait_queue)) { 181215c04175SMichal Kubeček tcp_service_net_dma(sk, true); 181315c04175SMichal Kubeček tcp_cleanup_rbuf(sk, copied); 181415c04175SMichal Kubeček } else 1815b9ee8683SBartlomiej Zolnierkiewicz dma_async_issue_pending(tp->ucopy.dma_chan); 181615c04175SMichal Kubeček } 181773852e81SSteven J. Magnani #endif 18181da177e4SLinus Torvalds if (copied >= target) { 18191da177e4SLinus Torvalds /* Do not sleep, just process backlog. */ 18201da177e4SLinus Torvalds release_sock(sk); 18211da177e4SLinus Torvalds lock_sock(sk); 18221da177e4SLinus Torvalds } else 18231da177e4SLinus Torvalds sk_wait_data(sk, &timeo); 18241da177e4SLinus Torvalds 18251a2449a8SChris Leech #ifdef CONFIG_NET_DMA 182673852e81SSteven J. Magnani tcp_service_net_dma(sk, false); /* Don't block */ 18271a2449a8SChris Leech tp->ucopy.wakeup = 0; 18281a2449a8SChris Leech #endif 18291a2449a8SChris Leech 18301da177e4SLinus Torvalds if (user_recv) { 18311da177e4SLinus Torvalds int chunk; 18321da177e4SLinus Torvalds 18331da177e4SLinus Torvalds /* __ Restore normal policy in scheduler __ */ 18341da177e4SLinus Torvalds 18351da177e4SLinus Torvalds if ((chunk = len - tp->ucopy.len) != 0) { 1836ed88098eSPavel Emelyanov NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk); 18371da177e4SLinus Torvalds len -= chunk; 18381da177e4SLinus Torvalds copied += chunk; 18391da177e4SLinus Torvalds } 18401da177e4SLinus Torvalds 18411da177e4SLinus Torvalds if (tp->rcv_nxt == tp->copied_seq && 1842b03efcfbSDavid S. Miller !skb_queue_empty(&tp->ucopy.prequeue)) { 18431da177e4SLinus Torvalds do_prequeue: 18441da177e4SLinus Torvalds tcp_prequeue_process(sk); 18451da177e4SLinus Torvalds 18461da177e4SLinus Torvalds if ((chunk = len - tp->ucopy.len) != 0) { 1847ed88098eSPavel Emelyanov NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk); 18481da177e4SLinus Torvalds len -= chunk; 18491da177e4SLinus Torvalds copied += chunk; 18501da177e4SLinus Torvalds } 18511da177e4SLinus Torvalds } 18521da177e4SLinus Torvalds } 185377527313SIlpo Järvinen if ((flags & MSG_PEEK) && 185477527313SIlpo Järvinen (peek_seq - copied - urg_hole != tp->copied_seq)) { 1855e87cc472SJoe Perches net_dbg_ratelimited("TCP(%s:%d): Application bug, race in MSG_PEEK\n", 1856e87cc472SJoe Perches current->comm, 1857e87cc472SJoe Perches task_pid_nr(current)); 18581da177e4SLinus Torvalds peek_seq = tp->copied_seq; 18591da177e4SLinus Torvalds } 18601da177e4SLinus Torvalds continue; 18611da177e4SLinus Torvalds 18621da177e4SLinus Torvalds found_ok_skb: 18631da177e4SLinus Torvalds /* Ok so how much can we use? */ 18641da177e4SLinus Torvalds used = skb->len - offset; 18651da177e4SLinus Torvalds if (len < used) 18661da177e4SLinus Torvalds used = len; 18671da177e4SLinus Torvalds 18681da177e4SLinus Torvalds /* Do we have urgent data here? */ 18691da177e4SLinus Torvalds if (tp->urg_data) { 18701da177e4SLinus Torvalds u32 urg_offset = tp->urg_seq - *seq; 18711da177e4SLinus Torvalds if (urg_offset < used) { 18721da177e4SLinus Torvalds if (!urg_offset) { 18731da177e4SLinus Torvalds if (!sock_flag(sk, SOCK_URGINLINE)) { 18741da177e4SLinus Torvalds ++*seq; 187577527313SIlpo Järvinen urg_hole++; 18761da177e4SLinus Torvalds offset++; 18771da177e4SLinus Torvalds used--; 18781da177e4SLinus Torvalds if (!used) 18791da177e4SLinus Torvalds goto skip_copy; 18801da177e4SLinus Torvalds } 18811da177e4SLinus Torvalds } else 18821da177e4SLinus Torvalds used = urg_offset; 18831da177e4SLinus Torvalds } 18841da177e4SLinus Torvalds } 18851da177e4SLinus Torvalds 18861da177e4SLinus Torvalds if (!(flags & MSG_TRUNC)) { 18871a2449a8SChris Leech #ifdef CONFIG_NET_DMA 18881a2449a8SChris Leech if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) 1889a2bd1140SDave Jiang tp->ucopy.dma_chan = net_dma_find_channel(); 18901a2449a8SChris Leech 18911a2449a8SChris Leech if (tp->ucopy.dma_chan) { 18921a2449a8SChris Leech tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec( 18931a2449a8SChris Leech tp->ucopy.dma_chan, skb, offset, 18941a2449a8SChris Leech msg->msg_iov, used, 18951a2449a8SChris Leech tp->ucopy.pinned_list); 18961a2449a8SChris Leech 18971a2449a8SChris Leech if (tp->ucopy.dma_cookie < 0) { 18981a2449a8SChris Leech 1899afd46503SJoe Perches pr_alert("%s: dma_cookie < 0\n", 1900afd46503SJoe Perches __func__); 19011a2449a8SChris Leech 19021a2449a8SChris Leech /* Exception. Bailout! */ 19031a2449a8SChris Leech if (!copied) 19041a2449a8SChris Leech copied = -EFAULT; 19051a2449a8SChris Leech break; 19061a2449a8SChris Leech } 190773852e81SSteven J. Magnani 1908b9ee8683SBartlomiej Zolnierkiewicz dma_async_issue_pending(tp->ucopy.dma_chan); 190973852e81SSteven J. Magnani 19101a2449a8SChris Leech if ((offset + used) == skb->len) 1911dc6b9b78SEric Dumazet copied_early = true; 19121a2449a8SChris Leech 19131a2449a8SChris Leech } else 19141a2449a8SChris Leech #endif 19151a2449a8SChris Leech { 19161da177e4SLinus Torvalds err = skb_copy_datagram_iovec(skb, offset, 19171da177e4SLinus Torvalds msg->msg_iov, used); 19181da177e4SLinus Torvalds if (err) { 19191da177e4SLinus Torvalds /* Exception. Bailout! */ 19201da177e4SLinus Torvalds if (!copied) 19211da177e4SLinus Torvalds copied = -EFAULT; 19221da177e4SLinus Torvalds break; 19231da177e4SLinus Torvalds } 19241da177e4SLinus Torvalds } 19251a2449a8SChris Leech } 19261da177e4SLinus Torvalds 19271da177e4SLinus Torvalds *seq += used; 19281da177e4SLinus Torvalds copied += used; 19291da177e4SLinus Torvalds len -= used; 19301da177e4SLinus Torvalds 19311da177e4SLinus Torvalds tcp_rcv_space_adjust(sk); 19321da177e4SLinus Torvalds 19331da177e4SLinus Torvalds skip_copy: 19341da177e4SLinus Torvalds if (tp->urg_data && after(tp->copied_seq, tp->urg_seq)) { 19351da177e4SLinus Torvalds tp->urg_data = 0; 19369e412ba7SIlpo Järvinen tcp_fast_path_check(sk); 19371da177e4SLinus Torvalds } 19381da177e4SLinus Torvalds if (used + offset < skb->len) 19391da177e4SLinus Torvalds continue; 19401da177e4SLinus Torvalds 1941aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->fin) 19421da177e4SLinus Torvalds goto found_fin_ok; 19431a2449a8SChris Leech if (!(flags & MSG_PEEK)) { 19441a2449a8SChris Leech sk_eat_skb(sk, skb, copied_early); 1945dc6b9b78SEric Dumazet copied_early = false; 19461a2449a8SChris Leech } 19471da177e4SLinus Torvalds continue; 19481da177e4SLinus Torvalds 19491da177e4SLinus Torvalds found_fin_ok: 19501da177e4SLinus Torvalds /* Process the FIN. */ 19511da177e4SLinus Torvalds ++*seq; 19521a2449a8SChris Leech if (!(flags & MSG_PEEK)) { 19531a2449a8SChris Leech sk_eat_skb(sk, skb, copied_early); 1954dc6b9b78SEric Dumazet copied_early = false; 19551a2449a8SChris Leech } 19561da177e4SLinus Torvalds break; 19571da177e4SLinus Torvalds } while (len > 0); 19581da177e4SLinus Torvalds 19591da177e4SLinus Torvalds if (user_recv) { 1960b03efcfbSDavid S. Miller if (!skb_queue_empty(&tp->ucopy.prequeue)) { 19611da177e4SLinus Torvalds int chunk; 19621da177e4SLinus Torvalds 19631da177e4SLinus Torvalds tp->ucopy.len = copied > 0 ? len : 0; 19641da177e4SLinus Torvalds 19651da177e4SLinus Torvalds tcp_prequeue_process(sk); 19661da177e4SLinus Torvalds 19671da177e4SLinus Torvalds if (copied > 0 && (chunk = len - tp->ucopy.len) != 0) { 1968ed88098eSPavel Emelyanov NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk); 19691da177e4SLinus Torvalds len -= chunk; 19701da177e4SLinus Torvalds copied += chunk; 19711da177e4SLinus Torvalds } 19721da177e4SLinus Torvalds } 19731da177e4SLinus Torvalds 19741da177e4SLinus Torvalds tp->ucopy.task = NULL; 19751da177e4SLinus Torvalds tp->ucopy.len = 0; 19761da177e4SLinus Torvalds } 19771da177e4SLinus Torvalds 19781a2449a8SChris Leech #ifdef CONFIG_NET_DMA 197973852e81SSteven J. Magnani tcp_service_net_dma(sk, true); /* Wait for queue to drain */ 19801a2449a8SChris Leech tp->ucopy.dma_chan = NULL; 198173852e81SSteven J. Magnani 19821a2449a8SChris Leech if (tp->ucopy.pinned_list) { 19831a2449a8SChris Leech dma_unpin_iovec_pages(tp->ucopy.pinned_list); 19841a2449a8SChris Leech tp->ucopy.pinned_list = NULL; 19851a2449a8SChris Leech } 19861a2449a8SChris Leech #endif 19871a2449a8SChris Leech 19881da177e4SLinus Torvalds /* According to UNIX98, msg_name/msg_namelen are ignored 19891da177e4SLinus Torvalds * on connected socket. I was just happy when found this 8) --ANK 19901da177e4SLinus Torvalds */ 19911da177e4SLinus Torvalds 19921da177e4SLinus Torvalds /* Clean up data we have read: This will do ACK frames. */ 19930e4b4992SChris Leech tcp_cleanup_rbuf(sk, copied); 19941da177e4SLinus Torvalds 19951da177e4SLinus Torvalds release_sock(sk); 19961da177e4SLinus Torvalds return copied; 19971da177e4SLinus Torvalds 19981da177e4SLinus Torvalds out: 19991da177e4SLinus Torvalds release_sock(sk); 20001da177e4SLinus Torvalds return err; 20011da177e4SLinus Torvalds 20021da177e4SLinus Torvalds recv_urg: 2003377f0a08SRami Rosen err = tcp_recv_urg(sk, msg, len, flags); 20041da177e4SLinus Torvalds goto out; 2005c0e88ff0SPavel Emelyanov 2006c0e88ff0SPavel Emelyanov recv_sndq: 2007c0e88ff0SPavel Emelyanov err = tcp_peek_sndq(sk, msg, len); 2008c0e88ff0SPavel Emelyanov goto out; 20091da177e4SLinus Torvalds } 20104bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_recvmsg); 20111da177e4SLinus Torvalds 2012490d5046SIlpo Järvinen void tcp_set_state(struct sock *sk, int state) 2013490d5046SIlpo Järvinen { 2014490d5046SIlpo Järvinen int oldstate = sk->sk_state; 2015490d5046SIlpo Järvinen 2016490d5046SIlpo Järvinen switch (state) { 2017490d5046SIlpo Järvinen case TCP_ESTABLISHED: 2018490d5046SIlpo Järvinen if (oldstate != TCP_ESTABLISHED) 201981cc8a75SPavel Emelyanov TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB); 2020490d5046SIlpo Järvinen break; 2021490d5046SIlpo Järvinen 2022490d5046SIlpo Järvinen case TCP_CLOSE: 2023490d5046SIlpo Järvinen if (oldstate == TCP_CLOSE_WAIT || oldstate == TCP_ESTABLISHED) 202481cc8a75SPavel Emelyanov TCP_INC_STATS(sock_net(sk), TCP_MIB_ESTABRESETS); 2025490d5046SIlpo Järvinen 2026490d5046SIlpo Järvinen sk->sk_prot->unhash(sk); 2027490d5046SIlpo Järvinen if (inet_csk(sk)->icsk_bind_hash && 2028490d5046SIlpo Järvinen !(sk->sk_userlocks & SOCK_BINDPORT_LOCK)) 2029ab1e0a13SArnaldo Carvalho de Melo inet_put_port(sk); 2030490d5046SIlpo Järvinen /* fall through */ 2031490d5046SIlpo Järvinen default: 2032490d5046SIlpo Järvinen if (oldstate == TCP_ESTABLISHED) 203374688e48SPavel Emelyanov TCP_DEC_STATS(sock_net(sk), TCP_MIB_CURRESTAB); 2034490d5046SIlpo Järvinen } 2035490d5046SIlpo Järvinen 2036490d5046SIlpo Järvinen /* Change state AFTER socket is unhashed to avoid closed 2037490d5046SIlpo Järvinen * socket sitting in hash tables. 2038490d5046SIlpo Järvinen */ 2039490d5046SIlpo Järvinen sk->sk_state = state; 2040490d5046SIlpo Järvinen 2041490d5046SIlpo Järvinen #ifdef STATE_TRACE 2042490d5046SIlpo Järvinen SOCK_DEBUG(sk, "TCP sk=%p, State %s -> %s\n", sk, statename[oldstate], statename[state]); 2043490d5046SIlpo Järvinen #endif 2044490d5046SIlpo Järvinen } 2045490d5046SIlpo Järvinen EXPORT_SYMBOL_GPL(tcp_set_state); 2046490d5046SIlpo Järvinen 20471da177e4SLinus Torvalds /* 20481da177e4SLinus Torvalds * State processing on a close. This implements the state shift for 20491da177e4SLinus Torvalds * sending our FIN frame. Note that we only send a FIN for some 20501da177e4SLinus Torvalds * states. A shutdown() may have already sent the FIN, or we may be 20511da177e4SLinus Torvalds * closed. 20521da177e4SLinus Torvalds */ 20531da177e4SLinus Torvalds 20549b5b5cffSArjan van de Ven static const unsigned char new_state[16] = { 20551da177e4SLinus Torvalds /* current state: new state: action: */ 20561da177e4SLinus Torvalds /* (Invalid) */ TCP_CLOSE, 20571da177e4SLinus Torvalds /* TCP_ESTABLISHED */ TCP_FIN_WAIT1 | TCP_ACTION_FIN, 20581da177e4SLinus Torvalds /* TCP_SYN_SENT */ TCP_CLOSE, 20591da177e4SLinus Torvalds /* TCP_SYN_RECV */ TCP_FIN_WAIT1 | TCP_ACTION_FIN, 20601da177e4SLinus Torvalds /* TCP_FIN_WAIT1 */ TCP_FIN_WAIT1, 20611da177e4SLinus Torvalds /* TCP_FIN_WAIT2 */ TCP_FIN_WAIT2, 20621da177e4SLinus Torvalds /* TCP_TIME_WAIT */ TCP_CLOSE, 20631da177e4SLinus Torvalds /* TCP_CLOSE */ TCP_CLOSE, 20641da177e4SLinus Torvalds /* TCP_CLOSE_WAIT */ TCP_LAST_ACK | TCP_ACTION_FIN, 20651da177e4SLinus Torvalds /* TCP_LAST_ACK */ TCP_LAST_ACK, 20661da177e4SLinus Torvalds /* TCP_LISTEN */ TCP_CLOSE, 20671da177e4SLinus Torvalds /* TCP_CLOSING */ TCP_CLOSING, 20681da177e4SLinus Torvalds }; 20691da177e4SLinus Torvalds 20701da177e4SLinus Torvalds static int tcp_close_state(struct sock *sk) 20711da177e4SLinus Torvalds { 20721da177e4SLinus Torvalds int next = (int)new_state[sk->sk_state]; 20731da177e4SLinus Torvalds int ns = next & TCP_STATE_MASK; 20741da177e4SLinus Torvalds 20751da177e4SLinus Torvalds tcp_set_state(sk, ns); 20761da177e4SLinus Torvalds 20771da177e4SLinus Torvalds return next & TCP_ACTION_FIN; 20781da177e4SLinus Torvalds } 20791da177e4SLinus Torvalds 20801da177e4SLinus Torvalds /* 20811da177e4SLinus Torvalds * Shutdown the sending side of a connection. Much like close except 20821f29b058SSatoru SATOH * that we don't receive shut down or sock_set_flag(sk, SOCK_DEAD). 20831da177e4SLinus Torvalds */ 20841da177e4SLinus Torvalds 20851da177e4SLinus Torvalds void tcp_shutdown(struct sock *sk, int how) 20861da177e4SLinus Torvalds { 20871da177e4SLinus Torvalds /* We need to grab some memory, and put together a FIN, 20881da177e4SLinus Torvalds * and then put it into the queue to be sent. 20891da177e4SLinus Torvalds * Tim MacKenzie(tym@dibbler.cs.monash.edu.au) 4 Dec '92. 20901da177e4SLinus Torvalds */ 20911da177e4SLinus Torvalds if (!(how & SEND_SHUTDOWN)) 20921da177e4SLinus Torvalds return; 20931da177e4SLinus Torvalds 20941da177e4SLinus Torvalds /* If we've already sent a FIN, or it's a closed state, skip this. */ 20951da177e4SLinus Torvalds if ((1 << sk->sk_state) & 20961da177e4SLinus Torvalds (TCPF_ESTABLISHED | TCPF_SYN_SENT | 20971da177e4SLinus Torvalds TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) { 20981da177e4SLinus Torvalds /* Clear out any half completed packets. FIN if needed. */ 20991da177e4SLinus Torvalds if (tcp_close_state(sk)) 21001da177e4SLinus Torvalds tcp_send_fin(sk); 21011da177e4SLinus Torvalds } 21021da177e4SLinus Torvalds } 21034bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_shutdown); 21041da177e4SLinus Torvalds 2105efcdbf24SArun Sharma bool tcp_check_oom(struct sock *sk, int shift) 2106efcdbf24SArun Sharma { 2107efcdbf24SArun Sharma bool too_many_orphans, out_of_socket_memory; 2108efcdbf24SArun Sharma 2109efcdbf24SArun Sharma too_many_orphans = tcp_too_many_orphans(sk, shift); 2110efcdbf24SArun Sharma out_of_socket_memory = tcp_out_of_memory(sk); 2111efcdbf24SArun Sharma 2112e87cc472SJoe Perches if (too_many_orphans) 2113e87cc472SJoe Perches net_info_ratelimited("too many orphaned sockets\n"); 2114e87cc472SJoe Perches if (out_of_socket_memory) 2115e87cc472SJoe Perches net_info_ratelimited("out of memory -- consider tuning tcp_mem\n"); 2116efcdbf24SArun Sharma return too_many_orphans || out_of_socket_memory; 2117efcdbf24SArun Sharma } 2118efcdbf24SArun Sharma 21191da177e4SLinus Torvalds void tcp_close(struct sock *sk, long timeout) 21201da177e4SLinus Torvalds { 21211da177e4SLinus Torvalds struct sk_buff *skb; 21221da177e4SLinus Torvalds int data_was_unread = 0; 212375c2d907SHerbert Xu int state; 21241da177e4SLinus Torvalds 21251da177e4SLinus Torvalds lock_sock(sk); 21261da177e4SLinus Torvalds sk->sk_shutdown = SHUTDOWN_MASK; 21271da177e4SLinus Torvalds 21281da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) { 21291da177e4SLinus Torvalds tcp_set_state(sk, TCP_CLOSE); 21301da177e4SLinus Torvalds 21311da177e4SLinus Torvalds /* Special case. */ 21320a5578cfSArnaldo Carvalho de Melo inet_csk_listen_stop(sk); 21331da177e4SLinus Torvalds 21341da177e4SLinus Torvalds goto adjudge_to_death; 21351da177e4SLinus Torvalds } 21361da177e4SLinus Torvalds 21371da177e4SLinus Torvalds /* We need to flush the recv. buffs. We do this only on the 21381da177e4SLinus Torvalds * descriptor close, not protocol-sourced closes, because the 21391da177e4SLinus Torvalds * reader process may not have drained the data yet! 21401da177e4SLinus Torvalds */ 21411da177e4SLinus Torvalds while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) { 21421da177e4SLinus Torvalds u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq - 2143aa8223c7SArnaldo Carvalho de Melo tcp_hdr(skb)->fin; 21441da177e4SLinus Torvalds data_was_unread += len; 21451da177e4SLinus Torvalds __kfree_skb(skb); 21461da177e4SLinus Torvalds } 21471da177e4SLinus Torvalds 21483ab224beSHideo Aoki sk_mem_reclaim(sk); 21491da177e4SLinus Torvalds 2150565b7b2dSKonstantin Khorenko /* If socket has been already reset (e.g. in tcp_reset()) - kill it. */ 2151565b7b2dSKonstantin Khorenko if (sk->sk_state == TCP_CLOSE) 2152565b7b2dSKonstantin Khorenko goto adjudge_to_death; 2153565b7b2dSKonstantin Khorenko 215465bb723cSGerrit Renker /* As outlined in RFC 2525, section 2.17, we send a RST here because 215565bb723cSGerrit Renker * data was lost. To witness the awful effects of the old behavior of 215665bb723cSGerrit Renker * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk 215765bb723cSGerrit Renker * GET in an FTP client, suspend the process, wait for the client to 215865bb723cSGerrit Renker * advertise a zero window, then kill -9 the FTP client, wheee... 215965bb723cSGerrit Renker * Note: timeout is always zero in such a case. 21601da177e4SLinus Torvalds */ 2161ee995283SPavel Emelyanov if (unlikely(tcp_sk(sk)->repair)) { 2162ee995283SPavel Emelyanov sk->sk_prot->disconnect(sk, 0); 2163ee995283SPavel Emelyanov } else if (data_was_unread) { 21641da177e4SLinus Torvalds /* Unread data was tossed, zap the connection. */ 21656f67c817SPavel Emelyanov NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE); 21661da177e4SLinus Torvalds tcp_set_state(sk, TCP_CLOSE); 2167aa133076SWu Fengguang tcp_send_active_reset(sk, sk->sk_allocation); 21681da177e4SLinus Torvalds } else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) { 21691da177e4SLinus Torvalds /* Check zero linger _after_ checking for unread data. */ 21701da177e4SLinus Torvalds sk->sk_prot->disconnect(sk, 0); 21716f67c817SPavel Emelyanov NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONDATA); 21721da177e4SLinus Torvalds } else if (tcp_close_state(sk)) { 21731da177e4SLinus Torvalds /* We FIN if the application ate all the data before 21741da177e4SLinus Torvalds * zapping the connection. 21751da177e4SLinus Torvalds */ 21761da177e4SLinus Torvalds 21771da177e4SLinus Torvalds /* RED-PEN. Formally speaking, we have broken TCP state 21781da177e4SLinus Torvalds * machine. State transitions: 21791da177e4SLinus Torvalds * 21801da177e4SLinus Torvalds * TCP_ESTABLISHED -> TCP_FIN_WAIT1 21811da177e4SLinus Torvalds * TCP_SYN_RECV -> TCP_FIN_WAIT1 (forget it, it's impossible) 21821da177e4SLinus Torvalds * TCP_CLOSE_WAIT -> TCP_LAST_ACK 21831da177e4SLinus Torvalds * 21841da177e4SLinus Torvalds * are legal only when FIN has been sent (i.e. in window), 21851da177e4SLinus Torvalds * rather than queued out of window. Purists blame. 21861da177e4SLinus Torvalds * 21871da177e4SLinus Torvalds * F.e. "RFC state" is ESTABLISHED, 21881da177e4SLinus Torvalds * if Linux state is FIN-WAIT-1, but FIN is still not sent. 21891da177e4SLinus Torvalds * 21901da177e4SLinus Torvalds * The visible declinations are that sometimes 21911da177e4SLinus Torvalds * we enter time-wait state, when it is not required really 21921da177e4SLinus Torvalds * (harmless), do not send active resets, when they are 21931da177e4SLinus Torvalds * required by specs (TCP_ESTABLISHED, TCP_CLOSE_WAIT, when 21941da177e4SLinus Torvalds * they look as CLOSING or LAST_ACK for Linux) 21951da177e4SLinus Torvalds * Probably, I missed some more holelets. 21961da177e4SLinus Torvalds * --ANK 21978336886fSJerry Chu * XXX (TFO) - To start off we don't support SYN+ACK+FIN 21988336886fSJerry Chu * in a single packet! (May consider it later but will 21998336886fSJerry Chu * probably need API support or TCP_CORK SYN-ACK until 22008336886fSJerry Chu * data is written and socket is closed.) 22011da177e4SLinus Torvalds */ 22021da177e4SLinus Torvalds tcp_send_fin(sk); 22031da177e4SLinus Torvalds } 22041da177e4SLinus Torvalds 22051da177e4SLinus Torvalds sk_stream_wait_close(sk, timeout); 22061da177e4SLinus Torvalds 22071da177e4SLinus Torvalds adjudge_to_death: 220875c2d907SHerbert Xu state = sk->sk_state; 220975c2d907SHerbert Xu sock_hold(sk); 221075c2d907SHerbert Xu sock_orphan(sk); 221175c2d907SHerbert Xu 22121da177e4SLinus Torvalds /* It is the last release_sock in its life. It will remove backlog. */ 22131da177e4SLinus Torvalds release_sock(sk); 22141da177e4SLinus Torvalds 22151da177e4SLinus Torvalds 22161da177e4SLinus Torvalds /* Now socket is owned by kernel and we acquire BH lock 22171da177e4SLinus Torvalds to finish close. No need to check for user refs. 22181da177e4SLinus Torvalds */ 22191da177e4SLinus Torvalds local_bh_disable(); 22201da177e4SLinus Torvalds bh_lock_sock(sk); 2221547b792cSIlpo Järvinen WARN_ON(sock_owned_by_user(sk)); 22221da177e4SLinus Torvalds 2223eb4dea58SHerbert Xu percpu_counter_inc(sk->sk_prot->orphan_count); 2224eb4dea58SHerbert Xu 222575c2d907SHerbert Xu /* Have we already been destroyed by a softirq or backlog? */ 222675c2d907SHerbert Xu if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE) 222775c2d907SHerbert Xu goto out; 22281da177e4SLinus Torvalds 22291da177e4SLinus Torvalds /* This is a (useful) BSD violating of the RFC. There is a 22301da177e4SLinus Torvalds * problem with TCP as specified in that the other end could 22311da177e4SLinus Torvalds * keep a socket open forever with no application left this end. 2232b10bd54cSJesper Juhl * We use a 1 minute timeout (about the same as BSD) then kill 22331da177e4SLinus Torvalds * our end. If they send after that then tough - BUT: long enough 22341da177e4SLinus Torvalds * that we won't make the old 4*rto = almost no time - whoops 22351da177e4SLinus Torvalds * reset mistake. 22361da177e4SLinus Torvalds * 22371da177e4SLinus Torvalds * Nope, it was not mistake. It is really desired behaviour 22381da177e4SLinus Torvalds * f.e. on http servers, when such sockets are useless, but 22391da177e4SLinus Torvalds * consume significant resources. Let's do it with special 22401da177e4SLinus Torvalds * linger2 option. --ANK 22411da177e4SLinus Torvalds */ 22421da177e4SLinus Torvalds 22431da177e4SLinus Torvalds if (sk->sk_state == TCP_FIN_WAIT2) { 22441da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 22451da177e4SLinus Torvalds if (tp->linger2 < 0) { 22461da177e4SLinus Torvalds tcp_set_state(sk, TCP_CLOSE); 22471da177e4SLinus Torvalds tcp_send_active_reset(sk, GFP_ATOMIC); 2248de0744afSPavel Emelyanov NET_INC_STATS_BH(sock_net(sk), 2249de0744afSPavel Emelyanov LINUX_MIB_TCPABORTONLINGER); 22501da177e4SLinus Torvalds } else { 2251463c84b9SArnaldo Carvalho de Melo const int tmo = tcp_fin_time(sk); 22521da177e4SLinus Torvalds 22531da177e4SLinus Torvalds if (tmo > TCP_TIMEWAIT_LEN) { 225452499afeSDavid S. Miller inet_csk_reset_keepalive_timer(sk, 225552499afeSDavid S. Miller tmo - TCP_TIMEWAIT_LEN); 22561da177e4SLinus Torvalds } else { 22571da177e4SLinus Torvalds tcp_time_wait(sk, TCP_FIN_WAIT2, tmo); 22581da177e4SLinus Torvalds goto out; 22591da177e4SLinus Torvalds } 22601da177e4SLinus Torvalds } 22611da177e4SLinus Torvalds } 22621da177e4SLinus Torvalds if (sk->sk_state != TCP_CLOSE) { 22633ab224beSHideo Aoki sk_mem_reclaim(sk); 2264efcdbf24SArun Sharma if (tcp_check_oom(sk, 0)) { 22651da177e4SLinus Torvalds tcp_set_state(sk, TCP_CLOSE); 22661da177e4SLinus Torvalds tcp_send_active_reset(sk, GFP_ATOMIC); 2267de0744afSPavel Emelyanov NET_INC_STATS_BH(sock_net(sk), 2268de0744afSPavel Emelyanov LINUX_MIB_TCPABORTONMEMORY); 22691da177e4SLinus Torvalds } 22701da177e4SLinus Torvalds } 22711da177e4SLinus Torvalds 22728336886fSJerry Chu if (sk->sk_state == TCP_CLOSE) { 22738336886fSJerry Chu struct request_sock *req = tcp_sk(sk)->fastopen_rsk; 22748336886fSJerry Chu /* We could get here with a non-NULL req if the socket is 22758336886fSJerry Chu * aborted (e.g., closed with unread data) before 3WHS 22768336886fSJerry Chu * finishes. 22778336886fSJerry Chu */ 22788336886fSJerry Chu if (req != NULL) 22798336886fSJerry Chu reqsk_fastopen_remove(sk, req, false); 22800a5578cfSArnaldo Carvalho de Melo inet_csk_destroy_sock(sk); 22818336886fSJerry Chu } 22821da177e4SLinus Torvalds /* Otherwise, socket is reprieved until protocol close. */ 22831da177e4SLinus Torvalds 22841da177e4SLinus Torvalds out: 22851da177e4SLinus Torvalds bh_unlock_sock(sk); 22861da177e4SLinus Torvalds local_bh_enable(); 22871da177e4SLinus Torvalds sock_put(sk); 22881da177e4SLinus Torvalds } 22894bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_close); 22901da177e4SLinus Torvalds 22911da177e4SLinus Torvalds /* These states need RST on ABORT according to RFC793 */ 22921da177e4SLinus Torvalds 2293a2a385d6SEric Dumazet static inline bool tcp_need_reset(int state) 22941da177e4SLinus Torvalds { 22951da177e4SLinus Torvalds return (1 << state) & 22961da177e4SLinus Torvalds (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT | TCPF_FIN_WAIT1 | 22971da177e4SLinus Torvalds TCPF_FIN_WAIT2 | TCPF_SYN_RECV); 22981da177e4SLinus Torvalds } 22991da177e4SLinus Torvalds 23001da177e4SLinus Torvalds int tcp_disconnect(struct sock *sk, int flags) 23011da177e4SLinus Torvalds { 23021da177e4SLinus Torvalds struct inet_sock *inet = inet_sk(sk); 2303463c84b9SArnaldo Carvalho de Melo struct inet_connection_sock *icsk = inet_csk(sk); 23041da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 23051da177e4SLinus Torvalds int err = 0; 23061da177e4SLinus Torvalds int old_state = sk->sk_state; 23071da177e4SLinus Torvalds 23081da177e4SLinus Torvalds if (old_state != TCP_CLOSE) 23091da177e4SLinus Torvalds tcp_set_state(sk, TCP_CLOSE); 23101da177e4SLinus Torvalds 23111da177e4SLinus Torvalds /* ABORT function of RFC793 */ 23121da177e4SLinus Torvalds if (old_state == TCP_LISTEN) { 23130a5578cfSArnaldo Carvalho de Melo inet_csk_listen_stop(sk); 2314ee995283SPavel Emelyanov } else if (unlikely(tp->repair)) { 2315ee995283SPavel Emelyanov sk->sk_err = ECONNABORTED; 23161da177e4SLinus Torvalds } else if (tcp_need_reset(old_state) || 23171da177e4SLinus Torvalds (tp->snd_nxt != tp->write_seq && 23181da177e4SLinus Torvalds (1 << old_state) & (TCPF_CLOSING | TCPF_LAST_ACK))) { 2319caa20d9aSStephen Hemminger /* The last check adjusts for discrepancy of Linux wrt. RFC 23201da177e4SLinus Torvalds * states 23211da177e4SLinus Torvalds */ 23221da177e4SLinus Torvalds tcp_send_active_reset(sk, gfp_any()); 23231da177e4SLinus Torvalds sk->sk_err = ECONNRESET; 23241da177e4SLinus Torvalds } else if (old_state == TCP_SYN_SENT) 23251da177e4SLinus Torvalds sk->sk_err = ECONNRESET; 23261da177e4SLinus Torvalds 23271da177e4SLinus Torvalds tcp_clear_xmit_timers(sk); 23281da177e4SLinus Torvalds __skb_queue_purge(&sk->sk_receive_queue); 2329fe067e8aSDavid S. Miller tcp_write_queue_purge(sk); 23301da177e4SLinus Torvalds __skb_queue_purge(&tp->out_of_order_queue); 23311a2449a8SChris Leech #ifdef CONFIG_NET_DMA 23321a2449a8SChris Leech __skb_queue_purge(&sk->sk_async_wait_queue); 23331a2449a8SChris Leech #endif 23341da177e4SLinus Torvalds 2335c720c7e8SEric Dumazet inet->inet_dport = 0; 23361da177e4SLinus Torvalds 23371da177e4SLinus Torvalds if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK)) 23381da177e4SLinus Torvalds inet_reset_saddr(sk); 23391da177e4SLinus Torvalds 23401da177e4SLinus Torvalds sk->sk_shutdown = 0; 23411da177e4SLinus Torvalds sock_reset_flag(sk, SOCK_DONE); 2342*740b0f18SEric Dumazet tp->srtt_us = 0; 23431da177e4SLinus Torvalds if ((tp->write_seq += tp->max_window + 2) == 0) 23441da177e4SLinus Torvalds tp->write_seq = 1; 2345463c84b9SArnaldo Carvalho de Melo icsk->icsk_backoff = 0; 23461da177e4SLinus Torvalds tp->snd_cwnd = 2; 23476687e988SArnaldo Carvalho de Melo icsk->icsk_probes_out = 0; 23481da177e4SLinus Torvalds tp->packets_out = 0; 23490b6a05c1SIlpo Järvinen tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; 23501da177e4SLinus Torvalds tp->snd_cwnd_cnt = 0; 23511fdf475aSEric Dumazet tp->window_clamp = 0; 23526687e988SArnaldo Carvalho de Melo tcp_set_ca_state(sk, TCP_CA_Open); 23531da177e4SLinus Torvalds tcp_clear_retrans(tp); 2354463c84b9SArnaldo Carvalho de Melo inet_csk_delack_init(sk); 2355fe067e8aSDavid S. Miller tcp_init_send_head(sk); 2356b40b4f79SSrinivas Aji memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); 23571da177e4SLinus Torvalds __sk_dst_reset(sk); 23581da177e4SLinus Torvalds 2359c720c7e8SEric Dumazet WARN_ON(inet->inet_num && !icsk->icsk_bind_hash); 23601da177e4SLinus Torvalds 23611da177e4SLinus Torvalds sk->sk_error_report(sk); 23621da177e4SLinus Torvalds return err; 23631da177e4SLinus Torvalds } 23644bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_disconnect); 23651da177e4SLinus Torvalds 2366bb68b647SChristoph Paasch void tcp_sock_destruct(struct sock *sk) 2367bb68b647SChristoph Paasch { 2368bb68b647SChristoph Paasch inet_sock_destruct(sk); 2369bb68b647SChristoph Paasch 2370bb68b647SChristoph Paasch kfree(inet_csk(sk)->icsk_accept_queue.fastopenq); 2371bb68b647SChristoph Paasch } 2372bb68b647SChristoph Paasch 2373a2a385d6SEric Dumazet static inline bool tcp_can_repair_sock(const struct sock *sk) 2374ee995283SPavel Emelyanov { 237552e804c6SEric W. Biederman return ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN) && 2376ee995283SPavel Emelyanov ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_ESTABLISHED)); 2377ee995283SPavel Emelyanov } 2378ee995283SPavel Emelyanov 2379de248a75SPavel Emelyanov static int tcp_repair_options_est(struct tcp_sock *tp, 2380de248a75SPavel Emelyanov struct tcp_repair_opt __user *optbuf, unsigned int len) 2381b139ba4eSPavel Emelyanov { 2382de248a75SPavel Emelyanov struct tcp_repair_opt opt; 2383b139ba4eSPavel Emelyanov 2384de248a75SPavel Emelyanov while (len >= sizeof(opt)) { 2385de248a75SPavel Emelyanov if (copy_from_user(&opt, optbuf, sizeof(opt))) 2386b139ba4eSPavel Emelyanov return -EFAULT; 2387b139ba4eSPavel Emelyanov 2388b139ba4eSPavel Emelyanov optbuf++; 2389de248a75SPavel Emelyanov len -= sizeof(opt); 2390b139ba4eSPavel Emelyanov 2391de248a75SPavel Emelyanov switch (opt.opt_code) { 2392de248a75SPavel Emelyanov case TCPOPT_MSS: 2393de248a75SPavel Emelyanov tp->rx_opt.mss_clamp = opt.opt_val; 2394b139ba4eSPavel Emelyanov break; 2395de248a75SPavel Emelyanov case TCPOPT_WINDOW: 2396bc26ccd8SAndrey Vagin { 2397bc26ccd8SAndrey Vagin u16 snd_wscale = opt.opt_val & 0xFFFF; 2398bc26ccd8SAndrey Vagin u16 rcv_wscale = opt.opt_val >> 16; 2399bc26ccd8SAndrey Vagin 2400bc26ccd8SAndrey Vagin if (snd_wscale > 14 || rcv_wscale > 14) 2401b139ba4eSPavel Emelyanov return -EFBIG; 2402b139ba4eSPavel Emelyanov 2403bc26ccd8SAndrey Vagin tp->rx_opt.snd_wscale = snd_wscale; 2404bc26ccd8SAndrey Vagin tp->rx_opt.rcv_wscale = rcv_wscale; 2405bc26ccd8SAndrey Vagin tp->rx_opt.wscale_ok = 1; 2406bc26ccd8SAndrey Vagin } 2407b139ba4eSPavel Emelyanov break; 2408b139ba4eSPavel Emelyanov case TCPOPT_SACK_PERM: 2409de248a75SPavel Emelyanov if (opt.opt_val != 0) 2410de248a75SPavel Emelyanov return -EINVAL; 2411de248a75SPavel Emelyanov 2412b139ba4eSPavel Emelyanov tp->rx_opt.sack_ok |= TCP_SACK_SEEN; 2413b139ba4eSPavel Emelyanov if (sysctl_tcp_fack) 2414b139ba4eSPavel Emelyanov tcp_enable_fack(tp); 2415b139ba4eSPavel Emelyanov break; 2416b139ba4eSPavel Emelyanov case TCPOPT_TIMESTAMP: 2417de248a75SPavel Emelyanov if (opt.opt_val != 0) 2418de248a75SPavel Emelyanov return -EINVAL; 2419de248a75SPavel Emelyanov 2420b139ba4eSPavel Emelyanov tp->rx_opt.tstamp_ok = 1; 2421b139ba4eSPavel Emelyanov break; 2422b139ba4eSPavel Emelyanov } 2423b139ba4eSPavel Emelyanov } 2424b139ba4eSPavel Emelyanov 2425b139ba4eSPavel Emelyanov return 0; 2426b139ba4eSPavel Emelyanov } 2427b139ba4eSPavel Emelyanov 24281da177e4SLinus Torvalds /* 24291da177e4SLinus Torvalds * Socket option code for TCP. 24301da177e4SLinus Torvalds */ 24313fdadf7dSDmitry Mishin static int do_tcp_setsockopt(struct sock *sk, int level, 2432b7058842SDavid S. Miller int optname, char __user *optval, unsigned int optlen) 24331da177e4SLinus Torvalds { 24341da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 2435463c84b9SArnaldo Carvalho de Melo struct inet_connection_sock *icsk = inet_csk(sk); 24361da177e4SLinus Torvalds int val; 24371da177e4SLinus Torvalds int err = 0; 24381da177e4SLinus Torvalds 2439e56fb50fSWilliam Allen Simpson /* These are data/string values, all the others are ints */ 2440e56fb50fSWilliam Allen Simpson switch (optname) { 2441e56fb50fSWilliam Allen Simpson case TCP_CONGESTION: { 24425f8ef48dSStephen Hemminger char name[TCP_CA_NAME_MAX]; 24435f8ef48dSStephen Hemminger 24445f8ef48dSStephen Hemminger if (optlen < 1) 24455f8ef48dSStephen Hemminger return -EINVAL; 24465f8ef48dSStephen Hemminger 24475f8ef48dSStephen Hemminger val = strncpy_from_user(name, optval, 24484fdb78d3SAndrew Morton min_t(long, TCP_CA_NAME_MAX-1, optlen)); 24495f8ef48dSStephen Hemminger if (val < 0) 24505f8ef48dSStephen Hemminger return -EFAULT; 24515f8ef48dSStephen Hemminger name[val] = 0; 24525f8ef48dSStephen Hemminger 24535f8ef48dSStephen Hemminger lock_sock(sk); 24546687e988SArnaldo Carvalho de Melo err = tcp_set_congestion_control(sk, name); 24555f8ef48dSStephen Hemminger release_sock(sk); 24565f8ef48dSStephen Hemminger return err; 24575f8ef48dSStephen Hemminger } 2458e56fb50fSWilliam Allen Simpson default: 2459e56fb50fSWilliam Allen Simpson /* fallthru */ 2460e56fb50fSWilliam Allen Simpson break; 2461ccbd6a5aSJoe Perches } 24625f8ef48dSStephen Hemminger 24631da177e4SLinus Torvalds if (optlen < sizeof(int)) 24641da177e4SLinus Torvalds return -EINVAL; 24651da177e4SLinus Torvalds 24661da177e4SLinus Torvalds if (get_user(val, (int __user *)optval)) 24671da177e4SLinus Torvalds return -EFAULT; 24681da177e4SLinus Torvalds 24691da177e4SLinus Torvalds lock_sock(sk); 24701da177e4SLinus Torvalds 24711da177e4SLinus Torvalds switch (optname) { 24721da177e4SLinus Torvalds case TCP_MAXSEG: 24731da177e4SLinus Torvalds /* Values greater than interface MTU won't take effect. However 24741da177e4SLinus Torvalds * at the point when this call is done we typically don't yet 24751da177e4SLinus Torvalds * know which interface is going to be used */ 2476c39508d6SDavid S. Miller if (val < TCP_MIN_MSS || val > MAX_TCP_WINDOW) { 24771da177e4SLinus Torvalds err = -EINVAL; 24781da177e4SLinus Torvalds break; 24791da177e4SLinus Torvalds } 24801da177e4SLinus Torvalds tp->rx_opt.user_mss = val; 24811da177e4SLinus Torvalds break; 24821da177e4SLinus Torvalds 24831da177e4SLinus Torvalds case TCP_NODELAY: 24841da177e4SLinus Torvalds if (val) { 24851da177e4SLinus Torvalds /* TCP_NODELAY is weaker than TCP_CORK, so that 24861da177e4SLinus Torvalds * this option on corked socket is remembered, but 24871da177e4SLinus Torvalds * it is not activated until cork is cleared. 24881da177e4SLinus Torvalds * 24891da177e4SLinus Torvalds * However, when TCP_NODELAY is set we make 24901da177e4SLinus Torvalds * an explicit push, which overrides even TCP_CORK 24911da177e4SLinus Torvalds * for currently queued segments. 24921da177e4SLinus Torvalds */ 24931da177e4SLinus Torvalds tp->nonagle |= TCP_NAGLE_OFF|TCP_NAGLE_PUSH; 24949e412ba7SIlpo Järvinen tcp_push_pending_frames(sk); 24951da177e4SLinus Torvalds } else { 24961da177e4SLinus Torvalds tp->nonagle &= ~TCP_NAGLE_OFF; 24971da177e4SLinus Torvalds } 24981da177e4SLinus Torvalds break; 24991da177e4SLinus Torvalds 250036e31b0aSAndreas Petlund case TCP_THIN_LINEAR_TIMEOUTS: 250136e31b0aSAndreas Petlund if (val < 0 || val > 1) 250236e31b0aSAndreas Petlund err = -EINVAL; 250336e31b0aSAndreas Petlund else 250436e31b0aSAndreas Petlund tp->thin_lto = val; 250536e31b0aSAndreas Petlund break; 250636e31b0aSAndreas Petlund 25077e380175SAndreas Petlund case TCP_THIN_DUPACK: 25087e380175SAndreas Petlund if (val < 0 || val > 1) 25097e380175SAndreas Petlund err = -EINVAL; 2510e2e5c4c0SDave Jones else { 25117e380175SAndreas Petlund tp->thin_dupack = val; 2512eed530b6SYuchung Cheng if (tp->thin_dupack) 2513eed530b6SYuchung Cheng tcp_disable_early_retrans(tp); 2514e2e5c4c0SDave Jones } 25157e380175SAndreas Petlund break; 25167e380175SAndreas Petlund 2517ee995283SPavel Emelyanov case TCP_REPAIR: 2518ee995283SPavel Emelyanov if (!tcp_can_repair_sock(sk)) 2519ee995283SPavel Emelyanov err = -EPERM; 2520ee995283SPavel Emelyanov else if (val == 1) { 2521ee995283SPavel Emelyanov tp->repair = 1; 2522ee995283SPavel Emelyanov sk->sk_reuse = SK_FORCE_REUSE; 2523ee995283SPavel Emelyanov tp->repair_queue = TCP_NO_QUEUE; 2524ee995283SPavel Emelyanov } else if (val == 0) { 2525ee995283SPavel Emelyanov tp->repair = 0; 2526ee995283SPavel Emelyanov sk->sk_reuse = SK_NO_REUSE; 2527ee995283SPavel Emelyanov tcp_send_window_probe(sk); 2528ee995283SPavel Emelyanov } else 2529ee995283SPavel Emelyanov err = -EINVAL; 2530ee995283SPavel Emelyanov 2531ee995283SPavel Emelyanov break; 2532ee995283SPavel Emelyanov 2533ee995283SPavel Emelyanov case TCP_REPAIR_QUEUE: 2534ee995283SPavel Emelyanov if (!tp->repair) 2535ee995283SPavel Emelyanov err = -EPERM; 2536ee995283SPavel Emelyanov else if (val < TCP_QUEUES_NR) 2537ee995283SPavel Emelyanov tp->repair_queue = val; 2538ee995283SPavel Emelyanov else 2539ee995283SPavel Emelyanov err = -EINVAL; 2540ee995283SPavel Emelyanov break; 2541ee995283SPavel Emelyanov 2542ee995283SPavel Emelyanov case TCP_QUEUE_SEQ: 2543ee995283SPavel Emelyanov if (sk->sk_state != TCP_CLOSE) 2544ee995283SPavel Emelyanov err = -EPERM; 2545ee995283SPavel Emelyanov else if (tp->repair_queue == TCP_SEND_QUEUE) 2546ee995283SPavel Emelyanov tp->write_seq = val; 2547ee995283SPavel Emelyanov else if (tp->repair_queue == TCP_RECV_QUEUE) 2548ee995283SPavel Emelyanov tp->rcv_nxt = val; 2549ee995283SPavel Emelyanov else 2550ee995283SPavel Emelyanov err = -EINVAL; 2551ee995283SPavel Emelyanov break; 2552ee995283SPavel Emelyanov 2553b139ba4eSPavel Emelyanov case TCP_REPAIR_OPTIONS: 2554b139ba4eSPavel Emelyanov if (!tp->repair) 2555b139ba4eSPavel Emelyanov err = -EINVAL; 2556b139ba4eSPavel Emelyanov else if (sk->sk_state == TCP_ESTABLISHED) 2557de248a75SPavel Emelyanov err = tcp_repair_options_est(tp, 2558de248a75SPavel Emelyanov (struct tcp_repair_opt __user *)optval, 2559de248a75SPavel Emelyanov optlen); 2560b139ba4eSPavel Emelyanov else 2561b139ba4eSPavel Emelyanov err = -EPERM; 2562b139ba4eSPavel Emelyanov break; 2563b139ba4eSPavel Emelyanov 25641da177e4SLinus Torvalds case TCP_CORK: 25651da177e4SLinus Torvalds /* When set indicates to always queue non-full frames. 25661da177e4SLinus Torvalds * Later the user clears this option and we transmit 25671da177e4SLinus Torvalds * any pending partial frames in the queue. This is 25681da177e4SLinus Torvalds * meant to be used alongside sendfile() to get properly 25691da177e4SLinus Torvalds * filled frames when the user (for example) must write 25701da177e4SLinus Torvalds * out headers with a write() call first and then use 25711da177e4SLinus Torvalds * sendfile to send out the data parts. 25721da177e4SLinus Torvalds * 25731da177e4SLinus Torvalds * TCP_CORK can be set together with TCP_NODELAY and it is 25741da177e4SLinus Torvalds * stronger than TCP_NODELAY. 25751da177e4SLinus Torvalds */ 25761da177e4SLinus Torvalds if (val) { 25771da177e4SLinus Torvalds tp->nonagle |= TCP_NAGLE_CORK; 25781da177e4SLinus Torvalds } else { 25791da177e4SLinus Torvalds tp->nonagle &= ~TCP_NAGLE_CORK; 25801da177e4SLinus Torvalds if (tp->nonagle&TCP_NAGLE_OFF) 25811da177e4SLinus Torvalds tp->nonagle |= TCP_NAGLE_PUSH; 25829e412ba7SIlpo Järvinen tcp_push_pending_frames(sk); 25831da177e4SLinus Torvalds } 25841da177e4SLinus Torvalds break; 25851da177e4SLinus Torvalds 25861da177e4SLinus Torvalds case TCP_KEEPIDLE: 25871da177e4SLinus Torvalds if (val < 1 || val > MAX_TCP_KEEPIDLE) 25881da177e4SLinus Torvalds err = -EINVAL; 25891da177e4SLinus Torvalds else { 25901da177e4SLinus Torvalds tp->keepalive_time = val * HZ; 25911da177e4SLinus Torvalds if (sock_flag(sk, SOCK_KEEPOPEN) && 25921da177e4SLinus Torvalds !((1 << sk->sk_state) & 25931da177e4SLinus Torvalds (TCPF_CLOSE | TCPF_LISTEN))) { 25946c37e5deSFlavio Leitner u32 elapsed = keepalive_time_elapsed(tp); 25951da177e4SLinus Torvalds if (tp->keepalive_time > elapsed) 25961da177e4SLinus Torvalds elapsed = tp->keepalive_time - elapsed; 25971da177e4SLinus Torvalds else 25981da177e4SLinus Torvalds elapsed = 0; 2599463c84b9SArnaldo Carvalho de Melo inet_csk_reset_keepalive_timer(sk, elapsed); 26001da177e4SLinus Torvalds } 26011da177e4SLinus Torvalds } 26021da177e4SLinus Torvalds break; 26031da177e4SLinus Torvalds case TCP_KEEPINTVL: 26041da177e4SLinus Torvalds if (val < 1 || val > MAX_TCP_KEEPINTVL) 26051da177e4SLinus Torvalds err = -EINVAL; 26061da177e4SLinus Torvalds else 26071da177e4SLinus Torvalds tp->keepalive_intvl = val * HZ; 26081da177e4SLinus Torvalds break; 26091da177e4SLinus Torvalds case TCP_KEEPCNT: 26101da177e4SLinus Torvalds if (val < 1 || val > MAX_TCP_KEEPCNT) 26111da177e4SLinus Torvalds err = -EINVAL; 26121da177e4SLinus Torvalds else 26131da177e4SLinus Torvalds tp->keepalive_probes = val; 26141da177e4SLinus Torvalds break; 26151da177e4SLinus Torvalds case TCP_SYNCNT: 26161da177e4SLinus Torvalds if (val < 1 || val > MAX_TCP_SYNCNT) 26171da177e4SLinus Torvalds err = -EINVAL; 26181da177e4SLinus Torvalds else 2619463c84b9SArnaldo Carvalho de Melo icsk->icsk_syn_retries = val; 26201da177e4SLinus Torvalds break; 26211da177e4SLinus Torvalds 26221da177e4SLinus Torvalds case TCP_LINGER2: 26231da177e4SLinus Torvalds if (val < 0) 26241da177e4SLinus Torvalds tp->linger2 = -1; 26251da177e4SLinus Torvalds else if (val > sysctl_tcp_fin_timeout / HZ) 26261da177e4SLinus Torvalds tp->linger2 = 0; 26271da177e4SLinus Torvalds else 26281da177e4SLinus Torvalds tp->linger2 = val * HZ; 26291da177e4SLinus Torvalds break; 26301da177e4SLinus Torvalds 26311da177e4SLinus Torvalds case TCP_DEFER_ACCEPT: 2632b103cf34SJulian Anastasov /* Translate value in seconds to number of retransmits */ 2633b103cf34SJulian Anastasov icsk->icsk_accept_queue.rskq_defer_accept = 2634b103cf34SJulian Anastasov secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ, 2635b103cf34SJulian Anastasov TCP_RTO_MAX / HZ); 26361da177e4SLinus Torvalds break; 26371da177e4SLinus Torvalds 26381da177e4SLinus Torvalds case TCP_WINDOW_CLAMP: 26391da177e4SLinus Torvalds if (!val) { 26401da177e4SLinus Torvalds if (sk->sk_state != TCP_CLOSE) { 26411da177e4SLinus Torvalds err = -EINVAL; 26421da177e4SLinus Torvalds break; 26431da177e4SLinus Torvalds } 26441da177e4SLinus Torvalds tp->window_clamp = 0; 26451da177e4SLinus Torvalds } else 26461da177e4SLinus Torvalds tp->window_clamp = val < SOCK_MIN_RCVBUF / 2 ? 26471da177e4SLinus Torvalds SOCK_MIN_RCVBUF / 2 : val; 26481da177e4SLinus Torvalds break; 26491da177e4SLinus Torvalds 26501da177e4SLinus Torvalds case TCP_QUICKACK: 26511da177e4SLinus Torvalds if (!val) { 2652463c84b9SArnaldo Carvalho de Melo icsk->icsk_ack.pingpong = 1; 26531da177e4SLinus Torvalds } else { 2654463c84b9SArnaldo Carvalho de Melo icsk->icsk_ack.pingpong = 0; 26551da177e4SLinus Torvalds if ((1 << sk->sk_state) & 26561da177e4SLinus Torvalds (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT) && 2657463c84b9SArnaldo Carvalho de Melo inet_csk_ack_scheduled(sk)) { 2658463c84b9SArnaldo Carvalho de Melo icsk->icsk_ack.pending |= ICSK_ACK_PUSHED; 26590e4b4992SChris Leech tcp_cleanup_rbuf(sk, 1); 26601da177e4SLinus Torvalds if (!(val & 1)) 2661463c84b9SArnaldo Carvalho de Melo icsk->icsk_ack.pingpong = 1; 26621da177e4SLinus Torvalds } 26631da177e4SLinus Torvalds } 26641da177e4SLinus Torvalds break; 26651da177e4SLinus Torvalds 2666cfb6eeb4SYOSHIFUJI Hideaki #ifdef CONFIG_TCP_MD5SIG 2667cfb6eeb4SYOSHIFUJI Hideaki case TCP_MD5SIG: 2668cfb6eeb4SYOSHIFUJI Hideaki /* Read the IP->Key mappings from userspace */ 2669cfb6eeb4SYOSHIFUJI Hideaki err = tp->af_specific->md5_parse(sk, optval, optlen); 2670cfb6eeb4SYOSHIFUJI Hideaki break; 2671cfb6eeb4SYOSHIFUJI Hideaki #endif 2672dca43c75SJerry Chu case TCP_USER_TIMEOUT: 2673dca43c75SJerry Chu /* Cap the max timeout in ms TCP will retry/retrans 2674dca43c75SJerry Chu * before giving up and aborting (ETIMEDOUT) a connection. 2675dca43c75SJerry Chu */ 267642493570SHangbin Liu if (val < 0) 267742493570SHangbin Liu err = -EINVAL; 267842493570SHangbin Liu else 2679dca43c75SJerry Chu icsk->icsk_user_timeout = msecs_to_jiffies(val); 2680dca43c75SJerry Chu break; 26818336886fSJerry Chu 26828336886fSJerry Chu case TCP_FASTOPEN: 26838336886fSJerry Chu if (val >= 0 && ((1 << sk->sk_state) & (TCPF_CLOSE | 26848336886fSJerry Chu TCPF_LISTEN))) 26858336886fSJerry Chu err = fastopen_init_queue(sk, val); 26868336886fSJerry Chu else 26878336886fSJerry Chu err = -EINVAL; 26888336886fSJerry Chu break; 268993be6ce0SAndrey Vagin case TCP_TIMESTAMP: 269093be6ce0SAndrey Vagin if (!tp->repair) 269193be6ce0SAndrey Vagin err = -EPERM; 269293be6ce0SAndrey Vagin else 269393be6ce0SAndrey Vagin tp->tsoffset = val - tcp_time_stamp; 269493be6ce0SAndrey Vagin break; 2695c9bee3b7SEric Dumazet case TCP_NOTSENT_LOWAT: 2696c9bee3b7SEric Dumazet tp->notsent_lowat = val; 2697c9bee3b7SEric Dumazet sk->sk_write_space(sk); 2698c9bee3b7SEric Dumazet break; 26991da177e4SLinus Torvalds default: 27001da177e4SLinus Torvalds err = -ENOPROTOOPT; 27011da177e4SLinus Torvalds break; 27023ff50b79SStephen Hemminger } 27033ff50b79SStephen Hemminger 27041da177e4SLinus Torvalds release_sock(sk); 27051da177e4SLinus Torvalds return err; 27061da177e4SLinus Torvalds } 27071da177e4SLinus Torvalds 27083fdadf7dSDmitry Mishin int tcp_setsockopt(struct sock *sk, int level, int optname, char __user *optval, 2709b7058842SDavid S. Miller unsigned int optlen) 27103fdadf7dSDmitry Mishin { 2711cf533ea5SEric Dumazet const struct inet_connection_sock *icsk = inet_csk(sk); 27123fdadf7dSDmitry Mishin 27133fdadf7dSDmitry Mishin if (level != SOL_TCP) 27143fdadf7dSDmitry Mishin return icsk->icsk_af_ops->setsockopt(sk, level, optname, 27153fdadf7dSDmitry Mishin optval, optlen); 27163fdadf7dSDmitry Mishin return do_tcp_setsockopt(sk, level, optname, optval, optlen); 27173fdadf7dSDmitry Mishin } 27184bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_setsockopt); 27193fdadf7dSDmitry Mishin 27203fdadf7dSDmitry Mishin #ifdef CONFIG_COMPAT 2721543d9cfeSArnaldo Carvalho de Melo int compat_tcp_setsockopt(struct sock *sk, int level, int optname, 2722b7058842SDavid S. Miller char __user *optval, unsigned int optlen) 27233fdadf7dSDmitry Mishin { 2724dec73ff0SArnaldo Carvalho de Melo if (level != SOL_TCP) 2725dec73ff0SArnaldo Carvalho de Melo return inet_csk_compat_setsockopt(sk, level, optname, 2726dec73ff0SArnaldo Carvalho de Melo optval, optlen); 27273fdadf7dSDmitry Mishin return do_tcp_setsockopt(sk, level, optname, optval, optlen); 27283fdadf7dSDmitry Mishin } 2729543d9cfeSArnaldo Carvalho de Melo EXPORT_SYMBOL(compat_tcp_setsockopt); 27303fdadf7dSDmitry Mishin #endif 27313fdadf7dSDmitry Mishin 27321da177e4SLinus Torvalds /* Return information about state of tcp endpoint in API format. */ 2733cf533ea5SEric Dumazet void tcp_get_info(const struct sock *sk, struct tcp_info *info) 27341da177e4SLinus Torvalds { 2735cf533ea5SEric Dumazet const struct tcp_sock *tp = tcp_sk(sk); 2736463c84b9SArnaldo Carvalho de Melo const struct inet_connection_sock *icsk = inet_csk(sk); 27371da177e4SLinus Torvalds u32 now = tcp_time_stamp; 27381da177e4SLinus Torvalds 27391da177e4SLinus Torvalds memset(info, 0, sizeof(*info)); 27401da177e4SLinus Torvalds 27411da177e4SLinus Torvalds info->tcpi_state = sk->sk_state; 27426687e988SArnaldo Carvalho de Melo info->tcpi_ca_state = icsk->icsk_ca_state; 2743463c84b9SArnaldo Carvalho de Melo info->tcpi_retransmits = icsk->icsk_retransmits; 27446687e988SArnaldo Carvalho de Melo info->tcpi_probes = icsk->icsk_probes_out; 2745463c84b9SArnaldo Carvalho de Melo info->tcpi_backoff = icsk->icsk_backoff; 27461da177e4SLinus Torvalds 27471da177e4SLinus Torvalds if (tp->rx_opt.tstamp_ok) 27481da177e4SLinus Torvalds info->tcpi_options |= TCPI_OPT_TIMESTAMPS; 2749e60402d0SIlpo Järvinen if (tcp_is_sack(tp)) 27501da177e4SLinus Torvalds info->tcpi_options |= TCPI_OPT_SACK; 27511da177e4SLinus Torvalds if (tp->rx_opt.wscale_ok) { 27521da177e4SLinus Torvalds info->tcpi_options |= TCPI_OPT_WSCALE; 27531da177e4SLinus Torvalds info->tcpi_snd_wscale = tp->rx_opt.snd_wscale; 27541da177e4SLinus Torvalds info->tcpi_rcv_wscale = tp->rx_opt.rcv_wscale; 27551da177e4SLinus Torvalds } 27561da177e4SLinus Torvalds 27571da177e4SLinus Torvalds if (tp->ecn_flags & TCP_ECN_OK) 27581da177e4SLinus Torvalds info->tcpi_options |= TCPI_OPT_ECN; 2759b5c5693bSEric Dumazet if (tp->ecn_flags & TCP_ECN_SEEN) 2760b5c5693bSEric Dumazet info->tcpi_options |= TCPI_OPT_ECN_SEEN; 27616f73601eSYuchung Cheng if (tp->syn_data_acked) 27626f73601eSYuchung Cheng info->tcpi_options |= TCPI_OPT_SYN_DATA; 27631da177e4SLinus Torvalds 2764463c84b9SArnaldo Carvalho de Melo info->tcpi_rto = jiffies_to_usecs(icsk->icsk_rto); 2765463c84b9SArnaldo Carvalho de Melo info->tcpi_ato = jiffies_to_usecs(icsk->icsk_ack.ato); 2766c1b4a7e6SDavid S. Miller info->tcpi_snd_mss = tp->mss_cache; 2767463c84b9SArnaldo Carvalho de Melo info->tcpi_rcv_mss = icsk->icsk_ack.rcv_mss; 27681da177e4SLinus Torvalds 27695ee3afbaSRick Jones if (sk->sk_state == TCP_LISTEN) { 27705ee3afbaSRick Jones info->tcpi_unacked = sk->sk_ack_backlog; 27715ee3afbaSRick Jones info->tcpi_sacked = sk->sk_max_ack_backlog; 27725ee3afbaSRick Jones } else { 27731da177e4SLinus Torvalds info->tcpi_unacked = tp->packets_out; 27741da177e4SLinus Torvalds info->tcpi_sacked = tp->sacked_out; 27755ee3afbaSRick Jones } 27761da177e4SLinus Torvalds info->tcpi_lost = tp->lost_out; 27771da177e4SLinus Torvalds info->tcpi_retrans = tp->retrans_out; 27781da177e4SLinus Torvalds info->tcpi_fackets = tp->fackets_out; 27791da177e4SLinus Torvalds 27801da177e4SLinus Torvalds info->tcpi_last_data_sent = jiffies_to_msecs(now - tp->lsndtime); 2781463c84b9SArnaldo Carvalho de Melo info->tcpi_last_data_recv = jiffies_to_msecs(now - icsk->icsk_ack.lrcvtime); 27821da177e4SLinus Torvalds info->tcpi_last_ack_recv = jiffies_to_msecs(now - tp->rcv_tstamp); 27831da177e4SLinus Torvalds 2784d83d8461SArnaldo Carvalho de Melo info->tcpi_pmtu = icsk->icsk_pmtu_cookie; 27851da177e4SLinus Torvalds info->tcpi_rcv_ssthresh = tp->rcv_ssthresh; 2786*740b0f18SEric Dumazet info->tcpi_rtt = tp->srtt_us >> 3; 2787*740b0f18SEric Dumazet info->tcpi_rttvar = tp->mdev_us >> 2; 27881da177e4SLinus Torvalds info->tcpi_snd_ssthresh = tp->snd_ssthresh; 27891da177e4SLinus Torvalds info->tcpi_snd_cwnd = tp->snd_cwnd; 27901da177e4SLinus Torvalds info->tcpi_advmss = tp->advmss; 27911da177e4SLinus Torvalds info->tcpi_reordering = tp->reordering; 27921da177e4SLinus Torvalds 27931da177e4SLinus Torvalds info->tcpi_rcv_rtt = jiffies_to_usecs(tp->rcv_rtt_est.rtt)>>3; 27941da177e4SLinus Torvalds info->tcpi_rcv_space = tp->rcvq_space.space; 27951da177e4SLinus Torvalds 27961da177e4SLinus Torvalds info->tcpi_total_retrans = tp->total_retrans; 2797977cb0ecSEric Dumazet 2798977cb0ecSEric Dumazet info->tcpi_pacing_rate = sk->sk_pacing_rate != ~0U ? 2799977cb0ecSEric Dumazet sk->sk_pacing_rate : ~0ULL; 2800977cb0ecSEric Dumazet info->tcpi_max_pacing_rate = sk->sk_max_pacing_rate != ~0U ? 2801977cb0ecSEric Dumazet sk->sk_max_pacing_rate : ~0ULL; 28021da177e4SLinus Torvalds } 28031da177e4SLinus Torvalds EXPORT_SYMBOL_GPL(tcp_get_info); 28041da177e4SLinus Torvalds 28053fdadf7dSDmitry Mishin static int do_tcp_getsockopt(struct sock *sk, int level, 28063fdadf7dSDmitry Mishin int optname, char __user *optval, int __user *optlen) 28071da177e4SLinus Torvalds { 2808295f7324SArnaldo Carvalho de Melo struct inet_connection_sock *icsk = inet_csk(sk); 28091da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 28101da177e4SLinus Torvalds int val, len; 28111da177e4SLinus Torvalds 28121da177e4SLinus Torvalds if (get_user(len, optlen)) 28131da177e4SLinus Torvalds return -EFAULT; 28141da177e4SLinus Torvalds 28151da177e4SLinus Torvalds len = min_t(unsigned int, len, sizeof(int)); 28161da177e4SLinus Torvalds 28171da177e4SLinus Torvalds if (len < 0) 28181da177e4SLinus Torvalds return -EINVAL; 28191da177e4SLinus Torvalds 28201da177e4SLinus Torvalds switch (optname) { 28211da177e4SLinus Torvalds case TCP_MAXSEG: 2822c1b4a7e6SDavid S. Miller val = tp->mss_cache; 28231da177e4SLinus Torvalds if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 28241da177e4SLinus Torvalds val = tp->rx_opt.user_mss; 28255e6a3ce6SPavel Emelyanov if (tp->repair) 28265e6a3ce6SPavel Emelyanov val = tp->rx_opt.mss_clamp; 28271da177e4SLinus Torvalds break; 28281da177e4SLinus Torvalds case TCP_NODELAY: 28291da177e4SLinus Torvalds val = !!(tp->nonagle&TCP_NAGLE_OFF); 28301da177e4SLinus Torvalds break; 28311da177e4SLinus Torvalds case TCP_CORK: 28321da177e4SLinus Torvalds val = !!(tp->nonagle&TCP_NAGLE_CORK); 28331da177e4SLinus Torvalds break; 28341da177e4SLinus Torvalds case TCP_KEEPIDLE: 2835df19a626SEric Dumazet val = keepalive_time_when(tp) / HZ; 28361da177e4SLinus Torvalds break; 28371da177e4SLinus Torvalds case TCP_KEEPINTVL: 2838df19a626SEric Dumazet val = keepalive_intvl_when(tp) / HZ; 28391da177e4SLinus Torvalds break; 28401da177e4SLinus Torvalds case TCP_KEEPCNT: 2841df19a626SEric Dumazet val = keepalive_probes(tp); 28421da177e4SLinus Torvalds break; 28431da177e4SLinus Torvalds case TCP_SYNCNT: 2844295f7324SArnaldo Carvalho de Melo val = icsk->icsk_syn_retries ? : sysctl_tcp_syn_retries; 28451da177e4SLinus Torvalds break; 28461da177e4SLinus Torvalds case TCP_LINGER2: 28471da177e4SLinus Torvalds val = tp->linger2; 28481da177e4SLinus Torvalds if (val >= 0) 28491da177e4SLinus Torvalds val = (val ? : sysctl_tcp_fin_timeout) / HZ; 28501da177e4SLinus Torvalds break; 28511da177e4SLinus Torvalds case TCP_DEFER_ACCEPT: 2852b103cf34SJulian Anastasov val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept, 2853b103cf34SJulian Anastasov TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ); 28541da177e4SLinus Torvalds break; 28551da177e4SLinus Torvalds case TCP_WINDOW_CLAMP: 28561da177e4SLinus Torvalds val = tp->window_clamp; 28571da177e4SLinus Torvalds break; 28581da177e4SLinus Torvalds case TCP_INFO: { 28591da177e4SLinus Torvalds struct tcp_info info; 28601da177e4SLinus Torvalds 28611da177e4SLinus Torvalds if (get_user(len, optlen)) 28621da177e4SLinus Torvalds return -EFAULT; 28631da177e4SLinus Torvalds 28641da177e4SLinus Torvalds tcp_get_info(sk, &info); 28651da177e4SLinus Torvalds 28661da177e4SLinus Torvalds len = min_t(unsigned int, len, sizeof(info)); 28671da177e4SLinus Torvalds if (put_user(len, optlen)) 28681da177e4SLinus Torvalds return -EFAULT; 28691da177e4SLinus Torvalds if (copy_to_user(optval, &info, len)) 28701da177e4SLinus Torvalds return -EFAULT; 28711da177e4SLinus Torvalds return 0; 28721da177e4SLinus Torvalds } 28731da177e4SLinus Torvalds case TCP_QUICKACK: 2874295f7324SArnaldo Carvalho de Melo val = !icsk->icsk_ack.pingpong; 28751da177e4SLinus Torvalds break; 28765f8ef48dSStephen Hemminger 28775f8ef48dSStephen Hemminger case TCP_CONGESTION: 28785f8ef48dSStephen Hemminger if (get_user(len, optlen)) 28795f8ef48dSStephen Hemminger return -EFAULT; 28805f8ef48dSStephen Hemminger len = min_t(unsigned int, len, TCP_CA_NAME_MAX); 28815f8ef48dSStephen Hemminger if (put_user(len, optlen)) 28825f8ef48dSStephen Hemminger return -EFAULT; 28836687e988SArnaldo Carvalho de Melo if (copy_to_user(optval, icsk->icsk_ca_ops->name, len)) 28845f8ef48dSStephen Hemminger return -EFAULT; 28855f8ef48dSStephen Hemminger return 0; 2886e56fb50fSWilliam Allen Simpson 28873c0fef0bSJosh Hunt case TCP_THIN_LINEAR_TIMEOUTS: 28883c0fef0bSJosh Hunt val = tp->thin_lto; 28893c0fef0bSJosh Hunt break; 28903c0fef0bSJosh Hunt case TCP_THIN_DUPACK: 28913c0fef0bSJosh Hunt val = tp->thin_dupack; 28923c0fef0bSJosh Hunt break; 2893dca43c75SJerry Chu 2894ee995283SPavel Emelyanov case TCP_REPAIR: 2895ee995283SPavel Emelyanov val = tp->repair; 2896ee995283SPavel Emelyanov break; 2897ee995283SPavel Emelyanov 2898ee995283SPavel Emelyanov case TCP_REPAIR_QUEUE: 2899ee995283SPavel Emelyanov if (tp->repair) 2900ee995283SPavel Emelyanov val = tp->repair_queue; 2901ee995283SPavel Emelyanov else 2902ee995283SPavel Emelyanov return -EINVAL; 2903ee995283SPavel Emelyanov break; 2904ee995283SPavel Emelyanov 2905ee995283SPavel Emelyanov case TCP_QUEUE_SEQ: 2906ee995283SPavel Emelyanov if (tp->repair_queue == TCP_SEND_QUEUE) 2907ee995283SPavel Emelyanov val = tp->write_seq; 2908ee995283SPavel Emelyanov else if (tp->repair_queue == TCP_RECV_QUEUE) 2909ee995283SPavel Emelyanov val = tp->rcv_nxt; 2910ee995283SPavel Emelyanov else 2911ee995283SPavel Emelyanov return -EINVAL; 2912ee995283SPavel Emelyanov break; 2913ee995283SPavel Emelyanov 2914dca43c75SJerry Chu case TCP_USER_TIMEOUT: 2915dca43c75SJerry Chu val = jiffies_to_msecs(icsk->icsk_user_timeout); 2916dca43c75SJerry Chu break; 291793be6ce0SAndrey Vagin case TCP_TIMESTAMP: 291893be6ce0SAndrey Vagin val = tcp_time_stamp + tp->tsoffset; 291993be6ce0SAndrey Vagin break; 2920c9bee3b7SEric Dumazet case TCP_NOTSENT_LOWAT: 2921c9bee3b7SEric Dumazet val = tp->notsent_lowat; 2922c9bee3b7SEric Dumazet break; 29231da177e4SLinus Torvalds default: 29241da177e4SLinus Torvalds return -ENOPROTOOPT; 29253ff50b79SStephen Hemminger } 29261da177e4SLinus Torvalds 29271da177e4SLinus Torvalds if (put_user(len, optlen)) 29281da177e4SLinus Torvalds return -EFAULT; 29291da177e4SLinus Torvalds if (copy_to_user(optval, &val, len)) 29301da177e4SLinus Torvalds return -EFAULT; 29311da177e4SLinus Torvalds return 0; 29321da177e4SLinus Torvalds } 29331da177e4SLinus Torvalds 29343fdadf7dSDmitry Mishin int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval, 29353fdadf7dSDmitry Mishin int __user *optlen) 29363fdadf7dSDmitry Mishin { 29373fdadf7dSDmitry Mishin struct inet_connection_sock *icsk = inet_csk(sk); 29383fdadf7dSDmitry Mishin 29393fdadf7dSDmitry Mishin if (level != SOL_TCP) 29403fdadf7dSDmitry Mishin return icsk->icsk_af_ops->getsockopt(sk, level, optname, 29413fdadf7dSDmitry Mishin optval, optlen); 29423fdadf7dSDmitry Mishin return do_tcp_getsockopt(sk, level, optname, optval, optlen); 29433fdadf7dSDmitry Mishin } 29444bc2f18bSEric Dumazet EXPORT_SYMBOL(tcp_getsockopt); 29453fdadf7dSDmitry Mishin 29463fdadf7dSDmitry Mishin #ifdef CONFIG_COMPAT 2947543d9cfeSArnaldo Carvalho de Melo int compat_tcp_getsockopt(struct sock *sk, int level, int optname, 2948543d9cfeSArnaldo Carvalho de Melo char __user *optval, int __user *optlen) 29493fdadf7dSDmitry Mishin { 2950dec73ff0SArnaldo Carvalho de Melo if (level != SOL_TCP) 2951dec73ff0SArnaldo Carvalho de Melo return inet_csk_compat_getsockopt(sk, level, optname, 2952dec73ff0SArnaldo Carvalho de Melo optval, optlen); 29533fdadf7dSDmitry Mishin return do_tcp_getsockopt(sk, level, optname, optval, optlen); 29543fdadf7dSDmitry Mishin } 2955543d9cfeSArnaldo Carvalho de Melo EXPORT_SYMBOL(compat_tcp_getsockopt); 29563fdadf7dSDmitry Mishin #endif 29571da177e4SLinus Torvalds 2958cfb6eeb4SYOSHIFUJI Hideaki #ifdef CONFIG_TCP_MD5SIG 295971cea17eSEric Dumazet static struct tcp_md5sig_pool __percpu *tcp_md5sig_pool __read_mostly; 296071cea17eSEric Dumazet static DEFINE_MUTEX(tcp_md5sig_mutex); 2961cfb6eeb4SYOSHIFUJI Hideaki 2962765cf997SEric Dumazet static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool __percpu *pool) 2963cfb6eeb4SYOSHIFUJI Hideaki { 2964cfb6eeb4SYOSHIFUJI Hideaki int cpu; 2965765cf997SEric Dumazet 2966cfb6eeb4SYOSHIFUJI Hideaki for_each_possible_cpu(cpu) { 2967765cf997SEric Dumazet struct tcp_md5sig_pool *p = per_cpu_ptr(pool, cpu); 2968765cf997SEric Dumazet 2969cfb6eeb4SYOSHIFUJI Hideaki if (p->md5_desc.tfm) 2970cfb6eeb4SYOSHIFUJI Hideaki crypto_free_hash(p->md5_desc.tfm); 2971cfb6eeb4SYOSHIFUJI Hideaki } 2972cfb6eeb4SYOSHIFUJI Hideaki free_percpu(pool); 2973cfb6eeb4SYOSHIFUJI Hideaki } 2974cfb6eeb4SYOSHIFUJI Hideaki 297571cea17eSEric Dumazet static void __tcp_alloc_md5sig_pool(void) 2976cfb6eeb4SYOSHIFUJI Hideaki { 2977cfb6eeb4SYOSHIFUJI Hideaki int cpu; 2978765cf997SEric Dumazet struct tcp_md5sig_pool __percpu *pool; 2979cfb6eeb4SYOSHIFUJI Hideaki 2980765cf997SEric Dumazet pool = alloc_percpu(struct tcp_md5sig_pool); 2981cfb6eeb4SYOSHIFUJI Hideaki if (!pool) 298271cea17eSEric Dumazet return; 2983cfb6eeb4SYOSHIFUJI Hideaki 2984cfb6eeb4SYOSHIFUJI Hideaki for_each_possible_cpu(cpu) { 2985cfb6eeb4SYOSHIFUJI Hideaki struct crypto_hash *hash; 2986cfb6eeb4SYOSHIFUJI Hideaki 2987cfb6eeb4SYOSHIFUJI Hideaki hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC); 298850c3a487SYOSHIFUJI Hideaki / 吉藤英明 if (IS_ERR_OR_NULL(hash)) 2989cfb6eeb4SYOSHIFUJI Hideaki goto out_free; 2990cfb6eeb4SYOSHIFUJI Hideaki 2991765cf997SEric Dumazet per_cpu_ptr(pool, cpu)->md5_desc.tfm = hash; 2992cfb6eeb4SYOSHIFUJI Hideaki } 299371cea17eSEric Dumazet /* before setting tcp_md5sig_pool, we must commit all writes 299471cea17eSEric Dumazet * to memory. See ACCESS_ONCE() in tcp_get_md5sig_pool() 299571cea17eSEric Dumazet */ 299671cea17eSEric Dumazet smp_wmb(); 299771cea17eSEric Dumazet tcp_md5sig_pool = pool; 299871cea17eSEric Dumazet return; 2999cfb6eeb4SYOSHIFUJI Hideaki out_free: 3000cfb6eeb4SYOSHIFUJI Hideaki __tcp_free_md5sig_pool(pool); 3001cfb6eeb4SYOSHIFUJI Hideaki } 3002cfb6eeb4SYOSHIFUJI Hideaki 300371cea17eSEric Dumazet bool tcp_alloc_md5sig_pool(void) 3004cfb6eeb4SYOSHIFUJI Hideaki { 300571cea17eSEric Dumazet if (unlikely(!tcp_md5sig_pool)) { 300671cea17eSEric Dumazet mutex_lock(&tcp_md5sig_mutex); 3007cfb6eeb4SYOSHIFUJI Hideaki 300871cea17eSEric Dumazet if (!tcp_md5sig_pool) 300971cea17eSEric Dumazet __tcp_alloc_md5sig_pool(); 3010cfb6eeb4SYOSHIFUJI Hideaki 301171cea17eSEric Dumazet mutex_unlock(&tcp_md5sig_mutex); 3012cfb6eeb4SYOSHIFUJI Hideaki } 301371cea17eSEric Dumazet return tcp_md5sig_pool != NULL; 3014cfb6eeb4SYOSHIFUJI Hideaki } 3015cfb6eeb4SYOSHIFUJI Hideaki EXPORT_SYMBOL(tcp_alloc_md5sig_pool); 3016cfb6eeb4SYOSHIFUJI Hideaki 301735790c04SEric Dumazet 301835790c04SEric Dumazet /** 301935790c04SEric Dumazet * tcp_get_md5sig_pool - get md5sig_pool for this user 302035790c04SEric Dumazet * 302135790c04SEric Dumazet * We use percpu structure, so if we succeed, we exit with preemption 302235790c04SEric Dumazet * and BH disabled, to make sure another thread or softirq handling 302335790c04SEric Dumazet * wont try to get same context. 302435790c04SEric Dumazet */ 302535790c04SEric Dumazet struct tcp_md5sig_pool *tcp_get_md5sig_pool(void) 3026cfb6eeb4SYOSHIFUJI Hideaki { 3027765cf997SEric Dumazet struct tcp_md5sig_pool __percpu *p; 302835790c04SEric Dumazet 302935790c04SEric Dumazet local_bh_disable(); 303071cea17eSEric Dumazet p = ACCESS_ONCE(tcp_md5sig_pool); 3031cfb6eeb4SYOSHIFUJI Hideaki if (p) 303271cea17eSEric Dumazet return __this_cpu_ptr(p); 303335790c04SEric Dumazet 303435790c04SEric Dumazet local_bh_enable(); 303535790c04SEric Dumazet return NULL; 3036cfb6eeb4SYOSHIFUJI Hideaki } 303735790c04SEric Dumazet EXPORT_SYMBOL(tcp_get_md5sig_pool); 3038cfb6eeb4SYOSHIFUJI Hideaki 303949a72dfbSAdam Langley int tcp_md5_hash_header(struct tcp_md5sig_pool *hp, 3040ca35a0efSEric Dumazet const struct tcphdr *th) 304149a72dfbSAdam Langley { 304249a72dfbSAdam Langley struct scatterlist sg; 3043ca35a0efSEric Dumazet struct tcphdr hdr; 304449a72dfbSAdam Langley int err; 304549a72dfbSAdam Langley 3046ca35a0efSEric Dumazet /* We are not allowed to change tcphdr, make a local copy */ 3047ca35a0efSEric Dumazet memcpy(&hdr, th, sizeof(hdr)); 3048ca35a0efSEric Dumazet hdr.check = 0; 3049ca35a0efSEric Dumazet 305049a72dfbSAdam Langley /* options aren't included in the hash */ 3051ca35a0efSEric Dumazet sg_init_one(&sg, &hdr, sizeof(hdr)); 3052ca35a0efSEric Dumazet err = crypto_hash_update(&hp->md5_desc, &sg, sizeof(hdr)); 305349a72dfbSAdam Langley return err; 305449a72dfbSAdam Langley } 305549a72dfbSAdam Langley EXPORT_SYMBOL(tcp_md5_hash_header); 305649a72dfbSAdam Langley 305749a72dfbSAdam Langley int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, 3058cf533ea5SEric Dumazet const struct sk_buff *skb, unsigned int header_len) 305949a72dfbSAdam Langley { 306049a72dfbSAdam Langley struct scatterlist sg; 306149a72dfbSAdam Langley const struct tcphdr *tp = tcp_hdr(skb); 306249a72dfbSAdam Langley struct hash_desc *desc = &hp->md5_desc; 306395c96174SEric Dumazet unsigned int i; 306495c96174SEric Dumazet const unsigned int head_data_len = skb_headlen(skb) > header_len ? 306549a72dfbSAdam Langley skb_headlen(skb) - header_len : 0; 306649a72dfbSAdam Langley const struct skb_shared_info *shi = skb_shinfo(skb); 3067d7fd1b57SEric Dumazet struct sk_buff *frag_iter; 306849a72dfbSAdam Langley 306949a72dfbSAdam Langley sg_init_table(&sg, 1); 307049a72dfbSAdam Langley 307149a72dfbSAdam Langley sg_set_buf(&sg, ((u8 *) tp) + header_len, head_data_len); 307249a72dfbSAdam Langley if (crypto_hash_update(desc, &sg, head_data_len)) 307349a72dfbSAdam Langley return 1; 307449a72dfbSAdam Langley 307549a72dfbSAdam Langley for (i = 0; i < shi->nr_frags; ++i) { 307649a72dfbSAdam Langley const struct skb_frag_struct *f = &shi->frags[i]; 307754d27fcbSEric Dumazet unsigned int offset = f->page_offset; 307854d27fcbSEric Dumazet struct page *page = skb_frag_page(f) + (offset >> PAGE_SHIFT); 307954d27fcbSEric Dumazet 308054d27fcbSEric Dumazet sg_set_page(&sg, page, skb_frag_size(f), 308154d27fcbSEric Dumazet offset_in_page(offset)); 30829e903e08SEric Dumazet if (crypto_hash_update(desc, &sg, skb_frag_size(f))) 308349a72dfbSAdam Langley return 1; 308449a72dfbSAdam Langley } 308549a72dfbSAdam Langley 3086d7fd1b57SEric Dumazet skb_walk_frags(skb, frag_iter) 3087d7fd1b57SEric Dumazet if (tcp_md5_hash_skb_data(hp, frag_iter, 0)) 3088d7fd1b57SEric Dumazet return 1; 3089d7fd1b57SEric Dumazet 309049a72dfbSAdam Langley return 0; 309149a72dfbSAdam Langley } 309249a72dfbSAdam Langley EXPORT_SYMBOL(tcp_md5_hash_skb_data); 309349a72dfbSAdam Langley 3094cf533ea5SEric Dumazet int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key) 309549a72dfbSAdam Langley { 309649a72dfbSAdam Langley struct scatterlist sg; 309749a72dfbSAdam Langley 309849a72dfbSAdam Langley sg_init_one(&sg, key->key, key->keylen); 309949a72dfbSAdam Langley return crypto_hash_update(&hp->md5_desc, &sg, key->keylen); 310049a72dfbSAdam Langley } 310149a72dfbSAdam Langley EXPORT_SYMBOL(tcp_md5_hash_key); 310249a72dfbSAdam Langley 3103cfb6eeb4SYOSHIFUJI Hideaki #endif 3104cfb6eeb4SYOSHIFUJI Hideaki 31054ac02babSAndi Kleen void tcp_done(struct sock *sk) 31064ac02babSAndi Kleen { 31078336886fSJerry Chu struct request_sock *req = tcp_sk(sk)->fastopen_rsk; 31088336886fSJerry Chu 31094ac02babSAndi Kleen if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV) 311063231bddSPavel Emelyanov TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS); 31114ac02babSAndi Kleen 31124ac02babSAndi Kleen tcp_set_state(sk, TCP_CLOSE); 31134ac02babSAndi Kleen tcp_clear_xmit_timers(sk); 31148336886fSJerry Chu if (req != NULL) 31158336886fSJerry Chu reqsk_fastopen_remove(sk, req, false); 31164ac02babSAndi Kleen 31174ac02babSAndi Kleen sk->sk_shutdown = SHUTDOWN_MASK; 31184ac02babSAndi Kleen 31194ac02babSAndi Kleen if (!sock_flag(sk, SOCK_DEAD)) 31204ac02babSAndi Kleen sk->sk_state_change(sk); 31214ac02babSAndi Kleen else 31224ac02babSAndi Kleen inet_csk_destroy_sock(sk); 31234ac02babSAndi Kleen } 31244ac02babSAndi Kleen EXPORT_SYMBOL_GPL(tcp_done); 31254ac02babSAndi Kleen 31265f8ef48dSStephen Hemminger extern struct tcp_congestion_ops tcp_reno; 31271da177e4SLinus Torvalds 31281da177e4SLinus Torvalds static __initdata unsigned long thash_entries; 31291da177e4SLinus Torvalds static int __init set_thash_entries(char *str) 31301da177e4SLinus Torvalds { 3131413c27d8SEldad Zack ssize_t ret; 3132413c27d8SEldad Zack 31331da177e4SLinus Torvalds if (!str) 31341da177e4SLinus Torvalds return 0; 3135413c27d8SEldad Zack 3136413c27d8SEldad Zack ret = kstrtoul(str, 0, &thash_entries); 3137413c27d8SEldad Zack if (ret) 3138413c27d8SEldad Zack return 0; 3139413c27d8SEldad Zack 31401da177e4SLinus Torvalds return 1; 31411da177e4SLinus Torvalds } 31421da177e4SLinus Torvalds __setup("thash_entries=", set_thash_entries); 31431da177e4SLinus Torvalds 3144a4fe34bfSEric W. Biederman static void tcp_init_mem(void) 31454acb4190SGlauber Costa { 31464acb4190SGlauber Costa unsigned long limit = nr_free_buffer_pages() / 8; 31474acb4190SGlauber Costa limit = max(limit, 128UL); 3148a4fe34bfSEric W. Biederman sysctl_tcp_mem[0] = limit / 4 * 3; 3149a4fe34bfSEric W. Biederman sysctl_tcp_mem[1] = limit; 3150a4fe34bfSEric W. Biederman sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2; 31514acb4190SGlauber Costa } 31524acb4190SGlauber Costa 31531da177e4SLinus Torvalds void __init tcp_init(void) 31541da177e4SLinus Torvalds { 31551da177e4SLinus Torvalds struct sk_buff *skb = NULL; 3156f03d78dbSEric Dumazet unsigned long limit; 3157b49960a0SEric Dumazet int max_rshare, max_wshare, cnt; 3158074b8517SDimitri Sivanich unsigned int i; 31591da177e4SLinus Torvalds 31601f9e636eSPavel Emelyanov BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb)); 31611da177e4SLinus Torvalds 31621748376bSEric Dumazet percpu_counter_init(&tcp_sockets_allocated, 0); 3163dd24c001SEric Dumazet percpu_counter_init(&tcp_orphan_count, 0); 31646e04e021SArnaldo Carvalho de Melo tcp_hashinfo.bind_bucket_cachep = 31656e04e021SArnaldo Carvalho de Melo kmem_cache_create("tcp_bind_bucket", 31666e04e021SArnaldo Carvalho de Melo sizeof(struct inet_bind_bucket), 0, 316720c2df83SPaul Mundt SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL); 31681da177e4SLinus Torvalds 31691da177e4SLinus Torvalds /* Size and allocate the main established and bind bucket 31701da177e4SLinus Torvalds * hash tables. 31711da177e4SLinus Torvalds * 31721da177e4SLinus Torvalds * The methodology is similar to that of the buffer cache. 31731da177e4SLinus Torvalds */ 31746e04e021SArnaldo Carvalho de Melo tcp_hashinfo.ehash = 31751da177e4SLinus Torvalds alloc_large_system_hash("TCP established", 31760f7ff927SArnaldo Carvalho de Melo sizeof(struct inet_ehash_bucket), 31771da177e4SLinus Torvalds thash_entries, 3178fd90b29dSEric Dumazet 17, /* one slot per 128 KB of memory */ 31799e950efaSJohn Heffner 0, 31801da177e4SLinus Torvalds NULL, 3181f373b53bSEric Dumazet &tcp_hashinfo.ehash_mask, 318231fe62b9STim Bird 0, 31830ccfe618SJean Delvare thash_entries ? 0 : 512 * 1024); 318405dbc7b5SEric Dumazet for (i = 0; i <= tcp_hashinfo.ehash_mask; i++) 31853ab5aee7SEric Dumazet INIT_HLIST_NULLS_HEAD(&tcp_hashinfo.ehash[i].chain, i); 318605dbc7b5SEric Dumazet 3187230140cfSEric Dumazet if (inet_ehash_locks_alloc(&tcp_hashinfo)) 3188230140cfSEric Dumazet panic("TCP: failed to alloc ehash_locks"); 31896e04e021SArnaldo Carvalho de Melo tcp_hashinfo.bhash = 31901da177e4SLinus Torvalds alloc_large_system_hash("TCP bind", 31910f7ff927SArnaldo Carvalho de Melo sizeof(struct inet_bind_hashbucket), 3192f373b53bSEric Dumazet tcp_hashinfo.ehash_mask + 1, 3193fd90b29dSEric Dumazet 17, /* one slot per 128 KB of memory */ 31949e950efaSJohn Heffner 0, 31956e04e021SArnaldo Carvalho de Melo &tcp_hashinfo.bhash_size, 31961da177e4SLinus Torvalds NULL, 319731fe62b9STim Bird 0, 31981da177e4SLinus Torvalds 64 * 1024); 3199074b8517SDimitri Sivanich tcp_hashinfo.bhash_size = 1U << tcp_hashinfo.bhash_size; 32006e04e021SArnaldo Carvalho de Melo for (i = 0; i < tcp_hashinfo.bhash_size; i++) { 32016e04e021SArnaldo Carvalho de Melo spin_lock_init(&tcp_hashinfo.bhash[i].lock); 32026e04e021SArnaldo Carvalho de Melo INIT_HLIST_HEAD(&tcp_hashinfo.bhash[i].chain); 32031da177e4SLinus Torvalds } 32041da177e4SLinus Torvalds 3205c5ed63d6SEric Dumazet 3206c5ed63d6SEric Dumazet cnt = tcp_hashinfo.ehash_mask + 1; 3207c5ed63d6SEric Dumazet 3208c5ed63d6SEric Dumazet tcp_death_row.sysctl_max_tw_buckets = cnt / 2; 3209c5ed63d6SEric Dumazet sysctl_tcp_max_orphans = cnt / 2; 3210c5ed63d6SEric Dumazet sysctl_max_syn_backlog = max(128, cnt / 256); 32111da177e4SLinus Torvalds 3212a4fe34bfSEric W. Biederman tcp_init_mem(); 3213c43b874dSJason Wang /* Set per-socket limits to no more than 1/128 the pressure threshold */ 32145fb84b14SEric Dumazet limit = nr_free_buffer_pages() << (PAGE_SHIFT - 7); 3215b49960a0SEric Dumazet max_wshare = min(4UL*1024*1024, limit); 3216b49960a0SEric Dumazet max_rshare = min(6UL*1024*1024, limit); 32177b4f4b5eSJohn Heffner 32183ab224beSHideo Aoki sysctl_tcp_wmem[0] = SK_MEM_QUANTUM; 32197b4f4b5eSJohn Heffner sysctl_tcp_wmem[1] = 16*1024; 3220b49960a0SEric Dumazet sysctl_tcp_wmem[2] = max(64*1024, max_wshare); 32217b4f4b5eSJohn Heffner 32223ab224beSHideo Aoki sysctl_tcp_rmem[0] = SK_MEM_QUANTUM; 32237b4f4b5eSJohn Heffner sysctl_tcp_rmem[1] = 87380; 3224b49960a0SEric Dumazet sysctl_tcp_rmem[2] = max(87380, max_rshare); 32251da177e4SLinus Torvalds 3226afd46503SJoe Perches pr_info("Hash tables configured (established %u bind %u)\n", 3227f373b53bSEric Dumazet tcp_hashinfo.ehash_mask + 1, tcp_hashinfo.bhash_size); 3228317a76f9SStephen Hemminger 322951c5d0c4SDavid S. Miller tcp_metrics_init(); 323051c5d0c4SDavid S. Miller 3231317a76f9SStephen Hemminger tcp_register_congestion_control(&tcp_reno); 3232da5c78c8SWilliam Allen Simpson 323346d3ceabSEric Dumazet tcp_tasklet_init(); 32341da177e4SLinus Torvalds } 3235