/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system.  INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		Implementation of the Transmission Control Protocol(TCP).
 *
 * Version:	$Id: tcp.c,v 1.216 2002/02/01 22:01:04 davem Exp $
 *
 * Authors:	Ross Biro
 *		Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
 *		Mark Evans, <evansmp@uhura.aston.ac.uk>
 *		Corey Minyard <wf-rch!minyard@relay.EU.net>
 *		Florian La Roche, <flla@stud.uni-sb.de>
 *		Charles Hedrick, <hedrick@klinzhai.rutgers.edu>
 *		Linus Torvalds, <torvalds@cs.helsinki.fi>
 *		Alan Cox, <gw4pts@gw4pts.ampr.org>
 *		Matthew Dillon, <dillon@apollo.west.oic.com>
 *		Arnt Gulbrandsen, <agulbra@nvg.unit.no>
 *		Jorge Cwik, <jorge@laser.satlink.net>
 *
 * Fixes:
 *		Alan Cox	:	Numerous verify_area() calls
 *		Alan Cox	:	Set the ACK bit on a reset
 *		Alan Cox	:	Stopped it crashing if it closed while
 *					sk->inuse=1 and was trying to connect
 *					(tcp_err()).
 *		Alan Cox	:	All icmp error handling was broken
 *					pointers passed where wrong and the
 *					socket was looked up backwards. Nobody
 *					tested any icmp error code obviously.
 *		Alan Cox	:	tcp_err() now handled properly. It
 *					wakes people on errors. poll
 *					behaves and the icmp error race
 *					has gone by moving it into sock.c
 *		Alan Cox	:	tcp_send_reset() fixed to work for
 *					everything not just packets for
 *					unknown sockets.
 *		Alan Cox	:	tcp option processing.
 *		Alan Cox	:	Reset tweaked (still not 100%) [Had
 *					syn rule wrong]
 *		Herp Rosmanith	:	More reset fixes
 *		Alan Cox	:	No longer acks invalid rst frames.
 *					Acking any kind of RST is right out.
 *		Alan Cox	:	Sets an ignore me flag on an rst
 *					receive otherwise odd bits of prattle
 *					escape still
 *		Alan Cox	:	Fixed another acking RST frame bug.
 *					Should stop LAN workplace lockups.
 *		Alan Cox	:	Some tidyups using the new skb list
 *					facilities
 *		Alan Cox	:	sk->keepopen now seems to work
 *		Alan Cox	:	Pulls options out correctly on accepts
 *		Alan Cox	:	Fixed assorted sk->rqueue->next errors
 *		Alan Cox	:	PSH doesn't end a TCP read. Switched a
 *					bit to skb ops.
 *		Alan Cox	:	Tidied tcp_data to avoid a potential
 *					nasty.
 *		Alan Cox	:	Added some better commenting, as the
 *					tcp is hard to follow
 *		Alan Cox	:	Removed incorrect check for 20 * psh
 *		Michael O'Reilly:	ack < copied bug fix.
 *		Johannes Stille	:	Misc tcp fixes (not all in yet).
 *		Alan Cox	:	FIN with no memory -> CRASH
 *		Alan Cox	:	Added socket option proto entries.
 *					Also added awareness of them to accept.
 *		Alan Cox	:	Added TCP options (SOL_TCP)
 *		Alan Cox	:	Switched wakeup calls to callbacks,
 *					so the kernel can layer network
 *					sockets.
 *		Alan Cox	:	Use ip_tos/ip_ttl settings.
 *		Alan Cox	:	Handle FIN (more) properly (we hope).
 *		Alan Cox	:	RST frames sent on unsynchronised
 *					state ack error.
 *		Alan Cox	:	Put in missing check for SYN bit.
 *		Alan Cox	:	Added tcp_select_window() aka NET2E
 *					window non shrink trick.
 *		Alan Cox	:	Added a couple of small NET2E timer
 *					fixes
 *		Charles Hedrick	:	TCP fixes
 *		Toomas Tamm	:	TCP window fixes
 *		Alan Cox	:	Small URG fix to rlogin ^C ack fight
 *		Charles Hedrick	:	Rewrote most of it to actually work
 *		Linus		:	Rewrote tcp_read() and URG handling
 *					completely
 *		Gerhard Koerting:	Fixed some missing timer handling
 *		Matthew Dillon	:	Reworked TCP machine states as per RFC
 *		Gerhard Koerting:	PC/TCP workarounds
 *		Adam Caldwell	:	Assorted timer/timing errors
 *		Matthew Dillon	:	Fixed another RST bug
 *		Alan Cox	:	Move to kernel side addressing changes.
 *		Alan Cox	:	Beginning work on TCP fastpathing
 *					(not yet usable)
 *		Arnt Gulbrandsen:	Turbocharged tcp_check() routine.
 *		Alan Cox	:	TCP fast path debugging
 *		Alan Cox	:	Window clamping
 *		Michael Riepe	:	Bug in tcp_check()
 *		Matt Dillon	:	More TCP improvements and RST bug fixes
 *		Matt Dillon	:	Yet more small nasties removed from the
 *					TCP code (Be very nice to this man if
 *					tcp finally works 100%) 8)
 *		Alan Cox	:	BSD accept semantics.
 *		Alan Cox	:	Reset on closedown bug.
 *	Peter De Schrijver	:	ENOTCONN check missing in tcp_sendto().
 *		Michael Pall	:	Handle poll() after URG properly in
 *					all cases.
 *		Michael Pall	:	Undo the last fix in tcp_read_urg()
 *					(multi URG PUSH broke rlogin).
 *		Michael Pall	:	Fix the multi URG PUSH problem in
 *					tcp_readable(), poll() after URG
 *					works now.
 *		Michael Pall	:	recv(...,MSG_OOB) never blocks in the
 *					BSD api.
 *		Alan Cox	:	Changed the semantics of sk->socket to
 *					fix a race and a signal problem with
 *					accept() and async I/O.
 *		Alan Cox	:	Relaxed the rules on tcp_sendto().
 *		Yury Shevchuk	:	Really fixed accept() blocking problem.
 *		Craig I. Hagan	:	Allow for BSD compatible TIME_WAIT for
 *					clients/servers which listen in on
 *					fixed ports.
 *		Alan Cox	:	Cleaned the above up and shrank it to
 *					a sensible code size.
 *		Alan Cox	:	Self connect lockup fix.
 *		Alan Cox	:	No connect to multicast.
 *		Ross Biro	:	Close unaccepted children on master
 *					socket close.
 *		Alan Cox	:	Reset tracing code.
 *		Alan Cox	:	Spurious resets on shutdown.
 *		Alan Cox	:	Giant 15 minute/60 second timer error
 *		Alan Cox	:	Small whoops in polling before an
 *					accept.
 *		Alan Cox	:	Kept the state trace facility since
 *					it's handy for debugging.
 *		Alan Cox	:	More reset handler fixes.
 *		Alan Cox	:	Started rewriting the code based on
 *					the RFC's for other useful protocol
 *					references see: Comer, KA9Q NOS, and
 *					for a reference on the difference
 *					between specifications and how BSD
 *					works see the 4.4lite source.
 *		A.N.Kuznetsov	:	Don't time wait on completion of tidy
 *					close.
 *		Linus Torvalds	:	Fin/Shutdown & copied_seq changes.
 *		Linus Torvalds	:	Fixed BSD port reuse to work first syn
 *		Alan Cox	:	Reimplemented timers as per the RFC
 *					and using multiple timers for sanity.
 *		Alan Cox	:	Small bug fixes, and a lot of new
 *					comments.
 *		Alan Cox	:	Fixed dual reader crash by locking
 *					the buffers (much like datagram.c)
 *		Alan Cox	:	Fixed stuck sockets in probe. A probe
 *					now gets fed up of retrying without
 *					(even a no space) answer.
 *		Alan Cox	:	Extracted closing code better
 *		Alan Cox	:	Fixed the closing state machine to
 *					resemble the RFC.
 *		Alan Cox	:	More 'per spec' fixes.
 *		Jorge Cwik	:	Even faster checksumming.
 *		Alan Cox	:	tcp_data() doesn't ack illegal PSH
 *					only frames. At least one pc tcp stack
 *					generates them.
 *		Alan Cox	:	Cache last socket.
 *		Alan Cox	:	Per route irtt.
 *		Matt Day	:	poll()->select() match BSD precisely on error
 *		Alan Cox	:	New buffers
 *		Marc Tamsky	:	Various sk->prot->retransmits and
 *					sk->retransmits misupdating fixed.
 *					Fixed tcp_write_timeout: stuck close,
 *					and TCP syn retries gets used now.
 *		Mark Yarvis	:	In tcp_read_wakeup(), don't send an
 *					ack if state is TCP_CLOSED.
 *		Alan Cox	:	Look up device on a retransmit - routes may
 *					change. Doesn't yet cope with MSS shrink right
 *					but it's a start!
 *		Marc Tamsky	:	Closing in closing fixes.
 *		Mike Shaver	:	RFC1122 verifications.
 *		Alan Cox	:	rcv_saddr errors.
 *		Alan Cox	:	Block double connect().
 *		Alan Cox	:	Small hooks for enSKIP.
 *		Alexey Kuznetsov:	Path MTU discovery.
 *		Alan Cox	:	Support soft errors.
 *		Alan Cox	:	Fix MTU discovery pathological case
 *					when the remote claims no mtu!
 *		Marc Tamsky	:	TCP_CLOSE fix.
 *		Colin (G3TNE)	:	Send a reset on syn ack replies in
 *					window but wrong (fixes NT lpd problems)
 *		Pedro Roque	:	Better TCP window handling, delayed ack.
 *		Joerg Reuter	:	No modification of locked buffers in
 *					tcp_do_retransmit()
 *		Eric Schenk	:	Changed receiver side silly window
 *					avoidance algorithm to BSD style
 *					algorithm. This doubles throughput
 *					against machines running Solaris,
 *					and seems to result in general
 *					improvement.
 *	Stefan Magdalinski	:	adjusted tcp_readable() to fix FIONREAD
 *	Willy Konynenberg	:	Transparent proxying support.
 *	Mike McLagan		:	Routing by source
 *		Keith Owens	:	Do proper merging with partial SKB's in
 *					tcp_do_sendmsg to avoid burstiness.
 *		Eric Schenk	:	Fix fast close down bug with
 *					shutdown() followed by close().
 *		Andi Kleen	:	Make poll agree with SIGIO
 *	Salvatore Sanfilippo	:	Support SO_LINGER with linger == 1 and
 *					lingertime == 0 (RFC 793 ABORT Call)
 *	Hirokazu Takahashi	:	Use copy_from_user() instead of
 *					csum_and_copy_from_user() if possible.
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 *
 * Description of States:
 *
 *	TCP_SYN_SENT		sent a connection request, waiting for ack
 *
 *	TCP_SYN_RECV		received a connection request, sent ack,
 *				waiting for final ack in three-way handshake.
 *
 *	TCP_ESTABLISHED		connection established
 *
 *	TCP_FIN_WAIT1		our side has shutdown, waiting to complete
 *				transmission of remaining buffered data
 *
 *	TCP_FIN_WAIT2		all buffered data sent, waiting for remote
 *				to shutdown
 *
 *	TCP_CLOSING		both sides have shutdown but we still have
 *				data we have to finish sending
 *
 *	TCP_TIME_WAIT		timeout to catch resent junk before entering
 *				closed, can only be entered from FIN_WAIT2
 *				or CLOSING.  Required because the other end
 *				may not have gotten our last ACK causing it
 *				to retransmit the data packet (which we ignore)
 *
 *	TCP_CLOSE_WAIT		remote side has shutdown and is waiting for
 *				us to finish writing our data and to shutdown
 *				(we have to close() to move on to LAST_ACK)
 *
 *	TCP_LAST_ACK		our side has shutdown after remote has
 *				shutdown.  There may still be data in our
 *				buffer that we have to finish sending
 *
 *	TCP_CLOSE		socket is finished
 */

#include <linux/module.h>
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/poll.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
#include <linux/fs.h>
#include <linux/random.h>
#include <linux/bootmem.h>
#include <linux/cache.h>
#include <linux/err.h>
#include <linux/crypto.h>

#include <net/icmp.h>
#include <net/tcp.h>
#include <net/xfrm.h>
#include <net/ip.h>
#include <net/netdma.h>

#include <asm/uaccess.h>
#include <asm/ioctls.h>

int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT;

DEFINE_SNMP_STAT(struct tcp_mib, tcp_statistics) __read_mostly;

atomic_t tcp_orphan_count = ATOMIC_INIT(0);

EXPORT_SYMBOL_GPL(tcp_orphan_count);

int sysctl_tcp_mem[3] __read_mostly;
int sysctl_tcp_wmem[3] __read_mostly;
int sysctl_tcp_rmem[3] __read_mostly;

EXPORT_SYMBOL(sysctl_tcp_mem);
EXPORT_SYMBOL(sysctl_tcp_rmem);
EXPORT_SYMBOL(sysctl_tcp_wmem);

atomic_t tcp_memory_allocated;	/* Current allocated memory. */
atomic_t tcp_sockets_allocated;	/* Current number of TCP sockets. */

EXPORT_SYMBOL(tcp_memory_allocated);
EXPORT_SYMBOL(tcp_sockets_allocated);

/*
 * Pressure flag: try to collapse.
 * Technical note: it is used by multiple contexts non atomically.
 * All the sk_stream_mem_schedule() is of this nature: accounting
 * is strict, actions are advisory and have some latency.
 */
int tcp_memory_pressure __read_mostly;

EXPORT_SYMBOL(tcp_memory_pressure);

void tcp_enter_memory_pressure(void)
{
	if (!tcp_memory_pressure) {
		NET_INC_STATS(LINUX_MIB_TCPMEMORYPRESSURES);
		tcp_memory_pressure = 1;
	}
}

EXPORT_SYMBOL(tcp_enter_memory_pressure);

/*
 *	Wait for a TCP event.
 *
 *	Note that we don't need to lock the socket, as the upper poll layers
 *	take care of normal races (between the test and the event) and we don't
 *	go look at any of the socket buffers directly.
 */
unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
{
	unsigned int mask;
	struct sock *sk = sock->sk;
	struct tcp_sock *tp = tcp_sk(sk);

	poll_wait(file, sk->sk_sleep, wait);
	if (sk->sk_state == TCP_LISTEN)
		return inet_csk_listen_poll(sk);

	/* Socket is not locked. We are protected from async events
	 * by poll logic, and correct handling of state changes
	 * made by other threads is impossible in any case.
	 */

	mask = 0;
	if (sk->sk_err)
		mask = POLLERR;

	/*
	 * POLLHUP is certainly not done right. But poll() doesn't
	 * have a notion of HUP in just one direction, and for a
	 * socket the read side is more interesting.
	 *
	 * Some poll() documentation says that POLLHUP is incompatible
	 * with the POLLOUT/POLLWR flags, so somebody should check this
	 * all. But careful, it tends to be safer to return too many
	 * bits than too few, and you can easily break real applications
	 * if you don't tell them that something has hung up!
	 *
	 * Check-me.
	 *
	 * Check number 1. POLLHUP is _UNMASKABLE_ event (see UNIX98 and
	 * our fs/select.c). It means that after we received EOF,
	 * poll always returns immediately, making impossible poll() on write()
	 * in state CLOSE_WAIT. One solution is evident --- to set POLLHUP
	 * if and only if shutdown has been made in both directions.
	 * Actually, it is interesting to look how Solaris and DUX
	 * solve this dilemma. I would prefer, if POLLHUP were maskable,
	 * then we could set it on SND_SHUTDOWN. BTW examples given
	 * in Stevens' books assume exactly this behaviour, it explains
	 * why POLLHUP is incompatible with POLLOUT.	--ANK
	 *
	 * NOTE. Check for TCP_CLOSE is added. The goal is to prevent
	 * blocking on fresh not-connected or disconnected socket. --ANK
	 */
	if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == TCP_CLOSE)
		mask |= POLLHUP;
	if (sk->sk_shutdown & RCV_SHUTDOWN)
		mask |= POLLIN | POLLRDNORM | POLLRDHUP;

	/* Connected? */
	if ((1 << sk->sk_state) & ~(TCPF_SYN_SENT | TCPF_SYN_RECV)) {
		/* Potential race condition. If read of tp below will
		 * escape above sk->sk_state, we can be illegally awakened
		 * in SYN_* states. */
		if ((tp->rcv_nxt != tp->copied_seq) &&
		    (tp->urg_seq != tp->copied_seq ||
		     tp->rcv_nxt != tp->copied_seq + 1 ||
		     sock_flag(sk, SOCK_URGINLINE) || !tp->urg_data))
			mask |= POLLIN | POLLRDNORM;

		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
			if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
				mask |= POLLOUT | POLLWRNORM;
			} else {  /* send SIGIO later */
				set_bit(SOCK_ASYNC_NOSPACE,
					&sk->sk_socket->flags);
				set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);

				/* Race breaker. If space is freed after
				 * wspace test but before the flags are set,
				 * IO signal will be lost.
				 */
				if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk))
					mask |= POLLOUT | POLLWRNORM;
			}
		}

		if (tp->urg_data & TCP_URG_VALID)
			mask |= POLLPRI;
	}
	return mask;
}

int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int answ;

	switch (cmd) {
	case SIOCINQ:
		if (sk->sk_state == TCP_LISTEN)
			return -EINVAL;

		lock_sock(sk);
		if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))
			answ = 0;
		else if (sock_flag(sk, SOCK_URGINLINE) ||
			 !tp->urg_data ||
			 before(tp->urg_seq, tp->copied_seq) ||
			 !before(tp->urg_seq, tp->rcv_nxt)) {
			answ = tp->rcv_nxt - tp->copied_seq;

			/* Subtract 1, if FIN is in queue. */
			if (answ && !skb_queue_empty(&sk->sk_receive_queue))
				answ -=
		       tcp_hdr((struct sk_buff *)sk->sk_receive_queue.prev)->fin;
		} else
			answ = tp->urg_seq - tp->copied_seq;
		release_sock(sk);
		break;
	case SIOCATMARK:
		answ = tp->urg_data && tp->urg_seq == tp->copied_seq;
		break;
	case SIOCOUTQ:
		if (sk->sk_state == TCP_LISTEN)
			return -EINVAL;

		if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))
			answ = 0;
		else
			answ = tp->write_seq - tp->snd_una;
		break;
	default:
		return -ENOIOCTLCMD;
	}

	return put_user(answ, (int __user *)arg);
}

static inline void tcp_mark_push(struct tcp_sock *tp, struct sk_buff *skb)
{
	TCP_SKB_CB(skb)->flags |= TCPCB_FLAG_PSH;
	tp->pushed_seq = tp->write_seq;
}

static inline int forced_push(struct tcp_sock *tp)
{
	return after(tp->write_seq, tp->pushed_seq + (tp->max_window >> 1));
}

static inline void skb_entail(struct sock *sk, struct sk_buff *skb)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);

	skb->csum = 0;
	tcb->seq = tcb->end_seq = tp->write_seq;
	tcb->flags = TCPCB_FLAG_ACK;
	tcb->sacked = 0;
	skb_header_release(skb);
	tcp_add_write_queue_tail(sk, skb);
	sk_charge_skb(sk, skb);
	if (tp->nonagle & TCP_NAGLE_PUSH)
		tp->nonagle &= ~TCP_NAGLE_PUSH;
}

static inline void tcp_mark_urg(struct tcp_sock *tp, int flags,
				struct sk_buff *skb)
{
	if (flags & MSG_OOB) {
		tp->urg_mode = 1;
		tp->snd_up = tp->write_seq;
		TCP_SKB_CB(skb)->sacked |= TCPCB_URG;
	}
}

static inline void tcp_push(struct sock *sk, int flags, int mss_now,
			    int nonagle)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (tcp_send_head(sk)) {
		struct sk_buff *skb = tcp_write_queue_tail(sk);
		if (!(flags & MSG_MORE) || forced_push(tp))
			tcp_mark_push(tp, skb);
		tcp_mark_urg(tp, flags, skb);
		__tcp_push_pending_frames(sk, mss_now,
					  (flags & MSG_MORE) ? TCP_NAGLE_CORK : nonagle);
	}
}

static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffset,
				size_t psize, int flags)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int mss_now, size_goal;
	int err;
	ssize_t copied;
	long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);

	/* Wait for a connection to finish. */
	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT))
		if ((err = sk_stream_wait_connect(sk, &timeo)) != 0)
			goto out_err;

	clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);

	mss_now = tcp_current_mss(sk, !(flags & MSG_OOB));
	size_goal = tp->xmit_size_goal;
	copied = 0;

	err = -EPIPE;
	if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))
		goto do_error;

	while (psize > 0) {
		struct sk_buff *skb = tcp_write_queue_tail(sk);
		struct page *page = pages[poffset / PAGE_SIZE];
		int copy, i, can_coalesce;
		int offset = poffset % PAGE_SIZE;
		int size = min_t(size_t, psize, PAGE_SIZE - offset);

		if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) {
new_segment:
			if (!sk_stream_memory_free(sk))
				goto wait_for_sndbuf;

			skb = sk_stream_alloc_pskb(sk, 0, 0,
						   sk->sk_allocation);
			if (!skb)
				goto wait_for_memory;

			skb_entail(sk, skb);
			copy = size_goal;
		}

		if (copy > size)
			copy = size;

		i = skb_shinfo(skb)->nr_frags;
		can_coalesce = skb_can_coalesce(skb, i, page, offset);
		if (!can_coalesce && i >= MAX_SKB_FRAGS) {
			tcp_mark_push(tp, skb);
			goto new_segment;
		}
		if (!sk_stream_wmem_schedule(sk, copy))
			goto wait_for_memory;

		if (can_coalesce) {
			skb_shinfo(skb)->frags[i - 1].size += copy;
		} else {
			get_page(page);
			skb_fill_page_desc(skb, i, page, offset, copy);
		}

		skb->len += copy;
		skb->data_len += copy;
		skb->truesize += copy;
		sk->sk_wmem_queued += copy;
		sk->sk_forward_alloc -= copy;
		skb->ip_summed = CHECKSUM_PARTIAL;
		tp->write_seq += copy;
		TCP_SKB_CB(skb)->end_seq += copy;
		skb_shinfo(skb)->gso_segs = 0;

		if (!copied)
			TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_PSH;

		copied += copy;
		poffset += copy;
		if (!(psize -= copy))
			goto out;

		if (skb->len < mss_now || (flags & MSG_OOB))
			continue;

		if (forced_push(tp)) {
			tcp_mark_push(tp, skb);
			__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH);
		} else if (skb == tcp_send_head(sk))
			tcp_push_one(sk, mss_now);
		continue;

wait_for_sndbuf:
		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
wait_for_memory:
		if (copied)
			tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH);

		if ((err = sk_stream_wait_memory(sk, &timeo)) != 0)
			goto do_error;

		mss_now = tcp_current_mss(sk, !(flags & MSG_OOB));
		size_goal = tp->xmit_size_goal;
	}

out:
	if (copied)
		tcp_push(sk, flags, mss_now, tp->nonagle);
	return copied;

do_error:
	if (copied)
		goto out;
out_err:
	return sk_stream_error(sk, flags, err);
}

ssize_t tcp_sendpage(struct socket *sock, struct page *page, int offset,
		     size_t size, int flags)
{
	ssize_t res;
	struct sock *sk = sock->sk;

	if (!(sk->sk_route_caps & NETIF_F_SG) ||
	    !(sk->sk_route_caps & NETIF_F_ALL_CSUM))
		return sock_no_sendpage(sock, page, offset, size, flags);

	lock_sock(sk);
	TCP_CHECK_TIMER(sk);
	res = do_tcp_sendpages(sk, &page, offset, size, flags);
	TCP_CHECK_TIMER(sk);
	release_sock(sk);
	return res;
}

#define TCP_PAGE(sk)	(sk->sk_sndmsg_page)
#define TCP_OFF(sk)	(sk->sk_sndmsg_off)

static inline int select_size(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int tmp = tp->mss_cache;
Miller if (sk->sk_route_caps & NETIF_F_SG) { 648bcd76111SHerbert Xu if (sk_can_gso(sk)) 649c65f7f00SDavid S. Miller tmp = 0; 650b4e26f5eSDavid S. Miller else { 651b4e26f5eSDavid S. Miller int pgbreak = SKB_MAX_HEAD(MAX_TCP_HEADER); 652b4e26f5eSDavid S. Miller 653b4e26f5eSDavid S. Miller if (tmp >= pgbreak && 654b4e26f5eSDavid S. Miller tmp <= pgbreak + (MAX_SKB_FRAGS - 1) * PAGE_SIZE) 655b4e26f5eSDavid S. Miller tmp = pgbreak; 656b4e26f5eSDavid S. Miller } 657b4e26f5eSDavid S. Miller } 6581da177e4SLinus Torvalds 6591da177e4SLinus Torvalds return tmp; 6601da177e4SLinus Torvalds } 6611da177e4SLinus Torvalds 6621da177e4SLinus Torvalds int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 6631da177e4SLinus Torvalds size_t size) 6641da177e4SLinus Torvalds { 6651da177e4SLinus Torvalds struct iovec *iov; 6661da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 6671da177e4SLinus Torvalds struct sk_buff *skb; 6681da177e4SLinus Torvalds int iovlen, flags; 669c1b4a7e6SDavid S. Miller int mss_now, size_goal; 6701da177e4SLinus Torvalds int err, copied; 6711da177e4SLinus Torvalds long timeo; 6721da177e4SLinus Torvalds 6731da177e4SLinus Torvalds lock_sock(sk); 6741da177e4SLinus Torvalds TCP_CHECK_TIMER(sk); 6751da177e4SLinus Torvalds 6761da177e4SLinus Torvalds flags = msg->msg_flags; 6771da177e4SLinus Torvalds timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); 6781da177e4SLinus Torvalds 6791da177e4SLinus Torvalds /* Wait for a connection to finish. */ 6801da177e4SLinus Torvalds if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) 6811da177e4SLinus Torvalds if ((err = sk_stream_wait_connect(sk, &timeo)) != 0) 6821da177e4SLinus Torvalds goto out_err; 6831da177e4SLinus Torvalds 6841da177e4SLinus Torvalds /* This should be in poll */ 6851da177e4SLinus Torvalds clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 6861da177e4SLinus Torvalds 6871da177e4SLinus Torvalds mss_now = tcp_current_mss(sk, !(flags&MSG_OOB)); 688c1b4a7e6SDavid S. 
Miller size_goal = tp->xmit_size_goal; 6891da177e4SLinus Torvalds 6901da177e4SLinus Torvalds /* Ok commence sending. */ 6911da177e4SLinus Torvalds iovlen = msg->msg_iovlen; 6921da177e4SLinus Torvalds iov = msg->msg_iov; 6931da177e4SLinus Torvalds copied = 0; 6941da177e4SLinus Torvalds 6951da177e4SLinus Torvalds err = -EPIPE; 6961da177e4SLinus Torvalds if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) 6971da177e4SLinus Torvalds goto do_error; 6981da177e4SLinus Torvalds 6991da177e4SLinus Torvalds while (--iovlen >= 0) { 7001da177e4SLinus Torvalds int seglen = iov->iov_len; 7011da177e4SLinus Torvalds unsigned char __user *from = iov->iov_base; 7021da177e4SLinus Torvalds 7031da177e4SLinus Torvalds iov++; 7041da177e4SLinus Torvalds 7051da177e4SLinus Torvalds while (seglen > 0) { 7061da177e4SLinus Torvalds int copy; 7071da177e4SLinus Torvalds 708fe067e8aSDavid S. Miller skb = tcp_write_queue_tail(sk); 7091da177e4SLinus Torvalds 710fe067e8aSDavid S. Miller if (!tcp_send_head(sk) || 711c1b4a7e6SDavid S. Miller (copy = size_goal - skb->len) <= 0) { 7121da177e4SLinus Torvalds 7131da177e4SLinus Torvalds new_segment: 7141da177e4SLinus Torvalds /* Allocate new segment. If the interface is SG, 7151da177e4SLinus Torvalds * allocate skb fitting to single page. 7161da177e4SLinus Torvalds */ 7171da177e4SLinus Torvalds if (!sk_stream_memory_free(sk)) 7181da177e4SLinus Torvalds goto wait_for_sndbuf; 7191da177e4SLinus Torvalds 7209e412ba7SIlpo Järvinen skb = sk_stream_alloc_pskb(sk, select_size(sk), 7211da177e4SLinus Torvalds 0, sk->sk_allocation); 7221da177e4SLinus Torvalds if (!skb) 7231da177e4SLinus Torvalds goto wait_for_memory; 7241da177e4SLinus Torvalds 7251da177e4SLinus Torvalds /* 7261da177e4SLinus Torvalds * Check whether we can use HW checksum. 
7271da177e4SLinus Torvalds */ 7288648b305SHerbert Xu if (sk->sk_route_caps & NETIF_F_ALL_CSUM) 72984fa7933SPatrick McHardy skb->ip_summed = CHECKSUM_PARTIAL; 7301da177e4SLinus Torvalds 7319e412ba7SIlpo Järvinen skb_entail(sk, skb); 732c1b4a7e6SDavid S. Miller copy = size_goal; 7331da177e4SLinus Torvalds } 7341da177e4SLinus Torvalds 7351da177e4SLinus Torvalds /* Try to append data to the end of skb. */ 7361da177e4SLinus Torvalds if (copy > seglen) 7371da177e4SLinus Torvalds copy = seglen; 7381da177e4SLinus Torvalds 7391da177e4SLinus Torvalds /* Where to copy to? */ 7401da177e4SLinus Torvalds if (skb_tailroom(skb) > 0) { 7411da177e4SLinus Torvalds /* We have some space in skb head. Superb! */ 7421da177e4SLinus Torvalds if (copy > skb_tailroom(skb)) 7431da177e4SLinus Torvalds copy = skb_tailroom(skb); 7441da177e4SLinus Torvalds if ((err = skb_add_data(skb, from, copy)) != 0) 7451da177e4SLinus Torvalds goto do_fault; 7461da177e4SLinus Torvalds } else { 7471da177e4SLinus Torvalds int merge = 0; 7481da177e4SLinus Torvalds int i = skb_shinfo(skb)->nr_frags; 7491da177e4SLinus Torvalds struct page *page = TCP_PAGE(sk); 7501da177e4SLinus Torvalds int off = TCP_OFF(sk); 7511da177e4SLinus Torvalds 7521da177e4SLinus Torvalds if (skb_can_coalesce(skb, i, page, off) && 7531da177e4SLinus Torvalds off != PAGE_SIZE) { 7541da177e4SLinus Torvalds /* We can extend the last page 7551da177e4SLinus Torvalds * fragment. */ 7561da177e4SLinus Torvalds merge = 1; 7571da177e4SLinus Torvalds } else if (i == MAX_SKB_FRAGS || 7581da177e4SLinus Torvalds (!i && 7591da177e4SLinus Torvalds !(sk->sk_route_caps & NETIF_F_SG))) { 7601da177e4SLinus Torvalds /* Need to add new fragment and cannot 7611da177e4SLinus Torvalds * do this because interface is non-SG, 7621da177e4SLinus Torvalds * or because all the page slots are 7631da177e4SLinus Torvalds * busy. 
*/ 7641da177e4SLinus Torvalds tcp_mark_push(tp, skb); 7651da177e4SLinus Torvalds goto new_segment; 7661da177e4SLinus Torvalds } else if (page) { 7671da177e4SLinus Torvalds if (off == PAGE_SIZE) { 7681da177e4SLinus Torvalds put_page(page); 7691da177e4SLinus Torvalds TCP_PAGE(sk) = page = NULL; 770fb5f5e6eSHerbert Xu off = 0; 7711da177e4SLinus Torvalds } 772ef015786SHerbert Xu } else 773fb5f5e6eSHerbert Xu off = 0; 774ef015786SHerbert Xu 775ef015786SHerbert Xu if (copy > PAGE_SIZE - off) 776ef015786SHerbert Xu copy = PAGE_SIZE - off; 777ef015786SHerbert Xu 778ef015786SHerbert Xu if (!sk_stream_wmem_schedule(sk, copy)) 779ef015786SHerbert Xu goto wait_for_memory; 7801da177e4SLinus Torvalds 7811da177e4SLinus Torvalds if (!page) { 7821da177e4SLinus Torvalds /* Allocate new cache page. */ 7831da177e4SLinus Torvalds if (!(page = sk_stream_alloc_page(sk))) 7841da177e4SLinus Torvalds goto wait_for_memory; 7851da177e4SLinus Torvalds } 7861da177e4SLinus Torvalds 7871da177e4SLinus Torvalds /* Time to copy data. We are close to 7881da177e4SLinus Torvalds * the end! */ 7891da177e4SLinus Torvalds err = skb_copy_to_page(sk, from, skb, page, 7901da177e4SLinus Torvalds off, copy); 7911da177e4SLinus Torvalds if (err) { 7921da177e4SLinus Torvalds /* If this page was new, give it to the 7931da177e4SLinus Torvalds * socket so it does not get leaked. 7941da177e4SLinus Torvalds */ 7951da177e4SLinus Torvalds if (!TCP_PAGE(sk)) { 7961da177e4SLinus Torvalds TCP_PAGE(sk) = page; 7971da177e4SLinus Torvalds TCP_OFF(sk) = 0; 7981da177e4SLinus Torvalds } 7991da177e4SLinus Torvalds goto do_error; 8001da177e4SLinus Torvalds } 8011da177e4SLinus Torvalds 8021da177e4SLinus Torvalds /* Update the skb. 
*/ 8031da177e4SLinus Torvalds if (merge) { 8041da177e4SLinus Torvalds skb_shinfo(skb)->frags[i - 1].size += 8051da177e4SLinus Torvalds copy; 8061da177e4SLinus Torvalds } else { 8071da177e4SLinus Torvalds skb_fill_page_desc(skb, i, page, off, copy); 8081da177e4SLinus Torvalds if (TCP_PAGE(sk)) { 8091da177e4SLinus Torvalds get_page(page); 8101da177e4SLinus Torvalds } else if (off + copy < PAGE_SIZE) { 8111da177e4SLinus Torvalds get_page(page); 8121da177e4SLinus Torvalds TCP_PAGE(sk) = page; 8131da177e4SLinus Torvalds } 8141da177e4SLinus Torvalds } 8151da177e4SLinus Torvalds 8161da177e4SLinus Torvalds TCP_OFF(sk) = off + copy; 8171da177e4SLinus Torvalds } 8181da177e4SLinus Torvalds 8191da177e4SLinus Torvalds if (!copied) 8201da177e4SLinus Torvalds TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_PSH; 8211da177e4SLinus Torvalds 8221da177e4SLinus Torvalds tp->write_seq += copy; 8231da177e4SLinus Torvalds TCP_SKB_CB(skb)->end_seq += copy; 8247967168cSHerbert Xu skb_shinfo(skb)->gso_segs = 0; 8251da177e4SLinus Torvalds 8261da177e4SLinus Torvalds from += copy; 8271da177e4SLinus Torvalds copied += copy; 8281da177e4SLinus Torvalds if ((seglen -= copy) == 0 && iovlen == 0) 8291da177e4SLinus Torvalds goto out; 8301da177e4SLinus Torvalds 831c1b4a7e6SDavid S. Miller if (skb->len < mss_now || (flags & MSG_OOB)) 8321da177e4SLinus Torvalds continue; 8331da177e4SLinus Torvalds 8341da177e4SLinus Torvalds if (forced_push(tp)) { 8351da177e4SLinus Torvalds tcp_mark_push(tp, skb); 8369e412ba7SIlpo Järvinen __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH); 837fe067e8aSDavid S. 
Miller } else if (skb == tcp_send_head(sk)) 8381da177e4SLinus Torvalds tcp_push_one(sk, mss_now); 8391da177e4SLinus Torvalds continue; 8401da177e4SLinus Torvalds 8411da177e4SLinus Torvalds wait_for_sndbuf: 8421da177e4SLinus Torvalds set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 8431da177e4SLinus Torvalds wait_for_memory: 8441da177e4SLinus Torvalds if (copied) 8459e412ba7SIlpo Järvinen tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); 8461da177e4SLinus Torvalds 8471da177e4SLinus Torvalds if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) 8481da177e4SLinus Torvalds goto do_error; 8491da177e4SLinus Torvalds 8501da177e4SLinus Torvalds mss_now = tcp_current_mss(sk, !(flags&MSG_OOB)); 851c1b4a7e6SDavid S. Miller size_goal = tp->xmit_size_goal; 8521da177e4SLinus Torvalds } 8531da177e4SLinus Torvalds } 8541da177e4SLinus Torvalds 8551da177e4SLinus Torvalds out: 8561da177e4SLinus Torvalds if (copied) 8579e412ba7SIlpo Järvinen tcp_push(sk, flags, mss_now, tp->nonagle); 8581da177e4SLinus Torvalds TCP_CHECK_TIMER(sk); 8591da177e4SLinus Torvalds release_sock(sk); 8601da177e4SLinus Torvalds return copied; 8611da177e4SLinus Torvalds 8621da177e4SLinus Torvalds do_fault: 8631da177e4SLinus Torvalds if (!skb->len) { 864fe067e8aSDavid S. Miller tcp_unlink_write_queue(skb, sk); 865fe067e8aSDavid S. Miller /* It is the one place in all of TCP, except connection 866fe067e8aSDavid S. Miller * reset, where we can be unlinking the send_head. 867fe067e8aSDavid S. Miller */ 868fe067e8aSDavid S. 
Miller tcp_check_send_head(sk, skb); 8691da177e4SLinus Torvalds sk_stream_free_skb(sk, skb); 8701da177e4SLinus Torvalds } 8711da177e4SLinus Torvalds 8721da177e4SLinus Torvalds do_error: 8731da177e4SLinus Torvalds if (copied) 8741da177e4SLinus Torvalds goto out; 8751da177e4SLinus Torvalds out_err: 8761da177e4SLinus Torvalds err = sk_stream_error(sk, flags, err); 8771da177e4SLinus Torvalds TCP_CHECK_TIMER(sk); 8781da177e4SLinus Torvalds release_sock(sk); 8791da177e4SLinus Torvalds return err; 8801da177e4SLinus Torvalds } 8811da177e4SLinus Torvalds 8821da177e4SLinus Torvalds /* 8831da177e4SLinus Torvalds * Handle reading urgent data. BSD has very simple semantics for 8841da177e4SLinus Torvalds * this, no blocking and very strange errors 8) 8851da177e4SLinus Torvalds */ 8861da177e4SLinus Torvalds 8871da177e4SLinus Torvalds static int tcp_recv_urg(struct sock *sk, long timeo, 8881da177e4SLinus Torvalds struct msghdr *msg, int len, int flags, 8891da177e4SLinus Torvalds int *addr_len) 8901da177e4SLinus Torvalds { 8911da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 8921da177e4SLinus Torvalds 8931da177e4SLinus Torvalds /* No URG data to read. */ 8941da177e4SLinus Torvalds if (sock_flag(sk, SOCK_URGINLINE) || !tp->urg_data || 8951da177e4SLinus Torvalds tp->urg_data == TCP_URG_READ) 8961da177e4SLinus Torvalds return -EINVAL; /* Yes this is right ! */ 8971da177e4SLinus Torvalds 8981da177e4SLinus Torvalds if (sk->sk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DONE)) 8991da177e4SLinus Torvalds return -ENOTCONN; 9001da177e4SLinus Torvalds 9011da177e4SLinus Torvalds if (tp->urg_data & TCP_URG_VALID) { 9021da177e4SLinus Torvalds int err = 0; 9031da177e4SLinus Torvalds char c = tp->urg_data; 9041da177e4SLinus Torvalds 9051da177e4SLinus Torvalds if (!(flags & MSG_PEEK)) 9061da177e4SLinus Torvalds tp->urg_data = TCP_URG_READ; 9071da177e4SLinus Torvalds 9081da177e4SLinus Torvalds /* Read urgent data. 
*/ 9091da177e4SLinus Torvalds msg->msg_flags |= MSG_OOB; 9101da177e4SLinus Torvalds 9111da177e4SLinus Torvalds if (len > 0) { 9121da177e4SLinus Torvalds if (!(flags & MSG_TRUNC)) 9131da177e4SLinus Torvalds err = memcpy_toiovec(msg->msg_iov, &c, 1); 9141da177e4SLinus Torvalds len = 1; 9151da177e4SLinus Torvalds } else 9161da177e4SLinus Torvalds msg->msg_flags |= MSG_TRUNC; 9171da177e4SLinus Torvalds 9181da177e4SLinus Torvalds return err ? -EFAULT : len; 9191da177e4SLinus Torvalds } 9201da177e4SLinus Torvalds 9211da177e4SLinus Torvalds if (sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN)) 9221da177e4SLinus Torvalds return 0; 9231da177e4SLinus Torvalds 9241da177e4SLinus Torvalds /* Fixed the recv(..., MSG_OOB) behaviour. BSD docs and 9251da177e4SLinus Torvalds * the available implementations agree in this case: 9261da177e4SLinus Torvalds * this call should never block, independent of the 9271da177e4SLinus Torvalds * blocking state of the socket. 9281da177e4SLinus Torvalds * Mike <pall@rz.uni-karlsruhe.de> 9291da177e4SLinus Torvalds */ 9301da177e4SLinus Torvalds return -EAGAIN; 9311da177e4SLinus Torvalds } 9321da177e4SLinus Torvalds 9331da177e4SLinus Torvalds /* Clean up the receive buffer for full frames taken by the user, 9341da177e4SLinus Torvalds * then send an ACK if necessary. COPIED is the number of bytes 9351da177e4SLinus Torvalds * tcp_recvmsg has given to the user so far, it speeds up the 9361da177e4SLinus Torvalds * calculation of whether or not we must ACK for the sake of 9371da177e4SLinus Torvalds * a window update. 
9381da177e4SLinus Torvalds */ 9390e4b4992SChris Leech void tcp_cleanup_rbuf(struct sock *sk, int copied) 9401da177e4SLinus Torvalds { 9411da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 9421da177e4SLinus Torvalds int time_to_ack = 0; 9431da177e4SLinus Torvalds 9441da177e4SLinus Torvalds #if TCP_DEBUG 9451da177e4SLinus Torvalds struct sk_buff *skb = skb_peek(&sk->sk_receive_queue); 9461da177e4SLinus Torvalds 9471da177e4SLinus Torvalds BUG_TRAP(!skb || before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq)); 9481da177e4SLinus Torvalds #endif 9491da177e4SLinus Torvalds 950463c84b9SArnaldo Carvalho de Melo if (inet_csk_ack_scheduled(sk)) { 951463c84b9SArnaldo Carvalho de Melo const struct inet_connection_sock *icsk = inet_csk(sk); 9521da177e4SLinus Torvalds /* Delayed ACKs frequently hit locked sockets during bulk 9531da177e4SLinus Torvalds * receive. */ 954463c84b9SArnaldo Carvalho de Melo if (icsk->icsk_ack.blocked || 9551da177e4SLinus Torvalds /* Once-per-two-segments ACK was not sent by tcp_input.c */ 956463c84b9SArnaldo Carvalho de Melo tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss || 9571da177e4SLinus Torvalds /* 9581da177e4SLinus Torvalds * If this read emptied read buffer, we send ACK, if 9591da177e4SLinus Torvalds * connection is not bidirectional, user drained 9601da177e4SLinus Torvalds * receive buffer and there was a small segment 9611da177e4SLinus Torvalds * in queue. 
9621da177e4SLinus Torvalds */ 9631ef9696cSAlexey Kuznetsov (copied > 0 && 9641ef9696cSAlexey Kuznetsov ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED2) || 9651ef9696cSAlexey Kuznetsov ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) && 9661ef9696cSAlexey Kuznetsov !icsk->icsk_ack.pingpong)) && 9671ef9696cSAlexey Kuznetsov !atomic_read(&sk->sk_rmem_alloc))) 9681da177e4SLinus Torvalds time_to_ack = 1; 9691da177e4SLinus Torvalds } 9701da177e4SLinus Torvalds 9711da177e4SLinus Torvalds /* We send an ACK if we can now advertise a non-zero window 9721da177e4SLinus Torvalds * which has been raised "significantly". 9731da177e4SLinus Torvalds * 9741da177e4SLinus Torvalds * Even if window raised up to infinity, do not send window open ACK 9751da177e4SLinus Torvalds * in states, where we will not receive more. It is useless. 9761da177e4SLinus Torvalds */ 9771da177e4SLinus Torvalds if (copied > 0 && !time_to_ack && !(sk->sk_shutdown & RCV_SHUTDOWN)) { 9781da177e4SLinus Torvalds __u32 rcv_window_now = tcp_receive_window(tp); 9791da177e4SLinus Torvalds 9801da177e4SLinus Torvalds /* Optimize, __tcp_select_window() is not cheap. */ 9811da177e4SLinus Torvalds if (2*rcv_window_now <= tp->window_clamp) { 9821da177e4SLinus Torvalds __u32 new_window = __tcp_select_window(sk); 9831da177e4SLinus Torvalds 9841da177e4SLinus Torvalds /* Send ACK now, if this read freed lots of space 9851da177e4SLinus Torvalds * in our buffer. Certainly, new_window is new window. 9861da177e4SLinus Torvalds * We can advertise it now, if it is not less than current one. 9871da177e4SLinus Torvalds * "Lots" means "at least twice" here. 
9881da177e4SLinus Torvalds */ 9891da177e4SLinus Torvalds if (new_window && new_window >= 2 * rcv_window_now) 9901da177e4SLinus Torvalds time_to_ack = 1; 9911da177e4SLinus Torvalds } 9921da177e4SLinus Torvalds } 9931da177e4SLinus Torvalds if (time_to_ack) 9941da177e4SLinus Torvalds tcp_send_ack(sk); 9951da177e4SLinus Torvalds } 9961da177e4SLinus Torvalds 9971da177e4SLinus Torvalds static void tcp_prequeue_process(struct sock *sk) 9981da177e4SLinus Torvalds { 9991da177e4SLinus Torvalds struct sk_buff *skb; 10001da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 10011da177e4SLinus Torvalds 1002b03efcfbSDavid S. Miller NET_INC_STATS_USER(LINUX_MIB_TCPPREQUEUED); 10031da177e4SLinus Torvalds 10041da177e4SLinus Torvalds /* RX process wants to run with disabled BHs, though it is not 10051da177e4SLinus Torvalds * necessary */ 10061da177e4SLinus Torvalds local_bh_disable(); 10071da177e4SLinus Torvalds while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) 10081da177e4SLinus Torvalds sk->sk_backlog_rcv(sk, skb); 10091da177e4SLinus Torvalds local_bh_enable(); 10101da177e4SLinus Torvalds 10111da177e4SLinus Torvalds /* Clear memory counter. 
*/ 10121da177e4SLinus Torvalds tp->ucopy.memory = 0; 10131da177e4SLinus Torvalds } 10141da177e4SLinus Torvalds 10151da177e4SLinus Torvalds static inline struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off) 10161da177e4SLinus Torvalds { 10171da177e4SLinus Torvalds struct sk_buff *skb; 10181da177e4SLinus Torvalds u32 offset; 10191da177e4SLinus Torvalds 10201da177e4SLinus Torvalds skb_queue_walk(&sk->sk_receive_queue, skb) { 10211da177e4SLinus Torvalds offset = seq - TCP_SKB_CB(skb)->seq; 1022aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->syn) 10231da177e4SLinus Torvalds offset--; 1024aa8223c7SArnaldo Carvalho de Melo if (offset < skb->len || tcp_hdr(skb)->fin) { 10251da177e4SLinus Torvalds *off = offset; 10261da177e4SLinus Torvalds return skb; 10271da177e4SLinus Torvalds } 10281da177e4SLinus Torvalds } 10291da177e4SLinus Torvalds return NULL; 10301da177e4SLinus Torvalds } 10311da177e4SLinus Torvalds 10321da177e4SLinus Torvalds /* 10331da177e4SLinus Torvalds * This routine provides an alternative to tcp_recvmsg() for routines 10341da177e4SLinus Torvalds * that would like to handle copying from skbuffs directly in 'sendfile' 10351da177e4SLinus Torvalds * fashion. 10361da177e4SLinus Torvalds * Note: 10371da177e4SLinus Torvalds * - It is assumed that the socket was locked by the caller. 10381da177e4SLinus Torvalds * - The routine does not block. 10391da177e4SLinus Torvalds * - At present, there is no support for reading OOB data 10401da177e4SLinus Torvalds * or for 'peeking' the socket using this routine 10411da177e4SLinus Torvalds * (although both would be easy to implement). 
10421da177e4SLinus Torvalds */ 10431da177e4SLinus Torvalds int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 10441da177e4SLinus Torvalds sk_read_actor_t recv_actor) 10451da177e4SLinus Torvalds { 10461da177e4SLinus Torvalds struct sk_buff *skb; 10471da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 10481da177e4SLinus Torvalds u32 seq = tp->copied_seq; 10491da177e4SLinus Torvalds u32 offset; 10501da177e4SLinus Torvalds int copied = 0; 10511da177e4SLinus Torvalds 10521da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 10531da177e4SLinus Torvalds return -ENOTCONN; 10541da177e4SLinus Torvalds while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) { 10551da177e4SLinus Torvalds if (offset < skb->len) { 10561da177e4SLinus Torvalds size_t used, len; 10571da177e4SLinus Torvalds 10581da177e4SLinus Torvalds len = skb->len - offset; 10591da177e4SLinus Torvalds /* Stop reading if we hit a patch of urgent data */ 10601da177e4SLinus Torvalds if (tp->urg_data) { 10611da177e4SLinus Torvalds u32 urg_offset = tp->urg_seq - seq; 10621da177e4SLinus Torvalds if (urg_offset < len) 10631da177e4SLinus Torvalds len = urg_offset; 10641da177e4SLinus Torvalds if (!len) 10651da177e4SLinus Torvalds break; 10661da177e4SLinus Torvalds } 10671da177e4SLinus Torvalds used = recv_actor(desc, skb, offset, len); 10681da177e4SLinus Torvalds if (used <= len) { 10691da177e4SLinus Torvalds seq += used; 10701da177e4SLinus Torvalds copied += used; 10711da177e4SLinus Torvalds offset += used; 10721da177e4SLinus Torvalds } 10731da177e4SLinus Torvalds if (offset != skb->len) 10741da177e4SLinus Torvalds break; 10751da177e4SLinus Torvalds } 1076aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->fin) { 1077624d1164SChris Leech sk_eat_skb(sk, skb, 0); 10781da177e4SLinus Torvalds ++seq; 10791da177e4SLinus Torvalds break; 10801da177e4SLinus Torvalds } 1081624d1164SChris Leech sk_eat_skb(sk, skb, 0); 10821da177e4SLinus Torvalds if (!desc->count) 10831da177e4SLinus Torvalds break; 10841da177e4SLinus 
Torvalds } 10851da177e4SLinus Torvalds tp->copied_seq = seq; 10861da177e4SLinus Torvalds 10871da177e4SLinus Torvalds tcp_rcv_space_adjust(sk); 10881da177e4SLinus Torvalds 10891da177e4SLinus Torvalds /* Clean up data we have read: This will do ACK frames. */ 10901da177e4SLinus Torvalds if (copied) 10910e4b4992SChris Leech tcp_cleanup_rbuf(sk, copied); 10921da177e4SLinus Torvalds return copied; 10931da177e4SLinus Torvalds } 10941da177e4SLinus Torvalds 10951da177e4SLinus Torvalds /* 10961da177e4SLinus Torvalds * This routine copies from a sock struct into the user buffer. 10971da177e4SLinus Torvalds * 10981da177e4SLinus Torvalds * Technical note: in 2.3 we work on _locked_ socket, so that 10991da177e4SLinus Torvalds * tricks with *seq access order and skb->users are not required. 11001da177e4SLinus Torvalds * Probably, code can be easily improved even more. 11011da177e4SLinus Torvalds */ 11021da177e4SLinus Torvalds 11031da177e4SLinus Torvalds int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 11041da177e4SLinus Torvalds size_t len, int nonblock, int flags, int *addr_len) 11051da177e4SLinus Torvalds { 11061da177e4SLinus Torvalds struct tcp_sock *tp = tcp_sk(sk); 11071da177e4SLinus Torvalds int copied = 0; 11081da177e4SLinus Torvalds u32 peek_seq; 11091da177e4SLinus Torvalds u32 *seq; 11101da177e4SLinus Torvalds unsigned long used; 11111da177e4SLinus Torvalds int err; 11121da177e4SLinus Torvalds int target; /* Read at least this many bytes */ 11131da177e4SLinus Torvalds long timeo; 11141da177e4SLinus Torvalds struct task_struct *user_recv = NULL; 11151a2449a8SChris Leech int copied_early = 0; 11161da177e4SLinus Torvalds 11171da177e4SLinus Torvalds lock_sock(sk); 11181da177e4SLinus Torvalds 11191da177e4SLinus Torvalds TCP_CHECK_TIMER(sk); 11201da177e4SLinus Torvalds 11211da177e4SLinus Torvalds err = -ENOTCONN; 11221da177e4SLinus Torvalds if (sk->sk_state == TCP_LISTEN) 11231da177e4SLinus Torvalds goto out; 11241da177e4SLinus Torvalds 
11251da177e4SLinus Torvalds timeo = sock_rcvtimeo(sk, nonblock); 11261da177e4SLinus Torvalds 11271da177e4SLinus Torvalds /* Urgent data needs to be handled specially. */ 11281da177e4SLinus Torvalds if (flags & MSG_OOB) 11291da177e4SLinus Torvalds goto recv_urg; 11301da177e4SLinus Torvalds 11311da177e4SLinus Torvalds seq = &tp->copied_seq; 11321da177e4SLinus Torvalds if (flags & MSG_PEEK) { 11331da177e4SLinus Torvalds peek_seq = tp->copied_seq; 11341da177e4SLinus Torvalds seq = &peek_seq; 11351da177e4SLinus Torvalds } 11361da177e4SLinus Torvalds 11371da177e4SLinus Torvalds target = sock_rcvlowat(sk, flags & MSG_WAITALL, len); 11381da177e4SLinus Torvalds 11391a2449a8SChris Leech #ifdef CONFIG_NET_DMA 11401a2449a8SChris Leech tp->ucopy.dma_chan = NULL; 11411a2449a8SChris Leech preempt_disable(); 11421a2449a8SChris Leech if ((len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) && 114329bbd72dSAlexey Dobriyan !sysctl_tcp_low_latency && __get_cpu_var(softnet_data).net_dma) { 11441a2449a8SChris Leech preempt_enable_no_resched(); 11451a2449a8SChris Leech tp->ucopy.pinned_list = dma_pin_iovec_pages(msg->msg_iov, len); 11461a2449a8SChris Leech } else 11471a2449a8SChris Leech preempt_enable_no_resched(); 11481a2449a8SChris Leech #endif 11491a2449a8SChris Leech 11501da177e4SLinus Torvalds do { 11511da177e4SLinus Torvalds struct sk_buff *skb; 11521da177e4SLinus Torvalds u32 offset; 11531da177e4SLinus Torvalds 11541da177e4SLinus Torvalds /* Are we at urgent data? Stop if we have read anything or have SIGURG pending. */ 11551da177e4SLinus Torvalds if (tp->urg_data && tp->urg_seq == *seq) { 11561da177e4SLinus Torvalds if (copied) 11571da177e4SLinus Torvalds break; 11581da177e4SLinus Torvalds if (signal_pending(current)) { 11591da177e4SLinus Torvalds copied = timeo ? sock_intr_errno(timeo) : -EAGAIN; 11601da177e4SLinus Torvalds break; 11611da177e4SLinus Torvalds } 11621da177e4SLinus Torvalds } 11631da177e4SLinus Torvalds 11641da177e4SLinus Torvalds /* Next get a buffer. 
*/ 11651da177e4SLinus Torvalds 11661da177e4SLinus Torvalds skb = skb_peek(&sk->sk_receive_queue); 11671da177e4SLinus Torvalds do { 11681da177e4SLinus Torvalds if (!skb) 11691da177e4SLinus Torvalds break; 11701da177e4SLinus Torvalds 11711da177e4SLinus Torvalds /* Now that we have two receive queues this 11721da177e4SLinus Torvalds * shouldn't happen. 11731da177e4SLinus Torvalds */ 11741da177e4SLinus Torvalds if (before(*seq, TCP_SKB_CB(skb)->seq)) { 11751da177e4SLinus Torvalds printk(KERN_INFO "recvmsg bug: copied %X " 11761da177e4SLinus Torvalds "seq %X\n", *seq, TCP_SKB_CB(skb)->seq); 11771da177e4SLinus Torvalds break; 11781da177e4SLinus Torvalds } 11791da177e4SLinus Torvalds offset = *seq - TCP_SKB_CB(skb)->seq; 1180aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->syn) 11811da177e4SLinus Torvalds offset--; 11821da177e4SLinus Torvalds if (offset < skb->len) 11831da177e4SLinus Torvalds goto found_ok_skb; 1184aa8223c7SArnaldo Carvalho de Melo if (tcp_hdr(skb)->fin) 11851da177e4SLinus Torvalds goto found_fin_ok; 11861da177e4SLinus Torvalds BUG_TRAP(flags & MSG_PEEK); 11871da177e4SLinus Torvalds skb = skb->next; 11881da177e4SLinus Torvalds } while (skb != (struct sk_buff *)&sk->sk_receive_queue); 11891da177e4SLinus Torvalds 11901da177e4SLinus Torvalds /* Well, if we have backlog, try to process it now yet. 
	 */

		if (copied >= target && !sk->sk_backlog.tail)
			break;

		if (copied) {
			if (sk->sk_err ||
			    sk->sk_state == TCP_CLOSE ||
			    (sk->sk_shutdown & RCV_SHUTDOWN) ||
			    !timeo ||
			    signal_pending(current) ||
			    (flags & MSG_PEEK))
				break;
		} else {
			if (sock_flag(sk, SOCK_DONE))
				break;

			if (sk->sk_err) {
				copied = sock_error(sk);
				break;
			}

			if (sk->sk_shutdown & RCV_SHUTDOWN)
				break;

			if (sk->sk_state == TCP_CLOSE) {
				if (!sock_flag(sk, SOCK_DONE)) {
					/* This occurs when user tries to read
					 * from never connected socket.
					 */
					copied = -ENOTCONN;
					break;
				}
				break;
			}

			if (!timeo) {
				copied = -EAGAIN;
				break;
			}

			if (signal_pending(current)) {
				copied = sock_intr_errno(timeo);
				break;
			}
		}

		tcp_cleanup_rbuf(sk, copied);

		if (!sysctl_tcp_low_latency && tp->ucopy.task == user_recv) {
			/* Install new reader */
			if (!user_recv && !(flags & (MSG_TRUNC | MSG_PEEK))) {
				user_recv = current;
				tp->ucopy.task = user_recv;
				tp->ucopy.iov = msg->msg_iov;
			}

			tp->ucopy.len = len;

			BUG_TRAP(tp->copied_seq == tp->rcv_nxt ||
				 (flags & (MSG_PEEK | MSG_TRUNC)));

			/* Ugly... If prequeue is not empty, we have to
			 * process it before releasing socket, otherwise
			 * order will be broken at second iteration.
			 * More elegant solution is required!!!
			 *
			 * Look: we have the following (pseudo)queues:
			 *
			 * 1. packets in flight
			 * 2. backlog
			 * 3. prequeue
			 * 4. receive_queue
			 *
			 * Each queue can be processed only if the next ones
			 * are empty. At this point we have empty receive_queue.
			 * But prequeue _can_ be not empty after 2nd iteration,
			 * when we jumped to start of loop because backlog
			 * processing added something to receive_queue.
			 * We cannot release_sock(), because backlog contains
			 * packets arrived _after_ prequeued ones.
			 *
			 * Shortly, algorithm is clear --- to process all
			 * the queues in order. We could make it more directly,
			 * requeueing packets from backlog to prequeue, if
			 * is not empty. It is more elegant, but eats cycles,
			 * unfortunately.
			 */
			if (!skb_queue_empty(&tp->ucopy.prequeue))
				goto do_prequeue;

			/* __ Set realtime policy in scheduler __ */
		}

		if (copied >= target) {
			/* Do not sleep, just process backlog. */
			release_sock(sk);
			lock_sock(sk);
		} else
			sk_wait_data(sk, &timeo);

#ifdef CONFIG_NET_DMA
		tp->ucopy.wakeup = 0;
#endif

		if (user_recv) {
			int chunk;

			/* __ Restore normal policy in scheduler __ */

			if ((chunk = len - tp->ucopy.len) != 0) {
				NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk);
				len -= chunk;
				copied += chunk;
			}

			if (tp->rcv_nxt == tp->copied_seq &&
			    !skb_queue_empty(&tp->ucopy.prequeue)) {
do_prequeue:
				tcp_prequeue_process(sk);

				if ((chunk = len - tp->ucopy.len) != 0) {
					NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
					len -= chunk;
					copied += chunk;
				}
			}
		}
		if ((flags & MSG_PEEK) && peek_seq != tp->copied_seq) {
			if (net_ratelimit())
				printk(KERN_DEBUG "TCP(%s:%d): Application bug, race in MSG_PEEK.\n",
				       current->comm, current->pid);
			peek_seq = tp->copied_seq;
		}
		continue;

	found_ok_skb:
		/* Ok so how much can we use? */
		used = skb->len - offset;
		if (len < used)
			used = len;

		/* Do we have urgent data here? */
		if (tp->urg_data) {
			u32 urg_offset = tp->urg_seq - *seq;
			if (urg_offset < used) {
				if (!urg_offset) {
					if (!sock_flag(sk, SOCK_URGINLINE)) {
						++*seq;
						offset++;
						used--;
						if (!used)
							goto skip_copy;
					}
				} else
					used = urg_offset;
			}
		}

		if (!(flags & MSG_TRUNC)) {
#ifdef CONFIG_NET_DMA
			if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
				tp->ucopy.dma_chan = get_softnet_dma();

			if (tp->ucopy.dma_chan) {
				tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec(
					tp->ucopy.dma_chan, skb, offset,
					msg->msg_iov, used,
					tp->ucopy.pinned_list);

				if (tp->ucopy.dma_cookie < 0) {

					printk(KERN_ALERT "dma_cookie < 0\n");

					/* Exception. Bailout! */
					if (!copied)
						copied = -EFAULT;
					break;
				}
				if ((offset + used) == skb->len)
					copied_early = 1;

			} else
#endif
			{
				err = skb_copy_datagram_iovec(skb, offset,
						msg->msg_iov, used);
				if (err) {
					/* Exception. Bailout! */
					if (!copied)
						copied = -EFAULT;
					break;
				}
			}
		}

		*seq += used;
		copied += used;
		len -= used;

		tcp_rcv_space_adjust(sk);

skip_copy:
		if (tp->urg_data && after(tp->copied_seq, tp->urg_seq)) {
			tp->urg_data = 0;
			tcp_fast_path_check(sk);
		}
		if (used + offset < skb->len)
			continue;

		if (tcp_hdr(skb)->fin)
			goto found_fin_ok;
		if (!(flags & MSG_PEEK)) {
			sk_eat_skb(sk, skb, copied_early);
			copied_early = 0;
		}
		continue;

	found_fin_ok:
		/* Process the FIN. */
		++*seq;
		if (!(flags & MSG_PEEK)) {
			sk_eat_skb(sk, skb, copied_early);
			copied_early = 0;
		}
		break;
	} while (len > 0);

	if (user_recv) {
		if (!skb_queue_empty(&tp->ucopy.prequeue)) {
			int chunk;

			tp->ucopy.len = copied > 0 ? len : 0;

			tcp_prequeue_process(sk);

			if (copied > 0 && (chunk = len - tp->ucopy.len) != 0) {
				NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
				len -= chunk;
				copied += chunk;
			}
		}

		tp->ucopy.task = NULL;
		tp->ucopy.len = 0;
	}

#ifdef CONFIG_NET_DMA
	if (tp->ucopy.dma_chan) {
		struct sk_buff *skb;
		dma_cookie_t done, used;

		dma_async_memcpy_issue_pending(tp->ucopy.dma_chan);

		while (dma_async_memcpy_complete(tp->ucopy.dma_chan,
						 tp->ucopy.dma_cookie, &done,
						 &used) == DMA_IN_PROGRESS) {
			/* do partial cleanup of sk_async_wait_queue */
			while ((skb = skb_peek(&sk->sk_async_wait_queue)) &&
			       (dma_async_is_complete(skb->dma_cookie, done,
						      used) == DMA_SUCCESS)) {
				__skb_dequeue(&sk->sk_async_wait_queue);
				kfree_skb(skb);
			}
		}

		/* Safe to free early-copied skbs now */
		__skb_queue_purge(&sk->sk_async_wait_queue);
		dma_chan_put(tp->ucopy.dma_chan);
		tp->ucopy.dma_chan = NULL;
	}
	if (tp->ucopy.pinned_list) {
		dma_unpin_iovec_pages(tp->ucopy.pinned_list);
		tp->ucopy.pinned_list = NULL;
	}
#endif

	/* According to UNIX98, msg_name/msg_namelen are ignored
	 * on connected socket. I was just happy when found this 8) --ANK
	 */

	/* Clean up data we have read: This will do ACK frames. */
	tcp_cleanup_rbuf(sk, copied);

	TCP_CHECK_TIMER(sk);
	release_sock(sk);
	return copied;

out:
	TCP_CHECK_TIMER(sk);
	release_sock(sk);
	return err;

recv_urg:
	err = tcp_recv_urg(sk, timeo, msg, len, flags, addr_len);
	goto out;
}

/*
 *	State processing on a close. This implements the state shift for
 *	sending our FIN frame. Note that we only send a FIN for some
 *	states.
 *	A shutdown() may have already sent the FIN, or we may be
 *	closed.
 */

static const unsigned char new_state[16] = {
  /* current state:        new state:      action:	*/
  /* (Invalid)		*/ TCP_CLOSE,
  /* TCP_ESTABLISHED	*/ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  /* TCP_SYN_SENT	*/ TCP_CLOSE,
  /* TCP_SYN_RECV	*/ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  /* TCP_FIN_WAIT1	*/ TCP_FIN_WAIT1,
  /* TCP_FIN_WAIT2	*/ TCP_FIN_WAIT2,
  /* TCP_TIME_WAIT	*/ TCP_CLOSE,
  /* TCP_CLOSE		*/ TCP_CLOSE,
  /* TCP_CLOSE_WAIT	*/ TCP_LAST_ACK  | TCP_ACTION_FIN,
  /* TCP_LAST_ACK	*/ TCP_LAST_ACK,
  /* TCP_LISTEN		*/ TCP_CLOSE,
  /* TCP_CLOSING	*/ TCP_CLOSING,
};

static int tcp_close_state(struct sock *sk)
{
	int next = (int)new_state[sk->sk_state];
	int ns = next & TCP_STATE_MASK;

	tcp_set_state(sk, ns);

	return next & TCP_ACTION_FIN;
}

/*
 *	Shutdown the sending side of a connection. Much like close except
 *	that we don't receive shut down or set_sock_flag(sk, SOCK_DEAD).
 */

void tcp_shutdown(struct sock *sk, int how)
{
	/* We need to grab some memory, and put together a FIN,
	 * and then put it into the queue to be sent.
	 * Tim MacKenzie(tym@dibbler.cs.monash.edu.au) 4 Dec '92.
	 */
	if (!(how & SEND_SHUTDOWN))
		return;

	/* If we've already sent a FIN, or it's a closed state, skip this. */
	if ((1 << sk->sk_state) &
	    (TCPF_ESTABLISHED | TCPF_SYN_SENT |
	     TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) {
		/* Clear out any half completed packets.  FIN if needed. */
		if (tcp_close_state(sk))
			tcp_send_fin(sk);
	}
}

void tcp_close(struct sock *sk, long timeout)
{
	struct sk_buff *skb;
	int data_was_unread = 0;
	int state;

	lock_sock(sk);
	sk->sk_shutdown = SHUTDOWN_MASK;

	if (sk->sk_state == TCP_LISTEN) {
		tcp_set_state(sk, TCP_CLOSE);

		/* Special case. */
		inet_csk_listen_stop(sk);

		goto adjudge_to_death;
	}

	/* We need to flush the recv. buffs.  We do this only on the
	 * descriptor close, not protocol-sourced closes, because the
	 * reader process may not have drained the data yet!
	 */
	while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
		u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq -
			  tcp_hdr(skb)->fin;
		data_was_unread += len;
		__kfree_skb(skb);
	}

	sk_stream_mem_reclaim(sk);

	/* As outlined in RFC 2525, section 2.17, we send a RST here because
	 * data was lost. To witness the awful effects of the old behavior of
	 * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
	 * GET in an FTP client, suspend the process, wait for the client to
	 * advertise a zero window, then kill -9 the FTP client, wheee...
	 * Note: timeout is always zero in such a case.
	 */
	if (data_was_unread) {
		/* Unread data was tossed, zap the connection. */
		NET_INC_STATS_USER(LINUX_MIB_TCPABORTONCLOSE);
		tcp_set_state(sk, TCP_CLOSE);
		tcp_send_active_reset(sk, GFP_KERNEL);
	} else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
		/* Check zero linger _after_ checking for unread data. */
		sk->sk_prot->disconnect(sk, 0);
		NET_INC_STATS_USER(LINUX_MIB_TCPABORTONDATA);
	} else if (tcp_close_state(sk)) {
		/* We FIN if the application ate all the data before
		 * zapping the connection.
		 */

		/* RED-PEN. Formally speaking, we have broken TCP state
		 * machine. State transitions:
		 *
		 * TCP_ESTABLISHED -> TCP_FIN_WAIT1
		 * TCP_SYN_RECV	-> TCP_FIN_WAIT1 (forget it, it's impossible)
		 * TCP_CLOSE_WAIT -> TCP_LAST_ACK
		 *
		 * are legal only when FIN has been sent (i.e. in window),
		 * rather than queued out of window. Purists blame.
		 *
		 * F.e. "RFC state" is ESTABLISHED,
		 * if Linux state is FIN-WAIT-1, but FIN is still not sent.
		 *
		 * The visible declinations are that sometimes
		 * we enter time-wait state, when it is not required really
		 * (harmless), do not send active resets, when they are
		 * required by specs (TCP_ESTABLISHED, TCP_CLOSE_WAIT, when
		 * they look as CLOSING or LAST_ACK for Linux)
		 * Probably, I missed some more holelets.
		 * 						--ANK
		 */
		tcp_send_fin(sk);
	}

	sk_stream_wait_close(sk, timeout);

adjudge_to_death:
	state = sk->sk_state;
	sock_hold(sk);
	sock_orphan(sk);
	atomic_inc(sk->sk_prot->orphan_count);

	/* It is the last release_sock in its life. It will remove backlog. */
	release_sock(sk);


	/* Now socket is owned by kernel and we acquire BH lock
	   to finish close. No need to check for user refs.
	 */
	local_bh_disable();
	bh_lock_sock(sk);
	BUG_TRAP(!sock_owned_by_user(sk));

	/* Have we already been destroyed by a softirq or backlog? */
	if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
		goto out;

	/* This is a (useful) BSD violating of the RFC. There is a
	 * problem with TCP as specified in that the other end could
	 * keep a socket open forever with no application left this end.
	 * We use a 3 minute timeout (about the same as BSD) then kill
	 * our end. If they send after that then tough - BUT: long enough
	 * that we won't make the old 4*rto = almost no time - whoops
	 * reset mistake.
	 *
	 * Nope, it was not mistake.
	 * It is really desired behaviour
	 * f.e. on http servers, when such sockets are useless, but
	 * consume significant resources. Let's do it with special
	 * linger2 option.					--ANK
	 */

	if (sk->sk_state == TCP_FIN_WAIT2) {
		struct tcp_sock *tp = tcp_sk(sk);
		if (tp->linger2 < 0) {
			tcp_set_state(sk, TCP_CLOSE);
			tcp_send_active_reset(sk, GFP_ATOMIC);
			NET_INC_STATS_BH(LINUX_MIB_TCPABORTONLINGER);
		} else {
			const int tmo = tcp_fin_time(sk);

			if (tmo > TCP_TIMEWAIT_LEN) {
				inet_csk_reset_keepalive_timer(sk,
						tmo - TCP_TIMEWAIT_LEN);
			} else {
				tcp_time_wait(sk, TCP_FIN_WAIT2, tmo);
				goto out;
			}
		}
	}
	if (sk->sk_state != TCP_CLOSE) {
		sk_stream_mem_reclaim(sk);
		if (atomic_read(sk->sk_prot->orphan_count) > sysctl_tcp_max_orphans ||
		    (sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
		     atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2])) {
			if (net_ratelimit())
				printk(KERN_INFO "TCP: too many of orphaned "
				       "sockets\n");
			tcp_set_state(sk, TCP_CLOSE);
			tcp_send_active_reset(sk, GFP_ATOMIC);
			NET_INC_STATS_BH(LINUX_MIB_TCPABORTONMEMORY);
		}
	}

	if (sk->sk_state == TCP_CLOSE)
		inet_csk_destroy_sock(sk);
	/* Otherwise, socket is reprieved until protocol close. */

out:
	bh_unlock_sock(sk);
	local_bh_enable();
	sock_put(sk);
}

/* These states need RST on ABORT according to RFC793 */

static inline int tcp_need_reset(int state)
{
	return (1 << state) &
	       (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT | TCPF_FIN_WAIT1 |
		TCPF_FIN_WAIT2 | TCPF_SYN_RECV);
}

int tcp_disconnect(struct sock *sk, int flags)
{
	struct inet_sock *inet = inet_sk(sk);
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tcp_sock *tp = tcp_sk(sk);
	int err = 0;
	int old_state = sk->sk_state;

	if (old_state != TCP_CLOSE)
		tcp_set_state(sk, TCP_CLOSE);

	/* ABORT function of RFC793 */
	if (old_state == TCP_LISTEN) {
		inet_csk_listen_stop(sk);
	} else if (tcp_need_reset(old_state) ||
		   (tp->snd_nxt != tp->write_seq &&
		    (1 << old_state) & (TCPF_CLOSING | TCPF_LAST_ACK))) {
		/* The last check adjusts for discrepancy of Linux wrt. RFC
		 * states
		 */
		tcp_send_active_reset(sk, gfp_any());
		sk->sk_err = ECONNRESET;
	} else if (old_state == TCP_SYN_SENT)
		sk->sk_err = ECONNRESET;

	tcp_clear_xmit_timers(sk);
	__skb_queue_purge(&sk->sk_receive_queue);
	tcp_write_queue_purge(sk);
	__skb_queue_purge(&tp->out_of_order_queue);
#ifdef CONFIG_NET_DMA
	__skb_queue_purge(&sk->sk_async_wait_queue);
#endif

	inet->dport = 0;

	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
		inet_reset_saddr(sk);

	sk->sk_shutdown = 0;
	sock_reset_flag(sk, SOCK_DONE);
	tp->srtt = 0;
	if ((tp->write_seq += tp->max_window + 2) == 0)
		tp->write_seq = 1;
	icsk->icsk_backoff = 0;
	tp->snd_cwnd = 2;
	icsk->icsk_probes_out = 0;
	tp->packets_out = 0;
	tp->snd_ssthresh = 0x7fffffff;
	tp->snd_cwnd_cnt = 0;
	tp->bytes_acked = 0;
	tcp_set_ca_state(sk, TCP_CA_Open);
	tcp_clear_retrans(tp);
	inet_csk_delack_init(sk);
	tcp_init_send_head(sk);
	memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
	__sk_dst_reset(sk);

	BUG_TRAP(!inet->num || icsk->icsk_bind_hash);

	sk->sk_error_report(sk);
	return err;
}

/*
 *	Socket option code for TCP.
 */
static int do_tcp_setsockopt(struct sock *sk, int level,
			     int optname, char __user *optval, int optlen)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct inet_connection_sock *icsk = inet_csk(sk);
	int val;
	int err = 0;

	/* This is a string value all the others are int's */
	if (optname == TCP_CONGESTION) {
		char name[TCP_CA_NAME_MAX];

		if (optlen < 1)
			return -EINVAL;

		val = strncpy_from_user(name, optval,
					min(TCP_CA_NAME_MAX-1, optlen));
		if (val < 0)
			return -EFAULT;
		name[val] = 0;

		lock_sock(sk);
		err = tcp_set_congestion_control(sk, name);
		release_sock(sk);
		return err;
	}

	if (optlen < sizeof(int))
		return -EINVAL;

	if (get_user(val, (int __user *)optval))
		return -EFAULT;

	lock_sock(sk);

	switch (optname) {
	case TCP_MAXSEG:
		/* Values greater than interface MTU won't take effect. However
		 * at the point when this call is done we typically don't yet
		 * know which interface is going to be used */
		if (val < 8 || val > MAX_TCP_WINDOW) {
			err = -EINVAL;
			break;
		}
		tp->rx_opt.user_mss = val;
		break;

	case TCP_NODELAY:
		if (val) {
			/* TCP_NODELAY is weaker than TCP_CORK, so that
			 * this option on corked socket is remembered, but
			 * it is not activated until cork is cleared.
			 *
			 * However, when TCP_NODELAY is set we make
			 * an explicit push, which overrides even TCP_CORK
			 * for currently queued segments.
			 */
			tp->nonagle |= TCP_NAGLE_OFF|TCP_NAGLE_PUSH;
			tcp_push_pending_frames(sk);
		} else {
			tp->nonagle &= ~TCP_NAGLE_OFF;
		}
		break;

	case TCP_CORK:
		/* When set indicates to always queue non-full frames.
		 * Later the user clears this option and we transmit
		 * any pending partial frames in the queue. This is
		 * meant to be used alongside sendfile() to get properly
		 * filled frames when the user (for example) must write
		 * out headers with a write() call first and then use
		 * sendfile to send out the data parts.
		 *
		 * TCP_CORK can be set together with TCP_NODELAY and it is
		 * stronger than TCP_NODELAY.
		 */
		if (val) {
			tp->nonagle |= TCP_NAGLE_CORK;
		} else {
			tp->nonagle &= ~TCP_NAGLE_CORK;
			if (tp->nonagle&TCP_NAGLE_OFF)
				tp->nonagle |= TCP_NAGLE_PUSH;
			tcp_push_pending_frames(sk);
		}
		break;

	case TCP_KEEPIDLE:
		if (val < 1 || val > MAX_TCP_KEEPIDLE)
			err = -EINVAL;
		else {
			tp->keepalive_time = val * HZ;
			if (sock_flag(sk, SOCK_KEEPOPEN) &&
			    !((1 << sk->sk_state) &
			      (TCPF_CLOSE | TCPF_LISTEN))) {
				__u32 elapsed = tcp_time_stamp - tp->rcv_tstamp;
				if (tp->keepalive_time > elapsed)
					elapsed = tp->keepalive_time - elapsed;
				else
					elapsed = 0;
				inet_csk_reset_keepalive_timer(sk, elapsed);
			}
		}
		break;
	case TCP_KEEPINTVL:
		if (val < 1 || val > MAX_TCP_KEEPINTVL)
			err = -EINVAL;
		else
			tp->keepalive_intvl = val * HZ;
		break;
	case TCP_KEEPCNT:
		if (val < 1 || val > MAX_TCP_KEEPCNT)
			err = -EINVAL;
		else
			tp->keepalive_probes = val;
		break;
	case TCP_SYNCNT:
		if (val < 1 || val > MAX_TCP_SYNCNT)
			err = -EINVAL;
		else
			icsk->icsk_syn_retries = val;
		break;

	case TCP_LINGER2:
		if (val < 0)
			tp->linger2 = -1;
		else if (val > sysctl_tcp_fin_timeout / HZ)
			tp->linger2 = 0;
		else
			tp->linger2 = val * HZ;
		break;

	case TCP_DEFER_ACCEPT:
		icsk->icsk_accept_queue.rskq_defer_accept = 0;
		if (val > 0) {
			/* Translate value in seconds to number of
			 * retransmits */
			while (icsk->icsk_accept_queue.rskq_defer_accept < 32 &&
			       val > ((TCP_TIMEOUT_INIT / HZ) <<
				      icsk->icsk_accept_queue.rskq_defer_accept))
				icsk->icsk_accept_queue.rskq_defer_accept++;
			icsk->icsk_accept_queue.rskq_defer_accept++;
		}
		break;

	case TCP_WINDOW_CLAMP:
		if (!val) {
			if (sk->sk_state != TCP_CLOSE) {
				err = -EINVAL;
				break;
			}
			tp->window_clamp = 0;
		} else
			tp->window_clamp = val < SOCK_MIN_RCVBUF / 2 ?
					   SOCK_MIN_RCVBUF / 2 : val;
		break;

	case TCP_QUICKACK:
		if (!val) {
			icsk->icsk_ack.pingpong = 1;
		} else {
			icsk->icsk_ack.pingpong = 0;
			if ((1 << sk->sk_state) &
			    (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT) &&
			    inet_csk_ack_scheduled(sk)) {
				icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
				tcp_cleanup_rbuf(sk, 1);
				if (!(val & 1))
					icsk->icsk_ack.pingpong = 1;
			}
		}
		break;

#ifdef CONFIG_TCP_MD5SIG
	case TCP_MD5SIG:
		/* Read the IP->Key mappings from userspace */
		err = tp->af_specific->md5_parse(sk, optval, optlen);
		break;
#endif

	default:
		err = -ENOPROTOOPT;
		break;
	}

	release_sock(sk);
	return err;
}

int tcp_setsockopt(struct sock *sk, int level, int optname, char __user *optval,
		   int optlen)
{
	struct inet_connection_sock *icsk = inet_csk(sk);

	if (level != SOL_TCP)
		return icsk->icsk_af_ops->setsockopt(sk, level, optname,
						     optval, optlen);
	return do_tcp_setsockopt(sk, level, optname, optval, optlen);
}

#ifdef CONFIG_COMPAT
int compat_tcp_setsockopt(struct sock *sk, int level, int optname,
			  char __user *optval, int optlen)
{
	if (level != SOL_TCP)
		return inet_csk_compat_setsockopt(sk, level, optname,
						  optval, optlen);
	return do_tcp_setsockopt(sk, level, optname, optval, optlen);
}

EXPORT_SYMBOL(compat_tcp_setsockopt);
#endif

/* Return information about state of tcp endpoint in API format. */
void tcp_get_info(struct sock *sk, struct tcp_info *info)
{
	struct tcp_sock *tp = tcp_sk(sk);
	const struct inet_connection_sock *icsk = inet_csk(sk);
	u32 now = tcp_time_stamp;

	memset(info, 0, sizeof(*info));

	info->tcpi_state = sk->sk_state;
	info->tcpi_ca_state = icsk->icsk_ca_state;
	info->tcpi_retransmits = icsk->icsk_retransmits;
	info->tcpi_probes = icsk->icsk_probes_out;
	info->tcpi_backoff = icsk->icsk_backoff;

	if (tp->rx_opt.tstamp_ok)
		info->tcpi_options |= TCPI_OPT_TIMESTAMPS;
	if (tp->rx_opt.sack_ok)
		info->tcpi_options |= TCPI_OPT_SACK;
	if (tp->rx_opt.wscale_ok) {
		info->tcpi_options |= TCPI_OPT_WSCALE;
		info->tcpi_snd_wscale = tp->rx_opt.snd_wscale;
		info->tcpi_rcv_wscale = tp->rx_opt.rcv_wscale;
	}

	if (tp->ecn_flags & TCP_ECN_OK)
		info->tcpi_options |= TCPI_OPT_ECN;

	info->tcpi_rto = jiffies_to_usecs(icsk->icsk_rto);
	info->tcpi_ato = jiffies_to_usecs(icsk->icsk_ack.ato);
	info->tcpi_snd_mss = tp->mss_cache;
	info->tcpi_rcv_mss = icsk->icsk_ack.rcv_mss;

	info->tcpi_unacked = tp->packets_out;
	info->tcpi_sacked = tp->sacked_out;
	info->tcpi_lost = tp->lost_out;
	info->tcpi_retrans = tp->retrans_out;
	info->tcpi_fackets = tp->fackets_out;

	info->tcpi_last_data_sent = jiffies_to_msecs(now - tp->lsndtime);
	info->tcpi_last_data_recv = jiffies_to_msecs(now - icsk->icsk_ack.lrcvtime);
	info->tcpi_last_ack_recv = jiffies_to_msecs(now - tp->rcv_tstamp);

	info->tcpi_pmtu = icsk->icsk_pmtu_cookie;
	info->tcpi_rcv_ssthresh = tp->rcv_ssthresh;
	info->tcpi_rtt = jiffies_to_usecs(tp->srtt) >> 3;
	info->tcpi_rttvar = jiffies_to_usecs(tp->mdev) >> 2;
	info->tcpi_snd_ssthresh = tp->snd_ssthresh;
	info->tcpi_snd_cwnd = tp->snd_cwnd;
	info->tcpi_advmss = tp->advmss;
	info->tcpi_reordering = tp->reordering;

	info->tcpi_rcv_rtt = jiffies_to_usecs(tp->rcv_rtt_est.rtt) >> 3;
	info->tcpi_rcv_space = tp->rcvq_space.space;

	info->tcpi_total_retrans = tp->total_retrans;
}

EXPORT_SYMBOL_GPL(tcp_get_info);

static int do_tcp_getsockopt(struct sock *sk, int level,
			     int optname, char __user *optval,
			     int __user *optlen)
{
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tcp_sock *tp = tcp_sk(sk);
	int val, len;

	if (get_user(len, optlen))
		return -EFAULT;

	len = min_t(unsigned int, len, sizeof(int));

	if (len < 0)
		return -EINVAL;

	switch (optname) {
	case TCP_MAXSEG:
		val = tp->mss_cache;
		if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
			val = tp->rx_opt.user_mss;
		break;
	case TCP_NODELAY:
		val = !!(tp->nonagle & TCP_NAGLE_OFF);
		break;
	case TCP_CORK:
		val = !!(tp->nonagle & TCP_NAGLE_CORK);
		break;
	case TCP_KEEPIDLE:
		val = (tp->keepalive_time ? : sysctl_tcp_keepalive_time) / HZ;
		break;
	case TCP_KEEPINTVL:
		val = (tp->keepalive_intvl ? : sysctl_tcp_keepalive_intvl) / HZ;
		break;
	case TCP_KEEPCNT:
		val = tp->keepalive_probes ? : sysctl_tcp_keepalive_probes;
		break;
	case TCP_SYNCNT:
		val = icsk->icsk_syn_retries ? : sysctl_tcp_syn_retries;
		break;
	case TCP_LINGER2:
		val = tp->linger2;
		if (val >= 0)
			val = (val ? : sysctl_tcp_fin_timeout) / HZ;
		break;
	case TCP_DEFER_ACCEPT:
		val = !icsk->icsk_accept_queue.rskq_defer_accept ? 0 :
		      ((TCP_TIMEOUT_INIT / HZ) <<
		       (icsk->icsk_accept_queue.rskq_defer_accept - 1));
		break;
	case TCP_WINDOW_CLAMP:
		val = tp->window_clamp;
		break;
	case TCP_INFO: {
		struct tcp_info info;

		if (get_user(len, optlen))
			return -EFAULT;

		tcp_get_info(sk, &info);

		len = min_t(unsigned int, len, sizeof(info));
		if (put_user(len, optlen))
			return -EFAULT;
		if (copy_to_user(optval, &info, len))
			return -EFAULT;
		return 0;
	}
	case TCP_QUICKACK:
		val = !icsk->icsk_ack.pingpong;
		break;

	case TCP_CONGESTION:
		if (get_user(len, optlen))
			return -EFAULT;
		len = min_t(unsigned int, len, TCP_CA_NAME_MAX);
		if (put_user(len, optlen))
			return -EFAULT;
		if (copy_to_user(optval, icsk->icsk_ca_ops->name, len))
			return -EFAULT;
		return 0;
	default:
		return -ENOPROTOOPT;
	}

	if (put_user(len, optlen))
		return -EFAULT;
	if (copy_to_user(optval, &val, len))
		return -EFAULT;
	return 0;
}

int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
		   int __user *optlen)
{
	struct inet_connection_sock *icsk = inet_csk(sk);

	if (level != SOL_TCP)
		return icsk->icsk_af_ops->getsockopt(sk, level, optname,
						     optval, optlen);
	return do_tcp_getsockopt(sk, level, optname, optval, optlen);
}

#ifdef CONFIG_COMPAT
int compat_tcp_getsockopt(struct sock *sk, int level, int optname,
			  char __user *optval, int __user *optlen)
{
	if (level != SOL_TCP)
		return inet_csk_compat_getsockopt(sk, level, optname,
						  optval, optlen);
	return do_tcp_getsockopt(sk, level, optname, optval, optlen);
}

EXPORT_SYMBOL(compat_tcp_getsockopt);
#endif

struct sk_buff *tcp_tso_segment(struct sk_buff *skb, int features)
{
	struct sk_buff *segs = ERR_PTR(-EINVAL);
	struct tcphdr *th;
	unsigned thlen;
	unsigned int seq;
	__be32 delta;
	unsigned int oldlen;
	unsigned int len;

	if (!pskb_may_pull(skb, sizeof(*th)))
		goto out;

	th = tcp_hdr(skb);
	thlen = th->doff * 4;
	if (thlen < sizeof(*th))
		goto out;

	if (!pskb_may_pull(skb, thlen))
		goto out;

	oldlen = (u16)~skb->len;
	__skb_pull(skb, thlen);

	if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
		/* Packet is from an untrusted source, reset gso_segs. */
		int type = skb_shinfo(skb)->gso_type;
		int mss;

		if (unlikely(type &
			     ~(SKB_GSO_TCPV4 |
			       SKB_GSO_DODGY |
			       SKB_GSO_TCP_ECN |
			       SKB_GSO_TCPV6 |
			       0) ||
			     !(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
			goto out;

		mss = skb_shinfo(skb)->gso_size;
		skb_shinfo(skb)->gso_segs = (skb->len + mss - 1) / mss;

		segs = NULL;
		goto out;
	}

	segs = skb_segment(skb, features);
	if (IS_ERR(segs))
		goto out;

	len = skb_shinfo(skb)->gso_size;
	delta = htonl(oldlen + (thlen + len));

	skb = segs;
	th = tcp_hdr(skb);
	seq = ntohl(th->seq);

	do {
		th->fin = th->psh = 0;

		th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
						       (__force u32)delta));
		if (skb->ip_summed != CHECKSUM_PARTIAL)
			th->check =
			     csum_fold(csum_partial(skb_transport_header(skb),
						    thlen, skb->csum));

		seq += len;
		skb = skb->next;
		th = tcp_hdr(skb);

		th->seq = htonl(seq);
		th->cwr = 0;
	} while (skb->next);

	delta = htonl(oldlen + (skb->tail - skb->transport_header) +
		      skb->data_len);
	th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
					       (__force u32)delta));
	if (skb->ip_summed != CHECKSUM_PARTIAL)
		th->check = csum_fold(csum_partial(skb_transport_header(skb),
						   thlen, skb->csum));

out:
	return segs;
}
EXPORT_SYMBOL(tcp_tso_segment);

#ifdef CONFIG_TCP_MD5SIG
static unsigned long tcp_md5sig_users;
static struct tcp_md5sig_pool **tcp_md5sig_pool;
static DEFINE_SPINLOCK(tcp_md5sig_pool_lock);

static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool **pool)
{
	int cpu;
	for_each_possible_cpu(cpu) {
		struct tcp_md5sig_pool *p = *per_cpu_ptr(pool, cpu);
		if (p) {
			if (p->md5_desc.tfm)
				crypto_free_hash(p->md5_desc.tfm);
			kfree(p);
			p = NULL;
		}
	}
	free_percpu(pool);
}

void tcp_free_md5sig_pool(void)
{
	struct tcp_md5sig_pool **pool = NULL;
	spin_lock_bh(&tcp_md5sig_pool_lock);
	if (--tcp_md5sig_users == 0) {
		pool = tcp_md5sig_pool;
		tcp_md5sig_pool = NULL;
	}
	spin_unlock_bh(&tcp_md5sig_pool_lock);
	if (pool)
		__tcp_free_md5sig_pool(pool);
}

EXPORT_SYMBOL(tcp_free_md5sig_pool);

static struct tcp_md5sig_pool **__tcp_alloc_md5sig_pool(void)
{
	int cpu;
	struct tcp_md5sig_pool **pool;

	pool = alloc_percpu(struct tcp_md5sig_pool *);
	if (!pool)
		return NULL;

	for_each_possible_cpu(cpu) {
		struct tcp_md5sig_pool *p;
		struct crypto_hash *hash;

		p = kzalloc(sizeof(*p), GFP_KERNEL);
		if (!p)
			goto out_free;
		*per_cpu_ptr(pool, cpu) = p;

		hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
		if (!hash || IS_ERR(hash))
			goto out_free;

		p->md5_desc.tfm = hash;
	}
	return pool;
out_free:
	__tcp_free_md5sig_pool(pool);
	return NULL;
}

struct tcp_md5sig_pool **tcp_alloc_md5sig_pool(void)
{
	struct tcp_md5sig_pool **pool;
	int alloc = 0;

retry:
	spin_lock_bh(&tcp_md5sig_pool_lock);
	pool = tcp_md5sig_pool;
	if (tcp_md5sig_users++ == 0) {
		alloc = 1;
		spin_unlock_bh(&tcp_md5sig_pool_lock);
	} else if (!pool) {
		tcp_md5sig_users--;
		spin_unlock_bh(&tcp_md5sig_pool_lock);
		cpu_relax();
		goto retry;
	} else
		spin_unlock_bh(&tcp_md5sig_pool_lock);

	if (alloc) {
		/* we cannot hold spinlock here because this may sleep. */
		struct tcp_md5sig_pool **p = __tcp_alloc_md5sig_pool();
		spin_lock_bh(&tcp_md5sig_pool_lock);
		if (!p) {
			tcp_md5sig_users--;
			spin_unlock_bh(&tcp_md5sig_pool_lock);
			return NULL;
		}
		pool = tcp_md5sig_pool;
		if (pool) {
			/* oops, it has already been assigned. */
			spin_unlock_bh(&tcp_md5sig_pool_lock);
			__tcp_free_md5sig_pool(p);
		} else {
			tcp_md5sig_pool = pool = p;
			spin_unlock_bh(&tcp_md5sig_pool_lock);
		}
	}
	return pool;
}

EXPORT_SYMBOL(tcp_alloc_md5sig_pool);

struct tcp_md5sig_pool *__tcp_get_md5sig_pool(int cpu)
{
	struct tcp_md5sig_pool **p;
	spin_lock_bh(&tcp_md5sig_pool_lock);
	p = tcp_md5sig_pool;
	if (p)
		tcp_md5sig_users++;
	spin_unlock_bh(&tcp_md5sig_pool_lock);
	return (p ? *per_cpu_ptr(p, cpu) : NULL);
}

EXPORT_SYMBOL(__tcp_get_md5sig_pool);

void __tcp_put_md5sig_pool(void)
{
	tcp_free_md5sig_pool();
}

EXPORT_SYMBOL(__tcp_put_md5sig_pool);
#endif

void tcp_done(struct sock *sk)
{
	if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV)
		TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS);

	tcp_set_state(sk, TCP_CLOSE);
	tcp_clear_xmit_timers(sk);

	sk->sk_shutdown = SHUTDOWN_MASK;

	if (!sock_flag(sk, SOCK_DEAD))
		sk->sk_state_change(sk);
	else
		inet_csk_destroy_sock(sk);
}
EXPORT_SYMBOL_GPL(tcp_done);

extern void __skb_cb_too_small_for_tcp(int, int);
extern struct tcp_congestion_ops tcp_reno;

static __initdata unsigned long thash_entries;
static int __init set_thash_entries(char *str)
{
	if (!str)
		return 0;
	thash_entries = simple_strtoul(str, &str, 0);
	return 1;
}
__setup("thash_entries=", set_thash_entries);

void __init tcp_init(void)
{
	struct sk_buff *skb = NULL;
	unsigned long limit;
	int order, i, max_share;

	if (sizeof(struct tcp_skb_cb) > sizeof(skb->cb))
		__skb_cb_too_small_for_tcp(sizeof(struct tcp_skb_cb),
					   sizeof(skb->cb));

	tcp_hashinfo.bind_bucket_cachep =
		kmem_cache_create("tcp_bind_bucket",
				  sizeof(struct inet_bind_bucket), 0,
				  SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);

	/* Size and allocate the main established and bind bucket
	 * hash tables.
	 *
	 * The methodology is similar to that of the buffer cache.
	 */
	tcp_hashinfo.ehash =
		alloc_large_system_hash("TCP established",
					sizeof(struct inet_ehash_bucket),
					thash_entries,
					(num_physpages >= 128 * 1024) ?
					13 : 15,
					0,
					&tcp_hashinfo.ehash_size,
					NULL,
					0);
	tcp_hashinfo.ehash_size = 1 << tcp_hashinfo.ehash_size;
	for (i = 0; i < tcp_hashinfo.ehash_size; i++) {
		rwlock_init(&tcp_hashinfo.ehash[i].lock);
		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].chain);
		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].twchain);
	}

	tcp_hashinfo.bhash =
		alloc_large_system_hash("TCP bind",
					sizeof(struct inet_bind_hashbucket),
					tcp_hashinfo.ehash_size,
					(num_physpages >= 128 * 1024) ?
					13 : 15,
					0,
					&tcp_hashinfo.bhash_size,
					NULL,
					64 * 1024);
	tcp_hashinfo.bhash_size = 1 << tcp_hashinfo.bhash_size;
	for (i = 0; i < tcp_hashinfo.bhash_size; i++) {
		spin_lock_init(&tcp_hashinfo.bhash[i].lock);
		INIT_HLIST_HEAD(&tcp_hashinfo.bhash[i].chain);
	}

	/* Try to be a bit smarter and adjust defaults depending
	 * on available memory.
	 */
	for (order = 0; ((1 << order) << PAGE_SHIFT) <
	     (tcp_hashinfo.bhash_size * sizeof(struct inet_bind_hashbucket));
	     order++)
		;
	if (order >= 4) {
		sysctl_local_port_range[0] = 32768;
		sysctl_local_port_range[1] = 61000;
		tcp_death_row.sysctl_max_tw_buckets = 180000;
		sysctl_tcp_max_orphans = 4096 << (order - 4);
		sysctl_max_syn_backlog = 1024;
	} else if (order < 3) {
		sysctl_local_port_range[0] = 1024 * (3 - order);
		tcp_death_row.sysctl_max_tw_buckets >>= (3 - order);
		sysctl_tcp_max_orphans >>= (3 - order);
		sysctl_max_syn_backlog = 128;
	}

	/* Set the pressure threshold to be a fraction of global memory that
	 * is up to 1/2 at 256 MB, decreasing toward zero with the amount of
	 * memory, with a floor of 128 pages.
	 */
	limit = min(nr_all_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
	limit = (limit * (nr_all_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
	limit = max(limit, 128UL);
	sysctl_tcp_mem[0] = limit / 4 * 3;
	sysctl_tcp_mem[1] = limit;
	sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;

	/* Set per-socket limits to no more than 1/128 the pressure threshold */
	limit = ((unsigned long)sysctl_tcp_mem[1]) << (PAGE_SHIFT - 7);
	max_share = min(4UL*1024*1024, limit);

	sysctl_tcp_wmem[0] = SK_STREAM_MEM_QUANTUM;
	sysctl_tcp_wmem[1] = 16*1024;
	sysctl_tcp_wmem[2] = max(64*1024, max_share);

	sysctl_tcp_rmem[0] = SK_STREAM_MEM_QUANTUM;
	sysctl_tcp_rmem[1] = 87380;
	sysctl_tcp_rmem[2] = max(87380, max_share);

	printk(KERN_INFO "TCP: Hash tables configured "
	       "(established %d bind %d)\n",
	       tcp_hashinfo.ehash_size, tcp_hashinfo.bhash_size);

	tcp_register_congestion_control(&tcp_reno);
}

EXPORT_SYMBOL(tcp_close);
EXPORT_SYMBOL(tcp_disconnect);
EXPORT_SYMBOL(tcp_getsockopt);
EXPORT_SYMBOL(tcp_ioctl);
EXPORT_SYMBOL(tcp_poll);
EXPORT_SYMBOL(tcp_read_sock);
EXPORT_SYMBOL(tcp_recvmsg);
EXPORT_SYMBOL(tcp_sendmsg);
EXPORT_SYMBOL(tcp_sendpage);
EXPORT_SYMBOL(tcp_setsockopt);
EXPORT_SYMBOL(tcp_shutdown);
EXPORT_SYMBOL(tcp_statistics);