From f51319da0f2c66df5c5f8837336e9f8dbe417358 Mon Sep 17 00:00:00 2001
From: Vicentiu Galanopulo
Date: Tue, 7 Jan 2020 15:48:39 +0100
Subject: [PATCH]

Date: Sat, 8 Jun 2019 10:38:06 -0700
Subject: [PATCH net 2/4] tcp: tcp_fragment() should apply sane memory limits
From: Eric Dumazet

Jonathan Looney reported that a malicious peer can force a sender
to fragment its retransmit queue into tiny skbs, inflating memory
usage and/or overflow 32bit counters.

TCP allows an application to queue up to sk_sndbuf bytes,
so we need to give some allowance for non malicious
splitting of retransmit queue.

A new SNMP counter is added to monitor how many times TCP
did not allow to split an skb if the allowance was exceeded.

Note that this counter might increase in the case applications
use SO_SNDBUF socket option to lower sk_sndbuf.

Signed-off-by: Eric Dumazet
Reported-by: Jonathan Looney
Acked-by: Neal Cardwell
Acked-by: Yuchung Cheng
Reviewed-by: Tyler Hicks
Cc: Bruce Curtis
Cc: Jonathan Lemon

Upstream-Status: Inappropriate [not author]

Signed-off-by: Vicentiu Galanopulo
---
 net/ipv4/tcp_output.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 2697e43..23329ea 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1300,6 +1300,11 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
 	if (nsize < 0)
 		nsize = 0;
 
+	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
+		return -ENOMEM;
+	}
+
 	/* tcp_sendmsg() can overshoot sk_wmem_queued by one full size skb.
 	 * We need some allowance to not penalize applications setting small
 	 * SO_SNDBUF values.
-- 
2.7.4
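
Illustration (not part of the patch to apply): a minimal, self-contained sketch of the
limit check the hunk introduces. It uses a hypothetical fake_sock stand-in rather than
the kernel's struct sock; only the field names sk_wmem_queued and sk_sndbuf and the
">> 1" comparison mirror the kernel code, everything else is illustrative.

/* Illustration only: simplified stand-in for the kernel's struct sock. */
#include <stdio.h>
#include <stdbool.h>

struct fake_sock {
	int sk_wmem_queued;	/* bytes currently queued for transmit */
	int sk_sndbuf;		/* send buffer limit (SO_SNDBUF) */
};

/* Mirrors the condition added to tcp_fragment(): refuse to split an skb
 * once more than twice sk_sndbuf is already queued, i.e. when half of
 * the queued bytes still exceeds the configured send buffer.
 */
static bool wqueue_too_big(const struct fake_sock *sk)
{
	return (sk->sk_wmem_queued >> 1) > sk->sk_sndbuf;
}

int main(void)
{
	struct fake_sock ok  = { .sk_wmem_queued = 300000, .sk_sndbuf = 212992 };
	struct fake_sock bad = { .sk_wmem_queued = 600000, .sk_sndbuf = 212992 };

	printf("ok:  refuse=%d\n", wqueue_too_big(&ok));	/* 0: within allowance */
	printf("bad: refuse=%d\n", wqueue_too_big(&bad));	/* 1: patched kernel returns -ENOMEM */
	return 0;
}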