commit aec56d2f3792bdb06548929bc516a185d723d234
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   Thu Jun 20 18:11:30 2019 +0100

    Linux 3.16.69

commit 7ce5a5796ca119c5c6935ea9f4e785f0cb7f39b7
Author: Eric Dumazet <edumazet@google.com>
Date:   Sat Jun 8 10:22:49 2019 -0700

    tcp: enforce tcp_min_snd_mss in tcp_mtu_probing()
    
    commit 967c05aee439e6e5d7d805e195b3a20ef5c433d6 upstream.
    
    If MTU probing is enabled, tcp_mtu_probing() could very well end up
    with a too-small MSS.
    
    Use the new sysctl tcp_min_snd_mss to make sure MSS search
    is performed in an acceptable range.
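
    Roughly, the clamp sits at the end of the MSS search computation in
    tcp_mtu_probing(); this is a sketch based on the description (in this
    3.16 backport the sysctls are globals), not the literal diff:

        mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1;
        mss = min(sysctl_tcp_base_mss, mss);
        mss = max(mss, 68 - tcp_sk(sk)->tcp_header_len);
        /* new: never let the probe search drop below the floor */
        mss = max(mss, sysctl_tcp_min_snd_mss);
        icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss);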
    
    CVE-2019-11479 -- tcp mss hardcoded to 48
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Jonathan Lemon <jonathan.lemon@gmail.com>
    Cc: Jonathan Looney <jtl@netflix.com>
    Acked-by: Neal Cardwell <ncardwell@google.com>
    Cc: Yuchung Cheng <ycheng@google.com>
    Cc: Tyler Hicks <tyhicks@canonical.com>
    Cc: Bruce Curtis <brucec@netflix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    [Salvatore Bonaccorso: Backport for context changes in 4.9.168]
    [bwh: Backported to 3.16: The sysctl is global]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 6b7e7997ad3505db7de85ff12276fc84659481d3
Author: Eric Dumazet <edumazet@google.com>
Date:   Thu Jun 6 09:15:31 2019 -0700

    tcp: add tcp_min_snd_mss sysctl
    
    commit 5f3e2bf008c2221478101ee72f5cb4654b9fc363 upstream.
    
    Some TCP peers announce a very small MSS option in their SYN and/or
    SYN/ACK messages.
    
    This forces the stack to send packets with a very high network/CPU
    overhead.

    Linux has enforced a minimal value of 48. Since this value includes
    the size of TCP options, and the options can consume up to 40 bytes,
    each segment can carry only 8 bytes of payload.
    
    In some cases, it can be useful to increase the minimal value
    to a saner value.
    
    We still leave the default at 48 (TCP_MIN_SND_MSS) for compatibility
    reasons.

    Note that the TCP_MAXSEG socket option enforces a minimal value of
    TCP_MIN_MSS. David Miller increased this minimal value from 64 to 88
    in commit c39508d6f118 ("tcp: Make TCP_MAXSEG minimum more correct.").
    
    We might in the future merge TCP_MIN_SND_MSS and TCP_MIN_MSS.
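
    A sketch of how the knob is defined and enforced (placement assumed
    from this description and the backport note below; in 3.16 the sysctl
    is a plain global like sysctl_tcp_base_mss):

        /* Default stays at TCP_MIN_SND_MSS (48) for compatibility;
         * tunable via net.ipv4.tcp_min_snd_mss. */
        int sysctl_tcp_min_snd_mss __read_mostly = TCP_MIN_SND_MSS;

        /* Enforcement: wherever the send MSS is derived from the path
         * MTU, never go below the configured floor. */
        mss = max(mss, sysctl_tcp_min_snd_mss);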
    
    CVE-2019-11479 -- tcp mss hardcoded to 48
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Suggested-by: Jonathan Looney <jtl@netflix.com>
    Acked-by: Neal Cardwell <ncardwell@google.com>
    Cc: Yuchung Cheng <ycheng@google.com>
    Cc: Tyler Hicks <tyhicks@canonical.com>
    Cc: Bruce Curtis <brucec@netflix.com>
    Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    [Salvatore Bonaccorso: Backport for context changes in 4.9.168]
    [bwh: Backported to 3.16: Make the sysctl global, consistent with
     net.ipv4.tcp_base_mss]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit dc97a907bc76b71c08e7e99a5b1b30ef4d5e4a85
Author: Eric Dumazet <edumazet@google.com>
Date:   Sat May 18 05:12:05 2019 -0700

    tcp: tcp_fragment() should apply sane memory limits
    
    commit f070ef2ac66716357066b683fb0baf55f8191a2e upstream.
    
    Jonathan Looney reported that a malicious peer can force a sender
    to fragment its retransmit queue into tiny skbs, inflating memory
    usage and/or overflowing 32-bit counters.
    
    TCP allows an application to queue up to sk_sndbuf bytes,
    so we need to give some allowance for non-malicious splitting
    of the retransmit queue.

    A new SNMP counter is added to monitor how many times TCP
    refused to split an skb because the allowance was exceeded.

    Note that this counter might increase when applications use the
    SO_SNDBUF socket option to lower sk_sndbuf.
    
    CVE-2019-11478: tcp_fragment, prevent fragmenting a packet when the
            socket is already using more than half the allowed space
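
    A sketch of the new guard at the top of tcp_fragment() (taken from the
    description; the exact allowance and counter name in the backport may
    differ slightly):

        /* Refuse to split if the write queue already holds far more
         * than the socket's send-buffer allowance, and account the
         * refusal in the new SNMP counter (TCPWqueueTooBig). */
        if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
                NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
                return -ENOMEM;
        }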
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Jonathan Looney <jtl@netflix.com>
    Acked-by: Neal Cardwell <ncardwell@google.com>
    Acked-by: Yuchung Cheng <ycheng@google.com>
    Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
    Cc: Bruce Curtis <brucec@netflix.com>
    Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    [Salvatore Bonaccorso: Adjust context for backport to 4.9.168]
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit ef27e3c531782ec8213108e11e5515f9724303c7
Author: Eric Dumazet <edumazet@google.com>
Date:   Fri May 17 17:17:22 2019 -0700

    tcp: limit payload size of sacked skbs
    
    commit 3b4929f65b0d8249f19a50245cd88ed1a2f78cff upstream.
    
    Jonathan Looney reported that TCP can trigger the following crash
    in tcp_shifted_skb() :
    
            BUG_ON(tcp_skb_pcount(skb) < pcount);
    
    This can happen if the remote peer has advertised the smallest
    MSS that Linux TCP accepts: 48.
    
    An skb can hold 17 fragments, and each fragment can hold 32KB
    on x86, or 64KB on PowerPC.
    
    This means that the 16-bit width of TCP_SKB_CB(skb)->tcp_gso_segs
    can overflow.
    
    Note that tcp_sendmsg() builds skbs with less than 64KB
    of payload, so this problem needs SACK to be enabled.
    SACK blocks allow TCP to coalesce multiple skbs in the retransmit
    queue, thus filling the 17 fragments to maximal capacity.
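
    The overflow arithmetic, spelled out as a standalone illustration of
    the numbers above (not kernel code):

        #include <stdio.h>

        int main(void)
        {
                unsigned long frags     = 17;          /* fragments per skb (x86)   */
                unsigned long frag_size = 32 * 1024;   /* bytes per fragment (x86)  */
                unsigned long payload   = 8;           /* 48-byte MSS - 40B options */
                unsigned long segs      = frags * frag_size / payload;

                /* 69632 segments, but tcp_gso_segs is a u16 (max 65535) */
                printf("segments needed: %lu\n", segs);
                return 0;
        }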
    
    CVE-2019-11477 -- u16 overflow of TCP_SKB_CB(skb)->tcp_gso_segs
    
    Backport notes, provided by Joao Martins <joao.m.martins@oracle.com>
    
    v4.15 and later, since commit 737ff314563 ("tcp: use sequence distance
    to detect reordering"), switched from packet-based FACK tracking to
    sequence-based tracking.

    v4.14 and older still have the old logic, so tcp_skb_shift_data()
    needs to retain its original behaviour and keep @fack_count in sync.
    In other words, we keep incrementing pcount by tcp_skb_pcount(skb)
    and later use that to update fack_count. To make this more explicit
    we track the new skb's contribution to pcount in @next_pcount, which
    also avoids the repeated invocation of tcp_skb_pcount(skb) altogether.
    
    Fixes: 832d11c5cd07 ("tcp: Try to restore large SKBs while SACK processing")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Jonathan Looney <jtl@netflix.com>
    Acked-by: Neal Cardwell <ncardwell@google.com>
    Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
    Cc: Yuchung Cheng <ycheng@google.com>
    Cc: Bruce Curtis <brucec@netflix.com>
    Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    [Salvatore Bonaccorso: Adjust for context changes to backport to
    4.9.168]
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit acaf43aa7ede1e500532f1f5d910e207f89d5e1f
Author: Young Xiao <YangX92@hotmail.com>
Date:   Fri Apr 12 15:24:30 2019 +0800

    Bluetooth: hidp: fix buffer overflow
    
    commit a1616a5ac99ede5d605047a9012481ce7ff18b16 upstream.
    
    Struct ca is copied from userspace. It is not checked whether the "name"
    field is NUL-terminated, which allows local users to obtain potentially
    sensitive information from kernel stack memory via a HIDPCONNADD command.
    
    This vulnerability is similar to CVE-2011-1079.
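
    The shape of the fix, per the description (a sketch of the HIDPCONNADD
    path in hidp_sock_ioctl(), not the verbatim patch):

        if (copy_from_user(&ca, argp, sizeof(ca)))
                return -EFAULT;
        /* ca.name is user-controlled; force NUL termination so later
         * string handling cannot read past the buffer into the stack. */
        ca.name[sizeof(ca.name) - 1] = 0;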
    
    Signed-off-by: Young Xiao <YangX92@hotmail.com>
    Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 13c4be25bdcbe5045f9b17ad875c3253a4888e45
Author: Sriram Rajagopalan <sriramr@arista.com>
Date:   Fri May 10 19:28:06 2019 -0400

    ext4: zero out the unused memory region in the extent tree block
    
    commit 592acbf16821288ecdc4192c47e3774a4c48bb64 upstream.
    
    This commit zeroes out the unused memory region in the buffer_head
    corresponding to the extent metablock after writing the extent header
    and the corresponding extent node entries.
    
    This is done to prevent random uninitialized data from getting into
    the filesystem when the extent block is synced.
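
    Roughly what the change does after initializing the new extent block
    (a sketch assuming the usual ext4 extent structures; not the literal
    diff):

        /* Zero the tail of the block that carries no extent entries, so
         * stale buffer contents never reach the disk. */
        ext_size = sizeof(struct ext4_extent_header) +
                   sizeof(struct ext4_extent) * le16_to_cpu(neh->eh_entries);
        memset(bh->b_data + ext_size, 0,
               inode->i_sb->s_blocksize - ext_size);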
    
    This fixes CVE-2019-11833.
    
    Signed-off-by: Sriram Rajagopalan <sriramr@arista.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit bd0908fbd84009cb5f01cf1a258a6f7fd78b6b3a
Author: Jason Yan <yanaijie@huawei.com>
Date:   Fri Feb 15 19:50:27 2019 +0800

    scsi: megaraid_sas: return error when create DMA pool failed
    
    commit bcf3b67d16a4c8ffae0aa79de5853435e683945c upstream.
    
    When creating the DMA pool for cmd frames fails, we should return
    -ENOMEM instead of 0. Consider this case:
    
        megasas_init_adapter_fusion()
    
        -->megasas_alloc_cmds()
           -->megasas_create_frame_pool
              create DMA pool failed,
            --> megasas_free_cmds() [1]
    
        -->megasas_alloc_cmds_fusion()
           failed, then goto fail_alloc_cmds.
        -->megasas_free_cmds() [2]
    
    we will call megasas_free_cmds() twice: [1] kfrees cmd_list and then
    [2] uses cmd_list, which causes the following problem:
    
    Unable to handle kernel NULL pointer dereference at virtual address
    00000000
    pgd = ffffffc000f70000
    [00000000] *pgd=0000001fbf893003, *pud=0000001fbf893003,
    *pmd=0000001fbf894003, *pte=006000006d000707
    Internal error: Oops: 96000005 [#1] SMP
     Modules linked in:
     CPU: 18 PID: 1 Comm: swapper/0 Not tainted
     task: ffffffdfb9290000 ti: ffffffdfb923c000 task.ti: ffffffdfb923c000
     PC is at megasas_free_cmds+0x30/0x70
     LR is at megasas_free_cmds+0x24/0x70
     ...
     Call trace:
     [<ffffffc0005b779c>] megasas_free_cmds+0x30/0x70
     [<ffffffc0005bca74>] megasas_init_adapter_fusion+0x2f4/0x4d8
     [<ffffffc0005b926c>] megasas_init_fw+0x2dc/0x760
     [<ffffffc0005b9ab0>] megasas_probe_one+0x3c0/0xcd8
     [<ffffffc0004a5abc>] local_pci_probe+0x4c/0xb4
     [<ffffffc0004a5c40>] pci_device_probe+0x11c/0x14c
     [<ffffffc00053a5e4>] driver_probe_device+0x1ec/0x430
     [<ffffffc00053a92c>] __driver_attach+0xa8/0xb0
     [<ffffffc000538178>] bus_for_each_dev+0x74/0xc8
     [<ffffffc000539e88>] driver_attach+0x28/0x34
     [<ffffffc000539a18>] bus_add_driver+0x16c/0x248
     [<ffffffc00053b234>] driver_register+0x6c/0x138
     [<ffffffc0004a5350>] __pci_register_driver+0x5c/0x6c
     [<ffffffc000ce3868>] megasas_init+0xc0/0x1a8
     [<ffffffc000082a58>] do_one_initcall+0xe8/0x1ec
     [<ffffffc000ca7be8>] kernel_init_freeable+0x1c8/0x284
     [<ffffffc0008d90b8>] kernel_init+0x1c/0xe4
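
    The fix itself is small; a sketch of the changed spot in
    megasas_alloc_cmds(), per the description above:

        if (megasas_create_frame_pool(instance)) {
                /* frame DMA pool creation failed */
                megasas_free_cmds(instance);
                return -ENOMEM;   /* new: previously fell through to "return 0" */
        }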
    
    Signed-off-by: Jason Yan <yanaijie@huawei.com>
    Acked-by: Sumit Saxena <sumit.saxena@broadcom.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit bfa8c73482dae6bafc0741cbfd63f84d11311b36
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Tue May 14 15:47:03 2019 -0700

    drivers/virt/fsl_hypervisor.c: prevent integer overflow in ioctl
    
    commit 6a024330650e24556b8a18cc654ad00cfecf6c6c upstream.
    
    The "param.count" value is a u64 thatcomes from the user.  The code
    later in the function assumes that param.count is at least one and if
    it's not then it leads to an Oops when we dereference the ZERO_SIZE_PTR.
    
    Also, the addition can have an integer overflow, which would lead us
    to allocate a smaller "pages" array than required.  I can't immediately
    tell what the possible runtime implications are, but it's safest to
    prevent the overflow.
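
    A sketch of the kind of validation described (illustrative only; the
    exact bound used in the patch may differ):

        /* Reject a zero count (ZERO_SIZE_PTR dereference later) and
         * counts so large that the page arithmetic below would wrap. */
        if (!param.count ||
            param.count > U64_MAX - lb_offset - PAGE_SIZE + 1)
                return -EINVAL;

        num_pages = (param.count + lb_offset + PAGE_SIZE - 1) >> PAGE_SHIFT;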
    
    Link: http://lkml.kernel.org/r/20181218082129.GE32567@kadam
    Fixes: 6db7199407ca ("drivers/virt: introduce Freescale hypervisor management driver")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Timur Tabi <timur@freescale.com>
    Cc: Mihai Caraman <mihai.caraman@freescale.com>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit b96659f18c61120dbf8b4cc36fbc05589bf9dc02
Author: Jiri Kosina <jkosina@suse.cz>
Date:   Tue May 14 15:41:38 2019 -0700

    mm/mincore.c: make mincore() more conservative
    
    commit 134fca9063ad4851de767d1768180e5dede9a881 upstream.
    
    The semantics of what mincore() considers to be resident is not
    completely clear, but Linux has always (since 2.3.52, which is when
    mincore() was initially done) treated it as "page is available in page
    cache".
    
    That's potentially a problem, as it [in]directly exposes
    meta-information about pagecache / memory mapping state, even about
    memory not strictly belonging to the process executing the syscall,
    opening possibilities for side-channel attacks.
    
    Change the semantics of mincore() so that it only reveals pagecache
    information for non-anonymous mappings that belong to files that the
    calling process could (if it tried to) successfully open for writing;
    otherwise we'd be including shared non-exclusive mappings, which

     - is the side channel

     - is not the use case for mincore(), as that's primarily used for
       data, not (shared) text
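
    The new rule boils down to a helper along these lines (a sketch of the
    policy stated above, not the literal backported code):

        static inline bool can_do_mincore(struct vm_area_struct *vma)
        {
                if (vma_is_anonymous(vma))
                        return true;
                if (!vma->vm_file)
                        return false;
                /* Reveal pagecache residency only if the caller owns the
                 * backing file or could open it for writing anyway. */
                return inode_owner_or_capable(file_inode(vma->vm_file)) ||
                       inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
        }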
    
    [jkosina@suse.cz: v2]
      Link: http://lkml.kernel.org/r/20190312141708.6652-2-vbabka@suse.cz
    [mhocko@suse.com: restructure can_do_mincore() conditions]
    Link: http://lkml.kernel.org/r/nycvar.YFH.7.76.1903062342020.19912@cbobk.fhfr.pm
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Josh Snyder <joshs@netflix.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Originally-by: Linus Torvalds <torvalds@linux-foundation.org>
    Originally-by: Dominique Martinet <asmadeus@codewreck.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Chinner <david@fromorbit.com>
    Cc: Kevin Easton <kevin@guarana.org>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Cyril Hrubis <chrubis@suse.cz>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Kirill A. Shutemov <kirill@shutemov.name>
    Cc: Daniel Gruss <daniel@gruss.cc>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 762d8ea0c73165fc9c99a9bc63b82706cbb56062
Author: Oleg Nesterov <oleg@redhat.com>
Date:   Tue Sep 8 14:58:28 2015 -0700

    mm: introduce vma_is_anonymous(vma) helper
    
    commit b5330628546616af14ff23075fbf8d4ad91f6e25 upstream.
    
    special_mapping_fault() is absolutely broken.  It seems it was always
    wrong, but this didn't matter until vdso/vvar started to use more than
    one page.
    
    And after this change vma_is_anonymous() becomes really trivial: it
    simply checks vm_ops == NULL.  However, I do think the helper makes
    sense.  There are a lot of ->vm_ops != NULL checks; the helper makes
    the callers' code more understandable (self-documented) and more
    grep-friendly.
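
    The helper itself is as trivial as described:

        static inline bool vma_is_anonymous(struct vm_area_struct *vma)
        {
                return !vma->vm_ops;
        }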
    
    This patch (of 3):
    
    Preparation.  Add the new simple helper, vma_is_anonymous(vma), and change
    handle_pte_fault() to use it.  It will have more users.
    
    The name is not entirely accurate: say, an hpet_mmap()'ed vma is not
    anonymous.  Perhaps it should be named vma_has_fault() instead.  But it
    matches the logic in mmap.c/memory.c (see the next changes).  "True"
    just means that a page fault will use do_anonymous_page().
    
    Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Pavel Emelyanov <xemul@parallels.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [bwh: Backported to 3.16 as dependency of "mm/mincore.c: make mincore() more
     conservative"; adjusted context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>