commit 085c22bb54b8c64a874f8ff4a8575af2c90c6664
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Fri Aug 17 21:02:45 2018 +0200

    Linux 4.17.16

commit c812c53e814d0b5a186d8fdf9644f6d6fd291cc5
Author: Toshi Kani <toshi.kani@hpe.com>
Date:   Wed Jun 27 08:13:48 2018 -0600

    x86/mm: Add TLB purge to free pmd/pte page interfaces
    
    commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.
    
    ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
    a pud / pmd map.  The following preconditions are met at their entry.
     - All pte entries for a target pud/pmd address range have been cleared.
     - System-wide TLB purges have been performed for a target pud/pmd address
       range.
    
    The preconditions assure that there is no stale TLB entry for the range.
    Speculation may not cache TLB entries since it requires all levels of page
    entries, including ptes, to have P & A-bits set for an associated address.
    However, speculation may cache pud/pmd entries (paging-structure caches)
    when they have P-bit set.
    
    Add a system-wide TLB purge (INVLPG) to a single page after clearing
    pud/pmd entry's P-bit.
    
    SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
    states that:
      INVLPG invalidates all paging-structure caches associated with the
      current PCID regardless of the linear addresses to which they correspond.
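
    As an illustrative sketch only (not the exact upstream diff), the x86
    pmd_free_pte_page() roughly becomes the following; pud_free_pmd_page()
    gains the same flush after clearing the pud entry:

        int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
        {
                pte_t *pte;

                if (pmd_none(*pmd))
                        return 1;

                pte = (pte_t *)pmd_page_vaddr(*pmd);
                pmd_clear(pmd);

                /* INVLPG to clear all paging-structure caches */
                flush_tlb_kernel_range(addr, addr + PAGE_SIZE - 1);

                free_page((unsigned long)pte);

                return 1;
        }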
    
    Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
    Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: mhocko@suse.com
    Cc: akpm@linux-foundation.org
    Cc: hpa@zytor.com
    Cc: cpandya@codeaurora.org
    Cc: linux-mm@kvack.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: stable@vger.kernel.org
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: <stable@vger.kernel.org>
    Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a033b10861c7e176a53b606934cffce51e26c203
Author: Chintan Pandya <cpandya@codeaurora.org>
Date:   Wed Jun 27 08:13:47 2018 -0600

    ioremap: Update pgtable free interfaces with addr
    
    commit 785a19f9d1dd8a4ab2d0633be4656653bd3de1fc upstream.
    
    The following kernel panic was observed on ARM64 platform due to a stale
    TLB entry.
    
     1. ioremap with 4K size, a valid pte page table is set.
     2. iounmap it, its pte entry is set to 0.
     3. ioremap the same address with 2M size, update its pmd entry with
        a new value.
     4. CPU may hit an exception because the old pmd entry is still in TLB,
        which leads to a kernel panic.
    
    Commit b6bdb7517c3d ("mm/vmalloc: add interfaces to free unmapped page
    table") has addressed this panic by falling to pte mappings in the above
    case on ARM64.
    
    To support pmd mappings in all cases, TLB purge needs to be performed
    in this case on ARM64.
    
    Add a new arg, 'addr', to pud_free_pmd_page() and pmd_free_pte_page()
    so that TLB purge can be added later in separate patches.
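
    With this change, the interfaces take the base address of the range
    being unmapped (sketch of the new prototypes):

        int pud_free_pmd_page(pud_t *pud, unsigned long addr);
        int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);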
    
    [toshi.kani@hpe.com: merge changes, rewrite patch description]
    Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
    Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
    Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: mhocko@suse.com
    Cc: akpm@linux-foundation.org
    Cc: hpa@zytor.com
    Cc: linux-mm@kvack.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: stable@vger.kernel.org
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: <stable@vger.kernel.org>
    Link: https://lkml.kernel.org/r/20180627141348.21777-3-toshi.kani@hpe.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0c37356f695faeb8f47c9107707bd47b53622a46
Author: Mark Salyzyn <salyzyn@android.com>
Date:   Tue Jul 31 15:02:13 2018 -0700

    Bluetooth: hidp: buffer overflow in hidp_process_report
    
    commit 7992c18810e568b95c869b227137a2215702a805 upstream.
    
    CVE-2018-9363
    
    The buffer length is unsigned at all layers, but gets cast to int and
    checked in hidp_process_report(), which can lead to a buffer overflow.
    Switch the len parameter to unsigned int to resolve the issue.
    
    This affects 3.18 and newer kernels.
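
    A minimal sketch of the type change, assuming the usual
    hidp_process_report() parameter list (the exact signature may differ
    slightly across kernel versions):

        static void hidp_process_report(struct hidp_session *session,
                                        int type, const u8 *data,
                                        unsigned int len)   /* was: int len */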
    
    Signed-off-by: Mark Salyzyn <salyzyn@android.com>
    Fixes: a4b1b5877b514b276f0f31efe02388a9c2836728 ("HID: Bluetooth: hidp: make sure input buffers are big enough")
    Cc: Marcel Holtmann <marcel@holtmann.org>
    Cc: Johan Hedberg <johan.hedberg@gmail.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com>
    Cc: linux-bluetooth@vger.kernel.org
    Cc: netdev@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: security@kernel.org
    Cc: kernel-team@android.com
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 82ca3d13d8523b93b2ee054bb96aa34ec6e907c6
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jul 23 10:54:56 2018 -0700

    crypto: skcipher - fix crash flushing dcache in error path
    
    commit 8088d3dd4d7c6933a65aa169393b5d88d8065672 upstream.
    
    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page.  But in the error case of
    skcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.
    
    Fix it by reorganizing skcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.
    
    This bug was found by syzkaller fuzzing.
    
    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
    
            #include <linux/if_alg.h>
            #include <sys/socket.h>
            #include <unistd.h>
    
            int main()
            {
                    struct sockaddr_alg addr = {
                            .salg_type = "skcipher",
                            .salg_name = "cbc(aes-generic)",
                    };
                    char buffer[4096] __attribute__((aligned(4096))) = { 0 };
                    int fd;
    
                    fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
                    bind(fd, (void *)&addr, sizeof(addr));
                    setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
                    fd = accept(fd, NULL, NULL);
                    write(fd, buffer, 15);
                    read(fd, buffer, 15);
            }
    
    Reported-by: Liu Chao <liuchao741@huawei.com>
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: <stable@vger.kernel.org> # v4.10+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f21a81f5b952b32abfc329ca93078a91a28fe8cd
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jul 23 09:57:50 2018 -0700

    crypto: skcipher - fix aligning block size in skcipher_copy_iv()
    
    commit 0567fc9e90b9b1c8dbce8a5468758e6206744d4a upstream.
    
    The ALIGN() macro needs to be passed the alignment, not the alignmask
    (which is the alignment minus 1).
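
    Hedged before/after sketch, assuming the variable names used in
    skcipher_copy_iv() and the usual ALIGN() semantics (ALIGN(x, a) rounds
    x up to a multiple of the power-of-two a):

        aligned_bs = ALIGN(bs, alignmask);      /* wrong: passes the mask  */
        aligned_bs = ALIGN(bs, alignmask + 1);  /* fixed: passes alignment */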
    
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: <stable@vger.kernel.org> # v4.10+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 483efa679b61f0d47cf37fd16862ad290f780864
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jul 23 10:54:58 2018 -0700

    crypto: ablkcipher - fix crash flushing dcache in error path
    
    commit 318abdfbe708aaaa652c79fb500e9bd60521f9dc upstream.
    
    Like the skcipher_walk and blkcipher_walk cases:
    
    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page.  But in the error case of
    ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.
    
    Fix it by reorganizing ablkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.
    
    Reported-by: Liu Chao <liuchao741@huawei.com>
    Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces")
    Cc: <stable@vger.kernel.org> # v2.6.35+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a404181a65f9e3d5b15f52f0d1907ca4e36df269
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jul 23 10:54:57 2018 -0700

    crypto: blkcipher - fix crash flushing dcache in error path
    
    commit 0868def3e4100591e7a1fdbf3eed1439cc8f7ca3 upstream.
    
    Like the skcipher_walk case:
    
    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page.  But in the error case of
    blkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.
    
    Fix it by reorganizing blkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.
    
    This bug was found by syzkaller fuzzing.
    
    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
    
            #include <linux/if_alg.h>
            #include <sys/socket.h>
            #include <unistd.h>
    
            int main()
            {
                    struct sockaddr_alg addr = {
                            .salg_type = "skcipher",
                            .salg_name = "ecb(aes-generic)",
                    };
                    char buffer[4096] __attribute__((aligned(4096))) = { 0 };
                    int fd;
    
                    fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
                    bind(fd, (void *)&addr, sizeof(addr));
                    setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
                    fd = accept(fd, NULL, NULL);
                    write(fd, buffer, 15);
                    read(fd, buffer, 15);
            }
    
    Reported-by: Liu Chao <liuchao741@huawei.com>
    Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
    Cc: <stable@vger.kernel.org> # v2.6.19+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 55c689bd3d1c7c5ccddea1aac75ed67e5af8464a
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jun 18 10:22:38 2018 -0700

    crypto: vmac - separate tfm and request context
    
    commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream.
    
    syzbot reported a crash in vmac_final() when multiple threads
    concurrently use the same "vmac(aes)" transform through AF_ALG.  The bug
    is pretty fundamental: the VMAC template doesn't separate per-request
    state from per-tfm (per-key) state like the other hash algorithms do,
    but rather stores it all in the tfm context.  That's wrong.
    
    Also, vmac_final() incorrectly zeroes most of the state including the
    derived keys and cached pseudorandom pad.  Therefore, only the first
    VMAC invocation with a given key calculates the correct digest.
    
    Fix these bugs by splitting the per-tfm state from the per-request state
    and using the proper init/update/final sequencing for requests.
    
    Reproducer for the crash:
    
        #include <linux/if_alg.h>
        #include <sys/socket.h>
        #include <unistd.h>
    
        int main()
        {
                int fd;
                struct sockaddr_alg addr = {
                        .salg_type = "hash",
                        .salg_name = "vmac(aes)",
                };
                char buf[256] = { 0 };
    
                fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
                bind(fd, (void *)&addr, sizeof(addr));
                setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
                fork();
                fd = accept(fd, NULL, NULL);
                for (;;)
                        write(fd, buf, 256);
        }
    
    The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
    VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
    
    Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: <stable@vger.kernel.org> # v2.6.32+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b26243c535d39b89ed0cb8c97bcc09fe8d98bb07
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Jun 18 10:22:37 2018 -0700

    crypto: vmac - require a block cipher with 128-bit block size
    
    commit 73bf20ef3df262026c3470241ae4ac8196943ffa upstream.
    
    The VMAC template assumes the block cipher has a 128-bit block size, but
    it failed to check for that.  Thus it was possible to instantiate it
    using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
    uninitialized memory to be used.
    
    Add the needed check when instantiating the template.
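
    A sketch of the kind of check added when instantiating the template
    (the constant used upstream may be spelled differently):

        err = -EINVAL;
        if (alg->cra_blocksize != VMAC_NONCEBYTES)      /* 16 bytes */
                goto out_put_alg;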
    
    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: <stable@vger.kernel.org> # v2.6.32+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7aaaad0da8a5d72fae0c688fba6ef44937202169
Author: Eric Biggers <ebiggers@google.com>
Date:   Fri Jun 29 14:14:35 2018 -0700

    crypto: x86/sha256-mb - fix digest copy in sha256_mb_mgr_get_comp_job_avx2()
    
    commit af839b4e546613aed1fbd64def73956aa98631e7 upstream.
    
    There is a copy-paste error where sha256_mb_mgr_get_comp_job_avx2()
    copies the SHA-256 digest state from sha256_mb_mgr::args::digest to
    job_sha256::result_digest.  Consequently, the sha256_mb algorithm
    sometimes calculates the wrong digest.  Fix it.
    
    Reproducer using AF_ALG:
    
        #include <assert.h>
        #include <linux/if_alg.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>
    
        static const __u8 expected[32] =
            "\xad\x7f\xac\xb2\x58\x6f\xc6\xe9\x66\xc0\x04\xd7\xd1\xd1\x6b\x02"
            "\x4f\x58\x05\xff\x7c\xb4\x7c\x7a\x85\xda\xbd\x8b\x48\x89\x2c\xa7";
    
        int main()
        {
            int fd;
            struct sockaddr_alg addr = {
                .salg_type = "hash",
                .salg_name = "sha256_mb",
            };
            __u8 data[4096] = { 0 };
            __u8 digest[32];
            int ret;
            int i;
    
            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            fork();
            fd = accept(fd, 0, 0);
            do {
                ret = write(fd, data, 4096);
                assert(ret == 4096);
                ret = read(fd, digest, 32);
                assert(ret == 32);
            } while (memcmp(digest, expected, 32) == 0);
    
            printf("wrong digest: ");
            for (i = 0; i < 32; i++)
                printf("%02x", digest[i]);
            printf("\n");
        }
    
    Output was:
    
        wrong digest: ad7facb2000000000000000000000000ffffffef7cb47c7a85dabd8b48892ca7
    
    Fixes: 172b1d6b5a93 ("crypto: sha256-mb - fix ctx pointer and digest copy")
    Cc: <stable@vger.kernel.org> # v4.8+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3dcf89469076be59201e466b61661743d4515908
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Tue Jul 3 12:11:33 2018 -0500

    crypto: ccp - Fix command completion detection race
    
    commit f426d2b20f1cd63818873593031593e15c3db20b upstream.
    
    The wait_event() function is used to detect command completion.  The
    interrupt handler will set the wait condition variable when the interrupt
    is triggered.  However, the variable used for wait_event() is initialized
    after the command has been submitted, which can create a race condition
    with the interrupt handler and result in the wait_event() never returning.
    Move the initialization of the wait condition variable to just before
    command submission.
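
    Conceptual sketch of the corrected ordering (field and register names
    below are illustrative, not the driver's actual identifiers):

        sev->int_rcvd = 0;                          /* 1. init wait condition */
        iowrite32(cmd, sev->io_regs + CMD_OFFSET);  /* 2. submit the command  */
        wait_event(sev->int_queue, sev->int_rcvd);  /* 3. sleep until the IRQ
                                                     *    handler sets it     */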
    
    Fixes: 200664d5237f ("crypto: ccp: Add Secure Encrypted Virtualization (SEV) command support")
    Cc: <stable@vger.kernel.org> # 4.16.x-
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
    Acked-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 80a5eb6b3745b6bdfdf19f4687dba46aeffdb639
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Thu Jul 26 09:37:59 2018 -0500

    crypto: ccp - Check for NULL PSP pointer at module unload
    
    commit afb31cd2d1a1bc3ca055fb2519ec4e9ab969ffe0 upstream.
    
    Should the PSP initialization fail, the PSP data structure will be
    freed and the value contained in the sp_device struct set to NULL.
    At module unload, psp_dev_destroy() does not check if the pointer
    value is NULL and will end up dereferencing a NULL pointer.
    
    Add a pointer check of the psp_data field in the sp_device struct
    in psp_dev_destroy() and return immediately if it is NULL.
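
    The guard amounts to an early return at the top of psp_dev_destroy()
    (sketch; the remaining teardown is unchanged):

        static void psp_dev_destroy(struct sp_device *sp)
        {
                struct psp_device *psp = sp->psp_data;

                if (!psp)
                        return;         /* PSP init failed earlier */

                /* existing teardown continues here */
        }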
    
    Cc: <stable@vger.kernel.org> # 4.16.x-
    Fixes: 2a6170dfe755 ("crypto: ccp: Add Platform Security Processor (PSP) device support")
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Acked-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a2a9b8b34cc83b7a21002ba021b5057ae6e6a8ee
Author: Gilad Ben-Yossef <gilad@benyossef.com>
Date:   Sun Jul 1 08:02:36 2018 +0100

    crypto: ccree - fix iv handling
    
    commit 00904aa0cd59a36d659ec93d272309e2174bcb5b upstream.
    
    We were copying our last cipher block into the request for use as IV for
    all modes of operation. Fix this by discerning the behaviour based on
    the mode of operation used: copy ciphertext for CBC, update counter for
    CTR.
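
    Conceptually, per mode (this is not the driver's actual code; the
    DRV_CIPHER_* names are the ccree mode enums, the rest is illustrative):

        switch (ctx->cipher_mode) {
        case DRV_CIPHER_CBC:
                /* next IV = last ciphertext block */
                memcpy(req->iv, last_ciphertext_block, ivsize);
                break;
        case DRV_CIPHER_CTR:
                /* next IV = counter incremented once per processed block */
                for (i = 0; i < nblocks; i++)
                        crypto_inc(req->iv, ivsize);
                break;
        }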
    
    CC: stable@vger.kernel.org
    Fixes: 63ee04c8b491 ("crypto: ccree - add skcipher support")
    Reported-by: Hadar Gat <hadar.gat@arm.com>
    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4e2befa3feeb4da592b61dc9f03b36a40c2df566
Author: Hadar Gat <hadar.gat@arm.com>
Date:   Sun Jul 1 08:02:34 2018 +0100

    crypto: ccree - fix finup
    
    commit 26497e72a1aba4d27c50c4cbf0182db94e58a590 upstream.
    
    The finup() operation was incorrect: padding was missing.
    Fix this by setting the ccree HW to enable padding.
    
    Signed-off-by: Hadar Gat <hadar.gat@arm.com>
    [ gilad@benyossef.com: refactored for better code sharing ]
    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6b773162355a368f86c44d2723d496034ae68167
Author: Randy Dunlap <rdunlap@infradead.org>
Date:   Sun Jul 1 19:46:06 2018 -0700

    kbuild: verify that $DEPMOD is installed
    
    commit 934193a654c1f4d0643ddbf4b2529b508cae926e upstream.
    
    Verify that 'depmod' ($DEPMOD) is installed.
    This is a partial revert of commit 620c231c7a7f
    ("kbuild: do not check for ancient modutils tools").
    
    Also update Documentation/process/changes.rst to refer to
    kmod instead of module-init-tools.
    
    Fixes kernel bugzilla #198965:
    https://bugzilla.kernel.org/show_bug.cgi?id=198965
    
    Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
    Cc: Lucas De Marchi <lucas.demarchi@profusion.mobi>
    Cc: Lucas De Marchi <lucas.de.marchi@gmail.com>
    Cc: Michal Marek <michal.lkml@markovi.net>
    Cc: Jessica Yu <jeyu@kernel.org>
    Cc: Chih-Wei Huang <cwhuang@linux.org.tw>
    Cc: stable@vger.kernel.org # any kernel since 2012
    Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9c018f562dd9c07d1fd18579b7ecb43401b601c9
Author: Toshi Kani <toshi.kani@hpe.com>
Date:   Wed Jun 27 08:13:46 2018 -0600

    x86/mm: Disable ioremap free page handling on x86-PAE
    
    commit f967db0b9ed44ec3057a28f3b28efc51df51b835 upstream.
    
    ioremap() supports pmd mappings on x86-PAE.  However, the kernel's pmd
    tables are not shared among processes on x86-PAE.  Therefore, any
    update to sync'd pmd entries needs re-syncing.  Freeing a pte page
    also leads to a vmalloc fault and hits the BUG_ON in vmalloc_sync_one().
    
    Disable free page handling on x86-PAE.  pud_free_pmd_page() and
    pmd_free_pte_page() simply return 0 if a given pud/pmd entry is present.
    This assures that ioremap() does not update sync'd pmd entries at the
    cost of falling back to pte mappings.
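
    A sketch of the resulting x86-PAE (!CONFIG_X86_64) variants; they report
    "not freed" whenever the entry is populated, so ioremap() falls back to
    pte mappings:

        int pud_free_pmd_page(pud_t *pud)
        {
                return pud_none(*pud);
        }

        int pmd_free_pte_page(pmd_t *pmd)
        {
                return pmd_none(*pmd);
        }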
    
    Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
    Reported-by: Joerg Roedel <joro@8bytes.org>
    Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: mhocko@suse.com
    Cc: akpm@linux-foundation.org
    Cc: hpa@zytor.com
    Cc: cpandya@codeaurora.org
    Cc: linux-mm@kvack.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: stable@vger.kernel.org
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: <stable@vger.kernel.org>
    Link: https://lkml.kernel.org/r/20180627141348.21777-2-toshi.kani@hpe.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit dc891b7ba38115890df8ce722d7ac27f22766e8f
Author: M. Vefa Bicakci <m.v.b@runbox.com>
Date:   Tue Jul 24 08:45:47 2018 -0400

    xen/pv: Call get_cpu_address_sizes to set x86_virt/phys_bits
    
    commit 405c018a25fe464dc68057bbc8014a58f2bd4422 upstream.
    
    Commit d94a155c59c9 ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits
    adjustment corruption") has moved the query and calculation of the
    x86_virt_bits and x86_phys_bits fields of the cpuinfo_x86 struct
    from the get_cpu_cap function to a new function named
    get_cpu_address_sizes.
    
    One of the call sites related to Xen PV VMs was unfortunately missed
    in the aforementioned commit. This prevents successful boot-up of
    kernel versions 4.17 and up in Xen PV VMs if CONFIG_DEBUG_VIRTUAL
    is enabled, due to the following code path:
    
      enlighten_pv.c::xen_start_kernel
        mmu_pv.c::xen_reserve_special_pages
          page.h::__pa
            physaddr.c::__phys_addr
              physaddr.h::phys_addr_valid
    
    phys_addr_valid uses boot_cpu_data.x86_phys_bits to validate physical
    addresses. boot_cpu_data.x86_phys_bits is no longer populated before
    the call to xen_reserve_special_pages due to the aforementioned commit
    though, so the validation performed by phys_addr_valid fails, which
    causes __phys_addr to trigger a BUG, preventing boot-up.
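
    A sketch of the fix in xen_start_kernel() (the surrounding context may
    differ from the actual upstream hunk):

        /* Work out if we support NX */
        get_cpu_cap(&boot_cpu_data);
        get_cpu_address_sizes(&boot_cpu_data);  /* newly added call */
        x86_configure_nx();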
    
    Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: xen-devel@lists.xenproject.org
    Cc: x86@kernel.org
    Cc: stable@vger.kernel.org # for v4.17 and up
    Fixes: d94a155c59c9 ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 63687997fd18e65d63719b38888073a88537a1e2
Author: Dave Hansen <dave.hansen@linux.intel.com>
Date:   Thu Aug 2 15:58:25 2018 -0700

    x86/mm/pti: Clear Global bit more aggressively
    
    commit eac7073aa69aa1cac819aa712146284f53f642b1 upstream.
    
    The kernel image starts out with the Global bit set across the entire
    kernel image.  The bit is cleared with set_memory_nonglobal() in the
    configurations with PCIDs where the performance benefits of the Global bit
    are not needed.
    
    However, this is fragile.  It means that we are stuck opting *out* of the
    less-secure (Global bit set) configuration, which seems backwards.  Let's
    start more secure (Global bit clear) and then let things opt back in if
    they want performance, or are truly mapping common data between kernel and
    userspace.
    
    This fixes a bug.  Before this patch, there are areas that are unmapped
    from the user page tables (like everything above 0xffffffff82600000 in
    the example below).  These have the hallmark of being a wrong Global area:
    they are not identical in the 'current_kernel' and 'current_user' page
    table dumps.  They are also read-write, which means they're much more
    likely to contain secrets.
    
    Before this patch:
    
    current_kernel:---[ High Kernel Mapping ]---
    current_kernel-0xffffffff80000000-0xffffffff81000000          16M                               pmd
    current_kernel-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
    current_kernel-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
    current_kernel-0xffffffff81e11000-0xffffffff82000000        1980K     RW                 GLB NX pte
    current_kernel-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
    current_kernel-0xffffffff82600000-0xffffffff82c00000           6M     RW         PSE     GLB NX pmd
    current_kernel-0xffffffff82c00000-0xffffffff82e00000           2M     RW                 GLB NX pte
    current_kernel-0xffffffff82e00000-0xffffffff83200000           4M     RW         PSE     GLB NX pmd
    current_kernel-0xffffffff83200000-0xffffffffa0000000         462M                               pmd
    
     current_user:---[ High Kernel Mapping ]---
     current_user-0xffffffff80000000-0xffffffff81000000          16M                               pmd
     current_user-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
     current_user-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
     current_user-0xffffffff81e11000-0xffffffff82000000        1980K     RW                 GLB NX pte
     current_user-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
     current_user-0xffffffff82600000-0xffffffffa0000000         474M                               pmd
    
    After this patch:
    
    current_kernel:---[ High Kernel Mapping ]---
    current_kernel-0xffffffff80000000-0xffffffff81000000          16M                               pmd
    current_kernel-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
    current_kernel-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
    current_kernel-0xffffffff81e11000-0xffffffff82000000        1980K     RW                     NX pte
    current_kernel-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
    current_kernel-0xffffffff82600000-0xffffffff82c00000           6M     RW         PSE         NX pmd
    current_kernel-0xffffffff82c00000-0xffffffff82e00000           2M     RW                     NX pte
    current_kernel-0xffffffff82e00000-0xffffffff83200000           4M     RW         PSE         NX pmd
    current_kernel-0xffffffff83200000-0xffffffffa0000000         462M                               pmd
    
      current_user:---[ High Kernel Mapping ]---
      current_user-0xffffffff80000000-0xffffffff81000000          16M                               pmd
      current_user-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
      current_user-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
      current_user-0xffffffff81e11000-0xffffffff82000000        1980K     RW                     NX pte
      current_user-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
      current_user-0xffffffff82600000-0xffffffffa0000000         474M                               pmd
    
    Fixes: 0f561fce4d69 ("x86/pti: Enable global pages for shared areas")
    Reported-by: Hugh Dickins <hughd@google.com>
    Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: keescook@google.com
    Cc: aarcange@redhat.com
    Cc: jgross@suse.com
    Cc: jpoimboe@redhat.com
    Cc: gregkh@linuxfoundation.org
    Cc: peterz@infradead.org
    Cc: torvalds@linux-foundation.org
    Cc: bp@alien8.de
    Cc: luto@kernel.org
    Cc: ak@linux.intel.com
    Cc: Kees Cook <keescook@google.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Andi Kleen <ak@linux.intel.com>
    Link: https://lkml.kernel.org/r/20180802225825.A100C071@viggo.jf.intel.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit af1cd0e896512c1e4ab640622d0821d0825cd316
Author: Dou Liyang <douly.fnst@cn.fujitsu.com>
Date:   Mon Jul 30 15:59:47 2018 +0800

    x86/platform/UV: Mark memblock related init code and data correctly
    
    commit 24cfd8ca1d28331b9dad3b88d1958c976b2cfab6 upstream.
    
    parse_mem_block_size() and mem_block_size are only used during init. Mark
    them accordingly.
    
    Fixes: d7609f4210cb ("x86/platform/UV: Add kernel parameter to set memory block size")
    Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: hpa@zytor.com
    Cc: Mike Travis <mike.travis@hpe.com>
    Cc: Andrew Banman <andrew.banman@hpe.com>
    Link: https://lkml.kernel.org/r/20180730075947.23023-1-douly.fnst@cn.fujitsu.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 921eb6977ca888c25ee852a49ec8c66d289bb703
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Wed Aug 15 13:22:27 2018 -0700

    x86: i8259: Add missing include file
    
    commit 0a957467c5fd46142bc9c52758ffc552d4c5e2f7 upstream.
    
    i8259.h uses inb/outb and thus needs to include asm/io.h to avoid the
    following build error, as seen with x86_64:defconfig and CONFIG_SMP=n.
    
      In file included from drivers/rtc/rtc-cmos.c:45:0:
      arch/x86/include/asm/i8259.h: In function 'inb_pic':
      arch/x86/include/asm/i8259.h:32:24: error:
            implicit declaration of function 'inb'
    
      arch/x86/include/asm/i8259.h: In function 'outb_pic':
      arch/x86/include/asm/i8259.h:45:2: error:
            implicit declaration of function 'outb'
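
    The fix adds the missing include at the top of
    arch/x86/include/asm/i8259.h:

        #include <asm/io.h>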
    
    Reported-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
    Suggested-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
    Fixes: 447ae3166702 ("x86: Don't include linux/irq.h from asm/hardirq.h")
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0fee79788745f1a60040bd2c0bb3da67e3cafe69
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Wed Aug 15 08:38:33 2018 -0700

    x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
    
    commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
    
    allmodconfig+CONFIG_KVM_INTEL=n results in the following build error.
    
      ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
    
    Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
    Reported-by: Meelis Roos <mroos@linux.ee>
    Cc: Meelis Roos <mroos@linux.ee>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>