commit a413cc78bc64edec0a05f5c812f25c152988d6da
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   Mon Mar 19 18:59:18 2018 +0000

    Linux 3.16.56

commit c3de339f17f9097667dd1ecf908a1014848487bb
Author: Arnd Bergmann <arnd@arndb.de>
Date:   Thu Feb 15 16:16:57 2018 +0100

    x86: fix build warning with 32-bit PAE
    
    I ran into a 4.9 build warning in randconfig testing, starting with the
    KAISER patches:
    
    arch/x86/kernel/ldt.c: In function 'alloc_ldt_struct':
    arch/x86/include/asm/pgtable_types.h:208:24: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
     #define __PAGE_KERNEL  (__PAGE_KERNEL_EXEC | _PAGE_NX)
                            ^
    arch/x86/kernel/ldt.c:81:6: note: in expansion of macro '__PAGE_KERNEL'
          __PAGE_KERNEL);
          ^~~~~~~~~~~~~
    
    I originally ran into this last year when the patches were part of linux-next,
    and tried to work around it by using the proper 'pteval_t' types consistently,
    but that caused additional problems.
    
    This takes a much simpler approach, and makes the argument type of the dummy
    helper always 64-bit, which is wide enough for any page table layout and
    won't hurt since this call is just an empty stub anyway.
    
    Fixes: 8f0baadf2bea ("kaiser: merged update")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Kees Cook <keescook@chromium.org>
    Acked-by: Hugh Dickins <hughd@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
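
A minimal sketch of the shape of the fix, for readers who do not have the KAISER series at hand (the stub name kaiser_add_mapping and its return type are taken from that series and are illustrative here, not quoted from this patch):

    /* The !CONFIG_KAISER stub takes a 64-bit flags argument so that
     * __PAGE_KERNEL, which includes _PAGE_NX (bit 63 under 32-bit PAE),
     * is never truncated to a 32-bit unsigned long. */
    #ifndef CONFIG_KAISER
    static inline int kaiser_add_mapping(unsigned long addr,
                                         unsigned long size, u64 flags)
    {
            return 0;       /* nothing to do when KAISER is disabled */
    }
    #endif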

commit 98ce99aa23b43c3dc736cd0354537fca029d69cb
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:49 2018 -0800

    x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec
    
    commit 304ec1b050310548db33063e567123fae8fd0301 upstream.
    
    Quoting Linus:
    
        I do think that it would be a good idea to very expressly document
        the fact that it's not that the user access itself is unsafe. I do
        agree that things like "get_user()" want to be protected, but not
        because of any direct bugs or problems with get_user() and friends,
        but simply because get_user() is an excellent source of a pointer
        that is obviously controlled from a potentially attacking user
        space. So it's a prime candidate for then finding _subsequent_
        accesses that can then be used to perturb the cache.
    
    __uaccess_begin_nospec() covers __get_user() and copy_from_iter() where the
    limit check is far away from the user pointer de-reference. In those cases
    a barrier_nospec() prevents speculation with a potential pointer to
    privileged memory. uaccess_try_nospec covers get_user_try.
    
    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Suggested-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: Kees Cook <keescook@chromium.org>
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727416953.33451.10508284228526170604.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16:
     - Convert several more functions to use __uaccess_begin_nospec(), that
       are just wrappers in mainline
     - There's no 'case 8' in __copy_to_user_inatomic()
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
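
The conversion itself is mechanical; the __get_user() paths end up following roughly the pattern below (a sketch only; __get_user_size and the error variables stand in for the real macro internals):

    /* The access_ok()/limit check happened earlier (or is skipped for
     * __get_user), so pair the access with the speculation-barrier
     * variant instead of plain __uaccess_begin(). */
    __uaccess_begin_nospec();
    __get_user_size(x, ptr, size, ret, errret);
    __uaccess_end();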

commit 8f483696e8edb46461472afa27a764ac6a11929e
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:44 2018 -0800

    x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}
    
    commit b5c4ae4f35325d520b230bab6eb3310613b72ac1 upstream.
    
    In preparation for converting some __uaccess_begin() instances to
    __uaccess_begin_nospec(), make sure all 'from user' uaccess paths are
    using the _begin(), _end() helpers rather than open-coded stac() and
    clac().
    
    No functional changes.
    
    Suggested-by: Ingo Molnar <mingo@redhat.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727416438.33451.17309465232057176966.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16:
     - Convert several more functions to use __uaccess_begin_nospec(), that
       are just wrappers in mainline
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
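
Concretely, the change has this shape everywhere (the callee name is illustrative):

    /* before: open-coded SMAP toggling */
    stac();
    ret = __copy_user_ll(to, from, n);      /* illustrative callee */
    clac();

    /* after: the helpers, so later patches can swap in the
     * speculation-barrier variant in one place */
    __uaccess_begin();
    ret = __copy_user_ll(to, from, n);
    __uaccess_end();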

commit c81e7a89b4844ebcc88c81f40e4b2b2516e2793d
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:39 2018 -0800

    x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec
    
    commit b3bbfb3fb5d25776b8e3f361d2eedaabb0b496cd upstream.
    
    For __get_user() paths, do not allow the kernel to speculate on the value
    of a user controlled pointer. In addition to the 'stac' instruction for
    Supervisor Mode Access Protection (SMAP), a barrier_nospec() causes the
    access_ok() result to resolve in the pipeline before the CPU might take any
    speculative action on the pointer value. Given the cost of 'stac' the
    speculation barrier is placed after 'stac' to hopefully overlap the cost of
    disabling SMAP with the cost of flushing the instruction pipeline.
    
    Since __get_user is a major kernel interface that deals with user
    controlled pointers, the __uaccess_begin_nospec() mechanism will prevent
    speculative execution past an access_ok() permission check. While
    speculative execution past access_ok() is not enough to lead to a kernel
    memory leak, it is a necessary precondition.
    
    To be clear, __uaccess_begin_nospec() is addressing a class of potential
    problems near __get_user() usages.
    
    Note, that while the barrier_nospec() in __uaccess_begin_nospec() is used
    to protect __get_user(), pointer masking similar to array_index_nospec()
    will be used for get_user() since it incorporates a bounds check near the
    usage.
    
    uaccess_try_nospec provides the same mechanism for get_user_try.
    
    No functional changes.
    
    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Suggested-by: Andi Kleen <ak@linux.intel.com>
    Suggested-by: Ingo Molnar <mingo@redhat.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727415922.33451.5796614273104346583.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16: use current_thread_info()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
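
For context, the new helper is just the SMAP 'stac' followed by the speculation barrier; the mainline definition is essentially the following (sketch; the uaccess_try_nospec side additionally touches current_thread_info() in this 3.16 backport):

    #define __uaccess_begin_nospec()        \
    ({                                      \
            stac();                         \
            barrier_nospec();               \
    })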

commit 0a2d7d9b1e63dd28baf6c8e1416b64a33f89c900
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Tue Feb 23 14:58:52 2016 -0800

    x86: fix SMAP in 32-bit environments
    
    commit de9e478b9d49f3a0214310d921450cf5bb4a21e6 upstream.
    
    In commit 11f1a4b9755f ("x86: reorganize SMAP handling in user space
    accesses") I changed how the stac/clac instructions were generated
    around the user space accesses, which then made it possible to do
    batched accesses efficiently for user string copies etc.
    
    However, in doing so, I completely spaced out, and didn't even think
    about the 32-bit case.  And nobody really even seemed to notice, because
    SMAP doesn't even exist until modern Skylake processors, and you'd have
    to be crazy to run 32-bit kernels on a modern CPU.
    
    Which brings us to Andy Lutomirski.
    
    He actually tested the 32-bit kernel on new hardware, and noticed that
    it doesn't work.  My bad.  The trivial fix is to add the required
    uaccess begin/end markers around the raw accesses in <asm/uaccess_32.h>.
    
    I feel a bit bad about this patch, just because that header file really
    should be cleaned up to avoid all the duplicated code in it, and this
    commit just expands on the problem.  But this just fixes the bug without
    any bigger cleanup surgery.
    
    Reported-and-tested-by: Andy Lutomirski <luto@kernel.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [bwh: Backported to 3.16: There's no 'case 8' in __copy_to_user_inatomic()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit df18e8620dbde4fd95cfd73cc6f634e649ca1536
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Thu Dec 17 09:45:09 2015 -0800

    x86: reorganize SMAP handling in user space accesses
    
    commit 11f1a4b9755f5dbc3e822a96502ebe9b044b14d8 upstream.
    
    This reorganizes how we do the stac/clac instructions in the user access
    code.  Instead of adding the instructions directly to the same inline
    asm that does the actual user level access and exception handling, add
    them at a higher level.
    
    This is mainly preparation for the next step, where we will expose an
    interface to allow users to mark several accesses together as being user
    space accesses, but it does already clean up some code:
    
     - the inlined trivial cases of copy_in_user() now do stac/clac just
       once over the accesses: they used to do one pair around the user
       space read, and another pair around the write-back.
    
     - the {get,put}_user_ex() macros that are used with the catch/try
       handling don't do any stac/clac at all, because that happens in the
       try/catch surrounding them.
    
    Other than those two cleanups that happened naturally from the
    re-organization, this should not make any difference. Yet.
    
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 9ffb15cb898ad23326266aac346ac0c0813362ea
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Fri Feb 16 13:20:54 2018 -0800

    nospec: Include <asm/barrier.h> dependency
    
    commit eb6174f6d1be16b19cfa43dac296bfed003ce1a6 upstream.
    
    The nospec.h header expects the per-architecture header file
    <asm/barrier.h> to optionally define array_index_mask_nospec(). Include
    that dependency to prevent inadvertent fallback to the default
    array_index_mask_nospec() implementation.
    
    The default implementation may not provide a full mitigation
    on architectures that perform data value speculation.
    
    Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arch@vger.kernel.org
    Link: http://lkml.kernel.org/r/151881605404.17395.1341935530792574707.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 58bfafe0fd61f23334a00159867661c3e3d35bc9
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Fri Feb 16 13:20:42 2018 -0800

    nospec: Kill array_index_nospec_mask_check()
    
    commit 1d91c1d2c80cb70e2e553845e278b87a960c04da upstream.
    
    There are multiple problems with the dynamic sanity checking in
    array_index_nospec_mask_check():
    
    * It causes unnecessary overhead in the 32-bit case since integer sized
      @index values will no longer cause the check to be compiled away like
      in the 64-bit case.
    
    * In the 32-bit case it may trigger with user controllable input when
      the expectation is that it should only trigger during development of new
      kernel enabling.
    
    * The macro reuses the input parameter in multiple locations which is
      broken if someone passes an expression like 'index++' to
      array_index_nospec().
    
    Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arch@vger.kernel.org
    Link: http://lkml.kernel.org/r/151881604278.17395.6605847763178076520.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
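
The third point (the macro re-using its input) is easy to demonstrate in a stand-alone program; the macro names below are made up for the illustration and are not the kernel's:

    #include <stdio.h>

    /* BAD: evaluates 'idx' twice, like the removed sanity check did. */
    #define CLAMP_BAD(idx, size)    ((idx) < (size) ? (idx) : 0)

    /* OK: evaluates the argument exactly once. */
    #define CLAMP_OK(idx, size)                     \
            ({ typeof(idx) _i = (idx);              \
               _i < (size) ? _i : 0; })

    int main(void)
    {
            int i = 1, j = 1;
            int a = CLAMP_BAD(i++, 8);      /* 'i++' runs twice */
            int b = CLAMP_OK(j++, 8);       /* 'j++' runs once  */

            printf("bad: result %d, i ended up %d (expected 2)\n", a, i);
            printf("ok:  result %d, j ended up %d\n", b, j);
            return 0;
    }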

commit bac58dda1aef970651788b3eb6fcc67561314a1f
Author: Will Deacon <will.deacon@arm.com>
Date:   Mon Feb 5 14:16:06 2018 +0000

    nospec: Move array_index_nospec() parameter checking into separate macro
    
    commit 8fa80c503b484ddc1abbd10c7cb2ab81f3824a50 upstream.
    
    For architectures providing their own implementation of
    array_index_mask_nospec() in asm/barrier.h, attempting to use WARN_ONCE() to
    complain about out-of-range parameters using WARN_ON() results in a mess
    of mutually-dependent include files.
    
    Rather than unpick the dependencies, simply have the core code in nospec.h
    perform the checking for us.
    
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/r/1517840166-15399-1-git-send-email-will.deacon@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit bea12e2cac58e4cf5b72d5e435db3a96ba044001
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Wed Feb 14 10:14:17 2018 +0300

    x86/spectre: Fix an error message
    
    commit 9de29eac8d2189424d81c0d840cd0469aa3d41c8 upstream.
    
    If i == ARRAY_SIZE(mitigation_options) then we accidentally print
    garbage from one space beyond the end of the mitigation_options[] array.
    
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: KarimAllah Ahmed <karahmed@amazon.de>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: kernel-janitors@vger.kernel.org
    Fixes: 9005c6834c0f ("x86/spectre: Simplify spectre_v2 command line parsing")
    Link: http://lkml.kernel.org/r/20180214071416.GA26677@mwanda
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
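
The bug class is the usual "search loop runs off the end" pattern; a stand-alone illustration (not the kernel code, just the shape of the problem):

    #include <stdio.h>
    #include <string.h>

    #define ARRAY_SIZE(a)   (sizeof(a) / sizeof((a)[0]))

    static const char *const mitigation_options[] = { "off", "on", "auto" };

    int main(void)
    {
            const char *arg = "bogus";
            size_t i;

            for (i = 0; i < ARRAY_SIZE(mitigation_options); i++)
                    if (!strcmp(arg, mitigation_options[i]))
                            break;

            /* If nothing matched, i == ARRAY_SIZE(); printing
             * mitigation_options[i] at that point would read one element
             * past the end of the array, which is the bug being fixed.
             * So bail out before touching the array. */
            if (i == ARRAY_SIZE(mitigation_options)) {
                    printf("unknown option '%s', falling back to auto\n", arg);
                    return 0;
            }
            printf("selected '%s'\n", mitigation_options[i]);
            return 0;
    }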

commit 95e1ed3de28edff972f87098d3dfc31779bb8d7c
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Sat Jan 27 16:24:32 2018 +0000

    x86/cpufeatures: Clean up Spectre v2 related CPUID flags
    
    commit 2961298efe1ea1b6fc0d7ee8b76018fa6c0bcef2 upstream.
    
    We want to expose the hardware features simply in /proc/cpuinfo as "ibrs",
    "ibpb" and "stibp". Since AMD has separate CPUID bits for those, use them
    as the user-visible bits.
    
    When the Intel SPEC_CTRL bit is set which indicates both IBRS and IBPB
    capability, set those (AMD) bits accordingly. Likewise if the Intel STIBP
    bit is set, set the AMD STIBP that's used for the generic hardware
    capability.
    
    Hide the rest from /proc/cpuinfo by putting "" in the comments, including
    RETPOLINE and RETPOLINE_AMD, which shouldn't be visible there. There are
    patches to make the sysfs vulnerabilities information non-readable by
    non-root, and the same should apply to all information about which
    mitigations are actually in use. Those *shouldn't* appear in /proc/cpuinfo.
    
    The feature bit for whether IBPB is actually used, which is needed for
    ALTERNATIVEs, is renamed to X86_FEATURE_USE_IBPB.
    
    Originally-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: ak@linux.intel.com
    Cc: dave.hansen@intel.com
    Cc: karahmed@amazon.de
    Cc: arjan@linux.intel.com
    Cc: torvalds@linux-foundation.org
    Cc: peterz@infradead.org
    Cc: bp@alien8.de
    Cc: pbonzini@redhat.com
    Cc: tim.c.chen@linux.intel.com
    Cc: gregkh@linux-foundation.org
    Link: https://lkml.kernel.org/r/1517070274-12128-2-git-send-email-dwmw@amazon.co.uk
    [bwh: For 3.16, just apply the part that hides fake CPU feature bits]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit bb4875a7f2f2c9339d4041b5662bc029d7c42f28
Author: Darren Kenny <darren.kenny@oracle.com>
Date:   Fri Feb 2 19:12:20 2018 +0000

    x86/speculation: Fix typo IBRS_ATT, which should be IBRS_ALL
    
    commit af189c95a371b59f493dbe0f50c0a09724868881 upstream.
    
    Fixes: 117cc7a908c83 ("x86/retpoline: Fill return stack buffer on vmexit")
    Signed-off-by: Darren Kenny <darren.kenny@oracle.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Link: https://lkml.kernel.org/r/20180202191220.blvgkgutojecxr3b@starbug-vm.ie.oracle.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 876b732e36a6d19faceeab476937d22615e3d677
Author: KarimAllah Ahmed <karahmed@amazon.de>
Date:   Thu Feb 1 11:27:21 2018 +0000

    x86/spectre: Simplify spectre_v2 command line parsing
    
    commit 9005c6834c0ffdfe46afa76656bd9276cca864f6 upstream.
    
    [dwmw2: Use ARRAY_SIZE]
    
    Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: peterz@infradead.org
    Cc: bp@alien8.de
    Link: https://lkml.kernel.org/r/1517484441-1420-3-git-send-email-dwmw@amazon.co.uk
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit f63a8c783f0de68418551ab850f65c1eb54b9b03
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Feb 1 11:27:20 2018 +0000

    x86/retpoline: Avoid retpolines for built-in __init functions
    
    commit 66f793099a636862a71c59d4a6ba91387b155e0c upstream.
    
    There's no point in building init code with retpolines, since it runs before
    any potentially hostile userspace does. And before the retpoline is actually
    ALTERNATIVEd into place, for much of it.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: karahmed@amazon.de
    Cc: peterz@infradead.org
    Cc: bp@alien8.de
    Link: https://lkml.kernel.org/r/1517484441-1420-2-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
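
Mechanically this relies on GCC's per-function indirect_branch attribute; a hedged sketch of the idea follows (the guard and macro names are illustrative, not necessarily those used by the patch):

    /* Keep plain indirect branches (no retpoline thunks) in functions
     * carrying this attribute; __init code runs before any untrusted
     * userspace, so the thunks buy nothing there. */
    #if defined(RETPOLINE) && !defined(MODULE)
    #define __noretpoline __attribute__((indirect_branch("keep")))
    #else
    #define __noretpoline
    #endif

    #define __init  __section(.init.text) __cold notrace __noretpoline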

commit c2453bdbedac62eaf0a94dc007b778e79c6878e7
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Wed Jan 31 17:47:03 2018 -0800

    x86/kvm: Update spectre-v1 mitigation
    
    commit 085331dfc6bbe3501fb936e657331ca943827600 upstream.
    
    Commit 75f139aaf896 "KVM: x86: Add memory barrier on vmcs field lookup"
    added a raw 'asm("lfence");' to prevent a bounds check bypass of
    'vmcs_field_to_offset_table'.
    
    The lfence can be avoided in this path by using the array_index_nospec()
    helper designed for these types of fixes.
    
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Andrew Honig <ahonig@google.com>
    Cc: kvm@vger.kernel.org
    Cc: Jim Mattson <jmattson@google.com>
    Link: https://lkml.kernel.org/r/151744959670.6342.3001723920950249067.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16:
     - Replace max_vmcs_field with the local size variable
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit cb4ea8a760444750f9db5e87159d44da9dc786a3
Author: Josh Poimboeuf <jpoimboe@redhat.com>
Date:   Tue Jan 30 22:13:33 2018 -0600

    x86/paravirt: Remove 'noreplace-paravirt' cmdline option
    
    commit 12c69f1e94c89d40696e83804dd2f0965b5250cd upstream.
    
    The 'noreplace-paravirt' option disables paravirt patching, leaving the
    original pv indirect calls in place.
    
    That's highly incompatible with retpolines, unless we want to uglify
    paravirt even further and convert the paravirt calls to retpolines.
    
    As far as I can tell, the option doesn't seem to be useful for much
    other than introducing surprising corner cases and making the kernel
    vulnerable to Spectre v2.  It was probably a debug option from the early
    paravirt days.  So just remove it.
    
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Ashok Raj <ashok.raj@intel.com>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: Jun Nakajima <jun.nakajima@intel.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Asit Mallick <asit.k.mallick@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jason Baron <jbaron@akamai.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Alok Kataria <akataria@vmware.com>
    Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Link: https://lkml.kernel.org/r/20180131041333.2x6blhxirc2kclrq@treble
    [bwh: Backported to 3.16: adjust filename]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit ead76f4b6ca1e46ae7c8bed05a5b114d05b84de7
Author: Colin Ian King <colin.king@canonical.com>
Date:   Tue Jan 30 19:32:18 2018 +0000

    x86/spectre: Fix spelling mistake: "vunerable"-> "vulnerable"
    
    commit e698dcdfcda41efd0984de539767b4cddd235f1e upstream.
    
    Trivial fix to spelling mistake in pr_err error message text.
    
    Signed-off-by: Colin Ian King <colin.king@canonical.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: kernel-janitors@vger.kernel.org
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Link: https://lkml.kernel.org/r/20180130193218.9271-1-colin.king@canonical.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit f5e5a9d5013ba3902045cad1e0809b535fa49ffd
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:03:21 2018 -0800

    x86/spectre: Report get_user mitigation for spectre_v1
    
    commit edfbae53dab8348fca778531be9f4855d2ca0360 upstream.
    
    Reflect the presence of get_user(), __get_user(), and 'syscall' protections
    in sysfs. The expectation is that new and better tooling will allow the
    kernel to grow more usages of array_index_nospec(), for now, only claim
    mitigation for __user pointer de-references.
    
    Reported-by: Jiri Slaby <jslaby@suse.cz>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727420158.33451.11658324346540434635.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit c270bd1cc653058717b1234d65f9d262315ea2ed
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:03:15 2018 -0800

    nl80211: Sanitize array index in parse_txq_params
    
    commit 259d8c1e984318497c84eef547bbb6b1d9f4eb05 upstream.
    
    Wireless drivers rely on parse_txq_params to validate that txq_params->ac
    is less than NL80211_NUM_ACS by the time the low-level driver's ->conf_tx()
    handler is called. Use a new helper, array_index_nospec(), to sanitize
    txq_params->ac with respect to speculation. I.e. ensure that any
    speculation into ->conf_tx() handlers is done with a value of
    txq_params->ac that is within the bounds of [0, NL80211_NUM_ACS).
    
    Reported-by: Christian Lamparter <chunkeey@gmail.com>
    Reported-by: Elena Reshetova <elena.reshetova@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Johannes Berg <johannes@sipsolutions.net>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: linux-wireless@vger.kernel.org
    Cc: torvalds@linux-foundation.org
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727419584.33451.7700736761686184303.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
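
The pattern is the architectural bounds check followed immediately by array_index_nospec(), so that even a mis-speculated path sees an in-range index; an abridged sketch of the change:

    if (txq_params->ac >= NL80211_NUM_ACS)
            return -EINVAL;
    /* Clamp under speculation too, before drivers use the value to
     * index their per-AC arrays in ->conf_tx(). */
    txq_params->ac = array_index_nospec(txq_params->ac, NL80211_NUM_ACS);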

commit 3feddbaccb94069b3b7bbeb3476e76e7907f2c9e
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:03:05 2018 -0800

    vfs, fdtable: Prevent bounds-check bypass via speculative execution
    
    commit 56c30ba7b348b90484969054d561f711ba196507 upstream.
    
    'fd' is a user controlled value that is used as a data dependency to
    read from the 'fdt->fd' array.  In order to avoid potential leaks of
    kernel memory values, block speculative execution of the instruction
    stream that could issue reads based on an invalid 'file *' returned from
    __fcheck_files.
    
    Co-developed-by: Elena Reshetova <elena.reshetova@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727418500.33451.17392199002892248656.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
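
The shape of the change in __fcheck_files() is the same check-then-clamp pattern (abridged sketch, not a verbatim quote of the 3.16 backport):

    static inline struct file *__fcheck_files(struct files_struct *files,
                                              unsigned int fd)
    {
            struct fdtable *fdt = rcu_dereference_raw(files->fdt);

            if (fd < fdt->max_fds) {
                    /* keep any speculative read inside [0, max_fds) */
                    fd = array_index_nospec(fd, fdt->max_fds);
                    return rcu_dereference_raw(fdt->fd[fd]);
            }
            return NULL;
    }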

commit 8c34712ac217fff16d0d225bd48d68af40c33038
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   Fri Mar 9 00:11:14 2018 +0000

    x86/syscall: Sanitize syscall table de-references under speculation
    
    commit 2fbd7af5af8665d18bcefae3e9700be07e22b681 upstream.
    
    The upstream version of this, touching C code, was written by Dan Williams,
    with the following description:
    
    > The syscall table base is a user controlled function pointer in kernel
    > space. Use array_index_nospec() to prevent any out of bounds speculation.
    >
    > While retpoline prevents speculating into a userspace directed target it
    > does not stop the pointer de-reference, the concern is leaking memory
    > relative to the syscall table base, by observing instruction cache
    > behavior.
    
    The x86_64 assembly version for 4.4 was written by Jiri Slaby, with
    the following description:
    
    > In 4.4.118, we have commit c8961332d6da (x86/syscall: Sanitize syscall
    > table de-references under speculation), which is a backport of upstream
    > commit 2fbd7af5af86. But it fixed only the C part of the upstream patch
    > -- the IA32 sysentry. So it completely omitted the assembly part -- the
    > 64bit sysentry.
    >
    > Fix that in this patch by explicit array_index_mask_nospec written in
    > assembly. The same was used in lib/getuser.S.
    >
    > However, to have "sbb" working properly, we have to switch from "cmp"
    > against (NR_syscalls-1) to (NR_syscalls), otherwise the last syscall
    > number would be "and"ed by 0. It is because the original "ja" relies on
    > "CF" or "ZF", but we rely only on "CF" in "sbb". That means: switch to
    > "jae" conditional jump too.
    >
    > Final note: use rcx for mask as this is exactly what is overwritten by
    > the 4th syscall argument (r10) right after.
    
    In 3.16 the x86_32 syscall table lookup is also written in assembly.
    So I've taken Jiri's version and added similar masking in entry_32.S,
    using edx as the temporary.  edx is clobbered by SAVE_REGS and seems
    to be free at this point.
    
    The ia32 compat syscall table lookup on x86_64 is also written in
    assembly, so I've added the same masking in ia32entry.S, using r8 as
    the temporary since it is always clobbered by the following
    instructions.
    
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Jiri Slaby <jslaby@suse.cz>
    Cc: Jan Beulich <JBeulich@suse.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: alan@linux.intel.com
    Cc: Jinpu Wang <jinpu.wang@profitbricks.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
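
Putting Jiri's description together, the 64-bit dispatch ends up looking roughly like this (a sketch assembled from the text above, not a verbatim copy of the 3.16 patch):

            cmpq    $NR_syscalls, %rax
            jae     1f                      /* out of range: skip the call      */
            sbb     %rcx, %rcx              /* rcx = -1 if rax < NR_syscalls,
                                               0 otherwise (no branch needed)   */
            and     %rcx, %rax              /* index stays masked even if the
                                               'jae' above was mispredicted     */
            movq    %r10, %rcx              /* 4th argument clobbers rcx anyway */
            call    *sys_call_table(, %rax, 8)
    1: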

commit 49731da00f0f2160cdcaeaf2daa711f6b7061663
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:54 2018 -0800

    x86/get_user: Use pointer masking to limit speculation
    
    commit c7f631cb07e7da06ac1d231ca178452339e32a94 upstream.
    
    Quoting Linus:
    
        I do think that it would be a good idea to very expressly document
        the fact that it's not that the user access itself is unsafe. I do
        agree that things like "get_user()" want to be protected, but not
        because of any direct bugs or problems with get_user() and friends,
        but simply because get_user() is an excellent source of a pointer
        that is obviously controlled from a potentially attacking user
        space. So it's a prime candidate for then finding _subsequent_
        accesses that can then be used to perturb the cache.
    
    Unlike the __get_user() case get_user() includes the address limit check
    near the pointer de-reference. With that locality the speculation can be
    mitigated with pointer narrowing rather than a barrier, i.e.
    array_index_nospec(). Where the narrowing is performed by:
    
            cmp %limit, %ptr
            sbb %mask, %mask
            and %mask, %ptr
    
    With respect to speculation the value of %ptr is either less than %limit
    or NULL.
    
    Co-developed-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: Kees Cook <keescook@chromium.org>
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727417469.33451.11804043010080838495.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
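
The effect of that three-instruction sequence can be verified from userspace; a small stand-alone demonstration with GCC inline asm on x86-64 (illustrative only, not the getuser.S code):

    #include <stdio.h>

    /* Returns ptr when ptr < limit, otherwise 0 -- with no conditional
     * branch, so the result holds even on a speculative path. */
    static unsigned long mask_ptr(unsigned long ptr, unsigned long limit)
    {
            unsigned long mask;

            asm ("cmp %2, %1\n\t"   /* sets CF iff ptr < limit */
                 "sbb %0, %0"       /* mask = CF ? ~0UL : 0    */
                 : "=r" (mask)
                 : "r" (ptr), "r" (limit)
                 : "cc");
            return ptr & mask;
    }

    int main(void)
    {
            unsigned long limit = 0x7ffffffff000UL;

            printf("%#lx\n", mask_ptr(0x1000UL, limit));             /* kept */
            printf("%#lx\n", mask_ptr(0xffff888000000000UL, limit)); /* -> 0 */
            return 0;
    }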

commit aaafddb9c55db1b3f015062f73bbe2c76c528cc2
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:33 2018 -0800

    x86: Introduce barrier_nospec
    
    commit b3d7ad85b80bbc404635dca80f5b129f6242bc7a upstream.
    
    Rename the open coded form of this instruction sequence from
    rdtsc_ordered() into a generic barrier primitive, barrier_nospec().
    
    One of the mitigations for Spectre variant1 vulnerabilities is to fence
    speculative execution after successfully validating a bounds check. I.e.
    force the result of a bounds check to resolve in the instruction pipeline
    to ensure speculative execution honors that result before potentially
    operating on out-of-bounds data.
    
    No functional changes.
    
    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Suggested-by: Andi Kleen <ak@linux.intel.com>
    Suggested-by: Ingo Molnar <mingo@redhat.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727415361.33451.9049453007262764675.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16: update rdtsc_barrier() instead of rdtsc_ordered()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
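
On x86 the new primitive is the same alternatives-patched fence that the RDTSC ordering code already used; roughly (a sketch of the mainline definition, which this 3.16 backport applies to rdtsc_barrier()):

    /* Patched at boot to MFENCE or LFENCE, whichever the CPU reports
     * as serializing, exactly like the old RDTSC barrier. */
    #define barrier_nospec()                                        \
            alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC,   \
                              "lfence", X86_FEATURE_LFENCE_RDTSC)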

commit 6e8bca7c8e4ed97ce71aa50113ab90525c21fbac
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:28 2018 -0800

    x86: Implement array_index_mask_nospec
    
    commit babdde2698d482b6c0de1eab4f697cf5856c5859 upstream.
    
    array_index_nospec() uses a mask to sanitize user controllable array
    indexes, i.e. generate a 0 mask if 'index' >= 'size', and a ~0 mask
    otherwise. The default array_index_mask_nospec() handles the
    carry-bit from the (index - size) result in software.
    
    The x86 array_index_mask_nospec() does the same, but the carry-bit is
    handled in the processor CF flag without conditional instructions in the
    control flow.
    
    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: gregkh@linuxfoundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727414808.33451.1873237130672785331.stgit@dwillia2-desk3.amr.corp.intel.com
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
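
The x86 helper described here reduces to the cmp/sbb idiom used elsewhere in this series; a sketch close to the upstream definition:

    static inline unsigned long array_index_mask_nospec(unsigned long index,
                                                        unsigned long size)
    {
            unsigned long mask;

            asm volatile ("cmp %1, %2; sbb %0, %0;"
                          : "=r" (mask)
                          : "g" (size), "r" (index)
                          : "cc");
            return mask;    /* ~0UL if index < size, 0 otherwise */
    }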

commit 0510dfd3cc0f01fbea0857736afc2e06c8c59014
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Jan 29 17:02:22 2018 -0800

    array_index_nospec: Sanitize speculative array de-references
    
    commit f3804203306e098dae9ca51540fcd5eb700d7f40 upstream.
    
    array_index_nospec() is proposed as a generic mechanism to mitigate
    against Spectre-variant-1 attacks, i.e. an attack that bypasses boundary
    checks via speculative execution. The array_index_nospec()
    implementation is expected to be safe for current generation CPUs across
    multiple architectures (ARM, x86).
    
    Based on an original implementation by Linus Torvalds, tweaked to remove
    speculative flows by Alexei Starovoitov, and tweaked again by Linus to
    introduce an x86 assembly implementation for the mask generation.
    
    Co-developed-by: Linus Torvalds <torvalds@linux-foundation.org>
    Co-developed-by: Alexei Starovoitov <ast@kernel.org>
    Suggested-by: Cyril Novikov <cnovikov@lynx.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: gregkh@linuxfoundation.org
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727414229.33451.18411580953862676575.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
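
A stand-alone illustration of the portable-C mask and its use, simplified from the kernel macro (the type handling and sanity checks are dropped):

    #include <stdio.h>

    #define BITS_PER_LONG   (8 * sizeof(long))

    /* ~0UL when index < size, 0 otherwise, computed without a branch. */
    static unsigned long index_mask(unsigned long index, unsigned long size)
    {
            return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
    }

    /* Simplified array_index_nospec(): clamps to 0 on any path, even a
     * mis-speculated one, where index >= size. */
    #define array_index_nospec(index, size)                 \
            ({ unsigned long _i = (index), _s = (size);     \
               _i & index_mask(_i, _s); })

    int main(void)
    {
            unsigned long table[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
            unsigned long idx;

            for (idx = 6; idx < 10; idx++) {
                    unsigned long safe = array_index_nospec(idx, 8);
                    /* In real code the architectural bounds check rejects
                     * idx >= 8; the clamp only matters under speculation. */
                    printf("idx=%lu -> safe=%lu, table[safe]=%lu\n",
                           idx, safe, table[safe]);
            }
            return 0;
    }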

commit daa41d548d5b088304957529529307105017f8f2
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Mon Jan 29 17:02:16 2018 -0800

    Documentation: Document array_index_nospec
    
    commit f84a56f73dddaeac1dba8045b007f742f61cd2da upstream.
    
    Document the rationale and usage of the new array_index_nospec() helper.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Cc: linux-arch@vger.kernel.org
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: gregkh@linuxfoundation.org
    Cc: kernel-hardening@lists.openwall.com
    Cc: torvalds@linux-foundation.org
    Cc: alan@linux.intel.com
    Link: https://lkml.kernel.org/r/151727413645.33451.15878817161436755393.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 7163500dfe60dbe67275021f276a4b584737387c
Author: Dou Liyang <douly.fnst@cn.fujitsu.com>
Date:   Tue Jan 30 14:13:50 2018 +0800

    x86/spectre: Check CONFIG_RETPOLINE in command line parser
    
    commit 9471eee9186a46893726e22ebb54cade3f9bc043 upstream.
    
    The spectre_v2 option 'auto' does not check whether CONFIG_RETPOLINE is
    enabled. As a consequence it fails to emit the appropriate warning and sets
    feature flags which have no effect at all.
    
    Add the missing IS_ENABLED() check.
    
    Fixes: da285121560e ("x86/spectre: Add boot time option to select Spectre v2 mitigation")
    Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: ak@linux.intel.com
    Cc: peterz@infradead.org
    Cc: Tomohiro" <misono.tomohiro@jp.fujitsu.com>
    Cc: dave.hansen@intel.com
    Cc: bp@alien8.de
    Cc: arjan@linux.intel.com
    Cc: dwmw@amazon.co.uk
    Link: https://lkml.kernel.org/r/f5892721-7528-3647-08fb-f8d10e65ad87@cn.fujitsu.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit b6912947161a3766d1542a4cc4d1e1ac105e09cc
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Sat Jan 27 15:45:14 2018 +0100

    x86/cpu/bugs: Make retpoline module warning conditional
    
    commit e383095c7fe8d218e00ec0f83e4b95ed4e627b02 upstream.
    
    If sysfs is disabled and RETPOLINE not defined:
    
    arch/x86/kernel/cpu/bugs.c:97:13: warning: ‘spectre_v2_bad_module’ defined but not used
    [-Wunused-variable]
     static bool spectre_v2_bad_module;
    
    Hide it.
    
    Fixes: caf7501a1b4e ("module/retpoline: Warn about missing retpoline in module")
    Reported-by: Borislav Petkov <bp@alien8.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 676520e3ec3a190e5b78c643e740922ab65c4fd7
Author: Borislav Petkov <bp@suse.de>
Date:   Fri Jan 26 13:11:39 2018 +0100

    x86/bugs: Drop one "mitigation" from dmesg
    
    commit 55fa19d3e51f33d9cd4056d25836d93abf9438db upstream.
    
    Make
    
    [    0.031118] Spectre V2 mitigation: Mitigation: Full generic retpoline
    
    into
    
    [    0.031118] Spectre V2: Mitigation: Full generic retpoline
    
    to reduce the mitigation mitigations strings.
    
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: riel@redhat.com
    Cc: ak@linux.intel.com
    Cc: peterz@infradead.org
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: jikos@kernel.org
    Cc: luto@amacapital.net
    Cc: dave.hansen@intel.com
    Cc: torvalds@linux-foundation.org
    Cc: keescook@google.com
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: tim.c.chen@linux.intel.com
    Cc: pjt@google.com
    Link: https://lkml.kernel.org/r/20180126121139.31959-5-bp@alien8.de
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 416020e03245a1a60d5de2277149571e8c3d25c4
Author: Borislav Petkov <bp@suse.de>
Date:   Fri Jan 26 13:11:37 2018 +0100

    x86/nospec: Fix header guards names
    
    commit 7a32fc51ca938e67974cbb9db31e1a43f98345a9 upstream.
    
    ... to adhere to the _ASM_X86_ naming scheme.
    
    No functional change.
    
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: riel@redhat.com
    Cc: ak@linux.intel.com
    Cc: peterz@infradead.org
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: jikos@kernel.org
    Cc: luto@amacapital.net
    Cc: dave.hansen@intel.com
    Cc: torvalds@linux-foundation.org
    Cc: keescook@google.com
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: tim.c.chen@linux.intel.com
    Cc: gregkh@linux-foundation.org
    Cc: pjt@google.com
    Link: https://lkml.kernel.org/r/20180126121139.31959-3-bp@alien8.de
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 5ebf8d581c41a7ffc13225b6dbfdd89245f565b4
Author: Andi Kleen <ak@linux.intel.com>
Date:   Thu Jan 25 15:50:28 2018 -0800

    module/retpoline: Warn about missing retpoline in module
    
    commit caf7501a1b4ec964190f31f9c3f163de252273b8 upstream.
    
    There's a risk that a kernel which has full retpoline mitigations becomes
    vulnerable when a module gets loaded that hasn't been compiled with the
    right compiler or the right option.
    
    To enable detection of that mismatch at module load time, add a module info
    string "retpoline" at build time when the module was compiled with
    retpoline support. This only covers compiled C source; assembler source
    and prebuilt object files are not checked.
    
    If a retpoline enabled kernel detects a non retpoline protected module at
    load time, print a warning and report it in the sysfs vulnerability file.
    
    [ tglx: Massaged changelog ]
    
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: gregkh@linuxfoundation.org
    Cc: torvalds@linux-foundation.org
    Cc: jeyu@kernel.org
    Cc: arjan@linux.intel.com
    Link: https://lkml.kernel.org/r/20180125235028.31211-1-andi@firstfloor.org
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
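
Mechanically the tag is a modinfo string emitted whenever the retpoline compiler flag is in effect, plus a check at module load time; approximately (a sketch; names and the guard macro may differ slightly in this backport):

    /* include/linux/module.h: every module built with a retpoline-capable
     * compiler advertises it in its modinfo section. */
    #ifdef RETPOLINE
    MODULE_INFO(retpoline, "Y");
    #endif

    /* kernel/module.c: warn when an untagged module is loaded into a
     * retpoline-mitigated kernel. */
    static void check_modinfo_retpoline(struct module *mod, struct load_info *info)
    {
            if (retpoline_module_ok(get_modinfo(info, "retpoline")))
                    return;

            pr_warn("%s: loading module not compiled with retpoline compiler.\n",
                    mod->name);
    }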

commit 4b7e6a0ee22df9f46797a7b562825c87f13e08c7
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Thu Jan 25 10:58:14 2018 +0100

    KVM: VMX: Make indirect call speculation safe
    
    commit c940a3fb1e2e9b7d03228ab28f375fb5a47ff699 upstream.
    
    Replace indirect call with CALL_NOSPEC.
    
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Ashok Raj <ashok.raj@intel.com>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: Jun Nakajima <jun.nakajima@intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: rga@amazon.de
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Asit Mallick <asit.k.mallick@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jason Baron <jbaron@akamai.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Link: https://lkml.kernel.org/r/20180125095843.645776917@infradead.org
    [bwh: Backported to 3.16: also add ASM_CALL_CONSTRAINT]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit c635a741c8c53eee2b68835cdc7b785d4d4a23cb
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Thu Jan 25 10:58:13 2018 +0100

    KVM: x86: Make indirect calls in emulator speculation safe
    
    commit 1a29b5b7f347a1a9230c1e0af5b37e3e571588ab upstream.
    
    Replace the indirect calls with CALL_NOSPEC.
    
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Ashok Raj <ashok.raj@intel.com>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: Jun Nakajima <jun.nakajima@intel.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: rga@amazon.de
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Asit Mallick <asit.k.mallick@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jason Baron <jbaron@akamai.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Link: https://lkml.kernel.org/r/20180125095843.595615683@infradead.org
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit f7c0071623089656c155ab6eb3c7967389415284
Author: Waiman Long <longman@redhat.com>
Date:   Mon Jan 22 17:09:34 2018 -0500

    x86/retpoline: Remove the esp/rsp thunk
    
    commit 1df37383a8aeabb9b418698f0bcdffea01f4b1b2 upstream.
    
    It doesn't make sense to have an indirect call thunk with esp/rsp as
    retpoline code won't work correctly with the stack pointer register.
    Removing it will help compiler writers to catch errors in case such
    a thunk call is emitted incorrectly.
    
    Fixes: 76b043848fd2 ("x86/retpoline: Add initial retpoline support")
    Suggested-by: Jeff Law <law@redhat.com>
    Signed-off-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1516658974-27852-1-git-send-email-longman@redhat.com
    [bwh: Backported to 3.16: adjust filename]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 98fbe878ffa97973502c1a75b0b2130ae9091b97
Author: Gustavo A. R. Silva <garsilva@embeddedor.com>
Date:   Tue Feb 13 13:22:08 2018 -0600

    x86/cpu: Change type of x86_cache_size variable to unsigned int
    
    commit 24dbc6000f4b9b0ef5a9daecb161f1907733765a upstream.
    
    Currently, x86_cache_size is of type int, which makes no sense as we
    will never have a valid cache size equal to or less than 0. So instead of
    initializing this variable to -1, it can perfectly well be initialized to 0
    and used as an unsigned variable instead.
    
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Addresses-Coverity-ID: 1464429
    Link: http://lkml.kernel.org/r/20180213192208.GA26414@embeddedor.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 1f8ab11aba17e183e30e45f06227600f76617012
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Fri Jan 12 17:49:25 2018 +0000

    x86/retpoline: Fill RSB on context switch for affected CPUs
    
    commit c995efd5a740d9cbafbf58bde4973e8b50b4d761 upstream.
    
    On context switch from a shallow call stack to a deeper one, as the CPU
    does 'ret' up the deeper side it may encounter RSB entries (predictions for
    where the 'ret' goes to) which were populated in userspace.
    
    This is problematic if neither SMEP nor KPTI (the latter of which marks
    userspace pages as NX for the kernel) are active, as malicious code in
    userspace may then be executed speculatively.
    
    Overwrite the CPU's return prediction stack with calls which are predicted
    to return to an infinite loop, to "capture" speculation if this
    happens. This is required both for retpoline, and also in conjunction with
    IBRS for !SMEP && !KPTI.
    
    On Skylake+ the problem is slightly different, and an *underflow* of the
    RSB may cause errant branch predictions to occur. So there it's not so much
    overwrite, as *filling* the RSB to attempt to prevent it getting
    empty. This is only a partial solution for Skylake+ since there are many
    other conditions which may result in the RSB becoming empty. The full
    solution on Skylake+ is to use IBRS, which will prevent the problem even
    when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
    required on context switch.
    
    [ tglx: Added missing vendor check and slightly massaged comments and
            changelog ]
    
    [js] backport to 4.4 -- __switch_to_asm does not exist there, we
         have to patch the switch_to macros for both x86_32 and x86_64.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515779365-9032-1-git-send-email-dwmw@amazon.co.uk
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    [bwh: Backported to 3.16: use the first available feature number]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
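
The stuffing itself is a short unrolled loop of calls whose predicted return targets are harmless traps; one iteration of the idea, heavily simplified (this is not the kernel's __FILL_RETURN_BUFFER macro, just the shape of it):

            call    1f              /* pushes an RSB entry predicting a return
                                       to the trap right below                 */
    2:      pause                   /* speculation trap: a mispredicted 'ret'  */
            lfence                  /* lands here and spins harmlessly         */
            jmp     2b
    1:      /* architectural execution continues here; repeat enough times
               to cover the RSB depth, then adjust %rsp to drop the pushed
               return addresses */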

commit 824bab2adcf1ff3a220314ebe7d5c3f2d83aa00a
Author: Dave Hansen <dave@sr71.net>
Date:   Thu Jun 2 17:19:27 2016 -0700

    x86/cpu/intel: Introduce macros for Intel family numbers
    
    commit 970442c599b22ccd644ebfe94d1d303bf6f87c05 upstream.
    
    Problem:
    
    We have a boatload of open-coded family-6 model numbers.  Half of
    them have these model numbers in hex and the other half in
    decimal.  This makes grepping for them tons of fun, if you were
    to try.
    
    Solution:
    
    Consolidate all the magic numbers.  Put all the definitions in
    one header.
    
    The names here are closely derived from the comments describing
    the models from arch/x86/events/intel/core.c.  We could easily
    make them shorter by doing things like s/SANDYBRIDGE/SNB/, but
    they seemed fine even with the longer versions to me.
    
    Do not take any of these names too literally, like "DESKTOP"
    or "MOBILE".  These are all colloquial names and not precise
    descriptions of everywhere a given model will show up.
    
    Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Darren Hart <dvhart@infradead.org>
    Cc: Dave Hansen <dave@sr71.net>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Doug Thompson <dougthompson@xmission.com>
    Cc: Eduardo Valentin <edubezval@gmail.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
    Cc: Kan Liang <kan.liang@intel.com>
    Cc: Len Brown <lenb@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Cc: Rajneesh Bhardwaj <rajneesh.bhardwaj@intel.com>
    Cc: Souvik Kumar Chakravarty <souvik.k.chakravarty@intel.com>
    Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Ulf Hansson <ulf.hansson@linaro.org>
    Cc: Viresh Kumar <viresh.kumar@linaro.org>
    Cc: Vishwanath Somayaji <vishwanath.somayaji@intel.com>
    Cc: Zhang Rui <rui.zhang@intel.com>
    Cc: jacob.jun.pan@intel.com
    Cc: linux-acpi@vger.kernel.org
    Cc: linux-edac@vger.kernel.org
    Cc: linux-mmc@vger.kernel.org
    Cc: linux-pm@vger.kernel.org
    Cc: platform-driver-x86@vger.kernel.org
    Link: http://lkml.kernel.org/r/20160603001927.F2A7D828@viggo.jf.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
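
For reference, the new header (arch/x86/include/asm/intel-family.h) is just a block of #defines; a few representative entries as they appear in mainline, listed here for illustration:

    #define INTEL_FAM6_SANDYBRIDGE          0x2A
    #define INTEL_FAM6_IVYBRIDGE            0x3A
    #define INTEL_FAM6_HASWELL_CORE         0x3C
    #define INTEL_FAM6_BROADWELL_CORE       0x3D
    #define INTEL_FAM6_SKYLAKE_MOBILE       0x4E
    #define INTEL_FAM6_SKYLAKE_DESKTOP      0x5E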

commit 9bc82ae19eb4a46dc5792e4b71cdfdfebfa2bbed
Author: Andi Kleen <ak@linux.intel.com>
Date:   Wed Jan 17 14:53:28 2018 -0800

    x86/retpoline: Optimize inline assembler for vmexit_fill_RSB
    
    commit 3f7d875566d8e79c5e0b2c9a413e91b2c29e0854 upstream.
    
    The generated assembler for the C fill RSB inline asm operations has
    several issues:
    
    - The C code sets up the loop register, which is then immediately
      overwritten in __FILL_RETURN_BUFFER with the same value again.
    
    - The C code also passes in the iteration count in another register, which
      is not used at all.
    
    Remove these two unnecessary operations. Just rely on the single constant
    passed to the macro for the iterations.
    
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: dave.hansen@intel.com
    Cc: gregkh@linuxfoundation.org
    Cc: torvalds@linux-foundation.org
    Cc: arjan@linux.intel.com
    Link: https://lkml.kernel.org/r/20180117225328.15414-1-andi@firstfloor.org
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 5938f6b9d292f224eb57db9a110ad9bad629af23
Author: zhenwei.pi <zhenwei.pi@youruncloud.com>
Date:   Thu Jan 18 09:04:52 2018 +0800

    x86/pti: Document fix wrong index
    
    commit 98f0fceec7f84d80bc053e49e596088573086421 upstream.
    
    In section <2. Runtime Cost>, fix wrong index.
    
    Signed-off-by: zhenwei.pi <zhenwei.pi@youruncloud.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: dave.hansen@linux.intel.com
    Link: https://lkml.kernel.org/r/1516237492-27739-1-git-send-email-zhenwei.pi@youruncloud.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 073bc0d7414aa0f9982c7d569391b856827cf789
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Fri Jan 19 01:15:20 2018 +0900

    kprobes/x86: Disable optimizing on the function jumps to indirect thunk
    
    commit c86a32c09f8ced67971a2310e3b0dda4d1749007 upstream.
    
    Since indirect jump instructions will be replaced by jumps
    to __x86_indirect_thunk_*, those jmp instructions must be
    treated as indirect jumps. Since optprobe prohibits
    optimizing probes in functions that use an indirect jump,
    it also needs to find functions that jump to
    __x86_indirect_thunk_* and disable optimization for them.
    
    Add a check that the jump target address is between the
    __indirect_thunk_start/end when optimizing kprobe.
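
    As a rough sketch of the added check (hedged; the helper name here is
    made up, the markers are the ones this series introduces):

      extern const char __indirect_thunk_start[];
      extern const char __indirect_thunk_end[];

      /* treat a branch whose target lands inside the retpoline thunks as
       * an indirect jump, so the containing function is never optprobed */
      static bool insn_jumps_into_thunks(unsigned long target)
      {
              return target >= (unsigned long)__indirect_thunk_start &&
                     target <  (unsigned long)__indirect_thunk_end;
      }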
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Link: https://lkml.kernel.org/r/151629212062.10241.6991266100233002273.stgit@devbox
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 818e15da4741f65d4deab3302027e3d30df77b46
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Fri Jan 19 01:14:51 2018 +0900

    kprobes/x86: Blacklist indirect thunk functions for kprobes
    
    commit c1804a236894ecc942da7dc6c5abe209e56cba93 upstream.
    
    Mark the __x86_indirect_thunk_* functions as blacklisted for kprobes,
    because they can be called from anywhere in the kernel, including
    from functions that are themselves blacklisted for kprobes.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Link: https://lkml.kernel.org/r/151629209111.10241.5444852823378068683.stgit@devbox
    [bwh: Backported to 3.16: exports are still done from C]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit a31751ca36c2d3e1e712aa8e7542669a18d2df4a
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Fri Jan 19 01:14:21 2018 +0900

    retpoline: Introduce start/end markers of indirect thunk
    
    commit 736e80a4213e9bbce40a7c050337047128b472ac upstream.
    
    Introduce start/end markers for the __x86_indirect_thunk_* functions.
    To make this easy, consolidate the .text.__x86.indirect_thunk.*
    sections into a single .text.__x86.indirect_thunk section, place it
    at the end of the kernel text section, and add
    __indirect_thunk_start/end so that other subsystems (e.g. kprobes)
    can identify it.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Link: https://lkml.kernel.org/r/151629206178.10241.6828804696410044771.stgit@devbox
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit a77a3f23b319797a76c4b4c1f327e55b7665a879
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Sat Jan 13 17:27:30 2018 -0600

    x86/retpoline: Add LFENCE to the retpoline/RSB filling RSB macros
    
    commit 28d437d550e1e39f805d99f9f8ac399c778827b7 upstream.
    
    The PAUSE instruction is currently used in the retpoline and RSB filling
    macros as a speculation trap.  The use of PAUSE was originally suggested
    because it showed a very, very small difference in the amount of
    cycles/time used to execute the retpoline as compared to LFENCE.  On AMD,
    the PAUSE instruction is not a serializing instruction, so the pause/jmp
    loop will use excess power as it is speculated over while waiting for
    the return to mispredict to the correct target.
    
    The RSB filling macro is applicable to AMD, and, if software is unable to
    verify that LFENCE is serializing on AMD (possible when running under a
    hypervisor), the generic retpoline support will be used, so that is also
    applicable to AMD.  Keep the current usage of PAUSE for Intel, but add an
    LFENCE instruction to the speculation trap for AMD.
    
    The same sequence has been adopted by GCC for the GCC generated retpolines.
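
    Concretely, the change to each speculation-trap loop is a single added
    instruction ('+' marks the addition; illustrative, not the literal hunk):

      1:
              pause
     +        lfence
              jmp     1b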
    
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@alien8.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Paul Turner <pjt@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Kees Cook <keescook@google.com>
    Link: https://lkml.kernel.org/r/20180113232730.31060.36287.stgit@tlendack-t1.amdoffice.net
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 6653cade70f3e470cd03793b7a5004f4388b72fc
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Sun Jan 14 22:13:29 2018 +0100

    x86/retpoline: Remove compile time warning
    
    commit b8b9ce4b5aec8de9e23cabb0a26b78641f9ab1d6 upstream.
    
    Remove the compile time warning when CONFIG_RETPOLINE=y and the compiler
    does not have retpoline support. Linus' rationale for this is:
    
      It's wrong because it will just make people turn off RETPOLINE, and the
      asm updates - and return stack clearing - that are independent of the
      compiler are likely the most important parts because they are likely the
      ones easiest to target.
    
      And it's annoying because most people won't be able to do anything about
      it. The number of people building their own compiler? Very small. So if
      their distro hasn't got a compiler yet (and pretty much nobody does), the
      warning is just annoying crap.
    
      It is already properly reported as part of the sysfs interface. The
      compile-time warning only encourages bad things.
    
    Fixes: 76b043848fd2 ("x86/retpoline: Add initial retpoline support")
    Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Link: https://lkml.kernel.org/r/CA+55aFzWgquv4i6Mab6bASqYXg3ErV3XDFEYf=GEcCDQg5uAtw@mail.gmail.com
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 20e080e2752d108388d386ecc0b33d7797dfb18f
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Fri Jan 12 11:11:27 2018 +0000

    x86/retpoline: Fill return stack buffer on vmexit
    
    commit 117cc7a908c83697b0b737d15ae1eb5943afe35b upstream.
    
    In accordance with the Intel and AMD documentation, we need to overwrite
    all entries in the RSB on exiting a guest, to prevent malicious branch
    target predictions from affecting the host kernel. This is needed both
    for retpoline and for IBRS.
    
    [ak: numbers again for the RSB stuffing labels]
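
    As a hedged, self-contained illustration of the technique (userspace C
    with GNU inline assembly on x86-64; not the kernel's
    __FILL_RETURN_BUFFER macro, and the depth of 32 entries is the one
    assumed by this series):

      /* Refill the RSB: a chain of CALLs pushes 32 return addresses, each
       * pointing at a speculation trap, then the stack is restored.  Kept
       * out of line so the pushes stay inside this function's own frame
       * (build with -mno-red-zone if in doubt). */
      __attribute__((noinline)) static void fill_return_buffer(void)
      {
              unsigned long loops = 16;       /* 16 iterations x 2 calls */

              asm volatile(
                      "1:\n\t"
                      "call   2f\n"                 /* RSB entry -> trap 3 */
                      "3:\n\t"
                      "pause\n\t"
                      "lfence\n\t"
                      "jmp    3b\n"
                      "2:\n\t"
                      "call   4f\n"                 /* RSB entry -> trap 5 */
                      "5:\n\t"
                      "pause\n\t"
                      "lfence\n\t"
                      "jmp    5b\n"
                      "4:\n\t"
                      "dec    %0\n\t"
                      "jnz    1b\n\t"
                      "add    $(32 * 8), %%rsp\n\t" /* drop pushed addresses */
                      : "+r" (loops) : : "memory", "cc");
      }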
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515755487-8524-1-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16:
     - Drop the ANNOTATE_NOSPEC_ALTERNATIVEs
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 901d43afc183eeaa38b723bbd5281bf337dc8ef8
Author: Andi Kleen <ak@linux.intel.com>
Date:   Thu Jan 11 21:46:33 2018 +0000

    x86/retpoline/irq32: Convert assembler indirect jumps
    
    commit 7614e913db1f40fff819b36216484dc3808995d4 upstream.
    
    Convert all indirect jumps in 32bit irq inline asm code to use
    non-speculative sequences.
    
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-12-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit e09338ed691a7efb05572e68e35dac298d2c8fb3
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:32 2018 +0000

    x86/retpoline/checksum32: Convert assembler indirect jumps
    
    commit 5096732f6f695001fa2d6f1335a2680b37912c69 upstream.
    
    Convert all indirect jumps in 32bit checksum assembler code to use
    non-speculative sequences when CONFIG_RETPOLINE is enabled.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-11-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit a920aeeee51092d259eae48a22a878554483a446
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:31 2018 +0000

    x86/retpoline/xen: Convert Xen hypercall indirect jumps
    
    commit ea08816d5b185ab3d09e95e393f265af54560350 upstream.
    
    Convert indirect call in Xen hypercall to use non-speculative sequence,
    when CONFIG_RETPOLINE is enabled.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-10-git-send-email-dwmw@amazon.co.uk
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 96075314274130d26f617164b2c997e62709ea16
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:30 2018 +0000

    x86/retpoline/hyperv: Convert assembler indirect jumps
    
    commit e70e5892b28c18f517f29ab6e83bd57705104b31 upstream.
    
    Convert all indirect jumps in hyperv inline asm code to use non-speculative
    sequences when CONFIG_RETPOLINE is enabled.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-9-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16:
     - Drop changes to hv_do_fast_hypercall8()
     - Include earlier updates to the asm constraints
     - Adjust filename, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 46d1f87264f6c76befbea9d4a60bfce174439512
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:29 2018 +0000

    x86/retpoline/ftrace: Convert ftrace assembler indirect jumps
    
    commit 9351803bd803cdbeb9b5a7850b7b6f464806e3db upstream.
    
    Convert all indirect jumps in ftrace assembler code to use non-speculative
    sequences when CONFIG_RETPOLINE is enabled.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-8-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust filenames, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 6e3008622f5086b6fc819a9588ee67ce374071d0
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:28 2018 +0000

    x86/retpoline/entry: Convert entry assembler indirect jumps
    
    commit 2641f08bb7fc63a636a2b18173221d7040a3512e upstream.
    
    Convert indirect jumps in core 32/64bit entry assembler code to use
    non-speculative sequences when CONFIG_RETPOLINE is enabled.
    
    Don't use CALL_NOSPEC in entry_SYSCALL_64_fastpath because the return
    address after the 'call' instruction must be *precisely* at the
    .Lentry_SYSCALL_64_after_fastpath label for stub_ptregs_64 to work,
    and the use of alternatives will mess that up unless we play horrid
    games to prepend with NOPs and make the variants the same length. It's
    not worth it; in the case where we ALTERNATIVE out the retpoline, the
    first instruction at __x86.indirect_thunk.rax is going to be a bare
    jmp *%rax anyway.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-7-git-send-email-dwmw@amazon.co.uk
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Razvan Ghitulete <rga@amazon.de>
    [bwh: Backported to 3.16:
     - Also update indirect jumps through system call table in entry_32.S and
       ia32entry.S
     - Adjust filenames, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 738e4b6f5719880c4e4ae14899881cafb9ee0373
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:27 2018 +0000

    x86/retpoline/crypto: Convert crypto assembler indirect jumps
    
    commit 9697fa39efd3fc3692f2949d4045f393ec58450b upstream.
    
    Convert all indirect jumps in crypto assembler code to use non-speculative
    sequences when CONFIG_RETPOLINE is enabled.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-6-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit c4ed1bad5316fe67b4c5e0e600a6372e24dff2f4
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:26 2018 +0000

    x86/spectre: Add boot time option to select Spectre v2 mitigation
    
    commit da285121560e769cc31797bba6422eea71d473e0 upstream.
    
    Add a spectre_v2= option to select the mitigation used for the indirect
    branch speculation vulnerability.
    
    Currently, the only option available is retpoline, in its various forms.
    This will be expanded to cover the new IBRS/IBPB microcode features.
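
    For reference, the values the new parameter accepts upstream are
    (hedged summary of the documentation added by this patch):

      spectre_v2=on                  unconditionally enable the mitigation
      spectre_v2=off                 unconditionally disable it
      spectre_v2=auto                kernel chooses (default)
      spectre_v2=retpoline           replace indirect branches
      spectre_v2=retpoline,generic   google's original retpoline
      spectre_v2=retpoline,amd       AMD-specific minimal thunk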
    
    The RETPOLINE_AMD feature relies on a serializing LFENCE for speculation
    control. For AMD hardware, only set RETPOLINE_AMD if LFENCE is a
    serializing instruction, which is indicated by the LFENCE_RDTSC feature.
    
    [ tglx: Folded back the LFENCE/AMD fixes and reworked it so IBRS
            integration becomes simple ]
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-5-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: adjust filename, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit a4a51323985f7a5de61650c73668f6fb1a87cf39
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Jan 11 21:46:25 2018 +0000

    x86/retpoline: Add initial retpoline support
    
    commit 76b043848fd22dbf7f8bf3a1452f8c70d557b860 upstream.
    
    Enable the use of -mindirect-branch=thunk-extern in newer GCC, and provide
    the corresponding thunks. Provide assembler macros for invoking the thunks
    in the same way that GCC does, from native and inline assembler.
    
    This adds X86_FEATURE_RETPOLINE and sets it by default on all CPUs. In
    some circumstances, IBRS microcode features may be used instead, and the
    retpoline can be disabled.
    
    On AMD CPUs if lfence is serialising, the retpoline can be dramatically
    simplified to a simple "lfence; jmp *\reg". A future patch, after it has
    been verified that lfence really is serialising in all circumstances, can
    enable this by setting the X86_FEATURE_RETPOLINE_AMD feature bit in addition
    to X86_FEATURE_RETPOLINE.
    
    Do not align the retpoline in the altinstr section, because there is no
    guarantee that it stays aligned when it's copied over the oldinstr during
    alternative patching.
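
    As a hedged, stand-alone illustration of the construct (userspace C
    with GNU inline assembly, x86-64; the symbol name and the convention of
    passing the target in %rdi are made up for the example, the real thunks
    are __x86_indirect_thunk_<reg> with the target in the named register):

      #include <stdio.h>

      /* Retpoline-style thunk: transfer control to the address in %rdi
       * without executing an indirect branch.  The return predictor
       * speculates into the pause/lfence trap and goes nowhere; the
       * architectural path overwrites the pushed return address and RETs
       * to the real target. */
      asm(".pushsection .text\n"
          ".globl retpoline_call_rdi\n"
          "retpoline_call_rdi:\n"
          "    call  1f\n"             /* push the address of the trap     */
          "2:  pause\n"                /* speculation trap                 */
          "    lfence\n"
          "    jmp   2b\n"
          "1:  mov   %rdi, (%rsp)\n"   /* replace trap address with target */
          "    ret\n"                  /* architecturally jumps to %rdi    */
          ".popsection\n");

      /* Behaves like calling fn() directly (SysV ABI: fn is in %rdi). */
      void retpoline_call_rdi(void (*fn)(void));

      static void hello(void)
      {
              puts("reached via retpoline-style thunk");
      }

      int main(void)
      {
              retpoline_call_rdi(hello);
              return 0;
      }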
    
    [ Andi Kleen: Rename the macros, add CONFIG_RETPOLINE option, export thunks]
    [ tglx: Put actual function CALL/JMP in front of the macros, convert to
            symbolic labels ]
    [ dwmw2: Convert back to numeric labels, merge objtool fixes ]
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: thomas.lendacky@amd.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515707194-20531-4-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16:
     - Add C source to export the thunk symbols
     - Drop ANNOTATE_NOSPEC_ALTERNATIVE since we don't have objtool
     - Use the first available feature numbers
     - Adjust filename, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 1fb3b6e6936837dec5d2a5813b0ca84d68e5141e
Author: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date:   Fri Sep 29 17:15:36 2017 +0300

    x86/asm: Use register variable to get stack pointer value
    
    commit 196bd485ee4f03ce4c690bfcf38138abfcd0a4bc upstream.
    
    Currently we use the current_stack_pointer() function to get the value
    of the stack pointer register. Since commit:
    
      f5caf621ee35 ("x86/asm: Fix inline asm call constraints for Clang")
    
    ... we have a stack register variable declared. It can be used instead of
    the current_stack_pointer() function, which allows us to optimize away
    some excessive "mov %rsp, %<dst>" instructions:
    
     -mov    %rsp,%rdx
     -sub    %rdx,%rax
     -cmp    $0x3fff,%rax
     -ja     ffffffff810722fd <ist_begin_non_atomic+0x2d>
    
     +sub    %rsp,%rax
     +cmp    $0x3fff,%rax
     +ja     ffffffff810722fa <ist_begin_non_atomic+0x2a>
    
    Remove current_stack_pointer(), rename __asm_call_sp to current_stack_pointer
    and use it instead of the removed function.
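
    The declaration behind this is tiny; a hedged userspace equivalent of
    the idiom (x86-64 assumed; the kernel additionally uses the variable as
    an asm() output constraint, see ASM_CALL_CONSTRAINT):

      #include <stdio.h>

      /* a global register variable permanently bound to the stack pointer,
       * in the spirit of the kernel's current_stack_pointer */
      register unsigned long current_stack_pointer asm("rsp");

      int main(void)
      {
              printf("stack pointer: %#lx\n", current_stack_pointer);
              return 0;
      }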
    
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20170929141537.29167-1-aryabinin@virtuozzo.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    [dwmw2: We want ASM_CALL_CONSTRAINT for retpoline]
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Razvan Ghitulete <rga@amazon.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    [bwh: Backported to 3.16: drop change in ist_begin_non_atomic()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 7a090db2a8f8849b0402335ed90986e038dd58e6
Author: Andy Lutomirski <luto@amacapital.net>
Date:   Thu Nov 13 15:57:07 2014 -0800

    x86: Clean up current_stack_pointer
    
    commit 83653c16da91112236292871b820cb8b367220e3 upstream.
    
    There's no good reason for it to be a macro, and x86_64 will want to
    use it, so it should be in a header.
    
    Acked-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 1550db1c7f9ec5dd89bf135a90ca8daca9fefb3d
Author: Masahiro Yamada <yamada.masahiro@socionext.com>
Date:   Tue Jun 14 14:58:54 2016 +0900

    kconfig.h: use __is_defined() to check if MODULE is defined
    
    commit 4f920843d248946545415c1bf6120942048708ed upstream.
    
    The macro MODULE is not a config option, it is a per-file build
    option.  So, config_enabled(MODULE) is not sensible.  (There is
    another case in include/linux/export.h, where config_enabled() is
    used against a non-config option.)
    
    This commit renames some macros in include/linux/kconfig.h for use
    with non-config macros and replaces config_enabled(MODULE) with
    __is_defined(MODULE).
    
    I am keeping config_enabled() because it is still referenced from
    some places, but I expect it would be deprecated in the future.
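
    The trick behind __is_defined() is compact enough to show; a hedged,
    self-contained demonstration with made-up macro names (the kernel's
    spellings in kconfig.h differ):

      #include <stdio.h>

      /* If `x` expands to 1, PLACEHOLDER_##x becomes PLACEHOLDER_1, which
       * expands to "0," and smuggles an extra argument in front of the 1,
       * so TAKE_SECOND() picks the 1.  Otherwise the 1 stays first and
       * the result is 0. */
      #define PLACEHOLDER_1                   0,
      #define TAKE_SECOND(ignored, val, ...)  val
      #define IS_DEFINED(x)                   IS_DEFINED_1(x)
      #define IS_DEFINED_1(val)               IS_DEFINED_2(PLACEHOLDER_##val)
      #define IS_DEFINED_2(arg_or_junk)       TAKE_SECOND(arg_or_junk 1, 0)

      #define MODULE_LIKE 1   /* stand-in for a -DMODULE build flag */

      int main(void)
      {
              printf("MODULE_LIKE: %d\n", IS_DEFINED(MODULE_LIKE)); /* 1 */
              printf("NOT_SET:     %d\n", IS_DEFINED(NOT_SET));     /* 0 */
              return 0;
      }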
    
    Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    Signed-off-by: Michal Marek <mmarek@suse.com>
    [bwh: Backported to 3.16: drop change in IS_REACHABLE()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit c63676dec458b070ff81a2cb5ce0f88437372ef7
Author: Andy Lutomirski <luto@kernel.org>
Date:   Tue Apr 26 12:23:25 2016 -0700

    x86/asm: Make asm/alternative.h safe from assembly
    
    commit f005f5d860e0231fe212cfda8c1a3148b99609f4 upstream.
    
    asm/alternative.h isn't directly useful from assembly, but it
    shouldn't break the build.
    
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/e5b693fcef99fe6e80341c9e97a002fb23871e91.1461698311.git.luto@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 1639dd3b31e6b279079a2e71e1135993dea87884
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Mon Jan 8 16:09:32 2018 -0600

    x86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC
    
    commit 9c6a73c75864ad9fa49e5fa6513e4c4071c0e29f upstream.
    
    With LFENCE now a serializing instruction, use LFENCE_RDTSC in preference
    to MFENCE_RDTSC.  However, since the kernel could be running under a
    hypervisor that does not support writing that MSR, read the MSR back and
    verify that the bit has been set successfully.  If the MSR can be read
    and the bit is set, then set the LFENCE_RDTSC feature, otherwise set the
    MFENCE_RDTSC feature.
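
    Roughly the shape of the check (hedged kernel-context sketch in the AMD
    init path, where c is the struct cpuinfo_x86 pointer; MSR names as
    added upstream by this series):

      u64 val;

      /* request a serializing LFENCE via the AMD DE_CFG MSR */
      msr_set_bit(MSR_F10H_DECFG, MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);

      /* only trust it if the write actually stuck; a hypervisor may have
       * ignored it or refused the access */
      if (!rdmsrl_safe(MSR_F10H_DECFG, &val) &&
          (val & MSR_F10H_DECFG_LFENCE_SERIALIZE))
              set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
      else
              set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);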
    
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/20180108220932.12580.52458.stgit@tlendack-t1.amdoffice.net
    [bwh: Backported to 3.16: adjust filename, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit ae732f0da724f69f81c054c1b2014e2b144f2256
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Mon Jan 8 16:09:21 2018 -0600

    x86/cpu/AMD: Make LFENCE a serializing instruction
    
    commit e4d0e84e490790798691aaa0f2e598637f1867ec upstream.
    
    To aid in speculation control, make LFENCE a serializing instruction
    since it has less overhead than MFENCE.  This is done by setting bit 1
    of MSR 0xc0011029 (DE_CFG).  Some families that support LFENCE do not
    have this MSR.  For these families, the LFENCE instruction is already
    serializing.
    
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/20180108220921.12580.71694.stgit@tlendack-t1.amdoffice.net
    [bwh: Backported to 3.16: adjust filename, context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit d5d31a6934e730a414f4de3b6d425405bc23f892
Author: Borislav Petkov <bp@suse.de>
Date:   Wed Jan 10 12:28:16 2018 +0100

    x86/alternatives: Fix optimize_nops() checking
    
    commit 612e8e9350fd19cae6900cf36ea0c6892d1a0dca upstream.
    
    The alternatives code checks only whether the first byte is a NOP, but
    having NOPs in front of the payload and actual instructions after them
    breaks the "optimized" test.
    
    Make sure to scan all bytes before deciding to optimize the NOPs in there.
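
    A hedged sketch of the added check, assuming the upstream
    optimize_nops(struct alt_instr *a, u8 *instr) signature:

      int i;

      /* refuse to touch the padding unless every byte really is a NOP */
      for (i = 0; i < a->padlen; i++) {
              if (instr[i] != 0x90)
                      return;
      }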
    
    Reported-by: David Woodhouse <dwmw2@infradead.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Andrew Lutomirski <luto@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/20180110112815.mgciyf5acwacphkq@pd.tnic
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 75414fd52a4535f6b40bbd8059313ef2410402e3
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Thu Sep 3 12:34:55 2015 +0200

    x86/alternatives: Make optimize_nops() interrupt safe and synced
    
    commit 66c117d7fa2ae429911e60d84bf31a90b2b96189 upstream.
    
    Richard reported the following crash:
    
    [    0.036000] BUG: unable to handle kernel paging request at 55501e06
    [    0.036000] IP: [<c0aae48b>] common_interrupt+0xb/0x38
    [    0.036000] Call Trace:
    [    0.036000]  [<c0409c80>] ? add_nops+0x90/0xa0
    [    0.036000]  [<c040a054>] apply_alternatives+0x274/0x630
    
    Chuck decoded:
    
     "  0:   8d 90 90 83 04 24       lea    0x24048390(%eax),%edx
        6:   80 fc 0f                cmp    $0xf,%ah
        9:   a8 0f                   test   $0xf,%al
     >> b:   a0 06 1e 50 55          mov    0x55501e06,%al
       10:   57                      push   %edi
       11:   56                      push   %esi
    
     Interrupt 0x30 occurred while the alternatives code was replacing the
     initial 0x90,0x90,0x90 NOPs (from the ASM_CLAC macro) with the
     optimized version, 0x8d,0x76,0x00. Only the first byte has been
     replaced so far, and it makes a mess out of the insn decoding."
    
    optimize_nops() is buggy in two aspects:
    
    - It's not disabling interrupts across the modification
    - It's lacking a sync_core() call
    
    Add both.
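
    A hedged sketch of the resulting shape of optimize_nops():

      unsigned long flags;

      local_irq_save(flags);          /* no interrupt may see a torn insn  */
      add_nops(instr + (a->instrlen - a->padlen), a->padlen);
      sync_core();                    /* serialize after self-modification */
      local_irq_restore(flags);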
    
    Fixes: 4fd4b6e5537c 'x86/alternatives: Use optimized NOPs for padding'
    Reported-and-tested-by: "Richard W.M. Jones" <rjones@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Richard W.M. Jones <rjones@redhat.com>
    Cc: Chuck Ebbert <cebbert.lkml@gmail.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1509031232340.15006@nanos
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 803150953830f29301011790be80fe3c1153c4d3
Author: Borislav Petkov <bp@suse.de>
Date:   Sat Apr 4 15:34:43 2015 +0200

    x86/alternatives: Fix ALTERNATIVE_2 padding generation properly
    
    commit dbe4058a6a44af4ca5d146aebe01b0a1f9b7fd2a upstream.
    
    Quentin caught a corner case with the generation of instruction
    padding in the ALTERNATIVE_2 macro: if len(orig_insn) <
    len(alt1) < len(alt2), then not enough padding gets added and
    that is not good(tm) as we could overwrite the beginning of the
    next instruction.
    
    Luckily, at the time of this writing, we don't have
    ALTERNATIVE_2() invocations which have that problem and even if
    we did, a simple fix would be to prepend the instructions with
    enough prefixes so that that corner case doesn't happen.
    
    However, best it would be if we fixed it properly. See below for
    a simple, abstracted example of what we're doing.
    
    So what we ended up doing is, we compute the
    
            max(len(alt1), len(alt2)) - len(orig_insn)
    
    and feed that value to the .skip gas directive. The max() cannot
    have conditionals due to gas limitations, thus the fancy integer
    math.
    
    With this patch, all ALTERNATIVE_2 sites get padded correctly;
    generating obscure test cases pass too:
    
      #define alt_max_short(a, b)    ((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
    
      #define gen_skip(orig, alt1, alt2, marker)    \
            .skip -((alt_max_short(alt1, alt2) - (orig)) > 0) * \
                    (alt_max_short(alt1, alt2) - (orig)),marker
    
            .pushsection .text, "ax"
      .globl main
      main:
            gen_skip(1, 2, 4, 0x09)
            gen_skip(4, 1, 2, 0x10)
            ...
            .popsection
    
    Thanks to Quentin for catching it and double-checking the fix!
    
    Reported-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20150404133443.GE21152@pd.tnic
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit d6e895e8c362fa340b647d4b3e59b6195db73528
Author: Borislav Petkov <bp@suse.de>
Date:   Sat Apr 4 23:07:42 2015 +0200

    x86/alternatives: Guard NOPs optimization
    
    commit 69df353ff305805fc16082d0c5bfa6e20fa8b863 upstream.
    
    Take a look at the first instruction byte before optimizing the NOP -
    there might be something else there already, like the ALTERNATIVE_2()
    in rdtsc_barrier() which NOPs out on AMD even though we just
    patched in an MFENCE.
    
    This happens because the alternatives sees X86_FEATURE_MFENCE_RDTSC,
    AMD CPUs set it, we patch in the MFENCE and right afterwards it sees
    X86_FEATURE_LFENCE_RDTSC which AMD CPUs don't set and we blindly
    optimize the NOP.
    
    Checking whether at least the first byte is 0x90 prevents that.
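
    The guard itself is a one-liner (hedged sketch):

      /* something other than a NOP is already there: leave it alone */
      if (instr[0] != 0x90)
              return;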
    
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/1428181662-18020-1-git-send-email-bp@alien8.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 84f67d9e273a5512a712a566b6d777a241943838
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Tue Jan 9 15:02:51 2018 +0000

    sysfs/cpu: Fix typos in vulnerability documentation
    
    commit 9ecccfaa7cb5249bd31bdceb93fcf5bedb8a24d8 upstream.
    
    Fixes: 87590ce6e ("sysfs/cpu: Add vulnerability folder")
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 15224fb82d3bb0d72bdac57391131e72fade34d3
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Sun Jan 7 22:48:01 2018 +0100

    x86/cpu: Implement CPU vulnerabilities sysfs functions
    
    commit 61dc0f555b5c761cdafb0ba5bd41ecf22d68a4c4 upstream.
    
    Implement the CPU vulnerability show functions for meltdown, spectre_v1
    and spectre_v2.
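
    A hedged sketch of what the x86 side looks like in this backport, where
    the Meltdown mitigation flag is KAISER rather than PTI (strings and
    checks may differ in detail):

      ssize_t cpu_show_meltdown(struct device *dev,
                                struct device_attribute *attr, char *buf)
      {
              if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
                      return sprintf(buf, "Not affected\n");
              if (boot_cpu_has(X86_FEATURE_KAISER))
                      return sprintf(buf, "Mitigation: PTI\n");
              return sprintf(buf, "Vulnerable\n");
      }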
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Linus Torvalds <torvalds@linuxfoundation.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Link: https://lkml.kernel.org/r/20180107214913.177414879@linutronix.de
    [bwh: Backported to 3.16:
     - Meltdown mitigation feature flag is KAISER
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 56e8e5a33f3e6214685da30b556f93b02f26c82d
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Sun Jan 7 22:48:00 2018 +0100

    sysfs/cpu: Add vulnerability folder
    
    commit 87590ce6e373d1a5401f6539f0c59ef92dd924a9 upstream.
    
    As the meltdown/spectre problem affects several CPU architectures, it makes
    sense to have a common way to express whether a system is affected by a
    particular vulnerability or not. If affected, the way to express the
    mitigation should be common as well.
    
    Create /sys/devices/system/cpu/vulnerabilities folder and files for
    meltdown, spectre_v1 and spectre_v2.
    
    Allow architectures to override the show function.
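
    The override hook is a __weak default in the generic CPU code; a hedged
    sketch of one of the three entries:

      /* generic fallback; an architecture that knows better overrides it */
      ssize_t __weak cpu_show_meltdown(struct device *dev,
                                       struct device_attribute *attr,
                                       char *buf)
      {
              return sprintf(buf, "Not affected\n");
      }
      static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);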
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Linus Torvalds <torvalds@linuxfoundation.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: David Woodhouse <dwmw@amazon.co.uk>
    Link: https://lkml.kernel.org/r/20180107214913.096657732@linutronix.de
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 129a7a0f1f3480a5edf9dceca274e8c6673a86b8
Author: Borislav Petkov <bp@suse.de>
Date:   Mon Oct 24 19:38:43 2016 +0200

    x86/cpu: Merge bugs.c and bugs_64.c
    
    commit 62a67e123e058a67db58bc6a14354dd037bafd0a upstream.
    
    Should be easier when following boot paths. It probably is a leftover
    from the x86 unification eons ago.
    
    No functionality change.
    
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20161024173844.23038-3-bp@alien8.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    [bwh: Backported to 3.16:
     - Add #ifdef around check_fpu(), which is not used on x86_64
     - Adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 1b2a12a04fe7233e49dd5601815469bf40fa4749
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Sat Jan 6 11:49:23 2018 +0000

    x86/cpufeatures: Add X86_BUG_SPECTRE_V[12]
    
    commit 99c6fa2511d8a683e61468be91b83f85452115fa upstream.
    
    Add the bug bits for spectre v1/2 and force them unconditionally for all
    cpus.
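
    The change itself boils down to two lines in the common CPU setup path
    (hedged sketch):

      setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
      setup_force_cpu_bug(X86_BUG_SPECTRE_V2);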
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: gnomes@lxorguk.ukuu.org.uk
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
    Cc: Paul Turner <pjt@google.com>
    Link: https://lkml.kernel.org/r/1515239374-23361-2-git-send-email-dwmw@amazon.co.uk
    [bwh: Backported to 3.16: assign the first available bug numbers]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit b7d94254937440e0fe2e63b48d8eec91c4c5836a
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Fri Jan 5 15:27:34 2018 +0100

    x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN
    
    commit de791821c295cc61419a06fe5562288417d1bc58 upstream.
    
    Use the name associated with the particular attack which needs page table
    isolation for mitigation.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: David Woodhouse <dwmw@amazon.co.uk>
    Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
    Cc: Jiri Koshina <jikos@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Andi Lutomirski  <luto@amacapital.net>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Paul Turner <pjt@google.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Greg KH <gregkh@linux-foundation.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Kees Cook <keescook@google.com>
    Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801051525300.1724@nanos
    [bwh: Backported to 3.16: bug number is different]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 99216cae9cd0cbd8eb20ee4abc836698a4ef4321
Author: Tom Lendacky <thomas.lendacky@amd.com>
Date:   Tue Dec 26 23:43:54 2017 -0600

    x86/cpu, x86/pti: Do not enable PTI on AMD processors
    
    commit 694d99d40972f12e59a3696effee8a376b79d7c8 upstream.
    
    AMD processors are not subject to the types of attacks that the kernel
    page table isolation feature protects against.  The AMD microarchitecture
    does not allow memory references, including speculative references, that
    access higher privileged data when running in a lesser privileged mode
    when that access would result in a page fault.
    
    Disable page table isolation by default on AMD processors by not setting
    the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
    is set.
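
    A hedged sketch of the shape of the change:

      /* only non-AMD CPUs get the "insecure" bug bit, and with it PTI */
      if (c->x86_vendor != X86_VENDOR_AMD)
              setup_force_cpu_bug(X86_BUG_CPU_INSECURE);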
    
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Link: https://lkml.kernel.org/r/20171227054354.20369.94587.stgit@tlendack-t1.amdoffice.net
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 76ebcf5a9d44c2f52ce41ae289c6d74c74dd2955
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Dec 4 15:07:33 2017 +0100

    x86/cpufeatures: Add X86_BUG_CPU_INSECURE
    
    commit a89f040fa34ec9cd682aed98b8f04e3c47d998bd upstream.
    
    Many x86 CPUs leak information to user space due to missing isolation of
    user space and kernel space page tables. There are many well documented
    ways to exploit that.
    
    The upcoming software mitigation of isolating the user and kernel space
    page tables needs a misfeature flag so code can be made runtime
    conditional.
    
    Add the BUG bits which indicate that the CPU is affected and add a feature
    bit which indicates that the software mitigation is enabled.
    
    Assume for now that _ALL_ x86 CPUs are affected by this. Exceptions can be
    made later.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Laight <David.Laight@aculab.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Eduardo Valentin <eduval@amazon.com>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: aliguori@amazon.com
    Cc: daniel.gruss@iaik.tugraz.at
    Cc: hughd@google.com
    Cc: keescook@google.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    [bwh: Backported to 3.16: assign the first available bug number]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 4a125eee123be7bf1fd17e7c1220b9781ae7aa14
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Dec 4 15:07:32 2017 +0100

    x86/cpufeatures: Make CPU bugs sticky
    
    commit 6cbd2171e89b13377261d15e64384df60ecb530e upstream.
    
    There is currently no way to force CPU bug bits like CPU feature bits. That
    makes it impossible to set a bug bit once at boot and have it stick for all
    upcoming CPUs.
    
    Extend the force set/clear arrays to handle bug bits as well.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Borislav Petkov <bpetkov@suse.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Laight <David.Laight@aculab.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Eduardo Valentin <eduval@amazon.com>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: aliguori@amazon.com
    Cc: daniel.gruss@iaik.tugraz.at
    Cc: hughd@google.com
    Cc: keescook@google.com
    Link: https://lkml.kernel.org/r/20171204150606.992156574@linutronix.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit dc7f8f4da29ee471a4ff55828cc6ac2571346b22
Author: Andy Lutomirski <luto@kernel.org>
Date:   Wed Jan 18 11:15:38 2017 -0800

    x86/cpu: Factor out application of forced CPU caps
    
    commit 8bf1ebca215c262e48c15a4a15f175991776f57f upstream.
    
    There are multiple call sites that apply forced CPU caps.  Factor
    them into a helper.
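
    A hedged sketch of the helper:

      static void apply_forced_caps(struct cpuinfo_x86 *c)
      {
              int i;

              for (i = 0; i < NCAPINTS; i++) {
                      c->x86_capability[i] &= ~cpu_caps_cleared[i];
                      c->x86_capability[i] |= cpu_caps_set[i];
              }
      }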
    
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Matthew Whitehead <tedheadster@gmail.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
    Link: http://lkml.kernel.org/r/623ff7555488122143e4417de09b18be2085ad06.1484705016.git.luto@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit 63d893d87b8feb2e548fd38e9b2a958b2a030934
Author: Dave Hansen <dave.hansen@linux.intel.com>
Date:   Fri Jan 5 09:44:36 2018 -0800

    x86/Documentation: Add PTI description
    
    commit 01c9b17bf673b05bb401b76ec763e9730ccf1376 upstream.
    
    Add some details about how PTI works, what some of the downsides
    are, and how to debug it when things go wrong.
    
    Also document the kernel parameter: 'pti/nopti'.
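
    For reference, the documented forms are (hedged summary):

      pti=on       force page table isolation on
      pti=off      force it off
      pti=auto     kernel decides (default)
      nopti        equivalent to pti=off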
    
    Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
    Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
    Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
    Cc: Richard Fellner <richard.fellner@student.tugraz.at>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Andi Lutomirsky <luto@kernel.org>
    Link: https://lkml.kernel.org/r/20180105174436.1BC6FA2B@viggo.jf.intel.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

commit a070adf5031b218fa8fa45914c388886e61d7214
Author: Jim Mattson <jmattson@google.com>
Date:   Wed Jan 3 14:31:38 2018 -0800

    kvm: vmx: Scrub hardware GPRs at VM-exit
    
    commit 0cb5b30698fdc8f6b4646012e3acb4ddce430788 upstream.
    
    Guest GPR values are live in the hardware GPRs at VM-exit.  Do not
    leave any guest values in hardware GPRs after the guest GPR values are
    saved to the vcpu_vmx structure.
    
    This is a partial mitigation for CVE 2017-5715 and CVE 2017-5753.
    Specifically, it defeats the Project Zero PoC for CVE 2017-5715.
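
    A hedged illustration of the approach (not the exact hunk): in the
    VM-exit inline assembly, once the guest's GPR values have been saved
    into the vcpu structure, the registers are cleared so that no
    guest-controlled value is left to steer host speculation:

      /* 32-bit XOR forms zero-extend and clear the full 64-bit registers;
       * the real patch also covers the remaining GPRs, the 32-bit build
       * and the SVM side */
      "xor %%r8d,  %%r8d \n\t"
      "xor %%r9d,  %%r9d \n\t"
      "xor %%r10d, %%r10d \n\t"
      "xor %%r11d, %%r11d \n\t"
      "xor %%r12d, %%r12d \n\t"
      "xor %%r13d, %%r13d \n\t"
      "xor %%r14d, %%r14d \n\t"
      "xor %%r15d, %%r15d \n\t"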
    
    Suggested-by: Eric Northup <digitaleric@google.com>
    Signed-off-by: Jim Mattson <jmattson@google.com>
    Reviewed-by: Eric Northup <digitaleric@google.com>
    Reviewed-by: Benjamin Serebrin <serebrin@google.com>
    Reviewed-by: Andrew Honig <ahonig@google.com>
    [Paolo: Add AMD bits, Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>]
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>