commit 6d4397ac828d4198fbec7bf0acd4ce0d832139da
Author: Jiri Slaby <jslaby@suse.cz>
Date:   Thu Apr 30 11:25:52 2015 +0200

    Linux 3.12.42

commit b5b655c99dff0fada0e43773025e1805b6726232
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Fri Apr 17 11:55:01 2015 +0200

    arm/arm64: KVM: Keep elrsr/aisr in sync with software model
    
    commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
    
    There is an interesting bug in the vgic code, which manifests itself
    when the KVM run loop has a signal pending or needs a vmid generation
    rollover after having disabled interrupts but before actually switching
    to the guest.
    
    In this case, we flush the vgic as usual, but we sync back the vgic
    state and exit to userspace before entering the guest.  The consequence
    is that we will be syncing the list registers back to the software model
    using the GICH_ELRSR and GICH_EISR from the last execution of the guest,
    potentially overwriting a list register containing an interrupt.
    
    This showed up during migration testing where we would capture a state
    where the VM has masked the arch timer but there were no interrupts,
    resulting in a hung test.
    
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Reported-by: Alex Bennee <alex.bennee@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit e83df6dec91f5e7d83bf1722d50c9642b09c2307
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Mar 10 19:07:00 2015 +0000

    arm64: KVM: Do not use pgd_index to index stage-2 pgd
    
    commit 04b8dc85bf4a64517e3cf20e409eeaa503b15cc1 upstream.
    
    The kernel's pgd_index macro is designed to index a normal, page
    sized array. KVM is a bit different, as we can use concatenated
    pages to have a bigger address space (for example 40bit IPA with
    4kB pages gives us an 8kB PGD).
    
    In the above case, the use of pgd_index will always return an index
    inside the first 4kB, which makes a guest that has memory above
    0x8000000000 rather unhappy, as it spins forever in a page fault,
    whilst the host happily corrupts the lower pgd.
    
    The obvious fix is to get our own kvm_pgd_index that does the right
    thing(tm).
    
    Tested on X-Gene with a hacked kvmtool that put memory at a stupidly
    high address.
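
    A minimal sketch of that shape (illustrative, not the literal upstream
    diff; PTRS_PER_S2_PGD covers the full, possibly concatenated, stage-2 pgd):

        /* index into the stage-2 pgd, which may span several pages */
        #define kvm_pgd_index(addr)   (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))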
    
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit d54c7a6d9f014e83e8e99fcc241f83537ffa4618
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Sun Jan 11 14:10:11 2015 +0100

    arm64: KVM: Fix HCR setting for 32bit guests
    
    commit 801f6772cecea6cfc7da61aa197716ab64db5f9e upstream.
    
    Commit b856a59141b1 (arm/arm64: KVM: Reset the HCR on each vcpu
    when resetting the vcpu) moved the init of the HCR register to
    happen later in the init of a vcpu, but left out the fixup
    done in kvm_reset_vcpu when preparing for a 32bit guest.
    
    As a result, the 32bit guest is run as a 64bit guest, but the
    rest of the kernel still manages it as a 32bit one. Fun follows.
    
    Moving the fixup to vcpu_reset_hcr solves the problem for good.
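
    Roughly, the 32bit fixup moves into the reset path (a sketch assuming the
    arm64 naming, not the exact upstream diff):

        static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
        {
                vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
                if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
                        vcpu->arch.hcr_el2 &= ~HCR_RW;  /* run the guest in AArch32 */
        }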
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit c387a64b1c1279565b98f97031e665813d4baba3
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Sun Jan 11 14:10:10 2015 +0100

    arm64: KVM: Fix TLB invalidation by IPA/VMID
    
    commit 55e858b75808347378e5117c3c2339f46cc03575 upstream.
    
    It took about two years for someone to notice that the IPA passed
    to TLBI IPAS2E1IS must be shifted by 12 bits. Clearly our reviewing
    is not as good as it should be...
    
    Paper bag time for me.
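
    The operand of TLBI IPAS2E1IS encodes the IPA shifted right by 12 bits, so
    the flush has to do something like this (sketch in C form; the real code
    lives in the hyp assembly):

        /* present the IPA to the TLBI instruction in its 4kB-granule form */
        asm volatile("tlbi ipas2e1is, %0" : : "r" (ipa >> 12));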
    
    Reported-by: Mario Smarduch <m.smarduch@samsung.com>
    Tested-by: Mario Smarduch <m.smarduch@samsung.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit fd700e92b07816f4ad1f378c9f1657fa1f09d9c5
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Fri Dec 12 21:19:23 2014 +0100

    arm/arm64: KVM: Require in-kernel vgic for the arch timers
    
    commit 05971120fca43e0357789a14b3386bb56eef2201 upstream.
    
    It is currently possible to run a VM with architected timers support
    without creating an in-kernel VGIC, which will result in interrupts from
    the virtual timer going nowhere.
    
    To address this issue, move the architected timers initialization to the
    time when we run a VCPU for the first time, and then only initialize
    (and enable) the architected timers if we have a properly created and
    initialized in-kernel VGIC.
    
    When injecting interrupts from the virtual timer to the vgic, the
    current setup should ensure that this never calls an on-demand init of
    the VGIC, which is the only call path that could return an error from
    kvm_vgic_inject_irq(), so capture the return value and raise a warning
    if there's an error there.
    
    We also change the kvm_timer_init() function from returning an int to be
    a void function, since the function always succeeds.
    
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 67ffa0e4618acd554a1c5a0fba54338e2bee0973
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Tue Dec 9 14:33:45 2014 +0100

    arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
    
    commit 716139df2517fbc3f2306dbe8eba0fa88dca0189 upstream.
    
    When the vgic initializes its internal state it does so based on the
    number of VCPUs available at the time.  If we allow KVM to create more
    VCPUs after the VGIC has been initialized, we are likely to error out in
    unfortunate ways later, perform buffer overflows etc.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Eric Auger <eric.auger@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit fc234577d5802f898551d89b38650cab4c98ed12
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Thu Nov 27 10:35:03 2014 +0100

    arm/arm64: KVM: Introduce stage2_unmap_vm
    
    commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.
    
    Introduce a new function to unmap user RAM regions in the stage2 page
    tables.  This is needed on reboot (or when the guest turns off the MMU)
    to ensure we fault in pages again and make the dcache, RAM, and icache
    coherent.
    
    Using unmap_stage2_range for the whole guest physical range does not
    work, because that unmaps IO regions (such as the GIC) which will not be
    recreated or in the best case faulted in on a page-by-page basis.
    
    Call this function on secondary and subsequent calls to the
    KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
    Stage-1 MMU is off when faulting in pages and make the caches coherent.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit f5a186c1839430cccc30bea14817e669ceb8d81c
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Thu Oct 16 17:21:16 2014 +0200

    arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu
    
    commit b856a59141b1066d3c896a0d0231f84dabd040af upstream.
    
    When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also
    reset the HCR, because we now modify the HCR dynamically to
    enable/disable trapping of guest accesses to the VM registers.
    
    This is crucial for reboot of VMs to work, since otherwise we will not be
    doing the necessary cache maintenance operations when faulting in pages
    with the guest MMU off.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 96c7d3a6b93b12868acd5432e408f42e36afe8d5
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Thu Oct 16 16:14:43 2014 +0200

    arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option
    
    commit 3ad8b3de526a76fbe9466b366059e4958957b88f upstream.
    
    The implementation of KVM_ARM_VCPU_INIT is currently not doing what
    userspace expects, namely making sure that a vcpu which may have been
    turned off using PSCI is returned to its initial state, which would be
    powered on if userspace does not set the KVM_ARM_VCPU_POWER_OFF flag.
    
    Implement the expected functionality and clarify the ABI.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 9177e8d7ad081480234c6ff59d98f941413a6b2c
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Tue Dec 2 15:27:51 2014 +0100

    arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag
    
    commit 03f1d4c17edb31b41b14ca3a749ae38d2dd6639d upstream.
    
    If a VCPU was originally started with power off (typically to be brought
    up by PSCI in SMP configurations), there is no need to clear the
    POWER_OFF flag in the kernel, as this flag is only tested during the
    init ioctl itself.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 529ad12b779bb522563156ba02fa67ec55c0de99
Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date:   Mon Nov 10 08:33:55 2014 +0000

    arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn()
    
    commit 07a9748c78cfc39b54f06125a216b67b9c8f09ed upstream.
    
    Instead of using kvm_is_mmio_pfn() to decide whether a host region
    should be stage 2 mapped with device attributes, add a new static
    function kvm_is_device_pfn() that disregards RAM pages with the
    reserved bit set, as those should usually not be mapped as device
    memory.
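
    The helper boils down to something like this (sketch of the idea):

        static bool kvm_is_device_pfn(unsigned long pfn)
        {
                /* no struct page backing: treat the pfn as device memory */
                return !pfn_valid(pfn);
        }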
    
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 5030e3c059fbcf6223a1d386f9b7a1aa940f140e
Author: Geoff Levand <geoff@infradead.org>
Date:   Fri Oct 31 23:06:47 2014 +0000

    arm64/kvm: Fix assembler compatibility of macros
    
    commit 286fb1cc32b11c18da3573a8c8c37a4f9da16e30 upstream.
    
    Some of the macros defined in kvm_arm.h are useful in assembly files, but are
    not compatible with the assembler.  Change any C language integer constant
    definitions using appended U, UL, or ULL to the UL() preprocessor macro.  Also,
    add a preprocessor include of the asm/memory.h file which defines the UL()
    macro.
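
    The UL() helper from asm/memory.h expands differently for C and for the
    assembler, roughly along these lines:

        #ifdef __ASSEMBLY__
        #define UL(x)   x               /* gas has no notion of a UL suffix */
        #else
        #define UL(x)   x##UL
        #endif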
    
    Fixes build errors like these when using kvm_arm.h in assembly
    source files:
    
      Error: unexpected characters following instruction at operand 3 -- `and x0,x1,#((1U<<25)-1)'
    
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 8f4ca4f5b938fbc0598b66c041ced7a2fe82dccf
Author: Steve Capper <steve.capper@linaro.org>
Date:   Tue Oct 14 15:02:15 2014 +0100

    arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort
    
    commit 3d08c629244257473450a8ba17cb8184b91e68f8 upstream.
    
    Commit:
    b886576 ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
    
    introduced some code in user_mem_abort that failed to compile if
    STRICT_MM_TYPECHECKS was enabled.
    
    This patch fixes up the failing comparison.
    
    Signed-off-by: Steve Capper <steve.capper@linaro.org>
    Reviewed-by: Kim Phillips <kim.phillips@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 91a82540f3e417f9fb908bcacffc8c8d27c59078
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Fri Oct 10 12:14:29 2014 +0200

    arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
    
    commit c3058d5da2222629bc2223c488a4512b59bb4baf upstream.
    
    When creating or moving a memslot, make sure the IPA space is within the
    addressable range of the guest.  Otherwise, user space can create too
    large a memslot and KVM would try to access potentially unallocated page
    table entries when inserting entries in the Stage-2 page tables.
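
    The check amounts to rejecting slots whose last page falls outside the
    stage-2 addressable range, something like (sketch):

        if ((memslot->base_gfn + memslot->npages) > (KVM_PHYS_SIZE >> PAGE_SHIFT))
                return -EFAULT;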
    
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 296cffc0f8705d35c2bcb754154ee4e829de238b
Author: Vladimir Murzin <vladimir.murzin@arm.com>
Date:   Mon Sep 22 15:52:48 2014 +0100

    arm: kvm: fix CPU hotplug
    
    commit 37a34ac1d4775aafbc73b9db53c7daebbbc67e6a upstream.
    
    On some platforms with no power management capabilities, the hotplug
    implementation is allowed to return from a smp_ops.cpu_die() call as a
    function return. Upon a CPU onlining event, the KVM CPU notifier tries
    to reinstall the hyp stub, which fails on platform where no reset took
    place following a hotplug event, with the message:
    
    CPU1: smp_ops.cpu_die() returned, trying to resuscitate
    CPU1: Booted secondary processor
    Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x80409540
    unexpected data abort in Hyp mode at: 0x80401fe8
    unexpected HVC/SVC trap in Hyp mode at: 0x805c6170
    
    since KVM code is trying to reinstall the stub on a system where it is
    already configured.
    
    To prevent this issue, this patch adds a check in the KVM hotplug
    notifier that detects if the HYP stub really needs re-installing when a
    CPU is onlined and skips the installation call if the stub is already in
    place, which means that the CPU has not been reset.
    
    Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 628d3e68cb2113cb0c4b935745e35a2efb7e944b
Author: Joel Schopp <joel.schopp@amd.com>
Date:   Wed Jul 9 11:17:04 2014 -0500

    arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
    
    commit dbff124e29fa24aff9705b354b5f4648cd96e0bb upstream.
    
    The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
    and not all the bits in the PA range. This is clearly a bug that
    manifests itself on systems that allocate memory in the higher address
    space range.
    
     [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
       instead of a hard-coded value and to move the alignment check of the
       allocation to mmu.c.  Also added a comment explaining why we hardcode
       the IPA range and changed the stage-2 pgd allocation to be based on
       the 40 bit IPA range instead of the maximum possible 48 bit PA range.
       - Christoffer ]
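
    The mask then derives from the real PA range instead of a fixed width
    (sketch of the PHYS_MASK_SHIFT-based form described above, not the exact
    upstream definition):

        #define VTTBR_BADDR_SHIFT       (VTTBR_X - 1)
        #define VTTBR_BADDR_MASK \
                (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)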
    
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Joel Schopp <joel.schopp@amd.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 64b4f742f29eb19cd8c669703de4102a67228076
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jul 8 12:09:00 2014 +0100

    KVM: ARM: vgic: plug irq injection race
    
    commit 71afaba4a2e98bb7bdeba5078370ab43d46e67a1 upstream.
    
    As it stands, nothing prevents userspace from injecting an interrupt
    before the guest's GIC is actually initialized.
    
    This goes unnoticed so far (as everything is pretty much statically
    allocated), but ends up exploding in a spectacular way once we switch
    to a more dynamic allocation (the GIC data structure isn't there yet).
    
    The fix is to test for the "ready" flag in the VGIC distributor before
    trying to inject the interrupt. Note that in order to avoid breaking
    userspace, we have to ignore what is essentially an error.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 5db2afbfd7ec7e9188481f8e26ada8b4bba8144c
Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date:   Tue Sep 9 11:27:09 2014 +0100

    ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
    
    commit a7d079cea2dffb112e26da2566dd84c0ef1fce97 upstream.
    
    The ISS encoding for an exception from a Data Abort has a WnR
    bit[6] that indicates whether the Data Abort was caused by a
    read or a write instruction. While there are several fields
    in the encoding that are only valid if the ISV bit[24] is set,
    WnR is not one of them, so we can read it unconditionally.
    
    Instead of fixing both implementations of kvm_is_write_fault()
    in place, reimplement it just once using kvm_vcpu_dabt_iswrite(),
    which already does the right thing with respect to the WnR bit.
    Also fix up the callers to pass 'vcpu'.
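
    Rebuilt on top of the existing accessors, the helper looks roughly like:

        static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
        {
                if (kvm_vcpu_trap_is_iabt(vcpu))
                        return false;           /* instruction aborts are never writes */

                return kvm_vcpu_dabt_iswrite(vcpu);
        }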
    
    Acked-by: Laszlo Ersek <lersek@redhat.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit b86185fb902b2f976f1910174a1b9b7b5a958ccb
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Tue Aug 26 14:33:02 2014 +0200

    arm/arm64: KVM: Complete WFI/WFE instructions
    
    commit 05e0127f9e362b36aa35f17b1a3d52bca9322a3a upstream.
    
    The architecture specifies that when the processor wakes up from a WFE
    or WFI instruction, the instruction is considered complete; however, we
    currently return to EL1 (or EL0) at the WFI/WFE instruction itself.
    
    While most guests may not be affected by this because their local
    exception handler performs an exception return that sets the event bit,
    or returns with an interrupt pending, some guests like UEFI will get
    wedged due to this little mishap.
    
    Simply skip the instruction when we have completed the emulation.
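
    In practice that is a one-liner at the end of the WFI/WFE handler (sketch):

        /* the wait instruction is complete once we have emulated it */
        kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));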
    
    Cc: <stable@vger.kernel.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 186a6d9a4eb0ceaac063ee6744840701d03033af
Author: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date:   Thu Jul 31 12:23:23 2014 +0530

    ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU
    
    commit f6edbbf36da3a27b298b66c7955fc84e1dcca305 upstream.
    
    X-Gene u-boot runs in EL2 mode with the MMU enabled, hence we might
    have stale EL2 TLB entries when we enable the EL2 MMU on each host CPU.
    
    This can happen on any ARM/ARM64 board running bootloader in
    Hyp-mode (or EL2-mode) with MMU enabled.
    
    This patch ensures that we flush all Hyp-mode (or EL2-mode) TLBs
    on each host CPU before enabling Hyp-mode (or EL2-mode) MMU.
    
    Cc: <stable@vger.kernel.org>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
    Signed-off-by: Anup Patel <anup.patel@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit e0d4621edf9ad337efbecac8c8492fad8397096d
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Aug 26 15:13:24 2014 +0100

    KVM: vgic: return int instead of bool when checking I/O ranges
    
    commit 1fa451bcc67fa921a04c5fac8dbcde7844d54512 upstream.
    
    vgic_ioaddr_overlap claims to return a bool, but in reality it returns
    an int. Shut sparse up by fixing the type signature.
    
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 3594f00b2f7cd667f16a062ebb8fecedf8451174
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Aug 26 15:13:22 2014 +0100

    KVM: ARM/arm64: avoid returning negative error code as bool
    
    commit 18d457661fb9fa69352822ab98d39331c3d0e571 upstream.
    
    is_valid_cache returns true if the specified cache is valid.
    Unfortunately, if the parameter passed is out of range, we return
    -ENOENT, which ends up as true, leading to potential hilarity.
    
    This patch returns false on the failure path instead.
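
    The failure path becomes a plain boolean false (sketch):

        if (val >= CSSELR_MAX)
                return false;   /* was: return -ENOENT, which a bool caller sees as true */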
    
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 60731b1c244efd3192f1fb36c843a0a3add66599
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Aug 26 15:13:21 2014 +0100

    KVM: ARM/arm64: fix broken __percpu annotation
    
    commit 4000be423cb01a8d09de878bb8184511c49d4238 upstream.
    
    Running sparse results in a bunch of noisy address space mismatches
    thanks to the broken __percpu annotation on kvm_get_running_vcpus.
    
    This function returns a pcpu pointer to a pointer, not a pointer to a
    pcpu pointer. This patch fixes the annotation, which kills the warnings
    from sparse.
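
    The prototype then carries the per-cpu attribute on the outer level of
    indirection, roughly:

        /* a per-cpu pointer to a vcpu pointer, as described above */
        struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);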
    
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 86644d65edc817d4bc056209e376b161e7ed8115
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Aug 26 15:13:20 2014 +0100

    KVM: ARM/arm64: fix non-const declaration of function returning const
    
    commit 6951e48bff0b55d2a8e825a953fc1f8e3a34bf1c upstream.
    
    Sparse kicks up about a type mismatch for kvm_target_cpu:
    
    arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers
    
    so fix this by adding the missing const attribute to the function
    declaration.
    
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit fe815dff94f09b479b029a8541a89e16c4b417e9
Author: Victor Kamensky <victor.kamensky@linaro.org>
Date:   Thu Jun 12 09:30:09 2014 -0700

    ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word
    
    commit ba083d20d8cfa9e999043cd89c4ebc964ccf8927 upstream.
    
    The esr_el2 field of struct kvm_vcpu_fault_info has u32 type.
    It should be stored as a word. The current code works in the LE case
    because the existing code puts the least significant word of x1 into
    esr_el2, and it puts the most significant word of x1 into the next
    field, which accidentally is OK because it is updated again
    by the next instruction. But the existing code breaks in the BE case.
    
    Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 6ea8cfca070cd1880ddc53aa839c37c2b0a141f3
Author: Li Liu <john.liuli@huawei.com>
Date:   Tue Jul 1 18:01:50 2014 +0800

    ARM: virt: fix wrong HSCTLR.EE bit setting
    
    commit af92394efc8be73edd2301fc15f9b57fd430cd18 upstream.
    
    HSCTLR.EE is defined as bit[25], per the ARM manual
    DDI0606C.b (p1590).
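
    In C terms the bit is simply (sketch):

        #define HSCTLR_EE       (1U << 25)      /* bit[25] of HSCTLR */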
    
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Li Liu <john.liuli@huawei.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 6a6ab17612d5dd201198e88d6f7fcb476a7ae127
Author: Alex Bennée <alex.bennee@linaro.org>
Date:   Tue Jul 1 16:53:13 2014 +0100

    arm64: KVM: export demux regids as KVM_REG_ARM64
    
    commit efd48ceacea78e4d4656aa0a6bf4c5b92ed22130 upstream.
    
    I suspect this is a -ECUTPASTE fault from the initial implementation. If
    we don't declare the register ID to be KVM_REG_ARM64 the KVM_GET_ONE_REG
    implementation kvm_arm_get_reg() returns -EINVAL and hilarity ensues.
    
    The kvm/api.txt document describes all arm64 registers as starting with
    0x60xx... (i.e KVM_REG_ARM64).
    
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 714b16417725d1adc0b8d2010c5cbb4c8df5e371
Author: Kim Phillips <kim.phillips@linaro.org>
Date:   Thu Jun 26 01:45:51 2014 +0100

    ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
    
    commit b88657674d39fc2127d62d0de9ca142e166443c8 upstream.
    
    A userspace process can map device MMIO memory via VFIO or /dev/mem,
    e.g., for platform device passthrough support in QEMU.
    
    During early development, we found the PAGE_S2 memory type being used
    for MMIO mappings.  This patch corrects that by using the more strongly
    ordered memory type for device MMIO mappings: PAGE_S2_DEVICE.
    
    Signed-off-by: Kim Phillips <kim.phillips@linaro.org>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit ae4760d4e450dff0a8880d91d7b546a818c619a7
Author: Eric Auger <eric.auger@linaro.org>
Date:   Fri Jun 6 11:10:23 2014 +0200

    ARM: KVM: Unmap IPA on memslot delete/move
    
    commit df6ce24f2ee485c4f9a5cb610063a5eb60da8267 upstream.
    
    Currently when a KVM region is deleted or moved after
    KVM_SET_USER_MEMORY_REGION ioctl, the corresponding
    intermediate physical memory is not unmapped.
    
    This patch corrects this and unmaps the region's IPA range
    in kvm_arch_commit_memory_region using unmap_stage2_range.
    
    Signed-off-by: Eric Auger <eric.auger@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 9586ed38bdd7f4ab157ffd5abc2d396b966eef78
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Fri May 9 23:31:31 2014 +0200

    arm/arm64: KVM: Fix and refactor unmap_range
    
    commit 4f853a714bf16338ff5261128e6c7ae2569e9505 upstream.
    
    unmap_range() was utterly broken, to quote Marc, and broke in all sorts
    of situations.  It was also quite complicated to follow and didn't
    follow the usual scheme of having a separate iterating function for each
    level of page tables.
    
    Address this by refactoring the code and introduce a pgd_clear()
    function.
    
    Reviewed-by: Jungseok Lee <jays.lee@samsung.com>
    Reviewed-by: Mario Smarduch <m.smarduch@samsung.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 4244260371d136600ba026fcba0da82d9386d352
Author: Will Deacon <will.deacon@arm.com>
Date:   Fri Jul 25 16:29:12 2014 +0100

    kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform
    
    commit 63afbe7a0ac184ef8485dac4914e87b211b5bfaa upstream.
    
    If the physical address of GICV isn't page-aligned, then we end up
    creating a stage-2 mapping of the page containing it, which causes us to
    map neighbouring memory locations directly into the guest.
    
    As an example, consider a platform with GICV at physical 0x2c02f000
    running a 64k-page host kernel. If qemu maps this into the guest at
    0x80010000, then guest physical addresses 0x80010000 - 0x8001efff will
    map host physical region 0x2c020000 - 0x2c02efff. Accesses to these
    physical regions may cause UNPREDICTABLE behaviour, for example, on the
    Juno platform this will cause an SError exception to EL3, which brings
    down the entire physical CPU resulting in RCU stalls / HYP panics / host
    crashing / wasted weeks of debugging.
    
    SBSA recommends that systems alias the 4k GICV across the bounding 64k
    region, in which case GICV physical could be described as 0x2c020000 in
    the above scenario.
    
    This patch fixes the problem by failing the vgic probe if the physical
    base address or the size of GICV aren't page-aligned. Note that this
    generated a warning in dmesg about freeing enabled IRQs, so I had to
    move the IRQ enabling later in the probe.
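
    The probe-time check is essentially (a sketch, assuming the GICV resource
    is held in a vcpu_res variable):

        if (!PAGE_ALIGNED(vcpu_res.start) ||
            !PAGE_ALIGNED(resource_size(&vcpu_res))) {
                ret = -ENXIO;   /* refuse to probe rather than map neighbouring memory */
                goto out;
        }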
    
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Gleb Natapov <gleb@kernel.org>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Joel Schopp <joel.schopp@amd.com>
    Cc: Don Dutile <ddutile@redhat.com>
    Acked-by: Peter Maydell <peter.maydell@linaro.org>
    Acked-by: Joel Schopp <joel.schopp@amd.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 1ebc41a91ab4179ea706464f5a50ebc018ae2f39
Author: Will Deacon <will.deacon@arm.com>
Date:   Fri May 2 16:24:14 2014 +0100

    arm64: kvm: use inner-shareable barriers for inner-shareable maintenance
    
    commit ee9e101c11478680d579bd20bb38a4d3e2514fe3 upstream.
    
    In order to ensure completion of inner-shareable maintenance instructions
    (cache and TLB) on AArch64, we can use the -ish suffix to the dsb
    instruction.
    
    This patch relaxes our dsb sy instructions to dsb ish where possible.
    
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 68779c62b1d0d0af82077e6ff03140e3fd24f940
Author: Haibin Wang <wanghaibin.wang@huawei.com>
Date:   Tue Apr 29 14:49:17 2014 +0800

    KVM: ARM: vgic: Fix the overlap check action about setting the GICD & GICC base address.
    
    commit 30c2117085bc4e05d091cee6eba79f069b41a9cd upstream.
    
    Currently the check below in vgic_ioaddr_overlap will always succeed,
    because the vgic dist base and vgic cpu base are still kept UNDEF
    after initialization, so the following code returns early every time.
    
    	if (IS_VGIC_ADDR_UNDEF(dist) || IS_VGIC_ADDR_UNDEF(cpu))
                    return 0;
    
    So, before invoking vgic_ioaddr_overlap, we need to set the
    corresponding base address first.
    
    Signed-off-by: Haibin Wang <wanghaibin.wang@huawei.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit bc1a23b6ed483568e8443647e9afdcadee2dca5e
Author: Andre Przywara <andre.przywara@arm.com>
Date:   Fri Apr 11 00:07:18 2014 +0200

    KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses
    
    commit f2ae85b2ab3776b9e4e42e5b6fa090f40d396794 upstream.
    
    Since KVM internally represents the ICFGR registers by stuffing two
    of them into one word, the offset for accessing the internal
    representation and the one for the MMIO based access are different.
    So keep the original offset around, but adjust the internal array
    offset by one bit.
    
    Reported-by: Haibin Wang <wanghaibin.wang@huawei.com>
    Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit bd946bbe7f4d0fa0a18f6dabb0711d0faa707f4d
Author: Will Deacon <will.deacon@arm.com>
Date:   Fri Apr 25 11:46:04 2014 +0100

    ARM: KVM: disable KVM in Kconfig on big-endian systems
    
    commit 4e4468fac4381b92eb333d94256e7fb8350f3de3 upstream.
    
    KVM currently crashes and burns on big-endian hosts, so don't allow it
    to be selected until we've got that fixed.
    
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 7693fb0246759b4cbb7e7d3ffad4b41667fae9bd
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Mar 6 03:30:46 2014 +0000

    ARM: KVM: fix non-VGIC compilation
    
    commit 6cbde8253a8143ada18ec0d1711230747a7c1934 upstream.
    
    Add a stub for kvm_vgic_addr when compiling without
    CONFIG_KVM_ARM_VGIC. The usefulness of this configuration is extremely
    doubtful, but let's fix it anyway (until we decide that we'll always
    support a VGIC).
    
    Reported-by: Michele Paolino <m.paolino@virtualopensystems.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 4247324050f4465cfb0a8feb7b5bee262a083245
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 14 18:00:55 2014 +0000

    ARM: KVM: trap VM system registers until MMU and caches are ON
    
    commit 8034699a42d68043b495c7e0cfafccd920707ec8 upstream.
    
    In order to be able to detect the point where the guest enables
    its MMU and caches, trap all the VM related system registers.
    
    Once we see the guest enabling both the MMU and the caches, we
    can go back to a saner mode of operation, which is to leave these
    registers in complete control of the guest.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 676d3a639d2f4a8ae157b69a153ed9fe48ac1aa9
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Wed Jan 22 10:20:09 2014 +0000

    ARM: KVM: add world-switch for AMAIR{0,1}
    
    commit af20814ee927ed888288d98917a766b4179c4fe0 upstream.
    
    HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1.
    In order to minimise the amount of surprise a guest could generate by
    trying to access these registers with caches off, add them to the
    list of registers we switch/handle.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 0879aae91506bf55f8f5f4225e7fba260517dfad
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Wed Jan 22 09:43:38 2014 +0000

    ARM: KVM: introduce per-vcpu HYP Configuration Register
    
    commit ac30a11e8e92a03dbe236b285c5cbae0bf563141 upstream.
    
    So far, KVM/ARM used a fixed HCR configuration per guest, except for
    the VI/VF/VA bits to control the interrupt in absence of VGIC.
    
    With the upcoming need to dynamically reconfigure trapping, it becomes
    necessary to allow the HCR to be changed on a per-vcpu basis.
    
    The fix here is to mimic what KVM/arm64 already does: a per vcpu HCR
    field, initialized at setup time.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit ec5d9b94fe9eea94587c644ab7d4ca3409b74c7c
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 21 18:56:26 2014 +0000

    ARM: KVM: fix ordering of 64bit coprocessor accesses
    
    commit 547f781378a22b65c2ab468f235c23001b5924da upstream.
    
    Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
    added an ordering dependency for the 64bit registers.
    
    The order described is: CRn, CRm, Op1, Op2, 64bit-first.
    
    Unfortunately, the implementation is: CRn, 64bit-first, CRm...
    
    Move the 64bit test to be last in order to match the documentation.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 3234c495d209932e9f5a1e44b7c8d45a5e33d614
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 21 18:56:26 2014 +0000

    ARM: KVM: fix handling of trapped 64bit coprocessor accesses
    
    commit 46c214dd595381c880794413facadfa07fba5c95 upstream.
    
    Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
    changed the way we match the 64bit coprocessor access from
    user space, but didn't update the trap handler for the same
    set of registers.
    
    The effect is that a trapped 64bit access is never matched, leading
    to a fault being injected into the guest. This went unnoticed as we
    didn't really trap any 64bit register so far.
    
    Placing the CRm field of the access into the CRn field of the matching
    structure fixes the problem. Also update the debug feature to emit the
    expected string in case of failing match.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit fd299f72a30cac7a6beae3b8f66906ebee10bbbf
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 14 19:13:10 2014 +0000

    ARM: KVM: force cache clean on page fault when caches are off
    
    commit 159793001d7d85af17855630c94f0a176848e16b upstream.
    
    In order for a guest with caches disabled to observe data written
    to a given page, we need to make sure that page is
    committed to memory, and not just hanging in the cache (as guest
    accesses completely bypass the cache until the guest decides to
    enable it).
    
    For this purpose, hook into the coherent_cache_guest_page
    function and flush the region if the guest SCTLR
    register doesn't show the MMU and caches as being enabled.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 053689a94fa5696e4beb2306872cec533abee998
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Wed Jan 15 12:50:23 2014 +0000

    arm64: KVM: flush VM pages before letting the guest enable caches
    
    commit 9d218a1fcf4c6b759d442ef702842fae92e1ea61 upstream.
    
    When the guest runs with caches disabled (like in an early boot
    sequence, for example), all the writes are directly going to RAM,
    bypassing the caches altogether.
    
    Once the MMU and caches are enabled, whatever sits in the cache
    becomes suddenly visible, which isn't what the guest expects.
    
    A way to avoid this potential disaster is to invalidate the cache
    when the MMU is being turned on. For this, we hook into the SCTLR_EL1
    trapping code, and scan the stage-2 page tables, invalidating the
    pages/sections that have already been mapped in.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 4aa1444f2ecbe70415d23c6282389b62d4b6f59d
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Feb 18 14:29:03 2014 +0000

    ARM: KVM: introduce kvm_p*d_addr_end
    
    commit a3c8bd31af260a17d626514f636849ee1cd1f63e upstream.
    
    The use of p*d_addr_end with stage-2 translation is slightly dodgy,
    as the IPA is 40bits, while all the p*d_addr_end helpers are
    taking an unsigned long (arm64 is fine with that as unsigned long
    is 64bit).
    
    The fix is to introduce 64bit clean versions of the same helpers,
    and use them in the stage-2 page table code.
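
    The 64bit-clean variant looks roughly like this (one such helper per level):

        #define kvm_pgd_addr_end(addr, end)                                     \
        ({      u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;            \
                (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
        })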
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 6dc878f7eb7b59517518fd6d2b6a80978eda5aa1
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 14 18:00:55 2014 +0000

    arm64: KVM: trap VM system registers until MMU and caches are ON
    
    commit 4d44923b17bff283c002ed961373848284aaff1b upstream.
    
    In order to be able to detect the point where the guest enables
    its MMU and caches, trap all the VM related system registers.
    
    Once we see the guest enabling both the MMU and the caches, we
    can go back to a saner mode of operation, which is to leave these
    registers in complete control of the guest.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit fc8ec36d4ae7b701fa473cb7686dc0afee71c050
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 21 10:55:17 2014 +0000

    arm64: KVM: allows discrimination of AArch32 sysreg access
    
    commit 2072d29c46b73e39b3c6c56c6027af77086f45fd upstream.
    
    The current handling of AArch32 trapping is slightly less than
    perfect, as it is not possible (from a handler point of view)
    to distinguish it from an AArch64 access, nor to tell a 32bit
    from a 64bit access either.
    
    Fix this by introducing two additional flags:
    - is_aarch32: true if the access was made in AArch32 mode
    - is_32bit: true if is_aarch32 == true and a MCR/MRC instruction
      was used to perform the access (as opposed to MCRR/MRRC).
    
    This allows a handler to cover all the possible conditions in which
    a system register gets trapped.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 8462018ebdfe8d208cb05a10a16ae74337d01ca9
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jan 14 19:13:10 2014 +0000

    arm64: KVM: force cache clean on page fault when caches are off
    
    commit 2d58b733c87689d3d5144e4ac94ea861cc729145 upstream.
    
    In order for the guest with caches off to observe data written
    to a given page, we need to make sure that page is
    committed to memory, and not just hanging in the cache (as
    guest accesses completely bypass the cache until the guest
    decides to enable it).
    
    For this purpose, hook into the coherent_icache_guest_page
    function and flush the region if the guest SCTLR_EL1
    register doesn't show the MMU and caches as being enabled.
    The function also gets renamed to coherent_cache_guest_page.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 449b62f6603d57cb80519176406a1adc2936a144
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Wed Feb 26 18:47:36 2014 +0000

    arm/arm64: KVM: detect CPU reset on CPU_PM_EXIT
    
    commit b20c9f29c5c25921c6ad18b50d4b61e6d181c3cc upstream.
    
    Commit 1fcf7ce0c602 (arm: kvm: implement CPU PM notifier) added
    support for CPU power-management, using a cpu_notifier to re-init
    KVM on a CPU that entered CPU idle.
    
    The code assumed that a CPU entering idle would actually be powered
    off, losing its state entirely, and would then need to be
    reinitialized. It turns out that this is not always the case, and
    some HW performs CPU PM without actually killing the core. In this
    case, we try to reinitialize KVM while it is still live. It ends up
    badly, as reported by Andre Przywara (using a Calxeda Midway):
    
    [    3.663897] Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x685760
    [    3.663897] unexpected data abort in Hyp mode at: 0xc067d150
    [    3.663897] unexpected HVC/SVC trap in Hyp mode at: 0xc0901dd0
    
    The trick here is to detect if we've been through a full re-init or
    not by looking at HVBAR (VBAR_EL2 on arm64). This involves
    implementing the backend for __hyp_get_vectors in the main KVM HYP
    code (rather small), and checking the return value against the
    default one when the CPU notifier is called on CPU_PM_EXIT.
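
    i.e. the notifier only reinstalls HYP when the vectors are back at the
    stub default (sketch of the CPU_PM_EXIT branch):

        case CPU_PM_EXIT:
                if (__hyp_get_vectors() == hyp_default_vectors) {
                        /* the core really lost its state: bring HYP back up */
                        cpu_init_hyp_mode(NULL);
                }
                return NOTIFY_OK;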
    
    Reported-by: Andre Przywara <osp@andrep.de>
    Tested-by: Andre Przywara <osp@andrep.de>
    Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Cc: Rob Herring <rob.herring@linaro.org>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit ed663694cef3b169679335ab49b84e695729f027
Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Date:   Mon Aug 5 15:04:46 2013 +0100

    arm: kvm: implement CPU PM notifier
    
    commit 1fcf7ce0c60213994269fb59569ec161eb6e08d6 upstream.
    
    Upon CPU shutdown and consequent warm-reboot, the hypervisor CPU state
    must be re-initialized. This patch implements a CPU PM notifier that
    upon warm-boot calls a KVM hook to reinitialize properly the hypervisor
    state so that the CPU can be safely resumed.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 6ae675035bf4fc67801f0fe48de40401ecd55262
Author: Sachin Kamat <sachin.kamat@linaro.org>
Date:   Tue Jan 7 13:45:15 2014 +0530

    KVM: ARM: Remove duplicate include
    
    commit 61466710de078c697106fa5b70ec7afc9feab520 upstream.
    
    trace.h was included twice. Remove duplicate inclusion.
    
    Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit ddde84be95be4781dbb72c26ff14ab2b01b7c34a
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Tue Nov 19 17:43:19 2013 -0800

    arm: KVM: Don't return PSCI_INVAL if waitqueue is inactive
    
    commit 478a8237f656d86d25b3e4e4bf3c48f590156294 upstream.
    
    The current KVM implementation of PSCI returns INVALID_PARAMETERS if the
    waitqueue for the corresponding CPU is not active.  This does not seem
    correct, since KVM should not care what the specific thread is doing,
    for example, user space may not have called KVM_RUN on this VCPU yet or
    the thread may be busy looping to user space because it received a
    signal; this is really up to the user space implementation.  Instead we
    should check specifically that the CPU is marked as being turned off,
    regardless of the VCPU thread state, and if it is, we shall
    simply clear the pause flag on the CPU and wake up the thread if it
    happens to be blocked for us.
    
    Further, the implementation seems to be racy when executing multiple
    VCPU threads.  There really isn't a reasonable user space programming
    scheme to ensure all secondary CPUs have reached kvm_vcpu_first_run_init
    before turning on the boot CPU.
    
    Therefore, set the pause flag on the vcpu at VCPU init time (which can
    reasonably be expected to be completed for all CPUs by user space before
    running any VCPUs) and clear both this flag and the feature (in case the
    feature can somehow get set again in the future) and ping the waitqueue
    on turning on a VCPU using PSCI.
    
    Reported-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 5c792059d2d57a19a19ab49507acdf0b8d0d5dd6
Author: Anup Patel <anup.patel@linaro.org>
Date:   Thu Dec 12 16:12:23 2013 +0000

    arm64: KVM: Force undefined exception for Guest SMC instructions
    
    commit e5cf9dcdbfd26cd4e1991db08755da900454efeb upstream.
    
    The SMC-based PSCI emulation for the guest is going to be very different
    from the in-kernel HVC-based PSCI emulation, hence for now just inject an
    undefined exception when the guest executes an SMC instruction.
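
    The handler therefore becomes trivial (sketch):

        static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                kvm_inject_undefined(vcpu);
                return 1;
        }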
    
    Signed-off-by: Anup Patel <anup.patel@linaro.org>
    Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit a9ae6f03a85a4108027dc764eeb9625b59f58b0c
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Wed Dec 11 20:29:11 2013 -0800

    arm/arm64: kvm: Set vcpu->cpu to -1 on vcpu_put
    
    commit e9b152cb957cb194437f37e79f0f3c9d34fe53d6 upstream.
    
    The arch-generic KVM code expects the cpu field of a vcpu to be -1 if
    the vcpu is no longer assigned to a cpu.  This is used for the optimized
    make_all_cpus_request path and will be used by the vgic code to check
    that no vcpus are running.
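
    The relevant addition boils down to one assignment in the put path (sketch):

        /* in kvm_arch_vcpu_put(): mark the vcpu as not resident on any cpu */
        vcpu->cpu = -1;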
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 69a023a70ac77b581f11f12ea61d065fc3bf79fc
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Mon Sep 23 14:55:55 2013 -0700

    ARM: KVM: Allow creating the VGIC after VCPUs
    
    commit e1ba0207a1b3714bb3f000e506285ae5123cdfa7 upstream.
    
    Rework the VGIC initialization slightly to allow initialization of the
    vgic cpu-specific state even if the irqchip (the VGIC) hasn't been
    created by user space yet.  This is safe, because the vgic data
    structures are already allocated when the CPU is allocated if VGIC
    support is compiled into the kernel.  Further, the init process does not
    depend on any other information and the sacrifice is a slight
    performance degradation for creating VMs in the no-VGIC case.
    
    The reason is that the new device control API doesn't mandate creating
    the VGIC before creating the VCPU and it is unreasonable to require user
    space to create the VGIC before creating the VCPUs.
    
    At the same time move the irqchip_in_kernel check out of
    kvm_vcpu_first_run_init and into the init function to make the per-vcpu
    and global init functions symmetric and add comments on the exported
    functions making it a bit easier to understand the init flow by only
    looking at vgic.c.
    
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit f3ad05f6c31a18c83d46d234a32769f151c46593
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Sat Nov 16 10:51:25 2013 -0800

    arm/arm64: KVM: arch_timer: Initialize cntvoff at kvm_init
    
    commit a1a64387adeeba7a34ce06f2774e81f496ee803b upstream.
    
    Initialize the cntvoff at kvm_init_vm time, not when running the VCPUs
    for the first time, because that would overwrite any potentially restored
    values from user space.
    
    Cc: Andre Przywara <andre.przywara@linaro.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 0057950da5b287bbc3885872c98c57ee33bb26c5
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Fri Aug 2 11:41:13 2013 +0100

    arm64: KVM: Yield CPU when vcpu executes a WFE
    
    commit d241aac798eb042e605f78c31a4122e583b2cd13 upstream.
    
    On an (even slightly) oversubscribed system, spinlocks are quickly
    becoming a bottleneck, as some vcpus are spinning, waiting for a
    lock to be released, while the vcpu holding the lock may not be
    running at all.
    
    The solution is to trap blocking WFEs and tell KVM that we're
    now spinning. This ensures that other vcpus will get a scheduling
    boost, allowing the lock to be released more quickly. Also, using
    CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves the performance
    when the VM is severely overcommitted.
    
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
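
    The trap handler boils down to the following sketch (the WFE decode bit
    name is approximate, and the 3.12-era kvm_vcpu_on_spin() takes only the
    vcpu):

        static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                if (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EC_WFI_ISS_WFE)
                        kvm_vcpu_on_spin(vcpu);   /* WFE: spinning, boost the others */
                else
                        kvm_vcpu_block(vcpu);     /* WFI: truly idle, sleep until an IRQ */

                return 1;
        }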

commit d5e7963fdf52ba609f960466fb864ce81966cd17
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Fri Oct 18 18:19:03 2013 +0100

    arm/arm64: KVM: PSCI: use MPIDR to identify a target CPU
    
    commit 79c648806f9034abf54332b78043bb242189d953 upstream.
    
    The KVM PSCI code blindly assumes that vcpu_id and MPIDR are
    the same thing. This is true when vcpus are organized as a flat
    topology, but is wrong when trying to emulate any other topology
    (such as A15 clusters).
    
    Change the KVM PSCI CPU_ON code to look at the MPIDR instead
    of the vcpu_id to pick a target CPU.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
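
    The lookup change amounts to roughly this sketch (the helper name and
    the MPIDR accessor are placeholders and may not match the 3.12 code
    exactly):

        static struct kvm_vcpu *psci_get_target_vcpu(struct kvm *kvm,
                                                      unsigned long target)
        {
                struct kvm_vcpu *tmp;
                int i;

                kvm_for_each_vcpu(i, tmp, kvm) {
                        /* Compare the vcpu's emulated MPIDR affinity bits with
                         * the CPU_ON target instead of assuming target == vcpu_id. */
                        if ((kvm_vcpu_get_mpidr(tmp) & MPIDR_HWID_BITMASK) ==
                            (target & MPIDR_HWID_BITMASK))
                                return tmp;
                }
                return NULL;
        }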

commit a8f46a9f4da79c29b7af2a1b4f77c536da0fed3b
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Mon Mar 23 15:07:23 2015 +0800

    ARM: KVM: fix L2CTLR to be per-cluster
    
    commit 9cbb6d969cb6561de45d917b8bb9281cb374bb35 upstream.
    
    The L2CTLR register contains the number of CPUs in this cluster.
    
    Make sure the register content is actually relevant to the vcpu
    that is being configured by computing the number of cores that are
    part of its cluster.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
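
    The per-cluster count can be illustrated with a stand-alone demo
    (illustrative only; the Cortex-A15 L2CTLR encodes "number of CPUs minus
    one" in bits [25:24], which is where this value would end up):

        #include <stdio.h>

        static unsigned int cores_in_cluster(unsigned int nr_vcpus, unsigned int vcpu_id)
        {
                unsigned int first = vcpu_id & ~3U;     /* first vcpu_id of this cluster */
                unsigned int n = nr_vcpus - first;      /* vcpus from there to the end */

                return n > 4 ? 4 : n;                   /* a cluster holds at most 4 cores */
        }

        int main(void)
        {
                /* 6 vcpus: vcpu 1 shares a full cluster, vcpu 5 only a 2-core one */
                printf("%u\n", cores_in_cluster(6, 1)); /* 4 */
                printf("%u\n", cores_in_cluster(6, 5)); /* 2 */
                return 0;
        }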

commit 130d0bdf34fb4e7b43683bb9ef27fcb812915e49
Author: Christoffer Dall <christoffer.dall@linaro.org>
Date:   Tue Oct 15 18:10:42 2013 -0700

    KVM: ARM: Update comments for kvm_handle_wfi
    
    commit 82ea046c95a3c3ddcfa058c8a270b9afb6e93700 upstream.
    
    Update comments to reflect what is really going on and add the TWE bit
    to the comments in kvm_arm.h.
    
    Also rename the function to kvm_handle_wfx, as is done on arm64, for
    consistency and uber-correctness.
    
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>

commit 3925ae28c4489b87dc4a515716172c1a3a1d2363
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Mon Mar 23 15:03:28 2015 +0800

    ARM: KVM: Fix MPIDR computing to support virtual clusters
    
    commit 2d1d841bd44e24b58a3d3cc4fa793670aaa38fbf upstream.
    
    In order to be able to support more than 4 A7 or A15 CPUs,
    we need to fix the MPIDR computation to reflect the fact that
    both A15 and A7 can only exist in clusters of at most 4 CPUs.
    
    Fix the MPIDR computation to allow virtual clusters to be exposed
    to the guest.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
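
    A stand-alone illustration of the clustered layout (the real code also
    carries the U bit over from the host MPIDR, which is omitted here):

        #include <stdio.h>

        /* Aff0 = vcpu_id % 4, Aff1 = vcpu_id / 4; bit 31 is the RES1 format bit. */
        static unsigned int virt_mpidr_clustered(unsigned int vcpu_id)
        {
                unsigned int aff0 = vcpu_id & 3;
                unsigned int aff1 = vcpu_id >> 2;

                return (1U << 31) | (aff1 << 8) | aff0;
        }

        int main(void)
        {
                /* vcpu 5 -> cluster 1, core 1 */
                printf("0x%08x\n", virt_mpidr_clustered(5));    /* 0x80000101 */
                return 0;
        }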

commit a05353acef600b2629d76a049515de66e73459c1
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Oct 8 18:38:13 2013 +0100

    ARM: KVM: Yield CPU when vcpu executes a WFE
    
    commit 1f5580986a3667e9d67b65d916bb4249fd86a400 upstream.
    
    On an (even slightly) oversubscribed system, spinlocks quickly become
    a bottleneck, as some vcpus are spinning, waiting for a lock to be
    released, while the vcpu holding the lock may not be running at all.
    
    This creates contention, and the observed slowdown is 40x for
    hackbench. No, this isn't a typo.
    
    The solution is to trap blocking WFEs and tell KVM that we're
    now spinning. This ensures that other vcpus will get a scheduling
    boost, allowing the lock to be released more quickly. Also, using
    CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves the performance
    when the VM is severely overcommitted.
    
    Quick test to estimate the performance: hackbench 1 process 1000
    
    2xA15 host (baseline):	1.843s
    
    2xA15 guest w/o patch:	2.083s
    4xA15 guest w/o patch:	80.212s
    8xA15 guest w/o patch:	Could not be bothered to find out
    
    2xA15 guest w/ patch:	2.102s
    4xA15 guest w/ patch:	3.205s
    8xA15 guest w/ patch:	6.887s
    
    So we go from a 40x degradation to 1.5x in the 2x overcommit case,
    which is vaguely more acceptable.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
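
    On the 32-bit side the same idea also needs the guest's HCR to trap WFE
    in addition to WFI; a hedged sketch (TWE is bit 14 and TWI bit 13 of the
    ARMv7 HCR, but treat the exact constants and the mask name below as
    approximate):

        #define HCR_TWI                 (1U << 13)  /* trap WFI to the hypervisor */
        #define HCR_TWE                 (1U << 14)  /* trap WFE to the hypervisor */

        /* The guest HCR value now includes TWE, so a spinning vcpu exits on
         * WFE and the handler can call kvm_vcpu_on_spin(). */
        #define HCR_GUEST_MASK_SKETCH   (HCR_TWI | HCR_TWE /* | the other guest bits */)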

commit 1c8eb115f4394d13cb907f6ffb976640becdb13e
Author: Jonathan Austin <jonathan.austin@arm.com>
Date:   Thu Sep 26 16:49:26 2013 +0100

    KVM: ARM: fix the size of TTBCR_{T0SZ,T1SZ} masks
    
    commit 5e497046f005528464f9600a4ee04f49df713596 upstream.
    
    The T{0,1}SZ fields of TTBCR are 3 bits wide when using the long descriptor
    format. Likewise, the T0SZ field of the HTCR is 3-bits. KVM currently
    defines TTBCR_T{0,1}SZ as 3, not 7.
    
    The T0SZ mask is used to calculate the value for the HTCR, both to pick out
    TTBCR.T0SZ and to mask off the equivalent field in the HTCR during
    read-modify-write. The incorrect mask size causes the (UNKNOWN) reset value
    of HTCR.T0SZ to leak into the calculated HTCR value. Linux will hang when
    initializing KVM if HTCR's reset value has bit 2 set (sometimes the case on
    A7/TC2).
    
    Fixing T0SZ allows A7 cores to boot; T1SZ is also fixed for completeness.
    
    Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
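
    The arithmetic behind the bug, as a small stand-alone demo (macro values
    here are illustrative, not the literal kernel defines): masking a 3-bit
    field with 3 (0b011) instead of 7 (0b111) leaves bit 2 untouched during
    the read-modify-write, so the UNKNOWN reset value survives.

        #include <stdio.h>

        #define T0SZ_MASK_WRONG 0x3     /* only bits [1:0] */
        #define T0SZ_MASK_RIGHT 0x7     /* full 3-bit field, bits [2:0] */

        int main(void)
        {
                unsigned int htcr_reset = 0x4;  /* UNKNOWN reset value with bit 2 set */
                unsigned int ttbcr_t0sz = 0x0;  /* value we want to program */

                unsigned int bad  = (htcr_reset & ~T0SZ_MASK_WRONG) | ttbcr_t0sz;
                unsigned int good = (htcr_reset & ~T0SZ_MASK_RIGHT) | ttbcr_t0sz;

                printf("bad  HTCR.T0SZ = %u\n", bad);   /* 4: stale bit 2 leaks through */
                printf("good HTCR.T0SZ = %u\n", good);  /* 0: field fully replaced */
                return 0;
        }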

commit 45aaf85687dd6ac119c55c5ec0dbe0bef0e62235
Author: Jonathan Austin <jonathan.austin@arm.com>
Date:   Thu Sep 26 16:49:27 2013 +0100

    KVM: ARM: Fix calculation of virtual CPU ID
    
    commit 1158fca401e09665c440a9fe4fd4f131ee85c13b upstream.
    
    KVM does not have a notion of multiple clusters for CPUs, just a linear
    array of CPUs. When using a system with cores in more than one cluster, the
    current method for calculating the virtual MPIDR will leak the (physical)
    cluster information into the virtual MPIDR. One effect of this is that
    Linux under KVM fails to boot multiple CPUs that aren't in the 0th cluster.
    
    This patch does away with exposing the real MPIDR fields in favour of simply
    using the virtual CPU number (but preserving the U bit, as before).
    
    Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
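
    A stand-alone illustration of the fix (bit positions follow the ARMv7
    MPIDR layout; the kernel reads the hardware MPIDR itself rather than
    taking it as a parameter):

        #include <stdio.h>

        #define MPIDR_U_BIT     (1U << 30)      /* uniprocessor bit, preserved from hardware */
        #define MPIDR_FMT_BIT   (1U << 31)      /* RES1 "new format" bit */

        /* Virtual MPIDR = linear vcpu_id plus the preserved U/format bits;
         * the physical Aff1/Aff2 cluster fields are dropped entirely. */
        static unsigned int virt_mpidr(unsigned int hw_mpidr, unsigned int vcpu_id)
        {
                return (hw_mpidr & (MPIDR_FMT_BIT | MPIDR_U_BIT)) | vcpu_id;
        }

        int main(void)
        {
                /* physical CPU sits in cluster 1 (Aff1 = 1), but the guest sees vcpu 2 */
                printf("0x%08x\n", virt_mpidr(0x80000102, 2));  /* 0x80000002 */
                return 0;
        }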