commit 67e128d68505fa37da2b9ae6b532f11db1624a2f
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Thu Oct 22 14:39:30 2015 -0700

    Linux 3.14.55

commit f176253aef5765d03ebb4b55b90ca142d3d32497
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Thu Feb 6 14:26:10 2014 +0100

    arc,hexagon: Delete asm/barrier.h
    
    commit 2ab08ee9f0a4eba27c7c4ce0b6d5118e8a18554b upstream.
    
    Both already use asm-generic/barrier.h as per their
    include/asm/Kbuild. Remove the stale files.
    
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/n/tip-c7vlkshl3tblim0o8z2p70kt@git.kernel.org
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Richard Kuo <rkuo@codeaurora.org>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: linux-hexagon@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f612b706e741b6c4d5ac65fd6e428188bfc344b2
Author: Christoph Hellwig <hch@lst.de>
Date:   Sat Oct 3 19:16:07 2015 +0200

    3w-9xxx: don't unmap bounce buffered commands
    
    commit 15e3d5a285ab9283136dba34bbf72886d9146706 upstream.
    
    The 3w controllers don't DMA-map small single-SGL-entry commands but
    instead bounce buffer them.  Add a helper to identify these commands and
    don't call scsi_dma_unmap for them.
    
    Based on an earlier patch from James Bottomley.
    
    Fixes: 118c85 ("3w-9xxx: fix command completion race")
    Reported-by: Tóth Attila <atoth@atoth.sote.hu>
    Tested-by: Tóth Attila <atoth@atoth.sote.hu>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Adam Radford <aradford@gmail.com>
    Signed-off-by: James Bottomley <JBottomley@Odin.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6ff75737a48b8718b0703673837b9b3f04abc886
Author: Joonsoo Kim <js1304@gmail.com>
Date:   Thu Oct 1 15:36:54 2015 -0700

    mm/slab: fix unexpected index mapping result of kmalloc_size(INDEX_NODE+1)
    
    commit 03a2d2a3eafe4015412cf4e9675ca0e2d9204074 upstream.
    
    Commit description is copied from the original post of this bug:
    
      http://comments.gmane.org/gmane.linux.kernel.mm/135349
    
    Kernels after v3.9 use kmalloc_size(INDEX_NODE + 1) to get the next
    cache size larger than the one that size index INDEX_NODE maps to.  In
    kernels 3.9 and earlier we used malloc_sizes[INDEX_L3 + 1].cs_size.
    
    However, sometimes we can't get the right output we expected via
    kmalloc_size(INDEX_NODE + 1), causing a BUG().
    
    The mapping table in the latest kernel is like:
        index = {0,   1,   2,   3,   4,   5,   6,   n}
        size  = {0,  96, 192,   8,  16,  32,  64,   2^n}
    The mapping table before 3.10 is like this:
        index = {0,   1,   2,   3,   4,   5,   6,   n}
        size  = {32, 64,  96, 128, 192, 256, 512,   2^(n+3)}
    
    The problem on my mips64 machine is as follows:
    
    (1) When DEBUG_SLAB && DEBUG_PAGEALLOC && DEBUG_LOCK_ALLOC
        && DEBUG_SPINLOCK are configured, sizeof(struct kmem_cache_node)
        is "150", and the macro INDEX_NODE, defined as
        kmalloc_index(sizeof(struct kmem_cache_node)), turns out to be "2".
    
    (2) Then the result of kmalloc_size(INDEX_NODE + 1) is 8.
    
    (3) Then "if(size >= kmalloc_size(INDEX_NODE + 1)" will lead to "size
        = PAGE_SIZE".
    
    (4) Then "if ((size >= (PAGE_SIZE >> 3))" test will be satisfied and
        "flags |= CFLGS_OFF_SLAB" will be covered.
    
    (5) Then the "if (flags & CFLGS_OFF_SLAB)" test will be satisfied and the
        code will go to "cachep->slabp_cache = kmalloc_slab(slab_size, 0u)",
        whose result may be NULL during kernel bootup.
    
    (6) Finally,"BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));" causes the
        BUG info as the following shows (may be only mips64 has this problem):
    
    This patch fixes the kmalloc_size(INDEX_NODE + 1) problem and removes
    the BUG by adding a 'size >= 256' check to guarantee that all necessary
    small-sized slabs are initialized regardless of the sequence of slab
    sizes in the mapping table.
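
    To make the mapping anomaly concrete, here is a small user-space sketch
    that hard-codes the index -> size mapping quoted above (a simplified
    stand-in for the kernel's kmalloc_size(); INDEX_NODE = 2 is the value
    observed on the mips64 configuration described in step (1)):

      #include <stdio.h>

      /* Simplified model of the post-3.10 index -> size mapping: indices 1
       * and 2 are the odd 96/192 caches, everything else is 2^index. */
      static int kmalloc_size(int idx)
      {
              if (idx == 1)
                      return 96;
              if (idx == 2)
                      return 192;
              return idx ? 1 << idx : 0;
      }

      int main(void)
      {
              int INDEX_NODE = 2;

              /* prints 192 ... */
              printf("kmalloc_size(INDEX_NODE)     = %d\n",
                     kmalloc_size(INDEX_NODE));
              /* ... but the "next" index maps to 8, not the next larger size */
              printf("kmalloc_size(INDEX_NODE + 1) = %d\n",
                     kmalloc_size(INDEX_NODE + 1));
              return 0;
      }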
    
    Fixes: e33660165c90 ("slab: Use common kmalloc_index/kmalloc_size...")
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Reported-by: Liuhailong <liu.hailong6@zte.com.cn>
    Acked-by: Christoph Lameter <cl@linux.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Cc: David Rientjes <rientjes@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b58897150a626dd3e710f11b629c267dd4dbd5f1
Author: covici@ccs.covici.com <covici@ccs.covici.com>
Date:   Wed May 20 05:44:11 2015 -0400

    staging: speakup: fix speakup-r regression
    
    commit b1d562acc78f0af46de0dfe447410bc40bdb7ece upstream.
    
    Here is a patch to make speakup-r work again.
    
    It broke in 3.6 due to commit 4369c64c79a22b98d3b7eff9d089196cd878a10a
    ("Input: Send events one packet at a time").

    The problem was that the fakekey.c routine to fake a down arrow no
    longer functioned properly, and adding the input_sync call fixed it.
    
    Fixes: 4369c64c79a22b98d3b7eff9d089196cd878a10a
    Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Signed-off-by: John Covici <covici@ccs.covici.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3df65ac14e0764f6dec07cba371f51cff7bd765b
Author: Joe Thornber <ejt@redhat.com>
Date:   Fri Oct 9 14:03:38 2015 +0100

    dm cache: fix NULL pointer when switching from cleaner policy
    
    commit 2bffa1503c5c06192eb1459180fac4416575a966 upstream.
    
    The cleaner policy doesn't make use of the per cache block hint space in
    the metadata (unlike the other policies).  When switching from the
    cleaner policy to mq or smq a NULL pointer crash (in dm_tm_new_block)
    was observed.  The crash was caused by bugs in dm-cache-metadata.c
    when trying to skip creation of the hint btree.
    
    The minimal fix is to change the hint size for the cleaner policy to 4
    bytes (the only hint size supported).
    
    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a9dfb91592b3a2af6a26e9f32d453e179e9e21e0
Author: Ben Dooks <ben.dooks@codethink.co.uk>
Date:   Tue Sep 29 15:01:08 2015 +0100

    clk: ti: fix dual-registration of uart4_ick
    
    commit 19e79687de22f23bcfb5e79cce3daba20af228d1 upstream.
    
    On the OMAP AM3517 platform the uart4_ick gets registered
    twice, causing any power management of /dev/ttyO3 to fail
    when trying to wake the device up.
    
    This solves the following oops:
    
    [] Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa09e008
    [] PC is at serial_omap_pm+0x48/0x15c
    [] LR is at _raw_spin_unlock_irqrestore+0x30/0x5c
    
    Fixes: aafd900cab87 ("CLK: TI: add omap3 clock init file")
    Cc: mturquette@baylibre.com
    Cc: sboyd@codeaurora.org
    Cc: linux-clk@vger.kernel.org
    Cc: linux-omap@vger.kernel.org
    Cc: linux-kernel@lists.codethink.co.uk
    Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
    Signed-off-by: Tero Kristo <t-kristo@ti.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ae5137a8f24aa93ab8c56345edbfd3828db796f6
Author: Jan Kara <jack@suse.com>
Date:   Tue Jul 28 14:57:14 2015 -0400

    jbd2: avoid infinite loop when destroying aborted journal
    
    commit 841df7df196237ea63233f0f9eaa41db53afd70f upstream.
    
    Commit 6f6a6fda2945 "jbd2: fix ocfs2 corrupt when updating journal
    superblock fails" changed jbd2_cleanup_journal_tail() to return EIO
    when the journal is aborted.  That makes the logic in
    jbd2_log_do_checkpoint() bail out, which is fine, except that
    jbd2_journal_destroy() expects jbd2_log_do_checkpoint() to always make
    progress in cleaning the journal.  Without that, jbd2_journal_destroy()
    just spins in an infinite loop.
    
    Fix jbd2_journal_destroy() to clean up the journal checkpoint lists if
    jbd2_log_do_checkpoint() fails with an error.
    
    Reported-by: Eryu Guan <guaneryu@gmail.com>
    Tested-by: Eryu Guan <guaneryu@gmail.com>
    Fixes: 6f6a6fda294506dfe0e3e0a253bb2d2923f28f0a
    Signed-off-by: Jan Kara <jack@suse.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2bc9689906c7cae36e5fed5c6c796aefe850c382
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   Sat Sep 26 12:23:56 2015 +0100

    genirq: Fix race in register_irq_proc()
    
    commit 95c2b17534654829db428f11bcf4297c059a2a7e upstream.
    
    Per-IRQ directories in procfs are created only when a handler is first
    added to the irqdesc, not when the irqdesc is created.  In the case of
    a shared IRQ, multiple tasks can race to create a directory.  This
    race condition seems to have been present forever, but is easier to
    hit with async probing.
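
    One way to close such a race is to serialize the directory creation; a
    minimal sketch (trimmed and illustrative, not the literal patch; the
    buffer size and error handling are simplified):

      static DEFINE_MUTEX(register_lock);

      void register_irq_proc(unsigned int irq, struct irq_desc *desc)
      {
              char name[32];

              mutex_lock(&register_lock);
              if (desc->dir)                  /* another task won the race */
                      goto out_unlock;

              snprintf(name, sizeof(name), "%u", irq);
              desc->dir = proc_mkdir(name, root_irq_dir);  /* /proc/irq/<N> */

      out_unlock:
              mutex_unlock(&register_lock);
      }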
    
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    Link: http://lkml.kernel.org/r/1443266636.2004.2.camel@decadent.org.uk
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8ea4b34355189e1f1eacaf2d825f2dce776b3b9c
Author: Roland Dreier <roland@purestorage.com>
Date:   Mon Oct 5 10:29:28 2015 -0700

    fib_rules: Fix dump_rules() not to exit early
    
    Backports of 41fc014332d9 ("fib_rules: fix fib rule dumps across
    multiple skbs") introduced a regression in "ip rule show" - it ends up
    dumping the first rule over and over and never exiting, because 3.19
    and earlier are missing commit 053c095a82cf ("netlink: make
    nlmsg_end() and genlmsg_end() void"), so fib_nl_fill_rule() ends up
    returning skb->len (i.e. > 0) in the success case.
    
    Fix this by checking the return code for < 0 instead of != 0.
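
    The essence of the fix, as a sketch of the loop body in dump_rules()
    (err being the value returned by fib_nl_fill_rule() for each rule):

      /* Without 053c095a82cf, a successful fill returns skb->len (> 0),
       * so only treat negative values as errors; "if (err)" would abort
       * the dump after the first rule and restart it forever. */
      if (err < 0)
              break;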
    
    Signed-off-by: Roland Dreier <roland@purestorage.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b387ee409e947ed33c568e394c3340210f0550f5
Author: Andreas Schwab <schwab@linux-m68k.org>
Date:   Wed Sep 23 23:12:09 2015 +0200

    m68k: Define asmlinkage_protect
    
    commit 8474ba74193d302e8340dddd1e16c85cc4b98caf upstream.
    
    Make sure the compiler does not modify arguments of syscall functions.
    This can happen if the compiler generates a tailcall to another
    function.  For example, without asmlinkage_protect sys_openat is compiled
    into this function:
    
    sys_openat:
    	clr.l %d0
    	move.w 18(%sp),%d0
    	move.l %d0,16(%sp)
    	jbra do_sys_open
    
    Note how the fourth argument is modified in place, modifying the register
    %d4 that gets restored from this stack slot when the function returns to
    user-space.  The caller may expect the register to be unmodified across
    system calls.
    
    Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 608a786ff70fab16ec7090566576472447f0927b
Author: Mark Salyzyn <salyzyn@android.com>
Date:   Mon Sep 21 21:39:50 2015 +0100

    arm64: readahead: fault retry breaks mmap file read random detection
    
    commit 569ba74a7ba69f46ce2950bf085b37fea2408385 upstream.
    
    This is the arm64 portion of commit 45cac65b0fcd ("readahead: fault
    retry breaks mmap file read random detection"), which was absent from
    the initial port and has since gone unnoticed. The original commit says:
    
    > .fault now can retry.  The retry can break state machine of .fault.  In
    > filemap_fault, if page is miss, ra->mmap_miss is increased.  In the second
    > try, since the page is in page cache now, ra->mmap_miss is decreased.  And
    > these are done in one fault, so we can't detect random mmap file access.
    >
    > Add a new flag to indicate .fault is tried once.  In the second try, skip
    > ra->mmap_miss decreasing.  The filemap_fault state machine is ok with it.
    
    With this change, Mark reports that:
    
    > Random read improves by 250%, sequential read improves by 40%, and
    > random write by 400% to an eMMC device with dm crypto wrapped around it.
    
    Cc: Shaohua Li <shli@kernel.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Mark Salyzyn <salyzyn@android.com>
    Signed-off-by: Riley Andrews <riandrews@android.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 27fa5a5ee0e2b2556ce59c0ad04f606635fa6392
Author: Eric W. Biederman <ebiederm@xmission.com>
Date:   Sat Aug 15 20:27:13 2015 -0500

    vfs: Test for and handle paths that are unreachable from their mnt_root
    
    commit 397d425dc26da728396e66d392d5dcb8dac30c37 upstream.
    
    In rare cases a directory can be renamed out from under a bind mount.
    In those cases without special handling it becomes possible to walk up
    the directory tree to the root dentry of the filesystem and down
    from the root dentry to every other file or directory on the filesystem.
    
    Like division by zero, ".." from an unconnected path can not be given
    a useful semantic, as there is no predicting at which path component
    the code will realize it is unconnected.  We certainly can not keep
    the current behavior, as the current behavior is a security hole.

    Therefore, when encountering ".." while following an unconnected path,
    return -ENOENT.
    
    - Add a function path_connected to verify path->dentry is reachable
      from path->mnt.mnt_root.  AKA to validate that rename did not do
      something nasty to the bind mount.  (A sketch of such a helper
      follows below.)

      To avoid races path_connected must be called after following a path
      component to its next path component.
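
    A minimal sketch of such a helper (assumed shape, shown for
    illustration rather than as the exact patch):

      static bool path_connected(const struct path *path)
      {
              struct vfsmount *mnt = path->mnt;

              /* Only a bind mount can have a root below the sb root. */
              if (mnt->mnt_root == mnt->mnt_sb->s_root)
                      return true;

              return is_subdir(path->dentry, mnt->mnt_root);
      }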
    
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cb1320693b9d8d32651a2bb7cd15498408732b8f
Author: Eric W. Biederman <ebiederm@xmission.com>
Date:   Sat Aug 15 13:36:12 2015 -0500

    dcache: Handle escaped paths in prepend_path
    
    commit cde93be45a8a90d8c264c776fab63487b5038a65 upstream.
    
    A rename can result in a dentry that by walking up d_parent
    will never reach its mnt_root.  For lack of a better term
    I call this an escaped path.
    
    prepend_path is called by four different functions __d_path,
    d_absolute_path, d_path, and getcwd.
    
    __d_path only wants to see paths that are connected to the root it
    passes in.  So __d_path needs prepend_path to return an error.
    
    d_absolute_path similarly wants to see paths that are connected to
    some root.  Escaped paths are not connected to any mnt_root so
    d_absolute_path needs prepend_path to return an error greater
    than 1.  So escaped paths will be treated like paths on lazily
    unmounted mounts.
    
    getcwd needs to prepend "(unreachable)" so getcwd also needs
    prepend_path to return an error.
    
    d_path is the interesting hold out.  d_path just wants to print
    something, and does not care about the weird cases.  Which raises
    the question what should be printed?
    
    Given that <escaped_path>/<anything> should result in -ENOENT I
    believe it is desirable for escaped paths to be printed as empty
    paths, as there are not really any meaningful path components when
    considered from the perspective of a mount tree.
    
    So tweak prepend_path to return an empty path with a new error
    code of 3 when it encounters an escaped path.
    
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0b39400f99f47ac8671c0956c480774b7f73804b
Author: shengyong <shengyong1@huawei.com>
Date:   Mon Sep 28 17:57:19 2015 +0000

    UBI: return ENOSPC if not enough space is available
    
    commit 7c7feb2ebfc9c0552c51f0c050db1d1a004faac5 upstream.
    
    UBI: attaching mtd1 to ubi0
    UBI: scanning is finished
    UBI error: init_volumes: not enough PEBs, required 706, available 686
    UBI error: ubi_wl_init: no enough physical eraseblocks (-20, need 1)
    UBI error: ubi_attach_mtd_dev: failed to attach mtd1, error -12 <= NOT ENOMEM
    UBI error: ubi_init: cannot attach mtd1
    
    If available PEBs are not enough when initializing volumes, return -ENOSPC
    directly. If available PEBs are not enough when initializing WL, return
    -ENOSPC instead of -ENOMEM.
    
    Signed-off-by: Sheng Yong <shengyong1@huawei.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Reviewed-by: David Gstir <david@sigma-star.at>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2210f9853ede87346f9733fae5d781c029193536
Author: Richard Weinberger <richard@nod.at>
Date:   Tue Sep 22 23:58:07 2015 +0200

    UBI: Validate data_size
    
    commit 281fda27673f833a01d516658a64d22a32c8e072 upstream.
    
    Make sure that data_size is less than LEB size.
    Otherwise a handcrafted UBI image is able to trigger
    an out of bounds memory access in ubi_compare_lebs().
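
    A sketch of the kind of bound check added to the VID header validation
    (error reporting trimmed):

      /* Reject handcrafted images whose data_size exceeds the LEB size,
       * which would otherwise overrun buffers sized for one LEB. */
      if (be32_to_cpu(vid_hdr->data_size) > ubi->leb_size) {
              ubi_err("bad data_size");
              goto bad;
      }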
    
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Reviewed-by: David Gstir <david@sigma-star.at>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 25b030bbdc8d6a36b9c2fbbbb8b806f846fc4a89
Author: Paul Mackerras <paulus@ozlabs.org>
Date:   Thu Sep 10 14:36:21 2015 +1000

    powerpc/MSI: Fix race condition in tearing down MSI interrupts
    
    commit e297c939b745e420ef0b9dc989cb87bda617b399 upstream.
    
    This fixes a race which can result in the same virtual IRQ number
    being assigned to two different MSI interrupts.  The most visible
    consequence of that is usually a warning and stack trace from the
    sysfs code about an attempt to create a duplicate entry in sysfs.
    
    The race happens when one CPU (say CPU 0) is disposing of an MSI
    while another CPU (say CPU 1) is setting up an MSI.  CPU 0 calls
    (for example) pnv_teardown_msi_irqs(), which calls
    msi_bitmap_free_hwirqs() to indicate that the MSI (i.e. its
    hardware IRQ number) is no longer in use.  Then, before CPU 0 gets
    to calling irq_dispose_mapping() to free up the virtual IRQ number,
    CPU 1 comes in and calls msi_bitmap_alloc_hwirqs() to allocate an
    MSI, and gets the same hardware IRQ number that CPU 0 just freed.
    CPU 1 then calls irq_create_mapping() to get a virtual IRQ number,
    which sees that there is currently a mapping for that hardware IRQ
    number and returns the corresponding virtual IRQ number (which is
    the same virtual IRQ number that CPU 0 was using).  CPU 0 then
    calls irq_dispose_mapping() and frees that virtual IRQ number.
    Now, if another CPU comes along and calls irq_create_mapping(), it
    is likely to get the virtual IRQ number that was just freed,
    resulting in the same virtual IRQ number apparently being used for
    two different hardware interrupts.
    
    To fix this race, we just move the call to msi_bitmap_free_hwirqs()
    to after the call to irq_dispose_mapping().  Since virq_to_hw()
    doesn't work for the virtual IRQ number after irq_dispose_mapping()
    has been called, we need to call it before irq_dispose_mapping() and
    remember the result for the msi_bitmap_free_hwirqs() call.
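
    Sketched teardown ordering (illustrative; the surrounding loop and the
    exact bitmap offset handling are omitted):

      hwirq = virq_to_hw(entry->irq);      /* must be read before disposal */
      irq_dispose_mapping(entry->irq);     /* free the virtual IRQ first   */
      msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq, 1);  /* hw IRQ bit last */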
    
    The pattern of calling msi_bitmap_free_hwirqs() before
    irq_dispose_mapping() appears in 5 places under arch/powerpc, and
    appears to have originated in commit 05af7bd2d75e ("[POWERPC] MPIC
    U3/U4 MSI backend") from 2007.
    
    Fixes: 05af7bd2d75e ("[POWERPC] MPIC U3/U4 MSI backend")
    Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f2051e82af274390819751c94d0be7301323fbbd
Author: NeilBrown <neilb@suse.com>
Date:   Wed Jul 22 10:20:07 2015 +1000

    md: flush ->event_work before stopping array.
    
    commit ee5d004fd0591536a061451eba2b187092e9127c upstream.
    
    The 'event_work' worker used by dm-raid may still be running
    when the array is stopped.  This can result in an oops.
    
    So flush the workqueue on which it is run after detaching
    and before destroying the device.
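
    The shape of the fix is a single flush before teardown; a sketch
    (md_misc_wq being the workqueue ->event_work is queued on):

      /* Ensure a still-running ->event_work item has completed before the
       * personality data it references is torn down. */
      flush_workqueue(md_misc_wq);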
    
    Reported-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: NeilBrown <neilb@suse.com>
    Fixes: 9d09e663d550 ("dm: raid456 basic support")
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 319fe9641ac1b4404b1d26b448041846509dfe15
Author: Ian Abbott <abbotti@mev.co.uk>
Date:   Thu Jul 23 16:46:58 2015 +0100

    staging: comedi: usbduxsigma: don't clobber ao_timer in command test
    
    commit c04a1f17803e0d3eeada586ca34a6b436959bc20 upstream.
    
    `devpriv->ao_timer` is used while an asynchronous command is running on
    the AO subdevice.  It also gets modified by the subdevice's `cmdtest`
    handler for checking new asynchronous commands,
    `usbduxsigma_ao_cmdtest()`, which is not correct as it's allowed to
    check new commands while an old command is still running.  Fix it by
    moving the code which sets up `devpriv->ao_timer` into the subdevice's
    `cmd` handler, `usbduxsigma_ao_cmd()`.
    
    Note that the removed code in `usbduxsigma_ao_cmdtest()` checked that
    `devpriv->ao_timer` did not end up less than 1, but that could not
    happen because `cmd->scan_begin_arg` or `cmd->convert_arg` had
    already been range-checked.
    
    Also note that we tested the `high_speed` variable in the old code, but
    that is currently always 0 and means that we always use "scan" timing
    (`cmd->scan_begin_src == TRIG_TIMER` and `cmd->convert_src == TRIG_NOW`)
    and never "convert" (individual sample) timing (`cmd->scan_begin_src ==
    TRIG_FOLLOW` and `cmd->convert_src == TRIG_TIMER`).  The moved code
    tests `cmd->convert_src` instead to decide whether "scan" or "convert"
    timing is being used, although currently only "scan" timing is
    supported.
    
    Fixes: fb1ef622e7a3 ("staging: comedi: usbduxsigma: tidy up analog output command support")
    Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
    Reviewed-by: Bernd Porr <mail@berndporr.me.uk>
    Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2e1a916c6599d0fa5478a0c282ae87e48398a2a0
Author: Ian Abbott <abbotti@mev.co.uk>
Date:   Thu Jul 23 16:46:57 2015 +0100

    staging: comedi: usbduxsigma: don't clobber ai_timer in command test
    
    commit 423b24c37dd5794a674c74b0ed56392003a69891 upstream.
    
    `devpriv->ai_timer` is used while an asynchronous command is running on
    the AI subdevice.  It also gets modified by the subdevice's `cmdtest`
    handler for checking new asynchronous commands
    (`usbduxsigma_ai_cmdtest()`), which is not correct as it's allowed to
    check new commands while an old command is still running.  Fix it by
    moving the code which sets up `devpriv->ai_timer` and
    `devpriv->ai_interval` into the subdevice's `cmd` handler,
    `usbduxsigma_ai_cmd()`.
    
    Note that the removed code in `usbduxsigma_ai_cmdtest()` checked that
    `devpriv->ai_timer` did not end up less than 1, but that could not
    happen because `cmd->scan_begin_arg` had already been checked to be at
    least the minimum required value (at least when `cmd->scan_begin_src ==
    TRIG_TIMER`, which had also been checked to be the case).
    
    Fixes: b986be8527c7 ("staging: comedi: usbduxsigma: tidy up analog input command support")
    Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
    Reviewed-by: Bernd Porr <mail@berndporr.me.uk>
    Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c295dbe5abe8bb7ccd08c52d6a619a9f2480c17d
Author: James Hogan <james.hogan@imgtec.com>
Date:   Fri Mar 27 08:33:43 2015 +0000

    MIPS: dma-default: Fix 32-bit fall back to GFP_DMA
    
    commit 53960059d56ecef67d4ddd546731623641a3d2d1 upstream.
    
    If there is a DMA zone (usually 24bit = 16MB I believe), but no DMA32
    zone, as is the case for some 32-bit kernels, then massage_gfp_flags()
    will cause DMA memory allocated for devices with a 32..63-bit
    coherent_dma_mask to fall back to using __GFP_DMA, even though there may
    only be 32 bits of physical address available anyway.
    
    Correct that case to compare against a mask the size of phys_addr_t
    instead of always using a 64-bit mask.
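
    A sketch of the corrected comparison in massage_gfp_flags() (the
    surrounding zone checks are omitted):

      /* was: if (dev->coherent_dma_mask < DMA_BIT_MASK(64)) */
      if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
              dma_flag = __GFP_DMA;
      else
              dma_flag = 0;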
    
    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Fixes: a2e715a86c6d ("MIPS: DMA: Fix computation of DMA flags from device's coherent_dma_mask.")
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/9610/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f2f4f60cef89d8e4b12c1e54a326334d775b9e42
Author: Yao-Wen Mao <yaowen@google.com>
Date:   Mon Aug 31 14:24:09 2015 +0800

    USB: Add reset-resume quirk for two Plantronics usb headphones.
    
    commit 8484bf2981b3d006426ac052a3642c9ce1d8d980 upstream.
    
    These two headphones need a reset-resume quirk to properly resume to
    original volume level.
    
    Signed-off-by: Yao-Wen Mao <yaowen@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ecef886609d7eff25649493fab55a9c893b0c210
Author: Vincent Palatin <vpalatin@chromium.org>
Date:   Thu Oct 1 14:10:22 2015 -0700

    usb: Add device quirk for Logitech PTZ cameras
    
    commit 72194739f54607bbf8cfded159627a2015381557 upstream.
    
    Add a device quirk for the Logitech PTZ Pro Camera and its sibling the
    ConferenceCam CC3000e Camera.
    This fixes the failed camera enumeration on some boots, particularly on
    machines with a fast CPU.
    
    Tested by connecting a Logitech PTZ Pro Camera to a machine with a
    Haswell Core i7-4600U CPU @ 2.10GHz and doing thousands of reboot cycles
    while recording the kernel logs and taking a camera picture after each
    boot.  Before the patch, more than 7% of the boots showed some
    enumeration transfer failures, and in a few of them the kernel gave up
    before actually enumerating the webcam.  After the patch, the
    enumeration has been correct on every reboot.
    
    Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1dd5e8d1fd9f570fb341c76df339bbbeed003351
Author: Mathias Nyman <mathias.nyman@linux.intel.com>
Date:   Mon Sep 21 17:46:09 2015 +0300

    usb: Use the USB_SS_MULT() macro to get the burst multiplier.
    
    commit ff30cbc8da425754e8ab96904db1d295bd034f27 upstream.
    
    Bits 1:0 of the bmAttributes are used for the burst multiplier.
    The rest of the bits used to be reserved (zero), but USB3.1 takes bit 7
    into use.
    
    Use the existing USB_SS_MULT() macro instead to make sure the mult value
    and hence max packet calculations are correct for USB3.1 devices.
    
    Note that burst multiplier in bmAttributes is zero based and that
    the USB_SS_MULT() macro adds one.
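
    A sketch of the isochronous max-transfer calculation using the macro
    (variable names illustrative; fields as in the SuperSpeed endpoint
    companion descriptor):

      /* USB_SS_MULT() masks only bits 1:0 and adds one, so a USB 3.1
       * device that sets bit 7 of bmAttributes no longer inflates mult. */
      max_tx = (desc->bMaxBurst + 1) *
               USB_SS_MULT(desc->bmAttributes) *
               usb_endpoint_maxp(&ep->desc);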
    
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit dddf1e7757ab8d63e5633e9d4d888784aa43109d
Author: Jann Horn <jann@thejh.net>
Date:   Fri Sep 18 23:41:23 2015 +0200

    security: fix typo in security_task_prctl
    
    commit b7f76ea2ef6739ee484a165ffbac98deb855d3d3 upstream.
    
    Signed-off-by: Jann Horn <jann@thejh.net>
    Reviewed-by: Andy Lutomirski <luto@kernel.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b1cb6a5ad95be4e1087c421174de35a25095ad66
Author: Mark Brown <broonie@kernel.org>
Date:   Sat Sep 19 07:12:34 2015 -0700

    regmap: debugfs: Don't bother actually printing when calculating max length
    
    commit 176fc2d5770a0990eebff903ba680d2edd32e718 upstream.
    
    The in-kernel snprintf() will conveniently return the actual length of
    the printed string even if not given an output buffer at all, so just do
    that rather than relying on the user to pass in a suitable buffer,
    ensuring that we don't need to worry about the result being truncated by
    the size of the buffer passed in.
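
    A minimal sketch of the idiom (register/value names are illustrative):

      /* With a NULL buffer and size 0, snprintf() writes nothing but still
       * returns the length the formatted string would have needed. */
      max_len = snprintf(NULL, 0, "%x: %x\n", max_reg, max_val);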
    
    Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3c3455972d3a9d2a5030a59b9e65eea3f7b9c9dc
Author: Mark Brown <broonie@kernel.org>
Date:   Sat Sep 19 07:00:18 2015 -0700

    regmap: debugfs: Ensure we don't underflow when printing access masks
    
    commit b763ec17ac762470eec5be8ebcc43e4f8b2c2b82 upstream.
    
    If a read is attempted which is smaller than the line length, then we
    may underflow the subtraction we're doing with the unsigned size_t
    type, so move some of the calculation to be additions on the right-hand
    side instead in order to avoid this.
    
    Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 476b9e54c0692e17e42e0225dee996df218e734f
Author: Antoine Ténart <antoine.tenart@free-electrons.com>
Date:   Tue Aug 18 10:59:10 2015 +0200

    mtd: pxa3xx_nand: add a default chunk size
    
    commit bc3e00f04cc1fe033a289c2fc2e5c73c0168d360 upstream.
    
    When keeping the configuration set by the bootloader (by using
    the marvell,nand-keep-config property), the pxa3xx_nand_detect_config()
    function is called and sets the chunk size to 512 as a default value if
    NDCR_PAGE_SZ is not set.
    
    In the other case, when not keeping the bootloader configuration, no
    chunk size is set. Fix this by adding a default chunk size of 512.
    
    Fixes: 70ed85232a93 ("mtd: nand: pxa3xx: Introduce multiple page I/O support")
    
    Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
    Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Brian Norris <computersforpeace@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2bd79382bb409b72ba46ae0805ef375e07d1e0e4
Author: Peter Seiderer <ps.report@gmx.net>
Date:   Thu Sep 17 21:40:12 2015 +0200

    cifs: use server timestamp for ntlmv2 authentication
    
    commit 98ce94c8df762d413b3ecb849e2b966b21606d04 upstream.
    
    A Linux cifs mount with ntlmssp against a Mac OS X (Yosemite
    10.10.5) share fails if the clocks differ by more than +/-2h:
    
    digest-service: digest-request: od failed with 2 proto=ntlmv2
    digest-service: digest-request: kdc failed with -1561745592 proto=ntlmv2
    
    Fix this by (re-)using the given server timestamp for the
    ntlmv2 authentication (as Windows 7 does).
    
    A related problem was also reported earlier by Namjae Jeon (see below):
    
    Windows machines have an extended security feature which refuses to
    allow authentication when there is a time difference between server
    time and client time when ntlmv2 negotiation is used.  This problem is
    prevalent in embedded environments where the system time is set to the
    default of 1970.

    Modern servers send the server timestamp in the TargetInfo Av_Pair
    structure in the challenge message [see MS-NLMP 2.2.2.1].  In
    [MS-NLMP 3.1.5.1.2] it is explicitly mentioned that the client must
    use the server-provided timestamp if present, or the current time if
    it is not.
    
    Reported-by: Namjae Jeon <namjae.jeon@samsung.com>
    Signed-off-by: Peter Seiderer <ps.report@gmx.net>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 673282beaf861ee04d541b410043fe81faca465e
Author: Julian Anastasov <ja@ssi.bg>
Date:   Wed Jul 8 08:31:33 2015 +0300

    ipvs: fix crash with sync protocol v0 and FTP
    
    commit 56184858d1fc95c46723436b455cb7261cd8be6f upstream.
    
    Fix crash in 3.5+ if FTP is used after switching
    sync_version to 0.
    
    Fixes: 749c42b620a9 ("ipvs: reduce sync rate with time thresholds")
    Signed-off-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cc2b6a186da7580d4557e7175c5ab4b18d9a57f0
Author: Julian Anastasov <ja@ssi.bg>
Date:   Sat Jun 27 14:39:30 2015 +0300

    ipvs: do not use random local source address for tunnels
    
    commit 4754957f04f5f368792a0eb7dab0ae89fb93dcfd upstream.
    
    Michael Vallaly reports a wrong source address being used in rare
    cases for tunneled traffic.  It looks like __ip_vs_get_out_rt in 3.10+
    is providing an uninitialized dest_dst->dst_saddr.ip because
    ip_vs_dest_dst_alloc uses kmalloc.  While we retry after seeing EINVAL
    from routing for data that does not look like a valid local address,
    the lookup still succeeded when this memory was previously used by
    other dests and with different local addresses.  As a result, we can
    use a valid local address that is not suitable for our real server.

    Fix it by providing 0.0.0.0 every time our cache is refreshed.  This
    way we will get the preferred source address from routing.
    
    Reported-by: Michael Vallaly <lvs@nolatency.com>
    Fixes: 026ace060dfe ("ipvs: optimize dst usage for real server")
    Signed-off-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 58c01a5074bc551a151b6b44f56ed40debd6b99d
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed Sep 30 12:48:40 2015 -0400

    Initialize msg/shm IPC objects before doing ipc_addid()
    
    commit b9a532277938798b53178d5a66af6e2915cb27cf upstream.
    
    As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
    having initialized the IPC object state.  Yes, we initialize the IPC
    object in a locked state, but with all the lockless RCU lookup work,
    that IPC object lock no longer means that the state cannot be seen.
    
    We already did this for the IPC semaphore code (see commit e8577d1f0329:
    "ipc/sem.c: fully initialize sem_array before making it visible") but we
    clearly forgot about msg and shm.
    
    Reported-by: Dmitry Vyukov <dvyukov@google.com>
    Cc: Manfred Spraul <manfred@colorfullife.com>
    Cc: Davidlohr Bueso <dbueso@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ef38414c57a0d1c054e410d83c008934f341f17a
Author: Reyad Attiyat <reyad.attiyat@gmail.com>
Date:   Thu Aug 6 19:23:58 2015 +0300

    usb: xhci: Add support for URB_ZERO_PACKET to bulk/sg transfers
    
    commit 4758dcd19a7d9ba9610b38fecb93f65f56f86346 upstream.
    
    This commit checks for the URB_ZERO_PACKET flag and creates an extra
    zero-length td if the urb transfer length is a multiple of the endpoint's
    max packet length.
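
    A sketch of the condition being tested (variable name illustrative):

      /* Queue one extra zero-length TD when the caller requested it and
       * the transfer ends exactly on a max-packet boundary. */
      bool need_zero_td = (urb->transfer_flags & URB_ZERO_PACKET) &&
                          (urb->transfer_buffer_length %
                           usb_endpoint_maxp(&urb->ep->desc) == 0);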
    
    Signed-off-by: Reyad Attiyat <reyad.attiyat@gmail.com>
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Cc: Oliver Neukum <oneukum@suse.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b4b2c2e2422d9f8c1b048453009e6e4ecab23bc5
Author: Mathias Nyman <mathias.nyman@linux.intel.com>
Date:   Mon Sep 21 17:46:16 2015 +0300

    xhci: change xhci 1.0 only restrictions to support xhci 1.1
    
    commit dca7794539eff04b786fb6907186989e5eaaa9c2 upstream.
    
    Some changes between the xhci 0.96 and xhci 1.0 specifications forced us
    to check the hci version in code; some of these checks were implemented
    as hci_version == 1.0, which will not work with new xhci 1.1 controllers.

    xhci 1.1 behaves similarly to xhci 1.0 in these cases, so change these
    checks to hci_version >= 1.0.
    
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d1f0a835d22598554b7b3cae188ca44af17aec78
Author: Roger Quadros <rogerq@ti.com>
Date:   Mon Sep 21 17:46:13 2015 +0300

    usb: xhci: Clear XHCI_STATE_DYING on start
    
    commit e5bfeab0ad515b4f6df39fe716603e9dc6d3dfd0 upstream.
    
    For whatever reason, if XHCI died in the previous instance, it
    will never recover on the next xhci_start unless we
    clear the DYING flag.
    
    Signed-off-by: Roger Quadros <rogerq@ti.com>
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fe6689e03318d5745d88328395fd326e08238533
Author: Johan Hovold <johan@kernel.org>
Date:   Wed Sep 23 11:41:42 2015 -0700

    USB: whiteheat: fix potential null-deref at probe
    
    commit cbb4be652d374f64661137756b8f357a1827d6a4 upstream.
    
    Fix potential null-pointer dereference at probe by making sure that the
    required endpoints are present.
    
    The whiteheat driver assumes there are at least five pairs of bulk
    endpoints, of which the final pair is used for the "command port". An
    attempt to bind to an interface with fewer bulk endpoints would
    currently lead to an oops.
    
    Fixes CVE-2015-5257.
    
    Reported-by: Moein Ghasemzadeh <moein@istuary.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 49fd0d611898595c8070870294b2074394971af7
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Tue Jun 23 11:34:21 2015 +0200

    drm: Reject DRI1 hw lock ioctl functions for kms drivers
    
    commit da168d81b44898404d281d5dbe70154ab5f117c1 upstream.
    
    I've done some extensive history digging across libdrm, mesa and
    xf86-video-{intel,nouveau,ati}. The only potential user of this with
    kms drivers I could find was ttmtest, which once used drmGetLock
    still. But that mistake was quickly fixed up. Even the intel xvmc
    library (which otherwise was really good with using dri1 stuff in kms
    mode) managed to never take the hw lock for dri2 (and hence kms).
    
    Hence it should be safe to unconditionally disallow this.
    
    Cc: Peter Antoine <peter.antoine@intel.com>
    Reviewed-by: Peter Antoine <peter.antoine@intel.com>
    Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0d6ed49807dc5db67665bbcf4b86340aa7ee4334
Author: Fabiano Fidêncio <fidencio@redhat.com>
Date:   Thu Sep 24 15:18:34 2015 +0200

    drm/qxl: recreate the primary surface when the bo is not primary
    
    commit 8d0d94015e96b8853c4f7f06eac3f269e1b3d866 upstream.
    
    When disabling/enabling a crtc the primary area must be updated
    independently of which crtc has been disabled/enabled.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1264735
    
    Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cdae6f1a2cb3785868c22e83f4d4525764c25de0
Author: Dave Airlie <airlied@redhat.com>
Date:   Mon Sep 14 10:28:34 2015 +1000

    drm/qxl: only report first monitor as connected if we have no state
    
    commit 69e5d3f893e19613486f300fd6e631810338aa4b upstream.
    
    If the server isn't new enough to give us state, report the first
    monitor as always connected, otherwise believe the server side.
    
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3036665506a6a96bee1a3184762b7e0424d60907
Author: Steve French <smfrench@gmail.com>
Date:   Tue Sep 22 09:29:38 2015 -0500

    disabling oplocks/leases via module parm enable_oplocks broken for SMB3
    
    commit e0ddde9d44e37fbc21ce893553094ecf1a633ab5 upstream.
    
    Leases (oplocks) were always requested for SMB2/SMB3 even when oplocks
    were disabled in the cifs.ko module.
    
    Signed-off-by: Steve French <steve.french@primarydata.com>
    Reviewed-by: Chandrika Srinivasan <chandrika.srinivasan@citrix.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c6a8f449f328a96ea3cfe702d1964b9df92409d3
Author: Pablo Neira Ayuso <pablo@netfilter.org>
Date:   Mon Sep 14 18:04:09 2015 +0200

    netfilter: nft_compat: skip family comparison in case of NFPROTO_UNSPEC
    
    commit ba378ca9c04a5fc1b2cf0f0274a9d02eb3d1bad9 upstream.
    
    Fix lookup of existing match/target structures in the corresponding list
    by skipping the family check if NFPROTO_UNSPEC is used.
    
    This is resulting in the allocation and insertion of one match/target
    structure for each use of them. So this not only bloats memory
    consumption but also severely affects the time to reload the ruleset
    from the iptables-compat utility.
    
    After this patch, iptables-compat-restore and iptables-compat take
    almost the same time to reload large rulesets.
    
    Fixes: 0ca743a55991 ("netfilter: nf_tables: add compatibility layer for x_tables")
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 49330bb5dcad7dba808ccb484b0d3f1d3d19db5d
Author: Pablo Neira Ayuso <pablo@netfilter.org>
Date:   Thu Jul 9 22:56:00 2015 +0200

    netfilter: ctnetlink: put back references to master ct and expect objects
    
    commit 95dd8653de658143770cb0e55a58d2aab97c79d2 upstream.
    
    We have to put back the references to the master conntrack and the expectation
    that we just created, otherwise we'll leak them.
    
    Fixes: 0ef71ee1a5b9 ("netfilter: ctnetlink: refactor ctnetlink_create_expect")
    Reported-by: Tim Wiess <Tim.Wiess@watchguard.com>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 918678f0f81f22ee6184b8c3816b8f5f85aaa4da
Author: Joe Stringer <joestringer@nicira.com>
Date:   Tue Jul 21 21:37:31 2015 -0700

    netfilter: nf_conntrack: Support expectations in different zones
    
    commit 4b31814d20cbe5cd4ccf18089751e77a04afe4f2 upstream.
    
    When zones were originally introduced, the expectation functions were
    all extended to perform lookup using the zone. However, insertion was
    not modified to check the zone. This means that two expectations which
    are intended to apply for different connections that have the same tuple
    but exist in different zones cannot both be tracked.
    
    Fixes: 5d0aa2ccd4 (netfilter: nf_conntrack: add support for "conntrack zones")
    Signed-off-by: Joe Stringer <joestringer@nicira.com>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3d9525000f7759de0065bbaf0a4d741bb0d2666d
Author: Mikulas Patocka <mpatocka@redhat.com>
Date:   Fri Oct 2 11:17:37 2015 -0400

    dm raid: fix round up of default region size
    
    commit 042745ee53a0a7c1f5aff191a4a24213c6dcfb52 upstream.
    
    Commit 3a0f9aaee028 ("dm raid: round region_size to power of two")
    intended to make sure that the default region size is a power of two.
    However, the logic in that commit is incorrect and sets the variable
    region_size to 0 or 1, depending on whether min_region_size is a power
    of two.
    
    Fix this logic, using roundup_pow_of_two(), so that region_size is
    properly rounded up to the next power of two.
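
    A sketch of the corrected rounding (variable names as described above):

      /* Round the default up to the next power of two, e.g. 24 -> 32,
       * instead of the old logic that collapsed to 0 or 1. */
      region_size = roundup_pow_of_two(min_region_size);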
    
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Fixes: 3a0f9aaee028 ("dm raid: round region_size to power of two")
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2c058c48a5ba8108ed6a43c63fd6fb50a1579926
Author: Liu.Zhao <lzsos369@163.com>
Date:   Mon Aug 24 08:36:12 2015 -0700

    USB: option: add ZTE PIDs
    
    commit 19ab6bc5674a30fdb6a2436b068d19a3c17dc73e upstream.
    
    This is intended to add ZTE device PIDs to the kernel.
    
    Signed-off-by: Liu.Zhao <lzsos369@163.com>
    [johan: sort the new entries ]
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4b78d69641fb02e2b412da752e2cc15e100369cf
Author: Shawn Lin <shawn.lin@rock-chips.com>
Date:   Wed Sep 9 15:41:52 2015 +0800

    staging: ion: fix corruption of ion_import_dma_buf
    
    commit 6fa92e2bcf6390e64895b12761e851c452d87bd8 upstream.
    
    We found this issue before, but it still exists in the latest kernel.
    Simply keep ion_handle_create under mutex_lock to avoid this race.
    
    WARNING: CPU: 2 PID: 2648 at drivers/staging/android/ion/ion.c:512 ion_handle_add+0xb4/0xc0()
    ion_handle_add: buffer already found.
    Modules linked in: iwlmvm iwlwifi mac80211 cfg80211 compat
    CPU: 2 PID: 2648 Comm: TimedEventQueue Tainted: G        W    3.14.0 #7
     00000000 00000000 9a3efd2c 80faf273 9a3efd6c 9a3efd5c 80935dc9 811d7fd3
     9a3efd88 00000a58 812208a0 00000200 80e128d4 80e128d4 8d4ae00c a8cd8600
     a8cd8094 9a3efd74 80935e0e 00000009 9a3efd6c 811d7fd3 9a3efd88 9a3efd9c
    Call Trace:
      [<80faf273>] dump_stack+0x48/0x69
      [<80935dc9>] warn_slowpath_common+0x79/0x90
      [<80e128d4>] ? ion_handle_add+0xb4/0xc0
      [<80e128d4>] ? ion_handle_add+0xb4/0xc0
      [<80935e0e>] warn_slowpath_fmt+0x2e/0x30
      [<80e128d4>] ion_handle_add+0xb4/0xc0
      [<80e144cc>] ion_import_dma_buf+0x8c/0x110
      [<80c517c4>] reg_init+0x364/0x7d0
      [<80993363>] ? futex_wait+0x123/0x210
      [<80992e0e>] ? get_futex_key+0x16e/0x1e0
      [<8099308f>] ? futex_wake+0x5f/0x120
      [<80c51e19>] vpu_service_ioctl+0x1e9/0x500
      [<80994aec>] ? do_futex+0xec/0x8e0
      [<80971080>] ? prepare_to_wait_event+0xc0/0xc0
      [<80c51c30>] ? reg_init+0x7d0/0x7d0
      [<80a22562>] do_vfs_ioctl+0x2d2/0x4c0
      [<80b198ad>] ? inode_has_perm.isra.41+0x2d/0x40
      [<80b199cf>] ? file_has_perm+0x7f/0x90
      [<80b1a5f7>] ? selinux_file_ioctl+0x47/0xf0
      [<80a227a8>] SyS_ioctl+0x58/0x80
      [<80fb45e8>] syscall_call+0x7/0x7
      [<80fb0000>] ? mmc_do_calc_max_discard+0xab/0xe4
    
    Fixes: 83271f626 ("ion: hold reference to handle...")
    Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
    Reviewed-by: Laura Abbott <labbott@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f7badc7e7cdcef817bba17c254e289076cfec5fd
Author: Joe Thornber <ejt@redhat.com>
Date:   Wed Aug 12 15:12:09 2015 +0100

    dm btree: add ref counting ops for the leaves of top level btrees
    
    commit b0dc3c8bc157c60b1d470163882be8c13e1950af upstream.
    
    When using nested btrees, the top leaves of the top levels contain
    block addresses for the root of the next tree down.  If we shadow a
    shared leaf node the leaf values (sub tree roots) should be incremented
    accordingly.
    
    This is only an issue if there is metadata sharing in the top levels.
    Which only occurs if metadata snapshots are being used (as is possible
    with dm-thinp).  And could result in a block from the thinp metadata
    snap being reused early, thus corrupting the thinp metadata snap.
    
    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8f9abef184f367249c8d90b452130e93d990923a
Author: Filipe Manana <fdmanana@suse.com>
Date:   Mon Sep 28 09:56:26 2015 +0100

    Btrfs: update fix for read corruption of compressed and shared extents
    
    commit 808f80b46790f27e145c72112189d6a3be2bc884 upstream.
    
    My previous fix in commit 005efedf2c7d ("Btrfs: fix read corruption of
    compressed and shared extents") was effective only if the compressed
    extents cover a file range with a length that is not a multiple of 16
    pages. That's because the detection of when we reached a different range
    of the file that shares the same compressed extent as the previously
    processed range was done at extent_io.c:__do_contiguous_readpages(),
    which covers subranges with a length up to 16 pages, because
    extent_readpages() groups the pages in clusters no larger than 16 pages.
    So fix this by tracking the start of the previously processed file
    range's extent map at extent_readpages().
    
    The following test case for fstests reproduces the issue:
    
      seq=`basename $0`
      seqres=$RESULT_DIR/$seq
      echo "QA output created by $seq"
      tmp=/tmp/$$
      status=1	# failure is the default!
      trap "_cleanup; exit \$status" 0 1 2 3 15
    
      _cleanup()
      {
          rm -f $tmp.*
      }
    
      # get standard environment, filters and checks
      . ./common/rc
      . ./common/filter
    
      # real QA test starts here
      _need_to_be_root
      _supported_fs btrfs
      _supported_os Linux
      _require_scratch
      _require_cloner
    
      rm -f $seqres.full
    
      test_clone_and_read_compressed_extent()
      {
          local mount_opts=$1
    
          _scratch_mkfs >>$seqres.full 2>&1
          _scratch_mount $mount_opts
    
          # Create our test file with a single extent of 64Kb that is going to
          # be compressed no matter which compression algo is used (zlib/lzo).
          $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 64K" \
              $SCRATCH_MNT/foo | _filter_xfs_io
    
          # Now clone the compressed extent into an adjacent file offset.
          $CLONER_PROG -s 0 -d $((64 * 1024)) -l $((64 * 1024)) \
              $SCRATCH_MNT/foo $SCRATCH_MNT/foo
    
          echo "File digest before unmount:"
          md5sum $SCRATCH_MNT/foo | _filter_scratch
    
          # Remount the fs or clear the page cache to trigger the bug in
          # btrfs. Because the extent has an uncompressed length that is a
          # multiple of 16 pages, all the pages belonging to the second range
          # of the file (64K to 128K), which points to the same extent as the
          # first range (0K to 64K), had their contents full of zeroes instead
          # of the byte 0xaa. This was a bug exclusively in the read path of
          # compressed extents, the correct data was stored on disk, btrfs
          # just failed to fill in the pages correctly.
          _scratch_remount
    
          echo "File digest after remount:"
          # Must match the digest we got before.
          md5sum $SCRATCH_MNT/foo | _filter_scratch
      }
    
      echo -e "\nTesting with zlib compression..."
      test_clone_and_read_compressed_extent "-o compress=zlib"
    
      _scratch_unmount
    
      echo -e "\nTesting with lzo compression..."
      test_clone_and_read_compressed_extent "-o compress=lzo"
    
      status=0
      exit
    
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Tested-by: Timofey Titovets <nefelim4ag@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8a9a267ca441e99718fd5a495f905afa8f7f5c26
Author: Filipe Manana <fdmanana@suse.com>
Date:   Mon Sep 14 09:09:31 2015 +0100

    Btrfs: fix read corruption of compressed and shared extents
    
    commit 005efedf2c7d0a270ffbe28d8997b03844f3e3e7 upstream.
    
    If a file has a range pointing to a compressed extent, followed by
    another range that points to the same compressed extent and a read
    operation attempts to read both ranges (either completely or part of
    them), the pages that correspond to the second range are incorrectly
    filled with zeroes.
    
    Consider the following example:
    
      File layout
      [0 - 8K]                      [8K - 24K]
          |                             |
          |                             |
       points to extent X,         points to extent X,
       offset 4K, length of 8K     offset 0, length 16K
    
      [extent X, compressed length = 4K uncompressed length = 16K]
    
    If a readpages() call spans the 2 ranges, a single bio to read the extent
    is submitted - extent_io.c:submit_extent_page() would only create a new
    bio to cover the second range pointing to the extent if the extent it
    points to had a different logical address than the extent associated with
    the first range. As a consequence, the compressed read end io
    handler (compression.c:end_compressed_bio_read()) finishes once the
    extent is decompressed into the pages covering the first range, leaving
    the remaining pages (belonging to the second range) filled with zeroes
    (done by compression.c:btrfs_clear_biovec_end()).
    
    So fix this by submitting the current bio whenever we find a range
    pointing to a compressed extent that was preceded by a range with a
    different extent map. This is the simplest solution for this corner
    case. Making the end io callback populate both ranges (or more, if we
    have multiple pointing to the same extent) is a much more complex
    solution since each bio is tightly coupled with a single extent map and
    the extent maps associated to the ranges pointing to the shared extent
    can have different offsets and lengths.
    
    The following test case for fstests triggers the issue:
    
      seq=`basename $0`
      seqres=$RESULT_DIR/$seq
      echo "QA output created by $seq"
      tmp=/tmp/$$
      status=1	# failure is the default!
      trap "_cleanup; exit \$status" 0 1 2 3 15
    
      _cleanup()
      {
          rm -f $tmp.*
      }
    
      # get standard environment, filters and checks
      . ./common/rc
      . ./common/filter
    
      # real QA test starts here
      _need_to_be_root
      _supported_fs btrfs
      _supported_os Linux
      _require_scratch
      _require_cloner
    
      rm -f $seqres.full
    
      test_clone_and_read_compressed_extent()
      {
          local mount_opts=$1
    
          _scratch_mkfs >>$seqres.full 2>&1
          _scratch_mount $mount_opts
    
          # Create a test file with a single extent that is compressed (the
          # data we write into it is highly compressible no matter which
          # compression algorithm is used, zlib or lzo).
          $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 4K"        \
                          -c "pwrite -S 0xbb 4K 8K"        \
                          -c "pwrite -S 0xcc 12K 4K"       \
                          $SCRATCH_MNT/foo | _filter_xfs_io
    
          # Now clone our extent into an adjacent offset.
          $CLONER_PROG -s $((4 * 1024)) -d $((16 * 1024)) -l $((8 * 1024)) \
              $SCRATCH_MNT/foo $SCRATCH_MNT/foo
    
          # Same as before but for this file we clone the extent into a lower
          # file offset.
          $XFS_IO_PROG -f -c "pwrite -S 0xaa 8K 4K"         \
                          -c "pwrite -S 0xbb 12K 8K"        \
                          -c "pwrite -S 0xcc 20K 4K"        \
                          $SCRATCH_MNT/bar | _filter_xfs_io
    
          $CLONER_PROG -s $((12 * 1024)) -d 0 -l $((8 * 1024)) \
              $SCRATCH_MNT/bar $SCRATCH_MNT/bar
    
          echo "File digests before unmounting filesystem:"
          md5sum $SCRATCH_MNT/foo | _filter_scratch
          md5sum $SCRATCH_MNT/bar | _filter_scratch
    
          # Evicting the inode or clearing the page cache before reading
          # again the file would also trigger the bug - reads were returning
          # all bytes in the range corresponding to the second reference to
          # the extent with a value of 0, but the correct data was persisted
          # (it was a bug exclusively in the read path). The issue happened
          # only if the same readpages() call targeted pages belonging to the
          # first and second ranges that point to the same compressed extent.
          _scratch_remount
    
          echo "File digests after mounting filesystem again:"
          # Must match the same digests we got before.
          md5sum $SCRATCH_MNT/foo | _filter_scratch
          md5sum $SCRATCH_MNT/bar | _filter_scratch
      }
    
      echo -e "\nTesting with zlib compression..."
      test_clone_and_read_compressed_extent "-o compress=zlib"
    
      _scratch_unmount
    
      echo -e "\nTesting with lzo compression..."
      test_clone_and_read_compressed_extent "-o compress=lzo"
    
      status=0
      exit
    
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
    Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 49ba84c3451a30def2ff18c31776d4f2035cd4c7
Author: Jeff Mahoney <jeffm@suse.com>
Date:   Fri Sep 11 21:44:17 2015 -0400

    btrfs: skip waiting on ordered range for special files
    
    commit a30e577c96f59b1e1678ea5462432b09bf7d5cbc upstream.
    
    In btrfs_evict_inode, we properly truncate the page cache for evicted
    inodes but then we call btrfs_wait_ordered_range for every inode as well.
    It's the right thing to do for regular files, but it results in incorrect
    behavior for the device inodes of block devices.
    
    filemap_fdatawrite_range gets called with inode->i_mapping which gets
    resolved to the block device inode before getting passed to
    wbc_attach_fdatawrite_inode and ultimately to inode_to_bdi.  What happens
    next depends on whether there's an open file handle associated with the
    inode.  If there is, we write to the block device, which is unexpected
    behavior.  If there isn't, we fall through normally and inode->i_data is
    used.
    We can also end up racing against open/close which can result in crashes
    when i_mapping points to a block device inode that has been closed.
    
    Since there can't be any page cache associated with special file inodes,
    it's safe to skip the btrfs_wait_ordered_range call entirely and avoid
    the problem.
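
    A minimal sketch of the resulting check in btrfs_evict_inode() (assuming
    the generic special_file() helper; not necessarily the literal upstream
    hunk):

      /*
       * Only regular files can have ordered extents to wait on; for
       * special files (block/char devices, fifos, sockets) skip the wait
       * so we never touch the block device's i_mapping here.
       */
      if (!special_file(inode->i_mode))
          btrfs_wait_ordered_range(inode, 0, (u64)-1);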
    
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=100911
    Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
    Signed-off-by: Jeff Mahoney <jeffm@suse.com>
    Reviewed-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b331ab92349feddb822b30f8c374b23445ffd630
Author: Yitian Bu <buyitian@gmail.com>
Date:   Fri Oct 2 15:18:41 2015 +0800

    ASoC: dwc: correct irq clear method
    
    commit 4873867e5f2bd90faad861dd94865099fc3140f3 upstream.
    
    From the Designware I2S datasheet, the tx/rx XRUN irq is cleared by
    reading the TOR/ROR registers, rather than by writing into them.
    
    Signed-off-by: Yitian Bu <yitian.bu@tangramtek.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2f32be0a672b151ce509e6c31ca8e95ac5ed57ca
Author: Robert Jarzmik <robert.jarzmik@free.fr>
Date:   Tue Sep 15 20:51:31 2015 +0200

    ASoC: fix broken pxa SoC support
    
    commit 3c8f7710c1c44fb650bc29b6ef78ed8b60cfaa28 upstream.
    
    The previous fix of the pxa library support, which was introduced to fix
    the library dependency, broke the previous SoC behavior, where machine
    code binding pxa2xx-ac97 with a codec relied on:
     - sound/soc/pxa/pxa2xx-ac97.c
     - sound/soc/codecs/XXX.c
    
    For example, the mioa701_wm9713.c machine code is currently broken. The
    "select ARM" statement wrongly selects the sound/arm pxa2xx-ac97 driver
    for compilation because, by an unfortunate coincidence, SND_PXA2XX_AC97
    is declared in both sound/arm/Kconfig and sound/soc/pxa/Kconfig.
    
    Fix this by ensuring that SND_PXA2XX_SOC triggers the compilation of the
    correct pxa2xx-ac97 driver.
    
    Fixes: 846172dfe33c ("ASoC: fix SND_PXA2XX_LIB Kconfig warning")
    Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 994e1a179b12e4b24aa7242efdad2d8615303c8f
Author: Robert Jarzmik <robert.jarzmik@free.fr>
Date:   Tue Sep 22 21:20:22 2015 +0200

    ASoC: pxa: pxa2xx-ac97: fix dma requestor lines
    
    commit 8811191fdf7ed02ee07cb8469428158572d355a2 upstream.
    
    PCM receive and transmit DMA requestor lines were swapped, breaking the
    PCM playback interface for PXA platforms using the sound/soc/ variant
    instead of the sound/arm variant.
    
    The commit below shows the inversion in the requestor lines.
    
    Fixes: d65a14587a9b ("ASoC: pxa: use snd_dmaengine_dai_dma_data")
    Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 85a7b3510c362afd51635e8981454d18f78b7e73
Author: John Flatness <john@zerocrates.org>
Date:   Fri Oct 2 17:07:49 2015 -0400

    ALSA: hda - Apply SPDIF pin ctl to MacBookPro 12,1
    
    commit e8ff581f7ac2bc3b8886094b7ca635dcc4d1b0e9 upstream.
    
    The MacBookPro 12,1 has the same setup as the 11 for controlling the
    status of the optical audio light. Simply apply the existing workaround
    to the subsystem ID for the 12,1.
    
    [sorted the fixup entry by tiwai]
    
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=105401
    Signed-off-by: John Flatness <john@zerocrates.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 22991b314005874d4f5266794bd8c675491d37f3
Author: Takashi Iwai <tiwai@suse.de>
Date:   Mon Oct 5 16:55:09 2015 +0200

    ALSA: synth: Fix conflicting OSS device registration on AWE32
    
    commit 225db5762dc1a35b26850477ffa06e5cd0097243 upstream.
    
    When OSS emulation is loaded on an ISA SB AWE32 chip, we now get kernel
    warnings like:
      WARNING: CPU: 0 PID: 2791 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x51/0x80()
      sysfs: cannot create duplicate filename '/devices/isa/sbawe.0/sound/card0/seq-oss-0-0'
    
    It's because both emux synth and opl3 drivers try to register their
    OSS device object with the same static index number 0.  This hasn't
    been a big problem until the recent rewrite of device management code
    (that exposes sysfs at the same time), but it's been an obvious bug.
    
    This patch works around it just by using a different index number for
    the emux synth object.  There may be a more elegant way to fix this, but
    it's enough for now, as this code isn't touched very often anyway.
    
    Reported-and-tested-by: Michael Shell <list1@michaelshell.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8bee9e817583ddd202d9f4dcbdceecd3c34e569c
Author: Mel Gorman <mgorman@techsingularity.net>
Date:   Thu Oct 1 15:36:57 2015 -0700

    mm: hugetlbfs: skip shared VMAs when unmapping private pages to satisfy a fault
    
    commit 2f84a8990ebbe235c59716896e017c6b2ca1200f upstream.
    
    SunDong reported the following on
    
      https://bugzilla.kernel.org/show_bug.cgi?id=103841
    
    	I think I find a linux bug, I have the test cases is constructed. I
    	can stable recurring problems in fedora22(4.0.4) kernel version,
    	arch for x86_64.  I construct transparent huge page, when the parent
    	and child process with MAP_SHARE, MAP_PRIVATE way to access the same
    	huge page area, it has the opportunity to lead to huge page copy on
    	write failure, and then it will munmap the child corresponding mmap
    	area, but then the child mmap area with VM_MAYSHARE attributes, child
    	process munmap this area can trigger VM_BUG_ON in set_vma_resv_flags
    	functions (vma - > vm_flags & VM_MAYSHARE).
    
    There were a number of problems with the report (e.g.  it's hugetlbfs that
    triggers this, not transparent huge pages) but it was fundamentally
    correct in that a VM_BUG_ON in set_vma_resv_flags() can be triggered that
    looks like this
    
    	 vma ffff8804651fd0d0 start 00007fc474e00000 end 00007fc475e00000
    	 next ffff8804651fd018 prev ffff8804651fd188 mm ffff88046b1b1800
    	 prot 8000000000000027 anon_vma           (null) vm_ops ffffffff8182a7a0
    	 pgoff 0 file ffff88106bdb9800 private_data           (null)
    	 flags: 0x84400fb(read|write|shared|mayread|maywrite|mayexec|mayshare|dontexpand|hugetlb)
    	 ------------
    	 kernel BUG at mm/hugetlb.c:462!
    	 SMP
    	 Modules linked in: xt_pkttype xt_LOG xt_limit [..]
    	 CPU: 38 PID: 26839 Comm: map Not tainted 4.0.4-default #1
    	 Hardware name: Dell Inc. PowerEdge R810/0TT6JF, BIOS 2.7.4 04/26/2012
    	 set_vma_resv_flags+0x2d/0x30
    
    The VM_BUG_ON is correct because private and shared mappings have
    different reservation accounting but the warning clearly shows that the
    VMA is shared.
    
    When a private COW fails to allocate a new page then only the process
    that created the VMA gets the page -- all the children unmap the page.
    If the children access that data in the future then they get killed.
    
    The problem is that the same file is mapped shared and private.  During
    the COW, the allocation fails, the VMAs are traversed to unmap the other
    private pages but a shared VMA is found and the bug is triggered.  This
    patch identifies such VMAs and skips them.
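
    The skip amounts to roughly the following in the hugetlb COW recovery
    walk over the file's VMAs (hedged sketch; iter_vma is the loop-variable
    name assumed from mm/hugetlb.c):

      /*
       * Shared VMAs have their own reserves and do not take part in the
       * MAP_PRIVATE COW accounting, so skip them instead of tripping the
       * VM_BUG_ON in set_vma_resv_flags().
       */
      if (iter_vma->vm_flags & VM_MAYSHARE)
          continue;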
    
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Reported-by: SunDong <sund_sky@126.com>
    Reviewed-by: Michal Hocko <mhocko@suse.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: David Rientjes <rientjes@google.com>
    Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6b4f1c43bab821c98e9006f5e1772f9a19ebad6d
Author: Tan, Jui Nee <jui.nee.tan@intel.com>
Date:   Tue Sep 1 10:22:51 2015 +0800

    spi: spi-pxa2xx: Check status register to determine if SSSR_TINT is disabled
    
    commit 02bc933ebb59208f42c2e6305b2c17fd306f695d upstream.
    
    On Intel Baytrail, there is a case where the interrupt handler gets
    called but no SPI message is captured. The RX FIFO is indeed empty when
    the RX timeout pending interrupt (SSSR_TINT) happens.
    
    This happens with a BIOS version where both HSUART and SPI are on the
    same IRQ. Both drivers use IRQF_SHARED when calling the request_irq
    function. When running two separate and independent SPI and HSUART
    applications that generate data traffic on both components, the user
    will see messages like the one below on the console:
    
      pxa2xx-spi pxa2xx-spi.0: bad message state in interrupt handler
    
    This commit fixes this by first checking the Receiver Time-out
    Interrupt; if it is disabled, ignore the request and return without
    servicing.
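
    A hedged sketch of the added check in the interrupt handler (bit names
    as defined for the SSP block; the surrounding status/mask handling is
    abbreviated):

      /* Ignore the RX timeout interrupt if it is disabled */
      if (!(sccr1_reg & SSCR1_TINTE))
          mask &= ~SSSR_TINT;

      /* Nothing left that belongs to us: let the other IRQF_SHARED
       * handler (HSUART) service the line. */
      if (!(status & mask))
          return IRQ_NONE;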
    
    Signed-off-by: Tan, Jui Nee <jui.nee.tan@intel.com>
    Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 00b349a1bcb0ee42a7374100f071010231fec623
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Sun Sep 6 01:46:54 2015 +0300

    spi: Fix documentation of spi_alloc_master()
    
    commit a394d635193b641f2c86ead5ada5b115d57c51f8 upstream.
    
    Actually, spi_master_put() after spi_alloc_master() must _not_ be followed
    by kfree(). The memory is already freed with the call to spi_master_put()
    through spi_master_class, which registers a release function. Calling both
    spi_master_put() and kfree() results in often nasty (and delayed) crashes
    elsewhere in the kernel, often in the networking stack.
    
    This reverts commit eb4af0f5349235df2e4a5057a72fc8962d00308a.
    
    Link to patch and concerns: https://lkml.org/lkml/2012/9/3/269
    or
    http://lkml.iu.edu/hypermail/linux/kernel/1209.0/00790.html
    
    Alexey Klimov: This revert becomes valid after
    94c69f765f1b4a658d96905ec59928e3e3e07e6a, when spi-imx.c
    was fixed and there is no longer a need to call kfree(), so the
    comment for spi_alloc_master() should be fixed.
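
    The corrected usage pattern therefore looks roughly like this (hedged
    sketch of a typical controller probe; my_priv and my_setup are
    illustrative names only):

      struct spi_master *master;
      int ret;

      master = spi_alloc_master(&pdev->dev, sizeof(struct my_priv));
      if (!master)
          return -ENOMEM;

      ret = my_setup(master);
      if (ret) {
          /*
           * spi_master_put() drops the last reference and frees the
           * allocation via the class release function; do NOT follow
           * it with kfree(master).
           */
          spi_master_put(master);
          return ret;
      }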
    
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Alexey Klimov <alexey.klimov@linaro.org>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 709b4154b5ba1fe5f6e3b53f9a1d4e3c0003c293
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Sep 29 14:45:09 2015 +0200

    sched/core: Fix TASK_DEAD race in finish_task_switch()
    
    commit 95913d97914f44db2b81271c2e2ebd4d2ac2df83 upstream.
    
    So the problem this patch is trying to address is as follows:
    
            CPU0                            CPU1
    
            context_switch(A, B)
                                            ttwu(A)
                                              LOCK A->pi_lock
                                              A->on_cpu == 0
            finish_task_switch(A)
              prev_state = A->state  <-.
              WMB                      |
              A->on_cpu = 0;           |
              UNLOCK rq0->lock         |
                                       |    context_switch(C, A)
                                       `--  A->state = TASK_DEAD
              prev_state == TASK_DEAD
                put_task_struct(A)
                                            context_switch(A, C)
                                            finish_task_switch(A)
                                              A->state == TASK_DEAD
                                                put_task_struct(A)
    
    The argument being that the WMB will allow the load of A->state on CPU0
    to cross over and observe CPU1's store of A->state, which will then
    result in a double-drop and use-after-free.
    
    Now the comment states (and this was true once upon a long time ago)
    that we need to observe A->state while holding rq->lock because that
    will order us against the wakeup; however the wakeup will not in fact
    acquire (that) rq->lock; it takes A->pi_lock these days.
    
    We can obviously fix this by upgrading the WMB to an MB, but that is
    expensive, so we'd rather avoid that.
    
    The alternative this patch takes is: smp_store_release(&A->on_cpu, 0),
    which avoids the MB on some archs, but not important ones like ARM.
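
    Conceptually, the release store publishes on_cpu only after the
    prev->state load is done (hedged sketch, not the full
    finish_task_switch()/finish_lock_switch() context):

      prev_state = prev->state;
      /*
       * Release store: the prev->state load above cannot be reordered
       * past this point, so a concurrent ttwu() on another CPU cannot
       * race us into a double put_task_struct().
       */
      smp_store_release(&prev->on_cpu, 0);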
    
    Reported-by: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Cc: manfred@colorfullife.com
    Cc: will.deacon@arm.com
    Fixes: e4a52bcb9a18 ("sched: Remove rq->lock from the first half of ttwu()")
    Link: http://lkml.kernel.org/r/20150929124509.GG3816@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2c6bcc45689efbee5a751c48b3488be97d03d1f1
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Fri Sep 25 11:59:52 2015 +0200

    x86/xen: Support kexec/kdump in HVM guests by doing a soft reset
    
    commit 0b34a166f291d255755be46e43ed5497cdd194f2 upstream.
    
    Currently there are a number of issues preventing PVHVM Xen guests from
    doing successful kexec/kdump:
    
      - Bound event channels.
      - Registered vcpu_info.
      - PIRQ/emuirq mappings.
      - shared_info frame after XENMAPSPACE_shared_info operation.
      - Active grant mappings.
    
    Basically, the newly booted kernel stumbles upon the already set up Xen
    interfaces and there is no way to reestablish them. In Xen 4.7 a new
    feature called 'soft reset' is coming. A guest performing a kexec/kdump
    operation is supposed to call the SCHEDOP_shutdown hypercall with the
    SHUTDOWN_soft_reset reason before jumping to the new kernel. The
    hypervisor (with some help from the toolstack) will do a full domain
    cleanup (but keep its memory and vCPU contexts intact), returning the
    guest to the state it had when it was first booted and thus allowing it
    to start over.
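
    The shutdown path then looks roughly like this (hedged sketch; the
    fallback message and its wording are assumptions based on the
    description here):

      struct sched_shutdown r = { .reason = SHUTDOWN_soft_reset };

      /* Ask the hypervisor to tear down event channels, vcpu_info,
       * PIRQ mappings and grant mappings while keeping memory and
       * vCPU contexts intact. */
      if (HYPERVISOR_sched_op(SCHEDOP_shutdown, &r))
          pr_warn("Xen: SHUTDOWN_soft_reset not supported\n");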
    
    Doing SHUTDOWN_soft_reset on Xen hypervisors which don't support it is
    probably OK, as by default all unknown shutdown reasons cause a domain
    destroy with a message in the toolstack log: 'Unknown shutdown reason
    code 5. Destroying domain.', which gives a clue to what the problem is
    and eliminates false expectations.
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 410f94b1138eb76889dc7842b0adcc136c3a5f05
Author: Stephen Smalley <sds@tycho.nsa.gov>
Date:   Thu Oct 1 09:04:22 2015 -0400

    x86/mm: Set NX on gap between __ex_table and rodata
    
    commit ab76f7b4ab2397ffdd2f1eb07c55697d19991d10 upstream.
    
    Unused space between the end of __ex_table and the start of
    rodata can be left W+x in the kernel page tables.  Extend the
    setting of the NX bit to cover this gap by starting from
    text_end rather than rodata_start.
    
      Before:
      ---[ High Kernel Mapping ]---
      0xffffffff80000000-0xffffffff81000000          16M                               pmd
      0xffffffff81000000-0xffffffff81600000           6M     ro         PSE     GLB x  pmd
      0xffffffff81600000-0xffffffff81754000        1360K     ro                 GLB x  pte
      0xffffffff81754000-0xffffffff81800000         688K     RW                 GLB x  pte
      0xffffffff81800000-0xffffffff81a00000           2M     ro         PSE     GLB NX pmd
      0xffffffff81a00000-0xffffffff81b3b000        1260K     ro                 GLB NX pte
      0xffffffff81b3b000-0xffffffff82000000        4884K     RW                 GLB NX pte
      0xffffffff82000000-0xffffffff82200000           2M     RW         PSE     GLB NX pmd
      0xffffffff82200000-0xffffffffa0000000         478M                               pmd
    
      After:
      ---[ High Kernel Mapping ]---
      0xffffffff80000000-0xffffffff81000000          16M                               pmd
      0xffffffff81000000-0xffffffff81600000           6M     ro         PSE     GLB x  pmd
      0xffffffff81600000-0xffffffff81754000        1360K     ro                 GLB x  pte
      0xffffffff81754000-0xffffffff81800000         688K     RW                 GLB NX pte
      0xffffffff81800000-0xffffffff81a00000           2M     ro         PSE     GLB NX pmd
      0xffffffff81a00000-0xffffffff81b3b000        1260K     ro                 GLB NX pte
      0xffffffff81b3b000-0xffffffff82000000        4884K     RW                 GLB NX pte
      0xffffffff82000000-0xffffffff82200000           2M     RW         PSE     GLB NX pmd
      0xffffffff82200000-0xffffffffa0000000         478M                               pmd
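
    The change itself is small; in mark_rodata_ro() terms it is roughly the
    following (hedged sketch, symbol names as used in arch/x86/mm/init_64.c):

      unsigned long text_end = PFN_ALIGN(&__stop___ex_table);
      unsigned long all_end  = PFN_ALIGN(&_end);

      /* Start the NX range at the end of __ex_table instead of at
       * rodata_start, so the gap in between is no longer left W+x. */
      set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);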
    
    Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
    Acked-by: Kees Cook <keescook@chromium.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Link: http://lkml.kernel.org/r/1443704662-3138-1-git-send-email-sds@tycho.nsa.gov
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a1e2667a69ffced2657b071fdcade738cb6b2ece
Author: Matt Fleming <matt.fleming@intel.com>
Date:   Fri Sep 25 23:02:18 2015 +0100

    x86/efi: Fix boot crash by mapping EFI memmap entries bottom-up at runtime, instead of top-down
    
    commit a5caa209ba9c29c6421292e7879d2387a2ef39c9 upstream.
    
    Beginning with UEFI v2.5 EFI_PROPERTIES_TABLE was introduced
    that signals that the firmware PE/COFF loader supports splitting
    code and data sections of PE/COFF images into separate EFI
    memory map entries. This allows the kernel to map those regions
    with strict memory protections, e.g. EFI_MEMORY_RO for code,
    EFI_MEMORY_XP for data, etc.
    
    Unfortunately, an unwritten requirement of this new feature is
    that the regions need to be mapped with the same offsets
    relative to each other as observed in the EFI memory map. If
    this is not done, crashes like this may occur:
    
      BUG: unable to handle kernel paging request at fffffffefe6086dd
      IP: [<fffffffefe6086dd>] 0xfffffffefe6086dd
      Call Trace:
       [<ffffffff8104c90e>] efi_call+0x7e/0x100
       [<ffffffff81602091>] ? virt_efi_set_variable+0x61/0x90
       [<ffffffff8104c583>] efi_delete_dummy_variable+0x63/0x70
       [<ffffffff81f4e4aa>] efi_enter_virtual_mode+0x383/0x392
       [<ffffffff81f37e1b>] start_kernel+0x38a/0x417
       [<ffffffff81f37495>] x86_64_start_reservations+0x2a/0x2c
       [<ffffffff81f37582>] x86_64_start_kernel+0xeb/0xef
    
    Here 0xfffffffefe6086dd refers to an address the firmware
    expects to be mapped but which the OS never claimed was mapped.
    The issue is that included in these regions are relative
    addresses to other regions which were emitted by the firmware
    toolchain before the "splitting" of sections occurred at
    runtime.
    
    Needless to say, we don't satisfy this unwritten requirement on
    x86_64 and instead map the EFI memory map entries in reverse
    order. The above crash is almost certainly triggerable with any
    kernel newer than v3.13 because that's when we rewrote the EFI
    runtime region mapping code, in commit d2f7cbe7b26a ("x86/efi:
    Runtime services virtual mapping"). For kernel versions before
    v3.13 things may work by pure luck depending on the
    fragmentation of the kernel virtual address space at the time we
    map the EFI regions.
    
    Instead of mapping the EFI memory map entries in reverse order,
    where entry N has a higher virtual address than entry N+1, map
    them in the same order as they appear in the EFI memory map to
    preserve this relative offset between regions.
    
    This patch has been kept as small as possible with the intention
    that it should be applied aggressively to stable and
    distribution kernels. It is very much a bugfix rather than
    support for a new feature, since when EFI_PROPERTIES_TABLE is
    enabled we must map things as outlined above to even boot - we
    have no way of asking the firmware not to split the code/data
    regions.
    
    In fact, this patch doesn't even make use of the more strict
    memory protections available in UEFI v2.5. That will come later.
    
    Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Chun-Yi <jlee@suse.com>
    Cc: Dave Young <dyoung@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: James Bottomley <JBottomley@Odin.com>
    Cc: Lee, Chun-Yi <jlee@suse.com>
    Cc: Leif Lindholm <leif.lindholm@linaro.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Matthew Garrett <mjg59@srcf.ucam.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Jones <pjones@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Link: http://lkml.kernel.org/r/1443218539-7610-2-git-send-email-matt@codeblueprint.co.uk
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3fd49abb844ff3b97cdc041a73f025288cf75af0
Author: Dirk Müller <dmueller@suse.com>
Date:   Thu Oct 1 13:43:42 2015 +0200

    Use WARN_ON_ONCE for missing X86_FEATURE_NRIPS
    
    commit d2922422c48df93f3edff7d872ee4f3191fefb08 upstream.
    
    The CPU feature flags are never going to change, so warning
    every time can cause a lot of kernel log spam
    (in our case more than 10GB/hour).
    
    The warning seems to only occur when nested virtualization is
    enabled, so it's probably triggered by a KVM bug.  This is a
    sensible and safe change anyway, and the KVM bug fix might not
    be suitable for stable releases anyway.
    
    Signed-off-by: Dirk Mueller <dmueller@suse.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 76bdbeab6899465e47817b70b16eae5f5b6d9f19
Author: Andy Lutomirski <luto@kernel.org>
Date:   Sun Sep 20 16:32:05 2015 -0700

    x86/nmi/64: Fix a paravirt stack-clobbering bug in the NMI code
    
    commit 83c133cf11fb0e68a51681447e372489f052d40e upstream.
    
    The NMI entry code that switches to the normal kernel stack needs to
    be very careful not to clobber any extra stack slots on the NMI
    stack.  The code is fine under the assumption that SWAPGS is just a
    normal instruction, but that assumption isn't really true.  Use
    SWAPGS_UNSAFE_STACK instead.
    
    This is part of a fix for some random crashes that Sasha saw.
    
    Fixes: 9b6e6a8334d5 ("x86/nmi/64: Switch stacks on userspace NMI entry")
    Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Link: http://lkml.kernel.org/r/974bc40edffdb5c2950a5c4977f821a446b76178.1442791737.git.luto@kernel.org
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9b53b9dcfe11b35a27992429fa2e78e8c99769b2
Author: Andy Lutomirski <luto@kernel.org>
Date:   Sun Sep 20 16:32:04 2015 -0700

    x86/paravirt: Replace the paravirt nop with a bona fide empty function
    
    commit fc57a7c68020dcf954428869eafd934c0ab1536f upstream.
    
    PARAVIRT_ADJUST_EXCEPTION_FRAME generates this code (using nmi as an
    example, trimmed for readability):
    
        ff 15 00 00 00 00       callq  *0x0(%rip)        # 2796 <nmi+0x6>
                  2792: R_X86_64_PC32     pv_irq_ops+0x2c
    
    That's a call through a function pointer to a regular C function that
    does nothing on native boots, but that function isn't protected
    against kprobes, isn't marked notrace, and is certainly not
    guaranteed to preserve any registers if the compiler is feeling
    perverse.  This is bad news for a CLBR_NONE operation.
    
    Of course, if everything works correctly, once paravirt ops are
    patched, it gets nopped out, but what if we hit this code before
    paravirt ops are patched in?  This can potentially cause breakage
    that is very difficult to debug.
    
    A more subtle failure is possible here, too: if _paravirt_nop uses
    the stack at all (even just to push RBP), it will overwrite the "NMI
    executing" variable if it's called in the NMI prologue.
    
    The Xen case, perhaps surprisingly, is fine, because it's already
    written in asm.
    
    Fix all of the cases that default to paravirt_nop (including
    adjust_exception_frame) with a big hammer: replace paravirt_nop with
    an asm function that is just a ret instruction.
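
    The replacement is essentially a ret-only symbol written in assembly so
    that it clobbers nothing and never touches the stack (hedged sketch;
    section and symbol-type annotations trimmed):

      /* A bare 'ret': no prologue, no stack usage, no register clobbers,
       * so it is safe to call through pv_*_ops before patching. */
      asm(".global _paravirt_nop\n"
          "_paravirt_nop:\n"
          "    ret\n");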
    
    The Xen case may have other problems, so document them.
    
    This is part of a fix for some random crashes that Sasha saw.
    
    Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Link: http://lkml.kernel.org/r/8f5d2ba295f9d73751c33d97fda03e0495d9ade0.1442791737.git.luto@kernel.org
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 34b561d0f90812f8869d2f958b247a6dd75b72f8
Author: David Woodhouse <dwmw2@infradead.org>
Date:   Wed Sep 16 14:10:03 2015 +0100

    x86/platform: Fix Geode LX timekeeping in the generic x86 build
    
    commit 03da3ff1cfcd7774c8780d2547ba0d995f7dc03d upstream.
    
    In 2007, commit 07190a08eef36 ("Mark TSC on GeodeLX reliable")
    bypassed verification of the TSC on Geode LX. However, this code
    (now in the check_system_tsc_reliable() function in
    arch/x86/kernel/tsc.c) was only present if CONFIG_MGEODE_LX was
    set.
    
    OpenWRT has recently started building its generic Geode target
    for Geode GX, not LX, to include support for additional
    platforms. This broke the timekeeping on LX-based devices,
    because the TSC wasn't marked as reliable:
    https://dev.openwrt.org/ticket/20531
    
    By adding a runtime check on is_geode_lx(), we can also include
    the fix if CONFIG_MGEODEGX1 or CONFIG_X86_GENERIC are set, thus
    fixing the problem.
    
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
    Cc: Andres Salomon <dilinger@queued.net>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Marcelo Tosatti <marcelo@kvack.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/1442409003.131189.87.camel@infradead.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 01cf9c697bc5e87e8a32c2cd95f30141e4e79224
Author: Shaohua Li <shli@fb.com>
Date:   Thu Jul 30 16:24:43 2015 -0700

    x86/apic: Serialize LVTT and TSC_DEADLINE writes
    
    commit 5d7c631d926b59aa16f3c56eaeb83f1036c81dc7 upstream.
    
    The APIC LVTT register is MMIO mapped but the TSC_DEADLINE register is an
    MSR. The write to the TSC_DEADLINE MSR is not serializing, so it's not
    guaranteed that the write to LVTT has reached the APIC before the
    TSC_DEADLINE MSR is written. In such a case the write to the MSR is
    ignored and as a consequence the local timer interrupt never fires.
    
    The SDM describes this issue for xAPIC and x2APIC modes. The
    serialization methods recommended by the SDM differ.
    
    xAPIC:
     "1. Memory-mapped write to LVT Timer Register, setting bits 18:17 to 10b.
      2. WRMSR to the IA32_TSC_DEADLINE MSR a value much larger than current time-stamp counter.
      3. If RDMSR of the IA32_TSC_DEADLINE MSR returns zero, go to step 2.
      4. WRMSR to the IA32_TSC_DEADLINE MSR the desired deadline."
    
    x2APIC:
     "To allow for efficient access to the APIC registers in x2APIC mode,
      the serializing semantics of WRMSR are relaxed when writing to the
      APIC registers. Thus, system software should not use 'WRMSR to APIC
      registers in x2APIC mode' as a serializing instruction. Read and write
      accesses to the APIC registers will occur in program order. A WRMSR to
      an APIC register may complete before all preceding stores are globally
      visible; software can prevent this by inserting a serializing
      instruction, an SFENCE, or an MFENCE before the WRMSR."
    
    The xAPIC method is to just wait for the memory mapped write to hit
    the LVTT by checking whether the MSR write has reached the hardware.
    There is no reason why a proper MFENCE after the memory mapped write would
    not do the same. Andi Kleen confirmed that MFENCE is sufficient for the
    xAPIC case as well.
    
    Issue MFENCE before writing to the TSC_DEADLINE MSR. This can be done
    unconditionally as all CPUs which have TSC_DEADLINE also have MFENCE
    support.
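
    In lapic_next_deadline() terms the ordering becomes roughly the
    following (hedged sketch; the deadline computation is abbreviated as
    tsc + delta):

      /* Make sure the memory-mapped LVTT write is globally visible
       * before the non-serializing WRMSR to the deadline MSR. */
      asm volatile("mfence" : : : "memory");
      wrmsrl(MSR_IA32_TSC_DEADLINE, tsc + delta);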
    
    [ tglx: Massaged the changelog ]
    
    Signed-off-by: Shaohua Li <shli@fb.com>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Cc: <Kernel-team@fb.com>
    Cc: <lenb@kernel.org>
    Cc: <fenghua.yu@intel.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Link: http://lkml.kernel.org/r/20150909041352.GA2059853@devbig257.prn2.facebook.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 59ebd41e415d16748e12dea1f0c8ecdfd30fb410
Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Date:   Mon Sep 28 18:57:03 2015 +0300

    dmaengine: dw: properly read DWC_PARAMS register
    
    commit 6bea0f6d1c47b07be88dfd93f013ae05fcb3d8bf upstream.
    
    In case we have fewer than the maximum allowed channels (8) and
    autoconfiguration is enabled, the DWC_PARAMS read is wrong because it
    uses different arithmetic from what is needed for channel priority
    setup.
    
    Re-do the calculations properly. This now works well on an AVR32 board.
    
    Fixes: fed2574b3c9f ("dw_dmac: introduce software emulation of LLP transfers")
    Cc: yitian.bu@tangramtek.com
    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 57c65c9ade04e30998329487cce0f1a13c45d1ca
Author: Grazvydas Ignotas <notasas@gmail.com>
Date:   Wed Sep 16 01:34:31 2015 +0300

    ARM: dts: omap5-uevm.dts: fix i2c5 pinctrl offsets
    
    commit 1dbdad75074d16c3e3005180f81a01cdc04a7872 upstream.
    
    The i2c5 pinctrl offsets are wrong. If the bootloader doesn't set the
    pins up, communication with tca6424a doesn't work (controller timeouts)
    and it is not possible to enable HDMI.
    
    Fixes: 9be495c42609 ("ARM: dts: omap5-evm: Add I2c pinctrl data")
    Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 29b31a353768fbacd4d005c670330a386391427d
Author: Paul Bolle <pebolle@tiscali.nl>
Date:   Fri Jul 31 14:08:58 2015 +0200

    windfarm: decrement client count when unregistering
    
    commit fe2b592173ff0274e70dc44d1d28c19bb995aa7c upstream.
    
    wf_unregister_client() increments the client count when a client
    unregisters. That is obviously incorrect. Decrement that client count
    instead.
    
    Fixes: 75722d3992f5 ("[PATCH] ppc64: Thermal control for SMU based machines")
    
    Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3c4010be5bef979f0d3667d09294c9a326d3c71a
Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date:   Thu Sep 3 13:24:40 2015 +0100

    ARM: 8429/1: disable GCC SRA optimization
    
    commit a077224fd35b2f7fbc93f14cf67074fc792fbac2 upstream.
    
    While working on the 32-bit ARM port of UEFI, I noticed a strange
    corruption in the kernel log. The following snprintf() statement
    (in drivers/firmware/efi/efi.c:efi_md_typeattr_format())
    
    	snprintf(pos, size, "|%3s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
    
    was producing the following output in the log:
    
    	|    |   |   |   |    |WB|WT|WC|UC]
    	|    |   |   |   |    |WB|WT|WC|UC]
    	|    |   |   |   |    |WB|WT|WC|UC]
    	|RUN|   |   |   |    |WB|WT|WC|UC]*
    	|RUN|   |   |   |    |WB|WT|WC|UC]*
    	|    |   |   |   |    |WB|WT|WC|UC]
    	|RUN|   |   |   |    |WB|WT|WC|UC]*
    	|    |   |   |   |    |WB|WT|WC|UC]
    	|RUN|   |   |   |    |   |   |   |UC]
    	|RUN|   |   |   |    |   |   |   |UC]
    
    As it turns out, this is caused by incorrect code being emitted for
    the string() function in lib/vsprintf.c. The following code
    
    	if (!(spec.flags & LEFT)) {
    		while (len < spec.field_width--) {
    			if (buf < end)
    				*buf = ' ';
    			++buf;
    		}
    	}
    	for (i = 0; i < len; ++i) {
    		if (buf < end)
    			*buf = *s;
    		++buf; ++s;
    	}
    	while (len < spec.field_width--) {
    		if (buf < end)
    			*buf = ' ';
    		++buf;
    	}
    
    when called with len == 0, triggers an issue in the GCC SRA optimization
    pass (Scalar Replacement of Aggregates), which handles promotion of signed
    struct members incorrectly. This is a known but as yet unresolved issue.
    (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932). In this particular
    case, it is causing the second while loop to be executed erroneously a
    single time, causing the additional space characters to be printed.
    
    So disable the optimization by passing -fno-ipa-sra.
    
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 05a96acd1970f17028bcec4835357551df2cc1da
Author: Russell King <rmk+kernel@arm.linux.org.uk>
Date:   Fri Sep 11 16:44:02 2015 +0100

    ARM: fix Thumb2 signal handling when ARMv6 is enabled
    
    commit 9b55613f42e8d40d5c9ccb8970bde6af4764b2ab upstream.
    
    When a kernel is built covering ARMv6 to ARMv7, we omit to clear the
    IT state when entering a signal handler.  This can cause the first
    few instructions to be conditionally executed depending on the parent
    context.
    
    In any case, the original test for >= ARMv7 is broken - ARMv6 can have
    Thumb-2 support as well, and an ARMv6T2 specific build would omit this
    code too.
    
    Relax the test back to ARMv6 or greater.  This results in us always
    clearing the IT state bits in the PSR, even on CPUs where these bits
    are reserved.  However, they're reserved for the IT state, so this
    should cause no harm.
    
    Fixes: d71e1352e240 ("Clear the IT state when invoking a Thumb-2 signal handler")
    Acked-by: Tony Lindgren <tony@atomide.com>
    Tested-by: H. Nikolaus Schaller <hns@goldelico.com>
    Tested-by: Grazvydas Ignotas <notasas@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2cb1fe8ba813215c0258fa08bc4eba59b82a6440
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Mon Aug 31 16:13:47 2015 -0700

    hwmon: (nct6775) Swap STEP_UP_TIME and STEP_DOWN_TIME registers for most chips
    
    commit 728d29400488d54974d3317fe8a232b45fdb42ee upstream.
    
    The STEP_UP_TIME and STEP_DOWN_TIME registers are swapped for all chips but
    NCT6775.
    
    Reported-by: Grazvydas Ignotas <notasas@gmail.com>
    Reviewed-by: Jean Delvare <jdelvare@suse.de>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4688a16e79d81c85564711a48914992f81b2286c
Author: Arnaldo Carvalho de Melo <acme@redhat.com>
Date:   Fri Sep 11 12:36:12 2015 -0300

    perf header: Fixup reading of HEADER_NRCPUS feature
    
    commit caa470475d9b59eeff093ae650800d34612c4379 upstream.
    
    The original patch introducing this header wrote the number of CPUs available
    and online in one order and then swapped those values when reading, fix it.
    
    Before:
    
      # perf record usleep 1
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 4
      # nrcpus avail : 4
      # echo 0 > /sys/devices/system/cpu/cpu2/online
      # perf record usleep 1
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 4
      # nrcpus avail : 3
      # echo 0 > /sys/devices/system/cpu/cpu1/online
      # perf record usleep 1
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 4
      # nrcpus avail : 2
    
    After the fix, bringing back the CPUs online:
    
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 2
      # nrcpus avail : 4
      # echo 1 > /sys/devices/system/cpu/cpu2/online
      # perf record usleep 1
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 3
      # nrcpus avail : 4
      # echo 1 > /sys/devices/system/cpu/cpu1/online
      # perf record usleep 1
      # perf report --header-only | grep 'nrcpus \(online\|avail\)'
      # nrcpus online : 4
      # nrcpus avail : 4
    
    Acked-by: Namhyung Kim <namhyung@kernel.org>
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Jiri Olsa <jolsa@kernel.org>
    Cc: Kan Liang <kan.liang@intel.com>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Wang Nan <wangnan0@huawei.com>
    Fixes: fbe96f29ce4b ("perf tools: Make perf.data more self-descriptive (v8)")
    Link: http://lkml.kernel.org/r/20150911153323.GP23511@kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4800dd5eab0feaa2eae5c736a271bb43df017201
Author: Kan Liang <kan.liang@intel.com>
Date:   Thu Jul 2 03:08:43 2015 -0400

    perf stat: Get correct cpu id for print_aggr
    
    commit 601083cffb7cabdcc55b8195d732f0f7028570fa upstream.
    
    print_aggr() fails to print per-core/per-socket statistics after commit
    582ec0829b3d ("perf stat: Fix per-socket output bug for uncore events")
    if events have different cpus. This is because, in print_aggr(),
    aggr_get_id needs an index (not a cpu id) to find the core/pkg id.
    Also, the evsel cpu maps should be used to get the aggregated id.
    
    Here is an example:
    
    Counting events cycles,uncore_imc_0/cas_count_read/. (Uncore event has
    cpumask 0,18)
    
      $ perf stat -e cycles,uncore_imc_0/cas_count_read/ -C0,18 --per-core sleep 2
    
    Without this patch, it fails to get the CPU 18 result.
    
       Performance counter stats for 'CPU(s) 0,18':
    
      S0-C0           1            7526851      cycles
      S0-C0           1               1.05 MiB  uncore_imc_0/cas_count_read/
      S1-C0           0      <not counted>      cycles
      S1-C0           0      <not counted> MiB  uncore_imc_0/cas_count_read/
    
    With this patch, it can get both CPU0 and CPU18 results.
    
       Performance counter stats for 'CPU(s) 0,18':
    
      S0-C0           1            6327768      cycles
      S0-C0           1               0.47 MiB  uncore_imc_0/cas_count_read/
      S1-C0           1             330228      cycles
      S1-C0           1               0.29 MiB  uncore_imc_0/cas_count_read/
    
    Signed-off-by: Kan Liang <kan.liang@intel.com>
    Acked-by: Jiri Olsa <jolsa@kernel.org>
    Acked-by: Stephane Eranian <eranian@google.com>
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Fixes: 582ec0829b3d ("perf stat: Fix per-socket output bug for uncore events")
    Link: http://lkml.kernel.org/r/1435820925-51091-1-git-send-email-kan.liang@intel.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8172440d38215ba5b7a7cab09d17c47d37a71e0f
Author: Arnaldo Carvalho de Melo <acme@redhat.com>
Date:   Mon Aug 10 16:53:54 2015 -0300

    perf hists: Update the column width for the "srcline" sort key
    
    commit e8e6d37e73e6b950c891c780745460b87f4755b6 upstream.
    
    When we introduce a new sort key, we need to update the
    hists__calc_col_len() function accordingly, otherwise the width
    will be limited to strlen(header).
    
    We can't update it when obtaining a line value for a column (for
    instance, in sort__srcline_cmp()), because we reset it all when doing a
    resort (see hists__output_recalc_col_len()), so we need to, from what is
    in the hist_entry fields, set each of the column widths.
    
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Jiri Olsa <jolsa@kernel.org>
    Fixes: 409a8be61560 ("perf tools: Add sort by src line/number")
    Link: http://lkml.kernel.org/n/tip-jgbe0yx8v1gs89cslr93pvz2@git.kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3de5bd0b9791f0e6eb1257a1d8ef979890c989ac
Author: Adrian Hunter <adrian.hunter@intel.com>
Date:   Thu Sep 24 13:05:22 2015 +0300

    perf tools: Fix copying of /proc/kcore
    
    commit b5cabbcbd157a4bf5a92dfc85134999a3b55342d upstream.
    
    A copy of /proc/kcore containing the kernel text can be made to the
    buildid cache. e.g.
    
    	perf buildid-cache -v -k /proc/kcore
    
    To work around objdump limitations, a copy is also made when annotating
    against /proc/kcore.
    
    The copying process stops working with libelf from about v1.62 onwards
    (the problem was found with v1.63).
    
    The cause is that a call to gelf_getphdr() in kcore__add_phdr() fails
    because additional validation has been added to gelf_getphdr().
    
    The use of gelf_getphdr() is a misguided attempt to get default
    initialization of the Gelf_Phdr structure.  That should not be
    necessary because every member of the Gelf_Phdr structure is
    subsequently assigned.  So just remove the call to gelf_getphdr().
    
    Similarly, a call to gelf_getehdr() in gelf_kcore__init() can be
    removed also.
    
    Committer notes:
    
    Note to stable@kernel.org, from Adrian in the cover letter for this
    patchkit:
    
    The "Fix copying of /proc/kcore" problem goes back to v3.13 if you think
    it is important enough for stable.
    
    Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Link: http://lkml.kernel.org/r/1443089122-19082-3-git-send-email-adrian.hunter@intel.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6c53349e460ad10aeb6068f6153c064cc8d67ad0
Author: Jenny Derzhavetz <jennyf@mellanox.com>
Date:   Sun Sep 6 14:52:20 2015 +0300

    iser-target: remove command with state ISTATE_REMOVE
    
    commit a4c15cd957cbd728f685645de7a150df5912591a upstream.
    
    As documented in iscsit_sequence_cmd:
    /*
     * Existing callers for iscsit_sequence_cmd() will silently
     * ignore commands with CMDSN_LOWER_THAN_EXP, so force this
     * return for CMDSN_MAXCMDSN_OVERRUN as well..
     */
    
    We need to silently finish a command when it's in ISTATE_REMOVE.
    This fixes a teardown hang we were seeing where a mis-behaved
    initiator (triggered by allocation error injections) sent us a
    cmdsn which was lower than expected.
    
    Signed-off-by: Jenny Derzhavetz <jennyf@mellanox.com>
    Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 88e819558253927eaccc76c63cef2732f655c8c2
Author: Michal Hocko <mhocko@suse.com>
Date:   Thu Aug 27 20:16:37 2015 +0200

    scsi: fix scsi_error_handler vs. scsi_host_dev_release race
    
    commit 537b604c8b3aa8b96fe35f87dd085816552e294c upstream.
    
    b9d5c6b7ef57 ("[SCSI] cleanup setting task state in
    scsi_error_handler()") has introduced a race between scsi_error_handler
    and scsi_host_dev_release, resulting in a hang when the device goes
    away because scsi_error_handler might miss a wake up:
    
    CPU0					CPU1
    scsi_error_handler			scsi_host_dev_release
      					  kthread_stop()
      kthread_should_stop()
        test_bit(KTHREAD_SHOULD_STOP)
    					    set_bit(KTHREAD_SHOULD_STOP)
    					    wake_up_process()
    					    wait_for_completion()
    
      set_current_state(TASK_INTERRUPTIBLE)
      schedule()
    
    The most straightforward solution seems to be to invert the ordering of
    the set_current_state and kthread_should_stop.
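
    The corrected ordering follows the canonical kthread stop pattern
    (hedged sketch; the real loop in scsi_error_handler() checks the host's
    failed/busy counters, abbreviated here as work_to_do):

      while (true) {
          set_current_state(TASK_INTERRUPTIBLE);  /* publish the state first... */
          if (kthread_should_stop()) {            /* ...then test the stop flag */
              __set_current_state(TASK_RUNNING);
              break;
          }
          if (!work_to_do) {
              schedule();                         /* the wake up can no longer be missed */
              continue;
          }
          __set_current_state(TASK_RUNNING);
          /* ... run error recovery ... */
      }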
    
    The issue has been noticed during a reboot test on a 3.0-based kernel,
    but the current code seems to be affected in the same way.
    
    [jejb: additional comment added]
    Reported-and-debugged-by: Mike Mayer <Mike.Meyer@teradata.com>
    Signed-off-by: Michal Hocko <mhocko@suse.com>
    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
    Reviewed-by: Hannes Reinecke <hare@suse.de>
    Signed-off-by: James Bottomley <JBottomley@Odin.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 28b5dc064ae410801e1aed8ea66b1a1bd5556cbb
Author: Jason Wang <jasowang@redhat.com>
Date:   Tue Sep 15 14:41:57 2015 +0800

    kvm: fix zero length mmio searching
    
    commit 8f4216c7d28976f7ec1b2bcbfa0a9f787133c45e upstream.
    
    Currently, if we have a zero length mmio eventfd assigned on
    KVM_MMIO_BUS, it will never be found by kvm_io_bus_cmp() since that
    always compares the kvm_io_range() with the length that the guest
    wrote. This causes, e.g. for vhost, the kick to be trapped by qemu
    userspace instead of vhost. Fix this by using zero length if an
    iodevice is zero length.
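
    A hedged sketch of the comparison after the change (structure as in
    virt/kvm/kvm_main.c; comments paraphrased):

      static int kvm_io_bus_cmp(const struct kvm_io_range *r1,
                                const struct kvm_io_range *r2)
      {
          gpa_t addr1 = r1->addr;
          gpa_t addr2 = r2->addr;

          if (addr1 < addr2)
              return -1;

          /* A zero-length registration (r2->len == 0) matches on the
           * exact address, regardless of how much the guest wrote. */
          if (r2->len) {
              addr1 += r1->len;
              addr2 += r2->len;
          }

          if (addr1 > addr2)
              return 1;

          return 0;
      }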
    
    Cc: Gleb Natapov <gleb@kernel.org>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>