commit 0e13335254d5d54933969dba1d7625f55e657f52
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Sep 27 10:57:40 2017 +0200

    Linux 3.18.72

commit 0aad447e99a025ae3a3833c18f94f9700b575367
Author: Michael Lyle <mlyle@lyle.org>
Date:   Wed Sep 6 14:26:02 2017 +0800

    bcache: fix bch_hprint crash and improve output
    
    commit 9276717b9e297a62d1151a43d1cd286213f68eb7 upstream.
    
    Most importantly, solve a crash where %llu was used to format signed
    numbers.  This would cause a buffer overflow when reading sysfs
    writeback_rate_debug, as only 20 bytes were allocated for this and
    %llu writes 20 characters plus a null.
    
    Always use the units mechanism rather than having different output
    paths for simplicity.
    
    Also, correct problems with display output where 1.10 was a larger
    number than 1.09, by multiplying by 10 and then dividing by 1024 instead
    of dividing by 100.  (Remainders of >= 1000 would print as .10).
    
    Minor changes: Always display the decimal point instead of trying to
    omit it based on number of digits shown.  Decide what units to use
    based on 1000 as a threshold, not 1024 (in other words, always print
    at most 3 digits before the decimal point).
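
    A minimal user-space sketch (not the bch_hprint() code itself) showing
    why a 20-byte buffer is too small once a negative value goes through
    "%llu":

        #include <stdio.h>

        int main(void)
        {
                char buf[32];
                long long v = -1;       /* a negative rate value */

                /* via "%llu" this becomes 18446744073709551615: 20 digits,
                 * which plus the terminating NUL needs 21 bytes and would
                 * overflow the original 20-byte sysfs buffer */
                int n = snprintf(buf, sizeof(buf), "%llu", (unsigned long long)v);
                printf("\"%s\" needs %d+1 bytes\n", buf, n);

                /* printed as a signed number it fits easily */
                n = snprintf(buf, sizeof(buf), "%lld", v);
                printf("\"%s\" needs %d+1 bytes\n", buf, n);
                return 0;
        }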
    
    Signed-off-by: Michael Lyle <mlyle@lyle.org>
    Reported-by: Dmitry Yu Okunev <dyokunev@ut.mephi.ru>
    Acked-by: Kent Overstreet <kent.overstreet@gmail.com>
    Reviewed-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 50aee960b0d1f55c2fb682ab54262bcceaf558aa
Author: Tang Junhui <tang.junhui@zte.com.cn>
Date:   Wed Sep 6 14:25:59 2017 +0800

    bcache: fix for gc and write-back race
    
    commit 9baf30972b5568d8b5bc8b3c46a6ec5b58100463 upstream.
    
    gc and write-back can race with each other (see the earlier email
    "bcache get stucked"):
    gc thread                               write-back thread
    |                                       |bch_writeback_thread()
    |bch_gc_thread()                        |
    |                                       |==>read_dirty()
    |==>bch_btree_gc()                      |
    |==>btree_root() //get btree root       |
    |                //node write lock      |
    |==>bch_btree_gc_root()                 |
    |                                       |==>read_dirty_submit()
    |                                       |==>write_dirty()
    |                                       |==>continue_at(cl,
    |                                       |               write_dirty_finish,
    |                                       |               system_wq);
    |                                       |==>write_dirty_finish()//execute
    |                                       |               //in system_wq
    |                                       |==>bch_btree_insert()
    |                                       |==>bch_btree_map_leaf_nodes()
    |                                       |==>__bch_btree_map_nodes()
    |                                       |==>btree_root //try to get btree
    |                                       |              //root node read
    |                                       |              //lock
    |                                       |-----stuck here
    |==>bch_btree_set_root()
    |==>bch_journal_meta()
    |==>bch_journal()
    |==>journal_try_write()
    |==>journal_write_unlocked() //journal_full(&c->journal)
    |                            //condition satisfied
    |==>continue_at(cl, journal_write, system_wq); //try to execute
    |                               //journal_write in system_wq
    |                               //but work queue is executing
    |                               //write_dirty_finish()
    |==>closure_sync(); //wait for journal_write to
    |                   //execute and wake up gc,
    |-------------stuck here
    |==>release root node write lock
    
    This patch allocates a separate workqueue for the write-back thread to
    avoid this race.
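
    A kernel-context sketch of the approach; the field and workqueue names
    below are illustrative and may differ from the actual patch:

        /* allocate a dedicated workqueue for the write-back path instead
         * of queueing write_dirty_finish() on the shared system_wq */
        dc->writeback_write_wq = alloc_workqueue("bcache_writeback",
                                                 WQ_MEM_RECLAIM, 0);
        if (!dc->writeback_write_wq)
                return -ENOMEM;

        /* ...and in write_dirty(), continue on that private queue so a
         * journal_write stuck behind system_wq work can no longer block it */
        continue_at(cl, write_dirty_finish, dc->writeback_write_wq);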
    
    (Commit log re-organized by Coly Li to pass checkpatch.pl checking)
    
    Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
    Acked-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c04dc907e590c85d84f252c1de390da1cfd5e3dc
Author: Tony Asleson <tasleson@redhat.com>
Date:   Wed Sep 6 14:25:57 2017 +0800

    bcache: Correct return value for sysfs attach errors
    
    commit 77fa100f27475d08a569b9d51c17722130f089e7 upstream.
    
    If you encounter any errors in bch_cached_dev_attach it will return
    a negative error code.  The variable 'v' which stores the result is
    unsigned, thus user space sees a very large value returned for bytes
    written, which can cause incorrect user space behavior.  Use a single
    signed variable throughout the function to preserve the ability to
    return errors.
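
    A small user-space illustration (the helper name is made up) of why an
    unsigned result variable breaks error returns:

        #include <errno.h>
        #include <stdio.h>
        #include <sys/types.h>

        /* stands in for bch_cached_dev_attach(): returns 0 or -errno */
        static int attach(int fail)
        {
                return fail ? -ENOENT : 0;
        }

        int main(void)
        {
                size_t v_unsigned = attach(1);  /* old: unsigned holder */
                ssize_t v_signed = attach(1);   /* fix: keep it signed  */

                /* the unsigned variable turns -ENOENT into a huge
                 * "bytes written" count as seen from user space */
                printf("unsigned: %zu\n", v_unsigned);
                printf("signed:   %zd\n", v_signed);
                return 0;
        }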
    
    Signed-off-by: Tony Asleson <tasleson@redhat.com>
    Acked-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0b312f81db414f2ec82034c317bd2dae58c03fb1
Author: Tang Junhui <tang.junhui@zte.com.cn>
Date:   Wed Sep 6 14:25:56 2017 +0800

    bcache: correct cache_dirty_target in __update_writeback_rate()
    
    commit a8394090a9129b40f9d90dcb7f4a49d60c727ca6 upstream.
    
    __update_writeback_rate() uses a Proportional-Differential (PD)
    controller algorithm to control the writeback rate. A dirty target
    number is used in this PD controller to control the writeback rate. A
    larger target number makes the writeback rate smaller; conversely, a
    smaller target number makes the writeback rate larger.
    
    bcache uses the following steps to calculate the target number,
    1) cache_sectors = all-buckets-of-cache-set * buckets-size
    2) cache_dirty_target = cache_sectors * cached-device-writeback_percent
    3) target = cache_dirty_target *
    (sectors-of-cached-device/sectors-of-all-cached-devices-of-this-cache-set)
    
    The calculation at step 1) for cache_sectors is incorrect, because it
    does not account for dirty blocks occupied by flash-only volumes.
    
    A flash-only volume can be thought of as a bcache device without a
    backing (cached) device. All data sectors allocated for it are
    persistent on the cache device and marked dirty; they are not touched
    by the bcache writeback and garbage collection code. So data blocks of
    flash-only volumes should be ignored when calculating cache_sectors of
    the cache set.
    
    The current code does not subtract the dirty sectors of flash-only
    volumes, which results in a larger target number from the above 3
    steps. In consequence the writeback rate of the cached devices is
    smaller than the correct value, and writeback is slower on all cached
    devices.
    
    This patch fixes the incorrectly slow writeback rate by subtracting the
    dirty sectors of flash-only volumes in __update_writeback_rate().
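
    A worked example with made-up numbers showing how subtracting the
    flash-only dirty sectors lowers the target and thus raises the
    writeback rate:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* illustrative numbers, not measured values */
                uint64_t cache_sectors = 1000000; /* step 1: buckets * bucket size */
                uint64_t flash_dirty   = 200000;  /* dirty sectors of flash-only volumes */
                uint64_t wb_percent    = 10;      /* step 2 */
                uint64_t dev_share_pct = 50;      /* this cached device's share, step 3 */

                /* old: flash-only dirty sectors inflate cache_sectors and the target */
                uint64_t target_old = cache_sectors * wb_percent / 100
                                      * dev_share_pct / 100;

                /* fixed: subtract the flash-only dirty sectors first */
                uint64_t target_new = (cache_sectors - flash_dirty) * wb_percent / 100
                                      * dev_share_pct / 100;

                printf("old target=%llu, new target=%llu (smaller => faster writeback)\n",
                       (unsigned long long)target_old,
                       (unsigned long long)target_new);
                return 0;
        }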
    
    (Commit log composed by Coly Li to pass checkpatch.pl checking)
    
    Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
    Reviewed-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 615f8ab2ffcdd13ad0ce98e62a5268f9954893a6
Author: Jan Kara <jack@suse.cz>
Date:   Wed Sep 6 14:25:51 2017 +0800

    bcache: Fix leak of bdev reference
    
    commit 4b758df21ee7081ab41448d21d60367efaa625b3 upstream.
    
    If blkdev_get_by_path() in register_bcache() fails, we try to look up
    the block device using lookup_bdev() to detect which situation we are
    in so we can report the error properly. However we never drop the
    reference returned to us from lookup_bdev(). Fix that.
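
    A kernel-context sketch of the error path described above; placement
    and surrounding code are illustrative, not the literal register_bcache()
    source:

        bdev = blkdev_get_by_path(path, mode, holder);
        if (IS_ERR(bdev)) {
                /* only used to decide which error message to print */
                struct block_device *probe = lookup_bdev(path);

                if (!IS_ERR(probe))
                        bdput(probe);   /* the reference that used to be leaked */
        }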
    
    Signed-off-by: Jan Kara <jack@suse.cz>
    Acked-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cfa0386af092a3afcf5edc8930ed7bb4990697e3
Author: Tang Junhui <tang.junhui@zte.com.cn>
Date:   Thu Sep 7 01:28:53 2017 +0800

    bcache: initialize dirty stripes in flash_dev_run()
    
    commit 175206cf9ab63161dec74d9cd7f9992e062491f5 upstream.
    
    bcache uses a Proportional-Differential (PD) controller algorithm to
    control the writeback rate to cached devices. In the PD controller
    algorithm, dirty stripes of thin flash devices should not be counted,
    because flash-only volumes never write back dirty data.
    
    Currently the dirty stripe counter for a thin flash device is not
    initialized when the thin flash device starts, which means the
    following calculation in the PD controller will reference an undefined
    dirty stripe count, and all cached devices attached to the same cache
    set where the thin flash device lies may have an inaccurate writeback
    rate.
    
    This patch calls bch_sectors_dirty_init() in flash_dev_run(), to
    correctly initialize the dirty stripe counter when the thin flash
    device starts to run. This patch also makes the following parameter
    type change,
     -void bch_sectors_dirty_init(struct cached_dev *dc);
     +void bch_sectors_dirty_init(struct bcache_device *);
    to call this function conveniently in flash_dev_run().
    
    (Commit log is composed by Coly Li)
    
    Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
    Reviewed-by: Coly Li <colyli@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 15ac0595018f5fdfbec2a23574b81a01c73ee5e1
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Tue Aug 8 08:56:21 2017 -0400

    media: uvcvideo: Prevent heap overflow when accessing mapped controls
    
    commit 7e09f7d5c790278ab98e5f2c22307ebe8ad6e8ba upstream.
    
    The size of uvc_control_mapping is user controlled, leading to a
    potential heap overflow in the uvc driver. This adds a check to verify
    that the user-provided size fits within the bounds of the defined
    buffer size.
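
    An illustrative user-space sketch of the kind of bounds check meant
    here; names and the exact condition are assumptions, not the driver
    code:

        #include <stdint.h>
        #include <stdio.h>

        /* reject a user-provided mapping whose bit offset + bit size does
         * not fit into the control's buffer */
        static int mapping_fits(uint32_t offset_bits, uint32_t size_bits,
                                uint32_t ctrl_size_bytes)
        {
                uint32_t total_bits = ctrl_size_bytes * 8;

                /* written without adding the user-controlled values,
                 * so the check itself cannot wrap around */
                return size_bits <= total_bits &&
                       offset_bits <= total_bits - size_bits;
        }

        int main(void)
        {
                printf("%d\n", mapping_fits(0, 16, 4));   /* 1: fits in 4 bytes */
                printf("%d\n", mapping_fits(24, 16, 4));  /* 0: runs past the end */
                printf("%d\n", mapping_fits(8, 0xFFFFFFFFu, 4)); /* 0: huge size */
                return 0;
        }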
    
    Originally-from: Richard Simmons <rssimmo@amazon.com>
    
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
    Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6b3412ff96615bab06863c00c371b5601e3b1e1c
Author: Daniel Mentz <danielmentz@google.com>
Date:   Wed Aug 2 23:42:17 2017 -0400

    media: v4l2-compat-ioctl32: Fix timespec conversion
    
    commit 9c7ba1d7634cef490b85bc64c4091ff004821bfd upstream.
    
    Certain syscalls like recvmmsg support 64 bit timespec values for the
    X32 ABI. The helper function compat_put_timespec converts a timespec
    value to a 32 bit or 64 bit value depending on what ABI is used. The
    v4l2 compat layer, however, is not designed to support 64 bit timespec
    values and always uses 32 bit values. Hence, compat_put_timespec must
    not be used.
    
    Without this patch, user space will be provided with bad timestamp
    values from the VIDIOC_DQEVENT ioctl. Also, fields of the struct
    v4l2_event32 that come immediately after timestamp get overwritten,
    namely the field named id.
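
    A self-contained user-space illustration of the layout problem; the
    structs below are hypothetical, only similar in spirit to v4l2_event32
    and a 64-bit timespec:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct event32 {                /* 32-bit user-space layout */
                int32_t  tv_sec;
                int32_t  tv_nsec;
                uint32_t id;            /* field right after the timestamp */
                uint32_t reserved;
        };

        struct ts64 {                   /* 64-bit timespec as used by X32 */
                int64_t tv_sec;
                int64_t tv_nsec;
        };

        int main(void)
        {
                struct event32 ev = { 0, 0, 7, 0 };
                struct ts64 t = { 1504642143, 42 };

                /* wrong: a 64-bit timespec store at the timestamp offset
                 * spills into the fields that follow, including "id" */
                memcpy(&ev, &t, sizeof(t));
                printf("after 64-bit store: id=%u (was 7)\n", ev.id);

                /* right: copy each field as 32 bits, matching the layout */
                ev.id = 7;
                ev.tv_sec = (int32_t)t.tv_sec;
                ev.tv_nsec = (int32_t)t.tv_nsec;
                printf("after 32-bit stores: id=%u\n", ev.id);
                return 0;
        }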
    
    Fixes: 81993e81a994 ("compat: Get rid of (get|put)_compat_time(val|spec)")
    Cc: H. Peter Anvin <hpa@linux.intel.com>
    Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
    Cc: Tiffany Lin <tiffany.lin@mediatek.com>
    Cc: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
    Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
    Signed-off-by: Daniel Mentz <danielmentz@google.com>
    Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
    Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 19b86cde515002a32b413964babc0e736478e19e
Author: Aleksandr Bezzubikov <zuban32s@gmail.com>
Date:   Tue Jul 18 17:12:25 2017 +0300

    PCI: shpchp: Enable bridge bus mastering if MSI is enabled
    
    commit 48b79a14505349a29b3e20f03619ada9b33c4b17 upstream.
    
    An SHPC may generate MSIs to notify software about slot or controller
    events (SHPC spec r1.0, sec 4.7).  A PCI device can only generate an MSI if
    it has bus mastering enabled.
    
    Enable bus mastering if the bridge contains an SHPC that uses MSI for event
    notifications.
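
    A kernel-context sketch of the idea; the condition is illustrative,
    not the literal shpchp code:

        /* an SHPC that signals slot/controller events via MSI can only
         * send those MSIs if its bridge is allowed to master the bus */
        if (shpc_uses_msi)                      /* illustrative condition */
                pci_set_master(bridge_pci_dev); /* the hotplug bridge's pci_dev */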
    
    Signed-off-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
    [bhelgaas: changelog]
    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 13bbb8242ab25fd693f49c43ab560e6f6b85142b
Author: Jose Abreu <Jose.Abreu@synopsys.com>
Date:   Fri Sep 1 17:00:23 2017 +0100

    ARC: Re-enable MMU upon Machine Check exception
    
    commit 1ee55a8f7f6b7ca4c0c59e0b4b4e3584a085c2d3 upstream.
    
    I recently came upon a scenario where I would get a double fault
    machine check exception triggered by a kernel module.
    However the ensuing crash stacktrace (ksym lookup) was not working
    correctly.
    
    Turns out that machine check auto-disables the MMU while modules are
    allocated in kernel vaddr space.
    
    This patch re-enables the MMU before starting to print the stacktrace,
    making stacktracing of modules work upon a fatal exception.
    
    Signed-off-by: Jose Abreu <joabreu@synopsys.com>
    Reviewed-by: Alexey Brodkin <abrodkin@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    [vgupta: moved code into low level handler to avoid in 2 places]
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 88645cf3edc042ef20d70a9f530256fcffb53f0a
Author: Baohong Liu <baohong.liu@intel.com>
Date:   Tue Sep 5 16:57:19 2017 -0500

    tracing: Apply trace_clock changes to instance max buffer
    
    commit 170b3b1050e28d1ba0700e262f0899ffa4fccc52 upstream.
    
    Currently trace_clock timestamps are applied to both regular and max
    buffers only for global trace. For instance trace, trace_clock
    timestamps are applied only to regular buffer. But, regular and max
    buffers can be swapped, for example, following a snapshot. So, for
    instance trace, bad timestamps can be seen following a snapshot.
    Let's apply trace_clock timestamps to instance max buffer as well.
    
    Link: http://lkml.kernel.org/r/ebdb168d0be042dcdf51f81e696b17fabe3609c1.1504642143.git.tom.zanussi@linux.intel.com
    
    Fixes: 277ba0446 ("tracing: Add interface to allow multiple trace buffers")
    Signed-off-by: Baohong Liu <baohong.liu@intel.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 45a521cba61051409ea2b932104ab6767e7b68b4
Author: Steven Rostedt (VMware) <rostedt@goodmis.org>
Date:   Fri Sep 1 12:04:09 2017 -0400

    ftrace: Fix selftest goto location on error
    
    commit 46320a6acc4fb58f04bcf78c4c942cc43b20f986 upstream.
    
    In the second iteration of trace_selftest_ops(), the error goto label is
    wrong in the case where trace_selftest_test_global_cnt is off. In the
    case of error, it leaks the dynamic ops that was allocated.
    
    Fixes: 95950c2e ("ftrace: Add self-tests for multiple function trace users")
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 78d88643d218fccb0fcf7fa957e785d4048e35cb
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Wed Aug 30 16:30:35 2017 +0300

    scsi: qla2xxx: Fix an integer overflow in sysfs code
    
    commit e6f77540c067b48dee10f1e33678415bfcc89017 upstream.
    
    The value of "size" comes from the user.  When we add "start + size" it
    could lead to an integer overflow bug.
    
    It means we vmalloc() a lot more memory than we had intended.  I believe
    that on 64 bit systems vmalloc() can succeed even if we ask it to
    allocate huge 4GB buffers.  So we would get memory corruption and likely
    a crash when we call ha->isp_ops->write_optrom() and ->read_optrom().
    
    Only root can trigger this bug.
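
    A user-space sketch of an overflow-safe variant of the "start + size"
    range check; illustrative, not the literal qla2xxx code:

        #include <stdint.h>
        #include <stdio.h>

        static int range_ok(uint32_t start, uint32_t size, uint32_t limit)
        {
                /* "start + size > limit" can wrap around when both values
                 * come from user space; rewrite it without the addition */
                return start <= limit && size <= limit - start;
        }

        int main(void)
        {
                uint32_t limit = 0x400000;      /* e.g. option ROM region size */

                printf("%d\n", range_ok(0x1000, 0x2000, limit));      /* 1: fits */
                printf("%d\n", range_ok(0x1000, 0xFFFFF000u, limit)); /* 0: would wrap */
                return 0;
        }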
    
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=194061
    
    Fixes: b7cc176c9eb3 ("[SCSI] qla2xxx: Allow region-based flash-part accesses.")
    Reported-by: shqking <shqking@gmail.com>
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9793679d8dc92d1d8a187d023d2d7a17dd9348b5
Author: Hannes Reinecke <hare@suse.de>
Date:   Fri Sep 15 14:05:16 2017 +0200

    scsi: sg: fixup infoleak when using SG_GET_REQUEST_TABLE
    
    commit 3e0097499839e0fe3af380410eababe5a47c4cf9 upstream.
    
    When calling SG_GET_REQUEST_TABLE ioctl only a half-filled table is
    returned; the remaining part will then contain stale kernel memory
    information.  This patch zeroes out the entire table to avoid this
    issue.
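
    A kernel-context sketch of the fix's shape (zero-filled allocation so
    the unused tail can never leak stale memory); identifiers follow the
    sg driver but may differ in detail:

        sg_req_info_t *rinfo;

        rinfo = kcalloc(SG_MAX_QUEUE, SZ_SG_REQ_INFO, GFP_KERNEL);
        if (!rinfo)
                return -ENOMEM;
        sg_fill_request_table(sfp, rinfo);      /* fills only active entries */
        if (copy_to_user(p, rinfo, SZ_SG_REQ_INFO * SG_MAX_QUEUE))
                result = -EFAULT;
        kfree(rinfo);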
    
    Signed-off-by: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 691e12db75fb7f55bbbf8c1fea7d462eb1a5e38a
Author: Hannes Reinecke <hare@suse.de>
Date:   Fri Sep 15 14:05:15 2017 +0200

    scsi: sg: factor out sg_fill_request_table()
    
    commit 4759df905a474d245752c9dc94288e779b8734dd upstream.
    
    Factor out sg_fill_request_table() for better readability.
    
    [mkp: typos, applied by hand]
    
    Signed-off-by: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9f8cb7be5242f23b01647704da2d274d2b3b8578
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Thu Aug 17 10:09:54 2017 +0300

    scsi: sg: off by one in sg_ioctl()
    
    commit bd46fc406b30d1db1aff8dabaff8d18bb423fdcf upstream.
    
    If "val" is SG_MAX_QUEUE then we are one element beyond the end of the
    "rinfo" array so the > should be >=.
    
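    A tiny user-space illustration of the off-by-one; SG_MAX_QUEUE matches
    the sg driver's value of 16:

        #include <stdio.h>

        #define SG_MAX_QUEUE 16

        int main(void)
        {
                int val = SG_MAX_QUEUE; /* passes the old "val > SG_MAX_QUEUE" check */

                /* valid indexes into rinfo[SG_MAX_QUEUE] are 0 .. SG_MAX_QUEUE - 1,
                 * so the bound has to be ">=" rather than ">" */
                if (val >= SG_MAX_QUEUE)
                        printf("val=%d rejected: one past the last valid index %d\n",
                               val, SG_MAX_QUEUE - 1);
                return 0;
        }
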
    Fixes: 109bade9c625 ("scsi: sg: use standard lists for sg_requests")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Acked-by: Douglas Gilbert <dgilbert@interlog.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a19e985cac6dbdf0571ef3b369631661d5512b89
Author: Hannes Reinecke <hare@suse.de>
Date:   Fri Apr 7 09:34:16 2017 +0200

    scsi: sg: use standard lists for sg_requests
    
    commit 109bade9c625c89bb5ea753aaa1a0a97e6fbb548 upstream.
    
    'Sg_request' is using a private list implementation; convert it to
    standard lists.
    
    Signed-off-by: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
    Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 53a5b214a406fe05bc3cfa2e4efc84e270731f18
Author: Hannes Reinecke <hare@suse.de>
Date:   Fri Apr 7 09:34:13 2017 +0200

    scsi: sg: remove 'save_scat_len'
    
    commit 136e57bf43dc4babbfb8783abbf707d483cacbe3 upstream.
    
    Unused.
    
    Signed-off-by: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
    Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9f582840f9e58a34f9f293a8d6da26fedf08f1d8
Author: Steffen Maier <maier@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:58 2017 +0200

    scsi: zfcp: trace high part of "new" 64 bit SCSI LUN
    
    commit 5d4a3d0a2ff23799b956e5962b886287614e7fad upstream.
    
    Complements debugging aspects of the otherwise functionally complete
    v3.17 commit 9cb78c16f5da ("scsi: use 64-bit LUNs").
    
    While I don't have access to a target exporting 3 or 4 level LUNs,
    I did test it by explicitly attaching a non-existent fake 4 level LUN
    by means of zfcp sysfs attribute "unit_add".
    In order to see corresponding trace records of otherwise successful
    events, we had to increase the trace level of area SCSI and HBA to 6.
    
    $ echo 6 > /sys/kernel/debug/s390dbf/zfcp_0.0.1880_scsi/level
    $ echo 6 > /sys/kernel/debug/s390dbf/zfcp_0.0.1880_hba/level
    
    $ echo 0x4011402240334044 > \
      /sys/bus/ccw/drivers/zfcp/0.0.1880/0x50050763031bd327/unit_add
    
    Example output formatted by an updated zfcpdbf from the s390-tools
    package interspersed with kernel messages at scsi_logging_level=4605:
    
    Timestamp      : ...
    Area           : REC
    Subarea        : 00
    Level          : 1
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : scsla_1
    LUN            : 0x4011402240334044
    WWPN           : 0x50050763031bd327
    D_ID           : 0x00......
    Adapter status : 0x5400050b
    Port status    : 0x54000001
    LUN status     : 0x41000000
    Ready count    : 0x00000001
    Running count  : 0x00000000
    ERP want       : 0x01
    ERP need       : 0x01
    
    scsi 2:0:0:4630896905707208721: scsi scan: INQUIRY pass 1 length 36
    scsi 2:0:0:4630896905707208721: scsi scan: INQUIRY successful with code 0x0
    
    Timestamp      : ...
    Area           : HBA
    Subarea        : 00
    Level          : 6
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : fs_norm
    Request ID     : 0x<inquiry2-req-id>
    Request status : 0x00000010
    FSF cmnd       : 0x00000001
    FSF sequence no: 0x...
    FSF issued     : ...
    FSF stat       : 0x00000000
    FSF stat qual  : 00000000 00000000 00000000 00000000
    Prot stat      : 0x00000001
    Prot stat qual : ........ ........ 00000000 00000000
    Port handle    : 0x...
    LUN handle     : 0x...
    |
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 6
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : rsl_nor
    Request ID     : 0x<inquiry2-req-id>
    SCSI ID        : 0x00000000
    SCSI LUN       : 0x40224011
    SCSI LUN high  : 0x40444033 <=======================
    SCSI result    : 0x00000000
    SCSI retries   : 0x00
    SCSI allowed   : 0x03
    SCSI scribble  : 0x<inquiry2-req-id>
    SCSI opcode    : 12000000 a4000000 00000000 00000000
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000000 00000000
                     00000000 00000000
    
    scsi 2:0:0:4630896905707208721: scsi scan: INQUIRY pass 2 length 164
    scsi 2:0:0:4630896905707208721: scsi scan: INQUIRY successful with code 0x0
    scsi 2:0:0:4630896905707208721: scsi scan: peripheral device type of 31, \
    no device added
    
    Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: 9cb78c16f5da ("scsi: use 64-bit LUNs")
    Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Reviewed-by: Jens Remus <jremus@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit bb5d52954f27b6c6ebc02bbc88e2a55d60d1914c
Author: Steffen Maier <maier@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:57 2017 +0200

    scsi: zfcp: trace HBA FSF response by default on dismiss or timedout late response
    
    commit fdb7cee3b9e3c561502e58137a837341f10cbf8b upstream.
    
    At the default trace level, we only trace unsuccessful events including
    FSF responses.
    
    zfcp_dbf_hba_fsf_response() only used protocol status and FSF status to
    decide on an unsuccessful response. However, this is only one of multiple
    possible sources determining a failed struct zfcp_fsf_req.
    
    An FSF request can also "fail" if its response runs into an ERP timeout
    or if it gets dismissed because a higher level recovery was triggered
    [trace tags "erscf_1" or "erscf_2" in zfcp_erp_strategy_check_fsfreq()].
    FSF requests with ERP timeout are:
    FSF_QTCB_EXCHANGE_CONFIG_DATA, FSF_QTCB_EXCHANGE_PORT_DATA,
    FSF_QTCB_OPEN_PORT_WITH_DID or FSF_QTCB_CLOSE_PORT or
    FSF_QTCB_CLOSE_PHYSICAL_PORT for target ports,
    FSF_QTCB_OPEN_LUN, FSF_QTCB_CLOSE_LUN.
    One example is slow queue processing which can cause follow-on errors,
    e.g. FSF_PORT_ALREADY_OPEN after FSF_QTCB_OPEN_PORT_WITH_DID timed out.
    In order to see the root cause, we need to see late responses even if the
    channel presented them successfully with FSF_PROT_GOOD and FSF_GOOD.
    Example trace records formatted with zfcpdbf from the s390-tools package:
    
    Timestamp      : ...
    Area           : REC
    Subarea        : 00
    Level          : 1
    Exception      : -
    CPU ID         : ..
    Caller         : ...
    Record ID      : 1
    Tag            : fcegpf1
    LUN            : 0xffffffffffffffff
    WWPN           : 0x<WWPN>
    D_ID           : 0x00<D_ID>
    Adapter status : 0x5400050b
    Port status    : 0x41200000
    LUN status     : 0x00000000
    Ready count    : 0x00000001
    Running count  : 0x...
    ERP want       : 0x02                           ZFCP_ERP_ACTION_REOPEN_PORT
    ERP need       : 0x02                           ZFCP_ERP_ACTION_REOPEN_PORT
    |
    Timestamp      : ...                            30 seconds later
    Area           : REC
    Subarea        : 00
    Level          : 1
    Exception      : -
    CPU ID         : ..
    Caller         : ...
    Record ID      : 2
    Tag            : erscf_2
    LUN            : 0xffffffffffffffff
    WWPN           : 0x<WWPN>
    D_ID           : 0x00<D_ID>
    Adapter status : 0x5400050b
    Port status    : 0x41200000
    LUN status     : 0x00000000
    Request ID     : 0x<request_ID>
    ERP status     : 0x10000000                     ZFCP_STATUS_ERP_TIMEDOUT
    ERP step       : 0x0800                         ZFCP_ERP_STEP_PORT_OPENING
    ERP action     : 0x02                           ZFCP_ERP_ACTION_REOPEN_PORT
    ERP count      : 0x00
    |
    Timestamp      : ...                            later than previous record
    Area           : HBA
    Subarea        : 00
    Level          : 5      > default level         => 3    <= default level
    Exception      : -
    CPU ID         : 00
    Caller         : ...
    Record ID      : 1
    Tag            : fs_qtcb                        => fs_rerr
    Request ID     : 0x<request_ID>
    Request status : 0x00001010                     ZFCP_STATUS_FSFREQ_DISMISSED
                                                    | ZFCP_STATUS_FSFREQ_CLEANUP
    FSF cmnd       : 0x00000005
    FSF sequence no: 0x...
    FSF issued     : ...                            > 30 seconds ago
    FSF stat       : 0x00000000                     FSF_GOOD
    FSF stat qual  : 00000000 00000000 00000000 00000000
    Prot stat      : 0x00000001                     FSF_PROT_GOOD
    Prot stat qual : 00000000 00000000 00000000 00000000
    Port handle    : 0x...
    LUN handle     : 0x00000000
    QTCB log length: ...
    QTCB log info  : ...
    
    In case of problems detecting that new responses are waiting on the input
    queue, we sooner or later trigger adapter recovery due to an FSF request
    timeout (trace tag "fsrth_1").
    FSF requests with FSF request timeout are:
    typically FSF_QTCB_ABORT_FCP_CMND; but theoretically also
    FSF_QTCB_EXCHANGE_CONFIG_DATA or FSF_QTCB_EXCHANGE_PORT_DATA via sysfs,
    FSF_QTCB_OPEN_PORT_WITH_DID or FSF_QTCB_CLOSE_PORT for WKA ports,
    FSF_QTCB_FCP_CMND for task management function (LUN / target reset).
    One or more pending requests can meanwhile have FSF_PROT_GOOD and FSF_GOOD
    because the channel filled in the response via DMA into the request's QTCB.
    
    In a theoretical case, inject code can create an erroneous FSF request
    on purpose. If data router is enabled, it uses deferred error reporting.
    A READ SCSI command can succeed with FSF_PROT_GOOD, FSF_GOOD, and
    SAM_STAT_GOOD. But on writing the read data to host memory via DMA,
    it can still fail, e.g. if an intentionally wrong scatter list does not
    provide enough space. Rather than getting an unsuccessful response,
    we get a QDIO activate check which in turn triggers adapter recovery.
    One or more pending requests can meanwhile have FSF_PROT_GOOD and FSF_GOOD
    because the channel filled in the response via DMA into the request's QTCB.
    Example trace records formatted with zfcpdbf from the s390-tools package:
    
    Timestamp      : ...
    Area           : HBA
    Subarea        : 00
    Level          : 6      > default level         => 3    <= default level
    Exception      : -
    CPU ID         : ..
    Caller         : ...
    Record ID      : 1
    Tag            : fs_norm                        => fs_rerr
    Request ID     : 0x<request_ID2>
    Request status : 0x00001010                     ZFCP_STATUS_FSFREQ_DISMISSED
                                                    | ZFCP_STATUS_FSFREQ_CLEANUP
    FSF cmnd       : 0x00000001
    FSF sequence no: 0x...
    FSF issued     : ...
    FSF stat       : 0x00000000                     FSF_GOOD
    FSF stat qual  : 00000000 00000000 00000000 00000000
    Prot stat      : 0x00000001                     FSF_PROT_GOOD
    Prot stat qual : ........ ........ 00000000 00000000
    Port handle    : 0x...
    LUN handle     : 0x...
    |
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 3
    Exception      : -
    CPU ID         : ..
    Caller         : ...
    Record ID      : 1
    Tag            : rsl_err
    Request ID     : 0x<request_ID2>
    SCSI ID        : 0x...
    SCSI LUN       : 0x...
    SCSI result    : 0x000e0000                     DID_TRANSPORT_DISRUPTED
    SCSI retries   : 0x00
    SCSI allowed   : 0x05
    SCSI scribble  : 0x<request_ID2>
    SCSI opcode    : 28...                          Read(10)
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000000 00000000
                                             ^^     SAM_STAT_GOOD
                     00000000 00000000
    
    Only with luck in both above cases, we could see a follow-on trace record
    of an unsuccessful event following a successful but late FSF response with
    FSF_PROT_GOOD and FSF_GOOD. Typically this was the case for I/O requests
    resulting in a SCSI trace record "rsl_err" with DID_TRANSPORT_DISRUPTED
    [On ZFCP_STATUS_FSFREQ_DISMISSED, zfcp_fsf_protstatus_eval() sets
    ZFCP_STATUS_FSFREQ_ERROR seen by the request handler functions as failure].
    However, the reason for this follow-on trace was invisible because the
    corresponding HBA trace record was missing at the default trace level
    (by default hidden records with tags "fs_norm", "fs_qtcb", or "fs_open").
    
    On adapter recovery, after we had shut down the QDIO queues, we perform
    unsuccessful pseudo completions with flag ZFCP_STATUS_FSFREQ_DISMISSED
    for each pending FSF request in zfcp_fsf_req_dismiss_all().
    In order to find the root cause, we need to see all pseudo responses even
    if the channel presented them successfully with FSF_PROT_GOOD and FSF_GOOD.
    
    Therefore, check zfcp_fsf_req.status for ZFCP_STATUS_FSFREQ_DISMISSED
    or ZFCP_STATUS_FSFREQ_ERROR and trace with a new tag "fs_rerr".
    
    It does not matter that there are numerous places which set
    ZFCP_STATUS_FSFREQ_ERROR after the location where we trace an FSF response
    early. These cases are based on protocol status != FSF_PROT_GOOD or
    == FSF_PROT_FSF_STATUS_PRESENTED and are thus already traced by default
    as trace tag "fs_perr" or "fs_ferr" respectively.
    
    NB: The trace record with tag "fssrh_1" for status read buffers on dismiss
    all remains. zfcp_fsf_req_complete() handles this and returns early.
    All other FSF request types are handled separately and as described above.
    
    Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: 8a36e4532ea1 ("[SCSI] zfcp: enhancement of zfcp debug features")
    Fixes: 2e261af84cdb ("[SCSI] zfcp: Only collect FSF/HBA debug data for matching trace levels")
    Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2a30f9d547687bbf1e06709f03ab07230bc628ef
Author: Steffen Maier <maier@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:56 2017 +0200

    scsi: zfcp: fix payload with full FCP_RSP IU in SCSI trace records
    
    commit 12c3e5754c8022a4f2fd1e9f00d19e99ee0d3cc1 upstream.
    
    If the FCP_RSP IU has optional parts (FCP_SNS_INFO or FCP_RSP_INFO) and
    thus does not fit into the fsp_rsp field built into a SCSI trace record,
    trace the full FCP_RSP IU with all optional parts as payload record
    instead of just FCP_SNS_INFO as payload and
    a 1 byte RSP_INFO_CODE part of FCP_RSP_INFO built into the SCSI record.
    
    That way we would also get the full FCP_SNS_INFO in case a
    target would ever send more than
    min(SCSI_SENSE_BUFFERSIZE==96, ZFCP_DBF_PAY_MAX_REC==256)==96.
    
    The mandatory part of FCP_RSP IU is only 24 bytes.
    PAYload costs at least one full PAY record of 256 bytes anyway.
    We cap to the hardware response size which is only FSF_FCP_RSP_SIZE==128.
    So we can just put the whole FCP_RSP IU with any optional parts into
    PAYload similarly as we do for SAN PAY since v4.9 commit aceeffbb59bb
    ("zfcp: trace full payload of all SAN records (req,resp,iels)").
    This does not cause any additional trace records wasting memory.
    
    Decoded trace records were confusing because they showed a hard-coded
    sense data length of 96 even if the FCP_RSP_IU field FCP_SNS_LEN showed
    actually less.
    
    Since the same commit, we set pl_len for SAN traces to the full length of a
    request/response even if we cap the corresponding trace.
    In contrast, here for SCSI traces we set pl_len to the pre-computed
    length of FCP_RSP IU considering SNS_LEN or RSP_LEN if valid.
    Nonetheless we trace a hardcoded payload of length FSF_FCP_RSP_SIZE==128
    if there were optional parts.
    This makes it easier for the zfcpdbf tool to format only the relevant
    part of the long FCP_RSP IU buffer. And any trailing information is still
    available in the payload trace record just in case.
    
    Rename the payload record tag from "fcp_sns" to "fcp_riu" to make the new
    content explicit to zfcpdbf which can then pick a suitable field name such
    as "FCP rsp IU all:" instead of "Sense info :"
    Also, the same zfcpdbf can still be backwards compatible with "fcp_sns".
    
    Old example trace record before this fix, formatted with the tool zfcpdbf
    from s390-tools:
    
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 3
    Exception      : -
    CPU id         : ..
    Caller         : 0x...
    Record id      : 1
    Tag            : rsl_err
    Request id     : 0x<request_id>
    SCSI ID        : 0x...
    SCSI LUN       : 0x...
    SCSI result    : 0x00000002
    SCSI retries   : 0x00
    SCSI allowed   : 0x05
    SCSI scribble  : 0x<request_id>
    SCSI opcode    : 00000000 00000000 00000000 00000000
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000202 00000000
                                           ^^==FCP_SNS_LEN_VALID
                     00000020 00000000
                     ^^^^^^^^==FCP_SNS_LEN==32
    Sense len      : 96 <==min(SCSI_SENSE_BUFFERSIZE,ZFCP_DBF_PAY_MAX_REC)
    Sense info     : 70000600 00000018 00000000 29000000
                     00000400 00000000 00000000 00000000
                     00000000 00000000 00000000 00000000<==superfluous
                     00000000 00000000 00000000 00000000<==superfluous
                     00000000 00000000 00000000 00000000<==superfluous
                     00000000 00000000 00000000 00000000<==superfluous
    
    New example trace records with this fix:
    
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 3
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : rsl_err
    Request ID     : 0x<request_id>
    SCSI ID        : 0x...
    SCSI LUN       : 0x...
    SCSI result    : 0x00000002
    SCSI retries   : 0x00
    SCSI allowed   : 0x03
    SCSI scribble  : 0x<request_id>
    SCSI opcode    : a30c0112 00000000 02000000 00000000
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000a02 00000200
                     00000020 00000000
    FCP rsp IU len : 56
    FCP rsp IU all : 00000000 00000000 00000a02 00000200
                                           ^^=FCP_RESID_UNDER|FCP_SNS_LEN_VALID
                     00000020 00000000 70000500 00000018
                     ^^^^^^^^==FCP_SNS_LEN
                                       ^^^^^^^^^^^^^^^^^
                     00000000 240000cb 00011100 00000000
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                     00000000 00000000
                     ^^^^^^^^^^^^^^^^^==FCP_SNS_INFO
    
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 1
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : lr_okay
    Request ID     : 0x<request_id>
    SCSI ID        : 0x...
    SCSI LUN       : 0x...
    SCSI result    : 0x00000000
    SCSI retries   : 0x00
    SCSI allowed   : 0x05
    SCSI scribble  : 0x<request_id>
    SCSI opcode    : <CDB of unrelated SCSI command passed to eh handler>
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000100 00000000
                     00000000 00000008
    FCP rsp IU len : 32
    FCP rsp IU all : 00000000 00000000 00000100 00000000
                                           ^^==FCP_RSP_LEN_VALID
                     00000000 00000008 00000000 00000000
                              ^^^^^^^^==FCP_RSP_LEN
                                       ^^^^^^^^^^^^^^^^^==FCP_RSP_INFO
    
    Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: 250a1352b95e ("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
    Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 277958c8e3c0cbbab093073266044260dab52161
Author: Steffen Maier <maier@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:55 2017 +0200

    scsi: zfcp: fix missing trace records for early returns in TMF eh handlers
    
    commit 1a5d999ebfc7bfe28deb48931bb57faa8e4102b6 upstream.
    
    For problem determination we need to see that we were in scsi_eh
    as well as whether and why we were successful or not.
    
    The following commits introduced new early returns without adding
    a trace record:
    
    v2.6.35 commit a1dbfddd02d2
    ("[SCSI] zfcp: Pass return code from fc_block_scsi_eh to scsi eh")
    on fc_block_scsi_eh() returning != 0 which is FAST_IO_FAIL,
    
    v2.6.30 commit 63caf367e1c9
    ("[SCSI] zfcp: Improve reliability of SCSI eh handlers in zfcp")
    on not having gotten an FSF request after the maximum number of retry
    attempts and thus could not issue a TMF and has to return FAILED.
    
    Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: a1dbfddd02d2 ("[SCSI] zfcp: Pass return code from fc_block_scsi_eh to scsi eh")
    Fixes: 63caf367e1c9 ("[SCSI] zfcp: Improve reliability of SCSI eh handlers in zfcp")
    Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9799a4d5fc26ede777fd359c2129608af63540e7
Author: Benjamin Block <bblock@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:52 2017 +0200

    scsi: zfcp: add handling for FCP_RESID_OVER to the fcp ingress path
    
    commit a099b7b1fc1f0418ab8d79ecf98153e1e134656e upstream.
    
    Up until now zfcp would just ignore the FCP_RESID_OVER flag in the FCP
    response IU. When this flag is set, it is possible, with regard to the
    FCP standard, that the storage server processes the command normally up
    to the point where data is missing, and simply ignores the rest.
    
    In this case no CHECK CONDITION would be set, and because we ignored the
    FCP_RESID_OVER flag this resulted in at least data loss or even data
    corruption as a follow-up error, depending on how the
    applications/layers on top behave. To prevent this, we now set the
    host-byte of the corresponding scsi_cmnd to DID_ERROR.
    
    Other storage-behaviors, where the same condition results in a CHECK
    CONDITION set in the answer, don't need to be changed as they are
    handled in the mid-layer already.
    
    Following is an example trace record decoded with zfcpdbf from the
    s390-tools package. We forcefully injected a fc_dl which is one byte too
    small:
    
    Timestamp      : ...
    Area           : SCSI
    Subarea        : 00
    Level          : 3
    Exception      : -
    CPU ID         : ..
    Caller         : 0x...
    Record ID      : 1
    Tag            : rsl_err
    Request ID     : 0x...
    SCSI ID        : 0x...
    SCSI LUN       : 0x...
    SCSI result    : 0x00070000
                         ^^DID_ERROR
    SCSI retries   : 0x..
    SCSI allowed   : 0x..
    SCSI scribble  : 0x...
    SCSI opcode    : 2a000000 00000000 08000000 00000000
    FCP rsp inf cod: 0x00
    FCP rsp IU     : 00000000 00000000 00000400 00000001
                                           ^^fr_flags==FCP_RESID_OVER
                                             ^^fr_status==SAM_STAT_GOOD
                                                ^^^^^^^^fr_resid
                     00000000 00000000
    
    As of now, we don't actively handle the possibility that a response IU
    has both flags - FCP_RESID_OVER and FCP_RESID_UNDER - set at once.
    
    Reported-by: Luke M. Hopkins <lmhopkin@us.ibm.com>
    Reviewed-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: 553448f6c483 ("[SCSI] zfcp: Message cleanup")
    Fixes: ea127f975424 ("[PATCH] s390 (7/7): zfcp host adapter.") (tglx/history.git)
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ad385a77e093c6834ce1129ba80dba610320c20b
Author: Steffen Maier <maier@linux.vnet.ibm.com>
Date:   Fri Jul 28 12:30:51 2017 +0200

    scsi: zfcp: fix queuecommand for scsi_eh commands when DIX enabled
    
    commit 71b8e45da51a7b64a23378221c0a5868bd79da4f upstream.
    
    Since commit db007fc5e20c ("[SCSI] Command protection operation"),
    scsi_eh_prep_cmnd() saves scmd->prot_op and temporarily resets it to
    SCSI_PROT_NORMAL.
    Other FCP LLDDs such as qla2xxx and lpfc shield their queuecommand()
    to only access any of scsi_prot_sg...() if
    (scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL).
    
    Do the same thing for zfcp, which introduced DIX support with
    commit ef3eb71d8ba4 ("[SCSI] zfcp: Introduce experimental support for
    DIF/DIX").
    
    Otherwise, TUR SCSI commands as part of scsi_eh likely fail in zfcp,
    because the regular SCSI command with DIX protection data, that scsi_eh
    re-uses in scsi_send_eh_cmnd(), of course still has
    (scsi_prot_sg_count() != 0) and so zfcp sends down bogus requests to the
    FCP channel hardware.
    
    This causes scsi_eh_test_devices() to have (finish_cmds == 0)
    [not SCSI device is online or not scsi_eh_tur() failed]
    so regular SCSI commands, that caused / were affected by scsi_eh,
    are moved to work_q and scsi_eh_test_devices() itself returns false.
    In turn, it unnecessarily escalates in our case in scsi_eh_ready_devs()
    beyond host reset to finally scsi_eh_offline_sdevs()
    which sets affected SCSI devices offline with the following kernel message:
    
    "kernel: sd H:0:T:L: Device offlined - not ready after error recovery"
    
    Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
    Fixes: ef3eb71d8ba4 ("[SCSI] zfcp: Introduce experimental support for DIF/DIX")
    Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2e41a42cf3d1fee9bfc636c5ca991e1112f7f695
Author: Bart Van Assche <bart.vanassche@wdc.com>
Date:   Thu Aug 17 13:12:46 2017 -0700

    skd: Submit requests to firmware before triggering the doorbell
    
    commit 5fbd545cd3fd311ea1d6e8be4cedddd0ee5684c7 upstream.
    
    Ensure that the members of struct skd_msg_buf have been transferred
    to the PCIe adapter before the doorbell is triggered. This patch
    avoids that I/O fails sporadically and that the following error
    message is reported:
    
    (skd0:STM000196603:[0000:00:09.0]): Completion mismatch comp_id=0x0000 skreq=0x0400 new=0x0000
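
    A kernel-context sketch of the ordering this enforces; register and
    structure names here are illustrative:

        /* fill the skd_msg_buf that the adapter will fetch via DMA */
        memcpy(skmsg->msg_buf, &fmh, sizeof(fmh));

        /* make sure those stores are visible to the device... */
        wmb();

        /* ...before the doorbell write tells it to go and read them */
        writel(qcmd, skdev_mmio_base + DOORBELL_OFFSET);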
    
    Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Hannes Reinecke <hare@suse.de>
    Cc: Johannes Thumshirn <jthumshirn@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3070df32bfdf79c544f80dd1e1181f7cb4760830
Author: Bart Van Assche <bart.vanassche@wdc.com>
Date:   Thu Aug 17 13:12:45 2017 -0700

    skd: Avoid that module unloading triggers a use-after-free
    
    commit 7277cc67b3916eed47558c64f9c9c0de00a35cda upstream.
    
    Since put_disk() triggers a disk_release() call and since that
    last function calls blk_put_queue() if disk->queue != NULL, clear
    the disk->queue pointer before calling put_disk(). This avoids
    that unloading the skd kernel module triggers the following
    use-after-free:
    
    WARNING: CPU: 8 PID: 297 at lib/refcount.c:128 refcount_sub_and_test+0x70/0x80
    refcount_t: underflow; use-after-free.
    CPU: 8 PID: 297 Comm: kworker/8:1 Not tainted 4.11.10-300.fc26.x86_64 #1
    Workqueue: events work_for_cpu_fn
    Call Trace:
     dump_stack+0x63/0x84
     __warn+0xcb/0xf0
     warn_slowpath_fmt+0x5a/0x80
     refcount_sub_and_test+0x70/0x80
     refcount_dec_and_test+0x11/0x20
     kobject_put+0x1f/0x50
     blk_put_queue+0x15/0x20
     disk_release+0xae/0xf0
     device_release+0x32/0x90
     kobject_release+0x67/0x170
     kobject_put+0x2b/0x50
     put_disk+0x17/0x20
     skd_destruct+0x5c/0x890 [skd]
     skd_pci_probe+0x124d/0x13a0 [skd]
     local_pci_probe+0x42/0xa0
     work_for_cpu_fn+0x14/0x20
     process_one_work+0x19e/0x470
     worker_thread+0x1dc/0x4a0
     kthread+0x125/0x140
     ret_from_fork+0x25/0x30
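
    A minimal sketch of the teardown order described above; the queue
    cleanup call is an assumption about the surrounding driver code:

        struct request_queue *q = disk->queue;

        disk->queue = NULL;     /* so disk_release() won't drop the queue ref */
        put_disk(disk);
        blk_cleanup_queue(q);   /* the driver drops its own reference once */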
    
    Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Hannes Reinecke <hare@suse.de>
    Cc: Johannes Thumshirn <jthumshirn@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1074b91bdd4b4ea9b17876c17404b81c19218e49
Author: NeilBrown <neilb@suse.com>
Date:   Thu Aug 31 10:23:25 2017 +1000

    md/bitmap: disable bitmap_resize for file-backed bitmaps.
    
    commit e8a27f836f165c26f867ece7f31eb5c811692319 upstream.
    
    bitmap_resize() does not work for file-backed bitmaps.
    The buffer_heads are allocated and initialized when
    the bitmap is read from the file, but resize doesn't
    read from the file, it loads from the internal bitmap.
    When it comes time to write the new bitmap, the bh is
    non-existent and we crash.
    
    The common case when growing an array involves making the array larger,
    and that normally means making the bitmap larger.  Doing
    that inside the kernel is possible, but would need more code.
    It is probably easier to require people who use file-backed
    bitmaps to remove them and re-add after a reshape.
    
    So this patch disables the resizing of arrays which have
    file-backed bitmaps.  This is better than crashing.
    
    Reported-by: Zhilong Liu <zlliu@suse.com>
    Fixes: d60b479d177a ("md/bitmap: add bitmap_resize function to allow bitmap resizing.")
    Signed-off-by: NeilBrown <neilb@suse.com>
    Signed-off-by: Shaohua Li <shli@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 617f119a64ce5cc0b9b1eee1e7bcf1e82a671772
Author: Bart Van Assche <bart.vanassche@wdc.com>
Date:   Thu Aug 17 13:12:44 2017 -0700

    block: Relax a check in blk_start_queue()
    
    commit 4ddd56b003f251091a67c15ae3fe4a5c5c5e390a upstream.
    
    Calling blk_start_queue() from interrupt context with the queue
    lock held and without disabling IRQs, as the skd driver does, is
    safe. This patch avoids that loading the skd driver triggers the
    following warning:
    
    WARNING: CPU: 11 PID: 1348 at block/blk-core.c:283 blk_start_queue+0x84/0xa0
    RIP: 0010:blk_start_queue+0x84/0xa0
    Call Trace:
     skd_unquiesce_dev+0x12a/0x1d0 [skd]
     skd_complete_internal+0x1e7/0x5a0 [skd]
     skd_complete_other+0xc2/0xd0 [skd]
     skd_isr_completion_posted.isra.30+0x2a5/0x470 [skd]
     skd_isr+0x14f/0x180 [skd]
     irq_forced_thread_fn+0x2a/0x70
     irq_thread+0x144/0x1a0
     kthread+0x125/0x140
     ret_from_fork+0x2a/0x40
    
    Fixes: commit a038e2536472 ("[PATCH] blk_start_queue() must be called with irq disabled - add warning")
    Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
    Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
    Cc: Andrew Morton <akpm@osdl.org>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Hannes Reinecke <hare@suse.de>
    Cc: Johannes Thumshirn <jthumshirn@suse.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 87e47744e1879b1e57d428ffc5018901d7488f95
Author: Michael Ellerman <mpe@ellerman.id.au>
Date:   Thu Aug 24 20:49:57 2017 +1000

    powerpc: Fix DAR reporting when alignment handler faults
    
    commit f9effe925039cf54489b5c04e0d40073bb3a123d upstream.
    
    Anton noticed that if we fault part way through emulating an unaligned
    instruction, we don't update the DAR to reflect that.
    
    The DAR value is eventually reported back to userspace as the address
    in the SEGV signal, and if userspace is using that value to demand
    fault then it can be confused by us not setting the value correctly.
    
    This patch is ugly as hell, but is intended to be the minimal fix and
    back ports easily.
    
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ab1717dafd8355d3a52ec94deacf806a232ad773
Author: zhangyi (F) <yi.zhang@huawei.com>
Date:   Thu Aug 24 15:19:39 2017 -0400

    ext4: fix incorrect quotaoff if the quota feature is enabled
    
    commit b0a5a9589decd07db755d6a8d9c0910d96ff7992 upstream.
    
    Current ext4 quota should always be "usage enabled" if the
    quota feature is enabled. But in ext4_orphan_cleanup(), it
    turns quotas off directly (as used for the older journaled
    quota), so we cannot turn them on again via "quotaon" unless
    we umount and remount ext4.
    
    Simple reproduce:
    
      mkfs.ext4 -O project,quota /dev/vdb1
      mount -o prjquota /dev/vdb1 /mnt
      chattr -p 123 /mnt
      chattr +P /mnt
      touch /mnt/aa /mnt/bb
      exec 100<>/mnt/aa
      rm -f /mnt/aa
      sync
      echo c > /proc/sysrq-trigger
    
      #reboot and mount
      mount -o prjquota /dev/vdb1 /mnt
      #query status
      quotaon -Ppv /dev/vdb1
      #output
      quotaon: Cannot find mountpoint for device /dev/vdb1
      quotaon: No correct mountpoint specified.
    
    This patch adds a check for journaled quotas to avoid an incorrect
    quotaoff when ext4 has the quota feature.
    
    Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1d4ba7f963a93a2207fd103d4a36df1b5aeefea2
Author: Stephan Mueller <smueller@chronox.de>
Date:   Thu Sep 21 10:16:53 2017 +0200

    crypto: AF_ALG - remove SGL terminator indicator when chaining
    
    Fixed differently upstream as commit 2d97591ef43d ("crypto: af_alg - consolidation of duplicate code")
    
    The SGL is MAX_SGL_ENTS + 1 in size. The last SG entry is used for the
    chaining and is properly updated with the sg_chain invocation. During
    the filling-in of the initial SG entries, sg_mark_end is called for each
    SG entry. This is appropriate as long as no additional SGL is chained
    with the current SGL. However, when a new SGL is chained and the last
    SG entry is updated with sg_chain, the last but one entry still contains
    the end marker from the sg_mark_end. This end marker must be removed as
    otherwise a walk of the chained SGLs will cause a NULL pointer
    dereference at the last but one SG entry, because sg_next will return
    NULL.
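
    A kernel-context sketch of the idea using the generic scatterlist API;
    the surrounding variable names are illustrative:

        /* each filled entry was sg_mark_end()'ed; clear the marker on the
         * last data entry before chaining, otherwise sg_next() stops there
         * and never reaches the chained SGL */
        sg_unmark_end(sgl->sg + sgl->cur - 1);
        sg_chain(sgl->sg, MAX_SGL_ENTS + 1, new_sgl->sg);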
    
    The patch only applies to all kernels up to and including 4.13. The
    patch 2d97591ef43d0587be22ad1b0d758d6df4999a0b added to 4.14-rc1
    introduced a complete new code base which addresses this bug in
    a different way. Yet, that patch is too invasive for stable kernels
    and was therefore not marked for stable.
    
    Fixes: 8ff590903d5fc ("crypto: algif_skcipher - User-space interface for skcipher operations")
    Signed-off-by: Stephan Mueller <smueller@chronox.de>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a6d09b8a6ee3d85cac18a04fad0d219cf94e4526
Author: Kai-Heng Feng <kai.heng.feng@canonical.com>
Date:   Fri Sep 15 09:36:16 2017 -0700

    Input: i8042 - add Gigabyte P57 to the keyboard reset table
    
    commit 697c5d8a36768b36729533fb44622b35d56d6ad0 upstream.
    
    Similar to other Gigabyte laptops, the touchpad on P57 requires a
    keyboard reset to detect Elantech touchpad correctly.
    
    BugLink: https://bugs.launchpad.net/bugs/1594214
    Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
    Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 53c94358a14ffa7f3ba24d9749e8b297cdb0b520
Author: Sabrina Dubroca <sd@queasysnail.net>
Date:   Wed Feb 4 15:25:09 2015 +0100

    ip6_gre: fix endianness errors in ip6gre_err
    
    commit d1e158e2d7a0a91110b206653f0e02376e809150 upstream.
    
    info is in network byte order, change it back to host byte order
    before use. In particular, the current code sets the MTU of the tunnel
    to a wrong (too big) value.
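
    A small user-space demonstration of the byte-order conversion; values
    are illustrative:

        #include <arpa/inet.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* "info" as delivered to the error handler: network byte order */
                uint32_t info = htonl(1280);    /* e.g. an MTU of 1280 */

                /* wrong: using the big-endian value directly on a
                 * little-endian host yields a huge, bogus MTU */
                printf("raw value:   %u\n", info);

                /* right: convert back to host byte order before use */
                printf("ntohl value: %u\n", ntohl(info));
                return 0;
        }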
    
    Fixes: c12b395a4664 ("gre: Support GRE over IPv6")
    Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
    Acked-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c8443922edb7f28ecdfe5d2b43552ebdcb33d1fb
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Sep 20 12:31:05 2017 +0200

    Revert "usb: musb: fix tx fifo flush handling again"
    
    This reverts commit 98b91bfa5e478b9bf332f9f149b1c25ffd58f877 which is
    commit 45d73860530a14c608f410b91c6c341777bfa85d upstream.
    
    It should not have been applied to the 3.18-stable tree at all.
    
    Reported-by: Greg Kaiser <gkaiser@google.com>
    Cc: Bin Liu <b-liu@ti.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 70768be91bcc00508038ee743c76fe9f404dfcf9
Author: Jaegeuk Kim <jaegeuk@kernel.org>
Date:   Sat Aug 12 21:33:23 2017 -0700

    f2fs: check hot_data for roll-forward recovery
    
    commit 125c9fb1ccb53eb2ea9380df40f3c743f3fb2fed upstream.
    
    We need to check HOT_DATA to truncate any previous data block when doing
    roll-forward recovery.
    
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4c97e1c4a67cf5c1150e78de714571a87654ac3d
Author: Eric Dumazet <edumazet@google.com>
Date:   Fri Sep 8 15:48:47 2017 -0700

    ipv6: fix typo in fib6_net_exit()
    
    
    [ Upstream commit 32a805baf0fb70b6dbedefcd7249ac7f580f9e3b ]
    
    IPv6 FIB should use FIB6_TABLE_HASHSZ, not FIB_TABLE_HASHSZ.
    
    Fixes: ba1cc08d9488 ("ipv6: fix memory leak with multiple tables during netns destruction")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fbca27ad2916e18251168d7b43e3d185001917fc
Author: Sabrina Dubroca <sd@queasysnail.net>
Date:   Fri Sep 8 10:26:19 2017 +0200

    ipv6: fix memory leak with multiple tables during netns destruction
    
    
    [ Upstream commit ba1cc08d9488c94cb8d94f545305688b72a2a300 ]
    
    fib6_net_exit only frees the main and local tables. If another table was
    created with fib6_alloc_table, we leak it when the netns is destroyed.
    
    Fix this in the same way ip_fib_net_exit cleans up tables, by walking
    through the whole hashtable of fib6_tables. We can get rid of the
    special cases for local and main, since they're also part of the
    hashtable.
    
    Reproducer:
        ip netns add x
        ip -net x -6 rule add from 6003:1::/64 table 100
        ip netns del x
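    
    A sketch of the per-netns cleanup along these lines, assuming a small
    fib6_free_table() helper that drops the table's peer tree and kfrees it
    (illustration only, not the exact upstream hunk):
    
        static void fib6_net_exit(struct net *net)
        {
                unsigned int i;
        
                rt6_ifdown(net, NULL);
                del_timer_sync(&net->ipv6.ip6_fib_timer);
        
                for (i = 0; i < FIB6_TABLE_HASHSZ; i++) {
                        struct hlist_head *head = &net->ipv6.fib_table_hash[i];
                        struct hlist_node *tmp;
                        struct fib6_table *tb;
        
                        /* main and local live in the same hash, so no
                         * special-casing is needed any more */
                        hlist_for_each_entry_safe(tb, tmp, head, tb6_hlist) {
                                hlist_del(&tb->tb6_hlist);
                                fib6_free_table(tb);
                        }
                }
        
                kfree(net->ipv6.fib_table_hash);
                kfree(net->ipv6.rt6_stats);
        }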
    
    Reported-by: Jianlin Shi <jishi@redhat.com>
    Fixes: 58f09b78b730 ("[NETNS][IPV6] ip6_fib - make it per network namespace")
    Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1722ca90e1b88e6b7f0824908828e2462d7405ac
Author: Wei Wang <weiwan@google.com>
Date:   Thu May 18 11:22:33 2017 -0700

    tcp: initialize rcv_mss to TCP_MIN_MSS instead of 0
    
    
    [ Upstream commit 499350a5a6e7512d9ed369ed63a4244b6536f4f8 ]
    
    When tcp_disconnect() is called, inet_csk_delack_init() sets
    icsk->icsk_ack.rcv_mss to 0.
    This could potentially cause the tcp_recvmsg() => tcp_cleanup_rbuf() =>
    __tcp_select_window() call path to hit a division by zero.
    So this patch initializes rcv_mss to TCP_MIN_MSS instead of 0.
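    
    A sketch of the idea in tcp_disconnect() (illustration only, not the
    exact upstream hunk):
    
        inet_csk_delack_init(sk);   /* wipes icsk->icsk_ack, rcv_mss included */
        /* Reseed rcv_mss so a later tcp_recvmsg() -> tcp_cleanup_rbuf() ->
         * __tcp_select_window() never divides by zero. */
        icsk->icsk_ack.rcv_mss = TCP_MIN_MSS;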
    
    Reported-by: Andrey Konovalov  <andreyknvl@google.com>
    Signed-off-by: Wei Wang <weiwan@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Neal Cardwell <ncardwell@google.com>
    Signed-off-by: Yuchung Cheng <ycheng@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 39194a40eef5a4771404ea04910b8216f8d2a065
Author: Florian Fainelli <f.fainelli@gmail.com>
Date:   Wed Aug 30 17:49:29 2017 -0700

    Revert "net: phy: Correctly process PHY_HALTED in phy_stop_machine()"
    
    
    [ Upstream commit ebc8254aeae34226d0bc8fda309fd9790d4dccfe ]
    
    This reverts commit 7ad813f208533cebfcc32d3d7474dc1677d1b09a ("net: phy:
    Correctly process PHY_HALTED in phy_stop_machine()") because it is
    creating the possibility for a NULL pointer dereference.
    
    David Daney provided the following call trace and diagram of events:
    
    When ndo_stop() is called we call:
    
     phy_disconnect()
        +---> phy_stop_interrupts() implies: phydev->irq = PHY_POLL;
        +---> phy_stop_machine()
        |      +---> phy_state_machine()
        |              +----> queue_delayed_work(): Work queued.
        +--->phy_detach() implies: phydev->attached_dev = NULL;
    
    Now at a later time the queued work does:
    
     phy_state_machine()
        +---->netif_carrier_off(phydev->attached_dev): Oh no! It is NULL:
    
     CPU 12 Unable to handle kernel paging request at virtual address
    0000000000000048, epc == ffffffff80de37ec, ra == ffffffff80c7c
    Oops[#1]:
    CPU: 12 PID: 1502 Comm: kworker/12:1 Not tainted 4.9.43-Cavium-Octeon+ #1
    Workqueue: events_power_efficient phy_state_machine
    task: 80000004021ed100 task.stack: 8000000409d70000
    $ 0   : 0000000000000000 ffffffff84720060 0000000000000048 0000000000000004
    $ 4   : 0000000000000000 0000000000000001 0000000000000004 0000000000000000
    $ 8   : 0000000000000000 0000000000000000 00000000ffff98f3 0000000000000000
    $12   : 8000000409d73fe0 0000000000009c00 ffffffff846547c8 000000000000af3b
    $16   : 80000004096bab68 80000004096babd0 0000000000000000 80000004096ba800
    $20   : 0000000000000000 0000000000000000 ffffffff81090000 0000000000000008
    $24   : 0000000000000061 ffffffff808637b0
    $28   : 8000000409d70000 8000000409d73cf0 80000000271bd300 ffffffff80c7804c
    Hi    : 000000000000002a
    Lo    : 000000000000003f
    epc   : ffffffff80de37ec netif_carrier_off+0xc/0x58
    ra    : ffffffff80c7804c phy_state_machine+0x48c/0x4f8
    Status: 14009ce3        KX SX UX KERNEL EXL IE
    Cause : 00800008 (ExcCode 02)
    BadVA : 0000000000000048
    PrId  : 000d9501 (Cavium Octeon III)
    Modules linked in:
    Process kworker/12:1 (pid: 1502, threadinfo=8000000409d70000,
    task=80000004021ed100, tls=0000000000000000)
    Stack : 8000000409a54000 80000004096bab68 80000000271bd300 80000000271c1e00
            0000000000000000 ffffffff808a1708 8000000409a54000 80000000271bd300
            80000000271bd320 8000000409a54030 ffffffff80ff0f00 0000000000000001
            ffffffff81090000 ffffffff808a1ac0 8000000402182080 ffffffff84650000
            8000000402182080 ffffffff84650000 ffffffff80ff0000 8000000409a54000
            ffffffff808a1970 0000000000000000 80000004099e8000 8000000402099240
            0000000000000000 ffffffff808a8598 0000000000000000 8000000408eeeb00
            8000000409a54000 00000000810a1d00 0000000000000000 8000000409d73de8
            8000000409d73de8 0000000000000088 000000000c009c00 8000000409d73e08
            8000000409d73e08 8000000402182080 ffffffff808a84d0 8000000402182080
            ...
    Call Trace:
    [<ffffffff80de37ec>] netif_carrier_off+0xc/0x58
    [<ffffffff80c7804c>] phy_state_machine+0x48c/0x4f8
    [<ffffffff808a1708>] process_one_work+0x158/0x368
    [<ffffffff808a1ac0>] worker_thread+0x150/0x4c0
    [<ffffffff808a8598>] kthread+0xc8/0xe0
    [<ffffffff808617f0>] ret_from_kernel_thread+0x14/0x1c
    
    The original motivation for this change came from Marc Gonzales, who
    reported that his network driver's adjust_link callback was not executed
    with phydev->link = 0 when he expected it to be.
    
    PHYLIB has never made any such guarantee, because phy_stop() merely
    tells the workqueue to move into the PHY_HALTED state, which happens
    asynchronously.
    
    Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
    Reported-by: David Daney <ddaney.cavm@gmail.com>
    Fixes: 7ad813f20853 ("net: phy: Correctly process PHY_HALTED in phy_stop_machine()")
    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fea532ea08ada35c0700c4c1be878437b023ed00
Author: Arnd Bergmann <arnd@arndb.de>
Date:   Wed Aug 23 15:59:49 2017 +0200

    qlge: avoid memcpy buffer overflow
    
    
    [ Upstream commit e58f95831e7468d25eb6e41f234842ecfe6f014f ]
    
    gcc-8.0.0 (snapshot) points out that we copy a variable-length string
    into a fixed-length field using memcpy() with the destination length,
    and that ends up copying whatever follows the string:
    
        inlined from 'ql_core_dump' at drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:1106:2:
    drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:708:2: error: 'memcpy' reading 15 bytes from a region of size 14 [-Werror=stringop-overflow=]
      memcpy(seg_hdr->description, desc, (sizeof(seg_hdr->description)) - 1);
    
    Changing it to use strncpy() will instead zero-pad the destination,
    which seems to be the right thing to do here.
    
    The bug is probably harmless, but it seems like a good idea to address
    it in stable kernels as well, if only for the purpose of building with
    gcc-8 without warnings.
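    
    Before/after sketch of the copy (illustration only; seg_hdr->description
    is a fixed-size char array, desc a NUL-terminated string):
    
        /* old: always copies sizeof(...) - 1 bytes, reading past the end of
         * a shorter desc string */
        memcpy(seg_hdr->description, desc, sizeof(seg_hdr->description) - 1);
    
        /* new: stops at the NUL and zero-pads the remainder of the field */
        strncpy(seg_hdr->description, desc, sizeof(seg_hdr->description) - 1);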
    
    Fixes: a61f80261306 ("qlge: Add ethtool register dump function.")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 51ef0b663c13cffbc1cc74102122bd4013726c1b
Author: Stefano Brivio <sbrivio@redhat.com>
Date:   Fri Aug 18 14:40:53 2017 +0200

    ipv6: accept 64k - 1 packet length in ip6_find_1stfragopt()
    
    
    [ Upstream commit 3de33e1ba0506723ab25734e098cf280ecc34756 ]
    
    A packet length of exactly IPV6_MAXPLEN is allowed; we should refuse
    to parse options only if the size is 64KiB or more.
    
    While at it, remove one extra variable and one assignment that were
    also introduced by the commit that added the size check. Checking the
    sum 'offset + len' and only later adding 'len' to 'offset' provides no
    advantage over adding directly to 'offset' and checking it.
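    
    A sketch of the corrected check inside the option-walking loop of
    ip6_find_1stfragopt() (illustration only, not the exact upstream hunk):
    
        offset += ipv6_optlen(exthdr);
        /* exactly IPV6_MAXPLEN (64k - 1) is still a valid payload length;
         * only reject once we would go past it */
        if (offset > IPV6_MAXPLEN)
                return -EINVAL;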
    
    Fixes: 6399f1fae4ec ("ipv6: avoid overflow of offset in ip6_find_1stfragopt")
    Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>